repo_name (string, 8-38) | pr_number (int64, 3-47.1k) | pr_title (string, 8-175) | pr_description (string, 2-19.8k, nullable) | author (null) | date_created (string, 25) | date_merged (string, 25) | filepath (string, 6-136) | before_content (string, 54-884k, nullable) | after_content (string, 56-884k) | pr_author (string, 3-21) | previous_commit (string, 40) | pr_commit (string, 40) | comment (string, 2-25.4k) | comment_author (string, 3-29) | __index_level_0__ (int64, 0-5.1k)
---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---
py-why/dowhy | 746 | Functional api/causal estimators | * Introduce `fit()` method to estimators.
* Refactor constructors to avoid using `*args` and `**kwargs` and have more explicit parameters.
* Refactor refuters and other parts of the code to use `fit()` and modify arguments to `estimate_effect()` | null | 2022-11-04 16:15:39+00:00 | 2022-12-03 17:07:53+00:00 | docs/source/example_notebooks/sensitivity_analysis_nonparametric_estimators.ipynb | {
"cells": [
{
"cell_type": "markdown",
"id": "0bbaacaa",
"metadata": {},
"source": [
"# Sensitivity analysis for non-parametric causal estimators\n",
"Sensitivity analysis helps us study how robust an estimated effect is when the assumption of no unobserved confounding is violated. That is, how much bias does our estimate have due to omitting an (unobserved) confounder? Known as the \n",
"*omitted variable bias (OVB)*, it gives us a measure of how the inclusion of an omitted common cause (confounder) would have changed the estimated effect. \n",
"\n",
"This notebook shows how to estimate the OVB for general, non-parametric causal estimators. For gaining intuition, we suggest going through an introductory notebook that describes how to estimate OVB for a a linear estimator: [Sensitivity analysis for linear estimators](https://github.com/py-why/dowhy/blob/master/docs/source/example_notebooks/sensitivity_analysis_testing.ipynb). To recap, in that notebook, we saw how the OVB depended on linear partial R^2 values and used this insight to compute the adjusted estimate values depending on the relative strength of the confounder with the outcome and treatment. We now generalize the technique using the non-parametric partial R^2 and Reisz representers.\n",
"\n",
"\n",
"This notebook is based on *Chernozhukov et al., Long Story Short: Omitted Variable Bias in Causal Machine Learning. https://arxiv.org/abs/2112.13398*. "
]
},
{
"cell_type": "markdown",
"id": "cf30b925",
"metadata": {},
"source": [
"## I. Sensitivity analysis for partially linear models\n",
"We first analyze the sensitivity of a causal estimate when the true data-generating process (DGP) is known to be partially linear. That is, the outcome can be additively decomposed into a linear function of the treatment and a non-linear function of the confounders. We denote the treatment by $T$, outcome by $Y$, observed confounders by $W$ and unobserved confounders by $U$. \n",
"$$ Y = g(T, W, U) + \\epsilon = \\theta T + h(W, U) + \\epsilon $$\n",
"\n",
"However, we cannot estimate the above equation because the confounders $U$ are unobserved. Thus, in practice, a causal estimator uses the following \"short\" equation, \n",
"$$ Y = g_s(T, W) + \\epsilon_s = \\theta_s T + h_s(W) + \\epsilon_s $$\n",
"\n",
"The goal of sensitivity analysis is to answer how far $\\theta_s$ would be from the true $\\theta$. Chernozhukov et al. show that given a special function called Reisz function $\\alpha$, the omitted variable bias, $|\\theta - \\theta_s|$ is bounded by $\\sqrt{E[g-g_s]^2E[\\alpha-\\alpha_s]^2}$. For partial linear models, $\\alpha$ and the \"short\" $\\alpha_s$ are defined as, \n",
"$$ \\alpha := \\frac{T - E[T | W, U] )}{E(T - E[T | W, U]) ^ 2}$$\n",
"$$ \\alpha_s := \\frac{(T - E[T | W] )}{E(T - E[T | W]) ^ 2} $$\n",
"\n",
"The bound can be expressed in terms of the *partial* R^2 of the unobserved confounder $U$ with the treatment and outcome, conditioned on the observed confounders $W$. Recall that R^2 of $U$ wrt some target $Z$ is defined as the ratio of variance of the prediction $E[Z|U]$ with the variance of $Z$, $R^2_{Z\\sim U}=\\frac{\\operatorname{Var}(E[Z|U])}{\\operatorname{Var}(Y)}$. We can define the partial R^2 as an extension that measures the additional gain in explanatory power conditioned on some variables $W$. \n",
"$$ \\eta^2_{Z\\sim U| W} = \\frac{\\operatorname{Var}(E[Z|W, U]) - \\operatorname{Var}(E[Z|W])}{\\operatorname{Var}(Z) - \\operatorname{Var}(E[Z|W])} $$\n",
"\n",
"The bound is given by, \n",
"$$ (\\theta - \\theta_s)^2 = E[g-g_s]^2E[\\alpha-\\alpha_s]^2 = S^2 C_Y^2 C_T^2 $$ \n",
"where, \n",
"$$ S^2 = \\frac{E[(Y-g_s)^2]}{E[\\alpha_s^2]}; \\ \\ C_Y^2 = \\eta^2_{Y \\sim U | T, W}, \\ \\ C_T^2 = \\frac{\\eta^2_{T\\sim U | W}}{1 - \\eta^2_{T\\sim U | W}}$$\n",
"\n",
"\n",
"$S^2$ can be estimated from data. The other two parameters need to be specified manually: they convey the strength of the unobserved confounder $U$ on treatment and outcome. Below we show how to create a sensitivity contour plot by specifying a range of plausible values for $\\eta^2_{Y \\sim U | T, W}$ and $\\eta^2_{T\\sim U | W}$. We also show how to benchmark and set these values as a fraction of the maximum partial R^2 due to any subset of the observed covariates. "
]
},
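  {
   "cell_type": "markdown",
   "id": "ovb-bound-sketch-md",
   "metadata": {},
   "source": [
    "The bound is straightforward to evaluate once the three quantities are fixed. The next cell is a small illustrative sketch (it is not part of the DoWhy API): it takes a hypothetical value for $S^2$ and computes the worst-case bias $S C_Y C_T$ for user-supplied partial R^2 values."
   ]
  },
  {
   "cell_type": "code",
   "execution_count": null,
   "id": "ovb-bound-sketch-code",
   "metadata": {},
   "outputs": [],
   "source": [
    "# Illustrative sketch, not part of the DoWhy API: evaluate the OVB bound\n",
    "# |theta - theta_s| <= S * C_Y * C_T for hypothetical sensitivity values.\n",
    "# s_sq stands in for the data-estimated quantity E[(Y - g_s)^2] / E[alpha_s^2].\n",
    "import numpy as np\n",
    "\n",
    "def ovb_bound(s_sq, partial_r2_outcome, partial_r2_treatment):\n",
    "    c_y_sq = partial_r2_outcome  # eta^2_{Y ~ U | T, W}\n",
    "    c_t_sq = partial_r2_treatment / (1 - partial_r2_treatment)  # from eta^2_{T ~ U | W}\n",
    "    return np.sqrt(s_sq * c_y_sq * c_t_sq)\n",
    "\n",
    "ovb_bound(s_sq=4.0, partial_r2_outcome=0.1, partial_r2_treatment=0.1)  # hypothetical inputs"
   ]
  },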
{
"cell_type": "markdown",
"id": "1b67b63e",
"metadata": {},
"source": [
"### Creating a dataset with unobserved confounding "
]
},
{
"cell_type": "code",
"execution_count": null,
"id": "bbbab4ea",
"metadata": {},
"outputs": [],
"source": [
"%load_ext autoreload\n",
"%autoreload 2"
]
},
{
"cell_type": "code",
"execution_count": null,
"id": "ba6f68b9",
"metadata": {},
"outputs": [],
"source": [
"# Required libraries\n",
"import re\n",
"import numpy as np\n",
"import dowhy\n",
"from dowhy import CausalModel\n",
"import dowhy.datasets\n",
"from dowhy.utils.regression import create_polynomial_function"
]
},
{
"cell_type": "markdown",
"id": "2c386282",
"metadata": {},
"source": [
"We create a dataset with linear relationship between treatment and outcome, following the partial linear data-generating process. $\\beta$ is the true causal effect."
]
},
{
"cell_type": "code",
"execution_count": null,
"id": "41c60ca7",
"metadata": {},
"outputs": [],
"source": [
"np.random.seed(101) \n",
"data = dowhy.datasets.partially_linear_dataset(beta = 10,\n",
" num_common_causes = 7,\n",
" num_unobserved_common_causes=1,\n",
" strength_unobserved_confounding=10,\n",
" num_samples = 1000,\n",
" num_treatments = 1,\n",
" stddev_treatment_noise = 10,\n",
" stddev_outcome_noise = 5\n",
" )\n",
"display(data)"
]
},
{
"cell_type": "markdown",
"id": "5df879f9",
"metadata": {},
"source": [
"The true ATE for this data-generating process is,"
]
},
{
"cell_type": "code",
"execution_count": null,
"id": "75882711",
"metadata": {},
"outputs": [],
"source": [
"data[\"ate\"]"
]
},
{
"cell_type": "markdown",
"id": "5308f4dc",
"metadata": {},
"source": [
"To simulate unobserved confounding, we remove one of the common causes from the dataset. \n"
]
},
{
"cell_type": "code",
"execution_count": null,
"id": "636b6a25",
"metadata": {
"scrolled": true
},
"outputs": [],
"source": [
"# Observed data \n",
"dropped_cols=[\"W0\"]\n",
"user_data = data[\"df\"].drop(dropped_cols, axis = 1)\n",
"# assumed graph\n",
"user_graph = data[\"gml_graph\"]\n",
"for col in dropped_cols:\n",
" user_graph = user_graph.replace('node[ id \"{0}\" label \"{0}\"]'.format(col), '')\n",
" user_graph = re.sub('edge\\[ source \"{}\" target \"[vy][0]*\"\\]'.format(col), \"\", user_graph)\n",
"user_data"
]
},
{
"cell_type": "markdown",
"id": "4ae95e95",
"metadata": {},
"source": [
"### Obtaining a causal estimate using Model, Identify, Estimate steps\n",
"Create a causal model with the \"observed\" data and causal graph."
]
},
{
"cell_type": "code",
"execution_count": null,
"id": "207034e5",
"metadata": {},
"outputs": [],
"source": [
"model = CausalModel(\n",
" data=user_data,\n",
" treatment=data[\"treatment_name\"],\n",
" outcome=data[\"outcome_name\"],\n",
" graph=user_graph,\n",
" test_significance=None,\n",
" )\n",
"model.view_model()\n",
"from IPython.display import Image, display\n",
"display(Image(filename=\"causal_model.png\"))"
]
},
{
"cell_type": "code",
"execution_count": null,
"id": "4eaec5dc",
"metadata": {
"scrolled": true
},
"outputs": [],
"source": [
"# Identify effect\n",
"identified_estimand = model.identify_effect(proceed_when_unidentifiable=True)\n",
"print(identified_estimand)"
]
},
{
"cell_type": "code",
"execution_count": null,
"id": "56889b39",
"metadata": {},
"outputs": [],
"source": [
"# Estimate effect\n",
"import econml\n",
"from sklearn.ensemble import GradientBoostingRegressor\n",
"linear_dml_estimate = model.estimate_effect(identified_estimand, \n",
" method_name=\"backdoor.econml.dml.LinearDML\",\n",
" method_params={\n",
" 'init_params': {'model_y':GradientBoostingRegressor(),\n",
" 'model_t': GradientBoostingRegressor(),\n",
" 'linear_first_stages': False\n",
" },\n",
" 'fit_params': {'cache_values': True,}\n",
" })\n",
"print(linear_dml_estimate)"
]
},
{
"cell_type": "markdown",
"id": "891068cb",
"metadata": {},
"source": [
"### Sensitivity Analysis using the Refute step\n",
"After estimation , we need to check how robust our estimate is against the possibility of unobserved confounders. We perform sensitivity analysis for the LinearDML estimator assuming that its assumption on data-generating process holds: the true function for $Y$ is partial linear. For computational efficiency, we set <b>cache_values</b> = <b>True</b> in `fit_params` to cache the results of first stage estimation.\n",
"\n",
"Parameters used:\n",
"\n",
"* <b>method_name</b>: Refutation method name <br>\n",
"* <b>simulation_method</b>: \"non-parametric-partial-R2\" for non Parametric Sensitivity Analysis. \n",
"Note that partial linear sensitivity analysis is automatically chosen if LinearDML estimator is used for estimation. \n",
"* **partial_r2_confounder_treatment**: $\\eta^2_{T\\sim U | W}$, Partial R2 of unobserved confounder with treatment conditioned on all observed confounders. \n",
"* **partial_r2_confounder_outcome**: $\\eta^2_{Y \\sim U | T, W}$, Partial R2 of unobserved confounder with outcome conditioned on treatment and all observed confounders. "
]
},
{
"cell_type": "code",
"execution_count": null,
"id": "2488ccbb",
"metadata": {},
"outputs": [],
"source": [
"refute = model.refute_estimate(identified_estimand, linear_dml_estimate ,\n",
" method_name = \"add_unobserved_common_cause\",\n",
" simulation_method = \"non-parametric-partial-R2\",\n",
" partial_r2_confounder_treatment = np.arange(0, 0.8, 0.1),\n",
" partial_r2_confounder_outcome = np.arange(0, 0.8, 0.1)\n",
" )\n",
"print(refute)"
]
},
{
"cell_type": "markdown",
"id": "81f1d65b",
"metadata": {},
"source": [
"**Intepretation of the plot.** In the above plot, the x-axis shows hypothetical partial R2 values of unobserved confounder(s) with the treatment. The y-axis shows hypothetical partial R2 of unobserved confounder(s) with the outcome. At <x=0,y=0>, the black diamond shows the original estimate (theta_s) without considering the unobserved confounders.\n",
"\n",
"The contour levels represent *adjusted* lower confidence bound estimate of the effect, which would be obtained if the unobserved confounder(s) had been included in the estimation model. The red contour line is the critical threshold where the adjusted effect goes to zero. Thus, confounders with such strength or stronger are sufficient to reverse the sign of the estimated effect and invalidate the estimate's conclusions. This notion can be quantified by outputting the robustness value."
]
},
{
"cell_type": "code",
"execution_count": null,
"id": "52b2e904",
"metadata": {},
"outputs": [],
"source": [
"refute.RV"
]
},
{
"cell_type": "markdown",
"id": "3524cb07",
"metadata": {},
"source": [
"The robustness value measures the minimal equal strength of $\\eta^2_{T\\sim U | W}$ and $\\eta^2_{Y \\sim U | T, W}$ such the bound for the average treatment effect would include zero. It can be between 0 and 1. <br>\n",
"A robustness value of 0.45 implies that confounders with $\\eta^2_{T\\sim U | W}$ and $\\eta^2_{Y \\sim U | T, W}$ values less than 0.4 would not be sufficient enough to bring down the estimates to zero. In general, a low robustness value implies that the results can be changed even by the presence of weak confounders whereas a robustness value close to 1 means the treatment effect can handle even strong confounders that may explain all residual variation of the treatment and the outcome."
]
},
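  {
   "cell_type": "markdown",
   "id": "rv-algebra-sketch-md",
   "metadata": {},
   "source": [
    "Where does this number come from? The next cell is a minimal sketch of the underlying algebra (it is not the DoWhy implementation, which also supports confidence-bound variants): assuming equal strengths $a = \\eta^2_{T\\sim U | W} = \\eta^2_{Y \\sim U | T, W}$, the bias bound becomes $S a / \\sqrt{1-a}$, and the robustness value is the $a$ at which this bound equals the estimate itself."
   ]
  },
  {
   "cell_type": "code",
   "execution_count": null,
   "id": "rv-algebra-sketch-code",
   "metadata": {},
   "outputs": [],
   "source": [
    "# Minimal sketch (hypothetical inputs): solve S^2 a^2 = theta_s^2 (1 - a)\n",
    "# for a in [0, 1]; this a is the (point-estimate) robustness value.\n",
    "import numpy as np\n",
    "\n",
    "def robustness_value(theta_s, s):\n",
    "    t2, s2 = theta_s**2, s**2\n",
    "    return (-t2 + np.sqrt(t2**2 + 4 * s2 * t2)) / (2 * s2)\n",
    "\n",
    "robustness_value(theta_s=10.0, s=15.0)  # hypothetical values"
   ]
  },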
{
"cell_type": "markdown",
"id": "a1fbcf48",
"metadata": {},
"source": [
"**Benchmarking.** In general, however, providing a plausible range of partial R2 values is difficult. Instead, we can infer the partial R2 of the unobserved confounder as a multiple of the partial R2 of any subset of observed confounders. So now we just need to specify the effect of unobserved confounding as a multiple/fraction of the observed confounding. This process is known as *benchmarking*."
]
},
{
"cell_type": "markdown",
"id": "7a1f4986",
"metadata": {},
"source": [
"The relevant arguments for bencmarking are:\n",
"- <b>benchmark_common_causes</b>: Names of the observed confounders used to bound the strengths of unobserved confounder<br>\n",
"- <b>effect_fraction_on_treatment</b>: Strength of association between unobserved confounder and treatment compared to benchmark confounders<br>\n",
"- <b>effect_fraction_on_outcome</b>: Strength of association between unobserved confounder and outcome compared to benchmark confounders<br>"
]
},
{
"cell_type": "code",
"execution_count": null,
"id": "85eefe08",
"metadata": {},
"outputs": [],
"source": [
"refute_bm = model.refute_estimate(identified_estimand, linear_dml_estimate ,\n",
" method_name = \"add_unobserved_common_cause\",\n",
" simulation_method = \"non-parametric-partial-R2\",\n",
" benchmark_common_causes = [\"W1\"],\n",
" effect_fraction_on_treatment = 0.2,\n",
" effect_fraction_on_outcome = 0.2\n",
" )"
]
},
{
"cell_type": "markdown",
"id": "46b54056",
"metadata": {},
"source": [
"The red triangle shows the estimated partial-R^2 of a chosen benchmark observed covariate with the treatment and outcome. In the above call, we chose *W1* as the benchmark covariate. Under assumption that the unobserved confounder cannot be stronger in its effect on treatment and outcome than the observed benchmark covariate (*W1*), the above plot shows that the mean estimated effect will reduce after accounting for unobserved confounding, but still remain substantially above zero.\n"
]
},
{
"cell_type": "markdown",
"id": "070f01c6",
"metadata": {},
"source": [
"**Plot types**. The default `plot_type` is to show the `lower_confidence_bound` under a significance level . Other possible values for the `plot_type` are:\n",
"* `upper_confidence_bound`: preferably used in cases where the unobserved confounder is expected to lower the estimate.\n",
"* `lower_ate_bound`: lower (point) estimate for unconfounded average treatment effect without considering the significance level\n",
"* `upper_ate_bound`: upper (point) estimate for unconfounded average treatment effect without considering the significance level\n",
"* `bias`: the bias of the obtained estimate compared to the true estimate"
]
},
{
"cell_type": "code",
"execution_count": null,
"id": "df23a49b",
"metadata": {},
"outputs": [],
"source": [
"refute_bm.plot(plot_type = \"upper_confidence_bound\")\n",
"refute_bm.plot(plot_type = \"bias\")"
]
},
{
"cell_type": "markdown",
"id": "83052508",
"metadata": {},
"source": [
"We can also access the benchmarking results as a data frame."
]
},
{
"cell_type": "code",
"execution_count": null,
"id": "934eef00",
"metadata": {},
"outputs": [],
"source": [
"refute_bm.results"
]
},
{
"cell_type": "markdown",
"id": "1fffe64f",
"metadata": {},
"source": [
"## II. Sensitivity Analysis for general non-parametric models\n",
"We now perform sensitivity analysis without making any assumption on the true data-generating process. The sensitivity still depends on the partial R2 of unobserved confounder with outcome, $\\eta^2_{Y \\sim U | T, W}$, and a similar parameter for the confounder-treatment relationship. However, the computation of bounds is more complicated and requires estimation of a special function known as reisz function. Refer to Chernozhukov et al. for details."
]
},
{
"cell_type": "code",
"execution_count": null,
"id": "7cff3837",
"metadata": {},
"outputs": [],
"source": [
"# Estimate effect using a non-parametric estimator\n",
"from sklearn.ensemble import GradientBoostingRegressor\n",
"estimate_npar = model.estimate_effect(identified_estimand, \n",
" method_name=\"backdoor.econml.dml.KernelDML\",\n",
" method_params={\n",
" 'init_params': {'model_y':GradientBoostingRegressor(),\n",
" 'model_t': GradientBoostingRegressor(), },\n",
" 'fit_params': {},\n",
" })\n",
"print(estimate_npar)"
]
},
{
"cell_type": "markdown",
"id": "5c971de4",
"metadata": {},
"source": [
"To do the sensitivity analysis, we now use the same `non-parametric--partial-R2` method, however the estimation of partial R2 will be based on reisz representers. We use `plugin_reisz=True` to specify that we will be using a plugin reisz function estimator (this is faster and available for binary treatments). Otherwise, we can set it to `False` to estimate reisz function using a loss function."
]
},
{
"cell_type": "code",
"execution_count": null,
"id": "946e1237",
"metadata": {
"scrolled": true
},
"outputs": [],
"source": [
"refute_npar = model.refute_estimate(identified_estimand, estimate_npar,\n",
" method_name = \"add_unobserved_common_cause\",\n",
" simulation_method = \"non-parametric-partial-R2\",\n",
" benchmark_common_causes = [\"W1\"],\n",
" effect_fraction_on_treatment = 0.2,\n",
" effect_fraction_on_outcome = 0.2,\n",
" plugin_reisz=True\n",
" )\n",
"print(refute_npar)"
]
},
{
"cell_type": "markdown",
"id": "db63007e",
"metadata": {},
"source": [
"The plot has the same interpretation as before. We obtain a robustness value of 0.66 compared to robustness value of 0.45 for LinearDML estimator.\n",
"\n",
"Note that the robustness value changes, even though the point estimates from LinearDML and KernelDML are similar. This is because we made different assumptions on the true data-generating process. "
]
}
],
"metadata": {
"kernelspec": {
"display_name": "Python 3 (ipykernel)",
"language": "python",
"name": "python3"
},
"language_info": {
"codemirror_mode": {
"name": "ipython",
"version": 3
},
"file_extension": ".py",
"mimetype": "text/x-python",
"name": "python",
"nbconvert_exporter": "python",
"pygments_lexer": "ipython3",
"version": "3.8.10"
}
},
"nbformat": 4,
"nbformat_minor": 5
} |
| {
"cells": [
{
"cell_type": "markdown",
"id": "0bbaacaa",
"metadata": {},
"source": [
"# Sensitivity analysis for non-parametric causal estimators\n",
"Sensitivity analysis helps us study how robust an estimated effect is when the assumption of no unobserved confounding is violated. That is, how much bias does our estimate have due to omitting an (unobserved) confounder? Known as the \n",
"*omitted variable bias (OVB)*, it gives us a measure of how the inclusion of an omitted common cause (confounder) would have changed the estimated effect. \n",
"\n",
"This notebook shows how to estimate the OVB for general, non-parametric causal estimators. For gaining intuition, we suggest going through an introductory notebook that describes how to estimate OVB for a a linear estimator: [Sensitivity analysis for linear estimators](https://github.com/py-why/dowhy/blob/master/docs/source/example_notebooks/sensitivity_analysis_testing.ipynb). To recap, in that notebook, we saw how the OVB depended on linear partial R^2 values and used this insight to compute the adjusted estimate values depending on the relative strength of the confounder with the outcome and treatment. We now generalize the technique using the non-parametric partial R^2 and Reisz representers.\n",
"\n",
"\n",
"This notebook is based on *Chernozhukov et al., Long Story Short: Omitted Variable Bias in Causal Machine Learning. https://arxiv.org/abs/2112.13398*. "
]
},
{
"cell_type": "markdown",
"id": "cf30b925",
"metadata": {},
"source": [
"## I. Sensitivity analysis for partially linear models\n",
"We first analyze the sensitivity of a causal estimate when the true data-generating process (DGP) is known to be partially linear. That is, the outcome can be additively decomposed into a linear function of the treatment and a non-linear function of the confounders. We denote the treatment by $T$, outcome by $Y$, observed confounders by $W$ and unobserved confounders by $U$. \n",
"$$ Y = g(T, W, U) + \\epsilon = \\theta T + h(W, U) + \\epsilon $$\n",
"\n",
"However, we cannot estimate the above equation because the confounders $U$ are unobserved. Thus, in practice, a causal estimator uses the following \"short\" equation, \n",
"$$ Y = g_s(T, W) + \\epsilon_s = \\theta_s T + h_s(W) + \\epsilon_s $$\n",
"\n",
"The goal of sensitivity analysis is to answer how far $\\theta_s$ would be from the true $\\theta$. Chernozhukov et al. show that given a special function called Reisz function $\\alpha$, the omitted variable bias, $|\\theta - \\theta_s|$ is bounded by $\\sqrt{E[g-g_s]^2E[\\alpha-\\alpha_s]^2}$. For partial linear models, $\\alpha$ and the \"short\" $\\alpha_s$ are defined as, \n",
"$$ \\alpha := \\frac{T - E[T | W, U] )}{E(T - E[T | W, U]) ^ 2}$$\n",
"$$ \\alpha_s := \\frac{(T - E[T | W] )}{E(T - E[T | W]) ^ 2} $$\n",
"\n",
"The bound can be expressed in terms of the *partial* R^2 of the unobserved confounder $U$ with the treatment and outcome, conditioned on the observed confounders $W$. Recall that R^2 of $U$ wrt some target $Z$ is defined as the ratio of variance of the prediction $E[Z|U]$ with the variance of $Z$, $R^2_{Z\\sim U}=\\frac{\\operatorname{Var}(E[Z|U])}{\\operatorname{Var}(Y)}$. We can define the partial R^2 as an extension that measures the additional gain in explanatory power conditioned on some variables $W$. \n",
"$$ \\eta^2_{Z\\sim U| W} = \\frac{\\operatorname{Var}(E[Z|W, U]) - \\operatorname{Var}(E[Z|W])}{\\operatorname{Var}(Z) - \\operatorname{Var}(E[Z|W])} $$\n",
"\n",
"The bound is given by, \n",
"$$ (\\theta - \\theta_s)^2 = E[g-g_s]^2E[\\alpha-\\alpha_s]^2 = S^2 C_Y^2 C_T^2 $$ \n",
"where, \n",
"$$ S^2 = \\frac{E[(Y-g_s)^2]}{E[\\alpha_s^2]}; \\ \\ C_Y^2 = \\eta^2_{Y \\sim U | T, W}, \\ \\ C_T^2 = \\frac{\\eta^2_{T\\sim U | W}}{1 - \\eta^2_{T\\sim U | W}}$$\n",
"\n",
"\n",
"$S^2$ can be estimated from data. The other two parameters need to be specified manually: they convey the strength of the unobserved confounder $U$ on treatment and outcome. Below we show how to create a sensitivity contour plot by specifying a range of plausible values for $\\eta^2_{Y \\sim U | T, W}$ and $\\eta^2_{T\\sim U | W}$. We also show how to benchmark and set these values as a fraction of the maximum partial R^2 due to any subset of the observed covariates. "
]
},
{
"cell_type": "markdown",
"id": "1b67b63e",
"metadata": {},
"source": [
"### Creating a dataset with unobserved confounding "
]
},
{
"cell_type": "code",
"execution_count": null,
"id": "bbbab4ea",
"metadata": {},
"outputs": [],
"source": [
"%load_ext autoreload\n",
"%autoreload 2"
]
},
{
"cell_type": "code",
"execution_count": null,
"id": "ba6f68b9",
"metadata": {},
"outputs": [],
"source": [
"# Required libraries\n",
"import re\n",
"import numpy as np\n",
"import dowhy\n",
"from dowhy import CausalModel\n",
"import dowhy.datasets\n",
"from dowhy.utils.regression import create_polynomial_function"
]
},
{
"cell_type": "markdown",
"id": "2c386282",
"metadata": {},
"source": [
"We create a dataset with linear relationship between treatment and outcome, following the partial linear data-generating process. $\\beta$ is the true causal effect."
]
},
{
"cell_type": "code",
"execution_count": null,
"id": "41c60ca7",
"metadata": {},
"outputs": [],
"source": [
"np.random.seed(101) \n",
"data = dowhy.datasets.partially_linear_dataset(beta = 10,\n",
" num_common_causes = 7,\n",
" num_unobserved_common_causes=1,\n",
" strength_unobserved_confounding=10,\n",
" num_samples = 1000,\n",
" num_treatments = 1,\n",
" stddev_treatment_noise = 10,\n",
" stddev_outcome_noise = 5\n",
" )\n",
"display(data)"
]
},
{
"cell_type": "markdown",
"id": "5df879f9",
"metadata": {},
"source": [
"The true ATE for this data-generating process is,"
]
},
{
"cell_type": "code",
"execution_count": null,
"id": "75882711",
"metadata": {},
"outputs": [],
"source": [
"data[\"ate\"]"
]
},
{
"cell_type": "markdown",
"id": "5308f4dc",
"metadata": {},
"source": [
"To simulate unobserved confounding, we remove one of the common causes from the dataset. \n"
]
},
{
"cell_type": "code",
"execution_count": null,
"id": "636b6a25",
"metadata": {
"scrolled": true
},
"outputs": [],
"source": [
"# Observed data \n",
"dropped_cols=[\"W0\"]\n",
"user_data = data[\"df\"].drop(dropped_cols, axis = 1)\n",
"# assumed graph\n",
"user_graph = data[\"gml_graph\"]\n",
"for col in dropped_cols:\n",
" user_graph = user_graph.replace('node[ id \"{0}\" label \"{0}\"]'.format(col), '')\n",
" user_graph = re.sub('edge\\[ source \"{}\" target \"[vy][0]*\"\\]'.format(col), \"\", user_graph)\n",
"user_data"
]
},
{
"cell_type": "markdown",
"id": "4ae95e95",
"metadata": {},
"source": [
"### Obtaining a causal estimate using Model, Identify, Estimate steps\n",
"Create a causal model with the \"observed\" data and causal graph."
]
},
{
"cell_type": "code",
"execution_count": null,
"id": "207034e5",
"metadata": {},
"outputs": [],
"source": [
"model = CausalModel(\n",
" data=user_data,\n",
" treatment=data[\"treatment_name\"],\n",
" outcome=data[\"outcome_name\"],\n",
" graph=user_graph,\n",
" test_significance=None,\n",
" )\n",
"model.view_model()\n",
"from IPython.display import Image, display\n",
"display(Image(filename=\"causal_model.png\"))"
]
},
{
"cell_type": "code",
"execution_count": null,
"id": "4eaec5dc",
"metadata": {
"scrolled": true
},
"outputs": [],
"source": [
"# Identify effect\n",
"identified_estimand = model.identify_effect(proceed_when_unidentifiable=True)\n",
"print(identified_estimand)"
]
},
{
"cell_type": "code",
"execution_count": null,
"id": "56889b39",
"metadata": {},
"outputs": [],
"source": [
"# Estimate effect\n",
"import econml\n",
"from sklearn.ensemble import GradientBoostingRegressor\n",
"linear_dml_estimate = model.estimate_effect(identified_estimand, \n",
" method_name=\"backdoor.econml.dml.dml.LinearDML\",\n",
" method_params={\n",
" 'init_params': {'model_y':GradientBoostingRegressor(),\n",
" 'model_t': GradientBoostingRegressor(),\n",
" 'linear_first_stages': False\n",
" },\n",
" 'fit_params': {'cache_values': True,}\n",
" })\n",
"print(linear_dml_estimate)"
]
},
{
"cell_type": "markdown",
"id": "891068cb",
"metadata": {},
"source": [
"### Sensitivity Analysis using the Refute step\n",
"After estimation , we need to check how robust our estimate is against the possibility of unobserved confounders. We perform sensitivity analysis for the LinearDML estimator assuming that its assumption on data-generating process holds: the true function for $Y$ is partial linear. For computational efficiency, we set <b>cache_values</b> = <b>True</b> in `fit_params` to cache the results of first stage estimation.\n",
"\n",
"Parameters used:\n",
"\n",
"* <b>method_name</b>: Refutation method name <br>\n",
"* <b>simulation_method</b>: \"non-parametric-partial-R2\" for non Parametric Sensitivity Analysis. \n",
"Note that partial linear sensitivity analysis is automatically chosen if LinearDML estimator is used for estimation. \n",
"* **partial_r2_confounder_treatment**: $\\eta^2_{T\\sim U | W}$, Partial R2 of unobserved confounder with treatment conditioned on all observed confounders. \n",
"* **partial_r2_confounder_outcome**: $\\eta^2_{Y \\sim U | T, W}$, Partial R2 of unobserved confounder with outcome conditioned on treatment and all observed confounders. "
]
},
{
"cell_type": "code",
"execution_count": null,
"id": "2488ccbb",
"metadata": {},
"outputs": [],
"source": [
"refute = model.refute_estimate(identified_estimand, linear_dml_estimate ,\n",
" method_name = \"add_unobserved_common_cause\",\n",
" simulation_method = \"non-parametric-partial-R2\",\n",
" partial_r2_confounder_treatment = np.arange(0, 0.8, 0.1),\n",
" partial_r2_confounder_outcome = np.arange(0, 0.8, 0.1)\n",
" )\n",
"print(refute)"
]
},
{
"cell_type": "markdown",
"id": "81f1d65b",
"metadata": {},
"source": [
"**Intepretation of the plot.** In the above plot, the x-axis shows hypothetical partial R2 values of unobserved confounder(s) with the treatment. The y-axis shows hypothetical partial R2 of unobserved confounder(s) with the outcome. At <x=0,y=0>, the black diamond shows the original estimate (theta_s) without considering the unobserved confounders.\n",
"\n",
"The contour levels represent *adjusted* lower confidence bound estimate of the effect, which would be obtained if the unobserved confounder(s) had been included in the estimation model. The red contour line is the critical threshold where the adjusted effect goes to zero. Thus, confounders with such strength or stronger are sufficient to reverse the sign of the estimated effect and invalidate the estimate's conclusions. This notion can be quantified by outputting the robustness value."
]
},
{
"cell_type": "code",
"execution_count": null,
"id": "52b2e904",
"metadata": {},
"outputs": [],
"source": [
"refute.RV"
]
},
{
"cell_type": "markdown",
"id": "3524cb07",
"metadata": {},
"source": [
"The robustness value measures the minimal equal strength of $\\eta^2_{T\\sim U | W}$ and $\\eta^2_{Y \\sim U | T, W}$ such the bound for the average treatment effect would include zero. It can be between 0 and 1. <br>\n",
"A robustness value of 0.45 implies that confounders with $\\eta^2_{T\\sim U | W}$ and $\\eta^2_{Y \\sim U | T, W}$ values less than 0.4 would not be sufficient enough to bring down the estimates to zero. In general, a low robustness value implies that the results can be changed even by the presence of weak confounders whereas a robustness value close to 1 means the treatment effect can handle even strong confounders that may explain all residual variation of the treatment and the outcome."
]
},
{
"cell_type": "markdown",
"id": "a1fbcf48",
"metadata": {},
"source": [
"**Benchmarking.** In general, however, providing a plausible range of partial R2 values is difficult. Instead, we can infer the partial R2 of the unobserved confounder as a multiple of the partial R2 of any subset of observed confounders. So now we just need to specify the effect of unobserved confounding as a multiple/fraction of the observed confounding. This process is known as *benchmarking*."
]
},
{
"cell_type": "markdown",
"id": "7a1f4986",
"metadata": {},
"source": [
"The relevant arguments for bencmarking are:\n",
"- <b>benchmark_common_causes</b>: Names of the observed confounders used to bound the strengths of unobserved confounder<br>\n",
"- <b>effect_fraction_on_treatment</b>: Strength of association between unobserved confounder and treatment compared to benchmark confounders<br>\n",
"- <b>effect_fraction_on_outcome</b>: Strength of association between unobserved confounder and outcome compared to benchmark confounders<br>"
]
},
{
"cell_type": "code",
"execution_count": null,
"id": "85eefe08",
"metadata": {},
"outputs": [],
"source": [
"refute_bm = model.refute_estimate(identified_estimand, linear_dml_estimate ,\n",
" method_name = \"add_unobserved_common_cause\",\n",
" simulation_method = \"non-parametric-partial-R2\",\n",
" benchmark_common_causes = [\"W1\"],\n",
" effect_fraction_on_treatment = 0.2,\n",
" effect_fraction_on_outcome = 0.2\n",
" )"
]
},
{
"cell_type": "markdown",
"id": "46b54056",
"metadata": {},
"source": [
"The red triangle shows the estimated partial-R^2 of a chosen benchmark observed covariate with the treatment and outcome. In the above call, we chose *W1* as the benchmark covariate. Under assumption that the unobserved confounder cannot be stronger in its effect on treatment and outcome than the observed benchmark covariate (*W1*), the above plot shows that the mean estimated effect will reduce after accounting for unobserved confounding, but still remain substantially above zero.\n"
]
},
{
"cell_type": "markdown",
"id": "070f01c6",
"metadata": {},
"source": [
"**Plot types**. The default `plot_type` is to show the `lower_confidence_bound` under a significance level . Other possible values for the `plot_type` are:\n",
"* `upper_confidence_bound`: preferably used in cases where the unobserved confounder is expected to lower the estimate.\n",
"* `lower_ate_bound`: lower (point) estimate for unconfounded average treatment effect without considering the significance level\n",
"* `upper_ate_bound`: upper (point) estimate for unconfounded average treatment effect without considering the significance level\n",
"* `bias`: the bias of the obtained estimate compared to the true estimate"
]
},
{
"cell_type": "code",
"execution_count": null,
"id": "df23a49b",
"metadata": {},
"outputs": [],
"source": [
"refute_bm.plot(plot_type = \"upper_confidence_bound\")\n",
"refute_bm.plot(plot_type = \"bias\")"
]
},
{
"cell_type": "markdown",
"id": "83052508",
"metadata": {},
"source": [
"We can also access the benchmarking results as a data frame."
]
},
{
"cell_type": "code",
"execution_count": null,
"id": "934eef00",
"metadata": {},
"outputs": [],
"source": [
"refute_bm.results"
]
},
{
"cell_type": "markdown",
"id": "1fffe64f",
"metadata": {},
"source": [
"## II. Sensitivity Analysis for general non-parametric models\n",
"We now perform sensitivity analysis without making any assumption on the true data-generating process. The sensitivity still depends on the partial R2 of unobserved confounder with outcome, $\\eta^2_{Y \\sim U | T, W}$, and a similar parameter for the confounder-treatment relationship. However, the computation of bounds is more complicated and requires estimation of a special function known as reisz function. Refer to Chernozhukov et al. for details."
]
},
{
"cell_type": "code",
"execution_count": null,
"id": "7cff3837",
"metadata": {},
"outputs": [],
"source": [
"# Estimate effect using a non-parametric estimator\n",
"from sklearn.ensemble import GradientBoostingRegressor\n",
"estimate_npar = model.estimate_effect(identified_estimand, \n",
" method_name=\"backdoor.econml.dml.KernelDML\",\n",
" method_params={\n",
" 'init_params': {'model_y':GradientBoostingRegressor(),\n",
" 'model_t': GradientBoostingRegressor(), },\n",
" 'fit_params': {},\n",
" })\n",
"print(estimate_npar)"
]
},
{
"cell_type": "markdown",
"id": "5c971de4",
"metadata": {},
"source": [
"To do the sensitivity analysis, we now use the same `non-parametric--partial-R2` method, however the estimation of partial R2 will be based on reisz representers. We use `plugin_reisz=True` to specify that we will be using a plugin reisz function estimator (this is faster and available for binary treatments). Otherwise, we can set it to `False` to estimate reisz function using a loss function."
]
},
{
"cell_type": "code",
"execution_count": null,
"id": "946e1237",
"metadata": {
"scrolled": true
},
"outputs": [],
"source": [
"refute_npar = model.refute_estimate(identified_estimand, estimate_npar,\n",
" method_name = \"add_unobserved_common_cause\",\n",
" simulation_method = \"non-parametric-partial-R2\",\n",
" benchmark_common_causes = [\"W1\"],\n",
" effect_fraction_on_treatment = 0.2,\n",
" effect_fraction_on_outcome = 0.2,\n",
" plugin_reisz=True\n",
" )\n",
"print(refute_npar)"
]
},
{
"cell_type": "markdown",
"id": "db63007e",
"metadata": {},
"source": [
"The plot has the same interpretation as before. We obtain a robustness value of 0.66 compared to robustness value of 0.45 for LinearDML estimator.\n",
"\n",
"Note that the robustness value changes, even though the point estimates from LinearDML and KernelDML are similar. This is because we made different assumptions on the true data-generating process. "
]
}
],
"metadata": {
"kernelspec": {
"display_name": "Python 3.8.10 ('dowhy-_zBapv7Q-py3.8')",
"language": "python",
"name": "python3"
},
"language_info": {
"codemirror_mode": {
"name": "ipython",
"version": 3
},
"file_extension": ".py",
"mimetype": "text/x-python",
"name": "python",
"nbconvert_exporter": "python",
"pygments_lexer": "ipython3",
"version": "3.8.10"
},
"vscode": {
"interpreter": {
"hash": "dcb481ad5d98e2afacd650b2c07afac80a299b7b701b553e333fc82865502500"
}
}
},
"nbformat": 4,
"nbformat_minor": 5
}
| andresmor-ms | 11c4e0dafd6e824eb81ad14262457d954ae61468 | affe0952f4aba6845247355c171565510c2c1673 | it will change with every commit in which we modify the notebook (if we are using VS Code) | andresmor-ms | 184
py-why/dowhy | 746 | Functional api/causal estimators | * Introduce `fit()` method to estimators.
* Refactor constructors to avoid using `*args` and `**kwargs` and have more explicit parameters.
* Refactor refuters and other parts of the code to use `fit()` and modify arguments to `estimate_effect()` | null | 2022-11-04 16:15:39+00:00 | 2022-12-03 17:07:53+00:00 | dowhy/causal_estimator.py | import logging
from collections import namedtuple
from typing import Dict, List, Optional, Union
import numpy as np
import pandas as pd
import sympy as sp
from sklearn.utils import resample
import dowhy.interpreters as interpreters
from dowhy import causal_estimators
from dowhy.causal_graph import CausalGraph
from dowhy.causal_identifier.identified_estimand import IdentifiedEstimand
from dowhy.utils.api import parse_state
logger = logging.getLogger(__name__)
class CausalEstimator:
"""Base class for an estimator of causal effect.
Subclasses implement different estimation methods. All estimation methods are in the package "dowhy.causal_estimators"
"""
# The default number of simulations for statistical testing
DEFAULT_NUMBER_OF_SIMULATIONS_STAT_TEST = 1000
# The default number of simulations to obtain confidence intervals
DEFAULT_NUMBER_OF_SIMULATIONS_CI = 100
# The portion of the total size that should be taken each time to find the confidence intervals
# 1 is the recommended value
# https://ocw.mit.edu/courses/mathematics/18-05-introduction-to-probability-and-statistics-spring-2014/readings/MIT18_05S14_Reading24.pdf
# https://projecteuclid.org/download/pdf_1/euclid.ss/1032280214
DEFAULT_SAMPLE_SIZE_FRACTION = 1
# The default Confidence Level
DEFAULT_CONFIDENCE_LEVEL = 0.95
# Number of quantiles to discretize continuous columns, for applying groupby
NUM_QUANTILES_TO_DISCRETIZE_CONT_COLS = 5
# Prefix to add to temporary categorical variables created after discretization
TEMP_CAT_COLUMN_PREFIX = "__categorical__"
DEFAULT_NOTIMPLEMENTEDERROR_MSG = "not yet implemented for {0}. If you would this to be implemented in the next version, please raise an issue at https://github.com/microsoft/dowhy/issues"
BootstrapEstimates = namedtuple("BootstrapEstimates", ["estimates", "params"])
DEFAULT_INTERPRET_METHOD = ["textual_effect_interpreter"]
# std args to be removed from locals() before being passed to args_dict
_STD_INIT_ARGS = ("self", "__class__", "args", "kwargs")
def __init__(
self,
data,
identified_estimand,
treatment,
outcome,
control_value=0,
treatment_value=1,
test_significance=False,
evaluate_effect_strength=False,
confidence_intervals=False,
target_units=None,
effect_modifiers=None,
num_null_simulations=DEFAULT_NUMBER_OF_SIMULATIONS_STAT_TEST,
num_simulations=DEFAULT_NUMBER_OF_SIMULATIONS_CI,
sample_size_fraction=DEFAULT_SAMPLE_SIZE_FRACTION,
confidence_level=DEFAULT_CONFIDENCE_LEVEL,
need_conditional_estimates="auto",
num_quantiles_to_discretize_cont_cols=NUM_QUANTILES_TO_DISCRETIZE_CONT_COLS,
**kwargs,
):
"""Initializes an estimator with data and names of relevant variables.
This method is called from the constructors of its child classes.
:param data: data frame containing the data
:param identified_estimand: probability expression
representing the target identified estimand to estimate.
:param treatment: name of the treatment variable
:param outcome: name of the outcome variable
:param control_value: Value of the treatment in the control group, for effect estimation. If treatment is multi-variate, this can be a list.
:param treatment_value: Value of the treatment in the treated group, for effect estimation. If treatment is multi-variate, this can be a list.
:param test_significance: Binary flag or a string indicating whether to test significance and by which method. All estimators support test_significance="bootstrap" that estimates a p-value for the obtained estimate using the bootstrap method. Individual estimators can override this to support custom testing methods. The bootstrap method supports an optional parameter, num_null_simulations. If False, no testing is done. If True, significance of the estimate is tested using the custom method if available, otherwise by bootstrap.
:param evaluate_effect_strength: (Experimental) whether to evaluate the strength of effect
:param confidence_intervals: Binary flag or a string indicating whether the confidence intervals should be computed and which method should be used. All methods support estimation of confidence intervals using the bootstrap method by using the parameter confidence_intervals="bootstrap". The bootstrap method takes in two arguments (num_simulations and sample_size_fraction) that can be optionally specified in the params dictionary. Estimators may also override this to implement their own confidence interval method. If this parameter is False, no confidence intervals are computed. If True, confidence intervals are computed by the estimator's specific method if available, otherwise through bootstrap.
:param target_units: The units for which the treatment effect should be estimated. This can be a string for common specifications of target units (namely, "ate", "att" and "atc"). It can also be a lambda function that can be used as an index for the data (pandas DataFrame). Alternatively, it can be a new DataFrame that contains values of the effect_modifiers and effect will be estimated only for this new data.
:param effect_modifiers: Variables on which to compute separate
effects, or return a heterogeneous effect function. Not all
methods support this currently.
:param num_null_simulations: The number of simulations for testing the
statistical significance of the estimator
:param num_simulations: The number of simulations for finding the
            confidence interval (and/or standard error) for an estimate
:param sample_size_fraction: The size of the sample for the bootstrap
estimator
:param confidence_level: The confidence level of the confidence
interval estimate
:param need_conditional_estimates: Boolean flag indicating whether
conditional estimates should be computed. Defaults to True if
there are effect modifiers in the graph
:param num_quantiles_to_discretize_cont_cols: The number of quantiles
into which a numeric effect modifier is split, to enable
estimation of conditional treatment effect over it.
:param kwargs: (optional) Additional estimator-specific parameters
:returns: an instance of the estimator class.
"""
self._data = data
self._target_estimand = identified_estimand
# Currently estimation methods only support univariate treatment and outcome
self._treatment_name = treatment
self._outcome_name = outcome[0] # assuming one-dimensional outcome
self._control_value = control_value
self._treatment_value = treatment_value
self._significance_test = test_significance
self._effect_strength_eval = evaluate_effect_strength
self._target_units = target_units
self._effect_modifier_names = effect_modifiers
self._confidence_intervals = confidence_intervals
self._bootstrap_estimates = None # for confidence intervals and std error
self._bootstrap_null_estimates = None # for significance test
self._effect_modifiers = None
self.method_params = kwargs
# Setting the default interpret method
self.interpret_method = CausalEstimator.DEFAULT_INTERPRET_METHOD
self.logger = logging.getLogger(__name__)
# Setting treatment and outcome values
if self._data is not None:
self._treatment = self._data[self._treatment_name]
self._outcome = self._data[self._outcome_name]
if self._effect_modifier_names:
# only add the observed nodes
self._effect_modifier_names = [
cname for cname in self._effect_modifier_names if cname in self._data.columns
]
if len(self._effect_modifier_names) > 0:
self._effect_modifiers = self._data[self._effect_modifier_names]
self._effect_modifiers = pd.get_dummies(self._effect_modifiers, drop_first=True)
self.logger.debug("Effect modifiers: " + ",".join(self._effect_modifier_names))
else:
self._effect_modifier_names = None
# Check if some parameters were set, otherwise set to default values
self.num_null_simulations = num_null_simulations
self.num_simulations = num_simulations
self.sample_size_fraction = sample_size_fraction
self.confidence_level = confidence_level
self.num_quantiles_to_discretize_cont_cols = num_quantiles_to_discretize_cont_cols
# Estimate conditional estimates by default
self.need_conditional_estimates = (
need_conditional_estimates if need_conditional_estimates != "auto" else bool(self._effect_modifier_names)
)
@staticmethod
def get_estimator_object(new_data, identified_estimand, estimate):
"""Create a new estimator of the same type as the one passed in the estimate argument.
Creates a new object with new_data and the identified_estimand
:param new_data: np.ndarray, pd.Series, pd.DataFrame
The newly assigned data on which the estimator should run
:param identified_estimand: IdentifiedEstimand
An instance of the identified estimand class that provides the information with
            respect to which causal pathways are employed when the treatment affects the outcome
:param estimate: CausalEstimate
It is an already existing estimate whose properties we wish to replicate
:returns: An instance of the same estimator class that had generated the given estimate.
"""
estimator_class = estimate.params["estimator_class"]
new_estimator = estimator_class(
new_data,
identified_estimand,
identified_estimand.treatment_variable,
identified_estimand.outcome_variable,
# names of treatment and outcome
control_value=estimate.control_value,
treatment_value=estimate.treatment_value,
test_significance=False,
evaluate_effect_strength=False,
confidence_intervals=estimate.params["confidence_intervals"],
target_units=estimate.params["target_units"],
effect_modifiers=estimate.params["effect_modifiers"],
**estimate.params["method_params"] if estimate.params["method_params"] is not None else {},
)
return new_estimator
def _estimate_effect(self):
"""This method is to be overriden by the child classes, so that they can run the estimation technique of their choice"""
raise NotImplementedError(
("Main estimation method is " + CausalEstimator.DEFAULT_NOTIMPLEMENTEDERROR_MSG).format(self.__class__)
)
def estimate_effect(self):
"""Base estimation method that calls the estimate_effect method of its calling subclass.
Can optionally also test significance and estimate effect strength for any returned estimate.
:param self: object instance of class Estimator
        :returns: A CausalEstimate instance that contains point estimates of average and conditional effects. Based on the parameters provided, it optionally includes confidence intervals, standard errors, statistical significance and other statistical parameters.
"""
est = self._estimate_effect()
est.add_estimator(self)
if self._significance_test:
self.test_significance(est.value, method=self._significance_test)
if self._confidence_intervals:
self.estimate_confidence_intervals(
est.value, confidence_level=self.confidence_level, method=self._confidence_intervals
)
if self._effect_strength_eval:
effect_strength_dict = self.evaluate_effect_strength(est)
est.add_effect_strength(effect_strength_dict)
return est
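
    # Illustrative usage sketch (hypothetical subclass and column names; not a
    # doctest for this module):
    #
    #   estimator = SomeEstimatorSubclass(
    #       data, identified_estimand, treatment=["v0"], outcome=["y"],
    #       test_significance=True, confidence_intervals="bootstrap",
    #   )
    #   estimate = estimator.estimate_effect()
    #
    # estimate_effect() calls the subclass's _estimate_effect() and then, per
    # the flags set in the constructor, runs significance tests and bootstrap
    # confidence intervals on the returned CausalEstimate.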
def estimate_effect_naive(self):
# TODO Only works for binary treatment
df_withtreatment = self._data.loc[self._data[self._treatment_name] == 1]
df_notreatment = self._data.loc[self._data[self._treatment_name] == 0]
est = np.mean(df_withtreatment[self._outcome_name]) - np.mean(df_notreatment[self._outcome_name])
return CausalEstimate(est, None, None, control_value=0, treatment_value=1)
def _estimate_effect_fn(self, data_df):
"""Function used in conditional effect estimation. This function is to be overridden by each child estimator.
The overridden function should take in a dataframe as input and return the estimate for that data.
"""
raise NotImplementedError(
("Conditional treatment effects are " + CausalEstimator.DEFAULT_NOTIMPLEMENTEDERROR_MSG).format(
self.__class__
)
)
def _estimate_conditional_effects(self, estimate_effect_fn, effect_modifier_names=None, num_quantiles=None):
"""Estimate conditional treatment effects. Common method for all estimators that utilizes a specific estimate_effect_fn implemented by each child estimator.
If a numeric effect modifier is provided, it is discretized into quantile bins. If you would like a custom discretization, you can do so yourself: create a new column containing the discretized effect modifier and then include that column's name in the effect_modifier_names argument.
:param estimate_effect_fn: Function that has a single parameter (a data frame) and returns the treatment effect estimate on that data.
:param effect_modifier_names: Names of effect modifier variables over which the conditional effects will be estimated. If not provided, defaults to the effect modifiers specified during creation of the CausalEstimator object.
:param num_quantiles: The number of quantiles into which a numeric effect modifier variable is discretized. Does not affect any categorical effect modifiers.
:returns: A (multi-index) dataframe that provides separate effects for each value of the (discretized) effect modifiers.
"""
# Defaulting to class default values if parameters are not provided
if effect_modifier_names is None:
effect_modifier_names = self._effect_modifier_names
if num_quantiles is None:
num_quantiles = self.num_quantiles_to_discretize_cont_cols
# Checking that there is at least one effect modifier
if not effect_modifier_names:
raise ValueError("At least one effect modifier should be specified to compute conditional effects.")
# Making sure that effect_modifier_names is a list
effect_modifier_names = parse_state(effect_modifier_names)
if not all(em in self._effect_modifier_names for em in effect_modifier_names):
            self.logger.warning(
"At least one of the provided effect modifiers was not included while fitting the estimator. You may get incorrect results. To resolve, fit the estimator again by providing the updated effect modifiers in estimate_effect()."
)
# Making a copy since we are going to be changing effect modifier names
effect_modifier_names = effect_modifier_names.copy()
prefix = CausalEstimator.TEMP_CAT_COLUMN_PREFIX
# For every numeric effect modifier, adding a temp categorical column
for i in range(len(effect_modifier_names)):
em = effect_modifier_names[i]
if pd.api.types.is_numeric_dtype(self._data[em].dtypes):
self._data[prefix + str(em)] = pd.qcut(self._data[em], num_quantiles, duplicates="drop")
effect_modifier_names[i] = prefix + str(em)
# Grouping by effect modifiers and computing effect separately
by_effect_mods = self._data.groupby(effect_modifier_names)
cond_est_fn = lambda x: self._do(self._treatment_value, x) - self._do(self._control_value, x)
conditional_estimates = by_effect_mods.apply(estimate_effect_fn)
# Deleting the temporary categorical columns
for em in effect_modifier_names:
if em.startswith(prefix):
self._data.pop(em)
return conditional_estimates
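
    # Illustrative sketch of the discretization above (hypothetical dataframe
    # and column names):
    #
    #   df["__categorical__age"] = pd.qcut(df["age"], 5, duplicates="drop")
    #   df.groupby("__categorical__age").apply(estimate_effect_fn)
    #
    # i.e., numeric effect modifiers are binned into quantiles so that a plain
    # groupby yields one conditional effect estimate per bin.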
def _do(self, x, data_df=None):
raise NotImplementedError(
("Do-operator is " + CausalEstimator.DEFAULT_NOTIMPLEMENTEDERROR_MSG).format(self.__class__)
)
def do(self, x, data_df=None):
"""Method that implements the do-operator.
Given a value x for the treatment, returns the expected value of the outcome when the treatment is intervened to a value x.
:param x: Value of the treatment
:param data_df: Data on which the do-operator is to be applied.
:returns: Value of the outcome when treatment is intervened/set to x.
"""
est = self._do(x, data_df)
return est
def construct_symbolic_estimator(self, estimand):
raise NotImplementedError(("Symbolic estimator string is ").format(self.__class__))
def _generate_bootstrap_estimates(self, num_bootstrap_simulations, sample_size_fraction):
"""Helper function to generate causal estimates over bootstrapped samples.
:param num_bootstrap_simulations: Number of simulations for the bootstrap method.
:param sample_size_fraction: Fraction of the dataset to be resampled.
:returns: A collections.namedtuple containing a list of bootstrapped estimates and a dictionary containing parameters used for the bootstrap.
"""
# The array that stores the results of all estimations
simulation_results = np.zeros(num_bootstrap_simulations)
        # Find the sample size as a proportion of the population size
sample_size = int(sample_size_fraction * len(self._data))
if sample_size > len(self._data):
self.logger.warning("WARN: The sample size is greater than the data being sampled")
self.logger.info("INFO: The sample size: {}".format(sample_size))
self.logger.info("INFO: The number of simulations: {}".format(num_bootstrap_simulations))
# Perform the set number of simulations
for index in range(num_bootstrap_simulations):
new_data = resample(self._data, n_samples=sample_size)
new_estimator = type(self)(
new_data,
self._target_estimand,
self._target_estimand.treatment_variable,
self._target_estimand.outcome_variable,
# names of treatment and outcome
treatment_value=self._treatment_value,
control_value=self._control_value,
test_significance=False,
evaluate_effect_strength=False,
confidence_intervals=False,
target_units=self._target_units,
effect_modifiers=self._effect_modifier_names,
**self.method_params,
)
new_effect = new_estimator.estimate_effect()
simulation_results[index] = new_effect.value
estimates = CausalEstimator.BootstrapEstimates(
simulation_results,
{"num_simulations": num_bootstrap_simulations, "sample_size_fraction": sample_size_fraction},
)
return estimates
def _estimate_confidence_intervals_with_bootstrap(
self, estimate_value, confidence_level=None, num_simulations=None, sample_size_fraction=None
):
"""
Method to compute confidence interval using bootstrapped sampling.
:param estimate_value: obtained estimate's value
:param confidence_level: The level for which to compute CI (e.g., 95% confidence level translates to confidence_level=0.95)
:param num_simulations: The number of simulations to be performed to get the bootstrap confidence intervals.
:param sample_size_fraction: The fraction of the dataset to be resampled.
:returns: confidence interval at the specified level.
For more details on bootstrap or resampling statistics, refer to the following links:
https://ocw.mit.edu/courses/mathematics/18-05-introduction-to-probability-and-statistics-spring-2014/readings/MIT18_05S14_Reading24.pdf
https://projecteuclid.org/download/pdf_1/euclid.ss/1032280214
"""
# Using class default parameters if not specified
if num_simulations is None:
num_simulations = self.num_simulations
if sample_size_fraction is None:
sample_size_fraction = self.sample_size_fraction
# Checking if bootstrap_estimates are already computed
if self._bootstrap_estimates is None:
self._bootstrap_estimates = self._generate_bootstrap_estimates(num_simulations, sample_size_fraction)
elif CausalEstimator.is_bootstrap_parameter_changed(self._bootstrap_estimates.params, locals()):
            # Check if any parameter has changed from the previous std error estimate
self._bootstrap_estimates = self._generate_bootstrap_estimates(num_simulations, sample_size_fraction)
# Now use the data obtained from the simulations to get the value of the confidence estimates
bootstrap_estimates = self._bootstrap_estimates.estimates
# Get the variations of each bootstrap estimate and sort
bootstrap_variations = [bootstrap_estimate - estimate_value for bootstrap_estimate in bootstrap_estimates]
sorted_bootstrap_variations = np.sort(bootstrap_variations)
# Now we take the (1- p)th and the (p)th variations, where p is the chosen confidence level
upper_bound_index = int((1 - confidence_level) * len(sorted_bootstrap_variations))
lower_bound_index = int(confidence_level * len(sorted_bootstrap_variations))
# Get the lower and upper bounds by subtracting the variations from the estimate
lower_bound = estimate_value - sorted_bootstrap_variations[lower_bound_index]
upper_bound = estimate_value - sorted_bootstrap_variations[upper_bound_index]
return lower_bound, upper_bound
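
    # Note: the interval above is the "empirical" (pivot) bootstrap: it
    # subtracts the high and low quantiles of the bootstrap variations from
    # the original estimate, as derived in the MIT 18.05 reading linked in
    # the docstring.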
def _estimate_confidence_intervals(self, confidence_level=None, method=None, **kwargs):
"""
        This method is to be overridden by the child classes, so that they
can run a confidence interval estimation method suited to the specific
causal estimator.
"""
raise NotImplementedError(
(
"This method for estimating confidence intervals is "
+ CausalEstimator.DEFAULT_NOTIMPLEMENTEDERROR_MSG
+ " Meanwhile, you can try the bootstrap method (method='bootstrap') to estimate confidence intervals."
).format(self.__class__)
)
def estimate_confidence_intervals(self, estimate_value, confidence_level=None, method=None, **kwargs):
"""Find the confidence intervals corresponding to any estimator
By default, this is done with the help of bootstrapped confidence intervals
but can be overridden if the specific estimator implements other methods of estimating confidence intervals.
If the method provided is not bootstrap, this function calls the implementation of the specific estimator.
:param estimate_value: obtained estimate's value
:param method: Method for estimating confidence intervals.
:param confidence_level: The confidence level of the confidence intervals of the estimate.
:param kwargs: Other optional args to be passed to the CI method.
:returns: The obtained confidence interval.
"""
if method is None:
if self._confidence_intervals:
method = self._confidence_intervals # this is either True or methodname
else:
method = "default"
confidence_intervals = None
if confidence_level is None:
confidence_level = self.confidence_level
if method == "default" or method is True: # user has not provided any method
try:
confidence_intervals = self._estimate_confidence_intervals(confidence_level, method=method, **kwargs)
except NotImplementedError:
confidence_intervals = self._estimate_confidence_intervals_with_bootstrap(
estimate_value, confidence_level, **kwargs
)
else:
if method == "bootstrap":
confidence_intervals = self._estimate_confidence_intervals_with_bootstrap(
estimate_value, confidence_level, **kwargs
)
else:
confidence_intervals = self._estimate_confidence_intervals(confidence_level, method=method, **kwargs)
return confidence_intervals
def _estimate_std_error_with_bootstrap(self, num_simulations=None, sample_size_fraction=None):
"""Compute standard error using the bootstrap method. Standard error
and confidence intervals use the same parameter num_simulations for
the number of bootstrap simulations.
:param num_simulations: Number of bootstrapped samples.
:param sample_size_fraction: Fraction of data to be resampled.
:returns: Standard error of the obtained estimate.
"""
# Use existing params, if new user defined params are not present
if num_simulations is None:
num_simulations = self.num_simulations
if sample_size_fraction is None:
sample_size_fraction = self.sample_size_fraction
# Checking if bootstrap_estimates are already computed
if self._bootstrap_estimates is None:
self._bootstrap_estimates = self._generate_bootstrap_estimates(num_simulations, sample_size_fraction)
elif CausalEstimator.is_bootstrap_parameter_changed(self._bootstrap_estimates.params, locals()):
# Check if any parameter is changed from the previous std error estimate
self._bootstrap_estimates = self._generate_bootstrap_estimates(num_simulations, sample_size_fraction)
std_error = np.std(self._bootstrap_estimates.estimates)
return std_error
def _estimate_std_error(self, method=None, **kwargs):
"""
This method is to be overridden by the child classes, so that they
can run a standard error estimation method suited to the specific
causal estimator.
"""
raise NotImplementedError(
(
"This method for estimating standard errors is "
+ CausalEstimator.DEFAULT_NOTIMPLEMENTEDERROR_MSG
+ " Meanwhile, you can try the bootstrap method (method='bootstrap') to estimate standard errors."
).format(self.__class__)
)
def estimate_std_error(self, method=None, **kwargs):
"""Compute standard error of an obtained causal estimate.
:param method: Method for computing the standard error.
:param kwargs: Other optional parameters to be passed to the estimating method.
:returns: Standard error of the causal estimate.
"""
if method is None:
if self._confidence_intervals:
method = self._confidence_intervals
else:
method = "default"
std_error = None
if method == "default" or method is True: # user has not provided any method
try:
std_error = self._estimate_std_error(method, **kwargs)
except NotImplementedError:
std_error = self._estimate_std_error_with_bootstrap(**kwargs)
else:
if method == "bootstrap":
std_error = self._estimate_std_error_with_bootstrap(**kwargs)
else:
std_error = self._estimate_std_error(method, **kwargs)
return std_error
def _test_significance_with_bootstrap(self, estimate_value, num_null_simulations=None):
"""Test statistical significance of an estimate using the bootstrap method.
:param estimate_value: Obtained estimate's value
:param num_null_simulations: Number of simulations for the null hypothesis
:returns: p-value of the statistical significance test.
"""
# Use existing params, if new user defined params are not present
if num_null_simulations is None:
num_null_simulations = self.num_null_simulations
do_retest = self._bootstrap_null_estimates is None or CausalEstimator.is_bootstrap_parameter_changed(
self._bootstrap_null_estimates.params, locals()
)
if do_retest:
null_estimates = np.zeros(num_null_simulations)
for i in range(num_null_simulations):
new_outcome = np.random.permutation(self._outcome)
new_data = self._data.assign(dummy_outcome=new_outcome)
# self._outcome = self._data["dummy_outcome"]
new_estimator = type(self)(
new_data,
self._target_estimand,
self._target_estimand.treatment_variable,
("dummy_outcome",),
test_significance=False,
evaluate_effect_strength=False,
confidence_intervals=False,
target_units=self._target_units,
effect_modifiers=self._effect_modifier_names,
**self.method_params,
)
new_effect = new_estimator.estimate_effect()
null_estimates[i] = new_effect.value
self._bootstrap_null_estimates = CausalEstimator.BootstrapEstimates(
null_estimates, {"num_null_simulations": num_null_simulations, "sample_size_fraction": 1}
)
# Processing the null hypothesis estimates
sorted_null_estimates = np.sort(self._bootstrap_null_estimates.estimates)
self.logger.debug("Null estimates: {0}".format(sorted_null_estimates))
median_estimate = sorted_null_estimates[int(num_null_simulations / 2)]
# Doing a two-sided test
if estimate_value > median_estimate:
# Being conservative with the p-value reported
estimate_index = np.searchsorted(sorted_null_estimates, estimate_value, side="left")
p_value = 1 - (estimate_index / num_null_simulations)
elif estimate_value <= median_estimate:
# Being conservative with the p-value reported
estimate_index = np.searchsorted(sorted_null_estimates, estimate_value, side="right")
p_value = estimate_index / num_null_simulations
# A p-value of exactly 0 or 1 is limited by the number of simulations, so report a range instead
if p_value == 0:
p_value = (0, 1 / len(sorted_null_estimates)) # a tuple determining the range.
elif p_value == 1:
p_value = (1 - 1 / len(sorted_null_estimates), 1)
signif_dict = {"p_value": p_value}
return signif_dict
def _test_significance(self, estimate_value, method=None, **kwargs):
"""
This method is to be overridden by the child classes, so that they
can run a significance test suited to the specific
causal estimator.
"""
raise NotImplementedError(
(
"This method for testing statistical significance is "
+ CausalEstimator.DEFAULT_NOTIMPLEMENTEDERROR_MSG
+ " Meanwhile, you can try the bootstrap method (method='bootstrap') to test statistical significance."
).format(self.__class__)
)
def test_significance(self, estimate_value, method=None, **kwargs):
"""Test statistical significance of obtained estimate.
By default, uses resampling to create a non-parametric significance test.
A general procedure. Individual child estimators can implement different methods.
If the method name is different from "bootstrap", this function calls the
implementation of the child estimator.
:param self: object instance of class Estimator
:param estimate_value: obtained estimate's value
:param method: Method for checking statistical significance
:returns: p-value from the significance test
"""
if method is None:
if self._significance_test:
method = self._significance_test # this is either True or methodname
else:
method = "default"
signif_dict = None
if method == "default" or method is True: # user has not provided any method
try:
signif_dict = self._test_significance(estimate_value, method, **kwargs)
except NotImplementedError:
signif_dict = self._test_significance_with_bootstrap(estimate_value, **kwargs)
else:
if method == "bootstrap":
signif_dict = self._test_significance_with_bootstrap(estimate_value, **kwargs)
else:
signif_dict = self._test_significance(estimate_value, method, **kwargs)
return signif_dict
def evaluate_effect_strength(self, estimate):
fraction_effect_explained = self._evaluate_effect_strength(estimate, method="fraction-effect")
# Need to test r-squared before supporting
# effect_r_squared = self._evaluate_effect_strength(estimate, method="r-squared")
strength_dict = {
"fraction-effect": fraction_effect_explained
# 'r-squared': effect_r_squared
}
return strength_dict
def _evaluate_effect_strength(self, estimate, method="fraction-effect"):
supported_methods = ["fraction-effect"]
if method not in supported_methods:
raise NotImplementedError("This method is not supported for evaluating effect strength")
if method == "fraction-effect":
naive_obs_estimate = self.estimate_effect_naive()
self.logger.debug("Estimate: %s, naive observed estimate: %s", estimate.value, naive_obs_estimate.value)
fraction_effect_explained = estimate.value / naive_obs_estimate.value
return fraction_effect_explained
# elif method == "r-squared":
# outcome_mean = np.mean(self._outcome)
# total_variance = np.sum(np.square(self._outcome - outcome_mean))
# Assuming a linear model with one variable: the treatment
# Currently only works for continuous y
# causal_model = outcome_mean + estimate.value*self._treatment
# squared_residual = np.sum(np.square(self._outcome - causal_model))
# r_squared = 1 - (squared_residual/total_variance)
# return r_squared
else:
return None
def update_input(self, treatment_value, control_value, target_units):
self._control_value = control_value
self._treatment_value = treatment_value
self._target_units = target_units
@staticmethod
def is_bootstrap_parameter_changed(bootstrap_estimates_params, given_params):
"""Check whether parameters of the bootstrap have changed.
This is an efficiency method that checks if fresh resampling of the bootstrap samples is required.
Returns True if parameters have changed and resampling should be done again.
:param bootstrap_estimates_params: A dictionary of parameters for the current bootstrap samples
:param given_params: A dictionary of parameters passed by the user
:returns: A binary flag denoting whether the parameters are different.
"""
is_any_parameter_changed = False
for prm, val in bootstrap_estimates_params.items():
given_val = given_params.get(prm, None)
if given_val is not None and given_val != val:
is_any_parameter_changed = True
break
return is_any_parameter_changed
def target_units_tostr(self):
s = ""
if type(self._target_units) is str:
s += self._target_units
elif callable(self._target_units):
s += "Data subset defined by a function"
elif isinstance(self._target_units, pd.DataFrame):
s += "Data subset provided as a data frame"
return s
def signif_results_tostr(self, signif_results):
s = ""
pval = signif_results["p_value"]
if type(pval) is tuple:
s += "[{0}, {1}]".format(pval[0], pval[1])
else:
s += "{0}".format(pval)
return s
def estimate_effect(
treatment: Union[str, List[str]],
outcome: Union[str, List[str]],
identified_estimand: IdentifiedEstimand,
identifier_name: str,
method: CausalEstimator,
control_value: int = 0,
treatment_value: int = 1,
test_significance: Optional[bool] = None,
evaluate_effect_strength: bool = False,
confidence_intervals: bool = False,
target_units: str = "ate",
effect_modifiers: List[str] = [],
fit_estimator: bool = True,
method_params: Optional[Dict] = None,
):
"""Estimate the identified causal effect.
Currently requires an explicit method name to be specified. Method names follow the convention of identification method followed by the specific estimation method: "[backdoor/iv].estimation_method_name". The following methods are supported.
* Propensity Score Matching: "backdoor.propensity_score_matching"
* Propensity Score Stratification: "backdoor.propensity_score_stratification"
* Propensity Score-based Inverse Weighting: "backdoor.propensity_score_weighting"
* Linear Regression: "backdoor.linear_regression"
* Generalized Linear Models (e.g., logistic regression): "backdoor.generalized_linear_model"
* Instrumental Variables: "iv.instrumental_variable"
* Regression Discontinuity: "iv.regression_discontinuity"
In addition, you can directly call any of the EconML estimation methods. The convention is "backdoor.econml.path-to-estimator-class". For example, for the double machine learning estimator ("DML" class) that is located inside the "dml" module of EconML, you can use the method name "backdoor.econml.dml.DML". CausalML estimators can also be called. See `this demo notebook <https://py-why.github.io/dowhy/example_notebooks/dowhy-conditional-treatment-effects.html>`_.
:param treatment: Name of the treatment
:param outcome: Name of the outcome
:param identified_estimand: a probability expression
that represents the effect to be estimated. Output of
CausalModel.identify_effect method
:param identifier_name: name of the identification method (e.g., "backdoor") whose estimand should be used.
:param method: instance of the CausalEstimator to be used for estimation.
:param control_value: Value of the treatment in the control group, for effect estimation. If treatment is multi-variate, this can be a list.
:param treatment_value: Value of the treatment in the treated group, for effect estimation. If treatment is multi-variate, this can be a list.
:param test_significance: Binary flag on whether to additionally do a statistical signficance test for the estimate.
:param evaluate_effect_strength: (Experimental) Binary flag on whether to estimate the relative strength of the treatment's effect. This measure can be used to compare different treatments for the same outcome (by running this method with different treatments sequentially).
:param confidence_intervals: (Experimental) Binary flag indicating whether confidence intervals should be computed.
:param target_units: (Experimental) The units for which the treatment effect should be estimated. This can be of three types. (1) a string for common specifications of target units (namely, "ate", "att" and "atc"), (2) a lambda function that can be used as an index for the data (pandas DataFrame), or (3) a new DataFrame that contains values of the effect_modifiers and effect will be estimated only for this new data.
:param effect_modifiers: Names of effect modifier variables can be (optionally) specified here too, since they do not affect identification. If None, the effect_modifiers from the CausalModel are used.
:param fit_estimator: Boolean flag on whether to fit the estimator.
Setting it to False is useful to estimate the effect on new data using a previously fitted estimator.
:param method_params: Dictionary containing any method-specific parameters. These are passed directly to the estimating method. See the docs for each estimation method for allowed method-specific params.
:returns: An instance of the CausalEstimate class, containing the causal effect estimate
and other method-dependent information
"""
treatment = parse_state(treatment)
outcome = parse_state(outcome)
causal_estimator_class = method.__class__
identified_estimand.set_identifier_method(identifier_name)
if identified_estimand.no_directed_path:
logger.warning("No directed path from {0} to {1}.".format(treatment, outcome))
return CausalEstimate(
0, identified_estimand, None, control_value=control_value, treatment_value=treatment_value
)
# Check if estimator's target estimand is identified
elif identified_estimand.estimands[identifier_name] is None:
logger.error("No valid identified estimand available.")
return CausalEstimate(None, None, None, control_value=control_value, treatment_value=treatment_value)
method.update_input(treatment_value, control_value, target_units)
estimate = method.estimate_effect()
# Store parameters inside estimate object for refutation methods
# TODO: This add_params needs to move to the estimator class
# inside estimate_effect and estimate_conditional_effect
estimate.add_params(
estimand_type=identified_estimand.estimand_type,
estimator_class=causal_estimator_class,
test_significance=test_significance,
evaluate_effect_strength=evaluate_effect_strength,
confidence_intervals=confidence_intervals,
target_units=target_units,
effect_modifiers=effect_modifiers,
method_params=method_params,
)
return estimate
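# Example (illustrative sketch; variable names like `estimand` and
# `psm_estimator` are hypothetical): this functional API takes an
# already-constructed estimator instance rather than a method-name string.
#
#   estimate = estimate_effect(
#       treatment="v0",
#       outcome="y",
#       identified_estimand=estimand,
#       identifier_name="backdoor",
#       method=psm_estimator,
#   )
#   print(estimate.value)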
class CausalEstimate:
"""Class for the estimate object that every causal estimator returns"""
def __init__(
self,
estimate,
target_estimand,
realized_estimand_expr,
control_value,
treatment_value,
conditional_estimates=None,
**kwargs,
):
self.value = estimate
self.target_estimand = target_estimand
self.realized_estimand_expr = realized_estimand_expr
self.control_value = control_value
self.treatment_value = treatment_value
self.conditional_estimates = conditional_estimates
self.params = kwargs
if self.params is not None:
for key, value in self.params.items():
setattr(self, key, value)
self.effect_strength = None
def add_estimator(self, estimator_instance):
self.estimator = estimator_instance
def add_effect_strength(self, strength_dict):
self.effect_strength = strength_dict
def add_params(self, **kwargs):
self.params.update(kwargs)
def get_confidence_intervals(self, confidence_level=None, method=None, **kwargs):
"""Get confidence intervals of the obtained estimate.
By default, this is done with the help of bootstrapped confidence intervals
but can be overridden if the specific estimator implements other methods of estimating confidence intervals.
If the method provided is not bootstrap, this function calls the implementation of the specific estimator.
:param method: Method for estimating confidence intervals.
:param confidence_level: The confidence level of the confidence intervals of the estimate.
:param kwargs: Other optional args to be passed to the CI method.
:returns: The obtained confidence interval.
"""
confidence_intervals = self.estimator.estimate_confidence_intervals(
estimate_value=self.value, confidence_level=confidence_level, method=method, **kwargs
)
return confidence_intervals
def get_standard_error(self, method=None, **kwargs):
"""Get standard error of the obtained estimate.
By default, this is done with the help of bootstrapped standard errors
but can be overridden if the specific estimator implements other methods of estimating standard error.
If the method provided is not bootstrap, this function calls the implementation of the specific estimator.
:param method: Method for computing the standard error.
:param kwargs: Other optional parameters to be passed to the estimating method.
:returns: Standard error of the causal estimate.
"""
std_error = self.estimator.estimate_std_error(method=method, **kwargs)
return std_error
def test_stat_significance(self, method=None, **kwargs):
"""Test statistical significance of the estimate obtained.
By default, uses resampling to create a non-parametric significance test.
Individual child estimators can implement different methods.
If the method name is different from "bootstrap", this function calls the
implementation of the child estimator.
:param method: Method for checking statistical significance
:param kwargs: Other optional parameters to be passed to the estimating method.
:returns: p-value from the significance test
"""
signif_results = self.estimator.test_significance(self.value, method=method, **kwargs)
return {"p_value": signif_results["p_value"]}
def estimate_conditional_effects(
self, effect_modifiers=None, num_quantiles=CausalEstimator.NUM_QUANTILES_TO_DISCRETIZE_CONT_COLS
):
"""Estimate treatment effect conditioned on given variables.
If a numeric effect modifier is provided, it is discretized into quantile bins. If you would like a custom discretization, you can do so yourself: create a new column containing the discretized effect modifier and then include that column's name in the effect_modifier_names argument.
:param effect_modifiers: Names of effect modifier variables over which the conditional effects will be estimated. If not provided, defaults to the effect modifiers specified during creation of the CausalEstimator object.
:param num_quantiles: The number of quantiles into which a numeric effect modifier variable is discretized. Does not affect any categorical effect modifiers.
:returns: A (multi-index) dataframe that provides separate effects for each value of the (discretized) effect modifiers.
"""
return self.estimator._estimate_conditional_effects(
self.estimator._estimate_effect_fn, effect_modifiers, num_quantiles
)
def interpret(self, method_name=None, **kwargs):
"""Interpret the causal estimate.
:param method_name: Method used (string) or a list of methods. If None, then the default for the specific estimator is used.
:param kwargs:: Optional parameters that are directly passed to the interpreter method.
:returns: None
"""
if method_name is None:
method_name = self.estimator.interpret_method
method_name_arr = parse_state(method_name)
for method in method_name_arr:
interpreter = interpreters.get_class_object(method)
interpreter(self, **kwargs).interpret()
def __str__(self):
s = "*** Causal Estimate ***\n"
# No estimand was identified (identification failed)
if self.target_estimand is None:
return "Estimation failed! No relevant identified estimand available for this estimation method."
s += "\n## Identified estimand\n{0}".format(self.target_estimand.__str__(only_target_estimand=True))
s += "\n## Realized estimand\n{0}".format(self.realized_estimand_expr)
if hasattr(self, "estimator"):
s += "\nTarget units: {0}\n".format(self.estimator.target_units_tostr())
s += "\n## Estimate\n"
s += "Mean value: {0}\n".format(self.value)
s += ""
if hasattr(self, "cate_estimates"):
s += "Effect estimates: {0}\n".format(self.cate_estimates)
if hasattr(self, "estimator"):
if self.estimator._significance_test:
s += "p-value: {0}\n".format(self.estimator.signif_results_tostr(self.test_stat_significance()))
if self.estimator._confidence_intervals:
s += "{0}% confidence interval: {1}\n".format(
100 * self.estimator.confidence_level, self.get_confidence_intervals()
)
if self.conditional_estimates is not None:
s += "### Conditional Estimates\n"
s += str(self.conditional_estimates)
if self.effect_strength is not None:
s += "\n## Effect Strength\n"
s += "Change in outcome attributable to treatment: {}\n".format(self.effect_strength["fraction-effect"])
# s += "Variance in outcome explained by treatment: {}\n".format(self.effect_strength["r-squared"])
return s
class RealizedEstimand(object):
def __init__(self, identified_estimand, estimator_name):
self.treatment_variable = identified_estimand.treatment_variable
self.outcome_variable = identified_estimand.outcome_variable
self.backdoor_variables = identified_estimand.get_backdoor_variables()
self.instrumental_variables = identified_estimand.instrumental_variables
self.estimand_type = identified_estimand.estimand_type
self.estimand_expression = None
self.assumptions = None
self.estimator_name = estimator_name
def update_assumptions(self, estimator_assumptions):
self.assumptions = estimator_assumptions
def update_estimand_expression(self, estimand_expression):
self.estimand_expression = estimand_expression
def __str__(self):
s = "Realized estimand: {0}\n".format(self.estimator_name)
s += "Realized estimand type: {0}\n".format(self.estimand_type)
s += "Estimand expression:\n{0}\n".format(sp.pretty(self.estimand_expression))
j = 1
for ass_name, ass_str in self.assumptions.items():
s += "Estimand assumption {0}, {1}: {2}\n".format(j, ass_name, ass_str)
j += 1
return s
import copy
import logging
from collections import namedtuple
from typing import Dict, List, Optional, Union
import numpy as np
import pandas as pd
import sympy as sp
from sklearn.utils import resample
import dowhy.interpreters as interpreters
from dowhy.causal_identifier.identified_estimand import IdentifiedEstimand
from dowhy.utils.api import parse_state
logger = logging.getLogger(__name__)
class CausalEstimator:
"""Base class for an estimator of causal effect.
Subclasses implement different estimation methods. All estimation methods are in the package "dowhy.causal_estimators"
"""
# The default number of simulations for statistical testing
DEFAULT_NUMBER_OF_SIMULATIONS_STAT_TEST = 1000
# The default number of simulations to obtain confidence intervals
DEFAULT_NUMBER_OF_SIMULATIONS_CI = 100
# The portion of the total size that should be taken each time to find the confidence intervals
# 1 is the recommended value
# https://ocw.mit.edu/courses/mathematics/18-05-introduction-to-probability-and-statistics-spring-2014/readings/MIT18_05S14_Reading24.pdf
# https://projecteuclid.org/download/pdf_1/euclid.ss/1032280214
DEFAULT_SAMPLE_SIZE_FRACTION = 1
# The default Confidence Level
DEFAULT_CONFIDENCE_LEVEL = 0.95
# Number of quantiles to discretize continuous columns, for applying groupby
NUM_QUANTILES_TO_DISCRETIZE_CONT_COLS = 5
# Prefix to add to temporary categorical variables created after discretization
TEMP_CAT_COLUMN_PREFIX = "__categorical__"
DEFAULT_NOTIMPLEMENTEDERROR_MSG = "not yet implemented for {0}. If you would like this to be implemented in the next version, please raise an issue at https://github.com/microsoft/dowhy/issues"
BootstrapEstimates = namedtuple("BootstrapEstimates", ["estimates", "params"])
DEFAULT_INTERPRET_METHOD = ["textual_effect_interpreter"]
# std args to be removed from locals() before being passed to args_dict
_STD_INIT_ARGS = ("self", "__class__", "args", "kwargs")
def __init__(
self,
identified_estimand: IdentifiedEstimand,
test_significance: bool = False,
evaluate_effect_strength: bool = False,
confidence_intervals: bool = False,
num_null_simulations: int = DEFAULT_NUMBER_OF_SIMULATIONS_STAT_TEST,
num_simulations: int = DEFAULT_NUMBER_OF_SIMULATIONS_CI,
sample_size_fraction: int = DEFAULT_SAMPLE_SIZE_FRACTION,
confidence_level: float = DEFAULT_CONFIDENCE_LEVEL,
need_conditional_estimates: Union[bool, str] = "auto",
num_quantiles_to_discretize_cont_cols: int = NUM_QUANTILES_TO_DISCRETIZE_CONT_COLS,
**_,
):
"""Initializes an estimator with data and names of relevant variables.
This method is called from the constructors of its child classes.
:param identified_estimand: probability expression
representing the target identified estimand to estimate.
:param test_significance: Binary flag or a string indicating whether to test significance and by which method. All estimators support test_significance="bootstrap" that estimates a p-value for the obtained estimate using the bootstrap method. Individual estimators can override this to support custom testing methods. The bootstrap method supports an optional parameter, num_null_simulations. If False, no testing is done. If True, significance of the estimate is tested using the custom method if available, otherwise by bootstrap.
:param evaluate_effect_strength: (Experimental) whether to evaluate the strength of effect
:param confidence_intervals: Binary flag or a string indicating whether the confidence intervals should be computed and which method should be used. All methods support estimation of confidence intervals using the bootstrap method by using the parameter confidence_intervals="bootstrap". The bootstrap method takes in two arguments (num_simulations and sample_size_fraction) that can be optionally specified in the params dictionary. Estimators may also override this to implement their own confidence interval method. If this parameter is False, no confidence intervals are computed. If True, confidence intervals are computed by the estimator's specific method if available, otherwise through bootstrap
:param num_null_simulations: The number of simulations for testing the
statistical significance of the estimator
:param num_simulations: The number of simulations for finding the
confidence interval (and/or standard error) for a estimate
:param sample_size_fraction: The size of the sample for the bootstrap
estimator
:param confidence_level: The confidence level of the confidence
interval estimate
:param need_conditional_estimates: Boolean flag indicating whether
conditional estimates should be computed. Defaults to True if
there are effect modifiers in the graph
:param num_quantiles_to_discretize_cont_cols: The number of quantiles
into which a numeric effect modifier is split, to enable
estimation of conditional treatment effect over it.
:param kwargs: (optional) Additional estimator-specific parameters
:returns: an instance of the estimator class.
"""
self._target_estimand = identified_estimand
self._significance_test = test_significance
self._effect_strength_eval = evaluate_effect_strength
self._confidence_intervals = confidence_intervals
# Setting the default interpret method
self.interpret_method = CausalEstimator.DEFAULT_INTERPRET_METHOD
self.logger = logging.getLogger(__name__)
# Check if some parameters were set, otherwise set to default values
self.num_null_simulations = num_null_simulations
self.num_simulations = num_simulations
self.sample_size_fraction = sample_size_fraction
self.confidence_level = confidence_level
self.num_quantiles_to_discretize_cont_cols = num_quantiles_to_discretize_cont_cols
# Estimate conditional estimates by default
self.need_conditional_estimates = need_conditional_estimates
self._bootstrap_estimates = None
self._bootstrap_null_estimates = None
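# Example (hedged sketch): child classes call this constructor via super(),
# passing the explicit parameters above instead of *args/**kwargs. The
# subclass name below is hypothetical.
#
#   class MyEstimator(CausalEstimator):
#       def __init__(self, identified_estimand, **kwargs):
#           super().__init__(
#               identified_estimand,
#               test_significance=False,
#               confidence_intervals="bootstrap",
#               num_simulations=200,
#               **kwargs,
#           )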
def _set_data(self, data: pd.DataFrame, treatment_name: List[str], outcome_name: List[str]):
"""Sets the data for the estimator
:param data: data frame containing the data
:param treatment_name: name of the treatment variable
:param outcome_name: name of the outcome variable
"""
self._data = data
self._treatment_name = treatment_name
self._outcome_name = outcome_name[0]
self._treatment = self._data[self._treatment_name]
self._outcome = self._data[self._outcome_name]
def _set_effect_modifiers(self, effect_modifier_names: Optional[List[str]] = None):
"""Sets the effect modifiers for the estimator
Modifies need_conditional_estimates accordingly to effect modifiers value
:param effect_modifiers: Variables on which to compute separate
effects, or return a heterogeneous effect function. Not all
methods support this currently.
"""
self._effect_modifiers = effect_modifier_names
if effect_modifier_names is not None:
self._effect_modifier_names = [cname for cname in effect_modifier_names if cname in self._data.columns]
if len(self._effect_modifier_names) > 0:
self._effect_modifiers = self._data[self._effect_modifier_names]
self._effect_modifiers = pd.get_dummies(self._effect_modifiers, drop_first=True)
self.logger.debug("Effect modifiers: " + ",".join(self._effect_modifier_names))
else:
self._effect_modifier_names = []
else:
self._effect_modifier_names = []
self.need_conditional_estimates = (
self.need_conditional_estimates
if self.need_conditional_estimates != "auto"
else (self._effect_modifier_names and len(self._effect_modifier_names) > 0)
)
def _set_identified_estimand(self, new_identified_estimand):
"""Method used internally to change the target estimand (required by some refuters)
:param new_identified_estimand: The new target_estimand to use
"""
self._target_estimand = new_identified_estimand
def get_new_estimator_object(
self,
identified_estimand,
test_significance=False,
evaluate_effect_strength=False,
confidence_intervals=None,
):
"""Create a new estimator of the same type as the one passed in the estimate argument.
Creates a new object with the identified_estimand
:param identified_estimand: IdentifiedEstimand
An instance of the identified estimand class that provides the information with
respect to which causal pathways are employed when the treatment affects the outcome
:returns: A new instance of the same estimator class that had generated the given estimate.
"""
new_estimator = copy.deepcopy(self)
new_estimator._target_estimand = identified_estimand
new_estimator._test_significance = test_significance
new_estimator._evaluate_effect_strength = evaluate_effect_strength
new_estimator._confidence_intervals = (
self._confidence_intervals if confidence_intervals is None else confidence_intervals
)
return new_estimator
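# Example (illustrative): refuters rely on this method to re-run the same
# kind of estimator against a modified estimand. `modified_estimand` and
# `refutation_data` are hypothetical inputs supplied by a refuter.
#
#   new_est = estimator.get_new_estimator_object(modified_estimand)
#   new_est.fit(refutation_data, treatment_name, outcome_name,
#               effect_modifier_names=effect_modifiers)
#   refuted_effect = new_est.estimate_effect(target_units="ate")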
def estimate_effect_naive(self):
# TODO Only works for binary treatment
df_withtreatment = self._data.loc[self._data[self._treatment_name] == 1]
df_notreatment = self._data.loc[self._data[self._treatment_name] == 0]
est = np.mean(df_withtreatment[self._outcome_name]) - np.mean(df_notreatment[self._outcome_name])
return CausalEstimate(est, None, None, control_value=0, treatment_value=1)
def _estimate_effect_fn(self, data_df):
"""Function used in conditional effect estimation. This function is to be overridden by each child estimator.
The overridden function should take in a dataframe as input and return the estimate for that data.
"""
raise NotImplementedError(
("Conditional treatment effects are " + CausalEstimator.DEFAULT_NOTIMPLEMENTEDERROR_MSG).format(
self.__class__
)
)
def _estimate_conditional_effects(self, estimate_effect_fn, effect_modifier_names=None, num_quantiles=None):
"""Estimate conditional treatment effects. Common method for all estimators that utilizes a specific estimate_effect_fn implemented by each child estimator.
If a numeric effect modifier is provided, it is discretized into quantile bins. If you would like a custom discretization, you can do so yourself: create a new column containing the discretized effect modifier and then include that column's name in the effect_modifier_names argument.
:param estimate_effect_fn: Function that has a single parameter (a data frame) and returns the treatment effect estimate on that data.
:param effect_modifier_names: Names of effect modifier variables over which the conditional effects will be estimated. If not provided, defaults to the effect modifiers specified during creation of the CausalEstimator object.
:param num_quantiles: The number of quantiles into which a numeric effect modifier variable is discretized. Does not affect any categorical effect modifiers.
:returns: A (multi-index) dataframe that provides separate effects for each value of the (discretized) effect modifiers.
"""
# Defaulting to class default values if parameters are not provided
if effect_modifier_names is None:
effect_modifier_names = self._effect_modifier_names
if num_quantiles is None:
num_quantiles = self.num_quantiles_to_discretize_cont_cols
# Checking that there is at least one effect modifier
if not effect_modifier_names:
raise ValueError("At least one effect modifier should be specified to compute conditional effects.")
# Making sure that effect_modifier_names is a list
effect_modifier_names = parse_state(effect_modifier_names)
if not all(em in self._effect_modifier_names for em in effect_modifier_names):
self.logger.warning(
"At least one of the provided effect modifiers was not included while fitting the estimator. You may get incorrect results. To resolve, fit the estimator again by providing the updated effect modifiers in estimate_effect()."
)
# Making a copy since we are going to be changing effect modifier names
effect_modifier_names = effect_modifier_names.copy()
prefix = CausalEstimator.TEMP_CAT_COLUMN_PREFIX
# For every numeric effect modifier, adding a temp categorical column
for i in range(len(effect_modifier_names)):
em = effect_modifier_names[i]
if pd.api.types.is_numeric_dtype(self._data[em].dtypes):
self._data[prefix + str(em)] = pd.qcut(self._data[em], num_quantiles, duplicates="drop")
effect_modifier_names[i] = prefix + str(em)
# Grouping by effect modifiers and computing effect separately
by_effect_mods = self._data.groupby(effect_modifier_names)
cond_est_fn = lambda x: self._do(self._treatment_value, x) - self._do(self._control_value, x)
conditional_estimates = by_effect_mods.apply(estimate_effect_fn)
# Deleting the temporary categorical columns
for em in effect_modifier_names:
if em.startswith(prefix):
self._data.pop(em)
return conditional_estimates
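# Example (self-contained sketch of the discretization step used above):
# numeric effect modifiers are binned into quantiles before grouping.
#
#   import pandas as pd
#   df = pd.DataFrame({"age": [21, 35, 47, 52, 68, 74]})
#   df["__categorical__age"] = pd.qcut(df["age"], 3, duplicates="drop")
#   print(df.groupby("__categorical__age").size())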
def _do(self, x, data_df=None):
raise NotImplementedError(
("Do-operator is " + CausalEstimator.DEFAULT_NOTIMPLEMENTEDERROR_MSG).format(self.__class__)
)
def do(self, x, data_df=None):
"""Method that implements the do-operator.
Given a value x for the treatment, returns the expected value of the outcome when the treatment is intervened to a value x.
:param x: Value of the treatment
:param data_df: Data on which the do-operator is to be applied.
:returns: Value of the outcome when treatment is intervened/set to x.
"""
est = self._do(x, data_df)
return est
def construct_symbolic_estimator(self, estimand):
raise NotImplementedError(
("Symbolic estimator string is " + CausalEstimator.DEFAULT_NOTIMPLEMENTEDERROR_MSG).format(self.__class__)
)
def _generate_bootstrap_estimates(self, num_bootstrap_simulations, sample_size_fraction):
"""Helper function to generate causal estimates over bootstrapped samples.
:param num_bootstrap_simulations: Number of simulations for the bootstrap method.
:param sample_size_fraction: Fraction of the dataset to be resampled.
:returns: A collections.namedtuple containing a list of bootstrapped estimates and a dictionary containing parameters used for the bootstrap.
"""
# The array that stores the results of all estimations
simulation_results = np.zeros(num_bootstrap_simulations)
# Compute the sample size as the given fraction of the full dataset size
sample_size = int(sample_size_fraction * len(self._data))
if sample_size > len(self._data):
self.logger.warning("WARN: The sample size is greater than the data being sampled")
self.logger.info("INFO: The sample size: {}".format(sample_size))
self.logger.info("INFO: The number of simulations: {}".format(num_bootstrap_simulations))
# Perform the set number of simulations
for index in range(num_bootstrap_simulations):
new_data = resample(self._data, n_samples=sample_size)
new_estimator = self.get_new_estimator_object(
self._target_estimand,
test_significance=False,
evaluate_effect_strength=False,
confidence_intervals=False,
)
new_estimator.fit(
new_data,
self._target_estimand.treatment_variable,
self._target_estimand.outcome_variable,
effect_modifier_names=self._effect_modifier_names,
)
new_effect = new_estimator.estimate_effect(
treatment_value=self._treatment_value,
control_value=self._control_value,
target_units=self._target_units,
)
simulation_results[index] = new_effect.value
estimates = CausalEstimator.BootstrapEstimates(
simulation_results,
{"num_simulations": num_bootstrap_simulations, "sample_size_fraction": sample_size_fraction},
)
return estimates
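# Example (hypothetical usage): after fitting an estimator, bootstrap
# estimates can be generated and summarized; `estimator` stands for any
# fitted CausalEstimator subclass.
#
#   boot = estimator._generate_bootstrap_estimates(
#       num_bootstrap_simulations=100, sample_size_fraction=1
#   )
#   print(boot.estimates.mean(), boot.estimates.std(), boot.params)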
def _estimate_confidence_intervals_with_bootstrap(
self, estimate_value, confidence_level=None, num_simulations=None, sample_size_fraction=None
):
"""
Method to compute confidence interval using bootstrapped sampling.
:param estimate_value: obtained estimate's value
:param confidence_level: The level for which to compute CI (e.g., 95% confidence level translates to confidence_level=0.95)
:param num_simulations: The number of simulations to be performed to get the bootstrap confidence intervals.
:param sample_size_fraction: The fraction of the dataset to be resampled.
:returns: confidence interval at the specified level.
For more details on bootstrap or resampling statistics, refer to the following links:
https://ocw.mit.edu/courses/mathematics/18-05-introduction-to-probability-and-statistics-spring-2014/readings/MIT18_05S14_Reading24.pdf
https://projecteuclid.org/download/pdf_1/euclid.ss/1032280214
"""
# Using class default parameters if not specified
if num_simulations is None:
num_simulations = self.num_simulations
if sample_size_fraction is None:
sample_size_fraction = self.sample_size_fraction
# Checking if bootstrap_estimates are already computed
if self._bootstrap_estimates is None:
self._bootstrap_estimates = self._generate_bootstrap_estimates(num_simulations, sample_size_fraction)
elif CausalEstimator.is_bootstrap_parameter_changed(self._bootstrap_estimates.params, locals()):
# Checked if any parameter is changed from the previous std error estimate
self._bootstrap_estimates = self._generate_bootstrap_estimates(num_simulations, sample_size_fraction)
# Now use the data obtained from the simulations to get the value of the confidence estimates
bootstrap_estimates = self._bootstrap_estimates.estimates
# Get the variations of each bootstrap estimate and sort
bootstrap_variations = [bootstrap_estimate - estimate_value for bootstrap_estimate in bootstrap_estimates]
sorted_bootstrap_variations = np.sort(bootstrap_variations)
# Now we take the (1 - p)-th and the p-th variations, where p is the chosen confidence level
upper_bound_index = int((1 - confidence_level) * len(sorted_bootstrap_variations))
lower_bound_index = int(confidence_level * len(sorted_bootstrap_variations))
# Get the lower and upper bounds by subtracting the variations from the estimate
lower_bound = estimate_value - sorted_bootstrap_variations[lower_bound_index]
upper_bound = estimate_value - sorted_bootstrap_variations[upper_bound_index]
return lower_bound, upper_bound
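# Example (self-contained numeric sketch of the interval logic above):
#
#   import numpy as np
#   estimate_value = 1.0
#   variations = np.sort(np.array([-0.3, -0.1, 0.0, 0.1, 0.2]))
#   level = 0.95
#   upper_i = int((1 - level) * len(variations))   # -> 0
#   lower_i = int(level * len(variations))         # -> 4
#   lower = estimate_value - variations[lower_i]   # 1.0 - 0.2 = 0.8
#   upper = estimate_value - variations[upper_i]   # 1.0 - (-0.3) = 1.3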
def _estimate_confidence_intervals(self, confidence_level=None, method=None, **kwargs):
"""
This method is to be overridden by the child classes, so that they
can run a confidence interval estimation method suited to the specific
causal estimator.
"""
raise NotImplementedError(
(
"This method for estimating confidence intervals is "
+ CausalEstimator.DEFAULT_NOTIMPLEMENTEDERROR_MSG
+ " Meanwhile, you can try the bootstrap method (method='bootstrap') to estimate confidence intervals."
).format(self.__class__)
)
def estimate_confidence_intervals(self, estimate_value, confidence_level=None, method=None, **kwargs):
"""Find the confidence intervals corresponding to any estimator
By default, this is done with the help of bootstrapped confidence intervals
but can be overridden if the specific estimator implements other methods of estimating confidence intervals.
If the method provided is not bootstrap, this function calls the implementation of the specific estimator.
:param estimate_value: obtained estimate's value
:param method: Method for estimating confidence intervals.
:param confidence_level: The confidence level of the confidence intervals of the estimate.
:param kwargs: Other optional args to be passed to the CI method.
:returns: The obtained confidence interval.
"""
if method is None:
if self._confidence_intervals:
method = self._confidence_intervals # this is either True or methodname
else:
method = "default"
confidence_intervals = None
if confidence_level is None:
confidence_level = self.confidence_level
if method == "default" or method is True: # user has not provided any method
try:
confidence_intervals = self._estimate_confidence_intervals(confidence_level, method=method, **kwargs)
except NotImplementedError:
confidence_intervals = self._estimate_confidence_intervals_with_bootstrap(
estimate_value, confidence_level, **kwargs
)
else:
if method == "bootstrap":
confidence_intervals = self._estimate_confidence_intervals_with_bootstrap(
estimate_value, confidence_level, **kwargs
)
else:
confidence_intervals = self._estimate_confidence_intervals(confidence_level, method=method, **kwargs)
return confidence_intervals
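# Example (illustrative): requesting bootstrap confidence intervals
# explicitly for a previously obtained estimate value.
#
#   lower, upper = estimator.estimate_confidence_intervals(
#       estimate_value=estimate.value,
#       confidence_level=0.95,
#       method="bootstrap",
#       num_simulations=200,
#   )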
def _estimate_std_error_with_bootstrap(self, num_simulations=None, sample_size_fraction=None):
"""Compute standard error using the bootstrap method. Standard error
and confidence intervals use the same parameter num_simulations for
the number of bootstrap simulations.
:param num_simulations: Number of bootstrapped samples.
:param sample_size_fraction: Fraction of data to be resampled.
:returns: Standard error of the obtained estimate.
"""
# Use existing params, if new user defined params are not present
if num_simulations is None:
num_simulations = self.num_simulations
if sample_size_fraction is None:
sample_size_fraction = self.sample_size_fraction
# Checking if bootstrap_estimates are already computed
if self._bootstrap_estimates is None:
self._bootstrap_estimates = self._generate_bootstrap_estimates(num_simulations, sample_size_fraction)
elif CausalEstimator.is_bootstrap_parameter_changed(self._bootstrap_estimates.params, locals()):
# Check if any parameter is changed from the previous std error estimate
self._bootstrap_estimates = self._generate_bootstrap_estimates(num_simulations, sample_size_fraction)
std_error = np.std(self._bootstrap_estimates.estimates)
return std_error
def _estimate_std_error(self, method=None, **kwargs):
"""
This method is to be overridden by the child classes, so that they
can run a standard error estimation method suited to the specific
causal estimator.
"""
raise NotImplementedError(
(
"This method for estimating standard errors is "
+ CausalEstimator.DEFAULT_NOTIMPLEMENTEDERROR_MSG
+ " Meanwhile, you can try the bootstrap method (method='bootstrap') to estimate standard errors."
).format(self.__class__)
)
def estimate_std_error(self, method=None, **kwargs):
"""Compute standard error of an obtained causal estimate.
:param method: Method for computing the standard error.
:param kwargs: Other optional parameters to be passed to the estimating method.
:returns: Standard error of the causal estimate.
"""
if method is None:
if self._confidence_intervals:
method = self._confidence_intervals
else:
method = "default"
std_error = None
if method == "default" or method is True: # user has not provided any method
try:
std_error = self._estimate_std_error(method, **kwargs)
except NotImplementedError:
std_error = self._estimate_std_error_with_bootstrap(**kwargs)
else:
if method == "bootstrap":
std_error = self._estimate_std_error_with_bootstrap(**kwargs)
else:
std_error = self._estimate_std_error(method, **kwargs)
return std_error
def _test_significance_with_bootstrap(self, estimate_value, num_null_simulations=None):
"""Test statistical significance of an estimate using the bootstrap method.
:param estimate_value: Obtained estimate's value
:param num_null_simulations: Number of simulations for the null hypothesis
:returns: p-value of the statistical significance test.
"""
# Use existing params, if new user defined params are not present
if num_null_simulations is None:
num_null_simulations = self.num_null_simulations
do_retest = self._bootstrap_null_estimates is None or CausalEstimator.is_bootstrap_parameter_changed(
self._bootstrap_null_estimates.params, locals()
)
if do_retest:
null_estimates = np.zeros(num_null_simulations)
for i in range(num_null_simulations):
new_outcome = np.random.permutation(self._outcome)
new_data = self._data.assign(dummy_outcome=new_outcome)
# self._outcome = self._data["dummy_outcome"]
new_estimator = self.get_new_estimator_object(
self._target_estimand,
test_significance=False,
evaluate_effect_strength=False,
confidence_intervals=False,
)
new_estimator.fit(
data=new_data,
treatment_name=self._target_estimand.treatment_variable,
outcome_name=("dummy_outcome",),
effect_modifier_names=self._effect_modifier_names,
)
new_effect = new_estimator.estimate_effect(
target_units=self._target_units,
)
null_estimates[i] = new_effect.value
self._bootstrap_null_estimates = CausalEstimator.BootstrapEstimates(
null_estimates, {"num_null_simulations": num_null_simulations, "sample_size_fraction": 1}
)
# Processing the null hypothesis estimates
sorted_null_estimates = np.sort(self._bootstrap_null_estimates.estimates)
self.logger.debug("Null estimates: {0}".format(sorted_null_estimates))
median_estimate = sorted_null_estimates[int(num_null_simulations / 2)]
# Doing a two-sided test
if estimate_value > median_estimate:
# Being conservative with the p-value reported
estimate_index = np.searchsorted(sorted_null_estimates, estimate_value, side="left")
p_value = 1 - (estimate_index / num_null_simulations)
elif estimate_value <= median_estimate:
# Being conservative with the p-value reported
estimate_index = np.searchsorted(sorted_null_estimates, estimate_value, side="right")
p_value = estimate_index / num_null_simulations
# A p-value of exactly 0 or 1 is limited by the number of simulations, so report a range instead
if p_value == 0:
p_value = (0, 1 / len(sorted_null_estimates)) # a tuple determining the range.
elif p_value == 1:
p_value = (1 - 1 / len(sorted_null_estimates), 1)
signif_dict = {"p_value": p_value}
return signif_dict
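# Example (self-contained sketch of the permutation p-value computed above):
#
#   import numpy as np
#   null_estimates = np.sort(np.random.normal(0, 1, size=1000))
#   estimate_value = 2.0  # above the null median, so use the right tail
#   idx = np.searchsorted(null_estimates, estimate_value, side="left")
#   p_value = 1 - idx / len(null_estimates)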
def _test_significance(self, estimate_value, method=None, **kwargs):
"""
This method is to be overridden by the child classes, so that they
can run a significance test suited to the specific
causal estimator.
"""
raise NotImplementedError(
(
"This method for testing statistical significance is "
+ CausalEstimator.DEFAULT_NOTIMPLEMENTEDERROR_MSG
+ " Meanwhile, you can try the bootstrap method (method='bootstrap') to test statistical significance."
).format(self.__class__)
)
def test_significance(self, estimate_value, method=None, **kwargs):
"""Test statistical significance of obtained estimate.
By default, uses resampling to create a non-parametric significance test.
A general procedure. Individual child estimators can implement different methods.
If the method name is different from "bootstrap", this function calls the
implementation of the child estimator.
:param self: object instance of class Estimator
:param estimate_value: obtained estimate's value
:param method: Method for checking statistical significance
:returns: p-value from the significance test
"""
if method is None:
if self._significance_test:
method = self._significance_test # this is either True or methodname
else:
method = "default"
signif_dict = None
if method == "default" or method is True: # user has not provided any method
try:
signif_dict = self._test_significance(estimate_value, method, **kwargs)
except NotImplementedError:
signif_dict = self._test_significance_with_bootstrap(estimate_value, **kwargs)
else:
if method == "bootstrap":
signif_dict = self._test_significance_with_bootstrap(estimate_value, **kwargs)
else:
signif_dict = self._test_significance(estimate_value, method, **kwargs)
return signif_dict
def evaluate_effect_strength(self, estimate):
fraction_effect_explained = self._evaluate_effect_strength(estimate, method="fraction-effect")
# Need to test r-squared before supporting
# effect_r_squared = self._evaluate_effect_strength(estimate, method="r-squared")
strength_dict = {
"fraction-effect": fraction_effect_explained
# 'r-squared': effect_r_squared
}
return strength_dict
def _evaluate_effect_strength(self, estimate, method="fraction-effect"):
supported_methods = ["fraction-effect"]
if method not in supported_methods:
raise NotImplementedError("This method is not supported for evaluating effect strength")
if method == "fraction-effect":
naive_obs_estimate = self.estimate_effect_naive()
self.logger.debug("Estimate: %s, naive observed estimate: %s", estimate.value, naive_obs_estimate.value)
fraction_effect_explained = estimate.value / naive_obs_estimate.value
return fraction_effect_explained
# elif method == "r-squared":
# outcome_mean = np.mean(self._outcome)
# total_variance = np.sum(np.square(self._outcome - outcome_mean))
# Assuming a linear model with one variable: the treatment
# Currently only works for continuous y
# causal_model = outcome_mean + estimate.value*self._treatment
# squared_residual = np.sum(np.square(self._outcome - causal_model))
# r_squared = 1 - (squared_residual/total_variance)
# return r_squared
else:
return None
def update_input(self, treatment_value, control_value, target_units):
self._control_value = control_value
self._treatment_value = treatment_value
self._target_units = target_units
@staticmethod
def is_bootstrap_parameter_changed(bootstrap_estimates_params, given_params):
"""Check whether parameters of the bootstrap have changed.
This is an efficiency method that checks if fresh resampling of the bootstrap samples is required.
Returns True if parameters have changed and resampling should be done again.
:param bootstrap_estimates_params: A dictionary of parameters for the current bootstrap samples
:param given_params: A dictionary of parameters passed by the user
:returns: A binary flag denoting whether the parameters are different.
"""
is_any_parameter_changed = False
for prm, val in bootstrap_estimates_params.items():
given_val = given_params.get(prm, None)
if given_val is not None and given_val != val:
is_any_parameter_changed = True
break
return is_any_parameter_changed
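# Example (runnable sketch): a None value means "use the existing parameter",
# so it never triggers resampling.
#
#   old = {"num_simulations": 100, "sample_size_fraction": 1}
#   CausalEstimator.is_bootstrap_parameter_changed(old, {"num_simulations": 200})   # True
#   CausalEstimator.is_bootstrap_parameter_changed(old, {"num_simulations": None})  # False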
def target_units_tostr(self):
s = ""
if type(self._target_units) is str:
s += self._target_units
elif callable(self._target_units):
s += "Data subset defined by a function"
elif isinstance(self._target_units, pd.DataFrame):
s += "Data subset provided as a data frame"
return s
def signif_results_tostr(self, signif_results):
s = ""
pval = signif_results["p_value"]
if type(pval) is tuple:
s += "[{0}, {1}]".format(pval[0], pval[1])
else:
s += "{0}".format(pval)
return s
def estimate_effect(
data: pd.DataFrame,
treatment: Union[str, List[str]],
outcome: Union[str, List[str]],
identifier_name: str,
estimator: CausalEstimator,
control_value: int = 0,
treatment_value: int = 1,
target_units: str = "ate",
effect_modifiers: Optional[List[str]] = None,
fit_estimator: bool = True,
method_params: Optional[Dict] = None,
):
"""Estimate the identified causal effect.
You can also directly call any of the EconML estimation methods. The convention is "backdoor.econml.path-to-estimator-class". For example, for the double machine learning estimator ("DML" class) that is located inside the "dml" module of EconML, you can use the method name "backdoor.econml.dml.DML". CausalML estimators can also be called. See `this demo notebook <https://py-why.github.io/dowhy/example_notebooks/dowhy-conditional-treatment-effects.html>`_.
:param treatment: Name of the treatment
:param outcome: Name of the outcome
:param data: data frame containing the observed data
:param identifier_name: name of the identification method (e.g., "backdoor") whose estimand should be used.
:param estimator: Instance of a CausalEstimator to use
:param control_value: Value of the treatment in the control group, for effect estimation. If treatment is multi-variate, this can be a list.
:param treatment_value: Value of the treatment in the treated group, for effect estimation. If treatment is multi-variate, this can be a list.
:param target_units: (Experimental) The units for which the treatment effect should be estimated. This can be of three types. (1) a string for common specifications of target units (namely, "ate", "att" and "atc"), (2) a lambda function that can be used as an index for the data (pandas DataFrame), or (3) a new DataFrame that contains values of the effect_modifiers and effect will be estimated only for this new data.
:param effect_modifiers: Names of effect modifier variables can be (optionally) specified here too, since they do not affect identification. If None, the effect_modifiers from the CausalModel are used.
:param fit_estimator: Boolean flag on whether to fit the estimator.
Setting it to False is useful to estimate the effect on new data using a previously fitted estimator.
:param method_params: Optional dictionary of method-specific parameters; a "fit_params" entry, if present, is passed through to the estimator's fit() method.
:returns: An instance of the CausalEstimate class, containing the causal effect estimate
and other method-dependent information
"""
if effect_modifiers is None:
effect_modifiers = []
treatment = parse_state(treatment)
outcome = parse_state(outcome)
causal_estimator_class = estimator.__class__
identified_estimand = estimator._target_estimand
identified_estimand.set_identifier_method(identifier_name)
if identified_estimand.no_directed_path:
logger.warning("No directed path from {0} to {1}.".format(treatment, outcome))
return CausalEstimate(
0, identified_estimand, None, control_value=control_value, treatment_value=treatment_value
)
# Check if estimator's target estimand is identified
elif identified_estimand.estimands[identifier_name] is None:
logger.error("No valid identified estimand available.")
return CausalEstimate(None, None, None, control_value=control_value, treatment_value=treatment_value)
if fit_estimator:
estimator.fit(
data=data,
treatment_name=treatment,
outcome_name=outcome,
effect_modifier_names=effect_modifiers,
**(method_params["fit_params"] if method_params is not None and "fit_params" in method_params else {}),
)
estimate = estimator.estimate_effect(
treatment_value=treatment_value,
control_value=control_value,
target_units=target_units,
confidence_intervals=estimator._confidence_intervals,
)
# Store parameters inside estimate object for refutation methods
# TODO: This add_params needs to move to the estimator class
# inside estimate_effect and estimate_conditional_effect
estimate.add_params(
estimand_type=identified_estimand.estimand_type,
estimator_class=causal_estimator_class,
test_significance=estimator._significance_test,
evaluate_effect_strength=estimator._effect_strength_eval,
confidence_intervals=estimator._confidence_intervals,
target_units=target_units,
effect_modifiers=effect_modifiers,
)
if estimator._significance_test:
estimator.test_significance(estimate.value, method=estimator._significance_test)
if estimator._confidence_intervals:
estimator.estimate_confidence_intervals(
estimate.value, confidence_level=estimator.confidence_level, method=estimator._confidence_intervals
)
if estimator._effect_strength_eval:
effect_strength_dict = estimator.evaluate_effect_strength(estimate)
estimate.add_effect_strength(effect_strength_dict)
return estimate
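# A minimal usage sketch of the functional API above (illustrative only:
# `df`, the column names, and the estimator construction are placeholders,
# and the keyword arguments assume the signature documented in the docstring):
#
#     from dowhy.causal_estimators.propensity_score_matching_estimator import (
#         PropensityScoreMatchingEstimator,
#     )
#
#     estimator = PropensityScoreMatchingEstimator(identified_estimand)
#     estimate = estimate_effect(
#         data=df,
#         treatment="v0",
#         outcome="y",
#         identifier_name="backdoor",
#         estimator=estimator,
#         target_units="ate",
#     )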
class CausalEstimate:
"""Class for the estimate object that every causal estimator returns"""
def __init__(
self,
estimate,
target_estimand,
realized_estimand_expr,
control_value,
treatment_value,
conditional_estimates=None,
**kwargs,
):
self.value = estimate
self.target_estimand = target_estimand
self.realized_estimand_expr = realized_estimand_expr
self.control_value = control_value
self.treatment_value = treatment_value
self.conditional_estimates = conditional_estimates
self.params = kwargs
if self.params is not None:
for key, value in self.params.items():
setattr(self, key, value)
self.effect_strength = None
def add_estimator(self, estimator_instance):
self.estimator = estimator_instance
def add_effect_strength(self, strength_dict):
self.effect_strength = strength_dict
def add_params(self, **kwargs):
self.params.update(kwargs)
def get_confidence_intervals(self, confidence_level=None, method=None, **kwargs):
"""Get confidence intervals of the obtained estimate.
By default, this is done with the help of bootstrapped confidence intervals
but can be overridden if the specific estimator implements other methods of estimating confidence intervals.
If the method provided is not bootstrap, this function calls the implementation of the specific estimator.
:param method: Method for estimating confidence intervals.
:param confidence_level: The confidence level of the confidence intervals of the estimate.
:param kwargs: Other optional args to be passed to the CI method.
:returns: The obtained confidence interval.
"""
confidence_intervals = self.estimator.estimate_confidence_intervals(
estimate_value=self.value, confidence_level=confidence_level, method=method, **kwargs
)
return confidence_intervals
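    # Hedged usage sketch: `estimate` is a CausalEstimate returned by a fitted
    # estimator; "bootstrap" is the generic fallback method and yields a
    # (lower, upper) tuple.
    #
    #     lo, hi = estimate.get_confidence_intervals(confidence_level=0.90, method="bootstrap")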
def get_standard_error(self, method=None, **kwargs):
"""Get standard error of the obtained estimate.
By default, this is done with the help of bootstrapped standard errors
but can be overridden if the specific estimator implements other methods of estimating standard error.
If the method provided is not bootstrap, this function calls the implementation of the specific estimator.
:param method: Method for computing the standard error.
:param kwargs: Other optional parameters to be passed to the estimating method.
:returns: Standard error of the causal estimate.
"""
std_error = self.estimator.estimate_std_error(method=method, **kwargs)
return std_error
def test_stat_significance(self, method=None, **kwargs):
"""Test statistical significance of the estimate obtained.
By default, uses resampling to create a non-parametric significance test.
Individual child estimators can implement different methods.
If the method name is different from "bootstrap", this function calls the
implementation of the child estimator.
:param method: Method for checking statistical significance
:param kwargs: Other optional parameters to be passed to the estimating method.
:returns: p-value from the significance test
"""
signif_results = self.estimator.test_significance(self.value, method=method, **kwargs)
return {"p_value": signif_results["p_value"]}
def estimate_conditional_effects(
self, effect_modifiers=None, num_quantiles=CausalEstimator.NUM_QUANTILES_TO_DISCRETIZE_CONT_COLS
):
"""Estimate treatment effect conditioned on given variables.
If a numeric effect modifier is provided, it is discretized into quantile bins. If you would like a custom discretization, you can do so yourself: create a new column containing the discretized effect modifier and then include that column's name in the effect_modifier_names argument.
:param effect_modifiers: Names of effect modifier variables over which the conditional effects will be estimated. If not provided, defaults to the effect modifiers specified during creation of the CausalEstimator object.
:param num_quantiles: The number of quantiles into which a numeric effect modifier variable is discretized. Does not affect any categorical effect modifiers.
:returns: A (multi-index) dataframe that provides separate effects for each value of the (discretized) effect modifiers.
"""
return self.estimator._estimate_conditional_effects(
self.estimator._estimate_effect_fn, effect_modifiers, num_quantiles
)
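    # Hedged example of the custom discretization mentioned in the docstring
    # above (`df` and the "age" column are hypothetical):
    #
    #     df["age_bin"] = pd.qcut(df["age"], q=4)  # discretize manually
    #     estimate.estimate_conditional_effects(effect_modifiers=["age_bin"])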
def interpret(self, method_name=None, **kwargs):
"""Interpret the causal estimate.
:param method_name: Method used (string) or a list of methods. If None, then the default for the specific estimator is used.
        :param kwargs: Optional parameters that are directly passed to the interpreter method.
:returns: None
"""
if method_name is None:
method_name = self.estimator.interpret_method
method_name_arr = parse_state(method_name)
for method in method_name_arr:
interpreter = interpreters.get_class_object(method)
interpreter(self, **kwargs).interpret()
def __str__(self):
s = "*** Causal Estimate ***\n"
# No estimand was identified (identification failed)
if self.target_estimand is None:
return "Estimation failed! No relevant identified estimand available for this estimation method."
s += "\n## Identified estimand\n{0}".format(self.target_estimand.__str__(only_target_estimand=True))
s += "\n## Realized estimand\n{0}".format(self.realized_estimand_expr)
if hasattr(self, "estimator"):
s += "\nTarget units: {0}\n".format(self.estimator.target_units_tostr())
s += "\n## Estimate\n"
s += "Mean value: {0}\n".format(self.value)
s += ""
if hasattr(self, "cate_estimates"):
s += "Effect estimates: {0}\n".format(self.cate_estimates)
if hasattr(self, "estimator"):
if self.estimator._significance_test:
s += "p-value: {0}\n".format(self.estimator.signif_results_tostr(self.test_stat_significance()))
if self.estimator._confidence_intervals:
s += "{0}% confidence interval: {1}\n".format(
100 * self.estimator.confidence_level, self.get_confidence_intervals()
)
if self.conditional_estimates is not None:
s += "### Conditional Estimates\n"
s += str(self.conditional_estimates)
if self.effect_strength is not None:
s += "\n## Effect Strength\n"
s += "Change in outcome attributable to treatment: {}\n".format(self.effect_strength["fraction-effect"])
# s += "Variance in outcome explained by treatment: {}\n".format(self.effect_strength["r-squared"])
return s
class RealizedEstimand(object):
def __init__(self, identified_estimand, estimator_name):
self.treatment_variable = identified_estimand.treatment_variable
self.outcome_variable = identified_estimand.outcome_variable
self.backdoor_variables = identified_estimand.get_backdoor_variables()
self.instrumental_variables = identified_estimand.instrumental_variables
self.estimand_type = identified_estimand.estimand_type
self.estimand_expression = None
self.assumptions = None
self.estimator_name = estimator_name
def update_assumptions(self, estimator_assumptions):
self.assumptions = estimator_assumptions
def update_estimand_expression(self, estimand_expression):
self.estimand_expression = estimand_expression
def __str__(self):
s = "Realized estimand: {0}\n".format(self.estimator_name)
s += "Realized estimand type: {0}\n".format(self.estimand_type)
s += "Estimand expression:\n{0}\n".format(sp.pretty(self.estimand_expression))
j = 1
for ass_name, ass_str in self.assumptions.items():
s += "Estimand assumption {0}, {1}: {2}\n".format(j, ass_name, ass_str)
j += 1
return s
 | andresmor-ms | 11c4e0dafd6e824eb81ad14262457d954ae61468 | affe0952f4aba6845247355c171565510c2c1673 | Some methods do support multiple treatments, especially the EconML ones.
That's why we always pass the treatment list, as you can see, whereas for the outcome, we pass the first element of the list. | amit-sharma | 185
py-why/dowhy | 746 | Functional api/causal estimators | * Introduce `fit()` method to estimators.
* Refactor constructors to avoid using `*args` and `**kwargs` and have more explicit parameters.
* Refactor refuters and other parts of the code to use `fit()` and modify arguments to `estimate_effect()` | null | 2022-11-04 16:15:39+00:00 | 2022-12-03 17:07:53+00:00 | dowhy/causal_estimator.py | import logging
from collections import namedtuple
from typing import Dict, List, Optional, Union
import numpy as np
import pandas as pd
import sympy as sp
from sklearn.utils import resample
import dowhy.interpreters as interpreters
from dowhy import causal_estimators
from dowhy.causal_graph import CausalGraph
from dowhy.causal_identifier.identified_estimand import IdentifiedEstimand
from dowhy.utils.api import parse_state
logger = logging.getLogger(__name__)
class CausalEstimator:
"""Base class for an estimator of causal effect.
Subclasses implement different estimation methods. All estimation methods are in the package "dowhy.causal_estimators"
"""
# The default number of simulations for statistical testing
DEFAULT_NUMBER_OF_SIMULATIONS_STAT_TEST = 1000
# The default number of simulations to obtain confidence intervals
DEFAULT_NUMBER_OF_SIMULATIONS_CI = 100
# The portion of the total size that should be taken each time to find the confidence intervals
# 1 is the recommended value
# https://ocw.mit.edu/courses/mathematics/18-05-introduction-to-probability-and-statistics-spring-2014/readings/MIT18_05S14_Reading24.pdf
# https://projecteuclid.org/download/pdf_1/euclid.ss/1032280214
DEFAULT_SAMPLE_SIZE_FRACTION = 1
# The default Confidence Level
DEFAULT_CONFIDENCE_LEVEL = 0.95
# Number of quantiles to discretize continuous columns, for applying groupby
NUM_QUANTILES_TO_DISCRETIZE_CONT_COLS = 5
# Prefix to add to temporary categorical variables created after discretization
TEMP_CAT_COLUMN_PREFIX = "__categorical__"
    DEFAULT_NOTIMPLEMENTEDERROR_MSG = "not yet implemented for {0}. If you would like this to be implemented in the next version, please raise an issue at https://github.com/microsoft/dowhy/issues"
BootstrapEstimates = namedtuple("BootstrapEstimates", ["estimates", "params"])
DEFAULT_INTERPRET_METHOD = ["textual_effect_interpreter"]
# std args to be removed from locals() before being passed to args_dict
_STD_INIT_ARGS = ("self", "__class__", "args", "kwargs")
def __init__(
self,
data,
identified_estimand,
treatment,
outcome,
control_value=0,
treatment_value=1,
test_significance=False,
evaluate_effect_strength=False,
confidence_intervals=False,
target_units=None,
effect_modifiers=None,
num_null_simulations=DEFAULT_NUMBER_OF_SIMULATIONS_STAT_TEST,
num_simulations=DEFAULT_NUMBER_OF_SIMULATIONS_CI,
sample_size_fraction=DEFAULT_SAMPLE_SIZE_FRACTION,
confidence_level=DEFAULT_CONFIDENCE_LEVEL,
need_conditional_estimates="auto",
num_quantiles_to_discretize_cont_cols=NUM_QUANTILES_TO_DISCRETIZE_CONT_COLS,
**kwargs,
):
"""Initializes an estimator with data and names of relevant variables.
This method is called from the constructors of its child classes.
:param data: data frame containing the data
:param identified_estimand: probability expression
representing the target identified estimand to estimate.
:param treatment: name of the treatment variable
:param outcome: name of the outcome variable
:param control_value: Value of the treatment in the control group, for effect estimation. If treatment is multi-variate, this can be a list.
:param treatment_value: Value of the treatment in the treated group, for effect estimation. If treatment is multi-variate, this can be a list.
:param test_significance: Binary flag or a string indicating whether to test significance and by which method. All estimators support test_significance="bootstrap" that estimates a p-value for the obtained estimate using the bootstrap method. Individual estimators can override this to support custom testing methods. The bootstrap method supports an optional parameter, num_null_simulations. If False, no testing is done. If True, significance of the estimate is tested using the custom method if available, otherwise by bootstrap.
:param evaluate_effect_strength: (Experimental) whether to evaluate the strength of effect
:param confidence_intervals: Binary flag or a string indicating whether the confidence intervals should be computed and which method should be used. All methods support estimation of confidence intervals using the bootstrap method by using the parameter confidence_intervals="bootstrap". The bootstrap method takes in two arguments (num_simulations and sample_size_fraction) that can be optionally specified in the params dictionary. Estimators may also override this to implement their own confidence interval method. If this parameter is False, no confidence intervals are computed. If True, confidence intervals are computed by the estimator's specific method if available, otherwise through bootstrap.
:param target_units: The units for which the treatment effect should be estimated. This can be a string for common specifications of target units (namely, "ate", "att" and "atc"). It can also be a lambda function that can be used as an index for the data (pandas DataFrame). Alternatively, it can be a new DataFrame that contains values of the effect_modifiers and effect will be estimated only for this new data.
:param effect_modifiers: Variables on which to compute separate
effects, or return a heterogeneous effect function. Not all
methods support this currently.
:param num_null_simulations: The number of simulations for testing the
statistical significance of the estimator
:param num_simulations: The number of simulations for finding the
            confidence interval (and/or standard error) for an estimate
:param sample_size_fraction: The size of the sample for the bootstrap
estimator
:param confidence_level: The confidence level of the confidence
interval estimate
:param need_conditional_estimates: Boolean flag indicating whether
conditional estimates should be computed. Defaults to True if
there are effect modifiers in the graph
:param num_quantiles_to_discretize_cont_cols: The number of quantiles
into which a numeric effect modifier is split, to enable
estimation of conditional treatment effect over it.
:param kwargs: (optional) Additional estimator-specific parameters
:returns: an instance of the estimator class.
"""
self._data = data
self._target_estimand = identified_estimand
# Currently estimation methods only support univariate treatment and outcome
self._treatment_name = treatment
self._outcome_name = outcome[0] # assuming one-dimensional outcome
self._control_value = control_value
self._treatment_value = treatment_value
self._significance_test = test_significance
self._effect_strength_eval = evaluate_effect_strength
self._target_units = target_units
self._effect_modifier_names = effect_modifiers
self._confidence_intervals = confidence_intervals
self._bootstrap_estimates = None # for confidence intervals and std error
self._bootstrap_null_estimates = None # for significance test
self._effect_modifiers = None
self.method_params = kwargs
# Setting the default interpret method
self.interpret_method = CausalEstimator.DEFAULT_INTERPRET_METHOD
self.logger = logging.getLogger(__name__)
# Setting treatment and outcome values
if self._data is not None:
self._treatment = self._data[self._treatment_name]
self._outcome = self._data[self._outcome_name]
if self._effect_modifier_names:
# only add the observed nodes
self._effect_modifier_names = [
cname for cname in self._effect_modifier_names if cname in self._data.columns
]
if len(self._effect_modifier_names) > 0:
self._effect_modifiers = self._data[self._effect_modifier_names]
self._effect_modifiers = pd.get_dummies(self._effect_modifiers, drop_first=True)
self.logger.debug("Effect modifiers: " + ",".join(self._effect_modifier_names))
else:
self._effect_modifier_names = None
# Check if some parameters were set, otherwise set to default values
self.num_null_simulations = num_null_simulations
self.num_simulations = num_simulations
self.sample_size_fraction = sample_size_fraction
self.confidence_level = confidence_level
self.num_quantiles_to_discretize_cont_cols = num_quantiles_to_discretize_cont_cols
# Estimate conditional estimates by default
self.need_conditional_estimates = (
need_conditional_estimates if need_conditional_estimates != "auto" else bool(self._effect_modifier_names)
)
@staticmethod
def get_estimator_object(new_data, identified_estimand, estimate):
"""Create a new estimator of the same type as the one passed in the estimate argument.
Creates a new object with new_data and the identified_estimand
:param new_data: np.ndarray, pd.Series, pd.DataFrame
The newly assigned data on which the estimator should run
:param identified_estimand: IdentifiedEstimand
An instance of the identified estimand class that provides the information with
            respect to which causal pathways are employed when the treatment affects the outcome
:param estimate: CausalEstimate
It is an already existing estimate whose properties we wish to replicate
:returns: An instance of the same estimator class that had generated the given estimate.
"""
estimator_class = estimate.params["estimator_class"]
new_estimator = estimator_class(
new_data,
identified_estimand,
identified_estimand.treatment_variable,
identified_estimand.outcome_variable,
# names of treatment and outcome
control_value=estimate.control_value,
treatment_value=estimate.treatment_value,
test_significance=False,
evaluate_effect_strength=False,
confidence_intervals=estimate.params["confidence_intervals"],
target_units=estimate.params["target_units"],
effect_modifiers=estimate.params["effect_modifiers"],
**estimate.params["method_params"] if estimate.params["method_params"] is not None else {},
)
return new_estimator
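    # Illustrative use of the factory above, as refuters used it pre-refactor
    # (`new_df`, `estimand`, and `estimate` are assumed to already exist):
    #
    #     new_est = CausalEstimator.get_estimator_object(new_df, estimand, estimate)
    #     new_effect = new_est.estimate_effect()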
def _estimate_effect(self):
"""This method is to be overriden by the child classes, so that they can run the estimation technique of their choice"""
raise NotImplementedError(
("Main estimation method is " + CausalEstimator.DEFAULT_NOTIMPLEMENTEDERROR_MSG).format(self.__class__)
)
def estimate_effect(self):
"""Base estimation method that calls the estimate_effect method of its calling subclass.
Can optionally also test significance and estimate effect strength for any returned estimate.
:param self: object instance of class Estimator
:returns: A CausalEstimate instance that contains point estimates of average and conditional effects. Based on the parameters provided, it optionally includes confidence intervals, standard errors,statistical significance and other statistical parameters.
"""
est = self._estimate_effect()
est.add_estimator(self)
if self._significance_test:
self.test_significance(est.value, method=self._significance_test)
if self._confidence_intervals:
self.estimate_confidence_intervals(
est.value, confidence_level=self.confidence_level, method=self._confidence_intervals
)
if self._effect_strength_eval:
effect_strength_dict = self.evaluate_effect_strength(est)
est.add_effect_strength(effect_strength_dict)
return est
def estimate_effect_naive(self):
# TODO Only works for binary treatment
df_withtreatment = self._data.loc[self._data[self._treatment_name] == 1]
df_notreatment = self._data.loc[self._data[self._treatment_name] == 0]
est = np.mean(df_withtreatment[self._outcome_name]) - np.mean(df_notreatment[self._outcome_name])
return CausalEstimate(est, None, None, control_value=0, treatment_value=1)
def _estimate_effect_fn(self, data_df):
"""Function used in conditional effect estimation. This function is to be overridden by each child estimator.
The overridden function should take in a dataframe as input and return the estimate for that data.
"""
raise NotImplementedError(
("Conditional treatment effects are " + CausalEstimator.DEFAULT_NOTIMPLEMENTEDERROR_MSG).format(
self.__class__
)
)
def _estimate_conditional_effects(self, estimate_effect_fn, effect_modifier_names=None, num_quantiles=None):
"""Estimate conditional treatment effects. Common method for all estimators that utilizes a specific estimate_effect_fn implemented by each child estimator.
If a numeric effect modifier is provided, it is discretized into quantile bins. If you would like a custom discretization, you can do so yourself: create a new column containing the discretized effect modifier and then include that column's name in the effect_modifier_names argument.
:param estimate_effect_fn: Function that has a single parameter (a data frame) and returns the treatment effect estimate on that data.
:param effect_modifier_names: Names of effect modifier variables over which the conditional effects will be estimated. If not provided, defaults to the effect modifiers specified during creation of the CausalEstimator object.
:param num_quantiles: The number of quantiles into which a numeric effect modifier variable is discretized. Does not affect any categorical effect modifiers.
:returns: A (multi-index) dataframe that provides separate effects for each value of the (discretized) effect modifiers.
"""
# Defaulting to class default values if parameters are not provided
if effect_modifier_names is None:
effect_modifier_names = self._effect_modifier_names
if num_quantiles is None:
num_quantiles = self.num_quantiles_to_discretize_cont_cols
# Checking that there is at least one effect modifier
if not effect_modifier_names:
raise ValueError("At least one effect modifier should be specified to compute conditional effects.")
# Making sure that effect_modifier_names is a list
effect_modifier_names = parse_state(effect_modifier_names)
if not all(em in self._effect_modifier_names for em in effect_modifier_names):
            self.logger.warning(
"At least one of the provided effect modifiers was not included while fitting the estimator. You may get incorrect results. To resolve, fit the estimator again by providing the updated effect modifiers in estimate_effect()."
)
# Making a copy since we are going to be changing effect modifier names
effect_modifier_names = effect_modifier_names.copy()
prefix = CausalEstimator.TEMP_CAT_COLUMN_PREFIX
# For every numeric effect modifier, adding a temp categorical column
for i in range(len(effect_modifier_names)):
em = effect_modifier_names[i]
if pd.api.types.is_numeric_dtype(self._data[em].dtypes):
self._data[prefix + str(em)] = pd.qcut(self._data[em], num_quantiles, duplicates="drop")
effect_modifier_names[i] = prefix + str(em)
# Grouping by effect modifiers and computing effect separately
by_effect_mods = self._data.groupby(effect_modifier_names)
cond_est_fn = lambda x: self._do(self._treatment_value, x) - self._do(self._control_value, x)
conditional_estimates = by_effect_mods.apply(estimate_effect_fn)
# Deleting the temporary categorical columns
for em in effect_modifier_names:
if em.startswith(prefix):
self._data.pop(em)
return conditional_estimates
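    # Illustrative sketch of the discretization step above (assumed data): with
    # num_quantiles=2 and a numeric modifier `w` ~ Uniform(0, 1), a temporary
    # column "__categorical__w" with bins roughly (-0.001, 0.5] and (0.5, 1.0]
    # is created, the effect is computed per bin via groupby-apply, and the
    # temporary column is dropped before returning.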
def _do(self, x, data_df=None):
raise NotImplementedError(
("Do-operator is " + CausalEstimator.DEFAULT_NOTIMPLEMENTEDERROR_MSG).format(self.__class__)
)
def do(self, x, data_df=None):
"""Method that implements the do-operator.
Given a value x for the treatment, returns the expected value of the outcome when the treatment is intervened to a value x.
:param x: Value of the treatment
:param data_df: Data on which the do-operator is to be applied.
:returns: Value of the outcome when treatment is intervened/set to x.
"""
est = self._do(x, data_df)
return est
def construct_symbolic_estimator(self, estimand):
        raise NotImplementedError(
            ("Symbolic estimator string is " + CausalEstimator.DEFAULT_NOTIMPLEMENTEDERROR_MSG).format(self.__class__)
        )
def _generate_bootstrap_estimates(self, num_bootstrap_simulations, sample_size_fraction):
"""Helper function to generate causal estimates over bootstrapped samples.
:param num_bootstrap_simulations: Number of simulations for the bootstrap method.
:param sample_size_fraction: Fraction of the dataset to be resampled.
:returns: A collections.namedtuple containing a list of bootstrapped estimates and a dictionary containing parameters used for the bootstrap.
"""
# The array that stores the results of all estimations
simulation_results = np.zeros(num_bootstrap_simulations)
        # Find the sample size as a proportion of the population size
sample_size = int(sample_size_fraction * len(self._data))
if sample_size > len(self._data):
self.logger.warning("WARN: The sample size is greater than the data being sampled")
self.logger.info("INFO: The sample size: {}".format(sample_size))
self.logger.info("INFO: The number of simulations: {}".format(num_bootstrap_simulations))
# Perform the set number of simulations
for index in range(num_bootstrap_simulations):
new_data = resample(self._data, n_samples=sample_size)
new_estimator = type(self)(
new_data,
self._target_estimand,
self._target_estimand.treatment_variable,
self._target_estimand.outcome_variable,
# names of treatment and outcome
treatment_value=self._treatment_value,
control_value=self._control_value,
test_significance=False,
evaluate_effect_strength=False,
confidence_intervals=False,
target_units=self._target_units,
effect_modifiers=self._effect_modifier_names,
**self.method_params,
)
new_effect = new_estimator.estimate_effect()
simulation_results[index] = new_effect.value
estimates = CausalEstimator.BootstrapEstimates(
simulation_results,
{"num_simulations": num_bootstrap_simulations, "sample_size_fraction": sample_size_fraction},
)
return estimates
def _estimate_confidence_intervals_with_bootstrap(
self, estimate_value, confidence_level=None, num_simulations=None, sample_size_fraction=None
):
"""
Method to compute confidence interval using bootstrapped sampling.
:param estimate_value: obtained estimate's value
:param confidence_level: The level for which to compute CI (e.g., 95% confidence level translates to confidence_level=0.95)
:param num_simulations: The number of simulations to be performed to get the bootstrap confidence intervals.
:param sample_size_fraction: The fraction of the dataset to be resampled.
:returns: confidence interval at the specified level.
For more details on bootstrap or resampling statistics, refer to the following links:
https://ocw.mit.edu/courses/mathematics/18-05-introduction-to-probability-and-statistics-spring-2014/readings/MIT18_05S14_Reading24.pdf
https://projecteuclid.org/download/pdf_1/euclid.ss/1032280214
"""
# Using class default parameters if not specified
if num_simulations is None:
num_simulations = self.num_simulations
if sample_size_fraction is None:
sample_size_fraction = self.sample_size_fraction
# Checking if bootstrap_estimates are already computed
if self._bootstrap_estimates is None:
self._bootstrap_estimates = self._generate_bootstrap_estimates(num_simulations, sample_size_fraction)
elif CausalEstimator.is_bootstrap_parameter_changed(self._bootstrap_estimates.params, locals()):
            # Check whether any bootstrap parameter changed since the previous run
self._bootstrap_estimates = self._generate_bootstrap_estimates(num_simulations, sample_size_fraction)
# Now use the data obtained from the simulations to get the value of the confidence estimates
bootstrap_estimates = self._bootstrap_estimates.estimates
# Get the variations of each bootstrap estimate and sort
bootstrap_variations = [bootstrap_estimate - estimate_value for bootstrap_estimate in bootstrap_estimates]
sorted_bootstrap_variations = np.sort(bootstrap_variations)
# Now we take the (1- p)th and the (p)th variations, where p is the chosen confidence level
upper_bound_index = int((1 - confidence_level) * len(sorted_bootstrap_variations))
lower_bound_index = int(confidence_level * len(sorted_bootstrap_variations))
# Get the lower and upper bounds by subtracting the variations from the estimate
lower_bound = estimate_value - sorted_bootstrap_variations[lower_bound_index]
upper_bound = estimate_value - sorted_bootstrap_variations[upper_bound_index]
return lower_bound, upper_bound
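    # Worked sketch of the pivot computation above (assumed numbers): with
    # estimate_value = 10, sorted bootstrap variations [-2, -1, 0, 1, 2], and
    # confidence_level = 0.95, lower_bound_index = 4 and upper_bound_index = 0,
    # giving the interval (10 - 2, 10 - (-2)) = (8, 12).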
def _estimate_confidence_intervals(self, confidence_level=None, method=None, **kwargs):
"""
        This method is to be overridden by the child classes, so that they
can run a confidence interval estimation method suited to the specific
causal estimator.
"""
raise NotImplementedError(
(
"This method for estimating confidence intervals is "
+ CausalEstimator.DEFAULT_NOTIMPLEMENTEDERROR_MSG
+ " Meanwhile, you can try the bootstrap method (method='bootstrap') to estimate confidence intervals."
).format(self.__class__)
)
def estimate_confidence_intervals(self, estimate_value, confidence_level=None, method=None, **kwargs):
"""Find the confidence intervals corresponding to any estimator
By default, this is done with the help of bootstrapped confidence intervals
but can be overridden if the specific estimator implements other methods of estimating confidence intervals.
If the method provided is not bootstrap, this function calls the implementation of the specific estimator.
:param estimate_value: obtained estimate's value
:param method: Method for estimating confidence intervals.
:param confidence_level: The confidence level of the confidence intervals of the estimate.
:param kwargs: Other optional args to be passed to the CI method.
:returns: The obtained confidence interval.
"""
if method is None:
if self._confidence_intervals:
method = self._confidence_intervals # this is either True or methodname
else:
method = "default"
confidence_intervals = None
if confidence_level is None:
confidence_level = self.confidence_level
if method == "default" or method is True: # user has not provided any method
try:
confidence_intervals = self._estimate_confidence_intervals(confidence_level, method=method, **kwargs)
except NotImplementedError:
confidence_intervals = self._estimate_confidence_intervals_with_bootstrap(
estimate_value, confidence_level, **kwargs
)
else:
if method == "bootstrap":
confidence_intervals = self._estimate_confidence_intervals_with_bootstrap(
estimate_value, confidence_level, **kwargs
)
else:
confidence_intervals = self._estimate_confidence_intervals(confidence_level, method=method, **kwargs)
return confidence_intervals
def _estimate_std_error_with_bootstrap(self, num_simulations=None, sample_size_fraction=None):
"""Compute standard error using the bootstrap method. Standard error
and confidence intervals use the same parameter num_simulations for
the number of bootstrap simulations.
:param num_simulations: Number of bootstrapped samples.
:param sample_size_fraction: Fraction of data to be resampled.
:returns: Standard error of the obtained estimate.
"""
# Use existing params, if new user defined params are not present
if num_simulations is None:
num_simulations = self.num_simulations
if sample_size_fraction is None:
sample_size_fraction = self.sample_size_fraction
# Checking if bootstrap_estimates are already computed
if self._bootstrap_estimates is None:
self._bootstrap_estimates = self._generate_bootstrap_estimates(num_simulations, sample_size_fraction)
elif CausalEstimator.is_bootstrap_parameter_changed(self._bootstrap_estimates.params, locals()):
            # Check if any parameter has changed since the previous std error estimate
self._bootstrap_estimates = self._generate_bootstrap_estimates(num_simulations, sample_size_fraction)
std_error = np.std(self._bootstrap_estimates.estimates)
return std_error
def _estimate_std_error(self, method=None, **kwargs):
"""
        This method is to be overridden by the child classes, so that they
can run a standard error estimation method suited to the specific
causal estimator.
"""
raise NotImplementedError(
(
"This method for estimating standard errors is "
+ CausalEstimator.DEFAULT_NOTIMPLEMENTEDERROR_MSG
+ " Meanwhile, you can try the bootstrap method (method='bootstrap') to estimate standard errors."
).format(self.__class__)
)
def estimate_std_error(self, method=None, **kwargs):
"""Compute standard error of an obtained causal estimate.
:param method: Method for computing the standard error.
:param kwargs: Other optional parameters to be passed to the estimating method.
:returns: Standard error of the causal estimate.
"""
if method is None:
if self._confidence_intervals:
method = self._confidence_intervals
else:
method = "default"
std_error = None
if method == "default" or method is True: # user has not provided any method
try:
std_error = self._estimate_std_error(method, **kwargs)
except NotImplementedError:
std_error = self._estimate_std_error_with_bootstrap(**kwargs)
else:
if method == "bootstrap":
std_error = self._estimate_std_error_with_bootstrap(**kwargs)
else:
std_error = self._estimate_std_error(method, **kwargs)
return std_error
def _test_significance_with_bootstrap(self, estimate_value, num_null_simulations=None):
"""Test statistical significance of an estimate using the bootstrap method.
:param estimate_value: Obtained estimate's value
:param num_null_simulations: Number of simulations for the null hypothesis
:returns: p-value of the statistical significance test.
"""
# Use existing params, if new user defined params are not present
if num_null_simulations is None:
num_null_simulations = self.num_null_simulations
do_retest = self._bootstrap_null_estimates is None or CausalEstimator.is_bootstrap_parameter_changed(
self._bootstrap_null_estimates.params, locals()
)
if do_retest:
null_estimates = np.zeros(num_null_simulations)
for i in range(num_null_simulations):
new_outcome = np.random.permutation(self._outcome)
new_data = self._data.assign(dummy_outcome=new_outcome)
# self._outcome = self._data["dummy_outcome"]
new_estimator = type(self)(
new_data,
self._target_estimand,
self._target_estimand.treatment_variable,
("dummy_outcome",),
test_significance=False,
evaluate_effect_strength=False,
confidence_intervals=False,
target_units=self._target_units,
effect_modifiers=self._effect_modifier_names,
**self.method_params,
)
new_effect = new_estimator.estimate_effect()
null_estimates[i] = new_effect.value
self._bootstrap_null_estimates = CausalEstimator.BootstrapEstimates(
null_estimates, {"num_null_simulations": num_null_simulations, "sample_size_fraction": 1}
)
# Processing the null hypothesis estimates
sorted_null_estimates = np.sort(self._bootstrap_null_estimates.estimates)
self.logger.debug("Null estimates: {0}".format(sorted_null_estimates))
median_estimate = sorted_null_estimates[int(num_null_simulations / 2)]
# Doing a two-sided test
if estimate_value > median_estimate:
# Being conservative with the p-value reported
estimate_index = np.searchsorted(sorted_null_estimates, estimate_value, side="left")
p_value = 1 - (estimate_index / num_null_simulations)
if estimate_value <= median_estimate:
# Being conservative with the p-value reported
estimate_index = np.searchsorted(sorted_null_estimates, estimate_value, side="right")
p_value = estimate_index / num_null_simulations
# If the estimate_index is 0, it depends on the number of simulations
if p_value == 0:
p_value = (0, 1 / len(sorted_null_estimates)) # a tuple determining the range.
elif p_value == 1:
p_value = (1 - 1 / len(sorted_null_estimates), 1)
signif_dict = {"p_value": p_value}
return signif_dict
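    # Worked sketch of the two-sided test above (assumed numbers): with 1000
    # null simulations and an estimate above the median that exceeds 990 of the
    # sorted null estimates, estimate_index = 990 and p_value = 1 - 990/1000 =
    # 0.01; an estimate beyond all null estimates is reported as the range
    # (0, 1/1000) instead of exactly 0.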
def _test_significance(self, estimate_value, method=None, **kwargs):
"""
        This method is to be overridden by the child classes, so that they
can run a significance test suited to the specific
causal estimator.
"""
raise NotImplementedError(
(
"This method for testing statistical significance is "
+ CausalEstimator.DEFAULT_NOTIMPLEMENTEDERROR_MSG
+ " Meanwhile, you can try the bootstrap method (method='bootstrap') to test statistical significance."
).format(self.__class__)
)
def test_significance(self, estimate_value, method=None, **kwargs):
"""Test statistical significance of obtained estimate.
By default, uses resampling to create a non-parametric significance test.
A general procedure. Individual child estimators can implement different methods.
If the method name is different from "bootstrap", this function calls the
implementation of the child estimator.
:param self: object instance of class Estimator
:param estimate_value: obtained estimate's value
:param method: Method for checking statistical significance
:returns: p-value from the significance test
"""
if method is None:
if self._significance_test:
method = self._significance_test # this is either True or methodname
else:
method = "default"
signif_dict = None
if method == "default" or method is True: # user has not provided any method
try:
signif_dict = self._test_significance(estimate_value, method, **kwargs)
except NotImplementedError:
signif_dict = self._test_significance_with_bootstrap(estimate_value, **kwargs)
else:
if method == "bootstrap":
signif_dict = self._test_significance_with_bootstrap(estimate_value, **kwargs)
else:
signif_dict = self._test_significance(estimate_value, method, **kwargs)
return signif_dict
def evaluate_effect_strength(self, estimate):
fraction_effect_explained = self._evaluate_effect_strength(estimate, method="fraction-effect")
# Need to test r-squared before supporting
# effect_r_squared = self._evaluate_effect_strength(estimate, method="r-squared")
strength_dict = {
"fraction-effect": fraction_effect_explained
# 'r-squared': effect_r_squared
}
return strength_dict
def _evaluate_effect_strength(self, estimate, method="fraction-effect"):
supported_methods = ["fraction-effect"]
if method not in supported_methods:
raise NotImplementedError("This method is not supported for evaluating effect strength")
if method == "fraction-effect":
naive_obs_estimate = self.estimate_effect_naive()
self.logger.debug(estimate.value, naive_obs_estimate.value)
fraction_effect_explained = estimate.value / naive_obs_estimate.value
return fraction_effect_explained
# elif method == "r-squared":
# outcome_mean = np.mean(self._outcome)
# total_variance = np.sum(np.square(self._outcome - outcome_mean))
# Assuming a linear model with one variable: the treatment
# Currently only works for continuous y
# causal_model = outcome_mean + estimate.value*self._treatment
# squared_residual = np.sum(np.square(self._outcome - causal_model))
# r_squared = 1 - (squared_residual/total_variance)
# return r_squared
else:
return None
def update_input(self, treatment_value, control_value, target_units):
self._control_value = control_value
self._treatment_value = treatment_value
self._target_units = target_units
@staticmethod
def is_bootstrap_parameter_changed(bootstrap_estimates_params, given_params):
"""Check whether parameters of the bootstrap have changed.
This is an efficiency method that checks if fresh resampling of the bootstrap samples is required.
Returns True if parameters have changed and resampling should be done again.
:param bootstrap_estimates_params: A dictionary of parameters for the current bootstrap samples
:param given_params: A dictionary of parameters passed by the user
:returns: A binary flag denoting whether the parameters are different.
"""
is_any_parameter_changed = False
for prm, val in bootstrap_estimates_params.items():
given_val = given_params.get(prm, None)
if given_val is not None and given_val != val:
is_any_parameter_changed = True
break
return is_any_parameter_changed
def target_units_tostr(self):
s = ""
if type(self._target_units) is str:
s += self._target_units
elif callable(self._target_units):
s += "Data subset defined by a function"
elif isinstance(self._target_units, pd.DataFrame):
s += "Data subset provided as a data frame"
return s
def signif_results_tostr(self, signif_results):
s = ""
pval = signif_results["p_value"]
if type(pval) is tuple:
s += "[{0}, {1}]".format(pval[0], pval[1])
else:
s += "{0}".format(pval)
return s
def estimate_effect(
treatment: Union[str, List[str]],
outcome: Union[str, List[str]],
identified_estimand: IdentifiedEstimand,
identifier_name: str,
method: CausalEstimator,
control_value: int = 0,
treatment_value: int = 1,
test_significance: Optional[bool] = None,
evaluate_effect_strength: bool = False,
confidence_intervals: bool = False,
target_units: str = "ate",
effect_modifiers: List[str] = [],
fit_estimator: bool = True,
method_params: Optional[Dict] = None,
):
"""Estimate the identified causal effect.
    Currently requires an explicit method name to be specified. Method names follow the convention of identification method followed by the specific estimation method: "[backdoor/iv].estimation_method_name". The following methods are supported.
* Propensity Score Matching: "backdoor.propensity_score_matching"
* Propensity Score Stratification: "backdoor.propensity_score_stratification"
* Propensity Score-based Inverse Weighting: "backdoor.propensity_score_weighting"
* Linear Regression: "backdoor.linear_regression"
* Generalized Linear Models (e.g., logistic regression): "backdoor.generalized_linear_model"
* Instrumental Variables: "iv.instrumental_variable"
* Regression Discontinuity: "iv.regression_discontinuity"
In addition, you can directly call any of the EconML estimation methods. The convention is "backdoor.econml.path-to-estimator-class". For example, for the double machine learning estimator ("DML" class) that is located inside "dml" module of EconML, you can use the method name, "backdoor.econml.dml.DML". CausalML estimators can also be called. See `this demo notebook <https://py-why.github.io/dowhy/example_notebooks/dowhy-conditional-treatment-effects.html>`_.
:param treatment: Name of the treatment
:param outcome: Name of the outcome
:param identified_estimand: a probability expression
that represents the effect to be estimated. Output of
CausalModel.identify_effect method
    :param method: an instance of the CausalEstimator to be used for estimation
:param control_value: Value of the treatment in the control group, for effect estimation. If treatment is multi-variate, this can be a list.
:param treatment_value: Value of the treatment in the treated group, for effect estimation. If treatment is multi-variate, this can be a list.
:param test_significance: Binary flag on whether to additionally do a statistical signficance test for the estimate.
:param evaluate_effect_strength: (Experimental) Binary flag on whether to estimate the relative strength of the treatment's effect. This measure can be used to compare different treatments for the same outcome (by running this method with different treatments sequentially).
:param confidence_intervals: (Experimental) Binary flag indicating whether confidence intervals should be computed.
:param target_units: (Experimental) The units for which the treatment effect should be estimated. This can be of three types. (1) a string for common specifications of target units (namely, "ate", "att" and "atc"), (2) a lambda function that can be used as an index for the data (pandas DataFrame), or (3) a new DataFrame that contains values of the effect_modifiers and effect will be estimated only for this new data.
:param effect_modifiers: Names of effect modifier variables can be (optionally) specified here too, since they do not affect identification. If None, the effect_modifiers from the CausalModel are used.
:param fit_estimator: Boolean flag on whether to fit the estimator.
Setting it to False is useful to estimate the effect on new data using a previously fitted estimator.
:param method_params: Dictionary containing any method-specific parameters. These are passed directly to the estimating method. See the docs for each estimation method for allowed method-specific params.
:returns: An instance of the CausalEstimate class, containing the causal effect estimate
and other method-dependent information
"""
treatment = parse_state(treatment)
outcome = parse_state(outcome)
causal_estimator_class = method.__class__
identified_estimand.set_identifier_method(identifier_name)
if identified_estimand.no_directed_path:
logger.warning("No directed path from {0} to {1}.".format(treatment, outcome))
return CausalEstimate(
0, identified_estimand, None, control_value=control_value, treatment_value=treatment_value
)
# Check if estimator's target estimand is identified
elif identified_estimand.estimands[identifier_name] is None:
logger.error("No valid identified estimand available.")
return CausalEstimate(None, None, None, control_value=control_value, treatment_value=treatment_value)
method.update_input(treatment_value, control_value, target_units)
estimate = method.estimate_effect()
# Store parameters inside estimate object for refutation methods
# TODO: This add_params needs to move to the estimator class
# inside estimate_effect and estimate_conditional_effect
estimate.add_params(
estimand_type=identified_estimand.estimand_type,
estimator_class=causal_estimator_class,
test_significance=test_significance,
evaluate_effect_strength=evaluate_effect_strength,
confidence_intervals=confidence_intervals,
target_units=target_units,
effect_modifiers=effect_modifiers,
method_params=method_params,
)
return estimate
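# Illustrative call of this (pre-refactor) functional API, assuming an
# already-constructed estimator `method` and an identified `estimand`:
#
#     estimate = estimate_effect(
#         treatment="v0",
#         outcome="y",
#         identified_estimand=estimand,
#         identifier_name="backdoor",
#         method=method,
#         target_units="ate",
#     )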
class CausalEstimate:
"""Class for the estimate object that every causal estimator returns"""
def __init__(
self,
estimate,
target_estimand,
realized_estimand_expr,
control_value,
treatment_value,
conditional_estimates=None,
**kwargs,
):
self.value = estimate
self.target_estimand = target_estimand
self.realized_estimand_expr = realized_estimand_expr
self.control_value = control_value
self.treatment_value = treatment_value
self.conditional_estimates = conditional_estimates
self.params = kwargs
if self.params is not None:
for key, value in self.params.items():
setattr(self, key, value)
self.effect_strength = None
def add_estimator(self, estimator_instance):
self.estimator = estimator_instance
def add_effect_strength(self, strength_dict):
self.effect_strength = strength_dict
def add_params(self, **kwargs):
self.params.update(kwargs)
def get_confidence_intervals(self, confidence_level=None, method=None, **kwargs):
"""Get confidence intervals of the obtained estimate.
By default, this is done with the help of bootstrapped confidence intervals
but can be overridden if the specific estimator implements other methods of estimating confidence intervals.
If the method provided is not bootstrap, this function calls the implementation of the specific estimator.
:param method: Method for estimating confidence intervals.
:param confidence_level: The confidence level of the confidence intervals of the estimate.
:param kwargs: Other optional args to be passed to the CI method.
:returns: The obtained confidence interval.
"""
confidence_intervals = self.estimator.estimate_confidence_intervals(
estimate_value=self.value, confidence_level=confidence_level, method=method, **kwargs
)
return confidence_intervals
def get_standard_error(self, method=None, **kwargs):
"""Get standard error of the obtained estimate.
By default, this is done with the help of bootstrapped standard errors
but can be overridden if the specific estimator implements other methods of estimating standard error.
If the method provided is not bootstrap, this function calls the implementation of the specific estimator.
:param method: Method for computing the standard error.
:param kwargs: Other optional parameters to be passed to the estimating method.
:returns: Standard error of the causal estimate.
"""
std_error = self.estimator.estimate_std_error(method=method, **kwargs)
return std_error
def test_stat_significance(self, method=None, **kwargs):
"""Test statistical significance of the estimate obtained.
By default, uses resampling to create a non-parametric significance test.
Individual child estimators can implement different methods.
If the method name is different from "bootstrap", this function calls the
implementation of the child estimator.
:param method: Method for checking statistical significance
:param kwargs: Other optional parameters to be passed to the estimating method.
:returns: p-value from the significance test
"""
signif_results = self.estimator.test_significance(self.value, method=method, **kwargs)
return {"p_value": signif_results["p_value"]}
def estimate_conditional_effects(
self, effect_modifiers=None, num_quantiles=CausalEstimator.NUM_QUANTILES_TO_DISCRETIZE_CONT_COLS
):
"""Estimate treatment effect conditioned on given variables.
If a numeric effect modifier is provided, it is discretized into quantile bins. If you would like a custom discretization, you can do so yourself: create a new column containing the discretized effect modifier and then include that column's name in the effect_modifier_names argument.
:param effect_modifiers: Names of effect modifier variables over which the conditional effects will be estimated. If not provided, defaults to the effect modifiers specified during creation of the CausalEstimator object.
:param num_quantiles: The number of quantiles into which a numeric effect modifier variable is discretized. Does not affect any categorical effect modifiers.
:returns: A (multi-index) dataframe that provides separate effects for each value of the (discretized) effect modifiers.
"""
return self.estimator._estimate_conditional_effects(
self.estimator._estimate_effect_fn, effect_modifiers, num_quantiles
)
def interpret(self, method_name=None, **kwargs):
"""Interpret the causal estimate.
:param method_name: Method used (string) or a list of methods. If None, then the default for the specific estimator is used.
        :param kwargs: Optional parameters that are directly passed to the interpreter method.
:returns: None
"""
if method_name is None:
method_name = self.estimator.interpret_method
method_name_arr = parse_state(method_name)
for method in method_name_arr:
interpreter = interpreters.get_class_object(method)
interpreter(self, **kwargs).interpret()
def __str__(self):
s = "*** Causal Estimate ***\n"
# No estimand was identified (identification failed)
if self.target_estimand is None:
return "Estimation failed! No relevant identified estimand available for this estimation method."
s += "\n## Identified estimand\n{0}".format(self.target_estimand.__str__(only_target_estimand=True))
s += "\n## Realized estimand\n{0}".format(self.realized_estimand_expr)
if hasattr(self, "estimator"):
s += "\nTarget units: {0}\n".format(self.estimator.target_units_tostr())
s += "\n## Estimate\n"
s += "Mean value: {0}\n".format(self.value)
s += ""
if hasattr(self, "cate_estimates"):
s += "Effect estimates: {0}\n".format(self.cate_estimates)
if hasattr(self, "estimator"):
if self.estimator._significance_test:
s += "p-value: {0}\n".format(self.estimator.signif_results_tostr(self.test_stat_significance()))
if self.estimator._confidence_intervals:
s += "{0}% confidence interval: {1}\n".format(
100 * self.estimator.confidence_level, self.get_confidence_intervals()
)
if self.conditional_estimates is not None:
s += "### Conditional Estimates\n"
s += str(self.conditional_estimates)
if self.effect_strength is not None:
s += "\n## Effect Strength\n"
s += "Change in outcome attributable to treatment: {}\n".format(self.effect_strength["fraction-effect"])
# s += "Variance in outcome explained by treatment: {}\n".format(self.effect_strength["r-squared"])
return s
class RealizedEstimand(object):
def __init__(self, identified_estimand, estimator_name):
self.treatment_variable = identified_estimand.treatment_variable
self.outcome_variable = identified_estimand.outcome_variable
self.backdoor_variables = identified_estimand.get_backdoor_variables()
self.instrumental_variables = identified_estimand.instrumental_variables
self.estimand_type = identified_estimand.estimand_type
self.estimand_expression = None
self.assumptions = None
self.estimator_name = estimator_name
def update_assumptions(self, estimator_assumptions):
self.assumptions = estimator_assumptions
def update_estimand_expression(self, estimand_expression):
self.estimand_expression = estimand_expression
def __str__(self):
s = "Realized estimand: {0}\n".format(self.estimator_name)
s += "Realized estimand type: {0}\n".format(self.estimand_type)
s += "Estimand expression:\n{0}\n".format(sp.pretty(self.estimand_expression))
j = 1
for ass_name, ass_str in self.assumptions.items():
s += "Estimand assumption {0}, {1}: {2}\n".format(j, ass_name, ass_str)
j += 1
return s
| import copy
import logging
from collections import namedtuple
from typing import Dict, List, Optional, Union
import numpy as np
import pandas as pd
import sympy as sp
from sklearn.utils import resample
import dowhy.interpreters as interpreters
from dowhy.causal_identifier.identified_estimand import IdentifiedEstimand
from dowhy.utils.api import parse_state
logger = logging.getLogger(__name__)
class CausalEstimator:
"""Base class for an estimator of causal effect.
Subclasses implement different estimation methods. All estimation methods are in the package "dowhy.causal_estimators"
"""
# The default number of simulations for statistical testing
DEFAULT_NUMBER_OF_SIMULATIONS_STAT_TEST = 1000
# The default number of simulations to obtain confidence intervals
DEFAULT_NUMBER_OF_SIMULATIONS_CI = 100
# The portion of the total size that should be taken each time to find the confidence intervals
# 1 is the recommended value
# https://ocw.mit.edu/courses/mathematics/18-05-introduction-to-probability-and-statistics-spring-2014/readings/MIT18_05S14_Reading24.pdf
# https://projecteuclid.org/download/pdf_1/euclid.ss/1032280214
DEFAULT_SAMPLE_SIZE_FRACTION = 1
# The default Confidence Level
DEFAULT_CONFIDENCE_LEVEL = 0.95
# Number of quantiles to discretize continuous columns, for applying groupby
NUM_QUANTILES_TO_DISCRETIZE_CONT_COLS = 5
# Prefix to add to temporary categorical variables created after discretization
TEMP_CAT_COLUMN_PREFIX = "__categorical__"
    DEFAULT_NOTIMPLEMENTEDERROR_MSG = "not yet implemented for {0}. If you would like this to be implemented in the next version, please raise an issue at https://github.com/microsoft/dowhy/issues"
BootstrapEstimates = namedtuple("BootstrapEstimates", ["estimates", "params"])
DEFAULT_INTERPRET_METHOD = ["textual_effect_interpreter"]
# std args to be removed from locals() before being passed to args_dict
_STD_INIT_ARGS = ("self", "__class__", "args", "kwargs")
def __init__(
self,
identified_estimand: IdentifiedEstimand,
test_significance: bool = False,
evaluate_effect_strength: bool = False,
confidence_intervals: bool = False,
num_null_simulations: int = DEFAULT_NUMBER_OF_SIMULATIONS_STAT_TEST,
num_simulations: int = DEFAULT_NUMBER_OF_SIMULATIONS_CI,
        sample_size_fraction: float = DEFAULT_SAMPLE_SIZE_FRACTION,
confidence_level: float = DEFAULT_CONFIDENCE_LEVEL,
need_conditional_estimates: Union[bool, str] = "auto",
num_quantiles_to_discretize_cont_cols: int = NUM_QUANTILES_TO_DISCRETIZE_CONT_COLS,
**_,
):
"""Initializes an estimator with data and names of relevant variables.
This method is called from the constructors of its child classes.
:param identified_estimand: probability expression
representing the target identified estimand to estimate.
:param test_significance: Binary flag or a string indicating whether to test significance and by which method. All estimators support test_significance="bootstrap" that estimates a p-value for the obtained estimate using the bootstrap method. Individual estimators can override this to support custom testing methods. The bootstrap method supports an optional parameter, num_null_simulations. If False, no testing is done. If True, significance of the estimate is tested using the custom method if available, otherwise by bootstrap.
:param evaluate_effect_strength: (Experimental) whether to evaluate the strength of effect
        :param confidence_intervals: Binary flag or a string indicating whether the confidence intervals should be computed and which method should be used. All methods support estimation of confidence intervals using the bootstrap method by using the parameter confidence_intervals="bootstrap". The bootstrap method takes in two arguments (num_simulations and sample_size_fraction) that can be optionally specified in the params dictionary. Estimators may also override this to implement their own confidence interval method. If this parameter is False, no confidence intervals are computed. If True, confidence intervals are computed by the estimator's specific method if available, otherwise through bootstrap.
:param num_null_simulations: The number of simulations for testing the
statistical significance of the estimator
:param num_simulations: The number of simulations for finding the
            confidence interval (and/or standard error) for an estimate
:param sample_size_fraction: The size of the sample for the bootstrap
estimator
:param confidence_level: The confidence level of the confidence
interval estimate
:param need_conditional_estimates: Boolean flag indicating whether
conditional estimates should be computed. Defaults to True if
there are effect modifiers in the graph
:param num_quantiles_to_discretize_cont_cols: The number of quantiles
into which a numeric effect modifier is split, to enable
estimation of conditional treatment effect over it.
:param kwargs: (optional) Additional estimator-specific parameters
:returns: an instance of the estimator class.
"""
self._target_estimand = identified_estimand
self._significance_test = test_significance
self._effect_strength_eval = evaluate_effect_strength
self._confidence_intervals = confidence_intervals
# Setting the default interpret method
self.interpret_method = CausalEstimator.DEFAULT_INTERPRET_METHOD
self.logger = logging.getLogger(__name__)
# Check if some parameters were set, otherwise set to default values
self.num_null_simulations = num_null_simulations
self.num_simulations = num_simulations
self.sample_size_fraction = sample_size_fraction
self.confidence_level = confidence_level
self.num_quantiles_to_discretize_cont_cols = num_quantiles_to_discretize_cont_cols
# Estimate conditional estimates by default
self.need_conditional_estimates = need_conditional_estimates
self._bootstrap_estimates = None
self._bootstrap_null_estimates = None
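# A minimal usage sketch (not executed here; the subclass and column names are
# assumptions for illustration). A concrete estimator is constructed with the
# identified estimand, then fitted on data before effects are estimated:
#
# estimator = PropensityScoreMatchingEstimator(
#     identified_estimand,
#     test_significance="bootstrap",
#     confidence_intervals="bootstrap",
# )
# estimator.fit(df, treatment_name=["treatment"], outcome_name=["outcome"])
# estimate = estimator.estimate_effect(control_value=0, treatment_value=1)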
def _set_data(self, data: pd.DataFrame, treatment_name: List[str], outcome_name: List[str]):
"""Sets the data for the estimator
:param data: data frame containing the data
:param treatment_name: name of the treatment variable
:param outcome_name: name of the outcome variable
"""
self._data = data
self._treatment_name = treatment_name
self._outcome_name = outcome_name[0]  # assuming one-dimensional outcome
self._treatment = self._data[self._treatment_name]
self._outcome = self._data[self._outcome_name]
def _set_effect_modifiers(self, effect_modifier_names: Optional[List[str]] = None):
"""Sets the effect modifiers for the estimator
Modifies need_conditional_estimates according to the value of the effect modifiers.
:param effect_modifiers: Variables on which to compute separate
effects, or return a heterogeneous effect function. Not all
methods support this currently.
"""
self._effect_modifiers = effect_modifier_names
if effect_modifier_names is not None:
self._effect_modifier_names = [cname for cname in effect_modifier_names if cname in self._data.columns]
if len(self._effect_modifier_names) > 0:
self._effect_modifiers = self._data[self._effect_modifier_names]
self._effect_modifiers = pd.get_dummies(self._effect_modifiers, drop_first=True)
self.logger.debug("Effect modifiers: " + ",".join(self._effect_modifier_names))
else:
self._effect_modifier_names = []
else:
self._effect_modifier_names = []
self.need_conditional_estimates = (
self.need_conditional_estimates
if self.need_conditional_estimates != "auto"
else len(self._effect_modifier_names) > 0
)
def _set_identified_estimand(self, new_identified_estimand):
"""Method used internally to change the target estimand (required by some refuters)
:param new_identified_estimand: The new target_estimand to use
"""
self._target_estimand = new_identified_estimand
def get_new_estimator_object(
self,
identified_estimand,
test_significance=False,
evaluate_effect_strength=False,
confidence_intervals=None,
):
"""Create a new estimator of the same type as the one passed in the estimate argument.
Creates a new object with the identified_estimand
:param identified_estimand: IdentifiedEstimand
An instance of the identified estimand class that provides the information with
respect to which causal pathways are employed when the treatment effects the outcome
:returns: A new instance of the same estimator class that had generated the given estimate.
"""
new_estimator = copy.deepcopy(self)
new_estimator._target_estimand = identified_estimand
new_estimator._test_significance = test_significance
new_estimator._evaluate_effect_strength = evaluate_effect_strength
new_estimator._confidence_intervals = (
self._confidence_intervals if confidence_intervals is None else confidence_intervals
)
return new_estimator
def estimate_effect_naive(self):
# TODO Only works for binary treatment
df_withtreatment = self._data.loc[self._data[self._treatment_name] == 1]
df_notreatment = self._data.loc[self._data[self._treatment_name] == 0]
est = np.mean(df_withtreatment[self._outcome_name]) - np.mean(df_notreatment[self._outcome_name])
return CausalEstimate(est, None, None, control_value=0, treatment_value=1)
def _estimate_effect_fn(self, data_df):
"""Function used in conditional effect estimation. This function is to be overridden by each child estimator.
The overridden function should take in a dataframe as input and return the estimate for that data.
"""
raise NotImplementedError(
("Conditional treatment effects are " + CausalEstimator.DEFAULT_NOTIMPLEMENTEDERROR_MSG).format(
self.__class__
)
)
def _estimate_conditional_effects(self, estimate_effect_fn, effect_modifier_names=None, num_quantiles=None):
"""Estimate conditional treatment effects. Common method for all estimators that utilizes a specific estimate_effect_fn implemented by each child estimator.
If a numeric effect modifier is provided, it is discretized into quantile bins. If you would like a custom discretization, you can do so yourself: create a new column containing the discretized effect modifier and then include that column's name in the effect_modifier_names argument.
:param estimate_effect_fn: Function that has a single parameter (a data frame) and returns the treatment effect estimate on that data.
:param effect_modifier_names: Names of effect modifier variables over which the conditional effects will be estimated. If not provided, defaults to the effect modifiers specified during creation of the CausalEstimator object.
:param num_quantiles: The number of quantiles into which a numeric effect modifier variable is discretized. Does not affect any categorical effect modifiers.
:returns: A (multi-index) dataframe that provides separate effects for each value of the (discretized) effect modifiers.
"""
# Defaulting to class default values if parameters are not provided
if effect_modifier_names is None:
effect_modifier_names = self._effect_modifier_names
if num_quantiles is None:
num_quantiles = self.num_quantiles_to_discretize_cont_cols
# Checking that there is at least one effect modifier
if not effect_modifier_names:
raise ValueError("At least one effect modifier should be specified to compute conditional effects.")
# Making sure that effect_modifier_names is a list
effect_modifier_names = parse_state(effect_modifier_names)
if not all(em in self._effect_modifier_names for em in effect_modifier_names):
self.logger.warning(
"At least one of the provided effect modifiers was not included while fitting the estimator. You may get incorrect results. To resolve, fit the estimator again by providing the updated effect modifiers in fit()."
)
# Making a copy since we are going to be changing effect modifier names
effect_modifier_names = effect_modifier_names.copy()
prefix = CausalEstimator.TEMP_CAT_COLUMN_PREFIX
# For every numeric effect modifier, adding a temp categorical column
for i in range(len(effect_modifier_names)):
em = effect_modifier_names[i]
if pd.api.types.is_numeric_dtype(self._data[em].dtypes):
self._data[prefix + str(em)] = pd.qcut(self._data[em], num_quantiles, duplicates="drop")
effect_modifier_names[i] = prefix + str(em)
# Grouping by effect modifiers and computing effect separately
by_effect_mods = self._data.groupby(effect_modifier_names)
cond_est_fn = lambda x: self._do(self._treatment_value, x) - self._do(self._control_value, x)
conditional_estimates = by_effect_mods.apply(estimate_effect_fn)
# Deleting the temporary categorical columns
for em in effect_modifier_names:
if em.startswith(prefix):
self._data.pop(em)
return conditional_estimates
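# A sketch of the custom discretization mentioned in the docstring above (the
# "age" column is hypothetical): create the discretized column yourself and
# pass its name instead of the raw numeric effect modifier.
#
# df["age_bucket"] = pd.qcut(df["age"], q=4, duplicates="drop")
# conditional_effects = estimate.estimate_conditional_effects(effect_modifiers=["age_bucket"])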
def _do(self, x, data_df=None):
raise NotImplementedError(
("Do-operator is " + CausalEstimator.DEFAULT_NOTIMPLEMENTEDERROR_MSG).format(self.__class__)
)
def do(self, x, data_df=None):
"""Method that implements the do-operator.
Given a value x for the treatment, returns the expected value of the outcome when the treatment is intervened to a value x.
:param x: Value of the treatment
:param data_df: Data on which the do-operator is to be applied.
:returns: Value of the outcome when treatment is intervened/set to x.
"""
est = self._do(x, data_df)
return est
def construct_symbolic_estimator(self, estimand):
raise NotImplementedError(("Symbolic estimator string is ").format(self.__class__))
def _generate_bootstrap_estimates(self, num_bootstrap_simulations, sample_size_fraction):
"""Helper function to generate causal estimates over bootstrapped samples.
:param num_bootstrap_simulations: Number of simulations for the bootstrap method.
:param sample_size_fraction: Fraction of the dataset to be resampled.
:returns: A collections.namedtuple containing a list of bootstrapped estimates and a dictionary containing parameters used for the bootstrap.
"""
# The array that stores the results of all estimations
simulation_results = np.zeros(num_bootstrap_simulations)
# Find the sample size as a proportion of the population size
sample_size = int(sample_size_fraction * len(self._data))
if sample_size > len(self._data):
self.logger.warning("WARN: The sample size is greater than the data being sampled")
self.logger.info("INFO: The sample size: {}".format(sample_size))
self.logger.info("INFO: The number of simulations: {}".format(num_bootstrap_simulations))
# Perform the set number of simulations
for index in range(num_bootstrap_simulations):
new_data = resample(self._data, n_samples=sample_size)
new_estimator = self.get_new_estimator_object(
self._target_estimand,
# names of treatment and outcome
test_significance=False,
evaluate_effect_strength=False,
confidence_intervals=False,
)
new_estimator.fit(
new_data,
self._target_estimand.treatment_variable,
self._target_estimand.outcome_variable,
effect_modifier_names=self._effect_modifier_names,
)
new_effect = new_estimator.estimate_effect(
treatment_value=self._treatment_value,
control_value=self._control_value,
target_units=self._target_units,
)
simulation_results[index] = new_effect.value
estimates = CausalEstimator.BootstrapEstimates(
simulation_results,
{"num_simulations": num_bootstrap_simulations, "sample_size_fraction": sample_size_fraction},
)
return estimates
def _estimate_confidence_intervals_with_bootstrap(
self, estimate_value, confidence_level=None, num_simulations=None, sample_size_fraction=None
):
"""
Method to compute confidence interval using bootstrapped sampling.
:param estimate_value: obtained estimate's value
:param confidence_level: The level for which to compute CI (e.g., 95% confidence level translates to confidence_level=0.95)
:param num_simulations: The number of simulations to be performed to get the bootstrap confidence intervals.
:param sample_size_fraction: The fraction of the dataset to be resampled.
:returns: confidence interval at the specified level.
For more details on bootstrap or resampling statistics, refer to the following links:
https://ocw.mit.edu/courses/mathematics/18-05-introduction-to-probability-and-statistics-spring-2014/readings/MIT18_05S14_Reading24.pdf
https://projecteuclid.org/download/pdf_1/euclid.ss/1032280214
"""
# Using class default parameters if not specified
if num_simulations is None:
num_simulations = self.num_simulations
if sample_size_fraction is None:
sample_size_fraction = self.sample_size_fraction
# Checking if bootstrap_estimates are already computed
if self._bootstrap_estimates is None:
self._bootstrap_estimates = self._generate_bootstrap_estimates(num_simulations, sample_size_fraction)
elif CausalEstimator.is_bootstrap_parameter_changed(self._bootstrap_estimates.params, locals()):
# Checked if any parameter is changed from the previous std error estimate
self._bootstrap_estimates = self._generate_bootstrap_estimates(num_simulations, sample_size_fraction)
# Now use the data obtained from the simulations to get the value of the confidence estimates
bootstrap_estimates = self._bootstrap_estimates.estimates
# Get the variations of each bootstrap estimate and sort
bootstrap_variations = [bootstrap_estimate - estimate_value for bootstrap_estimate in bootstrap_estimates]
sorted_bootstrap_variations = np.sort(bootstrap_variations)
# Now we take the (1- p)th and the (p)th variations, where p is the chosen confidence level
upper_bound_index = int((1 - confidence_level) * len(sorted_bootstrap_variations))
lower_bound_index = int(confidence_level * len(sorted_bootstrap_variations))
# Get the lower and upper bounds by subtracting the variations from the estimate
lower_bound = estimate_value - sorted_bootstrap_variations[lower_bound_index]
upper_bound = estimate_value - sorted_bootstrap_variations[upper_bound_index]
return lower_bound, upper_bound
def _estimate_confidence_intervals(self, confidence_level=None, method=None, **kwargs):
"""
This method is to be overridden by the child classes, so that they
can run a confidence interval estimation method suited to the specific
causal estimator.
"""
raise NotImplementedError(
(
"This method for estimating confidence intervals is "
+ CausalEstimator.DEFAULT_NOTIMPLEMENTEDERROR_MSG
+ " Meanwhile, you can try the bootstrap method (method='bootstrap') to estimate confidence intervals."
).format(self.__class__)
)
def estimate_confidence_intervals(self, estimate_value, confidence_level=None, method=None, **kwargs):
"""Find the confidence intervals corresponding to any estimator
By default, this is done with the help of bootstrapped confidence intervals
but can be overridden if the specific estimator implements other methods of estimating confidence intervals.
If the method provided is not bootstrap, this function calls the implementation of the specific estimator.
:param estimate_value: obtained estimate's value
:param method: Method for estimating confidence intervals.
:param confidence_level: The confidence level of the confidence intervals of the estimate.
:param kwargs: Other optional args to be passed to the CI method.
:returns: The obtained confidence interval.
"""
if method is None:
if self._confidence_intervals:
method = self._confidence_intervals # this is either True or methodname
else:
method = "default"
confidence_intervals = None
if confidence_level is None:
confidence_level = self.confidence_level
if method == "default" or method is True: # user has not provided any method
try:
confidence_intervals = self._estimate_confidence_intervals(confidence_level, method=method, **kwargs)
except NotImplementedError:
confidence_intervals = self._estimate_confidence_intervals_with_bootstrap(
estimate_value, confidence_level, **kwargs
)
else:
if method == "bootstrap":
confidence_intervals = self._estimate_confidence_intervals_with_bootstrap(
estimate_value, confidence_level, **kwargs
)
else:
confidence_intervals = self._estimate_confidence_intervals(confidence_level, method=method, **kwargs)
return confidence_intervals
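# Usage sketch (the estimator/estimate objects are assumed to exist): bootstrap
# confidence intervals can be requested explicitly, overriding the default
# number of simulations via kwargs.
#
# lower, upper = estimator.estimate_confidence_intervals(
#     estimate.value, confidence_level=0.95, method="bootstrap", num_simulations=200
# )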
def _estimate_std_error_with_bootstrap(self, num_simulations=None, sample_size_fraction=None):
"""Compute standard error using the bootstrap method. Standard error
and confidence intervals use the same parameter num_simulations for
the number of bootstrap simulations.
:param num_simulations: Number of bootstrapped samples.
:param sample_size_fraction: Fraction of data to be resampled.
:returns: Standard error of the obtained estimate.
"""
# Use existing params, if new user defined params are not present
if num_simulations is None:
num_simulations = self.num_simulations
if sample_size_fraction is None:
sample_size_fraction = self.sample_size_fraction
# Checking if bootstrap_estimates are already computed
if self._bootstrap_estimates is None:
self._bootstrap_estimates = self._generate_bootstrap_estimates(num_simulations, sample_size_fraction)
elif CausalEstimator.is_bootstrap_parameter_changed(self._bootstrap_estimates.params, locals()):
# Check if any parameter is changed from the previous std error estimate
self._bootstrap_estimates = self._generate_bootstrap_estimates(num_simulations, sample_size_fraction)
std_error = np.std(self._bootstrap_estimates.estimates)
return std_error
def _estimate_std_error(self, method=None, **kwargs):
"""
This method is to be overridden by the child classes, so that they
can run a standard error estimation method suited to the specific
causal estimator.
"""
raise NotImplementedError(
(
"This method for estimating standard errors is "
+ CausalEstimator.DEFAULT_NOTIMPLEMENTEDERROR_MSG
+ " Meanwhile, you can try the bootstrap method (method='bootstrap') to estimate standard errors."
).format(self.__class__)
)
def estimate_std_error(self, method=None, **kwargs):
"""Compute standard error of an obtained causal estimate.
:param method: Method for computing the standard error.
:param kwargs: Other optional parameters to be passed to the estimating method.
:returns: Standard error of the causal estimate.
"""
if method is None:
if self._confidence_intervals:
method = self._confidence_intervals
else:
method = "default"
std_error = None
if method == "default" or method is True: # user has not provided any method
try:
std_error = self._estimate_std_error(method, **kwargs)
except NotImplementedError:
std_error = self._estimate_std_error_with_bootstrap(**kwargs)
else:
if method == "bootstrap":
std_error = self._estimate_std_error_with_bootstrap(**kwargs)
else:
std_error = self._estimate_std_error(method, **kwargs)
return std_error
def _test_significance_with_bootstrap(self, estimate_value, num_null_simulations=None):
"""Test statistical significance of an estimate using the bootstrap method.
:param estimate_value: Obtained estimate's value
:param num_null_simulations: Number of simulations for the null hypothesis
:returns: p-value of the statistical significance test.
"""
# Use existing params, if new user defined params are not present
if num_null_simulations is None:
num_null_simulations = self.num_null_simulations
do_retest = self._bootstrap_null_estimates is None or CausalEstimator.is_bootstrap_parameter_changed(
self._bootstrap_null_estimates.params, locals()
)
if do_retest:
null_estimates = np.zeros(num_null_simulations)
for i in range(num_null_simulations):
new_outcome = np.random.permutation(self._outcome)
new_data = self._data.assign(dummy_outcome=new_outcome)
# self._outcome = self._data["dummy_outcome"]
new_estimator = self.get_new_estimator_object(
self._target_estimand,
test_significance=False,
evaluate_effect_strength=False,
confidence_intervals=False,
)
new_estimator.fit(
data=new_data,
treatment_name=self._target_estimand.treatment_variable,
outcome_name=("dummy_outcome",),
effect_modifier_names=self._effect_modifier_names,
)
new_effect = new_estimator.estimate_effect(
target_units=self._target_units,
)
null_estimates[i] = new_effect.value
self._bootstrap_null_estimates = CausalEstimator.BootstrapEstimates(
null_estimates, {"num_null_simulations": num_null_simulations, "sample_size_fraction": 1}
)
# Processing the null hypothesis estimates
sorted_null_estimates = np.sort(self._bootstrap_null_estimates.estimates)
self.logger.debug("Null estimates: {0}".format(sorted_null_estimates))
median_estimate = sorted_null_estimates[int(num_null_simulations / 2)]
# Doing a two-sided test
if estimate_value > median_estimate:
# Being conservative with the p-value reported
estimate_index = np.searchsorted(sorted_null_estimates, estimate_value, side="left")
p_value = 1 - (estimate_index / num_null_simulations)
if estimate_value <= median_estimate:
# Being conservative with the p-value reported
estimate_index = np.searchsorted(sorted_null_estimates, estimate_value, side="right")
p_value = estimate_index / num_null_simulations
# If the estimate_index is 0, it depends on the number of simulations
if p_value == 0:
p_value = (0, 1 / len(sorted_null_estimates)) # a tuple determining the range.
elif p_value == 1:
p_value = (1 - 1 / len(sorted_null_estimates), 1)
signif_dict = {"p_value": p_value}
return signif_dict
def _test_significance(self, estimate_value, method=None, **kwargs):
"""
This method is to be overridden by the child classes, so that they
can run a significance test suited to the specific
causal estimator.
"""
raise NotImplementedError(
(
"This method for testing statistical significance is "
+ CausalEstimator.DEFAULT_NOTIMPLEMENTEDERROR_MSG
+ " Meanwhile, you can try the bootstrap method (method='bootstrap') to test statistical significance."
).format(self.__class__)
)
def test_significance(self, estimate_value, method=None, **kwargs):
"""Test statistical significance of obtained estimate.
By default, uses resampling to create a non-parametric significance test.
A general procedure. Individual child estimators can implement different methods.
If the method name is different from "bootstrap", this function calls the
implementation of the child estimator.
:param self: object instance of class Estimator
:param estimate_value: obtained estimate's value
:param method: Method for checking statistical significance
:returns: p-value from the significance test
"""
if method is None:
if self._significance_test:
method = self._significance_test # this is either True or methodname
else:
method = "default"
signif_dict = None
if method == "default" or method is True: # user has not provided any method
try:
signif_dict = self._test_significance(estimate_value, method, **kwargs)
except NotImplementedError:
signif_dict = self._test_significance_with_bootstrap(estimate_value, **kwargs)
else:
if method == "bootstrap":
signif_dict = self._test_significance_with_bootstrap(estimate_value, **kwargs)
else:
signif_dict = self._test_significance(estimate_value, method, **kwargs)
return signif_dict
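# Usage sketch (objects assumed to exist): the bootstrap test permutes the
# outcome to simulate the null distribution and returns a dict with a p-value
# (or a (low, high) range when the estimate falls outside all null estimates).
#
# signif = estimator.test_significance(estimate.value, method="bootstrap", num_null_simulations=500)
# print(signif["p_value"])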
def evaluate_effect_strength(self, estimate):
fraction_effect_explained = self._evaluate_effect_strength(estimate, method="fraction-effect")
# Need to test r-squared before supporting
# effect_r_squared = self._evaluate_effect_strength(estimate, method="r-squared")
strength_dict = {
"fraction-effect": fraction_effect_explained
# 'r-squared': effect_r_squared
}
return strength_dict
def _evaluate_effect_strength(self, estimate, method="fraction-effect"):
supported_methods = ["fraction-effect"]
if method not in supported_methods:
raise NotImplementedError("This method is not supported for evaluating effect strength")
if method == "fraction-effect":
naive_obs_estimate = self.estimate_effect_naive()
self.logger.debug(estimate.value, naive_obs_estimate.value)
fraction_effect_explained = estimate.value / naive_obs_estimate.value
return fraction_effect_explained
# elif method == "r-squared":
# outcome_mean = np.mean(self._outcome)
# total_variance = np.sum(np.square(self._outcome - outcome_mean))
# Assuming a linear model with one variable: the treatment
# Currently only works for continuous y
# causal_model = outcome_mean + estimate.value*self._treatment
# squared_residual = np.sum(np.square(self._outcome - causal_model))
# r_squared = 1 - (squared_residual/total_variance)
# return r_squared
else:
return None
def update_input(self, treatment_value, control_value, target_units):
self._control_value = control_value
self._treatment_value = treatment_value
self._target_units = target_units
@staticmethod
def is_bootstrap_parameter_changed(bootstrap_estimates_params, given_params):
"""Check whether parameters of the bootstrap have changed.
This is an efficiency method that checks if fresh resampling of the bootstrap samples is required.
Returns True if parameters have changed and resampling should be done again.
:param bootstrap_estimates_params: A dictionary of parameters for the current bootstrap samples
:param given_params: A dictionary of parameters passed by the user
:returns: A binary flag denoting whether the parameters are different.
"""
is_any_parameter_changed = False
for prm, val in bootstrap_estimates_params.items():
given_val = given_params.get(prm, None)
if given_val is not None and given_val != val:
is_any_parameter_changed = True
break
return is_any_parameter_changed
def target_units_tostr(self):
s = ""
if type(self._target_units) is str:
s += self._target_units
elif callable(self._target_units):
s += "Data subset defined by a function"
elif isinstance(self._target_units, pd.DataFrame):
s += "Data subset provided as a data frame"
return s
def signif_results_tostr(self, signif_results):
s = ""
pval = signif_results["p_value"]
if type(pval) is tuple:
s += "[{0}, {1}]".format(pval[0], pval[1])
else:
s += "{0}".format(pval)
return s
def estimate_effect(
data: pd.DataFrame,
treatment: Union[str, List[str]],
outcome: Union[str, List[str]],
identifier_name: str,
estimator: CausalEstimator,
control_value: int = 0,
treatment_value: int = 1,
target_units: str = "ate",
effect_modifiers: List[str] = None,
fit_estimator: bool = True,
method_params: Optional[Dict] = None,
):
"""Estimate the identified causal effect.
In addition, you can directly call any of the EconML estimation methods. The convention is "backdoor.econml.path-to-estimator-class". For example, for the double machine learning estimator ("DML" class) that is located inside "dml" module of EconML, you can use the method name, "backdoor.econml.dml.DML". CausalML estimators can also be called. See `this demo notebook <https://py-why.github.io/dowhy/example_notebooks/dowhy-conditional-treatment-effects.html>`_.
:param data: data frame containing the data
:param treatment: Name of the treatment
:param outcome: Name of the outcome
:param identifier_name: Name of the identification method (e.g., "backdoor")
under which the estimator's target estimand was identified, as output by the
CausalModel.identify_effect method
:param estimator: Instance of a CausalEstimator to use
:param control_value: Value of the treatment in the control group, for effect estimation. If treatment is multi-variate, this can be a list.
:param treatment_value: Value of the treatment in the treated group, for effect estimation. If treatment is multi-variate, this can be a list.
:param target_units: (Experimental) The units for which the treatment effect should be estimated. This can be of three types. (1) a string for common specifications of target units (namely, "ate", "att" and "atc"), (2) a lambda function that can be used as an index for the data (pandas DataFrame), or (3) a new DataFrame that contains values of the effect_modifiers and effect will be estimated only for this new data.
:param effect_modifiers: Names of effect modifier variables can be (optionally) specified here too, since they do not affect identification. If None, the effect_modifiers from the CausalModel are used.
:param fit_estimator: Boolean flag on whether to fit the estimator.
Setting it to False is useful to estimate the effect on new data using a previously fitted estimator.
:param method_params: (optional) Dictionary of additional method parameters; if it contains a "fit_params" entry, those are passed to the estimator's fit method.
:returns: An instance of the CausalEstimate class, containing the causal effect estimate
and other method-dependent information
"""
if effect_modifiers is None:
effect_modifiers = []
treatment = parse_state(treatment)
outcome = parse_state(outcome)
causal_estimator_class = estimator.__class__
identified_estimand = estimator._target_estimand
identified_estimand.set_identifier_method(identifier_name)
if identified_estimand.no_directed_path:
logger.warning("No directed path from {0} to {1}.".format(treatment, outcome))
return CausalEstimate(
0, identified_estimand, None, control_value=control_value, treatment_value=treatment_value
)
# Check if estimator's target estimand is identified
elif identified_estimand.estimands[identifier_name] is None:
logger.error("No valid identified estimand available.")
return CausalEstimate(None, None, None, control_value=control_value, treatment_value=treatment_value)
if fit_estimator:
estimator.fit(
data=data,
treatment_name=treatment,
outcome_name=outcome,
effect_modifier_names=effect_modifiers,
**method_params["fit_params"] if "fit_params" in method_params else {},
)
estimate = estimator.estimate_effect(
treatment_value=treatment_value,
control_value=control_value,
target_units=target_units,
confidence_intervals=estimator._confidence_intervals,
)
# Store parameters inside estimate object for refutation methods
# TODO: This add_params needs to move to the estimator class
# inside estimate_effect and estimate_conditional_effect
estimate.add_params(
estimand_type=identified_estimand.estimand_type,
estimator_class=causal_estimator_class,
test_significance=estimator._significance_test,
evaluate_effect_strength=estimator._effect_strength_eval,
confidence_intervals=estimator._confidence_intervals,
target_units=target_units,
effect_modifiers=effect_modifiers,
)
if estimator._significance_test:
estimator.test_significance(estimate.value, method=estimator._significance_test)
if estimator._confidence_intervals:
estimator.estimate_confidence_intervals(
estimate.value, confidence_level=estimator.confidence_level, method=estimator._confidence_intervals
)
if estimator._effect_strength_eval:
effect_strength_dict = estimator.evaluate_effect_strength(estimate)
estimate.add_effect_strength(effect_strength_dict)
return estimate
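# End-to-end sketch of the functional API defined above (the estimator class,
# identifier name and column names are assumptions for illustration):
#
# from dowhy.causal_estimators.propensity_score_matching_estimator import (
#     PropensityScoreMatchingEstimator,
# )
#
# estimator = PropensityScoreMatchingEstimator(identified_estimand)
# estimate = estimate_effect(
#     data=df,
#     treatment="treatment",
#     outcome="outcome",
#     identifier_name="backdoor",
#     estimator=estimator,
#     target_units="ate",
# )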
class CausalEstimate:
"""Class for the estimate object that every causal estimator returns"""
def __init__(
self,
estimate,
target_estimand,
realized_estimand_expr,
control_value,
treatment_value,
conditional_estimates=None,
**kwargs,
):
self.value = estimate
self.target_estimand = target_estimand
self.realized_estimand_expr = realized_estimand_expr
self.control_value = control_value
self.treatment_value = treatment_value
self.conditional_estimates = conditional_estimates
self.params = kwargs
if self.params is not None:
for key, value in self.params.items():
setattr(self, key, value)
self.effect_strength = None
def add_estimator(self, estimator_instance):
self.estimator = estimator_instance
def add_effect_strength(self, strength_dict):
self.effect_strength = strength_dict
def add_params(self, **kwargs):
self.params.update(kwargs)
def get_confidence_intervals(self, confidence_level=None, method=None, **kwargs):
"""Get confidence intervals of the obtained estimate.
By default, this is done with the help of bootstrapped confidence intervals
but can be overridden if the specific estimator implements other methods of estimating confidence intervals.
If the method provided is not bootstrap, this function calls the implementation of the specific estimator.
:param method: Method for estimating confidence intervals.
:param confidence_level: The confidence level of the confidence intervals of the estimate.
:param kwargs: Other optional args to be passed to the CI method.
:returns: The obtained confidence interval.
"""
confidence_intervals = self.estimator.estimate_confidence_intervals(
estimate_value=self.value, confidence_level=confidence_level, method=method, **kwargs
)
return confidence_intervals
def get_standard_error(self, method=None, **kwargs):
"""Get standard error of the obtained estimate.
By default, this is done with the help of bootstrapped standard errors
but can be overridden if the specific estimator implements other methods of estimating standard error.
If the method provided is not bootstrap, this function calls the implementation of the specific estimator.
:param method: Method for computing the standard error.
:param kwargs: Other optional parameters to be passed to the estimating method.
:returns: Standard error of the causal estimate.
"""
std_error = self.estimator.estimate_std_error(method=method, **kwargs)
return std_error
def test_stat_significance(self, method=None, **kwargs):
"""Test statistical significance of the estimate obtained.
By default, uses resampling to create a non-parametric significance test.
Individual child estimators can implement different methods.
If the method name is different from "bootstrap", this function calls the
implementation of the child estimator.
:param method: Method for checking statistical significance
:param kwargs: Other optional parameters to be passed to the estimating method.
:returns: p-value from the significance test
"""
signif_results = self.estimator.test_significance(self.value, method=method, **kwargs)
return {"p_value": signif_results["p_value"]}
def estimate_conditional_effects(
self, effect_modifiers=None, num_quantiles=CausalEstimator.NUM_QUANTILES_TO_DISCRETIZE_CONT_COLS
):
"""Estimate treatment effect conditioned on given variables.
If a numeric effect modifier is provided, it is discretized into quantile bins. If you would like a custom discretization, you can do so yourself: create a new column containing the discretized effect modifier and then include that column's name in the effect_modifier_names argument.
:param effect_modifiers: Names of effect modifier variables over which the conditional effects will be estimated. If not provided, defaults to the effect modifiers specified during creation of the CausalEstimator object.
:param num_quantiles: The number of quantiles into which a numeric effect modifier variable is discretized. Does not affect any categorical effect modifiers.
:returns: A (multi-index) dataframe that provides separate effects for each value of the (discretized) effect modifiers.
"""
return self.estimator._estimate_conditional_effects(
self.estimator._estimate_effect_fn, effect_modifiers, num_quantiles
)
def interpret(self, method_name=None, **kwargs):
"""Interpret the causal estimate.
:param method_name: Method used (string) or a list of methods. If None, then the default for the specific estimator is used.
:param kwargs: Optional parameters that are directly passed to the interpreter method.
:returns: None
"""
if method_name is None:
method_name = self.estimator.interpret_method
method_name_arr = parse_state(method_name)
for method in method_name_arr:
interpreter = interpreters.get_class_object(method)
interpreter(self, **kwargs).interpret()
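# Sketch: interpreter classes are resolved by name through dowhy.interpreters,
# so the default textual interpreter can be invoked explicitly as:
#
# estimate.interpret(method_name="textual_effect_interpreter")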
def __str__(self):
s = "*** Causal Estimate ***\n"
# No estimand was identified (identification failed)
if self.target_estimand is None:
return "Estimation failed! No relevant identified estimand available for this estimation method."
s += "\n## Identified estimand\n{0}".format(self.target_estimand.__str__(only_target_estimand=True))
s += "\n## Realized estimand\n{0}".format(self.realized_estimand_expr)
if hasattr(self, "estimator"):
s += "\nTarget units: {0}\n".format(self.estimator.target_units_tostr())
s += "\n## Estimate\n"
s += "Mean value: {0}\n".format(self.value)
s += ""
if hasattr(self, "cate_estimates"):
s += "Effect estimates: {0}\n".format(self.cate_estimates)
if hasattr(self, "estimator"):
if self.estimator._significance_test:
s += "p-value: {0}\n".format(self.estimator.signif_results_tostr(self.test_stat_significance()))
if self.estimator._confidence_intervals:
s += "{0}% confidence interval: {1}\n".format(
100 * self.estimator.confidence_level, self.get_confidence_intervals()
)
if self.conditional_estimates is not None:
s += "### Conditional Estimates\n"
s += str(self.conditional_estimates)
if self.effect_strength is not None:
s += "\n## Effect Strength\n"
s += "Change in outcome attributable to treatment: {}\n".format(self.effect_strength["fraction-effect"])
# s += "Variance in outcome explained by treatment: {}\n".format(self.effect_strength["r-squared"])
return s
class RealizedEstimand(object):
def __init__(self, identified_estimand, estimator_name):
self.treatment_variable = identified_estimand.treatment_variable
self.outcome_variable = identified_estimand.outcome_variable
self.backdoor_variables = identified_estimand.get_backdoor_variables()
self.instrumental_variables = identified_estimand.instrumental_variables
self.estimand_type = identified_estimand.estimand_type
self.estimand_expression = None
self.assumptions = None
self.estimator_name = estimator_name
def update_assumptions(self, estimator_assumptions):
self.assumptions = estimator_assumptions
def update_estimand_expression(self, estimand_expression):
self.estimand_expression = estimand_expression
def __str__(self):
s = "Realized estimand: {0}\n".format(self.estimator_name)
s += "Realized estimand type: {0}\n".format(self.estimand_type)
s += "Estimand expression:\n{0}\n".format(sp.pretty(self.estimand_expression))
j = 1
for ass_name, ass_str in self.assumptions.items():
s += "Estimand assumption {0}, {1}: {2}\n".format(j, ass_name, ass_str)
j += 1
return s
| andresmor-ms | 11c4e0dafd6e824eb81ad14262457d954ae61468 | affe0952f4aba6845247355c171565510c2c1673 | docstring `treatment` should be `treatment_name`. | amit-sharma | 186 |
py-why/dowhy | 746 | Functional api/causal estimators | * Introduce `fit()` method to estimators.
* Refactor constructors to avoid using `*args` and `**kwargs` and have more explicit parameters.
* Refactor refuters and other parts of the code to use `fit()` and modify arguments to `estimate_effect()` | null | 2022-11-04 16:15:39+00:00 | 2022-12-03 17:07:53+00:00 | dowhy/causal_estimator.py | import logging
from collections import namedtuple
from typing import Dict, List, Optional, Union
import numpy as np
import pandas as pd
import sympy as sp
from sklearn.utils import resample
import dowhy.interpreters as interpreters
from dowhy import causal_estimators
from dowhy.causal_graph import CausalGraph
from dowhy.causal_identifier.identified_estimand import IdentifiedEstimand
from dowhy.utils.api import parse_state
logger = logging.getLogger(__name__)
class CausalEstimator:
"""Base class for an estimator of causal effect.
Subclasses implement different estimation methods. All estimation methods are in the package "dowhy.causal_estimators"
"""
# The default number of simulations for statistical testing
DEFAULT_NUMBER_OF_SIMULATIONS_STAT_TEST = 1000
# The default number of simulations to obtain confidence intervals
DEFAULT_NUMBER_OF_SIMULATIONS_CI = 100
# The portion of the total size that should be taken each time to find the confidence intervals
# 1 is the recommended value
# https://ocw.mit.edu/courses/mathematics/18-05-introduction-to-probability-and-statistics-spring-2014/readings/MIT18_05S14_Reading24.pdf
# https://projecteuclid.org/download/pdf_1/euclid.ss/1032280214
DEFAULT_SAMPLE_SIZE_FRACTION = 1
# The default Confidence Level
DEFAULT_CONFIDENCE_LEVEL = 0.95
# Number of quantiles to discretize continuous columns, for applying groupby
NUM_QUANTILES_TO_DISCRETIZE_CONT_COLS = 5
# Prefix to add to temporary categorical variables created after discretization
TEMP_CAT_COLUMN_PREFIX = "__categorical__"
DEFAULT_NOTIMPLEMENTEDERROR_MSG = "not yet implemented for {0}. If you would like this to be implemented in the next version, please raise an issue at https://github.com/microsoft/dowhy/issues"
BootstrapEstimates = namedtuple("BootstrapEstimates", ["estimates", "params"])
DEFAULT_INTERPRET_METHOD = ["textual_effect_interpreter"]
# std args to be removed from locals() before being passed to args_dict
_STD_INIT_ARGS = ("self", "__class__", "args", "kwargs")
def __init__(
self,
data,
identified_estimand,
treatment,
outcome,
control_value=0,
treatment_value=1,
test_significance=False,
evaluate_effect_strength=False,
confidence_intervals=False,
target_units=None,
effect_modifiers=None,
num_null_simulations=DEFAULT_NUMBER_OF_SIMULATIONS_STAT_TEST,
num_simulations=DEFAULT_NUMBER_OF_SIMULATIONS_CI,
sample_size_fraction=DEFAULT_SAMPLE_SIZE_FRACTION,
confidence_level=DEFAULT_CONFIDENCE_LEVEL,
need_conditional_estimates="auto",
num_quantiles_to_discretize_cont_cols=NUM_QUANTILES_TO_DISCRETIZE_CONT_COLS,
**kwargs,
):
"""Initializes an estimator with data and names of relevant variables.
This method is called from the constructors of its child classes.
:param data: data frame containing the data
:param identified_estimand: probability expression
representing the target identified estimand to estimate.
:param treatment: name of the treatment variable
:param outcome: name of the outcome variable
:param control_value: Value of the treatment in the control group, for effect estimation. If treatment is multi-variate, this can be a list.
:param treatment_value: Value of the treatment in the treated group, for effect estimation. If treatment is multi-variate, this can be a list.
:param test_significance: Binary flag or a string indicating whether to test significance and by which method. All estimators support test_significance="bootstrap" that estimates a p-value for the obtained estimate using the bootstrap method. Individual estimators can override this to support custom testing methods. The bootstrap method supports an optional parameter, num_null_simulations. If False, no testing is done. If True, significance of the estimate is tested using the custom method if available, otherwise by bootstrap.
:param evaluate_effect_strength: (Experimental) whether to evaluate the strength of effect
:param confidence_intervals: Binary flag or a string indicating whether the confidence intervals should be computed and which method should be used. All methods support estimation of confidence intervals using the bootstrap method by using the parameter confidence_intervals="bootstrap". The bootstrap method takes in two arguments (num_simulations and sample_size_fraction) that can be optionally specified in the params dictionary. Estimators may also override this to implement their own confidence interval method. If this parameter is False, no confidence intervals are computed. If True, confidence intervals are computed by the estimator's specific method if available, otherwise through bootstrap.
:param target_units: The units for which the treatment effect should be estimated. This can be a string for common specifications of target units (namely, "ate", "att" and "atc"). It can also be a lambda function that can be used as an index for the data (pandas DataFrame). Alternatively, it can be a new DataFrame that contains values of the effect_modifiers and effect will be estimated only for this new data.
:param effect_modifiers: Variables on which to compute separate
effects, or return a heterogeneous effect function. Not all
methods support this currently.
:param num_null_simulations: The number of simulations for testing the
statistical significance of the estimator
:param num_simulations: The number of simulations for finding the
confidence interval (and/or standard error) for an estimate
:param sample_size_fraction: The size of the sample for the bootstrap
estimator
:param confidence_level: The confidence level of the confidence
interval estimate
:param need_conditional_estimates: Boolean flag indicating whether
conditional estimates should be computed. Defaults to True if
there are effect modifiers in the graph
:param num_quantiles_to_discretize_cont_cols: The number of quantiles
into which a numeric effect modifier is split, to enable
estimation of conditional treatment effect over it.
:param kwargs: (optional) Additional estimator-specific parameters
:returns: an instance of the estimator class.
"""
self._data = data
self._target_estimand = identified_estimand
# Currently estimation methods only support univariate treatment and outcome
self._treatment_name = treatment
self._outcome_name = outcome[0] # assuming one-dimensional outcome
self._control_value = control_value
self._treatment_value = treatment_value
self._significance_test = test_significance
self._effect_strength_eval = evaluate_effect_strength
self._target_units = target_units
self._effect_modifier_names = effect_modifiers
self._confidence_intervals = confidence_intervals
self._bootstrap_estimates = None # for confidence intervals and std error
self._bootstrap_null_estimates = None # for significance test
self._effect_modifiers = None
self.method_params = kwargs
# Setting the default interpret method
self.interpret_method = CausalEstimator.DEFAULT_INTERPRET_METHOD
self.logger = logging.getLogger(__name__)
# Setting treatment and outcome values
if self._data is not None:
self._treatment = self._data[self._treatment_name]
self._outcome = self._data[self._outcome_name]
if self._effect_modifier_names:
# only add the observed nodes
self._effect_modifier_names = [
cname for cname in self._effect_modifier_names if cname in self._data.columns
]
if len(self._effect_modifier_names) > 0:
self._effect_modifiers = self._data[self._effect_modifier_names]
self._effect_modifiers = pd.get_dummies(self._effect_modifiers, drop_first=True)
self.logger.debug("Effect modifiers: " + ",".join(self._effect_modifier_names))
else:
self._effect_modifier_names = None
# Check if some parameters were set, otherwise set to default values
self.num_null_simulations = num_null_simulations
self.num_simulations = num_simulations
self.sample_size_fraction = sample_size_fraction
self.confidence_level = confidence_level
self.num_quantiles_to_discretize_cont_cols = num_quantiles_to_discretize_cont_cols
# Estimate conditional estimates by default
self.need_conditional_estimates = (
need_conditional_estimates if need_conditional_estimates != "auto" else bool(self._effect_modifier_names)
)
@staticmethod
def get_estimator_object(new_data, identified_estimand, estimate):
"""Create a new estimator of the same type as the one passed in the estimate argument.
Creates a new object with new_data and the identified_estimand
:param new_data: np.ndarray, pd.Series, pd.DataFrame
The newly assigned data on which the estimator should run
:param identified_estimand: IdentifiedEstimand
An instance of the identified estimand class that provides the information with
respect to which causal pathways are employed when the treatment affects the outcome
:param estimate: CausalEstimate
It is an already existing estimate whose properties we wish to replicate
:returns: An instance of the same estimator class that had generated the given estimate.
"""
estimator_class = estimate.params["estimator_class"]
new_estimator = estimator_class(
new_data,
identified_estimand,
identified_estimand.treatment_variable,
identified_estimand.outcome_variable,
# names of treatment and outcome
control_value=estimate.control_value,
treatment_value=estimate.treatment_value,
test_significance=False,
evaluate_effect_strength=False,
confidence_intervals=estimate.params["confidence_intervals"],
target_units=estimate.params["target_units"],
effect_modifiers=estimate.params["effect_modifiers"],
**estimate.params["method_params"] if estimate.params["method_params"] is not None else {},
)
return new_estimator
def _estimate_effect(self):
"""This method is to be overriden by the child classes, so that they can run the estimation technique of their choice"""
raise NotImplementedError(
("Main estimation method is " + CausalEstimator.DEFAULT_NOTIMPLEMENTEDERROR_MSG).format(self.__class__)
)
def estimate_effect(self):
"""Base estimation method that calls the estimate_effect method of its calling subclass.
Can optionally also test significance and estimate effect strength for any returned estimate.
:param self: object instance of class Estimator
:returns: A CausalEstimate instance that contains point estimates of average and conditional effects. Based on the parameters provided, it optionally includes confidence intervals, standard errors,statistical significance and other statistical parameters.
"""
est = self._estimate_effect()
est.add_estimator(self)
if self._significance_test:
self.test_significance(est.value, method=self._significance_test)
if self._confidence_intervals:
self.estimate_confidence_intervals(
est.value, confidence_level=self.confidence_level, method=self._confidence_intervals
)
if self._effect_strength_eval:
effect_strength_dict = self.evaluate_effect_strength(est)
est.add_effect_strength(effect_strength_dict)
return est
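# Sketch of the pre-refactor flow this method supported (the subclass name is
# hypothetical): the constructor receives the data directly, so there is no
# separate fit step before estimation.
#
# estimator = SomeEstimatorSubclass(
#     df,
#     identified_estimand,
#     ["treatment"],
#     ["outcome"],
#     test_significance="bootstrap",
# )
# estimate = estimator.estimate_effect()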
def estimate_effect_naive(self):
# TODO Only works for binary treatment
df_withtreatment = self._data.loc[self._data[self._treatment_name] == 1]
df_notreatment = self._data.loc[self._data[self._treatment_name] == 0]
est = np.mean(df_withtreatment[self._outcome_name]) - np.mean(df_notreatment[self._outcome_name])
return CausalEstimate(est, None, None, control_value=0, treatment_value=1)
def _estimate_effect_fn(self, data_df):
"""Function used in conditional effect estimation. This function is to be overridden by each child estimator.
The overridden function should take in a dataframe as input and return the estimate for that data.
"""
raise NotImplementedError(
("Conditional treatment effects are " + CausalEstimator.DEFAULT_NOTIMPLEMENTEDERROR_MSG).format(
self.__class__
)
)
def _estimate_conditional_effects(self, estimate_effect_fn, effect_modifier_names=None, num_quantiles=None):
"""Estimate conditional treatment effects. Common method for all estimators that utilizes a specific estimate_effect_fn implemented by each child estimator.
If a numeric effect modifier is provided, it is discretized into quantile bins. If you would like a custom discretization, you can do so yourself: create a new column containing the discretized effect modifier and then include that column's name in the effect_modifier_names argument.
:param estimate_effect_fn: Function that has a single parameter (a data frame) and returns the treatment effect estimate on that data.
:param effect_modifier_names: Names of effect modifier variables over which the conditional effects will be estimated. If not provided, defaults to the effect modifiers specified during creation of the CausalEstimator object.
:param num_quantiles: The number of quantiles into which a numeric effect modifier variable is discretized. Does not affect any categorical effect modifiers.
:returns: A (multi-index) dataframe that provides separate effects for each value of the (discretized) effect modifiers.
"""
# Defaulting to class default values if parameters are not provided
if effect_modifier_names is None:
effect_modifier_names = self._effect_modifier_names
if num_quantiles is None:
num_quantiles = self.num_quantiles_to_discretize_cont_cols
# Checking that there is at least one effect modifier
if not effect_modifier_names:
raise ValueError("At least one effect modifier should be specified to compute conditional effects.")
# Making sure that effect_modifier_names is a list
effect_modifier_names = parse_state(effect_modifier_names)
if not all(em in self._effect_modifier_names for em in effect_modifier_names):
self.logger.warning(
"At least one of the provided effect modifiers was not included while fitting the estimator. You may get incorrect results. To resolve, fit the estimator again by providing the updated effect modifiers in estimate_effect()."
)
# Making a copy since we are going to be changing effect modifier names
effect_modifier_names = effect_modifier_names.copy()
prefix = CausalEstimator.TEMP_CAT_COLUMN_PREFIX
# For every numeric effect modifier, adding a temp categorical column
for i in range(len(effect_modifier_names)):
em = effect_modifier_names[i]
if pd.api.types.is_numeric_dtype(self._data[em].dtypes):
self._data[prefix + str(em)] = pd.qcut(self._data[em], num_quantiles, duplicates="drop")
effect_modifier_names[i] = prefix + str(em)
# Grouping by effect modifiers and computing effect separately
by_effect_mods = self._data.groupby(effect_modifier_names)
cond_est_fn = lambda x: self._do(self._treatment_value, x) - self._do(self._control_value, x)
conditional_estimates = by_effect_mods.apply(estimate_effect_fn)
# Deleting the temporary categorical columns
for em in effect_modifier_names:
if em.startswith(prefix):
self._data.pop(em)
return conditional_estimates
def _do(self, x, data_df=None):
raise NotImplementedError(
("Do-operator is " + CausalEstimator.DEFAULT_NOTIMPLEMENTEDERROR_MSG).format(self.__class__)
)
def do(self, x, data_df=None):
"""Method that implements the do-operator.
Given a value x for the treatment, returns the expected value of the outcome when the treatment is intervened to a value x.
:param x: Value of the treatment
:param data_df: Data on which the do-operator is to be applied.
:returns: Value of the outcome when treatment is intervened/set to x.
"""
est = self._do(x, data_df)
return est
def construct_symbolic_estimator(self, estimand):
raise NotImplementedError(("Symbolic estimator string is ").format(self.__class__))
def _generate_bootstrap_estimates(self, num_bootstrap_simulations, sample_size_fraction):
"""Helper function to generate causal estimates over bootstrapped samples.
:param num_bootstrap_simulations: Number of simulations for the bootstrap method.
:param sample_size_fraction: Fraction of the dataset to be resampled.
:returns: A collections.namedtuple containing a list of bootstrapped estimates and a dictionary containing parameters used for the bootstrap.
"""
# The array that stores the results of all estimations
simulation_results = np.zeros(num_bootstrap_simulations)
# Find the sample size as a proportion of the population size
sample_size = int(sample_size_fraction * len(self._data))
if sample_size > len(self._data):
self.logger.warning("WARN: The sample size is greater than the data being sampled")
self.logger.info("INFO: The sample size: {}".format(sample_size))
self.logger.info("INFO: The number of simulations: {}".format(num_bootstrap_simulations))
# Perform the set number of simulations
for index in range(num_bootstrap_simulations):
new_data = resample(self._data, n_samples=sample_size)
new_estimator = type(self)(
new_data,
self._target_estimand,
self._target_estimand.treatment_variable,
self._target_estimand.outcome_variable,
# names of treatment and outcome
treatment_value=self._treatment_value,
control_value=self._control_value,
test_significance=False,
evaluate_effect_strength=False,
confidence_intervals=False,
target_units=self._target_units,
effect_modifiers=self._effect_modifier_names,
**self.method_params,
)
new_effect = new_estimator.estimate_effect()
simulation_results[index] = new_effect.value
estimates = CausalEstimator.BootstrapEstimates(
simulation_results,
{"num_simulations": num_bootstrap_simulations, "sample_size_fraction": sample_size_fraction},
)
return estimates
def _estimate_confidence_intervals_with_bootstrap(
self, estimate_value, confidence_level=None, num_simulations=None, sample_size_fraction=None
):
"""
Method to compute confidence interval using bootstrapped sampling.
:param estimate_value: obtained estimate's value
:param confidence_level: The level for which to compute CI (e.g., 95% confidence level translates to confidence_level=0.95)
:param num_simulations: The number of simulations to be performed to get the bootstrap confidence intervals.
:param sample_size_fraction: The fraction of the dataset to be resampled.
:returns: confidence interval at the specified level.
For more details on bootstrap or resampling statistics, refer to the following links:
https://ocw.mit.edu/courses/mathematics/18-05-introduction-to-probability-and-statistics-spring-2014/readings/MIT18_05S14_Reading24.pdf
https://projecteuclid.org/download/pdf_1/euclid.ss/1032280214
"""
# Using class default parameters if not specified
if num_simulations is None:
num_simulations = self.num_simulations
if sample_size_fraction is None:
sample_size_fraction = self.sample_size_fraction
# Checking if bootstrap_estimates are already computed
if self._bootstrap_estimates is None:
self._bootstrap_estimates = self._generate_bootstrap_estimates(num_simulations, sample_size_fraction)
elif CausalEstimator.is_bootstrap_parameter_changed(self._bootstrap_estimates.params, locals()):
# Checked if any parameter is changed from the previous std error estimate
self._bootstrap_estimates = self._generate_bootstrap_estimates(num_simulations, sample_size_fraction)
# Now use the data obtained from the simulations to get the value of the confidence estimates
bootstrap_estimates = self._bootstrap_estimates.estimates
# Get the variations of each bootstrap estimate and sort
bootstrap_variations = [bootstrap_estimate - estimate_value for bootstrap_estimate in bootstrap_estimates]
sorted_bootstrap_variations = np.sort(bootstrap_variations)
# Now we take the (1- p)th and the (p)th variations, where p is the chosen confidence level
upper_bound_index = int((1 - confidence_level) * len(sorted_bootstrap_variations))
lower_bound_index = int(confidence_level * len(sorted_bootstrap_variations))
# Get the lower and upper bounds by subtracting the variations from the estimate
lower_bound = estimate_value - sorted_bootstrap_variations[lower_bound_index]
upper_bound = estimate_value - sorted_bootstrap_variations[upper_bound_index]
return lower_bound, upper_bound
def _estimate_confidence_intervals(self, confidence_level=None, method=None, **kwargs):
"""
This method is to be overridden by the child classes, so that they
can run a confidence interval estimation method suited to the specific
causal estimator.
"""
raise NotImplementedError(
(
"This method for estimating confidence intervals is "
+ CausalEstimator.DEFAULT_NOTIMPLEMENTEDERROR_MSG
+ " Meanwhile, you can try the bootstrap method (method='bootstrap') to estimate confidence intervals."
).format(self.__class__)
)
def estimate_confidence_intervals(self, estimate_value, confidence_level=None, method=None, **kwargs):
"""Find the confidence intervals corresponding to any estimator
By default, this is done with the help of bootstrapped confidence intervals
but can be overridden if the specific estimator implements other methods of estimating confidence intervals.
If the method provided is not bootstrap, this function calls the implementation of the specific estimator.
:param estimate_value: obtained estimate's value
:param method: Method for estimating confidence intervals.
:param confidence_level: The confidence level of the confidence intervals of the estimate.
:param kwargs: Other optional args to be passed to the CI method.
:returns: The obtained confidence interval.
"""
if method is None:
if self._confidence_intervals:
method = self._confidence_intervals # this is either True or methodname
else:
method = "default"
confidence_intervals = None
if confidence_level is None:
confidence_level = self.confidence_level
if method == "default" or method is True: # user has not provided any method
try:
confidence_intervals = self._estimate_confidence_intervals(confidence_level, method=method, **kwargs)
except NotImplementedError:
confidence_intervals = self._estimate_confidence_intervals_with_bootstrap(
estimate_value, confidence_level, **kwargs
)
else:
if method == "bootstrap":
confidence_intervals = self._estimate_confidence_intervals_with_bootstrap(
estimate_value, confidence_level, **kwargs
)
else:
confidence_intervals = self._estimate_confidence_intervals(confidence_level, method=method, **kwargs)
return confidence_intervals
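# Usage sketch (hedged: assumes `estimator` is a fitted child estimator and
# `estimate` is the CausalEstimate it produced; the 0.90 level is arbitrary):
#
#   lower, upper = estimator.estimate_confidence_intervals(
#       estimate_value=estimate.value,
#       confidence_level=0.90,
#       method="bootstrap",  # forces the generic bootstrap path above
#   )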
def _estimate_std_error_with_bootstrap(self, num_simulations=None, sample_size_fraction=None):
"""Compute standard error using the bootstrap method. Standard error
and confidence intervals use the same parameter num_simulations for
the number of bootstrap simulations.
:param num_simulations: Number of bootstrapped samples.
:param sample_size_fraction: Fraction of data to be resampled.
:returns: Standard error of the obtained estimate.
"""
# Use existing params, if new user defined params are not present
if num_simulations is None:
num_simulations = self.num_simulations
if sample_size_fraction is None:
sample_size_fraction = self.sample_size_fraction
# Checking if bootstrap_estimates are already computed
if self._bootstrap_estimates is None:
self._bootstrap_estimates = self._generate_bootstrap_estimates(num_simulations, sample_size_fraction)
elif CausalEstimator.is_bootstrap_parameter_changed(self._bootstrap_estimates.params, locals()):
# Check if any parameter is changed from the previous std error estimate
self._bootstrap_estimates = self._generate_bootstrap_estimates(num_simulations, sample_size_fraction)
std_error = np.std(self._bootstrap_estimates.estimates)
return std_error
def _estimate_std_error(self, method=None, **kwargs):
"""
This method is to be overridden by the child classes, so that they
can run a standard error estimation method suited to the specific
causal estimator.
"""
raise NotImplementedError(
(
"This method for estimating standard errors is "
+ CausalEstimator.DEFAULT_NOTIMPLEMENTEDERROR_MSG
+ " Meanwhile, you can try the bootstrap method (method='bootstrap') to estimate standard errors."
).format(self.__class__)
)
def estimate_std_error(self, method=None, **kwargs):
"""Compute standard error of an obtained causal estimate.
:param method: Method for computing the standard error.
:param kwargs: Other optional parameters to be passed to the estimating method.
:returns: Standard error of the causal estimate.
"""
if method is None:
if self._confidence_intervals:
method = self._confidence_intervals
else:
method = "default"
std_error = None
if method == "default" or method is True: # user has not provided any method
try:
std_error = self._estimate_std_error(method, **kwargs)
except NotImplementedError:
std_error = self._estimate_std_error_with_bootstrap(**kwargs)
else:
if method == "bootstrap":
std_error = self._estimate_std_error_with_bootstrap(**kwargs)
else:
std_error = self._estimate_std_error(method, **kwargs)
return std_error
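# Usage sketch (same assumed `estimator` as above; num_simulations and
# sample_size_fraction are optional kwargs forwarded to the bootstrap helper):
#
#   se = estimator.estimate_std_error(
#       method="bootstrap", num_simulations=200, sample_size_fraction=1
#   )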
def _test_significance_with_bootstrap(self, estimate_value, num_null_simulations=None):
"""Test statistical significance of an estimate using the bootstrap method.
:param estimate_value: Obtained estimate's value
:param num_null_simulations: Number of simulations for the null hypothesis
:returns: p-value of the statistical significance test.
"""
# Use existing params, if new user defined params are not present
if num_null_simulations is None:
num_null_simulations = self.num_null_simulations
do_retest = self._bootstrap_null_estimates is None or CausalEstimator.is_bootstrap_parameter_changed(
self._bootstrap_null_estimates.params, locals()
)
if do_retest:
null_estimates = np.zeros(num_null_simulations)
for i in range(num_null_simulations):
new_outcome = np.random.permutation(self._outcome)
new_data = self._data.assign(dummy_outcome=new_outcome)
# self._outcome = self._data["dummy_outcome"]
new_estimator = type(self)(
new_data,
self._target_estimand,
self._target_estimand.treatment_variable,
("dummy_outcome",),
test_significance=False,
evaluate_effect_strength=False,
confidence_intervals=False,
target_units=self._target_units,
effect_modifiers=self._effect_modifier_names,
**self.method_params,
)
new_effect = new_estimator.estimate_effect()
null_estimates[i] = new_effect.value
self._bootstrap_null_estimates = CausalEstimator.BootstrapEstimates(
null_estimates, {"num_null_simulations": num_null_simulations, "sample_size_fraction": 1}
)
# Processing the null hypothesis estimates
sorted_null_estimates = np.sort(self._bootstrap_null_estimates.estimates)
self.logger.debug("Null estimates: {0}".format(sorted_null_estimates))
median_estimate = sorted_null_estimates[int(num_null_simulations / 2)]
# Doing a two-sided test
if estimate_value > median_estimate:
# Being conservative with the p-value reported
estimate_index = np.searchsorted(sorted_null_estimates, estimate_value, side="left")
p_value = 1 - (estimate_index / num_null_simulations)
if estimate_value <= median_estimate:
# Being conservative with the p-value reported
estimate_index = np.searchsorted(sorted_null_estimates, estimate_value, side="right")
p_value = estimate_index / num_null_simulations
# If the estimate_index is 0, it depends on the number of simulations
if p_value == 0:
p_value = (0, 1 / len(sorted_null_estimates)) # a tuple determining the range.
elif p_value == 1:
p_value = (1 - 1 / len(sorted_null_estimates), 1)
signif_dict = {"p_value": p_value}
return signif_dict
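# The p-value bookkeeping above, replayed on a toy null distribution
# (illustrative numbers only, not drawn from a real estimator):
#
#   import numpy as np
#   sorted_null = np.sort([-0.2, -0.1, 0.0, 0.1, 0.2])
#   estimate_value = 0.15
#   median = sorted_null[len(sorted_null) // 2]                      # 0.0
#   idx = np.searchsorted(sorted_null, estimate_value, side="left")  # 4
#   p_value = 1 - idx / len(sorted_null)                             # 0.2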
def _test_significance(self, estimate_value, method=None, **kwargs):
"""
This method is to be overridden by the child classes, so that they
can run a significance test suited to the specific
causal estimator.
"""
raise NotImplementedError(
(
"This method for testing statistical significance is "
+ CausalEstimator.DEFAULT_NOTIMPLEMENTEDERROR_MSG
+ " Meanwhile, you can try the bootstrap method (method='bootstrap') to test statistical significance."
).format(self.__class__)
)
def test_significance(self, estimate_value, method=None, **kwargs):
"""Test statistical significance of obtained estimate.
By default, uses resampling to create a non-parametric significance test.
This is a general procedure; individual child estimators can implement different methods.
If the method name is different from "bootstrap", this function calls the
implementation of the child estimator.
:param self: object instance of class Estimator
:param estimate_value: obtained estimate's value
:param method: Method for checking statistical significance
:returns: p-value from the significance test
"""
if method is None:
if self._significance_test:
method = self._significance_test # this is either True or methodname
else:
method = "default"
signif_dict = None
if method == "default" or method is True: # user has not provided any method
try:
signif_dict = self._test_significance(estimate_value, method, **kwargs)
except NotImplementedError:
signif_dict = self._test_significance_with_bootstrap(estimate_value, **kwargs)
else:
if method == "bootstrap":
signif_dict = self._test_significance_with_bootstrap(estimate_value, **kwargs)
else:
signif_dict = self._test_significance(estimate_value, method, **kwargs)
return signif_dict
def evaluate_effect_strength(self, estimate):
fraction_effect_explained = self._evaluate_effect_strength(estimate, method="fraction-effect")
# Need to test r-squared before supporting
# effect_r_squared = self._evaluate_effect_strength(estimate, method="r-squared")
strength_dict = {
"fraction-effect": fraction_effect_explained
# 'r-squared': effect_r_squared
}
return strength_dict
def _evaluate_effect_strength(self, estimate, method="fraction-effect"):
supported_methods = ["fraction-effect"]
if method not in supported_methods:
raise NotImplementedError("This method is not supported for evaluating effect strength")
if method == "fraction-effect":
naive_obs_estimate = self.estimate_effect_naive()
self.logger.debug(estimate.value, naive_obs_estimate.value)
fraction_effect_explained = estimate.value / naive_obs_estimate.value
return fraction_effect_explained
# elif method == "r-squared":
# outcome_mean = np.mean(self._outcome)
# total_variance = np.sum(np.square(self._outcome - outcome_mean))
# Assuming a linear model with one variable: the treatment
# Currently only works for continuous y
# causal_model = outcome_mean + estimate.value*self._treatment
# squared_residual = np.sum(np.square(self._outcome - causal_model))
# r_squared = 1 - (squared_residual/total_variance)
# return r_squared
else:
return None
def update_input(self, treatment_value, control_value, target_units):
self._control_value = control_value
self._treatment_value = treatment_value
self._target_units = target_units
@staticmethod
def is_bootstrap_parameter_changed(bootstrap_estimates_params, given_params):
"""Check whether parameters of the bootstrap have changed.
This is an efficiency method that checks if fresh resampling of the bootstrap samples is required.
Returns True if parameters have changed and resampling should be done again.
:param bootstrap_estimates_params: A dictionary of parameters for the current bootstrap samples
:param given_params: A dictionary of parameters passed by the user
:returns: A binary flag denoting whether the parameters are different.
"""
is_any_parameter_changed = False
for prm, val in bootstrap_estimates_params.items():
given_val = given_params.get(prm, None)
if given_val is not None and given_val != val:
is_any_parameter_changed = True
break
return is_any_parameter_changed
def target_units_tostr(self):
s = ""
if type(self._target_units) is str:
s += self._target_units
elif callable(self._target_units):
s += "Data subset defined by a function"
elif isinstance(self._target_units, pd.DataFrame):
s += "Data subset provided as a data frame"
return s
def signif_results_tostr(self, signif_results):
s = ""
pval = signif_results["p_value"]
if type(pval) is tuple:
s += "[{0}, {1}]".format(pval[0], pval[1])
else:
s += "{0}".format(pval)
return s
def estimate_effect(
treatment: Union[str, List[str]],
outcome: Union[str, List[str]],
identified_estimand: IdentifiedEstimand,
identifier_name: str,
method: CausalEstimator,
control_value: int = 0,
treatment_value: int = 1,
test_significance: Optional[bool] = None,
evaluate_effect_strength: bool = False,
confidence_intervals: bool = False,
target_units: str = "ate",
effect_modifiers: List[str] = [],
fit_estimator: bool = True,
method_params: Optional[Dict] = None,
):
"""Estimate the identified causal effect.
The estimation method is specified by an estimator instance together with the identifier method name. Estimation method names follow the convention of identification method followed by the specific estimation method: "[backdoor/iv].estimation_method_name". The following methods are supported.
* Propensity Score Matching: "backdoor.propensity_score_matching"
* Propensity Score Stratification: "backdoor.propensity_score_stratification"
* Propensity Score-based Inverse Weighting: "backdoor.propensity_score_weighting"
* Linear Regression: "backdoor.linear_regression"
* Generalized Linear Models (e.g., logistic regression): "backdoor.generalized_linear_model"
* Instrumental Variables: "iv.instrumental_variable"
* Regression Discontinuity: "iv.regression_discontinuity"
In addition, you can directly call any of the EconML estimation methods. The convention is "backdoor.econml.path-to-estimator-class". For example, for the double machine learning estimator ("DML" class) that is located inside "dml" module of EconML, you can use the method name, "backdoor.econml.dml.DML". CausalML estimators can also be called. See `this demo notebook <https://py-why.github.io/dowhy/example_notebooks/dowhy-conditional-treatment-effects.html>`_.
:param treatment: Name of the treatment
:param outcome: Name of the outcome
:param identified_estimand: a probability expression
that represents the effect to be estimated. Output of
CausalModel.identify_effect method
:param identifier_name: name of the identification method used to identify the estimand (e.g., "backdoor")
:param method: an instance of a CausalEstimator to use for estimation
:param control_value: Value of the treatment in the control group, for effect estimation. If treatment is multi-variate, this can be a list.
:param treatment_value: Value of the treatment in the treated group, for effect estimation. If treatment is multi-variate, this can be a list.
:param test_significance: Binary flag on whether to additionally do a statistical significance test for the estimate.
:param evaluate_effect_strength: (Experimental) Binary flag on whether to estimate the relative strength of the treatment's effect. This measure can be used to compare different treatments for the same outcome (by running this method with different treatments sequentially).
:param confidence_intervals: (Experimental) Binary flag indicating whether confidence intervals should be computed.
:param target_units: (Experimental) The units for which the treatment effect should be estimated. This can be of three types. (1) a string for common specifications of target units (namely, "ate", "att" and "atc"), (2) a lambda function that can be used as an index for the data (pandas DataFrame), or (3) a new DataFrame that contains values of the effect_modifiers and effect will be estimated only for this new data.
:param effect_modifiers: Names of effect modifier variables can be (optionally) specified here too, since they do not affect identification. If None, the effect_modifiers from the CausalModel are used.
:param fit_estimator: Boolean flag on whether to fit the estimator.
Setting it to False is useful to estimate the effect on new data using a previously fitted estimator.
:param method_params: Dictionary containing any method-specific parameters. These are passed directly to the estimating method. See the docs for each estimation method for allowed method-specific params.
:returns: An instance of the CausalEstimate class, containing the causal effect estimate
and other method-dependent information
"""
treatment = parse_state(treatment)
outcome = parse_state(outcome)
causal_estimator_class = method.__class__
identified_estimand.set_identifier_method(identifier_name)
if identified_estimand.no_directed_path:
logger.warning("No directed path from {0} to {1}.".format(treatment, outcome))
return CausalEstimate(
0, identified_estimand, None, control_value=control_value, treatment_value=treatment_value
)
# Check if estimator's target estimand is identified
elif identified_estimand.estimands[identifier_name] is None:
logger.error("No valid identified estimand available.")
return CausalEstimate(None, None, None, control_value=control_value, treatment_value=treatment_value)
method.update_input(treatment_value, control_value, target_units)
estimate = method.estimate_effect()
# Store parameters inside estimate object for refutation methods
# TODO: This add_params needs to move to the estimator class
# inside estimate_effect and estimate_conditional_effect
estimate.add_params(
estimand_type=identified_estimand.estimand_type,
estimator_class=causal_estimator_class,
test_significance=test_significance,
evaluate_effect_strength=evaluate_effect_strength,
confidence_intervals=confidence_intervals,
target_units=target_units,
effect_modifiers=effect_modifiers,
method_params=method_params,
)
return estimate
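# Usage sketch for this function (hedged: `identified_estimand` comes from an
# upstream identification step and `est_method` is an already-constructed
# CausalEstimator; the column names "v0" and "y" are hypothetical):
#
#   estimate = estimate_effect(
#       treatment="v0",
#       outcome="y",
#       identified_estimand=identified_estimand,
#       identifier_name="backdoor",
#       method=est_method,
#       target_units="ate",
#   )
#   print(estimate.value)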
class CausalEstimate:
"""Class for the estimate object that every causal estimator returns"""
def __init__(
self,
estimate,
target_estimand,
realized_estimand_expr,
control_value,
treatment_value,
conditional_estimates=None,
**kwargs,
):
self.value = estimate
self.target_estimand = target_estimand
self.realized_estimand_expr = realized_estimand_expr
self.control_value = control_value
self.treatment_value = treatment_value
self.conditional_estimates = conditional_estimates
self.params = kwargs
if self.params is not None:
for key, value in self.params.items():
setattr(self, key, value)
self.effect_strength = None
def add_estimator(self, estimator_instance):
self.estimator = estimator_instance
def add_effect_strength(self, strength_dict):
self.effect_strength = strength_dict
def add_params(self, **kwargs):
self.params.update(kwargs)
def get_confidence_intervals(self, confidence_level=None, method=None, **kwargs):
"""Get confidence intervals of the obtained estimate.
By default, this is done with the help of bootstrapped confidence intervals
but can be overridden if the specific estimator implements other methods of estimating confidence intervals.
If the method provided is not bootstrap, this function calls the implementation of the specific estimator.
:param method: Method for estimating confidence intervals.
:param confidence_level: The confidence level of the confidence intervals of the estimate.
:param kwargs: Other optional args to be passed to the CI method.
:returns: The obtained confidence interval.
"""
confidence_intervals = self.estimator.estimate_confidence_intervals(
estimate_value=self.value, confidence_level=confidence_level, method=method, **kwargs
)
return confidence_intervals
def get_standard_error(self, method=None, **kwargs):
"""Get standard error of the obtained estimate.
By default, this is done with the help of bootstrapped standard errors
but can be overridden if the specific estimator implements other methods of estimating standard error.
If the method provided is not bootstrap, this function calls the implementation of the specific estimator.
:param method: Method for computing the standard error.
:param kwargs: Other optional parameters to be passed to the estimating method.
:returns: Standard error of the causal estimate.
"""
std_error = self.estimator.estimate_std_error(method=method, **kwargs)
return std_error
def test_stat_significance(self, method=None, **kwargs):
"""Test statistical significance of the estimate obtained.
By default, uses resampling to create a non-parametric significance test.
Individual child estimators can implement different methods.
If the method name is different from "bootstrap", this function calls the
implementation of the child estimator.
:param method: Method for checking statistical significance
:param kwargs: Other optional parameters to be passed to the estimating method.
:returns: p-value from the significance test
"""
signif_results = self.estimator.test_significance(self.value, method=method, **kwargs)
return {"p_value": signif_results["p_value"]}
def estimate_conditional_effects(
self, effect_modifiers=None, num_quantiles=CausalEstimator.NUM_QUANTILES_TO_DISCRETIZE_CONT_COLS
):
"""Estimate treatment effect conditioned on given variables.
If a numeric effect modifier is provided, it is discretized into quantile bins. If you would like a custom discretization, you can do so yourself: create a new column containing the discretized effect modifier and then include that column's name in the effect_modifier_names argument.
:param effect_modifiers: Names of effect modifier variables over which the conditional effects will be estimated. If not provided, defaults to the effect modifiers specified during creation of the CausalEstimator object.
:param num_quantiles: The number of quantiles into which a numeric effect modifier variable is discretized. Does not affect any categorical effect modifiers.
:returns: A (multi-index) dataframe that provides separate effects for each value of the (discretized) effect modifiers.
"""
return self.estimator._estimate_conditional_effects(
self.estimator._estimate_effect_fn, effect_modifiers, num_quantiles
)
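# Usage sketch (hedged: assumes the estimator behind `estimate` was created
# with effect modifiers, e.g. ["X0"]; num_quantiles=4 is arbitrary):
#
#   cond_effects = estimate.estimate_conditional_effects(
#       effect_modifiers=["X0"], num_quantiles=4
#   )
#   # -> a (multi-index) pandas DataFrame with one effect per modifier bin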
def interpret(self, method_name=None, **kwargs):
"""Interpret the causal estimate.
:param method_name: Method used (string) or a list of methods. If None, then the default for the specific estimator is used.
:param kwargs: Optional parameters that are directly passed to the interpreter method.
:returns: None
"""
if method_name is None:
method_name = self.estimator.interpret_method
method_name_arr = parse_state(method_name)
for method in method_name_arr:
interpreter = interpreters.get_class_object(method)
interpreter(self, **kwargs).interpret()
def __str__(self):
s = "*** Causal Estimate ***\n"
# No estimand was identified (identification failed)
if self.target_estimand is None:
return "Estimation failed! No relevant identified estimand available for this estimation method."
s += "\n## Identified estimand\n{0}".format(self.target_estimand.__str__(only_target_estimand=True))
s += "\n## Realized estimand\n{0}".format(self.realized_estimand_expr)
if hasattr(self, "estimator"):
s += "\nTarget units: {0}\n".format(self.estimator.target_units_tostr())
s += "\n## Estimate\n"
s += "Mean value: {0}\n".format(self.value)
s += ""
if hasattr(self, "cate_estimates"):
s += "Effect estimates: {0}\n".format(self.cate_estimates)
if hasattr(self, "estimator"):
if self.estimator._significance_test:
s += "p-value: {0}\n".format(self.estimator.signif_results_tostr(self.test_stat_significance()))
if self.estimator._confidence_intervals:
s += "{0}% confidence interval: {1}\n".format(
100 * self.estimator.confidence_level, self.get_confidence_intervals()
)
if self.conditional_estimates is not None:
s += "### Conditional Estimates\n"
s += str(self.conditional_estimates)
if self.effect_strength is not None:
s += "\n## Effect Strength\n"
s += "Change in outcome attributable to treatment: {}\n".format(self.effect_strength["fraction-effect"])
# s += "Variance in outcome explained by treatment: {}\n".format(self.effect_strength["r-squared"])
return s
class RealizedEstimand(object):
def __init__(self, identified_estimand, estimator_name):
self.treatment_variable = identified_estimand.treatment_variable
self.outcome_variable = identified_estimand.outcome_variable
self.backdoor_variables = identified_estimand.get_backdoor_variables()
self.instrumental_variables = identified_estimand.instrumental_variables
self.estimand_type = identified_estimand.estimand_type
self.estimand_expression = None
self.assumptions = None
self.estimator_name = estimator_name
def update_assumptions(self, estimator_assumptions):
self.assumptions = estimator_assumptions
def update_estimand_expression(self, estimand_expression):
self.estimand_expression = estimand_expression
def __str__(self):
s = "Realized estimand: {0}\n".format(self.estimator_name)
s += "Realized estimand type: {0}\n".format(self.estimand_type)
s += "Estimand expression:\n{0}\n".format(sp.pretty(self.estimand_expression))
j = 1
for ass_name, ass_str in self.assumptions.items():
s += "Estimand assumption {0}, {1}: {2}\n".format(j, ass_name, ass_str)
j += 1
return s
import copy
import logging
from collections import namedtuple
from typing import Dict, List, Optional, Union
import numpy as np
import pandas as pd
import sympy as sp
from sklearn.utils import resample
import dowhy.interpreters as interpreters
from dowhy.causal_identifier.identified_estimand import IdentifiedEstimand
from dowhy.utils.api import parse_state
logger = logging.getLogger(__name__)
class CausalEstimator:
"""Base class for an estimator of causal effect.
Subclasses implement different estimation methods. All estimation methods are in the package "dowhy.causal_estimators"
"""
# The default number of simulations for statistical testing
DEFAULT_NUMBER_OF_SIMULATIONS_STAT_TEST = 1000
# The default number of simulations to obtain confidence intervals
DEFAULT_NUMBER_OF_SIMULATIONS_CI = 100
# The portion of the total size that should be taken each time to find the confidence intervals
# 1 is the recommended value
# https://ocw.mit.edu/courses/mathematics/18-05-introduction-to-probability-and-statistics-spring-2014/readings/MIT18_05S14_Reading24.pdf
# https://projecteuclid.org/download/pdf_1/euclid.ss/1032280214
DEFAULT_SAMPLE_SIZE_FRACTION = 1
# The default Confidence Level
DEFAULT_CONFIDENCE_LEVEL = 0.95
# Number of quantiles to discretize continuous columns, for applying groupby
NUM_QUANTILES_TO_DISCRETIZE_CONT_COLS = 5
# Prefix to add to temporary categorical variables created after discretization
TEMP_CAT_COLUMN_PREFIX = "__categorical__"
DEFAULT_NOTIMPLEMENTEDERROR_MSG = "not yet implemented for {0}. If you would like this to be implemented in the next version, please raise an issue at https://github.com/microsoft/dowhy/issues"
BootstrapEstimates = namedtuple("BootstrapEstimates", ["estimates", "params"])
DEFAULT_INTERPRET_METHOD = ["textual_effect_interpreter"]
# std args to be removed from locals() before being passed to args_dict
_STD_INIT_ARGS = ("self", "__class__", "args", "kwargs")
def __init__(
self,
identified_estimand: IdentifiedEstimand,
test_significance: bool = False,
evaluate_effect_strength: bool = False,
confidence_intervals: bool = False,
num_null_simulations: int = DEFAULT_NUMBER_OF_SIMULATIONS_STAT_TEST,
num_simulations: int = DEFAULT_NUMBER_OF_SIMULATIONS_CI,
sample_size_fraction: int = DEFAULT_SAMPLE_SIZE_FRACTION,
confidence_level: float = DEFAULT_CONFIDENCE_LEVEL,
need_conditional_estimates: Union[bool, str] = "auto",
num_quantiles_to_discretize_cont_cols: int = NUM_QUANTILES_TO_DISCRETIZE_CONT_COLS,
**_,
):
"""Initializes an estimator with data and names of relevant variables.
This method is called from the constructors of its child classes.
:param identified_estimand: probability expression
representing the target identified estimand to estimate.
:param test_significance: Binary flag or a string indicating whether to test significance and by which method. All estimators support test_significance="bootstrap" that estimates a p-value for the obtained estimate using the bootstrap method. Individual estimators can override this to support custom testing methods. The bootstrap method supports an optional parameter, num_null_simulations. If False, no testing is done. If True, significance of the estimate is tested using the custom method if available, otherwise by bootstrap.
:param evaluate_effect_strength: (Experimental) whether to evaluate the strength of effect
:param confidence_intervals: Binary flag or a string indicating whether the confidence intervals should be computed and which method should be used. All methods support estimation of confidence intervals using the bootstrap method by using the parameter confidence_intervals="bootstrap". The bootstrap method takes in two arguments (num_simulations and sample_size_fraction) that can be optionally specified in the params dictionary. Estimators may also override this to implement their own confidence interval method. If this parameter is False, no confidence intervals are computed. If True, confidence intervals are computed by the estimator's specific method if available, otherwise through bootstrap
:param num_null_simulations: The number of simulations for testing the
statistical significance of the estimator
:param num_simulations: The number of simulations for finding the
confidence interval (and/or standard error) for an estimate
:param sample_size_fraction: The size of the sample for the bootstrap
estimator
:param confidence_level: The confidence level of the confidence
interval estimate
:param need_conditional_estimates: Boolean flag indicating whether
conditional estimates should be computed. Defaults to True if
there are effect modifiers in the graph
:param num_quantiles_to_discretize_cont_cols: The number of quantiles
into which a numeric effect modifier is split, to enable
estimation of conditional treatment effect over it.
:param kwargs: (optional) Additional estimator-specific parameters
:returns: an instance of the estimator class.
"""
self._target_estimand = identified_estimand
self._significance_test = test_significance
self._effect_strength_eval = evaluate_effect_strength
self._confidence_intervals = confidence_intervals
# Setting the default interpret method
self.interpret_method = CausalEstimator.DEFAULT_INTERPRET_METHOD
self.logger = logging.getLogger(__name__)
# Check if some parameters were set, otherwise set to default values
self.num_null_simulations = num_null_simulations
self.num_simulations = num_simulations
self.sample_size_fraction = sample_size_fraction
self.confidence_level = confidence_level
self.num_quantiles_to_discretize_cont_cols = num_quantiles_to_discretize_cont_cols
# Estimate conditional estimates by default
self.need_conditional_estimates = need_conditional_estimates
self._bootstrap_estimates = None
self._bootstrap_null_estimates = None
def _set_data(self, data: pd.DataFrame, treatment_name: List[str], outcome_name: List[str]):
"""Sets the data for the estimator
:param data: data frame containing the data
:param treatment_name: name of the treatment variable
:param outcome_name: name of the outcome variable
"""
self._data = data
self._treatment_name = treatment_name
self._outcome_name = outcome_name[0]
self._treatment = self._data[self._treatment_name]
self._outcome = self._data[self._outcome_name]
def _set_effect_modifiers(self, effect_modifier_names: Optional[List[str]] = None):
"""Sets the effect modifiers for the estimator
Modifies need_conditional_estimates according to the value of the effect modifiers
:param effect_modifier_names: Variables on which to compute separate
effects, or return a heterogeneous effect function. Not all
methods support this currently.
"""
self._effect_modifiers = effect_modifier_names
if effect_modifier_names is not None:
self._effect_modifier_names = [cname for cname in effect_modifier_names if cname in self._data.columns]
if len(self._effect_modifier_names) > 0:
self._effect_modifiers = self._data[self._effect_modifier_names]
self._effect_modifiers = pd.get_dummies(self._effect_modifiers, drop_first=True)
self.logger.debug("Effect modifiers: " + ",".join(self._effect_modifier_names))
else:
self._effect_modifier_names = []
else:
self._effect_modifier_names = []
self.need_conditional_estimates = (
self.need_conditional_estimates
if self.need_conditional_estimates != "auto"
else (self._effect_modifier_names and len(self._effect_modifier_names) > 0)
)
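# A minimal sketch of how a child estimator's fit() is expected to use these
# setters under the refactored API (hedged: each subclass defines its own
# fit() signature; only the calls to the base-class setters are shown):
#
#   def fit(self, data, treatment_name, outcome_name, effect_modifier_names=None):
#       self._set_data(data, treatment_name, outcome_name)
#       self._set_effect_modifiers(effect_modifier_names)
#       # ... estimator-specific model fitting ...
#       return self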
def _set_identified_estimand(self, new_identified_estimand):
"""Method used internally to change the target estimand (required by some refuters)
:param new_identified_estimand: The new target_estimand to use
"""
self._target_estimand = new_identified_estimand
def get_new_estimator_object(
self,
identified_estimand,
test_significance=False,
evaluate_effect_strength=False,
confidence_intervals=None,
):
"""Create a new estimator of the same type as the one passed in the estimate argument.
Creates a new object with the identified_estimand
:param identified_estimand: IdentifiedEstimand
An instance of the identified estimand class that provides the information with
respect to which causal pathways are employed when the treatment affects the outcome
:returns: A new instance of the same estimator class that had generated the given estimate.
"""
new_estimator = copy.deepcopy(self)
new_estimator._target_estimand = identified_estimand
new_estimator._test_significance = test_significance
new_estimator._evaluate_effect_strength = evaluate_effect_strength
new_estimator._confidence_intervals = (
self._confidence_intervals if confidence_intervals is None else confidence_intervals
)
return new_estimator
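# Usage sketch (this is the hook refuters use to re-estimate on modified data;
# `estimator`, `new_estimand`, and `perturbed_df` are assumed to exist):
#
#   new_estimator = estimator.get_new_estimator_object(new_estimand)
#   new_estimator.fit(
#       perturbed_df,
#       new_estimand.treatment_variable,
#       new_estimand.outcome_variable,
#       effect_modifier_names=estimator._effect_modifier_names,
#   )
#   new_effect = new_estimator.estimate_effect()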
def estimate_effect_naive(self):
# TODO Only works for binary treatment
df_withtreatment = self._data.loc[self._data[self._treatment_name] == 1]
df_notreatment = self._data.loc[self._data[self._treatment_name] == 0]
est = np.mean(df_withtreatment[self._outcome_name]) - np.mean(df_notreatment[self._outcome_name])
return CausalEstimate(est, None, None, control_value=0, treatment_value=1)
def _estimate_effect_fn(self, data_df):
"""Function used in conditional effect estimation. This function is to be overridden by each child estimator.
The overridden function should take in a dataframe as input and return the estimate for that data.
"""
raise NotImplementedError(
("Conditional treatment effects are " + CausalEstimator.DEFAULT_NOTIMPLEMENTEDERROR_MSG).format(
self.__class__
)
)
def _estimate_conditional_effects(self, estimate_effect_fn, effect_modifier_names=None, num_quantiles=None):
"""Estimate conditional treatment effects. Common method for all estimators that utilizes a specific estimate_effect_fn implemented by each child estimator.
If a numeric effect modifier is provided, it is discretized into quantile bins. If you would like a custom discretization, you can do so yourself: create a new column containing the discretized effect modifier and then include that column's name in the effect_modifier_names argument.
:param estimate_effect_fn: Function that has a single parameter (a data frame) and returns the treatment effect estimate on that data.
:param effect_modifier_names: Names of effect modifier variables over which the conditional effects will be estimated. If not provided, defaults to the effect modifiers specified during creation of the CausalEstimator object.
:param num_quantiles: The number of quantiles into which a numeric effect modifier variable is discretized. Does not affect any categorical effect modifiers.
:returns: A (multi-index) dataframe that provides separate effects for each value of the (discretized) effect modifiers.
"""
# Defaulting to class default values if parameters are not provided
if effect_modifier_names is None:
effect_modifier_names = self._effect_modifier_names
if num_quantiles is None:
num_quantiles = self.num_quantiles_to_discretize_cont_cols
# Checking that there is at least one effect modifier
if not effect_modifier_names:
raise ValueError("At least one effect modifier should be specified to compute conditional effects.")
# Making sure that effect_modifier_names is a list
effect_modifier_names = parse_state(effect_modifier_names)
if not all(em in self._effect_modifier_names for em in effect_modifier_names):
self.logger.warning(
"At least one of the provided effect modifiers was not included while fitting the estimator. You may get incorrect results. To resolve, fit the estimator again by providing the updated effect modifiers in estimate_effect()."
)
# Making a copy since we are going to be changing effect modifier names
effect_modifier_names = effect_modifier_names.copy()
prefix = CausalEstimator.TEMP_CAT_COLUMN_PREFIX
# For every numeric effect modifier, adding a temp categorical column
for i in range(len(effect_modifier_names)):
em = effect_modifier_names[i]
if pd.api.types.is_numeric_dtype(self._data[em].dtypes):
self._data[prefix + str(em)] = pd.qcut(self._data[em], num_quantiles, duplicates="drop")
effect_modifier_names[i] = prefix + str(em)
# Grouping by effect modifiers and computing effect separately
by_effect_mods = self._data.groupby(effect_modifier_names)
cond_est_fn = lambda x: self._do(self._treatment_value, x) - self._do(self._control_value, x)
conditional_estimates = by_effect_mods.apply(estimate_effect_fn)
# Deleting the temporary categorical columns
for em in effect_modifier_names:
if em.startswith(prefix):
self._data.pop(em)
return conditional_estimates
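# The discretization step above, in isolation (toy data, illustrative only):
#
#   import pandas as pd
#   df = pd.DataFrame({"X0": [0.1, 0.4, 0.5, 0.9]})
#   df["__categorical__X0"] = pd.qcut(df["X0"], 2, duplicates="drop")
#   # the method then does df.groupby("__categorical__X0").apply(estimate_effect_fn)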
def _do(self, x, data_df=None):
raise NotImplementedError(
("Do-operator is " + CausalEstimator.DEFAULT_NOTIMPLEMENTEDERROR_MSG).format(self.__class__)
)
def do(self, x, data_df=None):
"""Method that implements the do-operator.
Given a value x for the treatment, returns the expected value of the outcome when the treatment is intervened to a value x.
:param x: Value of the treatment
:param data_df: Data on which the do-operator is to be applied.
:returns: Value of the outcome when treatment is intervened/set to x.
"""
est = self._do(x, data_df)
return est
def construct_symbolic_estimator(self, estimand):
raise NotImplementedError(("Symbolic estimator string is ").format(self.__class__))
def _generate_bootstrap_estimates(self, num_bootstrap_simulations, sample_size_fraction):
"""Helper function to generate causal estimates over bootstrapped samples.
:param num_bootstrap_simulations: Number of simulations for the bootstrap method.
:param sample_size_fraction: Fraction of the dataset to be resampled.
:returns: A collections.namedtuple containing a list of bootstrapped estimates and a dictionary containing parameters used for the bootstrap.
"""
# The array that stores the results of all estimations
simulation_results = np.zeros(num_bootstrap_simulations)
# Find the sample size as a proportion of the population size
sample_size = int(sample_size_fraction * len(self._data))
if sample_size > len(self._data):
self.logger.warning("WARN: The sample size is greater than the data being sampled")
self.logger.info("INFO: The sample size: {}".format(sample_size))
self.logger.info("INFO: The number of simulations: {}".format(num_bootstrap_simulations))
# Perform the set number of simulations
for index in range(num_bootstrap_simulations):
new_data = resample(self._data, n_samples=sample_size)
new_estimator = self.get_new_estimator_object(
self._target_estimand,
test_significance=False,
evaluate_effect_strength=False,
confidence_intervals=False,
)
new_estimator.fit(
new_data,
self._target_estimand.treatment_variable,
self._target_estimand.outcome_variable,
effect_modifier_names=self._effect_modifier_names,
)
new_effect = new_estimator.estimate_effect(
treatment_value=self._treatment_value,
control_value=self._control_value,
target_units=self._target_units,
)
simulation_results[index] = new_effect.value
estimates = CausalEstimator.BootstrapEstimates(
simulation_results,
{"num_simulations": num_bootstrap_simulations, "sample_size_fraction": sample_size_fraction},
)
return estimates
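# Internal usage sketch (hedged: this helper is normally reached through the
# public CI / standard-error / significance methods rather than called directly):
#
#   boot = estimator._generate_bootstrap_estimates(
#       num_bootstrap_simulations=100, sample_size_fraction=1
#   )
#   boot.estimates  # np.ndarray of 100 bootstrapped effect values
#   boot.params     # {"num_simulations": 100, "sample_size_fraction": 1}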
def _estimate_confidence_intervals_with_bootstrap(
self, estimate_value, confidence_level=None, num_simulations=None, sample_size_fraction=None
):
"""
Method to compute confidence interval using bootstrapped sampling.
:param estimate_value: obtained estimate's value
:param confidence_level: The level for which to compute CI (e.g., 95% confidence level translates to confidence_level=0.95)
:param num_simulations: The number of simulations to be performed to get the bootstrap confidence intervals.
:param sample_size_fraction: The fraction of the dataset to be resampled.
:returns: confidence interval at the specified level.
For more details on bootstrap or resampling statistics, refer to the following links:
https://ocw.mit.edu/courses/mathematics/18-05-introduction-to-probability-and-statistics-spring-2014/readings/MIT18_05S14_Reading24.pdf
https://projecteuclid.org/download/pdf_1/euclid.ss/1032280214
"""
# Using class default parameters if not specified
if num_simulations is None:
num_simulations = self.num_simulations
if sample_size_fraction is None:
sample_size_fraction = self.sample_size_fraction
# Checking if bootstrap_estimates are already computed
if self._bootstrap_estimates is None:
self._bootstrap_estimates = self._generate_bootstrap_estimates(num_simulations, sample_size_fraction)
elif CausalEstimator.is_bootstrap_parameter_changed(self._bootstrap_estimates.params, locals()):
# Check if any parameter is changed from the previous std error estimate
self._bootstrap_estimates = self._generate_bootstrap_estimates(num_simulations, sample_size_fraction)
# Now use the data obtained from the simulations to get the value of the confidence estimates
bootstrap_estimates = self._bootstrap_estimates.estimates
# Get the variations of each bootstrap estimate and sort
bootstrap_variations = [bootstrap_estimate - estimate_value for bootstrap_estimate in bootstrap_estimates]
sorted_bootstrap_variations = np.sort(bootstrap_variations)
# Now we take the (1 - p)th and the (p)th variations, where p is the chosen confidence level
upper_bound_index = int((1 - confidence_level) * len(sorted_bootstrap_variations))
lower_bound_index = int(confidence_level * len(sorted_bootstrap_variations))
# Get the lower and upper bounds by subtracting the variations from the estimate
lower_bound = estimate_value - sorted_bootstrap_variations[lower_bound_index]
upper_bound = estimate_value - sorted_bootstrap_variations[upper_bound_index]
return lower_bound, upper_bound
def _estimate_confidence_intervals(self, confidence_level=None, method=None, **kwargs):
"""
This method is to be overridden by the child classes, so that they
can run a confidence interval estimation method suited to the specific
causal estimator.
"""
raise NotImplementedError(
(
"This method for estimating confidence intervals is "
+ CausalEstimator.DEFAULT_NOTIMPLEMENTEDERROR_MSG
+ " Meanwhile, you can try the bootstrap method (method='bootstrap') to estimate confidence intervals."
).format(self.__class__)
)
def estimate_confidence_intervals(self, estimate_value, confidence_level=None, method=None, **kwargs):
"""Find the confidence intervals corresponding to any estimator
By default, this is done with the help of bootstrapped confidence intervals
but can be overridden if the specific estimator implements other methods of estimating confidence intervals.
If the method provided is not bootstrap, this function calls the implementation of the specific estimator.
:param estimate_value: obtained estimate's value
:param method: Method for estimating confidence intervals.
:param confidence_level: The confidence level of the confidence intervals of the estimate.
:param kwargs: Other optional args to be passed to the CI method.
:returns: The obtained confidence interval.
"""
if method is None:
if self._confidence_intervals:
method = self._confidence_intervals # this is either True or methodname
else:
method = "default"
confidence_intervals = None
if confidence_level is None:
confidence_level = self.confidence_level
if method == "default" or method is True: # user has not provided any method
try:
confidence_intervals = self._estimate_confidence_intervals(confidence_level, method=method, **kwargs)
except NotImplementedError:
confidence_intervals = self._estimate_confidence_intervals_with_bootstrap(
estimate_value, confidence_level, **kwargs
)
else:
if method == "bootstrap":
confidence_intervals = self._estimate_confidence_intervals_with_bootstrap(
estimate_value, confidence_level, **kwargs
)
else:
confidence_intervals = self._estimate_confidence_intervals(confidence_level, method=method, **kwargs)
return confidence_intervals
def _estimate_std_error_with_bootstrap(self, num_simulations=None, sample_size_fraction=None):
"""Compute standard error using the bootstrap method. Standard error
and confidence intervals use the same parameter num_simulations for
the number of bootstrap simulations.
:param num_simulations: Number of bootstrapped samples.
:param sample_size_fraction: Fraction of data to be resampled.
:returns: Standard error of the obtained estimate.
"""
# Use existing params, if new user defined params are not present
if num_simulations is None:
num_simulations = self.num_simulations
if sample_size_fraction is None:
sample_size_fraction = self.sample_size_fraction
# Checking if bootstrap_estimates are already computed
if self._bootstrap_estimates is None:
self._bootstrap_estimates = self._generate_bootstrap_estimates(num_simulations, sample_size_fraction)
elif CausalEstimator.is_bootstrap_parameter_changed(self._bootstrap_estimates.params, locals()):
# Check if any parameter is changed from the previous std error estimate
self._bootstrap_estimates = self._generate_bootstrap_estimates(num_simulations, sample_size_fraction)
std_error = np.std(self._bootstrap_estimates.estimates)
return std_error
def _estimate_std_error(self, method=None, **kwargs):
"""
This method is to be overridden by the child classes, so that they
can run a standard error estimation method suited to the specific
causal estimator.
"""
raise NotImplementedError(
(
"This method for estimating standard errors is "
+ CausalEstimator.DEFAULT_NOTIMPLEMENTEDERROR_MSG
+ " Meanwhile, you can try the bootstrap method (method='bootstrap') to estimate standard errors."
).format(self.__class__)
)
def estimate_std_error(self, method=None, **kwargs):
"""Compute standard error of an obtained causal estimate.
:param method: Method for computing the standard error.
:param kwargs: Other optional parameters to be passed to the estimating method.
:returns: Standard error of the causal estimate.
"""
if method is None:
if self._confidence_intervals:
method = self._confidence_intervals
else:
method = "default"
std_error = None
if method == "default" or method is True: # user has not provided any method
try:
std_error = self._estimate_std_error(method, **kwargs)
except NotImplementedError:
std_error = self._estimate_std_error_with_bootstrap(**kwargs)
else:
if method == "bootstrap":
std_error = self._estimate_std_error_with_bootstrap(**kwargs)
else:
std_error = self._estimate_std_error(method, **kwargs)
return std_error
def _test_significance_with_bootstrap(self, estimate_value, num_null_simulations=None):
"""Test statistical significance of an estimate using the bootstrap method.
:param estimate_value: Obtained estimate's value
:param num_null_simulations: Number of simulations for the null hypothesis
:returns: p-value of the statistical significance test.
"""
# Use existing params, if new user defined params are not present
if num_null_simulations is None:
num_null_simulations = self.num_null_simulations
do_retest = self._bootstrap_null_estimates is None or CausalEstimator.is_bootstrap_parameter_changed(
self._bootstrap_null_estimates.params, locals()
)
if do_retest:
null_estimates = np.zeros(num_null_simulations)
for i in range(num_null_simulations):
new_outcome = np.random.permutation(self._outcome)
new_data = self._data.assign(dummy_outcome=new_outcome)
# self._outcome = self._data["dummy_outcome"]
new_estimator = self.get_new_estimator_object(
self._target_estimand,
test_significance=False,
evaluate_effect_strength=False,
confidence_intervals=False,
)
new_estimator.fit(
data=new_data,
treatment_name=self._target_estimand.treatment_variable,
outcome_name=("dummy_outcome",),
effect_modifier_names=self._effect_modifier_names,
)
new_effect = new_estimator.estimate_effect(
target_units=self._target_units,
)
null_estimates[i] = new_effect.value
self._bootstrap_null_estimates = CausalEstimator.BootstrapEstimates(
null_estimates, {"num_null_simulations": num_null_simulations, "sample_size_fraction": 1}
)
# Processing the null hypothesis estimates
sorted_null_estimates = np.sort(self._bootstrap_null_estimates.estimates)
self.logger.debug("Null estimates: {0}".format(sorted_null_estimates))
median_estimate = sorted_null_estimates[int(num_null_simulations / 2)]
# Doing a two-sided test
if estimate_value > median_estimate:
# Being conservative with the p-value reported
estimate_index = np.searchsorted(sorted_null_estimates, estimate_value, side="left")
p_value = 1 - (estimate_index / num_null_simulations)
if estimate_value <= median_estimate:
# Being conservative with the p-value reported
estimate_index = np.searchsorted(sorted_null_estimates, estimate_value, side="right")
p_value = estimate_index / num_null_simulations
# If the estimate_index is 0, it depends on the number of simulations
if p_value == 0:
p_value = (0, 1 / len(sorted_null_estimates)) # a tuple determining the range.
elif p_value == 1:
p_value = (1 - 1 / len(sorted_null_estimates), 1)
signif_dict = {"p_value": p_value}
return signif_dict
def _test_significance(self, estimate_value, method=None, **kwargs):
"""
This method is to be overridden by the child classes, so that they
can run a significance test suited to the specific
causal estimator.
"""
raise NotImplementedError(
(
"This method for testing statistical significance is "
+ CausalEstimator.DEFAULT_NOTIMPLEMENTEDERROR_MSG
+ " Meanwhile, you can try the bootstrap method (method='bootstrap') to test statistical significance."
).format(self.__class__)
)
def test_significance(self, estimate_value, method=None, **kwargs):
"""Test statistical significance of obtained estimate.
By default, uses resampling to create a non-parametric significance test.
This is a general procedure; individual child estimators can implement different methods.
If the method name is different from "bootstrap", this function calls the
implementation of the child estimator.
:param self: object instance of class Estimator
:param estimate_value: obtained estimate's value
:param method: Method for checking statistical significance
:returns: p-value from the significance test
"""
if method is None:
if self._significance_test:
method = self._significance_test # this is either True or methodname
else:
method = "default"
signif_dict = None
if method == "default" or method is True: # user has not provided any method
try:
signif_dict = self._test_significance(estimate_value, method, **kwargs)
except NotImplementedError:
signif_dict = self._test_significance_with_bootstrap(estimate_value, **kwargs)
else:
if method == "bootstrap":
signif_dict = self._test_significance_with_bootstrap(estimate_value, **kwargs)
else:
signif_dict = self._test_significance(estimate_value, method, **kwargs)
return signif_dict
def evaluate_effect_strength(self, estimate):
fraction_effect_explained = self._evaluate_effect_strength(estimate, method="fraction-effect")
# Need to test r-squared before supporting
# effect_r_squared = self._evaluate_effect_strength(estimate, method="r-squared")
strength_dict = {
"fraction-effect": fraction_effect_explained
# 'r-squared': effect_r_squared
}
return strength_dict
def _evaluate_effect_strength(self, estimate, method="fraction-effect"):
supported_methods = ["fraction-effect"]
if method not in supported_methods:
raise NotImplementedError("This method is not supported for evaluating effect strength")
if method == "fraction-effect":
naive_obs_estimate = self.estimate_effect_naive()
self.logger.debug(estimate.value, naive_obs_estimate.value)
fraction_effect_explained = estimate.value / naive_obs_estimate.value
return fraction_effect_explained
# elif method == "r-squared":
# outcome_mean = np.mean(self._outcome)
# total_variance = np.sum(np.square(self._outcome - outcome_mean))
# Assuming a linear model with one variable: the treatment
# Currently only works for continuous y
# causal_model = outcome_mean + estimate.value*self._treatment
# squared_residual = np.sum(np.square(self._outcome - causal_model))
# r_squared = 1 - (squared_residual/total_variance)
# return r_squared
else:
return None
def update_input(self, treatment_value, control_value, target_units):
self._control_value = control_value
self._treatment_value = treatment_value
self._target_units = target_units
@staticmethod
def is_bootstrap_parameter_changed(bootstrap_estimates_params, given_params):
"""Check whether parameters of the bootstrap have changed.
This is an efficiency method that checks if fresh resampling of the bootstrap samples is required.
Returns True if parameters have changed and resampling should be done again.
:param bootstrap_estimates_params: A dictionary of parameters for the current bootstrap samples
:param given_params: A dictionary of parameters passed by the user
:returns: A binary flag denoting whether the parameters are different.
"""
is_any_parameter_changed = False
for prm, val in bootstrap_estimates_params.items():
given_val = given_params.get(prm, None)
if given_val is not None and given_val != val:
is_any_parameter_changed = True
break
return is_any_parameter_changed
def target_units_tostr(self):
s = ""
if type(self._target_units) is str:
s += self._target_units
elif callable(self._target_units):
s += "Data subset defined by a function"
elif isinstance(self._target_units, pd.DataFrame):
s += "Data subset provided as a data frame"
return s
def signif_results_tostr(self, signif_results):
s = ""
pval = signif_results["p_value"]
if type(pval) is tuple:
s += "[{0}, {1}]".format(pval[0], pval[1])
else:
s += "{0}".format(pval)
return s
def estimate_effect(
data: pd.DataFrame,
treatment: Union[str, List[str]],
outcome: Union[str, List[str]],
identifier_name: str,
estimator: CausalEstimator,
control_value: int = 0,
treatment_value: int = 1,
target_units: str = "ate",
effect_modifiers: List[str] = None,
fit_estimator: bool = True,
method_params: Optional[Dict] = None,
):
"""Estimate the identified causal effect.
You can also directly call any of the EconML estimation methods. The convention is "backdoor.econml.path-to-estimator-class". For example, for the double machine learning estimator ("DML" class) that is located inside "dml" module of EconML, you can use the method name, "backdoor.econml.dml.DML". CausalML estimators can also be called. See `this demo notebook <https://py-why.github.io/dowhy/example_notebooks/dowhy-conditional-treatment-effects.html>`_.
:param data: data frame containing the data
:param treatment: Name of the treatment
:param outcome: Name of the outcome
:param identifier_name: name of the identification method used to identify the estimand (e.g., "backdoor")
:param estimator: Instance of a CausalEstimator to use
:param control_value: Value of the treatment in the control group, for effect estimation. If treatment is multi-variate, this can be a list.
:param treatment_value: Value of the treatment in the treated group, for effect estimation. If treatment is multi-variate, this can be a list.
:param target_units: (Experimental) The units for which the treatment effect should be estimated. This can be of three types. (1) a string for common specifications of target units (namely, "ate", "att" and "atc"), (2) a lambda function that can be used as an index for the data (pandas DataFrame), or (3) a new DataFrame that contains values of the effect_modifiers and effect will be estimated only for this new data.
:param effect_modifiers: Names of effect modifier variables can be (optionally) specified here too, since they do not affect identification. If None, the effect_modifiers from the CausalModel are used.
:param fit_estimator: Boolean flag on whether to fit the estimator.
Setting it to False is useful to estimate the effect on new data using a previously fitted estimator.
:param method_params: Dictionary containing any method-specific parameters; an optional "fit_params" entry is forwarded to the estimator's fit() call.
:returns: An instance of the CausalEstimate class, containing the causal effect estimate
and other method-dependent information
"""
if effect_modifiers is None:
effect_modifiers = []
treatment = parse_state(treatment)
outcome = parse_state(outcome)
causal_estimator_class = estimator.__class__
identified_estimand = estimator._target_estimand
identified_estimand.set_identifier_method(identifier_name)
if identified_estimand.no_directed_path:
logger.warning("No directed path from {0} to {1}.".format(treatment, outcome))
return CausalEstimate(
0, identified_estimand, None, control_value=control_value, treatment_value=treatment_value
)
# Check if estimator's target estimand is identified
elif identified_estimand.estimands[identifier_name] is None:
logger.error("No valid identified estimand available.")
return CausalEstimate(None, None, None, control_value=control_value, treatment_value=treatment_value)
if fit_estimator:
estimator.fit(
data=data,
treatment_name=treatment,
outcome_name=outcome,
effect_modifier_names=effect_modifiers,
**method_params["fit_params"] if "fit_params" in method_params else {},
)
estimate = estimator.estimate_effect(
treatment_value=treatment_value,
control_value=control_value,
target_units=target_units,
confidence_intervals=estimator._confidence_intervals,
)
# Store parameters inside estimate object for refutation methods
# TODO: This add_params needs to move to the estimator class
# inside estimate_effect and estimate_conditional_effect
estimate.add_params(
estimand_type=identified_estimand.estimand_type,
estimator_class=causal_estimator_class,
test_significance=estimator._significance_test,
evaluate_effect_strength=estimator._effect_strength_eval,
confidence_intervals=estimator._confidence_intervals,
target_units=target_units,
effect_modifiers=effect_modifiers,
)
if estimator._significance_test:
estimator.test_significance(estimate.value, method=estimator._significance_test)
if estimator._confidence_intervals:
estimator.estimate_confidence_intervals(
estimate.value, confidence_level=estimator.confidence_level, method=estimator._confidence_intervals
)
if estimator._effect_strength_eval:
effect_strength_dict = estimator.evaluate_effect_strength(estimate)
estimate.add_effect_strength(effect_strength_dict)
return estimate
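# End-to-end usage sketch of the refactored functional API (hedged: the
# explicit-parameter constructor shown mirrors this refactor's convention,
# but exact subclass signatures may differ; `df`, `identified_estimand`,
# and the column names are hypothetical):
#
#   from dowhy.causal_estimators.linear_regression_estimator import (
#       LinearRegressionEstimator,
#   )
#
#   estimator = LinearRegressionEstimator(
#       identified_estimand, test_significance=True, confidence_intervals=True
#   )
#   estimate = estimate_effect(
#       data=df,
#       treatment="v0",
#       outcome="y",
#       identifier_name="backdoor",
#       estimator=estimator,
#       target_units="ate",
#       effect_modifiers=["X0"],
#   )
#   print(estimate.value, estimate.get_confidence_intervals())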
class CausalEstimate:
"""Class for the estimate object that every causal estimator returns"""
def __init__(
self,
estimate,
target_estimand,
realized_estimand_expr,
control_value,
treatment_value,
conditional_estimates=None,
**kwargs,
):
self.value = estimate
self.target_estimand = target_estimand
self.realized_estimand_expr = realized_estimand_expr
self.control_value = control_value
self.treatment_value = treatment_value
self.conditional_estimates = conditional_estimates
self.params = kwargs
if self.params is not None:
for key, value in self.params.items():
setattr(self, key, value)
self.effect_strength = None
def add_estimator(self, estimator_instance):
self.estimator = estimator_instance
def add_effect_strength(self, strength_dict):
self.effect_strength = strength_dict
def add_params(self, **kwargs):
self.params.update(kwargs)
def get_confidence_intervals(self, confidence_level=None, method=None, **kwargs):
"""Get confidence intervals of the obtained estimate.
By default, this is done with the help of bootstrapped confidence intervals
but can be overridden if the specific estimator implements other methods of estimating confidence intervals.
If the method provided is not bootstrap, this function calls the implementation of the specific estimator.
:param method: Method for estimating confidence intervals.
:param confidence_level: The confidence level of the confidence intervals of the estimate.
:param kwargs: Other optional args to be passed to the CI method.
:returns: The obtained confidence interval.
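Example (a sketch; assumes ``estimate`` is a CausalEstimate returned by a fitted estimator, so that an estimator instance is attached to it)::

    >>> lower, upper = estimate.get_confidence_intervals(
    ...     confidence_level=0.9, method="bootstrap", num_simulations=50
    ... )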
"""
confidence_intervals = self.estimator.estimate_confidence_intervals(
estimate_value=self.value, confidence_level=confidence_level, method=method, **kwargs
)
return confidence_intervals
def get_standard_error(self, method=None, **kwargs):
"""Get standard error of the obtained estimate.
By default, this is done with the help of bootstrapped standard errors
but can be overridden if the specific estimator implements other methods of estimating standard error.
If the method provided is not bootstrap, this function calls the implementation of the specific estimator.
:param method: Method for computing the standard error.
:param kwargs: Other optional parameters to be passed to the estimating method.
:returns: Standard error of the causal estimate.
"""
std_error = self.estimator.estimate_std_error(method=method, **kwargs)
return std_error
def test_stat_significance(self, method=None, **kwargs):
"""Test statistical significance of the estimate obtained.
By default, uses resampling to create a non-parametric significance test.
Individual child estimators can implement different methods.
If the method name is different from "bootstrap", this function calls the
implementation of the child estimator.
:param method: Method for checking statistical significance
:param kwargs: Other optional parameters to be passed to the estimating method.
:returns: p-value from the significance test
"""
signif_results = self.estimator.test_significance(self.value, method=method, **kwargs)
return {"p_value": signif_results["p_value"]}
def estimate_conditional_effects(
self, effect_modifiers=None, num_quantiles=CausalEstimator.NUM_QUANTILES_TO_DISCRETIZE_CONT_COLS
):
"""Estimate treatment effect conditioned on given variables.
If a numeric effect modifier is provided, it is discretized into quantile bins. If you would like a custom discretization, you can do so yourself: create a new column containing the discretized effect modifier and then include that column's name in the effect_modifier_names argument.
:param effect_modifiers: Names of effect modifier variables over which the conditional effects will be estimated. If not provided, defaults to the effect modifiers specified during creation of the CausalEstimator object.
:param num_quantiles: The number of quantiles into which a numeric effect modifier variable is discretized. Does not affect any categorical effect modifiers.
:returns: A (multi-index) dataframe that provides separate effects for each value of the (discretized) effect modifiers.
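Example (a sketch; assumes the estimator was fitted with a numeric effect modifier named "W0", an invented name)::

    >>> cond = estimate.estimate_conditional_effects(["W0"], num_quantiles=4)
    >>> cond  # one row of effects per quantile bin of W0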
"""
return self.estimator._estimate_conditional_effects(
self.estimator._estimate_effect_fn, effect_modifiers, num_quantiles
)
def interpret(self, method_name=None, **kwargs):
"""Interpret the causal estimate.
:param method_name: Method used (string) or a list of methods. If None, then the default for the specific estimator is used.
:param kwargs: Optional parameters that are directly passed to the interpreter method.
:returns: None
"""
if method_name is None:
method_name = self.estimator.interpret_method
method_name_arr = parse_state(method_name)
for method in method_name_arr:
interpreter = interpreters.get_class_object(method)
interpreter(self, **kwargs).interpret()
def __str__(self):
s = "*** Causal Estimate ***\n"
# No estimand was identified (identification failed)
if self.target_estimand is None:
return "Estimation failed! No relevant identified estimand available for this estimation method."
s += "\n## Identified estimand\n{0}".format(self.target_estimand.__str__(only_target_estimand=True))
s += "\n## Realized estimand\n{0}".format(self.realized_estimand_expr)
if hasattr(self, "estimator"):
s += "\nTarget units: {0}\n".format(self.estimator.target_units_tostr())
s += "\n## Estimate\n"
s += "Mean value: {0}\n".format(self.value)
s += ""
if hasattr(self, "cate_estimates"):
s += "Effect estimates: {0}\n".format(self.cate_estimates)
if hasattr(self, "estimator"):
if self.estimator._significance_test:
s += "p-value: {0}\n".format(self.estimator.signif_results_tostr(self.test_stat_significance()))
if self.estimator._confidence_intervals:
s += "{0}% confidence interval: {1}\n".format(
100 * self.estimator.confidence_level, self.get_confidence_intervals()
)
if self.conditional_estimates is not None:
s += "### Conditional Estimates\n"
s += str(self.conditional_estimates)
if self.effect_strength is not None:
s += "\n## Effect Strength\n"
s += "Change in outcome attributable to treatment: {}\n".format(self.effect_strength["fraction-effect"])
# s += "Variance in outcome explained by treatment: {}\n".format(self.effect_strength["r-squared"])
return s
class RealizedEstimand(object):
def __init__(self, identified_estimand, estimator_name):
self.treatment_variable = identified_estimand.treatment_variable
self.outcome_variable = identified_estimand.outcome_variable
self.backdoor_variables = identified_estimand.get_backdoor_variables()
self.instrumental_variables = identified_estimand.instrumental_variables
self.estimand_type = identified_estimand.estimand_type
self.estimand_expression = None
self.assumptions = None
self.estimator_name = estimator_name
def update_assumptions(self, estimator_assumptions):
self.assumptions = estimator_assumptions
def update_estimand_expression(self, estimand_expression):
self.estimand_expression = estimand_expression
def __str__(self):
s = "Realized estimand: {0}\n".format(self.estimator_name)
s += "Realized estimand type: {0}\n".format(self.estimand_type)
s += "Estimand expression:\n{0}\n".format(sp.pretty(self.estimand_expression))
j = 1
for ass_name, ass_str in self.assumptions.items():
s += "Estimand assumption {0}, {1}: {2}\n".format(j, ass_name, ass_str)
j += 1
return s
[End of the revised file. Reviewer comment by amit-sharma (PR author: andresmor-ms): "Changing the value of `need_conditional_estimates` is a side-effect of this method. Will be good to mention in the docstring."]

py-why/dowhy, PR #746: Functional api/causal estimators
* Introduce `fit()` method to estimators.
* Refactor constructors to avoid using `*args` and `**kwargs` and have more explicit parameters.
* Refactor refuters and other parts of the code to use `fit()` and modify arguments to `estimate_effect()`
(created 2022-11-04, merged 2022-12-03; file: dowhy/causal_estimator.py, content before the change)

import logging
from collections import namedtuple
from typing import Dict, List, Optional, Union
import numpy as np
import pandas as pd
import sympy as sp
from sklearn.utils import resample
import dowhy.interpreters as interpreters
from dowhy import causal_estimators
from dowhy.causal_graph import CausalGraph
from dowhy.causal_identifier.identified_estimand import IdentifiedEstimand
from dowhy.utils.api import parse_state
logger = logging.getLogger(__name__)
class CausalEstimator:
"""Base class for an estimator of causal effect.
Subclasses implement different estimation methods. All estimation methods are in the package "dowhy.causal_estimators"
"""
# The default number of simulations for statistical testing
DEFAULT_NUMBER_OF_SIMULATIONS_STAT_TEST = 1000
# The default number of simulations to obtain confidence intervals
DEFAULT_NUMBER_OF_SIMULATIONS_CI = 100
# The portion of the total size that should be taken each time to find the confidence intervals
# 1 is the recommended value
# https://ocw.mit.edu/courses/mathematics/18-05-introduction-to-probability-and-statistics-spring-2014/readings/MIT18_05S14_Reading24.pdf
# https://projecteuclid.org/download/pdf_1/euclid.ss/1032280214
DEFAULT_SAMPLE_SIZE_FRACTION = 1
# The default Confidence Level
DEFAULT_CONFIDENCE_LEVEL = 0.95
# Number of quantiles to discretize continuous columns, for applying groupby
NUM_QUANTILES_TO_DISCRETIZE_CONT_COLS = 5
# Prefix to add to temporary categorical variables created after discretization
TEMP_CAT_COLUMN_PREFIX = "__categorical__"
DEFAULT_NOTIMPLEMENTEDERROR_MSG = "not yet implemented for {0}. If you would like this to be implemented in the next version, please raise an issue at https://github.com/microsoft/dowhy/issues"
BootstrapEstimates = namedtuple("BootstrapEstimates", ["estimates", "params"])
DEFAULT_INTERPRET_METHOD = ["textual_effect_interpreter"]
# std args to be removed from locals() before being passed to args_dict
_STD_INIT_ARGS = ("self", "__class__", "args", "kwargs")
def __init__(
self,
data,
identified_estimand,
treatment,
outcome,
control_value=0,
treatment_value=1,
test_significance=False,
evaluate_effect_strength=False,
confidence_intervals=False,
target_units=None,
effect_modifiers=None,
num_null_simulations=DEFAULT_NUMBER_OF_SIMULATIONS_STAT_TEST,
num_simulations=DEFAULT_NUMBER_OF_SIMULATIONS_CI,
sample_size_fraction=DEFAULT_SAMPLE_SIZE_FRACTION,
confidence_level=DEFAULT_CONFIDENCE_LEVEL,
need_conditional_estimates="auto",
num_quantiles_to_discretize_cont_cols=NUM_QUANTILES_TO_DISCRETIZE_CONT_COLS,
**kwargs,
):
"""Initializes an estimator with data and names of relevant variables.
This method is called from the constructors of its child classes.
:param data: data frame containing the data
:param identified_estimand: probability expression
representing the target identified estimand to estimate.
:param treatment: name of the treatment variable
:param outcome: name of the outcome variable
:param control_value: Value of the treatment in the control group, for effect estimation. If treatment is multi-variate, this can be a list.
:param treatment_value: Value of the treatment in the treated group, for effect estimation. If treatment is multi-variate, this can be a list.
:param test_significance: Binary flag or a string indicating whether to test significance and by which method. All estimators support test_significance="bootstrap" that estimates a p-value for the obtained estimate using the bootstrap method. Individual estimators can override this to support custom testing methods. The bootstrap method supports an optional parameter, num_null_simulations. If False, no testing is done. If True, significance of the estimate is tested using the custom method if available, otherwise by bootstrap.
:param evaluate_effect_strength: (Experimental) whether to evaluate the strength of effect
:param confidence_intervals: Binary flag or a string indicating whether the confidence intervals should be computed and which method should be used. All methods support estimation of confidence intervals using the bootstrap method by using the parameter confidence_intervals="bootstrap". The bootstrap method takes in two arguments (num_simulations and sample_size_fraction) that can be optionally specified in the params dictionary. Estimators may also override this to implement their own confidence interval method. If this parameter is False, no confidence intervals are computed. If True, confidence intervals are computed by the estimator's specific method if available, otherwise through bootstrap.
:param target_units: The units for which the treatment effect should be estimated. This can be a string for common specifications of target units (namely, "ate", "att" and "atc"). It can also be a lambda function that can be used as an index for the data (pandas DataFrame). Alternatively, it can be a new DataFrame that contains values of the effect_modifiers and effect will be estimated only for this new data.
:param effect_modifiers: Variables on which to compute separate
effects, or return a heterogeneous effect function. Not all
methods support this currently.
:param num_null_simulations: The number of simulations for testing the
statistical significance of the estimator
:param num_simulations: The number of simulations for finding the
confidence interval (and/or standard error) for an estimate
:param sample_size_fraction: The fraction of the data to be resampled for the bootstrap
estimator
:param confidence_level: The confidence level of the confidence
interval estimate
:param need_conditional_estimates: Boolean flag indicating whether
conditional estimates should be computed. Defaults to True if
there are effect modifiers in the graph
:param num_quantiles_to_discretize_cont_cols: The number of quantiles
into which a numeric effect modifier is split, to enable
estimation of conditional treatment effect over it.
:param kwargs: (optional) Additional estimator-specific parameters
:returns: an instance of the estimator class.
"""
self._data = data
self._target_estimand = identified_estimand
# Currently estimation methods only support univariate treatment and outcome
self._treatment_name = treatment
self._outcome_name = outcome[0] # assuming one-dimensional outcome
self._control_value = control_value
self._treatment_value = treatment_value
self._significance_test = test_significance
self._effect_strength_eval = evaluate_effect_strength
self._target_units = target_units
self._effect_modifier_names = effect_modifiers
self._confidence_intervals = confidence_intervals
self._bootstrap_estimates = None # for confidence intervals and std error
self._bootstrap_null_estimates = None # for significance test
self._effect_modifiers = None
self.method_params = kwargs
# Setting the default interpret method
self.interpret_method = CausalEstimator.DEFAULT_INTERPRET_METHOD
self.logger = logging.getLogger(__name__)
# Setting treatment and outcome values
if self._data is not None:
self._treatment = self._data[self._treatment_name]
self._outcome = self._data[self._outcome_name]
if self._effect_modifier_names:
# only add the observed nodes
self._effect_modifier_names = [
cname for cname in self._effect_modifier_names if cname in self._data.columns
]
if len(self._effect_modifier_names) > 0:
self._effect_modifiers = self._data[self._effect_modifier_names]
self._effect_modifiers = pd.get_dummies(self._effect_modifiers, drop_first=True)
self.logger.debug("Effect modifiers: " + ",".join(self._effect_modifier_names))
else:
self._effect_modifier_names = None
# Check if some parameters were set, otherwise set to default values
self.num_null_simulations = num_null_simulations
self.num_simulations = num_simulations
self.sample_size_fraction = sample_size_fraction
self.confidence_level = confidence_level
self.num_quantiles_to_discretize_cont_cols = num_quantiles_to_discretize_cont_cols
# Estimate conditional estimates by default
self.need_conditional_estimates = (
need_conditional_estimates if need_conditional_estimates != "auto" else bool(self._effect_modifier_names)
)
@staticmethod
def get_estimator_object(new_data, identified_estimand, estimate):
"""Create a new estimator of the same type as the one passed in the estimate argument.
Creates a new object with new_data and the identified_estimand
:param new_data: np.ndarray, pd.Series, pd.DataFrame
The newly assigned data on which the estimator should run
:param identified_estimand: IdentifiedEstimand
An instance of the identified estimand class that provides the information with
respect to which causal pathways are employed when the treatment affects the outcome
:param estimate: CausalEstimate
It is an already existing estimate whose properties we wish to replicate
:returns: An instance of the same estimator class that had generated the given estimate.
"""
estimator_class = estimate.params["estimator_class"]
new_estimator = estimator_class(
new_data,
identified_estimand,
identified_estimand.treatment_variable,
identified_estimand.outcome_variable,
# names of treatment and outcome
control_value=estimate.control_value,
treatment_value=estimate.treatment_value,
test_significance=False,
evaluate_effect_strength=False,
confidence_intervals=estimate.params["confidence_intervals"],
target_units=estimate.params["target_units"],
effect_modifiers=estimate.params["effect_modifiers"],
**estimate.params["method_params"] if estimate.params["method_params"] is not None else {},
)
return new_estimator
def _estimate_effect(self):
"""This method is to be overriden by the child classes, so that they can run the estimation technique of their choice"""
raise NotImplementedError(
("Main estimation method is " + CausalEstimator.DEFAULT_NOTIMPLEMENTEDERROR_MSG).format(self.__class__)
)
def estimate_effect(self):
"""Base estimation method that calls the estimate_effect method of its calling subclass.
Can optionally also test significance and estimate effect strength for any returned estimate.
:param self: object instance of class Estimator
:returns: A CausalEstimate instance that contains point estimates of average and conditional effects. Based on the parameters provided, it optionally includes confidence intervals, standard errors, statistical significance and other statistical parameters.
"""
est = self._estimate_effect()
est.add_estimator(self)
if self._significance_test:
self.test_significance(est.value, method=self._significance_test)
if self._confidence_intervals:
self.estimate_confidence_intervals(
est.value, confidence_level=self.confidence_level, method=self._confidence_intervals
)
if self._effect_strength_eval:
effect_strength_dict = self.evaluate_effect_strength(est)
est.add_effect_strength(effect_strength_dict)
return est
def estimate_effect_naive(self):
# TODO Only works for binary treatment
df_withtreatment = self._data.loc[self._data[self._treatment_name] == 1]
df_notreatment = self._data.loc[self._data[self._treatment_name] == 0]
est = np.mean(df_withtreatment[self._outcome_name]) - np.mean(df_notreatment[self._outcome_name])
return CausalEstimate(est, None, None, control_value=0, treatment_value=1)
def _estimate_effect_fn(self, data_df):
"""Function used in conditional effect estimation. This function is to be overridden by each child estimator.
The overridden function should take in a dataframe as input and return the estimate for that data.
"""
raise NotImplementedError(
("Conditional treatment effects are " + CausalEstimator.DEFAULT_NOTIMPLEMENTEDERROR_MSG).format(
self.__class__
)
)
def _estimate_conditional_effects(self, estimate_effect_fn, effect_modifier_names=None, num_quantiles=None):
"""Estimate conditional treatment effects. Common method for all estimators that utilizes a specific estimate_effect_fn implemented by each child estimator.
If a numeric effect modifier is provided, it is discretized into quantile bins. If you would like a custom discretization, you can do so yourself: create a new column containing the discretized effect modifier and then include that column's name in the effect_modifier_names argument.
:param estimate_effect_fn: Function that has a single parameter (a data frame) and returns the treatment effect estimate on that data.
:param effect_modifier_names: Names of effect modifier variables over which the conditional effects will be estimated. If not provided, defaults to the effect modifiers specified during creation of the CausalEstimator object.
:param num_quantiles: The number of quantiles into which a numeric effect modifier variable is discretized. Does not affect any categorical effect modifiers.
:returns: A (multi-index) dataframe that provides separate effects for each value of the (discretized) effect modifiers.
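The discretization step relies on ``pd.qcut``; a small self-contained sketch of that step (the column name is invented for the example)::

    >>> import pandas as pd
    >>> df = pd.DataFrame({"W0": range(10)})
    >>> df["__categorical__W0"] = pd.qcut(df["W0"], 5, duplicates="drop")
    >>> df.groupby("__categorical__W0", observed=True).size().tolist()  # [2, 2, 2, 2, 2]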
"""
# Defaulting to class default values if parameters are not provided
if effect_modifier_names is None:
effect_modifier_names = self._effect_modifier_names
if num_quantiles is None:
num_quantiles = self.num_quantiles_to_discretize_cont_cols
# Checking that there is at least one effect modifier
if not effect_modifier_names:
raise ValueError("At least one effect modifier should be specified to compute conditional effects.")
# Making sure that effect_modifier_names is a list
effect_modifier_names = parse_state(effect_modifier_names)
if not all(em in self._effect_modifier_names for em in effect_modifier_names):
self.logger.warning(
"At least one of the provided effect modifiers was not included while fitting the estimator. You may get incorrect results. To resolve, fit the estimator again by providing the updated effect modifiers in estimate_effect()."
)
# Making a copy since we are going to be changing effect modifier names
effect_modifier_names = effect_modifier_names.copy()
prefix = CausalEstimator.TEMP_CAT_COLUMN_PREFIX
# For every numeric effect modifier, adding a temp categorical column
for i in range(len(effect_modifier_names)):
em = effect_modifier_names[i]
if pd.api.types.is_numeric_dtype(self._data[em].dtypes):
self._data[prefix + str(em)] = pd.qcut(self._data[em], num_quantiles, duplicates="drop")
effect_modifier_names[i] = prefix + str(em)
# Grouping by effect modifiers and computing effect separately
by_effect_mods = self._data.groupby(effect_modifier_names)
cond_est_fn = lambda x: self._do(self._treatment_value, x) - self._do(self._control_value, x)
conditional_estimates = by_effect_mods.apply(estimate_effect_fn)
# Deleting the temporary categorical columns
for em in effect_modifier_names:
if em.startswith(prefix):
self._data.pop(em)
return conditional_estimates
def _do(self, x, data_df=None):
raise NotImplementedError(
("Do-operator is " + CausalEstimator.DEFAULT_NOTIMPLEMENTEDERROR_MSG).format(self.__class__)
)
def do(self, x, data_df=None):
"""Method that implements the do-operator.
Given a value x for the treatment, returns the expected value of the outcome when the treatment is intervened to a value x.
:param x: Value of the treatment
:param data_df: Data on which the do-operator is to be applied.
:returns: Value of the outcome when treatment is intervened/set to x.
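Example (a sketch; ``est`` is assumed to be a fitted estimator whose child class implements ``_do``)::

    >>> y1 = est.do(1)  # expected outcome under do(T=1)
    >>> y0 = est.do(0)  # expected outcome under do(T=0)
    >>> ate = y1 - y0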
"""
est = self._do(x, data_df)
return est
def construct_symbolic_estimator(self, estimand):
raise NotImplementedError(("Symbolic estimator string is ").format(self.__class__))
def _generate_bootstrap_estimates(self, num_bootstrap_simulations, sample_size_fraction):
"""Helper function to generate causal estimates over bootstrapped samples.
:param num_bootstrap_simulations: Number of simulations for the bootstrap method.
:param sample_size_fraction: Fraction of the dataset to be resampled.
:returns: A collections.namedtuple containing a list of bootstrapped estimates and a dictionary containing parameters used for the bootstrap.
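The resampling itself is plain ``sklearn.utils.resample`` (sampling with replacement); a minimal self-contained sketch::

    >>> import pandas as pd
    >>> from sklearn.utils import resample
    >>> data = pd.DataFrame({"y": [1.0, 2.0, 3.0, 4.0]})
    >>> boot = resample(data, n_samples=4)  # one bootstrap replicate
    >>> len(boot)
    4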
"""
# The array that stores the results of all estimations
simulation_results = np.zeros(num_bootstrap_simulations)
# Compute the sample size as the specified fraction of the population (full data) size
sample_size = int(sample_size_fraction * len(self._data))
if sample_size > len(self._data):
self.logger.warning("WARN: The sample size is greater than the data being sampled")
self.logger.info("INFO: The sample size: {}".format(sample_size))
self.logger.info("INFO: The number of simulations: {}".format(num_bootstrap_simulations))
# Perform the set number of simulations
for index in range(num_bootstrap_simulations):
new_data = resample(self._data, n_samples=sample_size)
new_estimator = type(self)(
new_data,
self._target_estimand,
self._target_estimand.treatment_variable,
self._target_estimand.outcome_variable,
# names of treatment and outcome
treatment_value=self._treatment_value,
control_value=self._control_value,
test_significance=False,
evaluate_effect_strength=False,
confidence_intervals=False,
target_units=self._target_units,
effect_modifiers=self._effect_modifier_names,
**self.method_params,
)
new_effect = new_estimator.estimate_effect()
simulation_results[index] = new_effect.value
estimates = CausalEstimator.BootstrapEstimates(
simulation_results,
{"num_simulations": num_bootstrap_simulations, "sample_size_fraction": sample_size_fraction},
)
return estimates
def _estimate_confidence_intervals_with_bootstrap(
self, estimate_value, confidence_level=None, num_simulations=None, sample_size_fraction=None
):
"""
Method to compute confidence interval using bootstrapped sampling.
:param estimate_value: obtained estimate's value
:param confidence_level: The level for which to compute CI (e.g., 95% confidence level translates to confidence_level=0.95)
:param num_simulations: The number of simulations to be performed to get the bootstrap confidence intervals.
:param sample_size_fraction: The fraction of the dataset to be resampled.
:returns: confidence interval at the specified level.
For more details on bootstrap or resampling statistics, refer to the following links:
https://ocw.mit.edu/courses/mathematics/18-05-introduction-to-probability-and-statistics-spring-2014/readings/MIT18_05S14_Reading24.pdf
https://projecteuclid.org/download/pdf_1/euclid.ss/1032280214
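A toy walk-through of the percentile logic implemented below (numbers invented; with only 5 replicates the interval degenerates to the extreme variations)::

    >>> import numpy as np
    >>> estimate_value, confidence_level = 10.0, 0.95
    >>> boot = np.array([9.0, 9.5, 10.0, 10.5, 11.0])
    >>> variations = np.sort(boot - estimate_value)  # [-1.0, -0.5, 0.0, 0.5, 1.0]
    >>> ub_idx = int((1 - confidence_level) * len(variations))  # 0
    >>> lb_idx = int(confidence_level * len(variations))  # 4
    >>> lower = estimate_value - variations[lb_idx]  # 10.0 - 1.0 = 9.0
    >>> upper = estimate_value - variations[ub_idx]  # 10.0 - (-1.0) = 11.0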
"""
# Using class default parameters if not specified
if num_simulations is None:
num_simulations = self.num_simulations
if sample_size_fraction is None:
sample_size_fraction = self.sample_size_fraction
# Checking if bootstrap_estimates are already computed
if self._bootstrap_estimates is None:
self._bootstrap_estimates = self._generate_bootstrap_estimates(num_simulations, sample_size_fraction)
elif CausalEstimator.is_bootstrap_parameter_changed(self._bootstrap_estimates.params, locals()):
# Check if any parameter changed since the previous bootstrap run
self._bootstrap_estimates = self._generate_bootstrap_estimates(num_simulations, sample_size_fraction)
# Now use the data obtained from the simulations to get the value of the confidence estimates
bootstrap_estimates = self._bootstrap_estimates.estimates
# Get the variations of each bootstrap estimate and sort
bootstrap_variations = [bootstrap_estimate - estimate_value for bootstrap_estimate in bootstrap_estimates]
sorted_bootstrap_variations = np.sort(bootstrap_variations)
# Now we take the (1- p)th and the (p)th variations, where p is the chosen confidence level
upper_bound_index = int((1 - confidence_level) * len(sorted_bootstrap_variations))
lower_bound_index = int(confidence_level * len(sorted_bootstrap_variations))
# Get the lower and upper bounds by subtracting the variations from the estimate
lower_bound = estimate_value - sorted_bootstrap_variations[lower_bound_index]
upper_bound = estimate_value - sorted_bootstrap_variations[upper_bound_index]
return lower_bound, upper_bound
def _estimate_confidence_intervals(self, confidence_level=None, method=None, **kwargs):
"""
This method is to be overridden by the child classes, so that they
can run a confidence interval estimation method suited to the specific
causal estimator.
"""
raise NotImplementedError(
(
"This method for estimating confidence intervals is "
+ CausalEstimator.DEFAULT_NOTIMPLEMENTEDERROR_MSG
+ " Meanwhile, you can try the bootstrap method (method='bootstrap') to estimate confidence intervals."
).format(self.__class__)
)
def estimate_confidence_intervals(self, estimate_value, confidence_level=None, method=None, **kwargs):
"""Find the confidence intervals corresponding to any estimator
By default, this is done with the help of bootstrapped confidence intervals
but can be overridden if the specific estimator implements other methods of estimating confidence intervals.
If the method provided is not bootstrap, this function calls the implementation of the specific estimator.
:param estimate_value: obtained estimate's value
:param method: Method for estimating confidence intervals.
:param confidence_level: The confidence level of the confidence intervals of the estimate.
:param kwargs: Other optional args to be passed to the CI method.
:returns: The obtained confidence interval.
"""
if method is None:
if self._confidence_intervals:
method = self._confidence_intervals # this is either True or methodname
else:
method = "default"
confidence_intervals = None
if confidence_level is None:
confidence_level = self.confidence_level
if method == "default" or method is True: # user has not provided any method
try:
confidence_intervals = self._estimate_confidence_intervals(confidence_level, method=method, **kwargs)
except NotImplementedError:
confidence_intervals = self._estimate_confidence_intervals_with_bootstrap(
estimate_value, confidence_level, **kwargs
)
else:
if method == "bootstrap":
confidence_intervals = self._estimate_confidence_intervals_with_bootstrap(
estimate_value, confidence_level, **kwargs
)
else:
confidence_intervals = self._estimate_confidence_intervals(confidence_level, method=method, **kwargs)
return confidence_intervals
def _estimate_std_error_with_bootstrap(self, num_simulations=None, sample_size_fraction=None):
"""Compute standard error using the bootstrap method. Standard error
and confidence intervals use the same parameter num_simulations for
the number of bootstrap simulations.
:param num_simulations: Number of bootstrapped samples.
:param sample_size_fraction: Fraction of data to be resampled.
:returns: Standard error of the obtained estimate.
"""
# Use existing params, if new user defined params are not present
if num_simulations is None:
num_simulations = self.num_simulations
if sample_size_fraction is None:
sample_size_fraction = self.sample_size_fraction
# Checking if bootstrap_estimates are already computed
if self._bootstrap_estimates is None:
self._bootstrap_estimates = self._generate_bootstrap_estimates(num_simulations, sample_size_fraction)
elif CausalEstimator.is_bootstrap_parameter_changed(self._bootstrap_estimates.params, locals()):
# Check if any parameter is changed from the previous std error estimate
self._bootstrap_estimates = self._generate_bootstrap_estimates(num_simulations, sample_size_fraction)
std_error = np.std(self._bootstrap_estimates.estimates)
return std_error
def _estimate_std_error(self, method=None, **kwargs):
"""
This method is to be overridden by the child classes, so that they
can run a standard error estimation method suited to the specific
causal estimator.
"""
raise NotImplementedError(
(
"This method for estimating standard errors is "
+ CausalEstimator.DEFAULT_NOTIMPLEMENTEDERROR_MSG
+ " Meanwhile, you can try the bootstrap method (method='bootstrap') to estimate standard errors."
).format(self.__class__)
)
def estimate_std_error(self, method=None, **kwargs):
"""Compute standard error of an obtained causal estimate.
:param method: Method for computing the standard error.
:param kwargs: Other optional parameters to be passed to the estimating method.
:returns: Standard error of the causal estimate.
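Example (a sketch; ``estimator`` is assumed to be a fitted estimator instance, and the kwargs shown are the optional bootstrap parameters used below)::

    >>> se = estimator.estimate_std_error(method="bootstrap", num_simulations=50)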
"""
if method is None:
if self._confidence_intervals:
method = self._confidence_intervals
else:
method = "default"
std_error = None
if method == "default" or method is True: # user has not provided any method
try:
std_error = self._estimate_std_error(method, **kwargs)
except NotImplementedError:
std_error = self._estimate_std_error_with_bootstrap(**kwargs)
else:
if method == "bootstrap":
std_error = self._estimate_std_error_with_bootstrap(**kwargs)
else:
std_error = self._estimate_std_error(method, **kwargs)
return std_error
def _test_significance_with_bootstrap(self, estimate_value, num_null_simulations=None):
"""Test statistical significance of an estimate using the bootstrap method.
:param estimate_value: Obtained estimate's value
:param num_null_simulations: Number of simulations for the null hypothesis
:returns: p-value of the statistical significance test.
"""
# Use existing params, if new user defined params are not present
if num_null_simulations is None:
num_null_simulations = self.num_null_simulations
do_retest = self._bootstrap_null_estimates is None or CausalEstimator.is_bootstrap_parameter_changed(
self._bootstrap_null_estimates.params, locals()
)
if do_retest:
null_estimates = np.zeros(num_null_simulations)
for i in range(num_null_simulations):
new_outcome = np.random.permutation(self._outcome)
new_data = self._data.assign(dummy_outcome=new_outcome)
# self._outcome = self._data["dummy_outcome"]
new_estimator = type(self)(
new_data,
self._target_estimand,
self._target_estimand.treatment_variable,
("dummy_outcome",),
test_significance=False,
evaluate_effect_strength=False,
confidence_intervals=False,
target_units=self._target_units,
effect_modifiers=self._effect_modifier_names,
**self.method_params,
)
new_effect = new_estimator.estimate_effect()
null_estimates[i] = new_effect.value
self._bootstrap_null_estimates = CausalEstimator.BootstrapEstimates(
null_estimates, {"num_null_simulations": num_null_simulations, "sample_size_fraction": 1}
)
# Processing the null hypothesis estimates
sorted_null_estimates = np.sort(self._bootstrap_null_estimates.estimates)
self.logger.debug("Null estimates: {0}".format(sorted_null_estimates))
median_estimate = sorted_null_estimates[int(num_null_simulations / 2)]
# Doing a two-sided test
if estimate_value > median_estimate:
# Being conservative with the p-value reported
estimate_index = np.searchsorted(sorted_null_estimates, estimate_value, side="left")
p_value = 1 - (estimate_index / num_null_simulations)
if estimate_value <= median_estimate:
# Being conservative with the p-value reported
estimate_index = np.searchsorted(sorted_null_estimates, estimate_value, side="right")
p_value = estimate_index / num_null_simulations
# If the estimate_index is 0, it depends on the number of simulations
if p_value == 0:
p_value = (0, 1 / len(sorted_null_estimates)) # a tuple determining the range.
elif p_value == 1:
p_value = (1 - 1 / len(sorted_null_estimates), 1)
signif_dict = {"p_value": p_value}
return signif_dict
def _test_significance(self, estimate_value, method=None, **kwargs):
"""
This method is to be overridden by the child classes, so that they
can run a significance test suited to the specific
causal estimator.
"""
raise NotImplementedError(
(
"This method for testing statistical significance is "
+ CausalEstimator.DEFAULT_NOTIMPLEMENTEDERROR_MSG
+ " Meanwhile, you can try the bootstrap method (method='bootstrap') to test statistical significance."
).format(self.__class__)
)
def test_significance(self, estimate_value, method=None, **kwargs):
"""Test statistical significance of obtained estimate.
By default, uses resampling to create a non-parametric significance test.
A general procedure. Individual child estimators can implement different methods.
If the method name is different from "bootstrap", this function calls the
implementation of the child estimator.
:param self: object instance of class Estimator
:param estimate_value: obtained estimate's value
:param method: Method for checking statistical significance
:returns: p-value from the significance test
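A toy sketch of the tail computation used by the bootstrap test (the real test refits the estimator on permuted outcomes to build the null distribution; numbers here are invented)::

    >>> import numpy as np
    >>> null = np.sort(np.array([-0.2, -0.1, 0.0, 0.1, 0.2]))
    >>> value = 0.15
    >>> idx = int(np.searchsorted(null, value, side="left"))  # 4
    >>> p_value = 1 - idx / len(null)  # 0.2, the conservative upper-tail mass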
"""
if method is None:
if self._significance_test:
method = self._significance_test # this is either True or methodname
else:
method = "default"
signif_dict = None
if method == "default" or method is True: # user has not provided any method
try:
signif_dict = self._test_significance(estimate_value, method, **kwargs)
except NotImplementedError:
signif_dict = self._test_significance_with_bootstrap(estimate_value, **kwargs)
else:
if method == "bootstrap":
signif_dict = self._test_significance_with_bootstrap(estimate_value, **kwargs)
else:
signif_dict = self._test_significance(estimate_value, method, **kwargs)
return signif_dict
def evaluate_effect_strength(self, estimate):
fraction_effect_explained = self._evaluate_effect_strength(estimate, method="fraction-effect")
# Need to test r-squared before supporting
# effect_r_squared = self._evaluate_effect_strength(estimate, method="r-squared")
strength_dict = {
"fraction-effect": fraction_effect_explained
# 'r-squared': effect_r_squared
}
return strength_dict
def _evaluate_effect_strength(self, estimate, method="fraction-effect"):
supported_methods = ["fraction-effect"]
if method not in supported_methods:
raise NotImplementedError("This method is not supported for evaluating effect strength")
if method == "fraction-effect":
naive_obs_estimate = self.estimate_effect_naive()
self.logger.debug(estimate.value, naive_obs_estimate.value)
fraction_effect_explained = estimate.value / naive_obs_estimate.value
return fraction_effect_explained
# elif method == "r-squared":
# outcome_mean = np.mean(self._outcome)
# total_variance = np.sum(np.square(self._outcome - outcome_mean))
# Assuming a linear model with one variable: the treatment
# Currently only works for continuous y
# causal_model = outcome_mean + estimate.value*self._treatment
# squared_residual = np.sum(np.square(self._outcome - causal_model))
# r_squared = 1 - (squared_residual/total_variance)
# return r_squared
else:
return None
def update_input(self, treatment_value, control_value, target_units):
"""Update the treatment value, control value and target units used for effect estimation."""
self._control_value = control_value
self._treatment_value = treatment_value
self._target_units = target_units
@staticmethod
def is_bootstrap_parameter_changed(bootstrap_estimates_params, given_params):
"""Check whether parameters of the bootstrap have changed.
This is an efficiency method that checks if fresh resampling of the bootstrap samples is required.
Returns True if parameters have changed and resampling should be done again.
:param bootstrap_estimates_params: A dictionary of parameters for the current bootstrap samples
:param given_params: A dictionary of parameters passed by the user
:returns: A binary flag denoting whether the parameters are different.
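Self-contained example, based directly on the logic below::

    >>> CausalEstimator.is_bootstrap_parameter_changed(
    ...     {"num_simulations": 100, "sample_size_fraction": 1},
    ...     {"num_simulations": 200},
    ... )
    True
    >>> CausalEstimator.is_bootstrap_parameter_changed(
    ...     {"num_simulations": 100}, {"num_simulations": None}
    ... )
    False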
"""
is_any_parameter_changed = False
for prm, val in bootstrap_estimates_params.items():
given_val = given_params.get(prm, None)
if given_val is not None and given_val != val:
is_any_parameter_changed = True
break
return is_any_parameter_changed
def target_units_tostr(self):
s = ""
if type(self._target_units) is str:
s += self._target_units
elif callable(self._target_units):
s += "Data subset defined by a function"
elif isinstance(self._target_units, pd.DataFrame):
s += "Data subset provided as a data frame"
return s
def signif_results_tostr(self, signif_results):
s = ""
pval = signif_results["p_value"]
if type(pval) is tuple:
s += "[{0}, {1}]".format(pval[0], pval[1])
else:
s += "{0}".format(pval)
return s
def estimate_effect(
treatment: Union[str, List[str]],
outcome: Union[str, List[str]],
identified_estimand: IdentifiedEstimand,
identifier_name: str,
method: CausalEstimator,
control_value: int = 0,
treatment_value: int = 1,
test_significance: Optional[bool] = None,
evaluate_effect_strength: bool = False,
confidence_intervals: bool = False,
target_units: str = "ate",
effect_modifiers: List[str] = [],
fit_estimator: bool = True,
method_params: Optional[Dict] = None,
):
"""Estimate the identified causal effect.
When called through the CausalModel API, an explicit method name must be specified. Method names follow the convention of identification method followed by the specific estimation method: "[backdoor/iv].estimation_method_name". The following methods are supported.
* Propensity Score Matching: "backdoor.propensity_score_matching"
* Propensity Score Stratification: "backdoor.propensity_score_stratification"
* Propensity Score-based Inverse Weighting: "backdoor.propensity_score_weighting"
* Linear Regression: "backdoor.linear_regression"
* Generalized Linear Models (e.g., logistic regression): "backdoor.generalized_linear_model"
* Instrumental Variables: "iv.instrumental_variable"
* Regression Discontinuity: "iv.regression_discontinuity"
In addition, you can directly call any of the EconML estimation methods. The convention is "backdoor.econml.path-to-estimator-class". For example, for the double machine learning estimator ("DML" class) that is located inside "dml" module of EconML, you can use the method name, "backdoor.econml.dml.DML". CausalML estimators can also be called. See `this demo notebook <https://py-why.github.io/dowhy/example_notebooks/dowhy-conditional-treatment-effects.html>`_.
:param treatment: Name of the treatment
:param outcome: Name of the outcome
:param identified_estimand: a probability expression
that represents the effect to be estimated. Output of
CausalModel.identify_effect method
:param identifier_name: name of the identification method used (e.g., "backdoor" or "iv")
:param method: an instance of a CausalEstimator subclass that implements the chosen estimation method
:param control_value: Value of the treatment in the control group, for effect estimation. If treatment is multi-variate, this can be a list.
:param treatment_value: Value of the treatment in the treated group, for effect estimation. If treatment is multi-variate, this can be a list.
:param test_significance: Binary flag on whether to additionally do a statistical significance test for the estimate.
:param evaluate_effect_strength: (Experimental) Binary flag on whether to estimate the relative strength of the treatment's effect. This measure can be used to compare different treatments for the same outcome (by running this method with different treatments sequentially).
:param confidence_intervals: (Experimental) Binary flag indicating whether confidence intervals should be computed.
:param target_units: (Experimental) The units for which the treatment effect should be estimated. This can be of three types. (1) a string for common specifications of target units (namely, "ate", "att" and "atc"), (2) a lambda function that can be used as an index for the data (pandas DataFrame), or (3) a new DataFrame that contains values of the effect_modifiers and effect will be estimated only for this new data.
:param effect_modifiers: Names of effect modifier variables can be (optionally) specified here too, since they do not affect identification. If None, the effect_modifiers from the CausalModel are used.
:param fit_estimator: Boolean flag on whether to fit the estimator.
Setting it to False is useful to estimate the effect on new data using a previously fitted estimator.
:param method_params: Dictionary containing any method-specific parameters. These are passed directly to the estimating method. See the docs for each estimation method for allowed method-specific params.
:returns: An instance of the CausalEstimate class, containing the causal effect estimate
and other method-dependent information
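Example (an illustrative sketch, not part of the original docstring; ``estimand`` and ``estimator`` are assumed to have been created beforehand, the latter being a CausalEstimator instance as required by the ``method`` parameter)::

    >>> est = estimate_effect(
    ...     treatment="v0",
    ...     outcome="y",
    ...     identified_estimand=estimand,
    ...     identifier_name="backdoor",
    ...     method=estimator,
    ... )
    >>> est.value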
"""
treatment = parse_state(treatment)
outcome = parse_state(outcome)
causal_estimator_class = method.__class__
identified_estimand.set_identifier_method(identifier_name)
if identified_estimand.no_directed_path:
logger.warning("No directed path from {0} to {1}.".format(treatment, outcome))
return CausalEstimate(
0, identified_estimand, None, control_value=control_value, treatment_value=treatment_value
)
# Check if estimator's target estimand is identified
elif identified_estimand.estimands[identifier_name] is None:
logger.error("No valid identified estimand available.")
return CausalEstimate(None, None, None, control_value=control_value, treatment_value=treatment_value)
method.update_input(treatment_value, control_value, target_units)
estimate = method.estimate_effect()
# Store parameters inside estimate object for refutation methods
# TODO: This add_params needs to move to the estimator class
# inside estimate_effect and estimate_conditional_effect
estimate.add_params(
estimand_type=identified_estimand.estimand_type,
estimator_class=causal_estimator_class,
test_significance=test_significance,
evaluate_effect_strength=evaluate_effect_strength,
confidence_intervals=confidence_intervals,
target_units=target_units,
effect_modifiers=effect_modifiers,
method_params=method_params,
)
return estimate
class CausalEstimate:
"""Class for the estimate object that every causal estimator returns"""
def __init__(
self,
estimate,
target_estimand,
realized_estimand_expr,
control_value,
treatment_value,
conditional_estimates=None,
**kwargs,
):
self.value = estimate
self.target_estimand = target_estimand
self.realized_estimand_expr = realized_estimand_expr
self.control_value = control_value
self.treatment_value = treatment_value
self.conditional_estimates = conditional_estimates
self.params = kwargs
if self.params is not None:
for key, value in self.params.items():
setattr(self, key, value)
self.effect_strength = None
def add_estimator(self, estimator_instance):
self.estimator = estimator_instance
def add_effect_strength(self, strength_dict):
self.effect_strength = strength_dict
def add_params(self, **kwargs):
self.params.update(kwargs)
def get_confidence_intervals(self, confidence_level=None, method=None, **kwargs):
"""Get confidence intervals of the obtained estimate.
By default, this is done with the help of bootstrapped confidence intervals
but can be overridden if the specific estimator implements other methods of estimating confidence intervals.
If the method provided is not bootstrap, this function calls the implementation of the specific estimator.
:param method: Method for estimating confidence intervals.
:param confidence_level: The confidence level of the confidence intervals of the estimate.
:param kwargs: Other optional args to be passed to the CI method.
:returns: The obtained confidence interval.
"""
confidence_intervals = self.estimator.estimate_confidence_intervals(
estimate_value=self.value, confidence_level=confidence_level, method=method, **kwargs
)
return confidence_intervals
def get_standard_error(self, method=None, **kwargs):
"""Get standard error of the obtained estimate.
By default, this is done with the help of bootstrapped standard errors
but can be overridden if the specific estimator implements other methods of estimating standard error.
If the method provided is not bootstrap, this function calls the implementation of the specific estimator.
:param method: Method for computing the standard error.
:param kwargs: Other optional parameters to be passed to the estimating method.
:returns: Standard error of the causal estimate.
"""
std_error = self.estimator.estimate_std_error(method=method, **kwargs)
return std_error
def test_stat_significance(self, method=None, **kwargs):
"""Test statistical significance of the estimate obtained.
By default, uses resampling to create a non-parametric significance test.
Individual child estimators can implement different methods.
If the method name is different from "bootstrap", this function calls the
implementation of the child estimator.
:param method: Method for checking statistical significance
:param kwargs: Other optional parameters to be passed to the estimating method.
:returns: p-value from the significance test
"""
signif_results = self.estimator.test_significance(self.value, method=method, **kwargs)
return {"p_value": signif_results["p_value"]}
def estimate_conditional_effects(
self, effect_modifiers=None, num_quantiles=CausalEstimator.NUM_QUANTILES_TO_DISCRETIZE_CONT_COLS
):
"""Estimate treatment effect conditioned on given variables.
If a numeric effect modifier is provided, it is discretized into quantile bins. If you would like a custom discretization, you can do so yourself: create a new column containing the discretized effect modifier and then include that column's name in the effect_modifier_names argument.
:param effect_modifiers: Names of effect modifier variables over which the conditional effects will be estimated. If not provided, defaults to the effect modifiers specified during creation of the CausalEstimator object.
:param num_quantiles: The number of quantiles into which a numeric effect modifier variable is discretized. Does not affect any categorical effect modifiers.
:returns: A (multi-index) dataframe that provides separate effects for each value of the (discretized) effect modifiers.
"""
return self.estimator._estimate_conditional_effects(
self.estimator._estimate_effect_fn, effect_modifiers, num_quantiles
)
def interpret(self, method_name=None, **kwargs):
"""Interpret the causal estimate.
:param method_name: Method used (string) or a list of methods. If None, then the default for the specific estimator is used.
:param kwargs: Optional parameters that are directly passed to the interpreter method.
:returns: None
"""
if method_name is None:
method_name = self.estimator.interpret_method
method_name_arr = parse_state(method_name)
for method in method_name_arr:
interpreter = interpreters.get_class_object(method)
interpreter(self, **kwargs).interpret()
def __str__(self):
s = "*** Causal Estimate ***\n"
# No estimand was identified (identification failed)
if self.target_estimand is None:
return "Estimation failed! No relevant identified estimand available for this estimation method."
s += "\n## Identified estimand\n{0}".format(self.target_estimand.__str__(only_target_estimand=True))
s += "\n## Realized estimand\n{0}".format(self.realized_estimand_expr)
if hasattr(self, "estimator"):
s += "\nTarget units: {0}\n".format(self.estimator.target_units_tostr())
s += "\n## Estimate\n"
s += "Mean value: {0}\n".format(self.value)
s += ""
if hasattr(self, "cate_estimates"):
s += "Effect estimates: {0}\n".format(self.cate_estimates)
if hasattr(self, "estimator"):
if self.estimator._significance_test:
s += "p-value: {0}\n".format(self.estimator.signif_results_tostr(self.test_stat_significance()))
if self.estimator._confidence_intervals:
s += "{0}% confidence interval: {1}\n".format(
100 * self.estimator.confidence_level, self.get_confidence_intervals()
)
if self.conditional_estimates is not None:
s += "### Conditional Estimates\n"
s += str(self.conditional_estimates)
if self.effect_strength is not None:
s += "\n## Effect Strength\n"
s += "Change in outcome attributable to treatment: {}\n".format(self.effect_strength["fraction-effect"])
# s += "Variance in outcome explained by treatment: {}\n".format(self.effect_strength["r-squared"])
return s
class RealizedEstimand(object):
def __init__(self, identified_estimand, estimator_name):
self.treatment_variable = identified_estimand.treatment_variable
self.outcome_variable = identified_estimand.outcome_variable
self.backdoor_variables = identified_estimand.get_backdoor_variables()
self.instrumental_variables = identified_estimand.instrumental_variables
self.estimand_type = identified_estimand.estimand_type
self.estimand_expression = None
self.assumptions = None
self.estimator_name = estimator_name
def update_assumptions(self, estimator_assumptions):
self.assumptions = estimator_assumptions
def update_estimand_expression(self, estimand_expression):
self.estimand_expression = estimand_expression
def __str__(self):
s = "Realized estimand: {0}\n".format(self.estimator_name)
s += "Realized estimand type: {0}\n".format(self.estimand_type)
s += "Estimand expression:\n{0}\n".format(sp.pretty(self.estimand_expression))
j = 1
for ass_name, ass_str in self.assumptions.items():
s += "Estimand assumption {0}, {1}: {2}\n".format(j, ass_name, ass_str)
j += 1
return s

(dowhy/causal_estimator.py, content after the change)

import copy
import logging
from collections import namedtuple
from typing import Dict, List, Optional, Union
import numpy as np
import pandas as pd
import sympy as sp
from sklearn.utils import resample
import dowhy.interpreters as interpreters
from dowhy.causal_identifier.identified_estimand import IdentifiedEstimand
from dowhy.utils.api import parse_state
logger = logging.getLogger(__name__)
class CausalEstimator:
"""Base class for an estimator of causal effect.
Subclasses implement different estimation methods. All estimation methods are in the package "dowhy.causal_estimators"
"""
# The default number of simulations for statistical testing
DEFAULT_NUMBER_OF_SIMULATIONS_STAT_TEST = 1000
# The default number of simulations to obtain confidence intervals
DEFAULT_NUMBER_OF_SIMULATIONS_CI = 100
# The portion of the total size that should be taken each time to find the confidence intervals
# 1 is the recommended value
# https://ocw.mit.edu/courses/mathematics/18-05-introduction-to-probability-and-statistics-spring-2014/readings/MIT18_05S14_Reading24.pdf
# https://projecteuclid.org/download/pdf_1/euclid.ss/1032280214
DEFAULT_SAMPLE_SIZE_FRACTION = 1
# The default Confidence Level
DEFAULT_CONFIDENCE_LEVEL = 0.95
# Number of quantiles to discretize continuous columns, for applying groupby
NUM_QUANTILES_TO_DISCRETIZE_CONT_COLS = 5
# Prefix to add to temporary categorical variables created after discretization
TEMP_CAT_COLUMN_PREFIX = "__categorical__"
DEFAULT_NOTIMPLEMENTEDERROR_MSG = "not yet implemented for {0}. If you would like this to be implemented in the next version, please raise an issue at https://github.com/microsoft/dowhy/issues"
BootstrapEstimates = namedtuple("BootstrapEstimates", ["estimates", "params"])
DEFAULT_INTERPRET_METHOD = ["textual_effect_interpreter"]
# std args to be removed from locals() before being passed to args_dict
_STD_INIT_ARGS = ("self", "__class__", "args", "kwargs")
def __init__(
self,
identified_estimand: IdentifiedEstimand,
test_significance: bool = False,
evaluate_effect_strength: bool = False,
confidence_intervals: bool = False,
num_null_simulations: int = DEFAULT_NUMBER_OF_SIMULATIONS_STAT_TEST,
num_simulations: int = DEFAULT_NUMBER_OF_SIMULATIONS_CI,
sample_size_fraction: int = DEFAULT_SAMPLE_SIZE_FRACTION,
confidence_level: float = DEFAULT_CONFIDENCE_LEVEL,
need_conditional_estimates: Union[bool, str] = "auto",
num_quantiles_to_discretize_cont_cols: int = NUM_QUANTILES_TO_DISCRETIZE_CONT_COLS,
**_,
):
"""Initializes an estimator with data and names of relevant variables.
This method is called from the constructors of its child classes.
:param identified_estimand: probability expression
representing the target identified estimand to estimate.
:param test_significance: Binary flag or a string indicating whether to test significance and by which method. All estimators support test_significance="bootstrap" that estimates a p-value for the obtained estimate using the bootstrap method. Individual estimators can override this to support custom testing methods. The bootstrap method supports an optional parameter, num_null_simulations. If False, no testing is done. If True, significance of the estimate is tested using the custom method if available, otherwise by bootstrap.
:param evaluate_effect_strength: (Experimental) whether to evaluate the strength of effect
:param confidence_intervals: Binary flag or a string indicating whether the confidence intervals should be computed and which method should be used. All methods support estimation of confidence intervals using the bootstrap method by using the parameter confidence_intervals="bootstrap". The bootstrap method takes in two arguments (num_simulations and sample_size_fraction) that can be optionally specified in the params dictionary. Estimators may also override this to implement their own confidence interval method. If this parameter is False, no confidence intervals are computed. If True, confidence intervals are computed by the estimator's specific method if available, otherwise through bootstrap
:param num_null_simulations: The number of simulations for testing the
statistical significance of the estimator
:param num_simulations: The number of simulations for finding the
confidence interval (and/or standard error) for an estimate
:param sample_size_fraction: The fraction of the data to be resampled for the bootstrap
estimator
:param confidence_level: The confidence level of the confidence
interval estimate
:param need_conditional_estimates: Boolean flag indicating whether
conditional estimates should be computed. Defaults to True if
there are effect modifiers in the graph
:param num_quantiles_to_discretize_cont_cols: The number of quantiles
into which a numeric effect modifier is split, to enable
estimation of conditional treatment effect over it.
:param kwargs: (optional) Additional estimator-specific parameters
:returns: an instance of the estimator class.
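Example (a sketch; ``SomeEstimator`` stands in for any concrete subclass and ``estimand`` for an IdentifiedEstimand, both names invented here)::

    >>> estimator = SomeEstimator(
    ...     identified_estimand=estimand,
    ...     confidence_intervals="bootstrap",
    ...     num_simulations=50,
    ... )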
"""
self._target_estimand = identified_estimand
self._significance_test = test_significance
self._effect_strength_eval = evaluate_effect_strength
self._confidence_intervals = confidence_intervals
# Setting the default interpret method
self.interpret_method = CausalEstimator.DEFAULT_INTERPRET_METHOD
self.logger = logging.getLogger(__name__)
# Check if some parameters were set, otherwise set to default values
self.num_null_simulations = num_null_simulations
self.num_simulations = num_simulations
self.sample_size_fraction = sample_size_fraction
self.confidence_level = confidence_level
self.num_quantiles_to_discretize_cont_cols = num_quantiles_to_discretize_cont_cols
# Estimate conditional estimates by default
self.need_conditional_estimates = need_conditional_estimates
self._bootstrap_estimates = None
self._bootstrap_null_estimates = None
def _set_data(self, data: pd.DataFrame, treatment_name: List[str], outcome_name: List[str]):
"""Sets the data for the estimator
:param data: data frame containing the data
:param treatment_name: name of the treatment variable
:param outcome_name: name of the outcome variable
"""
self._data = data
self._treatment_name = treatment_name
self._outcome_name = outcome_name[0]  # assuming one-dimensional outcome
self._treatment = self._data[self._treatment_name]
self._outcome = self._data[self._outcome_name]
def _set_effect_modifiers(self, effect_modifier_names: Optional[List[str]] = None):
"""Sets the effect modifiers for the estimator
Note that, as a side-effect, this method also updates `need_conditional_estimates` according to the effect modifiers that are present.
:param effect_modifier_names: Variables on which to compute separate
effects, or return a heterogeneous effect function. Not all
methods support this currently.
"""
self._effect_modifiers = effect_modifier_names
if effect_modifier_names is not None:
self._effect_modifier_names = [cname for cname in effect_modifier_names if cname in self._data.columns]
if len(self._effect_modifier_names) > 0:
self._effect_modifiers = self._data[self._effect_modifier_names]
self._effect_modifiers = pd.get_dummies(self._effect_modifiers, drop_first=True)
self.logger.debug("Effect modifiers: " + ",".join(self._effect_modifier_names))
else:
self._effect_modifier_names = []
else:
self._effect_modifier_names = []
        self.need_conditional_estimates = (
            self.need_conditional_estimates
            if self.need_conditional_estimates != "auto"
            else len(self._effect_modifier_names) > 0
        )
def _set_identified_estimand(self, new_identified_estimand):
"""Method used internally to change the target estimand (required by some refuters)
:param new_identified_estimand: The new target_estimand to use
"""
self._target_estimand = new_identified_estimand
def get_new_estimator_object(
self,
identified_estimand,
test_significance=False,
evaluate_effect_strength=False,
confidence_intervals=None,
):
"""Create a new estimator of the same type as the one passed in the estimate argument.
Creates a new object with the identified_estimand
:param identified_estimand: IdentifiedEstimand
An instance of the identified estimand class that provides the information with
respect to which causal pathways are employed when the treatment effects the outcome
:returns: A new instance of the same estimator class that had generated the given estimate.
"""
new_estimator = copy.deepcopy(self)
new_estimator._target_estimand = identified_estimand
new_estimator._test_significance = test_significance
new_estimator._evaluate_effect_strength = evaluate_effect_strength
new_estimator._confidence_intervals = (
self._confidence_intervals if confidence_intervals is None else confidence_intervals
)
return new_estimator
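    # Example (hypothetical usage; `estimator`, `estimand` and `new_data` are assumed
    # to exist): clone a fitted estimator for a refutation or bootstrap run.
    #
    #   new_est = estimator.get_new_estimator_object(estimand, confidence_intervals=False)
    #   new_est.fit(new_data, estimand.treatment_variable, estimand.outcome_variable)
    #   new_effect = new_est.estimate_effect()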
def estimate_effect_naive(self):
# TODO Only works for binary treatment
df_withtreatment = self._data.loc[self._data[self._treatment_name] == 1]
df_notreatment = self._data.loc[self._data[self._treatment_name] == 0]
est = np.mean(df_withtreatment[self._outcome_name]) - np.mean(df_notreatment[self._outcome_name])
return CausalEstimate(est, None, None, control_value=0, treatment_value=1)
def _estimate_effect_fn(self, data_df):
"""Function used in conditional effect estimation. This function is to be overridden by each child estimator.
The overridden function should take in a dataframe as input and return the estimate for that data.
"""
raise NotImplementedError(
("Conditional treatment effects are " + CausalEstimator.DEFAULT_NOTIMPLEMENTEDERROR_MSG).format(
self.__class__
)
)
def _estimate_conditional_effects(self, estimate_effect_fn, effect_modifier_names=None, num_quantiles=None):
"""Estimate conditional treatment effects. Common method for all estimators that utilizes a specific estimate_effect_fn implemented by each child estimator.
If a numeric effect modifier is provided, it is discretized into quantile bins. If you would like a custom discretization, you can do so yourself: create a new column containing the discretized effect modifier and then include that column's name in the effect_modifier_names argument.
:param estimate_effect_fn: Function that has a single parameter (a data frame) and returns the treatment effect estimate on that data.
:param effect_modifier_names: Names of effect modifier variables over which the conditional effects will be estimated. If not provided, defaults to the effect modifiers specified during creation of the CausalEstimator object.
:param num_quantiles: The number of quantiles into which a numeric effect modifier variable is discretized. Does not affect any categorical effect modifiers.
:returns: A (multi-index) dataframe that provides separate effects for each value of the (discretized) effect modifiers.
"""
# Defaulting to class default values if parameters are not provided
if effect_modifier_names is None:
effect_modifier_names = self._effect_modifier_names
if num_quantiles is None:
num_quantiles = self.num_quantiles_to_discretize_cont_cols
# Checking that there is at least one effect modifier
if not effect_modifier_names:
raise ValueError("At least one effect modifier should be specified to compute conditional effects.")
# Making sure that effect_modifier_names is a list
effect_modifier_names = parse_state(effect_modifier_names)
if not all(em in self._effect_modifier_names for em in effect_modifier_names):
            self.logger.warning(
"At least one of the provided effect modifiers was not included while fitting the estimator. You may get incorrect results. To resolve, fit the estimator again by providing the updated effect modifiers in estimate_effect()."
)
# Making a copy since we are going to be changing effect modifier names
effect_modifier_names = effect_modifier_names.copy()
prefix = CausalEstimator.TEMP_CAT_COLUMN_PREFIX
# For every numeric effect modifier, adding a temp categorical column
for i in range(len(effect_modifier_names)):
em = effect_modifier_names[i]
if pd.api.types.is_numeric_dtype(self._data[em].dtypes):
self._data[prefix + str(em)] = pd.qcut(self._data[em], num_quantiles, duplicates="drop")
effect_modifier_names[i] = prefix + str(em)
# Grouping by effect modifiers and computing effect separately
by_effect_mods = self._data.groupby(effect_modifier_names)
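        # NOTE: cond_est_fn below is currently unused; the conditional estimates are
        # computed by applying estimate_effect_fn to each group.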
cond_est_fn = lambda x: self._do(self._treatment_value, x) - self._do(self._control_value, x)
conditional_estimates = by_effect_mods.apply(estimate_effect_fn)
# Deleting the temporary categorical columns
for em in effect_modifier_names:
if em.startswith(prefix):
self._data.pop(em)
return conditional_estimates
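    # A sketch of the discretization performed above, assuming a numeric effect
    # modifier named "age" (hypothetical column) and num_quantiles=5:
    #
    #   df["__categorical__age"] = pd.qcut(df["age"], 5, duplicates="drop")
    #   conditional = df.groupby("__categorical__age").apply(estimate_effect_fn)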
def _do(self, x, data_df=None):
raise NotImplementedError(
("Do-operator is " + CausalEstimator.DEFAULT_NOTIMPLEMENTEDERROR_MSG).format(self.__class__)
)
def do(self, x, data_df=None):
"""Method that implements the do-operator.
Given a value x for the treatment, returns the expected value of the outcome when the treatment is intervened to a value x.
:param x: Value of the treatment
:param data_df: Data on which the do-operator is to be applied.
:returns: Value of the outcome when treatment is intervened/set to x.
"""
est = self._do(x, data_df)
return est
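    # Example (hypothetical; requires a child class that implements _do):
    #
    #   y_treated = estimator.do(1)  # expected outcome under do(T=1)
    #   y_control = estimator.do(0)  # expected outcome under do(T=0)
    #   effect = y_treated - y_control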
def construct_symbolic_estimator(self, estimand):
raise NotImplementedError(("Symbolic estimator string is ").format(self.__class__))
def _generate_bootstrap_estimates(self, num_bootstrap_simulations, sample_size_fraction):
"""Helper function to generate causal estimates over bootstrapped samples.
:param num_bootstrap_simulations: Number of simulations for the bootstrap method.
:param sample_size_fraction: Fraction of the dataset to be resampled.
:returns: A collections.namedtuple containing a list of bootstrapped estimates and a dictionary containing parameters used for the bootstrap.
"""
# The array that stores the results of all estimations
simulation_results = np.zeros(num_bootstrap_simulations)
        # Compute the sample size as a fraction of the population (full dataset) size
sample_size = int(sample_size_fraction * len(self._data))
if sample_size > len(self._data):
self.logger.warning("WARN: The sample size is greater than the data being sampled")
self.logger.info("INFO: The sample size: {}".format(sample_size))
self.logger.info("INFO: The number of simulations: {}".format(num_bootstrap_simulations))
# Perform the set number of simulations
for index in range(num_bootstrap_simulations):
new_data = resample(self._data, n_samples=sample_size)
new_estimator = self.get_new_estimator_object(
self._target_estimand,
test_significance=False,
evaluate_effect_strength=False,
confidence_intervals=False,
)
new_estimator.fit(
new_data,
self._target_estimand.treatment_variable,
self._target_estimand.outcome_variable,
effect_modifier_names=self._effect_modifier_names,
)
new_effect = new_estimator.estimate_effect(
treatment_value=self._treatment_value,
control_value=self._control_value,
target_units=self._target_units,
)
simulation_results[index] = new_effect.value
estimates = CausalEstimator.BootstrapEstimates(
simulation_results,
{"num_simulations": num_bootstrap_simulations, "sample_size_fraction": sample_size_fraction},
)
return estimates
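    # The returned namedtuple can be consumed as follows (sketch; `estimator` is a
    # fitted estimator instance):
    #
    #   boots = estimator._generate_bootstrap_estimates(100, 1.0)
    #   std_error = np.std(boots.estimates)
    #   boots.params  # {"num_simulations": 100, "sample_size_fraction": 1.0}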
def _estimate_confidence_intervals_with_bootstrap(
self, estimate_value, confidence_level=None, num_simulations=None, sample_size_fraction=None
):
"""
Method to compute confidence interval using bootstrapped sampling.
:param estimate_value: obtained estimate's value
:param confidence_level: The level for which to compute CI (e.g., 95% confidence level translates to confidence_level=0.95)
:param num_simulations: The number of simulations to be performed to get the bootstrap confidence intervals.
:param sample_size_fraction: The fraction of the dataset to be resampled.
:returns: confidence interval at the specified level.
For more details on bootstrap or resampling statistics, refer to the following links:
https://ocw.mit.edu/courses/mathematics/18-05-introduction-to-probability-and-statistics-spring-2014/readings/MIT18_05S14_Reading24.pdf
https://projecteuclid.org/download/pdf_1/euclid.ss/1032280214
"""
# Using class default parameters if not specified
if num_simulations is None:
num_simulations = self.num_simulations
if sample_size_fraction is None:
sample_size_fraction = self.sample_size_fraction
# Checking if bootstrap_estimates are already computed
if self._bootstrap_estimates is None:
self._bootstrap_estimates = self._generate_bootstrap_estimates(num_simulations, sample_size_fraction)
elif CausalEstimator.is_bootstrap_parameter_changed(self._bootstrap_estimates.params, locals()):
# Checked if any parameter is changed from the previous std error estimate
self._bootstrap_estimates = self._generate_bootstrap_estimates(num_simulations, sample_size_fraction)
# Now use the data obtained from the simulations to get the value of the confidence estimates
bootstrap_estimates = self._bootstrap_estimates.estimates
# Get the variations of each bootstrap estimate and sort
bootstrap_variations = [bootstrap_estimate - estimate_value for bootstrap_estimate in bootstrap_estimates]
sorted_bootstrap_variations = np.sort(bootstrap_variations)
        # Now we take the (1 - p)th and the (p)th variations, where p is the chosen confidence level
upper_bound_index = int((1 - confidence_level) * len(sorted_bootstrap_variations))
lower_bound_index = int(confidence_level * len(sorted_bootstrap_variations))
# Get the lower and upper bounds by subtracting the variations from the estimate
lower_bound = estimate_value - sorted_bootstrap_variations[lower_bound_index]
upper_bound = estimate_value - sorted_bootstrap_variations[upper_bound_index]
return lower_bound, upper_bound
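    # Worked example of the index arithmetic above: with confidence_level=0.95 and
    # 100 bootstrap variations, upper_bound_index = int(0.05 * 100) = 5 and
    # lower_bound_index = int(0.95 * 100) = 95, so the interval is obtained by
    # subtracting the 95th-smallest and 5th-smallest deviations from the point
    # estimate.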
def _estimate_confidence_intervals(self, confidence_level=None, method=None, **kwargs):
"""
        This method is to be overridden by the child classes, so that they
can run a confidence interval estimation method suited to the specific
causal estimator.
"""
raise NotImplementedError(
(
"This method for estimating confidence intervals is "
+ CausalEstimator.DEFAULT_NOTIMPLEMENTEDERROR_MSG
+ " Meanwhile, you can try the bootstrap method (method='bootstrap') to estimate confidence intervals."
).format(self.__class__)
)
def estimate_confidence_intervals(self, estimate_value, confidence_level=None, method=None, **kwargs):
"""Find the confidence intervals corresponding to any estimator
By default, this is done with the help of bootstrapped confidence intervals
but can be overridden if the specific estimator implements other methods of estimating confidence intervals.
If the method provided is not bootstrap, this function calls the implementation of the specific estimator.
:param estimate_value: obtained estimate's value
:param method: Method for estimating confidence intervals.
:param confidence_level: The confidence level of the confidence intervals of the estimate.
:param kwargs: Other optional args to be passed to the CI method.
:returns: The obtained confidence interval.
"""
if method is None:
if self._confidence_intervals:
method = self._confidence_intervals # this is either True or methodname
else:
method = "default"
confidence_intervals = None
if confidence_level is None:
confidence_level = self.confidence_level
if method == "default" or method is True: # user has not provided any method
try:
confidence_intervals = self._estimate_confidence_intervals(confidence_level, method=method, **kwargs)
except NotImplementedError:
confidence_intervals = self._estimate_confidence_intervals_with_bootstrap(
estimate_value, confidence_level, **kwargs
)
else:
if method == "bootstrap":
confidence_intervals = self._estimate_confidence_intervals_with_bootstrap(
estimate_value, confidence_level, **kwargs
)
else:
confidence_intervals = self._estimate_confidence_intervals(confidence_level, method=method, **kwargs)
return confidence_intervals
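    # Example (hypothetical usage on a fitted estimator):
    #
    #   lo, hi = estimator.estimate_confidence_intervals(
    #       estimate_value=effect.value, confidence_level=0.95, method="bootstrap",
    #       num_simulations=200, sample_size_fraction=1.0,
    #   )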
def _estimate_std_error_with_bootstrap(self, num_simulations=None, sample_size_fraction=None):
"""Compute standard error using the bootstrap method. Standard error
and confidence intervals use the same parameter num_simulations for
the number of bootstrap simulations.
:param num_simulations: Number of bootstrapped samples.
:param sample_size_fraction: Fraction of data to be resampled.
:returns: Standard error of the obtained estimate.
"""
# Use existing params, if new user defined params are not present
if num_simulations is None:
num_simulations = self.num_simulations
if sample_size_fraction is None:
sample_size_fraction = self.sample_size_fraction
# Checking if bootstrap_estimates are already computed
if self._bootstrap_estimates is None:
self._bootstrap_estimates = self._generate_bootstrap_estimates(num_simulations, sample_size_fraction)
elif CausalEstimator.is_bootstrap_parameter_changed(self._bootstrap_estimates.params, locals()):
# Check if any parameter is changed from the previous std error estimate
self._bootstrap_estimates = self._generate_bootstrap_estimates(num_simulations, sample_size_fraction)
std_error = np.std(self._bootstrap_estimates.estimates)
return std_error
def _estimate_std_error(self, method=None, **kwargs):
"""
        This method is to be overridden by the child classes, so that they
can run a standard error estimation method suited to the specific
causal estimator.
"""
raise NotImplementedError(
(
"This method for estimating standard errors is "
+ CausalEstimator.DEFAULT_NOTIMPLEMENTEDERROR_MSG
+ " Meanwhile, you can try the bootstrap method (method='bootstrap') to estimate standard errors."
).format(self.__class__)
)
def estimate_std_error(self, method=None, **kwargs):
"""Compute standard error of an obtained causal estimate.
:param method: Method for computing the standard error.
:param kwargs: Other optional parameters to be passed to the estimating method.
:returns: Standard error of the causal estimate.
"""
if method is None:
if self._confidence_intervals:
method = self._confidence_intervals
else:
method = "default"
std_error = None
if method == "default" or method is True: # user has not provided any method
try:
std_error = self._estimate_std_error(method, **kwargs)
except NotImplementedError:
std_error = self._estimate_std_error_with_bootstrap(**kwargs)
else:
if method == "bootstrap":
std_error = self._estimate_std_error_with_bootstrap(**kwargs)
else:
std_error = self._estimate_std_error(method, **kwargs)
return std_error
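    # Example (hypothetical usage on a fitted estimator):
    #
    #   se = estimator.estimate_std_error(method="bootstrap", num_simulations=200)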
def _test_significance_with_bootstrap(self, estimate_value, num_null_simulations=None):
"""Test statistical significance of an estimate using the bootstrap method.
:param estimate_value: Obtained estimate's value
:param num_null_simulations: Number of simulations for the null hypothesis
:returns: p-value of the statistical significance test.
"""
# Use existing params, if new user defined params are not present
if num_null_simulations is None:
num_null_simulations = self.num_null_simulations
do_retest = self._bootstrap_null_estimates is None or CausalEstimator.is_bootstrap_parameter_changed(
self._bootstrap_null_estimates.params, locals()
)
if do_retest:
null_estimates = np.zeros(num_null_simulations)
for i in range(num_null_simulations):
new_outcome = np.random.permutation(self._outcome)
new_data = self._data.assign(dummy_outcome=new_outcome)
# self._outcome = self._data["dummy_outcome"]
new_estimator = self.get_new_estimator_object(
self._target_estimand,
test_significance=False,
evaluate_effect_strength=False,
confidence_intervals=False,
)
new_estimator.fit(
data=new_data,
treatment_name=self._target_estimand.treatment_variable,
outcome_name=("dummy_outcome",),
effect_modifier_names=self._effect_modifier_names,
)
new_effect = new_estimator.estimate_effect(
target_units=self._target_units,
)
null_estimates[i] = new_effect.value
self._bootstrap_null_estimates = CausalEstimator.BootstrapEstimates(
null_estimates, {"num_null_simulations": num_null_simulations, "sample_size_fraction": 1}
)
# Processing the null hypothesis estimates
sorted_null_estimates = np.sort(self._bootstrap_null_estimates.estimates)
self.logger.debug("Null estimates: {0}".format(sorted_null_estimates))
median_estimate = sorted_null_estimates[int(num_null_simulations / 2)]
# Doing a two-sided test
        if estimate_value > median_estimate:
            # Being conservative with the p-value reported
            estimate_index = np.searchsorted(sorted_null_estimates, estimate_value, side="left")
            p_value = 1 - (estimate_index / num_null_simulations)
        else:
            # Being conservative with the p-value reported
            estimate_index = np.searchsorted(sorted_null_estimates, estimate_value, side="right")
            p_value = estimate_index / num_null_simulations
        # If the p-value is exactly 0 or 1, report a range bounded by the resolution of the simulation count
if p_value == 0:
p_value = (0, 1 / len(sorted_null_estimates)) # a tuple determining the range.
elif p_value == 1:
p_value = (1 - 1 / len(sorted_null_estimates), 1)
signif_dict = {"p_value": p_value}
return signif_dict
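    # Note on the returned p-value: when the estimate lies beyond all null estimates,
    # only a range can be reported. For example, with 1000 null simulations an extreme
    # estimate yields p_value = (0, 0.001), i.e. below the resolution afforded by the
    # simulation count.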
def _test_significance(self, estimate_value, method=None, **kwargs):
"""
        This method is to be overridden by the child classes, so that they
can run a significance test suited to the specific
causal estimator.
"""
raise NotImplementedError(
(
"This method for testing statistical significance is "
+ CausalEstimator.DEFAULT_NOTIMPLEMENTEDERROR_MSG
+ " Meanwhile, you can try the bootstrap method (method='bootstrap') to test statistical significance."
).format(self.__class__)
)
def test_significance(self, estimate_value, method=None, **kwargs):
"""Test statistical significance of obtained estimate.
By default, uses resampling to create a non-parametric significance test.
A general procedure. Individual child estimators can implement different methods.
If the method name is different from "bootstrap", this function calls the
implementation of the child estimator.
:param self: object instance of class Estimator
:param estimate_value: obtained estimate's value
:param method: Method for checking statistical significance
:returns: p-value from the significance test
"""
if method is None:
if self._significance_test:
method = self._significance_test # this is either True or methodname
else:
method = "default"
signif_dict = None
if method == "default" or method is True: # user has not provided any method
try:
signif_dict = self._test_significance(estimate_value, method, **kwargs)
except NotImplementedError:
signif_dict = self._test_significance_with_bootstrap(estimate_value, **kwargs)
else:
if method == "bootstrap":
signif_dict = self._test_significance_with_bootstrap(estimate_value, **kwargs)
else:
signif_dict = self._test_significance(estimate_value, method, **kwargs)
return signif_dict
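    # Example (hypothetical usage on a fitted estimator):
    #
    #   res = estimator.test_significance(effect.value, method="bootstrap",
    #                                     num_null_simulations=500)
    #   res["p_value"]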
def evaluate_effect_strength(self, estimate):
fraction_effect_explained = self._evaluate_effect_strength(estimate, method="fraction-effect")
# Need to test r-squared before supporting
# effect_r_squared = self._evaluate_effect_strength(estimate, method="r-squared")
strength_dict = {
"fraction-effect": fraction_effect_explained
# 'r-squared': effect_r_squared
}
return strength_dict
def _evaluate_effect_strength(self, estimate, method="fraction-effect"):
supported_methods = ["fraction-effect"]
if method not in supported_methods:
raise NotImplementedError("This method is not supported for evaluating effect strength")
if method == "fraction-effect":
naive_obs_estimate = self.estimate_effect_naive()
            self.logger.debug("%s %s", estimate.value, naive_obs_estimate.value)
fraction_effect_explained = estimate.value / naive_obs_estimate.value
return fraction_effect_explained
# elif method == "r-squared":
# outcome_mean = np.mean(self._outcome)
# total_variance = np.sum(np.square(self._outcome - outcome_mean))
# Assuming a linear model with one variable: the treatment
# Currently only works for continuous y
# causal_model = outcome_mean + estimate.value*self._treatment
# squared_residual = np.sum(np.square(self._outcome - causal_model))
# r_squared = 1 - (squared_residual/total_variance)
# return r_squared
else:
return None
def update_input(self, treatment_value, control_value, target_units):
self._control_value = control_value
self._treatment_value = treatment_value
self._target_units = target_units
@staticmethod
def is_bootstrap_parameter_changed(bootstrap_estimates_params, given_params):
"""Check whether parameters of the bootstrap have changed.
This is an efficiency method that checks if fresh resampling of the bootstrap samples is required.
Returns True if parameters have changed and resampling should be done again.
:param bootstrap_estimates_params: A dictionary of parameters for the current bootstrap samples
:param given_params: A dictionary of parameters passed by the user
:returns: A binary flag denoting whether the parameters are different.
"""
is_any_parameter_changed = False
for prm, val in bootstrap_estimates_params.items():
given_val = given_params.get(prm, None)
if given_val is not None and given_val != val:
is_any_parameter_changed = True
break
return is_any_parameter_changed
def target_units_tostr(self):
s = ""
if type(self._target_units) is str:
s += self._target_units
elif callable(self._target_units):
s += "Data subset defined by a function"
elif isinstance(self._target_units, pd.DataFrame):
s += "Data subset provided as a data frame"
return s
def signif_results_tostr(self, signif_results):
s = ""
pval = signif_results["p_value"]
if type(pval) is tuple:
s += "[{0}, {1}]".format(pval[0], pval[1])
else:
s += "{0}".format(pval)
return s
def estimate_effect(
data: pd.DataFrame,
treatment: Union[str, List[str]],
outcome: Union[str, List[str]],
identifier_name: str,
estimator: CausalEstimator,
control_value: int = 0,
treatment_value: int = 1,
target_units: str = "ate",
    effect_modifiers: Optional[List[str]] = None,
fit_estimator: bool = True,
method_params: Optional[Dict] = None,
):
"""Estimate the identified causal effect.
In addition, you can directly call any of the EconML estimation methods. The convention is "backdoor.econml.path-to-estimator-class". For example, for the double machine learning estimator ("DML" class) that is located inside "dml" module of EconML, you can use the method name, "backdoor.econml.dml.DML". CausalML estimators can also be called. See `this demo notebook <https://py-why.github.io/dowhy/example_notebooks/dowhy-conditional-treatment-effects.html>`_.
    :param data: data frame containing the data
    :param treatment: Name of the treatment
    :param outcome: Name of the outcome
    :param identifier_name: Name of the identification method (e.g., "backdoor") whose
        estimand, stored in the estimator's target estimand, is to be estimated
    :param estimator: Instance of a CausalEstimator to use
:param control_value: Value of the treatment in the control group, for effect estimation. If treatment is multi-variate, this can be a list.
:param treatment_value: Value of the treatment in the treated group, for effect estimation. If treatment is multi-variate, this can be a list.
:param target_units: (Experimental) The units for which the treatment effect should be estimated. This can be of three types. (1) a string for common specifications of target units (namely, "ate", "att" and "atc"), (2) a lambda function that can be used as an index for the data (pandas DataFrame), or (3) a new DataFrame that contains values of the effect_modifiers and effect will be estimated only for this new data.
:param effect_modifiers: Names of effect modifier variables can be (optionally) specified here too, since they do not affect identification. If None, the effect_modifiers from the CausalModel are used.
:param fit_estimator: Boolean flag on whether to fit the estimator.
        Setting it to False is useful to estimate the effect on new data using a previously fitted estimator.
    :param method_params: (optional) Dictionary of estimator-specific parameters; a "fit_params" entry,
        if present, is passed through to the estimator's fit() method.
:returns: An instance of the CausalEstimate class, containing the causal effect estimate
and other method-dependent information
"""
if effect_modifiers is None:
effect_modifiers = []
treatment = parse_state(treatment)
outcome = parse_state(outcome)
causal_estimator_class = estimator.__class__
identified_estimand = estimator._target_estimand
identified_estimand.set_identifier_method(identifier_name)
if identified_estimand.no_directed_path:
logger.warning("No directed path from {0} to {1}.".format(treatment, outcome))
return CausalEstimate(
0, identified_estimand, None, control_value=control_value, treatment_value=treatment_value
)
# Check if estimator's target estimand is identified
elif identified_estimand.estimands[identifier_name] is None:
logger.error("No valid identified estimand available.")
return CausalEstimate(None, None, None, control_value=control_value, treatment_value=treatment_value)
if fit_estimator:
estimator.fit(
data=data,
treatment_name=treatment,
outcome_name=outcome,
effect_modifier_names=effect_modifiers,
**method_params["fit_params"] if "fit_params" in method_params else {},
)
estimate = estimator.estimate_effect(
treatment_value=treatment_value,
control_value=control_value,
target_units=target_units,
confidence_intervals=estimator._confidence_intervals,
)
# Store parameters inside estimate object for refutation methods
# TODO: This add_params needs to move to the estimator class
# inside estimate_effect and estimate_conditional_effect
estimate.add_params(
estimand_type=identified_estimand.estimand_type,
estimator_class=causal_estimator_class,
test_significance=estimator._significance_test,
evaluate_effect_strength=estimator._effect_strength_eval,
confidence_intervals=estimator._confidence_intervals,
target_units=target_units,
effect_modifiers=effect_modifiers,
)
if estimator._significance_test:
estimator.test_significance(estimate.value, method=estimator._significance_test)
if estimator._confidence_intervals:
estimator.estimate_confidence_intervals(
estimate.value, confidence_level=estimator.confidence_level, method=estimator._confidence_intervals
)
if estimator._effect_strength_eval:
effect_strength_dict = estimator.evaluate_effect_strength(estimate)
estimate.add_effect_strength(effect_strength_dict)
return estimate
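# A minimal end-to-end sketch of this function. The estimator class, column names and
# data frame `df` are illustrative assumptions; it presumes an IdentifiedEstimand with
# a "backdoor" estimand is already stored as the estimator's target estimand:
#
#   from dowhy.causal_estimators.linear_regression_estimator import LinearRegressionEstimator
#
#   estimator = LinearRegressionEstimator(identified_estimand)
#   estimate = estimate_effect(
#       data=df, treatment="T", outcome="Y",
#       identifier_name="backdoor", estimator=estimator,
#       control_value=0, treatment_value=1, target_units="ate",
#   )
#   estimate.value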
class CausalEstimate:
"""Class for the estimate object that every causal estimator returns"""
def __init__(
self,
estimate,
target_estimand,
realized_estimand_expr,
control_value,
treatment_value,
conditional_estimates=None,
**kwargs,
):
self.value = estimate
self.target_estimand = target_estimand
self.realized_estimand_expr = realized_estimand_expr
self.control_value = control_value
self.treatment_value = treatment_value
self.conditional_estimates = conditional_estimates
self.params = kwargs
if self.params is not None:
for key, value in self.params.items():
setattr(self, key, value)
self.effect_strength = None
def add_estimator(self, estimator_instance):
self.estimator = estimator_instance
def add_effect_strength(self, strength_dict):
self.effect_strength = strength_dict
def add_params(self, **kwargs):
self.params.update(kwargs)
def get_confidence_intervals(self, confidence_level=None, method=None, **kwargs):
"""Get confidence intervals of the obtained estimate.
By default, this is done with the help of bootstrapped confidence intervals
but can be overridden if the specific estimator implements other methods of estimating confidence intervals.
If the method provided is not bootstrap, this function calls the implementation of the specific estimator.
:param method: Method for estimating confidence intervals.
:param confidence_level: The confidence level of the confidence intervals of the estimate.
:param kwargs: Other optional args to be passed to the CI method.
:returns: The obtained confidence interval.
"""
confidence_intervals = self.estimator.estimate_confidence_intervals(
estimate_value=self.value, confidence_level=confidence_level, method=method, **kwargs
)
return confidence_intervals
def get_standard_error(self, method=None, **kwargs):
"""Get standard error of the obtained estimate.
By default, this is done with the help of bootstrapped standard errors
but can be overridden if the specific estimator implements other methods of estimating standard error.
If the method provided is not bootstrap, this function calls the implementation of the specific estimator.
:param method: Method for computing the standard error.
:param kwargs: Other optional parameters to be passed to the estimating method.
:returns: Standard error of the causal estimate.
"""
std_error = self.estimator.estimate_std_error(method=method, **kwargs)
return std_error
def test_stat_significance(self, method=None, **kwargs):
"""Test statistical significance of the estimate obtained.
By default, uses resampling to create a non-parametric significance test.
Individual child estimators can implement different methods.
If the method name is different from "bootstrap", this function calls the
implementation of the child estimator.
:param method: Method for checking statistical significance
:param kwargs: Other optional parameters to be passed to the estimating method.
:returns: p-value from the significance test
"""
signif_results = self.estimator.test_significance(self.value, method=method, **kwargs)
return {"p_value": signif_results["p_value"]}
def estimate_conditional_effects(
self, effect_modifiers=None, num_quantiles=CausalEstimator.NUM_QUANTILES_TO_DISCRETIZE_CONT_COLS
):
"""Estimate treatment effect conditioned on given variables.
        If a numeric effect modifier is provided, it is discretized into quantile bins. If you would like a custom discretization, you can do so yourself: create a new column containing the discretized effect modifier and then include that column's name in the effect_modifiers argument.
:param effect_modifiers: Names of effect modifier variables over which the conditional effects will be estimated. If not provided, defaults to the effect modifiers specified during creation of the CausalEstimator object.
:param num_quantiles: The number of quantiles into which a numeric effect modifier variable is discretized. Does not affect any categorical effect modifiers.
:returns: A (multi-index) dataframe that provides separate effects for each value of the (discretized) effect modifiers.
"""
return self.estimator._estimate_conditional_effects(
self.estimator._estimate_effect_fn, effect_modifiers, num_quantiles
)
def interpret(self, method_name=None, **kwargs):
"""Interpret the causal estimate.
:param method_name: Method used (string) or a list of methods. If None, then the default for the specific estimator is used.
        :param kwargs: Optional parameters that are directly passed to the interpreter method.
:returns: None
"""
if method_name is None:
method_name = self.estimator.interpret_method
method_name_arr = parse_state(method_name)
for method in method_name_arr:
interpreter = interpreters.get_class_object(method)
interpreter(self, **kwargs).interpret()
def __str__(self):
s = "*** Causal Estimate ***\n"
# No estimand was identified (identification failed)
if self.target_estimand is None:
return "Estimation failed! No relevant identified estimand available for this estimation method."
s += "\n## Identified estimand\n{0}".format(self.target_estimand.__str__(only_target_estimand=True))
s += "\n## Realized estimand\n{0}".format(self.realized_estimand_expr)
if hasattr(self, "estimator"):
s += "\nTarget units: {0}\n".format(self.estimator.target_units_tostr())
s += "\n## Estimate\n"
s += "Mean value: {0}\n".format(self.value)
s += ""
if hasattr(self, "cate_estimates"):
s += "Effect estimates: {0}\n".format(self.cate_estimates)
if hasattr(self, "estimator"):
if self.estimator._significance_test:
s += "p-value: {0}\n".format(self.estimator.signif_results_tostr(self.test_stat_significance()))
if self.estimator._confidence_intervals:
s += "{0}% confidence interval: {1}\n".format(
100 * self.estimator.confidence_level, self.get_confidence_intervals()
)
if self.conditional_estimates is not None:
s += "### Conditional Estimates\n"
s += str(self.conditional_estimates)
if self.effect_strength is not None:
s += "\n## Effect Strength\n"
s += "Change in outcome attributable to treatment: {}\n".format(self.effect_strength["fraction-effect"])
# s += "Variance in outcome explained by treatment: {}\n".format(self.effect_strength["r-squared"])
return s
class RealizedEstimand(object):
def __init__(self, identified_estimand, estimator_name):
self.treatment_variable = identified_estimand.treatment_variable
self.outcome_variable = identified_estimand.outcome_variable
self.backdoor_variables = identified_estimand.get_backdoor_variables()
self.instrumental_variables = identified_estimand.instrumental_variables
self.estimand_type = identified_estimand.estimand_type
self.estimand_expression = None
self.assumptions = None
self.estimator_name = estimator_name
def update_assumptions(self, estimator_assumptions):
self.assumptions = estimator_assumptions
def update_estimand_expression(self, estimand_expression):
self.estimand_expression = estimand_expression
def __str__(self):
s = "Realized estimand: {0}\n".format(self.estimator_name)
s += "Realized estimand type: {0}\n".format(self.estimand_type)
s += "Estimand expression:\n{0}\n".format(sp.pretty(self.estimand_expression))
j = 1
for ass_name, ass_str in self.assumptions.items():
s += "Estimand assumption {0}, {1}: {2}\n".format(j, ass_name, ass_str)
j += 1
return s
| andresmor-ms | 11c4e0dafd6e824eb81ad14262457d954ae61468 | affe0952f4aba6845247355c171565510c2c1673 | In the old code, the `estimate` param was used to initialize the different variables. But now we are using the `self` object. The estimate param becomes redundant. We can remove it | amit-sharma | 188 |
py-why/dowhy | 746 | Functional api/causal estimators | * Introduce `fit()` method to estimators.
* Refactor constructors to avoid using `*args` and `**kwargs` and have more explicit parameters.
* Refactor refuters and other parts of the code to use `fit()` and modify arguments to `estimate_effect()` | null | 2022-11-04 16:15:39+00:00 | 2022-12-03 17:07:53+00:00 | dowhy/causal_estimator.py | import logging
from collections import namedtuple
from typing import Dict, List, Optional, Union
import numpy as np
import pandas as pd
import sympy as sp
from sklearn.utils import resample
import dowhy.interpreters as interpreters
from dowhy import causal_estimators
from dowhy.causal_graph import CausalGraph
from dowhy.causal_identifier.identified_estimand import IdentifiedEstimand
from dowhy.utils.api import parse_state
logger = logging.getLogger(__name__)
class CausalEstimator:
"""Base class for an estimator of causal effect.
Subclasses implement different estimation methods. All estimation methods are in the package "dowhy.causal_estimators"
"""
# The default number of simulations for statistical testing
DEFAULT_NUMBER_OF_SIMULATIONS_STAT_TEST = 1000
# The default number of simulations to obtain confidence intervals
DEFAULT_NUMBER_OF_SIMULATIONS_CI = 100
# The portion of the total size that should be taken each time to find the confidence intervals
# 1 is the recommended value
# https://ocw.mit.edu/courses/mathematics/18-05-introduction-to-probability-and-statistics-spring-2014/readings/MIT18_05S14_Reading24.pdf
# https://projecteuclid.org/download/pdf_1/euclid.ss/1032280214
DEFAULT_SAMPLE_SIZE_FRACTION = 1
# The default Confidence Level
DEFAULT_CONFIDENCE_LEVEL = 0.95
# Number of quantiles to discretize continuous columns, for applying groupby
NUM_QUANTILES_TO_DISCRETIZE_CONT_COLS = 5
# Prefix to add to temporary categorical variables created after discretization
TEMP_CAT_COLUMN_PREFIX = "__categorical__"
DEFAULT_NOTIMPLEMENTEDERROR_MSG = "not yet implemented for {0}. If you would this to be implemented in the next version, please raise an issue at https://github.com/microsoft/dowhy/issues"
BootstrapEstimates = namedtuple("BootstrapEstimates", ["estimates", "params"])
DEFAULT_INTERPRET_METHOD = ["textual_effect_interpreter"]
# std args to be removed from locals() before being passed to args_dict
_STD_INIT_ARGS = ("self", "__class__", "args", "kwargs")
def __init__(
self,
data,
identified_estimand,
treatment,
outcome,
control_value=0,
treatment_value=1,
test_significance=False,
evaluate_effect_strength=False,
confidence_intervals=False,
target_units=None,
effect_modifiers=None,
num_null_simulations=DEFAULT_NUMBER_OF_SIMULATIONS_STAT_TEST,
num_simulations=DEFAULT_NUMBER_OF_SIMULATIONS_CI,
sample_size_fraction=DEFAULT_SAMPLE_SIZE_FRACTION,
confidence_level=DEFAULT_CONFIDENCE_LEVEL,
need_conditional_estimates="auto",
num_quantiles_to_discretize_cont_cols=NUM_QUANTILES_TO_DISCRETIZE_CONT_COLS,
**kwargs,
):
"""Initializes an estimator with data and names of relevant variables.
This method is called from the constructors of its child classes.
:param data: data frame containing the data
:param identified_estimand: probability expression
representing the target identified estimand to estimate.
:param treatment: name of the treatment variable
:param outcome: name of the outcome variable
:param control_value: Value of the treatment in the control group, for effect estimation. If treatment is multi-variate, this can be a list.
:param treatment_value: Value of the treatment in the treated group, for effect estimation. If treatment is multi-variate, this can be a list.
:param test_significance: Binary flag or a string indicating whether to test significance and by which method. All estimators support test_significance="bootstrap" that estimates a p-value for the obtained estimate using the bootstrap method. Individual estimators can override this to support custom testing methods. The bootstrap method supports an optional parameter, num_null_simulations. If False, no testing is done. If True, significance of the estimate is tested using the custom method if available, otherwise by bootstrap.
:param evaluate_effect_strength: (Experimental) whether to evaluate the strength of effect
:param confidence_intervals: Binary flag or a string indicating whether the confidence intervals should be computed and which method should be used. All methods support estimation of confidence intervals using the bootstrap method by using the parameter confidence_intervals="bootstrap". The bootstrap method takes in two arguments (num_simulations and sample_size_fraction) that can be optionally specified in the params dictionary. Estimators may also override this to implement their own confidence interval method. If this parameter is False, no confidence intervals are computed. If True, confidence intervals are computed by the estimator's specific method if available, otherwise through bootstrap.
:param target_units: The units for which the treatment effect should be estimated. This can be a string for common specifications of target units (namely, "ate", "att" and "atc"). It can also be a lambda function that can be used as an index for the data (pandas DataFrame). Alternatively, it can be a new DataFrame that contains values of the effect_modifiers and effect will be estimated only for this new data.
:param effect_modifiers: Variables on which to compute separate
effects, or return a heterogeneous effect function. Not all
methods support this currently.
:param num_null_simulations: The number of simulations for testing the
statistical significance of the estimator
:param num_simulations: The number of simulations for finding the
            confidence interval (and/or standard error) for an estimate
:param sample_size_fraction: The size of the sample for the bootstrap
estimator
:param confidence_level: The confidence level of the confidence
interval estimate
:param need_conditional_estimates: Boolean flag indicating whether
conditional estimates should be computed. Defaults to True if
there are effect modifiers in the graph
:param num_quantiles_to_discretize_cont_cols: The number of quantiles
into which a numeric effect modifier is split, to enable
estimation of conditional treatment effect over it.
:param kwargs: (optional) Additional estimator-specific parameters
:returns: an instance of the estimator class.
"""
self._data = data
self._target_estimand = identified_estimand
# Currently estimation methods only support univariate treatment and outcome
self._treatment_name = treatment
self._outcome_name = outcome[0] # assuming one-dimensional outcome
self._control_value = control_value
self._treatment_value = treatment_value
self._significance_test = test_significance
self._effect_strength_eval = evaluate_effect_strength
self._target_units = target_units
self._effect_modifier_names = effect_modifiers
self._confidence_intervals = confidence_intervals
self._bootstrap_estimates = None # for confidence intervals and std error
self._bootstrap_null_estimates = None # for significance test
self._effect_modifiers = None
self.method_params = kwargs
# Setting the default interpret method
self.interpret_method = CausalEstimator.DEFAULT_INTERPRET_METHOD
self.logger = logging.getLogger(__name__)
# Setting treatment and outcome values
if self._data is not None:
self._treatment = self._data[self._treatment_name]
self._outcome = self._data[self._outcome_name]
if self._effect_modifier_names:
# only add the observed nodes
self._effect_modifier_names = [
cname for cname in self._effect_modifier_names if cname in self._data.columns
]
if len(self._effect_modifier_names) > 0:
self._effect_modifiers = self._data[self._effect_modifier_names]
self._effect_modifiers = pd.get_dummies(self._effect_modifiers, drop_first=True)
self.logger.debug("Effect modifiers: " + ",".join(self._effect_modifier_names))
else:
self._effect_modifier_names = None
# Check if some parameters were set, otherwise set to default values
self.num_null_simulations = num_null_simulations
self.num_simulations = num_simulations
self.sample_size_fraction = sample_size_fraction
self.confidence_level = confidence_level
self.num_quantiles_to_discretize_cont_cols = num_quantiles_to_discretize_cont_cols
# Estimate conditional estimates by default
self.need_conditional_estimates = (
need_conditional_estimates if need_conditional_estimates != "auto" else bool(self._effect_modifier_names)
)
@staticmethod
def get_estimator_object(new_data, identified_estimand, estimate):
"""Create a new estimator of the same type as the one passed in the estimate argument.
Creates a new object with new_data and the identified_estimand
:param new_data: np.ndarray, pd.Series, pd.DataFrame
The newly assigned data on which the estimator should run
:param identified_estimand: IdentifiedEstimand
An instance of the identified estimand class that provides the information with
            respect to which causal pathways are employed when the treatment affects the outcome
:param estimate: CausalEstimate
It is an already existing estimate whose properties we wish to replicate
:returns: An instance of the same estimator class that had generated the given estimate.
"""
estimator_class = estimate.params["estimator_class"]
new_estimator = estimator_class(
new_data,
identified_estimand,
identified_estimand.treatment_variable,
identified_estimand.outcome_variable,
# names of treatment and outcome
control_value=estimate.control_value,
treatment_value=estimate.treatment_value,
test_significance=False,
evaluate_effect_strength=False,
confidence_intervals=estimate.params["confidence_intervals"],
target_units=estimate.params["target_units"],
effect_modifiers=estimate.params["effect_modifiers"],
**estimate.params["method_params"] if estimate.params["method_params"] is not None else {},
)
return new_estimator
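    # Example (hypothetical usage of this older API; `estimate` is a previously
    # obtained CausalEstimate carrying its estimator class and parameters):
    #
    #   new_est = CausalEstimator.get_estimator_object(new_data, identified_estimand, estimate)
    #   new_effect = new_est.estimate_effect()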
def _estimate_effect(self):
"""This method is to be overriden by the child classes, so that they can run the estimation technique of their choice"""
raise NotImplementedError(
("Main estimation method is " + CausalEstimator.DEFAULT_NOTIMPLEMENTEDERROR_MSG).format(self.__class__)
)
def estimate_effect(self):
"""Base estimation method that calls the estimate_effect method of its calling subclass.
Can optionally also test significance and estimate effect strength for any returned estimate.
:param self: object instance of class Estimator
:returns: A CausalEstimate instance that contains point estimates of average and conditional effects. Based on the parameters provided, it optionally includes confidence intervals, standard errors,statistical significance and other statistical parameters.
"""
est = self._estimate_effect()
est.add_estimator(self)
if self._significance_test:
self.test_significance(est.value, method=self._significance_test)
if self._confidence_intervals:
self.estimate_confidence_intervals(
est.value, confidence_level=self.confidence_level, method=self._confidence_intervals
)
if self._effect_strength_eval:
effect_strength_dict = self.evaluate_effect_strength(est)
est.add_effect_strength(effect_strength_dict)
return est
def estimate_effect_naive(self):
# TODO Only works for binary treatment
df_withtreatment = self._data.loc[self._data[self._treatment_name] == 1]
df_notreatment = self._data.loc[self._data[self._treatment_name] == 0]
est = np.mean(df_withtreatment[self._outcome_name]) - np.mean(df_notreatment[self._outcome_name])
return CausalEstimate(est, None, None, control_value=0, treatment_value=1)
def _estimate_effect_fn(self, data_df):
"""Function used in conditional effect estimation. This function is to be overridden by each child estimator.
The overridden function should take in a dataframe as input and return the estimate for that data.
"""
raise NotImplementedError(
("Conditional treatment effects are " + CausalEstimator.DEFAULT_NOTIMPLEMENTEDERROR_MSG).format(
self.__class__
)
)
def _estimate_conditional_effects(self, estimate_effect_fn, effect_modifier_names=None, num_quantiles=None):
"""Estimate conditional treatment effects. Common method for all estimators that utilizes a specific estimate_effect_fn implemented by each child estimator.
If a numeric effect modifier is provided, it is discretized into quantile bins. If you would like a custom discretization, you can do so yourself: create a new column containing the discretized effect modifier and then include that column's name in the effect_modifier_names argument.
:param estimate_effect_fn: Function that has a single parameter (a data frame) and returns the treatment effect estimate on that data.
:param effect_modifier_names: Names of effect modifier variables over which the conditional effects will be estimated. If not provided, defaults to the effect modifiers specified during creation of the CausalEstimator object.
:param num_quantiles: The number of quantiles into which a numeric effect modifier variable is discretized. Does not affect any categorical effect modifiers.
:returns: A (multi-index) dataframe that provides separate effects for each value of the (discretized) effect modifiers.
"""
# Defaulting to class default values if parameters are not provided
if effect_modifier_names is None:
effect_modifier_names = self._effect_modifier_names
if num_quantiles is None:
num_quantiles = self.num_quantiles_to_discretize_cont_cols
# Checking that there is at least one effect modifier
if not effect_modifier_names:
raise ValueError("At least one effect modifier should be specified to compute conditional effects.")
# Making sure that effect_modifier_names is a list
effect_modifier_names = parse_state(effect_modifier_names)
if not all(em in self._effect_modifier_names for em in effect_modifier_names):
            self.logger.warning(
"At least one of the provided effect modifiers was not included while fitting the estimator. You may get incorrect results. To resolve, fit the estimator again by providing the updated effect modifiers in estimate_effect()."
)
# Making a copy since we are going to be changing effect modifier names
effect_modifier_names = effect_modifier_names.copy()
prefix = CausalEstimator.TEMP_CAT_COLUMN_PREFIX
# For every numeric effect modifier, adding a temp categorical column
for i in range(len(effect_modifier_names)):
em = effect_modifier_names[i]
if pd.api.types.is_numeric_dtype(self._data[em].dtypes):
self._data[prefix + str(em)] = pd.qcut(self._data[em], num_quantiles, duplicates="drop")
effect_modifier_names[i] = prefix + str(em)
# Grouping by effect modifiers and computing effect separately
by_effect_mods = self._data.groupby(effect_modifier_names)
cond_est_fn = lambda x: self._do(self._treatment_value, x) - self._do(self._control_value, x)
conditional_estimates = by_effect_mods.apply(estimate_effect_fn)
# Deleting the temporary categorical columns
for em in effect_modifier_names:
if em.startswith(prefix):
self._data.pop(em)
return conditional_estimates
def _do(self, x, data_df=None):
raise NotImplementedError(
("Do-operator is " + CausalEstimator.DEFAULT_NOTIMPLEMENTEDERROR_MSG).format(self.__class__)
)
def do(self, x, data_df=None):
"""Method that implements the do-operator.
Given a value x for the treatment, returns the expected value of the outcome when the treatment is intervened to a value x.
:param x: Value of the treatment
:param data_df: Data on which the do-operator is to be applied.
:returns: Value of the outcome when treatment is intervened/set to x.
"""
est = self._do(x, data_df)
return est
def construct_symbolic_estimator(self, estimand):
raise NotImplementedError(("Symbolic estimator string is ").format(self.__class__))
def _generate_bootstrap_estimates(self, num_bootstrap_simulations, sample_size_fraction):
"""Helper function to generate causal estimates over bootstrapped samples.
:param num_bootstrap_simulations: Number of simulations for the bootstrap method.
:param sample_size_fraction: Fraction of the dataset to be resampled.
:returns: A collections.namedtuple containing a list of bootstrapped estimates and a dictionary containing parameters used for the bootstrap.
"""
# The array that stores the results of all estimations
simulation_results = np.zeros(num_bootstrap_simulations)
        # Compute the sample size as a fraction of the population (full dataset) size
sample_size = int(sample_size_fraction * len(self._data))
if sample_size > len(self._data):
self.logger.warning("WARN: The sample size is greater than the data being sampled")
self.logger.info("INFO: The sample size: {}".format(sample_size))
self.logger.info("INFO: The number of simulations: {}".format(num_bootstrap_simulations))
# Perform the set number of simulations
for index in range(num_bootstrap_simulations):
new_data = resample(self._data, n_samples=sample_size)
new_estimator = type(self)(
new_data,
self._target_estimand,
self._target_estimand.treatment_variable,
self._target_estimand.outcome_variable,
# names of treatment and outcome
treatment_value=self._treatment_value,
control_value=self._control_value,
test_significance=False,
evaluate_effect_strength=False,
confidence_intervals=False,
target_units=self._target_units,
effect_modifiers=self._effect_modifier_names,
**self.method_params,
)
new_effect = new_estimator.estimate_effect()
simulation_results[index] = new_effect.value
estimates = CausalEstimator.BootstrapEstimates(
simulation_results,
{"num_simulations": num_bootstrap_simulations, "sample_size_fraction": sample_size_fraction},
)
return estimates
def _estimate_confidence_intervals_with_bootstrap(
self, estimate_value, confidence_level=None, num_simulations=None, sample_size_fraction=None
):
"""
Method to compute confidence interval using bootstrapped sampling.
:param estimate_value: obtained estimate's value
:param confidence_level: The level for which to compute CI (e.g., 95% confidence level translates to confidence_level=0.95)
:param num_simulations: The number of simulations to be performed to get the bootstrap confidence intervals.
:param sample_size_fraction: The fraction of the dataset to be resampled.
:returns: confidence interval at the specified level.
For more details on bootstrap or resampling statistics, refer to the following links:
https://ocw.mit.edu/courses/mathematics/18-05-introduction-to-probability-and-statistics-spring-2014/readings/MIT18_05S14_Reading24.pdf
https://projecteuclid.org/download/pdf_1/euclid.ss/1032280214
"""
# Using class default parameters if not specified
if num_simulations is None:
num_simulations = self.num_simulations
if sample_size_fraction is None:
sample_size_fraction = self.sample_size_fraction
# Checking if bootstrap_estimates are already computed
if self._bootstrap_estimates is None:
self._bootstrap_estimates = self._generate_bootstrap_estimates(num_simulations, sample_size_fraction)
elif CausalEstimator.is_bootstrap_parameter_changed(self._bootstrap_estimates.params, locals()):
# Checked if any parameter is changed from the previous std error estimate
self._bootstrap_estimates = self._generate_bootstrap_estimates(num_simulations, sample_size_fraction)
# Now use the data obtained from the simulations to get the value of the confidence estimates
bootstrap_estimates = self._bootstrap_estimates.estimates
# Get the variations of each bootstrap estimate and sort
bootstrap_variations = [bootstrap_estimate - estimate_value for bootstrap_estimate in bootstrap_estimates]
sorted_bootstrap_variations = np.sort(bootstrap_variations)
        # Now we take the (1 - p)th and the (p)th variations, where p is the chosen confidence level
upper_bound_index = int((1 - confidence_level) * len(sorted_bootstrap_variations))
lower_bound_index = int(confidence_level * len(sorted_bootstrap_variations))
# Get the lower and upper bounds by subtracting the variations from the estimate
lower_bound = estimate_value - sorted_bootstrap_variations[lower_bound_index]
upper_bound = estimate_value - sorted_bootstrap_variations[upper_bound_index]
return lower_bound, upper_bound
def _estimate_confidence_intervals(self, confidence_level=None, method=None, **kwargs):
"""
        This method is to be overridden by the child classes, so that they
can run a confidence interval estimation method suited to the specific
causal estimator.
"""
raise NotImplementedError(
(
"This method for estimating confidence intervals is "
+ CausalEstimator.DEFAULT_NOTIMPLEMENTEDERROR_MSG
+ " Meanwhile, you can try the bootstrap method (method='bootstrap') to estimate confidence intervals."
).format(self.__class__)
)
def estimate_confidence_intervals(self, estimate_value, confidence_level=None, method=None, **kwargs):
"""Find the confidence intervals corresponding to any estimator
By default, this is done with the help of bootstrapped confidence intervals
but can be overridden if the specific estimator implements other methods of estimating confidence intervals.
If the method provided is not bootstrap, this function calls the implementation of the specific estimator.
:param estimate_value: obtained estimate's value
:param method: Method for estimating confidence intervals.
:param confidence_level: The confidence level of the confidence intervals of the estimate.
:param kwargs: Other optional args to be passed to the CI method.
:returns: The obtained confidence interval.
"""
if method is None:
if self._confidence_intervals:
method = self._confidence_intervals # this is either True or methodname
else:
method = "default"
confidence_intervals = None
if confidence_level is None:
confidence_level = self.confidence_level
if method == "default" or method is True: # user has not provided any method
try:
confidence_intervals = self._estimate_confidence_intervals(confidence_level, method=method, **kwargs)
except NotImplementedError:
confidence_intervals = self._estimate_confidence_intervals_with_bootstrap(
estimate_value, confidence_level, **kwargs
)
else:
if method == "bootstrap":
confidence_intervals = self._estimate_confidence_intervals_with_bootstrap(
estimate_value, confidence_level, **kwargs
)
else:
confidence_intervals = self._estimate_confidence_intervals(confidence_level, method=method, **kwargs)
return confidence_intervals
def _estimate_std_error_with_bootstrap(self, num_simulations=None, sample_size_fraction=None):
"""Compute standard error using the bootstrap method. Standard error
and confidence intervals use the same parameter num_simulations for
the number of bootstrap simulations.
:param num_simulations: Number of bootstrapped samples.
:param sample_size_fraction: Fraction of data to be resampled.
:returns: Standard error of the obtained estimate.
"""
# Use existing params, if new user defined params are not present
if num_simulations is None:
num_simulations = self.num_simulations
if sample_size_fraction is None:
sample_size_fraction = self.sample_size_fraction
# Checking if bootstrap_estimates are already computed
if self._bootstrap_estimates is None:
self._bootstrap_estimates = self._generate_bootstrap_estimates(num_simulations, sample_size_fraction)
elif CausalEstimator.is_bootstrap_parameter_changed(self._bootstrap_estimates.params, locals()):
# Check whether any parameter has changed since the previous standard error estimate
self._bootstrap_estimates = self._generate_bootstrap_estimates(num_simulations, sample_size_fraction)
std_error = np.std(self._bootstrap_estimates.estimates)
return std_error
def _estimate_std_error(self, method=None, **kwargs):
"""
This method is to be overridden by the child classes, so that they
can run a standard error estimation method suited to the specific
causal estimator.
"""
raise NotImplementedError(
(
"This method for estimating standard errors is "
+ CausalEstimator.DEFAULT_NOTIMPLEMENTEDERROR_MSG
+ " Meanwhile, you can try the bootstrap method (method='bootstrap') to estimate standard errors."
).format(self.__class__)
)
def estimate_std_error(self, method=None, **kwargs):
"""Compute standard error of an obtained causal estimate.
:param method: Method for computing the standard error.
:param kwargs: Other optional parameters to be passed to the estimating method.
:returns: Standard error of the causal estimate.
"""
if method is None:
if self._confidence_intervals:
method = self._confidence_intervals
else:
method = "default"
std_error = None
if method == "default" or method is True: # user has not provided any method
try:
std_error = self._estimate_std_error(method, **kwargs)
except NotImplementedError:
std_error = self._estimate_std_error_with_bootstrap(**kwargs)
else:
if method == "bootstrap":
std_error = self._estimate_std_error_with_bootstrap(**kwargs)
else:
std_error = self._estimate_std_error(method, **kwargs)
return std_error
def _test_significance_with_bootstrap(self, estimate_value, num_null_simulations=None):
"""Test statistical significance of an estimate using the bootstrap method.
:param estimate_value: Obtained estimate's value
:param num_null_simulations: Number of simulations for the null hypothesis
:returns: p-value of the statistical significance test.
"""
# Use existing params, if new user defined params are not present
if num_null_simulations is None:
num_null_simulations = self.num_null_simulations
do_retest = self._bootstrap_null_estimates is None or CausalEstimator.is_bootstrap_parameter_changed(
self._bootstrap_null_estimates.params, locals()
)
if do_retest:
null_estimates = np.zeros(num_null_simulations)
for i in range(num_null_simulations):
new_outcome = np.random.permutation(self._outcome)
new_data = self._data.assign(dummy_outcome=new_outcome)
# self._outcome = self._data["dummy_outcome"]
new_estimator = type(self)(
new_data,
self._target_estimand,
self._target_estimand.treatment_variable,
("dummy_outcome",),
test_significance=False,
evaluate_effect_strength=False,
confidence_intervals=False,
target_units=self._target_units,
effect_modifiers=self._effect_modifier_names,
**self.method_params,
)
new_effect = new_estimator.estimate_effect()
null_estimates[i] = new_effect.value
self._bootstrap_null_estimates = CausalEstimator.BootstrapEstimates(
null_estimates, {"num_null_simulations": num_null_simulations, "sample_size_fraction": 1}
)
# Processing the null hypothesis estimates
sorted_null_estimates = np.sort(self._bootstrap_null_estimates.estimates)
self.logger.debug("Null estimates: {0}".format(sorted_null_estimates))
median_estimate = sorted_null_estimates[int(num_null_simulations / 2)]
# Doing a two-sided test
if estimate_value > median_estimate:
# Being conservative with the p-value reported
estimate_index = np.searchsorted(sorted_null_estimates, estimate_value, side="left")
p_value = 1 - (estimate_index / num_null_simulations)
if estimate_value <= median_estimate:
# Being conservative with the p-value reported
estimate_index = np.searchsorted(sorted_null_estimates, estimate_value, side="right")
p_value = estimate_index / num_null_simulations
# If the estimate_index is 0, it depends on the number of simulations
if p_value == 0:
p_value = (0, 1 / len(sorted_null_estimates)) # a tuple determining the range.
elif p_value == 1:
p_value = (1 - 1 / len(sorted_null_estimates), 1)
signif_dict = {"p_value": p_value}
return signif_dict
def _test_significance(self, estimate_value, method=None, **kwargs):
"""
This method is to be overridden by the child classes, so that they
can run a significance test suited to the specific
causal estimator.
"""
raise NotImplementedError(
(
"This method for testing statistical significance is "
+ CausalEstimator.DEFAULT_NOTIMPLEMENTEDERROR_MSG
+ " Meanwhile, you can try the bootstrap method (method='bootstrap') to test statistical significance."
).format(self.__class__)
)
def test_significance(self, estimate_value, method=None, **kwargs):
"""Test statistical significance of obtained estimate.
By default, uses resampling to create a non-parametric significance test.
This is a general procedure; individual child estimators can implement different methods.
If the method name is different from "bootstrap", this function calls the
implementation of the child estimator.
:param estimate_value: obtained estimate's value
:param method: Method for checking statistical significance
:returns: p-value from the significance test
"""
if method is None:
if self._significance_test:
method = self._significance_test # this is either True or methodname
else:
method = "default"
signif_dict = None
if method == "default" or method is True: # user has not provided any method
try:
signif_dict = self._test_significance(estimate_value, method, **kwargs)
except NotImplementedError:
signif_dict = self._test_significance_with_bootstrap(estimate_value, **kwargs)
else:
if method == "bootstrap":
signif_dict = self._test_significance_with_bootstrap(estimate_value, **kwargs)
else:
signif_dict = self._test_significance(estimate_value, method, **kwargs)
return signif_dict
def evaluate_effect_strength(self, estimate):
fraction_effect_explained = self._evaluate_effect_strength(estimate, method="fraction-effect")
# Need to test r-squared before supporting
# effect_r_squared = self._evaluate_effect_strength(estimate, method="r-squared")
strength_dict = {
"fraction-effect": fraction_effect_explained
# 'r-squared': effect_r_squared
}
return strength_dict
def _evaluate_effect_strength(self, estimate, method="fraction-effect"):
supported_methods = ["fraction-effect"]
if method not in supported_methods:
raise NotImplementedError("This method is not supported for evaluating effect strength")
if method == "fraction-effect":
naive_obs_estimate = self.estimate_effect_naive()
self.logger.debug("{0} {1}".format(estimate.value, naive_obs_estimate.value))
fraction_effect_explained = estimate.value / naive_obs_estimate.value
return fraction_effect_explained
# elif method == "r-squared":
# outcome_mean = np.mean(self._outcome)
# total_variance = np.sum(np.square(self._outcome - outcome_mean))
# Assuming a linear model with one variable: the treatment
# Currently only works for continuous y
# causal_model = outcome_mean + estimate.value*self._treatment
# squared_residual = np.sum(np.square(self._outcome - causal_model))
# r_squared = 1 - (squared_residual/total_variance)
# return r_squared
else:
return None
def update_input(self, treatment_value, control_value, target_units):
self._control_value = control_value
self._treatment_value = treatment_value
self._target_units = target_units
@staticmethod
def is_bootstrap_parameter_changed(bootstrap_estimates_params, given_params):
"""Check whether parameters of the bootstrap have changed.
This is an efficiency method that checks if fresh resampling of the bootstrap samples is required.
Returns True if parameters have changed and resampling should be done again.
:param bootstrap_estimates_params: A dictionary of parameters for the current bootstrap samples
:param given_params: A dictionary of parameters passed by the user
:returns: A binary flag denoting whether the parameters are different.
"""
is_any_parameter_changed = False
for prm, val in bootstrap_estimates_params.items():
given_val = given_params.get(prm, None)
if given_val is not None and given_val != val:
is_any_parameter_changed = True
break
return is_any_parameter_changed
def target_units_tostr(self):
s = ""
if type(self._target_units) is str:
s += self._target_units
elif callable(self._target_units):
s += "Data subset defined by a function"
elif isinstance(self._target_units, pd.DataFrame):
s += "Data subset provided as a data frame"
return s
def signif_results_tostr(self, signif_results):
s = ""
pval = signif_results["p_value"]
if type(pval) is tuple:
s += "[{0}, {1}]".format(pval[0], pval[1])
else:
s += "{0}".format(pval)
return s
def estimate_effect(
treatment: Union[str, List[str]],
outcome: Union[str, List[str]],
identified_estimand: IdentifiedEstimand,
identifier_name: str,
method: CausalEstimator,
control_value: int = 0,
treatment_value: int = 1,
test_significance: Optional[bool] = None,
evaluate_effect_strength: bool = False,
confidence_intervals: bool = False,
target_units: str = "ate",
effect_modifiers: List[str] = [],
fit_estimator: bool = True,
method_params: Optional[Dict] = None,
):
"""Estimate the identified causal effect.
Currently requires an explicit method name to be specified. Method names follow the convention of identification method followed by the specific estimation method: "[backdoor/iv].estimation_method_name". The following methods are supported.
* Propensity Score Matching: "backdoor.propensity_score_matching"
* Propensity Score Stratification: "backdoor.propensity_score_stratification"
* Propensity Score-based Inverse Weighting: "backdoor.propensity_score_weighting"
* Linear Regression: "backdoor.linear_regression"
* Generalized Linear Models (e.g., logistic regression): "backdoor.generalized_linear_model"
* Instrumental Variables: "iv.instrumental_variable"
* Regression Discontinuity: "iv.regression_discontinuity"
In addition, you can directly call any of the EconML estimation methods. The convention is "backdoor.econml.path-to-estimator-class". For example, for the double machine learning estimator ("DML" class) that is located inside "dml" module of EconML, you can use the method name, "backdoor.econml.dml.DML". CausalML estimators can also be called. See `this demo notebook <https://py-why.github.io/dowhy/example_notebooks/dowhy-conditional-treatment-effects.html>`_.
:param treatment: Name of the treatment
:param outcome: Name of the outcome
:param identified_estimand: a probability expression
that represents the effect to be estimated. Output of
CausalModel.identify_effect method
:param identifier_name: Name of the identification method (e.g., "backdoor" or "iv") to use from the identified estimand
:param method: Instance of the causal estimator class to be used
:param control_value: Value of the treatment in the control group, for effect estimation. If treatment is multi-variate, this can be a list.
:param treatment_value: Value of the treatment in the treated group, for effect estimation. If treatment is multi-variate, this can be a list.
:param test_significance: Binary flag on whether to additionally do a statistical significance test for the estimate.
:param evaluate_effect_strength: (Experimental) Binary flag on whether to estimate the relative strength of the treatment's effect. This measure can be used to compare different treatments for the same outcome (by running this method with different treatments sequentially).
:param confidence_intervals: (Experimental) Binary flag indicating whether confidence intervals should be computed.
:param target_units: (Experimental) The units for which the treatment effect should be estimated. This can be of three types. (1) a string for common specifications of target units (namely, "ate", "att" and "atc"), (2) a lambda function that can be used as an index for the data (pandas DataFrame), or (3) a new DataFrame that contains values of the effect_modifiers and effect will be estimated only for this new data.
:param effect_modifiers: Names of effect modifier variables can be (optionally) specified here too, since they do not affect identification. If None, the effect_modifiers from the CausalModel are used.
:param fit_estimator: Boolean flag on whether to fit the estimator.
Setting it to False is useful to estimate the effect on new data using a previously fitted estimator.
:param method_params: Dictionary containing any method-specific parameters. These are passed directly to the estimating method. See the docs for each estimation method for allowed method-specific params.
:returns: An instance of the CausalEstimate class, containing the causal effect estimate
and other method-dependent information
"""
treatment = parse_state(treatment)
outcome = parse_state(outcome)
causal_estimator_class = method.__class__
identified_estimand.set_identifier_method(identifier_name)
if identified_estimand.no_directed_path:
logger.warning("No directed path from {0} to {1}.".format(treatment, outcome))
return CausalEstimate(
0, identified_estimand, None, control_value=control_value, treatment_value=treatment_value
)
# Check if estimator's target estimand is identified
elif identified_estimand.estimands[identifier_name] is None:
logger.error("No valid identified estimand available.")
return CausalEstimate(None, None, None, control_value=control_value, treatment_value=treatment_value)
method.update_input(treatment_value, control_value, target_units)
estimate = method.estimate_effect()
# Store parameters inside estimate object for refutation methods
# TODO: This add_params needs to move to the estimator class
# inside estimate_effect and estimate_conditional_effect
estimate.add_params(
estimand_type=identified_estimand.estimand_type,
estimator_class=causal_estimator_class,
test_significance=test_significance,
evaluate_effect_strength=evaluate_effect_strength,
confidence_intervals=confidence_intervals,
target_units=target_units,
effect_modifiers=effect_modifiers,
method_params=method_params,
)
return estimate
class CausalEstimate:
"""Class for the estimate object that every causal estimator returns"""
def __init__(
self,
estimate,
target_estimand,
realized_estimand_expr,
control_value,
treatment_value,
conditional_estimates=None,
**kwargs,
):
self.value = estimate
self.target_estimand = target_estimand
self.realized_estimand_expr = realized_estimand_expr
self.control_value = control_value
self.treatment_value = treatment_value
self.conditional_estimates = conditional_estimates
self.params = kwargs
if self.params is not None:
for key, value in self.params.items():
setattr(self, key, value)
self.effect_strength = None
def add_estimator(self, estimator_instance):
self.estimator = estimator_instance
def add_effect_strength(self, strength_dict):
self.effect_strength = strength_dict
def add_params(self, **kwargs):
self.params.update(kwargs)
def get_confidence_intervals(self, confidence_level=None, method=None, **kwargs):
"""Get confidence intervals of the obtained estimate.
By default, this is done with the help of bootstrapped confidence intervals
but can be overridden if the specific estimator implements other methods of estimating confidence intervals.
If the method provided is not bootstrap, this function calls the implementation of the specific estimator.
:param method: Method for estimating confidence intervals.
:param confidence_level: The confidence level of the confidence intervals of the estimate.
:param kwargs: Other optional args to be passed to the CI method.
:returns: The obtained confidence interval.
"""
confidence_intervals = self.estimator.estimate_confidence_intervals(
estimate_value=self.value, confidence_level=confidence_level, method=method, **kwargs
)
return confidence_intervals
def get_standard_error(self, method=None, **kwargs):
"""Get standard error of the obtained estimate.
By default, this is done with the help of bootstrapped standard errors
but can be overridden if the specific estimator implements other methods of estimating standard error.
If the method provided is not bootstrap, this function calls the implementation of the specific estimator.
:param method: Method for computing the standard error.
:param kwargs: Other optional parameters to be passed to the estimating method.
:returns: Standard error of the causal estimate.
"""
std_error = self.estimator.estimate_std_error(method=method, **kwargs)
return std_error
def test_stat_significance(self, method=None, **kwargs):
"""Test statistical significance of the estimate obtained.
By default, uses resampling to create a non-parametric significance test.
Individual child estimators can implement different methods.
If the method name is different from "bootstrap", this function calls the
implementation of the child estimator.
:param method: Method for checking statistical significance
:param kwargs: Other optional parameters to be passed to the estimating method.
:returns: p-value from the significance test
"""
signif_results = self.estimator.test_significance(self.value, method=method, **kwargs)
return {"p_value": signif_results["p_value"]}
def estimate_conditional_effects(
self, effect_modifiers=None, num_quantiles=CausalEstimator.NUM_QUANTILES_TO_DISCRETIZE_CONT_COLS
):
"""Estimate treatment effect conditioned on given variables.
If a numeric effect modifier is provided, it is discretized into quantile bins. If you would like a custom discretization, you can do so yourself: create a new column containing the discretized effect modifier and then include that column's name in the effect_modifier_names argument.
:param effect_modifiers: Names of effect modifier variables over which the conditional effects will be estimated. If not provided, defaults to the effect modifiers specified during creation of the CausalEstimator object.
:param num_quantiles: The number of quantiles into which a numeric effect modifier variable is discretized. Does not affect any categorical effect modifiers.
:returns: A (multi-index) dataframe that provides separate effects for each value of the (discretized) effect modifiers.
"""
return self.estimator._estimate_conditional_effects(
self.estimator._estimate_effect_fn, effect_modifiers, num_quantiles
)
def interpret(self, method_name=None, **kwargs):
"""Interpret the causal estimate.
:param method_name: Method used (string) or a list of methods. If None, then the default for the specific estimator is used.
:param kwargs: Optional parameters that are directly passed to the interpreter method.
:returns: None
"""
if method_name is None:
method_name = self.estimator.interpret_method
method_name_arr = parse_state(method_name)
for method in method_name_arr:
interpreter = interpreters.get_class_object(method)
interpreter(self, **kwargs).interpret()
def __str__(self):
s = "*** Causal Estimate ***\n"
# No estimand was identified (identification failed)
if self.target_estimand is None:
return "Estimation failed! No relevant identified estimand available for this estimation method."
s += "\n## Identified estimand\n{0}".format(self.target_estimand.__str__(only_target_estimand=True))
s += "\n## Realized estimand\n{0}".format(self.realized_estimand_expr)
if hasattr(self, "estimator"):
s += "\nTarget units: {0}\n".format(self.estimator.target_units_tostr())
s += "\n## Estimate\n"
s += "Mean value: {0}\n".format(self.value)
s += ""
if hasattr(self, "cate_estimates"):
s += "Effect estimates: {0}\n".format(self.cate_estimates)
if hasattr(self, "estimator"):
if self.estimator._significance_test:
s += "p-value: {0}\n".format(self.estimator.signif_results_tostr(self.test_stat_significance()))
if self.estimator._confidence_intervals:
s += "{0}% confidence interval: {1}\n".format(
100 * self.estimator.confidence_level, self.get_confidence_intervals()
)
if self.conditional_estimates is not None:
s += "### Conditional Estimates\n"
s += str(self.conditional_estimates)
if self.effect_strength is not None:
s += "\n## Effect Strength\n"
s += "Change in outcome attributable to treatment: {}\n".format(self.effect_strength["fraction-effect"])
# s += "Variance in outcome explained by treatment: {}\n".format(self.effect_strength["r-squared"])
return s
class RealizedEstimand(object):
def __init__(self, identified_estimand, estimator_name):
self.treatment_variable = identified_estimand.treatment_variable
self.outcome_variable = identified_estimand.outcome_variable
self.backdoor_variables = identified_estimand.get_backdoor_variables()
self.instrumental_variables = identified_estimand.instrumental_variables
self.estimand_type = identified_estimand.estimand_type
self.estimand_expression = None
self.assumptions = None
self.estimator_name = estimator_name
def update_assumptions(self, estimator_assumptions):
self.assumptions = estimator_assumptions
def update_estimand_expression(self, estimand_expression):
self.estimand_expression = estimand_expression
def __str__(self):
s = "Realized estimand: {0}\n".format(self.estimator_name)
s += "Realized estimand type: {0}\n".format(self.estimand_type)
s += "Estimand expression:\n{0}\n".format(sp.pretty(self.estimand_expression))
j = 1
for ass_name, ass_str in self.assumptions.items():
s += "Estimand assumption {0}, {1}: {2}\n".format(j, ass_name, ass_str)
j += 1
return s
import copy
import logging
from collections import namedtuple
from typing import Dict, List, Optional, Union
import numpy as np
import pandas as pd
import sympy as sp
from sklearn.utils import resample
import dowhy.interpreters as interpreters
from dowhy.causal_identifier.identified_estimand import IdentifiedEstimand
from dowhy.utils.api import parse_state
logger = logging.getLogger(__name__)
class CausalEstimator:
"""Base class for an estimator of causal effect.
Subclasses implement different estimation methods. All estimation methods are in the package "dowhy.causal_estimators"
"""
# The default number of simulations for statistical testing
DEFAULT_NUMBER_OF_SIMULATIONS_STAT_TEST = 1000
# The default number of simulations to obtain confidence intervals
DEFAULT_NUMBER_OF_SIMULATIONS_CI = 100
# The portion of the total size that should be taken each time to find the confidence intervals
# 1 is the recommended value
# https://ocw.mit.edu/courses/mathematics/18-05-introduction-to-probability-and-statistics-spring-2014/readings/MIT18_05S14_Reading24.pdf
# https://projecteuclid.org/download/pdf_1/euclid.ss/1032280214
DEFAULT_SAMPLE_SIZE_FRACTION = 1
# The default Confidence Level
DEFAULT_CONFIDENCE_LEVEL = 0.95
# Number of quantiles to discretize continuous columns, for applying groupby
NUM_QUANTILES_TO_DISCRETIZE_CONT_COLS = 5
# Prefix to add to temporary categorical variables created after discretization
TEMP_CAT_COLUMN_PREFIX = "__categorical__"
DEFAULT_NOTIMPLEMENTEDERROR_MSG = "not yet implemented for {0}. If you would like this to be implemented in the next version, please raise an issue at https://github.com/microsoft/dowhy/issues"
BootstrapEstimates = namedtuple("BootstrapEstimates", ["estimates", "params"])
DEFAULT_INTERPRET_METHOD = ["textual_effect_interpreter"]
# std args to be removed from locals() before being passed to args_dict
_STD_INIT_ARGS = ("self", "__class__", "args", "kwargs")
def __init__(
self,
identified_estimand: IdentifiedEstimand,
test_significance: bool = False,
evaluate_effect_strength: bool = False,
confidence_intervals: bool = False,
num_null_simulations: int = DEFAULT_NUMBER_OF_SIMULATIONS_STAT_TEST,
num_simulations: int = DEFAULT_NUMBER_OF_SIMULATIONS_CI,
sample_size_fraction: int = DEFAULT_SAMPLE_SIZE_FRACTION,
confidence_level: float = DEFAULT_CONFIDENCE_LEVEL,
need_conditional_estimates: Union[bool, str] = "auto",
num_quantiles_to_discretize_cont_cols: int = NUM_QUANTILES_TO_DISCRETIZE_CONT_COLS,
**_,
):
"""Initializes an estimator with data and names of relevant variables.
This method is called from the constructors of its child classes.
:param identified_estimand: probability expression
representing the target identified estimand to estimate.
:param test_significance: Binary flag or a string indicating whether to test significance and by which method. All estimators support test_significance="bootstrap" that estimates a p-value for the obtained estimate using the bootstrap method. Individual estimators can override this to support custom testing methods. The bootstrap method supports an optional parameter, num_null_simulations. If False, no testing is done. If True, significance of the estimate is tested using the custom method if available, otherwise by bootstrap.
:param evaluate_effect_strength: (Experimental) whether to evaluate the strength of effect
:param confidence_intervals: Binary flag or a string indicating whether the confidence intervals should be computed and which method should be used. All methods support estimation of confidence intervals using the bootstrap method by using the parameter confidence_intervals="bootstrap". The bootstrap method takes in two arguments (num_simulations and sample_size_fraction) that can be optionally specified in the params dictionary. Estimators may also override this to implement their own confidence interval method. If this parameter is False, no confidence intervals are computed. If True, confidence intervals are computed by the estimator's specific method if available, otherwise through bootstrap
:param num_null_simulations: The number of simulations for testing the
statistical significance of the estimator
:param num_simulations: The number of simulations for finding the
confidence interval (and/or standard error) for a estimate
:param sample_size_fraction: The size of the sample for the bootstrap
estimator
:param confidence_level: The confidence level of the confidence
interval estimate
:param need_conditional_estimates: Boolean flag indicating whether
conditional estimates should be computed. Defaults to True if
there are effect modifiers in the graph
:param num_quantiles_to_discretize_cont_cols: The number of quantiles
into which a numeric effect modifier is split, to enable
estimation of conditional treatment effect over it.
:param kwargs: (optional) Additional estimator-specific parameters; ignored by the base class
:returns: an instance of the estimator class.
"""
self._target_estimand = identified_estimand
self._significance_test = test_significance
self._effect_strength_eval = evaluate_effect_strength
self._confidence_intervals = confidence_intervals
# Setting the default interpret method
self.interpret_method = CausalEstimator.DEFAULT_INTERPRET_METHOD
self.logger = logging.getLogger(__name__)
# Check if some parameters were set, otherwise set to default values
self.num_null_simulations = num_null_simulations
self.num_simulations = num_simulations
self.sample_size_fraction = sample_size_fraction
self.confidence_level = confidence_level
self.num_quantiles_to_discretize_cont_cols = num_quantiles_to_discretize_cont_cols
# Estimate conditional estimates by default
self.need_conditional_estimates = need_conditional_estimates
self._bootstrap_estimates = None
self._bootstrap_null_estimates = None
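    # A minimal sketch (illustrative only, not part of the codebase) of how a child
    # estimator is expected to chain into this constructor after the refactoring away
    # from *args/**kwargs; `MyEstimator` and `my_param` are hypothetical names:
    #
    #   class MyEstimator(CausalEstimator):
    #       def __init__(self, identified_estimand, my_param=1.0, **kwargs):
    #           super().__init__(identified_estimand=identified_estimand, **kwargs)
    #           self.my_param = my_param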
def _set_data(self, data: pd.DataFrame, treatment_name: List[str], outcome_name: List[str]):
"""Sets the data for the estimator
:param data: data frame containing the data
:param treatment_name: name of the treatment variable
:param outcome_name: name of the outcome variable
"""
self._data = data
self._treatment_name = treatment_name
self._outcome_name = outcome_name[0]  # only a single outcome variable is currently supported
self._treatment = self._data[self._treatment_name]
self._outcome = self._data[self._outcome_name]
def _set_effect_modifiers(self, effect_modifier_names: Optional[List[str]] = None):
"""Sets the effect modifiers for the estimator
Modifies need_conditional_estimates accordingly to effect modifiers value
:param effect_modifiers: Variables on which to compute separate
effects, or return a heterogeneous effect function. Not all
methods support this currently.
"""
self._effect_modifiers = effect_modifier_names
if effect_modifier_names is not None:
self._effect_modifier_names = [cname for cname in effect_modifier_names if cname in self._data.columns]
if len(self._effect_modifier_names) > 0:
self._effect_modifiers = self._data[self._effect_modifier_names]
self._effect_modifiers = pd.get_dummies(self._effect_modifiers, drop_first=True)
self.logger.debug("Effect modifiers: " + ",".join(self._effect_modifier_names))
else:
self._effect_modifier_names = []
else:
self._effect_modifier_names = []
self.need_conditional_estimates = (
self.need_conditional_estimates
if self.need_conditional_estimates != "auto"
else (self._effect_modifier_names and len(self._effect_modifier_names) > 0)
)
def _set_identified_estimand(self, new_identified_estimand):
"""Method used internally to change the target estimand (required by some refuters)
:param new_identified_estimand: The new target_estimand to use
"""
self._target_estimand = new_identified_estimand
def get_new_estimator_object(
self,
identified_estimand,
test_significance=False,
evaluate_effect_strength=False,
confidence_intervals=None,
):
"""Create a new estimator of the same type as the one passed in the estimate argument.
Creates a new object with the identified_estimand
:param identified_estimand: IdentifiedEstimand
An instance of the identified estimand class that provides the information with
respect to which causal pathways are employed when the treatment effects the outcome
:returns: A new instance of the same estimator class that had generated the given estimate.
"""
new_estimator = copy.deepcopy(self)
new_estimator._target_estimand = identified_estimand
new_estimator._significance_test = test_significance
new_estimator._effect_strength_eval = evaluate_effect_strength
new_estimator._confidence_intervals = (
self._confidence_intervals if confidence_intervals is None else confidence_intervals
)
return new_estimator
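    # Sketch of the intended call pattern (e.g., from a refuter that re-estimates the
    # effect under a modified estimand); variable names here are illustrative:
    #
    #   refutation_estimator = estimator.get_new_estimator_object(new_estimand)
    #   refutation_estimator.fit(
    #       new_data,
    #       new_estimand.treatment_variable,
    #       new_estimand.outcome_variable,
    #       effect_modifier_names=estimator._effect_modifier_names,
    #   )
    #   new_effect = refutation_estimator.estimate_effect(target_units="ate")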
def estimate_effect_naive(self):
# TODO Only works for binary treatment
df_withtreatment = self._data.loc[self._data[self._treatment_name] == 1]
df_notreatment = self._data.loc[self._data[self._treatment_name] == 0]
est = np.mean(df_withtreatment[self._outcome_name]) - np.mean(df_notreatment[self._outcome_name])
return CausalEstimate(est, None, None, control_value=0, treatment_value=1)
def _estimate_effect_fn(self, data_df):
"""Function used in conditional effect estimation. This function is to be overridden by each child estimator.
The overridden function should take in a dataframe as input and return the estimate for that data.
"""
raise NotImplementedError(
("Conditional treatment effects are " + CausalEstimator.DEFAULT_NOTIMPLEMENTEDERROR_MSG).format(
self.__class__
)
)
def _estimate_conditional_effects(self, estimate_effect_fn, effect_modifier_names=None, num_quantiles=None):
"""Estimate conditional treatment effects. Common method for all estimators that utilizes a specific estimate_effect_fn implemented by each child estimator.
If a numeric effect modifier is provided, it is discretized into quantile bins. If you would like a custom discretization, you can do so yourself: create a new column containing the discretized effect modifier and then include that column's name in the effect_modifier_names argument.
:param estimate_effect_fn: Function that has a single parameter (a data frame) and returns the treatment effect estimate on that data.
:param effect_modifier_names: Names of effect modifier variables over which the conditional effects will be estimated. If not provided, defaults to the effect modifiers specified during creation of the CausalEstimator object.
:param num_quantiles: The number of quantiles into which a numeric effect modifier variable is discretized. Does not affect any categorical effect modifiers.
:returns: A (multi-index) dataframe that provides separate effects for each value of the (discretized) effect modifiers.
"""
# Defaulting to class default values if parameters are not provided
if effect_modifier_names is None:
effect_modifier_names = self._effect_modifier_names
if num_quantiles is None:
num_quantiles = self.num_quantiles_to_discretize_cont_cols
# Checking that there is at least one effect modifier
if not effect_modifier_names:
raise ValueError("At least one effect modifier should be specified to compute conditional effects.")
# Making sure that effect_modifier_names is a list
effect_modifier_names = parse_state(effect_modifier_names)
if not all(em in self._effect_modifier_names for em in effect_modifier_names):
self.logger.warning(
"At least one of the provided effect modifiers was not included while fitting the estimator. You may get incorrect results. To resolve, fit the estimator again by providing the updated effect modifiers in estimate_effect()."
)
# Making a copy since we are going to be changing effect modifier names
effect_modifier_names = effect_modifier_names.copy()
prefix = CausalEstimator.TEMP_CAT_COLUMN_PREFIX
# For every numeric effect modifier, adding a temp categorical column
for i in range(len(effect_modifier_names)):
em = effect_modifier_names[i]
if pd.api.types.is_numeric_dtype(self._data[em].dtypes):
self._data[prefix + str(em)] = pd.qcut(self._data[em], num_quantiles, duplicates="drop")
effect_modifier_names[i] = prefix + str(em)
# Grouping by effect modifiers and computing effect separately
by_effect_mods = self._data.groupby(effect_modifier_names)
# NOTE: cond_est_fn is currently unused; the groupby below applies estimate_effect_fn
cond_est_fn = lambda x: self._do(self._treatment_value, x) - self._do(self._control_value, x)
conditional_estimates = by_effect_mods.apply(estimate_effect_fn)
# Deleting the temporary categorical columns
for em in effect_modifier_names:
if em.startswith(prefix):
self._data.pop(em)
return conditional_estimates
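    # For example, with effect_modifier_names=["X0", "X1"] where X0 is numeric, X0 is
    # first binned into `num_quantiles` quantile intervals via pd.qcut (under a
    # temporary "__categorical__X0" column), and the returned object is a pandas
    # Series indexed by (X0 bin, X1 value) pairs, holding one effect estimate per
    # group (or a DataFrame, depending on what estimate_effect_fn returns).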
def _do(self, x, data_df=None):
raise NotImplementedError(
("Do-operator is " + CausalEstimator.DEFAULT_NOTIMPLEMENTEDERROR_MSG).format(self.__class__)
)
def do(self, x, data_df=None):
"""Method that implements the do-operator.
Given a value x for the treatment, returns the expected value of the outcome when the treatment is intervened to a value x.
:param x: Value of the treatment
:param data_df: Data on which the do-operator is to be applied.
:returns: Value of the outcome when treatment is intervened/set to x.
"""
est = self._do(x, data_df)
return est
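    # Sketch of usage for an estimator that implements _do(); the fit step is assumed
    # to have happened already:
    #
    #   y_at_treated = estimator.do(1)   # estimated E[Y | do(T=1)]
    #   y_at_control = estimator.do(0)   # estimated E[Y | do(T=0)]
    #   effect = y_at_treated - y_at_control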
def construct_symbolic_estimator(self, estimand):
raise NotImplementedError(("Symbolic estimator string is ").format(self.__class__))
def _generate_bootstrap_estimates(self, num_bootstrap_simulations, sample_size_fraction):
"""Helper function to generate causal estimates over bootstrapped samples.
:param num_bootstrap_simulations: Number of simulations for the bootstrap method.
:param sample_size_fraction: Fraction of the dataset to be resampled.
:returns: A collections.namedtuple containing a list of bootstrapped estimates and a dictionary containing parameters used for the bootstrap.
"""
# The array that stores the results of all estimations
simulation_results = np.zeros(num_bootstrap_simulations)
# Compute the sample size as the given fraction of the population (data) size
sample_size = int(sample_size_fraction * len(self._data))
if sample_size > len(self._data):
self.logger.warning("WARN: The sample size is greater than the data being sampled")
self.logger.info("INFO: The sample size: {}".format(sample_size))
self.logger.info("INFO: The number of simulations: {}".format(num_bootstrap_simulations))
# Perform the set number of simulations
for index in range(num_bootstrap_simulations):
new_data = resample(self._data, n_samples=sample_size)
new_estimator = self.get_new_estimator_object(
self._target_estimand,
# names of treatment and outcome
test_significance=False,
evaluate_effect_strength=False,
confidence_intervals=False,
)
new_estimator.fit(
new_data,
self._target_estimand.treatment_variable,
self._target_estimand.outcome_variable,
effect_modifier_names=self._effect_modifier_names,
)
new_effect = new_estimator.estimate_effect(
treatment_value=self._treatment_value,
control_value=self._control_value,
target_units=self._target_units,
)
simulation_results[index] = new_effect.value
estimates = CausalEstimator.BootstrapEstimates(
simulation_results,
{"num_simulations": num_bootstrap_simulations, "sample_size_fraction": sample_size_fraction},
)
return estimates
def _estimate_confidence_intervals_with_bootstrap(
self, estimate_value, confidence_level=None, num_simulations=None, sample_size_fraction=None
):
"""
Method to compute confidence interval using bootstrapped sampling.
:param estimate_value: obtained estimate's value
:param confidence_level: The level for which to compute CI (e.g., 95% confidence level translates to confidence_level=0.95)
:param num_simulations: The number of simulations to be performed to get the bootstrap confidence intervals.
:param sample_size_fraction: The fraction of the dataset to be resampled.
:returns: confidence interval at the specified level.
For more details on bootstrap or resampling statistics, refer to the following links:
https://ocw.mit.edu/courses/mathematics/18-05-introduction-to-probability-and-statistics-spring-2014/readings/MIT18_05S14_Reading24.pdf
https://projecteuclid.org/download/pdf_1/euclid.ss/1032280214
"""
# Using class default parameters if not specified
if num_simulations is None:
num_simulations = self.num_simulations
if sample_size_fraction is None:
sample_size_fraction = self.sample_size_fraction
# Checking if bootstrap_estimates are already computed
if self._bootstrap_estimates is None:
self._bootstrap_estimates = self._generate_bootstrap_estimates(num_simulations, sample_size_fraction)
elif CausalEstimator.is_bootstrap_parameter_changed(self._bootstrap_estimates.params, locals()):
# Check whether any parameter has changed since the previous bootstrap run
self._bootstrap_estimates = self._generate_bootstrap_estimates(num_simulations, sample_size_fraction)
# Now use the data obtained from the simulations to get the value of the confidence estimates
bootstrap_estimates = self._bootstrap_estimates.estimates
# Get the variations of each bootstrap estimate and sort
bootstrap_variations = [bootstrap_estimate - estimate_value for bootstrap_estimate in bootstrap_estimates]
sorted_bootstrap_variations = np.sort(bootstrap_variations)
# Now we take the (1 - p)th and the (p)th quantiles of the variations, where p is the chosen confidence level
upper_bound_index = int((1 - confidence_level) * len(sorted_bootstrap_variations))
lower_bound_index = int(confidence_level * len(sorted_bootstrap_variations))
# Get the lower and upper bounds by subtracting the variations from the estimate
lower_bound = estimate_value - sorted_bootstrap_variations[lower_bound_index]
upper_bound = estimate_value - sorted_bootstrap_variations[upper_bound_index]
return lower_bound, upper_bound
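    # Worked example of the index arithmetic above: with 100 sorted bootstrap
    # variations and confidence_level=0.95, upper_bound_index = int(0.05 * 100) = 5
    # and lower_bound_index = int(0.95 * 100) = 95, so the returned interval is
    # (estimate - variations[95], estimate - variations[5]), i.e. a
    # reverse-percentile (basic) bootstrap interval.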
def _estimate_confidence_intervals(self, confidence_level=None, method=None, **kwargs):
"""
This method is to be overridden by the child classes, so that they
can run a confidence interval estimation method suited to the specific
causal estimator.
"""
raise NotImplementedError(
(
"This method for estimating confidence intervals is "
+ CausalEstimator.DEFAULT_NOTIMPLEMENTEDERROR_MSG
+ " Meanwhile, you can try the bootstrap method (method='bootstrap') to estimate confidence intervals."
).format(self.__class__)
)
def estimate_confidence_intervals(self, estimate_value, confidence_level=None, method=None, **kwargs):
"""Find the confidence intervals corresponding to any estimator
By default, this is done with the help of bootstrapped confidence intervals
but can be overridden if the specific estimator implements other methods of estimating confidence intervals.
If the method provided is not bootstrap, this function calls the implementation of the specific estimator.
:param estimate_value: obtained estimate's value
:param method: Method for estimating confidence intervals.
:param confidence_level: The confidence level of the confidence intervals of the estimate.
:param kwargs: Other optional args to be passed to the CI method.
:returns: The obtained confidence interval.
"""
if method is None:
if self._confidence_intervals:
method = self._confidence_intervals # this is either True or methodname
else:
method = "default"
confidence_intervals = None
if confidence_level is None:
confidence_level = self.confidence_level
if method == "default" or method is True: # user has not provided any method
try:
confidence_intervals = self._estimate_confidence_intervals(confidence_level, method=method, **kwargs)
except NotImplementedError:
confidence_intervals = self._estimate_confidence_intervals_with_bootstrap(
estimate_value, confidence_level, **kwargs
)
else:
if method == "bootstrap":
confidence_intervals = self._estimate_confidence_intervals_with_bootstrap(
estimate_value, confidence_level, **kwargs
)
else:
confidence_intervals = self._estimate_confidence_intervals(confidence_level, method=method, **kwargs)
return confidence_intervals
def _estimate_std_error_with_bootstrap(self, num_simulations=None, sample_size_fraction=None):
"""Compute standard error using the bootstrap method. Standard error
and confidence intervals use the same parameter num_simulations for
the number of bootstrap simulations.
:param num_simulations: Number of bootstrapped samples.
:param sample_size_fraction: Fraction of data to be resampled.
:returns: Standard error of the obtained estimate.
"""
# Use existing params, if new user defined params are not present
if num_simulations is None:
num_simulations = self.num_simulations
if sample_size_fraction is None:
sample_size_fraction = self.sample_size_fraction
# Checking if bootstrap_estimates are already computed
if self._bootstrap_estimates is None:
self._bootstrap_estimates = self._generate_bootstrap_estimates(num_simulations, sample_size_fraction)
elif CausalEstimator.is_bootstrap_parameter_changed(self._bootstrap_estimates.params, locals()):
# Check whether any parameter has changed since the previous standard error estimate
self._bootstrap_estimates = self._generate_bootstrap_estimates(num_simulations, sample_size_fraction)
std_error = np.std(self._bootstrap_estimates.estimates)
return std_error
def _estimate_std_error(self, method=None, **kwargs):
"""
This method is to be overridden by the child classes, so that they
can run a standard error estimation method suited to the specific
causal estimator.
"""
raise NotImplementedError(
(
"This method for estimating standard errors is "
+ CausalEstimator.DEFAULT_NOTIMPLEMENTEDERROR_MSG
+ " Meanwhile, you can try the bootstrap method (method='bootstrap') to estimate standard errors."
).format(self.__class__)
)
def estimate_std_error(self, method=None, **kwargs):
"""Compute standard error of an obtained causal estimate.
:param method: Method for computing the standard error.
:param kwargs: Other optional parameters to be passed to the estimating method.
:returns: Standard error of the causal estimate.
"""
if method is None:
if self._confidence_intervals:
method = self._confidence_intervals
else:
method = "default"
std_error = None
if method == "default" or method is True: # user has not provided any method
try:
std_error = self._estimate_std_error(method, **kwargs)
except NotImplementedError:
std_error = self._estimate_std_error_with_bootstrap(**kwargs)
else:
if method == "bootstrap":
std_error = self._estimate_std_error_with_bootstrap(**kwargs)
else:
std_error = self._estimate_std_error(method, **kwargs)
return std_error
def _test_significance_with_bootstrap(self, estimate_value, num_null_simulations=None):
"""Test statistical significance of an estimate using the bootstrap method.
:param estimate_value: Obtained estimate's value
:param num_null_simulations: Number of simulations for the null hypothesis
:returns: p-value of the statistical significance test.
"""
# Use existing params, if new user defined params are not present
if num_null_simulations is None:
num_null_simulations = self.num_null_simulations
do_retest = self._bootstrap_null_estimates is None or CausalEstimator.is_bootstrap_parameter_changed(
self._bootstrap_null_estimates.params, locals()
)
if do_retest:
null_estimates = np.zeros(num_null_simulations)
for i in range(num_null_simulations):
new_outcome = np.random.permutation(self._outcome)
new_data = self._data.assign(dummy_outcome=new_outcome)
# self._outcome = self._data["dummy_outcome"]
new_estimator = self.get_new_estimator_object(
self._target_estimand,
test_significance=False,
evaluate_effect_strength=False,
confidence_intervals=False,
)
new_estimator.fit(
data=new_data,
treatment_name=self._target_estimand.treatment_variable,
outcome_name=("dummy_outcome",),
effect_modifier_names=self._effect_modifier_names,
)
new_effect = new_estimator.estimate_effect(
target_units=self._target_units,
)
null_estimates[i] = new_effect.value
self._bootstrap_null_estimates = CausalEstimator.BootstrapEstimates(
null_estimates, {"num_null_simulations": num_null_simulations, "sample_size_fraction": 1}
)
# Processing the null hypothesis estimates
sorted_null_estimates = np.sort(self._bootstrap_null_estimates.estimates)
self.logger.debug("Null estimates: {0}".format(sorted_null_estimates))
median_estimate = sorted_null_estimates[int(num_null_simulations / 2)]
# Doing a two-sided test
if estimate_value > median_estimate:
# Being conservative with the p-value reported
estimate_index = np.searchsorted(sorted_null_estimates, estimate_value, side="left")
p_value = 1 - (estimate_index / num_null_simulations)
if estimate_value <= median_estimate:
# Being conservative with the p-value reported
estimate_index = np.searchsorted(sorted_null_estimates, estimate_value, side="right")
p_value = estimate_index / num_null_simulations
# If the estimate_index is 0, it depends on the number of simulations
if p_value == 0:
p_value = (0, 1 / len(sorted_null_estimates)) # a tuple determining the range.
elif p_value == 1:
p_value = (1 - 1 / len(sorted_null_estimates), 1)
signif_dict = {"p_value": p_value}
return signif_dict
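    # Worked example of the p-value computation above: with num_null_simulations=1000,
    # if the observed estimate exceeds the null median and 990 of the sorted null
    # estimates (estimate_index = 990), the reported p_value is 1 - 990/1000 = 0.01.
    # An estimate lying beyond every null estimate is reported as the range
    # (0, 1/1000) instead of exactly 0.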
def _test_significance(self, estimate_value, method=None, **kwargs):
"""
This method is to be overridden by the child classes, so that they
can run a significance test suited to the specific
causal estimator.
"""
raise NotImplementedError(
(
"This method for testing statistical significance is "
+ CausalEstimator.DEFAULT_NOTIMPLEMENTEDERROR_MSG
+ " Meanwhile, you can try the bootstrap method (method='bootstrap') to test statistical significance."
).format(self.__class__)
)
def test_significance(self, estimate_value, method=None, **kwargs):
"""Test statistical significance of obtained estimate.
By default, uses resampling to create a non-parametric significance test.
This is a general procedure; individual child estimators can implement different methods.
If the method name is different from "bootstrap", this function calls the
implementation of the child estimator.
:param estimate_value: obtained estimate's value
:param method: Method for checking statistical significance
:returns: p-value from the significance test
"""
if method is None:
if self._significance_test:
method = self._significance_test # this is either True or methodname
else:
method = "default"
signif_dict = None
if method == "default" or method is True: # user has not provided any method
try:
signif_dict = self._test_significance(estimate_value, method, **kwargs)
except NotImplementedError:
signif_dict = self._test_significance_with_bootstrap(estimate_value, **kwargs)
else:
if method == "bootstrap":
signif_dict = self._test_significance_with_bootstrap(estimate_value, **kwargs)
else:
signif_dict = self._test_significance(estimate_value, method, **kwargs)
return signif_dict
def evaluate_effect_strength(self, estimate):
fraction_effect_explained = self._evaluate_effect_strength(estimate, method="fraction-effect")
# Need to test r-squared before supporting
# effect_r_squared = self._evaluate_effect_strength(estimate, method="r-squared")
strength_dict = {
"fraction-effect": fraction_effect_explained
# 'r-squared': effect_r_squared
}
return strength_dict
def _evaluate_effect_strength(self, estimate, method="fraction-effect"):
supported_methods = ["fraction-effect"]
if method not in supported_methods:
raise NotImplementedError("This method is not supported for evaluating effect strength")
if method == "fraction-effect":
naive_obs_estimate = self.estimate_effect_naive()
self.logger.debug("{0} {1}".format(estimate.value, naive_obs_estimate.value))
fraction_effect_explained = estimate.value / naive_obs_estimate.value
return fraction_effect_explained
# elif method == "r-squared":
# outcome_mean = np.mean(self._outcome)
# total_variance = np.sum(np.square(self._outcome - outcome_mean))
# Assuming a linear model with one variable: the treatment
# Currently only works for continuous y
# causal_model = outcome_mean + estimate.value*self._treatment
# squared_residual = np.sum(np.square(self._outcome - causal_model))
# r_squared = 1 - (squared_residual/total_variance)
# return r_squared
else:
return None
def update_input(self, treatment_value, control_value, target_units):
self._control_value = control_value
self._treatment_value = treatment_value
self._target_units = target_units
@staticmethod
def is_bootstrap_parameter_changed(bootstrap_estimates_params, given_params):
"""Check whether parameters of the bootstrap have changed.
This is an efficiency method that checks if fresh resampling of the bootstrap samples is required.
Returns True if parameters have changed and resampling should be done again.
:param bootstrap_estimates_params: A dictionary of parameters for the current bootstrap samples
:param given_params: A dictionary of parameters passed by the user
:returns: A binary flag denoting whether the parameters are different.
"""
is_any_parameter_changed = False
for prm, val in bootstrap_estimates_params.items():
given_val = given_params.get(prm, None)
if given_val is not None and given_val != val:
is_any_parameter_changed = True
break
return is_any_parameter_changed
def target_units_tostr(self):
s = ""
if type(self._target_units) is str:
s += self._target_units
elif callable(self._target_units):
s += "Data subset defined by a function"
elif isinstance(self._target_units, pd.DataFrame):
s += "Data subset provided as a data frame"
return s
def signif_results_tostr(self, signif_results):
s = ""
pval = signif_results["p_value"]
if type(pval) is tuple:
s += "[{0}, {1}]".format(pval[0], pval[1])
else:
s += "{0}".format(pval)
return s
def estimate_effect(
data: pd.DataFrame,
treatment: Union[str, List[str]],
outcome: Union[str, List[str]],
identifier_name: str,
estimator: CausalEstimator,
control_value: int = 0,
treatment_value: int = 1,
target_units: str = "ate",
effect_modifiers: Optional[List[str]] = None,
fit_estimator: bool = True,
method_params: Optional[Dict] = None,
):
"""Estimate the identified causal effect.
You can directly call any of the EconML estimation methods. The convention is "backdoor.econml.path-to-estimator-class". For example, for the double machine learning estimator ("DML" class) that is located inside the "dml" module of EconML, you can use the method name, "backdoor.econml.dml.DML". CausalML estimators can also be called. See `this demo notebook <https://py-why.github.io/dowhy/example_notebooks/dowhy-conditional-treatment-effects.html>`_.
:param data: data frame containing the data
:param treatment: Name of the treatment
:param outcome: Name of the outcome
:param identifier_name: Name of the identification method (e.g., "backdoor" or "iv") to use from the estimator's target estimand
:param estimator: Instance of a CausalEstimator to use
:param control_value: Value of the treatment in the control group, for effect estimation. If treatment is multi-variate, this can be a list.
:param treatment_value: Value of the treatment in the treated group, for effect estimation. If treatment is multi-variate, this can be a list.
:param target_units: (Experimental) The units for which the treatment effect should be estimated. This can be of three types. (1) a string for common specifications of target units (namely, "ate", "att" and "atc"), (2) a lambda function that can be used as an index for the data (pandas DataFrame), or (3) a new DataFrame that contains values of the effect_modifiers and effect will be estimated only for this new data.
:param effect_modifiers: Names of effect modifier variables can be (optionally) specified here, since they do not affect identification. If None, an empty list is used.
:param fit_estimator: Boolean flag on whether to fit the estimator.
Setting it to False is useful to estimate the effect on new data using a previously fitted estimator.
:param method_params: Dictionary of method-specific parameters; a "fit_params" entry, if present, is passed on to the estimator's fit method.
:returns: An instance of the CausalEstimate class, containing the causal effect estimate
and other method-dependent information
"""
if effect_modifiers is None:
effect_modifiers = []
treatment = parse_state(treatment)
outcome = parse_state(outcome)
causal_estimator_class = estimator.__class__
identified_estimand = estimator._target_estimand
identified_estimand.set_identifier_method(identifier_name)
if identified_estimand.no_directed_path:
logger.warning("No directed path from {0} to {1}.".format(treatment, outcome))
return CausalEstimate(
0, identified_estimand, None, control_value=control_value, treatment_value=treatment_value
)
# Check if estimator's target estimand is identified
elif identified_estimand.estimands[identifier_name] is None:
logger.error("No valid identified estimand available.")
return CausalEstimate(None, None, None, control_value=control_value, treatment_value=treatment_value)
if fit_estimator:
estimator.fit(
data=data,
treatment_name=treatment,
outcome_name=outcome,
effect_modifier_names=effect_modifiers,
**method_params["fit_params"] if "fit_params" in method_params else {},
)
estimate = estimator.estimate_effect(
treatment_value=treatment_value,
control_value=control_value,
target_units=target_units,
confidence_intervals=estimator._confidence_intervals,
)
# Store parameters inside estimate object for refutation methods
# TODO: This add_params needs to move to the estimator class
# inside estimate_effect and estimate_conditional_effect
estimate.add_params(
estimand_type=identified_estimand.estimand_type,
estimator_class=causal_estimator_class,
test_significance=estimator._significance_test,
evaluate_effect_strength=estimator._effect_strength_eval,
confidence_intervals=estimator._confidence_intervals,
target_units=target_units,
effect_modifiers=effect_modifiers,
)
if estimator._significance_test:
estimator.test_significance(estimate.value, method=estimator._significance_test)
if estimator._confidence_intervals:
estimator.estimate_confidence_intervals(
estimate.value, confidence_level=estimator.confidence_level, method=estimator._confidence_intervals
)
if estimator._effect_strength_eval:
effect_strength_dict = estimator.evaluate_effect_strength(estimate)
estimate.add_effect_strength(effect_strength_dict)
return estimate
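# Example usage of the functional API (a minimal sketch, not executed here). The
# identified_estimand object is assumed to come from a prior identification step,
# and the column names "v0"/"y" are placeholders for the user's data:
#
#   from dowhy.causal_estimators.linear_regression_estimator import LinearRegressionEstimator
#
#   estimator = LinearRegressionEstimator(identified_estimand, confidence_intervals=True)
#   estimate = estimate_effect(
#       data=df,
#       treatment="v0",
#       outcome="y",
#       identifier_name="backdoor",
#       estimator=estimator,
#       control_value=0,
#       treatment_value=1,
#       target_units="ate",
#   )
#   print(estimate.value)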
class CausalEstimate:
"""Class for the estimate object that every causal estimator returns"""
def __init__(
self,
estimate,
target_estimand,
realized_estimand_expr,
control_value,
treatment_value,
conditional_estimates=None,
**kwargs,
):
self.value = estimate
self.target_estimand = target_estimand
self.realized_estimand_expr = realized_estimand_expr
self.control_value = control_value
self.treatment_value = treatment_value
self.conditional_estimates = conditional_estimates
self.params = kwargs
if self.params is not None:
for key, value in self.params.items():
setattr(self, key, value)
self.effect_strength = None
def add_estimator(self, estimator_instance):
self.estimator = estimator_instance
def add_effect_strength(self, strength_dict):
self.effect_strength = strength_dict
def add_params(self, **kwargs):
self.params.update(kwargs)
def get_confidence_intervals(self, confidence_level=None, method=None, **kwargs):
"""Get confidence intervals of the obtained estimate.
By default, this is done with the help of bootstrapped confidence intervals
but can be overridden if the specific estimator implements other methods of estimating confidence intervals.
If the method provided is not bootstrap, this function calls the implementation of the specific estimator.
:param method: Method for estimating confidence intervals.
:param confidence_level: The confidence level of the confidence intervals of the estimate.
:param kwargs: Other optional args to be passed to the CI method.
:returns: The obtained confidence interval.
"""
confidence_intervals = self.estimator.estimate_confidence_intervals(
estimate_value=self.value, confidence_level=confidence_level, method=method, **kwargs
)
return confidence_intervals
def get_standard_error(self, method=None, **kwargs):
"""Get standard error of the obtained estimate.
By default, this is done with the help of bootstrapped standard errors
but can be overridden if the specific estimator implements other methods of estimating standard error.
If the method provided is not bootstrap, this function calls the implementation of the specific estimator.
:param method: Method for computing the standard error.
:param kwargs: Other optional parameters to be passed to the estimating method.
:returns: Standard error of the causal estimate.
"""
std_error = self.estimator.estimate_std_error(method=method, **kwargs)
return std_error
def test_stat_significance(self, method=None, **kwargs):
"""Test statistical significance of the estimate obtained.
By default, uses resampling to create a non-parametric significance test.
Individual child estimators can implement different methods.
If the method name is different from "bootstrap", this function calls the
implementation of the child estimator.
:param method: Method for checking statistical significance
:param kwargs: Other optional parameters to be passed to the estimating method.
:returns: p-value from the significance test
"""
signif_results = self.estimator.test_significance(self.value, method=method, **kwargs)
return {"p_value": signif_results["p_value"]}
def estimate_conditional_effects(
self, effect_modifiers=None, num_quantiles=CausalEstimator.NUM_QUANTILES_TO_DISCRETIZE_CONT_COLS
):
"""Estimate treatment effect conditioned on given variables.
If a numeric effect modifier is provided, it is discretized into quantile bins. If you would like a custom discretization, you can do so yourself: create a new column containing the discretized effect modifier and then include that column's name in the effect_modifier_names argument.
:param effect_modifiers: Names of effect modifier variables over which the conditional effects will be estimated. If not provided, defaults to the effect modifiers specified during creation of the CausalEstimator object.
:param num_quantiles: The number of quantiles into which a numeric effect modifier variable is discretized. Does not affect any categorical effect modifiers.
:returns: A (multi-index) dataframe that provides separate effects for each value of the (discretized) effect modifiers.
"""
return self.estimator._estimate_conditional_effects(
self.estimator._estimate_effect_fn, effect_modifiers, num_quantiles
)
def interpret(self, method_name=None, **kwargs):
"""Interpret the causal estimate.
:param method_name: Method used (string) or a list of methods. If None, then the default for the specific estimator is used.
        :param kwargs: Optional parameters that are directly passed to the interpreter method.
:returns: None
"""
if method_name is None:
method_name = self.estimator.interpret_method
method_name_arr = parse_state(method_name)
for method in method_name_arr:
interpreter = interpreters.get_class_object(method)
interpreter(self, **kwargs).interpret()
def __str__(self):
s = "*** Causal Estimate ***\n"
# No estimand was identified (identification failed)
if self.target_estimand is None:
return "Estimation failed! No relevant identified estimand available for this estimation method."
s += "\n## Identified estimand\n{0}".format(self.target_estimand.__str__(only_target_estimand=True))
s += "\n## Realized estimand\n{0}".format(self.realized_estimand_expr)
if hasattr(self, "estimator"):
s += "\nTarget units: {0}\n".format(self.estimator.target_units_tostr())
s += "\n## Estimate\n"
s += "Mean value: {0}\n".format(self.value)
s += ""
if hasattr(self, "cate_estimates"):
s += "Effect estimates: {0}\n".format(self.cate_estimates)
if hasattr(self, "estimator"):
if self.estimator._significance_test:
s += "p-value: {0}\n".format(self.estimator.signif_results_tostr(self.test_stat_significance()))
if self.estimator._confidence_intervals:
s += "{0}% confidence interval: {1}\n".format(
100 * self.estimator.confidence_level, self.get_confidence_intervals()
)
if self.conditional_estimates is not None:
s += "### Conditional Estimates\n"
s += str(self.conditional_estimates)
if self.effect_strength is not None:
s += "\n## Effect Strength\n"
s += "Change in outcome attributable to treatment: {}\n".format(self.effect_strength["fraction-effect"])
# s += "Variance in outcome explained by treatment: {}\n".format(self.effect_strength["r-squared"])
return s
class RealizedEstimand(object):
def __init__(self, identified_estimand, estimator_name):
self.treatment_variable = identified_estimand.treatment_variable
self.outcome_variable = identified_estimand.outcome_variable
self.backdoor_variables = identified_estimand.get_backdoor_variables()
self.instrumental_variables = identified_estimand.instrumental_variables
self.estimand_type = identified_estimand.estimand_type
self.estimand_expression = None
self.assumptions = None
self.estimator_name = estimator_name
def update_assumptions(self, estimator_assumptions):
self.assumptions = estimator_assumptions
def update_estimand_expression(self, estimand_expression):
self.estimand_expression = estimand_expression
def __str__(self):
s = "Realized estimand: {0}\n".format(self.estimator_name)
s += "Realized estimand type: {0}\n".format(self.estimand_type)
s += "Estimand expression:\n{0}\n".format(sp.pretty(self.estimand_expression))
j = 1
for ass_name, ass_str in self.assumptions.items():
s += "Estimand assumption {0}, {1}: {2}\n".format(j, ass_name, ass_str)
j += 1
return s
| andresmor-ms | 11c4e0dafd6e824eb81ad14262457d954ae61468 | affe0952f4aba6845247355c171565510c2c1673 | why is the typehint in quotes? | amit-sharma | 189 |
repo_name: py-why/dowhy
pr_number: 746
pr_title: Functional api/causal estimators
pr_description:
* Introduce `fit()` method to estimators.
* Refactor constructors to avoid using `*args` and `**kwargs` and have more explicit parameters.
* Refactor refuters and other parts of the code to use `fit()` and modify arguments to `estimate_effect()`.
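
A minimal sketch of the intended flow (illustrative only; `SomeEstimator` is a
hypothetical subclass of `CausalEstimator`, and exact signatures may differ in the
merged code):

```python
estimator = SomeEstimator(identified_estimand)  # constructor no longer takes the data
estimator.fit(data=df, treatment_name=["t"], outcome_name=["y"], effect_modifier_names=[])
estimate = estimator.estimate_effect(control_value=0, treatment_value=1, target_units="ate")
```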
author: null
date_created: 2022-11-04 16:15:39+00:00
date_merged: 2022-12-03 17:07:53+00:00
filepath: dowhy/causal_estimator.py

before_content:

import logging
from collections import namedtuple
from typing import Dict, List, Optional, Union
import numpy as np
import pandas as pd
import sympy as sp
from sklearn.utils import resample
import dowhy.interpreters as interpreters
from dowhy import causal_estimators
from dowhy.causal_graph import CausalGraph
from dowhy.causal_identifier.identified_estimand import IdentifiedEstimand
from dowhy.utils.api import parse_state
logger = logging.getLogger(__name__)
class CausalEstimator:
"""Base class for an estimator of causal effect.
Subclasses implement different estimation methods. All estimation methods are in the package "dowhy.causal_estimators"
"""
# The default number of simulations for statistical testing
DEFAULT_NUMBER_OF_SIMULATIONS_STAT_TEST = 1000
# The default number of simulations to obtain confidence intervals
DEFAULT_NUMBER_OF_SIMULATIONS_CI = 100
# The portion of the total size that should be taken each time to find the confidence intervals
# 1 is the recommended value
# https://ocw.mit.edu/courses/mathematics/18-05-introduction-to-probability-and-statistics-spring-2014/readings/MIT18_05S14_Reading24.pdf
# https://projecteuclid.org/download/pdf_1/euclid.ss/1032280214
DEFAULT_SAMPLE_SIZE_FRACTION = 1
# The default Confidence Level
DEFAULT_CONFIDENCE_LEVEL = 0.95
# Number of quantiles to discretize continuous columns, for applying groupby
NUM_QUANTILES_TO_DISCRETIZE_CONT_COLS = 5
# Prefix to add to temporary categorical variables created after discretization
TEMP_CAT_COLUMN_PREFIX = "__categorical__"
    DEFAULT_NOTIMPLEMENTEDERROR_MSG = "not yet implemented for {0}. If you would like this to be implemented in the next version, please raise an issue at https://github.com/microsoft/dowhy/issues"
BootstrapEstimates = namedtuple("BootstrapEstimates", ["estimates", "params"])
DEFAULT_INTERPRET_METHOD = ["textual_effect_interpreter"]
# std args to be removed from locals() before being passed to args_dict
_STD_INIT_ARGS = ("self", "__class__", "args", "kwargs")
def __init__(
self,
data,
identified_estimand,
treatment,
outcome,
control_value=0,
treatment_value=1,
test_significance=False,
evaluate_effect_strength=False,
confidence_intervals=False,
target_units=None,
effect_modifiers=None,
num_null_simulations=DEFAULT_NUMBER_OF_SIMULATIONS_STAT_TEST,
num_simulations=DEFAULT_NUMBER_OF_SIMULATIONS_CI,
sample_size_fraction=DEFAULT_SAMPLE_SIZE_FRACTION,
confidence_level=DEFAULT_CONFIDENCE_LEVEL,
need_conditional_estimates="auto",
num_quantiles_to_discretize_cont_cols=NUM_QUANTILES_TO_DISCRETIZE_CONT_COLS,
**kwargs,
):
"""Initializes an estimator with data and names of relevant variables.
This method is called from the constructors of its child classes.
:param data: data frame containing the data
:param identified_estimand: probability expression
representing the target identified estimand to estimate.
:param treatment: name of the treatment variable
:param outcome: name of the outcome variable
:param control_value: Value of the treatment in the control group, for effect estimation. If treatment is multi-variate, this can be a list.
:param treatment_value: Value of the treatment in the treated group, for effect estimation. If treatment is multi-variate, this can be a list.
:param test_significance: Binary flag or a string indicating whether to test significance and by which method. All estimators support test_significance="bootstrap" that estimates a p-value for the obtained estimate using the bootstrap method. Individual estimators can override this to support custom testing methods. The bootstrap method supports an optional parameter, num_null_simulations. If False, no testing is done. If True, significance of the estimate is tested using the custom method if available, otherwise by bootstrap.
:param evaluate_effect_strength: (Experimental) whether to evaluate the strength of effect
:param confidence_intervals: Binary flag or a string indicating whether the confidence intervals should be computed and which method should be used. All methods support estimation of confidence intervals using the bootstrap method by using the parameter confidence_intervals="bootstrap". The bootstrap method takes in two arguments (num_simulations and sample_size_fraction) that can be optionally specified in the params dictionary. Estimators may also override this to implement their own confidence interval method. If this parameter is False, no confidence intervals are computed. If True, confidence intervals are computed by the estimator's specific method if available, otherwise through bootstrap.
:param target_units: The units for which the treatment effect should be estimated. This can be a string for common specifications of target units (namely, "ate", "att" and "atc"). It can also be a lambda function that can be used as an index for the data (pandas DataFrame). Alternatively, it can be a new DataFrame that contains values of the effect_modifiers and effect will be estimated only for this new data.
:param effect_modifiers: Variables on which to compute separate
effects, or return a heterogeneous effect function. Not all
methods support this currently.
:param num_null_simulations: The number of simulations for testing the
statistical significance of the estimator
:param num_simulations: The number of simulations for finding the
        confidence interval (and/or standard error) for an estimate
:param sample_size_fraction: The size of the sample for the bootstrap
estimator
:param confidence_level: The confidence level of the confidence
interval estimate
:param need_conditional_estimates: Boolean flag indicating whether
conditional estimates should be computed. Defaults to True if
there are effect modifiers in the graph
:param num_quantiles_to_discretize_cont_cols: The number of quantiles
into which a numeric effect modifier is split, to enable
estimation of conditional treatment effect over it.
:param kwargs: (optional) Additional estimator-specific parameters
:returns: an instance of the estimator class.
"""
self._data = data
self._target_estimand = identified_estimand
# Currently estimation methods only support univariate treatment and outcome
self._treatment_name = treatment
self._outcome_name = outcome[0] # assuming one-dimensional outcome
self._control_value = control_value
self._treatment_value = treatment_value
self._significance_test = test_significance
self._effect_strength_eval = evaluate_effect_strength
self._target_units = target_units
self._effect_modifier_names = effect_modifiers
self._confidence_intervals = confidence_intervals
self._bootstrap_estimates = None # for confidence intervals and std error
self._bootstrap_null_estimates = None # for significance test
self._effect_modifiers = None
self.method_params = kwargs
# Setting the default interpret method
self.interpret_method = CausalEstimator.DEFAULT_INTERPRET_METHOD
self.logger = logging.getLogger(__name__)
# Setting treatment and outcome values
if self._data is not None:
self._treatment = self._data[self._treatment_name]
self._outcome = self._data[self._outcome_name]
if self._effect_modifier_names:
# only add the observed nodes
self._effect_modifier_names = [
cname for cname in self._effect_modifier_names if cname in self._data.columns
]
if len(self._effect_modifier_names) > 0:
self._effect_modifiers = self._data[self._effect_modifier_names]
self._effect_modifiers = pd.get_dummies(self._effect_modifiers, drop_first=True)
self.logger.debug("Effect modifiers: " + ",".join(self._effect_modifier_names))
else:
self._effect_modifier_names = None
# Check if some parameters were set, otherwise set to default values
self.num_null_simulations = num_null_simulations
self.num_simulations = num_simulations
self.sample_size_fraction = sample_size_fraction
self.confidence_level = confidence_level
self.num_quantiles_to_discretize_cont_cols = num_quantiles_to_discretize_cont_cols
# Estimate conditional estimates by default
self.need_conditional_estimates = (
need_conditional_estimates if need_conditional_estimates != "auto" else bool(self._effect_modifier_names)
)
@staticmethod
def get_estimator_object(new_data, identified_estimand, estimate):
"""Create a new estimator of the same type as the one passed in the estimate argument.
Creates a new object with new_data and the identified_estimand
:param new_data: np.ndarray, pd.Series, pd.DataFrame
The newly assigned data on which the estimator should run
:param identified_estimand: IdentifiedEstimand
An instance of the identified estimand class that provides the information with
        respect to which causal pathways are employed when the treatment affects the outcome
:param estimate: CausalEstimate
It is an already existing estimate whose properties we wish to replicate
:returns: An instance of the same estimator class that had generated the given estimate.
"""
estimator_class = estimate.params["estimator_class"]
new_estimator = estimator_class(
new_data,
identified_estimand,
identified_estimand.treatment_variable,
identified_estimand.outcome_variable,
# names of treatment and outcome
control_value=estimate.control_value,
treatment_value=estimate.treatment_value,
test_significance=False,
evaluate_effect_strength=False,
confidence_intervals=estimate.params["confidence_intervals"],
target_units=estimate.params["target_units"],
effect_modifiers=estimate.params["effect_modifiers"],
            **(estimate.params["method_params"] if estimate.params["method_params"] is not None else {}),
)
return new_estimator
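    # Illustrative use (editorial comment, not in the original source): refuters that
    # re-estimate on modified data can rebuild a comparable estimator from an existing
    # estimate, e.g. with a hypothetical resampled copy of the data:
    #     new_est = CausalEstimator.get_estimator_object(resampled_df, identified_estimand, estimate)
    #     new_value = new_est.estimate_effect().value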
def _estimate_effect(self):
"""This method is to be overriden by the child classes, so that they can run the estimation technique of their choice"""
raise NotImplementedError(
("Main estimation method is " + CausalEstimator.DEFAULT_NOTIMPLEMENTEDERROR_MSG).format(self.__class__)
)
def estimate_effect(self):
"""Base estimation method that calls the estimate_effect method of its calling subclass.
Can optionally also test significance and estimate effect strength for any returned estimate.
:param self: object instance of class Estimator
        :returns: A CausalEstimate instance that contains point estimates of average and conditional effects. Based on the parameters provided, it optionally includes confidence intervals, standard errors, statistical significance and other statistical parameters.
"""
est = self._estimate_effect()
est.add_estimator(self)
if self._significance_test:
self.test_significance(est.value, method=self._significance_test)
if self._confidence_intervals:
self.estimate_confidence_intervals(
est.value, confidence_level=self.confidence_level, method=self._confidence_intervals
)
if self._effect_strength_eval:
effect_strength_dict = self.evaluate_effect_strength(est)
est.add_effect_strength(effect_strength_dict)
return est
def estimate_effect_naive(self):
# TODO Only works for binary treatment
df_withtreatment = self._data.loc[self._data[self._treatment_name] == 1]
df_notreatment = self._data.loc[self._data[self._treatment_name] == 0]
est = np.mean(df_withtreatment[self._outcome_name]) - np.mean(df_notreatment[self._outcome_name])
return CausalEstimate(est, None, None, control_value=0, treatment_value=1)
def _estimate_effect_fn(self, data_df):
"""Function used in conditional effect estimation. This function is to be overridden by each child estimator.
The overridden function should take in a dataframe as input and return the estimate for that data.
"""
raise NotImplementedError(
("Conditional treatment effects are " + CausalEstimator.DEFAULT_NOTIMPLEMENTEDERROR_MSG).format(
self.__class__
)
)
def _estimate_conditional_effects(self, estimate_effect_fn, effect_modifier_names=None, num_quantiles=None):
"""Estimate conditional treatment effects. Common method for all estimators that utilizes a specific estimate_effect_fn implemented by each child estimator.
If a numeric effect modifier is provided, it is discretized into quantile bins. If you would like a custom discretization, you can do so yourself: create a new column containing the discretized effect modifier and then include that column's name in the effect_modifier_names argument.
:param estimate_effect_fn: Function that has a single parameter (a data frame) and returns the treatment effect estimate on that data.
:param effect_modifier_names: Names of effect modifier variables over which the conditional effects will be estimated. If not provided, defaults to the effect modifiers specified during creation of the CausalEstimator object.
:param num_quantiles: The number of quantiles into which a numeric effect modifier variable is discretized. Does not affect any categorical effect modifiers.
:returns: A (multi-index) dataframe that provides separate effects for each value of the (discretized) effect modifiers.
"""
# Defaulting to class default values if parameters are not provided
if effect_modifier_names is None:
effect_modifier_names = self._effect_modifier_names
if num_quantiles is None:
num_quantiles = self.num_quantiles_to_discretize_cont_cols
# Checking that there is at least one effect modifier
if not effect_modifier_names:
raise ValueError("At least one effect modifier should be specified to compute conditional effects.")
# Making sure that effect_modifier_names is a list
effect_modifier_names = parse_state(effect_modifier_names)
if not all(em in self._effect_modifier_names for em in effect_modifier_names):
            self.logger.warning(
"At least one of the provided effect modifiers was not included while fitting the estimator. You may get incorrect results. To resolve, fit the estimator again by providing the updated effect modifiers in estimate_effect()."
)
# Making a copy since we are going to be changing effect modifier names
effect_modifier_names = effect_modifier_names.copy()
prefix = CausalEstimator.TEMP_CAT_COLUMN_PREFIX
# For every numeric effect modifier, adding a temp categorical column
for i in range(len(effect_modifier_names)):
em = effect_modifier_names[i]
if pd.api.types.is_numeric_dtype(self._data[em].dtypes):
self._data[prefix + str(em)] = pd.qcut(self._data[em], num_quantiles, duplicates="drop")
effect_modifier_names[i] = prefix + str(em)
# Grouping by effect modifiers and computing effect separately
by_effect_mods = self._data.groupby(effect_modifier_names)
cond_est_fn = lambda x: self._do(self._treatment_value, x) - self._do(self._control_value, x)
conditional_estimates = by_effect_mods.apply(estimate_effect_fn)
# Deleting the temporary categorical columns
for em in effect_modifier_names:
if em.startswith(prefix):
self._data.pop(em)
return conditional_estimates
def _do(self, x, data_df=None):
raise NotImplementedError(
("Do-operator is " + CausalEstimator.DEFAULT_NOTIMPLEMENTEDERROR_MSG).format(self.__class__)
)
def do(self, x, data_df=None):
"""Method that implements the do-operator.
Given a value x for the treatment, returns the expected value of the outcome when the treatment is intervened to a value x.
:param x: Value of the treatment
:param data_df: Data on which the do-operator is to be applied.
:returns: Value of the outcome when treatment is intervened/set to x.
"""
est = self._do(x, data_df)
return est
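    # Illustrative use (editorial comment, not in the original source): for estimators
    # that implement _do(), interventional means can be combined into an effect estimate:
    #     effect = estimator.do(treatment_value) - estimator.do(control_value)
    # Estimators without a _do() implementation raise NotImplementedError instead.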
def construct_symbolic_estimator(self, estimand):
raise NotImplementedError(("Symbolic estimator string is ").format(self.__class__))
def _generate_bootstrap_estimates(self, num_bootstrap_simulations, sample_size_fraction):
"""Helper function to generate causal estimates over bootstrapped samples.
:param num_bootstrap_simulations: Number of simulations for the bootstrap method.
:param sample_size_fraction: Fraction of the dataset to be resampled.
:returns: A collections.namedtuple containing a list of bootstrapped estimates and a dictionary containing parameters used for the bootstrap.
"""
# The array that stores the results of all estimations
simulation_results = np.zeros(num_bootstrap_simulations)
        # Determine the sample size as the specified fraction of the population (full dataset) size
sample_size = int(sample_size_fraction * len(self._data))
if sample_size > len(self._data):
self.logger.warning("WARN: The sample size is greater than the data being sampled")
self.logger.info("INFO: The sample size: {}".format(sample_size))
self.logger.info("INFO: The number of simulations: {}".format(num_bootstrap_simulations))
# Perform the set number of simulations
for index in range(num_bootstrap_simulations):
new_data = resample(self._data, n_samples=sample_size)
new_estimator = type(self)(
new_data,
self._target_estimand,
self._target_estimand.treatment_variable,
self._target_estimand.outcome_variable,
# names of treatment and outcome
treatment_value=self._treatment_value,
control_value=self._control_value,
test_significance=False,
evaluate_effect_strength=False,
confidence_intervals=False,
target_units=self._target_units,
effect_modifiers=self._effect_modifier_names,
**self.method_params,
)
new_effect = new_estimator.estimate_effect()
simulation_results[index] = new_effect.value
estimates = CausalEstimator.BootstrapEstimates(
simulation_results,
{"num_simulations": num_bootstrap_simulations, "sample_size_fraction": sample_size_fraction},
)
return estimates
def _estimate_confidence_intervals_with_bootstrap(
self, estimate_value, confidence_level=None, num_simulations=None, sample_size_fraction=None
):
"""
Method to compute confidence interval using bootstrapped sampling.
:param estimate_value: obtained estimate's value
:param confidence_level: The level for which to compute CI (e.g., 95% confidence level translates to confidence_level=0.95)
:param num_simulations: The number of simulations to be performed to get the bootstrap confidence intervals.
:param sample_size_fraction: The fraction of the dataset to be resampled.
:returns: confidence interval at the specified level.
For more details on bootstrap or resampling statistics, refer to the following links:
https://ocw.mit.edu/courses/mathematics/18-05-introduction-to-probability-and-statistics-spring-2014/readings/MIT18_05S14_Reading24.pdf
https://projecteuclid.org/download/pdf_1/euclid.ss/1032280214
"""
# Using class default parameters if not specified
if num_simulations is None:
num_simulations = self.num_simulations
if sample_size_fraction is None:
sample_size_fraction = self.sample_size_fraction
# Checking if bootstrap_estimates are already computed
if self._bootstrap_estimates is None:
self._bootstrap_estimates = self._generate_bootstrap_estimates(num_simulations, sample_size_fraction)
elif CausalEstimator.is_bootstrap_parameter_changed(self._bootstrap_estimates.params, locals()):
# Checked if any parameter is changed from the previous std error estimate
self._bootstrap_estimates = self._generate_bootstrap_estimates(num_simulations, sample_size_fraction)
# Now use the data obtained from the simulations to get the value of the confidence estimates
bootstrap_estimates = self._bootstrap_estimates.estimates
# Get the variations of each bootstrap estimate and sort
bootstrap_variations = [bootstrap_estimate - estimate_value for bootstrap_estimate in bootstrap_estimates]
sorted_bootstrap_variations = np.sort(bootstrap_variations)
# Now we take the (1- p)th and the (p)th variations, where p is the chosen confidence level
upper_bound_index = int((1 - confidence_level) * len(sorted_bootstrap_variations))
lower_bound_index = int(confidence_level * len(sorted_bootstrap_variations))
# Get the lower and upper bounds by subtracting the variations from the estimate
lower_bound = estimate_value - sorted_bootstrap_variations[lower_bound_index]
upper_bound = estimate_value - sorted_bootstrap_variations[upper_bound_index]
return lower_bound, upper_bound
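    # Worked example (editorial comment): with confidence_level=0.95 and 100 sorted
    # variations v[0] <= ... <= v[99], upper_bound_index = int(0.05 * 100) = 5 and
    # lower_bound_index = int(0.95 * 100) = 95, giving the basic-bootstrap interval
    # (estimate - v[95], estimate - v[5]); subtracting a large positive variation yields
    # the lower bound, and subtracting a small (possibly negative) one the upper bound.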
def _estimate_confidence_intervals(self, confidence_level=None, method=None, **kwargs):
"""
This method is to be overriden by the child classes, so that they
can run a confidence interval estimation method suited to the specific
causal estimator.
"""
raise NotImplementedError(
(
"This method for estimating confidence intervals is "
+ CausalEstimator.DEFAULT_NOTIMPLEMENTEDERROR_MSG
+ " Meanwhile, you can try the bootstrap method (method='bootstrap') to estimate confidence intervals."
).format(self.__class__)
)
def estimate_confidence_intervals(self, estimate_value, confidence_level=None, method=None, **kwargs):
"""Find the confidence intervals corresponding to any estimator
By default, this is done with the help of bootstrapped confidence intervals
but can be overridden if the specific estimator implements other methods of estimating confidence intervals.
If the method provided is not bootstrap, this function calls the implementation of the specific estimator.
:param estimate_value: obtained estimate's value
:param method: Method for estimating confidence intervals.
:param confidence_level: The confidence level of the confidence intervals of the estimate.
:param kwargs: Other optional args to be passed to the CI method.
:returns: The obtained confidence interval.
"""
if method is None:
if self._confidence_intervals:
method = self._confidence_intervals # this is either True or methodname
else:
method = "default"
confidence_intervals = None
if confidence_level is None:
confidence_level = self.confidence_level
if method == "default" or method is True: # user has not provided any method
try:
confidence_intervals = self._estimate_confidence_intervals(confidence_level, method=method, **kwargs)
except NotImplementedError:
confidence_intervals = self._estimate_confidence_intervals_with_bootstrap(
estimate_value, confidence_level, **kwargs
)
else:
if method == "bootstrap":
confidence_intervals = self._estimate_confidence_intervals_with_bootstrap(
estimate_value, confidence_level, **kwargs
)
else:
confidence_intervals = self._estimate_confidence_intervals(confidence_level, method=method, **kwargs)
return confidence_intervals
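    # Illustrative call (editorial comment, not in the original source): forcing the
    # bootstrap path, whose optional parameters are forwarded via **kwargs:
    #     ci = estimator.estimate_confidence_intervals(
    #         estimate.value, confidence_level=0.90, method="bootstrap", num_simulations=200
    #     )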
def _estimate_std_error_with_bootstrap(self, num_simulations=None, sample_size_fraction=None):
"""Compute standard error using the bootstrap method. Standard error
and confidence intervals use the same parameter num_simulations for
the number of bootstrap simulations.
:param num_simulations: Number of bootstrapped samples.
:param sample_size_fraction: Fraction of data to be resampled.
:returns: Standard error of the obtained estimate.
"""
# Use existing params, if new user defined params are not present
if num_simulations is None:
num_simulations = self.num_simulations
if sample_size_fraction is None:
sample_size_fraction = self.sample_size_fraction
# Checking if bootstrap_estimates are already computed
if self._bootstrap_estimates is None:
self._bootstrap_estimates = self._generate_bootstrap_estimates(num_simulations, sample_size_fraction)
elif CausalEstimator.is_bootstrap_parameter_changed(self._bootstrap_estimates.params, locals()):
# Check if any parameter is changed from the previous std error estimate
self._bootstrap_estimates = self._generate_bootstrap_estimates(num_simulations, sample_size_fraction)
std_error = np.std(self._bootstrap_estimates.estimates)
return std_error
def _estimate_std_error(self, method=None, **kwargs):
"""
This method is to be overriden by the child classes, so that they
can run a standard error estimation method suited to the specific
causal estimator.
"""
raise NotImplementedError(
(
"This method for estimating standard errors is "
+ CausalEstimator.DEFAULT_NOTIMPLEMENTEDERROR_MSG
+ " Meanwhile, you can try the bootstrap method (method='bootstrap') to estimate standard errors."
).format(self.__class__)
)
def estimate_std_error(self, method=None, **kwargs):
"""Compute standard error of an obtained causal estimate.
:param method: Method for computing the standard error.
:param kwargs: Other optional parameters to be passed to the estimating method.
:returns: Standard error of the causal estimate.
"""
if method is None:
if self._confidence_intervals:
method = self._confidence_intervals
else:
method = "default"
std_error = None
if method == "default" or method is True: # user has not provided any method
try:
std_error = self._estimate_std_error(method, **kwargs)
except NotImplementedError:
std_error = self._estimate_std_error_with_bootstrap(**kwargs)
else:
if method == "bootstrap":
std_error = self._estimate_std_error_with_bootstrap(**kwargs)
else:
std_error = self._estimate_std_error(method, **kwargs)
return std_error
def _test_significance_with_bootstrap(self, estimate_value, num_null_simulations=None):
"""Test statistical significance of an estimate using the bootstrap method.
:param estimate_value: Obtained estimate's value
:param num_null_simulations: Number of simulations for the null hypothesis
:returns: p-value of the statistical significance test.
"""
# Use existing params, if new user defined params are not present
if num_null_simulations is None:
num_null_simulations = self.num_null_simulations
do_retest = self._bootstrap_null_estimates is None or CausalEstimator.is_bootstrap_parameter_changed(
self._bootstrap_null_estimates.params, locals()
)
if do_retest:
null_estimates = np.zeros(num_null_simulations)
for i in range(num_null_simulations):
new_outcome = np.random.permutation(self._outcome)
new_data = self._data.assign(dummy_outcome=new_outcome)
# self._outcome = self._data["dummy_outcome"]
new_estimator = type(self)(
new_data,
self._target_estimand,
self._target_estimand.treatment_variable,
("dummy_outcome",),
test_significance=False,
evaluate_effect_strength=False,
confidence_intervals=False,
target_units=self._target_units,
effect_modifiers=self._effect_modifier_names,
**self.method_params,
)
new_effect = new_estimator.estimate_effect()
null_estimates[i] = new_effect.value
self._bootstrap_null_estimates = CausalEstimator.BootstrapEstimates(
null_estimates, {"num_null_simulations": num_null_simulations, "sample_size_fraction": 1}
)
# Processing the null hypothesis estimates
sorted_null_estimates = np.sort(self._bootstrap_null_estimates.estimates)
self.logger.debug("Null estimates: {0}".format(sorted_null_estimates))
median_estimate = sorted_null_estimates[int(num_null_simulations / 2)]
# Doing a two-sided test
if estimate_value > median_estimate:
# Being conservative with the p-value reported
estimate_index = np.searchsorted(sorted_null_estimates, estimate_value, side="left")
p_value = 1 - (estimate_index / num_null_simulations)
if estimate_value <= median_estimate:
# Being conservative with the p-value reported
estimate_index = np.searchsorted(sorted_null_estimates, estimate_value, side="right")
p_value = estimate_index / num_null_simulations
# If the estimate_index is 0, it depends on the number of simulations
if p_value == 0:
p_value = (0, 1 / len(sorted_null_estimates)) # a tuple determining the range.
elif p_value == 1:
p_value = (1 - 1 / len(sorted_null_estimates), 1)
signif_dict = {"p_value": p_value}
return signif_dict
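    # Worked example (editorial comment): with num_null_simulations=1000 and an
    # estimate_value above the null median that exceeds 970 of the sorted null
    # estimates, searchsorted returns 970 and p_value = 1 - 970/1000 = 0.03. If the
    # estimate exceeds all 1000 null estimates, the p-value is reported as the
    # range (0, 0.001) rather than exactly zero.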
def _test_significance(self, estimate_value, method=None, **kwargs):
"""
This method is to be overriden by the child classes, so that they
can run a significance test suited to the specific
causal estimator.
"""
raise NotImplementedError(
(
"This method for testing statistical significance is "
+ CausalEstimator.DEFAULT_NOTIMPLEMENTEDERROR_MSG
+ " Meanwhile, you can try the bootstrap method (method='bootstrap') to test statistical significance."
).format(self.__class__)
)
def test_significance(self, estimate_value, method=None, **kwargs):
"""Test statistical significance of obtained estimate.
By default, uses resampling to create a non-parametric significance test.
A general procedure. Individual child estimators can implement different methods.
If the method name is different from "bootstrap", this function calls the
implementation of the child estimator.
:param self: object instance of class Estimator
:param estimate_value: obtained estimate's value
:param method: Method for checking statistical significance
:returns: p-value from the significance test
"""
if method is None:
if self._significance_test:
method = self._significance_test # this is either True or methodname
else:
method = "default"
signif_dict = None
if method == "default" or method is True: # user has not provided any method
try:
signif_dict = self._test_significance(estimate_value, method, **kwargs)
except NotImplementedError:
signif_dict = self._test_significance_with_bootstrap(estimate_value, **kwargs)
else:
if method == "bootstrap":
signif_dict = self._test_significance_with_bootstrap(estimate_value, **kwargs)
else:
signif_dict = self._test_significance(estimate_value, method, **kwargs)
return signif_dict
def evaluate_effect_strength(self, estimate):
fraction_effect_explained = self._evaluate_effect_strength(estimate, method="fraction-effect")
# Need to test r-squared before supporting
# effect_r_squared = self._evaluate_effect_strength(estimate, method="r-squared")
strength_dict = {
"fraction-effect": fraction_effect_explained
# 'r-squared': effect_r_squared
}
return strength_dict
def _evaluate_effect_strength(self, estimate, method="fraction-effect"):
supported_methods = ["fraction-effect"]
if method not in supported_methods:
raise NotImplementedError("This method is not supported for evaluating effect strength")
if method == "fraction-effect":
naive_obs_estimate = self.estimate_effect_naive()
            self.logger.debug("Estimate: %s, naive estimate: %s", estimate.value, naive_obs_estimate.value)
fraction_effect_explained = estimate.value / naive_obs_estimate.value
return fraction_effect_explained
# elif method == "r-squared":
# outcome_mean = np.mean(self._outcome)
# total_variance = np.sum(np.square(self._outcome - outcome_mean))
# Assuming a linear model with one variable: the treatment
# Currently only works for continuous y
# causal_model = outcome_mean + estimate.value*self._treatment
# squared_residual = np.sum(np.square(self._outcome - causal_model))
# r_squared = 1 - (squared_residual/total_variance)
# return r_squared
else:
return None
def update_input(self, treatment_value, control_value, target_units):
self._control_value = control_value
self._treatment_value = treatment_value
self._target_units = target_units
@staticmethod
def is_bootstrap_parameter_changed(bootstrap_estimates_params, given_params):
"""Check whether parameters of the bootstrap have changed.
This is an efficiency method that checks if fresh resampling of the bootstrap samples is required.
Returns True if parameters have changed and resampling should be done again.
:param bootstrap_estimates_params: A dictionary of parameters for the current bootstrap samples
:param given_params: A dictionary of parameters passed by the user
:returns: A binary flag denoting whether the parameters are different.
"""
is_any_parameter_changed = False
for prm, val in bootstrap_estimates_params.items():
given_val = given_params.get(prm, None)
if given_val is not None and given_val != val:
is_any_parameter_changed = True
break
return is_any_parameter_changed
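    # Illustrative check (editorial comment, not in the original source):
    #     params = {"num_simulations": 100, "sample_size_fraction": 1}
    #     CausalEstimator.is_bootstrap_parameter_changed(params, {"num_simulations": 200})   # True
    #     CausalEstimator.is_bootstrap_parameter_changed(params, {"num_simulations": None})  # False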
def target_units_tostr(self):
s = ""
if type(self._target_units) is str:
s += self._target_units
elif callable(self._target_units):
s += "Data subset defined by a function"
elif isinstance(self._target_units, pd.DataFrame):
s += "Data subset provided as a data frame"
return s
def signif_results_tostr(self, signif_results):
s = ""
pval = signif_results["p_value"]
if type(pval) is tuple:
s += "[{0}, {1}]".format(pval[0], pval[1])
else:
s += "{0}".format(pval)
return s
def estimate_effect(
treatment: Union[str, List[str]],
outcome: Union[str, List[str]],
identified_estimand: IdentifiedEstimand,
identifier_name: str,
method: CausalEstimator,
control_value: int = 0,
treatment_value: int = 1,
test_significance: Optional[bool] = None,
evaluate_effect_strength: bool = False,
confidence_intervals: bool = False,
target_units: str = "ate",
effect_modifiers: List[str] = [],
fit_estimator: bool = True,
method_params: Optional[Dict] = None,
):
"""Estimate the identified causal effect.
    Currently requires an explicit method name to be specified. Method names follow the convention of identification method followed by the specific estimation method: "[backdoor/iv].estimation_method_name". The following methods are supported.
* Propensity Score Matching: "backdoor.propensity_score_matching"
* Propensity Score Stratification: "backdoor.propensity_score_stratification"
* Propensity Score-based Inverse Weighting: "backdoor.propensity_score_weighting"
* Linear Regression: "backdoor.linear_regression"
* Generalized Linear Models (e.g., logistic regression): "backdoor.generalized_linear_model"
* Instrumental Variables: "iv.instrumental_variable"
* Regression Discontinuity: "iv.regression_discontinuity"
In addition, you can directly call any of the EconML estimation methods. The convention is "backdoor.econml.path-to-estimator-class". For example, for the double machine learning estimator ("DML" class) that is located inside "dml" module of EconML, you can use the method name, "backdoor.econml.dml.DML". CausalML estimators can also be called. See `this demo notebook <https://py-why.github.io/dowhy/example_notebooks/dowhy-conditional-treatment-effects.html>`_.
:param treatment: Name of the treatment
:param outcome: Name of the outcome
:param identified_estimand: a probability expression
that represents the effect to be estimated. Output of
CausalModel.identify_effect method
    :param identifier_name: name of the identification method used (e.g., "backdoor" or "iv").
    :param method: an instance of a CausalEstimator subclass implementing the chosen estimation method.
:param control_value: Value of the treatment in the control group, for effect estimation. If treatment is multi-variate, this can be a list.
:param treatment_value: Value of the treatment in the treated group, for effect estimation. If treatment is multi-variate, this can be a list.
:param test_significance: Binary flag on whether to additionally do a statistical signficance test for the estimate.
:param evaluate_effect_strength: (Experimental) Binary flag on whether to estimate the relative strength of the treatment's effect. This measure can be used to compare different treatments for the same outcome (by running this method with different treatments sequentially).
:param confidence_intervals: (Experimental) Binary flag indicating whether confidence intervals should be computed.
:param target_units: (Experimental) The units for which the treatment effect should be estimated. This can be of three types. (1) a string for common specifications of target units (namely, "ate", "att" and "atc"), (2) a lambda function that can be used as an index for the data (pandas DataFrame), or (3) a new DataFrame that contains values of the effect_modifiers and effect will be estimated only for this new data.
:param effect_modifiers: Names of effect modifier variables can be (optionally) specified here too, since they do not affect identification. If None, the effect_modifiers from the CausalModel are used.
:param fit_estimator: Boolean flag on whether to fit the estimator.
Setting it to False is useful to estimate the effect on new data using a previously fitted estimator.
:param method_params: Dictionary containing any method-specific parameters. These are passed directly to the estimating method. See the docs for each estimation method for allowed method-specific params.
:returns: An instance of the CausalEstimate class, containing the causal effect estimate
and other method-dependent information
"""
treatment = parse_state(treatment)
outcome = parse_state(outcome)
causal_estimator_class = method.__class__
identified_estimand.set_identifier_method(identifier_name)
if identified_estimand.no_directed_path:
logger.warning("No directed path from {0} to {1}.".format(treatment, outcome))
return CausalEstimate(
0, identified_estimand, None, control_value=control_value, treatment_value=treatment_value
)
# Check if estimator's target estimand is identified
elif identified_estimand.estimands[identifier_name] is None:
logger.error("No valid identified estimand available.")
return CausalEstimate(None, None, None, control_value=control_value, treatment_value=treatment_value)
method.update_input(treatment_value, control_value, target_units)
estimate = method.estimate_effect()
# Store parameters inside estimate object for refutation methods
# TODO: This add_params needs to move to the estimator class
# inside estimate_effect and estimate_conditional_effect
estimate.add_params(
estimand_type=identified_estimand.estimand_type,
estimator_class=causal_estimator_class,
test_significance=test_significance,
evaluate_effect_strength=evaluate_effect_strength,
confidence_intervals=confidence_intervals,
target_units=target_units,
effect_modifiers=effect_modifiers,
method_params=method_params,
)
return estimate
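# Illustrative call pattern for this version of the API (editorial sketch, not in the
# original source). The estimator instance is constructed with the data up front, per
# the CausalEstimator constructor above; the subclass and variable names are placeholders:
#     estimator = LinearRegressionEstimator(df, estimand, ["t"], ["y"])
#     estimate = estimate_effect(
#         treatment="t",
#         outcome="y",
#         identified_estimand=estimand,
#         identifier_name="backdoor",
#         method=estimator,
#     )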
class CausalEstimate:
"""Class for the estimate object that every causal estimator returns"""
def __init__(
self,
estimate,
target_estimand,
realized_estimand_expr,
control_value,
treatment_value,
conditional_estimates=None,
**kwargs,
):
self.value = estimate
self.target_estimand = target_estimand
self.realized_estimand_expr = realized_estimand_expr
self.control_value = control_value
self.treatment_value = treatment_value
self.conditional_estimates = conditional_estimates
self.params = kwargs
if self.params is not None:
for key, value in self.params.items():
setattr(self, key, value)
self.effect_strength = None
def add_estimator(self, estimator_instance):
self.estimator = estimator_instance
def add_effect_strength(self, strength_dict):
self.effect_strength = strength_dict
def add_params(self, **kwargs):
self.params.update(kwargs)
def get_confidence_intervals(self, confidence_level=None, method=None, **kwargs):
"""Get confidence intervals of the obtained estimate.
By default, this is done with the help of bootstrapped confidence intervals
but can be overridden if the specific estimator implements other methods of estimating confidence intervals.
If the method provided is not bootstrap, this function calls the implementation of the specific estimator.
:param method: Method for estimating confidence intervals.
:param confidence_level: The confidence level of the confidence intervals of the estimate.
:param kwargs: Other optional args to be passed to the CI method.
:returns: The obtained confidence interval.
"""
confidence_intervals = self.estimator.estimate_confidence_intervals(
estimate_value=self.value, confidence_level=confidence_level, method=method, **kwargs
)
return confidence_intervals
def get_standard_error(self, method=None, **kwargs):
"""Get standard error of the obtained estimate.
By default, this is done with the help of bootstrapped standard errors
but can be overridden if the specific estimator implements other methods of estimating standard error.
If the method provided is not bootstrap, this function calls the implementation of the specific estimator.
:param method: Method for computing the standard error.
:param kwargs: Other optional parameters to be passed to the estimating method.
:returns: Standard error of the causal estimate.
"""
std_error = self.estimator.estimate_std_error(method=method, **kwargs)
return std_error
def test_stat_significance(self, method=None, **kwargs):
"""Test statistical significance of the estimate obtained.
By default, uses resampling to create a non-parametric significance test.
Individual child estimators can implement different methods.
If the method name is different from "bootstrap", this function calls the
implementation of the child estimator.
:param method: Method for checking statistical significance
:param kwargs: Other optional parameters to be passed to the estimating method.
:returns: p-value from the significance test
"""
signif_results = self.estimator.test_significance(self.value, method=method, **kwargs)
return {"p_value": signif_results["p_value"]}
def estimate_conditional_effects(
self, effect_modifiers=None, num_quantiles=CausalEstimator.NUM_QUANTILES_TO_DISCRETIZE_CONT_COLS
):
"""Estimate treatment effect conditioned on given variables.
If a numeric effect modifier is provided, it is discretized into quantile bins. If you would like a custom discretization, you can do so yourself: create a new column containing the discretized effect modifier and then include that column's name in the effect_modifier_names argument.
:param effect_modifiers: Names of effect modifier variables over which the conditional effects will be estimated. If not provided, defaults to the effect modifiers specified during creation of the CausalEstimator object.
:param num_quantiles: The number of quantiles into which a numeric effect modifier variable is discretized. Does not affect any categorical effect modifiers.
:returns: A (multi-index) dataframe that provides separate effects for each value of the (discretized) effect modifiers.
"""
return self.estimator._estimate_conditional_effects(
self.estimator._estimate_effect_fn, effect_modifiers, num_quantiles
)
def interpret(self, method_name=None, **kwargs):
"""Interpret the causal estimate.
:param method_name: Method used (string) or a list of methods. If None, then the default for the specific estimator is used.
        :param kwargs: Optional parameters that are directly passed to the interpreter method.
:returns: None
"""
if method_name is None:
method_name = self.estimator.interpret_method
method_name_arr = parse_state(method_name)
for method in method_name_arr:
interpreter = interpreters.get_class_object(method)
interpreter(self, **kwargs).interpret()
def __str__(self):
s = "*** Causal Estimate ***\n"
# No estimand was identified (identification failed)
if self.target_estimand is None:
return "Estimation failed! No relevant identified estimand available for this estimation method."
s += "\n## Identified estimand\n{0}".format(self.target_estimand.__str__(only_target_estimand=True))
s += "\n## Realized estimand\n{0}".format(self.realized_estimand_expr)
if hasattr(self, "estimator"):
s += "\nTarget units: {0}\n".format(self.estimator.target_units_tostr())
s += "\n## Estimate\n"
s += "Mean value: {0}\n".format(self.value)
s += ""
if hasattr(self, "cate_estimates"):
s += "Effect estimates: {0}\n".format(self.cate_estimates)
if hasattr(self, "estimator"):
if self.estimator._significance_test:
s += "p-value: {0}\n".format(self.estimator.signif_results_tostr(self.test_stat_significance()))
if self.estimator._confidence_intervals:
s += "{0}% confidence interval: {1}\n".format(
100 * self.estimator.confidence_level, self.get_confidence_intervals()
)
if self.conditional_estimates is not None:
s += "### Conditional Estimates\n"
s += str(self.conditional_estimates)
if self.effect_strength is not None:
s += "\n## Effect Strength\n"
s += "Change in outcome attributable to treatment: {}\n".format(self.effect_strength["fraction-effect"])
# s += "Variance in outcome explained by treatment: {}\n".format(self.effect_strength["r-squared"])
return s
class RealizedEstimand(object):
def __init__(self, identified_estimand, estimator_name):
self.treatment_variable = identified_estimand.treatment_variable
self.outcome_variable = identified_estimand.outcome_variable
self.backdoor_variables = identified_estimand.get_backdoor_variables()
self.instrumental_variables = identified_estimand.instrumental_variables
self.estimand_type = identified_estimand.estimand_type
self.estimand_expression = None
self.assumptions = None
self.estimator_name = estimator_name
def update_assumptions(self, estimator_assumptions):
self.assumptions = estimator_assumptions
def update_estimand_expression(self, estimand_expression):
self.estimand_expression = estimand_expression
def __str__(self):
s = "Realized estimand: {0}\n".format(self.estimator_name)
s += "Realized estimand type: {0}\n".format(self.estimand_type)
s += "Estimand expression:\n{0}\n".format(sp.pretty(self.estimand_expression))
j = 1
for ass_name, ass_str in self.assumptions.items():
s += "Estimand assumption {0}, {1}: {2}\n".format(j, ass_name, ass_str)
j += 1
return s

after_content:

import copy
import logging
from collections import namedtuple
from typing import Dict, List, Optional, Union
import numpy as np
import pandas as pd
import sympy as sp
from sklearn.utils import resample
import dowhy.interpreters as interpreters
from dowhy.causal_identifier.identified_estimand import IdentifiedEstimand
from dowhy.utils.api import parse_state
logger = logging.getLogger(__name__)
class CausalEstimator:
"""Base class for an estimator of causal effect.
Subclasses implement different estimation methods. All estimation methods are in the package "dowhy.causal_estimators"
"""
# The default number of simulations for statistical testing
DEFAULT_NUMBER_OF_SIMULATIONS_STAT_TEST = 1000
# The default number of simulations to obtain confidence intervals
DEFAULT_NUMBER_OF_SIMULATIONS_CI = 100
# The portion of the total size that should be taken each time to find the confidence intervals
# 1 is the recommended value
# https://ocw.mit.edu/courses/mathematics/18-05-introduction-to-probability-and-statistics-spring-2014/readings/MIT18_05S14_Reading24.pdf
# https://projecteuclid.org/download/pdf_1/euclid.ss/1032280214
DEFAULT_SAMPLE_SIZE_FRACTION = 1
# The default Confidence Level
DEFAULT_CONFIDENCE_LEVEL = 0.95
# Number of quantiles to discretize continuous columns, for applying groupby
NUM_QUANTILES_TO_DISCRETIZE_CONT_COLS = 5
# Prefix to add to temporary categorical variables created after discretization
TEMP_CAT_COLUMN_PREFIX = "__categorical__"
    DEFAULT_NOTIMPLEMENTEDERROR_MSG = "not yet implemented for {0}. If you would like this to be implemented in the next version, please raise an issue at https://github.com/microsoft/dowhy/issues"
BootstrapEstimates = namedtuple("BootstrapEstimates", ["estimates", "params"])
DEFAULT_INTERPRET_METHOD = ["textual_effect_interpreter"]
# std args to be removed from locals() before being passed to args_dict
_STD_INIT_ARGS = ("self", "__class__", "args", "kwargs")
def __init__(
self,
identified_estimand: IdentifiedEstimand,
test_significance: bool = False,
evaluate_effect_strength: bool = False,
confidence_intervals: bool = False,
num_null_simulations: int = DEFAULT_NUMBER_OF_SIMULATIONS_STAT_TEST,
num_simulations: int = DEFAULT_NUMBER_OF_SIMULATIONS_CI,
sample_size_fraction: int = DEFAULT_SAMPLE_SIZE_FRACTION,
confidence_level: float = DEFAULT_CONFIDENCE_LEVEL,
need_conditional_estimates: Union[bool, str] = "auto",
num_quantiles_to_discretize_cont_cols: int = NUM_QUANTILES_TO_DISCRETIZE_CONT_COLS,
**_,
):
"""Initializes an estimator with data and names of relevant variables.
This method is called from the constructors of its child classes.
:param identified_estimand: probability expression
representing the target identified estimand to estimate.
:param test_significance: Binary flag or a string indicating whether to test significance and by which method. All estimators support test_significance="bootstrap" that estimates a p-value for the obtained estimate using the bootstrap method. Individual estimators can override this to support custom testing methods. The bootstrap method supports an optional parameter, num_null_simulations. If False, no testing is done. If True, significance of the estimate is tested using the custom method if available, otherwise by bootstrap.
:param evaluate_effect_strength: (Experimental) whether to evaluate the strength of effect
        :param confidence_intervals: Binary flag or a string indicating whether the confidence intervals should be computed and which method should be used. All methods support estimation of confidence intervals using the bootstrap method by using the parameter confidence_intervals="bootstrap". The bootstrap method takes in two arguments (num_simulations and sample_size_fraction) that can be optionally specified in the params dictionary. Estimators may also override this to implement their own confidence interval method. If this parameter is False, no confidence intervals are computed. If True, confidence intervals are computed by the estimator's specific method if available, otherwise through bootstrap.
:param num_null_simulations: The number of simulations for testing the
statistical significance of the estimator
:param num_simulations: The number of simulations for finding the
        confidence interval (and/or standard error) for an estimate
:param sample_size_fraction: The size of the sample for the bootstrap
estimator
:param confidence_level: The confidence level of the confidence
interval estimate
:param need_conditional_estimates: Boolean flag indicating whether
conditional estimates should be computed. Defaults to True if
there are effect modifiers in the graph
:param num_quantiles_to_discretize_cont_cols: The number of quantiles
into which a numeric effect modifier is split, to enable
estimation of conditional treatment effect over it.
:param kwargs: (optional) Additional estimator-specific parameters
:returns: an instance of the estimator class.
"""
self._target_estimand = identified_estimand
self._significance_test = test_significance
self._effect_strength_eval = evaluate_effect_strength
self._confidence_intervals = confidence_intervals
# Setting the default interpret method
self.interpret_method = CausalEstimator.DEFAULT_INTERPRET_METHOD
self.logger = logging.getLogger(__name__)
# Check if some parameters were set, otherwise set to default values
self.num_null_simulations = num_null_simulations
self.num_simulations = num_simulations
self.sample_size_fraction = sample_size_fraction
self.confidence_level = confidence_level
self.num_quantiles_to_discretize_cont_cols = num_quantiles_to_discretize_cont_cols
# Estimate conditional estimates by default
self.need_conditional_estimates = need_conditional_estimates
self._bootstrap_estimates = None
self._bootstrap_null_estimates = None
def _set_data(self, data: pd.DataFrame, treatment_name: List[str], outcome_name: List[str]):
"""Sets the data for the estimator
:param data: data frame containing the data
:param treatment_name: name of the treatment variable
:param outcome_name: name of the outcome variable
"""
self._data = data
self._treatment_name = treatment_name
self._outcome_name = outcome_name[0]
self._treatment = self._data[self._treatment_name]
self._outcome = self._data[self._outcome_name]
def _set_effect_modifiers(self, effect_modifier_names: Optional[List[str]] = None):
"""Sets the effect modifiers for the estimator
        Also updates need_conditional_estimates according to the provided effect modifiers.
        :param effect_modifier_names: Variables on which to compute separate
effects, or return a heterogeneous effect function. Not all
methods support this currently.
"""
self._effect_modifiers = effect_modifier_names
if effect_modifier_names is not None:
self._effect_modifier_names = [cname for cname in effect_modifier_names if cname in self._data.columns]
if len(self._effect_modifier_names) > 0:
self._effect_modifiers = self._data[self._effect_modifier_names]
self._effect_modifiers = pd.get_dummies(self._effect_modifiers, drop_first=True)
self.logger.debug("Effect modifiers: " + ",".join(self._effect_modifier_names))
else:
self._effect_modifier_names = []
else:
self._effect_modifier_names = []
self.need_conditional_estimates = (
self.need_conditional_estimates
if self.need_conditional_estimates != "auto"
else (self._effect_modifier_names and len(self._effect_modifier_names) > 0)
)
def _set_identified_estimand(self, new_identified_estimand):
"""Method used internally to change the target estimand (required by some refuters)
:param new_identified_estimand: The new target_estimand to use
"""
self._target_estimand = new_identified_estimand
def get_new_estimator_object(
self,
identified_estimand,
test_significance=False,
evaluate_effect_strength=False,
confidence_intervals=None,
):
"""Create a new estimator of the same type as the one passed in the estimate argument.
Creates a new object with the identified_estimand
:param identified_estimand: IdentifiedEstimand
An instance of the identified estimand class that provides the information with
respect to which causal pathways are employed when the treatment effects the outcome
:returns: A new instance of the same estimator class that had generated the given estimate.
"""
new_estimator = copy.deepcopy(self)
new_estimator._target_estimand = identified_estimand
new_estimator._test_significance = test_significance
new_estimator._evaluate_effect_strength = evaluate_effect_strength
new_estimator._confidence_intervals = (
self._confidence_intervals if confidence_intervals is None else confidence_intervals
)
return new_estimator
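# Illustrative usage sketch (hypothetical names; this mirrors the pattern used
# by the bootstrap and significance-testing code below). `estimator` is assumed
# to be a fitted child estimator and `new_data` a perturbed copy of the data.
#
#   new_estimator = estimator.get_new_estimator_object(estimator._target_estimand)
#   new_estimator.fit(
#       new_data,
#       estimator._target_estimand.treatment_variable,
#       estimator._target_estimand.outcome_variable,
#   )
#   new_effect = new_estimator.estimate_effect(
#       treatment_value=1, control_value=0, target_units="ate"
#   )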
def estimate_effect_naive(self):
# TODO Only works for binary treatment
df_withtreatment = self._data.loc[self._data[self._treatment_name] == 1]
df_notreatment = self._data.loc[self._data[self._treatment_name] == 0]
est = np.mean(df_withtreatment[self._outcome_name]) - np.mean(df_notreatment[self._outcome_name])
return CausalEstimate(est, None, None, control_value=0, treatment_value=1)
def _estimate_effect_fn(self, data_df):
"""Function used in conditional effect estimation. This function is to be overridden by each child estimator.
The overridden function should take in a dataframe as input and return the estimate for that data.
"""
raise NotImplementedError(
("Conditional treatment effects are " + CausalEstimator.DEFAULT_NOTIMPLEMENTEDERROR_MSG).format(
self.__class__
)
)
def _estimate_conditional_effects(self, estimate_effect_fn, effect_modifier_names=None, num_quantiles=None):
"""Estimate conditional treatment effects. Common method for all estimators that utilizes a specific estimate_effect_fn implemented by each child estimator.
If a numeric effect modifier is provided, it is discretized into quantile bins. If you would like a custom discretization, you can do so yourself: create a new column containing the discretized effect modifier and then include that column's name in the effect_modifier_names argument.
:param estimate_effect_fn: Function that has a single parameter (a data frame) and returns the treatment effect estimate on that data.
:param effect_modifier_names: Names of effect modifier variables over which the conditional effects will be estimated. If not provided, defaults to the effect modifiers specified during creation of the CausalEstimator object.
:param num_quantiles: The number of quantiles into which a numeric effect modifier variable is discretized. Does not affect any categorical effect modifiers.
:returns: A (multi-index) dataframe that provides separate effects for each value of the (discretized) effect modifiers.
"""
# Defaulting to class default values if parameters are not provided
if effect_modifier_names is None:
effect_modifier_names = self._effect_modifier_names
if num_quantiles is None:
num_quantiles = self.num_quantiles_to_discretize_cont_cols
# Checking that there is at least one effect modifier
if not effect_modifier_names:
raise ValueError("At least one effect modifier should be specified to compute conditional effects.")
# Making sure that effect_modifier_names is a list
effect_modifier_names = parse_state(effect_modifier_names)
if not all(em in self._effect_modifier_names for em in effect_modifier_names):
self.logger.warning(
"At least one of the provided effect modifiers was not included while fitting the estimator. You may get incorrect results. To resolve, fit the estimator again by providing the updated effect modifiers in estimate_effect()."
)
# Making a copy since we are going to be changing effect modifier names
effect_modifier_names = effect_modifier_names.copy()
prefix = CausalEstimator.TEMP_CAT_COLUMN_PREFIX
# For every numeric effect modifier, adding a temp categorical column
for i in range(len(effect_modifier_names)):
em = effect_modifier_names[i]
if pd.api.types.is_numeric_dtype(self._data[em].dtypes):
self._data[prefix + str(em)] = pd.qcut(self._data[em], num_quantiles, duplicates="drop")
effect_modifier_names[i] = prefix + str(em)
# Grouping by effect modifiers and computing effect separately
by_effect_mods = self._data.groupby(effect_modifier_names)
cond_est_fn = lambda x: self._do(self._treatment_value, x) - self._do(self._control_value, x)
conditional_estimates = by_effect_mods.apply(estimate_effect_fn)
# Deleting the temporary categorical columns
for em in effect_modifier_names:
if em.startswith(prefix):
self._data.pop(em)
return conditional_estimates
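# Worked sketch of the discretization step above (hypothetical data): a numeric
# effect modifier is binned into quantiles before the groupby.
#
#   import pandas as pd
#   df = pd.DataFrame({"age": [21, 35, 47, 52, 63, 70]})
#   df["__categorical__age"] = pd.qcut(df["age"], 2, duplicates="drop")
#   # Conditional effects would then be computed per bin via
#   # df.groupby("__categorical__age").apply(estimate_effect_fn).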
def _do(self, x, data_df=None):
raise NotImplementedError(
("Do-operator is " + CausalEstimator.DEFAULT_NOTIMPLEMENTEDERROR_MSG).format(self.__class__)
)
def do(self, x, data_df=None):
"""Method that implements the do-operator.
Given a value x for the treatment, returns the expected value of the outcome when the treatment is intervened to a value x.
:param x: Value of the treatment
:param data_df: Data on which the do-operator is to be applied.
:returns: Value of the outcome when treatment is intervened/set to x.
"""
est = self._do(x, data_df)
return est
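# Illustrative usage (assumes a fitted child estimator that implements _do;
# the treatment values 0/1 are hypothetical):
#
#   y1 = estimator.do(1)  # expected outcome under do(T=1)
#   y0 = estimator.do(0)  # expected outcome under do(T=0)
#   ate = y1 - y0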
def construct_symbolic_estimator(self, estimand):
raise NotImplementedError(("Symbolic estimator string is ").format(self.__class__))
def _generate_bootstrap_estimates(self, num_bootstrap_simulations, sample_size_fraction):
"""Helper function to generate causal estimates over bootstrapped samples.
:param num_bootstrap_simulations: Number of simulations for the bootstrap method.
:param sample_size_fraction: Fraction of the dataset to be resampled.
:returns: A collections.namedtuple containing a list of bootstrapped estimates and a dictionary containing parameters used for the bootstrap.
"""
# The array that stores the results of all estimations
simulation_results = np.zeros(num_bootstrap_simulations)
# Compute the bootstrap sample size as a fraction of the full dataset size
sample_size = int(sample_size_fraction * len(self._data))
if sample_size > len(self._data):
self.logger.warning("WARN: The sample size is greater than the data being sampled")
self.logger.info("INFO: The sample size: {}".format(sample_size))
self.logger.info("INFO: The number of simulations: {}".format(num_bootstrap_simulations))
# Perform the set number of simulations
for index in range(num_bootstrap_simulations):
new_data = resample(self._data, n_samples=sample_size)
new_estimator = self.get_new_estimator_object(
self._target_estimand,
# names of treatment and outcome
test_significance=False,
evaluate_effect_strength=False,
confidence_intervals=False,
)
new_estimator.fit(
new_data,
self._target_estimand.treatment_variable,
self._target_estimand.outcome_variable,
effect_modifier_names=self._effect_modifier_names,
)
new_effect = new_estimator.estimate_effect(
treatment_value=self._treatment_value,
control_value=self._control_value,
target_units=self._target_units,
)
simulation_results[index] = new_effect.value
estimates = CausalEstimator.BootstrapEstimates(
simulation_results,
{"num_simulations": num_bootstrap_simulations, "sample_size_fraction": sample_size_fraction},
)
return estimates
def _estimate_confidence_intervals_with_bootstrap(
self, estimate_value, confidence_level=None, num_simulations=None, sample_size_fraction=None
):
"""
Method to compute confidence interval using bootstrapped sampling.
:param estimate_value: obtained estimate's value
:param confidence_level: The level for which to compute CI (e.g., 95% confidence level translates to confidence_level=0.95)
:param num_simulations: The number of simulations to be performed to get the bootstrap confidence intervals.
:param sample_size_fraction: The fraction of the dataset to be resampled.
:returns: confidence interval at the specified level.
For more details on bootstrap or resampling statistics, refer to the following links:
https://ocw.mit.edu/courses/mathematics/18-05-introduction-to-probability-and-statistics-spring-2014/readings/MIT18_05S14_Reading24.pdf
https://projecteuclid.org/download/pdf_1/euclid.ss/1032280214
"""
# Using class default parameters if not specified
if num_simulations is None:
num_simulations = self.num_simulations
if sample_size_fraction is None:
sample_size_fraction = self.sample_size_fraction
# Checking if bootstrap_estimates are already computed
if self._bootstrap_estimates is None:
self._bootstrap_estimates = self._generate_bootstrap_estimates(num_simulations, sample_size_fraction)
elif CausalEstimator.is_bootstrap_parameter_changed(self._bootstrap_estimates.params, locals()):
# Checked if any parameter is changed from the previous std error estimate
self._bootstrap_estimates = self._generate_bootstrap_estimates(num_simulations, sample_size_fraction)
# Now use the data obtained from the simulations to get the value of the confidence estimates
bootstrap_estimates = self._bootstrap_estimates.estimates
# Get the variations of each bootstrap estimate and sort
bootstrap_variations = [bootstrap_estimate - estimate_value for bootstrap_estimate in bootstrap_estimates]
sorted_bootstrap_variations = np.sort(bootstrap_variations)
# Now we take the (1- p)th and the (p)th variations, where p is the chosen confidence level
upper_bound_index = int((1 - confidence_level) * len(sorted_bootstrap_variations))
lower_bound_index = int(confidence_level * len(sorted_bootstrap_variations))
# Get the lower and upper bounds by subtracting the variations from the estimate
lower_bound = estimate_value - sorted_bootstrap_variations[lower_bound_index]
upper_bound = estimate_value - sorted_bootstrap_variations[upper_bound_index]
return lower_bound, upper_bound
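# Numeric sketch of the percentile logic above (illustrative numbers): with
# confidence_level=0.95 and 100 sorted bootstrap variations,
# upper_bound_index = int(0.05 * 100) = 5 and lower_bound_index = int(0.95 * 100) = 95,
# so the interval is [estimate - variations[95], estimate - variations[5]].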
def _estimate_confidence_intervals(self, confidence_level=None, method=None, **kwargs):
"""
This method is to be overridden by the child classes, so that they
can run a confidence interval estimation method suited to the specific
causal estimator.
"""
raise NotImplementedError(
(
"This method for estimating confidence intervals is "
+ CausalEstimator.DEFAULT_NOTIMPLEMENTEDERROR_MSG
+ " Meanwhile, you can try the bootstrap method (method='bootstrap') to estimate confidence intervals."
).format(self.__class__)
)
def estimate_confidence_intervals(self, estimate_value, confidence_level=None, method=None, **kwargs):
"""Find the confidence intervals corresponding to any estimator
By default, this is done with the help of bootstrapped confidence intervals
but can be overridden if the specific estimator implements other methods of estimating confidence intervals.
If the method provided is not bootstrap, this function calls the implementation of the specific estimator.
:param estimate_value: obtained estimate's value
:param method: Method for estimating confidence intervals.
:param confidence_level: The confidence level of the confidence intervals of the estimate.
:param kwargs: Other optional args to be passed to the CI method.
:returns: The obtained confidence interval.
"""
if method is None:
if self._confidence_intervals:
method = self._confidence_intervals # this is either True or methodname
else:
method = "default"
confidence_intervals = None
if confidence_level is None:
confidence_level = self.confidence_level
if method == "default" or method is True: # user has not provided any method
try:
confidence_intervals = self._estimate_confidence_intervals(confidence_level, method=method, **kwargs)
except NotImplementedError:
confidence_intervals = self._estimate_confidence_intervals_with_bootstrap(
estimate_value, confidence_level, **kwargs
)
else:
if method == "bootstrap":
confidence_intervals = self._estimate_confidence_intervals_with_bootstrap(
estimate_value, confidence_level, **kwargs
)
else:
confidence_intervals = self._estimate_confidence_intervals(confidence_level, method=method, **kwargs)
return confidence_intervals
def _estimate_std_error_with_bootstrap(self, num_simulations=None, sample_size_fraction=None):
"""Compute standard error using the bootstrap method. Standard error
and confidence intervals use the same parameter num_simulations for
the number of bootstrap simulations.
:param num_simulations: Number of bootstrapped samples.
:param sample_size_fraction: Fraction of data to be resampled.
:returns: Standard error of the obtained estimate.
"""
# Use existing params, if new user defined params are not present
if num_simulations is None:
num_simulations = self.num_simulations
if sample_size_fraction is None:
sample_size_fraction = self.sample_size_fraction
# Checking if bootstrap_estimates are already computed
if self._bootstrap_estimates is None:
self._bootstrap_estimates = self._generate_bootstrap_estimates(num_simulations, sample_size_fraction)
elif CausalEstimator.is_bootstrap_parameter_changed(self._bootstrap_estimates.params, locals()):
# Check if any parameter is changed from the previous std error estimate
self._bootstrap_estimates = self._generate_bootstrap_estimates(num_simulations, sample_size_fraction)
std_error = np.std(self._bootstrap_estimates.estimates)
return std_error
def _estimate_std_error(self, method=None, **kwargs):
"""
This method is to be overridden by the child classes, so that they
can run a standard error estimation method suited to the specific
causal estimator.
"""
raise NotImplementedError(
(
"This method for estimating standard errors is "
+ CausalEstimator.DEFAULT_NOTIMPLEMENTEDERROR_MSG
+ " Meanwhile, you can try the bootstrap method (method='bootstrap') to estimate standard errors."
).format(self.__class__)
)
def estimate_std_error(self, method=None, **kwargs):
"""Compute standard error of an obtained causal estimate.
:param method: Method for computing the standard error.
:param kwargs: Other optional parameters to be passed to the estimating method.
:returns: Standard error of the causal estimate.
"""
if method is None:
if self._confidence_intervals:
method = self._confidence_intervals
else:
method = "default"
std_error = None
if method == "default" or method is True: # user has not provided any method
try:
std_error = self._estimate_std_error(method, **kwargs)
except NotImplementedError:
std_error = self._estimate_std_error_with_bootstrap(**kwargs)
else:
if method == "bootstrap":
std_error = self._estimate_std_error_with_bootstrap(**kwargs)
else:
std_error = self._estimate_std_error(method, **kwargs)
return std_error
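# Illustrative usage (hypothetical; assumes a fitted estimator): request
# bootstrap standard errors explicitly, overriding the simulation count.
#
#   se = estimator.estimate_std_error(method="bootstrap", num_simulations=100)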
def _test_significance_with_bootstrap(self, estimate_value, num_null_simulations=None):
"""Test statistical significance of an estimate using the bootstrap method.
:param estimate_value: Obtained estimate's value
:param num_null_simulations: Number of simulations for the null hypothesis
:returns: p-value of the statistical significance test.
"""
# Use existing params, if new user defined params are not present
if num_null_simulations is None:
num_null_simulations = self.num_null_simulations
do_retest = self._bootstrap_null_estimates is None or CausalEstimator.is_bootstrap_parameter_changed(
self._bootstrap_null_estimates.params, locals()
)
if do_retest:
null_estimates = np.zeros(num_null_simulations)
for i in range(num_null_simulations):
new_outcome = np.random.permutation(self._outcome)
new_data = self._data.assign(dummy_outcome=new_outcome)
# self._outcome = self._data["dummy_outcome"]
new_estimator = self.get_new_estimator_object(
self._target_estimand,
test_significance=False,
evaluate_effect_strength=False,
confidence_intervals=False,
)
new_estimator.fit(
data=new_data,
treatment_name=self._target_estimand.treatment_variable,
outcome_name=("dummy_outcome",),
effect_modifier_names=self._effect_modifier_names,
)
new_effect = new_estimator.estimate_effect(
target_units=self._target_units,
)
null_estimates[i] = new_effect.value
self._bootstrap_null_estimates = CausalEstimator.BootstrapEstimates(
null_estimates, {"num_null_simulations": num_null_simulations, "sample_size_fraction": 1}
)
# Processing the null hypothesis estimates
sorted_null_estimates = np.sort(self._bootstrap_null_estimates.estimates)
self.logger.debug("Null estimates: {0}".format(sorted_null_estimates))
median_estimate = sorted_null_estimates[int(num_null_simulations / 2)]
# Doing a two-sided test
if estimate_value > median_estimate:
# Being conservative with the p-value reported
estimate_index = np.searchsorted(sorted_null_estimates, estimate_value, side="left")
p_value = 1 - (estimate_index / num_null_simulations)
else:
# Being conservative with the p-value reported
estimate_index = np.searchsorted(sorted_null_estimates, estimate_value, side="right")
p_value = estimate_index / num_null_simulations
# p-values of exactly 0 or 1 are reported as a range bounded by the simulation resolution (1/num_simulations)
if p_value == 0:
p_value = (0, 1 / len(sorted_null_estimates)) # a tuple determining the range.
elif p_value == 1:
p_value = (1 - 1 / len(sorted_null_estimates), 1)
signif_dict = {"p_value": p_value}
return signif_dict
def _test_significance(self, estimate_value, method=None, **kwargs):
"""
This method is to be overridden by the child classes, so that they
can run a significance test suited to the specific
causal estimator.
"""
raise NotImplementedError(
(
"This method for testing statistical significance is "
+ CausalEstimator.DEFAULT_NOTIMPLEMENTEDERROR_MSG
+ " Meanwhile, you can try the bootstrap method (method='bootstrap') to test statistical significance."
).format(self.__class__)
)
def test_significance(self, estimate_value, method=None, **kwargs):
"""Test statistical significance of obtained estimate.
By default, uses resampling to create a non-parametric significance test,
a general procedure that individual child estimators can override with their own methods.
If the method name is different from "bootstrap", this function calls the
implementation of the child estimator.
:param self: object instance of class Estimator
:param estimate_value: obtained estimate's value
:param method: Method for checking statistical significance
:returns: p-value from the significance test
"""
if method is None:
if self._significance_test:
method = self._significance_test # this is either True or methodname
else:
method = "default"
signif_dict = None
if method == "default" or method is True: # user has not provided any method
try:
signif_dict = self._test_significance(estimate_value, method, **kwargs)
except NotImplementedError:
signif_dict = self._test_significance_with_bootstrap(estimate_value, **kwargs)
else:
if method == "bootstrap":
signif_dict = self._test_significance_with_bootstrap(estimate_value, **kwargs)
else:
signif_dict = self._test_significance(estimate_value, method, **kwargs)
return signif_dict
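# Illustrative usage (hypothetical; `estimate` is assumed to come from a
# fitted estimator):
#
#   res = estimator.test_significance(estimate.value, method="bootstrap")
#   print(res["p_value"])  # a float, or a (low, high) range at the resolution limit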
def evaluate_effect_strength(self, estimate):
fraction_effect_explained = self._evaluate_effect_strength(estimate, method="fraction-effect")
# Need to test r-squared before supporting
# effect_r_squared = self._evaluate_effect_strength(estimate, method="r-squared")
strength_dict = {
"fraction-effect": fraction_effect_explained
# 'r-squared': effect_r_squared
}
return strength_dict
def _evaluate_effect_strength(self, estimate, method="fraction-effect"):
supported_methods = ["fraction-effect"]
if method not in supported_methods:
raise NotImplementedError("This method is not supported for evaluating effect strength")
if method == "fraction-effect":
naive_obs_estimate = self.estimate_effect_naive()
self.logger.debug(estimate.value, naive_obs_estimate.value)
fraction_effect_explained = estimate.value / naive_obs_estimate.value
return fraction_effect_explained
# elif method == "r-squared":
# outcome_mean = np.mean(self._outcome)
# total_variance = np.sum(np.square(self._outcome - outcome_mean))
# Assuming a linear model with one variable: the treatment
# Currently only works for continuous y
# causal_model = outcome_mean + estimate.value*self._treatment
# squared_residual = np.sum(np.square(self._outcome - causal_model))
# r_squared = 1 - (squared_residual/total_variance)
# return r_squared
else:
return None
def update_input(self, treatment_value, control_value, target_units):
self._control_value = control_value
self._treatment_value = treatment_value
self._target_units = target_units
@staticmethod
def is_bootstrap_parameter_changed(bootstrap_estimates_params, given_params):
"""Check whether parameters of the bootstrap have changed.
This is an efficiency method that checks if fresh resampling of the bootstrap samples is required.
Returns True if parameters have changed and resampling should be done again.
:param bootstrap_estimates_params: A dictionary of parameters for the current bootstrap samples
:param given_params: A dictionary of parameters passed by the user
:returns: A binary flag denoting whether the parameters are different.
"""
is_any_parameter_changed = False
for prm, val in bootstrap_estimates_params.items():
given_val = given_params.get(prm, None)
if given_val is not None and given_val != val:
is_any_parameter_changed = True
break
return is_any_parameter_changed
def target_units_tostr(self):
s = ""
if type(self._target_units) is str:
s += self._target_units
elif callable(self._target_units):
s += "Data subset defined by a function"
elif isinstance(self._target_units, pd.DataFrame):
s += "Data subset provided as a data frame"
return s
def signif_results_tostr(self, signif_results):
s = ""
pval = signif_results["p_value"]
if type(pval) is tuple:
s += "[{0}, {1}]".format(pval[0], pval[1])
else:
s += "{0}".format(pval)
return s
def estimate_effect(
data: pd.DataFrame,
treatment: Union[str, List[str]],
outcome: Union[str, List[str]],
identifier_name: str,
estimator: CausalEstimator,
control_value: int = 0,
treatment_value: int = 1,
target_units: str = "ate",
effect_modifiers: List[str] = None,
fit_estimator: bool = True,
method_params: Optional[Dict] = None,
):
"""Estimate the identified causal effect.
In addition, you can directly call any of the EconML estimation methods. The convention is "backdoor.econml.path-to-estimator-class". For example, for the double machine learning estimator ("DML" class) that is located inside "dml" module of EconML, you can use the method name, "backdoor.econml.dml.DML". CausalML estimators can also be called. See `this demo notebook <https://py-why.github.io/dowhy/example_notebooks/dowhy-conditional-treatment-effects.html>`_.
:param data: data frame containing the data
:param treatment: Name of the treatment
:param outcome: Name of the outcome
:param identifier_name: Name of the identification method (e.g., "backdoor") whose estimand should be used
:param estimator: Instance of a CausalEstimator to use
:param control_value: Value of the treatment in the control group, for effect estimation. If treatment is multi-variate, this can be a list.
:param treatment_value: Value of the treatment in the treated group, for effect estimation. If treatment is multi-variate, this can be a list.
:param target_units: (Experimental) The units for which the treatment effect should be estimated. This can be of three types. (1) a string for common specifications of target units (namely, "ate", "att" and "atc"), (2) a lambda function that can be used as an index for the data (pandas DataFrame), or (3) a new DataFrame that contains values of the effect_modifiers and effect will be estimated only for this new data.
:param effect_modifiers: Names of effect modifier variables can be (optionally) specified here too, since they do not affect identification. If None, the effect_modifiers from the CausalModel are used.
:param fit_estimator: Boolean flag on whether to fit the estimator.
Setting it to False is useful to estimate the effect on new data using a previously fitted estimator.
:param method_params: Optional dictionary of estimation parameters; a "fit_params" entry, if present, is passed through to the estimator's fit() method.
:returns: An instance of the CausalEstimate class, containing the causal effect estimate
and other method-dependent information
"""
if effect_modifiers is None:
effect_modifiers = []
treatment = parse_state(treatment)
outcome = parse_state(outcome)
causal_estimator_class = estimator.__class__
identified_estimand = estimator._target_estimand
identified_estimand.set_identifier_method(identifier_name)
if identified_estimand.no_directed_path:
logger.warning("No directed path from {0} to {1}.".format(treatment, outcome))
return CausalEstimate(
0, identified_estimand, None, control_value=control_value, treatment_value=treatment_value
)
# Check if estimator's target estimand is identified
elif identified_estimand.estimands[identifier_name] is None:
logger.error("No valid identified estimand available.")
return CausalEstimate(None, None, None, control_value=control_value, treatment_value=treatment_value)
if fit_estimator:
estimator.fit(
data=data,
treatment_name=treatment,
outcome_name=outcome,
effect_modifier_names=effect_modifiers,
**method_params["fit_params"] if "fit_params" in method_params else {},
)
estimate = estimator.estimate_effect(
treatment_value=treatment_value,
control_value=control_value,
target_units=target_units,
confidence_intervals=estimator._confidence_intervals,
)
# Store parameters inside estimate object for refutation methods
# TODO: This add_params needs to move to the estimator class
# inside estimate_effect and estimate_conditional_effect
estimate.add_params(
estimand_type=identified_estimand.estimand_type,
estimator_class=causal_estimator_class,
test_significance=estimator._significance_test,
evaluate_effect_strength=estimator._effect_strength_eval,
confidence_intervals=estimator._confidence_intervals,
target_units=target_units,
effect_modifiers=effect_modifiers,
)
if estimator._significance_test:
estimator.test_significance(estimate.value, method=estimator._significance_test)
if estimator._confidence_intervals:
estimator.estimate_confidence_intervals(
estimate.value, confidence_level=estimator.confidence_level, method=estimator._confidence_intervals
)
if estimator._effect_strength_eval:
effect_strength_dict = estimator.evaluate_effect_strength(estimate)
estimate.add_effect_strength(effect_strength_dict)
return estimate
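# End-to-end sketch of the functional API defined above. The estimator class,
# column names, `df` and `identified_estimand` are illustrative; the estimand
# is assumed to come from a prior identification step.
#
#   from dowhy.causal_estimators.propensity_score_matching_estimator import (
#       PropensityScoreMatchingEstimator,
#   )
#
#   estimator = PropensityScoreMatchingEstimator(identified_estimand)
#   estimate = estimate_effect(
#       data=df,
#       treatment="treatment",
#       outcome="outcome",
#       identifier_name="backdoor",
#       estimator=estimator,
#       control_value=0,
#       treatment_value=1,
#       target_units="ate",
#   )
#   print(estimate.value)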
class CausalEstimate:
"""Class for the estimate object that every causal estimator returns"""
def __init__(
self,
estimate,
target_estimand,
realized_estimand_expr,
control_value,
treatment_value,
conditional_estimates=None,
**kwargs,
):
self.value = estimate
self.target_estimand = target_estimand
self.realized_estimand_expr = realized_estimand_expr
self.control_value = control_value
self.treatment_value = treatment_value
self.conditional_estimates = conditional_estimates
self.params = kwargs
if self.params is not None:
for key, value in self.params.items():
setattr(self, key, value)
self.effect_strength = None
def add_estimator(self, estimator_instance):
self.estimator = estimator_instance
def add_effect_strength(self, strength_dict):
self.effect_strength = strength_dict
def add_params(self, **kwargs):
self.params.update(kwargs)
def get_confidence_intervals(self, confidence_level=None, method=None, **kwargs):
"""Get confidence intervals of the obtained estimate.
By default, this is done with the help of bootstrapped confidence intervals
but can be overridden if the specific estimator implements other methods of estimating confidence intervals.
If the method provided is not bootstrap, this function calls the implementation of the specific estimator.
:param method: Method for estimating confidence intervals.
:param confidence_level: The confidence level of the confidence intervals of the estimate.
:param kwargs: Other optional args to be passed to the CI method.
:returns: The obtained confidence interval.
"""
confidence_intervals = self.estimator.estimate_confidence_intervals(
estimate_value=self.value, confidence_level=confidence_level, method=method, **kwargs
)
return confidence_intervals
def get_standard_error(self, method=None, **kwargs):
"""Get standard error of the obtained estimate.
By default, this is done with the help of bootstrapped standard errors
but can be overridden if the specific estimator implements other methods of estimating standard error.
If the method provided is not bootstrap, this function calls the implementation of the specific estimator.
:param method: Method for computing the standard error.
:param kwargs: Other optional parameters to be passed to the estimating method.
:returns: Standard error of the causal estimate.
"""
std_error = self.estimator.estimate_std_error(method=method, **kwargs)
return std_error
def test_stat_significance(self, method=None, **kwargs):
"""Test statistical significance of the estimate obtained.
By default, uses resampling to create a non-parametric significance test.
Individual child estimators can implement different methods.
If the method name is different from "bootstrap", this function calls the
implementation of the child estimator.
:param method: Method for checking statistical significance
:param kwargs: Other optional parameters to be passed to the estimating method.
:returns: p-value from the significance test
"""
signif_results = self.estimator.test_significance(self.value, method=method, **kwargs)
return {"p_value": signif_results["p_value"]}
def estimate_conditional_effects(
self, effect_modifiers=None, num_quantiles=CausalEstimator.NUM_QUANTILES_TO_DISCRETIZE_CONT_COLS
):
"""Estimate treatment effect conditioned on given variables.
If a numeric effect modifier is provided, it is discretized into quantile bins. If you would like a custom discretization, you can do so yourself: create a new column containing the discretized effect modifier and then include that column's name in the effect_modifier_names argument.
:param effect_modifiers: Names of effect modifier variables over which the conditional effects will be estimated. If not provided, defaults to the effect modifiers specified during creation of the CausalEstimator object.
:param num_quantiles: The number of quantiles into which a numeric effect modifier variable is discretized. Does not affect any categorical effect modifiers.
:returns: A (multi-index) dataframe that provides separate effects for each value of the (discretized) effect modifiers.
"""
return self.estimator._estimate_conditional_effects(
self.estimator._estimate_effect_fn, effect_modifiers, num_quantiles
)
def interpret(self, method_name=None, **kwargs):
"""Interpret the causal estimate.
:param method_name: Method used (string) or a list of methods. If None, then the default for the specific estimator is used.
:param kwargs: Optional parameters that are directly passed to the interpreter method.
:returns: None
"""
if method_name is None:
method_name = self.estimator.interpret_method
method_name_arr = parse_state(method_name)
for method in method_name_arr:
interpreter = interpreters.get_class_object(method)
interpreter(self, **kwargs).interpret()
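# Illustrative usage (hypothetical): interpret a computed estimate, either with
# the estimator's default interpreter or by naming one explicitly.
#
#   estimate.interpret()
#   estimate.interpret(method_name="textual_effect_interpreter")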
def __str__(self):
s = "*** Causal Estimate ***\n"
# No estimand was identified (identification failed)
if self.target_estimand is None:
return "Estimation failed! No relevant identified estimand available for this estimation method."
s += "\n## Identified estimand\n{0}".format(self.target_estimand.__str__(only_target_estimand=True))
s += "\n## Realized estimand\n{0}".format(self.realized_estimand_expr)
if hasattr(self, "estimator"):
s += "\nTarget units: {0}\n".format(self.estimator.target_units_tostr())
s += "\n## Estimate\n"
s += "Mean value: {0}\n".format(self.value)
s += ""
if hasattr(self, "cate_estimates"):
s += "Effect estimates: {0}\n".format(self.cate_estimates)
if hasattr(self, "estimator"):
if self.estimator._significance_test:
s += "p-value: {0}\n".format(self.estimator.signif_results_tostr(self.test_stat_significance()))
if self.estimator._confidence_intervals:
s += "{0}% confidence interval: {1}\n".format(
100 * self.estimator.confidence_level, self.get_confidence_intervals()
)
if self.conditional_estimates is not None:
s += "### Conditional Estimates\n"
s += str(self.conditional_estimates)
if self.effect_strength is not None:
s += "\n## Effect Strength\n"
s += "Change in outcome attributable to treatment: {}\n".format(self.effect_strength["fraction-effect"])
# s += "Variance in outcome explained by treatment: {}\n".format(self.effect_strength["r-squared"])
return s
class RealizedEstimand(object):
def __init__(self, identified_estimand, estimator_name):
self.treatment_variable = identified_estimand.treatment_variable
self.outcome_variable = identified_estimand.outcome_variable
self.backdoor_variables = identified_estimand.get_backdoor_variables()
self.instrumental_variables = identified_estimand.instrumental_variables
self.estimand_type = identified_estimand.estimand_type
self.estimand_expression = None
self.assumptions = None
self.estimator_name = estimator_name
def update_assumptions(self, estimator_assumptions):
self.assumptions = estimator_assumptions
def update_estimand_expression(self, estimand_expression):
self.estimand_expression = estimand_expression
def __str__(self):
s = "Realized estimand: {0}\n".format(self.estimator_name)
s += "Realized estimand type: {0}\n".format(self.estimand_type)
s += "Estimand expression:\n{0}\n".format(sp.pretty(self.estimand_expression))
j = 1
for ass_name, ass_str in self.assumptions.items():
s += "Estimand assumption {0}, {1}: {2}\n".format(j, ass_name, ass_str)
j += 1
return s
| andresmor-ms | 11c4e0dafd6e824eb81ad14262457d954ae61468 | affe0952f4aba6845247355c171565510c2c1673 | should we define the signature of fit and effect methods here so that the signature is enforced for child estimators? We can simply return "raise NotImplementedError" --conveying that child classes have to implement this method. | amit-sharma | 190 |
py-why/dowhy | 746 | Functional api/causal estimators | * Introduce `fit()` method to estimators.
* Refactor constructors to avoid using `*args` and `**kwargs` and have more explicit parameters.
* Refactor refuters and other parts of the code to use `fit()` and modify arguments to `estimate_effect()` | null | 2022-11-04 16:15:39+00:00 | 2022-12-03 17:07:53+00:00 | dowhy/causal_estimator.py |
import logging
from collections import namedtuple
from typing import Dict, List, Optional, Union
import numpy as np
import pandas as pd
import sympy as sp
from sklearn.utils import resample
import dowhy.interpreters as interpreters
from dowhy import causal_estimators
from dowhy.causal_graph import CausalGraph
from dowhy.causal_identifier.identified_estimand import IdentifiedEstimand
from dowhy.utils.api import parse_state
logger = logging.getLogger(__name__)
class CausalEstimator:
"""Base class for an estimator of causal effect.
Subclasses implement different estimation methods. All estimation methods are in the package "dowhy.causal_estimators"
"""
# The default number of simulations for statistical testing
DEFAULT_NUMBER_OF_SIMULATIONS_STAT_TEST = 1000
# The default number of simulations to obtain confidence intervals
DEFAULT_NUMBER_OF_SIMULATIONS_CI = 100
# The portion of the total size that should be taken each time to find the confidence intervals
# 1 is the recommended value
# https://ocw.mit.edu/courses/mathematics/18-05-introduction-to-probability-and-statistics-spring-2014/readings/MIT18_05S14_Reading24.pdf
# https://projecteuclid.org/download/pdf_1/euclid.ss/1032280214
DEFAULT_SAMPLE_SIZE_FRACTION = 1
# The default Confidence Level
DEFAULT_CONFIDENCE_LEVEL = 0.95
# Number of quantiles to discretize continuous columns, for applying groupby
NUM_QUANTILES_TO_DISCRETIZE_CONT_COLS = 5
# Prefix to add to temporary categorical variables created after discretization
TEMP_CAT_COLUMN_PREFIX = "__categorical__"
DEFAULT_NOTIMPLEMENTEDERROR_MSG = "not yet implemented for {0}. If you would like this to be implemented in the next version, please raise an issue at https://github.com/microsoft/dowhy/issues"
BootstrapEstimates = namedtuple("BootstrapEstimates", ["estimates", "params"])
DEFAULT_INTERPRET_METHOD = ["textual_effect_interpreter"]
# std args to be removed from locals() before being passed to args_dict
_STD_INIT_ARGS = ("self", "__class__", "args", "kwargs")
def __init__(
self,
data,
identified_estimand,
treatment,
outcome,
control_value=0,
treatment_value=1,
test_significance=False,
evaluate_effect_strength=False,
confidence_intervals=False,
target_units=None,
effect_modifiers=None,
num_null_simulations=DEFAULT_NUMBER_OF_SIMULATIONS_STAT_TEST,
num_simulations=DEFAULT_NUMBER_OF_SIMULATIONS_CI,
sample_size_fraction=DEFAULT_SAMPLE_SIZE_FRACTION,
confidence_level=DEFAULT_CONFIDENCE_LEVEL,
need_conditional_estimates="auto",
num_quantiles_to_discretize_cont_cols=NUM_QUANTILES_TO_DISCRETIZE_CONT_COLS,
**kwargs,
):
"""Initializes an estimator with data and names of relevant variables.
This method is called from the constructors of its child classes.
:param data: data frame containing the data
:param identified_estimand: probability expression
representing the target identified estimand to estimate.
:param treatment: name of the treatment variable
:param outcome: name of the outcome variable
:param control_value: Value of the treatment in the control group, for effect estimation. If treatment is multi-variate, this can be a list.
:param treatment_value: Value of the treatment in the treated group, for effect estimation. If treatment is multi-variate, this can be a list.
:param test_significance: Binary flag or a string indicating whether to test significance and by which method. All estimators support test_significance="bootstrap" that estimates a p-value for the obtained estimate using the bootstrap method. Individual estimators can override this to support custom testing methods. The bootstrap method supports an optional parameter, num_null_simulations. If False, no testing is done. If True, significance of the estimate is tested using the custom method if available, otherwise by bootstrap.
:param evaluate_effect_strength: (Experimental) whether to evaluate the strength of effect
:param confidence_intervals: Binary flag or a string indicating whether the confidence intervals should be computed and which method should be used. All methods support estimation of confidence intervals using the bootstrap method by using the parameter confidence_intervals="bootstrap". The bootstrap method takes in two arguments (num_simulations and sample_size_fraction) that can be optionally specified in the params dictionary. Estimators may also override this to implement their own confidence interval method. If this parameter is False, no confidence intervals are computed. If True, confidence intervals are computed by the estimator's specific method if available, otherwise through bootstrap.
:param target_units: The units for which the treatment effect should be estimated. This can be a string for common specifications of target units (namely, "ate", "att" and "atc"). It can also be a lambda function that can be used as an index for the data (pandas DataFrame). Alternatively, it can be a new DataFrame that contains values of the effect_modifiers and effect will be estimated only for this new data.
:param effect_modifiers: Variables on which to compute separate
effects, or return a heterogeneous effect function. Not all
methods support this currently.
:param num_null_simulations: The number of simulations for testing the
statistical significance of the estimator
:param num_simulations: The number of simulations for finding the
confidence interval (and/or standard error) for an estimate
:param sample_size_fraction: The fraction of the dataset to be resampled
for each bootstrap estimate
:param confidence_level: The confidence level of the confidence
interval estimate
:param need_conditional_estimates: Boolean flag indicating whether
conditional estimates should be computed. Defaults to True if
there are effect modifiers in the graph
:param num_quantiles_to_discretize_cont_cols: The number of quantiles
into which a numeric effect modifier is split, to enable
estimation of conditional treatment effect over it.
:param kwargs: (optional) Additional estimator-specific parameters
:returns: an instance of the estimator class.
"""
self._data = data
self._target_estimand = identified_estimand
# Currently estimation methods only support univariate treatment and outcome
self._treatment_name = treatment
self._outcome_name = outcome[0] # assuming one-dimensional outcome
self._control_value = control_value
self._treatment_value = treatment_value
self._significance_test = test_significance
self._effect_strength_eval = evaluate_effect_strength
self._target_units = target_units
self._effect_modifier_names = effect_modifiers
self._confidence_intervals = confidence_intervals
self._bootstrap_estimates = None # for confidence intervals and std error
self._bootstrap_null_estimates = None # for significance test
self._effect_modifiers = None
self.method_params = kwargs
# Setting the default interpret method
self.interpret_method = CausalEstimator.DEFAULT_INTERPRET_METHOD
self.logger = logging.getLogger(__name__)
# Setting treatment and outcome values
if self._data is not None:
self._treatment = self._data[self._treatment_name]
self._outcome = self._data[self._outcome_name]
if self._effect_modifier_names:
# only add the observed nodes
self._effect_modifier_names = [
cname for cname in self._effect_modifier_names if cname in self._data.columns
]
if len(self._effect_modifier_names) > 0:
self._effect_modifiers = self._data[self._effect_modifier_names]
self._effect_modifiers = pd.get_dummies(self._effect_modifiers, drop_first=True)
self.logger.debug("Effect modifiers: " + ",".join(self._effect_modifier_names))
else:
self._effect_modifier_names = None
# Check if some parameters were set, otherwise set to default values
self.num_null_simulations = num_null_simulations
self.num_simulations = num_simulations
self.sample_size_fraction = sample_size_fraction
self.confidence_level = confidence_level
self.num_quantiles_to_discretize_cont_cols = num_quantiles_to_discretize_cont_cols
# Estimate conditional estimates by default
self.need_conditional_estimates = (
need_conditional_estimates if need_conditional_estimates != "auto" else bool(self._effect_modifier_names)
)
@staticmethod
def get_estimator_object(new_data, identified_estimand, estimate):
"""Create a new estimator of the same type as the one passed in the estimate argument.
Creates a new object with new_data and the identified_estimand
:param new_data: np.ndarray, pd.Series, pd.DataFrame
The newly assigned data on which the estimator should run
:param identified_estimand: IdentifiedEstimand
An instance of the identified estimand class that provides the information with
respect to which causal pathways are employed when the treatment affects the outcome
:param estimate: CausalEstimate
It is an already existing estimate whose properties we wish to replicate
:returns: An instance of the same estimator class that had generated the given estimate.
"""
estimator_class = estimate.params["estimator_class"]
new_estimator = estimator_class(
new_data,
identified_estimand,
identified_estimand.treatment_variable,
identified_estimand.outcome_variable,
# names of treatment and outcome
control_value=estimate.control_value,
treatment_value=estimate.treatment_value,
test_significance=False,
evaluate_effect_strength=False,
confidence_intervals=estimate.params["confidence_intervals"],
target_units=estimate.params["target_units"],
effect_modifiers=estimate.params["effect_modifiers"],
**estimate.params["method_params"] if estimate.params["method_params"] is not None else {},
)
return new_estimator
def _estimate_effect(self):
"""This method is to be overriden by the child classes, so that they can run the estimation technique of their choice"""
raise NotImplementedError(
("Main estimation method is " + CausalEstimator.DEFAULT_NOTIMPLEMENTEDERROR_MSG).format(self.__class__)
)
def estimate_effect(self):
"""Base estimation method that calls the estimate_effect method of its calling subclass.
Can optionally also test significance and estimate effect strength for any returned estimate.
:param self: object instance of class Estimator
:returns: A CausalEstimate instance that contains point estimates of average and conditional effects. Based on the parameters provided, it optionally includes confidence intervals, standard errors, statistical significance and other statistical parameters.
"""
est = self._estimate_effect()
est.add_estimator(self)
if self._significance_test:
self.test_significance(est.value, method=self._significance_test)
if self._confidence_intervals:
self.estimate_confidence_intervals(
est.value, confidence_level=self.confidence_level, method=self._confidence_intervals
)
if self._effect_strength_eval:
effect_strength_dict = self.evaluate_effect_strength(est)
est.add_effect_strength(effect_strength_dict)
return est
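# Sketch of this pre-refactor usage for contrast with the new fit() API
# (class name, column names and `identified_estimand` are illustrative):
#
#   estimator = PropensityScoreMatchingEstimator(
#       df,
#       identified_estimand,
#       treatment=["treatment"],
#       outcome=["outcome"],
#       control_value=0,
#       treatment_value=1,
#   )
#   estimate = estimator.estimate_effect()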
def estimate_effect_naive(self):
# TODO Only works for binary treatment
df_withtreatment = self._data.loc[self._data[self._treatment_name] == 1]
df_notreatment = self._data.loc[self._data[self._treatment_name] == 0]
est = np.mean(df_withtreatment[self._outcome_name]) - np.mean(df_notreatment[self._outcome_name])
return CausalEstimate(est, None, None, control_value=0, treatment_value=1)
def _estimate_effect_fn(self, data_df):
"""Function used in conditional effect estimation. This function is to be overridden by each child estimator.
The overridden function should take in a dataframe as input and return the estimate for that data.
"""
raise NotImplementedError(
("Conditional treatment effects are " + CausalEstimator.DEFAULT_NOTIMPLEMENTEDERROR_MSG).format(
self.__class__
)
)
def _estimate_conditional_effects(self, estimate_effect_fn, effect_modifier_names=None, num_quantiles=None):
"""Estimate conditional treatment effects. Common method for all estimators that utilizes a specific estimate_effect_fn implemented by each child estimator.
If a numeric effect modifier is provided, it is discretized into quantile bins. If you would like a custom discretization, you can do so yourself: create a new column containing the discretized effect modifier and then include that column's name in the effect_modifier_names argument.
:param estimate_effect_fn: Function that has a single parameter (a data frame) and returns the treatment effect estimate on that data.
:param effect_modifier_names: Names of effect modifier variables over which the conditional effects will be estimated. If not provided, defaults to the effect modifiers specified during creation of the CausalEstimator object.
:param num_quantiles: The number of quantiles into which a numeric effect modifier variable is discretized. Does not affect any categorical effect modifiers.
:returns: A (multi-index) dataframe that provides separate effects for each value of the (discretized) effect modifiers.
"""
# Defaulting to class default values if parameters are not provided
if effect_modifier_names is None:
effect_modifier_names = self._effect_modifier_names
if num_quantiles is None:
num_quantiles = self.num_quantiles_to_discretize_cont_cols
# Checking that there is at least one effect modifier
if not effect_modifier_names:
raise ValueError("At least one effect modifier should be specified to compute conditional effects.")
# Making sure that effect_modifier_names is a list
effect_modifier_names = parse_state(effect_modifier_names)
if not all(em in self._effect_modifier_names for em in effect_modifier_names):
self.logger.warning(
"At least one of the provided effect modifiers was not included while fitting the estimator. You may get incorrect results. To resolve, fit the estimator again by providing the updated effect modifiers in estimate_effect()."
)
# Making a copy since we are going to be changing effect modifier names
effect_modifier_names = effect_modifier_names.copy()
prefix = CausalEstimator.TEMP_CAT_COLUMN_PREFIX
# For every numeric effect modifier, adding a temp categorical column
for i in range(len(effect_modifier_names)):
em = effect_modifier_names[i]
if pd.api.types.is_numeric_dtype(self._data[em].dtypes):
self._data[prefix + str(em)] = pd.qcut(self._data[em], num_quantiles, duplicates="drop")
effect_modifier_names[i] = prefix + str(em)
# Grouping by effect modifiers and computing effect separately
by_effect_mods = self._data.groupby(effect_modifier_names)
cond_est_fn = lambda x: self._do(self._treatment_value, x) - self._do(self._control_value, x)
conditional_estimates = by_effect_mods.apply(estimate_effect_fn)
# Deleting the temporary categorical columns
for em in effect_modifier_names:
if em.startswith(prefix):
self._data.pop(em)
return conditional_estimates
def _do(self, x, data_df=None):
raise NotImplementedError(
("Do-operator is " + CausalEstimator.DEFAULT_NOTIMPLEMENTEDERROR_MSG).format(self.__class__)
)
def do(self, x, data_df=None):
"""Method that implements the do-operator.
Given a value x for the treatment, returns the expected value of the outcome when the treatment is intervened to a value x.
:param x: Value of the treatment
:param data_df: Data on which the do-operator is to be applied.
:returns: Value of the outcome when treatment is intervened/set to x.
"""
est = self._do(x, data_df)
return est
def construct_symbolic_estimator(self, estimand):
raise NotImplementedError(("Symbolic estimator string is ").format(self.__class__))
def _generate_bootstrap_estimates(self, num_bootstrap_simulations, sample_size_fraction):
"""Helper function to generate causal estimates over bootstrapped samples.
:param num_bootstrap_simulations: Number of simulations for the bootstrap method.
:param sample_size_fraction: Fraction of the dataset to be resampled.
:returns: A collections.namedtuple containing a list of bootstrapped estimates and a dictionary containing parameters used for the bootstrap.
"""
# The array that stores the results of all estimations
simulation_results = np.zeros(num_bootstrap_simulations)
# Compute the bootstrap sample size as a fraction of the full dataset size
sample_size = int(sample_size_fraction * len(self._data))
if sample_size > len(self._data):
self.logger.warning("WARN: The sample size is greater than the data being sampled")
self.logger.info("INFO: The sample size: {}".format(sample_size))
self.logger.info("INFO: The number of simulations: {}".format(num_bootstrap_simulations))
# Perform the set number of simulations
for index in range(num_bootstrap_simulations):
new_data = resample(self._data, n_samples=sample_size)
new_estimator = type(self)(
new_data,
self._target_estimand,
self._target_estimand.treatment_variable,
self._target_estimand.outcome_variable,
# names of treatment and outcome
treatment_value=self._treatment_value,
control_value=self._control_value,
test_significance=False,
evaluate_effect_strength=False,
confidence_intervals=False,
target_units=self._target_units,
effect_modifiers=self._effect_modifier_names,
**self.method_params,
)
new_effect = new_estimator.estimate_effect()
simulation_results[index] = new_effect.value
estimates = CausalEstimator.BootstrapEstimates(
simulation_results,
{"num_simulations": num_bootstrap_simulations, "sample_size_fraction": sample_size_fraction},
)
return estimates
def _estimate_confidence_intervals_with_bootstrap(
self, estimate_value, confidence_level=None, num_simulations=None, sample_size_fraction=None
):
"""
Method to compute confidence interval using bootstrapped sampling.
:param estimate_value: obtained estimate's value
:param confidence_level: The level for which to compute CI (e.g., 95% confidence level translates to confidence_level=0.95)
:param num_simulations: The number of simulations to be performed to get the bootstrap confidence intervals.
:param sample_size_fraction: The fraction of the dataset to be resampled.
:returns: confidence interval at the specified level.
For more details on bootstrap or resampling statistics, refer to the following links:
https://ocw.mit.edu/courses/mathematics/18-05-introduction-to-probability-and-statistics-spring-2014/readings/MIT18_05S14_Reading24.pdf
https://projecteuclid.org/download/pdf_1/euclid.ss/1032280214
"""
# Using class default parameters if not specified
if num_simulations is None:
num_simulations = self.num_simulations
if sample_size_fraction is None:
sample_size_fraction = self.sample_size_fraction
# Checking if bootstrap_estimates are already computed
if self._bootstrap_estimates is None:
self._bootstrap_estimates = self._generate_bootstrap_estimates(num_simulations, sample_size_fraction)
elif CausalEstimator.is_bootstrap_parameter_changed(self._bootstrap_estimates.params, locals()):
# Checked if any parameter is changed from the previous std error estimate
self._bootstrap_estimates = self._generate_bootstrap_estimates(num_simulations, sample_size_fraction)
# Now use the data obtained from the simulations to get the value of the confidence estimates
bootstrap_estimates = self._bootstrap_estimates.estimates
# Get the variations of each bootstrap estimate and sort
bootstrap_variations = [bootstrap_estimate - estimate_value for bootstrap_estimate in bootstrap_estimates]
sorted_bootstrap_variations = np.sort(bootstrap_variations)
# Now we take the (1- p)th and the (p)th variations, where p is the chosen confidence level
upper_bound_index = int((1 - confidence_level) * len(sorted_bootstrap_variations))
lower_bound_index = int(confidence_level * len(sorted_bootstrap_variations))
# Get the lower and upper bounds by subtracting the variations from the estimate
lower_bound = estimate_value - sorted_bootstrap_variations[lower_bound_index]
upper_bound = estimate_value - sorted_bootstrap_variations[upper_bound_index]
return lower_bound, upper_bound
def _estimate_confidence_intervals(self, confidence_level=None, method=None, **kwargs):
"""
This method is to be overridden by the child classes, so that they
can run a confidence interval estimation method suited to the specific
causal estimator.
"""
raise NotImplementedError(
(
"This method for estimating confidence intervals is "
+ CausalEstimator.DEFAULT_NOTIMPLEMENTEDERROR_MSG
+ " Meanwhile, you can try the bootstrap method (method='bootstrap') to estimate confidence intervals."
).format(self.__class__)
)
def estimate_confidence_intervals(self, estimate_value, confidence_level=None, method=None, **kwargs):
"""Find the confidence intervals corresponding to any estimator
By default, this is done with the help of bootstrapped confidence intervals
but can be overridden if the specific estimator implements other methods of estimating confidence intervals.
If the method provided is not bootstrap, this function calls the implementation of the specific estimator.
:param estimate_value: obtained estimate's value
:param method: Method for estimating confidence intervals.
:param confidence_level: The confidence level of the confidence intervals of the estimate.
:param kwargs: Other optional args to be passed to the CI method.
:returns: The obtained confidence interval.
"""
if method is None:
if self._confidence_intervals:
method = self._confidence_intervals # this is either True or methodname
else:
method = "default"
confidence_intervals = None
if confidence_level is None:
confidence_level = self.confidence_level
if method == "default" or method is True: # user has not provided any method
try:
confidence_intervals = self._estimate_confidence_intervals(confidence_level, method=method, **kwargs)
except NotImplementedError:
confidence_intervals = self._estimate_confidence_intervals_with_bootstrap(
estimate_value, confidence_level, **kwargs
)
else:
if method == "bootstrap":
confidence_intervals = self._estimate_confidence_intervals_with_bootstrap(
estimate_value, confidence_level, **kwargs
)
else:
confidence_intervals = self._estimate_confidence_intervals(confidence_level, method=method, **kwargs)
return confidence_intervals
def _estimate_std_error_with_bootstrap(self, num_simulations=None, sample_size_fraction=None):
"""Compute standard error using the bootstrap method. Standard error
and confidence intervals use the same parameter num_simulations for
the number of bootstrap simulations.
:param num_simulations: Number of bootstrapped samples.
:param sample_size_fraction: Fraction of data to be resampled.
:returns: Standard error of the obtained estimate.
"""
# Use existing params, if new user defined params are not present
if num_simulations is None:
num_simulations = self.num_simulations
if sample_size_fraction is None:
sample_size_fraction = self.sample_size_fraction
# Checking if bootstrap_estimates are already computed
if self._bootstrap_estimates is None:
self._bootstrap_estimates = self._generate_bootstrap_estimates(num_simulations, sample_size_fraction)
elif CausalEstimator.is_bootstrap_parameter_changed(self._bootstrap_estimates.params, locals()):
# Check if any parameter is changed from the previous std error estimate
self._bootstrap_estimates = self._generate_bootstrap_estimates(num_simulations, sample_size_fraction)
std_error = np.std(self._bootstrap_estimates.estimates)
return std_error
def _estimate_std_error(self, method=None, **kwargs):
"""
This method is to be overridden by the child classes, so that they
can run a standard error estimation method suited to the specific
causal estimator.
"""
raise NotImplementedError(
(
"This method for estimating standard errors is "
+ CausalEstimator.DEFAULT_NOTIMPLEMENTEDERROR_MSG
+ " Meanwhile, you can try the bootstrap method (method='bootstrap') to estimate standard errors."
).format(self.__class__)
)
def estimate_std_error(self, method=None, **kwargs):
"""Compute standard error of an obtained causal estimate.
:param method: Method for computing the standard error.
:param kwargs: Other optional parameters to be passed to the estimating method.
:returns: Standard error of the causal estimate.
"""
if method is None:
if self._confidence_intervals:
method = self._confidence_intervals
else:
method = "default"
std_error = None
if method == "default" or method is True: # user has not provided any method
try:
std_error = self._estimate_std_error(method, **kwargs)
except NotImplementedError:
std_error = self._estimate_std_error_with_bootstrap(**kwargs)
else:
if method == "bootstrap":
std_error = self._estimate_std_error_with_bootstrap(**kwargs)
else:
std_error = self._estimate_std_error(method, **kwargs)
return std_error
def _test_significance_with_bootstrap(self, estimate_value, num_null_simulations=None):
"""Test statistical significance of an estimate using the bootstrap method.
:param estimate_value: Obtained estimate's value
:param num_null_simulations: Number of simulations for the null hypothesis
:returns: p-value of the statistical significance test.
"""
# Use existing params, if new user defined params are not present
if num_null_simulations is None:
num_null_simulations = self.num_null_simulations
do_retest = self._bootstrap_null_estimates is None or CausalEstimator.is_bootstrap_parameter_changed(
self._bootstrap_null_estimates.params, locals()
)
if do_retest:
null_estimates = np.zeros(num_null_simulations)
for i in range(num_null_simulations):
new_outcome = np.random.permutation(self._outcome)
new_data = self._data.assign(dummy_outcome=new_outcome)
# self._outcome = self._data["dummy_outcome"]
new_estimator = type(self)(
new_data,
self._target_estimand,
self._target_estimand.treatment_variable,
("dummy_outcome",),
test_significance=False,
evaluate_effect_strength=False,
confidence_intervals=False,
target_units=self._target_units,
effect_modifiers=self._effect_modifier_names,
**self.method_params,
)
new_effect = new_estimator.estimate_effect()
null_estimates[i] = new_effect.value
self._bootstrap_null_estimates = CausalEstimator.BootstrapEstimates(
null_estimates, {"num_null_simulations": num_null_simulations, "sample_size_fraction": 1}
)
# Processing the null hypothesis estimates
sorted_null_estimates = np.sort(self._bootstrap_null_estimates.estimates)
self.logger.debug("Null estimates: {0}".format(sorted_null_estimates))
median_estimate = sorted_null_estimates[int(num_null_simulations / 2)]
# Doing a two-sided test
if estimate_value > median_estimate:
# Being conservative with the p-value reported
estimate_index = np.searchsorted(sorted_null_estimates, estimate_value, side="left")
p_value = 1 - (estimate_index / num_null_simulations)
if estimate_value <= median_estimate:
# Being conservative with the p-value reported
estimate_index = np.searchsorted(sorted_null_estimates, estimate_value, side="right")
p_value = estimate_index / num_null_simulations
# If the estimate_index is 0, it depends on the number of simulations
if p_value == 0:
p_value = (0, 1 / len(sorted_null_estimates)) # a tuple determining the range.
elif p_value == 1:
p_value = (1 - 1 / len(sorted_null_estimates), 1)
signif_dict = {"p_value": p_value}
return signif_dict
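# A small worked example of the p-value logic above (illustrative numbers,
# not from any actual run): with num_null_simulations = 100 and an estimate
# above the null median that np.searchsorted places at index 97, the reported
# p-value is 1 - 97/100 = 0.03. If the estimate exceeds every null draw, the
# index is 100 and the resulting p-value of 0 is widened to the range
# (0, 1/100) to reflect the resolution of the simulation.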
def _test_significance(self, estimate_value, method=None, **kwargs):
"""
This method is to be overridden by the child classes, so that they
can run a significance test suited to the specific
causal estimator.
"""
raise NotImplementedError(
(
"This method for testing statistical significance is "
+ CausalEstimator.DEFAULT_NOTIMPLEMENTEDERROR_MSG
+ " Meanwhile, you can try the bootstrap method (method='bootstrap') to test statistical significance."
).format(self.__class__)
)
def test_significance(self, estimate_value, method=None, **kwargs):
"""Test statistical significance of obtained estimate.
By default, uses resampling to create a non-parametric significance test.
A general procedure. Individual child estimators can implement different methods.
If the method name is different from "bootstrap", this function calls the
implementation of the child estimator.
:param self: object instance of class Estimator
:param estimate_value: obtained estimate's value
:param method: Method for checking statistical significance
:returns: p-value from the significance test
"""
if method is None:
if self._significance_test:
method = self._significance_test # this is either True or methodname
else:
method = "default"
signif_dict = None
if method == "default" or method is True: # user has not provided any method
try:
signif_dict = self._test_significance(estimate_value, method, **kwargs)
except NotImplementedError:
signif_dict = self._test_significance_with_bootstrap(estimate_value, **kwargs)
else:
if method == "bootstrap":
signif_dict = self._test_significance_with_bootstrap(estimate_value, **kwargs)
else:
signif_dict = self._test_significance(estimate_value, method, **kwargs)
return signif_dict
def evaluate_effect_strength(self, estimate):
fraction_effect_explained = self._evaluate_effect_strength(estimate, method="fraction-effect")
# Need to test r-squared before supporting
# effect_r_squared = self._evaluate_effect_strength(estimate, method="r-squared")
strength_dict = {
"fraction-effect": fraction_effect_explained
# 'r-squared': effect_r_squared
}
return strength_dict
def _evaluate_effect_strength(self, estimate, method="fraction-effect"):
supported_methods = ["fraction-effect"]
if method not in supported_methods:
raise NotImplementedError("This method is not supported for evaluating effect strength")
if method == "fraction-effect":
naive_obs_estimate = self.estimate_effect_naive()
self.logger.debug(estimate.value, naive_obs_estimate.value)
fraction_effect_explained = estimate.value / naive_obs_estimate.value
return fraction_effect_explained
# elif method == "r-squared":
# outcome_mean = np.mean(self._outcome)
# total_variance = np.sum(np.square(self._outcome - outcome_mean))
# Assuming a linear model with one variable: the treatment
# Currently only works for continuous y
# causal_model = outcome_mean + estimate.value*self._treatment
# squared_residual = np.sum(np.square(self._outcome - causal_model))
# r_squared = 1 - (squared_residual/total_variance)
# return r_squared
else:
return None
def update_input(self, treatment_value, control_value, target_units):
self._control_value = control_value
self._treatment_value = treatment_value
self._target_units = target_units
@staticmethod
def is_bootstrap_parameter_changed(bootstrap_estimates_params, given_params):
"""Check whether parameters of the bootstrap have changed.
This is an efficiency method that checks if fresh resampling of the bootstrap samples is required.
Returns True if parameters have changed and resampling should be done again.
:param bootstrap_estimates_params: A dictionary of parameters for the current bootstrap samples
:param given_params: A dictionary of parameters passed by the user
:returns: A binary flag denoting whether the parameters are different.
"""
is_any_parameter_changed = False
for prm, val in bootstrap_estimates_params.items():
given_val = given_params.get(prm, None)
if given_val is not None and given_val != val:
is_any_parameter_changed = True
break
return is_any_parameter_changed
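# Example: if the cached samples were generated with
# {"num_simulations": 100, "sample_size_fraction": 1}, passing
# num_simulations=200 returns True (forcing a fresh resample), while passing
# None for both parameters returns False and the cached estimates are reused.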
def target_units_tostr(self):
s = ""
if type(self._target_units) is str:
s += self._target_units
elif callable(self._target_units):
s += "Data subset defined by a function"
elif isinstance(self._target_units, pd.DataFrame):
s += "Data subset provided as a data frame"
return s
def signif_results_tostr(self, signif_results):
s = ""
pval = signif_results["p_value"]
if type(pval) is tuple:
s += "[{0}, {1}]".format(pval[0], pval[1])
else:
s += "{0}".format(pval)
return s
def estimate_effect(
treatment: Union[str, List[str]],
outcome: Union[str, List[str]],
identified_estimand: IdentifiedEstimand,
identifier_name: str,
method: CausalEstimator,
control_value: int = 0,
treatment_value: int = 1,
test_significance: Optional[bool] = None,
evaluate_effect_strength: bool = False,
confidence_intervals: bool = False,
target_units: str = "ate",
effect_modifiers: List[str] = [],
fit_estimator: bool = True,
method_params: Optional[Dict] = None,
):
"""Estimate the identified causal effect.
Currently requires an explicit method name to be specified. Method names follow the convention of identification method followed by the specific estimation method: "[backdoor/iv].estimation_method_name". The following methods are supported.
* Propensity Score Matching: "backdoor.propensity_score_matching"
* Propensity Score Stratification: "backdoor.propensity_score_stratification"
* Propensity Score-based Inverse Weighting: "backdoor.propensity_score_weighting"
* Linear Regression: "backdoor.linear_regression"
* Generalized Linear Models (e.g., logistic regression): "backdoor.generalized_linear_model"
* Instrumental Variables: "iv.instrumental_variable"
* Regression Discontinuity: "iv.regression_discontinuity"
In addition, you can directly call any of the EconML estimation methods. The convention is "backdoor.econml.path-to-estimator-class". For example, for the double machine learning estimator ("DML" class) that is located inside "dml" module of EconML, you can use the method name, "backdoor.econml.dml.DML". CausalML estimators can also be called. See `this demo notebook <https://py-why.github.io/dowhy/example_notebooks/dowhy-conditional-treatment-effects.html>`_.
:param treatment: Name of the treatment
:param outcome: Name of the outcome
:param identified_estimand: a probability expression
that represents the effect to be estimated. Output of
CausalModel.identify_effect method
:param identifier_name: name of the identification method used (e.g., "backdoor")
:param method: an instance of the CausalEstimator class implementing the estimation method to be used.
:param control_value: Value of the treatment in the control group, for effect estimation. If treatment is multi-variate, this can be a list.
:param treatment_value: Value of the treatment in the treated group, for effect estimation. If treatment is multi-variate, this can be a list.
:param test_significance: Binary flag on whether to additionally do a statistical signficance test for the estimate.
:param evaluate_effect_strength: (Experimental) Binary flag on whether to estimate the relative strength of the treatment's effect. This measure can be used to compare different treatments for the same outcome (by running this method with different treatments sequentially).
:param confidence_intervals: (Experimental) Binary flag indicating whether confidence intervals should be computed.
:param target_units: (Experimental) The units for which the treatment effect should be estimated. This can be of three types. (1) a string for common specifications of target units (namely, "ate", "att" and "atc"), (2) a lambda function that can be used as an index for the data (pandas DataFrame), or (3) a new DataFrame that contains values of the effect_modifiers and effect will be estimated only for this new data.
:param effect_modifiers: Names of effect modifier variables can be (optionally) specified here too, since they do not affect identification. If None, the effect_modifiers from the CausalModel are used.
:param fit_estimator: Boolean flag on whether to fit the estimator.
Setting it to False is useful to estimate the effect on new data using a previously fitted estimator.
:param method_params: Dictionary containing any method-specific parameters. These are passed directly to the estimating method. See the docs for each estimation method for allowed method-specific params.
:returns: An instance of the CausalEstimate class, containing the causal effect estimate
and other method-dependent information
"""
treatment = parse_state(treatment)
outcome = parse_state(outcome)
causal_estimator_class = method.__class__
identified_estimand.set_identifier_method(identifier_name)
if identified_estimand.no_directed_path:
logger.warning("No directed path from {0} to {1}.".format(treatment, outcome))
return CausalEstimate(
0, identified_estimand, None, control_value=control_value, treatment_value=treatment_value
)
# Check if estimator's target estimand is identified
elif identified_estimand.estimands[identifier_name] is None:
logger.error("No valid identified estimand available.")
return CausalEstimate(None, None, None, control_value=control_value, treatment_value=treatment_value)
method.update_input(treatment_value, control_value, target_units)
estimate = method.estimate_effect()
# Store parameters inside estimate object for refutation methods
# TODO: This add_params needs to move to the estimator class
# inside estimate_effect and estimate_conditional_effect
estimate.add_params(
estimand_type=identified_estimand.estimand_type,
estimator_class=causal_estimator_class,
test_significance=test_significance,
evaluate_effect_strength=evaluate_effect_strength,
confidence_intervals=confidence_intervals,
target_units=target_units,
effect_modifiers=effect_modifiers,
method_params=method_params,
)
return estimate
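# A rough usage sketch of this (pre-fit API) entry point, assuming `estimator`
# is an already-constructed CausalEstimator subclass instance and the variable
# names below are placeholders:
#
# est = estimate_effect(
#     treatment="v0",
#     outcome="y",
#     identified_estimand=identified_estimand,
#     identifier_name="backdoor",
#     method=estimator,
# )
# print(est.value)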
class CausalEstimate:
"""Class for the estimate object that every causal estimator returns"""
def __init__(
self,
estimate,
target_estimand,
realized_estimand_expr,
control_value,
treatment_value,
conditional_estimates=None,
**kwargs,
):
self.value = estimate
self.target_estimand = target_estimand
self.realized_estimand_expr = realized_estimand_expr
self.control_value = control_value
self.treatment_value = treatment_value
self.conditional_estimates = conditional_estimates
self.params = kwargs
if self.params is not None:
for key, value in self.params.items():
setattr(self, key, value)
self.effect_strength = None
def add_estimator(self, estimator_instance):
self.estimator = estimator_instance
def add_effect_strength(self, strength_dict):
self.effect_strength = strength_dict
def add_params(self, **kwargs):
self.params.update(kwargs)
def get_confidence_intervals(self, confidence_level=None, method=None, **kwargs):
"""Get confidence intervals of the obtained estimate.
By default, this is done with the help of bootstrapped confidence intervals
but can be overridden if the specific estimator implements other methods of estimating confidence intervals.
If the method provided is not bootstrap, this function calls the implementation of the specific estimator.
:param method: Method for estimating confidence intervals.
:param confidence_level: The confidence level of the confidence intervals of the estimate.
:param kwargs: Other optional args to be passed to the CI method.
:returns: The obtained confidence interval.
"""
confidence_intervals = self.estimator.estimate_confidence_intervals(
estimate_value=self.value, confidence_level=confidence_level, method=method, **kwargs
)
return confidence_intervals
def get_standard_error(self, method=None, **kwargs):
"""Get standard error of the obtained estimate.
By default, this is done with the help of bootstrapped standard errors
but can be overridden if the specific estimator implements other methods of estimating standard error.
If the method provided is not bootstrap, this function calls the implementation of the specific estimator.
:param method: Method for computing the standard error.
:param kwargs: Other optional parameters to be passed to the estimating method.
:returns: Standard error of the causal estimate.
"""
std_error = self.estimator.estimate_std_error(method=method, **kwargs)
return std_error
def test_stat_significance(self, method=None, **kwargs):
"""Test statistical significance of the estimate obtained.
By default, uses resampling to create a non-parametric significance test.
Individual child estimators can implement different methods.
If the method name is different from "bootstrap", this function calls the
implementation of the child estimator.
:param method: Method for checking statistical significance
:param kwargs: Other optional parameters to be passed to the estimating method.
:returns: p-value from the significance test
"""
signif_results = self.estimator.test_significance(self.value, method=method, **kwargs)
return {"p_value": signif_results["p_value"]}
def estimate_conditional_effects(
self, effect_modifiers=None, num_quantiles=CausalEstimator.NUM_QUANTILES_TO_DISCRETIZE_CONT_COLS
):
"""Estimate treatment effect conditioned on given variables.
If a numeric effect modifier is provided, it is discretized into quantile bins. If you would like a custom discretization, you can do so yourself: create a new column containing the discretized effect modifier and then include that column's name in the effect_modifier_names argument.
:param effect_modifiers: Names of effect modifier variables over which the conditional effects will be estimated. If not provided, defaults to the effect modifiers specified during creation of the CausalEstimator object.
:param num_quantiles: The number of quantiles into which a numeric effect modifier variable is discretized. Does not affect any categorical effect modifiers.
:returns: A (multi-index) dataframe that provides separate effects for each value of the (discretized) effect modifiers.
"""
return self.estimator._estimate_conditional_effects(
self.estimator._estimate_effect_fn, effect_modifiers, num_quantiles
)
def interpret(self, method_name=None, **kwargs):
"""Interpret the causal estimate.
:param method_name: Method used (string) or a list of methods. If None, then the default for the specific estimator is used.
:param kwargs: Optional parameters that are directly passed to the interpreter method.
:returns: None
"""
if method_name is None:
method_name = self.estimator.interpret_method
method_name_arr = parse_state(method_name)
for method in method_name_arr:
interpreter = interpreters.get_class_object(method)
interpreter(self, **kwargs).interpret()
def __str__(self):
s = "*** Causal Estimate ***\n"
# No estimand was identified (identification failed)
if self.target_estimand is None:
return "Estimation failed! No relevant identified estimand available for this estimation method."
s += "\n## Identified estimand\n{0}".format(self.target_estimand.__str__(only_target_estimand=True))
s += "\n## Realized estimand\n{0}".format(self.realized_estimand_expr)
if hasattr(self, "estimator"):
s += "\nTarget units: {0}\n".format(self.estimator.target_units_tostr())
s += "\n## Estimate\n"
s += "Mean value: {0}\n".format(self.value)
s += ""
if hasattr(self, "cate_estimates"):
s += "Effect estimates: {0}\n".format(self.cate_estimates)
if hasattr(self, "estimator"):
if self.estimator._significance_test:
s += "p-value: {0}\n".format(self.estimator.signif_results_tostr(self.test_stat_significance()))
if self.estimator._confidence_intervals:
s += "{0}% confidence interval: {1}\n".format(
100 * self.estimator.confidence_level, self.get_confidence_intervals()
)
if self.conditional_estimates is not None:
s += "### Conditional Estimates\n"
s += str(self.conditional_estimates)
if self.effect_strength is not None:
s += "\n## Effect Strength\n"
s += "Change in outcome attributable to treatment: {}\n".format(self.effect_strength["fraction-effect"])
# s += "Variance in outcome explained by treatment: {}\n".format(self.effect_strength["r-squared"])
return s
class RealizedEstimand(object):
def __init__(self, identified_estimand, estimator_name):
self.treatment_variable = identified_estimand.treatment_variable
self.outcome_variable = identified_estimand.outcome_variable
self.backdoor_variables = identified_estimand.get_backdoor_variables()
self.instrumental_variables = identified_estimand.instrumental_variables
self.estimand_type = identified_estimand.estimand_type
self.estimand_expression = None
self.assumptions = None
self.estimator_name = estimator_name
def update_assumptions(self, estimator_assumptions):
self.assumptions = estimator_assumptions
def update_estimand_expression(self, estimand_expression):
self.estimand_expression = estimand_expression
def __str__(self):
s = "Realized estimand: {0}\n".format(self.estimator_name)
s += "Realized estimand type: {0}\n".format(self.estimand_type)
s += "Estimand expression:\n{0}\n".format(sp.pretty(self.estimand_expression))
j = 1
for ass_name, ass_str in self.assumptions.items():
s += "Estimand assumption {0}, {1}: {2}\n".format(j, ass_name, ass_str)
j += 1
return s
import copy
import logging
from collections import namedtuple
from typing import Dict, List, Optional, Union
import numpy as np
import pandas as pd
import sympy as sp
from sklearn.utils import resample
import dowhy.interpreters as interpreters
from dowhy.causal_identifier.identified_estimand import IdentifiedEstimand
from dowhy.utils.api import parse_state
logger = logging.getLogger(__name__)
class CausalEstimator:
"""Base class for an estimator of causal effect.
Subclasses implement different estimation methods. All estimation methods are in the package "dowhy.causal_estimators"
"""
# The default number of simulations for statistical testing
DEFAULT_NUMBER_OF_SIMULATIONS_STAT_TEST = 1000
# The default number of simulations to obtain confidence intervals
DEFAULT_NUMBER_OF_SIMULATIONS_CI = 100
# The portion of the total size that should be taken each time to find the confidence intervals
# 1 is the recommended value
# https://ocw.mit.edu/courses/mathematics/18-05-introduction-to-probability-and-statistics-spring-2014/readings/MIT18_05S14_Reading24.pdf
# https://projecteuclid.org/download/pdf_1/euclid.ss/1032280214
DEFAULT_SAMPLE_SIZE_FRACTION = 1
# The default Confidence Level
DEFAULT_CONFIDENCE_LEVEL = 0.95
# Number of quantiles to discretize continuous columns, for applying groupby
NUM_QUANTILES_TO_DISCRETIZE_CONT_COLS = 5
# Prefix to add to temporary categorical variables created after discretization
TEMP_CAT_COLUMN_PREFIX = "__categorical__"
DEFAULT_NOTIMPLEMENTEDERROR_MSG = "not yet implemented for {0}. If you would like this to be implemented in the next version, please raise an issue at https://github.com/microsoft/dowhy/issues"
BootstrapEstimates = namedtuple("BootstrapEstimates", ["estimates", "params"])
DEFAULT_INTERPRET_METHOD = ["textual_effect_interpreter"]
# std args to be removed from locals() before being passed to args_dict
_STD_INIT_ARGS = ("self", "__class__", "args", "kwargs")
def __init__(
self,
identified_estimand: IdentifiedEstimand,
test_significance: bool = False,
evaluate_effect_strength: bool = False,
confidence_intervals: bool = False,
num_null_simulations: int = DEFAULT_NUMBER_OF_SIMULATIONS_STAT_TEST,
num_simulations: int = DEFAULT_NUMBER_OF_SIMULATIONS_CI,
sample_size_fraction: int = DEFAULT_SAMPLE_SIZE_FRACTION,
confidence_level: float = DEFAULT_CONFIDENCE_LEVEL,
need_conditional_estimates: Union[bool, str] = "auto",
num_quantiles_to_discretize_cont_cols: int = NUM_QUANTILES_TO_DISCRETIZE_CONT_COLS,
**_,
):
"""Initializes an estimator with data and names of relevant variables.
This method is called from the constructors of its child classes.
:param identified_estimand: probability expression
representing the target identified estimand to estimate.
:param test_significance: Binary flag or a string indicating whether to test significance and by which method. All estimators support test_significance="bootstrap" that estimates a p-value for the obtained estimate using the bootstrap method. Individual estimators can override this to support custom testing methods. The bootstrap method supports an optional parameter, num_null_simulations. If False, no testing is done. If True, significance of the estimate is tested using the custom method if available, otherwise by bootstrap.
:param evaluate_effect_strength: (Experimental) whether to evaluate the strength of effect
:param confidence_intervals: Binary flag or a string indicating whether the confidence intervals should be computed and which method should be used. All methods support estimation of confidence intervals using the bootstrap method by using the parameter confidence_intervals="bootstrap". The bootstrap method takes in two arguments (num_simulations and sample_size_fraction) that can be optionally specified in the params dictionary. Estimators may also override this to implement their own confidence interval method. If this parameter is False, no confidence intervals are computed. If True, confidence intervals are computed by the estimator's specific method if available, otherwise through bootstrap
:param num_null_simulations: The number of simulations for testing the
statistical significance of the estimator
:param num_simulations: The number of simulations for finding the
confidence interval (and/or standard error) for a estimate
:param sample_size_fraction: The size of the sample for the bootstrap
estimator
:param confidence_level: The confidence level of the confidence
interval estimate
:param need_conditional_estimates: Boolean flag indicating whether
conditional estimates should be computed. Defaults to True if
there are effect modifiers in the graph
:param num_quantiles_to_discretize_cont_cols: The number of quantiles
into which a numeric effect modifier is split, to enable
estimation of conditional treatment effect over it.
:param kwargs: (optional) Additional estimator-specific parameters (accepted but ignored by this base class)
:returns: an instance of the estimator class.
"""
self._target_estimand = identified_estimand
self._significance_test = test_significance
self._effect_strength_eval = evaluate_effect_strength
self._confidence_intervals = confidence_intervals
# Setting the default interpret method
self.interpret_method = CausalEstimator.DEFAULT_INTERPRET_METHOD
self.logger = logging.getLogger(__name__)
# Check if some parameters were set, otherwise set to default values
self.num_null_simulations = num_null_simulations
self.num_simulations = num_simulations
self.sample_size_fraction = sample_size_fraction
self.confidence_level = confidence_level
self.num_quantiles_to_discretize_cont_cols = num_quantiles_to_discretize_cont_cols
# Estimate conditional estimates by default
self.need_conditional_estimates = need_conditional_estimates
self._bootstrap_estimates = None
self._bootstrap_null_estimates = None
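# A minimal sketch of the subclassing pattern this base class expects
# (hypothetical child class; the fitting logic is elided):
#
# class MyEstimator(CausalEstimator):
#     def fit(self, data, treatment_name, outcome_name, effect_modifier_names=None):
#         self._set_data(data, treatment_name, outcome_name)
#         self._set_effect_modifiers(effect_modifier_names)
#         # ... estimator-specific fitting logic ...
#         return self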
def _set_data(self, data: pd.DataFrame, treatment_name: List[str], outcome_name: List[str]):
"""Sets the data for the estimator
:param data: data frame containing the data
:param treatment_name: name of the treatment variable
:param outcome_name: name of the outcome variable
"""
self._data = data
self._treatment_name = treatment_name
self._outcome_name = outcome_name[0]
self._treatment = self._data[self._treatment_name]
self._outcome = self._data[self._outcome_name]
def _set_effect_modifiers(self, effect_modifier_names: Optional[List[str]] = None):
"""Sets the effect modifiers for the estimator
Modifies need_conditional_estimates accordingly to effect modifiers value
:param effect_modifier_names: Names of variables on which to compute separate
effects, or return a heterogeneous effect function. Not all
methods support this currently.
"""
self._effect_modifiers = effect_modifier_names
if effect_modifier_names is not None:
self._effect_modifier_names = [cname for cname in effect_modifier_names if cname in self._data.columns]
if len(self._effect_modifier_names) > 0:
self._effect_modifiers = self._data[self._effect_modifier_names]
self._effect_modifiers = pd.get_dummies(self._effect_modifiers, drop_first=True)
self.logger.debug("Effect modifiers: " + ",".join(self._effect_modifier_names))
else:
self._effect_modifier_names = []
else:
self._effect_modifier_names = []
self.need_conditional_estimates = (
self.need_conditional_estimates
if self.need_conditional_estimates != "auto"
else (self._effect_modifier_names and len(self._effect_modifier_names) > 0)
)
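# Example of the encoding above (hypothetical column): a categorical modifier
# "gender" with values {"female", "male"} becomes a single 0/1 dummy column
# after pd.get_dummies(..., drop_first=True), so downstream estimators only
# ever see numeric effect-modifier columns.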
def _set_identified_estimand(self, new_identified_estimand):
"""Method used internally to change the target estimand (required by some refuters)
:param new_identified_estimand: The new target_estimand to use
"""
self._target_estimand = new_identified_estimand
def get_new_estimator_object(
self,
identified_estimand,
test_significance=False,
evaluate_effect_strength=False,
confidence_intervals=None,
):
"""Create a new estimator of the same type as the one passed in the estimate argument.
Creates a new object with the identified_estimand
:param identified_estimand: IdentifiedEstimand
An instance of the identified estimand class that provides the information with
respect to which causal pathways are employed when the treatment affects the outcome
:returns: A new instance of the same estimator class that had generated the given estimate.
"""
new_estimator = copy.deepcopy(self)
new_estimator._target_estimand = identified_estimand
# Use the same attribute names that __init__ sets, so the flags actually take effect
new_estimator._significance_test = test_significance
new_estimator._effect_strength_eval = evaluate_effect_strength
new_estimator._confidence_intervals = (
self._confidence_intervals if confidence_intervals is None else confidence_intervals
)
return new_estimator
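# This is the hook that the refutation and bootstrap helpers rely on: they
# clone the estimator via get_new_estimator_object(...), call fit() on the
# modified/resampled data, and re-run estimate_effect() on the clone (see
# _generate_bootstrap_estimates below).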
def estimate_effect_naive(self):
# TODO Only works for binary treatment
df_withtreatment = self._data.loc[self._data[self._treatment_name] == 1]
df_notreatment = self._data.loc[self._data[self._treatment_name] == 0]
est = np.mean(df_withtreatment[self._outcome_name]) - np.mean(df_notreatment[self._outcome_name])
return CausalEstimate(est, None, None, control_value=0, treatment_value=1)
def _estimate_effect_fn(self, data_df):
"""Function used in conditional effect estimation. This function is to be overridden by each child estimator.
The overridden function should take in a dataframe as input and return the estimate for that data.
"""
raise NotImplementedError(
("Conditional treatment effects are " + CausalEstimator.DEFAULT_NOTIMPLEMENTEDERROR_MSG).format(
self.__class__
)
)
def _estimate_conditional_effects(self, estimate_effect_fn, effect_modifier_names=None, num_quantiles=None):
"""Estimate conditional treatment effects. Common method for all estimators that utilizes a specific estimate_effect_fn implemented by each child estimator.
If a numeric effect modifier is provided, it is discretized into quantile bins. If you would like a custom discretization, you can do so yourself: create a new column containing the discretized effect modifier and then include that column's name in the effect_modifier_names argument.
:param estimate_effect_fn: Function that has a single parameter (a data frame) and returns the treatment effect estimate on that data.
:param effect_modifier_names: Names of effect modifier variables over which the conditional effects will be estimated. If not provided, defaults to the effect modifiers specified during creation of the CausalEstimator object.
:param num_quantiles: The number of quantiles into which a numeric effect modifier variable is discretized. Does not affect any categorical effect modifiers.
:returns: A (multi-index) dataframe that provides separate effects for each value of the (discretized) effect modifiers.
"""
# Defaulting to class default values if parameters are not provided
if effect_modifier_names is None:
effect_modifier_names = self._effect_modifier_names
if num_quantiles is None:
num_quantiles = self.num_quantiles_to_discretize_cont_cols
# Checking that there is at least one effect modifier
if not effect_modifier_names:
raise ValueError("At least one effect modifier should be specified to compute conditional effects.")
# Making sure that effect_modifier_names is a list
effect_modifier_names = parse_state(effect_modifier_names)
if not all(em in self._effect_modifier_names for em in effect_modifier_names):
self.logger.warning(
"At least one of the provided effect modifiers was not included while fitting the estimator. You may get incorrect results. To resolve, fit the estimator again by providing the updated effect modifiers in estimate_effect()."
)
# Making a copy since we are going to be changing effect modifier names
effect_modifier_names = effect_modifier_names.copy()
prefix = CausalEstimator.TEMP_CAT_COLUMN_PREFIX
# For every numeric effect modifier, adding a temp categorical column
for i in range(len(effect_modifier_names)):
em = effect_modifier_names[i]
if pd.api.types.is_numeric_dtype(self._data[em].dtypes):
self._data[prefix + str(em)] = pd.qcut(self._data[em], num_quantiles, duplicates="drop")
effect_modifier_names[i] = prefix + str(em)
# Grouping by effect modifiers and computing effect separately
by_effect_mods = self._data.groupby(effect_modifier_names)
cond_est_fn = lambda x: self._do(self._treatment_value, x) - self._do(self._control_value, x)
conditional_estimates = by_effect_mods.apply(estimate_effect_fn)
# Deleting the temporary categorical columns
for em in effect_modifier_names:
if em.startswith(prefix):
self._data.pop(em)
return conditional_estimates
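# Illustration (hypothetical modifier): a numeric column "age" with
# num_quantiles=5 is binned by pd.qcut into a temporary column
# "__categorical__age"; the groupby then yields one effect per quantile bin,
# and the temporary column is dropped before returning.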
def _do(self, x, data_df=None):
raise NotImplementedError(
("Do-operator is " + CausalEstimator.DEFAULT_NOTIMPLEMENTEDERROR_MSG).format(self.__class__)
)
def do(self, x, data_df=None):
"""Method that implements the do-operator.
Given a value x for the treatment, returns the expected value of the outcome when the treatment is intervened to a value x.
:param x: Value of the treatment
:param data_df: Data on which the do-operator is to be applied.
:returns: Value of the outcome when treatment is intervened/set to x.
"""
est = self._do(x, data_df)
return est
def construct_symbolic_estimator(self, estimand):
raise NotImplementedError(("Symbolic estimator string is ").format(self.__class__))
def _generate_bootstrap_estimates(self, num_bootstrap_simulations, sample_size_fraction):
"""Helper function to generate causal estimates over bootstrapped samples.
:param num_bootstrap_simulations: Number of simulations for the bootstrap method.
:param sample_size_fraction: Fraction of the dataset to be resampled.
:returns: A collections.namedtuple containing a list of bootstrapped estimates and a dictionary containing parameters used for the bootstrap.
"""
# The array that stores the results of all estimations
simulation_results = np.zeros(num_bootstrap_simulations)
# Compute the sample size as a fraction of the population (full data) size
sample_size = int(sample_size_fraction * len(self._data))
if sample_size > len(self._data):
self.logger.warning("WARN: The sample size is greater than the data being sampled")
self.logger.info("INFO: The sample size: {}".format(sample_size))
self.logger.info("INFO: The number of simulations: {}".format(num_bootstrap_simulations))
# Perform the set number of simulations
for index in range(num_bootstrap_simulations):
new_data = resample(self._data, n_samples=sample_size)
new_estimator = self.get_new_estimator_object(
self._target_estimand,
# names of treatment and outcome
test_significance=False,
evaluate_effect_strength=False,
confidence_intervals=False,
)
new_estimator.fit(
new_data,
self._target_estimand.treatment_variable,
self._target_estimand.outcome_variable,
effect_modifier_names=self._effect_modifier_names,
)
new_effect = new_estimator.estimate_effect(
treatment_value=self._treatment_value,
control_value=self._control_value,
target_units=self._target_units,
)
simulation_results[index] = new_effect.value
estimates = CausalEstimator.BootstrapEstimates(
simulation_results,
{"num_simulations": num_bootstrap_simulations, "sample_size_fraction": sample_size_fraction},
)
return estimates
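# The returned estimates.estimates array (length num_bootstrap_simulations)
# feeds both downstream consumers: np.std(estimates.estimates) is the
# bootstrap standard error, and its deviations from the point estimate drive
# the confidence-interval computation below.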
def _estimate_confidence_intervals_with_bootstrap(
self, estimate_value, confidence_level=None, num_simulations=None, sample_size_fraction=None
):
"""
Method to compute confidence interval using bootstrapped sampling.
:param estimate_value: obtained estimate's value
:param confidence_level: The level for which to compute CI (e.g., 95% confidence level translates to confidence_level=0.95)
:param num_simulations: The number of simulations to be performed to get the bootstrap confidence intervals.
:param sample_size_fraction: The fraction of the dataset to be resampled.
:returns: confidence interval at the specified level.
For more details on bootstrap or resampling statistics, refer to the following links:
https://ocw.mit.edu/courses/mathematics/18-05-introduction-to-probability-and-statistics-spring-2014/readings/MIT18_05S14_Reading24.pdf
https://projecteuclid.org/download/pdf_1/euclid.ss/1032280214
"""
# Using class default parameters if not specified
if num_simulations is None:
num_simulations = self.num_simulations
if sample_size_fraction is None:
sample_size_fraction = self.sample_size_fraction
# Checking if bootstrap_estimates are already computed
if self._bootstrap_estimates is None:
self._bootstrap_estimates = self._generate_bootstrap_estimates(num_simulations, sample_size_fraction)
elif CausalEstimator.is_bootstrap_parameter_changed(self._bootstrap_estimates.params, locals()):
# Check whether any parameter has changed since the previous estimate
self._bootstrap_estimates = self._generate_bootstrap_estimates(num_simulations, sample_size_fraction)
# Now use the data obtained from the simulations to get the value of the confidence estimates
bootstrap_estimates = self._bootstrap_estimates.estimates
# Get the variations of each bootstrap estimate and sort
bootstrap_variations = [bootstrap_estimate - estimate_value for bootstrap_estimate in bootstrap_estimates]
sorted_bootstrap_variations = np.sort(bootstrap_variations)
# Now we take the (1- p)th and the (p)th variations, where p is the chosen confidence level
upper_bound_index = int((1 - confidence_level) * len(sorted_bootstrap_variations))
lower_bound_index = int(confidence_level * len(sorted_bootstrap_variations))
# Get the lower and upper bounds by subtracting the variations from the estimate
lower_bound = estimate_value - sorted_bootstrap_variations[lower_bound_index]
upper_bound = estimate_value - sorted_bootstrap_variations[upper_bound_index]
return lower_bound, upper_bound
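# Worked example of the index arithmetic (illustrative numbers): with
# confidence_level=0.95 and 100 sorted variations,
# upper_bound_index = int(0.05 * 100) = 5 and
# lower_bound_index = int(0.95 * 100) = 95, so the interval is
# (estimate_value - variations[95], estimate_value - variations[5]),
# a pivot-style (reverse-percentile) bootstrap interval.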
def _estimate_confidence_intervals(self, confidence_level=None, method=None, **kwargs):
"""
This method is to be overridden by the child classes, so that they
can run a confidence interval estimation method suited to the specific
causal estimator.
"""
raise NotImplementedError(
(
"This method for estimating confidence intervals is "
+ CausalEstimator.DEFAULT_NOTIMPLEMENTEDERROR_MSG
+ " Meanwhile, you can try the bootstrap method (method='bootstrap') to estimate confidence intervals."
).format(self.__class__)
)
def estimate_confidence_intervals(self, estimate_value, confidence_level=None, method=None, **kwargs):
"""Find the confidence intervals corresponding to any estimator
By default, this is done with the help of bootstrapped confidence intervals
but can be overridden if the specific estimator implements other methods of estimating confidence intervals.
If the method provided is not bootstrap, this function calls the implementation of the specific estimator.
:param estimate_value: obtained estimate's value
:param method: Method for estimating confidence intervals.
:param confidence_level: The confidence level of the confidence intervals of the estimate.
:param kwargs: Other optional args to be passed to the CI method.
:returns: The obtained confidence interval.
"""
if method is None:
if self._confidence_intervals:
method = self._confidence_intervals # this is either True or methodname
else:
method = "default"
confidence_intervals = None
if confidence_level is None:
confidence_level = self.confidence_level
if method == "default" or method is True: # user has not provided any method
try:
confidence_intervals = self._estimate_confidence_intervals(confidence_level, method=method, **kwargs)
except NotImplementedError:
confidence_intervals = self._estimate_confidence_intervals_with_bootstrap(
estimate_value, confidence_level, **kwargs
)
else:
if method == "bootstrap":
confidence_intervals = self._estimate_confidence_intervals_with_bootstrap(
estimate_value, confidence_level, **kwargs
)
else:
confidence_intervals = self._estimate_confidence_intervals(confidence_level, method=method, **kwargs)
return confidence_intervals
def _estimate_std_error_with_bootstrap(self, num_simulations=None, sample_size_fraction=None):
"""Compute standard error using the bootstrap method. Standard error
and confidence intervals use the same parameter num_simulations for
the number of bootstrap simulations.
:param num_simulations: Number of bootstrapped samples.
:param sample_size_fraction: Fraction of data to be resampled.
:returns: Standard error of the obtained estimate.
"""
# Use existing params, if new user defined params are not present
if num_simulations is None:
num_simulations = self.num_simulations
if sample_size_fraction is None:
sample_size_fraction = self.sample_size_fraction
# Checking if bootstrap_estimates are already computed
if self._bootstrap_estimates is None:
self._bootstrap_estimates = self._generate_bootstrap_estimates(num_simulations, sample_size_fraction)
elif CausalEstimator.is_bootstrap_parameter_changed(self._bootstrap_estimates.params, locals()):
# Check whether any parameter has changed since the previous standard error estimate
self._bootstrap_estimates = self._generate_bootstrap_estimates(num_simulations, sample_size_fraction)
std_error = np.std(self._bootstrap_estimates.estimates)
return std_error
def _estimate_std_error(self, method=None, **kwargs):
"""
This method is to be overridden by the child classes, so that they
can run a standard error estimation method suited to the specific
causal estimator.
"""
raise NotImplementedError(
(
"This method for estimating standard errors is "
+ CausalEstimator.DEFAULT_NOTIMPLEMENTEDERROR_MSG
+ " Meanwhile, you can try the bootstrap method (method='bootstrap') to estimate standard errors."
).format(self.__class__)
)
def estimate_std_error(self, method=None, **kwargs):
"""Compute standard error of an obtained causal estimate.
:param method: Method for computing the standard error.
:param kwargs: Other optional parameters to be passed to the estimating method.
:returns: Standard error of the causal estimate.
"""
if method is None:
if self._confidence_intervals:
method = self._confidence_intervals
else:
method = "default"
std_error = None
if method == "default" or method is True: # user has not provided any method
try:
std_error = self._estimate_std_error(method, **kwargs)
except NotImplementedError:
std_error = self._estimate_std_error_with_bootstrap(**kwargs)
else:
if method == "bootstrap":
std_error = self._estimate_std_error_with_bootstrap(**kwargs)
else:
std_error = self._estimate_std_error(method, **kwargs)
return std_error
def _test_significance_with_bootstrap(self, estimate_value, num_null_simulations=None):
"""Test statistical significance of an estimate using the bootstrap method.
:param estimate_value: Obtained estimate's value
:param num_null_simulations: Number of simulations for the null hypothesis
:returns: p-value of the statistical significance test.
"""
# Use existing params, if new user defined params are not present
if num_null_simulations is None:
num_null_simulations = self.num_null_simulations
do_retest = self._bootstrap_null_estimates is None or CausalEstimator.is_bootstrap_parameter_changed(
self._bootstrap_null_estimates.params, locals()
)
if do_retest:
null_estimates = np.zeros(num_null_simulations)
for i in range(num_null_simulations):
new_outcome = np.random.permutation(self._outcome)
new_data = self._data.assign(dummy_outcome=new_outcome)
# self._outcome = self._data["dummy_outcome"]
new_estimator = self.get_new_estimator_object(
self._target_estimand,
test_significance=False,
evaluate_effect_strength=False,
confidence_intervals=False,
)
new_estimator.fit(
data=new_data,
treatment_name=self._target_estimand.treatment_variable,
outcome_name=("dummy_outcome",),
effect_modifier_names=self._effect_modifier_names,
)
new_effect = new_estimator.estimate_effect(
target_units=self._target_units,
)
null_estimates[i] = new_effect.value
self._bootstrap_null_estimates = CausalEstimator.BootstrapEstimates(
null_estimates, {"num_null_simulations": num_null_simulations, "sample_size_fraction": 1}
)
# Processing the null hypothesis estimates
sorted_null_estimates = np.sort(self._bootstrap_null_estimates.estimates)
self.logger.debug("Null estimates: {0}".format(sorted_null_estimates))
median_estimate = sorted_null_estimates[int(num_null_simulations / 2)]
# Doing a two-sided test
if estimate_value > median_estimate:
# Being conservative with the p-value reported
estimate_index = np.searchsorted(sorted_null_estimates, estimate_value, side="left")
p_value = 1 - (estimate_index / num_null_simulations)
if estimate_value <= median_estimate:
# Being conservative with the p-value reported
estimate_index = np.searchsorted(sorted_null_estimates, estimate_value, side="right")
p_value = estimate_index / num_null_simulations
# If the estimate_index is 0, it depends on the number of simulations
if p_value == 0:
p_value = (0, 1 / len(sorted_null_estimates)) # a tuple determining the range.
elif p_value == 1:
p_value = (1 - 1 / len(sorted_null_estimates), 1)
signif_dict = {"p_value": p_value}
return signif_dict
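# Note on the null model above: permuting the outcome column severs any
# association between treatment and outcome while preserving the outcome's
# marginal distribution, so the estimates collected across permutations
# approximate the estimator's sampling distribution under the no-effect null.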
def _test_significance(self, estimate_value, method=None, **kwargs):
"""
This method is to be overridden by the child classes, so that they
can run a significance test suited to the specific
causal estimator.
"""
raise NotImplementedError(
(
"This method for testing statistical significance is "
+ CausalEstimator.DEFAULT_NOTIMPLEMENTEDERROR_MSG
+ " Meanwhile, you can try the bootstrap method (method='bootstrap') to test statistical significance."
).format(self.__class__)
)
def test_significance(self, estimate_value, method=None, **kwargs):
"""Test statistical significance of obtained estimate.
By default, uses resampling to create a non-parametric significance test.
A general procedure. Individual child estimators can implement different methods.
If the method name is different from "bootstrap", this function calls the
implementation of the child estimator.
:param self: object instance of class Estimator
:param estimate_value: obtained estimate's value
:param method: Method for checking statistical significance
:returns: p-value from the significance test
"""
if method is None:
if self._significance_test:
method = self._significance_test # this is either True or methodname
else:
method = "default"
signif_dict = None
if method == "default" or method is True: # user has not provided any method
try:
signif_dict = self._test_significance(estimate_value, method, **kwargs)
except NotImplementedError:
signif_dict = self._test_significance_with_bootstrap(estimate_value, **kwargs)
else:
if method == "bootstrap":
signif_dict = self._test_significance_with_bootstrap(estimate_value, **kwargs)
else:
signif_dict = self._test_significance(estimate_value, method, **kwargs)
return signif_dict
def evaluate_effect_strength(self, estimate):
fraction_effect_explained = self._evaluate_effect_strength(estimate, method="fraction-effect")
# Need to test r-squared before supporting
# effect_r_squared = self._evaluate_effect_strength(estimate, method="r-squared")
strength_dict = {
"fraction-effect": fraction_effect_explained
# 'r-squared': effect_r_squared
}
return strength_dict
def _evaluate_effect_strength(self, estimate, method="fraction-effect"):
supported_methods = ["fraction-effect"]
if method not in supported_methods:
raise NotImplementedError("This method is not supported for evaluating effect strength")
if method == "fraction-effect":
naive_obs_estimate = self.estimate_effect_naive()
self.logger.debug(estimate.value, naive_obs_estimate.value)
fraction_effect_explained = estimate.value / naive_obs_estimate.value
return fraction_effect_explained
# elif method == "r-squared":
# outcome_mean = np.mean(self._outcome)
# total_variance = np.sum(np.square(self._outcome - outcome_mean))
# Assuming a linear model with one variable: the treatment
# Currently only works for continuous y
# causal_model = outcome_mean + estimate.value*self._treatment
# squared_residual = np.sum(np.square(self._outcome - causal_model))
# r_squared = 1 - (squared_residual/total_variance)
# return r_squared
else:
return None
def update_input(self, treatment_value, control_value, target_units):
self._control_value = control_value
self._treatment_value = treatment_value
self._target_units = target_units
@staticmethod
def is_bootstrap_parameter_changed(bootstrap_estimates_params, given_params):
"""Check whether parameters of the bootstrap have changed.
This is an efficiency method that checks if fresh resampling of the bootstrap samples is required.
Returns True if parameters have changed and resampling should be done again.
:param bootstrap_estimates_params: A dictionary of parameters for the current bootstrap samples
:param given_params: A dictionary of parameters passed by the user
:returns: A binary flag denoting whether the parameters are different.
"""
is_any_parameter_changed = False
for prm, val in bootstrap_estimates_params.items():
given_val = given_params.get(prm, None)
if given_val is not None and given_val != val:
is_any_parameter_changed = True
break
return is_any_parameter_changed
def target_units_tostr(self):
s = ""
if type(self._target_units) is str:
s += self._target_units
elif callable(self._target_units):
s += "Data subset defined by a function"
elif isinstance(self._target_units, pd.DataFrame):
s += "Data subset provided as a data frame"
return s
def signif_results_tostr(self, signif_results):
s = ""
pval = signif_results["p_value"]
if type(pval) is tuple:
s += "[{0}, {1}]".format(pval[0], pval[1])
else:
s += "{0}".format(pval)
return s
def estimate_effect(
data: pd.DataFrame,
treatment: Union[str, List[str]],
outcome: Union[str, List[str]],
identifier_name: str,
estimator: CausalEstimator,
control_value: int = 0,
treatment_value: int = 1,
target_units: str = "ate",
effect_modifiers: List[str] = None,
fit_estimator: bool = True,
method_params: Optional[Dict] = None,
):
"""Estimate the identified causal effect.
You can directly call any of the EconML estimation methods. The convention is "backdoor.econml.path-to-estimator-class". For example, for the double machine learning estimator ("DML" class) that is located inside "dml" module of EconML, you can use the method name, "backdoor.econml.dml.DML". CausalML estimators can also be called. See `this demo notebook <https://py-why.github.io/dowhy/example_notebooks/dowhy-conditional-treatment-effects.html>`_.
:param treatment: Name of the treatment
:param outcome: Name of the outcome
:param data: data frame containing the data on which the effect is to be estimated
:param identifier_name: name of the identification method used (e.g., "backdoor")
:param estimator: Instance of a CausalEstimator to use
:param control_value: Value of the treatment in the control group, for effect estimation. If treatment is multi-variate, this can be a list.
:param treatment_value: Value of the treatment in the treated group, for effect estimation. If treatment is multi-variate, this can be a list.
:param target_units: (Experimental) The units for which the treatment effect should be estimated. This can be of three types. (1) a string for common specifications of target units (namely, "ate", "att" and "atc"), (2) a lambda function that can be used as an index for the data (pandas DataFrame), or (3) a new DataFrame that contains values of the effect_modifiers and effect will be estimated only for this new data.
:param effect_modifiers: Names of effect modifier variables can be (optionally) specified here too, since they do not affect identification. If None, the effect_modifiers from the CausalModel are used.
:param fit_estimator: Boolean flag on whether to fit the estimator.
Setting it to False is useful to estimate the effect on new data using a previously fitted estimator.
:param method_params: Dictionary containing any method-specific parameters; an optional "fit_params" entry is passed through to the estimator's fit() call.
:returns: An instance of the CausalEstimate class, containing the causal effect estimate
and other method-dependent information
"""
if effect_modifiers is None:
effect_modifiers = []
treatment = parse_state(treatment)
outcome = parse_state(outcome)
causal_estimator_class = estimator.__class__
identified_estimand = estimator._target_estimand
identified_estimand.set_identifier_method(identifier_name)
if identified_estimand.no_directed_path:
logger.warning("No directed path from {0} to {1}.".format(treatment, outcome))
return CausalEstimate(
0, identified_estimand, None, control_value=control_value, treatment_value=treatment_value
)
# Check if estimator's target estimand is identified
elif identified_estimand.estimands[identifier_name] is None:
logger.error("No valid identified estimand available.")
return CausalEstimate(None, None, None, control_value=control_value, treatment_value=treatment_value)
if fit_estimator:
estimator.fit(
data=data,
treatment_name=treatment,
outcome_name=outcome,
effect_modifier_names=effect_modifiers,
**method_params["fit_params"] if "fit_params" in method_params else {},
)
estimate = estimator.estimate_effect(
treatment_value=treatment_value,
control_value=control_value,
target_units=target_units,
confidence_intervals=estimator._confidence_intervals,
)
# Store parameters inside estimate object for refutation methods
# TODO: This add_params needs to move to the estimator class
# inside estimate_effect and estimate_conditional_effect
estimate.add_params(
estimand_type=identified_estimand.estimand_type,
estimator_class=causal_estimator_class,
test_significance=estimator._significance_test,
evaluate_effect_strength=estimator._effect_strength_eval,
confidence_intervals=estimator._confidence_intervals,
target_units=target_units,
effect_modifiers=effect_modifiers,
)
if estimator._significance_test:
estimator.test_significance(estimate.value, method=estimator._significance_test)
if estimator._confidence_intervals:
estimator.estimate_confidence_intervals(
estimate.value, confidence_level=estimator.confidence_level, method=estimator._confidence_intervals
)
if estimator._effect_strength_eval:
effect_strength_dict = estimator.evaluate_effect_strength(estimate)
estimate.add_effect_strength(effect_strength_dict)
return estimate
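# A rough usage sketch of this functional entry point (hypothetical names;
# `SomeEstimator` stands in for any concrete CausalEstimator subclass):
#
# estimator = SomeEstimator(identified_estimand, confidence_intervals="bootstrap")
# est = estimate_effect(
#     data=df,
#     treatment="v0",
#     outcome="y",
#     identifier_name="backdoor",
#     estimator=estimator,
#     target_units="ate",
# )
# print(est.value, est.get_confidence_intervals())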
class CausalEstimate:
"""Class for the estimate object that every causal estimator returns"""
def __init__(
self,
estimate,
target_estimand,
realized_estimand_expr,
control_value,
treatment_value,
conditional_estimates=None,
**kwargs,
):
self.value = estimate
self.target_estimand = target_estimand
self.realized_estimand_expr = realized_estimand_expr
self.control_value = control_value
self.treatment_value = treatment_value
self.conditional_estimates = conditional_estimates
self.params = kwargs
if self.params is not None:
for key, value in self.params.items():
setattr(self, key, value)
self.effect_strength = None
def add_estimator(self, estimator_instance):
self.estimator = estimator_instance
def add_effect_strength(self, strength_dict):
self.effect_strength = strength_dict
def add_params(self, **kwargs):
self.params.update(kwargs)
def get_confidence_intervals(self, confidence_level=None, method=None, **kwargs):
"""Get confidence intervals of the obtained estimate.
By default, this is done with the help of bootstrapped confidence intervals
but can be overridden if the specific estimator implements other methods of estimating confidence intervals.
If the method provided is not bootstrap, this function calls the implementation of the specific estimator.
:param method: Method for estimating confidence intervals.
:param confidence_level: The confidence level of the confidence intervals of the estimate.
:param kwargs: Other optional args to be passed to the CI method.
:returns: The obtained confidence interval.
"""
confidence_intervals = self.estimator.estimate_confidence_intervals(
estimate_value=self.value, confidence_level=confidence_level, method=method, **kwargs
)
return confidence_intervals
def get_standard_error(self, method=None, **kwargs):
"""Get standard error of the obtained estimate.
By default, this is done with the help of bootstrapped standard errors
but can be overridden if the specific estimator implements other methods of estimating standard error.
If the method provided is not bootstrap, this function calls the implementation of the specific estimator.
:param method: Method for computing the standard error.
:param kwargs: Other optional parameters to be passed to the estimating method.
:returns: Standard error of the causal estimate.
"""
std_error = self.estimator.estimate_std_error(method=method, **kwargs)
return std_error
def test_stat_significance(self, method=None, **kwargs):
"""Test statistical significance of the estimate obtained.
By default, uses resampling to create a non-parametric significance test.
Individual child estimators can implement different methods.
If the method name is different from "bootstrap", this function calls the
implementation of the child estimator.
:param method: Method for checking statistical significance
:param kwargs: Other optional parameters to be passed to the estimating method.
:returns: p-value from the significance test
"""
signif_results = self.estimator.test_significance(self.value, method=method, **kwargs)
return {"p_value": signif_results["p_value"]}
def estimate_conditional_effects(
self, effect_modifiers=None, num_quantiles=CausalEstimator.NUM_QUANTILES_TO_DISCRETIZE_CONT_COLS
):
"""Estimate treatment effect conditioned on given variables.
If a numeric effect modifier is provided, it is discretized into quantile bins. If you would like a custom discretization, you can do so yourself: create a new column containing the discretized effect modifier and then include that column's name in the effect_modifier_names argument.
:param effect_modifiers: Names of effect modifier variables over which the conditional effects will be estimated. If not provided, defaults to the effect modifiers specified during creation of the CausalEstimator object.
:param num_quantiles: The number of quantiles into which a numeric effect modifier variable is discretized. Does not affect any categorical effect modifiers.
:returns: A (multi-index) dataframe that provides separate effects for each value of the (discretized) effect modifiers.
"""
return self.estimator._estimate_conditional_effects(
self.estimator._estimate_effect_fn, effect_modifiers, num_quantiles
)
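# Sketch of conditional-effect usage (illustrative only; "X0" is a placeholder
# for a numeric effect modifier column present in the data):
#   >>> cond_df = estimate.estimate_conditional_effects(["X0"], num_quantiles=4)
#   >>> cond_df  # multi-index dataframe with one effect per quantile bin of X0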
def interpret(self, method_name=None, **kwargs):
"""Interpret the causal estimate.
:param method_name: Method used (string) or a list of methods. If None, then the default for the specific estimator is used.
:param kwargs: Optional parameters that are directly passed to the interpreter method.
:returns: None
"""
if method_name is None:
method_name = self.estimator.interpret_method
method_name_arr = parse_state(method_name)
for method in method_name_arr:
interpreter = interpreters.get_class_object(method)
interpreter(self, **kwargs).interpret()
def __str__(self):
s = "*** Causal Estimate ***\n"
# No estimand was identified (identification failed)
if self.target_estimand is None:
return "Estimation failed! No relevant identified estimand available for this estimation method."
s += "\n## Identified estimand\n{0}".format(self.target_estimand.__str__(only_target_estimand=True))
s += "\n## Realized estimand\n{0}".format(self.realized_estimand_expr)
if hasattr(self, "estimator"):
s += "\nTarget units: {0}\n".format(self.estimator.target_units_tostr())
s += "\n## Estimate\n"
s += "Mean value: {0}\n".format(self.value)
s += ""
if hasattr(self, "cate_estimates"):
s += "Effect estimates: {0}\n".format(self.cate_estimates)
if hasattr(self, "estimator"):
if self.estimator._significance_test:
s += "p-value: {0}\n".format(self.estimator.signif_results_tostr(self.test_stat_significance()))
if self.estimator._confidence_intervals:
s += "{0}% confidence interval: {1}\n".format(
100 * self.estimator.confidence_level, self.get_confidence_intervals()
)
if self.conditional_estimates is not None:
s += "### Conditional Estimates\n"
s += str(self.conditional_estimates)
if self.effect_strength is not None:
s += "\n## Effect Strength\n"
s += "Change in outcome attributable to treatment: {}\n".format(self.effect_strength["fraction-effect"])
# s += "Variance in outcome explained by treatment: {}\n".format(self.effect_strength["r-squared"])
return s
class RealizedEstimand(object):
def __init__(self, identified_estimand, estimator_name):
self.treatment_variable = identified_estimand.treatment_variable
self.outcome_variable = identified_estimand.outcome_variable
self.backdoor_variables = identified_estimand.get_backdoor_variables()
self.instrumental_variables = identified_estimand.instrumental_variables
self.estimand_type = identified_estimand.estimand_type
self.estimand_expression = None
self.assumptions = None
self.estimator_name = estimator_name
def update_assumptions(self, estimator_assumptions):
self.assumptions = estimator_assumptions
def update_estimand_expression(self, estimand_expression):
self.estimand_expression = estimand_expression
def __str__(self):
s = "Realized estimand: {0}\n".format(self.estimator_name)
s += "Realized estimand type: {0}\n".format(self.estimand_type)
s += "Estimand expression:\n{0}\n".format(sp.pretty(self.estimand_expression))
j = 1
for ass_name, ass_str in self.assumptions.items():
s += "Estimand assumption {0}, {1}: {2}\n".format(j, ass_name, ass_str)
j += 1
return s
| andresmor-ms | 11c4e0dafd6e824eb81ad14262457d954ae61468 | affe0952f4aba6845247355c171565510c2c1673 | docstring needs to be updated.
* estimator is not referenced. | amit-sharma | 191 |
py-why/dowhy | 746 | Functional api/causal estimators | * Introduce `fit()` method to estimators.
* Refactor constructors to avoid using `*args` and `**kwargs` and have more explicit parameters.
* Refactor refuters and other parts of the code to use `fit()` and modify arguments to `estimate_effect()` | null | 2022-11-04 16:15:39+00:00 | 2022-12-03 17:07:53+00:00 | dowhy/causal_estimator.py | import logging
from collections import namedtuple
from typing import Dict, List, Optional, Union
import numpy as np
import pandas as pd
import sympy as sp
from sklearn.utils import resample
import dowhy.interpreters as interpreters
from dowhy import causal_estimators
from dowhy.causal_graph import CausalGraph
from dowhy.causal_identifier.identified_estimand import IdentifiedEstimand
from dowhy.utils.api import parse_state
logger = logging.getLogger(__name__)
class CausalEstimator:
"""Base class for an estimator of causal effect.
Subclasses implement different estimation methods. All estimation methods are in the package "dowhy.causal_estimators"
"""
# The default number of simulations for statistical testing
DEFAULT_NUMBER_OF_SIMULATIONS_STAT_TEST = 1000
# The default number of simulations to obtain confidence intervals
DEFAULT_NUMBER_OF_SIMULATIONS_CI = 100
# The portion of the total size that should be taken each time to find the confidence intervals
# 1 is the recommended value
# https://ocw.mit.edu/courses/mathematics/18-05-introduction-to-probability-and-statistics-spring-2014/readings/MIT18_05S14_Reading24.pdf
# https://projecteuclid.org/download/pdf_1/euclid.ss/1032280214
DEFAULT_SAMPLE_SIZE_FRACTION = 1
# The default Confidence Level
DEFAULT_CONFIDENCE_LEVEL = 0.95
# Number of quantiles to discretize continuous columns, for applying groupby
NUM_QUANTILES_TO_DISCRETIZE_CONT_COLS = 5
# Prefix to add to temporary categorical variables created after discretization
TEMP_CAT_COLUMN_PREFIX = "__categorical__"
DEFAULT_NOTIMPLEMENTEDERROR_MSG = "not yet implemented for {0}. If you would like this to be implemented in the next version, please raise an issue at https://github.com/microsoft/dowhy/issues"
BootstrapEstimates = namedtuple("BootstrapEstimates", ["estimates", "params"])
DEFAULT_INTERPRET_METHOD = ["textual_effect_interpreter"]
# std args to be removed from locals() before being passed to args_dict
_STD_INIT_ARGS = ("self", "__class__", "args", "kwargs")
def __init__(
self,
data,
identified_estimand,
treatment,
outcome,
control_value=0,
treatment_value=1,
test_significance=False,
evaluate_effect_strength=False,
confidence_intervals=False,
target_units=None,
effect_modifiers=None,
num_null_simulations=DEFAULT_NUMBER_OF_SIMULATIONS_STAT_TEST,
num_simulations=DEFAULT_NUMBER_OF_SIMULATIONS_CI,
sample_size_fraction=DEFAULT_SAMPLE_SIZE_FRACTION,
confidence_level=DEFAULT_CONFIDENCE_LEVEL,
need_conditional_estimates="auto",
num_quantiles_to_discretize_cont_cols=NUM_QUANTILES_TO_DISCRETIZE_CONT_COLS,
**kwargs,
):
"""Initializes an estimator with data and names of relevant variables.
This method is called from the constructors of its child classes.
:param data: data frame containing the data
:param identified_estimand: probability expression
representing the target identified estimand to estimate.
:param treatment: name of the treatment variable
:param outcome: name of the outcome variable
:param control_value: Value of the treatment in the control group, for effect estimation. If treatment is multi-variate, this can be a list.
:param treatment_value: Value of the treatment in the treated group, for effect estimation. If treatment is multi-variate, this can be a list.
:param test_significance: Binary flag or a string indicating whether to test significance and by which method. All estimators support test_significance="bootstrap" that estimates a p-value for the obtained estimate using the bootstrap method. Individual estimators can override this to support custom testing methods. The bootstrap method supports an optional parameter, num_null_simulations. If False, no testing is done. If True, significance of the estimate is tested using the custom method if available, otherwise by bootstrap.
:param evaluate_effect_strength: (Experimental) whether to evaluate the strength of effect
:param confidence_intervals: Binary flag or a string indicating whether the confidence intervals should be computed and which method should be used. All methods support estimation of confidence intervals using the bootstrap method by using the parameter confidence_intervals="bootstrap". The bootstrap method takes in two arguments (num_simulations and sample_size_fraction) that can be optionally specified in the params dictionary. Estimators may also override this to implement their own confidence interval method. If this parameter is False, no confidence intervals are computed. If True, confidence intervals are computed by the estimator's specific method if available, otherwise through bootstrap.
:param target_units: The units for which the treatment effect should be estimated. This can be a string for common specifications of target units (namely, "ate", "att" and "atc"). It can also be a lambda function that can be used as an index for the data (pandas DataFrame). Alternatively, it can be a new DataFrame that contains values of the effect_modifiers and effect will be estimated only for this new data.
:param effect_modifiers: Variables on which to compute separate
effects, or return a heterogeneous effect function. Not all
methods support this currently.
:param num_null_simulations: The number of simulations for testing the
statistical significance of the estimator
:param num_simulations: The number of simulations for finding the
confidence interval (and/or standard error) for an estimate
:param sample_size_fraction: The size of the sample for the bootstrap
estimator
:param confidence_level: The confidence level of the confidence
interval estimate
:param need_conditional_estimates: Boolean flag indicating whether
conditional estimates should be computed. Defaults to True if
there are effect modifiers in the graph
:param num_quantiles_to_discretize_cont_cols: The number of quantiles
into which a numeric effect modifier is split, to enable
estimation of conditional treatment effect over it.
:param kwargs: (optional) Additional estimator-specific parameters
:returns: an instance of the estimator class.
"""
self._data = data
self._target_estimand = identified_estimand
# Currently estimation methods only support univariate treatment and outcome
self._treatment_name = treatment
self._outcome_name = outcome[0] # assuming one-dimensional outcome
self._control_value = control_value
self._treatment_value = treatment_value
self._significance_test = test_significance
self._effect_strength_eval = evaluate_effect_strength
self._target_units = target_units
self._effect_modifier_names = effect_modifiers
self._confidence_intervals = confidence_intervals
self._bootstrap_estimates = None # for confidence intervals and std error
self._bootstrap_null_estimates = None # for significance test
self._effect_modifiers = None
self.method_params = kwargs
# Setting the default interpret method
self.interpret_method = CausalEstimator.DEFAULT_INTERPRET_METHOD
self.logger = logging.getLogger(__name__)
# Setting treatment and outcome values
if self._data is not None:
self._treatment = self._data[self._treatment_name]
self._outcome = self._data[self._outcome_name]
if self._effect_modifier_names:
# only add the observed nodes
self._effect_modifier_names = [
cname for cname in self._effect_modifier_names if cname in self._data.columns
]
if len(self._effect_modifier_names) > 0:
self._effect_modifiers = self._data[self._effect_modifier_names]
self._effect_modifiers = pd.get_dummies(self._effect_modifiers, drop_first=True)
self.logger.debug("Effect modifiers: " + ",".join(self._effect_modifier_names))
else:
self._effect_modifier_names = None
# Check if some parameters were set, otherwise set to default values
self.num_null_simulations = num_null_simulations
self.num_simulations = num_simulations
self.sample_size_fraction = sample_size_fraction
self.confidence_level = confidence_level
self.num_quantiles_to_discretize_cont_cols = num_quantiles_to_discretize_cont_cols
# Estimate conditional estimates by default
self.need_conditional_estimates = (
need_conditional_estimates if need_conditional_estimates != "auto" else bool(self._effect_modifier_names)
)
@staticmethod
def get_estimator_object(new_data, identified_estimand, estimate):
"""Create a new estimator of the same type as the one passed in the estimate argument.
Creates a new object with new_data and the identified_estimand
:param new_data: np.ndarray, pd.Series, pd.DataFrame
The newly assigned data on which the estimator should run
:param identified_estimand: IdentifiedEstimand
An instance of the identified estimand class that provides the information with
respect to which causal pathways are employed when the treatment affects the outcome
:param estimate: CausalEstimate
It is an already existing estimate whose properties we wish to replicate
:returns: An instance of the same estimator class that had generated the given estimate.
"""
estimator_class = estimate.params["estimator_class"]
new_estimator = estimator_class(
new_data,
identified_estimand,
identified_estimand.treatment_variable,
identified_estimand.outcome_variable,
# names of treatment and outcome
control_value=estimate.control_value,
treatment_value=estimate.treatment_value,
test_significance=False,
evaluate_effect_strength=False,
confidence_intervals=estimate.params["confidence_intervals"],
target_units=estimate.params["target_units"],
effect_modifiers=estimate.params["effect_modifiers"],
**estimate.params["method_params"] if estimate.params["method_params"] is not None else {},
)
return new_estimator
def _estimate_effect(self):
"""This method is to be overriden by the child classes, so that they can run the estimation technique of their choice"""
raise NotImplementedError(
("Main estimation method is " + CausalEstimator.DEFAULT_NOTIMPLEMENTEDERROR_MSG).format(self.__class__)
)
def estimate_effect(self):
"""Base estimation method that calls the estimate_effect method of its calling subclass.
Can optionally also test significance and estimate effect strength for any returned estimate.
:param self: object instance of class Estimator
:returns: A CausalEstimate instance that contains point estimates of average and conditional effects. Based on the parameters provided, it optionally includes confidence intervals, standard errors,statistical significance and other statistical parameters.
"""
est = self._estimate_effect()
est.add_estimator(self)
if self._significance_test:
self.test_significance(est.value, method=self._significance_test)
if self._confidence_intervals:
self.estimate_confidence_intervals(
est.value, confidence_level=self.confidence_level, method=self._confidence_intervals
)
if self._effect_strength_eval:
effect_strength_dict = self.evaluate_effect_strength(est)
est.add_effect_strength(effect_strength_dict)
return est
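# End-to-end sketch of this flow (hypothetical subclass and variables; the
# concrete estimators live in dowhy.causal_estimators):
#   >>> estimator = SomeEstimatorSubclass(df, estimand, ["treatment"], ["outcome"],
#   ...                                   test_significance="bootstrap")
#   >>> est = estimator.estimate_effect()  # runs _estimate_effect(), then optional tests
#   >>> est.value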
def estimate_effect_naive(self):
# TODO Only works for binary treatment
df_withtreatment = self._data.loc[self._data[self._treatment_name] == 1]
df_notreatment = self._data.loc[self._data[self._treatment_name] == 0]
est = np.mean(df_withtreatment[self._outcome_name]) - np.mean(df_notreatment[self._outcome_name])
return CausalEstimate(est, None, None, control_value=0, treatment_value=1)
def _estimate_effect_fn(self, data_df):
"""Function used in conditional effect estimation. This function is to be overridden by each child estimator.
The overridden function should take in a dataframe as input and return the estimate for that data.
"""
raise NotImplementedError(
("Conditional treatment effects are " + CausalEstimator.DEFAULT_NOTIMPLEMENTEDERROR_MSG).format(
self.__class__
)
)
def _estimate_conditional_effects(self, estimate_effect_fn, effect_modifier_names=None, num_quantiles=None):
"""Estimate conditional treatment effects. Common method for all estimators that utilizes a specific estimate_effect_fn implemented by each child estimator.
If a numeric effect modifier is provided, it is discretized into quantile bins. If you would like a custom discretization, you can do so yourself: create a new column containing the discretized effect modifier and then include that column's name in the effect_modifier_names argument.
:param estimate_effect_fn: Function that has a single parameter (a data frame) and returns the treatment effect estimate on that data.
:param effect_modifier_names: Names of effect modifier variables over which the conditional effects will be estimated. If not provided, defaults to the effect modifiers specified during creation of the CausalEstimator object.
:param num_quantiles: The number of quantiles into which a numeric effect modifier variable is discretized. Does not affect any categorical effect modifiers.
:returns: A (multi-index) dataframe that provides separate effects for each value of the (discretized) effect modifiers.
"""
# Defaulting to class default values if parameters are not provided
if effect_modifier_names is None:
effect_modifier_names = self._effect_modifier_names
if num_quantiles is None:
num_quantiles = self.num_quantiles_to_discretize_cont_cols
# Checking that there is at least one effect modifier
if not effect_modifier_names:
raise ValueError("At least one effect modifier should be specified to compute conditional effects.")
# Making sure that effect_modifier_names is a list
effect_modifier_names = parse_state(effect_modifier_names)
if not all(em in self._effect_modifier_names for em in effect_modifier_names):
self.logger.warning(
"At least one of the provided effect modifiers was not included while fitting the estimator. You may get incorrect results. To resolve, fit the estimator again by providing the updated effect modifiers in estimate_effect()."
)
# Making a copy since we are going to be changing effect modifier names
effect_modifier_names = effect_modifier_names.copy()
prefix = CausalEstimator.TEMP_CAT_COLUMN_PREFIX
# For every numeric effect modifier, adding a temp categorical column
for i in range(len(effect_modifier_names)):
em = effect_modifier_names[i]
if pd.api.types.is_numeric_dtype(self._data[em].dtypes):
self._data[prefix + str(em)] = pd.qcut(self._data[em], num_quantiles, duplicates="drop")
effect_modifier_names[i] = prefix + str(em)
# Grouping by effect modifiers and computing effect separately
by_effect_mods = self._data.groupby(effect_modifier_names)
cond_est_fn = lambda x: self._do(self._treatment_value, x) - self._do(self._control_value, x)
conditional_estimates = by_effect_mods.apply(estimate_effect_fn)
# Deleting the temporary categorical columns
for em in effect_modifier_names:
if em.startswith(prefix):
self._data.pop(em)
return conditional_estimates
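# The discretization step above uses pandas' qcut; the same idea standalone
# (toy data, illustrative names only):
#   >>> import numpy as np, pandas as pd
#   >>> em = pd.Series(np.random.normal(size=100))
#   >>> bins = pd.qcut(em, 5, duplicates="drop")  # quantile bins later used for groupby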
def _do(self, x, data_df=None):
raise NotImplementedError(
("Do-operator is " + CausalEstimator.DEFAULT_NOTIMPLEMENTEDERROR_MSG).format(self.__class__)
)
def do(self, x, data_df=None):
"""Method that implements the do-operator.
Given a value x for the treatment, returns the expected value of the outcome when the treatment is intervened upon and set to x.
:param x: Value of the treatment
:param data_df: Data on which the do-operator is to be applied.
:returns: Value of the outcome when treatment is intervened/set to x.
"""
est = self._do(x, data_df)
return est
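# Do-operator sketch (assumes an estimator subclass that implements _do and has
# been constructed with data):
#   >>> y1 = estimator.do(1)  # E[Y | do(T=1)]
#   >>> y0 = estimator.do(0)  # E[Y | do(T=0)]
#   >>> y1 - y0               # difference corresponds to the average treatment effect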
def construct_symbolic_estimator(self, estimand):
raise NotImplementedError(("Symbolic estimator string is ").format(self.__class__))
def _generate_bootstrap_estimates(self, num_bootstrap_simulations, sample_size_fraction):
"""Helper function to generate causal estimates over bootstrapped samples.
:param num_bootstrap_simulations: Number of simulations for the bootstrap method.
:param sample_size_fraction: Fraction of the dataset to be resampled.
:returns: A collections.namedtuple containing a list of bootstrapped estimates and a dictionary containing parameters used for the bootstrap.
"""
# The array that stores the results of all estimations
simulation_results = np.zeros(num_bootstrap_simulations)
# Find the sample size in proportion to the population size
sample_size = int(sample_size_fraction * len(self._data))
if sample_size > len(self._data):
self.logger.warning("WARN: The sample size is greater than the data being sampled")
self.logger.info("INFO: The sample size: {}".format(sample_size))
self.logger.info("INFO: The number of simulations: {}".format(num_bootstrap_simulations))
# Perform the set number of simulations
for index in range(num_bootstrap_simulations):
new_data = resample(self._data, n_samples=sample_size)
new_estimator = type(self)(
new_data,
self._target_estimand,
self._target_estimand.treatment_variable,
self._target_estimand.outcome_variable,
# names of treatment and outcome
treatment_value=self._treatment_value,
control_value=self._control_value,
test_significance=False,
evaluate_effect_strength=False,
confidence_intervals=False,
target_units=self._target_units,
effect_modifiers=self._effect_modifier_names,
**self.method_params,
)
new_effect = new_estimator.estimate_effect()
simulation_results[index] = new_effect.value
estimates = CausalEstimator.BootstrapEstimates(
simulation_results,
{"num_simulations": num_bootstrap_simulations, "sample_size_fraction": sample_size_fraction},
)
return estimates
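# The core resampling step above, in miniature (toy sketch; `df` and column
# "y" are placeholders, not part of this module):
#   >>> from sklearn.utils import resample
#   >>> boot = [resample(df, n_samples=len(df))["y"].mean() for _ in range(100)]
#   >>> # `boot` plays the role of BootstrapEstimates.estimates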
def _estimate_confidence_intervals_with_bootstrap(
self, estimate_value, confidence_level=None, num_simulations=None, sample_size_fraction=None
):
"""
Method to compute confidence interval using bootstrapped sampling.
:param estimate_value: obtained estimate's value
:param confidence_level: The level for which to compute CI (e.g., 95% confidence level translates to confidence_level=0.95)
:param num_simulations: The number of simulations to be performed to get the bootstrap confidence intervals.
:param sample_size_fraction: The fraction of the dataset to be resampled.
:returns: confidence interval at the specified level.
For more details on bootstrap or resampling statistics, refer to the following links:
https://ocw.mit.edu/courses/mathematics/18-05-introduction-to-probability-and-statistics-spring-2014/readings/MIT18_05S14_Reading24.pdf
https://projecteuclid.org/download/pdf_1/euclid.ss/1032280214
"""
# Using class default parameters if not specified
if num_simulations is None:
num_simulations = self.num_simulations
if sample_size_fraction is None:
sample_size_fraction = self.sample_size_fraction
# Checking if bootstrap_estimates are already computed
if self._bootstrap_estimates is None:
self._bootstrap_estimates = self._generate_bootstrap_estimates(num_simulations, sample_size_fraction)
elif CausalEstimator.is_bootstrap_parameter_changed(self._bootstrap_estimates.params, locals()):
# Checked if any parameter is changed from the previous std error estimate
self._bootstrap_estimates = self._generate_bootstrap_estimates(num_simulations, sample_size_fraction)
# Now use the data obtained from the simulations to get the value of the confidence estimates
bootstrap_estimates = self._bootstrap_estimates.estimates
# Get the variations of each bootstrap estimate and sort
bootstrap_variations = [bootstrap_estimate - estimate_value for bootstrap_estimate in bootstrap_estimates]
sorted_bootstrap_variations = np.sort(bootstrap_variations)
# Now we take the (1- p)th and the (p)th variations, where p is the chosen confidence level
upper_bound_index = int((1 - confidence_level) * len(sorted_bootstrap_variations))
lower_bound_index = int(confidence_level * len(sorted_bootstrap_variations))
# Get the lower and upper bounds by subtracting the variations from the estimate
lower_bound = estimate_value - sorted_bootstrap_variations[lower_bound_index]
upper_bound = estimate_value - sorted_bootstrap_variations[upper_bound_index]
return lower_bound, upper_bound
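# The same pivot-style interval on a plain array (illustrative; `boot` holds
# bootstrap estimates and `theta` the point estimate, both placeholders):
#   >>> import numpy as np
#   >>> deltas = np.sort(np.asarray(boot) - theta)  # bootstrap variations
#   >>> lo = theta - deltas[int(0.95 * len(deltas))]
#   >>> hi = theta - deltas[int(0.05 * len(deltas))]  # 95% interval (lo, hi)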
def _estimate_confidence_intervals(self, confidence_level=None, method=None, **kwargs):
"""
This method is to be overridden by the child classes, so that they
can run a confidence interval estimation method suited to the specific
causal estimator.
"""
raise NotImplementedError(
(
"This method for estimating confidence intervals is "
+ CausalEstimator.DEFAULT_NOTIMPLEMENTEDERROR_MSG
+ " Meanwhile, you can try the bootstrap method (method='bootstrap') to estimate confidence intervals."
).format(self.__class__)
)
def estimate_confidence_intervals(self, estimate_value, confidence_level=None, method=None, **kwargs):
"""Find the confidence intervals corresponding to any estimator
By default, this is done with the help of bootstrapped confidence intervals
but can be overridden if the specific estimator implements other methods of estimating confidence intervals.
If the method provided is not bootstrap, this function calls the implementation of the specific estimator.
:param estimate_value: obtained estimate's value
:param method: Method for estimating confidence intervals.
:param confidence_level: The confidence level of the confidence intervals of the estimate.
:param kwargs: Other optional args to be passed to the CI method.
:returns: The obtained confidence interval.
"""
if method is None:
if self._confidence_intervals:
method = self._confidence_intervals # this is either True or methodname
else:
method = "default"
confidence_intervals = None
if confidence_level is None:
confidence_level = self.confidence_level
if method == "default" or method is True: # user has not provided any method
try:
confidence_intervals = self._estimate_confidence_intervals(confidence_level, method=method, **kwargs)
except NotImplementedError:
confidence_intervals = self._estimate_confidence_intervals_with_bootstrap(
estimate_value, confidence_level, **kwargs
)
else:
if method == "bootstrap":
confidence_intervals = self._estimate_confidence_intervals_with_bootstrap(
estimate_value, confidence_level, **kwargs
)
else:
confidence_intervals = self._estimate_confidence_intervals(confidence_level, method=method, **kwargs)
return confidence_intervals
def _estimate_std_error_with_bootstrap(self, num_simulations=None, sample_size_fraction=None):
"""Compute standard error using the bootstrap method. Standard error
and confidence intervals use the same parameter num_simulations for
the number of bootstrap simulations.
:param num_simulations: Number of bootstrapped samples.
:param sample_size_fraction: Fraction of data to be resampled.
:returns: Standard error of the obtained estimate.
"""
# Use existing params, if new user defined params are not present
if num_simulations is None:
num_simulations = self.num_simulations
if sample_size_fraction is None:
sample_size_fraction = self.sample_size_fraction
# Checking if bootstrap_estimates are already computed
if self._bootstrap_estimates is None:
self._bootstrap_estimates = self._generate_bootstrap_estimates(num_simulations, sample_size_fraction)
elif CausalEstimator.is_bootstrap_parameter_changed(self._bootstrap_estimates.params, locals()):
# Check if any parameter is changed from the previous std error estimate
self._bootstrap_estimates = self._generate_bootstrap_estimates(num_simulations, sample_size_fraction)
std_error = np.std(self._bootstrap_estimates.estimates)
return std_error
def _estimate_std_error(self, method=None, **kwargs):
"""
This method is to be overridden by the child classes, so that they
can run a standard error estimation method suited to the specific
causal estimator.
"""
raise NotImplementedError(
(
"This method for estimating standard errors is "
+ CausalEstimator.DEFAULT_NOTIMPLEMENTEDERROR_MSG
+ " Meanwhile, you can try the bootstrap method (method='bootstrap') to estimate standard errors."
).format(self.__class__)
)
def estimate_std_error(self, method=None, **kwargs):
"""Compute standard error of an obtained causal estimate.
:param method: Method for computing the standard error.
:param kwargs: Other optional parameters to be passed to the estimating method.
:returns: Standard error of the causal estimate.
"""
if method is None:
if self._confidence_intervals:
method = self._confidence_intervals
else:
method = "default"
std_error = None
if method == "default" or method is True: # user has not provided any method
try:
std_error = self._estimate_std_error(method, **kwargs)
except NotImplementedError:
std_error = self._estimate_std_error_with_bootstrap(**kwargs)
else:
if method == "bootstrap":
std_error = self._estimate_std_error_with_bootstrap(**kwargs)
else:
std_error = self._estimate_std_error(method, **kwargs)
return std_error
def _test_significance_with_bootstrap(self, estimate_value, num_null_simulations=None):
"""Test statistical significance of an estimate using the bootstrap method.
:param estimate_value: Obtained estimate's value
:param num_null_simulations: Number of simulations for the null hypothesis
:returns: p-value of the statistical significance test.
"""
# Use existing params, if new user defined params are not present
if num_null_simulations is None:
num_null_simulations = self.num_null_simulations
do_retest = self._bootstrap_null_estimates is None or CausalEstimator.is_bootstrap_parameter_changed(
self._bootstrap_null_estimates.params, locals()
)
if do_retest:
null_estimates = np.zeros(num_null_simulations)
for i in range(num_null_simulations):
new_outcome = np.random.permutation(self._outcome)
new_data = self._data.assign(dummy_outcome=new_outcome)
# self._outcome = self._data["dummy_outcome"]
new_estimator = type(self)(
new_data,
self._target_estimand,
self._target_estimand.treatment_variable,
("dummy_outcome",),
test_significance=False,
evaluate_effect_strength=False,
confidence_intervals=False,
target_units=self._target_units,
effect_modifiers=self._effect_modifier_names,
**self.method_params,
)
new_effect = new_estimator.estimate_effect()
null_estimates[i] = new_effect.value
self._bootstrap_null_estimates = CausalEstimator.BootstrapEstimates(
null_estimates, {"num_null_simulations": num_null_simulations, "sample_size_fraction": 1}
)
# Processing the null hypothesis estimates
sorted_null_estimates = np.sort(self._bootstrap_null_estimates.estimates)
self.logger.debug("Null estimates: {0}".format(sorted_null_estimates))
median_estimate = sorted_null_estimates[int(num_null_simulations / 2)]
# Doing a two-sided test
if estimate_value > median_estimate:
# Being conservative with the p-value reported
estimate_index = np.searchsorted(sorted_null_estimates, estimate_value, side="left")
p_value = 1 - (estimate_index / num_null_simulations)
if estimate_value <= median_estimate:
# Being conservative with the p-value reported
estimate_index = np.searchsorted(sorted_null_estimates, estimate_value, side="right")
p_value = estimate_index / num_null_simulations
# If the estimate_index is 0, it depends on the number of simulations
if p_value == 0:
p_value = (0, 1 / len(sorted_null_estimates)) # a tuple determining the range.
elif p_value == 1:
p_value = (1 - 1 / len(sorted_null_estimates), 1)
signif_dict = {"p_value": p_value}
return signif_dict
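# Permutation p-value sketch mirroring the logic above (toy variables:
# `null_estimates` would come from refitting on permuted outcomes):
#   >>> import numpy as np
#   >>> null_sorted = np.sort(null_estimates)
#   >>> idx = np.searchsorted(null_sorted, observed_value, side="left")
#   >>> p_upper = 1 - idx / len(null_sorted)  # fraction of null draws >= observed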
def _test_significance(self, estimate_value, method=None, **kwargs):
"""
This method is to be overridden by the child classes, so that they
can run a significance test suited to the specific
causal estimator.
"""
raise NotImplementedError(
(
"This method for testing statistical significance is "
+ CausalEstimator.DEFAULT_NOTIMPLEMENTEDERROR_MSG
+ " Meanwhile, you can try the bootstrap method (method='bootstrap') to test statistical significance."
).format(self.__class__)
)
def test_significance(self, estimate_value, method=None, **kwargs):
"""Test statistical significance of obtained estimate.
By default, uses resampling to create a non-parametric significance test.
A general procedure. Individual child estimators can implement different methods.
If the method name is different from "bootstrap", this function calls the
implementation of the child estimator.
:param self: object instance of class Estimator
:param estimate_value: obtained estimate's value
:param method: Method for checking statistical significance
:returns: p-value from the significance test
"""
if method is None:
if self._significance_test:
method = self._significance_test # this is either True or methodname
else:
method = "default"
signif_dict = None
if method == "default" or method is True: # user has not provided any method
try:
signif_dict = self._test_significance(estimate_value, method, **kwargs)
except NotImplementedError:
signif_dict = self._test_significance_with_bootstrap(estimate_value, **kwargs)
else:
if method == "bootstrap":
signif_dict = self._test_significance_with_bootstrap(estimate_value, **kwargs)
else:
signif_dict = self._test_significance(estimate_value, method, **kwargs)
return signif_dict
def evaluate_effect_strength(self, estimate):
fraction_effect_explained = self._evaluate_effect_strength(estimate, method="fraction-effect")
# Need to test r-squared before supporting
# effect_r_squared = self._evaluate_effect_strength(estimate, method="r-squared")
strength_dict = {
"fraction-effect": fraction_effect_explained
# 'r-squared': effect_r_squared
}
return strength_dict
def _evaluate_effect_strength(self, estimate, method="fraction-effect"):
supported_methods = ["fraction-effect"]
if method not in supported_methods:
raise NotImplementedError("This method is not supported for evaluating effect strength")
if method == "fraction-effect":
naive_obs_estimate = self.estimate_effect_naive()
self.logger.debug(estimate.value, naive_obs_estimate.value)
fraction_effect_explained = estimate.value / naive_obs_estimate.value
return fraction_effect_explained
# elif method == "r-squared":
# outcome_mean = np.mean(self._outcome)
# total_variance = np.sum(np.square(self._outcome - outcome_mean))
# Assuming a linear model with one variable: the treatment
# Currently only works for continuous y
# causal_model = outcome_mean + estimate.value*self._treatment
# squared_residual = np.sum(np.square(self._outcome - causal_model))
# r_squared = 1 - (squared_residual/total_variance)
# return r_squared
else:
return None
def update_input(self, treatment_value, control_value, target_units):
self._control_value = control_value
self._treatment_value = treatment_value
self._target_units = target_units
@staticmethod
def is_bootstrap_parameter_changed(bootstrap_estimates_params, given_params):
"""Check whether parameters of the bootstrap have changed.
This is an efficiency method that checks if fresh resampling of the bootstrap samples is required.
Returns True if parameters have changed and resampling should be done again.
:param bootstrap_estimates_params: A dictionary of parameters for the current bootstrap samples
:param given_params: A dictionary of parameters passed by the user
:returns: A binary flag denoting whether the parameters are different.
"""
is_any_parameter_changed = False
for prm, val in bootstrap_estimates_params.items():
given_val = given_params.get(prm, None)
if given_val is not None and given_val != val:
is_any_parameter_changed = True
break
return is_any_parameter_changed
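# Caching-check sketch (hypothetical parameter values): bootstrap samples are
# regenerated only when a parameter actually changes.
#   >>> CausalEstimator.is_bootstrap_parameter_changed(
#   ...     {"num_simulations": 100}, {"num_simulations": 200})
#   True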
def target_units_tostr(self):
s = ""
if type(self._target_units) is str:
s += self._target_units
elif callable(self._target_units):
s += "Data subset defined by a function"
elif isinstance(self._target_units, pd.DataFrame):
s += "Data subset provided as a data frame"
return s
def signif_results_tostr(self, signif_results):
s = ""
pval = signif_results["p_value"]
if type(pval) is tuple:
s += "[{0}, {1}]".format(pval[0], pval[1])
else:
s += "{0}".format(pval)
return s
def estimate_effect(
treatment: Union[str, List[str]],
outcome: Union[str, List[str]],
identified_estimand: IdentifiedEstimand,
identifier_name: str,
method: CausalEstimator,
control_value: int = 0,
treatment_value: int = 1,
test_significance: Optional[bool] = None,
evaluate_effect_strength: bool = False,
confidence_intervals: bool = False,
target_units: str = "ate",
effect_modifiers: List[str] = [],
fit_estimator: bool = True,
method_params: Optional[Dict] = None,
):
"""Estimate the identified causal effect.
Currently requires an explicit estimation method to be specified. Method names follow the convention of identification method followed by the specific estimation method: "[backdoor/iv].estimation_method_name". The following methods are supported.
* Propensity Score Matching: "backdoor.propensity_score_matching"
* Propensity Score Stratification: "backdoor.propensity_score_stratification"
* Propensity Score-based Inverse Weighting: "backdoor.propensity_score_weighting"
* Linear Regression: "backdoor.linear_regression"
* Generalized Linear Models (e.g., logistic regression): "backdoor.generalized_linear_model"
* Instrumental Variables: "iv.instrumental_variable"
* Regression Discontinuity: "iv.regression_discontinuity"
In addition, you can directly call any of the EconML estimation methods. The convention is "backdoor.econml.path-to-estimator-class". For example, for the double machine learning estimator ("DML" class) that is located inside "dml" module of EconML, you can use the method name, "backdoor.econml.dml.DML". CausalML estimators can also be called. See `this demo notebook <https://py-why.github.io/dowhy/example_notebooks/dowhy-conditional-treatment-effects.html>`_.
:param treatment: Name of the treatment
:param outcome: Name of the outcome
:param identified_estimand: a probability expression
that represents the effect to be estimated. Output of
CausalModel.identify_effect method
:param identifier_name: name of the identification method (e.g., "backdoor" or "iv") whose estimand should be used from the identified_estimand.
:param method: an instance of a CausalEstimator subclass that implements the chosen estimation method.
:param control_value: Value of the treatment in the control group, for effect estimation. If treatment is multi-variate, this can be a list.
:param treatment_value: Value of the treatment in the treated group, for effect estimation. If treatment is multi-variate, this can be a list.
:param test_significance: Binary flag on whether to additionally do a statistical significance test for the estimate.
:param evaluate_effect_strength: (Experimental) Binary flag on whether to estimate the relative strength of the treatment's effect. This measure can be used to compare different treatments for the same outcome (by running this method with different treatments sequentially).
:param confidence_intervals: (Experimental) Binary flag indicating whether confidence intervals should be computed.
:param target_units: (Experimental) The units for which the treatment effect should be estimated. This can be of three types. (1) a string for common specifications of target units (namely, "ate", "att" and "atc"), (2) a lambda function that can be used as an index for the data (pandas DataFrame), or (3) a new DataFrame that contains values of the effect_modifiers and effect will be estimated only for this new data.
:param effect_modifiers: Names of effect modifier variables can be (optionally) specified here too, since they do not affect identification. If None, the effect_modifiers from the CausalModel are used.
:param fit_estimator: Boolean flag on whether to fit the estimator.
Setting it to False is useful to estimate the effect on new data using a previously fitted estimator.
:param method_params: Dictionary containing any method-specific parameters. These are passed directly to the estimating method. See the docs for each estimation method for allowed method-specific params.
:returns: An instance of the CausalEstimate class, containing the causal effect estimate
and other method-dependent information
"""
treatment = parse_state(treatment)
outcome = parse_state(outcome)
causal_estimator_class = method.__class__
identified_estimand.set_identifier_method(identifier_name)
if identified_estimand.no_directed_path:
logger.warning("No directed path from {0} to {1}.".format(treatment, outcome))
return CausalEstimate(
0, identified_estimand, None, control_value=control_value, treatment_value=treatment_value
)
# Check if estimator's target estimand is identified
elif identified_estimand.estimands[identifier_name] is None:
logger.error("No valid identified estimand available.")
return CausalEstimate(None, None, None, control_value=control_value, treatment_value=treatment_value)
method.update_input(treatment_value, control_value, target_units)
estimate = method.estimate_effect()
# Store parameters inside estimate object for refutation methods
# TODO: This add_params needs to move to the estimator class
# inside estimate_effect and estimate_conditional_effect
estimate.add_params(
estimand_type=identified_estimand.estimand_type,
estimator_class=causal_estimator_class,
test_significance=test_significance,
evaluate_effect_strength=evaluate_effect_strength,
confidence_intervals=confidence_intervals,
target_units=target_units,
effect_modifiers=effect_modifiers,
method_params=method_params,
)
return estimate
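# Sketch of the functional API above (hypothetical variables: `estimand` is the
# output of an identify_effect call, `estimator` a CausalEstimator instance):
#   >>> est = estimate_effect(["treatment"], ["outcome"], estimand,
#   ...                       identifier_name="backdoor", method=estimator)
#   >>> est.value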
class CausalEstimate:
"""Class for the estimate object that every causal estimator returns"""
def __init__(
self,
estimate,
target_estimand,
realized_estimand_expr,
control_value,
treatment_value,
conditional_estimates=None,
**kwargs,
):
self.value = estimate
self.target_estimand = target_estimand
self.realized_estimand_expr = realized_estimand_expr
self.control_value = control_value
self.treatment_value = treatment_value
self.conditional_estimates = conditional_estimates
self.params = kwargs
if self.params is not None:
for key, value in self.params.items():
setattr(self, key, value)
self.effect_strength = None
def add_estimator(self, estimator_instance):
self.estimator = estimator_instance
def add_effect_strength(self, strength_dict):
self.effect_strength = strength_dict
def add_params(self, **kwargs):
self.params.update(kwargs)
def get_confidence_intervals(self, confidence_level=None, method=None, **kwargs):
"""Get confidence intervals of the obtained estimate.
By default, this is done with the help of bootstrapped confidence intervals
but can be overridden if the specific estimator implements other methods of estimating confidence intervals.
If the method provided is not bootstrap, this function calls the implementation of the specific estimator.
:param method: Method for estimating confidence intervals.
:param confidence_level: The confidence level of the confidence intervals of the estimate.
:param kwargs: Other optional args to be passed to the CI method.
:returns: The obtained confidence interval.
"""
confidence_intervals = self.estimator.estimate_confidence_intervals(
estimate_value=self.value, confidence_level=confidence_level, method=method, **kwargs
)
return confidence_intervals
def get_standard_error(self, method=None, **kwargs):
"""Get standard error of the obtained estimate.
By default, this is done with the help of bootstrapped standard errors
but can be overridden if the specific estimator implements other methods of estimating standard error.
If the method provided is not bootstrap, this function calls the implementation of the specific estimator.
:param method: Method for computing the standard error.
:param kwargs: Other optional parameters to be passed to the estimating method.
:returns: Standard error of the causal estimate.
"""
std_error = self.estimator.estimate_std_error(method=method, **kwargs)
return std_error
def test_stat_significance(self, method=None, **kwargs):
"""Test statistical significance of the estimate obtained.
By default, uses resampling to create a non-parametric significance test.
Individual child estimators can implement different methods.
If the method name is different from "bootstrap", this function calls the
implementation of the child estimator.
:param method: Method for checking statistical significance
:param kwargs: Other optional parameters to be passed to the estimating method.
:returns: p-value from the significance test
"""
signif_results = self.estimator.test_significance(self.value, method=method, **kwargs)
return {"p_value": signif_results["p_value"]}
def estimate_conditional_effects(
self, effect_modifiers=None, num_quantiles=CausalEstimator.NUM_QUANTILES_TO_DISCRETIZE_CONT_COLS
):
"""Estimate treatment effect conditioned on given variables.
If a numeric effect modifier is provided, it is discretized into quantile bins. If you would like a custom discretization, you can do so yourself: create a new column containing the discretized effect modifier and then include that column's name in the effect_modifier_names argument.
:param effect_modifiers: Names of effect modifier variables over which the conditional effects will be estimated. If not provided, defaults to the effect modifiers specified during creation of the CausalEstimator object.
:param num_quantiles: The number of quantiles into which a numeric effect modifier variable is discretized. Does not affect any categorical effect modifiers.
:returns: A (multi-index) dataframe that provides separate effects for each value of the (discretized) effect modifiers.
"""
return self.estimator._estimate_conditional_effects(
self.estimator._estimate_effect_fn, effect_modifiers, num_quantiles
)
def interpret(self, method_name=None, **kwargs):
"""Interpret the causal estimate.
:param method_name: Method used (string) or a list of methods. If None, then the default for the specific estimator is used.
:param kwargs: Optional parameters that are directly passed to the interpreter method.
:returns: None
"""
if method_name is None:
method_name = self.estimator.interpret_method
method_name_arr = parse_state(method_name)
for method in method_name_arr:
interpreter = interpreters.get_class_object(method)
interpreter(self, **kwargs).interpret()
def __str__(self):
s = "*** Causal Estimate ***\n"
# No estimand was identified (identification failed)
if self.target_estimand is None:
return "Estimation failed! No relevant identified estimand available for this estimation method."
s += "\n## Identified estimand\n{0}".format(self.target_estimand.__str__(only_target_estimand=True))
s += "\n## Realized estimand\n{0}".format(self.realized_estimand_expr)
if hasattr(self, "estimator"):
s += "\nTarget units: {0}\n".format(self.estimator.target_units_tostr())
s += "\n## Estimate\n"
s += "Mean value: {0}\n".format(self.value)
s += ""
if hasattr(self, "cate_estimates"):
s += "Effect estimates: {0}\n".format(self.cate_estimates)
if hasattr(self, "estimator"):
if self.estimator._significance_test:
s += "p-value: {0}\n".format(self.estimator.signif_results_tostr(self.test_stat_significance()))
if self.estimator._confidence_intervals:
s += "{0}% confidence interval: {1}\n".format(
100 * self.estimator.confidence_level, self.get_confidence_intervals()
)
if self.conditional_estimates is not None:
s += "### Conditional Estimates\n"
s += str(self.conditional_estimates)
if self.effect_strength is not None:
s += "\n## Effect Strength\n"
s += "Change in outcome attributable to treatment: {}\n".format(self.effect_strength["fraction-effect"])
# s += "Variance in outcome explained by treatment: {}\n".format(self.effect_strength["r-squared"])
return s
class RealizedEstimand(object):
def __init__(self, identified_estimand, estimator_name):
self.treatment_variable = identified_estimand.treatment_variable
self.outcome_variable = identified_estimand.outcome_variable
self.backdoor_variables = identified_estimand.get_backdoor_variables()
self.instrumental_variables = identified_estimand.instrumental_variables
self.estimand_type = identified_estimand.estimand_type
self.estimand_expression = None
self.assumptions = None
self.estimator_name = estimator_name
def update_assumptions(self, estimator_assumptions):
self.assumptions = estimator_assumptions
def update_estimand_expression(self, estimand_expression):
self.estimand_expression = estimand_expression
def __str__(self):
s = "Realized estimand: {0}\n".format(self.estimator_name)
s += "Realized estimand type: {0}\n".format(self.estimand_type)
s += "Estimand expression:\n{0}\n".format(sp.pretty(self.estimand_expression))
j = 1
for ass_name, ass_str in self.assumptions.items():
s += "Estimand assumption {0}, {1}: {2}\n".format(j, ass_name, ass_str)
j += 1
return s
| import copy
import logging
from collections import namedtuple
from typing import Dict, List, Optional, Union
import numpy as np
import pandas as pd
import sympy as sp
from sklearn.utils import resample
import dowhy.interpreters as interpreters
from dowhy.causal_identifier.identified_estimand import IdentifiedEstimand
from dowhy.utils.api import parse_state
logger = logging.getLogger(__name__)
class CausalEstimator:
"""Base class for an estimator of causal effect.
Subclasses implement different estimation methods. All estimation methods are in the package "dowhy.causal_estimators"
"""
# The default number of simulations for statistical testing
DEFAULT_NUMBER_OF_SIMULATIONS_STAT_TEST = 1000
# The default number of simulations to obtain confidence intervals
DEFAULT_NUMBER_OF_SIMULATIONS_CI = 100
# The portion of the total size that should be taken each time to find the confidence intervals
# 1 is the recommended value
# https://ocw.mit.edu/courses/mathematics/18-05-introduction-to-probability-and-statistics-spring-2014/readings/MIT18_05S14_Reading24.pdf
# https://projecteuclid.org/download/pdf_1/euclid.ss/1032280214
DEFAULT_SAMPLE_SIZE_FRACTION = 1
# The default Confidence Level
DEFAULT_CONFIDENCE_LEVEL = 0.95
# Number of quantiles to discretize continuous columns, for applying groupby
NUM_QUANTILES_TO_DISCRETIZE_CONT_COLS = 5
# Prefix to add to temporary categorical variables created after discretization
TEMP_CAT_COLUMN_PREFIX = "__categorical__"
DEFAULT_NOTIMPLEMENTEDERROR_MSG = "not yet implemented for {0}. If you would like this to be implemented in the next version, please raise an issue at https://github.com/microsoft/dowhy/issues"
BootstrapEstimates = namedtuple("BootstrapEstimates", ["estimates", "params"])
DEFAULT_INTERPRET_METHOD = ["textual_effect_interpreter"]
# std args to be removed from locals() before being passed to args_dict
_STD_INIT_ARGS = ("self", "__class__", "args", "kwargs")
def __init__(
self,
identified_estimand: IdentifiedEstimand,
test_significance: bool = False,
evaluate_effect_strength: bool = False,
confidence_intervals: bool = False,
num_null_simulations: int = DEFAULT_NUMBER_OF_SIMULATIONS_STAT_TEST,
num_simulations: int = DEFAULT_NUMBER_OF_SIMULATIONS_CI,
sample_size_fraction: int = DEFAULT_SAMPLE_SIZE_FRACTION,
confidence_level: float = DEFAULT_CONFIDENCE_LEVEL,
need_conditional_estimates: Union[bool, str] = "auto",
num_quantiles_to_discretize_cont_cols: int = NUM_QUANTILES_TO_DISCRETIZE_CONT_COLS,
**_,
):
"""Initializes an estimator with data and names of relevant variables.
This method is called from the constructors of its child classes.
:param identified_estimand: probability expression
representing the target identified estimand to estimate.
:param test_significance: Binary flag or a string indicating whether to test significance and by which method. All estimators support test_significance="bootstrap" that estimates a p-value for the obtained estimate using the bootstrap method. Individual estimators can override this to support custom testing methods. The bootstrap method supports an optional parameter, num_null_simulations. If False, no testing is done. If True, significance of the estimate is tested using the custom method if available, otherwise by bootstrap.
:param evaluate_effect_strength: (Experimental) whether to evaluate the strength of effect
:param confidence_intervals: Binary flag or a string indicating whether the confidence intervals should be computed and which method should be used. All methods support estimation of confidence intervals using the bootstrap method by using the parameter confidence_intervals="bootstrap". The bootstrap method takes in two arguments (num_simulations and sample_size_fraction) that can be optionally specified in the params dictionary. Estimators may also override this to implement their own confidence interval method. If this parameter is False, no confidence intervals are computed. If True, confidence intervals are computed by the estimator's specific method if available, otherwise through bootstrap
:param num_null_simulations: The number of simulations for testing the
statistical significance of the estimator
:param num_simulations: The number of simulations for finding the
confidence interval (and/or standard error) for an estimate
:param sample_size_fraction: The size of the sample for the bootstrap
estimator
:param confidence_level: The confidence level of the confidence
interval estimate
:param need_conditional_estimates: Boolean flag indicating whether
conditional estimates should be computed. Defaults to True if
there are effect modifiers in the graph
:param num_quantiles_to_discretize_cont_cols: The number of quantiles
into which a numeric effect modifier is split, to enable
estimation of conditional treatment effect over it.
:param kwargs: (optional) Additional estimator-specific parameters
:returns: an instance of the estimator class.
"""
self._target_estimand = identified_estimand
self._significance_test = test_significance
self._effect_strength_eval = evaluate_effect_strength
self._confidence_intervals = confidence_intervals
# Setting the default interpret method
self.interpret_method = CausalEstimator.DEFAULT_INTERPRET_METHOD
self.logger = logging.getLogger(__name__)
# Check if some parameters were set, otherwise set to default values
self.num_null_simulations = num_null_simulations
self.num_simulations = num_simulations
self.sample_size_fraction = sample_size_fraction
self.confidence_level = confidence_level
self.num_quantiles_to_discretize_cont_cols = num_quantiles_to_discretize_cont_cols
# Estimate conditional estimates by default
self.need_conditional_estimates = need_conditional_estimates
self._bootstrap_estimates = None
self._bootstrap_null_estimates = None
def _set_data(self, data: pd.DataFrame, treatment_name: List[str], outcome_name: List[str]):
"""Sets the data for the estimator
:param data: data frame containing the data
:param treatment_name: name of the treatment variable
:param outcome_name: name of the outcome variable
"""
self._data = data
self._treatment_name = treatment_name
self._outcome_name = outcome_name[0]
self._treatment = self._data[self._treatment_name]
self._outcome = self._data[self._outcome_name]
def _set_effect_modifiers(self, effect_modifier_names: Optional[List[str]] = None):
"""Sets the effect modifiers for the estimator
Also updates need_conditional_estimates according to whether effect modifiers are present.
:param effect_modifier_names: Names of variables on which to compute separate
effects, or return a heterogeneous effect function. Not all
methods support this currently.
"""
self._effect_modifiers = effect_modifier_names
if effect_modifier_names is not None:
self._effect_modifier_names = [cname for cname in effect_modifier_names if cname in self._data.columns]
if len(self._effect_modifier_names) > 0:
self._effect_modifiers = self._data[self._effect_modifier_names]
self._effect_modifiers = pd.get_dummies(self._effect_modifiers, drop_first=True)
self.logger.debug("Effect modifiers: " + ",".join(self._effect_modifier_names))
else:
self._effect_modifier_names = []
else:
self._effect_modifier_names = []
self.need_conditional_estimates = (
self.need_conditional_estimates
if self.need_conditional_estimates != "auto"
else (self._effect_modifier_names and len(self._effect_modifier_names) > 0)
)
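# Internal wiring sketch for the refactored API (hypothetical estimator object;
# per this PR, a child estimator's fit() is expected to invoke these setters):
#   >>> estimator._set_data(df, ["treatment"], ["outcome"])
#   >>> estimator._set_effect_modifiers(["X0"])  # "X0" is a placeholder column name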
def _set_identified_estimand(self, new_identified_estimand):
"""Method used internally to change the target estimand (required by some refuters)
:param new_identified_estimand: The new target_estimand to use
"""
self._target_estimand = new_identified_estimand
def get_new_estimator_object(
self,
identified_estimand,
test_significance=False,
evaluate_effect_strength=False,
confidence_intervals=None,
):
"""Create a new estimator of the same type as the one passed in the estimate argument.
Creates a new object with the identified_estimand
:param identified_estimand: IdentifiedEstimand
An instance of the identified estimand class that provides the information with
respect to which causal pathways are employed when the treatment effects the outcome
:returns: A new instance of the same estimator class that had generated the given estimate.
"""
new_estimator = copy.deepcopy(self)
new_estimator._target_estimand = identified_estimand
new_estimator._test_significance = test_significance
new_estimator._evaluate_effect_strength = evaluate_effect_strength
new_estimator._confidence_intervals = (
self._confidence_intervals if confidence_intervals is None else confidence_intervals
)
return new_estimator
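# Usage sketch (e.g., from a refuter swapping in a modified estimand;
# `modified_estimand` is a placeholder):
#   >>> refit = estimator.get_new_estimator_object(modified_estimand)
#   >>> # the deep copy keeps fitted settings; only the target estimand changes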
def estimate_effect_naive(self):
# TODO Only works for binary treatment
df_withtreatment = self._data.loc[self._data[self._treatment_name] == 1]
df_notreatment = self._data.loc[self._data[self._treatment_name] == 0]
est = np.mean(df_withtreatment[self._outcome_name]) - np.mean(df_notreatment[self._outcome_name])
return CausalEstimate(est, None, None, control_value=0, treatment_value=1)
def _estimate_effect_fn(self, data_df):
"""Function used in conditional effect estimation. This function is to be overridden by each child estimator.
The overridden function should take in a dataframe as input and return the estimate for that data.
"""
raise NotImplementedError(
("Conditional treatment effects are " + CausalEstimator.DEFAULT_NOTIMPLEMENTEDERROR_MSG).format(
self.__class__
)
)
def _estimate_conditional_effects(self, estimate_effect_fn, effect_modifier_names=None, num_quantiles=None):
"""Estimate conditional treatment effects. Common method for all estimators that utilizes a specific estimate_effect_fn implemented by each child estimator.
If a numeric effect modifier is provided, it is discretized into quantile bins. If you would like a custom discretization, you can do so yourself: create a new column containing the discretized effect modifier and then include that column's name in the effect_modifier_names argument.
:param estimate_effect_fn: Function that has a single parameter (a data frame) and returns the treatment effect estimate on that data.
:param effect_modifier_names: Names of effect modifier variables over which the conditional effects will be estimated. If not provided, defaults to the effect modifiers specified during creation of the CausalEstimator object.
:param num_quantiles: The number of quantiles into which a numeric effect modifier variable is discretized. Does not affect any categorical effect modifiers.
:returns: A (multi-index) dataframe that provides separate effects for each value of the (discretized) effect modifiers.
"""
# Defaulting to class default values if parameters are not provided
if effect_modifier_names is None:
effect_modifier_names = self._effect_modifier_names
if num_quantiles is None:
num_quantiles = self.num_quantiles_to_discretize_cont_cols
# Checking that there is at least one effect modifier
if not effect_modifier_names:
raise ValueError("At least one effect modifier should be specified to compute conditional effects.")
# Making sure that effect_modifier_names is a list
effect_modifier_names = parse_state(effect_modifier_names)
        if not all(em in self._effect_modifier_names for em in effect_modifier_names):
            self.logger.warning(
                "At least one of the provided effect modifiers was not included while fitting the estimator. You may get incorrect results. To resolve, fit the estimator again by providing the updated effect modifiers in estimate_effect()."
            )
# Making a copy since we are going to be changing effect modifier names
effect_modifier_names = effect_modifier_names.copy()
prefix = CausalEstimator.TEMP_CAT_COLUMN_PREFIX
# For every numeric effect modifier, adding a temp categorical column
for i in range(len(effect_modifier_names)):
em = effect_modifier_names[i]
if pd.api.types.is_numeric_dtype(self._data[em].dtypes):
self._data[prefix + str(em)] = pd.qcut(self._data[em], num_quantiles, duplicates="drop")
effect_modifier_names[i] = prefix + str(em)
        # Grouping by effect modifiers and computing the effect separately for each group
        by_effect_mods = self._data.groupby(effect_modifier_names)
        conditional_estimates = by_effect_mods.apply(estimate_effect_fn)
# Deleting the temporary categorical columns
for em in effect_modifier_names:
if em.startswith(prefix):
self._data.pop(em)
return conditional_estimates
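    # Example of the discretization above (a sketch; "age" is a hypothetical
    # numeric effect modifier): pd.qcut(self._data["age"], 5, duplicates="drop")
    # buckets the column into quantile bins, so the subsequent groupby yields
    # one effect estimate per bin.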
def _do(self, x, data_df=None):
raise NotImplementedError(
("Do-operator is " + CausalEstimator.DEFAULT_NOTIMPLEMENTEDERROR_MSG).format(self.__class__)
)
def do(self, x, data_df=None):
"""Method that implements the do-operator.
Given a value x for the treatment, returns the expected value of the outcome when the treatment is intervened to a value x.
:param x: Value of the treatment
:param data_df: Data on which the do-operator is to be applied.
:returns: Value of the outcome when treatment is intervened/set to x.
"""
est = self._do(x, data_df)
return est
def construct_symbolic_estimator(self, estimand):
raise NotImplementedError(("Symbolic estimator string is ").format(self.__class__))
def _generate_bootstrap_estimates(self, num_bootstrap_simulations, sample_size_fraction):
"""Helper function to generate causal estimates over bootstrapped samples.
:param num_bootstrap_simulations: Number of simulations for the bootstrap method.
:param sample_size_fraction: Fraction of the dataset to be resampled.
:returns: A collections.namedtuple containing a list of bootstrapped estimates and a dictionary containing parameters used for the bootstrap.
"""
# The array that stores the results of all estimations
simulation_results = np.zeros(num_bootstrap_simulations)
        # Find the sample size as a fraction of the population (data) size
sample_size = int(sample_size_fraction * len(self._data))
if sample_size > len(self._data):
self.logger.warning("WARN: The sample size is greater than the data being sampled")
self.logger.info("INFO: The sample size: {}".format(sample_size))
self.logger.info("INFO: The number of simulations: {}".format(num_bootstrap_simulations))
# Perform the set number of simulations
for index in range(num_bootstrap_simulations):
new_data = resample(self._data, n_samples=sample_size)
            new_estimator = self.get_new_estimator_object(
                self._target_estimand,
                test_significance=False,
                evaluate_effect_strength=False,
                confidence_intervals=False,
            )
new_estimator.fit(
new_data,
self._target_estimand.treatment_variable,
self._target_estimand.outcome_variable,
effect_modifier_names=self._effect_modifier_names,
)
new_effect = new_estimator.estimate_effect(
treatment_value=self._treatment_value,
control_value=self._control_value,
target_units=self._target_units,
)
simulation_results[index] = new_effect.value
estimates = CausalEstimator.BootstrapEstimates(
simulation_results,
{"num_simulations": num_bootstrap_simulations, "sample_size_fraction": sample_size_fraction},
)
return estimates
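    # The returned namedtuple can be inspected directly, e.g. (illustrative):
    #   boot = self._generate_bootstrap_estimates(num_bootstrap_simulations=100, sample_size_fraction=1)
    #   boot.estimates.mean(), np.percentile(boot.estimates, [2.5, 97.5])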
def _estimate_confidence_intervals_with_bootstrap(
self, estimate_value, confidence_level=None, num_simulations=None, sample_size_fraction=None
):
"""
Method to compute confidence interval using bootstrapped sampling.
:param estimate_value: obtained estimate's value
:param confidence_level: The level for which to compute CI (e.g., 95% confidence level translates to confidence_level=0.95)
:param num_simulations: The number of simulations to be performed to get the bootstrap confidence intervals.
:param sample_size_fraction: The fraction of the dataset to be resampled.
:returns: confidence interval at the specified level.
For more details on bootstrap or resampling statistics, refer to the following links:
https://ocw.mit.edu/courses/mathematics/18-05-introduction-to-probability-and-statistics-spring-2014/readings/MIT18_05S14_Reading24.pdf
https://projecteuclid.org/download/pdf_1/euclid.ss/1032280214
"""
# Using class default parameters if not specified
if num_simulations is None:
num_simulations = self.num_simulations
if sample_size_fraction is None:
sample_size_fraction = self.sample_size_fraction
# Checking if bootstrap_estimates are already computed
if self._bootstrap_estimates is None:
self._bootstrap_estimates = self._generate_bootstrap_estimates(num_simulations, sample_size_fraction)
        elif CausalEstimator.is_bootstrap_parameter_changed(self._bootstrap_estimates.params, locals()):
            # Regenerate if any parameter changed since the previous bootstrap run
            self._bootstrap_estimates = self._generate_bootstrap_estimates(num_simulations, sample_size_fraction)
# Now use the data obtained from the simulations to get the value of the confidence estimates
bootstrap_estimates = self._bootstrap_estimates.estimates
# Get the variations of each bootstrap estimate and sort
bootstrap_variations = [bootstrap_estimate - estimate_value for bootstrap_estimate in bootstrap_estimates]
sorted_bootstrap_variations = np.sort(bootstrap_variations)
# Now we take the (1- p)th and the (p)th variations, where p is the chosen confidence level
upper_bound_index = int((1 - confidence_level) * len(sorted_bootstrap_variations))
lower_bound_index = int(confidence_level * len(sorted_bootstrap_variations))
# Get the lower and upper bounds by subtracting the variations from the estimate
lower_bound = estimate_value - sorted_bootstrap_variations[lower_bound_index]
upper_bound = estimate_value - sorted_bootstrap_variations[upper_bound_index]
return lower_bound, upper_bound
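    # Worked example for the index arithmetic above, assuming confidence_level=0.95
    # and 100 bootstrap variations: upper_bound_index = int(0.05 * 100) = 5 and
    # lower_bound_index = int(0.95 * 100) = 95, so the bounds are formed from the
    # sorted variations at ranks 5 and 95, i.e., the two tails of the distribution.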
def _estimate_confidence_intervals(self, confidence_level=None, method=None, **kwargs):
"""
This method is to be overriden by the child classes, so that they
can run a confidence interval estimation method suited to the specific
causal estimator.
"""
raise NotImplementedError(
(
"This method for estimating confidence intervals is "
+ CausalEstimator.DEFAULT_NOTIMPLEMENTEDERROR_MSG
+ " Meanwhile, you can try the bootstrap method (method='bootstrap') to estimate confidence intervals."
).format(self.__class__)
)
def estimate_confidence_intervals(self, estimate_value, confidence_level=None, method=None, **kwargs):
"""Find the confidence intervals corresponding to any estimator
By default, this is done with the help of bootstrapped confidence intervals
but can be overridden if the specific estimator implements other methods of estimating confidence intervals.
If the method provided is not bootstrap, this function calls the implementation of the specific estimator.
:param estimate_value: obtained estimate's value
:param method: Method for estimating confidence intervals.
:param confidence_level: The confidence level of the confidence intervals of the estimate.
:param kwargs: Other optional args to be passed to the CI method.
:returns: The obtained confidence interval.
"""
if method is None:
if self._confidence_intervals:
method = self._confidence_intervals # this is either True or methodname
else:
method = "default"
confidence_intervals = None
if confidence_level is None:
confidence_level = self.confidence_level
if method == "default" or method is True: # user has not provided any method
try:
confidence_intervals = self._estimate_confidence_intervals(confidence_level, method=method, **kwargs)
except NotImplementedError:
confidence_intervals = self._estimate_confidence_intervals_with_bootstrap(
estimate_value, confidence_level, **kwargs
)
else:
if method == "bootstrap":
confidence_intervals = self._estimate_confidence_intervals_with_bootstrap(
estimate_value, confidence_level, **kwargs
)
else:
confidence_intervals = self._estimate_confidence_intervals(confidence_level, method=method, **kwargs)
return confidence_intervals
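    # Direct-call sketch (argument values are illustrative):
    #   lo, hi = estimator.estimate_confidence_intervals(est.value, confidence_level=0.90, method="bootstrap")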
def _estimate_std_error_with_bootstrap(self, num_simulations=None, sample_size_fraction=None):
"""Compute standard error using the bootstrap method. Standard error
and confidence intervals use the same parameter num_simulations for
the number of bootstrap simulations.
:param num_simulations: Number of bootstrapped samples.
:param sample_size_fraction: Fraction of data to be resampled.
:returns: Standard error of the obtained estimate.
"""
# Use existing params, if new user defined params are not present
if num_simulations is None:
num_simulations = self.num_simulations
if sample_size_fraction is None:
sample_size_fraction = self.sample_size_fraction
# Checking if bootstrap_estimates are already computed
if self._bootstrap_estimates is None:
self._bootstrap_estimates = self._generate_bootstrap_estimates(num_simulations, sample_size_fraction)
elif CausalEstimator.is_bootstrap_parameter_changed(self._bootstrap_estimates.params, locals()):
# Check if any parameter is changed from the previous std error estimate
self._bootstrap_estimates = self._generate_bootstrap_estimates(num_simulations, sample_size_fraction)
std_error = np.std(self._bootstrap_estimates.estimates)
return std_error
def _estimate_std_error(self, method=None, **kwargs):
"""
This method is to be overriden by the child classes, so that they
can run a standard error estimation method suited to the specific
causal estimator.
"""
raise NotImplementedError(
(
"This method for estimating standard errors is "
+ CausalEstimator.DEFAULT_NOTIMPLEMENTEDERROR_MSG
+ " Meanwhile, you can try the bootstrap method (method='bootstrap') to estimate standard errors."
).format(self.__class__)
)
def estimate_std_error(self, method=None, **kwargs):
"""Compute standard error of an obtained causal estimate.
:param method: Method for computing the standard error.
:param kwargs: Other optional parameters to be passed to the estimating method.
:returns: Standard error of the causal estimate.
"""
if method is None:
if self._confidence_intervals:
method = self._confidence_intervals
else:
method = "default"
std_error = None
if method == "default" or method is True: # user has not provided any method
try:
std_error = self._estimate_std_error(method, **kwargs)
except NotImplementedError:
std_error = self._estimate_std_error_with_bootstrap(**kwargs)
else:
if method == "bootstrap":
std_error = self._estimate_std_error_with_bootstrap(**kwargs)
else:
std_error = self._estimate_std_error(method, **kwargs)
return std_error
def _test_significance_with_bootstrap(self, estimate_value, num_null_simulations=None):
"""Test statistical significance of an estimate using the bootstrap method.
:param estimate_value: Obtained estimate's value
:param num_null_simulations: Number of simulations for the null hypothesis
:returns: p-value of the statistical significance test.
"""
# Use existing params, if new user defined params are not present
if num_null_simulations is None:
num_null_simulations = self.num_null_simulations
do_retest = self._bootstrap_null_estimates is None or CausalEstimator.is_bootstrap_parameter_changed(
self._bootstrap_null_estimates.params, locals()
)
if do_retest:
null_estimates = np.zeros(num_null_simulations)
for i in range(num_null_simulations):
new_outcome = np.random.permutation(self._outcome)
new_data = self._data.assign(dummy_outcome=new_outcome)
# self._outcome = self._data["dummy_outcome"]
new_estimator = self.get_new_estimator_object(
self._target_estimand,
test_significance=False,
evaluate_effect_strength=False,
confidence_intervals=False,
)
new_estimator.fit(
data=new_data,
treatment_name=self._target_estimand.treatment_variable,
outcome_name=("dummy_outcome",),
effect_modifier_names=self._effect_modifier_names,
)
new_effect = new_estimator.estimate_effect(
target_units=self._target_units,
)
null_estimates[i] = new_effect.value
self._bootstrap_null_estimates = CausalEstimator.BootstrapEstimates(
null_estimates, {"num_null_simulations": num_null_simulations, "sample_size_fraction": 1}
)
# Processing the null hypothesis estimates
sorted_null_estimates = np.sort(self._bootstrap_null_estimates.estimates)
self.logger.debug("Null estimates: {0}".format(sorted_null_estimates))
median_estimate = sorted_null_estimates[int(num_null_simulations / 2)]
        # Doing a two-sided test
        if estimate_value > median_estimate:
            # Being conservative with the p-value reported
            estimate_index = np.searchsorted(sorted_null_estimates, estimate_value, side="left")
            p_value = 1 - (estimate_index / num_null_simulations)
        else:
            # Being conservative with the p-value reported
            estimate_index = np.searchsorted(sorted_null_estimates, estimate_value, side="right")
            p_value = estimate_index / num_null_simulations
# If the estimate_index is 0, it depends on the number of simulations
if p_value == 0:
p_value = (0, 1 / len(sorted_null_estimates)) # a tuple determining the range.
elif p_value == 1:
p_value = (1 - 1 / len(sorted_null_estimates), 1)
signif_dict = {"p_value": p_value}
return signif_dict
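    # Intuition for the permutation test above: permuting the outcome breaks any
    # treatment-outcome association, so null_estimates approximate the sampling
    # distribution of the estimator under the null of "no effect"; the p-value is
    # the fraction of null estimates at least as extreme as estimate_value.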
def _test_significance(self, estimate_value, method=None, **kwargs):
"""
This method is to be overriden by the child classes, so that they
can run a significance test suited to the specific
causal estimator.
"""
raise NotImplementedError(
(
"This method for testing statistical significance is "
+ CausalEstimator.DEFAULT_NOTIMPLEMENTEDERROR_MSG
+ " Meanwhile, you can try the bootstrap method (method='bootstrap') to test statistical significance."
).format(self.__class__)
)
def test_significance(self, estimate_value, method=None, **kwargs):
"""Test statistical significance of obtained estimate.
By default, uses resampling to create a non-parametric significance test.
A general procedure. Individual child estimators can implement different methods.
If the method name is different from "bootstrap", this function calls the
implementation of the child estimator.
:param self: object instance of class Estimator
:param estimate_value: obtained estimate's value
:param method: Method for checking statistical significance
:returns: p-value from the significance test
"""
if method is None:
if self._significance_test:
method = self._significance_test # this is either True or methodname
else:
method = "default"
signif_dict = None
if method == "default" or method is True: # user has not provided any method
try:
signif_dict = self._test_significance(estimate_value, method, **kwargs)
except NotImplementedError:
signif_dict = self._test_significance_with_bootstrap(estimate_value, **kwargs)
else:
if method == "bootstrap":
signif_dict = self._test_significance_with_bootstrap(estimate_value, **kwargs)
else:
signif_dict = self._test_significance(estimate_value, method, **kwargs)
return signif_dict
def evaluate_effect_strength(self, estimate):
fraction_effect_explained = self._evaluate_effect_strength(estimate, method="fraction-effect")
# Need to test r-squared before supporting
# effect_r_squared = self._evaluate_effect_strength(estimate, method="r-squared")
strength_dict = {
"fraction-effect": fraction_effect_explained
# 'r-squared': effect_r_squared
}
return strength_dict
def _evaluate_effect_strength(self, estimate, method="fraction-effect"):
supported_methods = ["fraction-effect"]
if method not in supported_methods:
raise NotImplementedError("This method is not supported for evaluating effect strength")
if method == "fraction-effect":
naive_obs_estimate = self.estimate_effect_naive()
self.logger.debug(estimate.value, naive_obs_estimate.value)
fraction_effect_explained = estimate.value / naive_obs_estimate.value
return fraction_effect_explained
# elif method == "r-squared":
# outcome_mean = np.mean(self._outcome)
# total_variance = np.sum(np.square(self._outcome - outcome_mean))
# Assuming a linear model with one variable: the treatment
# Currently only works for continuous y
# causal_model = outcome_mean + estimate.value*self._treatment
# squared_residual = np.sum(np.square(self._outcome - causal_model))
# r_squared = 1 - (squared_residual/total_variance)
# return r_squared
else:
return None
def update_input(self, treatment_value, control_value, target_units):
self._control_value = control_value
self._treatment_value = treatment_value
self._target_units = target_units
@staticmethod
def is_bootstrap_parameter_changed(bootstrap_estimates_params, given_params):
"""Check whether parameters of the bootstrap have changed.
This is an efficiency method that checks if fresh resampling of the bootstrap samples is required.
Returns True if parameters have changed and resampling should be done again.
:param bootstrap_estimates_params: A dictionary of parameters for the current bootstrap samples
:param given_params: A dictionary of parameters passed by the user
:returns: A binary flag denoting whether the parameters are different.
"""
is_any_parameter_changed = False
for prm, val in bootstrap_estimates_params.items():
given_val = given_params.get(prm, None)
if given_val is not None and given_val != val:
is_any_parameter_changed = True
break
return is_any_parameter_changed
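    # Example: if the stored params are {"num_simulations": 100, "sample_size_fraction": 1}
    # and the caller now passes num_simulations=200, this returns True and the
    # bootstrap estimates are regenerated on the next request.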
def target_units_tostr(self):
s = ""
if type(self._target_units) is str:
s += self._target_units
elif callable(self._target_units):
s += "Data subset defined by a function"
elif isinstance(self._target_units, pd.DataFrame):
s += "Data subset provided as a data frame"
return s
def signif_results_tostr(self, signif_results):
s = ""
pval = signif_results["p_value"]
if type(pval) is tuple:
s += "[{0}, {1}]".format(pval[0], pval[1])
else:
s += "{0}".format(pval)
return s
def estimate_effect(
data: pd.DataFrame,
treatment: Union[str, List[str]],
outcome: Union[str, List[str]],
identifier_name: str,
estimator: CausalEstimator,
control_value: int = 0,
treatment_value: int = 1,
target_units: str = "ate",
    effect_modifiers: Optional[List[str]] = None,
fit_estimator: bool = True,
method_params: Optional[Dict] = None,
):
"""Estimate the identified causal effect.
In addition, you can directly call any of the EconML estimation methods. The convention is "backdoor.econml.path-to-estimator-class". For example, for the double machine learning estimator ("DML" class) that is located inside "dml" module of EconML, you can use the method name, "backdoor.econml.dml.DML". CausalML estimators can also be called. See `this demo notebook <https://py-why.github.io/dowhy/example_notebooks/dowhy-conditional-treatment-effects.html>`_.
:param treatment: Name of the treatment
:param outcome: Name of the outcome
:param identified_estimand: a probability expression
that represents the effect to be estimated. Output of
CausalModel.identify_effect method
:param estimator: Instance of a CausalEstimator to use
:param control_value: Value of the treatment in the control group, for effect estimation. If treatment is multi-variate, this can be a list.
:param treatment_value: Value of the treatment in the treated group, for effect estimation. If treatment is multi-variate, this can be a list.
:param target_units: (Experimental) The units for which the treatment effect should be estimated. This can be of three types. (1) a string for common specifications of target units (namely, "ate", "att" and "atc"), (2) a lambda function that can be used as an index for the data (pandas DataFrame), or (3) a new DataFrame that contains values of the effect_modifiers and effect will be estimated only for this new data.
:param effect_modifiers: Names of effect modifier variables can be (optionally) specified here too, since they do not affect identification. If None, the effect_modifiers from the CausalModel are used.
:param fit_estimator: Boolean flag on whether to fit the estimator.
Setting it to False is useful to estimate the effect on new data using a previously fitted estimator.
:returns: An instance of the CausalEstimate class, containing the causal effect estimate
and other method-dependent information
"""
if effect_modifiers is None:
effect_modifiers = []
treatment = parse_state(treatment)
outcome = parse_state(outcome)
causal_estimator_class = estimator.__class__
identified_estimand = estimator._target_estimand
identified_estimand.set_identifier_method(identifier_name)
if identified_estimand.no_directed_path:
logger.warning("No directed path from {0} to {1}.".format(treatment, outcome))
return CausalEstimate(
0, identified_estimand, None, control_value=control_value, treatment_value=treatment_value
)
# Check if estimator's target estimand is identified
elif identified_estimand.estimands[identifier_name] is None:
logger.error("No valid identified estimand available.")
return CausalEstimate(None, None, None, control_value=control_value, treatment_value=treatment_value)
    if fit_estimator:
        estimator.fit(
            data=data,
            treatment_name=treatment,
            outcome_name=outcome,
            effect_modifier_names=effect_modifiers,
            # Guard against method_params being None (its default) before the membership test
            **(method_params["fit_params"] if method_params is not None and "fit_params" in method_params else {}),
        )
estimate = estimator.estimate_effect(
treatment_value=treatment_value,
control_value=control_value,
target_units=target_units,
confidence_intervals=estimator._confidence_intervals,
)
# Store parameters inside estimate object for refutation methods
# TODO: This add_params needs to move to the estimator class
# inside estimate_effect and estimate_conditional_effect
estimate.add_params(
estimand_type=identified_estimand.estimand_type,
estimator_class=causal_estimator_class,
test_significance=estimator._significance_test,
evaluate_effect_strength=estimator._effect_strength_eval,
confidence_intervals=estimator._confidence_intervals,
target_units=target_units,
effect_modifiers=effect_modifiers,
)
if estimator._significance_test:
estimator.test_significance(estimate.value, method=estimator._significance_test)
if estimator._confidence_intervals:
estimator.estimate_confidence_intervals(
estimate.value, confidence_level=estimator.confidence_level, method=estimator._confidence_intervals
)
if estimator._effect_strength_eval:
effect_strength_dict = estimator.evaluate_effect_strength(estimate)
estimate.add_effect_strength(effect_strength_dict)
return estimate
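# Minimal end-to-end sketch of the functional API above (illustrative names;
# `estimand` is assumed to come from a prior identification step, and
# PropensityScoreMatchingEstimator stands in for any CausalEstimator subclass):
#   estimator = PropensityScoreMatchingEstimator(estimand)
#   estimate = estimate_effect(df, treatment="v0", outcome="y",
#                              identifier_name="backdoor", estimator=estimator)
#   print(estimate.value)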
class CausalEstimate:
"""Class for the estimate object that every causal estimator returns"""
def __init__(
self,
estimate,
target_estimand,
realized_estimand_expr,
control_value,
treatment_value,
conditional_estimates=None,
**kwargs,
):
self.value = estimate
self.target_estimand = target_estimand
self.realized_estimand_expr = realized_estimand_expr
self.control_value = control_value
self.treatment_value = treatment_value
self.conditional_estimates = conditional_estimates
self.params = kwargs
if self.params is not None:
for key, value in self.params.items():
setattr(self, key, value)
self.effect_strength = None
def add_estimator(self, estimator_instance):
self.estimator = estimator_instance
def add_effect_strength(self, strength_dict):
self.effect_strength = strength_dict
def add_params(self, **kwargs):
self.params.update(kwargs)
def get_confidence_intervals(self, confidence_level=None, method=None, **kwargs):
"""Get confidence intervals of the obtained estimate.
By default, this is done with the help of bootstrapped confidence intervals
but can be overridden if the specific estimator implements other methods of estimating confidence intervals.
If the method provided is not bootstrap, this function calls the implementation of the specific estimator.
:param method: Method for estimating confidence intervals.
:param confidence_level: The confidence level of the confidence intervals of the estimate.
:param kwargs: Other optional args to be passed to the CI method.
:returns: The obtained confidence interval.
"""
confidence_intervals = self.estimator.estimate_confidence_intervals(
estimate_value=self.value, confidence_level=confidence_level, method=method, **kwargs
)
return confidence_intervals
def get_standard_error(self, method=None, **kwargs):
"""Get standard error of the obtained estimate.
By default, this is done with the help of bootstrapped standard errors
but can be overridden if the specific estimator implements other methods of estimating standard error.
If the method provided is not bootstrap, this function calls the implementation of the specific estimator.
:param method: Method for computing the standard error.
:param kwargs: Other optional parameters to be passed to the estimating method.
:returns: Standard error of the causal estimate.
"""
std_error = self.estimator.estimate_std_error(method=method, **kwargs)
return std_error
def test_stat_significance(self, method=None, **kwargs):
"""Test statistical significance of the estimate obtained.
By default, uses resampling to create a non-parametric significance test.
Individual child estimators can implement different methods.
If the method name is different from "bootstrap", this function calls the
implementation of the child estimator.
:param method: Method for checking statistical significance
:param kwargs: Other optional parameters to be passed to the estimating method.
:returns: p-value from the significance test
"""
signif_results = self.estimator.test_significance(self.value, method=method, **kwargs)
return {"p_value": signif_results["p_value"]}
def estimate_conditional_effects(
self, effect_modifiers=None, num_quantiles=CausalEstimator.NUM_QUANTILES_TO_DISCRETIZE_CONT_COLS
):
"""Estimate treatment effect conditioned on given variables.
If a numeric effect modifier is provided, it is discretized into quantile bins. If you would like a custom discretization, you can do so yourself: create a new column containing the discretized effect modifier and then include that column's name in the effect_modifier_names argument.
:param effect_modifiers: Names of effect modifier variables over which the conditional effects will be estimated. If not provided, defaults to the effect modifiers specified during creation of the CausalEstimator object.
:param num_quantiles: The number of quantiles into which a numeric effect modifier variable is discretized. Does not affect any categorical effect modifiers.
:returns: A (multi-index) dataframe that provides separate effects for each value of the (discretized) effect modifiers.
"""
return self.estimator._estimate_conditional_effects(
self.estimator._estimate_effect_fn, effect_modifiers, num_quantiles
)
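    # e.g. (illustrative): estimate.estimate_conditional_effects(effect_modifiers=["X0"], num_quantiles=4)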
def interpret(self, method_name=None, **kwargs):
"""Interpret the causal estimate.
:param method_name: Method used (string) or a list of methods. If None, then the default for the specific estimator is used.
        :param kwargs: Optional parameters that are directly passed to the interpreter method.
:returns: None
"""
if method_name is None:
method_name = self.estimator.interpret_method
method_name_arr = parse_state(method_name)
for method in method_name_arr:
interpreter = interpreters.get_class_object(method)
interpreter(self, **kwargs).interpret()
def __str__(self):
s = "*** Causal Estimate ***\n"
# No estimand was identified (identification failed)
if self.target_estimand is None:
return "Estimation failed! No relevant identified estimand available for this estimation method."
s += "\n## Identified estimand\n{0}".format(self.target_estimand.__str__(only_target_estimand=True))
s += "\n## Realized estimand\n{0}".format(self.realized_estimand_expr)
if hasattr(self, "estimator"):
s += "\nTarget units: {0}\n".format(self.estimator.target_units_tostr())
s += "\n## Estimate\n"
s += "Mean value: {0}\n".format(self.value)
s += ""
if hasattr(self, "cate_estimates"):
s += "Effect estimates: {0}\n".format(self.cate_estimates)
if hasattr(self, "estimator"):
if self.estimator._significance_test:
s += "p-value: {0}\n".format(self.estimator.signif_results_tostr(self.test_stat_significance()))
if self.estimator._confidence_intervals:
s += "{0}% confidence interval: {1}\n".format(
100 * self.estimator.confidence_level, self.get_confidence_intervals()
)
if self.conditional_estimates is not None:
s += "### Conditional Estimates\n"
s += str(self.conditional_estimates)
if self.effect_strength is not None:
s += "\n## Effect Strength\n"
s += "Change in outcome attributable to treatment: {}\n".format(self.effect_strength["fraction-effect"])
# s += "Variance in outcome explained by treatment: {}\n".format(self.effect_strength["r-squared"])
return s
class RealizedEstimand(object):
def __init__(self, identified_estimand, estimator_name):
self.treatment_variable = identified_estimand.treatment_variable
self.outcome_variable = identified_estimand.outcome_variable
self.backdoor_variables = identified_estimand.get_backdoor_variables()
self.instrumental_variables = identified_estimand.instrumental_variables
self.estimand_type = identified_estimand.estimand_type
self.estimand_expression = None
self.assumptions = None
self.estimator_name = estimator_name
def update_assumptions(self, estimator_assumptions):
self.assumptions = estimator_assumptions
def update_estimand_expression(self, estimand_expression):
self.estimand_expression = estimand_expression
def __str__(self):
s = "Realized estimand: {0}\n".format(self.estimator_name)
s += "Realized estimand type: {0}\n".format(self.estimand_type)
s += "Estimand expression:\n{0}\n".format(sp.pretty(self.estimand_expression))
j = 1
for ass_name, ass_str in self.assumptions.items():
s += "Estimand assumption {0}, {1}: {2}\n".format(j, ass_name, ass_str)
j += 1
return s
| andresmor-ms | 11c4e0dafd6e824eb81ad14262457d954ae61468 | affe0952f4aba6845247355c171565510c2c1673 | why is identified_estimand removed from the params? Logically, a user should provide the output of identify_effect to this function.
I see that you are using the estimator's target_estimand, but then having a str parameter feels odd. What are the possible values for identifier name?
Rather than identifier_name, I recommend passing the full identifiedestimand object. It might be redundant now, but can be useful when we are using an external Estimator implementation. We need not assume too much about what the estimator contains. | amit-sharma | 192 |
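For concreteness, a minimal sketch of the alternative the reviewer proposes (a hypothetical signature, not the merged API): the identifier_name string is replaced by the full object returned from identify_effect, and identifier values such as "backdoor" or "iv" are read from the estimand itself.
def estimate_effect(
    data: pd.DataFrame,
    treatment: Union[str, List[str]],
    outcome: Union[str, List[str]],
    identified_estimand: IdentifiedEstimand,  # full object instead of an identifier_name str
    estimator: CausalEstimator,
):
    # the estimator no longer needs to carry its own target estimand
    estimator._target_estimand = identified_estimand
    ...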
py-why/dowhy | 746 | Functional api/causal estimators | * Introduce `fit()` method to estimators.
* Refactor constructors to avoid using `*args` and `**kwargs` and have more explicit parameters.
* Refactor refuters and other parts of the code to use `fit()` and modify arguments to `estimate_effect()` | null | 2022-11-04 16:15:39+00:00 | 2022-12-03 17:07:53+00:00 | dowhy/causal_estimator.py | import logging
from collections import namedtuple
from typing import Dict, List, Optional, Union
import numpy as np
import pandas as pd
import sympy as sp
from sklearn.utils import resample
import dowhy.interpreters as interpreters
from dowhy import causal_estimators
from dowhy.causal_graph import CausalGraph
from dowhy.causal_identifier.identified_estimand import IdentifiedEstimand
from dowhy.utils.api import parse_state
logger = logging.getLogger(__name__)
class CausalEstimator:
"""Base class for an estimator of causal effect.
Subclasses implement different estimation methods. All estimation methods are in the package "dowhy.causal_estimators"
"""
# The default number of simulations for statistical testing
DEFAULT_NUMBER_OF_SIMULATIONS_STAT_TEST = 1000
# The default number of simulations to obtain confidence intervals
DEFAULT_NUMBER_OF_SIMULATIONS_CI = 100
# The portion of the total size that should be taken each time to find the confidence intervals
# 1 is the recommended value
# https://ocw.mit.edu/courses/mathematics/18-05-introduction-to-probability-and-statistics-spring-2014/readings/MIT18_05S14_Reading24.pdf
# https://projecteuclid.org/download/pdf_1/euclid.ss/1032280214
DEFAULT_SAMPLE_SIZE_FRACTION = 1
# The default Confidence Level
DEFAULT_CONFIDENCE_LEVEL = 0.95
# Number of quantiles to discretize continuous columns, for applying groupby
NUM_QUANTILES_TO_DISCRETIZE_CONT_COLS = 5
# Prefix to add to temporary categorical variables created after discretization
TEMP_CAT_COLUMN_PREFIX = "__categorical__"
DEFAULT_NOTIMPLEMENTEDERROR_MSG = "not yet implemented for {0}. If you would this to be implemented in the next version, please raise an issue at https://github.com/microsoft/dowhy/issues"
BootstrapEstimates = namedtuple("BootstrapEstimates", ["estimates", "params"])
DEFAULT_INTERPRET_METHOD = ["textual_effect_interpreter"]
# std args to be removed from locals() before being passed to args_dict
_STD_INIT_ARGS = ("self", "__class__", "args", "kwargs")
def __init__(
self,
data,
identified_estimand,
treatment,
outcome,
control_value=0,
treatment_value=1,
test_significance=False,
evaluate_effect_strength=False,
confidence_intervals=False,
target_units=None,
effect_modifiers=None,
num_null_simulations=DEFAULT_NUMBER_OF_SIMULATIONS_STAT_TEST,
num_simulations=DEFAULT_NUMBER_OF_SIMULATIONS_CI,
sample_size_fraction=DEFAULT_SAMPLE_SIZE_FRACTION,
confidence_level=DEFAULT_CONFIDENCE_LEVEL,
need_conditional_estimates="auto",
num_quantiles_to_discretize_cont_cols=NUM_QUANTILES_TO_DISCRETIZE_CONT_COLS,
**kwargs,
):
"""Initializes an estimator with data and names of relevant variables.
This method is called from the constructors of its child classes.
:param data: data frame containing the data
:param identified_estimand: probability expression
representing the target identified estimand to estimate.
:param treatment: name of the treatment variable
:param outcome: name of the outcome variable
:param control_value: Value of the treatment in the control group, for effect estimation. If treatment is multi-variate, this can be a list.
:param treatment_value: Value of the treatment in the treated group, for effect estimation. If treatment is multi-variate, this can be a list.
:param test_significance: Binary flag or a string indicating whether to test significance and by which method. All estimators support test_significance="bootstrap" that estimates a p-value for the obtained estimate using the bootstrap method. Individual estimators can override this to support custom testing methods. The bootstrap method supports an optional parameter, num_null_simulations. If False, no testing is done. If True, significance of the estimate is tested using the custom method if available, otherwise by bootstrap.
:param evaluate_effect_strength: (Experimental) whether to evaluate the strength of effect
:param confidence_intervals: Binary flag or a string indicating whether the confidence intervals should be computed and which method should be used. All methods support estimation of confidence intervals using the bootstrap method by using the parameter confidence_intervals="bootstrap". The bootstrap method takes in two arguments (num_simulations and sample_size_fraction) that can be optionally specified in the params dictionary. Estimators may also override this to implement their own confidence interval method. If this parameter is False, no confidence intervals are computed. If True, confidence intervals are computed by the estimator's specific method if available, otherwise through bootstrap.
:param target_units: The units for which the treatment effect should be estimated. This can be a string for common specifications of target units (namely, "ate", "att" and "atc"). It can also be a lambda function that can be used as an index for the data (pandas DataFrame). Alternatively, it can be a new DataFrame that contains values of the effect_modifiers and effect will be estimated only for this new data.
:param effect_modifiers: Variables on which to compute separate
effects, or return a heterogeneous effect function. Not all
methods support this currently.
:param num_null_simulations: The number of simulations for testing the
statistical significance of the estimator
        :param num_simulations: The number of simulations for finding the
            confidence interval (and/or standard error) for an estimate
:param sample_size_fraction: The size of the sample for the bootstrap
estimator
:param confidence_level: The confidence level of the confidence
interval estimate
:param need_conditional_estimates: Boolean flag indicating whether
conditional estimates should be computed. Defaults to True if
there are effect modifiers in the graph
:param num_quantiles_to_discretize_cont_cols: The number of quantiles
into which a numeric effect modifier is split, to enable
estimation of conditional treatment effect over it.
:param kwargs: (optional) Additional estimator-specific parameters
:returns: an instance of the estimator class.
"""
self._data = data
self._target_estimand = identified_estimand
# Currently estimation methods only support univariate treatment and outcome
self._treatment_name = treatment
self._outcome_name = outcome[0] # assuming one-dimensional outcome
self._control_value = control_value
self._treatment_value = treatment_value
self._significance_test = test_significance
self._effect_strength_eval = evaluate_effect_strength
self._target_units = target_units
self._effect_modifier_names = effect_modifiers
self._confidence_intervals = confidence_intervals
self._bootstrap_estimates = None # for confidence intervals and std error
self._bootstrap_null_estimates = None # for significance test
self._effect_modifiers = None
self.method_params = kwargs
# Setting the default interpret method
self.interpret_method = CausalEstimator.DEFAULT_INTERPRET_METHOD
self.logger = logging.getLogger(__name__)
# Setting treatment and outcome values
if self._data is not None:
self._treatment = self._data[self._treatment_name]
self._outcome = self._data[self._outcome_name]
if self._effect_modifier_names:
# only add the observed nodes
self._effect_modifier_names = [
cname for cname in self._effect_modifier_names if cname in self._data.columns
]
if len(self._effect_modifier_names) > 0:
self._effect_modifiers = self._data[self._effect_modifier_names]
self._effect_modifiers = pd.get_dummies(self._effect_modifiers, drop_first=True)
self.logger.debug("Effect modifiers: " + ",".join(self._effect_modifier_names))
else:
self._effect_modifier_names = None
# Check if some parameters were set, otherwise set to default values
self.num_null_simulations = num_null_simulations
self.num_simulations = num_simulations
self.sample_size_fraction = sample_size_fraction
self.confidence_level = confidence_level
self.num_quantiles_to_discretize_cont_cols = num_quantiles_to_discretize_cont_cols
# Estimate conditional estimates by default
self.need_conditional_estimates = (
need_conditional_estimates if need_conditional_estimates != "auto" else bool(self._effect_modifier_names)
)
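        # e.g., with effect_modifiers=["X0"], "auto" resolves need_conditional_estimates
        # to True; with no effect modifiers in the data, it resolves to False.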
@staticmethod
def get_estimator_object(new_data, identified_estimand, estimate):
"""Create a new estimator of the same type as the one passed in the estimate argument.
Creates a new object with new_data and the identified_estimand
:param new_data: np.ndarray, pd.Series, pd.DataFrame
The newly assigned data on which the estimator should run
:param identified_estimand: IdentifiedEstimand
An instance of the identified estimand class that provides the information with
            respect to which causal pathways are employed when the treatment affects the outcome
:param estimate: CausalEstimate
It is an already existing estimate whose properties we wish to replicate
:returns: An instance of the same estimator class that had generated the given estimate.
"""
estimator_class = estimate.params["estimator_class"]
new_estimator = estimator_class(
new_data,
identified_estimand,
identified_estimand.treatment_variable,
identified_estimand.outcome_variable,
# names of treatment and outcome
control_value=estimate.control_value,
treatment_value=estimate.treatment_value,
test_significance=False,
evaluate_effect_strength=False,
confidence_intervals=estimate.params["confidence_intervals"],
target_units=estimate.params["target_units"],
effect_modifiers=estimate.params["effect_modifiers"],
**estimate.params["method_params"] if estimate.params["method_params"] is not None else {},
)
return new_estimator
def _estimate_effect(self):
"""This method is to be overriden by the child classes, so that they can run the estimation technique of their choice"""
raise NotImplementedError(
("Main estimation method is " + CausalEstimator.DEFAULT_NOTIMPLEMENTEDERROR_MSG).format(self.__class__)
)
def estimate_effect(self):
"""Base estimation method that calls the estimate_effect method of its calling subclass.
Can optionally also test significance and estimate effect strength for any returned estimate.
:param self: object instance of class Estimator
:returns: A CausalEstimate instance that contains point estimates of average and conditional effects. Based on the parameters provided, it optionally includes confidence intervals, standard errors,statistical significance and other statistical parameters.
"""
est = self._estimate_effect()
est.add_estimator(self)
if self._significance_test:
self.test_significance(est.value, method=self._significance_test)
if self._confidence_intervals:
self.estimate_confidence_intervals(
est.value, confidence_level=self.confidence_level, method=self._confidence_intervals
)
if self._effect_strength_eval:
effect_strength_dict = self.evaluate_effect_strength(est)
est.add_effect_strength(effect_strength_dict)
return est
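    # Typical call pattern in this pre-fit() API (a sketch; LinearRegressionEstimator
    # is illustrative and `df`/`estimand` are assumed to exist):
    #   est = LinearRegressionEstimator(df, estimand, estimand.treatment_variable, estimand.outcome_variable)
    #   effect = est.estimate_effect()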
def estimate_effect_naive(self):
# TODO Only works for binary treatment
df_withtreatment = self._data.loc[self._data[self._treatment_name] == 1]
df_notreatment = self._data.loc[self._data[self._treatment_name] == 0]
est = np.mean(df_withtreatment[self._outcome_name]) - np.mean(df_notreatment[self._outcome_name])
return CausalEstimate(est, None, None, control_value=0, treatment_value=1)
def _estimate_effect_fn(self, data_df):
"""Function used in conditional effect estimation. This function is to be overridden by each child estimator.
The overridden function should take in a dataframe as input and return the estimate for that data.
"""
raise NotImplementedError(
("Conditional treatment effects are " + CausalEstimator.DEFAULT_NOTIMPLEMENTEDERROR_MSG).format(
self.__class__
)
)
def _estimate_conditional_effects(self, estimate_effect_fn, effect_modifier_names=None, num_quantiles=None):
"""Estimate conditional treatment effects. Common method for all estimators that utilizes a specific estimate_effect_fn implemented by each child estimator.
If a numeric effect modifier is provided, it is discretized into quantile bins. If you would like a custom discretization, you can do so yourself: create a new column containing the discretized effect modifier and then include that column's name in the effect_modifier_names argument.
:param estimate_effect_fn: Function that has a single parameter (a data frame) and returns the treatment effect estimate on that data.
:param effect_modifier_names: Names of effect modifier variables over which the conditional effects will be estimated. If not provided, defaults to the effect modifiers specified during creation of the CausalEstimator object.
:param num_quantiles: The number of quantiles into which a numeric effect modifier variable is discretized. Does not affect any categorical effect modifiers.
:returns: A (multi-index) dataframe that provides separate effects for each value of the (discretized) effect modifiers.
"""
# Defaulting to class default values if parameters are not provided
if effect_modifier_names is None:
effect_modifier_names = self._effect_modifier_names
if num_quantiles is None:
num_quantiles = self.num_quantiles_to_discretize_cont_cols
# Checking that there is at least one effect modifier
if not effect_modifier_names:
raise ValueError("At least one effect modifier should be specified to compute conditional effects.")
# Making sure that effect_modifier_names is a list
effect_modifier_names = parse_state(effect_modifier_names)
if not all(em in self._effect_modifier_names for em in effect_modifier_names):
self.logger.warn(
"At least one of the provided effect modifiers was not included while fitting the estimator. You may get incorrect results. To resolve, fit the estimator again by providing the updated effect modifiers in estimate_effect()."
)
# Making a copy since we are going to be changing effect modifier names
effect_modifier_names = effect_modifier_names.copy()
prefix = CausalEstimator.TEMP_CAT_COLUMN_PREFIX
# For every numeric effect modifier, adding a temp categorical column
for i in range(len(effect_modifier_names)):
em = effect_modifier_names[i]
if pd.api.types.is_numeric_dtype(self._data[em].dtypes):
self._data[prefix + str(em)] = pd.qcut(self._data[em], num_quantiles, duplicates="drop")
effect_modifier_names[i] = prefix + str(em)
# Grouping by effect modifiers and computing effect separately
by_effect_mods = self._data.groupby(effect_modifier_names)
cond_est_fn = lambda x: self._do(self._treatment_value, x) - self._do(self._control_value, x)
conditional_estimates = by_effect_mods.apply(estimate_effect_fn)
# Deleting the temporary categorical columns
for em in effect_modifier_names:
if em.startswith(prefix):
self._data.pop(em)
return conditional_estimates
def _do(self, x, data_df=None):
raise NotImplementedError(
("Do-operator is " + CausalEstimator.DEFAULT_NOTIMPLEMENTEDERROR_MSG).format(self.__class__)
)
def do(self, x, data_df=None):
"""Method that implements the do-operator.
Given a value x for the treatment, returns the expected value of the outcome when the treatment is intervened to a value x.
:param x: Value of the treatment
:param data_df: Data on which the do-operator is to be applied.
:returns: Value of the outcome when treatment is intervened/set to x.
"""
est = self._do(x, data_df)
return est
def construct_symbolic_estimator(self, estimand):
raise NotImplementedError(("Symbolic estimator string is ").format(self.__class__))
def _generate_bootstrap_estimates(self, num_bootstrap_simulations, sample_size_fraction):
"""Helper function to generate causal estimates over bootstrapped samples.
:param num_bootstrap_simulations: Number of simulations for the bootstrap method.
:param sample_size_fraction: Fraction of the dataset to be resampled.
:returns: A collections.namedtuple containing a list of bootstrapped estimates and a dictionary containing parameters used for the bootstrap.
"""
# The array that stores the results of all estimations
simulation_results = np.zeros(num_bootstrap_simulations)
        # Find the sample size as a fraction of the population (data) size
sample_size = int(sample_size_fraction * len(self._data))
if sample_size > len(self._data):
self.logger.warning("WARN: The sample size is greater than the data being sampled")
self.logger.info("INFO: The sample size: {}".format(sample_size))
self.logger.info("INFO: The number of simulations: {}".format(num_bootstrap_simulations))
# Perform the set number of simulations
for index in range(num_bootstrap_simulations):
new_data = resample(self._data, n_samples=sample_size)
new_estimator = type(self)(
new_data,
self._target_estimand,
self._target_estimand.treatment_variable,
self._target_estimand.outcome_variable,
# names of treatment and outcome
treatment_value=self._treatment_value,
control_value=self._control_value,
test_significance=False,
evaluate_effect_strength=False,
confidence_intervals=False,
target_units=self._target_units,
effect_modifiers=self._effect_modifier_names,
**self.method_params,
)
new_effect = new_estimator.estimate_effect()
simulation_results[index] = new_effect.value
estimates = CausalEstimator.BootstrapEstimates(
simulation_results,
{"num_simulations": num_bootstrap_simulations, "sample_size_fraction": sample_size_fraction},
)
return estimates
def _estimate_confidence_intervals_with_bootstrap(
self, estimate_value, confidence_level=None, num_simulations=None, sample_size_fraction=None
):
"""
Method to compute confidence interval using bootstrapped sampling.
:param estimate_value: obtained estimate's value
:param confidence_level: The level for which to compute CI (e.g., 95% confidence level translates to confidence_level=0.95)
:param num_simulations: The number of simulations to be performed to get the bootstrap confidence intervals.
:param sample_size_fraction: The fraction of the dataset to be resampled.
:returns: confidence interval at the specified level.
For more details on bootstrap or resampling statistics, refer to the following links:
https://ocw.mit.edu/courses/mathematics/18-05-introduction-to-probability-and-statistics-spring-2014/readings/MIT18_05S14_Reading24.pdf
https://projecteuclid.org/download/pdf_1/euclid.ss/1032280214
"""
# Using class default parameters if not specified
if num_simulations is None:
num_simulations = self.num_simulations
if sample_size_fraction is None:
sample_size_fraction = self.sample_size_fraction
# Checking if bootstrap_estimates are already computed
if self._bootstrap_estimates is None:
self._bootstrap_estimates = self._generate_bootstrap_estimates(num_simulations, sample_size_fraction)
elif CausalEstimator.is_bootstrap_parameter_changed(self._bootstrap_estimates.params, locals()):
# Checked if any parameter is changed from the previous std error estimate
self._bootstrap_estimates = self._generate_bootstrap_estimates(num_simulations, sample_size_fraction)
# Now use the data obtained from the simulations to get the value of the confidence estimates
bootstrap_estimates = self._bootstrap_estimates.estimates
# Get the variations of each bootstrap estimate and sort
bootstrap_variations = [bootstrap_estimate - estimate_value for bootstrap_estimate in bootstrap_estimates]
sorted_bootstrap_variations = np.sort(bootstrap_variations)
# Now we take the (1- p)th and the (p)th variations, where p is the chosen confidence level
upper_bound_index = int((1 - confidence_level) * len(sorted_bootstrap_variations))
lower_bound_index = int(confidence_level * len(sorted_bootstrap_variations))
# Get the lower and upper bounds by subtracting the variations from the estimate
lower_bound = estimate_value - sorted_bootstrap_variations[lower_bound_index]
upper_bound = estimate_value - sorted_bootstrap_variations[upper_bound_index]
return lower_bound, upper_bound
def _estimate_confidence_intervals(self, confidence_level=None, method=None, **kwargs):
"""
This method is to be overriden by the child classes, so that they
can run a confidence interval estimation method suited to the specific
causal estimator.
"""
raise NotImplementedError(
(
"This method for estimating confidence intervals is "
+ CausalEstimator.DEFAULT_NOTIMPLEMENTEDERROR_MSG
+ " Meanwhile, you can try the bootstrap method (method='bootstrap') to estimate confidence intervals."
).format(self.__class__)
)
def estimate_confidence_intervals(self, estimate_value, confidence_level=None, method=None, **kwargs):
"""Find the confidence intervals corresponding to any estimator
By default, this is done with the help of bootstrapped confidence intervals
but can be overridden if the specific estimator implements other methods of estimating confidence intervals.
If the method provided is not bootstrap, this function calls the implementation of the specific estimator.
:param estimate_value: obtained estimate's value
:param method: Method for estimating confidence intervals.
:param confidence_level: The confidence level of the confidence intervals of the estimate.
:param kwargs: Other optional args to be passed to the CI method.
:returns: The obtained confidence interval.
"""
if method is None:
if self._confidence_intervals:
method = self._confidence_intervals # this is either True or methodname
else:
method = "default"
confidence_intervals = None
if confidence_level is None:
confidence_level = self.confidence_level
if method == "default" or method is True: # user has not provided any method
try:
confidence_intervals = self._estimate_confidence_intervals(confidence_level, method=method, **kwargs)
except NotImplementedError:
confidence_intervals = self._estimate_confidence_intervals_with_bootstrap(
estimate_value, confidence_level, **kwargs
)
else:
if method == "bootstrap":
confidence_intervals = self._estimate_confidence_intervals_with_bootstrap(
estimate_value, confidence_level, **kwargs
)
else:
confidence_intervals = self._estimate_confidence_intervals(confidence_level, method=method, **kwargs)
return confidence_intervals
def _estimate_std_error_with_bootstrap(self, num_simulations=None, sample_size_fraction=None):
"""Compute standard error using the bootstrap method. Standard error
and confidence intervals use the same parameter num_simulations for
the number of bootstrap simulations.
:param num_simulations: Number of bootstrapped samples.
:param sample_size_fraction: Fraction of data to be resampled.
:returns: Standard error of the obtained estimate.
"""
# Use existing params, if new user defined params are not present
if num_simulations is None:
num_simulations = self.num_simulations
if sample_size_fraction is None:
sample_size_fraction = self.sample_size_fraction
# Checking if bootstrap_estimates are already computed
if self._bootstrap_estimates is None:
self._bootstrap_estimates = self._generate_bootstrap_estimates(num_simulations, sample_size_fraction)
elif CausalEstimator.is_bootstrap_parameter_changed(self._bootstrap_estimates.params, locals()):
# Check if any parameter is changed from the previous std error estimate
self._bootstrap_estimates = self._generate_bootstrap_estimates(num_simulations, sample_size_fraction)
std_error = np.std(self._bootstrap_estimates.estimates)
return std_error
def _estimate_std_error(self, method=None, **kwargs):
"""
This method is to be overriden by the child classes, so that they
can run a standard error estimation method suited to the specific
causal estimator.
"""
raise NotImplementedError(
(
"This method for estimating standard errors is "
+ CausalEstimator.DEFAULT_NOTIMPLEMENTEDERROR_MSG
+ " Meanwhile, you can try the bootstrap method (method='bootstrap') to estimate standard errors."
).format(self.__class__)
)
def estimate_std_error(self, method=None, **kwargs):
"""Compute standard error of an obtained causal estimate.
:param method: Method for computing the standard error.
:param kwargs: Other optional parameters to be passed to the estimating method.
:returns: Standard error of the causal estimate.
"""
if method is None:
if self._confidence_intervals:
method = self._confidence_intervals
else:
method = "default"
std_error = None
if method == "default" or method is True: # user has not provided any method
try:
std_error = self._estimate_std_error(method, **kwargs)
except NotImplementedError:
std_error = self._estimate_std_error_with_bootstrap(**kwargs)
else:
if method == "bootstrap":
std_error = self._estimate_std_error_with_bootstrap(**kwargs)
else:
std_error = self._estimate_std_error(method, **kwargs)
return std_error
def _test_significance_with_bootstrap(self, estimate_value, num_null_simulations=None):
"""Test statistical significance of an estimate using the bootstrap method.
:param estimate_value: Obtained estimate's value
:param num_null_simulations: Number of simulations for the null hypothesis
:returns: p-value of the statistical significance test.
"""
# Use existing params, if new user defined params are not present
if num_null_simulations is None:
num_null_simulations = self.num_null_simulations
do_retest = self._bootstrap_null_estimates is None or CausalEstimator.is_bootstrap_parameter_changed(
self._bootstrap_null_estimates.params, locals()
)
if do_retest:
null_estimates = np.zeros(num_null_simulations)
for i in range(num_null_simulations):
new_outcome = np.random.permutation(self._outcome)
new_data = self._data.assign(dummy_outcome=new_outcome)
# self._outcome = self._data["dummy_outcome"]
new_estimator = type(self)(
new_data,
self._target_estimand,
self._target_estimand.treatment_variable,
("dummy_outcome",),
test_significance=False,
evaluate_effect_strength=False,
confidence_intervals=False,
target_units=self._target_units,
effect_modifiers=self._effect_modifier_names,
**self.method_params,
)
new_effect = new_estimator.estimate_effect()
null_estimates[i] = new_effect.value
self._bootstrap_null_estimates = CausalEstimator.BootstrapEstimates(
null_estimates, {"num_null_simulations": num_null_simulations, "sample_size_fraction": 1}
)
# Processing the null hypothesis estimates
sorted_null_estimates = np.sort(self._bootstrap_null_estimates.estimates)
self.logger.debug("Null estimates: {0}".format(sorted_null_estimates))
median_estimate = sorted_null_estimates[int(num_null_simulations / 2)]
# Doing a two-sided test
if estimate_value > median_estimate:
# Being conservative with the p-value reported
estimate_index = np.searchsorted(sorted_null_estimates, estimate_value, side="left")
p_value = 1 - (estimate_index / num_null_simulations)
elif estimate_value <= median_estimate:
# Being conservative with the p-value reported
estimate_index = np.searchsorted(sorted_null_estimates, estimate_value, side="right")
p_value = estimate_index / num_null_simulations
# If the estimate_index is 0, it depends on the number of simulations
if p_value == 0:
p_value = (0, 1 / len(sorted_null_estimates)) # a tuple determining the range.
elif p_value == 1:
p_value = (1 - 1 / len(sorted_null_estimates), 1)
signif_dict = {"p_value": p_value}
return signif_dict
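    # Worked illustration of the p-value computed above (numbers illustrative):
    # with num_null_simulations = 1000 and an observed estimate above the
    # median that exceeds 990 of the sorted null estimates, searchsorted
    # returns 990 and p_value = 1 - 990/1000 = 0.01. If it exceeded all 1000,
    # p_value == 0 is reported as the range (0, 1/1000) instead.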
def _test_significance(self, estimate_value, method=None, **kwargs):
"""
This method is to be overridden by the child classes, so that they
can run a significance test suited to the specific
causal estimator.
"""
raise NotImplementedError(
(
"This method for testing statistical significance is "
+ CausalEstimator.DEFAULT_NOTIMPLEMENTEDERROR_MSG
+ " Meanwhile, you can try the bootstrap method (method='bootstrap') to test statistical significance."
).format(self.__class__)
)
def test_significance(self, estimate_value, method=None, **kwargs):
"""Test statistical significance of obtained estimate.
By default, uses resampling to create a non-parametric significance test.
This is a general procedure; individual child estimators can implement different methods.
If the method name is different from "bootstrap", this function calls the
implementation of the child estimator.
:param self: object instance of class Estimator
:param estimate_value: obtained estimate's value
:param method: Method for checking statistical significance
:returns: p-value from the significance test
"""
if method is None:
if self._significance_test:
method = self._significance_test # this is either True or methodname
else:
method = "default"
signif_dict = None
if method == "default" or method is True: # user has not provided any method
try:
signif_dict = self._test_significance(estimate_value, method, **kwargs)
except NotImplementedError:
signif_dict = self._test_significance_with_bootstrap(estimate_value, **kwargs)
else:
if method == "bootstrap":
signif_dict = self._test_significance_with_bootstrap(estimate_value, **kwargs)
else:
signif_dict = self._test_significance(estimate_value, method, **kwargs)
return signif_dict
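    # Usage sketch (names illustrative): run the default permutation-based
    # test and read the p-value out of the returned dict.
    #
    #     signif = est.test_significance(estimate.value, method="bootstrap")
    #     p = signif["p_value"]  # a float, or a (low, high) tuple at the extremes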
def evaluate_effect_strength(self, estimate):
fraction_effect_explained = self._evaluate_effect_strength(estimate, method="fraction-effect")
# Need to test r-squared before supporting
# effect_r_squared = self._evaluate_effect_strength(estimate, method="r-squared")
strength_dict = {
"fraction-effect": fraction_effect_explained
# 'r-squared': effect_r_squared
}
return strength_dict
def _evaluate_effect_strength(self, estimate, method="fraction-effect"):
supported_methods = ["fraction-effect"]
if method not in supported_methods:
raise NotImplementedError("This method is not supported for evaluating effect strength")
if method == "fraction-effect":
naive_obs_estimate = self.estimate_effect_naive()
self.logger.debug("Estimate: %s, naive estimate: %s", estimate.value, naive_obs_estimate.value)
fraction_effect_explained = estimate.value / naive_obs_estimate.value
return fraction_effect_explained
# elif method == "r-squared":
# outcome_mean = np.mean(self._outcome)
# total_variance = np.sum(np.square(self._outcome - outcome_mean))
# Assuming a linear model with one variable: the treatment
# Currently only works for continuous y
# causal_model = outcome_mean + estimate.value*self._treatment
# squared_residual = np.sum(np.square(self._outcome - causal_model))
# r_squared = 1 - (squared_residual/total_variance)
# return r_squared
else:
return None
def update_input(self, treatment_value, control_value, target_units):
self._control_value = control_value
self._treatment_value = treatment_value
self._target_units = target_units
@staticmethod
def is_bootstrap_parameter_changed(bootstrap_estimates_params, given_params):
"""Check whether parameters of the bootstrap have changed.
This is an efficiency method that checks if fresh resampling of the bootstrap samples is required.
Returns True if parameters have changed and resampling should be done again.
:param bootstrap_estimates_params: A dictionary of parameters for the current bootstrap samples
:param given_params: A dictionary of parameters passed by the user
:returns: A binary flag denoting whether the parameters are different.
"""
is_any_parameter_changed = False
for prm, val in bootstrap_estimates_params.items():
given_val = given_params.get(prm, None)
if given_val is not None and given_val != val:
is_any_parameter_changed = True
break
return is_any_parameter_changed
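    # Sketch of the staleness check (values illustrative): only parameters the
    # caller actually supplied can trigger fresh resampling.
    #
    #     CausalEstimator.is_bootstrap_parameter_changed(
    #         {"num_simulations": 100, "sample_size_fraction": 1},
    #         {"num_simulations": 200},
    #     )  # True: num_simulations differs
    #     CausalEstimator.is_bootstrap_parameter_changed(
    #         {"num_simulations": 100},
    #         {"num_simulations": None},
    #     )  # False: None is treated as "not provided"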
def target_units_tostr(self):
s = ""
if type(self._target_units) is str:
s += self._target_units
elif callable(self._target_units):
s += "Data subset defined by a function"
elif isinstance(self._target_units, pd.DataFrame):
s += "Data subset provided as a data frame"
return s
def signif_results_tostr(self, signif_results):
s = ""
pval = signif_results["p_value"]
if type(pval) is tuple:
s += "[{0}, {1}]".format(pval[0], pval[1])
else:
s += "{0}".format(pval)
return s
def estimate_effect(
treatment: Union[str, List[str]],
outcome: Union[str, List[str]],
identified_estimand: IdentifiedEstimand,
identifier_name: str,
method: CausalEstimator,
control_value: int = 0,
treatment_value: int = 1,
test_significance: Optional[bool] = None,
evaluate_effect_strength: bool = False,
confidence_intervals: bool = False,
target_units: str = "ate",
effect_modifiers: List[str] = [],
fit_estimator: bool = True,
method_params: Optional[Dict] = None,
):
"""Estimate the identified causal effect.
Requires an explicit estimation method to be specified. Method names follow the convention of identification method followed by the specific estimation method: "[backdoor/iv].estimation_method_name". The following methods are supported.
* Propensity Score Matching: "backdoor.propensity_score_matching"
* Propensity Score Stratification: "backdoor.propensity_score_stratification"
* Propensity Score-based Inverse Weighting: "backdoor.propensity_score_weighting"
* Linear Regression: "backdoor.linear_regression"
* Generalized Linear Models (e.g., logistic regression): "backdoor.generalized_linear_model"
* Instrumental Variables: "iv.instrumental_variable"
* Regression Discontinuity: "iv.regression_discontinuity"
In addition, you can directly call any of the EconML estimation methods. The convention is "backdoor.econml.path-to-estimator-class". For example, for the double machine learning estimator ("DML" class) that is located inside "dml" module of EconML, you can use the method name, "backdoor.econml.dml.DML". CausalML estimators can also be called. See `this demo notebook <https://py-why.github.io/dowhy/example_notebooks/dowhy-conditional-treatment-effects.html>`_.
:param treatment: Name of the treatment
:param outcome: Name of the outcome
:param identified_estimand: a probability expression
that represents the effect to be estimated. Output of
CausalModel.identify_effect method
:param identifier_name: name of the identification method used (e.g., "backdoor")
:param method: an instance of the CausalEstimator subclass implementing the chosen estimation method
:param control_value: Value of the treatment in the control group, for effect estimation. If treatment is multi-variate, this can be a list.
:param treatment_value: Value of the treatment in the treated group, for effect estimation. If treatment is multi-variate, this can be a list.
:param test_significance: Binary flag on whether to additionally do a statistical significance test for the estimate.
:param evaluate_effect_strength: (Experimental) Binary flag on whether to estimate the relative strength of the treatment's effect. This measure can be used to compare different treatments for the same outcome (by running this method with different treatments sequentially).
:param confidence_intervals: (Experimental) Binary flag indicating whether confidence intervals should be computed.
:param target_units: (Experimental) The units for which the treatment effect should be estimated. This can be of three types. (1) a string for common specifications of target units (namely, "ate", "att" and "atc"), (2) a lambda function that can be used as an index for the data (pandas DataFrame), or (3) a new DataFrame that contains values of the effect_modifiers and effect will be estimated only for this new data.
:param effect_modifiers: Names of effect modifier variables can be (optionally) specified here too, since they do not affect identification. If None, the effect_modifiers from the CausalModel are used.
:param fit_estimator: Boolean flag on whether to fit the estimator.
Setting it to False is useful to estimate the effect on new data using a previously fitted estimator.
:param method_params: Dictionary containing any method-specific parameters. These are passed directly to the estimating method. See the docs for each estimation method for allowed method-specific params.
:returns: An instance of the CausalEstimate class, containing the causal effect estimate
and other method-dependent information
"""
treatment = parse_state(treatment)
outcome = parse_state(outcome)
causal_estimator_class = method.__class__
identified_estimand.set_identifier_method(identifier_name)
if identified_estimand.no_directed_path:
logger.warning("No directed path from {0} to {1}.".format(treatment, outcome))
return CausalEstimate(
0, identified_estimand, None, control_value=control_value, treatment_value=treatment_value
)
# Check if estimator's target estimand is identified
elif identified_estimand.estimands[identifier_name] is None:
logger.error("No valid identified estimand available.")
return CausalEstimate(None, None, None, control_value=control_value, treatment_value=treatment_value)
method.update_input(treatment_value, control_value, target_units)
estimate = method.estimate_effect()
# Store parameters inside estimate object for refutation methods
# TODO: This add_params needs to move to the estimator class
# inside estimate_effect and estimate_conditional_effect
estimate.add_params(
estimand_type=identified_estimand.estimand_type,
estimator_class=causal_estimator_class,
test_significance=test_significance,
evaluate_effect_strength=evaluate_effect_strength,
confidence_intervals=confidence_intervals,
target_units=target_units,
effect_modifiers=effect_modifiers,
method_params=method_params,
)
return estimate
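# A minimal usage sketch of this function, assuming `estimand` came from an
# identification step and `psm` is a constructed estimator instance (e.g., a
# propensity-score matching estimator); all names are illustrative:
#
#     estimate = estimate_effect(
#         treatment="v0",
#         outcome="y",
#         identified_estimand=estimand,
#         identifier_name="backdoor",
#         method=psm,
#         target_units="ate",
#     )
#     print(estimate.value)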
class CausalEstimate:
"""Class for the estimate object that every causal estimator returns"""
def __init__(
self,
estimate,
target_estimand,
realized_estimand_expr,
control_value,
treatment_value,
conditional_estimates=None,
**kwargs,
):
self.value = estimate
self.target_estimand = target_estimand
self.realized_estimand_expr = realized_estimand_expr
self.control_value = control_value
self.treatment_value = treatment_value
self.conditional_estimates = conditional_estimates
self.params = kwargs
if self.params is not None:
for key, value in self.params.items():
setattr(self, key, value)
self.effect_strength = None
def add_estimator(self, estimator_instance):
self.estimator = estimator_instance
def add_effect_strength(self, strength_dict):
self.effect_strength = strength_dict
def add_params(self, **kwargs):
self.params.update(kwargs)
def get_confidence_intervals(self, confidence_level=None, method=None, **kwargs):
"""Get confidence intervals of the obtained estimate.
By default, this is done with the help of bootstrapped confidence intervals
but can be overridden if the specific estimator implements other methods of estimating confidence intervals.
If the method provided is not bootstrap, this function calls the implementation of the specific estimator.
:param method: Method for estimating confidence intervals.
:param confidence_level: The confidence level of the confidence intervals of the estimate.
:param kwargs: Other optional args to be passed to the CI method.
:returns: The obtained confidence interval.
"""
confidence_intervals = self.estimator.estimate_confidence_intervals(
estimate_value=self.value, confidence_level=confidence_level, method=method, **kwargs
)
return confidence_intervals
def get_standard_error(self, method=None, **kwargs):
"""Get standard error of the obtained estimate.
By default, this is done with the help of bootstrapped standard errors
but can be overridden if the specific estimator implements other methods of estimating standard error.
If the method provided is not bootstrap, this function calls the implementation of the specific estimator.
:param method: Method for computing the standard error.
:param kwargs: Other optional parameters to be passed to the estimating method.
:returns: Standard error of the causal estimate.
"""
std_error = self.estimator.estimate_std_error(method=method, **kwargs)
return std_error
def test_stat_significance(self, method=None, **kwargs):
"""Test statistical significance of the estimate obtained.
By default, uses resampling to create a non-parametric significance test.
Individual child estimators can implement different methods.
If the method name is different from "bootstrap", this function calls the
implementation of the child estimator.
:param method: Method for checking statistical significance
:param kwargs: Other optional parameters to be passed to the estimating method.
:returns: p-value from the significance test
"""
signif_results = self.estimator.test_significance(self.value, method=method, **kwargs)
return {"p_value": signif_results["p_value"]}
def estimate_conditional_effects(
self, effect_modifiers=None, num_quantiles=CausalEstimator.NUM_QUANTILES_TO_DISCRETIZE_CONT_COLS
):
"""Estimate treatment effect conditioned on given variables.
If a numeric effect modifier is provided, it is discretized into quantile bins. If you would like a custom discretization, you can do so yourself: create a new column containing the discretized effect modifier and then include that column's name in the effect_modifier_names argument.
:param effect_modifiers: Names of effect modifier variables over which the conditional effects will be estimated. If not provided, defaults to the effect modifiers specified during creation of the CausalEstimator object.
:param num_quantiles: The number of quantiles into which a numeric effect modifier variable is discretized. Does not affect any categorical effect modifiers.
:returns: A (multi-index) dataframe that provides separate effects for each value of the (discretized) effect modifiers.
"""
return self.estimator._estimate_conditional_effects(
self.estimator._estimate_effect_fn, effect_modifiers, num_quantiles
)
def interpret(self, method_name=None, **kwargs):
"""Interpret the causal estimate.
:param method_name: Method used (string) or a list of methods. If None, then the default for the specific estimator is used.
:param kwargs: Optional parameters that are directly passed to the interpreter method.
:returns: None
"""
if method_name is None:
method_name = self.estimator.interpret_method
method_name_arr = parse_state(method_name)
for method in method_name_arr:
interpreter = interpreters.get_class_object(method)
interpreter(self, **kwargs).interpret()
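    # Sketch (names illustrative): interpreters are looked up by name, so a
    # textual summary can be requested explicitly.
    #
    #     estimate.interpret(method_name="textual_effect_interpreter")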
def __str__(self):
s = "*** Causal Estimate ***\n"
# No estimand was identified (identification failed)
if self.target_estimand is None:
return "Estimation failed! No relevant identified estimand available for this estimation method."
s += "\n## Identified estimand\n{0}".format(self.target_estimand.__str__(only_target_estimand=True))
s += "\n## Realized estimand\n{0}".format(self.realized_estimand_expr)
if hasattr(self, "estimator"):
s += "\nTarget units: {0}\n".format(self.estimator.target_units_tostr())
s += "\n## Estimate\n"
s += "Mean value: {0}\n".format(self.value)
s += ""
if hasattr(self, "cate_estimates"):
s += "Effect estimates: {0}\n".format(self.cate_estimates)
if hasattr(self, "estimator"):
if self.estimator._significance_test:
s += "p-value: {0}\n".format(self.estimator.signif_results_tostr(self.test_stat_significance()))
if self.estimator._confidence_intervals:
s += "{0}% confidence interval: {1}\n".format(
100 * self.estimator.confidence_level, self.get_confidence_intervals()
)
if self.conditional_estimates is not None:
s += "### Conditional Estimates\n"
s += str(self.conditional_estimates)
if self.effect_strength is not None:
s += "\n## Effect Strength\n"
s += "Change in outcome attributable to treatment: {}\n".format(self.effect_strength["fraction-effect"])
# s += "Variance in outcome explained by treatment: {}\n".format(self.effect_strength["r-squared"])
return s
class RealizedEstimand(object):
def __init__(self, identified_estimand, estimator_name):
self.treatment_variable = identified_estimand.treatment_variable
self.outcome_variable = identified_estimand.outcome_variable
self.backdoor_variables = identified_estimand.get_backdoor_variables()
self.instrumental_variables = identified_estimand.instrumental_variables
self.estimand_type = identified_estimand.estimand_type
self.estimand_expression = None
self.assumptions = None
self.estimator_name = estimator_name
def update_assumptions(self, estimator_assumptions):
self.assumptions = estimator_assumptions
def update_estimand_expression(self, estimand_expression):
self.estimand_expression = estimand_expression
def __str__(self):
s = "Realized estimand: {0}\n".format(self.estimator_name)
s += "Realized estimand type: {0}\n".format(self.estimand_type)
s += "Estimand expression:\n{0}\n".format(sp.pretty(self.estimand_expression))
j = 1
for ass_name, ass_str in self.assumptions.items():
s += "Estimand assumption {0}, {1}: {2}\n".format(j, ass_name, ass_str)
j += 1
return s
| import copy
import logging
from collections import namedtuple
from typing import Dict, List, Optional, Union
import numpy as np
import pandas as pd
import sympy as sp
from sklearn.utils import resample
import dowhy.interpreters as interpreters
from dowhy.causal_identifier.identified_estimand import IdentifiedEstimand
from dowhy.utils.api import parse_state
logger = logging.getLogger(__name__)
class CausalEstimator:
"""Base class for an estimator of causal effect.
Subclasses implement different estimation methods. All estimation methods are in the package "dowhy.causal_estimators"
"""
# The default number of simulations for statistical testing
DEFAULT_NUMBER_OF_SIMULATIONS_STAT_TEST = 1000
# The default number of simulations to obtain confidence intervals
DEFAULT_NUMBER_OF_SIMULATIONS_CI = 100
# The portion of the total size that should be taken each time to find the confidence intervals
# 1 is the recommended value
# https://ocw.mit.edu/courses/mathematics/18-05-introduction-to-probability-and-statistics-spring-2014/readings/MIT18_05S14_Reading24.pdf
# https://projecteuclid.org/download/pdf_1/euclid.ss/1032280214
DEFAULT_SAMPLE_SIZE_FRACTION = 1
# The default Confidence Level
DEFAULT_CONFIDENCE_LEVEL = 0.95
# Number of quantiles to discretize continuous columns, for applying groupby
NUM_QUANTILES_TO_DISCRETIZE_CONT_COLS = 5
# Prefix to add to temporary categorical variables created after discretization
TEMP_CAT_COLUMN_PREFIX = "__categorical__"
DEFAULT_NOTIMPLEMENTEDERROR_MSG = "not yet implemented for {0}. If you would like this to be implemented in the next version, please raise an issue at https://github.com/py-why/dowhy/issues"
BootstrapEstimates = namedtuple("BootstrapEstimates", ["estimates", "params"])
DEFAULT_INTERPRET_METHOD = ["textual_effect_interpreter"]
# std args to be removed from locals() before being passed to args_dict
_STD_INIT_ARGS = ("self", "__class__", "args", "kwargs")
def __init__(
self,
identified_estimand: IdentifiedEstimand,
test_significance: bool = False,
evaluate_effect_strength: bool = False,
confidence_intervals: bool = False,
num_null_simulations: int = DEFAULT_NUMBER_OF_SIMULATIONS_STAT_TEST,
num_simulations: int = DEFAULT_NUMBER_OF_SIMULATIONS_CI,
sample_size_fraction: int = DEFAULT_SAMPLE_SIZE_FRACTION,
confidence_level: float = DEFAULT_CONFIDENCE_LEVEL,
need_conditional_estimates: Union[bool, str] = "auto",
num_quantiles_to_discretize_cont_cols: int = NUM_QUANTILES_TO_DISCRETIZE_CONT_COLS,
**_,
):
"""Initializes an estimator with data and names of relevant variables.
This method is called from the constructors of its child classes.
:param identified_estimand: probability expression
representing the target identified estimand to estimate.
:param test_significance: Binary flag or a string indicating whether to test significance and by which method. All estimators support test_significance="bootstrap" that estimates a p-value for the obtained estimate using the bootstrap method. Individual estimators can override this to support custom testing methods. The bootstrap method supports an optional parameter, num_null_simulations. If False, no testing is done. If True, significance of the estimate is tested using the custom method if available, otherwise by bootstrap.
:param evaluate_effect_strength: (Experimental) whether to evaluate the strength of effect
:param confidence_intervals: Binary flag or a string indicating whether the confidence intervals should be computed and which method should be used. All methods support estimation of confidence intervals using the bootstrap method by using the parameter confidence_intervals="bootstrap". The bootstrap method takes in two arguments (num_simulations and sample_size_fraction) that can be optionally specified in the params dictionary. Estimators may also override this to implement their own confidence interval method. If this parameter is False, no confidence intervals are computed. If True, confidence intervals are computed by the estimator's specific method if available, otherwise through bootstrap
:param num_null_simulations: The number of simulations for testing the
statistical significance of the estimator
:param num_simulations: The number of simulations for finding the
confidence interval (and/or standard error) for a estimate
:param sample_size_fraction: The size of the sample for the bootstrap
estimator
:param confidence_level: The confidence level of the confidence
interval estimate
:param need_conditional_estimates: Boolean flag indicating whether
conditional estimates should be computed. Defaults to True if
there are effect modifiers in the graph
:param num_quantiles_to_discretize_cont_cols: The number of quantiles
into which a numeric effect modifier is split, to enable
estimation of conditional treatment effect over it.
:param kwargs: (optional) Additional estimator-specific parameters; unrecognized extras are ignored by this base class.
:returns: an instance of the estimator class.
"""
self._target_estimand = identified_estimand
self._significance_test = test_significance
self._effect_strength_eval = evaluate_effect_strength
self._confidence_intervals = confidence_intervals
# Setting the default interpret method
self.interpret_method = CausalEstimator.DEFAULT_INTERPRET_METHOD
self.logger = logging.getLogger(__name__)
# Check if some parameters were set, otherwise set to default values
self.num_null_simulations = num_null_simulations
self.num_simulations = num_simulations
self.sample_size_fraction = sample_size_fraction
self.confidence_level = confidence_level
self.num_quantiles_to_discretize_cont_cols = num_quantiles_to_discretize_cont_cols
# Estimate conditional estimates by default
self.need_conditional_estimates = need_conditional_estimates
self._bootstrap_estimates = None
self._bootstrap_null_estimates = None
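    # Construction sketch for the refactored API (the subclass name is
    # hypothetical): data is no longer passed to the constructor but to
    # fit() afterwards.
    #
    #     est = MyEstimatorSubclass(
    #         identified_estimand,
    #         test_significance="bootstrap",
    #         confidence_intervals=True,
    #         num_simulations=200,
    #     )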
def _set_data(self, data: pd.DataFrame, treatment_name: List[str], outcome_name: List[str]):
"""Sets the data for the estimator
:param data: data frame containing the data
:param treatment_name: name of the treatment variable
:param outcome_name: name of the outcome variable
"""
self._data = data
self._treatment_name = treatment_name
self._outcome_name = outcome_name[0]
self._treatment = self._data[self._treatment_name]
self._outcome = self._data[self._outcome_name]
def _set_effect_modifiers(self, effect_modifier_names: Optional[List[str]] = None):
"""Sets the effect modifiers for the estimator
Modifies need_conditional_estimates accordingly to effect modifiers value
:param effect_modifiers: Variables on which to compute separate
effects, or return a heterogeneous effect function. Not all
methods support this currently.
"""
self._effect_modifiers = effect_modifier_names
if effect_modifier_names is not None:
self._effect_modifier_names = [cname for cname in effect_modifier_names if cname in self._data.columns]
if len(self._effect_modifier_names) > 0:
self._effect_modifiers = self._data[self._effect_modifier_names]
self._effect_modifiers = pd.get_dummies(self._effect_modifiers, drop_first=True)
self.logger.debug("Effect modifiers: " + ",".join(self._effect_modifier_names))
else:
self._effect_modifier_names = []
else:
self._effect_modifier_names = []
self.need_conditional_estimates = (
self.need_conditional_estimates
if self.need_conditional_estimates != "auto"
else len(self._effect_modifier_names) > 0
)
def _set_identified_estimand(self, new_identified_estimand):
"""Method used internally to change the target estimand (required by some refuters)
:param new_identified_estimand: The new target_estimand to use
"""
self._target_estimand = new_identified_estimand
def get_new_estimator_object(
self,
identified_estimand,
test_significance=False,
evaluate_effect_strength=False,
confidence_intervals=None,
):
"""Create a new estimator of the same type as the one passed in the estimate argument.
Creates a new object with the identified_estimand
:param identified_estimand: IdentifiedEstimand
An instance of the identified estimand class that provides the information with
respect to which causal pathways are employed when the treatment affects the outcome
:returns: A new instance of the same estimator class that had generated the given estimate.
"""
new_estimator = copy.deepcopy(self)
new_estimator._target_estimand = identified_estimand
new_estimator._test_significance = test_significance
new_estimator._evaluate_effect_strength = evaluate_effect_strength
new_estimator._confidence_intervals = (
self._confidence_intervals if confidence_intervals is None else confidence_intervals
)
return new_estimator
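    # Sketch (names illustrative): refuters use this to clone a configured
    # estimator and point it at a modified estimand before refitting.
    #
    #     refute_est = est.get_new_estimator_object(new_estimand)
    #     refute_est.fit(
    #         data,
    #         new_estimand.treatment_variable,
    #         new_estimand.outcome_variable,
    #         effect_modifier_names=est._effect_modifier_names,
    #     )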
def estimate_effect_naive(self):
# TODO Only works for binary treatment
df_withtreatment = self._data.loc[self._data[self._treatment_name] == 1]
df_notreatment = self._data.loc[self._data[self._treatment_name] == 0]
est = np.mean(df_withtreatment[self._outcome_name]) - np.mean(df_notreatment[self._outcome_name])
return CausalEstimate(est, None, None, control_value=0, treatment_value=1)
def _estimate_effect_fn(self, data_df):
"""Function used in conditional effect estimation. This function is to be overridden by each child estimator.
The overridden function should take in a dataframe as input and return the estimate for that data.
"""
raise NotImplementedError(
("Conditional treatment effects are " + CausalEstimator.DEFAULT_NOTIMPLEMENTEDERROR_MSG).format(
self.__class__
)
)
def _estimate_conditional_effects(self, estimate_effect_fn, effect_modifier_names=None, num_quantiles=None):
"""Estimate conditional treatment effects. Common method for all estimators that utilizes a specific estimate_effect_fn implemented by each child estimator.
If a numeric effect modifier is provided, it is discretized into quantile bins. If you would like a custom discretization, you can do so yourself: create a new column containing the discretized effect modifier and then include that column's name in the effect_modifier_names argument.
:param estimate_effect_fn: Function that has a single parameter (a data frame) and returns the treatment effect estimate on that data.
:param effect_modifier_names: Names of effect modifier variables over which the conditional effects will be estimated. If not provided, defaults to the effect modifiers specified during creation of the CausalEstimator object.
:param num_quantiles: The number of quantiles into which a numeric effect modifier variable is discretized. Does not affect any categorical effect modifiers.
:returns: A (multi-index) dataframe that provides separate effects for each value of the (discretized) effect modifiers.
"""
# Defaulting to class default values if parameters are not provided
if effect_modifier_names is None:
effect_modifier_names = self._effect_modifier_names
if num_quantiles is None:
num_quantiles = self.num_quantiles_to_discretize_cont_cols
# Checking that there is at least one effect modifier
if not effect_modifier_names:
raise ValueError("At least one effect modifier should be specified to compute conditional effects.")
# Making sure that effect_modifier_names is a list
effect_modifier_names = parse_state(effect_modifier_names)
if not all(em in self._effect_modifier_names for em in effect_modifier_names):
self.logger.warning(
"At least one of the provided effect modifiers was not included while fitting the estimator. You may get incorrect results. To resolve, fit the estimator again by providing the updated effect modifiers in estimate_effect()."
)
# Making a copy since we are going to be changing effect modifier names
effect_modifier_names = effect_modifier_names.copy()
prefix = CausalEstimator.TEMP_CAT_COLUMN_PREFIX
# For every numeric effect modifier, adding a temp categorical column
for i in range(len(effect_modifier_names)):
em = effect_modifier_names[i]
if pd.api.types.is_numeric_dtype(self._data[em].dtypes):
self._data[prefix + str(em)] = pd.qcut(self._data[em], num_quantiles, duplicates="drop")
effect_modifier_names[i] = prefix + str(em)
# Grouping by effect modifiers and computing effect separately
by_effect_mods = self._data.groupby(effect_modifier_names)
cond_est_fn = lambda x: self._do(self._treatment_value, x) - self._do(self._control_value, x)
conditional_estimates = by_effect_mods.apply(estimate_effect_fn)
# Deleting the temporary categorical columns
for em in effect_modifier_names:
if em.startswith(prefix):
self._data.pop(em)
return conditional_estimates
def _do(self, x, data_df=None):
raise NotImplementedError(
("Do-operator is " + CausalEstimator.DEFAULT_NOTIMPLEMENTEDERROR_MSG).format(self.__class__)
)
def do(self, x, data_df=None):
"""Method that implements the do-operator.
Given a value x for the treatment, returns the expected value of the outcome when the treatment is intervened to a value x.
:param x: Value of the treatment
:param data_df: Data on which the do-operator is to be applied.
:returns: Value of the outcome when treatment is intervened/set to x.
"""
est = self._do(x, data_df)
return est
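    # Sketch of the do-operator for subclasses that implement _do (names
    # illustrative): the expected outcomes under the interventions T := 1 and
    # T := 0 can be differenced as a sanity check on the effect estimate.
    #
    #     y_do_1 = est.do(1)
    #     y_do_0 = est.do(0)
    #     effect_check = y_do_1 - y_do_0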
def construct_symbolic_estimator(self, estimand):
raise NotImplementedError(("Symbolic estimator string is ").format(self.__class__))
def _generate_bootstrap_estimates(self, num_bootstrap_simulations, sample_size_fraction):
"""Helper function to generate causal estimates over bootstrapped samples.
:param num_bootstrap_simulations: Number of simulations for the bootstrap method.
:param sample_size_fraction: Fraction of the dataset to be resampled.
:returns: A collections.namedtuple containing a list of bootstrapped estimates and a dictionary containing parameters used for the bootstrap.
"""
# The array that stores the results of all estimations
simulation_results = np.zeros(num_bootstrap_simulations)
# Derive the sample size as a fraction of the population size
sample_size = int(sample_size_fraction * len(self._data))
if sample_size > len(self._data):
self.logger.warning("WARN: The sample size is greater than the data being sampled")
self.logger.info("INFO: The sample size: {}".format(sample_size))
self.logger.info("INFO: The number of simulations: {}".format(num_bootstrap_simulations))
# Perform the set number of simulations
for index in range(num_bootstrap_simulations):
new_data = resample(self._data, n_samples=sample_size)
new_estimator = self.get_new_estimator_object(
self._target_estimand,
# names of treatment and outcome
test_significance=False,
evaluate_effect_strength=False,
confidence_intervals=False,
)
new_estimator.fit(
new_data,
self._target_estimand.treatment_variable,
self._target_estimand.outcome_variable,
effect_modifier_names=self._effect_modifier_names,
)
new_effect = new_estimator.estimate_effect(
treatment_value=self._treatment_value,
control_value=self._control_value,
target_units=self._target_units,
)
simulation_results[index] = new_effect.value
estimates = CausalEstimator.BootstrapEstimates(
simulation_results,
{"num_simulations": num_bootstrap_simulations, "sample_size_fraction": sample_size_fraction},
)
return estimates
def _estimate_confidence_intervals_with_bootstrap(
self, estimate_value, confidence_level=None, num_simulations=None, sample_size_fraction=None
):
"""
Method to compute confidence interval using bootstrapped sampling.
:param estimate_value: obtained estimate's value
:param confidence_level: The level for which to compute CI (e.g., 95% confidence level translates to confidence_level=0.95)
:param num_simulations: The number of simulations to be performed to get the bootstrap confidence intervals.
:param sample_size_fraction: The fraction of the dataset to be resampled.
:returns: confidence interval at the specified level.
For more details on bootstrap or resampling statistics, refer to the following links:
https://ocw.mit.edu/courses/mathematics/18-05-introduction-to-probability-and-statistics-spring-2014/readings/MIT18_05S14_Reading24.pdf
https://projecteuclid.org/download/pdf_1/euclid.ss/1032280214
"""
# Using class default parameters if not specified
if num_simulations is None:
num_simulations = self.num_simulations
if sample_size_fraction is None:
sample_size_fraction = self.sample_size_fraction
# Checking if bootstrap_estimates are already computed
if self._bootstrap_estimates is None:
self._bootstrap_estimates = self._generate_bootstrap_estimates(num_simulations, sample_size_fraction)
elif CausalEstimator.is_bootstrap_parameter_changed(self._bootstrap_estimates.params, locals()):
# Check if any parameter has changed since the previous bootstrap run
self._bootstrap_estimates = self._generate_bootstrap_estimates(num_simulations, sample_size_fraction)
# Now use the data obtained from the simulations to get the value of the confidence estimates
bootstrap_estimates = self._bootstrap_estimates.estimates
# Get the variations of each bootstrap estimate and sort
bootstrap_variations = [bootstrap_estimate - estimate_value for bootstrap_estimate in bootstrap_estimates]
sorted_bootstrap_variations = np.sort(bootstrap_variations)
# Now we take the (1- p)th and the (p)th variations, where p is the chosen confidence level
upper_bound_index = int((1 - confidence_level) * len(sorted_bootstrap_variations))
lower_bound_index = int(confidence_level * len(sorted_bootstrap_variations))
# Get the lower and upper bounds by subtracting the variations from the estimate
lower_bound = estimate_value - sorted_bootstrap_variations[lower_bound_index]
upper_bound = estimate_value - sorted_bootstrap_variations[upper_bound_index]
return lower_bound, upper_bound
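    # Worked illustration (numbers illustrative): with estimate_value = 2.0,
    # confidence_level = 0.95 and 100 sorted variations, the lower bound
    # subtracts the variation at index int(0.95 * 100) = 95 and the upper
    # bound the one at index int((1 - 0.95) * 100) = 5, so large positive
    # bootstrap variations pull the lower bound down and vice versa.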
def _estimate_confidence_intervals(self, confidence_level=None, method=None, **kwargs):
"""
This method is to be overridden by the child classes, so that they
can run a confidence interval estimation method suited to the specific
causal estimator.
"""
raise NotImplementedError(
(
"This method for estimating confidence intervals is "
+ CausalEstimator.DEFAULT_NOTIMPLEMENTEDERROR_MSG
+ " Meanwhile, you can try the bootstrap method (method='bootstrap') to estimate confidence intervals."
).format(self.__class__)
)
def estimate_confidence_intervals(self, estimate_value, confidence_level=None, method=None, **kwargs):
"""Find the confidence intervals corresponding to any estimator
By default, this is done with the help of bootstrapped confidence intervals
but can be overridden if the specific estimator implements other methods of estimating confidence intervals.
If the method provided is not bootstrap, this function calls the implementation of the specific estimator.
:param estimate_value: obtained estimate's value
:param method: Method for estimating confidence intervals.
:param confidence_level: The confidence level of the confidence intervals of the estimate.
:param kwargs: Other optional args to be passed to the CI method.
:returns: The obtained confidence interval.
"""
if method is None:
if self._confidence_intervals:
method = self._confidence_intervals # this is either True or methodname
else:
method = "default"
confidence_intervals = None
if confidence_level is None:
confidence_level = self.confidence_level
if method == "default" or method is True: # user has not provided any method
try:
confidence_intervals = self._estimate_confidence_intervals(confidence_level, method=method, **kwargs)
except NotImplementedError:
confidence_intervals = self._estimate_confidence_intervals_with_bootstrap(
estimate_value, confidence_level, **kwargs
)
else:
if method == "bootstrap":
confidence_intervals = self._estimate_confidence_intervals_with_bootstrap(
estimate_value, confidence_level, **kwargs
)
else:
confidence_intervals = self._estimate_confidence_intervals(confidence_level, method=method, **kwargs)
return confidence_intervals
def _estimate_std_error_with_bootstrap(self, num_simulations=None, sample_size_fraction=None):
"""Compute standard error using the bootstrap method. Standard error
and confidence intervals use the same parameter num_simulations for
the number of bootstrap simulations.
:param num_simulations: Number of bootstrapped samples.
:param sample_size_fraction: Fraction of data to be resampled.
:returns: Standard error of the obtained estimate.
"""
# Use existing params, if new user defined params are not present
if num_simulations is None:
num_simulations = self.num_simulations
if sample_size_fraction is None:
sample_size_fraction = self.sample_size_fraction
# Checking if bootstrap_estimates are already computed
if self._bootstrap_estimates is None:
self._bootstrap_estimates = self._generate_bootstrap_estimates(num_simulations, sample_size_fraction)
elif CausalEstimator.is_bootstrap_parameter_changed(self._bootstrap_estimates.params, locals()):
# Check if any parameter is changed from the previous std error estimate
self._bootstrap_estimates = self._generate_bootstrap_estimates(num_simulations, sample_size_fraction)
std_error = np.std(self._bootstrap_estimates.estimates)
return std_error
def _estimate_std_error(self, method=None, **kwargs):
"""
This method is to be overridden by the child classes, so that they
can run a standard error estimation method suited to the specific
causal estimator.
"""
raise NotImplementedError(
(
"This method for estimating standard errors is "
+ CausalEstimator.DEFAULT_NOTIMPLEMENTEDERROR_MSG
+ " Meanwhile, you can try the bootstrap method (method='bootstrap') to estimate standard errors."
).format(self.__class__)
)
def estimate_std_error(self, method=None, **kwargs):
"""Compute standard error of an obtained causal estimate.
:param method: Method for computing the standard error.
:param kwargs: Other optional parameters to be passed to the estimating method.
:returns: Standard error of the causal estimate.
"""
if method is None:
if self._confidence_intervals:
method = self._confidence_intervals
else:
method = "default"
std_error = None
if method == "default" or method is True: # user has not provided any method
try:
std_error = self._estimate_std_error(method, **kwargs)
except NotImplementedError:
std_error = self._estimate_std_error_with_bootstrap(**kwargs)
else:
if method == "bootstrap":
std_error = self._estimate_std_error_with_bootstrap(**kwargs)
else:
std_error = self._estimate_std_error(method, **kwargs)
return std_error
def _test_significance_with_bootstrap(self, estimate_value, num_null_simulations=None):
"""Test statistical significance of an estimate using the bootstrap method.
:param estimate_value: Obtained estimate's value
:param num_null_simulations: Number of simulations for the null hypothesis
:returns: p-value of the statistical significance test.
"""
# Use existing params, if new user defined params are not present
if num_null_simulations is None:
num_null_simulations = self.num_null_simulations
do_retest = self._bootstrap_null_estimates is None or CausalEstimator.is_bootstrap_parameter_changed(
self._bootstrap_null_estimates.params, locals()
)
if do_retest:
null_estimates = np.zeros(num_null_simulations)
for i in range(num_null_simulations):
new_outcome = np.random.permutation(self._outcome)
new_data = self._data.assign(dummy_outcome=new_outcome)
# self._outcome = self._data["dummy_outcome"]
new_estimator = self.get_new_estimator_object(
self._target_estimand,
test_significance=False,
evaluate_effect_strength=False,
confidence_intervals=False,
)
new_estimator.fit(
data=new_data,
treatment_name=self._target_estimand.treatment_variable,
outcome_name=("dummy_outcome",),
effect_modifier_names=self._effect_modifier_names,
)
new_effect = new_estimator.estimate_effect(
target_units=self._target_units,
)
null_estimates[i] = new_effect.value
self._bootstrap_null_estimates = CausalEstimator.BootstrapEstimates(
null_estimates, {"num_null_simulations": num_null_simulations, "sample_size_fraction": 1}
)
# Processing the null hypothesis estimates
sorted_null_estimates = np.sort(self._bootstrap_null_estimates.estimates)
self.logger.debug("Null estimates: {0}".format(sorted_null_estimates))
median_estimate = sorted_null_estimates[int(num_null_simulations / 2)]
# Doing a two-sided test
if estimate_value > median_estimate:
# Being conservative with the p-value reported
estimate_index = np.searchsorted(sorted_null_estimates, estimate_value, side="left")
p_value = 1 - (estimate_index / num_null_simulations)
elif estimate_value <= median_estimate:
# Being conservative with the p-value reported
estimate_index = np.searchsorted(sorted_null_estimates, estimate_value, side="right")
p_value = estimate_index / num_null_simulations
# If the estimate_index is 0, it depends on the number of simulations
if p_value == 0:
p_value = (0, 1 / len(sorted_null_estimates)) # a tuple determining the range.
elif p_value == 1:
p_value = (1 - 1 / len(sorted_null_estimates), 1)
signif_dict = {"p_value": p_value}
return signif_dict
def _test_significance(self, estimate_value, method=None, **kwargs):
"""
This method is to be overridden by the child classes, so that they
can run a significance test suited to the specific
causal estimator.
"""
raise NotImplementedError(
(
"This method for testing statistical significance is "
+ CausalEstimator.DEFAULT_NOTIMPLEMENTEDERROR_MSG
+ " Meanwhile, you can try the bootstrap method (method='bootstrap') to test statistical significance."
).format(self.__class__)
)
def test_significance(self, estimate_value, method=None, **kwargs):
"""Test statistical significance of obtained estimate.
By default, uses resampling to create a non-parametric significance test.
This is a general procedure; individual child estimators can implement different methods.
If the method name is different from "bootstrap", this function calls the
implementation of the child estimator.
:param self: object instance of class Estimator
:param estimate_value: obtained estimate's value
:param method: Method for checking statistical significance
:returns: p-value from the significance test
"""
if method is None:
if self._significance_test:
method = self._significance_test # this is either True or methodname
else:
method = "default"
signif_dict = None
if method == "default" or method is True: # user has not provided any method
try:
signif_dict = self._test_significance(estimate_value, method, **kwargs)
except NotImplementedError:
signif_dict = self._test_significance_with_bootstrap(estimate_value, **kwargs)
else:
if method == "bootstrap":
signif_dict = self._test_significance_with_bootstrap(estimate_value, **kwargs)
else:
signif_dict = self._test_significance(estimate_value, method, **kwargs)
return signif_dict
def evaluate_effect_strength(self, estimate):
fraction_effect_explained = self._evaluate_effect_strength(estimate, method="fraction-effect")
# Need to test r-squared before supporting
# effect_r_squared = self._evaluate_effect_strength(estimate, method="r-squared")
strength_dict = {
"fraction-effect": fraction_effect_explained
# 'r-squared': effect_r_squared
}
return strength_dict
def _evaluate_effect_strength(self, estimate, method="fraction-effect"):
supported_methods = ["fraction-effect"]
if method not in supported_methods:
raise NotImplementedError("This method is not supported for evaluating effect strength")
if method == "fraction-effect":
naive_obs_estimate = self.estimate_effect_naive()
self.logger.debug("Estimate: %s, naive estimate: %s", estimate.value, naive_obs_estimate.value)
fraction_effect_explained = estimate.value / naive_obs_estimate.value
return fraction_effect_explained
# elif method == "r-squared":
# outcome_mean = np.mean(self._outcome)
# total_variance = np.sum(np.square(self._outcome - outcome_mean))
# Assuming a linear model with one variable: the treatment
# Currently only works for continuous y
# causal_model = outcome_mean + estimate.value*self._treatment
# squared_residual = np.sum(np.square(self._outcome - causal_model))
# r_squared = 1 - (squared_residual/total_variance)
# return r_squared
else:
return None
def update_input(self, treatment_value, control_value, target_units):
self._control_value = control_value
self._treatment_value = treatment_value
self._target_units = target_units
@staticmethod
def is_bootstrap_parameter_changed(bootstrap_estimates_params, given_params):
"""Check whether parameters of the bootstrap have changed.
This is an efficiency method that checks if fresh resampling of the bootstrap samples is required.
Returns True if parameters have changed and resampling should be done again.
:param bootstrap_estimates_params: A dictionary of parameters for the current bootstrap samples
:param given_params: A dictionary of parameters passed by the user
:returns: A binary flag denoting whether the parameters are different.
"""
is_any_parameter_changed = False
for prm, val in bootstrap_estimates_params.items():
given_val = given_params.get(prm, None)
if given_val is not None and given_val != val:
is_any_parameter_changed = True
break
return is_any_parameter_changed
def target_units_tostr(self):
s = ""
if type(self._target_units) is str:
s += self._target_units
elif callable(self._target_units):
s += "Data subset defined by a function"
elif isinstance(self._target_units, pd.DataFrame):
s += "Data subset provided as a data frame"
return s
def signif_results_tostr(self, signif_results):
s = ""
pval = signif_results["p_value"]
if type(pval) is tuple:
s += "[{0}, {1}]".format(pval[0], pval[1])
else:
s += "{0}".format(pval)
return s
def estimate_effect(
data: pd.DataFrame,
treatment: Union[str, List[str]],
outcome: Union[str, List[str]],
identifier_name: str,
estimator: CausalEstimator,
control_value: int = 0,
treatment_value: int = 1,
target_units: str = "ate",
effect_modifiers: List[str] = None,
fit_estimator: bool = True,
method_params: Optional[Dict] = None,
):
"""Estimate the identified causal effect.
You can directly call any of the EconML estimation methods by passing the corresponding estimator instance. The convention for method names is "backdoor.econml.path-to-estimator-class". For example, for the double machine learning estimator ("DML" class) that is located inside the "dml" module of EconML, you can use the method name "backdoor.econml.dml.DML". CausalML estimators can also be called. See `this demo notebook <https://py-why.github.io/dowhy/example_notebooks/dowhy-conditional-treatment-effects.html>`_.
:param data: data frame containing the data on which to estimate the effect
:param treatment: Name of the treatment
:param outcome: Name of the outcome
:param identifier_name: name of the identification method used (e.g., "backdoor")
:param estimator: Instance of a CausalEstimator to use
:param control_value: Value of the treatment in the control group, for effect estimation. If treatment is multi-variate, this can be a list.
:param treatment_value: Value of the treatment in the treated group, for effect estimation. If treatment is multi-variate, this can be a list.
:param target_units: (Experimental) The units for which the treatment effect should be estimated. This can be of three types. (1) a string for common specifications of target units (namely, "ate", "att" and "atc"), (2) a lambda function that can be used as an index for the data (pandas DataFrame), or (3) a new DataFrame that contains values of the effect_modifiers and effect will be estimated only for this new data.
:param effect_modifiers: Names of effect modifier variables can be (optionally) specified here too, since they do not affect identification. If None, the effect_modifiers from the CausalModel are used.
:param fit_estimator: Boolean flag on whether to fit the estimator.
Setting it to False is useful to estimate the effect on new data using a previously fitted estimator.
:param method_params: Dictionary containing any method-specific parameters. A "fit_params" entry, if present, is forwarded to the estimator's fit() method.
:returns: An instance of the CausalEstimate class, containing the causal effect estimate
and other method-dependent information
"""
if effect_modifiers is None:
effect_modifiers = []
treatment = parse_state(treatment)
outcome = parse_state(outcome)
causal_estimator_class = estimator.__class__
identified_estimand = estimator._target_estimand
identified_estimand.set_identifier_method(identifier_name)
if identified_estimand.no_directed_path:
logger.warning("No directed path from {0} to {1}.".format(treatment, outcome))
return CausalEstimate(
0, identified_estimand, None, control_value=control_value, treatment_value=treatment_value
)
# Check if estimator's target estimand is identified
elif identified_estimand.estimands[identifier_name] is None:
logger.error("No valid identified estimand available.")
return CausalEstimate(None, None, None, control_value=control_value, treatment_value=treatment_value)
if fit_estimator:
estimator.fit(
data=data,
treatment_name=treatment,
outcome_name=outcome,
effect_modifier_names=effect_modifiers,
**(method_params["fit_params"] if method_params is not None and "fit_params" in method_params else {}),
)
estimate = estimator.estimate_effect(
treatment_value=treatment_value,
control_value=control_value,
target_units=target_units,
confidence_intervals=estimator._confidence_intervals,
)
# Store parameters inside estimate object for refutation methods
# TODO: This add_params needs to move to the estimator class
# inside estimate_effect and estimate_conditional_effect
estimate.add_params(
estimand_type=identified_estimand.estimand_type,
estimator_class=causal_estimator_class,
test_significance=estimator._significance_test,
evaluate_effect_strength=estimator._effect_strength_eval,
confidence_intervals=estimator._confidence_intervals,
target_units=target_units,
effect_modifiers=effect_modifiers,
)
if estimator._significance_test:
estimator.test_significance(estimate.value, method=estimator._significance_test)
if estimator._confidence_intervals:
estimator.estimate_confidence_intervals(
estimate.value, confidence_level=estimator.confidence_level, method=estimator._confidence_intervals
)
if estimator._effect_strength_eval:
effect_strength_dict = estimator.evaluate_effect_strength(estimate)
estimate.add_effect_strength(effect_strength_dict)
return estimate
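# Usage sketch for the refactored functional API (the estimator subclass and
# variable names are illustrative): the estimator is fitted on `df` here
# unless fit_estimator=False is passed.
#
#     est = MyEstimatorSubclass(identified_estimand, confidence_intervals=True)
#     estimate = estimate_effect(
#         data=df,
#         treatment="v0",
#         outcome="y",
#         identifier_name="backdoor",
#         estimator=est,
#         target_units="ate",
#     )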
class CausalEstimate:
"""Class for the estimate object that every causal estimator returns"""
def __init__(
self,
estimate,
target_estimand,
realized_estimand_expr,
control_value,
treatment_value,
conditional_estimates=None,
**kwargs,
):
self.value = estimate
self.target_estimand = target_estimand
self.realized_estimand_expr = realized_estimand_expr
self.control_value = control_value
self.treatment_value = treatment_value
self.conditional_estimates = conditional_estimates
self.params = kwargs
if self.params is not None:
for key, value in self.params.items():
setattr(self, key, value)
self.effect_strength = None
def add_estimator(self, estimator_instance):
self.estimator = estimator_instance
def add_effect_strength(self, strength_dict):
self.effect_strength = strength_dict
def add_params(self, **kwargs):
self.params.update(kwargs)
def get_confidence_intervals(self, confidence_level=None, method=None, **kwargs):
"""Get confidence intervals of the obtained estimate.
By default, this is done with the help of bootstrapped confidence intervals
but can be overridden if the specific estimator implements other methods of estimating confidence intervals.
If the method provided is not bootstrap, this function calls the implementation of the specific estimator.
:param method: Method for estimating confidence intervals.
:param confidence_level: The confidence level of the confidence intervals of the estimate.
:param kwargs: Other optional args to be passed to the CI method.
:returns: The obtained confidence interval.
"""
confidence_intervals = self.estimator.estimate_confidence_intervals(
estimate_value=self.value, confidence_level=confidence_level, method=method, **kwargs
)
return confidence_intervals
def get_standard_error(self, method=None, **kwargs):
"""Get standard error of the obtained estimate.
By default, this is done with the help of bootstrapped standard errors
but can be overridden if the specific estimator implements other methods of estimating standard error.
If the method provided is not bootstrap, this function calls the implementation of the specific estimator.
:param method: Method for computing the standard error.
:param kwargs: Other optional parameters to be passed to the estimating method.
:returns: Standard error of the causal estimate.
"""
std_error = self.estimator.estimate_std_error(method=method, **kwargs)
return std_error
def test_stat_significance(self, method=None, **kwargs):
"""Test statistical significance of the estimate obtained.
By default, uses resampling to create a non-parametric significance test.
Individual child estimators can implement different methods.
If the method name is different from "bootstrap", this function calls the
implementation of the child estimator.
:param method: Method for checking statistical significance
:param kwargs: Other optional parameters to be passed to the estimating method.
:returns: p-value from the significance test
"""
signif_results = self.estimator.test_significance(self.value, method=method, **kwargs)
return {"p_value": signif_results["p_value"]}
def estimate_conditional_effects(
self, effect_modifiers=None, num_quantiles=CausalEstimator.NUM_QUANTILES_TO_DISCRETIZE_CONT_COLS
):
"""Estimate treatment effect conditioned on given variables.
If a numeric effect modifier is provided, it is discretized into quantile bins. If you would like a custom discretization, you can do so yourself: create a new column containing the discretized effect modifier and then include that column's name in the effect_modifier_names argument.
:param effect_modifiers: Names of effect modifier variables over which the conditional effects will be estimated. If not provided, defaults to the effect modifiers specified during creation of the CausalEstimator object.
:param num_quantiles: The number of quantiles into which a numeric effect modifier variable is discretized. Does not affect any categorical effect modifiers.
:returns: A (multi-index) dataframe that provides separate effects for each value of the (discretized) effect modifiers.
"""
return self.estimator._estimate_conditional_effects(
self.estimator._estimate_effect_fn, effect_modifiers, num_quantiles
)
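    # Sketch (names illustrative): conditional effects are grouped by an
    # effect modifier that was supplied when the estimator was fitted; numeric
    # modifiers are first cut into quantile bins.
    #
    #     cond = estimate.estimate_conditional_effects(
    #         effect_modifiers=["X0"], num_quantiles=4
    #     )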
def interpret(self, method_name=None, **kwargs):
"""Interpret the causal estimate.
:param method_name: Method used (string) or a list of methods. If None, then the default for the specific estimator is used.
:param kwargs: Optional parameters that are directly passed to the interpreter method.
:returns: None
"""
if method_name is None:
method_name = self.estimator.interpret_method
method_name_arr = parse_state(method_name)
for method in method_name_arr:
interpreter = interpreters.get_class_object(method)
interpreter(self, **kwargs).interpret()
def __str__(self):
s = "*** Causal Estimate ***\n"
# No estimand was identified (identification failed)
if self.target_estimand is None:
return "Estimation failed! No relevant identified estimand available for this estimation method."
s += "\n## Identified estimand\n{0}".format(self.target_estimand.__str__(only_target_estimand=True))
s += "\n## Realized estimand\n{0}".format(self.realized_estimand_expr)
if hasattr(self, "estimator"):
s += "\nTarget units: {0}\n".format(self.estimator.target_units_tostr())
s += "\n## Estimate\n"
s += "Mean value: {0}\n".format(self.value)
s += ""
if hasattr(self, "cate_estimates"):
s += "Effect estimates: {0}\n".format(self.cate_estimates)
if hasattr(self, "estimator"):
if self.estimator._significance_test:
s += "p-value: {0}\n".format(self.estimator.signif_results_tostr(self.test_stat_significance()))
if self.estimator._confidence_intervals:
s += "{0}% confidence interval: {1}\n".format(
100 * self.estimator.confidence_level, self.get_confidence_intervals()
)
if self.conditional_estimates is not None:
s += "### Conditional Estimates\n"
s += str(self.conditional_estimates)
if self.effect_strength is not None:
s += "\n## Effect Strength\n"
s += "Change in outcome attributable to treatment: {}\n".format(self.effect_strength["fraction-effect"])
# s += "Variance in outcome explained by treatment: {}\n".format(self.effect_strength["r-squared"])
return s
class RealizedEstimand(object):
def __init__(self, identified_estimand, estimator_name):
self.treatment_variable = identified_estimand.treatment_variable
self.outcome_variable = identified_estimand.outcome_variable
self.backdoor_variables = identified_estimand.get_backdoor_variables()
self.instrumental_variables = identified_estimand.instrumental_variables
self.estimand_type = identified_estimand.estimand_type
self.estimand_expression = None
self.assumptions = None
self.estimator_name = estimator_name
def update_assumptions(self, estimator_assumptions):
self.assumptions = estimator_assumptions
def update_estimand_expression(self, estimand_expression):
self.estimand_expression = estimand_expression
def __str__(self):
s = "Realized estimand: {0}\n".format(self.estimator_name)
s += "Realized estimand type: {0}\n".format(self.estimand_type)
s += "Estimand expression:\n{0}\n".format(sp.pretty(self.estimand_expression))
j = 1
for ass_name, ass_str in self.assumptions.items():
s += "Estimand assumption {0}, {1}: {2}\n".format(j, ass_name, ass_str)
j += 1
return s
| andresmor-ms | 11c4e0dafd6e824eb81ad14262457d954ae61468 | affe0952f4aba6845247355c171565510c2c1673 | test_significance is not used anywhere in this method. It should be passed to the estimator. | amit-sharma | 193 |
py-why/dowhy | 746 | Functional api/causal estimators | * Introduce `fit()` method to estimators.
* Refactor constructors to avoid using `*args` and `**kwargs` and have more explicit parameters.
* Refactor refuters and other parts of the code to use `fit()` and modify arguments to `estimate_effect()` | null | 2022-11-04 16:15:39+00:00 | 2022-12-03 17:07:53+00:00 | dowhy/causal_estimator.py | import logging
from collections import namedtuple
from typing import Dict, List, Optional, Union
import numpy as np
import pandas as pd
import sympy as sp
from sklearn.utils import resample
import dowhy.interpreters as interpreters
from dowhy import causal_estimators
from dowhy.causal_graph import CausalGraph
from dowhy.causal_identifier.identified_estimand import IdentifiedEstimand
from dowhy.utils.api import parse_state
logger = logging.getLogger(__name__)
class CausalEstimator:
"""Base class for an estimator of causal effect.
Subclasses implement different estimation methods. All estimation methods are in the package "dowhy.causal_estimators"
"""
# The default number of simulations for statistical testing
DEFAULT_NUMBER_OF_SIMULATIONS_STAT_TEST = 1000
# The default number of simulations to obtain confidence intervals
DEFAULT_NUMBER_OF_SIMULATIONS_CI = 100
# The portion of the total size that should be taken each time to find the confidence intervals
# 1 is the recommended value
# https://ocw.mit.edu/courses/mathematics/18-05-introduction-to-probability-and-statistics-spring-2014/readings/MIT18_05S14_Reading24.pdf
# https://projecteuclid.org/download/pdf_1/euclid.ss/1032280214
DEFAULT_SAMPLE_SIZE_FRACTION = 1
# The default Confidence Level
DEFAULT_CONFIDENCE_LEVEL = 0.95
# Number of quantiles to discretize continuous columns, for applying groupby
NUM_QUANTILES_TO_DISCRETIZE_CONT_COLS = 5
# Prefix to add to temporary categorical variables created after discretization
TEMP_CAT_COLUMN_PREFIX = "__categorical__"
DEFAULT_NOTIMPLEMENTEDERROR_MSG = "not yet implemented for {0}. If you would like this to be implemented in the next version, please raise an issue at https://github.com/microsoft/dowhy/issues"
BootstrapEstimates = namedtuple("BootstrapEstimates", ["estimates", "params"])
DEFAULT_INTERPRET_METHOD = ["textual_effect_interpreter"]
# std args to be removed from locals() before being passed to args_dict
_STD_INIT_ARGS = ("self", "__class__", "args", "kwargs")
def __init__(
self,
data,
identified_estimand,
treatment,
outcome,
control_value=0,
treatment_value=1,
test_significance=False,
evaluate_effect_strength=False,
confidence_intervals=False,
target_units=None,
effect_modifiers=None,
num_null_simulations=DEFAULT_NUMBER_OF_SIMULATIONS_STAT_TEST,
num_simulations=DEFAULT_NUMBER_OF_SIMULATIONS_CI,
sample_size_fraction=DEFAULT_SAMPLE_SIZE_FRACTION,
confidence_level=DEFAULT_CONFIDENCE_LEVEL,
need_conditional_estimates="auto",
num_quantiles_to_discretize_cont_cols=NUM_QUANTILES_TO_DISCRETIZE_CONT_COLS,
**kwargs,
):
"""Initializes an estimator with data and names of relevant variables.
This method is called from the constructors of its child classes.
:param data: data frame containing the data
:param identified_estimand: probability expression
representing the target identified estimand to estimate.
:param treatment: name of the treatment variable
:param outcome: name of the outcome variable
:param control_value: Value of the treatment in the control group, for effect estimation. If treatment is multi-variate, this can be a list.
:param treatment_value: Value of the treatment in the treated group, for effect estimation. If treatment is multi-variate, this can be a list.
:param test_significance: Binary flag or a string indicating whether to test significance and by which method. All estimators support test_significance="bootstrap" that estimates a p-value for the obtained estimate using the bootstrap method. Individual estimators can override this to support custom testing methods. The bootstrap method supports an optional parameter, num_null_simulations. If False, no testing is done. If True, significance of the estimate is tested using the custom method if available, otherwise by bootstrap.
:param evaluate_effect_strength: (Experimental) whether to evaluate the strength of effect
:param confidence_intervals: Binary flag or a string indicating whether the confidence intervals should be computed and which method should be used. All methods support estimation of confidence intervals using the bootstrap method by using the parameter confidence_intervals="bootstrap". The bootstrap method takes in two arguments (num_simulations and sample_size_fraction) that can be optionally specified in the params dictionary. Estimators may also override this to implement their own confidence interval method. If this parameter is False, no confidence intervals are computed. If True, confidence intervals are computed by the estimator's specific method if available, otherwise through bootstrap.
:param target_units: The units for which the treatment effect should be estimated. This can be a string for common specifications of target units (namely, "ate", "att" and "atc"). It can also be a lambda function that can be used as an index for the data (pandas DataFrame). Alternatively, it can be a new DataFrame that contains values of the effect_modifiers and effect will be estimated only for this new data.
:param effect_modifiers: Variables on which to compute separate
effects, or return a heterogeneous effect function. Not all
methods support this currently.
:param num_null_simulations: The number of simulations for testing the
statistical significance of the estimator
:param num_simulations: The number of simulations for finding the
confidence interval (and/or standard error) for an estimate
:param sample_size_fraction: The size of the sample for the bootstrap
estimator
:param confidence_level: The confidence level of the confidence
interval estimate
:param need_conditional_estimates: Boolean flag indicating whether
conditional estimates should be computed. Defaults to True if
there are effect modifiers in the graph
:param num_quantiles_to_discretize_cont_cols: The number of quantiles
into which a numeric effect modifier is split, to enable
estimation of conditional treatment effect over it.
:param kwargs: (optional) Additional estimator-specific parameters
:returns: an instance of the estimator class.
"""
self._data = data
self._target_estimand = identified_estimand
# Currently estimation methods only support univariate treatment and outcome
self._treatment_name = treatment
self._outcome_name = outcome[0] # assuming one-dimensional outcome
self._control_value = control_value
self._treatment_value = treatment_value
self._significance_test = test_significance
self._effect_strength_eval = evaluate_effect_strength
self._target_units = target_units
self._effect_modifier_names = effect_modifiers
self._confidence_intervals = confidence_intervals
self._bootstrap_estimates = None # for confidence intervals and std error
self._bootstrap_null_estimates = None # for significance test
self._effect_modifiers = None
self.method_params = kwargs
# Setting the default interpret method
self.interpret_method = CausalEstimator.DEFAULT_INTERPRET_METHOD
self.logger = logging.getLogger(__name__)
# Setting treatment and outcome values
if self._data is not None:
self._treatment = self._data[self._treatment_name]
self._outcome = self._data[self._outcome_name]
if self._effect_modifier_names:
# only add the observed nodes
self._effect_modifier_names = [
cname for cname in self._effect_modifier_names if cname in self._data.columns
]
if len(self._effect_modifier_names) > 0:
self._effect_modifiers = self._data[self._effect_modifier_names]
self._effect_modifiers = pd.get_dummies(self._effect_modifiers, drop_first=True)
self.logger.debug("Effect modifiers: " + ",".join(self._effect_modifier_names))
else:
self._effect_modifier_names = None
# Check if some parameters were set, otherwise set to default values
self.num_null_simulations = num_null_simulations
self.num_simulations = num_simulations
self.sample_size_fraction = sample_size_fraction
self.confidence_level = confidence_level
self.num_quantiles_to_discretize_cont_cols = num_quantiles_to_discretize_cont_cols
# Estimate conditional estimates by default
self.need_conditional_estimates = (
need_conditional_estimates if need_conditional_estimates != "auto" else bool(self._effect_modifier_names)
)
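# Construction sketch (hypothetical variable names; any concrete subclass, e.g.
# dowhy.causal_estimators.propensity_score_matching_estimator.PropensityScoreMatchingEstimator,
# accepts this base signature):
#   estimator = PropensityScoreMatchingEstimator(
#       data=df,
#       identified_estimand=identified_estimand,
#       treatment=["v0"],
#       outcome=["y"],
#       test_significance="bootstrap",  # p-value via resampling
#       confidence_intervals=True,      # estimator-specific CI if available, else bootstrap
#       target_units="ate",
#   )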
@staticmethod
def get_estimator_object(new_data, identified_estimand, estimate):
"""Create a new estimator of the same type as the one passed in the estimate argument.
Creates a new object with new_data and the identified_estimand
:param new_data: np.ndarray, pd.Series, pd.DataFrame
The newly assigned data on which the estimator should run
:param identified_estimand: IdentifiedEstimand
An instance of the identified estimand class that provides the information with
respect to which causal pathways are employed when the treatment affects the outcome
:param estimate: CausalEstimate
It is an already existing estimate whose properties we wish to replicate
:returns: An instance of the same estimator class that had generated the given estimate.
"""
estimator_class = estimate.params["estimator_class"]
new_estimator = estimator_class(
new_data,
identified_estimand,
identified_estimand.treatment_variable,
identified_estimand.outcome_variable,
# names of treatment and outcome
control_value=estimate.control_value,
treatment_value=estimate.treatment_value,
test_significance=False,
evaluate_effect_strength=False,
confidence_intervals=estimate.params["confidence_intervals"],
target_units=estimate.params["target_units"],
effect_modifiers=estimate.params["effect_modifiers"],
**estimate.params["method_params"] if estimate.params["method_params"] is not None else {},
)
return new_estimator
def _estimate_effect(self):
"""This method is to be overriden by the child classes, so that they can run the estimation technique of their choice"""
raise NotImplementedError(
("Main estimation method is " + CausalEstimator.DEFAULT_NOTIMPLEMENTEDERROR_MSG).format(self.__class__)
)
def estimate_effect(self):
"""Base estimation method that calls the estimate_effect method of its calling subclass.
Can optionally also test significance and estimate effect strength for any returned estimate.
:param self: object instance of class Estimator
:returns: A CausalEstimate instance that contains point estimates of average and conditional effects. Based on the parameters provided, it optionally includes confidence intervals, standard errors,statistical significance and other statistical parameters.
"""
est = self._estimate_effect()
est.add_estimator(self)
if self._significance_test:
self.test_significance(est.value, method=self._significance_test)
if self._confidence_intervals:
self.estimate_confidence_intervals(
est.value, confidence_level=self.confidence_level, method=self._confidence_intervals
)
if self._effect_strength_eval:
effect_strength_dict = self.evaluate_effect_strength(est)
est.add_effect_strength(effect_strength_dict)
return est
def estimate_effect_naive(self):
# TODO Only works for binary treatment
df_withtreatment = self._data.loc[self._data[self._treatment_name] == 1]
df_notreatment = self._data.loc[self._data[self._treatment_name] == 0]
est = np.mean(df_withtreatment[self._outcome_name]) - np.mean(df_notreatment[self._outcome_name])
return CausalEstimate(est, None, None, control_value=0, treatment_value=1)
def _estimate_effect_fn(self, data_df):
"""Function used in conditional effect estimation. This function is to be overridden by each child estimator.
The overridden function should take in a dataframe as input and return the estimate for that data.
"""
raise NotImplementedError(
("Conditional treatment effects are " + CausalEstimator.DEFAULT_NOTIMPLEMENTEDERROR_MSG).format(
self.__class__
)
)
def _estimate_conditional_effects(self, estimate_effect_fn, effect_modifier_names=None, num_quantiles=None):
"""Estimate conditional treatment effects. Common method for all estimators that utilizes a specific estimate_effect_fn implemented by each child estimator.
If a numeric effect modifier is provided, it is discretized into quantile bins. If you would like a custom discretization, you can do so yourself: create a new column containing the discretized effect modifier and then include that column's name in the effect_modifier_names argument.
:param estimate_effect_fn: Function that has a single parameter (a data frame) and returns the treatment effect estimate on that data.
:param effect_modifier_names: Names of effect modifier variables over which the conditional effects will be estimated. If not provided, defaults to the effect modifiers specified during creation of the CausalEstimator object.
:param num_quantiles: The number of quantiles into which a numeric effect modifier variable is discretized. Does not affect any categorical effect modifiers.
:returns: A (multi-index) dataframe that provides separate effects for each value of the (discretized) effect modifiers.
"""
# Defaulting to class default values if parameters are not provided
if effect_modifier_names is None:
effect_modifier_names = self._effect_modifier_names
if num_quantiles is None:
num_quantiles = self.num_quantiles_to_discretize_cont_cols
# Checking that there is at least one effect modifier
if not effect_modifier_names:
raise ValueError("At least one effect modifier should be specified to compute conditional effects.")
# Making sure that effect_modifier_names is a list
effect_modifier_names = parse_state(effect_modifier_names)
if not all(em in self._effect_modifier_names for em in effect_modifier_names):
self.logger.warning(
"At least one of the provided effect modifiers was not included while fitting the estimator. You may get incorrect results. To resolve, fit the estimator again by providing the updated effect modifiers in estimate_effect()."
)
# Making a copy since we are going to be changing effect modifier names
effect_modifier_names = effect_modifier_names.copy()
prefix = CausalEstimator.TEMP_CAT_COLUMN_PREFIX
# For every numeric effect modifier, adding a temp categorical column
for i in range(len(effect_modifier_names)):
em = effect_modifier_names[i]
if pd.api.types.is_numeric_dtype(self._data[em].dtypes):
self._data[prefix + str(em)] = pd.qcut(self._data[em], num_quantiles, duplicates="drop")
effect_modifier_names[i] = prefix + str(em)
# Grouping by effect modifiers and computing effect separately
by_effect_mods = self._data.groupby(effect_modifier_names)
cond_est_fn = lambda x: self._do(self._treatment_value, x) - self._do(self._control_value, x)
conditional_estimates = by_effect_mods.apply(estimate_effect_fn)
# Deleting the temporary categorical columns
for em in effect_modifier_names:
if em.startswith(prefix):
self._data.pop(em)
return conditional_estimates
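# Custom discretization sketch (column names are hypothetical; pandas is already
# imported as pd above): pre-bin the modifier yourself and pass the new column,
# as suggested in the docstring. The warning above applies if the column was not
# among the effect modifiers at creation/fit time.
#   df["age_band"] = pd.cut(df["age"], bins=[0, 30, 60, 100])
#   estimate.estimate_conditional_effects(effect_modifiers=["age_band"])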
def _do(self, x, data_df=None):
raise NotImplementedError(
("Do-operator is " + CausalEstimator.DEFAULT_NOTIMPLEMENTEDERROR_MSG).format(self.__class__)
)
def do(self, x, data_df=None):
"""Method that implements the do-operator.
Given a value x for the treatment, returns the expected value of the outcome when the treatment is intervened to a value x.
:param x: Value of the treatment
:param data_df: Data on which the do-operator is to be applied.
:returns: Value of the outcome when treatment is intervened/set to x.
"""
est = self._do(x, data_df)
return est
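# Do-operator sketch (only estimators that implement _do support this; values
# are hypothetical): the contrast of two interventions recovers the effect.
#   y1 = estimator.do(1, data_df=df)  # expected outcome under do(T=1)
#   y0 = estimator.do(0, data_df=df)  # expected outcome under do(T=0)
#   ate = y1 - y0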
def construct_symbolic_estimator(self, estimand):
raise NotImplementedError(("Symbolic estimator string is ").format(self.__class__))
def _generate_bootstrap_estimates(self, num_bootstrap_simulations, sample_size_fraction):
"""Helper function to generate causal estimates over bootstrapped samples.
:param num_bootstrap_simulations: Number of simulations for the bootstrap method.
:param sample_size_fraction: Fraction of the dataset to be resampled.
:returns: A collections.namedtuple containing a list of bootstrapped estimates and a dictionary containing parameters used for the bootstrap.
"""
# The array that stores the results of all estimations
simulation_results = np.zeros(num_bootstrap_simulations)
# Find the sample size as a proportion of the population size
sample_size = int(sample_size_fraction * len(self._data))
if sample_size > len(self._data):
self.logger.warning("WARN: The sample size is greater than the data being sampled")
self.logger.info("INFO: The sample size: {}".format(sample_size))
self.logger.info("INFO: The number of simulations: {}".format(num_bootstrap_simulations))
# Perform the set number of simulations
for index in range(num_bootstrap_simulations):
new_data = resample(self._data, n_samples=sample_size)
new_estimator = type(self)(
new_data,
self._target_estimand,
self._target_estimand.treatment_variable,
self._target_estimand.outcome_variable,
# names of treatment and outcome
treatment_value=self._treatment_value,
control_value=self._control_value,
test_significance=False,
evaluate_effect_strength=False,
confidence_intervals=False,
target_units=self._target_units,
effect_modifiers=self._effect_modifier_names,
**self.method_params,
)
new_effect = new_estimator.estimate_effect()
simulation_results[index] = new_effect.value
estimates = CausalEstimator.BootstrapEstimates(
simulation_results,
{"num_simulations": num_bootstrap_simulations, "sample_size_fraction": sample_size_fraction},
)
return estimates
def _estimate_confidence_intervals_with_bootstrap(
self, estimate_value, confidence_level=None, num_simulations=None, sample_size_fraction=None
):
"""
Method to compute confidence interval using bootstrapped sampling.
:param estimate_value: obtained estimate's value
:param confidence_level: The level for which to compute CI (e.g., 95% confidence level translates to confidence_level=0.95)
:param num_simulations: The number of simulations to be performed to get the bootstrap confidence intervals.
:param sample_size_fraction: The fraction of the dataset to be resampled.
:returns: confidence interval at the specified level.
For more details on bootstrap or resampling statistics, refer to the following links:
https://ocw.mit.edu/courses/mathematics/18-05-introduction-to-probability-and-statistics-spring-2014/readings/MIT18_05S14_Reading24.pdf
https://projecteuclid.org/download/pdf_1/euclid.ss/1032280214
"""
# Using class default parameters if not specified
if num_simulations is None:
num_simulations = self.num_simulations
if sample_size_fraction is None:
sample_size_fraction = self.sample_size_fraction
# Checking if bootstrap_estimates are already computed
if self._bootstrap_estimates is None:
self._bootstrap_estimates = self._generate_bootstrap_estimates(num_simulations, sample_size_fraction)
elif CausalEstimator.is_bootstrap_parameter_changed(self._bootstrap_estimates.params, locals()):
# Checked if any parameter is changed from the previous std error estimate
self._bootstrap_estimates = self._generate_bootstrap_estimates(num_simulations, sample_size_fraction)
# Now use the data obtained from the simulations to get the value of the confidence estimates
bootstrap_estimates = self._bootstrap_estimates.estimates
# Get the variations of each bootstrap estimate and sort
bootstrap_variations = [bootstrap_estimate - estimate_value for bootstrap_estimate in bootstrap_estimates]
sorted_bootstrap_variations = np.sort(bootstrap_variations)
# Now we take the (1- p)th and the (p)th variations, where p is the chosen confidence level
upper_bound_index = int((1 - confidence_level) * len(sorted_bootstrap_variations))
lower_bound_index = int(confidence_level * len(sorted_bootstrap_variations))
# Get the lower and upper bounds by subtracting the variations from the estimate
lower_bound = estimate_value - sorted_bootstrap_variations[lower_bound_index]
upper_bound = estimate_value - sorted_bootstrap_variations[upper_bound_index]
return lower_bound, upper_bound
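# Worked toy example of the interval arithmetic above (numbers are made up):
# with estimate_value = 10.0, 100 sorted bootstrap variations, and
# confidence_level = 0.95,
#   lower_bound_index = int(0.95 * 100) = 95 -> lower = 10.0 - sorted_variations[95]
#   upper_bound_index = int(0.05 * 100) = 5  -> upper = 10.0 - sorted_variations[5]
# i.e. the "basic"/pivotal bootstrap: large positive deviations pull the lower
# bound down, large negative deviations push the upper bound up.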
def _estimate_confidence_intervals(self, confidence_level=None, method=None, **kwargs):
"""
This method is to be overridden by the child classes, so that they
can run a confidence interval estimation method suited to the specific
causal estimator.
"""
raise NotImplementedError(
(
"This method for estimating confidence intervals is "
+ CausalEstimator.DEFAULT_NOTIMPLEMENTEDERROR_MSG
+ " Meanwhile, you can try the bootstrap method (method='bootstrap') to estimate confidence intervals."
).format(self.__class__)
)
def estimate_confidence_intervals(self, estimate_value, confidence_level=None, method=None, **kwargs):
"""Find the confidence intervals corresponding to any estimator
By default, this is done with the help of bootstrapped confidence intervals
but can be overridden if the specific estimator implements other methods of estimating confidence intervals.
If the method provided is not bootstrap, this function calls the implementation of the specific estimator.
:param estimate_value: obtained estimate's value
:param method: Method for estimating confidence intervals.
:param confidence_level: The confidence level of the confidence intervals of the estimate.
:param kwargs: Other optional args to be passed to the CI method.
:returns: The obtained confidence interval.
"""
if method is None:
if self._confidence_intervals:
method = self._confidence_intervals # this is either True or methodname
else:
method = "default"
confidence_intervals = None
if confidence_level is None:
confidence_level = self.confidence_level
if method == "default" or method is True: # user has not provided any method
try:
confidence_intervals = self._estimate_confidence_intervals(confidence_level, method=method, **kwargs)
except NotImplementedError:
confidence_intervals = self._estimate_confidence_intervals_with_bootstrap(
estimate_value, confidence_level, **kwargs
)
else:
if method == "bootstrap":
confidence_intervals = self._estimate_confidence_intervals_with_bootstrap(
estimate_value, confidence_level, **kwargs
)
else:
confidence_intervals = self._estimate_confidence_intervals(confidence_level, method=method, **kwargs)
return confidence_intervals
def _estimate_std_error_with_bootstrap(self, num_simulations=None, sample_size_fraction=None):
"""Compute standard error using the bootstrap method. Standard error
and confidence intervals use the same parameter num_simulations for
the number of bootstrap simulations.
:param num_simulations: Number of bootstrapped samples.
:param sample_size_fraction: Fraction of data to be resampled.
:returns: Standard error of the obtained estimate.
"""
# Use existing params, if new user defined params are not present
if num_simulations is None:
num_simulations = self.num_simulations
if sample_size_fraction is None:
sample_size_fraction = self.sample_size_fraction
# Checking if bootstrap_estimates are already computed
if self._bootstrap_estimates is None:
self._bootstrap_estimates = self._generate_bootstrap_estimates(num_simulations, sample_size_fraction)
elif CausalEstimator.is_bootstrap_parameter_changed(self._bootstrap_estimates.params, locals()):
# Check if any parameter is changed from the previous std error estimate
self._bootstrap_estimates = self._generate_bootstrap_estimates(num_simulations, sample_size_fraction)
std_error = np.std(self._bootstrap_estimates.estimates)
return std_error
def _estimate_std_error(self, method=None, **kwargs):
"""
This method is to be overridden by the child classes, so that they
can run a standard error estimation method suited to the specific
causal estimator.
"""
raise NotImplementedError(
(
"This method for estimating standard errors is "
+ CausalEstimator.DEFAULT_NOTIMPLEMENTEDERROR_MSG
+ " Meanwhile, you can try the bootstrap method (method='bootstrap') to estimate standard errors."
).format(self.__class__)
)
def estimate_std_error(self, method=None, **kwargs):
"""Compute standard error of an obtained causal estimate.
:param method: Method for computing the standard error.
:param kwargs: Other optional parameters to be passed to the estimating method.
:returns: Standard error of the causal estimate.
"""
if method is None:
if self._confidence_intervals:
method = self._confidence_intervals
else:
method = "default"
std_error = None
if method == "default" or method is True: # user has not provided any method
try:
std_error = self._estimate_std_error(method, **kwargs)
except NotImplementedError:
std_error = self._estimate_std_error_with_bootstrap(**kwargs)
else:
if method == "bootstrap":
std_error = self._estimate_std_error_with_bootstrap(**kwargs)
else:
std_error = self._estimate_std_error(method, **kwargs)
return std_error
def _test_significance_with_bootstrap(self, estimate_value, num_null_simulations=None):
"""Test statistical significance of an estimate using the bootstrap method.
:param estimate_value: Obtained estimate's value
:param num_null_simulations: Number of simulations for the null hypothesis
:returns: p-value of the statistical significance test.
"""
# Use existing params, if new user defined params are not present
if num_null_simulations is None:
num_null_simulations = self.num_null_simulations
do_retest = self._bootstrap_null_estimates is None or CausalEstimator.is_bootstrap_parameter_changed(
self._bootstrap_null_estimates.params, locals()
)
if do_retest:
null_estimates = np.zeros(num_null_simulations)
for i in range(num_null_simulations):
new_outcome = np.random.permutation(self._outcome)
new_data = self._data.assign(dummy_outcome=new_outcome)
# self._outcome = self._data["dummy_outcome"]
new_estimator = type(self)(
new_data,
self._target_estimand,
self._target_estimand.treatment_variable,
("dummy_outcome",),
test_significance=False,
evaluate_effect_strength=False,
confidence_intervals=False,
target_units=self._target_units,
effect_modifiers=self._effect_modifier_names,
**self.method_params,
)
new_effect = new_estimator.estimate_effect()
null_estimates[i] = new_effect.value
self._bootstrap_null_estimates = CausalEstimator.BootstrapEstimates(
null_estimates, {"num_null_simulations": num_null_simulations, "sample_size_fraction": 1}
)
# Processing the null hypothesis estimates
sorted_null_estimates = np.sort(self._bootstrap_null_estimates.estimates)
self.logger.debug("Null estimates: {0}".format(sorted_null_estimates))
median_estimate = sorted_null_estimates[int(num_null_simulations / 2)]
# Doing a two-sided test
if estimate_value > median_estimate:
# Being conservative with the p-value reported
estimate_index = np.searchsorted(sorted_null_estimates, estimate_value, side="left")
p_value = 1 - (estimate_index / num_null_simulations)
if estimate_value <= median_estimate:
# Being conservative with the p-value reported
estimate_index = np.searchsorted(sorted_null_estimates, estimate_value, side="right")
p_value = estimate_index / num_null_simulations
# If the estimate_index is 0, it depends on the number of simulations
if p_value == 0:
p_value = (0, 1 / len(sorted_null_estimates)) # a tuple determining the range.
elif p_value == 1:
p_value = (1 - 1 / len(sorted_null_estimates), 1)
signif_dict = {"p_value": p_value}
return signif_dict
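# Toy illustration of the two-sided p-value above (made-up numbers):
#   sorted_null = np.sort(np.array([-0.3, -0.1, 0.0, 0.2, 0.4]))  # 5 null estimates
# For estimate_value = 0.35, which lies above the median (0.0):
#   np.searchsorted(sorted_null, 0.35, side="left")  # -> 4
#   p_value = 1 - 4 / 5 = 0.2
# An estimate more extreme than every null draw would give p_value == 0, which
# is then reported as the conservative range (0, 1/5).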
def _test_significance(self, estimate_value, method=None, **kwargs):
"""
This method is to be overridden by the child classes, so that they
can run a significance test suited to the specific
causal estimator.
"""
raise NotImplementedError(
(
"This method for testing statistical significance is "
+ CausalEstimator.DEFAULT_NOTIMPLEMENTEDERROR_MSG
+ " Meanwhile, you can try the bootstrap method (method='bootstrap') to test statistical significance."
).format(self.__class__)
)
def test_significance(self, estimate_value, method=None, **kwargs):
"""Test statistical significance of obtained estimate.
By default, uses resampling to create a non-parametric significance test.
A general procedure. Individual child estimators can implement different methods.
If the method name is different from "bootstrap", this function calls the
implementation of the child estimator.
:param self: object instance of class Estimator
:param estimate_value: obtained estimate's value
:param method: Method for checking statistical significance
:returns: p-value from the significance test
"""
if method is None:
if self._significance_test:
method = self._significance_test # this is either True or methodname
else:
method = "default"
signif_dict = None
if method == "default" or method is True: # user has not provided any method
try:
signif_dict = self._test_significance(estimate_value, method, **kwargs)
except NotImplementedError:
signif_dict = self._test_significance_with_bootstrap(estimate_value, **kwargs)
else:
if method == "bootstrap":
signif_dict = self._test_significance_with_bootstrap(estimate_value, **kwargs)
else:
signif_dict = self._test_significance(estimate_value, method, **kwargs)
return signif_dict
def evaluate_effect_strength(self, estimate):
fraction_effect_explained = self._evaluate_effect_strength(estimate, method="fraction-effect")
# Need to test r-squared before supporting
# effect_r_squared = self._evaluate_effect_strength(estimate, method="r-squared")
strength_dict = {
"fraction-effect": fraction_effect_explained
# 'r-squared': effect_r_squared
}
return strength_dict
def _evaluate_effect_strength(self, estimate, method="fraction-effect"):
supported_methods = ["fraction-effect"]
if method not in supported_methods:
raise NotImplementedError("This method is not supported for evaluating effect strength")
if method == "fraction-effect":
naive_obs_estimate = self.estimate_effect_naive()
self.logger.debug(estimate.value, naive_obs_estimate.value)
fraction_effect_explained = estimate.value / naive_obs_estimate.value
return fraction_effect_explained
# elif method == "r-squared":
# outcome_mean = np.mean(self._outcome)
# total_variance = np.sum(np.square(self._outcome - outcome_mean))
# Assuming a linear model with one variable: the treatment
# Currently only works for continuous y
# causal_model = outcome_mean + estimate.value*self._treatment
# squared_residual = np.sum(np.square(self._outcome - causal_model))
# r_squared = 1 - (squared_residual/total_variance)
# return r_squared
else:
return None
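# Numeric sketch of "fraction-effect" (made-up values): if the causal estimate
# is 2.0 while the naive difference in means between treated and untreated is
# 5.0, then fraction_effect_explained = 2.0 / 5.0 = 0.4, i.e. 40% of the naive
# association is attributable to the treatment under this estimator's assumptions.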
def update_input(self, treatment_value, control_value, target_units):
self._control_value = control_value
self._treatment_value = treatment_value
self._target_units = target_units
@staticmethod
def is_bootstrap_parameter_changed(bootstrap_estimates_params, given_params):
"""Check whether parameters of the bootstrap have changed.
This is an efficiency method that checks if fresh resampling of the bootstrap samples is required.
Returns True if parameters have changed and resampling should be done again.
:param bootstrap_estimates_params: A dictionary of parameters for the current bootstrap samples
:param given_params: A dictionary of parameters passed by the user
:returns: A binary flag denoting whether the parameters are different.
"""
is_any_parameter_changed = False
for prm, val in bootstrap_estimates_params.items():
given_val = given_params.get(prm, None)
if given_val is not None and given_val != val:
is_any_parameter_changed = True
break
return is_any_parameter_changed
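# Example: a changed user-supplied value triggers resampling, a missing one does not.
#   CausalEstimator.is_bootstrap_parameter_changed({"num_simulations": 100}, {"num_simulations": 200})  # True
#   CausalEstimator.is_bootstrap_parameter_changed({"num_simulations": 100}, {"other": 1})  # False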
def target_units_tostr(self):
s = ""
if type(self._target_units) is str:
s += self._target_units
elif callable(self._target_units):
s += "Data subset defined by a function"
elif isinstance(self._target_units, pd.DataFrame):
s += "Data subset provided as a data frame"
return s
def signif_results_tostr(self, signif_results):
s = ""
pval = signif_results["p_value"]
if type(pval) is tuple:
s += "[{0}, {1}]".format(pval[0], pval[1])
else:
s += "{0}".format(pval)
return s
def estimate_effect(
treatment: Union[str, List[str]],
outcome: Union[str, List[str]],
identified_estimand: IdentifiedEstimand,
identifier_name: str,
method: CausalEstimator,
control_value: int = 0,
treatment_value: int = 1,
test_significance: Optional[bool] = None,
evaluate_effect_strength: bool = False,
confidence_intervals: bool = False,
target_units: str = "ate",
effect_modifiers: List[str] = [],
fit_estimator: bool = True,
method_params: Optional[Dict] = None,
):
"""Estimate the identified causal effect.
The estimation method is specified via the ``method`` argument, an instance of a CausalEstimator subclass. Method names follow the convention of identification method followed by the specific estimation method: "[backdoor/iv].estimation_method_name". The following methods are supported.
* Propensity Score Matching: "backdoor.propensity_score_matching"
* Propensity Score Stratification: "backdoor.propensity_score_stratification"
* Propensity Score-based Inverse Weighting: "backdoor.propensity_score_weighting"
* Linear Regression: "backdoor.linear_regression"
* Generalized Linear Models (e.g., logistic regression): "backdoor.generalized_linear_model"
* Instrumental Variables: "iv.instrumental_variable"
* Regression Discontinuity: "iv.regression_discontinuity"
In addition, you can directly call any of the EconML estimation methods. The convention is "backdoor.econml.path-to-estimator-class". For example, for the double machine learning estimator ("DML" class) that is located inside "dml" module of EconML, you can use the method name, "backdoor.econml.dml.DML". CausalML estimators can also be called. See `this demo notebook <https://py-why.github.io/dowhy/example_notebooks/dowhy-conditional-treatment-effects.html>`_.
:param treatment: Name of the treatment
:param outcome: Name of the outcome
:param identified_estimand: a probability expression
that represents the effect to be estimated. Output of
CausalModel.identify_effect method
:param identifier_name: name of the identification method (e.g., "backdoor") whose estimand should be used for estimation.
:param method: an instance of the CausalEstimator subclass that implements the chosen estimation method.
:param control_value: Value of the treatment in the control group, for effect estimation. If treatment is multi-variate, this can be a list.
:param treatment_value: Value of the treatment in the treated group, for effect estimation. If treatment is multi-variate, this can be a list.
:param test_significance: Binary flag on whether to additionally do a statistical significance test for the estimate.
:param evaluate_effect_strength: (Experimental) Binary flag on whether to estimate the relative strength of the treatment's effect. This measure can be used to compare different treatments for the same outcome (by running this method with different treatments sequentially).
:param confidence_intervals: (Experimental) Binary flag indicating whether confidence intervals should be computed.
:param target_units: (Experimental) The units for which the treatment effect should be estimated. This can be of three types. (1) a string for common specifications of target units (namely, "ate", "att" and "atc"), (2) a lambda function that can be used as an index for the data (pandas DataFrame), or (3) a new DataFrame that contains values of the effect_modifiers and effect will be estimated only for this new data.
:param effect_modifiers: Names of effect modifier variables can be (optionally) specified here too, since they do not affect identification. If None, the effect_modifiers from the CausalModel are used.
:param fit_estimator: Boolean flag on whether to fit the estimator.
Setting it to False is useful to estimate the effect on new data using a previously fitted estimator.
:param method_params: Dictionary containing any method-specific parameters. These are passed directly to the estimating method. See the docs for each estimation method for allowed method-specific params.
:returns: An instance of the CausalEstimate class, containing the causal effect estimate
and other method-dependent information
"""
treatment = parse_state(treatment)
outcome = parse_state(outcome)
causal_estimator_class = method.__class__
identified_estimand.set_identifier_method(identifier_name)
if identified_estimand.no_directed_path:
logger.warning("No directed path from {0} to {1}.".format(treatment, outcome))
return CausalEstimate(
0, identified_estimand, None, control_value=control_value, treatment_value=treatment_value
)
# Check if estimator's target estimand is identified
elif identified_estimand.estimands[identifier_name] is None:
logger.error("No valid identified estimand available.")
return CausalEstimate(None, None, None, control_value=control_value, treatment_value=treatment_value)
method.update_input(treatment_value, control_value, target_units)
estimate = method.estimate_effect()
# Store parameters inside estimate object for refutation methods
# TODO: This add_params needs to move to the estimator class
# inside estimate_effect and estimate_conditional_effect
estimate.add_params(
estimand_type=identified_estimand.estimand_type,
estimator_class=causal_estimator_class,
test_significance=test_significance,
evaluate_effect_strength=evaluate_effect_strength,
confidence_intervals=confidence_intervals,
target_units=target_units,
effect_modifiers=effect_modifiers,
method_params=method_params,
)
return estimate
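# End-to-end sketch of the functional API above (hypothetical variable names;
# `identified_estimand` comes from an identification step and `estimator` is a
# pre-constructed CausalEstimator subclass instance that already holds the data):
#   estimate = estimate_effect(
#       treatment="v0",
#       outcome="y",
#       identified_estimand=identified_estimand,
#       identifier_name="backdoor",
#       method=estimator,
#       control_value=0,
#       treatment_value=1,
#       target_units="ate",
#   )
#   print(estimate.value)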
class CausalEstimate:
"""Class for the estimate object that every causal estimator returns"""
def __init__(
self,
estimate,
target_estimand,
realized_estimand_expr,
control_value,
treatment_value,
conditional_estimates=None,
**kwargs,
):
self.value = estimate
self.target_estimand = target_estimand
self.realized_estimand_expr = realized_estimand_expr
self.control_value = control_value
self.treatment_value = treatment_value
self.conditional_estimates = conditional_estimates
self.params = kwargs
if self.params is not None:
for key, value in self.params.items():
setattr(self, key, value)
self.effect_strength = None
def add_estimator(self, estimator_instance):
self.estimator = estimator_instance
def add_effect_strength(self, strength_dict):
self.effect_strength = strength_dict
def add_params(self, **kwargs):
self.params.update(kwargs)
def get_confidence_intervals(self, confidence_level=None, method=None, **kwargs):
"""Get confidence intervals of the obtained estimate.
By default, this is done with the help of bootstrapped confidence intervals
but can be overridden if the specific estimator implements other methods of estimating confidence intervals.
If the method provided is not bootstrap, this function calls the implementation of the specific estimator.
:param method: Method for estimating confidence intervals.
:param confidence_level: The confidence level of the confidence intervals of the estimate.
:param kwargs: Other optional args to be passed to the CI method.
:returns: The obtained confidence interval.
"""
confidence_intervals = self.estimator.estimate_confidence_intervals(
estimate_value=self.value, confidence_level=confidence_level, method=method, **kwargs
)
return confidence_intervals
def get_standard_error(self, method=None, **kwargs):
"""Get standard error of the obtained estimate.
By default, this is done with the help of bootstrapped standard errors
but can be overridden if the specific estimator implements other methods of estimating standard error.
If the method provided is not bootstrap, this function calls the implementation of the specific estimator.
:param method: Method for computing the standard error.
:param kwargs: Other optional parameters to be passed to the estimating method.
:returns: Standard error of the causal estimate.
"""
std_error = self.estimator.estimate_std_error(method=method, **kwargs)
return std_error
def test_stat_significance(self, method=None, **kwargs):
"""Test statistical significance of the estimate obtained.
By default, uses resampling to create a non-parametric significance test.
Individual child estimators can implement different methods.
If the method name is different from "bootstrap", this function calls the
implementation of the child estimator.
:param method: Method for checking statistical significance
:param kwargs: Other optional parameters to be passed to the estimating method.
:returns: p-value from the significance test
"""
signif_results = self.estimator.test_significance(self.value, method=method, **kwargs)
return {"p_value": signif_results["p_value"]}
def estimate_conditional_effects(
self, effect_modifiers=None, num_quantiles=CausalEstimator.NUM_QUANTILES_TO_DISCRETIZE_CONT_COLS
):
"""Estimate treatment effect conditioned on given variables.
If a numeric effect modifier is provided, it is discretized into quantile bins. If you would like a custom discretization, you can do so yourself: create a new column containing the discretized effect modifier and then include that column's name in the effect_modifiers argument.
:param effect_modifiers: Names of effect modifier variables over which the conditional effects will be estimated. If not provided, defaults to the effect modifiers specified during creation of the CausalEstimator object.
:param num_quantiles: The number of quantiles into which a numeric effect modifier variable is discretized. Does not affect any categorical effect modifiers.
:returns: A (multi-index) dataframe that provides separate effects for each value of the (discretized) effect modifiers.
"""
return self.estimator._estimate_conditional_effects(
self.estimator._estimate_effect_fn, effect_modifiers, num_quantiles
)
def interpret(self, method_name=None, **kwargs):
"""Interpret the causal estimate.
:param method_name: Method used (string) or a list of methods. If None, then the default for the specific estimator is used.
:param kwargs: Optional parameters that are directly passed to the interpreter method.
:returns: None
"""
if method_name is None:
method_name = self.estimator.interpret_method
method_name_arr = parse_state(method_name)
for method in method_name_arr:
interpreter = interpreters.get_class_object(method)
interpreter(self, **kwargs).interpret()
def __str__(self):
s = "*** Causal Estimate ***\n"
# No estimand was identified (identification failed)
if self.target_estimand is None:
return "Estimation failed! No relevant identified estimand available for this estimation method."
s += "\n## Identified estimand\n{0}".format(self.target_estimand.__str__(only_target_estimand=True))
s += "\n## Realized estimand\n{0}".format(self.realized_estimand_expr)
if hasattr(self, "estimator"):
s += "\nTarget units: {0}\n".format(self.estimator.target_units_tostr())
s += "\n## Estimate\n"
s += "Mean value: {0}\n".format(self.value)
s += ""
if hasattr(self, "cate_estimates"):
s += "Effect estimates: {0}\n".format(self.cate_estimates)
if hasattr(self, "estimator"):
if self.estimator._significance_test:
s += "p-value: {0}\n".format(self.estimator.signif_results_tostr(self.test_stat_significance()))
if self.estimator._confidence_intervals:
s += "{0}% confidence interval: {1}\n".format(
100 * self.estimator.confidence_level, self.get_confidence_intervals()
)
if self.conditional_estimates is not None:
s += "### Conditional Estimates\n"
s += str(self.conditional_estimates)
if self.effect_strength is not None:
s += "\n## Effect Strength\n"
s += "Change in outcome attributable to treatment: {}\n".format(self.effect_strength["fraction-effect"])
# s += "Variance in outcome explained by treatment: {}\n".format(self.effect_strength["r-squared"])
return s
class RealizedEstimand(object):
def __init__(self, identified_estimand, estimator_name):
self.treatment_variable = identified_estimand.treatment_variable
self.outcome_variable = identified_estimand.outcome_variable
self.backdoor_variables = identified_estimand.get_backdoor_variables()
self.instrumental_variables = identified_estimand.instrumental_variables
self.estimand_type = identified_estimand.estimand_type
self.estimand_expression = None
self.assumptions = None
self.estimator_name = estimator_name
def update_assumptions(self, estimator_assumptions):
self.assumptions = estimator_assumptions
def update_estimand_expression(self, estimand_expression):
self.estimand_expression = estimand_expression
def __str__(self):
s = "Realized estimand: {0}\n".format(self.estimator_name)
s += "Realized estimand type: {0}\n".format(self.estimand_type)
s += "Estimand expression:\n{0}\n".format(sp.pretty(self.estimand_expression))
j = 1
for ass_name, ass_str in self.assumptions.items():
s += "Estimand assumption {0}, {1}: {2}\n".format(j, ass_name, ass_str)
j += 1
return s
| import copy
import logging
from collections import namedtuple
from typing import Dict, List, Optional, Union
import numpy as np
import pandas as pd
import sympy as sp
from sklearn.utils import resample
import dowhy.interpreters as interpreters
from dowhy.causal_identifier.identified_estimand import IdentifiedEstimand
from dowhy.utils.api import parse_state
logger = logging.getLogger(__name__)
class CausalEstimator:
"""Base class for an estimator of causal effect.
Subclasses implement different estimation methods. All estimation methods are in the package "dowhy.causal_estimators"
"""
# The default number of simulations for statistical testing
DEFAULT_NUMBER_OF_SIMULATIONS_STAT_TEST = 1000
# The default number of simulations to obtain confidence intervals
DEFAULT_NUMBER_OF_SIMULATIONS_CI = 100
# The portion of the total size that should be taken each time to find the confidence intervals
# 1 is the recommended value
# https://ocw.mit.edu/courses/mathematics/18-05-introduction-to-probability-and-statistics-spring-2014/readings/MIT18_05S14_Reading24.pdf
# https://projecteuclid.org/download/pdf_1/euclid.ss/1032280214
DEFAULT_SAMPLE_SIZE_FRACTION = 1
# The default Confidence Level
DEFAULT_CONFIDENCE_LEVEL = 0.95
# Number of quantiles to discretize continuous columns, for applying groupby
NUM_QUANTILES_TO_DISCRETIZE_CONT_COLS = 5
# Prefix to add to temporary categorical variables created after discretization
TEMP_CAT_COLUMN_PREFIX = "__categorical__"
DEFAULT_NOTIMPLEMENTEDERROR_MSG = "not yet implemented for {0}. If you would like this to be implemented in the next version, please raise an issue at https://github.com/microsoft/dowhy/issues"
BootstrapEstimates = namedtuple("BootstrapEstimates", ["estimates", "params"])
DEFAULT_INTERPRET_METHOD = ["textual_effect_interpreter"]
# std args to be removed from locals() before being passed to args_dict
_STD_INIT_ARGS = ("self", "__class__", "args", "kwargs")
def __init__(
self,
identified_estimand: IdentifiedEstimand,
test_significance: bool = False,
evaluate_effect_strength: bool = False,
confidence_intervals: bool = False,
num_null_simulations: int = DEFAULT_NUMBER_OF_SIMULATIONS_STAT_TEST,
num_simulations: int = DEFAULT_NUMBER_OF_SIMULATIONS_CI,
sample_size_fraction: int = DEFAULT_SAMPLE_SIZE_FRACTION,
confidence_level: float = DEFAULT_CONFIDENCE_LEVEL,
need_conditional_estimates: Union[bool, str] = "auto",
num_quantiles_to_discretize_cont_cols: int = NUM_QUANTILES_TO_DISCRETIZE_CONT_COLS,
**_,
):
"""Initializes an estimator with data and names of relevant variables.
This method is called from the constructors of its child classes.
:param identified_estimand: probability expression
representing the target identified estimand to estimate.
:param test_significance: Binary flag or a string indicating whether to test significance and by which method. All estimators support test_significance="bootstrap" that estimates a p-value for the obtained estimate using the bootstrap method. Individual estimators can override this to support custom testing methods. The bootstrap method supports an optional parameter, num_null_simulations. If False, no testing is done. If True, significance of the estimate is tested using the custom method if available, otherwise by bootstrap.
:param evaluate_effect_strength: (Experimental) whether to evaluate the strength of effect
:param confidence_intervals: Binary flag or a string indicating whether the confidence intervals should be computed and which method should be used. All methods support estimation of confidence intervals using the bootstrap method by using the parameter confidence_intervals="bootstrap". The bootstrap method takes in two arguments (num_simulations and sample_size_fraction) that can be optionally specified in the params dictionary. Estimators may also override this to implement their own confidence interval method. If this parameter is False, no confidence intervals are computed. If True, confidence intervals are computed by the estimator's specific method if available, otherwise through bootstrap
:param num_null_simulations: The number of simulations for testing the
statistical significance of the estimator
:param num_simulations: The number of simulations for finding the
confidence interval (and/or standard error) for an estimate
:param sample_size_fraction: The size of the sample for the bootstrap
estimator
:param confidence_level: The confidence level of the confidence
interval estimate
:param need_conditional_estimates: Boolean flag indicating whether
conditional estimates should be computed. Defaults to True if
there are effect modifiers in the graph
:param num_quantiles_to_discretize_cont_cols: The number of quantiles
into which a numeric effect modifier is split, to enable
estimation of conditional treatment effect over it.
:param kwargs: (optional) Additional estimator-specific parameters
:returns: an instance of the estimator class.
"""
self._target_estimand = identified_estimand
self._significance_test = test_significance
self._effect_strength_eval = evaluate_effect_strength
self._confidence_intervals = confidence_intervals
# Setting the default interpret method
self.interpret_method = CausalEstimator.DEFAULT_INTERPRET_METHOD
self.logger = logging.getLogger(__name__)
# Check if some parameters were set, otherwise set to default values
self.num_null_simulations = num_null_simulations
self.num_simulations = num_simulations
self.sample_size_fraction = sample_size_fraction
self.confidence_level = confidence_level
self.num_quantiles_to_discretize_cont_cols = num_quantiles_to_discretize_cont_cols
# Estimate conditional estimates by default
self.need_conditional_estimates = need_conditional_estimates
self._bootstrap_estimates = None
self._bootstrap_null_estimates = None
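# Workflow sketch for the refactored estimator (per the PR description, data now
# arrives through a fit() method defined in the child estimators; the exact call
# shape below is an assumption inferred from _set_data/_set_effect_modifiers):
#   estimator = SomeEstimatorSubclass(identified_estimand, confidence_intervals=True)
#   estimator.fit(data=df, treatment_name=["v0"], outcome_name=["y"])  # hypothetical signature
#   estimate = estimator.estimate_effect(...)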
def _set_data(self, data: pd.DataFrame, treatment_name: List[str], outcome_name: List[str]):
"""Sets the data for the estimator
:param data: data frame containing the data
:param treatment_name: name of the treatment variable
:param outcome_name: name of the outcome variable
"""
self._data = data
self._treatment_name = treatment_name
self._outcome_name = outcome_name[0]
self._treatment = self._data[self._treatment_name]
self._outcome = self._data[self._outcome_name]
def _set_effect_modifiers(self, effect_modifier_names: Optional[List[str]] = None):
"""Sets the effect modifiers for the estimator
Modifies need_conditional_estimates according to the effect modifiers' value
:param effect_modifier_names: Variables on which to compute separate
effects, or return a heterogeneous effect function. Not all
methods support this currently.
"""
self._effect_modifiers = effect_modifier_names
if effect_modifier_names is not None:
self._effect_modifier_names = [cname for cname in effect_modifier_names if cname in self._data.columns]
if len(self._effect_modifier_names) > 0:
self._effect_modifiers = self._data[self._effect_modifier_names]
self._effect_modifiers = pd.get_dummies(self._effect_modifiers, drop_first=True)
self.logger.debug("Effect modifiers: " + ",".join(self._effect_modifier_names))
else:
self._effect_modifier_names = []
else:
self._effect_modifier_names = []
self.need_conditional_estimates = (
self.need_conditional_estimates
if self.need_conditional_estimates != "auto"
else (self._effect_modifier_names and len(self._effect_modifier_names) > 0)
)
def _set_identified_estimand(self, new_identified_estimand):
"""Method used internally to change the target estimand (required by some refuters)
:param new_identified_estimand: The new target_estimand to use
"""
self._target_estimand = new_identified_estimand
def get_new_estimator_object(
self,
identified_estimand,
test_significance=False,
evaluate_effect_strength=False,
confidence_intervals=None,
):
"""Create a new estimator of the same type as the one passed in the estimate argument.
Creates a new object with the identified_estimand
:param identified_estimand: IdentifiedEstimand
An instance of the identified estimand class that provides the information with
respect to which causal pathways are employed when the treatment effects the outcome
:returns: A new instance of the same estimator class that had generated the given estimate.
"""
new_estimator = copy.deepcopy(self)
new_estimator._target_estimand = identified_estimand
new_estimator._test_significance = test_significance
new_estimator._evaluate_effect_strength = evaluate_effect_strength
new_estimator._confidence_intervals = (
self._confidence_intervals if confidence_intervals is None else confidence_intervals
)
return new_estimator
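# Refuter-style sketch (hypothetical names): refutation methods can clone a
# configured estimator and re-point it at a modified estimand without
# re-supplying constructor arguments; re-fitting on the (possibly perturbed)
# data is assumed to happen via the child estimator's fit() method.
#   new_estimator = estimator.get_new_estimator_object(modified_estimand, test_significance=False)
#   new_estimate = new_estimator.estimate_effect(...)  # after re-fitting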
def estimate_effect_naive(self):
# TODO Only works for binary treatment
df_withtreatment = self._data.loc[self._data[self._treatment_name] == 1]
df_notreatment = self._data.loc[self._data[self._treatment_name] == 0]
est = np.mean(df_withtreatment[self._outcome_name]) - np.mean(df_notreatment[self._outcome_name])
return CausalEstimate(est, None, None, control_value=0, treatment_value=1)
def _estimate_effect_fn(self, data_df):
"""Function used in conditional effect estimation. This function is to be overridden by each child estimator.
The overridden function should take in a dataframe as input and return the estimate for that data.
"""
raise NotImplementedError(
("Conditional treatment effects are " + CausalEstimator.DEFAULT_NOTIMPLEMENTEDERROR_MSG).format(
self.__class__
)
)
def _estimate_conditional_effects(self, estimate_effect_fn, effect_modifier_names=None, num_quantiles=None):
"""Estimate conditional treatment effects. Common method for all estimators that utilizes a specific estimate_effect_fn implemented by each child estimator.
If a numeric effect modifier is provided, it is discretized into quantile bins. If you would like a custom discretization, you can do so yourself: create a new column containing the discretized effect modifier and then include that column's name in the effect_modifier_names argument.
:param estimate_effect_fn: Function that has a single parameter (a data frame) and returns the treatment effect estimate on that data.
:param effect_modifier_names: Names of effect modifier variables over which the conditional effects will be estimated. If not provided, defaults to the effect modifiers specified during creation of the CausalEstimator object.
:param num_quantiles: The number of quantiles into which a numeric effect modifier variable is discretized. Does not affect any categorical effect modifiers.
:returns: A (multi-index) dataframe that provides separate effects for each value of the (discretized) effect modifiers.
"""
# Defaulting to class default values if parameters are not provided
if effect_modifier_names is None:
effect_modifier_names = self._effect_modifier_names
if num_quantiles is None:
num_quantiles = self.num_quantiles_to_discretize_cont_cols
# Checking that there is at least one effect modifier
if not effect_modifier_names:
raise ValueError("At least one effect modifier should be specified to compute conditional effects.")
# Making sure that effect_modifier_names is a list
effect_modifier_names = parse_state(effect_modifier_names)
if not all(em in self._effect_modifier_names for em in effect_modifier_names):
self.logger.warning(
"At least one of the provided effect modifiers was not included while fitting the estimator. You may get incorrect results. To resolve, fit the estimator again by providing the updated effect modifiers in estimate_effect()."
)
# Making a copy since we are going to be changing effect modifier names
effect_modifier_names = effect_modifier_names.copy()
prefix = CausalEstimator.TEMP_CAT_COLUMN_PREFIX
# For every numeric effect modifier, adding a temp categorical column
for i in range(len(effect_modifier_names)):
em = effect_modifier_names[i]
if pd.api.types.is_numeric_dtype(self._data[em].dtypes):
self._data[prefix + str(em)] = pd.qcut(self._data[em], num_quantiles, duplicates="drop")
effect_modifier_names[i] = prefix + str(em)
# Grouping by effect modifiers and computing effect separately
by_effect_mods = self._data.groupby(effect_modifier_names)
cond_est_fn = lambda x: self._do(self._treatment_value, x) - self._do(self._control_value, x)
conditional_estimates = by_effect_mods.apply(estimate_effect_fn)
# Deleting the temporary categorical columns
for em in effect_modifier_names:
if em.startswith(prefix):
self._data.pop(em)
return conditional_estimates
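# Hedged usage sketch of conditional-effect estimation (the already-fitted
# estimator `est` and the effect-modifier column names are assumptions):
#
#   cond_effects = est._estimate_conditional_effects(
#       est._estimate_effect_fn,
#       effect_modifier_names=["age", "gender"],  # numeric "age" gets binned
#       num_quantiles=4,
#   )
#   # -> a (multi-index) dataframe with one effect per (age-bin, gender) cell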
def _do(self, x, data_df=None):
raise NotImplementedError(
("Do-operator is " + CausalEstimator.DEFAULT_NOTIMPLEMENTEDERROR_MSG).format(self.__class__)
)
def do(self, x, data_df=None):
"""Method that implements the do-operator.
Given a value x for the treatment, returns the expected value of the outcome when the treatment is intervened to a value x.
:param x: Value of the treatment
:param data_df: Data on which the do-operator is to be applied.
:returns: Value of the outcome when treatment is intervened/set to x.
"""
est = self._do(x, data_df)
return est
def construct_symbolic_estimator(self, estimand):
raise NotImplementedError(("Symbolic estimator string is ").format(self.__class__))
def _generate_bootstrap_estimates(self, num_bootstrap_simulations, sample_size_fraction):
"""Helper function to generate causal estimates over bootstrapped samples.
:param num_bootstrap_simulations: Number of simulations for the bootstrap method.
:param sample_size_fraction: Fraction of the dataset to be resampled.
:returns: A collections.namedtuple containing a list of bootstrapped estimates and a dictionary containing parameters used for the bootstrap.
"""
# The array that stores the results of all estimations
simulation_results = np.zeros(num_bootstrap_simulations)
# Find the sample size as a proportion of the population size
sample_size = int(sample_size_fraction * len(self._data))
if sample_size > len(self._data):
self.logger.warning("WARN: The sample size is greater than the data being sampled")
self.logger.info("INFO: The sample size: {}".format(sample_size))
self.logger.info("INFO: The number of simulations: {}".format(num_bootstrap_simulations))
# Perform the set number of simulations
for index in range(num_bootstrap_simulations):
new_data = resample(self._data, n_samples=sample_size)
new_estimator = self.get_new_estimator_object(
self._target_estimand,
# names of treatment and outcome
test_significance=False,
evaluate_effect_strength=False,
confidence_intervals=False,
)
new_estimator.fit(
new_data,
self._target_estimand.treatment_variable,
self._target_estimand.outcome_variable,
effect_modifier_names=self._effect_modifier_names,
)
new_effect = new_estimator.estimate_effect(
treatment_value=self._treatment_value,
control_value=self._control_value,
target_units=self._target_units,
)
simulation_results[index] = new_effect.value
estimates = CausalEstimator.BootstrapEstimates(
simulation_results,
{"num_simulations": num_bootstrap_simulations, "sample_size_fraction": sample_size_fraction},
)
return estimates
def _estimate_confidence_intervals_with_bootstrap(
self, estimate_value, confidence_level=None, num_simulations=None, sample_size_fraction=None
):
"""
Method to compute confidence interval using bootstrapped sampling.
:param estimate_value: obtained estimate's value
:param confidence_level: The level for which to compute CI (e.g., 95% confidence level translates to confidence_level=0.95)
:param num_simulations: The number of simulations to be performed to get the bootstrap confidence intervals.
:param sample_size_fraction: The fraction of the dataset to be resampled.
:returns: confidence interval at the specified level.
For more details on bootstrap or resampling statistics, refer to the following links:
https://ocw.mit.edu/courses/mathematics/18-05-introduction-to-probability-and-statistics-spring-2014/readings/MIT18_05S14_Reading24.pdf
https://projecteuclid.org/download/pdf_1/euclid.ss/1032280214
"""
# Using class default parameters if not specified
if num_simulations is None:
num_simulations = self.num_simulations
if sample_size_fraction is None:
sample_size_fraction = self.sample_size_fraction
# Checking if bootstrap_estimates are already computed
if self._bootstrap_estimates is None:
self._bootstrap_estimates = self._generate_bootstrap_estimates(num_simulations, sample_size_fraction)
elif CausalEstimator.is_bootstrap_parameter_changed(self._bootstrap_estimates.params, locals()):
# Checked if any parameter is changed from the previous std error estimate
self._bootstrap_estimates = self._generate_bootstrap_estimates(num_simulations, sample_size_fraction)
# Now use the data obtained from the simulations to get the value of the confidence estimates
bootstrap_estimates = self._bootstrap_estimates.estimates
# Get the variations of each bootstrap estimate and sort
bootstrap_variations = [bootstrap_estimate - estimate_value for bootstrap_estimate in bootstrap_estimates]
sorted_bootstrap_variations = np.sort(bootstrap_variations)
# Now we take the (1- p)th and the (p)th variations, where p is the chosen confidence level
upper_bound_index = int((1 - confidence_level) * len(sorted_bootstrap_variations))
lower_bound_index = int(confidence_level * len(sorted_bootstrap_variations))
# Get the lower and upper bounds by subtracting the variations from the estimate
lower_bound = estimate_value - sorted_bootstrap_variations[lower_bound_index]
upper_bound = estimate_value - sorted_bootstrap_variations[upper_bound_index]
return lower_bound, upper_bound
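# Toy sketch of the bound arithmetic above, with made-up numbers:
#
#   import numpy as np
#   estimate_value = 10.0
#   boots = np.array([8.0, 9.0, 10.0, 11.0, 12.0])
#   variations = np.sort(boots - estimate_value)           # [-2., -1., 0., 1., 2.]
#   p, n = 0.95, len(variations)
#   lower = estimate_value - variations[int(p * n)]        # 10 - 2 = 8
#   upper = estimate_value - variations[int((1 - p) * n)]  # 10 - (-2) = 12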
def _estimate_confidence_intervals(self, confidence_level=None, method=None, **kwargs):
"""
This method is to be overriden by the child classes, so that they
can run a confidence interval estimation method suited to the specific
causal estimator.
"""
raise NotImplementedError(
(
"This method for estimating confidence intervals is "
+ CausalEstimator.DEFAULT_NOTIMPLEMENTEDERROR_MSG
+ " Meanwhile, you can try the bootstrap method (method='bootstrap') to estimate confidence intervals."
).format(self.__class__)
)
def estimate_confidence_intervals(self, estimate_value, confidence_level=None, method=None, **kwargs):
"""Find the confidence intervals corresponding to any estimator
By default, this is done with the help of bootstrapped confidence intervals
but can be overridden if the specific estimator implements other methods of estimating confidence intervals.
If the method provided is not bootstrap, this function calls the implementation of the specific estimator.
:param estimate_value: obtained estimate's value
:param method: Method for estimating confidence intervals.
:param confidence_level: The confidence level of the confidence intervals of the estimate.
:param kwargs: Other optional args to be passed to the CI method.
:returns: The obtained confidence interval.
"""
if method is None:
if self._confidence_intervals:
method = self._confidence_intervals # this is either True or methodname
else:
method = "default"
confidence_intervals = None
if confidence_level is None:
confidence_level = self.confidence_level
if method == "default" or method is True: # user has not provided any method
try:
confidence_intervals = self._estimate_confidence_intervals(confidence_level, method=method, **kwargs)
except NotImplementedError:
confidence_intervals = self._estimate_confidence_intervals_with_bootstrap(
estimate_value, confidence_level, **kwargs
)
else:
if method == "bootstrap":
confidence_intervals = self._estimate_confidence_intervals_with_bootstrap(
estimate_value, confidence_level, **kwargs
)
else:
confidence_intervals = self._estimate_confidence_intervals(confidence_level, method=method, **kwargs)
return confidence_intervals
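# Hedged usage sketch (a fitted estimator `est` and an existing `estimate`
# are assumptions of this example):
#
#   ci = est.estimate_confidence_intervals(
#       estimate_value=estimate.value,
#       confidence_level=0.9,
#       method="bootstrap",     # force bootstrap; "default" tries the
#       num_simulations=200,    # estimator-specific method first
#   )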
def _estimate_std_error_with_bootstrap(self, num_simulations=None, sample_size_fraction=None):
"""Compute standard error using the bootstrap method. Standard error
and confidence intervals use the same parameter num_simulations for
the number of bootstrap simulations.
:param num_simulations: Number of bootstrapped samples.
:param sample_size_fraction: Fraction of data to be resampled.
:returns: Standard error of the obtained estimate.
"""
# Use existing params, if new user defined params are not present
if num_simulations is None:
num_simulations = self.num_simulations
if sample_size_fraction is None:
sample_size_fraction = self.sample_size_fraction
# Checking if bootstrap_estimates are already computed
if self._bootstrap_estimates is None:
self._bootstrap_estimates = self._generate_bootstrap_estimates(num_simulations, sample_size_fraction)
elif CausalEstimator.is_bootstrap_parameter_changed(self._bootstrap_estimates.params, locals()):
# Check if any parameter is changed from the previous std error estimate
self._bootstrap_estimates = self._generate_bootstrap_estimates(num_simulations, sample_size_fraction)
std_error = np.std(self._bootstrap_estimates.estimates)
return std_error
def _estimate_std_error(self, method=None, **kwargs):
"""
This method is to be overriden by the child classes, so that they
can run a standard error estimation method suited to the specific
causal estimator.
"""
raise NotImplementedError(
(
"This method for estimating standard errors is "
+ CausalEstimator.DEFAULT_NOTIMPLEMENTEDERROR_MSG
+ " Meanwhile, you can try the bootstrap method (method='bootstrap') to estimate standard errors."
).format(self.__class__)
)
def estimate_std_error(self, method=None, **kwargs):
"""Compute standard error of an obtained causal estimate.
:param method: Method for computing the standard error.
:param kwargs: Other optional parameters to be passed to the estimating method.
:returns: Standard error of the causal estimate.
"""
if method is None:
if self._confidence_intervals:
method = self._confidence_intervals
else:
method = "default"
std_error = None
if method == "default" or method is True: # user has not provided any method
try:
std_error = self._estimate_std_error(method, **kwargs)
except NotImplementedError:
std_error = self._estimate_std_error_with_bootstrap(**kwargs)
else:
if method == "bootstrap":
std_error = self._estimate_std_error_with_bootstrap(**kwargs)
else:
std_error = self._estimate_std_error(method, **kwargs)
return std_error
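# Hedged usage sketch (same assumptions as above):
#
#   se = est.estimate_std_error(method="bootstrap", sample_size_fraction=0.8)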
def _test_significance_with_bootstrap(self, estimate_value, num_null_simulations=None):
"""Test statistical significance of an estimate using the bootstrap method.
:param estimate_value: Obtained estimate's value
:param num_null_simulations: Number of simulations for the null hypothesis
:returns: p-value of the statistical significance test.
"""
# Use existing params, if new user defined params are not present
if num_null_simulations is None:
num_null_simulations = self.num_null_simulations
do_retest = self._bootstrap_null_estimates is None or CausalEstimator.is_bootstrap_parameter_changed(
self._bootstrap_null_estimates.params, locals()
)
if do_retest:
null_estimates = np.zeros(num_null_simulations)
for i in range(num_null_simulations):
new_outcome = np.random.permutation(self._outcome)
new_data = self._data.assign(dummy_outcome=new_outcome)
# self._outcome = self._data["dummy_outcome"]
new_estimator = self.get_new_estimator_object(
self._target_estimand,
test_significance=False,
evaluate_effect_strength=False,
confidence_intervals=False,
)
new_estimator.fit(
data=new_data,
treatment_name=self._target_estimand.treatment_variable,
outcome_name=("dummy_outcome",),
effect_modifier_names=self._effect_modifier_names,
)
new_effect = new_estimator.estimate_effect(
target_units=self._target_units,
)
null_estimates[i] = new_effect.value
self._bootstrap_null_estimates = CausalEstimator.BootstrapEstimates(
null_estimates, {"num_null_simulations": num_null_simulations, "sample_size_fraction": 1}
)
# Processing the null hypothesis estimates
sorted_null_estimates = np.sort(self._bootstrap_null_estimates.estimates)
self.logger.debug("Null estimates: {0}".format(sorted_null_estimates))
median_estimate = sorted_null_estimates[int(num_null_simulations / 2)]
# Doing a two-sided test
if estimate_value > median_estimate:
# Being conservative with the p-value reported
estimate_index = np.searchsorted(sorted_null_estimates, estimate_value, side="left")
p_value = 1 - (estimate_index / num_null_simulations)
else:
# Being conservative with the p-value reported
estimate_index = np.searchsorted(sorted_null_estimates, estimate_value, side="right")
p_value = estimate_index / num_null_simulations
# If the estimate_index is 0, it depends on the number of simulations
if p_value == 0:
p_value = (0, 1 / len(sorted_null_estimates)) # a tuple determining the range.
elif p_value == 1:
p_value = (1 - 1 / len(sorted_null_estimates), 1)
signif_dict = {"p_value": p_value}
return signif_dict
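# Reading the returned p-value: when the observed estimate lies beyond every
# permutation-null estimate, a range is reported instead of a point value,
# e.g. with 1000 null simulations {"p_value": (0, 0.001)} means p < 1/1000.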
def _test_significance(self, estimate_value, method=None, **kwargs):
"""
This method is to be overriden by the child classes, so that they
can run a significance test suited to the specific
causal estimator.
"""
raise NotImplementedError(
(
"This method for testing statistical significance is "
+ CausalEstimator.DEFAULT_NOTIMPLEMENTEDERROR_MSG
+ " Meanwhile, you can try the bootstrap method (method='bootstrap') to test statistical significance."
).format(self.__class__)
)
def test_significance(self, estimate_value, method=None, **kwargs):
"""Test statistical significance of obtained estimate.
By default, uses resampling to create a non-parametric significance test.
A general procedure. Individual child estimators can implement different methods.
If the method name is different from "bootstrap", this function calls the
implementation of the child estimator.
:param self: object instance of class Estimator
:param estimate_value: obtained estimate's value
:param method: Method for checking statistical significance
:returns: p-value from the significance test
"""
if method is None:
if self._significance_test:
method = self._significance_test # this is either True or methodname
else:
method = "default"
signif_dict = None
if method == "default" or method is True: # user has not provided any method
try:
signif_dict = self._test_significance(estimate_value, method, **kwargs)
except NotImplementedError:
signif_dict = self._test_significance_with_bootstrap(estimate_value, **kwargs)
else:
if method == "bootstrap":
signif_dict = self._test_significance_with_bootstrap(estimate_value, **kwargs)
else:
signif_dict = self._test_significance(estimate_value, method, **kwargs)
return signif_dict
def evaluate_effect_strength(self, estimate):
fraction_effect_explained = self._evaluate_effect_strength(estimate, method="fraction-effect")
# Need to test r-squared before supporting
# effect_r_squared = self._evaluate_effect_strength(estimate, method="r-squared")
strength_dict = {
"fraction-effect": fraction_effect_explained
# 'r-squared': effect_r_squared
}
return strength_dict
def _evaluate_effect_strength(self, estimate, method="fraction-effect"):
supported_methods = ["fraction-effect"]
if method not in supported_methods:
raise NotImplementedError("This method is not supported for evaluating effect strength")
if method == "fraction-effect":
naive_obs_estimate = self.estimate_effect_naive()
self.logger.debug("Causal estimate: {0}, Naive estimate: {1}".format(estimate.value, naive_obs_estimate.value))
fraction_effect_explained = estimate.value / naive_obs_estimate.value
return fraction_effect_explained
# elif method == "r-squared":
# outcome_mean = np.mean(self._outcome)
# total_variance = np.sum(np.square(self._outcome - outcome_mean))
# Assuming a linear model with one variable: the treatment
# Currently only works for continuous y
# causal_model = outcome_mean + estimate.value*self._treatment
# squared_residual = np.sum(np.square(self._outcome - causal_model))
# r_squared = 1 - (squared_residual/total_variance)
# return r_squared
else:
return None
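# Worked example of the "fraction-effect" measure: if the causal estimate is
# 0.4 and the naive observed difference in means is 0.8, the fraction of the
# observed association attributable to the treatment is 0.4 / 0.8 = 0.5.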
def update_input(self, treatment_value, control_value, target_units):
self._control_value = control_value
self._treatment_value = treatment_value
self._target_units = target_units
@staticmethod
def is_bootstrap_parameter_changed(bootstrap_estimates_params, given_params):
"""Check whether parameters of the bootstrap have changed.
This is an efficiency method that checks if fresh resampling of the bootstrap samples is required.
Returns True if parameters have changed and resampling should be done again.
:param bootstrap_estimates_params: A dictionary of parameters for the current bootstrap samples
:param given_params: A dictionary of parameters passed by the user
:returns: A binary flag denoting whether the parameters are different.
"""
is_any_parameter_changed = False
for prm, val in bootstrap_estimates_params.items():
given_val = given_params.get(prm, None)
if given_val is not None and given_val != val:
is_any_parameter_changed = True
break
return is_any_parameter_changed
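# Example: resampling is redone only when a user-supplied parameter differs
# from the one used for the cached bootstrap estimates:
#
#   cached = {"num_simulations": 100, "sample_size_fraction": 1}
#   CausalEstimator.is_bootstrap_parameter_changed(cached, {"num_simulations": 200})
#   # -> True
#   CausalEstimator.is_bootstrap_parameter_changed(cached, {"num_simulations": None})
#   # -> False (None means "use the existing value")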
def target_units_tostr(self):
s = ""
if type(self._target_units) is str:
s += self._target_units
elif callable(self._target_units):
s += "Data subset defined by a function"
elif isinstance(self._target_units, pd.DataFrame):
s += "Data subset provided as a data frame"
return s
def signif_results_tostr(self, signif_results):
s = ""
pval = signif_results["p_value"]
if type(pval) is tuple:
s += "[{0}, {1}]".format(pval[0], pval[1])
else:
s += "{0}".format(pval)
return s
def estimate_effect(
data: pd.DataFrame,
treatment: Union[str, List[str]],
outcome: Union[str, List[str]],
identifier_name: str,
estimator: CausalEstimator,
control_value: int = 0,
treatment_value: int = 1,
target_units: str = "ate",
effect_modifiers: List[str] = None,
fit_estimator: bool = True,
method_params: Optional[Dict] = None,
):
"""Estimate the identified causal effect.
In addition, you can directly call any of the EconML estimation methods. The convention is "backdoor.econml.path-to-estimator-class". For example, for the double machine learning estimator ("DML" class) that is located inside "dml" module of EconML, you can use the method name, "backdoor.econml.dml.DML". CausalML estimators can also be called. See `this demo notebook <https://py-why.github.io/dowhy/example_notebooks/dowhy-conditional-treatment-effects.html>`_.
:param data: data frame containing the data
:param treatment: Name of the treatment
:param outcome: Name of the outcome
:param identifier_name: name of the identification method (e.g., "backdoor") whose estimand, a probability expression representing the effect to be estimated, is selected from the estimator's target estimand (output of the identification step)
:param estimator: Instance of a CausalEstimator to use
:param control_value: Value of the treatment in the control group, for effect estimation. If treatment is multi-variate, this can be a list.
:param treatment_value: Value of the treatment in the treated group, for effect estimation. If treatment is multi-variate, this can be a list.
:param target_units: (Experimental) The units for which the treatment effect should be estimated. This can be of three types. (1) a string for common specifications of target units (namely, "ate", "att" and "atc"), (2) a lambda function that can be used as an index for the data (pandas DataFrame), or (3) a new DataFrame that contains values of the effect_modifiers and effect will be estimated only for this new data.
:param effect_modifiers: Names of effect modifier variables can be (optionally) specified here too, since they do not affect identification. If None, an empty list is passed to the estimator's fit method.
:param fit_estimator: Boolean flag on whether to fit the estimator.
Setting it to False is useful to estimate the effect on new data using a previously fitted estimator.
:returns: An instance of the CausalEstimate class, containing the causal effect estimate
and other method-dependent information
"""
if effect_modifiers is None:
effect_modifiers = []
treatment = parse_state(treatment)
outcome = parse_state(outcome)
causal_estimator_class = estimator.__class__
identified_estimand = estimator._target_estimand
identified_estimand.set_identifier_method(identifier_name)
if identified_estimand.no_directed_path:
logger.warning("No directed path from {0} to {1}.".format(treatment, outcome))
return CausalEstimate(
0, identified_estimand, None, control_value=control_value, treatment_value=treatment_value
)
# Check if estimator's target estimand is identified
elif identified_estimand.estimands[identifier_name] is None:
logger.error("No valid identified estimand available.")
return CausalEstimate(None, None, None, control_value=control_value, treatment_value=treatment_value)
if fit_estimator:
estimator.fit(
data=data,
treatment_name=treatment,
outcome_name=outcome,
effect_modifier_names=effect_modifiers,
**method_params["fit_params"] if "fit_params" in method_params else {},
)
estimate = estimator.estimate_effect(
treatment_value=treatment_value,
control_value=control_value,
target_units=target_units,
confidence_intervals=estimator._confidence_intervals,
)
# Store parameters inside estimate object for refutation methods
# TODO: This add_params needs to move to the estimator class
# inside estimate_effect and estimate_conditional_effect
estimate.add_params(
estimand_type=identified_estimand.estimand_type,
estimator_class=causal_estimator_class,
test_significance=estimator._significance_test,
evaluate_effect_strength=estimator._effect_strength_eval,
confidence_intervals=estimator._confidence_intervals,
target_units=target_units,
effect_modifiers=effect_modifiers,
)
if estimator._significance_test:
estimator.test_significance(estimate.value, method=estimator._significance_test)
if estimator._confidence_intervals:
estimator.estimate_confidence_intervals(
estimate.value, confidence_level=estimator.confidence_level, method=estimator._confidence_intervals
)
if estimator._effect_strength_eval:
effect_strength_dict = estimator.evaluate_effect_strength(estimate)
estimate.add_effect_strength(effect_strength_dict)
return estimate
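# Hedged end-to-end sketch of the functional API above. The dataframe `df`,
# the identified estimand, and the estimator's constructor arguments are
# assumptions for illustration; exact estimator constructors may differ.
#
#   from dowhy.causal_estimators.propensity_score_matching_estimator import (
#       PropensityScoreMatchingEstimator,
#   )
#
#   estimator = PropensityScoreMatchingEstimator(identified_estimand)
#   estimate = estimate_effect(
#       data=df,
#       treatment="v0",
#       outcome="y",
#       identifier_name="backdoor",
#       estimator=estimator,
#       target_units="ate",
#   )
#   print(estimate.value)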
class CausalEstimate:
"""Class for the estimate object that every causal estimator returns"""
def __init__(
self,
estimate,
target_estimand,
realized_estimand_expr,
control_value,
treatment_value,
conditional_estimates=None,
**kwargs,
):
self.value = estimate
self.target_estimand = target_estimand
self.realized_estimand_expr = realized_estimand_expr
self.control_value = control_value
self.treatment_value = treatment_value
self.conditional_estimates = conditional_estimates
self.params = kwargs
if self.params is not None:
for key, value in self.params.items():
setattr(self, key, value)
self.effect_strength = None
def add_estimator(self, estimator_instance):
self.estimator = estimator_instance
def add_effect_strength(self, strength_dict):
self.effect_strength = strength_dict
def add_params(self, **kwargs):
self.params.update(kwargs)
def get_confidence_intervals(self, confidence_level=None, method=None, **kwargs):
"""Get confidence intervals of the obtained estimate.
By default, this is done with the help of bootstrapped confidence intervals
but can be overridden if the specific estimator implements other methods of estimating confidence intervals.
If the method provided is not bootstrap, this function calls the implementation of the specific estimator.
:param method: Method for estimating confidence intervals.
:param confidence_level: The confidence level of the confidence intervals of the estimate.
:param kwargs: Other optional args to be passed to the CI method.
:returns: The obtained confidence interval.
"""
confidence_intervals = self.estimator.estimate_confidence_intervals(
estimate_value=self.value, confidence_level=confidence_level, method=method, **kwargs
)
return confidence_intervals
def get_standard_error(self, method=None, **kwargs):
"""Get standard error of the obtained estimate.
By default, this is done with the help of bootstrapped standard errors
but can be overridden if the specific estimator implements other methods of estimating standard error.
If the method provided is not bootstrap, this function calls the implementation of the specific estimator.
:param method: Method for computing the standard error.
:param kwargs: Other optional parameters to be passed to the estimating method.
:returns: Standard error of the causal estimate.
"""
std_error = self.estimator.estimate_std_error(method=method, **kwargs)
return std_error
def test_stat_significance(self, method=None, **kwargs):
"""Test statistical significance of the estimate obtained.
By default, uses resampling to create a non-parametric significance test.
Individual child estimators can implement different methods.
If the method name is different from "bootstrap", this function calls the
implementation of the child estimator.
:param method: Method for checking statistical significance
:param kwargs: Other optional parameters to be passed to the estimating method.
:returns: p-value from the significance test
"""
signif_results = self.estimator.test_significance(self.value, method=method, **kwargs)
return {"p_value": signif_results["p_value"]}
def estimate_conditional_effects(
self, effect_modifiers=None, num_quantiles=CausalEstimator.NUM_QUANTILES_TO_DISCRETIZE_CONT_COLS
):
"""Estimate treatment effect conditioned on given variables.
If a numeric effect modifier is provided, it is discretized into quantile bins. If you would like a custom discretization, you can do so yourself: create a new column containing the discretized effect modifier and then include that column's name in the effect_modifier_names argument.
:param effect_modifiers: Names of effect modifier variables over which the conditional effects will be estimated. If not provided, defaults to the effect modifiers specified during creation of the CausalEstimator object.
:param num_quantiles: The number of quantiles into which a numeric effect modifier variable is discretized. Does not affect any categorical effect modifiers.
:returns: A (multi-index) dataframe that provides separate effects for each value of the (discretized) effect modifiers.
"""
return self.estimator._estimate_conditional_effects(
self.estimator._estimate_effect_fn, effect_modifiers, num_quantiles
)
def interpret(self, method_name=None, **kwargs):
"""Interpret the causal estimate.
:param method_name: Method used (string) or a list of methods. If None, then the default for the specific estimator is used.
:param kwargs: Optional parameters that are directly passed to the interpreter method.
:returns: None
"""
if method_name is None:
method_name = self.estimator.interpret_method
method_name_arr = parse_state(method_name)
for method in method_name_arr:
interpreter = interpreters.get_class_object(method)
interpreter(self, **kwargs).interpret()
def __str__(self):
s = "*** Causal Estimate ***\n"
# No estimand was identified (identification failed)
if self.target_estimand is None:
return "Estimation failed! No relevant identified estimand available for this estimation method."
s += "\n## Identified estimand\n{0}".format(self.target_estimand.__str__(only_target_estimand=True))
s += "\n## Realized estimand\n{0}".format(self.realized_estimand_expr)
if hasattr(self, "estimator"):
s += "\nTarget units: {0}\n".format(self.estimator.target_units_tostr())
s += "\n## Estimate\n"
s += "Mean value: {0}\n".format(self.value)
s += ""
if hasattr(self, "cate_estimates"):
s += "Effect estimates: {0}\n".format(self.cate_estimates)
if hasattr(self, "estimator"):
if self.estimator._significance_test:
s += "p-value: {0}\n".format(self.estimator.signif_results_tostr(self.test_stat_significance()))
if self.estimator._confidence_intervals:
s += "{0}% confidence interval: {1}\n".format(
100 * self.estimator.confidence_level, self.get_confidence_intervals()
)
if self.conditional_estimates is not None:
s += "### Conditional Estimates\n"
s += str(self.conditional_estimates)
if self.effect_strength is not None:
s += "\n## Effect Strength\n"
s += "Change in outcome attributable to treatment: {}\n".format(self.effect_strength["fraction-effect"])
# s += "Variance in outcome explained by treatment: {}\n".format(self.effect_strength["r-squared"])
return s
class RealizedEstimand(object):
def __init__(self, identified_estimand, estimator_name):
self.treatment_variable = identified_estimand.treatment_variable
self.outcome_variable = identified_estimand.outcome_variable
self.backdoor_variables = identified_estimand.get_backdoor_variables()
self.instrumental_variables = identified_estimand.instrumental_variables
self.estimand_type = identified_estimand.estimand_type
self.estimand_expression = None
self.assumptions = None
self.estimator_name = estimator_name
def update_assumptions(self, estimator_assumptions):
self.assumptions = estimator_assumptions
def update_estimand_expression(self, estimand_expression):
self.estimand_expression = estimand_expression
def __str__(self):
s = "Realized estimand: {0}\n".format(self.estimator_name)
s += "Realized estimand type: {0}\n".format(self.estimand_type)
s += "Estimand expression:\n{0}\n".format(sp.pretty(self.estimand_expression))
j = 1
for ass_name, ass_str in self.assumptions.items():
s += "Estimand assumption {0}, {1}: {2}\n".format(j, ass_name, ass_str)
j += 1
return s
| andresmor-ms | 11c4e0dafd6e824eb81ad14262457d954ae61468 | affe0952f4aba6845247355c171565510c2c1673 | can rename it to `get_new_estimator_object` to make it clear that it is not returning the same old estimator, since it is an instance method now | amit-sharma | 194 |
# =========================================================================
# py-why/dowhy — PR #746: "Functional api/causal estimators"
# * Introduce `fit()` method to estimators.
# * Refactor constructors to avoid using `*args` and `**kwargs` and have
#   more explicit parameters.
# * Refactor refuters and other parts of the code to use `fit()` and modify
#   arguments to `estimate_effect()`.
# Created: 2022-11-04 | Merged: 2022-12-03
# File before the change: dowhy/causal_estimator.py
# =========================================================================
import logging
from collections import namedtuple
from typing import Dict, List, Optional, Union
import numpy as np
import pandas as pd
import sympy as sp
from sklearn.utils import resample
import dowhy.interpreters as interpreters
from dowhy import causal_estimators
from dowhy.causal_graph import CausalGraph
from dowhy.causal_identifier.identified_estimand import IdentifiedEstimand
from dowhy.utils.api import parse_state
logger = logging.getLogger(__name__)
class CausalEstimator:
"""Base class for an estimator of causal effect.
Subclasses implement different estimation methods. All estimation methods are in the package "dowhy.causal_estimators"
"""
# The default number of simulations for statistical testing
DEFAULT_NUMBER_OF_SIMULATIONS_STAT_TEST = 1000
# The default number of simulations to obtain confidence intervals
DEFAULT_NUMBER_OF_SIMULATIONS_CI = 100
# The portion of the total size that should be taken each time to find the confidence intervals
# 1 is the recommended value
# https://ocw.mit.edu/courses/mathematics/18-05-introduction-to-probability-and-statistics-spring-2014/readings/MIT18_05S14_Reading24.pdf
# https://projecteuclid.org/download/pdf_1/euclid.ss/1032280214
DEFAULT_SAMPLE_SIZE_FRACTION = 1
# The default Confidence Level
DEFAULT_CONFIDENCE_LEVEL = 0.95
# Number of quantiles to discretize continuous columns, for applying groupby
NUM_QUANTILES_TO_DISCRETIZE_CONT_COLS = 5
# Prefix to add to temporary categorical variables created after discretization
TEMP_CAT_COLUMN_PREFIX = "__categorical__"
DEFAULT_NOTIMPLEMENTEDERROR_MSG = "not yet implemented for {0}. If you would this to be implemented in the next version, please raise an issue at https://github.com/microsoft/dowhy/issues"
BootstrapEstimates = namedtuple("BootstrapEstimates", ["estimates", "params"])
DEFAULT_INTERPRET_METHOD = ["textual_effect_interpreter"]
# std args to be removed from locals() before being passed to args_dict
_STD_INIT_ARGS = ("self", "__class__", "args", "kwargs")
def __init__(
self,
data,
identified_estimand,
treatment,
outcome,
control_value=0,
treatment_value=1,
test_significance=False,
evaluate_effect_strength=False,
confidence_intervals=False,
target_units=None,
effect_modifiers=None,
num_null_simulations=DEFAULT_NUMBER_OF_SIMULATIONS_STAT_TEST,
num_simulations=DEFAULT_NUMBER_OF_SIMULATIONS_CI,
sample_size_fraction=DEFAULT_SAMPLE_SIZE_FRACTION,
confidence_level=DEFAULT_CONFIDENCE_LEVEL,
need_conditional_estimates="auto",
num_quantiles_to_discretize_cont_cols=NUM_QUANTILES_TO_DISCRETIZE_CONT_COLS,
**kwargs,
):
"""Initializes an estimator with data and names of relevant variables.
This method is called from the constructors of its child classes.
:param data: data frame containing the data
:param identified_estimand: probability expression
representing the target identified estimand to estimate.
:param treatment: name of the treatment variable
:param outcome: name of the outcome variable
:param control_value: Value of the treatment in the control group, for effect estimation. If treatment is multi-variate, this can be a list.
:param treatment_value: Value of the treatment in the treated group, for effect estimation. If treatment is multi-variate, this can be a list.
:param test_significance: Binary flag or a string indicating whether to test significance and by which method. All estimators support test_significance="bootstrap" that estimates a p-value for the obtained estimate using the bootstrap method. Individual estimators can override this to support custom testing methods. The bootstrap method supports an optional parameter, num_null_simulations. If False, no testing is done. If True, significance of the estimate is tested using the custom method if available, otherwise by bootstrap.
:param evaluate_effect_strength: (Experimental) whether to evaluate the strength of effect
:param confidence_intervals: Binary flag or a string indicating whether the confidence intervals should be computed and which method should be used. All methods support estimation of confidence intervals using the bootstrap method by using the parameter confidence_intervals="bootstrap". The bootstrap method takes in two arguments (num_simulations and sample_size_fraction) that can be optionally specified in the params dictionary. Estimators may also override this to implement their own confidence interval method. If this parameter is False, no confidence intervals are computed. If True, confidence intervals are computed by the estimator's specific method if available, otherwise through bootstrap.
:param target_units: The units for which the treatment effect should be estimated. This can be a string for common specifications of target units (namely, "ate", "att" and "atc"). It can also be a lambda function that can be used as an index for the data (pandas DataFrame). Alternatively, it can be a new DataFrame that contains values of the effect_modifiers and effect will be estimated only for this new data.
:param effect_modifiers: Variables on which to compute separate
effects, or return a heterogeneous effect function. Not all
methods support this currently.
:param num_null_simulations: The number of simulations for testing the
statistical significance of the estimator
:param num_simulations: The number of simulations for finding the
confidence interval (and/or standard error) for an estimate
:param sample_size_fraction: The size of the sample for the bootstrap
estimator
:param confidence_level: The confidence level of the confidence
interval estimate
:param need_conditional_estimates: Boolean flag indicating whether
conditional estimates should be computed. Defaults to True if
there are effect modifiers in the graph
:param num_quantiles_to_discretize_cont_cols: The number of quantiles
into which a numeric effect modifier is split, to enable
estimation of conditional treatment effect over it.
:param kwargs: (optional) Additional estimator-specific parameters
:returns: an instance of the estimator class.
"""
self._data = data
self._target_estimand = identified_estimand
# Currently estimation methods only support univariate treatment and outcome
self._treatment_name = treatment
self._outcome_name = outcome[0] # assuming one-dimensional outcome
self._control_value = control_value
self._treatment_value = treatment_value
self._significance_test = test_significance
self._effect_strength_eval = evaluate_effect_strength
self._target_units = target_units
self._effect_modifier_names = effect_modifiers
self._confidence_intervals = confidence_intervals
self._bootstrap_estimates = None # for confidence intervals and std error
self._bootstrap_null_estimates = None # for significance test
self._effect_modifiers = None
self.method_params = kwargs
# Setting the default interpret method
self.interpret_method = CausalEstimator.DEFAULT_INTERPRET_METHOD
self.logger = logging.getLogger(__name__)
# Setting treatment and outcome values
if self._data is not None:
self._treatment = self._data[self._treatment_name]
self._outcome = self._data[self._outcome_name]
if self._effect_modifier_names:
# only add the observed nodes
self._effect_modifier_names = [
cname for cname in self._effect_modifier_names if cname in self._data.columns
]
if len(self._effect_modifier_names) > 0:
self._effect_modifiers = self._data[self._effect_modifier_names]
self._effect_modifiers = pd.get_dummies(self._effect_modifiers, drop_first=True)
self.logger.debug("Effect modifiers: " + ",".join(self._effect_modifier_names))
else:
self._effect_modifier_names = None
# Check if some parameters were set, otherwise set to default values
self.num_null_simulations = num_null_simulations
self.num_simulations = num_simulations
self.sample_size_fraction = sample_size_fraction
self.confidence_level = confidence_level
self.num_quantiles_to_discretize_cont_cols = num_quantiles_to_discretize_cont_cols
# Estimate conditional estimates by default
self.need_conditional_estimates = (
need_conditional_estimates if need_conditional_estimates != "auto" else bool(self._effect_modifier_names)
)
@staticmethod
def get_estimator_object(new_data, identified_estimand, estimate):
"""Create a new estimator of the same type as the one passed in the estimate argument.
Creates a new object with new_data and the identified_estimand
:param new_data: np.ndarray, pd.Series, pd.DataFrame
The newly assigned data on which the estimator should run
:param identified_estimand: IdentifiedEstimand
An instance of the identified estimand class that provides the information with
respect to which causal pathways are employed when the treatment affects the outcome
:param estimate: CausalEstimate
It is an already existing estimate whose properties we wish to replicate
:returns: An instance of the same estimator class that had generated the given estimate.
"""
estimator_class = estimate.params["estimator_class"]
new_estimator = estimator_class(
new_data,
identified_estimand,
identified_estimand.treatment_variable,
identified_estimand.outcome_variable,
# names of treatment and outcome
control_value=estimate.control_value,
treatment_value=estimate.treatment_value,
test_significance=False,
evaluate_effect_strength=False,
confidence_intervals=estimate.params["confidence_intervals"],
target_units=estimate.params["target_units"],
effect_modifiers=estimate.params["effect_modifiers"],
**estimate.params["method_params"] if estimate.params["method_params"] is not None else {},
)
return new_estimator
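# Sketch of how refuters used this pre-refactor hook: rebuild an estimator of
# the same class on refutation data, reusing the settings of a prior estimate
# (`new_df`, `identified_estimand` and `estimate` are assumed to exist):
#
#   new_est = CausalEstimator.get_estimator_object(new_df, identified_estimand, estimate)
#   new_effect = new_est.estimate_effect()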
def _estimate_effect(self):
"""This method is to be overriden by the child classes, so that they can run the estimation technique of their choice"""
raise NotImplementedError(
("Main estimation method is " + CausalEstimator.DEFAULT_NOTIMPLEMENTEDERROR_MSG).format(self.__class__)
)
def estimate_effect(self):
"""Base estimation method that calls the estimate_effect method of its calling subclass.
Can optionally also test significance and estimate effect strength for any returned estimate.
:param self: object instance of class Estimator
:returns: A CausalEstimate instance that contains point estimates of average and conditional effects. Based on the parameters provided, it optionally includes confidence intervals, standard errors, statistical significance and other statistical parameters.
"""
est = self._estimate_effect()
est.add_estimator(self)
if self._significance_test:
self.test_significance(est.value, method=self._significance_test)
if self._confidence_intervals:
self.estimate_confidence_intervals(
est.value, confidence_level=self.confidence_level, method=self._confidence_intervals
)
if self._effect_strength_eval:
effect_strength_dict = self.evaluate_effect_strength(est)
est.add_effect_strength(effect_strength_dict)
return est
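# Pre-refactor flow for reference: the constructor receives the data and
# estimate_effect() takes no arguments (SomeEstimator stands in for any
# concrete subclass; all names here are illustrative):
#
#   est = SomeEstimator(
#       df,
#       identified_estimand,
#       identified_estimand.treatment_variable,
#       identified_estimand.outcome_variable,
#       test_significance="bootstrap",
#   )
#   estimate = est.estimate_effect()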
def estimate_effect_naive(self):
# TODO Only works for binary treatment
df_withtreatment = self._data.loc[self._data[self._treatment_name] == 1]
df_notreatment = self._data.loc[self._data[self._treatment_name] == 0]
est = np.mean(df_withtreatment[self._outcome_name]) - np.mean(df_notreatment[self._outcome_name])
return CausalEstimate(est, None, None, control_value=0, treatment_value=1)
def _estimate_effect_fn(self, data_df):
"""Function used in conditional effect estimation. This function is to be overridden by each child estimator.
The overridden function should take in a dataframe as input and return the estimate for that data.
"""
raise NotImplementedError(
("Conditional treatment effects are " + CausalEstimator.DEFAULT_NOTIMPLEMENTEDERROR_MSG).format(
self.__class__
)
)
def _estimate_conditional_effects(self, estimate_effect_fn, effect_modifier_names=None, num_quantiles=None):
"""Estimate conditional treatment effects. Common method for all estimators that utilizes a specific estimate_effect_fn implemented by each child estimator.
If a numeric effect modifier is provided, it is discretized into quantile bins. If you would like a custom discretization, you can do so yourself: create a new column containing the discretized effect modifier and then include that column's name in the effect_modifier_names argument.
:param estimate_effect_fn: Function that has a single parameter (a data frame) and returns the treatment effect estimate on that data.
:param effect_modifier_names: Names of effect modifier variables over which the conditional effects will be estimated. If not provided, defaults to the effect modifiers specified during creation of the CausalEstimator object.
:param num_quantiles: The number of quantiles into which a numeric effect modifier variable is discretized. Does not affect any categorical effect modifiers.
:returns: A (multi-index) dataframe that provides separate effects for each value of the (discretized) effect modifiers.
"""
# Defaulting to class default values if parameters are not provided
if effect_modifier_names is None:
effect_modifier_names = self._effect_modifier_names
if num_quantiles is None:
num_quantiles = self.num_quantiles_to_discretize_cont_cols
# Checking that there is at least one effect modifier
if not effect_modifier_names:
raise ValueError("At least one effect modifier should be specified to compute conditional effects.")
# Making sure that effect_modifier_names is a list
effect_modifier_names = parse_state(effect_modifier_names)
if not all(em in self._effect_modifier_names for em in effect_modifier_names):
self.logger.warning(
"At least one of the provided effect modifiers was not included while fitting the estimator. You may get incorrect results. To resolve, fit the estimator again by providing the updated effect modifiers in estimate_effect()."
)
# Making a copy since we are going to be changing effect modifier names
effect_modifier_names = effect_modifier_names.copy()
prefix = CausalEstimator.TEMP_CAT_COLUMN_PREFIX
# For every numeric effect modifier, adding a temp categorical column
for i in range(len(effect_modifier_names)):
em = effect_modifier_names[i]
if pd.api.types.is_numeric_dtype(self._data[em].dtypes):
self._data[prefix + str(em)] = pd.qcut(self._data[em], num_quantiles, duplicates="drop")
effect_modifier_names[i] = prefix + str(em)
# Grouping by effect modifiers and computing effect separately
by_effect_mods = self._data.groupby(effect_modifier_names)
cond_est_fn = lambda x: self._do(self._treatment_value, x) - self._do(self._control_value, x)
conditional_estimates = by_effect_mods.apply(estimate_effect_fn)
# Deleting the temporary categorical columns
for em in effect_modifier_names:
if em.startswith(prefix):
self._data.pop(em)
return conditional_estimates
def _do(self, x, data_df=None):
raise NotImplementedError(
("Do-operator is " + CausalEstimator.DEFAULT_NOTIMPLEMENTEDERROR_MSG).format(self.__class__)
)
def do(self, x, data_df=None):
"""Method that implements the do-operator.
Given a value x for the treatment, returns the expected value of the outcome when the treatment is intervened to a value x.
:param x: Value of the treatment
:param data_df: Data on which the do-operator is to be applied.
:returns: Value of the outcome when treatment is intervened/set to x.
"""
est = self._do(x, data_df)
return est
def construct_symbolic_estimator(self, estimand):
raise NotImplementedError(("Symbolic estimator string is ").format(self.__class__))
def _generate_bootstrap_estimates(self, num_bootstrap_simulations, sample_size_fraction):
"""Helper function to generate causal estimates over bootstrapped samples.
:param num_bootstrap_simulations: Number of simulations for the bootstrap method.
:param sample_size_fraction: Fraction of the dataset to be resampled.
:returns: A collections.namedtuple containing a list of bootstrapped estimates and a dictionary containing parameters used for the bootstrap.
"""
# The array that stores the results of all estimations
simulation_results = np.zeros(num_bootstrap_simulations)
# Find the sample size as a proportion of the population size
sample_size = int(sample_size_fraction * len(self._data))
if sample_size > len(self._data):
self.logger.warning("WARN: The sample size is greater than the data being sampled")
self.logger.info("INFO: The sample size: {}".format(sample_size))
self.logger.info("INFO: The number of simulations: {}".format(num_bootstrap_simulations))
# Perform the set number of simulations
for index in range(num_bootstrap_simulations):
new_data = resample(self._data, n_samples=sample_size)
new_estimator = type(self)(
new_data,
self._target_estimand,
self._target_estimand.treatment_variable,
self._target_estimand.outcome_variable,
# names of treatment and outcome
treatment_value=self._treatment_value,
control_value=self._control_value,
test_significance=False,
evaluate_effect_strength=False,
confidence_intervals=False,
target_units=self._target_units,
effect_modifiers=self._effect_modifier_names,
**self.method_params,
)
new_effect = new_estimator.estimate_effect()
simulation_results[index] = new_effect.value
estimates = CausalEstimator.BootstrapEstimates(
simulation_results,
{"num_simulations": num_bootstrap_simulations, "sample_size_fraction": sample_size_fraction},
)
return estimates
def _estimate_confidence_intervals_with_bootstrap(
self, estimate_value, confidence_level=None, num_simulations=None, sample_size_fraction=None
):
"""
Method to compute confidence interval using bootstrapped sampling.
:param estimate_value: obtained estimate's value
:param confidence_level: The level for which to compute CI (e.g., 95% confidence level translates to confidence_level=0.95)
:param num_simulations: The number of simulations to be performed to get the bootstrap confidence intervals.
:param sample_size_fraction: The fraction of the dataset to be resampled.
:returns: confidence interval at the specified level.
For more details on bootstrap or resampling statistics, refer to the following links:
https://ocw.mit.edu/courses/mathematics/18-05-introduction-to-probability-and-statistics-spring-2014/readings/MIT18_05S14_Reading24.pdf
https://projecteuclid.org/download/pdf_1/euclid.ss/1032280214
"""
# Using class default parameters if not specified
if num_simulations is None:
num_simulations = self.num_simulations
if sample_size_fraction is None:
sample_size_fraction = self.sample_size_fraction
# Checking if bootstrap_estimates are already computed
if self._bootstrap_estimates is None:
self._bootstrap_estimates = self._generate_bootstrap_estimates(num_simulations, sample_size_fraction)
elif CausalEstimator.is_bootstrap_parameter_changed(self._bootstrap_estimates.params, locals()):
# Checked if any parameter is changed from the previous std error estimate
self._bootstrap_estimates = self._generate_bootstrap_estimates(num_simulations, sample_size_fraction)
# Now use the data obtained from the simulations to get the value of the confidence estimates
bootstrap_estimates = self._bootstrap_estimates.estimates
# Get the variations of each bootstrap estimate and sort
bootstrap_variations = [bootstrap_estimate - estimate_value for bootstrap_estimate in bootstrap_estimates]
sorted_bootstrap_variations = np.sort(bootstrap_variations)
# Now we take the (1- p)th and the (p)th variations, where p is the chosen confidence level
upper_bound_index = int((1 - confidence_level) * len(sorted_bootstrap_variations))
lower_bound_index = int(confidence_level * len(sorted_bootstrap_variations))
# Get the lower and upper bounds by subtracting the variations from the estimate
lower_bound = estimate_value - sorted_bootstrap_variations[lower_bound_index]
upper_bound = estimate_value - sorted_bootstrap_variations[upper_bound_index]
return lower_bound, upper_bound
def _estimate_confidence_intervals(self, confidence_level=None, method=None, **kwargs):
"""
This method is to be overriden by the child classes, so that they
can run a confidence interval estimation method suited to the specific
causal estimator.
"""
raise NotImplementedError(
(
"This method for estimating confidence intervals is "
+ CausalEstimator.DEFAULT_NOTIMPLEMENTEDERROR_MSG
+ " Meanwhile, you can try the bootstrap method (method='bootstrap') to estimate confidence intervals."
).format(self.__class__)
)
def estimate_confidence_intervals(self, estimate_value, confidence_level=None, method=None, **kwargs):
"""Find the confidence intervals corresponding to any estimator
By default, this is done with the help of bootstrapped confidence intervals
but can be overridden if the specific estimator implements other methods of estimating confidence intervals.
If the method provided is not bootstrap, this function calls the implementation of the specific estimator.
:param estimate_value: obtained estimate's value
:param method: Method for estimating confidence intervals.
:param confidence_level: The confidence level of the confidence intervals of the estimate.
:param kwargs: Other optional args to be passed to the CI method.
:returns: The obtained confidence interval.
"""
if method is None:
if self._confidence_intervals:
method = self._confidence_intervals # this is either True or methodname
else:
method = "default"
confidence_intervals = None
if confidence_level is None:
confidence_level = self.confidence_level
if method == "default" or method is True: # user has not provided any method
try:
confidence_intervals = self._estimate_confidence_intervals(confidence_level, method=method, **kwargs)
except NotImplementedError:
confidence_intervals = self._estimate_confidence_intervals_with_bootstrap(
estimate_value, confidence_level, **kwargs
)
else:
if method == "bootstrap":
confidence_intervals = self._estimate_confidence_intervals_with_bootstrap(
estimate_value, confidence_level, **kwargs
)
else:
confidence_intervals = self._estimate_confidence_intervals(confidence_level, method=method, **kwargs)
return confidence_intervals
def _estimate_std_error_with_bootstrap(self, num_simulations=None, sample_size_fraction=None):
"""Compute standard error using the bootstrap method. Standard error
and confidence intervals use the same parameter num_simulations for
the number of bootstrap simulations.
:param num_simulations: Number of bootstrapped samples.
:param sample_size_fraction: Fraction of data to be resampled.
:returns: Standard error of the obtained estimate.
"""
# Use existing params, if new user defined params are not present
if num_simulations is None:
num_simulations = self.num_simulations
if sample_size_fraction is None:
sample_size_fraction = self.sample_size_fraction
# Checking if bootstrap_estimates are already computed
if self._bootstrap_estimates is None:
self._bootstrap_estimates = self._generate_bootstrap_estimates(num_simulations, sample_size_fraction)
elif CausalEstimator.is_bootstrap_parameter_changed(self._bootstrap_estimates.params, locals()):
# Check if any parameter is changed from the previous std error estimate
self._bootstrap_estimates = self._generate_bootstrap_estimates(num_simulations, sample_size_fraction)
std_error = np.std(self._bootstrap_estimates.estimates)
return std_error
def _estimate_std_error(self, method=None, **kwargs):
"""
This method is to be overriden by the child classes, so that they
can run a standard error estimation method suited to the specific
causal estimator.
"""
raise NotImplementedError(
(
"This method for estimating standard errors is "
+ CausalEstimator.DEFAULT_NOTIMPLEMENTEDERROR_MSG
+ " Meanwhile, you can try the bootstrap method (method='bootstrap') to estimate standard errors."
).format(self.__class__)
)
def estimate_std_error(self, method=None, **kwargs):
"""Compute standard error of an obtained causal estimate.
:param method: Method for computing the standard error.
:param kwargs: Other optional parameters to be passed to the estimating method.
:returns: Standard error of the causal estimate.
"""
if method is None:
if self._confidence_intervals:
method = self._confidence_intervals
else:
method = "default"
std_error = None
if method == "default" or method is True: # user has not provided any method
try:
std_error = self._estimate_std_error(method, **kwargs)
except NotImplementedError:
std_error = self._estimate_std_error_with_bootstrap(**kwargs)
else:
if method == "bootstrap":
std_error = self._estimate_std_error_with_bootstrap(**kwargs)
else:
std_error = self._estimate_std_error(method, **kwargs)
return std_error
def _test_significance_with_bootstrap(self, estimate_value, num_null_simulations=None):
"""Test statistical significance of an estimate using the bootstrap method.
:param estimate_value: Obtained estimate's value
:param num_null_simulations: Number of simulations for the null hypothesis
:returns: p-value of the statistical significance test.
"""
# Use existing params, if new user defined params are not present
if num_null_simulations is None:
num_null_simulations = self.num_null_simulations
do_retest = self._bootstrap_null_estimates is None or CausalEstimator.is_bootstrap_parameter_changed(
self._bootstrap_null_estimates.params, locals()
)
if do_retest:
null_estimates = np.zeros(num_null_simulations)
for i in range(num_null_simulations):
new_outcome = np.random.permutation(self._outcome)
new_data = self._data.assign(dummy_outcome=new_outcome)
# self._outcome = self._data["dummy_outcome"]
new_estimator = type(self)(
new_data,
self._target_estimand,
self._target_estimand.treatment_variable,
("dummy_outcome",),
test_significance=False,
evaluate_effect_strength=False,
confidence_intervals=False,
target_units=self._target_units,
effect_modifiers=self._effect_modifier_names,
**self.method_params,
)
new_effect = new_estimator.estimate_effect()
null_estimates[i] = new_effect.value
self._bootstrap_null_estimates = CausalEstimator.BootstrapEstimates(
null_estimates, {"num_null_simulations": num_null_simulations, "sample_size_fraction": 1}
)
# Processing the null hypothesis estimates
sorted_null_estimates = np.sort(self._bootstrap_null_estimates.estimates)
self.logger.debug("Null estimates: {0}".format(sorted_null_estimates))
median_estimate = sorted_null_estimates[int(num_null_simulations / 2)]
# Doing a two-sided test
if estimate_value > median_estimate:
# Being conservative with the p-value reported
estimate_index = np.searchsorted(sorted_null_estimates, estimate_value, side="left")
p_value = 1 - (estimate_index / num_null_simulations)
elif estimate_value <= median_estimate:
# Being conservative with the p-value reported
estimate_index = np.searchsorted(sorted_null_estimates, estimate_value, side="right")
p_value = estimate_index / num_null_simulations
# If the estimate_index is 0, it depends on the number of simulations
if p_value == 0:
p_value = (0, 1 / len(sorted_null_estimates)) # a tuple determining the range.
elif p_value == 1:
p_value = (1 - 1 / len(sorted_null_estimates), 1)
signif_dict = {"p_value": p_value}
return signif_dict
def _test_significance(self, estimate_value, method=None, **kwargs):
"""
This method is to be overridden by the child classes, so that they
can run a significance test suited to the specific
causal estimator.
"""
raise NotImplementedError(
(
"This method for testing statistical significance is "
+ CausalEstimator.DEFAULT_NOTIMPLEMENTEDERROR_MSG
+ " Meanwhile, you can try the bootstrap method (method='bootstrap') to test statistical significance."
).format(self.__class__)
)
def test_significance(self, estimate_value, method=None, **kwargs):
"""Test statistical significance of obtained estimate.
By default, uses resampling to create a non-parametric significance test.
A general procedure. Individual child estimators can implement different methods.
If the method name is different from "bootstrap", this function calls the
implementation of the child estimator.
:param self: object instance of class Estimator
:param estimate_value: obtained estimate's value
:param method: Method for checking statistical significance
:returns: p-value from the significance test
"""
if method is None:
if self._significance_test:
method = self._significance_test # this is either True or methodname
else:
method = "default"
signif_dict = None
if method == "default" or method is True: # user has not provided any method
try:
signif_dict = self._test_significance(estimate_value, method, **kwargs)
except NotImplementedError:
signif_dict = self._test_significance_with_bootstrap(estimate_value, **kwargs)
else:
if method == "bootstrap":
signif_dict = self._test_significance_with_bootstrap(estimate_value, **kwargs)
else:
signif_dict = self._test_significance(estimate_value, method, **kwargs)
return signif_dict
def evaluate_effect_strength(self, estimate):
fraction_effect_explained = self._evaluate_effect_strength(estimate, method="fraction-effect")
# Need to test r-squared before supporting
# effect_r_squared = self._evaluate_effect_strength(estimate, method="r-squared")
strength_dict = {
"fraction-effect": fraction_effect_explained
# 'r-squared': effect_r_squared
}
return strength_dict
def _evaluate_effect_strength(self, estimate, method="fraction-effect"):
supported_methods = ["fraction-effect"]
if method not in supported_methods:
raise NotImplementedError("This method is not supported for evaluating effect strength")
if method == "fraction-effect":
naive_obs_estimate = self.estimate_effect_naive()
self.logger.debug("%s %s", estimate.value, naive_obs_estimate.value)
fraction_effect_explained = estimate.value / naive_obs_estimate.value
return fraction_effect_explained
# elif method == "r-squared":
# outcome_mean = np.mean(self._outcome)
# total_variance = np.sum(np.square(self._outcome - outcome_mean))
# Assuming a linear model with one variable: the treatment
# Currently only works for continuous y
# causal_model = outcome_mean + estimate.value*self._treatment
# squared_residual = np.sum(np.square(self._outcome - causal_model))
# r_squared = 1 - (squared_residual/total_variance)
# return r_squared
else:
return None
def update_input(self, treatment_value, control_value, target_units):
self._control_value = control_value
self._treatment_value = treatment_value
self._target_units = target_units
@staticmethod
def is_bootstrap_parameter_changed(bootstrap_estimates_params, given_params):
"""Check whether parameters of the bootstrap have changed.
This is an efficiency method that checks if fresh resampling of the bootstrap samples is required.
Returns True if parameters have changed and resampling should be done again.
:param bootstrap_estimates_params: A dictionary of parameters for the current bootstrap samples
:param given_params: A dictionary of parameters passed by the user
:returns: A binary flag denoting whether the parameters are different.
"""
is_any_parameter_changed = False
for prm, val in bootstrap_estimates_params.items():
given_val = given_params.get(prm, None)
if given_val is not None and given_val != val:
is_any_parameter_changed = True
break
return is_any_parameter_changed
def target_units_tostr(self):
s = ""
if type(self._target_units) is str:
s += self._target_units
elif callable(self._target_units):
s += "Data subset defined by a function"
elif isinstance(self._target_units, pd.DataFrame):
s += "Data subset provided as a data frame"
return s
def signif_results_tostr(self, signif_results):
s = ""
pval = signif_results["p_value"]
if type(pval) is tuple:
s += "[{0}, {1}]".format(pval[0], pval[1])
else:
s += "{0}".format(pval)
return s
def estimate_effect(
treatment: Union[str, List[str]],
outcome: Union[str, List[str]],
identified_estimand: IdentifiedEstimand,
identifier_name: str,
method: CausalEstimator,
control_value: int = 0,
treatment_value: int = 1,
test_significance: Optional[bool] = None,
evaluate_effect_strength: bool = False,
confidence_intervals: bool = False,
target_units: str = "ate",
effect_modifiers: List[str] = [],
fit_estimator: bool = True,
method_params: Optional[Dict] = None,
):
"""Estimate the identified causal effect.
Currently requires an explicit method name to be specified. Method names follow the convention of identification method followed by the specific estimation method: "[backdoor/iv].estimation_method_name". The following methods are supported.
* Propensity Score Matching: "backdoor.propensity_score_matching"
* Propensity Score Stratification: "backdoor.propensity_score_stratification"
* Propensity Score-based Inverse Weighting: "backdoor.propensity_score_weighting"
* Linear Regression: "backdoor.linear_regression"
* Generalized Linear Models (e.g., logistic regression): "backdoor.generalized_linear_model"
* Instrumental Variables: "iv.instrumental_variable"
* Regression Discontinuity: "iv.regression_discontinuity"
In addition, you can directly call any of the EconML estimation methods. The convention is "backdoor.econml.path-to-estimator-class". For example, for the double machine learning estimator ("DML" class) that is located inside "dml" module of EconML, you can use the method name, "backdoor.econml.dml.DML". CausalML estimators can also be called. See `this demo notebook <https://py-why.github.io/dowhy/example_notebooks/dowhy-conditional-treatment-effects.html>`_.
:param treatment: Name of the treatment
:param outcome: Name of the outcome
:param identified_estimand: a probability expression
that represents the effect to be estimated. Output of
CausalModel.identify_effect method
:param identifier_name: name of the identification method used to identify the target estimand (e.g., "backdoor")
:param method: instance of the causal estimator to be used.
:param control_value: Value of the treatment in the control group, for effect estimation. If treatment is multi-variate, this can be a list.
:param treatment_value: Value of the treatment in the treated group, for effect estimation. If treatment is multi-variate, this can be a list.
:param test_significance: Binary flag on whether to additionally do a statistical significance test for the estimate.
:param evaluate_effect_strength: (Experimental) Binary flag on whether to estimate the relative strength of the treatment's effect. This measure can be used to compare different treatments for the same outcome (by running this method with different treatments sequentially).
:param confidence_intervals: (Experimental) Binary flag indicating whether confidence intervals should be computed.
:param target_units: (Experimental) The units for which the treatment effect should be estimated. This can be of three types. (1) a string for common specifications of target units (namely, "ate", "att" and "atc"), (2) a lambda function that can be used as an index for the data (pandas DataFrame), or (3) a new DataFrame that contains values of the effect_modifiers and effect will be estimated only for this new data.
:param effect_modifiers: Names of effect modifier variables can be (optionally) specified here too, since they do not affect identification. If None, the effect_modifiers from the CausalModel are used.
:param fit_estimator: Boolean flag on whether to fit the estimator.
Setting it to False is useful to estimate the effect on new data using a previously fitted estimator.
:param method_params: Dictionary containing any method-specific parameters. These are passed directly to the estimating method. See the docs for each estimation method for allowed method-specific params.
:returns: An instance of the CausalEstimate class, containing the causal effect estimate
and other method-dependent information
"""
treatment = parse_state(treatment)
outcome = parse_state(outcome)
causal_estimator_class = method.__class__
identified_estimand.set_identifier_method(identifier_name)
if identified_estimand.no_directed_path:
logger.warning("No directed path from {0} to {1}.".format(treatment, outcome))
return CausalEstimate(
0, identified_estimand, None, control_value=control_value, treatment_value=treatment_value
)
# Check if estimator's target estimand is identified
elif identified_estimand.estimands[identifier_name] is None:
logger.error("No valid identified estimand available.")
return CausalEstimate(None, None, None, control_value=control_value, treatment_value=treatment_value)
method.update_input(treatment_value, control_value, target_units)
estimate = method.estimate_effect()
# Store parameters inside estimate object for refutation methods
# TODO: This add_params needs to move to the estimator class
# inside estimate_effect and estimate_conditional_effect
estimate.add_params(
estimand_type=identified_estimand.estimand_type,
estimator_class=causal_estimator_class,
test_significance=test_significance,
evaluate_effect_strength=evaluate_effect_strength,
confidence_intervals=confidence_intervals,
target_units=target_units,
effect_modifiers=effect_modifiers,
method_params=method_params,
)
return estimate
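# --- Illustrative usage (not part of the library source) ---
# A minimal sketch of how this pre-refactor estimate_effect() could be invoked.
# The names `df`, `estimand` and the choice of LinearRegressionEstimator are
# assumptions for illustration, not taken from this file; in the old API the
# estimator is constructed with the data and variable names up front.
#
# from dowhy.causal_estimators.linear_regression_estimator import LinearRegressionEstimator
#
# estimator = LinearRegressionEstimator(df, estimand, ["treatment"], ["outcome"])
# estimate = estimate_effect(
#     treatment="treatment",
#     outcome="outcome",
#     identified_estimand=estimand,
#     identifier_name="backdoor",
#     method=estimator,
# )
# print(estimate.value)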
class CausalEstimate:
"""Class for the estimate object that every causal estimator returns"""
def __init__(
self,
estimate,
target_estimand,
realized_estimand_expr,
control_value,
treatment_value,
conditional_estimates=None,
**kwargs,
):
self.value = estimate
self.target_estimand = target_estimand
self.realized_estimand_expr = realized_estimand_expr
self.control_value = control_value
self.treatment_value = treatment_value
self.conditional_estimates = conditional_estimates
self.params = kwargs
if self.params is not None:
for key, value in self.params.items():
setattr(self, key, value)
self.effect_strength = None
def add_estimator(self, estimator_instance):
self.estimator = estimator_instance
def add_effect_strength(self, strength_dict):
self.effect_strength = strength_dict
def add_params(self, **kwargs):
self.params.update(kwargs)
def get_confidence_intervals(self, confidence_level=None, method=None, **kwargs):
"""Get confidence intervals of the obtained estimate.
By default, this is done with the help of bootstrapped confidence intervals
but can be overridden if the specific estimator implements other methods of estimating confidence intervals.
If the method provided is not bootstrap, this function calls the implementation of the specific estimator.
:param method: Method for estimating confidence intervals.
:param confidence_level: The confidence level of the confidence intervals of the estimate.
:param kwargs: Other optional args to be passed to the CI method.
:returns: The obtained confidence interval.
"""
confidence_intervals = self.estimator.estimate_confidence_intervals(
estimate_value=self.value, confidence_level=confidence_level, method=method, **kwargs
)
return confidence_intervals
def get_standard_error(self, method=None, **kwargs):
"""Get standard error of the obtained estimate.
By default, this is done with the help of bootstrapped standard errors
but can be overridden if the specific estimator implements other methods of estimating standard error.
If the method provided is not bootstrap, this function calls the implementation of the specific estimator.
:param method: Method for computing the standard error.
:param kwargs: Other optional parameters to be passed to the estimating method.
:returns: Standard error of the causal estimate.
"""
std_error = self.estimator.estimate_std_error(method=method, **kwargs)
return std_error
def test_stat_significance(self, method=None, **kwargs):
"""Test statistical significance of the estimate obtained.
By default, uses resampling to create a non-parametric significance test.
Individual child estimators can implement different methods.
If the method name is different from "bootstrap", this function calls the
implementation of the child estimator.
:param method: Method for checking statistical significance
:param kwargs: Other optional parameters to be passed to the estimating method.
:returns: p-value from the significance test
"""
signif_results = self.estimator.test_significance(self.value, method=method, **kwargs)
return {"p_value": signif_results["p_value"]}
def estimate_conditional_effects(
self, effect_modifiers=None, num_quantiles=CausalEstimator.NUM_QUANTILES_TO_DISCRETIZE_CONT_COLS
):
"""Estimate treatment effect conditioned on given variables.
If a numeric effect modifier is provided, it is discretized into quantile bins. If you would like a custom discretization, you can do so yourself: create a new column containing the discretized effect modifier and then include that column's name in the effect_modifier_names argument.
:param effect_modifiers: Names of effect modifier variables over which the conditional effects will be estimated. If not provided, defaults to the effect modifiers specified during creation of the CausalEstimator object.
:param num_quantiles: The number of quantiles into which a numeric effect modifier variable is discretized. Does not affect any categorical effect modifiers.
:returns: A (multi-index) dataframe that provides separate effects for each value of the (discretized) effect modifiers.
"""
return self.estimator._estimate_conditional_effects(
self.estimator._estimate_effect_fn, effect_modifiers, num_quantiles
)
def interpret(self, method_name=None, **kwargs):
"""Interpret the causal estimate.
:param method_name: Method used (string) or a list of methods. If None, then the default for the specific estimator is used.
:param kwargs:: Optional parameters that are directly passed to the interpreter method.
:returns: None
"""
if method_name is None:
method_name = self.estimator.interpret_method
method_name_arr = parse_state(method_name)
for method in method_name_arr:
interpreter = interpreters.get_class_object(method)
interpreter(self, **kwargs).interpret()
def __str__(self):
s = "*** Causal Estimate ***\n"
# No estimand was identified (identification failed)
if self.target_estimand is None:
return "Estimation failed! No relevant identified estimand available for this estimation method."
s += "\n## Identified estimand\n{0}".format(self.target_estimand.__str__(only_target_estimand=True))
s += "\n## Realized estimand\n{0}".format(self.realized_estimand_expr)
if hasattr(self, "estimator"):
s += "\nTarget units: {0}\n".format(self.estimator.target_units_tostr())
s += "\n## Estimate\n"
s += "Mean value: {0}\n".format(self.value)
s += ""
if hasattr(self, "cate_estimates"):
s += "Effect estimates: {0}\n".format(self.cate_estimates)
if hasattr(self, "estimator"):
if self.estimator._significance_test:
s += "p-value: {0}\n".format(self.estimator.signif_results_tostr(self.test_stat_significance()))
if self.estimator._confidence_intervals:
s += "{0}% confidence interval: {1}\n".format(
100 * self.estimator.confidence_level, self.get_confidence_intervals()
)
if self.conditional_estimates is not None:
s += "### Conditional Estimates\n"
s += str(self.conditional_estimates)
if self.effect_strength is not None:
s += "\n## Effect Strength\n"
s += "Change in outcome attributable to treatment: {}\n".format(self.effect_strength["fraction-effect"])
# s += "Variance in outcome explained by treatment: {}\n".format(self.effect_strength["r-squared"])
return s
class RealizedEstimand(object):
def __init__(self, identified_estimand, estimator_name):
self.treatment_variable = identified_estimand.treatment_variable
self.outcome_variable = identified_estimand.outcome_variable
self.backdoor_variables = identified_estimand.get_backdoor_variables()
self.instrumental_variables = identified_estimand.instrumental_variables
self.estimand_type = identified_estimand.estimand_type
self.estimand_expression = None
self.assumptions = None
self.estimator_name = estimator_name
def update_assumptions(self, estimator_assumptions):
self.assumptions = estimator_assumptions
def update_estimand_expression(self, estimand_expression):
self.estimand_expression = estimand_expression
def __str__(self):
s = "Realized estimand: {0}\n".format(self.estimator_name)
s += "Realized estimand type: {0}\n".format(self.estimand_type)
s += "Estimand expression:\n{0}\n".format(sp.pretty(self.estimand_expression))
j = 1
for ass_name, ass_str in self.assumptions.items():
s += "Estimand assumption {0}, {1}: {2}\n".format(j, ass_name, ass_str)
j += 1
return s
| import copy
import logging
from collections import namedtuple
from typing import Dict, List, Optional, Union
import numpy as np
import pandas as pd
import sympy as sp
from sklearn.utils import resample
import dowhy.interpreters as interpreters
from dowhy.causal_identifier.identified_estimand import IdentifiedEstimand
from dowhy.utils.api import parse_state
logger = logging.getLogger(__name__)
class CausalEstimator:
"""Base class for an estimator of causal effect.
Subclasses implement different estimation methods. All estimation methods are in the package "dowhy.causal_estimators"
"""
# The default number of simulations for statistical testing
DEFAULT_NUMBER_OF_SIMULATIONS_STAT_TEST = 1000
# The default number of simulations to obtain confidence intervals
DEFAULT_NUMBER_OF_SIMULATIONS_CI = 100
# The portion of the total size that should be taken each time to find the confidence intervals
# 1 is the recommended value
# https://ocw.mit.edu/courses/mathematics/18-05-introduction-to-probability-and-statistics-spring-2014/readings/MIT18_05S14_Reading24.pdf
# https://projecteuclid.org/download/pdf_1/euclid.ss/1032280214
DEFAULT_SAMPLE_SIZE_FRACTION = 1
# The default Confidence Level
DEFAULT_CONFIDENCE_LEVEL = 0.95
# Number of quantiles to discretize continuous columns, for applying groupby
NUM_QUANTILES_TO_DISCRETIZE_CONT_COLS = 5
# Prefix to add to temporary categorical variables created after discretization
TEMP_CAT_COLUMN_PREFIX = "__categorical__"
DEFAULT_NOTIMPLEMENTEDERROR_MSG = "not yet implemented for {0}. If you would like this to be implemented in the next version, please raise an issue at https://github.com/microsoft/dowhy/issues"
BootstrapEstimates = namedtuple("BootstrapEstimates", ["estimates", "params"])
DEFAULT_INTERPRET_METHOD = ["textual_effect_interpreter"]
# std args to be removed from locals() before being passed to args_dict
_STD_INIT_ARGS = ("self", "__class__", "args", "kwargs")
def __init__(
self,
identified_estimand: IdentifiedEstimand,
test_significance: bool = False,
evaluate_effect_strength: bool = False,
confidence_intervals: bool = False,
num_null_simulations: int = DEFAULT_NUMBER_OF_SIMULATIONS_STAT_TEST,
num_simulations: int = DEFAULT_NUMBER_OF_SIMULATIONS_CI,
sample_size_fraction: float = DEFAULT_SAMPLE_SIZE_FRACTION,
confidence_level: float = DEFAULT_CONFIDENCE_LEVEL,
need_conditional_estimates: Union[bool, str] = "auto",
num_quantiles_to_discretize_cont_cols: int = NUM_QUANTILES_TO_DISCRETIZE_CONT_COLS,
**_,
):
"""Initializes an estimator with data and names of relevant variables.
This method is called from the constructors of its child classes.
:param identified_estimand: probability expression
representing the target identified estimand to estimate.
:param test_significance: Binary flag or a string indicating whether to test significance and by which method. All estimators support test_significance="bootstrap" that estimates a p-value for the obtained estimate using the bootstrap method. Individual estimators can override this to support custom testing methods. The bootstrap method supports an optional parameter, num_null_simulations. If False, no testing is done. If True, significance of the estimate is tested using the custom method if available, otherwise by bootstrap.
:param evaluate_effect_strength: (Experimental) whether to evaluate the strength of effect
:param confidence_intervals: Binary flag or a string indicating whether the confidence intervals should be computed and which method should be used. All methods support estimation of confidence intervals using the bootstrap method by using the parameter confidence_intervals="bootstrap". The bootstrap method takes in two arguments (num_simulations and sample_size_fraction) that can be optionally specified in the params dictionary. Estimators may also override this to implement their own confidence interval method. If this parameter is False, no confidence intervals are computed. If True, confidence intervals are computed by the estimator's specific method if available, otherwise through bootstrap
:param num_null_simulations: The number of simulations for testing the
statistical significance of the estimator
:param num_simulations: The number of simulations for finding the
confidence interval (and/or standard error) for a estimate
:param sample_size_fraction: The size of the sample for the bootstrap
estimator
:param confidence_level: The confidence level of the confidence
interval estimate
:param need_conditional_estimates: Boolean flag or the string "auto" indicating whether
conditional estimates should be computed. When "auto", defaults to True if
there are effect modifiers in the graph
:param num_quantiles_to_discretize_cont_cols: The number of quantiles
into which a numeric effect modifier is split, to enable
estimation of conditional treatment effect over it.
:param kwargs: (optional) Additional estimator-specific parameters
:returns: an instance of the estimator class.
"""
self._target_estimand = identified_estimand
self._significance_test = test_significance
self._effect_strength_eval = evaluate_effect_strength
self._confidence_intervals = confidence_intervals
# Setting the default interpret method
self.interpret_method = CausalEstimator.DEFAULT_INTERPRET_METHOD
self.logger = logging.getLogger(__name__)
# Check if some parameters were set, otherwise set to default values
self.num_null_simulations = num_null_simulations
self.num_simulations = num_simulations
self.sample_size_fraction = sample_size_fraction
self.confidence_level = confidence_level
self.num_quantiles_to_discretize_cont_cols = num_quantiles_to_discretize_cont_cols
# Estimate conditional estimates by default
self.need_conditional_estimates = need_conditional_estimates
self._bootstrap_estimates = None
self._bootstrap_null_estimates = None
def _set_data(self, data: pd.DataFrame, treatment_name: List[str], outcome_name: List[str]):
"""Sets the data for the estimator
:param data: data frame containing the data
:param treatment_name: name of the treatment variable
:param outcome_name: name of the outcome variable
"""
self._data = data
self._treatment_name = treatment_name
self._outcome_name = outcome_name[0]
self._treatment = self._data[self._treatment_name]
self._outcome = self._data[self._outcome_name]
def _set_effect_modifiers(self, effect_modifier_names: Optional[List[str]] = None):
"""Sets the effect modifiers for the estimator
Updates need_conditional_estimates according to the effect modifiers' value
:param effect_modifier_names: Variables on which to compute separate
effects, or return a heterogeneous effect function. Not all
methods support this currently.
"""
self._effect_modifiers = effect_modifier_names
if effect_modifier_names is not None:
self._effect_modifier_names = [cname for cname in effect_modifier_names if cname in self._data.columns]
if len(self._effect_modifier_names) > 0:
self._effect_modifiers = self._data[self._effect_modifier_names]
self._effect_modifiers = pd.get_dummies(self._effect_modifiers, drop_first=True)
self.logger.debug("Effect modifiers: " + ",".join(self._effect_modifier_names))
else:
self._effect_modifier_names = []
else:
self._effect_modifier_names = []
self.need_conditional_estimates = (
self.need_conditional_estimates
if self.need_conditional_estimates != "auto"
else len(self._effect_modifier_names) > 0
)
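# --- Illustrative sketch (not part of the library source) ---
# What the dummy-encoding step above does to a hypothetical modifier column:
#
# import pandas as pd
# mods = pd.DataFrame({"region": ["north", "south", "south", "east"]})
# pd.get_dummies(mods, drop_first=True)
# # -> indicator columns region_north and region_south; the first category
# #    ("east", alphabetically) is dropped as the baseline.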
def _set_identified_estimand(self, new_identified_estimand):
"""Method used internally to change the target estimand (required by some refuters)
:param new_identified_estimand: The new target_estimand to use
"""
self._target_estimand = new_identified_estimand
def get_new_estimator_object(
self,
identified_estimand,
test_significance=False,
evaluate_effect_strength=False,
confidence_intervals=None,
):
"""Create a new estimator of the same type as the one passed in the estimate argument.
Creates a new object with the identified_estimand
:param identified_estimand: IdentifiedEstimand
An instance of the identified estimand class that provides the information with
respect to which causal pathways are employed when the treatment affects the outcome
:returns: A new instance of the same estimator class that had generated the given estimate.
"""
new_estimator = copy.deepcopy(self)
new_estimator._target_estimand = identified_estimand
new_estimator._significance_test = test_significance
new_estimator._effect_strength_eval = evaluate_effect_strength
new_estimator._confidence_intervals = (
self._confidence_intervals if confidence_intervals is None else confidence_intervals
)
return new_estimator
def estimate_effect_naive(self):
# TODO Only works for binary treatment
df_withtreatment = self._data.loc[self._data[self._treatment_name] == 1]
df_notreatment = self._data.loc[self._data[self._treatment_name] == 0]
est = np.mean(df_withtreatment[self._outcome_name]) - np.mean(df_notreatment[self._outcome_name])
return CausalEstimate(est, None, None, control_value=0, treatment_value=1)
def _estimate_effect_fn(self, data_df):
"""Function used in conditional effect estimation. This function is to be overridden by each child estimator.
The overridden function should take in a dataframe as input and return the estimate for that data.
"""
raise NotImplementedError(
("Conditional treatment effects are " + CausalEstimator.DEFAULT_NOTIMPLEMENTEDERROR_MSG).format(
self.__class__
)
)
def _estimate_conditional_effects(self, estimate_effect_fn, effect_modifier_names=None, num_quantiles=None):
"""Estimate conditional treatment effects. Common method for all estimators that utilizes a specific estimate_effect_fn implemented by each child estimator.
If a numeric effect modifier is provided, it is discretized into quantile bins. If you would like a custom discretization, you can do so yourself: create a new column containing the discretized effect modifier and then include that column's name in the effect_modifier_names argument.
:param estimate_effect_fn: Function that has a single parameter (a data frame) and returns the treatment effect estimate on that data.
:param effect_modifier_names: Names of effect modifier variables over which the conditional effects will be estimated. If not provided, defaults to the effect modifiers specified during creation of the CausalEstimator object.
:param num_quantiles: The number of quantiles into which a numeric effect modifier variable is discretized. Does not affect any categorical effect modifiers.
:returns: A (multi-index) dataframe that provides separate effects for each value of the (discretized) effect modifiers.
"""
# Defaulting to class default values if parameters are not provided
if effect_modifier_names is None:
effect_modifier_names = self._effect_modifier_names
if num_quantiles is None:
num_quantiles = self.num_quantiles_to_discretize_cont_cols
# Checking that there is at least one effect modifier
if not effect_modifier_names:
raise ValueError("At least one effect modifier should be specified to compute conditional effects.")
# Making sure that effect_modifier_names is a list
effect_modifier_names = parse_state(effect_modifier_names)
if not all(em in self._effect_modifier_names for em in effect_modifier_names):
self.logger.warning(
"At least one of the provided effect modifiers was not included while fitting the estimator. You may get incorrect results. To resolve, fit the estimator again by providing the updated effect modifiers in estimate_effect()."
)
# Making a copy since we are going to be changing effect modifier names
effect_modifier_names = effect_modifier_names.copy()
prefix = CausalEstimator.TEMP_CAT_COLUMN_PREFIX
# For every numeric effect modifier, adding a temp categorical column
for i in range(len(effect_modifier_names)):
em = effect_modifier_names[i]
if pd.api.types.is_numeric_dtype(self._data[em].dtypes):
self._data[prefix + str(em)] = pd.qcut(self._data[em], num_quantiles, duplicates="drop")
effect_modifier_names[i] = prefix + str(em)
# Grouping by effect modifiers and computing effect separately
by_effect_mods = self._data.groupby(effect_modifier_names)
cond_est_fn = lambda x: self._do(self._treatment_value, x) - self._do(self._control_value, x)
conditional_estimates = by_effect_mods.apply(estimate_effect_fn)
# Deleting the temporary categorical columns
for em in effect_modifier_names:
if em.startswith(prefix):
self._data.pop(em)
return conditional_estimates
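# --- Illustrative sketch (not part of the library source) ---
# How the quantile discretization above behaves on a hypothetical numeric modifier:
#
# import pandas as pd
# ages = pd.Series([23, 35, 41, 52, 67, 18, 44, 59])
# bins = pd.qcut(ages, 4, duplicates="drop")  # 4 quantile bins, like num_quantiles
# print(bins.value_counts())                  # roughly equal-sized groups to group by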
def _do(self, x, data_df=None):
raise NotImplementedError(
("Do-operator is " + CausalEstimator.DEFAULT_NOTIMPLEMENTEDERROR_MSG).format(self.__class__)
)
def do(self, x, data_df=None):
"""Method that implements the do-operator.
Given a value x for the treatment, returns the expected value of the outcome when the treatment is intervened to a value x.
:param x: Value of the treatment
:param data_df: Data on which the do-operator is to be applied.
:returns: Value of the outcome when treatment is intervened/set to x.
"""
est = self._do(x, data_df)
return est
def construct_symbolic_estimator(self, estimand):
raise NotImplementedError(("Symbolic estimator string is ").format(self.__class__))
def _generate_bootstrap_estimates(self, num_bootstrap_simulations, sample_size_fraction):
"""Helper function to generate causal estimates over bootstrapped samples.
:param num_bootstrap_simulations: Number of simulations for the bootstrap method.
:param sample_size_fraction: Fraction of the dataset to be resampled.
:returns: A collections.namedtuple containing a list of bootstrapped estimates and a dictionary containing parameters used for the bootstrap.
"""
# The array that stores the results of all estimations
simulation_results = np.zeros(num_bootstrap_simulations)
# Find the sample size as a proportion of the population size
sample_size = int(sample_size_fraction * len(self._data))
if sample_size > len(self._data):
self.logger.warning("WARN: The sample size is greater than the data being sampled")
self.logger.info("INFO: The sample size: {}".format(sample_size))
self.logger.info("INFO: The number of simulations: {}".format(num_bootstrap_simulations))
# Perform the set number of simulations
for index in range(num_bootstrap_simulations):
new_data = resample(self._data, n_samples=sample_size)
new_estimator = self.get_new_estimator_object(
self._target_estimand,
test_significance=False,
evaluate_effect_strength=False,
confidence_intervals=False,
)
new_estimator.fit(
new_data,
self._target_estimand.treatment_variable,
self._target_estimand.outcome_variable,
effect_modifier_names=self._effect_modifier_names,
)
new_effect = new_estimator.estimate_effect(
treatment_value=self._treatment_value,
control_value=self._control_value,
target_units=self._target_units,
)
simulation_results[index] = new_effect.value
estimates = CausalEstimator.BootstrapEstimates(
simulation_results,
{"num_simulations": num_bootstrap_simulations, "sample_size_fraction": sample_size_fraction},
)
return estimates
def _estimate_confidence_intervals_with_bootstrap(
self, estimate_value, confidence_level=None, num_simulations=None, sample_size_fraction=None
):
"""
Method to compute confidence interval using bootstrapped sampling.
:param estimate_value: obtained estimate's value
:param confidence_level: The level for which to compute CI (e.g., 95% confidence level translates to confidence_level=0.95)
:param num_simulations: The number of simulations to be performed to get the bootstrap confidence intervals.
:param sample_size_fraction: The fraction of the dataset to be resampled.
:returns: confidence interval at the specified level.
For more details on bootstrap or resampling statistics, refer to the following links:
https://ocw.mit.edu/courses/mathematics/18-05-introduction-to-probability-and-statistics-spring-2014/readings/MIT18_05S14_Reading24.pdf
https://projecteuclid.org/download/pdf_1/euclid.ss/1032280214
"""
# Using class default parameters if not specified
if num_simulations is None:
num_simulations = self.num_simulations
if sample_size_fraction is None:
sample_size_fraction = self.sample_size_fraction
# Checking if bootstrap_estimates are already computed
if self._bootstrap_estimates is None:
self._bootstrap_estimates = self._generate_bootstrap_estimates(num_simulations, sample_size_fraction)
elif CausalEstimator.is_bootstrap_parameter_changed(self._bootstrap_estimates.params, locals()):
# Check if any parameter is changed from the previous std error estimate
self._bootstrap_estimates = self._generate_bootstrap_estimates(num_simulations, sample_size_fraction)
# Now use the data obtained from the simulations to get the value of the confidence estimates
bootstrap_estimates = self._bootstrap_estimates.estimates
# Get the variations of each bootstrap estimate and sort
bootstrap_variations = [bootstrap_estimate - estimate_value for bootstrap_estimate in bootstrap_estimates]
sorted_bootstrap_variations = np.sort(bootstrap_variations)
# Now we take the (1- p)th and the (p)th variations, where p is the chosen confidence level
upper_bound_index = int((1 - confidence_level) * len(sorted_bootstrap_variations))
lower_bound_index = int(confidence_level * len(sorted_bootstrap_variations))
# Get the lower and upper bounds by subtracting the variations from the estimate
lower_bound = estimate_value - sorted_bootstrap_variations[lower_bound_index]
upper_bound = estimate_value - sorted_bootstrap_variations[upper_bound_index]
return lower_bound, upper_bound
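# --- Illustrative sketch (not part of the library source) ---
# The reverse-percentile logic above, on made-up numbers:
#
# import numpy as np
# estimate_value = 2.0
# boot = np.array([1.7, 1.9, 2.1, 2.2, 2.4])   # hypothetical bootstrap estimates
# variations = np.sort(boot - estimate_value)  # [-0.3, -0.1, 0.1, 0.2, 0.4]
# level = 0.95
# lower = estimate_value - variations[int(level * len(variations))]        # 2.0 - 0.4 = 1.6
# upper = estimate_value - variations[int((1 - level) * len(variations))]  # 2.0 - (-0.3) = 2.3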
def _estimate_confidence_intervals(self, confidence_level=None, method=None, **kwargs):
"""
This method is to be overridden by the child classes, so that they
can run a confidence interval estimation method suited to the specific
causal estimator.
"""
raise NotImplementedError(
(
"This method for estimating confidence intervals is "
+ CausalEstimator.DEFAULT_NOTIMPLEMENTEDERROR_MSG
+ " Meanwhile, you can try the bootstrap method (method='bootstrap') to estimate confidence intervals."
).format(self.__class__)
)
def estimate_confidence_intervals(self, estimate_value, confidence_level=None, method=None, **kwargs):
"""Find the confidence intervals corresponding to any estimator
By default, this is done with the help of bootstrapped confidence intervals
but can be overridden if the specific estimator implements other methods of estimating confidence intervals.
If the method provided is not bootstrap, this function calls the implementation of the specific estimator.
:param estimate_value: obtained estimate's value
:param method: Method for estimating confidence intervals.
:param confidence_level: The confidence level of the confidence intervals of the estimate.
:param kwargs: Other optional args to be passed to the CI method.
:returns: The obtained confidence interval.
"""
if method is None:
if self._confidence_intervals:
method = self._confidence_intervals # this is either True or methodname
else:
method = "default"
confidence_intervals = None
if confidence_level is None:
confidence_level = self.confidence_level
if method == "default" or method is True: # user has not provided any method
try:
confidence_intervals = self._estimate_confidence_intervals(confidence_level, method=method, **kwargs)
except NotImplementedError:
confidence_intervals = self._estimate_confidence_intervals_with_bootstrap(
estimate_value, confidence_level, **kwargs
)
else:
if method == "bootstrap":
confidence_intervals = self._estimate_confidence_intervals_with_bootstrap(
estimate_value, confidence_level, **kwargs
)
else:
confidence_intervals = self._estimate_confidence_intervals(confidence_level, method=method, **kwargs)
return confidence_intervals
def _estimate_std_error_with_bootstrap(self, num_simulations=None, sample_size_fraction=None):
"""Compute standard error using the bootstrap method. Standard error
and confidence intervals use the same parameter num_simulations for
the number of bootstrap simulations.
:param num_simulations: Number of bootstrapped samples.
:param sample_size_fraction: Fraction of data to be resampled.
:returns: Standard error of the obtained estimate.
"""
# Use existing params, if new user defined params are not present
if num_simulations is None:
num_simulations = self.num_simulations
if sample_size_fraction is None:
sample_size_fraction = self.sample_size_fraction
# Checking if bootstrap_estimates are already computed
if self._bootstrap_estimates is None:
self._bootstrap_estimates = self._generate_bootstrap_estimates(num_simulations, sample_size_fraction)
elif CausalEstimator.is_bootstrap_parameter_changed(self._bootstrap_estimates.params, locals()):
# Check if any parameter is changed from the previous std error estimate
self._bootstrap_estimates = self._generate_bootstrap_estimates(num_simulations, sample_size_fraction)
std_error = np.std(self._bootstrap_estimates.estimates)
return std_error
def _estimate_std_error(self, method=None, **kwargs):
"""
This method is to be overridden by the child classes, so that they
can run a standard error estimation method suited to the specific
causal estimator.
"""
raise NotImplementedError(
(
"This method for estimating standard errors is "
+ CausalEstimator.DEFAULT_NOTIMPLEMENTEDERROR_MSG
+ " Meanwhile, you can try the bootstrap method (method='bootstrap') to estimate standard errors."
).format(self.__class__)
)
def estimate_std_error(self, method=None, **kwargs):
"""Compute standard error of an obtained causal estimate.
:param method: Method for computing the standard error.
:param kwargs: Other optional parameters to be passed to the estimating method.
:returns: Standard error of the causal estimate.
"""
if method is None:
if self._confidence_intervals:
method = self._confidence_intervals
else:
method = "default"
std_error = None
if method == "default" or method is True: # user has not provided any method
try:
std_error = self._estimate_std_error(method, **kwargs)
except NotImplementedError:
std_error = self._estimate_std_error_with_bootstrap(**kwargs)
else:
if method == "bootstrap":
std_error = self._estimate_std_error_with_bootstrap(**kwargs)
else:
std_error = self._estimate_std_error(method, **kwargs)
return std_error
def _test_significance_with_bootstrap(self, estimate_value, num_null_simulations=None):
"""Test statistical significance of an estimate using the bootstrap method.
:param estimate_value: Obtained estimate's value
:param num_null_simulations: Number of simulations for the null hypothesis
:returns: p-value of the statistical significance test.
"""
# Use existing params, if new user defined params are not present
if num_null_simulations is None:
num_null_simulations = self.num_null_simulations
do_retest = self._bootstrap_null_estimates is None or CausalEstimator.is_bootstrap_parameter_changed(
self._bootstrap_null_estimates.params, locals()
)
if do_retest:
null_estimates = np.zeros(num_null_simulations)
for i in range(num_null_simulations):
new_outcome = np.random.permutation(self._outcome)
new_data = self._data.assign(dummy_outcome=new_outcome)
# self._outcome = self._data["dummy_outcome"]
new_estimator = self.get_new_estimator_object(
self._target_estimand,
test_significance=False,
evaluate_effect_strength=False,
confidence_intervals=False,
)
new_estimator.fit(
data=new_data,
treatment_name=self._target_estimand.treatment_variable,
outcome_name=("dummy_outcome",),
effect_modifier_names=self._effect_modifier_names,
)
new_effect = new_estimator.estimate_effect(
target_units=self._target_units,
)
null_estimates[i] = new_effect.value
self._bootstrap_null_estimates = CausalEstimator.BootstrapEstimates(
null_estimates, {"num_null_simulations": num_null_simulations, "sample_size_fraction": 1}
)
# Processing the null hypothesis estimates
sorted_null_estimates = np.sort(self._bootstrap_null_estimates.estimates)
self.logger.debug("Null estimates: {0}".format(sorted_null_estimates))
median_estimate = sorted_null_estimates[int(num_null_simulations / 2)]
# Doing a two-sided test
if estimate_value > median_estimate:
# Being conservative with the p-value reported
estimate_index = np.searchsorted(sorted_null_estimates, estimate_value, side="left")
p_value = 1 - (estimate_index / num_null_simulations)
elif estimate_value <= median_estimate:
# Being conservative with the p-value reported
estimate_index = np.searchsorted(sorted_null_estimates, estimate_value, side="right")
p_value = estimate_index / num_null_simulations
# If the estimate_index is 0, it depends on the number of simulations
if p_value == 0:
p_value = (0, 1 / len(sorted_null_estimates)) # a tuple determining the range.
elif p_value == 1:
p_value = (1 - 1 / len(sorted_null_estimates), 1)
signif_dict = {"p_value": p_value}
return signif_dict
def _test_significance(self, estimate_value, method=None, **kwargs):
"""
This method is to be overridden by the child classes, so that they
can run a significance test suited to the specific
causal estimator.
"""
raise NotImplementedError(
(
"This method for testing statistical significance is "
+ CausalEstimator.DEFAULT_NOTIMPLEMENTEDERROR_MSG
+ " Meanwhile, you can try the bootstrap method (method='bootstrap') to test statistical significance."
).format(self.__class__)
)
def test_significance(self, estimate_value, method=None, **kwargs):
"""Test statistical significance of obtained estimate.
By default, uses resampling to create a non-parametric significance test.
A general procedure. Individual child estimators can implement different methods.
If the method name is different from "bootstrap", this function calls the
implementation of the child estimator.
:param self: object instance of class Estimator
:param estimate_value: obtained estimate's value
:param method: Method for checking statistical significance
:returns: p-value from the significance test
"""
if method is None:
if self._significance_test:
method = self._significance_test # this is either True or methodname
else:
method = "default"
signif_dict = None
if method == "default" or method is True: # user has not provided any method
try:
signif_dict = self._test_significance(estimate_value, method, **kwargs)
except NotImplementedError:
signif_dict = self._test_significance_with_bootstrap(estimate_value, **kwargs)
else:
if method == "bootstrap":
signif_dict = self._test_significance_with_bootstrap(estimate_value, **kwargs)
else:
signif_dict = self._test_significance(estimate_value, method, **kwargs)
return signif_dict
def evaluate_effect_strength(self, estimate):
fraction_effect_explained = self._evaluate_effect_strength(estimate, method="fraction-effect")
# Need to test r-squared before supporting
# effect_r_squared = self._evaluate_effect_strength(estimate, method="r-squared")
strength_dict = {
"fraction-effect": fraction_effect_explained
# 'r-squared': effect_r_squared
}
return strength_dict
def _evaluate_effect_strength(self, estimate, method="fraction-effect"):
supported_methods = ["fraction-effect"]
if method not in supported_methods:
raise NotImplementedError("This method is not supported for evaluating effect strength")
if method == "fraction-effect":
naive_obs_estimate = self.estimate_effect_naive()
self.logger.debug("%s %s", estimate.value, naive_obs_estimate.value)
fraction_effect_explained = estimate.value / naive_obs_estimate.value
return fraction_effect_explained
# elif method == "r-squared":
# outcome_mean = np.mean(self._outcome)
# total_variance = np.sum(np.square(self._outcome - outcome_mean))
# Assuming a linear model with one variable: the treatment
# Currently only works for continuous y
# causal_model = outcome_mean + estimate.value*self._treatment
# squared_residual = np.sum(np.square(self._outcome - causal_model))
# r_squared = 1 - (squared_residual/total_variance)
# return r_squared
else:
return None
def update_input(self, treatment_value, control_value, target_units):
self._control_value = control_value
self._treatment_value = treatment_value
self._target_units = target_units
@staticmethod
def is_bootstrap_parameter_changed(bootstrap_estimates_params, given_params):
"""Check whether parameters of the bootstrap have changed.
This is an efficiency method that checks if fresh resampling of the bootstrap samples is required.
Returns True if parameters have changed and resampling should be done again.
:param bootstrap_estimates_params: A dictionary of parameters for the current bootstrap samples
:param given_params: A dictionary of parameters passed by the user
:returns: A binary flag denoting whether the parameters are different.
"""
is_any_parameter_changed = False
for prm, val in bootstrap_estimates_params.items():
given_val = given_params.get(prm, None)
if given_val is not None and given_val != val:
is_any_parameter_changed = True
break
return is_any_parameter_changed
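# --- Illustrative sketch (not part of the library source) ---
# How the parameter-change check decides whether resampling is needed:
#
# prev = {"num_simulations": 100, "sample_size_fraction": 1}
# CausalEstimator.is_bootstrap_parameter_changed(prev, {"num_simulations": 100})  # False
# CausalEstimator.is_bootstrap_parameter_changed(prev, {"num_simulations": 200})  # True
# # A parameter missing from the new call (value None) never triggers resampling.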
def target_units_tostr(self):
s = ""
if type(self._target_units) is str:
s += self._target_units
elif callable(self._target_units):
s += "Data subset defined by a function"
elif isinstance(self._target_units, pd.DataFrame):
s += "Data subset provided as a data frame"
return s
def signif_results_tostr(self, signif_results):
s = ""
pval = signif_results["p_value"]
if type(pval) is tuple:
s += "[{0}, {1}]".format(pval[0], pval[1])
else:
s += "{0}".format(pval)
return s
def estimate_effect(
data: pd.DataFrame,
treatment: Union[str, List[str]],
outcome: Union[str, List[str]],
identifier_name: str,
estimator: CausalEstimator,
control_value: int = 0,
treatment_value: int = 1,
target_units: str = "ate",
effect_modifiers: List[str] = None,
fit_estimator: bool = True,
method_params: Optional[Dict] = None,
):
"""Estimate the identified causal effect.
In addition, EconML and CausalML estimation methods can be used by passing an instance of the corresponding DoWhy wrapper estimator. See `this demo notebook <https://py-why.github.io/dowhy/example_notebooks/dowhy-conditional-treatment-effects.html>`_.
:param data: data frame containing the observed data
:param treatment: Name of the treatment
:param outcome: Name of the outcome
:param identifier_name: name of the identification method used to identify the target estimand (e.g., "backdoor")
:param estimator: Instance of a CausalEstimator to use
:param control_value: Value of the treatment in the control group, for effect estimation. If treatment is multi-variate, this can be a list.
:param treatment_value: Value of the treatment in the treated group, for effect estimation. If treatment is multi-variate, this can be a list.
:param target_units: (Experimental) The units for which the treatment effect should be estimated. This can be of three types. (1) a string for common specifications of target units (namely, "ate", "att" and "atc"), (2) a lambda function that can be used as an index for the data (pandas DataFrame), or (3) a new DataFrame that contains values of the effect_modifiers and effect will be estimated only for this new data.
:param effect_modifiers: Names of effect modifier variables can be (optionally) specified here too, since they do not affect identification. If None, the effect_modifiers from the CausalModel are used.
:param fit_estimator: Boolean flag on whether to fit the estimator.
Setting it to False is useful to estimate the effect on new data using a previously fitted estimator.
:param method_params: Dictionary containing any method-specific parameters. An optional "fit_params" entry is passed through to the estimator's fit() method.
:returns: An instance of the CausalEstimate class, containing the causal effect estimate
and other method-dependent information
"""
if effect_modifiers is None:
effect_modifiers = []
treatment = parse_state(treatment)
outcome = parse_state(outcome)
causal_estimator_class = estimator.__class__
identified_estimand = estimator._target_estimand
identified_estimand.set_identifier_method(identifier_name)
if identified_estimand.no_directed_path:
logger.warning("No directed path from {0} to {1}.".format(treatment, outcome))
return CausalEstimate(
0, identified_estimand, None, control_value=control_value, treatment_value=treatment_value
)
# Check if estimator's target estimand is identified
elif identified_estimand.estimands[identifier_name] is None:
logger.error("No valid identified estimand available.")
return CausalEstimate(None, None, None, control_value=control_value, treatment_value=treatment_value)
if fit_estimator:
estimator.fit(
data=data,
treatment_name=treatment,
outcome_name=outcome,
effect_modifier_names=effect_modifiers,
**method_params["fit_params"] if "fit_params" in method_params else {},
)
estimate = estimator.estimate_effect(
treatment_value=treatment_value,
control_value=control_value,
target_units=target_units,
confidence_intervals=estimator._confidence_intervals,
)
# Store parameters inside estimate object for refutation methods
# TODO: This add_params needs to move to the estimator class
# inside estimate_effect and estimate_conditional_effect
estimate.add_params(
estimand_type=identified_estimand.estimand_type,
estimator_class=causal_estimator_class,
test_significance=estimator._significance_test,
evaluate_effect_strength=estimator._effect_strength_eval,
confidence_intervals=estimator._confidence_intervals,
target_units=target_units,
effect_modifiers=effect_modifiers,
)
if estimator._significance_test:
estimator.test_significance(estimate.value, method=estimator._significance_test)
if estimator._confidence_intervals:
estimator.estimate_confidence_intervals(
estimate.value, confidence_level=estimator.confidence_level, method=estimator._confidence_intervals
)
if estimator._effect_strength_eval:
effect_strength_dict = estimator.evaluate_effect_strength(estimate)
estimate.add_effect_strength(effect_strength_dict)
return estimate
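# --- Illustrative usage (not part of the library source) ---
# A minimal sketch of the refactored, fit-based API; `df`, `estimand` and the
# choice of LinearRegressionEstimator are assumptions for illustration.
#
# from dowhy.causal_estimators.linear_regression_estimator import LinearRegressionEstimator
#
# estimator = LinearRegressionEstimator(estimand)  # data is no longer passed at construction
# estimate = estimate_effect(
#     data=df,
#     treatment="treatment",
#     outcome="outcome",
#     identifier_name="backdoor",
#     estimator=estimator,  # fit() is invoked internally while fit_estimator=True
# )
# print(estimate.value)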
class CausalEstimate:
"""Class for the estimate object that every causal estimator returns"""
def __init__(
self,
estimate,
target_estimand,
realized_estimand_expr,
control_value,
treatment_value,
conditional_estimates=None,
**kwargs,
):
self.value = estimate
self.target_estimand = target_estimand
self.realized_estimand_expr = realized_estimand_expr
self.control_value = control_value
self.treatment_value = treatment_value
self.conditional_estimates = conditional_estimates
self.params = kwargs
if self.params is not None:
for key, value in self.params.items():
setattr(self, key, value)
self.effect_strength = None
def add_estimator(self, estimator_instance):
self.estimator = estimator_instance
def add_effect_strength(self, strength_dict):
self.effect_strength = strength_dict
def add_params(self, **kwargs):
self.params.update(kwargs)
def get_confidence_intervals(self, confidence_level=None, method=None, **kwargs):
"""Get confidence intervals of the obtained estimate.
By default, this is done with the help of bootstrapped confidence intervals
but can be overridden if the specific estimator implements other methods of estimating confidence intervals.
If the method provided is not bootstrap, this function calls the implementation of the specific estimator.
:param method: Method for estimating confidence intervals.
:param confidence_level: The confidence level of the confidence intervals of the estimate.
:param kwargs: Other optional args to be passed to the CI method.
:returns: The obtained confidence interval.
"""
confidence_intervals = self.estimator.estimate_confidence_intervals(
estimate_value=self.value, confidence_level=confidence_level, method=method, **kwargs
)
return confidence_intervals
def get_standard_error(self, method=None, **kwargs):
"""Get standard error of the obtained estimate.
By default, this is done with the help of bootstrapped standard errors
but can be overridden if the specific estimator implements other methods of estimating standard error.
If the method provided is not bootstrap, this function calls the implementation of the specific estimator.
:param method: Method for computing the standard error.
:param kwargs: Other optional parameters to be passed to the estimating method.
:returns: Standard error of the causal estimate.
"""
std_error = self.estimator.estimate_std_error(method=method, **kwargs)
return std_error
def test_stat_significance(self, method=None, **kwargs):
"""Test statistical significance of the estimate obtained.
By default, uses resampling to create a non-parametric significance test.
Individual child estimators can implement different methods.
If the method name is different from "bootstrap", this function calls the
implementation of the child estimator.
:param method: Method for checking statistical significance
:param kwargs: Other optional parameters to be passed to the estimating method.
:returns: p-value from the significance test
"""
signif_results = self.estimator.test_significance(self.value, method=method, **kwargs)
return {"p_value": signif_results["p_value"]}
def estimate_conditional_effects(
self, effect_modifiers=None, num_quantiles=CausalEstimator.NUM_QUANTILES_TO_DISCRETIZE_CONT_COLS
):
"""Estimate treatment effect conditioned on given variables.
If a numeric effect modifier is provided, it is discretized into quantile bins. If you would like a custom discretization, you can do so yourself: create a new column containing the discretized effect modifier and then include that column's name in the effect_modifier_names argument.
:param effect_modifiers: Names of effect modifier variables over which the conditional effects will be estimated. If not provided, defaults to the effect modifiers specified during creation of the CausalEstimator object.
:param num_quantiles: The number of quantiles into which a numeric effect modifier variable is discretized. Does not affect any categorical effect modifiers.
:returns: A (multi-index) dataframe that provides separate effects for each value of the (discretized) effect modifiers.
"""
return self.estimator._estimate_conditional_effects(
self.estimator._estimate_effect_fn, effect_modifiers, num_quantiles
)
def interpret(self, method_name=None, **kwargs):
"""Interpret the causal estimate.
:param method_name: Method used (string) or a list of methods. If None, then the default for the specific estimator is used.
:param kwargs: Optional parameters that are directly passed to the interpreter method.
:returns: None
"""
if method_name is None:
method_name = self.estimator.interpret_method
method_name_arr = parse_state(method_name)
for method in method_name_arr:
interpreter = interpreters.get_class_object(method)
interpreter(self, **kwargs).interpret()
def __str__(self):
s = "*** Causal Estimate ***\n"
# No estimand was identified (identification failed)
if self.target_estimand is None:
return "Estimation failed! No relevant identified estimand available for this estimation method."
s += "\n## Identified estimand\n{0}".format(self.target_estimand.__str__(only_target_estimand=True))
s += "\n## Realized estimand\n{0}".format(self.realized_estimand_expr)
if hasattr(self, "estimator"):
s += "\nTarget units: {0}\n".format(self.estimator.target_units_tostr())
s += "\n## Estimate\n"
s += "Mean value: {0}\n".format(self.value)
s += ""
if hasattr(self, "cate_estimates"):
s += "Effect estimates: {0}\n".format(self.cate_estimates)
if hasattr(self, "estimator"):
if self.estimator._significance_test:
s += "p-value: {0}\n".format(self.estimator.signif_results_tostr(self.test_stat_significance()))
if self.estimator._confidence_intervals:
s += "{0}% confidence interval: {1}\n".format(
100 * self.estimator.confidence_level, self.get_confidence_intervals()
)
if self.conditional_estimates is not None:
s += "### Conditional Estimates\n"
s += str(self.conditional_estimates)
if self.effect_strength is not None:
s += "\n## Effect Strength\n"
s += "Change in outcome attributable to treatment: {}\n".format(self.effect_strength["fraction-effect"])
# s += "Variance in outcome explained by treatment: {}\n".format(self.effect_strength["r-squared"])
return s
class RealizedEstimand(object):
def __init__(self, identified_estimand, estimator_name):
self.treatment_variable = identified_estimand.treatment_variable
self.outcome_variable = identified_estimand.outcome_variable
self.backdoor_variables = identified_estimand.get_backdoor_variables()
self.instrumental_variables = identified_estimand.instrumental_variables
self.estimand_type = identified_estimand.estimand_type
self.estimand_expression = None
self.assumptions = None
self.estimator_name = estimator_name
def update_assumptions(self, estimator_assumptions):
self.assumptions = estimator_assumptions
def update_estimand_expression(self, estimand_expression):
self.estimand_expression = estimand_expression
def __str__(self):
s = "Realized estimand: {0}\n".format(self.estimator_name)
s += "Realized estimand type: {0}\n".format(self.estimand_type)
s += "Estimand expression:\n{0}\n".format(sp.pretty(self.estimand_expression))
j = 1
for ass_name, ass_str in self.assumptions.items():
s += "Estimand assumption {0}, {1}: {2}\n".format(j, ass_name, ass_str)
j += 1
return s
| andresmor-ms | 11c4e0dafd6e824eb81ad14262457d954ae61468 | affe0952f4aba6845247355c171565510c2c1673 | fit_estimator is not needed. We can remove the `fit` code from this method and expect refuters to call it explicitly. | amit-sharma | 195 |
py-why/dowhy | 746 | Functional api/causal estimators | * Introduce `fit()` method to estimators.
* Refactor constructors to avoid using `*args` and `**kwargs` and have more explicit parameters.
* Refactor refuters and other parts of the code to use `fit()` and modify arguments to `estimate_effect()` | null | 2022-11-04 16:15:39+00:00 | 2022-12-03 17:07:53+00:00 | dowhy/causal_estimator.py | import logging
from collections import namedtuple
from typing import Dict, List, Optional, Union
import numpy as np
import pandas as pd
import sympy as sp
from sklearn.utils import resample
import dowhy.interpreters as interpreters
from dowhy import causal_estimators
from dowhy.causal_graph import CausalGraph
from dowhy.causal_identifier.identified_estimand import IdentifiedEstimand
from dowhy.utils.api import parse_state
logger = logging.getLogger(__name__)
class CausalEstimator:
"""Base class for an estimator of causal effect.
Subclasses implement different estimation methods. All estimation methods are in the package "dowhy.causal_estimators"
"""
# The default number of simulations for statistical testing
DEFAULT_NUMBER_OF_SIMULATIONS_STAT_TEST = 1000
# The default number of simulations to obtain confidence intervals
DEFAULT_NUMBER_OF_SIMULATIONS_CI = 100
# The portion of the total size that should be taken each time to find the confidence intervals
# 1 is the recommended value
# https://ocw.mit.edu/courses/mathematics/18-05-introduction-to-probability-and-statistics-spring-2014/readings/MIT18_05S14_Reading24.pdf
# https://projecteuclid.org/download/pdf_1/euclid.ss/1032280214
DEFAULT_SAMPLE_SIZE_FRACTION = 1
# The default Confidence Level
DEFAULT_CONFIDENCE_LEVEL = 0.95
# Number of quantiles to discretize continuous columns, for applying groupby
NUM_QUANTILES_TO_DISCRETIZE_CONT_COLS = 5
# Prefix to add to temporary categorical variables created after discretization
TEMP_CAT_COLUMN_PREFIX = "__categorical__"
DEFAULT_NOTIMPLEMENTEDERROR_MSG = "not yet implemented for {0}. If you would like this to be implemented in the next version, please raise an issue at https://github.com/microsoft/dowhy/issues"
BootstrapEstimates = namedtuple("BootstrapEstimates", ["estimates", "params"])
DEFAULT_INTERPRET_METHOD = ["textual_effect_interpreter"]
# std args to be removed from locals() before being passed to args_dict
_STD_INIT_ARGS = ("self", "__class__", "args", "kwargs")
def __init__(
self,
data,
identified_estimand,
treatment,
outcome,
control_value=0,
treatment_value=1,
test_significance=False,
evaluate_effect_strength=False,
confidence_intervals=False,
target_units=None,
effect_modifiers=None,
num_null_simulations=DEFAULT_NUMBER_OF_SIMULATIONS_STAT_TEST,
num_simulations=DEFAULT_NUMBER_OF_SIMULATIONS_CI,
sample_size_fraction=DEFAULT_SAMPLE_SIZE_FRACTION,
confidence_level=DEFAULT_CONFIDENCE_LEVEL,
need_conditional_estimates="auto",
num_quantiles_to_discretize_cont_cols=NUM_QUANTILES_TO_DISCRETIZE_CONT_COLS,
**kwargs,
):
"""Initializes an estimator with data and names of relevant variables.
This method is called from the constructors of its child classes.
:param data: data frame containing the data
:param identified_estimand: probability expression
representing the target identified estimand to estimate.
:param treatment: name of the treatment variable
:param outcome: name of the outcome variable
:param control_value: Value of the treatment in the control group, for effect estimation. If treatment is multi-variate, this can be a list.
:param treatment_value: Value of the treatment in the treated group, for effect estimation. If treatment is multi-variate, this can be a list.
:param test_significance: Binary flag or a string indicating whether to test significance and by which method. All estimators support test_significance="bootstrap" that estimates a p-value for the obtained estimate using the bootstrap method. Individual estimators can override this to support custom testing methods. The bootstrap method supports an optional parameter, num_null_simulations. If False, no testing is done. If True, significance of the estimate is tested using the custom method if available, otherwise by bootstrap.
:param evaluate_effect_strength: (Experimental) whether to evaluate the strength of effect
:param confidence_intervals: Binary flag or a string indicating whether the confidence intervals should be computed and which method should be used. All methods support estimation of confidence intervals using the bootstrap method by using the parameter confidence_intervals="bootstrap". The bootstrap method takes in two arguments (num_simulations and sample_size_fraction) that can be optionally specified in the params dictionary. Estimators may also override this to implement their own confidence interval method. If this parameter is False, no confidence intervals are computed. If True, confidence intervals are computed by the estimator's specific method if available, otherwise through bootstrap.
:param target_units: The units for which the treatment effect should be estimated. This can be a string for common specifications of target units (namely, "ate", "att" and "atc"). It can also be a lambda function that can be used as an index for the data (pandas DataFrame). Alternatively, it can be a new DataFrame that contains values of the effect_modifiers and effect will be estimated only for this new data.
:param effect_modifiers: Variables on which to compute separate
effects, or return a heterogeneous effect function. Not all
methods support this currently.
:param num_null_simulations: The number of simulations for testing the
statistical significance of the estimator
:param num_simulations: The number of simulations for finding the
confidence interval (and/or standard error) for an estimate
:param sample_size_fraction: The size of the sample for the bootstrap
estimator
:param confidence_level: The confidence level of the confidence
interval estimate
:param need_conditional_estimates: Boolean flag indicating whether
conditional estimates should be computed. Defaults to True if
there are effect modifiers in the graph
:param num_quantiles_to_discretize_cont_cols: The number of quantiles
into which a numeric effect modifier is split, to enable
estimation of conditional treatment effect over it.
:param kwargs: (optional) Additional estimator-specific parameters
:returns: an instance of the estimator class.
"""
self._data = data
self._target_estimand = identified_estimand
# Currently estimation methods only support univariate treatment and outcome
self._treatment_name = treatment
self._outcome_name = outcome[0] # assuming one-dimensional outcome
self._control_value = control_value
self._treatment_value = treatment_value
self._significance_test = test_significance
self._effect_strength_eval = evaluate_effect_strength
self._target_units = target_units
self._effect_modifier_names = effect_modifiers
self._confidence_intervals = confidence_intervals
self._bootstrap_estimates = None # for confidence intervals and std error
self._bootstrap_null_estimates = None # for significance test
self._effect_modifiers = None
self.method_params = kwargs
# Setting the default interpret method
self.interpret_method = CausalEstimator.DEFAULT_INTERPRET_METHOD
self.logger = logging.getLogger(__name__)
# Setting treatment and outcome values
if self._data is not None:
self._treatment = self._data[self._treatment_name]
self._outcome = self._data[self._outcome_name]
if self._effect_modifier_names:
# only add the observed nodes
self._effect_modifier_names = [
cname for cname in self._effect_modifier_names if cname in self._data.columns
]
if len(self._effect_modifier_names) > 0:
self._effect_modifiers = self._data[self._effect_modifier_names]
self._effect_modifiers = pd.get_dummies(self._effect_modifiers, drop_first=True)
self.logger.debug("Effect modifiers: " + ",".join(self._effect_modifier_names))
else:
self._effect_modifier_names = None
# Check if some parameters were set, otherwise set to default values
self.num_null_simulations = num_null_simulations
self.num_simulations = num_simulations
self.sample_size_fraction = sample_size_fraction
self.confidence_level = confidence_level
self.num_quantiles_to_discretize_cont_cols = num_quantiles_to_discretize_cont_cols
# Estimate conditional estimates by default
self.need_conditional_estimates = (
need_conditional_estimates if need_conditional_estimates != "auto" else bool(self._effect_modifier_names)
)
@staticmethod
def get_estimator_object(new_data, identified_estimand, estimate):
"""Create a new estimator of the same type as the one passed in the estimate argument.
Creates a new object with new_data and the identified_estimand
:param new_data: np.ndarray, pd.Series, pd.DataFrame
The newly assigned data on which the estimator should run
:param identified_estimand: IdentifiedEstimand
An instance of the identified estimand class that provides the information with
respect to which causal pathways are employed when the treatment effects the outcome
:param estimate: CausalEstimate
It is an already existing estimate whose properties we wish to replicate
:returns: An instance of the same estimator class that had generated the given estimate.
"""
estimator_class = estimate.params["estimator_class"]
new_estimator = estimator_class(
new_data,
identified_estimand,
identified_estimand.treatment_variable,
identified_estimand.outcome_variable,
# names of treatment and outcome
control_value=estimate.control_value,
treatment_value=estimate.treatment_value,
test_significance=False,
evaluate_effect_strength=False,
confidence_intervals=estimate.params["confidence_intervals"],
target_units=estimate.params["target_units"],
effect_modifiers=estimate.params["effect_modifiers"],
**estimate.params["method_params"] if estimate.params["method_params"] is not None else {},
)
return new_estimator
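    # Illustrative sketch of how this factory is used (assumes `estimate` is a
    # CausalEstimate produced earlier and `new_df` is modified data, e.g. a
    # bootstrap resample; both names are placeholders):
    #
    #   new_estimator = CausalEstimator.get_estimator_object(new_df, identified_estimand, estimate)
    #   new_effect = new_estimator.estimate_effect()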
def _estimate_effect(self):
"""This method is to be overriden by the child classes, so that they can run the estimation technique of their choice"""
raise NotImplementedError(
("Main estimation method is " + CausalEstimator.DEFAULT_NOTIMPLEMENTEDERROR_MSG).format(self.__class__)
)
def estimate_effect(self):
"""Base estimation method that calls the estimate_effect method of its calling subclass.
Can optionally also test significance and estimate effect strength for any returned estimate.
:param self: object instance of class Estimator
:returns: A CausalEstimate instance that contains point estimates of average and conditional effects. Based on the parameters provided, it optionally includes confidence intervals, standard errors, statistical significance and other statistical parameters.
"""
est = self._estimate_effect()
est.add_estimator(self)
if self._significance_test:
self.test_significance(est.value, method=self._significance_test)
if self._confidence_intervals:
self.estimate_confidence_intervals(
est.value, confidence_level=self.confidence_level, method=self._confidence_intervals
)
if self._effect_strength_eval:
effect_strength_dict = self.evaluate_effect_strength(est)
est.add_effect_strength(effect_strength_dict)
return est
def estimate_effect_naive(self):
# TODO Only works for binary treatment
df_withtreatment = self._data.loc[self._data[self._treatment_name] == 1]
df_notreatment = self._data.loc[self._data[self._treatment_name] == 0]
est = np.mean(df_withtreatment[self._outcome_name]) - np.mean(df_notreatment[self._outcome_name])
return CausalEstimate(est, None, None, control_value=0, treatment_value=1)
def _estimate_effect_fn(self, data_df):
"""Function used in conditional effect estimation. This function is to be overridden by each child estimator.
The overridden function should take in a dataframe as input and return the estimate for that data.
"""
raise NotImplementedError(
("Conditional treatment effects are " + CausalEstimator.DEFAULT_NOTIMPLEMENTEDERROR_MSG).format(
self.__class__
)
)
def _estimate_conditional_effects(self, estimate_effect_fn, effect_modifier_names=None, num_quantiles=None):
"""Estimate conditional treatment effects. Common method for all estimators that utilizes a specific estimate_effect_fn implemented by each child estimator.
If a numeric effect modifier is provided, it is discretized into quantile bins. If you would like a custom discretization, you can do so yourself: create a new column containing the discretized effect modifier and then include that column's name in the effect_modifier_names argument.
:param estimate_effect_fn: Function that has a single parameter (a data frame) and returns the treatment effect estimate on that data.
:param effect_modifier_names: Names of effect modifier variables over which the conditional effects will be estimated. If not provided, defaults to the effect modifiers specified during creation of the CausalEstimator object.
:param num_quantiles: The number of quantiles into which a numeric effect modifier variable is discretized. Does not affect any categorical effect modifiers.
:returns: A (multi-index) dataframe that provides separate effects for each value of the (discretized) effect modifiers.
"""
# Defaulting to class default values if parameters are not provided
if effect_modifier_names is None:
effect_modifier_names = self._effect_modifier_names
if num_quantiles is None:
num_quantiles = self.num_quantiles_to_discretize_cont_cols
# Checking that there is at least one effect modifier
if not effect_modifier_names:
raise ValueError("At least one effect modifier should be specified to compute conditional effects.")
# Making sure that effect_modifier_names is a list
effect_modifier_names = parse_state(effect_modifier_names)
if not all(em in self._effect_modifier_names for em in effect_modifier_names):
self.logger.warning(
"At least one of the provided effect modifiers was not included while fitting the estimator. You may get incorrect results. To resolve, fit the estimator again by providing the updated effect modifiers in estimate_effect()."
)
# Making a copy since we are going to be changing effect modifier names
effect_modifier_names = effect_modifier_names.copy()
prefix = CausalEstimator.TEMP_CAT_COLUMN_PREFIX
# For every numeric effect modifier, adding a temp categorical column
for i in range(len(effect_modifier_names)):
em = effect_modifier_names[i]
if pd.api.types.is_numeric_dtype(self._data[em].dtypes):
self._data[prefix + str(em)] = pd.qcut(self._data[em], num_quantiles, duplicates="drop")
effect_modifier_names[i] = prefix + str(em)
# Grouping by effect modifiers and computing effect separately
by_effect_mods = self._data.groupby(effect_modifier_names)
cond_est_fn = lambda x: self._do(self._treatment_value, x) - self._do(self._control_value, x)
conditional_estimates = by_effect_mods.apply(estimate_effect_fn)
# Deleting the temporary categorical columns
for em in effect_modifier_names:
if em.startswith(prefix):
self._data.pop(em)
return conditional_estimates
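    # Sketch of the custom discretization mentioned in the docstring above
    # (assumes a numeric effect modifier column "age" in the data frame `df`;
    # the names are illustrative):
    #
    #   df["age_bin"] = pd.qcut(df["age"], q=4, labels=False, duplicates="drop")
    #   # then pass effect_modifier_names=["age_bin"] so no automatic binning occurs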
def _do(self, x, data_df=None):
raise NotImplementedError(
("Do-operator is " + CausalEstimator.DEFAULT_NOTIMPLEMENTEDERROR_MSG).format(self.__class__)
)
def do(self, x, data_df=None):
"""Method that implements the do-operator.
Given a value x for the treatment, returns the expected value of the outcome when the treatment is intervened to a value x.
:param x: Value of the treatment
:param data_df: Data on which the do-operator is to be applied.
:returns: Value of the outcome when treatment is intervened/set to x.
"""
est = self._do(x, data_df)
return est
def construct_symbolic_estimator(self, estimand):
raise NotImplementedError(("Symbolic estimator string is ").format(self.__class__))
def _generate_bootstrap_estimates(self, num_bootstrap_simulations, sample_size_fraction):
"""Helper function to generate causal estimates over bootstrapped samples.
:param num_bootstrap_simulations: Number of simulations for the bootstrap method.
:param sample_size_fraction: Fraction of the dataset to be resampled.
:returns: A collections.namedtuple containing a list of bootstrapped estimates and a dictionary containing parameters used for the bootstrap.
"""
# The array that stores the results of all estimations
simulation_results = np.zeros(num_bootstrap_simulations)
# Find the sample size as a proportion of the population size
sample_size = int(sample_size_fraction * len(self._data))
if sample_size > len(self._data):
self.logger.warning("WARN: The sample size is greater than the data being sampled")
self.logger.info("INFO: The sample size: {}".format(sample_size))
self.logger.info("INFO: The number of simulations: {}".format(num_bootstrap_simulations))
# Perform the set number of simulations
for index in range(num_bootstrap_simulations):
new_data = resample(self._data, n_samples=sample_size)
new_estimator = type(self)(
new_data,
self._target_estimand,
self._target_estimand.treatment_variable,
self._target_estimand.outcome_variable,
# names of treatment and outcome
treatment_value=self._treatment_value,
control_value=self._control_value,
test_significance=False,
evaluate_effect_strength=False,
confidence_intervals=False,
target_units=self._target_units,
effect_modifiers=self._effect_modifier_names,
**self.method_params,
)
new_effect = new_estimator.estimate_effect()
simulation_results[index] = new_effect.value
estimates = CausalEstimator.BootstrapEstimates(
simulation_results,
{"num_simulations": num_bootstrap_simulations, "sample_size_fraction": sample_size_fraction},
)
return estimates
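    # The returned namedtuple is reused by the CI, standard-error and significance
    # helpers below, e.g. (sketch):
    #
    #   boot = self._generate_bootstrap_estimates(100, 1)
    #   np.std(boot.estimates)            # bootstrap standard error
    #   boot.params["num_simulations"]    # -> 100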
def _estimate_confidence_intervals_with_bootstrap(
self, estimate_value, confidence_level=None, num_simulations=None, sample_size_fraction=None
):
"""
Method to compute confidence interval using bootstrapped sampling.
:param estimate_value: obtained estimate's value
:param confidence_level: The level for which to compute CI (e.g., 95% confidence level translates to confidence_level=0.95)
:param num_simulations: The number of simulations to be performed to get the bootstrap confidence intervals.
:param sample_size_fraction: The fraction of the dataset to be resampled.
:returns: confidence interval at the specified level.
For more details on bootstrap or resampling statistics, refer to the following links:
https://ocw.mit.edu/courses/mathematics/18-05-introduction-to-probability-and-statistics-spring-2014/readings/MIT18_05S14_Reading24.pdf
https://projecteuclid.org/download/pdf_1/euclid.ss/1032280214
"""
# Using class default parameters if not specified
if num_simulations is None:
num_simulations = self.num_simulations
if sample_size_fraction is None:
sample_size_fraction = self.sample_size_fraction
# Checking if bootstrap_estimates are already computed
if self._bootstrap_estimates is None:
self._bootstrap_estimates = self._generate_bootstrap_estimates(num_simulations, sample_size_fraction)
elif CausalEstimator.is_bootstrap_parameter_changed(self._bootstrap_estimates.params, locals()):
# Checked if any parameter is changed from the previous std error estimate
self._bootstrap_estimates = self._generate_bootstrap_estimates(num_simulations, sample_size_fraction)
# Now use the data obtained from the simulations to get the value of the confidence estimates
bootstrap_estimates = self._bootstrap_estimates.estimates
# Get the variations of each bootstrap estimate and sort
bootstrap_variations = [bootstrap_estimate - estimate_value for bootstrap_estimate in bootstrap_estimates]
sorted_bootstrap_variations = np.sort(bootstrap_variations)
# Now we take the (1- p)th and the (p)th variations, where p is the chosen confidence level
upper_bound_index = int((1 - confidence_level) * len(sorted_bootstrap_variations))
lower_bound_index = int(confidence_level * len(sorted_bootstrap_variations))
# Get the lower and upper bounds by subtracting the variations from the estimate
lower_bound = estimate_value - sorted_bootstrap_variations[lower_bound_index]
upper_bound = estimate_value - sorted_bootstrap_variations[upper_bound_index]
return lower_bound, upper_bound
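    # Worked sketch of the bounds above (illustrative numbers): with
    # confidence_level=0.95 and 100 sorted variations v[0] <= ... <= v[99],
    # upper_bound = estimate_value - v[5] and lower_bound = estimate_value - v[95],
    # i.e. the largest variations are subtracted to obtain the lower bound.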
def _estimate_confidence_intervals(self, confidence_level=None, method=None, **kwargs):
"""
This method is to be overriden by the child classes, so that they
can run a confidence interval estimation method suited to the specific
causal estimator.
"""
raise NotImplementedError(
(
"This method for estimating confidence intervals is "
+ CausalEstimator.DEFAULT_NOTIMPLEMENTEDERROR_MSG
+ " Meanwhile, you can try the bootstrap method (method='bootstrap') to estimate confidence intervals."
).format(self.__class__)
)
def estimate_confidence_intervals(self, estimate_value, confidence_level=None, method=None, **kwargs):
"""Find the confidence intervals corresponding to any estimator
By default, this is done with the help of bootstrapped confidence intervals
but can be overridden if the specific estimator implements other methods of estimating confidence intervals.
If the method provided is not bootstrap, this function calls the implementation of the specific estimator.
:param estimate_value: obtained estimate's value
:param method: Method for estimating confidence intervals.
:param confidence_level: The confidence level of the confidence intervals of the estimate.
:param kwargs: Other optional args to be passed to the CI method.
:returns: The obtained confidence interval.
"""
if method is None:
if self._confidence_intervals:
method = self._confidence_intervals # this is either True or methodname
else:
method = "default"
confidence_intervals = None
if confidence_level is None:
confidence_level = self.confidence_level
if method == "default" or method is True: # user has not provided any method
try:
confidence_intervals = self._estimate_confidence_intervals(confidence_level, method=method, **kwargs)
except NotImplementedError:
confidence_intervals = self._estimate_confidence_intervals_with_bootstrap(
estimate_value, confidence_level, **kwargs
)
else:
if method == "bootstrap":
confidence_intervals = self._estimate_confidence_intervals_with_bootstrap(
estimate_value, confidence_level, **kwargs
)
else:
confidence_intervals = self._estimate_confidence_intervals(confidence_level, method=method, **kwargs)
return confidence_intervals
def _estimate_std_error_with_bootstrap(self, num_simulations=None, sample_size_fraction=None):
"""Compute standard error using the bootstrap method. Standard error
and confidence intervals use the same parameter num_simulations for
the number of bootstrap simulations.
:param num_simulations: Number of bootstrapped samples.
:param sample_size_fraction: Fraction of data to be resampled.
:returns: Standard error of the obtained estimate.
"""
# Use existing params, if new user defined params are not present
if num_simulations is None:
num_simulations = self.num_simulations
if sample_size_fraction is None:
sample_size_fraction = self.sample_size_fraction
# Checking if bootstrap_estimates are already computed
if self._bootstrap_estimates is None:
self._bootstrap_estimates = self._generate_bootstrap_estimates(num_simulations, sample_size_fraction)
elif CausalEstimator.is_bootstrap_parameter_changed(self._bootstrap_estimates.params, locals()):
# Check if any parameter is changed from the previous std error estimate
self._bootstrap_estimates = self._generate_bootstrap_estimates(num_simulations, sample_size_fraction)
std_error = np.std(self._bootstrap_estimates.estimates)
return std_error
def _estimate_std_error(self, method=None, **kwargs):
"""
This method is to be overriden by the child classes, so that they
can run a standard error estimation method suited to the specific
causal estimator.
"""
raise NotImplementedError(
(
"This method for estimating standard errors is "
+ CausalEstimator.DEFAULT_NOTIMPLEMENTEDERROR_MSG
+ " Meanwhile, you can try the bootstrap method (method='bootstrap') to estimate standard errors."
).format(self.__class__)
)
def estimate_std_error(self, method=None, **kwargs):
"""Compute standard error of an obtained causal estimate.
:param method: Method for computing the standard error.
:param kwargs: Other optional parameters to be passed to the estimating method.
:returns: Standard error of the causal estimate.
"""
if method is None:
if self._confidence_intervals:
method = self._confidence_intervals
else:
method = "default"
std_error = None
if method == "default" or method is True: # user has not provided any method
try:
std_error = self._estimate_std_error(method, **kwargs)
except NotImplementedError:
std_error = self._estimate_std_error_with_bootstrap(**kwargs)
else:
if method == "bootstrap":
std_error = self._estimate_std_error_with_bootstrap(**kwargs)
else:
std_error = self._estimate_std_error(method, **kwargs)
return std_error
def _test_significance_with_bootstrap(self, estimate_value, num_null_simulations=None):
"""Test statistical significance of an estimate using the bootstrap method.
:param estimate_value: Obtained estimate's value
:param num_null_simulations: Number of simulations for the null hypothesis
:returns: p-value of the statistical significance test.
"""
# Use existing params, if new user defined params are not present
if num_null_simulations is None:
num_null_simulations = self.num_null_simulations
do_retest = self._bootstrap_null_estimates is None or CausalEstimator.is_bootstrap_parameter_changed(
self._bootstrap_null_estimates.params, locals()
)
if do_retest:
null_estimates = np.zeros(num_null_simulations)
for i in range(num_null_simulations):
new_outcome = np.random.permutation(self._outcome)
new_data = self._data.assign(dummy_outcome=new_outcome)
# self._outcome = self._data["dummy_outcome"]
new_estimator = type(self)(
new_data,
self._target_estimand,
self._target_estimand.treatment_variable,
("dummy_outcome",),
test_significance=False,
evaluate_effect_strength=False,
confidence_intervals=False,
target_units=self._target_units,
effect_modifiers=self._effect_modifier_names,
**self.method_params,
)
new_effect = new_estimator.estimate_effect()
null_estimates[i] = new_effect.value
self._bootstrap_null_estimates = CausalEstimator.BootstrapEstimates(
null_estimates, {"num_null_simulations": num_null_simulations, "sample_size_fraction": 1}
)
# Processing the null hypothesis estimates
sorted_null_estimates = np.sort(self._bootstrap_null_estimates.estimates)
self.logger.debug("Null estimates: {0}".format(sorted_null_estimates))
median_estimate = sorted_null_estimates[int(num_null_simulations / 2)]
# Doing a two-sided test
if estimate_value > median_estimate:
# Being conservative with the p-value reported
estimate_index = np.searchsorted(sorted_null_estimates, estimate_value, side="left")
p_value = 1 - (estimate_index / num_null_simulations)
if estimate_value <= median_estimate:
# Being conservative with the p-value reported
estimate_index = np.searchsorted(sorted_null_estimates, estimate_value, side="right")
p_value = estimate_index / num_null_simulations
# If the computed p-value is exactly 0 or 1, its precision is limited by the number of simulations, so report a range
if p_value == 0:
p_value = (0, 1 / len(sorted_null_estimates)) # a tuple determining the range.
elif p_value == 1:
p_value = (1 - 1 / len(sorted_null_estimates), 1)
signif_dict = {"p_value": p_value}
return signif_dict
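    # Worked sketch of the p-value logic above (illustrative numbers): with
    # num_null_simulations=1000 and estimate_value exceeding 990 of the sorted
    # null estimates, searchsorted returns 990, so p_value = 1 - 990/1000 = 0.01.
    # An estimate outside the entire null distribution is reported as a range,
    # e.g. (0, 1/1000), since resolution is limited by the simulation count.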
def _test_significance(self, estimate_value, method=None, **kwargs):
"""
This method is to be overriden by the child classes, so that they
can run a significance test suited to the specific
causal estimator.
"""
raise NotImplementedError(
(
"This method for testing statistical significance is "
+ CausalEstimator.DEFAULT_NOTIMPLEMENTEDERROR_MSG
+ " Meanwhile, you can try the bootstrap method (method='bootstrap') to test statistical significance."
).format(self.__class__)
)
def test_significance(self, estimate_value, method=None, **kwargs):
"""Test statistical significance of obtained estimate.
By default, uses resampling to create a non-parametric significance test.
This is a general procedure; individual child estimators can implement different methods.
If the method name is different from "bootstrap", this function calls the
implementation of the child estimator.
:param self: object instance of class Estimator
:param estimate_value: obtained estimate's value
:param method: Method for checking statistical significance
:returns: p-value from the significance test
"""
if method is None:
if self._significance_test:
method = self._significance_test # this is either True or methodname
else:
method = "default"
signif_dict = None
if method == "default" or method is True: # user has not provided any method
try:
signif_dict = self._test_significance(estimate_value, method, **kwargs)
except NotImplementedError:
signif_dict = self._test_significance_with_bootstrap(estimate_value, **kwargs)
else:
if method == "bootstrap":
signif_dict = self._test_significance_with_bootstrap(estimate_value, **kwargs)
else:
signif_dict = self._test_significance(estimate_value, method, **kwargs)
return signif_dict
def evaluate_effect_strength(self, estimate):
fraction_effect_explained = self._evaluate_effect_strength(estimate, method="fraction-effect")
# Need to test r-squared before supporting
# effect_r_squared = self._evaluate_effect_strength(estimate, method="r-squared")
strength_dict = {
"fraction-effect": fraction_effect_explained
# 'r-squared': effect_r_squared
}
return strength_dict
def _evaluate_effect_strength(self, estimate, method="fraction-effect"):
supported_methods = ["fraction-effect"]
if method not in supported_methods:
raise NotImplementedError("This method is not supported for evaluating effect strength")
if method == "fraction-effect":
naive_obs_estimate = self.estimate_effect_naive()
self.logger.debug(estimate.value, naive_obs_estimate.value)
fraction_effect_explained = estimate.value / naive_obs_estimate.value
return fraction_effect_explained
# elif method == "r-squared":
# outcome_mean = np.mean(self._outcome)
# total_variance = np.sum(np.square(self._outcome - outcome_mean))
# Assuming a linear model with one variable: the treatment
# Currently only works for continuous y
# causal_model = outcome_mean + estimate.value*self._treatment
# squared_residual = np.sum(np.square(self._outcome - causal_model))
# r_squared = 1 - (squared_residual/total_variance)
# return r_squared
else:
return None
def update_input(self, treatment_value, control_value, target_units):
self._control_value = control_value
self._treatment_value = treatment_value
self._target_units = target_units
@staticmethod
def is_bootstrap_parameter_changed(bootstrap_estimates_params, given_params):
"""Check whether parameters of the bootstrap have changed.
This is an efficiency method that checks if fresh resampling of the bootstrap samples is required.
Returns True if parameters have changed and resampling should be done again.
:param bootstrap_estimates_params: A dictionary of parameters for the current bootstrap samples
:param given_params: A dictionary of parameters passed by the user
:returns: A binary flag denoting whether the parameters are different.
"""
is_any_parameter_changed = False
for prm, val in bootstrap_estimates_params.items():
given_val = given_params.get(prm, None)
if given_val is not None and given_val != val:
is_any_parameter_changed = True
break
return is_any_parameter_changed
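    # Minimal sketch (hypothetical values):
    #
    #   CausalEstimator.is_bootstrap_parameter_changed(
    #       {"num_simulations": 100, "sample_size_fraction": 1},
    #       {"num_simulations": 200},
    #   )  # -> True, so the bootstrap samples are regenerated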
def target_units_tostr(self):
s = ""
if type(self._target_units) is str:
s += self._target_units
elif callable(self._target_units):
s += "Data subset defined by a function"
elif isinstance(self._target_units, pd.DataFrame):
s += "Data subset provided as a data frame"
return s
def signif_results_tostr(self, signif_results):
s = ""
pval = signif_results["p_value"]
if type(pval) is tuple:
s += "[{0}, {1}]".format(pval[0], pval[1])
else:
s += "{0}".format(pval)
return s
def estimate_effect(
treatment: Union[str, List[str]],
outcome: Union[str, List[str]],
identified_estimand: IdentifiedEstimand,
identifier_name: str,
method: CausalEstimator,
control_value: int = 0,
treatment_value: int = 1,
test_significance: Optional[bool] = None,
evaluate_effect_strength: bool = False,
confidence_intervals: bool = False,
target_units: str = "ate",
effect_modifiers: List[str] = [],
fit_estimator: bool = True,
method_params: Optional[Dict] = None,
):
"""Estimate the identified causal effect.
Currently requires an explicit method name to be specified. Method names follow the convention of identification method followed by the specific estimation method: "[backdoor/iv].estimation_method_name". Following methods are supported.
* Propensity Score Matching: "backdoor.propensity_score_matching"
* Propensity Score Stratification: "backdoor.propensity_score_stratification"
* Propensity Score-based Inverse Weighting: "backdoor.propensity_score_weighting"
* Linear Regression: "backdoor.linear_regression"
* Generalized Linear Models (e.g., logistic regression): "backdoor.generalized_linear_model"
* Instrumental Variables: "iv.instrumental_variable"
* Regression Discontinuity: "iv.regression_discontinuity"
In addition, you can directly call any of the EconML estimation methods. The convention is "backdoor.econml.path-to-estimator-class". For example, for the double machine learning estimator ("DML" class) that is located inside "dml" module of EconML, you can use the method name, "backdoor.econml.dml.DML". CausalML estimators can also be called. See `this demo notebook <https://py-why.github.io/dowhy/example_notebooks/dowhy-conditional-treatment-effects.html>`_.
:param treatment: Name of the treatment
:param outcome: Name of the outcome
:param identified_estimand: a probability expression
that represents the effect to be estimated. Output of
CausalModel.identify_effect method
:param identifier_name: name of the identification method used (e.g., "backdoor" or "iv")
:param method: an instance of a CausalEstimator subclass implementing the estimation method to be used
:param control_value: Value of the treatment in the control group, for effect estimation. If treatment is multi-variate, this can be a list.
:param treatment_value: Value of the treatment in the treated group, for effect estimation. If treatment is multi-variate, this can be a list.
:param test_significance: Binary flag on whether to additionally do a statistical signficance test for the estimate.
:param evaluate_effect_strength: (Experimental) Binary flag on whether to estimate the relative strength of the treatment's effect. This measure can be used to compare different treatments for the same outcome (by running this method with different treatments sequentially).
:param confidence_intervals: (Experimental) Binary flag indicating whether confidence intervals should be computed.
:param target_units: (Experimental) The units for which the treatment effect should be estimated. This can be of three types. (1) a string for common specifications of target units (namely, "ate", "att" and "atc"), (2) a lambda function that can be used as an index for the data (pandas DataFrame), or (3) a new DataFrame that contains values of the effect_modifiers and effect will be estimated only for this new data.
:param effect_modifiers: Names of effect modifier variables can be (optionally) specified here too, since they do not affect identification. If None, the effect_modifiers from the CausalModel are used.
:param fit_estimator: Boolean flag on whether to fit the estimator.
Setting it to False is useful to estimate the effect on new data using a previously fitted estimator.
:param method_params: Dictionary containing any method-specific parameters. These are passed directly to the estimating method. See the docs for each estimation method for allowed method-specific params.
:returns: An instance of the CausalEstimate class, containing the causal effect estimate
and other method-dependent information
"""
treatment = parse_state(treatment)
outcome = parse_state(outcome)
causal_estimator_class = method.__class__
identified_estimand.set_identifier_method(identifier_name)
if identified_estimand.no_directed_path:
logger.warning("No directed path from {0} to {1}.".format(treatment, outcome))
return CausalEstimate(
0, identified_estimand, None, control_value=control_value, treatment_value=treatment_value
)
# Check if estimator's target estimand is identified
elif identified_estimand.estimands[identifier_name] is None:
logger.error("No valid identified estimand available.")
return CausalEstimate(None, None, None, control_value=control_value, treatment_value=treatment_value)
method.update_input(treatment_value, control_value, target_units)
estimate = method.estimate_effect()
# Store parameters inside estimate object for refutation methods
# TODO: This add_params needs to move to the estimator class
# inside estimate_effect and estimate_conditional_effect
estimate.add_params(
estimand_type=identified_estimand.estimand_type,
estimator_class=causal_estimator_class,
test_significance=test_significance,
evaluate_effect_strength=evaluate_effect_strength,
confidence_intervals=confidence_intervals,
target_units=target_units,
effect_modifiers=effect_modifiers,
method_params=method_params,
)
return estimate
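# Illustrative sketch of calling the functional API above (assumes an
# `identified_estimand` from an identify_effect step and a pre-constructed
# estimator instance `estimator_object`; all names are placeholders):
#
#   estimate = estimate_effect(
#       treatment="v0",
#       outcome="y",
#       identified_estimand=identified_estimand,
#       identifier_name="backdoor",
#       method=estimator_object,
#       treatment_value=1,
#       control_value=0,
#   )
#   print(estimate.value)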
class CausalEstimate:
"""Class for the estimate object that every causal estimator returns"""
def __init__(
self,
estimate,
target_estimand,
realized_estimand_expr,
control_value,
treatment_value,
conditional_estimates=None,
**kwargs,
):
self.value = estimate
self.target_estimand = target_estimand
self.realized_estimand_expr = realized_estimand_expr
self.control_value = control_value
self.treatment_value = treatment_value
self.conditional_estimates = conditional_estimates
self.params = kwargs
if self.params is not None:
for key, value in self.params.items():
setattr(self, key, value)
self.effect_strength = None
def add_estimator(self, estimator_instance):
self.estimator = estimator_instance
def add_effect_strength(self, strength_dict):
self.effect_strength = strength_dict
def add_params(self, **kwargs):
self.params.update(kwargs)
def get_confidence_intervals(self, confidence_level=None, method=None, **kwargs):
"""Get confidence intervals of the obtained estimate.
By default, this is done with the help of bootstrapped confidence intervals
but can be overridden if the specific estimator implements other methods of estimating confidence intervals.
If the method provided is not bootstrap, this function calls the implementation of the specific estimator.
:param method: Method for estimating confidence intervals.
:param confidence_level: The confidence level of the confidence intervals of the estimate.
:param kwargs: Other optional args to be passed to the CI method.
:returns: The obtained confidence interval.
"""
confidence_intervals = self.estimator.estimate_confidence_intervals(
estimate_value=self.value, confidence_level=confidence_level, method=method, **kwargs
)
return confidence_intervals
def get_standard_error(self, method=None, **kwargs):
"""Get standard error of the obtained estimate.
By default, this is done with the help of bootstrapped standard errors
but can be overridden if the specific estimator implements other methods of estimating standard error.
If the method provided is not bootstrap, this function calls the implementation of the specific estimator.
:param method: Method for computing the standard error.
:param kwargs: Other optional parameters to be passed to the estimating method.
:returns: Standard error of the causal estimate.
"""
std_error = self.estimator.estimate_std_error(method=method, **kwargs)
return std_error
def test_stat_significance(self, method=None, **kwargs):
"""Test statistical significance of the estimate obtained.
By default, uses resampling to create a non-parametric significance test.
Individual child estimators can implement different methods.
If the method name is different from "bootstrap", this function calls the
implementation of the child estimator.
:param method: Method for checking statistical significance
:param kwargs: Other optional parameters to be passed to the estimating method.
:returns: p-value from the significance test
"""
signif_results = self.estimator.test_significance(self.value, method=method, **kwargs)
return {"p_value": signif_results["p_value"]}
def estimate_conditional_effects(
self, effect_modifiers=None, num_quantiles=CausalEstimator.NUM_QUANTILES_TO_DISCRETIZE_CONT_COLS
):
"""Estimate treatment effect conditioned on given variables.
If a numeric effect modifier is provided, it is discretized into quantile bins. If you would like a custom discretization, you can do so yourself: create a new column containing the discretized effect modifier and then include that column's name in the effect_modifier_names argument.
:param effect_modifiers: Names of effect modifier variables over which the conditional effects will be estimated. If not provided, defaults to the effect modifiers specified during creation of the CausalEstimator object.
:param num_quantiles: The number of quantiles into which a numeric effect modifier variable is discretized. Does not affect any categorical effect modifiers.
:returns: A (multi-index) dataframe that provides separate effects for each value of the (discretized) effect modifiers.
"""
return self.estimator._estimate_conditional_effects(
self.estimator._estimate_effect_fn, effect_modifiers, num_quantiles
)
def interpret(self, method_name=None, **kwargs):
"""Interpret the causal estimate.
:param method_name: Method used (string) or a list of methods. If None, then the default for the specific estimator is used.
:param kwargs: Optional parameters that are directly passed to the interpreter method.
:returns: None
"""
if method_name is None:
method_name = self.estimator.interpret_method
method_name_arr = parse_state(method_name)
for method in method_name_arr:
interpreter = interpreters.get_class_object(method)
interpreter(self, **kwargs).interpret()
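    # Sketch (assumes `estimate` is a CausalEstimate with an attached estimator):
    #
    #   estimate.interpret()  # uses the estimator's default interpreter
    #   estimate.interpret(method_name="textual_effect_interpreter")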
def __str__(self):
s = "*** Causal Estimate ***\n"
# No estimand was identified (identification failed)
if self.target_estimand is None:
return "Estimation failed! No relevant identified estimand available for this estimation method."
s += "\n## Identified estimand\n{0}".format(self.target_estimand.__str__(only_target_estimand=True))
s += "\n## Realized estimand\n{0}".format(self.realized_estimand_expr)
if hasattr(self, "estimator"):
s += "\nTarget units: {0}\n".format(self.estimator.target_units_tostr())
s += "\n## Estimate\n"
s += "Mean value: {0}\n".format(self.value)
s += ""
if hasattr(self, "cate_estimates"):
s += "Effect estimates: {0}\n".format(self.cate_estimates)
if hasattr(self, "estimator"):
if self.estimator._significance_test:
s += "p-value: {0}\n".format(self.estimator.signif_results_tostr(self.test_stat_significance()))
if self.estimator._confidence_intervals:
s += "{0}% confidence interval: {1}\n".format(
100 * self.estimator.confidence_level, self.get_confidence_intervals()
)
if self.conditional_estimates is not None:
s += "### Conditional Estimates\n"
s += str(self.conditional_estimates)
if self.effect_strength is not None:
s += "\n## Effect Strength\n"
s += "Change in outcome attributable to treatment: {}\n".format(self.effect_strength["fraction-effect"])
# s += "Variance in outcome explained by treatment: {}\n".format(self.effect_strength["r-squared"])
return s
class RealizedEstimand(object):
def __init__(self, identified_estimand, estimator_name):
self.treatment_variable = identified_estimand.treatment_variable
self.outcome_variable = identified_estimand.outcome_variable
self.backdoor_variables = identified_estimand.get_backdoor_variables()
self.instrumental_variables = identified_estimand.instrumental_variables
self.estimand_type = identified_estimand.estimand_type
self.estimand_expression = None
self.assumptions = None
self.estimator_name = estimator_name
def update_assumptions(self, estimator_assumptions):
self.assumptions = estimator_assumptions
def update_estimand_expression(self, estimand_expression):
self.estimand_expression = estimand_expression
def __str__(self):
s = "Realized estimand: {0}\n".format(self.estimator_name)
s += "Realized estimand type: {0}\n".format(self.estimand_type)
s += "Estimand expression:\n{0}\n".format(sp.pretty(self.estimand_expression))
j = 1
for ass_name, ass_str in self.assumptions.items():
s += "Estimand assumption {0}, {1}: {2}\n".format(j, ass_name, ass_str)
j += 1
return s
| import copy
import logging
from collections import namedtuple
from typing import Dict, List, Optional, Union
import numpy as np
import pandas as pd
import sympy as sp
from sklearn.utils import resample
import dowhy.interpreters as interpreters
from dowhy.causal_identifier.identified_estimand import IdentifiedEstimand
from dowhy.utils.api import parse_state
logger = logging.getLogger(__name__)
class CausalEstimator:
"""Base class for an estimator of causal effect.
Subclasses implement different estimation methods. All estimation methods are in the package "dowhy.causal_estimators"
"""
# The default number of simulations for statistical testing
DEFAULT_NUMBER_OF_SIMULATIONS_STAT_TEST = 1000
# The default number of simulations to obtain confidence intervals
DEFAULT_NUMBER_OF_SIMULATIONS_CI = 100
# The portion of the total size that should be taken each time to find the confidence intervals
# 1 is the recommended value
# https://ocw.mit.edu/courses/mathematics/18-05-introduction-to-probability-and-statistics-spring-2014/readings/MIT18_05S14_Reading24.pdf
# https://projecteuclid.org/download/pdf_1/euclid.ss/1032280214
DEFAULT_SAMPLE_SIZE_FRACTION = 1
# The default Confidence Level
DEFAULT_CONFIDENCE_LEVEL = 0.95
# Number of quantiles to discretize continuous columns, for applying groupby
NUM_QUANTILES_TO_DISCRETIZE_CONT_COLS = 5
# Prefix to add to temporary categorical variables created after discretization
TEMP_CAT_COLUMN_PREFIX = "__categorical__"
DEFAULT_NOTIMPLEMENTEDERROR_MSG = "not yet implemented for {0}. If you would like this to be implemented in the next version, please raise an issue at https://github.com/microsoft/dowhy/issues"
BootstrapEstimates = namedtuple("BootstrapEstimates", ["estimates", "params"])
DEFAULT_INTERPRET_METHOD = ["textual_effect_interpreter"]
# std args to be removed from locals() before being passed to args_dict
_STD_INIT_ARGS = ("self", "__class__", "args", "kwargs")
def __init__(
self,
identified_estimand: IdentifiedEstimand,
test_significance: bool = False,
evaluate_effect_strength: bool = False,
confidence_intervals: bool = False,
num_null_simulations: int = DEFAULT_NUMBER_OF_SIMULATIONS_STAT_TEST,
num_simulations: int = DEFAULT_NUMBER_OF_SIMULATIONS_CI,
sample_size_fraction: int = DEFAULT_SAMPLE_SIZE_FRACTION,
confidence_level: float = DEFAULT_CONFIDENCE_LEVEL,
need_conditional_estimates: Union[bool, str] = "auto",
num_quantiles_to_discretize_cont_cols: int = NUM_QUANTILES_TO_DISCRETIZE_CONT_COLS,
**_,
):
"""Initializes an estimator with data and names of relevant variables.
This method is called from the constructors of its child classes.
:param identified_estimand: probability expression
representing the target identified estimand to estimate.
:param test_significance: Binary flag or a string indicating whether to test significance and by which method. All estimators support test_significance="bootstrap" that estimates a p-value for the obtained estimate using the bootstrap method. Individual estimators can override this to support custom testing methods. The bootstrap method supports an optional parameter, num_null_simulations. If False, no testing is done. If True, significance of the estimate is tested using the custom method if available, otherwise by bootstrap.
:param evaluate_effect_strength: (Experimental) whether to evaluate the strength of effect
:param confidence_intervals: Binary flag or a string indicating whether the confidence intervals should be computed and which method should be used. All methods support estimation of confidence intervals using the bootstrap method by using the parameter confidence_intervals="bootstrap". The bootstrap method takes in two arguments (num_simulations and sample_size_fraction) that can be optionally specified in the params dictionary. Estimators may also override this to implement their own confidence interval method. If this parameter is False, no confidence intervals are computed. If True, confidence intervals are computed by the estimator's specific method if available, otherwise through bootstrap.
:param num_null_simulations: The number of simulations for testing the
statistical significance of the estimator
:param num_simulations: The number of simulations for finding the
confidence interval (and/or standard error) for an estimate
:param sample_size_fraction: The size of the sample for the bootstrap
estimator
:param confidence_level: The confidence level of the confidence
interval estimate
:param need_conditional_estimates: Boolean flag indicating whether
conditional estimates should be computed. Defaults to True if
there are effect modifiers in the graph
:param num_quantiles_to_discretize_cont_cols: The number of quantiles
into which a numeric effect modifier is split, to enable
estimation of conditional treatment effect over it.
:param kwargs: (optional) Additional estimator-specific parameters
:returns: an instance of the estimator class.
"""
self._target_estimand = identified_estimand
self._significance_test = test_significance
self._effect_strength_eval = evaluate_effect_strength
self._confidence_intervals = confidence_intervals
# Setting the default interpret method
self.interpret_method = CausalEstimator.DEFAULT_INTERPRET_METHOD
self.logger = logging.getLogger(__name__)
# Check if some parameters were set, otherwise set to default values
self.num_null_simulations = num_null_simulations
self.num_simulations = num_simulations
self.sample_size_fraction = sample_size_fraction
self.confidence_level = confidence_level
self.num_quantiles_to_discretize_cont_cols = num_quantiles_to_discretize_cont_cols
# Estimate conditional estimates by default
self.need_conditional_estimates = need_conditional_estimates
self._bootstrap_estimates = None
self._bootstrap_null_estimates = None
def _set_data(self, data: pd.DataFrame, treatment_name: List[str], outcome_name: List[str]):
"""Sets the data for the estimator
:param data: data frame containing the data
:param treatment_name: name of the treatment variable
:param outcome_name: name of the outcome variable
"""
self._data = data
self._treatment_name = treatment_name
self._outcome_name = outcome_name[0]
self._treatment = self._data[self._treatment_name]
self._outcome = self._data[self._outcome_name]
def _set_effect_modifiers(self, effect_modifier_names: Optional[List[str]] = None):
"""Sets the effect modifiers for the estimator
Modifies need_conditional_estimates according to the effect modifiers value
:param effect_modifier_names: Variables on which to compute separate
effects, or return a heterogeneous effect function. Not all
methods support this currently.
"""
self._effect_modifiers = effect_modifier_names
if effect_modifier_names is not None:
self._effect_modifier_names = [cname for cname in effect_modifier_names if cname in self._data.columns]
if len(self._effect_modifier_names) > 0:
self._effect_modifiers = self._data[self._effect_modifier_names]
self._effect_modifiers = pd.get_dummies(self._effect_modifiers, drop_first=True)
self.logger.debug("Effect modifiers: " + ",".join(self._effect_modifier_names))
else:
self._effect_modifier_names = []
else:
self._effect_modifier_names = []
self.need_conditional_estimates = (
self.need_conditional_estimates
if self.need_conditional_estimates != "auto"
else (self._effect_modifier_names and len(self._effect_modifier_names) > 0)
)
def _set_identified_estimand(self, new_identified_estimand):
"""Method used internally to change the target estimand (required by some refuters)
:param new_identified_estimand: The new target_estimand to use
"""
self._target_estimand = new_identified_estimand
def get_new_estimator_object(
self,
identified_estimand,
test_significance=False,
evaluate_effect_strength=False,
confidence_intervals=None,
):
"""Create a new estimator of the same type as the one passed in the estimate argument.
Creates a new object with the identified_estimand
:param identified_estimand: IdentifiedEstimand
An instance of the identified estimand class that provides the information with
respect to which causal pathways are employed when the treatment effects the outcome
:returns: A new instance of the same estimator class.
"""
new_estimator = copy.deepcopy(self)
new_estimator._target_estimand = identified_estimand
new_estimator._test_significance = test_significance
new_estimator._evaluate_effect_strength = evaluate_effect_strength
new_estimator._confidence_intervals = (
self._confidence_intervals if confidence_intervals is None else confidence_intervals
)
return new_estimator
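    # Sketch of the refactored flow this method enables (mirrors the bootstrap
    # helper below; `new_data` is a placeholder for modified/refuted data):
    #
    #   new_estimator = estimator.get_new_estimator_object(identified_estimand)
    #   new_estimator.fit(new_data, identified_estimand.treatment_variable,
    #                     identified_estimand.outcome_variable)
    #   new_effect = new_estimator.estimate_effect(control_value=0, treatment_value=1)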
def estimate_effect_naive(self):
# TODO Only works for binary treatment
df_withtreatment = self._data.loc[self._data[self._treatment_name] == 1]
df_notreatment = self._data.loc[self._data[self._treatment_name] == 0]
est = np.mean(df_withtreatment[self._outcome_name]) - np.mean(df_notreatment[self._outcome_name])
return CausalEstimate(est, None, None, control_value=0, treatment_value=1)
def _estimate_effect_fn(self, data_df):
"""Function used in conditional effect estimation. This function is to be overridden by each child estimator.
The overridden function should take in a dataframe as input and return the estimate for that data.
"""
raise NotImplementedError(
("Conditional treatment effects are " + CausalEstimator.DEFAULT_NOTIMPLEMENTEDERROR_MSG).format(
self.__class__
)
)
def _estimate_conditional_effects(self, estimate_effect_fn, effect_modifier_names=None, num_quantiles=None):
"""Estimate conditional treatment effects. Common method for all estimators that utilizes a specific estimate_effect_fn implemented by each child estimator.
If a numeric effect modifier is provided, it is discretized into quantile bins. If you would like a custom discretization, you can do so yourself: create a new column containing the discretized effect modifier and then include that column's name in the effect_modifier_names argument.
:param estimate_effect_fn: Function that has a single parameter (a data frame) and returns the treatment effect estimate on that data.
:param effect_modifier_names: Names of effect modifier variables over which the conditional effects will be estimated. If not provided, defaults to the effect modifiers specified during creation of the CausalEstimator object.
:param num_quantiles: The number of quantiles into which a numeric effect modifier variable is discretized. Does not affect any categorical effect modifiers.
:returns: A (multi-index) dataframe that provides separate effects for each value of the (discretized) effect modifiers.
"""
# Defaulting to class default values if parameters are not provided
if effect_modifier_names is None:
effect_modifier_names = self._effect_modifier_names
if num_quantiles is None:
num_quantiles = self.num_quantiles_to_discretize_cont_cols
# Checking that there is at least one effect modifier
if not effect_modifier_names:
raise ValueError("At least one effect modifier should be specified to compute conditional effects.")
# Making sure that effect_modifier_names is a list
effect_modifier_names = parse_state(effect_modifier_names)
if not all(em in self._effect_modifier_names for em in effect_modifier_names):
self.logger.warning(
"At least one of the provided effect modifiers was not included while fitting the estimator. You may get incorrect results. To resolve, fit the estimator again by providing the updated effect modifiers in estimate_effect()."
)
# Making a copy since we are going to be changing effect modifier names
effect_modifier_names = effect_modifier_names.copy()
prefix = CausalEstimator.TEMP_CAT_COLUMN_PREFIX
# For every numeric effect modifier, adding a temp categorical column
for i in range(len(effect_modifier_names)):
em = effect_modifier_names[i]
if pd.api.types.is_numeric_dtype(self._data[em].dtypes):
self._data[prefix + str(em)] = pd.qcut(self._data[em], num_quantiles, duplicates="drop")
effect_modifier_names[i] = prefix + str(em)
# Grouping by effect modifiers and computing effect separately
by_effect_mods = self._data.groupby(effect_modifier_names)
conditional_estimates = by_effect_mods.apply(estimate_effect_fn)
# Deleting the temporary categorical columns
for em in effect_modifier_names:
if em.startswith(prefix):
self._data.pop(em)
return conditional_estimates
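    # Usage sketch for conditional effects (names are illustrative, not part of
    # the API): a child estimator supplies its own estimate-effect function and
    # this method groups the data by the (discretized) effect modifiers.
    #
    #   cond_effects = estimator._estimate_conditional_effects(
    #       estimator._estimate_effect_fn, effect_modifier_names=["X0"], num_quantiles=4
    #   )
    #   # -> a (multi-index) DataFrame keyed by the quantile bins of X0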
def _do(self, x, data_df=None):
raise NotImplementedError(
("Do-operator is " + CausalEstimator.DEFAULT_NOTIMPLEMENTEDERROR_MSG).format(self.__class__)
)
def do(self, x, data_df=None):
"""Method that implements the do-operator.
Given a value x for the treatment, returns the expected value of the outcome when the treatment is intervened to a value x.
:param x: Value of the treatment
:param data_df: Data on which the do-operator is to be applied.
:returns: Value of the outcome when treatment is intervened/set to x.
"""
est = self._do(x, data_df)
return est
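    # Usage sketch for the do-operator, assuming a fitted child estimator `est`
    # (hypothetical name) that implements `_do`:
    #
    #   y_treated = est.do(1)   # expected outcome under do(T=1)
    #   y_control = est.do(0)   # expected outcome under do(T=0)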
def construct_symbolic_estimator(self, estimand):
raise NotImplementedError(("Symbolic estimator string is ").format(self.__class__))
def _generate_bootstrap_estimates(self, num_bootstrap_simulations, sample_size_fraction):
"""Helper function to generate causal estimates over bootstrapped samples.
:param num_bootstrap_simulations: Number of simulations for the bootstrap method.
:param sample_size_fraction: Fraction of the dataset to be resampled.
:returns: A collections.namedtuple containing a list of bootstrapped estimates and a dictionary containing parameters used for the bootstrap.
"""
# The array that stores the results of all estimations
simulation_results = np.zeros(num_bootstrap_simulations)
        # Compute the sample size as a fraction of the population (full dataset) size
sample_size = int(sample_size_fraction * len(self._data))
if sample_size > len(self._data):
self.logger.warning("WARN: The sample size is greater than the data being sampled")
self.logger.info("INFO: The sample size: {}".format(sample_size))
self.logger.info("INFO: The number of simulations: {}".format(num_bootstrap_simulations))
# Perform the set number of simulations
for index in range(num_bootstrap_simulations):
new_data = resample(self._data, n_samples=sample_size)
new_estimator = self.get_new_estimator_object(
self._target_estimand,
test_significance=False,
evaluate_effect_strength=False,
confidence_intervals=False,
)
new_estimator.fit(
new_data,
self._target_estimand.treatment_variable,
self._target_estimand.outcome_variable,
effect_modifier_names=self._effect_modifier_names,
)
new_effect = new_estimator.estimate_effect(
treatment_value=self._treatment_value,
control_value=self._control_value,
target_units=self._target_units,
)
simulation_results[index] = new_effect.value
estimates = CausalEstimator.BootstrapEstimates(
simulation_results,
{"num_simulations": num_bootstrap_simulations, "sample_size_fraction": sample_size_fraction},
)
return estimates
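    # Sketch of how the returned namedtuple is consumed internally:
    #
    #   boot = self._generate_bootstrap_estimates(100, sample_size_fraction=1)
    #   np.std(boot.estimates)   # basis for the bootstrap standard error
    #   boot.params              # {"num_simulations": 100, "sample_size_fraction": 1}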
def _estimate_confidence_intervals_with_bootstrap(
self, estimate_value, confidence_level=None, num_simulations=None, sample_size_fraction=None
):
"""
Method to compute confidence interval using bootstrapped sampling.
:param estimate_value: obtained estimate's value
:param confidence_level: The level for which to compute CI (e.g., 95% confidence level translates to confidence_level=0.95)
:param num_simulations: The number of simulations to be performed to get the bootstrap confidence intervals.
:param sample_size_fraction: The fraction of the dataset to be resampled.
:returns: confidence interval at the specified level.
For more details on bootstrap or resampling statistics, refer to the following links:
https://ocw.mit.edu/courses/mathematics/18-05-introduction-to-probability-and-statistics-spring-2014/readings/MIT18_05S14_Reading24.pdf
https://projecteuclid.org/download/pdf_1/euclid.ss/1032280214
"""
# Using class default parameters if not specified
if num_simulations is None:
num_simulations = self.num_simulations
if sample_size_fraction is None:
sample_size_fraction = self.sample_size_fraction
# Checking if bootstrap_estimates are already computed
if self._bootstrap_estimates is None:
self._bootstrap_estimates = self._generate_bootstrap_estimates(num_simulations, sample_size_fraction)
elif CausalEstimator.is_bootstrap_parameter_changed(self._bootstrap_estimates.params, locals()):
# Checked if any parameter is changed from the previous std error estimate
self._bootstrap_estimates = self._generate_bootstrap_estimates(num_simulations, sample_size_fraction)
# Now use the data obtained from the simulations to get the value of the confidence estimates
bootstrap_estimates = self._bootstrap_estimates.estimates
# Get the variations of each bootstrap estimate and sort
bootstrap_variations = [bootstrap_estimate - estimate_value for bootstrap_estimate in bootstrap_estimates]
sorted_bootstrap_variations = np.sort(bootstrap_variations)
        # Now we take the (1 - p)th and the (p)th quantiles of the bootstrap variations, where p is the chosen confidence level
upper_bound_index = int((1 - confidence_level) * len(sorted_bootstrap_variations))
lower_bound_index = int(confidence_level * len(sorted_bootstrap_variations))
# Get the lower and upper bounds by subtracting the variations from the estimate
lower_bound = estimate_value - sorted_bootstrap_variations[lower_bound_index]
upper_bound = estimate_value - sorted_bootstrap_variations[upper_bound_index]
return lower_bound, upper_bound
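    # Worked illustration of the pivot bounds above (hypothetical numbers): with
    # estimate_value = 2.0, confidence_level = 0.95 and n sorted variations,
    #   lower_bound = 2.0 - sorted_bootstrap_variations[int(0.95 * n)]
    #   upper_bound = 2.0 - sorted_bootstrap_variations[int(0.05 * n)]
    # so a heavy right tail in the bootstrap distribution pushes the lower bound down.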
def _estimate_confidence_intervals(self, confidence_level=None, method=None, **kwargs):
"""
This method is to be overriden by the child classes, so that they
can run a confidence interval estimation method suited to the specific
causal estimator.
"""
raise NotImplementedError(
(
"This method for estimating confidence intervals is "
+ CausalEstimator.DEFAULT_NOTIMPLEMENTEDERROR_MSG
+ " Meanwhile, you can try the bootstrap method (method='bootstrap') to estimate confidence intervals."
).format(self.__class__)
)
def estimate_confidence_intervals(self, estimate_value, confidence_level=None, method=None, **kwargs):
"""Find the confidence intervals corresponding to any estimator
By default, this is done with the help of bootstrapped confidence intervals
but can be overridden if the specific estimator implements other methods of estimating confidence intervals.
If the method provided is not bootstrap, this function calls the implementation of the specific estimator.
:param estimate_value: obtained estimate's value
:param method: Method for estimating confidence intervals.
:param confidence_level: The confidence level of the confidence intervals of the estimate.
:param kwargs: Other optional args to be passed to the CI method.
:returns: The obtained confidence interval.
"""
if method is None:
if self._confidence_intervals:
method = self._confidence_intervals # this is either True or methodname
else:
method = "default"
confidence_intervals = None
if confidence_level is None:
confidence_level = self.confidence_level
if method == "default" or method is True: # user has not provided any method
try:
confidence_intervals = self._estimate_confidence_intervals(confidence_level, method=method, **kwargs)
except NotImplementedError:
confidence_intervals = self._estimate_confidence_intervals_with_bootstrap(
estimate_value, confidence_level, **kwargs
)
else:
if method == "bootstrap":
confidence_intervals = self._estimate_confidence_intervals_with_bootstrap(
estimate_value, confidence_level, **kwargs
)
else:
confidence_intervals = self._estimate_confidence_intervals(confidence_level, method=method, **kwargs)
return confidence_intervals
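    # Usage sketch, assuming a fitted estimator `est` with estimate value `v`
    # (both names hypothetical):
    #
    #   lower, upper = est.estimate_confidence_intervals(
    #       v, confidence_level=0.95, method="bootstrap", num_simulations=200
    #   )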
def _estimate_std_error_with_bootstrap(self, num_simulations=None, sample_size_fraction=None):
"""Compute standard error using the bootstrap method. Standard error
and confidence intervals use the same parameter num_simulations for
the number of bootstrap simulations.
:param num_simulations: Number of bootstrapped samples.
:param sample_size_fraction: Fraction of data to be resampled.
:returns: Standard error of the obtained estimate.
"""
# Use existing params, if new user defined params are not present
if num_simulations is None:
num_simulations = self.num_simulations
if sample_size_fraction is None:
sample_size_fraction = self.sample_size_fraction
# Checking if bootstrap_estimates are already computed
if self._bootstrap_estimates is None:
self._bootstrap_estimates = self._generate_bootstrap_estimates(num_simulations, sample_size_fraction)
elif CausalEstimator.is_bootstrap_parameter_changed(self._bootstrap_estimates.params, locals()):
# Check if any parameter is changed from the previous std error estimate
self._bootstrap_estimates = self._generate_bootstrap_estimates(num_simulations, sample_size_fraction)
std_error = np.std(self._bootstrap_estimates.estimates)
return std_error
def _estimate_std_error(self, method=None, **kwargs):
"""
This method is to be overriden by the child classes, so that they
can run a standard error estimation method suited to the specific
causal estimator.
"""
raise NotImplementedError(
(
"This method for estimating standard errors is "
+ CausalEstimator.DEFAULT_NOTIMPLEMENTEDERROR_MSG
+ " Meanwhile, you can try the bootstrap method (method='bootstrap') to estimate standard errors."
).format(self.__class__)
)
def estimate_std_error(self, method=None, **kwargs):
"""Compute standard error of an obtained causal estimate.
:param method: Method for computing the standard error.
:param kwargs: Other optional parameters to be passed to the estimating method.
:returns: Standard error of the causal estimate.
"""
if method is None:
if self._confidence_intervals:
method = self._confidence_intervals
else:
method = "default"
std_error = None
if method == "default" or method is True: # user has not provided any method
try:
std_error = self._estimate_std_error(method, **kwargs)
except NotImplementedError:
std_error = self._estimate_std_error_with_bootstrap(**kwargs)
else:
if method == "bootstrap":
std_error = self._estimate_std_error_with_bootstrap(**kwargs)
else:
std_error = self._estimate_std_error(method, **kwargs)
return std_error
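    # Usage sketch (bootstrap path), assuming a fitted estimator `est`:
    #
    #   se = est.estimate_std_error(method="bootstrap", num_simulations=200)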
def _test_significance_with_bootstrap(self, estimate_value, num_null_simulations=None):
"""Test statistical significance of an estimate using the bootstrap method.
:param estimate_value: Obtained estimate's value
:param num_null_simulations: Number of simulations for the null hypothesis
:returns: p-value of the statistical significance test.
"""
# Use existing params, if new user defined params are not present
if num_null_simulations is None:
num_null_simulations = self.num_null_simulations
do_retest = self._bootstrap_null_estimates is None or CausalEstimator.is_bootstrap_parameter_changed(
self._bootstrap_null_estimates.params, locals()
)
if do_retest:
null_estimates = np.zeros(num_null_simulations)
for i in range(num_null_simulations):
new_outcome = np.random.permutation(self._outcome)
new_data = self._data.assign(dummy_outcome=new_outcome)
new_estimator = self.get_new_estimator_object(
self._target_estimand,
test_significance=False,
evaluate_effect_strength=False,
confidence_intervals=False,
)
new_estimator.fit(
data=new_data,
treatment_name=self._target_estimand.treatment_variable,
outcome_name=("dummy_outcome",),
effect_modifier_names=self._effect_modifier_names,
)
new_effect = new_estimator.estimate_effect(
target_units=self._target_units,
)
null_estimates[i] = new_effect.value
self._bootstrap_null_estimates = CausalEstimator.BootstrapEstimates(
null_estimates, {"num_null_simulations": num_null_simulations, "sample_size_fraction": 1}
)
# Processing the null hypothesis estimates
sorted_null_estimates = np.sort(self._bootstrap_null_estimates.estimates)
self.logger.debug("Null estimates: {0}".format(sorted_null_estimates))
median_estimate = sorted_null_estimates[int(num_null_simulations / 2)]
# Doing a two-sided test
if estimate_value > median_estimate:
# Being conservative with the p-value reported
estimate_index = np.searchsorted(sorted_null_estimates, estimate_value, side="left")
p_value = 1 - (estimate_index / num_null_simulations)
        else:  # estimate_value <= median_estimate
# Being conservative with the p-value reported
estimate_index = np.searchsorted(sorted_null_estimates, estimate_value, side="right")
p_value = estimate_index / num_null_simulations
# If the estimate_index is 0, it depends on the number of simulations
if p_value == 0:
p_value = (0, 1 / len(sorted_null_estimates)) # a tuple determining the range.
elif p_value == 1:
p_value = (1 - 1 / len(sorted_null_estimates), 1)
signif_dict = {"p_value": p_value}
return signif_dict
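    # Interpretation note: the returned p-value is either a float or, at the
    # resolution limit of the simulation, a (low, high) tuple. For example, with
    # 1000 null simulations an estimate exceeding every null draw yields
    # {"p_value": (0, 0.001)}, i.e., "less than 1 in 1000".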
def _test_significance(self, estimate_value, method=None, **kwargs):
"""
This method is to be overriden by the child classes, so that they
can run a significance test suited to the specific
causal estimator.
"""
raise NotImplementedError(
(
"This method for testing statistical significance is "
+ CausalEstimator.DEFAULT_NOTIMPLEMENTEDERROR_MSG
+ " Meanwhile, you can try the bootstrap method (method='bootstrap') to test statistical significance."
).format(self.__class__)
)
def test_significance(self, estimate_value, method=None, **kwargs):
"""Test statistical significance of obtained estimate.
By default, uses resampling to create a non-parametric significance test.
A general procedure. Individual child estimators can implement different methods.
If the method name is different from "bootstrap", this function calls the
implementation of the child estimator.
:param self: object instance of class Estimator
:param estimate_value: obtained estimate's value
:param method: Method for checking statistical significance
:returns: p-value from the significance test
"""
if method is None:
if self._significance_test:
method = self._significance_test # this is either True or methodname
else:
method = "default"
signif_dict = None
if method == "default" or method is True: # user has not provided any method
try:
signif_dict = self._test_significance(estimate_value, method, **kwargs)
except NotImplementedError:
signif_dict = self._test_significance_with_bootstrap(estimate_value, **kwargs)
else:
if method == "bootstrap":
signif_dict = self._test_significance_with_bootstrap(estimate_value, **kwargs)
else:
signif_dict = self._test_significance(estimate_value, method, **kwargs)
return signif_dict
def evaluate_effect_strength(self, estimate):
fraction_effect_explained = self._evaluate_effect_strength(estimate, method="fraction-effect")
# Need to test r-squared before supporting
# effect_r_squared = self._evaluate_effect_strength(estimate, method="r-squared")
strength_dict = {
"fraction-effect": fraction_effect_explained
# 'r-squared': effect_r_squared
}
return strength_dict
def _evaluate_effect_strength(self, estimate, method="fraction-effect"):
supported_methods = ["fraction-effect"]
if method not in supported_methods:
raise NotImplementedError("This method is not supported for evaluating effect strength")
if method == "fraction-effect":
naive_obs_estimate = self.estimate_effect_naive()
            self.logger.debug("Causal estimate: %s, naive estimate: %s", estimate.value, naive_obs_estimate.value)
fraction_effect_explained = estimate.value / naive_obs_estimate.value
return fraction_effect_explained
# elif method == "r-squared":
# outcome_mean = np.mean(self._outcome)
# total_variance = np.sum(np.square(self._outcome - outcome_mean))
# Assuming a linear model with one variable: the treatment
# Currently only works for continuous y
# causal_model = outcome_mean + estimate.value*self._treatment
# squared_residual = np.sum(np.square(self._outcome - causal_model))
# r_squared = 1 - (squared_residual/total_variance)
# return r_squared
else:
return None
def update_input(self, treatment_value, control_value, target_units):
self._control_value = control_value
self._treatment_value = treatment_value
self._target_units = target_units
@staticmethod
def is_bootstrap_parameter_changed(bootstrap_estimates_params, given_params):
"""Check whether parameters of the bootstrap have changed.
This is an efficiency method that checks if fresh resampling of the bootstrap samples is required.
Returns True if parameters have changed and resampling should be done again.
:param bootstrap_estimates_params: A dictionary of parameters for the current bootstrap samples
:param given_params: A dictionary of parameters passed by the user
:returns: A binary flag denoting whether the parameters are different.
"""
is_any_parameter_changed = False
for prm, val in bootstrap_estimates_params.items():
given_val = given_params.get(prm, None)
if given_val is not None and given_val != val:
is_any_parameter_changed = True
break
return is_any_parameter_changed
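    # Minimal example of this staleness check (hypothetical parameter values):
    #
    #   CausalEstimator.is_bootstrap_parameter_changed(
    #       {"num_simulations": 100, "sample_size_fraction": 1},
    #       {"num_simulations": 200},
    #   )  # -> True, so the bootstrap samples are regenerated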
def target_units_tostr(self):
s = ""
if type(self._target_units) is str:
s += self._target_units
elif callable(self._target_units):
s += "Data subset defined by a function"
elif isinstance(self._target_units, pd.DataFrame):
s += "Data subset provided as a data frame"
return s
def signif_results_tostr(self, signif_results):
s = ""
pval = signif_results["p_value"]
if type(pval) is tuple:
s += "[{0}, {1}]".format(pval[0], pval[1])
else:
s += "{0}".format(pval)
return s
def estimate_effect(
data: pd.DataFrame,
treatment: Union[str, List[str]],
outcome: Union[str, List[str]],
identifier_name: str,
estimator: CausalEstimator,
control_value: int = 0,
treatment_value: int = 1,
target_units: str = "ate",
effect_modifiers: List[str] = None,
fit_estimator: bool = True,
method_params: Optional[Dict] = None,
):
"""Estimate the identified causal effect.
    EconML and CausalML estimation methods are also supported: pass an instance of the corresponding DoWhy wrapper estimator (e.g., the Econml or Causalml estimator classes) as the estimator argument. Note that string method names are no longer accepted here. See `this demo notebook <https://py-why.github.io/dowhy/example_notebooks/dowhy-conditional-treatment-effects.html>`_.
    :param data: data frame containing the data
    :param treatment: Name of the treatment
    :param outcome: Name of the outcome
    :param identifier_name: Name of the identification method (e.g., "backdoor") to select from the estimator's target estimand
:param estimator: Instance of a CausalEstimator to use
:param control_value: Value of the treatment in the control group, for effect estimation. If treatment is multi-variate, this can be a list.
:param treatment_value: Value of the treatment in the treated group, for effect estimation. If treatment is multi-variate, this can be a list.
:param target_units: (Experimental) The units for which the treatment effect should be estimated. This can be of three types. (1) a string for common specifications of target units (namely, "ate", "att" and "atc"), (2) a lambda function that can be used as an index for the data (pandas DataFrame), or (3) a new DataFrame that contains values of the effect_modifiers and effect will be estimated only for this new data.
:param effect_modifiers: Names of effect modifier variables can be (optionally) specified here too, since they do not affect identification. If None, the effect_modifiers from the CausalModel are used.
:param fit_estimator: Boolean flag on whether to fit the estimator.
        Setting it to False is useful to estimate the effect on new data using a previously fitted estimator.
    :param method_params: Optional dictionary of extra parameters; if it contains a "fit_params" entry, its contents are passed through to the estimator's fit method.
:returns: An instance of the CausalEstimate class, containing the causal effect estimate
and other method-dependent information
"""
if effect_modifiers is None:
effect_modifiers = []
treatment = parse_state(treatment)
outcome = parse_state(outcome)
causal_estimator_class = estimator.__class__
identified_estimand = estimator._target_estimand
identified_estimand.set_identifier_method(identifier_name)
if identified_estimand.no_directed_path:
logger.warning("No directed path from {0} to {1}.".format(treatment, outcome))
return CausalEstimate(
0, identified_estimand, None, control_value=control_value, treatment_value=treatment_value
)
# Check if estimator's target estimand is identified
elif identified_estimand.estimands[identifier_name] is None:
logger.error("No valid identified estimand available.")
return CausalEstimate(None, None, None, control_value=control_value, treatment_value=treatment_value)
if fit_estimator:
estimator.fit(
data=data,
treatment_name=treatment,
outcome_name=outcome,
effect_modifier_names=effect_modifiers,
            **(method_params["fit_params"] if method_params is not None and "fit_params" in method_params else {}),
)
estimate = estimator.estimate_effect(
treatment_value=treatment_value,
control_value=control_value,
target_units=target_units,
confidence_intervals=estimator._confidence_intervals,
)
# Store parameters inside estimate object for refutation methods
# TODO: This add_params needs to move to the estimator class
# inside estimate_effect and estimate_conditional_effect
estimate.add_params(
estimand_type=identified_estimand.estimand_type,
estimator_class=causal_estimator_class,
test_significance=estimator._significance_test,
evaluate_effect_strength=estimator._effect_strength_eval,
confidence_intervals=estimator._confidence_intervals,
target_units=target_units,
effect_modifiers=effect_modifiers,
)
if estimator._significance_test:
estimator.test_significance(estimate.value, method=estimator._significance_test)
if estimator._confidence_intervals:
estimator.estimate_confidence_intervals(
estimate.value, confidence_level=estimator.confidence_level, method=estimator._confidence_intervals
)
if estimator._effect_strength_eval:
effect_strength_dict = estimator.evaluate_effect_strength(estimate)
estimate.add_effect_strength(effect_strength_dict)
return estimate
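# End-to-end usage sketch for the functional API above. The data frame `df`, the
# column names, and the choice of PropensityScoreMatchingEstimator are
# illustrative assumptions; any CausalEstimator subclass can be passed.
#
#   from dowhy.causal_estimators.propensity_score_matching_estimator import (
#       PropensityScoreMatchingEstimator,
#   )
#
#   estimator = PropensityScoreMatchingEstimator(identified_estimand)
#   estimate = estimate_effect(
#       data=df,
#       treatment="v0",
#       outcome="y",
#       identifier_name="backdoor",
#       estimator=estimator,
#       target_units="ate",
#   )
#   print(estimate.value)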
class CausalEstimate:
"""Class for the estimate object that every causal estimator returns"""
def __init__(
self,
estimate,
target_estimand,
realized_estimand_expr,
control_value,
treatment_value,
conditional_estimates=None,
**kwargs,
):
self.value = estimate
self.target_estimand = target_estimand
self.realized_estimand_expr = realized_estimand_expr
self.control_value = control_value
self.treatment_value = treatment_value
self.conditional_estimates = conditional_estimates
self.params = kwargs
if self.params is not None:
for key, value in self.params.items():
setattr(self, key, value)
self.effect_strength = None
def add_estimator(self, estimator_instance):
self.estimator = estimator_instance
def add_effect_strength(self, strength_dict):
self.effect_strength = strength_dict
def add_params(self, **kwargs):
self.params.update(kwargs)
def get_confidence_intervals(self, confidence_level=None, method=None, **kwargs):
"""Get confidence intervals of the obtained estimate.
By default, this is done with the help of bootstrapped confidence intervals
but can be overridden if the specific estimator implements other methods of estimating confidence intervals.
If the method provided is not bootstrap, this function calls the implementation of the specific estimator.
:param method: Method for estimating confidence intervals.
:param confidence_level: The confidence level of the confidence intervals of the estimate.
:param kwargs: Other optional args to be passed to the CI method.
:returns: The obtained confidence interval.
"""
confidence_intervals = self.estimator.estimate_confidence_intervals(
estimate_value=self.value, confidence_level=confidence_level, method=method, **kwargs
)
return confidence_intervals
def get_standard_error(self, method=None, **kwargs):
"""Get standard error of the obtained estimate.
By default, this is done with the help of bootstrapped standard errors
but can be overridden if the specific estimator implements other methods of estimating standard error.
If the method provided is not bootstrap, this function calls the implementation of the specific estimator.
:param method: Method for computing the standard error.
:param kwargs: Other optional parameters to be passed to the estimating method.
:returns: Standard error of the causal estimate.
"""
std_error = self.estimator.estimate_std_error(method=method, **kwargs)
return std_error
def test_stat_significance(self, method=None, **kwargs):
"""Test statistical significance of the estimate obtained.
By default, uses resampling to create a non-parametric significance test.
Individual child estimators can implement different methods.
If the method name is different from "bootstrap", this function calls the
implementation of the child estimator.
:param method: Method for checking statistical significance
:param kwargs: Other optional parameters to be passed to the estimating method.
:returns: p-value from the significance test
"""
signif_results = self.estimator.test_significance(self.value, method=method, **kwargs)
return {"p_value": signif_results["p_value"]}
def estimate_conditional_effects(
self, effect_modifiers=None, num_quantiles=CausalEstimator.NUM_QUANTILES_TO_DISCRETIZE_CONT_COLS
):
"""Estimate treatment effect conditioned on given variables.
If a numeric effect modifier is provided, it is discretized into quantile bins. If you would like a custom discretization, you can do so yourself: create a new column containing the discretized effect modifier and then include that column's name in the effect_modifier_names argument.
:param effect_modifiers: Names of effect modifier variables over which the conditional effects will be estimated. If not provided, defaults to the effect modifiers specified during creation of the CausalEstimator object.
:param num_quantiles: The number of quantiles into which a numeric effect modifier variable is discretized. Does not affect any categorical effect modifiers.
:returns: A (multi-index) dataframe that provides separate effects for each value of the (discretized) effect modifiers.
"""
return self.estimator._estimate_conditional_effects(
self.estimator._estimate_effect_fn, effect_modifiers, num_quantiles
)
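    # Usage sketch on a returned estimate object (column name "X0" is hypothetical):
    #
    #   cond = estimate.estimate_conditional_effects(effect_modifiers=["X0"], num_quantiles=4)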
def interpret(self, method_name=None, **kwargs):
"""Interpret the causal estimate.
:param method_name: Method used (string) or a list of methods. If None, then the default for the specific estimator is used.
        :param kwargs: Optional parameters that are directly passed to the interpreter method.
:returns: None
"""
if method_name is None:
method_name = self.estimator.interpret_method
method_name_arr = parse_state(method_name)
for method in method_name_arr:
interpreter = interpreters.get_class_object(method)
interpreter(self, **kwargs).interpret()
def __str__(self):
s = "*** Causal Estimate ***\n"
# No estimand was identified (identification failed)
if self.target_estimand is None:
return "Estimation failed! No relevant identified estimand available for this estimation method."
s += "\n## Identified estimand\n{0}".format(self.target_estimand.__str__(only_target_estimand=True))
s += "\n## Realized estimand\n{0}".format(self.realized_estimand_expr)
if hasattr(self, "estimator"):
s += "\nTarget units: {0}\n".format(self.estimator.target_units_tostr())
s += "\n## Estimate\n"
s += "Mean value: {0}\n".format(self.value)
s += ""
if hasattr(self, "cate_estimates"):
s += "Effect estimates: {0}\n".format(self.cate_estimates)
if hasattr(self, "estimator"):
if self.estimator._significance_test:
s += "p-value: {0}\n".format(self.estimator.signif_results_tostr(self.test_stat_significance()))
if self.estimator._confidence_intervals:
s += "{0}% confidence interval: {1}\n".format(
100 * self.estimator.confidence_level, self.get_confidence_intervals()
)
if self.conditional_estimates is not None:
s += "### Conditional Estimates\n"
s += str(self.conditional_estimates)
if self.effect_strength is not None:
s += "\n## Effect Strength\n"
s += "Change in outcome attributable to treatment: {}\n".format(self.effect_strength["fraction-effect"])
# s += "Variance in outcome explained by treatment: {}\n".format(self.effect_strength["r-squared"])
return s
class RealizedEstimand(object):
def __init__(self, identified_estimand, estimator_name):
self.treatment_variable = identified_estimand.treatment_variable
self.outcome_variable = identified_estimand.outcome_variable
self.backdoor_variables = identified_estimand.get_backdoor_variables()
self.instrumental_variables = identified_estimand.instrumental_variables
self.estimand_type = identified_estimand.estimand_type
self.estimand_expression = None
self.assumptions = None
self.estimator_name = estimator_name
def update_assumptions(self, estimator_assumptions):
self.assumptions = estimator_assumptions
def update_estimand_expression(self, estimand_expression):
self.estimand_expression = estimand_expression
def __str__(self):
s = "Realized estimand: {0}\n".format(self.estimator_name)
s += "Realized estimand type: {0}\n".format(self.estimand_type)
s += "Estimand expression:\n{0}\n".format(sp.pretty(self.estimand_expression))
j = 1
for ass_name, ass_str in self.assumptions.items():
s += "Estimand assumption {0}, {1}: {2}\n".format(j, ass_name, ass_str)
j += 1
return s
| andresmor-ms | 11c4e0dafd6e824eb81ad14262457d954ae61468 | affe0952f4aba6845247355c171565510c2c1673 | in the docstring below, need to update how estimator is provided. Providing strings is not allowed. | amit-sharma | 196 |
py-why/dowhy | 746 | Functional api/causal estimators | * Introduce `fit()` method to estimators.
* Refactor constructors to avoid using `*args` and `**kwargs` and have more explicit parameters.
* Refactor refuters and other parts of the code to use `fit()` and modify arguments to `estimate_effect()` | null | 2022-11-04 16:15:39+00:00 | 2022-12-03 17:07:53+00:00 | dowhy/causal_estimator.py | import logging
from collections import namedtuple
from typing import Dict, List, Optional, Union
import numpy as np
import pandas as pd
import sympy as sp
from sklearn.utils import resample
import dowhy.interpreters as interpreters
from dowhy import causal_estimators
from dowhy.causal_graph import CausalGraph
from dowhy.causal_identifier.identified_estimand import IdentifiedEstimand
from dowhy.utils.api import parse_state
logger = logging.getLogger(__name__)
class CausalEstimator:
"""Base class for an estimator of causal effect.
Subclasses implement different estimation methods. All estimation methods are in the package "dowhy.causal_estimators"
"""
# The default number of simulations for statistical testing
DEFAULT_NUMBER_OF_SIMULATIONS_STAT_TEST = 1000
# The default number of simulations to obtain confidence intervals
DEFAULT_NUMBER_OF_SIMULATIONS_CI = 100
# The portion of the total size that should be taken each time to find the confidence intervals
# 1 is the recommended value
# https://ocw.mit.edu/courses/mathematics/18-05-introduction-to-probability-and-statistics-spring-2014/readings/MIT18_05S14_Reading24.pdf
# https://projecteuclid.org/download/pdf_1/euclid.ss/1032280214
DEFAULT_SAMPLE_SIZE_FRACTION = 1
# The default Confidence Level
DEFAULT_CONFIDENCE_LEVEL = 0.95
# Number of quantiles to discretize continuous columns, for applying groupby
NUM_QUANTILES_TO_DISCRETIZE_CONT_COLS = 5
# Prefix to add to temporary categorical variables created after discretization
TEMP_CAT_COLUMN_PREFIX = "__categorical__"
    DEFAULT_NOTIMPLEMENTEDERROR_MSG = "not yet implemented for {0}. If you would like this to be implemented in the next version, please raise an issue at https://github.com/microsoft/dowhy/issues"
BootstrapEstimates = namedtuple("BootstrapEstimates", ["estimates", "params"])
DEFAULT_INTERPRET_METHOD = ["textual_effect_interpreter"]
# std args to be removed from locals() before being passed to args_dict
_STD_INIT_ARGS = ("self", "__class__", "args", "kwargs")
def __init__(
self,
data,
identified_estimand,
treatment,
outcome,
control_value=0,
treatment_value=1,
test_significance=False,
evaluate_effect_strength=False,
confidence_intervals=False,
target_units=None,
effect_modifiers=None,
num_null_simulations=DEFAULT_NUMBER_OF_SIMULATIONS_STAT_TEST,
num_simulations=DEFAULT_NUMBER_OF_SIMULATIONS_CI,
sample_size_fraction=DEFAULT_SAMPLE_SIZE_FRACTION,
confidence_level=DEFAULT_CONFIDENCE_LEVEL,
need_conditional_estimates="auto",
num_quantiles_to_discretize_cont_cols=NUM_QUANTILES_TO_DISCRETIZE_CONT_COLS,
**kwargs,
):
"""Initializes an estimator with data and names of relevant variables.
This method is called from the constructors of its child classes.
:param data: data frame containing the data
:param identified_estimand: probability expression
representing the target identified estimand to estimate.
:param treatment: name of the treatment variable
:param outcome: name of the outcome variable
:param control_value: Value of the treatment in the control group, for effect estimation. If treatment is multi-variate, this can be a list.
:param treatment_value: Value of the treatment in the treated group, for effect estimation. If treatment is multi-variate, this can be a list.
:param test_significance: Binary flag or a string indicating whether to test significance and by which method. All estimators support test_significance="bootstrap" that estimates a p-value for the obtained estimate using the bootstrap method. Individual estimators can override this to support custom testing methods. The bootstrap method supports an optional parameter, num_null_simulations. If False, no testing is done. If True, significance of the estimate is tested using the custom method if available, otherwise by bootstrap.
:param evaluate_effect_strength: (Experimental) whether to evaluate the strength of effect
:param confidence_intervals: Binary flag or a string indicating whether the confidence intervals should be computed and which method should be used. All methods support estimation of confidence intervals using the bootstrap method by using the parameter confidence_intervals="bootstrap". The bootstrap method takes in two arguments (num_simulations and sample_size_fraction) that can be optionally specified in the params dictionary. Estimators may also override this to implement their own confidence interval method. If this parameter is False, no confidence intervals are computed. If True, confidence intervals are computed by the estimator's specific method if available, otherwise through bootstrap.
:param target_units: The units for which the treatment effect should be estimated. This can be a string for common specifications of target units (namely, "ate", "att" and "atc"). It can also be a lambda function that can be used as an index for the data (pandas DataFrame). Alternatively, it can be a new DataFrame that contains values of the effect_modifiers and effect will be estimated only for this new data.
:param effect_modifiers: Variables on which to compute separate
effects, or return a heterogeneous effect function. Not all
methods support this currently.
:param num_null_simulations: The number of simulations for testing the
statistical significance of the estimator
:param num_simulations: The number of simulations for finding the
confidence interval (and/or standard error) for a estimate
:param sample_size_fraction: The size of the sample for the bootstrap
estimator
:param confidence_level: The confidence level of the confidence
interval estimate
:param need_conditional_estimates: Boolean flag indicating whether
conditional estimates should be computed. Defaults to True if
there are effect modifiers in the graph
:param num_quantiles_to_discretize_cont_cols: The number of quantiles
into which a numeric effect modifier is split, to enable
estimation of conditional treatment effect over it.
:param kwargs: (optional) Additional estimator-specific parameters
:returns: an instance of the estimator class.
"""
self._data = data
self._target_estimand = identified_estimand
# Currently estimation methods only support univariate treatment and outcome
self._treatment_name = treatment
self._outcome_name = outcome[0] # assuming one-dimensional outcome
self._control_value = control_value
self._treatment_value = treatment_value
self._significance_test = test_significance
self._effect_strength_eval = evaluate_effect_strength
self._target_units = target_units
self._effect_modifier_names = effect_modifiers
self._confidence_intervals = confidence_intervals
self._bootstrap_estimates = None # for confidence intervals and std error
self._bootstrap_null_estimates = None # for significance test
self._effect_modifiers = None
self.method_params = kwargs
# Setting the default interpret method
self.interpret_method = CausalEstimator.DEFAULT_INTERPRET_METHOD
self.logger = logging.getLogger(__name__)
# Setting treatment and outcome values
if self._data is not None:
self._treatment = self._data[self._treatment_name]
self._outcome = self._data[self._outcome_name]
if self._effect_modifier_names:
# only add the observed nodes
self._effect_modifier_names = [
cname for cname in self._effect_modifier_names if cname in self._data.columns
]
if len(self._effect_modifier_names) > 0:
self._effect_modifiers = self._data[self._effect_modifier_names]
self._effect_modifiers = pd.get_dummies(self._effect_modifiers, drop_first=True)
self.logger.debug("Effect modifiers: " + ",".join(self._effect_modifier_names))
else:
self._effect_modifier_names = None
# Check if some parameters were set, otherwise set to default values
self.num_null_simulations = num_null_simulations
self.num_simulations = num_simulations
self.sample_size_fraction = sample_size_fraction
self.confidence_level = confidence_level
self.num_quantiles_to_discretize_cont_cols = num_quantiles_to_discretize_cont_cols
# Estimate conditional estimates by default
self.need_conditional_estimates = (
need_conditional_estimates if need_conditional_estimates != "auto" else bool(self._effect_modifier_names)
)
@staticmethod
def get_estimator_object(new_data, identified_estimand, estimate):
"""Create a new estimator of the same type as the one passed in the estimate argument.
Creates a new object with new_data and the identified_estimand
:param new_data: np.ndarray, pd.Series, pd.DataFrame
The newly assigned data on which the estimator should run
:param identified_estimand: IdentifiedEstimand
An instance of the identified estimand class that provides the information with
        respect to which causal pathways are employed when the treatment affects the outcome
:param estimate: CausalEstimate
It is an already existing estimate whose properties we wish to replicate
:returns: An instance of the same estimator class that had generated the given estimate.
"""
estimator_class = estimate.params["estimator_class"]
new_estimator = estimator_class(
new_data,
identified_estimand,
identified_estimand.treatment_variable,
identified_estimand.outcome_variable,
# names of treatment and outcome
control_value=estimate.control_value,
treatment_value=estimate.treatment_value,
test_significance=False,
evaluate_effect_strength=False,
confidence_intervals=estimate.params["confidence_intervals"],
target_units=estimate.params["target_units"],
effect_modifiers=estimate.params["effect_modifiers"],
**estimate.params["method_params"] if estimate.params["method_params"] is not None else {},
)
return new_estimator
def _estimate_effect(self):
"""This method is to be overriden by the child classes, so that they can run the estimation technique of their choice"""
raise NotImplementedError(
("Main estimation method is " + CausalEstimator.DEFAULT_NOTIMPLEMENTEDERROR_MSG).format(self.__class__)
)
def estimate_effect(self):
"""Base estimation method that calls the estimate_effect method of its calling subclass.
Can optionally also test significance and estimate effect strength for any returned estimate.
:param self: object instance of class Estimator
        :returns: A CausalEstimate instance that contains point estimates of average and conditional effects. Based on the parameters provided, it optionally includes confidence intervals, standard errors, statistical significance and other statistical parameters.
"""
est = self._estimate_effect()
est.add_estimator(self)
if self._significance_test:
self.test_significance(est.value, method=self._significance_test)
if self._confidence_intervals:
self.estimate_confidence_intervals(
est.value, confidence_level=self.confidence_level, method=self._confidence_intervals
)
if self._effect_strength_eval:
effect_strength_dict = self.evaluate_effect_strength(est)
est.add_effect_strength(effect_strength_dict)
return est
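    # Usage sketch under the pre-`fit()` API shown in this version of the file,
    # where the constructor receives the data and variable names directly (the
    # names and estimator class below are illustrative):
    #
    #   est = PropensityScoreMatchingEstimator(
    #       df, identified_estimand, treatment=["v0"], outcome=["y"]
    #   )
    #   estimate = est.estimate_effect()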
def estimate_effect_naive(self):
# TODO Only works for binary treatment
df_withtreatment = self._data.loc[self._data[self._treatment_name] == 1]
df_notreatment = self._data.loc[self._data[self._treatment_name] == 0]
est = np.mean(df_withtreatment[self._outcome_name]) - np.mean(df_notreatment[self._outcome_name])
return CausalEstimate(est, None, None, control_value=0, treatment_value=1)
def _estimate_effect_fn(self, data_df):
"""Function used in conditional effect estimation. This function is to be overridden by each child estimator.
The overridden function should take in a dataframe as input and return the estimate for that data.
"""
raise NotImplementedError(
("Conditional treatment effects are " + CausalEstimator.DEFAULT_NOTIMPLEMENTEDERROR_MSG).format(
self.__class__
)
)
def _estimate_conditional_effects(self, estimate_effect_fn, effect_modifier_names=None, num_quantiles=None):
"""Estimate conditional treatment effects. Common method for all estimators that utilizes a specific estimate_effect_fn implemented by each child estimator.
If a numeric effect modifier is provided, it is discretized into quantile bins. If you would like a custom discretization, you can do so yourself: create a new column containing the discretized effect modifier and then include that column's name in the effect_modifier_names argument.
:param estimate_effect_fn: Function that has a single parameter (a data frame) and returns the treatment effect estimate on that data.
:param effect_modifier_names: Names of effect modifier variables over which the conditional effects will be estimated. If not provided, defaults to the effect modifiers specified during creation of the CausalEstimator object.
:param num_quantiles: The number of quantiles into which a numeric effect modifier variable is discretized. Does not affect any categorical effect modifiers.
:returns: A (multi-index) dataframe that provides separate effects for each value of the (discretized) effect modifiers.
"""
# Defaulting to class default values if parameters are not provided
if effect_modifier_names is None:
effect_modifier_names = self._effect_modifier_names
if num_quantiles is None:
num_quantiles = self.num_quantiles_to_discretize_cont_cols
# Checking that there is at least one effect modifier
if not effect_modifier_names:
raise ValueError("At least one effect modifier should be specified to compute conditional effects.")
# Making sure that effect_modifier_names is a list
effect_modifier_names = parse_state(effect_modifier_names)
if not all(em in self._effect_modifier_names for em in effect_modifier_names):
            self.logger.warning(
"At least one of the provided effect modifiers was not included while fitting the estimator. You may get incorrect results. To resolve, fit the estimator again by providing the updated effect modifiers in estimate_effect()."
)
# Making a copy since we are going to be changing effect modifier names
effect_modifier_names = effect_modifier_names.copy()
prefix = CausalEstimator.TEMP_CAT_COLUMN_PREFIX
# For every numeric effect modifier, adding a temp categorical column
for i in range(len(effect_modifier_names)):
em = effect_modifier_names[i]
if pd.api.types.is_numeric_dtype(self._data[em].dtypes):
self._data[prefix + str(em)] = pd.qcut(self._data[em], num_quantiles, duplicates="drop")
effect_modifier_names[i] = prefix + str(em)
# Grouping by effect modifiers and computing effect separately
by_effect_mods = self._data.groupby(effect_modifier_names)
conditional_estimates = by_effect_mods.apply(estimate_effect_fn)
# Deleting the temporary categorical columns
for em in effect_modifier_names:
if em.startswith(prefix):
self._data.pop(em)
return conditional_estimates
def _do(self, x, data_df=None):
raise NotImplementedError(
("Do-operator is " + CausalEstimator.DEFAULT_NOTIMPLEMENTEDERROR_MSG).format(self.__class__)
)
def do(self, x, data_df=None):
"""Method that implements the do-operator.
Given a value x for the treatment, returns the expected value of the outcome when the treatment is intervened to a value x.
:param x: Value of the treatment
:param data_df: Data on which the do-operator is to be applied.
:returns: Value of the outcome when treatment is intervened/set to x.
"""
est = self._do(x, data_df)
return est
def construct_symbolic_estimator(self, estimand):
raise NotImplementedError(("Symbolic estimator string is ").format(self.__class__))
def _generate_bootstrap_estimates(self, num_bootstrap_simulations, sample_size_fraction):
"""Helper function to generate causal estimates over bootstrapped samples.
:param num_bootstrap_simulations: Number of simulations for the bootstrap method.
:param sample_size_fraction: Fraction of the dataset to be resampled.
:returns: A collections.namedtuple containing a list of bootstrapped estimates and a dictionary containing parameters used for the bootstrap.
"""
# The array that stores the results of all estimations
simulation_results = np.zeros(num_bootstrap_simulations)
        # Compute the sample size as a fraction of the population (full dataset) size
sample_size = int(sample_size_fraction * len(self._data))
if sample_size > len(self._data):
self.logger.warning("WARN: The sample size is greater than the data being sampled")
self.logger.info("INFO: The sample size: {}".format(sample_size))
self.logger.info("INFO: The number of simulations: {}".format(num_bootstrap_simulations))
# Perform the set number of simulations
for index in range(num_bootstrap_simulations):
new_data = resample(self._data, n_samples=sample_size)
new_estimator = type(self)(
new_data,
self._target_estimand,
self._target_estimand.treatment_variable,
self._target_estimand.outcome_variable,
# names of treatment and outcome
treatment_value=self._treatment_value,
control_value=self._control_value,
test_significance=False,
evaluate_effect_strength=False,
confidence_intervals=False,
target_units=self._target_units,
effect_modifiers=self._effect_modifier_names,
**self.method_params,
)
new_effect = new_estimator.estimate_effect()
simulation_results[index] = new_effect.value
estimates = CausalEstimator.BootstrapEstimates(
simulation_results,
{"num_simulations": num_bootstrap_simulations, "sample_size_fraction": sample_size_fraction},
)
return estimates
def _estimate_confidence_intervals_with_bootstrap(
self, estimate_value, confidence_level=None, num_simulations=None, sample_size_fraction=None
):
"""
Method to compute confidence interval using bootstrapped sampling.
:param estimate_value: obtained estimate's value
:param confidence_level: The level for which to compute CI (e.g., 95% confidence level translates to confidence_level=0.95)
:param num_simulations: The number of simulations to be performed to get the bootstrap confidence intervals.
:param sample_size_fraction: The fraction of the dataset to be resampled.
:returns: confidence interval at the specified level.
For more details on bootstrap or resampling statistics, refer to the following links:
https://ocw.mit.edu/courses/mathematics/18-05-introduction-to-probability-and-statistics-spring-2014/readings/MIT18_05S14_Reading24.pdf
https://projecteuclid.org/download/pdf_1/euclid.ss/1032280214
"""
# Using class default parameters if not specified
if num_simulations is None:
num_simulations = self.num_simulations
if sample_size_fraction is None:
sample_size_fraction = self.sample_size_fraction
# Checking if bootstrap_estimates are already computed
if self._bootstrap_estimates is None:
self._bootstrap_estimates = self._generate_bootstrap_estimates(num_simulations, sample_size_fraction)
elif CausalEstimator.is_bootstrap_parameter_changed(self._bootstrap_estimates.params, locals()):
# Checked if any parameter is changed from the previous std error estimate
self._bootstrap_estimates = self._generate_bootstrap_estimates(num_simulations, sample_size_fraction)
# Now use the data obtained from the simulations to get the value of the confidence estimates
bootstrap_estimates = self._bootstrap_estimates.estimates
# Get the variations of each bootstrap estimate and sort
bootstrap_variations = [bootstrap_estimate - estimate_value for bootstrap_estimate in bootstrap_estimates]
sorted_bootstrap_variations = np.sort(bootstrap_variations)
        # Now we take the (1 - p)th and the (p)th quantiles of the bootstrap variations, where p is the chosen confidence level
upper_bound_index = int((1 - confidence_level) * len(sorted_bootstrap_variations))
lower_bound_index = int(confidence_level * len(sorted_bootstrap_variations))
# Get the lower and upper bounds by subtracting the variations from the estimate
lower_bound = estimate_value - sorted_bootstrap_variations[lower_bound_index]
upper_bound = estimate_value - sorted_bootstrap_variations[upper_bound_index]
return lower_bound, upper_bound
def _estimate_confidence_intervals(self, confidence_level=None, method=None, **kwargs):
"""
This method is to be overriden by the child classes, so that they
can run a confidence interval estimation method suited to the specific
causal estimator.
"""
raise NotImplementedError(
(
"This method for estimating confidence intervals is "
+ CausalEstimator.DEFAULT_NOTIMPLEMENTEDERROR_MSG
+ " Meanwhile, you can try the bootstrap method (method='bootstrap') to estimate confidence intervals."
).format(self.__class__)
)
def estimate_confidence_intervals(self, estimate_value, confidence_level=None, method=None, **kwargs):
"""Find the confidence intervals corresponding to any estimator
By default, this is done with the help of bootstrapped confidence intervals
but can be overridden if the specific estimator implements other methods of estimating confidence intervals.
If the method provided is not bootstrap, this function calls the implementation of the specific estimator.
:param estimate_value: obtained estimate's value
:param method: Method for estimating confidence intervals.
:param confidence_level: The confidence level of the confidence intervals of the estimate.
:param kwargs: Other optional args to be passed to the CI method.
:returns: The obtained confidence interval.
"""
if method is None:
if self._confidence_intervals:
method = self._confidence_intervals # this is either True or methodname
else:
method = "default"
confidence_intervals = None
if confidence_level is None:
confidence_level = self.confidence_level
if method == "default" or method is True: # user has not provided any method
try:
confidence_intervals = self._estimate_confidence_intervals(confidence_level, method=method, **kwargs)
except NotImplementedError:
confidence_intervals = self._estimate_confidence_intervals_with_bootstrap(
estimate_value, confidence_level, **kwargs
)
else:
if method == "bootstrap":
confidence_intervals = self._estimate_confidence_intervals_with_bootstrap(
estimate_value, confidence_level, **kwargs
)
else:
confidence_intervals = self._estimate_confidence_intervals(confidence_level, method=method, **kwargs)
return confidence_intervals
def _estimate_std_error_with_bootstrap(self, num_simulations=None, sample_size_fraction=None):
"""Compute standard error using the bootstrap method. Standard error
and confidence intervals use the same parameter num_simulations for
the number of bootstrap simulations.
:param num_simulations: Number of bootstrapped samples.
:param sample_size_fraction: Fraction of data to be resampled.
:returns: Standard error of the obtained estimate.
"""
# Use existing params, if new user defined params are not present
if num_simulations is None:
num_simulations = self.num_simulations
if sample_size_fraction is None:
sample_size_fraction = self.sample_size_fraction
# Checking if bootstrap_estimates are already computed
if self._bootstrap_estimates is None:
self._bootstrap_estimates = self._generate_bootstrap_estimates(num_simulations, sample_size_fraction)
elif CausalEstimator.is_bootstrap_parameter_changed(self._bootstrap_estimates.params, locals()):
# Check if any parameter is changed from the previous std error estimate
self._bootstrap_estimates = self._generate_bootstrap_estimates(num_simulations, sample_size_fraction)
std_error = np.std(self._bootstrap_estimates.estimates)
return std_error
def _estimate_std_error(self, method=None, **kwargs):
"""
This method is to be overriden by the child classes, so that they
can run a standard error estimation method suited to the specific
causal estimator.
"""
raise NotImplementedError(
(
"This method for estimating standard errors is "
+ CausalEstimator.DEFAULT_NOTIMPLEMENTEDERROR_MSG
+ " Meanwhile, you can try the bootstrap method (method='bootstrap') to estimate standard errors."
).format(self.__class__)
)
def estimate_std_error(self, method=None, **kwargs):
"""Compute standard error of an obtained causal estimate.
:param method: Method for computing the standard error.
:param kwargs: Other optional parameters to be passed to the estimating method.
:returns: Standard error of the causal estimate.
"""
if method is None:
if self._confidence_intervals:
method = self._confidence_intervals
else:
method = "default"
std_error = None
if method == "default" or method is True: # user has not provided any method
try:
std_error = self._estimate_std_error(method, **kwargs)
except NotImplementedError:
std_error = self._estimate_std_error_with_bootstrap(**kwargs)
else:
if method == "bootstrap":
std_error = self._estimate_std_error_with_bootstrap(**kwargs)
else:
std_error = self._estimate_std_error(method, **kwargs)
return std_error
def _test_significance_with_bootstrap(self, estimate_value, num_null_simulations=None):
"""Test statistical significance of an estimate using the bootstrap method.
:param estimate_value: Obtained estimate's value
:param num_null_simulations: Number of simulations for the null hypothesis
:returns: p-value of the statistical significance test.
"""
# Use existing params, if new user defined params are not present
if num_null_simulations is None:
num_null_simulations = self.num_null_simulations
do_retest = self._bootstrap_null_estimates is None or CausalEstimator.is_bootstrap_parameter_changed(
self._bootstrap_null_estimates.params, locals()
)
if do_retest:
null_estimates = np.zeros(num_null_simulations)
for i in range(num_null_simulations):
new_outcome = np.random.permutation(self._outcome)
new_data = self._data.assign(dummy_outcome=new_outcome)
new_estimator = type(self)(
new_data,
self._target_estimand,
self._target_estimand.treatment_variable,
("dummy_outcome",),
test_significance=False,
evaluate_effect_strength=False,
confidence_intervals=False,
target_units=self._target_units,
effect_modifiers=self._effect_modifier_names,
**self.method_params,
)
new_effect = new_estimator.estimate_effect()
null_estimates[i] = new_effect.value
self._bootstrap_null_estimates = CausalEstimator.BootstrapEstimates(
null_estimates, {"num_null_simulations": num_null_simulations, "sample_size_fraction": 1}
)
# Processing the null hypothesis estimates
sorted_null_estimates = np.sort(self._bootstrap_null_estimates.estimates)
self.logger.debug("Null estimates: {0}".format(sorted_null_estimates))
median_estimate = sorted_null_estimates[int(num_null_simulations / 2)]
# Doing a two-sided test
if estimate_value > median_estimate:
# Being conservative with the p-value reported
estimate_index = np.searchsorted(sorted_null_estimates, estimate_value, side="left")
p_value = 1 - (estimate_index / num_null_simulations)
        else:  # estimate_value <= median_estimate
# Being conservative with the p-value reported
estimate_index = np.searchsorted(sorted_null_estimates, estimate_value, side="right")
p_value = estimate_index / num_null_simulations
# If the estimate_index is 0, it depends on the number of simulations
if p_value == 0:
p_value = (0, 1 / len(sorted_null_estimates)) # a tuple determining the range.
elif p_value == 1:
p_value = (1 - 1 / len(sorted_null_estimates), 1)
signif_dict = {"p_value": p_value}
return signif_dict
def _test_significance(self, estimate_value, method=None, **kwargs):
"""
This method is to be overriden by the child classes, so that they
can run a significance test suited to the specific
causal estimator.
"""
raise NotImplementedError(
(
"This method for testing statistical significance is "
+ CausalEstimator.DEFAULT_NOTIMPLEMENTEDERROR_MSG
+ " Meanwhile, you can try the bootstrap method (method='bootstrap') to test statistical significance."
).format(self.__class__)
)
def test_significance(self, estimate_value, method=None, **kwargs):
"""Test statistical significance of obtained estimate.
By default, uses resampling to create a non-parametric significance test.
        This is a general procedure; individual child estimators can implement different methods.
If the method name is different from "bootstrap", this function calls the
implementation of the child estimator.
:param self: object instance of class Estimator
:param estimate_value: obtained estimate's value
:param method: Method for checking statistical significance
:returns: p-value from the significance test
"""
if method is None:
if self._significance_test:
method = self._significance_test # this is either True or methodname
else:
method = "default"
signif_dict = None
if method == "default" or method is True: # user has not provided any method
try:
signif_dict = self._test_significance(estimate_value, method, **kwargs)
except NotImplementedError:
signif_dict = self._test_significance_with_bootstrap(estimate_value, **kwargs)
else:
if method == "bootstrap":
signif_dict = self._test_significance_with_bootstrap(estimate_value, **kwargs)
else:
signif_dict = self._test_significance(estimate_value, method, **kwargs)
return signif_dict
def evaluate_effect_strength(self, estimate):
fraction_effect_explained = self._evaluate_effect_strength(estimate, method="fraction-effect")
# Need to test r-squared before supporting
# effect_r_squared = self._evaluate_effect_strength(estimate, method="r-squared")
strength_dict = {
"fraction-effect": fraction_effect_explained
# 'r-squared': effect_r_squared
}
return strength_dict
def _evaluate_effect_strength(self, estimate, method="fraction-effect"):
supported_methods = ["fraction-effect"]
if method not in supported_methods:
raise NotImplementedError("This method is not supported for evaluating effect strength")
if method == "fraction-effect":
naive_obs_estimate = self.estimate_effect_naive()
            self.logger.debug("Estimate: %s, Naive estimate: %s", estimate.value, naive_obs_estimate.value)
fraction_effect_explained = estimate.value / naive_obs_estimate.value
return fraction_effect_explained
# elif method == "r-squared":
# outcome_mean = np.mean(self._outcome)
# total_variance = np.sum(np.square(self._outcome - outcome_mean))
# Assuming a linear model with one variable: the treatment
# Currently only works for continuous y
# causal_model = outcome_mean + estimate.value*self._treatment
# squared_residual = np.sum(np.square(self._outcome - causal_model))
# r_squared = 1 - (squared_residual/total_variance)
# return r_squared
else:
return None
def update_input(self, treatment_value, control_value, target_units):
self._control_value = control_value
self._treatment_value = treatment_value
self._target_units = target_units
@staticmethod
def is_bootstrap_parameter_changed(bootstrap_estimates_params, given_params):
"""Check whether parameters of the bootstrap have changed.
This is an efficiency method that checks if fresh resampling of the bootstrap samples is required.
Returns True if parameters have changed and resampling should be done again.
:param bootstrap_estimates_params: A dictionary of parameters for the current bootstrap samples
:param given_params: A dictionary of parameters passed by the user
:returns: A binary flag denoting whether the parameters are different.
"""
is_any_parameter_changed = False
for prm, val in bootstrap_estimates_params.items():
given_val = given_params.get(prm, None)
if given_val is not None and given_val != val:
is_any_parameter_changed = True
break
return is_any_parameter_changed
def target_units_tostr(self):
s = ""
if type(self._target_units) is str:
s += self._target_units
elif callable(self._target_units):
s += "Data subset defined by a function"
elif isinstance(self._target_units, pd.DataFrame):
s += "Data subset provided as a data frame"
return s
def signif_results_tostr(self, signif_results):
s = ""
pval = signif_results["p_value"]
if type(pval) is tuple:
s += "[{0}, {1}]".format(pval[0], pval[1])
else:
s += "{0}".format(pval)
return s
def estimate_effect(
treatment: Union[str, List[str]],
outcome: Union[str, List[str]],
identified_estimand: IdentifiedEstimand,
identifier_name: str,
method: CausalEstimator,
control_value: int = 0,
treatment_value: int = 1,
test_significance: Optional[bool] = None,
evaluate_effect_strength: bool = False,
confidence_intervals: bool = False,
target_units: str = "ate",
    effect_modifiers: Optional[List[str]] = None,
fit_estimator: bool = True,
method_params: Optional[Dict] = None,
):
"""Estimate the identified causal effect.
    Currently requires an explicit method name to be specified. Method names follow the convention of identification method followed by the specific estimation method: "[backdoor/iv].estimation_method_name". The following methods are supported.
* Propensity Score Matching: "backdoor.propensity_score_matching"
* Propensity Score Stratification: "backdoor.propensity_score_stratification"
* Propensity Score-based Inverse Weighting: "backdoor.propensity_score_weighting"
* Linear Regression: "backdoor.linear_regression"
* Generalized Linear Models (e.g., logistic regression): "backdoor.generalized_linear_model"
* Instrumental Variables: "iv.instrumental_variable"
* Regression Discontinuity: "iv.regression_discontinuity"
In addition, you can directly call any of the EconML estimation methods. The convention is "backdoor.econml.path-to-estimator-class". For example, for the double machine learning estimator ("DML" class) that is located inside "dml" module of EconML, you can use the method name, "backdoor.econml.dml.DML". CausalML estimators can also be called. See `this demo notebook <https://py-why.github.io/dowhy/example_notebooks/dowhy-conditional-treatment-effects.html>`_.
:param treatment: Name of the treatment
:param outcome: Name of the outcome
:param identified_estimand: a probability expression
that represents the effect to be estimated. Output of
CausalModel.identify_effect method
    :param identifier_name: name of the identification method (e.g., "backdoor") whose estimand is to be estimated.
    :param method: an instance of the CausalEstimator subclass implementing the estimation method to be used.
:param control_value: Value of the treatment in the control group, for effect estimation. If treatment is multi-variate, this can be a list.
:param treatment_value: Value of the treatment in the treated group, for effect estimation. If treatment is multi-variate, this can be a list.
:param test_significance: Binary flag on whether to additionally do a statistical signficance test for the estimate.
:param evaluate_effect_strength: (Experimental) Binary flag on whether to estimate the relative strength of the treatment's effect. This measure can be used to compare different treatments for the same outcome (by running this method with different treatments sequentially).
:param confidence_intervals: (Experimental) Binary flag indicating whether confidence intervals should be computed.
:param target_units: (Experimental) The units for which the treatment effect should be estimated. This can be of three types. (1) a string for common specifications of target units (namely, "ate", "att" and "atc"), (2) a lambda function that can be used as an index for the data (pandas DataFrame), or (3) a new DataFrame that contains values of the effect_modifiers and effect will be estimated only for this new data.
:param effect_modifiers: Names of effect modifier variables can be (optionally) specified here too, since they do not affect identification. If None, the effect_modifiers from the CausalModel are used.
:param fit_estimator: Boolean flag on whether to fit the estimator.
Setting it to False is useful to estimate the effect on new data using a previously fitted estimator.
:param method_params: Dictionary containing any method-specific parameters. These are passed directly to the estimating method. See the docs for each estimation method for allowed method-specific params.
:returns: An instance of the CausalEstimate class, containing the causal effect estimate
and other method-dependent information
"""
    if effect_modifiers is None:
        effect_modifiers = []
    treatment = parse_state(treatment)
outcome = parse_state(outcome)
causal_estimator_class = method.__class__
identified_estimand.set_identifier_method(identifier_name)
if identified_estimand.no_directed_path:
logger.warning("No directed path from {0} to {1}.".format(treatment, outcome))
return CausalEstimate(
0, identified_estimand, None, control_value=control_value, treatment_value=treatment_value
)
# Check if estimator's target estimand is identified
elif identified_estimand.estimands[identifier_name] is None:
logger.error("No valid identified estimand available.")
return CausalEstimate(None, None, None, control_value=control_value, treatment_value=treatment_value)
method.update_input(treatment_value, control_value, target_units)
estimate = method.estimate_effect()
# Store parameters inside estimate object for refutation methods
# TODO: This add_params needs to move to the estimator class
# inside estimate_effect and estimate_conditional_effect
estimate.add_params(
estimand_type=identified_estimand.estimand_type,
estimator_class=causal_estimator_class,
test_significance=test_significance,
evaluate_effect_strength=evaluate_effect_strength,
confidence_intervals=confidence_intervals,
target_units=target_units,
effect_modifiers=effect_modifiers,
method_params=method_params,
)
return estimate
class CausalEstimate:
"""Class for the estimate object that every causal estimator returns"""
def __init__(
self,
estimate,
target_estimand,
realized_estimand_expr,
control_value,
treatment_value,
conditional_estimates=None,
**kwargs,
):
self.value = estimate
self.target_estimand = target_estimand
self.realized_estimand_expr = realized_estimand_expr
self.control_value = control_value
self.treatment_value = treatment_value
self.conditional_estimates = conditional_estimates
self.params = kwargs
if self.params is not None:
for key, value in self.params.items():
setattr(self, key, value)
self.effect_strength = None
def add_estimator(self, estimator_instance):
self.estimator = estimator_instance
def add_effect_strength(self, strength_dict):
self.effect_strength = strength_dict
def add_params(self, **kwargs):
self.params.update(kwargs)
def get_confidence_intervals(self, confidence_level=None, method=None, **kwargs):
"""Get confidence intervals of the obtained estimate.
By default, this is done with the help of bootstrapped confidence intervals
but can be overridden if the specific estimator implements other methods of estimating confidence intervals.
If the method provided is not bootstrap, this function calls the implementation of the specific estimator.
:param method: Method for estimating confidence intervals.
:param confidence_level: The confidence level of the confidence intervals of the estimate.
:param kwargs: Other optional args to be passed to the CI method.
:returns: The obtained confidence interval.
"""
confidence_intervals = self.estimator.estimate_confidence_intervals(
estimate_value=self.value, confidence_level=confidence_level, method=method, **kwargs
)
return confidence_intervals
def get_standard_error(self, method=None, **kwargs):
"""Get standard error of the obtained estimate.
By default, this is done with the help of bootstrapped standard errors
but can be overridden if the specific estimator implements other methods of estimating standard error.
If the method provided is not bootstrap, this function calls the implementation of the specific estimator.
:param method: Method for computing the standard error.
:param kwargs: Other optional parameters to be passed to the estimating method.
:returns: Standard error of the causal estimate.
"""
std_error = self.estimator.estimate_std_error(method=method, **kwargs)
return std_error
def test_stat_significance(self, method=None, **kwargs):
"""Test statistical significance of the estimate obtained.
By default, uses resampling to create a non-parametric significance test.
Individual child estimators can implement different methods.
If the method name is different from "bootstrap", this function calls the
implementation of the child estimator.
:param method: Method for checking statistical significance
:param kwargs: Other optional parameters to be passed to the estimating method.
:returns: p-value from the significance test
"""
signif_results = self.estimator.test_significance(self.value, method=method, **kwargs)
return {"p_value": signif_results["p_value"]}
def estimate_conditional_effects(
self, effect_modifiers=None, num_quantiles=CausalEstimator.NUM_QUANTILES_TO_DISCRETIZE_CONT_COLS
):
"""Estimate treatment effect conditioned on given variables.
If a numeric effect modifier is provided, it is discretized into quantile bins. If you would like a custom discretization, you can do so yourself: create a new column containing the discretized effect modifier and then include that column's name in the effect_modifier_names argument.
:param effect_modifiers: Names of effect modifier variables over which the conditional effects will be estimated. If not provided, defaults to the effect modifiers specified during creation of the CausalEstimator object.
:param num_quantiles: The number of quantiles into which a numeric effect modifier variable is discretized. Does not affect any categorical effect modifiers.
:returns: A (multi-index) dataframe that provides separate effects for each value of the (discretized) effect modifiers.
"""
return self.estimator._estimate_conditional_effects(
self.estimator._estimate_effect_fn, effect_modifiers, num_quantiles
)
def interpret(self, method_name=None, **kwargs):
"""Interpret the causal estimate.
:param method_name: Method used (string) or a list of methods. If None, then the default for the specific estimator is used.
        :param kwargs: Optional parameters that are directly passed to the interpreter method.
:returns: None
"""
if method_name is None:
method_name = self.estimator.interpret_method
method_name_arr = parse_state(method_name)
for method in method_name_arr:
interpreter = interpreters.get_class_object(method)
interpreter(self, **kwargs).interpret()
def __str__(self):
s = "*** Causal Estimate ***\n"
# No estimand was identified (identification failed)
if self.target_estimand is None:
return "Estimation failed! No relevant identified estimand available for this estimation method."
s += "\n## Identified estimand\n{0}".format(self.target_estimand.__str__(only_target_estimand=True))
s += "\n## Realized estimand\n{0}".format(self.realized_estimand_expr)
if hasattr(self, "estimator"):
s += "\nTarget units: {0}\n".format(self.estimator.target_units_tostr())
s += "\n## Estimate\n"
s += "Mean value: {0}\n".format(self.value)
s += ""
if hasattr(self, "cate_estimates"):
s += "Effect estimates: {0}\n".format(self.cate_estimates)
if hasattr(self, "estimator"):
if self.estimator._significance_test:
s += "p-value: {0}\n".format(self.estimator.signif_results_tostr(self.test_stat_significance()))
if self.estimator._confidence_intervals:
s += "{0}% confidence interval: {1}\n".format(
100 * self.estimator.confidence_level, self.get_confidence_intervals()
)
if self.conditional_estimates is not None:
s += "### Conditional Estimates\n"
s += str(self.conditional_estimates)
if self.effect_strength is not None:
s += "\n## Effect Strength\n"
s += "Change in outcome attributable to treatment: {}\n".format(self.effect_strength["fraction-effect"])
# s += "Variance in outcome explained by treatment: {}\n".format(self.effect_strength["r-squared"])
return s
class RealizedEstimand(object):
def __init__(self, identified_estimand, estimator_name):
self.treatment_variable = identified_estimand.treatment_variable
self.outcome_variable = identified_estimand.outcome_variable
self.backdoor_variables = identified_estimand.get_backdoor_variables()
self.instrumental_variables = identified_estimand.instrumental_variables
self.estimand_type = identified_estimand.estimand_type
self.estimand_expression = None
self.assumptions = None
self.estimator_name = estimator_name
def update_assumptions(self, estimator_assumptions):
self.assumptions = estimator_assumptions
def update_estimand_expression(self, estimand_expression):
self.estimand_expression = estimand_expression
def __str__(self):
s = "Realized estimand: {0}\n".format(self.estimator_name)
s += "Realized estimand type: {0}\n".format(self.estimand_type)
s += "Estimand expression:\n{0}\n".format(sp.pretty(self.estimand_expression))
j = 1
for ass_name, ass_str in self.assumptions.items():
s += "Estimand assumption {0}, {1}: {2}\n".format(j, ass_name, ass_str)
j += 1
return s
| import copy
import logging
from collections import namedtuple
from typing import Dict, List, Optional, Union
import numpy as np
import pandas as pd
import sympy as sp
from sklearn.utils import resample
import dowhy.interpreters as interpreters
from dowhy.causal_identifier.identified_estimand import IdentifiedEstimand
from dowhy.utils.api import parse_state
logger = logging.getLogger(__name__)
class CausalEstimator:
"""Base class for an estimator of causal effect.
Subclasses implement different estimation methods. All estimation methods are in the package "dowhy.causal_estimators"
"""
# The default number of simulations for statistical testing
DEFAULT_NUMBER_OF_SIMULATIONS_STAT_TEST = 1000
# The default number of simulations to obtain confidence intervals
DEFAULT_NUMBER_OF_SIMULATIONS_CI = 100
# The portion of the total size that should be taken each time to find the confidence intervals
# 1 is the recommended value
# https://ocw.mit.edu/courses/mathematics/18-05-introduction-to-probability-and-statistics-spring-2014/readings/MIT18_05S14_Reading24.pdf
# https://projecteuclid.org/download/pdf_1/euclid.ss/1032280214
DEFAULT_SAMPLE_SIZE_FRACTION = 1
# The default Confidence Level
DEFAULT_CONFIDENCE_LEVEL = 0.95
# Number of quantiles to discretize continuous columns, for applying groupby
NUM_QUANTILES_TO_DISCRETIZE_CONT_COLS = 5
# Prefix to add to temporary categorical variables created after discretization
TEMP_CAT_COLUMN_PREFIX = "__categorical__"
    DEFAULT_NOTIMPLEMENTEDERROR_MSG = "not yet implemented for {0}. If you would like this to be implemented in the next version, please raise an issue at https://github.com/microsoft/dowhy/issues"
BootstrapEstimates = namedtuple("BootstrapEstimates", ["estimates", "params"])
DEFAULT_INTERPRET_METHOD = ["textual_effect_interpreter"]
# std args to be removed from locals() before being passed to args_dict
_STD_INIT_ARGS = ("self", "__class__", "args", "kwargs")
def __init__(
self,
identified_estimand: IdentifiedEstimand,
test_significance: bool = False,
evaluate_effect_strength: bool = False,
confidence_intervals: bool = False,
num_null_simulations: int = DEFAULT_NUMBER_OF_SIMULATIONS_STAT_TEST,
num_simulations: int = DEFAULT_NUMBER_OF_SIMULATIONS_CI,
sample_size_fraction: int = DEFAULT_SAMPLE_SIZE_FRACTION,
confidence_level: float = DEFAULT_CONFIDENCE_LEVEL,
need_conditional_estimates: Union[bool, str] = "auto",
num_quantiles_to_discretize_cont_cols: int = NUM_QUANTILES_TO_DISCRETIZE_CONT_COLS,
**_,
):
"""Initializes an estimator with data and names of relevant variables.
This method is called from the constructors of its child classes.
:param identified_estimand: probability expression
representing the target identified estimand to estimate.
:param test_significance: Binary flag or a string indicating whether to test significance and by which method. All estimators support test_significance="bootstrap" that estimates a p-value for the obtained estimate using the bootstrap method. Individual estimators can override this to support custom testing methods. The bootstrap method supports an optional parameter, num_null_simulations. If False, no testing is done. If True, significance of the estimate is tested using the custom method if available, otherwise by bootstrap.
:param evaluate_effect_strength: (Experimental) whether to evaluate the strength of effect
:param confidence_intervals: Binary flag or a string indicating whether the confidence intervals should be computed and which method should be used. All methods support estimation of confidence intervals using the bootstrap method by using the parameter confidence_intervals="bootstrap". The bootstrap method takes in two arguments (num_simulations and sample_size_fraction) that can be optionally specified in the params dictionary. Estimators may also override this to implement their own confidence interval method. If this parameter is False, no confidence intervals are computed. If True, confidence intervals are computed by the estimator's specific method if available, otherwise through bootstrap
:param num_null_simulations: The number of simulations for testing the
statistical significance of the estimator
:param num_simulations: The number of simulations for finding the
confidence interval (and/or standard error) for a estimate
:param sample_size_fraction: The size of the sample for the bootstrap
estimator
:param confidence_level: The confidence level of the confidence
interval estimate
:param need_conditional_estimates: Boolean flag indicating whether
conditional estimates should be computed. Defaults to True if
there are effect modifiers in the graph
:param num_quantiles_to_discretize_cont_cols: The number of quantiles
into which a numeric effect modifier is split, to enable
estimation of conditional treatment effect over it.
:param kwargs: (optional) Additional estimator-specific parameters
:returns: an instance of the estimator class.
"""
self._target_estimand = identified_estimand
self._significance_test = test_significance
self._effect_strength_eval = evaluate_effect_strength
self._confidence_intervals = confidence_intervals
# Setting the default interpret method
self.interpret_method = CausalEstimator.DEFAULT_INTERPRET_METHOD
self.logger = logging.getLogger(__name__)
# Check if some parameters were set, otherwise set to default values
self.num_null_simulations = num_null_simulations
self.num_simulations = num_simulations
self.sample_size_fraction = sample_size_fraction
self.confidence_level = confidence_level
self.num_quantiles_to_discretize_cont_cols = num_quantiles_to_discretize_cont_cols
# Estimate conditional estimates by default
self.need_conditional_estimates = need_conditional_estimates
self._bootstrap_estimates = None
self._bootstrap_null_estimates = None
def _set_data(self, data: pd.DataFrame, treatment_name: List[str], outcome_name: List[str]):
"""Sets the data for the estimator
:param data: data frame containing the data
:param treatment_name: name of the treatment variable
:param outcome_name: name of the outcome variable
"""
self._data = data
self._treatment_name = treatment_name
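        # Note: assumes a single outcome variable; only the first name in outcome_name is used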
self._outcome_name = outcome_name[0]
self._treatment = self._data[self._treatment_name]
self._outcome = self._data[self._outcome_name]
def _set_effect_modifiers(self, effect_modifier_names: Optional[List[str]] = None):
"""Sets the effect modifiers for the estimator
Modifies need_conditional_estimates accordingly to effect modifiers value
:param effect_modifiers: Variables on which to compute separate
effects, or return a heterogeneous effect function. Not all
methods support this currently.
"""
self._effect_modifiers = effect_modifier_names
if effect_modifier_names is not None:
self._effect_modifier_names = [cname for cname in effect_modifier_names if cname in self._data.columns]
if len(self._effect_modifier_names) > 0:
self._effect_modifiers = self._data[self._effect_modifier_names]
self._effect_modifiers = pd.get_dummies(self._effect_modifiers, drop_first=True)
self.logger.debug("Effect modifiers: " + ",".join(self._effect_modifier_names))
else:
self._effect_modifier_names = []
else:
self._effect_modifier_names = []
self.need_conditional_estimates = (
self.need_conditional_estimates
if self.need_conditional_estimates != "auto"
else (self._effect_modifier_names and len(self._effect_modifier_names) > 0)
)
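        # e.g., with need_conditional_estimates="auto", conditional estimates are computed
        # whenever at least one effect modifier column is present, and skipped otherwise.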
def _set_identified_estimand(self, new_identified_estimand):
"""Method used internally to change the target estimand (required by some refuters)
:param new_identified_estimand: The new target_estimand to use
"""
self._target_estimand = new_identified_estimand
def get_new_estimator_object(
self,
identified_estimand,
test_significance=False,
evaluate_effect_strength=False,
confidence_intervals=None,
):
"""Create a new estimator of the same type as the one passed in the estimate argument.
Creates a new object with the identified_estimand
:param identified_estimand: IdentifiedEstimand
An instance of the identified estimand class that provides the information with
        respect to which causal pathways are employed when the treatment affects the outcome
:returns: A new instance of the same estimator class that had generated the given estimate.
"""
new_estimator = copy.deepcopy(self)
new_estimator._target_estimand = identified_estimand
new_estimator._test_significance = test_significance
new_estimator._evaluate_effect_strength = evaluate_effect_strength
new_estimator._confidence_intervals = (
self._confidence_intervals if confidence_intervals is None else confidence_intervals
)
return new_estimator
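    # Example (hypothetical names): refuters and the bootstrap helpers below clone a
    # fitted estimator and re-fit it on perturbed data, without re-specifying
    # constructor arguments:
    #   new_est = estimator.get_new_estimator_object(identified_estimand)
    #   new_est.fit(new_data, treatment_name, outcome_name)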
def estimate_effect_naive(self):
# TODO Only works for binary treatment
df_withtreatment = self._data.loc[self._data[self._treatment_name] == 1]
df_notreatment = self._data.loc[self._data[self._treatment_name] == 0]
est = np.mean(df_withtreatment[self._outcome_name]) - np.mean(df_notreatment[self._outcome_name])
return CausalEstimate(est, None, None, control_value=0, treatment_value=1)
def _estimate_effect_fn(self, data_df):
"""Function used in conditional effect estimation. This function is to be overridden by each child estimator.
The overridden function should take in a dataframe as input and return the estimate for that data.
"""
raise NotImplementedError(
("Conditional treatment effects are " + CausalEstimator.DEFAULT_NOTIMPLEMENTEDERROR_MSG).format(
self.__class__
)
)
def _estimate_conditional_effects(self, estimate_effect_fn, effect_modifier_names=None, num_quantiles=None):
"""Estimate conditional treatment effects. Common method for all estimators that utilizes a specific estimate_effect_fn implemented by each child estimator.
If a numeric effect modifier is provided, it is discretized into quantile bins. If you would like a custom discretization, you can do so yourself: create a new column containing the discretized effect modifier and then include that column's name in the effect_modifier_names argument.
:param estimate_effect_fn: Function that has a single parameter (a data frame) and returns the treatment effect estimate on that data.
:param effect_modifier_names: Names of effect modifier variables over which the conditional effects will be estimated. If not provided, defaults to the effect modifiers specified during creation of the CausalEstimator object.
:param num_quantiles: The number of quantiles into which a numeric effect modifier variable is discretized. Does not affect any categorical effect modifiers.
:returns: A (multi-index) dataframe that provides separate effects for each value of the (discretized) effect modifiers.
"""
# Defaulting to class default values if parameters are not provided
if effect_modifier_names is None:
effect_modifier_names = self._effect_modifier_names
if num_quantiles is None:
num_quantiles = self.num_quantiles_to_discretize_cont_cols
# Checking that there is at least one effect modifier
if not effect_modifier_names:
raise ValueError("At least one effect modifier should be specified to compute conditional effects.")
# Making sure that effect_modifier_names is a list
effect_modifier_names = parse_state(effect_modifier_names)
if not all(em in self._effect_modifier_names for em in effect_modifier_names):
            self.logger.warning(
"At least one of the provided effect modifiers was not included while fitting the estimator. You may get incorrect results. To resolve, fit the estimator again by providing the updated effect modifiers in estimate_effect()."
)
# Making a copy since we are going to be changing effect modifier names
effect_modifier_names = effect_modifier_names.copy()
prefix = CausalEstimator.TEMP_CAT_COLUMN_PREFIX
# For every numeric effect modifier, adding a temp categorical column
for i in range(len(effect_modifier_names)):
em = effect_modifier_names[i]
if pd.api.types.is_numeric_dtype(self._data[em].dtypes):
self._data[prefix + str(em)] = pd.qcut(self._data[em], num_quantiles, duplicates="drop")
effect_modifier_names[i] = prefix + str(em)
# Grouping by effect modifiers and computing effect separately
by_effect_mods = self._data.groupby(effect_modifier_names)
cond_est_fn = lambda x: self._do(self._treatment_value, x) - self._do(self._control_value, x)
conditional_estimates = by_effect_mods.apply(estimate_effect_fn)
# Deleting the temporary categorical columns
for em in effect_modifier_names:
if em.startswith(prefix):
self._data.pop(em)
return conditional_estimates
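    # For example (hypothetical column name): with effect_modifier_names=["X0"] and
    # num_quantiles=5, a numeric column X0 is discretized into 5 quantile bins and the
    # returned dataframe contains one effect estimate per bin.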
def _do(self, x, data_df=None):
raise NotImplementedError(
("Do-operator is " + CausalEstimator.DEFAULT_NOTIMPLEMENTEDERROR_MSG).format(self.__class__)
)
def do(self, x, data_df=None):
"""Method that implements the do-operator.
Given a value x for the treatment, returns the expected value of the outcome when the treatment is intervened to a value x.
:param x: Value of the treatment
:param data_df: Data on which the do-operator is to be applied.
:returns: Value of the outcome when treatment is intervened/set to x.
"""
est = self._do(x, data_df)
return est
def construct_symbolic_estimator(self, estimand):
raise NotImplementedError(("Symbolic estimator string is ").format(self.__class__))
def _generate_bootstrap_estimates(self, num_bootstrap_simulations, sample_size_fraction):
"""Helper function to generate causal estimates over bootstrapped samples.
:param num_bootstrap_simulations: Number of simulations for the bootstrap method.
:param sample_size_fraction: Fraction of the dataset to be resampled.
:returns: A collections.namedtuple containing a list of bootstrapped estimates and a dictionary containing parameters used for the bootstrap.
"""
# The array that stores the results of all estimations
simulation_results = np.zeros(num_bootstrap_simulations)
        # Find the sample size as a fraction of the population size
sample_size = int(sample_size_fraction * len(self._data))
if sample_size > len(self._data):
self.logger.warning("WARN: The sample size is greater than the data being sampled")
self.logger.info("INFO: The sample size: {}".format(sample_size))
self.logger.info("INFO: The number of simulations: {}".format(num_bootstrap_simulations))
# Perform the set number of simulations
for index in range(num_bootstrap_simulations):
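            # sklearn's resample draws with replacement by default, yielding a standard bootstrap sample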
new_data = resample(self._data, n_samples=sample_size)
new_estimator = self.get_new_estimator_object(
self._target_estimand,
test_significance=False,
evaluate_effect_strength=False,
confidence_intervals=False,
)
new_estimator.fit(
new_data,
self._target_estimand.treatment_variable,
self._target_estimand.outcome_variable,
effect_modifier_names=self._effect_modifier_names,
)
new_effect = new_estimator.estimate_effect(
treatment_value=self._treatment_value,
control_value=self._control_value,
target_units=self._target_units,
)
simulation_results[index] = new_effect.value
estimates = CausalEstimator.BootstrapEstimates(
simulation_results,
{"num_simulations": num_bootstrap_simulations, "sample_size_fraction": sample_size_fraction},
)
return estimates
def _estimate_confidence_intervals_with_bootstrap(
self, estimate_value, confidence_level=None, num_simulations=None, sample_size_fraction=None
):
"""
Method to compute confidence interval using bootstrapped sampling.
:param estimate_value: obtained estimate's value
:param confidence_level: The level for which to compute CI (e.g., 95% confidence level translates to confidence_level=0.95)
:param num_simulations: The number of simulations to be performed to get the bootstrap confidence intervals.
:param sample_size_fraction: The fraction of the dataset to be resampled.
:returns: confidence interval at the specified level.
For more details on bootstrap or resampling statistics, refer to the following links:
https://ocw.mit.edu/courses/mathematics/18-05-introduction-to-probability-and-statistics-spring-2014/readings/MIT18_05S14_Reading24.pdf
https://projecteuclid.org/download/pdf_1/euclid.ss/1032280214
"""
# Using class default parameters if not specified
if num_simulations is None:
num_simulations = self.num_simulations
if sample_size_fraction is None:
sample_size_fraction = self.sample_size_fraction
# Checking if bootstrap_estimates are already computed
if self._bootstrap_estimates is None:
self._bootstrap_estimates = self._generate_bootstrap_estimates(num_simulations, sample_size_fraction)
elif CausalEstimator.is_bootstrap_parameter_changed(self._bootstrap_estimates.params, locals()):
# Checked if any parameter is changed from the previous std error estimate
self._bootstrap_estimates = self._generate_bootstrap_estimates(num_simulations, sample_size_fraction)
# Now use the data obtained from the simulations to get the value of the confidence estimates
bootstrap_estimates = self._bootstrap_estimates.estimates
# Get the variations of each bootstrap estimate and sort
bootstrap_variations = [bootstrap_estimate - estimate_value for bootstrap_estimate in bootstrap_estimates]
sorted_bootstrap_variations = np.sort(bootstrap_variations)
        # Now we take the (1 - p)-th and the p-th quantiles of the variations, where p is the chosen confidence level
upper_bound_index = int((1 - confidence_level) * len(sorted_bootstrap_variations))
lower_bound_index = int(confidence_level * len(sorted_bootstrap_variations))
# Get the lower and upper bounds by subtracting the variations from the estimate
lower_bound = estimate_value - sorted_bootstrap_variations[lower_bound_index]
upper_bound = estimate_value - sorted_bootstrap_variations[upper_bound_index]
return lower_bound, upper_bound
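    # Note: the bounds above follow a pivot-style ("basic") bootstrap interval:
    # lower = estimate - q_p(variations), upper = estimate - q_{1-p}(variations),
    # where q_p is the p-th empirical quantile and p is the confidence level.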
def _estimate_confidence_intervals(self, confidence_level=None, method=None, **kwargs):
"""
        This method is to be overridden by the child classes, so that they
can run a confidence interval estimation method suited to the specific
causal estimator.
"""
raise NotImplementedError(
(
"This method for estimating confidence intervals is "
+ CausalEstimator.DEFAULT_NOTIMPLEMENTEDERROR_MSG
+ " Meanwhile, you can try the bootstrap method (method='bootstrap') to estimate confidence intervals."
).format(self.__class__)
)
def estimate_confidence_intervals(self, estimate_value, confidence_level=None, method=None, **kwargs):
"""Find the confidence intervals corresponding to any estimator
By default, this is done with the help of bootstrapped confidence intervals
but can be overridden if the specific estimator implements other methods of estimating confidence intervals.
If the method provided is not bootstrap, this function calls the implementation of the specific estimator.
:param estimate_value: obtained estimate's value
:param method: Method for estimating confidence intervals.
:param confidence_level: The confidence level of the confidence intervals of the estimate.
:param kwargs: Other optional args to be passed to the CI method.
:returns: The obtained confidence interval.
"""
if method is None:
if self._confidence_intervals:
method = self._confidence_intervals # this is either True or methodname
else:
method = "default"
confidence_intervals = None
if confidence_level is None:
confidence_level = self.confidence_level
if method == "default" or method is True: # user has not provided any method
try:
confidence_intervals = self._estimate_confidence_intervals(confidence_level, method=method, **kwargs)
except NotImplementedError:
confidence_intervals = self._estimate_confidence_intervals_with_bootstrap(
estimate_value, confidence_level, **kwargs
)
else:
if method == "bootstrap":
confidence_intervals = self._estimate_confidence_intervals_with_bootstrap(
estimate_value, confidence_level, **kwargs
)
else:
confidence_intervals = self._estimate_confidence_intervals(confidence_level, method=method, **kwargs)
return confidence_intervals
def _estimate_std_error_with_bootstrap(self, num_simulations=None, sample_size_fraction=None):
"""Compute standard error using the bootstrap method. Standard error
and confidence intervals use the same parameter num_simulations for
the number of bootstrap simulations.
:param num_simulations: Number of bootstrapped samples.
:param sample_size_fraction: Fraction of data to be resampled.
:returns: Standard error of the obtained estimate.
"""
# Use existing params, if new user defined params are not present
if num_simulations is None:
num_simulations = self.num_simulations
if sample_size_fraction is None:
sample_size_fraction = self.sample_size_fraction
# Checking if bootstrap_estimates are already computed
if self._bootstrap_estimates is None:
self._bootstrap_estimates = self._generate_bootstrap_estimates(num_simulations, sample_size_fraction)
elif CausalEstimator.is_bootstrap_parameter_changed(self._bootstrap_estimates.params, locals()):
# Check if any parameter is changed from the previous std error estimate
self._bootstrap_estimates = self._generate_bootstrap_estimates(num_simulations, sample_size_fraction)
std_error = np.std(self._bootstrap_estimates.estimates)
return std_error
def _estimate_std_error(self, method=None, **kwargs):
"""
        This method is to be overridden by the child classes, so that they
can run a standard error estimation method suited to the specific
causal estimator.
"""
raise NotImplementedError(
(
"This method for estimating standard errors is "
+ CausalEstimator.DEFAULT_NOTIMPLEMENTEDERROR_MSG
+ " Meanwhile, you can try the bootstrap method (method='bootstrap') to estimate standard errors."
).format(self.__class__)
)
def estimate_std_error(self, method=None, **kwargs):
"""Compute standard error of an obtained causal estimate.
:param method: Method for computing the standard error.
:param kwargs: Other optional parameters to be passed to the estimating method.
:returns: Standard error of the causal estimate.
"""
if method is None:
if self._confidence_intervals:
method = self._confidence_intervals
else:
method = "default"
std_error = None
if method == "default" or method is True: # user has not provided any method
try:
std_error = self._estimate_std_error(method, **kwargs)
except NotImplementedError:
std_error = self._estimate_std_error_with_bootstrap(**kwargs)
else:
if method == "bootstrap":
std_error = self._estimate_std_error_with_bootstrap(**kwargs)
else:
std_error = self._estimate_std_error(method, **kwargs)
return std_error
def _test_significance_with_bootstrap(self, estimate_value, num_null_simulations=None):
"""Test statistical significance of an estimate using the bootstrap method.
:param estimate_value: Obtained estimate's value
:param num_null_simulations: Number of simulations for the null hypothesis
:returns: p-value of the statistical significance test.
"""
# Use existing params, if new user defined params are not present
if num_null_simulations is None:
num_null_simulations = self.num_null_simulations
do_retest = self._bootstrap_null_estimates is None or CausalEstimator.is_bootstrap_parameter_changed(
self._bootstrap_null_estimates.params, locals()
)
if do_retest:
null_estimates = np.zeros(num_null_simulations)
for i in range(num_null_simulations):
new_outcome = np.random.permutation(self._outcome)
new_data = self._data.assign(dummy_outcome=new_outcome)
# self._outcome = self._data["dummy_outcome"]
new_estimator = self.get_new_estimator_object(
self._target_estimand,
test_significance=False,
evaluate_effect_strength=False,
confidence_intervals=False,
)
new_estimator.fit(
data=new_data,
treatment_name=self._target_estimand.treatment_variable,
outcome_name=("dummy_outcome",),
effect_modifier_names=self._effect_modifier_names,
)
new_effect = new_estimator.estimate_effect(
target_units=self._target_units,
)
null_estimates[i] = new_effect.value
self._bootstrap_null_estimates = CausalEstimator.BootstrapEstimates(
null_estimates, {"num_null_simulations": num_null_simulations, "sample_size_fraction": 1}
)
# Processing the null hypothesis estimates
sorted_null_estimates = np.sort(self._bootstrap_null_estimates.estimates)
self.logger.debug("Null estimates: {0}".format(sorted_null_estimates))
median_estimate = sorted_null_estimates[int(num_null_simulations / 2)]
# Doing a two-sided test
if estimate_value > median_estimate:
# Being conservative with the p-value reported
estimate_index = np.searchsorted(sorted_null_estimates, estimate_value, side="left")
p_value = 1 - (estimate_index / num_null_simulations)
        else:  # estimate_value <= median_estimate
# Being conservative with the p-value reported
estimate_index = np.searchsorted(sorted_null_estimates, estimate_value, side="right")
p_value = estimate_index / num_null_simulations
        # If the p-value is exactly 0 or 1, report a range bounded by the resolution of the simulations
if p_value == 0:
p_value = (0, 1 / len(sorted_null_estimates)) # a tuple determining the range.
elif p_value == 1:
p_value = (1 - 1 / len(sorted_null_estimates), 1)
signif_dict = {"p_value": p_value}
return signif_dict
def _test_significance(self, estimate_value, method=None, **kwargs):
"""
        This method is to be overridden by the child classes, so that they
can run a significance test suited to the specific
causal estimator.
"""
raise NotImplementedError(
(
"This method for testing statistical significance is "
+ CausalEstimator.DEFAULT_NOTIMPLEMENTEDERROR_MSG
+ " Meanwhile, you can try the bootstrap method (method='bootstrap') to test statistical significance."
).format(self.__class__)
)
def test_significance(self, estimate_value, method=None, **kwargs):
"""Test statistical significance of obtained estimate.
By default, uses resampling to create a non-parametric significance test.
        This is a general procedure; individual child estimators can implement different methods.
If the method name is different from "bootstrap", this function calls the
implementation of the child estimator.
:param self: object instance of class Estimator
:param estimate_value: obtained estimate's value
:param method: Method for checking statistical significance
:returns: p-value from the significance test
"""
if method is None:
if self._significance_test:
method = self._significance_test # this is either True or methodname
else:
method = "default"
signif_dict = None
if method == "default" or method is True: # user has not provided any method
try:
signif_dict = self._test_significance(estimate_value, method, **kwargs)
except NotImplementedError:
signif_dict = self._test_significance_with_bootstrap(estimate_value, **kwargs)
else:
if method == "bootstrap":
signif_dict = self._test_significance_with_bootstrap(estimate_value, **kwargs)
else:
signif_dict = self._test_significance(estimate_value, method, **kwargs)
return signif_dict
def evaluate_effect_strength(self, estimate):
fraction_effect_explained = self._evaluate_effect_strength(estimate, method="fraction-effect")
# Need to test r-squared before supporting
# effect_r_squared = self._evaluate_effect_strength(estimate, method="r-squared")
strength_dict = {
"fraction-effect": fraction_effect_explained
# 'r-squared': effect_r_squared
}
return strength_dict
def _evaluate_effect_strength(self, estimate, method="fraction-effect"):
supported_methods = ["fraction-effect"]
if method not in supported_methods:
raise NotImplementedError("This method is not supported for evaluating effect strength")
if method == "fraction-effect":
naive_obs_estimate = self.estimate_effect_naive()
            self.logger.debug("Estimate: %s, Naive estimate: %s", estimate.value, naive_obs_estimate.value)
fraction_effect_explained = estimate.value / naive_obs_estimate.value
return fraction_effect_explained
# elif method == "r-squared":
# outcome_mean = np.mean(self._outcome)
# total_variance = np.sum(np.square(self._outcome - outcome_mean))
# Assuming a linear model with one variable: the treatment
# Currently only works for continuous y
# causal_model = outcome_mean + estimate.value*self._treatment
# squared_residual = np.sum(np.square(self._outcome - causal_model))
# r_squared = 1 - (squared_residual/total_variance)
# return r_squared
else:
return None
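    # e.g., an adjusted estimate of 5 against a naive (unadjusted) difference of 10
    # yields fraction-effect = 0.5, i.e., the causal effect accounts for half of the
    # observed difference in outcomes.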
def update_input(self, treatment_value, control_value, target_units):
self._control_value = control_value
self._treatment_value = treatment_value
self._target_units = target_units
@staticmethod
def is_bootstrap_parameter_changed(bootstrap_estimates_params, given_params):
"""Check whether parameters of the bootstrap have changed.
This is an efficiency method that checks if fresh resampling of the bootstrap samples is required.
Returns True if parameters have changed and resampling should be done again.
:param bootstrap_estimates_params: A dictionary of parameters for the current bootstrap samples
:param given_params: A dictionary of parameters passed by the user
:returns: A binary flag denoting whether the parameters are different.
"""
is_any_parameter_changed = False
for prm, val in bootstrap_estimates_params.items():
given_val = given_params.get(prm, None)
if given_val is not None and given_val != val:
is_any_parameter_changed = True
break
return is_any_parameter_changed
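    # e.g., is_bootstrap_parameter_changed({"num_simulations": 100}, {"num_simulations": 200})
    # returns True; a parameter that is None in given_params is treated as unchanged.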
def target_units_tostr(self):
s = ""
if type(self._target_units) is str:
s += self._target_units
elif callable(self._target_units):
s += "Data subset defined by a function"
elif isinstance(self._target_units, pd.DataFrame):
s += "Data subset provided as a data frame"
return s
def signif_results_tostr(self, signif_results):
s = ""
pval = signif_results["p_value"]
if type(pval) is tuple:
s += "[{0}, {1}]".format(pval[0], pval[1])
else:
s += "{0}".format(pval)
return s
def estimate_effect(
data: pd.DataFrame,
treatment: Union[str, List[str]],
outcome: Union[str, List[str]],
identifier_name: str,
estimator: CausalEstimator,
control_value: int = 0,
treatment_value: int = 1,
target_units: str = "ate",
    effect_modifiers: Optional[List[str]] = None,
fit_estimator: bool = True,
method_params: Optional[Dict] = None,
):
"""Estimate the identified causal effect.
    You can also directly call any of the EconML estimation methods. The convention is "backdoor.econml.path-to-estimator-class". For example, for the double machine learning estimator ("DML" class) located inside the "dml" module of EconML, you can use the method name "backdoor.econml.dml.DML". CausalML estimators can also be called. See `this demo notebook <https://py-why.github.io/dowhy/example_notebooks/dowhy-conditional-treatment-effects.html>`_.
:param treatment: Name of the treatment
:param outcome: Name of the outcome
    :param data: data frame containing the data
    :param identifier_name: name of the identification method (e.g., "backdoor") whose
        identified estimand is to be estimated
:param estimator: Instance of a CausalEstimator to use
:param control_value: Value of the treatment in the control group, for effect estimation. If treatment is multi-variate, this can be a list.
:param treatment_value: Value of the treatment in the treated group, for effect estimation. If treatment is multi-variate, this can be a list.
:param target_units: (Experimental) The units for which the treatment effect should be estimated. This can be of three types. (1) a string for common specifications of target units (namely, "ate", "att" and "atc"), (2) a lambda function that can be used as an index for the data (pandas DataFrame), or (3) a new DataFrame that contains values of the effect_modifiers and effect will be estimated only for this new data.
:param effect_modifiers: Names of effect modifier variables can be (optionally) specified here too, since they do not affect identification. If None, the effect_modifiers from the CausalModel are used.
:param fit_estimator: Boolean flag on whether to fit the estimator.
        Setting it to False is useful to estimate the effect on new data using a previously fitted estimator.
    :param method_params: optional dictionary of method-specific parameters; a "fit_params" entry, if present, is passed through to the estimator's fit() method
:returns: An instance of the CausalEstimate class, containing the causal effect estimate
and other method-dependent information
"""
if effect_modifiers is None:
effect_modifiers = []
treatment = parse_state(treatment)
outcome = parse_state(outcome)
causal_estimator_class = estimator.__class__
identified_estimand = estimator._target_estimand
identified_estimand.set_identifier_method(identifier_name)
if identified_estimand.no_directed_path:
logger.warning("No directed path from {0} to {1}.".format(treatment, outcome))
return CausalEstimate(
0, identified_estimand, None, control_value=control_value, treatment_value=treatment_value
)
# Check if estimator's target estimand is identified
elif identified_estimand.estimands[identifier_name] is None:
logger.error("No valid identified estimand available.")
return CausalEstimate(None, None, None, control_value=control_value, treatment_value=treatment_value)
if fit_estimator:
estimator.fit(
data=data,
treatment_name=treatment,
outcome_name=outcome,
effect_modifier_names=effect_modifiers,
            **(method_params["fit_params"] if method_params is not None and "fit_params" in method_params else {}),
)
estimate = estimator.estimate_effect(
treatment_value=treatment_value,
control_value=control_value,
target_units=target_units,
confidence_intervals=estimator._confidence_intervals,
)
# Store parameters inside estimate object for refutation methods
# TODO: This add_params needs to move to the estimator class
# inside estimate_effect and estimate_conditional_effect
estimate.add_params(
estimand_type=identified_estimand.estimand_type,
estimator_class=causal_estimator_class,
test_significance=estimator._significance_test,
evaluate_effect_strength=estimator._effect_strength_eval,
confidence_intervals=estimator._confidence_intervals,
target_units=target_units,
effect_modifiers=effect_modifiers,
)
if estimator._significance_test:
estimator.test_significance(estimate.value, method=estimator._significance_test)
if estimator._confidence_intervals:
estimator.estimate_confidence_intervals(
estimate.value, confidence_level=estimator.confidence_level, method=estimator._confidence_intervals
)
if estimator._effect_strength_eval:
effect_strength_dict = estimator.evaluate_effect_strength(estimate)
estimate.add_effect_strength(effect_strength_dict)
return estimate
class CausalEstimate:
"""Class for the estimate object that every causal estimator returns"""
def __init__(
self,
estimate,
target_estimand,
realized_estimand_expr,
control_value,
treatment_value,
conditional_estimates=None,
**kwargs,
):
self.value = estimate
self.target_estimand = target_estimand
self.realized_estimand_expr = realized_estimand_expr
self.control_value = control_value
self.treatment_value = treatment_value
self.conditional_estimates = conditional_estimates
self.params = kwargs
if self.params is not None:
for key, value in self.params.items():
setattr(self, key, value)
self.effect_strength = None
def add_estimator(self, estimator_instance):
self.estimator = estimator_instance
def add_effect_strength(self, strength_dict):
self.effect_strength = strength_dict
def add_params(self, **kwargs):
self.params.update(kwargs)
def get_confidence_intervals(self, confidence_level=None, method=None, **kwargs):
"""Get confidence intervals of the obtained estimate.
By default, this is done with the help of bootstrapped confidence intervals
but can be overridden if the specific estimator implements other methods of estimating confidence intervals.
If the method provided is not bootstrap, this function calls the implementation of the specific estimator.
:param method: Method for estimating confidence intervals.
:param confidence_level: The confidence level of the confidence intervals of the estimate.
:param kwargs: Other optional args to be passed to the CI method.
:returns: The obtained confidence interval.
"""
confidence_intervals = self.estimator.estimate_confidence_intervals(
estimate_value=self.value, confidence_level=confidence_level, method=method, **kwargs
)
return confidence_intervals
def get_standard_error(self, method=None, **kwargs):
"""Get standard error of the obtained estimate.
By default, this is done with the help of bootstrapped standard errors
but can be overridden if the specific estimator implements other methods of estimating standard error.
If the method provided is not bootstrap, this function calls the implementation of the specific estimator.
:param method: Method for computing the standard error.
:param kwargs: Other optional parameters to be passed to the estimating method.
:returns: Standard error of the causal estimate.
"""
std_error = self.estimator.estimate_std_error(method=method, **kwargs)
return std_error
def test_stat_significance(self, method=None, **kwargs):
"""Test statistical significance of the estimate obtained.
By default, uses resampling to create a non-parametric significance test.
Individual child estimators can implement different methods.
If the method name is different from "bootstrap", this function calls the
implementation of the child estimator.
:param method: Method for checking statistical significance
:param kwargs: Other optional parameters to be passed to the estimating method.
:returns: p-value from the significance test
"""
signif_results = self.estimator.test_significance(self.value, method=method, **kwargs)
return {"p_value": signif_results["p_value"]}
def estimate_conditional_effects(
self, effect_modifiers=None, num_quantiles=CausalEstimator.NUM_QUANTILES_TO_DISCRETIZE_CONT_COLS
):
"""Estimate treatment effect conditioned on given variables.
If a numeric effect modifier is provided, it is discretized into quantile bins. If you would like a custom discretization, you can do so yourself: create a new column containing the discretized effect modifier and then include that column's name in the effect_modifier_names argument.
:param effect_modifiers: Names of effect modifier variables over which the conditional effects will be estimated. If not provided, defaults to the effect modifiers specified during creation of the CausalEstimator object.
:param num_quantiles: The number of quantiles into which a numeric effect modifier variable is discretized. Does not affect any categorical effect modifiers.
:returns: A (multi-index) dataframe that provides separate effects for each value of the (discretized) effect modifiers.
"""
return self.estimator._estimate_conditional_effects(
self.estimator._estimate_effect_fn, effect_modifiers, num_quantiles
)
def interpret(self, method_name=None, **kwargs):
"""Interpret the causal estimate.
:param method_name: Method used (string) or a list of methods. If None, then the default for the specific estimator is used.
        :param kwargs: Optional parameters that are directly passed to the interpreter method.
:returns: None
"""
if method_name is None:
method_name = self.estimator.interpret_method
method_name_arr = parse_state(method_name)
for method in method_name_arr:
interpreter = interpreters.get_class_object(method)
interpreter(self, **kwargs).interpret()
def __str__(self):
s = "*** Causal Estimate ***\n"
# No estimand was identified (identification failed)
if self.target_estimand is None:
return "Estimation failed! No relevant identified estimand available for this estimation method."
s += "\n## Identified estimand\n{0}".format(self.target_estimand.__str__(only_target_estimand=True))
s += "\n## Realized estimand\n{0}".format(self.realized_estimand_expr)
if hasattr(self, "estimator"):
s += "\nTarget units: {0}\n".format(self.estimator.target_units_tostr())
s += "\n## Estimate\n"
s += "Mean value: {0}\n".format(self.value)
s += ""
if hasattr(self, "cate_estimates"):
s += "Effect estimates: {0}\n".format(self.cate_estimates)
if hasattr(self, "estimator"):
if self.estimator._significance_test:
s += "p-value: {0}\n".format(self.estimator.signif_results_tostr(self.test_stat_significance()))
if self.estimator._confidence_intervals:
s += "{0}% confidence interval: {1}\n".format(
100 * self.estimator.confidence_level, self.get_confidence_intervals()
)
if self.conditional_estimates is not None:
s += "### Conditional Estimates\n"
s += str(self.conditional_estimates)
if self.effect_strength is not None:
s += "\n## Effect Strength\n"
s += "Change in outcome attributable to treatment: {}\n".format(self.effect_strength["fraction-effect"])
# s += "Variance in outcome explained by treatment: {}\n".format(self.effect_strength["r-squared"])
return s
class RealizedEstimand(object):
def __init__(self, identified_estimand, estimator_name):
self.treatment_variable = identified_estimand.treatment_variable
self.outcome_variable = identified_estimand.outcome_variable
self.backdoor_variables = identified_estimand.get_backdoor_variables()
self.instrumental_variables = identified_estimand.instrumental_variables
self.estimand_type = identified_estimand.estimand_type
self.estimand_expression = None
self.assumptions = None
self.estimator_name = estimator_name
def update_assumptions(self, estimator_assumptions):
self.assumptions = estimator_assumptions
def update_estimand_expression(self, estimand_expression):
self.estimand_expression = estimand_expression
def __str__(self):
s = "Realized estimand: {0}\n".format(self.estimator_name)
s += "Realized estimand type: {0}\n".format(self.estimand_type)
s += "Estimand expression:\n{0}\n".format(sp.pretty(self.estimand_expression))
j = 1
for ass_name, ass_str in self.assumptions.items():
s += "Estimand assumption {0}, {1}: {2}\n".format(j, ass_name, ass_str)
j += 1
return s
| andresmor-ms | 11c4e0dafd6e824eb81ad14262457d954ae61468 | affe0952f4aba6845247355c171565510c2c1673 | Oh, I copied this comment from: https://github.com/py-why/dowhy/blob/main/dowhy/causal_estimator.py#L109 I guess that it is a legacy comment that was never removed?
If that's the case I'll need to update the types for all treatment parameters and the docs, because from the text of it (at least to me) it says that it is a single string and not a list of strings. | andresmor-ms | 197 |
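For context, a minimal sketch of the normalization that `parse_state` (from `dowhy.utils.api`) applies, which is why both forms appear in the codebase; the exact behavior shown here is an assumption inferred from its use in this file:

    from dowhy.utils.api import parse_state

    # A bare string is wrapped into a one-element list; a list passes through.
    print(parse_state("v0"))          # expected: ['v0']
    print(parse_state(["v0", "v1"]))  # expected: ['v0', 'v1']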
py-why/dowhy | 746 | Functional api/causal estimators | * Introduce `fit()` method to estimators.
* Refactor constructors to avoid using `*args` and `**kwargs` and have more explicit parameters.
* Refactor refuters and other parts of the code to use `fit()` and modify arguments to `estimate_effect()` | null | 2022-11-04 16:15:39+00:00 | 2022-12-03 17:07:53+00:00 | dowhy/causal_estimator.py | import logging
from collections import namedtuple
from typing import Dict, List, Optional, Union
import numpy as np
import pandas as pd
import sympy as sp
from sklearn.utils import resample
import dowhy.interpreters as interpreters
from dowhy import causal_estimators
from dowhy.causal_graph import CausalGraph
from dowhy.causal_identifier.identified_estimand import IdentifiedEstimand
from dowhy.utils.api import parse_state
logger = logging.getLogger(__name__)
class CausalEstimator:
"""Base class for an estimator of causal effect.
Subclasses implement different estimation methods. All estimation methods are in the package "dowhy.causal_estimators"
"""
# The default number of simulations for statistical testing
DEFAULT_NUMBER_OF_SIMULATIONS_STAT_TEST = 1000
# The default number of simulations to obtain confidence intervals
DEFAULT_NUMBER_OF_SIMULATIONS_CI = 100
# The portion of the total size that should be taken each time to find the confidence intervals
# 1 is the recommended value
# https://ocw.mit.edu/courses/mathematics/18-05-introduction-to-probability-and-statistics-spring-2014/readings/MIT18_05S14_Reading24.pdf
# https://projecteuclid.org/download/pdf_1/euclid.ss/1032280214
DEFAULT_SAMPLE_SIZE_FRACTION = 1
# The default Confidence Level
DEFAULT_CONFIDENCE_LEVEL = 0.95
# Number of quantiles to discretize continuous columns, for applying groupby
NUM_QUANTILES_TO_DISCRETIZE_CONT_COLS = 5
# Prefix to add to temporary categorical variables created after discretization
TEMP_CAT_COLUMN_PREFIX = "__categorical__"
DEFAULT_NOTIMPLEMENTEDERROR_MSG = "not yet implemented for {0}. If you would this to be implemented in the next version, please raise an issue at https://github.com/microsoft/dowhy/issues"
BootstrapEstimates = namedtuple("BootstrapEstimates", ["estimates", "params"])
DEFAULT_INTERPRET_METHOD = ["textual_effect_interpreter"]
# std args to be removed from locals() before being passed to args_dict
_STD_INIT_ARGS = ("self", "__class__", "args", "kwargs")
def __init__(
self,
data,
identified_estimand,
treatment,
outcome,
control_value=0,
treatment_value=1,
test_significance=False,
evaluate_effect_strength=False,
confidence_intervals=False,
target_units=None,
effect_modifiers=None,
num_null_simulations=DEFAULT_NUMBER_OF_SIMULATIONS_STAT_TEST,
num_simulations=DEFAULT_NUMBER_OF_SIMULATIONS_CI,
sample_size_fraction=DEFAULT_SAMPLE_SIZE_FRACTION,
confidence_level=DEFAULT_CONFIDENCE_LEVEL,
need_conditional_estimates="auto",
num_quantiles_to_discretize_cont_cols=NUM_QUANTILES_TO_DISCRETIZE_CONT_COLS,
**kwargs,
):
"""Initializes an estimator with data and names of relevant variables.
This method is called from the constructors of its child classes.
:param data: data frame containing the data
:param identified_estimand: probability expression
representing the target identified estimand to estimate.
:param treatment: name of the treatment variable
:param outcome: name of the outcome variable
:param control_value: Value of the treatment in the control group, for effect estimation. If treatment is multi-variate, this can be a list.
:param treatment_value: Value of the treatment in the treated group, for effect estimation. If treatment is multi-variate, this can be a list.
:param test_significance: Binary flag or a string indicating whether to test significance and by which method. All estimators support test_significance="bootstrap" that estimates a p-value for the obtained estimate using the bootstrap method. Individual estimators can override this to support custom testing methods. The bootstrap method supports an optional parameter, num_null_simulations. If False, no testing is done. If True, significance of the estimate is tested using the custom method if available, otherwise by bootstrap.
:param evaluate_effect_strength: (Experimental) whether to evaluate the strength of effect
:param confidence_intervals: Binary flag or a string indicating whether the confidence intervals should be computed and which method should be used. All methods support estimation of confidence intervals using the bootstrap method by using the parameter confidence_intervals="bootstrap". The bootstrap method takes in two arguments (num_simulations and sample_size_fraction) that can be optionally specified in the params dictionary. Estimators may also override this to implement their own confidence interval method. If this parameter is False, no confidence intervals are computed. If True, confidence intervals are computed by the estimator's specific method if available, otherwise through bootstrap.
:param target_units: The units for which the treatment effect should be estimated. This can be a string for common specifications of target units (namely, "ate", "att" and "atc"). It can also be a lambda function that can be used as an index for the data (pandas DataFrame). Alternatively, it can be a new DataFrame that contains values of the effect_modifiers and effect will be estimated only for this new data.
:param effect_modifiers: Variables on which to compute separate
effects, or return a heterogeneous effect function. Not all
methods support this currently.
:param num_null_simulations: The number of simulations for testing the
statistical significance of the estimator
:param num_simulations: The number of simulations for finding the
confidence interval (and/or standard error) for a estimate
:param sample_size_fraction: The size of the sample for the bootstrap
estimator
:param confidence_level: The confidence level of the confidence
interval estimate
:param need_conditional_estimates: Boolean flag indicating whether
conditional estimates should be computed. Defaults to True if
there are effect modifiers in the graph
:param num_quantiles_to_discretize_cont_cols: The number of quantiles
into which a numeric effect modifier is split, to enable
estimation of conditional treatment effect over it.
:param kwargs: (optional) Additional estimator-specific parameters
:returns: an instance of the estimator class.
"""
self._data = data
self._target_estimand = identified_estimand
# Currently estimation methods only support univariate treatment and outcome
self._treatment_name = treatment
self._outcome_name = outcome[0] # assuming one-dimensional outcome
self._control_value = control_value
self._treatment_value = treatment_value
self._significance_test = test_significance
self._effect_strength_eval = evaluate_effect_strength
self._target_units = target_units
self._effect_modifier_names = effect_modifiers
self._confidence_intervals = confidence_intervals
self._bootstrap_estimates = None # for confidence intervals and std error
self._bootstrap_null_estimates = None # for significance test
self._effect_modifiers = None
self.method_params = kwargs
# Setting the default interpret method
self.interpret_method = CausalEstimator.DEFAULT_INTERPRET_METHOD
self.logger = logging.getLogger(__name__)
# Setting treatment and outcome values
if self._data is not None:
self._treatment = self._data[self._treatment_name]
self._outcome = self._data[self._outcome_name]
if self._effect_modifier_names:
# only add the observed nodes
self._effect_modifier_names = [
cname for cname in self._effect_modifier_names if cname in self._data.columns
]
if len(self._effect_modifier_names) > 0:
self._effect_modifiers = self._data[self._effect_modifier_names]
self._effect_modifiers = pd.get_dummies(self._effect_modifiers, drop_first=True)
self.logger.debug("Effect modifiers: " + ",".join(self._effect_modifier_names))
else:
self._effect_modifier_names = None
# Check if some parameters were set, otherwise set to default values
self.num_null_simulations = num_null_simulations
self.num_simulations = num_simulations
self.sample_size_fraction = sample_size_fraction
self.confidence_level = confidence_level
self.num_quantiles_to_discretize_cont_cols = num_quantiles_to_discretize_cont_cols
# Estimate conditional estimates by default
self.need_conditional_estimates = (
need_conditional_estimates if need_conditional_estimates != "auto" else bool(self._effect_modifier_names)
)
@staticmethod
def get_estimator_object(new_data, identified_estimand, estimate):
"""Create a new estimator of the same type as the one passed in the estimate argument.
Creates a new object with new_data and the identified_estimand
:param new_data: np.ndarray, pd.Series, pd.DataFrame
The newly assigned data on which the estimator should run
:param identified_estimand: IdentifiedEstimand
An instance of the identified estimand class that provides the information with
respect to which causal pathways are employed when the treatment affects the outcome
:param estimate: CausalEstimate
It is an already existing estimate whose properties we wish to replicate
:returns: An instance of the same estimator class that had generated the given estimate.
"""
estimator_class = estimate.params["estimator_class"]
new_estimator = estimator_class(
new_data,
identified_estimand,
identified_estimand.treatment_variable,
identified_estimand.outcome_variable,
# names of treatment and outcome
control_value=estimate.control_value,
treatment_value=estimate.treatment_value,
test_significance=False,
evaluate_effect_strength=False,
confidence_intervals=estimate.params["confidence_intervals"],
target_units=estimate.params["target_units"],
effect_modifiers=estimate.params["effect_modifiers"],
**estimate.params["method_params"] if estimate.params["method_params"] is not None else {},
)
return new_estimator
def _estimate_effect(self):
"""This method is to be overriden by the child classes, so that they can run the estimation technique of their choice"""
raise NotImplementedError(
("Main estimation method is " + CausalEstimator.DEFAULT_NOTIMPLEMENTEDERROR_MSG).format(self.__class__)
)
def estimate_effect(self):
"""Base estimation method that calls the estimate_effect method of its calling subclass.
Can optionally also test significance and estimate effect strength for any returned estimate.
:param self: object instance of class Estimator
:returns: A CausalEstimate instance that contains point estimates of average and conditional effects. Based on the parameters provided, it optionally includes confidence intervals, standard errors, statistical significance and other statistical parameters.
"""
est = self._estimate_effect()
est.add_estimator(self)
if self._significance_test:
self.test_significance(est.value, method=self._significance_test)
if self._confidence_intervals:
self.estimate_confidence_intervals(
est.value, confidence_level=self.confidence_level, method=self._confidence_intervals
)
if self._effect_strength_eval:
effect_strength_dict = self.evaluate_effect_strength(est)
est.add_effect_strength(effect_strength_dict)
return est
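# Illustrative call sequence (a sketch; `estimator` stands for any concrete
# subclass constructed with test_significance=True and confidence_intervals=True):
#
#     estimate = estimator.estimate_effect()
#     estimate.value                       # point estimate
#     estimate.test_stat_significance()    # p-value, bootstrap by default
#     estimate.get_confidence_intervals()  # CI, bootstrap by default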
def estimate_effect_naive(self):
# TODO Only works for binary treatment
df_withtreatment = self._data.loc[self._data[self._treatment_name] == 1]
df_notreatment = self._data.loc[self._data[self._treatment_name] == 0]
est = np.mean(df_withtreatment[self._outcome_name]) - np.mean(df_notreatment[self._outcome_name])
return CausalEstimate(est, None, None, control_value=0, treatment_value=1)
def _estimate_effect_fn(self, data_df):
"""Function used in conditional effect estimation. This function is to be overridden by each child estimator.
The overridden function should take in a dataframe as input and return the estimate for that data.
"""
raise NotImplementedError(
("Conditional treatment effects are " + CausalEstimator.DEFAULT_NOTIMPLEMENTEDERROR_MSG).format(
self.__class__
)
)
def _estimate_conditional_effects(self, estimate_effect_fn, effect_modifier_names=None, num_quantiles=None):
"""Estimate conditional treatment effects. Common method for all estimators that utilizes a specific estimate_effect_fn implemented by each child estimator.
If a numeric effect modifier is provided, it is discretized into quantile bins. If you would like a custom discretization, you can do so yourself: create a new column containing the discretized effect modifier and then include that column's name in the effect_modifier_names argument.
:param estimate_effect_fn: Function that has a single parameter (a data frame) and returns the treatment effect estimate on that data.
:param effect_modifier_names: Names of effect modifier variables over which the conditional effects will be estimated. If not provided, defaults to the effect modifiers specified during creation of the CausalEstimator object.
:param num_quantiles: The number of quantiles into which a numeric effect modifier variable is discretized. Does not affect any categorical effect modifiers.
:returns: A (multi-index) dataframe that provides separate effects for each value of the (discretized) effect modifiers.
"""
# Defaulting to class default values if parameters are not provided
if effect_modifier_names is None:
effect_modifier_names = self._effect_modifier_names
if num_quantiles is None:
num_quantiles = self.num_quantiles_to_discretize_cont_cols
# Checking that there is at least one effect modifier
if not effect_modifier_names:
raise ValueError("At least one effect modifier should be specified to compute conditional effects.")
# Making sure that effect_modifier_names is a list
effect_modifier_names = parse_state(effect_modifier_names)
if not all(em in self._effect_modifier_names for em in effect_modifier_names):
self.logger.warning(
"At least one of the provided effect modifiers was not included while fitting the estimator. You may get incorrect results. To resolve, fit the estimator again by providing the updated effect modifiers in estimate_effect()."
)
# Making a copy since we are going to be changing effect modifier names
effect_modifier_names = effect_modifier_names.copy()
prefix = CausalEstimator.TEMP_CAT_COLUMN_PREFIX
# For every numeric effect modifier, adding a temp categorical column
for i in range(len(effect_modifier_names)):
em = effect_modifier_names[i]
if pd.api.types.is_numeric_dtype(self._data[em].dtypes):
self._data[prefix + str(em)] = pd.qcut(self._data[em], num_quantiles, duplicates="drop")
effect_modifier_names[i] = prefix + str(em)
# Grouping by effect modifiers and computing effect separately
by_effect_mods = self._data.groupby(effect_modifier_names)
cond_est_fn = lambda x: self._do(self._treatment_value, x) - self._do(self._control_value, x)
conditional_estimates = by_effect_mods.apply(estimate_effect_fn)
# Deleting the temporary categorical columns
for em in effect_modifier_names:
if em.startswith(prefix):
self._data.pop(em)
return conditional_estimates
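# Custom discretization sketch (hypothetical column names): instead of the
# automatic quantile binning above, a caller can pre-bin a numeric modifier
# and pass the new column's name:
#
#     df["age_bin"] = pd.cut(df["age"], bins=[0, 30, 60, 100])
#     estimate.estimate_conditional_effects(effect_modifiers=["age_bin"])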
def _do(self, x, data_df=None):
raise NotImplementedError(
("Do-operator is " + CausalEstimator.DEFAULT_NOTIMPLEMENTEDERROR_MSG).format(self.__class__)
)
def do(self, x, data_df=None):
"""Method that implements the do-operator.
Given a value x for the treatment, returns the expected value of the outcome when the treatment is intervened to a value x.
:param x: Value of the treatment
:param data_df: Data on which the do-operator is to be applied.
:returns: Value of the outcome when treatment is intervened/set to x.
"""
est = self._do(x, data_df)
return est
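# Example sketch (assumes a fitted estimator whose subclass implements _do):
#
#     y1 = estimator.do(1)   # expected outcome under T := 1
#     y0 = estimator.do(0)   # expected outcome under T := 0
#     effect = y1 - y0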
def construct_symbolic_estimator(self, estimand):
raise NotImplementedError(("Symbolic estimator string is ").format(self.__class__))
def _generate_bootstrap_estimates(self, num_bootstrap_simulations, sample_size_fraction):
"""Helper function to generate causal estimates over bootstrapped samples.
:param num_bootstrap_simulations: Number of simulations for the bootstrap method.
:param sample_size_fraction: Fraction of the dataset to be resampled.
:returns: A collections.namedtuple containing a list of bootstrapped estimates and a dictionary containing parameters used for the bootstrap.
"""
# The array that stores the results of all estimations
simulation_results = np.zeros(num_bootstrap_simulations)
# Find the sample size as a proportion of the population size
sample_size = int(sample_size_fraction * len(self._data))
if sample_size > len(self._data):
self.logger.warning("WARN: The sample size is greater than the data being sampled")
self.logger.info("INFO: The sample size: {}".format(sample_size))
self.logger.info("INFO: The number of simulations: {}".format(num_bootstrap_simulations))
# Perform the set number of simulations
for index in range(num_bootstrap_simulations):
new_data = resample(self._data, n_samples=sample_size)
new_estimator = type(self)(
new_data,
self._target_estimand,
self._target_estimand.treatment_variable,
self._target_estimand.outcome_variable,
# names of treatment and outcome
treatment_value=self._treatment_value,
control_value=self._control_value,
test_significance=False,
evaluate_effect_strength=False,
confidence_intervals=False,
target_units=self._target_units,
effect_modifiers=self._effect_modifier_names,
**self.method_params,
)
new_effect = new_estimator.estimate_effect()
simulation_results[index] = new_effect.value
estimates = CausalEstimator.BootstrapEstimates(
simulation_results,
{"num_simulations": num_bootstrap_simulations, "sample_size_fraction": sample_size_fraction},
)
return estimates
def _estimate_confidence_intervals_with_bootstrap(
self, estimate_value, confidence_level=None, num_simulations=None, sample_size_fraction=None
):
"""
Method to compute confidence interval using bootstrapped sampling.
:param estimate_value: obtained estimate's value
:param confidence_level: The level for which to compute CI (e.g., 95% confidence level translates to confidence_level=0.95)
:param num_simulations: The number of simulations to be performed to get the bootstrap confidence intervals.
:param sample_size_fraction: The fraction of the dataset to be resampled.
:returns: confidence interval at the specified level.
For more details on bootstrap or resampling statistics, refer to the following links:
https://ocw.mit.edu/courses/mathematics/18-05-introduction-to-probability-and-statistics-spring-2014/readings/MIT18_05S14_Reading24.pdf
https://projecteuclid.org/download/pdf_1/euclid.ss/1032280214
"""
# Using class default parameters if not specified
if num_simulations is None:
num_simulations = self.num_simulations
if sample_size_fraction is None:
sample_size_fraction = self.sample_size_fraction
# Checking if bootstrap_estimates are already computed
if self._bootstrap_estimates is None:
self._bootstrap_estimates = self._generate_bootstrap_estimates(num_simulations, sample_size_fraction)
elif CausalEstimator.is_bootstrap_parameter_changed(self._bootstrap_estimates.params, locals()):
# Checked if any parameter is changed from the previous std error estimate
self._bootstrap_estimates = self._generate_bootstrap_estimates(num_simulations, sample_size_fraction)
# Now use the data obtained from the simulations to get the value of the confidence estimates
bootstrap_estimates = self._bootstrap_estimates.estimates
# Get the variations of each bootstrap estimate and sort
bootstrap_variations = [bootstrap_estimate - estimate_value for bootstrap_estimate in bootstrap_estimates]
sorted_bootstrap_variations = np.sort(bootstrap_variations)
# Now we take the (1 - p)th and the (p)th variations, where p is the chosen confidence level
upper_bound_index = int((1 - confidence_level) * len(sorted_bootstrap_variations))
lower_bound_index = int(confidence_level * len(sorted_bootstrap_variations))
# Get the lower and upper bounds by subtracting the variations from the estimate
lower_bound = estimate_value - sorted_bootstrap_variations[lower_bound_index]
upper_bound = estimate_value - sorted_bootstrap_variations[upper_bound_index]
return lower_bound, upper_bound
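# Worked example of the index arithmetic above (assumed numbers): with
# confidence_level=0.95 and 100 sorted bootstrap variations,
# upper_bound_index = int(0.05 * 100) = 5 and lower_bound_index =
# int(0.95 * 100) = 95, so the returned interval is
# (estimate - variations[95], estimate - variations[5]).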
def _estimate_confidence_intervals(self, confidence_level=None, method=None, **kwargs):
"""
This method is to be overridden by the child classes, so that they
can run a confidence interval estimation method suited to the specific
causal estimator.
"""
raise NotImplementedError(
(
"This method for estimating confidence intervals is "
+ CausalEstimator.DEFAULT_NOTIMPLEMENTEDERROR_MSG
+ " Meanwhile, you can try the bootstrap method (method='bootstrap') to estimate confidence intervals."
).format(self.__class__)
)
def estimate_confidence_intervals(self, estimate_value, confidence_level=None, method=None, **kwargs):
"""Find the confidence intervals corresponding to any estimator
By default, this is done with the help of bootstrapped confidence intervals
but can be overridden if the specific estimator implements other methods of estimating confidence intervals.
If the method provided is not bootstrap, this function calls the implementation of the specific estimator.
:param estimate_value: obtained estimate's value
:param method: Method for estimating confidence intervals.
:param confidence_level: The confidence level of the confidence intervals of the estimate.
:param kwargs: Other optional args to be passed to the CI method.
:returns: The obtained confidence interval.
"""
if method is None:
if self._confidence_intervals:
method = self._confidence_intervals # this is either True or methodname
else:
method = "default"
confidence_intervals = None
if confidence_level is None:
confidence_level = self.confidence_level
if method == "default" or method is True: # user has not provided any method
try:
confidence_intervals = self._estimate_confidence_intervals(confidence_level, method=method, **kwargs)
except NotImplementedError:
confidence_intervals = self._estimate_confidence_intervals_with_bootstrap(
estimate_value, confidence_level, **kwargs
)
else:
if method == "bootstrap":
confidence_intervals = self._estimate_confidence_intervals_with_bootstrap(
estimate_value, confidence_level, **kwargs
)
else:
confidence_intervals = self._estimate_confidence_intervals(confidence_level, method=method, **kwargs)
return confidence_intervals
def _estimate_std_error_with_bootstrap(self, num_simulations=None, sample_size_fraction=None):
"""Compute standard error using the bootstrap method. Standard error
and confidence intervals use the same parameter num_simulations for
the number of bootstrap simulations.
:param num_simulations: Number of bootstrapped samples.
:param sample_size_fraction: Fraction of data to be resampled.
:returns: Standard error of the obtained estimate.
"""
# Use existing params, if new user defined params are not present
if num_simulations is None:
num_simulations = self.num_simulations
if sample_size_fraction is None:
sample_size_fraction = self.sample_size_fraction
# Checking if bootstrap_estimates are already computed
if self._bootstrap_estimates is None:
self._bootstrap_estimates = self._generate_bootstrap_estimates(num_simulations, sample_size_fraction)
elif CausalEstimator.is_bootstrap_parameter_changed(self._bootstrap_estimates.params, locals()):
# Check if any parameter is changed from the previous std error estimate
self._bootstrap_estimates = self._generate_bootstrap_estimates(num_simulations, sample_size_fraction)
std_error = np.std(self._bootstrap_estimates.estimates)
return std_error
def _estimate_std_error(self, method=None, **kwargs):
"""
This method is to be overridden by the child classes, so that they
can run a standard error estimation method suited to the specific
causal estimator.
"""
raise NotImplementedError(
(
"This method for estimating standard errors is "
+ CausalEstimator.DEFAULT_NOTIMPLEMENTEDERROR_MSG
+ " Meanwhile, you can try the bootstrap method (method='bootstrap') to estimate standard errors."
).format(self.__class__)
)
def estimate_std_error(self, method=None, **kwargs):
"""Compute standard error of an obtained causal estimate.
:param method: Method for computing the standard error.
:param kwargs: Other optional parameters to be passed to the estimating method.
:returns: Standard error of the causal estimate.
"""
if method is None:
if self._confidence_intervals:
method = self._confidence_intervals
else:
method = "default"
std_error = None
if method == "default" or method is True: # user has not provided any method
try:
std_error = self._estimate_std_error(method, **kwargs)
except NotImplementedError:
std_error = self._estimate_std_error_with_bootstrap(**kwargs)
else:
if method == "bootstrap":
std_error = self._estimate_std_error_with_bootstrap(**kwargs)
else:
std_error = self._estimate_std_error(method, **kwargs)
return std_error
def _test_significance_with_bootstrap(self, estimate_value, num_null_simulations=None):
"""Test statistical significance of an estimate using the bootstrap method.
:param estimate_value: Obtained estimate's value
:param num_null_simulations: Number of simulations for the null hypothesis
:returns: p-value of the statistical significance test.
"""
# Use existing params, if new user defined params are not present
if num_null_simulations is None:
num_null_simulations = self.num_null_simulations
do_retest = self._bootstrap_null_estimates is None or CausalEstimator.is_bootstrap_parameter_changed(
self._bootstrap_null_estimates.params, locals()
)
if do_retest:
null_estimates = np.zeros(num_null_simulations)
for i in range(num_null_simulations):
new_outcome = np.random.permutation(self._outcome)
new_data = self._data.assign(dummy_outcome=new_outcome)
# self._outcome = self._data["dummy_outcome"]
new_estimator = type(self)(
new_data,
self._target_estimand,
self._target_estimand.treatment_variable,
("dummy_outcome",),
test_significance=False,
evaluate_effect_strength=False,
confidence_intervals=False,
target_units=self._target_units,
effect_modifiers=self._effect_modifier_names,
**self.method_params,
)
new_effect = new_estimator.estimate_effect()
null_estimates[i] = new_effect.value
self._bootstrap_null_estimates = CausalEstimator.BootstrapEstimates(
null_estimates, {"num_null_simulations": num_null_simulations, "sample_size_fraction": 1}
)
# Processing the null hypothesis estimates
sorted_null_estimates = np.sort(self._bootstrap_null_estimates.estimates)
self.logger.debug("Null estimates: {0}".format(sorted_null_estimates))
median_estimate = sorted_null_estimates[int(num_null_simulations / 2)]
# Doing a two-sided test
if estimate_value > median_estimate:
# Being conservative with the p-value reported
estimate_index = np.searchsorted(sorted_null_estimates, estimate_value, side="left")
p_value = 1 - (estimate_index / num_null_simulations)
if estimate_value <= median_estimate:
# Being conservative with the p-value reported
estimate_index = np.searchsorted(sorted_null_estimates, estimate_value, side="right")
p_value = estimate_index / num_null_simulations
# If the estimate_index is 0, it depends on the number of simulations
if p_value == 0:
p_value = (0, 1 / len(sorted_null_estimates)) # a tuple determining the range.
elif p_value == 1:
p_value = (1 - 1 / len(sorted_null_estimates), 1)
signif_dict = {"p_value": p_value}
return signif_dict
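# Worked example (assumed numbers): with num_null_simulations=1000 and an
# estimate above the null median that exceeds 990 of the sorted null
# estimates, np.searchsorted returns index 990, giving a p-value of
# 1 - 990 / 1000 = 0.01.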
def _test_significance(self, estimate_value, method=None, **kwargs):
"""
This method is to be overridden by the child classes, so that they
can run a significance test suited to the specific
causal estimator.
"""
raise NotImplementedError(
(
"This method for testing statistical significance is "
+ CausalEstimator.DEFAULT_NOTIMPLEMENTEDERROR_MSG
+ " Meanwhile, you can try the bootstrap method (method='bootstrap') to test statistical significance."
).format(self.__class__)
)
def test_significance(self, estimate_value, method=None, **kwargs):
"""Test statistical significance of obtained estimate.
By default, uses resampling to create a non-parametric significance test.
A general procedure. Individual child estimators can implement different methods.
If the method name is different from "bootstrap", this function calls the
implementation of the child estimator.
:param self: object instance of class Estimator
:param estimate_value: obtained estimate's value
:param method: Method for checking statistical significance
:returns: p-value from the significance test
"""
if method is None:
if self._significance_test:
method = self._significance_test # this is either True or methodname
else:
method = "default"
signif_dict = None
if method == "default" or method is True: # user has not provided any method
try:
signif_dict = self._test_significance(estimate_value, method, **kwargs)
except NotImplementedError:
signif_dict = self._test_significance_with_bootstrap(estimate_value, **kwargs)
else:
if method == "bootstrap":
signif_dict = self._test_significance_with_bootstrap(estimate_value, **kwargs)
else:
signif_dict = self._test_significance(estimate_value, method, **kwargs)
return signif_dict
def evaluate_effect_strength(self, estimate):
fraction_effect_explained = self._evaluate_effect_strength(estimate, method="fraction-effect")
# Need to test r-squared before supporting
# effect_r_squared = self._evaluate_effect_strength(estimate, method="r-squared")
strength_dict = {
"fraction-effect": fraction_effect_explained
# 'r-squared': effect_r_squared
}
return strength_dict
def _evaluate_effect_strength(self, estimate, method="fraction-effect"):
supported_methods = ["fraction-effect"]
if method not in supported_methods:
raise NotImplementedError("This method is not supported for evaluating effect strength")
if method == "fraction-effect":
naive_obs_estimate = self.estimate_effect_naive()
self.logger.debug(estimate.value, naive_obs_estimate.value)
fraction_effect_explained = estimate.value / naive_obs_estimate.value
return fraction_effect_explained
# elif method == "r-squared":
# outcome_mean = np.mean(self._outcome)
# total_variance = np.sum(np.square(self._outcome - outcome_mean))
# Assuming a linear model with one variable: the treatment
# Currently only works for continuous y
# causal_model = outcome_mean + estimate.value*self._treatment
# squared_residual = np.sum(np.square(self._outcome - causal_model))
# r_squared = 1 - (squared_residual/total_variance)
# return r_squared
else:
return None
def update_input(self, treatment_value, control_value, target_units):
self._control_value = control_value
self._treatment_value = treatment_value
self._target_units = target_units
@staticmethod
def is_bootstrap_parameter_changed(bootstrap_estimates_params, given_params):
"""Check whether parameters of the bootstrap have changed.
This is an efficiency method that checks if fresh resampling of the bootstrap samples is required.
Returns True if parameters have changed and resampling should be done again.
:param bootstrap_estimates_params: A dictionary of parameters for the current bootstrap samples
:param given_params: A dictionary of parameters passed by the user
:returns: A binary flag denoting whether the parameters are different.
"""
is_any_parameter_changed = False
for prm, val in bootstrap_estimates_params.items():
given_val = given_params.get(prm, None)
if given_val is not None and given_val != val:
is_any_parameter_changed = True
break
return is_any_parameter_changed
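# Example (assumed values): if the cached bootstrap params are
# {"num_simulations": 100, "sample_size_fraction": 1} and the caller now
# passes num_simulations=200, this returns True and fresh bootstrap
# estimates are generated.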
def target_units_tostr(self):
s = ""
if type(self._target_units) is str:
s += self._target_units
elif callable(self._target_units):
s += "Data subset defined by a function"
elif isinstance(self._target_units, pd.DataFrame):
s += "Data subset provided as a data frame"
return s
def signif_results_tostr(self, signif_results):
s = ""
pval = signif_results["p_value"]
if type(pval) is tuple:
s += "[{0}, {1}]".format(pval[0], pval[1])
else:
s += "{0}".format(pval)
return s
def estimate_effect(
treatment: Union[str, List[str]],
outcome: Union[str, List[str]],
identified_estimand: IdentifiedEstimand,
identifier_name: str,
method: CausalEstimator,
control_value: int = 0,
treatment_value: int = 1,
test_significance: Optional[bool] = None,
evaluate_effect_strength: bool = False,
confidence_intervals: bool = False,
target_units: str = "ate",
effect_modifiers: List[str] = [],
fit_estimator: bool = True,
method_params: Optional[Dict] = None,
):
"""Estimate the identified causal effect.
Currently requires an explicit method to be specified. Method names follow the convention of identification method followed by the specific estimation method: "[backdoor/iv].estimation_method_name". The following methods are supported.
* Propensity Score Matching: "backdoor.propensity_score_matching"
* Propensity Score Stratification: "backdoor.propensity_score_stratification"
* Propensity Score-based Inverse Weighting: "backdoor.propensity_score_weighting"
* Linear Regression: "backdoor.linear_regression"
* Generalized Linear Models (e.g., logistic regression): "backdoor.generalized_linear_model"
* Instrumental Variables: "iv.instrumental_variable"
* Regression Discontinuity: "iv.regression_discontinuity"
In addition, you can directly call any of the EconML estimation methods. The convention is "backdoor.econml.path-to-estimator-class". For example, for the double machine learning estimator ("DML" class) that is located inside "dml" module of EconML, you can use the method name, "backdoor.econml.dml.DML". CausalML estimators can also be called. See `this demo notebook <https://py-why.github.io/dowhy/example_notebooks/dowhy-conditional-treatment-effects.html>`_.
:param treatment: Name of the treatment
:param outcome: Name of the outcome
:param identified_estimand: a probability expression
that represents the effect to be estimated. Output of
CausalModel.identify_effect method
:param identifier_name: name of the identification method (e.g., "backdoor" or "iv") whose estimand should be used.
:param method: an instance of a CausalEstimator subclass that implements the estimation method.
:param control_value: Value of the treatment in the control group, for effect estimation. If treatment is multi-variate, this can be a list.
:param treatment_value: Value of the treatment in the treated group, for effect estimation. If treatment is multi-variate, this can be a list.
:param test_significance: Binary flag on whether to additionally do a statistical significance test for the estimate.
:param evaluate_effect_strength: (Experimental) Binary flag on whether to estimate the relative strength of the treatment's effect. This measure can be used to compare different treatments for the same outcome (by running this method with different treatments sequentially).
:param confidence_intervals: (Experimental) Binary flag indicating whether confidence intervals should be computed.
:param target_units: (Experimental) The units for which the treatment effect should be estimated. This can be of three types. (1) a string for common specifications of target units (namely, "ate", "att" and "atc"), (2) a lambda function that can be used as an index for the data (pandas DataFrame), or (3) a new DataFrame that contains values of the effect_modifiers and effect will be estimated only for this new data.
:param effect_modifiers: Names of effect modifier variables can be (optionally) specified here too, since they do not affect identification. If None, the effect_modifiers from the CausalModel are used.
:param fit_estimator: Boolean flag on whether to fit the estimator.
Setting it to False is useful to estimate the effect on new data using a previously fitted estimator.
:param method_params: Dictionary containing any method-specific parameters. These are passed directly to the estimating method. See the docs for each estimation method for allowed method-specific params.
:returns: An instance of the CausalEstimate class, containing the causal effect estimate
and other method-dependent information
"""
treatment = parse_state(treatment)
outcome = parse_state(outcome)
causal_estimator_class = method.__class__
identified_estimand.set_identifier_method(identifier_name)
if identified_estimand.no_directed_path:
logger.warning("No directed path from {0} to {1}.".format(treatment, outcome))
return CausalEstimate(
0, identified_estimand, None, control_value=control_value, treatment_value=treatment_value
)
# Check if estimator's target estimand is identified
elif identified_estimand.estimands[identifier_name] is None:
logger.error("No valid identified estimand available.")
return CausalEstimate(None, None, None, control_value=control_value, treatment_value=treatment_value)
method.update_input(treatment_value, control_value, target_units)
estimate = method.estimate_effect()
# Store parameters inside estimate object for refutation methods
# TODO: This add_params needs to move to the estimator class
# inside estimate_effect and estimate_conditional_effect
estimate.add_params(
estimand_type=identified_estimand.estimand_type,
estimator_class=causal_estimator_class,
test_significance=test_significance,
evaluate_effect_strength=evaluate_effect_strength,
confidence_intervals=confidence_intervals,
target_units=target_units,
effect_modifiers=effect_modifiers,
method_params=method_params,
)
return estimate
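# Usage sketch for this functional API (hypothetical names; `estimand` comes
# from an identification step and `psm_estimator` is a constructed
# CausalEstimator instance):
#
#     estimate = estimate_effect(
#         treatment="v0",
#         outcome="y",
#         identified_estimand=estimand,
#         identifier_name="backdoor",
#         method=psm_estimator,
#         target_units="ate",
#     )
#     print(estimate.value)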
class CausalEstimate:
"""Class for the estimate object that every causal estimator returns"""
def __init__(
self,
estimate,
target_estimand,
realized_estimand_expr,
control_value,
treatment_value,
conditional_estimates=None,
**kwargs,
):
self.value = estimate
self.target_estimand = target_estimand
self.realized_estimand_expr = realized_estimand_expr
self.control_value = control_value
self.treatment_value = treatment_value
self.conditional_estimates = conditional_estimates
self.params = kwargs
if self.params is not None:
for key, value in self.params.items():
setattr(self, key, value)
self.effect_strength = None
def add_estimator(self, estimator_instance):
self.estimator = estimator_instance
def add_effect_strength(self, strength_dict):
self.effect_strength = strength_dict
def add_params(self, **kwargs):
self.params.update(kwargs)
def get_confidence_intervals(self, confidence_level=None, method=None, **kwargs):
"""Get confidence intervals of the obtained estimate.
By default, this is done with the help of bootstrapped confidence intervals
but can be overridden if the specific estimator implements other methods of estimating confidence intervals.
If the method provided is not bootstrap, this function calls the implementation of the specific estimator.
:param method: Method for estimating confidence intervals.
:param confidence_level: The confidence level of the confidence intervals of the estimate.
:param kwargs: Other optional args to be passed to the CI method.
:returns: The obtained confidence interval.
"""
confidence_intervals = self.estimator.estimate_confidence_intervals(
estimate_value=self.value, confidence_level=confidence_level, method=method, **kwargs
)
return confidence_intervals
def get_standard_error(self, method=None, **kwargs):
"""Get standard error of the obtained estimate.
By default, this is done with the help of bootstrapped standard errors
but can be overridden if the specific estimator implements other methods of estimating standard error.
If the method provided is not bootstrap, this function calls the implementation of the specific estimator.
:param method: Method for computing the standard error.
:param kwargs: Other optional parameters to be passed to the estimating method.
:returns: Standard error of the causal estimate.
"""
std_error = self.estimator.estimate_std_error(method=method, **kwargs)
return std_error
def test_stat_significance(self, method=None, **kwargs):
"""Test statistical significance of the estimate obtained.
By default, uses resampling to create a non-parametric significance test.
Individual child estimators can implement different methods.
If the method name is different from "bootstrap", this function calls the
implementation of the child estimator.
:param method: Method for checking statistical significance
:param kwargs: Other optional parameters to be passed to the estimating method.
:returns: p-value from the significance test
"""
signif_results = self.estimator.test_significance(self.value, method=method, **kwargs)
return {"p_value": signif_results["p_value"]}
def estimate_conditional_effects(
self, effect_modifiers=None, num_quantiles=CausalEstimator.NUM_QUANTILES_TO_DISCRETIZE_CONT_COLS
):
"""Estimate treatment effect conditioned on given variables.
If a numeric effect modifier is provided, it is discretized into quantile bins. If you would like a custom discretization, you can do so yourself: create a new column containing the discretized effect modifier and then include that column's name in the effect_modifier_names argument.
:param effect_modifiers: Names of effect modifier variables over which the conditional effects will be estimated. If not provided, defaults to the effect modifiers specified during creation of the CausalEstimator object.
:param num_quantiles: The number of quantiles into which a numeric effect modifier variable is discretized. Does not affect any categorical effect modifiers.
:returns: A (multi-index) dataframe that provides separate effects for each value of the (discretized) effect modifiers.
"""
return self.estimator._estimate_conditional_effects(
self.estimator._estimate_effect_fn, effect_modifiers, num_quantiles
)
def interpret(self, method_name=None, **kwargs):
"""Interpret the causal estimate.
:param method_name: Method used (string) or a list of methods. If None, then the default for the specific estimator is used.
:param kwargs: Optional parameters that are directly passed to the interpreter method.
:returns: None
"""
if method_name is None:
method_name = self.estimator.interpret_method
method_name_arr = parse_state(method_name)
for method in method_name_arr:
interpreter = interpreters.get_class_object(method)
interpreter(self, **kwargs).interpret()
def __str__(self):
s = "*** Causal Estimate ***\n"
# No estimand was identified (identification failed)
if self.target_estimand is None:
return "Estimation failed! No relevant identified estimand available for this estimation method."
s += "\n## Identified estimand\n{0}".format(self.target_estimand.__str__(only_target_estimand=True))
s += "\n## Realized estimand\n{0}".format(self.realized_estimand_expr)
if hasattr(self, "estimator"):
s += "\nTarget units: {0}\n".format(self.estimator.target_units_tostr())
s += "\n## Estimate\n"
s += "Mean value: {0}\n".format(self.value)
s += ""
if hasattr(self, "cate_estimates"):
s += "Effect estimates: {0}\n".format(self.cate_estimates)
if hasattr(self, "estimator"):
if self.estimator._significance_test:
s += "p-value: {0}\n".format(self.estimator.signif_results_tostr(self.test_stat_significance()))
if self.estimator._confidence_intervals:
s += "{0}% confidence interval: {1}\n".format(
100 * self.estimator.confidence_level, self.get_confidence_intervals()
)
if self.conditional_estimates is not None:
s += "### Conditional Estimates\n"
s += str(self.conditional_estimates)
if self.effect_strength is not None:
s += "\n## Effect Strength\n"
s += "Change in outcome attributable to treatment: {}\n".format(self.effect_strength["fraction-effect"])
# s += "Variance in outcome explained by treatment: {}\n".format(self.effect_strength["r-squared"])
return s
class RealizedEstimand(object):
def __init__(self, identified_estimand, estimator_name):
self.treatment_variable = identified_estimand.treatment_variable
self.outcome_variable = identified_estimand.outcome_variable
self.backdoor_variables = identified_estimand.get_backdoor_variables()
self.instrumental_variables = identified_estimand.instrumental_variables
self.estimand_type = identified_estimand.estimand_type
self.estimand_expression = None
self.assumptions = None
self.estimator_name = estimator_name
def update_assumptions(self, estimator_assumptions):
self.assumptions = estimator_assumptions
def update_estimand_expression(self, estimand_expression):
self.estimand_expression = estimand_expression
def __str__(self):
s = "Realized estimand: {0}\n".format(self.estimator_name)
s += "Realized estimand type: {0}\n".format(self.estimand_type)
s += "Estimand expression:\n{0}\n".format(sp.pretty(self.estimand_expression))
j = 1
for ass_name, ass_str in self.assumptions.items():
s += "Estimand assumption {0}, {1}: {2}\n".format(j, ass_name, ass_str)
j += 1
return s
| import copy
import logging
from collections import namedtuple
from typing import Dict, List, Optional, Union
import numpy as np
import pandas as pd
import sympy as sp
from sklearn.utils import resample
import dowhy.interpreters as interpreters
from dowhy.causal_identifier.identified_estimand import IdentifiedEstimand
from dowhy.utils.api import parse_state
logger = logging.getLogger(__name__)
class CausalEstimator:
"""Base class for an estimator of causal effect.
Subclasses implement different estimation methods. All estimation methods are in the package "dowhy.causal_estimators"
"""
# The default number of simulations for statistical testing
DEFAULT_NUMBER_OF_SIMULATIONS_STAT_TEST = 1000
# The default number of simulations to obtain confidence intervals
DEFAULT_NUMBER_OF_SIMULATIONS_CI = 100
# The portion of the total size that should be taken each time to find the confidence intervals
# 1 is the recommended value
# https://ocw.mit.edu/courses/mathematics/18-05-introduction-to-probability-and-statistics-spring-2014/readings/MIT18_05S14_Reading24.pdf
# https://projecteuclid.org/download/pdf_1/euclid.ss/1032280214
DEFAULT_SAMPLE_SIZE_FRACTION = 1
# The default Confidence Level
DEFAULT_CONFIDENCE_LEVEL = 0.95
# Number of quantiles to discretize continuous columns, for applying groupby
NUM_QUANTILES_TO_DISCRETIZE_CONT_COLS = 5
# Prefix to add to temporary categorical variables created after discretization
TEMP_CAT_COLUMN_PREFIX = "__categorical__"
DEFAULT_NOTIMPLEMENTEDERROR_MSG = "not yet implemented for {0}. If you would like this to be implemented in the next version, please raise an issue at https://github.com/microsoft/dowhy/issues"
BootstrapEstimates = namedtuple("BootstrapEstimates", ["estimates", "params"])
DEFAULT_INTERPRET_METHOD = ["textual_effect_interpreter"]
# std args to be removed from locals() before being passed to args_dict
_STD_INIT_ARGS = ("self", "__class__", "args", "kwargs")
def __init__(
self,
identified_estimand: IdentifiedEstimand,
test_significance: bool = False,
evaluate_effect_strength: bool = False,
confidence_intervals: bool = False,
num_null_simulations: int = DEFAULT_NUMBER_OF_SIMULATIONS_STAT_TEST,
num_simulations: int = DEFAULT_NUMBER_OF_SIMULATIONS_CI,
sample_size_fraction: int = DEFAULT_SAMPLE_SIZE_FRACTION,
confidence_level: float = DEFAULT_CONFIDENCE_LEVEL,
need_conditional_estimates: Union[bool, str] = "auto",
num_quantiles_to_discretize_cont_cols: int = NUM_QUANTILES_TO_DISCRETIZE_CONT_COLS,
**_,
):
"""Initializes an estimator with data and names of relevant variables.
This method is called from the constructors of its child classes.
:param identified_estimand: probability expression
representing the target identified estimand to estimate.
:param test_significance: Binary flag or a string indicating whether to test significance and by which method. All estimators support test_significance="bootstrap" that estimates a p-value for the obtained estimate using the bootstrap method. Individual estimators can override this to support custom testing methods. The bootstrap method supports an optional parameter, num_null_simulations. If False, no testing is done. If True, significance of the estimate is tested using the custom method if available, otherwise by bootstrap.
:param evaluate_effect_strength: (Experimental) whether to evaluate the strength of effect
:param confidence_intervals: Binary flag or a string indicating whether the confidence intervals should be computed and which method should be used. All methods support estimation of confidence intervals using the bootstrap method by using the parameter confidence_intervals="bootstrap". The bootstrap method takes in two arguments (num_simulations and sample_size_fraction) that can be optionally specified in the params dictionary. Estimators may also override this to implement their own confidence interval method. If this parameter is False, no confidence intervals are computed. If True, confidence intervals are computed by the estimator's specific method if available, otherwise through bootstrap.
:param num_null_simulations: The number of simulations for testing the
statistical significance of the estimator
:param num_simulations: The number of simulations for finding the
confidence interval (and/or standard error) for a estimate
:param sample_size_fraction: The size of the sample for the bootstrap
estimator
:param confidence_level: The confidence level of the confidence
interval estimate
:param need_conditional_estimates: Boolean flag indicating whether
conditional estimates should be computed. Defaults to True if
there are effect modifiers in the graph
:param num_quantiles_to_discretize_cont_cols: The number of quantiles
into which a numeric effect modifier is split, to enable
estimation of conditional treatment effect over it.
:param kwargs: (optional) Additional estimator-specific parameters
:returns: an instance of the estimator class.
"""
self._target_estimand = identified_estimand
self._significance_test = test_significance
self._effect_strength_eval = evaluate_effect_strength
self._confidence_intervals = confidence_intervals
# Setting the default interpret method
self.interpret_method = CausalEstimator.DEFAULT_INTERPRET_METHOD
self.logger = logging.getLogger(__name__)
# Check if some parameters were set, otherwise set to default values
self.num_null_simulations = num_null_simulations
self.num_simulations = num_simulations
self.sample_size_fraction = sample_size_fraction
self.confidence_level = confidence_level
self.num_quantiles_to_discretize_cont_cols = num_quantiles_to_discretize_cont_cols
# Estimate conditional estimates by default
self.need_conditional_estimates = need_conditional_estimates
self._bootstrap_estimates = None
self._bootstrap_null_estimates = None
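# Two-step pattern introduced by this refactor (a sketch; the concrete
# subclass and column names are hypothetical, and the fit() signature is
# assumed from the bootstrap helper below):
#
#     estimator = SomeEstimator(identified_estimand=estimand)
#     estimator.fit(df, ["v0"], ["y"], effect_modifier_names=["w0"])
#     estimate = estimator.estimate_effect(control_value=0, treatment_value=1)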
def _set_data(self, data: pd.DataFrame, treatment_name: List[str], outcome_name: List[str]):
"""Sets the data for the estimator
:param data: data frame containing the data
:param treatment_name: name of the treatment variable
:param outcome_name: name of the outcome variable
"""
self._data = data
self._treatment_name = treatment_name
self._outcome_name = outcome_name[0]
self._treatment = self._data[self._treatment_name]
self._outcome = self._data[self._outcome_name]
def _set_effect_modifiers(self, effect_modifier_names: Optional[List[str]] = None):
"""Sets the effect modifiers for the estimator
Updates need_conditional_estimates according to the presence of effect modifiers.
:param effect_modifier_names: Variables on which to compute separate
effects, or return a heterogeneous effect function. Not all
methods support this currently.
"""
self._effect_modifiers = effect_modifier_names
if effect_modifier_names is not None:
self._effect_modifier_names = [cname for cname in effect_modifier_names if cname in self._data.columns]
if len(self._effect_modifier_names) > 0:
self._effect_modifiers = self._data[self._effect_modifier_names]
self._effect_modifiers = pd.get_dummies(self._effect_modifiers, drop_first=True)
self.logger.debug("Effect modifiers: " + ",".join(self._effect_modifier_names))
else:
self._effect_modifier_names = []
else:
self._effect_modifier_names = []
self.need_conditional_estimates = (
self.need_conditional_estimates
if self.need_conditional_estimates != "auto"
else (self._effect_modifier_names and len(self._effect_modifier_names) > 0)
)
def _set_identified_estimand(self, new_identified_estimand):
"""Method used internally to change the target estimand (required by some refuters)
:param new_identified_estimand: The new target_estimand to use
"""
self._target_estimand = new_identified_estimand
def get_new_estimator_object(
self,
identified_estimand,
test_significance=False,
evaluate_effect_strength=False,
confidence_intervals=None,
):
"""Create a new estimator of the same type as the one passed in the estimate argument.
Creates a new object with the identified_estimand
:param identified_estimand: IdentifiedEstimand
An instance of the identified estimand class that provides the information with
respect to which causal pathways are employed when the treatment affects the outcome
:returns: A new instance of the same estimator class.
"""
new_estimator = copy.deepcopy(self)
new_estimator._target_estimand = identified_estimand
new_estimator._test_significance = test_significance
new_estimator._evaluate_effect_strength = evaluate_effect_strength
new_estimator._confidence_intervals = (
self._confidence_intervals if confidence_intervals is None else confidence_intervals
)
return new_estimator
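# Refutation-style reuse sketch (hypothetical names), mirroring the
# bootstrap helper below:
#
#     new_est = estimator.get_new_estimator_object(new_estimand)
#     new_est.fit(new_df, new_estimand.treatment_variable,
#                 new_estimand.outcome_variable)
#     new_estimate = new_est.estimate_effect()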
def estimate_effect_naive(self):
# TODO Only works for binary treatment
df_withtreatment = self._data.loc[self._data[self._treatment_name] == 1]
df_notreatment = self._data.loc[self._data[self._treatment_name] == 0]
est = np.mean(df_withtreatment[self._outcome_name]) - np.mean(df_notreatment[self._outcome_name])
return CausalEstimate(est, None, None, control_value=0, treatment_value=1)
def _estimate_effect_fn(self, data_df):
"""Function used in conditional effect estimation. This function is to be overridden by each child estimator.
The overridden function should take in a dataframe as input and return the estimate for that data.
"""
raise NotImplementedError(
("Conditional treatment effects are " + CausalEstimator.DEFAULT_NOTIMPLEMENTEDERROR_MSG).format(
self.__class__
)
)
def _estimate_conditional_effects(self, estimate_effect_fn, effect_modifier_names=None, num_quantiles=None):
"""Estimate conditional treatment effects. Common method for all estimators that utilizes a specific estimate_effect_fn implemented by each child estimator.
If a numeric effect modifier is provided, it is discretized into quantile bins. If you would like a custom discretization, you can do so yourself: create a new column containing the discretized effect modifier and then include that column's name in the effect_modifier_names argument.
:param estimate_effect_fn: Function that has a single parameter (a data frame) and returns the treatment effect estimate on that data.
:param effect_modifier_names: Names of effect modifier variables over which the conditional effects will be estimated. If not provided, defaults to the effect modifiers specified during creation of the CausalEstimator object.
:param num_quantiles: The number of quantiles into which a numeric effect modifier variable is discretized. Does not affect any categorical effect modifiers.
:returns: A (multi-index) dataframe that provides separate effects for each value of the (discretized) effect modifiers.
"""
# Defaulting to class default values if parameters are not provided
if effect_modifier_names is None:
effect_modifier_names = self._effect_modifier_names
if num_quantiles is None:
num_quantiles = self.num_quantiles_to_discretize_cont_cols
# Checking that there is at least one effect modifier
if not effect_modifier_names:
raise ValueError("At least one effect modifier should be specified to compute conditional effects.")
# Making sure that effect_modifier_names is a list
effect_modifier_names = parse_state(effect_modifier_names)
if not all(em in self._effect_modifier_names for em in effect_modifier_names):
self.logger.warning(
"At least one of the provided effect modifiers was not included while fitting the estimator. You may get incorrect results. To resolve, fit the estimator again by providing the updated effect modifiers in estimate_effect()."
)
# Making a copy since we are going to be changing effect modifier names
effect_modifier_names = effect_modifier_names.copy()
prefix = CausalEstimator.TEMP_CAT_COLUMN_PREFIX
# For every numeric effect modifier, adding a temp categorical column
for i in range(len(effect_modifier_names)):
em = effect_modifier_names[i]
if pd.api.types.is_numeric_dtype(self._data[em].dtypes):
self._data[prefix + str(em)] = pd.qcut(self._data[em], num_quantiles, duplicates="drop")
effect_modifier_names[i] = prefix + str(em)
# Grouping by effect modifiers and computing effect separately
by_effect_mods = self._data.groupby(effect_modifier_names)
cond_est_fn = lambda x: self._do(self._treatment_value, x) - self._do(self._control_value, x)
conditional_estimates = by_effect_mods.apply(estimate_effect_fn)
# Deleting the temporary categorical columns
for em in effect_modifier_names:
if em.startswith(prefix):
self._data.pop(em)
return conditional_estimates
def _do(self, x, data_df=None):
raise NotImplementedError(
("Do-operator is " + CausalEstimator.DEFAULT_NOTIMPLEMENTEDERROR_MSG).format(self.__class__)
)
def do(self, x, data_df=None):
"""Method that implements the do-operator.
Given a value x for the treatment, returns the expected value of the outcome when the treatment is intervened to a value x.
:param x: Value of the treatment
:param data_df: Data on which the do-operator is to be applied.
:returns: Value of the outcome when treatment is intervened/set to x.
"""
est = self._do(x, data_df)
return est
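    # Minimal usage sketch (hypothetical `estimator`): for estimators that
    # implement the do-operator, interventional means can be compared directly:
    #
    #     mean_treated = estimator.do(1)  # E[Y | do(T=1)]
    #     mean_control = estimator.do(0)  # E[Y | do(T=0)]
    #     ate = mean_treated - mean_control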
def construct_symbolic_estimator(self, estimand):
raise NotImplementedError(("Symbolic estimator string is ").format(self.__class__))
def _generate_bootstrap_estimates(self, num_bootstrap_simulations, sample_size_fraction):
"""Helper function to generate causal estimates over bootstrapped samples.
:param num_bootstrap_simulations: Number of simulations for the bootstrap method.
:param sample_size_fraction: Fraction of the dataset to be resampled.
:returns: A collections.namedtuple containing a list of bootstrapped estimates and a dictionary containing parameters used for the bootstrap.
"""
# The array that stores the results of all estimations
simulation_results = np.zeros(num_bootstrap_simulations)
        # Compute the sample size as a fraction of the population size
sample_size = int(sample_size_fraction * len(self._data))
if sample_size > len(self._data):
self.logger.warning("WARN: The sample size is greater than the data being sampled")
self.logger.info("INFO: The sample size: {}".format(sample_size))
self.logger.info("INFO: The number of simulations: {}".format(num_bootstrap_simulations))
# Perform the set number of simulations
for index in range(num_bootstrap_simulations):
new_data = resample(self._data, n_samples=sample_size)
new_estimator = self.get_new_estimator_object(
self._target_estimand,
# names of treatment and outcome
test_significance=False,
evaluate_effect_strength=False,
confidence_intervals=False,
)
new_estimator.fit(
new_data,
self._target_estimand.treatment_variable,
self._target_estimand.outcome_variable,
effect_modifier_names=self._effect_modifier_names,
)
new_effect = new_estimator.estimate_effect(
treatment_value=self._treatment_value,
control_value=self._control_value,
target_units=self._target_units,
)
simulation_results[index] = new_effect.value
estimates = CausalEstimator.BootstrapEstimates(
simulation_results,
{"num_simulations": num_bootstrap_simulations, "sample_size_fraction": sample_size_fraction},
)
return estimates
def _estimate_confidence_intervals_with_bootstrap(
self, estimate_value, confidence_level=None, num_simulations=None, sample_size_fraction=None
):
"""
Method to compute confidence interval using bootstrapped sampling.
:param estimate_value: obtained estimate's value
:param confidence_level: The level for which to compute CI (e.g., 95% confidence level translates to confidence_level=0.95)
:param num_simulations: The number of simulations to be performed to get the bootstrap confidence intervals.
:param sample_size_fraction: The fraction of the dataset to be resampled.
:returns: confidence interval at the specified level.
For more details on bootstrap or resampling statistics, refer to the following links:
https://ocw.mit.edu/courses/mathematics/18-05-introduction-to-probability-and-statistics-spring-2014/readings/MIT18_05S14_Reading24.pdf
https://projecteuclid.org/download/pdf_1/euclid.ss/1032280214
"""
# Using class default parameters if not specified
if num_simulations is None:
num_simulations = self.num_simulations
if sample_size_fraction is None:
sample_size_fraction = self.sample_size_fraction
# Checking if bootstrap_estimates are already computed
if self._bootstrap_estimates is None:
self._bootstrap_estimates = self._generate_bootstrap_estimates(num_simulations, sample_size_fraction)
elif CausalEstimator.is_bootstrap_parameter_changed(self._bootstrap_estimates.params, locals()):
            # Check if any parameter changed from the previous estimate
self._bootstrap_estimates = self._generate_bootstrap_estimates(num_simulations, sample_size_fraction)
# Now use the data obtained from the simulations to get the value of the confidence estimates
bootstrap_estimates = self._bootstrap_estimates.estimates
# Get the variations of each bootstrap estimate and sort
bootstrap_variations = [bootstrap_estimate - estimate_value for bootstrap_estimate in bootstrap_estimates]
sorted_bootstrap_variations = np.sort(bootstrap_variations)
        # Now we take the (1 - p)th and the (p)th variations, where p is the chosen confidence level
upper_bound_index = int((1 - confidence_level) * len(sorted_bootstrap_variations))
lower_bound_index = int(confidence_level * len(sorted_bootstrap_variations))
# Get the lower and upper bounds by subtracting the variations from the estimate
lower_bound = estimate_value - sorted_bootstrap_variations[lower_bound_index]
upper_bound = estimate_value - sorted_bootstrap_variations[upper_bound_index]
return lower_bound, upper_bound
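    # Standalone sketch of the interval computation above, assuming `boot` is a
    # numpy array of bootstrap estimates and `est` the point estimate:
    #
    #     variations = np.sort(boot - est)
    #     n = len(variations)
    #     lower = est - variations[int(0.95 * n)]        # confidence_level=0.95
    #     upper = est - variations[int((1 - 0.95) * n)]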
def _estimate_confidence_intervals(self, confidence_level=None, method=None, **kwargs):
"""
        This method is to be overridden by the child classes, so that they
can run a confidence interval estimation method suited to the specific
causal estimator.
"""
raise NotImplementedError(
(
"This method for estimating confidence intervals is "
+ CausalEstimator.DEFAULT_NOTIMPLEMENTEDERROR_MSG
+ " Meanwhile, you can try the bootstrap method (method='bootstrap') to estimate confidence intervals."
).format(self.__class__)
)
def estimate_confidence_intervals(self, estimate_value, confidence_level=None, method=None, **kwargs):
"""Find the confidence intervals corresponding to any estimator
By default, this is done with the help of bootstrapped confidence intervals
but can be overridden if the specific estimator implements other methods of estimating confidence intervals.
If the method provided is not bootstrap, this function calls the implementation of the specific estimator.
:param estimate_value: obtained estimate's value
:param method: Method for estimating confidence intervals.
:param confidence_level: The confidence level of the confidence intervals of the estimate.
:param kwargs: Other optional args to be passed to the CI method.
:returns: The obtained confidence interval.
"""
if method is None:
if self._confidence_intervals:
method = self._confidence_intervals # this is either True or methodname
else:
method = "default"
confidence_intervals = None
if confidence_level is None:
confidence_level = self.confidence_level
if method == "default" or method is True: # user has not provided any method
try:
confidence_intervals = self._estimate_confidence_intervals(confidence_level, method=method, **kwargs)
except NotImplementedError:
confidence_intervals = self._estimate_confidence_intervals_with_bootstrap(
estimate_value, confidence_level, **kwargs
)
else:
if method == "bootstrap":
confidence_intervals = self._estimate_confidence_intervals_with_bootstrap(
estimate_value, confidence_level, **kwargs
)
else:
confidence_intervals = self._estimate_confidence_intervals(confidence_level, method=method, **kwargs)
return confidence_intervals
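    # For example (illustrative), the bootstrap path can be forced and tuned by
    # forwarding keyword arguments to the bootstrap routine:
    #
    #     ci = estimator.estimate_confidence_intervals(
    #         estimate.value, confidence_level=0.9,
    #         method="bootstrap", num_simulations=200)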
def _estimate_std_error_with_bootstrap(self, num_simulations=None, sample_size_fraction=None):
"""Compute standard error using the bootstrap method. Standard error
and confidence intervals use the same parameter num_simulations for
the number of bootstrap simulations.
:param num_simulations: Number of bootstrapped samples.
:param sample_size_fraction: Fraction of data to be resampled.
:returns: Standard error of the obtained estimate.
"""
# Use existing params, if new user defined params are not present
if num_simulations is None:
num_simulations = self.num_simulations
if sample_size_fraction is None:
sample_size_fraction = self.sample_size_fraction
# Checking if bootstrap_estimates are already computed
if self._bootstrap_estimates is None:
self._bootstrap_estimates = self._generate_bootstrap_estimates(num_simulations, sample_size_fraction)
elif CausalEstimator.is_bootstrap_parameter_changed(self._bootstrap_estimates.params, locals()):
# Check if any parameter is changed from the previous std error estimate
self._bootstrap_estimates = self._generate_bootstrap_estimates(num_simulations, sample_size_fraction)
std_error = np.std(self._bootstrap_estimates.estimates)
return std_error
def _estimate_std_error(self, method=None, **kwargs):
"""
        This method is to be overridden by the child classes, so that they
can run a standard error estimation method suited to the specific
causal estimator.
"""
raise NotImplementedError(
(
"This method for estimating standard errors is "
+ CausalEstimator.DEFAULT_NOTIMPLEMENTEDERROR_MSG
+ " Meanwhile, you can try the bootstrap method (method='bootstrap') to estimate standard errors."
).format(self.__class__)
)
def estimate_std_error(self, method=None, **kwargs):
"""Compute standard error of an obtained causal estimate.
:param method: Method for computing the standard error.
:param kwargs: Other optional parameters to be passed to the estimating method.
:returns: Standard error of the causal estimate.
"""
if method is None:
if self._confidence_intervals:
method = self._confidence_intervals
else:
method = "default"
std_error = None
if method == "default" or method is True: # user has not provided any method
try:
std_error = self._estimate_std_error(method, **kwargs)
except NotImplementedError:
std_error = self._estimate_std_error_with_bootstrap(**kwargs)
else:
if method == "bootstrap":
std_error = self._estimate_std_error_with_bootstrap(**kwargs)
else:
std_error = self._estimate_std_error(method, **kwargs)
return std_error
def _test_significance_with_bootstrap(self, estimate_value, num_null_simulations=None):
"""Test statistical significance of an estimate using the bootstrap method.
:param estimate_value: Obtained estimate's value
:param num_null_simulations: Number of simulations for the null hypothesis
:returns: p-value of the statistical significance test.
"""
# Use existing params, if new user defined params are not present
if num_null_simulations is None:
num_null_simulations = self.num_null_simulations
do_retest = self._bootstrap_null_estimates is None or CausalEstimator.is_bootstrap_parameter_changed(
self._bootstrap_null_estimates.params, locals()
)
if do_retest:
null_estimates = np.zeros(num_null_simulations)
for i in range(num_null_simulations):
new_outcome = np.random.permutation(self._outcome)
new_data = self._data.assign(dummy_outcome=new_outcome)
# self._outcome = self._data["dummy_outcome"]
new_estimator = self.get_new_estimator_object(
self._target_estimand,
test_significance=False,
evaluate_effect_strength=False,
confidence_intervals=False,
)
new_estimator.fit(
data=new_data,
treatment_name=self._target_estimand.treatment_variable,
outcome_name=("dummy_outcome",),
effect_modifier_names=self._effect_modifier_names,
)
new_effect = new_estimator.estimate_effect(
target_units=self._target_units,
)
null_estimates[i] = new_effect.value
self._bootstrap_null_estimates = CausalEstimator.BootstrapEstimates(
null_estimates, {"num_null_simulations": num_null_simulations, "sample_size_fraction": 1}
)
# Processing the null hypothesis estimates
sorted_null_estimates = np.sort(self._bootstrap_null_estimates.estimates)
self.logger.debug("Null estimates: {0}".format(sorted_null_estimates))
median_estimate = sorted_null_estimates[int(num_null_simulations / 2)]
# Doing a two-sided test
if estimate_value > median_estimate:
# Being conservative with the p-value reported
estimate_index = np.searchsorted(sorted_null_estimates, estimate_value, side="left")
p_value = 1 - (estimate_index / num_null_simulations)
        else:  # estimate_value <= median_estimate
# Being conservative with the p-value reported
estimate_index = np.searchsorted(sorted_null_estimates, estimate_value, side="right")
p_value = estimate_index / num_null_simulations
# If the estimate_index is 0, it depends on the number of simulations
if p_value == 0:
p_value = (0, 1 / len(sorted_null_estimates)) # a tuple determining the range.
elif p_value == 1:
p_value = (1 - 1 / len(sorted_null_estimates), 1)
signif_dict = {"p_value": p_value}
return signif_dict
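    # Standalone sketch of the p-value logic above, assuming `null_ests` is the
    # sorted numpy array of permutation-null estimates and `est` the estimate:
    #
    #     n = len(null_ests)
    #     if est > null_ests[n // 2]:  # right tail
    #         p = 1 - np.searchsorted(null_ests, est, side="left") / n
    #     else:                        # left tail
    #         p = np.searchsorted(null_ests, est, side="right") / n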
def _test_significance(self, estimate_value, method=None, **kwargs):
"""
        This method is to be overridden by the child classes, so that they
can run a significance test suited to the specific
causal estimator.
"""
raise NotImplementedError(
(
"This method for testing statistical significance is "
+ CausalEstimator.DEFAULT_NOTIMPLEMENTEDERROR_MSG
+ " Meanwhile, you can try the bootstrap method (method='bootstrap') to test statistical significance."
).format(self.__class__)
)
def test_significance(self, estimate_value, method=None, **kwargs):
"""Test statistical significance of obtained estimate.
        By default, uses resampling to create a non-parametric significance test.
        This is a general procedure; individual child estimators can implement different methods.
If the method name is different from "bootstrap", this function calls the
implementation of the child estimator.
:param self: object instance of class Estimator
:param estimate_value: obtained estimate's value
:param method: Method for checking statistical significance
:returns: p-value from the significance test
"""
if method is None:
if self._significance_test:
method = self._significance_test # this is either True or methodname
else:
method = "default"
signif_dict = None
if method == "default" or method is True: # user has not provided any method
try:
signif_dict = self._test_significance(estimate_value, method, **kwargs)
except NotImplementedError:
signif_dict = self._test_significance_with_bootstrap(estimate_value, **kwargs)
else:
if method == "bootstrap":
signif_dict = self._test_significance_with_bootstrap(estimate_value, **kwargs)
else:
signif_dict = self._test_significance(estimate_value, method, **kwargs)
return signif_dict
def evaluate_effect_strength(self, estimate):
fraction_effect_explained = self._evaluate_effect_strength(estimate, method="fraction-effect")
# Need to test r-squared before supporting
# effect_r_squared = self._evaluate_effect_strength(estimate, method="r-squared")
strength_dict = {
"fraction-effect": fraction_effect_explained
# 'r-squared': effect_r_squared
}
return strength_dict
def _evaluate_effect_strength(self, estimate, method="fraction-effect"):
supported_methods = ["fraction-effect"]
if method not in supported_methods:
raise NotImplementedError("This method is not supported for evaluating effect strength")
if method == "fraction-effect":
naive_obs_estimate = self.estimate_effect_naive()
            self.logger.debug("Causal estimate: %s, naive estimate: %s", estimate.value, naive_obs_estimate.value)
fraction_effect_explained = estimate.value / naive_obs_estimate.value
return fraction_effect_explained
# elif method == "r-squared":
# outcome_mean = np.mean(self._outcome)
# total_variance = np.sum(np.square(self._outcome - outcome_mean))
# Assuming a linear model with one variable: the treatment
# Currently only works for continuous y
# causal_model = outcome_mean + estimate.value*self._treatment
# squared_residual = np.sum(np.square(self._outcome - causal_model))
# r_squared = 1 - (squared_residual/total_variance)
# return r_squared
else:
return None
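    # Interpretation sketch (hypothetical `estimator` and `estimate`): a
    # fraction-effect of 0.5 means the causal estimate is half of the naive
    # unadjusted difference in means, i.e., adjustment removed half of the
    # observed association:
    #
    #     strength = estimator.evaluate_effect_strength(estimate)
    #     strength["fraction-effect"]  # = estimate.value / naive_estimate.value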
def update_input(self, treatment_value, control_value, target_units):
self._control_value = control_value
self._treatment_value = treatment_value
self._target_units = target_units
@staticmethod
def is_bootstrap_parameter_changed(bootstrap_estimates_params, given_params):
"""Check whether parameters of the bootstrap have changed.
This is an efficiency method that checks if fresh resampling of the bootstrap samples is required.
Returns True if parameters have changed and resampling should be done again.
:param bootstrap_estimates_params: A dictionary of parameters for the current bootstrap samples
:param given_params: A dictionary of parameters passed by the user
:returns: A binary flag denoting whether the parameters are different.
"""
is_any_parameter_changed = False
for prm, val in bootstrap_estimates_params.items():
given_val = given_params.get(prm, None)
if given_val is not None and given_val != val:
is_any_parameter_changed = True
break
return is_any_parameter_changed
def target_units_tostr(self):
s = ""
if type(self._target_units) is str:
s += self._target_units
elif callable(self._target_units):
s += "Data subset defined by a function"
elif isinstance(self._target_units, pd.DataFrame):
s += "Data subset provided as a data frame"
return s
def signif_results_tostr(self, signif_results):
s = ""
pval = signif_results["p_value"]
if type(pval) is tuple:
s += "[{0}, {1}]".format(pval[0], pval[1])
else:
s += "{0}".format(pval)
return s
def estimate_effect(
data: pd.DataFrame,
treatment: Union[str, List[str]],
outcome: Union[str, List[str]],
identifier_name: str,
estimator: CausalEstimator,
control_value: int = 0,
treatment_value: int = 1,
target_units: str = "ate",
effect_modifiers: List[str] = None,
fit_estimator: bool = True,
method_params: Optional[Dict] = None,
):
"""Estimate the identified causal effect.
In addition, you can directly call any of the EconML estimation methods. The convention is "backdoor.econml.path-to-estimator-class". For example, for the double machine learning estimator ("DML" class) that is located inside "dml" module of EconML, you can use the method name, "backdoor.econml.dml.DML". CausalML estimators can also be called. See `this demo notebook <https://py-why.github.io/dowhy/example_notebooks/dowhy-conditional-treatment-effects.html>`_.
    :param data: Data frame containing the data
    :param treatment: Name of the treatment
    :param outcome: Name of the outcome
    :param identifier_name: Name of the identification method (e.g., "backdoor")
        used to select the target estimand from the estimator's identified estimand
    :param estimator: Instance of a CausalEstimator to use
:param control_value: Value of the treatment in the control group, for effect estimation. If treatment is multi-variate, this can be a list.
:param treatment_value: Value of the treatment in the treated group, for effect estimation. If treatment is multi-variate, this can be a list.
:param target_units: (Experimental) The units for which the treatment effect should be estimated. This can be of three types. (1) a string for common specifications of target units (namely, "ate", "att" and "atc"), (2) a lambda function that can be used as an index for the data (pandas DataFrame), or (3) a new DataFrame that contains values of the effect_modifiers and effect will be estimated only for this new data.
:param effect_modifiers: Names of effect modifier variables can be (optionally) specified here too, since they do not affect identification. If None, the effect_modifiers from the CausalModel are used.
    :param fit_estimator: Boolean flag on whether to fit the estimator.
        Setting it to False is useful to estimate the effect on new data using a previously fitted estimator.
    :param method_params: Optional dictionary of method-specific parameters. If it contains a "fit_params" entry, that dictionary is passed as keyword arguments to the estimator's fit() method.
:returns: An instance of the CausalEstimate class, containing the causal effect estimate
and other method-dependent information
"""
if effect_modifiers is None:
effect_modifiers = []
treatment = parse_state(treatment)
outcome = parse_state(outcome)
causal_estimator_class = estimator.__class__
identified_estimand = estimator._target_estimand
identified_estimand.set_identifier_method(identifier_name)
if identified_estimand.no_directed_path:
logger.warning("No directed path from {0} to {1}.".format(treatment, outcome))
return CausalEstimate(
0, identified_estimand, None, control_value=control_value, treatment_value=treatment_value
)
# Check if estimator's target estimand is identified
elif identified_estimand.estimands[identifier_name] is None:
logger.error("No valid identified estimand available.")
return CausalEstimate(None, None, None, control_value=control_value, treatment_value=treatment_value)
if fit_estimator:
estimator.fit(
data=data,
treatment_name=treatment,
outcome_name=outcome,
effect_modifier_names=effect_modifiers,
**method_params["fit_params"] if "fit_params" in method_params else {},
)
estimate = estimator.estimate_effect(
treatment_value=treatment_value,
control_value=control_value,
target_units=target_units,
confidence_intervals=estimator._confidence_intervals,
)
# Store parameters inside estimate object for refutation methods
# TODO: This add_params needs to move to the estimator class
# inside estimate_effect and estimate_conditional_effect
estimate.add_params(
estimand_type=identified_estimand.estimand_type,
estimator_class=causal_estimator_class,
test_significance=estimator._significance_test,
evaluate_effect_strength=estimator._effect_strength_eval,
confidence_intervals=estimator._confidence_intervals,
target_units=target_units,
effect_modifiers=effect_modifiers,
)
if estimator._significance_test:
estimator.test_significance(estimate.value, method=estimator._significance_test)
if estimator._confidence_intervals:
estimator.estimate_confidence_intervals(
estimate.value, confidence_level=estimator.confidence_level, method=estimator._confidence_intervals
)
if estimator._effect_strength_eval:
effect_strength_dict = estimator.evaluate_effect_strength(estimate)
estimate.add_effect_strength(effect_strength_dict)
return estimate
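# Illustrative end-to-end sketch of this functional API. `MyEstimator` stands in
# for a concrete CausalEstimator subclass (the real ones live in
# dowhy.causal_estimators and may take extra constructor arguments), and
# `identified_estimand` is assumed to come from DoWhy's identification step:
#
#     estimator = MyEstimator(identified_estimand)
#     estimate = estimate_effect(
#         data=df,
#         treatment="v0",
#         outcome="y",
#         identifier_name="backdoor",
#         estimator=estimator,
#         target_units="ate",
#     )
#     print(estimate.value)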
class CausalEstimate:
"""Class for the estimate object that every causal estimator returns"""
def __init__(
self,
estimate,
target_estimand,
realized_estimand_expr,
control_value,
treatment_value,
conditional_estimates=None,
**kwargs,
):
self.value = estimate
self.target_estimand = target_estimand
self.realized_estimand_expr = realized_estimand_expr
self.control_value = control_value
self.treatment_value = treatment_value
self.conditional_estimates = conditional_estimates
self.params = kwargs
if self.params is not None:
for key, value in self.params.items():
setattr(self, key, value)
self.effect_strength = None
def add_estimator(self, estimator_instance):
self.estimator = estimator_instance
def add_effect_strength(self, strength_dict):
self.effect_strength = strength_dict
def add_params(self, **kwargs):
self.params.update(kwargs)
def get_confidence_intervals(self, confidence_level=None, method=None, **kwargs):
"""Get confidence intervals of the obtained estimate.
By default, this is done with the help of bootstrapped confidence intervals
but can be overridden if the specific estimator implements other methods of estimating confidence intervals.
If the method provided is not bootstrap, this function calls the implementation of the specific estimator.
:param method: Method for estimating confidence intervals.
:param confidence_level: The confidence level of the confidence intervals of the estimate.
:param kwargs: Other optional args to be passed to the CI method.
:returns: The obtained confidence interval.
"""
confidence_intervals = self.estimator.estimate_confidence_intervals(
estimate_value=self.value, confidence_level=confidence_level, method=method, **kwargs
)
return confidence_intervals
def get_standard_error(self, method=None, **kwargs):
"""Get standard error of the obtained estimate.
By default, this is done with the help of bootstrapped standard errors
but can be overridden if the specific estimator implements other methods of estimating standard error.
If the method provided is not bootstrap, this function calls the implementation of the specific estimator.
:param method: Method for computing the standard error.
:param kwargs: Other optional parameters to be passed to the estimating method.
:returns: Standard error of the causal estimate.
"""
std_error = self.estimator.estimate_std_error(method=method, **kwargs)
return std_error
def test_stat_significance(self, method=None, **kwargs):
"""Test statistical significance of the estimate obtained.
By default, uses resampling to create a non-parametric significance test.
Individual child estimators can implement different methods.
If the method name is different from "bootstrap", this function calls the
implementation of the child estimator.
:param method: Method for checking statistical significance
:param kwargs: Other optional parameters to be passed to the estimating method.
:returns: p-value from the significance test
"""
signif_results = self.estimator.test_significance(self.value, method=method, **kwargs)
return {"p_value": signif_results["p_value"]}
def estimate_conditional_effects(
self, effect_modifiers=None, num_quantiles=CausalEstimator.NUM_QUANTILES_TO_DISCRETIZE_CONT_COLS
):
"""Estimate treatment effect conditioned on given variables.
If a numeric effect modifier is provided, it is discretized into quantile bins. If you would like a custom discretization, you can do so yourself: create a new column containing the discretized effect modifier and then include that column's name in the effect_modifier_names argument.
:param effect_modifiers: Names of effect modifier variables over which the conditional effects will be estimated. If not provided, defaults to the effect modifiers specified during creation of the CausalEstimator object.
:param num_quantiles: The number of quantiles into which a numeric effect modifier variable is discretized. Does not affect any categorical effect modifiers.
:returns: A (multi-index) dataframe that provides separate effects for each value of the (discretized) effect modifiers.
"""
return self.estimator._estimate_conditional_effects(
self.estimator._estimate_effect_fn, effect_modifiers, num_quantiles
)
def interpret(self, method_name=None, **kwargs):
"""Interpret the causal estimate.
:param method_name: Method used (string) or a list of methods. If None, then the default for the specific estimator is used.
        :param kwargs: Optional parameters that are directly passed to the interpreter method.
:returns: None
"""
if method_name is None:
method_name = self.estimator.interpret_method
method_name_arr = parse_state(method_name)
for method in method_name_arr:
interpreter = interpreters.get_class_object(method)
interpreter(self, **kwargs).interpret()
def __str__(self):
s = "*** Causal Estimate ***\n"
# No estimand was identified (identification failed)
if self.target_estimand is None:
return "Estimation failed! No relevant identified estimand available for this estimation method."
s += "\n## Identified estimand\n{0}".format(self.target_estimand.__str__(only_target_estimand=True))
s += "\n## Realized estimand\n{0}".format(self.realized_estimand_expr)
if hasattr(self, "estimator"):
s += "\nTarget units: {0}\n".format(self.estimator.target_units_tostr())
s += "\n## Estimate\n"
s += "Mean value: {0}\n".format(self.value)
if hasattr(self, "cate_estimates"):
s += "Effect estimates: {0}\n".format(self.cate_estimates)
if hasattr(self, "estimator"):
if self.estimator._significance_test:
s += "p-value: {0}\n".format(self.estimator.signif_results_tostr(self.test_stat_significance()))
if self.estimator._confidence_intervals:
s += "{0}% confidence interval: {1}\n".format(
100 * self.estimator.confidence_level, self.get_confidence_intervals()
)
if self.conditional_estimates is not None:
s += "### Conditional Estimates\n"
s += str(self.conditional_estimates)
if self.effect_strength is not None:
s += "\n## Effect Strength\n"
s += "Change in outcome attributable to treatment: {}\n".format(self.effect_strength["fraction-effect"])
# s += "Variance in outcome explained by treatment: {}\n".format(self.effect_strength["r-squared"])
return s
class RealizedEstimand(object):
def __init__(self, identified_estimand, estimator_name):
self.treatment_variable = identified_estimand.treatment_variable
self.outcome_variable = identified_estimand.outcome_variable
self.backdoor_variables = identified_estimand.get_backdoor_variables()
self.instrumental_variables = identified_estimand.instrumental_variables
self.estimand_type = identified_estimand.estimand_type
self.estimand_expression = None
self.assumptions = None
self.estimator_name = estimator_name
def update_assumptions(self, estimator_assumptions):
self.assumptions = estimator_assumptions
def update_estimand_expression(self, estimand_expression):
self.estimand_expression = estimand_expression
def __str__(self):
s = "Realized estimand: {0}\n".format(self.estimator_name)
s += "Realized estimand type: {0}\n".format(self.estimand_type)
s += "Estimand expression:\n{0}\n".format(sp.pretty(self.estimand_expression))
j = 1
for ass_name, ass_str in self.assumptions.items():
s += "Estimand assumption {0}, {1}: {2}\n".format(j, ass_name, ass_str)
j += 1
return s
| andresmor-ms | 11c4e0dafd6e824eb81ad14262457d954ae61468 | affe0952f4aba6845247355c171565510c2c1673 | This is a forward reference: since CausalEstimate is defined below the CausalEstimator class, we need to use quotes; otherwise we get a syntax error. https://peps.python.org/pep-0484/#forward-references | andresmor-ms | 198
py-why/dowhy | 746 | Functional api/causal estimators | * Introduce `fit()` method to estimators.
* Refactor constructors to avoid using `*args` and `**kwargs` and have more explicit parameters.
* Refactor refuters and other parts of the code to use `fit()` and modify arguments to `estimate_effect()` | null | 2022-11-04 16:15:39+00:00 | 2022-12-03 17:07:53+00:00 | dowhy/causal_estimator.py |
import logging
from collections import namedtuple
from typing import Dict, List, Optional, Union
import numpy as np
import pandas as pd
import sympy as sp
from sklearn.utils import resample
import dowhy.interpreters as interpreters
from dowhy import causal_estimators
from dowhy.causal_graph import CausalGraph
from dowhy.causal_identifier.identified_estimand import IdentifiedEstimand
from dowhy.utils.api import parse_state
logger = logging.getLogger(__name__)
class CausalEstimator:
"""Base class for an estimator of causal effect.
Subclasses implement different estimation methods. All estimation methods are in the package "dowhy.causal_estimators"
"""
# The default number of simulations for statistical testing
DEFAULT_NUMBER_OF_SIMULATIONS_STAT_TEST = 1000
# The default number of simulations to obtain confidence intervals
DEFAULT_NUMBER_OF_SIMULATIONS_CI = 100
# The portion of the total size that should be taken each time to find the confidence intervals
# 1 is the recommended value
# https://ocw.mit.edu/courses/mathematics/18-05-introduction-to-probability-and-statistics-spring-2014/readings/MIT18_05S14_Reading24.pdf
# https://projecteuclid.org/download/pdf_1/euclid.ss/1032280214
DEFAULT_SAMPLE_SIZE_FRACTION = 1
# The default Confidence Level
DEFAULT_CONFIDENCE_LEVEL = 0.95
# Number of quantiles to discretize continuous columns, for applying groupby
NUM_QUANTILES_TO_DISCRETIZE_CONT_COLS = 5
# Prefix to add to temporary categorical variables created after discretization
TEMP_CAT_COLUMN_PREFIX = "__categorical__"
    DEFAULT_NOTIMPLEMENTEDERROR_MSG = "not yet implemented for {0}. If you would like this to be implemented in the next version, please raise an issue at https://github.com/microsoft/dowhy/issues"
BootstrapEstimates = namedtuple("BootstrapEstimates", ["estimates", "params"])
DEFAULT_INTERPRET_METHOD = ["textual_effect_interpreter"]
# std args to be removed from locals() before being passed to args_dict
_STD_INIT_ARGS = ("self", "__class__", "args", "kwargs")
def __init__(
self,
data,
identified_estimand,
treatment,
outcome,
control_value=0,
treatment_value=1,
test_significance=False,
evaluate_effect_strength=False,
confidence_intervals=False,
target_units=None,
effect_modifiers=None,
num_null_simulations=DEFAULT_NUMBER_OF_SIMULATIONS_STAT_TEST,
num_simulations=DEFAULT_NUMBER_OF_SIMULATIONS_CI,
sample_size_fraction=DEFAULT_SAMPLE_SIZE_FRACTION,
confidence_level=DEFAULT_CONFIDENCE_LEVEL,
need_conditional_estimates="auto",
num_quantiles_to_discretize_cont_cols=NUM_QUANTILES_TO_DISCRETIZE_CONT_COLS,
**kwargs,
):
"""Initializes an estimator with data and names of relevant variables.
This method is called from the constructors of its child classes.
:param data: data frame containing the data
:param identified_estimand: probability expression
representing the target identified estimand to estimate.
:param treatment: name of the treatment variable
:param outcome: name of the outcome variable
:param control_value: Value of the treatment in the control group, for effect estimation. If treatment is multi-variate, this can be a list.
:param treatment_value: Value of the treatment in the treated group, for effect estimation. If treatment is multi-variate, this can be a list.
:param test_significance: Binary flag or a string indicating whether to test significance and by which method. All estimators support test_significance="bootstrap" that estimates a p-value for the obtained estimate using the bootstrap method. Individual estimators can override this to support custom testing methods. The bootstrap method supports an optional parameter, num_null_simulations. If False, no testing is done. If True, significance of the estimate is tested using the custom method if available, otherwise by bootstrap.
:param evaluate_effect_strength: (Experimental) whether to evaluate the strength of effect
:param confidence_intervals: Binary flag or a string indicating whether the confidence intervals should be computed and which method should be used. All methods support estimation of confidence intervals using the bootstrap method by using the parameter confidence_intervals="bootstrap". The bootstrap method takes in two arguments (num_simulations and sample_size_fraction) that can be optionally specified in the params dictionary. Estimators may also override this to implement their own confidence interval method. If this parameter is False, no confidence intervals are computed. If True, confidence intervals are computed by the estimator's specific method if available, otherwise through bootstrap.
:param target_units: The units for which the treatment effect should be estimated. This can be a string for common specifications of target units (namely, "ate", "att" and "atc"). It can also be a lambda function that can be used as an index for the data (pandas DataFrame). Alternatively, it can be a new DataFrame that contains values of the effect_modifiers and effect will be estimated only for this new data.
:param effect_modifiers: Variables on which to compute separate
effects, or return a heterogeneous effect function. Not all
methods support this currently.
:param num_null_simulations: The number of simulations for testing the
statistical significance of the estimator
:param num_simulations: The number of simulations for finding the
            confidence interval (and/or standard error) for an estimate
:param sample_size_fraction: The size of the sample for the bootstrap
estimator
:param confidence_level: The confidence level of the confidence
interval estimate
:param need_conditional_estimates: Boolean flag indicating whether
conditional estimates should be computed. Defaults to True if
there are effect modifiers in the graph
:param num_quantiles_to_discretize_cont_cols: The number of quantiles
into which a numeric effect modifier is split, to enable
estimation of conditional treatment effect over it.
:param kwargs: (optional) Additional estimator-specific parameters
:returns: an instance of the estimator class.
"""
self._data = data
self._target_estimand = identified_estimand
# Currently estimation methods only support univariate treatment and outcome
self._treatment_name = treatment
self._outcome_name = outcome[0] # assuming one-dimensional outcome
self._control_value = control_value
self._treatment_value = treatment_value
self._significance_test = test_significance
self._effect_strength_eval = evaluate_effect_strength
self._target_units = target_units
self._effect_modifier_names = effect_modifiers
self._confidence_intervals = confidence_intervals
self._bootstrap_estimates = None # for confidence intervals and std error
self._bootstrap_null_estimates = None # for significance test
self._effect_modifiers = None
self.method_params = kwargs
# Setting the default interpret method
self.interpret_method = CausalEstimator.DEFAULT_INTERPRET_METHOD
self.logger = logging.getLogger(__name__)
# Setting treatment and outcome values
if self._data is not None:
self._treatment = self._data[self._treatment_name]
self._outcome = self._data[self._outcome_name]
if self._effect_modifier_names:
# only add the observed nodes
self._effect_modifier_names = [
cname for cname in self._effect_modifier_names if cname in self._data.columns
]
if len(self._effect_modifier_names) > 0:
self._effect_modifiers = self._data[self._effect_modifier_names]
self._effect_modifiers = pd.get_dummies(self._effect_modifiers, drop_first=True)
self.logger.debug("Effect modifiers: " + ",".join(self._effect_modifier_names))
else:
self._effect_modifier_names = None
# Check if some parameters were set, otherwise set to default values
self.num_null_simulations = num_null_simulations
self.num_simulations = num_simulations
self.sample_size_fraction = sample_size_fraction
self.confidence_level = confidence_level
self.num_quantiles_to_discretize_cont_cols = num_quantiles_to_discretize_cont_cols
# Estimate conditional estimates by default
self.need_conditional_estimates = (
need_conditional_estimates if need_conditional_estimates != "auto" else bool(self._effect_modifier_names)
)
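    # Illustrative sketch of this (pre-refactor) construction pattern, using a
    # hypothetical subclass `MyEstimator`: data and variable names were passed
    # to the constructor rather than to a separate fit() call:
    #
    #     est = MyEstimator(df, identified_estimand, ["v0"], ["y"],
    #                       control_value=0, treatment_value=1,
    #                       target_units="ate")
    #     estimate = est.estimate_effect()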
@staticmethod
def get_estimator_object(new_data, identified_estimand, estimate):
"""Create a new estimator of the same type as the one passed in the estimate argument.
Creates a new object with new_data and the identified_estimand
:param new_data: np.ndarray, pd.Series, pd.DataFrame
The newly assigned data on which the estimator should run
:param identified_estimand: IdentifiedEstimand
An instance of the identified estimand class that provides the information with
            respect to which causal pathways are employed when the treatment affects the outcome
:param estimate: CausalEstimate
It is an already existing estimate whose properties we wish to replicate
:returns: An instance of the same estimator class that had generated the given estimate.
"""
estimator_class = estimate.params["estimator_class"]
new_estimator = estimator_class(
new_data,
identified_estimand,
identified_estimand.treatment_variable,
identified_estimand.outcome_variable,
# names of treatment and outcome
control_value=estimate.control_value,
treatment_value=estimate.treatment_value,
test_significance=False,
evaluate_effect_strength=False,
confidence_intervals=estimate.params["confidence_intervals"],
target_units=estimate.params["target_units"],
effect_modifiers=estimate.params["effect_modifiers"],
**estimate.params["method_params"] if estimate.params["method_params"] is not None else {},
)
return new_estimator
def _estimate_effect(self):
"""This method is to be overriden by the child classes, so that they can run the estimation technique of their choice"""
raise NotImplementedError(
("Main estimation method is " + CausalEstimator.DEFAULT_NOTIMPLEMENTEDERROR_MSG).format(self.__class__)
)
def estimate_effect(self):
"""Base estimation method that calls the estimate_effect method of its calling subclass.
Can optionally also test significance and estimate effect strength for any returned estimate.
:param self: object instance of class Estimator
        :returns: A CausalEstimate instance that contains point estimates of average and conditional effects. Based on the parameters provided, it optionally includes confidence intervals, standard errors, statistical significance and other statistical parameters.
"""
est = self._estimate_effect()
est.add_estimator(self)
if self._significance_test:
self.test_significance(est.value, method=self._significance_test)
if self._confidence_intervals:
self.estimate_confidence_intervals(
est.value, confidence_level=self.confidence_level, method=self._confidence_intervals
)
if self._effect_strength_eval:
effect_strength_dict = self.evaluate_effect_strength(est)
est.add_effect_strength(effect_strength_dict)
return est
def estimate_effect_naive(self):
# TODO Only works for binary treatment
df_withtreatment = self._data.loc[self._data[self._treatment_name] == 1]
df_notreatment = self._data.loc[self._data[self._treatment_name] == 0]
est = np.mean(df_withtreatment[self._outcome_name]) - np.mean(df_notreatment[self._outcome_name])
return CausalEstimate(est, None, None, control_value=0, treatment_value=1)
def _estimate_effect_fn(self, data_df):
"""Function used in conditional effect estimation. This function is to be overridden by each child estimator.
The overridden function should take in a dataframe as input and return the estimate for that data.
"""
raise NotImplementedError(
("Conditional treatment effects are " + CausalEstimator.DEFAULT_NOTIMPLEMENTEDERROR_MSG).format(
self.__class__
)
)
def _estimate_conditional_effects(self, estimate_effect_fn, effect_modifier_names=None, num_quantiles=None):
"""Estimate conditional treatment effects. Common method for all estimators that utilizes a specific estimate_effect_fn implemented by each child estimator.
If a numeric effect modifier is provided, it is discretized into quantile bins. If you would like a custom discretization, you can do so yourself: create a new column containing the discretized effect modifier and then include that column's name in the effect_modifier_names argument.
:param estimate_effect_fn: Function that has a single parameter (a data frame) and returns the treatment effect estimate on that data.
:param effect_modifier_names: Names of effect modifier variables over which the conditional effects will be estimated. If not provided, defaults to the effect modifiers specified during creation of the CausalEstimator object.
:param num_quantiles: The number of quantiles into which a numeric effect modifier variable is discretized. Does not affect any categorical effect modifiers.
:returns: A (multi-index) dataframe that provides separate effects for each value of the (discretized) effect modifiers.
"""
# Defaulting to class default values if parameters are not provided
if effect_modifier_names is None:
effect_modifier_names = self._effect_modifier_names
if num_quantiles is None:
num_quantiles = self.num_quantiles_to_discretize_cont_cols
# Checking that there is at least one effect modifier
if not effect_modifier_names:
raise ValueError("At least one effect modifier should be specified to compute conditional effects.")
# Making sure that effect_modifier_names is a list
effect_modifier_names = parse_state(effect_modifier_names)
if not all(em in self._effect_modifier_names for em in effect_modifier_names):
            self.logger.warning(
"At least one of the provided effect modifiers was not included while fitting the estimator. You may get incorrect results. To resolve, fit the estimator again by providing the updated effect modifiers in estimate_effect()."
)
# Making a copy since we are going to be changing effect modifier names
effect_modifier_names = effect_modifier_names.copy()
prefix = CausalEstimator.TEMP_CAT_COLUMN_PREFIX
# For every numeric effect modifier, adding a temp categorical column
for i in range(len(effect_modifier_names)):
em = effect_modifier_names[i]
if pd.api.types.is_numeric_dtype(self._data[em].dtypes):
self._data[prefix + str(em)] = pd.qcut(self._data[em], num_quantiles, duplicates="drop")
effect_modifier_names[i] = prefix + str(em)
# Grouping by effect modifiers and computing effect separately
by_effect_mods = self._data.groupby(effect_modifier_names)
cond_est_fn = lambda x: self._do(self._treatment_value, x) - self._do(self._control_value, x)
conditional_estimates = by_effect_mods.apply(estimate_effect_fn)
# Deleting the temporary categorical columns
for em in effect_modifier_names:
if em.startswith(prefix):
self._data.pop(em)
return conditional_estimates
def _do(self, x, data_df=None):
raise NotImplementedError(
("Do-operator is " + CausalEstimator.DEFAULT_NOTIMPLEMENTEDERROR_MSG).format(self.__class__)
)
def do(self, x, data_df=None):
"""Method that implements the do-operator.
Given a value x for the treatment, returns the expected value of the outcome when the treatment is intervened to a value x.
:param x: Value of the treatment
:param data_df: Data on which the do-operator is to be applied.
:returns: Value of the outcome when treatment is intervened/set to x.
"""
est = self._do(x, data_df)
return est
def construct_symbolic_estimator(self, estimand):
raise NotImplementedError(("Symbolic estimator string is ").format(self.__class__))
def _generate_bootstrap_estimates(self, num_bootstrap_simulations, sample_size_fraction):
"""Helper function to generate causal estimates over bootstrapped samples.
:param num_bootstrap_simulations: Number of simulations for the bootstrap method.
:param sample_size_fraction: Fraction of the dataset to be resampled.
:returns: A collections.namedtuple containing a list of bootstrapped estimates and a dictionary containing parameters used for the bootstrap.
"""
# The array that stores the results of all estimations
simulation_results = np.zeros(num_bootstrap_simulations)
        # Compute the sample size as a fraction of the population size
sample_size = int(sample_size_fraction * len(self._data))
if sample_size > len(self._data):
self.logger.warning("WARN: The sample size is greater than the data being sampled")
self.logger.info("INFO: The sample size: {}".format(sample_size))
self.logger.info("INFO: The number of simulations: {}".format(num_bootstrap_simulations))
# Perform the set number of simulations
for index in range(num_bootstrap_simulations):
new_data = resample(self._data, n_samples=sample_size)
new_estimator = type(self)(
new_data,
self._target_estimand,
self._target_estimand.treatment_variable,
self._target_estimand.outcome_variable,
# names of treatment and outcome
treatment_value=self._treatment_value,
control_value=self._control_value,
test_significance=False,
evaluate_effect_strength=False,
confidence_intervals=False,
target_units=self._target_units,
effect_modifiers=self._effect_modifier_names,
**self.method_params,
)
new_effect = new_estimator.estimate_effect()
simulation_results[index] = new_effect.value
estimates = CausalEstimator.BootstrapEstimates(
simulation_results,
{"num_simulations": num_bootstrap_simulations, "sample_size_fraction": sample_size_fraction},
)
return estimates
def _estimate_confidence_intervals_with_bootstrap(
self, estimate_value, confidence_level=None, num_simulations=None, sample_size_fraction=None
):
"""
Method to compute confidence interval using bootstrapped sampling.
:param estimate_value: obtained estimate's value
:param confidence_level: The level for which to compute CI (e.g., 95% confidence level translates to confidence_level=0.95)
:param num_simulations: The number of simulations to be performed to get the bootstrap confidence intervals.
:param sample_size_fraction: The fraction of the dataset to be resampled.
:returns: confidence interval at the specified level.
For more details on bootstrap or resampling statistics, refer to the following links:
https://ocw.mit.edu/courses/mathematics/18-05-introduction-to-probability-and-statistics-spring-2014/readings/MIT18_05S14_Reading24.pdf
https://projecteuclid.org/download/pdf_1/euclid.ss/1032280214
"""
# Using class default parameters if not specified
if num_simulations is None:
num_simulations = self.num_simulations
if sample_size_fraction is None:
sample_size_fraction = self.sample_size_fraction
# Checking if bootstrap_estimates are already computed
if self._bootstrap_estimates is None:
self._bootstrap_estimates = self._generate_bootstrap_estimates(num_simulations, sample_size_fraction)
elif CausalEstimator.is_bootstrap_parameter_changed(self._bootstrap_estimates.params, locals()):
            # Check if any parameter changed from the previous estimate
self._bootstrap_estimates = self._generate_bootstrap_estimates(num_simulations, sample_size_fraction)
# Now use the data obtained from the simulations to get the value of the confidence estimates
bootstrap_estimates = self._bootstrap_estimates.estimates
# Get the variations of each bootstrap estimate and sort
bootstrap_variations = [bootstrap_estimate - estimate_value for bootstrap_estimate in bootstrap_estimates]
sorted_bootstrap_variations = np.sort(bootstrap_variations)
        # Now we take the (1 - p)th and the (p)th variations, where p is the chosen confidence level
upper_bound_index = int((1 - confidence_level) * len(sorted_bootstrap_variations))
lower_bound_index = int(confidence_level * len(sorted_bootstrap_variations))
# Get the lower and upper bounds by subtracting the variations from the estimate
lower_bound = estimate_value - sorted_bootstrap_variations[lower_bound_index]
upper_bound = estimate_value - sorted_bootstrap_variations[upper_bound_index]
return lower_bound, upper_bound
def _estimate_confidence_intervals(self, confidence_level=None, method=None, **kwargs):
"""
        This method is to be overridden by the child classes, so that they
can run a confidence interval estimation method suited to the specific
causal estimator.
"""
raise NotImplementedError(
(
"This method for estimating confidence intervals is "
+ CausalEstimator.DEFAULT_NOTIMPLEMENTEDERROR_MSG
+ " Meanwhile, you can try the bootstrap method (method='bootstrap') to estimate confidence intervals."
).format(self.__class__)
)
def estimate_confidence_intervals(self, estimate_value, confidence_level=None, method=None, **kwargs):
"""Find the confidence intervals corresponding to any estimator
By default, this is done with the help of bootstrapped confidence intervals
but can be overridden if the specific estimator implements other methods of estimating confidence intervals.
If the method provided is not bootstrap, this function calls the implementation of the specific estimator.
:param estimate_value: obtained estimate's value
:param method: Method for estimating confidence intervals.
:param confidence_level: The confidence level of the confidence intervals of the estimate.
:param kwargs: Other optional args to be passed to the CI method.
:returns: The obtained confidence interval.
"""
if method is None:
if self._confidence_intervals:
method = self._confidence_intervals # this is either True or methodname
else:
method = "default"
confidence_intervals = None
if confidence_level is None:
confidence_level = self.confidence_level
if method == "default" or method is True: # user has not provided any method
try:
confidence_intervals = self._estimate_confidence_intervals(confidence_level, method=method, **kwargs)
except NotImplementedError:
confidence_intervals = self._estimate_confidence_intervals_with_bootstrap(
estimate_value, confidence_level, **kwargs
)
else:
if method == "bootstrap":
confidence_intervals = self._estimate_confidence_intervals_with_bootstrap(
estimate_value, confidence_level, **kwargs
)
else:
confidence_intervals = self._estimate_confidence_intervals(confidence_level, method=method, **kwargs)
return confidence_intervals
def _estimate_std_error_with_bootstrap(self, num_simulations=None, sample_size_fraction=None):
"""Compute standard error using the bootstrap method. Standard error
and confidence intervals use the same parameter num_simulations for
the number of bootstrap simulations.
:param num_simulations: Number of bootstrapped samples.
:param sample_size_fraction: Fraction of data to be resampled.
:returns: Standard error of the obtained estimate.
"""
# Use existing params, if new user defined params are not present
if num_simulations is None:
num_simulations = self.num_simulations
if sample_size_fraction is None:
sample_size_fraction = self.sample_size_fraction
# Checking if bootstrap_estimates are already computed
if self._bootstrap_estimates is None:
self._bootstrap_estimates = self._generate_bootstrap_estimates(num_simulations, sample_size_fraction)
elif CausalEstimator.is_bootstrap_parameter_changed(self._bootstrap_estimates.params, locals()):
# Check if any parameter is changed from the previous std error estimate
self._bootstrap_estimates = self._generate_bootstrap_estimates(num_simulations, sample_size_fraction)
std_error = np.std(self._bootstrap_estimates.estimates)
return std_error
def _estimate_std_error(self, method=None, **kwargs):
"""
        This method is to be overridden by the child classes, so that they
can run a standard error estimation method suited to the specific
causal estimator.
"""
raise NotImplementedError(
(
"This method for estimating standard errors is "
+ CausalEstimator.DEFAULT_NOTIMPLEMENTEDERROR_MSG
+ " Meanwhile, you can try the bootstrap method (method='bootstrap') to estimate standard errors."
).format(self.__class__)
)
def estimate_std_error(self, method=None, **kwargs):
"""Compute standard error of an obtained causal estimate.
:param method: Method for computing the standard error.
:param kwargs: Other optional parameters to be passed to the estimating method.
:returns: Standard error of the causal estimate.
"""
if method is None:
if self._confidence_intervals:
method = self._confidence_intervals
else:
method = "default"
std_error = None
if method == "default" or method is True: # user has not provided any method
try:
std_error = self._estimate_std_error(method, **kwargs)
except NotImplementedError:
std_error = self._estimate_std_error_with_bootstrap(**kwargs)
else:
if method == "bootstrap":
std_error = self._estimate_std_error_with_bootstrap(**kwargs)
else:
std_error = self._estimate_std_error(method, **kwargs)
return std_error
def _test_significance_with_bootstrap(self, estimate_value, num_null_simulations=None):
"""Test statistical significance of an estimate using the bootstrap method.
:param estimate_value: Obtained estimate's value
:param num_null_simulations: Number of simulations for the null hypothesis
:returns: p-value of the statistical significance test.
"""
# Use existing params, if new user defined params are not present
if num_null_simulations is None:
num_null_simulations = self.num_null_simulations
do_retest = self._bootstrap_null_estimates is None or CausalEstimator.is_bootstrap_parameter_changed(
self._bootstrap_null_estimates.params, locals()
)
if do_retest:
null_estimates = np.zeros(num_null_simulations)
for i in range(num_null_simulations):
new_outcome = np.random.permutation(self._outcome)
new_data = self._data.assign(dummy_outcome=new_outcome)
# self._outcome = self._data["dummy_outcome"]
new_estimator = type(self)(
new_data,
self._target_estimand,
self._target_estimand.treatment_variable,
("dummy_outcome",),
test_significance=False,
evaluate_effect_strength=False,
confidence_intervals=False,
target_units=self._target_units,
effect_modifiers=self._effect_modifier_names,
**self.method_params,
)
new_effect = new_estimator.estimate_effect()
null_estimates[i] = new_effect.value
self._bootstrap_null_estimates = CausalEstimator.BootstrapEstimates(
null_estimates, {"num_null_simulations": num_null_simulations, "sample_size_fraction": 1}
)
# Processing the null hypothesis estimates
sorted_null_estimates = np.sort(self._bootstrap_null_estimates.estimates)
self.logger.debug("Null estimates: {0}".format(sorted_null_estimates))
median_estimate = sorted_null_estimates[int(num_null_simulations / 2)]
# Doing a two-sided test
if estimate_value > median_estimate:
# Being conservative with the p-value reported
estimate_index = np.searchsorted(sorted_null_estimates, estimate_value, side="left")
p_value = 1 - (estimate_index / num_null_simulations)
        else:  # estimate_value <= median_estimate
# Being conservative with the p-value reported
estimate_index = np.searchsorted(sorted_null_estimates, estimate_value, side="right")
p_value = estimate_index / num_null_simulations
# If the estimate_index is 0, it depends on the number of simulations
if p_value == 0:
p_value = (0, 1 / len(sorted_null_estimates)) # a tuple determining the range.
elif p_value == 1:
p_value = (1 - 1 / len(sorted_null_estimates), 1)
signif_dict = {"p_value": p_value}
return signif_dict
def _test_significance(self, estimate_value, method=None, **kwargs):
"""
        This method is to be overridden by the child classes, so that they
can run a significance test suited to the specific
causal estimator.
"""
raise NotImplementedError(
(
"This method for testing statistical significance is "
+ CausalEstimator.DEFAULT_NOTIMPLEMENTEDERROR_MSG
+ " Meanwhile, you can try the bootstrap method (method='bootstrap') to test statistical significance."
).format(self.__class__)
)
def test_significance(self, estimate_value, method=None, **kwargs):
"""Test statistical significance of obtained estimate.
        By default, uses resampling to create a non-parametric significance test.
        This is a general procedure; individual child estimators can implement different methods.
If the method name is different from "bootstrap", this function calls the
implementation of the child estimator.
:param self: object instance of class Estimator
:param estimate_value: obtained estimate's value
:param method: Method for checking statistical significance
:returns: p-value from the significance test
"""
if method is None:
if self._significance_test:
method = self._significance_test # this is either True or methodname
else:
method = "default"
signif_dict = None
if method == "default" or method is True: # user has not provided any method
try:
signif_dict = self._test_significance(estimate_value, method, **kwargs)
except NotImplementedError:
signif_dict = self._test_significance_with_bootstrap(estimate_value, **kwargs)
else:
if method == "bootstrap":
signif_dict = self._test_significance_with_bootstrap(estimate_value, **kwargs)
else:
signif_dict = self._test_significance(estimate_value, method, **kwargs)
return signif_dict
def evaluate_effect_strength(self, estimate):
fraction_effect_explained = self._evaluate_effect_strength(estimate, method="fraction-effect")
# Need to test r-squared before supporting
# effect_r_squared = self._evaluate_effect_strength(estimate, method="r-squared")
strength_dict = {
"fraction-effect": fraction_effect_explained
# 'r-squared': effect_r_squared
}
return strength_dict
def _evaluate_effect_strength(self, estimate, method="fraction-effect"):
supported_methods = ["fraction-effect"]
if method not in supported_methods:
raise NotImplementedError("This method is not supported for evaluating effect strength")
if method == "fraction-effect":
naive_obs_estimate = self.estimate_effect_naive()
self.logger.debug("{0} {1}".format(estimate.value, naive_obs_estimate.value))
fraction_effect_explained = estimate.value / naive_obs_estimate.value
return fraction_effect_explained
# elif method == "r-squared":
# outcome_mean = np.mean(self._outcome)
# total_variance = np.sum(np.square(self._outcome - outcome_mean))
# Assuming a linear model with one variable: the treatment
# Currently only works for continuous y
# causal_model = outcome_mean + estimate.value*self._treatment
# squared_residual = np.sum(np.square(self._outcome - causal_model))
# r_squared = 1 - (squared_residual/total_variance)
# return r_squared
else:
return None
def update_input(self, treatment_value, control_value, target_units):
self._control_value = control_value
self._treatment_value = treatment_value
self._target_units = target_units
@staticmethod
def is_bootstrap_parameter_changed(bootstrap_estimates_params, given_params):
"""Check whether parameters of the bootstrap have changed.
This is an efficiency method that checks if fresh resampling of the bootstrap samples is required.
Returns True if parameters have changed and resampling should be done again.
:param bootstrap_estimates_params: A dictionary of parameters for the current bootstrap samples
:param given_params: A dictionary of parameters passed by the user
:returns: A binary flag denoting whether the parameters are different.
"""
is_any_parameter_changed = False
for prm, val in bootstrap_estimates_params.items():
given_val = given_params.get(prm, None)
if given_val is not None and given_val != val:
is_any_parameter_changed = True
break
return is_any_parameter_changed
def target_units_tostr(self):
s = ""
if type(self._target_units) is str:
s += self._target_units
elif callable(self._target_units):
s += "Data subset defined by a function"
elif isinstance(self._target_units, pd.DataFrame):
s += "Data subset provided as a data frame"
return s
def signif_results_tostr(self, signif_results):
s = ""
pval = signif_results["p_value"]
if type(pval) is tuple:
s += "[{0}, {1}]".format(pval[0], pval[1])
else:
s += "{0}".format(pval)
return s
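# --- Illustrative sketch (not part of dowhy): the conservative two-sided
# permutation p-value computed by _test_significance_with_bootstrap above,
# as a self-contained numpy example with simulated null estimates. ---
import numpy as np

rng = np.random.default_rng(0)
null_estimates = np.sort(rng.normal(size=1000))  # stand-in for permutation-null estimates
estimate_value = 2.5
n = len(null_estimates)
median_estimate = null_estimates[n // 2]
if estimate_value > median_estimate:
    # fraction of null estimates at least as large as the observed value
    p_value = 1 - np.searchsorted(null_estimates, estimate_value, side="left") / n
else:
    p_value = np.searchsorted(null_estimates, estimate_value, side="right") / n
if p_value == 0:
    p_value = (0, 1 / n)  # finite simulations can only bound the p-value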
def estimate_effect(
treatment: Union[str, List[str]],
outcome: Union[str, List[str]],
identified_estimand: IdentifiedEstimand,
identifier_name: str,
method: CausalEstimator,
control_value: int = 0,
treatment_value: int = 1,
test_significance: Optional[bool] = None,
evaluate_effect_strength: bool = False,
confidence_intervals: bool = False,
target_units: str = "ate",
effect_modifiers: List[str] = [],
fit_estimator: bool = True,
method_params: Optional[Dict] = None,
):
"""Estimate the identified causal effect.
Currently requires an explicit estimation method to be specified. Method names follow the convention of identification method followed by the specific estimation method: "[backdoor/iv].estimation_method_name". The following methods are supported.
* Propensity Score Matching: "backdoor.propensity_score_matching"
* Propensity Score Stratification: "backdoor.propensity_score_stratification"
* Propensity Score-based Inverse Weighting: "backdoor.propensity_score_weighting"
* Linear Regression: "backdoor.linear_regression"
* Generalized Linear Models (e.g., logistic regression): "backdoor.generalized_linear_model"
* Instrumental Variables: "iv.instrumental_variable"
* Regression Discontinuity: "iv.regression_discontinuity"
In addition, you can directly call any of the EconML estimation methods. The convention is "backdoor.econml.path-to-estimator-class". For example, for the double machine learning estimator ("DML" class) that is located inside "dml" module of EconML, you can use the method name, "backdoor.econml.dml.DML". CausalML estimators can also be called. See `this demo notebook <https://py-why.github.io/dowhy/example_notebooks/dowhy-conditional-treatment-effects.html>`_.
:param treatment: Name of the treatment
:param outcome: Name of the outcome
:param identified_estimand: a probability expression
that represents the effect to be estimated. Output of
CausalModel.identify_effect method
:param identifier_name: Name of the identification method (e.g., "backdoor" or "iv") whose estimand should be used.
:param method: Instance of a CausalEstimator to use for estimation.
:param control_value: Value of the treatment in the control group, for effect estimation. If treatment is multi-variate, this can be a list.
:param treatment_value: Value of the treatment in the treated group, for effect estimation. If treatment is multi-variate, this can be a list.
:param test_significance: Binary flag on whether to additionally do a statistical significance test for the estimate.
:param evaluate_effect_strength: (Experimental) Binary flag on whether to estimate the relative strength of the treatment's effect. This measure can be used to compare different treatments for the same outcome (by running this method with different treatments sequentially).
:param confidence_intervals: (Experimental) Binary flag indicating whether confidence intervals should be computed.
:param target_units: (Experimental) The units for which the treatment effect should be estimated. This can be of three types. (1) a string for common specifications of target units (namely, "ate", "att" and "atc"), (2) a lambda function that can be used as an index for the data (pandas DataFrame), or (3) a new DataFrame that contains values of the effect_modifiers and effect will be estimated only for this new data.
:param effect_modifiers: Names of effect modifier variables can be (optionally) specified here too, since they do not affect identification. If None, the effect_modifiers from the CausalModel are used.
:param fit_estimator: Boolean flag on whether to fit the estimator.
Setting it to False is useful to estimate the effect on new data using a previously fitted estimator.
:param method_params: Dictionary containing any method-specific parameters. These are passed directly to the estimating method. See the docs for each estimation method for allowed method-specific params.
:returns: An instance of the CausalEstimate class, containing the causal effect estimate
and other method-dependent information
"""
treatment = parse_state(treatment)
outcome = parse_state(outcome)
causal_estimator_class = method.__class__
identified_estimand.set_identifier_method(identifier_name)
if identified_estimand.no_directed_path:
logger.warning("No directed path from {0} to {1}.".format(treatment, outcome))
return CausalEstimate(
0, identified_estimand, None, control_value=control_value, treatment_value=treatment_value
)
# Check if estimator's target estimand is identified
elif identified_estimand.estimands[identifier_name] is None:
logger.error("No valid identified estimand available.")
return CausalEstimate(None, None, None, control_value=control_value, treatment_value=treatment_value)
method.update_input(treatment_value, control_value, target_units)
estimate = method.estimate_effect()
# Store parameters inside estimate object for refutation methods
# TODO: This add_params needs to move to the estimator class
# inside estimate_effect and estimate_conditional_effect
estimate.add_params(
estimand_type=identified_estimand.estimand_type,
estimator_class=causal_estimator_class,
test_significance=test_significance,
evaluate_effect_strength=evaluate_effect_strength,
confidence_intervals=confidence_intervals,
target_units=target_units,
effect_modifiers=effect_modifiers,
method_params=method_params,
)
return estimate
class CausalEstimate:
"""Class for the estimate object that every causal estimator returns"""
def __init__(
self,
estimate,
target_estimand,
realized_estimand_expr,
control_value,
treatment_value,
conditional_estimates=None,
**kwargs,
):
self.value = estimate
self.target_estimand = target_estimand
self.realized_estimand_expr = realized_estimand_expr
self.control_value = control_value
self.treatment_value = treatment_value
self.conditional_estimates = conditional_estimates
self.params = kwargs
if self.params is not None:
for key, value in self.params.items():
setattr(self, key, value)
self.effect_strength = None
def add_estimator(self, estimator_instance):
self.estimator = estimator_instance
def add_effect_strength(self, strength_dict):
self.effect_strength = strength_dict
def add_params(self, **kwargs):
self.params.update(kwargs)
def get_confidence_intervals(self, confidence_level=None, method=None, **kwargs):
"""Get confidence intervals of the obtained estimate.
By default, this is done with the help of bootstrapped confidence intervals
but can be overridden if the specific estimator implements other methods of estimating confidence intervals.
If the method provided is not bootstrap, this function calls the implementation of the specific estimator.
:param method: Method for estimating confidence intervals.
:param confidence_level: The confidence level of the confidence intervals of the estimate.
:param kwargs: Other optional args to be passed to the CI method.
:returns: The obtained confidence interval.
"""
confidence_intervals = self.estimator.estimate_confidence_intervals(
estimate_value=self.value, confidence_level=confidence_level, method=method, **kwargs
)
return confidence_intervals
def get_standard_error(self, method=None, **kwargs):
"""Get standard error of the obtained estimate.
By default, this is done with the help of bootstrapped standard errors
but can be overridden if the specific estimator implements other methods of estimating standard error.
If the method provided is not bootstrap, this function calls the implementation of the specific estimator.
:param method: Method for computing the standard error.
:param kwargs: Other optional parameters to be passed to the estimating method.
:returns: Standard error of the causal estimate.
"""
std_error = self.estimator.estimate_std_error(method=method, **kwargs)
return std_error
def test_stat_significance(self, method=None, **kwargs):
"""Test statistical significance of the estimate obtained.
By default, uses resampling to create a non-parametric significance test.
Individual child estimators can implement different methods.
If the method name is different from "bootstrap", this function calls the
implementation of the child estimator.
:param method: Method for checking statistical significance
:param kwargs: Other optional parameters to be passed to the estimating method.
:returns: p-value from the significance test
"""
signif_results = self.estimator.test_significance(self.value, method=method, **kwargs)
return {"p_value": signif_results["p_value"]}
def estimate_conditional_effects(
self, effect_modifiers=None, num_quantiles=CausalEstimator.NUM_QUANTILES_TO_DISCRETIZE_CONT_COLS
):
"""Estimate treatment effect conditioned on given variables.
If a numeric effect modifier is provided, it is discretized into quantile bins. If you would like a custom discretization, you can do so yourself: create a new column containing the discretized effect modifier and then include that column's name in the effect_modifier_names argument.
:param effect_modifiers: Names of effect modifier variables over which the conditional effects will be estimated. If not provided, defaults to the effect modifiers specified during creation of the CausalEstimator object.
:param num_quantiles: The number of quantiles into which a numeric effect modifier variable is discretized. Does not affect any categorical effect modifiers.
:returns: A (multi-index) dataframe that provides separate effects for each value of the (discretized) effect modifiers.
"""
return self.estimator._estimate_conditional_effects(
self.estimator._estimate_effect_fn, effect_modifiers, num_quantiles
)
def interpret(self, method_name=None, **kwargs):
"""Interpret the causal estimate.
:param method_name: Method used (string) or a list of methods. If None, then the default for the specific estimator is used.
:param kwargs: Optional parameters that are directly passed to the interpreter method.
:returns: None
"""
if method_name is None:
method_name = self.estimator.interpret_method
method_name_arr = parse_state(method_name)
for method in method_name_arr:
interpreter = interpreters.get_class_object(method)
interpreter(self, **kwargs).interpret()
def __str__(self):
s = "*** Causal Estimate ***\n"
# No estimand was identified (identification failed)
if self.target_estimand is None:
return "Estimation failed! No relevant identified estimand available for this estimation method."
s += "\n## Identified estimand\n{0}".format(self.target_estimand.__str__(only_target_estimand=True))
s += "\n## Realized estimand\n{0}".format(self.realized_estimand_expr)
if hasattr(self, "estimator"):
s += "\nTarget units: {0}\n".format(self.estimator.target_units_tostr())
s += "\n## Estimate\n"
s += "Mean value: {0}\n".format(self.value)
s += ""
if hasattr(self, "cate_estimates"):
s += "Effect estimates: {0}\n".format(self.cate_estimates)
if hasattr(self, "estimator"):
if self.estimator._significance_test:
s += "p-value: {0}\n".format(self.estimator.signif_results_tostr(self.test_stat_significance()))
if self.estimator._confidence_intervals:
s += "{0}% confidence interval: {1}\n".format(
100 * self.estimator.confidence_level, self.get_confidence_intervals()
)
if self.conditional_estimates is not None:
s += "### Conditional Estimates\n"
s += str(self.conditional_estimates)
if self.effect_strength is not None:
s += "\n## Effect Strength\n"
s += "Change in outcome attributable to treatment: {}\n".format(self.effect_strength["fraction-effect"])
# s += "Variance in outcome explained by treatment: {}\n".format(self.effect_strength["r-squared"])
return s
class RealizedEstimand(object):
def __init__(self, identified_estimand, estimator_name):
self.treatment_variable = identified_estimand.treatment_variable
self.outcome_variable = identified_estimand.outcome_variable
self.backdoor_variables = identified_estimand.get_backdoor_variables()
self.instrumental_variables = identified_estimand.instrumental_variables
self.estimand_type = identified_estimand.estimand_type
self.estimand_expression = None
self.assumptions = None
self.estimator_name = estimator_name
def update_assumptions(self, estimator_assumptions):
self.assumptions = estimator_assumptions
def update_estimand_expression(self, estimand_expression):
self.estimand_expression = estimand_expression
def __str__(self):
s = "Realized estimand: {0}\n".format(self.estimator_name)
s += "Realized estimand type: {0}\n".format(self.estimand_type)
s += "Estimand expression:\n{0}\n".format(sp.pretty(self.estimand_expression))
j = 1
for ass_name, ass_str in self.assumptions.items():
s += "Estimand assumption {0}, {1}: {2}\n".format(j, ass_name, ass_str)
j += 1
return s
|
import copy
import logging
from collections import namedtuple
from typing import Dict, List, Optional, Union
import numpy as np
import pandas as pd
import sympy as sp
from sklearn.utils import resample
import dowhy.interpreters as interpreters
from dowhy.causal_identifier.identified_estimand import IdentifiedEstimand
from dowhy.utils.api import parse_state
logger = logging.getLogger(__name__)
class CausalEstimator:
"""Base class for an estimator of causal effect.
Subclasses implement different estimation methods. All estimation methods are in the package "dowhy.causal_estimators"
"""
# The default number of simulations for statistical testing
DEFAULT_NUMBER_OF_SIMULATIONS_STAT_TEST = 1000
# The default number of simulations to obtain confidence intervals
DEFAULT_NUMBER_OF_SIMULATIONS_CI = 100
# The portion of the total size that should be taken each time to find the confidence intervals
# 1 is the recommended value
# https://ocw.mit.edu/courses/mathematics/18-05-introduction-to-probability-and-statistics-spring-2014/readings/MIT18_05S14_Reading24.pdf
# https://projecteuclid.org/download/pdf_1/euclid.ss/1032280214
DEFAULT_SAMPLE_SIZE_FRACTION = 1
# The default Confidence Level
DEFAULT_CONFIDENCE_LEVEL = 0.95
# Number of quantiles to discretize continuous columns, for applying groupby
NUM_QUANTILES_TO_DISCRETIZE_CONT_COLS = 5
# Prefix to add to temporary categorical variables created after discretization
TEMP_CAT_COLUMN_PREFIX = "__categorical__"
DEFAULT_NOTIMPLEMENTEDERROR_MSG = "not yet implemented for {0}. If you would like this to be implemented in the next version, please raise an issue at https://github.com/microsoft/dowhy/issues"
BootstrapEstimates = namedtuple("BootstrapEstimates", ["estimates", "params"])
DEFAULT_INTERPRET_METHOD = ["textual_effect_interpreter"]
# std args to be removed from locals() before being passed to args_dict
_STD_INIT_ARGS = ("self", "__class__", "args", "kwargs")
def __init__(
self,
identified_estimand: IdentifiedEstimand,
test_significance: bool = False,
evaluate_effect_strength: bool = False,
confidence_intervals: bool = False,
num_null_simulations: int = DEFAULT_NUMBER_OF_SIMULATIONS_STAT_TEST,
num_simulations: int = DEFAULT_NUMBER_OF_SIMULATIONS_CI,
sample_size_fraction: int = DEFAULT_SAMPLE_SIZE_FRACTION,
confidence_level: float = DEFAULT_CONFIDENCE_LEVEL,
need_conditional_estimates: Union[bool, str] = "auto",
num_quantiles_to_discretize_cont_cols: int = NUM_QUANTILES_TO_DISCRETIZE_CONT_COLS,
**_,
):
"""Initializes an estimator with data and names of relevant variables.
This method is called from the constructors of its child classes.
:param identified_estimand: probability expression
representing the target identified estimand to estimate.
:param test_significance: Binary flag or a string indicating whether to test significance and by which method. All estimators support test_significance="bootstrap" that estimates a p-value for the obtained estimate using the bootstrap method. Individual estimators can override this to support custom testing methods. The bootstrap method supports an optional parameter, num_null_simulations. If False, no testing is done. If True, significance of the estimate is tested using the custom method if available, otherwise by bootstrap.
:param evaluate_effect_strength: (Experimental) whether to evaluate the strength of effect
:param confidence_intervals: Binary flag or a string indicating whether the confidence intervals should be computed and which method should be used. All methods support estimation of confidence intervals using the bootstrap method by using the parameter confidence_intervals="bootstrap". The bootstrap method takes in two arguments (num_simulations and sample_size_fraction) that can be optionally specified in the params dictionary. Estimators may also override this to implement their own confidence interval method. If this parameter is False, no confidence intervals are computed. If True, confidence intervals are computed by the estimator's specific method if available, otherwise through bootstrap
:param num_null_simulations: The number of simulations for testing the
statistical significance of the estimator
:param num_simulations: The number of simulations for finding the
confidence interval (and/or standard error) for an estimate
:param sample_size_fraction: The size of the sample for the bootstrap
estimator
:param confidence_level: The confidence level of the confidence
interval estimate
:param need_conditional_estimates: Boolean flag indicating whether
conditional estimates should be computed. Defaults to True if
there are effect modifiers in the graph
:param num_quantiles_to_discretize_cont_cols: The number of quantiles
into which a numeric effect modifier is split, to enable
estimation of conditional treatment effect over it.
:param kwargs: (optional) Additional estimator-specific parameters
:returns: an instance of the estimator class.
"""
self._target_estimand = identified_estimand
self._significance_test = test_significance
self._effect_strength_eval = evaluate_effect_strength
self._confidence_intervals = confidence_intervals
# Setting the default interpret method
self.interpret_method = CausalEstimator.DEFAULT_INTERPRET_METHOD
self.logger = logging.getLogger(__name__)
# Check if some parameters were set, otherwise set to default values
self.num_null_simulations = num_null_simulations
self.num_simulations = num_simulations
self.sample_size_fraction = sample_size_fraction
self.confidence_level = confidence_level
self.num_quantiles_to_discretize_cont_cols = num_quantiles_to_discretize_cont_cols
# Estimate conditional estimates by default
self.need_conditional_estimates = need_conditional_estimates
self._bootstrap_estimates = None
self._bootstrap_null_estimates = None
def _set_data(self, data: pd.DataFrame, treatment_name: List[str], outcome_name: List[str]):
"""Sets the data for the estimator
:param data: data frame containing the data
:param treatment_name: name of the treatment variable
:param outcome_name: name of the outcome variable
"""
self._data = data
self._treatment_name = treatment_name
self._outcome_name = outcome_name[0]
self._treatment = self._data[self._treatment_name]
self._outcome = self._data[self._outcome_name]
def _set_effect_modifiers(self, effect_modifier_names: Optional[List[str]] = None):
"""Sets the effect modifiers for the estimator
Modifies need_conditional_estimates according to the value of the effect modifiers
:param effect_modifier_names: Names of variables on which to compute separate
effects, or return a heterogeneous effect function. Not all
methods support this currently.
"""
self._effect_modifiers = effect_modifier_names
if effect_modifier_names is not None:
self._effect_modifier_names = [cname for cname in effect_modifier_names if cname in self._data.columns]
if len(self._effect_modifier_names) > 0:
self._effect_modifiers = self._data[self._effect_modifier_names]
self._effect_modifiers = pd.get_dummies(self._effect_modifiers, drop_first=True)
self.logger.debug("Effect modifiers: " + ",".join(self._effect_modifier_names))
else:
self._effect_modifier_names = []
else:
self._effect_modifier_names = []
self.need_conditional_estimates = (
self.need_conditional_estimates
if self.need_conditional_estimates != "auto"
else (self._effect_modifier_names and len(self._effect_modifier_names) > 0)
)
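# --- Illustrative sketch (not part of dowhy): how categorical effect modifiers
# become numeric columns via one-hot encoding with a dropped reference level,
# mirroring the pd.get_dummies(..., drop_first=True) call above (toy data). ---
import pandas as pd

em = pd.DataFrame({"gender": ["m", "f", "f"], "smoker": ["y", "y", "n"]})
print(pd.get_dummies(em, drop_first=True))  # columns: gender_m, smoker_y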
def _set_identified_estimand(self, new_identified_estimand):
"""Method used internally to change the target estimand (required by some refuters)
:param new_identified_estimand: The new target_estimand to use
"""
self._target_estimand = new_identified_estimand
def get_new_estimator_object(
self,
identified_estimand,
test_significance=False,
evaluate_effect_strength=False,
confidence_intervals=None,
):
"""Create a new estimator of the same type as the one passed in the estimate argument.
Creates a new object with the identified_estimand
:param identified_estimand: IdentifiedEstimand
An instance of the identified estimand class that provides the information with
respect to which causal pathways are employed when the treatment affects the outcome
:returns: A new instance of the same estimator class that had generated the given estimate.
"""
new_estimator = copy.deepcopy(self)
new_estimator._target_estimand = identified_estimand
new_estimator._test_significance = test_significance
new_estimator._evaluate_effect_strength = evaluate_effect_strength
new_estimator._confidence_intervals = (
self._confidence_intervals if confidence_intervals is None else confidence_intervals
)
return new_estimator
def estimate_effect_naive(self):
# TODO Only works for binary treatment
df_withtreatment = self._data.loc[self._data[self._treatment_name] == 1]
df_notreatment = self._data.loc[self._data[self._treatment_name] == 0]
est = np.mean(df_withtreatment[self._outcome_name]) - np.mean(df_notreatment[self._outcome_name])
return CausalEstimate(est, None, None, control_value=0, treatment_value=1)
def _estimate_effect_fn(self, data_df):
"""Function used in conditional effect estimation. This function is to be overridden by each child estimator.
The overridden function should take in a dataframe as input and return the estimate for that data.
"""
raise NotImplementedError(
("Conditional treatment effects are " + CausalEstimator.DEFAULT_NOTIMPLEMENTEDERROR_MSG).format(
self.__class__
)
)
def _estimate_conditional_effects(self, estimate_effect_fn, effect_modifier_names=None, num_quantiles=None):
"""Estimate conditional treatment effects. Common method for all estimators that utilizes a specific estimate_effect_fn implemented by each child estimator.
If a numeric effect modifier is provided, it is discretized into quantile bins. If you would like a custom discretization, you can do so yourself: create a new column containing the discretized effect modifier and then include that column's name in the effect_modifier_names argument.
:param estimate_effect_fn: Function that has a single parameter (a data frame) and returns the treatment effect estimate on that data.
:param effect_modifier_names: Names of effect modifier variables over which the conditional effects will be estimated. If not provided, defaults to the effect modifiers specified during creation of the CausalEstimator object.
:param num_quantiles: The number of quantiles into which a numeric effect modifier variable is discretized. Does not affect any categorical effect modifiers.
:returns: A (multi-index) dataframe that provides separate effects for each value of the (discretized) effect modifiers.
"""
# Defaulting to class default values if parameters are not provided
if effect_modifier_names is None:
effect_modifier_names = self._effect_modifier_names
if num_quantiles is None:
num_quantiles = self.num_quantiles_to_discretize_cont_cols
# Checking that there is at least one effect modifier
if not effect_modifier_names:
raise ValueError("At least one effect modifier should be specified to compute conditional effects.")
# Making sure that effect_modifier_names is a list
effect_modifier_names = parse_state(effect_modifier_names)
if not all(em in self._effect_modifier_names for em in effect_modifier_names):
self.logger.warning(
"At least one of the provided effect modifiers was not included while fitting the estimator. You may get incorrect results. To resolve, fit the estimator again by providing the updated effect modifiers in estimate_effect()."
)
# Making a copy since we are going to be changing effect modifier names
effect_modifier_names = effect_modifier_names.copy()
prefix = CausalEstimator.TEMP_CAT_COLUMN_PREFIX
# For every numeric effect modifier, adding a temp categorical column
for i in range(len(effect_modifier_names)):
em = effect_modifier_names[i]
if pd.api.types.is_numeric_dtype(self._data[em].dtypes):
self._data[prefix + str(em)] = pd.qcut(self._data[em], num_quantiles, duplicates="drop")
effect_modifier_names[i] = prefix + str(em)
# Grouping by effect modifiers and computing effect separately
by_effect_mods = self._data.groupby(effect_modifier_names)
cond_est_fn = lambda x: self._do(self._treatment_value, x) - self._do(self._control_value, x)
conditional_estimates = by_effect_mods.apply(estimate_effect_fn)
# Deleting the temporary categorical columns
for em in effect_modifier_names:
if em.startswith(prefix):
self._data.pop(em)
return conditional_estimates
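# --- Illustrative sketch (not part of dowhy): quantile-binning a numeric
# effect modifier and grouping by the bins, as _estimate_conditional_effects
# does with its temporary "__categorical__" columns (toy data, with the group
# mean of the outcome standing in for a per-group effect estimate). ---
import numpy as np
import pandas as pd

rng = np.random.default_rng(0)
df = pd.DataFrame({"w": rng.random(100), "y": rng.normal(size=100)})
df["__categorical__w"] = pd.qcut(df["w"], 5, duplicates="drop")
conditional_estimates = df.groupby("__categorical__w")["y"].mean()  # one value per bin
df.pop("__categorical__w")  # the temporary column is removed afterwards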
def _do(self, x, data_df=None):
raise NotImplementedError(
("Do-operator is " + CausalEstimator.DEFAULT_NOTIMPLEMENTEDERROR_MSG).format(self.__class__)
)
def do(self, x, data_df=None):
"""Method that implements the do-operator.
Given a value x for the treatment, returns the expected value of the outcome when the treatment is intervened to a value x.
:param x: Value of the treatment
:param data_df: Data on which the do-operator is to be applied.
:returns: Value of the outcome when treatment is intervened/set to x.
"""
est = self._do(x, data_df)
return est
def construct_symbolic_estimator(self, estimand):
raise NotImplementedError(("Symbolic estimator string is ").format(self.__class__))
def _generate_bootstrap_estimates(self, num_bootstrap_simulations, sample_size_fraction):
"""Helper function to generate causal estimates over bootstrapped samples.
:param num_bootstrap_simulations: Number of simulations for the bootstrap method.
:param sample_size_fraction: Fraction of the dataset to be resampled.
:returns: A collections.namedtuple containing a list of bootstrapped estimates and a dictionary containing parameters used for the bootstrap.
"""
# The array that stores the results of all estimations
simulation_results = np.zeros(num_bootstrap_simulations)
# Find the sample size as a proportion of the population size
sample_size = int(sample_size_fraction * len(self._data))
if sample_size > len(self._data):
self.logger.warning("WARN: The sample size is greater than the data being sampled")
self.logger.info("INFO: The sample size: {}".format(sample_size))
self.logger.info("INFO: The number of simulations: {}".format(num_bootstrap_simulations))
# Perform the set number of simulations
for index in range(num_bootstrap_simulations):
new_data = resample(self._data, n_samples=sample_size)
new_estimator = self.get_new_estimator_object(
self._target_estimand,
# names of treatment and outcome
test_significance=False,
evaluate_effect_strength=False,
confidence_intervals=False,
)
new_estimator.fit(
new_data,
self._target_estimand.treatment_variable,
self._target_estimand.outcome_variable,
effect_modifier_names=self._effect_modifier_names,
)
new_effect = new_estimator.estimate_effect(
treatment_value=self._treatment_value,
control_value=self._control_value,
target_units=self._target_units,
)
simulation_results[index] = new_effect.value
estimates = CausalEstimator.BootstrapEstimates(
simulation_results,
{"num_simulations": num_bootstrap_simulations, "sample_size_fraction": sample_size_fraction},
)
return estimates
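# --- Illustrative sketch (not part of dowhy): bootstrap resampling with
# sklearn.utils.resample, as used by _generate_bootstrap_estimates above,
# with the sample mean standing in for a causal estimate. ---
import numpy as np
from sklearn.utils import resample

rng = np.random.default_rng(0)
data = rng.normal(loc=1.0, size=500)
num_simulations, sample_size = 200, len(data)
simulation_results = np.array(
    [resample(data, n_samples=sample_size).mean() for _ in range(num_simulations)]
)
std_error = simulation_results.std()  # bootstrap standard error of the estimate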
def _estimate_confidence_intervals_with_bootstrap(
self, estimate_value, confidence_level=None, num_simulations=None, sample_size_fraction=None
):
"""
Method to compute confidence interval using bootstrapped sampling.
:param estimate_value: obtained estimate's value
:param confidence_level: The level for which to compute CI (e.g., 95% confidence level translates to confidence_level=0.95)
:param num_simulations: The number of simulations to be performed to get the bootstrap confidence intervals.
:param sample_size_fraction: The fraction of the dataset to be resampled.
:returns: confidence interval at the specified level.
For more details on bootstrap or resampling statistics, refer to the following links:
https://ocw.mit.edu/courses/mathematics/18-05-introduction-to-probability-and-statistics-spring-2014/readings/MIT18_05S14_Reading24.pdf
https://projecteuclid.org/download/pdf_1/euclid.ss/1032280214
"""
# Using class default parameters if not specified
if num_simulations is None:
num_simulations = self.num_simulations
if sample_size_fraction is None:
sample_size_fraction = self.sample_size_fraction
# Checking if bootstrap_estimates are already computed
if self._bootstrap_estimates is None:
self._bootstrap_estimates = self._generate_bootstrap_estimates(num_simulations, sample_size_fraction)
elif CausalEstimator.is_bootstrap_parameter_changed(self._bootstrap_estimates.params, locals()):
# Checked if any parameter is changed from the previous std error estimate
self._bootstrap_estimates = self._generate_bootstrap_estimates(num_simulations, sample_size_fraction)
# Now use the data obtained from the simulations to get the value of the confidence estimates
bootstrap_estimates = self._bootstrap_estimates.estimates
# Get the variations of each bootstrap estimate and sort
bootstrap_variations = [bootstrap_estimate - estimate_value for bootstrap_estimate in bootstrap_estimates]
sorted_bootstrap_variations = np.sort(bootstrap_variations)
# Now we take the (1- p)th and the (p)th variations, where p is the chosen confidence level
upper_bound_index = int((1 - confidence_level) * len(sorted_bootstrap_variations))
lower_bound_index = int(confidence_level * len(sorted_bootstrap_variations))
# Get the lower and upper bounds by subtracting the variations from the estimate
lower_bound = estimate_value - sorted_bootstrap_variations[lower_bound_index]
upper_bound = estimate_value - sorted_bootstrap_variations[upper_bound_index]
return lower_bound, upper_bound
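# --- Illustrative sketch (not part of dowhy): turning bootstrap estimates into
# a confidence interval via the sorted "variations" around the point estimate,
# mirroring the index arithmetic above (toy numbers). ---
import numpy as np

rng = np.random.default_rng(0)
estimate_value = 1.0
bootstrap_estimates = estimate_value + rng.normal(scale=0.1, size=200)
confidence_level = 0.95
sorted_variations = np.sort(bootstrap_estimates - estimate_value)
upper_idx = int((1 - confidence_level) * len(sorted_variations))
lower_idx = int(confidence_level * len(sorted_variations))
lower_bound = estimate_value - sorted_variations[lower_idx]
upper_bound = estimate_value - sorted_variations[upper_idx]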
def _estimate_confidence_intervals(self, confidence_level=None, method=None, **kwargs):
"""
This method is to be overridden by the child classes, so that they
can run a confidence interval estimation method suited to the specific
causal estimator.
"""
raise NotImplementedError(
(
"This method for estimating confidence intervals is "
+ CausalEstimator.DEFAULT_NOTIMPLEMENTEDERROR_MSG
+ " Meanwhile, you can try the bootstrap method (method='bootstrap') to estimate confidence intervals."
).format(self.__class__)
)
def estimate_confidence_intervals(self, estimate_value, confidence_level=None, method=None, **kwargs):
"""Find the confidence intervals corresponding to any estimator
By default, this is done with the help of bootstrapped confidence intervals
but can be overridden if the specific estimator implements other methods of estimating confidence intervals.
If the method provided is not bootstrap, this function calls the implementation of the specific estimator.
:param estimate_value: obtained estimate's value
:param method: Method for estimating confidence intervals.
:param confidence_level: The confidence level of the confidence intervals of the estimate.
:param kwargs: Other optional args to be passed to the CI method.
:returns: The obtained confidence interval.
"""
if method is None:
if self._confidence_intervals:
method = self._confidence_intervals # this is either True or methodname
else:
method = "default"
confidence_intervals = None
if confidence_level is None:
confidence_level = self.confidence_level
if method == "default" or method is True: # user has not provided any method
try:
confidence_intervals = self._estimate_confidence_intervals(confidence_level, method=method, **kwargs)
except NotImplementedError:
confidence_intervals = self._estimate_confidence_intervals_with_bootstrap(
estimate_value, confidence_level, **kwargs
)
else:
if method == "bootstrap":
confidence_intervals = self._estimate_confidence_intervals_with_bootstrap(
estimate_value, confidence_level, **kwargs
)
else:
confidence_intervals = self._estimate_confidence_intervals(confidence_level, method=method, **kwargs)
return confidence_intervals
def _estimate_std_error_with_bootstrap(self, num_simulations=None, sample_size_fraction=None):
"""Compute standard error using the bootstrap method. Standard error
and confidence intervals use the same parameter num_simulations for
the number of bootstrap simulations.
:param num_simulations: Number of bootstrapped samples.
:param sample_size_fraction: Fraction of data to be resampled.
:returns: Standard error of the obtained estimate.
"""
# Use existing params, if new user defined params are not present
if num_simulations is None:
num_simulations = self.num_simulations
if sample_size_fraction is None:
sample_size_fraction = self.sample_size_fraction
# Checking if bootstrap_estimates are already computed
if self._bootstrap_estimates is None:
self._bootstrap_estimates = self._generate_bootstrap_estimates(num_simulations, sample_size_fraction)
elif CausalEstimator.is_bootstrap_parameter_changed(self._bootstrap_estimates.params, locals()):
# Check if any parameter is changed from the previous std error estimate
self._bootstrap_estimates = self._generate_bootstrap_estimates(num_simulations, sample_size_fraction)
std_error = np.std(self._bootstrap_estimates.estimates)
return std_error
def _estimate_std_error(self, method=None, **kwargs):
"""
This method is to be overridden by the child classes, so that they
can run a standard error estimation method suited to the specific
causal estimator.
"""
raise NotImplementedError(
(
"This method for estimating standard errors is "
+ CausalEstimator.DEFAULT_NOTIMPLEMENTEDERROR_MSG
+ " Meanwhile, you can try the bootstrap method (method='bootstrap') to estimate standard errors."
).format(self.__class__)
)
def estimate_std_error(self, method=None, **kwargs):
"""Compute standard error of an obtained causal estimate.
:param method: Method for computing the standard error.
:param kwargs: Other optional parameters to be passed to the estimating method.
:returns: Standard error of the causal estimate.
"""
if method is None:
if self._confidence_intervals:
method = self._confidence_intervals
else:
method = "default"
std_error = None
if method == "default" or method is True: # user has not provided any method
try:
std_error = self._estimate_std_error(method, **kwargs)
except NotImplementedError:
std_error = self._estimate_std_error_with_bootstrap(**kwargs)
else:
if method == "bootstrap":
std_error = self._estimate_std_error_with_bootstrap(**kwargs)
else:
std_error = self._estimate_std_error(method, **kwargs)
return std_error
def _test_significance_with_bootstrap(self, estimate_value, num_null_simulations=None):
"""Test statistical significance of an estimate using the bootstrap method.
:param estimate_value: Obtained estimate's value
:param num_null_simulations: Number of simulations for the null hypothesis
:returns: p-value of the statistical significance test.
"""
# Use existing params, if new user defined params are not present
if num_null_simulations is None:
num_null_simulations = self.num_null_simulations
do_retest = self._bootstrap_null_estimates is None or CausalEstimator.is_bootstrap_parameter_changed(
self._bootstrap_null_estimates.params, locals()
)
if do_retest:
null_estimates = np.zeros(num_null_simulations)
for i in range(num_null_simulations):
new_outcome = np.random.permutation(self._outcome)
new_data = self._data.assign(dummy_outcome=new_outcome)
# self._outcome = self._data["dummy_outcome"]
new_estimator = self.get_new_estimator_object(
self._target_estimand,
test_significance=False,
evaluate_effect_strength=False,
confidence_intervals=False,
)
new_estimator.fit(
data=new_data,
treatment_name=self._target_estimand.treatment_variable,
outcome_name=("dummy_outcome",),
effect_modifier_names=self._effect_modifier_names,
)
new_effect = new_estimator.estimate_effect(
target_units=self._target_units,
)
null_estimates[i] = new_effect.value
self._bootstrap_null_estimates = CausalEstimator.BootstrapEstimates(
null_estimates, {"num_null_simulations": num_null_simulations, "sample_size_fraction": 1}
)
# Processing the null hypothesis estimates
sorted_null_estimates = np.sort(self._bootstrap_null_estimates.estimates)
self.logger.debug("Null estimates: {0}".format(sorted_null_estimates))
median_estimate = sorted_null_estimates[int(num_null_simulations / 2)]
# Doing a two-sided test
if estimate_value > median_estimate:
# Being conservative with the p-value reported
estimate_index = np.searchsorted(sorted_null_estimates, estimate_value, side="left")
p_value = 1 - (estimate_index / num_null_simulations)
if estimate_value <= median_estimate:
# Being conservative with the p-value reported
estimate_index = np.searchsorted(sorted_null_estimates, estimate_value, side="right")
p_value = estimate_index / num_null_simulations
# If the estimate_index is 0, it depends on the number of simulations
if p_value == 0:
p_value = (0, 1 / len(sorted_null_estimates)) # a tuple determining the range.
elif p_value == 1:
p_value = (1 - 1 / len(sorted_null_estimates), 1)
signif_dict = {"p_value": p_value}
return signif_dict
def _test_significance(self, estimate_value, method=None, **kwargs):
"""
This method is to be overridden by the child classes, so that they
can run a significance test suited to the specific
causal estimator.
"""
raise NotImplementedError(
(
"This method for testing statistical significance is "
+ CausalEstimator.DEFAULT_NOTIMPLEMENTEDERROR_MSG
+ " Meanwhile, you can try the bootstrap method (method='bootstrap') to test statistical significance."
).format(self.__class__)
)
def test_significance(self, estimate_value, method=None, **kwargs):
"""Test statistical significance of obtained estimate.
By default, uses resampling to create a non-parametric significance test.
A general procedure. Individual child estimators can implement different methods.
If the method name is different from "bootstrap", this function calls the
implementation of the child estimator.
:param self: object instance of class Estimator
:param estimate_value: obtained estimate's value
:param method: Method for checking statistical significance
:returns: p-value from the significance test
"""
if method is None:
if self._significance_test:
method = self._significance_test # this is either True or methodname
else:
method = "default"
signif_dict = None
if method == "default" or method is True: # user has not provided any method
try:
signif_dict = self._test_significance(estimate_value, method, **kwargs)
except NotImplementedError:
signif_dict = self._test_significance_with_bootstrap(estimate_value, **kwargs)
else:
if method == "bootstrap":
signif_dict = self._test_significance_with_bootstrap(estimate_value, **kwargs)
else:
signif_dict = self._test_significance(estimate_value, method, **kwargs)
return signif_dict
def evaluate_effect_strength(self, estimate):
fraction_effect_explained = self._evaluate_effect_strength(estimate, method="fraction-effect")
# Need to test r-squared before supporting
# effect_r_squared = self._evaluate_effect_strength(estimate, method="r-squared")
strength_dict = {
"fraction-effect": fraction_effect_explained
# 'r-squared': effect_r_squared
}
return strength_dict
def _evaluate_effect_strength(self, estimate, method="fraction-effect"):
supported_methods = ["fraction-effect"]
if method not in supported_methods:
raise NotImplementedError("This method is not supported for evaluating effect strength")
if method == "fraction-effect":
naive_obs_estimate = self.estimate_effect_naive()
self.logger.debug("{0} {1}".format(estimate.value, naive_obs_estimate.value))
fraction_effect_explained = estimate.value / naive_obs_estimate.value
return fraction_effect_explained
# elif method == "r-squared":
# outcome_mean = np.mean(self._outcome)
# total_variance = np.sum(np.square(self._outcome - outcome_mean))
# Assuming a linear model with one variable: the treatment
# Currently only works for continuous y
# causal_model = outcome_mean + estimate.value*self._treatment
# squared_residual = np.sum(np.square(self._outcome - causal_model))
# r_squared = 1 - (squared_residual/total_variance)
# return r_squared
else:
return None
def update_input(self, treatment_value, control_value, target_units):
self._control_value = control_value
self._treatment_value = treatment_value
self._target_units = target_units
@staticmethod
def is_bootstrap_parameter_changed(bootstrap_estimates_params, given_params):
"""Check whether parameters of the bootstrap have changed.
This is an efficiency method that checks if fresh resampling of the bootstrap samples is required.
Returns True if parameters have changed and resampling should be done again.
:param bootstrap_estimates_params: A dictionary of parameters for the current bootstrap samples
:param given_params: A dictionary of parameters passed by the user
:returns: A binary flag denoting whether the parameters are different.
"""
is_any_parameter_changed = False
for prm, val in bootstrap_estimates_params.items():
given_val = given_params.get(prm, None)
if given_val is not None and given_val != val:
is_any_parameter_changed = True
break
return is_any_parameter_changed
def target_units_tostr(self):
s = ""
if type(self._target_units) is str:
s += self._target_units
elif callable(self._target_units):
s += "Data subset defined by a function"
elif isinstance(self._target_units, pd.DataFrame):
s += "Data subset provided as a data frame"
return s
def signif_results_tostr(self, signif_results):
s = ""
pval = signif_results["p_value"]
if type(pval) is tuple:
s += "[{0}, {1}]".format(pval[0], pval[1])
else:
s += "{0}".format(pval)
return s
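# --- Illustrative sketch (not part of dowhy's tests): the cache-invalidation
# rule of is_bootstrap_parameter_changed above -- any differing, non-None user
# value forces fresh resampling, while None means "reuse the cached bootstrap".
cached_params = {"num_simulations": 100, "sample_size_fraction": 1}
assert CausalEstimator.is_bootstrap_parameter_changed(cached_params, {"num_simulations": 200})
assert not CausalEstimator.is_bootstrap_parameter_changed(cached_params, {"num_simulations": None})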
def estimate_effect(
data: pd.DataFrame,
treatment: Union[str, List[str]],
outcome: Union[str, List[str]],
identifier_name: str,
estimator: CausalEstimator,
control_value: int = 0,
treatment_value: int = 1,
target_units: str = "ate",
effect_modifiers: Optional[List[str]] = None,
fit_estimator: bool = True,
method_params: Optional[Dict] = None,
):
"""Estimate the identified causal effect.
In addition, you can directly call any of the EconML estimation methods. The convention is "backdoor.econml.path-to-estimator-class". For example, for the double machine learning estimator ("DML" class) that is located inside "dml" module of EconML, you can use the method name, "backdoor.econml.dml.DML". CausalML estimators can also be called. See `this demo notebook <https://py-why.github.io/dowhy/example_notebooks/dowhy-conditional-treatment-effects.html>`_.
:param treatment: Name of the treatment
:param outcome: Name of the outcome
:param identified_estimand: a probability expression
that represents the effect to be estimated. Output of
CausalModel.identify_effect method
:param estimator: Instance of a CausalEstimator to use
:param control_value: Value of the treatment in the control group, for effect estimation. If treatment is multi-variate, this can be a list.
:param treatment_value: Value of the treatment in the treated group, for effect estimation. If treatment is multi-variate, this can be a list.
:param target_units: (Experimental) The units for which the treatment effect should be estimated. This can be of three types. (1) a string for common specifications of target units (namely, "ate", "att" and "atc"), (2) a lambda function that can be used as an index for the data (pandas DataFrame), or (3) a new DataFrame that contains values of the effect_modifiers and effect will be estimated only for this new data.
:param effect_modifiers: Names of effect modifier variables can be (optionally) specified here too, since they do not affect identification. If None, the effect_modifiers from the CausalModel are used.
:param fit_estimator: Boolean flag on whether to fit the estimator.
Setting it to False is useful to estimate the effect on new data using a previously fitted estimator.
:param method_params: Dictionary containing any method-specific parameters. A "fit_params" entry, if present, is passed through to the estimator's fit() method.
:returns: An instance of the CausalEstimate class, containing the causal effect estimate
and other method-dependent information
"""
if effect_modifiers is None:
effect_modifiers = []
treatment = parse_state(treatment)
outcome = parse_state(outcome)
causal_estimator_class = estimator.__class__
identified_estimand = estimator._target_estimand
identified_estimand.set_identifier_method(identifier_name)
if identified_estimand.no_directed_path:
logger.warning("No directed path from {0} to {1}.".format(treatment, outcome))
return CausalEstimate(
0, identified_estimand, None, control_value=control_value, treatment_value=treatment_value
)
# Check if estimator's target estimand is identified
elif identified_estimand.estimands[identifier_name] is None:
logger.error("No valid identified estimand available.")
return CausalEstimate(None, None, None, control_value=control_value, treatment_value=treatment_value)
if fit_estimator:
estimator.fit(
data=data,
treatment_name=treatment,
outcome_name=outcome,
effect_modifier_names=effect_modifiers,
**(method_params["fit_params"] if method_params is not None and "fit_params" in method_params else {}),
)
estimate = estimator.estimate_effect(
treatment_value=treatment_value,
control_value=control_value,
target_units=target_units,
confidence_intervals=estimator._confidence_intervals,
)
# Store parameters inside estimate object for refutation methods
# TODO: This add_params needs to move to the estimator class
# inside estimate_effect and estimate_conditional_effect
estimate.add_params(
estimand_type=identified_estimand.estimand_type,
estimator_class=causal_estimator_class,
test_significance=estimator._significance_test,
evaluate_effect_strength=estimator._effect_strength_eval,
confidence_intervals=estimator._confidence_intervals,
target_units=target_units,
effect_modifiers=effect_modifiers,
)
if estimator._significance_test:
estimator.test_significance(estimate.value, method=estimator._significance_test)
if estimator._confidence_intervals:
estimator.estimate_confidence_intervals(
estimate.value, confidence_level=estimator.confidence_level, method=estimator._confidence_intervals
)
if estimator._effect_strength_eval:
effect_strength_dict = estimator.evaluate_effect_strength(estimate)
estimate.add_effect_strength(effect_strength_dict)
return estimate
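# --- Illustrative sketch (not part of dowhy): the two-step contract that the
# estimate_effect() function above drives after this refactor -- construct the
# estimator with the estimand, fit() it with data, then ask for the effect.
# The toy class below is a hypothetical stand-in, not a real dowhy estimator. ---
class _ToyDifferenceEstimator:
    def __init__(self, identified_estimand):
        self._target_estimand = identified_estimand

    def fit(self, data, treatment_name, outcome_name, effect_modifier_names=None):
        self._data, self._t, self._y = data, treatment_name, outcome_name
        return self

    def estimate_effect(self, treatment_value=1, control_value=0, target_units="ate"):
        treated = self._data[self._data[self._t] == treatment_value]
        control = self._data[self._data[self._t] == control_value]
        return treated[self._y].mean() - control[self._y].mean()
# usage: _ToyDifferenceEstimator(estimand).fit(df, "t", "y").estimate_effect()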
class CausalEstimate:
"""Class for the estimate object that every causal estimator returns"""
def __init__(
self,
estimate,
target_estimand,
realized_estimand_expr,
control_value,
treatment_value,
conditional_estimates=None,
**kwargs,
):
self.value = estimate
self.target_estimand = target_estimand
self.realized_estimand_expr = realized_estimand_expr
self.control_value = control_value
self.treatment_value = treatment_value
self.conditional_estimates = conditional_estimates
self.params = kwargs
if self.params is not None:
for key, value in self.params.items():
setattr(self, key, value)
self.effect_strength = None
def add_estimator(self, estimator_instance):
self.estimator = estimator_instance
def add_effect_strength(self, strength_dict):
self.effect_strength = strength_dict
def add_params(self, **kwargs):
self.params.update(kwargs)
def get_confidence_intervals(self, confidence_level=None, method=None, **kwargs):
"""Get confidence intervals of the obtained estimate.
By default, this is done with the help of bootstrapped confidence intervals
but can be overridden if the specific estimator implements other methods of estimating confidence intervals.
If the method provided is not bootstrap, this function calls the implementation of the specific estimator.
:param method: Method for estimating confidence intervals.
:param confidence_level: The confidence level of the confidence intervals of the estimate.
:param kwargs: Other optional args to be passed to the CI method.
:returns: The obtained confidence interval.
"""
confidence_intervals = self.estimator.estimate_confidence_intervals(
estimate_value=self.value, confidence_level=confidence_level, method=method, **kwargs
)
return confidence_intervals
def get_standard_error(self, method=None, **kwargs):
"""Get standard error of the obtained estimate.
By default, this is done with the help of bootstrapped standard errors
but can be overridden if the specific estimator implements other methods of estimating standard error.
If the method provided is not bootstrap, this function calls the implementation of the specific estimator.
:param method: Method for computing the standard error.
:param kwargs: Other optional parameters to be passed to the estimating method.
:returns: Standard error of the causal estimate.
"""
std_error = self.estimator.estimate_std_error(method=method, **kwargs)
return std_error
def test_stat_significance(self, method=None, **kwargs):
"""Test statistical significance of the estimate obtained.
By default, uses resampling to create a non-parametric significance test.
Individual child estimators can implement different methods.
If the method name is different from "bootstrap", this function calls the
implementation of the child estimator.
:param method: Method for checking statistical significance
:param kwargs: Other optional parameters to be passed to the estimating method.
:returns: p-value from the significance test
"""
signif_results = self.estimator.test_significance(self.value, method=method, **kwargs)
return {"p_value": signif_results["p_value"]}
def estimate_conditional_effects(
self, effect_modifiers=None, num_quantiles=CausalEstimator.NUM_QUANTILES_TO_DISCRETIZE_CONT_COLS
):
"""Estimate treatment effect conditioned on given variables.
If a numeric effect modifier is provided, it is discretized into quantile bins. If you would like a custom discretization, you can do so yourself: create a new column containing the discretized effect modifier and then include that column's name in the effect_modifier_names argument.
:param effect_modifiers: Names of effect modifier variables over which the conditional effects will be estimated. If not provided, defaults to the effect modifiers specified during creation of the CausalEstimator object.
:param num_quantiles: The number of quantiles into which a numeric effect modifier variable is discretized. Does not affect any categorical effect modifiers.
:returns: A (multi-index) dataframe that provides separate effects for each value of the (discretized) effect modifiers.
"""
return self.estimator._estimate_conditional_effects(
self.estimator._estimate_effect_fn, effect_modifiers, num_quantiles
)
def interpret(self, method_name=None, **kwargs):
"""Interpret the causal estimate.
:param method_name: Method used (string) or a list of methods. If None, then the default for the specific estimator is used.
:param kwargs: Optional parameters that are directly passed to the interpreter method.
:returns: None
"""
if method_name is None:
method_name = self.estimator.interpret_method
method_name_arr = parse_state(method_name)
for method in method_name_arr:
interpreter = interpreters.get_class_object(method)
interpreter(self, **kwargs).interpret()
def __str__(self):
s = "*** Causal Estimate ***\n"
# No estimand was identified (identification failed)
if self.target_estimand is None:
return "Estimation failed! No relevant identified estimand available for this estimation method."
s += "\n## Identified estimand\n{0}".format(self.target_estimand.__str__(only_target_estimand=True))
s += "\n## Realized estimand\n{0}".format(self.realized_estimand_expr)
if hasattr(self, "estimator"):
s += "\nTarget units: {0}\n".format(self.estimator.target_units_tostr())
s += "\n## Estimate\n"
s += "Mean value: {0}\n".format(self.value)
s += ""
if hasattr(self, "cate_estimates"):
s += "Effect estimates: {0}\n".format(self.cate_estimates)
if hasattr(self, "estimator"):
if self.estimator._significance_test:
s += "p-value: {0}\n".format(self.estimator.signif_results_tostr(self.test_stat_significance()))
if self.estimator._confidence_intervals:
s += "{0}% confidence interval: {1}\n".format(
100 * self.estimator.confidence_level, self.get_confidence_intervals()
)
if self.conditional_estimates is not None:
s += "### Conditional Estimates\n"
s += str(self.conditional_estimates)
if self.effect_strength is not None:
s += "\n## Effect Strength\n"
s += "Change in outcome attributable to treatment: {}\n".format(self.effect_strength["fraction-effect"])
# s += "Variance in outcome explained by treatment: {}\n".format(self.effect_strength["r-squared"])
return s
class RealizedEstimand(object):
def __init__(self, identified_estimand, estimator_name):
self.treatment_variable = identified_estimand.treatment_variable
self.outcome_variable = identified_estimand.outcome_variable
self.backdoor_variables = identified_estimand.get_backdoor_variables()
self.instrumental_variables = identified_estimand.instrumental_variables
self.estimand_type = identified_estimand.estimand_type
self.estimand_expression = None
self.assumptions = None
self.estimator_name = estimator_name
def update_assumptions(self, estimator_assumptions):
self.assumptions = estimator_assumptions
def update_estimand_expression(self, estimand_expression):
self.estimand_expression = estimand_expression
def __str__(self):
s = "Realized estimand: {0}\n".format(self.estimator_name)
s += "Realized estimand type: {0}\n".format(self.estimand_type)
s += "Estimand expression:\n{0}\n".format(sp.pretty(self.estimand_expression))
j = 1
for ass_name, ass_str in self.assumptions.items():
s += "Estimand assumption {0}, {1}: {2}\n".format(j, ass_name, ass_str)
j += 1
return s
| andresmor-ms | 11c4e0dafd6e824eb81ad14262457d954ae61468 | affe0952f4aba6845247355c171565510c2c1673 |
The issue with creating abstract methods here is that we would essentially fix the method signature (including parameters), forcing all the effect and fit methods to take the same parameters across estimators. That would prevent new estimators from taking different parameters, or leave a bunch of unneeded parameters on every estimator; and even though Python allows overriding with different signatures, it is bad practice and type checkers like mypy would complain about it. The solution (which I think is also the one scikit-learn uses) is to avoid defining `fit` and `effect` in the base class; in fact, at https://scikit-learn.org/stable/modules/generated/sklearn.base.BaseEstimator.html you can see that BaseEstimator does not define fit or predict. I found https://scikit-learn.org/stable/developers/develop.html pretty useful as help for this refactor. Obviously it would be a giant effort (and perhaps not required) to make it work exactly the same way as scikit-learn, but it is a good guide.
| andresmor-ms | 199 |
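A minimal sketch of the scikit-learn-style design this comment argues for: the base class keeps only shared machinery and deliberately defines no abstract `fit`/`estimate_effect`, so each subclass chooses its own signature. All class and parameter names below are hypothetical illustrations, not dowhy's actual API.

class _BaseEstimatorSketch:
    # shared machinery only; no abstract fit() here, so subclasses are free
    # to accept different fit/estimate parameters without signature clashes
    def _bootstrap_std_error(self, estimates):
        import numpy as np
        return float(np.std(estimates))

class _MatchingEstimatorSketch(_BaseEstimatorSketch):
    def fit(self, data, treatment_name, outcome_name, distance_metric="minkowski"):
        ...  # matching-specific fitting logic

class _RegressionEstimatorSketch(_BaseEstimatorSketch):
    def fit(self, data, treatment_name, outcome_name, degree=1):
        ...  # regression-specific fitting logic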
py-why/dowhy | 746 | Functional api/causal estimators | * Introduce `fit()` method to estimators.
* Refactor constructors to avoid using `*args` and `**kwargs` and have more explicit parameters.
* Refactor refuters and other parts of the code to use `fit()` and modify arguments to `estimate_effect()` | null | 2022-11-04 16:15:39+00:00 | 2022-12-03 17:07:53+00:00 | dowhy/causal_estimator.py | import logging
from collections import namedtuple
from typing import Dict, List, Optional, Union
import numpy as np
import pandas as pd
import sympy as sp
from sklearn.utils import resample
import dowhy.interpreters as interpreters
from dowhy import causal_estimators
from dowhy.causal_graph import CausalGraph
from dowhy.causal_identifier.identified_estimand import IdentifiedEstimand
from dowhy.utils.api import parse_state
logger = logging.getLogger(__name__)
class CausalEstimator:
"""Base class for an estimator of causal effect.
Subclasses implement different estimation methods. All estimation methods are in the package "dowhy.causal_estimators"
"""
# The default number of simulations for statistical testing
DEFAULT_NUMBER_OF_SIMULATIONS_STAT_TEST = 1000
# The default number of simulations to obtain confidence intervals
DEFAULT_NUMBER_OF_SIMULATIONS_CI = 100
# The portion of the total size that should be taken each time to find the confidence intervals
# 1 is the recommended value
# https://ocw.mit.edu/courses/mathematics/18-05-introduction-to-probability-and-statistics-spring-2014/readings/MIT18_05S14_Reading24.pdf
# https://projecteuclid.org/download/pdf_1/euclid.ss/1032280214
DEFAULT_SAMPLE_SIZE_FRACTION = 1
# The default Confidence Level
DEFAULT_CONFIDENCE_LEVEL = 0.95
# Number of quantiles to discretize continuous columns, for applying groupby
NUM_QUANTILES_TO_DISCRETIZE_CONT_COLS = 5
# Prefix to add to temporary categorical variables created after discretization
TEMP_CAT_COLUMN_PREFIX = "__categorical__"
DEFAULT_NOTIMPLEMENTEDERROR_MSG = "not yet implemented for {0}. If you would this to be implemented in the next version, please raise an issue at https://github.com/microsoft/dowhy/issues"
BootstrapEstimates = namedtuple("BootstrapEstimates", ["estimates", "params"])
DEFAULT_INTERPRET_METHOD = ["textual_effect_interpreter"]
# std args to be removed from locals() before being passed to args_dict
_STD_INIT_ARGS = ("self", "__class__", "args", "kwargs")
def __init__(
self,
data,
identified_estimand,
treatment,
outcome,
control_value=0,
treatment_value=1,
test_significance=False,
evaluate_effect_strength=False,
confidence_intervals=False,
target_units=None,
effect_modifiers=None,
num_null_simulations=DEFAULT_NUMBER_OF_SIMULATIONS_STAT_TEST,
num_simulations=DEFAULT_NUMBER_OF_SIMULATIONS_CI,
sample_size_fraction=DEFAULT_SAMPLE_SIZE_FRACTION,
confidence_level=DEFAULT_CONFIDENCE_LEVEL,
need_conditional_estimates="auto",
num_quantiles_to_discretize_cont_cols=NUM_QUANTILES_TO_DISCRETIZE_CONT_COLS,
**kwargs,
):
"""Initializes an estimator with data and names of relevant variables.
This method is called from the constructors of its child classes.
:param data: data frame containing the data
:param identified_estimand: probability expression
representing the target identified estimand to estimate.
:param treatment: name of the treatment variable
:param outcome: name of the outcome variable
:param control_value: Value of the treatment in the control group, for effect estimation. If treatment is multi-variate, this can be a list.
:param treatment_value: Value of the treatment in the treated group, for effect estimation. If treatment is multi-variate, this can be a list.
:param test_significance: Binary flag or a string indicating whether to test significance and by which method. All estimators support test_significance="bootstrap" that estimates a p-value for the obtained estimate using the bootstrap method. Individual estimators can override this to support custom testing methods. The bootstrap method supports an optional parameter, num_null_simulations. If False, no testing is done. If True, significance of the estimate is tested using the custom method if available, otherwise by bootstrap.
:param evaluate_effect_strength: (Experimental) whether to evaluate the strength of effect
:param confidence_intervals: Binary flag or a string indicating whether the confidence intervals should be computed and which method should be used. All methods support estimation of confidence intervals using the bootstrap method by using the parameter confidence_intervals="bootstrap". The bootstrap method takes in two arguments (num_simulations and sample_size_fraction) that can be optionally specified in the params dictionary. Estimators may also override this to implement their own confidence interval method. If this parameter is False, no confidence intervals are computed. If True, confidence intervals are computed by the estimator's specific method if available, otherwise through bootstrap.
:param target_units: The units for which the treatment effect should be estimated. This can be a string for common specifications of target units (namely, "ate", "att" and "atc"). It can also be a lambda function that can be used as an index for the data (pandas DataFrame). Alternatively, it can be a new DataFrame that contains values of the effect_modifiers and effect will be estimated only for this new data.
:param effect_modifiers: Variables on which to compute separate
effects, or return a heterogeneous effect function. Not all
methods support this currently.
:param num_null_simulations: The number of simulations for testing the
statistical significance of the estimator
:param num_simulations: The number of simulations for finding the
confidence interval (and/or standard error) for an estimate
:param sample_size_fraction: The size of the sample for the bootstrap
estimator
:param confidence_level: The confidence level of the confidence
interval estimate
:param need_conditional_estimates: Boolean flag indicating whether
conditional estimates should be computed. Defaults to True if
there are effect modifiers in the graph
:param num_quantiles_to_discretize_cont_cols: The number of quantiles
into which a numeric effect modifier is split, to enable
estimation of conditional treatment effect over it.
:param kwargs: (optional) Additional estimator-specific parameters
:returns: an instance of the estimator class.
"""
self._data = data
self._target_estimand = identified_estimand
# Currently estimation methods only support univariate treatment and outcome
self._treatment_name = treatment
self._outcome_name = outcome[0] # assuming one-dimensional outcome
self._control_value = control_value
self._treatment_value = treatment_value
self._significance_test = test_significance
self._effect_strength_eval = evaluate_effect_strength
self._target_units = target_units
self._effect_modifier_names = effect_modifiers
self._confidence_intervals = confidence_intervals
self._bootstrap_estimates = None # for confidence intervals and std error
self._bootstrap_null_estimates = None # for significance test
self._effect_modifiers = None
self.method_params = kwargs
# Setting the default interpret method
self.interpret_method = CausalEstimator.DEFAULT_INTERPRET_METHOD
self.logger = logging.getLogger(__name__)
# Setting treatment and outcome values
if self._data is not None:
self._treatment = self._data[self._treatment_name]
self._outcome = self._data[self._outcome_name]
if self._effect_modifier_names:
# only add the observed nodes
self._effect_modifier_names = [
cname for cname in self._effect_modifier_names if cname in self._data.columns
]
if len(self._effect_modifier_names) > 0:
self._effect_modifiers = self._data[self._effect_modifier_names]
self._effect_modifiers = pd.get_dummies(self._effect_modifiers, drop_first=True)
self.logger.debug("Effect modifiers: " + ",".join(self._effect_modifier_names))
else:
self._effect_modifier_names = None
# Check if some parameters were set, otherwise set to default values
self.num_null_simulations = num_null_simulations
self.num_simulations = num_simulations
self.sample_size_fraction = sample_size_fraction
self.confidence_level = confidence_level
self.num_quantiles_to_discretize_cont_cols = num_quantiles_to_discretize_cont_cols
# Estimate conditional estimates by default
self.need_conditional_estimates = (
need_conditional_estimates if need_conditional_estimates != "auto" else bool(self._effect_modifier_names)
)
@staticmethod
def get_estimator_object(new_data, identified_estimand, estimate):
"""Create a new estimator of the same type as the one passed in the estimate argument.
Creates a new object with new_data and the identified_estimand
:param new_data: np.ndarray, pd.Series, pd.DataFrame
The newly assigned data on which the estimator should run
:param identified_estimand: IdentifiedEstimand
An instance of the identified estimand class that provides the information with
respect to which causal pathways are employed when the treatment affects the outcome
:param estimate: CausalEstimate
It is an already existing estimate whose properties we wish to replicate
:returns: An instance of the same estimator class that had generated the given estimate.
"""
estimator_class = estimate.params["estimator_class"]
new_estimator = estimator_class(
new_data,
identified_estimand,
identified_estimand.treatment_variable,
identified_estimand.outcome_variable,
# names of treatment and outcome
control_value=estimate.control_value,
treatment_value=estimate.treatment_value,
test_significance=False,
evaluate_effect_strength=False,
confidence_intervals=estimate.params["confidence_intervals"],
target_units=estimate.params["target_units"],
effect_modifiers=estimate.params["effect_modifiers"],
**estimate.params["method_params"] if estimate.params["method_params"] is not None else {},
)
return new_estimator
def _estimate_effect(self):
"""This method is to be overriden by the child classes, so that they can run the estimation technique of their choice"""
raise NotImplementedError(
("Main estimation method is " + CausalEstimator.DEFAULT_NOTIMPLEMENTEDERROR_MSG).format(self.__class__)
)
def estimate_effect(self):
"""Base estimation method that calls the estimate_effect method of its calling subclass.
Can optionally also test significance and estimate effect strength for any returned estimate.
:param self: object instance of class Estimator
:returns: A CausalEstimate instance that contains point estimates of average and conditional effects. Based on the parameters provided, it optionally includes confidence intervals, standard errors, statistical significance and other statistical parameters.
"""
est = self._estimate_effect()
est.add_estimator(self)
if self._significance_test:
self.test_significance(est.value, method=self._significance_test)
if self._confidence_intervals:
self.estimate_confidence_intervals(
est.value, confidence_level=self.confidence_level, method=self._confidence_intervals
)
if self._effect_strength_eval:
effect_strength_dict = self.evaluate_effect_strength(est)
est.add_effect_strength(effect_strength_dict)
return est
def estimate_effect_naive(self):
# TODO Only works for binary treatment
df_withtreatment = self._data.loc[self._data[self._treatment_name] == 1]
df_notreatment = self._data.loc[self._data[self._treatment_name] == 0]
est = np.mean(df_withtreatment[self._outcome_name]) - np.mean(df_notreatment[self._outcome_name])
return CausalEstimate(est, None, None, control_value=0, treatment_value=1)
def _estimate_effect_fn(self, data_df):
"""Function used in conditional effect estimation. This function is to be overridden by each child estimator.
The overridden function should take in a dataframe as input and return the estimate for that data.
"""
raise NotImplementedError(
("Conditional treatment effects are " + CausalEstimator.DEFAULT_NOTIMPLEMENTEDERROR_MSG).format(
self.__class__
)
)
def _estimate_conditional_effects(self, estimate_effect_fn, effect_modifier_names=None, num_quantiles=None):
"""Estimate conditional treatment effects. Common method for all estimators that utilizes a specific estimate_effect_fn implemented by each child estimator.
If a numeric effect modifier is provided, it is discretized into quantile bins. If you would like a custom discretization, you can do so yourself: create a new column containing the discretized effect modifier and then include that column's name in the effect_modifier_names argument.
:param estimate_effect_fn: Function that has a single parameter (a data frame) and returns the treatment effect estimate on that data.
:param effect_modifier_names: Names of effect modifier variables over which the conditional effects will be estimated. If not provided, defaults to the effect modifiers specified during creation of the CausalEstimator object.
:param num_quantiles: The number of quantiles into which a numeric effect modifier variable is discretized. Does not affect any categorical effect modifiers.
:returns: A (multi-index) dataframe that provides separate effects for each value of the (discretized) effect modifiers.
"""
# Defaulting to class default values if parameters are not provided
if effect_modifier_names is None:
effect_modifier_names = self._effect_modifier_names
if num_quantiles is None:
num_quantiles = self.num_quantiles_to_discretize_cont_cols
# Checking that there is at least one effect modifier
if not effect_modifier_names:
raise ValueError("At least one effect modifier should be specified to compute conditional effects.")
# Making sure that effect_modifier_names is a list
effect_modifier_names = parse_state(effect_modifier_names)
if not all(em in self._effect_modifier_names for em in effect_modifier_names):
self.logger.warning(
"At least one of the provided effect modifiers was not included while fitting the estimator. You may get incorrect results. To resolve, fit the estimator again by providing the updated effect modifiers in estimate_effect()."
)
# Making a copy since we are going to be changing effect modifier names
effect_modifier_names = effect_modifier_names.copy()
prefix = CausalEstimator.TEMP_CAT_COLUMN_PREFIX
# For every numeric effect modifier, adding a temp categorical column
for i in range(len(effect_modifier_names)):
em = effect_modifier_names[i]
if pd.api.types.is_numeric_dtype(self._data[em].dtypes):
self._data[prefix + str(em)] = pd.qcut(self._data[em], num_quantiles, duplicates="drop")
effect_modifier_names[i] = prefix + str(em)
# Grouping by effect modifiers and computing effect separately
by_effect_mods = self._data.groupby(effect_modifier_names)
cond_est_fn = lambda x: self._do(self._treatment_value, x) - self._do(self._control_value, x)
conditional_estimates = by_effect_mods.apply(estimate_effect_fn)
# Deleting the temporary categorical columns
for em in effect_modifier_names:
if em.startswith(prefix):
self._data.pop(em)
return conditional_estimates
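# Illustrative usage of the custom-discretization escape hatch described in the
# docstring above (column names are hypothetical):
#   df["age_group"] = pd.cut(df["age"], bins=[18, 35, 60, 90])
#   estimator._estimate_conditional_effects(fn, effect_modifier_names=["age_group"])
# The automatic quantile binning is then skipped, because "age_group" is already
# categorical rather than numeric.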
def _do(self, x, data_df=None):
raise NotImplementedError(
("Do-operator is " + CausalEstimator.DEFAULT_NOTIMPLEMENTEDERROR_MSG).format(self.__class__)
)
def do(self, x, data_df=None):
"""Method that implements the do-operator.
Given a value x for the treatment, returns the expected value of the outcome when the treatment is intervened to a value x.
:param x: Value of the treatment
:param data_df: Data on which the do-operator is to be applied.
:returns: Value of the outcome when treatment is intervened/set to x.
"""
est = self._do(x, data_df)
return est
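# Illustrative: once a subclass implements _do(), two do() calls give an
# effect-style contrast, e.g. estimator.do(1) - estimator.do(0).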
def construct_symbolic_estimator(self, estimand):
raise NotImplementedError(("Symbolic estimator string is ").format(self.__class__))
def _generate_bootstrap_estimates(self, num_bootstrap_simulations, sample_size_fraction):
"""Helper function to generate causal estimates over bootstrapped samples.
:param num_bootstrap_simulations: Number of simulations for the bootstrap method.
:param sample_size_fraction: Fraction of the dataset to be resampled.
:returns: A collections.namedtuple containing a list of bootstrapped estimates and a dictionary containing parameters used for the bootstrap.
"""
# The array that stores the results of all estimations
simulation_results = np.zeros(num_bootstrap_simulations)
# Compute the sample size as a fraction of the population size
sample_size = int(sample_size_fraction * len(self._data))
if sample_size > len(self._data):
self.logger.warning("WARN: The sample size is greater than the data being sampled")
self.logger.info("INFO: The sample size: {}".format(sample_size))
self.logger.info("INFO: The number of simulations: {}".format(num_bootstrap_simulations))
# Perform the set number of simulations
for index in range(num_bootstrap_simulations):
new_data = resample(self._data, n_samples=sample_size)
new_estimator = type(self)(
new_data,
self._target_estimand,
self._target_estimand.treatment_variable,
self._target_estimand.outcome_variable,
# names of treatment and outcome
treatment_value=self._treatment_value,
control_value=self._control_value,
test_significance=False,
evaluate_effect_strength=False,
confidence_intervals=False,
target_units=self._target_units,
effect_modifiers=self._effect_modifier_names,
**self.method_params,
)
new_effect = new_estimator.estimate_effect()
simulation_results[index] = new_effect.value
estimates = CausalEstimator.BootstrapEstimates(
simulation_results,
{"num_simulations": num_bootstrap_simulations, "sample_size_fraction": sample_size_fraction},
)
return estimates
def _estimate_confidence_intervals_with_bootstrap(
self, estimate_value, confidence_level=None, num_simulations=None, sample_size_fraction=None
):
"""
Method to compute confidence interval using bootstrapped sampling.
:param estimate_value: obtained estimate's value
:param confidence_level: The level for which to compute CI (e.g., 95% confidence level translates to confidence_level=0.95)
:param num_simulations: The number of simulations to be performed to get the bootstrap confidence intervals.
:param sample_size_fraction: The fraction of the dataset to be resampled.
:returns: confidence interval at the specified level.
For more details on bootstrap or resampling statistics, refer to the following links:
https://ocw.mit.edu/courses/mathematics/18-05-introduction-to-probability-and-statistics-spring-2014/readings/MIT18_05S14_Reading24.pdf
https://projecteuclid.org/download/pdf_1/euclid.ss/1032280214
"""
# Using class default parameters if not specified
if num_simulations is None:
num_simulations = self.num_simulations
if sample_size_fraction is None:
sample_size_fraction = self.sample_size_fraction
# Checking if bootstrap_estimates are already computed
if self._bootstrap_estimates is None:
self._bootstrap_estimates = self._generate_bootstrap_estimates(num_simulations, sample_size_fraction)
elif CausalEstimator.is_bootstrap_parameter_changed(self._bootstrap_estimates.params, locals()):
# Checked if any parameter is changed from the previous std error estimate
self._bootstrap_estimates = self._generate_bootstrap_estimates(num_simulations, sample_size_fraction)
# Now use the data obtained from the simulations to get the value of the confidence estimates
bootstrap_estimates = self._bootstrap_estimates.estimates
# Get the variations of each bootstrap estimate and sort
bootstrap_variations = [bootstrap_estimate - estimate_value for bootstrap_estimate in bootstrap_estimates]
sorted_bootstrap_variations = np.sort(bootstrap_variations)
# Now we take the (1 - p)th and the (p)th variations, where p is the chosen confidence level
upper_bound_index = int((1 - confidence_level) * len(sorted_bootstrap_variations))
lower_bound_index = int(confidence_level * len(sorted_bootstrap_variations))
# Get the lower and upper bounds by subtracting the variations from the estimate
lower_bound = estimate_value - sorted_bootstrap_variations[lower_bound_index]
upper_bound = estimate_value - sorted_bootstrap_variations[upper_bound_index]
return lower_bound, upper_bound
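# Worked toy example of the computation above: with estimate_value = 10 and
# sorted_bootstrap_variations = [-2, -1, 0, 1, 2] at confidence_level = 0.95,
# upper_bound_index = int(0.05 * 5) = 0 picks -2, so upper_bound = 10 - (-2) = 12,
# and lower_bound_index = int(0.95 * 5) = 4 picks 2, so lower_bound = 10 - 2 = 8.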
def _estimate_confidence_intervals(self, confidence_level=None, method=None, **kwargs):
"""
This method is to be overridden by the child classes, so that they
can run a confidence interval estimation method suited to the specific
causal estimator.
"""
raise NotImplementedError(
(
"This method for estimating confidence intervals is "
+ CausalEstimator.DEFAULT_NOTIMPLEMENTEDERROR_MSG
+ " Meanwhile, you can try the bootstrap method (method='bootstrap') to estimate confidence intervals."
).format(self.__class__)
)
def estimate_confidence_intervals(self, estimate_value, confidence_level=None, method=None, **kwargs):
"""Find the confidence intervals corresponding to any estimator
By default, this is done with the help of bootstrapped confidence intervals
but can be overridden if the specific estimator implements other methods of estimating confidence intervals.
If the method provided is not bootstrap, this function calls the implementation of the specific estimator.
:param estimate_value: obtained estimate's value
:param method: Method for estimating confidence intervals.
:param confidence_level: The confidence level of the confidence intervals of the estimate.
:param kwargs: Other optional args to be passed to the CI method.
:returns: The obtained confidence interval.
"""
if method is None:
if self._confidence_intervals:
method = self._confidence_intervals # this is either True or methodname
else:
method = "default"
confidence_intervals = None
if confidence_level is None:
confidence_level = self.confidence_level
if method == "default" or method is True: # user has not provided any method
try:
confidence_intervals = self._estimate_confidence_intervals(confidence_level, method=method, **kwargs)
except NotImplementedError:
confidence_intervals = self._estimate_confidence_intervals_with_bootstrap(
estimate_value, confidence_level, **kwargs
)
else:
if method == "bootstrap":
confidence_intervals = self._estimate_confidence_intervals_with_bootstrap(
estimate_value, confidence_level, **kwargs
)
else:
confidence_intervals = self._estimate_confidence_intervals(confidence_level, method=method, **kwargs)
return confidence_intervals
def _estimate_std_error_with_bootstrap(self, num_simulations=None, sample_size_fraction=None):
"""Compute standard error using the bootstrap method. Standard error
and confidence intervals use the same parameter num_simulations for
the number of bootstrap simulations.
:param num_simulations: Number of bootstrapped samples.
:param sample_size_fraction: Fraction of data to be resampled.
:returns: Standard error of the obtained estimate.
"""
# Use existing params, if new user defined params are not present
if num_simulations is None:
num_simulations = self.num_simulations
if sample_size_fraction is None:
sample_size_fraction = self.sample_size_fraction
# Checking if bootstrap_estimates are already computed
if self._bootstrap_estimates is None:
self._bootstrap_estimates = self._generate_bootstrap_estimates(num_simulations, sample_size_fraction)
elif CausalEstimator.is_bootstrap_parameter_changed(self._bootstrap_estimates.params, locals()):
# Check if any parameter is changed from the previous std error estimate
self._bootstrap_estimates = self._generate_bootstrap_estimates(num_simulations, sample_size_fraction)
std_error = np.std(self._bootstrap_estimates.estimates)
return std_error
def _estimate_std_error(self, method=None, **kwargs):
"""
This method is to be overridden by the child classes, so that they
can run a standard error estimation method suited to the specific
causal estimator.
"""
raise NotImplementedError(
(
"This method for estimating standard errors is "
+ CausalEstimator.DEFAULT_NOTIMPLEMENTEDERROR_MSG
+ " Meanwhile, you can try the bootstrap method (method='bootstrap') to estimate standard errors."
).format(self.__class__)
)
def estimate_std_error(self, method=None, **kwargs):
"""Compute standard error of an obtained causal estimate.
:param method: Method for computing the standard error.
:param kwargs: Other optional parameters to be passed to the estimating method.
:returns: Standard error of the causal estimate.
"""
if method is None:
if self._confidence_intervals:
method = self._confidence_intervals
else:
method = "default"
std_error = None
if method == "default" or method is True: # user has not provided any method
try:
std_error = self._estimate_std_error(method, **kwargs)
except NotImplementedError:
std_error = self._estimate_std_error_with_bootstrap(**kwargs)
else:
if method == "bootstrap":
std_error = self._estimate_std_error_with_bootstrap(**kwargs)
else:
std_error = self._estimate_std_error(method, **kwargs)
return std_error
def _test_significance_with_bootstrap(self, estimate_value, num_null_simulations=None):
"""Test statistical significance of an estimate using the bootstrap method.
:param estimate_value: Obtained estimate's value
:param num_null_simulations: Number of simulations for the null hypothesis
:returns: p-value of the statistical significance test.
"""
# Use existing params, if new user defined params are not present
if num_null_simulations is None:
num_null_simulations = self.num_null_simulations
do_retest = self._bootstrap_null_estimates is None or CausalEstimator.is_bootstrap_parameter_changed(
self._bootstrap_null_estimates.params, locals()
)
if do_retest:
null_estimates = np.zeros(num_null_simulations)
for i in range(num_null_simulations):
new_outcome = np.random.permutation(self._outcome)
new_data = self._data.assign(dummy_outcome=new_outcome)
# self._outcome = self._data["dummy_outcome"]
new_estimator = type(self)(
new_data,
self._target_estimand,
self._target_estimand.treatment_variable,
("dummy_outcome",),
test_significance=False,
evaluate_effect_strength=False,
confidence_intervals=False,
target_units=self._target_units,
effect_modifiers=self._effect_modifier_names,
**self.method_params,
)
new_effect = new_estimator.estimate_effect()
null_estimates[i] = new_effect.value
self._bootstrap_null_estimates = CausalEstimator.BootstrapEstimates(
null_estimates, {"num_null_simulations": num_null_simulations, "sample_size_fraction": 1}
)
# Processing the null hypothesis estimates
sorted_null_estimates = np.sort(self._bootstrap_null_estimates.estimates)
self.logger.debug("Null estimates: {0}".format(sorted_null_estimates))
median_estimate = sorted_null_estimates[int(num_null_simulations / 2)]
# Doing a two-sided test
if estimate_value > median_estimate:
# Being conservative with the p-value reported
estimate_index = np.searchsorted(sorted_null_estimates, estimate_value, side="left")
p_value = 1 - (estimate_index / num_null_simulations)
if estimate_value <= median_estimate:
# Being conservative with the p-value reported
estimate_index = np.searchsorted(sorted_null_estimates, estimate_value, side="right")
p_value = estimate_index / num_null_simulations
# If the estimate_index is 0, it depends on the number of simulations
if p_value == 0:
p_value = (0, 1 / len(sorted_null_estimates)) # a tuple determining the range.
elif p_value == 1:
p_value = (1 - 1 / len(sorted_null_estimates), 1)
signif_dict = {"p_value": p_value}
return signif_dict
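# Toy example of the p-value logic above: with num_null_simulations = 1000 and an
# estimate above the median that exceeds exactly 975 of the sorted null estimates,
# searchsorted returns 975 and p_value = 1 - 975/1000 = 0.025. An estimate beyond
# every null estimate is reported as the range (0, 1/1000) rather than exactly 0.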
def _test_significance(self, estimate_value, method=None, **kwargs):
"""
This method is to be overridden by the child classes, so that they
can run a significance test suited to the specific
causal estimator.
"""
raise NotImplementedError(
(
"This method for testing statistical significance is "
+ CausalEstimator.DEFAULT_NOTIMPLEMENTEDERROR_MSG
+ " Meanwhile, you can try the bootstrap method (method='bootstrap') to test statistical significance."
).format(self.__class__)
)
def test_significance(self, estimate_value, method=None, **kwargs):
"""Test statistical significance of obtained estimate.
By default, uses resampling to create a non-parametric significance test.
A general procedure. Individual child estimators can implement different methods.
If the method name is different from "bootstrap", this function calls the
implementation of the child estimator.
:param self: object instance of class Estimator
:param estimate_value: obtained estimate's value
:param method: Method for checking statistical significance
:returns: p-value from the significance test
"""
if method is None:
if self._significance_test:
method = self._significance_test # this is either True or methodname
else:
method = "default"
signif_dict = None
if method == "default" or method is True: # user has not provided any method
try:
signif_dict = self._test_significance(estimate_value, method, **kwargs)
except NotImplementedError:
signif_dict = self._test_significance_with_bootstrap(estimate_value, **kwargs)
else:
if method == "bootstrap":
signif_dict = self._test_significance_with_bootstrap(estimate_value, **kwargs)
else:
signif_dict = self._test_significance(estimate_value, method, **kwargs)
return signif_dict
def evaluate_effect_strength(self, estimate):
fraction_effect_explained = self._evaluate_effect_strength(estimate, method="fraction-effect")
# Need to test r-squared before supporting
# effect_r_squared = self._evaluate_effect_strength(estimate, method="r-squared")
strength_dict = {
"fraction-effect": fraction_effect_explained
# 'r-squared': effect_r_squared
}
return strength_dict
def _evaluate_effect_strength(self, estimate, method="fraction-effect"):
supported_methods = ["fraction-effect"]
if method not in supported_methods:
raise NotImplementedError("This method is not supported for evaluating effect strength")
if method == "fraction-effect":
naive_obs_estimate = self.estimate_effect_naive()
self.logger.debug(estimate.value, naive_obs_estimate.value)
fraction_effect_explained = estimate.value / naive_obs_estimate.value
return fraction_effect_explained
# elif method == "r-squared":
# outcome_mean = np.mean(self._outcome)
# total_variance = np.sum(np.square(self._outcome - outcome_mean))
# Assuming a linear model with one variable: the treatment
# Currently only works for continuous y
# causal_model = outcome_mean + estimate.value*self._treatment
# squared_residual = np.sum(np.square(self._outcome - causal_model))
# r_squared = 1 - (squared_residual/total_variance)
# return r_squared
else:
return None
def update_input(self, treatment_value, control_value, target_units):
self._control_value = control_value
self._treatment_value = treatment_value
self._target_units = target_units
@staticmethod
def is_bootstrap_parameter_changed(bootstrap_estimates_params, given_params):
"""Check whether parameters of the bootstrap have changed.
This is an efficiency method that checks if fresh resampling of the bootstrap samples is required.
Returns True if parameters have changed and resampling should be done again.
:param bootstrap_estimates_params: A dictionary of parameters for the current bootstrap samples
:param given_params: A dictionary of parameters passed by the user
:returns: A binary flag denoting whether the parameters are different.
"""
is_any_parameter_changed = False
for prm, val in bootstrap_estimates_params.items():
given_val = given_params.get(prm, None)
if given_val is not None and given_val != val:
is_any_parameter_changed = True
break
return is_any_parameter_changed
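# Illustrative: is_bootstrap_parameter_changed({"num_simulations": 100}, {"num_simulations": 200})
# returns True, which triggers a fresh round of bootstrap resampling in the callers above.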
def target_units_tostr(self):
s = ""
if type(self._target_units) is str:
s += self._target_units
elif callable(self._target_units):
s += "Data subset defined by a function"
elif isinstance(self._target_units, pd.DataFrame):
s += "Data subset provided as a data frame"
return s
def signif_results_tostr(self, signif_results):
s = ""
pval = signif_results["p_value"]
if type(pval) is tuple:
s += "[{0}, {1}]".format(pval[0], pval[1])
else:
s += "{0}".format(pval)
return s
def estimate_effect(
treatment: Union[str, List[str]],
outcome: Union[str, List[str]],
identified_estimand: IdentifiedEstimand,
identifier_name: str,
method: CausalEstimator,
control_value: int = 0,
treatment_value: int = 1,
test_significance: Optional[bool] = None,
evaluate_effect_strength: bool = False,
confidence_intervals: bool = False,
target_units: str = "ate",
effect_modifiers: List[str] = [],
fit_estimator: bool = True,
method_params: Optional[Dict] = None,
):
"""Estimate the identified causal effect.
Currently requires an explicit estimation method to be specified. Method names follow the convention of identification method followed by the specific estimation method: "[backdoor/iv].estimation_method_name". The following methods are supported.
* Propensity Score Matching: "backdoor.propensity_score_matching"
* Propensity Score Stratification: "backdoor.propensity_score_stratification"
* Propensity Score-based Inverse Weighting: "backdoor.propensity_score_weighting"
* Linear Regression: "backdoor.linear_regression"
* Generalized Linear Models (e.g., logistic regression): "backdoor.generalized_linear_model"
* Instrumental Variables: "iv.instrumental_variable"
* Regression Discontinuity: "iv.regression_discontinuity"
In addition, you can directly call any of the EconML estimation methods. The convention is "backdoor.econml.path-to-estimator-class". For example, for the double machine learning estimator ("DML" class) that is located inside "dml" module of EconML, you can use the method name, "backdoor.econml.dml.DML". CausalML estimators can also be called. See `this demo notebook <https://py-why.github.io/dowhy/example_notebooks/dowhy-conditional-treatment-effects.html>`_.
:param treatment: Name of the treatment
:param outcome: Name of the outcome
:param identified_estimand: a probability expression
that represents the effect to be estimated. Output of
CausalModel.identify_effect method
:param identifier_name: name of the identification method (e.g., "backdoor") whose estimand should be used.
:param method: an instance of a CausalEstimator subclass implementing the estimation method to be used.
:param control_value: Value of the treatment in the control group, for effect estimation. If treatment is multi-variate, this can be a list.
:param treatment_value: Value of the treatment in the treated group, for effect estimation. If treatment is multi-variate, this can be a list.
:param test_significance: Binary flag on whether to additionally do a statistical significance test for the estimate.
:param evaluate_effect_strength: (Experimental) Binary flag on whether to estimate the relative strength of the treatment's effect. This measure can be used to compare different treatments for the same outcome (by running this method with different treatments sequentially).
:param confidence_intervals: (Experimental) Binary flag indicating whether confidence intervals should be computed.
:param target_units: (Experimental) The units for which the treatment effect should be estimated. This can be of three types. (1) a string for common specifications of target units (namely, "ate", "att" and "atc"), (2) a lambda function that can be used as an index for the data (pandas DataFrame), or (3) a new DataFrame that contains values of the effect_modifiers and effect will be estimated only for this new data.
:param effect_modifiers: Names of effect modifier variables can be (optionally) specified here too, since they do not affect identification. If None, the effect_modifiers from the CausalModel are used.
:param fit_estimator: Boolean flag on whether to fit the estimator.
Setting it to False is useful to estimate the effect on new data using a previously fitted estimator.
:param method_params: Dictionary containing any method-specific parameters. These are passed directly to the estimating method. See the docs for each estimation method for allowed method-specific params.
:returns: An instance of the CausalEstimate class, containing the causal effect estimate
and other method-dependent information
"""
treatment = parse_state(treatment)
outcome = parse_state(outcome)
causal_estimator_class = method.__class__
identified_estimand.set_identifier_method(identifier_name)
if identified_estimand.no_directed_path:
logger.warning("No directed path from {0} to {1}.".format(treatment, outcome))
return CausalEstimate(
0, identified_estimand, None, control_value=control_value, treatment_value=treatment_value
)
# Check if estimator's target estimand is identified
elif identified_estimand.estimands[identifier_name] is None:
logger.error("No valid identified estimand available.")
return CausalEstimate(None, None, None, control_value=control_value, treatment_value=treatment_value)
method.update_input(treatment_value, control_value, target_units)
estimate = method.estimate_effect()
# Store parameters inside estimate object for refutation methods
# TODO: This add_params needs to move to the estimator class
# inside estimate_effect and estimate_conditional_effect
estimate.add_params(
estimand_type=identified_estimand.estimand_type,
estimator_class=causal_estimator_class,
test_significance=test_significance,
evaluate_effect_strength=evaluate_effect_strength,
confidence_intervals=confidence_intervals,
target_units=target_units,
effect_modifiers=effect_modifiers,
method_params=method_params,
)
return estimate
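# Hypothetical usage sketch of estimate_effect() (variable names are illustrative;
# `estimand` comes from an identification step and `estimator` is an
# already-constructed CausalEstimator subclass):
#   estimate = estimate_effect(
#       treatment="v0",
#       outcome="y",
#       identified_estimand=estimand,
#       identifier_name="backdoor",
#       method=estimator,
#       target_units="ate",
#   )
#   print(estimate.value)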
class CausalEstimate:
"""Class for the estimate object that every causal estimator returns"""
def __init__(
self,
estimate,
target_estimand,
realized_estimand_expr,
control_value,
treatment_value,
conditional_estimates=None,
**kwargs,
):
self.value = estimate
self.target_estimand = target_estimand
self.realized_estimand_expr = realized_estimand_expr
self.control_value = control_value
self.treatment_value = treatment_value
self.conditional_estimates = conditional_estimates
self.params = kwargs
if self.params is not None:
for key, value in self.params.items():
setattr(self, key, value)
self.effect_strength = None
def add_estimator(self, estimator_instance):
self.estimator = estimator_instance
def add_effect_strength(self, strength_dict):
self.effect_strength = strength_dict
def add_params(self, **kwargs):
self.params.update(kwargs)
def get_confidence_intervals(self, confidence_level=None, method=None, **kwargs):
"""Get confidence intervals of the obtained estimate.
By default, this is done with the help of bootstrapped confidence intervals
but can be overridden if the specific estimator implements other methods of estimating confidence intervals.
If the method provided is not bootstrap, this function calls the implementation of the specific estimator.
:param method: Method for estimating confidence intervals.
:param confidence_level: The confidence level of the confidence intervals of the estimate.
:param kwargs: Other optional args to be passed to the CI method.
:returns: The obtained confidence interval.
"""
confidence_intervals = self.estimator.estimate_confidence_intervals(
estimate_value=self.value, confidence_level=confidence_level, method=method, **kwargs
)
return confidence_intervals
def get_standard_error(self, method=None, **kwargs):
"""Get standard error of the obtained estimate.
By default, this is done with the help of bootstrapped standard errors
but can be overridden if the specific estimator implements other methods of estimating standard error.
If the method provided is not bootstrap, this function calls the implementation of the specific estimator.
:param method: Method for computing the standard error.
:param kwargs: Other optional parameters to be passed to the estimating method.
:returns: Standard error of the causal estimate.
"""
std_error = self.estimator.estimate_std_error(method=method, **kwargs)
return std_error
def test_stat_significance(self, method=None, **kwargs):
"""Test statistical significance of the estimate obtained.
By default, uses resampling to create a non-parametric significance test.
Individual child estimators can implement different methods.
If the method name is different from "bootstrap", this function calls the
implementation of the child estimator.
:param method: Method for checking statistical significance
:param kwargs: Other optional parameters to be passed to the estimating method.
:returns: p-value from the significance test
"""
signif_results = self.estimator.test_significance(self.value, method=method, **kwargs)
return {"p_value": signif_results["p_value"]}
def estimate_conditional_effects(
self, effect_modifiers=None, num_quantiles=CausalEstimator.NUM_QUANTILES_TO_DISCRETIZE_CONT_COLS
):
"""Estimate treatment effect conditioned on given variables.
If a numeric effect modifier is provided, it is discretized into quantile bins. If you would like a custom discretization, you can do so yourself: create a new column containing the discretized effect modifier and then include that column's name in the effect_modifier_names argument.
:param effect_modifiers: Names of effect modifier variables over which the conditional effects will be estimated. If not provided, defaults to the effect modifiers specified during creation of the CausalEstimator object.
:param num_quantiles: The number of quantiles into which a numeric effect modifier variable is discretized. Does not affect any categorical effect modifiers.
:returns: A (multi-index) dataframe that provides separate effects for each value of the (discretized) effect modifiers.
"""
return self.estimator._estimate_conditional_effects(
self.estimator._estimate_effect_fn, effect_modifiers, num_quantiles
)
def interpret(self, method_name=None, **kwargs):
"""Interpret the causal estimate.
:param method_name: Method used (string) or a list of methods. If None, then the default for the specific estimator is used.
:param kwargs: Optional parameters that are directly passed to the interpreter method.
:returns: None
"""
if method_name is None:
method_name = self.estimator.interpret_method
method_name_arr = parse_state(method_name)
for method in method_name_arr:
interpreter = interpreters.get_class_object(method)
interpreter(self, **kwargs).interpret()
def __str__(self):
s = "*** Causal Estimate ***\n"
# No estimand was identified (identification failed)
if self.target_estimand is None:
return "Estimation failed! No relevant identified estimand available for this estimation method."
s += "\n## Identified estimand\n{0}".format(self.target_estimand.__str__(only_target_estimand=True))
s += "\n## Realized estimand\n{0}".format(self.realized_estimand_expr)
if hasattr(self, "estimator"):
s += "\nTarget units: {0}\n".format(self.estimator.target_units_tostr())
s += "\n## Estimate\n"
s += "Mean value: {0}\n".format(self.value)
s += ""
if hasattr(self, "cate_estimates"):
s += "Effect estimates: {0}\n".format(self.cate_estimates)
if hasattr(self, "estimator"):
if self.estimator._significance_test:
s += "p-value: {0}\n".format(self.estimator.signif_results_tostr(self.test_stat_significance()))
if self.estimator._confidence_intervals:
s += "{0}% confidence interval: {1}\n".format(
100 * self.estimator.confidence_level, self.get_confidence_intervals()
)
if self.conditional_estimates is not None:
s += "### Conditional Estimates\n"
s += str(self.conditional_estimates)
if self.effect_strength is not None:
s += "\n## Effect Strength\n"
s += "Change in outcome attributable to treatment: {}\n".format(self.effect_strength["fraction-effect"])
# s += "Variance in outcome explained by treatment: {}\n".format(self.effect_strength["r-squared"])
return s
class RealizedEstimand(object):
def __init__(self, identified_estimand, estimator_name):
self.treatment_variable = identified_estimand.treatment_variable
self.outcome_variable = identified_estimand.outcome_variable
self.backdoor_variables = identified_estimand.get_backdoor_variables()
self.instrumental_variables = identified_estimand.instrumental_variables
self.estimand_type = identified_estimand.estimand_type
self.estimand_expression = None
self.assumptions = None
self.estimator_name = estimator_name
def update_assumptions(self, estimator_assumptions):
self.assumptions = estimator_assumptions
def update_estimand_expression(self, estimand_expression):
self.estimand_expression = estimand_expression
def __str__(self):
s = "Realized estimand: {0}\n".format(self.estimator_name)
s += "Realized estimand type: {0}\n".format(self.estimand_type)
s += "Estimand expression:\n{0}\n".format(sp.pretty(self.estimand_expression))
j = 1
for ass_name, ass_str in self.assumptions.items():
s += "Estimand assumption {0}, {1}: {2}\n".format(j, ass_name, ass_str)
j += 1
return s
| import copy
import logging
from collections import namedtuple
from typing import Dict, List, Optional, Union
import numpy as np
import pandas as pd
import sympy as sp
from sklearn.utils import resample
import dowhy.interpreters as interpreters
from dowhy.causal_identifier.identified_estimand import IdentifiedEstimand
from dowhy.utils.api import parse_state
logger = logging.getLogger(__name__)
class CausalEstimator:
"""Base class for an estimator of causal effect.
Subclasses implement different estimation methods. All estimation methods are in the package "dowhy.causal_estimators"
"""
# The default number of simulations for statistical testing
DEFAULT_NUMBER_OF_SIMULATIONS_STAT_TEST = 1000
# The default number of simulations to obtain confidence intervals
DEFAULT_NUMBER_OF_SIMULATIONS_CI = 100
# The portion of the total size that should be taken each time to find the confidence intervals
# 1 is the recommended value
# https://ocw.mit.edu/courses/mathematics/18-05-introduction-to-probability-and-statistics-spring-2014/readings/MIT18_05S14_Reading24.pdf
# https://projecteuclid.org/download/pdf_1/euclid.ss/1032280214
DEFAULT_SAMPLE_SIZE_FRACTION = 1
# The default Confidence Level
DEFAULT_CONFIDENCE_LEVEL = 0.95
# Number of quantiles to discretize continuous columns, for applying groupby
NUM_QUANTILES_TO_DISCRETIZE_CONT_COLS = 5
# Prefix to add to temporary categorical variables created after discretization
TEMP_CAT_COLUMN_PREFIX = "__categorical__"
DEFAULT_NOTIMPLEMENTEDERROR_MSG = "not yet implemented for {0}. If you would this to be implemented in the next version, please raise an issue at https://github.com/microsoft/dowhy/issues"
BootstrapEstimates = namedtuple("BootstrapEstimates", ["estimates", "params"])
DEFAULT_INTERPRET_METHOD = ["textual_effect_interpreter"]
# std args to be removed from locals() before being passed to args_dict
_STD_INIT_ARGS = ("self", "__class__", "args", "kwargs")
def __init__(
self,
identified_estimand: IdentifiedEstimand,
test_significance: bool = False,
evaluate_effect_strength: bool = False,
confidence_intervals: bool = False,
num_null_simulations: int = DEFAULT_NUMBER_OF_SIMULATIONS_STAT_TEST,
num_simulations: int = DEFAULT_NUMBER_OF_SIMULATIONS_CI,
sample_size_fraction: int = DEFAULT_SAMPLE_SIZE_FRACTION,
confidence_level: float = DEFAULT_CONFIDENCE_LEVEL,
need_conditional_estimates: Union[bool, str] = "auto",
num_quantiles_to_discretize_cont_cols: int = NUM_QUANTILES_TO_DISCRETIZE_CONT_COLS,
**_,
):
"""Initializes an estimator with data and names of relevant variables.
This method is called from the constructors of its child classes.
:param identified_estimand: probability expression
representing the target identified estimand to estimate.
:param test_significance: Binary flag or a string indicating whether to test significance and by which method. All estimators support test_significance="bootstrap" that estimates a p-value for the obtained estimate using the bootstrap method. Individual estimators can override this to support custom testing methods. The bootstrap method supports an optional parameter, num_null_simulations. If False, no testing is done. If True, significance of the estimate is tested using the custom method if available, otherwise by bootstrap.
:param evaluate_effect_strength: (Experimental) whether to evaluate the strength of effect
:param confidence_intervals: Binary flag or a string indicating whether the confidence intervals should be computed and which method should be used. All methods support estimation of confidence intervals using the bootstrap method by using the parameter confidence_intervals="bootstrap". The bootstrap method takes in two arguments (num_simulations and sample_size_fraction) that can be optionally specified in the params dictionary. Estimators may also override this to implement their own confidence interval method. If this parameter is False, no confidence intervals are computed. If True, confidence intervals are computed by the estimator's specific method if available, otherwise through bootstrap.
:param num_null_simulations: The number of simulations for testing the
statistical significance of the estimator
:param num_simulations: The number of simulations for finding the
confidence interval (and/or standard error) for an estimate
:param sample_size_fraction: The size of the sample for the bootstrap
estimator
:param confidence_level: The confidence level of the confidence
interval estimate
:param need_conditional_estimates: Boolean flag indicating whether
conditional estimates should be computed. Defaults to True if
there are effect modifiers in the graph
:param num_quantiles_to_discretize_cont_cols: The number of quantiles
into which a numeric effect modifier is split, to enable
estimation of conditional treatment effect over it.
:param kwargs: (optional) Additional estimator-specific parameters; accepted and ignored by the base class
:returns: an instance of the estimator class.
"""
self._target_estimand = identified_estimand
self._significance_test = test_significance
self._effect_strength_eval = evaluate_effect_strength
self._confidence_intervals = confidence_intervals
# Setting the default interpret method
self.interpret_method = CausalEstimator.DEFAULT_INTERPRET_METHOD
self.logger = logging.getLogger(__name__)
# Check if some parameters were set, otherwise set to default values
self.num_null_simulations = num_null_simulations
self.num_simulations = num_simulations
self.sample_size_fraction = sample_size_fraction
self.confidence_level = confidence_level
self.num_quantiles_to_discretize_cont_cols = num_quantiles_to_discretize_cont_cols
# Estimate conditional estimates by default
self.need_conditional_estimates = need_conditional_estimates
self._bootstrap_estimates = None
self._bootstrap_null_estimates = None
def _set_data(self, data: pd.DataFrame, treatment_name: List[str], outcome_name: List[str]):
"""Sets the data for the estimator
:param data: data frame containing the data
:param treatment_name: name of the treatment variable
:param outcome_name: name of the outcome variable
"""
self._data = data
self._treatment_name = treatment_name
self._outcome_name = outcome_name[0]  # assuming one-dimensional outcome
self._treatment = self._data[self._treatment_name]
self._outcome = self._data[self._outcome_name]
def _set_effect_modifiers(self, effect_modifier_names: Optional[List[str]] = None):
"""Sets the effect modifiers for the estimator
Modifies need_conditional_estimates according to the effect modifiers value
:param effect_modifier_names: Names of variables on which to compute separate
effects, or return a heterogeneous effect function. Not all
methods support this currently.
"""
self._effect_modifiers = effect_modifier_names
if effect_modifier_names is not None:
self._effect_modifier_names = [cname for cname in effect_modifier_names if cname in self._data.columns]
if len(self._effect_modifier_names) > 0:
self._effect_modifiers = self._data[self._effect_modifier_names]
self._effect_modifiers = pd.get_dummies(self._effect_modifiers, drop_first=True)
self.logger.debug("Effect modifiers: " + ",".join(self._effect_modifier_names))
else:
self._effect_modifier_names = []
else:
self._effect_modifier_names = []
self.need_conditional_estimates = (
self.need_conditional_estimates
if self.need_conditional_estimates != "auto"
else (self._effect_modifier_names and len(self._effect_modifier_names) > 0)
)
def _set_identified_estimand(self, new_identified_estimand):
"""Method used internally to change the target estimand (required by some refuters)
:param new_identified_estimand: The new target_estimand to use
"""
self._target_estimand = new_identified_estimand
def get_new_estimator_object(
self,
identified_estimand,
test_significance=False,
evaluate_effect_strength=False,
confidence_intervals=None,
):
"""Create a new estimator of the same type as the one passed in the estimate argument.
Creates a new object with the identified_estimand
:param identified_estimand: IdentifiedEstimand
An instance of the identified estimand class that provides the information with
respect to which causal pathways are employed when the treatment affects the outcome
:returns: A new instance of the same estimator class, configured with the given identified_estimand.
"""
new_estimator = copy.deepcopy(self)
new_estimator._target_estimand = identified_estimand
new_estimator._test_significance = test_significance
new_estimator._evaluate_effect_strength = evaluate_effect_strength
new_estimator._confidence_intervals = (
self._confidence_intervals if confidence_intervals is None else confidence_intervals
)
return new_estimator
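# Sketch of the two-step workflow this refactor enables (data and column names
# are illustrative; fit() and estimate_effect() are defined by the concrete
# subclasses):
#   estimator.fit(df, treatment_name=["v0"], outcome_name=["y"],
#                 effect_modifier_names=["X0"])
#   estimate = estimator.estimate_effect(control_value=0, treatment_value=1,
#                                        target_units="ate")
# _generate_bootstrap_estimates() below follows the same pattern on resampled data.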
def estimate_effect_naive(self):
# TODO Only works for binary treatment
df_withtreatment = self._data.loc[self._data[self._treatment_name] == 1]
df_notreatment = self._data.loc[self._data[self._treatment_name] == 0]
est = np.mean(df_withtreatment[self._outcome_name]) - np.mean(df_notreatment[self._outcome_name])
return CausalEstimate(est, None, None, control_value=0, treatment_value=1)
def _estimate_effect_fn(self, data_df):
"""Function used in conditional effect estimation. This function is to be overridden by each child estimator.
The overridden function should take in a dataframe as input and return the estimate for that data.
"""
raise NotImplementedError(
("Conditional treatment effects are " + CausalEstimator.DEFAULT_NOTIMPLEMENTEDERROR_MSG).format(
self.__class__
)
)
def _estimate_conditional_effects(self, estimate_effect_fn, effect_modifier_names=None, num_quantiles=None):
"""Estimate conditional treatment effects. Common method for all estimators that utilizes a specific estimate_effect_fn implemented by each child estimator.
If a numeric effect modifier is provided, it is discretized into quantile bins. If you would like a custom discretization, you can do so yourself: create a new column containing the discretized effect modifier and then include that column's name in the effect_modifier_names argument.
:param estimate_effect_fn: Function that has a single parameter (a data frame) and returns the treatment effect estimate on that data.
:param effect_modifier_names: Names of effect modifier variables over which the conditional effects will be estimated. If not provided, defaults to the effect modifiers specified during creation of the CausalEstimator object.
:param num_quantiles: The number of quantiles into which a numeric effect modifier variable is discretized. Does not affect any categorical effect modifiers.
:returns: A (multi-index) dataframe that provides separate effects for each value of the (discretized) effect modifiers.
"""
# Defaulting to class default values if parameters are not provided
if effect_modifier_names is None:
effect_modifier_names = self._effect_modifier_names
if num_quantiles is None:
num_quantiles = self.num_quantiles_to_discretize_cont_cols
# Checking that there is at least one effect modifier
if not effect_modifier_names:
raise ValueError("At least one effect modifier should be specified to compute conditional effects.")
# Making sure that effect_modifier_names is a list
effect_modifier_names = parse_state(effect_modifier_names)
if not all(em in self._effect_modifier_names for em in effect_modifier_names):
self.logger.warning(
"At least one of the provided effect modifiers was not included while fitting the estimator. You may get incorrect results. To resolve, fit the estimator again by providing the updated effect modifiers in estimate_effect()."
)
# Making a copy since we are going to be changing effect modifier names
effect_modifier_names = effect_modifier_names.copy()
prefix = CausalEstimator.TEMP_CAT_COLUMN_PREFIX
# For every numeric effect modifier, adding a temp categorical column
for i in range(len(effect_modifier_names)):
em = effect_modifier_names[i]
if pd.api.types.is_numeric_dtype(self._data[em].dtypes):
self._data[prefix + str(em)] = pd.qcut(self._data[em], num_quantiles, duplicates="drop")
effect_modifier_names[i] = prefix + str(em)
# Grouping by effect modifiers and computing effect separately
by_effect_mods = self._data.groupby(effect_modifier_names)
cond_est_fn = lambda x: self._do(self._treatment_value, x) - self._do(self._control_value, x)
conditional_estimates = by_effect_mods.apply(estimate_effect_fn)
# Deleting the temporary categorical columns
for em in effect_modifier_names:
if em.startswith(prefix):
self._data.pop(em)
return conditional_estimates
def _do(self, x, data_df=None):
raise NotImplementedError(
("Do-operator is " + CausalEstimator.DEFAULT_NOTIMPLEMENTEDERROR_MSG).format(self.__class__)
)
def do(self, x, data_df=None):
"""Method that implements the do-operator.
Given a value x for the treatment, returns the expected value of the outcome when the treatment is intervened to a value x.
:param x: Value of the treatment
:param data_df: Data on which the do-operator is to be applied.
:returns: Value of the outcome when treatment is intervened/set to x.
"""
est = self._do(x, data_df)
return est
def construct_symbolic_estimator(self, estimand):
raise NotImplementedError(("Symbolic estimator string is ").format(self.__class__))
def _generate_bootstrap_estimates(self, num_bootstrap_simulations, sample_size_fraction):
"""Helper function to generate causal estimates over bootstrapped samples.
:param num_bootstrap_simulations: Number of simulations for the bootstrap method.
:param sample_size_fraction: Fraction of the dataset to be resampled.
:returns: A collections.namedtuple containing a list of bootstrapped estimates and a dictionary containing parameters used for the bootstrap.
"""
# The array that stores the results of all estimations
simulation_results = np.zeros(num_bootstrap_simulations)
        # Compute the sample size as a fraction of the population (full data) size
sample_size = int(sample_size_fraction * len(self._data))
if sample_size > len(self._data):
self.logger.warning("WARN: The sample size is greater than the data being sampled")
self.logger.info("INFO: The sample size: {}".format(sample_size))
self.logger.info("INFO: The number of simulations: {}".format(num_bootstrap_simulations))
# Perform the set number of simulations
for index in range(num_bootstrap_simulations):
new_data = resample(self._data, n_samples=sample_size)
new_estimator = self.get_new_estimator_object(
self._target_estimand,
test_significance=False,
evaluate_effect_strength=False,
confidence_intervals=False,
)
new_estimator.fit(
new_data,
self._target_estimand.treatment_variable,
self._target_estimand.outcome_variable,
effect_modifier_names=self._effect_modifier_names,
)
new_effect = new_estimator.estimate_effect(
treatment_value=self._treatment_value,
control_value=self._control_value,
target_units=self._target_units,
)
simulation_results[index] = new_effect.value
estimates = CausalEstimator.BootstrapEstimates(
simulation_results,
{"num_simulations": num_bootstrap_simulations, "sample_size_fraction": sample_size_fraction},
)
return estimates
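    # Minimal sketch of the resampling primitive used above, with a
    # hypothetical data frame `df`; sklearn's resample draws rows with
    # replacement by default, giving one bootstrap replicate per call.
    #
    #   from sklearn.utils import resample
    #   boot_df = resample(df, n_samples=len(df))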
def _estimate_confidence_intervals_with_bootstrap(
self, estimate_value, confidence_level=None, num_simulations=None, sample_size_fraction=None
):
"""
Method to compute confidence interval using bootstrapped sampling.
:param estimate_value: obtained estimate's value
:param confidence_level: The level for which to compute CI (e.g., 95% confidence level translates to confidence_level=0.95)
:param num_simulations: The number of simulations to be performed to get the bootstrap confidence intervals.
:param sample_size_fraction: The fraction of the dataset to be resampled.
:returns: confidence interval at the specified level.
For more details on bootstrap or resampling statistics, refer to the following links:
https://ocw.mit.edu/courses/mathematics/18-05-introduction-to-probability-and-statistics-spring-2014/readings/MIT18_05S14_Reading24.pdf
https://projecteuclid.org/download/pdf_1/euclid.ss/1032280214
"""
# Using class default parameters if not specified
if num_simulations is None:
num_simulations = self.num_simulations
if sample_size_fraction is None:
sample_size_fraction = self.sample_size_fraction
# Checking if bootstrap_estimates are already computed
if self._bootstrap_estimates is None:
self._bootstrap_estimates = self._generate_bootstrap_estimates(num_simulations, sample_size_fraction)
elif CausalEstimator.is_bootstrap_parameter_changed(self._bootstrap_estimates.params, locals()):
# Checked if any parameter is changed from the previous std error estimate
self._bootstrap_estimates = self._generate_bootstrap_estimates(num_simulations, sample_size_fraction)
# Now use the data obtained from the simulations to get the value of the confidence estimates
bootstrap_estimates = self._bootstrap_estimates.estimates
# Get the variations of each bootstrap estimate and sort
bootstrap_variations = [bootstrap_estimate - estimate_value for bootstrap_estimate in bootstrap_estimates]
sorted_bootstrap_variations = np.sort(bootstrap_variations)
# Now we take the (1- p)th and the (p)th variations, where p is the chosen confidence level
upper_bound_index = int((1 - confidence_level) * len(sorted_bootstrap_variations))
lower_bound_index = int(confidence_level * len(sorted_bootstrap_variations))
# Get the lower and upper bounds by subtracting the variations from the estimate
lower_bound = estimate_value - sorted_bootstrap_variations[lower_bound_index]
upper_bound = estimate_value - sorted_bootstrap_variations[upper_bound_index]
return lower_bound, upper_bound
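    # Worked sketch of the pivot-style interval computed above, assuming a
    # point estimate of 2.0, confidence_level=0.95 and five hypothetical
    # bootstrap estimates:
    #
    #   import numpy as np
    #   variations = np.sort(np.array([1.8, 1.9, 2.0, 2.1, 2.4]) - 2.0)
    #   upper_idx = int((1 - 0.95) * len(variations))   # 0
    #   lower_idx = int(0.95 * len(variations))         # 4
    #   lower = 2.0 - variations[lower_idx]             # 2.0 - 0.4    = 1.6
    #   upper = 2.0 - variations[upper_idx]             # 2.0 - (-0.2) = 2.2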
def _estimate_confidence_intervals(self, confidence_level=None, method=None, **kwargs):
"""
        This method is to be overridden by the child classes, so that they
can run a confidence interval estimation method suited to the specific
causal estimator.
"""
raise NotImplementedError(
(
"This method for estimating confidence intervals is "
+ CausalEstimator.DEFAULT_NOTIMPLEMENTEDERROR_MSG
+ " Meanwhile, you can try the bootstrap method (method='bootstrap') to estimate confidence intervals."
).format(self.__class__)
)
def estimate_confidence_intervals(self, estimate_value, confidence_level=None, method=None, **kwargs):
"""Find the confidence intervals corresponding to any estimator
By default, this is done with the help of bootstrapped confidence intervals
but can be overridden if the specific estimator implements other methods of estimating confidence intervals.
If the method provided is not bootstrap, this function calls the implementation of the specific estimator.
:param estimate_value: obtained estimate's value
:param method: Method for estimating confidence intervals.
:param confidence_level: The confidence level of the confidence intervals of the estimate.
:param kwargs: Other optional args to be passed to the CI method.
:returns: The obtained confidence interval.
"""
if method is None:
if self._confidence_intervals:
method = self._confidence_intervals # this is either True or methodname
else:
method = "default"
confidence_intervals = None
if confidence_level is None:
confidence_level = self.confidence_level
if method == "default" or method is True: # user has not provided any method
try:
confidence_intervals = self._estimate_confidence_intervals(confidence_level, method=method, **kwargs)
except NotImplementedError:
confidence_intervals = self._estimate_confidence_intervals_with_bootstrap(
estimate_value, confidence_level, **kwargs
)
else:
if method == "bootstrap":
confidence_intervals = self._estimate_confidence_intervals_with_bootstrap(
estimate_value, confidence_level, **kwargs
)
else:
confidence_intervals = self._estimate_confidence_intervals(confidence_level, method=method, **kwargs)
return confidence_intervals
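    # Usage sketch, assuming a hypothetical fitted estimator `est` whose point
    # estimate is 2.0; passing method="bootstrap" forces the generic resampling
    # method even when the estimator implements its own intervals.
    #
    #   lower, upper = est.estimate_confidence_intervals(
    #       estimate_value=2.0, confidence_level=0.95, method="bootstrap"
    #   )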
def _estimate_std_error_with_bootstrap(self, num_simulations=None, sample_size_fraction=None):
"""Compute standard error using the bootstrap method. Standard error
and confidence intervals use the same parameter num_simulations for
the number of bootstrap simulations.
:param num_simulations: Number of bootstrapped samples.
:param sample_size_fraction: Fraction of data to be resampled.
:returns: Standard error of the obtained estimate.
"""
# Use existing params, if new user defined params are not present
if num_simulations is None:
num_simulations = self.num_simulations
if sample_size_fraction is None:
sample_size_fraction = self.sample_size_fraction
# Checking if bootstrap_estimates are already computed
if self._bootstrap_estimates is None:
self._bootstrap_estimates = self._generate_bootstrap_estimates(num_simulations, sample_size_fraction)
elif CausalEstimator.is_bootstrap_parameter_changed(self._bootstrap_estimates.params, locals()):
# Check if any parameter is changed from the previous std error estimate
self._bootstrap_estimates = self._generate_bootstrap_estimates(num_simulations, sample_size_fraction)
std_error = np.std(self._bootstrap_estimates.estimates)
return std_error
def _estimate_std_error(self, method=None, **kwargs):
"""
        This method is to be overridden by the child classes, so that they
can run a standard error estimation method suited to the specific
causal estimator.
"""
raise NotImplementedError(
(
"This method for estimating standard errors is "
+ CausalEstimator.DEFAULT_NOTIMPLEMENTEDERROR_MSG
+ " Meanwhile, you can try the bootstrap method (method='bootstrap') to estimate standard errors."
).format(self.__class__)
)
def estimate_std_error(self, method=None, **kwargs):
"""Compute standard error of an obtained causal estimate.
:param method: Method for computing the standard error.
:param kwargs: Other optional parameters to be passed to the estimating method.
:returns: Standard error of the causal estimate.
"""
if method is None:
if self._confidence_intervals:
method = self._confidence_intervals
else:
method = "default"
std_error = None
if method == "default" or method is True: # user has not provided any method
try:
std_error = self._estimate_std_error(method, **kwargs)
except NotImplementedError:
std_error = self._estimate_std_error_with_bootstrap(**kwargs)
else:
if method == "bootstrap":
std_error = self._estimate_std_error_with_bootstrap(**kwargs)
else:
std_error = self._estimate_std_error(method, **kwargs)
return std_error
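    # Usage sketch, assuming a hypothetical fitted estimator `est`; the extra
    # keyword argument is forwarded to the bootstrap routine.
    #
    #   se = est.estimate_std_error(method="bootstrap", num_simulations=200)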
def _test_significance_with_bootstrap(self, estimate_value, num_null_simulations=None):
"""Test statistical significance of an estimate using the bootstrap method.
:param estimate_value: Obtained estimate's value
:param num_null_simulations: Number of simulations for the null hypothesis
:returns: p-value of the statistical significance test.
"""
# Use existing params, if new user defined params are not present
if num_null_simulations is None:
num_null_simulations = self.num_null_simulations
do_retest = self._bootstrap_null_estimates is None or CausalEstimator.is_bootstrap_parameter_changed(
self._bootstrap_null_estimates.params, locals()
)
if do_retest:
null_estimates = np.zeros(num_null_simulations)
for i in range(num_null_simulations):
new_outcome = np.random.permutation(self._outcome)
new_data = self._data.assign(dummy_outcome=new_outcome)
# self._outcome = self._data["dummy_outcome"]
new_estimator = self.get_new_estimator_object(
self._target_estimand,
test_significance=False,
evaluate_effect_strength=False,
confidence_intervals=False,
)
new_estimator.fit(
data=new_data,
treatment_name=self._target_estimand.treatment_variable,
outcome_name=("dummy_outcome",),
effect_modifier_names=self._effect_modifier_names,
)
new_effect = new_estimator.estimate_effect(
target_units=self._target_units,
)
null_estimates[i] = new_effect.value
self._bootstrap_null_estimates = CausalEstimator.BootstrapEstimates(
null_estimates, {"num_null_simulations": num_null_simulations, "sample_size_fraction": 1}
)
# Processing the null hypothesis estimates
sorted_null_estimates = np.sort(self._bootstrap_null_estimates.estimates)
self.logger.debug("Null estimates: {0}".format(sorted_null_estimates))
median_estimate = sorted_null_estimates[int(num_null_simulations / 2)]
# Doing a two-sided test
if estimate_value > median_estimate:
# Being conservative with the p-value reported
estimate_index = np.searchsorted(sorted_null_estimates, estimate_value, side="left")
p_value = 1 - (estimate_index / num_null_simulations)
        else:
# Being conservative with the p-value reported
estimate_index = np.searchsorted(sorted_null_estimates, estimate_value, side="right")
p_value = estimate_index / num_null_simulations
# If the estimate_index is 0, it depends on the number of simulations
if p_value == 0:
p_value = (0, 1 / len(sorted_null_estimates)) # a tuple determining the range.
elif p_value == 1:
p_value = (1 - 1 / len(sorted_null_estimates), 1)
signif_dict = {"p_value": p_value}
return signif_dict
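    # Worked sketch of the placebo null built above: permuting the outcome
    # severs any treatment-outcome link, so re-estimated effects form a null
    # distribution and the p-value is the tail rank of the real estimate.
    #
    #   import numpy as np
    #   null = np.sort(np.array([-0.3, -0.1, 0.0, 0.1, 0.2]))
    #   estimate = 0.15                                      # above the median
    #   idx = np.searchsorted(null, estimate, side="left")   # 4
    #   p_value = 1 - idx / len(null)                        # 0.2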
def _test_significance(self, estimate_value, method=None, **kwargs):
"""
        This method is to be overridden by the child classes, so that they
can run a significance test suited to the specific
causal estimator.
"""
raise NotImplementedError(
(
"This method for testing statistical significance is "
+ CausalEstimator.DEFAULT_NOTIMPLEMENTEDERROR_MSG
+ " Meanwhile, you can try the bootstrap method (method='bootstrap') to test statistical significance."
).format(self.__class__)
)
def test_significance(self, estimate_value, method=None, **kwargs):
"""Test statistical significance of obtained estimate.
By default, uses resampling to create a non-parametric significance test.
A general procedure. Individual child estimators can implement different methods.
If the method name is different from "bootstrap", this function calls the
implementation of the child estimator.
:param self: object instance of class Estimator
:param estimate_value: obtained estimate's value
:param method: Method for checking statistical significance
:returns: p-value from the significance test
"""
if method is None:
if self._significance_test:
method = self._significance_test # this is either True or methodname
else:
method = "default"
signif_dict = None
if method == "default" or method is True: # user has not provided any method
try:
signif_dict = self._test_significance(estimate_value, method, **kwargs)
except NotImplementedError:
signif_dict = self._test_significance_with_bootstrap(estimate_value, **kwargs)
else:
if method == "bootstrap":
signif_dict = self._test_significance_with_bootstrap(estimate_value, **kwargs)
else:
signif_dict = self._test_significance(estimate_value, method, **kwargs)
return signif_dict
def evaluate_effect_strength(self, estimate):
fraction_effect_explained = self._evaluate_effect_strength(estimate, method="fraction-effect")
# Need to test r-squared before supporting
# effect_r_squared = self._evaluate_effect_strength(estimate, method="r-squared")
strength_dict = {
"fraction-effect": fraction_effect_explained
# 'r-squared': effect_r_squared
}
return strength_dict
def _evaluate_effect_strength(self, estimate, method="fraction-effect"):
supported_methods = ["fraction-effect"]
if method not in supported_methods:
raise NotImplementedError("This method is not supported for evaluating effect strength")
if method == "fraction-effect":
naive_obs_estimate = self.estimate_effect_naive()
            self.logger.debug("Estimate: %s, Naive observed estimate: %s", estimate.value, naive_obs_estimate.value)
fraction_effect_explained = estimate.value / naive_obs_estimate.value
return fraction_effect_explained
# elif method == "r-squared":
# outcome_mean = np.mean(self._outcome)
# total_variance = np.sum(np.square(self._outcome - outcome_mean))
# Assuming a linear model with one variable: the treatment
# Currently only works for continuous y
# causal_model = outcome_mean + estimate.value*self._treatment
# squared_residual = np.sum(np.square(self._outcome - causal_model))
# r_squared = 1 - (squared_residual/total_variance)
# return r_squared
else:
return None
def update_input(self, treatment_value, control_value, target_units):
self._control_value = control_value
self._treatment_value = treatment_value
self._target_units = target_units
@staticmethod
def is_bootstrap_parameter_changed(bootstrap_estimates_params, given_params):
"""Check whether parameters of the bootstrap have changed.
This is an efficiency method that checks if fresh resampling of the bootstrap samples is required.
Returns True if parameters have changed and resampling should be done again.
:param bootstrap_estimates_params: A dictionary of parameters for the current bootstrap samples
:param given_params: A dictionary of parameters passed by the user
:returns: A binary flag denoting whether the parameters are different.
"""
is_any_parameter_changed = False
for prm, val in bootstrap_estimates_params.items():
given_val = given_params.get(prm, None)
if given_val is not None and given_val != val:
is_any_parameter_changed = True
break
return is_any_parameter_changed
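    # Usage sketch: bootstrap samples are regenerated only when a requested
    # parameter differs from the cached ones; absent/None values never trigger
    # a resample.
    #
    #   cached = {"num_simulations": 100, "sample_size_fraction": 1}
    #   CausalEstimator.is_bootstrap_parameter_changed(cached, {"num_simulations": 200})   # True
    #   CausalEstimator.is_bootstrap_parameter_changed(cached, {"num_simulations": None})  # False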
def target_units_tostr(self):
s = ""
if type(self._target_units) is str:
s += self._target_units
elif callable(self._target_units):
s += "Data subset defined by a function"
elif isinstance(self._target_units, pd.DataFrame):
s += "Data subset provided as a data frame"
return s
def signif_results_tostr(self, signif_results):
s = ""
pval = signif_results["p_value"]
if type(pval) is tuple:
s += "[{0}, {1}]".format(pval[0], pval[1])
else:
s += "{0}".format(pval)
return s
def estimate_effect(
data: pd.DataFrame,
treatment: Union[str, List[str]],
outcome: Union[str, List[str]],
identifier_name: str,
estimator: CausalEstimator,
control_value: int = 0,
treatment_value: int = 1,
target_units: str = "ate",
effect_modifiers: List[str] = None,
fit_estimator: bool = True,
method_params: Optional[Dict] = None,
):
"""Estimate the identified causal effect.
In addition, you can directly call any of the EconML estimation methods. The convention is "backdoor.econml.path-to-estimator-class". For example, for the double machine learning estimator ("DML" class) that is located inside "dml" module of EconML, you can use the method name, "backdoor.econml.dml.DML". CausalML estimators can also be called. See `this demo notebook <https://py-why.github.io/dowhy/example_notebooks/dowhy-conditional-treatment-effects.html>`_.
    :param data: data frame containing the data on which the effect is to be estimated
    :param treatment: Name of the treatment
    :param outcome: Name of the outcome
    :param identifier_name: Name of the identification method used (e.g., "backdoor" or "iv");
        selects which identified estimand of the estimator's target estimand to use
:param estimator: Instance of a CausalEstimator to use
:param control_value: Value of the treatment in the control group, for effect estimation. If treatment is multi-variate, this can be a list.
:param treatment_value: Value of the treatment in the treated group, for effect estimation. If treatment is multi-variate, this can be a list.
:param target_units: (Experimental) The units for which the treatment effect should be estimated. This can be of three types. (1) a string for common specifications of target units (namely, "ate", "att" and "atc"), (2) a lambda function that can be used as an index for the data (pandas DataFrame), or (3) a new DataFrame that contains values of the effect_modifiers and effect will be estimated only for this new data.
:param effect_modifiers: Names of effect modifier variables can be (optionally) specified here too, since they do not affect identification. If None, the effect_modifiers from the CausalModel are used.
:param fit_estimator: Boolean flag on whether to fit the estimator.
        Setting it to False is useful to estimate the effect on new data using a previously fitted estimator.
    :param method_params: Optional dictionary of method-specific parameters. If it contains a "fit_params" entry, that entry is passed through as keyword arguments to the estimator's fit() method.
:returns: An instance of the CausalEstimate class, containing the causal effect estimate
and other method-dependent information
"""
if effect_modifiers is None:
effect_modifiers = []
treatment = parse_state(treatment)
outcome = parse_state(outcome)
causal_estimator_class = estimator.__class__
identified_estimand = estimator._target_estimand
identified_estimand.set_identifier_method(identifier_name)
if identified_estimand.no_directed_path:
logger.warning("No directed path from {0} to {1}.".format(treatment, outcome))
return CausalEstimate(
0, identified_estimand, None, control_value=control_value, treatment_value=treatment_value
)
# Check if estimator's target estimand is identified
elif identified_estimand.estimands[identifier_name] is None:
logger.error("No valid identified estimand available.")
return CausalEstimate(None, None, None, control_value=control_value, treatment_value=treatment_value)
if fit_estimator:
estimator.fit(
data=data,
treatment_name=treatment,
outcome_name=outcome,
effect_modifier_names=effect_modifiers,
**method_params["fit_params"] if "fit_params" in method_params else {},
)
estimate = estimator.estimate_effect(
treatment_value=treatment_value,
control_value=control_value,
target_units=target_units,
confidence_intervals=estimator._confidence_intervals,
)
# Store parameters inside estimate object for refutation methods
# TODO: This add_params needs to move to the estimator class
# inside estimate_effect and estimate_conditional_effect
estimate.add_params(
estimand_type=identified_estimand.estimand_type,
estimator_class=causal_estimator_class,
test_significance=estimator._significance_test,
evaluate_effect_strength=estimator._effect_strength_eval,
confidence_intervals=estimator._confidence_intervals,
target_units=target_units,
effect_modifiers=effect_modifiers,
)
if estimator._significance_test:
estimator.test_significance(estimate.value, method=estimator._significance_test)
if estimator._confidence_intervals:
estimator.estimate_confidence_intervals(
estimate.value, confidence_level=estimator.confidence_level, method=estimator._confidence_intervals
)
if estimator._effect_strength_eval:
effect_strength_dict = estimator.evaluate_effect_strength(estimate)
estimate.add_effect_strength(effect_strength_dict)
return estimate
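# Usage sketch of the functional API above; `df` and `identified_estimand` are
# assumed to come from the usual identification step, and the estimator class
# is one illustrative choice whose post-refactor constructor is assumed to
# take the identified estimand.
#
#   from dowhy.causal_estimators.propensity_score_matching_estimator import (
#       PropensityScoreMatchingEstimator,
#   )
#   estimator = PropensityScoreMatchingEstimator(identified_estimand)
#   estimate = estimate_effect(
#       data=df,
#       treatment="v0",
#       outcome="y",
#       identifier_name="backdoor",
#       estimator=estimator,
#   )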
class CausalEstimate:
"""Class for the estimate object that every causal estimator returns"""
def __init__(
self,
estimate,
target_estimand,
realized_estimand_expr,
control_value,
treatment_value,
conditional_estimates=None,
**kwargs,
):
self.value = estimate
self.target_estimand = target_estimand
self.realized_estimand_expr = realized_estimand_expr
self.control_value = control_value
self.treatment_value = treatment_value
self.conditional_estimates = conditional_estimates
self.params = kwargs
if self.params is not None:
for key, value in self.params.items():
setattr(self, key, value)
self.effect_strength = None
def add_estimator(self, estimator_instance):
self.estimator = estimator_instance
def add_effect_strength(self, strength_dict):
self.effect_strength = strength_dict
def add_params(self, **kwargs):
self.params.update(kwargs)
def get_confidence_intervals(self, confidence_level=None, method=None, **kwargs):
"""Get confidence intervals of the obtained estimate.
By default, this is done with the help of bootstrapped confidence intervals
but can be overridden if the specific estimator implements other methods of estimating confidence intervals.
If the method provided is not bootstrap, this function calls the implementation of the specific estimator.
:param method: Method for estimating confidence intervals.
:param confidence_level: The confidence level of the confidence intervals of the estimate.
:param kwargs: Other optional args to be passed to the CI method.
:returns: The obtained confidence interval.
"""
confidence_intervals = self.estimator.estimate_confidence_intervals(
estimate_value=self.value, confidence_level=confidence_level, method=method, **kwargs
)
return confidence_intervals
def get_standard_error(self, method=None, **kwargs):
"""Get standard error of the obtained estimate.
By default, this is done with the help of bootstrapped standard errors
but can be overridden if the specific estimator implements other methods of estimating standard error.
If the method provided is not bootstrap, this function calls the implementation of the specific estimator.
:param method: Method for computing the standard error.
:param kwargs: Other optional parameters to be passed to the estimating method.
:returns: Standard error of the causal estimate.
"""
std_error = self.estimator.estimate_std_error(method=method, **kwargs)
return std_error
def test_stat_significance(self, method=None, **kwargs):
"""Test statistical significance of the estimate obtained.
By default, uses resampling to create a non-parametric significance test.
Individual child estimators can implement different methods.
If the method name is different from "bootstrap", this function calls the
implementation of the child estimator.
:param method: Method for checking statistical significance
:param kwargs: Other optional parameters to be passed to the estimating method.
:returns: p-value from the significance test
"""
signif_results = self.estimator.test_significance(self.value, method=method, **kwargs)
return {"p_value": signif_results["p_value"]}
def estimate_conditional_effects(
self, effect_modifiers=None, num_quantiles=CausalEstimator.NUM_QUANTILES_TO_DISCRETIZE_CONT_COLS
):
"""Estimate treatment effect conditioned on given variables.
If a numeric effect modifier is provided, it is discretized into quantile bins. If you would like a custom discretization, you can do so yourself: create a new column containing the discretized effect modifier and then include that column's name in the effect_modifier_names argument.
:param effect_modifiers: Names of effect modifier variables over which the conditional effects will be estimated. If not provided, defaults to the effect modifiers specified during creation of the CausalEstimator object.
:param num_quantiles: The number of quantiles into which a numeric effect modifier variable is discretized. Does not affect any categorical effect modifiers.
:returns: A (multi-index) dataframe that provides separate effects for each value of the (discretized) effect modifiers.
"""
return self.estimator._estimate_conditional_effects(
self.estimator._estimate_effect_fn, effect_modifiers, num_quantiles
)
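    # Usage sketch, assuming a hypothetical `estimate` returned by a fitted
    # estimator with effect modifier "w0": one effect per quantile bin of w0.
    #
    #   cond = estimate.estimate_conditional_effects(effect_modifiers=["w0"], num_quantiles=4)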
def interpret(self, method_name=None, **kwargs):
"""Interpret the causal estimate.
:param method_name: Method used (string) or a list of methods. If None, then the default for the specific estimator is used.
        :param kwargs: Optional parameters that are directly passed to the interpreter method.
:returns: None
"""
if method_name is None:
method_name = self.estimator.interpret_method
method_name_arr = parse_state(method_name)
for method in method_name_arr:
interpreter = interpreters.get_class_object(method)
interpreter(self, **kwargs).interpret()
def __str__(self):
s = "*** Causal Estimate ***\n"
# No estimand was identified (identification failed)
if self.target_estimand is None:
return "Estimation failed! No relevant identified estimand available for this estimation method."
s += "\n## Identified estimand\n{0}".format(self.target_estimand.__str__(only_target_estimand=True))
s += "\n## Realized estimand\n{0}".format(self.realized_estimand_expr)
if hasattr(self, "estimator"):
s += "\nTarget units: {0}\n".format(self.estimator.target_units_tostr())
s += "\n## Estimate\n"
s += "Mean value: {0}\n".format(self.value)
s += ""
if hasattr(self, "cate_estimates"):
s += "Effect estimates: {0}\n".format(self.cate_estimates)
if hasattr(self, "estimator"):
if self.estimator._significance_test:
s += "p-value: {0}\n".format(self.estimator.signif_results_tostr(self.test_stat_significance()))
if self.estimator._confidence_intervals:
s += "{0}% confidence interval: {1}\n".format(
100 * self.estimator.confidence_level, self.get_confidence_intervals()
)
if self.conditional_estimates is not None:
s += "### Conditional Estimates\n"
s += str(self.conditional_estimates)
if self.effect_strength is not None:
s += "\n## Effect Strength\n"
s += "Change in outcome attributable to treatment: {}\n".format(self.effect_strength["fraction-effect"])
# s += "Variance in outcome explained by treatment: {}\n".format(self.effect_strength["r-squared"])
return s
class RealizedEstimand(object):
def __init__(self, identified_estimand, estimator_name):
self.treatment_variable = identified_estimand.treatment_variable
self.outcome_variable = identified_estimand.outcome_variable
self.backdoor_variables = identified_estimand.get_backdoor_variables()
self.instrumental_variables = identified_estimand.instrumental_variables
self.estimand_type = identified_estimand.estimand_type
self.estimand_expression = None
self.assumptions = None
self.estimator_name = estimator_name
def update_assumptions(self, estimator_assumptions):
self.assumptions = estimator_assumptions
def update_estimand_expression(self, estimand_expression):
self.estimand_expression = estimand_expression
def __str__(self):
s = "Realized estimand: {0}\n".format(self.estimator_name)
s += "Realized estimand type: {0}\n".format(self.estimand_type)
s += "Estimand expression:\n{0}\n".format(sp.pretty(self.estimand_expression))
j = 1
for ass_name, ass_str in self.assumptions.items():
s += "Estimand assumption {0}, {1}: {2}\n".format(j, ass_name, ass_str)
j += 1
return s
| andresmor-ms | 11c4e0dafd6e824eb81ad14262457d954ae61468 | affe0952f4aba6845247355c171565510c2c1673 | What information from the removed parameter are we missing from the estimator's `target_estimand`? Shouldn't they be the same? The identifier name replaces the first part of the previous parameter `method_name`, which was a string. According to the docs:
> Currently requires an explicit method name to be specified. Method names follow the convention of identification method followed by the specific estimation method: "[backdoor/iv].estimation_method_name".
So the values it can take are "backdoor" or "iv", which I think can't be found anywhere in the `target_estimand` or in the removed `identified_estimand` param. Let me know if this is correct, or whether we have another way of getting that "backdoor"/"iv" information. | andresmor-ms | 200
py-why/dowhy | 746 | Functional api/causal estimators | * Introduce `fit()` method to estimators.
* Refactor constructors to avoid using `*args` and `**kwargs` and have more explicit parameters.
* Refactor refuters and other parts of the code to use `fit()` and modify arguments to `estimate_effect()` | null | 2022-11-04 16:15:39+00:00 | 2022-12-03 17:07:53+00:00 | dowhy/causal_estimator.py | import logging
from collections import namedtuple
from typing import Dict, List, Optional, Union
import numpy as np
import pandas as pd
import sympy as sp
from sklearn.utils import resample
import dowhy.interpreters as interpreters
from dowhy import causal_estimators
from dowhy.causal_graph import CausalGraph
from dowhy.causal_identifier.identified_estimand import IdentifiedEstimand
from dowhy.utils.api import parse_state
logger = logging.getLogger(__name__)
class CausalEstimator:
"""Base class for an estimator of causal effect.
Subclasses implement different estimation methods. All estimation methods are in the package "dowhy.causal_estimators"
"""
# The default number of simulations for statistical testing
DEFAULT_NUMBER_OF_SIMULATIONS_STAT_TEST = 1000
# The default number of simulations to obtain confidence intervals
DEFAULT_NUMBER_OF_SIMULATIONS_CI = 100
# The portion of the total size that should be taken each time to find the confidence intervals
# 1 is the recommended value
# https://ocw.mit.edu/courses/mathematics/18-05-introduction-to-probability-and-statistics-spring-2014/readings/MIT18_05S14_Reading24.pdf
# https://projecteuclid.org/download/pdf_1/euclid.ss/1032280214
DEFAULT_SAMPLE_SIZE_FRACTION = 1
# The default Confidence Level
DEFAULT_CONFIDENCE_LEVEL = 0.95
# Number of quantiles to discretize continuous columns, for applying groupby
NUM_QUANTILES_TO_DISCRETIZE_CONT_COLS = 5
# Prefix to add to temporary categorical variables created after discretization
TEMP_CAT_COLUMN_PREFIX = "__categorical__"
    DEFAULT_NOTIMPLEMENTEDERROR_MSG = "not yet implemented for {0}. If you would like this to be implemented in the next version, please raise an issue at https://github.com/microsoft/dowhy/issues"
BootstrapEstimates = namedtuple("BootstrapEstimates", ["estimates", "params"])
DEFAULT_INTERPRET_METHOD = ["textual_effect_interpreter"]
# std args to be removed from locals() before being passed to args_dict
_STD_INIT_ARGS = ("self", "__class__", "args", "kwargs")
def __init__(
self,
data,
identified_estimand,
treatment,
outcome,
control_value=0,
treatment_value=1,
test_significance=False,
evaluate_effect_strength=False,
confidence_intervals=False,
target_units=None,
effect_modifiers=None,
num_null_simulations=DEFAULT_NUMBER_OF_SIMULATIONS_STAT_TEST,
num_simulations=DEFAULT_NUMBER_OF_SIMULATIONS_CI,
sample_size_fraction=DEFAULT_SAMPLE_SIZE_FRACTION,
confidence_level=DEFAULT_CONFIDENCE_LEVEL,
need_conditional_estimates="auto",
num_quantiles_to_discretize_cont_cols=NUM_QUANTILES_TO_DISCRETIZE_CONT_COLS,
**kwargs,
):
"""Initializes an estimator with data and names of relevant variables.
This method is called from the constructors of its child classes.
:param data: data frame containing the data
:param identified_estimand: probability expression
representing the target identified estimand to estimate.
:param treatment: name of the treatment variable
:param outcome: name of the outcome variable
:param control_value: Value of the treatment in the control group, for effect estimation. If treatment is multi-variate, this can be a list.
:param treatment_value: Value of the treatment in the treated group, for effect estimation. If treatment is multi-variate, this can be a list.
:param test_significance: Binary flag or a string indicating whether to test significance and by which method. All estimators support test_significance="bootstrap" that estimates a p-value for the obtained estimate using the bootstrap method. Individual estimators can override this to support custom testing methods. The bootstrap method supports an optional parameter, num_null_simulations. If False, no testing is done. If True, significance of the estimate is tested using the custom method if available, otherwise by bootstrap.
:param evaluate_effect_strength: (Experimental) whether to evaluate the strength of effect
:param confidence_intervals: Binary flag or a string indicating whether the confidence intervals should be computed and which method should be used. All methods support estimation of confidence intervals using the bootstrap method by using the parameter confidence_intervals="bootstrap". The bootstrap method takes in two arguments (num_simulations and sample_size_fraction) that can be optionally specified in the params dictionary. Estimators may also override this to implement their own confidence interval method. If this parameter is False, no confidence intervals are computed. If True, confidence intervals are computed by the estimator's specific method if available, otherwise through bootstrap.
:param target_units: The units for which the treatment effect should be estimated. This can be a string for common specifications of target units (namely, "ate", "att" and "atc"). It can also be a lambda function that can be used as an index for the data (pandas DataFrame). Alternatively, it can be a new DataFrame that contains values of the effect_modifiers and effect will be estimated only for this new data.
:param effect_modifiers: Variables on which to compute separate
effects, or return a heterogeneous effect function. Not all
methods support this currently.
:param num_null_simulations: The number of simulations for testing the
statistical significance of the estimator
:param num_simulations: The number of simulations for finding the
            confidence interval (and/or standard error) for an estimate
:param sample_size_fraction: The size of the sample for the bootstrap
estimator
:param confidence_level: The confidence level of the confidence
interval estimate
:param need_conditional_estimates: Boolean flag indicating whether
conditional estimates should be computed. Defaults to True if
there are effect modifiers in the graph
:param num_quantiles_to_discretize_cont_cols: The number of quantiles
into which a numeric effect modifier is split, to enable
estimation of conditional treatment effect over it.
:param kwargs: (optional) Additional estimator-specific parameters
:returns: an instance of the estimator class.
"""
self._data = data
self._target_estimand = identified_estimand
# Currently estimation methods only support univariate treatment and outcome
self._treatment_name = treatment
self._outcome_name = outcome[0] # assuming one-dimensional outcome
self._control_value = control_value
self._treatment_value = treatment_value
self._significance_test = test_significance
self._effect_strength_eval = evaluate_effect_strength
self._target_units = target_units
self._effect_modifier_names = effect_modifiers
self._confidence_intervals = confidence_intervals
self._bootstrap_estimates = None # for confidence intervals and std error
self._bootstrap_null_estimates = None # for significance test
self._effect_modifiers = None
self.method_params = kwargs
# Setting the default interpret method
self.interpret_method = CausalEstimator.DEFAULT_INTERPRET_METHOD
self.logger = logging.getLogger(__name__)
# Setting treatment and outcome values
if self._data is not None:
self._treatment = self._data[self._treatment_name]
self._outcome = self._data[self._outcome_name]
if self._effect_modifier_names:
# only add the observed nodes
self._effect_modifier_names = [
cname for cname in self._effect_modifier_names if cname in self._data.columns
]
if len(self._effect_modifier_names) > 0:
self._effect_modifiers = self._data[self._effect_modifier_names]
self._effect_modifiers = pd.get_dummies(self._effect_modifiers, drop_first=True)
self.logger.debug("Effect modifiers: " + ",".join(self._effect_modifier_names))
else:
self._effect_modifier_names = None
# Check if some parameters were set, otherwise set to default values
self.num_null_simulations = num_null_simulations
self.num_simulations = num_simulations
self.sample_size_fraction = sample_size_fraction
self.confidence_level = confidence_level
self.num_quantiles_to_discretize_cont_cols = num_quantiles_to_discretize_cont_cols
# Estimate conditional estimates by default
self.need_conditional_estimates = (
need_conditional_estimates if need_conditional_estimates != "auto" else bool(self._effect_modifier_names)
)
@staticmethod
def get_estimator_object(new_data, identified_estimand, estimate):
"""Create a new estimator of the same type as the one passed in the estimate argument.
Creates a new object with new_data and the identified_estimand
:param new_data: np.ndarray, pd.Series, pd.DataFrame
The newly assigned data on which the estimator should run
:param identified_estimand: IdentifiedEstimand
An instance of the identified estimand class that provides the information with
            respect to which causal pathways are employed when the treatment affects the outcome
:param estimate: CausalEstimate
It is an already existing estimate whose properties we wish to replicate
:returns: An instance of the same estimator class that had generated the given estimate.
"""
estimator_class = estimate.params["estimator_class"]
new_estimator = estimator_class(
new_data,
identified_estimand,
identified_estimand.treatment_variable,
identified_estimand.outcome_variable,
# names of treatment and outcome
control_value=estimate.control_value,
treatment_value=estimate.treatment_value,
test_significance=False,
evaluate_effect_strength=False,
confidence_intervals=estimate.params["confidence_intervals"],
target_units=estimate.params["target_units"],
effect_modifiers=estimate.params["effect_modifiers"],
**estimate.params["method_params"] if estimate.params["method_params"] is not None else {},
)
return new_estimator
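    # Usage sketch: refuters used this hook to rebuild an estimator of the
    # same class on fresh data; `new_df`, `identified_estimand` and `estimate`
    # are hypothetical objects from an earlier estimation run.
    #
    #   new_est = CausalEstimator.get_estimator_object(new_df, identified_estimand, estimate)
    #   new_effect = new_est.estimate_effect()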
def _estimate_effect(self):
"""This method is to be overriden by the child classes, so that they can run the estimation technique of their choice"""
raise NotImplementedError(
("Main estimation method is " + CausalEstimator.DEFAULT_NOTIMPLEMENTEDERROR_MSG).format(self.__class__)
)
def estimate_effect(self):
"""Base estimation method that calls the estimate_effect method of its calling subclass.
Can optionally also test significance and estimate effect strength for any returned estimate.
:param self: object instance of class Estimator
        :returns: A CausalEstimate instance that contains point estimates of average and conditional effects. Based on the parameters provided, it optionally includes confidence intervals, standard errors, statistical significance and other statistical parameters.
"""
est = self._estimate_effect()
est.add_estimator(self)
if self._significance_test:
self.test_significance(est.value, method=self._significance_test)
if self._confidence_intervals:
self.estimate_confidence_intervals(
est.value, confidence_level=self.confidence_level, method=self._confidence_intervals
)
if self._effect_strength_eval:
effect_strength_dict = self.evaluate_effect_strength(est)
est.add_effect_strength(effect_strength_dict)
return est
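    # Usage sketch of this constructor-era flow (the estimator class name is
    # illustrative): data and variable names were bound at construction time,
    # and estimate_effect() then took no arguments.
    #
    #   est = SomeEstimator(df, identified_estimand, ["v0"], ["y"])
    #   estimate = est.estimate_effect()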
def estimate_effect_naive(self):
# TODO Only works for binary treatment
df_withtreatment = self._data.loc[self._data[self._treatment_name] == 1]
df_notreatment = self._data.loc[self._data[self._treatment_name] == 0]
est = np.mean(df_withtreatment[self._outcome_name]) - np.mean(df_notreatment[self._outcome_name])
return CausalEstimate(est, None, None, control_value=0, treatment_value=1)
def _estimate_effect_fn(self, data_df):
"""Function used in conditional effect estimation. This function is to be overridden by each child estimator.
The overridden function should take in a dataframe as input and return the estimate for that data.
"""
raise NotImplementedError(
("Conditional treatment effects are " + CausalEstimator.DEFAULT_NOTIMPLEMENTEDERROR_MSG).format(
self.__class__
)
)
def _estimate_conditional_effects(self, estimate_effect_fn, effect_modifier_names=None, num_quantiles=None):
"""Estimate conditional treatment effects. Common method for all estimators that utilizes a specific estimate_effect_fn implemented by each child estimator.
If a numeric effect modifier is provided, it is discretized into quantile bins. If you would like a custom discretization, you can do so yourself: create a new column containing the discretized effect modifier and then include that column's name in the effect_modifier_names argument.
:param estimate_effect_fn: Function that has a single parameter (a data frame) and returns the treatment effect estimate on that data.
:param effect_modifier_names: Names of effect modifier variables over which the conditional effects will be estimated. If not provided, defaults to the effect modifiers specified during creation of the CausalEstimator object.
:param num_quantiles: The number of quantiles into which a numeric effect modifier variable is discretized. Does not affect any categorical effect modifiers.
:returns: A (multi-index) dataframe that provides separate effects for each value of the (discretized) effect modifiers.
"""
# Defaulting to class default values if parameters are not provided
if effect_modifier_names is None:
effect_modifier_names = self._effect_modifier_names
if num_quantiles is None:
num_quantiles = self.num_quantiles_to_discretize_cont_cols
# Checking that there is at least one effect modifier
if not effect_modifier_names:
raise ValueError("At least one effect modifier should be specified to compute conditional effects.")
# Making sure that effect_modifier_names is a list
effect_modifier_names = parse_state(effect_modifier_names)
if not all(em in self._effect_modifier_names for em in effect_modifier_names):
            self.logger.warning(
"At least one of the provided effect modifiers was not included while fitting the estimator. You may get incorrect results. To resolve, fit the estimator again by providing the updated effect modifiers in estimate_effect()."
)
# Making a copy since we are going to be changing effect modifier names
effect_modifier_names = effect_modifier_names.copy()
prefix = CausalEstimator.TEMP_CAT_COLUMN_PREFIX
# For every numeric effect modifier, adding a temp categorical column
for i in range(len(effect_modifier_names)):
em = effect_modifier_names[i]
if pd.api.types.is_numeric_dtype(self._data[em].dtypes):
self._data[prefix + str(em)] = pd.qcut(self._data[em], num_quantiles, duplicates="drop")
effect_modifier_names[i] = prefix + str(em)
# Grouping by effect modifiers and computing effect separately
by_effect_mods = self._data.groupby(effect_modifier_names)
conditional_estimates = by_effect_mods.apply(estimate_effect_fn)
# Deleting the temporary categorical columns
for em in effect_modifier_names:
if em.startswith(prefix):
self._data.pop(em)
return conditional_estimates
def _do(self, x, data_df=None):
raise NotImplementedError(
("Do-operator is " + CausalEstimator.DEFAULT_NOTIMPLEMENTEDERROR_MSG).format(self.__class__)
)
def do(self, x, data_df=None):
"""Method that implements the do-operator.
Given a value x for the treatment, returns the expected value of the outcome when the treatment is intervened to a value x.
:param x: Value of the treatment
:param data_df: Data on which the do-operator is to be applied.
:returns: Value of the outcome when treatment is intervened/set to x.
"""
est = self._do(x, data_df)
return est
def construct_symbolic_estimator(self, estimand):
raise NotImplementedError(("Symbolic estimator string is ").format(self.__class__))
def _generate_bootstrap_estimates(self, num_bootstrap_simulations, sample_size_fraction):
"""Helper function to generate causal estimates over bootstrapped samples.
:param num_bootstrap_simulations: Number of simulations for the bootstrap method.
:param sample_size_fraction: Fraction of the dataset to be resampled.
:returns: A collections.namedtuple containing a list of bootstrapped estimates and a dictionary containing parameters used for the bootstrap.
"""
# The array that stores the results of all estimations
simulation_results = np.zeros(num_bootstrap_simulations)
        # Compute the sample size as a fraction of the population (full data) size
sample_size = int(sample_size_fraction * len(self._data))
if sample_size > len(self._data):
self.logger.warning("WARN: The sample size is greater than the data being sampled")
self.logger.info("INFO: The sample size: {}".format(sample_size))
self.logger.info("INFO: The number of simulations: {}".format(num_bootstrap_simulations))
# Perform the set number of simulations
for index in range(num_bootstrap_simulations):
new_data = resample(self._data, n_samples=sample_size)
new_estimator = type(self)(
new_data,
self._target_estimand,
self._target_estimand.treatment_variable,
self._target_estimand.outcome_variable,
# names of treatment and outcome
treatment_value=self._treatment_value,
control_value=self._control_value,
test_significance=False,
evaluate_effect_strength=False,
confidence_intervals=False,
target_units=self._target_units,
effect_modifiers=self._effect_modifier_names,
**self.method_params,
)
new_effect = new_estimator.estimate_effect()
simulation_results[index] = new_effect.value
estimates = CausalEstimator.BootstrapEstimates(
simulation_results,
{"num_simulations": num_bootstrap_simulations, "sample_size_fraction": sample_size_fraction},
)
return estimates
def _estimate_confidence_intervals_with_bootstrap(
self, estimate_value, confidence_level=None, num_simulations=None, sample_size_fraction=None
):
"""
Method to compute confidence interval using bootstrapped sampling.
:param estimate_value: obtained estimate's value
:param confidence_level: The level for which to compute CI (e.g., 95% confidence level translates to confidence_level=0.95)
:param num_simulations: The number of simulations to be performed to get the bootstrap confidence intervals.
:param sample_size_fraction: The fraction of the dataset to be resampled.
:returns: confidence interval at the specified level.
For more details on bootstrap or resampling statistics, refer to the following links:
https://ocw.mit.edu/courses/mathematics/18-05-introduction-to-probability-and-statistics-spring-2014/readings/MIT18_05S14_Reading24.pdf
https://projecteuclid.org/download/pdf_1/euclid.ss/1032280214
"""
# Using class default parameters if not specified
if num_simulations is None:
num_simulations = self.num_simulations
if sample_size_fraction is None:
sample_size_fraction = self.sample_size_fraction
# Checking if bootstrap_estimates are already computed
if self._bootstrap_estimates is None:
self._bootstrap_estimates = self._generate_bootstrap_estimates(num_simulations, sample_size_fraction)
elif CausalEstimator.is_bootstrap_parameter_changed(self._bootstrap_estimates.params, locals()):
# Checked if any parameter is changed from the previous std error estimate
self._bootstrap_estimates = self._generate_bootstrap_estimates(num_simulations, sample_size_fraction)
# Now use the data obtained from the simulations to get the value of the confidence estimates
bootstrap_estimates = self._bootstrap_estimates.estimates
# Get the variations of each bootstrap estimate and sort
bootstrap_variations = [bootstrap_estimate - estimate_value for bootstrap_estimate in bootstrap_estimates]
sorted_bootstrap_variations = np.sort(bootstrap_variations)
# Now we take the (1- p)th and the (p)th variations, where p is the chosen confidence level
upper_bound_index = int((1 - confidence_level) * len(sorted_bootstrap_variations))
lower_bound_index = int(confidence_level * len(sorted_bootstrap_variations))
# Get the lower and upper bounds by subtracting the variations from the estimate
lower_bound = estimate_value - sorted_bootstrap_variations[lower_bound_index]
upper_bound = estimate_value - sorted_bootstrap_variations[upper_bound_index]
return lower_bound, upper_bound
def _estimate_confidence_intervals(self, confidence_level=None, method=None, **kwargs):
"""
        This method is to be overridden by the child classes, so that they
can run a confidence interval estimation method suited to the specific
causal estimator.
"""
raise NotImplementedError(
(
"This method for estimating confidence intervals is "
+ CausalEstimator.DEFAULT_NOTIMPLEMENTEDERROR_MSG
+ " Meanwhile, you can try the bootstrap method (method='bootstrap') to estimate confidence intervals."
).format(self.__class__)
)
def estimate_confidence_intervals(self, estimate_value, confidence_level=None, method=None, **kwargs):
"""Find the confidence intervals corresponding to any estimator
By default, this is done with the help of bootstrapped confidence intervals
but can be overridden if the specific estimator implements other methods of estimating confidence intervals.
If the method provided is not bootstrap, this function calls the implementation of the specific estimator.
:param estimate_value: obtained estimate's value
:param method: Method for estimating confidence intervals.
:param confidence_level: The confidence level of the confidence intervals of the estimate.
:param kwargs: Other optional args to be passed to the CI method.
:returns: The obtained confidence interval.
"""
if method is None:
if self._confidence_intervals:
method = self._confidence_intervals # this is either True or methodname
else:
method = "default"
confidence_intervals = None
if confidence_level is None:
confidence_level = self.confidence_level
if method == "default" or method is True: # user has not provided any method
try:
confidence_intervals = self._estimate_confidence_intervals(confidence_level, method=method, **kwargs)
except NotImplementedError:
confidence_intervals = self._estimate_confidence_intervals_with_bootstrap(
estimate_value, confidence_level, **kwargs
)
else:
if method == "bootstrap":
confidence_intervals = self._estimate_confidence_intervals_with_bootstrap(
estimate_value, confidence_level, **kwargs
)
else:
confidence_intervals = self._estimate_confidence_intervals(confidence_level, method=method, **kwargs)
return confidence_intervals
def _estimate_std_error_with_bootstrap(self, num_simulations=None, sample_size_fraction=None):
"""Compute standard error using the bootstrap method. Standard error
and confidence intervals use the same parameter num_simulations for
the number of bootstrap simulations.
:param num_simulations: Number of bootstrapped samples.
:param sample_size_fraction: Fraction of data to be resampled.
:returns: Standard error of the obtained estimate.
"""
# Use existing params, if new user defined params are not present
if num_simulations is None:
num_simulations = self.num_simulations
if sample_size_fraction is None:
sample_size_fraction = self.sample_size_fraction
# Checking if bootstrap_estimates are already computed
if self._bootstrap_estimates is None:
self._bootstrap_estimates = self._generate_bootstrap_estimates(num_simulations, sample_size_fraction)
elif CausalEstimator.is_bootstrap_parameter_changed(self._bootstrap_estimates.params, locals()):
# Check if any parameter is changed from the previous std error estimate
self._bootstrap_estimates = self._generate_bootstrap_estimates(num_simulations, sample_size_fraction)
std_error = np.std(self._bootstrap_estimates.estimates)
return std_error
def _estimate_std_error(self, method=None, **kwargs):
"""
        This method is to be overridden by the child classes, so that they
can run a standard error estimation method suited to the specific
causal estimator.
"""
raise NotImplementedError(
(
"This method for estimating standard errors is "
+ CausalEstimator.DEFAULT_NOTIMPLEMENTEDERROR_MSG
+ " Meanwhile, you can try the bootstrap method (method='bootstrap') to estimate standard errors."
).format(self.__class__)
)
def estimate_std_error(self, method=None, **kwargs):
"""Compute standard error of an obtained causal estimate.
:param method: Method for computing the standard error.
:param kwargs: Other optional parameters to be passed to the estimating method.
:returns: Standard error of the causal estimate.
"""
if method is None:
if self._confidence_intervals:
method = self._confidence_intervals
else:
method = "default"
std_error = None
if method == "default" or method is True: # user has not provided any method
try:
std_error = self._estimate_std_error(method, **kwargs)
except NotImplementedError:
std_error = self._estimate_std_error_with_bootstrap(**kwargs)
else:
if method == "bootstrap":
std_error = self._estimate_std_error_with_bootstrap(**kwargs)
else:
std_error = self._estimate_std_error(method, **kwargs)
return std_error
def _test_significance_with_bootstrap(self, estimate_value, num_null_simulations=None):
"""Test statistical significance of an estimate using the bootstrap method.
:param estimate_value: Obtained estimate's value
:param num_null_simulations: Number of simulations for the null hypothesis
:returns: p-value of the statistical significance test.
"""
# Use existing params, if new user defined params are not present
if num_null_simulations is None:
num_null_simulations = self.num_null_simulations
do_retest = self._bootstrap_null_estimates is None or CausalEstimator.is_bootstrap_parameter_changed(
self._bootstrap_null_estimates.params, locals()
)
if do_retest:
null_estimates = np.zeros(num_null_simulations)
for i in range(num_null_simulations):
new_outcome = np.random.permutation(self._outcome)
new_data = self._data.assign(dummy_outcome=new_outcome)
# self._outcome = self._data["dummy_outcome"]
new_estimator = type(self)(
new_data,
self._target_estimand,
self._target_estimand.treatment_variable,
("dummy_outcome",),
test_significance=False,
evaluate_effect_strength=False,
confidence_intervals=False,
target_units=self._target_units,
effect_modifiers=self._effect_modifier_names,
**self.method_params,
)
new_effect = new_estimator.estimate_effect()
null_estimates[i] = new_effect.value
self._bootstrap_null_estimates = CausalEstimator.BootstrapEstimates(
null_estimates, {"num_null_simulations": num_null_simulations, "sample_size_fraction": 1}
)
# Processing the null hypothesis estimates
sorted_null_estimates = np.sort(self._bootstrap_null_estimates.estimates)
self.logger.debug("Null estimates: {0}".format(sorted_null_estimates))
median_estimate = sorted_null_estimates[int(num_null_simulations / 2)]
# Doing a two-sided test
if estimate_value > median_estimate:
# Being conservative with the p-value reported
estimate_index = np.searchsorted(sorted_null_estimates, estimate_value, side="left")
p_value = 1 - (estimate_index / num_null_simulations)
        else:
# Being conservative with the p-value reported
estimate_index = np.searchsorted(sorted_null_estimates, estimate_value, side="right")
p_value = estimate_index / num_null_simulations
# If the estimate_index is 0, it depends on the number of simulations
if p_value == 0:
p_value = (0, 1 / len(sorted_null_estimates)) # a tuple determining the range.
elif p_value == 1:
p_value = (1 - 1 / len(sorted_null_estimates), 1)
signif_dict = {"p_value": p_value}
return signif_dict
def _test_significance(self, estimate_value, method=None, **kwargs):
"""
        This method is to be overridden by the child classes, so that they
can run a significance test suited to the specific
causal estimator.
"""
raise NotImplementedError(
(
"This method for testing statistical significance is "
+ CausalEstimator.DEFAULT_NOTIMPLEMENTEDERROR_MSG
+ " Meanwhile, you can try the bootstrap method (method='bootstrap') to test statistical significance."
).format(self.__class__)
)
def test_significance(self, estimate_value, method=None, **kwargs):
"""Test statistical significance of obtained estimate.
By default, uses resampling to create a non-parametric significance test.
A general procedure. Individual child estimators can implement different methods.
If the method name is different from "bootstrap", this function calls the
implementation of the child estimator.
:param self: object instance of class Estimator
:param estimate_value: obtained estimate's value
:param method: Method for checking statistical significance
:returns: p-value from the significance test
"""
if method is None:
if self._significance_test:
method = self._significance_test # this is either True or methodname
else:
method = "default"
signif_dict = None
if method == "default" or method is True: # user has not provided any method
try:
signif_dict = self._test_significance(estimate_value, method, **kwargs)
except NotImplementedError:
signif_dict = self._test_significance_with_bootstrap(estimate_value, **kwargs)
else:
if method == "bootstrap":
signif_dict = self._test_significance_with_bootstrap(estimate_value, **kwargs)
else:
signif_dict = self._test_significance(estimate_value, method, **kwargs)
return signif_dict
def evaluate_effect_strength(self, estimate):
fraction_effect_explained = self._evaluate_effect_strength(estimate, method="fraction-effect")
# Need to test r-squared before supporting
# effect_r_squared = self._evaluate_effect_strength(estimate, method="r-squared")
strength_dict = {
"fraction-effect": fraction_effect_explained
# 'r-squared': effect_r_squared
}
return strength_dict
def _evaluate_effect_strength(self, estimate, method="fraction-effect"):
supported_methods = ["fraction-effect"]
if method not in supported_methods:
raise NotImplementedError("This method is not supported for evaluating effect strength")
if method == "fraction-effect":
naive_obs_estimate = self.estimate_effect_naive()
self.logger.debug("Causal estimate: %s, naive observed estimate: %s", estimate.value, naive_obs_estimate.value)
fraction_effect_explained = estimate.value / naive_obs_estimate.value
return fraction_effect_explained
# elif method == "r-squared":
# outcome_mean = np.mean(self._outcome)
# total_variance = np.sum(np.square(self._outcome - outcome_mean))
# Assuming a linear model with one variable: the treatment
# Currently only works for continuous y
# causal_model = outcome_mean + estimate.value*self._treatment
# squared_residual = np.sum(np.square(self._outcome - causal_model))
# r_squared = 1 - (squared_residual/total_variance)
# return r_squared
else:
return None
def update_input(self, treatment_value, control_value, target_units):
self._control_value = control_value
self._treatment_value = treatment_value
self._target_units = target_units
@staticmethod
def is_bootstrap_parameter_changed(bootstrap_estimates_params, given_params):
"""Check whether parameters of the bootstrap have changed.
This is an efficiency method that checks if fresh resampling of the bootstrap samples is required.
Returns True if parameters have changed and resampling should be done again.
:param bootstrap_estimates_params: A dictionary of parameters for the current bootstrap samples
:param given_params: A dictionary of parameters passed by the user
:returns: A binary flag denoting whether the parameters are different.
"""
is_any_parameter_changed = False
for prm, val in bootstrap_estimates_params.items():
given_val = given_params.get(prm, None)
if given_val is not None and given_val != val:
is_any_parameter_changed = True
break
return is_any_parameter_changed
def target_units_tostr(self):
s = ""
if type(self._target_units) is str:
s += self._target_units
elif callable(self._target_units):
s += "Data subset defined by a function"
elif isinstance(self._target_units, pd.DataFrame):
s += "Data subset provided as a data frame"
return s
def signif_results_tostr(self, signif_results):
s = ""
pval = signif_results["p_value"]
if type(pval) is tuple:
s += "[{0}, {1}]".format(pval[0], pval[1])
else:
s += "{0}".format(pval)
return s
def estimate_effect(
treatment: Union[str, List[str]],
outcome: Union[str, List[str]],
identified_estimand: IdentifiedEstimand,
identifier_name: str,
method: CausalEstimator,
control_value: int = 0,
treatment_value: int = 1,
test_significance: Optional[bool] = None,
evaluate_effect_strength: bool = False,
confidence_intervals: bool = False,
target_units: str = "ate",
effect_modifiers: List[str] = [],
fit_estimator: bool = True,
method_params: Optional[Dict] = None,
):
"""Estimate the identified causal effect.
Currently requires an explicit method name to be specified. Method names follow the convention of identification method followed by the specific estimation method: "[backdoor/iv].estimation_method_name". The following methods are supported.
* Propensity Score Matching: "backdoor.propensity_score_matching"
* Propensity Score Stratification: "backdoor.propensity_score_stratification"
* Propensity Score-based Inverse Weighting: "backdoor.propensity_score_weighting"
* Linear Regression: "backdoor.linear_regression"
* Generalized Linear Models (e.g., logistic regression): "backdoor.generalized_linear_model"
* Instrumental Variables: "iv.instrumental_variable"
* Regression Discontinuity: "iv.regression_discontinuity"
In addition, you can directly call any of the EconML estimation methods. The convention is "backdoor.econml.path-to-estimator-class". For example, for the double machine learning estimator ("DML" class) that is located inside "dml" module of EconML, you can use the method name, "backdoor.econml.dml.DML". CausalML estimators can also be called. See `this demo notebook <https://py-why.github.io/dowhy/example_notebooks/dowhy-conditional-treatment-effects.html>`_.
:param treatment: Name of the treatment
:param outcome: Name of the outcome
:param identified_estimand: a probability expression
that represents the effect to be estimated. Output of
CausalModel.identify_effect method
:param identifier_name: name of the identification method whose estimand should be used (e.g., "backdoor")
:param method: an instance of a CausalEstimator subclass that implements the estimation method
:param control_value: Value of the treatment in the control group, for effect estimation. If treatment is multi-variate, this can be a list.
:param treatment_value: Value of the treatment in the treated group, for effect estimation. If treatment is multi-variate, this can be a list.
:param test_significance: Binary flag on whether to additionally do a statistical significance test for the estimate.
:param evaluate_effect_strength: (Experimental) Binary flag on whether to estimate the relative strength of the treatment's effect. This measure can be used to compare different treatments for the same outcome (by running this method with different treatments sequentially).
:param confidence_intervals: (Experimental) Binary flag indicating whether confidence intervals should be computed.
:param target_units: (Experimental) The units for which the treatment effect should be estimated. This can be of three types. (1) a string for common specifications of target units (namely, "ate", "att" and "atc"), (2) a lambda function that can be used as an index for the data (pandas DataFrame), or (3) a new DataFrame that contains values of the effect_modifiers and effect will be estimated only for this new data.
:param effect_modifiers: Names of effect modifier variables can be (optionally) specified here too, since they do not affect identification. If None, the effect_modifiers from the CausalModel are used.
:param fit_estimator: Boolean flag on whether to fit the estimator.
Setting it to False is useful to estimate the effect on new data using a previously fitted estimator.
:param method_params: Dictionary containing any method-specific parameters. These are passed directly to the estimating method. See the docs for each estimation method for allowed method-specific params.
:returns: An instance of the CausalEstimate class, containing the causal effect estimate
and other method-dependent information
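Example (an illustrative sketch; ``estimand`` is assumed to be the output of an
identification step, ``estimator`` an already-constructed CausalEstimator instance,
and "v0"/"y" placeholder column names):

>>> estimate = estimate_effect(
...     treatment="v0",
...     outcome="y",
...     identified_estimand=estimand,
...     identifier_name="backdoor",
...     method=estimator,
...     target_units="ate",
... )
>>> print(estimate.value)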
"""
treatment = parse_state(treatment)
outcome = parse_state(outcome)
causal_estimator_class = method.__class__
identified_estimand.set_identifier_method(identifier_name)
if identified_estimand.no_directed_path:
logger.warning("No directed path from {0} to {1}.".format(treatment, outcome))
return CausalEstimate(
0, identified_estimand, None, control_value=control_value, treatment_value=treatment_value
)
# Check if estimator's target estimand is identified
elif identified_estimand.estimands[identifier_name] is None:
logger.error("No valid identified estimand available.")
return CausalEstimate(None, None, None, control_value=control_value, treatment_value=treatment_value)
method.update_input(treatment_value, control_value, target_units)
estimate = method.estimate_effect()
# Store parameters inside estimate object for refutation methods
# TODO: This add_params needs to move to the estimator class
# inside estimate_effect and estimate_conditional_effect
estimate.add_params(
estimand_type=identified_estimand.estimand_type,
estimator_class=causal_estimator_class,
test_significance=test_significance,
evaluate_effect_strength=evaluate_effect_strength,
confidence_intervals=confidence_intervals,
target_units=target_units,
effect_modifiers=effect_modifiers,
method_params=method_params,
)
return estimate
class CausalEstimate:
"""Class for the estimate object that every causal estimator returns"""
def __init__(
self,
estimate,
target_estimand,
realized_estimand_expr,
control_value,
treatment_value,
conditional_estimates=None,
**kwargs,
):
self.value = estimate
self.target_estimand = target_estimand
self.realized_estimand_expr = realized_estimand_expr
self.control_value = control_value
self.treatment_value = treatment_value
self.conditional_estimates = conditional_estimates
self.params = kwargs
if self.params is not None:
for key, value in self.params.items():
setattr(self, key, value)
self.effect_strength = None
def add_estimator(self, estimator_instance):
self.estimator = estimator_instance
def add_effect_strength(self, strength_dict):
self.effect_strength = strength_dict
def add_params(self, **kwargs):
self.params.update(kwargs)
def get_confidence_intervals(self, confidence_level=None, method=None, **kwargs):
"""Get confidence intervals of the obtained estimate.
By default, this is done with the help of bootstrapped confidence intervals
but can be overridden if the specific estimator implements other methods of estimating confidence intervals.
If the method provided is not bootstrap, this function calls the implementation of the specific estimator.
:param method: Method for estimating confidence intervals.
:param confidence_level: The confidence level of the confidence intervals of the estimate.
:param kwargs: Other optional args to be passed to the CI method.
:returns: The obtained confidence interval.
"""
confidence_intervals = self.estimator.estimate_confidence_intervals(
estimate_value=self.value, confidence_level=confidence_level, method=method, **kwargs
)
return confidence_intervals
def get_standard_error(self, method=None, **kwargs):
"""Get standard error of the obtained estimate.
By default, this is done with the help of bootstrapped standard errors
but can be overridden if the specific estimator implements other methods of estimating standard error.
If the method provided is not bootstrap, this function calls the implementation of the specific estimator.
:param method: Method for computing the standard error.
:param kwargs: Other optional parameters to be passed to the estimating method.
:returns: Standard error of the causal estimate.
"""
std_error = self.estimator.estimate_std_error(method=method, **kwargs)
return std_error
def test_stat_significance(self, method=None, **kwargs):
"""Test statistical significance of the estimate obtained.
By default, uses resampling to create a non-parametric significance test.
Individual child estimators can implement different methods.
If the method name is different from "bootstrap", this function calls the
implementation of the child estimator.
:param method: Method for checking statistical significance
:param kwargs: Other optional parameters to be passed to the estimating method.
:returns: p-value from the significance test
"""
signif_results = self.estimator.test_significance(self.value, method=method, **kwargs)
return {"p_value": signif_results["p_value"]}
def estimate_conditional_effects(
self, effect_modifiers=None, num_quantiles=CausalEstimator.NUM_QUANTILES_TO_DISCRETIZE_CONT_COLS
):
"""Estimate treatment effect conditioned on given variables.
If a numeric effect modifier is provided, it is discretized into quantile bins. If you would like a custom discretization, you can do so yourself: create a new column containing the discretized effect modifier and then include that column's name in the effect_modifier_names argument.
:param effect_modifiers: Names of effect modifier variables over which the conditional effects will be estimated. If not provided, defaults to the effect modifiers specified during creation of the CausalEstimator object.
:param num_quantiles: The number of quantiles into which a numeric effect modifier variable is discretized. Does not affect any categorical effect modifiers.
:returns: A (multi-index) dataframe that provides separate effects for each value of the (discretized) effect modifiers.
"""
return self.estimator._estimate_conditional_effects(
self.estimator._estimate_effect_fn, effect_modifiers, num_quantiles
)
def interpret(self, method_name=None, **kwargs):
"""Interpret the causal estimate.
:param method_name: Method used (string) or a list of methods. If None, then the default for the specific estimator is used.
:param kwargs:: Optional parameters that are directly passed to the interpreter method.
:returns: None
"""
if method_name is None:
method_name = self.estimator.interpret_method
method_name_arr = parse_state(method_name)
for method in method_name_arr:
interpreter = interpreters.get_class_object(method)
interpreter(self, **kwargs).interpret()
def __str__(self):
s = "*** Causal Estimate ***\n"
# No estimand was identified (identification failed)
if self.target_estimand is None:
return "Estimation failed! No relevant identified estimand available for this estimation method."
s += "\n## Identified estimand\n{0}".format(self.target_estimand.__str__(only_target_estimand=True))
s += "\n## Realized estimand\n{0}".format(self.realized_estimand_expr)
if hasattr(self, "estimator"):
s += "\nTarget units: {0}\n".format(self.estimator.target_units_tostr())
s += "\n## Estimate\n"
s += "Mean value: {0}\n".format(self.value)
s += ""
if hasattr(self, "cate_estimates"):
s += "Effect estimates: {0}\n".format(self.cate_estimates)
if hasattr(self, "estimator"):
if self.estimator._significance_test:
s += "p-value: {0}\n".format(self.estimator.signif_results_tostr(self.test_stat_significance()))
if self.estimator._confidence_intervals:
s += "{0}% confidence interval: {1}\n".format(
100 * self.estimator.confidence_level, self.get_confidence_intervals()
)
if self.conditional_estimates is not None:
s += "### Conditional Estimates\n"
s += str(self.conditional_estimates)
if self.effect_strength is not None:
s += "\n## Effect Strength\n"
s += "Change in outcome attributable to treatment: {}\n".format(self.effect_strength["fraction-effect"])
# s += "Variance in outcome explained by treatment: {}\n".format(self.effect_strength["r-squared"])
return s
class RealizedEstimand(object):
def __init__(self, identified_estimand, estimator_name):
self.treatment_variable = identified_estimand.treatment_variable
self.outcome_variable = identified_estimand.outcome_variable
self.backdoor_variables = identified_estimand.get_backdoor_variables()
self.instrumental_variables = identified_estimand.instrumental_variables
self.estimand_type = identified_estimand.estimand_type
self.estimand_expression = None
self.assumptions = None
self.estimator_name = estimator_name
def update_assumptions(self, estimator_assumptions):
self.assumptions = estimator_assumptions
def update_estimand_expression(self, estimand_expression):
self.estimand_expression = estimand_expression
def __str__(self):
s = "Realized estimand: {0}\n".format(self.estimator_name)
s += "Realized estimand type: {0}\n".format(self.estimand_type)
s += "Estimand expression:\n{0}\n".format(sp.pretty(self.estimand_expression))
j = 1
for ass_name, ass_str in self.assumptions.items():
s += "Estimand assumption {0}, {1}: {2}\n".format(j, ass_name, ass_str)
j += 1
return s
| import copy
import logging
from collections import namedtuple
from typing import Dict, List, Optional, Union
import numpy as np
import pandas as pd
import sympy as sp
from sklearn.utils import resample
import dowhy.interpreters as interpreters
from dowhy.causal_identifier.identified_estimand import IdentifiedEstimand
from dowhy.utils.api import parse_state
logger = logging.getLogger(__name__)
class CausalEstimator:
"""Base class for an estimator of causal effect.
Subclasses implement different estimation methods. All estimation methods are in the package "dowhy.causal_estimators"
"""
# The default number of simulations for statistical testing
DEFAULT_NUMBER_OF_SIMULATIONS_STAT_TEST = 1000
# The default number of simulations to obtain confidence intervals
DEFAULT_NUMBER_OF_SIMULATIONS_CI = 100
# The portion of the total size that should be taken each time to find the confidence intervals
# 1 is the recommended value
# https://ocw.mit.edu/courses/mathematics/18-05-introduction-to-probability-and-statistics-spring-2014/readings/MIT18_05S14_Reading24.pdf
# https://projecteuclid.org/download/pdf_1/euclid.ss/1032280214
DEFAULT_SAMPLE_SIZE_FRACTION = 1
# The default Confidence Level
DEFAULT_CONFIDENCE_LEVEL = 0.95
# Number of quantiles to discretize continuous columns, for applying groupby
NUM_QUANTILES_TO_DISCRETIZE_CONT_COLS = 5
# Prefix to add to temporary categorical variables created after discretization
TEMP_CAT_COLUMN_PREFIX = "__categorical__"
DEFAULT_NOTIMPLEMENTEDERROR_MSG = "not yet implemented for {0}. If you would like this to be implemented in the next version, please raise an issue at https://github.com/microsoft/dowhy/issues"
BootstrapEstimates = namedtuple("BootstrapEstimates", ["estimates", "params"])
DEFAULT_INTERPRET_METHOD = ["textual_effect_interpreter"]
# std args to be removed from locals() before being passed to args_dict
_STD_INIT_ARGS = ("self", "__class__", "args", "kwargs")
def __init__(
self,
identified_estimand: IdentifiedEstimand,
test_significance: bool = False,
evaluate_effect_strength: bool = False,
confidence_intervals: bool = False,
num_null_simulations: int = DEFAULT_NUMBER_OF_SIMULATIONS_STAT_TEST,
num_simulations: int = DEFAULT_NUMBER_OF_SIMULATIONS_CI,
sample_size_fraction: int = DEFAULT_SAMPLE_SIZE_FRACTION,
confidence_level: float = DEFAULT_CONFIDENCE_LEVEL,
need_conditional_estimates: Union[bool, str] = "auto",
num_quantiles_to_discretize_cont_cols: int = NUM_QUANTILES_TO_DISCRETIZE_CONT_COLS,
**_,
):
"""Initializes an estimator with data and names of relevant variables.
This method is called from the constructors of its child classes.
:param identified_estimand: probability expression
representing the target identified estimand to estimate.
:param test_significance: Binary flag or a string indicating whether to test significance and by which method. All estimators support test_significance="bootstrap" that estimates a p-value for the obtained estimate using the bootstrap method. Individual estimators can override this to support custom testing methods. The bootstrap method supports an optional parameter, num_null_simulations. If False, no testing is done. If True, significance of the estimate is tested using the custom method if available, otherwise by bootstrap.
:param evaluate_effect_strength: (Experimental) whether to evaluate the strength of effect
:param confidence_intervals: Binary flag or a string indicating whether the confidence intervals should be computed and which method should be used. All methods support estimation of confidence intervals using the bootstrap method by using the parameter confidence_intervals="bootstrap". The bootstrap method takes in two arguments (num_simulations and sample_size_fraction) that can be optionally specified in the params dictionary. Estimators may also override this to implement their own confidence interval method. If this parameter is False, no confidence intervals are computed. If True, confidence intervals are computed by the estimator's specific method if available, otherwise through bootstrap
:param num_null_simulations: The number of simulations for testing the
statistical significance of the estimator
:param num_simulations: The number of simulations for finding the
confidence interval (and/or standard error) for an estimate
:param sample_size_fraction: The size of the sample for the bootstrap
estimator
:param confidence_level: The confidence level of the confidence
interval estimate
:param need_conditional_estimates: Boolean flag indicating whether
conditional estimates should be computed. Defaults to True if
there are effect modifiers in the graph
:param num_quantiles_to_discretize_cont_cols: The number of quantiles
into which a numeric effect modifier is split, to enable
estimation of conditional treatment effect over it.
:param kwargs: (optional) Additional estimator-specific parameters
:returns: an instance of the estimator class.
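Example (a minimal sketch; ``LinearRegressionEstimator`` stands in for any concrete
subclass and ``estimand`` for an already-identified ``IdentifiedEstimand``):

>>> from dowhy.causal_estimators.linear_regression_estimator import LinearRegressionEstimator
>>> estimator = LinearRegressionEstimator(
...     estimand,
...     test_significance="bootstrap",
...     confidence_intervals=True,
... )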
"""
self._target_estimand = identified_estimand
self._significance_test = test_significance
self._effect_strength_eval = evaluate_effect_strength
self._confidence_intervals = confidence_intervals
# Setting the default interpret method
self.interpret_method = CausalEstimator.DEFAULT_INTERPRET_METHOD
self.logger = logging.getLogger(__name__)
# Check if some parameters were set, otherwise set to default values
self.num_null_simulations = num_null_simulations
self.num_simulations = num_simulations
self.sample_size_fraction = sample_size_fraction
self.confidence_level = confidence_level
self.num_quantiles_to_discretize_cont_cols = num_quantiles_to_discretize_cont_cols
# Estimate conditional estimates by default
self.need_conditional_estimates = need_conditional_estimates
self._bootstrap_estimates = None
self._bootstrap_null_estimates = None
def _set_data(self, data: pd.DataFrame, treatment_name: List[str], outcome_name: List[str]):
"""Sets the data for the estimator
:param data: data frame containing the data
:param treatment_name: name of the treatment variable
:param outcome_name: name of the outcome variable
"""
self._data = data
self._treatment_name = treatment_name
self._outcome_name = outcome_name[0]
self._treatment = self._data[self._treatment_name]
self._outcome = self._data[self._outcome_name]
def _set_effect_modifiers(self, effect_modifier_names: Optional[List[str]] = None):
"""Sets the effect modifiers for the estimator
Modifies need_conditional_estimates accordingly to effect modifiers value
:param effect_modifiers: Variables on which to compute separate
effects, or return a heterogeneous effect function. Not all
methods support this currently.
"""
self._effect_modifiers = effect_modifier_names
if effect_modifier_names is not None:
self._effect_modifier_names = [cname for cname in effect_modifier_names if cname in self._data.columns]
if len(self._effect_modifier_names) > 0:
self._effect_modifiers = self._data[self._effect_modifier_names]
self._effect_modifiers = pd.get_dummies(self._effect_modifiers, drop_first=True)
self.logger.debug("Effect modifiers: " + ",".join(self._effect_modifier_names))
else:
self._effect_modifier_names = []
else:
self._effect_modifier_names = []
self.need_conditional_estimates = (
self.need_conditional_estimates
if self.need_conditional_estimates != "auto"
else (self._effect_modifier_names and len(self._effect_modifier_names) > 0)
)
def _set_identified_estimand(self, new_identified_estimand):
"""Method used internally to change the target estimand (required by some refuters)
:param new_identified_estimand: The new target_estimand to use
"""
self._target_estimand = new_identified_estimand
def get_new_estimator_object(
self,
identified_estimand,
test_significance=False,
evaluate_effect_strength=False,
confidence_intervals=None,
):
"""Create a new estimator of the same type as the one passed in the estimate argument.
Creates a new object with the identified_estimand
:param identified_estimand: IdentifiedEstimand
An instance of the identified estimand class that provides the information with
respect to which causal pathways are employed when the treatment affects the outcome
:returns: A new instance of the same estimator class that had generated the given estimate.
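Example (a sketch; ``new_estimand`` is assumed to be a modified ``IdentifiedEstimand``,
e.g. one constructed by a refuter, and the subclass's ``fit()`` is assumed to follow
the signature used elsewhere in this module):

>>> refutation_estimator = estimator.get_new_estimator_object(new_estimand)
>>> refutation_estimator.fit(
...     df,
...     new_estimand.treatment_variable,
...     new_estimand.outcome_variable,
... )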
"""
new_estimator = copy.deepcopy(self)
new_estimator._target_estimand = identified_estimand
# Use the same attribute names that __init__ sets; the previous names shadowed
# the _test_significance/_evaluate_effect_strength methods and were never read
new_estimator._significance_test = test_significance
new_estimator._effect_strength_eval = evaluate_effect_strength
new_estimator._confidence_intervals = (
self._confidence_intervals if confidence_intervals is None else confidence_intervals
)
return new_estimator
def estimate_effect_naive(self):
# TODO Only works for binary treatment
df_withtreatment = self._data.loc[self._data[self._treatment_name] == 1]
df_notreatment = self._data.loc[self._data[self._treatment_name] == 0]
est = np.mean(df_withtreatment[self._outcome_name]) - np.mean(df_notreatment[self._outcome_name])
return CausalEstimate(est, None, None, control_value=0, treatment_value=1)
def _estimate_effect_fn(self, data_df):
"""Function used in conditional effect estimation. This function is to be overridden by each child estimator.
The overridden function should take in a dataframe as input and return the estimate for that data.
"""
raise NotImplementedError(
("Conditional treatment effects are " + CausalEstimator.DEFAULT_NOTIMPLEMENTEDERROR_MSG).format(
self.__class__
)
)
def _estimate_conditional_effects(self, estimate_effect_fn, effect_modifier_names=None, num_quantiles=None):
"""Estimate conditional treatment effects. Common method for all estimators that utilizes a specific estimate_effect_fn implemented by each child estimator.
If a numeric effect modifier is provided, it is discretized into quantile bins. If you would like a custom discretization, you can do so yourself: create a new column containing the discretized effect modifier and then include that column's name in the effect_modifier_names argument.
:param estimate_effect_fn: Function that has a single parameter (a data frame) and returns the treatment effect estimate on that data.
:param effect_modifier_names: Names of effect modifier variables over which the conditional effects will be estimated. If not provided, defaults to the effect modifiers specified during creation of the CausalEstimator object.
:param num_quantiles: The number of quantiles into which a numeric effect modifier variable is discretized. Does not affect any categorical effect modifiers.
:returns: A (multi-index) dataframe that provides separate effects for each value of the (discretized) effect modifiers.
"""
# Defaulting to class default values if parameters are not provided
if effect_modifier_names is None:
effect_modifier_names = self._effect_modifier_names
if num_quantiles is None:
num_quantiles = self.num_quantiles_to_discretize_cont_cols
# Checking that there is at least one effect modifier
if not effect_modifier_names:
raise ValueError("At least one effect modifier should be specified to compute conditional effects.")
# Making sure that effect_modifier_names is a list
effect_modifier_names = parse_state(effect_modifier_names)
if not all(em in self._effect_modifier_names for em in effect_modifier_names):
self.logger.warning(
"At least one of the provided effect modifiers was not included while fitting the estimator. You may get incorrect results. To resolve, fit the estimator again by providing the updated effect modifiers in estimate_effect()."
)
# Making a copy since we are going to be changing effect modifier names
effect_modifier_names = effect_modifier_names.copy()
prefix = CausalEstimator.TEMP_CAT_COLUMN_PREFIX
# For every numeric effect modifier, adding a temp categorical column
for i in range(len(effect_modifier_names)):
em = effect_modifier_names[i]
if pd.api.types.is_numeric_dtype(self._data[em].dtypes):
self._data[prefix + str(em)] = pd.qcut(self._data[em], num_quantiles, duplicates="drop")
effect_modifier_names[i] = prefix + str(em)
# Grouping by effect modifiers and computing effect separately
by_effect_mods = self._data.groupby(effect_modifier_names)
cond_est_fn = lambda x: self._do(self._treatment_value, x) - self._do(self._control_value, x)
conditional_estimates = by_effect_mods.apply(estimate_effect_fn)
# Deleting the temporary categorical columns
for em in effect_modifier_names:
if em.startswith(prefix):
self._data.pop(em)
return conditional_estimates
def _do(self, x, data_df=None):
raise NotImplementedError(
("Do-operator is " + CausalEstimator.DEFAULT_NOTIMPLEMENTEDERROR_MSG).format(self.__class__)
)
def do(self, x, data_df=None):
"""Method that implements the do-operator.
Given a value x for the treatment, returns the expected value of the outcome when the treatment is intervened to a value x.
:param x: Value of the treatment
:param data_df: Data on which the do-operator is to be applied.
:returns: Value of the outcome when treatment is intervened/set to x.
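Example (a sketch; only meaningful for estimators that implement ``_do``):

>>> effect = estimator.do(1) - estimator.do(0)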
"""
est = self._do(x, data_df)
return est
def construct_symbolic_estimator(self, estimand):
raise NotImplementedError(("Symbolic estimator string is ").format(self.__class__))
def _generate_bootstrap_estimates(self, num_bootstrap_simulations, sample_size_fraction):
"""Helper function to generate causal estimates over bootstrapped samples.
:param num_bootstrap_simulations: Number of simulations for the bootstrap method.
:param sample_size_fraction: Fraction of the dataset to be resampled.
:returns: A collections.namedtuple containing a list of bootstrapped estimates and a dictionary containing parameters used for the bootstrap.
"""
# The array that stores the results of all estimations
simulation_results = np.zeros(num_bootstrap_simulations)
# Find the sample size as a fraction of the population size
sample_size = int(sample_size_fraction * len(self._data))
if sample_size > len(self._data):
self.logger.warning("WARN: The sample size is greater than the data being sampled")
self.logger.info("INFO: The sample size: {}".format(sample_size))
self.logger.info("INFO: The number of simulations: {}".format(num_bootstrap_simulations))
# Perform the set number of simulations
for index in range(num_bootstrap_simulations):
new_data = resample(self._data, n_samples=sample_size)
new_estimator = self.get_new_estimator_object(
self._target_estimand,
test_significance=False,
evaluate_effect_strength=False,
confidence_intervals=False,
)
new_estimator.fit(
new_data,
self._target_estimand.treatment_variable,
self._target_estimand.outcome_variable,
effect_modifier_names=self._effect_modifier_names,
)
new_effect = new_estimator.estimate_effect(
treatment_value=self._treatment_value,
control_value=self._control_value,
target_units=self._target_units,
)
simulation_results[index] = new_effect.value
estimates = CausalEstimator.BootstrapEstimates(
simulation_results,
{"num_simulations": num_bootstrap_simulations, "sample_size_fraction": sample_size_fraction},
)
return estimates
def _estimate_confidence_intervals_with_bootstrap(
self, estimate_value, confidence_level=None, num_simulations=None, sample_size_fraction=None
):
"""
Method to compute confidence interval using bootstrapped sampling.
:param estimate_value: obtained estimate's value
:param confidence_level: The level for which to compute CI (e.g., 95% confidence level translates to confidence_level=0.95)
:param num_simulations: The number of simulations to be performed to get the bootstrap confidence intervals.
:param sample_size_fraction: The fraction of the dataset to be resampled.
:returns: confidence interval at the specified level.
For more details on bootstrap or resampling statistics, refer to the following links:
https://ocw.mit.edu/courses/mathematics/18-05-introduction-to-probability-and-statistics-spring-2014/readings/MIT18_05S14_Reading24.pdf
https://projecteuclid.org/download/pdf_1/euclid.ss/1032280214
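Worked sketch of the interval logic used below, with made-up numbers:

>>> import numpy as np
>>> variations = np.sort([-0.2, -0.1, 0.0, 0.1, 0.3])  # bootstrap estimate - point estimate
>>> estimate_value, cl = 1.0, 0.95
>>> upper = estimate_value - variations[int((1 - cl) * len(variations))]  # 1.0 - (-0.2) = 1.2
>>> lower = estimate_value - variations[int(cl * len(variations))]  # 1.0 - 0.3 = 0.7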
"""
# Using class default parameters if not specified
if num_simulations is None:
num_simulations = self.num_simulations
if sample_size_fraction is None:
sample_size_fraction = self.sample_size_fraction
# Checking if bootstrap_estimates are already computed
if self._bootstrap_estimates is None:
self._bootstrap_estimates = self._generate_bootstrap_estimates(num_simulations, sample_size_fraction)
elif CausalEstimator.is_bootstrap_parameter_changed(self._bootstrap_estimates.params, locals()):
# Checked if any parameter is changed from the previous std error estimate
self._bootstrap_estimates = self._generate_bootstrap_estimates(num_simulations, sample_size_fraction)
# Now use the data obtained from the simulations to get the value of the confidence estimates
bootstrap_estimates = self._bootstrap_estimates.estimates
# Get the variations of each bootstrap estimate and sort
bootstrap_variations = [bootstrap_estimate - estimate_value for bootstrap_estimate in bootstrap_estimates]
sorted_bootstrap_variations = np.sort(bootstrap_variations)
# Take the (1 - p)-th and p-th order statistics of the variations, where p is the chosen confidence level
upper_bound_index = int((1 - confidence_level) * len(sorted_bootstrap_variations))
lower_bound_index = int(confidence_level * len(sorted_bootstrap_variations))
# Get the lower and upper bounds by subtracting the variations from the estimate
lower_bound = estimate_value - sorted_bootstrap_variations[lower_bound_index]
upper_bound = estimate_value - sorted_bootstrap_variations[upper_bound_index]
return lower_bound, upper_bound
def _estimate_confidence_intervals(self, confidence_level=None, method=None, **kwargs):
"""
This method is to be overridden by the child classes, so that they
can run a confidence interval estimation method suited to the specific
causal estimator.
"""
raise NotImplementedError(
(
"This method for estimating confidence intervals is "
+ CausalEstimator.DEFAULT_NOTIMPLEMENTEDERROR_MSG
+ " Meanwhile, you can try the bootstrap method (method='bootstrap') to estimate confidence intervals."
).format(self.__class__)
)
def estimate_confidence_intervals(self, estimate_value, confidence_level=None, method=None, **kwargs):
"""Find the confidence intervals corresponding to any estimator
By default, this is done with the help of bootstrapped confidence intervals
but can be overridden if the specific estimator implements other methods of estimating confidence intervals.
If the method provided is not bootstrap, this function calls the implementation of the specific estimator.
:param estimate_value: obtained estimate's value
:param method: Method for estimating confidence intervals.
:param confidence_level: The confidence level of the confidence intervals of the estimate.
:param kwargs: Other optional args to be passed to the CI method.
:returns: The obtained confidence interval.
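Example (a sketch; ``estimate`` is assumed to be a previously obtained ``CausalEstimate``):

>>> interval = estimator.estimate_confidence_intervals(
...     estimate_value=estimate.value, confidence_level=0.95, method="bootstrap"
... )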
"""
if method is None:
if self._confidence_intervals:
method = self._confidence_intervals # this is either True or methodname
else:
method = "default"
confidence_intervals = None
if confidence_level is None:
confidence_level = self.confidence_level
if method == "default" or method is True: # user has not provided any method
try:
confidence_intervals = self._estimate_confidence_intervals(confidence_level, method=method, **kwargs)
except NotImplementedError:
confidence_intervals = self._estimate_confidence_intervals_with_bootstrap(
estimate_value, confidence_level, **kwargs
)
else:
if method == "bootstrap":
confidence_intervals = self._estimate_confidence_intervals_with_bootstrap(
estimate_value, confidence_level, **kwargs
)
else:
confidence_intervals = self._estimate_confidence_intervals(confidence_level, method=method, **kwargs)
return confidence_intervals
def _estimate_std_error_with_bootstrap(self, num_simulations=None, sample_size_fraction=None):
"""Compute standard error using the bootstrap method. Standard error
and confidence intervals use the same parameter num_simulations for
the number of bootstrap simulations.
:param num_simulations: Number of bootstrapped samples.
:param sample_size_fraction: Fraction of data to be resampled.
:returns: Standard error of the obtained estimate.
"""
# Use existing params, if new user defined params are not present
if num_simulations is None:
num_simulations = self.num_simulations
if sample_size_fraction is None:
sample_size_fraction = self.sample_size_fraction
# Checking if bootstrap_estimates are already computed
if self._bootstrap_estimates is None:
self._bootstrap_estimates = self._generate_bootstrap_estimates(num_simulations, sample_size_fraction)
elif CausalEstimator.is_bootstrap_parameter_changed(self._bootstrap_estimates.params, locals()):
# Check if any parameter is changed from the previous std error estimate
self._bootstrap_estimates = self._generate_bootstrap_estimates(num_simulations, sample_size_fraction)
std_error = np.std(self._bootstrap_estimates.estimates)
return std_error
def _estimate_std_error(self, method=None, **kwargs):
"""
This method is to be overridden by the child classes, so that they
can run a standard error estimation method suited to the specific
causal estimator.
"""
raise NotImplementedError(
(
"This method for estimating standard errors is "
+ CausalEstimator.DEFAULT_NOTIMPLEMENTEDERROR_MSG
+ " Meanwhile, you can try the bootstrap method (method='bootstrap') to estimate standard errors."
).format(self.__class__)
)
def estimate_std_error(self, method=None, **kwargs):
"""Compute standard error of an obtained causal estimate.
:param method: Method for computing the standard error.
:param kwargs: Other optional parameters to be passed to the estimating method.
:returns: Standard error of the causal estimate.
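Example (a sketch, forcing the bootstrap method with a hypothetical simulation count):

>>> std_err = estimator.estimate_std_error(method="bootstrap", num_simulations=100)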
"""
if method is None:
if self._confidence_intervals:
method = self._confidence_intervals
else:
method = "default"
std_error = None
if method == "default" or method is True: # user has not provided any method
try:
std_error = self._estimate_std_error(method, **kwargs)
except NotImplementedError:
std_error = self._estimate_std_error_with_bootstrap(**kwargs)
else:
if method == "bootstrap":
std_error = self._estimate_std_error_with_bootstrap(**kwargs)
else:
std_error = self._estimate_std_error(method, **kwargs)
return std_error
def _test_significance_with_bootstrap(self, estimate_value, num_null_simulations=None):
"""Test statistical significance of an estimate using the bootstrap method.
:param estimate_value: Obtained estimate's value
:param num_null_simulations: Number of simulations for the null hypothesis
:returns: p-value of the statistical significance test.
"""
# Use existing params, if new user defined params are not present
if num_null_simulations is None:
num_null_simulations = self.num_null_simulations
do_retest = self._bootstrap_null_estimates is None or CausalEstimator.is_bootstrap_parameter_changed(
self._bootstrap_null_estimates.params, locals()
)
if do_retest:
null_estimates = np.zeros(num_null_simulations)
for i in range(num_null_simulations):
new_outcome = np.random.permutation(self._outcome)
new_data = self._data.assign(dummy_outcome=new_outcome)
# self._outcome = self._data["dummy_outcome"]
new_estimator = self.get_new_estimator_object(
self._target_estimand,
test_significance=False,
evaluate_effect_strength=False,
confidence_intervals=False,
)
new_estimator.fit(
data=new_data,
treatment_name=self._target_estimand.treatment_variable,
outcome_name=("dummy_outcome",),
effect_modifier_names=self._effect_modifier_names,
)
new_effect = new_estimator.estimate_effect(
target_units=self._target_units,
)
null_estimates[i] = new_effect.value
self._bootstrap_null_estimates = CausalEstimator.BootstrapEstimates(
null_estimates, {"num_null_simulations": num_null_simulations, "sample_size_fraction": 1}
)
# Processing the null hypothesis estimates
sorted_null_estimates = np.sort(self._bootstrap_null_estimates.estimates)
self.logger.debug("Null estimates: {0}".format(sorted_null_estimates))
median_estimate = sorted_null_estimates[int(num_null_simulations / 2)]
# Doing a two-sided test
if estimate_value > median_estimate:
# Being conservative with the p-value reported
estimate_index = np.searchsorted(sorted_null_estimates, estimate_value, side="left")
p_value = 1 - (estimate_index / num_null_simulations)
elif estimate_value <= median_estimate:
# Being conservative with the p-value reported
estimate_index = np.searchsorted(sorted_null_estimates, estimate_value, side="right")
p_value = estimate_index / num_null_simulations
# If the estimate_index is 0, it depends on the number of simulations
if p_value == 0:
p_value = (0, 1 / len(sorted_null_estimates)) # a tuple determining the range.
elif p_value == 1:
p_value = (1 - 1 / len(sorted_null_estimates), 1)
signif_dict = {"p_value": p_value}
return signif_dict
def _test_significance(self, estimate_value, method=None, **kwargs):
"""
This method is to be overridden by the child classes, so that they
can run a significance test suited to the specific
causal estimator.
"""
raise NotImplementedError(
(
"This method for testing statistical significance is "
+ CausalEstimator.DEFAULT_NOTIMPLEMENTEDERROR_MSG
+ " Meanwhile, you can try the bootstrap method (method='bootstrap') to test statistical significance."
).format(self.__class__)
)
def test_significance(self, estimate_value, method=None, **kwargs):
"""Test statistical significance of obtained estimate.
By default, uses resampling to create a non-parametric significance test.
A general procedure. Individual child estimators can implement different methods.
If the method name is different from "bootstrap", this function calls the
implementation of the child estimator.
:param self: object instance of class Estimator
:param estimate_value: obtained estimate's value
:param method: Method for checking statistical significance
:returns: p-value from the significance test
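Example (a sketch; ``estimate`` is a previously obtained ``CausalEstimate``):

>>> signif = estimator.test_significance(estimate.value, method="bootstrap")
>>> signif["p_value"]  # a float, or a (low, high) tuple at the resolution limit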
"""
if method is None:
if self._significance_test:
method = self._significance_test # this is either True or methodname
else:
method = "default"
signif_dict = None
if method == "default" or method is True: # user has not provided any method
try:
signif_dict = self._test_significance(estimate_value, method, **kwargs)
except NotImplementedError:
signif_dict = self._test_significance_with_bootstrap(estimate_value, **kwargs)
else:
if method == "bootstrap":
signif_dict = self._test_significance_with_bootstrap(estimate_value, **kwargs)
else:
signif_dict = self._test_significance(estimate_value, method, **kwargs)
return signif_dict
def evaluate_effect_strength(self, estimate):
fraction_effect_explained = self._evaluate_effect_strength(estimate, method="fraction-effect")
# Need to test r-squared before supporting
# effect_r_squared = self._evaluate_effect_strength(estimate, method="r-squared")
strength_dict = {
"fraction-effect": fraction_effect_explained
# 'r-squared': effect_r_squared
}
return strength_dict
def _evaluate_effect_strength(self, estimate, method="fraction-effect"):
supported_methods = ["fraction-effect"]
if method not in supported_methods:
raise NotImplementedError("This method is not supported for evaluating effect strength")
if method == "fraction-effect":
naive_obs_estimate = self.estimate_effect_naive()
self.logger.debug("Causal estimate: %s, naive observed estimate: %s", estimate.value, naive_obs_estimate.value)
fraction_effect_explained = estimate.value / naive_obs_estimate.value
return fraction_effect_explained
# elif method == "r-squared":
# outcome_mean = np.mean(self._outcome)
# total_variance = np.sum(np.square(self._outcome - outcome_mean))
# Assuming a linear model with one variable: the treatment
# Currently only works for continuous y
# causal_model = outcome_mean + estimate.value*self._treatment
# squared_residual = np.sum(np.square(self._outcome - causal_model))
# r_squared = 1 - (squared_residual/total_variance)
# return r_squared
else:
return None
def update_input(self, treatment_value, control_value, target_units):
self._control_value = control_value
self._treatment_value = treatment_value
self._target_units = target_units
@staticmethod
def is_bootstrap_parameter_changed(bootstrap_estimates_params, given_params):
"""Check whether parameters of the bootstrap have changed.
This is an efficiency method that checks if fresh resampling of the bootstrap samples is required.
Returns True if parameters have changed and resampling should be done again.
:param bootstrap_estimates_params: A dictionary of parameters for the current bootstrap samples
:param given_params: A dictionary of parameters passed by the user
:returns: A binary flag denoting whether the parameters are different.
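Example (a sketch with made-up parameter values):

>>> CausalEstimator.is_bootstrap_parameter_changed(
...     {"num_simulations": 100, "sample_size_fraction": 1},
...     {"num_simulations": 200},
... )
True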
"""
is_any_parameter_changed = False
for prm, val in bootstrap_estimates_params.items():
given_val = given_params.get(prm, None)
if given_val is not None and given_val != val:
is_any_parameter_changed = True
break
return is_any_parameter_changed
def target_units_tostr(self):
s = ""
if type(self._target_units) is str:
s += self._target_units
elif callable(self._target_units):
s += "Data subset defined by a function"
elif isinstance(self._target_units, pd.DataFrame):
s += "Data subset provided as a data frame"
return s
def signif_results_tostr(self, signif_results):
s = ""
pval = signif_results["p_value"]
if type(pval) is tuple:
s += "[{0}, {1}]".format(pval[0], pval[1])
else:
s += "{0}".format(pval)
return s
def estimate_effect(
data: pd.DataFrame,
treatment: Union[str, List[str]],
outcome: Union[str, List[str]],
identifier_name: str,
estimator: CausalEstimator,
control_value: int = 0,
treatment_value: int = 1,
target_units: str = "ate",
effect_modifiers: List[str] = None,
fit_estimator: bool = True,
method_params: Optional[Dict] = None,
):
"""Estimate the identified causal effect.
You can also directly call any of the EconML estimation methods. The convention is "backdoor.econml.path-to-estimator-class". For example, for the double machine learning estimator ("DML" class) that is located inside "dml" module of EconML, you can use the method name, "backdoor.econml.dml.DML". CausalML estimators can also be called. See `this demo notebook <https://py-why.github.io/dowhy/example_notebooks/dowhy-conditional-treatment-effects.html>`_.
:param data: data frame containing the data on which the effect is to be estimated
:param treatment: Name of the treatment
:param outcome: Name of the outcome
:param identifier_name: name of the identification method whose estimand should be used (e.g., "backdoor"); the estimand itself is taken from the estimator's target estimand
:param estimator: Instance of a CausalEstimator to use
:param control_value: Value of the treatment in the control group, for effect estimation. If treatment is multi-variate, this can be a list.
:param treatment_value: Value of the treatment in the treated group, for effect estimation. If treatment is multi-variate, this can be a list.
:param target_units: (Experimental) The units for which the treatment effect should be estimated. This can be of three types. (1) a string for common specifications of target units (namely, "ate", "att" and "atc"), (2) a lambda function that can be used as an index for the data (pandas DataFrame), or (3) a new DataFrame that contains values of the effect_modifiers and effect will be estimated only for this new data.
:param effect_modifiers: Names of effect modifier variables can be (optionally) specified here too, since they do not affect identification. If None, the effect_modifiers from the CausalModel are used.
:param fit_estimator: Boolean flag on whether to fit the estimator.
Setting it to False is useful to estimate the effect on new data using a previously fitted estimator.
:param method_params: Dictionary containing any method-specific parameters; a "fit_params" entry, if present, is passed through to the estimator's fit() method.
:returns: An instance of the CausalEstimate class, containing the causal effect estimate
and other method-dependent information
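Example (an illustrative sketch of the functional API; ``df``, the column names and
``LinearRegressionEstimator`` are placeholders for your own data and estimator choice):

>>> from dowhy.causal_estimators.linear_regression_estimator import LinearRegressionEstimator
>>> estimator = LinearRegressionEstimator(identified_estimand)
>>> estimate = estimate_effect(
...     data=df,
...     treatment="v0",
...     outcome="y",
...     identifier_name="backdoor",
...     estimator=estimator,
...     target_units="ate",
... )
>>> print(estimate.value)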
"""
if effect_modifiers is None:
effect_modifiers = []
treatment = parse_state(treatment)
outcome = parse_state(outcome)
causal_estimator_class = estimator.__class__
identified_estimand = estimator._target_estimand
identified_estimand.set_identifier_method(identifier_name)
if identified_estimand.no_directed_path:
logger.warning("No directed path from {0} to {1}.".format(treatment, outcome))
return CausalEstimate(
0, identified_estimand, None, control_value=control_value, treatment_value=treatment_value
)
# Check if estimator's target estimand is identified
elif identified_estimand.estimands[identifier_name] is None:
logger.error("No valid identified estimand available.")
return CausalEstimate(None, None, None, control_value=control_value, treatment_value=treatment_value)
if fit_estimator:
estimator.fit(
data=data,
treatment_name=treatment,
outcome_name=outcome,
effect_modifier_names=effect_modifiers,
**method_params["fit_params"] if "fit_params" in method_params else {},
)
estimate = estimator.estimate_effect(
treatment_value=treatment_value,
control_value=control_value,
target_units=target_units,
confidence_intervals=estimator._confidence_intervals,
)
# Store parameters inside estimate object for refutation methods
# TODO: This add_params needs to move to the estimator class
# inside estimate_effect and estimate_conditional_effect
estimate.add_params(
estimand_type=identified_estimand.estimand_type,
estimator_class=causal_estimator_class,
test_significance=estimator._significance_test,
evaluate_effect_strength=estimator._effect_strength_eval,
confidence_intervals=estimator._confidence_intervals,
target_units=target_units,
effect_modifiers=effect_modifiers,
)
if estimator._significance_test:
estimator.test_significance(estimate.value, method=estimator._significance_test)
if estimator._confidence_intervals:
estimator.estimate_confidence_intervals(
estimate.value, confidence_level=estimator.confidence_level, method=estimator._confidence_intervals
)
if estimator._effect_strength_eval:
effect_strength_dict = estimator.evaluate_effect_strength(estimate)
estimate.add_effect_strength(effect_strength_dict)
return estimate
class CausalEstimate:
"""Class for the estimate object that every causal estimator returns"""
def __init__(
self,
estimate,
target_estimand,
realized_estimand_expr,
control_value,
treatment_value,
conditional_estimates=None,
**kwargs,
):
self.value = estimate
self.target_estimand = target_estimand
self.realized_estimand_expr = realized_estimand_expr
self.control_value = control_value
self.treatment_value = treatment_value
self.conditional_estimates = conditional_estimates
self.params = kwargs
if self.params is not None:
for key, value in self.params.items():
setattr(self, key, value)
self.effect_strength = None
def add_estimator(self, estimator_instance):
self.estimator = estimator_instance
def add_effect_strength(self, strength_dict):
self.effect_strength = strength_dict
def add_params(self, **kwargs):
self.params.update(kwargs)
def get_confidence_intervals(self, confidence_level=None, method=None, **kwargs):
"""Get confidence intervals of the obtained estimate.
By default, this is done with the help of bootstrapped confidence intervals
but can be overridden if the specific estimator implements other methods of estimating confidence intervals.
If the method provided is not bootstrap, this function calls the implementation of the specific estimator.
:param method: Method for estimating confidence intervals.
:param confidence_level: The confidence level of the confidence intervals of the estimate.
:param kwargs: Other optional args to be passed to the CI method.
:returns: The obtained confidence interval.
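Example (a sketch):

>>> interval = estimate.get_confidence_intervals(confidence_level=0.95, method="bootstrap")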
"""
confidence_intervals = self.estimator.estimate_confidence_intervals(
estimate_value=self.value, confidence_level=confidence_level, method=method, **kwargs
)
return confidence_intervals
def get_standard_error(self, method=None, **kwargs):
"""Get standard error of the obtained estimate.
By default, this is done with the help of bootstrapped standard errors
but can be overridden if the specific estimator implements other methods of estimating standard error.
If the method provided is not bootstrap, this function calls the implementation of the specific estimator.
:param method: Method for computing the standard error.
:param kwargs: Other optional parameters to be passed to the estimating method.
:returns: Standard error of the causal estimate.
"""
std_error = self.estimator.estimate_std_error(method=method, **kwargs)
return std_error
def test_stat_significance(self, method=None, **kwargs):
"""Test statistical significance of the estimate obtained.
By default, uses resampling to create a non-parametric significance test.
Individual child estimators can implement different methods.
If the method name is different from "bootstrap", this function calls the
implementation of the child estimator.
:param method: Method for checking statistical significance
:param kwargs: Other optional parameters to be passed to the estimating method.
:returns: p-value from the significance test
"""
signif_results = self.estimator.test_significance(self.value, method=method, **kwargs)
return {"p_value": signif_results["p_value"]}
def estimate_conditional_effects(
self, effect_modifiers=None, num_quantiles=CausalEstimator.NUM_QUANTILES_TO_DISCRETIZE_CONT_COLS
):
"""Estimate treatment effect conditioned on given variables.
If a numeric effect modifier is provided, it is discretized into quantile bins. If you would like a custom discretization, you can do so yourself: create a new column containing the discretized effect modifier and then include that column's name in the effect_modifier_names argument.
:param effect_modifiers: Names of effect modifier variables over which the conditional effects will be estimated. If not provided, defaults to the effect modifiers specified during creation of the CausalEstimator object.
:param num_quantiles: The number of quantiles into which a numeric effect modifier variable is discretized. Does not affect any categorical effect modifiers.
:returns: A (multi-index) dataframe that provides separate effects for each value of the (discretized) effect modifiers.
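Example (a sketch; "X0" is a placeholder effect-modifier column name):

>>> conditional_effects = estimate.estimate_conditional_effects(
...     effect_modifiers=["X0"], num_quantiles=5
... )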
"""
return self.estimator._estimate_conditional_effects(
self.estimator._estimate_effect_fn, effect_modifiers, num_quantiles
)
def interpret(self, method_name=None, **kwargs):
"""Interpret the causal estimate.
:param method_name: Method used (string) or a list of methods. If None, then the default for the specific estimator is used.
:param kwargs:: Optional parameters that are directly passed to the interpreter method.
:returns: None
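Example (a sketch, using the default textual interpreter defined in this module):

>>> estimate.interpret(method_name="textual_effect_interpreter")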
"""
if method_name is None:
method_name = self.estimator.interpret_method
method_name_arr = parse_state(method_name)
for method in method_name_arr:
interpreter = interpreters.get_class_object(method)
interpreter(self, **kwargs).interpret()
def __str__(self):
s = "*** Causal Estimate ***\n"
# No estimand was identified (identification failed)
if self.target_estimand is None:
return "Estimation failed! No relevant identified estimand available for this estimation method."
s += "\n## Identified estimand\n{0}".format(self.target_estimand.__str__(only_target_estimand=True))
s += "\n## Realized estimand\n{0}".format(self.realized_estimand_expr)
if hasattr(self, "estimator"):
s += "\nTarget units: {0}\n".format(self.estimator.target_units_tostr())
s += "\n## Estimate\n"
s += "Mean value: {0}\n".format(self.value)
s += ""
if hasattr(self, "cate_estimates"):
s += "Effect estimates: {0}\n".format(self.cate_estimates)
if hasattr(self, "estimator"):
if self.estimator._significance_test:
s += "p-value: {0}\n".format(self.estimator.signif_results_tostr(self.test_stat_significance()))
if self.estimator._confidence_intervals:
s += "{0}% confidence interval: {1}\n".format(
100 * self.estimator.confidence_level, self.get_confidence_intervals()
)
if self.conditional_estimates is not None:
s += "### Conditional Estimates\n"
s += str(self.conditional_estimates)
if self.effect_strength is not None:
s += "\n## Effect Strength\n"
s += "Change in outcome attributable to treatment: {}\n".format(self.effect_strength["fraction-effect"])
# s += "Variance in outcome explained by treatment: {}\n".format(self.effect_strength["r-squared"])
return s
class RealizedEstimand(object):
def __init__(self, identified_estimand, estimator_name):
self.treatment_variable = identified_estimand.treatment_variable
self.outcome_variable = identified_estimand.outcome_variable
self.backdoor_variables = identified_estimand.get_backdoor_variables()
self.instrumental_variables = identified_estimand.instrumental_variables
self.estimand_type = identified_estimand.estimand_type
self.estimand_expression = None
self.assumptions = None
self.estimator_name = estimator_name
def update_assumptions(self, estimator_assumptions):
self.assumptions = estimator_assumptions
def update_estimand_expression(self, estimand_expression):
self.estimand_expression = estimand_expression
def __str__(self):
s = "Realized estimand: {0}\n".format(self.estimator_name)
s += "Realized estimand type: {0}\n".format(self.estimand_type)
s += "Estimand expression:\n{0}\n".format(sp.pretty(self.estimand_expression))
j = 1
for ass_name, ass_str in self.assumptions.items():
s += "Estimand assumption {0}, {1}: {2}\n".format(j, ass_name, ass_str)
j += 1
return s
| andresmor-ms | 11c4e0dafd6e824eb81ad14262457d954ae61468 | affe0952f4aba6845247355c171565510c2c1673 | got it, I understand the motivation now. | amit-sharma | 201 |
py-why/dowhy | 746 | Functional api/causal estimators | * Introduce `fit()` method to estimators.
* Refactor constructors to avoid using `*args` and `**kwargs` and have more explicit parameters.
* Refactor refuters and other parts of the code to use `fit()` and modify arguments to `estimate_effect()` | null | 2022-11-04 16:15:39+00:00 | 2022-12-03 17:07:53+00:00 | dowhy/causal_estimator.py | import logging
from collections import namedtuple
from typing import Dict, List, Optional, Union
import numpy as np
import pandas as pd
import sympy as sp
from sklearn.utils import resample
import dowhy.interpreters as interpreters
from dowhy import causal_estimators
from dowhy.causal_graph import CausalGraph
from dowhy.causal_identifier.identified_estimand import IdentifiedEstimand
from dowhy.utils.api import parse_state
logger = logging.getLogger(__name__)
class CausalEstimator:
"""Base class for an estimator of causal effect.
Subclasses implement different estimation methods. All estimation methods are in the package "dowhy.causal_estimators"
"""
# The default number of simulations for statistical testing
DEFAULT_NUMBER_OF_SIMULATIONS_STAT_TEST = 1000
# The default number of simulations to obtain confidence intervals
DEFAULT_NUMBER_OF_SIMULATIONS_CI = 100
# The portion of the total size that should be taken each time to find the confidence intervals
# 1 is the recommended value
# https://ocw.mit.edu/courses/mathematics/18-05-introduction-to-probability-and-statistics-spring-2014/readings/MIT18_05S14_Reading24.pdf
# https://projecteuclid.org/download/pdf_1/euclid.ss/1032280214
DEFAULT_SAMPLE_SIZE_FRACTION = 1
# The default Confidence Level
DEFAULT_CONFIDENCE_LEVEL = 0.95
# Number of quantiles to discretize continuous columns, for applying groupby
NUM_QUANTILES_TO_DISCRETIZE_CONT_COLS = 5
# Prefix to add to temporary categorical variables created after discretization
TEMP_CAT_COLUMN_PREFIX = "__categorical__"
DEFAULT_NOTIMPLEMENTEDERROR_MSG = "not yet implemented for {0}. If you would like this to be implemented in the next version, please raise an issue at https://github.com/microsoft/dowhy/issues"
BootstrapEstimates = namedtuple("BootstrapEstimates", ["estimates", "params"])
DEFAULT_INTERPRET_METHOD = ["textual_effect_interpreter"]
# std args to be removed from locals() before being passed to args_dict
_STD_INIT_ARGS = ("self", "__class__", "args", "kwargs")
def __init__(
self,
data,
identified_estimand,
treatment,
outcome,
control_value=0,
treatment_value=1,
test_significance=False,
evaluate_effect_strength=False,
confidence_intervals=False,
target_units=None,
effect_modifiers=None,
num_null_simulations=DEFAULT_NUMBER_OF_SIMULATIONS_STAT_TEST,
num_simulations=DEFAULT_NUMBER_OF_SIMULATIONS_CI,
sample_size_fraction=DEFAULT_SAMPLE_SIZE_FRACTION,
confidence_level=DEFAULT_CONFIDENCE_LEVEL,
need_conditional_estimates="auto",
num_quantiles_to_discretize_cont_cols=NUM_QUANTILES_TO_DISCRETIZE_CONT_COLS,
**kwargs,
):
"""Initializes an estimator with data and names of relevant variables.
This method is called from the constructors of its child classes.
:param data: data frame containing the data
:param identified_estimand: probability expression
representing the target identified estimand to estimate.
:param treatment: name of the treatment variable
:param outcome: name of the outcome variable
:param control_value: Value of the treatment in the control group, for effect estimation. If treatment is multi-variate, this can be a list.
:param treatment_value: Value of the treatment in the treated group, for effect estimation. If treatment is multi-variate, this can be a list.
:param test_significance: Binary flag or a string indicating whether to test significance and by which method. All estimators support test_significance="bootstrap" that estimates a p-value for the obtained estimate using the bootstrap method. Individual estimators can override this to support custom testing methods. The bootstrap method supports an optional parameter, num_null_simulations. If False, no testing is done. If True, significance of the estimate is tested using the custom method if available, otherwise by bootstrap.
:param evaluate_effect_strength: (Experimental) whether to evaluate the strength of effect
:param confidence_intervals: Binary flag or a string indicating whether the confidence intervals should be computed and which method should be used. All methods support estimation of confidence intervals using the bootstrap method by using the parameter confidence_intervals="bootstrap". The bootstrap method takes in two arguments (num_simulations and sample_size_fraction) that can be optionally specified in the params dictionary. Estimators may also override this to implement their own confidence interval method. If this parameter is False, no confidence intervals are computed. If True, confidence intervals are computed by the estimator's specific method if available, otherwise through bootstrap.
:param target_units: The units for which the treatment effect should be estimated. This can be a string for common specifications of target units (namely, "ate", "att" and "atc"). It can also be a lambda function that can be used as an index for the data (pandas DataFrame). Alternatively, it can be a new DataFrame that contains values of the effect_modifiers and effect will be estimated only for this new data.
:param effect_modifiers: Variables on which to compute separate
effects, or return a heterogeneous effect function. Not all
methods support this currently.
:param num_null_simulations: The number of simulations for testing the
statistical significance of the estimator
:param num_simulations: The number of simulations for finding the
confidence interval (and/or standard error) for an estimate
:param sample_size_fraction: The size of the sample for the bootstrap
estimator
:param confidence_level: The confidence level of the confidence
interval estimate
:param need_conditional_estimates: Boolean flag indicating whether
conditional estimates should be computed. Defaults to True if
there are effect modifiers in the graph
:param num_quantiles_to_discretize_cont_cols: The number of quantiles
into which a numeric effect modifier is split, to enable
estimation of conditional treatment effect over it.
:param kwargs: (optional) Additional estimator-specific parameters
:returns: an instance of the estimator class.
"""
self._data = data
self._target_estimand = identified_estimand
# Currently estimation methods only support univariate treatment and outcome
self._treatment_name = treatment
self._outcome_name = outcome[0] # assuming one-dimensional outcome
self._control_value = control_value
self._treatment_value = treatment_value
self._significance_test = test_significance
self._effect_strength_eval = evaluate_effect_strength
self._target_units = target_units
self._effect_modifier_names = effect_modifiers
self._confidence_intervals = confidence_intervals
self._bootstrap_estimates = None # for confidence intervals and std error
self._bootstrap_null_estimates = None # for significance test
self._effect_modifiers = None
self.method_params = kwargs
# Setting the default interpret method
self.interpret_method = CausalEstimator.DEFAULT_INTERPRET_METHOD
self.logger = logging.getLogger(__name__)
# Setting treatment and outcome values
if self._data is not None:
self._treatment = self._data[self._treatment_name]
self._outcome = self._data[self._outcome_name]
if self._effect_modifier_names:
# only add the observed nodes
self._effect_modifier_names = [
cname for cname in self._effect_modifier_names if cname in self._data.columns
]
if len(self._effect_modifier_names) > 0:
self._effect_modifiers = self._data[self._effect_modifier_names]
self._effect_modifiers = pd.get_dummies(self._effect_modifiers, drop_first=True)
self.logger.debug("Effect modifiers: " + ",".join(self._effect_modifier_names))
else:
self._effect_modifier_names = None
# Check if some parameters were set, otherwise set to default values
self.num_null_simulations = num_null_simulations
self.num_simulations = num_simulations
self.sample_size_fraction = sample_size_fraction
self.confidence_level = confidence_level
self.num_quantiles_to_discretize_cont_cols = num_quantiles_to_discretize_cont_cols
# Estimate conditional estimates by default
self.need_conditional_estimates = (
need_conditional_estimates if need_conditional_estimates != "auto" else bool(self._effect_modifier_names)
)
@staticmethod
def get_estimator_object(new_data, identified_estimand, estimate):
"""Create a new estimator of the same type as the one passed in the estimate argument.
Creates a new object with new_data and the identified_estimand
:param new_data: np.ndarray, pd.Series, pd.DataFrame
The newly assigned data on which the estimator should run
:param identified_estimand: IdentifiedEstimand
An instance of the identified estimand class that provides the information with
respect to which causal pathways are employed when the treatment affects the outcome
:param estimate: CausalEstimate
It is an already existing estimate whose properties we wish to replicate
:returns: An instance of the same estimator class that had generated the given estimate.
"""
estimator_class = estimate.params["estimator_class"]
new_estimator = estimator_class(
new_data,
identified_estimand,
identified_estimand.treatment_variable,
identified_estimand.outcome_variable,
# names of treatment and outcome
control_value=estimate.control_value,
treatment_value=estimate.treatment_value,
test_significance=False,
evaluate_effect_strength=False,
confidence_intervals=estimate.params["confidence_intervals"],
target_units=estimate.params["target_units"],
effect_modifiers=estimate.params["effect_modifiers"],
**estimate.params["method_params"] if estimate.params["method_params"] is not None else {},
)
return new_estimator
def _estimate_effect(self):
"""This method is to be overriden by the child classes, so that they can run the estimation technique of their choice"""
raise NotImplementedError(
("Main estimation method is " + CausalEstimator.DEFAULT_NOTIMPLEMENTEDERROR_MSG).format(self.__class__)
)
def estimate_effect(self):
"""Base estimation method that calls the estimate_effect method of its calling subclass.
Can optionally also test significance and estimate effect strength for any returned estimate.
:param self: object instance of class Estimator
:returns: A CausalEstimate instance that contains point estimates of average and conditional effects. Based on the parameters provided, it optionally includes confidence intervals, standard errors, statistical significance and other statistical parameters.
"""
est = self._estimate_effect()
est.add_estimator(self)
if self._significance_test:
self.test_significance(est.value, method=self._significance_test)
if self._confidence_intervals:
self.estimate_confidence_intervals(
est.value, confidence_level=self.confidence_level, method=self._confidence_intervals
)
if self._effect_strength_eval:
effect_strength_dict = self.evaluate_effect_strength(est)
est.add_effect_strength(effect_strength_dict)
return est
def estimate_effect_naive(self):
# TODO Only works for binary treatment
df_withtreatment = self._data.loc[self._data[self._treatment_name] == 1]
df_notreatment = self._data.loc[self._data[self._treatment_name] == 0]
est = np.mean(df_withtreatment[self._outcome_name]) - np.mean(df_notreatment[self._outcome_name])
return CausalEstimate(est, None, None, control_value=0, treatment_value=1)
def _estimate_effect_fn(self, data_df):
"""Function used in conditional effect estimation. This function is to be overridden by each child estimator.
The overridden function should take in a dataframe as input and return the estimate for that data.
"""
raise NotImplementedError(
("Conditional treatment effects are " + CausalEstimator.DEFAULT_NOTIMPLEMENTEDERROR_MSG).format(
self.__class__
)
)
def _estimate_conditional_effects(self, estimate_effect_fn, effect_modifier_names=None, num_quantiles=None):
"""Estimate conditional treatment effects. Common method for all estimators that utilizes a specific estimate_effect_fn implemented by each child estimator.
If a numeric effect modifier is provided, it is discretized into quantile bins. If you would like a custom discretization, you can do so yourself: create a new column containing the discretized effect modifier and then include that column's name in the effect_modifier_names argument.
:param estimate_effect_fn: Function that has a single parameter (a data frame) and returns the treatment effect estimate on that data.
:param effect_modifier_names: Names of effect modifier variables over which the conditional effects will be estimated. If not provided, defaults to the effect modifiers specified during creation of the CausalEstimator object.
:param num_quantiles: The number of quantiles into which a numeric effect modifier variable is discretized. Does not affect any categorical effect modifiers.
:returns: A (multi-index) dataframe that provides separate effects for each value of the (discretized) effect modifiers.
"""
# Defaulting to class default values if parameters are not provided
if effect_modifier_names is None:
effect_modifier_names = self._effect_modifier_names
if num_quantiles is None:
num_quantiles = self.num_quantiles_to_discretize_cont_cols
# Checking that there is at least one effect modifier
if not effect_modifier_names:
raise ValueError("At least one effect modifier should be specified to compute conditional effects.")
# Making sure that effect_modifier_names is a list
effect_modifier_names = parse_state(effect_modifier_names)
if not all(em in self._effect_modifier_names for em in effect_modifier_names):
self.logger.warning(
"At least one of the provided effect modifiers was not included while fitting the estimator. You may get incorrect results. To resolve, fit the estimator again by providing the updated effect modifiers in estimate_effect()."
)
# Making a copy since we are going to be changing effect modifier names
effect_modifier_names = effect_modifier_names.copy()
prefix = CausalEstimator.TEMP_CAT_COLUMN_PREFIX
# For every numeric effect modifier, adding a temp categorical column
for i in range(len(effect_modifier_names)):
em = effect_modifier_names[i]
if pd.api.types.is_numeric_dtype(self._data[em].dtypes):
self._data[prefix + str(em)] = pd.qcut(self._data[em], num_quantiles, duplicates="drop")
effect_modifier_names[i] = prefix + str(em)
# Grouping by effect modifiers and computing effect separately
by_effect_mods = self._data.groupby(effect_modifier_names)
cond_est_fn = lambda x: self._do(self._treatment_value, x) - self._do(self._control_value, x)
conditional_estimates = by_effect_mods.apply(estimate_effect_fn)
# Deleting the temporary categorical columns
for em in effect_modifier_names:
if em.startswith(prefix):
self._data.pop(em)
return conditional_estimates
def _do(self, x, data_df=None):
raise NotImplementedError(
("Do-operator is " + CausalEstimator.DEFAULT_NOTIMPLEMENTEDERROR_MSG).format(self.__class__)
)
def do(self, x, data_df=None):
"""Method that implements the do-operator.
Given a value x for the treatment, returns the expected value of the outcome when the treatment is intervened to a value x.
:param x: Value of the treatment
:param data_df: Data on which the do-operator is to be applied.
:returns: Value of the outcome when treatment is intervened/set to x.
"""
est = self._do(x, data_df)
return est
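# Hedged usage sketch for the do-operator (illustrative only; the dataset
# helper and the LinearRegressionEstimator subclass are assumed to exist
# under these import paths in this version of the package, and _do is
# assumed to be implemented by the regression estimators):
#
#   import dowhy.datasets
#   from dowhy import CausalModel
#   from dowhy.causal_estimators.linear_regression_estimator import LinearRegressionEstimator
#
#   data = dowhy.datasets.linear_dataset(beta=10, num_common_causes=2,
#                                        num_samples=500, treatment_is_binary=True)
#   model = CausalModel(data=data["df"], treatment=data["treatment_name"],
#                       outcome=data["outcome_name"], graph=data["gml_graph"])
#   estimand = model.identify_effect()
#   estimator = LinearRegressionEstimator(data["df"], estimand,
#                                         estimand.treatment_variable,
#                                         estimand.outcome_variable)
#   estimator.estimate_effect()  # fits the underlying regression model
#   y_do_1 = estimator.do(1)     # expected outcome under do(T=1)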
def construct_symbolic_estimator(self, estimand):
raise NotImplementedError(("Symbolic estimator string is ").format(self.__class__))
def _generate_bootstrap_estimates(self, num_bootstrap_simulations, sample_size_fraction):
"""Helper function to generate causal estimates over bootstrapped samples.
:param num_bootstrap_simulations: Number of simulations for the bootstrap method.
:param sample_size_fraction: Fraction of the dataset to be resampled.
:returns: A collections.namedtuple containing a list of bootstrapped estimates and a dictionary containing parameters used for the bootstrap.
"""
# The array that stores the results of all estimations
simulation_results = np.zeros(num_bootstrap_simulations)
# Find the sample size as the given fraction of the population size
sample_size = int(sample_size_fraction * len(self._data))
if sample_size > len(self._data):
self.logger.warning("WARN: The sample size is greater than the data being sampled")
self.logger.info("INFO: The sample size: {}".format(sample_size))
self.logger.info("INFO: The number of simulations: {}".format(num_bootstrap_simulations))
# Perform the set number of simulations
for index in range(num_bootstrap_simulations):
new_data = resample(self._data, n_samples=sample_size)
new_estimator = type(self)(
new_data,
self._target_estimand,
self._target_estimand.treatment_variable,
self._target_estimand.outcome_variable,
# names of treatment and outcome
treatment_value=self._treatment_value,
control_value=self._control_value,
test_significance=False,
evaluate_effect_strength=False,
confidence_intervals=False,
target_units=self._target_units,
effect_modifiers=self._effect_modifier_names,
**self.method_params,
)
new_effect = new_estimator.estimate_effect()
simulation_results[index] = new_effect.value
estimates = CausalEstimator.BootstrapEstimates(
simulation_results,
{"num_simulations": num_bootstrap_simulations, "sample_size_fraction": sample_size_fraction},
)
return estimates
def _estimate_confidence_intervals_with_bootstrap(
self, estimate_value, confidence_level=None, num_simulations=None, sample_size_fraction=None
):
"""
Method to compute confidence interval using bootstrapped sampling.
:param estimate_value: obtained estimate's value
:param confidence_level: The level for which to compute CI (e.g., 95% confidence level translates to confidence_level=0.95)
:param num_simulations: The number of simulations to be performed to get the bootstrap confidence intervals.
:param sample_size_fraction: The fraction of the dataset to be resampled.
:returns: confidence interval at the specified level.
For more details on bootstrap or resampling statistics, refer to the following links:
https://ocw.mit.edu/courses/mathematics/18-05-introduction-to-probability-and-statistics-spring-2014/readings/MIT18_05S14_Reading24.pdf
https://projecteuclid.org/download/pdf_1/euclid.ss/1032280214
"""
# Using class default parameters if not specified
if num_simulations is None:
num_simulations = self.num_simulations
if sample_size_fraction is None:
sample_size_fraction = self.sample_size_fraction
# Checking if bootstrap_estimates are already computed
if self._bootstrap_estimates is None:
self._bootstrap_estimates = self._generate_bootstrap_estimates(num_simulations, sample_size_fraction)
elif CausalEstimator.is_bootstrap_parameter_changed(self._bootstrap_estimates.params, locals()):
# Checked if any parameter is changed from the previous std error estimate
self._bootstrap_estimates = self._generate_bootstrap_estimates(num_simulations, sample_size_fraction)
# Now use the data obtained from the simulations to get the value of the confidence estimates
bootstrap_estimates = self._bootstrap_estimates.estimates
# Get the variations of each bootstrap estimate and sort
bootstrap_variations = [bootstrap_estimate - estimate_value for bootstrap_estimate in bootstrap_estimates]
sorted_bootstrap_variations = np.sort(bootstrap_variations)
# Now we take the (1- p)th and the (p)th variations, where p is the chosen confidence level
upper_bound_index = int((1 - confidence_level) * len(sorted_bootstrap_variations))
lower_bound_index = int(confidence_level * len(sorted_bootstrap_variations))
# Get the lower and upper bounds by subtracting the variations from the estimate
lower_bound = estimate_value - sorted_bootstrap_variations[lower_bound_index]
upper_bound = estimate_value - sorted_bootstrap_variations[upper_bound_index]
return lower_bound, upper_bound
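# Worked illustration of the percentile computation above (a sketch, not
# executed here): with confidence_level=0.95 and 100 sorted bootstrap
# variations,
#   upper_bound_index = int((1 - 0.95) * 100) = 5
#   lower_bound_index = int(0.95 * 100) = 95
# so the returned interval is
#   (estimate_value - variations[95], estimate_value - variations[5]),
# i.e., the estimate shifted by the 95th- and 5th-percentile deviations.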
def _estimate_confidence_intervals(self, confidence_level=None, method=None, **kwargs):
"""
This method is to be overriden by the child classes, so that they
can run a confidence interval estimation method suited to the specific
causal estimator.
"""
raise NotImplementedError(
(
"This method for estimating confidence intervals is "
+ CausalEstimator.DEFAULT_NOTIMPLEMENTEDERROR_MSG
+ " Meanwhile, you can try the bootstrap method (method='bootstrap') to estimate confidence intervals."
).format(self.__class__)
)
def estimate_confidence_intervals(self, estimate_value, confidence_level=None, method=None, **kwargs):
"""Find the confidence intervals corresponding to any estimator
By default, this is done with the help of bootstrapped confidence intervals
but can be overridden if the specific estimator implements other methods of estimating confidence intervals.
If the method provided is not bootstrap, this function calls the implementation of the specific estimator.
:param estimate_value: obtained estimate's value
:param method: Method for estimating confidence intervals.
:param confidence_level: The confidence level of the confidence intervals of the estimate.
:param kwargs: Other optional args to be passed to the CI method.
:returns: The obtained confidence interval.
"""
if method is None:
if self._confidence_intervals:
method = self._confidence_intervals # this is either True or methodname
else:
method = "default"
confidence_intervals = None
if confidence_level is None:
confidence_level = self.confidence_level
if method == "default" or method is True: # user has not provided any method
try:
confidence_intervals = self._estimate_confidence_intervals(confidence_level, method=method, **kwargs)
except NotImplementedError:
confidence_intervals = self._estimate_confidence_intervals_with_bootstrap(
estimate_value, confidence_level, **kwargs
)
else:
if method == "bootstrap":
confidence_intervals = self._estimate_confidence_intervals_with_bootstrap(
estimate_value, confidence_level, **kwargs
)
else:
confidence_intervals = self._estimate_confidence_intervals(confidence_level, method=method, **kwargs)
return confidence_intervals
def _estimate_std_error_with_bootstrap(self, num_simulations=None, sample_size_fraction=None):
"""Compute standard error using the bootstrap method. Standard error
and confidence intervals use the same parameter num_simulations for
the number of bootstrap simulations.
:param num_simulations: Number of bootstrapped samples.
:param sample_size_fraction: Fraction of data to be resampled.
:returns: Standard error of the obtained estimate.
"""
# Use existing params, if new user defined params are not present
if num_simulations is None:
num_simulations = self.num_simulations
if sample_size_fraction is None:
sample_size_fraction = self.sample_size_fraction
# Checking if bootstrap_estimates are already computed
if self._bootstrap_estimates is None:
self._bootstrap_estimates = self._generate_bootstrap_estimates(num_simulations, sample_size_fraction)
elif CausalEstimator.is_bootstrap_parameter_changed(self._bootstrap_estimates.params, locals()):
# Check if any parameter is changed from the previous std error estimate
self._bootstrap_estimates = self._generate_bootstrap_estimates(num_simulations, sample_size_fraction)
std_error = np.std(self._bootstrap_estimates.estimates)
return std_error
def _estimate_std_error(self, method=None, **kwargs):
"""
This method is to be overriden by the child classes, so that they
can run a standard error estimation method suited to the specific
causal estimator.
"""
raise NotImplementedError(
(
"This method for estimating standard errors is "
+ CausalEstimator.DEFAULT_NOTIMPLEMENTEDERROR_MSG
+ " Meanwhile, you can try the bootstrap method (method='bootstrap') to estimate standard errors."
).format(self.__class__)
)
def estimate_std_error(self, method=None, **kwargs):
"""Compute standard error of an obtained causal estimate.
:param method: Method for computing the standard error.
:param kwargs: Other optional parameters to be passed to the estimating method.
:returns: Standard error of the causal estimate.
"""
if method is None:
if self._confidence_intervals:
method = self._confidence_intervals
else:
method = "default"
std_error = None
if method == "default" or method is True: # user has not provided any method
try:
std_error = self._estimate_std_error(method, **kwargs)
except NotImplementedError:
std_error = self._estimate_std_error_with_bootstrap(**kwargs)
else:
if method == "bootstrap":
std_error = self._estimate_std_error_with_bootstrap(**kwargs)
else:
std_error = self._estimate_std_error(method, **kwargs)
return std_error
def _test_significance_with_bootstrap(self, estimate_value, num_null_simulations=None):
"""Test statistical significance of an estimate using the bootstrap method.
:param estimate_value: Obtained estimate's value
:param num_null_simulations: Number of simulations for the null hypothesis
:returns: p-value of the statistical significance test.
"""
# Use existing params, if new user defined params are not present
if num_null_simulations is None:
num_null_simulations = self.num_null_simulations
do_retest = self._bootstrap_null_estimates is None or CausalEstimator.is_bootstrap_parameter_changed(
self._bootstrap_null_estimates.params, locals()
)
if do_retest:
null_estimates = np.zeros(num_null_simulations)
for i in range(num_null_simulations):
new_outcome = np.random.permutation(self._outcome)
new_data = self._data.assign(dummy_outcome=new_outcome)
# self._outcome = self._data["dummy_outcome"]
new_estimator = type(self)(
new_data,
self._target_estimand,
self._target_estimand.treatment_variable,
("dummy_outcome",),
test_significance=False,
evaluate_effect_strength=False,
confidence_intervals=False,
target_units=self._target_units,
effect_modifiers=self._effect_modifier_names,
**self.method_params,
)
new_effect = new_estimator.estimate_effect()
null_estimates[i] = new_effect.value
self._bootstrap_null_estimates = CausalEstimator.BootstrapEstimates(
null_estimates, {"num_null_simulations": num_null_simulations, "sample_size_fraction": 1}
)
# Processing the null hypothesis estimates
sorted_null_estimates = np.sort(self._bootstrap_null_estimates.estimates)
self.logger.debug("Null estimates: {0}".format(sorted_null_estimates))
median_estimate = sorted_null_estimates[int(num_null_simulations / 2)]
# Doing a two-sided test
if estimate_value > median_estimate:
# Being conservative with the p-value reported
estimate_index = np.searchsorted(sorted_null_estimates, estimate_value, side="left")
p_value = 1 - (estimate_index / num_null_simulations)
else:  # estimate_value <= median_estimate
# Being conservative with the p-value reported
estimate_index = np.searchsorted(sorted_null_estimates, estimate_value, side="right")
p_value = estimate_index / num_null_simulations
# If the estimate_index is 0, it depends on the number of simulations
if p_value == 0:
p_value = (0, 1 / len(sorted_null_estimates)) # a tuple determining the range.
elif p_value == 1:
p_value = (1 - 1 / len(sorted_null_estimates), 1)
signif_dict = {"p_value": p_value}
return signif_dict
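# Worked illustration of the permutation p-value above (a sketch): with
# num_null_simulations=1000, suppose the observed estimate exceeds the null
# median and searchsorted places it at index 970 among the sorted null
# estimates. Then p_value = 1 - 970 / 1000 = 0.03. An estimate falling at
# index 0 (or beyond the last index) is reported as a range instead, e.g.
# (0, 1/1000), since the resolution of the test is limited by the number
# of simulations.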
def _test_significance(self, estimate_value, method=None, **kwargs):
"""
This method is to be overriden by the child classes, so that they
can run a significance test suited to the specific
causal estimator.
"""
raise NotImplementedError(
(
"This method for testing statistical significance is "
+ CausalEstimator.DEFAULT_NOTIMPLEMENTEDERROR_MSG
+ " Meanwhile, you can try the bootstrap method (method='bootstrap') to test statistical significance."
).format(self.__class__)
)
def test_significance(self, estimate_value, method=None, **kwargs):
"""Test statistical significance of obtained estimate.
By default, uses resampling to create a non-parametric significance test.
A general procedure. Individual child estimators can implement different methods.
If the method name is different from "bootstrap", this function calls the
implementation of the child estimator.
:param self: object instance of class Estimator
:param estimate_value: obtained estimate's value
:param method: Method for checking statistical significance
:returns: p-value from the significance test
"""
if method is None:
if self._significance_test:
method = self._significance_test # this is either True or methodname
else:
method = "default"
signif_dict = None
if method == "default" or method is True: # user has not provided any method
try:
signif_dict = self._test_significance(estimate_value, method, **kwargs)
except NotImplementedError:
signif_dict = self._test_significance_with_bootstrap(estimate_value, **kwargs)
else:
if method == "bootstrap":
signif_dict = self._test_significance_with_bootstrap(estimate_value, **kwargs)
else:
signif_dict = self._test_significance(estimate_value, method, **kwargs)
return signif_dict
def evaluate_effect_strength(self, estimate):
fraction_effect_explained = self._evaluate_effect_strength(estimate, method="fraction-effect")
# Need to test r-squared before supporting
# effect_r_squared = self._evaluate_effect_strength(estimate, method="r-squared")
strength_dict = {
"fraction-effect": fraction_effect_explained
# 'r-squared': effect_r_squared
}
return strength_dict
def _evaluate_effect_strength(self, estimate, method="fraction-effect"):
supported_methods = ["fraction-effect"]
if method not in supported_methods:
raise NotImplementedError("This method is not supported for evaluating effect strength")
if method == "fraction-effect":
naive_obs_estimate = self.estimate_effect_naive()
self.logger.debug("Estimate value: %s, Naive observed estimate: %s", estimate.value, naive_obs_estimate.value)
fraction_effect_explained = estimate.value / naive_obs_estimate.value
return fraction_effect_explained
# elif method == "r-squared":
# outcome_mean = np.mean(self._outcome)
# total_variance = np.sum(np.square(self._outcome - outcome_mean))
# Assuming a linear model with one variable: the treatment
# Currently only works for continuous y
# causal_model = outcome_mean + estimate.value*self._treatment
# squared_residual = np.sum(np.square(self._outcome - causal_model))
# r_squared = 1 - (squared_residual/total_variance)
# return r_squared
else:
return None
def update_input(self, treatment_value, control_value, target_units):
self._control_value = control_value
self._treatment_value = treatment_value
self._target_units = target_units
@staticmethod
def is_bootstrap_parameter_changed(bootstrap_estimates_params, given_params):
"""Check whether parameters of the bootstrap have changed.
This is an efficiency method that checks if fresh resampling of the bootstrap samples is required.
Returns True if parameters have changed and resampling should be done again.
:param bootstrap_estimates_params: A dictionary of parameters for the current bootstrap samples
:param given_params: A dictionary of parameters passed by the user
:returns: A binary flag denoting whether the parameters are different.
"""
is_any_parameter_changed = False
for prm, val in bootstrap_estimates_params.items():
given_val = given_params.get(prm, None)
if given_val is not None and given_val != val:
is_any_parameter_changed = True
break
return is_any_parameter_changed
def target_units_tostr(self):
s = ""
if type(self._target_units) is str:
s += self._target_units
elif callable(self._target_units):
s += "Data subset defined by a function"
elif isinstance(self._target_units, pd.DataFrame):
s += "Data subset provided as a data frame"
return s
def signif_results_tostr(self, signif_results):
s = ""
pval = signif_results["p_value"]
if type(pval) is tuple:
s += "[{0}, {1}]".format(pval[0], pval[1])
else:
s += "{0}".format(pval)
return s
def estimate_effect(
treatment: Union[str, List[str]],
outcome: Union[str, List[str]],
identified_estimand: IdentifiedEstimand,
identifier_name: str,
method: CausalEstimator,
control_value: int = 0,
treatment_value: int = 1,
test_significance: Optional[bool] = None,
evaluate_effect_strength: bool = False,
confidence_intervals: bool = False,
target_units: str = "ate",
effect_modifiers: List[str] = [],
fit_estimator: bool = True,
method_params: Optional[Dict] = None,
):
"""Estimate the identified causal effect.
The estimation method is specified via an estimator instance (see the ``method`` parameter). Estimator names follow the convention of identification method followed by the specific estimation method: "[backdoor/iv].estimation_method_name". The following methods are supported.
* Propensity Score Matching: "backdoor.propensity_score_matching"
* Propensity Score Stratification: "backdoor.propensity_score_stratification"
* Propensity Score-based Inverse Weighting: "backdoor.propensity_score_weighting"
* Linear Regression: "backdoor.linear_regression"
* Generalized Linear Models (e.g., logistic regression): "backdoor.generalized_linear_model"
* Instrumental Variables: "iv.instrumental_variable"
* Regression Discontinuity: "iv.regression_discontinuity"
In addition, you can directly call any of the EconML estimation methods. The convention is "backdoor.econml.path-to-estimator-class". For example, for the double machine learning estimator ("DML" class) that is located inside "dml" module of EconML, you can use the method name, "backdoor.econml.dml.DML". CausalML estimators can also be called. See `this demo notebook <https://py-why.github.io/dowhy/example_notebooks/dowhy-conditional-treatment-effects.html>`_.
:param treatment: Name of the treatment
:param outcome: Name of the outcome
:param identified_estimand: a probability expression
that represents the effect to be estimated. Output of
CausalModel.identify_effect method
:param identifier_name: name of the identification method (e.g., "backdoor") whose estimand should be estimated.
:param method: an instance of a CausalEstimator subclass that implements the estimation method to be used.
:param control_value: Value of the treatment in the control group, for effect estimation. If treatment is multi-variate, this can be a list.
:param treatment_value: Value of the treatment in the treated group, for effect estimation. If treatment is multi-variate, this can be a list.
:param test_significance: Binary flag on whether to additionally do a statistical significance test for the estimate.
:param evaluate_effect_strength: (Experimental) Binary flag on whether to estimate the relative strength of the treatment's effect. This measure can be used to compare different treatments for the same outcome (by running this method with different treatments sequentially).
:param confidence_intervals: (Experimental) Binary flag indicating whether confidence intervals should be computed.
:param target_units: (Experimental) The units for which the treatment effect should be estimated. This can be of three types. (1) a string for common specifications of target units (namely, "ate", "att" and "atc"), (2) a lambda function that can be used as an index for the data (pandas DataFrame), or (3) a new DataFrame that contains values of the effect_modifiers and effect will be estimated only for this new data.
:param effect_modifiers: Names of effect modifier variables can be (optionally) specified here too, since they do not affect identification. If None, the effect_modifiers from the CausalModel are used.
:param fit_estimator: Boolean flag on whether to fit the estimator.
Setting it to False is useful to estimate the effect on new data using a previously fitted estimator.
:param method_params: Dictionary containing any method-specific parameters. These are passed directly to the estimating method. See the docs for each estimation method for allowed method-specific params.
:returns: An instance of the CausalEstimate class, containing the causal effect estimate
and other method-dependent information
"""
treatment = parse_state(treatment)
outcome = parse_state(outcome)
causal_estimator_class = method.__class__
identified_estimand.set_identifier_method(identifier_name)
if identified_estimand.no_directed_path:
logger.warning("No directed path from {0} to {1}.".format(treatment, outcome))
return CausalEstimate(
0, identified_estimand, None, control_value=control_value, treatment_value=treatment_value
)
# Check if estimator's target estimand is identified
elif identified_estimand.estimands[identifier_name] is None:
logger.error("No valid identified estimand available.")
return CausalEstimate(None, None, None, control_value=control_value, treatment_value=treatment_value)
method.update_input(treatment_value, control_value, target_units)
estimate = method.estimate_effect()
# Store parameters inside estimate object for refutation methods
# TODO: This add_params needs to move to the estimator class
# inside estimate_effect and estimate_conditional_effect
estimate.add_params(
estimand_type=identified_estimand.estimand_type,
estimator_class=causal_estimator_class,
test_significance=test_significance,
evaluate_effect_strength=evaluate_effect_strength,
confidence_intervals=confidence_intervals,
target_units=target_units,
effect_modifiers=effect_modifiers,
method_params=method_params,
)
return estimate
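# Hedged usage sketch for the functional estimate_effect() above. The
# identification step and the propensity-score estimator import path are
# assumed to behave as elsewhere in this version of the package; treat this
# as illustrative, not canonical.
#
#   from dowhy import CausalModel
#   from dowhy.causal_estimators.propensity_score_matching_estimator import (
#       PropensityScoreMatchingEstimator,
#   )
#
#   model = CausalModel(data=df, treatment="v0", outcome="y", graph=gml_graph)
#   estimand = model.identify_effect()
#   estimator = PropensityScoreMatchingEstimator(
#       df, estimand, estimand.treatment_variable, estimand.outcome_variable
#   )
#   estimate = estimate_effect(
#       treatment="v0", outcome="y", identified_estimand=estimand,
#       identifier_name="backdoor", method=estimator,
#   )
#   print(estimate.value)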
class CausalEstimate:
"""Class for the estimate object that every causal estimator returns"""
def __init__(
self,
estimate,
target_estimand,
realized_estimand_expr,
control_value,
treatment_value,
conditional_estimates=None,
**kwargs,
):
self.value = estimate
self.target_estimand = target_estimand
self.realized_estimand_expr = realized_estimand_expr
self.control_value = control_value
self.treatment_value = treatment_value
self.conditional_estimates = conditional_estimates
self.params = kwargs
if self.params is not None:
for key, value in self.params.items():
setattr(self, key, value)
self.effect_strength = None
def add_estimator(self, estimator_instance):
self.estimator = estimator_instance
def add_effect_strength(self, strength_dict):
self.effect_strength = strength_dict
def add_params(self, **kwargs):
self.params.update(kwargs)
def get_confidence_intervals(self, confidence_level=None, method=None, **kwargs):
"""Get confidence intervals of the obtained estimate.
By default, this is done with the help of bootstrapped confidence intervals
but can be overridden if the specific estimator implements other methods of estimating confidence intervals.
If the method provided is not bootstrap, this function calls the implementation of the specific estimator.
:param method: Method for estimating confidence intervals.
:param confidence_level: The confidence level of the confidence intervals of the estimate.
:param kwargs: Other optional args to be passed to the CI method.
:returns: The obtained confidence interval.
"""
confidence_intervals = self.estimator.estimate_confidence_intervals(
estimate_value=self.value, confidence_level=confidence_level, method=method, **kwargs
)
return confidence_intervals
def get_standard_error(self, method=None, **kwargs):
"""Get standard error of the obtained estimate.
By default, this is done with the help of bootstrapped standard errors
but can be overridden if the specific estimator implements other methods of estimating standard error.
If the method provided is not bootstrap, this function calls the implementation of the specific estimator.
:param method: Method for computing the standard error.
:param kwargs: Other optional parameters to be passed to the estimating method.
:returns: Standard error of the causal estimate.
"""
std_error = self.estimator.estimate_std_error(method=method, **kwargs)
return std_error
def test_stat_significance(self, method=None, **kwargs):
"""Test statistical significance of the estimate obtained.
By default, uses resampling to create a non-parametric significance test.
Individual child estimators can implement different methods.
If the method name is different from "bootstrap", this function calls the
implementation of the child estimator.
:param method: Method for checking statistical significance
:param kwargs: Other optional parameters to be passed to the estimating method.
:returns: p-value from the significance test
"""
signif_results = self.estimator.test_significance(self.value, method=method, **kwargs)
return {"p_value": signif_results["p_value"]}
def estimate_conditional_effects(
self, effect_modifiers=None, num_quantiles=CausalEstimator.NUM_QUANTILES_TO_DISCRETIZE_CONT_COLS
):
"""Estimate treatment effect conditioned on given variables.
If a numeric effect modifier is provided, it is discretized into quantile bins. If you would like a custom discretization, you can do so yourself: create a new column containing the discretized effect modifier and then include that column's name in the effect_modifier_names argument.
:param effect_modifiers: Names of effect modifier variables over which the conditional effects will be estimated. If not provided, defaults to the effect modifiers specified during creation of the CausalEstimator object.
:param num_quantiles: The number of quantiles into which a numeric effect modifier variable is discretized. Does not affect any categorical effect modifiers.
:returns: A (multi-index) dataframe that provides separate effects for each value of the (discretized) effect modifiers.
"""
return self.estimator._estimate_conditional_effects(
self.estimator._estimate_effect_fn, effect_modifiers, num_quantiles
)
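# Example (sketch, assuming "X0" was passed as an effect modifier when the
# estimator was created):
#   cond = estimate.estimate_conditional_effects(effect_modifiers=["X0"],
#                                                num_quantiles=5)
# returns a (multi-index) dataframe with one effect per quantile bin of X0.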
def interpret(self, method_name=None, **kwargs):
"""Interpret the causal estimate.
:param method_name: Method used (string) or a list of methods. If None, then the default for the specific estimator is used.
:param kwargs: Optional parameters that are directly passed to the interpreter method.
:returns: None
"""
if method_name is None:
method_name = self.estimator.interpret_method
method_name_arr = parse_state(method_name)
for method in method_name_arr:
interpreter = interpreters.get_class_object(method)
interpreter(self, **kwargs).interpret()
def __str__(self):
s = "*** Causal Estimate ***\n"
# No estimand was identified (identification failed)
if self.target_estimand is None:
return "Estimation failed! No relevant identified estimand available for this estimation method."
s += "\n## Identified estimand\n{0}".format(self.target_estimand.__str__(only_target_estimand=True))
s += "\n## Realized estimand\n{0}".format(self.realized_estimand_expr)
if hasattr(self, "estimator"):
s += "\nTarget units: {0}\n".format(self.estimator.target_units_tostr())
s += "\n## Estimate\n"
s += "Mean value: {0}\n".format(self.value)
s += ""
if hasattr(self, "cate_estimates"):
s += "Effect estimates: {0}\n".format(self.cate_estimates)
if hasattr(self, "estimator"):
if self.estimator._significance_test:
s += "p-value: {0}\n".format(self.estimator.signif_results_tostr(self.test_stat_significance()))
if self.estimator._confidence_intervals:
s += "{0}% confidence interval: {1}\n".format(
100 * self.estimator.confidence_level, self.get_confidence_intervals()
)
if self.conditional_estimates is not None:
s += "### Conditional Estimates\n"
s += str(self.conditional_estimates)
if self.effect_strength is not None:
s += "\n## Effect Strength\n"
s += "Change in outcome attributable to treatment: {}\n".format(self.effect_strength["fraction-effect"])
# s += "Variance in outcome explained by treatment: {}\n".format(self.effect_strength["r-squared"])
return s
class RealizedEstimand(object):
def __init__(self, identified_estimand, estimator_name):
self.treatment_variable = identified_estimand.treatment_variable
self.outcome_variable = identified_estimand.outcome_variable
self.backdoor_variables = identified_estimand.get_backdoor_variables()
self.instrumental_variables = identified_estimand.instrumental_variables
self.estimand_type = identified_estimand.estimand_type
self.estimand_expression = None
self.assumptions = None
self.estimator_name = estimator_name
def update_assumptions(self, estimator_assumptions):
self.assumptions = estimator_assumptions
def update_estimand_expression(self, estimand_expression):
self.estimand_expression = estimand_expression
def __str__(self):
s = "Realized estimand: {0}\n".format(self.estimator_name)
s += "Realized estimand type: {0}\n".format(self.estimand_type)
s += "Estimand expression:\n{0}\n".format(sp.pretty(self.estimand_expression))
j = 1
for ass_name, ass_str in self.assumptions.items():
s += "Estimand assumption {0}, {1}: {2}\n".format(j, ass_name, ass_str)
j += 1
return s
| import copy
import logging
from collections import namedtuple
from typing import Dict, List, Optional, Union
import numpy as np
import pandas as pd
import sympy as sp
from sklearn.utils import resample
import dowhy.interpreters as interpreters
from dowhy.causal_identifier.identified_estimand import IdentifiedEstimand
from dowhy.utils.api import parse_state
logger = logging.getLogger(__name__)
class CausalEstimator:
"""Base class for an estimator of causal effect.
Subclasses implement different estimation methods. All estimation methods are in the package "dowhy.causal_estimators"
"""
# The default number of simulations for statistical testing
DEFAULT_NUMBER_OF_SIMULATIONS_STAT_TEST = 1000
# The default number of simulations to obtain confidence intervals
DEFAULT_NUMBER_OF_SIMULATIONS_CI = 100
# The portion of the total size that should be taken each time to find the confidence intervals
# 1 is the recommended value
# https://ocw.mit.edu/courses/mathematics/18-05-introduction-to-probability-and-statistics-spring-2014/readings/MIT18_05S14_Reading24.pdf
# https://projecteuclid.org/download/pdf_1/euclid.ss/1032280214
DEFAULT_SAMPLE_SIZE_FRACTION = 1
# The default Confidence Level
DEFAULT_CONFIDENCE_LEVEL = 0.95
# Number of quantiles to discretize continuous columns, for applying groupby
NUM_QUANTILES_TO_DISCRETIZE_CONT_COLS = 5
# Prefix to add to temporary categorical variables created after discretization
TEMP_CAT_COLUMN_PREFIX = "__categorical__"
DEFAULT_NOTIMPLEMENTEDERROR_MSG = "not yet implemented for {0}. If you would like this to be implemented in the next version, please raise an issue at https://github.com/microsoft/dowhy/issues"
BootstrapEstimates = namedtuple("BootstrapEstimates", ["estimates", "params"])
DEFAULT_INTERPRET_METHOD = ["textual_effect_interpreter"]
# std args to be removed from locals() before being passed to args_dict
_STD_INIT_ARGS = ("self", "__class__", "args", "kwargs")
def __init__(
self,
identified_estimand: IdentifiedEstimand,
test_significance: bool = False,
evaluate_effect_strength: bool = False,
confidence_intervals: bool = False,
num_null_simulations: int = DEFAULT_NUMBER_OF_SIMULATIONS_STAT_TEST,
num_simulations: int = DEFAULT_NUMBER_OF_SIMULATIONS_CI,
sample_size_fraction: int = DEFAULT_SAMPLE_SIZE_FRACTION,
confidence_level: float = DEFAULT_CONFIDENCE_LEVEL,
need_conditional_estimates: Union[bool, str] = "auto",
num_quantiles_to_discretize_cont_cols: int = NUM_QUANTILES_TO_DISCRETIZE_CONT_COLS,
**_,
):
"""Initializes an estimator with data and names of relevant variables.
This method is called from the constructors of its child classes.
:param identified_estimand: probability expression
representing the target identified estimand to estimate.
:param test_significance: Binary flag or a string indicating whether to test significance and by which method. All estimators support test_significance="bootstrap" that estimates a p-value for the obtained estimate using the bootstrap method. Individual estimators can override this to support custom testing methods. The bootstrap method supports an optional parameter, num_null_simulations. If False, no testing is done. If True, significance of the estimate is tested using the custom method if available, otherwise by bootstrap.
:param evaluate_effect_strength: (Experimental) whether to evaluate the strength of effect
:param confidence_intervals: Binary flag or a string indicating whether the confidence intervals should be computed and which method should be used. All methods support estimation of confidence intervals using the bootstrap method by using the parameter confidence_intervals="bootstrap". The bootstrap method takes in two arguments (num_simulations and sample_size_fraction) that can be optionally specified in the params dictionary. Estimators may also override this to implement their own confidence interval method. If this parameter is False, no confidence intervals are computed. If True, confidence intervals are computed by the estimator's specific method if available, otherwise through bootstrap.
:param num_null_simulations: The number of simulations for testing the
statistical significance of the estimator
:param num_simulations: The number of simulations for finding the
confidence interval (and/or standard error) for an estimate
:param sample_size_fraction: The size of the sample for the bootstrap
estimator
:param confidence_level: The confidence level of the confidence
interval estimate
:param need_conditional_estimates: Boolean flag indicating whether
conditional estimates should be computed. Defaults to True if
there are effect modifiers in the graph
:param num_quantiles_to_discretize_cont_cols: The number of quantiles
into which a numeric effect modifier is split, to enable
estimation of conditional treatment effect over it.
:param kwargs: (optional) Additional estimator-specific parameters
:returns: an instance of the estimator class.
"""
self._target_estimand = identified_estimand
self._significance_test = test_significance
self._effect_strength_eval = evaluate_effect_strength
self._confidence_intervals = confidence_intervals
# Setting the default interpret method
self.interpret_method = CausalEstimator.DEFAULT_INTERPRET_METHOD
self.logger = logging.getLogger(__name__)
# Check if some parameters were set, otherwise set to default values
self.num_null_simulations = num_null_simulations
self.num_simulations = num_simulations
self.sample_size_fraction = sample_size_fraction
self.confidence_level = confidence_level
self.num_quantiles_to_discretize_cont_cols = num_quantiles_to_discretize_cont_cols
# Estimate conditional estimates by default
self.need_conditional_estimates = need_conditional_estimates
self._bootstrap_estimates = None
self._bootstrap_null_estimates = None
def _set_data(self, data: pd.DataFrame, treatment_name: List[str], outcome_name: List[str]):
"""Sets the data for the estimator
:param data: data frame containing the data
:param treatment_name: name of the treatment variable
:param outcome_name: name of the outcome variable
"""
self._data = data
self._treatment_name = treatment_name
self._outcome_name = outcome_name[0]
self._treatment = self._data[self._treatment_name]
self._outcome = self._data[self._outcome_name]
def _set_effect_modifiers(self, effect_modifier_names: Optional[List[str]] = None):
"""Sets the effect modifiers for the estimator
Modifies need_conditional_estimates according to the value of the effect modifiers
:param effect_modifier_names: Names of variables on which to compute separate
effects, or return a heterogeneous effect function. Not all
methods support this currently.
"""
self._effect_modifiers = effect_modifier_names
if effect_modifier_names is not None:
self._effect_modifier_names = [cname for cname in effect_modifier_names if cname in self._data.columns]
if len(self._effect_modifier_names) > 0:
self._effect_modifiers = self._data[self._effect_modifier_names]
self._effect_modifiers = pd.get_dummies(self._effect_modifiers, drop_first=True)
self.logger.debug("Effect modifiers: " + ",".join(self._effect_modifier_names))
else:
self._effect_modifier_names = []
else:
self._effect_modifier_names = []
self.need_conditional_estimates = (
self.need_conditional_estimates
if self.need_conditional_estimates != "auto"
else len(self._effect_modifier_names) > 0
)
def _set_identified_estimand(self, new_identified_estimand):
"""Method used internally to change the target estimand (required by some refuters)
:param new_identified_estimand: The new target_estimand to use
"""
self._target_estimand = new_identified_estimand
def get_new_estimator_object(
self,
identified_estimand,
test_significance=False,
evaluate_effect_strength=False,
confidence_intervals=None,
):
"""Create a new estimator of the same type as the one passed in the estimate argument.
Creates a new object with the identified_estimand
:param identified_estimand: IdentifiedEstimand
An instance of the identified estimand class that provides the information with
respect to which causal pathways are employed when the treatment effects the outcome
:returns: A new instance of the same estimator class that had generated the given estimate.
"""
new_estimator = copy.deepcopy(self)
new_estimator._target_estimand = identified_estimand
new_estimator._test_significance = test_significance
new_estimator._evaluate_effect_strength = evaluate_effect_strength
new_estimator._confidence_intervals = (
self._confidence_intervals if confidence_intervals is None else confidence_intervals
)
return new_estimator
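# Hedged sketch of the fit-then-estimate flow introduced by this refactor,
# mirroring the calls made in _generate_bootstrap_estimates below. The
# LinearRegressionEstimator import path is assumed to match this version of
# the package, and "X0" is a hypothetical effect-modifier column.
#
#   from dowhy.causal_estimators.linear_regression_estimator import LinearRegressionEstimator
#
#   estimator = LinearRegressionEstimator(identified_estimand=estimand)
#   estimator.fit(df, estimand.treatment_variable, estimand.outcome_variable,
#                 effect_modifier_names=["X0"])
#   estimate = estimator.estimate_effect(treatment_value=1, control_value=0,
#                                        target_units="ate")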
def estimate_effect_naive(self):
# TODO Only works for binary treatment
df_withtreatment = self._data.loc[self._data[self._treatment_name] == 1]
df_notreatment = self._data.loc[self._data[self._treatment_name] == 0]
est = np.mean(df_withtreatment[self._outcome_name]) - np.mean(df_notreatment[self._outcome_name])
return CausalEstimate(est, None, None, control_value=0, treatment_value=1)
def _estimate_effect_fn(self, data_df):
"""Function used in conditional effect estimation. This function is to be overridden by each child estimator.
The overridden function should take in a dataframe as input and return the estimate for that data.
"""
raise NotImplementedError(
("Conditional treatment effects are " + CausalEstimator.DEFAULT_NOTIMPLEMENTEDERROR_MSG).format(
self.__class__
)
)
def _estimate_conditional_effects(self, estimate_effect_fn, effect_modifier_names=None, num_quantiles=None):
"""Estimate conditional treatment effects. Common method for all estimators that utilizes a specific estimate_effect_fn implemented by each child estimator.
If a numeric effect modifier is provided, it is discretized into quantile bins. If you would like a custom discretization, you can do so yourself: create a new column containing the discretized effect modifier and then include that column's name in the effect_modifier_names argument.
:param estimate_effect_fn: Function that has a single parameter (a data frame) and returns the treatment effect estimate on that data.
:param effect_modifier_names: Names of effect modifier variables over which the conditional effects will be estimated. If not provided, defaults to the effect modifiers specified during creation of the CausalEstimator object.
:param num_quantiles: The number of quantiles into which a numeric effect modifier variable is discretized. Does not affect any categorical effect modifiers.
:returns: A (multi-index) dataframe that provides separate effects for each value of the (discretized) effect modifiers.
"""
# Defaulting to class default values if parameters are not provided
if effect_modifier_names is None:
effect_modifier_names = self._effect_modifier_names
if num_quantiles is None:
num_quantiles = self.num_quantiles_to_discretize_cont_cols
# Checking that there is at least one effect modifier
if not effect_modifier_names:
raise ValueError("At least one effect modifier should be specified to compute conditional effects.")
# Making sure that effect_modifier_names is a list
effect_modifier_names = parse_state(effect_modifier_names)
if not all(em in self._effect_modifier_names for em in effect_modifier_names):
self.logger.warning(
"At least one of the provided effect modifiers was not included while fitting the estimator. You may get incorrect results. To resolve, fit the estimator again by providing the updated effect modifiers in estimate_effect()."
)
# Making a copy since we are going to be changing effect modifier names
effect_modifier_names = effect_modifier_names.copy()
prefix = CausalEstimator.TEMP_CAT_COLUMN_PREFIX
# For every numeric effect modifier, adding a temp categorical column
for i in range(len(effect_modifier_names)):
em = effect_modifier_names[i]
if pd.api.types.is_numeric_dtype(self._data[em].dtypes):
self._data[prefix + str(em)] = pd.qcut(self._data[em], num_quantiles, duplicates="drop")
effect_modifier_names[i] = prefix + str(em)
# Grouping by effect modifiers and computing effect separately
by_effect_mods = self._data.groupby(effect_modifier_names)
cond_est_fn = lambda x: self._do(self._treatment_value, x) - self._do(self._control_value, x)
conditional_estimates = by_effect_mods.apply(estimate_effect_fn)
# Deleting the temporary categorical columns
for em in effect_modifier_names:
if em.startswith(prefix):
self._data.pop(em)
return conditional_estimates
def _do(self, x, data_df=None):
raise NotImplementedError(
("Do-operator is " + CausalEstimator.DEFAULT_NOTIMPLEMENTEDERROR_MSG).format(self.__class__)
)
def do(self, x, data_df=None):
"""Method that implements the do-operator.
Given a value x for the treatment, returns the expected value of the outcome when the treatment is intervened to a value x.
:param x: Value of the treatment
:param data_df: Data on which the do-operator is to be applied.
:returns: Value of the outcome when treatment is intervened/set to x.
"""
est = self._do(x, data_df)
return est
def construct_symbolic_estimator(self, estimand):
raise NotImplementedError(("Symbolic estimator string is ").format(self.__class__))
def _generate_bootstrap_estimates(self, num_bootstrap_simulations, sample_size_fraction):
"""Helper function to generate causal estimates over bootstrapped samples.
:param num_bootstrap_simulations: Number of simulations for the bootstrap method.
:param sample_size_fraction: Fraction of the dataset to be resampled.
:returns: A collections.namedtuple containing a list of bootstrapped estimates and a dictionary containing parameters used for the bootstrap.
"""
# The array that stores the results of all estimations
simulation_results = np.zeros(num_bootstrap_simulations)
# Find the sample size as the given fraction of the population size
sample_size = int(sample_size_fraction * len(self._data))
if sample_size > len(self._data):
self.logger.warning("WARN: The sample size is greater than the data being sampled")
self.logger.info("INFO: The sample size: {}".format(sample_size))
self.logger.info("INFO: The number of simulations: {}".format(num_bootstrap_simulations))
# Perform the set number of simulations
for index in range(num_bootstrap_simulations):
new_data = resample(self._data, n_samples=sample_size)
new_estimator = self.get_new_estimator_object(
self._target_estimand,
test_significance=False,
evaluate_effect_strength=False,
confidence_intervals=False,
)
new_estimator.fit(
new_data,
self._target_estimand.treatment_variable,
self._target_estimand.outcome_variable,
effect_modifier_names=self._effect_modifier_names,
)
new_effect = new_estimator.estimate_effect(
treatment_value=self._treatment_value,
control_value=self._control_value,
target_units=self._target_units,
)
simulation_results[index] = new_effect.value
estimates = CausalEstimator.BootstrapEstimates(
simulation_results,
{"num_simulations": num_bootstrap_simulations, "sample_size_fraction": sample_size_fraction},
)
return estimates
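    # Note on the resampling step in the loop above: sklearn.utils.resample
    # draws n_samples rows with replacement, which is what the bootstrap
    # requires. A roughly equivalent plain-pandas sketch (an assumption, not
    # the library call used here) would be:
    #   new_data = self._data.sample(n=sample_size, replace=True)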
def _estimate_confidence_intervals_with_bootstrap(
self, estimate_value, confidence_level=None, num_simulations=None, sample_size_fraction=None
):
"""
Method to compute confidence interval using bootstrapped sampling.
:param estimate_value: obtained estimate's value
:param confidence_level: The level for which to compute CI (e.g., 95% confidence level translates to confidence_level=0.95)
:param num_simulations: The number of simulations to be performed to get the bootstrap confidence intervals.
:param sample_size_fraction: The fraction of the dataset to be resampled.
:returns: confidence interval at the specified level.
For more details on bootstrap or resampling statistics, refer to the following links:
https://ocw.mit.edu/courses/mathematics/18-05-introduction-to-probability-and-statistics-spring-2014/readings/MIT18_05S14_Reading24.pdf
https://projecteuclid.org/download/pdf_1/euclid.ss/1032280214
"""
# Using class default parameters if not specified
if num_simulations is None:
num_simulations = self.num_simulations
if sample_size_fraction is None:
sample_size_fraction = self.sample_size_fraction
# Checking if bootstrap_estimates are already computed
if self._bootstrap_estimates is None:
self._bootstrap_estimates = self._generate_bootstrap_estimates(num_simulations, sample_size_fraction)
elif CausalEstimator.is_bootstrap_parameter_changed(self._bootstrap_estimates.params, locals()):
            # Check if any parameter is changed from the previous std error estimate
self._bootstrap_estimates = self._generate_bootstrap_estimates(num_simulations, sample_size_fraction)
# Now use the data obtained from the simulations to get the value of the confidence estimates
bootstrap_estimates = self._bootstrap_estimates.estimates
# Get the variations of each bootstrap estimate and sort
bootstrap_variations = [bootstrap_estimate - estimate_value for bootstrap_estimate in bootstrap_estimates]
sorted_bootstrap_variations = np.sort(bootstrap_variations)
# Now we take the (1- p)th and the (p)th variations, where p is the chosen confidence level
upper_bound_index = int((1 - confidence_level) * len(sorted_bootstrap_variations))
lower_bound_index = int(confidence_level * len(sorted_bootstrap_variations))
# Get the lower and upper bounds by subtracting the variations from the estimate
lower_bound = estimate_value - sorted_bootstrap_variations[lower_bound_index]
upper_bound = estimate_value - sorted_bootstrap_variations[upper_bound_index]
return lower_bound, upper_bound
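    # Worked example of the index arithmetic above (hypothetical numbers):
    # with estimate_value = 10, confidence_level = 0.95 and 100 sorted
    # bootstrap variations, upper_bound_index = int(0.05 * 100) = 5 and
    # lower_bound_index = int(0.95 * 100) = 95. If the variations at those
    # positions are -1.8 and +2.1, the returned interval is
    # (10 - 2.1, 10 - (-1.8)) = (7.9, 11.8), i.e. the basic/pivot bootstrap
    # interval.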
def _estimate_confidence_intervals(self, confidence_level=None, method=None, **kwargs):
"""
This method is to be overriden by the child classes, so that they
can run a confidence interval estimation method suited to the specific
causal estimator.
"""
raise NotImplementedError(
(
"This method for estimating confidence intervals is "
+ CausalEstimator.DEFAULT_NOTIMPLEMENTEDERROR_MSG
+ " Meanwhile, you can try the bootstrap method (method='bootstrap') to estimate confidence intervals."
).format(self.__class__)
)
def estimate_confidence_intervals(self, estimate_value, confidence_level=None, method=None, **kwargs):
"""Find the confidence intervals corresponding to any estimator
By default, this is done with the help of bootstrapped confidence intervals
but can be overridden if the specific estimator implements other methods of estimating confidence intervals.
If the method provided is not bootstrap, this function calls the implementation of the specific estimator.
:param estimate_value: obtained estimate's value
:param method: Method for estimating confidence intervals.
:param confidence_level: The confidence level of the confidence intervals of the estimate.
:param kwargs: Other optional args to be passed to the CI method.
:returns: The obtained confidence interval.
"""
if method is None:
if self._confidence_intervals:
method = self._confidence_intervals # this is either True or methodname
else:
method = "default"
confidence_intervals = None
if confidence_level is None:
confidence_level = self.confidence_level
if method == "default" or method is True: # user has not provided any method
try:
confidence_intervals = self._estimate_confidence_intervals(confidence_level, method=method, **kwargs)
except NotImplementedError:
confidence_intervals = self._estimate_confidence_intervals_with_bootstrap(
estimate_value, confidence_level, **kwargs
)
else:
if method == "bootstrap":
confidence_intervals = self._estimate_confidence_intervals_with_bootstrap(
estimate_value, confidence_level, **kwargs
)
else:
confidence_intervals = self._estimate_confidence_intervals(confidence_level, method=method, **kwargs)
return confidence_intervals
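    # Usage sketch (hypothetical values): est.estimate_confidence_intervals(
    # estimate.value, confidence_level=0.95, method="bootstrap",
    # num_simulations=200) forces the bootstrap path; with method=None the
    # estimator-specific implementation is tried first and bootstrap is used
    # as the fallback when it raises NotImplementedError.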
def _estimate_std_error_with_bootstrap(self, num_simulations=None, sample_size_fraction=None):
"""Compute standard error using the bootstrap method. Standard error
and confidence intervals use the same parameter num_simulations for
the number of bootstrap simulations.
:param num_simulations: Number of bootstrapped samples.
:param sample_size_fraction: Fraction of data to be resampled.
:returns: Standard error of the obtained estimate.
"""
# Use existing params, if new user defined params are not present
if num_simulations is None:
num_simulations = self.num_simulations
if sample_size_fraction is None:
sample_size_fraction = self.sample_size_fraction
# Checking if bootstrap_estimates are already computed
if self._bootstrap_estimates is None:
self._bootstrap_estimates = self._generate_bootstrap_estimates(num_simulations, sample_size_fraction)
elif CausalEstimator.is_bootstrap_parameter_changed(self._bootstrap_estimates.params, locals()):
# Check if any parameter is changed from the previous std error estimate
self._bootstrap_estimates = self._generate_bootstrap_estimates(num_simulations, sample_size_fraction)
std_error = np.std(self._bootstrap_estimates.estimates)
return std_error
def _estimate_std_error(self, method=None, **kwargs):
"""
This method is to be overriden by the child classes, so that they
can run a standard error estimation method suited to the specific
causal estimator.
"""
raise NotImplementedError(
(
"This method for estimating standard errors is "
+ CausalEstimator.DEFAULT_NOTIMPLEMENTEDERROR_MSG
+ " Meanwhile, you can try the bootstrap method (method='bootstrap') to estimate standard errors."
).format(self.__class__)
)
def estimate_std_error(self, method=None, **kwargs):
"""Compute standard error of an obtained causal estimate.
:param method: Method for computing the standard error.
:param kwargs: Other optional parameters to be passed to the estimating method.
:returns: Standard error of the causal estimate.
"""
if method is None:
if self._confidence_intervals:
method = self._confidence_intervals
else:
method = "default"
std_error = None
if method == "default" or method is True: # user has not provided any method
try:
std_error = self._estimate_std_error(method, **kwargs)
except NotImplementedError:
std_error = self._estimate_std_error_with_bootstrap(**kwargs)
else:
if method == "bootstrap":
std_error = self._estimate_std_error_with_bootstrap(**kwargs)
else:
std_error = self._estimate_std_error(method, **kwargs)
return std_error
def _test_significance_with_bootstrap(self, estimate_value, num_null_simulations=None):
"""Test statistical significance of an estimate using the bootstrap method.
:param estimate_value: Obtained estimate's value
:param num_null_simulations: Number of simulations for the null hypothesis
:returns: p-value of the statistical significance test.
"""
# Use existing params, if new user defined params are not present
if num_null_simulations is None:
num_null_simulations = self.num_null_simulations
do_retest = self._bootstrap_null_estimates is None or CausalEstimator.is_bootstrap_parameter_changed(
self._bootstrap_null_estimates.params, locals()
)
if do_retest:
null_estimates = np.zeros(num_null_simulations)
for i in range(num_null_simulations):
new_outcome = np.random.permutation(self._outcome)
new_data = self._data.assign(dummy_outcome=new_outcome)
# self._outcome = self._data["dummy_outcome"]
new_estimator = self.get_new_estimator_object(
self._target_estimand,
test_significance=False,
evaluate_effect_strength=False,
confidence_intervals=False,
)
new_estimator.fit(
data=new_data,
treatment_name=self._target_estimand.treatment_variable,
outcome_name=("dummy_outcome",),
effect_modifier_names=self._effect_modifier_names,
)
new_effect = new_estimator.estimate_effect(
target_units=self._target_units,
)
null_estimates[i] = new_effect.value
self._bootstrap_null_estimates = CausalEstimator.BootstrapEstimates(
null_estimates, {"num_null_simulations": num_null_simulations, "sample_size_fraction": 1}
)
# Processing the null hypothesis estimates
sorted_null_estimates = np.sort(self._bootstrap_null_estimates.estimates)
self.logger.debug("Null estimates: {0}".format(sorted_null_estimates))
median_estimate = sorted_null_estimates[int(num_null_simulations / 2)]
# Doing a two-sided test
if estimate_value > median_estimate:
# Being conservative with the p-value reported
estimate_index = np.searchsorted(sorted_null_estimates, estimate_value, side="left")
p_value = 1 - (estimate_index / num_null_simulations)
if estimate_value <= median_estimate:
# Being conservative with the p-value reported
estimate_index = np.searchsorted(sorted_null_estimates, estimate_value, side="right")
p_value = estimate_index / num_null_simulations
        # If the p-value comes out exactly 0 or 1, its resolution is limited by the number of simulations, so report a range instead
if p_value == 0:
p_value = (0, 1 / len(sorted_null_estimates)) # a tuple determining the range.
elif p_value == 1:
p_value = (1 - 1 / len(sorted_null_estimates), 1)
signif_dict = {"p_value": p_value}
return signif_dict
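    # Worked example (hypothetical numbers): with num_null_simulations = 1000
    # and an observed estimate above the null median that is larger than 970
    # of the permuted-outcome estimates, searchsorted returns
    # estimate_index = 970 and p_value = 1 - 970/1000 = 0.03. If the observed
    # estimate exceeds every null estimate, the p-value is reported as the
    # range (0, 1/1000) rather than exactly zero.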
def _test_significance(self, estimate_value, method=None, **kwargs):
"""
This method is to be overriden by the child classes, so that they
can run a significance test suited to the specific
causal estimator.
"""
raise NotImplementedError(
(
"This method for testing statistical significance is "
+ CausalEstimator.DEFAULT_NOTIMPLEMENTEDERROR_MSG
+ " Meanwhile, you can try the bootstrap method (method='bootstrap') to test statistical significance."
).format(self.__class__)
)
def test_significance(self, estimate_value, method=None, **kwargs):
"""Test statistical significance of obtained estimate.
By default, uses resampling to create a non-parametric significance test.
A general procedure. Individual child estimators can implement different methods.
If the method name is different from "bootstrap", this function calls the
implementation of the child estimator.
:param self: object instance of class Estimator
:param estimate_value: obtained estimate's value
:param method: Method for checking statistical significance
:returns: p-value from the significance test
"""
if method is None:
if self._significance_test:
method = self._significance_test # this is either True or methodname
else:
method = "default"
signif_dict = None
if method == "default" or method is True: # user has not provided any method
try:
signif_dict = self._test_significance(estimate_value, method, **kwargs)
except NotImplementedError:
signif_dict = self._test_significance_with_bootstrap(estimate_value, **kwargs)
else:
if method == "bootstrap":
signif_dict = self._test_significance_with_bootstrap(estimate_value, **kwargs)
else:
signif_dict = self._test_significance(estimate_value, method, **kwargs)
return signif_dict
def evaluate_effect_strength(self, estimate):
fraction_effect_explained = self._evaluate_effect_strength(estimate, method="fraction-effect")
# Need to test r-squared before supporting
# effect_r_squared = self._evaluate_effect_strength(estimate, method="r-squared")
strength_dict = {
"fraction-effect": fraction_effect_explained
# 'r-squared': effect_r_squared
}
return strength_dict
def _evaluate_effect_strength(self, estimate, method="fraction-effect"):
supported_methods = ["fraction-effect"]
if method not in supported_methods:
raise NotImplementedError("This method is not supported for evaluating effect strength")
if method == "fraction-effect":
naive_obs_estimate = self.estimate_effect_naive()
self.logger.debug(estimate.value, naive_obs_estimate.value)
fraction_effect_explained = estimate.value / naive_obs_estimate.value
return fraction_effect_explained
# elif method == "r-squared":
# outcome_mean = np.mean(self._outcome)
# total_variance = np.sum(np.square(self._outcome - outcome_mean))
# Assuming a linear model with one variable: the treatment
# Currently only works for continuous y
# causal_model = outcome_mean + estimate.value*self._treatment
# squared_residual = np.sum(np.square(self._outcome - causal_model))
# r_squared = 1 - (squared_residual/total_variance)
# return r_squared
else:
return None
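    # Worked example (hypothetical numbers): if the causal estimate is 2.0 and
    # the naive observed difference in means is 4.0, fraction-effect = 0.5,
    # i.e. half of the observed association is attributable to the treatment.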
def update_input(self, treatment_value, control_value, target_units):
self._control_value = control_value
self._treatment_value = treatment_value
self._target_units = target_units
@staticmethod
def is_bootstrap_parameter_changed(bootstrap_estimates_params, given_params):
"""Check whether parameters of the bootstrap have changed.
This is an efficiency method that checks if fresh resampling of the bootstrap samples is required.
Returns True if parameters have changed and resampling should be done again.
:param bootstrap_estimates_params: A dictionary of parameters for the current bootstrap samples
:param given_params: A dictionary of parameters passed by the user
:returns: A binary flag denoting whether the parameters are different.
"""
is_any_parameter_changed = False
for prm, val in bootstrap_estimates_params.items():
given_val = given_params.get(prm, None)
if given_val is not None and given_val != val:
is_any_parameter_changed = True
break
return is_any_parameter_changed
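    # Example (hypothetical values): with stored params
    # {"num_simulations": 100, "sample_size_fraction": 1}, a new call passing
    # num_simulations=200 returns True and triggers fresh resampling, while
    # passing num_simulations=None (i.e. "use the stored default") does not
    # count as a change.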
def target_units_tostr(self):
s = ""
if type(self._target_units) is str:
s += self._target_units
elif callable(self._target_units):
s += "Data subset defined by a function"
elif isinstance(self._target_units, pd.DataFrame):
s += "Data subset provided as a data frame"
return s
def signif_results_tostr(self, signif_results):
s = ""
pval = signif_results["p_value"]
if type(pval) is tuple:
s += "[{0}, {1}]".format(pval[0], pval[1])
else:
s += "{0}".format(pval)
return s
def estimate_effect(
data: pd.DataFrame,
treatment: Union[str, List[str]],
outcome: Union[str, List[str]],
identifier_name: str,
estimator: CausalEstimator,
control_value: int = 0,
treatment_value: int = 1,
target_units: str = "ate",
effect_modifiers: List[str] = None,
fit_estimator: bool = True,
method_params: Optional[Dict] = None,
):
"""Estimate the identified causal effect.
    You can also directly call any of the EconML estimation methods. The convention is "backdoor.econml.path-to-estimator-class". For example, for the double machine learning estimator ("DML" class) that is located inside the "dml" module of EconML, you can use the method name, "backdoor.econml.dml.DML". CausalML estimators can also be called. See `this demo notebook <https://py-why.github.io/dowhy/example_notebooks/dowhy-conditional-treatment-effects.html>`_.
    :param data: data frame containing the observed data
    :param treatment: Name of the treatment (or a list of names, since some methods support multivariate treatments)
    :param outcome: Name of the outcome (or a list of names)
    :param identifier_name: Name of the identification method (e.g., "backdoor" or "iv") used to select the relevant estimand from the estimator's target estimand
:param estimator: Instance of a CausalEstimator to use
:param control_value: Value of the treatment in the control group, for effect estimation. If treatment is multi-variate, this can be a list.
:param treatment_value: Value of the treatment in the treated group, for effect estimation. If treatment is multi-variate, this can be a list.
:param target_units: (Experimental) The units for which the treatment effect should be estimated. This can be of three types. (1) a string for common specifications of target units (namely, "ate", "att" and "atc"), (2) a lambda function that can be used as an index for the data (pandas DataFrame), or (3) a new DataFrame that contains values of the effect_modifiers and effect will be estimated only for this new data.
    :param effect_modifiers: Names of effect modifier variables can be (optionally) specified here too, since they do not affect identification. If None, defaults to an empty list.
    :param fit_estimator: Boolean flag on whether to fit the estimator.
        Setting it to False is useful to estimate the effect on new data using a previously fitted estimator.
    :param method_params: Optional dictionary of estimation-method-specific parameters; a "fit_params" entry, if present, is forwarded to the estimator's fit method.
:returns: An instance of the CausalEstimate class, containing the causal effect estimate
and other method-dependent information
"""
if effect_modifiers is None:
effect_modifiers = []
treatment = parse_state(treatment)
outcome = parse_state(outcome)
causal_estimator_class = estimator.__class__
identified_estimand = estimator._target_estimand
identified_estimand.set_identifier_method(identifier_name)
if identified_estimand.no_directed_path:
logger.warning("No directed path from {0} to {1}.".format(treatment, outcome))
return CausalEstimate(
0, identified_estimand, None, control_value=control_value, treatment_value=treatment_value
)
# Check if estimator's target estimand is identified
elif identified_estimand.estimands[identifier_name] is None:
logger.error("No valid identified estimand available.")
return CausalEstimate(None, None, None, control_value=control_value, treatment_value=treatment_value)
if fit_estimator:
estimator.fit(
data=data,
treatment_name=treatment,
outcome_name=outcome,
effect_modifier_names=effect_modifiers,
**method_params["fit_params"] if "fit_params" in method_params else {},
)
estimate = estimator.estimate_effect(
treatment_value=treatment_value,
control_value=control_value,
target_units=target_units,
confidence_intervals=estimator._confidence_intervals,
)
# Store parameters inside estimate object for refutation methods
# TODO: This add_params needs to move to the estimator class
# inside estimate_effect and estimate_conditional_effect
estimate.add_params(
estimand_type=identified_estimand.estimand_type,
estimator_class=causal_estimator_class,
test_significance=estimator._significance_test,
evaluate_effect_strength=estimator._effect_strength_eval,
confidence_intervals=estimator._confidence_intervals,
target_units=target_units,
effect_modifiers=effect_modifiers,
)
if estimator._significance_test:
estimator.test_significance(estimate.value, method=estimator._significance_test)
if estimator._confidence_intervals:
estimator.estimate_confidence_intervals(
estimate.value, confidence_level=estimator.confidence_level, method=estimator._confidence_intervals
)
if estimator._effect_strength_eval:
effect_strength_dict = estimator.evaluate_effect_strength(estimate)
estimate.add_effect_strength(effect_strength_dict)
return estimate
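# A minimal end-to-end sketch of this functional API. The estimator class,
# import path, and column names below are illustrative assumptions, not
# verified against the installed package:
#
#   from dowhy.causal_estimators.propensity_score_matching_estimator import (
#       PropensityScoreMatchingEstimator,
#   )
#
#   estimator = PropensityScoreMatchingEstimator(identified_estimand)
#   estimate = estimate_effect(
#       data=df,
#       treatment="v0",
#       outcome="y",
#       identifier_name="backdoor",
#       estimator=estimator,
#       target_units="ate",
#   )
#   print(estimate.value)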
class CausalEstimate:
"""Class for the estimate object that every causal estimator returns"""
def __init__(
self,
estimate,
target_estimand,
realized_estimand_expr,
control_value,
treatment_value,
conditional_estimates=None,
**kwargs,
):
self.value = estimate
self.target_estimand = target_estimand
self.realized_estimand_expr = realized_estimand_expr
self.control_value = control_value
self.treatment_value = treatment_value
self.conditional_estimates = conditional_estimates
self.params = kwargs
if self.params is not None:
for key, value in self.params.items():
setattr(self, key, value)
self.effect_strength = None
def add_estimator(self, estimator_instance):
self.estimator = estimator_instance
def add_effect_strength(self, strength_dict):
self.effect_strength = strength_dict
def add_params(self, **kwargs):
self.params.update(kwargs)
def get_confidence_intervals(self, confidence_level=None, method=None, **kwargs):
"""Get confidence intervals of the obtained estimate.
By default, this is done with the help of bootstrapped confidence intervals
but can be overridden if the specific estimator implements other methods of estimating confidence intervals.
If the method provided is not bootstrap, this function calls the implementation of the specific estimator.
:param method: Method for estimating confidence intervals.
:param confidence_level: The confidence level of the confidence intervals of the estimate.
:param kwargs: Other optional args to be passed to the CI method.
:returns: The obtained confidence interval.
"""
confidence_intervals = self.estimator.estimate_confidence_intervals(
estimate_value=self.value, confidence_level=confidence_level, method=method, **kwargs
)
return confidence_intervals
def get_standard_error(self, method=None, **kwargs):
"""Get standard error of the obtained estimate.
By default, this is done with the help of bootstrapped standard errors
but can be overridden if the specific estimator implements other methods of estimating standard error.
If the method provided is not bootstrap, this function calls the implementation of the specific estimator.
:param method: Method for computing the standard error.
:param kwargs: Other optional parameters to be passed to the estimating method.
:returns: Standard error of the causal estimate.
"""
std_error = self.estimator.estimate_std_error(method=method, **kwargs)
return std_error
def test_stat_significance(self, method=None, **kwargs):
"""Test statistical significance of the estimate obtained.
By default, uses resampling to create a non-parametric significance test.
Individual child estimators can implement different methods.
If the method name is different from "bootstrap", this function calls the
implementation of the child estimator.
:param method: Method for checking statistical significance
:param kwargs: Other optional parameters to be passed to the estimating method.
:returns: p-value from the significance test
"""
signif_results = self.estimator.test_significance(self.value, method=method, **kwargs)
return {"p_value": signif_results["p_value"]}
def estimate_conditional_effects(
self, effect_modifiers=None, num_quantiles=CausalEstimator.NUM_QUANTILES_TO_DISCRETIZE_CONT_COLS
):
"""Estimate treatment effect conditioned on given variables.
If a numeric effect modifier is provided, it is discretized into quantile bins. If you would like a custom discretization, you can do so yourself: create a new column containing the discretized effect modifier and then include that column's name in the effect_modifier_names argument.
        :param effect_modifiers: Names of effect modifier variables over which the conditional effects will be estimated. If not provided, defaults to the effect modifiers specified when fitting the estimator.
:param num_quantiles: The number of quantiles into which a numeric effect modifier variable is discretized. Does not affect any categorical effect modifiers.
:returns: A (multi-index) dataframe that provides separate effects for each value of the (discretized) effect modifiers.
"""
return self.estimator._estimate_conditional_effects(
self.estimator._estimate_effect_fn, effect_modifiers, num_quantiles
)
def interpret(self, method_name=None, **kwargs):
"""Interpret the causal estimate.
:param method_name: Method used (string) or a list of methods. If None, then the default for the specific estimator is used.
        :param kwargs: Optional parameters that are directly passed to the interpreter method.
:returns: None
"""
if method_name is None:
method_name = self.estimator.interpret_method
method_name_arr = parse_state(method_name)
for method in method_name_arr:
interpreter = interpreters.get_class_object(method)
interpreter(self, **kwargs).interpret()
def __str__(self):
s = "*** Causal Estimate ***\n"
# No estimand was identified (identification failed)
if self.target_estimand is None:
return "Estimation failed! No relevant identified estimand available for this estimation method."
s += "\n## Identified estimand\n{0}".format(self.target_estimand.__str__(only_target_estimand=True))
s += "\n## Realized estimand\n{0}".format(self.realized_estimand_expr)
if hasattr(self, "estimator"):
s += "\nTarget units: {0}\n".format(self.estimator.target_units_tostr())
s += "\n## Estimate\n"
s += "Mean value: {0}\n".format(self.value)
s += ""
if hasattr(self, "cate_estimates"):
s += "Effect estimates: {0}\n".format(self.cate_estimates)
if hasattr(self, "estimator"):
if self.estimator._significance_test:
s += "p-value: {0}\n".format(self.estimator.signif_results_tostr(self.test_stat_significance()))
if self.estimator._confidence_intervals:
s += "{0}% confidence interval: {1}\n".format(
100 * self.estimator.confidence_level, self.get_confidence_intervals()
)
if self.conditional_estimates is not None:
s += "### Conditional Estimates\n"
s += str(self.conditional_estimates)
if self.effect_strength is not None:
s += "\n## Effect Strength\n"
s += "Change in outcome attributable to treatment: {}\n".format(self.effect_strength["fraction-effect"])
# s += "Variance in outcome explained by treatment: {}\n".format(self.effect_strength["r-squared"])
return s
class RealizedEstimand(object):
def __init__(self, identified_estimand, estimator_name):
self.treatment_variable = identified_estimand.treatment_variable
self.outcome_variable = identified_estimand.outcome_variable
self.backdoor_variables = identified_estimand.get_backdoor_variables()
self.instrumental_variables = identified_estimand.instrumental_variables
self.estimand_type = identified_estimand.estimand_type
self.estimand_expression = None
self.assumptions = None
self.estimator_name = estimator_name
def update_assumptions(self, estimator_assumptions):
self.assumptions = estimator_assumptions
def update_estimand_expression(self, estimand_expression):
self.estimand_expression = estimand_expression
def __str__(self):
s = "Realized estimand: {0}\n".format(self.estimator_name)
s += "Realized estimand type: {0}\n".format(self.estimand_type)
s += "Estimand expression:\n{0}\n".format(sp.pretty(self.estimand_expression))
j = 1
for ass_name, ass_str in self.assumptions.items():
s += "Estimand assumption {0}, {1}: {2}\n".format(j, ass_name, ass_str)
j += 1
return s
| andresmor-ms | 11c4e0dafd6e824eb81ad14262457d954ae61468 | affe0952f4aba6845247355c171565510c2c1673 | oh I see. yeah, that's my bad. I may have updated the code without the docstring.
But yes, we do support multiple treatments for some methods, that's why treatment is always passed as a list. | amit-sharma | 202 |
py-why/dowhy | 746 | Functional api/causal estimators | * Introduce `fit()` method to estimators.
* Refactor constructors to avoid using `*args` and `**kwargs` and have more explicit parameters.
* Refactor refuters and other parts of the code to use `fit()` and modify arguments to `estimate_effect()` | null | 2022-11-04 16:15:39+00:00 | 2022-12-03 17:07:53+00:00 | dowhy/causal_estimator.py | import logging
from collections import namedtuple
from typing import Dict, List, Optional, Union
import numpy as np
import pandas as pd
import sympy as sp
from sklearn.utils import resample
import dowhy.interpreters as interpreters
from dowhy import causal_estimators
from dowhy.causal_graph import CausalGraph
from dowhy.causal_identifier.identified_estimand import IdentifiedEstimand
from dowhy.utils.api import parse_state
logger = logging.getLogger(__name__)
class CausalEstimator:
"""Base class for an estimator of causal effect.
Subclasses implement different estimation methods. All estimation methods are in the package "dowhy.causal_estimators"
"""
# The default number of simulations for statistical testing
DEFAULT_NUMBER_OF_SIMULATIONS_STAT_TEST = 1000
# The default number of simulations to obtain confidence intervals
DEFAULT_NUMBER_OF_SIMULATIONS_CI = 100
# The portion of the total size that should be taken each time to find the confidence intervals
# 1 is the recommended value
# https://ocw.mit.edu/courses/mathematics/18-05-introduction-to-probability-and-statistics-spring-2014/readings/MIT18_05S14_Reading24.pdf
# https://projecteuclid.org/download/pdf_1/euclid.ss/1032280214
DEFAULT_SAMPLE_SIZE_FRACTION = 1
# The default Confidence Level
DEFAULT_CONFIDENCE_LEVEL = 0.95
# Number of quantiles to discretize continuous columns, for applying groupby
NUM_QUANTILES_TO_DISCRETIZE_CONT_COLS = 5
# Prefix to add to temporary categorical variables created after discretization
TEMP_CAT_COLUMN_PREFIX = "__categorical__"
    DEFAULT_NOTIMPLEMENTEDERROR_MSG = "not yet implemented for {0}. If you would like this to be implemented in the next version, please raise an issue at https://github.com/microsoft/dowhy/issues"
BootstrapEstimates = namedtuple("BootstrapEstimates", ["estimates", "params"])
DEFAULT_INTERPRET_METHOD = ["textual_effect_interpreter"]
# std args to be removed from locals() before being passed to args_dict
_STD_INIT_ARGS = ("self", "__class__", "args", "kwargs")
def __init__(
self,
data,
identified_estimand,
treatment,
outcome,
control_value=0,
treatment_value=1,
test_significance=False,
evaluate_effect_strength=False,
confidence_intervals=False,
target_units=None,
effect_modifiers=None,
num_null_simulations=DEFAULT_NUMBER_OF_SIMULATIONS_STAT_TEST,
num_simulations=DEFAULT_NUMBER_OF_SIMULATIONS_CI,
sample_size_fraction=DEFAULT_SAMPLE_SIZE_FRACTION,
confidence_level=DEFAULT_CONFIDENCE_LEVEL,
need_conditional_estimates="auto",
num_quantiles_to_discretize_cont_cols=NUM_QUANTILES_TO_DISCRETIZE_CONT_COLS,
**kwargs,
):
"""Initializes an estimator with data and names of relevant variables.
This method is called from the constructors of its child classes.
:param data: data frame containing the data
:param identified_estimand: probability expression
representing the target identified estimand to estimate.
:param treatment: name of the treatment variable
:param outcome: name of the outcome variable
:param control_value: Value of the treatment in the control group, for effect estimation. If treatment is multi-variate, this can be a list.
:param treatment_value: Value of the treatment in the treated group, for effect estimation. If treatment is multi-variate, this can be a list.
:param test_significance: Binary flag or a string indicating whether to test significance and by which method. All estimators support test_significance="bootstrap" that estimates a p-value for the obtained estimate using the bootstrap method. Individual estimators can override this to support custom testing methods. The bootstrap method supports an optional parameter, num_null_simulations. If False, no testing is done. If True, significance of the estimate is tested using the custom method if available, otherwise by bootstrap.
:param evaluate_effect_strength: (Experimental) whether to evaluate the strength of effect
:param confidence_intervals: Binary flag or a string indicating whether the confidence intervals should be computed and which method should be used. All methods support estimation of confidence intervals using the bootstrap method by using the parameter confidence_intervals="bootstrap". The bootstrap method takes in two arguments (num_simulations and sample_size_fraction) that can be optionally specified in the params dictionary. Estimators may also override this to implement their own confidence interval method. If this parameter is False, no confidence intervals are computed. If True, confidence intervals are computed by the estimator's specific method if available, otherwise through bootstrap.
:param target_units: The units for which the treatment effect should be estimated. This can be a string for common specifications of target units (namely, "ate", "att" and "atc"). It can also be a lambda function that can be used as an index for the data (pandas DataFrame). Alternatively, it can be a new DataFrame that contains values of the effect_modifiers and effect will be estimated only for this new data.
:param effect_modifiers: Variables on which to compute separate
effects, or return a heterogeneous effect function. Not all
methods support this currently.
:param num_null_simulations: The number of simulations for testing the
statistical significance of the estimator
:param num_simulations: The number of simulations for finding the
confidence interval (and/or standard error) for a estimate
:param sample_size_fraction: The size of the sample for the bootstrap
estimator
:param confidence_level: The confidence level of the confidence
interval estimate
:param need_conditional_estimates: Boolean flag indicating whether
conditional estimates should be computed. Defaults to True if
there are effect modifiers in the graph
:param num_quantiles_to_discretize_cont_cols: The number of quantiles
into which a numeric effect modifier is split, to enable
estimation of conditional treatment effect over it.
:param kwargs: (optional) Additional estimator-specific parameters
:returns: an instance of the estimator class.
"""
self._data = data
self._target_estimand = identified_estimand
# Currently estimation methods only support univariate treatment and outcome
self._treatment_name = treatment
self._outcome_name = outcome[0] # assuming one-dimensional outcome
self._control_value = control_value
self._treatment_value = treatment_value
self._significance_test = test_significance
self._effect_strength_eval = evaluate_effect_strength
self._target_units = target_units
self._effect_modifier_names = effect_modifiers
self._confidence_intervals = confidence_intervals
self._bootstrap_estimates = None # for confidence intervals and std error
self._bootstrap_null_estimates = None # for significance test
self._effect_modifiers = None
self.method_params = kwargs
# Setting the default interpret method
self.interpret_method = CausalEstimator.DEFAULT_INTERPRET_METHOD
self.logger = logging.getLogger(__name__)
# Setting treatment and outcome values
if self._data is not None:
self._treatment = self._data[self._treatment_name]
self._outcome = self._data[self._outcome_name]
if self._effect_modifier_names:
# only add the observed nodes
self._effect_modifier_names = [
cname for cname in self._effect_modifier_names if cname in self._data.columns
]
if len(self._effect_modifier_names) > 0:
self._effect_modifiers = self._data[self._effect_modifier_names]
self._effect_modifiers = pd.get_dummies(self._effect_modifiers, drop_first=True)
self.logger.debug("Effect modifiers: " + ",".join(self._effect_modifier_names))
else:
self._effect_modifier_names = None
# Check if some parameters were set, otherwise set to default values
self.num_null_simulations = num_null_simulations
self.num_simulations = num_simulations
self.sample_size_fraction = sample_size_fraction
self.confidence_level = confidence_level
self.num_quantiles_to_discretize_cont_cols = num_quantiles_to_discretize_cont_cols
# Estimate conditional estimates by default
self.need_conditional_estimates = (
need_conditional_estimates if need_conditional_estimates != "auto" else bool(self._effect_modifier_names)
)
@staticmethod
def get_estimator_object(new_data, identified_estimand, estimate):
"""Create a new estimator of the same type as the one passed in the estimate argument.
Creates a new object with new_data and the identified_estimand
:param new_data: np.ndarray, pd.Series, pd.DataFrame
The newly assigned data on which the estimator should run
:param identified_estimand: IdentifiedEstimand
An instance of the identified estimand class that provides the information with
respect to which causal pathways are employed when the treatment effects the outcome
:param estimate: CausalEstimate
It is an already existing estimate whose properties we wish to replicate
:returns: An instance of the same estimator class that had generated the given estimate.
"""
estimator_class = estimate.params["estimator_class"]
new_estimator = estimator_class(
new_data,
identified_estimand,
identified_estimand.treatment_variable,
identified_estimand.outcome_variable,
# names of treatment and outcome
control_value=estimate.control_value,
treatment_value=estimate.treatment_value,
test_significance=False,
evaluate_effect_strength=False,
confidence_intervals=estimate.params["confidence_intervals"],
target_units=estimate.params["target_units"],
effect_modifiers=estimate.params["effect_modifiers"],
**estimate.params["method_params"] if estimate.params["method_params"] is not None else {},
)
return new_estimator
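    # Usage sketch (as the refuters use it): given an existing `estimate` and
    # a perturbed dataset new_df,
    #   new_est = CausalEstimator.get_estimator_object(new_df, identified_estimand, estimate)
    #   new_effect = new_est.estimate_effect()
    # replicates the original estimator's configuration on the new data.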
def _estimate_effect(self):
"""This method is to be overriden by the child classes, so that they can run the estimation technique of their choice"""
raise NotImplementedError(
("Main estimation method is " + CausalEstimator.DEFAULT_NOTIMPLEMENTEDERROR_MSG).format(self.__class__)
)
def estimate_effect(self):
"""Base estimation method that calls the estimate_effect method of its calling subclass.
Can optionally also test significance and estimate effect strength for any returned estimate.
:param self: object instance of class Estimator
:returns: A CausalEstimate instance that contains point estimates of average and conditional effects. Based on the parameters provided, it optionally includes confidence intervals, standard errors,statistical significance and other statistical parameters.
"""
est = self._estimate_effect()
est.add_estimator(self)
if self._significance_test:
self.test_significance(est.value, method=self._significance_test)
if self._confidence_intervals:
self.estimate_confidence_intervals(
est.value, confidence_level=self.confidence_level, method=self._confidence_intervals
)
if self._effect_strength_eval:
effect_strength_dict = self.evaluate_effect_strength(est)
est.add_effect_strength(effect_strength_dict)
return est
def estimate_effect_naive(self):
# TODO Only works for binary treatment
df_withtreatment = self._data.loc[self._data[self._treatment_name] == 1]
df_notreatment = self._data.loc[self._data[self._treatment_name] == 0]
est = np.mean(df_withtreatment[self._outcome_name]) - np.mean(df_notreatment[self._outcome_name])
return CausalEstimate(est, None, None, control_value=0, treatment_value=1)
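    # Worked example (hypothetical numbers): if treated units have mean
    # outcome 7.0 and control units 5.0, the naive estimate is 2.0, a simple
    # unadjusted difference in means used only as the baseline for the
    # fraction-effect strength measure.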
def _estimate_effect_fn(self, data_df):
"""Function used in conditional effect estimation. This function is to be overridden by each child estimator.
The overridden function should take in a dataframe as input and return the estimate for that data.
"""
raise NotImplementedError(
("Conditional treatment effects are " + CausalEstimator.DEFAULT_NOTIMPLEMENTEDERROR_MSG).format(
self.__class__
)
)
def _estimate_conditional_effects(self, estimate_effect_fn, effect_modifier_names=None, num_quantiles=None):
"""Estimate conditional treatment effects. Common method for all estimators that utilizes a specific estimate_effect_fn implemented by each child estimator.
If a numeric effect modifier is provided, it is discretized into quantile bins. If you would like a custom discretization, you can do so yourself: create a new column containing the discretized effect modifier and then include that column's name in the effect_modifier_names argument.
:param estimate_effect_fn: Function that has a single parameter (a data frame) and returns the treatment effect estimate on that data.
:param effect_modifier_names: Names of effect modifier variables over which the conditional effects will be estimated. If not provided, defaults to the effect modifiers specified during creation of the CausalEstimator object.
:param num_quantiles: The number of quantiles into which a numeric effect modifier variable is discretized. Does not affect any categorical effect modifiers.
:returns: A (multi-index) dataframe that provides separate effects for each value of the (discretized) effect modifiers.
"""
# Defaulting to class default values if parameters are not provided
if effect_modifier_names is None:
effect_modifier_names = self._effect_modifier_names
if num_quantiles is None:
num_quantiles = self.num_quantiles_to_discretize_cont_cols
# Checking that there is at least one effect modifier
if not effect_modifier_names:
raise ValueError("At least one effect modifier should be specified to compute conditional effects.")
# Making sure that effect_modifier_names is a list
effect_modifier_names = parse_state(effect_modifier_names)
if not all(em in self._effect_modifier_names for em in effect_modifier_names):
            self.logger.warning(
"At least one of the provided effect modifiers was not included while fitting the estimator. You may get incorrect results. To resolve, fit the estimator again by providing the updated effect modifiers in estimate_effect()."
)
# Making a copy since we are going to be changing effect modifier names
effect_modifier_names = effect_modifier_names.copy()
prefix = CausalEstimator.TEMP_CAT_COLUMN_PREFIX
# For every numeric effect modifier, adding a temp categorical column
for i in range(len(effect_modifier_names)):
em = effect_modifier_names[i]
if pd.api.types.is_numeric_dtype(self._data[em].dtypes):
self._data[prefix + str(em)] = pd.qcut(self._data[em], num_quantiles, duplicates="drop")
effect_modifier_names[i] = prefix + str(em)
# Grouping by effect modifiers and computing effect separately
by_effect_mods = self._data.groupby(effect_modifier_names)
conditional_estimates = by_effect_mods.apply(estimate_effect_fn)
# Deleting the temporary categorical columns
for em in effect_modifier_names:
if em.startswith(prefix):
self._data.pop(em)
return conditional_estimates
def _do(self, x, data_df=None):
raise NotImplementedError(
("Do-operator is " + CausalEstimator.DEFAULT_NOTIMPLEMENTEDERROR_MSG).format(self.__class__)
)
def do(self, x, data_df=None):
"""Method that implements the do-operator.
        Given a value x for the treatment, returns the expected value of the outcome when the treatment is set to x by intervention.
:param x: Value of the treatment
:param data_df: Data on which the do-operator is to be applied.
:returns: Value of the outcome when treatment is intervened/set to x.
"""
est = self._do(x, data_df)
return est
def construct_symbolic_estimator(self, estimand):
raise NotImplementedError(("Symbolic estimator string is ").format(self.__class__))
def _generate_bootstrap_estimates(self, num_bootstrap_simulations, sample_size_fraction):
"""Helper function to generate causal estimates over bootstrapped samples.
:param num_bootstrap_simulations: Number of simulations for the bootstrap method.
:param sample_size_fraction: Fraction of the dataset to be resampled.
:returns: A collections.namedtuple containing a list of bootstrapped estimates and a dictionary containing parameters used for the bootstrap.
"""
# The array that stores the results of all estimations
simulation_results = np.zeros(num_bootstrap_simulations)
        # Find the sample size as a proportion of the population size
sample_size = int(sample_size_fraction * len(self._data))
if sample_size > len(self._data):
self.logger.warning("WARN: The sample size is greater than the data being sampled")
self.logger.info("INFO: The sample size: {}".format(sample_size))
self.logger.info("INFO: The number of simulations: {}".format(num_bootstrap_simulations))
# Perform the set number of simulations
for index in range(num_bootstrap_simulations):
new_data = resample(self._data, n_samples=sample_size)
new_estimator = type(self)(
new_data,
self._target_estimand,
self._target_estimand.treatment_variable,
self._target_estimand.outcome_variable,
# names of treatment and outcome
treatment_value=self._treatment_value,
control_value=self._control_value,
test_significance=False,
evaluate_effect_strength=False,
confidence_intervals=False,
target_units=self._target_units,
effect_modifiers=self._effect_modifier_names,
**self.method_params,
)
new_effect = new_estimator.estimate_effect()
simulation_results[index] = new_effect.value
estimates = CausalEstimator.BootstrapEstimates(
simulation_results,
{"num_simulations": num_bootstrap_simulations, "sample_size_fraction": sample_size_fraction},
)
return estimates
def _estimate_confidence_intervals_with_bootstrap(
self, estimate_value, confidence_level=None, num_simulations=None, sample_size_fraction=None
):
"""
Method to compute confidence interval using bootstrapped sampling.
:param estimate_value: obtained estimate's value
:param confidence_level: The level for which to compute CI (e.g., 95% confidence level translates to confidence_level=0.95)
:param num_simulations: The number of simulations to be performed to get the bootstrap confidence intervals.
:param sample_size_fraction: The fraction of the dataset to be resampled.
:returns: confidence interval at the specified level.
For more details on bootstrap or resampling statistics, refer to the following links:
https://ocw.mit.edu/courses/mathematics/18-05-introduction-to-probability-and-statistics-spring-2014/readings/MIT18_05S14_Reading24.pdf
https://projecteuclid.org/download/pdf_1/euclid.ss/1032280214
"""
# Using class default parameters if not specified
if num_simulations is None:
num_simulations = self.num_simulations
if sample_size_fraction is None:
sample_size_fraction = self.sample_size_fraction
# Checking if bootstrap_estimates are already computed
if self._bootstrap_estimates is None:
self._bootstrap_estimates = self._generate_bootstrap_estimates(num_simulations, sample_size_fraction)
elif CausalEstimator.is_bootstrap_parameter_changed(self._bootstrap_estimates.params, locals()):
            # Check if any parameter is changed from the previous std error estimate
self._bootstrap_estimates = self._generate_bootstrap_estimates(num_simulations, sample_size_fraction)
# Now use the data obtained from the simulations to get the value of the confidence estimates
bootstrap_estimates = self._bootstrap_estimates.estimates
# Get the variations of each bootstrap estimate and sort
bootstrap_variations = [bootstrap_estimate - estimate_value for bootstrap_estimate in bootstrap_estimates]
sorted_bootstrap_variations = np.sort(bootstrap_variations)
# Now we take the (1- p)th and the (p)th variations, where p is the chosen confidence level
upper_bound_index = int((1 - confidence_level) * len(sorted_bootstrap_variations))
lower_bound_index = int(confidence_level * len(sorted_bootstrap_variations))
# Get the lower and upper bounds by subtracting the variations from the estimate
lower_bound = estimate_value - sorted_bootstrap_variations[lower_bound_index]
upper_bound = estimate_value - sorted_bootstrap_variations[upper_bound_index]
return lower_bound, upper_bound
def _estimate_confidence_intervals(self, confidence_level=None, method=None, **kwargs):
"""
This method is to be overriden by the child classes, so that they
can run a confidence interval estimation method suited to the specific
causal estimator.
"""
raise NotImplementedError(
(
"This method for estimating confidence intervals is "
+ CausalEstimator.DEFAULT_NOTIMPLEMENTEDERROR_MSG
+ " Meanwhile, you can try the bootstrap method (method='bootstrap') to estimate confidence intervals."
).format(self.__class__)
)
def estimate_confidence_intervals(self, estimate_value, confidence_level=None, method=None, **kwargs):
"""Find the confidence intervals corresponding to any estimator
By default, this is done with the help of bootstrapped confidence intervals
but can be overridden if the specific estimator implements other methods of estimating confidence intervals.
If the method provided is not bootstrap, this function calls the implementation of the specific estimator.
:param estimate_value: obtained estimate's value
:param method: Method for estimating confidence intervals.
:param confidence_level: The confidence level of the confidence intervals of the estimate.
:param kwargs: Other optional args to be passed to the CI method.
:returns: The obtained confidence interval.
"""
if method is None:
if self._confidence_intervals:
method = self._confidence_intervals # this is either True or methodname
else:
method = "default"
confidence_intervals = None
if confidence_level is None:
confidence_level = self.confidence_level
if method == "default" or method is True: # user has not provided any method
try:
confidence_intervals = self._estimate_confidence_intervals(confidence_level, method=method, **kwargs)
except NotImplementedError:
confidence_intervals = self._estimate_confidence_intervals_with_bootstrap(
estimate_value, confidence_level, **kwargs
)
else:
if method == "bootstrap":
confidence_intervals = self._estimate_confidence_intervals_with_bootstrap(
estimate_value, confidence_level, **kwargs
)
else:
confidence_intervals = self._estimate_confidence_intervals(confidence_level, method=method, **kwargs)
return confidence_intervals
def _estimate_std_error_with_bootstrap(self, num_simulations=None, sample_size_fraction=None):
"""Compute standard error using the bootstrap method. Standard error
and confidence intervals use the same parameter num_simulations for
the number of bootstrap simulations.
:param num_simulations: Number of bootstrapped samples.
:param sample_size_fraction: Fraction of data to be resampled.
:returns: Standard error of the obtained estimate.
"""
# Use existing params, if new user defined params are not present
if num_simulations is None:
num_simulations = self.num_simulations
if sample_size_fraction is None:
sample_size_fraction = self.sample_size_fraction
# Checking if bootstrap_estimates are already computed
if self._bootstrap_estimates is None:
self._bootstrap_estimates = self._generate_bootstrap_estimates(num_simulations, sample_size_fraction)
elif CausalEstimator.is_bootstrap_parameter_changed(self._bootstrap_estimates.params, locals()):
# Check if any parameter is changed from the previous std error estimate
self._bootstrap_estimates = self._generate_bootstrap_estimates(num_simulations, sample_size_fraction)
std_error = np.std(self._bootstrap_estimates.estimates)
return std_error
def _estimate_std_error(self, method=None, **kwargs):
"""
This method is to be overriden by the child classes, so that they
can run a standard error estimation method suited to the specific
causal estimator.
"""
raise NotImplementedError(
(
"This method for estimating standard errors is "
+ CausalEstimator.DEFAULT_NOTIMPLEMENTEDERROR_MSG
+ " Meanwhile, you can try the bootstrap method (method='bootstrap') to estimate standard errors."
).format(self.__class__)
)
def estimate_std_error(self, method=None, **kwargs):
"""Compute standard error of an obtained causal estimate.
:param method: Method for computing the standard error.
:param kwargs: Other optional parameters to be passed to the estimating method.
:returns: Standard error of the causal estimate.
"""
if method is None:
if self._confidence_intervals:
method = self._confidence_intervals
else:
method = "default"
std_error = None
if method == "default" or method is True: # user has not provided any method
try:
std_error = self._estimate_std_error(method, **kwargs)
except NotImplementedError:
std_error = self._estimate_std_error_with_bootstrap(**kwargs)
else:
if method == "bootstrap":
std_error = self._estimate_std_error_with_bootstrap(**kwargs)
else:
std_error = self._estimate_std_error(method, **kwargs)
return std_error
def _test_significance_with_bootstrap(self, estimate_value, num_null_simulations=None):
"""Test statistical significance of an estimate using the bootstrap method.
:param estimate_value: Obtained estimate's value
:param num_null_simulations: Number of simulations for the null hypothesis
:returns: p-value of the statistical significance test.
"""
# Use existing params, if new user defined params are not present
if num_null_simulations is None:
num_null_simulations = self.num_null_simulations
do_retest = self._bootstrap_null_estimates is None or CausalEstimator.is_bootstrap_parameter_changed(
self._bootstrap_null_estimates.params, locals()
)
if do_retest:
null_estimates = np.zeros(num_null_simulations)
for i in range(num_null_simulations):
new_outcome = np.random.permutation(self._outcome)
new_data = self._data.assign(dummy_outcome=new_outcome)
# self._outcome = self._data["dummy_outcome"]
new_estimator = type(self)(
new_data,
self._target_estimand,
self._target_estimand.treatment_variable,
("dummy_outcome",),
test_significance=False,
evaluate_effect_strength=False,
confidence_intervals=False,
target_units=self._target_units,
effect_modifiers=self._effect_modifier_names,
**self.method_params,
)
new_effect = new_estimator.estimate_effect()
null_estimates[i] = new_effect.value
self._bootstrap_null_estimates = CausalEstimator.BootstrapEstimates(
null_estimates, {"num_null_simulations": num_null_simulations, "sample_size_fraction": 1}
)
# Processing the null hypothesis estimates
sorted_null_estimates = np.sort(self._bootstrap_null_estimates.estimates)
self.logger.debug("Null estimates: {0}".format(sorted_null_estimates))
median_estimate = sorted_null_estimates[int(num_null_simulations / 2)]
# Doing a two-sided test
if estimate_value > median_estimate:
# Being conservative with the p-value reported
estimate_index = np.searchsorted(sorted_null_estimates, estimate_value, side="left")
p_value = 1 - (estimate_index / num_null_simulations)
if estimate_value <= median_estimate:
# Being conservative with the p-value reported
estimate_index = np.searchsorted(sorted_null_estimates, estimate_value, side="right")
p_value = estimate_index / num_null_simulations
        # If the p-value comes out exactly 0 or 1, its resolution is limited by the number of simulations, so report a range instead
if p_value == 0:
p_value = (0, 1 / len(sorted_null_estimates)) # a tuple determining the range.
elif p_value == 1:
p_value = (1 - 1 / len(sorted_null_estimates), 1)
signif_dict = {"p_value": p_value}
return signif_dict
def _test_significance(self, estimate_value, method=None, **kwargs):
"""
This method is to be overriden by the child classes, so that they
can run a significance test suited to the specific
causal estimator.
"""
raise NotImplementedError(
(
"This method for testing statistical significance is "
+ CausalEstimator.DEFAULT_NOTIMPLEMENTEDERROR_MSG
+ " Meanwhile, you can try the bootstrap method (method='bootstrap') to test statistical significance."
).format(self.__class__)
)
def test_significance(self, estimate_value, method=None, **kwargs):
"""Test statistical significance of obtained estimate.
By default, uses resampling to create a non-parametric significance test.
A general procedure. Individual child estimators can implement different methods.
If the method name is different from "bootstrap", this function calls the
implementation of the child estimator.
:param self: object instance of class Estimator
:param estimate_value: obtained estimate's value
:param method: Method for checking statistical significance
:returns: p-value from the significance test
"""
if method is None:
if self._significance_test:
method = self._significance_test # this is either True or methodname
else:
method = "default"
signif_dict = None
if method == "default" or method is True: # user has not provided any method
try:
signif_dict = self._test_significance(estimate_value, method, **kwargs)
except NotImplementedError:
signif_dict = self._test_significance_with_bootstrap(estimate_value, **kwargs)
else:
if method == "bootstrap":
signif_dict = self._test_significance_with_bootstrap(estimate_value, **kwargs)
else:
signif_dict = self._test_significance(estimate_value, method, **kwargs)
return signif_dict
def evaluate_effect_strength(self, estimate):
fraction_effect_explained = self._evaluate_effect_strength(estimate, method="fraction-effect")
# Need to test r-squared before supporting
# effect_r_squared = self._evaluate_effect_strength(estimate, method="r-squared")
strength_dict = {
"fraction-effect": fraction_effect_explained
# 'r-squared': effect_r_squared
}
return strength_dict
def _evaluate_effect_strength(self, estimate, method="fraction-effect"):
supported_methods = ["fraction-effect"]
if method not in supported_methods:
raise NotImplementedError("This method is not supported for evaluating effect strength")
if method == "fraction-effect":
naive_obs_estimate = self.estimate_effect_naive()
self.logger.debug(estimate.value, naive_obs_estimate.value)
fraction_effect_explained = estimate.value / naive_obs_estimate.value
return fraction_effect_explained
# elif method == "r-squared":
# outcome_mean = np.mean(self._outcome)
# total_variance = np.sum(np.square(self._outcome - outcome_mean))
# Assuming a linear model with one variable: the treatment
# Currently only works for continuous y
# causal_model = outcome_mean + estimate.value*self._treatment
# squared_residual = np.sum(np.square(self._outcome - causal_model))
# r_squared = 1 - (squared_residual/total_variance)
# return r_squared
else:
return None
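    # Worked example of the "fraction-effect" measure (hypothetical numbers):
    # if the naive observed difference in means is 10.0 and the adjusted causal
    # estimate is 8.0, then fraction_effect_explained = 8.0 / 10.0 = 0.8, i.e.,
    # 80% of the naive association is attributable to the treatment.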
def update_input(self, treatment_value, control_value, target_units):
self._control_value = control_value
self._treatment_value = treatment_value
self._target_units = target_units
@staticmethod
def is_bootstrap_parameter_changed(bootstrap_estimates_params, given_params):
"""Check whether parameters of the bootstrap have changed.
This is an efficiency method that checks if fresh resampling of the bootstrap samples is required.
Returns True if parameters have changed and resampling should be done again.
:param bootstrap_estimates_params: A dictionary of parameters for the current bootstrap samples
:param given_params: A dictionary of parameters passed by the user
:returns: A binary flag denoting whether the parameters are different.
"""
is_any_parameter_changed = False
for prm, val in bootstrap_estimates_params.items():
given_val = given_params.get(prm, None)
if given_val is not None and given_val != val:
is_any_parameter_changed = True
break
return is_any_parameter_changed
def target_units_tostr(self):
s = ""
if type(self._target_units) is str:
s += self._target_units
elif callable(self._target_units):
s += "Data subset defined by a function"
elif isinstance(self._target_units, pd.DataFrame):
s += "Data subset provided as a data frame"
return s
def signif_results_tostr(self, signif_results):
s = ""
pval = signif_results["p_value"]
if type(pval) is tuple:
s += "[{0}, {1}]".format(pval[0], pval[1])
else:
s += "{0}".format(pval)
return s
def estimate_effect(
treatment: Union[str, List[str]],
outcome: Union[str, List[str]],
identified_estimand: IdentifiedEstimand,
identifier_name: str,
method: CausalEstimator,
control_value: int = 0,
treatment_value: int = 1,
test_significance: Optional[bool] = None,
evaluate_effect_strength: bool = False,
confidence_intervals: bool = False,
target_units: str = "ate",
effect_modifiers: List[str] = [],
fit_estimator: bool = True,
method_params: Optional[Dict] = None,
):
"""Estimate the identified causal effect.
Currently requires an explicit method name to be specified. Method names follow the convention of identification method followed by the specific estimation method: "[backdoor/iv].estimation_method_name". Following methods are supported.
* Propensity Score Matching: "backdoor.propensity_score_matching"
* Propensity Score Stratification: "backdoor.propensity_score_stratification"
* Propensity Score-based Inverse Weighting: "backdoor.propensity_score_weighting"
* Linear Regression: "backdoor.linear_regression"
* Generalized Linear Models (e.g., logistic regression): "backdoor.generalized_linear_model"
* Instrumental Variables: "iv.instrumental_variable"
* Regression Discontinuity: "iv.regression_discontinuity"
In addition, you can directly call any of the EconML estimation methods. The convention is "backdoor.econml.path-to-estimator-class". For example, for the double machine learning estimator ("DML" class) that is located inside "dml" module of EconML, you can use the method name, "backdoor.econml.dml.DML". CausalML estimators can also be called. See `this demo notebook <https://py-why.github.io/dowhy/example_notebooks/dowhy-conditional-treatment-effects.html>`_.
:param treatment: Name of the treatment
:param outcome: Name of the outcome
:param identified_estimand: a probability expression
that represents the effect to be estimated. Output of
CausalModel.identify_effect method
    :param identifier_name: name of the identification method (e.g., "backdoor" or "iv") whose estimand should be used.
    :param method: an instance of a CausalEstimator subclass to be used for estimation.
:param control_value: Value of the treatment in the control group, for effect estimation. If treatment is multi-variate, this can be a list.
:param treatment_value: Value of the treatment in the treated group, for effect estimation. If treatment is multi-variate, this can be a list.
    :param test_significance: Binary flag on whether to additionally do a statistical significance test for the estimate.
:param evaluate_effect_strength: (Experimental) Binary flag on whether to estimate the relative strength of the treatment's effect. This measure can be used to compare different treatments for the same outcome (by running this method with different treatments sequentially).
:param confidence_intervals: (Experimental) Binary flag indicating whether confidence intervals should be computed.
:param target_units: (Experimental) The units for which the treatment effect should be estimated. This can be of three types. (1) a string for common specifications of target units (namely, "ate", "att" and "atc"), (2) a lambda function that can be used as an index for the data (pandas DataFrame), or (3) a new DataFrame that contains values of the effect_modifiers and effect will be estimated only for this new data.
:param effect_modifiers: Names of effect modifier variables can be (optionally) specified here too, since they do not affect identification. If None, the effect_modifiers from the CausalModel are used.
:param fit_estimator: Boolean flag on whether to fit the estimator.
Setting it to False is useful to estimate the effect on new data using a previously fitted estimator.
:param method_params: Dictionary containing any method-specific parameters. These are passed directly to the estimating method. See the docs for each estimation method for allowed method-specific params.
:returns: An instance of the CausalEstimate class, containing the causal effect estimate
and other method-dependent information
"""
treatment = parse_state(treatment)
outcome = parse_state(outcome)
causal_estimator_class = method.__class__
identified_estimand.set_identifier_method(identifier_name)
if identified_estimand.no_directed_path:
logger.warning("No directed path from {0} to {1}.".format(treatment, outcome))
return CausalEstimate(
0, identified_estimand, None, control_value=control_value, treatment_value=treatment_value
)
# Check if estimator's target estimand is identified
elif identified_estimand.estimands[identifier_name] is None:
logger.error("No valid identified estimand available.")
return CausalEstimate(None, None, None, control_value=control_value, treatment_value=treatment_value)
method.update_input(treatment_value, control_value, target_units)
estimate = method.estimate_effect()
# Store parameters inside estimate object for refutation methods
# TODO: This add_params needs to move to the estimator class
# inside estimate_effect and estimate_conditional_effect
estimate.add_params(
estimand_type=identified_estimand.estimand_type,
estimator_class=causal_estimator_class,
test_significance=test_significance,
evaluate_effect_strength=evaluate_effect_strength,
confidence_intervals=confidence_intervals,
target_units=target_units,
effect_modifiers=effect_modifiers,
method_params=method_params,
)
return estimate
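# Illustrative call of the function above (a sketch, not from the source; the
# variable names are hypothetical, `estimand` comes from an identification step
# and `estimator` is an already constructed CausalEstimator):
#
#     estimate = estimate_effect(
#         treatment="v0",
#         outcome="y",
#         identified_estimand=estimand,
#         identifier_name="backdoor",
#         method=estimator,
#         control_value=0,
#         treatment_value=1,
#         target_units="ate",
#     )
#     estimate.value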
class CausalEstimate:
"""Class for the estimate object that every causal estimator returns"""
def __init__(
self,
estimate,
target_estimand,
realized_estimand_expr,
control_value,
treatment_value,
conditional_estimates=None,
**kwargs,
):
self.value = estimate
self.target_estimand = target_estimand
self.realized_estimand_expr = realized_estimand_expr
self.control_value = control_value
self.treatment_value = treatment_value
self.conditional_estimates = conditional_estimates
self.params = kwargs
if self.params is not None:
for key, value in self.params.items():
setattr(self, key, value)
self.effect_strength = None
def add_estimator(self, estimator_instance):
self.estimator = estimator_instance
def add_effect_strength(self, strength_dict):
self.effect_strength = strength_dict
def add_params(self, **kwargs):
self.params.update(kwargs)
def get_confidence_intervals(self, confidence_level=None, method=None, **kwargs):
"""Get confidence intervals of the obtained estimate.
By default, this is done with the help of bootstrapped confidence intervals
but can be overridden if the specific estimator implements other methods of estimating confidence intervals.
If the method provided is not bootstrap, this function calls the implementation of the specific estimator.
:param method: Method for estimating confidence intervals.
:param confidence_level: The confidence level of the confidence intervals of the estimate.
:param kwargs: Other optional args to be passed to the CI method.
:returns: The obtained confidence interval.
"""
confidence_intervals = self.estimator.estimate_confidence_intervals(
estimate_value=self.value, confidence_level=confidence_level, method=method, **kwargs
)
return confidence_intervals
def get_standard_error(self, method=None, **kwargs):
"""Get standard error of the obtained estimate.
By default, this is done with the help of bootstrapped standard errors
but can be overridden if the specific estimator implements other methods of estimating standard error.
If the method provided is not bootstrap, this function calls the implementation of the specific estimator.
:param method: Method for computing the standard error.
:param kwargs: Other optional parameters to be passed to the estimating method.
:returns: Standard error of the causal estimate.
"""
std_error = self.estimator.estimate_std_error(method=method, **kwargs)
return std_error
def test_stat_significance(self, method=None, **kwargs):
"""Test statistical significance of the estimate obtained.
By default, uses resampling to create a non-parametric significance test.
Individual child estimators can implement different methods.
If the method name is different from "bootstrap", this function calls the
implementation of the child estimator.
:param method: Method for checking statistical significance
:param kwargs: Other optional parameters to be passed to the estimating method.
:returns: p-value from the significance test
"""
signif_results = self.estimator.test_significance(self.value, method=method, **kwargs)
return {"p_value": signif_results["p_value"]}
def estimate_conditional_effects(
self, effect_modifiers=None, num_quantiles=CausalEstimator.NUM_QUANTILES_TO_DISCRETIZE_CONT_COLS
):
"""Estimate treatment effect conditioned on given variables.
If a numeric effect modifier is provided, it is discretized into quantile bins. If you would like a custom discretization, you can do so yourself: create a new column containing the discretized effect modifier and then include that column's name in the effect_modifier_names argument.
:param effect_modifiers: Names of effect modifier variables over which the conditional effects will be estimated. If not provided, defaults to the effect modifiers specified during creation of the CausalEstimator object.
:param num_quantiles: The number of quantiles into which a numeric effect modifier variable is discretized. Does not affect any categorical effect modifiers.
:returns: A (multi-index) dataframe that provides separate effects for each value of the (discretized) effect modifiers.
"""
return self.estimator._estimate_conditional_effects(
self.estimator._estimate_effect_fn, effect_modifiers, num_quantiles
)
def interpret(self, method_name=None, **kwargs):
"""Interpret the causal estimate.
:param method_name: Method used (string) or a list of methods. If None, then the default for the specific estimator is used.
        :param kwargs: Optional parameters that are directly passed to the interpreter method.
:returns: None
"""
if method_name is None:
method_name = self.estimator.interpret_method
method_name_arr = parse_state(method_name)
for method in method_name_arr:
interpreter = interpreters.get_class_object(method)
interpreter(self, **kwargs).interpret()
def __str__(self):
s = "*** Causal Estimate ***\n"
# No estimand was identified (identification failed)
if self.target_estimand is None:
return "Estimation failed! No relevant identified estimand available for this estimation method."
s += "\n## Identified estimand\n{0}".format(self.target_estimand.__str__(only_target_estimand=True))
s += "\n## Realized estimand\n{0}".format(self.realized_estimand_expr)
if hasattr(self, "estimator"):
s += "\nTarget units: {0}\n".format(self.estimator.target_units_tostr())
s += "\n## Estimate\n"
s += "Mean value: {0}\n".format(self.value)
s += ""
if hasattr(self, "cate_estimates"):
s += "Effect estimates: {0}\n".format(self.cate_estimates)
if hasattr(self, "estimator"):
if self.estimator._significance_test:
s += "p-value: {0}\n".format(self.estimator.signif_results_tostr(self.test_stat_significance()))
if self.estimator._confidence_intervals:
s += "{0}% confidence interval: {1}\n".format(
100 * self.estimator.confidence_level, self.get_confidence_intervals()
)
if self.conditional_estimates is not None:
s += "### Conditional Estimates\n"
s += str(self.conditional_estimates)
if self.effect_strength is not None:
s += "\n## Effect Strength\n"
s += "Change in outcome attributable to treatment: {}\n".format(self.effect_strength["fraction-effect"])
# s += "Variance in outcome explained by treatment: {}\n".format(self.effect_strength["r-squared"])
return s
class RealizedEstimand(object):
def __init__(self, identified_estimand, estimator_name):
self.treatment_variable = identified_estimand.treatment_variable
self.outcome_variable = identified_estimand.outcome_variable
self.backdoor_variables = identified_estimand.get_backdoor_variables()
self.instrumental_variables = identified_estimand.instrumental_variables
self.estimand_type = identified_estimand.estimand_type
self.estimand_expression = None
self.assumptions = None
self.estimator_name = estimator_name
def update_assumptions(self, estimator_assumptions):
self.assumptions = estimator_assumptions
def update_estimand_expression(self, estimand_expression):
self.estimand_expression = estimand_expression
def __str__(self):
s = "Realized estimand: {0}\n".format(self.estimator_name)
s += "Realized estimand type: {0}\n".format(self.estimand_type)
s += "Estimand expression:\n{0}\n".format(sp.pretty(self.estimand_expression))
j = 1
for ass_name, ass_str in self.assumptions.items():
s += "Estimand assumption {0}, {1}: {2}\n".format(j, ass_name, ass_str)
j += 1
return s
| import copy
import logging
from collections import namedtuple
from typing import Dict, List, Optional, Union
import numpy as np
import pandas as pd
import sympy as sp
from sklearn.utils import resample
import dowhy.interpreters as interpreters
from dowhy.causal_identifier.identified_estimand import IdentifiedEstimand
from dowhy.utils.api import parse_state
logger = logging.getLogger(__name__)
class CausalEstimator:
"""Base class for an estimator of causal effect.
Subclasses implement different estimation methods. All estimation methods are in the package "dowhy.causal_estimators"
"""
# The default number of simulations for statistical testing
DEFAULT_NUMBER_OF_SIMULATIONS_STAT_TEST = 1000
# The default number of simulations to obtain confidence intervals
DEFAULT_NUMBER_OF_SIMULATIONS_CI = 100
# The portion of the total size that should be taken each time to find the confidence intervals
# 1 is the recommended value
# https://ocw.mit.edu/courses/mathematics/18-05-introduction-to-probability-and-statistics-spring-2014/readings/MIT18_05S14_Reading24.pdf
# https://projecteuclid.org/download/pdf_1/euclid.ss/1032280214
DEFAULT_SAMPLE_SIZE_FRACTION = 1
# The default Confidence Level
DEFAULT_CONFIDENCE_LEVEL = 0.95
# Number of quantiles to discretize continuous columns, for applying groupby
NUM_QUANTILES_TO_DISCRETIZE_CONT_COLS = 5
# Prefix to add to temporary categorical variables created after discretization
TEMP_CAT_COLUMN_PREFIX = "__categorical__"
    DEFAULT_NOTIMPLEMENTEDERROR_MSG = "not yet implemented for {0}. If you would like this to be implemented in the next version, please raise an issue at https://github.com/microsoft/dowhy/issues"
BootstrapEstimates = namedtuple("BootstrapEstimates", ["estimates", "params"])
DEFAULT_INTERPRET_METHOD = ["textual_effect_interpreter"]
# std args to be removed from locals() before being passed to args_dict
_STD_INIT_ARGS = ("self", "__class__", "args", "kwargs")
def __init__(
self,
identified_estimand: IdentifiedEstimand,
test_significance: bool = False,
evaluate_effect_strength: bool = False,
confidence_intervals: bool = False,
num_null_simulations: int = DEFAULT_NUMBER_OF_SIMULATIONS_STAT_TEST,
num_simulations: int = DEFAULT_NUMBER_OF_SIMULATIONS_CI,
sample_size_fraction: int = DEFAULT_SAMPLE_SIZE_FRACTION,
confidence_level: float = DEFAULT_CONFIDENCE_LEVEL,
need_conditional_estimates: Union[bool, str] = "auto",
num_quantiles_to_discretize_cont_cols: int = NUM_QUANTILES_TO_DISCRETIZE_CONT_COLS,
**_,
):
"""Initializes an estimator with data and names of relevant variables.
This method is called from the constructors of its child classes.
:param identified_estimand: probability expression
representing the target identified estimand to estimate.
:param test_significance: Binary flag or a string indicating whether to test significance and by which method. All estimators support test_significance="bootstrap" that estimates a p-value for the obtained estimate using the bootstrap method. Individual estimators can override this to support custom testing methods. The bootstrap method supports an optional parameter, num_null_simulations. If False, no testing is done. If True, significance of the estimate is tested using the custom method if available, otherwise by bootstrap.
:param evaluate_effect_strength: (Experimental) whether to evaluate the strength of effect
:param confidence_intervals: Binary flag or a string indicating whether the confidence intervals should be computed and which method should be used. All methods support estimation of confidence intervals using the bootstrap method by using the parameter confidence_intervals="bootstrap". The bootstrap method takes in two arguments (num_simulations and sample_size_fraction) that can be optionally specified in the params dictionary. Estimators may also override this to implement their own confidence interval method. If this parameter is False, no confidence intervals are computed. If True, confidence intervals are computed by the estimator's specific method if available, otherwise through bootstrap
:param num_null_simulations: The number of simulations for testing the
statistical significance of the estimator
:param num_simulations: The number of simulations for finding the
            confidence interval (and/or standard error) for an estimate
:param sample_size_fraction: The size of the sample for the bootstrap
estimator
:param confidence_level: The confidence level of the confidence
interval estimate
        :param need_conditional_estimates: Boolean flag or the string "auto", indicating
            whether conditional estimates should be computed. If "auto", defaults to
            True when there are effect modifiers in the graph
:param num_quantiles_to_discretize_cont_cols: The number of quantiles
into which a numeric effect modifier is split, to enable
estimation of conditional treatment effect over it.
:param kwargs: (optional) Additional estimator-specific parameters
:returns: an instance of the estimator class.
"""
self._target_estimand = identified_estimand
self._significance_test = test_significance
self._effect_strength_eval = evaluate_effect_strength
self._confidence_intervals = confidence_intervals
# Setting the default interpret method
self.interpret_method = CausalEstimator.DEFAULT_INTERPRET_METHOD
self.logger = logging.getLogger(__name__)
# Check if some parameters were set, otherwise set to default values
self.num_null_simulations = num_null_simulations
self.num_simulations = num_simulations
self.sample_size_fraction = sample_size_fraction
self.confidence_level = confidence_level
self.num_quantiles_to_discretize_cont_cols = num_quantiles_to_discretize_cont_cols
# Estimate conditional estimates by default
self.need_conditional_estimates = need_conditional_estimates
self._bootstrap_estimates = None
self._bootstrap_null_estimates = None
def _set_data(self, data: pd.DataFrame, treatment_name: List[str], outcome_name: List[str]):
"""Sets the data for the estimator
:param data: data frame containing the data
:param treatment_name: name of the treatment variable
:param outcome_name: name of the outcome variable
"""
self._data = data
self._treatment_name = treatment_name
self._outcome_name = outcome_name[0]
self._treatment = self._data[self._treatment_name]
self._outcome = self._data[self._outcome_name]
def _set_effect_modifiers(self, effect_modifier_names: Optional[List[str]] = None):
"""Sets the effect modifiers for the estimator
        Modifies need_conditional_estimates according to the value of the effect modifiers
        :param effect_modifier_names: Names of variables on which to compute separate
effects, or return a heterogeneous effect function. Not all
methods support this currently.
"""
self._effect_modifiers = effect_modifier_names
if effect_modifier_names is not None:
self._effect_modifier_names = [cname for cname in effect_modifier_names if cname in self._data.columns]
if len(self._effect_modifier_names) > 0:
self._effect_modifiers = self._data[self._effect_modifier_names]
self._effect_modifiers = pd.get_dummies(self._effect_modifiers, drop_first=True)
self.logger.debug("Effect modifiers: " + ",".join(self._effect_modifier_names))
else:
self._effect_modifier_names = []
else:
self._effect_modifier_names = []
self.need_conditional_estimates = (
self.need_conditional_estimates
if self.need_conditional_estimates != "auto"
else (self._effect_modifier_names and len(self._effect_modifier_names) > 0)
)
def _set_identified_estimand(self, new_identified_estimand):
"""Method used internally to change the target estimand (required by some refuters)
:param new_identified_estimand: The new target_estimand to use
"""
self._target_estimand = new_identified_estimand
def get_new_estimator_object(
self,
identified_estimand,
test_significance=False,
evaluate_effect_strength=False,
confidence_intervals=None,
):
"""Create a new estimator of the same type as the one passed in the estimate argument.
Creates a new object with the identified_estimand
:param identified_estimand: IdentifiedEstimand
An instance of the identified estimand class that provides the information with
            respect to which causal pathways are employed when the treatment affects the outcome
:returns: A new instance of the same estimator class that had generated the given estimate.
"""
new_estimator = copy.deepcopy(self)
new_estimator._target_estimand = identified_estimand
new_estimator._test_significance = test_significance
new_estimator._evaluate_effect_strength = evaluate_effect_strength
new_estimator._confidence_intervals = (
self._confidence_intervals if confidence_intervals is None else confidence_intervals
)
return new_estimator
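    # Illustrative usage (a sketch): this is the pattern that refuters and the
    # bootstrap helpers below follow -- copy, re-fit on modified data, re-estimate:
    #
    #     new_est = estimator.get_new_estimator_object(new_identified_estimand)
    #     new_est.fit(new_data, treatment_name, outcome_name)
    #     new_effect = new_est.estimate_effect()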
def estimate_effect_naive(self):
# TODO Only works for binary treatment
df_withtreatment = self._data.loc[self._data[self._treatment_name] == 1]
df_notreatment = self._data.loc[self._data[self._treatment_name] == 0]
est = np.mean(df_withtreatment[self._outcome_name]) - np.mean(df_notreatment[self._outcome_name])
return CausalEstimate(est, None, None, control_value=0, treatment_value=1)
def _estimate_effect_fn(self, data_df):
"""Function used in conditional effect estimation. This function is to be overridden by each child estimator.
The overridden function should take in a dataframe as input and return the estimate for that data.
"""
raise NotImplementedError(
("Conditional treatment effects are " + CausalEstimator.DEFAULT_NOTIMPLEMENTEDERROR_MSG).format(
self.__class__
)
)
def _estimate_conditional_effects(self, estimate_effect_fn, effect_modifier_names=None, num_quantiles=None):
"""Estimate conditional treatment effects. Common method for all estimators that utilizes a specific estimate_effect_fn implemented by each child estimator.
If a numeric effect modifier is provided, it is discretized into quantile bins. If you would like a custom discretization, you can do so yourself: create a new column containing the discretized effect modifier and then include that column's name in the effect_modifier_names argument.
:param estimate_effect_fn: Function that has a single parameter (a data frame) and returns the treatment effect estimate on that data.
:param effect_modifier_names: Names of effect modifier variables over which the conditional effects will be estimated. If not provided, defaults to the effect modifiers specified during creation of the CausalEstimator object.
:param num_quantiles: The number of quantiles into which a numeric effect modifier variable is discretized. Does not affect any categorical effect modifiers.
:returns: A (multi-index) dataframe that provides separate effects for each value of the (discretized) effect modifiers.
"""
# Defaulting to class default values if parameters are not provided
if effect_modifier_names is None:
effect_modifier_names = self._effect_modifier_names
if num_quantiles is None:
num_quantiles = self.num_quantiles_to_discretize_cont_cols
# Checking that there is at least one effect modifier
if not effect_modifier_names:
raise ValueError("At least one effect modifier should be specified to compute conditional effects.")
# Making sure that effect_modifier_names is a list
effect_modifier_names = parse_state(effect_modifier_names)
if not all(em in self._effect_modifier_names for em in effect_modifier_names):
            self.logger.warning(
"At least one of the provided effect modifiers was not included while fitting the estimator. You may get incorrect results. To resolve, fit the estimator again by providing the updated effect modifiers in estimate_effect()."
)
# Making a copy since we are going to be changing effect modifier names
effect_modifier_names = effect_modifier_names.copy()
prefix = CausalEstimator.TEMP_CAT_COLUMN_PREFIX
# For every numeric effect modifier, adding a temp categorical column
for i in range(len(effect_modifier_names)):
em = effect_modifier_names[i]
if pd.api.types.is_numeric_dtype(self._data[em].dtypes):
self._data[prefix + str(em)] = pd.qcut(self._data[em], num_quantiles, duplicates="drop")
effect_modifier_names[i] = prefix + str(em)
# Grouping by effect modifiers and computing effect separately
by_effect_mods = self._data.groupby(effect_modifier_names)
cond_est_fn = lambda x: self._do(self._treatment_value, x) - self._do(self._control_value, x)
conditional_estimates = by_effect_mods.apply(estimate_effect_fn)
# Deleting the temporary categorical columns
for em in effect_modifier_names:
if em.startswith(prefix):
self._data.pop(em)
return conditional_estimates
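    # For intuition, the discretization above behaves like (hypothetical data):
    #
    #     pd.qcut(pd.Series([1, 2, 3, 4, 5, 6, 7, 8]), 4, duplicates="drop")
    #     # -> 4 quantile bins such as (0.999, 2.75], (2.75, 4.5], ...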
def _do(self, x, data_df=None):
raise NotImplementedError(
("Do-operator is " + CausalEstimator.DEFAULT_NOTIMPLEMENTEDERROR_MSG).format(self.__class__)
)
def do(self, x, data_df=None):
"""Method that implements the do-operator.
Given a value x for the treatment, returns the expected value of the outcome when the treatment is intervened to a value x.
:param x: Value of the treatment
:param data_df: Data on which the do-operator is to be applied.
:returns: Value of the outcome when treatment is intervened/set to x.
"""
est = self._do(x, data_df)
return est
def construct_symbolic_estimator(self, estimand):
raise NotImplementedError(("Symbolic estimator string is ").format(self.__class__))
def _generate_bootstrap_estimates(self, num_bootstrap_simulations, sample_size_fraction):
"""Helper function to generate causal estimates over bootstrapped samples.
:param num_bootstrap_simulations: Number of simulations for the bootstrap method.
:param sample_size_fraction: Fraction of the dataset to be resampled.
:returns: A collections.namedtuple containing a list of bootstrapped estimates and a dictionary containing parameters used for the bootstrap.
"""
# The array that stores the results of all estimations
simulation_results = np.zeros(num_bootstrap_simulations)
        # Compute the sample size as the given fraction of the population (data) size
sample_size = int(sample_size_fraction * len(self._data))
if sample_size > len(self._data):
self.logger.warning("WARN: The sample size is greater than the data being sampled")
self.logger.info("INFO: The sample size: {}".format(sample_size))
self.logger.info("INFO: The number of simulations: {}".format(num_bootstrap_simulations))
# Perform the set number of simulations
for index in range(num_bootstrap_simulations):
new_data = resample(self._data, n_samples=sample_size)
new_estimator = self.get_new_estimator_object(
self._target_estimand,
# names of treatment and outcome
test_significance=False,
evaluate_effect_strength=False,
confidence_intervals=False,
)
new_estimator.fit(
new_data,
self._target_estimand.treatment_variable,
self._target_estimand.outcome_variable,
effect_modifier_names=self._effect_modifier_names,
)
new_effect = new_estimator.estimate_effect(
treatment_value=self._treatment_value,
control_value=self._control_value,
target_units=self._target_units,
)
simulation_results[index] = new_effect.value
estimates = CausalEstimator.BootstrapEstimates(
simulation_results,
{"num_simulations": num_bootstrap_simulations, "sample_size_fraction": sample_size_fraction},
)
return estimates
def _estimate_confidence_intervals_with_bootstrap(
self, estimate_value, confidence_level=None, num_simulations=None, sample_size_fraction=None
):
"""
Method to compute confidence interval using bootstrapped sampling.
:param estimate_value: obtained estimate's value
:param confidence_level: The level for which to compute CI (e.g., 95% confidence level translates to confidence_level=0.95)
:param num_simulations: The number of simulations to be performed to get the bootstrap confidence intervals.
:param sample_size_fraction: The fraction of the dataset to be resampled.
:returns: confidence interval at the specified level.
For more details on bootstrap or resampling statistics, refer to the following links:
https://ocw.mit.edu/courses/mathematics/18-05-introduction-to-probability-and-statistics-spring-2014/readings/MIT18_05S14_Reading24.pdf
https://projecteuclid.org/download/pdf_1/euclid.ss/1032280214
"""
# Using class default parameters if not specified
if num_simulations is None:
num_simulations = self.num_simulations
if sample_size_fraction is None:
sample_size_fraction = self.sample_size_fraction
# Checking if bootstrap_estimates are already computed
if self._bootstrap_estimates is None:
self._bootstrap_estimates = self._generate_bootstrap_estimates(num_simulations, sample_size_fraction)
elif CausalEstimator.is_bootstrap_parameter_changed(self._bootstrap_estimates.params, locals()):
# Checked if any parameter is changed from the previous std error estimate
self._bootstrap_estimates = self._generate_bootstrap_estimates(num_simulations, sample_size_fraction)
# Now use the data obtained from the simulations to get the value of the confidence estimates
bootstrap_estimates = self._bootstrap_estimates.estimates
# Get the variations of each bootstrap estimate and sort
bootstrap_variations = [bootstrap_estimate - estimate_value for bootstrap_estimate in bootstrap_estimates]
sorted_bootstrap_variations = np.sort(bootstrap_variations)
# Now we take the (1- p)th and the (p)th variations, where p is the chosen confidence level
upper_bound_index = int((1 - confidence_level) * len(sorted_bootstrap_variations))
lower_bound_index = int(confidence_level * len(sorted_bootstrap_variations))
# Get the lower and upper bounds by subtracting the variations from the estimate
lower_bound = estimate_value - sorted_bootstrap_variations[lower_bound_index]
upper_bound = estimate_value - sorted_bootstrap_variations[upper_bound_index]
return lower_bound, upper_bound
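    # Numeric sketch of the interval above (hypothetical values): with
    # estimate_value = 5.0 and confidence_level = 0.95, the code picks the 5th
    # and 95th percentile bootstrap variations v_low and v_high and returns
    # (5.0 - v_high, 5.0 - v_low) -- the "basic" (pivot) bootstrap interval.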
def _estimate_confidence_intervals(self, confidence_level=None, method=None, **kwargs):
"""
        This method is to be overridden by the child classes, so that they
can run a confidence interval estimation method suited to the specific
causal estimator.
"""
raise NotImplementedError(
(
"This method for estimating confidence intervals is "
+ CausalEstimator.DEFAULT_NOTIMPLEMENTEDERROR_MSG
+ " Meanwhile, you can try the bootstrap method (method='bootstrap') to estimate confidence intervals."
).format(self.__class__)
)
def estimate_confidence_intervals(self, estimate_value, confidence_level=None, method=None, **kwargs):
"""Find the confidence intervals corresponding to any estimator
By default, this is done with the help of bootstrapped confidence intervals
but can be overridden if the specific estimator implements other methods of estimating confidence intervals.
If the method provided is not bootstrap, this function calls the implementation of the specific estimator.
:param estimate_value: obtained estimate's value
:param method: Method for estimating confidence intervals.
:param confidence_level: The confidence level of the confidence intervals of the estimate.
:param kwargs: Other optional args to be passed to the CI method.
:returns: The obtained confidence interval.
"""
if method is None:
if self._confidence_intervals:
method = self._confidence_intervals # this is either True or methodname
else:
method = "default"
confidence_intervals = None
if confidence_level is None:
confidence_level = self.confidence_level
if method == "default" or method is True: # user has not provided any method
try:
confidence_intervals = self._estimate_confidence_intervals(confidence_level, method=method, **kwargs)
except NotImplementedError:
confidence_intervals = self._estimate_confidence_intervals_with_bootstrap(
estimate_value, confidence_level, **kwargs
)
else:
if method == "bootstrap":
confidence_intervals = self._estimate_confidence_intervals_with_bootstrap(
estimate_value, confidence_level, **kwargs
)
else:
confidence_intervals = self._estimate_confidence_intervals(confidence_level, method=method, **kwargs)
return confidence_intervals
def _estimate_std_error_with_bootstrap(self, num_simulations=None, sample_size_fraction=None):
"""Compute standard error using the bootstrap method. Standard error
and confidence intervals use the same parameter num_simulations for
the number of bootstrap simulations.
:param num_simulations: Number of bootstrapped samples.
:param sample_size_fraction: Fraction of data to be resampled.
:returns: Standard error of the obtained estimate.
"""
# Use existing params, if new user defined params are not present
if num_simulations is None:
num_simulations = self.num_simulations
if sample_size_fraction is None:
sample_size_fraction = self.sample_size_fraction
# Checking if bootstrap_estimates are already computed
if self._bootstrap_estimates is None:
self._bootstrap_estimates = self._generate_bootstrap_estimates(num_simulations, sample_size_fraction)
elif CausalEstimator.is_bootstrap_parameter_changed(self._bootstrap_estimates.params, locals()):
# Check if any parameter is changed from the previous std error estimate
self._bootstrap_estimates = self._generate_bootstrap_estimates(num_simulations, sample_size_fraction)
std_error = np.std(self._bootstrap_estimates.estimates)
return std_error
def _estimate_std_error(self, method=None, **kwargs):
"""
        This method is to be overridden by the child classes, so that they
can run a standard error estimation method suited to the specific
causal estimator.
"""
raise NotImplementedError(
(
"This method for estimating standard errors is "
+ CausalEstimator.DEFAULT_NOTIMPLEMENTEDERROR_MSG
+ " Meanwhile, you can try the bootstrap method (method='bootstrap') to estimate standard errors."
).format(self.__class__)
)
def estimate_std_error(self, method=None, **kwargs):
"""Compute standard error of an obtained causal estimate.
:param method: Method for computing the standard error.
:param kwargs: Other optional parameters to be passed to the estimating method.
:returns: Standard error of the causal estimate.
"""
if method is None:
if self._confidence_intervals:
method = self._confidence_intervals
else:
method = "default"
std_error = None
if method == "default" or method is True: # user has not provided any method
try:
std_error = self._estimate_std_error(method, **kwargs)
except NotImplementedError:
std_error = self._estimate_std_error_with_bootstrap(**kwargs)
else:
if method == "bootstrap":
std_error = self._estimate_std_error_with_bootstrap(**kwargs)
else:
std_error = self._estimate_std_error(method, **kwargs)
return std_error
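    # Illustrative usage (a sketch; assumes a fitted estimator):
    #
    #     se = estimator.estimate_std_error(method="bootstrap", num_simulations=100)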
def _test_significance_with_bootstrap(self, estimate_value, num_null_simulations=None):
"""Test statistical significance of an estimate using the bootstrap method.
:param estimate_value: Obtained estimate's value
:param num_null_simulations: Number of simulations for the null hypothesis
:returns: p-value of the statistical significance test.
"""
# Use existing params, if new user defined params are not present
if num_null_simulations is None:
num_null_simulations = self.num_null_simulations
do_retest = self._bootstrap_null_estimates is None or CausalEstimator.is_bootstrap_parameter_changed(
self._bootstrap_null_estimates.params, locals()
)
if do_retest:
null_estimates = np.zeros(num_null_simulations)
for i in range(num_null_simulations):
new_outcome = np.random.permutation(self._outcome)
new_data = self._data.assign(dummy_outcome=new_outcome)
# self._outcome = self._data["dummy_outcome"]
new_estimator = self.get_new_estimator_object(
self._target_estimand,
test_significance=False,
evaluate_effect_strength=False,
confidence_intervals=False,
)
new_estimator.fit(
data=new_data,
treatment_name=self._target_estimand.treatment_variable,
outcome_name=("dummy_outcome",),
effect_modifier_names=self._effect_modifier_names,
)
new_effect = new_estimator.estimate_effect(
target_units=self._target_units,
)
null_estimates[i] = new_effect.value
self._bootstrap_null_estimates = CausalEstimator.BootstrapEstimates(
null_estimates, {"num_null_simulations": num_null_simulations, "sample_size_fraction": 1}
)
# Processing the null hypothesis estimates
sorted_null_estimates = np.sort(self._bootstrap_null_estimates.estimates)
self.logger.debug("Null estimates: {0}".format(sorted_null_estimates))
median_estimate = sorted_null_estimates[int(num_null_simulations / 2)]
# Doing a two-sided test
if estimate_value > median_estimate:
# Being conservative with the p-value reported
estimate_index = np.searchsorted(sorted_null_estimates, estimate_value, side="left")
p_value = 1 - (estimate_index / num_null_simulations)
if estimate_value <= median_estimate:
# Being conservative with the p-value reported
estimate_index = np.searchsorted(sorted_null_estimates, estimate_value, side="right")
p_value = estimate_index / num_null_simulations
# If the estimate_index is 0, it depends on the number of simulations
if p_value == 0:
p_value = (0, 1 / len(sorted_null_estimates)) # a tuple determining the range.
elif p_value == 1:
p_value = (1 - 1 / len(sorted_null_estimates), 1)
signif_dict = {"p_value": p_value}
return signif_dict
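    # Worked example of the two-sided p-value above (hypothetical numbers):
    # with 1000 null estimates and an estimate_value larger than 990 of them
    # (and above the null median), estimate_index = 990 and
    # p_value = 1 - 990/1000 = 0.01. If the estimate exceeds all null
    # estimates, the p-value is reported as the range (0, 1/1000) instead of 0.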
def _test_significance(self, estimate_value, method=None, **kwargs):
"""
        This method is to be overridden by the child classes, so that they
can run a significance test suited to the specific
causal estimator.
"""
raise NotImplementedError(
(
"This method for testing statistical significance is "
+ CausalEstimator.DEFAULT_NOTIMPLEMENTEDERROR_MSG
+ " Meanwhile, you can try the bootstrap method (method='bootstrap') to test statistical significance."
).format(self.__class__)
)
def test_significance(self, estimate_value, method=None, **kwargs):
"""Test statistical significance of obtained estimate.
By default, uses resampling to create a non-parametric significance test.
        This is a general procedure; individual child estimators can implement different methods.
If the method name is different from "bootstrap", this function calls the
implementation of the child estimator.
:param self: object instance of class Estimator
:param estimate_value: obtained estimate's value
:param method: Method for checking statistical significance
:returns: p-value from the significance test
"""
if method is None:
if self._significance_test:
method = self._significance_test # this is either True or methodname
else:
method = "default"
signif_dict = None
if method == "default" or method is True: # user has not provided any method
try:
signif_dict = self._test_significance(estimate_value, method, **kwargs)
except NotImplementedError:
signif_dict = self._test_significance_with_bootstrap(estimate_value, **kwargs)
else:
if method == "bootstrap":
signif_dict = self._test_significance_with_bootstrap(estimate_value, **kwargs)
else:
signif_dict = self._test_significance(estimate_value, method, **kwargs)
return signif_dict
def evaluate_effect_strength(self, estimate):
fraction_effect_explained = self._evaluate_effect_strength(estimate, method="fraction-effect")
# Need to test r-squared before supporting
# effect_r_squared = self._evaluate_effect_strength(estimate, method="r-squared")
strength_dict = {
"fraction-effect": fraction_effect_explained
# 'r-squared': effect_r_squared
}
return strength_dict
def _evaluate_effect_strength(self, estimate, method="fraction-effect"):
supported_methods = ["fraction-effect"]
if method not in supported_methods:
raise NotImplementedError("This method is not supported for evaluating effect strength")
if method == "fraction-effect":
naive_obs_estimate = self.estimate_effect_naive()
            self.logger.debug("%s %s", estimate.value, naive_obs_estimate.value)
fraction_effect_explained = estimate.value / naive_obs_estimate.value
return fraction_effect_explained
# elif method == "r-squared":
# outcome_mean = np.mean(self._outcome)
# total_variance = np.sum(np.square(self._outcome - outcome_mean))
# Assuming a linear model with one variable: the treatment
# Currently only works for continuous y
# causal_model = outcome_mean + estimate.value*self._treatment
# squared_residual = np.sum(np.square(self._outcome - causal_model))
# r_squared = 1 - (squared_residual/total_variance)
# return r_squared
else:
return None
def update_input(self, treatment_value, control_value, target_units):
self._control_value = control_value
self._treatment_value = treatment_value
self._target_units = target_units
@staticmethod
def is_bootstrap_parameter_changed(bootstrap_estimates_params, given_params):
"""Check whether parameters of the bootstrap have changed.
This is an efficiency method that checks if fresh resampling of the bootstrap samples is required.
Returns True if parameters have changed and resampling should be done again.
:param bootstrap_estimates_params: A dictionary of parameters for the current bootstrap samples
:param given_params: A dictionary of parameters passed by the user
:returns: A binary flag denoting whether the parameters are different.
"""
is_any_parameter_changed = False
for prm, val in bootstrap_estimates_params.items():
given_val = given_params.get(prm, None)
if given_val is not None and given_val != val:
is_any_parameter_changed = True
break
return is_any_parameter_changed
def target_units_tostr(self):
s = ""
if type(self._target_units) is str:
s += self._target_units
elif callable(self._target_units):
s += "Data subset defined by a function"
elif isinstance(self._target_units, pd.DataFrame):
s += "Data subset provided as a data frame"
return s
def signif_results_tostr(self, signif_results):
s = ""
pval = signif_results["p_value"]
if type(pval) is tuple:
s += "[{0}, {1}]".format(pval[0], pval[1])
else:
s += "{0}".format(pval)
return s
def estimate_effect(
data: pd.DataFrame,
treatment: Union[str, List[str]],
outcome: Union[str, List[str]],
identifier_name: str,
estimator: CausalEstimator,
control_value: int = 0,
treatment_value: int = 1,
target_units: str = "ate",
effect_modifiers: List[str] = None,
fit_estimator: bool = True,
method_params: Optional[Dict] = None,
):
"""Estimate the identified causal effect.
    The estimation method is determined by the `estimator` instance. Wrappers for EconML and CausalML estimators can be used as well. See `this demo notebook <https://py-why.github.io/dowhy/example_notebooks/dowhy-conditional-treatment-effects.html>`_.
    :param data: data frame containing the data
    :param treatment: Name of the treatment
    :param outcome: Name of the outcome
    :param identifier_name: name of the identification method (e.g., "backdoor" or "iv") whose estimand should be used by the estimator
:param estimator: Instance of a CausalEstimator to use
:param control_value: Value of the treatment in the control group, for effect estimation. If treatment is multi-variate, this can be a list.
:param treatment_value: Value of the treatment in the treated group, for effect estimation. If treatment is multi-variate, this can be a list.
:param target_units: (Experimental) The units for which the treatment effect should be estimated. This can be of three types. (1) a string for common specifications of target units (namely, "ate", "att" and "atc"), (2) a lambda function that can be used as an index for the data (pandas DataFrame), or (3) a new DataFrame that contains values of the effect_modifiers and effect will be estimated only for this new data.
:param effect_modifiers: Names of effect modifier variables can be (optionally) specified here too, since they do not affect identification. If None, the effect_modifiers from the CausalModel are used.
:param fit_estimator: Boolean flag on whether to fit the estimator.
        Setting it to False is useful to estimate the effect on new data using a previously fitted estimator.
    :param method_params: Dictionary containing any method-specific parameters. If it contains a "fit_params" key, its value is passed to the estimator's fit() method.
:returns: An instance of the CausalEstimate class, containing the causal effect estimate
and other method-dependent information
"""
if effect_modifiers is None:
effect_modifiers = []
treatment = parse_state(treatment)
outcome = parse_state(outcome)
causal_estimator_class = estimator.__class__
identified_estimand = estimator._target_estimand
identified_estimand.set_identifier_method(identifier_name)
if identified_estimand.no_directed_path:
logger.warning("No directed path from {0} to {1}.".format(treatment, outcome))
return CausalEstimate(
0, identified_estimand, None, control_value=control_value, treatment_value=treatment_value
)
# Check if estimator's target estimand is identified
elif identified_estimand.estimands[identifier_name] is None:
logger.error("No valid identified estimand available.")
return CausalEstimate(None, None, None, control_value=control_value, treatment_value=treatment_value)
if fit_estimator:
estimator.fit(
data=data,
treatment_name=treatment,
outcome_name=outcome,
effect_modifier_names=effect_modifiers,
            **(method_params["fit_params"] if method_params is not None and "fit_params" in method_params else {}),
)
estimate = estimator.estimate_effect(
treatment_value=treatment_value,
control_value=control_value,
target_units=target_units,
confidence_intervals=estimator._confidence_intervals,
)
# Store parameters inside estimate object for refutation methods
# TODO: This add_params needs to move to the estimator class
# inside estimate_effect and estimate_conditional_effect
estimate.add_params(
estimand_type=identified_estimand.estimand_type,
estimator_class=causal_estimator_class,
test_significance=estimator._significance_test,
evaluate_effect_strength=estimator._effect_strength_eval,
confidence_intervals=estimator._confidence_intervals,
target_units=target_units,
effect_modifiers=effect_modifiers,
)
if estimator._significance_test:
estimator.test_significance(estimate.value, method=estimator._significance_test)
if estimator._confidence_intervals:
estimator.estimate_confidence_intervals(
estimate.value, confidence_level=estimator.confidence_level, method=estimator._confidence_intervals
)
if estimator._effect_strength_eval:
effect_strength_dict = estimator.evaluate_effect_strength(estimate)
estimate.add_effect_strength(effect_strength_dict)
return estimate
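# Illustrative end-to-end usage of the functional API above (a sketch; the data
# frame `df` and variable names are hypothetical, and `identified_estimand` is
# assumed to come from an identification step):
#
#     from dowhy.causal_estimators.linear_regression_estimator import LinearRegressionEstimator
#
#     estimator = LinearRegressionEstimator(identified_estimand)
#     estimate = estimate_effect(
#         data=df,
#         treatment="v0",
#         outcome="y",
#         identifier_name="backdoor",
#         estimator=estimator,
#         control_value=0,
#         treatment_value=1,
#         target_units="ate",
#     )
#     estimate.value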
class CausalEstimate:
"""Class for the estimate object that every causal estimator returns"""
def __init__(
self,
estimate,
target_estimand,
realized_estimand_expr,
control_value,
treatment_value,
conditional_estimates=None,
**kwargs,
):
self.value = estimate
self.target_estimand = target_estimand
self.realized_estimand_expr = realized_estimand_expr
self.control_value = control_value
self.treatment_value = treatment_value
self.conditional_estimates = conditional_estimates
self.params = kwargs
if self.params is not None:
for key, value in self.params.items():
setattr(self, key, value)
self.effect_strength = None
def add_estimator(self, estimator_instance):
self.estimator = estimator_instance
def add_effect_strength(self, strength_dict):
self.effect_strength = strength_dict
def add_params(self, **kwargs):
self.params.update(kwargs)
def get_confidence_intervals(self, confidence_level=None, method=None, **kwargs):
"""Get confidence intervals of the obtained estimate.
By default, this is done with the help of bootstrapped confidence intervals
but can be overridden if the specific estimator implements other methods of estimating confidence intervals.
If the method provided is not bootstrap, this function calls the implementation of the specific estimator.
:param method: Method for estimating confidence intervals.
:param confidence_level: The confidence level of the confidence intervals of the estimate.
:param kwargs: Other optional args to be passed to the CI method.
:returns: The obtained confidence interval.
"""
confidence_intervals = self.estimator.estimate_confidence_intervals(
estimate_value=self.value, confidence_level=confidence_level, method=method, **kwargs
)
return confidence_intervals
def get_standard_error(self, method=None, **kwargs):
"""Get standard error of the obtained estimate.
By default, this is done with the help of bootstrapped standard errors
but can be overridden if the specific estimator implements other methods of estimating standard error.
If the method provided is not bootstrap, this function calls the implementation of the specific estimator.
:param method: Method for computing the standard error.
:param kwargs: Other optional parameters to be passed to the estimating method.
:returns: Standard error of the causal estimate.
"""
std_error = self.estimator.estimate_std_error(method=method, **kwargs)
return std_error
def test_stat_significance(self, method=None, **kwargs):
"""Test statistical significance of the estimate obtained.
By default, uses resampling to create a non-parametric significance test.
Individual child estimators can implement different methods.
If the method name is different from "bootstrap", this function calls the
implementation of the child estimator.
:param method: Method for checking statistical significance
:param kwargs: Other optional parameters to be passed to the estimating method.
:returns: p-value from the significance test
"""
signif_results = self.estimator.test_significance(self.value, method=method, **kwargs)
return {"p_value": signif_results["p_value"]}
def estimate_conditional_effects(
self, effect_modifiers=None, num_quantiles=CausalEstimator.NUM_QUANTILES_TO_DISCRETIZE_CONT_COLS
):
"""Estimate treatment effect conditioned on given variables.
If a numeric effect modifier is provided, it is discretized into quantile bins. If you would like a custom discretization, you can do so yourself: create a new column containing the discretized effect modifier and then include that column's name in the effect_modifier_names argument.
:param effect_modifiers: Names of effect modifier variables over which the conditional effects will be estimated. If not provided, defaults to the effect modifiers specified during creation of the CausalEstimator object.
:param num_quantiles: The number of quantiles into which a numeric effect modifier variable is discretized. Does not affect any categorical effect modifiers.
:returns: A (multi-index) dataframe that provides separate effects for each value of the (discretized) effect modifiers.
"""
return self.estimator._estimate_conditional_effects(
self.estimator._estimate_effect_fn, effect_modifiers, num_quantiles
)
def interpret(self, method_name=None, **kwargs):
"""Interpret the causal estimate.
:param method_name: Method used (string) or a list of methods. If None, then the default for the specific estimator is used.
        :param kwargs: Optional parameters that are directly passed to the interpreter method.
:returns: None
"""
if method_name is None:
method_name = self.estimator.interpret_method
method_name_arr = parse_state(method_name)
for method in method_name_arr:
interpreter = interpreters.get_class_object(method)
interpreter(self, **kwargs).interpret()
def __str__(self):
s = "*** Causal Estimate ***\n"
# No estimand was identified (identification failed)
if self.target_estimand is None:
return "Estimation failed! No relevant identified estimand available for this estimation method."
s += "\n## Identified estimand\n{0}".format(self.target_estimand.__str__(only_target_estimand=True))
s += "\n## Realized estimand\n{0}".format(self.realized_estimand_expr)
if hasattr(self, "estimator"):
s += "\nTarget units: {0}\n".format(self.estimator.target_units_tostr())
s += "\n## Estimate\n"
s += "Mean value: {0}\n".format(self.value)
s += ""
if hasattr(self, "cate_estimates"):
s += "Effect estimates: {0}\n".format(self.cate_estimates)
if hasattr(self, "estimator"):
if self.estimator._significance_test:
s += "p-value: {0}\n".format(self.estimator.signif_results_tostr(self.test_stat_significance()))
if self.estimator._confidence_intervals:
s += "{0}% confidence interval: {1}\n".format(
100 * self.estimator.confidence_level, self.get_confidence_intervals()
)
if self.conditional_estimates is not None:
s += "### Conditional Estimates\n"
s += str(self.conditional_estimates)
if self.effect_strength is not None:
s += "\n## Effect Strength\n"
s += "Change in outcome attributable to treatment: {}\n".format(self.effect_strength["fraction-effect"])
# s += "Variance in outcome explained by treatment: {}\n".format(self.effect_strength["r-squared"])
return s
class RealizedEstimand(object):
def __init__(self, identified_estimand, estimator_name):
self.treatment_variable = identified_estimand.treatment_variable
self.outcome_variable = identified_estimand.outcome_variable
self.backdoor_variables = identified_estimand.get_backdoor_variables()
self.instrumental_variables = identified_estimand.instrumental_variables
self.estimand_type = identified_estimand.estimand_type
self.estimand_expression = None
self.assumptions = None
self.estimator_name = estimator_name
def update_assumptions(self, estimator_assumptions):
self.assumptions = estimator_assumptions
def update_estimand_expression(self, estimand_expression):
self.estimand_expression = estimand_expression
def __str__(self):
s = "Realized estimand: {0}\n".format(self.estimator_name)
s += "Realized estimand type: {0}\n".format(self.estimand_type)
s += "Estimand expression:\n{0}\n".format(sp.pretty(self.estimand_expression))
j = 1
for ass_name, ass_str in self.assumptions.items():
s += "Estimand assumption {0}, {1}: {2}\n".format(j, ass_name, ass_str)
j += 1
return s
| andresmor-ms | 11c4e0dafd6e824eb81ad14262457d954ae61468 | affe0952f4aba6845247355c171565510c2c1673 | Good point. There's another case to consider, where a user will not provide any estimator and the function will automatically find the right Estimator and initialize it. This is the direction we'll be moving in the future. In that case, we would need the `identified_estimand` parameter.
Also, the identifier_name parameter can be removed, I guess. It was useful earlier because we wanted to specify which identifier name each estimator is connected to. But now since the estimator already has the estimand object, we can consider adding that string as an instance attribute for each Estimator. Each estimator corresponds to either of "backdoor"/"iv". And each estimator could check: if the identified_estimand does not have an entry for the estimator's identifier_name, then it can raise an Error (not for this PR, but later). In that case, L723 is not needed, and L731-733 could be moved to init methods of each estimator. What do you think?
So maybe we can keep the identifier_name parameter for this PR, but could be removed in future. | amit-sharma | 203 |
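A minimal sketch of the design proposed in this comment (the attribute name and
error check are hypothetical, not code from this PR):

    class SomeBackdoorEstimator(CausalEstimator):
        # Each estimator declares the identification strategy it consumes.
        identifier_method = "backdoor"

        def __init__(self, identified_estimand, **kwargs):
            # Raise early if the required estimand entry is missing.
            if identified_estimand.estimands.get(self.identifier_method) is None:
                raise ValueError(
                    "No '{}' estimand available for this estimator.".format(self.identifier_method)
                )
            identified_estimand.set_identifier_method(self.identifier_method)
            super().__init__(identified_estimand, **kwargs)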
py-why/dowhy | 746 | Functional api/causal estimators | * Introduce `fit()` method to estimators.
* Refactor constructors to avoid using `*args` and `**kwargs` and have more explicit parameters.
* Refactor refuters and other parts of the code to use `fit()` and modify arguments to `estimate_effect()` | null | 2022-11-04 16:15:39+00:00 | 2022-12-03 17:07:53+00:00 | dowhy/causal_estimator.py | import logging
from collections import namedtuple
from typing import Dict, List, Optional, Union
import numpy as np
import pandas as pd
import sympy as sp
from sklearn.utils import resample
import dowhy.interpreters as interpreters
from dowhy import causal_estimators
from dowhy.causal_graph import CausalGraph
from dowhy.causal_identifier.identified_estimand import IdentifiedEstimand
from dowhy.utils.api import parse_state
logger = logging.getLogger(__name__)
class CausalEstimator:
"""Base class for an estimator of causal effect.
Subclasses implement different estimation methods. All estimation methods are in the package "dowhy.causal_estimators"
"""
# The default number of simulations for statistical testing
DEFAULT_NUMBER_OF_SIMULATIONS_STAT_TEST = 1000
# The default number of simulations to obtain confidence intervals
DEFAULT_NUMBER_OF_SIMULATIONS_CI = 100
# The portion of the total size that should be taken each time to find the confidence intervals
# 1 is the recommended value
# https://ocw.mit.edu/courses/mathematics/18-05-introduction-to-probability-and-statistics-spring-2014/readings/MIT18_05S14_Reading24.pdf
# https://projecteuclid.org/download/pdf_1/euclid.ss/1032280214
DEFAULT_SAMPLE_SIZE_FRACTION = 1
# The default Confidence Level
DEFAULT_CONFIDENCE_LEVEL = 0.95
# Number of quantiles to discretize continuous columns, for applying groupby
NUM_QUANTILES_TO_DISCRETIZE_CONT_COLS = 5
# Prefix to add to temporary categorical variables created after discretization
TEMP_CAT_COLUMN_PREFIX = "__categorical__"
    DEFAULT_NOTIMPLEMENTEDERROR_MSG = "not yet implemented for {0}. If you would like this to be implemented in the next version, please raise an issue at https://github.com/microsoft/dowhy/issues"
BootstrapEstimates = namedtuple("BootstrapEstimates", ["estimates", "params"])
DEFAULT_INTERPRET_METHOD = ["textual_effect_interpreter"]
# std args to be removed from locals() before being passed to args_dict
_STD_INIT_ARGS = ("self", "__class__", "args", "kwargs")
def __init__(
self,
data,
identified_estimand,
treatment,
outcome,
control_value=0,
treatment_value=1,
test_significance=False,
evaluate_effect_strength=False,
confidence_intervals=False,
target_units=None,
effect_modifiers=None,
num_null_simulations=DEFAULT_NUMBER_OF_SIMULATIONS_STAT_TEST,
num_simulations=DEFAULT_NUMBER_OF_SIMULATIONS_CI,
sample_size_fraction=DEFAULT_SAMPLE_SIZE_FRACTION,
confidence_level=DEFAULT_CONFIDENCE_LEVEL,
need_conditional_estimates="auto",
num_quantiles_to_discretize_cont_cols=NUM_QUANTILES_TO_DISCRETIZE_CONT_COLS,
**kwargs,
):
"""Initializes an estimator with data and names of relevant variables.
This method is called from the constructors of its child classes.
:param data: data frame containing the data
:param identified_estimand: probability expression
representing the target identified estimand to estimate.
:param treatment: name of the treatment variable
:param outcome: name of the outcome variable
:param control_value: Value of the treatment in the control group, for effect estimation. If treatment is multi-variate, this can be a list.
:param treatment_value: Value of the treatment in the treated group, for effect estimation. If treatment is multi-variate, this can be a list.
:param test_significance: Binary flag or a string indicating whether to test significance and by which method. All estimators support test_significance="bootstrap" that estimates a p-value for the obtained estimate using the bootstrap method. Individual estimators can override this to support custom testing methods. The bootstrap method supports an optional parameter, num_null_simulations. If False, no testing is done. If True, significance of the estimate is tested using the custom method if available, otherwise by bootstrap.
:param evaluate_effect_strength: (Experimental) whether to evaluate the strength of effect
:param confidence_intervals: Binary flag or a string indicating whether the confidence intervals should be computed and which method should be used. All methods support estimation of confidence intervals using the bootstrap method by using the parameter confidence_intervals="bootstrap". The bootstrap method takes in two arguments (num_simulations and sample_size_fraction) that can be optionally specified in the params dictionary. Estimators may also override this to implement their own confidence interval method. If this parameter is False, no confidence intervals are computed. If True, confidence intervals are computed by the estimator's specific method if available, otherwise through bootstrap.
:param target_units: The units for which the treatment effect should be estimated. This can be a string for common specifications of target units (namely, "ate", "att" and "atc"). It can also be a lambda function that can be used as an index for the data (pandas DataFrame). Alternatively, it can be a new DataFrame that contains values of the effect_modifiers and effect will be estimated only for this new data.
:param effect_modifiers: Variables on which to compute separate
effects, or return a heterogeneous effect function. Not all
methods support this currently.
:param num_null_simulations: The number of simulations for testing the
statistical significance of the estimator
:param num_simulations: The number of simulations for finding the
            confidence interval (and/or standard error) for an estimate
:param sample_size_fraction: The size of the sample for the bootstrap
estimator
:param confidence_level: The confidence level of the confidence
interval estimate
:param need_conditional_estimates: Boolean flag indicating whether
conditional estimates should be computed. Defaults to True if
there are effect modifiers in the graph
:param num_quantiles_to_discretize_cont_cols: The number of quantiles
into which a numeric effect modifier is split, to enable
estimation of conditional treatment effect over it.
:param kwargs: (optional) Additional estimator-specific parameters
:returns: an instance of the estimator class.
"""
self._data = data
self._target_estimand = identified_estimand
# Currently estimation methods only support univariate treatment and outcome
self._treatment_name = treatment
self._outcome_name = outcome[0] # assuming one-dimensional outcome
self._control_value = control_value
self._treatment_value = treatment_value
self._significance_test = test_significance
self._effect_strength_eval = evaluate_effect_strength
self._target_units = target_units
self._effect_modifier_names = effect_modifiers
self._confidence_intervals = confidence_intervals
self._bootstrap_estimates = None # for confidence intervals and std error
self._bootstrap_null_estimates = None # for significance test
self._effect_modifiers = None
self.method_params = kwargs
# Setting the default interpret method
self.interpret_method = CausalEstimator.DEFAULT_INTERPRET_METHOD
self.logger = logging.getLogger(__name__)
# Setting treatment and outcome values
if self._data is not None:
self._treatment = self._data[self._treatment_name]
self._outcome = self._data[self._outcome_name]
if self._effect_modifier_names:
# only add the observed nodes
self._effect_modifier_names = [
cname for cname in self._effect_modifier_names if cname in self._data.columns
]
if len(self._effect_modifier_names) > 0:
self._effect_modifiers = self._data[self._effect_modifier_names]
self._effect_modifiers = pd.get_dummies(self._effect_modifiers, drop_first=True)
self.logger.debug("Effect modifiers: " + ",".join(self._effect_modifier_names))
else:
self._effect_modifier_names = None
# Check if some parameters were set, otherwise set to default values
self.num_null_simulations = num_null_simulations
self.num_simulations = num_simulations
self.sample_size_fraction = sample_size_fraction
self.confidence_level = confidence_level
self.num_quantiles_to_discretize_cont_cols = num_quantiles_to_discretize_cont_cols
# Estimate conditional estimates by default
self.need_conditional_estimates = (
need_conditional_estimates if need_conditional_estimates != "auto" else bool(self._effect_modifier_names)
)
@staticmethod
def get_estimator_object(new_data, identified_estimand, estimate):
"""Create a new estimator of the same type as the one passed in the estimate argument.
Creates a new object with new_data and the identified_estimand
:param new_data: np.ndarray, pd.Series, pd.DataFrame
The newly assigned data on which the estimator should run
:param identified_estimand: IdentifiedEstimand
An instance of the identified estimand class that provides the information with
            respect to which causal pathways are employed when the treatment affects the outcome
:param estimate: CausalEstimate
It is an already existing estimate whose properties we wish to replicate
:returns: An instance of the same estimator class that had generated the given estimate.
"""
estimator_class = estimate.params["estimator_class"]
new_estimator = estimator_class(
new_data,
identified_estimand,
identified_estimand.treatment_variable,
identified_estimand.outcome_variable,
# names of treatment and outcome
control_value=estimate.control_value,
treatment_value=estimate.treatment_value,
test_significance=False,
evaluate_effect_strength=False,
confidence_intervals=estimate.params["confidence_intervals"],
target_units=estimate.params["target_units"],
effect_modifiers=estimate.params["effect_modifiers"],
**estimate.params["method_params"] if estimate.params["method_params"] is not None else {},
)
return new_estimator
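    # Example usage (editor's sketch; variable names are illustrative): a refuter
    # that re-estimates the effect on modified data can clone the original
    # estimator from an existing estimate:
    #
    #   new_data = data.sample(frac=1.0, replace=True)
    #   new_estimator = CausalEstimator.get_estimator_object(
    #       new_data, identified_estimand, estimate
    #   )
    #   new_estimate = new_estimator.estimate_effect()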
def _estimate_effect(self):
"""This method is to be overriden by the child classes, so that they can run the estimation technique of their choice"""
raise NotImplementedError(
("Main estimation method is " + CausalEstimator.DEFAULT_NOTIMPLEMENTEDERROR_MSG).format(self.__class__)
)
def estimate_effect(self):
"""Base estimation method that calls the estimate_effect method of its calling subclass.
Can optionally also test significance and estimate effect strength for any returned estimate.
:param self: object instance of class Estimator
        :returns: A CausalEstimate instance that contains point estimates of average and conditional effects. Based on the parameters provided, it optionally includes confidence intervals, standard errors, statistical significance and other statistical parameters.
"""
est = self._estimate_effect()
est.add_estimator(self)
if self._significance_test:
self.test_significance(est.value, method=self._significance_test)
if self._confidence_intervals:
self.estimate_confidence_intervals(
est.value, confidence_level=self.confidence_level, method=self._confidence_intervals
)
if self._effect_strength_eval:
effect_strength_dict = self.evaluate_effect_strength(est)
est.add_effect_strength(effect_strength_dict)
return est
def estimate_effect_naive(self):
# TODO Only works for binary treatment
df_withtreatment = self._data.loc[self._data[self._treatment_name] == 1]
df_notreatment = self._data.loc[self._data[self._treatment_name] == 0]
est = np.mean(df_withtreatment[self._outcome_name]) - np.mean(df_notreatment[self._outcome_name])
return CausalEstimate(est, None, None, control_value=0, treatment_value=1)
def _estimate_effect_fn(self, data_df):
"""Function used in conditional effect estimation. This function is to be overridden by each child estimator.
The overridden function should take in a dataframe as input and return the estimate for that data.
"""
raise NotImplementedError(
("Conditional treatment effects are " + CausalEstimator.DEFAULT_NOTIMPLEMENTEDERROR_MSG).format(
self.__class__
)
)
def _estimate_conditional_effects(self, estimate_effect_fn, effect_modifier_names=None, num_quantiles=None):
"""Estimate conditional treatment effects. Common method for all estimators that utilizes a specific estimate_effect_fn implemented by each child estimator.
If a numeric effect modifier is provided, it is discretized into quantile bins. If you would like a custom discretization, you can do so yourself: create a new column containing the discretized effect modifier and then include that column's name in the effect_modifier_names argument.
:param estimate_effect_fn: Function that has a single parameter (a data frame) and returns the treatment effect estimate on that data.
:param effect_modifier_names: Names of effect modifier variables over which the conditional effects will be estimated. If not provided, defaults to the effect modifiers specified during creation of the CausalEstimator object.
:param num_quantiles: The number of quantiles into which a numeric effect modifier variable is discretized. Does not affect any categorical effect modifiers.
:returns: A (multi-index) dataframe that provides separate effects for each value of the (discretized) effect modifiers.
"""
# Defaulting to class default values if parameters are not provided
if effect_modifier_names is None:
effect_modifier_names = self._effect_modifier_names
if num_quantiles is None:
num_quantiles = self.num_quantiles_to_discretize_cont_cols
# Checking that there is at least one effect modifier
if not effect_modifier_names:
raise ValueError("At least one effect modifier should be specified to compute conditional effects.")
# Making sure that effect_modifier_names is a list
effect_modifier_names = parse_state(effect_modifier_names)
if not all(em in self._effect_modifier_names for em in effect_modifier_names):
            self.logger.warning(
"At least one of the provided effect modifiers was not included while fitting the estimator. You may get incorrect results. To resolve, fit the estimator again by providing the updated effect modifiers in estimate_effect()."
)
# Making a copy since we are going to be changing effect modifier names
effect_modifier_names = effect_modifier_names.copy()
prefix = CausalEstimator.TEMP_CAT_COLUMN_PREFIX
# For every numeric effect modifier, adding a temp categorical column
for i in range(len(effect_modifier_names)):
em = effect_modifier_names[i]
if pd.api.types.is_numeric_dtype(self._data[em].dtypes):
self._data[prefix + str(em)] = pd.qcut(self._data[em], num_quantiles, duplicates="drop")
effect_modifier_names[i] = prefix + str(em)
# Grouping by effect modifiers and computing effect separately
by_effect_mods = self._data.groupby(effect_modifier_names)
cond_est_fn = lambda x: self._do(self._treatment_value, x) - self._do(self._control_value, x)
conditional_estimates = by_effect_mods.apply(estimate_effect_fn)
# Deleting the temporary categorical columns
for em in effect_modifier_names:
if em.startswith(prefix):
self._data.pop(em)
return conditional_estimates
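    # Custom discretization example (editor's sketch; "age" is a hypothetical
    # column). Instead of relying on the automatic pd.qcut binning above, create
    # the discretized column yourself and pass its name as an effect modifier:
    #
    #   df["age_group"] = pd.cut(df["age"], bins=[0, 30, 60, 100])
    #   estimate.estimate_conditional_effects(effect_modifiers=["age_group"])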
def _do(self, x, data_df=None):
raise NotImplementedError(
("Do-operator is " + CausalEstimator.DEFAULT_NOTIMPLEMENTEDERROR_MSG).format(self.__class__)
)
def do(self, x, data_df=None):
"""Method that implements the do-operator.
Given a value x for the treatment, returns the expected value of the outcome when the treatment is intervened to a value x.
:param x: Value of the treatment
:param data_df: Data on which the do-operator is to be applied.
:returns: Value of the outcome when treatment is intervened/set to x.
"""
est = self._do(x, data_df)
return est
def construct_symbolic_estimator(self, estimand):
raise NotImplementedError(("Symbolic estimator string is ").format(self.__class__))
def _generate_bootstrap_estimates(self, num_bootstrap_simulations, sample_size_fraction):
"""Helper function to generate causal estimates over bootstrapped samples.
:param num_bootstrap_simulations: Number of simulations for the bootstrap method.
:param sample_size_fraction: Fraction of the dataset to be resampled.
:returns: A collections.namedtuple containing a list of bootstrapped estimates and a dictionary containing parameters used for the bootstrap.
"""
# The array that stores the results of all estimations
simulation_results = np.zeros(num_bootstrap_simulations)
        # Find the sample size as a proportion of the population size
sample_size = int(sample_size_fraction * len(self._data))
if sample_size > len(self._data):
self.logger.warning("WARN: The sample size is greater than the data being sampled")
self.logger.info("INFO: The sample size: {}".format(sample_size))
self.logger.info("INFO: The number of simulations: {}".format(num_bootstrap_simulations))
# Perform the set number of simulations
for index in range(num_bootstrap_simulations):
new_data = resample(self._data, n_samples=sample_size)
new_estimator = type(self)(
new_data,
self._target_estimand,
self._target_estimand.treatment_variable,
self._target_estimand.outcome_variable,
# names of treatment and outcome
treatment_value=self._treatment_value,
control_value=self._control_value,
test_significance=False,
evaluate_effect_strength=False,
confidence_intervals=False,
target_units=self._target_units,
effect_modifiers=self._effect_modifier_names,
**self.method_params,
)
new_effect = new_estimator.estimate_effect()
simulation_results[index] = new_effect.value
estimates = CausalEstimator.BootstrapEstimates(
simulation_results,
{"num_simulations": num_bootstrap_simulations, "sample_size_fraction": sample_size_fraction},
)
return estimates
def _estimate_confidence_intervals_with_bootstrap(
self, estimate_value, confidence_level=None, num_simulations=None, sample_size_fraction=None
):
"""
Method to compute confidence interval using bootstrapped sampling.
:param estimate_value: obtained estimate's value
:param confidence_level: The level for which to compute CI (e.g., 95% confidence level translates to confidence_level=0.95)
:param num_simulations: The number of simulations to be performed to get the bootstrap confidence intervals.
:param sample_size_fraction: The fraction of the dataset to be resampled.
:returns: confidence interval at the specified level.
For more details on bootstrap or resampling statistics, refer to the following links:
https://ocw.mit.edu/courses/mathematics/18-05-introduction-to-probability-and-statistics-spring-2014/readings/MIT18_05S14_Reading24.pdf
https://projecteuclid.org/download/pdf_1/euclid.ss/1032280214
"""
# Using class default parameters if not specified
if num_simulations is None:
num_simulations = self.num_simulations
if sample_size_fraction is None:
sample_size_fraction = self.sample_size_fraction
# Checking if bootstrap_estimates are already computed
if self._bootstrap_estimates is None:
self._bootstrap_estimates = self._generate_bootstrap_estimates(num_simulations, sample_size_fraction)
elif CausalEstimator.is_bootstrap_parameter_changed(self._bootstrap_estimates.params, locals()):
# Checked if any parameter is changed from the previous std error estimate
self._bootstrap_estimates = self._generate_bootstrap_estimates(num_simulations, sample_size_fraction)
# Now use the data obtained from the simulations to get the value of the confidence estimates
bootstrap_estimates = self._bootstrap_estimates.estimates
# Get the variations of each bootstrap estimate and sort
bootstrap_variations = [bootstrap_estimate - estimate_value for bootstrap_estimate in bootstrap_estimates]
sorted_bootstrap_variations = np.sort(bootstrap_variations)
# Now we take the (1- p)th and the (p)th variations, where p is the chosen confidence level
upper_bound_index = int((1 - confidence_level) * len(sorted_bootstrap_variations))
lower_bound_index = int(confidence_level * len(sorted_bootstrap_variations))
# Get the lower and upper bounds by subtracting the variations from the estimate
lower_bound = estimate_value - sorted_bootstrap_variations[lower_bound_index]
upper_bound = estimate_value - sorted_bootstrap_variations[upper_bound_index]
return lower_bound, upper_bound
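    # Worked example (editor's note): with 100 bootstrap estimates and
    # confidence_level=0.95, upper_bound_index = int(0.05 * 100) = 5 and
    # lower_bound_index = int(0.95 * 100) = 95; subtracting the 95th and 5th
    # sorted deviations from the point estimate yields a pivot-style interval.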
def _estimate_confidence_intervals(self, confidence_level=None, method=None, **kwargs):
"""
        This method is to be overridden by the child classes, so that they
can run a confidence interval estimation method suited to the specific
causal estimator.
"""
raise NotImplementedError(
(
"This method for estimating confidence intervals is "
+ CausalEstimator.DEFAULT_NOTIMPLEMENTEDERROR_MSG
+ " Meanwhile, you can try the bootstrap method (method='bootstrap') to estimate confidence intervals."
).format(self.__class__)
)
def estimate_confidence_intervals(self, estimate_value, confidence_level=None, method=None, **kwargs):
"""Find the confidence intervals corresponding to any estimator
By default, this is done with the help of bootstrapped confidence intervals
but can be overridden if the specific estimator implements other methods of estimating confidence intervals.
If the method provided is not bootstrap, this function calls the implementation of the specific estimator.
:param estimate_value: obtained estimate's value
:param method: Method for estimating confidence intervals.
:param confidence_level: The confidence level of the confidence intervals of the estimate.
:param kwargs: Other optional args to be passed to the CI method.
:returns: The obtained confidence interval.
"""
if method is None:
if self._confidence_intervals:
method = self._confidence_intervals # this is either True or methodname
else:
method = "default"
confidence_intervals = None
if confidence_level is None:
confidence_level = self.confidence_level
if method == "default" or method is True: # user has not provided any method
try:
confidence_intervals = self._estimate_confidence_intervals(confidence_level, method=method, **kwargs)
except NotImplementedError:
confidence_intervals = self._estimate_confidence_intervals_with_bootstrap(
estimate_value, confidence_level, **kwargs
)
else:
if method == "bootstrap":
confidence_intervals = self._estimate_confidence_intervals_with_bootstrap(
estimate_value, confidence_level, **kwargs
)
else:
confidence_intervals = self._estimate_confidence_intervals(confidence_level, method=method, **kwargs)
return confidence_intervals
def _estimate_std_error_with_bootstrap(self, num_simulations=None, sample_size_fraction=None):
"""Compute standard error using the bootstrap method. Standard error
and confidence intervals use the same parameter num_simulations for
the number of bootstrap simulations.
:param num_simulations: Number of bootstrapped samples.
:param sample_size_fraction: Fraction of data to be resampled.
:returns: Standard error of the obtained estimate.
"""
# Use existing params, if new user defined params are not present
if num_simulations is None:
num_simulations = self.num_simulations
if sample_size_fraction is None:
sample_size_fraction = self.sample_size_fraction
# Checking if bootstrap_estimates are already computed
if self._bootstrap_estimates is None:
self._bootstrap_estimates = self._generate_bootstrap_estimates(num_simulations, sample_size_fraction)
elif CausalEstimator.is_bootstrap_parameter_changed(self._bootstrap_estimates.params, locals()):
# Check if any parameter is changed from the previous std error estimate
self._bootstrap_estimates = self._generate_bootstrap_estimates(num_simulations, sample_size_fraction)
std_error = np.std(self._bootstrap_estimates.estimates)
return std_error
def _estimate_std_error(self, method=None, **kwargs):
"""
        This method is to be overridden by the child classes, so that they
can run a standard error estimation method suited to the specific
causal estimator.
"""
raise NotImplementedError(
(
"This method for estimating standard errors is "
+ CausalEstimator.DEFAULT_NOTIMPLEMENTEDERROR_MSG
+ " Meanwhile, you can try the bootstrap method (method='bootstrap') to estimate standard errors."
).format(self.__class__)
)
def estimate_std_error(self, method=None, **kwargs):
"""Compute standard error of an obtained causal estimate.
:param method: Method for computing the standard error.
:param kwargs: Other optional parameters to be passed to the estimating method.
:returns: Standard error of the causal estimate.
"""
if method is None:
if self._confidence_intervals:
method = self._confidence_intervals
else:
method = "default"
std_error = None
if method == "default" or method is True: # user has not provided any method
try:
std_error = self._estimate_std_error(method, **kwargs)
except NotImplementedError:
std_error = self._estimate_std_error_with_bootstrap(**kwargs)
else:
if method == "bootstrap":
std_error = self._estimate_std_error_with_bootstrap(**kwargs)
else:
std_error = self._estimate_std_error(method, **kwargs)
return std_error
def _test_significance_with_bootstrap(self, estimate_value, num_null_simulations=None):
"""Test statistical significance of an estimate using the bootstrap method.
:param estimate_value: Obtained estimate's value
:param num_null_simulations: Number of simulations for the null hypothesis
:returns: p-value of the statistical significance test.
"""
# Use existing params, if new user defined params are not present
if num_null_simulations is None:
num_null_simulations = self.num_null_simulations
do_retest = self._bootstrap_null_estimates is None or CausalEstimator.is_bootstrap_parameter_changed(
self._bootstrap_null_estimates.params, locals()
)
if do_retest:
null_estimates = np.zeros(num_null_simulations)
for i in range(num_null_simulations):
new_outcome = np.random.permutation(self._outcome)
new_data = self._data.assign(dummy_outcome=new_outcome)
# self._outcome = self._data["dummy_outcome"]
new_estimator = type(self)(
new_data,
self._target_estimand,
self._target_estimand.treatment_variable,
("dummy_outcome",),
test_significance=False,
evaluate_effect_strength=False,
confidence_intervals=False,
target_units=self._target_units,
effect_modifiers=self._effect_modifier_names,
**self.method_params,
)
new_effect = new_estimator.estimate_effect()
null_estimates[i] = new_effect.value
self._bootstrap_null_estimates = CausalEstimator.BootstrapEstimates(
null_estimates, {"num_null_simulations": num_null_simulations, "sample_size_fraction": 1}
)
# Processing the null hypothesis estimates
sorted_null_estimates = np.sort(self._bootstrap_null_estimates.estimates)
self.logger.debug("Null estimates: {0}".format(sorted_null_estimates))
median_estimate = sorted_null_estimates[int(num_null_simulations / 2)]
# Doing a two-sided test
if estimate_value > median_estimate:
# Being conservative with the p-value reported
estimate_index = np.searchsorted(sorted_null_estimates, estimate_value, side="left")
p_value = 1 - (estimate_index / num_null_simulations)
if estimate_value <= median_estimate:
# Being conservative with the p-value reported
estimate_index = np.searchsorted(sorted_null_estimates, estimate_value, side="right")
p_value = estimate_index / num_null_simulations
# If the estimate_index is 0, it depends on the number of simulations
if p_value == 0:
p_value = (0, 1 / len(sorted_null_estimates)) # a tuple determining the range.
elif p_value == 1:
p_value = (1 - 1 / len(sorted_null_estimates), 1)
signif_dict = {"p_value": p_value}
return signif_dict
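    # Worked example (editor's note): with 1000 permutation-based null estimates,
    # an estimate above the null median that exceeds 990 of the sorted null
    # values gets p_value = 1 - 990/1000 = 0.01; a p-value of exactly 0 or 1 is
    # widened to a range of width 1/1000 to stay conservative.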
def _test_significance(self, estimate_value, method=None, **kwargs):
"""
        This method is to be overridden by the child classes, so that they
can run a significance test suited to the specific
causal estimator.
"""
raise NotImplementedError(
(
"This method for testing statistical significance is "
+ CausalEstimator.DEFAULT_NOTIMPLEMENTEDERROR_MSG
+ " Meanwhile, you can try the bootstrap method (method='bootstrap') to test statistical significance."
).format(self.__class__)
)
def test_significance(self, estimate_value, method=None, **kwargs):
"""Test statistical significance of obtained estimate.
By default, uses resampling to create a non-parametric significance test.
A general procedure. Individual child estimators can implement different methods.
If the method name is different from "bootstrap", this function calls the
implementation of the child estimator.
:param self: object instance of class Estimator
:param estimate_value: obtained estimate's value
:param method: Method for checking statistical significance
:returns: p-value from the significance test
"""
if method is None:
if self._significance_test:
method = self._significance_test # this is either True or methodname
else:
method = "default"
signif_dict = None
if method == "default" or method is True: # user has not provided any method
try:
signif_dict = self._test_significance(estimate_value, method, **kwargs)
except NotImplementedError:
signif_dict = self._test_significance_with_bootstrap(estimate_value, **kwargs)
else:
if method == "bootstrap":
signif_dict = self._test_significance_with_bootstrap(estimate_value, **kwargs)
else:
signif_dict = self._test_significance(estimate_value, method, **kwargs)
return signif_dict
def evaluate_effect_strength(self, estimate):
fraction_effect_explained = self._evaluate_effect_strength(estimate, method="fraction-effect")
# Need to test r-squared before supporting
# effect_r_squared = self._evaluate_effect_strength(estimate, method="r-squared")
strength_dict = {
"fraction-effect": fraction_effect_explained
# 'r-squared': effect_r_squared
}
return strength_dict
def _evaluate_effect_strength(self, estimate, method="fraction-effect"):
supported_methods = ["fraction-effect"]
if method not in supported_methods:
raise NotImplementedError("This method is not supported for evaluating effect strength")
if method == "fraction-effect":
naive_obs_estimate = self.estimate_effect_naive()
self.logger.debug(estimate.value, naive_obs_estimate.value)
fraction_effect_explained = estimate.value / naive_obs_estimate.value
return fraction_effect_explained
# elif method == "r-squared":
# outcome_mean = np.mean(self._outcome)
# total_variance = np.sum(np.square(self._outcome - outcome_mean))
# Assuming a linear model with one variable: the treatment
# Currently only works for continuous y
# causal_model = outcome_mean + estimate.value*self._treatment
# squared_residual = np.sum(np.square(self._outcome - causal_model))
# r_squared = 1 - (squared_residual/total_variance)
# return r_squared
else:
return None
def update_input(self, treatment_value, control_value, target_units):
self._control_value = control_value
self._treatment_value = treatment_value
self._target_units = target_units
@staticmethod
def is_bootstrap_parameter_changed(bootstrap_estimates_params, given_params):
"""Check whether parameters of the bootstrap have changed.
This is an efficiency method that checks if fresh resampling of the bootstrap samples is required.
Returns True if parameters have changed and resampling should be done again.
:param bootstrap_estimates_params: A dictionary of parameters for the current bootstrap samples
:param given_params: A dictionary of parameters passed by the user
:returns: A binary flag denoting whether the parameters are different.
"""
is_any_parameter_changed = False
for prm, val in bootstrap_estimates_params.items():
given_val = given_params.get(prm, None)
if given_val is not None and given_val != val:
is_any_parameter_changed = True
break
return is_any_parameter_changed
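    # Example (editor's note): if the cached bootstrap was drawn with
    # {"num_simulations": 100, "sample_size_fraction": 1} and the caller now
    # passes num_simulations=200, this returns True and fresh samples are drawn.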
def target_units_tostr(self):
s = ""
if type(self._target_units) is str:
s += self._target_units
elif callable(self._target_units):
s += "Data subset defined by a function"
elif isinstance(self._target_units, pd.DataFrame):
s += "Data subset provided as a data frame"
return s
def signif_results_tostr(self, signif_results):
s = ""
pval = signif_results["p_value"]
if type(pval) is tuple:
s += "[{0}, {1}]".format(pval[0], pval[1])
else:
s += "{0}".format(pval)
return s
def estimate_effect(
treatment: Union[str, List[str]],
outcome: Union[str, List[str]],
identified_estimand: IdentifiedEstimand,
identifier_name: str,
method: CausalEstimator,
control_value: int = 0,
treatment_value: int = 1,
test_significance: Optional[bool] = None,
evaluate_effect_strength: bool = False,
confidence_intervals: bool = False,
target_units: str = "ate",
effect_modifiers: List[str] = [],
fit_estimator: bool = True,
method_params: Optional[Dict] = None,
):
"""Estimate the identified causal effect.
    Currently requires an explicit estimation method (an estimator instance) to be specified. Method names follow the convention of identification method followed by the specific estimation method: "[backdoor/iv].estimation_method_name". The following methods are supported.
* Propensity Score Matching: "backdoor.propensity_score_matching"
* Propensity Score Stratification: "backdoor.propensity_score_stratification"
* Propensity Score-based Inverse Weighting: "backdoor.propensity_score_weighting"
* Linear Regression: "backdoor.linear_regression"
* Generalized Linear Models (e.g., logistic regression): "backdoor.generalized_linear_model"
* Instrumental Variables: "iv.instrumental_variable"
* Regression Discontinuity: "iv.regression_discontinuity"
In addition, you can directly call any of the EconML estimation methods. The convention is "backdoor.econml.path-to-estimator-class". For example, for the double machine learning estimator ("DML" class) that is located inside "dml" module of EconML, you can use the method name, "backdoor.econml.dml.DML". CausalML estimators can also be called. See `this demo notebook <https://py-why.github.io/dowhy/example_notebooks/dowhy-conditional-treatment-effects.html>`_.
:param treatment: Name of the treatment
:param outcome: Name of the outcome
:param identified_estimand: a probability expression
that represents the effect to be estimated. Output of
CausalModel.identify_effect method
    :param identifier_name: name of the identification method used (e.g., "backdoor" or "iv")
    :param method: an instance of the estimator class (CausalEstimator) to be used for effect estimation
:param control_value: Value of the treatment in the control group, for effect estimation. If treatment is multi-variate, this can be a list.
:param treatment_value: Value of the treatment in the treated group, for effect estimation. If treatment is multi-variate, this can be a list.
    :param test_significance: Binary flag on whether to additionally do a statistical significance test for the estimate.
:param evaluate_effect_strength: (Experimental) Binary flag on whether to estimate the relative strength of the treatment's effect. This measure can be used to compare different treatments for the same outcome (by running this method with different treatments sequentially).
:param confidence_intervals: (Experimental) Binary flag indicating whether confidence intervals should be computed.
:param target_units: (Experimental) The units for which the treatment effect should be estimated. This can be of three types. (1) a string for common specifications of target units (namely, "ate", "att" and "atc"), (2) a lambda function that can be used as an index for the data (pandas DataFrame), or (3) a new DataFrame that contains values of the effect_modifiers and effect will be estimated only for this new data.
:param effect_modifiers: Names of effect modifier variables can be (optionally) specified here too, since they do not affect identification. If None, the effect_modifiers from the CausalModel are used.
:param fit_estimator: Boolean flag on whether to fit the estimator.
Setting it to False is useful to estimate the effect on new data using a previously fitted estimator.
:param method_params: Dictionary containing any method-specific parameters. These are passed directly to the estimating method. See the docs for each estimation method for allowed method-specific params.
:returns: An instance of the CausalEstimate class, containing the causal effect estimate
and other method-dependent information
"""
treatment = parse_state(treatment)
outcome = parse_state(outcome)
causal_estimator_class = method.__class__
identified_estimand.set_identifier_method(identifier_name)
if identified_estimand.no_directed_path:
logger.warning("No directed path from {0} to {1}.".format(treatment, outcome))
return CausalEstimate(
0, identified_estimand, None, control_value=control_value, treatment_value=treatment_value
)
# Check if estimator's target estimand is identified
elif identified_estimand.estimands[identifier_name] is None:
logger.error("No valid identified estimand available.")
return CausalEstimate(None, None, None, control_value=control_value, treatment_value=treatment_value)
method.update_input(treatment_value, control_value, target_units)
estimate = method.estimate_effect()
# Store parameters inside estimate object for refutation methods
# TODO: This add_params needs to move to the estimator class
# inside estimate_effect and estimate_conditional_effect
estimate.add_params(
estimand_type=identified_estimand.estimand_type,
estimator_class=causal_estimator_class,
test_significance=test_significance,
evaluate_effect_strength=evaluate_effect_strength,
confidence_intervals=confidence_intervals,
target_units=target_units,
effect_modifiers=effect_modifiers,
method_params=method_params,
)
return estimate
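# Example usage (editor's sketch; assumes a pandas DataFrame df with columns
# "v0" and "y", and an estimand returned by the identification step):
#
#   from dowhy.causal_estimators.propensity_score_matching_estimator import (
#       PropensityScoreMatchingEstimator,
#   )
#   estimator = PropensityScoreMatchingEstimator(df, estimand, ["v0"], ["y"])
#   estimate = estimate_effect(
#       treatment="v0", outcome="y", identified_estimand=estimand,
#       identifier_name="backdoor", method=estimator,
#   )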
class CausalEstimate:
"""Class for the estimate object that every causal estimator returns"""
def __init__(
self,
estimate,
target_estimand,
realized_estimand_expr,
control_value,
treatment_value,
conditional_estimates=None,
**kwargs,
):
self.value = estimate
self.target_estimand = target_estimand
self.realized_estimand_expr = realized_estimand_expr
self.control_value = control_value
self.treatment_value = treatment_value
self.conditional_estimates = conditional_estimates
self.params = kwargs
if self.params is not None:
for key, value in self.params.items():
setattr(self, key, value)
self.effect_strength = None
def add_estimator(self, estimator_instance):
self.estimator = estimator_instance
def add_effect_strength(self, strength_dict):
self.effect_strength = strength_dict
def add_params(self, **kwargs):
self.params.update(kwargs)
def get_confidence_intervals(self, confidence_level=None, method=None, **kwargs):
"""Get confidence intervals of the obtained estimate.
By default, this is done with the help of bootstrapped confidence intervals
but can be overridden if the specific estimator implements other methods of estimating confidence intervals.
If the method provided is not bootstrap, this function calls the implementation of the specific estimator.
:param method: Method for estimating confidence intervals.
:param confidence_level: The confidence level of the confidence intervals of the estimate.
:param kwargs: Other optional args to be passed to the CI method.
:returns: The obtained confidence interval.
"""
confidence_intervals = self.estimator.estimate_confidence_intervals(
estimate_value=self.value, confidence_level=confidence_level, method=method, **kwargs
)
return confidence_intervals
def get_standard_error(self, method=None, **kwargs):
"""Get standard error of the obtained estimate.
By default, this is done with the help of bootstrapped standard errors
but can be overridden if the specific estimator implements other methods of estimating standard error.
If the method provided is not bootstrap, this function calls the implementation of the specific estimator.
:param method: Method for computing the standard error.
:param kwargs: Other optional parameters to be passed to the estimating method.
:returns: Standard error of the causal estimate.
"""
std_error = self.estimator.estimate_std_error(method=method, **kwargs)
return std_error
def test_stat_significance(self, method=None, **kwargs):
"""Test statistical significance of the estimate obtained.
By default, uses resampling to create a non-parametric significance test.
Individual child estimators can implement different methods.
If the method name is different from "bootstrap", this function calls the
implementation of the child estimator.
:param method: Method for checking statistical significance
:param kwargs: Other optional parameters to be passed to the estimating method.
:returns: p-value from the significance test
"""
signif_results = self.estimator.test_significance(self.value, method=method, **kwargs)
return {"p_value": signif_results["p_value"]}
def estimate_conditional_effects(
self, effect_modifiers=None, num_quantiles=CausalEstimator.NUM_QUANTILES_TO_DISCRETIZE_CONT_COLS
):
"""Estimate treatment effect conditioned on given variables.
If a numeric effect modifier is provided, it is discretized into quantile bins. If you would like a custom discretization, you can do so yourself: create a new column containing the discretized effect modifier and then include that column's name in the effect_modifier_names argument.
:param effect_modifiers: Names of effect modifier variables over which the conditional effects will be estimated. If not provided, defaults to the effect modifiers specified during creation of the CausalEstimator object.
:param num_quantiles: The number of quantiles into which a numeric effect modifier variable is discretized. Does not affect any categorical effect modifiers.
:returns: A (multi-index) dataframe that provides separate effects for each value of the (discretized) effect modifiers.
"""
return self.estimator._estimate_conditional_effects(
self.estimator._estimate_effect_fn, effect_modifiers, num_quantiles
)
def interpret(self, method_name=None, **kwargs):
"""Interpret the causal estimate.
:param method_name: Method used (string) or a list of methods. If None, then the default for the specific estimator is used.
        :param kwargs: Optional parameters that are directly passed to the interpreter method.
:returns: None
"""
if method_name is None:
method_name = self.estimator.interpret_method
method_name_arr = parse_state(method_name)
for method in method_name_arr:
interpreter = interpreters.get_class_object(method)
interpreter(self, **kwargs).interpret()
def __str__(self):
s = "*** Causal Estimate ***\n"
# No estimand was identified (identification failed)
if self.target_estimand is None:
return "Estimation failed! No relevant identified estimand available for this estimation method."
s += "\n## Identified estimand\n{0}".format(self.target_estimand.__str__(only_target_estimand=True))
s += "\n## Realized estimand\n{0}".format(self.realized_estimand_expr)
if hasattr(self, "estimator"):
s += "\nTarget units: {0}\n".format(self.estimator.target_units_tostr())
s += "\n## Estimate\n"
s += "Mean value: {0}\n".format(self.value)
s += ""
if hasattr(self, "cate_estimates"):
s += "Effect estimates: {0}\n".format(self.cate_estimates)
if hasattr(self, "estimator"):
if self.estimator._significance_test:
s += "p-value: {0}\n".format(self.estimator.signif_results_tostr(self.test_stat_significance()))
if self.estimator._confidence_intervals:
s += "{0}% confidence interval: {1}\n".format(
100 * self.estimator.confidence_level, self.get_confidence_intervals()
)
if self.conditional_estimates is not None:
s += "### Conditional Estimates\n"
s += str(self.conditional_estimates)
if self.effect_strength is not None:
s += "\n## Effect Strength\n"
s += "Change in outcome attributable to treatment: {}\n".format(self.effect_strength["fraction-effect"])
# s += "Variance in outcome explained by treatment: {}\n".format(self.effect_strength["r-squared"])
return s
class RealizedEstimand(object):
def __init__(self, identified_estimand, estimator_name):
self.treatment_variable = identified_estimand.treatment_variable
self.outcome_variable = identified_estimand.outcome_variable
self.backdoor_variables = identified_estimand.get_backdoor_variables()
self.instrumental_variables = identified_estimand.instrumental_variables
self.estimand_type = identified_estimand.estimand_type
self.estimand_expression = None
self.assumptions = None
self.estimator_name = estimator_name
def update_assumptions(self, estimator_assumptions):
self.assumptions = estimator_assumptions
def update_estimand_expression(self, estimand_expression):
self.estimand_expression = estimand_expression
def __str__(self):
s = "Realized estimand: {0}\n".format(self.estimator_name)
s += "Realized estimand type: {0}\n".format(self.estimand_type)
s += "Estimand expression:\n{0}\n".format(sp.pretty(self.estimand_expression))
j = 1
for ass_name, ass_str in self.assumptions.items():
s += "Estimand assumption {0}, {1}: {2}\n".format(j, ass_name, ass_str)
j += 1
return s
| import copy
import logging
from collections import namedtuple
from typing import Dict, List, Optional, Union
import numpy as np
import pandas as pd
import sympy as sp
from sklearn.utils import resample
import dowhy.interpreters as interpreters
from dowhy.causal_identifier.identified_estimand import IdentifiedEstimand
from dowhy.utils.api import parse_state
logger = logging.getLogger(__name__)
class CausalEstimator:
"""Base class for an estimator of causal effect.
Subclasses implement different estimation methods. All estimation methods are in the package "dowhy.causal_estimators"
"""
# The default number of simulations for statistical testing
DEFAULT_NUMBER_OF_SIMULATIONS_STAT_TEST = 1000
# The default number of simulations to obtain confidence intervals
DEFAULT_NUMBER_OF_SIMULATIONS_CI = 100
# The portion of the total size that should be taken each time to find the confidence intervals
# 1 is the recommended value
# https://ocw.mit.edu/courses/mathematics/18-05-introduction-to-probability-and-statistics-spring-2014/readings/MIT18_05S14_Reading24.pdf
# https://projecteuclid.org/download/pdf_1/euclid.ss/1032280214
DEFAULT_SAMPLE_SIZE_FRACTION = 1
# The default Confidence Level
DEFAULT_CONFIDENCE_LEVEL = 0.95
# Number of quantiles to discretize continuous columns, for applying groupby
NUM_QUANTILES_TO_DISCRETIZE_CONT_COLS = 5
# Prefix to add to temporary categorical variables created after discretization
TEMP_CAT_COLUMN_PREFIX = "__categorical__"
    DEFAULT_NOTIMPLEMENTEDERROR_MSG = "not yet implemented for {0}. If you would like this to be implemented in the next version, please raise an issue at https://github.com/microsoft/dowhy/issues"
BootstrapEstimates = namedtuple("BootstrapEstimates", ["estimates", "params"])
DEFAULT_INTERPRET_METHOD = ["textual_effect_interpreter"]
# std args to be removed from locals() before being passed to args_dict
_STD_INIT_ARGS = ("self", "__class__", "args", "kwargs")
def __init__(
self,
identified_estimand: IdentifiedEstimand,
test_significance: bool = False,
evaluate_effect_strength: bool = False,
confidence_intervals: bool = False,
num_null_simulations: int = DEFAULT_NUMBER_OF_SIMULATIONS_STAT_TEST,
num_simulations: int = DEFAULT_NUMBER_OF_SIMULATIONS_CI,
sample_size_fraction: int = DEFAULT_SAMPLE_SIZE_FRACTION,
confidence_level: float = DEFAULT_CONFIDENCE_LEVEL,
need_conditional_estimates: Union[bool, str] = "auto",
num_quantiles_to_discretize_cont_cols: int = NUM_QUANTILES_TO_DISCRETIZE_CONT_COLS,
**_,
):
"""Initializes an estimator with data and names of relevant variables.
This method is called from the constructors of its child classes.
:param identified_estimand: probability expression
representing the target identified estimand to estimate.
:param test_significance: Binary flag or a string indicating whether to test significance and by which method. All estimators support test_significance="bootstrap" that estimates a p-value for the obtained estimate using the bootstrap method. Individual estimators can override this to support custom testing methods. The bootstrap method supports an optional parameter, num_null_simulations. If False, no testing is done. If True, significance of the estimate is tested using the custom method if available, otherwise by bootstrap.
:param evaluate_effect_strength: (Experimental) whether to evaluate the strength of effect
        :param confidence_intervals: Binary flag or a string indicating whether the confidence intervals should be computed and which method should be used. All methods support estimation of confidence intervals using the bootstrap method by using the parameter confidence_intervals="bootstrap". The bootstrap method takes in two arguments (num_simulations and sample_size_fraction) that can be optionally specified in the params dictionary. Estimators may also override this to implement their own confidence interval method. If this parameter is False, no confidence intervals are computed. If True, confidence intervals are computed by the estimator's specific method if available, otherwise through bootstrap.
:param num_null_simulations: The number of simulations for testing the
statistical significance of the estimator
:param num_simulations: The number of simulations for finding the
            confidence interval (and/or standard error) for an estimate
:param sample_size_fraction: The size of the sample for the bootstrap
estimator
:param confidence_level: The confidence level of the confidence
interval estimate
:param need_conditional_estimates: Boolean flag indicating whether
conditional estimates should be computed. Defaults to True if
there are effect modifiers in the graph
:param num_quantiles_to_discretize_cont_cols: The number of quantiles
into which a numeric effect modifier is split, to enable
estimation of conditional treatment effect over it.
        :param kwargs: (optional) Additional estimator-specific parameters; ignored by the base class
:returns: an instance of the estimator class.
"""
self._target_estimand = identified_estimand
self._significance_test = test_significance
self._effect_strength_eval = evaluate_effect_strength
self._confidence_intervals = confidence_intervals
# Setting the default interpret method
self.interpret_method = CausalEstimator.DEFAULT_INTERPRET_METHOD
self.logger = logging.getLogger(__name__)
# Check if some parameters were set, otherwise set to default values
self.num_null_simulations = num_null_simulations
self.num_simulations = num_simulations
self.sample_size_fraction = sample_size_fraction
self.confidence_level = confidence_level
self.num_quantiles_to_discretize_cont_cols = num_quantiles_to_discretize_cont_cols
# Estimate conditional estimates by default
self.need_conditional_estimates = need_conditional_estimates
self._bootstrap_estimates = None
self._bootstrap_null_estimates = None
def _set_data(self, data: pd.DataFrame, treatment_name: List[str], outcome_name: List[str]):
"""Sets the data for the estimator
:param data: data frame containing the data
:param treatment_name: name of the treatment variable
:param outcome_name: name of the outcome variable
"""
self._data = data
self._treatment_name = treatment_name
self._outcome_name = outcome_name[0]
self._treatment = self._data[self._treatment_name]
self._outcome = self._data[self._outcome_name]
def _set_effect_modifiers(self, effect_modifier_names: Optional[List[str]] = None):
"""Sets the effect modifiers for the estimator
        Modifies need_conditional_estimates according to the value of the effect modifiers
        :param effect_modifier_names: Variables on which to compute separate
            effects, or return a heterogeneous effect function. Not all
            methods support this currently.
"""
self._effect_modifiers = effect_modifier_names
if effect_modifier_names is not None:
self._effect_modifier_names = [cname for cname in effect_modifier_names if cname in self._data.columns]
if len(self._effect_modifier_names) > 0:
self._effect_modifiers = self._data[self._effect_modifier_names]
self._effect_modifiers = pd.get_dummies(self._effect_modifiers, drop_first=True)
self.logger.debug("Effect modifiers: " + ",".join(self._effect_modifier_names))
else:
self._effect_modifier_names = []
else:
self._effect_modifier_names = []
self.need_conditional_estimates = (
self.need_conditional_estimates
if self.need_conditional_estimates != "auto"
else (self._effect_modifier_names and len(self._effect_modifier_names) > 0)
)
def _set_identified_estimand(self, new_identified_estimand):
"""Method used internally to change the target estimand (required by some refuters)
:param new_identified_estimand: The new target_estimand to use
"""
self._target_estimand = new_identified_estimand
def get_new_estimator_object(
self,
identified_estimand,
test_significance=False,
evaluate_effect_strength=False,
confidence_intervals=None,
):
"""Create a new estimator of the same type as the one passed in the estimate argument.
Creates a new object with the identified_estimand
:param identified_estimand: IdentifiedEstimand
An instance of the identified estimand class that provides the information with
            respect to which causal pathways are employed when the treatment affects the outcome
:returns: A new instance of the same estimator class that had generated the given estimate.
"""
new_estimator = copy.deepcopy(self)
new_estimator._target_estimand = identified_estimand
        new_estimator._significance_test = test_significance
        new_estimator._effect_strength_eval = evaluate_effect_strength
new_estimator._confidence_intervals = (
self._confidence_intervals if confidence_intervals is None else confidence_intervals
)
return new_estimator
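    # Example of the refactored flow (editor's sketch; fit() is implemented by
    # the child estimators introduced in this PR):
    #
    #   new_estimator = estimator.get_new_estimator_object(identified_estimand)
    #   new_estimator.fit(data, treatment_name, outcome_name,
    #                     effect_modifier_names=effect_modifiers)
    #   new_estimate = new_estimator.estimate_effect(
    #       control_value=0, treatment_value=1, target_units="ate")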
def estimate_effect_naive(self):
# TODO Only works for binary treatment
df_withtreatment = self._data.loc[self._data[self._treatment_name] == 1]
df_notreatment = self._data.loc[self._data[self._treatment_name] == 0]
est = np.mean(df_withtreatment[self._outcome_name]) - np.mean(df_notreatment[self._outcome_name])
return CausalEstimate(est, None, None, control_value=0, treatment_value=1)
def _estimate_effect_fn(self, data_df):
"""Function used in conditional effect estimation. This function is to be overridden by each child estimator.
The overridden function should take in a dataframe as input and return the estimate for that data.
"""
raise NotImplementedError(
("Conditional treatment effects are " + CausalEstimator.DEFAULT_NOTIMPLEMENTEDERROR_MSG).format(
self.__class__
)
)
def _estimate_conditional_effects(self, estimate_effect_fn, effect_modifier_names=None, num_quantiles=None):
"""Estimate conditional treatment effects. Common method for all estimators that utilizes a specific estimate_effect_fn implemented by each child estimator.
If a numeric effect modifier is provided, it is discretized into quantile bins. If you would like a custom discretization, you can do so yourself: create a new column containing the discretized effect modifier and then include that column's name in the effect_modifier_names argument.
:param estimate_effect_fn: Function that has a single parameter (a data frame) and returns the treatment effect estimate on that data.
:param effect_modifier_names: Names of effect modifier variables over which the conditional effects will be estimated. If not provided, defaults to the effect modifiers specified during creation of the CausalEstimator object.
:param num_quantiles: The number of quantiles into which a numeric effect modifier variable is discretized. Does not affect any categorical effect modifiers.
:returns: A (multi-index) dataframe that provides separate effects for each value of the (discretized) effect modifiers.
"""
# Defaulting to class default values if parameters are not provided
if effect_modifier_names is None:
effect_modifier_names = self._effect_modifier_names
if num_quantiles is None:
num_quantiles = self.num_quantiles_to_discretize_cont_cols
# Checking that there is at least one effect modifier
if not effect_modifier_names:
raise ValueError("At least one effect modifier should be specified to compute conditional effects.")
# Making sure that effect_modifier_names is a list
effect_modifier_names = parse_state(effect_modifier_names)
if not all(em in self._effect_modifier_names for em in effect_modifier_names):
            self.logger.warning(
"At least one of the provided effect modifiers was not included while fitting the estimator. You may get incorrect results. To resolve, fit the estimator again by providing the updated effect modifiers in estimate_effect()."
)
# Making a copy since we are going to be changing effect modifier names
effect_modifier_names = effect_modifier_names.copy()
prefix = CausalEstimator.TEMP_CAT_COLUMN_PREFIX
# For every numeric effect modifier, adding a temp categorical column
for i in range(len(effect_modifier_names)):
em = effect_modifier_names[i]
if pd.api.types.is_numeric_dtype(self._data[em].dtypes):
self._data[prefix + str(em)] = pd.qcut(self._data[em], num_quantiles, duplicates="drop")
effect_modifier_names[i] = prefix + str(em)
# Grouping by effect modifiers and computing effect separately
by_effect_mods = self._data.groupby(effect_modifier_names)
cond_est_fn = lambda x: self._do(self._treatment_value, x) - self._do(self._control_value, x)
conditional_estimates = by_effect_mods.apply(estimate_effect_fn)
# Deleting the temporary categorical columns
for em in effect_modifier_names:
if em.startswith(prefix):
self._data.pop(em)
return conditional_estimates
def _do(self, x, data_df=None):
raise NotImplementedError(
("Do-operator is " + CausalEstimator.DEFAULT_NOTIMPLEMENTEDERROR_MSG).format(self.__class__)
)
def do(self, x, data_df=None):
"""Method that implements the do-operator.
Given a value x for the treatment, returns the expected value of the outcome when the treatment is intervened to a value x.
:param x: Value of the treatment
:param data_df: Data on which the do-operator is to be applied.
:returns: Value of the outcome when treatment is intervened/set to x.
"""
est = self._do(x, data_df)
return est
def construct_symbolic_estimator(self, estimand):
raise NotImplementedError(("Symbolic estimator string is ").format(self.__class__))
def _generate_bootstrap_estimates(self, num_bootstrap_simulations, sample_size_fraction):
"""Helper function to generate causal estimates over bootstrapped samples.
:param num_bootstrap_simulations: Number of simulations for the bootstrap method.
:param sample_size_fraction: Fraction of the dataset to be resampled.
:returns: A collections.namedtuple containing a list of bootstrapped estimates and a dictionary containing parameters used for the bootstrap.
"""
# The array that stores the results of all estimations
simulation_results = np.zeros(num_bootstrap_simulations)
        # Find the sample size as a proportion of the population size
sample_size = int(sample_size_fraction * len(self._data))
if sample_size > len(self._data):
self.logger.warning("WARN: The sample size is greater than the data being sampled")
self.logger.info("INFO: The sample size: {}".format(sample_size))
self.logger.info("INFO: The number of simulations: {}".format(num_bootstrap_simulations))
# Perform the set number of simulations
for index in range(num_bootstrap_simulations):
new_data = resample(self._data, n_samples=sample_size)
new_estimator = self.get_new_estimator_object(
self._target_estimand,
test_significance=False,
evaluate_effect_strength=False,
confidence_intervals=False,
)
new_estimator.fit(
new_data,
self._target_estimand.treatment_variable,
self._target_estimand.outcome_variable,
effect_modifier_names=self._effect_modifier_names,
)
new_effect = new_estimator.estimate_effect(
treatment_value=self._treatment_value,
control_value=self._control_value,
target_units=self._target_units,
)
simulation_results[index] = new_effect.value
estimates = CausalEstimator.BootstrapEstimates(
simulation_results,
{"num_simulations": num_bootstrap_simulations, "sample_size_fraction": sample_size_fraction},
)
return estimates
def _estimate_confidence_intervals_with_bootstrap(
self, estimate_value, confidence_level=None, num_simulations=None, sample_size_fraction=None
):
"""
Method to compute confidence interval using bootstrapped sampling.
:param estimate_value: obtained estimate's value
:param confidence_level: The level for which to compute CI (e.g., 95% confidence level translates to confidence_level=0.95)
:param num_simulations: The number of simulations to be performed to get the bootstrap confidence intervals.
:param sample_size_fraction: The fraction of the dataset to be resampled.
:returns: confidence interval at the specified level.
For more details on bootstrap or resampling statistics, refer to the following links:
https://ocw.mit.edu/courses/mathematics/18-05-introduction-to-probability-and-statistics-spring-2014/readings/MIT18_05S14_Reading24.pdf
https://projecteuclid.org/download/pdf_1/euclid.ss/1032280214
"""
# Using class default parameters if not specified
if num_simulations is None:
num_simulations = self.num_simulations
if sample_size_fraction is None:
sample_size_fraction = self.sample_size_fraction
# Checking if bootstrap_estimates are already computed
if self._bootstrap_estimates is None:
self._bootstrap_estimates = self._generate_bootstrap_estimates(num_simulations, sample_size_fraction)
elif CausalEstimator.is_bootstrap_parameter_changed(self._bootstrap_estimates.params, locals()):
# Checked if any parameter is changed from the previous std error estimate
self._bootstrap_estimates = self._generate_bootstrap_estimates(num_simulations, sample_size_fraction)
# Now use the data obtained from the simulations to get the value of the confidence estimates
bootstrap_estimates = self._bootstrap_estimates.estimates
# Get the variations of each bootstrap estimate and sort
bootstrap_variations = [bootstrap_estimate - estimate_value for bootstrap_estimate in bootstrap_estimates]
sorted_bootstrap_variations = np.sort(bootstrap_variations)
# Now we take the (1- p)th and the (p)th variations, where p is the chosen confidence level
upper_bound_index = int((1 - confidence_level) * len(sorted_bootstrap_variations))
lower_bound_index = int(confidence_level * len(sorted_bootstrap_variations))
# Get the lower and upper bounds by subtracting the variations from the estimate
lower_bound = estimate_value - sorted_bootstrap_variations[lower_bound_index]
upper_bound = estimate_value - sorted_bootstrap_variations[upper_bound_index]
return lower_bound, upper_bound
def _estimate_confidence_intervals(self, confidence_level=None, method=None, **kwargs):
"""
        This method is to be overridden by the child classes, so that they
can run a confidence interval estimation method suited to the specific
causal estimator.
"""
raise NotImplementedError(
(
"This method for estimating confidence intervals is "
+ CausalEstimator.DEFAULT_NOTIMPLEMENTEDERROR_MSG
+ " Meanwhile, you can try the bootstrap method (method='bootstrap') to estimate confidence intervals."
).format(self.__class__)
)
def estimate_confidence_intervals(self, estimate_value, confidence_level=None, method=None, **kwargs):
"""Find the confidence intervals corresponding to any estimator
By default, this is done with the help of bootstrapped confidence intervals
but can be overridden if the specific estimator implements other methods of estimating confidence intervals.
If the method provided is not bootstrap, this function calls the implementation of the specific estimator.
:param estimate_value: obtained estimate's value
:param method: Method for estimating confidence intervals.
:param confidence_level: The confidence level of the confidence intervals of the estimate.
:param kwargs: Other optional args to be passed to the CI method.
:returns: The obtained confidence interval.
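
        Example (an illustrative sketch; ``est`` is a fitted estimator and
        ``estimate`` a CausalEstimate obtained from it)::

            lower, upper = est.estimate_confidence_intervals(
                estimate.value, confidence_level=0.95, method="bootstrap"
            )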
"""
if method is None:
if self._confidence_intervals:
method = self._confidence_intervals # this is either True or methodname
else:
method = "default"
confidence_intervals = None
if confidence_level is None:
confidence_level = self.confidence_level
if method == "default" or method is True: # user has not provided any method
try:
confidence_intervals = self._estimate_confidence_intervals(confidence_level, method=method, **kwargs)
except NotImplementedError:
confidence_intervals = self._estimate_confidence_intervals_with_bootstrap(
estimate_value, confidence_level, **kwargs
)
else:
if method == "bootstrap":
confidence_intervals = self._estimate_confidence_intervals_with_bootstrap(
estimate_value, confidence_level, **kwargs
)
else:
confidence_intervals = self._estimate_confidence_intervals(confidence_level, method=method, **kwargs)
return confidence_intervals
def _estimate_std_error_with_bootstrap(self, num_simulations=None, sample_size_fraction=None):
"""Compute standard error using the bootstrap method. Standard error
and confidence intervals use the same parameter num_simulations for
the number of bootstrap simulations.
:param num_simulations: Number of bootstrapped samples.
:param sample_size_fraction: Fraction of data to be resampled.
:returns: Standard error of the obtained estimate.
"""
# Use existing params, if new user defined params are not present
if num_simulations is None:
num_simulations = self.num_simulations
if sample_size_fraction is None:
sample_size_fraction = self.sample_size_fraction
# Checking if bootstrap_estimates are already computed
if self._bootstrap_estimates is None:
self._bootstrap_estimates = self._generate_bootstrap_estimates(num_simulations, sample_size_fraction)
elif CausalEstimator.is_bootstrap_parameter_changed(self._bootstrap_estimates.params, locals()):
# Check if any parameter is changed from the previous std error estimate
self._bootstrap_estimates = self._generate_bootstrap_estimates(num_simulations, sample_size_fraction)
std_error = np.std(self._bootstrap_estimates.estimates)
return std_error
def _estimate_std_error(self, method=None, **kwargs):
"""
        This method is to be overridden by the child classes, so that they
can run a standard error estimation method suited to the specific
causal estimator.
"""
raise NotImplementedError(
(
"This method for estimating standard errors is "
+ CausalEstimator.DEFAULT_NOTIMPLEMENTEDERROR_MSG
+ " Meanwhile, you can try the bootstrap method (method='bootstrap') to estimate standard errors."
).format(self.__class__)
)
def estimate_std_error(self, method=None, **kwargs):
"""Compute standard error of an obtained causal estimate.
:param method: Method for computing the standard error.
:param kwargs: Other optional parameters to be passed to the estimating method.
:returns: Standard error of the causal estimate.
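
        Example (an illustrative sketch)::

            std_err = est.estimate_std_error(method="bootstrap", num_simulations=100)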
"""
if method is None:
if self._confidence_intervals:
method = self._confidence_intervals
else:
method = "default"
std_error = None
if method == "default" or method is True: # user has not provided any method
try:
std_error = self._estimate_std_error(method, **kwargs)
except NotImplementedError:
std_error = self._estimate_std_error_with_bootstrap(**kwargs)
else:
if method == "bootstrap":
std_error = self._estimate_std_error_with_bootstrap(**kwargs)
else:
std_error = self._estimate_std_error(method, **kwargs)
return std_error
def _test_significance_with_bootstrap(self, estimate_value, num_null_simulations=None):
"""Test statistical significance of an estimate using the bootstrap method.
:param estimate_value: Obtained estimate's value
:param num_null_simulations: Number of simulations for the null hypothesis
:returns: p-value of the statistical significance test.
"""
# Use existing params, if new user defined params are not present
if num_null_simulations is None:
num_null_simulations = self.num_null_simulations
do_retest = self._bootstrap_null_estimates is None or CausalEstimator.is_bootstrap_parameter_changed(
self._bootstrap_null_estimates.params, locals()
)
if do_retest:
null_estimates = np.zeros(num_null_simulations)
for i in range(num_null_simulations):
new_outcome = np.random.permutation(self._outcome)
new_data = self._data.assign(dummy_outcome=new_outcome)
# self._outcome = self._data["dummy_outcome"]
new_estimator = self.get_new_estimator_object(
self._target_estimand,
test_significance=False,
evaluate_effect_strength=False,
confidence_intervals=False,
)
new_estimator.fit(
data=new_data,
treatment_name=self._target_estimand.treatment_variable,
outcome_name=("dummy_outcome",),
effect_modifier_names=self._effect_modifier_names,
)
new_effect = new_estimator.estimate_effect(
target_units=self._target_units,
)
null_estimates[i] = new_effect.value
self._bootstrap_null_estimates = CausalEstimator.BootstrapEstimates(
null_estimates, {"num_null_simulations": num_null_simulations, "sample_size_fraction": 1}
)
# Processing the null hypothesis estimates
sorted_null_estimates = np.sort(self._bootstrap_null_estimates.estimates)
self.logger.debug("Null estimates: {0}".format(sorted_null_estimates))
median_estimate = sorted_null_estimates[int(num_null_simulations / 2)]
# Doing a two-sided test
        if estimate_value > median_estimate:
            # Being conservative with the p-value reported
            estimate_index = np.searchsorted(sorted_null_estimates, estimate_value, side="left")
            p_value = 1 - (estimate_index / num_null_simulations)
        else:
            # Being conservative with the p-value reported
            estimate_index = np.searchsorted(sorted_null_estimates, estimate_value, side="right")
            p_value = estimate_index / num_null_simulations
# If the estimate_index is 0, it depends on the number of simulations
if p_value == 0:
p_value = (0, 1 / len(sorted_null_estimates)) # a tuple determining the range.
elif p_value == 1:
p_value = (1 - 1 / len(sorted_null_estimates), 1)
signif_dict = {"p_value": p_value}
return signif_dict
def _test_significance(self, estimate_value, method=None, **kwargs):
"""
        This method is to be overridden by the child classes, so that they
can run a significance test suited to the specific
causal estimator.
"""
raise NotImplementedError(
(
"This method for testing statistical significance is "
+ CausalEstimator.DEFAULT_NOTIMPLEMENTEDERROR_MSG
+ " Meanwhile, you can try the bootstrap method (method='bootstrap') to test statistical significance."
).format(self.__class__)
)
def test_significance(self, estimate_value, method=None, **kwargs):
"""Test statistical significance of obtained estimate.
By default, uses resampling to create a non-parametric significance test.
A general procedure. Individual child estimators can implement different methods.
If the method name is different from "bootstrap", this function calls the
implementation of the child estimator.
:param self: object instance of class Estimator
:param estimate_value: obtained estimate's value
:param method: Method for checking statistical significance
:returns: p-value from the significance test
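
        Example (an illustrative sketch)::

            signif = est.test_significance(estimate.value, method="bootstrap")
            p_value = signif["p_value"]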
"""
if method is None:
if self._significance_test:
method = self._significance_test # this is either True or methodname
else:
method = "default"
signif_dict = None
if method == "default" or method is True: # user has not provided any method
try:
signif_dict = self._test_significance(estimate_value, method, **kwargs)
except NotImplementedError:
signif_dict = self._test_significance_with_bootstrap(estimate_value, **kwargs)
else:
if method == "bootstrap":
signif_dict = self._test_significance_with_bootstrap(estimate_value, **kwargs)
else:
signif_dict = self._test_significance(estimate_value, method, **kwargs)
return signif_dict
def evaluate_effect_strength(self, estimate):
fraction_effect_explained = self._evaluate_effect_strength(estimate, method="fraction-effect")
# Need to test r-squared before supporting
# effect_r_squared = self._evaluate_effect_strength(estimate, method="r-squared")
strength_dict = {
"fraction-effect": fraction_effect_explained
# 'r-squared': effect_r_squared
}
return strength_dict
def _evaluate_effect_strength(self, estimate, method="fraction-effect"):
supported_methods = ["fraction-effect"]
if method not in supported_methods:
raise NotImplementedError("This method is not supported for evaluating effect strength")
if method == "fraction-effect":
naive_obs_estimate = self.estimate_effect_naive()
            self.logger.debug("Estimate: %s, Naive observed estimate: %s", estimate.value, naive_obs_estimate.value)
fraction_effect_explained = estimate.value / naive_obs_estimate.value
return fraction_effect_explained
# elif method == "r-squared":
# outcome_mean = np.mean(self._outcome)
# total_variance = np.sum(np.square(self._outcome - outcome_mean))
# Assuming a linear model with one variable: the treatment
# Currently only works for continuous y
# causal_model = outcome_mean + estimate.value*self._treatment
# squared_residual = np.sum(np.square(self._outcome - causal_model))
# r_squared = 1 - (squared_residual/total_variance)
# return r_squared
else:
return None
def update_input(self, treatment_value, control_value, target_units):
self._control_value = control_value
self._treatment_value = treatment_value
self._target_units = target_units
@staticmethod
def is_bootstrap_parameter_changed(bootstrap_estimates_params, given_params):
"""Check whether parameters of the bootstrap have changed.
This is an efficiency method that checks if fresh resampling of the bootstrap samples is required.
Returns True if parameters have changed and resampling should be done again.
:param bootstrap_estimates_params: A dictionary of parameters for the current bootstrap samples
:param given_params: A dictionary of parameters passed by the user
:returns: A binary flag denoting whether the parameters are different.
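
        Example (an illustrative sketch)::

            CausalEstimator.is_bootstrap_parameter_changed(
                {"num_simulations": 100}, {"num_simulations": 200}
            )  # returns True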
"""
is_any_parameter_changed = False
for prm, val in bootstrap_estimates_params.items():
given_val = given_params.get(prm, None)
if given_val is not None and given_val != val:
is_any_parameter_changed = True
break
return is_any_parameter_changed
def target_units_tostr(self):
s = ""
if type(self._target_units) is str:
s += self._target_units
elif callable(self._target_units):
s += "Data subset defined by a function"
elif isinstance(self._target_units, pd.DataFrame):
s += "Data subset provided as a data frame"
return s
def signif_results_tostr(self, signif_results):
s = ""
pval = signif_results["p_value"]
if type(pval) is tuple:
s += "[{0}, {1}]".format(pval[0], pval[1])
else:
s += "{0}".format(pval)
return s
def estimate_effect(
data: pd.DataFrame,
treatment: Union[str, List[str]],
outcome: Union[str, List[str]],
identifier_name: str,
estimator: CausalEstimator,
control_value: int = 0,
treatment_value: int = 1,
target_units: str = "ate",
effect_modifiers: List[str] = None,
fit_estimator: bool = True,
method_params: Optional[Dict] = None,
):
"""Estimate the identified causal effect.
In addition, you can directly call any of the EconML estimation methods. The convention is "backdoor.econml.path-to-estimator-class". For example, for the double machine learning estimator ("DML" class) that is located inside "dml" module of EconML, you can use the method name, "backdoor.econml.dml.DML". CausalML estimators can also be called. See `this demo notebook <https://py-why.github.io/dowhy/example_notebooks/dowhy-conditional-treatment-effects.html>`_.
    :param data: data frame containing the data
    :param treatment: Name of the treatment
    :param outcome: Name of the outcome
    :param identifier_name: Name of the identification method (e.g., "backdoor") whose estimand is to be estimated
    :param estimator: Instance of a CausalEstimator to use
:param control_value: Value of the treatment in the control group, for effect estimation. If treatment is multi-variate, this can be a list.
:param treatment_value: Value of the treatment in the treated group, for effect estimation. If treatment is multi-variate, this can be a list.
:param target_units: (Experimental) The units for which the treatment effect should be estimated. This can be of three types. (1) a string for common specifications of target units (namely, "ate", "att" and "atc"), (2) a lambda function that can be used as an index for the data (pandas DataFrame), or (3) a new DataFrame that contains values of the effect_modifiers and effect will be estimated only for this new data.
:param effect_modifiers: Names of effect modifier variables can be (optionally) specified here too, since they do not affect identification. If None, the effect_modifiers from the CausalModel are used.
    :param fit_estimator: Boolean flag on whether to fit the estimator.
        Setting it to False is useful to estimate the effect on new data using a previously fitted estimator.
    :param method_params: (optional) Dictionary of method-specific parameters; if it contains a "fit_params" entry, those parameters are forwarded to the estimator's fit() method.
:returns: An instance of the CausalEstimate class, containing the causal effect estimate
and other method-dependent information
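
    Example (an illustrative sketch; ``df`` and ``identified_estimand`` are assumed
    to exist, and the exact estimator constructor signature may differ)::

        from dowhy.causal_estimators.propensity_score_matching_estimator import (
            PropensityScoreMatchingEstimator,
        )

        estimator = PropensityScoreMatchingEstimator(identified_estimand)
        estimate = estimate_effect(
            data=df,
            treatment="treatment",
            outcome="outcome",
            identifier_name="backdoor",
            estimator=estimator,
        )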
"""
if effect_modifiers is None:
effect_modifiers = []
treatment = parse_state(treatment)
outcome = parse_state(outcome)
causal_estimator_class = estimator.__class__
identified_estimand = estimator._target_estimand
identified_estimand.set_identifier_method(identifier_name)
if identified_estimand.no_directed_path:
logger.warning("No directed path from {0} to {1}.".format(treatment, outcome))
return CausalEstimate(
0, identified_estimand, None, control_value=control_value, treatment_value=treatment_value
)
# Check if estimator's target estimand is identified
elif identified_estimand.estimands[identifier_name] is None:
logger.error("No valid identified estimand available.")
return CausalEstimate(None, None, None, control_value=control_value, treatment_value=treatment_value)
if fit_estimator:
estimator.fit(
data=data,
treatment_name=treatment,
outcome_name=outcome,
effect_modifier_names=effect_modifiers,
**method_params["fit_params"] if "fit_params" in method_params else {},
)
estimate = estimator.estimate_effect(
treatment_value=treatment_value,
control_value=control_value,
target_units=target_units,
confidence_intervals=estimator._confidence_intervals,
)
# Store parameters inside estimate object for refutation methods
# TODO: This add_params needs to move to the estimator class
# inside estimate_effect and estimate_conditional_effect
estimate.add_params(
estimand_type=identified_estimand.estimand_type,
estimator_class=causal_estimator_class,
test_significance=estimator._significance_test,
evaluate_effect_strength=estimator._effect_strength_eval,
confidence_intervals=estimator._confidence_intervals,
target_units=target_units,
effect_modifiers=effect_modifiers,
)
if estimator._significance_test:
estimator.test_significance(estimate.value, method=estimator._significance_test)
if estimator._confidence_intervals:
estimator.estimate_confidence_intervals(
estimate.value, confidence_level=estimator.confidence_level, method=estimator._confidence_intervals
)
if estimator._effect_strength_eval:
effect_strength_dict = estimator.evaluate_effect_strength(estimate)
estimate.add_effect_strength(effect_strength_dict)
return estimate
class CausalEstimate:
"""Class for the estimate object that every causal estimator returns"""
def __init__(
self,
estimate,
target_estimand,
realized_estimand_expr,
control_value,
treatment_value,
conditional_estimates=None,
**kwargs,
):
self.value = estimate
self.target_estimand = target_estimand
self.realized_estimand_expr = realized_estimand_expr
self.control_value = control_value
self.treatment_value = treatment_value
self.conditional_estimates = conditional_estimates
self.params = kwargs
if self.params is not None:
for key, value in self.params.items():
setattr(self, key, value)
self.effect_strength = None
def add_estimator(self, estimator_instance):
self.estimator = estimator_instance
def add_effect_strength(self, strength_dict):
self.effect_strength = strength_dict
def add_params(self, **kwargs):
self.params.update(kwargs)
def get_confidence_intervals(self, confidence_level=None, method=None, **kwargs):
"""Get confidence intervals of the obtained estimate.
By default, this is done with the help of bootstrapped confidence intervals
but can be overridden if the specific estimator implements other methods of estimating confidence intervals.
If the method provided is not bootstrap, this function calls the implementation of the specific estimator.
:param method: Method for estimating confidence intervals.
:param confidence_level: The confidence level of the confidence intervals of the estimate.
:param kwargs: Other optional args to be passed to the CI method.
:returns: The obtained confidence interval.
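
        Example (an illustrative sketch)::

            ci = estimate.get_confidence_intervals(confidence_level=0.95, method="bootstrap")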
"""
confidence_intervals = self.estimator.estimate_confidence_intervals(
estimate_value=self.value, confidence_level=confidence_level, method=method, **kwargs
)
return confidence_intervals
def get_standard_error(self, method=None, **kwargs):
"""Get standard error of the obtained estimate.
By default, this is done with the help of bootstrapped standard errors
but can be overridden if the specific estimator implements other methods of estimating standard error.
If the method provided is not bootstrap, this function calls the implementation of the specific estimator.
:param method: Method for computing the standard error.
:param kwargs: Other optional parameters to be passed to the estimating method.
:returns: Standard error of the causal estimate.
"""
std_error = self.estimator.estimate_std_error(method=method, **kwargs)
return std_error
def test_stat_significance(self, method=None, **kwargs):
"""Test statistical significance of the estimate obtained.
By default, uses resampling to create a non-parametric significance test.
Individual child estimators can implement different methods.
If the method name is different from "bootstrap", this function calls the
implementation of the child estimator.
:param method: Method for checking statistical significance
:param kwargs: Other optional parameters to be passed to the estimating method.
:returns: p-value from the significance test
"""
signif_results = self.estimator.test_significance(self.value, method=method, **kwargs)
return {"p_value": signif_results["p_value"]}
def estimate_conditional_effects(
self, effect_modifiers=None, num_quantiles=CausalEstimator.NUM_QUANTILES_TO_DISCRETIZE_CONT_COLS
):
"""Estimate treatment effect conditioned on given variables.
If a numeric effect modifier is provided, it is discretized into quantile bins. If you would like a custom discretization, you can do so yourself: create a new column containing the discretized effect modifier and then include that column's name in the effect_modifier_names argument.
:param effect_modifiers: Names of effect modifier variables over which the conditional effects will be estimated. If not provided, defaults to the effect modifiers specified during creation of the CausalEstimator object.
:param num_quantiles: The number of quantiles into which a numeric effect modifier variable is discretized. Does not affect any categorical effect modifiers.
:returns: A (multi-index) dataframe that provides separate effects for each value of the (discretized) effect modifiers.
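
        Example (an illustrative sketch; ``"X"`` is a hypothetical effect modifier column)::

            cond_effects = estimate.estimate_conditional_effects(effect_modifiers=["X"], num_quantiles=5)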
"""
return self.estimator._estimate_conditional_effects(
self.estimator._estimate_effect_fn, effect_modifiers, num_quantiles
)
def interpret(self, method_name=None, **kwargs):
"""Interpret the causal estimate.
:param method_name: Method used (string) or a list of methods. If None, then the default for the specific estimator is used.
        :param kwargs: Optional parameters that are directly passed to the interpreter method.
:returns: None
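
        Example (an illustrative sketch, using the default textual interpreter)::

            estimate.interpret(method_name="textual_effect_interpreter")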
"""
if method_name is None:
method_name = self.estimator.interpret_method
method_name_arr = parse_state(method_name)
for method in method_name_arr:
interpreter = interpreters.get_class_object(method)
interpreter(self, **kwargs).interpret()
def __str__(self):
s = "*** Causal Estimate ***\n"
# No estimand was identified (identification failed)
if self.target_estimand is None:
return "Estimation failed! No relevant identified estimand available for this estimation method."
s += "\n## Identified estimand\n{0}".format(self.target_estimand.__str__(only_target_estimand=True))
s += "\n## Realized estimand\n{0}".format(self.realized_estimand_expr)
if hasattr(self, "estimator"):
s += "\nTarget units: {0}\n".format(self.estimator.target_units_tostr())
s += "\n## Estimate\n"
s += "Mean value: {0}\n".format(self.value)
s += ""
if hasattr(self, "cate_estimates"):
s += "Effect estimates: {0}\n".format(self.cate_estimates)
if hasattr(self, "estimator"):
if self.estimator._significance_test:
s += "p-value: {0}\n".format(self.estimator.signif_results_tostr(self.test_stat_significance()))
if self.estimator._confidence_intervals:
s += "{0}% confidence interval: {1}\n".format(
100 * self.estimator.confidence_level, self.get_confidence_intervals()
)
if self.conditional_estimates is not None:
s += "### Conditional Estimates\n"
s += str(self.conditional_estimates)
if self.effect_strength is not None:
s += "\n## Effect Strength\n"
s += "Change in outcome attributable to treatment: {}\n".format(self.effect_strength["fraction-effect"])
# s += "Variance in outcome explained by treatment: {}\n".format(self.effect_strength["r-squared"])
return s
class RealizedEstimand(object):
def __init__(self, identified_estimand, estimator_name):
self.treatment_variable = identified_estimand.treatment_variable
self.outcome_variable = identified_estimand.outcome_variable
self.backdoor_variables = identified_estimand.get_backdoor_variables()
self.instrumental_variables = identified_estimand.instrumental_variables
self.estimand_type = identified_estimand.estimand_type
self.estimand_expression = None
self.assumptions = None
self.estimator_name = estimator_name
def update_assumptions(self, estimator_assumptions):
self.assumptions = estimator_assumptions
def update_estimand_expression(self, estimand_expression):
self.estimand_expression = estimand_expression
def __str__(self):
s = "Realized estimand: {0}\n".format(self.estimator_name)
s += "Realized estimand type: {0}\n".format(self.estimand_type)
s += "Estimand expression:\n{0}\n".format(sp.pretty(self.estimand_expression))
j = 1
for ass_name, ass_str in self.assumptions.items():
s += "Estimand assumption {0}, {1}: {2}\n".format(j, ass_name, ass_str)
j += 1
return s
| andresmor-ms | 11c4e0dafd6e824eb81ad14262457d954ae61468 | affe0952f4aba6845247355c171565510c2c1673 | Can we avoid this method? At least, I think we should not make it part of the contract of a `CausalEstimator`. If some implementation of `CausalEstimator` needs the data after `fit`, that specific implementation is still free to store it.
I think having such a method is too inviting to get into the business of very stateful objects. While an estimator is by definition stateful (it stores the artifacts coming out of training), I don't think there should be explicit methods for this kind of state management. | petergtz | 204 |
py-why/dowhy | 746 | Functional api/causal estimators | * Introduce `fit()` method to estimators.
* Refactor constructors to avoid using `*args` and `**kwargs` and have more explicit parameters.
* Refactor refuters and other parts of the code to use `fit()` and modify arguments to `estimate_effect()` | null | 2022-11-04 16:15:39+00:00 | 2022-12-03 17:07:53+00:00 | dowhy/causal_estimator.py | import logging
from collections import namedtuple
from typing import Dict, List, Optional, Union
import numpy as np
import pandas as pd
import sympy as sp
from sklearn.utils import resample
import dowhy.interpreters as interpreters
from dowhy import causal_estimators
from dowhy.causal_graph import CausalGraph
from dowhy.causal_identifier.identified_estimand import IdentifiedEstimand
from dowhy.utils.api import parse_state
logger = logging.getLogger(__name__)
class CausalEstimator:
"""Base class for an estimator of causal effect.
Subclasses implement different estimation methods. All estimation methods are in the package "dowhy.causal_estimators"
"""
# The default number of simulations for statistical testing
DEFAULT_NUMBER_OF_SIMULATIONS_STAT_TEST = 1000
# The default number of simulations to obtain confidence intervals
DEFAULT_NUMBER_OF_SIMULATIONS_CI = 100
# The portion of the total size that should be taken each time to find the confidence intervals
# 1 is the recommended value
# https://ocw.mit.edu/courses/mathematics/18-05-introduction-to-probability-and-statistics-spring-2014/readings/MIT18_05S14_Reading24.pdf
# https://projecteuclid.org/download/pdf_1/euclid.ss/1032280214
DEFAULT_SAMPLE_SIZE_FRACTION = 1
# The default Confidence Level
DEFAULT_CONFIDENCE_LEVEL = 0.95
# Number of quantiles to discretize continuous columns, for applying groupby
NUM_QUANTILES_TO_DISCRETIZE_CONT_COLS = 5
# Prefix to add to temporary categorical variables created after discretization
TEMP_CAT_COLUMN_PREFIX = "__categorical__"
DEFAULT_NOTIMPLEMENTEDERROR_MSG = "not yet implemented for {0}. If you would this to be implemented in the next version, please raise an issue at https://github.com/microsoft/dowhy/issues"
BootstrapEstimates = namedtuple("BootstrapEstimates", ["estimates", "params"])
DEFAULT_INTERPRET_METHOD = ["textual_effect_interpreter"]
# std args to be removed from locals() before being passed to args_dict
_STD_INIT_ARGS = ("self", "__class__", "args", "kwargs")
def __init__(
self,
data,
identified_estimand,
treatment,
outcome,
control_value=0,
treatment_value=1,
test_significance=False,
evaluate_effect_strength=False,
confidence_intervals=False,
target_units=None,
effect_modifiers=None,
num_null_simulations=DEFAULT_NUMBER_OF_SIMULATIONS_STAT_TEST,
num_simulations=DEFAULT_NUMBER_OF_SIMULATIONS_CI,
sample_size_fraction=DEFAULT_SAMPLE_SIZE_FRACTION,
confidence_level=DEFAULT_CONFIDENCE_LEVEL,
need_conditional_estimates="auto",
num_quantiles_to_discretize_cont_cols=NUM_QUANTILES_TO_DISCRETIZE_CONT_COLS,
**kwargs,
):
"""Initializes an estimator with data and names of relevant variables.
This method is called from the constructors of its child classes.
:param data: data frame containing the data
:param identified_estimand: probability expression
representing the target identified estimand to estimate.
:param treatment: name of the treatment variable
:param outcome: name of the outcome variable
:param control_value: Value of the treatment in the control group, for effect estimation. If treatment is multi-variate, this can be a list.
:param treatment_value: Value of the treatment in the treated group, for effect estimation. If treatment is multi-variate, this can be a list.
:param test_significance: Binary flag or a string indicating whether to test significance and by which method. All estimators support test_significance="bootstrap" that estimates a p-value for the obtained estimate using the bootstrap method. Individual estimators can override this to support custom testing methods. The bootstrap method supports an optional parameter, num_null_simulations. If False, no testing is done. If True, significance of the estimate is tested using the custom method if available, otherwise by bootstrap.
:param evaluate_effect_strength: (Experimental) whether to evaluate the strength of effect
:param confidence_intervals: Binary flag or a string indicating whether the confidence intervals should be computed and which method should be used. All methods support estimation of confidence intervals using the bootstrap method by using the parameter confidence_intervals="bootstrap". The bootstrap method takes in two arguments (num_simulations and sample_size_fraction) that can be optionally specified in the params dictionary. Estimators may also override this to implement their own confidence interval method. If this parameter is False, no confidence intervals are computed. If True, confidence intervals are computed by the estimator's specific method if available, otherwise through bootstrap.
:param target_units: The units for which the treatment effect should be estimated. This can be a string for common specifications of target units (namely, "ate", "att" and "atc"). It can also be a lambda function that can be used as an index for the data (pandas DataFrame). Alternatively, it can be a new DataFrame that contains values of the effect_modifiers and effect will be estimated only for this new data.
:param effect_modifiers: Variables on which to compute separate
effects, or return a heterogeneous effect function. Not all
methods support this currently.
:param num_null_simulations: The number of simulations for testing the
statistical significance of the estimator
:param num_simulations: The number of simulations for finding the
            confidence interval (and/or standard error) for an estimate
:param sample_size_fraction: The size of the sample for the bootstrap
estimator
:param confidence_level: The confidence level of the confidence
interval estimate
:param need_conditional_estimates: Boolean flag indicating whether
conditional estimates should be computed. Defaults to True if
there are effect modifiers in the graph
:param num_quantiles_to_discretize_cont_cols: The number of quantiles
into which a numeric effect modifier is split, to enable
estimation of conditional treatment effect over it.
:param kwargs: (optional) Additional estimator-specific parameters
:returns: an instance of the estimator class.
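
        Example (an illustrative sketch; ``SomeEstimatorSubclass`` stands in for a
        concrete subclass of CausalEstimator)::

            estimator = SomeEstimatorSubclass(
                data=df,
                identified_estimand=identified_estimand,
                treatment=["treatment"],
                outcome=["outcome"],
                test_significance="bootstrap",
            )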
"""
self._data = data
self._target_estimand = identified_estimand
# Currently estimation methods only support univariate treatment and outcome
self._treatment_name = treatment
self._outcome_name = outcome[0] # assuming one-dimensional outcome
self._control_value = control_value
self._treatment_value = treatment_value
self._significance_test = test_significance
self._effect_strength_eval = evaluate_effect_strength
self._target_units = target_units
self._effect_modifier_names = effect_modifiers
self._confidence_intervals = confidence_intervals
self._bootstrap_estimates = None # for confidence intervals and std error
self._bootstrap_null_estimates = None # for significance test
self._effect_modifiers = None
self.method_params = kwargs
# Setting the default interpret method
self.interpret_method = CausalEstimator.DEFAULT_INTERPRET_METHOD
self.logger = logging.getLogger(__name__)
# Setting treatment and outcome values
if self._data is not None:
self._treatment = self._data[self._treatment_name]
self._outcome = self._data[self._outcome_name]
if self._effect_modifier_names:
# only add the observed nodes
self._effect_modifier_names = [
cname for cname in self._effect_modifier_names if cname in self._data.columns
]
if len(self._effect_modifier_names) > 0:
self._effect_modifiers = self._data[self._effect_modifier_names]
self._effect_modifiers = pd.get_dummies(self._effect_modifiers, drop_first=True)
self.logger.debug("Effect modifiers: " + ",".join(self._effect_modifier_names))
else:
self._effect_modifier_names = None
# Check if some parameters were set, otherwise set to default values
self.num_null_simulations = num_null_simulations
self.num_simulations = num_simulations
self.sample_size_fraction = sample_size_fraction
self.confidence_level = confidence_level
self.num_quantiles_to_discretize_cont_cols = num_quantiles_to_discretize_cont_cols
# Estimate conditional estimates by default
self.need_conditional_estimates = (
need_conditional_estimates if need_conditional_estimates != "auto" else bool(self._effect_modifier_names)
)
@staticmethod
def get_estimator_object(new_data, identified_estimand, estimate):
"""Create a new estimator of the same type as the one passed in the estimate argument.
Creates a new object with new_data and the identified_estimand
:param new_data: np.ndarray, pd.Series, pd.DataFrame
The newly assigned data on which the estimator should run
:param identified_estimand: IdentifiedEstimand
An instance of the identified estimand class that provides the information with
            respect to which causal pathways are employed when the treatment affects the outcome
:param estimate: CausalEstimate
It is an already existing estimate whose properties we wish to replicate
:returns: An instance of the same estimator class that had generated the given estimate.
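
        Example (an illustrative sketch; ``resampled_df`` is a hypothetical resampled dataset)::

            new_estimator = CausalEstimator.get_estimator_object(resampled_df, identified_estimand, estimate)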
"""
estimator_class = estimate.params["estimator_class"]
new_estimator = estimator_class(
new_data,
identified_estimand,
identified_estimand.treatment_variable,
identified_estimand.outcome_variable,
# names of treatment and outcome
control_value=estimate.control_value,
treatment_value=estimate.treatment_value,
test_significance=False,
evaluate_effect_strength=False,
confidence_intervals=estimate.params["confidence_intervals"],
target_units=estimate.params["target_units"],
effect_modifiers=estimate.params["effect_modifiers"],
**estimate.params["method_params"] if estimate.params["method_params"] is not None else {},
)
return new_estimator
def _estimate_effect(self):
"""This method is to be overriden by the child classes, so that they can run the estimation technique of their choice"""
raise NotImplementedError(
("Main estimation method is " + CausalEstimator.DEFAULT_NOTIMPLEMENTEDERROR_MSG).format(self.__class__)
)
def estimate_effect(self):
"""Base estimation method that calls the estimate_effect method of its calling subclass.
Can optionally also test significance and estimate effect strength for any returned estimate.
:param self: object instance of class Estimator
        :returns: A CausalEstimate instance that contains point estimates of average and conditional effects. Based on the parameters provided, it optionally includes confidence intervals, standard errors, statistical significance and other statistical parameters.
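
        Example (an illustrative sketch; ``estimator`` is an instance of a concrete subclass)::

            estimate = estimator.estimate_effect()
            print(estimate.value)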
"""
est = self._estimate_effect()
est.add_estimator(self)
if self._significance_test:
self.test_significance(est.value, method=self._significance_test)
if self._confidence_intervals:
self.estimate_confidence_intervals(
est.value, confidence_level=self.confidence_level, method=self._confidence_intervals
)
if self._effect_strength_eval:
effect_strength_dict = self.evaluate_effect_strength(est)
est.add_effect_strength(effect_strength_dict)
return est
def estimate_effect_naive(self):
# TODO Only works for binary treatment
df_withtreatment = self._data.loc[self._data[self._treatment_name] == 1]
df_notreatment = self._data.loc[self._data[self._treatment_name] == 0]
est = np.mean(df_withtreatment[self._outcome_name]) - np.mean(df_notreatment[self._outcome_name])
return CausalEstimate(est, None, None, control_value=0, treatment_value=1)
def _estimate_effect_fn(self, data_df):
"""Function used in conditional effect estimation. This function is to be overridden by each child estimator.
The overridden function should take in a dataframe as input and return the estimate for that data.
"""
raise NotImplementedError(
("Conditional treatment effects are " + CausalEstimator.DEFAULT_NOTIMPLEMENTEDERROR_MSG).format(
self.__class__
)
)
def _estimate_conditional_effects(self, estimate_effect_fn, effect_modifier_names=None, num_quantiles=None):
"""Estimate conditional treatment effects. Common method for all estimators that utilizes a specific estimate_effect_fn implemented by each child estimator.
If a numeric effect modifier is provided, it is discretized into quantile bins. If you would like a custom discretization, you can do so yourself: create a new column containing the discretized effect modifier and then include that column's name in the effect_modifier_names argument.
:param estimate_effect_fn: Function that has a single parameter (a data frame) and returns the treatment effect estimate on that data.
:param effect_modifier_names: Names of effect modifier variables over which the conditional effects will be estimated. If not provided, defaults to the effect modifiers specified during creation of the CausalEstimator object.
:param num_quantiles: The number of quantiles into which a numeric effect modifier variable is discretized. Does not affect any categorical effect modifiers.
:returns: A (multi-index) dataframe that provides separate effects for each value of the (discretized) effect modifiers.
"""
# Defaulting to class default values if parameters are not provided
if effect_modifier_names is None:
effect_modifier_names = self._effect_modifier_names
if num_quantiles is None:
num_quantiles = self.num_quantiles_to_discretize_cont_cols
# Checking that there is at least one effect modifier
if not effect_modifier_names:
raise ValueError("At least one effect modifier should be specified to compute conditional effects.")
# Making sure that effect_modifier_names is a list
effect_modifier_names = parse_state(effect_modifier_names)
if not all(em in self._effect_modifier_names for em in effect_modifier_names):
            self.logger.warning(
                "At least one of the provided effect modifiers was not included while fitting the estimator. You may get incorrect results. To resolve, fit the estimator again by providing the updated effect modifiers in estimate_effect()."
            )
# Making a copy since we are going to be changing effect modifier names
effect_modifier_names = effect_modifier_names.copy()
prefix = CausalEstimator.TEMP_CAT_COLUMN_PREFIX
# For every numeric effect modifier, adding a temp categorical column
for i in range(len(effect_modifier_names)):
em = effect_modifier_names[i]
if pd.api.types.is_numeric_dtype(self._data[em].dtypes):
self._data[prefix + str(em)] = pd.qcut(self._data[em], num_quantiles, duplicates="drop")
effect_modifier_names[i] = prefix + str(em)
# Grouping by effect modifiers and computing effect separately
by_effect_mods = self._data.groupby(effect_modifier_names)
        conditional_estimates = by_effect_mods.apply(estimate_effect_fn)
# Deleting the temporary categorical columns
for em in effect_modifier_names:
if em.startswith(prefix):
self._data.pop(em)
return conditional_estimates
def _do(self, x, data_df=None):
raise NotImplementedError(
("Do-operator is " + CausalEstimator.DEFAULT_NOTIMPLEMENTEDERROR_MSG).format(self.__class__)
)
def do(self, x, data_df=None):
"""Method that implements the do-operator.
Given a value x for the treatment, returns the expected value of the outcome when the treatment is intervened to a value x.
:param x: Value of the treatment
:param data_df: Data on which the do-operator is to be applied.
:returns: Value of the outcome when treatment is intervened/set to x.
"""
est = self._do(x, data_df)
return est
def construct_symbolic_estimator(self, estimand):
raise NotImplementedError(("Symbolic estimator string is ").format(self.__class__))
def _generate_bootstrap_estimates(self, num_bootstrap_simulations, sample_size_fraction):
"""Helper function to generate causal estimates over bootstrapped samples.
:param num_bootstrap_simulations: Number of simulations for the bootstrap method.
:param sample_size_fraction: Fraction of the dataset to be resampled.
:returns: A collections.namedtuple containing a list of bootstrapped estimates and a dictionary containing parameters used for the bootstrap.
"""
# The array that stores the results of all estimations
simulation_results = np.zeros(num_bootstrap_simulations)
        # Find the sample size as the given fraction of the population (full dataset) size
sample_size = int(sample_size_fraction * len(self._data))
if sample_size > len(self._data):
self.logger.warning("WARN: The sample size is greater than the data being sampled")
self.logger.info("INFO: The sample size: {}".format(sample_size))
self.logger.info("INFO: The number of simulations: {}".format(num_bootstrap_simulations))
# Perform the set number of simulations
for index in range(num_bootstrap_simulations):
new_data = resample(self._data, n_samples=sample_size)
new_estimator = type(self)(
new_data,
self._target_estimand,
self._target_estimand.treatment_variable,
self._target_estimand.outcome_variable,
# names of treatment and outcome
treatment_value=self._treatment_value,
control_value=self._control_value,
test_significance=False,
evaluate_effect_strength=False,
confidence_intervals=False,
target_units=self._target_units,
effect_modifiers=self._effect_modifier_names,
**self.method_params,
)
new_effect = new_estimator.estimate_effect()
simulation_results[index] = new_effect.value
estimates = CausalEstimator.BootstrapEstimates(
simulation_results,
{"num_simulations": num_bootstrap_simulations, "sample_size_fraction": sample_size_fraction},
)
return estimates
def _estimate_confidence_intervals_with_bootstrap(
self, estimate_value, confidence_level=None, num_simulations=None, sample_size_fraction=None
):
"""
Method to compute confidence interval using bootstrapped sampling.
:param estimate_value: obtained estimate's value
:param confidence_level: The level for which to compute CI (e.g., 95% confidence level translates to confidence_level=0.95)
:param num_simulations: The number of simulations to be performed to get the bootstrap confidence intervals.
:param sample_size_fraction: The fraction of the dataset to be resampled.
:returns: confidence interval at the specified level.
For more details on bootstrap or resampling statistics, refer to the following links:
https://ocw.mit.edu/courses/mathematics/18-05-introduction-to-probability-and-statistics-spring-2014/readings/MIT18_05S14_Reading24.pdf
https://projecteuclid.org/download/pdf_1/euclid.ss/1032280214
"""
# Using class default parameters if not specified
if num_simulations is None:
num_simulations = self.num_simulations
if sample_size_fraction is None:
sample_size_fraction = self.sample_size_fraction
# Checking if bootstrap_estimates are already computed
if self._bootstrap_estimates is None:
self._bootstrap_estimates = self._generate_bootstrap_estimates(num_simulations, sample_size_fraction)
elif CausalEstimator.is_bootstrap_parameter_changed(self._bootstrap_estimates.params, locals()):
# Checked if any parameter is changed from the previous std error estimate
self._bootstrap_estimates = self._generate_bootstrap_estimates(num_simulations, sample_size_fraction)
# Now use the data obtained from the simulations to get the value of the confidence estimates
bootstrap_estimates = self._bootstrap_estimates.estimates
# Get the variations of each bootstrap estimate and sort
bootstrap_variations = [bootstrap_estimate - estimate_value for bootstrap_estimate in bootstrap_estimates]
sorted_bootstrap_variations = np.sort(bootstrap_variations)
# Now we take the (1- p)th and the (p)th variations, where p is the chosen confidence level
upper_bound_index = int((1 - confidence_level) * len(sorted_bootstrap_variations))
lower_bound_index = int(confidence_level * len(sorted_bootstrap_variations))
# Get the lower and upper bounds by subtracting the variations from the estimate
lower_bound = estimate_value - sorted_bootstrap_variations[lower_bound_index]
upper_bound = estimate_value - sorted_bootstrap_variations[upper_bound_index]
return lower_bound, upper_bound
def _estimate_confidence_intervals(self, confidence_level=None, method=None, **kwargs):
"""
        This method is to be overridden by the child classes, so that they
can run a confidence interval estimation method suited to the specific
causal estimator.
"""
raise NotImplementedError(
(
"This method for estimating confidence intervals is "
+ CausalEstimator.DEFAULT_NOTIMPLEMENTEDERROR_MSG
+ " Meanwhile, you can try the bootstrap method (method='bootstrap') to estimate confidence intervals."
).format(self.__class__)
)
def estimate_confidence_intervals(self, estimate_value, confidence_level=None, method=None, **kwargs):
"""Find the confidence intervals corresponding to any estimator
By default, this is done with the help of bootstrapped confidence intervals
but can be overridden if the specific estimator implements other methods of estimating confidence intervals.
If the method provided is not bootstrap, this function calls the implementation of the specific estimator.
:param estimate_value: obtained estimate's value
:param method: Method for estimating confidence intervals.
:param confidence_level: The confidence level of the confidence intervals of the estimate.
:param kwargs: Other optional args to be passed to the CI method.
:returns: The obtained confidence interval.
"""
if method is None:
if self._confidence_intervals:
method = self._confidence_intervals # this is either True or methodname
else:
method = "default"
confidence_intervals = None
if confidence_level is None:
confidence_level = self.confidence_level
if method == "default" or method is True: # user has not provided any method
try:
confidence_intervals = self._estimate_confidence_intervals(confidence_level, method=method, **kwargs)
except NotImplementedError:
confidence_intervals = self._estimate_confidence_intervals_with_bootstrap(
estimate_value, confidence_level, **kwargs
)
else:
if method == "bootstrap":
confidence_intervals = self._estimate_confidence_intervals_with_bootstrap(
estimate_value, confidence_level, **kwargs
)
else:
confidence_intervals = self._estimate_confidence_intervals(confidence_level, method=method, **kwargs)
return confidence_intervals
def _estimate_std_error_with_bootstrap(self, num_simulations=None, sample_size_fraction=None):
"""Compute standard error using the bootstrap method. Standard error
and confidence intervals use the same parameter num_simulations for
the number of bootstrap simulations.
:param num_simulations: Number of bootstrapped samples.
:param sample_size_fraction: Fraction of data to be resampled.
:returns: Standard error of the obtained estimate.
"""
# Use existing params, if new user defined params are not present
if num_simulations is None:
num_simulations = self.num_simulations
if sample_size_fraction is None:
sample_size_fraction = self.sample_size_fraction
# Checking if bootstrap_estimates are already computed
if self._bootstrap_estimates is None:
self._bootstrap_estimates = self._generate_bootstrap_estimates(num_simulations, sample_size_fraction)
elif CausalEstimator.is_bootstrap_parameter_changed(self._bootstrap_estimates.params, locals()):
# Check if any parameter is changed from the previous std error estimate
self._bootstrap_estimates = self._generate_bootstrap_estimates(num_simulations, sample_size_fraction)
std_error = np.std(self._bootstrap_estimates.estimates)
return std_error
def _estimate_std_error(self, method=None, **kwargs):
"""
        This method is to be overridden by the child classes, so that they
can run a standard error estimation method suited to the specific
causal estimator.
"""
raise NotImplementedError(
(
"This method for estimating standard errors is "
+ CausalEstimator.DEFAULT_NOTIMPLEMENTEDERROR_MSG
+ " Meanwhile, you can try the bootstrap method (method='bootstrap') to estimate standard errors."
).format(self.__class__)
)
def estimate_std_error(self, method=None, **kwargs):
"""Compute standard error of an obtained causal estimate.
:param method: Method for computing the standard error.
:param kwargs: Other optional parameters to be passed to the estimating method.
:returns: Standard error of the causal estimate.
"""
if method is None:
if self._confidence_intervals:
method = self._confidence_intervals
else:
method = "default"
std_error = None
if method == "default" or method is True: # user has not provided any method
try:
std_error = self._estimate_std_error(method, **kwargs)
except NotImplementedError:
std_error = self._estimate_std_error_with_bootstrap(**kwargs)
else:
if method == "bootstrap":
std_error = self._estimate_std_error_with_bootstrap(**kwargs)
else:
std_error = self._estimate_std_error(method, **kwargs)
return std_error
def _test_significance_with_bootstrap(self, estimate_value, num_null_simulations=None):
"""Test statistical significance of an estimate using the bootstrap method.
:param estimate_value: Obtained estimate's value
:param num_null_simulations: Number of simulations for the null hypothesis
:returns: p-value of the statistical significance test.
"""
# Use existing params, if new user defined params are not present
if num_null_simulations is None:
num_null_simulations = self.num_null_simulations
do_retest = self._bootstrap_null_estimates is None or CausalEstimator.is_bootstrap_parameter_changed(
self._bootstrap_null_estimates.params, locals()
)
if do_retest:
null_estimates = np.zeros(num_null_simulations)
for i in range(num_null_simulations):
new_outcome = np.random.permutation(self._outcome)
new_data = self._data.assign(dummy_outcome=new_outcome)
# self._outcome = self._data["dummy_outcome"]
new_estimator = type(self)(
new_data,
self._target_estimand,
self._target_estimand.treatment_variable,
("dummy_outcome",),
test_significance=False,
evaluate_effect_strength=False,
confidence_intervals=False,
target_units=self._target_units,
effect_modifiers=self._effect_modifier_names,
**self.method_params,
)
new_effect = new_estimator.estimate_effect()
null_estimates[i] = new_effect.value
self._bootstrap_null_estimates = CausalEstimator.BootstrapEstimates(
null_estimates, {"num_null_simulations": num_null_simulations, "sample_size_fraction": 1}
)
# Processing the null hypothesis estimates
sorted_null_estimates = np.sort(self._bootstrap_null_estimates.estimates)
self.logger.debug("Null estimates: {0}".format(sorted_null_estimates))
median_estimate = sorted_null_estimates[int(num_null_simulations / 2)]
# Doing a two-sided test
        if estimate_value > median_estimate:
            # Being conservative with the p-value reported
            estimate_index = np.searchsorted(sorted_null_estimates, estimate_value, side="left")
            p_value = 1 - (estimate_index / num_null_simulations)
        else:
            # Being conservative with the p-value reported
            estimate_index = np.searchsorted(sorted_null_estimates, estimate_value, side="right")
            p_value = estimate_index / num_null_simulations
# If the estimate_index is 0, it depends on the number of simulations
if p_value == 0:
p_value = (0, 1 / len(sorted_null_estimates)) # a tuple determining the range.
elif p_value == 1:
p_value = (1 - 1 / len(sorted_null_estimates), 1)
signif_dict = {"p_value": p_value}
return signif_dict
def _test_significance(self, estimate_value, method=None, **kwargs):
"""
        This method is to be overridden by the child classes, so that they
can run a significance test suited to the specific
causal estimator.
"""
raise NotImplementedError(
(
"This method for testing statistical significance is "
+ CausalEstimator.DEFAULT_NOTIMPLEMENTEDERROR_MSG
+ " Meanwhile, you can try the bootstrap method (method='bootstrap') to test statistical significance."
).format(self.__class__)
)
def test_significance(self, estimate_value, method=None, **kwargs):
"""Test statistical significance of obtained estimate.
By default, uses resampling to create a non-parametric significance test.
A general procedure. Individual child estimators can implement different methods.
If the method name is different from "bootstrap", this function calls the
implementation of the child estimator.
:param self: object instance of class Estimator
:param estimate_value: obtained estimate's value
:param method: Method for checking statistical significance
:returns: p-value from the significance test
"""
if method is None:
if self._significance_test:
method = self._significance_test # this is either True or methodname
else:
method = "default"
signif_dict = None
if method == "default" or method is True: # user has not provided any method
try:
signif_dict = self._test_significance(estimate_value, method, **kwargs)
except NotImplementedError:
signif_dict = self._test_significance_with_bootstrap(estimate_value, **kwargs)
else:
if method == "bootstrap":
signif_dict = self._test_significance_with_bootstrap(estimate_value, **kwargs)
else:
signif_dict = self._test_significance(estimate_value, method, **kwargs)
return signif_dict
def evaluate_effect_strength(self, estimate):
fraction_effect_explained = self._evaluate_effect_strength(estimate, method="fraction-effect")
# Need to test r-squared before supporting
# effect_r_squared = self._evaluate_effect_strength(estimate, method="r-squared")
strength_dict = {
"fraction-effect": fraction_effect_explained
# 'r-squared': effect_r_squared
}
return strength_dict
def _evaluate_effect_strength(self, estimate, method="fraction-effect"):
supported_methods = ["fraction-effect"]
if method not in supported_methods:
raise NotImplementedError("This method is not supported for evaluating effect strength")
if method == "fraction-effect":
naive_obs_estimate = self.estimate_effect_naive()
            self.logger.debug("Estimate: %s, Naive observed estimate: %s", estimate.value, naive_obs_estimate.value)
fraction_effect_explained = estimate.value / naive_obs_estimate.value
return fraction_effect_explained
# elif method == "r-squared":
# outcome_mean = np.mean(self._outcome)
# total_variance = np.sum(np.square(self._outcome - outcome_mean))
# Assuming a linear model with one variable: the treatment
# Currently only works for continuous y
# causal_model = outcome_mean + estimate.value*self._treatment
# squared_residual = np.sum(np.square(self._outcome - causal_model))
# r_squared = 1 - (squared_residual/total_variance)
# return r_squared
else:
return None
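# Editor's sketch (illustrative numbers only) of the fraction-effect measure:
# causal_estimate_value, naive_estimate_value = 4.0, 5.0
# fraction_effect_explained = causal_estimate_value / naive_estimate_value  # 0.8
# i.e., 80% of the naive observed difference is attributable to the treatment
# under this estimator.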
def update_input(self, treatment_value, control_value, target_units):
self._control_value = control_value
self._treatment_value = treatment_value
self._target_units = target_units
@staticmethod
def is_bootstrap_parameter_changed(bootstrap_estimates_params, given_params):
"""Check whether parameters of the bootstrap have changed.
This is an efficiency method that checks if fresh resampling of the bootstrap samples is required.
Returns True if parameters have changed and resampling should be done again.
:param bootstrap_estimates_params: A dictionary of parameters for the current bootstrap samples
:param given_params: A dictionary of parameters passed by the user
:returns: A binary flag denoting whether the parameters are different.
"""
is_any_parameter_changed = False
for prm, val in bootstrap_estimates_params.items():
given_val = given_params.get(prm, None)
if given_val is not None and given_val != val:
is_any_parameter_changed = True
break
return is_any_parameter_changed
def target_units_tostr(self):
s = ""
if type(self._target_units) is str:
s += self._target_units
elif callable(self._target_units):
s += "Data subset defined by a function"
elif isinstance(self._target_units, pd.DataFrame):
s += "Data subset provided as a data frame"
return s
def signif_results_tostr(self, signif_results):
s = ""
pval = signif_results["p_value"]
if type(pval) is tuple:
s += "[{0}, {1}]".format(pval[0], pval[1])
else:
s += "{0}".format(pval)
return s
def estimate_effect(
treatment: Union[str, List[str]],
outcome: Union[str, List[str]],
identified_estimand: IdentifiedEstimand,
identifier_name: str,
method: CausalEstimator,
control_value: int = 0,
treatment_value: int = 1,
test_significance: Optional[bool] = None,
evaluate_effect_strength: bool = False,
confidence_intervals: bool = False,
target_units: str = "ate",
effect_modifiers: List[str] = [],
fit_estimator: bool = True,
method_params: Optional[Dict] = None,
):
"""Estimate the identified causal effect.
Currently requires an explicit method name to be specified. Method names follow the convention of identification method followed by the specific estimation method: "[backdoor/iv].estimation_method_name". The following methods are supported.
* Propensity Score Matching: "backdoor.propensity_score_matching"
* Propensity Score Stratification: "backdoor.propensity_score_stratification"
* Propensity Score-based Inverse Weighting: "backdoor.propensity_score_weighting"
* Linear Regression: "backdoor.linear_regression"
* Generalized Linear Models (e.g., logistic regression): "backdoor.generalized_linear_model"
* Instrumental Variables: "iv.instrumental_variable"
* Regression Discontinuity: "iv.regression_discontinuity"
In addition, you can directly call any of the EconML estimation methods. The convention is "backdoor.econml.path-to-estimator-class". For example, for the double machine learning estimator ("DML" class) that is located inside the "dml" module of EconML, you can use the method name "backdoor.econml.dml.DML". CausalML estimators can also be called. See `this demo notebook <https://py-why.github.io/dowhy/example_notebooks/dowhy-conditional-treatment-effects.html>`_.
:param treatment: Name of the treatment
:param outcome: Name of the outcome
:param identified_estimand: a probability expression
that represents the effect to be estimated. Output of
CausalModel.identify_effect method
:param identifier_name: name of the identification method to be used.
:param method: instance of a CausalEstimator subclass to use for estimation.
:param control_value: Value of the treatment in the control group, for effect estimation. If treatment is multi-variate, this can be a list.
:param treatment_value: Value of the treatment in the treated group, for effect estimation. If treatment is multi-variate, this can be a list.
:param test_significance: Binary flag on whether to additionally do a statistical significance test for the estimate.
:param evaluate_effect_strength: (Experimental) Binary flag on whether to estimate the relative strength of the treatment's effect. This measure can be used to compare different treatments for the same outcome (by running this method with different treatments sequentially).
:param confidence_intervals: (Experimental) Binary flag indicating whether confidence intervals should be computed.
:param target_units: (Experimental) The units for which the treatment effect should be estimated. This can be of three types. (1) a string for common specifications of target units (namely, "ate", "att" and "atc"), (2) a lambda function that can be used as an index for the data (pandas DataFrame), or (3) a new DataFrame that contains values of the effect_modifiers and effect will be estimated only for this new data.
:param effect_modifiers: Names of effect modifier variables can be (optionally) specified here too, since they do not affect identification. If None, the effect_modifiers from the CausalModel are used.
:param fit_estimator: Boolean flag on whether to fit the estimator.
Setting it to False is useful to estimate the effect on new data using a previously fitted estimator.
:param method_params: Dictionary containing any method-specific parameters. These are passed directly to the estimating method. See the docs for each estimation method for allowed method-specific params.
:returns: An instance of the CausalEstimate class, containing the causal effect estimate
and other method-dependent information
"""
treatment = parse_state(treatment)
outcome = parse_state(outcome)
causal_estimator_class = method.__class__
identified_estimand.set_identifier_method(identifier_name)
if identified_estimand.no_directed_path:
logger.warning("No directed path from {0} to {1}.".format(treatment, outcome))
return CausalEstimate(
0, identified_estimand, None, control_value=control_value, treatment_value=treatment_value
)
# Check if estimator's target estimand is identified
elif identified_estimand.estimands[identifier_name] is None:
logger.error("No valid identified estimand available.")
return CausalEstimate(None, None, None, control_value=control_value, treatment_value=treatment_value)
method.update_input(treatment_value, control_value, target_units)
estimate = method.estimate_effect()
# Store parameters inside estimate object for refutation methods
# TODO: This add_params needs to move to the estimator class
# inside estimate_effect and estimate_conditional_effect
estimate.add_params(
estimand_type=identified_estimand.estimand_type,
estimator_class=causal_estimator_class,
test_significance=test_significance,
evaluate_effect_strength=evaluate_effect_strength,
confidence_intervals=confidence_intervals,
target_units=target_units,
effect_modifiers=effect_modifiers,
method_params=method_params,
)
return estimate
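# Editor's sketch of how this pre-refactor estimate_effect() is typically
# invoked; the estimator object and variable names below are hypothetical.
# estimate = estimate_effect(
#     treatment="v0",
#     outcome="y",
#     identified_estimand=identified_estimand,
#     identifier_name="backdoor",
#     method=estimator,  # an already-constructed CausalEstimator instance
#     control_value=0,
#     treatment_value=1,
#     target_units="ate",
# )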
class CausalEstimate:
"""Class for the estimate object that every causal estimator returns"""
def __init__(
self,
estimate,
target_estimand,
realized_estimand_expr,
control_value,
treatment_value,
conditional_estimates=None,
**kwargs,
):
self.value = estimate
self.target_estimand = target_estimand
self.realized_estimand_expr = realized_estimand_expr
self.control_value = control_value
self.treatment_value = treatment_value
self.conditional_estimates = conditional_estimates
self.params = kwargs
if self.params is not None:
for key, value in self.params.items():
setattr(self, key, value)
self.effect_strength = None
def add_estimator(self, estimator_instance):
self.estimator = estimator_instance
def add_effect_strength(self, strength_dict):
self.effect_strength = strength_dict
def add_params(self, **kwargs):
self.params.update(kwargs)
def get_confidence_intervals(self, confidence_level=None, method=None, **kwargs):
"""Get confidence intervals of the obtained estimate.
By default, this is done with the help of bootstrapped confidence intervals
but can be overridden if the specific estimator implements other methods of estimating confidence intervals.
If the method provided is not bootstrap, this function calls the implementation of the specific estimator.
:param method: Method for estimating confidence intervals.
:param confidence_level: The confidence level of the confidence intervals of the estimate.
:param kwargs: Other optional args to be passed to the CI method.
:returns: The obtained confidence interval.
"""
confidence_intervals = self.estimator.estimate_confidence_intervals(
estimate_value=self.value, confidence_level=confidence_level, method=method, **kwargs
)
return confidence_intervals
def get_standard_error(self, method=None, **kwargs):
"""Get standard error of the obtained estimate.
By default, this is done with the help of bootstrapped standard errors
but can be overridden if the specific estimator implements other methods of estimating standard error.
If the method provided is not bootstrap, this function calls the implementation of the specific estimator.
:param method: Method for computing the standard error.
:param kwargs: Other optional parameters to be passed to the estimating method.
:returns: Standard error of the causal estimate.
"""
std_error = self.estimator.estimate_std_error(method=method, **kwargs)
return std_error
def test_stat_significance(self, method=None, **kwargs):
"""Test statistical significance of the estimate obtained.
By default, uses resampling to create a non-parametric significance test.
Individual child estimators can implement different methods.
If the method name is different from "bootstrap", this function calls the
implementation of the child estimator.
:param method: Method for checking statistical significance
:param kwargs: Other optional parameters to be passed to the estimating method.
:returns: p-value from the significance test
"""
signif_results = self.estimator.test_significance(self.value, method=method, **kwargs)
return {"p_value": signif_results["p_value"]}
def estimate_conditional_effects(
self, effect_modifiers=None, num_quantiles=CausalEstimator.NUM_QUANTILES_TO_DISCRETIZE_CONT_COLS
):
"""Estimate treatment effect conditioned on given variables.
If a numeric effect modifier is provided, it is discretized into quantile bins. If you would like a custom discretization, you can do so yourself: create a new column containing the discretized effect modifier and then include that column's name in the effect_modifier_names argument.
:param effect_modifiers: Names of effect modifier variables over which the conditional effects will be estimated. If not provided, defaults to the effect modifiers specified during creation of the CausalEstimator object.
:param num_quantiles: The number of quantiles into which a numeric effect modifier variable is discretized. Does not affect any categorical effect modifiers.
:returns: A (multi-index) dataframe that provides separate effects for each value of the (discretized) effect modifiers.
"""
return self.estimator._estimate_conditional_effects(
self.estimator._estimate_effect_fn, effect_modifiers, num_quantiles
)
def interpret(self, method_name=None, **kwargs):
"""Interpret the causal estimate.
:param method_name: Method used (string) or a list of methods. If None, then the default for the specific estimator is used.
:param kwargs: Optional parameters that are directly passed to the interpreter method.
:returns: None
"""
if method_name is None:
method_name = self.estimator.interpret_method
method_name_arr = parse_state(method_name)
for method in method_name_arr:
interpreter = interpreters.get_class_object(method)
interpreter(self, **kwargs).interpret()
def __str__(self):
s = "*** Causal Estimate ***\n"
# No estimand was identified (identification failed)
if self.target_estimand is None:
return "Estimation failed! No relevant identified estimand available for this estimation method."
s += "\n## Identified estimand\n{0}".format(self.target_estimand.__str__(only_target_estimand=True))
s += "\n## Realized estimand\n{0}".format(self.realized_estimand_expr)
if hasattr(self, "estimator"):
s += "\nTarget units: {0}\n".format(self.estimator.target_units_tostr())
s += "\n## Estimate\n"
s += "Mean value: {0}\n".format(self.value)
s += ""
if hasattr(self, "cate_estimates"):
s += "Effect estimates: {0}\n".format(self.cate_estimates)
if hasattr(self, "estimator"):
if self.estimator._significance_test:
s += "p-value: {0}\n".format(self.estimator.signif_results_tostr(self.test_stat_significance()))
if self.estimator._confidence_intervals:
s += "{0}% confidence interval: {1}\n".format(
100 * self.estimator.confidence_level, self.get_confidence_intervals()
)
if self.conditional_estimates is not None:
s += "### Conditional Estimates\n"
s += str(self.conditional_estimates)
if self.effect_strength is not None:
s += "\n## Effect Strength\n"
s += "Change in outcome attributable to treatment: {}\n".format(self.effect_strength["fraction-effect"])
# s += "Variance in outcome explained by treatment: {}\n".format(self.effect_strength["r-squared"])
return s
class RealizedEstimand(object):
def __init__(self, identified_estimand, estimator_name):
self.treatment_variable = identified_estimand.treatment_variable
self.outcome_variable = identified_estimand.outcome_variable
self.backdoor_variables = identified_estimand.get_backdoor_variables()
self.instrumental_variables = identified_estimand.instrumental_variables
self.estimand_type = identified_estimand.estimand_type
self.estimand_expression = None
self.assumptions = None
self.estimator_name = estimator_name
def update_assumptions(self, estimator_assumptions):
self.assumptions = estimator_assumptions
def update_estimand_expression(self, estimand_expression):
self.estimand_expression = estimand_expression
def __str__(self):
s = "Realized estimand: {0}\n".format(self.estimator_name)
s += "Realized estimand type: {0}\n".format(self.estimand_type)
s += "Estimand expression:\n{0}\n".format(sp.pretty(self.estimand_expression))
j = 1
for ass_name, ass_str in self.assumptions.items():
s += "Estimand assumption {0}, {1}: {2}\n".format(j, ass_name, ass_str)
j += 1
return s
| import copy
import logging
from collections import namedtuple
from typing import Dict, List, Optional, Union
import numpy as np
import pandas as pd
import sympy as sp
from sklearn.utils import resample
import dowhy.interpreters as interpreters
from dowhy.causal_identifier.identified_estimand import IdentifiedEstimand
from dowhy.utils.api import parse_state
logger = logging.getLogger(__name__)
class CausalEstimator:
"""Base class for an estimator of causal effect.
Subclasses implement different estimation methods. All estimation methods are in the package "dowhy.causal_estimators"
"""
# The default number of simulations for statistical testing
DEFAULT_NUMBER_OF_SIMULATIONS_STAT_TEST = 1000
# The default number of simulations to obtain confidence intervals
DEFAULT_NUMBER_OF_SIMULATIONS_CI = 100
# The portion of the total size that should be taken each time to find the confidence intervals
# 1 is the recommended value
# https://ocw.mit.edu/courses/mathematics/18-05-introduction-to-probability-and-statistics-spring-2014/readings/MIT18_05S14_Reading24.pdf
# https://projecteuclid.org/download/pdf_1/euclid.ss/1032280214
DEFAULT_SAMPLE_SIZE_FRACTION = 1
# The default Confidence Level
DEFAULT_CONFIDENCE_LEVEL = 0.95
# Number of quantiles to discretize continuous columns, for applying groupby
NUM_QUANTILES_TO_DISCRETIZE_CONT_COLS = 5
# Prefix to add to temporary categorical variables created after discretization
TEMP_CAT_COLUMN_PREFIX = "__categorical__"
DEFAULT_NOTIMPLEMENTEDERROR_MSG = "not yet implemented for {0}. If you would like this to be implemented in the next version, please raise an issue at https://github.com/microsoft/dowhy/issues"
BootstrapEstimates = namedtuple("BootstrapEstimates", ["estimates", "params"])
DEFAULT_INTERPRET_METHOD = ["textual_effect_interpreter"]
# std args to be removed from locals() before being passed to args_dict
_STD_INIT_ARGS = ("self", "__class__", "args", "kwargs")
def __init__(
self,
identified_estimand: IdentifiedEstimand,
test_significance: bool = False,
evaluate_effect_strength: bool = False,
confidence_intervals: bool = False,
num_null_simulations: int = DEFAULT_NUMBER_OF_SIMULATIONS_STAT_TEST,
num_simulations: int = DEFAULT_NUMBER_OF_SIMULATIONS_CI,
sample_size_fraction: int = DEFAULT_SAMPLE_SIZE_FRACTION,
confidence_level: float = DEFAULT_CONFIDENCE_LEVEL,
need_conditional_estimates: Union[bool, str] = "auto",
num_quantiles_to_discretize_cont_cols: int = NUM_QUANTILES_TO_DISCRETIZE_CONT_COLS,
**_,
):
"""Initializes an estimator with data and names of relevant variables.
This method is called from the constructors of its child classes.
:param identified_estimand: probability expression
representing the target identified estimand to estimate.
:param test_significance: Binary flag or a string indicating whether to test significance and by which method. All estimators support test_significance="bootstrap" that estimates a p-value for the obtained estimate using the bootstrap method. Individual estimators can override this to support custom testing methods. The bootstrap method supports an optional parameter, num_null_simulations. If False, no testing is done. If True, significance of the estimate is tested using the custom method if available, otherwise by bootstrap.
:param evaluate_effect_strength: (Experimental) whether to evaluate the strength of effect
:param confidence_intervals: Binary flag or a string indicating whether the confidence intervals should be computed and which method should be used. All methods support estimation of confidence intervals using the bootstrap method by using the parameter confidence_intervals="bootstrap". The bootstrap method takes in two arguments (num_simulations and sample_size_fraction) that can be optionally specified in the params dictionary. Estimators may also override this to implement their own confidence interval method. If this parameter is False, no confidence intervals are computed. If True, confidence intervals are computed by the estimator's specific method if available, otherwise through bootstrap
:param num_null_simulations: The number of simulations for testing the
statistical significance of the estimator
:param num_simulations: The number of simulations for finding the
confidence interval (and/or standard error) for an estimate
:param sample_size_fraction: The size of the sample for the bootstrap
estimator
:param confidence_level: The confidence level of the confidence
interval estimate
:param need_conditional_estimates: Boolean flag indicating whether
conditional estimates should be computed. Defaults to True if
there are effect modifiers in the graph
:param num_quantiles_to_discretize_cont_cols: The number of quantiles
into which a numeric effect modifier is split, to enable
estimation of conditional treatment effect over it.
:param kwargs: (optional) Additional estimator-specific parameters
:returns: an instance of the estimator class.
"""
self._target_estimand = identified_estimand
self._significance_test = test_significance
self._effect_strength_eval = evaluate_effect_strength
self._confidence_intervals = confidence_intervals
# Setting the default interpret method
self.interpret_method = CausalEstimator.DEFAULT_INTERPRET_METHOD
self.logger = logging.getLogger(__name__)
# Check if some parameters were set, otherwise set to default values
self.num_null_simulations = num_null_simulations
self.num_simulations = num_simulations
self.sample_size_fraction = sample_size_fraction
self.confidence_level = confidence_level
self.num_quantiles_to_discretize_cont_cols = num_quantiles_to_discretize_cont_cols
# Estimate conditional estimates by default
self.need_conditional_estimates = need_conditional_estimates
self._bootstrap_estimates = None
self._bootstrap_null_estimates = None
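# Editor's sketch: with the refactored constructor, method parameters are
# explicit keyword arguments; the concrete subclass name below is hypothetical.
# estimator = SomeConcreteEstimator(
#     identified_estimand,
#     test_significance="bootstrap",
#     confidence_intervals="bootstrap",
#     num_simulations=50,
# )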
def _set_data(self, data: pd.DataFrame, treatment_name: List[str], outcome_name: List[str]):
"""Sets the data for the estimator
:param data: data frame containing the data
:param treatment_name: name of the treatment variable
:param outcome_name: name of the outcome variable
"""
self._data = data
self._treatment_name = treatment_name
self._outcome_name = outcome_name[0]
self._treatment = self._data[self._treatment_name]
self._outcome = self._data[self._outcome_name]
def _set_effect_modifiers(self, effect_modifier_names: Optional[List[str]] = None):
"""Sets the effect modifiers for the estimator
Modifies need_conditional_estimates according to the value of the effect modifiers.
:param effect_modifier_names: Names of variables on which to compute separate
effects, or return a heterogeneous effect function. Not all
methods support this currently.
"""
self._effect_modifiers = effect_modifier_names
if effect_modifier_names is not None:
self._effect_modifier_names = [cname for cname in effect_modifier_names if cname in self._data.columns]
if len(self._effect_modifier_names) > 0:
self._effect_modifiers = self._data[self._effect_modifier_names]
self._effect_modifiers = pd.get_dummies(self._effect_modifiers, drop_first=True)
self.logger.debug("Effect modifiers: " + ",".join(self._effect_modifier_names))
else:
self._effect_modifier_names = []
else:
self._effect_modifier_names = []
self.need_conditional_estimates = (
self.need_conditional_estimates
if self.need_conditional_estimates != "auto"
else (self._effect_modifier_names and len(self._effect_modifier_names) > 0)
)
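# Editor's sketch (self-contained toy data) of the dummy-encoding step above:
# import pandas as pd
# modifiers = pd.DataFrame({"gender": ["m", "f", "f"], "age_group": ["young", "old", "young"]})
# encoded = pd.get_dummies(modifiers, drop_first=True)
# # encoded now holds binary columns such as "gender_m" and "age_group_young"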
def _set_identified_estimand(self, new_identified_estimand):
"""Method used internally to change the target estimand (required by some refuters)
:param new_identified_estimand: The new target_estimand to use
"""
self._target_estimand = new_identified_estimand
def get_new_estimator_object(
self,
identified_estimand,
test_significance=False,
evaluate_effect_strength=False,
confidence_intervals=None,
):
"""Create a new estimator of the same type as the one passed in the estimate argument.
Creates a new object with the identified_estimand
:param identified_estimand: IdentifiedEstimand
An instance of the identified estimand class that provides the information with
respect to which causal pathways are employed when the treatment affects the outcome
:returns: A new instance of the same estimator class that had generated the given estimate.
"""
new_estimator = copy.deepcopy(self)
new_estimator._target_estimand = identified_estimand
new_estimator._test_significance = test_significance
new_estimator._evaluate_effect_strength = evaluate_effect_strength
new_estimator._confidence_intervals = (
self._confidence_intervals if confidence_intervals is None else confidence_intervals
)
return new_estimator
def estimate_effect_naive(self):
# TODO Only works for binary treatment
df_withtreatment = self._data.loc[self._data[self._treatment_name] == 1]
df_notreatment = self._data.loc[self._data[self._treatment_name] == 0]
est = np.mean(df_withtreatment[self._outcome_name]) - np.mean(df_notreatment[self._outcome_name])
return CausalEstimate(est, None, None, control_value=0, treatment_value=1)
def _estimate_effect_fn(self, data_df):
"""Function used in conditional effect estimation. This function is to be overridden by each child estimator.
The overridden function should take in a dataframe as input and return the estimate for that data.
"""
raise NotImplementedError(
("Conditional treatment effects are " + CausalEstimator.DEFAULT_NOTIMPLEMENTEDERROR_MSG).format(
self.__class__
)
)
def _estimate_conditional_effects(self, estimate_effect_fn, effect_modifier_names=None, num_quantiles=None):
"""Estimate conditional treatment effects. Common method for all estimators that utilizes a specific estimate_effect_fn implemented by each child estimator.
If a numeric effect modifier is provided, it is discretized into quantile bins. If you would like a custom discretization, you can do so yourself: create a new column containing the discretized effect modifier and then include that column's name in the effect_modifier_names argument.
:param estimate_effect_fn: Function that has a single parameter (a data frame) and returns the treatment effect estimate on that data.
:param effect_modifier_names: Names of effect modifier variables over which the conditional effects will be estimated. If not provided, defaults to the effect modifiers specified during creation of the CausalEstimator object.
:param num_quantiles: The number of quantiles into which a numeric effect modifier variable is discretized. Does not affect any categorical effect modifiers.
:returns: A (multi-index) dataframe that provides separate effects for each value of the (discretized) effect modifiers.
"""
# Defaulting to class default values if parameters are not provided
if effect_modifier_names is None:
effect_modifier_names = self._effect_modifier_names
if num_quantiles is None:
num_quantiles = self.num_quantiles_to_discretize_cont_cols
# Checking that there is at least one effect modifier
if not effect_modifier_names:
raise ValueError("At least one effect modifier should be specified to compute conditional effects.")
# Making sure that effect_modifier_names is a list
effect_modifier_names = parse_state(effect_modifier_names)
if not all(em in self._effect_modifier_names for em in effect_modifier_names):
self.logger.warning(
"At least one of the provided effect modifiers was not included while fitting the estimator. You may get incorrect results. To resolve, fit the estimator again by providing the updated effect modifiers in estimate_effect()."
)
# Making a copy since we are going to be changing effect modifier names
effect_modifier_names = effect_modifier_names.copy()
prefix = CausalEstimator.TEMP_CAT_COLUMN_PREFIX
# For every numeric effect modifier, adding a temp categorical column
for i in range(len(effect_modifier_names)):
em = effect_modifier_names[i]
if pd.api.types.is_numeric_dtype(self._data[em].dtypes):
self._data[prefix + str(em)] = pd.qcut(self._data[em], num_quantiles, duplicates="drop")
effect_modifier_names[i] = prefix + str(em)
# Grouping by effect modifiers and computing effect separately
by_effect_mods = self._data.groupby(effect_modifier_names)
cond_est_fn = lambda x: self._do(self._treatment_value, x) - self._do(self._control_value, x)
conditional_estimates = by_effect_mods.apply(estimate_effect_fn)
# Deleting the temporary categorical columns
for em in effect_modifier_names:
if em.startswith(prefix):
self._data.pop(em)
return conditional_estimates
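# Editor's sketch (self-contained toy data) of the discretize-then-groupby
# pattern above; the column names are illustrative.
# import numpy as np
# import pandas as pd
# df = pd.DataFrame({"income": np.random.uniform(0, 100, 200), "effect": np.random.normal(1, 0.1, 200)})
# df["__categorical__income"] = pd.qcut(df["income"], 5, duplicates="drop")
# per_bin_effect = df.groupby("__categorical__income")["effect"].mean()
# # per_bin_effect indexes a mean effect per income quintile, mirroring the
# # multi-index output returned above.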
def _do(self, x, data_df=None):
raise NotImplementedError(
("Do-operator is " + CausalEstimator.DEFAULT_NOTIMPLEMENTEDERROR_MSG).format(self.__class__)
)
def do(self, x, data_df=None):
"""Method that implements the do-operator.
Given a value x for the treatment, returns the expected value of the outcome when the treatment is intervened to a value x.
:param x: Value of the treatment
:param data_df: Data on which the do-operator is to be applied.
:returns: Value of the outcome when treatment is intervened/set to x.
"""
est = self._do(x, data_df)
return est
def construct_symbolic_estimator(self, estimand):
raise NotImplementedError(("Symbolic estimator string is ").format(self.__class__))
def _generate_bootstrap_estimates(self, num_bootstrap_simulations, sample_size_fraction):
"""Helper function to generate causal estimates over bootstrapped samples.
:param num_bootstrap_simulations: Number of simulations for the bootstrap method.
:param sample_size_fraction: Fraction of the dataset to be resampled.
:returns: A collections.namedtuple containing a list of bootstrapped estimates and a dictionary containing parameters used for the bootstrap.
"""
# The array that stores the results of all estimations
simulation_results = np.zeros(num_bootstrap_simulations)
# Compute the sample size as a fraction of the population (dataset) size
sample_size = int(sample_size_fraction * len(self._data))
if sample_size > len(self._data):
self.logger.warning("WARN: The sample size is greater than the data being sampled")
self.logger.info("INFO: The sample size: {}".format(sample_size))
self.logger.info("INFO: The number of simulations: {}".format(num_bootstrap_simulations))
# Perform the set number of simulations
for index in range(num_bootstrap_simulations):
new_data = resample(self._data, n_samples=sample_size)
new_estimator = self.get_new_estimator_object(
self._target_estimand,
test_significance=False,
evaluate_effect_strength=False,
confidence_intervals=False,
)
new_estimator.fit(
new_data,
self._target_estimand.treatment_variable,
self._target_estimand.outcome_variable,
effect_modifier_names=self._effect_modifier_names,
)
new_effect = new_estimator.estimate_effect(
treatment_value=self._treatment_value,
control_value=self._control_value,
target_units=self._target_units,
)
simulation_results[index] = new_effect.value
estimates = CausalEstimator.BootstrapEstimates(
simulation_results,
{"num_simulations": num_bootstrap_simulations, "sample_size_fraction": sample_size_fraction},
)
return estimates
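# Editor's sketch (self-contained toy data) of the resampling primitive used above:
# import pandas as pd
# from sklearn.utils import resample
# toy_data = pd.DataFrame({"t": [0, 1, 0, 1], "y": [1.0, 2.0, 1.5, 2.5]})
# bootstrap_sample = resample(toy_data, n_samples=len(toy_data))  # samples with replacement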
def _estimate_confidence_intervals_with_bootstrap(
self, estimate_value, confidence_level=None, num_simulations=None, sample_size_fraction=None
):
"""
Method to compute confidence interval using bootstrapped sampling.
:param estimate_value: obtained estimate's value
:param confidence_level: The level for which to compute CI (e.g., 95% confidence level translates to confidence_level=0.95)
:param num_simulations: The number of simulations to be performed to get the bootstrap confidence intervals.
:param sample_size_fraction: The fraction of the dataset to be resampled.
:returns: confidence interval at the specified level.
For more details on bootstrap or resampling statistics, refer to the following links:
https://ocw.mit.edu/courses/mathematics/18-05-introduction-to-probability-and-statistics-spring-2014/readings/MIT18_05S14_Reading24.pdf
https://projecteuclid.org/download/pdf_1/euclid.ss/1032280214
"""
# Using class default parameters if not specified
if num_simulations is None:
num_simulations = self.num_simulations
if sample_size_fraction is None:
sample_size_fraction = self.sample_size_fraction
# Checking if bootstrap_estimates are already computed
if self._bootstrap_estimates is None:
self._bootstrap_estimates = self._generate_bootstrap_estimates(num_simulations, sample_size_fraction)
elif CausalEstimator.is_bootstrap_parameter_changed(self._bootstrap_estimates.params, locals()):
# Checked if any parameter is changed from the previous std error estimate
self._bootstrap_estimates = self._generate_bootstrap_estimates(num_simulations, sample_size_fraction)
# Now use the data obtained from the simulations to get the value of the confidence estimates
bootstrap_estimates = self._bootstrap_estimates.estimates
# Get the variations of each bootstrap estimate and sort
bootstrap_variations = [bootstrap_estimate - estimate_value for bootstrap_estimate in bootstrap_estimates]
sorted_bootstrap_variations = np.sort(bootstrap_variations)
# Now we take the (1 - p)-th and the p-th variations, where p is the chosen confidence level
upper_bound_index = int((1 - confidence_level) * len(sorted_bootstrap_variations))
lower_bound_index = int(confidence_level * len(sorted_bootstrap_variations))
# Get the lower and upper bounds by subtracting the variations from the estimate
lower_bound = estimate_value - sorted_bootstrap_variations[lower_bound_index]
upper_bound = estimate_value - sorted_bootstrap_variations[upper_bound_index]
return lower_bound, upper_bound
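# Editor's note: spelling out the index arithmetic above for
# confidence_level = 0.95 and 100 bootstrap variations:
# upper_bound_index = int((1 - 0.95) * 100)  # 5
# lower_bound_index = int(0.95 * 100)        # 95
# i.e., the bounds come from the 5th and 95th order statistics of the
# centered bootstrap distribution.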
def _estimate_confidence_intervals(self, confidence_level=None, method=None, **kwargs):
"""
This method is to be overridden by the child classes, so that they
can run a confidence interval estimation method suited to the specific
causal estimator.
"""
raise NotImplementedError(
(
"This method for estimating confidence intervals is "
+ CausalEstimator.DEFAULT_NOTIMPLEMENTEDERROR_MSG
+ " Meanwhile, you can try the bootstrap method (method='bootstrap') to estimate confidence intervals."
).format(self.__class__)
)
def estimate_confidence_intervals(self, estimate_value, confidence_level=None, method=None, **kwargs):
"""Find the confidence intervals corresponding to any estimator
By default, this is done with the help of bootstrapped confidence intervals
but can be overridden if the specific estimator implements other methods of estimating confidence intervals.
If the method provided is not bootstrap, this function calls the implementation of the specific estimator.
:param estimate_value: obtained estimate's value
:param method: Method for estimating confidence intervals.
:param confidence_level: The confidence level of the confidence intervals of the estimate.
:param kwargs: Other optional args to be passed to the CI method.
:returns: The obtained confidence interval.
"""
if method is None:
if self._confidence_intervals:
method = self._confidence_intervals # this is either True or methodname
else:
method = "default"
confidence_intervals = None
if confidence_level is None:
confidence_level = self.confidence_level
if method == "default" or method is True: # user has not provided any method
try:
confidence_intervals = self._estimate_confidence_intervals(confidence_level, method=method, **kwargs)
except NotImplementedError:
confidence_intervals = self._estimate_confidence_intervals_with_bootstrap(
estimate_value, confidence_level, **kwargs
)
else:
if method == "bootstrap":
confidence_intervals = self._estimate_confidence_intervals_with_bootstrap(
estimate_value, confidence_level, **kwargs
)
else:
confidence_intervals = self._estimate_confidence_intervals(confidence_level, method=method, **kwargs)
return confidence_intervals
def _estimate_std_error_with_bootstrap(self, num_simulations=None, sample_size_fraction=None):
"""Compute standard error using the bootstrap method. Standard error
and confidence intervals use the same parameter num_simulations for
the number of bootstrap simulations.
:param num_simulations: Number of bootstrapped samples.
:param sample_size_fraction: Fraction of data to be resampled.
:returns: Standard error of the obtained estimate.
"""
# Use existing params, if new user defined params are not present
if num_simulations is None:
num_simulations = self.num_simulations
if sample_size_fraction is None:
sample_size_fraction = self.sample_size_fraction
# Checking if bootstrap_estimates are already computed
if self._bootstrap_estimates is None:
self._bootstrap_estimates = self._generate_bootstrap_estimates(num_simulations, sample_size_fraction)
elif CausalEstimator.is_bootstrap_parameter_changed(self._bootstrap_estimates.params, locals()):
# Check if any parameter is changed from the previous std error estimate
self._bootstrap_estimates = self._generate_bootstrap_estimates(num_simulations, sample_size_fraction)
std_error = np.std(self._bootstrap_estimates.estimates)
return std_error
def _estimate_std_error(self, method=None, **kwargs):
"""
This method is to be overridden by the child classes, so that they
can run a standard error estimation method suited to the specific
causal estimator.
"""
raise NotImplementedError(
(
"This method for estimating standard errors is "
+ CausalEstimator.DEFAULT_NOTIMPLEMENTEDERROR_MSG
+ " Meanwhile, you can try the bootstrap method (method='bootstrap') to estimate standard errors."
).format(self.__class__)
)
def estimate_std_error(self, method=None, **kwargs):
"""Compute standard error of an obtained causal estimate.
:param method: Method for computing the standard error.
:param kwargs: Other optional parameters to be passed to the estimating method.
:returns: Standard error of the causal estimate.
"""
if method is None:
if self._confidence_intervals:
method = self._confidence_intervals
else:
method = "default"
std_error = None
if method == "default" or method is True: # user has not provided any method
try:
std_error = self._estimate_std_error(method, **kwargs)
except NotImplementedError:
std_error = self._estimate_std_error_with_bootstrap(**kwargs)
else:
if method == "bootstrap":
std_error = self._estimate_std_error_with_bootstrap(**kwargs)
else:
std_error = self._estimate_std_error(method, **kwargs)
return std_error
def _test_significance_with_bootstrap(self, estimate_value, num_null_simulations=None):
"""Test statistical significance of an estimate using the bootstrap method.
:param estimate_value: Obtained estimate's value
:param num_null_simulations: Number of simulations for the null hypothesis
:returns: p-value of the statistical significance test.
"""
# Use existing params, if new user defined params are not present
if num_null_simulations is None:
num_null_simulations = self.num_null_simulations
do_retest = self._bootstrap_null_estimates is None or CausalEstimator.is_bootstrap_parameter_changed(
self._bootstrap_null_estimates.params, locals()
)
if do_retest:
null_estimates = np.zeros(num_null_simulations)
for i in range(num_null_simulations):
new_outcome = np.random.permutation(self._outcome)
new_data = self._data.assign(dummy_outcome=new_outcome)
# self._outcome = self._data["dummy_outcome"]
new_estimator = self.get_new_estimator_object(
self._target_estimand,
test_significance=False,
evaluate_effect_strength=False,
confidence_intervals=False,
)
new_estimator.fit(
data=new_data,
treatment_name=self._target_estimand.treatment_variable,
outcome_name=("dummy_outcome",),
effect_modifier_names=self._effect_modifier_names,
)
new_effect = new_estimator.estimate_effect(
target_units=self._target_units,
)
null_estimates[i] = new_effect.value
self._bootstrap_null_estimates = CausalEstimator.BootstrapEstimates(
null_estimates, {"num_null_simulations": num_null_simulations, "sample_size_fraction": 1}
)
# Processing the null hypothesis estimates
sorted_null_estimates = np.sort(self._bootstrap_null_estimates.estimates)
self.logger.debug("Null estimates: {0}".format(sorted_null_estimates))
median_estimate = sorted_null_estimates[int(num_null_simulations / 2)]
# Doing a two-sided test
if estimate_value > median_estimate:
# Being conservative with the p-value reported
estimate_index = np.searchsorted(sorted_null_estimates, estimate_value, side="left")
p_value = 1 - (estimate_index / num_null_simulations)
else:
# Being conservative with the p-value reported
estimate_index = np.searchsorted(sorted_null_estimates, estimate_value, side="right")
p_value = estimate_index / num_null_simulations
# If the estimate_index is 0, it depends on the number of simulations
if p_value == 0:
p_value = (0, 1 / len(sorted_null_estimates)) # a tuple determining the range.
elif p_value == 1:
p_value = (1 - 1 / len(sorted_null_estimates), 1)
signif_dict = {"p_value": p_value}
return signif_dict
def _test_significance(self, estimate_value, method=None, **kwargs):
"""
This method is to be overridden by the child classes, so that they
can run a significance test suited to the specific
causal estimator.
"""
raise NotImplementedError(
(
"This method for testing statistical significance is "
+ CausalEstimator.DEFAULT_NOTIMPLEMENTEDERROR_MSG
+ " Meanwhile, you can try the bootstrap method (method='bootstrap') to test statistical significance."
).format(self.__class__)
)
def test_significance(self, estimate_value, method=None, **kwargs):
"""Test statistical significance of obtained estimate.
By default, uses resampling to create a non-parametric significance test;
this is a general procedure that individual child estimators can override with their own methods.
If the method name is different from "bootstrap", this function calls the
implementation of the child estimator.
:param estimate_value: obtained estimate's value
:param method: Method for checking statistical significance
:returns: p-value from the significance test
"""
if method is None:
if self._significance_test:
method = self._significance_test # this is either True or methodname
else:
method = "default"
signif_dict = None
if method == "default" or method is True: # user has not provided any method
try:
signif_dict = self._test_significance(estimate_value, method, **kwargs)
except NotImplementedError:
signif_dict = self._test_significance_with_bootstrap(estimate_value, **kwargs)
else:
if method == "bootstrap":
signif_dict = self._test_significance_with_bootstrap(estimate_value, **kwargs)
else:
signif_dict = self._test_significance(estimate_value, method, **kwargs)
return signif_dict
def evaluate_effect_strength(self, estimate):
fraction_effect_explained = self._evaluate_effect_strength(estimate, method="fraction-effect")
# Need to test r-squared before supporting
# effect_r_squared = self._evaluate_effect_strength(estimate, method="r-squared")
strength_dict = {
"fraction-effect": fraction_effect_explained
# 'r-squared': effect_r_squared
}
return strength_dict
def _evaluate_effect_strength(self, estimate, method="fraction-effect"):
supported_methods = ["fraction-effect"]
if method not in supported_methods:
raise NotImplementedError("This method is not supported for evaluating effect strength")
if method == "fraction-effect":
naive_obs_estimate = self.estimate_effect_naive()
self.logger.debug("Estimate: %s, naive estimate: %s", estimate.value, naive_obs_estimate.value)
fraction_effect_explained = estimate.value / naive_obs_estimate.value
return fraction_effect_explained
# elif method == "r-squared":
# outcome_mean = np.mean(self._outcome)
# total_variance = np.sum(np.square(self._outcome - outcome_mean))
# Assuming a linear model with one variable: the treatment
# Currently only works for continuous y
# causal_model = outcome_mean + estimate.value*self._treatment
# squared_residual = np.sum(np.square(self._outcome - causal_model))
# r_squared = 1 - (squared_residual/total_variance)
# return r_squared
else:
return None
def update_input(self, treatment_value, control_value, target_units):
self._control_value = control_value
self._treatment_value = treatment_value
self._target_units = target_units
@staticmethod
def is_bootstrap_parameter_changed(bootstrap_estimates_params, given_params):
"""Check whether parameters of the bootstrap have changed.
This is an efficiency method that checks if fresh resampling of the bootstrap samples is required.
Returns True if parameters have changed and resampling should be done again.
:param bootstrap_estimates_params: A dictionary of parameters for the current bootstrap samples
:param given_params: A dictionary of parameters passed by the user
:returns: A binary flag denoting whether the parameters are different.
"""
is_any_parameter_changed = False
for prm, val in bootstrap_estimates_params.items():
given_val = given_params.get(prm, None)
if given_val is not None and given_val != val:
is_any_parameter_changed = True
break
return is_any_parameter_changed
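# Editor's sketch of the staleness check above on toy parameter dictionaries:
# cached_params = {"num_simulations": 100, "sample_size_fraction": 1}
# user_params = {"num_simulations": 200}
# CausalEstimator.is_bootstrap_parameter_changed(cached_params, user_params)  # True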
def target_units_tostr(self):
s = ""
if type(self._target_units) is str:
s += self._target_units
elif callable(self._target_units):
s += "Data subset defined by a function"
elif isinstance(self._target_units, pd.DataFrame):
s += "Data subset provided as a data frame"
return s
def signif_results_tostr(self, signif_results):
s = ""
pval = signif_results["p_value"]
if type(pval) is tuple:
s += "[{0}, {1}]".format(pval[0], pval[1])
else:
s += "{0}".format(pval)
return s
def estimate_effect(
data: pd.DataFrame,
treatment: Union[str, List[str]],
outcome: Union[str, List[str]],
identifier_name: str,
estimator: CausalEstimator,
control_value: int = 0,
treatment_value: int = 1,
target_units: str = "ate",
effect_modifiers: List[str] = None,
fit_estimator: bool = True,
method_params: Optional[Dict] = None,
):
"""Estimate the identified causal effect.
You can also use any of the EconML estimation methods by passing an appropriately constructed estimator. The naming convention for such methods is "backdoor.econml.path-to-estimator-class". For example, for the double machine learning estimator ("DML" class) that is located inside the "dml" module of EconML, you can use the method name "backdoor.econml.dml.DML". CausalML estimators can also be called. See `this demo notebook <https://py-why.github.io/dowhy/example_notebooks/dowhy-conditional-treatment-effects.html>`_.
:param data: data frame containing the data
:param treatment: Name of the treatment
:param outcome: Name of the outcome
:param identifier_name: name of the identification method used on the estimator's target estimand (e.g., "backdoor" or "iv")
:param estimator: Instance of a CausalEstimator to use
:param control_value: Value of the treatment in the control group, for effect estimation. If treatment is multi-variate, this can be a list.
:param treatment_value: Value of the treatment in the treated group, for effect estimation. If treatment is multi-variate, this can be a list.
:param target_units: (Experimental) The units for which the treatment effect should be estimated. This can be of three types. (1) a string for common specifications of target units (namely, "ate", "att" and "atc"), (2) a lambda function that can be used as an index for the data (pandas DataFrame), or (3) a new DataFrame that contains values of the effect_modifiers and effect will be estimated only for this new data.
:param effect_modifiers: Names of effect modifier variables can be (optionally) specified here too, since they do not affect identification. If None, no effect modifiers are used.
:param fit_estimator: Boolean flag on whether to fit the estimator.
Setting it to False is useful to estimate the effect on new data using a previously fitted estimator.
:param method_params: Optional dictionary of method-specific parameters; entries under the "fit_params" key are passed through to the estimator's fit() method.
:returns: An instance of the CausalEstimate class, containing the causal effect estimate
and other method-dependent information
"""
if effect_modifiers is None:
effect_modifiers = []
treatment = parse_state(treatment)
outcome = parse_state(outcome)
causal_estimator_class = estimator.__class__
identified_estimand = estimator._target_estimand
identified_estimand.set_identifier_method(identifier_name)
if identified_estimand.no_directed_path:
logger.warning("No directed path from {0} to {1}.".format(treatment, outcome))
return CausalEstimate(
0, identified_estimand, None, control_value=control_value, treatment_value=treatment_value
)
# Check if estimator's target estimand is identified
elif identified_estimand.estimands[identifier_name] is None:
logger.error("No valid identified estimand available.")
return CausalEstimate(None, None, None, control_value=control_value, treatment_value=treatment_value)
if fit_estimator:
estimator.fit(
data=data,
treatment_name=treatment,
outcome_name=outcome,
effect_modifier_names=effect_modifiers,
**(method_params["fit_params"] if method_params is not None and "fit_params" in method_params else {}),
)
estimate = estimator.estimate_effect(
treatment_value=treatment_value,
control_value=control_value,
target_units=target_units,
confidence_intervals=estimator._confidence_intervals,
)
# Store parameters inside estimate object for refutation methods
# TODO: This add_params needs to move to the estimator class
# inside estimate_effect and estimate_conditional_effect
estimate.add_params(
estimand_type=identified_estimand.estimand_type,
estimator_class=causal_estimator_class,
test_significance=estimator._significance_test,
evaluate_effect_strength=estimator._effect_strength_eval,
confidence_intervals=estimator._confidence_intervals,
target_units=target_units,
effect_modifiers=effect_modifiers,
)
if estimator._significance_test:
estimator.test_significance(estimate.value, method=estimator._significance_test)
if estimator._confidence_intervals:
estimator.estimate_confidence_intervals(
estimate.value, confidence_level=estimator.confidence_level, method=estimator._confidence_intervals
)
if estimator._effect_strength_eval:
effect_strength_dict = estimator.evaluate_effect_strength(estimate)
estimate.add_effect_strength(effect_strength_dict)
return estimate
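# Editor's sketch of the new fit-then-estimate flow introduced by this
# refactor; the concrete estimator construction below is hypothetical.
# estimator = SomeConcreteEstimator(identified_estimand)
# estimate = estimate_effect(
#     data=df,
#     treatment="v0",
#     outcome="y",
#     identifier_name="backdoor",
#     estimator=estimator,
#     control_value=0,
#     treatment_value=1,
#     target_units="ate",
#     effect_modifiers=["x1"],
#     fit_estimator=True,
# )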
class CausalEstimate:
"""Class for the estimate object that every causal estimator returns"""
def __init__(
self,
estimate,
target_estimand,
realized_estimand_expr,
control_value,
treatment_value,
conditional_estimates=None,
**kwargs,
):
self.value = estimate
self.target_estimand = target_estimand
self.realized_estimand_expr = realized_estimand_expr
self.control_value = control_value
self.treatment_value = treatment_value
self.conditional_estimates = conditional_estimates
self.params = kwargs
if self.params is not None:
for key, value in self.params.items():
setattr(self, key, value)
self.effect_strength = None
def add_estimator(self, estimator_instance):
self.estimator = estimator_instance
def add_effect_strength(self, strength_dict):
self.effect_strength = strength_dict
def add_params(self, **kwargs):
self.params.update(kwargs)
def get_confidence_intervals(self, confidence_level=None, method=None, **kwargs):
"""Get confidence intervals of the obtained estimate.
By default, this is done with the help of bootstrapped confidence intervals
but can be overridden if the specific estimator implements other methods of estimating confidence intervals.
If the method provided is not bootstrap, this function calls the implementation of the specific estimator.
:param method: Method for estimating confidence intervals.
:param confidence_level: The confidence level of the confidence intervals of the estimate.
:param kwargs: Other optional args to be passed to the CI method.
:returns: The obtained confidence interval.
"""
confidence_intervals = self.estimator.estimate_confidence_intervals(
estimate_value=self.value, confidence_level=confidence_level, method=method, **kwargs
)
return confidence_intervals
def get_standard_error(self, method=None, **kwargs):
"""Get standard error of the obtained estimate.
By default, this is done with the help of bootstrapped standard errors
but can be overridden if the specific estimator implements other methods of estimating standard error.
If the method provided is not bootstrap, this function calls the implementation of the specific estimator.
:param method: Method for computing the standard error.
:param kwargs: Other optional parameters to be passed to the estimating method.
:returns: Standard error of the causal estimate.
"""
std_error = self.estimator.estimate_std_error(method=method, **kwargs)
return std_error
def test_stat_significance(self, method=None, **kwargs):
"""Test statistical significance of the estimate obtained.
By default, uses resampling to create a non-parametric significance test.
Individual child estimators can implement different methods.
If the method name is different from "bootstrap", this function calls the
implementation of the child estimator.
:param method: Method for checking statistical significance
:param kwargs: Other optional parameters to be passed to the estimating method.
:returns: p-value from the significance test
"""
signif_results = self.estimator.test_significance(self.value, method=method, **kwargs)
return {"p_value": signif_results["p_value"]}
def estimate_conditional_effects(
self, effect_modifiers=None, num_quantiles=CausalEstimator.NUM_QUANTILES_TO_DISCRETIZE_CONT_COLS
):
"""Estimate treatment effect conditioned on given variables.
If a numeric effect modifier is provided, it is discretized into quantile bins. If you would like a custom discretization, you can do so yourself: create a new column containing the discretized effect modifier and then include that column's name in the effect_modifier_names argument.
:param effect_modifiers: Names of effect modifier variables over which the conditional effects will be estimated. If not provided, defaults to the effect modifiers specified during creation of the CausalEstimator object.
:param num_quantiles: The number of quantiles into which a numeric effect modifier variable is discretized. Does not affect any categorical effect modifiers.
:returns: A (multi-index) dataframe that provides separate effects for each value of the (discretized) effect modifiers.
"""
return self.estimator._estimate_conditional_effects(
self.estimator._estimate_effect_fn, effect_modifiers, num_quantiles
)
def interpret(self, method_name=None, **kwargs):
"""Interpret the causal estimate.
:param method_name: Method used (string) or a list of methods. If None, then the default for the specific estimator is used.
:param kwargs: Optional parameters that are directly passed to the interpreter method.
:returns: None
"""
if method_name is None:
method_name = self.estimator.interpret_method
method_name_arr = parse_state(method_name)
for method in method_name_arr:
interpreter = interpreters.get_class_object(method)
interpreter(self, **kwargs).interpret()
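# Editor's note: interpreters are resolved by name; for example,
# estimate.interpret(method_name="textual_effect_interpreter") dispatches to
# dowhy.interpreters.get_class_object("textual_effect_interpreter").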
def __str__(self):
s = "*** Causal Estimate ***\n"
# No estimand was identified (identification failed)
if self.target_estimand is None:
return "Estimation failed! No relevant identified estimand available for this estimation method."
s += "\n## Identified estimand\n{0}".format(self.target_estimand.__str__(only_target_estimand=True))
s += "\n## Realized estimand\n{0}".format(self.realized_estimand_expr)
if hasattr(self, "estimator"):
s += "\nTarget units: {0}\n".format(self.estimator.target_units_tostr())
s += "\n## Estimate\n"
s += "Mean value: {0}\n".format(self.value)
s += ""
if hasattr(self, "cate_estimates"):
s += "Effect estimates: {0}\n".format(self.cate_estimates)
if hasattr(self, "estimator"):
if self.estimator._significance_test:
s += "p-value: {0}\n".format(self.estimator.signif_results_tostr(self.test_stat_significance()))
if self.estimator._confidence_intervals:
s += "{0}% confidence interval: {1}\n".format(
100 * self.estimator.confidence_level, self.get_confidence_intervals()
)
if self.conditional_estimates is not None:
s += "### Conditional Estimates\n"
s += str(self.conditional_estimates)
if self.effect_strength is not None:
s += "\n## Effect Strength\n"
s += "Change in outcome attributable to treatment: {}\n".format(self.effect_strength["fraction-effect"])
# s += "Variance in outcome explained by treatment: {}\n".format(self.effect_strength["r-squared"])
return s
class RealizedEstimand(object):
def __init__(self, identified_estimand, estimator_name):
self.treatment_variable = identified_estimand.treatment_variable
self.outcome_variable = identified_estimand.outcome_variable
self.backdoor_variables = identified_estimand.get_backdoor_variables()
self.instrumental_variables = identified_estimand.instrumental_variables
self.estimand_type = identified_estimand.estimand_type
self.estimand_expression = None
self.assumptions = None
self.estimator_name = estimator_name
def update_assumptions(self, estimator_assumptions):
self.assumptions = estimator_assumptions
def update_estimand_expression(self, estimand_expression):
self.estimand_expression = estimand_expression
def __str__(self):
s = "Realized estimand: {0}\n".format(self.estimator_name)
s += "Realized estimand type: {0}\n".format(self.estimand_type)
s += "Estimand expression:\n{0}\n".format(sp.pretty(self.estimand_expression))
j = 1
for ass_name, ass_str in self.assumptions.items():
s += "Estimand assumption {0}, {1}: {2}\n".format(j, ass_name, ass_str)
j += 1
return s
| andresmor-ms | 11c4e0dafd6e824eb81ad14262457d954ae61468 | affe0952f4aba6845247355c171565510c2c1673 | This is similar to `set_data` above. Since this seems to be common functionality needed in multiple places, how about defining a module-level (not class-based) function:
```python
def effect_modifiers(effect_modifier_names: ...) -> Tuple[..., ..., ...]:
...
```
and then calling it in those places where it is needed as:
```python
self.need_conditional_estimates, self._effect_modifiers, self._effect_modifier_names = effect_modifiers(effect_modifier_names)
```
If this doesn't work, how about at least making this method private by calling it `_effect_modifiers`?
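A possible sketch of such a module-level helper, mirroring the logic currently in `_set_effect_modifiers` (the names and types here are illustrative, not a final API):
```python
from typing import List, Optional, Tuple, Union

import pandas as pd


def effect_modifiers(
    data: pd.DataFrame,
    effect_modifier_names: Optional[List[str]],
    need_conditional_estimates: Union[bool, str],
) -> Tuple[Union[bool, str], Optional[pd.DataFrame], List[str]]:
    """Return (need_conditional_estimates, encoded effect modifiers, valid modifier names)."""
    names = [c for c in (effect_modifier_names or []) if c in data.columns]
    modifiers = pd.get_dummies(data[names], drop_first=True) if names else None
    if need_conditional_estimates == "auto":
        need_conditional_estimates = len(names) > 0
    return need_conditional_estimates, modifiers, names
```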
py-why/dowhy | 746 | Functional api/causal estimators | * Introduce `fit()` method to estimators.
* Refactor constructors to avoid using `*args` and `**kwargs` and have more explicit parameters.
* Refactor refuters and other parts of the code to use `fit()` and modify arguments to `estimate_effect()` | null | 2022-11-04 16:15:39+00:00 | 2022-12-03 17:07:53+00:00 | dowhy/causal_estimator.py | import logging
from collections import namedtuple
from typing import Dict, List, Optional, Union
import numpy as np
import pandas as pd
import sympy as sp
from sklearn.utils import resample
import dowhy.interpreters as interpreters
from dowhy import causal_estimators
from dowhy.causal_graph import CausalGraph
from dowhy.causal_identifier.identified_estimand import IdentifiedEstimand
from dowhy.utils.api import parse_state
logger = logging.getLogger(__name__)
class CausalEstimator:
"""Base class for an estimator of causal effect.
Subclasses implement different estimation methods. All estimation methods are in the package "dowhy.causal_estimators"
"""
# The default number of simulations for statistical testing
DEFAULT_NUMBER_OF_SIMULATIONS_STAT_TEST = 1000
# The default number of simulations to obtain confidence intervals
DEFAULT_NUMBER_OF_SIMULATIONS_CI = 100
# The portion of the total size that should be taken each time to find the confidence intervals
# 1 is the recommended value
# https://ocw.mit.edu/courses/mathematics/18-05-introduction-to-probability-and-statistics-spring-2014/readings/MIT18_05S14_Reading24.pdf
# https://projecteuclid.org/download/pdf_1/euclid.ss/1032280214
DEFAULT_SAMPLE_SIZE_FRACTION = 1
# The default Confidence Level
DEFAULT_CONFIDENCE_LEVEL = 0.95
# Number of quantiles to discretize continuous columns, for applying groupby
NUM_QUANTILES_TO_DISCRETIZE_CONT_COLS = 5
# Prefix to add to temporary categorical variables created after discretization
TEMP_CAT_COLUMN_PREFIX = "__categorical__"
    DEFAULT_NOTIMPLEMENTEDERROR_MSG = "not yet implemented for {0}. If you would like this to be implemented in the next version, please raise an issue at https://github.com/microsoft/dowhy/issues"
BootstrapEstimates = namedtuple("BootstrapEstimates", ["estimates", "params"])
DEFAULT_INTERPRET_METHOD = ["textual_effect_interpreter"]
# std args to be removed from locals() before being passed to args_dict
_STD_INIT_ARGS = ("self", "__class__", "args", "kwargs")
def __init__(
self,
data,
identified_estimand,
treatment,
outcome,
control_value=0,
treatment_value=1,
test_significance=False,
evaluate_effect_strength=False,
confidence_intervals=False,
target_units=None,
effect_modifiers=None,
num_null_simulations=DEFAULT_NUMBER_OF_SIMULATIONS_STAT_TEST,
num_simulations=DEFAULT_NUMBER_OF_SIMULATIONS_CI,
sample_size_fraction=DEFAULT_SAMPLE_SIZE_FRACTION,
confidence_level=DEFAULT_CONFIDENCE_LEVEL,
need_conditional_estimates="auto",
num_quantiles_to_discretize_cont_cols=NUM_QUANTILES_TO_DISCRETIZE_CONT_COLS,
**kwargs,
):
"""Initializes an estimator with data and names of relevant variables.
This method is called from the constructors of its child classes.
:param data: data frame containing the data
:param identified_estimand: probability expression
representing the target identified estimand to estimate.
:param treatment: name of the treatment variable
:param outcome: name of the outcome variable
:param control_value: Value of the treatment in the control group, for effect estimation. If treatment is multi-variate, this can be a list.
:param treatment_value: Value of the treatment in the treated group, for effect estimation. If treatment is multi-variate, this can be a list.
:param test_significance: Binary flag or a string indicating whether to test significance and by which method. All estimators support test_significance="bootstrap" that estimates a p-value for the obtained estimate using the bootstrap method. Individual estimators can override this to support custom testing methods. The bootstrap method supports an optional parameter, num_null_simulations. If False, no testing is done. If True, significance of the estimate is tested using the custom method if available, otherwise by bootstrap.
:param evaluate_effect_strength: (Experimental) whether to evaluate the strength of effect
:param confidence_intervals: Binary flag or a string indicating whether the confidence intervals should be computed and which method should be used. All methods support estimation of confidence intervals using the bootstrap method by using the parameter confidence_intervals="bootstrap". The bootstrap method takes in two arguments (num_simulations and sample_size_fraction) that can be optionally specified in the params dictionary. Estimators may also override this to implement their own confidence interval method. If this parameter is False, no confidence intervals are computed. If True, confidence intervals are computed by the estimator's specific method if available, otherwise through bootstrap.
:param target_units: The units for which the treatment effect should be estimated. This can be a string for common specifications of target units (namely, "ate", "att" and "atc"). It can also be a lambda function that can be used as an index for the data (pandas DataFrame). Alternatively, it can be a new DataFrame that contains values of the effect_modifiers and effect will be estimated only for this new data.
:param effect_modifiers: Variables on which to compute separate
effects, or return a heterogeneous effect function. Not all
methods support this currently.
:param num_null_simulations: The number of simulations for testing the
statistical significance of the estimator
:param num_simulations: The number of simulations for finding the
            confidence interval (and/or standard error) for an estimate
:param sample_size_fraction: The size of the sample for the bootstrap
estimator
:param confidence_level: The confidence level of the confidence
interval estimate
:param need_conditional_estimates: Boolean flag indicating whether
conditional estimates should be computed. Defaults to True if
there are effect modifiers in the graph
:param num_quantiles_to_discretize_cont_cols: The number of quantiles
into which a numeric effect modifier is split, to enable
estimation of conditional treatment effect over it.
:param kwargs: (optional) Additional estimator-specific parameters
:returns: an instance of the estimator class.
"""
self._data = data
self._target_estimand = identified_estimand
# Currently estimation methods only support univariate treatment and outcome
self._treatment_name = treatment
self._outcome_name = outcome[0] # assuming one-dimensional outcome
self._control_value = control_value
self._treatment_value = treatment_value
self._significance_test = test_significance
self._effect_strength_eval = evaluate_effect_strength
self._target_units = target_units
self._effect_modifier_names = effect_modifiers
self._confidence_intervals = confidence_intervals
self._bootstrap_estimates = None # for confidence intervals and std error
self._bootstrap_null_estimates = None # for significance test
self._effect_modifiers = None
self.method_params = kwargs
# Setting the default interpret method
self.interpret_method = CausalEstimator.DEFAULT_INTERPRET_METHOD
self.logger = logging.getLogger(__name__)
# Setting treatment and outcome values
if self._data is not None:
self._treatment = self._data[self._treatment_name]
self._outcome = self._data[self._outcome_name]
if self._effect_modifier_names:
# only add the observed nodes
self._effect_modifier_names = [
cname for cname in self._effect_modifier_names if cname in self._data.columns
]
if len(self._effect_modifier_names) > 0:
self._effect_modifiers = self._data[self._effect_modifier_names]
self._effect_modifiers = pd.get_dummies(self._effect_modifiers, drop_first=True)
self.logger.debug("Effect modifiers: " + ",".join(self._effect_modifier_names))
else:
self._effect_modifier_names = None
# Check if some parameters were set, otherwise set to default values
self.num_null_simulations = num_null_simulations
self.num_simulations = num_simulations
self.sample_size_fraction = sample_size_fraction
self.confidence_level = confidence_level
self.num_quantiles_to_discretize_cont_cols = num_quantiles_to_discretize_cont_cols
# Estimate conditional estimates by default
self.need_conditional_estimates = (
need_conditional_estimates if need_conditional_estimates != "auto" else bool(self._effect_modifier_names)
)
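    # Construction sketch (hedged): child classes forward these keyword
    # arguments to this base __init__, so a typical instantiation looks like
    #
    #   estimator = SomeChildEstimator(          # placeholder child class
    #       df, identified_estimand, ["v0"], ["y"],
    #       test_significance="bootstrap",
    #       confidence_intervals=True,
    #   )
    #
    # where `df` is the user's pandas DataFrame and "v0"/"y" name the
    # treatment and outcome columns.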
@staticmethod
def get_estimator_object(new_data, identified_estimand, estimate):
"""Create a new estimator of the same type as the one passed in the estimate argument.
Creates a new object with new_data and the identified_estimand
:param new_data: np.ndarray, pd.Series, pd.DataFrame
The newly assigned data on which the estimator should run
:param identified_estimand: IdentifiedEstimand
An instance of the identified estimand class that provides the information with
            respect to which causal pathways are employed when the treatment affects the outcome
:param estimate: CausalEstimate
It is an already existing estimate whose properties we wish to replicate
:returns: An instance of the same estimator class that had generated the given estimate.
"""
estimator_class = estimate.params["estimator_class"]
new_estimator = estimator_class(
new_data,
identified_estimand,
identified_estimand.treatment_variable,
identified_estimand.outcome_variable,
# names of treatment and outcome
control_value=estimate.control_value,
treatment_value=estimate.treatment_value,
test_significance=False,
evaluate_effect_strength=False,
confidence_intervals=estimate.params["confidence_intervals"],
target_units=estimate.params["target_units"],
effect_modifiers=estimate.params["effect_modifiers"],
**estimate.params["method_params"] if estimate.params["method_params"] is not None else {},
)
return new_estimator
def _estimate_effect(self):
"""This method is to be overriden by the child classes, so that they can run the estimation technique of their choice"""
raise NotImplementedError(
("Main estimation method is " + CausalEstimator.DEFAULT_NOTIMPLEMENTEDERROR_MSG).format(self.__class__)
)
def estimate_effect(self):
"""Base estimation method that calls the estimate_effect method of its calling subclass.
Can optionally also test significance and estimate effect strength for any returned estimate.
:param self: object instance of class Estimator
:returns: A CausalEstimate instance that contains point estimates of average and conditional effects. Based on the parameters provided, it optionally includes confidence intervals, standard errors,statistical significance and other statistical parameters.
"""
est = self._estimate_effect()
est.add_estimator(self)
if self._significance_test:
self.test_significance(est.value, method=self._significance_test)
if self._confidence_intervals:
self.estimate_confidence_intervals(
est.value, confidence_level=self.confidence_level, method=self._confidence_intervals
)
if self._effect_strength_eval:
effect_strength_dict = self.evaluate_effect_strength(est)
est.add_effect_strength(effect_strength_dict)
return est
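    # Usage sketch (hedged): estimators are usually driven through CausalModel,
    # which ends up calling this method on a concrete child estimator:
    #
    #   model = CausalModel(data=df, treatment="v0", outcome="y", graph=causal_graph)
    #   estimand = model.identify_effect(proceed_when_unidentifiable=True)
    #   estimate = model.estimate_effect(
    #       estimand, method_name="backdoor.propensity_score_matching"
    #   )
    #
    # `df` and `causal_graph` are placeholders for user-supplied inputs.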
def estimate_effect_naive(self):
# TODO Only works for binary treatment
df_withtreatment = self._data.loc[self._data[self._treatment_name] == 1]
df_notreatment = self._data.loc[self._data[self._treatment_name] == 0]
est = np.mean(df_withtreatment[self._outcome_name]) - np.mean(df_notreatment[self._outcome_name])
return CausalEstimate(est, None, None, control_value=0, treatment_value=1)
def _estimate_effect_fn(self, data_df):
"""Function used in conditional effect estimation. This function is to be overridden by each child estimator.
The overridden function should take in a dataframe as input and return the estimate for that data.
"""
raise NotImplementedError(
("Conditional treatment effects are " + CausalEstimator.DEFAULT_NOTIMPLEMENTEDERROR_MSG).format(
self.__class__
)
)
def _estimate_conditional_effects(self, estimate_effect_fn, effect_modifier_names=None, num_quantiles=None):
"""Estimate conditional treatment effects. Common method for all estimators that utilizes a specific estimate_effect_fn implemented by each child estimator.
If a numeric effect modifier is provided, it is discretized into quantile bins. If you would like a custom discretization, you can do so yourself: create a new column containing the discretized effect modifier and then include that column's name in the effect_modifier_names argument.
:param estimate_effect_fn: Function that has a single parameter (a data frame) and returns the treatment effect estimate on that data.
:param effect_modifier_names: Names of effect modifier variables over which the conditional effects will be estimated. If not provided, defaults to the effect modifiers specified during creation of the CausalEstimator object.
:param num_quantiles: The number of quantiles into which a numeric effect modifier variable is discretized. Does not affect any categorical effect modifiers.
:returns: A (multi-index) dataframe that provides separate effects for each value of the (discretized) effect modifiers.
"""
# Defaulting to class default values if parameters are not provided
if effect_modifier_names is None:
effect_modifier_names = self._effect_modifier_names
if num_quantiles is None:
num_quantiles = self.num_quantiles_to_discretize_cont_cols
# Checking that there is at least one effect modifier
if not effect_modifier_names:
raise ValueError("At least one effect modifier should be specified to compute conditional effects.")
# Making sure that effect_modifier_names is a list
effect_modifier_names = parse_state(effect_modifier_names)
if not all(em in self._effect_modifier_names for em in effect_modifier_names):
            self.logger.warning(
"At least one of the provided effect modifiers was not included while fitting the estimator. You may get incorrect results. To resolve, fit the estimator again by providing the updated effect modifiers in estimate_effect()."
)
# Making a copy since we are going to be changing effect modifier names
effect_modifier_names = effect_modifier_names.copy()
prefix = CausalEstimator.TEMP_CAT_COLUMN_PREFIX
# For every numeric effect modifier, adding a temp categorical column
for i in range(len(effect_modifier_names)):
em = effect_modifier_names[i]
if pd.api.types.is_numeric_dtype(self._data[em].dtypes):
self._data[prefix + str(em)] = pd.qcut(self._data[em], num_quantiles, duplicates="drop")
effect_modifier_names[i] = prefix + str(em)
# Grouping by effect modifiers and computing effect separately
by_effect_mods = self._data.groupby(effect_modifier_names)
cond_est_fn = lambda x: self._do(self._treatment_value, x) - self._do(self._control_value, x)
conditional_estimates = by_effect_mods.apply(estimate_effect_fn)
# Deleting the temporary categorical columns
for em in effect_modifier_names:
if em.startswith(prefix):
self._data.pop(em)
return conditional_estimates
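    # Sketch of the custom-discretization route described in the docstring
    # above (assumes a numeric effect modifier "X0" in a user data frame `df`):
    #
    #   df["X0_bins"] = pd.qcut(df["X0"], q=4, duplicates="drop")
    #   estimate.estimate_conditional_effects(effect_modifiers=["X0_bins"])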
def _do(self, x, data_df=None):
raise NotImplementedError(
("Do-operator is " + CausalEstimator.DEFAULT_NOTIMPLEMENTEDERROR_MSG).format(self.__class__)
)
def do(self, x, data_df=None):
"""Method that implements the do-operator.
Given a value x for the treatment, returns the expected value of the outcome when the treatment is intervened to a value x.
:param x: Value of the treatment
:param data_df: Data on which the do-operator is to be applied.
:returns: Value of the outcome when treatment is intervened/set to x.
"""
est = self._do(x, data_df)
return est
def construct_symbolic_estimator(self, estimand):
        raise NotImplementedError(
            ("Symbolic estimator string is " + CausalEstimator.DEFAULT_NOTIMPLEMENTEDERROR_MSG).format(self.__class__)
        )
def _generate_bootstrap_estimates(self, num_bootstrap_simulations, sample_size_fraction):
"""Helper function to generate causal estimates over bootstrapped samples.
:param num_bootstrap_simulations: Number of simulations for the bootstrap method.
:param sample_size_fraction: Fraction of the dataset to be resampled.
:returns: A collections.namedtuple containing a list of bootstrapped estimates and a dictionary containing parameters used for the bootstrap.
"""
# The array that stores the results of all estimations
simulation_results = np.zeros(num_bootstrap_simulations)
        # Find the sample size as a proportion of the population size
sample_size = int(sample_size_fraction * len(self._data))
if sample_size > len(self._data):
self.logger.warning("WARN: The sample size is greater than the data being sampled")
self.logger.info("INFO: The sample size: {}".format(sample_size))
self.logger.info("INFO: The number of simulations: {}".format(num_bootstrap_simulations))
# Perform the set number of simulations
for index in range(num_bootstrap_simulations):
new_data = resample(self._data, n_samples=sample_size)
new_estimator = type(self)(
new_data,
self._target_estimand,
self._target_estimand.treatment_variable,
self._target_estimand.outcome_variable,
# names of treatment and outcome
treatment_value=self._treatment_value,
control_value=self._control_value,
test_significance=False,
evaluate_effect_strength=False,
confidence_intervals=False,
target_units=self._target_units,
effect_modifiers=self._effect_modifier_names,
**self.method_params,
)
new_effect = new_estimator.estimate_effect()
simulation_results[index] = new_effect.value
estimates = CausalEstimator.BootstrapEstimates(
simulation_results,
{"num_simulations": num_bootstrap_simulations, "sample_size_fraction": sample_size_fraction},
)
return estimates
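    # The resampling step in isolation (illustrative only; `estimate_fn` is a
    # placeholder for any function mapping a data frame to a point estimate):
    #
    #   from sklearn.utils import resample
    #   boot = np.array(
    #       [estimate_fn(resample(df, n_samples=len(df))) for _ in range(100)]
    #   )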
def _estimate_confidence_intervals_with_bootstrap(
self, estimate_value, confidence_level=None, num_simulations=None, sample_size_fraction=None
):
"""
Method to compute confidence interval using bootstrapped sampling.
:param estimate_value: obtained estimate's value
:param confidence_level: The level for which to compute CI (e.g., 95% confidence level translates to confidence_level=0.95)
:param num_simulations: The number of simulations to be performed to get the bootstrap confidence intervals.
:param sample_size_fraction: The fraction of the dataset to be resampled.
:returns: confidence interval at the specified level.
For more details on bootstrap or resampling statistics, refer to the following links:
https://ocw.mit.edu/courses/mathematics/18-05-introduction-to-probability-and-statistics-spring-2014/readings/MIT18_05S14_Reading24.pdf
https://projecteuclid.org/download/pdf_1/euclid.ss/1032280214
"""
# Using class default parameters if not specified
if num_simulations is None:
num_simulations = self.num_simulations
if sample_size_fraction is None:
sample_size_fraction = self.sample_size_fraction
# Checking if bootstrap_estimates are already computed
if self._bootstrap_estimates is None:
self._bootstrap_estimates = self._generate_bootstrap_estimates(num_simulations, sample_size_fraction)
elif CausalEstimator.is_bootstrap_parameter_changed(self._bootstrap_estimates.params, locals()):
# Checked if any parameter is changed from the previous std error estimate
self._bootstrap_estimates = self._generate_bootstrap_estimates(num_simulations, sample_size_fraction)
# Now use the data obtained from the simulations to get the value of the confidence estimates
bootstrap_estimates = self._bootstrap_estimates.estimates
# Get the variations of each bootstrap estimate and sort
bootstrap_variations = [bootstrap_estimate - estimate_value for bootstrap_estimate in bootstrap_estimates]
sorted_bootstrap_variations = np.sort(bootstrap_variations)
# Now we take the (1- p)th and the (p)th variations, where p is the chosen confidence level
upper_bound_index = int((1 - confidence_level) * len(sorted_bootstrap_variations))
lower_bound_index = int(confidence_level * len(sorted_bootstrap_variations))
# Get the lower and upper bounds by subtracting the variations from the estimate
lower_bound = estimate_value - sorted_bootstrap_variations[lower_bound_index]
upper_bound = estimate_value - sorted_bootstrap_variations[upper_bound_index]
return lower_bound, upper_bound
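    # Worked toy example of the interval construction above, with
    # estimate_value = 10.0 and confidence_level = 0.95:
    #
    #   boot = np.array([9.0, 9.5, 10.2, 10.8, 11.0])
    #   variations = np.sort(boot - 10.0)         # [-1.0, -0.5, 0.2, 0.8, 1.0]
    #   lower = 10.0 - variations[int(0.95 * 5)]  # 10.0 - 1.0    -> 9.0
    #   upper = 10.0 - variations[int(0.05 * 5)]  # 10.0 - (-1.0) -> 11.0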
def _estimate_confidence_intervals(self, confidence_level=None, method=None, **kwargs):
"""
This method is to be overriden by the child classes, so that they
can run a confidence interval estimation method suited to the specific
causal estimator.
"""
raise NotImplementedError(
(
"This method for estimating confidence intervals is "
+ CausalEstimator.DEFAULT_NOTIMPLEMENTEDERROR_MSG
+ " Meanwhile, you can try the bootstrap method (method='bootstrap') to estimate confidence intervals."
).format(self.__class__)
)
def estimate_confidence_intervals(self, estimate_value, confidence_level=None, method=None, **kwargs):
"""Find the confidence intervals corresponding to any estimator
By default, this is done with the help of bootstrapped confidence intervals
but can be overridden if the specific estimator implements other methods of estimating confidence intervals.
If the method provided is not bootstrap, this function calls the implementation of the specific estimator.
:param estimate_value: obtained estimate's value
:param method: Method for estimating confidence intervals.
:param confidence_level: The confidence level of the confidence intervals of the estimate.
:param kwargs: Other optional args to be passed to the CI method.
:returns: The obtained confidence interval.
"""
if method is None:
if self._confidence_intervals:
method = self._confidence_intervals # this is either True or methodname
else:
method = "default"
confidence_intervals = None
if confidence_level is None:
confidence_level = self.confidence_level
if method == "default" or method is True: # user has not provided any method
try:
confidence_intervals = self._estimate_confidence_intervals(confidence_level, method=method, **kwargs)
except NotImplementedError:
confidence_intervals = self._estimate_confidence_intervals_with_bootstrap(
estimate_value, confidence_level, **kwargs
)
else:
if method == "bootstrap":
confidence_intervals = self._estimate_confidence_intervals_with_bootstrap(
estimate_value, confidence_level, **kwargs
)
else:
confidence_intervals = self._estimate_confidence_intervals(confidence_level, method=method, **kwargs)
return confidence_intervals
def _estimate_std_error_with_bootstrap(self, num_simulations=None, sample_size_fraction=None):
"""Compute standard error using the bootstrap method. Standard error
and confidence intervals use the same parameter num_simulations for
the number of bootstrap simulations.
:param num_simulations: Number of bootstrapped samples.
:param sample_size_fraction: Fraction of data to be resampled.
:returns: Standard error of the obtained estimate.
"""
# Use existing params, if new user defined params are not present
if num_simulations is None:
num_simulations = self.num_simulations
if sample_size_fraction is None:
sample_size_fraction = self.sample_size_fraction
# Checking if bootstrap_estimates are already computed
if self._bootstrap_estimates is None:
self._bootstrap_estimates = self._generate_bootstrap_estimates(num_simulations, sample_size_fraction)
elif CausalEstimator.is_bootstrap_parameter_changed(self._bootstrap_estimates.params, locals()):
# Check if any parameter is changed from the previous std error estimate
self._bootstrap_estimates = self._generate_bootstrap_estimates(num_simulations, sample_size_fraction)
std_error = np.std(self._bootstrap_estimates.estimates)
return std_error
def _estimate_std_error(self, method=None, **kwargs):
"""
This method is to be overriden by the child classes, so that they
can run a standard error estimation method suited to the specific
causal estimator.
"""
raise NotImplementedError(
(
"This method for estimating standard errors is "
+ CausalEstimator.DEFAULT_NOTIMPLEMENTEDERROR_MSG
+ " Meanwhile, you can try the bootstrap method (method='bootstrap') to estimate standard errors."
).format(self.__class__)
)
def estimate_std_error(self, method=None, **kwargs):
"""Compute standard error of an obtained causal estimate.
:param method: Method for computing the standard error.
:param kwargs: Other optional parameters to be passed to the estimating method.
:returns: Standard error of the causal estimate.
"""
if method is None:
if self._confidence_intervals:
method = self._confidence_intervals
else:
method = "default"
std_error = None
if method == "default" or method is True: # user has not provided any method
try:
std_error = self._estimate_std_error(method, **kwargs)
except NotImplementedError:
std_error = self._estimate_std_error_with_bootstrap(**kwargs)
else:
if method == "bootstrap":
std_error = self._estimate_std_error_with_bootstrap(**kwargs)
else:
std_error = self._estimate_std_error(method, **kwargs)
return std_error
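    # e.g. (hedged sketch, with `estimator` an already-constructed child
    # estimator):
    #
    #   se = estimator.estimate_std_error(method="bootstrap", num_simulations=50)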
def _test_significance_with_bootstrap(self, estimate_value, num_null_simulations=None):
"""Test statistical significance of an estimate using the bootstrap method.
:param estimate_value: Obtained estimate's value
:param num_null_simulations: Number of simulations for the null hypothesis
:returns: p-value of the statistical significance test.
"""
# Use existing params, if new user defined params are not present
if num_null_simulations is None:
num_null_simulations = self.num_null_simulations
do_retest = self._bootstrap_null_estimates is None or CausalEstimator.is_bootstrap_parameter_changed(
self._bootstrap_null_estimates.params, locals()
)
if do_retest:
null_estimates = np.zeros(num_null_simulations)
for i in range(num_null_simulations):
new_outcome = np.random.permutation(self._outcome)
new_data = self._data.assign(dummy_outcome=new_outcome)
# self._outcome = self._data["dummy_outcome"]
new_estimator = type(self)(
new_data,
self._target_estimand,
self._target_estimand.treatment_variable,
("dummy_outcome",),
test_significance=False,
evaluate_effect_strength=False,
confidence_intervals=False,
target_units=self._target_units,
effect_modifiers=self._effect_modifier_names,
**self.method_params,
)
new_effect = new_estimator.estimate_effect()
null_estimates[i] = new_effect.value
self._bootstrap_null_estimates = CausalEstimator.BootstrapEstimates(
null_estimates, {"num_null_simulations": num_null_simulations, "sample_size_fraction": 1}
)
# Processing the null hypothesis estimates
sorted_null_estimates = np.sort(self._bootstrap_null_estimates.estimates)
self.logger.debug("Null estimates: {0}".format(sorted_null_estimates))
median_estimate = sorted_null_estimates[int(num_null_simulations / 2)]
# Doing a two-sided test
if estimate_value > median_estimate:
# Being conservative with the p-value reported
estimate_index = np.searchsorted(sorted_null_estimates, estimate_value, side="left")
p_value = 1 - (estimate_index / num_null_simulations)
if estimate_value <= median_estimate:
# Being conservative with the p-value reported
estimate_index = np.searchsorted(sorted_null_estimates, estimate_value, side="right")
p_value = estimate_index / num_null_simulations
# If the estimate_index is 0, it depends on the number of simulations
if p_value == 0:
p_value = (0, 1 / len(sorted_null_estimates)) # a tuple determining the range.
elif p_value == 1:
p_value = (1 - 1 / len(sorted_null_estimates), 1)
signif_dict = {"p_value": p_value}
return signif_dict
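    # Toy illustration of the two-sided p-value rule above:
    #
    #   sorted_null = np.sort([-0.3, -0.1, 0.0, 0.2, 0.4])     # median is 0.0
    #   idx = np.searchsorted(sorted_null, 0.35, side="left")  # -> 4
    #   p_value = 1 - idx / 5                                  # -> 0.2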
def _test_significance(self, estimate_value, method=None, **kwargs):
"""
This method is to be overriden by the child classes, so that they
can run a significance test suited to the specific
causal estimator.
"""
raise NotImplementedError(
(
"This method for testing statistical significance is "
+ CausalEstimator.DEFAULT_NOTIMPLEMENTEDERROR_MSG
+ " Meanwhile, you can try the bootstrap method (method='bootstrap') to test statistical significance."
).format(self.__class__)
)
def test_significance(self, estimate_value, method=None, **kwargs):
"""Test statistical significance of obtained estimate.
By default, uses resampling to create a non-parametric significance test.
        This is a general procedure; individual child estimators can implement different methods.
If the method name is different from "bootstrap", this function calls the
implementation of the child estimator.
:param self: object instance of class Estimator
:param estimate_value: obtained estimate's value
:param method: Method for checking statistical significance
:returns: p-value from the significance test
"""
if method is None:
if self._significance_test:
method = self._significance_test # this is either True or methodname
else:
method = "default"
signif_dict = None
if method == "default" or method is True: # user has not provided any method
try:
signif_dict = self._test_significance(estimate_value, method, **kwargs)
except NotImplementedError:
signif_dict = self._test_significance_with_bootstrap(estimate_value, **kwargs)
else:
if method == "bootstrap":
signif_dict = self._test_significance_with_bootstrap(estimate_value, **kwargs)
else:
signif_dict = self._test_significance(estimate_value, method, **kwargs)
return signif_dict
def evaluate_effect_strength(self, estimate):
fraction_effect_explained = self._evaluate_effect_strength(estimate, method="fraction-effect")
# Need to test r-squared before supporting
# effect_r_squared = self._evaluate_effect_strength(estimate, method="r-squared")
strength_dict = {
"fraction-effect": fraction_effect_explained
# 'r-squared': effect_r_squared
}
return strength_dict
def _evaluate_effect_strength(self, estimate, method="fraction-effect"):
supported_methods = ["fraction-effect"]
if method not in supported_methods:
raise NotImplementedError("This method is not supported for evaluating effect strength")
if method == "fraction-effect":
naive_obs_estimate = self.estimate_effect_naive()
self.logger.debug(estimate.value, naive_obs_estimate.value)
fraction_effect_explained = estimate.value / naive_obs_estimate.value
return fraction_effect_explained
# elif method == "r-squared":
# outcome_mean = np.mean(self._outcome)
# total_variance = np.sum(np.square(self._outcome - outcome_mean))
# Assuming a linear model with one variable: the treatment
# Currently only works for continuous y
# causal_model = outcome_mean + estimate.value*self._treatment
# squared_residual = np.sum(np.square(self._outcome - causal_model))
# r_squared = 1 - (squared_residual/total_variance)
# return r_squared
else:
return None
def update_input(self, treatment_value, control_value, target_units):
self._control_value = control_value
self._treatment_value = treatment_value
self._target_units = target_units
@staticmethod
def is_bootstrap_parameter_changed(bootstrap_estimates_params, given_params):
"""Check whether parameters of the bootstrap have changed.
This is an efficiency method that checks if fresh resampling of the bootstrap samples is required.
Returns True if parameters have changed and resampling should be done again.
:param bootstrap_estimates_params: A dictionary of parameters for the current bootstrap samples
:param given_params: A dictionary of parameters passed by the user
:returns: A binary flag denoting whether the parameters are different.
"""
is_any_parameter_changed = False
for prm, val in bootstrap_estimates_params.items():
given_val = given_params.get(prm, None)
if given_val is not None and given_val != val:
is_any_parameter_changed = True
break
return is_any_parameter_changed
def target_units_tostr(self):
s = ""
if type(self._target_units) is str:
s += self._target_units
elif callable(self._target_units):
s += "Data subset defined by a function"
elif isinstance(self._target_units, pd.DataFrame):
s += "Data subset provided as a data frame"
return s
def signif_results_tostr(self, signif_results):
s = ""
pval = signif_results["p_value"]
if type(pval) is tuple:
s += "[{0}, {1}]".format(pval[0], pval[1])
else:
s += "{0}".format(pval)
return s
def estimate_effect(
treatment: Union[str, List[str]],
outcome: Union[str, List[str]],
identified_estimand: IdentifiedEstimand,
identifier_name: str,
method: CausalEstimator,
control_value: int = 0,
treatment_value: int = 1,
test_significance: Optional[bool] = None,
evaluate_effect_strength: bool = False,
confidence_intervals: bool = False,
target_units: str = "ate",
effect_modifiers: List[str] = [],
fit_estimator: bool = True,
method_params: Optional[Dict] = None,
):
"""Estimate the identified causal effect.
    Currently requires an explicit method name to be specified. Method names follow the convention of identification method followed by the specific estimation method: "[backdoor/iv].estimation_method_name". The following methods are supported.
* Propensity Score Matching: "backdoor.propensity_score_matching"
* Propensity Score Stratification: "backdoor.propensity_score_stratification"
* Propensity Score-based Inverse Weighting: "backdoor.propensity_score_weighting"
* Linear Regression: "backdoor.linear_regression"
* Generalized Linear Models (e.g., logistic regression): "backdoor.generalized_linear_model"
* Instrumental Variables: "iv.instrumental_variable"
* Regression Discontinuity: "iv.regression_discontinuity"
In addition, you can directly call any of the EconML estimation methods. The convention is "backdoor.econml.path-to-estimator-class". For example, for the double machine learning estimator ("DML" class) that is located inside "dml" module of EconML, you can use the method name, "backdoor.econml.dml.DML". CausalML estimators can also be called. See `this demo notebook <https://py-why.github.io/dowhy/example_notebooks/dowhy-conditional-treatment-effects.html>`_.
:param treatment: Name of the treatment
:param outcome: Name of the outcome
:param identified_estimand: a probability expression
that represents the effect to be estimated. Output of
CausalModel.identify_effect method
    :param identifier_name: name of the identification method (e.g., "backdoor") whose estimand should be used.
    :param method: an instance of the estimator class (a CausalEstimator subclass) to be used for estimation.
:param control_value: Value of the treatment in the control group, for effect estimation. If treatment is multi-variate, this can be a list.
:param treatment_value: Value of the treatment in the treated group, for effect estimation. If treatment is multi-variate, this can be a list.
    :param test_significance: Binary flag on whether to additionally do a statistical significance test for the estimate.
:param evaluate_effect_strength: (Experimental) Binary flag on whether to estimate the relative strength of the treatment's effect. This measure can be used to compare different treatments for the same outcome (by running this method with different treatments sequentially).
:param confidence_intervals: (Experimental) Binary flag indicating whether confidence intervals should be computed.
:param target_units: (Experimental) The units for which the treatment effect should be estimated. This can be of three types. (1) a string for common specifications of target units (namely, "ate", "att" and "atc"), (2) a lambda function that can be used as an index for the data (pandas DataFrame), or (3) a new DataFrame that contains values of the effect_modifiers and effect will be estimated only for this new data.
:param effect_modifiers: Names of effect modifier variables can be (optionally) specified here too, since they do not affect identification. If None, the effect_modifiers from the CausalModel are used.
:param fit_estimator: Boolean flag on whether to fit the estimator.
Setting it to False is useful to estimate the effect on new data using a previously fitted estimator.
:param method_params: Dictionary containing any method-specific parameters. These are passed directly to the estimating method. See the docs for each estimation method for allowed method-specific params.
:returns: An instance of the CausalEstimate class, containing the causal effect estimate
and other method-dependent information
"""
treatment = parse_state(treatment)
outcome = parse_state(outcome)
causal_estimator_class = method.__class__
identified_estimand.set_identifier_method(identifier_name)
if identified_estimand.no_directed_path:
logger.warning("No directed path from {0} to {1}.".format(treatment, outcome))
return CausalEstimate(
0, identified_estimand, None, control_value=control_value, treatment_value=treatment_value
)
# Check if estimator's target estimand is identified
elif identified_estimand.estimands[identifier_name] is None:
logger.error("No valid identified estimand available.")
return CausalEstimate(None, None, None, control_value=control_value, treatment_value=treatment_value)
method.update_input(treatment_value, control_value, target_units)
estimate = method.estimate_effect()
# Store parameters inside estimate object for refutation methods
# TODO: This add_params needs to move to the estimator class
# inside estimate_effect and estimate_conditional_effect
estimate.add_params(
estimand_type=identified_estimand.estimand_type,
estimator_class=causal_estimator_class,
test_significance=test_significance,
evaluate_effect_strength=evaluate_effect_strength,
confidence_intervals=confidence_intervals,
target_units=target_units,
effect_modifiers=effect_modifiers,
method_params=method_params,
)
return estimate
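# Usage sketch of this functional form (hedged): assuming an identified
# estimand from the identification step and a concrete estimator instance
# `psm_estimator` constructed on the same data,
#
#   estimate = estimate_effect(
#       treatment="v0",
#       outcome="y",
#       identified_estimand=identified_estimand,
#       identifier_name="backdoor",
#       method=psm_estimator,
#   )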
class CausalEstimate:
"""Class for the estimate object that every causal estimator returns"""
def __init__(
self,
estimate,
target_estimand,
realized_estimand_expr,
control_value,
treatment_value,
conditional_estimates=None,
**kwargs,
):
self.value = estimate
self.target_estimand = target_estimand
self.realized_estimand_expr = realized_estimand_expr
self.control_value = control_value
self.treatment_value = treatment_value
self.conditional_estimates = conditional_estimates
self.params = kwargs
if self.params is not None:
for key, value in self.params.items():
setattr(self, key, value)
self.effect_strength = None
def add_estimator(self, estimator_instance):
self.estimator = estimator_instance
def add_effect_strength(self, strength_dict):
self.effect_strength = strength_dict
def add_params(self, **kwargs):
self.params.update(kwargs)
def get_confidence_intervals(self, confidence_level=None, method=None, **kwargs):
"""Get confidence intervals of the obtained estimate.
By default, this is done with the help of bootstrapped confidence intervals
but can be overridden if the specific estimator implements other methods of estimating confidence intervals.
If the method provided is not bootstrap, this function calls the implementation of the specific estimator.
:param method: Method for estimating confidence intervals.
:param confidence_level: The confidence level of the confidence intervals of the estimate.
:param kwargs: Other optional args to be passed to the CI method.
:returns: The obtained confidence interval.
"""
confidence_intervals = self.estimator.estimate_confidence_intervals(
estimate_value=self.value, confidence_level=confidence_level, method=method, **kwargs
)
return confidence_intervals
def get_standard_error(self, method=None, **kwargs):
"""Get standard error of the obtained estimate.
By default, this is done with the help of bootstrapped standard errors
but can be overridden if the specific estimator implements other methods of estimating standard error.
If the method provided is not bootstrap, this function calls the implementation of the specific estimator.
:param method: Method for computing the standard error.
:param kwargs: Other optional parameters to be passed to the estimating method.
:returns: Standard error of the causal estimate.
"""
std_error = self.estimator.estimate_std_error(method=method, **kwargs)
return std_error
def test_stat_significance(self, method=None, **kwargs):
"""Test statistical significance of the estimate obtained.
By default, uses resampling to create a non-parametric significance test.
Individual child estimators can implement different methods.
If the method name is different from "bootstrap", this function calls the
implementation of the child estimator.
:param method: Method for checking statistical significance
:param kwargs: Other optional parameters to be passed to the estimating method.
:returns: p-value from the significance test
"""
signif_results = self.estimator.test_significance(self.value, method=method, **kwargs)
return {"p_value": signif_results["p_value"]}
def estimate_conditional_effects(
self, effect_modifiers=None, num_quantiles=CausalEstimator.NUM_QUANTILES_TO_DISCRETIZE_CONT_COLS
):
"""Estimate treatment effect conditioned on given variables.
If a numeric effect modifier is provided, it is discretized into quantile bins. If you would like a custom discretization, you can do so yourself: create a new column containing the discretized effect modifier and then include that column's name in the effect_modifier_names argument.
:param effect_modifiers: Names of effect modifier variables over which the conditional effects will be estimated. If not provided, defaults to the effect modifiers specified during creation of the CausalEstimator object.
:param num_quantiles: The number of quantiles into which a numeric effect modifier variable is discretized. Does not affect any categorical effect modifiers.
:returns: A (multi-index) dataframe that provides separate effects for each value of the (discretized) effect modifiers.
"""
return self.estimator._estimate_conditional_effects(
self.estimator._estimate_effect_fn, effect_modifiers, num_quantiles
)
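    # e.g. (sketch): separate effects over an effect modifier "X0", in 4 bins:
    #
    #   cond = estimate.estimate_conditional_effects(
    #       effect_modifiers=["X0"], num_quantiles=4
    #   )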
def interpret(self, method_name=None, **kwargs):
"""Interpret the causal estimate.
:param method_name: Method used (string) or a list of methods. If None, then the default for the specific estimator is used.
        :param kwargs: Optional parameters that are directly passed to the interpreter method.
:returns: None
"""
if method_name is None:
method_name = self.estimator.interpret_method
method_name_arr = parse_state(method_name)
for method in method_name_arr:
interpreter = interpreters.get_class_object(method)
interpreter(self, **kwargs).interpret()
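    # e.g. (sketch): the default textual interpreter can also be requested
    # explicitly:
    #
    #   estimate.interpret(method_name="textual_effect_interpreter")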
def __str__(self):
s = "*** Causal Estimate ***\n"
# No estimand was identified (identification failed)
if self.target_estimand is None:
return "Estimation failed! No relevant identified estimand available for this estimation method."
s += "\n## Identified estimand\n{0}".format(self.target_estimand.__str__(only_target_estimand=True))
s += "\n## Realized estimand\n{0}".format(self.realized_estimand_expr)
if hasattr(self, "estimator"):
s += "\nTarget units: {0}\n".format(self.estimator.target_units_tostr())
s += "\n## Estimate\n"
s += "Mean value: {0}\n".format(self.value)
s += ""
if hasattr(self, "cate_estimates"):
s += "Effect estimates: {0}\n".format(self.cate_estimates)
if hasattr(self, "estimator"):
if self.estimator._significance_test:
s += "p-value: {0}\n".format(self.estimator.signif_results_tostr(self.test_stat_significance()))
if self.estimator._confidence_intervals:
s += "{0}% confidence interval: {1}\n".format(
100 * self.estimator.confidence_level, self.get_confidence_intervals()
)
if self.conditional_estimates is not None:
s += "### Conditional Estimates\n"
s += str(self.conditional_estimates)
if self.effect_strength is not None:
s += "\n## Effect Strength\n"
s += "Change in outcome attributable to treatment: {}\n".format(self.effect_strength["fraction-effect"])
# s += "Variance in outcome explained by treatment: {}\n".format(self.effect_strength["r-squared"])
return s
class RealizedEstimand(object):
def __init__(self, identified_estimand, estimator_name):
self.treatment_variable = identified_estimand.treatment_variable
self.outcome_variable = identified_estimand.outcome_variable
self.backdoor_variables = identified_estimand.get_backdoor_variables()
self.instrumental_variables = identified_estimand.instrumental_variables
self.estimand_type = identified_estimand.estimand_type
self.estimand_expression = None
self.assumptions = None
self.estimator_name = estimator_name
def update_assumptions(self, estimator_assumptions):
self.assumptions = estimator_assumptions
def update_estimand_expression(self, estimand_expression):
self.estimand_expression = estimand_expression
def __str__(self):
s = "Realized estimand: {0}\n".format(self.estimator_name)
s += "Realized estimand type: {0}\n".format(self.estimand_type)
s += "Estimand expression:\n{0}\n".format(sp.pretty(self.estimand_expression))
j = 1
for ass_name, ass_str in self.assumptions.items():
s += "Estimand assumption {0}, {1}: {2}\n".format(j, ass_name, ass_str)
j += 1
return s
| import copy
import logging
from collections import namedtuple
from typing import Dict, List, Optional, Union
import numpy as np
import pandas as pd
import sympy as sp
from sklearn.utils import resample
import dowhy.interpreters as interpreters
from dowhy.causal_identifier.identified_estimand import IdentifiedEstimand
from dowhy.utils.api import parse_state
logger = logging.getLogger(__name__)
class CausalEstimator:
"""Base class for an estimator of causal effect.
Subclasses implement different estimation methods. All estimation methods are in the package "dowhy.causal_estimators"
"""
# The default number of simulations for statistical testing
DEFAULT_NUMBER_OF_SIMULATIONS_STAT_TEST = 1000
# The default number of simulations to obtain confidence intervals
DEFAULT_NUMBER_OF_SIMULATIONS_CI = 100
# The portion of the total size that should be taken each time to find the confidence intervals
# 1 is the recommended value
# https://ocw.mit.edu/courses/mathematics/18-05-introduction-to-probability-and-statistics-spring-2014/readings/MIT18_05S14_Reading24.pdf
# https://projecteuclid.org/download/pdf_1/euclid.ss/1032280214
DEFAULT_SAMPLE_SIZE_FRACTION = 1
# The default Confidence Level
DEFAULT_CONFIDENCE_LEVEL = 0.95
# Number of quantiles to discretize continuous columns, for applying groupby
NUM_QUANTILES_TO_DISCRETIZE_CONT_COLS = 5
# Prefix to add to temporary categorical variables created after discretization
TEMP_CAT_COLUMN_PREFIX = "__categorical__"
    DEFAULT_NOTIMPLEMENTEDERROR_MSG = "not yet implemented for {0}. If you would like this to be implemented in the next version, please raise an issue at https://github.com/microsoft/dowhy/issues"
BootstrapEstimates = namedtuple("BootstrapEstimates", ["estimates", "params"])
DEFAULT_INTERPRET_METHOD = ["textual_effect_interpreter"]
# std args to be removed from locals() before being passed to args_dict
_STD_INIT_ARGS = ("self", "__class__", "args", "kwargs")
def __init__(
self,
identified_estimand: IdentifiedEstimand,
test_significance: bool = False,
evaluate_effect_strength: bool = False,
confidence_intervals: bool = False,
num_null_simulations: int = DEFAULT_NUMBER_OF_SIMULATIONS_STAT_TEST,
num_simulations: int = DEFAULT_NUMBER_OF_SIMULATIONS_CI,
        sample_size_fraction: float = DEFAULT_SAMPLE_SIZE_FRACTION,
confidence_level: float = DEFAULT_CONFIDENCE_LEVEL,
need_conditional_estimates: Union[bool, str] = "auto",
num_quantiles_to_discretize_cont_cols: int = NUM_QUANTILES_TO_DISCRETIZE_CONT_COLS,
**_,
):
"""Initializes an estimator with data and names of relevant variables.
This method is called from the constructors of its child classes.
:param identified_estimand: probability expression
representing the target identified estimand to estimate.
:param test_significance: Binary flag or a string indicating whether to test significance and by which method. All estimators support test_significance="bootstrap" that estimates a p-value for the obtained estimate using the bootstrap method. Individual estimators can override this to support custom testing methods. The bootstrap method supports an optional parameter, num_null_simulations. If False, no testing is done. If True, significance of the estimate is tested using the custom method if available, otherwise by bootstrap.
:param evaluate_effect_strength: (Experimental) whether to evaluate the strength of effect
:param confidence_intervals: Binary flag or a string indicating whether the confidence intervals should be computed and which method should be used. All methods support estimation of confidence intervals using the bootstrap method by using the parameter confidence_intervals="bootstrap". The bootstrap method takes in two arguments (num_simulations and sample_size_fraction) that can be optionally specified in the params dictionary. Estimators may also override this to implement their own confidence interval method. If this parameter is False, no confidence intervals are computed. If True, confidence intervals are computed by the estimator's specific method if available, otherwise through bootstrap
:param num_null_simulations: The number of simulations for testing the
statistical significance of the estimator
:param num_simulations: The number of simulations for finding the
            confidence interval (and/or standard error) for an estimate
:param sample_size_fraction: The size of the sample for the bootstrap
estimator
:param confidence_level: The confidence level of the confidence
interval estimate
:param need_conditional_estimates: Boolean flag indicating whether
conditional estimates should be computed. Defaults to True if
there are effect modifiers in the graph
:param num_quantiles_to_discretize_cont_cols: The number of quantiles
into which a numeric effect modifier is split, to enable
estimation of conditional treatment effect over it.
:param kwargs: (optional) Additional estimator-specific parameters
:returns: an instance of the estimator class.
"""
self._target_estimand = identified_estimand
self._significance_test = test_significance
self._effect_strength_eval = evaluate_effect_strength
self._confidence_intervals = confidence_intervals
# Setting the default interpret method
self.interpret_method = CausalEstimator.DEFAULT_INTERPRET_METHOD
self.logger = logging.getLogger(__name__)
# Check if some parameters were set, otherwise set to default values
self.num_null_simulations = num_null_simulations
self.num_simulations = num_simulations
self.sample_size_fraction = sample_size_fraction
self.confidence_level = confidence_level
self.num_quantiles_to_discretize_cont_cols = num_quantiles_to_discretize_cont_cols
# Estimate conditional estimates by default
self.need_conditional_estimates = need_conditional_estimates
self._bootstrap_estimates = None
self._bootstrap_null_estimates = None
def _set_data(self, data: pd.DataFrame, treatment_name: List[str], outcome_name: List[str]):
"""Sets the data for the estimator
:param data: data frame containing the data
:param treatment_name: name of the treatment variable
:param outcome_name: name of the outcome variable
"""
self._data = data
self._treatment_name = treatment_name
self._outcome_name = outcome_name[0]
self._treatment = self._data[self._treatment_name]
self._outcome = self._data[self._outcome_name]
def _set_effect_modifiers(self, effect_modifier_names: Optional[List[str]] = None):
"""Sets the effect modifiers for the estimator
        Modifies need_conditional_estimates according to the effect modifiers value
        :param effect_modifier_names: Variables on which to compute separate
effects, or return a heterogeneous effect function. Not all
methods support this currently.
"""
self._effect_modifiers = effect_modifier_names
if effect_modifier_names is not None:
self._effect_modifier_names = [cname for cname in effect_modifier_names if cname in self._data.columns]
if len(self._effect_modifier_names) > 0:
self._effect_modifiers = self._data[self._effect_modifier_names]
self._effect_modifiers = pd.get_dummies(self._effect_modifiers, drop_first=True)
self.logger.debug("Effect modifiers: " + ",".join(self._effect_modifier_names))
else:
self._effect_modifier_names = []
else:
self._effect_modifier_names = []
self.need_conditional_estimates = (
self.need_conditional_estimates
if self.need_conditional_estimates != "auto"
else (self._effect_modifier_names and len(self._effect_modifier_names) > 0)
)
def _set_identified_estimand(self, new_identified_estimand):
"""Method used internally to change the target estimand (required by some refuters)
:param new_identified_estimand: The new target_estimand to use
"""
self._target_estimand = new_identified_estimand
def get_new_estimator_object(
self,
identified_estimand,
test_significance=False,
evaluate_effect_strength=False,
confidence_intervals=None,
):
"""Create a new estimator of the same type as the one passed in the estimate argument.
Creates a new object with the identified_estimand
:param identified_estimand: IdentifiedEstimand
An instance of the identified estimand class that provides the information with
            respect to which causal pathways are employed when the treatment affects the outcome
:returns: A new instance of the same estimator class that had generated the given estimate.
"""
new_estimator = copy.deepcopy(self)
new_estimator._target_estimand = identified_estimand
        new_estimator._significance_test = test_significance
        new_estimator._effect_strength_eval = evaluate_effect_strength
new_estimator._confidence_intervals = (
self._confidence_intervals if confidence_intervals is None else confidence_intervals
)
return new_estimator
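    # Refutation sketch (hedged): refuters re-run the same estimator against a
    # modified estimand/data using this method together with the new fit():
    #
    #   new_est = estimator.get_new_estimator_object(new_estimand)
    #   new_est.fit(new_data, new_estimand.treatment_variable,
    #               new_estimand.outcome_variable)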
def estimate_effect_naive(self):
# TODO Only works for binary treatment
df_withtreatment = self._data.loc[self._data[self._treatment_name] == 1]
df_notreatment = self._data.loc[self._data[self._treatment_name] == 0]
est = np.mean(df_withtreatment[self._outcome_name]) - np.mean(df_notreatment[self._outcome_name])
return CausalEstimate(est, None, None, control_value=0, treatment_value=1)
def _estimate_effect_fn(self, data_df):
"""Function used in conditional effect estimation. This function is to be overridden by each child estimator.
The overridden function should take in a dataframe as input and return the estimate for that data.
"""
raise NotImplementedError(
("Conditional treatment effects are " + CausalEstimator.DEFAULT_NOTIMPLEMENTEDERROR_MSG).format(
self.__class__
)
)
def _estimate_conditional_effects(self, estimate_effect_fn, effect_modifier_names=None, num_quantiles=None):
"""Estimate conditional treatment effects. Common method for all estimators that utilizes a specific estimate_effect_fn implemented by each child estimator.
If a numeric effect modifier is provided, it is discretized into quantile bins. If you would like a custom discretization, you can do so yourself: create a new column containing the discretized effect modifier and then include that column's name in the effect_modifier_names argument.
:param estimate_effect_fn: Function that has a single parameter (a data frame) and returns the treatment effect estimate on that data.
:param effect_modifier_names: Names of effect modifier variables over which the conditional effects will be estimated. If not provided, defaults to the effect modifiers specified during creation of the CausalEstimator object.
:param num_quantiles: The number of quantiles into which a numeric effect modifier variable is discretized. Does not affect any categorical effect modifiers.
:returns: A (multi-index) dataframe that provides separate effects for each value of the (discretized) effect modifiers.
"""
# Defaulting to class default values if parameters are not provided
if effect_modifier_names is None:
effect_modifier_names = self._effect_modifier_names
if num_quantiles is None:
num_quantiles = self.num_quantiles_to_discretize_cont_cols
# Checking that there is at least one effect modifier
if not effect_modifier_names:
raise ValueError("At least one effect modifier should be specified to compute conditional effects.")
# Making sure that effect_modifier_names is a list
effect_modifier_names = parse_state(effect_modifier_names)
if not all(em in self._effect_modifier_names for em in effect_modifier_names):
            self.logger.warning(
"At least one of the provided effect modifiers was not included while fitting the estimator. You may get incorrect results. To resolve, fit the estimator again by providing the updated effect modifiers in estimate_effect()."
)
# Making a copy since we are going to be changing effect modifier names
effect_modifier_names = effect_modifier_names.copy()
prefix = CausalEstimator.TEMP_CAT_COLUMN_PREFIX
# For every numeric effect modifier, adding a temp categorical column
for i in range(len(effect_modifier_names)):
em = effect_modifier_names[i]
if pd.api.types.is_numeric_dtype(self._data[em].dtypes):
self._data[prefix + str(em)] = pd.qcut(self._data[em], num_quantiles, duplicates="drop")
effect_modifier_names[i] = prefix + str(em)
# Grouping by effect modifiers and computing effect separately
by_effect_mods = self._data.groupby(effect_modifier_names)
cond_est_fn = lambda x: self._do(self._treatment_value, x) - self._do(self._control_value, x)
conditional_estimates = by_effect_mods.apply(estimate_effect_fn)
# Deleting the temporary categorical columns
for em in effect_modifier_names:
if em.startswith(prefix):
self._data.pop(em)
return conditional_estimates
def _do(self, x, data_df=None):
raise NotImplementedError(
("Do-operator is " + CausalEstimator.DEFAULT_NOTIMPLEMENTEDERROR_MSG).format(self.__class__)
)
def do(self, x, data_df=None):
"""Method that implements the do-operator.
Given a value x for the treatment, returns the expected value of the outcome when the treatment is intervened to a value x.
:param x: Value of the treatment
:param data_df: Data on which the do-operator is to be applied.
:returns: Value of the outcome when treatment is intervened/set to x.
"""
est = self._do(x, data_df)
return est
def construct_symbolic_estimator(self, estimand):
        raise NotImplementedError(
            ("Symbolic estimator string is " + CausalEstimator.DEFAULT_NOTIMPLEMENTEDERROR_MSG).format(self.__class__)
        )
def _generate_bootstrap_estimates(self, num_bootstrap_simulations, sample_size_fraction):
"""Helper function to generate causal estimates over bootstrapped samples.
:param num_bootstrap_simulations: Number of simulations for the bootstrap method.
:param sample_size_fraction: Fraction of the dataset to be resampled.
:returns: A collections.namedtuple containing a list of bootstrapped estimates and a dictionary containing parameters used for the bootstrap.
"""
# The array that stores the results of all estimations
simulation_results = np.zeros(num_bootstrap_simulations)
        # Find the sample size as a proportion of the population size
sample_size = int(sample_size_fraction * len(self._data))
if sample_size > len(self._data):
self.logger.warning("WARN: The sample size is greater than the data being sampled")
self.logger.info("INFO: The sample size: {}".format(sample_size))
self.logger.info("INFO: The number of simulations: {}".format(num_bootstrap_simulations))
# Perform the set number of simulations
for index in range(num_bootstrap_simulations):
new_data = resample(self._data, n_samples=sample_size)
new_estimator = self.get_new_estimator_object(
self._target_estimand,
test_significance=False,
evaluate_effect_strength=False,
confidence_intervals=False,
)
new_estimator.fit(
new_data,
self._target_estimand.treatment_variable,
self._target_estimand.outcome_variable,
effect_modifier_names=self._effect_modifier_names,
)
new_effect = new_estimator.estimate_effect(
treatment_value=self._treatment_value,
control_value=self._control_value,
target_units=self._target_units,
)
simulation_results[index] = new_effect.value
estimates = CausalEstimator.BootstrapEstimates(
simulation_results,
{"num_simulations": num_bootstrap_simulations, "sample_size_fraction": sample_size_fraction},
)
return estimates
def _estimate_confidence_intervals_with_bootstrap(
self, estimate_value, confidence_level=None, num_simulations=None, sample_size_fraction=None
):
"""
Method to compute confidence interval using bootstrapped sampling.
:param estimate_value: obtained estimate's value
:param confidence_level: The level for which to compute CI (e.g., 95% confidence level translates to confidence_level=0.95)
:param num_simulations: The number of simulations to be performed to get the bootstrap confidence intervals.
:param sample_size_fraction: The fraction of the dataset to be resampled.
:returns: confidence interval at the specified level.
For more details on bootstrap or resampling statistics, refer to the following links:
https://ocw.mit.edu/courses/mathematics/18-05-introduction-to-probability-and-statistics-spring-2014/readings/MIT18_05S14_Reading24.pdf
https://projecteuclid.org/download/pdf_1/euclid.ss/1032280214
"""
# Using class default parameters if not specified
if num_simulations is None:
num_simulations = self.num_simulations
if sample_size_fraction is None:
sample_size_fraction = self.sample_size_fraction
# Checking if bootstrap_estimates are already computed
if self._bootstrap_estimates is None:
self._bootstrap_estimates = self._generate_bootstrap_estimates(num_simulations, sample_size_fraction)
elif CausalEstimator.is_bootstrap_parameter_changed(self._bootstrap_estimates.params, locals()):
# Checked if any parameter is changed from the previous std error estimate
self._bootstrap_estimates = self._generate_bootstrap_estimates(num_simulations, sample_size_fraction)
# Now use the data obtained from the simulations to get the value of the confidence estimates
bootstrap_estimates = self._bootstrap_estimates.estimates
# Get the variations of each bootstrap estimate and sort
bootstrap_variations = [bootstrap_estimate - estimate_value for bootstrap_estimate in bootstrap_estimates]
sorted_bootstrap_variations = np.sort(bootstrap_variations)
# Now we take the (1- p)th and the (p)th variations, where p is the chosen confidence level
upper_bound_index = int((1 - confidence_level) * len(sorted_bootstrap_variations))
lower_bound_index = int(confidence_level * len(sorted_bootstrap_variations))
# Get the lower and upper bounds by subtracting the variations from the estimate
lower_bound = estimate_value - sorted_bootstrap_variations[lower_bound_index]
upper_bound = estimate_value - sorted_bootstrap_variations[upper_bound_index]
return lower_bound, upper_bound
def _estimate_confidence_intervals(self, confidence_level=None, method=None, **kwargs):
"""
This method is to be overriden by the child classes, so that they
can run a confidence interval estimation method suited to the specific
causal estimator.
"""
raise NotImplementedError(
(
"This method for estimating confidence intervals is "
+ CausalEstimator.DEFAULT_NOTIMPLEMENTEDERROR_MSG
+ " Meanwhile, you can try the bootstrap method (method='bootstrap') to estimate confidence intervals."
).format(self.__class__)
)
def estimate_confidence_intervals(self, estimate_value, confidence_level=None, method=None, **kwargs):
"""Find the confidence intervals corresponding to any estimator
By default, this is done with the help of bootstrapped confidence intervals
but can be overridden if the specific estimator implements other methods of estimating confidence intervals.
If the method provided is not bootstrap, this function calls the implementation of the specific estimator.
:param estimate_value: obtained estimate's value
:param method: Method for estimating confidence intervals.
:param confidence_level: The confidence level of the confidence intervals of the estimate.
:param kwargs: Other optional args to be passed to the CI method.
:returns: The obtained confidence interval.
"""
if method is None:
if self._confidence_intervals:
method = self._confidence_intervals # this is either True or methodname
else:
method = "default"
confidence_intervals = None
if confidence_level is None:
confidence_level = self.confidence_level
if method == "default" or method is True: # user has not provided any method
try:
confidence_intervals = self._estimate_confidence_intervals(confidence_level, method=method, **kwargs)
except NotImplementedError:
confidence_intervals = self._estimate_confidence_intervals_with_bootstrap(
estimate_value, confidence_level, **kwargs
)
else:
if method == "bootstrap":
confidence_intervals = self._estimate_confidence_intervals_with_bootstrap(
estimate_value, confidence_level, **kwargs
)
else:
confidence_intervals = self._estimate_confidence_intervals(confidence_level, method=method, **kwargs)
return confidence_intervals
def _estimate_std_error_with_bootstrap(self, num_simulations=None, sample_size_fraction=None):
"""Compute standard error using the bootstrap method. Standard error
and confidence intervals use the same parameter num_simulations for
the number of bootstrap simulations.
:param num_simulations: Number of bootstrapped samples.
:param sample_size_fraction: Fraction of data to be resampled.
:returns: Standard error of the obtained estimate.
"""
# Use existing params, if new user defined params are not present
if num_simulations is None:
num_simulations = self.num_simulations
if sample_size_fraction is None:
sample_size_fraction = self.sample_size_fraction
# Checking if bootstrap_estimates are already computed
if self._bootstrap_estimates is None:
self._bootstrap_estimates = self._generate_bootstrap_estimates(num_simulations, sample_size_fraction)
elif CausalEstimator.is_bootstrap_parameter_changed(self._bootstrap_estimates.params, locals()):
# Check if any parameter is changed from the previous std error estimate
self._bootstrap_estimates = self._generate_bootstrap_estimates(num_simulations, sample_size_fraction)
std_error = np.std(self._bootstrap_estimates.estimates)
return std_error
def _estimate_std_error(self, method=None, **kwargs):
"""
This method is to be overriden by the child classes, so that they
can run a standard error estimation method suited to the specific
causal estimator.
"""
raise NotImplementedError(
(
"This method for estimating standard errors is "
+ CausalEstimator.DEFAULT_NOTIMPLEMENTEDERROR_MSG
+ " Meanwhile, you can try the bootstrap method (method='bootstrap') to estimate standard errors."
).format(self.__class__)
)
def estimate_std_error(self, method=None, **kwargs):
"""Compute standard error of an obtained causal estimate.
:param method: Method for computing the standard error.
:param kwargs: Other optional parameters to be passed to the estimating method.
:returns: Standard error of the causal estimate.
"""
if method is None:
if self._confidence_intervals:
method = self._confidence_intervals
else:
method = "default"
std_error = None
if method == "default" or method is True: # user has not provided any method
try:
std_error = self._estimate_std_error(method, **kwargs)
except NotImplementedError:
std_error = self._estimate_std_error_with_bootstrap(**kwargs)
else:
if method == "bootstrap":
std_error = self._estimate_std_error_with_bootstrap(**kwargs)
else:
std_error = self._estimate_std_error(method, **kwargs)
return std_error
def _test_significance_with_bootstrap(self, estimate_value, num_null_simulations=None):
"""Test statistical significance of an estimate using the bootstrap method.
:param estimate_value: Obtained estimate's value
:param num_null_simulations: Number of simulations for the null hypothesis
:returns: p-value of the statistical significance test.
"""
# Use existing params, if new user defined params are not present
if num_null_simulations is None:
num_null_simulations = self.num_null_simulations
do_retest = self._bootstrap_null_estimates is None or CausalEstimator.is_bootstrap_parameter_changed(
self._bootstrap_null_estimates.params, locals()
)
if do_retest:
null_estimates = np.zeros(num_null_simulations)
for i in range(num_null_simulations):
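                # Permute the outcome to simulate the null hypothesis of no causal effect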
new_outcome = np.random.permutation(self._outcome)
new_data = self._data.assign(dummy_outcome=new_outcome)
# self._outcome = self._data["dummy_outcome"]
new_estimator = self.get_new_estimator_object(
self._target_estimand,
test_significance=False,
evaluate_effect_strength=False,
confidence_intervals=False,
)
new_estimator.fit(
data=new_data,
treatment_name=self._target_estimand.treatment_variable,
outcome_name=("dummy_outcome",),
effect_modifier_names=self._effect_modifier_names,
)
new_effect = new_estimator.estimate_effect(
target_units=self._target_units,
)
null_estimates[i] = new_effect.value
self._bootstrap_null_estimates = CausalEstimator.BootstrapEstimates(
null_estimates, {"num_null_simulations": num_null_simulations, "sample_size_fraction": 1}
)
# Processing the null hypothesis estimates
sorted_null_estimates = np.sort(self._bootstrap_null_estimates.estimates)
self.logger.debug("Null estimates: {0}".format(sorted_null_estimates))
median_estimate = sorted_null_estimates[int(num_null_simulations / 2)]
# Doing a two-sided test
if estimate_value > median_estimate:
# Being conservative with the p-value reported
estimate_index = np.searchsorted(sorted_null_estimates, estimate_value, side="left")
p_value = 1 - (estimate_index / num_null_simulations)
        else:
# Being conservative with the p-value reported
estimate_index = np.searchsorted(sorted_null_estimates, estimate_value, side="right")
p_value = estimate_index / num_null_simulations
# If the estimate_index is 0, it depends on the number of simulations
if p_value == 0:
p_value = (0, 1 / len(sorted_null_estimates)) # a tuple determining the range.
elif p_value == 1:
p_value = (1 - 1 / len(sorted_null_estimates), 1)
signif_dict = {"p_value": p_value}
return signif_dict
def _test_significance(self, estimate_value, method=None, **kwargs):
"""
This method is to be overriden by the child classes, so that they
can run a significance test suited to the specific
causal estimator.
"""
raise NotImplementedError(
(
"This method for testing statistical significance is "
+ CausalEstimator.DEFAULT_NOTIMPLEMENTEDERROR_MSG
+ " Meanwhile, you can try the bootstrap method (method='bootstrap') to test statistical significance."
).format(self.__class__)
)
def test_significance(self, estimate_value, method=None, **kwargs):
"""Test statistical significance of obtained estimate.
By default, uses resampling to create a non-parametric significance test.
A general procedure. Individual child estimators can implement different methods.
If the method name is different from "bootstrap", this function calls the
implementation of the child estimator.
:param self: object instance of class Estimator
:param estimate_value: obtained estimate's value
:param method: Method for checking statistical significance
:returns: p-value from the significance test
"""
if method is None:
if self._significance_test:
method = self._significance_test # this is either True or methodname
else:
method = "default"
signif_dict = None
if method == "default" or method is True: # user has not provided any method
try:
signif_dict = self._test_significance(estimate_value, method, **kwargs)
except NotImplementedError:
signif_dict = self._test_significance_with_bootstrap(estimate_value, **kwargs)
else:
if method == "bootstrap":
signif_dict = self._test_significance_with_bootstrap(estimate_value, **kwargs)
else:
signif_dict = self._test_significance(estimate_value, method, **kwargs)
return signif_dict
def evaluate_effect_strength(self, estimate):
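        """Evaluate the strength of the estimated effect, currently as the fraction of the naive observed estimate that the causal estimate accounts for."""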
fraction_effect_explained = self._evaluate_effect_strength(estimate, method="fraction-effect")
# Need to test r-squared before supporting
# effect_r_squared = self._evaluate_effect_strength(estimate, method="r-squared")
strength_dict = {
"fraction-effect": fraction_effect_explained
# 'r-squared': effect_r_squared
}
return strength_dict
def _evaluate_effect_strength(self, estimate, method="fraction-effect"):
supported_methods = ["fraction-effect"]
if method not in supported_methods:
raise NotImplementedError("This method is not supported for evaluating effect strength")
if method == "fraction-effect":
naive_obs_estimate = self.estimate_effect_naive()
            self.logger.debug("Estimate: %s, Naive observed estimate: %s", estimate.value, naive_obs_estimate.value)
fraction_effect_explained = estimate.value / naive_obs_estimate.value
return fraction_effect_explained
# elif method == "r-squared":
# outcome_mean = np.mean(self._outcome)
# total_variance = np.sum(np.square(self._outcome - outcome_mean))
# Assuming a linear model with one variable: the treatment
# Currently only works for continuous y
# causal_model = outcome_mean + estimate.value*self._treatment
# squared_residual = np.sum(np.square(self._outcome - causal_model))
# r_squared = 1 - (squared_residual/total_variance)
# return r_squared
else:
return None
def update_input(self, treatment_value, control_value, target_units):
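        """Update the treatment value, control value and target units to be used for subsequent effect estimation."""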
self._control_value = control_value
self._treatment_value = treatment_value
self._target_units = target_units
@staticmethod
def is_bootstrap_parameter_changed(bootstrap_estimates_params, given_params):
"""Check whether parameters of the bootstrap have changed.
This is an efficiency method that checks if fresh resampling of the bootstrap samples is required.
Returns True if parameters have changed and resampling should be done again.
:param bootstrap_estimates_params: A dictionary of parameters for the current bootstrap samples
:param given_params: A dictionary of parameters passed by the user
:returns: A binary flag denoting whether the parameters are different.
"""
is_any_parameter_changed = False
for prm, val in bootstrap_estimates_params.items():
given_val = given_params.get(prm, None)
if given_val is not None and given_val != val:
is_any_parameter_changed = True
break
return is_any_parameter_changed
def target_units_tostr(self):
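        """Return a human-readable description of the target units."""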
s = ""
if type(self._target_units) is str:
s += self._target_units
elif callable(self._target_units):
s += "Data subset defined by a function"
elif isinstance(self._target_units, pd.DataFrame):
s += "Data subset provided as a data frame"
return s
def signif_results_tostr(self, signif_results):
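        """Format the results of a significance test (a p-value or a p-value range) as a string."""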
s = ""
pval = signif_results["p_value"]
if type(pval) is tuple:
s += "[{0}, {1}]".format(pval[0], pval[1])
else:
s += "{0}".format(pval)
return s
def estimate_effect(
data: pd.DataFrame,
treatment: Union[str, List[str]],
outcome: Union[str, List[str]],
identifier_name: str,
estimator: CausalEstimator,
control_value: int = 0,
treatment_value: int = 1,
target_units: str = "ate",
effect_modifiers: List[str] = None,
fit_estimator: bool = True,
method_params: Optional[Dict] = None,
):
"""Estimate the identified causal effect.
    You can directly call any of the EconML estimation methods. The convention is "backdoor.econml.path-to-estimator-class". For example, for the double machine learning estimator ("DML" class) that is located inside the "dml" module of EconML, you can use the method name, "backdoor.econml.dml.DML". CausalML estimators can also be called. See `this demo notebook <https://py-why.github.io/dowhy/example_notebooks/dowhy-conditional-treatment-effects.html>`_.
    :param data: data frame containing the data
    :param treatment: Name of the treatment
    :param outcome: Name of the outcome
    :param identifier_name: Name of the identification method (e.g., "backdoor") used to select
        the relevant estimand from the estimator's target estimand
:param estimator: Instance of a CausalEstimator to use
:param control_value: Value of the treatment in the control group, for effect estimation. If treatment is multi-variate, this can be a list.
:param treatment_value: Value of the treatment in the treated group, for effect estimation. If treatment is multi-variate, this can be a list.
:param target_units: (Experimental) The units for which the treatment effect should be estimated. This can be of three types. (1) a string for common specifications of target units (namely, "ate", "att" and "atc"), (2) a lambda function that can be used as an index for the data (pandas DataFrame), or (3) a new DataFrame that contains values of the effect_modifiers and effect will be estimated only for this new data.
:param effect_modifiers: Names of effect modifier variables can be (optionally) specified here too, since they do not affect identification. If None, the effect_modifiers from the CausalModel are used.
:param fit_estimator: Boolean flag on whether to fit the estimator.
        Setting it to False is useful to estimate the effect on new data using a previously fitted estimator.
    :param method_params: Optional dictionary of method-specific parameters. If it contains a "fit_params" key, those parameters are passed through to the estimator's fit() method.
:returns: An instance of the CausalEstimate class, containing the causal effect estimate
and other method-dependent information
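
    Example (a minimal sketch; the variable and column names are hypothetical, and ``estimator`` is any
    concrete CausalEstimator constructed with an identified estimand)::

        estimate = estimate_effect(
            data=df,
            treatment="v0",
            outcome="y",
            identifier_name="backdoor",
            estimator=estimator,
        )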
"""
if effect_modifiers is None:
effect_modifiers = []
treatment = parse_state(treatment)
outcome = parse_state(outcome)
causal_estimator_class = estimator.__class__
identified_estimand = estimator._target_estimand
identified_estimand.set_identifier_method(identifier_name)
if identified_estimand.no_directed_path:
logger.warning("No directed path from {0} to {1}.".format(treatment, outcome))
return CausalEstimate(
0, identified_estimand, None, control_value=control_value, treatment_value=treatment_value
)
# Check if estimator's target estimand is identified
elif identified_estimand.estimands[identifier_name] is None:
logger.error("No valid identified estimand available.")
return CausalEstimate(None, None, None, control_value=control_value, treatment_value=treatment_value)
if fit_estimator:
estimator.fit(
data=data,
treatment_name=treatment,
outcome_name=outcome,
effect_modifier_names=effect_modifiers,
**method_params["fit_params"] if "fit_params" in method_params else {},
)
estimate = estimator.estimate_effect(
treatment_value=treatment_value,
control_value=control_value,
target_units=target_units,
confidence_intervals=estimator._confidence_intervals,
)
# Store parameters inside estimate object for refutation methods
# TODO: This add_params needs to move to the estimator class
# inside estimate_effect and estimate_conditional_effect
estimate.add_params(
estimand_type=identified_estimand.estimand_type,
estimator_class=causal_estimator_class,
test_significance=estimator._significance_test,
evaluate_effect_strength=estimator._effect_strength_eval,
confidence_intervals=estimator._confidence_intervals,
target_units=target_units,
effect_modifiers=effect_modifiers,
)
if estimator._significance_test:
estimator.test_significance(estimate.value, method=estimator._significance_test)
if estimator._confidence_intervals:
estimator.estimate_confidence_intervals(
estimate.value, confidence_level=estimator.confidence_level, method=estimator._confidence_intervals
)
if estimator._effect_strength_eval:
effect_strength_dict = estimator.evaluate_effect_strength(estimate)
estimate.add_effect_strength(effect_strength_dict)
return estimate
class CausalEstimate:
"""Class for the estimate object that every causal estimator returns"""
def __init__(
self,
estimate,
target_estimand,
realized_estimand_expr,
control_value,
treatment_value,
conditional_estimates=None,
**kwargs,
):
self.value = estimate
self.target_estimand = target_estimand
self.realized_estimand_expr = realized_estimand_expr
self.control_value = control_value
self.treatment_value = treatment_value
self.conditional_estimates = conditional_estimates
self.params = kwargs
if self.params is not None:
for key, value in self.params.items():
setattr(self, key, value)
self.effect_strength = None
def add_estimator(self, estimator_instance):
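        """Attach the estimator object that produced this estimate, so that significance tests and confidence intervals can be computed from the estimate later."""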
self.estimator = estimator_instance
def add_effect_strength(self, strength_dict):
self.effect_strength = strength_dict
def add_params(self, **kwargs):
self.params.update(kwargs)
def get_confidence_intervals(self, confidence_level=None, method=None, **kwargs):
"""Get confidence intervals of the obtained estimate.
By default, this is done with the help of bootstrapped confidence intervals
but can be overridden if the specific estimator implements other methods of estimating confidence intervals.
If the method provided is not bootstrap, this function calls the implementation of the specific estimator.
:param method: Method for estimating confidence intervals.
:param confidence_level: The confidence level of the confidence intervals of the estimate.
:param kwargs: Other optional args to be passed to the CI method.
:returns: The obtained confidence interval.
"""
confidence_intervals = self.estimator.estimate_confidence_intervals(
estimate_value=self.value, confidence_level=confidence_level, method=method, **kwargs
)
return confidence_intervals
def get_standard_error(self, method=None, **kwargs):
"""Get standard error of the obtained estimate.
By default, this is done with the help of bootstrapped standard errors
but can be overridden if the specific estimator implements other methods of estimating standard error.
If the method provided is not bootstrap, this function calls the implementation of the specific estimator.
:param method: Method for computing the standard error.
:param kwargs: Other optional parameters to be passed to the estimating method.
:returns: Standard error of the causal estimate.
"""
std_error = self.estimator.estimate_std_error(method=method, **kwargs)
return std_error
def test_stat_significance(self, method=None, **kwargs):
"""Test statistical significance of the estimate obtained.
By default, uses resampling to create a non-parametric significance test.
Individual child estimators can implement different methods.
If the method name is different from "bootstrap", this function calls the
implementation of the child estimator.
:param method: Method for checking statistical significance
:param kwargs: Other optional parameters to be passed to the estimating method.
:returns: p-value from the significance test
"""
signif_results = self.estimator.test_significance(self.value, method=method, **kwargs)
return {"p_value": signif_results["p_value"]}
def estimate_conditional_effects(
self, effect_modifiers=None, num_quantiles=CausalEstimator.NUM_QUANTILES_TO_DISCRETIZE_CONT_COLS
):
"""Estimate treatment effect conditioned on given variables.
If a numeric effect modifier is provided, it is discretized into quantile bins. If you would like a custom discretization, you can do so yourself: create a new column containing the discretized effect modifier and then include that column's name in the effect_modifier_names argument.
:param effect_modifiers: Names of effect modifier variables over which the conditional effects will be estimated. If not provided, defaults to the effect modifiers specified during creation of the CausalEstimator object.
:param num_quantiles: The number of quantiles into which a numeric effect modifier variable is discretized. Does not affect any categorical effect modifiers.
:returns: A (multi-index) dataframe that provides separate effects for each value of the (discretized) effect modifiers.
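
        Example (hypothetical effect modifier column name)::

            conditional_estimates = estimate.estimate_conditional_effects(effect_modifiers=["W0"], num_quantiles=4)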
"""
return self.estimator._estimate_conditional_effects(
self.estimator._estimate_effect_fn, effect_modifiers, num_quantiles
)
def interpret(self, method_name=None, **kwargs):
"""Interpret the causal estimate.
:param method_name: Method used (string) or a list of methods. If None, then the default for the specific estimator is used.
        :param kwargs: Optional parameters that are directly passed to the interpreter method.
:returns: None
"""
if method_name is None:
method_name = self.estimator.interpret_method
method_name_arr = parse_state(method_name)
for method in method_name_arr:
interpreter = interpreters.get_class_object(method)
interpreter(self, **kwargs).interpret()
def __str__(self):
s = "*** Causal Estimate ***\n"
# No estimand was identified (identification failed)
if self.target_estimand is None:
return "Estimation failed! No relevant identified estimand available for this estimation method."
s += "\n## Identified estimand\n{0}".format(self.target_estimand.__str__(only_target_estimand=True))
s += "\n## Realized estimand\n{0}".format(self.realized_estimand_expr)
if hasattr(self, "estimator"):
s += "\nTarget units: {0}\n".format(self.estimator.target_units_tostr())
s += "\n## Estimate\n"
s += "Mean value: {0}\n".format(self.value)
s += ""
if hasattr(self, "cate_estimates"):
s += "Effect estimates: {0}\n".format(self.cate_estimates)
if hasattr(self, "estimator"):
if self.estimator._significance_test:
s += "p-value: {0}\n".format(self.estimator.signif_results_tostr(self.test_stat_significance()))
if self.estimator._confidence_intervals:
s += "{0}% confidence interval: {1}\n".format(
100 * self.estimator.confidence_level, self.get_confidence_intervals()
)
if self.conditional_estimates is not None:
s += "### Conditional Estimates\n"
s += str(self.conditional_estimates)
if self.effect_strength is not None:
s += "\n## Effect Strength\n"
s += "Change in outcome attributable to treatment: {}\n".format(self.effect_strength["fraction-effect"])
# s += "Variance in outcome explained by treatment: {}\n".format(self.effect_strength["r-squared"])
return s
class RealizedEstimand(object):
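    """Class that stores a realized estimand: the estimand expression and assumptions as instantiated by a specific estimator."""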
def __init__(self, identified_estimand, estimator_name):
self.treatment_variable = identified_estimand.treatment_variable
self.outcome_variable = identified_estimand.outcome_variable
self.backdoor_variables = identified_estimand.get_backdoor_variables()
self.instrumental_variables = identified_estimand.instrumental_variables
self.estimand_type = identified_estimand.estimand_type
self.estimand_expression = None
self.assumptions = None
self.estimator_name = estimator_name
def update_assumptions(self, estimator_assumptions):
self.assumptions = estimator_assumptions
def update_estimand_expression(self, estimand_expression):
self.estimand_expression = estimand_expression
def __str__(self):
s = "Realized estimand: {0}\n".format(self.estimator_name)
s += "Realized estimand type: {0}\n".format(self.estimand_type)
s += "Estimand expression:\n{0}\n".format(sp.pretty(self.estimand_expression))
j = 1
for ass_name, ass_str in self.assumptions.items():
s += "Estimand assumption {0}, {1}: {2}\n".format(j, ass_name, ass_str)
j += 1
return s
| andresmor-ms | 11c4e0dafd6e824eb81ad14262457d954ae61468 | affe0952f4aba6845247355c171565510c2c1673 | Good point. I think we can avoid storing these data-based objects completely.
For context, the earlier version had this code inside the init method of the estimator. But now the init method does not get access to the data; only the fit method gets access to the data (as in the sklearn API). That's why this method needed to be created. But it may not be needed.
Going forward, the three basic methods in an estimator's contract are `fit`, `effect` and `do`. It is not expected that fit and estimate_effect will always be called on the same data, so we probably don't need to store the dataset inside self._data. Likewise, we don't need to store the values of the treatment and outcome data, and the names of the treatment and outcome columns can be extracted from self._target_estimand.
The only big change I see is that data_df will not be optional in the estimate_effect method. That is fine, because even the sklearn API requires a dataset to be provided for the `predict` step.
@andresmor-ms what do you think? would removing this method lead to any problems? | amit-sharma | 206 |
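For illustration, a minimal sketch of that contract (the class and column names below are placeholders, not the final API):

```python
# Hypothetical usage under the proposed contract: data flows through fit/effect, never __init__.
estimator = SomeEstimator(identified_estimand)  # constructor receives the estimand, no data
estimator.fit(train_df, treatment_name=["v0"], outcome_name=["y"])
estimate = estimator.estimate_effect(
    data=test_df,  # data would become a required argument, mirroring sklearn's predict()
    control_value=0,
    treatment_value=1,
    target_units="ate",
)
```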
py-why/dowhy | 746 | Functional api/causal estimators | * Introduce `fit()` method to estimators.
* Refactor constructors to avoid using `*args` and `**kwargs` and have more explicit parameters.
* Refactor refuters and other parts of the code to use `fit()` and modify arguments to `estimate_effect()` | null | 2022-11-04 16:15:39+00:00 | 2022-12-03 17:07:53+00:00 | dowhy/causal_estimator.py | import logging
from collections import namedtuple
from typing import Dict, List, Optional, Union
import numpy as np
import pandas as pd
import sympy as sp
from sklearn.utils import resample
import dowhy.interpreters as interpreters
from dowhy import causal_estimators
from dowhy.causal_graph import CausalGraph
from dowhy.causal_identifier.identified_estimand import IdentifiedEstimand
from dowhy.utils.api import parse_state
logger = logging.getLogger(__name__)
class CausalEstimator:
"""Base class for an estimator of causal effect.
Subclasses implement different estimation methods. All estimation methods are in the package "dowhy.causal_estimators"
"""
# The default number of simulations for statistical testing
DEFAULT_NUMBER_OF_SIMULATIONS_STAT_TEST = 1000
# The default number of simulations to obtain confidence intervals
DEFAULT_NUMBER_OF_SIMULATIONS_CI = 100
# The portion of the total size that should be taken each time to find the confidence intervals
# 1 is the recommended value
# https://ocw.mit.edu/courses/mathematics/18-05-introduction-to-probability-and-statistics-spring-2014/readings/MIT18_05S14_Reading24.pdf
# https://projecteuclid.org/download/pdf_1/euclid.ss/1032280214
DEFAULT_SAMPLE_SIZE_FRACTION = 1
# The default Confidence Level
DEFAULT_CONFIDENCE_LEVEL = 0.95
# Number of quantiles to discretize continuous columns, for applying groupby
NUM_QUANTILES_TO_DISCRETIZE_CONT_COLS = 5
# Prefix to add to temporary categorical variables created after discretization
TEMP_CAT_COLUMN_PREFIX = "__categorical__"
    DEFAULT_NOTIMPLEMENTEDERROR_MSG = "not yet implemented for {0}. If you would like this to be implemented in the next version, please raise an issue at https://github.com/microsoft/dowhy/issues"
BootstrapEstimates = namedtuple("BootstrapEstimates", ["estimates", "params"])
DEFAULT_INTERPRET_METHOD = ["textual_effect_interpreter"]
# std args to be removed from locals() before being passed to args_dict
_STD_INIT_ARGS = ("self", "__class__", "args", "kwargs")
def __init__(
self,
data,
identified_estimand,
treatment,
outcome,
control_value=0,
treatment_value=1,
test_significance=False,
evaluate_effect_strength=False,
confidence_intervals=False,
target_units=None,
effect_modifiers=None,
num_null_simulations=DEFAULT_NUMBER_OF_SIMULATIONS_STAT_TEST,
num_simulations=DEFAULT_NUMBER_OF_SIMULATIONS_CI,
sample_size_fraction=DEFAULT_SAMPLE_SIZE_FRACTION,
confidence_level=DEFAULT_CONFIDENCE_LEVEL,
need_conditional_estimates="auto",
num_quantiles_to_discretize_cont_cols=NUM_QUANTILES_TO_DISCRETIZE_CONT_COLS,
**kwargs,
):
"""Initializes an estimator with data and names of relevant variables.
This method is called from the constructors of its child classes.
:param data: data frame containing the data
:param identified_estimand: probability expression
representing the target identified estimand to estimate.
:param treatment: name of the treatment variable
:param outcome: name of the outcome variable
:param control_value: Value of the treatment in the control group, for effect estimation. If treatment is multi-variate, this can be a list.
:param treatment_value: Value of the treatment in the treated group, for effect estimation. If treatment is multi-variate, this can be a list.
:param test_significance: Binary flag or a string indicating whether to test significance and by which method. All estimators support test_significance="bootstrap" that estimates a p-value for the obtained estimate using the bootstrap method. Individual estimators can override this to support custom testing methods. The bootstrap method supports an optional parameter, num_null_simulations. If False, no testing is done. If True, significance of the estimate is tested using the custom method if available, otherwise by bootstrap.
:param evaluate_effect_strength: (Experimental) whether to evaluate the strength of effect
:param confidence_intervals: Binary flag or a string indicating whether the confidence intervals should be computed and which method should be used. All methods support estimation of confidence intervals using the bootstrap method by using the parameter confidence_intervals="bootstrap". The bootstrap method takes in two arguments (num_simulations and sample_size_fraction) that can be optionally specified in the params dictionary. Estimators may also override this to implement their own confidence interval method. If this parameter is False, no confidence intervals are computed. If True, confidence intervals are computed by the estimator's specific method if available, otherwise through bootstrap.
:param target_units: The units for which the treatment effect should be estimated. This can be a string for common specifications of target units (namely, "ate", "att" and "atc"). It can also be a lambda function that can be used as an index for the data (pandas DataFrame). Alternatively, it can be a new DataFrame that contains values of the effect_modifiers and effect will be estimated only for this new data.
:param effect_modifiers: Variables on which to compute separate
effects, or return a heterogeneous effect function. Not all
methods support this currently.
:param num_null_simulations: The number of simulations for testing the
statistical significance of the estimator
:param num_simulations: The number of simulations for finding the
confidence interval (and/or standard error) for a estimate
:param sample_size_fraction: The size of the sample for the bootstrap
estimator
:param confidence_level: The confidence level of the confidence
interval estimate
:param need_conditional_estimates: Boolean flag indicating whether
conditional estimates should be computed. Defaults to True if
there are effect modifiers in the graph
:param num_quantiles_to_discretize_cont_cols: The number of quantiles
into which a numeric effect modifier is split, to enable
estimation of conditional treatment effect over it.
:param kwargs: (optional) Additional estimator-specific parameters
:returns: an instance of the estimator class.
"""
self._data = data
self._target_estimand = identified_estimand
# Currently estimation methods only support univariate treatment and outcome
self._treatment_name = treatment
self._outcome_name = outcome[0] # assuming one-dimensional outcome
self._control_value = control_value
self._treatment_value = treatment_value
self._significance_test = test_significance
self._effect_strength_eval = evaluate_effect_strength
self._target_units = target_units
self._effect_modifier_names = effect_modifiers
self._confidence_intervals = confidence_intervals
self._bootstrap_estimates = None # for confidence intervals and std error
self._bootstrap_null_estimates = None # for significance test
self._effect_modifiers = None
self.method_params = kwargs
# Setting the default interpret method
self.interpret_method = CausalEstimator.DEFAULT_INTERPRET_METHOD
self.logger = logging.getLogger(__name__)
# Setting treatment and outcome values
if self._data is not None:
self._treatment = self._data[self._treatment_name]
self._outcome = self._data[self._outcome_name]
if self._effect_modifier_names:
# only add the observed nodes
self._effect_modifier_names = [
cname for cname in self._effect_modifier_names if cname in self._data.columns
]
if len(self._effect_modifier_names) > 0:
self._effect_modifiers = self._data[self._effect_modifier_names]
self._effect_modifiers = pd.get_dummies(self._effect_modifiers, drop_first=True)
self.logger.debug("Effect modifiers: " + ",".join(self._effect_modifier_names))
else:
self._effect_modifier_names = None
# Check if some parameters were set, otherwise set to default values
self.num_null_simulations = num_null_simulations
self.num_simulations = num_simulations
self.sample_size_fraction = sample_size_fraction
self.confidence_level = confidence_level
self.num_quantiles_to_discretize_cont_cols = num_quantiles_to_discretize_cont_cols
# Estimate conditional estimates by default
self.need_conditional_estimates = (
need_conditional_estimates if need_conditional_estimates != "auto" else bool(self._effect_modifier_names)
)
@staticmethod
def get_estimator_object(new_data, identified_estimand, estimate):
"""Create a new estimator of the same type as the one passed in the estimate argument.
Creates a new object with new_data and the identified_estimand
:param new_data: np.ndarray, pd.Series, pd.DataFrame
The newly assigned data on which the estimator should run
:param identified_estimand: IdentifiedEstimand
An instance of the identified estimand class that provides the information with
        respect to which causal pathways are employed when the treatment affects the outcome
:param estimate: CausalEstimate
It is an already existing estimate whose properties we wish to replicate
:returns: An instance of the same estimator class that had generated the given estimate.
"""
estimator_class = estimate.params["estimator_class"]
new_estimator = estimator_class(
new_data,
identified_estimand,
identified_estimand.treatment_variable,
identified_estimand.outcome_variable,
# names of treatment and outcome
control_value=estimate.control_value,
treatment_value=estimate.treatment_value,
test_significance=False,
evaluate_effect_strength=False,
confidence_intervals=estimate.params["confidence_intervals"],
target_units=estimate.params["target_units"],
effect_modifiers=estimate.params["effect_modifiers"],
**estimate.params["method_params"] if estimate.params["method_params"] is not None else {},
)
return new_estimator
def _estimate_effect(self):
"""This method is to be overriden by the child classes, so that they can run the estimation technique of their choice"""
raise NotImplementedError(
("Main estimation method is " + CausalEstimator.DEFAULT_NOTIMPLEMENTEDERROR_MSG).format(self.__class__)
)
def estimate_effect(self):
"""Base estimation method that calls the estimate_effect method of its calling subclass.
Can optionally also test significance and estimate effect strength for any returned estimate.
:param self: object instance of class Estimator
        :returns: A CausalEstimate instance that contains point estimates of average and conditional effects. Based on the parameters provided, it optionally includes confidence intervals, standard errors, statistical significance and other statistical parameters.
"""
est = self._estimate_effect()
est.add_estimator(self)
if self._significance_test:
self.test_significance(est.value, method=self._significance_test)
if self._confidence_intervals:
self.estimate_confidence_intervals(
est.value, confidence_level=self.confidence_level, method=self._confidence_intervals
)
if self._effect_strength_eval:
effect_strength_dict = self.evaluate_effect_strength(est)
est.add_effect_strength(effect_strength_dict)
return est
def estimate_effect_naive(self):
# TODO Only works for binary treatment
df_withtreatment = self._data.loc[self._data[self._treatment_name] == 1]
df_notreatment = self._data.loc[self._data[self._treatment_name] == 0]
est = np.mean(df_withtreatment[self._outcome_name]) - np.mean(df_notreatment[self._outcome_name])
return CausalEstimate(est, None, None, control_value=0, treatment_value=1)
def _estimate_effect_fn(self, data_df):
"""Function used in conditional effect estimation. This function is to be overridden by each child estimator.
The overridden function should take in a dataframe as input and return the estimate for that data.
"""
raise NotImplementedError(
("Conditional treatment effects are " + CausalEstimator.DEFAULT_NOTIMPLEMENTEDERROR_MSG).format(
self.__class__
)
)
def _estimate_conditional_effects(self, estimate_effect_fn, effect_modifier_names=None, num_quantiles=None):
"""Estimate conditional treatment effects. Common method for all estimators that utilizes a specific estimate_effect_fn implemented by each child estimator.
If a numeric effect modifier is provided, it is discretized into quantile bins. If you would like a custom discretization, you can do so yourself: create a new column containing the discretized effect modifier and then include that column's name in the effect_modifier_names argument.
:param estimate_effect_fn: Function that has a single parameter (a data frame) and returns the treatment effect estimate on that data.
:param effect_modifier_names: Names of effect modifier variables over which the conditional effects will be estimated. If not provided, defaults to the effect modifiers specified during creation of the CausalEstimator object.
:param num_quantiles: The number of quantiles into which a numeric effect modifier variable is discretized. Does not affect any categorical effect modifiers.
:returns: A (multi-index) dataframe that provides separate effects for each value of the (discretized) effect modifiers.
"""
# Defaulting to class default values if parameters are not provided
if effect_modifier_names is None:
effect_modifier_names = self._effect_modifier_names
if num_quantiles is None:
num_quantiles = self.num_quantiles_to_discretize_cont_cols
# Checking that there is at least one effect modifier
if not effect_modifier_names:
raise ValueError("At least one effect modifier should be specified to compute conditional effects.")
# Making sure that effect_modifier_names is a list
effect_modifier_names = parse_state(effect_modifier_names)
if not all(em in self._effect_modifier_names for em in effect_modifier_names):
            self.logger.warning(
"At least one of the provided effect modifiers was not included while fitting the estimator. You may get incorrect results. To resolve, fit the estimator again by providing the updated effect modifiers in estimate_effect()."
)
# Making a copy since we are going to be changing effect modifier names
effect_modifier_names = effect_modifier_names.copy()
prefix = CausalEstimator.TEMP_CAT_COLUMN_PREFIX
# For every numeric effect modifier, adding a temp categorical column
for i in range(len(effect_modifier_names)):
em = effect_modifier_names[i]
if pd.api.types.is_numeric_dtype(self._data[em].dtypes):
self._data[prefix + str(em)] = pd.qcut(self._data[em], num_quantiles, duplicates="drop")
effect_modifier_names[i] = prefix + str(em)
# Grouping by effect modifiers and computing effect separately
by_effect_mods = self._data.groupby(effect_modifier_names)
cond_est_fn = lambda x: self._do(self._treatment_value, x) - self._do(self._control_value, x)
conditional_estimates = by_effect_mods.apply(estimate_effect_fn)
# Deleting the temporary categorical columns
for em in effect_modifier_names:
if em.startswith(prefix):
self._data.pop(em)
return conditional_estimates
def _do(self, x, data_df=None):
raise NotImplementedError(
("Do-operator is " + CausalEstimator.DEFAULT_NOTIMPLEMENTEDERROR_MSG).format(self.__class__)
)
def do(self, x, data_df=None):
"""Method that implements the do-operator.
Given a value x for the treatment, returns the expected value of the outcome when the treatment is intervened to a value x.
:param x: Value of the treatment
:param data_df: Data on which the do-operator is to be applied.
:returns: Value of the outcome when treatment is intervened/set to x.
"""
est = self._do(x, data_df)
return est
def construct_symbolic_estimator(self, estimand):
raise NotImplementedError(("Symbolic estimator string is ").format(self.__class__))
def _generate_bootstrap_estimates(self, num_bootstrap_simulations, sample_size_fraction):
"""Helper function to generate causal estimates over bootstrapped samples.
:param num_bootstrap_simulations: Number of simulations for the bootstrap method.
:param sample_size_fraction: Fraction of the dataset to be resampled.
:returns: A collections.namedtuple containing a list of bootstrapped estimates and a dictionary containing parameters used for the bootstrap.
"""
# The array that stores the results of all estimations
simulation_results = np.zeros(num_bootstrap_simulations)
# Find the sample size the proportion with the population size
sample_size = int(sample_size_fraction * len(self._data))
if sample_size > len(self._data):
self.logger.warning("WARN: The sample size is greater than the data being sampled")
self.logger.info("INFO: The sample size: {}".format(sample_size))
self.logger.info("INFO: The number of simulations: {}".format(num_bootstrap_simulations))
# Perform the set number of simulations
for index in range(num_bootstrap_simulations):
new_data = resample(self._data, n_samples=sample_size)
new_estimator = type(self)(
new_data,
self._target_estimand,
self._target_estimand.treatment_variable,
self._target_estimand.outcome_variable,
# names of treatment and outcome
treatment_value=self._treatment_value,
control_value=self._control_value,
test_significance=False,
evaluate_effect_strength=False,
confidence_intervals=False,
target_units=self._target_units,
effect_modifiers=self._effect_modifier_names,
**self.method_params,
)
new_effect = new_estimator.estimate_effect()
simulation_results[index] = new_effect.value
estimates = CausalEstimator.BootstrapEstimates(
simulation_results,
{"num_simulations": num_bootstrap_simulations, "sample_size_fraction": sample_size_fraction},
)
return estimates
def _estimate_confidence_intervals_with_bootstrap(
self, estimate_value, confidence_level=None, num_simulations=None, sample_size_fraction=None
):
"""
Method to compute confidence interval using bootstrapped sampling.
:param estimate_value: obtained estimate's value
:param confidence_level: The level for which to compute CI (e.g., 95% confidence level translates to confidence_level=0.95)
:param num_simulations: The number of simulations to be performed to get the bootstrap confidence intervals.
:param sample_size_fraction: The fraction of the dataset to be resampled.
:returns: confidence interval at the specified level.
For more details on bootstrap or resampling statistics, refer to the following links:
https://ocw.mit.edu/courses/mathematics/18-05-introduction-to-probability-and-statistics-spring-2014/readings/MIT18_05S14_Reading24.pdf
https://projecteuclid.org/download/pdf_1/euclid.ss/1032280214
"""
# Using class default parameters if not specified
if num_simulations is None:
num_simulations = self.num_simulations
if sample_size_fraction is None:
sample_size_fraction = self.sample_size_fraction
# Checking if bootstrap_estimates are already computed
if self._bootstrap_estimates is None:
self._bootstrap_estimates = self._generate_bootstrap_estimates(num_simulations, sample_size_fraction)
elif CausalEstimator.is_bootstrap_parameter_changed(self._bootstrap_estimates.params, locals()):
# Checked if any parameter is changed from the previous std error estimate
self._bootstrap_estimates = self._generate_bootstrap_estimates(num_simulations, sample_size_fraction)
# Now use the data obtained from the simulations to get the value of the confidence estimates
bootstrap_estimates = self._bootstrap_estimates.estimates
# Get the variations of each bootstrap estimate and sort
bootstrap_variations = [bootstrap_estimate - estimate_value for bootstrap_estimate in bootstrap_estimates]
sorted_bootstrap_variations = np.sort(bootstrap_variations)
# Now we take the (1- p)th and the (p)th variations, where p is the chosen confidence level
upper_bound_index = int((1 - confidence_level) * len(sorted_bootstrap_variations))
lower_bound_index = int(confidence_level * len(sorted_bootstrap_variations))
# Get the lower and upper bounds by subtracting the variations from the estimate
lower_bound = estimate_value - sorted_bootstrap_variations[lower_bound_index]
upper_bound = estimate_value - sorted_bootstrap_variations[upper_bound_index]
return lower_bound, upper_bound
def _estimate_confidence_intervals(self, confidence_level=None, method=None, **kwargs):
"""
This method is to be overriden by the child classes, so that they
can run a confidence interval estimation method suited to the specific
causal estimator.
"""
raise NotImplementedError(
(
"This method for estimating confidence intervals is "
+ CausalEstimator.DEFAULT_NOTIMPLEMENTEDERROR_MSG
+ " Meanwhile, you can try the bootstrap method (method='bootstrap') to estimate confidence intervals."
).format(self.__class__)
)
def estimate_confidence_intervals(self, estimate_value, confidence_level=None, method=None, **kwargs):
"""Find the confidence intervals corresponding to any estimator
By default, this is done with the help of bootstrapped confidence intervals
but can be overridden if the specific estimator implements other methods of estimating confidence intervals.
If the method provided is not bootstrap, this function calls the implementation of the specific estimator.
:param estimate_value: obtained estimate's value
:param method: Method for estimating confidence intervals.
:param confidence_level: The confidence level of the confidence intervals of the estimate.
:param kwargs: Other optional args to be passed to the CI method.
:returns: The obtained confidence interval.
"""
if method is None:
if self._confidence_intervals:
method = self._confidence_intervals # this is either True or methodname
else:
method = "default"
confidence_intervals = None
if confidence_level is None:
confidence_level = self.confidence_level
if method == "default" or method is True: # user has not provided any method
try:
confidence_intervals = self._estimate_confidence_intervals(confidence_level, method=method, **kwargs)
except NotImplementedError:
confidence_intervals = self._estimate_confidence_intervals_with_bootstrap(
estimate_value, confidence_level, **kwargs
)
else:
if method == "bootstrap":
confidence_intervals = self._estimate_confidence_intervals_with_bootstrap(
estimate_value, confidence_level, **kwargs
)
else:
confidence_intervals = self._estimate_confidence_intervals(confidence_level, method=method, **kwargs)
return confidence_intervals
def _estimate_std_error_with_bootstrap(self, num_simulations=None, sample_size_fraction=None):
"""Compute standard error using the bootstrap method. Standard error
and confidence intervals use the same parameter num_simulations for
the number of bootstrap simulations.
:param num_simulations: Number of bootstrapped samples.
:param sample_size_fraction: Fraction of data to be resampled.
:returns: Standard error of the obtained estimate.
"""
# Use existing params, if new user defined params are not present
if num_simulations is None:
num_simulations = self.num_simulations
if sample_size_fraction is None:
sample_size_fraction = self.sample_size_fraction
# Checking if bootstrap_estimates are already computed
if self._bootstrap_estimates is None:
self._bootstrap_estimates = self._generate_bootstrap_estimates(num_simulations, sample_size_fraction)
elif CausalEstimator.is_bootstrap_parameter_changed(self._bootstrap_estimates.params, locals()):
# Check if any parameter is changed from the previous std error estimate
self._bootstrap_estimates = self._generate_bootstrap_estimates(num_simulations, sample_size_fraction)
std_error = np.std(self._bootstrap_estimates.estimates)
return std_error
def _estimate_std_error(self, method=None, **kwargs):
"""
This method is to be overriden by the child classes, so that they
can run a standard error estimation method suited to the specific
causal estimator.
"""
raise NotImplementedError(
(
"This method for estimating standard errors is "
+ CausalEstimator.DEFAULT_NOTIMPLEMENTEDERROR_MSG
+ " Meanwhile, you can try the bootstrap method (method='bootstrap') to estimate standard errors."
).format(self.__class__)
)
def estimate_std_error(self, method=None, **kwargs):
"""Compute standard error of an obtained causal estimate.
:param method: Method for computing the standard error.
:param kwargs: Other optional parameters to be passed to the estimating method.
:returns: Standard error of the causal estimate.
"""
if method is None:
if self._confidence_intervals:
method = self._confidence_intervals
else:
method = "default"
std_error = None
if method == "default" or method is True: # user has not provided any method
try:
std_error = self._estimate_std_error(method, **kwargs)
except NotImplementedError:
std_error = self._estimate_std_error_with_bootstrap(**kwargs)
else:
if method == "bootstrap":
std_error = self._estimate_std_error_with_bootstrap(**kwargs)
else:
std_error = self._estimate_std_error(method, **kwargs)
return std_error
def _test_significance_with_bootstrap(self, estimate_value, num_null_simulations=None):
"""Test statistical significance of an estimate using the bootstrap method.
:param estimate_value: Obtained estimate's value
:param num_null_simulations: Number of simulations for the null hypothesis
:returns: p-value of the statistical significance test.
"""
# Use existing params, if new user defined params are not present
if num_null_simulations is None:
num_null_simulations = self.num_null_simulations
do_retest = self._bootstrap_null_estimates is None or CausalEstimator.is_bootstrap_parameter_changed(
self._bootstrap_null_estimates.params, locals()
)
if do_retest:
null_estimates = np.zeros(num_null_simulations)
for i in range(num_null_simulations):
new_outcome = np.random.permutation(self._outcome)
new_data = self._data.assign(dummy_outcome=new_outcome)
# self._outcome = self._data["dummy_outcome"]
new_estimator = type(self)(
new_data,
self._target_estimand,
self._target_estimand.treatment_variable,
("dummy_outcome",),
test_significance=False,
evaluate_effect_strength=False,
confidence_intervals=False,
target_units=self._target_units,
effect_modifiers=self._effect_modifier_names,
**self.method_params,
)
new_effect = new_estimator.estimate_effect()
null_estimates[i] = new_effect.value
self._bootstrap_null_estimates = CausalEstimator.BootstrapEstimates(
null_estimates, {"num_null_simulations": num_null_simulations, "sample_size_fraction": 1}
)
# Processing the null hypothesis estimates
sorted_null_estimates = np.sort(self._bootstrap_null_estimates.estimates)
self.logger.debug("Null estimates: {0}".format(sorted_null_estimates))
median_estimate = sorted_null_estimates[int(num_null_simulations / 2)]
# Doing a two-sided test
if estimate_value > median_estimate:
# Being conservative with the p-value reported
estimate_index = np.searchsorted(sorted_null_estimates, estimate_value, side="left")
p_value = 1 - (estimate_index / num_null_simulations)
        else:
# Being conservative with the p-value reported
estimate_index = np.searchsorted(sorted_null_estimates, estimate_value, side="right")
p_value = estimate_index / num_null_simulations
# If the estimate_index is 0, it depends on the number of simulations
if p_value == 0:
p_value = (0, 1 / len(sorted_null_estimates)) # a tuple determining the range.
elif p_value == 1:
p_value = (1 - 1 / len(sorted_null_estimates), 1)
signif_dict = {"p_value": p_value}
return signif_dict
def _test_significance(self, estimate_value, method=None, **kwargs):
"""
This method is to be overriden by the child classes, so that they
can run a significance test suited to the specific
causal estimator.
"""
raise NotImplementedError(
(
"This method for testing statistical significance is "
+ CausalEstimator.DEFAULT_NOTIMPLEMENTEDERROR_MSG
+ " Meanwhile, you can try the bootstrap method (method='bootstrap') to test statistical significance."
).format(self.__class__)
)
def test_significance(self, estimate_value, method=None, **kwargs):
"""Test statistical significance of obtained estimate.
By default, uses resampling to create a non-parametric significance test.
A general procedure. Individual child estimators can implement different methods.
If the method name is different from "bootstrap", this function calls the
implementation of the child estimator.
:param self: object instance of class Estimator
:param estimate_value: obtained estimate's value
:param method: Method for checking statistical significance
:returns: p-value from the significance test
"""
if method is None:
if self._significance_test:
method = self._significance_test # this is either True or methodname
else:
method = "default"
signif_dict = None
if method == "default" or method is True: # user has not provided any method
try:
signif_dict = self._test_significance(estimate_value, method, **kwargs)
except NotImplementedError:
signif_dict = self._test_significance_with_bootstrap(estimate_value, **kwargs)
else:
if method == "bootstrap":
signif_dict = self._test_significance_with_bootstrap(estimate_value, **kwargs)
else:
signif_dict = self._test_significance(estimate_value, method, **kwargs)
return signif_dict
def evaluate_effect_strength(self, estimate):
fraction_effect_explained = self._evaluate_effect_strength(estimate, method="fraction-effect")
# Need to test r-squared before supporting
# effect_r_squared = self._evaluate_effect_strength(estimate, method="r-squared")
strength_dict = {
"fraction-effect": fraction_effect_explained
# 'r-squared': effect_r_squared
}
return strength_dict
def _evaluate_effect_strength(self, estimate, method="fraction-effect"):
supported_methods = ["fraction-effect"]
if method not in supported_methods:
raise NotImplementedError("This method is not supported for evaluating effect strength")
if method == "fraction-effect":
naive_obs_estimate = self.estimate_effect_naive()
            self.logger.debug("Estimate: %s, Naive observed estimate: %s", estimate.value, naive_obs_estimate.value)
fraction_effect_explained = estimate.value / naive_obs_estimate.value
return fraction_effect_explained
# elif method == "r-squared":
# outcome_mean = np.mean(self._outcome)
# total_variance = np.sum(np.square(self._outcome - outcome_mean))
# Assuming a linear model with one variable: the treatment
# Currently only works for continuous y
# causal_model = outcome_mean + estimate.value*self._treatment
# squared_residual = np.sum(np.square(self._outcome - causal_model))
# r_squared = 1 - (squared_residual/total_variance)
# return r_squared
else:
return None
def update_input(self, treatment_value, control_value, target_units):
self._control_value = control_value
self._treatment_value = treatment_value
self._target_units = target_units
@staticmethod
def is_bootstrap_parameter_changed(bootstrap_estimates_params, given_params):
"""Check whether parameters of the bootstrap have changed.
This is an efficiency method that checks if fresh resampling of the bootstrap samples is required.
Returns True if parameters have changed and resampling should be done again.
:param bootstrap_estimates_params: A dictionary of parameters for the current bootstrap samples
:param given_params: A dictionary of parameters passed by the user
:returns: A binary flag denoting whether the parameters are different.
"""
is_any_parameter_changed = False
for prm, val in bootstrap_estimates_params.items():
given_val = given_params.get(prm, None)
if given_val is not None and given_val != val:
is_any_parameter_changed = True
break
return is_any_parameter_changed
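    # Illustrative check: a minimal sketch of how the cached bootstrap
    # parameters are compared against user-given ones (both dicts are
    # hypothetical).
    #
    #     cached = {"num_simulations": 100, "sample_size_fraction": 1}
    #     CausalEstimator.is_bootstrap_parameter_changed(cached, {"num_simulations": 100})  # False
    #     CausalEstimator.is_bootstrap_parameter_changed(cached, {"num_simulations": 500})  # True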
def target_units_tostr(self):
s = ""
if type(self._target_units) is str:
s += self._target_units
elif callable(self._target_units):
s += "Data subset defined by a function"
elif isinstance(self._target_units, pd.DataFrame):
s += "Data subset provided as a data frame"
return s
def signif_results_tostr(self, signif_results):
s = ""
pval = signif_results["p_value"]
if type(pval) is tuple:
s += "[{0}, {1}]".format(pval[0], pval[1])
else:
s += "{0}".format(pval)
return s
def estimate_effect(
treatment: Union[str, List[str]],
outcome: Union[str, List[str]],
identified_estimand: IdentifiedEstimand,
identifier_name: str,
method: CausalEstimator,
control_value: int = 0,
treatment_value: int = 1,
test_significance: Optional[bool] = None,
evaluate_effect_strength: bool = False,
confidence_intervals: bool = False,
target_units: str = "ate",
effect_modifiers: List[str] = [],
fit_estimator: bool = True,
method_params: Optional[Dict] = None,
):
"""Estimate the identified causal effect.
    Currently requires an explicit method name to be specified. Method names follow the convention of identification method followed by the specific estimation method: "[backdoor/iv].estimation_method_name". The following methods are supported.
* Propensity Score Matching: "backdoor.propensity_score_matching"
* Propensity Score Stratification: "backdoor.propensity_score_stratification"
* Propensity Score-based Inverse Weighting: "backdoor.propensity_score_weighting"
* Linear Regression: "backdoor.linear_regression"
* Generalized Linear Models (e.g., logistic regression): "backdoor.generalized_linear_model"
* Instrumental Variables: "iv.instrumental_variable"
* Regression Discontinuity: "iv.regression_discontinuity"
In addition, you can directly call any of the EconML estimation methods. The convention is "backdoor.econml.path-to-estimator-class". For example, for the double machine learning estimator ("DML" class) that is located inside "dml" module of EconML, you can use the method name, "backdoor.econml.dml.DML". CausalML estimators can also be called. See `this demo notebook <https://py-why.github.io/dowhy/example_notebooks/dowhy-conditional-treatment-effects.html>`_.
:param treatment: Name of the treatment
:param outcome: Name of the outcome
:param identified_estimand: a probability expression
that represents the effect to be estimated. Output of
CausalModel.identify_effect method
    :param identifier_name: name of the identification method used (e.g., "backdoor" or "iv")
    :param method: instance of the estimation method (a CausalEstimator subclass) to be used.
:param control_value: Value of the treatment in the control group, for effect estimation. If treatment is multi-variate, this can be a list.
:param treatment_value: Value of the treatment in the treated group, for effect estimation. If treatment is multi-variate, this can be a list.
    :param test_significance: Binary flag on whether to additionally do a statistical significance test for the estimate.
:param evaluate_effect_strength: (Experimental) Binary flag on whether to estimate the relative strength of the treatment's effect. This measure can be used to compare different treatments for the same outcome (by running this method with different treatments sequentially).
:param confidence_intervals: (Experimental) Binary flag indicating whether confidence intervals should be computed.
:param target_units: (Experimental) The units for which the treatment effect should be estimated. This can be of three types. (1) a string for common specifications of target units (namely, "ate", "att" and "atc"), (2) a lambda function that can be used as an index for the data (pandas DataFrame), or (3) a new DataFrame that contains values of the effect_modifiers and effect will be estimated only for this new data.
:param effect_modifiers: Names of effect modifier variables can be (optionally) specified here too, since they do not affect identification. If None, the effect_modifiers from the CausalModel are used.
:param fit_estimator: Boolean flag on whether to fit the estimator.
Setting it to False is useful to estimate the effect on new data using a previously fitted estimator.
:param method_params: Dictionary containing any method-specific parameters. These are passed directly to the estimating method. See the docs for each estimation method for allowed method-specific params.
:returns: An instance of the CausalEstimate class, containing the causal effect estimate
and other method-dependent information
"""
treatment = parse_state(treatment)
outcome = parse_state(outcome)
causal_estimator_class = method.__class__
identified_estimand.set_identifier_method(identifier_name)
if identified_estimand.no_directed_path:
logger.warning("No directed path from {0} to {1}.".format(treatment, outcome))
return CausalEstimate(
0, identified_estimand, None, control_value=control_value, treatment_value=treatment_value
)
# Check if estimator's target estimand is identified
elif identified_estimand.estimands[identifier_name] is None:
logger.error("No valid identified estimand available.")
return CausalEstimate(None, None, None, control_value=control_value, treatment_value=treatment_value)
method.update_input(treatment_value, control_value, target_units)
estimate = method.estimate_effect()
# Store parameters inside estimate object for refutation methods
# TODO: This add_params needs to move to the estimator class
# inside estimate_effect and estimate_conditional_effect
estimate.add_params(
estimand_type=identified_estimand.estimand_type,
estimator_class=causal_estimator_class,
test_significance=test_significance,
evaluate_effect_strength=evaluate_effect_strength,
confidence_intervals=confidence_intervals,
target_units=target_units,
effect_modifiers=effect_modifiers,
method_params=method_params,
)
return estimate
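# Illustrative call of the functional API above: a minimal sketch, assuming
# `estimand` is an IdentifiedEstimand and `psm_estimator` is a constructed
# estimator instance such as PropensityScoreMatchingEstimator (hypothetical
# names; see dowhy.causal_estimators for concrete classes).
#
#     estimate = estimate_effect(
#         treatment="treatment",
#         outcome="outcome",
#         identified_estimand=estimand,
#         identifier_name="backdoor",
#         method=psm_estimator,
#         target_units="ate",
#     )
#     print(estimate.value)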
class CausalEstimate:
"""Class for the estimate object that every causal estimator returns"""
def __init__(
self,
estimate,
target_estimand,
realized_estimand_expr,
control_value,
treatment_value,
conditional_estimates=None,
**kwargs,
):
self.value = estimate
self.target_estimand = target_estimand
self.realized_estimand_expr = realized_estimand_expr
self.control_value = control_value
self.treatment_value = treatment_value
self.conditional_estimates = conditional_estimates
self.params = kwargs
if self.params is not None:
for key, value in self.params.items():
setattr(self, key, value)
self.effect_strength = None
def add_estimator(self, estimator_instance):
self.estimator = estimator_instance
def add_effect_strength(self, strength_dict):
self.effect_strength = strength_dict
def add_params(self, **kwargs):
self.params.update(kwargs)
def get_confidence_intervals(self, confidence_level=None, method=None, **kwargs):
"""Get confidence intervals of the obtained estimate.
By default, this is done with the help of bootstrapped confidence intervals
but can be overridden if the specific estimator implements other methods of estimating confidence intervals.
If the method provided is not bootstrap, this function calls the implementation of the specific estimator.
:param method: Method for estimating confidence intervals.
:param confidence_level: The confidence level of the confidence intervals of the estimate.
:param kwargs: Other optional args to be passed to the CI method.
:returns: The obtained confidence interval.
"""
confidence_intervals = self.estimator.estimate_confidence_intervals(
estimate_value=self.value, confidence_level=confidence_level, method=method, **kwargs
)
return confidence_intervals
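    # Illustrative usage: a minimal sketch, assuming `estimate` is a
    # CausalEstimate whose estimator was fitted with bootstrap confidence
    # intervals enabled (hypothetical name).
    #
    #     low, high = estimate.get_confidence_intervals(confidence_level=0.9,
    #                                                   method="bootstrap")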
def get_standard_error(self, method=None, **kwargs):
"""Get standard error of the obtained estimate.
By default, this is done with the help of bootstrapped standard errors
but can be overridden if the specific estimator implements other methods of estimating standard error.
If the method provided is not bootstrap, this function calls the implementation of the specific estimator.
:param method: Method for computing the standard error.
:param kwargs: Other optional parameters to be passed to the estimating method.
:returns: Standard error of the causal estimate.
"""
std_error = self.estimator.estimate_std_error(method=method, **kwargs)
return std_error
def test_stat_significance(self, method=None, **kwargs):
"""Test statistical significance of the estimate obtained.
By default, uses resampling to create a non-parametric significance test.
Individual child estimators can implement different methods.
If the method name is different from "bootstrap", this function calls the
implementation of the child estimator.
:param method: Method for checking statistical significance
:param kwargs: Other optional parameters to be passed to the estimating method.
:returns: p-value from the significance test
"""
signif_results = self.estimator.test_significance(self.value, method=method, **kwargs)
return {"p_value": signif_results["p_value"]}
def estimate_conditional_effects(
self, effect_modifiers=None, num_quantiles=CausalEstimator.NUM_QUANTILES_TO_DISCRETIZE_CONT_COLS
):
"""Estimate treatment effect conditioned on given variables.
If a numeric effect modifier is provided, it is discretized into quantile bins. If you would like a custom discretization, you can do so yourself: create a new column containing the discretized effect modifier and then include that column's name in the effect_modifier_names argument.
:param effect_modifiers: Names of effect modifier variables over which the conditional effects will be estimated. If not provided, defaults to the effect modifiers specified during creation of the CausalEstimator object.
:param num_quantiles: The number of quantiles into which a numeric effect modifier variable is discretized. Does not affect any categorical effect modifiers.
:returns: A (multi-index) dataframe that provides separate effects for each value of the (discretized) effect modifiers.
"""
return self.estimator._estimate_conditional_effects(
self.estimator._estimate_effect_fn, effect_modifiers, num_quantiles
)
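    # Illustrative usage: a minimal sketch, assuming the estimator was fitted
    # with an effect modifier column named "X0" (hypothetical name); numeric
    # modifiers are binned into quantiles before grouping.
    #
    #     cond_effects = estimate.estimate_conditional_effects(
    #         effect_modifiers=["X0"], num_quantiles=4
    #     )
    #     print(cond_effects)  # a (multi-index) dataframe of per-group effects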
def interpret(self, method_name=None, **kwargs):
"""Interpret the causal estimate.
:param method_name: Method used (string) or a list of methods. If None, then the default for the specific estimator is used.
        :param kwargs: Optional parameters that are directly passed to the interpreter method.
:returns: None
"""
if method_name is None:
method_name = self.estimator.interpret_method
method_name_arr = parse_state(method_name)
for method in method_name_arr:
interpreter = interpreters.get_class_object(method)
interpreter(self, **kwargs).interpret()
def __str__(self):
s = "*** Causal Estimate ***\n"
# No estimand was identified (identification failed)
if self.target_estimand is None:
return "Estimation failed! No relevant identified estimand available for this estimation method."
s += "\n## Identified estimand\n{0}".format(self.target_estimand.__str__(only_target_estimand=True))
s += "\n## Realized estimand\n{0}".format(self.realized_estimand_expr)
if hasattr(self, "estimator"):
s += "\nTarget units: {0}\n".format(self.estimator.target_units_tostr())
s += "\n## Estimate\n"
s += "Mean value: {0}\n".format(self.value)
s += ""
if hasattr(self, "cate_estimates"):
s += "Effect estimates: {0}\n".format(self.cate_estimates)
if hasattr(self, "estimator"):
if self.estimator._significance_test:
s += "p-value: {0}\n".format(self.estimator.signif_results_tostr(self.test_stat_significance()))
if self.estimator._confidence_intervals:
s += "{0}% confidence interval: {1}\n".format(
100 * self.estimator.confidence_level, self.get_confidence_intervals()
)
if self.conditional_estimates is not None:
s += "### Conditional Estimates\n"
s += str(self.conditional_estimates)
if self.effect_strength is not None:
s += "\n## Effect Strength\n"
s += "Change in outcome attributable to treatment: {}\n".format(self.effect_strength["fraction-effect"])
# s += "Variance in outcome explained by treatment: {}\n".format(self.effect_strength["r-squared"])
return s
class RealizedEstimand(object):
def __init__(self, identified_estimand, estimator_name):
self.treatment_variable = identified_estimand.treatment_variable
self.outcome_variable = identified_estimand.outcome_variable
self.backdoor_variables = identified_estimand.get_backdoor_variables()
self.instrumental_variables = identified_estimand.instrumental_variables
self.estimand_type = identified_estimand.estimand_type
self.estimand_expression = None
self.assumptions = None
self.estimator_name = estimator_name
def update_assumptions(self, estimator_assumptions):
self.assumptions = estimator_assumptions
def update_estimand_expression(self, estimand_expression):
self.estimand_expression = estimand_expression
def __str__(self):
s = "Realized estimand: {0}\n".format(self.estimator_name)
s += "Realized estimand type: {0}\n".format(self.estimand_type)
s += "Estimand expression:\n{0}\n".format(sp.pretty(self.estimand_expression))
j = 1
for ass_name, ass_str in self.assumptions.items():
s += "Estimand assumption {0}, {1}: {2}\n".format(j, ass_name, ass_str)
j += 1
return s
| import copy
import logging
from collections import namedtuple
from typing import Dict, List, Optional, Union
import numpy as np
import pandas as pd
import sympy as sp
from sklearn.utils import resample
import dowhy.interpreters as interpreters
from dowhy.causal_identifier.identified_estimand import IdentifiedEstimand
from dowhy.utils.api import parse_state
logger = logging.getLogger(__name__)
class CausalEstimator:
"""Base class for an estimator of causal effect.
Subclasses implement different estimation methods. All estimation methods are in the package "dowhy.causal_estimators"
"""
# The default number of simulations for statistical testing
DEFAULT_NUMBER_OF_SIMULATIONS_STAT_TEST = 1000
# The default number of simulations to obtain confidence intervals
DEFAULT_NUMBER_OF_SIMULATIONS_CI = 100
# The portion of the total size that should be taken each time to find the confidence intervals
# 1 is the recommended value
# https://ocw.mit.edu/courses/mathematics/18-05-introduction-to-probability-and-statistics-spring-2014/readings/MIT18_05S14_Reading24.pdf
# https://projecteuclid.org/download/pdf_1/euclid.ss/1032280214
DEFAULT_SAMPLE_SIZE_FRACTION = 1
# The default Confidence Level
DEFAULT_CONFIDENCE_LEVEL = 0.95
# Number of quantiles to discretize continuous columns, for applying groupby
NUM_QUANTILES_TO_DISCRETIZE_CONT_COLS = 5
# Prefix to add to temporary categorical variables created after discretization
TEMP_CAT_COLUMN_PREFIX = "__categorical__"
    DEFAULT_NOTIMPLEMENTEDERROR_MSG = "not yet implemented for {0}. If you would like this to be implemented in the next version, please raise an issue at https://github.com/microsoft/dowhy/issues"
BootstrapEstimates = namedtuple("BootstrapEstimates", ["estimates", "params"])
DEFAULT_INTERPRET_METHOD = ["textual_effect_interpreter"]
# std args to be removed from locals() before being passed to args_dict
_STD_INIT_ARGS = ("self", "__class__", "args", "kwargs")
def __init__(
self,
identified_estimand: IdentifiedEstimand,
test_significance: bool = False,
evaluate_effect_strength: bool = False,
confidence_intervals: bool = False,
num_null_simulations: int = DEFAULT_NUMBER_OF_SIMULATIONS_STAT_TEST,
num_simulations: int = DEFAULT_NUMBER_OF_SIMULATIONS_CI,
        sample_size_fraction: float = DEFAULT_SAMPLE_SIZE_FRACTION,
confidence_level: float = DEFAULT_CONFIDENCE_LEVEL,
need_conditional_estimates: Union[bool, str] = "auto",
num_quantiles_to_discretize_cont_cols: int = NUM_QUANTILES_TO_DISCRETIZE_CONT_COLS,
**_,
):
"""Initializes an estimator with data and names of relevant variables.
This method is called from the constructors of its child classes.
:param identified_estimand: probability expression
representing the target identified estimand to estimate.
:param test_significance: Binary flag or a string indicating whether to test significance and by which method. All estimators support test_significance="bootstrap" that estimates a p-value for the obtained estimate using the bootstrap method. Individual estimators can override this to support custom testing methods. The bootstrap method supports an optional parameter, num_null_simulations. If False, no testing is done. If True, significance of the estimate is tested using the custom method if available, otherwise by bootstrap.
:param evaluate_effect_strength: (Experimental) whether to evaluate the strength of effect
:param confidence_intervals: Binary flag or a string indicating whether the confidence intervals should be computed and which method should be used. All methods support estimation of confidence intervals using the bootstrap method by using the parameter confidence_intervals="bootstrap". The bootstrap method takes in two arguments (num_simulations and sample_size_fraction) that can be optionally specified in the params dictionary. Estimators may also override this to implement their own confidence interval method. If this parameter is False, no confidence intervals are computed. If True, confidence intervals are computed by the estimator's specific method if available, otherwise through bootstrap
:param num_null_simulations: The number of simulations for testing the
statistical significance of the estimator
:param num_simulations: The number of simulations for finding the
        confidence interval (and/or standard error) for an estimate
:param sample_size_fraction: The size of the sample for the bootstrap
estimator
:param confidence_level: The confidence level of the confidence
interval estimate
:param need_conditional_estimates: Boolean flag indicating whether
conditional estimates should be computed. Defaults to True if
there are effect modifiers in the graph
:param num_quantiles_to_discretize_cont_cols: The number of quantiles
into which a numeric effect modifier is split, to enable
estimation of conditional treatment effect over it.
:param kwargs: (optional) Additional estimator-specific parameters
:returns: an instance of the estimator class.
"""
self._target_estimand = identified_estimand
self._significance_test = test_significance
self._effect_strength_eval = evaluate_effect_strength
self._confidence_intervals = confidence_intervals
# Setting the default interpret method
self.interpret_method = CausalEstimator.DEFAULT_INTERPRET_METHOD
self.logger = logging.getLogger(__name__)
# Check if some parameters were set, otherwise set to default values
self.num_null_simulations = num_null_simulations
self.num_simulations = num_simulations
self.sample_size_fraction = sample_size_fraction
self.confidence_level = confidence_level
self.num_quantiles_to_discretize_cont_cols = num_quantiles_to_discretize_cont_cols
# Estimate conditional estimates by default
self.need_conditional_estimates = need_conditional_estimates
self._bootstrap_estimates = None
self._bootstrap_null_estimates = None
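    # Illustrative construction: a minimal sketch of configuring these options
    # on a concrete subclass, assuming LinearRegressionEstimator from
    # dowhy.causal_estimators (the variable names are hypothetical).
    #
    #     estimator = LinearRegressionEstimator(
    #         identified_estimand,
    #         test_significance="bootstrap",
    #         confidence_intervals=True,
    #         num_simulations=200,
    #     )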
def _set_data(self, data: pd.DataFrame, treatment_name: List[str], outcome_name: List[str]):
"""Sets the data for the estimator
:param data: data frame containing the data
:param treatment_name: name of the treatment variable
:param outcome_name: name of the outcome variable
"""
self._data = data
self._treatment_name = treatment_name
self._outcome_name = outcome_name[0]
self._treatment = self._data[self._treatment_name]
self._outcome = self._data[self._outcome_name]
def _set_effect_modifiers(self, effect_modifier_names: Optional[List[str]] = None):
"""Sets the effect modifiers for the estimator
        Modifies need_conditional_estimates according to the value of the effect modifiers.
        :param effect_modifier_names: Names of variables on which to compute separate
effects, or return a heterogeneous effect function. Not all
methods support this currently.
"""
self._effect_modifiers = effect_modifier_names
if effect_modifier_names is not None:
self._effect_modifier_names = [cname for cname in effect_modifier_names if cname in self._data.columns]
if len(self._effect_modifier_names) > 0:
self._effect_modifiers = self._data[self._effect_modifier_names]
self._effect_modifiers = pd.get_dummies(self._effect_modifiers, drop_first=True)
self.logger.debug("Effect modifiers: " + ",".join(self._effect_modifier_names))
else:
self._effect_modifier_names = []
else:
self._effect_modifier_names = []
self.need_conditional_estimates = (
self.need_conditional_estimates
if self.need_conditional_estimates != "auto"
else (self._effect_modifier_names and len(self._effect_modifier_names) > 0)
)
def _set_identified_estimand(self, new_identified_estimand):
"""Method used internally to change the target estimand (required by some refuters)
:param new_identified_estimand: The new target_estimand to use
"""
self._target_estimand = new_identified_estimand
def get_new_estimator_object(
self,
identified_estimand,
test_significance=False,
evaluate_effect_strength=False,
confidence_intervals=None,
):
"""Create a new estimator of the same type as the one passed in the estimate argument.
Creates a new object with the identified_estimand
:param identified_estimand: IdentifiedEstimand
An instance of the identified estimand class that provides the information with
        respect to which causal pathways are employed when the treatment affects the outcome
:returns: A new instance of the same estimator class that had generated the given estimate.
"""
new_estimator = copy.deepcopy(self)
new_estimator._target_estimand = identified_estimand
        new_estimator._significance_test = test_significance
        new_estimator._effect_strength_eval = evaluate_effect_strength
new_estimator._confidence_intervals = (
self._confidence_intervals if confidence_intervals is None else confidence_intervals
)
return new_estimator
def estimate_effect_naive(self):
# TODO Only works for binary treatment
df_withtreatment = self._data.loc[self._data[self._treatment_name] == 1]
df_notreatment = self._data.loc[self._data[self._treatment_name] == 0]
est = np.mean(df_withtreatment[self._outcome_name]) - np.mean(df_notreatment[self._outcome_name])
return CausalEstimate(est, None, None, control_value=0, treatment_value=1)
def _estimate_effect_fn(self, data_df):
"""Function used in conditional effect estimation. This function is to be overridden by each child estimator.
The overridden function should take in a dataframe as input and return the estimate for that data.
"""
raise NotImplementedError(
("Conditional treatment effects are " + CausalEstimator.DEFAULT_NOTIMPLEMENTEDERROR_MSG).format(
self.__class__
)
)
def _estimate_conditional_effects(self, estimate_effect_fn, effect_modifier_names=None, num_quantiles=None):
"""Estimate conditional treatment effects. Common method for all estimators that utilizes a specific estimate_effect_fn implemented by each child estimator.
If a numeric effect modifier is provided, it is discretized into quantile bins. If you would like a custom discretization, you can do so yourself: create a new column containing the discretized effect modifier and then include that column's name in the effect_modifier_names argument.
:param estimate_effect_fn: Function that has a single parameter (a data frame) and returns the treatment effect estimate on that data.
:param effect_modifier_names: Names of effect modifier variables over which the conditional effects will be estimated. If not provided, defaults to the effect modifiers specified during creation of the CausalEstimator object.
:param num_quantiles: The number of quantiles into which a numeric effect modifier variable is discretized. Does not affect any categorical effect modifiers.
:returns: A (multi-index) dataframe that provides separate effects for each value of the (discretized) effect modifiers.
"""
# Defaulting to class default values if parameters are not provided
if effect_modifier_names is None:
effect_modifier_names = self._effect_modifier_names
if num_quantiles is None:
num_quantiles = self.num_quantiles_to_discretize_cont_cols
# Checking that there is at least one effect modifier
if not effect_modifier_names:
raise ValueError("At least one effect modifier should be specified to compute conditional effects.")
# Making sure that effect_modifier_names is a list
effect_modifier_names = parse_state(effect_modifier_names)
if not all(em in self._effect_modifier_names for em in effect_modifier_names):
            self.logger.warning(
"At least one of the provided effect modifiers was not included while fitting the estimator. You may get incorrect results. To resolve, fit the estimator again by providing the updated effect modifiers in estimate_effect()."
)
# Making a copy since we are going to be changing effect modifier names
effect_modifier_names = effect_modifier_names.copy()
prefix = CausalEstimator.TEMP_CAT_COLUMN_PREFIX
# For every numeric effect modifier, adding a temp categorical column
for i in range(len(effect_modifier_names)):
em = effect_modifier_names[i]
if pd.api.types.is_numeric_dtype(self._data[em].dtypes):
self._data[prefix + str(em)] = pd.qcut(self._data[em], num_quantiles, duplicates="drop")
effect_modifier_names[i] = prefix + str(em)
# Grouping by effect modifiers and computing effect separately
by_effect_mods = self._data.groupby(effect_modifier_names)
cond_est_fn = lambda x: self._do(self._treatment_value, x) - self._do(self._control_value, x)
conditional_estimates = by_effect_mods.apply(estimate_effect_fn)
# Deleting the temporary categorical columns
for em in effect_modifier_names:
if em.startswith(prefix):
self._data.pop(em)
return conditional_estimates
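    # Illustrative discretization step: a minimal, self-contained sketch of the
    # quantile binning used above, on toy data.
    #
    #     import pandas as pd
    #     s = pd.Series([1.0, 2.0, 3.0, 4.0, 5.0, 6.0, 7.0, 8.0])
    #     bins = pd.qcut(s, 4, duplicates="drop")  # 4 quantile bins
    #     s.groupby(bins).mean()                   # one aggregate per bin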
def _do(self, x, data_df=None):
raise NotImplementedError(
("Do-operator is " + CausalEstimator.DEFAULT_NOTIMPLEMENTEDERROR_MSG).format(self.__class__)
)
def do(self, x, data_df=None):
"""Method that implements the do-operator.
Given a value x for the treatment, returns the expected value of the outcome when the treatment is intervened to a value x.
:param x: Value of the treatment
:param data_df: Data on which the do-operator is to be applied.
:returns: Value of the outcome when treatment is intervened/set to x.
"""
est = self._do(x, data_df)
return est
def construct_symbolic_estimator(self, estimand):
raise NotImplementedError(("Symbolic estimator string is ").format(self.__class__))
def _generate_bootstrap_estimates(self, num_bootstrap_simulations, sample_size_fraction):
"""Helper function to generate causal estimates over bootstrapped samples.
:param num_bootstrap_simulations: Number of simulations for the bootstrap method.
:param sample_size_fraction: Fraction of the dataset to be resampled.
:returns: A collections.namedtuple containing a list of bootstrapped estimates and a dictionary containing parameters used for the bootstrap.
"""
# The array that stores the results of all estimations
simulation_results = np.zeros(num_bootstrap_simulations)
        # Find the sample size as a proportion of the population size
sample_size = int(sample_size_fraction * len(self._data))
if sample_size > len(self._data):
self.logger.warning("WARN: The sample size is greater than the data being sampled")
self.logger.info("INFO: The sample size: {}".format(sample_size))
self.logger.info("INFO: The number of simulations: {}".format(num_bootstrap_simulations))
# Perform the set number of simulations
for index in range(num_bootstrap_simulations):
new_data = resample(self._data, n_samples=sample_size)
new_estimator = self.get_new_estimator_object(
self._target_estimand,
# names of treatment and outcome
test_significance=False,
evaluate_effect_strength=False,
confidence_intervals=False,
)
new_estimator.fit(
new_data,
self._target_estimand.treatment_variable,
self._target_estimand.outcome_variable,
effect_modifier_names=self._effect_modifier_names,
)
new_effect = new_estimator.estimate_effect(
treatment_value=self._treatment_value,
control_value=self._control_value,
target_units=self._target_units,
)
simulation_results[index] = new_effect.value
estimates = CausalEstimator.BootstrapEstimates(
simulation_results,
{"num_simulations": num_bootstrap_simulations, "sample_size_fraction": sample_size_fraction},
)
return estimates
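    # Illustrative resampling step: a minimal sketch of the sklearn resampling
    # used above, on a toy frame.
    #
    #     import pandas as pd
    #     from sklearn.utils import resample
    #     df = pd.DataFrame({"v": [1, 2, 3, 4]})
    #     boot_df = resample(df, n_samples=4)  # samples rows with replacement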
def _estimate_confidence_intervals_with_bootstrap(
self, estimate_value, confidence_level=None, num_simulations=None, sample_size_fraction=None
):
"""
Method to compute confidence interval using bootstrapped sampling.
:param estimate_value: obtained estimate's value
:param confidence_level: The level for which to compute CI (e.g., 95% confidence level translates to confidence_level=0.95)
:param num_simulations: The number of simulations to be performed to get the bootstrap confidence intervals.
:param sample_size_fraction: The fraction of the dataset to be resampled.
:returns: confidence interval at the specified level.
For more details on bootstrap or resampling statistics, refer to the following links:
https://ocw.mit.edu/courses/mathematics/18-05-introduction-to-probability-and-statistics-spring-2014/readings/MIT18_05S14_Reading24.pdf
https://projecteuclid.org/download/pdf_1/euclid.ss/1032280214
"""
# Using class default parameters if not specified
if num_simulations is None:
num_simulations = self.num_simulations
if sample_size_fraction is None:
sample_size_fraction = self.sample_size_fraction
# Checking if bootstrap_estimates are already computed
if self._bootstrap_estimates is None:
self._bootstrap_estimates = self._generate_bootstrap_estimates(num_simulations, sample_size_fraction)
elif CausalEstimator.is_bootstrap_parameter_changed(self._bootstrap_estimates.params, locals()):
# Checked if any parameter is changed from the previous std error estimate
self._bootstrap_estimates = self._generate_bootstrap_estimates(num_simulations, sample_size_fraction)
# Now use the data obtained from the simulations to get the value of the confidence estimates
bootstrap_estimates = self._bootstrap_estimates.estimates
# Get the variations of each bootstrap estimate and sort
bootstrap_variations = [bootstrap_estimate - estimate_value for bootstrap_estimate in bootstrap_estimates]
sorted_bootstrap_variations = np.sort(bootstrap_variations)
        # Now we take the (1 - p)-th and the p-th variations, where p is the chosen confidence level
upper_bound_index = int((1 - confidence_level) * len(sorted_bootstrap_variations))
lower_bound_index = int(confidence_level * len(sorted_bootstrap_variations))
# Get the lower and upper bounds by subtracting the variations from the estimate
lower_bound = estimate_value - sorted_bootstrap_variations[lower_bound_index]
upper_bound = estimate_value - sorted_bootstrap_variations[upper_bound_index]
return lower_bound, upper_bound
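    # Worked example of the pivot computation above (the numbers are made up).
    # With estimate_value = 10, sorted variations [-2, -1, 0, 1, 2] and
    # confidence_level = 0.95: upper_bound_index = int(0.05 * 5) = 0 and
    # lower_bound_index = int(0.95 * 5) = 4, so the interval is
    # (10 - 2, 10 - (-2)) = (8, 12).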
def _estimate_confidence_intervals(self, confidence_level=None, method=None, **kwargs):
"""
        This method is to be overridden by the child classes, so that they
can run a confidence interval estimation method suited to the specific
causal estimator.
"""
raise NotImplementedError(
(
"This method for estimating confidence intervals is "
+ CausalEstimator.DEFAULT_NOTIMPLEMENTEDERROR_MSG
+ " Meanwhile, you can try the bootstrap method (method='bootstrap') to estimate confidence intervals."
).format(self.__class__)
)
def estimate_confidence_intervals(self, estimate_value, confidence_level=None, method=None, **kwargs):
"""Find the confidence intervals corresponding to any estimator
By default, this is done with the help of bootstrapped confidence intervals
but can be overridden if the specific estimator implements other methods of estimating confidence intervals.
If the method provided is not bootstrap, this function calls the implementation of the specific estimator.
:param estimate_value: obtained estimate's value
:param method: Method for estimating confidence intervals.
:param confidence_level: The confidence level of the confidence intervals of the estimate.
:param kwargs: Other optional args to be passed to the CI method.
:returns: The obtained confidence interval.
"""
if method is None:
if self._confidence_intervals:
method = self._confidence_intervals # this is either True or methodname
else:
method = "default"
confidence_intervals = None
if confidence_level is None:
confidence_level = self.confidence_level
if method == "default" or method is True: # user has not provided any method
try:
confidence_intervals = self._estimate_confidence_intervals(confidence_level, method=method, **kwargs)
except NotImplementedError:
confidence_intervals = self._estimate_confidence_intervals_with_bootstrap(
estimate_value, confidence_level, **kwargs
)
else:
if method == "bootstrap":
confidence_intervals = self._estimate_confidence_intervals_with_bootstrap(
estimate_value, confidence_level, **kwargs
)
else:
confidence_intervals = self._estimate_confidence_intervals(confidence_level, method=method, **kwargs)
return confidence_intervals
def _estimate_std_error_with_bootstrap(self, num_simulations=None, sample_size_fraction=None):
"""Compute standard error using the bootstrap method. Standard error
and confidence intervals use the same parameter num_simulations for
the number of bootstrap simulations.
:param num_simulations: Number of bootstrapped samples.
:param sample_size_fraction: Fraction of data to be resampled.
:returns: Standard error of the obtained estimate.
"""
# Use existing params, if new user defined params are not present
if num_simulations is None:
num_simulations = self.num_simulations
if sample_size_fraction is None:
sample_size_fraction = self.sample_size_fraction
# Checking if bootstrap_estimates are already computed
if self._bootstrap_estimates is None:
self._bootstrap_estimates = self._generate_bootstrap_estimates(num_simulations, sample_size_fraction)
elif CausalEstimator.is_bootstrap_parameter_changed(self._bootstrap_estimates.params, locals()):
# Check if any parameter is changed from the previous std error estimate
self._bootstrap_estimates = self._generate_bootstrap_estimates(num_simulations, sample_size_fraction)
std_error = np.std(self._bootstrap_estimates.estimates)
return std_error
def _estimate_std_error(self, method=None, **kwargs):
"""
        This method is to be overridden by the child classes, so that they
can run a standard error estimation method suited to the specific
causal estimator.
"""
raise NotImplementedError(
(
"This method for estimating standard errors is "
+ CausalEstimator.DEFAULT_NOTIMPLEMENTEDERROR_MSG
+ " Meanwhile, you can try the bootstrap method (method='bootstrap') to estimate standard errors."
).format(self.__class__)
)
def estimate_std_error(self, method=None, **kwargs):
"""Compute standard error of an obtained causal estimate.
:param method: Method for computing the standard error.
:param kwargs: Other optional parameters to be passed to the estimating method.
:returns: Standard error of the causal estimate.
"""
if method is None:
if self._confidence_intervals:
method = self._confidence_intervals
else:
method = "default"
std_error = None
if method == "default" or method is True: # user has not provided any method
try:
std_error = self._estimate_std_error(method, **kwargs)
except NotImplementedError:
std_error = self._estimate_std_error_with_bootstrap(**kwargs)
else:
if method == "bootstrap":
std_error = self._estimate_std_error_with_bootstrap(**kwargs)
else:
std_error = self._estimate_std_error(method, **kwargs)
return std_error
def _test_significance_with_bootstrap(self, estimate_value, num_null_simulations=None):
"""Test statistical significance of an estimate using the bootstrap method.
:param estimate_value: Obtained estimate's value
:param num_null_simulations: Number of simulations for the null hypothesis
:returns: p-value of the statistical significance test.
"""
# Use existing params, if new user defined params are not present
if num_null_simulations is None:
num_null_simulations = self.num_null_simulations
do_retest = self._bootstrap_null_estimates is None or CausalEstimator.is_bootstrap_parameter_changed(
self._bootstrap_null_estimates.params, locals()
)
if do_retest:
null_estimates = np.zeros(num_null_simulations)
for i in range(num_null_simulations):
new_outcome = np.random.permutation(self._outcome)
new_data = self._data.assign(dummy_outcome=new_outcome)
# self._outcome = self._data["dummy_outcome"]
new_estimator = self.get_new_estimator_object(
self._target_estimand,
test_significance=False,
evaluate_effect_strength=False,
confidence_intervals=False,
)
new_estimator.fit(
data=new_data,
treatment_name=self._target_estimand.treatment_variable,
outcome_name=("dummy_outcome",),
effect_modifier_names=self._effect_modifier_names,
)
new_effect = new_estimator.estimate_effect(
target_units=self._target_units,
)
null_estimates[i] = new_effect.value
self._bootstrap_null_estimates = CausalEstimator.BootstrapEstimates(
null_estimates, {"num_null_simulations": num_null_simulations, "sample_size_fraction": 1}
)
# Processing the null hypothesis estimates
sorted_null_estimates = np.sort(self._bootstrap_null_estimates.estimates)
self.logger.debug("Null estimates: {0}".format(sorted_null_estimates))
median_estimate = sorted_null_estimates[int(num_null_simulations / 2)]
# Doing a two-sided test
if estimate_value > median_estimate:
# Being conservative with the p-value reported
estimate_index = np.searchsorted(sorted_null_estimates, estimate_value, side="left")
p_value = 1 - (estimate_index / num_null_simulations)
        else:  # estimate_value <= median_estimate
# Being conservative with the p-value reported
estimate_index = np.searchsorted(sorted_null_estimates, estimate_value, side="right")
p_value = estimate_index / num_null_simulations
        # If the p-value is 0 or 1, we can only bound it by a range that depends on the number of simulations
if p_value == 0:
p_value = (0, 1 / len(sorted_null_estimates)) # a tuple determining the range.
elif p_value == 1:
p_value = (1 - 1 / len(sorted_null_estimates), 1)
signif_dict = {"p_value": p_value}
return signif_dict
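    # Worked example of the permutation p-value above (the numbers are made
    # up). With 100 null simulations and an estimate that lies above the null
    # median and above 97 of the sorted null estimates, searchsorted returns
    # index 97, giving p_value = 1 - 97/100 = 0.03.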
def _test_significance(self, estimate_value, method=None, **kwargs):
"""
        This method is to be overridden by the child classes, so that they
can run a significance test suited to the specific
causal estimator.
"""
raise NotImplementedError(
(
"This method for testing statistical significance is "
+ CausalEstimator.DEFAULT_NOTIMPLEMENTEDERROR_MSG
+ " Meanwhile, you can try the bootstrap method (method='bootstrap') to test statistical significance."
).format(self.__class__)
)
def test_significance(self, estimate_value, method=None, **kwargs):
"""Test statistical significance of obtained estimate.
By default, uses resampling to create a non-parametric significance test.
        This is a general procedure; individual child estimators can implement different methods.
If the method name is different from "bootstrap", this function calls the
implementation of the child estimator.
:param self: object instance of class Estimator
:param estimate_value: obtained estimate's value
:param method: Method for checking statistical significance
:returns: p-value from the significance test
"""
if method is None:
if self._significance_test:
method = self._significance_test # this is either True or methodname
else:
method = "default"
signif_dict = None
if method == "default" or method is True: # user has not provided any method
try:
signif_dict = self._test_significance(estimate_value, method, **kwargs)
except NotImplementedError:
signif_dict = self._test_significance_with_bootstrap(estimate_value, **kwargs)
else:
if method == "bootstrap":
signif_dict = self._test_significance_with_bootstrap(estimate_value, **kwargs)
else:
signif_dict = self._test_significance(estimate_value, method, **kwargs)
return signif_dict
def evaluate_effect_strength(self, estimate):
fraction_effect_explained = self._evaluate_effect_strength(estimate, method="fraction-effect")
# Need to test r-squared before supporting
# effect_r_squared = self._evaluate_effect_strength(estimate, method="r-squared")
strength_dict = {
"fraction-effect": fraction_effect_explained
# 'r-squared': effect_r_squared
}
return strength_dict
def _evaluate_effect_strength(self, estimate, method="fraction-effect"):
supported_methods = ["fraction-effect"]
if method not in supported_methods:
raise NotImplementedError("This method is not supported for evaluating effect strength")
if method == "fraction-effect":
naive_obs_estimate = self.estimate_effect_naive()
            self.logger.debug("%s %s", estimate.value, naive_obs_estimate.value)
fraction_effect_explained = estimate.value / naive_obs_estimate.value
return fraction_effect_explained
# elif method == "r-squared":
# outcome_mean = np.mean(self._outcome)
# total_variance = np.sum(np.square(self._outcome - outcome_mean))
# Assuming a linear model with one variable: the treatment
# Currently only works for continuous y
# causal_model = outcome_mean + estimate.value*self._treatment
# squared_residual = np.sum(np.square(self._outcome - causal_model))
# r_squared = 1 - (squared_residual/total_variance)
# return r_squared
else:
return None
def update_input(self, treatment_value, control_value, target_units):
self._control_value = control_value
self._treatment_value = treatment_value
self._target_units = target_units
@staticmethod
def is_bootstrap_parameter_changed(bootstrap_estimates_params, given_params):
"""Check whether parameters of the bootstrap have changed.
This is an efficiency method that checks if fresh resampling of the bootstrap samples is required.
Returns True if parameters have changed and resampling should be done again.
:param bootstrap_estimates_params: A dictionary of parameters for the current bootstrap samples
:param given_params: A dictionary of parameters passed by the user
:returns: A binary flag denoting whether the parameters are different.
"""
is_any_parameter_changed = False
for prm, val in bootstrap_estimates_params.items():
given_val = given_params.get(prm, None)
if given_val is not None and given_val != val:
is_any_parameter_changed = True
break
return is_any_parameter_changed
def target_units_tostr(self):
s = ""
if type(self._target_units) is str:
s += self._target_units
elif callable(self._target_units):
s += "Data subset defined by a function"
elif isinstance(self._target_units, pd.DataFrame):
s += "Data subset provided as a data frame"
return s
def signif_results_tostr(self, signif_results):
s = ""
pval = signif_results["p_value"]
if type(pval) is tuple:
s += "[{0}, {1}]".format(pval[0], pval[1])
else:
s += "{0}".format(pval)
return s
def estimate_effect(
data: pd.DataFrame,
treatment: Union[str, List[str]],
outcome: Union[str, List[str]],
identifier_name: str,
estimator: CausalEstimator,
control_value: int = 0,
treatment_value: int = 1,
target_units: str = "ate",
effect_modifiers: List[str] = None,
fit_estimator: bool = True,
method_params: Optional[Dict] = None,
):
"""Estimate the identified causal effect.
    You can also directly call any of the EconML estimation methods. The convention is "backdoor.econml.path-to-estimator-class". For example, for the double machine learning estimator ("DML" class) that is located inside "dml" module of EconML, you can use the method name, "backdoor.econml.dml.DML". CausalML estimators can also be called. See `this demo notebook <https://py-why.github.io/dowhy/example_notebooks/dowhy-conditional-treatment-effects.html>`_.
    :param data: data frame containing the data
    :param treatment: Name of the treatment
    :param outcome: Name of the outcome
    :param identifier_name: name of the identification method used (e.g., "backdoor" or "iv")
    :param estimator: Instance of a CausalEstimator to use
:param control_value: Value of the treatment in the control group, for effect estimation. If treatment is multi-variate, this can be a list.
:param treatment_value: Value of the treatment in the treated group, for effect estimation. If treatment is multi-variate, this can be a list.
:param target_units: (Experimental) The units for which the treatment effect should be estimated. This can be of three types. (1) a string for common specifications of target units (namely, "ate", "att" and "atc"), (2) a lambda function that can be used as an index for the data (pandas DataFrame), or (3) a new DataFrame that contains values of the effect_modifiers and effect will be estimated only for this new data.
:param effect_modifiers: Names of effect modifier variables can be (optionally) specified here too, since they do not affect identification. If None, the effect_modifiers from the CausalModel are used.
:param fit_estimator: Boolean flag on whether to fit the estimator.
        Setting it to False is useful to estimate the effect on new data using a previously fitted estimator.
    :param method_params: Dictionary containing any method-specific parameters. These are passed directly to the estimating method. See the docs for each estimation method for allowed method-specific params.
    :returns: An instance of the CausalEstimate class, containing the causal effect estimate
and other method-dependent information
"""
if effect_modifiers is None:
effect_modifiers = []
treatment = parse_state(treatment)
outcome = parse_state(outcome)
causal_estimator_class = estimator.__class__
identified_estimand = estimator._target_estimand
identified_estimand.set_identifier_method(identifier_name)
if identified_estimand.no_directed_path:
logger.warning("No directed path from {0} to {1}.".format(treatment, outcome))
return CausalEstimate(
0, identified_estimand, None, control_value=control_value, treatment_value=treatment_value
)
# Check if estimator's target estimand is identified
elif identified_estimand.estimands[identifier_name] is None:
logger.error("No valid identified estimand available.")
return CausalEstimate(None, None, None, control_value=control_value, treatment_value=treatment_value)
if fit_estimator:
estimator.fit(
data=data,
treatment_name=treatment,
outcome_name=outcome,
effect_modifier_names=effect_modifiers,
**method_params["fit_params"] if "fit_params" in method_params else {},
)
estimate = estimator.estimate_effect(
treatment_value=treatment_value,
control_value=control_value,
target_units=target_units,
confidence_intervals=estimator._confidence_intervals,
)
# Store parameters inside estimate object for refutation methods
# TODO: This add_params needs to move to the estimator class
# inside estimate_effect and estimate_conditional_effect
estimate.add_params(
estimand_type=identified_estimand.estimand_type,
estimator_class=causal_estimator_class,
test_significance=estimator._significance_test,
evaluate_effect_strength=estimator._effect_strength_eval,
confidence_intervals=estimator._confidence_intervals,
target_units=target_units,
effect_modifiers=effect_modifiers,
)
if estimator._significance_test:
estimator.test_significance(estimate.value, method=estimator._significance_test)
if estimator._confidence_intervals:
estimator.estimate_confidence_intervals(
estimate.value, confidence_level=estimator.confidence_level, method=estimator._confidence_intervals
)
if estimator._effect_strength_eval:
effect_strength_dict = estimator.evaluate_effect_strength(estimate)
estimate.add_effect_strength(effect_strength_dict)
return estimate
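# Illustrative call of the functional API above: a minimal sketch of the
# fit-then-estimate flow, assuming `df` is the data frame, `estimand` the
# IdentifiedEstimand, and PropensityScoreMatchingEstimator the chosen
# estimator (hypothetical names).
#
#     estimator = PropensityScoreMatchingEstimator(estimand)
#     estimate = estimate_effect(
#         data=df,
#         treatment="treatment",
#         outcome="outcome",
#         identifier_name="backdoor",
#         estimator=estimator,
#         target_units="ate",
#     )
#     print(estimate.value)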
class CausalEstimate:
"""Class for the estimate object that every causal estimator returns"""
def __init__(
self,
estimate,
target_estimand,
realized_estimand_expr,
control_value,
treatment_value,
conditional_estimates=None,
**kwargs,
):
self.value = estimate
self.target_estimand = target_estimand
self.realized_estimand_expr = realized_estimand_expr
self.control_value = control_value
self.treatment_value = treatment_value
self.conditional_estimates = conditional_estimates
self.params = kwargs
if self.params is not None:
for key, value in self.params.items():
setattr(self, key, value)
self.effect_strength = None
def add_estimator(self, estimator_instance):
self.estimator = estimator_instance
def add_effect_strength(self, strength_dict):
self.effect_strength = strength_dict
def add_params(self, **kwargs):
self.params.update(kwargs)
def get_confidence_intervals(self, confidence_level=None, method=None, **kwargs):
"""Get confidence intervals of the obtained estimate.
By default, this is done with the help of bootstrapped confidence intervals
but can be overridden if the specific estimator implements other methods of estimating confidence intervals.
If the method provided is not bootstrap, this function calls the implementation of the specific estimator.
:param method: Method for estimating confidence intervals.
:param confidence_level: The confidence level of the confidence intervals of the estimate.
:param kwargs: Other optional args to be passed to the CI method.
:returns: The obtained confidence interval.
"""
confidence_intervals = self.estimator.estimate_confidence_intervals(
estimate_value=self.value, confidence_level=confidence_level, method=method, **kwargs
)
return confidence_intervals
def get_standard_error(self, method=None, **kwargs):
"""Get standard error of the obtained estimate.
By default, this is done with the help of bootstrapped standard errors
but can be overridden if the specific estimator implements other methods of estimating standard error.
If the method provided is not bootstrap, this function calls the implementation of the specific estimator.
:param method: Method for computing the standard error.
:param kwargs: Other optional parameters to be passed to the estimating method.
:returns: Standard error of the causal estimate.
"""
std_error = self.estimator.estimate_std_error(method=method, **kwargs)
return std_error
def test_stat_significance(self, method=None, **kwargs):
"""Test statistical significance of the estimate obtained.
By default, uses resampling to create a non-parametric significance test.
Individual child estimators can implement different methods.
If the method name is different from "bootstrap", this function calls the
implementation of the child estimator.
:param method: Method for checking statistical significance
:param kwargs: Other optional parameters to be passed to the estimating method.
:returns: p-value from the significance test
"""
signif_results = self.estimator.test_significance(self.value, method=method, **kwargs)
return {"p_value": signif_results["p_value"]}
def estimate_conditional_effects(
self, effect_modifiers=None, num_quantiles=CausalEstimator.NUM_QUANTILES_TO_DISCRETIZE_CONT_COLS
):
"""Estimate treatment effect conditioned on given variables.
If a numeric effect modifier is provided, it is discretized into quantile bins. If you would like a custom discretization, you can do so yourself: create a new column containing the discretized effect modifier and then include that column's name in the effect_modifier_names argument.
:param effect_modifiers: Names of effect modifier variables over which the conditional effects will be estimated. If not provided, defaults to the effect modifiers specified during creation of the CausalEstimator object.
:param num_quantiles: The number of quantiles into which a numeric effect modifier variable is discretized. Does not affect any categorical effect modifiers.
:returns: A (multi-index) dataframe that provides separate effects for each value of the (discretized) effect modifiers.
"""
return self.estimator._estimate_conditional_effects(
self.estimator._estimate_effect_fn, effect_modifiers, num_quantiles
)
def interpret(self, method_name=None, **kwargs):
"""Interpret the causal estimate.
:param method_name: Method used (string) or a list of methods. If None, then the default for the specific estimator is used.
        :param kwargs: Optional parameters that are directly passed to the interpreter method.
:returns: None
"""
if method_name is None:
method_name = self.estimator.interpret_method
method_name_arr = parse_state(method_name)
for method in method_name_arr:
interpreter = interpreters.get_class_object(method)
interpreter(self, **kwargs).interpret()
def __str__(self):
s = "*** Causal Estimate ***\n"
# No estimand was identified (identification failed)
if self.target_estimand is None:
return "Estimation failed! No relevant identified estimand available for this estimation method."
s += "\n## Identified estimand\n{0}".format(self.target_estimand.__str__(only_target_estimand=True))
s += "\n## Realized estimand\n{0}".format(self.realized_estimand_expr)
if hasattr(self, "estimator"):
s += "\nTarget units: {0}\n".format(self.estimator.target_units_tostr())
s += "\n## Estimate\n"
s += "Mean value: {0}\n".format(self.value)
s += ""
if hasattr(self, "cate_estimates"):
s += "Effect estimates: {0}\n".format(self.cate_estimates)
if hasattr(self, "estimator"):
if self.estimator._significance_test:
s += "p-value: {0}\n".format(self.estimator.signif_results_tostr(self.test_stat_significance()))
if self.estimator._confidence_intervals:
s += "{0}% confidence interval: {1}\n".format(
100 * self.estimator.confidence_level, self.get_confidence_intervals()
)
if self.conditional_estimates is not None:
s += "### Conditional Estimates\n"
s += str(self.conditional_estimates)
if self.effect_strength is not None:
s += "\n## Effect Strength\n"
s += "Change in outcome attributable to treatment: {}\n".format(self.effect_strength["fraction-effect"])
# s += "Variance in outcome explained by treatment: {}\n".format(self.effect_strength["r-squared"])
return s
class RealizedEstimand(object):
def __init__(self, identified_estimand, estimator_name):
self.treatment_variable = identified_estimand.treatment_variable
self.outcome_variable = identified_estimand.outcome_variable
self.backdoor_variables = identified_estimand.get_backdoor_variables()
self.instrumental_variables = identified_estimand.instrumental_variables
self.estimand_type = identified_estimand.estimand_type
self.estimand_expression = None
self.assumptions = None
self.estimator_name = estimator_name
def update_assumptions(self, estimator_assumptions):
self.assumptions = estimator_assumptions
def update_estimand_expression(self, estimand_expression):
self.estimand_expression = estimand_expression
def __str__(self):
s = "Realized estimand: {0}\n".format(self.estimator_name)
s += "Realized estimand type: {0}\n".format(self.estimand_type)
s += "Estimand expression:\n{0}\n".format(sp.pretty(self.estimand_expression))
j = 1
for ass_name, ass_str in self.assumptions.items():
s += "Estimand assumption {0}, {1}: {2}\n".format(j, ass_name, ass_str)
j += 1
return s
| andresmor-ms | 11c4e0dafd6e824eb81ad14262457d954ae61468 | affe0952f4aba6845247355c171565510c2c1673 | I'll take a look at this, I tried to do this in a previous version but found out that self._data was used in several places and decided to go with setting self._data and focus on getting the fit() to work correctly, if this becomes a big change, would you agree that it could go into another PR? @amit-sharma @petergtz | andresmor-ms | 207 |
py-why/dowhy | 746 | Functional api/causal estimators | * Introduce `fit()` method to estimators.
* Refactor constructors to avoid using `*args` and `**kwargs` and have more explicit parameters.
* Refactor refuters and other parts of the code to use `fit()` and modify arguments to `estimate_effect()` | null | 2022-11-04 16:15:39+00:00 | 2022-12-03 17:07:53+00:00 | dowhy/causal_estimator.py | import logging
from collections import namedtuple
from typing import Dict, List, Optional, Union
import numpy as np
import pandas as pd
import sympy as sp
from sklearn.utils import resample
import dowhy.interpreters as interpreters
from dowhy import causal_estimators
from dowhy.causal_graph import CausalGraph
from dowhy.causal_identifier.identified_estimand import IdentifiedEstimand
from dowhy.utils.api import parse_state
logger = logging.getLogger(__name__)
class CausalEstimator:
"""Base class for an estimator of causal effect.
Subclasses implement different estimation methods. All estimation methods are in the package "dowhy.causal_estimators"
"""
# The default number of simulations for statistical testing
DEFAULT_NUMBER_OF_SIMULATIONS_STAT_TEST = 1000
# The default number of simulations to obtain confidence intervals
DEFAULT_NUMBER_OF_SIMULATIONS_CI = 100
# The portion of the total size that should be taken each time to find the confidence intervals
# 1 is the recommended value
# https://ocw.mit.edu/courses/mathematics/18-05-introduction-to-probability-and-statistics-spring-2014/readings/MIT18_05S14_Reading24.pdf
# https://projecteuclid.org/download/pdf_1/euclid.ss/1032280214
DEFAULT_SAMPLE_SIZE_FRACTION = 1
# The default Confidence Level
DEFAULT_CONFIDENCE_LEVEL = 0.95
# Number of quantiles to discretize continuous columns, for applying groupby
NUM_QUANTILES_TO_DISCRETIZE_CONT_COLS = 5
# Prefix to add to temporary categorical variables created after discretization
TEMP_CAT_COLUMN_PREFIX = "__categorical__"
DEFAULT_NOTIMPLEMENTEDERROR_MSG = "not yet implemented for {0}. If you would like this to be implemented in the next version, please raise an issue at https://github.com/microsoft/dowhy/issues"
BootstrapEstimates = namedtuple("BootstrapEstimates", ["estimates", "params"])
DEFAULT_INTERPRET_METHOD = ["textual_effect_interpreter"]
# std args to be removed from locals() before being passed to args_dict
_STD_INIT_ARGS = ("self", "__class__", "args", "kwargs")
def __init__(
self,
data,
identified_estimand,
treatment,
outcome,
control_value=0,
treatment_value=1,
test_significance=False,
evaluate_effect_strength=False,
confidence_intervals=False,
target_units=None,
effect_modifiers=None,
num_null_simulations=DEFAULT_NUMBER_OF_SIMULATIONS_STAT_TEST,
num_simulations=DEFAULT_NUMBER_OF_SIMULATIONS_CI,
sample_size_fraction=DEFAULT_SAMPLE_SIZE_FRACTION,
confidence_level=DEFAULT_CONFIDENCE_LEVEL,
need_conditional_estimates="auto",
num_quantiles_to_discretize_cont_cols=NUM_QUANTILES_TO_DISCRETIZE_CONT_COLS,
**kwargs,
):
"""Initializes an estimator with data and names of relevant variables.
This method is called from the constructors of its child classes.
:param data: data frame containing the data
:param identified_estimand: probability expression
representing the target identified estimand to estimate.
:param treatment: name of the treatment variable
:param outcome: name of the outcome variable
:param control_value: Value of the treatment in the control group, for effect estimation. If treatment is multi-variate, this can be a list.
:param treatment_value: Value of the treatment in the treated group, for effect estimation. If treatment is multi-variate, this can be a list.
:param test_significance: Binary flag or a string indicating whether to test significance and by which method. All estimators support test_significance="bootstrap" that estimates a p-value for the obtained estimate using the bootstrap method. Individual estimators can override this to support custom testing methods. The bootstrap method supports an optional parameter, num_null_simulations. If False, no testing is done. If True, significance of the estimate is tested using the custom method if available, otherwise by bootstrap.
:param evaluate_effect_strength: (Experimental) whether to evaluate the strength of effect
:param confidence_intervals: Binary flag or a string indicating whether the confidence intervals should be computed and which method should be used. All methods support estimation of confidence intervals using the bootstrap method by using the parameter confidence_intervals="bootstrap". The bootstrap method takes in two arguments (num_simulations and sample_size_fraction) that can be optionally specified in the params dictionary. Estimators may also override this to implement their own confidence interval method. If this parameter is False, no confidence intervals are computed. If True, confidence intervals are computed by the estimator's specific method if available, otherwise through bootstrap.
:param target_units: The units for which the treatment effect should be estimated. This can be a string for common specifications of target units (namely, "ate", "att" and "atc"). It can also be a lambda function that can be used as an index for the data (pandas DataFrame). Alternatively, it can be a new DataFrame that contains values of the effect_modifiers and effect will be estimated only for this new data.
:param effect_modifiers: Variables on which to compute separate
effects, or return a heterogeneous effect function. Not all
methods support this currently.
:param num_null_simulations: The number of simulations for testing the
statistical significance of the estimator
:param num_simulations: The number of simulations for finding the
confidence interval (and/or standard error) for an estimate
:param sample_size_fraction: The fraction of the data resampled by the
bootstrap estimator
:param confidence_level: The confidence level of the confidence
interval estimate
:param need_conditional_estimates: Boolean flag indicating whether
conditional estimates should be computed. Defaults to True if
there are effect modifiers in the graph
:param num_quantiles_to_discretize_cont_cols: The number of quantiles
into which a numeric effect modifier is split, to enable
estimation of conditional treatment effect over it.
:param kwargs: (optional) Additional estimator-specific parameters
:returns: an instance of the estimator class.
"""
self._data = data
self._target_estimand = identified_estimand
# Currently estimation methods only support univariate treatment and outcome
self._treatment_name = treatment
self._outcome_name = outcome[0] # assuming one-dimensional outcome
self._control_value = control_value
self._treatment_value = treatment_value
self._significance_test = test_significance
self._effect_strength_eval = evaluate_effect_strength
self._target_units = target_units
self._effect_modifier_names = effect_modifiers
self._confidence_intervals = confidence_intervals
self._bootstrap_estimates = None # for confidence intervals and std error
self._bootstrap_null_estimates = None # for significance test
self._effect_modifiers = None
self.method_params = kwargs
# Setting the default interpret method
self.interpret_method = CausalEstimator.DEFAULT_INTERPRET_METHOD
self.logger = logging.getLogger(__name__)
# Setting treatment and outcome values
if self._data is not None:
self._treatment = self._data[self._treatment_name]
self._outcome = self._data[self._outcome_name]
if self._effect_modifier_names:
# only add the observed nodes
self._effect_modifier_names = [
cname for cname in self._effect_modifier_names if cname in self._data.columns
]
if len(self._effect_modifier_names) > 0:
self._effect_modifiers = self._data[self._effect_modifier_names]
self._effect_modifiers = pd.get_dummies(self._effect_modifiers, drop_first=True)
self.logger.debug("Effect modifiers: " + ",".join(self._effect_modifier_names))
else:
self._effect_modifier_names = None
# Check if some parameters were set, otherwise set to default values
self.num_null_simulations = num_null_simulations
self.num_simulations = num_simulations
self.sample_size_fraction = sample_size_fraction
self.confidence_level = confidence_level
self.num_quantiles_to_discretize_cont_cols = num_quantiles_to_discretize_cont_cols
# Estimate conditional estimates by default
self.need_conditional_estimates = (
need_conditional_estimates if need_conditional_estimates != "auto" else bool(self._effect_modifier_names)
)
@staticmethod
def get_estimator_object(new_data, identified_estimand, estimate):
"""Create a new estimator of the same type as the one passed in the estimate argument.
Creates a new object with new_data and the identified_estimand
:param new_data: np.ndarray, pd.Series, pd.DataFrame
The newly assigned data on which the estimator should run
:param identified_estimand: IdentifiedEstimand
An instance of the identified estimand class that provides the information with
respect to which causal pathways are employed when the treatment effects the outcome
:param estimate: CausalEstimate
It is an already existing estimate whose properties we wish to replicate
:returns: An instance of the same estimator class that had generated the given estimate.
"""
estimator_class = estimate.params["estimator_class"]
new_estimator = estimator_class(
new_data,
identified_estimand,
identified_estimand.treatment_variable,
identified_estimand.outcome_variable,
# names of treatment and outcome
control_value=estimate.control_value,
treatment_value=estimate.treatment_value,
test_significance=False,
evaluate_effect_strength=False,
confidence_intervals=estimate.params["confidence_intervals"],
target_units=estimate.params["target_units"],
effect_modifiers=estimate.params["effect_modifiers"],
**estimate.params["method_params"] if estimate.params["method_params"] is not None else {},
)
return new_estimator
def _estimate_effect(self):
"""This method is to be overriden by the child classes, so that they can run the estimation technique of their choice"""
raise NotImplementedError(
("Main estimation method is " + CausalEstimator.DEFAULT_NOTIMPLEMENTEDERROR_MSG).format(self.__class__)
)
def estimate_effect(self):
"""Base estimation method that calls the estimate_effect method of its calling subclass.
Can optionally also test significance and estimate effect strength for any returned estimate.
:param self: object instance of class Estimator
:returns: A CausalEstimate instance that contains point estimates of average and conditional effects. Based on the parameters provided, it optionally includes confidence intervals, standard errors, statistical significance and other statistical parameters.
"""
est = self._estimate_effect()
est.add_estimator(self)
if self._significance_test:
self.test_significance(est.value, method=self._significance_test)
if self._confidence_intervals:
self.estimate_confidence_intervals(
est.value, confidence_level=self.confidence_level, method=self._confidence_intervals
)
if self._effect_strength_eval:
effect_strength_dict = self.evaluate_effect_strength(est)
est.add_effect_strength(effect_strength_dict)
return est
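# Hedged usage sketch (illustrative comment, not part of the library source):
# with this pre-fit() API, a concrete estimator is constructed with the data
# and then queried in one step, mirroring _generate_bootstrap_estimates below:
#   est = SomeEstimatorSubclass(df, estimand, estimand.treatment_variable,
#                               estimand.outcome_variable)
#   effect = est.estimate_effect()
# "SomeEstimatorSubclass" is a placeholder for any class in
# dowhy.causal_estimators.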
def estimate_effect_naive(self):
# TODO Only works for binary treatment
df_withtreatment = self._data.loc[self._data[self._treatment_name] == 1]
df_notreatment = self._data.loc[self._data[self._treatment_name] == 0]
est = np.mean(df_withtreatment[self._outcome_name]) - np.mean(df_notreatment[self._outcome_name])
return CausalEstimate(est, None, None, control_value=0, treatment_value=1)
def _estimate_effect_fn(self, data_df):
"""Function used in conditional effect estimation. This function is to be overridden by each child estimator.
The overridden function should take in a dataframe as input and return the estimate for that data.
"""
raise NotImplementedError(
("Conditional treatment effects are " + CausalEstimator.DEFAULT_NOTIMPLEMENTEDERROR_MSG).format(
self.__class__
)
)
def _estimate_conditional_effects(self, estimate_effect_fn, effect_modifier_names=None, num_quantiles=None):
"""Estimate conditional treatment effects. Common method for all estimators that utilizes a specific estimate_effect_fn implemented by each child estimator.
If a numeric effect modifier is provided, it is discretized into quantile bins. If you would like a custom discretization, you can do so yourself: create a new column containing the discretized effect modifier and then include that column's name in the effect_modifier_names argument.
:param estimate_effect_fn: Function that has a single parameter (a data frame) and returns the treatment effect estimate on that data.
:param effect_modifier_names: Names of effect modifier variables over which the conditional effects will be estimated. If not provided, defaults to the effect modifiers specified during creation of the CausalEstimator object.
:param num_quantiles: The number of quantiles into which a numeric effect modifier variable is discretized. Does not affect any categorical effect modifiers.
:returns: A (multi-index) dataframe that provides separate effects for each value of the (discretized) effect modifiers.
"""
# Defaulting to class default values if parameters are not provided
if effect_modifier_names is None:
effect_modifier_names = self._effect_modifier_names
if num_quantiles is None:
num_quantiles = self.num_quantiles_to_discretize_cont_cols
# Checking that there is at least one effect modifier
if not effect_modifier_names:
raise ValueError("At least one effect modifier should be specified to compute conditional effects.")
# Making sure that effect_modifier_names is a list
effect_modifier_names = parse_state(effect_modifier_names)
if not all(em in self._effect_modifier_names for em in effect_modifier_names):
self.logger.warning(
"At least one of the provided effect modifiers was not included while fitting the estimator. You may get incorrect results. To resolve, fit the estimator again by providing the updated effect modifiers in estimate_effect()."
)
# Making a copy since we are going to be changing effect modifier names
effect_modifier_names = effect_modifier_names.copy()
prefix = CausalEstimator.TEMP_CAT_COLUMN_PREFIX
# For every numeric effect modifier, adding a temp categorical column
for i in range(len(effect_modifier_names)):
em = effect_modifier_names[i]
if pd.api.types.is_numeric_dtype(self._data[em].dtypes):
self._data[prefix + str(em)] = pd.qcut(self._data[em], num_quantiles, duplicates="drop")
effect_modifier_names[i] = prefix + str(em)
# Grouping by effect modifiers and computing effect separately
by_effect_mods = self._data.groupby(effect_modifier_names)
cond_est_fn = lambda x: self._do(self._treatment_value, x) - self._do(self._control_value, x)
conditional_estimates = by_effect_mods.apply(estimate_effect_fn)
# Deleting the temporary categorical columns
for em in effect_modifier_names:
if em.startswith(prefix):
self._data.pop(em)
return conditional_estimates
def _do(self, x, data_df=None):
raise NotImplementedError(
("Do-operator is " + CausalEstimator.DEFAULT_NOTIMPLEMENTEDERROR_MSG).format(self.__class__)
)
def do(self, x, data_df=None):
"""Method that implements the do-operator.
Given a value x for the treatment, returns the expected value of the outcome when the treatment is intervened to a value x.
:param x: Value of the treatment
:param data_df: Data on which the do-operator is to be applied.
:returns: Value of the outcome when treatment is intervened/set to x.
"""
est = self._do(x, data_df)
return est
def construct_symbolic_estimator(self, estimand):
raise NotImplementedError(("Symbolic estimator string is ").format(self.__class__))
def _generate_bootstrap_estimates(self, num_bootstrap_simulations, sample_size_fraction):
"""Helper function to generate causal estimates over bootstrapped samples.
:param num_bootstrap_simulations: Number of simulations for the bootstrap method.
:param sample_size_fraction: Fraction of the dataset to be resampled.
:returns: A collections.namedtuple containing a list of bootstrapped estimates and a dictionary containing parameters used for the bootstrap.
"""
# The array that stores the results of all estimations
simulation_results = np.zeros(num_bootstrap_simulations)
# Find the sample size as a proportion of the population size
sample_size = int(sample_size_fraction * len(self._data))
if sample_size > len(self._data):
self.logger.warning("WARN: The sample size is greater than the data being sampled")
self.logger.info("INFO: The sample size: {}".format(sample_size))
self.logger.info("INFO: The number of simulations: {}".format(num_bootstrap_simulations))
# Perform the set number of simulations
for index in range(num_bootstrap_simulations):
new_data = resample(self._data, n_samples=sample_size)
new_estimator = type(self)(
new_data,
self._target_estimand,
self._target_estimand.treatment_variable,
self._target_estimand.outcome_variable,
# names of treatment and outcome
treatment_value=self._treatment_value,
control_value=self._control_value,
test_significance=False,
evaluate_effect_strength=False,
confidence_intervals=False,
target_units=self._target_units,
effect_modifiers=self._effect_modifier_names,
**self.method_params,
)
new_effect = new_estimator.estimate_effect()
simulation_results[index] = new_effect.value
estimates = CausalEstimator.BootstrapEstimates(
simulation_results,
{"num_simulations": num_bootstrap_simulations, "sample_size_fraction": sample_size_fraction},
)
return estimates
def _estimate_confidence_intervals_with_bootstrap(
self, estimate_value, confidence_level=None, num_simulations=None, sample_size_fraction=None
):
"""
Method to compute confidence interval using bootstrapped sampling.
:param estimate_value: obtained estimate's value
:param confidence_level: The level for which to compute CI (e.g., 95% confidence level translates to confidence_level=0.95)
:param num_simulations: The number of simulations to be performed to get the bootstrap confidence intervals.
:param sample_size_fraction: The fraction of the dataset to be resampled.
:returns: confidence interval at the specified level.
For more details on bootstrap or resampling statistics, refer to the following links:
https://ocw.mit.edu/courses/mathematics/18-05-introduction-to-probability-and-statistics-spring-2014/readings/MIT18_05S14_Reading24.pdf
https://projecteuclid.org/download/pdf_1/euclid.ss/1032280214
"""
# Using class default parameters if not specified
if num_simulations is None:
num_simulations = self.num_simulations
if sample_size_fraction is None:
sample_size_fraction = self.sample_size_fraction
# Checking if bootstrap_estimates are already computed
if self._bootstrap_estimates is None:
self._bootstrap_estimates = self._generate_bootstrap_estimates(num_simulations, sample_size_fraction)
elif CausalEstimator.is_bootstrap_parameter_changed(self._bootstrap_estimates.params, locals()):
# Checked if any parameter is changed from the previous std error estimate
self._bootstrap_estimates = self._generate_bootstrap_estimates(num_simulations, sample_size_fraction)
# Now use the data obtained from the simulations to get the value of the confidence estimates
bootstrap_estimates = self._bootstrap_estimates.estimates
# Get the variations of each bootstrap estimate and sort
bootstrap_variations = [bootstrap_estimate - estimate_value for bootstrap_estimate in bootstrap_estimates]
sorted_bootstrap_variations = np.sort(bootstrap_variations)
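# Note: this implements the "basic" (reverse-percentile) bootstrap interval,
# which subtracts empirical quantiles of the bootstrap deviations from the
# point estimate instead of using the raw percentile method.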
# Now we take the (1 - p)th and the (p)th variations, where p is the chosen confidence level
upper_bound_index = int((1 - confidence_level) * len(sorted_bootstrap_variations))
lower_bound_index = int(confidence_level * len(sorted_bootstrap_variations))
# Get the lower and upper bounds by subtracting the variations from the estimate
lower_bound = estimate_value - sorted_bootstrap_variations[lower_bound_index]
upper_bound = estimate_value - sorted_bootstrap_variations[upper_bound_index]
return lower_bound, upper_bound
def _estimate_confidence_intervals(self, confidence_level=None, method=None, **kwargs):
"""
This method is to be overriden by the child classes, so that they
can run a confidence interval estimation method suited to the specific
causal estimator.
"""
raise NotImplementedError(
(
"This method for estimating confidence intervals is "
+ CausalEstimator.DEFAULT_NOTIMPLEMENTEDERROR_MSG
+ " Meanwhile, you can try the bootstrap method (method='bootstrap') to estimate confidence intervals."
).format(self.__class__)
)
def estimate_confidence_intervals(self, estimate_value, confidence_level=None, method=None, **kwargs):
"""Find the confidence intervals corresponding to any estimator
By default, this is done with the help of bootstrapped confidence intervals
but can be overridden if the specific estimator implements other methods of estimating confidence intervals.
If the method provided is not bootstrap, this function calls the implementation of the specific estimator.
:param estimate_value: obtained estimate's value
:param method: Method for estimating confidence intervals.
:param confidence_level: The confidence level of the confidence intervals of the estimate.
:param kwargs: Other optional args to be passed to the CI method.
:returns: The obtained confidence interval.
"""
if method is None:
if self._confidence_intervals:
method = self._confidence_intervals # this is either True or methodname
else:
method = "default"
confidence_intervals = None
if confidence_level is None:
confidence_level = self.confidence_level
if method == "default" or method is True: # user has not provided any method
try:
confidence_intervals = self._estimate_confidence_intervals(confidence_level, method=method, **kwargs)
except NotImplementedError:
confidence_intervals = self._estimate_confidence_intervals_with_bootstrap(
estimate_value, confidence_level, **kwargs
)
else:
if method == "bootstrap":
confidence_intervals = self._estimate_confidence_intervals_with_bootstrap(
estimate_value, confidence_level, **kwargs
)
else:
confidence_intervals = self._estimate_confidence_intervals(confidence_level, method=method, **kwargs)
return confidence_intervals
def _estimate_std_error_with_bootstrap(self, num_simulations=None, sample_size_fraction=None):
"""Compute standard error using the bootstrap method. Standard error
and confidence intervals use the same parameter num_simulations for
the number of bootstrap simulations.
:param num_simulations: Number of bootstrapped samples.
:param sample_size_fraction: Fraction of data to be resampled.
:returns: Standard error of the obtained estimate.
"""
# Use existing params, if new user defined params are not present
if num_simulations is None:
num_simulations = self.num_simulations
if sample_size_fraction is None:
sample_size_fraction = self.sample_size_fraction
# Checking if bootstrap_estimates are already computed
if self._bootstrap_estimates is None:
self._bootstrap_estimates = self._generate_bootstrap_estimates(num_simulations, sample_size_fraction)
elif CausalEstimator.is_bootstrap_parameter_changed(self._bootstrap_estimates.params, locals()):
# Check if any parameter is changed from the previous std error estimate
self._bootstrap_estimates = self._generate_bootstrap_estimates(num_simulations, sample_size_fraction)
std_error = np.std(self._bootstrap_estimates.estimates)
return std_error
def _estimate_std_error(self, method=None, **kwargs):
"""
This method is to be overriden by the child classes, so that they
can run a standard error estimation method suited to the specific
causal estimator.
"""
raise NotImplementedError(
(
"This method for estimating standard errors is "
+ CausalEstimator.DEFAULT_NOTIMPLEMENTEDERROR_MSG
+ " Meanwhile, you can try the bootstrap method (method='bootstrap') to estimate standard errors."
).format(self.__class__)
)
def estimate_std_error(self, method=None, **kwargs):
"""Compute standard error of an obtained causal estimate.
:param method: Method for computing the standard error.
:param kwargs: Other optional parameters to be passed to the estimating method.
:returns: Standard error of the causal estimate.
"""
if method is None:
if self._confidence_intervals:
method = self._confidence_intervals
else:
method = "default"
std_error = None
if method == "default" or method is True: # user has not provided any method
try:
std_error = self._estimate_std_error(method, **kwargs)
except NotImplementedError:
std_error = self._estimate_std_error_with_bootstrap(**kwargs)
else:
if method == "bootstrap":
std_error = self._estimate_std_error_with_bootstrap(**kwargs)
else:
std_error = self._estimate_std_error(method, **kwargs)
return std_error
def _test_significance_with_bootstrap(self, estimate_value, num_null_simulations=None):
"""Test statistical significance of an estimate using the bootstrap method.
:param estimate_value: Obtained estimate's value
:param num_null_simulations: Number of simulations for the null hypothesis
:returns: p-value of the statistical significance test.
"""
# Use existing params, if new user defined params are not present
if num_null_simulations is None:
num_null_simulations = self.num_null_simulations
do_retest = self._bootstrap_null_estimates is None or CausalEstimator.is_bootstrap_parameter_changed(
self._bootstrap_null_estimates.params, locals()
)
if do_retest:
null_estimates = np.zeros(num_null_simulations)
for i in range(num_null_simulations):
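# Permuting the outcome column breaks any association with the treatment,
# so each re-estimated effect below is a draw from the null distribution
# under "no causal effect".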
new_outcome = np.random.permutation(self._outcome)
new_data = self._data.assign(dummy_outcome=new_outcome)
# self._outcome = self._data["dummy_outcome"]
new_estimator = type(self)(
new_data,
self._target_estimand,
self._target_estimand.treatment_variable,
("dummy_outcome",),
test_significance=False,
evaluate_effect_strength=False,
confidence_intervals=False,
target_units=self._target_units,
effect_modifiers=self._effect_modifier_names,
**self.method_params,
)
new_effect = new_estimator.estimate_effect()
null_estimates[i] = new_effect.value
self._bootstrap_null_estimates = CausalEstimator.BootstrapEstimates(
null_estimates, {"num_null_simulations": num_null_simulations, "sample_size_fraction": 1}
)
# Processing the null hypothesis estimates
sorted_null_estimates = np.sort(self._bootstrap_null_estimates.estimates)
self.logger.debug("Null estimates: {0}".format(sorted_null_estimates))
median_estimate = sorted_null_estimates[int(num_null_simulations / 2)]
# Doing a two-sided test
if estimate_value > median_estimate:
# Being conservative with the p-value reported
estimate_index = np.searchsorted(sorted_null_estimates, estimate_value, side="left")
p_value = 1 - (estimate_index / num_null_simulations)
if estimate_value <= median_estimate:
# Being conservative with the p-value reported
estimate_index = np.searchsorted(sorted_null_estimates, estimate_value, side="right")
p_value = estimate_index / num_null_simulations
# A p-value of exactly 0 or 1 is limited by the simulation resolution, so report a range instead
if p_value == 0:
p_value = (0, 1 / len(sorted_null_estimates)) # a tuple determining the range.
elif p_value == 1:
p_value = (1 - 1 / len(sorted_null_estimates), 1)
signif_dict = {"p_value": p_value}
return signif_dict
def _test_significance(self, estimate_value, method=None, **kwargs):
"""
This method is to be overriden by the child classes, so that they
can run a significance test suited to the specific
causal estimator.
"""
raise NotImplementedError(
(
"This method for testing statistical significance is "
+ CausalEstimator.DEFAULT_NOTIMPLEMENTEDERROR_MSG
+ " Meanwhile, you can try the bootstrap method (method='bootstrap') to test statistical significance."
).format(self.__class__)
)
def test_significance(self, estimate_value, method=None, **kwargs):
"""Test statistical significance of obtained estimate.
By default, uses resampling to create a non-parametric significance test.
A general procedure. Individual child estimators can implement different methods.
If the method name is different from "bootstrap", this function calls the
implementation of the child estimator.
:param self: object instance of class Estimator
:param estimate_value: obtained estimate's value
:param method: Method for checking statistical significance
:returns: p-value from the significance test
"""
if method is None:
if self._significance_test:
method = self._significance_test # this is either True or methodname
else:
method = "default"
signif_dict = None
if method == "default" or method is True: # user has not provided any method
try:
signif_dict = self._test_significance(estimate_value, method, **kwargs)
except NotImplementedError:
signif_dict = self._test_significance_with_bootstrap(estimate_value, **kwargs)
else:
if method == "bootstrap":
signif_dict = self._test_significance_with_bootstrap(estimate_value, **kwargs)
else:
signif_dict = self._test_significance(estimate_value, method, **kwargs)
return signif_dict
def evaluate_effect_strength(self, estimate):
fraction_effect_explained = self._evaluate_effect_strength(estimate, method="fraction-effect")
# Need to test r-squared before supporting
# effect_r_squared = self._evaluate_effect_strength(estimate, method="r-squared")
strength_dict = {
"fraction-effect": fraction_effect_explained
# 'r-squared': effect_r_squared
}
return strength_dict
def _evaluate_effect_strength(self, estimate, method="fraction-effect"):
supported_methods = ["fraction-effect"]
if method not in supported_methods:
raise NotImplementedError("This method is not supported for evaluating effect strength")
if method == "fraction-effect":
naive_obs_estimate = self.estimate_effect_naive()
self.logger.debug(estimate.value, naive_obs_estimate.value)
fraction_effect_explained = estimate.value / naive_obs_estimate.value
return fraction_effect_explained
# elif method == "r-squared":
# outcome_mean = np.mean(self._outcome)
# total_variance = np.sum(np.square(self._outcome - outcome_mean))
# Assuming a linear model with one variable: the treatment
# Currently only works for continuous y
# causal_model = outcome_mean + estimate.value*self._treatment
# squared_residual = np.sum(np.square(self._outcome - causal_model))
# r_squared = 1 - (squared_residual/total_variance)
# return r_squared
else:
return None
def update_input(self, treatment_value, control_value, target_units):
self._control_value = control_value
self._treatment_value = treatment_value
self._target_units = target_units
@staticmethod
def is_bootstrap_parameter_changed(bootstrap_estimates_params, given_params):
"""Check whether parameters of the bootstrap have changed.
This is an efficiency method that checks if fresh resampling of the bootstrap samples is required.
Returns True if parameters have changed and resampling should be done again.
:param bootstrap_estimates_params: A dictionary of parameters for the current bootstrap samples
:param given_params: A dictionary of parameters passed by the user
:returns: A binary flag denoting whether the parameters are different.
"""
is_any_parameter_changed = False
for prm, val in bootstrap_estimates_params.items():
given_val = given_params.get(prm, None)
if given_val is not None and given_val != val:
is_any_parameter_changed = True
break
return is_any_parameter_changed
def target_units_tostr(self):
s = ""
if type(self._target_units) is str:
s += self._target_units
elif callable(self._target_units):
s += "Data subset defined by a function"
elif isinstance(self._target_units, pd.DataFrame):
s += "Data subset provided as a data frame"
return s
def signif_results_tostr(self, signif_results):
s = ""
pval = signif_results["p_value"]
if type(pval) is tuple:
s += "[{0}, {1}]".format(pval[0], pval[1])
else:
s += "{0}".format(pval)
return s
def estimate_effect(
treatment: Union[str, List[str]],
outcome: Union[str, List[str]],
identified_estimand: IdentifiedEstimand,
identifier_name: str,
method: CausalEstimator,
control_value: int = 0,
treatment_value: int = 1,
test_significance: Optional[bool] = None,
evaluate_effect_strength: bool = False,
confidence_intervals: bool = False,
target_units: str = "ate",
effect_modifiers: List[str] = [],
fit_estimator: bool = True,
method_params: Optional[Dict] = None,
):
"""Estimate the identified causal effect.
The estimation method is specified by passing an estimator object (the "method" parameter) along with the identification strategy name ("identifier_name"). For reference, string method names in the model API follow the convention of identification method followed by the specific estimation method: "[backdoor/iv].estimation_method_name". The following methods are supported.
* Propensity Score Matching: "backdoor.propensity_score_matching"
* Propensity Score Stratification: "backdoor.propensity_score_stratification"
* Propensity Score-based Inverse Weighting: "backdoor.propensity_score_weighting"
* Linear Regression: "backdoor.linear_regression"
* Generalized Linear Models (e.g., logistic regression): "backdoor.generalized_linear_model"
* Instrumental Variables: "iv.instrumental_variable"
* Regression Discontinuity: "iv.regression_discontinuity"
In addition, you can directly call any of the EconML estimation methods. The convention is "backdoor.econml.path-to-estimator-class". For example, for the double machine learning estimator ("DML" class) that is located inside "dml" module of EconML, you can use the method name, "backdoor.econml.dml.DML". CausalML estimators can also be called. See `this demo notebook <https://py-why.github.io/dowhy/example_notebooks/dowhy-conditional-treatment-effects.html>`_.
:param treatment: Name of the treatment
:param outcome: Name of the outcome
:param identified_estimand: a probability expression
that represents the effect to be estimated. Output of
CausalModel.identify_effect method
:param identifier_name: name of the identification strategy used (e.g., "backdoor" or "iv").
:param method: an instance of the CausalEstimator subclass implementing the estimation method.
:param control_value: Value of the treatment in the control group, for effect estimation. If treatment is multi-variate, this can be a list.
:param treatment_value: Value of the treatment in the treated group, for effect estimation. If treatment is multi-variate, this can be a list.
:param test_significance: Binary flag on whether to additionally do a statistical significance test for the estimate.
:param evaluate_effect_strength: (Experimental) Binary flag on whether to estimate the relative strength of the treatment's effect. This measure can be used to compare different treatments for the same outcome (by running this method with different treatments sequentially).
:param confidence_intervals: (Experimental) Binary flag indicating whether confidence intervals should be computed.
:param target_units: (Experimental) The units for which the treatment effect should be estimated. This can be of three types. (1) a string for common specifications of target units (namely, "ate", "att" and "atc"), (2) a lambda function that can be used as an index for the data (pandas DataFrame), or (3) a new DataFrame that contains values of the effect_modifiers and effect will be estimated only for this new data.
:param effect_modifiers: Names of effect modifier variables can be (optionally) specified here too, since they do not affect identification. If None, the effect_modifiers from the CausalModel are used.
:param fit_estimator: Boolean flag on whether to fit the estimator.
Setting it to False is useful to estimate the effect on new data using a previously fitted estimator.
:param method_params: Dictionary containing any method-specific parameters. These are passed directly to the estimating method. See the docs for each estimation method for allowed method-specific params.
:returns: An instance of the CausalEstimate class, containing the causal effect estimate
and other method-dependent information
"""
treatment = parse_state(treatment)
outcome = parse_state(outcome)
causal_estimator_class = method.__class__
identified_estimand.set_identifier_method(identifier_name)
if identified_estimand.no_directed_path:
logger.warning("No directed path from {0} to {1}.".format(treatment, outcome))
return CausalEstimate(
0, identified_estimand, None, control_value=control_value, treatment_value=treatment_value
)
# Check if estimator's target estimand is identified
elif identified_estimand.estimands[identifier_name] is None:
logger.error("No valid identified estimand available.")
return CausalEstimate(None, None, None, control_value=control_value, treatment_value=treatment_value)
method.update_input(treatment_value, control_value, target_units)
estimate = method.estimate_effect()
# Store parameters inside estimate object for refutation methods
# TODO: This add_params needs to move to the estimator class
# inside estimate_effect and estimate_conditional_effect
estimate.add_params(
estimand_type=identified_estimand.estimand_type,
estimator_class=causal_estimator_class,
test_significance=test_significance,
evaluate_effect_strength=evaluate_effect_strength,
confidence_intervals=confidence_intervals,
target_units=target_units,
effect_modifiers=effect_modifiers,
method_params=method_params,
)
return estimate
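# Hedged usage sketch (illustrative comment only): with this functional API,
# the estimator object is created first and then passed in, e.g.
#   estimate = estimate_effect(
#       treatment="t", outcome="y",
#       identified_estimand=estimand, identifier_name="backdoor",
#       method=estimator,  # an already-constructed CausalEstimator subclass
#   )
# where "t", "y", `estimand` and `estimator` are placeholders for the
# caller's own column names and objects.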
class CausalEstimate:
"""Class for the estimate object that every causal estimator returns"""
def __init__(
self,
estimate,
target_estimand,
realized_estimand_expr,
control_value,
treatment_value,
conditional_estimates=None,
**kwargs,
):
self.value = estimate
self.target_estimand = target_estimand
self.realized_estimand_expr = realized_estimand_expr
self.control_value = control_value
self.treatment_value = treatment_value
self.conditional_estimates = conditional_estimates
self.params = kwargs
if self.params is not None:
for key, value in self.params.items():
setattr(self, key, value)
self.effect_strength = None
def add_estimator(self, estimator_instance):
self.estimator = estimator_instance
def add_effect_strength(self, strength_dict):
self.effect_strength = strength_dict
def add_params(self, **kwargs):
self.params.update(kwargs)
def get_confidence_intervals(self, confidence_level=None, method=None, **kwargs):
"""Get confidence intervals of the obtained estimate.
By default, this is done with the help of bootstrapped confidence intervals
but can be overridden if the specific estimator implements other methods of estimating confidence intervals.
If the method provided is not bootstrap, this function calls the implementation of the specific estimator.
:param method: Method for estimating confidence intervals.
:param confidence_level: The confidence level of the confidence intervals of the estimate.
:param kwargs: Other optional args to be passed to the CI method.
:returns: The obtained confidence interval.
"""
confidence_intervals = self.estimator.estimate_confidence_intervals(
estimate_value=self.value, confidence_level=confidence_level, method=method, **kwargs
)
return confidence_intervals
def get_standard_error(self, method=None, **kwargs):
"""Get standard error of the obtained estimate.
By default, this is done with the help of bootstrapped standard errors
but can be overridden if the specific estimator implements other methods of estimating standard error.
If the method provided is not bootstrap, this function calls the implementation of the specific estimator.
:param method: Method for computing the standard error.
:param kwargs: Other optional parameters to be passed to the estimating method.
:returns: Standard error of the causal estimate.
"""
std_error = self.estimator.estimate_std_error(method=method, **kwargs)
return std_error
def test_stat_significance(self, method=None, **kwargs):
"""Test statistical significance of the estimate obtained.
By default, uses resampling to create a non-parametric significance test.
Individual child estimators can implement different methods.
If the method name is different from "bootstrap", this function calls the
implementation of the child estimator.
:param method: Method for checking statistical significance
:param kwargs: Other optional parameters to be passed to the estimating method.
:returns: p-value from the significance test
"""
signif_results = self.estimator.test_significance(self.value, method=method, **kwargs)
return {"p_value": signif_results["p_value"]}
def estimate_conditional_effects(
self, effect_modifiers=None, num_quantiles=CausalEstimator.NUM_QUANTILES_TO_DISCRETIZE_CONT_COLS
):
"""Estimate treatment effect conditioned on given variables.
If a numeric effect modifier is provided, it is discretized into quantile bins. If you would like a custom discretization, you can do so yourself: create a new column containing the discretized effect modifier and then include that column's name in the effect_modifier_names argument.
:param effect_modifiers: Names of effect modifier variables over which the conditional effects will be estimated. If not provided, defaults to the effect modifiers specified during creation of the CausalEstimator object.
:param num_quantiles: The number of quantiles into which a numeric effect modifier variable is discretized. Does not affect any categorical effect modifiers.
:returns: A (multi-index) dataframe that provides separate effects for each value of the (discretized) effect modifiers.
"""
return self.estimator._estimate_conditional_effects(
self.estimator._estimate_effect_fn, effect_modifiers, num_quantiles
)
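# Hedged usage sketch (illustrative only; "age" is a placeholder column):
#   cond_df = estimate.estimate_conditional_effects(["age"], num_quantiles=4)
# returns a (multi-index) dataframe with one effect per discretized "age" bin.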
def interpret(self, method_name=None, **kwargs):
"""Interpret the causal estimate.
:param method_name: Method used (string) or a list of methods. If None, then the default for the specific estimator is used.
:param kwargs: Optional parameters that are directly passed to the interpreter method.
:returns: None
"""
if method_name is None:
method_name = self.estimator.interpret_method
method_name_arr = parse_state(method_name)
for method in method_name_arr:
interpreter = interpreters.get_class_object(method)
interpreter(self, **kwargs).interpret()
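# Hedged usage sketch (illustrative only):
#   estimate.interpret(method_name="textual_effect_interpreter")
# resolves the name through dowhy.interpreters.get_class_object and runs the
# corresponding interpreter on this estimate.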
def __str__(self):
s = "*** Causal Estimate ***\n"
# No estimand was identified (identification failed)
if self.target_estimand is None:
return "Estimation failed! No relevant identified estimand available for this estimation method."
s += "\n## Identified estimand\n{0}".format(self.target_estimand.__str__(only_target_estimand=True))
s += "\n## Realized estimand\n{0}".format(self.realized_estimand_expr)
if hasattr(self, "estimator"):
s += "\nTarget units: {0}\n".format(self.estimator.target_units_tostr())
s += "\n## Estimate\n"
s += "Mean value: {0}\n".format(self.value)
s += ""
if hasattr(self, "cate_estimates"):
s += "Effect estimates: {0}\n".format(self.cate_estimates)
if hasattr(self, "estimator"):
if self.estimator._significance_test:
s += "p-value: {0}\n".format(self.estimator.signif_results_tostr(self.test_stat_significance()))
if self.estimator._confidence_intervals:
s += "{0}% confidence interval: {1}\n".format(
100 * self.estimator.confidence_level, self.get_confidence_intervals()
)
if self.conditional_estimates is not None:
s += "### Conditional Estimates\n"
s += str(self.conditional_estimates)
if self.effect_strength is not None:
s += "\n## Effect Strength\n"
s += "Change in outcome attributable to treatment: {}\n".format(self.effect_strength["fraction-effect"])
# s += "Variance in outcome explained by treatment: {}\n".format(self.effect_strength["r-squared"])
return s
class RealizedEstimand(object):
def __init__(self, identified_estimand, estimator_name):
self.treatment_variable = identified_estimand.treatment_variable
self.outcome_variable = identified_estimand.outcome_variable
self.backdoor_variables = identified_estimand.get_backdoor_variables()
self.instrumental_variables = identified_estimand.instrumental_variables
self.estimand_type = identified_estimand.estimand_type
self.estimand_expression = None
self.assumptions = None
self.estimator_name = estimator_name
def update_assumptions(self, estimator_assumptions):
self.assumptions = estimator_assumptions
def update_estimand_expression(self, estimand_expression):
self.estimand_expression = estimand_expression
def __str__(self):
s = "Realized estimand: {0}\n".format(self.estimator_name)
s += "Realized estimand type: {0}\n".format(self.estimand_type)
s += "Estimand expression:\n{0}\n".format(sp.pretty(self.estimand_expression))
j = 1
for ass_name, ass_str in self.assumptions.items():
s += "Estimand assumption {0}, {1}: {2}\n".format(j, ass_name, ass_str)
j += 1
return s
| import copy
import logging
from collections import namedtuple
from typing import Dict, List, Optional, Union
import numpy as np
import pandas as pd
import sympy as sp
from sklearn.utils import resample
import dowhy.interpreters as interpreters
from dowhy.causal_identifier.identified_estimand import IdentifiedEstimand
from dowhy.utils.api import parse_state
logger = logging.getLogger(__name__)
class CausalEstimator:
"""Base class for an estimator of causal effect.
Subclasses implement different estimation methods. All estimation methods are in the package "dowhy.causal_estimators"
"""
# The default number of simulations for statistical testing
DEFAULT_NUMBER_OF_SIMULATIONS_STAT_TEST = 1000
# The default number of simulations to obtain confidence intervals
DEFAULT_NUMBER_OF_SIMULATIONS_CI = 100
# The portion of the total size that should be taken each time to find the confidence intervals
# 1 is the recommended value
# https://ocw.mit.edu/courses/mathematics/18-05-introduction-to-probability-and-statistics-spring-2014/readings/MIT18_05S14_Reading24.pdf
# https://projecteuclid.org/download/pdf_1/euclid.ss/1032280214
DEFAULT_SAMPLE_SIZE_FRACTION = 1
# The default Confidence Level
DEFAULT_CONFIDENCE_LEVEL = 0.95
# Number of quantiles to discretize continuous columns, for applying groupby
NUM_QUANTILES_TO_DISCRETIZE_CONT_COLS = 5
# Prefix to add to temporary categorical variables created after discretization
TEMP_CAT_COLUMN_PREFIX = "__categorical__"
DEFAULT_NOTIMPLEMENTEDERROR_MSG = "not yet implemented for {0}. If you would like this to be implemented in the next version, please raise an issue at https://github.com/microsoft/dowhy/issues"
BootstrapEstimates = namedtuple("BootstrapEstimates", ["estimates", "params"])
DEFAULT_INTERPRET_METHOD = ["textual_effect_interpreter"]
# std args to be removed from locals() before being passed to args_dict
_STD_INIT_ARGS = ("self", "__class__", "args", "kwargs")
def __init__(
self,
identified_estimand: IdentifiedEstimand,
test_significance: bool = False,
evaluate_effect_strength: bool = False,
confidence_intervals: bool = False,
num_null_simulations: int = DEFAULT_NUMBER_OF_SIMULATIONS_STAT_TEST,
num_simulations: int = DEFAULT_NUMBER_OF_SIMULATIONS_CI,
sample_size_fraction: float = DEFAULT_SAMPLE_SIZE_FRACTION,
confidence_level: float = DEFAULT_CONFIDENCE_LEVEL,
need_conditional_estimates: Union[bool, str] = "auto",
num_quantiles_to_discretize_cont_cols: int = NUM_QUANTILES_TO_DISCRETIZE_CONT_COLS,
**_,
):
"""Initializes an estimator with data and names of relevant variables.
This method is called from the constructors of its child classes.
:param identified_estimand: probability expression
representing the target identified estimand to estimate.
:param test_significance: Binary flag or a string indicating whether to test significance and by which method. All estimators support test_significance="bootstrap" that estimates a p-value for the obtained estimate using the bootstrap method. Individual estimators can override this to support custom testing methods. The bootstrap method supports an optional parameter, num_null_simulations. If False, no testing is done. If True, significance of the estimate is tested using the custom method if available, otherwise by bootstrap.
:param evaluate_effect_strength: (Experimental) whether to evaluate the strength of effect
:param confidence_intervals: Binary flag or a string indicating whether the confidence intervals should be computed and which method should be used. All methods support estimation of confidence intervals using the bootstrap method by using the parameter confidence_intervals="bootstrap". The bootstrap method takes in two arguments (num_simulations and sample_size_fraction) that can be optionally specified in the params dictionary. Estimators may also override this to implement their own confidence interval method. If this parameter is False, no confidence intervals are computed. If True, confidence intervals are computed by the estimator's specific method if available, otherwise through bootstrap.
:param num_null_simulations: The number of simulations for testing the
statistical significance of the estimator
:param num_simulations: The number of simulations for finding the
confidence interval (and/or standard error) for an estimate
:param sample_size_fraction: The fraction of the data resampled by the
bootstrap estimator
:param confidence_level: The confidence level of the confidence
interval estimate
:param need_conditional_estimates: Boolean flag indicating whether
conditional estimates should be computed. Defaults to True if
there are effect modifiers in the graph
:param num_quantiles_to_discretize_cont_cols: The number of quantiles
into which a numeric effect modifier is split, to enable
estimation of conditional treatment effect over it.
:param kwargs: (optional) Additional estimator-specific parameters (ignored by the base class; consumed by subclasses)
:returns: an instance of the estimator class.
"""
self._target_estimand = identified_estimand
self._significance_test = test_significance
self._effect_strength_eval = evaluate_effect_strength
self._confidence_intervals = confidence_intervals
# Setting the default interpret method
self.interpret_method = CausalEstimator.DEFAULT_INTERPRET_METHOD
self.logger = logging.getLogger(__name__)
# Check if some parameters were set, otherwise set to default values
self.num_null_simulations = num_null_simulations
self.num_simulations = num_simulations
self.sample_size_fraction = sample_size_fraction
self.confidence_level = confidence_level
self.num_quantiles_to_discretize_cont_cols = num_quantiles_to_discretize_cont_cols
# Estimate conditional estimates by default
self.need_conditional_estimates = need_conditional_estimates
self._bootstrap_estimates = None
self._bootstrap_null_estimates = None
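# The two bootstrap caches above are filled lazily by the confidence-interval
# and significance helpers, and regenerated whenever
# is_bootstrap_parameter_changed() detects that the parameters differ.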
def _set_data(self, data: pd.DataFrame, treatment_name: List[str], outcome_name: List[str]):
"""Sets the data for the estimator
:param data: data frame containing the data
:param treatment_name: name of the treatment variable
:param outcome_name: name of the outcome variable
"""
self._data = data
self._treatment_name = treatment_name
self._outcome_name = outcome_name[0]
self._treatment = self._data[self._treatment_name]
self._outcome = self._data[self._outcome_name]
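# With the fit() refactor introduced in this PR, subclasses call _set_data()
# (and _set_effect_modifiers() below) from their fit() implementations, so
# data-dependent state is initialized only once data is actually supplied.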
def _set_effect_modifiers(self, effect_modifier_names: Optional[List[str]] = None):
"""Sets the effect modifiers for the estimator
Updates need_conditional_estimates according to whether effect modifiers are present.
:param effect_modifier_names: Variables on which to compute separate
effects, or return a heterogeneous effect function. Not all
methods support this currently.
"""
self._effect_modifiers = effect_modifier_names
if effect_modifier_names is not None:
self._effect_modifier_names = [cname for cname in effect_modifier_names if cname in self._data.columns]
if len(self._effect_modifier_names) > 0:
self._effect_modifiers = self._data[self._effect_modifier_names]
self._effect_modifiers = pd.get_dummies(self._effect_modifiers, drop_first=True)
self.logger.debug("Effect modifiers: " + ",".join(self._effect_modifier_names))
else:
self._effect_modifier_names = []
else:
self._effect_modifier_names = []
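# Resolve the "auto" default: conditional estimates are computed only when
# at least one effect modifier column was actually found in the data.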
self.need_conditional_estimates = (
self.need_conditional_estimates
if self.need_conditional_estimates != "auto"
else (self._effect_modifier_names and len(self._effect_modifier_names) > 0)
)
def _set_identified_estimand(self, new_identified_estimand):
"""Method used internally to change the target estimand (required by some refuters)
:param new_identified_estimand: The new target_estimand to use
"""
self._target_estimand = new_identified_estimand
def get_new_estimator_object(
self,
identified_estimand,
test_significance=False,
evaluate_effect_strength=False,
confidence_intervals=None,
):
"""Create a new estimator of the same type as the one passed in the estimate argument.
Creates a new object with the identified_estimand
:param identified_estimand: IdentifiedEstimand
An instance of the identified estimand class that provides the information with
respect to which causal pathways are employed when the treatment effects the outcome
:returns: A new instance of the same estimator class that had generated the given estimate.
"""
new_estimator = copy.deepcopy(self)
new_estimator._target_estimand = identified_estimand
new_estimator._significance_test = test_significance
new_estimator._effect_strength_eval = evaluate_effect_strength
new_estimator._confidence_intervals = (
self._confidence_intervals if confidence_intervals is None else confidence_intervals
)
return new_estimator
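# Hedged usage sketch (mirrors _generate_bootstrap_estimates below):
#   new_est = estimator.get_new_estimator_object(estimand)
#   new_est.fit(df, estimand.treatment_variable, estimand.outcome_variable)
#   effect = new_est.estimate_effect(treatment_value=1, control_value=0)
# where `estimator`, `estimand` and `df` are the caller's own objects.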
def estimate_effect_naive(self):
# TODO Only works for binary treatment
df_withtreatment = self._data.loc[self._data[self._treatment_name] == 1]
df_notreatment = self._data.loc[self._data[self._treatment_name] == 0]
est = np.mean(df_withtreatment[self._outcome_name]) - np.mean(df_notreatment[self._outcome_name])
return CausalEstimate(est, None, None, control_value=0, treatment_value=1)
def _estimate_effect_fn(self, data_df):
"""Function used in conditional effect estimation. This function is to be overridden by each child estimator.
The overridden function should take in a dataframe as input and return the estimate for that data.
"""
raise NotImplementedError(
("Conditional treatment effects are " + CausalEstimator.DEFAULT_NOTIMPLEMENTEDERROR_MSG).format(
self.__class__
)
)
def _estimate_conditional_effects(self, estimate_effect_fn, effect_modifier_names=None, num_quantiles=None):
"""Estimate conditional treatment effects. Common method for all estimators that utilizes a specific estimate_effect_fn implemented by each child estimator.
If a numeric effect modifier is provided, it is discretized into quantile bins. If you would like a custom discretization, you can do so yourself: create a new column containing the discretized effect modifier and then include that column's name in the effect_modifier_names argument.
:param estimate_effect_fn: Function that has a single parameter (a data frame) and returns the treatment effect estimate on that data.
:param effect_modifier_names: Names of effect modifier variables over which the conditional effects will be estimated. If not provided, defaults to the effect modifiers specified during creation of the CausalEstimator object.
:param num_quantiles: The number of quantiles into which a numeric effect modifier variable is discretized. Does not affect any categorical effect modifiers.
:returns: A (multi-index) dataframe that provides separate effects for each value of the (discretized) effect modifiers.
"""
# Defaulting to class default values if parameters are not provided
if effect_modifier_names is None:
effect_modifier_names = self._effect_modifier_names
if num_quantiles is None:
num_quantiles = self.num_quantiles_to_discretize_cont_cols
# Checking that there is at least one effect modifier
if not effect_modifier_names:
raise ValueError("At least one effect modifier should be specified to compute conditional effects.")
# Making sure that effect_modifier_names is a list
effect_modifier_names = parse_state(effect_modifier_names)
if not all(em in self._effect_modifier_names for em in effect_modifier_names):
self.logger.warning(
"At least one of the provided effect modifiers was not included while fitting the estimator. You may get incorrect results. To resolve, fit the estimator again by providing the updated effect modifiers in estimate_effect()."
)
# Making a copy since we are going to be changing effect modifier names
effect_modifier_names = effect_modifier_names.copy()
prefix = CausalEstimator.TEMP_CAT_COLUMN_PREFIX
# For every numeric effect modifier, adding a temp categorical column
for i in range(len(effect_modifier_names)):
em = effect_modifier_names[i]
if pd.api.types.is_numeric_dtype(self._data[em].dtypes):
self._data[prefix + str(em)] = pd.qcut(self._data[em], num_quantiles, duplicates="drop")
effect_modifier_names[i] = prefix + str(em)
# Grouping by effect modifiers and computing effect separately
by_effect_mods = self._data.groupby(effect_modifier_names)
cond_est_fn = lambda x: self._do(self._treatment_value, x) - self._do(self._control_value, x)
conditional_estimates = by_effect_mods.apply(estimate_effect_fn)
# Deleting the temporary categorical columns
for em in effect_modifier_names:
if em.startswith(prefix):
self._data.pop(em)
return conditional_estimates
def _do(self, x, data_df=None):
raise NotImplementedError(
("Do-operator is " + CausalEstimator.DEFAULT_NOTIMPLEMENTEDERROR_MSG).format(self.__class__)
)
def do(self, x, data_df=None):
"""Method that implements the do-operator.
Given a value x for the treatment, returns the expected value of the outcome when the treatment is intervened to a value x.
:param x: Value of the treatment
:param data_df: Data on which the do-operator is to be applied.
:returns: Value of the outcome when treatment is intervened/set to x.
"""
est = self._do(x, data_df)
return est
def construct_symbolic_estimator(self, estimand):
raise NotImplementedError(("Symbolic estimator string is ").format(self.__class__))
def _generate_bootstrap_estimates(self, num_bootstrap_simulations, sample_size_fraction):
"""Helper function to generate causal estimates over bootstrapped samples.
:param num_bootstrap_simulations: Number of simulations for the bootstrap method.
:param sample_size_fraction: Fraction of the dataset to be resampled.
:returns: A collections.namedtuple containing a list of bootstrapped estimates and a dictionary containing parameters used for the bootstrap.
"""
# The array that stores the results of all estimations
simulation_results = np.zeros(num_bootstrap_simulations)
# Find the sample size as a proportion of the population size
sample_size = int(sample_size_fraction * len(self._data))
if sample_size > len(self._data):
self.logger.warning("WARN: The sample size is greater than the data being sampled")
self.logger.info("INFO: The sample size: {}".format(sample_size))
self.logger.info("INFO: The number of simulations: {}".format(num_bootstrap_simulations))
# Perform the set number of simulations
for index in range(num_bootstrap_simulations):
new_data = resample(self._data, n_samples=sample_size)
new_estimator = self.get_new_estimator_object(
self._target_estimand,
# reuse the same estimand; data is supplied via fit() below
test_significance=False,
evaluate_effect_strength=False,
confidence_intervals=False,
)
new_estimator.fit(
new_data,
self._target_estimand.treatment_variable,
self._target_estimand.outcome_variable,
effect_modifier_names=self._effect_modifier_names,
)
new_effect = new_estimator.estimate_effect(
treatment_value=self._treatment_value,
control_value=self._control_value,
target_units=self._target_units,
)
simulation_results[index] = new_effect.value
estimates = CausalEstimator.BootstrapEstimates(
simulation_results,
{"num_simulations": num_bootstrap_simulations, "sample_size_fraction": sample_size_fraction},
)
return estimates
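    # The returned namedtuple can be inspected directly; a sketch, assuming this
    # is called from within an already-fitted subclass instance:
    #   >>> boots = self._generate_bootstrap_estimates(100, 1.0)
    #   >>> boots.estimates.mean(), boots.params["num_simulations"]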
def _estimate_confidence_intervals_with_bootstrap(
self, estimate_value, confidence_level=None, num_simulations=None, sample_size_fraction=None
):
"""
Method to compute confidence interval using bootstrapped sampling.
:param estimate_value: obtained estimate's value
:param confidence_level: The level for which to compute CI (e.g., 95% confidence level translates to confidence_level=0.95)
:param num_simulations: The number of simulations to be performed to get the bootstrap confidence intervals.
:param sample_size_fraction: The fraction of the dataset to be resampled.
:returns: confidence interval at the specified level.
For more details on bootstrap or resampling statistics, refer to the following links:
https://ocw.mit.edu/courses/mathematics/18-05-introduction-to-probability-and-statistics-spring-2014/readings/MIT18_05S14_Reading24.pdf
https://projecteuclid.org/download/pdf_1/euclid.ss/1032280214
"""
# Using class default parameters if not specified
if num_simulations is None:
num_simulations = self.num_simulations
if sample_size_fraction is None:
sample_size_fraction = self.sample_size_fraction
# Checking if bootstrap_estimates are already computed
if self._bootstrap_estimates is None:
self._bootstrap_estimates = self._generate_bootstrap_estimates(num_simulations, sample_size_fraction)
elif CausalEstimator.is_bootstrap_parameter_changed(self._bootstrap_estimates.params, locals()):
            # Check whether any bootstrap parameter changed since the previous run
self._bootstrap_estimates = self._generate_bootstrap_estimates(num_simulations, sample_size_fraction)
# Now use the data obtained from the simulations to get the value of the confidence estimates
bootstrap_estimates = self._bootstrap_estimates.estimates
# Get the variations of each bootstrap estimate and sort
bootstrap_variations = [bootstrap_estimate - estimate_value for bootstrap_estimate in bootstrap_estimates]
sorted_bootstrap_variations = np.sort(bootstrap_variations)
# Now we take the (1- p)th and the (p)th variations, where p is the chosen confidence level
upper_bound_index = int((1 - confidence_level) * len(sorted_bootstrap_variations))
lower_bound_index = int(confidence_level * len(sorted_bootstrap_variations))
# Get the lower and upper bounds by subtracting the variations from the estimate
lower_bound = estimate_value - sorted_bootstrap_variations[lower_bound_index]
upper_bound = estimate_value - sorted_bootstrap_variations[upper_bound_index]
return lower_bound, upper_bound
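    # The pivot-style interval above is equivalent to the following standalone
    # sketch (made-up numbers, NumPy only):
    #   >>> import numpy as np
    #   >>> est, boots = 2.0, np.array([1.7, 1.9, 2.0, 2.1, 2.4])
    #   >>> var = np.sort(boots - est)
    #   >>> lo = est - var[int(0.95 * len(var))]        # 1.6
    #   >>> hi = est - var[int((1 - 0.95) * len(var))]  # 2.3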
def _estimate_confidence_intervals(self, confidence_level=None, method=None, **kwargs):
"""
        This method is to be overridden by the child classes, so that they
can run a confidence interval estimation method suited to the specific
causal estimator.
"""
raise NotImplementedError(
(
"This method for estimating confidence intervals is "
+ CausalEstimator.DEFAULT_NOTIMPLEMENTEDERROR_MSG
+ " Meanwhile, you can try the bootstrap method (method='bootstrap') to estimate confidence intervals."
).format(self.__class__)
)
def estimate_confidence_intervals(self, estimate_value, confidence_level=None, method=None, **kwargs):
"""Find the confidence intervals corresponding to any estimator
By default, this is done with the help of bootstrapped confidence intervals
but can be overridden if the specific estimator implements other methods of estimating confidence intervals.
If the method provided is not bootstrap, this function calls the implementation of the specific estimator.
:param estimate_value: obtained estimate's value
:param method: Method for estimating confidence intervals.
:param confidence_level: The confidence level of the confidence intervals of the estimate.
:param kwargs: Other optional args to be passed to the CI method.
:returns: The obtained confidence interval.
"""
if method is None:
if self._confidence_intervals:
method = self._confidence_intervals # this is either True or methodname
else:
method = "default"
confidence_intervals = None
if confidence_level is None:
confidence_level = self.confidence_level
if method == "default" or method is True: # user has not provided any method
try:
confidence_intervals = self._estimate_confidence_intervals(confidence_level, method=method, **kwargs)
except NotImplementedError:
confidence_intervals = self._estimate_confidence_intervals_with_bootstrap(
estimate_value, confidence_level, **kwargs
)
else:
if method == "bootstrap":
confidence_intervals = self._estimate_confidence_intervals_with_bootstrap(
estimate_value, confidence_level, **kwargs
)
else:
confidence_intervals = self._estimate_confidence_intervals(confidence_level, method=method, **kwargs)
return confidence_intervals
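    # Usage sketch (variable names assumed): with a fitted estimator `est` and a
    # point estimate `val`, a bootstrap interval can be requested explicitly:
    #   >>> lb, ub = est.estimate_confidence_intervals(val, confidence_level=0.95, method="bootstrap")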
def _estimate_std_error_with_bootstrap(self, num_simulations=None, sample_size_fraction=None):
"""Compute standard error using the bootstrap method. Standard error
and confidence intervals use the same parameter num_simulations for
the number of bootstrap simulations.
:param num_simulations: Number of bootstrapped samples.
:param sample_size_fraction: Fraction of data to be resampled.
:returns: Standard error of the obtained estimate.
"""
# Use existing params, if new user defined params are not present
if num_simulations is None:
num_simulations = self.num_simulations
if sample_size_fraction is None:
sample_size_fraction = self.sample_size_fraction
# Checking if bootstrap_estimates are already computed
if self._bootstrap_estimates is None:
self._bootstrap_estimates = self._generate_bootstrap_estimates(num_simulations, sample_size_fraction)
elif CausalEstimator.is_bootstrap_parameter_changed(self._bootstrap_estimates.params, locals()):
            # Check whether any parameter changed since the previous std error estimate
self._bootstrap_estimates = self._generate_bootstrap_estimates(num_simulations, sample_size_fraction)
std_error = np.std(self._bootstrap_estimates.estimates)
return std_error
def _estimate_std_error(self, method=None, **kwargs):
"""
        This method is to be overridden by the child classes, so that they
can run a standard error estimation method suited to the specific
causal estimator.
"""
raise NotImplementedError(
(
"This method for estimating standard errors is "
+ CausalEstimator.DEFAULT_NOTIMPLEMENTEDERROR_MSG
+ " Meanwhile, you can try the bootstrap method (method='bootstrap') to estimate standard errors."
).format(self.__class__)
)
def estimate_std_error(self, method=None, **kwargs):
"""Compute standard error of an obtained causal estimate.
:param method: Method for computing the standard error.
:param kwargs: Other optional parameters to be passed to the estimating method.
:returns: Standard error of the causal estimate.
"""
if method is None:
if self._confidence_intervals:
method = self._confidence_intervals
else:
method = "default"
std_error = None
if method == "default" or method is True: # user has not provided any method
try:
std_error = self._estimate_std_error(method, **kwargs)
except NotImplementedError:
std_error = self._estimate_std_error_with_bootstrap(**kwargs)
else:
if method == "bootstrap":
std_error = self._estimate_std_error_with_bootstrap(**kwargs)
else:
std_error = self._estimate_std_error(method, **kwargs)
return std_error
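    # Usage sketch (names assumed); extra kwargs such as num_simulations are
    # forwarded to the bootstrap routine:
    #   >>> se = est.estimate_std_error(method="bootstrap", num_simulations=200)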
def _test_significance_with_bootstrap(self, estimate_value, num_null_simulations=None):
"""Test statistical significance of an estimate using the bootstrap method.
:param estimate_value: Obtained estimate's value
:param num_null_simulations: Number of simulations for the null hypothesis
:returns: p-value of the statistical significance test.
"""
# Use existing params, if new user defined params are not present
if num_null_simulations is None:
num_null_simulations = self.num_null_simulations
do_retest = self._bootstrap_null_estimates is None or CausalEstimator.is_bootstrap_parameter_changed(
self._bootstrap_null_estimates.params, locals()
)
if do_retest:
null_estimates = np.zeros(num_null_simulations)
for i in range(num_null_simulations):
new_outcome = np.random.permutation(self._outcome)
new_data = self._data.assign(dummy_outcome=new_outcome)
# self._outcome = self._data["dummy_outcome"]
new_estimator = self.get_new_estimator_object(
self._target_estimand,
test_significance=False,
evaluate_effect_strength=False,
confidence_intervals=False,
)
new_estimator.fit(
data=new_data,
treatment_name=self._target_estimand.treatment_variable,
outcome_name=("dummy_outcome",),
effect_modifier_names=self._effect_modifier_names,
)
new_effect = new_estimator.estimate_effect(
target_units=self._target_units,
)
null_estimates[i] = new_effect.value
self._bootstrap_null_estimates = CausalEstimator.BootstrapEstimates(
null_estimates, {"num_null_simulations": num_null_simulations, "sample_size_fraction": 1}
)
# Processing the null hypothesis estimates
sorted_null_estimates = np.sort(self._bootstrap_null_estimates.estimates)
self.logger.debug("Null estimates: {0}".format(sorted_null_estimates))
median_estimate = sorted_null_estimates[int(num_null_simulations / 2)]
# Doing a two-sided test
if estimate_value > median_estimate:
# Being conservative with the p-value reported
estimate_index = np.searchsorted(sorted_null_estimates, estimate_value, side="left")
p_value = 1 - (estimate_index / num_null_simulations)
        elif estimate_value <= median_estimate:
# Being conservative with the p-value reported
estimate_index = np.searchsorted(sorted_null_estimates, estimate_value, side="right")
p_value = estimate_index / num_null_simulations
# If the estimate_index is 0, it depends on the number of simulations
if p_value == 0:
p_value = (0, 1 / len(sorted_null_estimates)) # a tuple determining the range.
elif p_value == 1:
p_value = (1 - 1 / len(sorted_null_estimates), 1)
signif_dict = {"p_value": p_value}
return signif_dict
def _test_significance(self, estimate_value, method=None, **kwargs):
"""
        This method is to be overridden by the child classes, so that they
can run a significance test suited to the specific
causal estimator.
"""
raise NotImplementedError(
(
"This method for testing statistical significance is "
+ CausalEstimator.DEFAULT_NOTIMPLEMENTEDERROR_MSG
+ " Meanwhile, you can try the bootstrap method (method='bootstrap') to test statistical significance."
).format(self.__class__)
)
def test_significance(self, estimate_value, method=None, **kwargs):
"""Test statistical significance of obtained estimate.
By default, uses resampling to create a non-parametric significance test.
A general procedure. Individual child estimators can implement different methods.
If the method name is different from "bootstrap", this function calls the
implementation of the child estimator.
:param self: object instance of class Estimator
:param estimate_value: obtained estimate's value
:param method: Method for checking statistical significance
:returns: p-value from the significance test
"""
if method is None:
if self._significance_test:
method = self._significance_test # this is either True or methodname
else:
method = "default"
signif_dict = None
if method == "default" or method is True: # user has not provided any method
try:
signif_dict = self._test_significance(estimate_value, method, **kwargs)
except NotImplementedError:
signif_dict = self._test_significance_with_bootstrap(estimate_value, **kwargs)
else:
if method == "bootstrap":
signif_dict = self._test_significance_with_bootstrap(estimate_value, **kwargs)
else:
signif_dict = self._test_significance(estimate_value, method, **kwargs)
return signif_dict
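    # Usage sketch (names assumed); the returned dict carries the p-value, which
    # may be a (low, high) range when it falls outside the simulated resolution:
    #   >>> res = est.test_significance(val, method="bootstrap")
    #   >>> res["p_value"]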
def evaluate_effect_strength(self, estimate):
fraction_effect_explained = self._evaluate_effect_strength(estimate, method="fraction-effect")
# Need to test r-squared before supporting
# effect_r_squared = self._evaluate_effect_strength(estimate, method="r-squared")
strength_dict = {
"fraction-effect": fraction_effect_explained
# 'r-squared': effect_r_squared
}
return strength_dict
def _evaluate_effect_strength(self, estimate, method="fraction-effect"):
supported_methods = ["fraction-effect"]
if method not in supported_methods:
raise NotImplementedError("This method is not supported for evaluating effect strength")
if method == "fraction-effect":
naive_obs_estimate = self.estimate_effect_naive()
self.logger.debug(estimate.value, naive_obs_estimate.value)
fraction_effect_explained = estimate.value / naive_obs_estimate.value
return fraction_effect_explained
# elif method == "r-squared":
# outcome_mean = np.mean(self._outcome)
# total_variance = np.sum(np.square(self._outcome - outcome_mean))
# Assuming a linear model with one variable: the treatment
# Currently only works for continuous y
# causal_model = outcome_mean + estimate.value*self._treatment
# squared_residual = np.sum(np.square(self._outcome - causal_model))
# r_squared = 1 - (squared_residual/total_variance)
# return r_squared
else:
return None
def update_input(self, treatment_value, control_value, target_units):
self._control_value = control_value
self._treatment_value = treatment_value
self._target_units = target_units
@staticmethod
def is_bootstrap_parameter_changed(bootstrap_estimates_params, given_params):
"""Check whether parameters of the bootstrap have changed.
This is an efficiency method that checks if fresh resampling of the bootstrap samples is required.
Returns True if parameters have changed and resampling should be done again.
:param bootstrap_estimates_params: A dictionary of parameters for the current bootstrap samples
:param given_params: A dictionary of parameters passed by the user
:returns: A binary flag denoting whether the parameters are different.
"""
is_any_parameter_changed = False
for prm, val in bootstrap_estimates_params.items():
given_val = given_params.get(prm, None)
if given_val is not None and given_val != val:
is_any_parameter_changed = True
break
return is_any_parameter_changed
def target_units_tostr(self):
s = ""
if type(self._target_units) is str:
s += self._target_units
elif callable(self._target_units):
s += "Data subset defined by a function"
elif isinstance(self._target_units, pd.DataFrame):
s += "Data subset provided as a data frame"
return s
def signif_results_tostr(self, signif_results):
s = ""
pval = signif_results["p_value"]
if type(pval) is tuple:
s += "[{0}, {1}]".format(pval[0], pval[1])
else:
s += "{0}".format(pval)
return s
def estimate_effect(
data: pd.DataFrame,
treatment: Union[str, List[str]],
outcome: Union[str, List[str]],
identifier_name: str,
estimator: CausalEstimator,
control_value: int = 0,
treatment_value: int = 1,
target_units: str = "ate",
effect_modifiers: List[str] = None,
fit_estimator: bool = True,
method_params: Optional[Dict] = None,
):
"""Estimate the identified causal effect.
    You can also directly call any of the EconML estimation methods. The convention is "backdoor.econml.path-to-estimator-class". For example, for the double machine learning estimator ("DML" class) that is located inside "dml" module of EconML, you can use the method name, "backdoor.econml.dml.DML". CausalML estimators can also be called. See `this demo notebook <https://py-why.github.io/dowhy/example_notebooks/dowhy-conditional-treatment-effects.html>`_.
    :param data: data frame containing the data
    :param treatment: Name of the treatment
    :param outcome: Name of the outcome
    :param identifier_name: Name of the identification method (e.g., "backdoor") whose estimand, available in the estimator's target estimand, is to be estimated
:param estimator: Instance of a CausalEstimator to use
:param control_value: Value of the treatment in the control group, for effect estimation. If treatment is multi-variate, this can be a list.
:param treatment_value: Value of the treatment in the treated group, for effect estimation. If treatment is multi-variate, this can be a list.
:param target_units: (Experimental) The units for which the treatment effect should be estimated. This can be of three types. (1) a string for common specifications of target units (namely, "ate", "att" and "atc"), (2) a lambda function that can be used as an index for the data (pandas DataFrame), or (3) a new DataFrame that contains values of the effect_modifiers and effect will be estimated only for this new data.
    :param effect_modifiers: Names of effect modifier variables can be (optionally) specified here too, since they do not affect identification. If None, defaults to an empty list.
    :param fit_estimator: Boolean flag on whether to fit the estimator.
        Setting it to False is useful to estimate the effect on new data using a previously fitted estimator.
    :param method_params: Optional dictionary of estimation parameters; if it contains a "fit_params" entry, those keyword arguments are forwarded to the estimator's fit() method.
:returns: An instance of the CausalEstimate class, containing the causal effect estimate
and other method-dependent information
"""
if effect_modifiers is None:
effect_modifiers = []
treatment = parse_state(treatment)
outcome = parse_state(outcome)
causal_estimator_class = estimator.__class__
identified_estimand = estimator._target_estimand
identified_estimand.set_identifier_method(identifier_name)
if identified_estimand.no_directed_path:
logger.warning("No directed path from {0} to {1}.".format(treatment, outcome))
return CausalEstimate(
0, identified_estimand, None, control_value=control_value, treatment_value=treatment_value
)
# Check if estimator's target estimand is identified
elif identified_estimand.estimands[identifier_name] is None:
logger.error("No valid identified estimand available.")
return CausalEstimate(None, None, None, control_value=control_value, treatment_value=treatment_value)
if fit_estimator:
estimator.fit(
data=data,
treatment_name=treatment,
outcome_name=outcome,
effect_modifier_names=effect_modifiers,
**method_params["fit_params"] if "fit_params" in method_params else {},
)
estimate = estimator.estimate_effect(
treatment_value=treatment_value,
control_value=control_value,
target_units=target_units,
confidence_intervals=estimator._confidence_intervals,
)
# Store parameters inside estimate object for refutation methods
# TODO: This add_params needs to move to the estimator class
# inside estimate_effect and estimate_conditional_effect
estimate.add_params(
estimand_type=identified_estimand.estimand_type,
estimator_class=causal_estimator_class,
test_significance=estimator._significance_test,
evaluate_effect_strength=estimator._effect_strength_eval,
confidence_intervals=estimator._confidence_intervals,
target_units=target_units,
effect_modifiers=effect_modifiers,
)
if estimator._significance_test:
estimator.test_significance(estimate.value, method=estimator._significance_test)
if estimator._confidence_intervals:
estimator.estimate_confidence_intervals(
estimate.value, confidence_level=estimator.confidence_level, method=estimator._confidence_intervals
)
if estimator._effect_strength_eval:
effect_strength_dict = estimator.evaluate_effect_strength(estimate)
estimate.add_effect_strength(effect_strength_dict)
return estimate
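# Usage sketch of this functional API (the estimator class and the identified
# estimand below are illustrative assumptions, not prescriptions):
#   >>> from dowhy.causal_estimators.linear_regression_estimator import LinearRegressionEstimator
#   >>> estimator = LinearRegressionEstimator(identified_estimand)  # estimand from identify_effect()
#   >>> estimate = estimate_effect(
#   ...     data=df, treatment="T", outcome="Y",
#   ...     identifier_name="backdoor", estimator=estimator,
#   ...     control_value=0, treatment_value=1, target_units="ate",
#   ... )
#   >>> estimate.value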
class CausalEstimate:
"""Class for the estimate object that every causal estimator returns"""
def __init__(
self,
estimate,
target_estimand,
realized_estimand_expr,
control_value,
treatment_value,
conditional_estimates=None,
**kwargs,
):
self.value = estimate
self.target_estimand = target_estimand
self.realized_estimand_expr = realized_estimand_expr
self.control_value = control_value
self.treatment_value = treatment_value
self.conditional_estimates = conditional_estimates
self.params = kwargs
if self.params is not None:
for key, value in self.params.items():
setattr(self, key, value)
self.effect_strength = None
def add_estimator(self, estimator_instance):
self.estimator = estimator_instance
def add_effect_strength(self, strength_dict):
self.effect_strength = strength_dict
def add_params(self, **kwargs):
self.params.update(kwargs)
def get_confidence_intervals(self, confidence_level=None, method=None, **kwargs):
"""Get confidence intervals of the obtained estimate.
By default, this is done with the help of bootstrapped confidence intervals
but can be overridden if the specific estimator implements other methods of estimating confidence intervals.
If the method provided is not bootstrap, this function calls the implementation of the specific estimator.
:param method: Method for estimating confidence intervals.
:param confidence_level: The confidence level of the confidence intervals of the estimate.
:param kwargs: Other optional args to be passed to the CI method.
:returns: The obtained confidence interval.
"""
confidence_intervals = self.estimator.estimate_confidence_intervals(
estimate_value=self.value, confidence_level=confidence_level, method=method, **kwargs
)
return confidence_intervals
def get_standard_error(self, method=None, **kwargs):
"""Get standard error of the obtained estimate.
By default, this is done with the help of bootstrapped standard errors
but can be overridden if the specific estimator implements other methods of estimating standard error.
If the method provided is not bootstrap, this function calls the implementation of the specific estimator.
:param method: Method for computing the standard error.
:param kwargs: Other optional parameters to be passed to the estimating method.
:returns: Standard error of the causal estimate.
"""
std_error = self.estimator.estimate_std_error(method=method, **kwargs)
return std_error
def test_stat_significance(self, method=None, **kwargs):
"""Test statistical significance of the estimate obtained.
By default, uses resampling to create a non-parametric significance test.
Individual child estimators can implement different methods.
If the method name is different from "bootstrap", this function calls the
implementation of the child estimator.
:param method: Method for checking statistical significance
:param kwargs: Other optional parameters to be passed to the estimating method.
:returns: p-value from the significance test
"""
signif_results = self.estimator.test_significance(self.value, method=method, **kwargs)
return {"p_value": signif_results["p_value"]}
def estimate_conditional_effects(
self, effect_modifiers=None, num_quantiles=CausalEstimator.NUM_QUANTILES_TO_DISCRETIZE_CONT_COLS
):
"""Estimate treatment effect conditioned on given variables.
If a numeric effect modifier is provided, it is discretized into quantile bins. If you would like a custom discretization, you can do so yourself: create a new column containing the discretized effect modifier and then include that column's name in the effect_modifier_names argument.
        :param effect_modifiers: Names of effect modifier variables over which the conditional effects will be estimated. If not provided, defaults to the effect modifiers specified when fitting the estimator.
:param num_quantiles: The number of quantiles into which a numeric effect modifier variable is discretized. Does not affect any categorical effect modifiers.
:returns: A (multi-index) dataframe that provides separate effects for each value of the (discretized) effect modifiers.
"""
return self.estimator._estimate_conditional_effects(
self.estimator._estimate_effect_fn, effect_modifiers, num_quantiles
)
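    # Usage sketch (the effect modifier name "X0" is a placeholder): numeric
    # modifiers are binned into quantiles before grouping:
    #   >>> cond = estimate.estimate_conditional_effects(effect_modifiers=["X0"], num_quantiles=5)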
def interpret(self, method_name=None, **kwargs):
"""Interpret the causal estimate.
:param method_name: Method used (string) or a list of methods. If None, then the default for the specific estimator is used.
        :param kwargs: Optional parameters that are directly passed to the interpreter method.
:returns: None
"""
if method_name is None:
method_name = self.estimator.interpret_method
method_name_arr = parse_state(method_name)
for method in method_name_arr:
interpreter = interpreters.get_class_object(method)
interpreter(self, **kwargs).interpret()
def __str__(self):
s = "*** Causal Estimate ***\n"
# No estimand was identified (identification failed)
if self.target_estimand is None:
return "Estimation failed! No relevant identified estimand available for this estimation method."
s += "\n## Identified estimand\n{0}".format(self.target_estimand.__str__(only_target_estimand=True))
s += "\n## Realized estimand\n{0}".format(self.realized_estimand_expr)
if hasattr(self, "estimator"):
s += "\nTarget units: {0}\n".format(self.estimator.target_units_tostr())
s += "\n## Estimate\n"
s += "Mean value: {0}\n".format(self.value)
s += ""
if hasattr(self, "cate_estimates"):
s += "Effect estimates: {0}\n".format(self.cate_estimates)
if hasattr(self, "estimator"):
if self.estimator._significance_test:
s += "p-value: {0}\n".format(self.estimator.signif_results_tostr(self.test_stat_significance()))
if self.estimator._confidence_intervals:
s += "{0}% confidence interval: {1}\n".format(
100 * self.estimator.confidence_level, self.get_confidence_intervals()
)
if self.conditional_estimates is not None:
s += "### Conditional Estimates\n"
s += str(self.conditional_estimates)
if self.effect_strength is not None:
s += "\n## Effect Strength\n"
s += "Change in outcome attributable to treatment: {}\n".format(self.effect_strength["fraction-effect"])
# s += "Variance in outcome explained by treatment: {}\n".format(self.effect_strength["r-squared"])
return s
class RealizedEstimand(object):
def __init__(self, identified_estimand, estimator_name):
self.treatment_variable = identified_estimand.treatment_variable
self.outcome_variable = identified_estimand.outcome_variable
self.backdoor_variables = identified_estimand.get_backdoor_variables()
self.instrumental_variables = identified_estimand.instrumental_variables
self.estimand_type = identified_estimand.estimand_type
self.estimand_expression = None
self.assumptions = None
self.estimator_name = estimator_name
def update_assumptions(self, estimator_assumptions):
self.assumptions = estimator_assumptions
def update_estimand_expression(self, estimand_expression):
self.estimand_expression = estimand_expression
def __str__(self):
s = "Realized estimand: {0}\n".format(self.estimator_name)
s += "Realized estimand type: {0}\n".format(self.estimand_type)
s += "Estimand expression:\n{0}\n".format(sp.pretty(self.estimand_expression))
j = 1
for ass_name, ass_str in self.assumptions.items():
s += "Estimand assumption {0}, {1}: {2}\n".format(j, ass_name, ass_str)
j += 1
return s
| andresmor-ms | 11c4e0dafd6e824eb81ad14262457d954ae61468 | affe0952f4aba6845247355c171565510c2c1673 | That's a good idea, tbh I didn't like the `set_*` methods I created, but as I mentioned in a comment before I was focusing on getting the fit() to work correctly :) | andresmor-ms | 208 |
py-why/dowhy | 746 | Functional api/causal estimators | * Introduce `fit()` method to estimators.
* Refactor constructors to avoid using `*args` and `**kwargs` and have more explicit parameters.
* Refactor refuters and other parts of the code to use `fit()` and modify arguments to `estimate_effect()` | null | 2022-11-04 16:15:39+00:00 | 2022-12-03 17:07:53+00:00 | dowhy/causal_estimator.py |
import logging
from collections import namedtuple
from typing import Dict, List, Optional, Union
import numpy as np
import pandas as pd
import sympy as sp
from sklearn.utils import resample
import dowhy.interpreters as interpreters
from dowhy import causal_estimators
from dowhy.causal_graph import CausalGraph
from dowhy.causal_identifier.identified_estimand import IdentifiedEstimand
from dowhy.utils.api import parse_state
logger = logging.getLogger(__name__)
class CausalEstimator:
"""Base class for an estimator of causal effect.
Subclasses implement different estimation methods. All estimation methods are in the package "dowhy.causal_estimators"
"""
# The default number of simulations for statistical testing
DEFAULT_NUMBER_OF_SIMULATIONS_STAT_TEST = 1000
# The default number of simulations to obtain confidence intervals
DEFAULT_NUMBER_OF_SIMULATIONS_CI = 100
# The portion of the total size that should be taken each time to find the confidence intervals
# 1 is the recommended value
# https://ocw.mit.edu/courses/mathematics/18-05-introduction-to-probability-and-statistics-spring-2014/readings/MIT18_05S14_Reading24.pdf
# https://projecteuclid.org/download/pdf_1/euclid.ss/1032280214
DEFAULT_SAMPLE_SIZE_FRACTION = 1
# The default Confidence Level
DEFAULT_CONFIDENCE_LEVEL = 0.95
# Number of quantiles to discretize continuous columns, for applying groupby
NUM_QUANTILES_TO_DISCRETIZE_CONT_COLS = 5
# Prefix to add to temporary categorical variables created after discretization
TEMP_CAT_COLUMN_PREFIX = "__categorical__"
    DEFAULT_NOTIMPLEMENTEDERROR_MSG = "not yet implemented for {0}. If you would like this to be implemented in the next version, please raise an issue at https://github.com/microsoft/dowhy/issues"
BootstrapEstimates = namedtuple("BootstrapEstimates", ["estimates", "params"])
DEFAULT_INTERPRET_METHOD = ["textual_effect_interpreter"]
# std args to be removed from locals() before being passed to args_dict
_STD_INIT_ARGS = ("self", "__class__", "args", "kwargs")
def __init__(
self,
data,
identified_estimand,
treatment,
outcome,
control_value=0,
treatment_value=1,
test_significance=False,
evaluate_effect_strength=False,
confidence_intervals=False,
target_units=None,
effect_modifiers=None,
num_null_simulations=DEFAULT_NUMBER_OF_SIMULATIONS_STAT_TEST,
num_simulations=DEFAULT_NUMBER_OF_SIMULATIONS_CI,
sample_size_fraction=DEFAULT_SAMPLE_SIZE_FRACTION,
confidence_level=DEFAULT_CONFIDENCE_LEVEL,
need_conditional_estimates="auto",
num_quantiles_to_discretize_cont_cols=NUM_QUANTILES_TO_DISCRETIZE_CONT_COLS,
**kwargs,
):
"""Initializes an estimator with data and names of relevant variables.
This method is called from the constructors of its child classes.
:param data: data frame containing the data
:param identified_estimand: probability expression
representing the target identified estimand to estimate.
:param treatment: name of the treatment variable
:param outcome: name of the outcome variable
:param control_value: Value of the treatment in the control group, for effect estimation. If treatment is multi-variate, this can be a list.
:param treatment_value: Value of the treatment in the treated group, for effect estimation. If treatment is multi-variate, this can be a list.
:param test_significance: Binary flag or a string indicating whether to test significance and by which method. All estimators support test_significance="bootstrap" that estimates a p-value for the obtained estimate using the bootstrap method. Individual estimators can override this to support custom testing methods. The bootstrap method supports an optional parameter, num_null_simulations. If False, no testing is done. If True, significance of the estimate is tested using the custom method if available, otherwise by bootstrap.
:param evaluate_effect_strength: (Experimental) whether to evaluate the strength of effect
:param confidence_intervals: Binary flag or a string indicating whether the confidence intervals should be computed and which method should be used. All methods support estimation of confidence intervals using the bootstrap method by using the parameter confidence_intervals="bootstrap". The bootstrap method takes in two arguments (num_simulations and sample_size_fraction) that can be optionally specified in the params dictionary. Estimators may also override this to implement their own confidence interval method. If this parameter is False, no confidence intervals are computed. If True, confidence intervals are computed by the estimator's specific method if available, otherwise through bootstrap.
:param target_units: The units for which the treatment effect should be estimated. This can be a string for common specifications of target units (namely, "ate", "att" and "atc"). It can also be a lambda function that can be used as an index for the data (pandas DataFrame). Alternatively, it can be a new DataFrame that contains values of the effect_modifiers and effect will be estimated only for this new data.
:param effect_modifiers: Variables on which to compute separate
effects, or return a heterogeneous effect function. Not all
methods support this currently.
:param num_null_simulations: The number of simulations for testing the
statistical significance of the estimator
:param num_simulations: The number of simulations for finding the
            confidence interval (and/or standard error) for an estimate
:param sample_size_fraction: The size of the sample for the bootstrap
estimator
:param confidence_level: The confidence level of the confidence
interval estimate
:param need_conditional_estimates: Boolean flag indicating whether
conditional estimates should be computed. Defaults to True if
there are effect modifiers in the graph
:param num_quantiles_to_discretize_cont_cols: The number of quantiles
into which a numeric effect modifier is split, to enable
estimation of conditional treatment effect over it.
:param kwargs: (optional) Additional estimator-specific parameters
:returns: an instance of the estimator class.
"""
self._data = data
self._target_estimand = identified_estimand
# Currently estimation methods only support univariate treatment and outcome
self._treatment_name = treatment
self._outcome_name = outcome[0] # assuming one-dimensional outcome
self._control_value = control_value
self._treatment_value = treatment_value
self._significance_test = test_significance
self._effect_strength_eval = evaluate_effect_strength
self._target_units = target_units
self._effect_modifier_names = effect_modifiers
self._confidence_intervals = confidence_intervals
self._bootstrap_estimates = None # for confidence intervals and std error
self._bootstrap_null_estimates = None # for significance test
self._effect_modifiers = None
self.method_params = kwargs
# Setting the default interpret method
self.interpret_method = CausalEstimator.DEFAULT_INTERPRET_METHOD
self.logger = logging.getLogger(__name__)
# Setting treatment and outcome values
if self._data is not None:
self._treatment = self._data[self._treatment_name]
self._outcome = self._data[self._outcome_name]
if self._effect_modifier_names:
# only add the observed nodes
self._effect_modifier_names = [
cname for cname in self._effect_modifier_names if cname in self._data.columns
]
if len(self._effect_modifier_names) > 0:
self._effect_modifiers = self._data[self._effect_modifier_names]
self._effect_modifiers = pd.get_dummies(self._effect_modifiers, drop_first=True)
self.logger.debug("Effect modifiers: " + ",".join(self._effect_modifier_names))
else:
self._effect_modifier_names = None
# Check if some parameters were set, otherwise set to default values
self.num_null_simulations = num_null_simulations
self.num_simulations = num_simulations
self.sample_size_fraction = sample_size_fraction
self.confidence_level = confidence_level
self.num_quantiles_to_discretize_cont_cols = num_quantiles_to_discretize_cont_cols
# Estimate conditional estimates by default
self.need_conditional_estimates = (
need_conditional_estimates if need_conditional_estimates != "auto" else bool(self._effect_modifier_names)
)
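    # Construction sketch for a concrete subclass (the subclass name and column
    # names are hypothetical; subclasses forward these arguments via super()):
    #   >>> est = SomeEstimatorSubclass(df, identified_estimand, ["T"], ["Y"],
    #   ...                             control_value=0, treatment_value=1,
    #   ...                             confidence_intervals="bootstrap")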
@staticmethod
def get_estimator_object(new_data, identified_estimand, estimate):
"""Create a new estimator of the same type as the one passed in the estimate argument.
Creates a new object with new_data and the identified_estimand
:param new_data: np.ndarray, pd.Series, pd.DataFrame
The newly assigned data on which the estimator should run
:param identified_estimand: IdentifiedEstimand
An instance of the identified estimand class that provides the information with
respect to which causal pathways are employed when the treatment effects the outcome
:param estimate: CausalEstimate
It is an already existing estimate whose properties we wish to replicate
:returns: An instance of the same estimator class that had generated the given estimate.
"""
estimator_class = estimate.params["estimator_class"]
new_estimator = estimator_class(
new_data,
identified_estimand,
identified_estimand.treatment_variable,
identified_estimand.outcome_variable,
# names of treatment and outcome
control_value=estimate.control_value,
treatment_value=estimate.treatment_value,
test_significance=False,
evaluate_effect_strength=False,
confidence_intervals=estimate.params["confidence_intervals"],
target_units=estimate.params["target_units"],
effect_modifiers=estimate.params["effect_modifiers"],
**estimate.params["method_params"] if estimate.params["method_params"] is not None else {},
)
return new_estimator
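    # Usage sketch (names assumed): refuters use this to re-run the same
    # analysis on modified data, e.g. a bootstrap resample:
    #   >>> new_est = CausalEstimator.get_estimator_object(resampled_df, identified_estimand, estimate)
    #   >>> new_est.estimate_effect().value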
def _estimate_effect(self):
"""This method is to be overriden by the child classes, so that they can run the estimation technique of their choice"""
raise NotImplementedError(
("Main estimation method is " + CausalEstimator.DEFAULT_NOTIMPLEMENTEDERROR_MSG).format(self.__class__)
)
def estimate_effect(self):
"""Base estimation method that calls the estimate_effect method of its calling subclass.
Can optionally also test significance and estimate effect strength for any returned estimate.
:param self: object instance of class Estimator
        :returns: A CausalEstimate instance that contains point estimates of average and conditional effects. Based on the parameters provided, it optionally includes confidence intervals, standard errors, statistical significance and other statistical parameters.
"""
est = self._estimate_effect()
est.add_estimator(self)
if self._significance_test:
self.test_significance(est.value, method=self._significance_test)
if self._confidence_intervals:
self.estimate_confidence_intervals(
est.value, confidence_level=self.confidence_level, method=self._confidence_intervals
)
if self._effect_strength_eval:
effect_strength_dict = self.evaluate_effect_strength(est)
est.add_effect_strength(effect_strength_dict)
return est
def estimate_effect_naive(self):
# TODO Only works for binary treatment
df_withtreatment = self._data.loc[self._data[self._treatment_name] == 1]
df_notreatment = self._data.loc[self._data[self._treatment_name] == 0]
est = np.mean(df_withtreatment[self._outcome_name]) - np.mean(df_notreatment[self._outcome_name])
return CausalEstimate(est, None, None, control_value=0, treatment_value=1)
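    # Standalone sketch of the difference-in-means computed above (toy data):
    #   >>> import pandas as pd
    #   >>> d = pd.DataFrame({"t": [0, 0, 1, 1], "y": [1.0, 2.0, 3.0, 5.0]})
    #   >>> d.loc[d["t"] == 1, "y"].mean() - d.loc[d["t"] == 0, "y"].mean()
    #   2.5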
def _estimate_effect_fn(self, data_df):
"""Function used in conditional effect estimation. This function is to be overridden by each child estimator.
The overridden function should take in a dataframe as input and return the estimate for that data.
"""
raise NotImplementedError(
("Conditional treatment effects are " + CausalEstimator.DEFAULT_NOTIMPLEMENTEDERROR_MSG).format(
self.__class__
)
)
def _estimate_conditional_effects(self, estimate_effect_fn, effect_modifier_names=None, num_quantiles=None):
"""Estimate conditional treatment effects. Common method for all estimators that utilizes a specific estimate_effect_fn implemented by each child estimator.
If a numeric effect modifier is provided, it is discretized into quantile bins. If you would like a custom discretization, you can do so yourself: create a new column containing the discretized effect modifier and then include that column's name in the effect_modifier_names argument.
:param estimate_effect_fn: Function that has a single parameter (a data frame) and returns the treatment effect estimate on that data.
:param effect_modifier_names: Names of effect modifier variables over which the conditional effects will be estimated. If not provided, defaults to the effect modifiers specified during creation of the CausalEstimator object.
:param num_quantiles: The number of quantiles into which a numeric effect modifier variable is discretized. Does not affect any categorical effect modifiers.
:returns: A (multi-index) dataframe that provides separate effects for each value of the (discretized) effect modifiers.
"""
# Defaulting to class default values if parameters are not provided
if effect_modifier_names is None:
effect_modifier_names = self._effect_modifier_names
if num_quantiles is None:
num_quantiles = self.num_quantiles_to_discretize_cont_cols
# Checking that there is at least one effect modifier
if not effect_modifier_names:
raise ValueError("At least one effect modifier should be specified to compute conditional effects.")
# Making sure that effect_modifier_names is a list
effect_modifier_names = parse_state(effect_modifier_names)
if not all(em in self._effect_modifier_names for em in effect_modifier_names):
            self.logger.warning(
                "At least one of the provided effect modifiers was not included while fitting the estimator. You may get incorrect results. To resolve, fit the estimator again, passing the updated effect modifiers to fit()."
            )
# Making a copy since we are going to be changing effect modifier names
effect_modifier_names = effect_modifier_names.copy()
prefix = CausalEstimator.TEMP_CAT_COLUMN_PREFIX
# For every numeric effect modifier, adding a temp categorical column
for i in range(len(effect_modifier_names)):
em = effect_modifier_names[i]
if pd.api.types.is_numeric_dtype(self._data[em].dtypes):
self._data[prefix + str(em)] = pd.qcut(self._data[em], num_quantiles, duplicates="drop")
effect_modifier_names[i] = prefix + str(em)
# Grouping by effect modifiers and computing effect separately
by_effect_mods = self._data.groupby(effect_modifier_names)
conditional_estimates = by_effect_mods.apply(estimate_effect_fn)
# Deleting the temporary categorical columns
for em in effect_modifier_names:
if em.startswith(prefix):
self._data.pop(em)
return conditional_estimates
def _do(self, x, data_df=None):
raise NotImplementedError(
("Do-operator is " + CausalEstimator.DEFAULT_NOTIMPLEMENTEDERROR_MSG).format(self.__class__)
)
def do(self, x, data_df=None):
"""Method that implements the do-operator.
Given a value x for the treatment, returns the expected value of the outcome when the treatment is intervened to a value x.
:param x: Value of the treatment
:param data_df: Data on which the do-operator is to be applied.
:returns: Value of the outcome when treatment is intervened/set to x.
"""
est = self._do(x, data_df)
return est
def construct_symbolic_estimator(self, estimand):
raise NotImplementedError(("Symbolic estimator string is ").format(self.__class__))
def _generate_bootstrap_estimates(self, num_bootstrap_simulations, sample_size_fraction):
"""Helper function to generate causal estimates over bootstrapped samples.
:param num_bootstrap_simulations: Number of simulations for the bootstrap method.
:param sample_size_fraction: Fraction of the dataset to be resampled.
:returns: A collections.namedtuple containing a list of bootstrapped estimates and a dictionary containing parameters used for the bootstrap.
"""
# The array that stores the results of all estimations
simulation_results = np.zeros(num_bootstrap_simulations)
        # Compute the sample size as a fraction of the population size
sample_size = int(sample_size_fraction * len(self._data))
if sample_size > len(self._data):
self.logger.warning("WARN: The sample size is greater than the data being sampled")
self.logger.info("INFO: The sample size: {}".format(sample_size))
self.logger.info("INFO: The number of simulations: {}".format(num_bootstrap_simulations))
# Perform the set number of simulations
for index in range(num_bootstrap_simulations):
new_data = resample(self._data, n_samples=sample_size)
new_estimator = type(self)(
new_data,
self._target_estimand,
self._target_estimand.treatment_variable,
self._target_estimand.outcome_variable,
# names of treatment and outcome
treatment_value=self._treatment_value,
control_value=self._control_value,
test_significance=False,
evaluate_effect_strength=False,
confidence_intervals=False,
target_units=self._target_units,
effect_modifiers=self._effect_modifier_names,
**self.method_params,
)
new_effect = new_estimator.estimate_effect()
simulation_results[index] = new_effect.value
estimates = CausalEstimator.BootstrapEstimates(
simulation_results,
{"num_simulations": num_bootstrap_simulations, "sample_size_fraction": sample_size_fraction},
)
return estimates
def _estimate_confidence_intervals_with_bootstrap(
self, estimate_value, confidence_level=None, num_simulations=None, sample_size_fraction=None
):
"""
Method to compute confidence interval using bootstrapped sampling.
:param estimate_value: obtained estimate's value
:param confidence_level: The level for which to compute CI (e.g., 95% confidence level translates to confidence_level=0.95)
:param num_simulations: The number of simulations to be performed to get the bootstrap confidence intervals.
:param sample_size_fraction: The fraction of the dataset to be resampled.
:returns: confidence interval at the specified level.
For more details on bootstrap or resampling statistics, refer to the following links:
https://ocw.mit.edu/courses/mathematics/18-05-introduction-to-probability-and-statistics-spring-2014/readings/MIT18_05S14_Reading24.pdf
https://projecteuclid.org/download/pdf_1/euclid.ss/1032280214
"""
# Using class default parameters if not specified
if num_simulations is None:
num_simulations = self.num_simulations
if sample_size_fraction is None:
sample_size_fraction = self.sample_size_fraction
# Checking if bootstrap_estimates are already computed
if self._bootstrap_estimates is None:
self._bootstrap_estimates = self._generate_bootstrap_estimates(num_simulations, sample_size_fraction)
elif CausalEstimator.is_bootstrap_parameter_changed(self._bootstrap_estimates.params, locals()):
            # Check whether any bootstrap parameter changed since the previous run
self._bootstrap_estimates = self._generate_bootstrap_estimates(num_simulations, sample_size_fraction)
# Now use the data obtained from the simulations to get the value of the confidence estimates
bootstrap_estimates = self._bootstrap_estimates.estimates
# Get the variations of each bootstrap estimate and sort
bootstrap_variations = [bootstrap_estimate - estimate_value for bootstrap_estimate in bootstrap_estimates]
sorted_bootstrap_variations = np.sort(bootstrap_variations)
# Now we take the (1- p)th and the (p)th variations, where p is the chosen confidence level
upper_bound_index = int((1 - confidence_level) * len(sorted_bootstrap_variations))
lower_bound_index = int(confidence_level * len(sorted_bootstrap_variations))
# Get the lower and upper bounds by subtracting the variations from the estimate
lower_bound = estimate_value - sorted_bootstrap_variations[lower_bound_index]
upper_bound = estimate_value - sorted_bootstrap_variations[upper_bound_index]
return lower_bound, upper_bound
def _estimate_confidence_intervals(self, confidence_level=None, method=None, **kwargs):
"""
        This method is to be overridden by the child classes, so that they
can run a confidence interval estimation method suited to the specific
causal estimator.
"""
raise NotImplementedError(
(
"This method for estimating confidence intervals is "
+ CausalEstimator.DEFAULT_NOTIMPLEMENTEDERROR_MSG
+ " Meanwhile, you can try the bootstrap method (method='bootstrap') to estimate confidence intervals."
).format(self.__class__)
)
def estimate_confidence_intervals(self, estimate_value, confidence_level=None, method=None, **kwargs):
"""Find the confidence intervals corresponding to any estimator
By default, this is done with the help of bootstrapped confidence intervals
but can be overridden if the specific estimator implements other methods of estimating confidence intervals.
If the method provided is not bootstrap, this function calls the implementation of the specific estimator.
:param estimate_value: obtained estimate's value
:param method: Method for estimating confidence intervals.
:param confidence_level: The confidence level of the confidence intervals of the estimate.
:param kwargs: Other optional args to be passed to the CI method.
:returns: The obtained confidence interval.
"""
if method is None:
if self._confidence_intervals:
method = self._confidence_intervals # this is either True or methodname
else:
method = "default"
confidence_intervals = None
if confidence_level is None:
confidence_level = self.confidence_level
if method == "default" or method is True: # user has not provided any method
try:
confidence_intervals = self._estimate_confidence_intervals(confidence_level, method=method, **kwargs)
except NotImplementedError:
confidence_intervals = self._estimate_confidence_intervals_with_bootstrap(
estimate_value, confidence_level, **kwargs
)
else:
if method == "bootstrap":
confidence_intervals = self._estimate_confidence_intervals_with_bootstrap(
estimate_value, confidence_level, **kwargs
)
else:
confidence_intervals = self._estimate_confidence_intervals(confidence_level, method=method, **kwargs)
return confidence_intervals
def _estimate_std_error_with_bootstrap(self, num_simulations=None, sample_size_fraction=None):
"""Compute standard error using the bootstrap method. Standard error
and confidence intervals use the same parameter num_simulations for
the number of bootstrap simulations.
:param num_simulations: Number of bootstrapped samples.
:param sample_size_fraction: Fraction of data to be resampled.
:returns: Standard error of the obtained estimate.
"""
# Use existing params, if new user defined params are not present
if num_simulations is None:
num_simulations = self.num_simulations
if sample_size_fraction is None:
sample_size_fraction = self.sample_size_fraction
# Checking if bootstrap_estimates are already computed
if self._bootstrap_estimates is None:
self._bootstrap_estimates = self._generate_bootstrap_estimates(num_simulations, sample_size_fraction)
elif CausalEstimator.is_bootstrap_parameter_changed(self._bootstrap_estimates.params, locals()):
            # Check whether any parameter changed since the previous std error estimate
self._bootstrap_estimates = self._generate_bootstrap_estimates(num_simulations, sample_size_fraction)
std_error = np.std(self._bootstrap_estimates.estimates)
return std_error
def _estimate_std_error(self, method=None, **kwargs):
"""
        This method is to be overridden by the child classes, so that they
can run a standard error estimation method suited to the specific
causal estimator.
"""
raise NotImplementedError(
(
"This method for estimating standard errors is "
+ CausalEstimator.DEFAULT_NOTIMPLEMENTEDERROR_MSG
+ " Meanwhile, you can try the bootstrap method (method='bootstrap') to estimate standard errors."
).format(self.__class__)
)
def estimate_std_error(self, method=None, **kwargs):
"""Compute standard error of an obtained causal estimate.
:param method: Method for computing the standard error.
:param kwargs: Other optional parameters to be passed to the estimating method.
:returns: Standard error of the causal estimate.
"""
if method is None:
if self._confidence_intervals:
method = self._confidence_intervals
else:
method = "default"
std_error = None
if method == "default" or method is True: # user has not provided any method
try:
std_error = self._estimate_std_error(method, **kwargs)
except NotImplementedError:
std_error = self._estimate_std_error_with_bootstrap(**kwargs)
else:
if method == "bootstrap":
std_error = self._estimate_std_error_with_bootstrap(**kwargs)
else:
std_error = self._estimate_std_error(method, **kwargs)
return std_error
def _test_significance_with_bootstrap(self, estimate_value, num_null_simulations=None):
"""Test statistical significance of an estimate using the bootstrap method.
:param estimate_value: Obtained estimate's value
:param num_null_simulations: Number of simulations for the null hypothesis
:returns: p-value of the statistical significance test.
"""
# Use existing params, if new user defined params are not present
if num_null_simulations is None:
num_null_simulations = self.num_null_simulations
do_retest = self._bootstrap_null_estimates is None or CausalEstimator.is_bootstrap_parameter_changed(
self._bootstrap_null_estimates.params, locals()
)
if do_retest:
null_estimates = np.zeros(num_null_simulations)
for i in range(num_null_simulations):
new_outcome = np.random.permutation(self._outcome)
new_data = self._data.assign(dummy_outcome=new_outcome)
# self._outcome = self._data["dummy_outcome"]
new_estimator = type(self)(
new_data,
self._target_estimand,
self._target_estimand.treatment_variable,
("dummy_outcome",),
test_significance=False,
evaluate_effect_strength=False,
confidence_intervals=False,
target_units=self._target_units,
effect_modifiers=self._effect_modifier_names,
**self.method_params,
)
new_effect = new_estimator.estimate_effect()
null_estimates[i] = new_effect.value
self._bootstrap_null_estimates = CausalEstimator.BootstrapEstimates(
null_estimates, {"num_null_simulations": num_null_simulations, "sample_size_fraction": 1}
)
# Processing the null hypothesis estimates
sorted_null_estimates = np.sort(self._bootstrap_null_estimates.estimates)
self.logger.debug("Null estimates: {0}".format(sorted_null_estimates))
median_estimate = sorted_null_estimates[int(num_null_simulations / 2)]
# Doing a two-sided test
if estimate_value > median_estimate:
# Being conservative with the p-value reported
estimate_index = np.searchsorted(sorted_null_estimates, estimate_value, side="left")
p_value = 1 - (estimate_index / num_null_simulations)
        elif estimate_value <= median_estimate:
# Being conservative with the p-value reported
estimate_index = np.searchsorted(sorted_null_estimates, estimate_value, side="right")
p_value = estimate_index / num_null_simulations
# If the estimate_index is 0, it depends on the number of simulations
if p_value == 0:
p_value = (0, 1 / len(sorted_null_estimates)) # a tuple determining the range.
elif p_value == 1:
p_value = (1 - 1 / len(sorted_null_estimates), 1)
signif_dict = {"p_value": p_value}
return signif_dict
def _test_significance(self, estimate_value, method=None, **kwargs):
"""
        This method is to be overridden by the child classes, so that they
can run a significance test suited to the specific
causal estimator.
"""
raise NotImplementedError(
(
"This method for testing statistical significance is "
+ CausalEstimator.DEFAULT_NOTIMPLEMENTEDERROR_MSG
+ " Meanwhile, you can try the bootstrap method (method='bootstrap') to test statistical significance."
).format(self.__class__)
)
def test_significance(self, estimate_value, method=None, **kwargs):
"""Test statistical significance of obtained estimate.
By default, uses resampling to create a non-parametric significance test.
A general procedure. Individual child estimators can implement different methods.
If the method name is different from "bootstrap", this function calls the
implementation of the child estimator.
:param self: object instance of class Estimator
:param estimate_value: obtained estimate's value
:param method: Method for checking statistical significance
:returns: p-value from the significance test
"""
if method is None:
if self._significance_test:
method = self._significance_test # this is either True or methodname
else:
method = "default"
signif_dict = None
if method == "default" or method is True: # user has not provided any method
try:
signif_dict = self._test_significance(estimate_value, method, **kwargs)
except NotImplementedError:
signif_dict = self._test_significance_with_bootstrap(estimate_value, **kwargs)
else:
if method == "bootstrap":
signif_dict = self._test_significance_with_bootstrap(estimate_value, **kwargs)
else:
signif_dict = self._test_significance(estimate_value, method, **kwargs)
return signif_dict
def evaluate_effect_strength(self, estimate):
fraction_effect_explained = self._evaluate_effect_strength(estimate, method="fraction-effect")
# Need to test r-squared before supporting
# effect_r_squared = self._evaluate_effect_strength(estimate, method="r-squared")
strength_dict = {
"fraction-effect": fraction_effect_explained
# 'r-squared': effect_r_squared
}
return strength_dict
def _evaluate_effect_strength(self, estimate, method="fraction-effect"):
supported_methods = ["fraction-effect"]
if method not in supported_methods:
raise NotImplementedError("This method is not supported for evaluating effect strength")
if method == "fraction-effect":
naive_obs_estimate = self.estimate_effect_naive()
self.logger.debug(estimate.value, naive_obs_estimate.value)
fraction_effect_explained = estimate.value / naive_obs_estimate.value
return fraction_effect_explained
# elif method == "r-squared":
# outcome_mean = np.mean(self._outcome)
# total_variance = np.sum(np.square(self._outcome - outcome_mean))
# Assuming a linear model with one variable: the treatment
# Currently only works for continuous y
# causal_model = outcome_mean + estimate.value*self._treatment
# squared_residual = np.sum(np.square(self._outcome - causal_model))
# r_squared = 1 - (squared_residual/total_variance)
# return r_squared
else:
return None
def update_input(self, treatment_value, control_value, target_units):
self._control_value = control_value
self._treatment_value = treatment_value
self._target_units = target_units
@staticmethod
def is_bootstrap_parameter_changed(bootstrap_estimates_params, given_params):
"""Check whether parameters of the bootstrap have changed.
This is an efficiency method that checks if fresh resampling of the bootstrap samples is required.
Returns True if parameters have changed and resampling should be done again.
:param bootstrap_estimates_params: A dictionary of parameters for the current bootstrap samples
:param given_params: A dictionary of parameters passed by the user
:returns: A binary flag denoting whether the parameters are different.
"""
is_any_parameter_changed = False
for prm, val in bootstrap_estimates_params.items():
given_val = given_params.get(prm, None)
if given_val is not None and given_val != val:
is_any_parameter_changed = True
break
return is_any_parameter_changed
def target_units_tostr(self):
s = ""
if type(self._target_units) is str:
s += self._target_units
elif callable(self._target_units):
s += "Data subset defined by a function"
elif isinstance(self._target_units, pd.DataFrame):
s += "Data subset provided as a data frame"
return s
def signif_results_tostr(self, signif_results):
s = ""
pval = signif_results["p_value"]
if type(pval) is tuple:
s += "[{0}, {1}]".format(pval[0], pval[1])
else:
s += "{0}".format(pval)
return s
def estimate_effect(
treatment: Union[str, List[str]],
outcome: Union[str, List[str]],
identified_estimand: IdentifiedEstimand,
identifier_name: str,
method: CausalEstimator,
control_value: int = 0,
treatment_value: int = 1,
test_significance: Optional[bool] = None,
evaluate_effect_strength: bool = False,
confidence_intervals: bool = False,
target_units: str = "ate",
effect_modifiers: Optional[List[str]] = None,
fit_estimator: bool = True,
method_params: Optional[Dict] = None,
):
"""Estimate the identified causal effect.
Currently requires an explicit method name to be specified. Method names follow the convention of identification method followed by the specific estimation method: "[backdoor/iv].estimation_method_name". Following methods are supported.
* Propensity Score Matching: "backdoor.propensity_score_matching"
* Propensity Score Stratification: "backdoor.propensity_score_stratification"
* Propensity Score-based Inverse Weighting: "backdoor.propensity_score_weighting"
* Linear Regression: "backdoor.linear_regression"
* Generalized Linear Models (e.g., logistic regression): "backdoor.generalized_linear_model"
* Instrumental Variables: "iv.instrumental_variable"
* Regression Discontinuity: "iv.regression_discontinuity"
In addition, you can directly call any of the EconML estimation methods. The convention is "backdoor.econml.path-to-estimator-class". For example, for the double machine learning estimator ("DML" class) that is located inside "dml" module of EconML, you can use the method name, "backdoor.econml.dml.DML". CausalML estimators can also be called. See `this demo notebook <https://py-why.github.io/dowhy/example_notebooks/dowhy-conditional-treatment-effects.html>`_.
:param treatment: Name of the treatment
:param outcome: Name of the outcome
:param identified_estimand: a probability expression
that represents the effect to be estimated. Output of
CausalModel.identify_effect method
:param identifier_name: name of the identification method (e.g., "backdoor") whose estimand is to be estimated.
:param method: an instance of the CausalEstimator subclass implementing the estimation method to be used.
:param control_value: Value of the treatment in the control group, for effect estimation. If treatment is multi-variate, this can be a list.
:param treatment_value: Value of the treatment in the treated group, for effect estimation. If treatment is multi-variate, this can be a list.
:param test_significance: Binary flag on whether to additionally do a statistical significance test for the estimate.
:param evaluate_effect_strength: (Experimental) Binary flag on whether to estimate the relative strength of the treatment's effect. This measure can be used to compare different treatments for the same outcome (by running this method with different treatments sequentially).
:param confidence_intervals: (Experimental) Binary flag indicating whether confidence intervals should be computed.
:param target_units: (Experimental) The units for which the treatment effect should be estimated. This can be of three types. (1) a string for common specifications of target units (namely, "ate", "att" and "atc"), (2) a lambda function that can be used as an index for the data (pandas DataFrame), or (3) a new DataFrame that contains values of the effect_modifiers and effect will be estimated only for this new data.
:param effect_modifiers: Names of effect modifier variables can be (optionally) specified here too, since they do not affect identification. If None, the effect_modifiers from the CausalModel are used.
:param fit_estimator: Boolean flag on whether to fit the estimator.
Setting it to False is useful to estimate the effect on new data using a previously fitted estimator.
:param method_params: Dictionary containing any method-specific parameters. These are passed directly to the estimating method. See the docs for each estimation method for allowed method-specific params.
:returns: An instance of the CausalEstimate class, containing the causal effect estimate
and other method-dependent information
"""
if effect_modifiers is None:
effect_modifiers = []
treatment = parse_state(treatment)
outcome = parse_state(outcome)
causal_estimator_class = method.__class__
identified_estimand.set_identifier_method(identifier_name)
if identified_estimand.no_directed_path:
logger.warning("No directed path from {0} to {1}.".format(treatment, outcome))
return CausalEstimate(
0, identified_estimand, None, control_value=control_value, treatment_value=treatment_value
)
# Check if estimator's target estimand is identified
elif identified_estimand.estimands[identifier_name] is None:
logger.error("No valid identified estimand available.")
return CausalEstimate(None, None, None, control_value=control_value, treatment_value=treatment_value)
method.update_input(treatment_value, control_value, target_units)
estimate = method.estimate_effect()
# Store parameters inside estimate object for refutation methods
# TODO: This add_params needs to move to the estimator class
# inside estimate_effect and estimate_conditional_effect
estimate.add_params(
estimand_type=identified_estimand.estimand_type,
estimator_class=causal_estimator_class,
test_significance=test_significance,
evaluate_effect_strength=evaluate_effect_strength,
confidence_intervals=confidence_intervals,
target_units=target_units,
effect_modifiers=effect_modifiers,
method_params=method_params,
)
return estimate
class CausalEstimate:
"""Class for the estimate object that every causal estimator returns"""
def __init__(
self,
estimate,
target_estimand,
realized_estimand_expr,
control_value,
treatment_value,
conditional_estimates=None,
**kwargs,
):
self.value = estimate
self.target_estimand = target_estimand
self.realized_estimand_expr = realized_estimand_expr
self.control_value = control_value
self.treatment_value = treatment_value
self.conditional_estimates = conditional_estimates
self.params = kwargs
if self.params is not None:
for key, value in self.params.items():
setattr(self, key, value)
self.effect_strength = None
def add_estimator(self, estimator_instance):
self.estimator = estimator_instance
def add_effect_strength(self, strength_dict):
self.effect_strength = strength_dict
def add_params(self, **kwargs):
self.params.update(kwargs)
def get_confidence_intervals(self, confidence_level=None, method=None, **kwargs):
"""Get confidence intervals of the obtained estimate.
By default, this is done with the help of bootstrapped confidence intervals
but can be overridden if the specific estimator implements other methods of estimating confidence intervals.
If the method provided is not bootstrap, this function calls the implementation of the specific estimator.
:param method: Method for estimating confidence intervals.
:param confidence_level: The confidence level of the confidence intervals of the estimate.
:param kwargs: Other optional args to be passed to the CI method.
:returns: The obtained confidence interval.
"""
confidence_intervals = self.estimator.estimate_confidence_intervals(
estimate_value=self.value, confidence_level=confidence_level, method=method, **kwargs
)
return confidence_intervals
def get_standard_error(self, method=None, **kwargs):
"""Get standard error of the obtained estimate.
By default, this is done with the help of bootstrapped standard errors
but can be overridden if the specific estimator implements other methods of estimating standard error.
If the method provided is not bootstrap, this function calls the implementation of the specific estimator.
:param method: Method for computing the standard error.
:param kwargs: Other optional parameters to be passed to the estimating method.
:returns: Standard error of the causal estimate.
"""
std_error = self.estimator.estimate_std_error(method=method, **kwargs)
return std_error
def test_stat_significance(self, method=None, **kwargs):
"""Test statistical significance of the estimate obtained.
By default, uses resampling to create a non-parametric significance test.
Individual child estimators can implement different methods.
If the method name is different from "bootstrap", this function calls the
implementation of the child estimator.
:param method: Method for checking statistical significance
:param kwargs: Other optional parameters to be passed to the estimating method.
:returns: p-value from the significance test
"""
signif_results = self.estimator.test_significance(self.value, method=method, **kwargs)
return {"p_value": signif_results["p_value"]}
def estimate_conditional_effects(
self, effect_modifiers=None, num_quantiles=CausalEstimator.NUM_QUANTILES_TO_DISCRETIZE_CONT_COLS
):
"""Estimate treatment effect conditioned on given variables.
If a numeric effect modifier is provided, it is discretized into quantile bins. If you would like a custom discretization, you can do so yourself: create a new column containing the discretized effect modifier and then include that column's name in the effect_modifier_names argument.
:param effect_modifiers: Names of effect modifier variables over which the conditional effects will be estimated. If not provided, defaults to the effect modifiers specified during creation of the CausalEstimator object.
:param num_quantiles: The number of quantiles into which a numeric effect modifier variable is discretized. Does not affect any categorical effect modifiers.
:returns: A (multi-index) dataframe that provides separate effects for each value of the (discretized) effect modifiers.
"""
return self.estimator._estimate_conditional_effects(
self.estimator._estimate_effect_fn, effect_modifiers, num_quantiles
)
def interpret(self, method_name=None, **kwargs):
"""Interpret the causal estimate.
:param method_name: Method used (string) or a list of methods. If None, then the default for the specific estimator is used.
:param kwargs: Optional parameters that are directly passed to the interpreter method.
:returns: None
"""
if method_name is None:
method_name = self.estimator.interpret_method
method_name_arr = parse_state(method_name)
for method in method_name_arr:
interpreter = interpreters.get_class_object(method)
interpreter(self, **kwargs).interpret()
def __str__(self):
s = "*** Causal Estimate ***\n"
# No estimand was identified (identification failed)
if self.target_estimand is None:
return "Estimation failed! No relevant identified estimand available for this estimation method."
s += "\n## Identified estimand\n{0}".format(self.target_estimand.__str__(only_target_estimand=True))
s += "\n## Realized estimand\n{0}".format(self.realized_estimand_expr)
if hasattr(self, "estimator"):
s += "\nTarget units: {0}\n".format(self.estimator.target_units_tostr())
s += "\n## Estimate\n"
s += "Mean value: {0}\n".format(self.value)
s += ""
if hasattr(self, "cate_estimates"):
s += "Effect estimates: {0}\n".format(self.cate_estimates)
if hasattr(self, "estimator"):
if self.estimator._significance_test:
s += "p-value: {0}\n".format(self.estimator.signif_results_tostr(self.test_stat_significance()))
if self.estimator._confidence_intervals:
s += "{0}% confidence interval: {1}\n".format(
100 * self.estimator.confidence_level, self.get_confidence_intervals()
)
if self.conditional_estimates is not None:
s += "### Conditional Estimates\n"
s += str(self.conditional_estimates)
if self.effect_strength is not None:
s += "\n## Effect Strength\n"
s += "Change in outcome attributable to treatment: {}\n".format(self.effect_strength["fraction-effect"])
# s += "Variance in outcome explained by treatment: {}\n".format(self.effect_strength["r-squared"])
return s
class RealizedEstimand(object):
def __init__(self, identified_estimand, estimator_name):
self.treatment_variable = identified_estimand.treatment_variable
self.outcome_variable = identified_estimand.outcome_variable
self.backdoor_variables = identified_estimand.get_backdoor_variables()
self.instrumental_variables = identified_estimand.instrumental_variables
self.estimand_type = identified_estimand.estimand_type
self.estimand_expression = None
self.assumptions = None
self.estimator_name = estimator_name
def update_assumptions(self, estimator_assumptions):
self.assumptions = estimator_assumptions
def update_estimand_expression(self, estimand_expression):
self.estimand_expression = estimand_expression
def __str__(self):
s = "Realized estimand: {0}\n".format(self.estimator_name)
s += "Realized estimand type: {0}\n".format(self.estimand_type)
s += "Estimand expression:\n{0}\n".format(sp.pretty(self.estimand_expression))
j = 1
for ass_name, ass_str in self.assumptions.items():
s += "Estimand assumption {0}, {1}: {2}\n".format(j, ass_name, ass_str)
j += 1
return s
| import copy
import logging
from collections import namedtuple
from typing import Dict, List, Optional, Union
import numpy as np
import pandas as pd
import sympy as sp
from sklearn.utils import resample
import dowhy.interpreters as interpreters
from dowhy.causal_identifier.identified_estimand import IdentifiedEstimand
from dowhy.utils.api import parse_state
logger = logging.getLogger(__name__)
class CausalEstimator:
"""Base class for an estimator of causal effect.
Subclasses implement different estimation methods. All estimation methods are in the package "dowhy.causal_estimators"
"""
# The default number of simulations for statistical testing
DEFAULT_NUMBER_OF_SIMULATIONS_STAT_TEST = 1000
# The default number of simulations to obtain confidence intervals
DEFAULT_NUMBER_OF_SIMULATIONS_CI = 100
# The portion of the total size that should be taken each time to find the confidence intervals
# 1 is the recommended value
# https://ocw.mit.edu/courses/mathematics/18-05-introduction-to-probability-and-statistics-spring-2014/readings/MIT18_05S14_Reading24.pdf
# https://projecteuclid.org/download/pdf_1/euclid.ss/1032280214
DEFAULT_SAMPLE_SIZE_FRACTION = 1
# The default Confidence Level
DEFAULT_CONFIDENCE_LEVEL = 0.95
# Number of quantiles to discretize continuous columns, for applying groupby
NUM_QUANTILES_TO_DISCRETIZE_CONT_COLS = 5
# Prefix to add to temporary categorical variables created after discretization
TEMP_CAT_COLUMN_PREFIX = "__categorical__"
DEFAULT_NOTIMPLEMENTEDERROR_MSG = "not yet implemented for {0}. If you would like this to be implemented in the next version, please raise an issue at https://github.com/microsoft/dowhy/issues"
BootstrapEstimates = namedtuple("BootstrapEstimates", ["estimates", "params"])
DEFAULT_INTERPRET_METHOD = ["textual_effect_interpreter"]
# std args to be removed from locals() before being passed to args_dict
_STD_INIT_ARGS = ("self", "__class__", "args", "kwargs")
def __init__(
self,
identified_estimand: IdentifiedEstimand,
test_significance: bool = False,
evaluate_effect_strength: bool = False,
confidence_intervals: bool = False,
num_null_simulations: int = DEFAULT_NUMBER_OF_SIMULATIONS_STAT_TEST,
num_simulations: int = DEFAULT_NUMBER_OF_SIMULATIONS_CI,
sample_size_fraction: int = DEFAULT_SAMPLE_SIZE_FRACTION,
confidence_level: float = DEFAULT_CONFIDENCE_LEVEL,
need_conditional_estimates: Union[bool, str] = "auto",
num_quantiles_to_discretize_cont_cols: int = NUM_QUANTILES_TO_DISCRETIZE_CONT_COLS,
**_,
):
"""Initializes an estimator with data and names of relevant variables.
This method is called from the constructors of its child classes.
:param identified_estimand: probability expression
representing the target identified estimand to estimate.
:param test_significance: Binary flag or a string indicating whether to test significance and by which method. All estimators support test_significance="bootstrap" that estimates a p-value for the obtained estimate using the bootstrap method. Individual estimators can override this to support custom testing methods. The bootstrap method supports an optional parameter, num_null_simulations. If False, no testing is done. If True, significance of the estimate is tested using the custom method if available, otherwise by bootstrap.
:param evaluate_effect_strength: (Experimental) whether to evaluate the strength of effect
:param confidence_intervals: Binary flag or a string indicating whether the confidence intervals should be computed and which method should be used. All methods support estimation of confidence intervals using the bootstrap method by using the parameter confidence_intervals="bootstrap". The bootstrap method takes in two arguments (num_simulations and sample_size_fraction) that can be optionally specified in the params dictionary. Estimators may also override this to implement their own confidence interval method. If this parameter is False, no confidence intervals are computed. If True, confidence intervals are computed by the estimator's specific method if available, otherwise through bootstrap
:param num_null_simulations: The number of simulations for testing the
statistical significance of the estimator
:param num_simulations: The number of simulations for finding the
confidence interval (and/or standard error) for an estimate
:param sample_size_fraction: The size of the sample for the bootstrap
estimator
:param confidence_level: The confidence level of the confidence
interval estimate
:param need_conditional_estimates: Boolean flag indicating whether
conditional estimates should be computed. Defaults to True if
there are effect modifiers in the graph
:param num_quantiles_to_discretize_cont_cols: The number of quantiles
into which a numeric effect modifier is split, to enable
estimation of conditional treatment effect over it.
:param kwargs: (optional) Additional estimator-specific parameters (ignored by the base class)
:returns: an instance of the estimator class.
"""
self._target_estimand = identified_estimand
self._significance_test = test_significance
self._effect_strength_eval = evaluate_effect_strength
self._confidence_intervals = confidence_intervals
# Setting the default interpret method
self.interpret_method = CausalEstimator.DEFAULT_INTERPRET_METHOD
self.logger = logging.getLogger(__name__)
# Check if some parameters were set, otherwise set to default values
self.num_null_simulations = num_null_simulations
self.num_simulations = num_simulations
self.sample_size_fraction = sample_size_fraction
self.confidence_level = confidence_level
self.num_quantiles_to_discretize_cont_cols = num_quantiles_to_discretize_cont_cols
# Estimate conditional estimates by default
self.need_conditional_estimates = need_conditional_estimates
self._bootstrap_estimates = None
self._bootstrap_null_estimates = None
def _set_data(self, data: pd.DataFrame, treatment_name: List[str], outcome_name: List[str]):
"""Sets the data for the estimator
:param data: data frame containing the data
:param treatment_name: name of the treatment variable
:param outcome_name: name of the outcome variable
"""
self._data = data
self._treatment_name = treatment_name
self._outcome_name = outcome_name[0]
self._treatment = self._data[self._treatment_name]
self._outcome = self._data[self._outcome_name]
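# NOTE: only a single outcome is supported here -- `_set_data` keeps just the
# first entry of `outcome_name`. A hedged sketch of what a child estimator's
# fit() is expected to do with these setters (names are illustrative only):
#
#     def fit(self, data, treatment_name, outcome_name, effect_modifier_names=None):
#         self._set_data(data, treatment_name, outcome_name)
#         self._set_effect_modifiers(effect_modifier_names)
#         return self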
def _set_effect_modifiers(self, effect_modifier_names: Optional[List[str]] = None):
"""Sets the effect modifiers for the estimator
Modifies need_conditional_estimates accordingly to effect modifiers value
:param effect_modifiers: Variables on which to compute separate
effects, or return a heterogeneous effect function. Not all
methods support this currently.
"""
self._effect_modifiers = effect_modifier_names
if effect_modifier_names is not None:
self._effect_modifier_names = [cname for cname in effect_modifier_names if cname in self._data.columns]
if len(self._effect_modifier_names) > 0:
self._effect_modifiers = self._data[self._effect_modifier_names]
self._effect_modifiers = pd.get_dummies(self._effect_modifiers, drop_first=True)
self.logger.debug("Effect modifiers: " + ",".join(self._effect_modifier_names))
else:
self._effect_modifier_names = []
else:
self._effect_modifier_names = []
self.need_conditional_estimates = (
self.need_conditional_estimates
if self.need_conditional_estimates != "auto"
else bool(self._effect_modifier_names)
)
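# Illustrative, self-contained sketch of the dummy-encoding applied above
# (the column names are made up for the example):
#
#     import pandas as pd
#     mods = pd.DataFrame({"X0": ["a", "b", "a"], "X1": [1.0, 2.0, 3.0]})
#     encoded = pd.get_dummies(mods, drop_first=True)
#     # categorical "X0" becomes a single indicator column "X0_b";
#     # numeric "X1" passes through unchanged.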
def _set_identified_estimand(self, new_identified_estimand):
"""Method used internally to change the target estimand (required by some refuters)
:param new_identified_estimand: The new target_estimand to use
"""
self._target_estimand = new_identified_estimand
def get_new_estimator_object(
self,
identified_estimand,
test_significance=False,
evaluate_effect_strength=False,
confidence_intervals=None,
):
"""Create a new estimator of the same type as the one passed in the estimate argument.
Creates a new object with the identified_estimand
:param identified_estimand: IdentifiedEstimand
An instance of the identified estimand class that provides the information with
respect to which causal pathways are employed when the treatment affects the outcome
:returns: A new instance of the same estimator class that had generated the given estimate.
"""
new_estimator = copy.deepcopy(self)
new_estimator._target_estimand = identified_estimand
new_estimator._significance_test = test_significance
new_estimator._effect_strength_eval = evaluate_effect_strength
new_estimator._confidence_intervals = (
self._confidence_intervals if confidence_intervals is None else confidence_intervals
)
return new_estimator
def estimate_effect_naive(self):
# TODO Only works for binary treatment
df_withtreatment = self._data.loc[self._data[self._treatment_name] == 1]
df_notreatment = self._data.loc[self._data[self._treatment_name] == 0]
est = np.mean(df_withtreatment[self._outcome_name]) - np.mean(df_notreatment[self._outcome_name])
return CausalEstimate(est, None, None, control_value=0, treatment_value=1)
def _estimate_effect_fn(self, data_df):
"""Function used in conditional effect estimation. This function is to be overridden by each child estimator.
The overridden function should take in a dataframe as input and return the estimate for that data.
"""
raise NotImplementedError(
("Conditional treatment effects are " + CausalEstimator.DEFAULT_NOTIMPLEMENTEDERROR_MSG).format(
self.__class__
)
)
def _estimate_conditional_effects(self, estimate_effect_fn, effect_modifier_names=None, num_quantiles=None):
"""Estimate conditional treatment effects. Common method for all estimators that utilizes a specific estimate_effect_fn implemented by each child estimator.
If a numeric effect modifier is provided, it is discretized into quantile bins. If you would like a custom discretization, you can do so yourself: create a new column containing the discretized effect modifier and then include that column's name in the effect_modifier_names argument.
:param estimate_effect_fn: Function that has a single parameter (a data frame) and returns the treatment effect estimate on that data.
:param effect_modifier_names: Names of effect modifier variables over which the conditional effects will be estimated. If not provided, defaults to the effect modifiers specified during creation of the CausalEstimator object.
:param num_quantiles: The number of quantiles into which a numeric effect modifier variable is discretized. Does not affect any categorical effect modifiers.
:returns: A (multi-index) dataframe that provides separate effects for each value of the (discretized) effect modifiers.
"""
# Defaulting to class default values if parameters are not provided
if effect_modifier_names is None:
effect_modifier_names = self._effect_modifier_names
if num_quantiles is None:
num_quantiles = self.num_quantiles_to_discretize_cont_cols
# Checking that there is at least one effect modifier
if not effect_modifier_names:
raise ValueError("At least one effect modifier should be specified to compute conditional effects.")
# Making sure that effect_modifier_names is a list
effect_modifier_names = parse_state(effect_modifier_names)
if not all(em in self._effect_modifier_names for em in effect_modifier_names):
self.logger.warning(
"At least one of the provided effect modifiers was not included while fitting the estimator. You may get incorrect results. To resolve, fit the estimator again by providing the updated effect modifiers in estimate_effect()."
)
# Making a copy since we are going to be changing effect modifier names
effect_modifier_names = effect_modifier_names.copy()
prefix = CausalEstimator.TEMP_CAT_COLUMN_PREFIX
# For every numeric effect modifier, adding a temp categorical column
for i in range(len(effect_modifier_names)):
em = effect_modifier_names[i]
if pd.api.types.is_numeric_dtype(self._data[em].dtypes):
self._data[prefix + str(em)] = pd.qcut(self._data[em], num_quantiles, duplicates="drop")
effect_modifier_names[i] = prefix + str(em)
# Grouping by effect modifiers and computing effect separately
by_effect_mods = self._data.groupby(effect_modifier_names)
cond_est_fn = lambda x: self._do(self._treatment_value, x) - self._do(self._control_value, x)
conditional_estimates = by_effect_mods.apply(estimate_effect_fn)
# Deleting the temporary categorical columns
for em in effect_modifier_names:
if em.startswith(prefix):
self._data.pop(em)
return conditional_estimates
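# Hedged example of the custom discretization suggested in the docstring above
# (`estimator` is a fitted child estimator and "X0" a numeric column in its
# data; both names are assumptions). A column added after fitting triggers the
# warning above but is still usable:
#
#     estimator._data["X0_bins"] = pd.cut(estimator._data["X0"], bins=3)
#     cond_effects = estimator._estimate_conditional_effects(
#         estimator._estimate_effect_fn, effect_modifier_names=["X0_bins"]
#     )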
def _do(self, x, data_df=None):
raise NotImplementedError(
("Do-operator is " + CausalEstimator.DEFAULT_NOTIMPLEMENTEDERROR_MSG).format(self.__class__)
)
def do(self, x, data_df=None):
"""Method that implements the do-operator.
Given a value x for the treatment, returns the expected value of the outcome when the treatment is intervened to a value x.
:param x: Value of the treatment
:param data_df: Data on which the do-operator is to be applied.
:returns: Value of the outcome when treatment is intervened/set to x.
"""
est = self._do(x, data_df)
return est
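# Hedged usage sketch for the do-operator (only meaningful for estimators that
# implement `_do`; `estimator` is assumed to be fitted):
#
#     naive_do_effect = estimator.do(1) - estimator.do(0)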
def construct_symbolic_estimator(self, estimand):
raise NotImplementedError(("Symbolic estimator string is ").format(self.__class__))
def _generate_bootstrap_estimates(self, num_bootstrap_simulations, sample_size_fraction):
"""Helper function to generate causal estimates over bootstrapped samples.
:param num_bootstrap_simulations: Number of simulations for the bootstrap method.
:param sample_size_fraction: Fraction of the dataset to be resampled.
:returns: A collections.namedtuple containing a list of bootstrapped estimates and a dictionary containing parameters used for the bootstrap.
"""
# The array that stores the results of all estimations
simulation_results = np.zeros(num_bootstrap_simulations)
# Compute the sample size as the given fraction of the full data size
sample_size = int(sample_size_fraction * len(self._data))
if sample_size > len(self._data):
self.logger.warning("WARN: The sample size is greater than the data being sampled")
self.logger.info("INFO: The sample size: {}".format(sample_size))
self.logger.info("INFO: The number of simulations: {}".format(num_bootstrap_simulations))
# Perform the set number of simulations
for index in range(num_bootstrap_simulations):
new_data = resample(self._data, n_samples=sample_size)
new_estimator = self.get_new_estimator_object(
self._target_estimand,
# names of treatment and outcome
test_significance=False,
evaluate_effect_strength=False,
confidence_intervals=False,
)
new_estimator.fit(
new_data,
self._target_estimand.treatment_variable,
self._target_estimand.outcome_variable,
effect_modifier_names=self._effect_modifier_names,
)
new_effect = new_estimator.estimate_effect(
treatment_value=self._treatment_value,
control_value=self._control_value,
target_units=self._target_units,
)
simulation_results[index] = new_effect.value
estimates = CausalEstimator.BootstrapEstimates(
simulation_results,
{"num_simulations": num_bootstrap_simulations, "sample_size_fraction": sample_size_fraction},
)
return estimates
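# The resampling loop above, reduced to a self-contained sketch
# (`compute_estimate` stands in for refitting and re-estimating):
#
#     import numpy as np
#     from sklearn.utils import resample
#
#     def bootstrap_estimates(df, compute_estimate, n_sims=100, frac=1.0):
#         n = int(frac * len(df))
#         return np.array(
#             [compute_estimate(resample(df, n_samples=n)) for _ in range(n_sims)]
#         )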
def _estimate_confidence_intervals_with_bootstrap(
self, estimate_value, confidence_level=None, num_simulations=None, sample_size_fraction=None
):
"""
Method to compute confidence interval using bootstrapped sampling.
:param estimate_value: obtained estimate's value
:param confidence_level: The level for which to compute CI (e.g., 95% confidence level translates to confidence_level=0.95)
:param num_simulations: The number of simulations to be performed to get the bootstrap confidence intervals.
:param sample_size_fraction: The fraction of the dataset to be resampled.
:returns: confidence interval at the specified level.
For more details on bootstrap or resampling statistics, refer to the following links:
https://ocw.mit.edu/courses/mathematics/18-05-introduction-to-probability-and-statistics-spring-2014/readings/MIT18_05S14_Reading24.pdf
https://projecteuclid.org/download/pdf_1/euclid.ss/1032280214
"""
# Using class default parameters if not specified
if num_simulations is None:
num_simulations = self.num_simulations
if sample_size_fraction is None:
sample_size_fraction = self.sample_size_fraction
# Checking if bootstrap_estimates are already computed
if self._bootstrap_estimates is None:
self._bootstrap_estimates = self._generate_bootstrap_estimates(num_simulations, sample_size_fraction)
elif CausalEstimator.is_bootstrap_parameter_changed(self._bootstrap_estimates.params, locals()):
# Checked if any parameter is changed from the previous std error estimate
self._bootstrap_estimates = self._generate_bootstrap_estimates(num_simulations, sample_size_fraction)
# Now use the data obtained from the simulations to get the value of the confidence estimates
bootstrap_estimates = self._bootstrap_estimates.estimates
# Get the variations of each bootstrap estimate and sort
bootstrap_variations = [bootstrap_estimate - estimate_value for bootstrap_estimate in bootstrap_estimates]
sorted_bootstrap_variations = np.sort(bootstrap_variations)
# Now we take the (1 - p)th and the (p)th variations, where p is the chosen confidence level
upper_bound_index = int((1 - confidence_level) * len(sorted_bootstrap_variations))
lower_bound_index = int(confidence_level * len(sorted_bootstrap_variations))
# Get the lower and upper bounds by subtracting the variations from the estimate
lower_bound = estimate_value - sorted_bootstrap_variations[lower_bound_index]
upper_bound = estimate_value - sorted_bootstrap_variations[upper_bound_index]
return lower_bound, upper_bound
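# Worked sketch of the interval construction above (standalone; the numbers
# are made up). With confidence_level = 0.95 the code subtracts the
# 95th-percentile variation for the lower bound and the 5th-percentile
# variation for the upper bound:
#
#     import numpy as np
#     est = 2.0
#     boot = np.array([1.7, 1.9, 2.0, 2.1, 2.4])
#     variations = np.sort(boot - est)                       # [-0.3 -0.1  0.   0.1  0.4]
#     lower = est - variations[int(0.95 * len(variations))]  # 2.0 - 0.4    = 1.6
#     upper = est - variations[int(0.05 * len(variations))]  # 2.0 - (-0.3) = 2.3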
def _estimate_confidence_intervals(self, confidence_level=None, method=None, **kwargs):
"""
This method is to be overridden by the child classes, so that they
can run a confidence interval estimation method suited to the specific
causal estimator.
"""
raise NotImplementedError(
(
"This method for estimating confidence intervals is "
+ CausalEstimator.DEFAULT_NOTIMPLEMENTEDERROR_MSG
+ " Meanwhile, you can try the bootstrap method (method='bootstrap') to estimate confidence intervals."
).format(self.__class__)
)
def estimate_confidence_intervals(self, estimate_value, confidence_level=None, method=None, **kwargs):
"""Find the confidence intervals corresponding to any estimator
By default, this is done with the help of bootstrapped confidence intervals
but can be overridden if the specific estimator implements other methods of estimating confidence intervals.
If the method provided is not bootstrap, this function calls the implementation of the specific estimator.
:param estimate_value: obtained estimate's value
:param method: Method for estimating confidence intervals.
:param confidence_level: The confidence level of the confidence intervals of the estimate.
:param kwargs: Other optional args to be passed to the CI method.
:returns: The obtained confidence interval.
"""
if method is None:
if self._confidence_intervals:
method = self._confidence_intervals # this is either True or methodname
else:
method = "default"
confidence_intervals = None
if confidence_level is None:
confidence_level = self.confidence_level
if method == "default" or method is True: # user has not provided any method
try:
confidence_intervals = self._estimate_confidence_intervals(confidence_level, method=method, **kwargs)
except NotImplementedError:
confidence_intervals = self._estimate_confidence_intervals_with_bootstrap(
estimate_value, confidence_level, **kwargs
)
else:
if method == "bootstrap":
confidence_intervals = self._estimate_confidence_intervals_with_bootstrap(
estimate_value, confidence_level, **kwargs
)
else:
confidence_intervals = self._estimate_confidence_intervals(confidence_level, method=method, **kwargs)
return confidence_intervals
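# Hedged usage sketch: forcing the bootstrap CI irrespective of any
# estimator-specific method (`estimator` is a fitted estimator and
# `estimate` its CausalEstimate):
#
#     ci = estimator.estimate_confidence_intervals(
#         estimate.value,
#         confidence_level=0.90,
#         method="bootstrap",
#         num_simulations=200,
#     )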
def _estimate_std_error_with_bootstrap(self, num_simulations=None, sample_size_fraction=None):
"""Compute standard error using the bootstrap method. Standard error
and confidence intervals use the same parameter num_simulations for
the number of bootstrap simulations.
:param num_simulations: Number of bootstrapped samples.
:param sample_size_fraction: Fraction of data to be resampled.
:returns: Standard error of the obtained estimate.
"""
# Use existing params, if new user defined params are not present
if num_simulations is None:
num_simulations = self.num_simulations
if sample_size_fraction is None:
sample_size_fraction = self.sample_size_fraction
# Checking if bootstrap_estimates are already computed
if self._bootstrap_estimates is None:
self._bootstrap_estimates = self._generate_bootstrap_estimates(num_simulations, sample_size_fraction)
elif CausalEstimator.is_bootstrap_parameter_changed(self._bootstrap_estimates.params, locals()):
# Check if any parameter is changed from the previous std error estimate
self._bootstrap_estimates = self._generate_bootstrap_estimates(num_simulations, sample_size_fraction)
std_error = np.std(self._bootstrap_estimates.estimates)
return std_error
def _estimate_std_error(self, method=None, **kwargs):
"""
This method is to be overridden by the child classes, so that they
can run a standard error estimation method suited to the specific
causal estimator.
"""
raise NotImplementedError(
(
"This method for estimating standard errors is "
+ CausalEstimator.DEFAULT_NOTIMPLEMENTEDERROR_MSG
+ " Meanwhile, you can try the bootstrap method (method='bootstrap') to estimate standard errors."
).format(self.__class__)
)
def estimate_std_error(self, method=None, **kwargs):
"""Compute standard error of an obtained causal estimate.
:param method: Method for computing the standard error.
:param kwargs: Other optional parameters to be passed to the estimating method.
:returns: Standard error of the causal estimate.
"""
if method is None:
if self._confidence_intervals:
method = self._confidence_intervals
else:
method = "default"
std_error = None
if method == "default" or method is True: # user has not provided any method
try:
std_error = self._estimate_std_error(method, **kwargs)
except NotImplementedError:
std_error = self._estimate_std_error_with_bootstrap(**kwargs)
else:
if method == "bootstrap":
std_error = self._estimate_std_error_with_bootstrap(**kwargs)
else:
std_error = self._estimate_std_error(method, **kwargs)
return std_error
def _test_significance_with_bootstrap(self, estimate_value, num_null_simulations=None):
"""Test statistical significance of an estimate using the bootstrap method.
:param estimate_value: Obtained estimate's value
:param num_null_simulations: Number of simulations for the null hypothesis
:returns: p-value of the statistical significance test.
"""
# Use existing params, if new user defined params are not present
if num_null_simulations is None:
num_null_simulations = self.num_null_simulations
do_retest = self._bootstrap_null_estimates is None or CausalEstimator.is_bootstrap_parameter_changed(
self._bootstrap_null_estimates.params, locals()
)
if do_retest:
null_estimates = np.zeros(num_null_simulations)
for i in range(num_null_simulations):
new_outcome = np.random.permutation(self._outcome)
new_data = self._data.assign(dummy_outcome=new_outcome)
# self._outcome = self._data["dummy_outcome"]
new_estimator = self.get_new_estimator_object(
self._target_estimand,
test_significance=False,
evaluate_effect_strength=False,
confidence_intervals=False,
)
new_estimator.fit(
data=new_data,
treatment_name=self._target_estimand.treatment_variable,
outcome_name=("dummy_outcome",),
effect_modifier_names=self._effect_modifier_names,
)
new_effect = new_estimator.estimate_effect(
target_units=self._target_units,
)
null_estimates[i] = new_effect.value
self._bootstrap_null_estimates = CausalEstimator.BootstrapEstimates(
null_estimates, {"num_null_simulations": num_null_simulations, "sample_size_fraction": 1}
)
# Processing the null hypothesis estimates
sorted_null_estimates = np.sort(self._bootstrap_null_estimates.estimates)
self.logger.debug("Null estimates: {0}".format(sorted_null_estimates))
median_estimate = sorted_null_estimates[int(num_null_simulations / 2)]
# Doing a two-sided test
if estimate_value > median_estimate:
# Being conservative with the p-value reported
estimate_index = np.searchsorted(sorted_null_estimates, estimate_value, side="left")
p_value = 1 - (estimate_index / num_null_simulations)
else:
# Being conservative with the p-value reported
estimate_index = np.searchsorted(sorted_null_estimates, estimate_value, side="right")
p_value = estimate_index / num_null_simulations
# If the estimate_index is 0, it depends on the number of simulations
if p_value == 0:
p_value = (0, 1 / len(sorted_null_estimates)) # a tuple determining the range.
elif p_value == 1:
p_value = (1 - 1 / len(sorted_null_estimates), 1)
signif_dict = {"p_value": p_value}
return signif_dict
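# Worked sketch of the two-sided p-value logic above (standalone; the values
# are made up). For an estimate above the null median, the p-value is the
# fraction of null estimates at or above it:
#
#     import numpy as np
#     nulls = np.sort(np.array([-0.2, -0.1, 0.0, 0.1, 0.3]))
#     idx = np.searchsorted(nulls, 0.25, side="left")  # -> 4
#     p_value = 1 - idx / len(nulls)                   # -> 0.2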
def _test_significance(self, estimate_value, method=None, **kwargs):
"""
This method is to be overridden by the child classes, so that they
can run a significance test suited to the specific
causal estimator.
"""
raise NotImplementedError(
(
"This method for testing statistical significance is "
+ CausalEstimator.DEFAULT_NOTIMPLEMENTEDERROR_MSG
+ " Meanwhile, you can try the bootstrap method (method='bootstrap') to test statistical significance."
).format(self.__class__)
)
def test_significance(self, estimate_value, method=None, **kwargs):
"""Test statistical significance of obtained estimate.
By default, uses resampling to create a non-parametric significance test.
A general procedure. Individual child estimators can implement different methods.
If the method name is different from "bootstrap", this function calls the
implementation of the child estimator.
:param estimate_value: obtained estimate's value
:param method: Method for checking statistical significance
:returns: p-value from the significance test
"""
if method is None:
if self._significance_test:
method = self._significance_test # this is either True or methodname
else:
method = "default"
signif_dict = None
if method == "default" or method is True: # user has not provided any method
try:
signif_dict = self._test_significance(estimate_value, method, **kwargs)
except NotImplementedError:
signif_dict = self._test_significance_with_bootstrap(estimate_value, **kwargs)
else:
if method == "bootstrap":
signif_dict = self._test_significance_with_bootstrap(estimate_value, **kwargs)
else:
signif_dict = self._test_significance(estimate_value, method, **kwargs)
return signif_dict
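# Hedged usage sketch (via the estimate object; the call falls back to the
# bootstrap test when the estimator provides no analytic test):
#
#     res = estimate.test_stat_significance(method="bootstrap", num_null_simulations=200)
#     print(res["p_value"])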
def evaluate_effect_strength(self, estimate):
fraction_effect_explained = self._evaluate_effect_strength(estimate, method="fraction-effect")
# Need to test r-squared before supporting
# effect_r_squared = self._evaluate_effect_strength(estimate, method="r-squared")
strength_dict = {
"fraction-effect": fraction_effect_explained
# 'r-squared': effect_r_squared
}
return strength_dict
def _evaluate_effect_strength(self, estimate, method="fraction-effect"):
supported_methods = ["fraction-effect"]
if method not in supported_methods:
raise NotImplementedError("This method is not supported for evaluating effect strength")
if method == "fraction-effect":
naive_obs_estimate = self.estimate_effect_naive()
self.logger.debug("estimate: %s, naive observed estimate: %s", estimate.value, naive_obs_estimate.value)
fraction_effect_explained = estimate.value / naive_obs_estimate.value
return fraction_effect_explained
# elif method == "r-squared":
# outcome_mean = np.mean(self._outcome)
# total_variance = np.sum(np.square(self._outcome - outcome_mean))
# Assuming a linear model with one variable: the treatment
# Currently only works for continuous y
# causal_model = outcome_mean + estimate.value*self._treatment
# squared_residual = np.sum(np.square(self._outcome - causal_model))
# r_squared = 1 - (squared_residual/total_variance)
# return r_squared
else:
return None
def update_input(self, treatment_value, control_value, target_units):
self._control_value = control_value
self._treatment_value = treatment_value
self._target_units = target_units
@staticmethod
def is_bootstrap_parameter_changed(bootstrap_estimates_params, given_params):
"""Check whether parameters of the bootstrap have changed.
This is an efficiency method that checks if fresh resampling of the bootstrap samples is required.
Returns True if parameters have changed and resampling should be done again.
:param bootstrap_estimates_params: A dictionary of parameters for the current bootstrap samples
:param given_params: A dictionary of parameters passed by the user
:returns: A binary flag denoting whether the parameters are different.
"""
is_any_parameter_changed = False
for prm, val in bootstrap_estimates_params.items():
given_val = given_params.get(prm, None)
if given_val is not None and given_val != val:
is_any_parameter_changed = True
break
return is_any_parameter_changed
def target_units_tostr(self):
s = ""
if type(self._target_units) is str:
s += self._target_units
elif callable(self._target_units):
s += "Data subset defined by a function"
elif isinstance(self._target_units, pd.DataFrame):
s += "Data subset provided as a data frame"
return s
def signif_results_tostr(self, signif_results):
s = ""
pval = signif_results["p_value"]
if type(pval) is tuple:
s += "[{0}, {1}]".format(pval[0], pval[1])
else:
s += "{0}".format(pval)
return s
def estimate_effect(
data: pd.DataFrame,
treatment: Union[str, List[str]],
outcome: Union[str, List[str]],
identifier_name: str,
estimator: CausalEstimator,
control_value: int = 0,
treatment_value: int = 1,
target_units: str = "ate",
effect_modifiers: Optional[List[str]] = None,
fit_estimator: bool = True,
method_params: Optional[Dict] = None,
):
"""Estimate the identified causal effect.
You can also directly call any of the EconML estimation methods. The convention is "backdoor.econml.path-to-estimator-class". For example, for the double machine learning estimator ("DML" class) that is located inside "dml" module of EconML, you can use the method name, "backdoor.econml.dml.DML". CausalML estimators can also be called. See `this demo notebook <https://py-why.github.io/dowhy/example_notebooks/dowhy-conditional-treatment-effects.html>`_.
:param treatment: Name of the treatment
:param outcome: Name of the outcome
:param data: data frame containing the data
:param identifier_name: name of the identification method (e.g., "backdoor") whose estimand is to be estimated
:param estimator: Instance of a CausalEstimator to use
:param control_value: Value of the treatment in the control group, for effect estimation. If treatment is multi-variate, this can be a list.
:param treatment_value: Value of the treatment in the treated group, for effect estimation. If treatment is multi-variate, this can be a list.
:param target_units: (Experimental) The units for which the treatment effect should be estimated. This can be of three types. (1) a string for common specifications of target units (namely, "ate", "att" and "atc"), (2) a lambda function that can be used as an index for the data (pandas DataFrame), or (3) a new DataFrame that contains values of the effect_modifiers and effect will be estimated only for this new data.
:param effect_modifiers: Names of effect modifier variables can be (optionally) specified here too, since they do not affect identification. If None, defaults to an empty list.
:param fit_estimator: Boolean flag on whether to fit the estimator.
Setting it to False is useful to estimate the effect on new data using a previously fitted estimator.
:param method_params: Dictionary containing any method-specific parameters; a "fit_params" entry is passed through to the estimator's fit() method.
:returns: An instance of the CausalEstimate class, containing the causal effect estimate
and other method-dependent information
"""
if effect_modifiers is None:
effect_modifiers = []
treatment = parse_state(treatment)
outcome = parse_state(outcome)
causal_estimator_class = estimator.__class__
identified_estimand = estimator._target_estimand
identified_estimand.set_identifier_method(identifier_name)
if identified_estimand.no_directed_path:
logger.warning("No directed path from {0} to {1}.".format(treatment, outcome))
return CausalEstimate(
0, identified_estimand, None, control_value=control_value, treatment_value=treatment_value
)
# Check if estimator's target estimand is identified
elif identified_estimand.estimands[identifier_name] is None:
logger.error("No valid identified estimand available.")
return CausalEstimate(None, None, None, control_value=control_value, treatment_value=treatment_value)
if fit_estimator:
estimator.fit(
data=data,
treatment_name=treatment,
outcome_name=outcome,
effect_modifier_names=effect_modifiers,
**method_params["fit_params"] if "fit_params" in method_params else {},
)
estimate = estimator.estimate_effect(
treatment_value=treatment_value,
control_value=control_value,
target_units=target_units,
confidence_intervals=estimator._confidence_intervals,
)
# Store parameters inside estimate object for refutation methods
# TODO: This add_params needs to move to the estimator class
# inside estimate_effect and estimate_conditional_effect
estimate.add_params(
estimand_type=identified_estimand.estimand_type,
estimator_class=causal_estimator_class,
test_significance=estimator._significance_test,
evaluate_effect_strength=estimator._effect_strength_eval,
confidence_intervals=estimator._confidence_intervals,
target_units=target_units,
effect_modifiers=effect_modifiers,
)
if estimator._significance_test:
estimator.test_significance(estimate.value, method=estimator._significance_test)
if estimator._confidence_intervals:
estimator.estimate_confidence_intervals(
estimate.value, confidence_level=estimator.confidence_level, method=estimator._confidence_intervals
)
if estimator._effect_strength_eval:
effect_strength_dict = estimator.evaluate_effect_strength(estimate)
estimate.add_effect_strength(effect_strength_dict)
return estimate
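# Minimal usage sketch of the functional API defined above. The identification
# step and the concrete estimator class are assumptions here -- any
# CausalEstimator subclass with the new fit()/estimate_effect() interface
# should work the same way:
#
#     from dowhy.causal_estimators.linear_regression_estimator import (
#         LinearRegressionEstimator,
#     )
#
#     estimator = LinearRegressionEstimator(identified_estimand)
#     estimate = estimate_effect(
#         data=df,
#         treatment="treatment",
#         outcome="outcome",
#         identifier_name="backdoor",
#         estimator=estimator,
#         control_value=0,
#         treatment_value=1,
#     )
#     print(estimate.value)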
class CausalEstimate:
"""Class for the estimate object that every causal estimator returns"""
def __init__(
self,
estimate,
target_estimand,
realized_estimand_expr,
control_value,
treatment_value,
conditional_estimates=None,
**kwargs,
):
self.value = estimate
self.target_estimand = target_estimand
self.realized_estimand_expr = realized_estimand_expr
self.control_value = control_value
self.treatment_value = treatment_value
self.conditional_estimates = conditional_estimates
self.params = kwargs
if self.params is not None:
for key, value in self.params.items():
setattr(self, key, value)
self.effect_strength = None
def add_estimator(self, estimator_instance):
self.estimator = estimator_instance
def add_effect_strength(self, strength_dict):
self.effect_strength = strength_dict
def add_params(self, **kwargs):
self.params.update(kwargs)
def get_confidence_intervals(self, confidence_level=None, method=None, **kwargs):
"""Get confidence intervals of the obtained estimate.
By default, this is done with the help of bootstrapped confidence intervals
but can be overridden if the specific estimator implements other methods of estimating confidence intervals.
If the method provided is not bootstrap, this function calls the implementation of the specific estimator.
:param method: Method for estimating confidence intervals.
:param confidence_level: The confidence level of the confidence intervals of the estimate.
:param kwargs: Other optional args to be passed to the CI method.
:returns: The obtained confidence interval.
"""
confidence_intervals = self.estimator.estimate_confidence_intervals(
estimate_value=self.value, confidence_level=confidence_level, method=method, **kwargs
)
return confidence_intervals
def get_standard_error(self, method=None, **kwargs):
"""Get standard error of the obtained estimate.
By default, this is done with the help of bootstrapped standard errors
but can be overridden if the specific estimator implements other methods of estimating standard error.
If the method provided is not bootstrap, this function calls the implementation of the specific estimator.
:param method: Method for computing the standard error.
:param kwargs: Other optional parameters to be passed to the estimating method.
:returns: Standard error of the causal estimate.
"""
std_error = self.estimator.estimate_std_error(method=method, **kwargs)
return std_error
def test_stat_significance(self, method=None, **kwargs):
"""Test statistical significance of the estimate obtained.
By default, uses resampling to create a non-parametric significance test.
Individual child estimators can implement different methods.
If the method name is different from "bootstrap", this function calls the
implementation of the child estimator.
:param method: Method for checking statistical significance
:param kwargs: Other optional parameters to be passed to the estimating method.
:returns: p-value from the significance test
"""
signif_results = self.estimator.test_significance(self.value, method=method, **kwargs)
return {"p_value": signif_results["p_value"]}
def estimate_conditional_effects(
self, effect_modifiers=None, num_quantiles=CausalEstimator.NUM_QUANTILES_TO_DISCRETIZE_CONT_COLS
):
"""Estimate treatment effect conditioned on given variables.
If a numeric effect modifier is provided, it is discretized into quantile bins. If you would like a custom discretization, you can do so yourself: create a new column containing the discretized effect modifier and then include that column's name in the effect_modifier_names argument.
:param effect_modifiers: Names of effect modifier variables over which the conditional effects will be estimated. If not provided, defaults to the effect modifiers specified during creation of the CausalEstimator object.
:param num_quantiles: The number of quantiles into which a numeric effect modifier variable is discretized. Does not affect any categorical effect modifiers.
:returns: A (multi-index) dataframe that provides separate effects for each value of the (discretized) effect modifiers.
"""
return self.estimator._estimate_conditional_effects(
self.estimator._estimate_effect_fn, effect_modifiers, num_quantiles
)
def interpret(self, method_name=None, **kwargs):
"""Interpret the causal estimate.
:param method_name: Method used (string) or a list of methods. If None, then the default for the specific estimator is used.
:param kwargs: Optional parameters that are directly passed to the interpreter method.
:returns: None
"""
if method_name is None:
method_name = self.estimator.interpret_method
method_name_arr = parse_state(method_name)
for method in method_name_arr:
interpreter = interpreters.get_class_object(method)
interpreter(self, **kwargs).interpret()
def __str__(self):
s = "*** Causal Estimate ***\n"
# No estimand was identified (identification failed)
if self.target_estimand is None:
return "Estimation failed! No relevant identified estimand available for this estimation method."
s += "\n## Identified estimand\n{0}".format(self.target_estimand.__str__(only_target_estimand=True))
s += "\n## Realized estimand\n{0}".format(self.realized_estimand_expr)
if hasattr(self, "estimator"):
s += "\nTarget units: {0}\n".format(self.estimator.target_units_tostr())
s += "\n## Estimate\n"
s += "Mean value: {0}\n".format(self.value)
s += ""
if hasattr(self, "cate_estimates"):
s += "Effect estimates: {0}\n".format(self.cate_estimates)
if hasattr(self, "estimator"):
if self.estimator._significance_test:
s += "p-value: {0}\n".format(self.estimator.signif_results_tostr(self.test_stat_significance()))
if self.estimator._confidence_intervals:
s += "{0}% confidence interval: {1}\n".format(
100 * self.estimator.confidence_level, self.get_confidence_intervals()
)
if self.conditional_estimates is not None:
s += "### Conditional Estimates\n"
s += str(self.conditional_estimates)
if self.effect_strength is not None:
s += "\n## Effect Strength\n"
s += "Change in outcome attributable to treatment: {}\n".format(self.effect_strength["fraction-effect"])
# s += "Variance in outcome explained by treatment: {}\n".format(self.effect_strength["r-squared"])
return s
class RealizedEstimand(object):
def __init__(self, identified_estimand, estimator_name):
self.treatment_variable = identified_estimand.treatment_variable
self.outcome_variable = identified_estimand.outcome_variable
self.backdoor_variables = identified_estimand.get_backdoor_variables()
self.instrumental_variables = identified_estimand.instrumental_variables
self.estimand_type = identified_estimand.estimand_type
self.estimand_expression = None
self.assumptions = None
self.estimator_name = estimator_name
def update_assumptions(self, estimator_assumptions):
self.assumptions = estimator_assumptions
def update_estimand_expression(self, estimand_expression):
self.estimand_expression = estimand_expression
def __str__(self):
s = "Realized estimand: {0}\n".format(self.estimator_name)
s += "Realized estimand type: {0}\n".format(self.estimand_type)
s += "Estimand expression:\n{0}\n".format(sp.pretty(self.estimand_expression))
j = 1
for ass_name, ass_str in self.assumptions.items():
s += "Estimand assumption {0}, {1}: {2}\n".format(j, ass_name, ass_str)
j += 1
return s
| andresmor-ms | 11c4e0dafd6e824eb81ad14262457d954ae61468 | affe0952f4aba6845247355c171565510c2c1673 | Agree with @amit-sharma that ideally we wouldn't have to store this data at all. But to avoid delaying this PR too much, I'd be fine with simply renaming this method to `_set_data` to make it very explicit that this is not part of the API of that class. | petergtz | 209
py-why/dowhy | 746 | Functional api/causal estimators | * Introduce `fit()` method to estimators.
* Refactor constructors to avoid using `*args` and `**kwargs` and have more explicit parameters.
* Refactor refuters and other parts of the code to use `fit()` and modify arguments to `estimate_effect()` | null | 2022-11-04 16:15:39+00:00 | 2022-12-03 17:07:53+00:00 | dowhy/causal_estimator.py | import logging
from collections import namedtuple
from typing import Dict, List, Optional, Union
import numpy as np
import pandas as pd
import sympy as sp
from sklearn.utils import resample
import dowhy.interpreters as interpreters
from dowhy import causal_estimators
from dowhy.causal_graph import CausalGraph
from dowhy.causal_identifier.identified_estimand import IdentifiedEstimand
from dowhy.utils.api import parse_state
logger = logging.getLogger(__name__)
class CausalEstimator:
"""Base class for an estimator of causal effect.
Subclasses implement different estimation methods. All estimation methods are in the package "dowhy.causal_estimators"
"""
# The default number of simulations for statistical testing
DEFAULT_NUMBER_OF_SIMULATIONS_STAT_TEST = 1000
# The default number of simulations to obtain confidence intervals
DEFAULT_NUMBER_OF_SIMULATIONS_CI = 100
# The portion of the total size that should be taken each time to find the confidence intervals
# 1 is the recommended value
# https://ocw.mit.edu/courses/mathematics/18-05-introduction-to-probability-and-statistics-spring-2014/readings/MIT18_05S14_Reading24.pdf
# https://projecteuclid.org/download/pdf_1/euclid.ss/1032280214
DEFAULT_SAMPLE_SIZE_FRACTION = 1
# The default Confidence Level
DEFAULT_CONFIDENCE_LEVEL = 0.95
# Number of quantiles to discretize continuous columns, for applying groupby
NUM_QUANTILES_TO_DISCRETIZE_CONT_COLS = 5
# Prefix to add to temporary categorical variables created after discretization
TEMP_CAT_COLUMN_PREFIX = "__categorical__"
DEFAULT_NOTIMPLEMENTEDERROR_MSG = "not yet implemented for {0}. If you would like this to be implemented in the next version, please raise an issue at https://github.com/microsoft/dowhy/issues"
BootstrapEstimates = namedtuple("BootstrapEstimates", ["estimates", "params"])
DEFAULT_INTERPRET_METHOD = ["textual_effect_interpreter"]
# std args to be removed from locals() before being passed to args_dict
_STD_INIT_ARGS = ("self", "__class__", "args", "kwargs")
def __init__(
self,
data,
identified_estimand,
treatment,
outcome,
control_value=0,
treatment_value=1,
test_significance=False,
evaluate_effect_strength=False,
confidence_intervals=False,
target_units=None,
effect_modifiers=None,
num_null_simulations=DEFAULT_NUMBER_OF_SIMULATIONS_STAT_TEST,
num_simulations=DEFAULT_NUMBER_OF_SIMULATIONS_CI,
sample_size_fraction=DEFAULT_SAMPLE_SIZE_FRACTION,
confidence_level=DEFAULT_CONFIDENCE_LEVEL,
need_conditional_estimates="auto",
num_quantiles_to_discretize_cont_cols=NUM_QUANTILES_TO_DISCRETIZE_CONT_COLS,
**kwargs,
):
"""Initializes an estimator with data and names of relevant variables.
This method is called from the constructors of its child classes.
:param data: data frame containing the data
:param identified_estimand: probability expression
representing the target identified estimand to estimate.
:param treatment: name of the treatment variable
:param outcome: name of the outcome variable
:param control_value: Value of the treatment in the control group, for effect estimation. If treatment is multi-variate, this can be a list.
:param treatment_value: Value of the treatment in the treated group, for effect estimation. If treatment is multi-variate, this can be a list.
:param test_significance: Binary flag or a string indicating whether to test significance and by which method. All estimators support test_significance="bootstrap" that estimates a p-value for the obtained estimate using the bootstrap method. Individual estimators can override this to support custom testing methods. The bootstrap method supports an optional parameter, num_null_simulations. If False, no testing is done. If True, significance of the estimate is tested using the custom method if available, otherwise by bootstrap.
:param evaluate_effect_strength: (Experimental) whether to evaluate the strength of effect
:param confidence_intervals: Binary flag or a string indicating whether the confidence intervals should be computed and which method should be used. All methods support estimation of confidence intervals using the bootstrap method by using the parameter confidence_intervals="bootstrap". The bootstrap method takes in two arguments (num_simulations and sample_size_fraction) that can be optionally specified in the params dictionary. Estimators may also override this to implement their own confidence interval method. If this parameter is False, no confidence intervals are computed. If True, confidence intervals are computed by the estimator's specific method if available, otherwise through bootstrap.
:param target_units: The units for which the treatment effect should be estimated. This can be a string for common specifications of target units (namely, "ate", "att" and "atc"). It can also be a lambda function that can be used as an index for the data (pandas DataFrame). Alternatively, it can be a new DataFrame that contains values of the effect_modifiers and effect will be estimated only for this new data.
:param effect_modifiers: Variables on which to compute separate
effects, or return a heterogeneous effect function. Not all
methods support this currently.
:param num_null_simulations: The number of simulations for testing the
statistical significance of the estimator
:param num_simulations: The number of simulations for finding the
            confidence interval (and/or standard error) for an estimate
:param sample_size_fraction: The size of the sample for the bootstrap
estimator
:param confidence_level: The confidence level of the confidence
interval estimate
:param need_conditional_estimates: Boolean flag indicating whether
conditional estimates should be computed. Defaults to True if
there are effect modifiers in the graph
:param num_quantiles_to_discretize_cont_cols: The number of quantiles
into which a numeric effect modifier is split, to enable
estimation of conditional treatment effect over it.
:param kwargs: (optional) Additional estimator-specific parameters
:returns: an instance of the estimator class.
"""
self._data = data
self._target_estimand = identified_estimand
# Currently estimation methods only support univariate treatment and outcome
self._treatment_name = treatment
self._outcome_name = outcome[0] # assuming one-dimensional outcome
self._control_value = control_value
self._treatment_value = treatment_value
self._significance_test = test_significance
self._effect_strength_eval = evaluate_effect_strength
self._target_units = target_units
self._effect_modifier_names = effect_modifiers
self._confidence_intervals = confidence_intervals
self._bootstrap_estimates = None # for confidence intervals and std error
self._bootstrap_null_estimates = None # for significance test
self._effect_modifiers = None
self.method_params = kwargs
# Setting the default interpret method
self.interpret_method = CausalEstimator.DEFAULT_INTERPRET_METHOD
self.logger = logging.getLogger(__name__)
# Setting treatment and outcome values
if self._data is not None:
self._treatment = self._data[self._treatment_name]
self._outcome = self._data[self._outcome_name]
if self._effect_modifier_names:
# only add the observed nodes
self._effect_modifier_names = [
cname for cname in self._effect_modifier_names if cname in self._data.columns
]
if len(self._effect_modifier_names) > 0:
self._effect_modifiers = self._data[self._effect_modifier_names]
self._effect_modifiers = pd.get_dummies(self._effect_modifiers, drop_first=True)
self.logger.debug("Effect modifiers: " + ",".join(self._effect_modifier_names))
else:
self._effect_modifier_names = None
# Check if some parameters were set, otherwise set to default values
self.num_null_simulations = num_null_simulations
self.num_simulations = num_simulations
self.sample_size_fraction = sample_size_fraction
self.confidence_level = confidence_level
self.num_quantiles_to_discretize_cont_cols = num_quantiles_to_discretize_cont_cols
# Estimate conditional estimates by default
self.need_conditional_estimates = (
need_conditional_estimates if need_conditional_estimates != "auto" else bool(self._effect_modifier_names)
)
@staticmethod
def get_estimator_object(new_data, identified_estimand, estimate):
"""Create a new estimator of the same type as the one passed in the estimate argument.
Creates a new object with new_data and the identified_estimand
:param new_data: np.ndarray, pd.Series, pd.DataFrame
The newly assigned data on which the estimator should run
:param identified_estimand: IdentifiedEstimand
An instance of the identified estimand class that provides the information with
            respect to which causal pathways are employed when the treatment affects the outcome
:param estimate: CausalEstimate
It is an already existing estimate whose properties we wish to replicate
:returns: An instance of the same estimator class that had generated the given estimate.
"""
estimator_class = estimate.params["estimator_class"]
new_estimator = estimator_class(
new_data,
identified_estimand,
identified_estimand.treatment_variable,
identified_estimand.outcome_variable,
# names of treatment and outcome
control_value=estimate.control_value,
treatment_value=estimate.treatment_value,
test_significance=False,
evaluate_effect_strength=False,
confidence_intervals=estimate.params["confidence_intervals"],
target_units=estimate.params["target_units"],
effect_modifiers=estimate.params["effect_modifiers"],
**estimate.params["method_params"] if estimate.params["method_params"] is not None else {},
)
return new_estimator
def _estimate_effect(self):
"""This method is to be overriden by the child classes, so that they can run the estimation technique of their choice"""
raise NotImplementedError(
("Main estimation method is " + CausalEstimator.DEFAULT_NOTIMPLEMENTEDERROR_MSG).format(self.__class__)
)
def estimate_effect(self):
"""Base estimation method that calls the estimate_effect method of its calling subclass.
Can optionally also test significance and estimate effect strength for any returned estimate.
:param self: object instance of class Estimator
        :returns: A CausalEstimate instance that contains point estimates of average and conditional effects. Based on the parameters provided, it optionally includes confidence intervals, standard errors, statistical significance and other statistical parameters.
"""
est = self._estimate_effect()
est.add_estimator(self)
if self._significance_test:
self.test_significance(est.value, method=self._significance_test)
if self._confidence_intervals:
self.estimate_confidence_intervals(
est.value, confidence_level=self.confidence_level, method=self._confidence_intervals
)
if self._effect_strength_eval:
effect_strength_dict = self.evaluate_effect_strength(est)
est.add_effect_strength(effect_strength_dict)
return est
def estimate_effect_naive(self):
# TODO Only works for binary treatment
df_withtreatment = self._data.loc[self._data[self._treatment_name] == 1]
df_notreatment = self._data.loc[self._data[self._treatment_name] == 0]
est = np.mean(df_withtreatment[self._outcome_name]) - np.mean(df_notreatment[self._outcome_name])
return CausalEstimate(est, None, None, control_value=0, treatment_value=1)
def _estimate_effect_fn(self, data_df):
"""Function used in conditional effect estimation. This function is to be overridden by each child estimator.
The overridden function should take in a dataframe as input and return the estimate for that data.
"""
raise NotImplementedError(
("Conditional treatment effects are " + CausalEstimator.DEFAULT_NOTIMPLEMENTEDERROR_MSG).format(
self.__class__
)
)
def _estimate_conditional_effects(self, estimate_effect_fn, effect_modifier_names=None, num_quantiles=None):
"""Estimate conditional treatment effects. Common method for all estimators that utilizes a specific estimate_effect_fn implemented by each child estimator.
If a numeric effect modifier is provided, it is discretized into quantile bins. If you would like a custom discretization, you can do so yourself: create a new column containing the discretized effect modifier and then include that column's name in the effect_modifier_names argument.
:param estimate_effect_fn: Function that has a single parameter (a data frame) and returns the treatment effect estimate on that data.
:param effect_modifier_names: Names of effect modifier variables over which the conditional effects will be estimated. If not provided, defaults to the effect modifiers specified during creation of the CausalEstimator object.
:param num_quantiles: The number of quantiles into which a numeric effect modifier variable is discretized. Does not affect any categorical effect modifiers.
:returns: A (multi-index) dataframe that provides separate effects for each value of the (discretized) effect modifiers.
"""
# Defaulting to class default values if parameters are not provided
if effect_modifier_names is None:
effect_modifier_names = self._effect_modifier_names
if num_quantiles is None:
num_quantiles = self.num_quantiles_to_discretize_cont_cols
# Checking that there is at least one effect modifier
if not effect_modifier_names:
raise ValueError("At least one effect modifier should be specified to compute conditional effects.")
# Making sure that effect_modifier_names is a list
effect_modifier_names = parse_state(effect_modifier_names)
if not all(em in self._effect_modifier_names for em in effect_modifier_names):
self.logger.warn(
"At least one of the provided effect modifiers was not included while fitting the estimator. You may get incorrect results. To resolve, fit the estimator again by providing the updated effect modifiers in estimate_effect()."
)
# Making a copy since we are going to be changing effect modifier names
effect_modifier_names = effect_modifier_names.copy()
prefix = CausalEstimator.TEMP_CAT_COLUMN_PREFIX
# For every numeric effect modifier, adding a temp categorical column
for i in range(len(effect_modifier_names)):
em = effect_modifier_names[i]
if pd.api.types.is_numeric_dtype(self._data[em].dtypes):
self._data[prefix + str(em)] = pd.qcut(self._data[em], num_quantiles, duplicates="drop")
effect_modifier_names[i] = prefix + str(em)
# Grouping by effect modifiers and computing effect separately
by_effect_mods = self._data.groupby(effect_modifier_names)
cond_est_fn = lambda x: self._do(self._treatment_value, x) - self._do(self._control_value, x)
conditional_estimates = by_effect_mods.apply(estimate_effect_fn)
# Deleting the temporary categorical columns
for em in effect_modifier_names:
if em.startswith(prefix):
self._data.pop(em)
return conditional_estimates
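    # A minimal, standalone pandas sketch of the discretize-then-groupby idea used in
    # _estimate_conditional_effects above. The frame and column names ("t", "em", "y")
    # are hypothetical and not part of this class:
    #
    #   import numpy as np
    #   import pandas as pd
    #   rng = np.random.default_rng(0)
    #   df = pd.DataFrame({"t": rng.binomial(1, 0.5, 1000), "em": rng.normal(size=1000)})
    #   df["y"] = 2 * df["t"] + df["em"] + rng.normal(size=1000)
    #   df["em_bin"] = pd.qcut(df["em"], 5, duplicates="drop")  # quantile-discretize the modifier
    #   naive_effect = lambda g: g.loc[g["t"] == 1, "y"].mean() - g.loc[g["t"] == 0, "y"].mean()
    #   print(df.groupby("em_bin").apply(naive_effect))  # one (naive) effect per quantile bin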
def _do(self, x, data_df=None):
raise NotImplementedError(
("Do-operator is " + CausalEstimator.DEFAULT_NOTIMPLEMENTEDERROR_MSG).format(self.__class__)
)
def do(self, x, data_df=None):
"""Method that implements the do-operator.
Given a value x for the treatment, returns the expected value of the outcome when the treatment is intervened to a value x.
:param x: Value of the treatment
:param data_df: Data on which the do-operator is to be applied.
:returns: Value of the outcome when treatment is intervened/set to x.
"""
est = self._do(x, data_df)
return est
def construct_symbolic_estimator(self, estimand):
raise NotImplementedError(("Symbolic estimator string is ").format(self.__class__))
def _generate_bootstrap_estimates(self, num_bootstrap_simulations, sample_size_fraction):
"""Helper function to generate causal estimates over bootstrapped samples.
:param num_bootstrap_simulations: Number of simulations for the bootstrap method.
:param sample_size_fraction: Fraction of the dataset to be resampled.
:returns: A collections.namedtuple containing a list of bootstrapped estimates and a dictionary containing parameters used for the bootstrap.
"""
# The array that stores the results of all estimations
simulation_results = np.zeros(num_bootstrap_simulations)
        # Find the sample size as a proportion of the population size
sample_size = int(sample_size_fraction * len(self._data))
if sample_size > len(self._data):
self.logger.warning("WARN: The sample size is greater than the data being sampled")
self.logger.info("INFO: The sample size: {}".format(sample_size))
self.logger.info("INFO: The number of simulations: {}".format(num_bootstrap_simulations))
# Perform the set number of simulations
for index in range(num_bootstrap_simulations):
new_data = resample(self._data, n_samples=sample_size)
new_estimator = type(self)(
new_data,
self._target_estimand,
self._target_estimand.treatment_variable,
self._target_estimand.outcome_variable,
# names of treatment and outcome
treatment_value=self._treatment_value,
control_value=self._control_value,
test_significance=False,
evaluate_effect_strength=False,
confidence_intervals=False,
target_units=self._target_units,
effect_modifiers=self._effect_modifier_names,
**self.method_params,
)
new_effect = new_estimator.estimate_effect()
simulation_results[index] = new_effect.value
estimates = CausalEstimator.BootstrapEstimates(
simulation_results,
{"num_simulations": num_bootstrap_simulations, "sample_size_fraction": sample_size_fraction},
)
return estimates
def _estimate_confidence_intervals_with_bootstrap(
self, estimate_value, confidence_level=None, num_simulations=None, sample_size_fraction=None
):
"""
Method to compute confidence interval using bootstrapped sampling.
:param estimate_value: obtained estimate's value
:param confidence_level: The level for which to compute CI (e.g., 95% confidence level translates to confidence_level=0.95)
:param num_simulations: The number of simulations to be performed to get the bootstrap confidence intervals.
:param sample_size_fraction: The fraction of the dataset to be resampled.
:returns: confidence interval at the specified level.
For more details on bootstrap or resampling statistics, refer to the following links:
https://ocw.mit.edu/courses/mathematics/18-05-introduction-to-probability-and-statistics-spring-2014/readings/MIT18_05S14_Reading24.pdf
https://projecteuclid.org/download/pdf_1/euclid.ss/1032280214
"""
# Using class default parameters if not specified
if num_simulations is None:
num_simulations = self.num_simulations
if sample_size_fraction is None:
sample_size_fraction = self.sample_size_fraction
# Checking if bootstrap_estimates are already computed
if self._bootstrap_estimates is None:
self._bootstrap_estimates = self._generate_bootstrap_estimates(num_simulations, sample_size_fraction)
elif CausalEstimator.is_bootstrap_parameter_changed(self._bootstrap_estimates.params, locals()):
# Checked if any parameter is changed from the previous std error estimate
self._bootstrap_estimates = self._generate_bootstrap_estimates(num_simulations, sample_size_fraction)
# Now use the data obtained from the simulations to get the value of the confidence estimates
bootstrap_estimates = self._bootstrap_estimates.estimates
# Get the variations of each bootstrap estimate and sort
bootstrap_variations = [bootstrap_estimate - estimate_value for bootstrap_estimate in bootstrap_estimates]
sorted_bootstrap_variations = np.sort(bootstrap_variations)
# Now we take the (1- p)th and the (p)th variations, where p is the chosen confidence level
upper_bound_index = int((1 - confidence_level) * len(sorted_bootstrap_variations))
lower_bound_index = int(confidence_level * len(sorted_bootstrap_variations))
# Get the lower and upper bounds by subtracting the variations from the estimate
lower_bound = estimate_value - sorted_bootstrap_variations[lower_bound_index]
upper_bound = estimate_value - sorted_bootstrap_variations[upper_bound_index]
return lower_bound, upper_bound
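    # The pivot/percentile computation above, restated as a minimal numpy sketch on
    # hypothetical bootstrap estimates (illustrative only, independent of this class):
    #
    #   import numpy as np
    #   rng = np.random.default_rng(0)
    #   estimate_value, confidence_level = 1.7, 0.95
    #   bootstrap_estimates = rng.normal(loc=estimate_value, scale=0.1, size=100)
    #   variations = np.sort(bootstrap_estimates - estimate_value)
    #   n = len(variations)
    #   lower = estimate_value - variations[int(confidence_level * n)]
    #   upper = estimate_value - variations[int((1 - confidence_level) * n)]
    #   print(lower, upper)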
def _estimate_confidence_intervals(self, confidence_level=None, method=None, **kwargs):
"""
        This method is to be overridden by the child classes, so that they
can run a confidence interval estimation method suited to the specific
causal estimator.
"""
raise NotImplementedError(
(
"This method for estimating confidence intervals is "
+ CausalEstimator.DEFAULT_NOTIMPLEMENTEDERROR_MSG
+ " Meanwhile, you can try the bootstrap method (method='bootstrap') to estimate confidence intervals."
).format(self.__class__)
)
def estimate_confidence_intervals(self, estimate_value, confidence_level=None, method=None, **kwargs):
"""Find the confidence intervals corresponding to any estimator
By default, this is done with the help of bootstrapped confidence intervals
but can be overridden if the specific estimator implements other methods of estimating confidence intervals.
If the method provided is not bootstrap, this function calls the implementation of the specific estimator.
:param estimate_value: obtained estimate's value
:param method: Method for estimating confidence intervals.
:param confidence_level: The confidence level of the confidence intervals of the estimate.
:param kwargs: Other optional args to be passed to the CI method.
:returns: The obtained confidence interval.
"""
if method is None:
if self._confidence_intervals:
method = self._confidence_intervals # this is either True or methodname
else:
method = "default"
confidence_intervals = None
if confidence_level is None:
confidence_level = self.confidence_level
if method == "default" or method is True: # user has not provided any method
try:
confidence_intervals = self._estimate_confidence_intervals(confidence_level, method=method, **kwargs)
except NotImplementedError:
confidence_intervals = self._estimate_confidence_intervals_with_bootstrap(
estimate_value, confidence_level, **kwargs
)
else:
if method == "bootstrap":
confidence_intervals = self._estimate_confidence_intervals_with_bootstrap(
estimate_value, confidence_level, **kwargs
)
else:
confidence_intervals = self._estimate_confidence_intervals(confidence_level, method=method, **kwargs)
return confidence_intervals
def _estimate_std_error_with_bootstrap(self, num_simulations=None, sample_size_fraction=None):
"""Compute standard error using the bootstrap method. Standard error
and confidence intervals use the same parameter num_simulations for
the number of bootstrap simulations.
:param num_simulations: Number of bootstrapped samples.
:param sample_size_fraction: Fraction of data to be resampled.
:returns: Standard error of the obtained estimate.
"""
# Use existing params, if new user defined params are not present
if num_simulations is None:
num_simulations = self.num_simulations
if sample_size_fraction is None:
sample_size_fraction = self.sample_size_fraction
# Checking if bootstrap_estimates are already computed
if self._bootstrap_estimates is None:
self._bootstrap_estimates = self._generate_bootstrap_estimates(num_simulations, sample_size_fraction)
elif CausalEstimator.is_bootstrap_parameter_changed(self._bootstrap_estimates.params, locals()):
# Check if any parameter is changed from the previous std error estimate
self._bootstrap_estimates = self._generate_bootstrap_estimates(num_simulations, sample_size_fraction)
std_error = np.std(self._bootstrap_estimates.estimates)
return std_error
def _estimate_std_error(self, method=None, **kwargs):
"""
        This method is to be overridden by the child classes, so that they
can run a standard error estimation method suited to the specific
causal estimator.
"""
raise NotImplementedError(
(
"This method for estimating standard errors is "
+ CausalEstimator.DEFAULT_NOTIMPLEMENTEDERROR_MSG
+ " Meanwhile, you can try the bootstrap method (method='bootstrap') to estimate standard errors."
).format(self.__class__)
)
def estimate_std_error(self, method=None, **kwargs):
"""Compute standard error of an obtained causal estimate.
:param method: Method for computing the standard error.
:param kwargs: Other optional parameters to be passed to the estimating method.
:returns: Standard error of the causal estimate.
"""
if method is None:
if self._confidence_intervals:
method = self._confidence_intervals
else:
method = "default"
std_error = None
if method == "default" or method is True: # user has not provided any method
try:
std_error = self._estimate_std_error(method, **kwargs)
except NotImplementedError:
std_error = self._estimate_std_error_with_bootstrap(**kwargs)
else:
if method == "bootstrap":
std_error = self._estimate_std_error_with_bootstrap(**kwargs)
else:
std_error = self._estimate_std_error(method, **kwargs)
return std_error
def _test_significance_with_bootstrap(self, estimate_value, num_null_simulations=None):
"""Test statistical significance of an estimate using the bootstrap method.
:param estimate_value: Obtained estimate's value
:param num_null_simulations: Number of simulations for the null hypothesis
:returns: p-value of the statistical significance test.
"""
# Use existing params, if new user defined params are not present
if num_null_simulations is None:
num_null_simulations = self.num_null_simulations
do_retest = self._bootstrap_null_estimates is None or CausalEstimator.is_bootstrap_parameter_changed(
self._bootstrap_null_estimates.params, locals()
)
if do_retest:
null_estimates = np.zeros(num_null_simulations)
for i in range(num_null_simulations):
new_outcome = np.random.permutation(self._outcome)
new_data = self._data.assign(dummy_outcome=new_outcome)
# self._outcome = self._data["dummy_outcome"]
new_estimator = type(self)(
new_data,
self._target_estimand,
self._target_estimand.treatment_variable,
("dummy_outcome",),
test_significance=False,
evaluate_effect_strength=False,
confidence_intervals=False,
target_units=self._target_units,
effect_modifiers=self._effect_modifier_names,
**self.method_params,
)
new_effect = new_estimator.estimate_effect()
null_estimates[i] = new_effect.value
self._bootstrap_null_estimates = CausalEstimator.BootstrapEstimates(
null_estimates, {"num_null_simulations": num_null_simulations, "sample_size_fraction": 1}
)
# Processing the null hypothesis estimates
sorted_null_estimates = np.sort(self._bootstrap_null_estimates.estimates)
self.logger.debug("Null estimates: {0}".format(sorted_null_estimates))
median_estimate = sorted_null_estimates[int(num_null_simulations / 2)]
# Doing a two-sided test
if estimate_value > median_estimate:
# Being conservative with the p-value reported
estimate_index = np.searchsorted(sorted_null_estimates, estimate_value, side="left")
p_value = 1 - (estimate_index / num_null_simulations)
if estimate_value <= median_estimate:
# Being conservative with the p-value reported
estimate_index = np.searchsorted(sorted_null_estimates, estimate_value, side="right")
p_value = estimate_index / num_null_simulations
# If the estimate_index is 0, it depends on the number of simulations
if p_value == 0:
p_value = (0, 1 / len(sorted_null_estimates)) # a tuple determining the range.
elif p_value == 1:
p_value = (1 - 1 / len(sorted_null_estimates), 1)
signif_dict = {"p_value": p_value}
return signif_dict
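    # A minimal numpy sketch of the one-tail p-value lookup above, on hypothetical
    # permutation-null estimates (illustrative only; the method above is two-sided):
    #
    #   import numpy as np
    #   rng = np.random.default_rng(0)
    #   null_estimates = np.sort(rng.normal(size=1000))  # estimates under permuted outcomes
    #   estimate_value = 2.5
    #   estimate_index = np.searchsorted(null_estimates, estimate_value, side="left")
    #   p_value = 1 - estimate_index / len(null_estimates)
    #   print(p_value)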
def _test_significance(self, estimate_value, method=None, **kwargs):
"""
        This method is to be overridden by the child classes, so that they
can run a significance test suited to the specific
causal estimator.
"""
raise NotImplementedError(
(
"This method for testing statistical significance is "
+ CausalEstimator.DEFAULT_NOTIMPLEMENTEDERROR_MSG
+ " Meanwhile, you can try the bootstrap method (method='bootstrap') to test statistical significance."
).format(self.__class__)
)
def test_significance(self, estimate_value, method=None, **kwargs):
"""Test statistical significance of obtained estimate.
By default, uses resampling to create a non-parametric significance test.
        This is a general procedure; individual child estimators can implement different methods.
If the method name is different from "bootstrap", this function calls the
implementation of the child estimator.
:param self: object instance of class Estimator
:param estimate_value: obtained estimate's value
:param method: Method for checking statistical significance
:returns: p-value from the significance test
"""
if method is None:
if self._significance_test:
method = self._significance_test # this is either True or methodname
else:
method = "default"
signif_dict = None
if method == "default" or method is True: # user has not provided any method
try:
signif_dict = self._test_significance(estimate_value, method, **kwargs)
except NotImplementedError:
signif_dict = self._test_significance_with_bootstrap(estimate_value, **kwargs)
else:
if method == "bootstrap":
signif_dict = self._test_significance_with_bootstrap(estimate_value, **kwargs)
else:
signif_dict = self._test_significance(estimate_value, method, **kwargs)
return signif_dict
def evaluate_effect_strength(self, estimate):
fraction_effect_explained = self._evaluate_effect_strength(estimate, method="fraction-effect")
# Need to test r-squared before supporting
# effect_r_squared = self._evaluate_effect_strength(estimate, method="r-squared")
strength_dict = {
"fraction-effect": fraction_effect_explained
# 'r-squared': effect_r_squared
}
return strength_dict
def _evaluate_effect_strength(self, estimate, method="fraction-effect"):
supported_methods = ["fraction-effect"]
if method not in supported_methods:
raise NotImplementedError("This method is not supported for evaluating effect strength")
if method == "fraction-effect":
naive_obs_estimate = self.estimate_effect_naive()
self.logger.debug(estimate.value, naive_obs_estimate.value)
fraction_effect_explained = estimate.value / naive_obs_estimate.value
return fraction_effect_explained
# elif method == "r-squared":
# outcome_mean = np.mean(self._outcome)
# total_variance = np.sum(np.square(self._outcome - outcome_mean))
# Assuming a linear model with one variable: the treatment
# Currently only works for continuous y
# causal_model = outcome_mean + estimate.value*self._treatment
# squared_residual = np.sum(np.square(self._outcome - causal_model))
# r_squared = 1 - (squared_residual/total_variance)
# return r_squared
else:
return None
def update_input(self, treatment_value, control_value, target_units):
self._control_value = control_value
self._treatment_value = treatment_value
self._target_units = target_units
@staticmethod
def is_bootstrap_parameter_changed(bootstrap_estimates_params, given_params):
"""Check whether parameters of the bootstrap have changed.
This is an efficiency method that checks if fresh resampling of the bootstrap samples is required.
Returns True if parameters have changed and resampling should be done again.
:param bootstrap_estimates_params: A dictionary of parameters for the current bootstrap samples
:param given_params: A dictionary of parameters passed by the user
:returns: A binary flag denoting whether the parameters are different.
"""
is_any_parameter_changed = False
for prm, val in bootstrap_estimates_params.items():
given_val = given_params.get(prm, None)
if given_val is not None and given_val != val:
is_any_parameter_changed = True
break
return is_any_parameter_changed
def target_units_tostr(self):
s = ""
if type(self._target_units) is str:
s += self._target_units
elif callable(self._target_units):
s += "Data subset defined by a function"
elif isinstance(self._target_units, pd.DataFrame):
s += "Data subset provided as a data frame"
return s
def signif_results_tostr(self, signif_results):
s = ""
pval = signif_results["p_value"]
if type(pval) is tuple:
s += "[{0}, {1}]".format(pval[0], pval[1])
else:
s += "{0}".format(pval)
return s
def estimate_effect(
treatment: Union[str, List[str]],
outcome: Union[str, List[str]],
identified_estimand: IdentifiedEstimand,
identifier_name: str,
method: CausalEstimator,
control_value: int = 0,
treatment_value: int = 1,
test_significance: Optional[bool] = None,
evaluate_effect_strength: bool = False,
confidence_intervals: bool = False,
target_units: str = "ate",
effect_modifiers: List[str] = [],
fit_estimator: bool = True,
method_params: Optional[Dict] = None,
):
"""Estimate the identified causal effect.
    Currently requires an explicit method name to be specified. Method names follow the convention of identification method followed by the specific estimation method: "[backdoor/iv].estimation_method_name". The following methods are supported.
* Propensity Score Matching: "backdoor.propensity_score_matching"
* Propensity Score Stratification: "backdoor.propensity_score_stratification"
* Propensity Score-based Inverse Weighting: "backdoor.propensity_score_weighting"
* Linear Regression: "backdoor.linear_regression"
* Generalized Linear Models (e.g., logistic regression): "backdoor.generalized_linear_model"
* Instrumental Variables: "iv.instrumental_variable"
* Regression Discontinuity: "iv.regression_discontinuity"
In addition, you can directly call any of the EconML estimation methods. The convention is "backdoor.econml.path-to-estimator-class". For example, for the double machine learning estimator ("DML" class) that is located inside "dml" module of EconML, you can use the method name, "backdoor.econml.dml.DML". CausalML estimators can also be called. See `this demo notebook <https://py-why.github.io/dowhy/example_notebooks/dowhy-conditional-treatment-effects.html>`_.
:param treatment: Name of the treatment
:param outcome: Name of the outcome
:param identified_estimand: a probability expression
that represents the effect to be estimated. Output of
CausalModel.identify_effect method
    :param identifier_name: name of the identification method used (e.g., "backdoor" or "iv")
    :param method: an instance of a CausalEstimator subclass implementing the estimation method to be used
:param control_value: Value of the treatment in the control group, for effect estimation. If treatment is multi-variate, this can be a list.
:param treatment_value: Value of the treatment in the treated group, for effect estimation. If treatment is multi-variate, this can be a list.
    :param test_significance: Binary flag on whether to additionally do a statistical significance test for the estimate.
:param evaluate_effect_strength: (Experimental) Binary flag on whether to estimate the relative strength of the treatment's effect. This measure can be used to compare different treatments for the same outcome (by running this method with different treatments sequentially).
:param confidence_intervals: (Experimental) Binary flag indicating whether confidence intervals should be computed.
:param target_units: (Experimental) The units for which the treatment effect should be estimated. This can be of three types. (1) a string for common specifications of target units (namely, "ate", "att" and "atc"), (2) a lambda function that can be used as an index for the data (pandas DataFrame), or (3) a new DataFrame that contains values of the effect_modifiers and effect will be estimated only for this new data.
:param effect_modifiers: Names of effect modifier variables can be (optionally) specified here too, since they do not affect identification. If None, the effect_modifiers from the CausalModel are used.
:param fit_estimator: Boolean flag on whether to fit the estimator.
Setting it to False is useful to estimate the effect on new data using a previously fitted estimator.
:param method_params: Dictionary containing any method-specific parameters. These are passed directly to the estimating method. See the docs for each estimation method for allowed method-specific params.
:returns: An instance of the CausalEstimate class, containing the causal effect estimate
and other method-dependent information
"""
treatment = parse_state(treatment)
outcome = parse_state(outcome)
causal_estimator_class = method.__class__
identified_estimand.set_identifier_method(identifier_name)
if identified_estimand.no_directed_path:
logger.warning("No directed path from {0} to {1}.".format(treatment, outcome))
return CausalEstimate(
0, identified_estimand, None, control_value=control_value, treatment_value=treatment_value
)
# Check if estimator's target estimand is identified
elif identified_estimand.estimands[identifier_name] is None:
logger.error("No valid identified estimand available.")
return CausalEstimate(None, None, None, control_value=control_value, treatment_value=treatment_value)
method.update_input(treatment_value, control_value, target_units)
estimate = method.estimate_effect()
# Store parameters inside estimate object for refutation methods
# TODO: This add_params needs to move to the estimator class
# inside estimate_effect and estimate_conditional_effect
estimate.add_params(
estimand_type=identified_estimand.estimand_type,
estimator_class=causal_estimator_class,
test_significance=test_significance,
evaluate_effect_strength=evaluate_effect_strength,
confidence_intervals=confidence_intervals,
target_units=target_units,
effect_modifiers=effect_modifiers,
method_params=method_params,
)
return estimate
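# A hedged usage sketch for the functional estimate_effect() above. The helper name and
# column names ("v0", "y") are hypothetical; identified_estimand would come from an
# identification step, and `estimator` is a constructed CausalEstimator subclass.
def _example_estimate_effect_usage(identified_estimand, estimator):
    """Illustrative only; not part of the dowhy API."""
    return estimate_effect(
        treatment="v0",
        outcome="y",
        identified_estimand=identified_estimand,
        identifier_name="backdoor",
        method=estimator,
        control_value=0,
        treatment_value=1,
        target_units="ate",
    )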
class CausalEstimate:
"""Class for the estimate object that every causal estimator returns"""
def __init__(
self,
estimate,
target_estimand,
realized_estimand_expr,
control_value,
treatment_value,
conditional_estimates=None,
**kwargs,
):
self.value = estimate
self.target_estimand = target_estimand
self.realized_estimand_expr = realized_estimand_expr
self.control_value = control_value
self.treatment_value = treatment_value
self.conditional_estimates = conditional_estimates
self.params = kwargs
if self.params is not None:
for key, value in self.params.items():
setattr(self, key, value)
self.effect_strength = None
def add_estimator(self, estimator_instance):
self.estimator = estimator_instance
def add_effect_strength(self, strength_dict):
self.effect_strength = strength_dict
def add_params(self, **kwargs):
self.params.update(kwargs)
def get_confidence_intervals(self, confidence_level=None, method=None, **kwargs):
"""Get confidence intervals of the obtained estimate.
By default, this is done with the help of bootstrapped confidence intervals
but can be overridden if the specific estimator implements other methods of estimating confidence intervals.
If the method provided is not bootstrap, this function calls the implementation of the specific estimator.
:param method: Method for estimating confidence intervals.
:param confidence_level: The confidence level of the confidence intervals of the estimate.
:param kwargs: Other optional args to be passed to the CI method.
:returns: The obtained confidence interval.
"""
confidence_intervals = self.estimator.estimate_confidence_intervals(
estimate_value=self.value, confidence_level=confidence_level, method=method, **kwargs
)
return confidence_intervals
def get_standard_error(self, method=None, **kwargs):
"""Get standard error of the obtained estimate.
By default, this is done with the help of bootstrapped standard errors
but can be overridden if the specific estimator implements other methods of estimating standard error.
If the method provided is not bootstrap, this function calls the implementation of the specific estimator.
:param method: Method for computing the standard error.
:param kwargs: Other optional parameters to be passed to the estimating method.
:returns: Standard error of the causal estimate.
"""
std_error = self.estimator.estimate_std_error(method=method, **kwargs)
return std_error
def test_stat_significance(self, method=None, **kwargs):
"""Test statistical significance of the estimate obtained.
By default, uses resampling to create a non-parametric significance test.
Individual child estimators can implement different methods.
If the method name is different from "bootstrap", this function calls the
implementation of the child estimator.
:param method: Method for checking statistical significance
:param kwargs: Other optional parameters to be passed to the estimating method.
:returns: p-value from the significance test
"""
signif_results = self.estimator.test_significance(self.value, method=method, **kwargs)
return {"p_value": signif_results["p_value"]}
def estimate_conditional_effects(
self, effect_modifiers=None, num_quantiles=CausalEstimator.NUM_QUANTILES_TO_DISCRETIZE_CONT_COLS
):
"""Estimate treatment effect conditioned on given variables.
If a numeric effect modifier is provided, it is discretized into quantile bins. If you would like a custom discretization, you can do so yourself: create a new column containing the discretized effect modifier and then include that column's name in the effect_modifier_names argument.
:param effect_modifiers: Names of effect modifier variables over which the conditional effects will be estimated. If not provided, defaults to the effect modifiers specified during creation of the CausalEstimator object.
:param num_quantiles: The number of quantiles into which a numeric effect modifier variable is discretized. Does not affect any categorical effect modifiers.
:returns: A (multi-index) dataframe that provides separate effects for each value of the (discretized) effect modifiers.
"""
return self.estimator._estimate_conditional_effects(
self.estimator._estimate_effect_fn, effect_modifiers, num_quantiles
)
def interpret(self, method_name=None, **kwargs):
"""Interpret the causal estimate.
:param method_name: Method used (string) or a list of methods. If None, then the default for the specific estimator is used.
        :param kwargs: Optional parameters that are directly passed to the interpreter method.
:returns: None
"""
if method_name is None:
method_name = self.estimator.interpret_method
method_name_arr = parse_state(method_name)
for method in method_name_arr:
interpreter = interpreters.get_class_object(method)
interpreter(self, **kwargs).interpret()
def __str__(self):
s = "*** Causal Estimate ***\n"
# No estimand was identified (identification failed)
if self.target_estimand is None:
return "Estimation failed! No relevant identified estimand available for this estimation method."
s += "\n## Identified estimand\n{0}".format(self.target_estimand.__str__(only_target_estimand=True))
s += "\n## Realized estimand\n{0}".format(self.realized_estimand_expr)
if hasattr(self, "estimator"):
s += "\nTarget units: {0}\n".format(self.estimator.target_units_tostr())
s += "\n## Estimate\n"
s += "Mean value: {0}\n".format(self.value)
s += ""
if hasattr(self, "cate_estimates"):
s += "Effect estimates: {0}\n".format(self.cate_estimates)
if hasattr(self, "estimator"):
if self.estimator._significance_test:
s += "p-value: {0}\n".format(self.estimator.signif_results_tostr(self.test_stat_significance()))
if self.estimator._confidence_intervals:
s += "{0}% confidence interval: {1}\n".format(
100 * self.estimator.confidence_level, self.get_confidence_intervals()
)
if self.conditional_estimates is not None:
s += "### Conditional Estimates\n"
s += str(self.conditional_estimates)
if self.effect_strength is not None:
s += "\n## Effect Strength\n"
s += "Change in outcome attributable to treatment: {}\n".format(self.effect_strength["fraction-effect"])
# s += "Variance in outcome explained by treatment: {}\n".format(self.effect_strength["r-squared"])
return s
class RealizedEstimand(object):
def __init__(self, identified_estimand, estimator_name):
self.treatment_variable = identified_estimand.treatment_variable
self.outcome_variable = identified_estimand.outcome_variable
self.backdoor_variables = identified_estimand.get_backdoor_variables()
self.instrumental_variables = identified_estimand.instrumental_variables
self.estimand_type = identified_estimand.estimand_type
self.estimand_expression = None
self.assumptions = None
self.estimator_name = estimator_name
def update_assumptions(self, estimator_assumptions):
self.assumptions = estimator_assumptions
def update_estimand_expression(self, estimand_expression):
self.estimand_expression = estimand_expression
def __str__(self):
s = "Realized estimand: {0}\n".format(self.estimator_name)
s += "Realized estimand type: {0}\n".format(self.estimand_type)
s += "Estimand expression:\n{0}\n".format(sp.pretty(self.estimand_expression))
j = 1
for ass_name, ass_str in self.assumptions.items():
s += "Estimand assumption {0}, {1}: {2}\n".format(j, ass_name, ass_str)
j += 1
return s
| import copy
import logging
from collections import namedtuple
from typing import Dict, List, Optional, Union
import numpy as np
import pandas as pd
import sympy as sp
from sklearn.utils import resample
import dowhy.interpreters as interpreters
from dowhy.causal_identifier.identified_estimand import IdentifiedEstimand
from dowhy.utils.api import parse_state
logger = logging.getLogger(__name__)
class CausalEstimator:
"""Base class for an estimator of causal effect.
Subclasses implement different estimation methods. All estimation methods are in the package "dowhy.causal_estimators"
"""
# The default number of simulations for statistical testing
DEFAULT_NUMBER_OF_SIMULATIONS_STAT_TEST = 1000
# The default number of simulations to obtain confidence intervals
DEFAULT_NUMBER_OF_SIMULATIONS_CI = 100
# The portion of the total size that should be taken each time to find the confidence intervals
# 1 is the recommended value
# https://ocw.mit.edu/courses/mathematics/18-05-introduction-to-probability-and-statistics-spring-2014/readings/MIT18_05S14_Reading24.pdf
# https://projecteuclid.org/download/pdf_1/euclid.ss/1032280214
DEFAULT_SAMPLE_SIZE_FRACTION = 1
# The default Confidence Level
DEFAULT_CONFIDENCE_LEVEL = 0.95
# Number of quantiles to discretize continuous columns, for applying groupby
NUM_QUANTILES_TO_DISCRETIZE_CONT_COLS = 5
# Prefix to add to temporary categorical variables created after discretization
TEMP_CAT_COLUMN_PREFIX = "__categorical__"
    DEFAULT_NOTIMPLEMENTEDERROR_MSG = "not yet implemented for {0}. If you would like this to be implemented in the next version, please raise an issue at https://github.com/microsoft/dowhy/issues"
BootstrapEstimates = namedtuple("BootstrapEstimates", ["estimates", "params"])
DEFAULT_INTERPRET_METHOD = ["textual_effect_interpreter"]
# std args to be removed from locals() before being passed to args_dict
_STD_INIT_ARGS = ("self", "__class__", "args", "kwargs")
def __init__(
self,
identified_estimand: IdentifiedEstimand,
test_significance: bool = False,
evaluate_effect_strength: bool = False,
confidence_intervals: bool = False,
num_null_simulations: int = DEFAULT_NUMBER_OF_SIMULATIONS_STAT_TEST,
num_simulations: int = DEFAULT_NUMBER_OF_SIMULATIONS_CI,
        sample_size_fraction: float = DEFAULT_SAMPLE_SIZE_FRACTION,
confidence_level: float = DEFAULT_CONFIDENCE_LEVEL,
need_conditional_estimates: Union[bool, str] = "auto",
num_quantiles_to_discretize_cont_cols: int = NUM_QUANTILES_TO_DISCRETIZE_CONT_COLS,
**_,
):
"""Initializes an estimator with data and names of relevant variables.
This method is called from the constructors of its child classes.
:param identified_estimand: probability expression
representing the target identified estimand to estimate.
:param test_significance: Binary flag or a string indicating whether to test significance and by which method. All estimators support test_significance="bootstrap" that estimates a p-value for the obtained estimate using the bootstrap method. Individual estimators can override this to support custom testing methods. The bootstrap method supports an optional parameter, num_null_simulations. If False, no testing is done. If True, significance of the estimate is tested using the custom method if available, otherwise by bootstrap.
:param evaluate_effect_strength: (Experimental) whether to evaluate the strength of effect
        :param confidence_intervals: Binary flag or a string indicating whether the confidence intervals should be computed and which method should be used. All methods support estimation of confidence intervals using the bootstrap method by using the parameter confidence_intervals="bootstrap". The bootstrap method takes in two arguments (num_simulations and sample_size_fraction) that can be optionally specified in the params dictionary. Estimators may also override this to implement their own confidence interval method. If this parameter is False, no confidence intervals are computed. If True, confidence intervals are computed by the estimator's specific method if available, otherwise through bootstrap.
:param num_null_simulations: The number of simulations for testing the
statistical significance of the estimator
:param num_simulations: The number of simulations for finding the
            confidence interval (and/or standard error) for an estimate
:param sample_size_fraction: The size of the sample for the bootstrap
estimator
:param confidence_level: The confidence level of the confidence
interval estimate
:param need_conditional_estimates: Boolean flag indicating whether
conditional estimates should be computed. Defaults to True if
there are effect modifiers in the graph
:param num_quantiles_to_discretize_cont_cols: The number of quantiles
into which a numeric effect modifier is split, to enable
estimation of conditional treatment effect over it.
        :param kwargs: (optional) Additional estimator-specific parameters (accepted and ignored by this base class)
:returns: an instance of the estimator class.
"""
self._target_estimand = identified_estimand
self._significance_test = test_significance
self._effect_strength_eval = evaluate_effect_strength
self._confidence_intervals = confidence_intervals
# Setting the default interpret method
self.interpret_method = CausalEstimator.DEFAULT_INTERPRET_METHOD
self.logger = logging.getLogger(__name__)
# Check if some parameters were set, otherwise set to default values
self.num_null_simulations = num_null_simulations
self.num_simulations = num_simulations
self.sample_size_fraction = sample_size_fraction
self.confidence_level = confidence_level
self.num_quantiles_to_discretize_cont_cols = num_quantiles_to_discretize_cont_cols
# Estimate conditional estimates by default
self.need_conditional_estimates = need_conditional_estimates
self._bootstrap_estimates = None
self._bootstrap_null_estimates = None
def _set_data(self, data: pd.DataFrame, treatment_name: List[str], outcome_name: List[str]):
"""Sets the data for the estimator
:param data: data frame containing the data
:param treatment_name: name of the treatment variable
:param outcome_name: name of the outcome variable
"""
self._data = data
self._treatment_name = treatment_name
self._outcome_name = outcome_name[0]
self._treatment = self._data[self._treatment_name]
self._outcome = self._data[self._outcome_name]
def _set_effect_modifiers(self, effect_modifier_names: Optional[List[str]] = None):
"""Sets the effect modifiers for the estimator
        Modifies need_conditional_estimates according to the value of the effect modifiers
        :param effect_modifier_names: Variables on which to compute separate
effects, or return a heterogeneous effect function. Not all
methods support this currently.
"""
self._effect_modifiers = effect_modifier_names
if effect_modifier_names is not None:
self._effect_modifier_names = [cname for cname in effect_modifier_names if cname in self._data.columns]
if len(self._effect_modifier_names) > 0:
self._effect_modifiers = self._data[self._effect_modifier_names]
self._effect_modifiers = pd.get_dummies(self._effect_modifiers, drop_first=True)
self.logger.debug("Effect modifiers: " + ",".join(self._effect_modifier_names))
else:
self._effect_modifier_names = []
else:
self._effect_modifier_names = []
self.need_conditional_estimates = (
self.need_conditional_estimates
if self.need_conditional_estimates != "auto"
else (self._effect_modifier_names and len(self._effect_modifier_names) > 0)
)
def _set_identified_estimand(self, new_identified_estimand):
"""Method used internally to change the target estimand (required by some refuters)
:param new_identified_estimand: The new target_estimand to use
"""
self._target_estimand = new_identified_estimand
def get_new_estimator_object(
self,
identified_estimand,
test_significance=False,
evaluate_effect_strength=False,
confidence_intervals=None,
):
"""Create a new estimator of the same type as the one passed in the estimate argument.
Creates a new object with the identified_estimand
:param identified_estimand: IdentifiedEstimand
An instance of the identified estimand class that provides the information with
            respect to which causal pathways are employed when the treatment affects the outcome
        :returns: A new instance of the same estimator class, configured with the given identified_estimand.
"""
new_estimator = copy.deepcopy(self)
new_estimator._target_estimand = identified_estimand
new_estimator._test_significance = test_significance
new_estimator._evaluate_effect_strength = evaluate_effect_strength
new_estimator._confidence_intervals = (
self._confidence_intervals if confidence_intervals is None else confidence_intervals
)
return new_estimator
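    # A sketch of the two-step fit/estimate flow this refactoring enables, mirroring the
    # calls made in _generate_bootstrap_estimates below (data frame and column names are
    # hypothetical):
    #
    #   estimator.fit(df, ["v0"], ["y"], effect_modifier_names=["x0"])
    #   estimate = estimator.estimate_effect(treatment_value=1, control_value=0, target_units="ate")
    #   print(estimate.value)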
def estimate_effect_naive(self):
# TODO Only works for binary treatment
df_withtreatment = self._data.loc[self._data[self._treatment_name] == 1]
df_notreatment = self._data.loc[self._data[self._treatment_name] == 0]
est = np.mean(df_withtreatment[self._outcome_name]) - np.mean(df_notreatment[self._outcome_name])
return CausalEstimate(est, None, None, control_value=0, treatment_value=1)
def _estimate_effect_fn(self, data_df):
"""Function used in conditional effect estimation. This function is to be overridden by each child estimator.
The overridden function should take in a dataframe as input and return the estimate for that data.
"""
raise NotImplementedError(
("Conditional treatment effects are " + CausalEstimator.DEFAULT_NOTIMPLEMENTEDERROR_MSG).format(
self.__class__
)
)
def _estimate_conditional_effects(self, estimate_effect_fn, effect_modifier_names=None, num_quantiles=None):
"""Estimate conditional treatment effects. Common method for all estimators that utilizes a specific estimate_effect_fn implemented by each child estimator.
If a numeric effect modifier is provided, it is discretized into quantile bins. If you would like a custom discretization, you can do so yourself: create a new column containing the discretized effect modifier and then include that column's name in the effect_modifier_names argument.
:param estimate_effect_fn: Function that has a single parameter (a data frame) and returns the treatment effect estimate on that data.
:param effect_modifier_names: Names of effect modifier variables over which the conditional effects will be estimated. If not provided, defaults to the effect modifiers specified during creation of the CausalEstimator object.
:param num_quantiles: The number of quantiles into which a numeric effect modifier variable is discretized. Does not affect any categorical effect modifiers.
:returns: A (multi-index) dataframe that provides separate effects for each value of the (discretized) effect modifiers.
"""
# Defaulting to class default values if parameters are not provided
if effect_modifier_names is None:
effect_modifier_names = self._effect_modifier_names
if num_quantiles is None:
num_quantiles = self.num_quantiles_to_discretize_cont_cols
# Checking that there is at least one effect modifier
if not effect_modifier_names:
raise ValueError("At least one effect modifier should be specified to compute conditional effects.")
# Making sure that effect_modifier_names is a list
effect_modifier_names = parse_state(effect_modifier_names)
if not all(em in self._effect_modifier_names for em in effect_modifier_names):
self.logger.warn(
"At least one of the provided effect modifiers was not included while fitting the estimator. You may get incorrect results. To resolve, fit the estimator again by providing the updated effect modifiers in estimate_effect()."
)
# Making a copy since we are going to be changing effect modifier names
effect_modifier_names = effect_modifier_names.copy()
prefix = CausalEstimator.TEMP_CAT_COLUMN_PREFIX
# For every numeric effect modifier, adding a temp categorical column
for i in range(len(effect_modifier_names)):
em = effect_modifier_names[i]
if pd.api.types.is_numeric_dtype(self._data[em].dtypes):
self._data[prefix + str(em)] = pd.qcut(self._data[em], num_quantiles, duplicates="drop")
effect_modifier_names[i] = prefix + str(em)
# Grouping by effect modifiers and computing effect separately
by_effect_mods = self._data.groupby(effect_modifier_names)
cond_est_fn = lambda x: self._do(self._treatment_value, x) - self._do(self._control_value, x)
conditional_estimates = by_effect_mods.apply(estimate_effect_fn)
# Deleting the temporary categorical columns
for em in effect_modifier_names:
if em.startswith(prefix):
self._data.pop(em)
return conditional_estimates
def _do(self, x, data_df=None):
raise NotImplementedError(
("Do-operator is " + CausalEstimator.DEFAULT_NOTIMPLEMENTEDERROR_MSG).format(self.__class__)
)
def do(self, x, data_df=None):
"""Method that implements the do-operator.
Given a value x for the treatment, returns the expected value of the outcome when the treatment is intervened to a value x.
:param x: Value of the treatment
:param data_df: Data on which the do-operator is to be applied.
:returns: Value of the outcome when treatment is intervened/set to x.
"""
est = self._do(x, data_df)
return est
def construct_symbolic_estimator(self, estimand):
raise NotImplementedError(("Symbolic estimator string is ").format(self.__class__))
def _generate_bootstrap_estimates(self, num_bootstrap_simulations, sample_size_fraction):
"""Helper function to generate causal estimates over bootstrapped samples.
:param num_bootstrap_simulations: Number of simulations for the bootstrap method.
:param sample_size_fraction: Fraction of the dataset to be resampled.
:returns: A collections.namedtuple containing a list of bootstrapped estimates and a dictionary containing parameters used for the bootstrap.
"""
# The array that stores the results of all estimations
simulation_results = np.zeros(num_bootstrap_simulations)
        # Find the sample size as a proportion of the population size
sample_size = int(sample_size_fraction * len(self._data))
if sample_size > len(self._data):
self.logger.warning("WARN: The sample size is greater than the data being sampled")
self.logger.info("INFO: The sample size: {}".format(sample_size))
self.logger.info("INFO: The number of simulations: {}".format(num_bootstrap_simulations))
# Perform the set number of simulations
for index in range(num_bootstrap_simulations):
new_data = resample(self._data, n_samples=sample_size)
new_estimator = self.get_new_estimator_object(
self._target_estimand,
test_significance=False,
evaluate_effect_strength=False,
confidence_intervals=False,
)
new_estimator.fit(
new_data,
self._target_estimand.treatment_variable,
self._target_estimand.outcome_variable,
effect_modifier_names=self._effect_modifier_names,
)
new_effect = new_estimator.estimate_effect(
treatment_value=self._treatment_value,
control_value=self._control_value,
target_units=self._target_units,
)
simulation_results[index] = new_effect.value
estimates = CausalEstimator.BootstrapEstimates(
simulation_results,
{"num_simulations": num_bootstrap_simulations, "sample_size_fraction": sample_size_fraction},
)
return estimates
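    # A minimal sketch of the resampling step above using sklearn's resample (toy frame,
    # hypothetical column name):
    #
    #   import pandas as pd
    #   from sklearn.utils import resample
    #   df = pd.DataFrame({"y": range(10)})
    #   boot = resample(df, n_samples=len(df))  # draws len(df) rows with replacement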
def _estimate_confidence_intervals_with_bootstrap(
self, estimate_value, confidence_level=None, num_simulations=None, sample_size_fraction=None
):
"""
Method to compute confidence interval using bootstrapped sampling.
:param estimate_value: obtained estimate's value
:param confidence_level: The level for which to compute CI (e.g., 95% confidence level translates to confidence_level=0.95)
:param num_simulations: The number of simulations to be performed to get the bootstrap confidence intervals.
:param sample_size_fraction: The fraction of the dataset to be resampled.
:returns: confidence interval at the specified level.
For more details on bootstrap or resampling statistics, refer to the following links:
https://ocw.mit.edu/courses/mathematics/18-05-introduction-to-probability-and-statistics-spring-2014/readings/MIT18_05S14_Reading24.pdf
https://projecteuclid.org/download/pdf_1/euclid.ss/1032280214
"""
# Using class default parameters if not specified
if num_simulations is None:
num_simulations = self.num_simulations
if sample_size_fraction is None:
sample_size_fraction = self.sample_size_fraction
# Checking if bootstrap_estimates are already computed
if self._bootstrap_estimates is None:
self._bootstrap_estimates = self._generate_bootstrap_estimates(num_simulations, sample_size_fraction)
elif CausalEstimator.is_bootstrap_parameter_changed(self._bootstrap_estimates.params, locals()):
# Check if any parameter is changed from the previous std error estimate
self._bootstrap_estimates = self._generate_bootstrap_estimates(num_simulations, sample_size_fraction)
# Now use the data obtained from the simulations to get the value of the confidence estimates
bootstrap_estimates = self._bootstrap_estimates.estimates
# Get the variations of each bootstrap estimate and sort
bootstrap_variations = [bootstrap_estimate - estimate_value for bootstrap_estimate in bootstrap_estimates]
sorted_bootstrap_variations = np.sort(bootstrap_variations)
# Now we take the (1 - p)th and the (p)th variations, where p is the chosen confidence level
upper_bound_index = int((1 - confidence_level) * len(sorted_bootstrap_variations))
lower_bound_index = int(confidence_level * len(sorted_bootstrap_variations))
# Get the lower and upper bounds by subtracting the variations from the estimate
lower_bound = estimate_value - sorted_bootstrap_variations[lower_bound_index]
upper_bound = estimate_value - sorted_bootstrap_variations[upper_bound_index]
return lower_bound, upper_bound
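# Worked example of the pivot step above (illustrative): with 100 sorted
# bootstrap variations and confidence_level=0.95,
#   upper_bound_index = int((1 - 0.95) * 100) = 5
#   lower_bound_index = int(0.95 * 100) = 95
# so the reported interval is
#   [estimate - variation[95], estimate - variation[5]]:
# large positive variations pull the lower bound down and large negative
# variations pull the upper bound up (a basic/pivot bootstrap interval).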
def _estimate_confidence_intervals(self, confidence_level=None, method=None, **kwargs):
"""
This method is to be overridden by the child classes, so that they
can run a confidence interval estimation method suited to the specific
causal estimator.
"""
raise NotImplementedError(
(
"This method for estimating confidence intervals is "
+ CausalEstimator.DEFAULT_NOTIMPLEMENTEDERROR_MSG
+ " Meanwhile, you can try the bootstrap method (method='bootstrap') to estimate confidence intervals."
).format(self.__class__)
)
def estimate_confidence_intervals(self, estimate_value, confidence_level=None, method=None, **kwargs):
"""Find the confidence intervals corresponding to any estimator
By default, this is done with the help of bootstrapped confidence intervals
but can be overridden if the specific estimator implements other methods of estimating confidence intervals.
If the method provided is not bootstrap, this function calls the implementation of the specific estimator.
:param estimate_value: obtained estimate's value
:param method: Method for estimating confidence intervals.
:param confidence_level: The confidence level of the confidence intervals of the estimate.
:param kwargs: Other optional args to be passed to the CI method.
:returns: The obtained confidence interval.
"""
if method is None:
if self._confidence_intervals:
method = self._confidence_intervals # this is either True or methodname
else:
method = "default"
confidence_intervals = None
if confidence_level is None:
confidence_level = self.confidence_level
if method == "default" or method is True: # user has not provided any method
try:
confidence_intervals = self._estimate_confidence_intervals(confidence_level, method=method, **kwargs)
except NotImplementedError:
confidence_intervals = self._estimate_confidence_intervals_with_bootstrap(
estimate_value, confidence_level, **kwargs
)
else:
if method == "bootstrap":
confidence_intervals = self._estimate_confidence_intervals_with_bootstrap(
estimate_value, confidence_level, **kwargs
)
else:
confidence_intervals = self._estimate_confidence_intervals(confidence_level, method=method, **kwargs)
return confidence_intervals
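# Minimal usage sketch (illustrative; `estimate` is a CausalEstimate obtained
# from this estimator):
#
#   ci = estimator.estimate_confidence_intervals(
#       estimate.value, confidence_level=0.90, method="bootstrap",
#       num_simulations=200,
#   )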
def _estimate_std_error_with_bootstrap(self, num_simulations=None, sample_size_fraction=None):
"""Compute standard error using the bootstrap method. Standard error
and confidence intervals use the same parameter num_simulations for
the number of bootstrap simulations.
:param num_simulations: Number of bootstrapped samples.
:param sample_size_fraction: Fraction of data to be resampled.
:returns: Standard error of the obtained estimate.
"""
# Use existing params, if new user defined params are not present
if num_simulations is None:
num_simulations = self.num_simulations
if sample_size_fraction is None:
sample_size_fraction = self.sample_size_fraction
# Checking if bootstrap_estimates are already computed
if self._bootstrap_estimates is None:
self._bootstrap_estimates = self._generate_bootstrap_estimates(num_simulations, sample_size_fraction)
elif CausalEstimator.is_bootstrap_parameter_changed(self._bootstrap_estimates.params, locals()):
# Check if any parameter is changed from the previous std error estimate
self._bootstrap_estimates = self._generate_bootstrap_estimates(num_simulations, sample_size_fraction)
std_error = np.std(self._bootstrap_estimates.estimates)
return std_error
def _estimate_std_error(self, method=None, **kwargs):
"""
This method is to be overridden by the child classes, so that they
can run a standard error estimation method suited to the specific
causal estimator.
"""
raise NotImplementedError(
(
"This method for estimating standard errors is "
+ CausalEstimator.DEFAULT_NOTIMPLEMENTEDERROR_MSG
+ " Meanwhile, you can try the bootstrap method (method='bootstrap') to estimate standard errors."
).format(self.__class__)
)
def estimate_std_error(self, method=None, **kwargs):
"""Compute standard error of an obtained causal estimate.
:param method: Method for computing the standard error.
:param kwargs: Other optional parameters to be passed to the estimating method.
:returns: Standard error of the causal estimate.
"""
if method is None:
if self._confidence_intervals:
method = self._confidence_intervals
else:
method = "default"
std_error = None
if method == "default" or method is True: # user has not provided any method
try:
std_error = self._estimate_std_error(method, **kwargs)
except NotImplementedError:
std_error = self._estimate_std_error_with_bootstrap(**kwargs)
else:
if method == "bootstrap":
std_error = self._estimate_std_error_with_bootstrap(**kwargs)
else:
std_error = self._estimate_std_error(method, **kwargs)
return std_error
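# Minimal usage sketch (illustrative): standard errors reuse the same
# bootstrap machinery and cache as the confidence intervals above:
#
#   se = estimator.estimate_std_error(method="bootstrap", num_simulations=200)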
def _test_significance_with_bootstrap(self, estimate_value, num_null_simulations=None):
"""Test statistical significance of an estimate using the bootstrap method.
:param estimate_value: Obtained estimate's value
:param num_null_simulations: Number of simulations for the null hypothesis
:returns: p-value of the statistical significance test.
"""
# Use existing params, if new user defined params are not present
if num_null_simulations is None:
num_null_simulations = self.num_null_simulations
do_retest = self._bootstrap_null_estimates is None or CausalEstimator.is_bootstrap_parameter_changed(
self._bootstrap_null_estimates.params, locals()
)
if do_retest:
null_estimates = np.zeros(num_null_simulations)
for i in range(num_null_simulations):
new_outcome = np.random.permutation(self._outcome)
new_data = self._data.assign(dummy_outcome=new_outcome)
# self._outcome = self._data["dummy_outcome"]
new_estimator = self.get_new_estimator_object(
self._target_estimand,
test_significance=False,
evaluate_effect_strength=False,
confidence_intervals=False,
)
new_estimator.fit(
data=new_data,
treatment_name=self._target_estimand.treatment_variable,
outcome_name=("dummy_outcome",),
effect_modifier_names=self._effect_modifier_names,
)
new_effect = new_estimator.estimate_effect(
target_units=self._target_units,
)
null_estimates[i] = new_effect.value
self._bootstrap_null_estimates = CausalEstimator.BootstrapEstimates(
null_estimates, {"num_null_simulations": num_null_simulations, "sample_size_fraction": 1}
)
# Processing the null hypothesis estimates
sorted_null_estimates = np.sort(self._bootstrap_null_estimates.estimates)
self.logger.debug("Null estimates: {0}".format(sorted_null_estimates))
median_estimate = sorted_null_estimates[int(num_null_simulations / 2)]
# Doing a two-sided test
if estimate_value > median_estimate:
# Being conservative with the p-value reported
estimate_index = np.searchsorted(sorted_null_estimates, estimate_value, side="left")
p_value = 1 - (estimate_index / num_null_simulations)
else:
# Being conservative with the p-value reported
estimate_index = np.searchsorted(sorted_null_estimates, estimate_value, side="right")
p_value = estimate_index / num_null_simulations
# p-values of exactly 0 or 1 are reported as a range whose width is the resolution of the simulations
if p_value == 0:
p_value = (0, 1 / len(sorted_null_estimates)) # a tuple determining the range.
elif p_value == 1:
p_value = (1 - 1 / len(sorted_null_estimates), 1)
signif_dict = {"p_value": p_value}
return signif_dict
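# Worked example of the p-value logic above (illustrative): with 1000 null
# estimates and an observed estimate above the null median that exceeds 990 of
# them, searchsorted returns index 990 and p_value = 1 - 990/1000 = 0.01. An
# estimate lying outside the whole null distribution is reported as a range,
# e.g. (0, 1/1000), rather than an exact 0 or 1.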
def _test_significance(self, estimate_value, method=None, **kwargs):
"""
This method is to be overriden by the child classes, so that they
can run a significance test suited to the specific
causal estimator.
"""
raise NotImplementedError(
(
"This method for testing statistical significance is "
+ CausalEstimator.DEFAULT_NOTIMPLEMENTEDERROR_MSG
+ " Meanwhile, you can try the bootstrap method (method='bootstrap') to test statistical significance."
).format(self.__class__)
)
def test_significance(self, estimate_value, method=None, **kwargs):
"""Test statistical significance of obtained estimate.
By default, uses resampling to create a non-parametric significance test.
A general procedure. Individual child estimators can implement different methods.
If the method name is different from "bootstrap", this function calls the
implementation of the child estimator.
:param self: object instance of class Estimator
:param estimate_value: obtained estimate's value
:param method: Method for checking statistical significance
:returns: p-value from the significance test
"""
if method is None:
if self._significance_test:
method = self._significance_test # this is either True or methodname
else:
method = "default"
signif_dict = None
if method == "default" or method is True: # user has not provided any method
try:
signif_dict = self._test_significance(estimate_value, method, **kwargs)
except NotImplementedError:
signif_dict = self._test_significance_with_bootstrap(estimate_value, **kwargs)
else:
if method == "bootstrap":
signif_dict = self._test_significance_with_bootstrap(estimate_value, **kwargs)
else:
signif_dict = self._test_significance(estimate_value, method, **kwargs)
return signif_dict
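# Minimal usage sketch (illustrative):
#
#   sig = estimator.test_significance(estimate.value, method="bootstrap")
#   sig["p_value"]  # a float, or a (low, high) range at the simulation resolution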
def evaluate_effect_strength(self, estimate):
fraction_effect_explained = self._evaluate_effect_strength(estimate, method="fraction-effect")
# Need to test r-squared before supporting
# effect_r_squared = self._evaluate_effect_strength(estimate, method="r-squared")
strength_dict = {
"fraction-effect": fraction_effect_explained
# 'r-squared': effect_r_squared
}
return strength_dict
def _evaluate_effect_strength(self, estimate, method="fraction-effect"):
supported_methods = ["fraction-effect"]
if method not in supported_methods:
raise NotImplementedError("This method is not supported for evaluating effect strength")
if method == "fraction-effect":
naive_obs_estimate = self.estimate_effect_naive()
self.logger.debug("Causal estimate: %s, naive observational estimate: %s", estimate.value, naive_obs_estimate.value)
fraction_effect_explained = estimate.value / naive_obs_estimate.value
return fraction_effect_explained
# elif method == "r-squared":
# outcome_mean = np.mean(self._outcome)
# total_variance = np.sum(np.square(self._outcome - outcome_mean))
# Assuming a linear model with one variable: the treatment
# Currently only works for continuous y
# causal_model = outcome_mean + estimate.value*self._treatment
# squared_residual = np.sum(np.square(self._outcome - causal_model))
# r_squared = 1 - (squared_residual/total_variance)
# return r_squared
else:
return None
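# Illustrative reading of the "fraction-effect" measure above: if the naive
# observational difference in means is 10.0 and the causal estimate is 7.5,
# fraction_effect_explained = 7.5 / 10.0 = 0.75, i.e. 75% of the observed
# association is attributed to the treatment under the chosen identification.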
def update_input(self, treatment_value, control_value, target_units):
self._control_value = control_value
self._treatment_value = treatment_value
self._target_units = target_units
@staticmethod
def is_bootstrap_parameter_changed(bootstrap_estimates_params, given_params):
"""Check whether parameters of the bootstrap have changed.
This is an efficiency method that checks if fresh resampling of the bootstrap samples is required.
Returns True if parameters have changed and resampling should be done again.
:param bootstrap_estimates_params: A dictionary of parameters for the current bootstrap samples
:param given_params: A dictionary of parameters passed by the user
:returns: A binary flag denoting whether the parameters are different.
"""
is_any_parameter_changed = False
for prm, val in bootstrap_estimates_params.items():
given_val = given_params.get(prm, None)
if given_val is not None and given_val != val:
is_any_parameter_changed = True
break
return is_any_parameter_changed
def target_units_tostr(self):
s = ""
if type(self._target_units) is str:
s += self._target_units
elif callable(self._target_units):
s += "Data subset defined by a function"
elif isinstance(self._target_units, pd.DataFrame):
s += "Data subset provided as a data frame"
return s
def signif_results_tostr(self, signif_results):
s = ""
pval = signif_results["p_value"]
if type(pval) is tuple:
s += "[{0}, {1}]".format(pval[0], pval[1])
else:
s += "{0}".format(pval)
return s
def estimate_effect(
data: pd.DataFrame,
treatment: Union[str, List[str]],
outcome: Union[str, List[str]],
identifier_name: str,
estimator: CausalEstimator,
control_value: int = 0,
treatment_value: int = 1,
target_units: str = "ate",
effect_modifiers: List[str] = None,
fit_estimator: bool = True,
method_params: Optional[Dict] = None,
):
"""Estimate the identified causal effect.
In addition to DoWhy's own estimators, you can directly call any of the EconML estimation methods. The convention is "backdoor.econml.path-to-estimator-class". For example, for the double machine learning estimator ("DML" class) that is located inside the "dml" module of EconML, you can use the method name, "backdoor.econml.dml.DML". CausalML estimators can also be called. See `this demo notebook <https://py-why.github.io/dowhy/example_notebooks/dowhy-conditional-treatment-effects.html>`_.
:param data: Data frame containing the observed data
:param treatment: Name of the treatment
:param outcome: Name of the outcome
:param identifier_name: Name of the identification method to use from the estimator's target estimand (e.g., "backdoor" or "iv")
:param estimator: Instance of a CausalEstimator to use
:param control_value: Value of the treatment in the control group, for effect estimation. If treatment is multi-variate, this can be a list.
:param treatment_value: Value of the treatment in the treated group, for effect estimation. If treatment is multi-variate, this can be a list.
:param target_units: (Experimental) The units for which the treatment effect should be estimated. This can be of three types. (1) a string for common specifications of target units (namely, "ate", "att" and "atc"), (2) a lambda function that can be used as an index for the data (pandas DataFrame), or (3) a new DataFrame that contains values of the effect_modifiers and effect will be estimated only for this new data.
:param effect_modifiers: Names of effect modifier variables can be (optionally) specified here too, since they do not affect identification. If None, the effect_modifiers from the CausalModel are used.
:param fit_estimator: Boolean flag on whether to fit the estimator.
Setting it to False is useful to estimate the effect on new data using a previously fitted estimator.
:param method_params: Dictionary containing any method-specific parameters; a "fit_params" entry, if present, is passed to the estimator's fit method.
:returns: An instance of the CausalEstimate class, containing the causal effect estimate
and other method-dependent information
"""
if effect_modifiers is None:
effect_modifiers = []
treatment = parse_state(treatment)
outcome = parse_state(outcome)
causal_estimator_class = estimator.__class__
identified_estimand = estimator._target_estimand
identified_estimand.set_identifier_method(identifier_name)
if identified_estimand.no_directed_path:
logger.warning("No directed path from {0} to {1}.".format(treatment, outcome))
return CausalEstimate(
0, identified_estimand, None, control_value=control_value, treatment_value=treatment_value
)
# Check if estimator's target estimand is identified
elif identified_estimand.estimands[identifier_name] is None:
logger.error("No valid identified estimand available.")
return CausalEstimate(None, None, None, control_value=control_value, treatment_value=treatment_value)
if fit_estimator:
estimator.fit(
data=data,
treatment_name=treatment,
outcome_name=outcome,
effect_modifier_names=effect_modifiers,
**method_params["fit_params"] if "fit_params" in method_params else {},
)
estimate = estimator.estimate_effect(
treatment_value=treatment_value,
control_value=control_value,
target_units=target_units,
confidence_intervals=estimator._confidence_intervals,
)
# Store parameters inside estimate object for refutation methods
# TODO: This add_params needs to move to the estimator class
# inside estimate_effect and estimate_conditional_effect
estimate.add_params(
estimand_type=identified_estimand.estimand_type,
estimator_class=causal_estimator_class,
test_significance=estimator._significance_test,
evaluate_effect_strength=estimator._effect_strength_eval,
confidence_intervals=estimator._confidence_intervals,
target_units=target_units,
effect_modifiers=effect_modifiers,
)
if estimator._significance_test:
estimator.test_significance(estimate.value, method=estimator._significance_test)
if estimator._confidence_intervals:
estimator.estimate_confidence_intervals(
estimate.value, confidence_level=estimator.confidence_level, method=estimator._confidence_intervals
)
if estimator._effect_strength_eval:
effect_strength_dict = estimator.evaluate_effect_strength(estimate)
estimate.add_effect_strength(effect_strength_dict)
return estimate
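# Minimal usage sketch of the functional API above (illustrative; assumes an
# already-identified `estimand`, a pandas DataFrame `df`, and a concrete
# estimator class such as LinearRegressionEstimator -- constructor arguments
# may differ per estimator):
#
#   estimator = LinearRegressionEstimator(estimand)
#   estimate = estimate_effect(
#       data=df,
#       treatment="treatment",
#       outcome="outcome",
#       identifier_name="backdoor",
#       estimator=estimator,
#       control_value=0,
#       treatment_value=1,
#       target_units="ate",
#   )
#   print(estimate.value)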
class CausalEstimate:
"""Class for the estimate object that every causal estimator returns"""
def __init__(
self,
estimate,
target_estimand,
realized_estimand_expr,
control_value,
treatment_value,
conditional_estimates=None,
**kwargs,
):
self.value = estimate
self.target_estimand = target_estimand
self.realized_estimand_expr = realized_estimand_expr
self.control_value = control_value
self.treatment_value = treatment_value
self.conditional_estimates = conditional_estimates
self.params = kwargs
if self.params is not None:
for key, value in self.params.items():
setattr(self, key, value)
self.effect_strength = None
def add_estimator(self, estimator_instance):
self.estimator = estimator_instance
def add_effect_strength(self, strength_dict):
self.effect_strength = strength_dict
def add_params(self, **kwargs):
self.params.update(kwargs)
def get_confidence_intervals(self, confidence_level=None, method=None, **kwargs):
"""Get confidence intervals of the obtained estimate.
By default, this is done with the help of bootstrapped confidence intervals
but can be overridden if the specific estimator implements other methods of estimating confidence intervals.
If the method provided is not bootstrap, this function calls the implementation of the specific estimator.
:param method: Method for estimating confidence intervals.
:param confidence_level: The confidence level of the confidence intervals of the estimate.
:param kwargs: Other optional args to be passed to the CI method.
:returns: The obtained confidence interval.
"""
confidence_intervals = self.estimator.estimate_confidence_intervals(
estimate_value=self.value, confidence_level=confidence_level, method=method, **kwargs
)
return confidence_intervals
def get_standard_error(self, method=None, **kwargs):
"""Get standard error of the obtained estimate.
By default, this is done with the help of bootstrapped standard errors
but can be overridden if the specific estimator implements other methods of estimating standard error.
If the method provided is not bootstrap, this function calls the implementation of the specific estimator.
:param method: Method for computing the standard error.
:param kwargs: Other optional parameters to be passed to the estimating method.
:returns: Standard error of the causal estimate.
"""
std_error = self.estimator.estimate_std_error(method=method, **kwargs)
return std_error
def test_stat_significance(self, method=None, **kwargs):
"""Test statistical significance of the estimate obtained.
By default, uses resampling to create a non-parametric significance test.
Individual child estimators can implement different methods.
If the method name is different from "bootstrap", this function calls the
implementation of the child estimator.
:param method: Method for checking statistical significance
:param kwargs: Other optional parameters to be passed to the estimating method.
:returns: p-value from the significance test
"""
signif_results = self.estimator.test_significance(self.value, method=method, **kwargs)
return {"p_value": signif_results["p_value"]}
def estimate_conditional_effects(
self, effect_modifiers=None, num_quantiles=CausalEstimator.NUM_QUANTILES_TO_DISCRETIZE_CONT_COLS
):
"""Estimate treatment effect conditioned on given variables.
If a numeric effect modifier is provided, it is discretized into quantile bins. If you would like a custom discretization, you can do so yourself: create a new column containing the discretized effect modifier and then include that column's name in the effect_modifier_names argument.
:param effect_modifiers: Names of effect modifier variables over which the conditional effects will be estimated. If not provided, defaults to the effect modifiers specified during creation of the CausalEstimator object.
:param num_quantiles: The number of quantiles into which a numeric effect modifier variable is discretized. Does not affect any categorical effect modifiers.
:returns: A (multi-index) dataframe that provides separate effects for each value of the (discretized) effect modifiers.
"""
return self.estimator._estimate_conditional_effects(
self.estimator._estimate_effect_fn, effect_modifiers, num_quantiles
)
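# Minimal usage sketch (illustrative; assumes the estimator was fitted with
# effect modifiers, e.g. effect_modifier_names=["X0"]):
#
#   cond = estimate.estimate_conditional_effects(effect_modifiers=["X0"],
#                                                num_quantiles=5)
#   # -> a (multi-index) DataFrame with one effect per discretized X0 bin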
def interpret(self, method_name=None, **kwargs):
"""Interpret the causal estimate.
:param method_name: Method used (string) or a list of methods. If None, then the default for the specific estimator is used.
:param kwargs: Optional parameters that are directly passed to the interpreter method.
:returns: None
"""
if method_name is None:
method_name = self.estimator.interpret_method
method_name_arr = parse_state(method_name)
for method in method_name_arr:
interpreter = interpreters.get_class_object(method)
interpreter(self, **kwargs).interpret()
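# Minimal usage sketch (illustrative; "textual_effect_interpreter" is the
# class-level default in DEFAULT_INTERPRET_METHOD):
#
#   estimate.interpret(method_name="textual_effect_interpreter")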
def __str__(self):
s = "*** Causal Estimate ***\n"
# No estimand was identified (identification failed)
if self.target_estimand is None:
return "Estimation failed! No relevant identified estimand available for this estimation method."
s += "\n## Identified estimand\n{0}".format(self.target_estimand.__str__(only_target_estimand=True))
s += "\n## Realized estimand\n{0}".format(self.realized_estimand_expr)
if hasattr(self, "estimator"):
s += "\nTarget units: {0}\n".format(self.estimator.target_units_tostr())
s += "\n## Estimate\n"
s += "Mean value: {0}\n".format(self.value)
s += ""
if hasattr(self, "cate_estimates"):
s += "Effect estimates: {0}\n".format(self.cate_estimates)
if hasattr(self, "estimator"):
if self.estimator._significance_test:
s += "p-value: {0}\n".format(self.estimator.signif_results_tostr(self.test_stat_significance()))
if self.estimator._confidence_intervals:
s += "{0}% confidence interval: {1}\n".format(
100 * self.estimator.confidence_level, self.get_confidence_intervals()
)
if self.conditional_estimates is not None:
s += "### Conditional Estimates\n"
s += str(self.conditional_estimates)
if self.effect_strength is not None:
s += "\n## Effect Strength\n"
s += "Change in outcome attributable to treatment: {}\n".format(self.effect_strength["fraction-effect"])
# s += "Variance in outcome explained by treatment: {}\n".format(self.effect_strength["r-squared"])
return s
class RealizedEstimand(object):
def __init__(self, identified_estimand, estimator_name):
self.treatment_variable = identified_estimand.treatment_variable
self.outcome_variable = identified_estimand.outcome_variable
self.backdoor_variables = identified_estimand.get_backdoor_variables()
self.instrumental_variables = identified_estimand.instrumental_variables
self.estimand_type = identified_estimand.estimand_type
self.estimand_expression = None
self.assumptions = None
self.estimator_name = estimator_name
def update_assumptions(self, estimator_assumptions):
self.assumptions = estimator_assumptions
def update_estimand_expression(self, estimand_expression):
self.estimand_expression = estimand_expression
def __str__(self):
s = "Realized estimand: {0}\n".format(self.estimator_name)
s += "Realized estimand type: {0}\n".format(self.estimand_type)
s += "Estimand expression:\n{0}\n".format(sp.pretty(self.estimand_expression))
j = 1
for ass_name, ass_str in self.assumptions.items():
s += "Estimand assumption {0}, {1}: {2}\n".format(j, ass_name, ass_str)
j += 1
return s
| andresmor-ms | 11c4e0dafd6e824eb81ad14262457d954ae61468 | affe0952f4aba6845247355c171565510c2c1673 | yeah, let's just make the `set` methods as private for this PR, as Peter suggested. This will be work for a future PR | amit-sharma | 210 |
# ---------------------------------------------------------------------------
# py-why/dowhy PR #746: "Functional api/causal estimators"
# (created 2022-11-04, merged 2022-12-03)
# * Introduce `fit()` method to estimators.
# * Refactor constructors to avoid using `*args` and `**kwargs` and have more
#   explicit parameters.
# * Refactor refuters and other parts of the code to use `fit()` and modify
#   arguments to `estimate_effect()`.
# Below: dowhy/causal_estimator.py as it stood before this PR.
# ---------------------------------------------------------------------------
import logging
from collections import namedtuple
from typing import Dict, List, Optional, Union
import numpy as np
import pandas as pd
import sympy as sp
from sklearn.utils import resample
import dowhy.interpreters as interpreters
from dowhy import causal_estimators
from dowhy.causal_graph import CausalGraph
from dowhy.causal_identifier.identified_estimand import IdentifiedEstimand
from dowhy.utils.api import parse_state
logger = logging.getLogger(__name__)
class CausalEstimator:
"""Base class for an estimator of causal effect.
Subclasses implement different estimation methods. All estimation methods are in the package "dowhy.causal_estimators"
"""
# The default number of simulations for statistical testing
DEFAULT_NUMBER_OF_SIMULATIONS_STAT_TEST = 1000
# The default number of simulations to obtain confidence intervals
DEFAULT_NUMBER_OF_SIMULATIONS_CI = 100
# The portion of the total size that should be taken each time to find the confidence intervals
# 1 is the recommended value
# https://ocw.mit.edu/courses/mathematics/18-05-introduction-to-probability-and-statistics-spring-2014/readings/MIT18_05S14_Reading24.pdf
# https://projecteuclid.org/download/pdf_1/euclid.ss/1032280214
DEFAULT_SAMPLE_SIZE_FRACTION = 1
# The default Confidence Level
DEFAULT_CONFIDENCE_LEVEL = 0.95
# Number of quantiles to discretize continuous columns, for applying groupby
NUM_QUANTILES_TO_DISCRETIZE_CONT_COLS = 5
# Prefix to add to temporary categorical variables created after discretization
TEMP_CAT_COLUMN_PREFIX = "__categorical__"
DEFAULT_NOTIMPLEMENTEDERROR_MSG = "not yet implemented for {0}. If you would this to be implemented in the next version, please raise an issue at https://github.com/microsoft/dowhy/issues"
BootstrapEstimates = namedtuple("BootstrapEstimates", ["estimates", "params"])
DEFAULT_INTERPRET_METHOD = ["textual_effect_interpreter"]
# std args to be removed from locals() before being passed to args_dict
_STD_INIT_ARGS = ("self", "__class__", "args", "kwargs")
def __init__(
self,
data,
identified_estimand,
treatment,
outcome,
control_value=0,
treatment_value=1,
test_significance=False,
evaluate_effect_strength=False,
confidence_intervals=False,
target_units=None,
effect_modifiers=None,
num_null_simulations=DEFAULT_NUMBER_OF_SIMULATIONS_STAT_TEST,
num_simulations=DEFAULT_NUMBER_OF_SIMULATIONS_CI,
sample_size_fraction=DEFAULT_SAMPLE_SIZE_FRACTION,
confidence_level=DEFAULT_CONFIDENCE_LEVEL,
need_conditional_estimates="auto",
num_quantiles_to_discretize_cont_cols=NUM_QUANTILES_TO_DISCRETIZE_CONT_COLS,
**kwargs,
):
"""Initializes an estimator with data and names of relevant variables.
This method is called from the constructors of its child classes.
:param data: data frame containing the data
:param identified_estimand: probability expression
representing the target identified estimand to estimate.
:param treatment: name of the treatment variable
:param outcome: name of the outcome variable
:param control_value: Value of the treatment in the control group, for effect estimation. If treatment is multi-variate, this can be a list.
:param treatment_value: Value of the treatment in the treated group, for effect estimation. If treatment is multi-variate, this can be a list.
:param test_significance: Binary flag or a string indicating whether to test significance and by which method. All estimators support test_significance="bootstrap" that estimates a p-value for the obtained estimate using the bootstrap method. Individual estimators can override this to support custom testing methods. The bootstrap method supports an optional parameter, num_null_simulations. If False, no testing is done. If True, significance of the estimate is tested using the custom method if available, otherwise by bootstrap.
:param evaluate_effect_strength: (Experimental) whether to evaluate the strength of effect
:param confidence_intervals: Binary flag or a string indicating whether the confidence intervals should be computed and which method should be used. All methods support estimation of confidence intervals using the bootstrap method by using the parameter confidence_intervals="bootstrap". The bootstrap method takes in two arguments (num_simulations and sample_size_fraction) that can be optionally specified in the params dictionary. Estimators may also override this to implement their own confidence interval method. If this parameter is False, no confidence intervals are computed. If True, confidence intervals are computed by the estimator's specific method if available, otherwise through bootstrap.
:param target_units: The units for which the treatment effect should be estimated. This can be a string for common specifications of target units (namely, "ate", "att" and "atc"). It can also be a lambda function that can be used as an index for the data (pandas DataFrame). Alternatively, it can be a new DataFrame that contains values of the effect_modifiers and effect will be estimated only for this new data.
:param effect_modifiers: Variables on which to compute separate
effects, or return a heterogeneous effect function. Not all
methods support this currently.
:param num_null_simulations: The number of simulations for testing the
statistical significance of the estimator
:param num_simulations: The number of simulations for finding the
confidence interval (and/or standard error) for an estimate
:param sample_size_fraction: The size of the sample for the bootstrap
estimator
:param confidence_level: The confidence level of the confidence
interval estimate
:param need_conditional_estimates: Boolean flag indicating whether
conditional estimates should be computed. Defaults to True if
there are effect modifiers in the graph
:param num_quantiles_to_discretize_cont_cols: The number of quantiles
into which a numeric effect modifier is split, to enable
estimation of conditional treatment effect over it.
:param kwargs: (optional) Additional estimator-specific parameters
:returns: an instance of the estimator class.
"""
self._data = data
self._target_estimand = identified_estimand
# Currently estimation methods only support univariate treatment and outcome
self._treatment_name = treatment
self._outcome_name = outcome[0] # assuming one-dimensional outcome
self._control_value = control_value
self._treatment_value = treatment_value
self._significance_test = test_significance
self._effect_strength_eval = evaluate_effect_strength
self._target_units = target_units
self._effect_modifier_names = effect_modifiers
self._confidence_intervals = confidence_intervals
self._bootstrap_estimates = None # for confidence intervals and std error
self._bootstrap_null_estimates = None # for significance test
self._effect_modifiers = None
self.method_params = kwargs
# Setting the default interpret method
self.interpret_method = CausalEstimator.DEFAULT_INTERPRET_METHOD
self.logger = logging.getLogger(__name__)
# Setting treatment and outcome values
if self._data is not None:
self._treatment = self._data[self._treatment_name]
self._outcome = self._data[self._outcome_name]
if self._effect_modifier_names:
# only add the observed nodes
self._effect_modifier_names = [
cname for cname in self._effect_modifier_names if cname in self._data.columns
]
if len(self._effect_modifier_names) > 0:
self._effect_modifiers = self._data[self._effect_modifier_names]
self._effect_modifiers = pd.get_dummies(self._effect_modifiers, drop_first=True)
self.logger.debug("Effect modifiers: " + ",".join(self._effect_modifier_names))
else:
self._effect_modifier_names = None
# Check if some parameters were set, otherwise set to default values
self.num_null_simulations = num_null_simulations
self.num_simulations = num_simulations
self.sample_size_fraction = sample_size_fraction
self.confidence_level = confidence_level
self.num_quantiles_to_discretize_cont_cols = num_quantiles_to_discretize_cont_cols
# Estimate conditional estimates by default
self.need_conditional_estimates = (
need_conditional_estimates if need_conditional_estimates != "auto" else bool(self._effect_modifier_names)
)
@staticmethod
def get_estimator_object(new_data, identified_estimand, estimate):
"""Create a new estimator of the same type as the one passed in the estimate argument.
Creates a new object with new_data and the identified_estimand
:param new_data: np.ndarray, pd.Series, pd.DataFrame
The newly assigned data on which the estimator should run
:param identified_estimand: IdentifiedEstimand
An instance of the identified estimand class that provides the information with
respect to which causal pathways are employed when the treatment affects the outcome
:param estimate: CausalEstimate
It is an already existing estimate whose properties we wish to replicate
:returns: An instance of the same estimator class that had generated the given estimate.
"""
estimator_class = estimate.params["estimator_class"]
new_estimator = estimator_class(
new_data,
identified_estimand,
identified_estimand.treatment_variable,
identified_estimand.outcome_variable,
# names of treatment and outcome
control_value=estimate.control_value,
treatment_value=estimate.treatment_value,
test_significance=False,
evaluate_effect_strength=False,
confidence_intervals=estimate.params["confidence_intervals"],
target_units=estimate.params["target_units"],
effect_modifiers=estimate.params["effect_modifiers"],
**estimate.params["method_params"] if estimate.params["method_params"] is not None else {},
)
return new_estimator
def _estimate_effect(self):
"""This method is to be overriden by the child classes, so that they can run the estimation technique of their choice"""
raise NotImplementedError(
("Main estimation method is " + CausalEstimator.DEFAULT_NOTIMPLEMENTEDERROR_MSG).format(self.__class__)
)
def estimate_effect(self):
"""Base estimation method that calls the estimate_effect method of its calling subclass.
Can optionally also test significance and estimate effect strength for any returned estimate.
:param self: object instance of class Estimator
:returns: A CausalEstimate instance that contains point estimates of average and conditional effects. Based on the parameters provided, it optionally includes confidence intervals, standard errors, statistical significance and other statistical parameters.
"""
est = self._estimate_effect()
est.add_estimator(self)
if self._significance_test:
self.test_significance(est.value, method=self._significance_test)
if self._confidence_intervals:
self.estimate_confidence_intervals(
est.value, confidence_level=self.confidence_level, method=self._confidence_intervals
)
if self._effect_strength_eval:
effect_strength_dict = self.evaluate_effect_strength(est)
est.add_effect_strength(effect_strength_dict)
return est
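# Illustrative sketch of this pre-refactor flow (the constructor receives the
# data directly and estimate_effect() takes no arguments; assumes `df`, an
# identified `estimand`, and a concrete subclass such as
# LinearRegressionEstimator):
#
#   estimator = LinearRegressionEstimator(
#       df, estimand, estimand.treatment_variable, estimand.outcome_variable,
#   )
#   estimate = estimator.estimate_effect()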
def estimate_effect_naive(self):
# TODO Only works for binary treatment
df_withtreatment = self._data.loc[self._data[self._treatment_name] == 1]
df_notreatment = self._data.loc[self._data[self._treatment_name] == 0]
est = np.mean(df_withtreatment[self._outcome_name]) - np.mean(df_notreatment[self._outcome_name])
return CausalEstimate(est, None, None, control_value=0, treatment_value=1)
def _estimate_effect_fn(self, data_df):
"""Function used in conditional effect estimation. This function is to be overridden by each child estimator.
The overridden function should take in a dataframe as input and return the estimate for that data.
"""
raise NotImplementedError(
("Conditional treatment effects are " + CausalEstimator.DEFAULT_NOTIMPLEMENTEDERROR_MSG).format(
self.__class__
)
)
def _estimate_conditional_effects(self, estimate_effect_fn, effect_modifier_names=None, num_quantiles=None):
"""Estimate conditional treatment effects. Common method for all estimators that utilizes a specific estimate_effect_fn implemented by each child estimator.
If a numeric effect modifier is provided, it is discretized into quantile bins. If you would like a custom discretization, you can do so yourself: create a new column containing the discretized effect modifier and then include that column's name in the effect_modifier_names argument.
:param estimate_effect_fn: Function that has a single parameter (a data frame) and returns the treatment effect estimate on that data.
:param effect_modifier_names: Names of effect modifier variables over which the conditional effects will be estimated. If not provided, defaults to the effect modifiers specified during creation of the CausalEstimator object.
:param num_quantiles: The number of quantiles into which a numeric effect modifier variable is discretized. Does not affect any categorical effect modifiers.
:returns: A (multi-index) dataframe that provides separate effects for each value of the (discretized) effect modifiers.
"""
# Defaulting to class default values if parameters are not provided
if effect_modifier_names is None:
effect_modifier_names = self._effect_modifier_names
if num_quantiles is None:
num_quantiles = self.num_quantiles_to_discretize_cont_cols
# Checking that there is at least one effect modifier
if not effect_modifier_names:
raise ValueError("At least one effect modifier should be specified to compute conditional effects.")
# Making sure that effect_modifier_names is a list
effect_modifier_names = parse_state(effect_modifier_names)
if not all(em in self._effect_modifier_names for em in effect_modifier_names):
self.logger.warning(
"At least one of the provided effect modifiers was not included while fitting the estimator. You may get incorrect results. To resolve, fit the estimator again by providing the updated effect modifiers in estimate_effect()."
)
# Making a copy since we are going to be changing effect modifier names
effect_modifier_names = effect_modifier_names.copy()
prefix = CausalEstimator.TEMP_CAT_COLUMN_PREFIX
# For every numeric effect modifier, adding a temp categorical column
for i in range(len(effect_modifier_names)):
em = effect_modifier_names[i]
if pd.api.types.is_numeric_dtype(self._data[em].dtypes):
self._data[prefix + str(em)] = pd.qcut(self._data[em], num_quantiles, duplicates="drop")
effect_modifier_names[i] = prefix + str(em)
# Grouping by effect modifiers and computing effect separately
by_effect_mods = self._data.groupby(effect_modifier_names)
cond_est_fn = lambda x: self._do(self._treatment_value, x) - self._do(self._control_value, x)
conditional_estimates = by_effect_mods.apply(estimate_effect_fn)
# Deleting the temporary categorical columns
for em in effect_modifier_names:
if em.startswith(prefix):
self._data.pop(em)
return conditional_estimates
def _do(self, x, data_df=None):
raise NotImplementedError(
("Do-operator is " + CausalEstimator.DEFAULT_NOTIMPLEMENTEDERROR_MSG).format(self.__class__)
)
def do(self, x, data_df=None):
"""Method that implements the do-operator.
Given a value x for the treatment, returns the expected value of the outcome when the treatment is intervened to a value x.
:param x: Value of the treatment
:param data_df: Data on which the do-operator is to be applied.
:returns: Value of the outcome when treatment is intervened/set to x.
"""
est = self._do(x, data_df)
return est
def construct_symbolic_estimator(self, estimand):
raise NotImplementedError(("Symbolic estimator string is ").format(self.__class__))
def _generate_bootstrap_estimates(self, num_bootstrap_simulations, sample_size_fraction):
"""Helper function to generate causal estimates over bootstrapped samples.
:param num_bootstrap_simulations: Number of simulations for the bootstrap method.
:param sample_size_fraction: Fraction of the dataset to be resampled.
:returns: A collections.namedtuple containing a list of bootstrapped estimates and a dictionary containing parameters used for the bootstrap.
"""
# The array that stores the results of all estimations
simulation_results = np.zeros(num_bootstrap_simulations)
# Find the sample size as a proportion of the population size
sample_size = int(sample_size_fraction * len(self._data))
if sample_size > len(self._data):
self.logger.warning("WARN: The sample size is greater than the data being sampled")
self.logger.info("INFO: The sample size: {}".format(sample_size))
self.logger.info("INFO: The number of simulations: {}".format(num_bootstrap_simulations))
# Perform the set number of simulations
for index in range(num_bootstrap_simulations):
new_data = resample(self._data, n_samples=sample_size)
new_estimator = type(self)(
new_data,
self._target_estimand,
self._target_estimand.treatment_variable,
self._target_estimand.outcome_variable,
# names of treatment and outcome
treatment_value=self._treatment_value,
control_value=self._control_value,
test_significance=False,
evaluate_effect_strength=False,
confidence_intervals=False,
target_units=self._target_units,
effect_modifiers=self._effect_modifier_names,
**self.method_params,
)
new_effect = new_estimator.estimate_effect()
simulation_results[index] = new_effect.value
estimates = CausalEstimator.BootstrapEstimates(
simulation_results,
{"num_simulations": num_bootstrap_simulations, "sample_size_fraction": sample_size_fraction},
)
return estimates
def _estimate_confidence_intervals_with_bootstrap(
self, estimate_value, confidence_level=None, num_simulations=None, sample_size_fraction=None
):
"""
Method to compute confidence interval using bootstrapped sampling.
:param estimate_value: obtained estimate's value
:param confidence_level: The level for which to compute CI (e.g., 95% confidence level translates to confidence_level=0.95)
:param num_simulations: The number of simulations to be performed to get the bootstrap confidence intervals.
:param sample_size_fraction: The fraction of the dataset to be resampled.
:returns: confidence interval at the specified level.
For more details on bootstrap or resampling statistics, refer to the following links:
https://ocw.mit.edu/courses/mathematics/18-05-introduction-to-probability-and-statistics-spring-2014/readings/MIT18_05S14_Reading24.pdf
https://projecteuclid.org/download/pdf_1/euclid.ss/1032280214
"""
# Using class default parameters if not specified
if num_simulations is None:
num_simulations = self.num_simulations
if sample_size_fraction is None:
sample_size_fraction = self.sample_size_fraction
# Checking if bootstrap_estimates are already computed
if self._bootstrap_estimates is None:
self._bootstrap_estimates = self._generate_bootstrap_estimates(num_simulations, sample_size_fraction)
elif CausalEstimator.is_bootstrap_parameter_changed(self._bootstrap_estimates.params, locals()):
# Check if any parameter is changed from the previous std error estimate
self._bootstrap_estimates = self._generate_bootstrap_estimates(num_simulations, sample_size_fraction)
# Now use the data obtained from the simulations to get the value of the confidence estimates
bootstrap_estimates = self._bootstrap_estimates.estimates
# Get the variations of each bootstrap estimate and sort
bootstrap_variations = [bootstrap_estimate - estimate_value for bootstrap_estimate in bootstrap_estimates]
sorted_bootstrap_variations = np.sort(bootstrap_variations)
# Now we take the (1 - p)th and the (p)th variations, where p is the chosen confidence level
upper_bound_index = int((1 - confidence_level) * len(sorted_bootstrap_variations))
lower_bound_index = int(confidence_level * len(sorted_bootstrap_variations))
# Get the lower and upper bounds by subtracting the variations from the estimate
lower_bound = estimate_value - sorted_bootstrap_variations[lower_bound_index]
upper_bound = estimate_value - sorted_bootstrap_variations[upper_bound_index]
return lower_bound, upper_bound
def _estimate_confidence_intervals(self, confidence_level=None, method=None, **kwargs):
"""
This method is to be overridden by the child classes, so that they
can run a confidence interval estimation method suited to the specific
causal estimator.
"""
raise NotImplementedError(
(
"This method for estimating confidence intervals is "
+ CausalEstimator.DEFAULT_NOTIMPLEMENTEDERROR_MSG
+ " Meanwhile, you can try the bootstrap method (method='bootstrap') to estimate confidence intervals."
).format(self.__class__)
)
def estimate_confidence_intervals(self, estimate_value, confidence_level=None, method=None, **kwargs):
"""Find the confidence intervals corresponding to any estimator
By default, this is done with the help of bootstrapped confidence intervals
but can be overridden if the specific estimator implements other methods of estimating confidence intervals.
If the method provided is not bootstrap, this function calls the implementation of the specific estimator.
:param estimate_value: obtained estimate's value
:param method: Method for estimating confidence intervals.
:param confidence_level: The confidence level of the confidence intervals of the estimate.
:param kwargs: Other optional args to be passed to the CI method.
:returns: The obtained confidence interval.
"""
if method is None:
if self._confidence_intervals:
method = self._confidence_intervals # this is either True or methodname
else:
method = "default"
confidence_intervals = None
if confidence_level is None:
confidence_level = self.confidence_level
if method == "default" or method is True: # user has not provided any method
try:
confidence_intervals = self._estimate_confidence_intervals(confidence_level, method=method, **kwargs)
except NotImplementedError:
confidence_intervals = self._estimate_confidence_intervals_with_bootstrap(
estimate_value, confidence_level, **kwargs
)
else:
if method == "bootstrap":
confidence_intervals = self._estimate_confidence_intervals_with_bootstrap(
estimate_value, confidence_level, **kwargs
)
else:
confidence_intervals = self._estimate_confidence_intervals(confidence_level, method=method, **kwargs)
return confidence_intervals
def _estimate_std_error_with_bootstrap(self, num_simulations=None, sample_size_fraction=None):
"""Compute standard error using the bootstrap method. Standard error
and confidence intervals use the same parameter num_simulations for
the number of bootstrap simulations.
:param num_simulations: Number of bootstrapped samples.
:param sample_size_fraction: Fraction of data to be resampled.
:returns: Standard error of the obtained estimate.
"""
# Use existing params, if new user defined params are not present
if num_simulations is None:
num_simulations = self.num_simulations
if sample_size_fraction is None:
sample_size_fraction = self.sample_size_fraction
# Checking if bootstrap_estimates are already computed
if self._bootstrap_estimates is None:
self._bootstrap_estimates = self._generate_bootstrap_estimates(num_simulations, sample_size_fraction)
elif CausalEstimator.is_bootstrap_parameter_changed(self._bootstrap_estimates.params, locals()):
# Check if any parameter is changed from the previous std error estimate
self._bootstrap_estimates = self._generate_bootstrap_estimates(num_simulations, sample_size_fraction)
std_error = np.std(self._bootstrap_estimates.estimates)
return std_error
def _estimate_std_error(self, method=None, **kwargs):
"""
This method is to be overridden by the child classes, so that they
can run a standard error estimation method suited to the specific
causal estimator.
"""
raise NotImplementedError(
(
"This method for estimating standard errors is "
+ CausalEstimator.DEFAULT_NOTIMPLEMENTEDERROR_MSG
+ " Meanwhile, you can try the bootstrap method (method='bootstrap') to estimate standard errors."
).format(self.__class__)
)
def estimate_std_error(self, method=None, **kwargs):
"""Compute standard error of an obtained causal estimate.
:param method: Method for computing the standard error.
:param kwargs: Other optional parameters to be passed to the estimating method.
:returns: Standard error of the causal estimate.
"""
if method is None:
if self._confidence_intervals:
method = self._confidence_intervals
else:
method = "default"
std_error = None
if method == "default" or method is True: # user has not provided any method
try:
std_error = self._estimate_std_error(method, **kwargs)
except NotImplementedError:
std_error = self._estimate_std_error_with_bootstrap(**kwargs)
else:
if method == "bootstrap":
std_error = self._estimate_std_error_with_bootstrap(**kwargs)
else:
std_error = self._estimate_std_error(method, **kwargs)
return std_error
def _test_significance_with_bootstrap(self, estimate_value, num_null_simulations=None):
"""Test statistical significance of an estimate using the bootstrap method.
:param estimate_value: Obtained estimate's value
:param num_null_simulations: Number of simulations for the null hypothesis
:returns: p-value of the statistical significance test.
"""
# Use existing params, if new user defined params are not present
if num_null_simulations is None:
num_null_simulations = self.num_null_simulations
do_retest = self._bootstrap_null_estimates is None or CausalEstimator.is_bootstrap_parameter_changed(
self._bootstrap_null_estimates.params, locals()
)
if do_retest:
null_estimates = np.zeros(num_null_simulations)
for i in range(num_null_simulations):
new_outcome = np.random.permutation(self._outcome)
new_data = self._data.assign(dummy_outcome=new_outcome)
# self._outcome = self._data["dummy_outcome"]
new_estimator = type(self)(
new_data,
self._target_estimand,
self._target_estimand.treatment_variable,
("dummy_outcome",),
test_significance=False,
evaluate_effect_strength=False,
confidence_intervals=False,
target_units=self._target_units,
effect_modifiers=self._effect_modifier_names,
**self.method_params,
)
new_effect = new_estimator.estimate_effect()
null_estimates[i] = new_effect.value
self._bootstrap_null_estimates = CausalEstimator.BootstrapEstimates(
null_estimates, {"num_null_simulations": num_null_simulations, "sample_size_fraction": 1}
)
# Processing the null hypothesis estimates
sorted_null_estimates = np.sort(self._bootstrap_null_estimates.estimates)
self.logger.debug("Null estimates: {0}".format(sorted_null_estimates))
median_estimate = sorted_null_estimates[int(num_null_simulations / 2)]
# Doing a two-sided test
if estimate_value > median_estimate:
# Being conservative with the p-value reported
estimate_index = np.searchsorted(sorted_null_estimates, estimate_value, side="left")
p_value = 1 - (estimate_index / num_null_simulations)
else:
# Being conservative with the p-value reported
estimate_index = np.searchsorted(sorted_null_estimates, estimate_value, side="right")
p_value = estimate_index / num_null_simulations
# p-values of exactly 0 or 1 are reported as a range whose width is the resolution of the simulations
if p_value == 0:
p_value = (0, 1 / len(sorted_null_estimates)) # a tuple determining the range.
elif p_value == 1:
p_value = (1 - 1 / len(sorted_null_estimates), 1)
signif_dict = {"p_value": p_value}
return signif_dict
def _test_significance(self, estimate_value, method=None, **kwargs):
"""
This method is to be overridden by the child classes, so that they
can run a significance test suited to the specific
causal estimator.
"""
raise NotImplementedError(
(
"This method for testing statistical significance is "
+ CausalEstimator.DEFAULT_NOTIMPLEMENTEDERROR_MSG
+ " Meanwhile, you can try the bootstrap method (method='bootstrap') to test statistical significance."
).format(self.__class__)
)
def test_significance(self, estimate_value, method=None, **kwargs):
"""Test statistical significance of obtained estimate.
By default, uses resampling to create a non-parametric significance test.
A general procedure. Individual child estimators can implement different methods.
If the method name is different from "bootstrap", this function calls the
implementation of the child estimator.
:param self: object instance of class Estimator
:param estimate_value: obtained estimate's value
:param method: Method for checking statistical significance
:returns: p-value from the significance test
"""
if method is None:
if self._significance_test:
method = self._significance_test # this is either True or methodname
else:
method = "default"
signif_dict = None
if method == "default" or method is True: # user has not provided any method
try:
signif_dict = self._test_significance(estimate_value, method, **kwargs)
except NotImplementedError:
signif_dict = self._test_significance_with_bootstrap(estimate_value, **kwargs)
else:
if method == "bootstrap":
signif_dict = self._test_significance_with_bootstrap(estimate_value, **kwargs)
else:
signif_dict = self._test_significance(estimate_value, method, **kwargs)
return signif_dict
def evaluate_effect_strength(self, estimate):
fraction_effect_explained = self._evaluate_effect_strength(estimate, method="fraction-effect")
# Need to test r-squared before supporting
# effect_r_squared = self._evaluate_effect_strength(estimate, method="r-squared")
strength_dict = {
"fraction-effect": fraction_effect_explained
# 'r-squared': effect_r_squared
}
return strength_dict
def _evaluate_effect_strength(self, estimate, method="fraction-effect"):
supported_methods = ["fraction-effect"]
if method not in supported_methods:
raise NotImplementedError("This method is not supported for evaluating effect strength")
if method == "fraction-effect":
naive_obs_estimate = self.estimate_effect_naive()
self.logger.debug("Causal estimate: %s, naive observational estimate: %s", estimate.value, naive_obs_estimate.value)
fraction_effect_explained = estimate.value / naive_obs_estimate.value
return fraction_effect_explained
# elif method == "r-squared":
# outcome_mean = np.mean(self._outcome)
# total_variance = np.sum(np.square(self._outcome - outcome_mean))
# Assuming a linear model with one variable: the treatment
# Currently only works for continuous y
# causal_model = outcome_mean + estimate.value*self._treatment
# squared_residual = np.sum(np.square(self._outcome - causal_model))
# r_squared = 1 - (squared_residual/total_variance)
# return r_squared
else:
return None
def update_input(self, treatment_value, control_value, target_units):
self._control_value = control_value
self._treatment_value = treatment_value
self._target_units = target_units
@staticmethod
def is_bootstrap_parameter_changed(bootstrap_estimates_params, given_params):
"""Check whether parameters of the bootstrap have changed.
This is an efficiency method that checks if fresh resampling of the bootstrap samples is required.
Returns True if parameters have changed and resampling should be done again.
:param bootstrap_estimates_params: A dictionary of parameters for the current bootstrap samples
:param given_params: A dictionary of parameters passed by the user
:returns: A binary flag denoting whether the parameters are different.
"""
is_any_parameter_changed = False
for prm, val in bootstrap_estimates_params.items():
given_val = given_params.get(prm, None)
if given_val is not None and given_val != val:
is_any_parameter_changed = True
break
return is_any_parameter_changed
def target_units_tostr(self):
s = ""
if type(self._target_units) is str:
s += self._target_units
elif callable(self._target_units):
s += "Data subset defined by a function"
elif isinstance(self._target_units, pd.DataFrame):
s += "Data subset provided as a data frame"
return s
def signif_results_tostr(self, signif_results):
s = ""
pval = signif_results["p_value"]
if type(pval) is tuple:
s += "[{0}, {1}]".format(pval[0], pval[1])
else:
s += "{0}".format(pval)
return s
def estimate_effect(
treatment: Union[str, List[str]],
outcome: Union[str, List[str]],
identified_estimand: IdentifiedEstimand,
identifier_name: str,
method: CausalEstimator,
control_value: int = 0,
treatment_value: int = 1,
test_significance: Optional[bool] = None,
evaluate_effect_strength: bool = False,
confidence_intervals: bool = False,
target_units: str = "ate",
effect_modifiers: List[str] = [],
fit_estimator: bool = True,
method_params: Optional[Dict] = None,
):
"""Estimate the identified causal effect.
Currently requires an explicit method name to be specified. Method names follow the convention of identification method followed by the specific estimation method: "[backdoor/iv].estimation_method_name". The following methods are supported.
* Propensity Score Matching: "backdoor.propensity_score_matching"
* Propensity Score Stratification: "backdoor.propensity_score_stratification"
* Propensity Score-based Inverse Weighting: "backdoor.propensity_score_weighting"
* Linear Regression: "backdoor.linear_regression"
* Generalized Linear Models (e.g., logistic regression): "backdoor.generalized_linear_model"
* Instrumental Variables: "iv.instrumental_variable"
* Regression Discontinuity: "iv.regression_discontinuity"
In addition, you can directly call any of the EconML estimation methods. The convention is "backdoor.econml.path-to-estimator-class". For example, for the double machine learning estimator ("DML" class) that is located inside the "dml" module of EconML, you can use the method name "backdoor.econml.dml.DML". CausalML estimators can also be called. See `this demo notebook <https://py-why.github.io/dowhy/example_notebooks/dowhy-conditional-treatment-effects.html>`_.
:param treatment: Name of the treatment
:param outcome: Name of the outcome
:param identified_estimand: a probability expression
that represents the effect to be estimated. Output of
CausalModel.identify_effect method
:param identifier_name: name of the identification method used (e.g., "backdoor" or "iv")
:param method: an instance of the CausalEstimator class to use for estimation
:param control_value: Value of the treatment in the control group, for effect estimation. If treatment is multi-variate, this can be a list.
:param treatment_value: Value of the treatment in the treated group, for effect estimation. If treatment is multi-variate, this can be a list.
:param test_significance: Binary flag on whether to additionally do a statistical significance test for the estimate.
:param evaluate_effect_strength: (Experimental) Binary flag on whether to estimate the relative strength of the treatment's effect. This measure can be used to compare different treatments for the same outcome (by running this method with different treatments sequentially).
:param confidence_intervals: (Experimental) Binary flag indicating whether confidence intervals should be computed.
:param target_units: (Experimental) The units for which the treatment effect should be estimated. This can be of three types. (1) a string for common specifications of target units (namely, "ate", "att" and "atc"), (2) a lambda function that can be used as an index for the data (pandas DataFrame), or (3) a new DataFrame that contains values of the effect_modifiers and effect will be estimated only for this new data.
:param effect_modifiers: Names of effect modifier variables can be (optionally) specified here too, since they do not affect identification. If None, the effect_modifiers from the CausalModel are used.
:param fit_estimator: Boolean flag on whether to fit the estimator.
Setting it to False is useful to estimate the effect on new data using a previously fitted estimator.
:param method_params: Dictionary containing any method-specific parameters. These are passed directly to the estimating method. See the docs for each estimation method for allowed method-specific params.
:returns: An instance of the CausalEstimate class, containing the causal effect estimate
and other method-dependent information
"""
treatment = parse_state(treatment)
outcome = parse_state(outcome)
causal_estimator_class = method.__class__
identified_estimand.set_identifier_method(identifier_name)
if identified_estimand.no_directed_path:
logger.warning("No directed path from {0} to {1}.".format(treatment, outcome))
return CausalEstimate(
0, identified_estimand, None, control_value=control_value, treatment_value=treatment_value
)
# Check if estimator's target estimand is identified
elif identified_estimand.estimands[identifier_name] is None:
logger.error("No valid identified estimand available.")
return CausalEstimate(None, None, None, control_value=control_value, treatment_value=treatment_value)
method.update_input(treatment_value, control_value, target_units)
estimate = method.estimate_effect()
# Store parameters inside estimate object for refutation methods
# TODO: This add_params needs to move to the estimator class
# inside estimate_effect and estimate_conditional_effect
estimate.add_params(
estimand_type=identified_estimand.estimand_type,
estimator_class=causal_estimator_class,
test_significance=test_significance,
evaluate_effect_strength=evaluate_effect_strength,
confidence_intervals=confidence_intervals,
target_units=target_units,
effect_modifiers=effect_modifiers,
method_params=method_params,
)
return estimate
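# Hedged usage sketch of estimate_effect() above, assuming `estimand` is an
# IdentifiedEstimand and `estimator` is an already-constructed CausalEstimator
# subclass; the column names "v0" and "y" are illustrative assumptions.
estimate = estimate_effect(
    treatment="v0",
    outcome="y",
    identified_estimand=estimand,
    identifier_name="backdoor",
    method=estimator,
    control_value=0,
    treatment_value=1,
    target_units="ate",
)
print(estimate.value)  # mean value of the causal effect estimate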
class CausalEstimate:
"""Class for the estimate object that every causal estimator returns"""
def __init__(
self,
estimate,
target_estimand,
realized_estimand_expr,
control_value,
treatment_value,
conditional_estimates=None,
**kwargs,
):
self.value = estimate
self.target_estimand = target_estimand
self.realized_estimand_expr = realized_estimand_expr
self.control_value = control_value
self.treatment_value = treatment_value
self.conditional_estimates = conditional_estimates
self.params = kwargs
if self.params is not None:
for key, value in self.params.items():
setattr(self, key, value)
self.effect_strength = None
def add_estimator(self, estimator_instance):
self.estimator = estimator_instance
def add_effect_strength(self, strength_dict):
self.effect_strength = strength_dict
def add_params(self, **kwargs):
self.params.update(kwargs)
def get_confidence_intervals(self, confidence_level=None, method=None, **kwargs):
"""Get confidence intervals of the obtained estimate.
By default, this is done with the help of bootstrapped confidence intervals
but can be overridden if the specific estimator implements other methods of estimating confidence intervals.
If the method provided is not bootstrap, this function calls the implementation of the specific estimator.
:param method: Method for estimating confidence intervals.
:param confidence_level: The confidence level of the confidence intervals of the estimate.
:param kwargs: Other optional args to be passed to the CI method.
:returns: The obtained confidence interval.
"""
confidence_intervals = self.estimator.estimate_confidence_intervals(
estimate_value=self.value, confidence_level=confidence_level, method=method, **kwargs
)
return confidence_intervals
def get_standard_error(self, method=None, **kwargs):
"""Get standard error of the obtained estimate.
By default, this is done with the help of bootstrapped standard errors
but can be overridden if the specific estimator implements other methods of estimating standard error.
If the method provided is not bootstrap, this function calls the implementation of the specific estimator.
:param method: Method for computing the standard error.
:param kwargs: Other optional parameters to be passed to the estimating method.
:returns: Standard error of the causal estimate.
"""
std_error = self.estimator.estimate_std_error(method=method, **kwargs)
return std_error
def test_stat_significance(self, method=None, **kwargs):
"""Test statistical significance of the estimate obtained.
By default, uses resampling to create a non-parametric significance test.
Individual child estimators can implement different methods.
If the method name is different from "bootstrap", this function calls the
implementation of the child estimator.
:param method: Method for checking statistical significance
:param kwargs: Other optional parameters to be passed to the estimating method.
:returns: p-value from the significance test
"""
signif_results = self.estimator.test_significance(self.value, method=method, **kwargs)
return {"p_value": signif_results["p_value"]}
def estimate_conditional_effects(
self, effect_modifiers=None, num_quantiles=CausalEstimator.NUM_QUANTILES_TO_DISCRETIZE_CONT_COLS
):
"""Estimate treatment effect conditioned on given variables.
If a numeric effect modifier is provided, it is discretized into quantile bins. If you would like a custom discretization, you can do so yourself: create a new column containing the discretized effect modifier and then include that column's name in the effect_modifier_names argument.
:param effect_modifiers: Names of effect modifier variables over which the conditional effects will be estimated. If not provided, defaults to the effect modifiers specified during creation of the CausalEstimator object.
:param num_quantiles: The number of quantiles into which a numeric effect modifier variable is discretized. Does not affect any categorical effect modifiers.
:returns: A (multi-index) dataframe that provides separate effects for each value of the (discretized) effect modifiers.
"""
return self.estimator._estimate_conditional_effects(
self.estimator._estimate_effect_fn, effect_modifiers, num_quantiles
)
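# Hedged sketch of conditional-effect estimation, assuming the estimator was
# fitted with an effect modifier column named "X0" (an illustrative assumption):
conditional_effects = estimate.estimate_conditional_effects(effect_modifiers=["X0"], num_quantiles=4)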
def interpret(self, method_name=None, **kwargs):
"""Interpret the causal estimate.
:param method_name: Method used (string) or a list of methods. If None, then the default for the specific estimator is used.
:param kwargs: Optional parameters that are directly passed to the interpreter method.
:returns: None
"""
if method_name is None:
method_name = self.estimator.interpret_method
method_name_arr = parse_state(method_name)
for method in method_name_arr:
interpreter = interpreters.get_class_object(method)
interpreter(self, **kwargs).interpret()
def __str__(self):
s = "*** Causal Estimate ***\n"
# No estimand was identified (identification failed)
if self.target_estimand is None:
return "Estimation failed! No relevant identified estimand available for this estimation method."
s += "\n## Identified estimand\n{0}".format(self.target_estimand.__str__(only_target_estimand=True))
s += "\n## Realized estimand\n{0}".format(self.realized_estimand_expr)
if hasattr(self, "estimator"):
s += "\nTarget units: {0}\n".format(self.estimator.target_units_tostr())
s += "\n## Estimate\n"
s += "Mean value: {0}\n".format(self.value)
s += ""
if hasattr(self, "cate_estimates"):
s += "Effect estimates: {0}\n".format(self.cate_estimates)
if hasattr(self, "estimator"):
if self.estimator._significance_test:
s += "p-value: {0}\n".format(self.estimator.signif_results_tostr(self.test_stat_significance()))
if self.estimator._confidence_intervals:
s += "{0}% confidence interval: {1}\n".format(
100 * self.estimator.confidence_level, self.get_confidence_intervals()
)
if self.conditional_estimates is not None:
s += "### Conditional Estimates\n"
s += str(self.conditional_estimates)
if self.effect_strength is not None:
s += "\n## Effect Strength\n"
s += "Change in outcome attributable to treatment: {}\n".format(self.effect_strength["fraction-effect"])
# s += "Variance in outcome explained by treatment: {}\n".format(self.effect_strength["r-squared"])
return s
class RealizedEstimand(object):
def __init__(self, identified_estimand, estimator_name):
self.treatment_variable = identified_estimand.treatment_variable
self.outcome_variable = identified_estimand.outcome_variable
self.backdoor_variables = identified_estimand.get_backdoor_variables()
self.instrumental_variables = identified_estimand.instrumental_variables
self.estimand_type = identified_estimand.estimand_type
self.estimand_expression = None
self.assumptions = None
self.estimator_name = estimator_name
def update_assumptions(self, estimator_assumptions):
self.assumptions = estimator_assumptions
def update_estimand_expression(self, estimand_expression):
self.estimand_expression = estimand_expression
def __str__(self):
s = "Realized estimand: {0}\n".format(self.estimator_name)
s += "Realized estimand type: {0}\n".format(self.estimand_type)
s += "Estimand expression:\n{0}\n".format(sp.pretty(self.estimand_expression))
j = 1
for ass_name, ass_str in self.assumptions.items():
s += "Estimand assumption {0}, {1}: {2}\n".format(j, ass_name, ass_str)
j += 1
return s
| import copy
import logging
from collections import namedtuple
from typing import Dict, List, Optional, Union
import numpy as np
import pandas as pd
import sympy as sp
from sklearn.utils import resample
import dowhy.interpreters as interpreters
from dowhy.causal_identifier.identified_estimand import IdentifiedEstimand
from dowhy.utils.api import parse_state
logger = logging.getLogger(__name__)
class CausalEstimator:
"""Base class for an estimator of causal effect.
Subclasses implement different estimation methods. All estimation methods are in the package "dowhy.causal_estimators"
"""
# The default number of simulations for statistical testing
DEFAULT_NUMBER_OF_SIMULATIONS_STAT_TEST = 1000
# The default number of simulations to obtain confidence intervals
DEFAULT_NUMBER_OF_SIMULATIONS_CI = 100
# The portion of the total size that should be taken each time to find the confidence intervals
# 1 is the recommended value
# https://ocw.mit.edu/courses/mathematics/18-05-introduction-to-probability-and-statistics-spring-2014/readings/MIT18_05S14_Reading24.pdf
# https://projecteuclid.org/download/pdf_1/euclid.ss/1032280214
DEFAULT_SAMPLE_SIZE_FRACTION = 1
# The default Confidence Level
DEFAULT_CONFIDENCE_LEVEL = 0.95
# Number of quantiles to discretize continuous columns, for applying groupby
NUM_QUANTILES_TO_DISCRETIZE_CONT_COLS = 5
# Prefix to add to temporary categorical variables created after discretization
TEMP_CAT_COLUMN_PREFIX = "__categorical__"
DEFAULT_NOTIMPLEMENTEDERROR_MSG = "not yet implemented for {0}. If you would like this to be implemented in the next version, please raise an issue at https://github.com/microsoft/dowhy/issues"
BootstrapEstimates = namedtuple("BootstrapEstimates", ["estimates", "params"])
DEFAULT_INTERPRET_METHOD = ["textual_effect_interpreter"]
# std args to be removed from locals() before being passed to args_dict
_STD_INIT_ARGS = ("self", "__class__", "args", "kwargs")
def __init__(
self,
identified_estimand: IdentifiedEstimand,
test_significance: bool = False,
evaluate_effect_strength: bool = False,
confidence_intervals: bool = False,
num_null_simulations: int = DEFAULT_NUMBER_OF_SIMULATIONS_STAT_TEST,
num_simulations: int = DEFAULT_NUMBER_OF_SIMULATIONS_CI,
sample_size_fraction: int = DEFAULT_SAMPLE_SIZE_FRACTION,
confidence_level: float = DEFAULT_CONFIDENCE_LEVEL,
need_conditional_estimates: Union[bool, str] = "auto",
num_quantiles_to_discretize_cont_cols: int = NUM_QUANTILES_TO_DISCRETIZE_CONT_COLS,
**_,
):
"""Initializes an estimator with data and names of relevant variables.
This method is called from the constructors of its child classes.
:param identified_estimand: probability expression
representing the target identified estimand to estimate.
:param test_significance: Binary flag or a string indicating whether to test significance and by which method. All estimators support test_significance="bootstrap" that estimates a p-value for the obtained estimate using the bootstrap method. Individual estimators can override this to support custom testing methods. The bootstrap method supports an optional parameter, num_null_simulations. If False, no testing is done. If True, significance of the estimate is tested using the custom method if available, otherwise by bootstrap.
:param evaluate_effect_strength: (Experimental) whether to evaluate the strength of effect
:param confidence_intervals: Binary flag or a string indicating whether the confidence intervals should be computed and which method should be used. All methods support estimation of confidence intervals using the bootstrap method by using the parameter confidence_intervals="bootstrap". The bootstrap method takes in two arguments (num_simulations and sample_size_fraction) that can be optionally specified in the params dictionary. Estimators may also override this to implement their own confidence interval method. If this parameter is False, no confidence intervals are computed. If True, confidence intervals are computed by the estimator's specific method if available, otherwise through bootstrap
:param num_null_simulations: The number of simulations for testing the
statistical significance of the estimator
:param num_simulations: The number of simulations for finding the
confidence interval (and/or standard error) for an estimate
:param sample_size_fraction: The size of the sample for the bootstrap
estimator
:param confidence_level: The confidence level of the confidence
interval estimate
:param need_conditional_estimates: Boolean flag indicating whether
conditional estimates should be computed. Defaults to True if
there are effect modifiers in the graph
:param num_quantiles_to_discretize_cont_cols: The number of quantiles
into which a numeric effect modifier is split, to enable
estimation of conditional treatment effect over it.
:param kwargs: (optional) Additional estimator-specific parameters
:returns: an instance of the estimator class.
"""
self._target_estimand = identified_estimand
self._significance_test = test_significance
self._effect_strength_eval = evaluate_effect_strength
self._confidence_intervals = confidence_intervals
# Setting the default interpret method
self.interpret_method = CausalEstimator.DEFAULT_INTERPRET_METHOD
self.logger = logging.getLogger(__name__)
# Check if some parameters were set, otherwise set to default values
self.num_null_simulations = num_null_simulations
self.num_simulations = num_simulations
self.sample_size_fraction = sample_size_fraction
self.confidence_level = confidence_level
self.num_quantiles_to_discretize_cont_cols = num_quantiles_to_discretize_cont_cols
# Estimate conditional estimates by default
self.need_conditional_estimates = need_conditional_estimates
self._bootstrap_estimates = None
self._bootstrap_null_estimates = None
def _set_data(self, data: pd.DataFrame, treatment_name: List[str], outcome_name: List[str]):
"""Sets the data for the estimator
:param data: data frame containing the data
:param treatment_name: name of the treatment variable
:param outcome_name: name of the outcome variable
"""
self._data = data
self._treatment_name = treatment_name
self._outcome_name = outcome_name[0]
self._treatment = self._data[self._treatment_name]
self._outcome = self._data[self._outcome_name]
def _set_effect_modifiers(self, effect_modifier_names: Optional[List[str]] = None):
"""Sets the effect modifiers for the estimator
Modifies need_conditional_estimates accordingly to effect modifiers value
:param effect_modifiers: Variables on which to compute separate
effects, or return a heterogeneous effect function. Not all
methods support this currently.
"""
self._effect_modifiers = effect_modifier_names
if effect_modifier_names is not None:
self._effect_modifier_names = [cname for cname in effect_modifier_names if cname in self._data.columns]
if len(self._effect_modifier_names) > 0:
self._effect_modifiers = self._data[self._effect_modifier_names]
self._effect_modifiers = pd.get_dummies(self._effect_modifiers, drop_first=True)
self.logger.debug("Effect modifiers: " + ",".join(self._effect_modifier_names))
else:
self._effect_modifier_names = []
else:
self._effect_modifier_names = []
self.need_conditional_estimates = (
self.need_conditional_estimates
if self.need_conditional_estimates != "auto"
else (self._effect_modifier_names and len(self._effect_modifier_names) > 0)
)
def _set_identified_estimand(self, new_identified_estimand):
"""Method used internally to change the target estimand (required by some refuters)
:param new_identified_estimand: The new target_estimand to use
"""
self._target_estimand = new_identified_estimand
def get_new_estimator_object(
self,
identified_estimand,
test_significance=False,
evaluate_effect_strength=False,
confidence_intervals=None,
):
"""Create a new estimator of the same type as the one passed in the estimate argument.
Creates a new object with the identified_estimand
:param identified_estimand: IdentifiedEstimand
An instance of the identified estimand class that provides the information with
respect to which causal pathways are employed when the treatment affects the outcome
:returns: A new instance of the same estimator class that had generated the given estimate.
"""
new_estimator = copy.deepcopy(self)
new_estimator._target_estimand = identified_estimand
new_estimator._test_significance = test_significance
new_estimator._evaluate_effect_strength = evaluate_effect_strength
new_estimator._confidence_intervals = (
self._confidence_intervals if confidence_intervals is None else confidence_intervals
)
return new_estimator
def estimate_effect_naive(self):
# TODO Only works for binary treatment
df_withtreatment = self._data.loc[self._data[self._treatment_name] == 1]
df_notreatment = self._data.loc[self._data[self._treatment_name] == 0]
est = np.mean(df_withtreatment[self._outcome_name]) - np.mean(df_notreatment[self._outcome_name])
return CausalEstimate(est, None, None, control_value=0, treatment_value=1)
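# Minimal standalone illustration of the naive difference-in-means computed above
# (synthetic toy data; not part of the estimator API):
import pandas as pd

toy = pd.DataFrame({"t": [0, 0, 1, 1], "y": [1.0, 2.0, 4.0, 6.0]})
naive_ate = toy.loc[toy["t"] == 1, "y"].mean() - toy.loc[toy["t"] == 0, "y"].mean()
# naive_ate == 3.5: treated mean (5.0) minus control mean (1.5)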
def _estimate_effect_fn(self, data_df):
"""Function used in conditional effect estimation. This function is to be overridden by each child estimator.
The overridden function should take in a dataframe as input and return the estimate for that data.
"""
raise NotImplementedError(
("Conditional treatment effects are " + CausalEstimator.DEFAULT_NOTIMPLEMENTEDERROR_MSG).format(
self.__class__
)
)
def _estimate_conditional_effects(self, estimate_effect_fn, effect_modifier_names=None, num_quantiles=None):
"""Estimate conditional treatment effects. Common method for all estimators that utilizes a specific estimate_effect_fn implemented by each child estimator.
If a numeric effect modifier is provided, it is discretized into quantile bins. If you would like a custom discretization, you can do so yourself: create a new column containing the discretized effect modifier and then include that column's name in the effect_modifier_names argument.
:param estimate_effect_fn: Function that has a single parameter (a data frame) and returns the treatment effect estimate on that data.
:param effect_modifier_names: Names of effect modifier variables over which the conditional effects will be estimated. If not provided, defaults to the effect modifiers specified during creation of the CausalEstimator object.
:param num_quantiles: The number of quantiles into which a numeric effect modifier variable is discretized. Does not affect any categorical effect modifiers.
:returns: A (multi-index) dataframe that provides separate effects for each value of the (discretized) effect modifiers.
"""
# Defaulting to class default values if parameters are not provided
if effect_modifier_names is None:
effect_modifier_names = self._effect_modifier_names
if num_quantiles is None:
num_quantiles = self.num_quantiles_to_discretize_cont_cols
# Checking that there is at least one effect modifier
if not effect_modifier_names:
raise ValueError("At least one effect modifier should be specified to compute conditional effects.")
# Making sure that effect_modifier_names is a list
effect_modifier_names = parse_state(effect_modifier_names)
if not all(em in self._effect_modifier_names for em in effect_modifier_names):
self.logger.warning(
"At least one of the provided effect modifiers was not included while fitting the estimator. You may get incorrect results. To resolve, fit the estimator again by providing the updated effect modifiers in estimate_effect()."
)
# Making a copy since we are going to be changing effect modifier names
effect_modifier_names = effect_modifier_names.copy()
prefix = CausalEstimator.TEMP_CAT_COLUMN_PREFIX
# For every numeric effect modifier, adding a temp categorical column
for i in range(len(effect_modifier_names)):
em = effect_modifier_names[i]
if pd.api.types.is_numeric_dtype(self._data[em].dtypes):
self._data[prefix + str(em)] = pd.qcut(self._data[em], num_quantiles, duplicates="drop")
effect_modifier_names[i] = prefix + str(em)
# Grouping by effect modifiers and computing effect separately
by_effect_mods = self._data.groupby(effect_modifier_names)
cond_est_fn = lambda x: self._do(self._treatment_value, x) - self._do(self._control_value, x)
conditional_estimates = by_effect_mods.apply(estimate_effect_fn)
# Deleting the temporary categorical columns
for em in effect_modifier_names:
if em.startswith(prefix):
self._data.pop(em)
return conditional_estimates
def _do(self, x, data_df=None):
raise NotImplementedError(
("Do-operator is " + CausalEstimator.DEFAULT_NOTIMPLEMENTEDERROR_MSG).format(self.__class__)
)
def do(self, x, data_df=None):
"""Method that implements the do-operator.
Given a value x for the treatment, returns the expected value of the outcome when the treatment is set to x by intervention.
:param x: Value of the treatment
:param data_df: Data on which the do-operator is to be applied.
:returns: Value of the outcome when treatment is intervened/set to x.
"""
est = self._do(x, data_df)
return est
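# Hedged sketch of the do-operator, assuming `estimator` is a fitted child
# estimator that implements _do() for a binary treatment:
y_under_treatment = estimator.do(1)  # E[Y | do(T=1)]
y_under_control = estimator.do(0)  # E[Y | do(T=0)]
effect = y_under_treatment - y_under_control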
def construct_symbolic_estimator(self, estimand):
raise NotImplementedError(("Symbolic estimator string is ").format(self.__class__))
def _generate_bootstrap_estimates(self, num_bootstrap_simulations, sample_size_fraction):
"""Helper function to generate causal estimates over bootstrapped samples.
:param num_bootstrap_simulations: Number of simulations for the bootstrap method.
:param sample_size_fraction: Fraction of the dataset to be resampled.
:returns: A collections.namedtuple containing a list of bootstrapped estimates and a dictionary containing parameters used for the bootstrap.
"""
# The array that stores the results of all estimations
simulation_results = np.zeros(num_bootstrap_simulations)
# Find the sample size as a proportion of the population size
sample_size = int(sample_size_fraction * len(self._data))
if sample_size > len(self._data):
self.logger.warning("WARN: The sample size is greater than the data being sampled")
self.logger.info("INFO: The sample size: {}".format(sample_size))
self.logger.info("INFO: The number of simulations: {}".format(num_bootstrap_simulations))
# Perform the set number of simulations
for index in range(num_bootstrap_simulations):
new_data = resample(self._data, n_samples=sample_size)
new_estimator = self.get_new_estimator_object(
self._target_estimand,
# names of treatment and outcome
test_significance=False,
evaluate_effect_strength=False,
confidence_intervals=False,
)
new_estimator.fit(
new_data,
self._target_estimand.treatment_variable,
self._target_estimand.outcome_variable,
effect_modifier_names=self._effect_modifier_names,
)
new_effect = new_estimator.estimate_effect(
treatment_value=self._treatment_value,
control_value=self._control_value,
target_units=self._target_units,
)
simulation_results[index] = new_effect.value
estimates = CausalEstimator.BootstrapEstimates(
simulation_results,
{"num_simulations": num_bootstrap_simulations, "sample_size_fraction": sample_size_fraction},
)
return estimates
def _estimate_confidence_intervals_with_bootstrap(
self, estimate_value, confidence_level=None, num_simulations=None, sample_size_fraction=None
):
"""
Method to compute confidence interval using bootstrapped sampling.
:param estimate_value: obtained estimate's value
:param confidence_level: The level for which to compute CI (e.g., 95% confidence level translates to confidence_level=0.95)
:param num_simulations: The number of simulations to be performed to get the bootstrap confidence intervals.
:param sample_size_fraction: The fraction of the dataset to be resampled.
:returns: confidence interval at the specified level.
For more details on bootstrap or resampling statistics, refer to the following links:
https://ocw.mit.edu/courses/mathematics/18-05-introduction-to-probability-and-statistics-spring-2014/readings/MIT18_05S14_Reading24.pdf
https://projecteuclid.org/download/pdf_1/euclid.ss/1032280214
"""
# Using class default parameters if not specified
if num_simulations is None:
num_simulations = self.num_simulations
if sample_size_fraction is None:
sample_size_fraction = self.sample_size_fraction
# Checking if bootstrap_estimates are already computed
if self._bootstrap_estimates is None:
self._bootstrap_estimates = self._generate_bootstrap_estimates(num_simulations, sample_size_fraction)
elif CausalEstimator.is_bootstrap_parameter_changed(self._bootstrap_estimates.params, locals()):
# Checked if any parameter is changed from the previous std error estimate
self._bootstrap_estimates = self._generate_bootstrap_estimates(num_simulations, sample_size_fraction)
# Now use the data obtained from the simulations to get the value of the confidence estimates
bootstrap_estimates = self._bootstrap_estimates.estimates
# Get the variations of each bootstrap estimate and sort
bootstrap_variations = [bootstrap_estimate - estimate_value for bootstrap_estimate in bootstrap_estimates]
sorted_bootstrap_variations = np.sort(bootstrap_variations)
# Now we take the (1- p)th and the (p)th variations, where p is the chosen confidence level
upper_bound_index = int((1 - confidence_level) * len(sorted_bootstrap_variations))
lower_bound_index = int(confidence_level * len(sorted_bootstrap_variations))
# Get the lower and upper bounds by subtracting the variations from the estimate
lower_bound = estimate_value - sorted_bootstrap_variations[lower_bound_index]
upper_bound = estimate_value - sorted_bootstrap_variations[upper_bound_index]
return lower_bound, upper_bound
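# Standalone numeric sketch of the pivotal bootstrap interval computed above
# (synthetic bootstrap estimates; the numbers are illustrative only):
import numpy as np

rng = np.random.default_rng(0)
bootstrap_estimates = rng.normal(loc=2.0, scale=0.3, size=1000)
estimate_value, confidence_level = 2.0, 0.95
variations = np.sort(bootstrap_estimates - estimate_value)
upper_idx = int((1 - confidence_level) * len(variations))
lower_idx = int(confidence_level * len(variations))
lower = estimate_value - variations[lower_idx]
upper = estimate_value - variations[upper_idx]
# lower < estimate_value < upper, mirroring the bounds returned above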
def _estimate_confidence_intervals(self, confidence_level=None, method=None, **kwargs):
"""
This method is to be overridden by the child classes, so that they
can run a confidence interval estimation method suited to the specific
causal estimator.
"""
raise NotImplementedError(
(
"This method for estimating confidence intervals is "
+ CausalEstimator.DEFAULT_NOTIMPLEMENTEDERROR_MSG
+ " Meanwhile, you can try the bootstrap method (method='bootstrap') to estimate confidence intervals."
).format(self.__class__)
)
def estimate_confidence_intervals(self, estimate_value, confidence_level=None, method=None, **kwargs):
"""Find the confidence intervals corresponding to any estimator
By default, this is done with the help of bootstrapped confidence intervals
but can be overridden if the specific estimator implements other methods of estimating confidence intervals.
If the method provided is not bootstrap, this function calls the implementation of the specific estimator.
:param estimate_value: obtained estimate's value
:param method: Method for estimating confidence intervals.
:param confidence_level: The confidence level of the confidence intervals of the estimate.
:param kwargs: Other optional args to be passed to the CI method.
:returns: The obtained confidence interval.
"""
if method is None:
if self._confidence_intervals:
method = self._confidence_intervals # this is either True or methodname
else:
method = "default"
confidence_intervals = None
if confidence_level is None:
confidence_level = self.confidence_level
if method == "default" or method is True: # user has not provided any method
try:
confidence_intervals = self._estimate_confidence_intervals(confidence_level, method=method, **kwargs)
except NotImplementedError:
confidence_intervals = self._estimate_confidence_intervals_with_bootstrap(
estimate_value, confidence_level, **kwargs
)
else:
if method == "bootstrap":
confidence_intervals = self._estimate_confidence_intervals_with_bootstrap(
estimate_value, confidence_level, **kwargs
)
else:
confidence_intervals = self._estimate_confidence_intervals(confidence_level, method=method, **kwargs)
return confidence_intervals
def _estimate_std_error_with_bootstrap(self, num_simulations=None, sample_size_fraction=None):
"""Compute standard error using the bootstrap method. Standard error
and confidence intervals use the same parameter num_simulations for
the number of bootstrap simulations.
:param num_simulations: Number of bootstrapped samples.
:param sample_size_fraction: Fraction of data to be resampled.
:returns: Standard error of the obtained estimate.
"""
# Use existing params, if new user defined params are not present
if num_simulations is None:
num_simulations = self.num_simulations
if sample_size_fraction is None:
sample_size_fraction = self.sample_size_fraction
# Checking if bootstrap_estimates are already computed
if self._bootstrap_estimates is None:
self._bootstrap_estimates = self._generate_bootstrap_estimates(num_simulations, sample_size_fraction)
elif CausalEstimator.is_bootstrap_parameter_changed(self._bootstrap_estimates.params, locals()):
# Check if any parameter is changed from the previous std error estimate
self._bootstrap_estimates = self._generate_bootstrap_estimates(num_simulations, sample_size_fraction)
std_error = np.std(self._bootstrap_estimates.estimates)
return std_error
def _estimate_std_error(self, method=None, **kwargs):
"""
This method is to be overridden by the child classes, so that they
can run a standard error estimation method suited to the specific
causal estimator.
"""
raise NotImplementedError(
(
"This method for estimating standard errors is "
+ CausalEstimator.DEFAULT_NOTIMPLEMENTEDERROR_MSG
+ " Meanwhile, you can try the bootstrap method (method='bootstrap') to estimate standard errors."
).format(self.__class__)
)
def estimate_std_error(self, method=None, **kwargs):
"""Compute standard error of an obtained causal estimate.
:param method: Method for computing the standard error.
:param kwargs: Other optional parameters to be passed to the estimating method.
:returns: Standard error of the causal estimate.
"""
if method is None:
if self._confidence_intervals:
method = self._confidence_intervals
else:
method = "default"
std_error = None
if method == "default" or method is True: # user has not provided any method
try:
std_error = self._estimate_std_error(method, **kwargs)
except NotImplementedError:
std_error = self._estimate_std_error_with_bootstrap(**kwargs)
else:
if method == "bootstrap":
std_error = self._estimate_std_error_with_bootstrap(**kwargs)
else:
std_error = self._estimate_std_error(method, **kwargs)
return std_error
def _test_significance_with_bootstrap(self, estimate_value, num_null_simulations=None):
"""Test statistical significance of an estimate using the bootstrap method.
:param estimate_value: Obtained estimate's value
:param num_null_simulations: Number of simulations for the null hypothesis
:returns: p-value of the statistical significance test.
"""
# Use existing params, if new user defined params are not present
if num_null_simulations is None:
num_null_simulations = self.num_null_simulations
do_retest = self._bootstrap_null_estimates is None or CausalEstimator.is_bootstrap_parameter_changed(
self._bootstrap_null_estimates.params, locals()
)
if do_retest:
null_estimates = np.zeros(num_null_simulations)
for i in range(num_null_simulations):
new_outcome = np.random.permutation(self._outcome)
new_data = self._data.assign(dummy_outcome=new_outcome)
# self._outcome = self._data["dummy_outcome"]
new_estimator = self.get_new_estimator_object(
self._target_estimand,
test_significance=False,
evaluate_effect_strength=False,
confidence_intervals=False,
)
new_estimator.fit(
data=new_data,
treatment_name=self._target_estimand.treatment_variable,
outcome_name=("dummy_outcome",),
effect_modifier_names=self._effect_modifier_names,
)
new_effect = new_estimator.estimate_effect(
target_units=self._target_units,
)
null_estimates[i] = new_effect.value
self._bootstrap_null_estimates = CausalEstimator.BootstrapEstimates(
null_estimates, {"num_null_simulations": num_null_simulations, "sample_size_fraction": 1}
)
# Processing the null hypothesis estimates
sorted_null_estimates = np.sort(self._bootstrap_null_estimates.estimates)
self.logger.debug("Null estimates: {0}".format(sorted_null_estimates))
median_estimate = sorted_null_estimates[int(num_null_simulations / 2)]
# Doing a two-sided test
if estimate_value > median_estimate:
# Being conservative with the p-value reported
estimate_index = np.searchsorted(sorted_null_estimates, estimate_value, side="left")
p_value = 1 - (estimate_index / num_null_simulations)
if estimate_value <= median_estimate:
# Being conservative with the p-value reported
estimate_index = np.searchsorted(sorted_null_estimates, estimate_value, side="right")
p_value = estimate_index / num_null_simulations
# If the estimate_index is 0, it depends on the number of simulations
if p_value == 0:
p_value = (0, 1 / len(sorted_null_estimates)) # a tuple determining the range.
elif p_value == 1:
p_value = (1 - 1 / len(sorted_null_estimates), 1)
signif_dict = {"p_value": p_value}
return signif_dict
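# Standalone sketch of the two-sided bootstrap p-value logic above
# (synthetic null distribution; illustrative only):
import numpy as np

rng = np.random.default_rng(1)
sorted_null = np.sort(rng.normal(loc=0.0, scale=1.0, size=1000))
estimate_value = 2.1
median = sorted_null[len(sorted_null) // 2]
if estimate_value > median:
    idx = np.searchsorted(sorted_null, estimate_value, side="left")
    p_value = 1 - idx / len(sorted_null)
else:
    idx = np.searchsorted(sorted_null, estimate_value, side="right")
    p_value = idx / len(sorted_null)
if p_value == 0:
    p_value = (0, 1 / len(sorted_null))  # report a range instead of exactly zero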
def _test_significance(self, estimate_value, method=None, **kwargs):
"""
This method is to be overridden by the child classes, so that they
can run a significance test suited to the specific
causal estimator.
"""
raise NotImplementedError(
(
"This method for testing statistical significance is "
+ CausalEstimator.DEFAULT_NOTIMPLEMENTEDERROR_MSG
+ " Meanwhile, you can try the bootstrap method (method='bootstrap') to test statistical significance."
).format(self.__class__)
)
def test_significance(self, estimate_value, method=None, **kwargs):
"""Test statistical significance of obtained estimate.
By default, uses resampling to create a non-parametric significance test.
This is a general procedure; individual child estimators can implement different methods.
If the method name is different from "bootstrap", this function calls the
implementation of the child estimator.
:param self: object instance of class Estimator
:param estimate_value: obtained estimate's value
:param method: Method for checking statistical significance
:returns: p-value from the significance test
"""
if method is None:
if self._significance_test:
method = self._significance_test # this is either True or methodname
else:
method = "default"
signif_dict = None
if method == "default" or method is True: # user has not provided any method
try:
signif_dict = self._test_significance(estimate_value, method, **kwargs)
except NotImplementedError:
signif_dict = self._test_significance_with_bootstrap(estimate_value, **kwargs)
else:
if method == "bootstrap":
signif_dict = self._test_significance_with_bootstrap(estimate_value, **kwargs)
else:
signif_dict = self._test_significance(estimate_value, method, **kwargs)
return signif_dict
def evaluate_effect_strength(self, estimate):
fraction_effect_explained = self._evaluate_effect_strength(estimate, method="fraction-effect")
# Need to test r-squared before supporting
# effect_r_squared = self._evaluate_effect_strength(estimate, method="r-squared")
strength_dict = {
"fraction-effect": fraction_effect_explained
# 'r-squared': effect_r_squared
}
return strength_dict
def _evaluate_effect_strength(self, estimate, method="fraction-effect"):
supported_methods = ["fraction-effect"]
if method not in supported_methods:
raise NotImplementedError("This method is not supported for evaluating effect strength")
if method == "fraction-effect":
naive_obs_estimate = self.estimate_effect_naive()
self.logger.debug("%s %s", estimate.value, naive_obs_estimate.value)
fraction_effect_explained = estimate.value / naive_obs_estimate.value
return fraction_effect_explained
# elif method == "r-squared":
# outcome_mean = np.mean(self._outcome)
# total_variance = np.sum(np.square(self._outcome - outcome_mean))
# Assuming a linear model with one variable: the treatment
# Currently only works for continuous y
# causal_model = outcome_mean + estimate.value*self._treatment
# squared_residual = np.sum(np.square(self._outcome - causal_model))
# r_squared = 1 - (squared_residual/total_variance)
# return r_squared
else:
return None
def update_input(self, treatment_value, control_value, target_units):
self._control_value = control_value
self._treatment_value = treatment_value
self._target_units = target_units
@staticmethod
def is_bootstrap_parameter_changed(bootstrap_estimates_params, given_params):
"""Check whether parameters of the bootstrap have changed.
This is an efficiency method that checks if fresh resampling of the bootstrap samples is required.
Returns True if parameters have changed and resampling should be done again.
:param bootstrap_estimates_params: A dictionary of parameters for the current bootstrap samples
:param given_params: A dictionary of parameters passed by the user
:returns: A binary flag denoting whether the parameters are different.
"""
is_any_parameter_changed = False
for prm, val in bootstrap_estimates_params.items():
given_val = given_params.get(prm, None)
if given_val is not None and given_val != val:
is_any_parameter_changed = True
break
return is_any_parameter_changed
def target_units_tostr(self):
s = ""
if type(self._target_units) is str:
s += self._target_units
elif callable(self._target_units):
s += "Data subset defined by a function"
elif isinstance(self._target_units, pd.DataFrame):
s += "Data subset provided as a data frame"
return s
def signif_results_tostr(self, signif_results):
s = ""
pval = signif_results["p_value"]
if type(pval) is tuple:
s += "[{0}, {1}]".format(pval[0], pval[1])
else:
s += "{0}".format(pval)
return s
def estimate_effect(
data: pd.DataFrame,
treatment: Union[str, List[str]],
outcome: Union[str, List[str]],
identifier_name: str,
estimator: CausalEstimator,
control_value: int = 0,
treatment_value: int = 1,
target_units: str = "ate",
effect_modifiers: List[str] = None,
fit_estimator: bool = True,
method_params: Optional[Dict] = None,
):
"""Estimate the identified causal effect.
You can directly call any of the EconML estimation methods. The convention is "backdoor.econml.path-to-estimator-class". For example, for the double machine learning estimator ("DML" class) that is located inside the "dml" module of EconML, you can use the method name "backdoor.econml.dml.DML". CausalML estimators can also be called. See `this demo notebook <https://py-why.github.io/dowhy/example_notebooks/dowhy-conditional-treatment-effects.html>`_.
:param data: data frame containing the data
:param treatment: Name of the treatment
:param outcome: Name of the outcome
:param identifier_name: name of the identification method used (e.g., "backdoor" or "iv")
:param estimator: Instance of a CausalEstimator to use
:param control_value: Value of the treatment in the control group, for effect estimation. If treatment is multi-variate, this can be a list.
:param treatment_value: Value of the treatment in the treated group, for effect estimation. If treatment is multi-variate, this can be a list.
:param target_units: (Experimental) The units for which the treatment effect should be estimated. This can be of three types. (1) a string for common specifications of target units (namely, "ate", "att" and "atc"), (2) a lambda function that can be used as an index for the data (pandas DataFrame), or (3) a new DataFrame that contains values of the effect_modifiers and effect will be estimated only for this new data.
:param effect_modifiers: Names of effect modifier variables can be (optionally) specified here too, since they do not affect identification. If None, the effect_modifiers from the CausalModel are used.
:param fit_estimator: Boolean flag on whether to fit the estimator.
Setting it to False is useful to estimate the effect on new data using a previously fitted estimator.
:param method_params: Dictionary containing any method-specific parameters. If it includes a "fit_params" key, those entries are passed to the estimator's fit() method.
:returns: An instance of the CausalEstimate class, containing the causal effect estimate
and other method-dependent information
"""
if effect_modifiers is None:
effect_modifiers = []
treatment = parse_state(treatment)
outcome = parse_state(outcome)
causal_estimator_class = estimator.__class__
identified_estimand = estimator._target_estimand
identified_estimand.set_identifier_method(identifier_name)
if identified_estimand.no_directed_path:
logger.warning("No directed path from {0} to {1}.".format(treatment, outcome))
return CausalEstimate(
0, identified_estimand, None, control_value=control_value, treatment_value=treatment_value
)
# Check if estimator's target estimand is identified
elif identified_estimand.estimands[identifier_name] is None:
logger.error("No valid identified estimand available.")
return CausalEstimate(None, None, None, control_value=control_value, treatment_value=treatment_value)
if fit_estimator:
estimator.fit(
data=data,
treatment_name=treatment,
outcome_name=outcome,
effect_modifier_names=effect_modifiers,
**method_params["fit_params"] if "fit_params" in method_params else {},
)
estimate = estimator.estimate_effect(
treatment_value=treatment_value,
control_value=control_value,
target_units=target_units,
confidence_intervals=estimator._confidence_intervals,
)
# Store parameters inside estimate object for refutation methods
# TODO: This add_params needs to move to the estimator class
# inside estimate_effect and estimate_conditional_effect
estimate.add_params(
estimand_type=identified_estimand.estimand_type,
estimator_class=causal_estimator_class,
test_significance=estimator._significance_test,
evaluate_effect_strength=estimator._effect_strength_eval,
confidence_intervals=estimator._confidence_intervals,
target_units=target_units,
effect_modifiers=effect_modifiers,
)
if estimator._significance_test:
estimator.test_significance(estimate.value, method=estimator._significance_test)
if estimator._confidence_intervals:
estimator.estimate_confidence_intervals(
estimate.value, confidence_level=estimator.confidence_level, method=estimator._confidence_intervals
)
if estimator._effect_strength_eval:
effect_strength_dict = estimator.evaluate_effect_strength(estimate)
estimate.add_effect_strength(effect_strength_dict)
return estimate
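# Hedged end-to-end sketch of the refactored fit-then-estimate API above. The
# estimator class and its constructor arguments, plus `df`, `estimand`, and the
# column names, are assumptions for illustration:
from dowhy.causal_estimators.propensity_score_matching_estimator import (
    PropensityScoreMatchingEstimator,
)

estimator = PropensityScoreMatchingEstimator(identified_estimand=estimand)
estimate = estimate_effect(
    data=df,
    treatment="v0",
    outcome="y",
    identifier_name="backdoor",
    estimator=estimator,
    control_value=0,
    treatment_value=1,
    target_units="ate",
    fit_estimator=True,
    method_params={},
)
print(estimate.value)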
class CausalEstimate:
"""Class for the estimate object that every causal estimator returns"""
def __init__(
self,
estimate,
target_estimand,
realized_estimand_expr,
control_value,
treatment_value,
conditional_estimates=None,
**kwargs,
):
self.value = estimate
self.target_estimand = target_estimand
self.realized_estimand_expr = realized_estimand_expr
self.control_value = control_value
self.treatment_value = treatment_value
self.conditional_estimates = conditional_estimates
self.params = kwargs
if self.params is not None:
for key, value in self.params.items():
setattr(self, key, value)
self.effect_strength = None
def add_estimator(self, estimator_instance):
self.estimator = estimator_instance
def add_effect_strength(self, strength_dict):
self.effect_strength = strength_dict
def add_params(self, **kwargs):
self.params.update(kwargs)
def get_confidence_intervals(self, confidence_level=None, method=None, **kwargs):
"""Get confidence intervals of the obtained estimate.
By default, this is done with the help of bootstrapped confidence intervals
but can be overridden if the specific estimator implements other methods of estimating confidence intervals.
If the method provided is not bootstrap, this function calls the implementation of the specific estimator.
:param method: Method for estimating confidence intervals.
:param confidence_level: The confidence level of the confidence intervals of the estimate.
:param kwargs: Other optional args to be passed to the CI method.
:returns: The obtained confidence interval.
"""
confidence_intervals = self.estimator.estimate_confidence_intervals(
estimate_value=self.value, confidence_level=confidence_level, method=method, **kwargs
)
return confidence_intervals
def get_standard_error(self, method=None, **kwargs):
"""Get standard error of the obtained estimate.
By default, this is done with the help of bootstrapped standard errors
but can be overridden if the specific estimator implements other methods of estimating standard error.
If the method provided is not bootstrap, this function calls the implementation of the specific estimator.
:param method: Method for computing the standard error.
:param kwargs: Other optional parameters to be passed to the estimating method.
:returns: Standard error of the causal estimate.
"""
std_error = self.estimator.estimate_std_error(method=method, **kwargs)
return std_error
def test_stat_significance(self, method=None, **kwargs):
"""Test statistical significance of the estimate obtained.
By default, uses resampling to create a non-parametric significance test.
Individual child estimators can implement different methods.
If the method name is different from "bootstrap", this function calls the
implementation of the child estimator.
:param method: Method for checking statistical significance
:param kwargs: Other optional parameters to be passed to the estimating method.
:returns: p-value from the significance test
"""
signif_results = self.estimator.test_significance(self.value, method=method, **kwargs)
return {"p_value": signif_results["p_value"]}
def estimate_conditional_effects(
self, effect_modifiers=None, num_quantiles=CausalEstimator.NUM_QUANTILES_TO_DISCRETIZE_CONT_COLS
):
"""Estimate treatment effect conditioned on given variables.
If a numeric effect modifier is provided, it is discretized into quantile bins. If you would like a custom discretization, you can do so yourself: create a new column containing the discretized effect modifier and then include that column's name in the effect_modifier_names argument.
:param effect_modifiers: Names of effect modifier variables over which the conditional effects will be estimated. If not provided, defaults to the effect modifiers specified during creation of the CausalEstimator object.
:param num_quantiles: The number of quantiles into which a numeric effect modifier variable is discretized. Does not affect any categorical effect modifiers.
:returns: A (multi-index) dataframe that provides separate effects for each value of the (discretized) effect modifiers.
"""
return self.estimator._estimate_conditional_effects(
self.estimator._estimate_effect_fn, effect_modifiers, num_quantiles
)
def interpret(self, method_name=None, **kwargs):
"""Interpret the causal estimate.
:param method_name: Method used (string) or a list of methods. If None, then the default for the specific estimator is used.
:param kwargs: Optional parameters that are directly passed to the interpreter method.
:returns: None
"""
if method_name is None:
method_name = self.estimator.interpret_method
method_name_arr = parse_state(method_name)
for method in method_name_arr:
interpreter = interpreters.get_class_object(method)
interpreter(self, **kwargs).interpret()
def __str__(self):
s = "*** Causal Estimate ***\n"
# No estimand was identified (identification failed)
if self.target_estimand is None:
return "Estimation failed! No relevant identified estimand available for this estimation method."
s += "\n## Identified estimand\n{0}".format(self.target_estimand.__str__(only_target_estimand=True))
s += "\n## Realized estimand\n{0}".format(self.realized_estimand_expr)
if hasattr(self, "estimator"):
s += "\nTarget units: {0}\n".format(self.estimator.target_units_tostr())
s += "\n## Estimate\n"
s += "Mean value: {0}\n".format(self.value)
s += ""
if hasattr(self, "cate_estimates"):
s += "Effect estimates: {0}\n".format(self.cate_estimates)
if hasattr(self, "estimator"):
if self.estimator._significance_test:
s += "p-value: {0}\n".format(self.estimator.signif_results_tostr(self.test_stat_significance()))
if self.estimator._confidence_intervals:
s += "{0}% confidence interval: {1}\n".format(
100 * self.estimator.confidence_level, self.get_confidence_intervals()
)
if self.conditional_estimates is not None:
s += "### Conditional Estimates\n"
s += str(self.conditional_estimates)
if self.effect_strength is not None:
s += "\n## Effect Strength\n"
s += "Change in outcome attributable to treatment: {}\n".format(self.effect_strength["fraction-effect"])
# s += "Variance in outcome explained by treatment: {}\n".format(self.effect_strength["r-squared"])
return s
class RealizedEstimand(object):
def __init__(self, identified_estimand, estimator_name):
self.treatment_variable = identified_estimand.treatment_variable
self.outcome_variable = identified_estimand.outcome_variable
self.backdoor_variables = identified_estimand.get_backdoor_variables()
self.instrumental_variables = identified_estimand.instrumental_variables
self.estimand_type = identified_estimand.estimand_type
self.estimand_expression = None
self.assumptions = None
self.estimator_name = estimator_name
def update_assumptions(self, estimator_assumptions):
self.assumptions = estimator_assumptions
def update_estimand_expression(self, estimand_expression):
self.estimand_expression = estimand_expression
def __str__(self):
s = "Realized estimand: {0}\n".format(self.estimator_name)
s += "Realized estimand type: {0}\n".format(self.estimand_type)
s += "Estimand expression:\n{0}\n".format(sp.pretty(self.estimand_expression))
j = 1
for ass_name, ass_str in self.assumptions.items():
s += "Estimand assumption {0}, {1}: {2}\n".format(j, ass_name, ass_str)
j += 1
return s
| andresmor-ms | 11c4e0dafd6e824eb81ad14262457d954ae61468 | affe0952f4aba6845247355c171565510c2c1673 | `test_significance` is used only in `estimate.add_params`; this is because at this point the Estimator is already instantiated by the CausalModel and `test_significance` was already passed to the `CausalModel.estimate_effect` method, so we don't need it as a parameter here | andresmor-ms | 211
py-why/dowhy | 746 | Functional api/causal estimators | * Introduce `fit()` method to estimators.
* Refactor constructors to avoid using `*args` and `**kwargs` and have more explicit parameters.
* Refactor refuters and other parts of the code to use `fit()` and modify arguments to `estimate_effect()` | null | 2022-11-04 16:15:39+00:00 | 2022-12-03 17:07:53+00:00 | dowhy/causal_estimator.py | import logging
from collections import namedtuple
from typing import Dict, List, Optional, Union
import numpy as np
import pandas as pd
import sympy as sp
from sklearn.utils import resample
import dowhy.interpreters as interpreters
from dowhy import causal_estimators
from dowhy.causal_graph import CausalGraph
from dowhy.causal_identifier.identified_estimand import IdentifiedEstimand
from dowhy.utils.api import parse_state
logger = logging.getLogger(__name__)
class CausalEstimator:
"""Base class for an estimator of causal effect.
Subclasses implement different estimation methods. All estimation methods are in the package "dowhy.causal_estimators"
"""
# The default number of simulations for statistical testing
DEFAULT_NUMBER_OF_SIMULATIONS_STAT_TEST = 1000
# The default number of simulations to obtain confidence intervals
DEFAULT_NUMBER_OF_SIMULATIONS_CI = 100
# The portion of the total size that should be taken each time to find the confidence intervals
# 1 is the recommended value
# https://ocw.mit.edu/courses/mathematics/18-05-introduction-to-probability-and-statistics-spring-2014/readings/MIT18_05S14_Reading24.pdf
# https://projecteuclid.org/download/pdf_1/euclid.ss/1032280214
DEFAULT_SAMPLE_SIZE_FRACTION = 1
# The default Confidence Level
DEFAULT_CONFIDENCE_LEVEL = 0.95
# Number of quantiles to discretize continuous columns, for applying groupby
NUM_QUANTILES_TO_DISCRETIZE_CONT_COLS = 5
# Prefix to add to temporary categorical variables created after discretization
TEMP_CAT_COLUMN_PREFIX = "__categorical__"
DEFAULT_NOTIMPLEMENTEDERROR_MSG = "not yet implemented for {0}. If you would like this to be implemented in the next version, please raise an issue at https://github.com/microsoft/dowhy/issues"
BootstrapEstimates = namedtuple("BootstrapEstimates", ["estimates", "params"])
DEFAULT_INTERPRET_METHOD = ["textual_effect_interpreter"]
# std args to be removed from locals() before being passed to args_dict
_STD_INIT_ARGS = ("self", "__class__", "args", "kwargs")
def __init__(
self,
data,
identified_estimand,
treatment,
outcome,
control_value=0,
treatment_value=1,
test_significance=False,
evaluate_effect_strength=False,
confidence_intervals=False,
target_units=None,
effect_modifiers=None,
num_null_simulations=DEFAULT_NUMBER_OF_SIMULATIONS_STAT_TEST,
num_simulations=DEFAULT_NUMBER_OF_SIMULATIONS_CI,
sample_size_fraction=DEFAULT_SAMPLE_SIZE_FRACTION,
confidence_level=DEFAULT_CONFIDENCE_LEVEL,
need_conditional_estimates="auto",
num_quantiles_to_discretize_cont_cols=NUM_QUANTILES_TO_DISCRETIZE_CONT_COLS,
**kwargs,
):
"""Initializes an estimator with data and names of relevant variables.
This method is called from the constructors of its child classes.
:param data: data frame containing the data
:param identified_estimand: probability expression
representing the target identified estimand to estimate.
:param treatment: name of the treatment variable
:param outcome: name of the outcome variable
:param control_value: Value of the treatment in the control group, for effect estimation. If treatment is multi-variate, this can be a list.
:param treatment_value: Value of the treatment in the treated group, for effect estimation. If treatment is multi-variate, this can be a list.
:param test_significance: Binary flag or a string indicating whether to test significance and by which method. All estimators support test_significance="bootstrap" that estimates a p-value for the obtained estimate using the bootstrap method. Individual estimators can override this to support custom testing methods. The bootstrap method supports an optional parameter, num_null_simulations. If False, no testing is done. If True, significance of the estimate is tested using the custom method if available, otherwise by bootstrap.
:param evaluate_effect_strength: (Experimental) whether to evaluate the strength of effect
:param confidence_intervals: Binary flag or a string indicating whether the confidence intervals should be computed and which method should be used. All methods support estimation of confidence intervals using the bootstrap method by using the parameter confidence_intervals="bootstrap". The bootstrap method takes in two arguments (num_simulations and sample_size_fraction) that can be optionally specified in the params dictionary. Estimators may also override this to implement their own confidence interval method. If this parameter is False, no confidence intervals are computed. If True, confidence intervals are computed by the estimator's specific method if available, otherwise through bootstrap.
:param target_units: The units for which the treatment effect should be estimated. This can be a string for common specifications of target units (namely, "ate", "att" and "atc"). It can also be a lambda function that can be used as an index for the data (pandas DataFrame). Alternatively, it can be a new DataFrame that contains values of the effect_modifiers and effect will be estimated only for this new data.
:param effect_modifiers: Variables on which to compute separate
effects, or return a heterogeneous effect function. Not all
methods support this currently.
:param num_null_simulations: The number of simulations for testing the
statistical significance of the estimator
:param num_simulations: The number of simulations for finding the
confidence interval (and/or standard error) for an estimate
:param sample_size_fraction: The size of the sample for the bootstrap
estimator
:param confidence_level: The confidence level of the confidence
interval estimate
:param need_conditional_estimates: Boolean flag indicating whether
conditional estimates should be computed. Defaults to True if
there are effect modifiers in the graph
:param num_quantiles_to_discretize_cont_cols: The number of quantiles
into which a numeric effect modifier is split, to enable
estimation of conditional treatment effect over it.
:param kwargs: (optional) Additional estimator-specific parameters
:returns: an instance of the estimator class.
"""
self._data = data
self._target_estimand = identified_estimand
# Currently estimation methods only support univariate treatment and outcome
self._treatment_name = treatment
self._outcome_name = outcome[0] # assuming one-dimensional outcome
self._control_value = control_value
self._treatment_value = treatment_value
self._significance_test = test_significance
self._effect_strength_eval = evaluate_effect_strength
self._target_units = target_units
self._effect_modifier_names = effect_modifiers
self._confidence_intervals = confidence_intervals
self._bootstrap_estimates = None # for confidence intervals and std error
self._bootstrap_null_estimates = None # for significance test
self._effect_modifiers = None
self.method_params = kwargs
# Setting the default interpret method
self.interpret_method = CausalEstimator.DEFAULT_INTERPRET_METHOD
self.logger = logging.getLogger(__name__)
# Setting treatment and outcome values
if self._data is not None:
self._treatment = self._data[self._treatment_name]
self._outcome = self._data[self._outcome_name]
if self._effect_modifier_names:
# only add the observed nodes
self._effect_modifier_names = [
cname for cname in self._effect_modifier_names if cname in self._data.columns
]
if len(self._effect_modifier_names) > 0:
self._effect_modifiers = self._data[self._effect_modifier_names]
self._effect_modifiers = pd.get_dummies(self._effect_modifiers, drop_first=True)
self.logger.debug("Effect modifiers: " + ",".join(self._effect_modifier_names))
else:
self._effect_modifier_names = None
# Check if some parameters were set, otherwise set to default values
self.num_null_simulations = num_null_simulations
self.num_simulations = num_simulations
self.sample_size_fraction = sample_size_fraction
self.confidence_level = confidence_level
self.num_quantiles_to_discretize_cont_cols = num_quantiles_to_discretize_cont_cols
# Estimate conditional estimates by default
self.need_conditional_estimates = (
need_conditional_estimates if need_conditional_estimates != "auto" else bool(self._effect_modifier_names)
)
@staticmethod
def get_estimator_object(new_data, identified_estimand, estimate):
"""Create a new estimator of the same type as the one passed in the estimate argument.
Creates a new object with new_data and the identified_estimand
:param new_data: np.ndarray, pd.Series, pd.DataFrame
The newly assigned data on which the estimator should run
:param identified_estimand: IdentifiedEstimand
An instance of the identified estimand class that provides the information with
respect to which causal pathways are employed when the treatment affects the outcome
:param estimate: CausalEstimate
It is an already existing estimate whose properties we wish to replicate
:returns: An instance of the same estimator class that had generated the given estimate.
"""
estimator_class = estimate.params["estimator_class"]
new_estimator = estimator_class(
new_data,
identified_estimand,
identified_estimand.treatment_variable,
identified_estimand.outcome_variable,
# names of treatment and outcome
control_value=estimate.control_value,
treatment_value=estimate.treatment_value,
test_significance=False,
evaluate_effect_strength=False,
confidence_intervals=estimate.params["confidence_intervals"],
target_units=estimate.params["target_units"],
effect_modifiers=estimate.params["effect_modifiers"],
**estimate.params["method_params"] if estimate.params["method_params"] is not None else {},
)
return new_estimator
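# Example (hedged sketch): a refuter can use get_estimator_object to replay an
# estimation on resampled data. Here `estimate` is assumed to be a CausalEstimate
# returned by a previous estimate_effect() call, and `new_df` a bootstrapped copy
# of the original data frame:
#
#   new_estimator = CausalEstimator.get_estimator_object(
#       new_df, estimate.target_estimand, estimate
#   )
#   refuted_estimate = new_estimator.estimate_effect()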
def _estimate_effect(self):
"""This method is to be overriden by the child classes, so that they can run the estimation technique of their choice"""
raise NotImplementedError(
("Main estimation method is " + CausalEstimator.DEFAULT_NOTIMPLEMENTEDERROR_MSG).format(self.__class__)
)
def estimate_effect(self):
"""Base estimation method that calls the estimate_effect method of its calling subclass.
Can optionally also test significance and estimate effect strength for any returned estimate.
:param self: object instance of class Estimator
:returns: A CausalEstimate instance that contains point estimates of average and conditional effects. Based on the parameters provided, it optionally includes confidence intervals, standard errors, statistical significance and other statistical parameters.
"""
est = self._estimate_effect()
est.add_estimator(self)
if self._significance_test:
self.test_significance(est.value, method=self._significance_test)
if self._confidence_intervals:
self.estimate_confidence_intervals(
est.value, confidence_level=self.confidence_level, method=self._confidence_intervals
)
if self._effect_strength_eval:
effect_strength_dict = self.evaluate_effect_strength(est)
est.add_effect_strength(effect_strength_dict)
return est
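# Example (hedged sketch, names and constructor signature assumed): a concrete
# subclass such as dowhy's PropensityScoreMatchingEstimator follows the same
# pattern; the constructor wires up the data and variable names, and
# estimate_effect() runs the full pipeline, including the optional significance
# test and confidence intervals:
#
#   estimator = PropensityScoreMatchingEstimator(
#       df, identified_estimand, ["treatment"], ["outcome"],
#       test_significance="bootstrap", confidence_intervals="bootstrap",
#   )
#   estimate = estimator.estimate_effect()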
def estimate_effect_naive(self):
# TODO Only works for binary treatment
df_withtreatment = self._data.loc[self._data[self._treatment_name] == 1]
df_notreatment = self._data.loc[self._data[self._treatment_name] == 0]
est = np.mean(df_withtreatment[self._outcome_name]) - np.mean(df_notreatment[self._outcome_name])
return CausalEstimate(est, None, None, control_value=0, treatment_value=1)
def _estimate_effect_fn(self, data_df):
"""Function used in conditional effect estimation. This function is to be overridden by each child estimator.
The overridden function should take in a dataframe as input and return the estimate for that data.
"""
raise NotImplementedError(
("Conditional treatment effects are " + CausalEstimator.DEFAULT_NOTIMPLEMENTEDERROR_MSG).format(
self.__class__
)
)
def _estimate_conditional_effects(self, estimate_effect_fn, effect_modifier_names=None, num_quantiles=None):
"""Estimate conditional treatment effects. Common method for all estimators that utilizes a specific estimate_effect_fn implemented by each child estimator.
If a numeric effect modifier is provided, it is discretized into quantile bins. If you would like a custom discretization, you can do so yourself: create a new column containing the discretized effect modifier and then include that column's name in the effect_modifier_names argument.
:param estimate_effect_fn: Function that has a single parameter (a data frame) and returns the treatment effect estimate on that data.
:param effect_modifier_names: Names of effect modifier variables over which the conditional effects will be estimated. If not provided, defaults to the effect modifiers specified during creation of the CausalEstimator object.
:param num_quantiles: The number of quantiles into which a numeric effect modifier variable is discretized. Does not affect any categorical effect modifiers.
:returns: A (multi-index) dataframe that provides separate effects for each value of the (discretized) effect modifiers.
"""
# Defaulting to class default values if parameters are not provided
if effect_modifier_names is None:
effect_modifier_names = self._effect_modifier_names
if num_quantiles is None:
num_quantiles = self.num_quantiles_to_discretize_cont_cols
# Checking that there is at least one effect modifier
if not effect_modifier_names:
raise ValueError("At least one effect modifier should be specified to compute conditional effects.")
# Making sure that effect_modifier_names is a list
effect_modifier_names = parse_state(effect_modifier_names)
if not all(em in self._effect_modifier_names for em in effect_modifier_names):
self.logger.warning(
"At least one of the provided effect modifiers was not included while fitting the estimator. You may get incorrect results. To resolve, fit the estimator again by providing the updated effect modifiers in estimate_effect()."
)
# Making a copy since we are going to be changing effect modifier names
effect_modifier_names = effect_modifier_names.copy()
prefix = CausalEstimator.TEMP_CAT_COLUMN_PREFIX
# For every numeric effect modifier, adding a temp categorical column
for i in range(len(effect_modifier_names)):
em = effect_modifier_names[i]
if pd.api.types.is_numeric_dtype(self._data[em].dtypes):
self._data[prefix + str(em)] = pd.qcut(self._data[em], num_quantiles, duplicates="drop")
effect_modifier_names[i] = prefix + str(em)
# Grouping by effect modifiers and computing effect separately
by_effect_mods = self._data.groupby(effect_modifier_names)
cond_est_fn = lambda x: self._do(self._treatment_value, x) - self._do(self._control_value, x)
conditional_estimates = by_effect_mods.apply(estimate_effect_fn)
# Deleting the temporary categorical columns
for em in effect_modifier_names:
if em.startswith(prefix):
self._data.pop(em)
return conditional_estimates
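# Example (hedged sketch): custom discretization of a numeric effect modifier,
# as suggested in the docstring above. Assuming `df` is the data frame the
# estimator was constructed with, create the binned column yourself and pass
# its name instead of relying on the automatic quantile binning:
#
#   df["age_bin"] = pd.qcut(df["age"], q=3, labels=["young", "mid", "old"])
#   cond_effects = estimator._estimate_conditional_effects(
#       estimator._estimate_effect_fn, effect_modifier_names=["age_bin"]
#   )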
def _do(self, x, data_df=None):
raise NotImplementedError(
("Do-operator is " + CausalEstimator.DEFAULT_NOTIMPLEMENTEDERROR_MSG).format(self.__class__)
)
def do(self, x, data_df=None):
"""Method that implements the do-operator.
Given a value x for the treatment, returns the expected value of the outcome when the treatment is intervened to a value x.
:param x: Value of the treatment
:param data_df: Data on which the do-operator is to be applied.
:returns: Value of the outcome when treatment is intervened/set to x.
"""
est = self._do(x, data_df)
return est
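# Example (hedged sketch): for estimators that implement _do, contrasting the
# expected outcome under two interventions recovers the average effect:
#
#   effect = estimator.do(1) - estimator.do(0)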
def construct_symbolic_estimator(self, estimand):
raise NotImplementedError(("Symbolic estimator string is ").format(self.__class__))
def _generate_bootstrap_estimates(self, num_bootstrap_simulations, sample_size_fraction):
"""Helper function to generate causal estimates over bootstrapped samples.
:param num_bootstrap_simulations: Number of simulations for the bootstrap method.
:param sample_size_fraction: Fraction of the dataset to be resampled.
:returns: A collections.namedtuple containing a list of bootstrapped estimates and a dictionary containing parameters used for the bootstrap.
"""
# The array that stores the results of all estimations
simulation_results = np.zeros(num_bootstrap_simulations)
# Find the sample size as a proportion of the population size
sample_size = int(sample_size_fraction * len(self._data))
if sample_size > len(self._data):
self.logger.warning("WARN: The sample size is greater than the data being sampled")
self.logger.info("INFO: The sample size: {}".format(sample_size))
self.logger.info("INFO: The number of simulations: {}".format(num_bootstrap_simulations))
# Perform the set number of simulations
for index in range(num_bootstrap_simulations):
new_data = resample(self._data, n_samples=sample_size)
new_estimator = type(self)(
new_data,
self._target_estimand,
self._target_estimand.treatment_variable,
self._target_estimand.outcome_variable,
# names of treatment and outcome
treatment_value=self._treatment_value,
control_value=self._control_value,
test_significance=False,
evaluate_effect_strength=False,
confidence_intervals=False,
target_units=self._target_units,
effect_modifiers=self._effect_modifier_names,
**self.method_params,
)
new_effect = new_estimator.estimate_effect()
simulation_results[index] = new_effect.value
estimates = CausalEstimator.BootstrapEstimates(
simulation_results,
{"num_simulations": num_bootstrap_simulations, "sample_size_fraction": sample_size_fraction},
)
return estimates
def _estimate_confidence_intervals_with_bootstrap(
self, estimate_value, confidence_level=None, num_simulations=None, sample_size_fraction=None
):
"""
Method to compute confidence interval using bootstrapped sampling.
:param estimate_value: obtained estimate's value
:param confidence_level: The level for which to compute CI (e.g., 95% confidence level translates to confidence_level=0.95)
:param num_simulations: The number of simulations to be performed to get the bootstrap confidence intervals.
:param sample_size_fraction: The fraction of the dataset to be resampled.
:returns: confidence interval at the specified level.
For more details on bootstrap or resampling statistics, refer to the following links:
https://ocw.mit.edu/courses/mathematics/18-05-introduction-to-probability-and-statistics-spring-2014/readings/MIT18_05S14_Reading24.pdf
https://projecteuclid.org/download/pdf_1/euclid.ss/1032280214
"""
# Using class default parameters if not specified
if num_simulations is None:
num_simulations = self.num_simulations
if sample_size_fraction is None:
sample_size_fraction = self.sample_size_fraction
# Checking if bootstrap_estimates are already computed
if self._bootstrap_estimates is None:
self._bootstrap_estimates = self._generate_bootstrap_estimates(num_simulations, sample_size_fraction)
elif CausalEstimator.is_bootstrap_parameter_changed(self._bootstrap_estimates.params, locals()):
# Checked if any parameter is changed from the previous std error estimate
self._bootstrap_estimates = self._generate_bootstrap_estimates(num_simulations, sample_size_fraction)
# Now use the data obtained from the simulations to get the value of the confidence estimates
bootstrap_estimates = self._bootstrap_estimates.estimates
# Get the variations of each bootstrap estimate and sort
bootstrap_variations = [bootstrap_estimate - estimate_value for bootstrap_estimate in bootstrap_estimates]
sorted_bootstrap_variations = np.sort(bootstrap_variations)
# Now we take the (1- p)th and the (p)th variations, where p is the chosen confidence level
upper_bound_index = int((1 - confidence_level) * len(sorted_bootstrap_variations))
lower_bound_index = int(confidence_level * len(sorted_bootstrap_variations))
# Get the lower and upper bounds by subtracting the variations from the estimate
lower_bound = estimate_value - sorted_bootstrap_variations[lower_bound_index]
upper_bound = estimate_value - sorted_bootstrap_variations[upper_bound_index]
return lower_bound, upper_bound
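# Worked illustration (hedged, toy numbers): with estimate_value = 10, bootstrap
# estimates [8, 9, 10, 12] and confidence_level = 0.95, the sorted variations
# are [-2, -1, 0, 2]. Then upper_bound_index = int(0.05 * 4) = 0 and
# lower_bound_index = int(0.95 * 4) = 3, so the returned interval is
# (10 - 2, 10 - (-2)) = (8, 12).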
def _estimate_confidence_intervals(self, confidence_level=None, method=None, **kwargs):
"""
This method is to be overriden by the child classes, so that they
can run a confidence interval estimation method suited to the specific
causal estimator.
"""
raise NotImplementedError(
(
"This method for estimating confidence intervals is "
+ CausalEstimator.DEFAULT_NOTIMPLEMENTEDERROR_MSG
+ " Meanwhile, you can try the bootstrap method (method='bootstrap') to estimate confidence intervals."
).format(self.__class__)
)
def estimate_confidence_intervals(self, estimate_value, confidence_level=None, method=None, **kwargs):
"""Find the confidence intervals corresponding to any estimator
By default, this is done with the help of bootstrapped confidence intervals
but can be overridden if the specific estimator implements other methods of estimating confidence intervals.
If the method provided is not bootstrap, this function calls the implementation of the specific estimator.
:param estimate_value: obtained estimate's value
:param method: Method for estimating confidence intervals.
:param confidence_level: The confidence level of the confidence intervals of the estimate.
:param kwargs: Other optional args to be passed to the CI method.
:returns: The obtained confidence interval.
"""
if method is None:
if self._confidence_intervals:
method = self._confidence_intervals # this is either True or methodname
else:
method = "default"
confidence_intervals = None
if confidence_level is None:
confidence_level = self.confidence_level
if method == "default" or method is True: # user has not provided any method
try:
confidence_intervals = self._estimate_confidence_intervals(confidence_level, method=method, **kwargs)
except NotImplementedError:
confidence_intervals = self._estimate_confidence_intervals_with_bootstrap(
estimate_value, confidence_level, **kwargs
)
else:
if method == "bootstrap":
confidence_intervals = self._estimate_confidence_intervals_with_bootstrap(
estimate_value, confidence_level, **kwargs
)
else:
confidence_intervals = self._estimate_confidence_intervals(confidence_level, method=method, **kwargs)
return confidence_intervals
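# Example (hedged sketch): forcing the bootstrap method with custom parameters,
# regardless of whether the estimator implements its own CI method:
#
#   lower, upper = estimator.estimate_confidence_intervals(
#       estimate.value, confidence_level=0.90,
#       method="bootstrap", num_simulations=200,
#   )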
def _estimate_std_error_with_bootstrap(self, num_simulations=None, sample_size_fraction=None):
"""Compute standard error using the bootstrap method. Standard error
and confidence intervals use the same parameter num_simulations for
the number of bootstrap simulations.
:param num_simulations: Number of bootstrapped samples.
:param sample_size_fraction: Fraction of data to be resampled.
:returns: Standard error of the obtained estimate.
"""
# Use existing params, if new user defined params are not present
if num_simulations is None:
num_simulations = self.num_simulations
if sample_size_fraction is None:
sample_size_fraction = self.sample_size_fraction
# Checking if bootstrap_estimates are already computed
if self._bootstrap_estimates is None:
self._bootstrap_estimates = self._generate_bootstrap_estimates(num_simulations, sample_size_fraction)
elif CausalEstimator.is_bootstrap_parameter_changed(self._bootstrap_estimates.params, locals()):
# Check if any parameter is changed from the previous std error estimate
self._bootstrap_estimates = self._generate_bootstrap_estimates(num_simulations, sample_size_fraction)
std_error = np.std(self._bootstrap_estimates.estimates)
return std_error
def _estimate_std_error(self, method=None, **kwargs):
"""
This method is to be overriden by the child classes, so that they
can run a standard error estimation method suited to the specific
causal estimator.
"""
raise NotImplementedError(
(
"This method for estimating standard errors is "
+ CausalEstimator.DEFAULT_NOTIMPLEMENTEDERROR_MSG
+ " Meanwhile, you can try the bootstrap method (method='bootstrap') to estimate standard errors."
).format(self.__class__)
)
def estimate_std_error(self, method=None, **kwargs):
"""Compute standard error of an obtained causal estimate.
:param method: Method for computing the standard error.
:param kwargs: Other optional parameters to be passed to the estimating method.
:returns: Standard error of the causal estimate.
"""
if method is None:
if self._confidence_intervals:
method = self._confidence_intervals
else:
method = "default"
std_error = None
if method == "default" or method is True: # user has not provided any method
try:
std_error = self._estimate_std_error(method, **kwargs)
except NotImplementedError:
std_error = self._estimate_std_error_with_bootstrap(**kwargs)
else:
if method == "bootstrap":
std_error = self._estimate_std_error_with_bootstrap(**kwargs)
else:
std_error = self._estimate_std_error(method, **kwargs)
return std_error
def _test_significance_with_bootstrap(self, estimate_value, num_null_simulations=None):
"""Test statistical significance of an estimate using the bootstrap method.
:param estimate_value: Obtained estimate's value
:param num_null_simulations: Number of simulations for the null hypothesis
:returns: p-value of the statistical significance test.
"""
# Use existing params, if new user defined params are not present
if num_null_simulations is None:
num_null_simulations = self.num_null_simulations
do_retest = self._bootstrap_null_estimates is None or CausalEstimator.is_bootstrap_parameter_changed(
self._bootstrap_null_estimates.params, locals()
)
if do_retest:
null_estimates = np.zeros(num_null_simulations)
for i in range(num_null_simulations):
new_outcome = np.random.permutation(self._outcome)
new_data = self._data.assign(dummy_outcome=new_outcome)
# self._outcome = self._data["dummy_outcome"]
new_estimator = type(self)(
new_data,
self._target_estimand,
self._target_estimand.treatment_variable,
("dummy_outcome",),
test_significance=False,
evaluate_effect_strength=False,
confidence_intervals=False,
target_units=self._target_units,
effect_modifiers=self._effect_modifier_names,
**self.method_params,
)
new_effect = new_estimator.estimate_effect()
null_estimates[i] = new_effect.value
self._bootstrap_null_estimates = CausalEstimator.BootstrapEstimates(
null_estimates, {"num_null_simulations": num_null_simulations, "sample_size_fraction": 1}
)
# Processing the null hypothesis estimates
sorted_null_estimates = np.sort(self._bootstrap_null_estimates.estimates)
self.logger.debug("Null estimates: {0}".format(sorted_null_estimates))
median_estimate = sorted_null_estimates[int(num_null_simulations / 2)]
# Doing a two-sided test
if estimate_value > median_estimate:
# Being conservative with the p-value reported
estimate_index = np.searchsorted(sorted_null_estimates, estimate_value, side="left")
p_value = 1 - (estimate_index / num_null_simulations)
else:
# Being conservative with the p-value reported
estimate_index = np.searchsorted(sorted_null_estimates, estimate_value, side="right")
p_value = estimate_index / num_null_simulations
# If the estimate_index is 0, it depends on the number of simulations
if p_value == 0:
p_value = (0, 1 / len(sorted_null_estimates)) # a tuple determining the range.
elif p_value == 1:
p_value = (1 - 1 / len(sorted_null_estimates), 1)
signif_dict = {"p_value": p_value}
return signif_dict
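# Worked illustration (hedged, toy numbers): with 10 permutation-null estimates
# sorted as [-3, -2, -1, 0, 0, 1, 1, 2, 3, 4], the median null estimate is
# sorted[5] = 1. For estimate_value = 3 > 1, searchsorted(..., side="left")
# returns 8, so p_value = 1 - 8/10 = 0.2. For estimate_value = 5 the index is
# 10, p_value = 0, and the method reports the range (0, 1/10) instead.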
def _test_significance(self, estimate_value, method=None, **kwargs):
"""
This method is to be overriden by the child classes, so that they
can run a significance test suited to the specific
causal estimator.
"""
raise NotImplementedError(
(
"This method for testing statistical significance is "
+ CausalEstimator.DEFAULT_NOTIMPLEMENTEDERROR_MSG
+ " Meanwhile, you can try the bootstrap method (method='bootstrap') to test statistical significance."
).format(self.__class__)
)
def test_significance(self, estimate_value, method=None, **kwargs):
"""Test statistical significance of obtained estimate.
By default, uses resampling to create a non-parametric significance test.
A general procedure. Individual child estimators can implement different methods.
If the method name is different from "bootstrap", this function calls the
implementation of the child estimator.
:param self: object instance of class Estimator
:param estimate_value: obtained estimate's value
:param method: Method for checking statistical significance
:returns: p-value from the significance test
"""
if method is None:
if self._significance_test:
method = self._significance_test # this is either True or methodname
else:
method = "default"
signif_dict = None
if method == "default" or method is True: # user has not provided any method
try:
signif_dict = self._test_significance(estimate_value, method, **kwargs)
except NotImplementedError:
signif_dict = self._test_significance_with_bootstrap(estimate_value, **kwargs)
else:
if method == "bootstrap":
signif_dict = self._test_significance_with_bootstrap(estimate_value, **kwargs)
else:
signif_dict = self._test_significance(estimate_value, method, **kwargs)
return signif_dict
def evaluate_effect_strength(self, estimate):
fraction_effect_explained = self._evaluate_effect_strength(estimate, method="fraction-effect")
# Need to test r-squared before supporting
# effect_r_squared = self._evaluate_effect_strength(estimate, method="r-squared")
strength_dict = {
"fraction-effect": fraction_effect_explained
# 'r-squared': effect_r_squared
}
return strength_dict
def _evaluate_effect_strength(self, estimate, method="fraction-effect"):
supported_methods = ["fraction-effect"]
if method not in supported_methods:
raise NotImplementedError("This method is not supported for evaluating effect strength")
if method == "fraction-effect":
naive_obs_estimate = self.estimate_effect_naive()
self.logger.debug("{0} {1}".format(estimate.value, naive_obs_estimate.value))
fraction_effect_explained = estimate.value / naive_obs_estimate.value
return fraction_effect_explained
# elif method == "r-squared":
# outcome_mean = np.mean(self._outcome)
# total_variance = np.sum(np.square(self._outcome - outcome_mean))
# Assuming a linear model with one variable: the treatment
# Currently only works for continuous y
# causal_model = outcome_mean + estimate.value*self._treatment
# squared_residual = np.sum(np.square(self._outcome - causal_model))
# r_squared = 1 - (squared_residual/total_variance)
# return r_squared
else:
return None
def update_input(self, treatment_value, control_value, target_units):
self._control_value = control_value
self._treatment_value = treatment_value
self._target_units = target_units
@staticmethod
def is_bootstrap_parameter_changed(bootstrap_estimates_params, given_params):
"""Check whether parameters of the bootstrap have changed.
This is an efficiency method that checks if fresh resampling of the bootstrap samples is required.
Returns True if parameters have changed and resampling should be done again.
:param bootstrap_estimates_params: A dictionary of parameters for the current bootstrap samples
:param given_params: A dictionary of parameters passed by the user
:returns: A binary flag denoting whether the parameters are different.
"""
is_any_parameter_changed = False
for prm, val in bootstrap_estimates_params.items():
given_val = given_params.get(prm, None)
if given_val is not None and given_val != val:
is_any_parameter_changed = True
break
return is_any_parameter_changed
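# Example (hedged sketch): cached bootstrap estimates are reused unless a
# user-supplied parameter differs from the cached run:
#
#   cached = {"num_simulations": 100, "sample_size_fraction": 1}
#   CausalEstimator.is_bootstrap_parameter_changed(cached, {"num_simulations": 100})  # False
#   CausalEstimator.is_bootstrap_parameter_changed(cached, {"num_simulations": 500})  # True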
def target_units_tostr(self):
s = ""
if type(self._target_units) is str:
s += self._target_units
elif callable(self._target_units):
s += "Data subset defined by a function"
elif isinstance(self._target_units, pd.DataFrame):
s += "Data subset provided as a data frame"
return s
def signif_results_tostr(self, signif_results):
s = ""
pval = signif_results["p_value"]
if type(pval) is tuple:
s += "[{0}, {1}]".format(pval[0], pval[1])
else:
s += "{0}".format(pval)
return s
def estimate_effect(
treatment: Union[str, List[str]],
outcome: Union[str, List[str]],
identified_estimand: IdentifiedEstimand,
identifier_name: str,
method: CausalEstimator,
control_value: int = 0,
treatment_value: int = 1,
test_significance: Optional[bool] = None,
evaluate_effect_strength: bool = False,
confidence_intervals: bool = False,
target_units: str = "ate",
effect_modifiers: List[str] = [],
fit_estimator: bool = True,
method_params: Optional[Dict] = None,
):
"""Estimate the identified causal effect.
Requires an instantiated estimation method to be passed via the method argument. Estimation methods follow the convention of identification method followed by the specific estimation method: "[backdoor/iv].estimation_method_name". The following methods are supported.
* Propensity Score Matching: "backdoor.propensity_score_matching"
* Propensity Score Stratification: "backdoor.propensity_score_stratification"
* Propensity Score-based Inverse Weighting: "backdoor.propensity_score_weighting"
* Linear Regression: "backdoor.linear_regression"
* Generalized Linear Models (e.g., logistic regression): "backdoor.generalized_linear_model"
* Instrumental Variables: "iv.instrumental_variable"
* Regression Discontinuity: "iv.regression_discontinuity"
In addition, you can directly call any of the EconML estimation methods. The convention is "backdoor.econml.path-to-estimator-class". For example, for the double machine learning estimator ("DML" class) that is located inside "dml" module of EconML, you can use the method name, "backdoor.econml.dml.DML". CausalML estimators can also be called. See `this demo notebook <https://py-why.github.io/dowhy/example_notebooks/dowhy-conditional-treatment-effects.html>`_.
:param treatment: Name of the treatment
:param outcome: Name of the outcome
:param identified_estimand: a probability expression
that represents the effect to be estimated. Output of
CausalModel.identify_effect method
:param identifier_name: the name of the identification method used (e.g., "backdoor" or "iv")
:param method: an instance of the CausalEstimator subclass implementing the chosen estimation method
:param control_value: Value of the treatment in the control group, for effect estimation. If treatment is multi-variate, this can be a list.
:param treatment_value: Value of the treatment in the treated group, for effect estimation. If treatment is multi-variate, this can be a list.
:param test_significance: Binary flag on whether to additionally do a statistical significance test for the estimate.
:param evaluate_effect_strength: (Experimental) Binary flag on whether to estimate the relative strength of the treatment's effect. This measure can be used to compare different treatments for the same outcome (by running this method with different treatments sequentially).
:param confidence_intervals: (Experimental) Binary flag indicating whether confidence intervals should be computed.
:param target_units: (Experimental) The units for which the treatment effect should be estimated. This can be of three types. (1) a string for common specifications of target units (namely, "ate", "att" and "atc"), (2) a lambda function that can be used as an index for the data (pandas DataFrame), or (3) a new DataFrame that contains values of the effect_modifiers and effect will be estimated only for this new data.
:param effect_modifiers: Names of effect modifier variables can be (optionally) specified here too, since they do not affect identification. If None, the effect_modifiers from the CausalModel are used.
:param fit_estimator: Boolean flag on whether to fit the estimator.
Setting it to False is useful to estimate the effect on new data using a previously fitted estimator.
:param method_params: Dictionary containing any method-specific parameters. These are passed directly to the estimating method. See the docs for each estimation method for allowed method-specific params.
:returns: An instance of the CausalEstimate class, containing the causal effect estimate
and other method-dependent information
"""
treatment = parse_state(treatment)
outcome = parse_state(outcome)
causal_estimator_class = method.__class__
identified_estimand.set_identifier_method(identifier_name)
if identified_estimand.no_directed_path:
logger.warning("No directed path from {0} to {1}.".format(treatment, outcome))
return CausalEstimate(
0, identified_estimand, None, control_value=control_value, treatment_value=treatment_value
)
# Check if estimator's target estimand is identified
elif identified_estimand.estimands[identifier_name] is None:
logger.error("No valid identified estimand available.")
return CausalEstimate(None, None, None, control_value=control_value, treatment_value=treatment_value)
method.update_input(treatment_value, control_value, target_units)
estimate = method.estimate_effect()
# Store parameters inside estimate object for refutation methods
# TODO: This add_params needs to move to the estimator class
# inside estimate_effect and estimate_conditional_effect
estimate.add_params(
estimand_type=identified_estimand.estimand_type,
estimator_class=causal_estimator_class,
test_significance=test_significance,
evaluate_effect_strength=evaluate_effect_strength,
confidence_intervals=confidence_intervals,
target_units=target_units,
effect_modifiers=effect_modifiers,
method_params=method_params,
)
return estimate
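# Example (hedged sketch, names assumed): the functional API pairs an already
# constructed estimator with an identified estimand; `my_estimator` is assumed
# to be a CausalEstimator subclass instance built on the data beforehand:
#
#   estimate = estimate_effect(
#       treatment="T", outcome="Y",
#       identified_estimand=estimand,       # output of an identification step
#       identifier_name="backdoor",
#       method=my_estimator,
#       target_units="ate",
#   )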
class CausalEstimate:
"""Class for the estimate object that every causal estimator returns"""
def __init__(
self,
estimate,
target_estimand,
realized_estimand_expr,
control_value,
treatment_value,
conditional_estimates=None,
**kwargs,
):
self.value = estimate
self.target_estimand = target_estimand
self.realized_estimand_expr = realized_estimand_expr
self.control_value = control_value
self.treatment_value = treatment_value
self.conditional_estimates = conditional_estimates
self.params = kwargs
if self.params is not None:
for key, value in self.params.items():
setattr(self, key, value)
self.effect_strength = None
def add_estimator(self, estimator_instance):
self.estimator = estimator_instance
def add_effect_strength(self, strength_dict):
self.effect_strength = strength_dict
def add_params(self, **kwargs):
self.params.update(kwargs)
def get_confidence_intervals(self, confidence_level=None, method=None, **kwargs):
"""Get confidence intervals of the obtained estimate.
By default, this is done with the help of bootstrapped confidence intervals
but can be overridden if the specific estimator implements other methods of estimating confidence intervals.
If the method provided is not bootstrap, this function calls the implementation of the specific estimator.
:param method: Method for estimating confidence intervals.
:param confidence_level: The confidence level of the confidence intervals of the estimate.
:param kwargs: Other optional args to be passed to the CI method.
:returns: The obtained confidence interval.
"""
confidence_intervals = self.estimator.estimate_confidence_intervals(
estimate_value=self.value, confidence_level=confidence_level, method=method, **kwargs
)
return confidence_intervals
def get_standard_error(self, method=None, **kwargs):
"""Get standard error of the obtained estimate.
By default, this is done with the help of bootstrapped standard errors
but can be overridden if the specific estimator implements other methods of estimating standard error.
If the method provided is not bootstrap, this function calls the implementation of the specific estimator.
:param method: Method for computing the standard error.
:param kwargs: Other optional parameters to be passed to the estimating method.
:returns: Standard error of the causal estimate.
"""
std_error = self.estimator.estimate_std_error(method=method, **kwargs)
return std_error
def test_stat_significance(self, method=None, **kwargs):
"""Test statistical significance of the estimate obtained.
By default, uses resampling to create a non-parametric significance test.
Individual child estimators can implement different methods.
If the method name is different from "bootstrap", this function calls the
implementation of the child estimator.
:param method: Method for checking statistical significance
:param kwargs: Other optional parameters to be passed to the estimating method.
:returns: p-value from the significance test
"""
signif_results = self.estimator.test_significance(self.value, method=method, **kwargs)
return {"p_value": signif_results["p_value"]}
def estimate_conditional_effects(
self, effect_modifiers=None, num_quantiles=CausalEstimator.NUM_QUANTILES_TO_DISCRETIZE_CONT_COLS
):
"""Estimate treatment effect conditioned on given variables.
If a numeric effect modifier is provided, it is discretized into quantile bins. If you would like a custom discretization, you can do so yourself: create a new column containing the discretized effect modifier and then include that column's name in the effect_modifier_names argument.
:param effect_modifiers: Names of effect modifier variables over which the conditional effects will be estimated. If not provided, defaults to the effect modifiers specified during creation of the CausalEstimator object.
:param num_quantiles: The number of quantiles into which a numeric effect modifier variable is discretized. Does not affect any categorical effect modifiers.
:returns: A (multi-index) dataframe that provides separate effects for each value of the (discretized) effect modifiers.
"""
return self.estimator._estimate_conditional_effects(
self.estimator._estimate_effect_fn, effect_modifiers, num_quantiles
)
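# Example (hedged sketch): conditional effects from an obtained estimate, with
# a numeric modifier such as "age" split into terciles:
#
#   cond = estimate.estimate_conditional_effects(effect_modifiers=["age"], num_quantiles=3)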
def interpret(self, method_name=None, **kwargs):
"""Interpret the causal estimate.
:param method_name: Method used (string) or a list of methods. If None, then the default for the specific estimator is used.
:param kwargs: Optional parameters that are directly passed to the interpreter method.
:returns: None
"""
if method_name is None:
method_name = self.estimator.interpret_method
method_name_arr = parse_state(method_name)
for method in method_name_arr:
interpreter = interpreters.get_class_object(method)
interpreter(self, **kwargs).interpret()
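# Example (hedged sketch): interpreters are looked up by name, e.g. the default
# textual interpreter used by most estimators:
#
#   estimate.interpret(method_name="textual_effect_interpreter")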
def __str__(self):
s = "*** Causal Estimate ***\n"
# No estimand was identified (identification failed)
if self.target_estimand is None:
return "Estimation failed! No relevant identified estimand available for this estimation method."
s += "\n## Identified estimand\n{0}".format(self.target_estimand.__str__(only_target_estimand=True))
s += "\n## Realized estimand\n{0}".format(self.realized_estimand_expr)
if hasattr(self, "estimator"):
s += "\nTarget units: {0}\n".format(self.estimator.target_units_tostr())
s += "\n## Estimate\n"
s += "Mean value: {0}\n".format(self.value)
s += ""
if hasattr(self, "cate_estimates"):
s += "Effect estimates: {0}\n".format(self.cate_estimates)
if hasattr(self, "estimator"):
if self.estimator._significance_test:
s += "p-value: {0}\n".format(self.estimator.signif_results_tostr(self.test_stat_significance()))
if self.estimator._confidence_intervals:
s += "{0}% confidence interval: {1}\n".format(
100 * self.estimator.confidence_level, self.get_confidence_intervals()
)
if self.conditional_estimates is not None:
s += "### Conditional Estimates\n"
s += str(self.conditional_estimates)
if self.effect_strength is not None:
s += "\n## Effect Strength\n"
s += "Change in outcome attributable to treatment: {}\n".format(self.effect_strength["fraction-effect"])
# s += "Variance in outcome explained by treatment: {}\n".format(self.effect_strength["r-squared"])
return s
class RealizedEstimand(object):
def __init__(self, identified_estimand, estimator_name):
self.treatment_variable = identified_estimand.treatment_variable
self.outcome_variable = identified_estimand.outcome_variable
self.backdoor_variables = identified_estimand.get_backdoor_variables()
self.instrumental_variables = identified_estimand.instrumental_variables
self.estimand_type = identified_estimand.estimand_type
self.estimand_expression = None
self.assumptions = None
self.estimator_name = estimator_name
def update_assumptions(self, estimator_assumptions):
self.assumptions = estimator_assumptions
def update_estimand_expression(self, estimand_expression):
self.estimand_expression = estimand_expression
def __str__(self):
s = "Realized estimand: {0}\n".format(self.estimator_name)
s += "Realized estimand type: {0}\n".format(self.estimand_type)
s += "Estimand expression:\n{0}\n".format(sp.pretty(self.estimand_expression))
j = 1
for ass_name, ass_str in self.assumptions.items():
s += "Estimand assumption {0}, {1}: {2}\n".format(j, ass_name, ass_str)
j += 1
return s
import copy
import logging
from collections import namedtuple
from typing import Dict, List, Optional, Union
import numpy as np
import pandas as pd
import sympy as sp
from sklearn.utils import resample
import dowhy.interpreters as interpreters
from dowhy.causal_identifier.identified_estimand import IdentifiedEstimand
from dowhy.utils.api import parse_state
logger = logging.getLogger(__name__)
class CausalEstimator:
"""Base class for an estimator of causal effect.
Subclasses implement different estimation methods. All estimation methods are in the package "dowhy.causal_estimators"
"""
# The default number of simulations for statistical testing
DEFAULT_NUMBER_OF_SIMULATIONS_STAT_TEST = 1000
# The default number of simulations to obtain confidence intervals
DEFAULT_NUMBER_OF_SIMULATIONS_CI = 100
# The portion of the total size that should be taken each time to find the confidence intervals
# 1 is the recommended value
# https://ocw.mit.edu/courses/mathematics/18-05-introduction-to-probability-and-statistics-spring-2014/readings/MIT18_05S14_Reading24.pdf
# https://projecteuclid.org/download/pdf_1/euclid.ss/1032280214
DEFAULT_SAMPLE_SIZE_FRACTION = 1
# The default Confidence Level
DEFAULT_CONFIDENCE_LEVEL = 0.95
# Number of quantiles to discretize continuous columns, for applying groupby
NUM_QUANTILES_TO_DISCRETIZE_CONT_COLS = 5
# Prefix to add to temporary categorical variables created after discretization
TEMP_CAT_COLUMN_PREFIX = "__categorical__"
DEFAULT_NOTIMPLEMENTEDERROR_MSG = "not yet implemented for {0}. If you would like this to be implemented in the next version, please raise an issue at https://github.com/microsoft/dowhy/issues"
BootstrapEstimates = namedtuple("BootstrapEstimates", ["estimates", "params"])
DEFAULT_INTERPRET_METHOD = ["textual_effect_interpreter"]
# std args to be removed from locals() before being passed to args_dict
_STD_INIT_ARGS = ("self", "__class__", "args", "kwargs")
def __init__(
self,
identified_estimand: IdentifiedEstimand,
test_significance: bool = False,
evaluate_effect_strength: bool = False,
confidence_intervals: bool = False,
num_null_simulations: int = DEFAULT_NUMBER_OF_SIMULATIONS_STAT_TEST,
num_simulations: int = DEFAULT_NUMBER_OF_SIMULATIONS_CI,
sample_size_fraction: int = DEFAULT_SAMPLE_SIZE_FRACTION,
confidence_level: float = DEFAULT_CONFIDENCE_LEVEL,
need_conditional_estimates: Union[bool, str] = "auto",
num_quantiles_to_discretize_cont_cols: int = NUM_QUANTILES_TO_DISCRETIZE_CONT_COLS,
**_,
):
"""Initializes an estimator with data and names of relevant variables.
This method is called from the constructors of its child classes.
:param identified_estimand: probability expression
representing the target identified estimand to estimate.
:param test_significance: Binary flag or a string indicating whether to test significance and by which method. All estimators support test_significance="bootstrap" that estimates a p-value for the obtained estimate using the bootstrap method. Individual estimators can override this to support custom testing methods. The bootstrap method supports an optional parameter, num_null_simulations. If False, no testing is done. If True, significance of the estimate is tested using the custom method if available, otherwise by bootstrap.
:param evaluate_effect_strength: (Experimental) whether to evaluate the strength of effect
:param confidence_intervals: Binary flag or a string indicating whether the confidence intervals should be computed and which method should be used. All methods support estimation of confidence intervals using the bootstrap method by using the parameter confidence_intervals="bootstrap". The bootstrap method takes in two arguments (num_simulations and sample_size_fraction) that can be optionally specified in the params dictionary. Estimators may also override this to implement their own confidence interval method. If this parameter is False, no confidence intervals are computed. If True, confidence intervals are computed by the estimator's specific method if available, otherwise through bootstrap.
:param num_null_simulations: The number of simulations for testing the
statistical significance of the estimator
:param num_simulations: The number of simulations for finding the
confidence interval (and/or standard error) for an estimate
:param sample_size_fraction: The size of the sample for the bootstrap
estimator
:param confidence_level: The confidence level of the confidence
interval estimate
:param need_conditional_estimates: Boolean flag indicating whether
conditional estimates should be computed. Defaults to True if
there are effect modifiers in the graph
:param num_quantiles_to_discretize_cont_cols: The number of quantiles
into which a numeric effect modifier is split, to enable
estimation of conditional treatment effect over it.
:param kwargs: (optional) Additional estimator-specific parameters
:returns: an instance of the estimator class.
"""
self._target_estimand = identified_estimand
self._significance_test = test_significance
self._effect_strength_eval = evaluate_effect_strength
self._confidence_intervals = confidence_intervals
# Setting the default interpret method
self.interpret_method = CausalEstimator.DEFAULT_INTERPRET_METHOD
self.logger = logging.getLogger(__name__)
# Check if some parameters were set, otherwise set to default values
self.num_null_simulations = num_null_simulations
self.num_simulations = num_simulations
self.sample_size_fraction = sample_size_fraction
self.confidence_level = confidence_level
self.num_quantiles_to_discretize_cont_cols = num_quantiles_to_discretize_cont_cols
# Estimate conditional estimates by default
self.need_conditional_estimates = need_conditional_estimates
self._bootstrap_estimates = None
self._bootstrap_null_estimates = None
def _set_data(self, data: pd.DataFrame, treatment_name: List[str], outcome_name: List[str]):
"""Sets the data for the estimator
:param data: data frame containing the data
:param treatment_name: name of the treatment variable
:param outcome_name: name of the outcome variable
"""
self._data = data
self._treatment_name = treatment_name
self._outcome_name = outcome_name[0]
self._treatment = self._data[self._treatment_name]
self._outcome = self._data[self._outcome_name]
def _set_effect_modifiers(self, effect_modifier_names: Optional[List[str]] = None):
"""Sets the effect modifiers for the estimator
Modifies need_conditional_estimates according to the value of the effect modifiers
:param effect_modifier_names: Variables on which to compute separate
effects, or return a heterogeneous effect function. Not all
methods support this currently.
"""
self._effect_modifiers = effect_modifier_names
if effect_modifier_names is not None:
self._effect_modifier_names = [cname for cname in effect_modifier_names if cname in self._data.columns]
if len(self._effect_modifier_names) > 0:
self._effect_modifiers = self._data[self._effect_modifier_names]
self._effect_modifiers = pd.get_dummies(self._effect_modifiers, drop_first=True)
self.logger.debug("Effect modifiers: " + ",".join(self._effect_modifier_names))
else:
self._effect_modifier_names = []
else:
self._effect_modifier_names = []
self.need_conditional_estimates = (
self.need_conditional_estimates
if self.need_conditional_estimates != "auto"
else len(self._effect_modifier_names) > 0
)
def _set_identified_estimand(self, new_identified_estimand):
"""Method used internally to change the target estimand (required by some refuters)
:param new_identified_estimand: The new target_estimand to use
"""
self._target_estimand = new_identified_estimand
def get_new_estimator_object(
self,
identified_estimand,
test_significance=False,
evaluate_effect_strength=False,
confidence_intervals=None,
):
"""Create a new estimator of the same type as the one passed in the estimate argument.
Creates a new object with the identified_estimand
:param identified_estimand: IdentifiedEstimand
An instance of the identified estimand class that provides the information with
respect to which causal pathways are employed when the treatment effects the outcome
:returns: A new instance of the same estimator class that had generated the given estimate.
"""
new_estimator = copy.deepcopy(self)
new_estimator._target_estimand = identified_estimand
new_estimator._test_significance = test_significance
new_estimator._evaluate_effect_strength = evaluate_effect_strength
new_estimator._confidence_intervals = (
self._confidence_intervals if confidence_intervals is None else confidence_intervals
)
return new_estimator
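# Example (hedged sketch): the refactored API separates construction, fitting
# and estimation. A cloned estimator is re-fitted on new data before use,
# mirroring what _generate_bootstrap_estimates does below:
#
#   new_estimator = estimator.get_new_estimator_object(identified_estimand)
#   new_estimator.fit(new_df, identified_estimand.treatment_variable,
#                     identified_estimand.outcome_variable)
#   estimate = new_estimator.estimate_effect(control_value=0, treatment_value=1)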
def estimate_effect_naive(self):
# TODO Only works for binary treatment
df_withtreatment = self._data.loc[self._data[self._treatment_name] == 1]
df_notreatment = self._data.loc[self._data[self._treatment_name] == 0]
est = np.mean(df_withtreatment[self._outcome_name]) - np.mean(df_notreatment[self._outcome_name])
return CausalEstimate(est, None, None, control_value=0, treatment_value=1)
def _estimate_effect_fn(self, data_df):
"""Function used in conditional effect estimation. This function is to be overridden by each child estimator.
The overridden function should take in a dataframe as input and return the estimate for that data.
"""
raise NotImplementedError(
("Conditional treatment effects are " + CausalEstimator.DEFAULT_NOTIMPLEMENTEDERROR_MSG).format(
self.__class__
)
)
def _estimate_conditional_effects(self, estimate_effect_fn, effect_modifier_names=None, num_quantiles=None):
"""Estimate conditional treatment effects. Common method for all estimators that utilizes a specific estimate_effect_fn implemented by each child estimator.
If a numeric effect modifier is provided, it is discretized into quantile bins. If you would like a custom discretization, you can do so yourself: create a new column containing the discretized effect modifier and then include that column's name in the effect_modifier_names argument.
:param estimate_effect_fn: Function that has a single parameter (a data frame) and returns the treatment effect estimate on that data.
:param effect_modifier_names: Names of effect modifier variables over which the conditional effects will be estimated. If not provided, defaults to the effect modifiers specified during creation of the CausalEstimator object.
:param num_quantiles: The number of quantiles into which a numeric effect modifier variable is discretized. Does not affect any categorical effect modifiers.
:returns: A (multi-index) dataframe that provides separate effects for each value of the (discretized) effect modifiers.
"""
# Defaulting to class default values if parameters are not provided
if effect_modifier_names is None:
effect_modifier_names = self._effect_modifier_names
if num_quantiles is None:
num_quantiles = self.num_quantiles_to_discretize_cont_cols
# Checking that there is at least one effect modifier
if not effect_modifier_names:
raise ValueError("At least one effect modifier should be specified to compute conditional effects.")
# Making sure that effect_modifier_names is a list
effect_modifier_names = parse_state(effect_modifier_names)
if not all(em in self._effect_modifier_names for em in effect_modifier_names):
self.logger.warning(
"At least one of the provided effect modifiers was not included while fitting the estimator. You may get incorrect results. To resolve, fit the estimator again by providing the updated effect modifiers in estimate_effect()."
)
# Making a copy since we are going to be changing effect modifier names
effect_modifier_names = effect_modifier_names.copy()
prefix = CausalEstimator.TEMP_CAT_COLUMN_PREFIX
# For every numeric effect modifier, adding a temp categorical column
for i in range(len(effect_modifier_names)):
em = effect_modifier_names[i]
if pd.api.types.is_numeric_dtype(self._data[em].dtypes):
self._data[prefix + str(em)] = pd.qcut(self._data[em], num_quantiles, duplicates="drop")
effect_modifier_names[i] = prefix + str(em)
# Grouping by effect modifiers and computing effect separately
by_effect_mods = self._data.groupby(effect_modifier_names)
cond_est_fn = lambda x: self._do(self._treatment_value, x) - self._do(self._control_value, x)
conditional_estimates = by_effect_mods.apply(estimate_effect_fn)
# Deleting the temporary categorical columns
for em in effect_modifier_names:
if em.startswith(prefix):
self._data.pop(em)
return conditional_estimates
def _do(self, x, data_df=None):
raise NotImplementedError(
("Do-operator is " + CausalEstimator.DEFAULT_NOTIMPLEMENTEDERROR_MSG).format(self.__class__)
)
def do(self, x, data_df=None):
"""Method that implements the do-operator.
Given a value x for the treatment, returns the expected value of the outcome when the treatment is intervened to a value x.
:param x: Value of the treatment
:param data_df: Data on which the do-operator is to be applied.
:returns: Value of the outcome when treatment is intervened/set to x.
"""
est = self._do(x, data_df)
return est
def construct_symbolic_estimator(self, estimand):
raise NotImplementedError(("Symbolic estimator string is ").format(self.__class__))
def _generate_bootstrap_estimates(self, num_bootstrap_simulations, sample_size_fraction):
"""Helper function to generate causal estimates over bootstrapped samples.
:param num_bootstrap_simulations: Number of simulations for the bootstrap method.
:param sample_size_fraction: Fraction of the dataset to be resampled.
:returns: A collections.namedtuple containing a list of bootstrapped estimates and a dictionary containing parameters used for the bootstrap.
"""
# The array that stores the results of all estimations
simulation_results = np.zeros(num_bootstrap_simulations)
# Find the sample size as a proportion of the population size
sample_size = int(sample_size_fraction * len(self._data))
if sample_size > len(self._data):
self.logger.warning("WARN: The sample size is greater than the data being sampled")
self.logger.info("INFO: The sample size: {}".format(sample_size))
self.logger.info("INFO: The number of simulations: {}".format(num_bootstrap_simulations))
# Perform the set number of simulations
for index in range(num_bootstrap_simulations):
new_data = resample(self._data, n_samples=sample_size)
new_estimator = self.get_new_estimator_object(
self._target_estimand,
# names of treatment and outcome
test_significance=False,
evaluate_effect_strength=False,
confidence_intervals=False,
)
new_estimator.fit(
new_data,
self._target_estimand.treatment_variable,
self._target_estimand.outcome_variable,
effect_modifier_names=self._effect_modifier_names,
)
new_effect = new_estimator.estimate_effect(
treatment_value=self._treatment_value,
control_value=self._control_value,
target_units=self._target_units,
)
simulation_results[index] = new_effect.value
estimates = CausalEstimator.BootstrapEstimates(
simulation_results,
{"num_simulations": num_bootstrap_simulations, "sample_size_fraction": sample_size_fraction},
)
return estimates
def _estimate_confidence_intervals_with_bootstrap(
self, estimate_value, confidence_level=None, num_simulations=None, sample_size_fraction=None
):
"""
Method to compute confidence interval using bootstrapped sampling.
:param estimate_value: obtained estimate's value
:param confidence_level: The level for which to compute CI (e.g., 95% confidence level translates to confidence_level=0.95)
:param num_simulations: The number of simulations to be performed to get the bootstrap confidence intervals.
:param sample_size_fraction: The fraction of the dataset to be resampled.
:returns: confidence interval at the specified level.
For more details on bootstrap or resampling statistics, refer to the following links:
https://ocw.mit.edu/courses/mathematics/18-05-introduction-to-probability-and-statistics-spring-2014/readings/MIT18_05S14_Reading24.pdf
https://projecteuclid.org/download/pdf_1/euclid.ss/1032280214
"""
# Using class default parameters if not specified
if num_simulations is None:
num_simulations = self.num_simulations
if sample_size_fraction is None:
sample_size_fraction = self.sample_size_fraction
# Checking if bootstrap_estimates are already computed
if self._bootstrap_estimates is None:
self._bootstrap_estimates = self._generate_bootstrap_estimates(num_simulations, sample_size_fraction)
elif CausalEstimator.is_bootstrap_parameter_changed(self._bootstrap_estimates.params, locals()):
# Checked if any parameter is changed from the previous std error estimate
self._bootstrap_estimates = self._generate_bootstrap_estimates(num_simulations, sample_size_fraction)
# Now use the data obtained from the simulations to get the value of the confidence estimates
bootstrap_estimates = self._bootstrap_estimates.estimates
# Get the variations of each bootstrap estimate and sort
bootstrap_variations = [bootstrap_estimate - estimate_value for bootstrap_estimate in bootstrap_estimates]
sorted_bootstrap_variations = np.sort(bootstrap_variations)
# Now we take the (1- p)th and the (p)th variations, where p is the chosen confidence level
upper_bound_index = int((1 - confidence_level) * len(sorted_bootstrap_variations))
lower_bound_index = int(confidence_level * len(sorted_bootstrap_variations))
# Get the lower and upper bounds by subtracting the variations from the estimate
lower_bound = estimate_value - sorted_bootstrap_variations[lower_bound_index]
upper_bound = estimate_value - sorted_bootstrap_variations[upper_bound_index]
return lower_bound, upper_bound
def _estimate_confidence_intervals(self, confidence_level=None, method=None, **kwargs):
"""
        This method is to be overridden by the child classes, so that they
can run a confidence interval estimation method suited to the specific
causal estimator.
"""
raise NotImplementedError(
(
"This method for estimating confidence intervals is "
+ CausalEstimator.DEFAULT_NOTIMPLEMENTEDERROR_MSG
+ " Meanwhile, you can try the bootstrap method (method='bootstrap') to estimate confidence intervals."
).format(self.__class__)
)
def estimate_confidence_intervals(self, estimate_value, confidence_level=None, method=None, **kwargs):
"""Find the confidence intervals corresponding to any estimator
By default, this is done with the help of bootstrapped confidence intervals
but can be overridden if the specific estimator implements other methods of estimating confidence intervals.
If the method provided is not bootstrap, this function calls the implementation of the specific estimator.
:param estimate_value: obtained estimate's value
:param method: Method for estimating confidence intervals.
:param confidence_level: The confidence level of the confidence intervals of the estimate.
:param kwargs: Other optional args to be passed to the CI method.
:returns: The obtained confidence interval.
"""
if method is None:
if self._confidence_intervals:
                method = self._confidence_intervals  # this is either True or a method name
else:
method = "default"
confidence_intervals = None
if confidence_level is None:
confidence_level = self.confidence_level
if method == "default" or method is True: # user has not provided any method
try:
confidence_intervals = self._estimate_confidence_intervals(confidence_level, method=method, **kwargs)
except NotImplementedError:
confidence_intervals = self._estimate_confidence_intervals_with_bootstrap(
estimate_value, confidence_level, **kwargs
)
else:
if method == "bootstrap":
confidence_intervals = self._estimate_confidence_intervals_with_bootstrap(
estimate_value, confidence_level, **kwargs
)
else:
confidence_intervals = self._estimate_confidence_intervals(confidence_level, method=method, **kwargs)
return confidence_intervals
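    # Illustrative usage (the names `estimator` and `estimate` are assumptions,
    # standing for a fitted estimator and the CausalEstimate it produced):
    #
    #   lower, upper = estimator.estimate_confidence_intervals(
    #       estimate.value,
    #       confidence_level=0.95,
    #       method="bootstrap",
    #       num_simulations=100,
    #       sample_size_fraction=1.0,
    #   )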
def _estimate_std_error_with_bootstrap(self, num_simulations=None, sample_size_fraction=None):
"""Compute standard error using the bootstrap method. Standard error
and confidence intervals use the same parameter num_simulations for
the number of bootstrap simulations.
:param num_simulations: Number of bootstrapped samples.
:param sample_size_fraction: Fraction of data to be resampled.
:returns: Standard error of the obtained estimate.
"""
# Use existing params, if new user defined params are not present
if num_simulations is None:
num_simulations = self.num_simulations
if sample_size_fraction is None:
sample_size_fraction = self.sample_size_fraction
# Checking if bootstrap_estimates are already computed
if self._bootstrap_estimates is None:
self._bootstrap_estimates = self._generate_bootstrap_estimates(num_simulations, sample_size_fraction)
elif CausalEstimator.is_bootstrap_parameter_changed(self._bootstrap_estimates.params, locals()):
# Check if any parameter is changed from the previous std error estimate
self._bootstrap_estimates = self._generate_bootstrap_estimates(num_simulations, sample_size_fraction)
std_error = np.std(self._bootstrap_estimates.estimates)
return std_error
def _estimate_std_error(self, method=None, **kwargs):
"""
        This method is to be overridden by the child classes, so that they
can run a standard error estimation method suited to the specific
causal estimator.
"""
raise NotImplementedError(
(
"This method for estimating standard errors is "
+ CausalEstimator.DEFAULT_NOTIMPLEMENTEDERROR_MSG
+ " Meanwhile, you can try the bootstrap method (method='bootstrap') to estimate standard errors."
).format(self.__class__)
)
def estimate_std_error(self, method=None, **kwargs):
"""Compute standard error of an obtained causal estimate.
:param method: Method for computing the standard error.
:param kwargs: Other optional parameters to be passed to the estimating method.
:returns: Standard error of the causal estimate.
"""
if method is None:
if self._confidence_intervals:
method = self._confidence_intervals
else:
method = "default"
std_error = None
if method == "default" or method is True: # user has not provided any method
try:
std_error = self._estimate_std_error(method, **kwargs)
except NotImplementedError:
std_error = self._estimate_std_error_with_bootstrap(**kwargs)
else:
if method == "bootstrap":
std_error = self._estimate_std_error_with_bootstrap(**kwargs)
else:
std_error = self._estimate_std_error(method, **kwargs)
return std_error
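    # Illustrative usage (same assumed names as above): the bootstrap standard
    # error reuses the cached bootstrap estimates when the parameters match.
    #
    #   se = estimator.estimate_std_error(method="bootstrap", num_simulations=100)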
def _test_significance_with_bootstrap(self, estimate_value, num_null_simulations=None):
"""Test statistical significance of an estimate using the bootstrap method.
:param estimate_value: Obtained estimate's value
:param num_null_simulations: Number of simulations for the null hypothesis
:returns: p-value of the statistical significance test.
"""
# Use existing params, if new user defined params are not present
if num_null_simulations is None:
num_null_simulations = self.num_null_simulations
do_retest = self._bootstrap_null_estimates is None or CausalEstimator.is_bootstrap_parameter_changed(
self._bootstrap_null_estimates.params, locals()
)
if do_retest:
null_estimates = np.zeros(num_null_simulations)
for i in range(num_null_simulations):
new_outcome = np.random.permutation(self._outcome)
new_data = self._data.assign(dummy_outcome=new_outcome)
# self._outcome = self._data["dummy_outcome"]
new_estimator = self.get_new_estimator_object(
self._target_estimand,
test_significance=False,
evaluate_effect_strength=False,
confidence_intervals=False,
)
new_estimator.fit(
data=new_data,
treatment_name=self._target_estimand.treatment_variable,
outcome_name=("dummy_outcome",),
effect_modifier_names=self._effect_modifier_names,
)
new_effect = new_estimator.estimate_effect(
target_units=self._target_units,
)
null_estimates[i] = new_effect.value
self._bootstrap_null_estimates = CausalEstimator.BootstrapEstimates(
null_estimates, {"num_null_simulations": num_null_simulations, "sample_size_fraction": 1}
)
# Processing the null hypothesis estimates
sorted_null_estimates = np.sort(self._bootstrap_null_estimates.estimates)
self.logger.debug("Null estimates: {0}".format(sorted_null_estimates))
median_estimate = sorted_null_estimates[int(num_null_simulations / 2)]
# Doing a two-sided test
if estimate_value > median_estimate:
# Being conservative with the p-value reported
estimate_index = np.searchsorted(sorted_null_estimates, estimate_value, side="left")
p_value = 1 - (estimate_index / num_null_simulations)
if estimate_value <= median_estimate:
# Being conservative with the p-value reported
estimate_index = np.searchsorted(sorted_null_estimates, estimate_value, side="right")
p_value = estimate_index / num_null_simulations
        # A p-value of exactly 0 or 1 is limited by the number of simulations, so report it as a range instead
if p_value == 0:
p_value = (0, 1 / len(sorted_null_estimates)) # a tuple determining the range.
elif p_value == 1:
p_value = (1 - 1 / len(sorted_null_estimates), 1)
signif_dict = {"p_value": p_value}
return signif_dict
def _test_significance(self, estimate_value, method=None, **kwargs):
"""
        This method is to be overridden by the child classes, so that they
can run a significance test suited to the specific
causal estimator.
"""
raise NotImplementedError(
(
"This method for testing statistical significance is "
+ CausalEstimator.DEFAULT_NOTIMPLEMENTEDERROR_MSG
+ " Meanwhile, you can try the bootstrap method (method='bootstrap') to test statistical significance."
).format(self.__class__)
)
def test_significance(self, estimate_value, method=None, **kwargs):
"""Test statistical significance of obtained estimate.
By default, uses resampling to create a non-parametric significance test.
A general procedure. Individual child estimators can implement different methods.
If the method name is different from "bootstrap", this function calls the
implementation of the child estimator.
:param self: object instance of class Estimator
:param estimate_value: obtained estimate's value
:param method: Method for checking statistical significance
:returns: p-value from the significance test
"""
if method is None:
if self._significance_test:
                method = self._significance_test  # this is either True or a method name
else:
method = "default"
signif_dict = None
if method == "default" or method is True: # user has not provided any method
try:
signif_dict = self._test_significance(estimate_value, method, **kwargs)
except NotImplementedError:
signif_dict = self._test_significance_with_bootstrap(estimate_value, **kwargs)
else:
if method == "bootstrap":
signif_dict = self._test_significance_with_bootstrap(estimate_value, **kwargs)
else:
signif_dict = self._test_significance(estimate_value, method, **kwargs)
return signif_dict
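    # Illustrative usage (assumed names): the permutation test returns a dict
    # with a "p_value" key, reported as a (low, high) range when the estimate
    # lies beyond every simulated null estimate.
    #
    #   signif = estimator.test_significance(estimate.value, method="bootstrap")
    #   print(signif["p_value"])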
def evaluate_effect_strength(self, estimate):
fraction_effect_explained = self._evaluate_effect_strength(estimate, method="fraction-effect")
# Need to test r-squared before supporting
# effect_r_squared = self._evaluate_effect_strength(estimate, method="r-squared")
strength_dict = {
"fraction-effect": fraction_effect_explained
# 'r-squared': effect_r_squared
}
return strength_dict
def _evaluate_effect_strength(self, estimate, method="fraction-effect"):
supported_methods = ["fraction-effect"]
if method not in supported_methods:
raise NotImplementedError("This method is not supported for evaluating effect strength")
if method == "fraction-effect":
naive_obs_estimate = self.estimate_effect_naive()
self.logger.debug(estimate.value, naive_obs_estimate.value)
fraction_effect_explained = estimate.value / naive_obs_estimate.value
return fraction_effect_explained
# elif method == "r-squared":
# outcome_mean = np.mean(self._outcome)
# total_variance = np.sum(np.square(self._outcome - outcome_mean))
# Assuming a linear model with one variable: the treatment
# Currently only works for continuous y
# causal_model = outcome_mean + estimate.value*self._treatment
# squared_residual = np.sum(np.square(self._outcome - causal_model))
# r_squared = 1 - (squared_residual/total_variance)
# return r_squared
else:
return None
def update_input(self, treatment_value, control_value, target_units):
self._control_value = control_value
self._treatment_value = treatment_value
self._target_units = target_units
@staticmethod
def is_bootstrap_parameter_changed(bootstrap_estimates_params, given_params):
"""Check whether parameters of the bootstrap have changed.
This is an efficiency method that checks if fresh resampling of the bootstrap samples is required.
Returns True if parameters have changed and resampling should be done again.
:param bootstrap_estimates_params: A dictionary of parameters for the current bootstrap samples
:param given_params: A dictionary of parameters passed by the user
:returns: A binary flag denoting whether the parameters are different.
"""
is_any_parameter_changed = False
for prm, val in bootstrap_estimates_params.items():
given_val = given_params.get(prm, None)
if given_val is not None and given_val != val:
is_any_parameter_changed = True
break
return is_any_parameter_changed
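    # Worked example of the caching check (hypothetical values): resampling is
    # redone only if a user-supplied parameter differs from the cached one.
    #
    #   CausalEstimator.is_bootstrap_parameter_changed(
    #       {"num_simulations": 100, "sample_size_fraction": 1.0},
    #       {"num_simulations": 200},
    #   )  # -> True (num_simulations changed)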
def target_units_tostr(self):
s = ""
if type(self._target_units) is str:
s += self._target_units
elif callable(self._target_units):
s += "Data subset defined by a function"
elif isinstance(self._target_units, pd.DataFrame):
s += "Data subset provided as a data frame"
return s
def signif_results_tostr(self, signif_results):
s = ""
pval = signif_results["p_value"]
if type(pval) is tuple:
s += "[{0}, {1}]".format(pval[0], pval[1])
else:
s += "{0}".format(pval)
return s
def estimate_effect(
data: pd.DataFrame,
treatment: Union[str, List[str]],
outcome: Union[str, List[str]],
identifier_name: str,
estimator: CausalEstimator,
control_value: int = 0,
treatment_value: int = 1,
target_units: str = "ate",
effect_modifiers: List[str] = None,
fit_estimator: bool = True,
method_params: Optional[Dict] = None,
):
"""Estimate the identified causal effect.
In addition, you can directly call any of the EconML estimation methods. The convention is "backdoor.econml.path-to-estimator-class". For example, for the double machine learning estimator ("DML" class) that is located inside "dml" module of EconML, you can use the method name, "backdoor.econml.dml.DML". CausalML estimators can also be called. See `this demo notebook <https://py-why.github.io/dowhy/example_notebooks/dowhy-conditional-treatment-effects.html>`_.
    :param data: data frame containing the data
    :param treatment: Name of the treatment
    :param outcome: Name of the outcome
    :param identifier_name: Name of the identification method (e.g., "backdoor")
        whose estimand, stored in the estimator's target estimand, is to be
        estimated
    :param estimator: Instance of a CausalEstimator to use
:param control_value: Value of the treatment in the control group, for effect estimation. If treatment is multi-variate, this can be a list.
:param treatment_value: Value of the treatment in the treated group, for effect estimation. If treatment is multi-variate, this can be a list.
:param target_units: (Experimental) The units for which the treatment effect should be estimated. This can be of three types. (1) a string for common specifications of target units (namely, "ate", "att" and "atc"), (2) a lambda function that can be used as an index for the data (pandas DataFrame), or (3) a new DataFrame that contains values of the effect_modifiers and effect will be estimated only for this new data.
:param effect_modifiers: Names of effect modifier variables can be (optionally) specified here too, since they do not affect identification. If None, the effect_modifiers from the CausalModel are used.
:param fit_estimator: Boolean flag on whether to fit the estimator.
    Setting it to False is useful to estimate the effect on new data using a previously fitted estimator.
    :param method_params: Optional dictionary of method-specific parameters; a
        "fit_params" entry, if present, is forwarded to the estimator's fit method.
:returns: An instance of the CausalEstimate class, containing the causal effect estimate
and other method-dependent information
"""
if effect_modifiers is None:
effect_modifiers = []
treatment = parse_state(treatment)
outcome = parse_state(outcome)
causal_estimator_class = estimator.__class__
identified_estimand = estimator._target_estimand
identified_estimand.set_identifier_method(identifier_name)
if identified_estimand.no_directed_path:
logger.warning("No directed path from {0} to {1}.".format(treatment, outcome))
return CausalEstimate(
0, identified_estimand, None, control_value=control_value, treatment_value=treatment_value
)
# Check if estimator's target estimand is identified
elif identified_estimand.estimands[identifier_name] is None:
logger.error("No valid identified estimand available.")
return CausalEstimate(None, None, None, control_value=control_value, treatment_value=treatment_value)
if fit_estimator:
estimator.fit(
data=data,
treatment_name=treatment,
outcome_name=outcome,
effect_modifier_names=effect_modifiers,
            # Guard: method_params may be None, so check it before looking up fit_params
            **(method_params["fit_params"] if method_params and "fit_params" in method_params else {}),
)
estimate = estimator.estimate_effect(
treatment_value=treatment_value,
control_value=control_value,
target_units=target_units,
confidence_intervals=estimator._confidence_intervals,
)
# Store parameters inside estimate object for refutation methods
# TODO: This add_params needs to move to the estimator class
# inside estimate_effect and estimate_conditional_effect
estimate.add_params(
estimand_type=identified_estimand.estimand_type,
estimator_class=causal_estimator_class,
test_significance=estimator._significance_test,
evaluate_effect_strength=estimator._effect_strength_eval,
confidence_intervals=estimator._confidence_intervals,
target_units=target_units,
effect_modifiers=effect_modifiers,
)
if estimator._significance_test:
estimator.test_significance(estimate.value, method=estimator._significance_test)
if estimator._confidence_intervals:
estimator.estimate_confidence_intervals(
estimate.value, confidence_level=estimator.confidence_level, method=estimator._confidence_intervals
)
if estimator._effect_strength_eval:
effect_strength_dict = estimator.evaluate_effect_strength(estimate)
estimate.add_effect_strength(effect_strength_dict)
return estimate
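# Illustrative end-to-end sketch of the functional API above. The data frame
# `df`, the `identified_estimand`, and the choice of LinearRegressionEstimator
# are assumptions, standing in for any identification result and estimator.
#
#   from dowhy.causal_estimators.linear_regression_estimator import LinearRegressionEstimator
#
#   estimator = LinearRegressionEstimator(identified_estimand)
#   estimate = estimate_effect(
#       data=df,
#       treatment="v0",
#       outcome="y",
#       identifier_name="backdoor",
#       estimator=estimator,
#       control_value=0,
#       treatment_value=1,
#       target_units="ate",
#   )
#   print(estimate.value)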
class CausalEstimate:
"""Class for the estimate object that every causal estimator returns"""
def __init__(
self,
estimate,
target_estimand,
realized_estimand_expr,
control_value,
treatment_value,
conditional_estimates=None,
**kwargs,
):
self.value = estimate
self.target_estimand = target_estimand
self.realized_estimand_expr = realized_estimand_expr
self.control_value = control_value
self.treatment_value = treatment_value
self.conditional_estimates = conditional_estimates
self.params = kwargs
if self.params is not None:
for key, value in self.params.items():
setattr(self, key, value)
self.effect_strength = None
def add_estimator(self, estimator_instance):
self.estimator = estimator_instance
def add_effect_strength(self, strength_dict):
self.effect_strength = strength_dict
def add_params(self, **kwargs):
self.params.update(kwargs)
def get_confidence_intervals(self, confidence_level=None, method=None, **kwargs):
"""Get confidence intervals of the obtained estimate.
By default, this is done with the help of bootstrapped confidence intervals
but can be overridden if the specific estimator implements other methods of estimating confidence intervals.
If the method provided is not bootstrap, this function calls the implementation of the specific estimator.
:param method: Method for estimating confidence intervals.
:param confidence_level: The confidence level of the confidence intervals of the estimate.
:param kwargs: Other optional args to be passed to the CI method.
:returns: The obtained confidence interval.
"""
confidence_intervals = self.estimator.estimate_confidence_intervals(
estimate_value=self.value, confidence_level=confidence_level, method=method, **kwargs
)
return confidence_intervals
def get_standard_error(self, method=None, **kwargs):
"""Get standard error of the obtained estimate.
By default, this is done with the help of bootstrapped standard errors
but can be overridden if the specific estimator implements other methods of estimating standard error.
If the method provided is not bootstrap, this function calls the implementation of the specific estimator.
:param method: Method for computing the standard error.
:param kwargs: Other optional parameters to be passed to the estimating method.
:returns: Standard error of the causal estimate.
"""
std_error = self.estimator.estimate_std_error(method=method, **kwargs)
return std_error
def test_stat_significance(self, method=None, **kwargs):
"""Test statistical significance of the estimate obtained.
By default, uses resampling to create a non-parametric significance test.
Individual child estimators can implement different methods.
If the method name is different from "bootstrap", this function calls the
implementation of the child estimator.
:param method: Method for checking statistical significance
:param kwargs: Other optional parameters to be passed to the estimating method.
:returns: p-value from the significance test
"""
signif_results = self.estimator.test_significance(self.value, method=method, **kwargs)
return {"p_value": signif_results["p_value"]}
def estimate_conditional_effects(
self, effect_modifiers=None, num_quantiles=CausalEstimator.NUM_QUANTILES_TO_DISCRETIZE_CONT_COLS
):
"""Estimate treatment effect conditioned on given variables.
If a numeric effect modifier is provided, it is discretized into quantile bins. If you would like a custom discretization, you can do so yourself: create a new column containing the discretized effect modifier and then include that column's name in the effect_modifier_names argument.
:param effect_modifiers: Names of effect modifier variables over which the conditional effects will be estimated. If not provided, defaults to the effect modifiers specified during creation of the CausalEstimator object.
:param num_quantiles: The number of quantiles into which a numeric effect modifier variable is discretized. Does not affect any categorical effect modifiers.
:returns: A (multi-index) dataframe that provides separate effects for each value of the (discretized) effect modifiers.
"""
return self.estimator._estimate_conditional_effects(
self.estimator._estimate_effect_fn, effect_modifiers, num_quantiles
)
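    # Illustrative usage (the effect-modifier column name "X0" is an
    # assumption): numeric modifiers are first discretized into quantile bins.
    #
    #   cond_effects = estimate.estimate_conditional_effects(
    #       effect_modifiers=["X0"], num_quantiles=5
    #   )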
def interpret(self, method_name=None, **kwargs):
"""Interpret the causal estimate.
:param method_name: Method used (string) or a list of methods. If None, then the default for the specific estimator is used.
        :param kwargs: Optional parameters that are directly passed to the interpreter method.
:returns: None
"""
if method_name is None:
method_name = self.estimator.interpret_method
method_name_arr = parse_state(method_name)
for method in method_name_arr:
interpreter = interpreters.get_class_object(method)
interpreter(self, **kwargs).interpret()
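    # Illustrative usage (the interpreter name is an assumption; availability
    # depends on the dowhy version):
    #
    #   estimate.interpret(method_name="textual_effect_interpreter")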
def __str__(self):
s = "*** Causal Estimate ***\n"
# No estimand was identified (identification failed)
if self.target_estimand is None:
return "Estimation failed! No relevant identified estimand available for this estimation method."
s += "\n## Identified estimand\n{0}".format(self.target_estimand.__str__(only_target_estimand=True))
s += "\n## Realized estimand\n{0}".format(self.realized_estimand_expr)
if hasattr(self, "estimator"):
s += "\nTarget units: {0}\n".format(self.estimator.target_units_tostr())
s += "\n## Estimate\n"
s += "Mean value: {0}\n".format(self.value)
s += ""
if hasattr(self, "cate_estimates"):
s += "Effect estimates: {0}\n".format(self.cate_estimates)
if hasattr(self, "estimator"):
if self.estimator._significance_test:
s += "p-value: {0}\n".format(self.estimator.signif_results_tostr(self.test_stat_significance()))
if self.estimator._confidence_intervals:
s += "{0}% confidence interval: {1}\n".format(
100 * self.estimator.confidence_level, self.get_confidence_intervals()
)
if self.conditional_estimates is not None:
s += "### Conditional Estimates\n"
s += str(self.conditional_estimates)
if self.effect_strength is not None:
s += "\n## Effect Strength\n"
s += "Change in outcome attributable to treatment: {}\n".format(self.effect_strength["fraction-effect"])
# s += "Variance in outcome explained by treatment: {}\n".format(self.effect_strength["r-squared"])
return s
class RealizedEstimand(object):
def __init__(self, identified_estimand, estimator_name):
self.treatment_variable = identified_estimand.treatment_variable
self.outcome_variable = identified_estimand.outcome_variable
self.backdoor_variables = identified_estimand.get_backdoor_variables()
self.instrumental_variables = identified_estimand.instrumental_variables
self.estimand_type = identified_estimand.estimand_type
self.estimand_expression = None
self.assumptions = None
self.estimator_name = estimator_name
def update_assumptions(self, estimator_assumptions):
self.assumptions = estimator_assumptions
def update_estimand_expression(self, estimand_expression):
self.estimand_expression = estimand_expression
def __str__(self):
s = "Realized estimand: {0}\n".format(self.estimator_name)
s += "Realized estimand type: {0}\n".format(self.estimand_type)
s += "Estimand expression:\n{0}\n".format(sp.pretty(self.estimand_expression))
j = 1
for ass_name, ass_str in self.assumptions.items():
s += "Estimand assumption {0}, {1}: {2}\n".format(j, ass_name, ass_str)
j += 1
return s
| andresmor-ms | 11c4e0dafd6e824eb81ad14262457d954ae61468 | affe0952f4aba6845247355c171565510c2c1673 | okay, let's move this discussion for the next PR. | amit-sharma | 212 |
py-why/dowhy | 746 | Functional api/causal estimators | * Introduce `fit()` method to estimators.
* Refactor constructors to avoid using `*args` and `**kwargs` and have more explicit parameters.
* Refactor refuters and other parts of the code to use `fit()` and modify arguments to `estimate_effect()` | null | 2022-11-04 16:15:39+00:00 | 2022-12-03 17:07:53+00:00 | dowhy/causal_estimators/distance_matching_estimator.py | import numpy as np
import pandas as pd
from sklearn.neighbors import NearestNeighbors
from dowhy.causal_estimator import CausalEstimate, CausalEstimator
class DistanceMatchingEstimator(CausalEstimator):
"""Simple matching estimator for binary treatments based on a distance
metric.
For a list of standard args and kwargs, see documentation for
:class:`~dowhy.causal_estimator.CausalEstimator`.
Supports additional parameters as listed below.
"""
# allowed types of distance metric
Valid_Dist_Metric_Params = ["p", "V", "VI", "w"]
def __init__(self, *args, num_matches_per_unit=1, distance_metric="minkowski", exact_match_cols=None, **kwargs):
"""
:param num_matches_per_unit: The number of matches per data point.
Default=1.
:param distance_metric: Distance metric to use. Default="minkowski"
that corresponds to Euclidean distance metric with p=2.
:param exact_match_cols: List of column names whose values should be
exactly matched. Typically used for columns with discrete values.
"""
# Required to ensure that self.method_params contains all the
# parameters to create an object of this class
args_dict = {k: v for k, v in locals().items() if k not in type(self)._STD_INIT_ARGS}
args_dict.update(kwargs)
super().__init__(*args, **args_dict)
# Check if the treatment is one-dimensional
if len(self._treatment_name) > 1:
error_msg = str(self.__class__) + "cannot handle more than one treatment variable"
raise Exception(error_msg)
# Checking if the treatment is binary
if not pd.api.types.is_bool_dtype(self._data[self._treatment_name[0]]):
error_msg = "Distance Matching method is applicable only for binary treatments"
self.logger.error(error_msg)
raise Exception(error_msg)
self.num_matches_per_unit = num_matches_per_unit
self.distance_metric = distance_metric
self.exact_match_cols = exact_match_cols
self.logger.debug("Back-door variables used:" + ",".join(self._target_estimand.get_backdoor_variables()))
self._observed_common_causes_names = self._target_estimand.get_backdoor_variables()
if self._observed_common_causes_names:
if self.exact_match_cols is not None:
self._observed_common_causes_names = [
v for v in self._observed_common_causes_names if v not in self.exact_match_cols
]
self._observed_common_causes = self._data[self._observed_common_causes_names]
# Convert the categorical variables into dummy/indicator variables
# Basically, this gives a one hot encoding for each category
# The first category is taken to be the base line.
self._observed_common_causes = pd.get_dummies(self._observed_common_causes, drop_first=True)
else:
self._observed_common_causes = None
error_msg = "No common causes/confounders present. Distance matching methods are not applicable"
self.logger.error(error_msg)
raise Exception(error_msg)
# Dictionary of any user-provided params for the distance metric
# that will be passed to sklearn nearestneighbors
self.distance_metric_params = {}
for param_name in self.Valid_Dist_Metric_Params:
param_val = getattr(self, param_name, None)
if param_val is not None:
self.distance_metric_params[param_name] = param_val
self.logger.info("INFO: Using Distance Matching Estimator")
self.symbolic_estimator = self.construct_symbolic_estimator(self._target_estimand)
self.logger.info(self.symbolic_estimator)
self.matched_indices_att = None
self.matched_indices_atc = None
def _estimate_effect(self):
# this assumes a binary treatment regime
updated_df = pd.concat(
[self._observed_common_causes, self._data[[self._outcome_name, self._treatment_name[0]]]], axis=1
)
if self.exact_match_cols is not None:
updated_df = pd.concat([updated_df, self._data[self.exact_match_cols]], axis=1)
treated = updated_df.loc[self._data[self._treatment_name[0]] == 1]
control = updated_df.loc[self._data[self._treatment_name[0]] == 0]
numtreatedunits = treated.shape[0]
numcontrolunits = control.shape[0]
fit_att, fit_atc = False, False
est = None
# TODO remove neighbors that are more than a given radius apart
if self._target_units == "att":
fit_att = True
elif self._target_units == "atc":
fit_atc = True
elif self._target_units == "ate":
fit_att = True
fit_atc = True
else:
raise ValueError("Target units string value not supported")
if fit_att:
# estimate ATT on treated by summing over difference between matched neighbors
if self.exact_match_cols is None:
control_neighbors = NearestNeighbors(
n_neighbors=self.num_matches_per_unit,
metric=self.distance_metric,
algorithm="ball_tree",
**self.distance_metric_params,
).fit(control[self._observed_common_causes.columns].values)
distances, indices = control_neighbors.kneighbors(treated[self._observed_common_causes.columns].values)
self.logger.debug("distances:")
self.logger.debug(distances)
att = 0
for i in range(numtreatedunits):
treated_outcome = treated.iloc[i][self._outcome_name].item()
control_outcome = np.mean(control.iloc[indices[i]][self._outcome_name].values)
att += treated_outcome - control_outcome
att /= numtreatedunits
if self._target_units == "att":
est = att
elif self._target_units == "ate":
est = att * numtreatedunits
# Return indices in the original dataframe
self.matched_indices_att = {}
treated_df_index = treated.index.tolist()
for i in range(numtreatedunits):
self.matched_indices_att[treated_df_index[i]] = control.iloc[indices[i]].index.tolist()
else:
grouped = updated_df.groupby(self.exact_match_cols)
att = 0
for name, group in grouped:
treated = group.loc[group[self._treatment_name[0]] == 1]
control = group.loc[group[self._treatment_name[0]] == 0]
if treated.shape[0] == 0:
continue
control_neighbors = NearestNeighbors(
n_neighbors=self.num_matches_per_unit,
metric=self.distance_metric,
algorithm="ball_tree",
**self.distance_metric_params,
).fit(control[self._observed_common_causes.columns].values)
distances, indices = control_neighbors.kneighbors(
treated[self._observed_common_causes.columns].values
)
self.logger.debug("distances:")
self.logger.debug(distances)
for i in range(numtreatedunits):
treated_outcome = treated.iloc[i][self._outcome_name].item()
control_outcome = np.mean(control.iloc[indices[i]][self._outcome_name].values)
att += treated_outcome - control_outcome
# self.matched_indices_att[treated_df_index[i]] = control.iloc[indices[i]].index.tolist()
att /= numtreatedunits
if self._target_units == "att":
est = att
elif self._target_units == "ate":
est = att * numtreatedunits
if fit_atc:
# Now computing ATC
treated_neighbors = NearestNeighbors(
n_neighbors=self.num_matches_per_unit,
metric=self.distance_metric,
algorithm="ball_tree",
**self.distance_metric_params,
).fit(treated[self._observed_common_causes.columns].values)
distances, indices = treated_neighbors.kneighbors(control[self._observed_common_causes.columns].values)
atc = 0
for i in range(numcontrolunits):
control_outcome = control.iloc[i][self._outcome_name].item()
treated_outcome = np.mean(treated.iloc[indices[i]][self._outcome_name].values)
atc += treated_outcome - control_outcome
atc /= numcontrolunits
if self._target_units == "atc":
est = atc
elif self._target_units == "ate":
est += atc * numcontrolunits
est /= numtreatedunits + numcontrolunits
# Return indices in the original dataframe
self.matched_indices_atc = {}
control_df_index = control.index.tolist()
for i in range(numcontrolunits):
self.matched_indices_atc[control_df_index[i]] = treated.iloc[indices[i]].index.tolist()
estimate = CausalEstimate(
estimate=est,
control_value=self._control_value,
treatment_value=self._treatment_value,
target_estimand=self._target_estimand,
realized_estimand_expr=self.symbolic_estimator,
)
return estimate
def construct_symbolic_estimator(self, estimand):
expr = "b: " + ", ".join(estimand.outcome_variable) + "~"
var_list = estimand.treatment_variable + estimand.get_backdoor_variables()
expr += "+".join(var_list)
return expr
| from typing import Any, List, Optional, Union
import numpy as np
import pandas as pd
from sklearn.neighbors import NearestNeighbors
from dowhy.causal_estimator import CausalEstimate, CausalEstimator
from dowhy.causal_identifier import IdentifiedEstimand
class DistanceMatchingEstimator(CausalEstimator):
"""Simple matching estimator for binary treatments based on a distance
metric.
Supports additional parameters as listed below.
"""
# allowed types of distance metric
Valid_Dist_Metric_Params = ["p", "V", "VI", "w"]
def __init__(
self,
identified_estimand: IdentifiedEstimand,
test_significance: bool = False,
evaluate_effect_strength: bool = False,
confidence_intervals: bool = False,
num_null_simulations: int = CausalEstimator.DEFAULT_NUMBER_OF_SIMULATIONS_STAT_TEST,
num_simulations: int = CausalEstimator.DEFAULT_NUMBER_OF_SIMULATIONS_CI,
sample_size_fraction: int = CausalEstimator.DEFAULT_SAMPLE_SIZE_FRACTION,
confidence_level: float = CausalEstimator.DEFAULT_CONFIDENCE_LEVEL,
need_conditional_estimates: Union[bool, str] = "auto",
num_quantiles_to_discretize_cont_cols: int = CausalEstimator.NUM_QUANTILES_TO_DISCRETIZE_CONT_COLS,
num_matches_per_unit: int = 1,
distance_metric: str = "minkowski",
**kwargs,
):
"""
:param identified_estimand: probability expression
representing the target identified estimand to estimate.
:param test_significance: Binary flag or a string indicating whether to test significance and by which method. All estimators support test_significance="bootstrap" that estimates a p-value for the obtained estimate using the bootstrap method. Individual estimators can override this to support custom testing methods. The bootstrap method supports an optional parameter, num_null_simulations. If False, no testing is done. If True, significance of the estimate is tested using the custom method if available, otherwise by bootstrap.
:param evaluate_effect_strength: (Experimental) whether to evaluate the strength of effect
:param confidence_intervals: Binary flag or a string indicating whether the confidence intervals should be computed and which method should be used. All methods support estimation of confidence intervals using the bootstrap method by using the parameter confidence_intervals="bootstrap". The bootstrap method takes in two arguments (num_simulations and sample_size_fraction) that can be optionally specified in the params dictionary. Estimators may also override this to implement their own confidence interval method. If this parameter is False, no confidence intervals are computed. If True, confidence intervals are computed by the estimator's specific method if available, otherwise through bootstrap
:param num_null_simulations: The number of simulations for testing the
statistical significance of the estimator
:param num_simulations: The number of simulations for finding the
        confidence interval (and/or standard error) for an estimate
:param sample_size_fraction: The size of the sample for the bootstrap
estimator
:param confidence_level: The confidence level of the confidence
interval estimate
:param need_conditional_estimates: Boolean flag indicating whether
conditional estimates should be computed. Defaults to True if
there are effect modifiers in the graph
:param num_quantiles_to_discretize_cont_cols: The number of quantiles
into which a numeric effect modifier is split, to enable
estimation of conditional treatment effect over it.
:param num_matches_per_unit: The number of matches per data point.
Default=1.
:param distance_metric: Distance metric to use. Default="minkowski"
that corresponds to Euclidean distance metric with p=2.
:param kwargs: (optional) Additional estimator-specific parameters
"""
super().__init__(
identified_estimand=identified_estimand,
test_significance=test_significance,
evaluate_effect_strength=evaluate_effect_strength,
confidence_intervals=confidence_intervals,
num_null_simulations=num_null_simulations,
num_simulations=num_simulations,
sample_size_fraction=sample_size_fraction,
confidence_level=confidence_level,
need_conditional_estimates=need_conditional_estimates,
num_quantiles_to_discretize_cont_cols=num_quantiles_to_discretize_cont_cols,
num_matches_per_unit=num_matches_per_unit,
distance_metric=distance_metric,
**kwargs,
)
self.num_matches_per_unit = num_matches_per_unit
self.distance_metric = distance_metric
# Dictionary of any user-provided params for the distance metric
# that will be passed to sklearn nearestneighbors
self.distance_metric_params = {}
for param_name in self.Valid_Dist_Metric_Params:
param_val = getattr(self, param_name, None)
if param_val is not None:
self.distance_metric_params[param_name] = param_val
self.logger.info("INFO: Using Distance Matching Estimator")
self.matched_indices_att = None
self.matched_indices_atc = None
def fit(
self,
data: pd.DataFrame,
treatment_name: str,
outcome_name: str,
exact_match_cols=None,
effect_modifier_names: Optional[List[str]] = None,
):
"""
Fits the estimator with data for effect estimation
:param data: data frame containing the data
        :param treatment_name: name of the treatment variable
        :param outcome_name: name of the outcome variable
:param exact_match_cols: List of column names whose values should be
exactly matched. Typically used for columns with discrete values.
        :param effect_modifier_names: Variables on which to compute separate
effects, or return a heterogeneous effect function. Not all
methods support this currently.
"""
self._set_data(data, treatment_name, outcome_name)
self.exact_match_cols = exact_match_cols
self._set_effect_modifiers(effect_modifier_names)
# Check if the treatment is one-dimensional
if len(self._treatment_name) > 1:
            error_msg = str(self.__class__) + " cannot handle more than one treatment variable"
raise Exception(error_msg)
# Checking if the treatment is binary
if not pd.api.types.is_bool_dtype(self._data[self._treatment_name[0]]):
error_msg = "Distance Matching method is applicable only for binary treatments"
self.logger.error(error_msg)
raise Exception(error_msg)
self.logger.debug("Back-door variables used:" + ",".join(self._target_estimand.get_backdoor_variables()))
self._observed_common_causes_names = self._target_estimand.get_backdoor_variables()
if self._observed_common_causes_names:
if self.exact_match_cols is not None:
self._observed_common_causes_names = [
v for v in self._observed_common_causes_names if v not in self.exact_match_cols
]
self._observed_common_causes = self._data[self._observed_common_causes_names]
# Convert the categorical variables into dummy/indicator variables
# Basically, this gives a one hot encoding for each category
# The first category is taken to be the base line.
self._observed_common_causes = pd.get_dummies(self._observed_common_causes, drop_first=True)
else:
self._observed_common_causes = None
error_msg = "No common causes/confounders present. Distance matching methods are not applicable"
self.logger.error(error_msg)
raise Exception(error_msg)
self.symbolic_estimator = self.construct_symbolic_estimator(self._target_estimand)
self.logger.info(self.symbolic_estimator)
return self
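    # Illustrative usage (assumed names; the treatment column must be boolean
    # for this estimator): fit() returns self, so calls can be chained.
    #
    #   estimator = DistanceMatchingEstimator(identified_estimand, num_matches_per_unit=1)
    #   estimate = estimator.fit(
    #       df, treatment_name="v0", outcome_name="y", exact_match_cols=["W0"]
    #   ).estimate_effect(target_units="att")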
    def estimate_effect(
        self, data: pd.DataFrame = None, treatment_value: Any = 1, control_value: Any = 0, target_units=None, **_
    ):
        """Estimate the causal effect by nearest-neighbor matching of treated and control units.

        :param data: data frame to estimate the effect on; defaults to the data passed to fit()
        :param treatment_value: value of the treatment in the treated group
        :param control_value: value of the treatment in the control group
        :param target_units: units for which the effect is estimated; one of "ate", "att" or "atc"
        :returns: CausalEstimate containing the matching-based effect estimate
        """
if data is None:
data = self._data
# this assumes a binary treatment regime
self._target_units = target_units
self._treatment_value = treatment_value
self._control_value = control_value
updated_df = pd.concat(
[self._observed_common_causes, data[[self._outcome_name, self._treatment_name[0]]]], axis=1
)
if self.exact_match_cols is not None:
updated_df = pd.concat([updated_df, data[self.exact_match_cols]], axis=1)
treated = updated_df.loc[data[self._treatment_name[0]] == 1]
control = updated_df.loc[data[self._treatment_name[0]] == 0]
numtreatedunits = treated.shape[0]
numcontrolunits = control.shape[0]
fit_att, fit_atc = False, False
est = None
# TODO remove neighbors that are more than a given radius apart
if target_units == "att":
fit_att = True
elif target_units == "atc":
fit_atc = True
elif target_units == "ate":
fit_att = True
fit_atc = True
else:
raise ValueError("Target units string value not supported")
if fit_att:
# estimate ATT on treated by summing over difference between matched neighbors
if self.exact_match_cols is None:
control_neighbors = NearestNeighbors(
n_neighbors=self.num_matches_per_unit,
metric=self.distance_metric,
algorithm="ball_tree",
**self.distance_metric_params,
).fit(control[self._observed_common_causes.columns].values)
distances, indices = control_neighbors.kneighbors(treated[self._observed_common_causes.columns].values)
self.logger.debug("distances:")
self.logger.debug(distances)
att = 0
for i in range(numtreatedunits):
treated_outcome = treated.iloc[i][self._outcome_name].item()
control_outcome = np.mean(control.iloc[indices[i]][self._outcome_name].values)
att += treated_outcome - control_outcome
att /= numtreatedunits
if target_units == "att":
est = att
elif target_units == "ate":
est = att * numtreatedunits
# Return indices in the original dataframe
self.matched_indices_att = {}
treated_df_index = treated.index.tolist()
for i in range(numtreatedunits):
self.matched_indices_att[treated_df_index[i]] = control.iloc[indices[i]].index.tolist()
else:
grouped = updated_df.groupby(self.exact_match_cols)
att = 0
for name, group in grouped:
treated = group.loc[group[self._treatment_name[0]] == 1]
control = group.loc[group[self._treatment_name[0]] == 0]
if treated.shape[0] == 0:
continue
control_neighbors = NearestNeighbors(
n_neighbors=self.num_matches_per_unit,
metric=self.distance_metric,
algorithm="ball_tree",
**self.distance_metric_params,
).fit(control[self._observed_common_causes.columns].values)
distances, indices = control_neighbors.kneighbors(
treated[self._observed_common_causes.columns].values
)
self.logger.debug("distances:")
self.logger.debug(distances)
for i in range(numtreatedunits):
treated_outcome = treated.iloc[i][self._outcome_name].item()
control_outcome = np.mean(control.iloc[indices[i]][self._outcome_name].values)
att += treated_outcome - control_outcome
# self.matched_indices_att[treated_df_index[i]] = control.iloc[indices[i]].index.tolist()
att /= numtreatedunits
if target_units == "att":
est = att
elif target_units == "ate":
est = att * numtreatedunits
if fit_atc:
# Now computing ATC
treated_neighbors = NearestNeighbors(
n_neighbors=self.num_matches_per_unit,
metric=self.distance_metric,
algorithm="ball_tree",
**self.distance_metric_params,
).fit(treated[self._observed_common_causes.columns].values)
distances, indices = treated_neighbors.kneighbors(control[self._observed_common_causes.columns].values)
atc = 0
for i in range(numcontrolunits):
control_outcome = control.iloc[i][self._outcome_name].item()
treated_outcome = np.mean(treated.iloc[indices[i]][self._outcome_name].values)
atc += treated_outcome - control_outcome
atc /= numcontrolunits
if target_units == "atc":
est = atc
elif target_units == "ate":
est += atc * numcontrolunits
est /= numtreatedunits + numcontrolunits
# Return indices in the original dataframe
self.matched_indices_atc = {}
control_df_index = control.index.tolist()
for i in range(numcontrolunits):
self.matched_indices_atc[control_df_index[i]] = treated.iloc[indices[i]].index.tolist()
estimate = CausalEstimate(
estimate=est,
control_value=control_value,
treatment_value=treatment_value,
target_estimand=self._target_estimand,
realized_estimand_expr=self.symbolic_estimator,
)
estimate.add_estimator(self)
return estimate
def construct_symbolic_estimator(self, estimand):
expr = "b: " + ", ".join(estimand.outcome_variable) + "~"
var_list = estimand.treatment_variable + estimand.get_backdoor_variables()
expr += "+".join(var_list)
return expr
| andresmor-ms | 11c4e0dafd6e824eb81ad14262457d954ae61468 | affe0952f4aba6845247355c171565510c2c1673 | need a docstring for this method. | amit-sharma | 213 |
py-why/dowhy | 746 | Functional api/causal estimators | * Introduce `fit()` method to estimators.
* Refactor constructors to avoid using `*args` and `**kwargs` and have more explicit parameters.
* Refactor refuters and other parts of the code to use `fit()` and modify arguments to `estimate_effect()` | null | 2022-11-04 16:15:39+00:00 | 2022-12-03 17:07:53+00:00 | dowhy/causal_estimators/distance_matching_estimator.py | import numpy as np
import pandas as pd
from sklearn.neighbors import NearestNeighbors
from dowhy.causal_estimator import CausalEstimate, CausalEstimator
class DistanceMatchingEstimator(CausalEstimator):
"""Simple matching estimator for binary treatments based on a distance
metric.
For a list of standard args and kwargs, see documentation for
:class:`~dowhy.causal_estimator.CausalEstimator`.
Supports additional parameters as listed below.
"""
# allowed types of distance metric
Valid_Dist_Metric_Params = ["p", "V", "VI", "w"]
def __init__(self, *args, num_matches_per_unit=1, distance_metric="minkowski", exact_match_cols=None, **kwargs):
"""
:param num_matches_per_unit: The number of matches per data point.
Default=1.
:param distance_metric: Distance metric to use. Default="minkowski"
that corresponds to Euclidean distance metric with p=2.
:param exact_match_cols: List of column names whose values should be
exactly matched. Typically used for columns with discrete values.
"""
# Required to ensure that self.method_params contains all the
# parameters to create an object of this class
args_dict = {k: v for k, v in locals().items() if k not in type(self)._STD_INIT_ARGS}
args_dict.update(kwargs)
super().__init__(*args, **args_dict)
# Check if the treatment is one-dimensional
if len(self._treatment_name) > 1:
error_msg = str(self.__class__) + "cannot handle more than one treatment variable"
raise Exception(error_msg)
# Checking if the treatment is binary
if not pd.api.types.is_bool_dtype(self._data[self._treatment_name[0]]):
error_msg = "Distance Matching method is applicable only for binary treatments"
self.logger.error(error_msg)
raise Exception(error_msg)
self.num_matches_per_unit = num_matches_per_unit
self.distance_metric = distance_metric
self.exact_match_cols = exact_match_cols
self.logger.debug("Back-door variables used:" + ",".join(self._target_estimand.get_backdoor_variables()))
self._observed_common_causes_names = self._target_estimand.get_backdoor_variables()
if self._observed_common_causes_names:
if self.exact_match_cols is not None:
self._observed_common_causes_names = [
v for v in self._observed_common_causes_names if v not in self.exact_match_cols
]
self._observed_common_causes = self._data[self._observed_common_causes_names]
# Convert the categorical variables into dummy/indicator variables
# Basically, this gives a one hot encoding for each category
# The first category is taken to be the base line.
self._observed_common_causes = pd.get_dummies(self._observed_common_causes, drop_first=True)
else:
self._observed_common_causes = None
error_msg = "No common causes/confounders present. Distance matching methods are not applicable"
self.logger.error(error_msg)
raise Exception(error_msg)
# Dictionary of any user-provided params for the distance metric
# that will be passed to sklearn nearestneighbors
self.distance_metric_params = {}
for param_name in self.Valid_Dist_Metric_Params:
param_val = getattr(self, param_name, None)
if param_val is not None:
self.distance_metric_params[param_name] = param_val
self.logger.info("INFO: Using Distance Matching Estimator")
self.symbolic_estimator = self.construct_symbolic_estimator(self._target_estimand)
self.logger.info(self.symbolic_estimator)
self.matched_indices_att = None
self.matched_indices_atc = None
def _estimate_effect(self):
# this assumes a binary treatment regime
updated_df = pd.concat(
[self._observed_common_causes, self._data[[self._outcome_name, self._treatment_name[0]]]], axis=1
)
if self.exact_match_cols is not None:
updated_df = pd.concat([updated_df, self._data[self.exact_match_cols]], axis=1)
treated = updated_df.loc[self._data[self._treatment_name[0]] == 1]
control = updated_df.loc[self._data[self._treatment_name[0]] == 0]
numtreatedunits = treated.shape[0]
numcontrolunits = control.shape[0]
fit_att, fit_atc = False, False
est = None
# TODO remove neighbors that are more than a given radius apart
if self._target_units == "att":
fit_att = True
elif self._target_units == "atc":
fit_atc = True
elif self._target_units == "ate":
fit_att = True
fit_atc = True
else:
raise ValueError("Target units string value not supported")
if fit_att:
# estimate ATT on treated by summing over difference between matched neighbors
if self.exact_match_cols is None:
control_neighbors = NearestNeighbors(
n_neighbors=self.num_matches_per_unit,
metric=self.distance_metric,
algorithm="ball_tree",
**self.distance_metric_params,
).fit(control[self._observed_common_causes.columns].values)
distances, indices = control_neighbors.kneighbors(treated[self._observed_common_causes.columns].values)
self.logger.debug("distances:")
self.logger.debug(distances)
att = 0
for i in range(numtreatedunits):
treated_outcome = treated.iloc[i][self._outcome_name].item()
control_outcome = np.mean(control.iloc[indices[i]][self._outcome_name].values)
att += treated_outcome - control_outcome
att /= numtreatedunits
if self._target_units == "att":
est = att
elif self._target_units == "ate":
est = att * numtreatedunits
# Return indices in the original dataframe
self.matched_indices_att = {}
treated_df_index = treated.index.tolist()
for i in range(numtreatedunits):
self.matched_indices_att[treated_df_index[i]] = control.iloc[indices[i]].index.tolist()
else:
grouped = updated_df.groupby(self.exact_match_cols)
att = 0
for name, group in grouped:
treated = group.loc[group[self._treatment_name[0]] == 1]
control = group.loc[group[self._treatment_name[0]] == 0]
if treated.shape[0] == 0:
continue
control_neighbors = NearestNeighbors(
n_neighbors=self.num_matches_per_unit,
metric=self.distance_metric,
algorithm="ball_tree",
**self.distance_metric_params,
).fit(control[self._observed_common_causes.columns].values)
distances, indices = control_neighbors.kneighbors(
treated[self._observed_common_causes.columns].values
)
self.logger.debug("distances:")
self.logger.debug(distances)
for i in range(numtreatedunits):
treated_outcome = treated.iloc[i][self._outcome_name].item()
control_outcome = np.mean(control.iloc[indices[i]][self._outcome_name].values)
att += treated_outcome - control_outcome
# self.matched_indices_att[treated_df_index[i]] = control.iloc[indices[i]].index.tolist()
att /= numtreatedunits
if self._target_units == "att":
est = att
elif self._target_units == "ate":
est = att * numtreatedunits
if fit_atc:
# Now computing ATC
treated_neighbors = NearestNeighbors(
n_neighbors=self.num_matches_per_unit,
metric=self.distance_metric,
algorithm="ball_tree",
**self.distance_metric_params,
).fit(treated[self._observed_common_causes.columns].values)
distances, indices = treated_neighbors.kneighbors(control[self._observed_common_causes.columns].values)
atc = 0
for i in range(numcontrolunits):
control_outcome = control.iloc[i][self._outcome_name].item()
treated_outcome = np.mean(treated.iloc[indices[i]][self._outcome_name].values)
atc += treated_outcome - control_outcome
atc /= numcontrolunits
if self._target_units == "atc":
est = atc
elif self._target_units == "ate":
est += atc * numcontrolunits
est /= numtreatedunits + numcontrolunits
# Return indices in the original dataframe
self.matched_indices_atc = {}
control_df_index = control.index.tolist()
for i in range(numcontrolunits):
self.matched_indices_atc[control_df_index[i]] = treated.iloc[indices[i]].index.tolist()
estimate = CausalEstimate(
estimate=est,
control_value=self._control_value,
treatment_value=self._treatment_value,
target_estimand=self._target_estimand,
realized_estimand_expr=self.symbolic_estimator,
)
return estimate
def construct_symbolic_estimator(self, estimand):
expr = "b: " + ", ".join(estimand.outcome_variable) + "~"
var_list = estimand.treatment_variable + estimand.get_backdoor_variables()
expr += "+".join(var_list)
return expr
| from typing import Any, List, Optional, Union
import numpy as np
import pandas as pd
from sklearn.neighbors import NearestNeighbors
from dowhy.causal_estimator import CausalEstimate, CausalEstimator
from dowhy.causal_identifier import IdentifiedEstimand
class DistanceMatchingEstimator(CausalEstimator):
"""Simple matching estimator for binary treatments based on a distance
metric.
Supports additional parameters as listed below.
"""
# allowed types of distance metric
Valid_Dist_Metric_Params = ["p", "V", "VI", "w"]
def __init__(
self,
identified_estimand: IdentifiedEstimand,
test_significance: bool = False,
evaluate_effect_strength: bool = False,
confidence_intervals: bool = False,
num_null_simulations: int = CausalEstimator.DEFAULT_NUMBER_OF_SIMULATIONS_STAT_TEST,
num_simulations: int = CausalEstimator.DEFAULT_NUMBER_OF_SIMULATIONS_CI,
sample_size_fraction: int = CausalEstimator.DEFAULT_SAMPLE_SIZE_FRACTION,
confidence_level: float = CausalEstimator.DEFAULT_CONFIDENCE_LEVEL,
need_conditional_estimates: Union[bool, str] = "auto",
num_quantiles_to_discretize_cont_cols: int = CausalEstimator.NUM_QUANTILES_TO_DISCRETIZE_CONT_COLS,
num_matches_per_unit: int = 1,
distance_metric: str = "minkowski",
**kwargs,
):
"""
:param identified_estimand: probability expression
representing the target identified estimand to estimate.
:param test_significance: Binary flag or a string indicating whether to test significance and by which method. All estimators support test_significance="bootstrap" that estimates a p-value for the obtained estimate using the bootstrap method. Individual estimators can override this to support custom testing methods. The bootstrap method supports an optional parameter, num_null_simulations. If False, no testing is done. If True, significance of the estimate is tested using the custom method if available, otherwise by bootstrap.
:param evaluate_effect_strength: (Experimental) whether to evaluate the strength of effect
:param confidence_intervals: Binary flag or a string indicating whether the confidence intervals should be computed and which method should be used. All methods support estimation of confidence intervals using the bootstrap method by using the parameter confidence_intervals="bootstrap". The bootstrap method takes in two arguments (num_simulations and sample_size_fraction) that can be optionally specified in the params dictionary. Estimators may also override this to implement their own confidence interval method. If this parameter is False, no confidence intervals are computed. If True, confidence intervals are computed by the estimator's specific method if available, otherwise through bootstrap
:param num_null_simulations: The number of simulations for testing the
statistical significance of the estimator
:param num_simulations: The number of simulations for finding the
        confidence interval (and/or standard error) for an estimate
:param sample_size_fraction: The size of the sample for the bootstrap
estimator
:param confidence_level: The confidence level of the confidence
interval estimate
:param need_conditional_estimates: Boolean flag indicating whether
conditional estimates should be computed. Defaults to True if
there are effect modifiers in the graph
:param num_quantiles_to_discretize_cont_cols: The number of quantiles
into which a numeric effect modifier is split, to enable
estimation of conditional treatment effect over it.
:param num_matches_per_unit: The number of matches per data point.
Default=1.
:param distance_metric: Distance metric to use. Default="minkowski"
that corresponds to Euclidean distance metric with p=2.
:param kwargs: (optional) Additional estimator-specific parameters
"""
super().__init__(
identified_estimand=identified_estimand,
test_significance=test_significance,
evaluate_effect_strength=evaluate_effect_strength,
confidence_intervals=confidence_intervals,
num_null_simulations=num_null_simulations,
num_simulations=num_simulations,
sample_size_fraction=sample_size_fraction,
confidence_level=confidence_level,
need_conditional_estimates=need_conditional_estimates,
num_quantiles_to_discretize_cont_cols=num_quantiles_to_discretize_cont_cols,
num_matches_per_unit=num_matches_per_unit,
distance_metric=distance_metric,
**kwargs,
)
self.num_matches_per_unit = num_matches_per_unit
self.distance_metric = distance_metric
# Dictionary of any user-provided params for the distance metric
# that will be passed to sklearn nearestneighbors
self.distance_metric_params = {}
for param_name in self.Valid_Dist_Metric_Params:
param_val = getattr(self, param_name, None)
if param_val is not None:
self.distance_metric_params[param_name] = param_val
self.logger.info("INFO: Using Distance Matching Estimator")
self.matched_indices_att = None
self.matched_indices_atc = None
def fit(
self,
data: pd.DataFrame,
treatment_name: str,
outcome_name: str,
exact_match_cols=None,
effect_modifier_names: Optional[List[str]] = None,
):
"""
Fits the estimator with data for effect estimation
:param data: data frame containing the data
        :param treatment_name: name of the treatment variable
        :param outcome_name: name of the outcome variable
:param exact_match_cols: List of column names whose values should be
exactly matched. Typically used for columns with discrete values.
        :param effect_modifier_names: Variables on which to compute separate
            effects, or return a heterogeneous effect function. Not all
            methods support this currently.
"""
self._set_data(data, treatment_name, outcome_name)
self.exact_match_cols = exact_match_cols
self._set_effect_modifiers(effect_modifier_names)
# Check if the treatment is one-dimensional
if len(self._treatment_name) > 1:
            error_msg = str(self.__class__) + " cannot handle more than one treatment variable"
raise Exception(error_msg)
# Checking if the treatment is binary
if not pd.api.types.is_bool_dtype(self._data[self._treatment_name[0]]):
error_msg = "Distance Matching method is applicable only for binary treatments"
self.logger.error(error_msg)
raise Exception(error_msg)
self.logger.debug("Back-door variables used:" + ",".join(self._target_estimand.get_backdoor_variables()))
self._observed_common_causes_names = self._target_estimand.get_backdoor_variables()
if self._observed_common_causes_names:
if self.exact_match_cols is not None:
self._observed_common_causes_names = [
v for v in self._observed_common_causes_names if v not in self.exact_match_cols
]
self._observed_common_causes = self._data[self._observed_common_causes_names]
# Convert the categorical variables into dummy/indicator variables
# Basically, this gives a one hot encoding for each category
# The first category is taken to be the base line.
self._observed_common_causes = pd.get_dummies(self._observed_common_causes, drop_first=True)
else:
self._observed_common_causes = None
error_msg = "No common causes/confounders present. Distance matching methods are not applicable"
self.logger.error(error_msg)
raise Exception(error_msg)
self.symbolic_estimator = self.construct_symbolic_estimator(self._target_estimand)
self.logger.info(self.symbolic_estimator)
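        # Returning self allows call chaining, e.g. estimator.fit(...).estimate_effect(...)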
return self
def estimate_effect(
self, data: pd.DataFrame = None, treatment_value: Any = 1, control_value: Any = 0, target_units=None, **_
):
if data is None:
data = self._data
# this assumes a binary treatment regime
self._target_units = target_units
self._treatment_value = treatment_value
self._control_value = control_value
updated_df = pd.concat(
[self._observed_common_causes, data[[self._outcome_name, self._treatment_name[0]]]], axis=1
)
if self.exact_match_cols is not None:
updated_df = pd.concat([updated_df, data[self.exact_match_cols]], axis=1)
treated = updated_df.loc[data[self._treatment_name[0]] == 1]
control = updated_df.loc[data[self._treatment_name[0]] == 0]
numtreatedunits = treated.shape[0]
numcontrolunits = control.shape[0]
fit_att, fit_atc = False, False
est = None
# TODO remove neighbors that are more than a given radius apart
if target_units == "att":
fit_att = True
elif target_units == "atc":
fit_atc = True
elif target_units == "ate":
fit_att = True
fit_atc = True
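            # For "ate", both ATT and ATC are estimated and then combined below as an
            # average weighted by the number of treated and control units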
else:
            raise ValueError("target_units must be one of 'att', 'atc' or 'ate'")
if fit_att:
# estimate ATT on treated by summing over difference between matched neighbors
if self.exact_match_cols is None:
control_neighbors = NearestNeighbors(
n_neighbors=self.num_matches_per_unit,
metric=self.distance_metric,
algorithm="ball_tree",
**self.distance_metric_params,
).fit(control[self._observed_common_causes.columns].values)
distances, indices = control_neighbors.kneighbors(treated[self._observed_common_causes.columns].values)
self.logger.debug("distances:")
self.logger.debug(distances)
att = 0
for i in range(numtreatedunits):
treated_outcome = treated.iloc[i][self._outcome_name].item()
control_outcome = np.mean(control.iloc[indices[i]][self._outcome_name].values)
att += treated_outcome - control_outcome
att /= numtreatedunits
if target_units == "att":
est = att
elif target_units == "ate":
est = att * numtreatedunits
# Return indices in the original dataframe
self.matched_indices_att = {}
treated_df_index = treated.index.tolist()
for i in range(numtreatedunits):
self.matched_indices_att[treated_df_index[i]] = control.iloc[indices[i]].index.tolist()
else:
grouped = updated_df.groupby(self.exact_match_cols)
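            # With exact_match_cols, nearest-neighbor matching runs separately
            # within each group of identical exact-match values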
att = 0
for name, group in grouped:
treated = group.loc[group[self._treatment_name[0]] == 1]
control = group.loc[group[self._treatment_name[0]] == 0]
if treated.shape[0] == 0:
continue
control_neighbors = NearestNeighbors(
n_neighbors=self.num_matches_per_unit,
metric=self.distance_metric,
algorithm="ball_tree",
**self.distance_metric_params,
).fit(control[self._observed_common_causes.columns].values)
distances, indices = control_neighbors.kneighbors(
treated[self._observed_common_causes.columns].values
)
self.logger.debug("distances:")
self.logger.debug(distances)
for i in range(numtreatedunits):
treated_outcome = treated.iloc[i][self._outcome_name].item()
control_outcome = np.mean(control.iloc[indices[i]][self._outcome_name].values)
att += treated_outcome - control_outcome
# self.matched_indices_att[treated_df_index[i]] = control.iloc[indices[i]].index.tolist()
att /= numtreatedunits
if target_units == "att":
est = att
elif target_units == "ate":
est = att * numtreatedunits
if fit_atc:
# Now computing ATC
treated_neighbors = NearestNeighbors(
n_neighbors=self.num_matches_per_unit,
metric=self.distance_metric,
algorithm="ball_tree",
**self.distance_metric_params,
).fit(treated[self._observed_common_causes.columns].values)
distances, indices = treated_neighbors.kneighbors(control[self._observed_common_causes.columns].values)
atc = 0
for i in range(numcontrolunits):
control_outcome = control.iloc[i][self._outcome_name].item()
treated_outcome = np.mean(treated.iloc[indices[i]][self._outcome_name].values)
atc += treated_outcome - control_outcome
atc /= numcontrolunits
if target_units == "atc":
est = atc
elif target_units == "ate":
est += atc * numcontrolunits
est /= numtreatedunits + numcontrolunits
# Return indices in the original dataframe
self.matched_indices_atc = {}
control_df_index = control.index.tolist()
for i in range(numcontrolunits):
self.matched_indices_atc[control_df_index[i]] = treated.iloc[indices[i]].index.tolist()
estimate = CausalEstimate(
estimate=est,
control_value=control_value,
treatment_value=treatment_value,
target_estimand=self._target_estimand,
realized_estimand_expr=self.symbolic_estimator,
)
estimate.add_estimator(self)
return estimate
def construct_symbolic_estimator(self, estimand):
expr = "b: " + ", ".join(estimand.outcome_variable) + "~"
var_list = estimand.treatment_variable + estimand.get_backdoor_variables()
expr += "+".join(var_list)
return expr
| andresmor-ms | 11c4e0dafd6e824eb81ad14262457d954ae61468 | affe0952f4aba6845247355c171565510c2c1673 | Also, how do you feel about renaming this instance method to simply "effect"? It's shorter and the meaning is clear because we do `Estimator.effect()`. If you agree we can change it for all estimators. | amit-sharma | 214 |
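For context, the thread above discusses the API introduced by this PR. A minimal usage sketch of the refactored flow (illustrative only: it assumes a pandas DataFrame df with a boolean treatment column and an IdentifiedEstimand obtained from CausalModel.identify_effect(); the variable names are not part of the diff):

estimator = DistanceMatchingEstimator(
    identified_estimand,
    num_matches_per_unit=1,
    distance_metric="minkowski",
)
# The data now enters through fit() rather than the constructor
estimator.fit(df, treatment_name="v0", outcome_name="y")
# estimate_effect() receives the treatment/control values and the target units
estimate = estimator.estimate_effect(treatment_value=True, control_value=False, target_units="att")
print(estimate.value)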
py-why/dowhy | 746 | Functional api/causal estimators | * Introduce `fit()` method to estimators.
* Refactor constructors to avoid using `*args` and `**kwargs` and have more explicit parameters.
* Refactor refuters and other parts of the code to use `fit()` and modify arguments to `estimate_effect()` | null | 2022-11-04 16:15:39+00:00 | 2022-12-03 17:07:53+00:00 | dowhy/causal_estimators/distance_matching_estimator.py |
import numpy as np
import pandas as pd
from sklearn.neighbors import NearestNeighbors
from dowhy.causal_estimator import CausalEstimate, CausalEstimator
class DistanceMatchingEstimator(CausalEstimator):
"""Simple matching estimator for binary treatments based on a distance
metric.
For a list of standard args and kwargs, see documentation for
:class:`~dowhy.causal_estimator.CausalEstimator`.
Supports additional parameters as listed below.
"""
# allowed types of distance metric
Valid_Dist_Metric_Params = ["p", "V", "VI", "w"]
def __init__(self, *args, num_matches_per_unit=1, distance_metric="minkowski", exact_match_cols=None, **kwargs):
"""
:param num_matches_per_unit: The number of matches per data point.
Default=1.
:param distance_metric: Distance metric to use. Default="minkowski"
that corresponds to Euclidean distance metric with p=2.
:param exact_match_cols: List of column names whose values should be
exactly matched. Typically used for columns with discrete values.
"""
# Required to ensure that self.method_params contains all the
# parameters to create an object of this class
args_dict = {k: v for k, v in locals().items() if k not in type(self)._STD_INIT_ARGS}
args_dict.update(kwargs)
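        # args_dict now holds the estimator-specific parameters; the base class
        # records them so that self.method_params can recreate this object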
super().__init__(*args, **args_dict)
# Check if the treatment is one-dimensional
if len(self._treatment_name) > 1:
            error_msg = str(self.__class__) + " cannot handle more than one treatment variable"
raise Exception(error_msg)
# Checking if the treatment is binary
if not pd.api.types.is_bool_dtype(self._data[self._treatment_name[0]]):
error_msg = "Distance Matching method is applicable only for binary treatments"
self.logger.error(error_msg)
raise Exception(error_msg)
self.num_matches_per_unit = num_matches_per_unit
self.distance_metric = distance_metric
self.exact_match_cols = exact_match_cols
self.logger.debug("Back-door variables used:" + ",".join(self._target_estimand.get_backdoor_variables()))
self._observed_common_causes_names = self._target_estimand.get_backdoor_variables()
if self._observed_common_causes_names:
if self.exact_match_cols is not None:
self._observed_common_causes_names = [
v for v in self._observed_common_causes_names if v not in self.exact_match_cols
]
self._observed_common_causes = self._data[self._observed_common_causes_names]
# Convert the categorical variables into dummy/indicator variables
# Basically, this gives a one hot encoding for each category
# The first category is taken to be the base line.
self._observed_common_causes = pd.get_dummies(self._observed_common_causes, drop_first=True)
else:
self._observed_common_causes = None
error_msg = "No common causes/confounders present. Distance matching methods are not applicable"
self.logger.error(error_msg)
raise Exception(error_msg)
# Dictionary of any user-provided params for the distance metric
# that will be passed to sklearn nearestneighbors
self.distance_metric_params = {}
for param_name in self.Valid_Dist_Metric_Params:
param_val = getattr(self, param_name, None)
if param_val is not None:
self.distance_metric_params[param_name] = param_val
self.logger.info("INFO: Using Distance Matching Estimator")
self.symbolic_estimator = self.construct_symbolic_estimator(self._target_estimand)
self.logger.info(self.symbolic_estimator)
self.matched_indices_att = None
self.matched_indices_atc = None
def _estimate_effect(self):
# this assumes a binary treatment regime
updated_df = pd.concat(
[self._observed_common_causes, self._data[[self._outcome_name, self._treatment_name[0]]]], axis=1
)
if self.exact_match_cols is not None:
updated_df = pd.concat([updated_df, self._data[self.exact_match_cols]], axis=1)
treated = updated_df.loc[self._data[self._treatment_name[0]] == 1]
control = updated_df.loc[self._data[self._treatment_name[0]] == 0]
numtreatedunits = treated.shape[0]
numcontrolunits = control.shape[0]
fit_att, fit_atc = False, False
est = None
# TODO remove neighbors that are more than a given radius apart
if self._target_units == "att":
fit_att = True
elif self._target_units == "atc":
fit_atc = True
elif self._target_units == "ate":
fit_att = True
fit_atc = True
else:
            raise ValueError("target_units must be one of 'att', 'atc' or 'ate'")
if fit_att:
# estimate ATT on treated by summing over difference between matched neighbors
if self.exact_match_cols is None:
control_neighbors = NearestNeighbors(
n_neighbors=self.num_matches_per_unit,
metric=self.distance_metric,
algorithm="ball_tree",
**self.distance_metric_params,
).fit(control[self._observed_common_causes.columns].values)
distances, indices = control_neighbors.kneighbors(treated[self._observed_common_causes.columns].values)
self.logger.debug("distances:")
self.logger.debug(distances)
att = 0
for i in range(numtreatedunits):
treated_outcome = treated.iloc[i][self._outcome_name].item()
control_outcome = np.mean(control.iloc[indices[i]][self._outcome_name].values)
att += treated_outcome - control_outcome
att /= numtreatedunits
if self._target_units == "att":
est = att
elif self._target_units == "ate":
est = att * numtreatedunits
# Return indices in the original dataframe
self.matched_indices_att = {}
treated_df_index = treated.index.tolist()
for i in range(numtreatedunits):
self.matched_indices_att[treated_df_index[i]] = control.iloc[indices[i]].index.tolist()
else:
grouped = updated_df.groupby(self.exact_match_cols)
att = 0
for name, group in grouped:
treated = group.loc[group[self._treatment_name[0]] == 1]
control = group.loc[group[self._treatment_name[0]] == 0]
if treated.shape[0] == 0:
continue
control_neighbors = NearestNeighbors(
n_neighbors=self.num_matches_per_unit,
metric=self.distance_metric,
algorithm="ball_tree",
**self.distance_metric_params,
).fit(control[self._observed_common_causes.columns].values)
distances, indices = control_neighbors.kneighbors(
treated[self._observed_common_causes.columns].values
)
self.logger.debug("distances:")
self.logger.debug(distances)
for i in range(numtreatedunits):
treated_outcome = treated.iloc[i][self._outcome_name].item()
control_outcome = np.mean(control.iloc[indices[i]][self._outcome_name].values)
att += treated_outcome - control_outcome
# self.matched_indices_att[treated_df_index[i]] = control.iloc[indices[i]].index.tolist()
att /= numtreatedunits
if self._target_units == "att":
est = att
elif self._target_units == "ate":
est = att * numtreatedunits
if fit_atc:
# Now computing ATC
treated_neighbors = NearestNeighbors(
n_neighbors=self.num_matches_per_unit,
metric=self.distance_metric,
algorithm="ball_tree",
**self.distance_metric_params,
).fit(treated[self._observed_common_causes.columns].values)
distances, indices = treated_neighbors.kneighbors(control[self._observed_common_causes.columns].values)
atc = 0
for i in range(numcontrolunits):
control_outcome = control.iloc[i][self._outcome_name].item()
treated_outcome = np.mean(treated.iloc[indices[i]][self._outcome_name].values)
atc += treated_outcome - control_outcome
atc /= numcontrolunits
if self._target_units == "atc":
est = atc
elif self._target_units == "ate":
est += atc * numcontrolunits
est /= numtreatedunits + numcontrolunits
# Return indices in the original dataframe
self.matched_indices_atc = {}
control_df_index = control.index.tolist()
for i in range(numcontrolunits):
self.matched_indices_atc[control_df_index[i]] = treated.iloc[indices[i]].index.tolist()
estimate = CausalEstimate(
estimate=est,
control_value=self._control_value,
treatment_value=self._treatment_value,
target_estimand=self._target_estimand,
realized_estimand_expr=self.symbolic_estimator,
)
return estimate
def construct_symbolic_estimator(self, estimand):
expr = "b: " + ", ".join(estimand.outcome_variable) + "~"
var_list = estimand.treatment_variable + estimand.get_backdoor_variables()
expr += "+".join(var_list)
return expr
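For comparison, the pre-refactor class above was constructed with the data and invoked through CausalModel in a single call. A sketch of that older calling convention (assuming model is a CausalModel built on the same data; the method_params keys mirror the constructor arguments above):

estimate = model.estimate_effect(
    identified_estimand,
    method_name="backdoor.distance_matching",
    target_units="att",
    method_params={"distance_metric": "minkowski", "p": 2},
)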
| andresmor-ms | 11c4e0dafd6e824eb81ad14262457d954ae61468 | affe0952f4aba6845247355c171565510c2c1673 | I like it, I just didn't know if we actually wanted to rename it. I'll rename it in the estimators but leave the estimate_effect method name in the CausalModel, otherwise we might break backwards compatibility. | andresmor-ms | 215 |
| andresmor-ms | 11c4e0dafd6e824eb81ad14262457d954ae61468 | affe0952f4aba6845247355c171565510c2c1673 | That sounds good! | amit-sharma | 216 |
py-why/dowhy | 746 | Functional api/causal estimators | * Introduce `fit()` method to estimators.
* Refactor constructors to avoid using `*args` and `**kwargs` and have more explicit parameters.
* Refactor refuters and other parts of the code to use `fit()` and modify arguments to `estimate_effect()` | null | 2022-11-04 16:15:39+00:00 | 2022-12-03 17:07:53+00:00 | dowhy/causal_estimators/distance_matching_estimator.py | import numpy as np
import pandas as pd
from sklearn.neighbors import NearestNeighbors
from dowhy.causal_estimator import CausalEstimate, CausalEstimator
class DistanceMatchingEstimator(CausalEstimator):
"""Simple matching estimator for binary treatments based on a distance
metric.
For a list of standard args and kwargs, see documentation for
:class:`~dowhy.causal_estimator.CausalEstimator`.
Supports additional parameters as listed below.
"""
# allowed types of distance metric
Valid_Dist_Metric_Params = ["p", "V", "VI", "w"]
def __init__(self, *args, num_matches_per_unit=1, distance_metric="minkowski", exact_match_cols=None, **kwargs):
"""
:param num_matches_per_unit: The number of matches per data point.
Default=1.
:param distance_metric: Distance metric to use. Default="minkowski"
that corresponds to Euclidean distance metric with p=2.
:param exact_match_cols: List of column names whose values should be
exactly matched. Typically used for columns with discrete values.
"""
# Required to ensure that self.method_params contains all the
# parameters to create an object of this class
args_dict = {k: v for k, v in locals().items() if k not in type(self)._STD_INIT_ARGS}
args_dict.update(kwargs)
super().__init__(*args, **args_dict)
# Check if the treatment is one-dimensional
if len(self._treatment_name) > 1:
error_msg = str(self.__class__) + "cannot handle more than one treatment variable"
raise Exception(error_msg)
# Checking if the treatment is binary
if not pd.api.types.is_bool_dtype(self._data[self._treatment_name[0]]):
error_msg = "Distance Matching method is applicable only for binary treatments"
self.logger.error(error_msg)
raise Exception(error_msg)
self.num_matches_per_unit = num_matches_per_unit
self.distance_metric = distance_metric
self.exact_match_cols = exact_match_cols
self.logger.debug("Back-door variables used:" + ",".join(self._target_estimand.get_backdoor_variables()))
self._observed_common_causes_names = self._target_estimand.get_backdoor_variables()
if self._observed_common_causes_names:
if self.exact_match_cols is not None:
self._observed_common_causes_names = [
v for v in self._observed_common_causes_names if v not in self.exact_match_cols
]
self._observed_common_causes = self._data[self._observed_common_causes_names]
# Convert the categorical variables into dummy/indicator variables
# Basically, this gives a one hot encoding for each category
# The first category is taken to be the base line.
self._observed_common_causes = pd.get_dummies(self._observed_common_causes, drop_first=True)
else:
self._observed_common_causes = None
error_msg = "No common causes/confounders present. Distance matching methods are not applicable"
self.logger.error(error_msg)
raise Exception(error_msg)
# Dictionary of any user-provided params for the distance metric
# that will be passed to sklearn nearestneighbors
self.distance_metric_params = {}
for param_name in self.Valid_Dist_Metric_Params:
param_val = getattr(self, param_name, None)
if param_val is not None:
self.distance_metric_params[param_name] = param_val
self.logger.info("INFO: Using Distance Matching Estimator")
self.symbolic_estimator = self.construct_symbolic_estimator(self._target_estimand)
self.logger.info(self.symbolic_estimator)
self.matched_indices_att = None
self.matched_indices_atc = None
def _estimate_effect(self):
# this assumes a binary treatment regime
updated_df = pd.concat(
[self._observed_common_causes, self._data[[self._outcome_name, self._treatment_name[0]]]], axis=1
)
if self.exact_match_cols is not None:
updated_df = pd.concat([updated_df, self._data[self.exact_match_cols]], axis=1)
treated = updated_df.loc[self._data[self._treatment_name[0]] == 1]
control = updated_df.loc[self._data[self._treatment_name[0]] == 0]
numtreatedunits = treated.shape[0]
numcontrolunits = control.shape[0]
fit_att, fit_atc = False, False
est = None
# TODO remove neighbors that are more than a given radius apart
if self._target_units == "att":
fit_att = True
elif self._target_units == "atc":
fit_atc = True
elif self._target_units == "ate":
fit_att = True
fit_atc = True
else:
raise ValueError("Target units string value not supported")
if fit_att:
# estimate ATT on treated by summing over difference between matched neighbors
if self.exact_match_cols is None:
control_neighbors = NearestNeighbors(
n_neighbors=self.num_matches_per_unit,
metric=self.distance_metric,
algorithm="ball_tree",
**self.distance_metric_params,
).fit(control[self._observed_common_causes.columns].values)
distances, indices = control_neighbors.kneighbors(treated[self._observed_common_causes.columns].values)
self.logger.debug("distances:")
self.logger.debug(distances)
att = 0
for i in range(numtreatedunits):
treated_outcome = treated.iloc[i][self._outcome_name].item()
control_outcome = np.mean(control.iloc[indices[i]][self._outcome_name].values)
att += treated_outcome - control_outcome
att /= numtreatedunits
if self._target_units == "att":
est = att
elif self._target_units == "ate":
est = att * numtreatedunits
# Return indices in the original dataframe
self.matched_indices_att = {}
treated_df_index = treated.index.tolist()
for i in range(numtreatedunits):
self.matched_indices_att[treated_df_index[i]] = control.iloc[indices[i]].index.tolist()
else:
grouped = updated_df.groupby(self.exact_match_cols)
att = 0
for name, group in grouped:
treated = group.loc[group[self._treatment_name[0]] == 1]
control = group.loc[group[self._treatment_name[0]] == 0]
if treated.shape[0] == 0:
continue
control_neighbors = NearestNeighbors(
n_neighbors=self.num_matches_per_unit,
metric=self.distance_metric,
algorithm="ball_tree",
**self.distance_metric_params,
).fit(control[self._observed_common_causes.columns].values)
distances, indices = control_neighbors.kneighbors(
treated[self._observed_common_causes.columns].values
)
self.logger.debug("distances:")
self.logger.debug(distances)
for i in range(numtreatedunits):
treated_outcome = treated.iloc[i][self._outcome_name].item()
control_outcome = np.mean(control.iloc[indices[i]][self._outcome_name].values)
att += treated_outcome - control_outcome
# self.matched_indices_att[treated_df_index[i]] = control.iloc[indices[i]].index.tolist()
att /= numtreatedunits
if self._target_units == "att":
est = att
elif self._target_units == "ate":
est = att * numtreatedunits
if fit_atc:
# Now computing ATC
treated_neighbors = NearestNeighbors(
n_neighbors=self.num_matches_per_unit,
metric=self.distance_metric,
algorithm="ball_tree",
**self.distance_metric_params,
).fit(treated[self._observed_common_causes.columns].values)
distances, indices = treated_neighbors.kneighbors(control[self._observed_common_causes.columns].values)
atc = 0
for i in range(numcontrolunits):
control_outcome = control.iloc[i][self._outcome_name].item()
treated_outcome = np.mean(treated.iloc[indices[i]][self._outcome_name].values)
atc += treated_outcome - control_outcome
atc /= numcontrolunits
if self._target_units == "atc":
est = atc
elif self._target_units == "ate":
est += atc * numcontrolunits
est /= numtreatedunits + numcontrolunits
# Return indices in the original dataframe
self.matched_indices_atc = {}
control_df_index = control.index.tolist()
for i in range(numcontrolunits):
self.matched_indices_atc[control_df_index[i]] = treated.iloc[indices[i]].index.tolist()
estimate = CausalEstimate(
estimate=est,
control_value=self._control_value,
treatment_value=self._treatment_value,
target_estimand=self._target_estimand,
realized_estimand_expr=self.symbolic_estimator,
)
return estimate
def construct_symbolic_estimator(self, estimand):
expr = "b: " + ", ".join(estimand.outcome_variable) + "~"
var_list = estimand.treatment_variable + estimand.get_backdoor_variables()
expr += "+".join(var_list)
return expr
| from typing import Any, List, Optional, Union
import numpy as np
import pandas as pd
from sklearn.neighbors import NearestNeighbors
from dowhy.causal_estimator import CausalEstimate, CausalEstimator
from dowhy.causal_identifier import IdentifiedEstimand
class DistanceMatchingEstimator(CausalEstimator):
"""Simple matching estimator for binary treatments based on a distance
metric.
Supports additional parameters as listed below.
"""
# allowed types of distance metric
Valid_Dist_Metric_Params = ["p", "V", "VI", "w"]
def __init__(
self,
identified_estimand: IdentifiedEstimand,
test_significance: bool = False,
evaluate_effect_strength: bool = False,
confidence_intervals: bool = False,
num_null_simulations: int = CausalEstimator.DEFAULT_NUMBER_OF_SIMULATIONS_STAT_TEST,
num_simulations: int = CausalEstimator.DEFAULT_NUMBER_OF_SIMULATIONS_CI,
sample_size_fraction: int = CausalEstimator.DEFAULT_SAMPLE_SIZE_FRACTION,
confidence_level: float = CausalEstimator.DEFAULT_CONFIDENCE_LEVEL,
need_conditional_estimates: Union[bool, str] = "auto",
num_quantiles_to_discretize_cont_cols: int = CausalEstimator.NUM_QUANTILES_TO_DISCRETIZE_CONT_COLS,
num_matches_per_unit: int = 1,
distance_metric: str = "minkowski",
**kwargs,
):
"""
:param identified_estimand: probability expression
representing the target identified estimand to estimate.
:param test_significance: Binary flag or a string indicating whether to test significance and by which method. All estimators support test_significance="bootstrap" that estimates a p-value for the obtained estimate using the bootstrap method. Individual estimators can override this to support custom testing methods. The bootstrap method supports an optional parameter, num_null_simulations. If False, no testing is done. If True, significance of the estimate is tested using the custom method if available, otherwise by bootstrap.
:param evaluate_effect_strength: (Experimental) whether to evaluate the strength of effect
:param confidence_intervals: Binary flag or a string indicating whether the confidence intervals should be computed and which method should be used. All methods support estimation of confidence intervals using the bootstrap method by using the parameter confidence_intervals="bootstrap". The bootstrap method takes in two arguments (num_simulations and sample_size_fraction) that can be optionally specified in the params dictionary. Estimators may also override this to implement their own confidence interval method. If this parameter is False, no confidence intervals are computed. If True, confidence intervals are computed by the estimator's specific method if available, otherwise through bootstrap
:param num_null_simulations: The number of simulations for testing the
statistical significance of the estimator
:param num_simulations: The number of simulations for finding the
            confidence interval (and/or standard error) for an estimate
:param sample_size_fraction: The size of the sample for the bootstrap
estimator
:param confidence_level: The confidence level of the confidence
interval estimate
:param need_conditional_estimates: Boolean flag indicating whether
conditional estimates should be computed. Defaults to True if
there are effect modifiers in the graph
:param num_quantiles_to_discretize_cont_cols: The number of quantiles
into which a numeric effect modifier is split, to enable
estimation of conditional treatment effect over it.
:param num_matches_per_unit: The number of matches per data point.
Default=1.
:param distance_metric: Distance metric to use. Default="minkowski"
that corresponds to Euclidean distance metric with p=2.
:param kwargs: (optional) Additional estimator-specific parameters
"""
super().__init__(
identified_estimand=identified_estimand,
test_significance=test_significance,
evaluate_effect_strength=evaluate_effect_strength,
confidence_intervals=confidence_intervals,
num_null_simulations=num_null_simulations,
num_simulations=num_simulations,
sample_size_fraction=sample_size_fraction,
confidence_level=confidence_level,
need_conditional_estimates=need_conditional_estimates,
num_quantiles_to_discretize_cont_cols=num_quantiles_to_discretize_cont_cols,
num_matches_per_unit=num_matches_per_unit,
distance_metric=distance_metric,
**kwargs,
)
self.num_matches_per_unit = num_matches_per_unit
self.distance_metric = distance_metric
# Dictionary of any user-provided params for the distance metric
# that will be passed to sklearn nearestneighbors
self.distance_metric_params = {}
for param_name in self.Valid_Dist_Metric_Params:
param_val = getattr(self, param_name, None)
if param_val is not None:
self.distance_metric_params[param_name] = param_val
self.logger.info("INFO: Using Distance Matching Estimator")
self.matched_indices_att = None
self.matched_indices_atc = None
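    # Illustrative usage (a sketch, not executed here; `identified_estimand`
    # comes from an earlier identification step such as
    # CausalModel.identify_effect(), and `df`, "v0", "y" are assumed names):
    #
    #   estimator = DistanceMatchingEstimator(identified_estimand, num_matches_per_unit=1)
    #   estimator.fit(df, treatment_name="v0", outcome_name="y")
    #   estimate = estimator.estimate_effect(target_units="att")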
def fit(
self,
data: pd.DataFrame,
treatment_name: str,
outcome_name: str,
exact_match_cols=None,
effect_modifier_names: Optional[List[str]] = None,
):
"""
Fits the estimator with data for effect estimation
:param data: data frame containing the data
        :param treatment_name: name of the treatment variable
        :param outcome_name: name of the outcome variable
        :param exact_match_cols: List of column names whose values should be
            exactly matched. Typically used for columns with discrete values.
        :param effect_modifier_names: Variables on which to compute separate
effects, or return a heterogeneous effect function. Not all
methods support this currently.
"""
self._set_data(data, treatment_name, outcome_name)
self.exact_match_cols = exact_match_cols
self._set_effect_modifiers(effect_modifier_names)
# Check if the treatment is one-dimensional
if len(self._treatment_name) > 1:
            error_msg = str(self.__class__) + " cannot handle more than one treatment variable"
raise Exception(error_msg)
# Checking if the treatment is binary
if not pd.api.types.is_bool_dtype(self._data[self._treatment_name[0]]):
error_msg = "Distance Matching method is applicable only for binary treatments"
self.logger.error(error_msg)
raise Exception(error_msg)
self.logger.debug("Back-door variables used:" + ",".join(self._target_estimand.get_backdoor_variables()))
self._observed_common_causes_names = self._target_estimand.get_backdoor_variables()
if self._observed_common_causes_names:
if self.exact_match_cols is not None:
self._observed_common_causes_names = [
v for v in self._observed_common_causes_names if v not in self.exact_match_cols
]
self._observed_common_causes = self._data[self._observed_common_causes_names]
# Convert the categorical variables into dummy/indicator variables
# Basically, this gives a one hot encoding for each category
# The first category is taken to be the base line.
self._observed_common_causes = pd.get_dummies(self._observed_common_causes, drop_first=True)
else:
self._observed_common_causes = None
error_msg = "No common causes/confounders present. Distance matching methods are not applicable"
self.logger.error(error_msg)
raise Exception(error_msg)
self.symbolic_estimator = self.construct_symbolic_estimator(self._target_estimand)
self.logger.info(self.symbolic_estimator)
return self
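    # Note: when `exact_match_cols` is passed to fit(), estimate_effect() below
    # performs the distance matching separately within each exact-match group.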
def estimate_effect(
self, data: pd.DataFrame = None, treatment_value: Any = 1, control_value: Any = 0, target_units=None, **_
):
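        """
        :param data: data frame on which the effect is estimated; defaults to
            the data passed to fit()
        :param treatment_value: value of the treatment variable for which the
            effect is to be estimated
        :param control_value: value of the treatment variable that denotes its
            absence (usually 0)
        :param target_units: one of "att", "atc" or "ate"; any other value
            raises a ValueError
        """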
if data is None:
data = self._data
# this assumes a binary treatment regime
self._target_units = target_units
self._treatment_value = treatment_value
self._control_value = control_value
updated_df = pd.concat(
[self._observed_common_causes, data[[self._outcome_name, self._treatment_name[0]]]], axis=1
)
if self.exact_match_cols is not None:
updated_df = pd.concat([updated_df, data[self.exact_match_cols]], axis=1)
treated = updated_df.loc[data[self._treatment_name[0]] == 1]
control = updated_df.loc[data[self._treatment_name[0]] == 0]
numtreatedunits = treated.shape[0]
numcontrolunits = control.shape[0]
fit_att, fit_atc = False, False
est = None
# TODO remove neighbors that are more than a given radius apart
if target_units == "att":
fit_att = True
elif target_units == "atc":
fit_atc = True
elif target_units == "ate":
fit_att = True
fit_atc = True
else:
raise ValueError("Target units string value not supported")
if fit_att:
# estimate ATT on treated by summing over difference between matched neighbors
if self.exact_match_cols is None:
control_neighbors = NearestNeighbors(
n_neighbors=self.num_matches_per_unit,
metric=self.distance_metric,
algorithm="ball_tree",
**self.distance_metric_params,
).fit(control[self._observed_common_causes.columns].values)
distances, indices = control_neighbors.kneighbors(treated[self._observed_common_causes.columns].values)
self.logger.debug("distances:")
self.logger.debug(distances)
att = 0
for i in range(numtreatedunits):
treated_outcome = treated.iloc[i][self._outcome_name].item()
control_outcome = np.mean(control.iloc[indices[i]][self._outcome_name].values)
att += treated_outcome - control_outcome
att /= numtreatedunits
if target_units == "att":
est = att
elif target_units == "ate":
est = att * numtreatedunits
# Return indices in the original dataframe
self.matched_indices_att = {}
treated_df_index = treated.index.tolist()
for i in range(numtreatedunits):
self.matched_indices_att[treated_df_index[i]] = control.iloc[indices[i]].index.tolist()
else:
grouped = updated_df.groupby(self.exact_match_cols)
att = 0
for name, group in grouped:
treated = group.loc[group[self._treatment_name[0]] == 1]
control = group.loc[group[self._treatment_name[0]] == 0]
if treated.shape[0] == 0:
continue
control_neighbors = NearestNeighbors(
n_neighbors=self.num_matches_per_unit,
metric=self.distance_metric,
algorithm="ball_tree",
**self.distance_metric_params,
).fit(control[self._observed_common_causes.columns].values)
distances, indices = control_neighbors.kneighbors(
treated[self._observed_common_causes.columns].values
)
self.logger.debug("distances:")
self.logger.debug(distances)
                for i in range(treated.shape[0]):  # iterate over the treated units in this exact-match group
treated_outcome = treated.iloc[i][self._outcome_name].item()
control_outcome = np.mean(control.iloc[indices[i]][self._outcome_name].values)
att += treated_outcome - control_outcome
# self.matched_indices_att[treated_df_index[i]] = control.iloc[indices[i]].index.tolist()
att /= numtreatedunits
if target_units == "att":
est = att
elif target_units == "ate":
est = att * numtreatedunits
if fit_atc:
# Now computing ATC
treated_neighbors = NearestNeighbors(
n_neighbors=self.num_matches_per_unit,
metric=self.distance_metric,
algorithm="ball_tree",
**self.distance_metric_params,
).fit(treated[self._observed_common_causes.columns].values)
distances, indices = treated_neighbors.kneighbors(control[self._observed_common_causes.columns].values)
atc = 0
for i in range(numcontrolunits):
control_outcome = control.iloc[i][self._outcome_name].item()
treated_outcome = np.mean(treated.iloc[indices[i]][self._outcome_name].values)
atc += treated_outcome - control_outcome
atc /= numcontrolunits
if target_units == "atc":
est = atc
elif target_units == "ate":
est += atc * numcontrolunits
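                # ATE is the sample-size-weighted average of ATT and ATC:
                # (att * n_treated + atc * n_control) / (n_treated + n_control)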
est /= numtreatedunits + numcontrolunits
# Return indices in the original dataframe
self.matched_indices_atc = {}
control_df_index = control.index.tolist()
for i in range(numcontrolunits):
self.matched_indices_atc[control_df_index[i]] = treated.iloc[indices[i]].index.tolist()
estimate = CausalEstimate(
estimate=est,
control_value=control_value,
treatment_value=treatment_value,
target_estimand=self._target_estimand,
realized_estimand_expr=self.symbolic_estimator,
)
estimate.add_estimator(self)
return estimate
def construct_symbolic_estimator(self, estimand):
expr = "b: " + ", ".join(estimand.outcome_variable) + "~"
var_list = estimand.treatment_variable + estimand.get_backdoor_variables()
expr += "+".join(var_list)
return expr
| andresmor-ms | 11c4e0dafd6e824eb81ad14262457d954ae61468 | affe0952f4aba6845247355c171565510c2c1673 | Will rename in future PR as there are other places where the `effect()` function exists. | andresmor-ms | 217
py-why/dowhy | 746 | Functional api/causal estimators | * Introduce `fit()` method to estimators.
* Refactor constructors to avoid using `*args` and `**kwargs` and have more explicit parameters.
* Refactor refuters and other parts of the code to use `fit()` and modify arguments to `estimate_effect()` | null | 2022-11-04 16:15:39+00:00 | 2022-12-03 17:07:53+00:00 | dowhy/causal_estimators/econml.py | import inspect
from importlib import import_module
from typing import Callable
import numpy as np
import pandas as pd
from numpy.distutils.misc_util import is_sequence
from dowhy.causal_estimator import CausalEstimate, CausalEstimator
from dowhy.utils.api import parse_state
class Econml(CausalEstimator):
"""Wrapper class for estimators from the EconML library.
For a list of standard args and kwargs, see documentation for
:class:`~dowhy.causal_estimator.CausalEstimator`.
Supports additional parameters as listed below. For init and fit
parameters of each estimator, refer to the EconML docs.
"""
def __init__(self, *args, econml_methodname, **kwargs):
"""
:param econml_methodname: Fully qualified name of econml estimator
class. For example, 'econml.dml.DML'
"""
# Required to ensure that self.method_params contains all the
# parameters to create an object of this class
args_dict = {k: v for k, v in locals().items() if k not in type(self)._STD_INIT_ARGS}
args_dict.update(kwargs)
super().__init__(*args, **args_dict)
self._econml_methodname = econml_methodname
self.logger.info("INFO: Using EconML Estimator")
self.identifier_method = self._target_estimand.identifier_method
self._observed_common_causes_names = self._target_estimand.get_backdoor_variables().copy()
# Enforcing this ordering is necessary to feed through the propensity values from dataset
self._observed_common_causes_names = [
c for c in self._observed_common_causes_names if "propensity" not in c
] + sorted([c for c in self._observed_common_causes_names if "propensity" in c])
# For metalearners only--issue a warning if w contains variables not in x
(module_name, _, class_name) = self._econml_methodname.rpartition(".")
if module_name.endswith("metalearners"):
effect_modifier_names = []
if self._effect_modifier_names is not None:
effect_modifier_names = self._effect_modifier_names.copy()
w_diff_x = [w for w in self._observed_common_causes_names if w not in effect_modifier_names]
if len(w_diff_x) > 0:
self.logger.warn(
"Concatenating common_causes and effect_modifiers and providing a single list of variables to metalearner estimator method, "
+ class_name
+ ". EconML metalearners accept a single X argument."
)
effect_modifier_names.extend(w_diff_x)
# Override the effect_modifiers set in CausalEstimator.__init__()
# Also only update self._effect_modifiers, and create a copy of self._effect_modifier_names
# the latter can be used by other estimator methods later
self._effect_modifiers = self._data[effect_modifier_names]
self._effect_modifiers = pd.get_dummies(self._effect_modifiers, drop_first=True)
self._effect_modifier_names = effect_modifier_names
self.logger.debug("Effect modifiers: " + ",".join(effect_modifier_names))
if self._observed_common_causes_names:
self._observed_common_causes = self._data[self._observed_common_causes_names]
self._observed_common_causes = pd.get_dummies(self._observed_common_causes, drop_first=True)
else:
self._observed_common_causes = None
self.logger.debug("Back-door variables used:" + ",".join(self._observed_common_causes_names))
# Instrumental variables names, if present
# choosing the instrumental variable to use
if getattr(self, "iv_instrument_name", None) is None:
self.estimating_instrument_names = self._target_estimand.instrumental_variables
else:
self.estimating_instrument_names = parse_state(self.iv_instrument_name)
if self.estimating_instrument_names:
self._estimating_instruments = self._data[self.estimating_instrument_names]
self._estimating_instruments = pd.get_dummies(self._estimating_instruments, drop_first=True)
else:
self._estimating_instruments = None
self.estimator = None
self.symbolic_estimator = self.construct_symbolic_estimator(self._target_estimand)
self.logger.info(self.symbolic_estimator)
def _get_econml_class_object(self, module_method_name, *args, **kwargs):
# from https://www.bnmetrics.com/blog/factory-pattern-in-python3-simple-version
try:
(module_name, _, class_name) = module_method_name.rpartition(".")
estimator_module = import_module(module_name)
estimator_class = getattr(estimator_module, class_name)
except (AttributeError, AssertionError, ImportError):
raise ImportError(
"Error loading {}.{}. Double-check the method name and ensure that all econml dependencies are installed.".format(
module_name, class_name
)
)
return estimator_class
def _estimate_effect(self):
n_samples = self._treatment.shape[0]
X = None # Effect modifiers
W = None # common causes/ confounders
Z = None # Instruments
Y = self._outcome
T = self._treatment
if self._effect_modifiers is not None:
X = self._effect_modifiers
if self._observed_common_causes_names:
W = self._observed_common_causes
if self.estimating_instrument_names:
Z = self._estimating_instruments
named_data_args = {"Y": Y, "T": T, "X": X, "W": W, "Z": Z}
if self.estimator is None:
estimator_class = self._get_econml_class_object(self._econml_methodname)
self.estimator = estimator_class(**self.method_params["init_params"])
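            # Lazy instantiation: the econml estimator object is created on the
            # first call to _estimate_effect(), from the fully qualified class
            # name and the user-supplied init_params.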
# Calling the econml estimator's fit method
estimator_argspec = inspect.getfullargspec(inspect.unwrap(self.estimator.fit))
        # As of v0.9, econml has some keyword-only arguments
estimator_named_args = estimator_argspec.args + estimator_argspec.kwonlyargs
estimator_data_args = {
arg: named_data_args[arg] for arg in named_data_args.keys() if arg in estimator_named_args
}
if self.method_params["fit_params"] is not False:
self.estimator.fit(**estimator_data_args, **self.method_params["fit_params"])
X_test = X
if X is not None:
if type(self._target_units) is pd.DataFrame:
X_test = self._target_units
elif callable(self._target_units):
filtered_rows = self._data.where(self._target_units)
boolean_criterion = np.array(filtered_rows.notnull().iloc[:, 0])
X_test = X[boolean_criterion]
# Changing shape to a list for a singleton value
self._treatment_value = parse_state(self._treatment_value)
est = self.effect(X_test)
ate = np.mean(est, axis=0) # one value per treatment value
if len(ate) == 1:
ate = ate[0]
if self._confidence_intervals:
self.effect_intervals = self.effect_interval(X_test)
else:
self.effect_intervals = None
estimate = CausalEstimate(
estimate=ate,
control_value=self._control_value,
treatment_value=self._treatment_value,
target_estimand=self._target_estimand,
realized_estimand_expr=self.symbolic_estimator,
cate_estimates=est,
effect_intervals=self.effect_intervals,
_estimator_object=self.estimator,
)
return estimate
def _estimate_confidence_intervals(self, confidence_level=None, method=None):
"""Returns None if the confidence interval has not been calculated."""
return self.effect_intervals
def _do(self, x):
raise NotImplementedError
def construct_symbolic_estimator(self, estimand):
expr = "b: " + ", ".join(estimand.outcome_variable) + "~"
# TODO -- fix: we are actually conditioning on positive treatment (d=1)
(module_name, _, class_name) = self._econml_methodname.rpartition(".")
if module_name.endswith("metalearners"):
var_list = estimand.treatment_variable + self._effect_modifier_names
expr += "+".join(var_list)
else:
var_list = estimand.treatment_variable + self._observed_common_causes_names
expr += "+".join(var_list)
expr += " | " + ",".join(self._effect_modifier_names)
return expr
def shap_values(self, df: pd.DataFrame, *args, **kwargs):
return self.estimator.shap_values(df[self._effect_modifier_names].values, *args, **kwargs)
def apply_multitreatment(self, df: pd.DataFrame, fun: Callable, *args, **kwargs):
ests = []
assert not isinstance(self._treatment_value, str)
assert is_sequence(self._treatment_value)
if df is None:
filtered_df = None
else:
filtered_df = df[self._effect_modifier_names].values
for tv in self._treatment_value:
ests.append(
fun(
filtered_df,
T0=self._control_value,
T1=tv,
*args,
**kwargs,
)
)
est = np.stack(ests, axis=1)
return est
def effect(self, df: pd.DataFrame, *args, **kwargs) -> np.ndarray:
"""
Pointwise estimated treatment effect,
output shape n_units x n_treatment_values (not counting control)
:param df: Features of the units to evaluate
:param args: passed through to the underlying estimator
:param kwargs: passed through to the underlying estimator
"""
def effect_fun(filtered_df, T0, T1, *args, **kwargs):
return self.estimator.effect(filtered_df, T0=T0, T1=T1, *args, **kwargs)
return self.apply_multitreatment(df, effect_fun, *args, **kwargs)
def effect_interval(self, df: pd.DataFrame, *args, **kwargs) -> np.ndarray:
"""
Pointwise confidence intervals for the estimated treatment effect
:param df: Features of the units to evaluate
:param args: passed through to the underlying estimator
:param kwargs: passed through to the underlying estimator
"""
def effect_interval_fun(filtered_df, T0, T1, *args, **kwargs):
return self.estimator.effect_interval(
filtered_df, T0=T0, T1=T1, alpha=1 - self.confidence_level, *args, **kwargs
)
return self.apply_multitreatment(df, effect_interval_fun, *args, **kwargs)
def effect_inference(self, df: pd.DataFrame, *args, **kwargs):
"""
Inference (uncertainty) results produced by the underlying EconML estimator
:param df: Features of the units to evaluate
:param args: passed through to the underlying estimator
:param kwargs: passed through to the underlying estimator
"""
def effect_inference_fun(filtered_df, T0, T1, *args, **kwargs):
return self.estimator.effect_inference(filtered_df, T0=T0, T1=T1, *args, **kwargs)
return self.apply_multitreatment(df, effect_inference_fun, *args, **kwargs)
def effect_tt(self, df: pd.DataFrame, *args, **kwargs):
"""
Effect of the actual treatment that was applied to each unit
("effect of Treatment on the Treated")
:param df: Features of the units to evaluate
:param args: passed through to estimator.effect()
:param kwargs: passed through to estimator.effect()
"""
eff = self.effect(df, *args, **kwargs).reshape((len(df), len(self._treatment_value)))
out = np.zeros(len(df))
treatment_value = parse_state(self._treatment_value)
treatment_name = parse_state(self._treatment_name)[0]
eff = np.reshape(eff, (len(df), len(treatment_value)))
# For each unit, return the estimated effect of the treatment value
# that was actually applied to the unit
for c, col in enumerate(treatment_value):
out[df[treatment_name] == col] = eff[df[treatment_name] == col, c]
return pd.Series(data=out, index=df.index)
| import inspect
from importlib import import_module
from typing import Any, Callable, List, Optional, Protocol, Union
from warnings import warn
import numpy as np
import pandas as pd
from numpy.distutils.misc_util import is_sequence
from dowhy.causal_estimator import CausalEstimate, CausalEstimator
from dowhy.causal_identifier import IdentifiedEstimand
from dowhy.utils.api import parse_state
class _EconmlEstimator(Protocol):
def fit(self, *args, **kwargs):
...
def effect(self, *args, **kwargs):
...
def effect_interval(self, *args, **kwargs):
...
def effect_inference(self, *args, **kwargs):
...
def shap_values(self, *args, **kwargs):
...
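# The _EconmlEstimator Protocol above is used purely for structural typing:
# any object exposing fit/effect/effect_interval/effect_inference/shap_values
# (as econml estimators do) can be passed to Econml as `econml_estimator`.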
class Econml(CausalEstimator):
"""Wrapper class for estimators from the EconML library.
Supports additional parameters as listed below. For init and fit
parameters of each estimator, refer to the EconML docs.
"""
def __init__(
self,
identified_estimand: IdentifiedEstimand,
econml_estimator: Union[_EconmlEstimator, str],
test_significance: bool = False,
evaluate_effect_strength: bool = False,
confidence_intervals: bool = False,
num_null_simulations: int = CausalEstimator.DEFAULT_NUMBER_OF_SIMULATIONS_STAT_TEST,
num_simulations: int = CausalEstimator.DEFAULT_NUMBER_OF_SIMULATIONS_CI,
sample_size_fraction: int = CausalEstimator.DEFAULT_SAMPLE_SIZE_FRACTION,
confidence_level: float = CausalEstimator.DEFAULT_CONFIDENCE_LEVEL,
need_conditional_estimates: Union[bool, str] = "auto",
num_quantiles_to_discretize_cont_cols: int = CausalEstimator.NUM_QUANTILES_TO_DISCRETIZE_CONT_COLS,
**kwargs,
):
"""
:param identified_estimand: probability expression
representing the target identified estimand to estimate.
        :param econml_estimator: Instance of an econml estimator class, or the
            fully qualified class name as a string (deprecated). For example, 'econml.dml.DML'
:param test_significance: Binary flag or a string indicating whether to test significance and by which method. All estimators support test_significance="bootstrap" that estimates a p-value for the obtained estimate using the bootstrap method. Individual estimators can override this to support custom testing methods. The bootstrap method supports an optional parameter, num_null_simulations. If False, no testing is done. If True, significance of the estimate is tested using the custom method if available, otherwise by bootstrap.
:param evaluate_effect_strength: (Experimental) whether to evaluate the strength of effect
:param confidence_intervals: Binary flag or a string indicating whether the confidence intervals should be computed and which method should be used. All methods support estimation of confidence intervals using the bootstrap method by using the parameter confidence_intervals="bootstrap". The bootstrap method takes in two arguments (num_simulations and sample_size_fraction) that can be optionally specified in the params dictionary. Estimators may also override this to implement their own confidence interval method. If this parameter is False, no confidence intervals are computed. If True, confidence intervals are computed by the estimator's specific method if available, otherwise through bootstrap
:param num_null_simulations: The number of simulations for testing the
statistical significance of the estimator
:param num_simulations: The number of simulations for finding the
            confidence interval (and/or standard error) for an estimate
:param sample_size_fraction: The size of the sample for the bootstrap
estimator
:param confidence_level: The confidence level of the confidence
interval estimate
:param need_conditional_estimates: Boolean flag indicating whether
conditional estimates should be computed. Defaults to True if
there are effect modifiers in the graph
:param num_quantiles_to_discretize_cont_cols: The number of quantiles
into which a numeric effect modifier is split, to enable
estimation of conditional treatment effect over it.
:param kwargs: (optional) Additional estimator-specific parameters
"""
super().__init__(
identified_estimand=identified_estimand,
test_significance=test_significance,
evaluate_effect_strength=evaluate_effect_strength,
confidence_intervals=confidence_intervals,
num_null_simulations=num_null_simulations,
num_simulations=num_simulations,
sample_size_fraction=sample_size_fraction,
confidence_level=confidence_level,
need_conditional_estimates=need_conditional_estimates,
num_quantiles_to_discretize_cont_cols=num_quantiles_to_discretize_cont_cols,
econml_estimator=econml_estimator,
**kwargs,
)
if isinstance(econml_estimator, str):
warn(
"Using a string to specify the value for econml_estimator is now deprecated, please provide an instance of a econml object",
DeprecationWarning,
stacklevel=2,
)
estimator_class = self._get_econml_class_object(econml_estimator)
self.estimator = estimator_class(**kwargs["init_params"])
else:
self.estimator = econml_estimator
self.logger.info("INFO: Using EconML Estimator")
self.identifier_method = self._target_estimand.identifier_method
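    # Illustrative usage (a sketch; the econml estimator class and the data
    # names below are assumptions, not fixed by this wrapper):
    #
    #   from econml.dml import LinearDML
    #   estimator = Econml(identified_estimand, econml_estimator=LinearDML())
    #   estimator.fit(df, treatment_name="v0", outcome_name="y",
    #                 effect_modifier_names=["X0"])
    #   estimate = estimator.estimate_effect(treatment_value=1, control_value=0)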
def fit(
self,
data: pd.DataFrame,
treatment_name: str,
outcome_name: str,
effect_modifier_names: Optional[List[str]] = None,
**kwargs,
):
"""
Fits the estimator with data for effect estimation
:param data: data frame containing the data
        :param treatment_name: name of the treatment variable
        :param outcome_name: name of the outcome variable
        :param effect_modifier_names: Variables on which to compute separate
            effects, or return a heterogeneous effect function. Not all
            methods support this currently.
        :param kwargs: remaining keyword arguments are forwarded to the
            wrapped econml estimator's fit() method
"""
self._set_data(data, treatment_name, outcome_name)
self._set_effect_modifiers(effect_modifier_names)
        # Save parameters for later refuter fitting
self._econml_fit_params = kwargs
self._observed_common_causes_names = self._target_estimand.get_backdoor_variables().copy()
# Enforcing this ordering is necessary to feed through the propensity values from dataset
self._observed_common_causes_names = [
c for c in self._observed_common_causes_names if "propensity" not in c
] + sorted([c for c in self._observed_common_causes_names if "propensity" in c])
# For metalearners only--issue a warning if w contains variables not in x
if self.estimator.__module__.endswith("metalearners"):
effect_modifier_names = []
if self._effect_modifier_names is not None:
effect_modifier_names = self._effect_modifier_names.copy()
w_diff_x = [w for w in self._observed_common_causes_names if w not in effect_modifier_names]
if len(w_diff_x) > 0:
self.logger.warn(
"Concatenating common_causes and effect_modifiers and providing a single list of variables to metalearner estimator method, "
+ self.estimator.__class__.__name__
+ ". EconML metalearners accept a single X argument."
)
effect_modifier_names.extend(w_diff_x)
# Override the effect_modifiers set in CausalEstimator.__init__()
# Also only update self._effect_modifiers, and create a copy of self._effect_modifier_names
# the latter can be used by other estimator methods later
self._effect_modifiers = self._data[effect_modifier_names]
self._effect_modifiers = pd.get_dummies(self._effect_modifiers, drop_first=True)
self._effect_modifier_names = effect_modifier_names
self.logger.debug("Effect modifiers: " + ",".join(effect_modifier_names))
if self._observed_common_causes_names:
self._observed_common_causes = self._data[self._observed_common_causes_names]
self._observed_common_causes = pd.get_dummies(self._observed_common_causes, drop_first=True)
else:
self._observed_common_causes = None
self.logger.debug("Back-door variables used:" + ",".join(self._observed_common_causes_names))
# Instrumental variables names, if present
# choosing the instrumental variable to use
if getattr(self, "iv_instrument_name", None) is None:
self.estimating_instrument_names = self._target_estimand.instrumental_variables
else:
self.estimating_instrument_names = parse_state(self.iv_instrument_name)
if self.estimating_instrument_names:
self._estimating_instruments = self._data[self.estimating_instrument_names]
self._estimating_instruments = pd.get_dummies(self._estimating_instruments, drop_first=True)
else:
self._estimating_instruments = None
self.symbolic_estimator = self.construct_symbolic_estimator(self._target_estimand)
self.logger.info(self.symbolic_estimator)
X = None
W = None # common causes/ confounders
Z = None # Instruments
Y = self._outcome
T = self._treatment
if self._effect_modifiers is not None and len(self._effect_modifiers) > 0:
X = self._effect_modifiers
if self._observed_common_causes_names:
W = self._observed_common_causes
if self.estimating_instrument_names:
Z = self._estimating_instruments
named_data_args = {"Y": Y, "T": T, "X": X, "W": W, "Z": Z}
# Calling the econml estimator's fit method
estimator_argspec = inspect.getfullargspec(inspect.unwrap(self.estimator.fit))
        # As of v0.9, econml has some keyword-only arguments
estimator_named_args = estimator_argspec.args + estimator_argspec.kwonlyargs
estimator_data_args = {
arg: named_data_args[arg] for arg in named_data_args.keys() if arg in estimator_named_args
}
self.estimator.fit(**estimator_data_args, **kwargs)
return self
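    # Any extra keyword arguments given to fit() are forwarded verbatim to the
    # wrapped econml estimator's own fit() (e.g. its inference options).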
def _get_econml_class_object(self, module_method_name, *args, **kwargs):
# from https://www.bnmetrics.com/blog/factory-pattern-in-python3-simple-version
try:
(module_name, _, class_name) = module_method_name.rpartition(".")
estimator_module = import_module(module_name)
estimator_class = getattr(estimator_module, class_name)
except (AttributeError, AssertionError, ImportError):
raise ImportError(
"Error loading {}.{}. Double-check the method name and ensure that all econml dependencies are installed.".format(
module_name, class_name
)
)
return estimator_class
def estimate_effect(
self, data: pd.DataFrame = None, treatment_value: Any = 1, control_value: Any = 0, target_units=None, **_
):
"""
        :param data: dataframe containing the data on which treatment effect is to be estimated.
        :param treatment_value: value of the treatment variable for which the effect is to be estimated.
        :param control_value: value of the treatment variable that denotes its absence (usually 0)
        :param target_units: The units for which the treatment effect should be estimated.
            It can be a DataFrame that contains values of the effect_modifiers and effect will be estimated only for this new data.
            It can also be a lambda function that can be used as an index for the data (pandas DataFrame) to select the required rows.
"""
if data is None:
data = self._data
self._target_units = target_units
self._treatment_value = treatment_value
self._control_value = control_value
n_samples = self._treatment.shape[0]
X = None # Effect modifiers
if self._effect_modifiers is not None and len(self._effect_modifiers) > 0:
X = self._effect_modifiers
X_test = X
if X is not None:
if type(target_units) is pd.DataFrame:
X_test = target_units
elif callable(target_units):
filtered_rows = data.where(target_units)
boolean_criterion = np.array(filtered_rows.notnull().iloc[:, 0])
X_test = X[boolean_criterion]
# Changing shape to a list for a singleton value
self._treatment_value = parse_state(self._treatment_value)
est = self.effect(X_test)
ate = np.mean(est, axis=0) # one value per treatment value
if len(ate) == 1:
ate = ate[0]
if self._confidence_intervals:
self.effect_intervals = self.effect_interval(X_test)
else:
self.effect_intervals = None
estimate = CausalEstimate(
estimate=ate,
control_value=control_value,
treatment_value=treatment_value,
target_estimand=self._target_estimand,
realized_estimand_expr=self.symbolic_estimator,
cate_estimates=est,
effect_intervals=self.effect_intervals,
_estimator_object=self.estimator,
)
estimate.add_estimator(self)
return estimate
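    # Illustrative target_units usage (a sketch; the column name "X0" is an
    # assumption): restrict estimation to rows where X0 > 1,
    #
    #   estimate = estimator.estimate_effect(
    #       treatment_value=1, control_value=0,
    #       target_units=lambda df: df["X0"] > 1,
    #   )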
def _estimate_confidence_intervals(self, confidence_level=None, method=None):
"""Returns None if the confidence interval has not been calculated."""
return self.effect_intervals
def _do(self, x):
raise NotImplementedError
def construct_symbolic_estimator(self, estimand):
expr = "b: " + ", ".join(estimand.outcome_variable) + "~"
# TODO -- fix: we are actually conditioning on positive treatment (d=1)
if self.estimator.__module__.endswith("metalearners"):
var_list = estimand.treatment_variable + self._effect_modifier_names
expr += "+".join(var_list)
else:
var_list = estimand.treatment_variable + self._observed_common_causes_names
expr += "+".join(var_list)
expr += " | " + ",".join(self._effect_modifier_names)
return expr
def shap_values(self, df: pd.DataFrame, *args, **kwargs):
return self.estimator.shap_values(df[self._effect_modifier_names].values, *args, **kwargs)
def apply_multitreatment(self, df: pd.DataFrame, fun: Callable, *args, **kwargs):
ests = []
assert not isinstance(self._treatment_value, str)
assert is_sequence(self._treatment_value)
if df is None:
filtered_df = None
else:
filtered_df = df[self._effect_modifier_names].values
for tv in self._treatment_value:
ests.append(
fun(
filtered_df,
T0=self._control_value,
T1=tv,
*args,
**kwargs,
)
)
est = np.stack(ests, axis=1)
return est
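    # apply_multitreatment() returns an array of shape
    # (n_units, len(self._treatment_value)): one column per treatment value,
    # each compared against the single control value.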
def effect(self, df: pd.DataFrame, *args, **kwargs) -> np.ndarray:
"""
Pointwise estimated treatment effect,
output shape n_units x n_treatment_values (not counting control)
:param df: Features of the units to evaluate
:param args: passed through to the underlying estimator
:param kwargs: passed through to the underlying estimator
"""
def effect_fun(filtered_df, T0, T1, *args, **kwargs):
return self.estimator.effect(filtered_df, T0=T0, T1=T1, *args, **kwargs)
return self.apply_multitreatment(df, effect_fun, *args, **kwargs)
def effect_interval(self, df: pd.DataFrame, *args, **kwargs) -> np.ndarray:
"""
Pointwise confidence intervals for the estimated treatment effect
:param df: Features of the units to evaluate
:param args: passed through to the underlying estimator
:param kwargs: passed through to the underlying estimator
"""
def effect_interval_fun(filtered_df, T0, T1, *args, **kwargs):
return self.estimator.effect_interval(
filtered_df, T0=T0, T1=T1, alpha=1 - self.confidence_level, *args, **kwargs
)
return self.apply_multitreatment(df, effect_interval_fun, *args, **kwargs)
def effect_inference(self, df: pd.DataFrame, *args, **kwargs):
"""
Inference (uncertainty) results produced by the underlying EconML estimator
:param df: Features of the units to evaluate
:param args: passed through to the underlying estimator
:param kwargs: passed through to the underlying estimator
"""
def effect_inference_fun(filtered_df, T0, T1, *args, **kwargs):
return self.estimator.effect_inference(filtered_df, T0=T0, T1=T1, *args, **kwargs)
return self.apply_multitreatment(df, effect_inference_fun, *args, **kwargs)
def effect_tt(self, df: pd.DataFrame, *args, **kwargs):
"""
Effect of the actual treatment that was applied to each unit
("effect of Treatment on the Treated")
:param df: Features of the units to evaluate
:param args: passed through to estimator.effect()
:param kwargs: passed through to estimator.effect()
"""
eff = self.effect(df, *args, **kwargs).reshape((len(df), len(self._treatment_value)))
out = np.zeros(len(df))
treatment_value = parse_state(self._treatment_value)
treatment_name = parse_state(self._treatment_name)[0]
eff = np.reshape(eff, (len(df), len(treatment_value)))
# For each unit, return the estimated effect of the treatment value
# that was actually applied to the unit
for c, col in enumerate(treatment_value):
out[df[treatment_name] == col] = eff[df[treatment_name] == col, c]
return pd.Series(data=out, index=df.index)
| andresmor-ms | 11c4e0dafd6e824eb81ad14262457d954ae61468 | affe0952f4aba6845247355c171565510c2c1673 | needs a docstring | amit-sharma | 218 |
py-why/dowhy | 746 | Functional api/causal estimators | * Introduce `fit()` method to estimators.
* Refactor constructors to avoid using `*args` and `**kwargs` and have more explicit parameters.
* Refactor refuters and other parts of the code to use `fit()` and modify arguments to `estimate_effect()` | null | 2022-11-04 16:15:39+00:00 | 2022-12-03 17:07:53+00:00 | dowhy/causal_estimators/econml.py | import inspect
from importlib import import_module
from typing import Callable
import numpy as np
import pandas as pd
from numpy.distutils.misc_util import is_sequence
from dowhy.causal_estimator import CausalEstimate, CausalEstimator
from dowhy.utils.api import parse_state
class Econml(CausalEstimator):
"""Wrapper class for estimators from the EconML library.
For a list of standard args and kwargs, see documentation for
:class:`~dowhy.causal_estimator.CausalEstimator`.
Supports additional parameters as listed below. For init and fit
parameters of each estimator, refer to the EconML docs.
"""
def __init__(self, *args, econml_methodname, **kwargs):
"""
:param econml_methodname: Fully qualified name of econml estimator
class. For example, 'econml.dml.DML'
"""
# Required to ensure that self.method_params contains all the
# parameters to create an object of this class
args_dict = {k: v for k, v in locals().items() if k not in type(self)._STD_INIT_ARGS}
args_dict.update(kwargs)
super().__init__(*args, **args_dict)
self._econml_methodname = econml_methodname
self.logger.info("INFO: Using EconML Estimator")
self.identifier_method = self._target_estimand.identifier_method
self._observed_common_causes_names = self._target_estimand.get_backdoor_variables().copy()
# Enforcing this ordering is necessary to feed through the propensity values from dataset
self._observed_common_causes_names = [
c for c in self._observed_common_causes_names if "propensity" not in c
] + sorted([c for c in self._observed_common_causes_names if "propensity" in c])
# For metalearners only--issue a warning if w contains variables not in x
(module_name, _, class_name) = self._econml_methodname.rpartition(".")
if module_name.endswith("metalearners"):
effect_modifier_names = []
if self._effect_modifier_names is not None:
effect_modifier_names = self._effect_modifier_names.copy()
w_diff_x = [w for w in self._observed_common_causes_names if w not in effect_modifier_names]
if len(w_diff_x) > 0:
self.logger.warn(
"Concatenating common_causes and effect_modifiers and providing a single list of variables to metalearner estimator method, "
+ class_name
+ ". EconML metalearners accept a single X argument."
)
effect_modifier_names.extend(w_diff_x)
# Override the effect_modifiers set in CausalEstimator.__init__()
# Also only update self._effect_modifiers, and create a copy of self._effect_modifier_names
# the latter can be used by other estimator methods later
self._effect_modifiers = self._data[effect_modifier_names]
self._effect_modifiers = pd.get_dummies(self._effect_modifiers, drop_first=True)
self._effect_modifier_names = effect_modifier_names
self.logger.debug("Effect modifiers: " + ",".join(effect_modifier_names))
if self._observed_common_causes_names:
self._observed_common_causes = self._data[self._observed_common_causes_names]
self._observed_common_causes = pd.get_dummies(self._observed_common_causes, drop_first=True)
else:
self._observed_common_causes = None
self.logger.debug("Back-door variables used:" + ",".join(self._observed_common_causes_names))
# Instrumental variables names, if present
# choosing the instrumental variable to use
if getattr(self, "iv_instrument_name", None) is None:
self.estimating_instrument_names = self._target_estimand.instrumental_variables
else:
self.estimating_instrument_names = parse_state(self.iv_instrument_name)
if self.estimating_instrument_names:
self._estimating_instruments = self._data[self.estimating_instrument_names]
self._estimating_instruments = pd.get_dummies(self._estimating_instruments, drop_first=True)
else:
self._estimating_instruments = None
self.estimator = None
self.symbolic_estimator = self.construct_symbolic_estimator(self._target_estimand)
self.logger.info(self.symbolic_estimator)
def _get_econml_class_object(self, module_method_name, *args, **kwargs):
# from https://www.bnmetrics.com/blog/factory-pattern-in-python3-simple-version
try:
(module_name, _, class_name) = module_method_name.rpartition(".")
estimator_module = import_module(module_name)
estimator_class = getattr(estimator_module, class_name)
except (AttributeError, AssertionError, ImportError):
raise ImportError(
"Error loading {}.{}. Double-check the method name and ensure that all econml dependencies are installed.".format(
module_name, class_name
)
)
return estimator_class
def _estimate_effect(self):
n_samples = self._treatment.shape[0]
X = None # Effect modifiers
W = None # common causes/ confounders
Z = None # Instruments
Y = self._outcome
T = self._treatment
if self._effect_modifiers is not None:
X = self._effect_modifiers
if self._observed_common_causes_names:
W = self._observed_common_causes
if self.estimating_instrument_names:
Z = self._estimating_instruments
named_data_args = {"Y": Y, "T": T, "X": X, "W": W, "Z": Z}
if self.estimator is None:
estimator_class = self._get_econml_class_object(self._econml_methodname)
self.estimator = estimator_class(**self.method_params["init_params"])
# Calling the econml estimator's fit method
estimator_argspec = inspect.getfullargspec(inspect.unwrap(self.estimator.fit))
        # As of v0.9, econml has some keyword-only arguments
estimator_named_args = estimator_argspec.args + estimator_argspec.kwonlyargs
estimator_data_args = {
arg: named_data_args[arg] for arg in named_data_args.keys() if arg in estimator_named_args
}
if self.method_params["fit_params"] is not False:
self.estimator.fit(**estimator_data_args, **self.method_params["fit_params"])
X_test = X
if X is not None:
if type(self._target_units) is pd.DataFrame:
X_test = self._target_units
elif callable(self._target_units):
filtered_rows = self._data.where(self._target_units)
boolean_criterion = np.array(filtered_rows.notnull().iloc[:, 0])
X_test = X[boolean_criterion]
# Changing shape to a list for a singleton value
self._treatment_value = parse_state(self._treatment_value)
est = self.effect(X_test)
ate = np.mean(est, axis=0) # one value per treatment value
if len(ate) == 1:
ate = ate[0]
if self._confidence_intervals:
self.effect_intervals = self.effect_interval(X_test)
else:
self.effect_intervals = None
estimate = CausalEstimate(
estimate=ate,
control_value=self._control_value,
treatment_value=self._treatment_value,
target_estimand=self._target_estimand,
realized_estimand_expr=self.symbolic_estimator,
cate_estimates=est,
effect_intervals=self.effect_intervals,
_estimator_object=self.estimator,
)
return estimate
def _estimate_confidence_intervals(self, confidence_level=None, method=None):
"""Returns None if the confidence interval has not been calculated."""
return self.effect_intervals
def _do(self, x):
raise NotImplementedError
def construct_symbolic_estimator(self, estimand):
expr = "b: " + ", ".join(estimand.outcome_variable) + "~"
# TODO -- fix: we are actually conditioning on positive treatment (d=1)
(module_name, _, class_name) = self._econml_methodname.rpartition(".")
if module_name.endswith("metalearners"):
var_list = estimand.treatment_variable + self._effect_modifier_names
expr += "+".join(var_list)
else:
var_list = estimand.treatment_variable + self._observed_common_causes_names
expr += "+".join(var_list)
expr += " | " + ",".join(self._effect_modifier_names)
return expr
def shap_values(self, df: pd.DataFrame, *args, **kwargs):
return self.estimator.shap_values(df[self._effect_modifier_names].values, *args, **kwargs)
def apply_multitreatment(self, df: pd.DataFrame, fun: Callable, *args, **kwargs):
ests = []
assert not isinstance(self._treatment_value, str)
assert is_sequence(self._treatment_value)
if df is None:
filtered_df = None
else:
filtered_df = df[self._effect_modifier_names].values
for tv in self._treatment_value:
ests.append(
fun(
filtered_df,
T0=self._control_value,
T1=tv,
*args,
**kwargs,
)
)
est = np.stack(ests, axis=1)
return est
def effect(self, df: pd.DataFrame, *args, **kwargs) -> np.ndarray:
"""
Pointwise estimated treatment effect,
output shape n_units x n_treatment_values (not counting control)
:param df: Features of the units to evaluate
:param args: passed through to the underlying estimator
:param kwargs: passed through to the underlying estimator
"""
def effect_fun(filtered_df, T0, T1, *args, **kwargs):
return self.estimator.effect(filtered_df, T0=T0, T1=T1, *args, **kwargs)
return self.apply_multitreatment(df, effect_fun, *args, **kwargs)
def effect_interval(self, df: pd.DataFrame, *args, **kwargs) -> np.ndarray:
"""
Pointwise confidence intervals for the estimated treatment effect
:param df: Features of the units to evaluate
:param args: passed through to the underlying estimator
:param kwargs: passed through to the underlying estimator
"""
def effect_interval_fun(filtered_df, T0, T1, *args, **kwargs):
return self.estimator.effect_interval(
filtered_df, T0=T0, T1=T1, alpha=1 - self.confidence_level, *args, **kwargs
)
return self.apply_multitreatment(df, effect_interval_fun, *args, **kwargs)
def effect_inference(self, df: pd.DataFrame, *args, **kwargs):
"""
Inference (uncertainty) results produced by the underlying EconML estimator
:param df: Features of the units to evaluate
:param args: passed through to the underlying estimator
:param kwargs: passed through to the underlying estimator
"""
def effect_inference_fun(filtered_df, T0, T1, *args, **kwargs):
return self.estimator.effect_inference(filtered_df, T0=T0, T1=T1, *args, **kwargs)
return self.apply_multitreatment(df, effect_inference_fun, *args, **kwargs)
def effect_tt(self, df: pd.DataFrame, *args, **kwargs):
"""
Effect of the actual treatment that was applied to each unit
("effect of Treatment on the Treated")
:param df: Features of the units to evaluate
:param args: passed through to estimator.effect()
:param kwargs: passed through to estimator.effect()
"""
eff = self.effect(df, *args, **kwargs).reshape((len(df), len(self._treatment_value)))
out = np.zeros(len(df))
treatment_value = parse_state(self._treatment_value)
treatment_name = parse_state(self._treatment_name)[0]
eff = np.reshape(eff, (len(df), len(treatment_value)))
# For each unit, return the estimated effect of the treatment value
# that was actually applied to the unit
for c, col in enumerate(treatment_value):
out[df[treatment_name] == col] = eff[df[treatment_name] == col, c]
return pd.Series(data=out, index=df.index)
| import inspect
from importlib import import_module
from typing import Any, Callable, List, Optional, Protocol, Union
from warnings import warn
import numpy as np
import pandas as pd
from numpy.distutils.misc_util import is_sequence
from dowhy.causal_estimator import CausalEstimate, CausalEstimator
from dowhy.causal_identifier import IdentifiedEstimand
from dowhy.utils.api import parse_state
class _EconmlEstimator(Protocol):
def fit(self, *args, **kwargs):
...
def effect(self, *args, **kwargs):
...
def effect_interval(self, *args, **kwargs):
...
def effect_inference(self, *args, **kwargs):
...
def shap_values(self, *args, **kwargs):
...
class Econml(CausalEstimator):
"""Wrapper class for estimators from the EconML library.
Supports additional parameters as listed below. For init and fit
parameters of each estimator, refer to the EconML docs.
"""
def __init__(
self,
identified_estimand: IdentifiedEstimand,
econml_estimator: Union[_EconmlEstimator, str],
test_significance: bool = False,
evaluate_effect_strength: bool = False,
confidence_intervals: bool = False,
num_null_simulations: int = CausalEstimator.DEFAULT_NUMBER_OF_SIMULATIONS_STAT_TEST,
num_simulations: int = CausalEstimator.DEFAULT_NUMBER_OF_SIMULATIONS_CI,
sample_size_fraction: int = CausalEstimator.DEFAULT_SAMPLE_SIZE_FRACTION,
confidence_level: float = CausalEstimator.DEFAULT_CONFIDENCE_LEVEL,
need_conditional_estimates: Union[bool, str] = "auto",
num_quantiles_to_discretize_cont_cols: int = CausalEstimator.NUM_QUANTILES_TO_DISCRETIZE_CONT_COLS,
**kwargs,
):
"""
:param identified_estimand: probability expression
representing the target identified estimand to estimate.
        :param econml_estimator: Instance of an econml estimator class, or the
            fully qualified class name as a string (deprecated). For example, 'econml.dml.DML'
:param test_significance: Binary flag or a string indicating whether to test significance and by which method. All estimators support test_significance="bootstrap" that estimates a p-value for the obtained estimate using the bootstrap method. Individual estimators can override this to support custom testing methods. The bootstrap method supports an optional parameter, num_null_simulations. If False, no testing is done. If True, significance of the estimate is tested using the custom method if available, otherwise by bootstrap.
:param evaluate_effect_strength: (Experimental) whether to evaluate the strength of effect
:param confidence_intervals: Binary flag or a string indicating whether the confidence intervals should be computed and which method should be used. All methods support estimation of confidence intervals using the bootstrap method by using the parameter confidence_intervals="bootstrap". The bootstrap method takes in two arguments (num_simulations and sample_size_fraction) that can be optionally specified in the params dictionary. Estimators may also override this to implement their own confidence interval method. If this parameter is False, no confidence intervals are computed. If True, confidence intervals are computed by the estimator's specific method if available, otherwise through bootstrap
:param num_null_simulations: The number of simulations for testing the
statistical significance of the estimator
:param num_simulations: The number of simulations for finding the
            confidence interval (and/or standard error) for an estimate
:param sample_size_fraction: The size of the sample for the bootstrap
estimator
:param confidence_level: The confidence level of the confidence
interval estimate
:param need_conditional_estimates: Boolean flag indicating whether
conditional estimates should be computed. Defaults to True if
there are effect modifiers in the graph
:param num_quantiles_to_discretize_cont_cols: The number of quantiles
into which a numeric effect modifier is split, to enable
estimation of conditional treatment effect over it.
:param kwargs: (optional) Additional estimator-specific parameters
"""
super().__init__(
identified_estimand=identified_estimand,
test_significance=test_significance,
evaluate_effect_strength=evaluate_effect_strength,
confidence_intervals=confidence_intervals,
num_null_simulations=num_null_simulations,
num_simulations=num_simulations,
sample_size_fraction=sample_size_fraction,
confidence_level=confidence_level,
need_conditional_estimates=need_conditional_estimates,
num_quantiles_to_discretize_cont_cols=num_quantiles_to_discretize_cont_cols,
econml_estimator=econml_estimator,
**kwargs,
)
if isinstance(econml_estimator, str):
warn(
"Using a string to specify the value for econml_estimator is now deprecated, please provide an instance of a econml object",
DeprecationWarning,
stacklevel=2,
)
estimator_class = self._get_econml_class_object(econml_estimator)
self.estimator = estimator_class(**kwargs["init_params"])
else:
self.estimator = econml_estimator
self.logger.info("INFO: Using EconML Estimator")
self.identifier_method = self._target_estimand.identifier_method
def fit(
self,
data: pd.DataFrame,
treatment_name: str,
outcome_name: str,
effect_modifier_names: Optional[List[str]] = None,
**kwargs,
):
"""
Fits the estimator with data for effect estimation
:param data: data frame containing the data
        :param treatment_name: name of the treatment variable
        :param outcome_name: name of the outcome variable
        :param effect_modifier_names: Variables on which to compute separate
            effects, or return a heterogeneous effect function. Not all
            methods support this currently.
        :param kwargs: remaining keyword arguments are forwarded to the
            wrapped econml estimator's fit() method
"""
self._set_data(data, treatment_name, outcome_name)
self._set_effect_modifiers(effect_modifier_names)
        # Save parameters for later refuter fitting
self._econml_fit_params = kwargs
self._observed_common_causes_names = self._target_estimand.get_backdoor_variables().copy()
# Enforcing this ordering is necessary to feed through the propensity values from dataset
self._observed_common_causes_names = [
c for c in self._observed_common_causes_names if "propensity" not in c
] + sorted([c for c in self._observed_common_causes_names if "propensity" in c])
# For metalearners only--issue a warning if w contains variables not in x
if self.estimator.__module__.endswith("metalearners"):
effect_modifier_names = []
if self._effect_modifier_names is not None:
effect_modifier_names = self._effect_modifier_names.copy()
w_diff_x = [w for w in self._observed_common_causes_names if w not in effect_modifier_names]
if len(w_diff_x) > 0:
self.logger.warn(
"Concatenating common_causes and effect_modifiers and providing a single list of variables to metalearner estimator method, "
+ self.estimator.__class__.__name__
+ ". EconML metalearners accept a single X argument."
)
effect_modifier_names.extend(w_diff_x)
# Override the effect_modifiers set in CausalEstimator.__init__()
# Also only update self._effect_modifiers, and create a copy of self._effect_modifier_names
# the latter can be used by other estimator methods later
self._effect_modifiers = self._data[effect_modifier_names]
self._effect_modifiers = pd.get_dummies(self._effect_modifiers, drop_first=True)
self._effect_modifier_names = effect_modifier_names
self.logger.debug("Effect modifiers: " + ",".join(effect_modifier_names))
if self._observed_common_causes_names:
self._observed_common_causes = self._data[self._observed_common_causes_names]
self._observed_common_causes = pd.get_dummies(self._observed_common_causes, drop_first=True)
else:
self._observed_common_causes = None
self.logger.debug("Back-door variables used:" + ",".join(self._observed_common_causes_names))
# Instrumental variables names, if present
# choosing the instrumental variable to use
if getattr(self, "iv_instrument_name", None) is None:
self.estimating_instrument_names = self._target_estimand.instrumental_variables
else:
self.estimating_instrument_names = parse_state(self.iv_instrument_name)
if self.estimating_instrument_names:
self._estimating_instruments = self._data[self.estimating_instrument_names]
self._estimating_instruments = pd.get_dummies(self._estimating_instruments, drop_first=True)
else:
self._estimating_instruments = None
self.symbolic_estimator = self.construct_symbolic_estimator(self._target_estimand)
self.logger.info(self.symbolic_estimator)
X = None
W = None # common causes/ confounders
Z = None # Instruments
Y = self._outcome
T = self._treatment
if self._effect_modifiers is not None and len(self._effect_modifiers) > 0:
X = self._effect_modifiers
if self._observed_common_causes_names:
W = self._observed_common_causes
if self.estimating_instrument_names:
Z = self._estimating_instruments
named_data_args = {"Y": Y, "T": T, "X": X, "W": W, "Z": Z}
# Calling the econml estimator's fit method
estimator_argspec = inspect.getfullargspec(inspect.unwrap(self.estimator.fit))
        # As of v0.9, econml has some keyword-only arguments
estimator_named_args = estimator_argspec.args + estimator_argspec.kwonlyargs
estimator_data_args = {
arg: named_data_args[arg] for arg in named_data_args.keys() if arg in estimator_named_args
}
self.estimator.fit(**estimator_data_args, **kwargs)
return self
def _get_econml_class_object(self, module_method_name, *args, **kwargs):
# from https://www.bnmetrics.com/blog/factory-pattern-in-python3-simple-version
try:
(module_name, _, class_name) = module_method_name.rpartition(".")
estimator_module = import_module(module_name)
estimator_class = getattr(estimator_module, class_name)
except (AttributeError, AssertionError, ImportError):
raise ImportError(
"Error loading {}.{}. Double-check the method name and ensure that all econml dependencies are installed.".format(
module_name, class_name
)
)
return estimator_class
def estimate_effect(
self, data: pd.DataFrame = None, treatment_value: Any = 1, control_value: Any = 0, target_units=None, **_
):
"""
        :param data: dataframe containing the data on which treatment effect is to be estimated.
        :param treatment_value: value of the treatment variable for which the effect is to be estimated.
        :param control_value: value of the treatment variable that denotes its absence (usually 0)
        :param target_units: The units for which the treatment effect should be estimated.
            It can be a DataFrame that contains values of the effect_modifiers and effect will be estimated only for this new data.
            It can also be a lambda function that can be used as an index for the data (pandas DataFrame) to select the required rows.
"""
if data is None:
data = self._data
self._target_units = target_units
self._treatment_value = treatment_value
self._control_value = control_value
n_samples = self._treatment.shape[0]
X = None # Effect modifiers
if self._effect_modifiers is not None and len(self._effect_modifiers) > 0:
X = self._effect_modifiers
X_test = X
if X is not None:
if type(target_units) is pd.DataFrame:
X_test = target_units
elif callable(target_units):
filtered_rows = data.where(target_units)
boolean_criterion = np.array(filtered_rows.notnull().iloc[:, 0])
X_test = X[boolean_criterion]
# Changing shape to a list for a singleton value
self._treatment_value = parse_state(self._treatment_value)
est = self.effect(X_test)
ate = np.mean(est, axis=0) # one value per treatment value
if len(ate) == 1:
ate = ate[0]
if self._confidence_intervals:
self.effect_intervals = self.effect_interval(X_test)
else:
self.effect_intervals = None
estimate = CausalEstimate(
estimate=ate,
control_value=control_value,
treatment_value=treatment_value,
target_estimand=self._target_estimand,
realized_estimand_expr=self.symbolic_estimator,
cate_estimates=est,
effect_intervals=self.effect_intervals,
_estimator_object=self.estimator,
)
estimate.add_estimator(self)
return estimate
def _estimate_confidence_intervals(self, confidence_level=None, method=None):
"""Returns None if the confidence interval has not been calculated."""
return self.effect_intervals
def _do(self, x):
raise NotImplementedError
def construct_symbolic_estimator(self, estimand):
expr = "b: " + ", ".join(estimand.outcome_variable) + "~"
# TODO -- fix: we are actually conditioning on positive treatment (d=1)
if self.estimator.__module__.endswith("metalearners"):
var_list = estimand.treatment_variable + self._effect_modifier_names
expr += "+".join(var_list)
else:
var_list = estimand.treatment_variable + self._observed_common_causes_names
expr += "+".join(var_list)
expr += " | " + ",".join(self._effect_modifier_names)
return expr
def shap_values(self, df: pd.DataFrame, *args, **kwargs):
return self.estimator.shap_values(df[self._effect_modifier_names].values, *args, **kwargs)
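# apply_multitreatment (below) evaluates `fun` once per requested treatment
# value, always contrasting it against the control value, and stacks the
# results so that column j corresponds to the j-th treatment value.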
def apply_multitreatment(self, df: pd.DataFrame, fun: Callable, *args, **kwargs):
ests = []
assert not isinstance(self._treatment_value, str)
assert is_sequence(self._treatment_value)
if df is None:
filtered_df = None
else:
filtered_df = df[self._effect_modifier_names].values
for tv in self._treatment_value:
ests.append(
fun(
filtered_df,
T0=self._control_value,
T1=tv,
*args,
**kwargs,
)
)
est = np.stack(ests, axis=1)
return est
def effect(self, df: pd.DataFrame, *args, **kwargs) -> np.ndarray:
"""
Pointwise estimated treatment effect,
output shape n_units x n_treatment_values (not counting control)
:param df: Features of the units to evaluate
:param args: passed through to the underlying estimator
:param kwargs: passed through to the underlying estimator
"""
def effect_fun(filtered_df, T0, T1, *args, **kwargs):
return self.estimator.effect(filtered_df, T0=T0, T1=T1, *args, **kwargs)
return self.apply_multitreatment(df, effect_fun, *args, **kwargs)
def effect_interval(self, df: pd.DataFrame, *args, **kwargs) -> np.ndarray:
"""
Pointwise confidence intervals for the estimated treatment effect
:param df: Features of the units to evaluate
:param args: passed through to the underlying estimator
:param kwargs: passed through to the underlying estimator
"""
def effect_interval_fun(filtered_df, T0, T1, *args, **kwargs):
return self.estimator.effect_interval(
filtered_df, T0=T0, T1=T1, alpha=1 - self.confidence_level, *args, **kwargs
)
return self.apply_multitreatment(df, effect_interval_fun, *args, **kwargs)
def effect_inference(self, df: pd.DataFrame, *args, **kwargs):
"""
Inference (uncertainty) results produced by the underlying EconML estimator
:param df: Features of the units to evaluate
:param args: passed through to the underlying estimator
:param kwargs: passed through to the underlying estimator
"""
def effect_inference_fun(filtered_df, T0, T1, *args, **kwargs):
return self.estimator.effect_inference(filtered_df, T0=T0, T1=T1, *args, **kwargs)
return self.apply_multitreatment(df, effect_inference_fun, *args, **kwargs)
def effect_tt(self, df: pd.DataFrame, *args, **kwargs):
"""
Effect of the actual treatment that was applied to each unit
("effect of Treatment on the Treated")
:param df: Features of the units to evaluate
:param args: passed through to estimator.effect()
:param kwargs: passed through to estimator.effect()
"""
eff = self.effect(df, *args, **kwargs).reshape((len(df), len(self._treatment_value)))
out = np.zeros(len(df))
treatment_value = parse_state(self._treatment_value)
treatment_name = parse_state(self._treatment_name)[0]
eff = np.reshape(eff, (len(df), len(treatment_value)))
# For each unit, return the estimated effect of the treatment value
# that was actually applied to the unit
for c, col in enumerate(treatment_value):
out[df[treatment_name] == col] = eff[df[treatment_name] == col, c]
return pd.Series(data=out, index=df.index)
| andresmor-ms | 11c4e0dafd6e824eb81ad14262457d954ae61468 | affe0952f4aba6845247355c171565510c2c1673 | Could this already be a `Union[str,EconMLEstimator]` where `EconMLEstimator` is something along the lines of:
```python
from typing import Protocol

class EconMLEstimator(Protocol):
    def estimate(self, *args, **kwargs):
        ...
    # ... further methods (effect, effect_interval, etc.) as needed
```
Then, when actually using this, you could check if it's a string or not. Long-term, we could deprecate and remove the string and just allow the estimator objects themselves. That would make usage and implementation simpler. | petergtz | 219 |
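A minimal sketch of the dispatch being proposed (the helper name `resolve_estimator` is hypothetical; the merged code performs the same `isinstance` check inline in `Econml.__init__`, and init params are omitted here for brevity):

```python
from importlib import import_module
from typing import Protocol, Union

class EconMLEstimator(Protocol):
    def fit(self, *args, **kwargs): ...
    def effect(self, *args, **kwargs): ...

def resolve_estimator(econml_estimator: Union[str, EconMLEstimator]) -> EconMLEstimator:
    # Legacy path: "econml.dml.DML" -> import the module, fetch the class, instantiate.
    if isinstance(econml_estimator, str):
        module_name, _, class_name = econml_estimator.rpartition(".")
        return getattr(import_module(module_name), class_name)()
    # New path: the caller already passed a ready estimator object.
    return econml_estimator
```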
py-why/dowhy | 746 | Functional api/causal estimators | * Introduce `fit()` method to estimators.
* Refactor constructors to avoid using `*args` and `**kwargs` and have more explicit parameters.
* Refactor refuters and other parts of the code to use `fit()` and modify arguments to `estimate_effect()` | null | 2022-11-04 16:15:39+00:00 | 2022-12-03 17:07:53+00:00 | dowhy/causal_estimators/econml.py | import inspect
from importlib import import_module
from typing import Callable
import numpy as np
import pandas as pd
from numpy.distutils.misc_util import is_sequence
from dowhy.causal_estimator import CausalEstimate, CausalEstimator
from dowhy.utils.api import parse_state
class Econml(CausalEstimator):
"""Wrapper class for estimators from the EconML library.
For a list of standard args and kwargs, see documentation for
:class:`~dowhy.causal_estimator.CausalEstimator`.
Supports additional parameters as listed below. For init and fit
parameters of each estimator, refer to the EconML docs.
"""
def __init__(self, *args, econml_methodname, **kwargs):
"""
:param econml_methodname: Fully qualified name of econml estimator
class. For example, 'econml.dml.DML'
"""
# Required to ensure that self.method_params contains all the
# parameters to create an object of this class
args_dict = {k: v for k, v in locals().items() if k not in type(self)._STD_INIT_ARGS}
args_dict.update(kwargs)
super().__init__(*args, **args_dict)
self._econml_methodname = econml_methodname
self.logger.info("INFO: Using EconML Estimator")
self.identifier_method = self._target_estimand.identifier_method
self._observed_common_causes_names = self._target_estimand.get_backdoor_variables().copy()
# Enforcing this ordering is necessary to feed through the propensity values from dataset
self._observed_common_causes_names = [
c for c in self._observed_common_causes_names if "propensity" not in c
] + sorted([c for c in self._observed_common_causes_names if "propensity" in c])
# For metalearners only--issue a warning if w contains variables not in x
(module_name, _, class_name) = self._econml_methodname.rpartition(".")
if module_name.endswith("metalearners"):
effect_modifier_names = []
if self._effect_modifier_names is not None:
effect_modifier_names = self._effect_modifier_names.copy()
w_diff_x = [w for w in self._observed_common_causes_names if w not in effect_modifier_names]
if len(w_diff_x) > 0:
self.logger.warn(
"Concatenating common_causes and effect_modifiers and providing a single list of variables to metalearner estimator method, "
+ class_name
+ ". EconML metalearners accept a single X argument."
)
effect_modifier_names.extend(w_diff_x)
# Override the effect_modifiers set in CausalEstimator.__init__()
# Also only update self._effect_modifiers, and create a copy of self._effect_modifier_names
# the latter can be used by other estimator methods later
self._effect_modifiers = self._data[effect_modifier_names]
self._effect_modifiers = pd.get_dummies(self._effect_modifiers, drop_first=True)
self._effect_modifier_names = effect_modifier_names
self.logger.debug("Effect modifiers: " + ",".join(effect_modifier_names))
if self._observed_common_causes_names:
self._observed_common_causes = self._data[self._observed_common_causes_names]
self._observed_common_causes = pd.get_dummies(self._observed_common_causes, drop_first=True)
else:
self._observed_common_causes = None
self.logger.debug("Back-door variables used:" + ",".join(self._observed_common_causes_names))
# Instrumental variables names, if present
# choosing the instrumental variable to use
if getattr(self, "iv_instrument_name", None) is None:
self.estimating_instrument_names = self._target_estimand.instrumental_variables
else:
self.estimating_instrument_names = parse_state(self.iv_instrument_name)
if self.estimating_instrument_names:
self._estimating_instruments = self._data[self.estimating_instrument_names]
self._estimating_instruments = pd.get_dummies(self._estimating_instruments, drop_first=True)
else:
self._estimating_instruments = None
self.estimator = None
self.symbolic_estimator = self.construct_symbolic_estimator(self._target_estimand)
self.logger.info(self.symbolic_estimator)
def _get_econml_class_object(self, module_method_name, *args, **kwargs):
# from https://www.bnmetrics.com/blog/factory-pattern-in-python3-simple-version
try:
(module_name, _, class_name) = module_method_name.rpartition(".")
estimator_module = import_module(module_name)
estimator_class = getattr(estimator_module, class_name)
except (AttributeError, AssertionError, ImportError):
raise ImportError(
"Error loading {}.{}. Double-check the method name and ensure that all econml dependencies are installed.".format(
module_name, class_name
)
)
return estimator_class
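# NOTE: in this pre-PR version the econml estimator is constructed lazily and
# fitted inside _estimate_effect() itself; the PR under review moves
# construction to __init__() and fitting to a dedicated fit() method.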
def _estimate_effect(self):
n_samples = self._treatment.shape[0]
X = None # Effect modifiers
W = None # common causes/ confounders
Z = None # Instruments
Y = self._outcome
T = self._treatment
if self._effect_modifiers is not None:
X = self._effect_modifiers
if self._observed_common_causes_names:
W = self._observed_common_causes
if self.estimating_instrument_names:
Z = self._estimating_instruments
named_data_args = {"Y": Y, "T": T, "X": X, "W": W, "Z": Z}
if self.estimator is None:
estimator_class = self._get_econml_class_object(self._econml_methodname)
self.estimator = estimator_class(**self.method_params["init_params"])
# Calling the econml estimator's fit method
estimator_argspec = inspect.getfullargspec(inspect.unwrap(self.estimator.fit))
# As of v0.9, econml has some keyword-only arguments
estimator_named_args = estimator_argspec.args + estimator_argspec.kwonlyargs
estimator_data_args = {
arg: named_data_args[arg] for arg in named_data_args.keys() if arg in estimator_named_args
}
if self.method_params["fit_params"] is not False:
self.estimator.fit(**estimator_data_args, **self.method_params["fit_params"])
X_test = X
if X is not None:
if type(self._target_units) is pd.DataFrame:
X_test = self._target_units
elif callable(self._target_units):
filtered_rows = self._data.where(self._target_units)
boolean_criterion = np.array(filtered_rows.notnull().iloc[:, 0])
X_test = X[boolean_criterion]
# Changing shape to a list for a singleton value
self._treatment_value = parse_state(self._treatment_value)
est = self.effect(X_test)
ate = np.mean(est, axis=0) # one value per treatment value
if len(ate) == 1:
ate = ate[0]
if self._confidence_intervals:
self.effect_intervals = self.effect_interval(X_test)
else:
self.effect_intervals = None
estimate = CausalEstimate(
estimate=ate,
control_value=self._control_value,
treatment_value=self._treatment_value,
target_estimand=self._target_estimand,
realized_estimand_expr=self.symbolic_estimator,
cate_estimates=est,
effect_intervals=self.effect_intervals,
_estimator_object=self.estimator,
)
return estimate
def _estimate_confidence_intervals(self, confidence_level=None, method=None):
"""Returns None if the confidence interval has not been calculated."""
return self.effect_intervals
def _do(self, x):
raise NotImplementedError
def construct_symbolic_estimator(self, estimand):
expr = "b: " + ", ".join(estimand.outcome_variable) + "~"
# TODO -- fix: we are actually conditioning on positive treatment (d=1)
(module_name, _, class_name) = self._econml_methodname.rpartition(".")
if module_name.endswith("metalearners"):
var_list = estimand.treatment_variable + self._effect_modifier_names
expr += "+".join(var_list)
else:
var_list = estimand.treatment_variable + self._observed_common_causes_names
expr += "+".join(var_list)
expr += " | " + ",".join(self._effect_modifier_names)
return expr
def shap_values(self, df: pd.DataFrame, *args, **kwargs):
return self.estimator.shap_values(df[self._effect_modifier_names].values, *args, **kwargs)
def apply_multitreatment(self, df: pd.DataFrame, fun: Callable, *args, **kwargs):
ests = []
assert not isinstance(self._treatment_value, str)
assert is_sequence(self._treatment_value)
if df is None:
filtered_df = None
else:
filtered_df = df[self._effect_modifier_names].values
for tv in self._treatment_value:
ests.append(
fun(
filtered_df,
T0=self._control_value,
T1=tv,
*args,
**kwargs,
)
)
est = np.stack(ests, axis=1)
return est
def effect(self, df: pd.DataFrame, *args, **kwargs) -> np.ndarray:
"""
Pointwise estimated treatment effect,
output shape n_units x n_treatment_values (not counting control)
:param df: Features of the units to evaluate
:param args: passed through to the underlying estimator
:param kwargs: passed through to the underlying estimator
"""
def effect_fun(filtered_df, T0, T1, *args, **kwargs):
return self.estimator.effect(filtered_df, T0=T0, T1=T1, *args, **kwargs)
return self.apply_multitreatment(df, effect_fun, *args, **kwargs)
def effect_interval(self, df: pd.DataFrame, *args, **kwargs) -> np.ndarray:
"""
Pointwise confidence intervals for the estimated treatment effect
:param df: Features of the units to evaluate
:param args: passed through to the underlying estimator
:param kwargs: passed through to the underlying estimator
"""
def effect_interval_fun(filtered_df, T0, T1, *args, **kwargs):
return self.estimator.effect_interval(
filtered_df, T0=T0, T1=T1, alpha=1 - self.confidence_level, *args, **kwargs
)
return self.apply_multitreatment(df, effect_interval_fun, *args, **kwargs)
def effect_inference(self, df: pd.DataFrame, *args, **kwargs):
"""
Inference (uncertainty) results produced by the underlying EconML estimator
:param df: Features of the units to evaluate
:param args: passed through to the underlying estimator
:param kwargs: passed through to the underlying estimator
"""
def effect_inference_fun(filtered_df, T0, T1, *args, **kwargs):
return self.estimator.effect_inference(filtered_df, T0=T0, T1=T1, *args, **kwargs)
return self.apply_multitreatment(df, effect_inference_fun, *args, **kwargs)
def effect_tt(self, df: pd.DataFrame, *args, **kwargs):
"""
Effect of the actual treatment that was applied to each unit
("effect of Treatment on the Treated")
:param df: Features of the units to evaluate
:param args: passed through to estimator.effect()
:param kwargs: passed through to estimator.effect()
"""
eff = self.effect(df, *args, **kwargs).reshape((len(df), len(self._treatment_value)))
out = np.zeros(len(df))
treatment_value = parse_state(self._treatment_value)
treatment_name = parse_state(self._treatment_name)[0]
eff = np.reshape(eff, (len(df), len(treatment_value)))
# For each unit, return the estimated effect of the treatment value
# that was actually applied to the unit
for c, col in enumerate(treatment_value):
out[df[treatment_name] == col] = eff[df[treatment_name] == col, c]
return pd.Series(data=out, index=df.index)
| import inspect
from importlib import import_module
from typing import Any, Callable, List, Optional, Protocol, Union
from warnings import warn
import numpy as np
import pandas as pd
from numpy.distutils.misc_util import is_sequence
from dowhy.causal_estimator import CausalEstimate, CausalEstimator
from dowhy.causal_identifier import IdentifiedEstimand
from dowhy.utils.api import parse_state
class _EconmlEstimator(Protocol):
def fit(self, *args, **kwargs):
...
def effect(self, *args, **kwargs):
...
def effect_interval(self, *args, **kwargs):
...
def effect_inference(self, *args, **kwargs):
...
def shap_values(self, *args, **kwargs):
...
class Econml(CausalEstimator):
"""Wrapper class for estimators from the EconML library.
Supports additional parameters as listed below. For init and fit
parameters of each estimator, refer to the EconML docs.
"""
def __init__(
self,
identified_estimand: IdentifiedEstimand,
econml_estimator: Union[_EconmlEstimator, str],
test_significance: bool = False,
evaluate_effect_strength: bool = False,
confidence_intervals: bool = False,
num_null_simulations: int = CausalEstimator.DEFAULT_NUMBER_OF_SIMULATIONS_STAT_TEST,
num_simulations: int = CausalEstimator.DEFAULT_NUMBER_OF_SIMULATIONS_CI,
sample_size_fraction: int = CausalEstimator.DEFAULT_SAMPLE_SIZE_FRACTION,
confidence_level: float = CausalEstimator.DEFAULT_CONFIDENCE_LEVEL,
need_conditional_estimates: Union[bool, str] = "auto",
num_quantiles_to_discretize_cont_cols: int = CausalEstimator.NUM_QUANTILES_TO_DISCRETIZE_CONT_COLS,
**kwargs,
):
"""
:param identified_estimand: probability expression
representing the target identified estimand to estimate.
:param econml_estimator: Instance of an econml estimator class, or the fully
qualified name of the class as a string (deprecated). For example, 'econml.dml.DML'
:param test_significance: Binary flag or a string indicating whether to test significance and by which method. All estimators support test_significance="bootstrap" that estimates a p-value for the obtained estimate using the bootstrap method. Individual estimators can override this to support custom testing methods. The bootstrap method supports an optional parameter, num_null_simulations. If False, no testing is done. If True, significance of the estimate is tested using the custom method if available, otherwise by bootstrap.
:param evaluate_effect_strength: (Experimental) whether to evaluate the strength of effect
:param confidence_intervals: Binary flag or a string indicating whether the confidence intervals should be computed and which method should be used. All methods support estimation of confidence intervals using the bootstrap method by using the parameter confidence_intervals="bootstrap". The bootstrap method takes in two arguments (num_simulations and sample_size_fraction) that can be optionally specified in the params dictionary. Estimators may also override this to implement their own confidence interval method. If this parameter is False, no confidence intervals are computed. If True, confidence intervals are computed by the estimator's specific method if available, otherwise through bootstrap
:param num_null_simulations: The number of simulations for testing the
statistical significance of the estimator
:param num_simulations: The number of simulations for finding the
confidence interval (and/or standard error) for an estimate
:param sample_size_fraction: The size of the sample for the bootstrap
estimator
:param confidence_level: The confidence level of the confidence
interval estimate
:param need_conditional_estimates: Boolean flag indicating whether
conditional estimates should be computed. Defaults to True if
there are effect modifiers in the graph
:param num_quantiles_to_discretize_cont_cols: The number of quantiles
into which a numeric effect modifier is split, to enable
estimation of conditional treatment effect over it.
:param kwargs: (optional) Additional estimator-specific parameters
"""
super().__init__(
identified_estimand=identified_estimand,
test_significance=test_significance,
evaluate_effect_strength=evaluate_effect_strength,
confidence_intervals=confidence_intervals,
num_null_simulations=num_null_simulations,
num_simulations=num_simulations,
sample_size_fraction=sample_size_fraction,
confidence_level=confidence_level,
need_conditional_estimates=need_conditional_estimates,
num_quantiles_to_discretize_cont_cols=num_quantiles_to_discretize_cont_cols,
econml_estimator=econml_estimator,
**kwargs,
)
if isinstance(econml_estimator, str):
warn(
"Using a string to specify the value for econml_estimator is now deprecated, please provide an instance of a econml object",
DeprecationWarning,
stacklevel=2,
)
estimator_class = self._get_econml_class_object(econml_estimator)
self.estimator = estimator_class(**kwargs["init_params"])
else:
self.estimator = econml_estimator
self.logger.info("INFO: Using EconML Estimator")
self.identifier_method = self._target_estimand.identifier_method
def fit(
self,
data: pd.DataFrame,
treatment_name: str,
outcome_name: str,
effect_modifier_names: Optional[List[str]] = None,
**kwargs,
):
"""
Fits the estimator with data for effect estimation
:param data: data frame containing the data
:param treatment_name: name of the treatment variable
:param outcome_name: name of the outcome variable
:param effect_modifier_names: Variables on which to compute separate
effects, or return a heterogeneous effect function. Not all
methods support this currently.
"""
self._set_data(data, treatment_name, outcome_name)
self._set_effect_modifiers(effect_modifier_names)
# Save parameters for later refuter fitting
self._econml_fit_params = kwargs
self._observed_common_causes_names = self._target_estimand.get_backdoor_variables().copy()
# Enforcing this ordering is necessary to feed through the propensity values from dataset
self._observed_common_causes_names = [
c for c in self._observed_common_causes_names if "propensity" not in c
] + sorted([c for c in self._observed_common_causes_names if "propensity" in c])
# For metalearners only--issue a warning if w contains variables not in x
if self.estimator.__module__.endswith("metalearners"):
effect_modifier_names = []
if self._effect_modifier_names is not None:
effect_modifier_names = self._effect_modifier_names.copy()
w_diff_x = [w for w in self._observed_common_causes_names if w not in effect_modifier_names]
if len(w_diff_x) > 0:
self.logger.warn(
"Concatenating common_causes and effect_modifiers and providing a single list of variables to metalearner estimator method, "
+ self.estimator.__class__.__name__
+ ". EconML metalearners accept a single X argument."
)
effect_modifier_names.extend(w_diff_x)
# Override the effect_modifiers set in CausalEstimator.__init__()
# Also only update self._effect_modifiers, and create a copy of self._effect_modifier_names
# the latter can be used by other estimator methods later
self._effect_modifiers = self._data[effect_modifier_names]
self._effect_modifiers = pd.get_dummies(self._effect_modifiers, drop_first=True)
self._effect_modifier_names = effect_modifier_names
self.logger.debug("Effect modifiers: " + ",".join(effect_modifier_names))
if self._observed_common_causes_names:
self._observed_common_causes = self._data[self._observed_common_causes_names]
self._observed_common_causes = pd.get_dummies(self._observed_common_causes, drop_first=True)
else:
self._observed_common_causes = None
self.logger.debug("Back-door variables used:" + ",".join(self._observed_common_causes_names))
# Instrumental variables names, if present
# choosing the instrumental variable to use
if getattr(self, "iv_instrument_name", None) is None:
self.estimating_instrument_names = self._target_estimand.instrumental_variables
else:
self.estimating_instrument_names = parse_state(self.iv_instrument_name)
if self.estimating_instrument_names:
self._estimating_instruments = self._data[self.estimating_instrument_names]
self._estimating_instruments = pd.get_dummies(self._estimating_instruments, drop_first=True)
else:
self._estimating_instruments = None
self.symbolic_estimator = self.construct_symbolic_estimator(self._target_estimand)
self.logger.info(self.symbolic_estimator)
X = None
W = None # common causes/ confounders
Z = None # Instruments
Y = self._outcome
T = self._treatment
if self._effect_modifiers is not None and len(self._effect_modifiers) > 0:
X = self._effect_modifiers
if self._observed_common_causes_names:
W = self._observed_common_causes
if self.estimating_instrument_names:
Z = self._estimating_instruments
named_data_args = {"Y": Y, "T": T, "X": X, "W": W, "Z": Z}
# Calling the econml estimator's fit method
estimator_argspec = inspect.getfullargspec(inspect.unwrap(self.estimator.fit))
# As of v0.9, econml has some keyword-only arguments
estimator_named_args = estimator_argspec.args + estimator_argspec.kwonlyargs
estimator_data_args = {
arg: named_data_args[arg] for arg in named_data_args.keys() if arg in estimator_named_args
}
self.estimator.fit(**estimator_data_args, **kwargs)
return self
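# Typical call sequence (illustrative): est.fit(df, treatment_name, outcome_name,
# effect_modifier_names=[...]) followed by
# est.estimate_effect(treatment_value=1, control_value=0).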
def _get_econml_class_object(self, module_method_name, *args, **kwargs):
# from https://www.bnmetrics.com/blog/factory-pattern-in-python3-simple-version
try:
(module_name, _, class_name) = module_method_name.rpartition(".")
estimator_module = import_module(module_name)
estimator_class = getattr(estimator_module, class_name)
except (AttributeError, AssertionError, ImportError):
raise ImportError(
"Error loading {}.{}. Double-check the method name and ensure that all econml dependencies are installed.".format(
module_name, class_name
)
)
return estimator_class
def estimate_effect(
self, data: pd.DataFrame = None, treatment_value: Any = 1, control_value: Any = 0, target_units=None, **_
):
"""
:param data: dataframe containing the data on which the treatment effect is to be estimated.
:param treatment_value: value of the treatment variable for which the effect is to be estimated.
:param control_value: value of the treatment variable that denotes its absence (usually 0).
:param target_units: The units for which the treatment effect should be estimated.
It can be a DataFrame that contains values of the effect_modifiers and effect will be estimated only for this new data.
It can also be a lambda function that can be used as an index for the data (pandas DataFrame) to select the required rows.
"""
if data is None:
data = self._data
self._target_units = target_units
self._treatment_value = treatment_value
self._control_value = control_value
n_samples = self._treatment.shape[0]
X = None # Effect modifiers
if self._effect_modifiers is not None and len(self._effect_modifiers) > 0:
X = self._effect_modifiers
X_test = X
if X is not None:
if type(target_units) is pd.DataFrame:
X_test = target_units
elif callable(target_units):
filtered_rows = data.where(target_units)
boolean_criterion = np.array(filtered_rows.notnull().iloc[:, 0])
X_test = X[boolean_criterion]
# Changing shape to a list for a singleton value
self._treatment_value = parse_state(self._treatment_value)
est = self.effect(X_test)
ate = np.mean(est, axis=0) # one value per treatment value
if len(ate) == 1:
ate = ate[0]
if self._confidence_intervals:
self.effect_intervals = self.effect_interval(X_test)
else:
self.effect_intervals = None
estimate = CausalEstimate(
estimate=ate,
control_value=control_value,
treatment_value=treatment_value,
target_estimand=self._target_estimand,
realized_estimand_expr=self.symbolic_estimator,
cate_estimates=est,
effect_intervals=self.effect_intervals,
_estimator_object=self.estimator,
)
estimate.add_estimator(self)
return estimate
def _estimate_confidence_intervals(self, confidence_level=None, method=None):
"""Returns None if the confidence interval has not been calculated."""
return self.effect_intervals
def _do(self, x):
raise NotImplementedError
def construct_symbolic_estimator(self, estimand):
expr = "b: " + ", ".join(estimand.outcome_variable) + "~"
# TODO -- fix: we are actually conditioning on positive treatment (d=1)
if self.estimator.__module__.endswith("metalearners"):
var_list = estimand.treatment_variable + self._effect_modifier_names
expr += "+".join(var_list)
else:
var_list = estimand.treatment_variable + self._observed_common_causes_names
expr += "+".join(var_list)
expr += " | " + ",".join(self._effect_modifier_names)
return expr
def shap_values(self, df: pd.DataFrame, *args, **kwargs):
return self.estimator.shap_values(df[self._effect_modifier_names].values, *args, **kwargs)
def apply_multitreatment(self, df: pd.DataFrame, fun: Callable, *args, **kwargs):
ests = []
assert not isinstance(self._treatment_value, str)
assert is_sequence(self._treatment_value)
if df is None:
filtered_df = None
else:
filtered_df = df[self._effect_modifier_names].values
for tv in self._treatment_value:
ests.append(
fun(
filtered_df,
T0=self._control_value,
T1=tv,
*args,
**kwargs,
)
)
est = np.stack(ests, axis=1)
return est
def effect(self, df: pd.DataFrame, *args, **kwargs) -> np.ndarray:
"""
Pointwise estimated treatment effect,
output shape n_units x n_treatment_values (not counting control)
:param df: Features of the units to evaluate
:param args: passed through to the underlying estimator
:param kwargs: passed through to the underlying estimator
"""
def effect_fun(filtered_df, T0, T1, *args, **kwargs):
return self.estimator.effect(filtered_df, T0=T0, T1=T1, *args, **kwargs)
return self.apply_multitreatment(df, effect_fun, *args, **kwargs)
def effect_interval(self, df: pd.DataFrame, *args, **kwargs) -> np.ndarray:
"""
Pointwise confidence intervals for the estimated treatment effect
:param df: Features of the units to evaluate
:param args: passed through to the underlying estimator
:param kwargs: passed through to the underlying estimator
"""
def effect_interval_fun(filtered_df, T0, T1, *args, **kwargs):
return self.estimator.effect_interval(
filtered_df, T0=T0, T1=T1, alpha=1 - self.confidence_level, *args, **kwargs
)
return self.apply_multitreatment(df, effect_interval_fun, *args, **kwargs)
def effect_inference(self, df: pd.DataFrame, *args, **kwargs):
"""
Inference (uncertainty) results produced by the underlying EconML estimator
:param df: Features of the units to evaluate
:param args: passed through to the underlying estimator
:param kwargs: passed through to the underlying estimator
"""
def effect_inference_fun(filtered_df, T0, T1, *args, **kwargs):
return self.estimator.effect_inference(filtered_df, T0=T0, T1=T1, *args, **kwargs)
return self.apply_multitreatment(df, effect_inference_fun, *args, **kwargs)
def effect_tt(self, df: pd.DataFrame, *args, **kwargs):
"""
Effect of the actual treatment that was applied to each unit
("effect of Treatment on the Treated")
:param df: Features of the units to evaluate
:param args: passed through to estimator.effect()
:param kwargs: passed through to estimator.effect()
"""
eff = self.effect(df, *args, **kwargs).reshape((len(df), len(self._treatment_value)))
out = np.zeros(len(df))
treatment_value = parse_state(self._treatment_value)
treatment_name = parse_state(self._treatment_name)[0]
eff = np.reshape(eff, (len(df), len(treatment_value)))
# For each unit, return the estimated effect of the treatment value
# that was actually applied to the unit
for c, col in enumerate(treatment_value):
out[df[treatment_name] == col] = eff[df[treatment_name] == col, c]
return pd.Series(data=out, index=df.index)
| andresmor-ms | 11c4e0dafd6e824eb81ad14262457d954ae61468 | affe0952f4aba6845247355c171565510c2c1673 | +1 This is a good idea to maintain backwards compatibility while still following the new API. | amit-sharma | 220 |
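To make the backwards compatibility concrete, here is a sketch of both call styles against the post-PR API (the dataset, graph, and parameter choices are illustrative, built with dowhy's synthetic-data helper, and are not part of the PR):

```python
import dowhy.datasets
from dowhy import CausalModel
from dowhy.causal_estimators.econml import Econml
from econml.dml import LinearDML

data = dowhy.datasets.linear_dataset(
    beta=10, num_common_causes=3, num_effect_modifiers=1,
    num_samples=500, treatment_is_binary=True,
)
model = CausalModel(
    data=data["df"], treatment=data["treatment_name"],
    outcome=data["outcome_name"], graph=data["gml_graph"],
)
estimand = model.identify_effect(proceed_when_unidentifiable=True)

# New style: pass an estimator instance, then call fit() and estimate_effect().
est = Econml(estimand, econml_estimator=LinearDML(discrete_treatment=True))
est.fit(data["df"], data["treatment_name"], data["outcome_name"],
        effect_modifier_names=data["effect_modifier_names"])
print(est.estimate_effect(treatment_value=1, control_value=0).value)

# Deprecated style: a dotted string plus init_params still works, with a warning.
legacy = Econml(estimand, econml_estimator="econml.dml.LinearDML",
                init_params={"discrete_treatment": True})
```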
| andresmor-ms | 11c4e0dafd6e824eb81ad14262457d954ae61468 | affe0952f4aba6845247355c171565510c2c1673 | Yep, I actually created an example of this to show Amit some days ago :) I think we could even deprecate the string now and move the code that creates an econml instance from the string to the CausalModel estimate_effect class. What do you think? @petergtz
And this also applies to the CausalML estimator | andresmor-ms | 221 |
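To make the two calling styles concrete, here is a minimal sketch; the econml class and the dowhy-side method_name/method_params layout below reflect standard usage and are illustrative, not part of this diff.
from econml.dml import LinearDML
# Legacy style: dowhy receives a fully qualified class name and builds
# the econml object internally from init_params.
method_name = "backdoor.econml.dml.LinearDML"
method_params = {"init_params": {}, "fit_params": {}}
# Proposed style: the caller constructs the econml estimator and passes
# the instance directly, so no string-to-class factory code is needed.
econml_estimator = LinearDML()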
py-why/dowhy | 746 | Functional api/causal estimators | * Introduce `fit()` method to estimators.
* Refactor constructors to avoid using `*args` and `**kwargs` and have more explicit parameters.
* Refactor refuters and other parts of the code to use `fit()` and modify arguments to `estimate_effect()` | null | 2022-11-04 16:15:39+00:00 | 2022-12-03 17:07:53+00:00 | dowhy/causal_estimators/econml.py | import inspect
from importlib import import_module
from typing import Callable
import numpy as np
import pandas as pd
from numpy.distutils.misc_util import is_sequence
from dowhy.causal_estimator import CausalEstimate, CausalEstimator
from dowhy.utils.api import parse_state
class Econml(CausalEstimator):
"""Wrapper class for estimators from the EconML library.
For a list of standard args and kwargs, see documentation for
:class:`~dowhy.causal_estimator.CausalEstimator`.
Supports additional parameters as listed below. For init and fit
parameters of each estimator, refer to the EconML docs.
"""
def __init__(self, *args, econml_methodname, **kwargs):
"""
:param econml_methodname: Fully qualified name of econml estimator
class. For example, 'econml.dml.DML'
"""
# Required to ensure that self.method_params contains all the
# parameters to create an object of this class
args_dict = {k: v for k, v in locals().items() if k not in type(self)._STD_INIT_ARGS}
args_dict.update(kwargs)
super().__init__(*args, **args_dict)
self._econml_methodname = econml_methodname
self.logger.info("INFO: Using EconML Estimator")
self.identifier_method = self._target_estimand.identifier_method
self._observed_common_causes_names = self._target_estimand.get_backdoor_variables().copy()
# Enforcing this ordering is necessary to feed through the propensity values from the dataset
self._observed_common_causes_names = [
c for c in self._observed_common_causes_names if "propensity" not in c
] + sorted([c for c in self._observed_common_causes_names if "propensity" in c])
# For metalearners only--issue a warning if w contains variables not in x
(module_name, _, class_name) = self._econml_methodname.rpartition(".")
if module_name.endswith("metalearners"):
effect_modifier_names = []
if self._effect_modifier_names is not None:
effect_modifier_names = self._effect_modifier_names.copy()
w_diff_x = [w for w in self._observed_common_causes_names if w not in effect_modifier_names]
if len(w_diff_x) > 0:
self.logger.warn(
"Concatenating common_causes and effect_modifiers and providing a single list of variables to metalearner estimator method, "
+ class_name
+ ". EconML metalearners accept a single X argument."
)
effect_modifier_names.extend(w_diff_x)
# Override the effect_modifiers set in CausalEstimator.__init__()
# Also only update self._effect_modifiers, and create a copy of self._effect_modifier_names
# the latter can be used by other estimator methods later
self._effect_modifiers = self._data[effect_modifier_names]
self._effect_modifiers = pd.get_dummies(self._effect_modifiers, drop_first=True)
self._effect_modifier_names = effect_modifier_names
self.logger.debug("Effect modifiers: " + ",".join(effect_modifier_names))
if self._observed_common_causes_names:
self._observed_common_causes = self._data[self._observed_common_causes_names]
self._observed_common_causes = pd.get_dummies(self._observed_common_causes, drop_first=True)
else:
self._observed_common_causes = None
self.logger.debug("Back-door variables used:" + ",".join(self._observed_common_causes_names))
# Instrumental variables names, if present
# choosing the instrumental variable to use
if getattr(self, "iv_instrument_name", None) is None:
self.estimating_instrument_names = self._target_estimand.instrumental_variables
else:
self.estimating_instrument_names = parse_state(self.iv_instrument_name)
if self.estimating_instrument_names:
self._estimating_instruments = self._data[self.estimating_instrument_names]
self._estimating_instruments = pd.get_dummies(self._estimating_instruments, drop_first=True)
else:
self._estimating_instruments = None
self.estimator = None
self.symbolic_estimator = self.construct_symbolic_estimator(self._target_estimand)
self.logger.info(self.symbolic_estimator)
def _get_econml_class_object(self, module_method_name, *args, **kwargs):
# from https://www.bnmetrics.com/blog/factory-pattern-in-python3-simple-version
try:
(module_name, _, class_name) = module_method_name.rpartition(".")
estimator_module = import_module(module_name)
estimator_class = getattr(estimator_module, class_name)
except (AttributeError, AssertionError, ImportError):
raise ImportError(
"Error loading {}.{}. Double-check the method name and ensure that all econml dependencies are installed.".format(
module_name, class_name
)
)
return estimator_class
def _estimate_effect(self):
n_samples = self._treatment.shape[0]
X = None # Effect modifiers
W = None # common causes/ confounders
Z = None # Instruments
Y = self._outcome
T = self._treatment
if self._effect_modifiers is not None:
X = self._effect_modifiers
if self._observed_common_causes_names:
W = self._observed_common_causes
if self.estimating_instrument_names:
Z = self._estimating_instruments
named_data_args = {"Y": Y, "T": T, "X": X, "W": W, "Z": Z}
if self.estimator is None:
estimator_class = self._get_econml_class_object(self._econml_methodname)
self.estimator = estimator_class(**self.method_params["init_params"])
# Calling the econml estimator's fit method
estimator_argspec = inspect.getfullargspec(inspect.unwrap(self.estimator.fit))
# As of v0.9, econml has some keyword-only arguments
estimator_named_args = estimator_argspec.args + estimator_argspec.kwonlyargs
estimator_data_args = {
arg: named_data_args[arg] for arg in named_data_args.keys() if arg in estimator_named_args
}
if self.method_params["fit_params"] is not False:
self.estimator.fit(**estimator_data_args, **self.method_params["fit_params"])
X_test = X
if X is not None:
if type(self._target_units) is pd.DataFrame:
X_test = self._target_units
elif callable(self._target_units):
filtered_rows = self._data.where(self._target_units)
boolean_criterion = np.array(filtered_rows.notnull().iloc[:, 0])
X_test = X[boolean_criterion]
# Changing shape to a list for a singleton value
self._treatment_value = parse_state(self._treatment_value)
est = self.effect(X_test)
ate = np.mean(est, axis=0) # one value per treatment value
if len(ate) == 1:
ate = ate[0]
if self._confidence_intervals:
self.effect_intervals = self.effect_interval(X_test)
else:
self.effect_intervals = None
estimate = CausalEstimate(
estimate=ate,
control_value=self._control_value,
treatment_value=self._treatment_value,
target_estimand=self._target_estimand,
realized_estimand_expr=self.symbolic_estimator,
cate_estimates=est,
effect_intervals=self.effect_intervals,
_estimator_object=self.estimator,
)
return estimate
def _estimate_confidence_intervals(self, confidence_level=None, method=None):
"""Returns None if the confidence interval has not been calculated."""
return self.effect_intervals
def _do(self, x):
raise NotImplementedError
def construct_symbolic_estimator(self, estimand):
expr = "b: " + ", ".join(estimand.outcome_variable) + "~"
# TODO -- fix: we are actually conditioning on positive treatment (d=1)
(module_name, _, class_name) = self._econml_methodname.rpartition(".")
if module_name.endswith("metalearners"):
var_list = estimand.treatment_variable + self._effect_modifier_names
expr += "+".join(var_list)
else:
var_list = estimand.treatment_variable + self._observed_common_causes_names
expr += "+".join(var_list)
expr += " | " + ",".join(self._effect_modifier_names)
return expr
def shap_values(self, df: pd.DataFrame, *args, **kwargs):
return self.estimator.shap_values(df[self._effect_modifier_names].values, *args, **kwargs)
def apply_multitreatment(self, df: pd.DataFrame, fun: Callable, *args, **kwargs):
ests = []
assert not isinstance(self._treatment_value, str)
assert is_sequence(self._treatment_value)
if df is None:
filtered_df = None
else:
filtered_df = df[self._effect_modifier_names].values
for tv in self._treatment_value:
ests.append(
fun(
filtered_df,
T0=self._control_value,
T1=tv,
*args,
**kwargs,
)
)
est = np.stack(ests, axis=1)
return est
def effect(self, df: pd.DataFrame, *args, **kwargs) -> np.ndarray:
"""
Pointwise estimated treatment effect,
output shape n_units x n_treatment_values (not counting control)
:param df: Features of the units to evaluate
:param args: passed through to the underlying estimator
:param kwargs: passed through to the underlying estimator
"""
def effect_fun(filtered_df, T0, T1, *args, **kwargs):
return self.estimator.effect(filtered_df, T0=T0, T1=T1, *args, **kwargs)
return self.apply_multitreatment(df, effect_fun, *args, **kwargs)
def effect_interval(self, df: pd.DataFrame, *args, **kwargs) -> np.ndarray:
"""
Pointwise confidence intervals for the estimated treatment effect
:param df: Features of the units to evaluate
:param args: passed through to the underlying estimator
:param kwargs: passed through to the underlying estimator
"""
def effect_interval_fun(filtered_df, T0, T1, *args, **kwargs):
return self.estimator.effect_interval(
filtered_df, T0=T0, T1=T1, alpha=1 - self.confidence_level, *args, **kwargs
)
return self.apply_multitreatment(df, effect_interval_fun, *args, **kwargs)
def effect_inference(self, df: pd.DataFrame, *args, **kwargs):
"""
Inference (uncertainty) results produced by the underlying EconML estimator
:param df: Features of the units to evaluate
:param args: passed through to the underlying estimator
:param kwargs: passed through to the underlying estimator
"""
def effect_inference_fun(filtered_df, T0, T1, *args, **kwargs):
return self.estimator.effect_inference(filtered_df, T0=T0, T1=T1, *args, **kwargs)
return self.apply_multitreatment(df, effect_inference_fun, *args, **kwargs)
def effect_tt(self, df: pd.DataFrame, *args, **kwargs):
"""
Effect of the actual treatment that was applied to each unit
("effect of Treatment on the Treated")
:param df: Features of the units to evaluate
:param args: passed through to estimator.effect()
:param kwargs: passed through to estimator.effect()
"""
eff = self.effect(df, *args, **kwargs).reshape((len(df), len(self._treatment_value)))
out = np.zeros(len(df))
treatment_value = parse_state(self._treatment_value)
treatment_name = parse_state(self._treatment_name)[0]
eff = np.reshape(eff, (len(df), len(treatment_value)))
# For each unit, return the estimated effect of the treatment value
# that was actually applied to the unit
for c, col in enumerate(treatment_value):
out[df[treatment_name] == col] = eff[df[treatment_name] == col, c]
return pd.Series(data=out, index=df.index)
| import inspect
from importlib import import_module
from typing import Any, Callable, List, Optional, Protocol, Union
from warnings import warn
import numpy as np
import pandas as pd
from numpy.distutils.misc_util import is_sequence
from dowhy.causal_estimator import CausalEstimate, CausalEstimator
from dowhy.causal_identifier import IdentifiedEstimand
from dowhy.utils.api import parse_state
class _EconmlEstimator(Protocol):
def fit(self, *args, **kwargs):
...
def effect(self, *args, **kwargs):
...
def effect_interval(self, *args, **kwargs):
...
def effect_inference(self, *args, **kwargs):
...
def shap_values(self, *args, **kwargs):
...
class Econml(CausalEstimator):
"""Wrapper class for estimators from the EconML library.
Supports additional parameters as listed below. For init and fit
parameters of each estimator, refer to the EconML docs.
"""
def __init__(
self,
identified_estimand: IdentifiedEstimand,
econml_estimator: Union[_EconmlEstimator, str],
test_significance: bool = False,
evaluate_effect_strength: bool = False,
confidence_intervals: bool = False,
num_null_simulations: int = CausalEstimator.DEFAULT_NUMBER_OF_SIMULATIONS_STAT_TEST,
num_simulations: int = CausalEstimator.DEFAULT_NUMBER_OF_SIMULATIONS_CI,
sample_size_fraction: int = CausalEstimator.DEFAULT_SAMPLE_SIZE_FRACTION,
confidence_level: float = CausalEstimator.DEFAULT_CONFIDENCE_LEVEL,
need_conditional_estimates: Union[bool, str] = "auto",
num_quantiles_to_discretize_cont_cols: int = CausalEstimator.NUM_QUANTILES_TO_DISCRETIZE_CONT_COLS,
**kwargs,
):
"""
:param identified_estimand: probability expression
representing the target identified estimand to estimate.
:param econml_estimator: Instance of an econml estimator class, or the
fully qualified name of the class (deprecated). For example, 'econml.dml.DML'
:param test_significance: Binary flag or a string indicating whether to test significance and by which method. All estimators support test_significance="bootstrap" that estimates a p-value for the obtained estimate using the bootstrap method. Individual estimators can override this to support custom testing methods. The bootstrap method supports an optional parameter, num_null_simulations. If False, no testing is done. If True, significance of the estimate is tested using the custom method if available, otherwise by bootstrap.
:param evaluate_effect_strength: (Experimental) whether to evaluate the strength of effect
:param confidence_intervals: Binary flag or a string indicating whether the confidence intervals should be computed and which method should be used. All methods support estimation of confidence intervals using the bootstrap method by using the parameter confidence_intervals="bootstrap". The bootstrap method takes in two arguments (num_simulations and sample_size_fraction) that can be optionally specified in the params dictionary. Estimators may also override this to implement their own confidence interval method. If this parameter is False, no confidence intervals are computed. If True, confidence intervals are computed by the estimator's specific method if available, otherwise through bootstrap
:param num_null_simulations: The number of simulations for testing the
statistical significance of the estimator
:param num_simulations: The number of simulations for finding the
confidence interval (and/or standard error) for an estimate
:param sample_size_fraction: The size of the sample for the bootstrap
estimator
:param confidence_level: The confidence level of the confidence
interval estimate
:param need_conditional_estimates: Boolean flag indicating whether
conditional estimates should be computed. Defaults to True if
there are effect modifiers in the graph
:param num_quantiles_to_discretize_cont_cols: The number of quantiles
into which a numeric effect modifier is split, to enable
estimation of conditional treatment effect over it.
:param kwargs: (optional) Additional estimator-specific parameters
"""
super().__init__(
identified_estimand=identified_estimand,
test_significance=test_significance,
evaluate_effect_strength=evaluate_effect_strength,
confidence_intervals=confidence_intervals,
num_null_simulations=num_null_simulations,
num_simulations=num_simulations,
sample_size_fraction=sample_size_fraction,
confidence_level=confidence_level,
need_conditional_estimates=need_conditional_estimates,
num_quantiles_to_discretize_cont_cols=num_quantiles_to_discretize_cont_cols,
econml_estimator=econml_estimator,
**kwargs,
)
if isinstance(econml_estimator, str):
warn(
"Using a string to specify the value for econml_estimator is now deprecated, please provide an instance of a econml object",
DeprecationWarning,
stacklevel=2,
)
estimator_class = self._get_econml_class_object(econml_estimator)
self.estimator = estimator_class(**kwargs["init_params"])
else:
self.estimator = econml_estimator
self.logger.info("INFO: Using EconML Estimator")
self.identifier_method = self._target_estimand.identifier_method
def fit(
self,
data: pd.DataFrame,
treatment_name: str,
outcome_name: str,
effect_modifier_names: Optional[List[str]] = None,
**kwargs,
):
"""
Fits the estimator with data for effect estimation
:param data: data frame containing the data
:param treatment_name: name of the treatment variable
:param outcome_name: name of the outcome variable
:param effect_modifier_names: Variables on which to compute separate
effects, or return a heterogeneous effect function. Not all
methods support this currently.
"""
self._set_data(data, treatment_name, outcome_name)
self._set_effect_modifiers(effect_modifier_names)
# Save parameters for later refuter fitting
self._econml_fit_params = kwargs
self._observed_common_causes_names = self._target_estimand.get_backdoor_variables().copy()
# Enforcing this ordering is necessary to feed through the propensity values from the dataset
self._observed_common_causes_names = [
c for c in self._observed_common_causes_names if "propensity" not in c
] + sorted([c for c in self._observed_common_causes_names if "propensity" in c])
# For metalearners only--issue a warning if w contains variables not in x
if self.estimator.__module__.endswith("metalearners"):
effect_modifier_names = []
if self._effect_modifier_names is not None:
effect_modifier_names = self._effect_modifier_names.copy()
w_diff_x = [w for w in self._observed_common_causes_names if w not in effect_modifier_names]
if len(w_diff_x) > 0:
self.logger.warn(
"Concatenating common_causes and effect_modifiers and providing a single list of variables to metalearner estimator method, "
+ self.estimator.__class__.__name__
+ ". EconML metalearners accept a single X argument."
)
effect_modifier_names.extend(w_diff_x)
# Override the effect_modifiers set in CausalEstimator.__init__()
# Also only update self._effect_modifiers, and create a copy of self._effect_modifier_names
# the latter can be used by other estimator methods later
self._effect_modifiers = self._data[effect_modifier_names]
self._effect_modifiers = pd.get_dummies(self._effect_modifiers, drop_first=True)
self._effect_modifier_names = effect_modifier_names
self.logger.debug("Effect modifiers: " + ",".join(effect_modifier_names))
if self._observed_common_causes_names:
self._observed_common_causes = self._data[self._observed_common_causes_names]
self._observed_common_causes = pd.get_dummies(self._observed_common_causes, drop_first=True)
else:
self._observed_common_causes = None
self.logger.debug("Back-door variables used:" + ",".join(self._observed_common_causes_names))
# Instrumental variables names, if present
# choosing the instrumental variable to use
if getattr(self, "iv_instrument_name", None) is None:
self.estimating_instrument_names = self._target_estimand.instrumental_variables
else:
self.estimating_instrument_names = parse_state(self.iv_instrument_name)
if self.estimating_instrument_names:
self._estimating_instruments = self._data[self.estimating_instrument_names]
self._estimating_instruments = pd.get_dummies(self._estimating_instruments, drop_first=True)
else:
self._estimating_instruments = None
self.symbolic_estimator = self.construct_symbolic_estimator(self._target_estimand)
self.logger.info(self.symbolic_estimator)
X = None
W = None # common causes/ confounders
Z = None # Instruments
Y = self._outcome
T = self._treatment
if self._effect_modifiers is not None and len(self._effect_modifiers) > 0:
X = self._effect_modifiers
if self._observed_common_causes_names:
W = self._observed_common_causes
if self.estimating_instrument_names:
Z = self._estimating_instruments
named_data_args = {"Y": Y, "T": T, "X": X, "W": W, "Z": Z}
# Calling the econml estimator's fit method
estimator_argspec = inspect.getfullargspec(inspect.unwrap(self.estimator.fit))
# As of v0.9, econml has some keyword-only arguments
estimator_named_args = estimator_argspec.args + estimator_argspec.kwonlyargs
estimator_data_args = {
arg: named_data_args[arg] for arg in named_data_args.keys() if arg in estimator_named_args
}
self.estimator.fit(**estimator_data_args, **kwargs)
return self
def _get_econml_class_object(self, module_method_name, *args, **kwargs):
# from https://www.bnmetrics.com/blog/factory-pattern-in-python3-simple-version
try:
(module_name, _, class_name) = module_method_name.rpartition(".")
estimator_module = import_module(module_name)
estimator_class = getattr(estimator_module, class_name)
except (AttributeError, AssertionError, ImportError):
raise ImportError(
"Error loading {}.{}. Double-check the method name and ensure that all econml dependencies are installed.".format(
module_name, class_name
)
)
return estimator_class
def estimate_effect(
self, data: pd.DataFrame = None, treatment_value: Any = 1, control_value: Any = 0, target_units=None, **_
):
"""
:param data: dataframe containing the data on which the treatment effect is to be estimated.
:param treatment_value: value of the treatment variable for which the effect is to be estimated.
:param control_value: value of the treatment variable that denotes its absence (usually 0)
:param target_units: The units for which the treatment effect should be estimated.
It can be a DataFrame that contains values of the effect_modifiers, and the effect will be estimated only for this new data.
It can also be a lambda function that can be used as an index for the data (pandas DataFrame) to select the required rows.
"""
if data is None:
data = self._data
self._target_units = target_units
self._treatment_value = treatment_value
self._control_value = control_value
n_samples = self._treatment.shape[0]
X = None # Effect modifiers
if self._effect_modifiers is not None and len(self._effect_modifiers) > 0:
X = self._effect_modifiers
X_test = X
if X is not None:
if type(target_units) is pd.DataFrame:
X_test = target_units
elif callable(target_units):
filtered_rows = data.where(target_units)
boolean_criterion = np.array(filtered_rows.notnull().iloc[:, 0])
X_test = X[boolean_criterion]
# Changing shape to a list for a singleton value
self._treatment_value = parse_state(self._treatment_value)
est = self.effect(X_test)
ate = np.mean(est, axis=0) # one value per treatment value
if len(ate) == 1:
ate = ate[0]
if self._confidence_intervals:
self.effect_intervals = self.effect_interval(X_test)
else:
self.effect_intervals = None
estimate = CausalEstimate(
estimate=ate,
control_value=control_value,
treatment_value=treatment_value,
target_estimand=self._target_estimand,
realized_estimand_expr=self.symbolic_estimator,
cate_estimates=est,
effect_intervals=self.effect_intervals,
_estimator_object=self.estimator,
)
estimate.add_estimator(self)
return estimate
def _estimate_confidence_intervals(self, confidence_level=None, method=None):
"""Returns None if the confidence interval has not been calculated."""
return self.effect_intervals
def _do(self, x):
raise NotImplementedError
def construct_symbolic_estimator(self, estimand):
expr = "b: " + ", ".join(estimand.outcome_variable) + "~"
# TODO -- fix: we are actually conditioning on positive treatment (d=1)
if self.estimator.__module__.endswith("metalearners"):
var_list = estimand.treatment_variable + self._effect_modifier_names
expr += "+".join(var_list)
else:
var_list = estimand.treatment_variable + self._observed_common_causes_names
expr += "+".join(var_list)
expr += " | " + ",".join(self._effect_modifier_names)
return expr
def shap_values(self, df: pd.DataFrame, *args, **kwargs):
return self.estimator.shap_values(df[self._effect_modifier_names].values, *args, **kwargs)
def apply_multitreatment(self, df: pd.DataFrame, fun: Callable, *args, **kwargs):
ests = []
assert not isinstance(self._treatment_value, str)
assert is_sequence(self._treatment_value)
if df is None:
filtered_df = None
else:
filtered_df = df[self._effect_modifier_names].values
for tv in self._treatment_value:
ests.append(
fun(
filtered_df,
T0=self._control_value,
T1=tv,
*args,
**kwargs,
)
)
est = np.stack(ests, axis=1)
return est
def effect(self, df: pd.DataFrame, *args, **kwargs) -> np.ndarray:
"""
Pointwise estimated treatment effect,
output shape n_units x n_treatment_values (not counting control)
:param df: Features of the units to evaluate
:param args: passed through to the underlying estimator
:param kwargs: passed through to the underlying estimator
"""
def effect_fun(filtered_df, T0, T1, *args, **kwargs):
return self.estimator.effect(filtered_df, T0=T0, T1=T1, *args, **kwargs)
return self.apply_multitreatment(df, effect_fun, *args, **kwargs)
def effect_interval(self, df: pd.DataFrame, *args, **kwargs) -> np.ndarray:
"""
Pointwise confidence intervals for the estimated treatment effect
:param df: Features of the units to evaluate
:param args: passed through to the underlying estimator
:param kwargs: passed through to the underlying estimator
"""
def effect_interval_fun(filtered_df, T0, T1, *args, **kwargs):
return self.estimator.effect_interval(
filtered_df, T0=T0, T1=T1, alpha=1 - self.confidence_level, *args, **kwargs
)
return self.apply_multitreatment(df, effect_interval_fun, *args, **kwargs)
def effect_inference(self, df: pd.DataFrame, *args, **kwargs):
"""
Inference (uncertainty) results produced by the underlying EconML estimator
:param df: Features of the units to evaluate
:param args: passed through to the underlying estimator
:param kwargs: passed through to the underlying estimator
"""
def effect_inference_fun(filtered_df, T0, T1, *args, **kwargs):
return self.estimator.effect_inference(filtered_df, T0=T0, T1=T1, *args, **kwargs)
return self.apply_multitreatment(df, effect_inference_fun, *args, **kwargs)
def effect_tt(self, df: pd.DataFrame, *args, **kwargs):
"""
Effect of the actual treatment that was applied to each unit
("effect of Treatment on the Treated")
:param df: Features of the units to evaluate
:param args: passed through to estimator.effect()
:param kwargs: passed through to estimator.effect()
"""
eff = self.effect(df, *args, **kwargs).reshape((len(df), len(self._treatment_value)))
out = np.zeros(len(df))
treatment_value = parse_state(self._treatment_value)
treatment_name = parse_state(self._treatment_name)[0]
eff = np.reshape(eff, (len(df), len(treatment_value)))
# For each unit, return the estimated effect of the treatment value
# that was actually applied to the unit
for c, col in enumerate(treatment_value):
out[df[treatment_name] == col] = eff[df[treatment_name] == col, c]
return pd.Series(data=out, index=df.index)
| andresmor-ms | 11c4e0dafd6e824eb81ad14262457d954ae61468 | affe0952f4aba6845247355c171565510c2c1673 | > we could even deprecate the string now and move the code that creates an econml instance from the string to the CausalModel estimate_effect class
From the point of view of getting more concrete on this, I like it. But in terms of backwards compatibility, I'm not so sure we should be that bold already. That mostly depends on the question of whether this code is used directly by users or only ever invoked through estimate_effect. @amit-sharma?
Either way, we could think about raising a `DeprecationWarning` already when a string is passed. | petergtz | 222 |
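A minimal sketch of that suggestion (the helper name here is hypothetical; the merged code above ends up issuing essentially this warning inside Econml.__init__):
import warnings

def _warn_if_string_estimator(econml_estimator):
    # Hypothetical helper: keep accepting the legacy string form for now,
    # but tell callers it is deprecated.
    if isinstance(econml_estimator, str):
        warnings.warn(
            "Specifying the econml estimator as a string is deprecated; "
            "pass an econml estimator instance instead.",
            DeprecationWarning,
            stacklevel=2,
        )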
py-why/dowhy | 746 | Functional api/causal estimators | (PR description, dates, filepath, and before/after file contents identical to the row above)
| andresmor-ms | 11c4e0dafd6e824eb81ad14262457d954ae61468 | affe0952f4aba6845247355c171565510c2c1673 | This is a nice way of using the protocol! | amit-sharma | 223
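For context on that remark: typing.Protocol gives structural ("duck") typing, so any object exposing the right methods satisfies _EconmlEstimator without inheriting from it. A minimal self-contained sketch with illustrative names:
from typing import Protocol

class _Effecting(Protocol):
    def fit(self, *args, **kwargs): ...
    def effect(self, *args, **kwargs): ...

class HomegrownEstimator:  # note: no inheritance from _Effecting
    def fit(self, *args, **kwargs):
        return self
    def effect(self, *args, **kwargs):
        return 0.0

est: _Effecting = HomegrownEstimator()  # accepted by static structural checks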
py-why/dowhy | 746 | Functional api/causal estimators | * Introduce `fit()` method to estimators.
* Refactor constructors to avoid using `*args` and `**kwargs` and have more explicit parameters.
* Refactor refuters and other parts of the code to use `fit()` and modify arguments to `estimate_effect()` | null | 2022-11-04 16:15:39+00:00 | 2022-12-03 17:07:53+00:00 | dowhy/causal_estimators/econml.py | import inspect
from importlib import import_module
from typing import Callable
import numpy as np
import pandas as pd
from numpy.distutils.misc_util import is_sequence
from dowhy.causal_estimator import CausalEstimate, CausalEstimator
from dowhy.utils.api import parse_state
class Econml(CausalEstimator):
"""Wrapper class for estimators from the EconML library.
For a list of standard args and kwargs, see documentation for
:class:`~dowhy.causal_estimator.CausalEstimator`.
Supports additional parameters as listed below. For init and fit
parameters of each estimator, refer to the EconML docs.
"""
def __init__(self, *args, econml_methodname, **kwargs):
"""
:param econml_methodname: Fully qualified name of econml estimator
class. For example, 'econml.dml.DML'
"""
# Required to ensure that self.method_params contains all the
# parameters to create an object of this class
args_dict = {k: v for k, v in locals().items() if k not in type(self)._STD_INIT_ARGS}
args_dict.update(kwargs)
super().__init__(*args, **args_dict)
self._econml_methodname = econml_methodname
self.logger.info("INFO: Using EconML Estimator")
self.identifier_method = self._target_estimand.identifier_method
self._observed_common_causes_names = self._target_estimand.get_backdoor_variables().copy()
# Enforcing this ordering is necessary to feed through the propensity values from dataset
self._observed_common_causes_names = [
c for c in self._observed_common_causes_names if "propensity" not in c
] + sorted([c for c in self._observed_common_causes_names if "propensity" in c])
# For metalearners only--issue a warning if w contains variables not in x
(module_name, _, class_name) = self._econml_methodname.rpartition(".")
if module_name.endswith("metalearners"):
effect_modifier_names = []
if self._effect_modifier_names is not None:
effect_modifier_names = self._effect_modifier_names.copy()
w_diff_x = [w for w in self._observed_common_causes_names if w not in effect_modifier_names]
if len(w_diff_x) > 0:
self.logger.warn(
"Concatenating common_causes and effect_modifiers and providing a single list of variables to metalearner estimator method, "
+ class_name
+ ". EconML metalearners accept a single X argument."
)
effect_modifier_names.extend(w_diff_x)
# Override the effect_modifiers set in CausalEstimator.__init__()
# Also only update self._effect_modifiers, and create a copy of self._effect_modifier_names
# the latter can be used by other estimator methods later
self._effect_modifiers = self._data[effect_modifier_names]
self._effect_modifiers = pd.get_dummies(self._effect_modifiers, drop_first=True)
self._effect_modifier_names = effect_modifier_names
self.logger.debug("Effect modifiers: " + ",".join(effect_modifier_names))
if self._observed_common_causes_names:
self._observed_common_causes = self._data[self._observed_common_causes_names]
self._observed_common_causes = pd.get_dummies(self._observed_common_causes, drop_first=True)
else:
self._observed_common_causes = None
self.logger.debug("Back-door variables used:" + ",".join(self._observed_common_causes_names))
# Instrumental variables names, if present
# choosing the instrumental variable to use
if getattr(self, "iv_instrument_name", None) is None:
self.estimating_instrument_names = self._target_estimand.instrumental_variables
else:
self.estimating_instrument_names = parse_state(self.iv_instrument_name)
if self.estimating_instrument_names:
self._estimating_instruments = self._data[self.estimating_instrument_names]
self._estimating_instruments = pd.get_dummies(self._estimating_instruments, drop_first=True)
else:
self._estimating_instruments = None
self.estimator = None
self.symbolic_estimator = self.construct_symbolic_estimator(self._target_estimand)
self.logger.info(self.symbolic_estimator)
def _get_econml_class_object(self, module_method_name, *args, **kwargs):
# from https://www.bnmetrics.com/blog/factory-pattern-in-python3-simple-version
try:
(module_name, _, class_name) = module_method_name.rpartition(".")
estimator_module = import_module(module_name)
estimator_class = getattr(estimator_module, class_name)
except (AttributeError, AssertionError, ImportError):
raise ImportError(
"Error loading {}.{}. Double-check the method name and ensure that all econml dependencies are installed.".format(
module_name, class_name
)
)
return estimator_class
def _estimate_effect(self):
n_samples = self._treatment.shape[0]
X = None # Effect modifiers
W = None # common causes/ confounders
Z = None # Instruments
Y = self._outcome
T = self._treatment
if self._effect_modifiers is not None:
X = self._effect_modifiers
if self._observed_common_causes_names:
W = self._observed_common_causes
if self.estimating_instrument_names:
Z = self._estimating_instruments
named_data_args = {"Y": Y, "T": T, "X": X, "W": W, "Z": Z}
if self.estimator is None:
estimator_class = self._get_econml_class_object(self._econml_methodname)
self.estimator = estimator_class(**self.method_params["init_params"])
# Calling the econml estimator's fit method
estimator_argspec = inspect.getfullargspec(inspect.unwrap(self.estimator.fit))
# As of v0.9, econml has some keyword-only arguments
estimator_named_args = estimator_argspec.args + estimator_argspec.kwonlyargs
estimator_data_args = {
arg: named_data_args[arg] for arg in named_data_args.keys() if arg in estimator_named_args
}
if self.method_params["fit_params"] is not False:
self.estimator.fit(**estimator_data_args, **self.method_params["fit_params"])
X_test = X
if X is not None:
if type(self._target_units) is pd.DataFrame:
X_test = self._target_units
elif callable(self._target_units):
filtered_rows = self._data.where(self._target_units)
boolean_criterion = np.array(filtered_rows.notnull().iloc[:, 0])
X_test = X[boolean_criterion]
# Changing shape to a list for a singleton value
self._treatment_value = parse_state(self._treatment_value)
est = self.effect(X_test)
ate = np.mean(est, axis=0) # one value per treatment value
if len(ate) == 1:
ate = ate[0]
if self._confidence_intervals:
self.effect_intervals = self.effect_interval(X_test)
else:
self.effect_intervals = None
estimate = CausalEstimate(
estimate=ate,
control_value=self._control_value,
treatment_value=self._treatment_value,
target_estimand=self._target_estimand,
realized_estimand_expr=self.symbolic_estimator,
cate_estimates=est,
effect_intervals=self.effect_intervals,
_estimator_object=self.estimator,
)
return estimate
def _estimate_confidence_intervals(self, confidence_level=None, method=None):
"""Returns None if the confidence interval has not been calculated."""
return self.effect_intervals
def _do(self, x):
raise NotImplementedError
def construct_symbolic_estimator(self, estimand):
expr = "b: " + ", ".join(estimand.outcome_variable) + "~"
# TODO -- fix: we are actually conditioning on positive treatment (d=1)
(module_name, _, class_name) = self._econml_methodname.rpartition(".")
if module_name.endswith("metalearners"):
var_list = estimand.treatment_variable + self._effect_modifier_names
expr += "+".join(var_list)
else:
var_list = estimand.treatment_variable + self._observed_common_causes_names
expr += "+".join(var_list)
expr += " | " + ",".join(self._effect_modifier_names)
return expr
def shap_values(self, df: pd.DataFrame, *args, **kwargs):
return self.estimator.shap_values(df[self._effect_modifier_names].values, *args, **kwargs)
def apply_multitreatment(self, df: pd.DataFrame, fun: Callable, *args, **kwargs):
ests = []
assert not isinstance(self._treatment_value, str)
assert is_sequence(self._treatment_value)
if df is None:
filtered_df = None
else:
filtered_df = df[self._effect_modifier_names].values
for tv in self._treatment_value:
ests.append(
fun(
filtered_df,
T0=self._control_value,
T1=tv,
*args,
**kwargs,
)
)
est = np.stack(ests, axis=1)
return est
def effect(self, df: pd.DataFrame, *args, **kwargs) -> np.ndarray:
"""
Pointwise estimated treatment effect,
output shape n_units x n_treatment_values (not counting control)
:param df: Features of the units to evaluate
:param args: passed through to the underlying estimator
:param kwargs: passed through to the underlying estimator
"""
def effect_fun(filtered_df, T0, T1, *args, **kwargs):
return self.estimator.effect(filtered_df, T0=T0, T1=T1, *args, **kwargs)
return self.apply_multitreatment(df, effect_fun, *args, **kwargs)
def effect_interval(self, df: pd.DataFrame, *args, **kwargs) -> np.ndarray:
"""
Pointwise confidence intervals for the estimated treatment effect
:param df: Features of the units to evaluate
:param args: passed through to the underlying estimator
:param kwargs: passed through to the underlying estimator
"""
def effect_interval_fun(filtered_df, T0, T1, *args, **kwargs):
return self.estimator.effect_interval(
filtered_df, T0=T0, T1=T1, alpha=1 - self.confidence_level, *args, **kwargs
)
return self.apply_multitreatment(df, effect_interval_fun, *args, **kwargs)
def effect_inference(self, df: pd.DataFrame, *args, **kwargs):
"""
Inference (uncertainty) results produced by the underlying EconML estimator
:param df: Features of the units to evaluate
:param args: passed through to the underlying estimator
:param kwargs: passed through to the underlying estimator
"""
def effect_inference_fun(filtered_df, T0, T1, *args, **kwargs):
return self.estimator.effect_inference(filtered_df, T0=T0, T1=T1, *args, **kwargs)
return self.apply_multitreatment(df, effect_inference_fun, *args, **kwargs)
def effect_tt(self, df: pd.DataFrame, *args, **kwargs):
"""
Effect of the actual treatment that was applied to each unit
("effect of Treatment on the Treated")
:param df: Features of the units to evaluate
:param args: passed through to estimator.effect()
:param kwargs: passed through to estimator.effect()
"""
eff = self.effect(df, *args, **kwargs).reshape((len(df), len(self._treatment_value)))
out = np.zeros(len(df))
treatment_value = parse_state(self._treatment_value)
treatment_name = parse_state(self._treatment_name)[0]
eff = np.reshape(eff, (len(df), len(treatment_value)))
# For each unit, return the estimated effect of the treatment value
# that was actually applied to the unit
for c, col in enumerate(treatment_value):
out[df[treatment_name] == col] = eff[df[treatment_name] == col, c]
return pd.Series(data=out, index=df.index)
| import inspect
from importlib import import_module
from typing import Any, Callable, List, Optional, Protocol, Union
from warnings import warn
import numpy as np
import pandas as pd
from numpy.distutils.misc_util import is_sequence
from dowhy.causal_estimator import CausalEstimate, CausalEstimator
from dowhy.causal_identifier import IdentifiedEstimand
from dowhy.utils.api import parse_state
class _EconmlEstimator(Protocol):
def fit(self, *args, **kwargs):
...
def effect(self, *args, **kwargs):
...
def effect_interval(self, *args, **kwargs):
...
def effect_inference(self, *args, **kwargs):
...
def shap_values(self, *args, **kwargs):
...
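# Editor's note (illustrative sketch, not part of the merged file): because
# _EconmlEstimator is a typing.Protocol, conformance is structural -- any object
# that provides fit/effect/effect_interval/effect_inference/shap_values satisfies
# it without inheriting from it. Assuming an econml install, usage looks like:
#
#     from econml.dml import LinearDML
#     est = Econml(identified_estimand, econml_estimator=LinearDML())
#
# Here LinearDML never references _EconmlEstimator, yet type-checks against it.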
class Econml(CausalEstimator):
"""Wrapper class for estimators from the EconML library.
Supports additional parameters as listed below. For init and fit
parameters of each estimator, refer to the EconML docs.
"""
def __init__(
self,
identified_estimand: IdentifiedEstimand,
econml_estimator: Union[_EconmlEstimator, str],
test_significance: bool = False,
evaluate_effect_strength: bool = False,
confidence_intervals: bool = False,
num_null_simulations: int = CausalEstimator.DEFAULT_NUMBER_OF_SIMULATIONS_STAT_TEST,
num_simulations: int = CausalEstimator.DEFAULT_NUMBER_OF_SIMULATIONS_CI,
sample_size_fraction: int = CausalEstimator.DEFAULT_SAMPLE_SIZE_FRACTION,
confidence_level: float = CausalEstimator.DEFAULT_CONFIDENCE_LEVEL,
need_conditional_estimates: Union[bool, str] = "auto",
num_quantiles_to_discretize_cont_cols: int = CausalEstimator.NUM_QUANTILES_TO_DISCRETIZE_CONT_COLS,
**kwargs,
):
"""
:param identified_estimand: probability expression
representing the target identified estimand to estimate.
:param econml_estimator: Instance of an econml estimator class, or the fully
qualified name of the class (deprecated). For example, 'econml.dml.DML'
:param test_significance: Binary flag or a string indicating whether to test significance and by which method. All estimators support test_significance="bootstrap" that estimates a p-value for the obtained estimate using the bootstrap method. Individual estimators can override this to support custom testing methods. The bootstrap method supports an optional parameter, num_null_simulations. If False, no testing is done. If True, significance of the estimate is tested using the custom method if available, otherwise by bootstrap.
:param evaluate_effect_strength: (Experimental) whether to evaluate the strength of effect
:param confidence_intervals: Binary flag or a string indicating whether the confidence intervals should be computed and which method should be used. All methods support estimation of confidence intervals using the bootstrap method by using the parameter confidence_intervals="bootstrap". The bootstrap method takes in two arguments (num_simulations and sample_size_fraction) that can be optionally specified in the params dictionary. Estimators may also override this to implement their own confidence interval method. If this parameter is False, no confidence intervals are computed. If True, confidence intervals are computed by the estimator's specific method if available, otherwise through bootstrap
:param num_null_simulations: The number of simulations for testing the
statistical significance of the estimator
:param num_simulations: The number of simulations for finding the
confidence interval (and/or standard error) for an estimate
:param sample_size_fraction: The size of the sample for the bootstrap
estimator
:param confidence_level: The confidence level of the confidence
interval estimate
:param need_conditional_estimates: Boolean flag indicating whether
conditional estimates should be computed. Defaults to True if
there are effect modifiers in the graph
:param num_quantiles_to_discretize_cont_cols: The number of quantiles
into which a numeric effect modifier is split, to enable
estimation of conditional treatment effect over it.
:param kwargs: (optional) Additional estimator-specific parameters
"""
super().__init__(
identified_estimand=identified_estimand,
test_significance=test_significance,
evaluate_effect_strength=evaluate_effect_strength,
confidence_intervals=confidence_intervals,
num_null_simulations=num_null_simulations,
num_simulations=num_simulations,
sample_size_fraction=sample_size_fraction,
confidence_level=confidence_level,
need_conditional_estimates=need_conditional_estimates,
num_quantiles_to_discretize_cont_cols=num_quantiles_to_discretize_cont_cols,
econml_estimator=econml_estimator,
**kwargs,
)
if isinstance(econml_estimator, str):
warn(
"Using a string to specify the value for econml_estimator is now deprecated, please provide an instance of a econml object",
DeprecationWarning,
stacklevel=2,
)
estimator_class = self._get_econml_class_object(econml_estimator)
self.estimator = estimator_class(**kwargs["init_params"])
else:
self.estimator = econml_estimator
self.logger.info("INFO: Using EconML Estimator")
self.identifier_method = self._target_estimand.identifier_method
def fit(
self,
data: pd.DataFrame,
treatment_name: str,
outcome_name: str,
effect_modifier_names: Optional[List[str]] = None,
**kwargs,
):
"""
Fits the estimator with data for effect estimation
:param data: data frame containing the data
:param treatment_name: name of the treatment variable
:param outcome_name: name of the outcome variable
:param effect_modifier_names: Variables on which to compute separate
effects, or return a heterogeneous effect function. Not all
methods support this currently.
"""
self._set_data(data, treatment_name, outcome_name)
self._set_effect_modifiers(effect_modifier_names)
# Save parameters for later refuter fitting
self._econml_fit_params = kwargs
self._observed_common_causes_names = self._target_estimand.get_backdoor_variables().copy()
# Enforcing this ordering is necessary to feed through the propensity values from dataset
self._observed_common_causes_names = [
c for c in self._observed_common_causes_names if "propensity" not in c
] + sorted([c for c in self._observed_common_causes_names if "propensity" in c])
# For metalearners only--issue a warning if w contains variables not in x
if self.estimator.__module__.endswith("metalearners"):
effect_modifier_names = []
if self._effect_modifier_names is not None:
effect_modifier_names = self._effect_modifier_names.copy()
w_diff_x = [w for w in self._observed_common_causes_names if w not in effect_modifier_names]
if len(w_diff_x) > 0:
self.logger.warn(
"Concatenating common_causes and effect_modifiers and providing a single list of variables to metalearner estimator method, "
+ self.estimator.__class__.__name__
+ ". EconML metalearners accept a single X argument."
)
effect_modifier_names.extend(w_diff_x)
# Override the effect_modifiers set in CausalEstimator.__init__()
# Also only update self._effect_modifiers, and create a copy of self._effect_modifier_names
# the latter can be used by other estimator methods later
self._effect_modifiers = self._data[effect_modifier_names]
self._effect_modifiers = pd.get_dummies(self._effect_modifiers, drop_first=True)
self._effect_modifier_names = effect_modifier_names
self.logger.debug("Effect modifiers: " + ",".join(effect_modifier_names))
if self._observed_common_causes_names:
self._observed_common_causes = self._data[self._observed_common_causes_names]
self._observed_common_causes = pd.get_dummies(self._observed_common_causes, drop_first=True)
else:
self._observed_common_causes = None
self.logger.debug("Back-door variables used:" + ",".join(self._observed_common_causes_names))
# Instrumental variables names, if present
# choosing the instrumental variable to use
if getattr(self, "iv_instrument_name", None) is None:
self.estimating_instrument_names = self._target_estimand.instrumental_variables
else:
self.estimating_instrument_names = parse_state(self.iv_instrument_name)
if self.estimating_instrument_names:
self._estimating_instruments = self._data[self.estimating_instrument_names]
self._estimating_instruments = pd.get_dummies(self._estimating_instruments, drop_first=True)
else:
self._estimating_instruments = None
self.symbolic_estimator = self.construct_symbolic_estimator(self._target_estimand)
self.logger.info(self.symbolic_estimator)
X = None  # Effect modifiers
W = None # common causes/ confounders
Z = None # Instruments
Y = self._outcome
T = self._treatment
if self._effect_modifiers is not None and len(self._effect_modifiers) > 0:
X = self._effect_modifiers
if self._observed_common_causes_names:
W = self._observed_common_causes
if self.estimating_instrument_names:
Z = self._estimating_instruments
named_data_args = {"Y": Y, "T": T, "X": X, "W": W, "Z": Z}
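# Editor's note: econml's fit() methods name their data arguments Y (outcome),
# T (treatment), X (effect modifiers), W (observed confounders) and Z (instruments);
# the introspection below forwards only those arguments that the wrapped
# estimator's fit() signature actually accepts.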
# Calling the econml estimator's fit method
estimator_argspec = inspect.getfullargspec(inspect.unwrap(self.estimator.fit))
# As of v0.9, econml has some keyword-only arguments
estimator_named_args = estimator_argspec.args + estimator_argspec.kwonlyargs
estimator_data_args = {
arg: named_data_args[arg] for arg in named_data_args.keys() if arg in estimator_named_args
}
self.estimator.fit(**estimator_data_args, **kwargs)
return self
def _get_econml_class_object(self, module_method_name, *args, **kwargs):
# from https://www.bnmetrics.com/blog/factory-pattern-in-python3-simple-version
try:
(module_name, _, class_name) = module_method_name.rpartition(".")
estimator_module = import_module(module_name)
estimator_class = getattr(estimator_module, class_name)
except (AttributeError, AssertionError, ImportError):
raise ImportError(
"Error loading {}.{}. Double-check the method name and ensure that all econml dependencies are installed.".format(
module_name, class_name
)
)
return estimator_class
def estimate_effect(
self, data: pd.DataFrame = None, treatment_value: Any = 1, control_value: Any = 0, target_units=None, **_
):
"""
:param data: dataframe containing the data on which treatment effect is to be estimated.
:param treatment_value: value of the treatment variable for which the effect is to be estimated.
:param control_value: value of the treatment variable that denotes its absence (usually 0)
:param target_units: The units for which the treatment effect should be estimated.
It can be a DataFrame that contains values of the effect_modifiers and effect will be estimated only for this new data.
It can also be a lambda function that can be used as an index for the data (pandas DataFrame) to select the required rows.
"""
if data is None:
data = self._data
self._target_units = target_units
self._treatment_value = treatment_value
self._control_value = control_value
n_samples = self._treatment.shape[0]
X = None # Effect modifiers
if self._effect_modifiers is not None and len(self._effect_modifiers) > 0:
X = self._effect_modifiers
X_test = X
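# target_units may be a DataFrame of effect-modifier values (scored directly) or a
# boolean row-selector function applied to the fitted data; either way, effects
# are computed only for the selected units.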
if X is not None:
if type(target_units) is pd.DataFrame:
X_test = target_units
elif callable(target_units):
filtered_rows = data.where(target_units)
boolean_criterion = np.array(filtered_rows.notnull().iloc[:, 0])
X_test = X[boolean_criterion]
# Changing shape to a list for a singleton value
self._treatment_value = parse_state(self._treatment_value)
est = self.effect(X_test)
ate = np.mean(est, axis=0) # one value per treatment value
if len(ate) == 1:
ate = ate[0]
if self._confidence_intervals:
self.effect_intervals = self.effect_interval(X_test)
else:
self.effect_intervals = None
estimate = CausalEstimate(
estimate=ate,
control_value=control_value,
treatment_value=treatment_value,
target_estimand=self._target_estimand,
realized_estimand_expr=self.symbolic_estimator,
cate_estimates=est,
effect_intervals=self.effect_intervals,
_estimator_object=self.estimator,
)
estimate.add_estimator(self)
return estimate
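# Editor's usage sketch for the fit/estimate_effect split introduced in this PR
# (column names "v0", "y" and "X0" are hypothetical; LinearDML is a real econml class):
#
#     estimator = Econml(identified_estimand, econml_estimator=LinearDML())
#     estimator.fit(df, treatment_name="v0", outcome_name="y",
#                   effect_modifier_names=["X0"])
#     estimate = estimator.estimate_effect(df, treatment_value=1, control_value=0)
#
# fit() binds the data and trains the underlying econml model once;
# estimate_effect() then only selects target units and aggregates per-unit effects.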
def _estimate_confidence_intervals(self, confidence_level=None, method=None):
"""Returns None if the confidence interval has not been calculated."""
return self.effect_intervals
def _do(self, x):
raise NotImplementedError
def construct_symbolic_estimator(self, estimand):
expr = "b: " + ", ".join(estimand.outcome_variable) + "~"
# TODO -- fix: we are actually conditioning on positive treatment (d=1)
if self.estimator.__module__.endswith("metalearners"):
var_list = estimand.treatment_variable + self._effect_modifier_names
expr += "+".join(var_list)
else:
var_list = estimand.treatment_variable + self._observed_common_causes_names
expr += "+".join(var_list)
expr += " | " + ",".join(self._effect_modifier_names)
return expr
def shap_values(self, df: pd.DataFrame, *args, **kwargs):
return self.estimator.shap_values(df[self._effect_modifier_names].values, *args, **kwargs)
def apply_multitreatment(self, df: pd.DataFrame, fun: Callable, *args, **kwargs):
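# For each treatment value in self._treatment_value, evaluate `fun` against the
# control value and stack the per-treatment results column-wise, giving an array
# of shape (n_units, n_treatment_values).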
ests = []
assert not isinstance(self._treatment_value, str)
assert is_sequence(self._treatment_value)
if df is None:
filtered_df = None
else:
filtered_df = df[self._effect_modifier_names].values
for tv in self._treatment_value:
ests.append(
fun(
filtered_df,
T0=self._control_value,
T1=tv,
*args,
**kwargs,
)
)
est = np.stack(ests, axis=1)
return est
def effect(self, df: pd.DataFrame, *args, **kwargs) -> np.ndarray:
"""
Pointwise estimated treatment effect,
output shape n_units x n_treatment_values (not counting control)
:param df: Features of the units to evaluate
:param args: passed through to the underlying estimator
:param kwargs: passed through to the underlying estimator
"""
def effect_fun(filtered_df, T0, T1, *args, **kwargs):
return self.estimator.effect(filtered_df, T0=T0, T1=T1, *args, **kwargs)
return self.apply_multitreatment(df, effect_fun, *args, **kwargs)
def effect_interval(self, df: pd.DataFrame, *args, **kwargs) -> np.ndarray:
"""
Pointwise confidence intervals for the estimated treatment effect
:param df: Features of the units to evaluate
:param args: passed through to the underlying estimator
:param kwargs: passed through to the underlying estimator
"""
def effect_interval_fun(filtered_df, T0, T1, *args, **kwargs):
return self.estimator.effect_interval(
filtered_df, T0=T0, T1=T1, alpha=1 - self.confidence_level, *args, **kwargs
)
return self.apply_multitreatment(df, effect_interval_fun, *args, **kwargs)
def effect_inference(self, df: pd.DataFrame, *args, **kwargs):
"""
Inference (uncertainty) results produced by the underlying EconML estimator
:param df: Features of the units to evaluate
:param args: passed through to the underlying estimator
:param kwargs: passed through to the underlying estimator
"""
def effect_inference_fun(filtered_df, T0, T1, *args, **kwargs):
return self.estimator.effect_inference(filtered_df, T0=T0, T1=T1, *args, **kwargs)
return self.apply_multitreatment(df, effect_inference_fun, *args, **kwargs)
def effect_tt(self, df: pd.DataFrame, *args, **kwargs):
"""
Effect of the actual treatment that was applied to each unit
("effect of Treatment on the Treated")
:param df: Features of the units to evaluate
:param args: passed through to estimator.effect()
:param kwargs: passed through to estimator.effect()
"""
eff = self.effect(df, *args, **kwargs).reshape((len(df), len(self._treatment_value)))
out = np.zeros(len(df))
treatment_value = parse_state(self._treatment_value)
treatment_name = parse_state(self._treatment_name)[0]
eff = np.reshape(eff, (len(df), len(treatment_value)))
# For each unit, return the estimated effect of the treatment value
# that was actually applied to the unit
for c, col in enumerate(treatment_value):
out[df[treatment_name] == col] = eff[df[treatment_name] == col, c]
return pd.Series(data=out, index=df.index)
| andresmor-ms | 11c4e0dafd6e824eb81ad14262457d954ae61468 | affe0952f4aba6845247355c171565510c2c1673 | data: dataframe containing the data on which treatment effect is to be estimated.
treatment_value: value of the treatment variable for which the effect is to be estimated.
control_value: value of the treatment variable that denotes its absence (usually 0)
target_units: The units for which the treatment effect should be estimated. It can be a DataFrame that contains values of the effect_modifiers and effect will be estimated only for this new data. It can also be a lambda function that can be used as an index for the data (pandas DataFrame) to select the required rows.
| amit-sharma | 224 |
py-why/dowhy | 746 | Functional api/causal estimators | * Introduce `fit()` method to estimators.
* Refactor constructors to avoid using `*args` and `**kwargs` and have more explicit parameters.
* Refactor refuters and other parts of the code to use `fit()` and modify arguments to `estimate_effect()` | null | 2022-11-04 16:15:39+00:00 | 2022-12-03 17:07:53+00:00 | dowhy/causal_estimators/generalized_linear_model_estimator.py | import itertools
import statsmodels.api as sm
from dowhy.causal_estimators.regression_estimator import RegressionEstimator
class GeneralizedLinearModelEstimator(RegressionEstimator):
"""Compute effect of treatment using a generalized linear model such as logistic regression.
Implementation uses statsmodels.api.GLM.
Needs an additional parameter, "glm_family", to be specified in method_params. The value of this parameter can be any valid statsmodels.api families object. For example, to use logistic regression, specify "glm_family" as statsmodels.api.families.Binomial().
"""
def __init__(self, *args, glm_family=None, predict_score=True, **kwargs):
"""For a list of args and kwargs, see documentation for
:class:`~dowhy.causal_estimator.CausalEstimator`.
:param glm_family: statsmodels family for the generalized linear model.
For example, use statsmodels.api.families.Binomial() for logistic
regression or statsmodels.api.families.Poisson() for count data.
:param predict_score: For models that have a binary output, whether
to output the model's score or the binary output based on the score.
"""
# Required to ensure that self.method_params contains all the
# parameters needed to create an object of this class
args_dict = {k: v for k, v in locals().items() if k not in type(self)._STD_INIT_ARGS}
args_dict.update(kwargs)
super().__init__(*args, **args_dict)
self.logger.info("INFO: Using Generalized Linear Model Estimator")
if glm_family is not None:
self.family = glm_family
else:
raise ValueError(
"Need to specify the family for the generalized linear model. Provide a 'glm_family' parameter in method_params, such as statsmodels.api.families.Binomial() for logistic regression."
)
self.predict_score = predict_score
# Checking if Y is binary
outcome_values = self._data[self._outcome_name].astype(int).unique()
self.outcome_is_binary = all([v in [0, 1] for v in outcome_values])
def _build_model(self):
features = self._build_features()
model = sm.GLM(self._outcome, features, family=self.family).fit()
return (features, model)
def predict_fn(self, model, features):
if self.outcome_is_binary:
if self.predict_score:
return model.predict(features)
else:
return (model.predict(features) > 0.5).astype(int)
else:
return model.predict(features)
def construct_symbolic_estimator(self, estimand):
expr = "b: " + ",".join(estimand.outcome_variable) + "~" + "Sigmoid("
var_list = estimand.treatment_variable + estimand.get_backdoor_variables()
expr += "+".join(var_list)
if self._effect_modifier_names:
interaction_terms = [
"{0}*{1}".format(x[0], x[1])
for x in itertools.product(estimand.treatment_variable, self._effect_modifier_names)
]
expr += "+" + "+".join(interaction_terms)
expr += ")"
return expr
| import itertools
from typing import Any, List, Optional, Union
import pandas as pd
import statsmodels.api as sm
from dowhy.causal_estimator import CausalEstimator
from dowhy.causal_estimators.regression_estimator import RegressionEstimator
from dowhy.causal_identifier import IdentifiedEstimand
class GeneralizedLinearModelEstimator(RegressionEstimator):
"""Compute effect of treatment using a generalized linear model such as logistic regression.
Implementation uses statsmodels.api.GLM.
Needs an additional parameter, "glm_family", to be specified in method_params. The value of this parameter can be any valid statsmodels.api families object. For example, to use logistic regression, specify "glm_family" as statsmodels.api.families.Binomial().
"""
def __init__(
self,
identified_estimand: IdentifiedEstimand,
test_significance: bool = False,
evaluate_effect_strength: bool = False,
confidence_intervals: bool = False,
num_null_simulations: int = CausalEstimator.DEFAULT_NUMBER_OF_SIMULATIONS_STAT_TEST,
num_simulations: int = CausalEstimator.DEFAULT_NUMBER_OF_SIMULATIONS_CI,
sample_size_fraction: int = CausalEstimator.DEFAULT_SAMPLE_SIZE_FRACTION,
confidence_level: float = CausalEstimator.DEFAULT_CONFIDENCE_LEVEL,
need_conditional_estimates: Union[bool, str] = "auto",
num_quantiles_to_discretize_cont_cols: int = CausalEstimator.NUM_QUANTILES_TO_DISCRETIZE_CONT_COLS,
glm_family: Optional[Any] = None,
predict_score: bool = True,
**kwargs,
):
"""For a list of args and kwargs, see documentation for
:class:`~dowhy.causal_estimator.CausalEstimator`.
:param identified_estimand: probability expression
representing the target identified estimand to estimate.
:param test_significance: Binary flag or a string indicating whether to test significance and by which method. All estimators support test_significance="bootstrap" that estimates a p-value for the obtained estimate using the bootstrap method. Individual estimators can override this to support custom testing methods. The bootstrap method supports an optional parameter, num_null_simulations. If False, no testing is done. If True, significance of the estimate is tested using the custom method if available, otherwise by bootstrap.
:param evaluate_effect_strength: (Experimental) whether to evaluate the strength of effect
:param confidence_intervals: Binary flag or a string indicating whether the confidence intervals should be computed and which method should be used. All methods support estimation of confidence intervals using the bootstrap method by using the parameter confidence_intervals="bootstrap". The bootstrap method takes in two arguments (num_simulations and sample_size_fraction) that can be optionally specified in the params dictionary. Estimators may also override this to implement their own confidence interval method. If this parameter is False, no confidence intervals are computed. If True, confidence intervals are computed by the estimator's specific method if available, otherwise through bootstrap
:param num_null_simulations: The number of simulations for testing the
statistical significance of the estimator
:param num_simulations: The number of simulations for finding the
confidence interval (and/or standard error) for an estimate
:param sample_size_fraction: The size of the sample for the bootstrap
estimator
:param confidence_level: The confidence level of the confidence
interval estimate
:param need_conditional_estimates: Boolean flag indicating whether
conditional estimates should be computed. Defaults to True if
there are effect modifiers in the graph
:param num_quantiles_to_discretize_cont_cols: The number of quantiles
into which a numeric effect modifier is split, to enable
estimation of conditional treatment effect over it.
:param glm_family: statsmodels family for the generalized linear model.
For example, use statsmodels.api.families.Binomial() for logistic
regression or statsmodels.api.families.Poisson() for count data.
:param predict_score: For models that have a binary output, whether
to output the model's score or the binary output based on the score.
:param kwargs: (optional) Additional estimator-specific parameters
"""
super().__init__(
identified_estimand=identified_estimand,
test_significance=test_significance,
evaluate_effect_strength=evaluate_effect_strength,
confidence_intervals=confidence_intervals,
num_null_simulations=num_null_simulations,
num_simulations=num_simulations,
sample_size_fraction=sample_size_fraction,
confidence_level=confidence_level,
need_conditional_estimates=need_conditional_estimates,
num_quantiles_to_discretize_cont_cols=num_quantiles_to_discretize_cont_cols,
glm_family=glm_family,
predict_score=predict_score,
**kwargs,
)
self.logger.info("INFO: Using Generalized Linear Model Estimator")
if glm_family is not None:
self.family = glm_family
else:
raise ValueError(
"Need to specify the family for the generalized linear model. Provide a 'glm_family' parameter in method_params, such as statsmodels.api.families.Binomial() for logistic regression."
)
self.predict_score = predict_score
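# Editor's construction sketch (statsmodels families are real; identified_estimand
# is assumed to come from an earlier identification step, and column names are
# hypothetical):
#
#     import statsmodels.api as sm
#     est = GeneralizedLinearModelEstimator(identified_estimand,
#                                           glm_family=sm.families.Binomial())
#     est.fit(df, treatment_name="v0", outcome_name="y")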
def fit(
self,
data: pd.DataFrame,
treatment_name: str,
outcome_name: str,
effect_modifier_names: Optional[List[str]] = None,
):
"""
Fits the estimator with data for effect estimation
:param data: data frame containing the data
:param treatment_name: name of the treatment variable
:param outcome_name: name of the outcome variable
:param effect_modifier_names: Variables on which to compute separate
effects, or return a heterogeneous effect function. Not all
methods support this currently.
"""
return super().fit(data, treatment_name, outcome_name, effect_modifier_names=effect_modifier_names)
def _build_model(self):
features = self._build_features()
model = sm.GLM(self._outcome, features, family=self.family).fit()
return (features, model)
def predict_fn(self, model, features):
# Checking if Y is binary
outcome_values = self._data[self._outcome_name].astype(int).unique()
outcome_is_binary = all([v in [0, 1] for v in outcome_values])
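# For binary outcomes, predict_score=True returns the model's predicted
# probability; otherwise predictions are thresholded at 0.5 into hard 0/1 labels.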
if outcome_is_binary:
if self.predict_score:
return model.predict(features)
else:
return (model.predict(features) > 0.5).astype(int)
else:
return model.predict(features)
def construct_symbolic_estimator(self, estimand):
expr = "b: " + ",".join(estimand.outcome_variable) + "~" + "Sigmoid("
var_list = estimand.treatment_variable + estimand.get_backdoor_variables()
expr += "+".join(var_list)
if self._effect_modifier_names:
interaction_terms = [
"{0}*{1}".format(x[0], x[1])
for x in itertools.product(estimand.treatment_variable, self._effect_modifier_names)
]
expr += "+" + "+".join(interaction_terms)
expr += ")"
return expr
| andresmor-ms | 11c4e0dafd6e824eb81ad14262457d954ae61468 | affe0952f4aba6845247355c171565510c2c1673 | since this is a fit method, `outcome_is_binary` looks out of place. Shall we move it to the `predict_fn` where it is used?
Ideally we would want to avoid side-effects of the fit method. | amit-sharma | 225 |
py-why/dowhy | 746 | Functional api/causal estimators | * Introduce `fit()` method to estimators.
* Refactor constructors to avoid using `*args` and `**kwargs` and have more explicit parameters.
* Refactor refuters and other parts of the code to use `fit()` and modify arguments to `estimate_effect()` | null | 2022-11-04 16:15:39+00:00 | 2022-12-03 17:07:53+00:00 | dowhy/causal_estimators/instrumental_variable_estimator.py | import numpy as np
import sympy as sp
import sympy.stats as spstats
from statsmodels.sandbox.regression.gmm import IV2SLS
from dowhy.causal_estimator import CausalEstimate, CausalEstimator, RealizedEstimand
from dowhy.utils.api import parse_state
class InstrumentalVariableEstimator(CausalEstimator):
"""Compute effect of treatment using the instrumental variables method.
This is also a superclass that can be inherited by other specific methods.
For a list of standard args and kwargs, see documentation for
:class:`~dowhy.causal_estimator.CausalEstimator`.
Supports additional parameters as listed below.
"""
def __init__(self, *args, iv_instrument_name=None, **kwargs):
"""
:param iv_instrument_name: Name of the specific instrumental variable
to be used. Needs to be one of the IVs identified in the
identification step. Default is to use all the IV variables
from the identification step.
"""
# Required to ensure that self.method_params contains all the information
# to create an object of this class
args_dict = {k: v for k, v in locals().items() if k not in type(self)._STD_INIT_ARGS}
args_dict.update(kwargs)
super().__init__(*args, **args_dict)
# choosing the instrumental variable to use
self.estimating_instrument_names = self._target_estimand.instrumental_variables
if iv_instrument_name is not None:
self.estimating_instrument_names = parse_state(iv_instrument_name)
self.logger.debug("Instrumental Variables used:" + ",".join(self.estimating_instrument_names))
if not self.estimating_instrument_names:
raise ValueError("No valid instruments found. IV Method not applicable")
if len(self.estimating_instrument_names) < len(self._treatment_name):
# TODO move this to the identification step
raise ValueError(
"Number of instruments fewer than number of treatments. 2SLS requires at least as many instruments as treatments."
)
self._estimating_instruments = self._data[self.estimating_instrument_names]
self.logger.info("INFO: Using Instrumental Variable Estimator")
self.symbolic_estimator = self.construct_symbolic_estimator(self._target_estimand)
self.logger.info(self.symbolic_estimator)
def _estimate_effect(self):
if len(self.estimating_instrument_names) == 1 and len(self._treatment_name) == 1:
instrument = self._estimating_instruments.iloc[:, 0]
self.logger.debug("Instrument Variable values: {0}".format(instrument))
num_unique_values = len(np.unique(instrument))
instrument_is_binary = num_unique_values <= 2
if instrument_is_binary:
# Obtain estimate by Wald Estimator
y1_z = np.mean(self._outcome[instrument == 1])
y0_z = np.mean(self._outcome[instrument == 0])
x1_z = np.mean(self._treatment[self._treatment_name[0]][instrument == 1])
x0_z = np.mean(self._treatment[self._treatment_name[0]][instrument == 0])
num = y1_z - y0_z
deno = x1_z - x0_z
iv_est = num / deno
else:
# Obtain estimate by 2SLS estimator: Cov(y,z) / Cov(x,z)
num_yz = np.cov(self._outcome, instrument)[0, 1]
deno_xz = np.cov(self._treatment[self._treatment_name[0]], instrument)[0, 1]
iv_est = num_yz / deno_xz
else:
# More than 1 instrument. Use 2sls.
est_treatment = self._treatment.astype(np.float32)
est_outcome = self._outcome.astype(np.float32)
ivmodel = IV2SLS(est_outcome, est_treatment, self._estimating_instruments)
reg_results = ivmodel.fit()
self.logger.debug(reg_results.summary())
iv_est = sum(
reg_results.params
) # the effect is the same for any treatment value (assume treatment goes from 0 to 1)
estimate = CausalEstimate(
estimate=iv_est,
control_value=self._control_value,
treatment_value=self._treatment_value,
target_estimand=self._target_estimand,
realized_estimand_expr=self.symbolic_estimator,
)
return estimate
def construct_symbolic_estimator(self, estimand):
sym_outcome = spstats.Normal(",".join(estimand.outcome_variable), 0, 1)
sym_treatment = spstats.Normal(",".join(estimand.treatment_variable), 0, 1)
sym_instrument = sp.Symbol(",".join(self.estimating_instrument_names))
sym_outcome_derivative = sp.Derivative(sym_outcome, sym_instrument)
sym_treatment_derivative = sp.Derivative(sym_treatment, sym_instrument)
sym_effect = spstats.Expectation(sym_outcome_derivative) / sp.stats.Expectation(sym_treatment_derivative)
estimator_assumptions = {
"treatment_effect_homogeneity": (
"Each unit's treatment {0} is ".format(self._treatment_name)
+ "affected in the same way by common causes of "
"{0} and {1}".format(self._treatment_name, self._outcome_name)
),
"outcome_effect_homogeneity": (
"Each unit's outcome {0} is ".format(self._outcome_name)
+ "affected in the same way by common causes of "
"{0} and {1}".format(self._treatment_name, self._outcome_name)
),
}
sym_assumptions = {**estimand.estimands["iv"]["assumptions"], **estimator_assumptions}
symbolic_estimand = RealizedEstimand(estimand, estimator_name="Wald Estimator")
symbolic_estimand.update_assumptions(sym_assumptions)
symbolic_estimand.update_estimand_expression(sym_effect)
return symbolic_estimand
| from typing import Any, Dict, List, Optional, Union
import numpy as np
import pandas as pd
import sympy as sp
import sympy.stats as spstats
from statsmodels.sandbox.regression.gmm import IV2SLS
from dowhy.causal_estimator import CausalEstimate, CausalEstimator, RealizedEstimand
from dowhy.causal_identifier import IdentifiedEstimand
from dowhy.utils.api import parse_state
class InstrumentalVariableEstimator(CausalEstimator):
"""Compute effect of treatment using the instrumental variables method.
This is also a superclass that can be inherited by other specific methods.
Supports additional parameters as listed below.
"""
def __init__(
self,
identified_estimand: IdentifiedEstimand,
iv_instrument_name: Optional[Union[List, Dict, str]] = None,
test_significance: bool = False,
evaluate_effect_strength: bool = False,
confidence_intervals: bool = False,
num_null_simulations: int = CausalEstimator.DEFAULT_NUMBER_OF_SIMULATIONS_STAT_TEST,
num_simulations: int = CausalEstimator.DEFAULT_NUMBER_OF_SIMULATIONS_CI,
sample_size_fraction: int = CausalEstimator.DEFAULT_SAMPLE_SIZE_FRACTION,
confidence_level: float = CausalEstimator.DEFAULT_CONFIDENCE_LEVEL,
need_conditional_estimates: Union[bool, str] = "auto",
num_quantiles_to_discretize_cont_cols: int = CausalEstimator.NUM_QUANTILES_TO_DISCRETIZE_CONT_COLS,
**kwargs,
):
"""
:param identified_estimand: probability expression
representing the target identified estimand to estimate.
:param iv_instrument_name: Name of the specific instrumental variable
to be used. Needs to be one of the IVs identified in the
identification step. Default is to use all the IV variables
from the identification step.
:param test_significance: Binary flag or a string indicating whether to test significance and by which method. All estimators support test_significance="bootstrap" that estimates a p-value for the obtained estimate using the bootstrap method. Individual estimators can override this to support custom testing methods. The bootstrap method supports an optional parameter, num_null_simulations. If False, no testing is done. If True, significance of the estimate is tested using the custom method if available, otherwise by bootstrap.
:param evaluate_effect_strength: (Experimental) whether to evaluate the strength of effect
:param confidence_intervals: Binary flag or a string indicating whether the confidence intervals should be computed and which method should be used. All methods support estimation of confidence intervals using the bootstrap method by using the parameter confidence_intervals="bootstrap". The bootstrap method takes in two arguments (num_simulations and sample_size_fraction) that can be optionally specified in the params dictionary. Estimators may also override this to implement their own confidence interval method. If this parameter is False, no confidence intervals are computed. If True, confidence intervals are computed by the estimator's specific method if available, otherwise through bootstrap
:param num_null_simulations: The number of simulations for testing the
statistical significance of the estimator
:param num_simulations: The number of simulations for finding the
confidence interval (and/or standard error) for an estimate
:param sample_size_fraction: The size of the sample for the bootstrap
estimator
:param confidence_level: The confidence level of the confidence
interval estimate
:param need_conditional_estimates: Boolean flag indicating whether
conditional estimates should be computed. Defaults to True if
there are effect modifiers in the graph
:param num_quantiles_to_discretize_cont_cols: The number of quantiles
into which a numeric effect modifier is split, to enable
estimation of conditional treatment effect over it.
:param kwargs: (optional) Additional estimator-specific parameters
"""
super().__init__(
identified_estimand=identified_estimand,
test_significance=test_significance,
evaluate_effect_strength=evaluate_effect_strength,
confidence_intervals=confidence_intervals,
num_null_simulations=num_null_simulations,
num_simulations=num_simulations,
sample_size_fraction=sample_size_fraction,
confidence_level=confidence_level,
need_conditional_estimates=need_conditional_estimates,
num_quantiles_to_discretize_cont_cols=num_quantiles_to_discretize_cont_cols,
iv_instrument_name=iv_instrument_name,
**kwargs,
)
self.iv_instrument_name = iv_instrument_name
self.logger.info("INFO: Using Instrumental Variable Estimator")
def fit(
self,
data: pd.DataFrame,
treatment_name: str,
outcome_name: str,
effect_modifier_names: Optional[List[str]] = None,
):
"""
Fits the estimator with data for effect estimation
:param data: data frame containing the data
:param treatment_name: name of the treatment variable
:param outcome_name: name of the outcome variable
:param effect_modifier_names: Variables on which to compute separate
effects, or return a heterogeneous effect function. Not all
methods support this currently.
"""
self._set_data(data, treatment_name, outcome_name)
self._set_effect_modifiers(effect_modifier_names)
self.estimating_instrument_names = self._target_estimand.instrumental_variables
if self.iv_instrument_name is not None:
self.estimating_instrument_names = parse_state(self.iv_instrument_name)
self.logger.debug("Instrumental Variables used:" + ",".join(self.estimating_instrument_names))
if not self.estimating_instrument_names:
raise ValueError("No valid instruments found. IV Method not applicable")
if len(self.estimating_instrument_names) < len(self._treatment_name):
# TODO move this to the identification step
raise ValueError(
"Number of instruments fewer than number of treatments. 2SLS requires at least as many instruments as treatments."
)
self._estimating_instruments = self._data[self.estimating_instrument_names]
self.symbolic_estimator = self.construct_symbolic_estimator(self._target_estimand)
self.logger.info(self.symbolic_estimator)
return self
def estimate_effect(
self, data: pd.DataFrame = None, treatment_value: Any = 1, control_value: Any = 0, target_units=None, **_
):
"""
:param data: dataframe containing the data on which treatment effect is to be estimated.
:param treatment_value: value of the treatment variable for which the effect is to be estimated.
:param control_value: value of the treatment variable that denotes its absence (usually 0)
:param target_units: The units for which the treatment effect should be estimated.
It can be a DataFrame that contains values of the effect_modifiers and effect will be estimated only for this new data.
It can also be a lambda function that can be used as an index for the data (pandas DataFrame) to select the required rows.
"""
if data is None:
data = self._data
self._target_units = target_units
self._treatment_value = treatment_value
self._control_value = control_value
if len(self.estimating_instrument_names) == 1 and len(self._treatment_name) == 1:
instrument = self._estimating_instruments.iloc[:, 0]
self.logger.debug("Instrument Variable values: {0}".format(instrument))
num_unique_values = len(np.unique(instrument))
instrument_is_binary = num_unique_values <= 2
if instrument_is_binary:
# Obtain estimate by Wald Estimator
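# Wald estimate for a binary instrument Z:
#     (E[Y|Z=1] - E[Y|Z=0]) / (E[T|Z=1] - E[T|Z=0])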
y1_z = np.mean(self._outcome[instrument == 1])
y0_z = np.mean(self._outcome[instrument == 0])
x1_z = np.mean(self._treatment[self._treatment_name[0]][instrument == 1])
x0_z = np.mean(self._treatment[self._treatment_name[0]][instrument == 0])
num = y1_z - y0_z
deno = x1_z - x0_z
iv_est = num / deno
else:
# Obtain estimate by 2SLS estimator: Cov(y,z) / Cov(x,z)
num_yz = np.cov(self._outcome, instrument)[0, 1]
deno_xz = np.cov(self._treatment[self._treatment_name[0]], instrument)[0, 1]
iv_est = num_yz / deno_xz
else:
# More than 1 instrument. Use 2sls.
est_treatment = self._treatment.astype(np.float32)
est_outcome = self._outcome.astype(np.float32)
ivmodel = IV2SLS(est_outcome, est_treatment, self._estimating_instruments)
reg_results = ivmodel.fit()
self.logger.debug(reg_results.summary())
iv_est = sum(
reg_results.params
) # the effect is the same for any treatment value (assume treatment goes from 0 to 1)
estimate = CausalEstimate(
estimate=iv_est,
control_value=control_value,
treatment_value=treatment_value,
target_estimand=self._target_estimand,
realized_estimand_expr=self.symbolic_estimator,
)
estimate.add_estimator(self)
return estimate
def construct_symbolic_estimator(self, estimand):
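# Builds a symbolic Wald-style estimand, E[dY/dZ] / E[dT/dZ], together with the
# treatment/outcome homogeneity assumptions under which the IV estimate
# identifies the causal effect.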
sym_outcome = spstats.Normal(",".join(estimand.outcome_variable), 0, 1)
sym_treatment = spstats.Normal(",".join(estimand.treatment_variable), 0, 1)
sym_instrument = sp.Symbol(",".join(self.estimating_instrument_names))
sym_outcome_derivative = sp.Derivative(sym_outcome, sym_instrument)
sym_treatment_derivative = sp.Derivative(sym_treatment, sym_instrument)
sym_effect = spstats.Expectation(sym_outcome_derivative) / sp.stats.Expectation(sym_treatment_derivative)
estimator_assumptions = {
"treatment_effect_homogeneity": (
"Each unit's treatment {0} is ".format(self._treatment_name)
+ "affected in the same way by common causes of "
"{0} and {1}".format(self._treatment_name, self._outcome_name)
),
"outcome_effect_homogeneity": (
"Each unit's outcome {0} is ".format(self._outcome_name)
+ "affected in the same way by common causes of "
"{0} and {1}".format(self._treatment_name, self._outcome_name)
),
}
sym_assumptions = {**estimand.estimands["iv"]["assumptions"], **estimator_assumptions}
symbolic_estimand = RealizedEstimand(estimand, estimator_name="Wald Estimator")
symbolic_estimand.update_assumptions(sym_assumptions)
symbolic_estimand.update_estimand_expression(sym_effect)
return symbolic_estimand
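# Editor's usage sketch for the refactored IV estimator (column names are
# hypothetical):
#
#     iv_estimator = InstrumentalVariableEstimator(identified_estimand,
#                                                  iv_instrument_name="Z0")
#     iv_estimator.fit(df, treatment_name="v0", outcome_name="y")
#     estimate = iv_estimator.estimate_effect(df)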
| andresmor-ms | 11c4e0dafd6e824eb81ad14262457d954ae61468 | affe0952f4aba6845247355c171565510c2c1673 | empty line can be removed.
| amit-sharma | 226 |
py-why/dowhy | 746 | Functional api/causal estimators | * Introduce `fit()` method to estimators.
* Refactor constructors to avoid using `*args` and `**kwargs` and have more explicit parameters.
* Refactor refuters and other parts of the code to use `fit()` and modify arguments to `estimate_effect()` | null | 2022-11-04 16:15:39+00:00 | 2022-12-03 17:07:53+00:00 | dowhy/causal_estimators/instrumental_variable_estimator.py | import numpy as np
import sympy as sp
import sympy.stats as spstats
from statsmodels.sandbox.regression.gmm import IV2SLS
from dowhy.causal_estimator import CausalEstimate, CausalEstimator, RealizedEstimand
from dowhy.utils.api import parse_state
class InstrumentalVariableEstimator(CausalEstimator):
"""Compute effect of treatment using the instrumental variables method.
This is also a superclass that can be inherited by other specific methods.
For a list of standard args and kwargs, see documentation for
:class:`~dowhy.causal_estimator.CausalEstimator`.
Supports additional parameters as listed below.
"""
def __init__(self, *args, iv_instrument_name=None, **kwargs):
"""
:param iv_instrument_name: Name of the specific instrumental variable
to be used. Needs to be one of the IVs identified in the
identification step. Default is to use all the IV variables
from the identification step.
"""
# Required to ensure that self.method_params contains all the information
# to create an object of this class
args_dict = {k: v for k, v in locals().items() if k not in type(self)._STD_INIT_ARGS}
args_dict.update(kwargs)
super().__init__(*args, **args_dict)
# choosing the instrumental variable to use
self.estimating_instrument_names = self._target_estimand.instrumental_variables
if iv_instrument_name is not None:
self.estimating_instrument_names = parse_state(iv_instrument_name)
self.logger.debug("Instrumental Variables used:" + ",".join(self.estimating_instrument_names))
if not self.estimating_instrument_names:
raise ValueError("No valid instruments found. IV Method not applicable")
if len(self.estimating_instrument_names) < len(self._treatment_name):
# TODO move this to the identification step
raise ValueError(
"Number of instruments fewer than number of treatments. 2SLS requires at least as many instruments as treatments."
)
self._estimating_instruments = self._data[self.estimating_instrument_names]
self.logger.info("INFO: Using Instrumental Variable Estimator")
self.symbolic_estimator = self.construct_symbolic_estimator(self._target_estimand)
self.logger.info(self.symbolic_estimator)
def _estimate_effect(self):
if len(self.estimating_instrument_names) == 1 and len(self._treatment_name) == 1:
instrument = self._estimating_instruments.iloc[:, 0]
self.logger.debug("Instrument Variable values: {0}".format(instrument))
num_unique_values = len(np.unique(instrument))
instrument_is_binary = num_unique_values <= 2
if instrument_is_binary:
# Obtain estimate by Wald Estimator
y1_z = np.mean(self._outcome[instrument == 1])
y0_z = np.mean(self._outcome[instrument == 0])
x1_z = np.mean(self._treatment[self._treatment_name[0]][instrument == 1])
x0_z = np.mean(self._treatment[self._treatment_name[0]][instrument == 0])
num = y1_z - y0_z
deno = x1_z - x0_z
iv_est = num / deno
else:
# Obtain estimate by 2SLS estimator: Cov(y,z) / Cov(x,z)
num_yz = np.cov(self._outcome, instrument)[0, 1]
deno_xz = np.cov(self._treatment[self._treatment_name[0]], instrument)[0, 1]
iv_est = num_yz / deno_xz
else:
# More than 1 instrument. Use 2sls.
est_treatment = self._treatment.astype(np.float32)
est_outcome = self._outcome.astype(np.float32)
ivmodel = IV2SLS(est_outcome, est_treatment, self._estimating_instruments)
reg_results = ivmodel.fit()
self.logger.debug(reg_results.summary())
iv_est = sum(
reg_results.params
) # the effect is the same for any treatment value (assume treatment goes from 0 to 1)
estimate = CausalEstimate(
estimate=iv_est,
control_value=self._control_value,
treatment_value=self._treatment_value,
target_estimand=self._target_estimand,
realized_estimand_expr=self.symbolic_estimator,
)
return estimate
def construct_symbolic_estimator(self, estimand):
sym_outcome = spstats.Normal(",".join(estimand.outcome_variable), 0, 1)
sym_treatment = spstats.Normal(",".join(estimand.treatment_variable), 0, 1)
sym_instrument = sp.Symbol(",".join(self.estimating_instrument_names))
sym_outcome_derivative = sp.Derivative(sym_outcome, sym_instrument)
sym_treatment_derivative = sp.Derivative(sym_treatment, sym_instrument)
sym_effect = spstats.Expectation(sym_outcome_derivative) / sp.stats.Expectation(sym_treatment_derivative)
estimator_assumptions = {
"treatment_effect_homogeneity": (
"Each unit's treatment {0} is ".format(self._treatment_name)
+ "affected in the same way by common causes of "
"{0} and {1}".format(self._treatment_name, self._outcome_name)
),
"outcome_effect_homogeneity": (
"Each unit's outcome {0} is ".format(self._outcome_name)
+ "affected in the same way by common causes of "
"{0} and {1}".format(self._treatment_name, self._outcome_name)
),
}
sym_assumptions = {**estimand.estimands["iv"]["assumptions"], **estimator_assumptions}
symbolic_estimand = RealizedEstimand(estimand, estimator_name="Wald Estimator")
symbolic_estimand.update_assumptions(sym_assumptions)
symbolic_estimand.update_estimand_expression(sym_effect)
return symbolic_estimand
| from typing import Any, Dict, List, Optional, Union
import numpy as np
import pandas as pd
import sympy as sp
import sympy.stats as spstats
from statsmodels.sandbox.regression.gmm import IV2SLS
from dowhy.causal_estimator import CausalEstimate, CausalEstimator, RealizedEstimand
from dowhy.causal_identifier import IdentifiedEstimand
from dowhy.utils.api import parse_state
class InstrumentalVariableEstimator(CausalEstimator):
"""Compute effect of treatment using the instrumental variables method.
This is also a superclass that can be inherited by other specific methods.
Supports additional parameters as listed below.
"""
def __init__(
self,
identified_estimand: IdentifiedEstimand,
iv_instrument_name: Optional[Union[List, Dict, str]] = None,
test_significance: bool = False,
evaluate_effect_strength: bool = False,
confidence_intervals: bool = False,
num_null_simulations: int = CausalEstimator.DEFAULT_NUMBER_OF_SIMULATIONS_STAT_TEST,
num_simulations: int = CausalEstimator.DEFAULT_NUMBER_OF_SIMULATIONS_CI,
sample_size_fraction: int = CausalEstimator.DEFAULT_SAMPLE_SIZE_FRACTION,
confidence_level: float = CausalEstimator.DEFAULT_CONFIDENCE_LEVEL,
need_conditional_estimates: Union[bool, str] = "auto",
num_quantiles_to_discretize_cont_cols: int = CausalEstimator.NUM_QUANTILES_TO_DISCRETIZE_CONT_COLS,
**kwargs,
):
"""
:param identified_estimand: probability expression
representing the target identified estimand to estimate.
:param iv_instrument_name: Name of the specific instrumental variable
to be used. Needs to be one of the IVs identified in the
identification step. Default is to use all the IV variables
from the identification step.
:param test_significance: Binary flag or a string indicating whether to test significance and by which method. All estimators support test_significance="bootstrap" that estimates a p-value for the obtained estimate using the bootstrap method. Individual estimators can override this to support custom testing methods. The bootstrap method supports an optional parameter, num_null_simulations. If False, no testing is done. If True, significance of the estimate is tested using the custom method if available, otherwise by bootstrap.
:param evaluate_effect_strength: (Experimental) whether to evaluate the strength of effect
:param confidence_intervals: Binary flag or a string indicating whether the confidence intervals should be computed and which method should be used. All methods support estimation of confidence intervals using the bootstrap method by using the parameter confidence_intervals="bootstrap". The bootstrap method takes in two arguments (num_simulations and sample_size_fraction) that can be optionally specified in the params dictionary. Estimators may also override this to implement their own confidence interval method. If this parameter is False, no confidence intervals are computed. If True, confidence intervals are computed by the estimator's specific method if available, otherwise through bootstrap
:param num_null_simulations: The number of simulations for testing the
statistical significance of the estimator
:param num_simulations: The number of simulations for finding the
            confidence interval (and/or standard error) for an estimate
:param sample_size_fraction: The size of the sample for the bootstrap
estimator
:param confidence_level: The confidence level of the confidence
interval estimate
        :param need_conditional_estimates: Boolean flag (or the string "auto")
            indicating whether conditional estimates should be computed.
            Defaults to "auto", in which case they are computed whenever
            effect modifiers are present in the graph
:param num_quantiles_to_discretize_cont_cols: The number of quantiles
into which a numeric effect modifier is split, to enable
estimation of conditional treatment effect over it.
:param kwargs: (optional) Additional estimator-specific parameters
"""
super().__init__(
identified_estimand=identified_estimand,
test_significance=test_significance,
evaluate_effect_strength=evaluate_effect_strength,
confidence_intervals=confidence_intervals,
num_null_simulations=num_null_simulations,
num_simulations=num_simulations,
sample_size_fraction=sample_size_fraction,
confidence_level=confidence_level,
need_conditional_estimates=need_conditional_estimates,
num_quantiles_to_discretize_cont_cols=num_quantiles_to_discretize_cont_cols,
iv_instrument_name=iv_instrument_name,
**kwargs,
)
self.iv_instrument_name = iv_instrument_name
self.logger.info("INFO: Using Instrumental Variable Estimator")
def fit(
self,
data: pd.DataFrame,
treatment_name: str,
outcome_name: str,
effect_modifier_names: Optional[List[str]] = None,
):
"""
Fits the estimator with data for effect estimation
:param data: data frame containing the data
        :param treatment_name: name of the treatment variable
        :param outcome_name: name of the outcome variable
        :param effect_modifier_names: Variables on which to compute separate
            effects, or return a heterogeneous effect function. Effect
            modifiers are not used by the instrumental variable method.
"""
self._set_data(data, treatment_name, outcome_name)
self._set_effect_modifiers(effect_modifier_names)
self.estimating_instrument_names = self._target_estimand.instrumental_variables
if self.iv_instrument_name is not None:
self.estimating_instrument_names = parse_state(self.iv_instrument_name)
self.logger.debug("Instrumental Variables used:" + ",".join(self.estimating_instrument_names))
if not self.estimating_instrument_names:
raise ValueError("No valid instruments found. IV Method not applicable")
if len(self.estimating_instrument_names) < len(self._treatment_name):
# TODO move this to the identification step
raise ValueError(
"Number of instruments fewer than number of treatments. 2SLS requires at least as many instruments as treatments."
)
self._estimating_instruments = self._data[self.estimating_instrument_names]
self.symbolic_estimator = self.construct_symbolic_estimator(self._target_estimand)
self.logger.info(self.symbolic_estimator)
return self
def estimate_effect(
self, data: pd.DataFrame = None, treatment_value: Any = 1, control_value: Any = 0, target_units=None, **_
):
"""
        :param data: dataframe containing the data on which treatment effect is to be estimated.
        :param treatment_value: value of the treatment variable for which the effect is to be estimated.
        :param control_value: value of the treatment variable that denotes its absence (usually 0)
        :param target_units: The units for which the treatment effect should be estimated.
            It can be a DataFrame that contains values of the effect_modifiers and effect will be estimated only for this new data.
            It can also be a lambda function that can be used as an index for the data (pandas DataFrame) to select the required rows.
"""
if data is None:
data = self._data
self._target_units = target_units
self._treatment_value = treatment_value
self._control_value = control_value
if len(self.estimating_instrument_names) == 1 and len(self._treatment_name) == 1:
instrument = self._estimating_instruments.iloc[:, 0]
self.logger.debug("Instrument Variable values: {0}".format(instrument))
num_unique_values = len(np.unique(instrument))
instrument_is_binary = num_unique_values <= 2
if instrument_is_binary:
# Obtain estimate by Wald Estimator
y1_z = np.mean(self._outcome[instrument == 1])
y0_z = np.mean(self._outcome[instrument == 0])
x1_z = np.mean(self._treatment[self._treatment_name[0]][instrument == 1])
x0_z = np.mean(self._treatment[self._treatment_name[0]][instrument == 0])
num = y1_z - y0_z
deno = x1_z - x0_z
iv_est = num / deno
else:
# Obtain estimate by 2SLS estimator: Cov(y,z) / Cov(x,z)
num_yz = np.cov(self._outcome, instrument)[0, 1]
deno_xz = np.cov(self._treatment[self._treatment_name[0]], instrument)[0, 1]
iv_est = num_yz / deno_xz
else:
            # More than 1 instrument. Use 2SLS.
est_treatment = self._treatment.astype(np.float32)
est_outcome = self._outcome.astype(np.float32)
ivmodel = IV2SLS(est_outcome, est_treatment, self._estimating_instruments)
reg_results = ivmodel.fit()
self.logger.debug(reg_results.summary())
iv_est = sum(
reg_results.params
) # the effect is the same for any treatment value (assume treatment goes from 0 to 1)
estimate = CausalEstimate(
estimate=iv_est,
control_value=control_value,
treatment_value=treatment_value,
target_estimand=self._target_estimand,
realized_estimand_expr=self.symbolic_estimator,
)
estimate.add_estimator(self)
return estimate
def construct_symbolic_estimator(self, estimand):
sym_outcome = spstats.Normal(",".join(estimand.outcome_variable), 0, 1)
sym_treatment = spstats.Normal(",".join(estimand.treatment_variable), 0, 1)
sym_instrument = sp.Symbol(",".join(self.estimating_instrument_names))
sym_outcome_derivative = sp.Derivative(sym_outcome, sym_instrument)
sym_treatment_derivative = sp.Derivative(sym_treatment, sym_instrument)
sym_effect = spstats.Expectation(sym_outcome_derivative) / sp.stats.Expectation(sym_treatment_derivative)
estimator_assumptions = {
"treatment_effect_homogeneity": (
"Each unit's treatment {0} is ".format(self._treatment_name)
+ "affected in the same way by common causes of "
"{0} and {1}".format(self._treatment_name, self._outcome_name)
),
"outcome_effect_homogeneity": (
"Each unit's outcome {0} is ".format(self._outcome_name)
+ "affected in the same way by common causes of "
"{0} and {1}".format(self._treatment_name, self._outcome_name)
),
}
sym_assumptions = {**estimand.estimands["iv"]["assumptions"], **estimator_assumptions}
symbolic_estimand = RealizedEstimand(estimand, estimator_name="Wald Estimator")
symbolic_estimand.update_assumptions(sym_assumptions)
symbolic_estimand.update_estimand_expression(sym_effect)
return symbolic_estimand
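# --- Illustrative calling pattern (a sketch, not code from this PR) ---
# The refactor splits construction (__init__), fitting (fit()), and
# estimation (estimate_effect()). Based only on the signatures above;
# `identified_estimand` (the output of a prior identification step), `df`,
# and the column names "Z", "T", "Y" are assumptions made for illustration.
#
#   estimator = InstrumentalVariableEstimator(identified_estimand, iv_instrument_name="Z")
#   estimator.fit(data=df, treatment_name="T", outcome_name="Y")
#   estimate = estimator.estimate_effect(treatment_value=1, control_value=0)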
| andresmor-ms | 11c4e0dafd6e824eb81ad14262457d954ae61468 | affe0952f4aba6845247355c171565510c2c1673 | needs docstring | amit-sharma | 227 |
py-why/dowhy | 746 | Functional api/causal estimators | * Introduce `fit()` method to estimators.
* Refactor constructors to avoid using `*args` and `**kwargs` and have more explicit parameters.
* Refactor refuters and other parts of the code to use `fit()` and modify arguments to `estimate_effect()` | null | 2022-11-04 16:15:39+00:00 | 2022-12-03 17:07:53+00:00 | dowhy/causal_estimators/instrumental_variable_estimator.py | import numpy as np
import sympy as sp
import sympy.stats as spstats
from statsmodels.sandbox.regression.gmm import IV2SLS
from dowhy.causal_estimator import CausalEstimate, CausalEstimator, RealizedEstimand
from dowhy.utils.api import parse_state
class InstrumentalVariableEstimator(CausalEstimator):
"""Compute effect of treatment using the instrumental variables method.
This is also a superclass that can be inherited by other specific methods.
For a list of standard args and kwargs, see documentation for
:class:`~dowhy.causal_estimator.CausalEstimator`.
Supports additional parameters as listed below.
"""
def __init__(self, *args, iv_instrument_name=None, **kwargs):
"""
:param iv_instrument_name: Name of the specific instrumental variable
to be used. Needs to be one of the IVs identified in the
identification step. Default is to use all the IV variables
from the identification step.
"""
# Required to ensure that self.method_params contains all the information
# to create an object of this class
args_dict = {k: v for k, v in locals().items() if k not in type(self)._STD_INIT_ARGS}
args_dict.update(kwargs)
super().__init__(*args, **args_dict)
# choosing the instrumental variable to use
self.estimating_instrument_names = self._target_estimand.instrumental_variables
if iv_instrument_name is not None:
self.estimating_instrument_names = parse_state(iv_instrument_name)
self.logger.debug("Instrumental Variables used:" + ",".join(self.estimating_instrument_names))
if not self.estimating_instrument_names:
raise ValueError("No valid instruments found. IV Method not applicable")
if len(self.estimating_instrument_names) < len(self._treatment_name):
# TODO move this to the identification step
raise ValueError(
"Number of instruments fewer than number of treatments. 2SLS requires at least as many instruments as treatments."
)
self._estimating_instruments = self._data[self.estimating_instrument_names]
self.logger.info("INFO: Using Instrumental Variable Estimator")
self.symbolic_estimator = self.construct_symbolic_estimator(self._target_estimand)
self.logger.info(self.symbolic_estimator)
def _estimate_effect(self):
if len(self.estimating_instrument_names) == 1 and len(self._treatment_name) == 1:
instrument = self._estimating_instruments.iloc[:, 0]
self.logger.debug("Instrument Variable values: {0}".format(instrument))
num_unique_values = len(np.unique(instrument))
instrument_is_binary = num_unique_values <= 2
if instrument_is_binary:
# Obtain estimate by Wald Estimator
y1_z = np.mean(self._outcome[instrument == 1])
y0_z = np.mean(self._outcome[instrument == 0])
x1_z = np.mean(self._treatment[self._treatment_name[0]][instrument == 1])
x0_z = np.mean(self._treatment[self._treatment_name[0]][instrument == 0])
num = y1_z - y0_z
deno = x1_z - x0_z
iv_est = num / deno
else:
# Obtain estimate by 2SLS estimator: Cov(y,z) / Cov(x,z)
num_yz = np.cov(self._outcome, instrument)[0, 1]
deno_xz = np.cov(self._treatment[self._treatment_name[0]], instrument)[0, 1]
iv_est = num_yz / deno_xz
else:
            # More than 1 instrument. Use 2SLS.
est_treatment = self._treatment.astype(np.float32)
est_outcome = self._outcome.astype(np.float32)
ivmodel = IV2SLS(est_outcome, est_treatment, self._estimating_instruments)
reg_results = ivmodel.fit()
self.logger.debug(reg_results.summary())
iv_est = sum(
reg_results.params
) # the effect is the same for any treatment value (assume treatment goes from 0 to 1)
estimate = CausalEstimate(
estimate=iv_est,
control_value=self._control_value,
treatment_value=self._treatment_value,
target_estimand=self._target_estimand,
realized_estimand_expr=self.symbolic_estimator,
)
return estimate
def construct_symbolic_estimator(self, estimand):
sym_outcome = spstats.Normal(",".join(estimand.outcome_variable), 0, 1)
sym_treatment = spstats.Normal(",".join(estimand.treatment_variable), 0, 1)
sym_instrument = sp.Symbol(",".join(self.estimating_instrument_names))
sym_outcome_derivative = sp.Derivative(sym_outcome, sym_instrument)
sym_treatment_derivative = sp.Derivative(sym_treatment, sym_instrument)
sym_effect = spstats.Expectation(sym_outcome_derivative) / sp.stats.Expectation(sym_treatment_derivative)
estimator_assumptions = {
"treatment_effect_homogeneity": (
"Each unit's treatment {0} is ".format(self._treatment_name)
+ "affected in the same way by common causes of "
"{0} and {1}".format(self._treatment_name, self._outcome_name)
),
"outcome_effect_homogeneity": (
"Each unit's outcome {0} is ".format(self._outcome_name)
+ "affected in the same way by common causes of "
"{0} and {1}".format(self._treatment_name, self._outcome_name)
),
}
sym_assumptions = {**estimand.estimands["iv"]["assumptions"], **estimator_assumptions}
symbolic_estimand = RealizedEstimand(estimand, estimator_name="Wald Estimator")
symbolic_estimand.update_assumptions(sym_assumptions)
symbolic_estimand.update_estimand_expression(sym_effect)
return symbolic_estimand
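# --- Illustrative sketch (not part of the source file above) ---
# A self-contained check of the Wald estimator used in the binary-instrument
# branch: (E[Y|Z=1] - E[Y|Z=0]) / (E[T|Z=1] - E[T|Z=0]). The data-generating
# process below is an assumption made purely for illustration.
import numpy as np

rng = np.random.default_rng(0)
n = 100_000
z = rng.binomial(1, 0.5, n)                    # binary instrument
u = rng.normal(size=n)                         # unobserved confounder
t = 0.8 * z + 0.5 * u + rng.normal(size=n)     # treatment driven by the instrument
y = 2.0 * t + u + rng.normal(size=n)           # true effect of t on y is 2.0

wald = (y[z == 1].mean() - y[z == 0].mean()) / (t[z == 1].mean() - t[z == 0].mean())
print(wald)  # close to 2.0, since u is independent of z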
| from typing import Any, Dict, List, Optional, Union
import numpy as np
import pandas as pd
import sympy as sp
import sympy.stats as spstats
from statsmodels.sandbox.regression.gmm import IV2SLS
from dowhy.causal_estimator import CausalEstimate, CausalEstimator, RealizedEstimand
from dowhy.causal_identifier import IdentifiedEstimand
from dowhy.utils.api import parse_state
class InstrumentalVariableEstimator(CausalEstimator):
"""Compute effect of treatment using the instrumental variables method.
This is also a superclass that can be inherited by other specific methods.
Supports additional parameters as listed below.
"""
def __init__(
self,
identified_estimand: IdentifiedEstimand,
iv_instrument_name: Optional[Union[List, Dict, str]] = None,
test_significance: bool = False,
evaluate_effect_strength: bool = False,
confidence_intervals: bool = False,
num_null_simulations: int = CausalEstimator.DEFAULT_NUMBER_OF_SIMULATIONS_STAT_TEST,
num_simulations: int = CausalEstimator.DEFAULT_NUMBER_OF_SIMULATIONS_CI,
sample_size_fraction: int = CausalEstimator.DEFAULT_SAMPLE_SIZE_FRACTION,
confidence_level: float = CausalEstimator.DEFAULT_CONFIDENCE_LEVEL,
need_conditional_estimates: Union[bool, str] = "auto",
num_quantiles_to_discretize_cont_cols: int = CausalEstimator.NUM_QUANTILES_TO_DISCRETIZE_CONT_COLS,
**kwargs,
):
"""
:param identified_estimand: probability expression
representing the target identified estimand to estimate.
:param iv_instrument_name: Name of the specific instrumental variable
to be used. Needs to be one of the IVs identified in the
identification step. Default is to use all the IV variables
from the identification step.
:param test_significance: Binary flag or a string indicating whether to test significance and by which method. All estimators support test_significance="bootstrap" that estimates a p-value for the obtained estimate using the bootstrap method. Individual estimators can override this to support custom testing methods. The bootstrap method supports an optional parameter, num_null_simulations. If False, no testing is done. If True, significance of the estimate is tested using the custom method if available, otherwise by bootstrap.
:param evaluate_effect_strength: (Experimental) whether to evaluate the strength of effect
:param confidence_intervals: Binary flag or a string indicating whether the confidence intervals should be computed and which method should be used. All methods support estimation of confidence intervals using the bootstrap method by using the parameter confidence_intervals="bootstrap". The bootstrap method takes in two arguments (num_simulations and sample_size_fraction) that can be optionally specified in the params dictionary. Estimators may also override this to implement their own confidence interval method. If this parameter is False, no confidence intervals are computed. If True, confidence intervals are computed by the estimator's specific method if available, otherwise through bootstrap
:param num_null_simulations: The number of simulations for testing the
statistical significance of the estimator
:param num_simulations: The number of simulations for finding the
            confidence interval (and/or standard error) for an estimate
:param sample_size_fraction: The size of the sample for the bootstrap
estimator
:param confidence_level: The confidence level of the confidence
interval estimate
        :param need_conditional_estimates: Boolean flag (or the string "auto")
            indicating whether conditional estimates should be computed.
            Defaults to "auto", in which case they are computed whenever
            effect modifiers are present in the graph
:param num_quantiles_to_discretize_cont_cols: The number of quantiles
into which a numeric effect modifier is split, to enable
estimation of conditional treatment effect over it.
:param kwargs: (optional) Additional estimator-specific parameters
"""
super().__init__(
identified_estimand=identified_estimand,
test_significance=test_significance,
evaluate_effect_strength=evaluate_effect_strength,
confidence_intervals=confidence_intervals,
num_null_simulations=num_null_simulations,
num_simulations=num_simulations,
sample_size_fraction=sample_size_fraction,
confidence_level=confidence_level,
need_conditional_estimates=need_conditional_estimates,
num_quantiles_to_discretize_cont_cols=num_quantiles_to_discretize_cont_cols,
iv_instrument_name=iv_instrument_name,
**kwargs,
)
self.iv_instrument_name = iv_instrument_name
self.logger.info("INFO: Using Instrumental Variable Estimator")
def fit(
self,
data: pd.DataFrame,
treatment_name: str,
outcome_name: str,
effect_modifier_names: Optional[List[str]] = None,
):
"""
Fits the estimator with data for effect estimation
:param data: data frame containing the data
        :param treatment_name: name of the treatment variable
        :param outcome_name: name of the outcome variable
        :param effect_modifier_names: Variables on which to compute separate
            effects, or return a heterogeneous effect function. Effect
            modifiers are not used by the instrumental variable method.
"""
self._set_data(data, treatment_name, outcome_name)
self._set_effect_modifiers(effect_modifier_names)
self.estimating_instrument_names = self._target_estimand.instrumental_variables
if self.iv_instrument_name is not None:
self.estimating_instrument_names = parse_state(self.iv_instrument_name)
self.logger.debug("Instrumental Variables used:" + ",".join(self.estimating_instrument_names))
if not self.estimating_instrument_names:
raise ValueError("No valid instruments found. IV Method not applicable")
if len(self.estimating_instrument_names) < len(self._treatment_name):
# TODO move this to the identification step
raise ValueError(
"Number of instruments fewer than number of treatments. 2SLS requires at least as many instruments as treatments."
)
self._estimating_instruments = self._data[self.estimating_instrument_names]
self.symbolic_estimator = self.construct_symbolic_estimator(self._target_estimand)
self.logger.info(self.symbolic_estimator)
return self
def estimate_effect(
self, data: pd.DataFrame = None, treatment_value: Any = 1, control_value: Any = 0, target_units=None, **_
):
"""
        :param data: dataframe containing the data on which treatment effect is to be estimated.
        :param treatment_value: value of the treatment variable for which the effect is to be estimated.
        :param control_value: value of the treatment variable that denotes its absence (usually 0)
        :param target_units: The units for which the treatment effect should be estimated.
            It can be a DataFrame that contains values of the effect_modifiers and effect will be estimated only for this new data.
            It can also be a lambda function that can be used as an index for the data (pandas DataFrame) to select the required rows.
"""
if data is None:
data = self._data
self._target_units = target_units
self._treatment_value = treatment_value
self._control_value = control_value
if len(self.estimating_instrument_names) == 1 and len(self._treatment_name) == 1:
instrument = self._estimating_instruments.iloc[:, 0]
self.logger.debug("Instrument Variable values: {0}".format(instrument))
num_unique_values = len(np.unique(instrument))
instrument_is_binary = num_unique_values <= 2
if instrument_is_binary:
# Obtain estimate by Wald Estimator
y1_z = np.mean(self._outcome[instrument == 1])
y0_z = np.mean(self._outcome[instrument == 0])
x1_z = np.mean(self._treatment[self._treatment_name[0]][instrument == 1])
x0_z = np.mean(self._treatment[self._treatment_name[0]][instrument == 0])
num = y1_z - y0_z
deno = x1_z - x0_z
iv_est = num / deno
else:
# Obtain estimate by 2SLS estimator: Cov(y,z) / Cov(x,z)
num_yz = np.cov(self._outcome, instrument)[0, 1]
deno_xz = np.cov(self._treatment[self._treatment_name[0]], instrument)[0, 1]
iv_est = num_yz / deno_xz
else:
            # More than 1 instrument. Use 2SLS.
est_treatment = self._treatment.astype(np.float32)
est_outcome = self._outcome.astype(np.float32)
ivmodel = IV2SLS(est_outcome, est_treatment, self._estimating_instruments)
reg_results = ivmodel.fit()
self.logger.debug(reg_results.summary())
iv_est = sum(
reg_results.params
) # the effect is the same for any treatment value (assume treatment goes from 0 to 1)
estimate = CausalEstimate(
estimate=iv_est,
control_value=control_value,
treatment_value=treatment_value,
target_estimand=self._target_estimand,
realized_estimand_expr=self.symbolic_estimator,
)
estimate.add_estimator(self)
return estimate
def construct_symbolic_estimator(self, estimand):
sym_outcome = spstats.Normal(",".join(estimand.outcome_variable), 0, 1)
sym_treatment = spstats.Normal(",".join(estimand.treatment_variable), 0, 1)
sym_instrument = sp.Symbol(",".join(self.estimating_instrument_names))
sym_outcome_derivative = sp.Derivative(sym_outcome, sym_instrument)
sym_treatment_derivative = sp.Derivative(sym_treatment, sym_instrument)
sym_effect = spstats.Expectation(sym_outcome_derivative) / sp.stats.Expectation(sym_treatment_derivative)
estimator_assumptions = {
"treatment_effect_homogeneity": (
"Each unit's treatment {0} is ".format(self._treatment_name)
+ "affected in the same way by common causes of "
"{0} and {1}".format(self._treatment_name, self._outcome_name)
),
"outcome_effect_homogeneity": (
"Each unit's outcome {0} is ".format(self._outcome_name)
+ "affected in the same way by common causes of "
"{0} and {1}".format(self._treatment_name, self._outcome_name)
),
}
sym_assumptions = {**estimand.estimands["iv"]["assumptions"], **estimator_assumptions}
symbolic_estimand = RealizedEstimand(estimand, estimator_name="Wald Estimator")
symbolic_estimand.update_assumptions(sym_assumptions)
symbolic_estimand.update_estimand_expression(sym_effect)
return symbolic_estimand
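# --- Illustrative sketch (not part of the source file above) ---
# What construct_symbolic_estimator assembles, in isolation: the IV estimand
# as a ratio of expected derivatives with respect to the instrument. The
# symbol names "y", "t", "z" are illustrative.
import sympy as sp
import sympy.stats as spstats

y = spstats.Normal("y", 0, 1)
t = spstats.Normal("t", 0, 1)
z = sp.Symbol("z")
sym_effect = spstats.Expectation(sp.Derivative(y, z)) / spstats.Expectation(sp.Derivative(t, z))
print(sym_effect)  # Expectation(Derivative(y, z))/Expectation(Derivative(t, z))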
| andresmor-ms | 11c4e0dafd6e824eb81ad14262457d954ae61468 | affe0952f4aba6845247355c171565510c2c1673 | Effect modifiers are not used by the IV method. We can update the docstring to say that effect_modifiers are not supported.
Shall we also raise a ValueError here if a user provides them? | amit-sharma | 228 |
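# --- Illustrative sketch following the review comment above (hypothetical) ---
# One way the suggested guard could look if fit() were to reject effect
# modifiers explicitly; this is not code from the pull request.
def _check_effect_modifiers(effect_modifier_names):
    """Raise if effect modifiers are passed, since the IV method ignores them."""
    if effect_modifier_names:
        raise ValueError("InstrumentalVariableEstimator does not support effect modifiers.")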
py-why/dowhy | 746 | Functional api/causal estimators | * Introduce `fit()` method to estimators.
* Refactor constructors to avoid using `*args` and `**kwargs` and have more explicit parameters.
* Refactor refuters and other parts of the code to use `fit()` and modify arguments to `estimate_effect()` | null | 2022-11-04 16:15:39+00:00 | 2022-12-03 17:07:53+00:00 | dowhy/causal_estimators/instrumental_variable_estimator.py | import numpy as np
import sympy as sp
import sympy.stats as spstats
from statsmodels.sandbox.regression.gmm import IV2SLS
from dowhy.causal_estimator import CausalEstimate, CausalEstimator, RealizedEstimand
from dowhy.utils.api import parse_state
class InstrumentalVariableEstimator(CausalEstimator):
"""Compute effect of treatment using the instrumental variables method.
This is also a superclass that can be inherited by other specific methods.
For a list of standard args and kwargs, see documentation for
:class:`~dowhy.causal_estimator.CausalEstimator`.
Supports additional parameters as listed below.
"""
def __init__(self, *args, iv_instrument_name=None, **kwargs):
"""
:param iv_instrument_name: Name of the specific instrumental variable
to be used. Needs to be one of the IVs identified in the
identification step. Default is to use all the IV variables
from the identification step.
"""
# Required to ensure that self.method_params contains all the information
# to create an object of this class
args_dict = {k: v for k, v in locals().items() if k not in type(self)._STD_INIT_ARGS}
args_dict.update(kwargs)
super().__init__(*args, **args_dict)
# choosing the instrumental variable to use
self.estimating_instrument_names = self._target_estimand.instrumental_variables
if iv_instrument_name is not None:
self.estimating_instrument_names = parse_state(iv_instrument_name)
self.logger.debug("Instrumental Variables used:" + ",".join(self.estimating_instrument_names))
if not self.estimating_instrument_names:
raise ValueError("No valid instruments found. IV Method not applicable")
if len(self.estimating_instrument_names) < len(self._treatment_name):
# TODO move this to the identification step
raise ValueError(
"Number of instruments fewer than number of treatments. 2SLS requires at least as many instruments as treatments."
)
self._estimating_instruments = self._data[self.estimating_instrument_names]
self.logger.info("INFO: Using Instrumental Variable Estimator")
self.symbolic_estimator = self.construct_symbolic_estimator(self._target_estimand)
self.logger.info(self.symbolic_estimator)
def _estimate_effect(self):
if len(self.estimating_instrument_names) == 1 and len(self._treatment_name) == 1:
instrument = self._estimating_instruments.iloc[:, 0]
self.logger.debug("Instrument Variable values: {0}".format(instrument))
num_unique_values = len(np.unique(instrument))
instrument_is_binary = num_unique_values <= 2
if instrument_is_binary:
# Obtain estimate by Wald Estimator
y1_z = np.mean(self._outcome[instrument == 1])
y0_z = np.mean(self._outcome[instrument == 0])
x1_z = np.mean(self._treatment[self._treatment_name[0]][instrument == 1])
x0_z = np.mean(self._treatment[self._treatment_name[0]][instrument == 0])
num = y1_z - y0_z
deno = x1_z - x0_z
iv_est = num / deno
else:
# Obtain estimate by 2SLS estimator: Cov(y,z) / Cov(x,z)
num_yz = np.cov(self._outcome, instrument)[0, 1]
deno_xz = np.cov(self._treatment[self._treatment_name[0]], instrument)[0, 1]
iv_est = num_yz / deno_xz
else:
            # More than 1 instrument. Use 2SLS.
est_treatment = self._treatment.astype(np.float32)
est_outcome = self._outcome.astype(np.float32)
ivmodel = IV2SLS(est_outcome, est_treatment, self._estimating_instruments)
reg_results = ivmodel.fit()
self.logger.debug(reg_results.summary())
iv_est = sum(
reg_results.params
) # the effect is the same for any treatment value (assume treatment goes from 0 to 1)
estimate = CausalEstimate(
estimate=iv_est,
control_value=self._control_value,
treatment_value=self._treatment_value,
target_estimand=self._target_estimand,
realized_estimand_expr=self.symbolic_estimator,
)
return estimate
def construct_symbolic_estimator(self, estimand):
sym_outcome = spstats.Normal(",".join(estimand.outcome_variable), 0, 1)
sym_treatment = spstats.Normal(",".join(estimand.treatment_variable), 0, 1)
sym_instrument = sp.Symbol(",".join(self.estimating_instrument_names))
sym_outcome_derivative = sp.Derivative(sym_outcome, sym_instrument)
sym_treatment_derivative = sp.Derivative(sym_treatment, sym_instrument)
sym_effect = spstats.Expectation(sym_outcome_derivative) / sp.stats.Expectation(sym_treatment_derivative)
estimator_assumptions = {
"treatment_effect_homogeneity": (
"Each unit's treatment {0} is ".format(self._treatment_name)
+ "affected in the same way by common causes of "
"{0} and {1}".format(self._treatment_name, self._outcome_name)
),
"outcome_effect_homogeneity": (
"Each unit's outcome {0} is ".format(self._outcome_name)
+ "affected in the same way by common causes of "
"{0} and {1}".format(self._treatment_name, self._outcome_name)
),
}
sym_assumptions = {**estimand.estimands["iv"]["assumptions"], **estimator_assumptions}
symbolic_estimand = RealizedEstimand(estimand, estimator_name="Wald Estimator")
symbolic_estimand.update_assumptions(sym_assumptions)
symbolic_estimand.update_estimand_expression(sym_effect)
return symbolic_estimand
| from typing import Any, Dict, List, Optional, Union
import numpy as np
import pandas as pd
import sympy as sp
import sympy.stats as spstats
from statsmodels.sandbox.regression.gmm import IV2SLS
from dowhy.causal_estimator import CausalEstimate, CausalEstimator, RealizedEstimand
from dowhy.causal_identifier import IdentifiedEstimand
from dowhy.utils.api import parse_state
class InstrumentalVariableEstimator(CausalEstimator):
"""Compute effect of treatment using the instrumental variables method.
This is also a superclass that can be inherited by other specific methods.
Supports additional parameters as listed below.
"""
def __init__(
self,
identified_estimand: IdentifiedEstimand,
iv_instrument_name: Optional[Union[List, Dict, str]] = None,
test_significance: bool = False,
evaluate_effect_strength: bool = False,
confidence_intervals: bool = False,
num_null_simulations: int = CausalEstimator.DEFAULT_NUMBER_OF_SIMULATIONS_STAT_TEST,
num_simulations: int = CausalEstimator.DEFAULT_NUMBER_OF_SIMULATIONS_CI,
sample_size_fraction: int = CausalEstimator.DEFAULT_SAMPLE_SIZE_FRACTION,
confidence_level: float = CausalEstimator.DEFAULT_CONFIDENCE_LEVEL,
need_conditional_estimates: Union[bool, str] = "auto",
num_quantiles_to_discretize_cont_cols: int = CausalEstimator.NUM_QUANTILES_TO_DISCRETIZE_CONT_COLS,
**kwargs,
):
"""
:param identified_estimand: probability expression
representing the target identified estimand to estimate.
:param iv_instrument_name: Name of the specific instrumental variable
to be used. Needs to be one of the IVs identified in the
identification step. Default is to use all the IV variables
from the identification step.
:param test_significance: Binary flag or a string indicating whether to test significance and by which method. All estimators support test_significance="bootstrap" that estimates a p-value for the obtained estimate using the bootstrap method. Individual estimators can override this to support custom testing methods. The bootstrap method supports an optional parameter, num_null_simulations. If False, no testing is done. If True, significance of the estimate is tested using the custom method if available, otherwise by bootstrap.
:param evaluate_effect_strength: (Experimental) whether to evaluate the strength of effect
:param confidence_intervals: Binary flag or a string indicating whether the confidence intervals should be computed and which method should be used. All methods support estimation of confidence intervals using the bootstrap method by using the parameter confidence_intervals="bootstrap". The bootstrap method takes in two arguments (num_simulations and sample_size_fraction) that can be optionally specified in the params dictionary. Estimators may also override this to implement their own confidence interval method. If this parameter is False, no confidence intervals are computed. If True, confidence intervals are computed by the estimator's specific method if available, otherwise through bootstrap
:param num_null_simulations: The number of simulations for testing the
statistical significance of the estimator
:param num_simulations: The number of simulations for finding the
            confidence interval (and/or standard error) for an estimate
:param sample_size_fraction: The size of the sample for the bootstrap
estimator
:param confidence_level: The confidence level of the confidence
interval estimate
        :param need_conditional_estimates: Boolean flag (or the string "auto")
            indicating whether conditional estimates should be computed.
            Defaults to "auto", in which case they are computed whenever
            effect modifiers are present in the graph
:param num_quantiles_to_discretize_cont_cols: The number of quantiles
into which a numeric effect modifier is split, to enable
estimation of conditional treatment effect over it.
:param kwargs: (optional) Additional estimator-specific parameters
"""
super().__init__(
identified_estimand=identified_estimand,
test_significance=test_significance,
evaluate_effect_strength=evaluate_effect_strength,
confidence_intervals=confidence_intervals,
num_null_simulations=num_null_simulations,
num_simulations=num_simulations,
sample_size_fraction=sample_size_fraction,
confidence_level=confidence_level,
need_conditional_estimates=need_conditional_estimates,
num_quantiles_to_discretize_cont_cols=num_quantiles_to_discretize_cont_cols,
iv_instrument_name=iv_instrument_name,
**kwargs,
)
self.iv_instrument_name = iv_instrument_name
self.logger.info("INFO: Using Instrumental Variable Estimator")
def fit(
self,
data: pd.DataFrame,
treatment_name: str,
outcome_name: str,
effect_modifier_names: Optional[List[str]] = None,
):
"""
Fits the estimator with data for effect estimation
:param data: data frame containing the data
        :param treatment_name: name of the treatment variable
        :param outcome_name: name of the outcome variable
        :param effect_modifier_names: Variables on which to compute separate
            effects, or return a heterogeneous effect function. Effect
            modifiers are not used by the instrumental variable method.
"""
self._set_data(data, treatment_name, outcome_name)
self._set_effect_modifiers(effect_modifier_names)
self.estimating_instrument_names = self._target_estimand.instrumental_variables
if self.iv_instrument_name is not None:
self.estimating_instrument_names = parse_state(self.iv_instrument_name)
self.logger.debug("Instrumental Variables used:" + ",".join(self.estimating_instrument_names))
if not self.estimating_instrument_names:
raise ValueError("No valid instruments found. IV Method not applicable")
if len(self.estimating_instrument_names) < len(self._treatment_name):
# TODO move this to the identification step
raise ValueError(
"Number of instruments fewer than number of treatments. 2SLS requires at least as many instruments as treatments."
)
self._estimating_instruments = self._data[self.estimating_instrument_names]
self.symbolic_estimator = self.construct_symbolic_estimator(self._target_estimand)
self.logger.info(self.symbolic_estimator)
return self
def estimate_effect(
self, data: pd.DataFrame = None, treatment_value: Any = 1, control_value: Any = 0, target_units=None, **_
):
"""
        :param data: dataframe containing the data on which treatment effect is to be estimated.
        :param treatment_value: value of the treatment variable for which the effect is to be estimated.
        :param control_value: value of the treatment variable that denotes its absence (usually 0)
        :param target_units: The units for which the treatment effect should be estimated.
            It can be a DataFrame that contains values of the effect_modifiers and effect will be estimated only for this new data.
            It can also be a lambda function that can be used as an index for the data (pandas DataFrame) to select the required rows.
"""
if data is None:
data = self._data
self._target_units = target_units
self._treatment_value = treatment_value
self._control_value = control_value
if len(self.estimating_instrument_names) == 1 and len(self._treatment_name) == 1:
instrument = self._estimating_instruments.iloc[:, 0]
self.logger.debug("Instrument Variable values: {0}".format(instrument))
num_unique_values = len(np.unique(instrument))
instrument_is_binary = num_unique_values <= 2
if instrument_is_binary:
# Obtain estimate by Wald Estimator
y1_z = np.mean(self._outcome[instrument == 1])
y0_z = np.mean(self._outcome[instrument == 0])
x1_z = np.mean(self._treatment[self._treatment_name[0]][instrument == 1])
x0_z = np.mean(self._treatment[self._treatment_name[0]][instrument == 0])
num = y1_z - y0_z
deno = x1_z - x0_z
iv_est = num / deno
else:
# Obtain estimate by 2SLS estimator: Cov(y,z) / Cov(x,z)
num_yz = np.cov(self._outcome, instrument)[0, 1]
deno_xz = np.cov(self._treatment[self._treatment_name[0]], instrument)[0, 1]
iv_est = num_yz / deno_xz
else:
            # More than 1 instrument. Use 2SLS.
est_treatment = self._treatment.astype(np.float32)
est_outcome = self._outcome.astype(np.float32)
ivmodel = IV2SLS(est_outcome, est_treatment, self._estimating_instruments)
reg_results = ivmodel.fit()
self.logger.debug(reg_results.summary())
iv_est = sum(
reg_results.params
) # the effect is the same for any treatment value (assume treatment goes from 0 to 1)
estimate = CausalEstimate(
estimate=iv_est,
control_value=control_value,
treatment_value=treatment_value,
target_estimand=self._target_estimand,
realized_estimand_expr=self.symbolic_estimator,
)
estimate.add_estimator(self)
return estimate
def construct_symbolic_estimator(self, estimand):
sym_outcome = spstats.Normal(",".join(estimand.outcome_variable), 0, 1)
sym_treatment = spstats.Normal(",".join(estimand.treatment_variable), 0, 1)
sym_instrument = sp.Symbol(",".join(self.estimating_instrument_names))
sym_outcome_derivative = sp.Derivative(sym_outcome, sym_instrument)
sym_treatment_derivative = sp.Derivative(sym_treatment, sym_instrument)
sym_effect = spstats.Expectation(sym_outcome_derivative) / sp.stats.Expectation(sym_treatment_derivative)
estimator_assumptions = {
"treatment_effect_homogeneity": (
"Each unit's treatment {0} is ".format(self._treatment_name)
+ "affected in the same way by common causes of "
"{0} and {1}".format(self._treatment_name, self._outcome_name)
),
"outcome_effect_homogeneity": (
"Each unit's outcome {0} is ".format(self._outcome_name)
+ "affected in the same way by common causes of "
"{0} and {1}".format(self._treatment_name, self._outcome_name)
),
}
sym_assumptions = {**estimand.estimands["iv"]["assumptions"], **estimator_assumptions}
symbolic_estimand = RealizedEstimand(estimand, estimator_name="Wald Estimator")
symbolic_estimand.update_assumptions(sym_assumptions)
symbolic_estimand.update_estimand_expression(sym_effect)
return symbolic_estimand
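# --- Illustrative sketch (not part of the source file above) ---
# With a single continuous instrument, the branch above reduces 2SLS to a
# covariance ratio, Cov(Y, Z) / Cov(T, Z). A self-contained check on synthetic
# data (the data-generating process is an assumption made for illustration):
import numpy as np

rng = np.random.default_rng(1)
n = 100_000
z = rng.normal(size=n)                         # continuous instrument
u = rng.normal(size=n)                         # unobserved confounder
t = z + 0.5 * u + rng.normal(size=n)
y = 1.5 * t + u + rng.normal(size=n)           # true effect of t on y is 1.5

iv_est = np.cov(y, z)[0, 1] / np.cov(t, z)[0, 1]
print(iv_est)  # close to 1.5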
| andresmor-ms | 11c4e0dafd6e824eb81ad14262457d954ae61468 | affe0952f4aba6845247355c171565510c2c1673 | I think it is a better idea to just remove it since it is not used. | andresmor-ms | 229 |
py-why/dowhy | 746 | Functional api/causal estimators | * Introduce `fit()` method to estimators.
* Refactor constructors to avoid using `*args` and `**kwargs` and have more explicit parameters.
* Refactor refuters and other parts of the code to use `fit()` and modify arguments to `estimate_effect()` | null | 2022-11-04 16:15:39+00:00 | 2022-12-03 17:07:53+00:00 | dowhy/causal_estimators/instrumental_variable_estimator.py | import numpy as np
import sympy as sp
import sympy.stats as spstats
from statsmodels.sandbox.regression.gmm import IV2SLS
from dowhy.causal_estimator import CausalEstimate, CausalEstimator, RealizedEstimand
from dowhy.utils.api import parse_state
class InstrumentalVariableEstimator(CausalEstimator):
"""Compute effect of treatment using the instrumental variables method.
This is also a superclass that can be inherited by other specific methods.
For a list of standard args and kwargs, see documentation for
:class:`~dowhy.causal_estimator.CausalEstimator`.
Supports additional parameters as listed below.
"""
def __init__(self, *args, iv_instrument_name=None, **kwargs):
"""
:param iv_instrument_name: Name of the specific instrumental variable
to be used. Needs to be one of the IVs identified in the
identification step. Default is to use all the IV variables
from the identification step.
"""
# Required to ensure that self.method_params contains all the information
# to create an object of this class
args_dict = {k: v for k, v in locals().items() if k not in type(self)._STD_INIT_ARGS}
args_dict.update(kwargs)
super().__init__(*args, **args_dict)
# choosing the instrumental variable to use
self.estimating_instrument_names = self._target_estimand.instrumental_variables
if iv_instrument_name is not None:
self.estimating_instrument_names = parse_state(iv_instrument_name)
self.logger.debug("Instrumental Variables used:" + ",".join(self.estimating_instrument_names))
if not self.estimating_instrument_names:
raise ValueError("No valid instruments found. IV Method not applicable")
if len(self.estimating_instrument_names) < len(self._treatment_name):
# TODO move this to the identification step
raise ValueError(
"Number of instruments fewer than number of treatments. 2SLS requires at least as many instruments as treatments."
)
self._estimating_instruments = self._data[self.estimating_instrument_names]
self.logger.info("INFO: Using Instrumental Variable Estimator")
self.symbolic_estimator = self.construct_symbolic_estimator(self._target_estimand)
self.logger.info(self.symbolic_estimator)
def _estimate_effect(self):
if len(self.estimating_instrument_names) == 1 and len(self._treatment_name) == 1:
instrument = self._estimating_instruments.iloc[:, 0]
self.logger.debug("Instrument Variable values: {0}".format(instrument))
num_unique_values = len(np.unique(instrument))
instrument_is_binary = num_unique_values <= 2
if instrument_is_binary:
# Obtain estimate by Wald Estimator
y1_z = np.mean(self._outcome[instrument == 1])
y0_z = np.mean(self._outcome[instrument == 0])
x1_z = np.mean(self._treatment[self._treatment_name[0]][instrument == 1])
x0_z = np.mean(self._treatment[self._treatment_name[0]][instrument == 0])
num = y1_z - y0_z
deno = x1_z - x0_z
iv_est = num / deno
else:
# Obtain estimate by 2SLS estimator: Cov(y,z) / Cov(x,z)
num_yz = np.cov(self._outcome, instrument)[0, 1]
deno_xz = np.cov(self._treatment[self._treatment_name[0]], instrument)[0, 1]
iv_est = num_yz / deno_xz
else:
# More than 1 instrument. Use 2sls.
est_treatment = self._treatment.astype(np.float32)
est_outcome = self._outcome.astype(np.float32)
ivmodel = IV2SLS(est_outcome, est_treatment, self._estimating_instruments)
reg_results = ivmodel.fit()
self.logger.debug(reg_results.summary())
iv_est = sum(
reg_results.params
) # the effect is the same for any treatment value (assume treatment goes from 0 to 1)
estimate = CausalEstimate(
estimate=iv_est,
control_value=self._control_value,
treatment_value=self._treatment_value,
target_estimand=self._target_estimand,
realized_estimand_expr=self.symbolic_estimator,
)
return estimate
def construct_symbolic_estimator(self, estimand):
sym_outcome = spstats.Normal(",".join(estimand.outcome_variable), 0, 1)
sym_treatment = spstats.Normal(",".join(estimand.treatment_variable), 0, 1)
sym_instrument = sp.Symbol(",".join(self.estimating_instrument_names))
sym_outcome_derivative = sp.Derivative(sym_outcome, sym_instrument)
sym_treatment_derivative = sp.Derivative(sym_treatment, sym_instrument)
sym_effect = spstats.Expectation(sym_outcome_derivative) / sp.stats.Expectation(sym_treatment_derivative)
estimator_assumptions = {
"treatment_effect_homogeneity": (
"Each unit's treatment {0} is ".format(self._treatment_name)
+ "affected in the same way by common causes of "
"{0} and {1}".format(self._treatment_name, self._outcome_name)
),
"outcome_effect_homogeneity": (
"Each unit's outcome {0} is ".format(self._outcome_name)
+ "affected in the same way by common causes of "
"{0} and {1}".format(self._treatment_name, self._outcome_name)
),
}
sym_assumptions = {**estimand.estimands["iv"]["assumptions"], **estimator_assumptions}
symbolic_estimand = RealizedEstimand(estimand, estimator_name="Wald Estimator")
symbolic_estimand.update_assumptions(sym_assumptions)
symbolic_estimand.update_estimand_expression(sym_effect)
return symbolic_estimand
| from typing import Any, Dict, List, Optional, Union
import numpy as np
import pandas as pd
import sympy as sp
import sympy.stats as spstats
from statsmodels.sandbox.regression.gmm import IV2SLS
from dowhy.causal_estimator import CausalEstimate, CausalEstimator, RealizedEstimand
from dowhy.causal_identifier import IdentifiedEstimand
from dowhy.utils.api import parse_state
class InstrumentalVariableEstimator(CausalEstimator):
"""Compute effect of treatment using the instrumental variables method.
This is also a superclass that can be inherited by other specific methods.
Supports additional parameters as listed below.
"""
def __init__(
self,
identified_estimand: IdentifiedEstimand,
iv_instrument_name: Optional[Union[List, Dict, str]] = None,
test_significance: bool = False,
evaluate_effect_strength: bool = False,
confidence_intervals: bool = False,
num_null_simulations: int = CausalEstimator.DEFAULT_NUMBER_OF_SIMULATIONS_STAT_TEST,
num_simulations: int = CausalEstimator.DEFAULT_NUMBER_OF_SIMULATIONS_CI,
sample_size_fraction: int = CausalEstimator.DEFAULT_SAMPLE_SIZE_FRACTION,
confidence_level: float = CausalEstimator.DEFAULT_CONFIDENCE_LEVEL,
need_conditional_estimates: Union[bool, str] = "auto",
num_quantiles_to_discretize_cont_cols: int = CausalEstimator.NUM_QUANTILES_TO_DISCRETIZE_CONT_COLS,
**kwargs,
):
"""
:param identified_estimand: probability expression
representing the target identified estimand to estimate.
:param iv_instrument_name: Name of the specific instrumental variable
to be used. Needs to be one of the IVs identified in the
identification step. Default is to use all the IV variables
from the identification step.
:param test_significance: Binary flag or a string indicating whether to test significance and by which method. All estimators support test_significance="bootstrap" that estimates a p-value for the obtained estimate using the bootstrap method. Individual estimators can override this to support custom testing methods. The bootstrap method supports an optional parameter, num_null_simulations. If False, no testing is done. If True, significance of the estimate is tested using the custom method if available, otherwise by bootstrap.
:param evaluate_effect_strength: (Experimental) whether to evaluate the strength of effect
:param confidence_intervals: Binary flag or a string indicating whether the confidence intervals should be computed and which method should be used. All methods support estimation of confidence intervals using the bootstrap method by using the parameter confidence_intervals="bootstrap". The bootstrap method takes in two arguments (num_simulations and sample_size_fraction) that can be optionally specified in the params dictionary. Estimators may also override this to implement their own confidence interval method. If this parameter is False, no confidence intervals are computed. If True, confidence intervals are computed by the estimator's specific method if available, otherwise through bootstrap
:param num_null_simulations: The number of simulations for testing the
statistical significance of the estimator
:param num_simulations: The number of simulations for finding the
            confidence interval (and/or standard error) for an estimate
:param sample_size_fraction: The size of the sample for the bootstrap
estimator
:param confidence_level: The confidence level of the confidence
interval estimate
        :param need_conditional_estimates: Boolean flag (or the string "auto")
            indicating whether conditional estimates should be computed.
            Defaults to "auto", in which case they are computed whenever
            effect modifiers are present in the graph
:param num_quantiles_to_discretize_cont_cols: The number of quantiles
into which a numeric effect modifier is split, to enable
estimation of conditional treatment effect over it.
:param kwargs: (optional) Additional estimator-specific parameters
"""
super().__init__(
identified_estimand=identified_estimand,
test_significance=test_significance,
evaluate_effect_strength=evaluate_effect_strength,
confidence_intervals=confidence_intervals,
num_null_simulations=num_null_simulations,
num_simulations=num_simulations,
sample_size_fraction=sample_size_fraction,
confidence_level=confidence_level,
need_conditional_estimates=need_conditional_estimates,
num_quantiles_to_discretize_cont_cols=num_quantiles_to_discretize_cont_cols,
iv_instrument_name=iv_instrument_name,
**kwargs,
)
self.iv_instrument_name = iv_instrument_name
self.logger.info("INFO: Using Instrumental Variable Estimator")
def fit(
self,
data: pd.DataFrame,
treatment_name: str,
outcome_name: str,
effect_modifier_names: Optional[List[str]] = None,
):
"""
Fits the estimator with data for effect estimation
:param data: data frame containing the data
        :param treatment_name: name of the treatment variable
        :param outcome_name: name of the outcome variable
        :param effect_modifier_names: Variables on which to compute separate
            effects, or return a heterogeneous effect function. Effect
            modifiers are not used by the instrumental variable method.
"""
self._set_data(data, treatment_name, outcome_name)
self._set_effect_modifiers(effect_modifier_names)
self.estimating_instrument_names = self._target_estimand.instrumental_variables
if self.iv_instrument_name is not None:
self.estimating_instrument_names = parse_state(self.iv_instrument_name)
self.logger.debug("Instrumental Variables used:" + ",".join(self.estimating_instrument_names))
if not self.estimating_instrument_names:
raise ValueError("No valid instruments found. IV Method not applicable")
if len(self.estimating_instrument_names) < len(self._treatment_name):
# TODO move this to the identification step
raise ValueError(
"Number of instruments fewer than number of treatments. 2SLS requires at least as many instruments as treatments."
)
self._estimating_instruments = self._data[self.estimating_instrument_names]
self.symbolic_estimator = self.construct_symbolic_estimator(self._target_estimand)
self.logger.info(self.symbolic_estimator)
return self
def estimate_effect(
self, data: pd.DataFrame = None, treatment_value: Any = 1, control_value: Any = 0, target_units=None, **_
):
"""
        :param data: dataframe containing the data on which treatment effect is to be estimated.
        :param treatment_value: value of the treatment variable for which the effect is to be estimated.
        :param control_value: value of the treatment variable that denotes its absence (usually 0)
        :param target_units: The units for which the treatment effect should be estimated.
            It can be a DataFrame that contains values of the effect_modifiers and effect will be estimated only for this new data.
            It can also be a lambda function that can be used as an index for the data (pandas DataFrame) to select the required rows.
"""
if data is None:
data = self._data
self._target_units = target_units
self._treatment_value = treatment_value
self._control_value = control_value
if len(self.estimating_instrument_names) == 1 and len(self._treatment_name) == 1:
instrument = self._estimating_instruments.iloc[:, 0]
self.logger.debug("Instrument Variable values: {0}".format(instrument))
num_unique_values = len(np.unique(instrument))
instrument_is_binary = num_unique_values <= 2
if instrument_is_binary:
# Obtain estimate by Wald Estimator
y1_z = np.mean(self._outcome[instrument == 1])
y0_z = np.mean(self._outcome[instrument == 0])
x1_z = np.mean(self._treatment[self._treatment_name[0]][instrument == 1])
x0_z = np.mean(self._treatment[self._treatment_name[0]][instrument == 0])
num = y1_z - y0_z
deno = x1_z - x0_z
iv_est = num / deno
else:
# Obtain estimate by 2SLS estimator: Cov(y,z) / Cov(x,z)
num_yz = np.cov(self._outcome, instrument)[0, 1]
deno_xz = np.cov(self._treatment[self._treatment_name[0]], instrument)[0, 1]
iv_est = num_yz / deno_xz
else:
            # More than 1 instrument. Use 2SLS.
est_treatment = self._treatment.astype(np.float32)
est_outcome = self._outcome.astype(np.float32)
ivmodel = IV2SLS(est_outcome, est_treatment, self._estimating_instruments)
reg_results = ivmodel.fit()
self.logger.debug(reg_results.summary())
iv_est = sum(
reg_results.params
) # the effect is the same for any treatment value (assume treatment goes from 0 to 1)
estimate = CausalEstimate(
estimate=iv_est,
control_value=control_value,
treatment_value=treatment_value,
target_estimand=self._target_estimand,
realized_estimand_expr=self.symbolic_estimator,
)
estimate.add_estimator(self)
return estimate
def construct_symbolic_estimator(self, estimand):
sym_outcome = spstats.Normal(",".join(estimand.outcome_variable), 0, 1)
sym_treatment = spstats.Normal(",".join(estimand.treatment_variable), 0, 1)
sym_instrument = sp.Symbol(",".join(self.estimating_instrument_names))
sym_outcome_derivative = sp.Derivative(sym_outcome, sym_instrument)
sym_treatment_derivative = sp.Derivative(sym_treatment, sym_instrument)
sym_effect = spstats.Expectation(sym_outcome_derivative) / sp.stats.Expectation(sym_treatment_derivative)
estimator_assumptions = {
"treatment_effect_homogeneity": (
"Each unit's treatment {0} is ".format(self._treatment_name)
+ "affected in the same way by common causes of "
"{0} and {1}".format(self._treatment_name, self._outcome_name)
),
"outcome_effect_homogeneity": (
"Each unit's outcome {0} is ".format(self._outcome_name)
+ "affected in the same way by common causes of "
"{0} and {1}".format(self._treatment_name, self._outcome_name)
),
}
sym_assumptions = {**estimand.estimands["iv"]["assumptions"], **estimator_assumptions}
symbolic_estimand = RealizedEstimand(estimand, estimator_name="Wald Estimator")
symbolic_estimand.update_assumptions(sym_assumptions)
symbolic_estimand.update_estimand_expression(sym_effect)
return symbolic_estimand
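# --- Illustrative sketch (not part of the source file above) ---
# The multi-instrument branch above delegates to statsmodels' sandbox IV2SLS,
# called as IV2SLS(outcome, treatment, instruments). A minimal example on
# synthetic data (the data-generating process is an assumption):
import numpy as np
from statsmodels.sandbox.regression.gmm import IV2SLS

rng = np.random.default_rng(2)
n = 50_000
z = rng.normal(size=(n, 2))                    # two instruments
u = rng.normal(size=n)                         # unobserved confounder
t = z @ np.array([1.0, 0.5]) + 0.5 * u + rng.normal(size=n)
y = 2.0 * t + u + rng.normal(size=n)           # true effect of t on y is 2.0

reg_results = IV2SLS(y, t[:, None], z).fit()
print(reg_results.params)  # a single coefficient, close to 2.0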
| andresmor-ms | 11c4e0dafd6e824eb81ad14262457d954ae61468 | affe0952f4aba6845247355c171565510c2c1673 | If I remove this, the tests will fail; I'll remove it as part of another PR once this one is completed. | andresmor-ms | 230 |
py-why/dowhy | 746 | Functional api/causal estimators | * Introduce `fit()` method to estimators.
* Refactor constructors to avoid using `*args` and `**kwargs` and have more explicit parameters.
* Refactor refuters and other parts of the code to use `fit()` and modify arguments to `estimate_effect()` | null | 2022-11-04 16:15:39+00:00 | 2022-12-03 17:07:53+00:00 | dowhy/causal_estimators/propensity_score_estimator.py | import numpy as np
import pandas as pd
from sklearn import linear_model
from sklearn.exceptions import NotFittedError
from dowhy.causal_estimator import CausalEstimator
class PropensityScoreEstimator(CausalEstimator):
"""
Base class for estimators that estimate effects based on propensity of
treatment assignment.
For a list of standard args and kwargs, see documentation for
:class:`~dowhy.causal_estimator.CausalEstimator`.
Supports additional parameters as listed below.
"""
def __init__(
self,
*args,
propensity_score_model=None,
recalculate_propensity_score=True,
propensity_score_column="propensity_score",
**kwargs,
):
"""
:param propensity_score_model: Model used to compute propensity score.
Can be any classification model that supports fit() and
predict_proba() methods. If None, LogisticRegression is used.
:param recalculate_propensity_score: Whether the propensity score
should be estimated. To use pre-computed propensity scores,
set this value to False. Default=True.
:param propensity_score_column: Column name that stores the
propensity score. Default='propensity_score'
"""
# Required to ensure that self.method_params contains all the
# parameters to create an object of this class
args_dict = {k: v for k, v in locals().items() if k not in type(self)._STD_INIT_ARGS}
args_dict.update(kwargs)
super().__init__(*args, **args_dict)
# Enable the user to pass params for a custom propensity model
self.propensity_score_model = propensity_score_model
self.recalculate_propensity_score = recalculate_propensity_score
self.propensity_score_column = propensity_score_column
# Check if the treatment is one-dimensional
if len(self._treatment_name) > 1:
            error_msg = str(self.__class__) + " cannot handle more than one treatment variable"
raise Exception(error_msg)
# Checking if the treatment is binary
treatment_values = self._data[self._treatment_name[0]].astype(int).unique()
if any([v not in [0, 1] for v in treatment_values]):
error_msg = "Propensity score methods are applicable only for binary treatments"
self.logger.error(error_msg)
raise Exception(error_msg)
self.logger.debug("Back-door variables used:" + ",".join(self._target_estimand.get_backdoor_variables()))
self._observed_common_causes_names = self._target_estimand.get_backdoor_variables()
if self._observed_common_causes_names:
self._observed_common_causes = self._data[self._observed_common_causes_names]
# Convert the categorical variables into dummy/indicator variables
# Basically, this gives a one hot encoding for each category
# The first category is taken to be the base line.
self._observed_common_causes = pd.get_dummies(self._observed_common_causes, drop_first=True)
else:
self._observed_common_causes = None
error_msg = "No common causes/confounders present. Propensity score based methods are not applicable"
self.logger.error(error_msg)
raise Exception(error_msg)
def _refresh_propensity_score(self):
"""
A custom estimator based on the way the propensity score estimates are to be used.
Invoked from the '_estimate_effect' method of various propensity score subclasses when the propensity score is not pre-computed.
"""
if self.recalculate_propensity_score is True:
if self.propensity_score_model is None:
self.propensity_score_model = linear_model.LogisticRegression()
treatment_reshaped = np.ravel(self._treatment)
self.propensity_score_model.fit(self._observed_common_causes, treatment_reshaped)
self._data[self.propensity_score_column] = self.propensity_score_model.predict_proba(
self._observed_common_causes
)[:, 1]
else:
# check if user provides the propensity score column
if self.propensity_score_column not in self._data.columns:
if self.propensity_score_model is None:
raise ValueError(
f"""Propensity score column {self.propensity_score_column} does not exist, nor does a propensity_model.
Please specify the column name that has your pre-computed propensity score, or a model to compute it."""
)
else:
try:
self._data[self.propensity_score_column] = self.propensity_score_model.predict_proba(
self._observed_common_causes
)[:, 1]
except NotFittedError:
raise NotFittedError("Please fit the propensity score model before calling predict_proba")
else:
self.logger.info(f"INFO: Using pre-computed propensity score in column {self.propensity_score_column}")
def construct_symbolic_estimator(self, estimand):
"""
A symbolic string that conveys what each estimator does.
For instance, linear regression is expressed as
y ~ bx + e
"""
raise NotImplementedError
def _estimate_effect(self):
"""
A custom estimator based on the way the propensity score estimates are to be used.
"""
raise NotImplementedError
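# --- Illustrative sketch (not part of the source file above) ---
# The core of _refresh_propensity_score: fit a classifier for treatment given
# the observed common causes and take the predicted probability of treatment
# as the propensity score. The column names "W" and "T" are illustrative.
import numpy as np
import pandas as pd
from sklearn import linear_model

rng = np.random.default_rng(3)
n = 10_000
w = rng.normal(size=n)                          # observed common cause
t = rng.binomial(1, 1.0 / (1.0 + np.exp(-w)))   # binary treatment
data = pd.DataFrame({"W": w, "T": t})

model = linear_model.LogisticRegression()
model.fit(data[["W"]], np.ravel(data["T"]))
data["propensity_score"] = model.predict_proba(data[["W"]])[:, 1]
print(data["propensity_score"].describe())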
| from typing import Any, List, Optional, Union
import numpy as np
import pandas as pd
from sklearn import linear_model
from sklearn.exceptions import NotFittedError
from dowhy.causal_estimator import CausalEstimator
from dowhy.causal_identifier import IdentifiedEstimand
class PropensityScoreEstimator(CausalEstimator):
"""
Base class for estimators that estimate effects based on propensity of
treatment assignment.
Supports additional parameters as listed below.
"""
def __init__(
self,
identified_estimand: IdentifiedEstimand,
test_significance: bool = False,
evaluate_effect_strength: bool = False,
confidence_intervals: bool = False,
num_null_simulations: int = CausalEstimator.DEFAULT_NUMBER_OF_SIMULATIONS_STAT_TEST,
num_simulations: int = CausalEstimator.DEFAULT_NUMBER_OF_SIMULATIONS_CI,
sample_size_fraction: int = CausalEstimator.DEFAULT_SAMPLE_SIZE_FRACTION,
confidence_level: float = CausalEstimator.DEFAULT_CONFIDENCE_LEVEL,
need_conditional_estimates: Union[bool, str] = "auto",
num_quantiles_to_discretize_cont_cols: int = CausalEstimator.NUM_QUANTILES_TO_DISCRETIZE_CONT_COLS,
propensity_score_model: Optional[Any] = None,
propensity_score_column: str = "propensity_score",
**kwargs,
):
"""
:param identified_estimand: probability expression
representing the target identified estimand to estimate.
:param test_significance: Binary flag or a string indicating whether to test significance and by which method. All estimators support test_significance="bootstrap" that estimates a p-value for the obtained estimate using the bootstrap method. Individual estimators can override this to support custom testing methods. The bootstrap method supports an optional parameter, num_null_simulations. If False, no testing is done. If True, significance of the estimate is tested using the custom method if available, otherwise by bootstrap.
:param evaluate_effect_strength: (Experimental) whether to evaluate the strength of effect
:param confidence_intervals: Binary flag or a string indicating whether the confidence intervals should be computed and which method should be used. All methods support estimation of confidence intervals using the bootstrap method by using the parameter confidence_intervals="bootstrap". The bootstrap method takes in two arguments (num_simulations and sample_size_fraction) that can be optionally specified in the params dictionary. Estimators may also override this to implement their own confidence interval method. If this parameter is False, no confidence intervals are computed. If True, confidence intervals are computed by the estimator's specific method if available, otherwise through bootstrap
:param num_null_simulations: The number of simulations for testing the
statistical significance of the estimator
:param num_simulations: The number of simulations for finding the
            confidence interval (and/or standard error) for an estimate
:param sample_size_fraction: The size of the sample for the bootstrap
estimator
:param confidence_level: The confidence level of the confidence
interval estimate
:param need_conditional_estimates: Boolean flag indicating whether
conditional estimates should be computed. Defaults to True if
there are effect modifiers in the graph
:param num_quantiles_to_discretize_cont_cols: The number of quantiles
into which a numeric effect modifier is split, to enable
estimation of conditional treatment effect over it.
:param propensity_score_model: Model used to compute propensity score.
Can be any classification model that supports fit() and
predict_proba() methods. If None, LogisticRegression is used.
:param propensity_score_column: Column name that stores the
propensity score. Default='propensity_score'
:param kwargs: (optional) Additional estimator-specific parameters
"""
super().__init__(
identified_estimand=identified_estimand,
test_significance=test_significance,
evaluate_effect_strength=evaluate_effect_strength,
confidence_intervals=confidence_intervals,
num_null_simulations=num_null_simulations,
num_simulations=num_simulations,
sample_size_fraction=sample_size_fraction,
confidence_level=confidence_level,
need_conditional_estimates=need_conditional_estimates,
num_quantiles_to_discretize_cont_cols=num_quantiles_to_discretize_cont_cols,
propensity_score_model=propensity_score_model,
propensity_score_column=propensity_score_column,
**kwargs,
)
# Enable the user to pass params for a custom propensity model
self.propensity_score_model = propensity_score_model
self.propensity_score_column = propensity_score_column
def fit(
self,
data: pd.DataFrame,
treatment_name: str,
outcome_name: str,
effect_modifier_names: Optional[List[str]] = None,
):
"""
        Fits the estimator with data for effect estimation.
        :param data: data frame containing the data
        :param treatment_name: name of the treatment variable
        :param outcome_name: name of the outcome variable
        :param effect_modifier_names: Variables on which to compute separate
effects, or return a heterogeneous effect function. Not all
methods support this currently.
"""
self._set_data(data, treatment_name, outcome_name)
self._set_effect_modifiers(effect_modifier_names)
self.logger.debug("Back-door variables used:" + ",".join(self._target_estimand.get_backdoor_variables()))
self._observed_common_causes_names = self._target_estimand.get_backdoor_variables()
if self._observed_common_causes_names:
self._observed_common_causes = self._data[self._observed_common_causes_names]
# Convert the categorical variables into dummy/indicator variables
# Basically, this gives a one hot encoding for each category
# The first category is taken to be the base line.
self._observed_common_causes = pd.get_dummies(self._observed_common_causes, drop_first=True)
else:
self._observed_common_causes = None
error_msg = "No common causes/confounders present. Propensity score based methods are not applicable"
self.logger.error(error_msg)
raise Exception(error_msg)
# Check if the treatment is one-dimensional
if len(self._treatment_name) > 1:
            error_msg = str(self.__class__) + " cannot handle more than one treatment variable"
raise Exception(error_msg)
# Checking if the treatment is binary
treatment_values = self._data[self._treatment_name[0]].astype(int).unique()
if any([v not in [0, 1] for v in treatment_values]):
error_msg = "Propensity score methods are applicable only for binary treatments"
self.logger.error(error_msg)
raise Exception(error_msg)
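        # Fit the propensity model here only when pre-computed scores are absent from the data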
if self.propensity_score_column not in self._data:
if self.propensity_score_model is None:
self.propensity_score_model = linear_model.LogisticRegression()
treatment_reshaped = np.ravel(self._treatment)
self.propensity_score_model.fit(self._observed_common_causes, treatment_reshaped)
return self
def estimate_propensity_score_column(self, data):
try:
data[self.propensity_score_column] = self.propensity_score_model.predict_proba(
self._observed_common_causes
)[:, 1]
except NotFittedError:
raise NotFittedError("Please fit the propensity score model before calling predict_proba")
def construct_symbolic_estimator(self, estimand):
"""
A symbolic string that conveys what each estimator does.
For instance, linear regression is expressed as
y ~ bx + e
"""
raise NotImplementedError
| andresmor-ms | 11c4e0dafd6e824eb81ad14262457d954ae61468 | affe0952f4aba6845247355c171565510c2c1673 | We do not need the recalculate_propensity_score parameter now, because we have separate fit and estimate steps; it can be removed. | amit-sharma | 231
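A minimal sketch of the two-step usage this comment assumes (PropensityScoreMatchingEstimator is a subclass refactored in this same PR; `df`, the column names, and `identified_estimand` are illustrative):
# Hypothetical usage: fit() trains the propensity model once, and
# estimate_effect() reuses it, so a recalculate flag is no longer needed.
estimator = PropensityScoreMatchingEstimator(identified_estimand)
estimator.fit(df, treatment_name="v0", outcome_name="y")
estimate = estimator.estimate_effect(data=df, target_units="ate")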
py-why/dowhy | 746 | Functional api/causal estimators | * Introduce `fit()` method to estimators.
* Refactor constructors to avoid using `*args` and `**kwargs` and have more explicit parameters.
* Refactor refuters and other parts of the code to use `fit()` and modify arguments to `estimate_effect()` | null | 2022-11-04 16:15:39+00:00 | 2022-12-03 17:07:53+00:00 | dowhy/causal_estimators/propensity_score_estimator.py | import numpy as np
import pandas as pd
from sklearn import linear_model
from sklearn.exceptions import NotFittedError
from dowhy.causal_estimator import CausalEstimator
class PropensityScoreEstimator(CausalEstimator):
"""
Base class for estimators that estimate effects based on propensity of
treatment assignment.
For a list of standard args and kwargs, see documentation for
:class:`~dowhy.causal_estimator.CausalEstimator`.
Supports additional parameters as listed below.
"""
def __init__(
self,
*args,
propensity_score_model=None,
recalculate_propensity_score=True,
propensity_score_column="propensity_score",
**kwargs,
):
"""
:param propensity_score_model: Model used to compute propensity score.
Can be any classification model that supports fit() and
predict_proba() methods. If None, LogisticRegression is used.
:param recalculate_propensity_score: Whether the propensity score
should be estimated. To use pre-computed propensity scores,
set this value to False. Default=True.
:param propensity_score_column: Column name that stores the
propensity score. Default='propensity_score'
"""
# Required to ensure that self.method_params contains all the
# parameters to create an object of this class
args_dict = {k: v for k, v in locals().items() if k not in type(self)._STD_INIT_ARGS}
args_dict.update(kwargs)
super().__init__(*args, **args_dict)
# Enable the user to pass params for a custom propensity model
self.propensity_score_model = propensity_score_model
self.recalculate_propensity_score = recalculate_propensity_score
self.propensity_score_column = propensity_score_column
# Check if the treatment is one-dimensional
if len(self._treatment_name) > 1:
            error_msg = str(self.__class__) + " cannot handle more than one treatment variable"
raise Exception(error_msg)
# Checking if the treatment is binary
treatment_values = self._data[self._treatment_name[0]].astype(int).unique()
if any([v not in [0, 1] for v in treatment_values]):
error_msg = "Propensity score methods are applicable only for binary treatments"
self.logger.error(error_msg)
raise Exception(error_msg)
self.logger.debug("Back-door variables used:" + ",".join(self._target_estimand.get_backdoor_variables()))
self._observed_common_causes_names = self._target_estimand.get_backdoor_variables()
if self._observed_common_causes_names:
self._observed_common_causes = self._data[self._observed_common_causes_names]
# Convert the categorical variables into dummy/indicator variables
# Basically, this gives a one hot encoding for each category
# The first category is taken to be the base line.
self._observed_common_causes = pd.get_dummies(self._observed_common_causes, drop_first=True)
else:
self._observed_common_causes = None
error_msg = "No common causes/confounders present. Propensity score based methods are not applicable"
self.logger.error(error_msg)
raise Exception(error_msg)
def _refresh_propensity_score(self):
"""
A custom estimator based on the way the propensity score estimates are to be used.
Invoked from the '_estimate_effect' method of various propensity score subclasses when the propensity score is not pre-computed.
"""
if self.recalculate_propensity_score is True:
if self.propensity_score_model is None:
self.propensity_score_model = linear_model.LogisticRegression()
treatment_reshaped = np.ravel(self._treatment)
self.propensity_score_model.fit(self._observed_common_causes, treatment_reshaped)
self._data[self.propensity_score_column] = self.propensity_score_model.predict_proba(
self._observed_common_causes
)[:, 1]
else:
# check if user provides the propensity score column
if self.propensity_score_column not in self._data.columns:
if self.propensity_score_model is None:
raise ValueError(
f"""Propensity score column {self.propensity_score_column} does not exist, nor does a propensity_model.
Please specify the column name that has your pre-computed propensity score, or a model to compute it."""
)
else:
try:
self._data[self.propensity_score_column] = self.propensity_score_model.predict_proba(
self._observed_common_causes
)[:, 1]
except NotFittedError:
raise NotFittedError("Please fit the propensity score model before calling predict_proba")
else:
self.logger.info(f"INFO: Using pre-computed propensity score in column {self.propensity_score_column}")
def construct_symbolic_estimator(self, estimand):
"""
A symbolic string that conveys what each estimator does.
For instance, linear regression is expressed as
y ~ bx + e
"""
raise NotImplementedError
def _estimate_effect(self):
"""
A custom estimator based on the way the propensity score estimates are to be used.
"""
raise NotImplementedError
| from typing import Any, List, Optional, Union
import numpy as np
import pandas as pd
from sklearn import linear_model
from sklearn.exceptions import NotFittedError
from dowhy.causal_estimator import CausalEstimator
from dowhy.causal_identifier import IdentifiedEstimand
class PropensityScoreEstimator(CausalEstimator):
"""
Base class for estimators that estimate effects based on propensity of
treatment assignment.
Supports additional parameters as listed below.
"""
def __init__(
self,
identified_estimand: IdentifiedEstimand,
test_significance: bool = False,
evaluate_effect_strength: bool = False,
confidence_intervals: bool = False,
num_null_simulations: int = CausalEstimator.DEFAULT_NUMBER_OF_SIMULATIONS_STAT_TEST,
num_simulations: int = CausalEstimator.DEFAULT_NUMBER_OF_SIMULATIONS_CI,
sample_size_fraction: int = CausalEstimator.DEFAULT_SAMPLE_SIZE_FRACTION,
confidence_level: float = CausalEstimator.DEFAULT_CONFIDENCE_LEVEL,
need_conditional_estimates: Union[bool, str] = "auto",
num_quantiles_to_discretize_cont_cols: int = CausalEstimator.NUM_QUANTILES_TO_DISCRETIZE_CONT_COLS,
propensity_score_model: Optional[Any] = None,
propensity_score_column: str = "propensity_score",
**kwargs,
):
"""
:param identified_estimand: probability expression
representing the target identified estimand to estimate.
:param test_significance: Binary flag or a string indicating whether to test significance and by which method. All estimators support test_significance="bootstrap" that estimates a p-value for the obtained estimate using the bootstrap method. Individual estimators can override this to support custom testing methods. The bootstrap method supports an optional parameter, num_null_simulations. If False, no testing is done. If True, significance of the estimate is tested using the custom method if available, otherwise by bootstrap.
:param evaluate_effect_strength: (Experimental) whether to evaluate the strength of effect
:param confidence_intervals: Binary flag or a string indicating whether the confidence intervals should be computed and which method should be used. All methods support estimation of confidence intervals using the bootstrap method by using the parameter confidence_intervals="bootstrap". The bootstrap method takes in two arguments (num_simulations and sample_size_fraction) that can be optionally specified in the params dictionary. Estimators may also override this to implement their own confidence interval method. If this parameter is False, no confidence intervals are computed. If True, confidence intervals are computed by the estimator's specific method if available, otherwise through bootstrap
:param num_null_simulations: The number of simulations for testing the
statistical significance of the estimator
:param num_simulations: The number of simulations for finding the
confidence interval (and/or standard error) for a estimate
:param sample_size_fraction: The size of the sample for the bootstrap
estimator
:param confidence_level: The confidence level of the confidence
interval estimate
:param need_conditional_estimates: Boolean flag indicating whether
conditional estimates should be computed. Defaults to True if
there are effect modifiers in the graph
:param num_quantiles_to_discretize_cont_cols: The number of quantiles
into which a numeric effect modifier is split, to enable
estimation of conditional treatment effect over it.
:param propensity_score_model: Model used to compute propensity score.
Can be any classification model that supports fit() and
predict_proba() methods. If None, LogisticRegression is used.
:param propensity_score_column: Column name that stores the
propensity score. Default='propensity_score'
:param kwargs: (optional) Additional estimator-specific parameters
"""
super().__init__(
identified_estimand=identified_estimand,
test_significance=test_significance,
evaluate_effect_strength=evaluate_effect_strength,
confidence_intervals=confidence_intervals,
num_null_simulations=num_null_simulations,
num_simulations=num_simulations,
sample_size_fraction=sample_size_fraction,
confidence_level=confidence_level,
need_conditional_estimates=need_conditional_estimates,
num_quantiles_to_discretize_cont_cols=num_quantiles_to_discretize_cont_cols,
propensity_score_model=propensity_score_model,
propensity_score_column=propensity_score_column,
**kwargs,
)
# Enable the user to pass params for a custom propensity model
self.propensity_score_model = propensity_score_model
self.propensity_score_column = propensity_score_column
def fit(
self,
data: pd.DataFrame,
treatment_name: str,
outcome_name: str,
effect_modifier_names: Optional[List[str]] = None,
):
"""
Fits the estimator with data for effect estimation
:param data: data frame containing the data
:param treatment: name of the treatment variable
:param outcome: name of the outcome variable
:param effect_modifiers: Variables on which to compute separate
effects, or return a heterogeneous effect function. Not all
methods support this currently.
"""
self._set_data(data, treatment_name, outcome_name)
self._set_effect_modifiers(effect_modifier_names)
self.logger.debug("Back-door variables used:" + ",".join(self._target_estimand.get_backdoor_variables()))
self._observed_common_causes_names = self._target_estimand.get_backdoor_variables()
if self._observed_common_causes_names:
self._observed_common_causes = self._data[self._observed_common_causes_names]
# Convert the categorical variables into dummy/indicator variables
# Basically, this gives a one hot encoding for each category
# The first category is taken to be the base line.
self._observed_common_causes = pd.get_dummies(self._observed_common_causes, drop_first=True)
else:
self._observed_common_causes = None
error_msg = "No common causes/confounders present. Propensity score based methods are not applicable"
self.logger.error(error_msg)
raise Exception(error_msg)
# Check if the treatment is one-dimensional
if len(self._treatment_name) > 1:
error_msg = str(self.__class__) + "cannot handle more than one treatment variable"
raise Exception(error_msg)
# Checking if the treatment is binary
treatment_values = self._data[self._treatment_name[0]].astype(int).unique()
if any([v not in [0, 1] for v in treatment_values]):
error_msg = "Propensity score methods are applicable only for binary treatments"
self.logger.error(error_msg)
raise Exception(error_msg)
if self.propensity_score_column not in self._data:
if self.propensity_score_model is None:
self.propensity_score_model = linear_model.LogisticRegression()
treatment_reshaped = np.ravel(self._treatment)
self.propensity_score_model.fit(self._observed_common_causes, treatment_reshaped)
return self
def estimate_propensity_score_column(self, data):
try:
data[self.propensity_score_column] = self.propensity_score_model.predict_proba(
self._observed_common_causes
)[:, 1]
except NotFittedError:
raise NotFittedError("Please fit the propensity score model before calling predict_proba")
def construct_symbolic_estimator(self, estimand):
"""
A symbolic string that conveys what each estimator does.
For instance, linear regression is expressed as
y ~ bx + e
"""
raise NotImplementedError
| andresmor-ms | 11c4e0dafd6e824eb81ad14262457d954ae61468 | affe0952f4aba6845247355c171565510c2c1673 | This method needs to be wrapped inside/moved to the fit method.
It was useful earlier to prevent re-fitting because we did not have an explicit fit method.
We need to support three usecases in the fit method:
1. user provides their own scores in the propensity_score_column of dataframe. In that case, fit is a no-operation.
2. user provides model. In that case, use the user's model to fit. If the user has provided an already fitted model, then it is their responsibility to not call `fit` for the estimator.
3. user does not provide model. In that case, use the default model to fit.
Specifically, I suggest the following logic: If user provides the propensity score column and that column exists in the data, then do not fit anything. If user provides column name but column does not exist, then we check if user has provided propensity score model. If user has provided model, then we set `self.propensity_score_model` to that class, otherwise logisticregression is default. After that, we fit the model.
Then in estimate_effect, we check again whether the propensity_score_column exists in the data (same check). If it does, we do nothing. If it does not, we call the self.propensity_score_model.predict_proba and fill the values of the propensity score column.
To summarize,
1. fit method fits the propensity model.
2. estimate effect method uses the fitted model to fill the propensity score, if not filled already.
all of the above logic can be obtained by moving/restructuring the code inside `refresh_propensity_score`.
Also, finally, recalculate_propensity_score variable can be removed.
| amit-sharma | 232 |
py-why/dowhy | 746 | Functional api/causal estimators | * Introduce `fit()` method to estimators.
* Refactor constructors to avoid using `*args` and `**kwargs` and have more explicit parameters.
* Refactor refuters and other parts of the code to use `fit()` and modify arguments to `estimate_effect()` | null | 2022-11-04 16:15:39+00:00 | 2022-12-03 17:07:53+00:00 | dowhy/causal_estimators/propensity_score_matching_estimator.py | import pandas as pd
from sklearn import linear_model
from sklearn.neighbors import NearestNeighbors
from dowhy.causal_estimator import CausalEstimate
from dowhy.causal_estimators.propensity_score_estimator import PropensityScoreEstimator
class PropensityScoreMatchingEstimator(PropensityScoreEstimator):
"""Estimate effect of treatment by finding matching treated and control
units based on propensity score.
Straightforward application of the back-door criterion.
For a list of standard args and kwargs, see documentation for
:class:`~dowhy.causal_estimator.CausalEstimator`.
Supports additional parameters as listed below.
"""
def __init__(
self,
*args,
propensity_score_model=None,
recalculate_propensity_score=True,
propensity_score_column="propensity_score",
**kwargs,
):
"""
:param propensity_score_model: Model used to compute propensity score.
Can be any classification model that supports fit() and
predict_proba() methods. If None, LogisticRegression is used.
:param recalculate_propensity_score: Whether the propensity score
should be estimated. To use pre-computed propensity scores,
set this value to False. Default=True.
:param propensity_score_column: Column name that stores the
propensity score. Default='propensity_score'
"""
super().__init__(
*args,
propensity_score_model=propensity_score_model,
recalculate_propensity_score=recalculate_propensity_score,
propensity_score_column=propensity_score_column,
**kwargs,
)
self.logger.info("INFO: Using Propensity Score Matching Estimator")
self.symbolic_estimator = self.construct_symbolic_estimator(self._target_estimand)
self.logger.info(self.symbolic_estimator)
def _estimate_effect(self):
self._refresh_propensity_score()
# this assumes a binary treatment regime
treated = self._data.loc[self._data[self._treatment_name[0]] == 1]
control = self._data.loc[self._data[self._treatment_name[0]] == 0]
# TODO remove neighbors that are more than a given radius apart
# estimate ATT on treated by summing over difference between matched neighbors
control_neighbors = NearestNeighbors(n_neighbors=1, algorithm="ball_tree").fit(
control[self.propensity_score_column].values.reshape(-1, 1)
)
distances, indices = control_neighbors.kneighbors(treated[self.propensity_score_column].values.reshape(-1, 1))
self.logger.debug("distances:")
self.logger.debug(distances)
att = 0
numtreatedunits = treated.shape[0]
for i in range(numtreatedunits):
treated_outcome = treated.iloc[i][self._outcome_name].item()
control_outcome = control.iloc[indices[i]][self._outcome_name].item()
att += treated_outcome - control_outcome
att /= numtreatedunits
# Now computing ATC
treated_neighbors = NearestNeighbors(n_neighbors=1, algorithm="ball_tree").fit(
treated[self.propensity_score_column].values.reshape(-1, 1)
)
distances, indices = treated_neighbors.kneighbors(control[self.propensity_score_column].values.reshape(-1, 1))
atc = 0
numcontrolunits = control.shape[0]
for i in range(numcontrolunits):
control_outcome = control.iloc[i][self._outcome_name].item()
treated_outcome = treated.iloc[indices[i]][self._outcome_name].item()
atc += treated_outcome - control_outcome
atc /= numcontrolunits
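        # The ATE below is the sample-size-weighted average of ATT and ATC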
if self._target_units == "att":
est = att
elif self._target_units == "atc":
est = atc
elif self._target_units == "ate":
est = (att * numtreatedunits + atc * numcontrolunits) / (numtreatedunits + numcontrolunits)
else:
raise ValueError("Target units string value not supported")
estimate = CausalEstimate(
estimate=est,
control_value=self._control_value,
treatment_value=self._treatment_value,
target_estimand=self._target_estimand,
realized_estimand_expr=self.symbolic_estimator,
propensity_scores=self._data[self.propensity_score_column],
)
return estimate
def construct_symbolic_estimator(self, estimand):
expr = "b: " + ", ".join(estimand.outcome_variable) + "~"
# TODO -- fix: we are actually conditioning on positive treatment (d=1)
var_list = estimand.treatment_variable + estimand.get_backdoor_variables()
expr += "+".join(var_list)
return expr
| from typing import Any, List, Optional, Union
import pandas as pd
from sklearn.neighbors import NearestNeighbors
from dowhy.causal_estimator import CausalEstimate, CausalEstimator
from dowhy.causal_estimators.propensity_score_estimator import PropensityScoreEstimator
from dowhy.causal_identifier import IdentifiedEstimand
class PropensityScoreMatchingEstimator(PropensityScoreEstimator):
"""Estimate effect of treatment by finding matching treated and control
units based on propensity score.
Straightforward application of the back-door criterion.
Supports additional parameters as listed below.
"""
def __init__(
self,
identified_estimand: IdentifiedEstimand,
test_significance: bool = False,
evaluate_effect_strength: bool = False,
confidence_intervals: bool = False,
num_null_simulations: int = CausalEstimator.DEFAULT_NUMBER_OF_SIMULATIONS_STAT_TEST,
num_simulations: int = CausalEstimator.DEFAULT_NUMBER_OF_SIMULATIONS_CI,
sample_size_fraction: int = CausalEstimator.DEFAULT_SAMPLE_SIZE_FRACTION,
confidence_level: float = CausalEstimator.DEFAULT_CONFIDENCE_LEVEL,
need_conditional_estimates: Union[bool, str] = "auto",
num_quantiles_to_discretize_cont_cols: int = CausalEstimator.NUM_QUANTILES_TO_DISCRETIZE_CONT_COLS,
propensity_score_model: Optional[Any] = None,
propensity_score_column: str = "propensity_score",
**kwargs,
):
"""
:param identified_estimand: probability expression
representing the target identified estimand to estimate.
:param test_significance: Binary flag or a string indicating whether to test significance and by which method. All estimators support test_significance="bootstrap" that estimates a p-value for the obtained estimate using the bootstrap method. Individual estimators can override this to support custom testing methods. The bootstrap method supports an optional parameter, num_null_simulations. If False, no testing is done. If True, significance of the estimate is tested using the custom method if available, otherwise by bootstrap.
:param evaluate_effect_strength: (Experimental) whether to evaluate the strength of effect
:param confidence_intervals: Binary flag or a string indicating whether the confidence intervals should be computed and which method should be used. All methods support estimation of confidence intervals using the bootstrap method by using the parameter confidence_intervals="bootstrap". The bootstrap method takes in two arguments (num_simulations and sample_size_fraction) that can be optionally specified in the params dictionary. Estimators may also override this to implement their own confidence interval method. If this parameter is False, no confidence intervals are computed. If True, confidence intervals are computed by the estimator's specific method if available, otherwise through bootstrap
:param num_null_simulations: The number of simulations for testing the
statistical significance of the estimator
:param num_simulations: The number of simulations for finding the
            confidence interval (and/or standard error) for an estimate
:param sample_size_fraction: The size of the sample for the bootstrap
estimator
:param confidence_level: The confidence level of the confidence
interval estimate
:param need_conditional_estimates: Boolean flag indicating whether
conditional estimates should be computed. Defaults to True if
there are effect modifiers in the graph
:param num_quantiles_to_discretize_cont_cols: The number of quantiles
into which a numeric effect modifier is split, to enable
estimation of conditional treatment effect over it.
:param propensity_score_model: Model used to compute propensity score.
Can be any classification model that supports fit() and
predict_proba() methods. If None, LogisticRegression is used.
:param propensity_score_column: Column name that stores the
propensity score. Default='propensity_score'
:param kwargs: (optional) Additional estimator-specific parameters
"""
super().__init__(
identified_estimand=identified_estimand,
test_significance=test_significance,
evaluate_effect_strength=evaluate_effect_strength,
confidence_intervals=confidence_intervals,
num_null_simulations=num_null_simulations,
num_simulations=num_simulations,
sample_size_fraction=sample_size_fraction,
confidence_level=confidence_level,
need_conditional_estimates=need_conditional_estimates,
num_quantiles_to_discretize_cont_cols=num_quantiles_to_discretize_cont_cols,
propensity_score_model=propensity_score_model,
propensity_score_column=propensity_score_column,
**kwargs,
)
self.logger.info("INFO: Using Propensity Score Matching Estimator")
def fit(
self,
data: pd.DataFrame,
treatment_name: str,
outcome_name: str,
effect_modifier_names: Optional[List[str]] = None,
):
"""
        Fits the estimator with data for effect estimation.
        :param data: data frame containing the data
        :param treatment_name: name of the treatment variable
        :param outcome_name: name of the outcome variable
        :param effect_modifier_names: Variables on which to compute separate
effects, or return a heterogeneous effect function. Not all
methods support this currently.
"""
super().fit(data, treatment_name, outcome_name, effect_modifier_names=effect_modifier_names)
self.symbolic_estimator = self.construct_symbolic_estimator(self._target_estimand)
self.logger.info(self.symbolic_estimator)
return self
def estimate_effect(
        self, data: Optional[pd.DataFrame] = None, treatment_value: Any = 1, control_value: Any = 0, target_units=None, **_
):
if data is None:
data = self._data
self._target_units = target_units
self._treatment_value = treatment_value
self._control_value = control_value
if self.propensity_score_column not in data:
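            # Lazily fill the scores using the propensity model fitted in fit()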
self.estimate_propensity_score_column(data)
# this assumes a binary treatment regime
treated = data.loc[data[self._treatment_name[0]] == 1]
control = data.loc[data[self._treatment_name[0]] == 0]
# TODO remove neighbors that are more than a given radius apart
# estimate ATT on treated by summing over difference between matched neighbors
control_neighbors = NearestNeighbors(n_neighbors=1, algorithm="ball_tree").fit(
control[self.propensity_score_column].values.reshape(-1, 1)
)
distances, indices = control_neighbors.kneighbors(treated[self.propensity_score_column].values.reshape(-1, 1))
self.logger.debug("distances:")
self.logger.debug(distances)
att = 0
numtreatedunits = treated.shape[0]
for i in range(numtreatedunits):
treated_outcome = treated.iloc[i][self._outcome_name].item()
control_outcome = control.iloc[indices[i]][self._outcome_name].item()
att += treated_outcome - control_outcome
att /= numtreatedunits
# Now computing ATC
treated_neighbors = NearestNeighbors(n_neighbors=1, algorithm="ball_tree").fit(
treated[self.propensity_score_column].values.reshape(-1, 1)
)
distances, indices = treated_neighbors.kneighbors(control[self.propensity_score_column].values.reshape(-1, 1))
atc = 0
numcontrolunits = control.shape[0]
for i in range(numcontrolunits):
control_outcome = control.iloc[i][self._outcome_name].item()
treated_outcome = treated.iloc[indices[i]][self._outcome_name].item()
atc += treated_outcome - control_outcome
atc /= numcontrolunits
if target_units == "att":
est = att
elif target_units == "atc":
est = atc
elif target_units == "ate":
est = (att * numtreatedunits + atc * numcontrolunits) / (numtreatedunits + numcontrolunits)
else:
raise ValueError("Target units string value not supported")
estimate = CausalEstimate(
estimate=est,
control_value=control_value,
treatment_value=treatment_value,
target_estimand=self._target_estimand,
realized_estimand_expr=self.symbolic_estimator,
propensity_scores=data[self.propensity_score_column],
)
estimate.add_estimator(self)
return estimate
def construct_symbolic_estimator(self, estimand):
expr = "b: " + ", ".join(estimand.outcome_variable) + "~"
# TODO -- fix: we are actually conditioning on positive treatment (d=1)
var_list = estimand.treatment_variable + estimand.get_backdoor_variables()
expr += "+".join(var_list)
return expr
| andresmor-ms | 11c4e0dafd6e824eb81ad14262457d954ae61468 | affe0952f4aba6845247355c171565510c2c1673 | The docstring line below, "For a list of standard args and kwargs", can be removed. | amit-sharma | 233
py-why/dowhy | 746 | Functional api/causal estimators | * Introduce `fit()` method to estimators.
* Refactor constructors to avoid using `*args` and `**kwargs` and have more explicit parameters.
* Refactor refuters and other parts of the code to use `fit()` and modify arguments to `estimate_effect()` | null | 2022-11-04 16:15:39+00:00 | 2022-12-03 17:07:53+00:00 | dowhy/causal_estimators/propensity_score_matching_estimator.py | import pandas as pd
from sklearn import linear_model
from sklearn.neighbors import NearestNeighbors
from dowhy.causal_estimator import CausalEstimate
from dowhy.causal_estimators.propensity_score_estimator import PropensityScoreEstimator
class PropensityScoreMatchingEstimator(PropensityScoreEstimator):
"""Estimate effect of treatment by finding matching treated and control
units based on propensity score.
Straightforward application of the back-door criterion.
For a list of standard args and kwargs, see documentation for
:class:`~dowhy.causal_estimator.CausalEstimator`.
Supports additional parameters as listed below.
"""
def __init__(
self,
*args,
propensity_score_model=None,
recalculate_propensity_score=True,
propensity_score_column="propensity_score",
**kwargs,
):
"""
:param propensity_score_model: Model used to compute propensity score.
Can be any classification model that supports fit() and
predict_proba() methods. If None, LogisticRegression is used.
:param recalculate_propensity_score: Whether the propensity score
should be estimated. To use pre-computed propensity scores,
set this value to False. Default=True.
:param propensity_score_column: Column name that stores the
propensity score. Default='propensity_score'
"""
super().__init__(
*args,
propensity_score_model=propensity_score_model,
recalculate_propensity_score=recalculate_propensity_score,
propensity_score_column=propensity_score_column,
**kwargs,
)
self.logger.info("INFO: Using Propensity Score Matching Estimator")
self.symbolic_estimator = self.construct_symbolic_estimator(self._target_estimand)
self.logger.info(self.symbolic_estimator)
def _estimate_effect(self):
self._refresh_propensity_score()
# this assumes a binary treatment regime
treated = self._data.loc[self._data[self._treatment_name[0]] == 1]
control = self._data.loc[self._data[self._treatment_name[0]] == 0]
# TODO remove neighbors that are more than a given radius apart
# estimate ATT on treated by summing over difference between matched neighbors
control_neighbors = NearestNeighbors(n_neighbors=1, algorithm="ball_tree").fit(
control[self.propensity_score_column].values.reshape(-1, 1)
)
distances, indices = control_neighbors.kneighbors(treated[self.propensity_score_column].values.reshape(-1, 1))
self.logger.debug("distances:")
self.logger.debug(distances)
att = 0
numtreatedunits = treated.shape[0]
for i in range(numtreatedunits):
treated_outcome = treated.iloc[i][self._outcome_name].item()
control_outcome = control.iloc[indices[i]][self._outcome_name].item()
att += treated_outcome - control_outcome
att /= numtreatedunits
# Now computing ATC
treated_neighbors = NearestNeighbors(n_neighbors=1, algorithm="ball_tree").fit(
treated[self.propensity_score_column].values.reshape(-1, 1)
)
distances, indices = treated_neighbors.kneighbors(control[self.propensity_score_column].values.reshape(-1, 1))
atc = 0
numcontrolunits = control.shape[0]
for i in range(numcontrolunits):
control_outcome = control.iloc[i][self._outcome_name].item()
treated_outcome = treated.iloc[indices[i]][self._outcome_name].item()
atc += treated_outcome - control_outcome
atc /= numcontrolunits
if self._target_units == "att":
est = att
elif self._target_units == "atc":
est = atc
elif self._target_units == "ate":
est = (att * numtreatedunits + atc * numcontrolunits) / (numtreatedunits + numcontrolunits)
else:
raise ValueError("Target units string value not supported")
estimate = CausalEstimate(
estimate=est,
control_value=self._control_value,
treatment_value=self._treatment_value,
target_estimand=self._target_estimand,
realized_estimand_expr=self.symbolic_estimator,
propensity_scores=self._data[self.propensity_score_column],
)
return estimate
def construct_symbolic_estimator(self, estimand):
expr = "b: " + ", ".join(estimand.outcome_variable) + "~"
# TODO -- fix: we are actually conditioning on positive treatment (d=1)
var_list = estimand.treatment_variable + estimand.get_backdoor_variables()
expr += "+".join(var_list)
return expr
| from typing import Any, List, Optional, Union
import pandas as pd
from sklearn.neighbors import NearestNeighbors
from dowhy.causal_estimator import CausalEstimate, CausalEstimator
from dowhy.causal_estimators.propensity_score_estimator import PropensityScoreEstimator
from dowhy.causal_identifier import IdentifiedEstimand
class PropensityScoreMatchingEstimator(PropensityScoreEstimator):
"""Estimate effect of treatment by finding matching treated and control
units based on propensity score.
Straightforward application of the back-door criterion.
Supports additional parameters as listed below.
"""
def __init__(
self,
identified_estimand: IdentifiedEstimand,
test_significance: bool = False,
evaluate_effect_strength: bool = False,
confidence_intervals: bool = False,
num_null_simulations: int = CausalEstimator.DEFAULT_NUMBER_OF_SIMULATIONS_STAT_TEST,
num_simulations: int = CausalEstimator.DEFAULT_NUMBER_OF_SIMULATIONS_CI,
sample_size_fraction: int = CausalEstimator.DEFAULT_SAMPLE_SIZE_FRACTION,
confidence_level: float = CausalEstimator.DEFAULT_CONFIDENCE_LEVEL,
need_conditional_estimates: Union[bool, str] = "auto",
num_quantiles_to_discretize_cont_cols: int = CausalEstimator.NUM_QUANTILES_TO_DISCRETIZE_CONT_COLS,
propensity_score_model: Optional[Any] = None,
propensity_score_column: str = "propensity_score",
**kwargs,
):
"""
:param identified_estimand: probability expression
representing the target identified estimand to estimate.
:param test_significance: Binary flag or a string indicating whether to test significance and by which method. All estimators support test_significance="bootstrap" that estimates a p-value for the obtained estimate using the bootstrap method. Individual estimators can override this to support custom testing methods. The bootstrap method supports an optional parameter, num_null_simulations. If False, no testing is done. If True, significance of the estimate is tested using the custom method if available, otherwise by bootstrap.
:param evaluate_effect_strength: (Experimental) whether to evaluate the strength of effect
:param confidence_intervals: Binary flag or a string indicating whether the confidence intervals should be computed and which method should be used. All methods support estimation of confidence intervals using the bootstrap method by using the parameter confidence_intervals="bootstrap". The bootstrap method takes in two arguments (num_simulations and sample_size_fraction) that can be optionally specified in the params dictionary. Estimators may also override this to implement their own confidence interval method. If this parameter is False, no confidence intervals are computed. If True, confidence intervals are computed by the estimator's specific method if available, otherwise through bootstrap
:param num_null_simulations: The number of simulations for testing the
statistical significance of the estimator
:param num_simulations: The number of simulations for finding the
            confidence interval (and/or standard error) for an estimate
:param sample_size_fraction: The size of the sample for the bootstrap
estimator
:param confidence_level: The confidence level of the confidence
interval estimate
:param need_conditional_estimates: Boolean flag indicating whether
conditional estimates should be computed. Defaults to True if
there are effect modifiers in the graph
:param num_quantiles_to_discretize_cont_cols: The number of quantiles
into which a numeric effect modifier is split, to enable
estimation of conditional treatment effect over it.
:param propensity_score_model: Model used to compute propensity score.
Can be any classification model that supports fit() and
predict_proba() methods. If None, LogisticRegression is used.
:param propensity_score_column: Column name that stores the
propensity score. Default='propensity_score'
:param kwargs: (optional) Additional estimator-specific parameters
"""
super().__init__(
identified_estimand=identified_estimand,
test_significance=test_significance,
evaluate_effect_strength=evaluate_effect_strength,
confidence_intervals=confidence_intervals,
num_null_simulations=num_null_simulations,
num_simulations=num_simulations,
sample_size_fraction=sample_size_fraction,
confidence_level=confidence_level,
need_conditional_estimates=need_conditional_estimates,
num_quantiles_to_discretize_cont_cols=num_quantiles_to_discretize_cont_cols,
propensity_score_model=propensity_score_model,
propensity_score_column=propensity_score_column,
**kwargs,
)
self.logger.info("INFO: Using Propensity Score Matching Estimator")
def fit(
self,
data: pd.DataFrame,
treatment_name: str,
outcome_name: str,
effect_modifier_names: Optional[List[str]] = None,
):
"""
        Fits the estimator with data for effect estimation.
        :param data: data frame containing the data
        :param treatment_name: name of the treatment variable
        :param outcome_name: name of the outcome variable
        :param effect_modifier_names: Variables on which to compute separate
effects, or return a heterogeneous effect function. Not all
methods support this currently.
"""
super().fit(data, treatment_name, outcome_name, effect_modifier_names=effect_modifier_names)
self.symbolic_estimator = self.construct_symbolic_estimator(self._target_estimand)
self.logger.info(self.symbolic_estimator)
return self
def estimate_effect(
        self, data: Optional[pd.DataFrame] = None, treatment_value: Any = 1, control_value: Any = 0, target_units=None, **_
):
if data is None:
data = self._data
self._target_units = target_units
self._treatment_value = treatment_value
self._control_value = control_value
if self.propensity_score_column not in data:
self.estimate_propensity_score_column(data)
# this assumes a binary treatment regime
treated = data.loc[data[self._treatment_name[0]] == 1]
control = data.loc[data[self._treatment_name[0]] == 0]
# TODO remove neighbors that are more than a given radius apart
# estimate ATT on treated by summing over difference between matched neighbors
control_neighbors = NearestNeighbors(n_neighbors=1, algorithm="ball_tree").fit(
control[self.propensity_score_column].values.reshape(-1, 1)
)
distances, indices = control_neighbors.kneighbors(treated[self.propensity_score_column].values.reshape(-1, 1))
self.logger.debug("distances:")
self.logger.debug(distances)
att = 0
numtreatedunits = treated.shape[0]
for i in range(numtreatedunits):
treated_outcome = treated.iloc[i][self._outcome_name].item()
control_outcome = control.iloc[indices[i]][self._outcome_name].item()
att += treated_outcome - control_outcome
att /= numtreatedunits
# Now computing ATC
treated_neighbors = NearestNeighbors(n_neighbors=1, algorithm="ball_tree").fit(
treated[self.propensity_score_column].values.reshape(-1, 1)
)
distances, indices = treated_neighbors.kneighbors(control[self.propensity_score_column].values.reshape(-1, 1))
atc = 0
numcontrolunits = control.shape[0]
for i in range(numcontrolunits):
control_outcome = control.iloc[i][self._outcome_name].item()
treated_outcome = treated.iloc[indices[i]][self._outcome_name].item()
atc += treated_outcome - control_outcome
atc /= numcontrolunits
if target_units == "att":
est = att
elif target_units == "atc":
est = atc
elif target_units == "ate":
est = (att * numtreatedunits + atc * numcontrolunits) / (numtreatedunits + numcontrolunits)
else:
raise ValueError("Target units string value not supported")
estimate = CausalEstimate(
estimate=est,
control_value=control_value,
treatment_value=treatment_value,
target_estimand=self._target_estimand,
realized_estimand_expr=self.symbolic_estimator,
propensity_scores=data[self.propensity_score_column],
)
estimate.add_estimator(self)
return estimate
def construct_symbolic_estimator(self, estimand):
expr = "b: " + ", ".join(estimand.outcome_variable) + "~"
# TODO -- fix: we are actually conditioning on positive treatment (d=1)
var_list = estimand.treatment_variable + estimand.get_backdoor_variables()
expr += "+".join(var_list)
return expr
| andresmor-ms | 11c4e0dafd6e824eb81ad14262457d954ae61468 | affe0952f4aba6845247355c171565510c2c1673 | We do not need the recalculate_propensity_score parameter here either, now that fit and estimate are separate steps. | amit-sharma | 234
py-why/dowhy | 746 | Functional api/causal estimators | * Introduce `fit()` method to estimators.
* Refactor constructors to avoid using `*args` and `**kwargs` and have more explicit parameters.
* Refactor refuters and other parts of the code to use `fit()` and modify arguments to `estimate_effect()` | null | 2022-11-04 16:15:39+00:00 | 2022-12-03 17:07:53+00:00 | dowhy/causal_estimators/propensity_score_matching_estimator.py | import pandas as pd
from sklearn import linear_model
from sklearn.neighbors import NearestNeighbors
from dowhy.causal_estimator import CausalEstimate
from dowhy.causal_estimators.propensity_score_estimator import PropensityScoreEstimator
class PropensityScoreMatchingEstimator(PropensityScoreEstimator):
"""Estimate effect of treatment by finding matching treated and control
units based on propensity score.
Straightforward application of the back-door criterion.
For a list of standard args and kwargs, see documentation for
:class:`~dowhy.causal_estimator.CausalEstimator`.
Supports additional parameters as listed below.
"""
def __init__(
self,
*args,
propensity_score_model=None,
recalculate_propensity_score=True,
propensity_score_column="propensity_score",
**kwargs,
):
"""
:param propensity_score_model: Model used to compute propensity score.
Can be any classification model that supports fit() and
predict_proba() methods. If None, LogisticRegression is used.
:param recalculate_propensity_score: Whether the propensity score
should be estimated. To use pre-computed propensity scores,
set this value to False. Default=True.
:param propensity_score_column: Column name that stores the
propensity score. Default='propensity_score'
"""
super().__init__(
*args,
propensity_score_model=propensity_score_model,
recalculate_propensity_score=recalculate_propensity_score,
propensity_score_column=propensity_score_column,
**kwargs,
)
self.logger.info("INFO: Using Propensity Score Matching Estimator")
self.symbolic_estimator = self.construct_symbolic_estimator(self._target_estimand)
self.logger.info(self.symbolic_estimator)
def _estimate_effect(self):
self._refresh_propensity_score()
# this assumes a binary treatment regime
treated = self._data.loc[self._data[self._treatment_name[0]] == 1]
control = self._data.loc[self._data[self._treatment_name[0]] == 0]
# TODO remove neighbors that are more than a given radius apart
# estimate ATT on treated by summing over difference between matched neighbors
control_neighbors = NearestNeighbors(n_neighbors=1, algorithm="ball_tree").fit(
control[self.propensity_score_column].values.reshape(-1, 1)
)
distances, indices = control_neighbors.kneighbors(treated[self.propensity_score_column].values.reshape(-1, 1))
self.logger.debug("distances:")
self.logger.debug(distances)
att = 0
numtreatedunits = treated.shape[0]
for i in range(numtreatedunits):
treated_outcome = treated.iloc[i][self._outcome_name].item()
control_outcome = control.iloc[indices[i]][self._outcome_name].item()
att += treated_outcome - control_outcome
att /= numtreatedunits
# Now computing ATC
treated_neighbors = NearestNeighbors(n_neighbors=1, algorithm="ball_tree").fit(
treated[self.propensity_score_column].values.reshape(-1, 1)
)
distances, indices = treated_neighbors.kneighbors(control[self.propensity_score_column].values.reshape(-1, 1))
atc = 0
numcontrolunits = control.shape[0]
for i in range(numcontrolunits):
control_outcome = control.iloc[i][self._outcome_name].item()
treated_outcome = treated.iloc[indices[i]][self._outcome_name].item()
atc += treated_outcome - control_outcome
atc /= numcontrolunits
if self._target_units == "att":
est = att
elif self._target_units == "atc":
est = atc
elif self._target_units == "ate":
est = (att * numtreatedunits + atc * numcontrolunits) / (numtreatedunits + numcontrolunits)
else:
raise ValueError("Target units string value not supported")
estimate = CausalEstimate(
estimate=est,
control_value=self._control_value,
treatment_value=self._treatment_value,
target_estimand=self._target_estimand,
realized_estimand_expr=self.symbolic_estimator,
propensity_scores=self._data[self.propensity_score_column],
)
return estimate
def construct_symbolic_estimator(self, estimand):
expr = "b: " + ", ".join(estimand.outcome_variable) + "~"
# TODO -- fix: we are actually conditioning on positive treatment (d=1)
var_list = estimand.treatment_variable + estimand.get_backdoor_variables()
expr += "+".join(var_list)
return expr
| from typing import Any, List, Optional, Union
import pandas as pd
from sklearn.neighbors import NearestNeighbors
from dowhy.causal_estimator import CausalEstimate, CausalEstimator
from dowhy.causal_estimators.propensity_score_estimator import PropensityScoreEstimator
from dowhy.causal_identifier import IdentifiedEstimand
class PropensityScoreMatchingEstimator(PropensityScoreEstimator):
"""Estimate effect of treatment by finding matching treated and control
units based on propensity score.
Straightforward application of the back-door criterion.
Supports additional parameters as listed below.
"""
def __init__(
self,
identified_estimand: IdentifiedEstimand,
test_significance: bool = False,
evaluate_effect_strength: bool = False,
confidence_intervals: bool = False,
num_null_simulations: int = CausalEstimator.DEFAULT_NUMBER_OF_SIMULATIONS_STAT_TEST,
num_simulations: int = CausalEstimator.DEFAULT_NUMBER_OF_SIMULATIONS_CI,
sample_size_fraction: int = CausalEstimator.DEFAULT_SAMPLE_SIZE_FRACTION,
confidence_level: float = CausalEstimator.DEFAULT_CONFIDENCE_LEVEL,
need_conditional_estimates: Union[bool, str] = "auto",
num_quantiles_to_discretize_cont_cols: int = CausalEstimator.NUM_QUANTILES_TO_DISCRETIZE_CONT_COLS,
propensity_score_model: Optional[Any] = None,
propensity_score_column: str = "propensity_score",
**kwargs,
):
"""
:param identified_estimand: probability expression
representing the target identified estimand to estimate.
:param test_significance: Binary flag or a string indicating whether to test significance and by which method. All estimators support test_significance="bootstrap" that estimates a p-value for the obtained estimate using the bootstrap method. Individual estimators can override this to support custom testing methods. The bootstrap method supports an optional parameter, num_null_simulations. If False, no testing is done. If True, significance of the estimate is tested using the custom method if available, otherwise by bootstrap.
:param evaluate_effect_strength: (Experimental) whether to evaluate the strength of effect
:param confidence_intervals: Binary flag or a string indicating whether the confidence intervals should be computed and which method should be used. All methods support estimation of confidence intervals using the bootstrap method by using the parameter confidence_intervals="bootstrap". The bootstrap method takes in two arguments (num_simulations and sample_size_fraction) that can be optionally specified in the params dictionary. Estimators may also override this to implement their own confidence interval method. If this parameter is False, no confidence intervals are computed. If True, confidence intervals are computed by the estimator's specific method if available, otherwise through bootstrap
:param num_null_simulations: The number of simulations for testing the
statistical significance of the estimator
:param num_simulations: The number of simulations for finding the
            confidence interval (and/or standard error) for an estimate
:param sample_size_fraction: The size of the sample for the bootstrap
estimator
:param confidence_level: The confidence level of the confidence
interval estimate
:param need_conditional_estimates: Boolean flag indicating whether
conditional estimates should be computed. Defaults to True if
there are effect modifiers in the graph
:param num_quantiles_to_discretize_cont_cols: The number of quantiles
into which a numeric effect modifier is split, to enable
estimation of conditional treatment effect over it.
:param propensity_score_model: Model used to compute propensity score.
Can be any classification model that supports fit() and
predict_proba() methods. If None, LogisticRegression is used.
:param propensity_score_column: Column name that stores the
propensity score. Default='propensity_score'
:param kwargs: (optional) Additional estimator-specific parameters
"""
super().__init__(
identified_estimand=identified_estimand,
test_significance=test_significance,
evaluate_effect_strength=evaluate_effect_strength,
confidence_intervals=confidence_intervals,
num_null_simulations=num_null_simulations,
num_simulations=num_simulations,
sample_size_fraction=sample_size_fraction,
confidence_level=confidence_level,
need_conditional_estimates=need_conditional_estimates,
num_quantiles_to_discretize_cont_cols=num_quantiles_to_discretize_cont_cols,
propensity_score_model=propensity_score_model,
propensity_score_column=propensity_score_column,
**kwargs,
)
self.logger.info("INFO: Using Propensity Score Matching Estimator")
def fit(
self,
data: pd.DataFrame,
treatment_name: str,
outcome_name: str,
effect_modifier_names: Optional[List[str]] = None,
):
"""
        Fits the estimator with data for effect estimation.
        :param data: data frame containing the data
        :param treatment_name: name of the treatment variable
        :param outcome_name: name of the outcome variable
        :param effect_modifier_names: Variables on which to compute separate
effects, or return a heterogeneous effect function. Not all
methods support this currently.
"""
super().fit(data, treatment_name, outcome_name, effect_modifier_names=effect_modifier_names)
self.symbolic_estimator = self.construct_symbolic_estimator(self._target_estimand)
self.logger.info(self.symbolic_estimator)
return self
def estimate_effect(
        self, data: Optional[pd.DataFrame] = None, treatment_value: Any = 1, control_value: Any = 0, target_units=None, **_
):
if data is None:
data = self._data
self._target_units = target_units
self._treatment_value = treatment_value
self._control_value = control_value
if self.propensity_score_column not in data:
self.estimate_propensity_score_column(data)
# this assumes a binary treatment regime
treated = data.loc[data[self._treatment_name[0]] == 1]
control = data.loc[data[self._treatment_name[0]] == 0]
# TODO remove neighbors that are more than a given radius apart
# estimate ATT on treated by summing over difference between matched neighbors
control_neighbors = NearestNeighbors(n_neighbors=1, algorithm="ball_tree").fit(
control[self.propensity_score_column].values.reshape(-1, 1)
)
distances, indices = control_neighbors.kneighbors(treated[self.propensity_score_column].values.reshape(-1, 1))
self.logger.debug("distances:")
self.logger.debug(distances)
att = 0
numtreatedunits = treated.shape[0]
for i in range(numtreatedunits):
treated_outcome = treated.iloc[i][self._outcome_name].item()
control_outcome = control.iloc[indices[i]][self._outcome_name].item()
att += treated_outcome - control_outcome
att /= numtreatedunits
# Now computing ATC
treated_neighbors = NearestNeighbors(n_neighbors=1, algorithm="ball_tree").fit(
treated[self.propensity_score_column].values.reshape(-1, 1)
)
distances, indices = treated_neighbors.kneighbors(control[self.propensity_score_column].values.reshape(-1, 1))
atc = 0
numcontrolunits = control.shape[0]
for i in range(numcontrolunits):
control_outcome = control.iloc[i][self._outcome_name].item()
treated_outcome = treated.iloc[indices[i]][self._outcome_name].item()
atc += treated_outcome - control_outcome
atc /= numcontrolunits
if target_units == "att":
est = att
elif target_units == "atc":
est = atc
elif target_units == "ate":
est = (att * numtreatedunits + atc * numcontrolunits) / (numtreatedunits + numcontrolunits)
else:
raise ValueError("Target units string value not supported")
estimate = CausalEstimate(
estimate=est,
control_value=control_value,
treatment_value=treatment_value,
target_estimand=self._target_estimand,
realized_estimand_expr=self.symbolic_estimator,
propensity_scores=data[self.propensity_score_column],
)
estimate.add_estimator(self)
return estimate
def construct_symbolic_estimator(self, estimand):
expr = "b: " + ", ".join(estimand.outcome_variable) + "~"
# TODO -- fix: we are actually conditioning on positive treatment (d=1)
var_list = estimand.treatment_variable + estimand.get_backdoor_variables()
expr += "+".join(var_list)
return expr
| andresmor-ms | 11c4e0dafd6e824eb81ad14262457d954ae61468 | affe0952f4aba6845247355c171565510c2c1673 | The code that "refreshes" the propensity score should be inside the fit function.
That logic existed only because there was no fit method: the refresh method checks whether the model is already fitted and, if not, fits it. We can remove the refresh function and move its code into the fit method. | amit-sharma | 235 |
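To make the review concrete, here is a minimal end-to-end sketch of the refactored workflow for the matching estimator: construct the estimator from an identified estimand, call fit() with the data and column names, then call estimate_effect(). This is an illustration of the API under review, not part of the diff; the simulated dataset and all variable names are placeholders.

import dowhy.datasets
from dowhy import CausalModel
from dowhy.causal_estimators.propensity_score_matching_estimator import (
    PropensityScoreMatchingEstimator,
)

# Simulate a small dataset with a binary treatment (matching assumes one).
data = dowhy.datasets.linear_dataset(
    beta=10, num_common_causes=3, num_samples=1000, treatment_is_binary=True
)
model = CausalModel(
    data=data["df"],
    treatment=data["treatment_name"],
    outcome=data["outcome_name"],
    graph=data["gml_graph"],
)
identified_estimand = model.identify_effect(proceed_when_unidentifiable=True)

# New-style API: construct, fit, then estimate.
estimator = PropensityScoreMatchingEstimator(identified_estimand)
estimator.fit(
    data["df"],
    treatment_name=data["treatment_name"],
    outcome_name=data["outcome_name"],
)
estimate = estimator.estimate_effect(data["df"], target_units="att")
print(estimate.value)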
py-why/dowhy | 746 | Functional api/causal estimators | * Introduce `fit()` method to estimators.
* Refactor constructors to avoid using `*args` and `**kwargs` and have more explicit parameters.
* Refactor refuters and other parts of the code to use `fit()` and modify arguments to `estimate_effect()` | null | 2022-11-04 16:15:39+00:00 | 2022-12-03 17:07:53+00:00 | dowhy/causal_estimators/propensity_score_stratification_estimator.py | import pandas as pd
from sklearn import linear_model
from dowhy.causal_estimator import CausalEstimate
from dowhy.causal_estimators.propensity_score_estimator import PropensityScoreEstimator
class PropensityScoreStratificationEstimator(PropensityScoreEstimator):
"""Estimate effect of treatment by stratifying the data into bins with
identical common causes.
Straightforward application of the back-door criterion.
For a list of standard args and kwargs, see documentation for
:class:`~dowhy.causal_estimator.CausalEstimator`.
Supports additional parameters as listed below.
"""
def __init__(
self,
*args,
num_strata="auto",
clipping_threshold=10,
propensity_score_model=None,
recalculate_propensity_score=True,
propensity_score_column="propensity_score",
**kwargs,
):
"""
:param num_strata: Number of bins by which data will be stratified.
Default is automatically determined.
:param clipping_threshold: Minimum number of treated or control units
per stratum. Default=10
:param propensity_score_model: The model used to compute propensity
score. Can be any classification model that supports fit() and
predict_proba() methods. If None, a LogisticRegression model is
used by default.
:param recalculate_propensity_score: If true, force the estimator to
estimate the propensity score. To use pre-computed propensity
scores, set this value to False. Default=True
:param propensity_score_column: Column name that stores the propensity
score. Default='propensity_score'
"""
# Required to ensure that self.method_params contains all the information
# to create an object of this class
args_dict = kwargs
args_dict.update({"num_strata": num_strata, "clipping_threshold": clipping_threshold})
super().__init__(
*args,
propensity_score_model=propensity_score_model,
recalculate_propensity_score=recalculate_propensity_score,
propensity_score_column=propensity_score_column,
**args_dict,
)
self.logger.info("Using Propensity Score Stratification Estimator")
self.symbolic_estimator = self.construct_symbolic_estimator(self._target_estimand)
self.logger.info(self.symbolic_estimator)
# setting method-specific parameters
self.num_strata = num_strata
self.clipping_threshold = clipping_threshold
def _estimate_effect(self):
self._refresh_propensity_score()
clipped = None
# Infer the right strata based on clipping threshold
if self.num_strata == "auto":
# 0.5 because there are two values for the treatment
clipping_t = self.clipping_threshold
num_strata = 0.5 * self._data.shape[0] / clipping_t
# To be conservative and allow most strata to be included in the
# analysis
strata_found = False
while not strata_found:
self.logger.info("'num_strata' selected as {}".format(num_strata))
try:
clipped = self._get_strata(num_strata, self.clipping_threshold)
num_ret_strata = clipped.groupby(["strata"]).count().reset_index()
# At least half of the strata should be included in the analysis
if num_ret_strata.shape[0] >= 0.5 * num_strata:
strata_found = True
else:
num_strata = int(num_strata / 2)
self.logger.info(
f"Less than half the strata have at least {self.clipping_threshold} data points. Selecting a smaller number of strata."
)
if num_strata < 2:
raise ValueError(
"Not enough data to generate at least two strata. This error may be due to a high value of 'clipping_threshold'."
)
except ValueError:
self.logger.info(
"No strata found with at least {} data points. Selecting a smaller number of strata.".format(
self.clipping_threshold
)
)
num_strata = int(num_strata / 2)
if num_strata < 2:
raise ValueError(
"Not enough data to generate at least two strata. This error may be due to a high value of 'clipping_threshold'."
)
else:
clipped = self._get_strata(self.num_strata, self.clipping_threshold)
# sum weighted outcomes over all strata (weight by treated population)
weighted_outcomes = clipped.groupby("strata").agg(
{self._treatment_name[0]: ["sum"], "dbar": ["sum"], "d_y": ["sum"], "dbar_y": ["sum"]}
)
weighted_outcomes.columns = ["_".join(x) for x in weighted_outcomes.columns.to_numpy().ravel()]
treatment_sum_name = self._treatment_name[0] + "_sum"
control_sum_name = "dbar_sum"
weighted_outcomes["d_y_mean"] = weighted_outcomes["d_y_sum"] / weighted_outcomes[treatment_sum_name]
weighted_outcomes["dbar_y_mean"] = weighted_outcomes["dbar_y_sum"] / weighted_outcomes["dbar_sum"]
weighted_outcomes["effect"] = weighted_outcomes["d_y_mean"] - weighted_outcomes["dbar_y_mean"]
total_treatment_population = weighted_outcomes[treatment_sum_name].sum()
total_control_population = weighted_outcomes[control_sum_name].sum()
total_population = total_treatment_population + total_control_population
self.logger.debug(
"Total number of data points is {0}, including {1} from treatment and {2} from control.".format(
total_population, total_treatment_population, total_control_population
)
)
if self._target_units == "att":
est = (
weighted_outcomes["effect"] * weighted_outcomes[treatment_sum_name]
).sum() / total_treatment_population
elif self._target_units == "atc":
est = (weighted_outcomes["effect"] * weighted_outcomes[control_sum_name]).sum() / total_control_population
elif self._target_units == "ate":
est = (
weighted_outcomes["effect"]
* (weighted_outcomes[control_sum_name] + weighted_outcomes[treatment_sum_name])
).sum() / total_population
else:
raise ValueError("Target units string value not supported")
# TODO - how can we add additional information into the returned estimate?
# such as how much clipping was done, or per-strata info for debugging?
estimate = CausalEstimate(
estimate=est,
control_value=self._control_value,
treatment_value=self._treatment_value,
target_estimand=self._target_estimand,
realized_estimand_expr=self.symbolic_estimator,
propensity_scores=self._data[self.propensity_score_column],
)
return estimate
def _get_strata(self, num_strata, clipping_threshold):
# sort the dataframe by propensity score
# create a column 'strata' for each element that marks what strata it belongs to
num_rows = self._data[self._outcome_name].shape[0]
self._data["strata"] = (
(self._data[self.propensity_score_column].rank(ascending=True) / num_rows) * num_strata
).round(0)
# for each strata, count how many treated and control units there are
# throw away strata that have insufficient treatment or control
self._data["dbar"] = 1 - self._data[self._treatment_name[0]] # 1-Treatment
self._data["d_y"] = self._data[self._treatment_name[0]] * self._data[self._outcome_name]
self._data["dbar_y"] = self._data["dbar"] * self._data[self._outcome_name]
stratified = self._data.groupby("strata")
clipped = stratified.filter(
lambda strata: min(
strata.loc[strata[self._treatment_name[0]] == 1].shape[0],
strata.loc[strata[self._treatment_name[0]] == 0].shape[0],
)
> clipping_threshold
)
self.logger.debug(
"After using clipping_threshold={0}, here are the number of data points in each strata:\n {1}".format(
clipping_threshold, clipped.groupby(["strata", self._treatment_name[0]])[self._outcome_name].count()
)
)
if clipped.empty:
raise ValueError(
"Method requires strata with number of data points per treatment > clipping_threshold (={0}). No such strata exists. Consider decreasing 'num_strata' or 'clipping_threshold' parameters.".format(
clipping_threshold
)
)
return clipped
def construct_symbolic_estimator(self, estimand):
expr = "b: " + ",".join(estimand.outcome_variable) + "~"
# TODO -- fix: we are actually conditioning on positive treatment (d=1)
var_list = estimand.treatment_variable + estimand.get_backdoor_variables()
expr += "+".join(var_list)
return expr
| from typing import Any, List, Optional, Union
import pandas as pd
from dowhy.causal_estimator import CausalEstimate, CausalEstimator
from dowhy.causal_estimators.propensity_score_estimator import PropensityScoreEstimator
from dowhy.causal_identifier import IdentifiedEstimand
class PropensityScoreStratificationEstimator(PropensityScoreEstimator):
"""Estimate effect of treatment by stratifying the data into bins with
identical common causes.
Straightforward application of the back-door criterion.
Supports additional parameters as listed below.
"""
def __init__(
self,
identified_estimand: IdentifiedEstimand,
test_significance: bool = False,
evaluate_effect_strength: bool = False,
confidence_intervals: bool = False,
num_null_simulations: int = CausalEstimator.DEFAULT_NUMBER_OF_SIMULATIONS_STAT_TEST,
num_simulations: int = CausalEstimator.DEFAULT_NUMBER_OF_SIMULATIONS_CI,
sample_size_fraction: int = CausalEstimator.DEFAULT_SAMPLE_SIZE_FRACTION,
confidence_level: float = CausalEstimator.DEFAULT_CONFIDENCE_LEVEL,
need_conditional_estimates: Union[bool, str] = "auto",
num_quantiles_to_discretize_cont_cols: int = CausalEstimator.NUM_QUANTILES_TO_DISCRETIZE_CONT_COLS,
num_strata: Union[str, int] = "auto",
clipping_threshold: int = 10,
propensity_score_model: Optional[Any] = None,
propensity_score_column: str = "propensity_score",
**kwargs,
):
"""
:param identified_estimand: probability expression
representing the target identified estimand to estimate.
:param test_significance: Binary flag or a string indicating whether to test significance and by which method. All estimators support test_significance="bootstrap" that estimates a p-value for the obtained estimate using the bootstrap method. Individual estimators can override this to support custom testing methods. The bootstrap method supports an optional parameter, num_null_simulations. If False, no testing is done. If True, significance of the estimate is tested using the custom method if available, otherwise by bootstrap.
:param evaluate_effect_strength: (Experimental) whether to evaluate the strength of effect
:param confidence_intervals: Binary flag or a string indicating whether the confidence intervals should be computed and which method should be used. All methods support estimation of confidence intervals using the bootstrap method by using the parameter confidence_intervals="bootstrap". The bootstrap method takes in two arguments (num_simulations and sample_size_fraction) that can be optionally specified in the params dictionary. Estimators may also override this to implement their own confidence interval method. If this parameter is False, no confidence intervals are computed. If True, confidence intervals are computed by the estimator's specific method if available, otherwise through bootstrap.
:param num_null_simulations: The number of simulations for testing the
statistical significance of the estimator
:param num_simulations: The number of simulations for finding the
confidence interval (and/or standard error) for an estimate
:param sample_size_fraction: The size of the sample for the bootstrap
estimator
:param confidence_level: The confidence level of the confidence
interval estimate
:param need_conditional_estimates: Boolean flag indicating whether
conditional estimates should be computed. Defaults to True if
there are effect modifiers in the graph
:param num_quantiles_to_discretize_cont_cols: The number of quantiles
into which a numeric effect modifier is split, to enable
estimation of conditional treatment effect over it.
:param num_strata: Number of bins by which data will be stratified.
Default is automatically determined.
:param clipping_threshold: Minimum number of treated or control units
per stratum. Default=10
:param propensity_score_model: The model used to compute propensity
score. Can be any classification model that supports fit() and
predict_proba() methods. If None, a LogisticRegression model is
used by default.
:param propensity_score_column: Column name that stores the propensity
score. Default='propensity_score'
:param kwargs: (optional) Additional estimator-specific parameters
"""
super().__init__(
identified_estimand=identified_estimand,
test_significance=test_significance,
evaluate_effect_strength=evaluate_effect_strength,
confidence_intervals=confidence_intervals,
num_null_simulations=num_null_simulations,
num_simulations=num_simulations,
sample_size_fraction=sample_size_fraction,
confidence_level=confidence_level,
need_conditional_estimates=need_conditional_estimates,
num_quantiles_to_discretize_cont_cols=num_quantiles_to_discretize_cont_cols,
propensity_score_model=propensity_score_model,
propensity_score_column=propensity_score_column,
num_strata=num_strata,
clipping_threshold=clipping_threshold,
**kwargs,
)
self.logger.info("Using Propensity Score Stratification Estimator")
# setting method-specific parameters
self.num_strata = num_strata
self.clipping_threshold = clipping_threshold
def fit(
self,
data: pd.DataFrame,
treatment_name: str,
outcome_name: str,
effect_modifier_names: Optional[List[str]] = None,
):
"""
Fits the estimator with data for effect estimation
:param data: data frame containing the data
:param treatment_name: name of the treatment variable
:param outcome_name: name of the outcome variable
:param effect_modifier_names: Variables on which to compute separate
effects, or return a heterogeneous effect function. Not all
methods support this currently.
"""
super().fit(data, treatment_name, outcome_name, effect_modifier_names=effect_modifier_names)
self.symbolic_estimator = self.construct_symbolic_estimator(self._target_estimand)
self.logger.info(self.symbolic_estimator)
return self
def estimate_effect(
self, data: pd.DataFrame = None, treatment_value: Any = 1, control_value: Any = 0, target_units=None, **_
):
if data is None:
data = self._data
self._target_units = target_units
self._treatment_value = treatment_value
self._control_value = control_value
if self.propensity_score_column not in data:
self.estimate_propensity_score_column(data)
clipped = None
# Infer the right strata based on clipping threshold
if self.num_strata == "auto":
# 0.5 because there are two values for the treatment
clipping_t = self.clipping_threshold
num_strata = 0.5 * data.shape[0] / clipping_t
# To be conservative and allow most strata to be included in the
# analysis
strata_found = False
while not strata_found:
self.logger.info("'num_strata' selected as {}".format(num_strata))
try:
clipped = self._get_strata(num_strata, self.clipping_threshold)
num_ret_strata = clipped.groupby(["strata"]).count().reset_index()
# At least half of the strata should be included in the analysis
if num_ret_strata.shape[0] >= 0.5 * num_strata:
strata_found = True
else:
num_strata = int(num_strata / 2)
self.logger.info(
f"Less than half the strata have at least {self.clipping_threshold} data points. Selecting a smaller number of strata."
)
if num_strata < 2:
raise ValueError(
"Not enough data to generate at least two strata. This error may be due to a high value of 'clipping_threshold'."
)
except ValueError:
self.logger.info(
"No strata found with at least {} data points. Selecting a smaller number of strata.".format(
self.clipping_threshold
)
)
num_strata = int(num_strata / 2)
if num_strata < 2:
raise ValueError(
"Not enough data to generate at least two strata. This error may be due to a high value of 'clipping_threshold'."
)
else:
clipped = self._get_strata(self.num_strata, self.clipping_threshold)
# sum weighted outcomes over all strata (weight by treated population)
weighted_outcomes = clipped.groupby("strata").agg(
{self._treatment_name[0]: ["sum"], "dbar": ["sum"], "d_y": ["sum"], "dbar_y": ["sum"]}
)
weighted_outcomes.columns = ["_".join(x) for x in weighted_outcomes.columns.to_numpy().ravel()]
treatment_sum_name = self._treatment_name[0] + "_sum"
control_sum_name = "dbar_sum"
weighted_outcomes["d_y_mean"] = weighted_outcomes["d_y_sum"] / weighted_outcomes[treatment_sum_name]
weighted_outcomes["dbar_y_mean"] = weighted_outcomes["dbar_y_sum"] / weighted_outcomes["dbar_sum"]
weighted_outcomes["effect"] = weighted_outcomes["d_y_mean"] - weighted_outcomes["dbar_y_mean"]
total_treatment_population = weighted_outcomes[treatment_sum_name].sum()
total_control_population = weighted_outcomes[control_sum_name].sum()
total_population = total_treatment_population + total_control_population
self.logger.debug(
"Total number of data points is {0}, including {1} from treatment and {2} from control.".format(
total_population, total_treatment_population, total_control_population
)
)
if target_units == "att":
est = (
weighted_outcomes["effect"] * weighted_outcomes[treatment_sum_name]
).sum() / total_treatment_population
elif target_units == "atc":
est = (weighted_outcomes["effect"] * weighted_outcomes[control_sum_name]).sum() / total_control_population
elif target_units == "ate":
est = (
weighted_outcomes["effect"]
* (weighted_outcomes[control_sum_name] + weighted_outcomes[treatment_sum_name])
).sum() / total_population
else:
raise ValueError("Target units string value not supported")
# TODO - how can we add additional information into the returned estimate?
# such as how much clipping was done, or per-strata info for debugging?
estimate = CausalEstimate(
estimate=est,
control_value=control_value,
treatment_value=treatment_value,
target_estimand=self._target_estimand,
realized_estimand_expr=self.symbolic_estimator,
propensity_scores=data[self.propensity_score_column],
)
estimate.add_estimator(self)
return estimate
def _get_strata(self, num_strata, clipping_threshold):
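# Note: the stratification below operates on self._data (the data passed to
# fit()), not on the data argument of estimate_effect(); the two coincide
# when estimate_effect() is called without new data.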
# sort the dataframe by propensity score
# create a column 'strata' for each element that marks what strata it belongs to
num_rows = self._data[self._outcome_name].shape[0]
self._data["strata"] = (
(self._data[self.propensity_score_column].rank(ascending=True) / num_rows) * num_strata
).round(0)
# for each strata, count how many treated and control units there are
# throw away strata that have insufficient treatment or control
self._data["dbar"] = 1 - self._data[self._treatment_name[0]] # 1-Treatment
self._data["d_y"] = self._data[self._treatment_name[0]] * self._data[self._outcome_name]
self._data["dbar_y"] = self._data["dbar"] * self._data[self._outcome_name]
stratified = self._data.groupby("strata")
clipped = stratified.filter(
lambda strata: min(
strata.loc[strata[self._treatment_name[0]] == 1].shape[0],
strata.loc[strata[self._treatment_name[0]] == 0].shape[0],
)
> clipping_threshold
)
self.logger.debug(
"After using clipping_threshold={0}, here are the number of data points in each strata:\n {1}".format(
clipping_threshold, clipped.groupby(["strata", self._treatment_name[0]])[self._outcome_name].count()
)
)
if clipped.empty:
raise ValueError(
"Method requires strata with number of data points per treatment > clipping_threshold (={0}). No such strata exists. Consider decreasing 'num_strata' or 'clipping_threshold' parameters.".format(
clipping_threshold
)
)
return clipped
def construct_symbolic_estimator(self, estimand):
expr = "b: " + ",".join(estimand.outcome_variable) + "~"
# TODO -- fix: we are actually conditioning on positive treatment (d=1)
var_list = estimand.treatment_variable + estimand.get_backdoor_variables()
expr += "+".join(var_list)
return expr
| andresmor-ms | 11c4e0dafd6e824eb81ad14262457d954ae61468 | affe0952f4aba6845247355c171565510c2c1673 | There is no need to refresh the propensity score here; just assign the model's predictions to the propensity score column. | amit-sharma | 236 |
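Because estimate_effect() only estimates the score when the propensity_score_column is absent from the data, callers can supply pre-computed scores directly; this is what replaces the old recalculate_propensity_score flag. A minimal sketch of that pattern, with simulated data and placeholder column names:

import numpy as np
import pandas as pd
from sklearn.linear_model import LogisticRegression

rng = np.random.default_rng(0)
df = pd.DataFrame({"w0": rng.normal(size=500)})
df["v0"] = rng.binomial(1, 1 / (1 + np.exp(-df["w0"])))
df["y"] = 2 * df["v0"] + df["w0"] + rng.normal(size=500)

# Bring-your-own propensity scores: fit any classifier and write its
# predictions into the column the estimator looks for.
ps_model = LogisticRegression().fit(df[["w0"]], df["v0"])
df["propensity_score"] = ps_model.predict_proba(df[["w0"]])[:, 1]

# A subsequent estimator.estimate_effect(df, target_units="ate") would now
# reuse df["propensity_score"] instead of refitting its internal model.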
py-why/dowhy | 746 | Functional api/causal estimators | * Introduce `fit()` method to estimators.
* Refactor constructors to avoid using `*args` and `**kwargs` and have more explicit parameters.
* Refactor refuters and other parts of the code to use `fit()` and modify arguments to `estimate_effect()` | null | 2022-11-04 16:15:39+00:00 | 2022-12-03 17:07:53+00:00 | dowhy/causal_estimators/propensity_score_weighting_estimator.py | import numpy as np
import pandas as pd
from dowhy.causal_estimator import CausalEstimate
from dowhy.causal_estimators.propensity_score_estimator import PropensityScoreEstimator
class PropensityScoreWeightingEstimator(PropensityScoreEstimator):
"""Estimate effect of treatment by weighting the data by
inverse probability of occurrence.
Straightforward application of the back-door criterion.
For a list of standard args and kwargs, see documentation for
:class:`~dowhy.causal_estimator.CausalEstimator`.
Supports additional parameters as listed below.
"""
def __init__(
self,
*args,
min_ps_score=0.05,
max_ps_score=0.95,
weighting_scheme="ips_weight",
propensity_score_model=None,
recalculate_propensity_score=True,
propensity_score_column="propensity_score",
**kwargs,
):
"""
:param min_ps_score: Lower bound used to clip the propensity score.
Default=0.05
:param max_ps_score: Upper bound used to clip the propensity score.
Default=0.95
:param weighting_scheme: Weighting method to use. Can be inverse
propensity score ("ips_weight", default), stabilized IPS score
("ips_stabilized_weight"), or normalized IPS score
("ips_normalized_weight").
:param propensity_score_model: The model used to compute propensity
score. Can be any classification model that supports fit() and
predict_proba() methods. If None, a LogisticRegression model is
used by default.
:param recalculate_propensity_score: If true, force the estimator to
estimate the propensity score. To use pre-computed propensity
scores, set this value to false. Default=True
:param propensity_score_column: Column name that stores the
propensity score. Default='propensity_score'
"""
# Required to ensure that self.method_params contains all the information
# to create an object of this class
args_dict = kwargs
args_dict.update(
{"min_ps_score": min_ps_score, "max_ps_score": max_ps_score, "weighting_scheme": weighting_scheme}
)
super().__init__(
*args,
propensity_score_model=propensity_score_model,
recalculate_propensity_score=recalculate_propensity_score,
propensity_score_column=propensity_score_column,
**args_dict,
)
self.logger.info("INFO: Using Propensity Score Weighting Estimator")
self.symbolic_estimator = self.construct_symbolic_estimator(self._target_estimand)
self.logger.info(self.symbolic_estimator)
# Setting method specific parameters
self.weighting_scheme = weighting_scheme
self.min_ps_score = min_ps_score
self.max_ps_score = max_ps_score
def _estimate_effect(self):
self._refresh_propensity_score()
# trim propensity score weights
self._data[self.propensity_score_column] = np.minimum(
self.max_ps_score, self._data[self.propensity_score_column]
)
self._data[self.propensity_score_column] = np.maximum(
self.min_ps_score, self._data[self.propensity_score_column]
)
# ips ==> (isTreated(y)/ps(y)) + ((1-isTreated(y))/(1-ps(y)))
# nips ==> ips / (sum of ips over all units)
# icps ==> ps(y)/(1-ps(y)) / (sum of (ps(y)/(1-ps(y))) over all control units)
# itps ==> ps(y)/(1-ps(y)) / (sum of (ps(y)/(1-ps(y))) over all treatment units)
ipst_sum = sum(self._data[self._treatment_name[0]] / self._data[self.propensity_score_column])
ipsc_sum = sum((1 - self._data[self._treatment_name[0]]) / (1 - self._data[self.propensity_score_column]))
num_units = len(self._data[self._treatment_name[0]])
num_treatment_units = sum(self._data[self._treatment_name[0]])
num_control_units = num_units - num_treatment_units
# Vanilla IPS estimator
self._data["ips_weight"] = self._data[self._treatment_name[0]] / self._data[self.propensity_score_column] + (
1 - self._data[self._treatment_name[0]]
) / (1 - self._data[self.propensity_score_column])
self._data["tips_weight"] = self._data[self._treatment_name[0]] + (
1 - self._data[self._treatment_name[0]]
) * self._data[self.propensity_score_column] / (1 - self._data[self.propensity_score_column])
self._data["cips_weight"] = self._data[self._treatment_name[0]] * (
1 - self._data[self.propensity_score_column]
) / self._data[self.propensity_score_column] + (1 - self._data[self._treatment_name[0]])
# The Hajek estimator (or the self-normalized estimator)
self._data["ips_normalized_weight"] = (
self._data[self._treatment_name[0]] / self._data[self.propensity_score_column] / ipst_sum
+ (1 - self._data[self._treatment_name[0]]) / (1 - self._data[self.propensity_score_column]) / ipsc_sum
)
ipst_for_att_sum = sum(self._data[self._treatment_name[0]])
ipsc_for_att_sum = sum(
(1 - self._data[self._treatment_name[0]])
/ (1 - self._data[self.propensity_score_column])
* self._data[self.propensity_score_column]
)
self._data["tips_normalized_weight"] = (
self._data[self._treatment_name[0]] / ipst_for_att_sum
+ (1 - self._data[self._treatment_name[0]])
* self._data[self.propensity_score_column]
/ (1 - self._data[self.propensity_score_column])
/ ipsc_for_att_sum
)
ipst_for_atc_sum = sum(
self._data[self._treatment_name[0]]
/ self._data[self.propensity_score_column]
* (1 - self._data[self.propensity_score_column])
)
ipsc_for_atc_sum = sum((1 - self._data[self._treatment_name[0]]))
self._data["cips_normalized_weight"] = (
self._data[self._treatment_name[0]]
* (1 - self._data[self.propensity_score_column])
/ self._data[self.propensity_score_column]
/ ipst_for_atc_sum
+ (1 - self._data[self._treatment_name[0]]) / ipsc_for_atc_sum
)
# Stabilized weights (from Robins, Hernan, Brumback (2000))
# Paper: Marginal Structural Models and Causal Inference in Epidemiology
p_treatment = sum(self._data[self._treatment_name[0]]) / num_units
self._data["ips_stabilized_weight"] = self._data[self._treatment_name[0]] / self._data[
self.propensity_score_column
] * p_treatment + (1 - self._data[self._treatment_name[0]]) / (1 - self._data[self.propensity_score_column]) * (
1 - p_treatment
)
self._data["tips_stabilized_weight"] = self._data[self._treatment_name[0]] * p_treatment + (
1 - self._data[self._treatment_name[0]]
) * self._data[self.propensity_score_column] / (1 - self._data[self.propensity_score_column]) * (
1 - p_treatment
)
self._data["cips_stabilized_weight"] = self._data[self._treatment_name[0]] * (
1 - self._data[self.propensity_score_column]
) / self._data[self.propensity_score_column] * p_treatment + (1 - self._data[self._treatment_name[0]]) * (
1 - p_treatment
)
if isinstance(self._target_units, pd.DataFrame) or self._target_units == "ate":
weighting_scheme_name = self.weighting_scheme
elif self._target_units == "att":
weighting_scheme_name = "t" + self.weighting_scheme
elif self._target_units == "atc":
weighting_scheme_name = "c" + self.weighting_scheme
else:
raise ValueError(f"Target units value {self._target_units} not supported")
# Calculating the effect
self._data["d_y"] = (
self._data[weighting_scheme_name] * self._data[self._treatment_name[0]] * self._data[self._outcome_name]
)
self._data["dbar_y"] = (
self._data[weighting_scheme_name]
* (1 - self._data[self._treatment_name[0]])
* self._data[self._outcome_name]
)
sum_dy_weights = np.sum(self._data[self._treatment_name[0]] * self._data[weighting_scheme_name])
sum_dbary_weights = np.sum((1 - self._data[self._treatment_name[0]]) * self._data[weighting_scheme_name])
# Subtracting the weighted means
est = self._data["d_y"].sum() / sum_dy_weights - self._data["dbar_y"].sum() / sum_dbary_weights
# TODO - how can we add additional information into the returned estimate?
estimate = CausalEstimate(
estimate=est,
control_value=self._control_value,
treatment_value=self._treatment_value,
target_estimand=self._target_estimand,
realized_estimand_expr=self.symbolic_estimator,
propensity_scores=self._data[self.propensity_score_column],
)
return estimate
def construct_symbolic_estimator(self, estimand):
expr = "b: " + ",".join(estimand.outcome_variable) + "~"
# TODO -- fix: we are actually conditioning on positive treatment (d=1)
var_list = estimand.treatment_variable + estimand.get_backdoor_variables()
expr += "+".join(var_list)
return expr
| from typing import Any, List, Optional, Union
import numpy as np
import pandas as pd
from dowhy.causal_estimator import CausalEstimate, CausalEstimator
from dowhy.causal_estimators.propensity_score_estimator import PropensityScoreEstimator
from dowhy.causal_identifier import IdentifiedEstimand
class PropensityScoreWeightingEstimator(PropensityScoreEstimator):
"""Estimate effect of treatment by weighting the data by
inverse probability of occurrence.
Straightforward application of the back-door criterion.
Supports additional parameters as listed below.
"""
def __init__(
self,
identified_estimand: IdentifiedEstimand,
test_significance: bool = False,
evaluate_effect_strength: bool = False,
confidence_intervals: bool = False,
num_null_simulations: int = CausalEstimator.DEFAULT_NUMBER_OF_SIMULATIONS_STAT_TEST,
num_simulations: int = CausalEstimator.DEFAULT_NUMBER_OF_SIMULATIONS_CI,
sample_size_fraction: int = CausalEstimator.DEFAULT_SAMPLE_SIZE_FRACTION,
confidence_level: float = CausalEstimator.DEFAULT_CONFIDENCE_LEVEL,
need_conditional_estimates: Union[bool, str] = "auto",
num_quantiles_to_discretize_cont_cols: int = CausalEstimator.NUM_QUANTILES_TO_DISCRETIZE_CONT_COLS,
min_ps_score: float = 0.05,
max_ps_score: float = 0.95,
weighting_scheme: str = "ips_weight",
propensity_score_model: Optional[Any] = None,
propensity_score_column: str = "propensity_score",
**kwargs,
):
"""
:param identified_estimand: probability expression
representing the target identified estimand to estimate.
:param test_significance: Binary flag or a string indicating whether to test significance and by which method. All estimators support test_significance="bootstrap" that estimates a p-value for the obtained estimate using the bootstrap method. Individual estimators can override this to support custom testing methods. The bootstrap method supports an optional parameter, num_null_simulations. If False, no testing is done. If True, significance of the estimate is tested using the custom method if available, otherwise by bootstrap.
:param evaluate_effect_strength: (Experimental) whether to evaluate the strength of effect
:param confidence_intervals: Binary flag or a string indicating whether the confidence intervals should be computed and which method should be used. All methods support estimation of confidence intervals using the bootstrap method by using the parameter confidence_intervals="bootstrap". The bootstrap method takes in two arguments (num_simulations and sample_size_fraction) that can be optionally specified in the params dictionary. Estimators may also override this to implement their own confidence interval method. If this parameter is False, no confidence intervals are computed. If True, confidence intervals are computed by the estimator's specific method if available, otherwise through bootstrap.
:param num_null_simulations: The number of simulations for testing the
statistical significance of the estimator
:param num_simulations: The number of simulations for finding the
confidence interval (and/or standard error) for an estimate
:param sample_size_fraction: The size of the sample for the bootstrap
estimator
:param confidence_level: The confidence level of the confidence
interval estimate
:param need_conditional_estimates: Boolean flag indicating whether
conditional estimates should be computed. Defaults to True if
there are effect modifiers in the graph
:param num_quantiles_to_discretize_cont_cols: The number of quantiles
into which a numeric effect modifier is split, to enable
estimation of conditional treatment effect over it.
:param min_ps_score: Lower bound used to clip the propensity score.
Default=0.05
:param max_ps_score: Upper bound used to clip the propensity score.
Default=0.95
:param weighting_scheme: Weighting method to use. Can be inverse
propensity score ("ips_weight", default), stabilized IPS score
("ips_stabilized_weight"), or normalized IPS score
("ips_normalized_weight").
:param propensity_score_model: The model used to compute propensity
score. Can be any classification model that supports fit() and
predict_proba() methods. If None, a LogisticRegression model is
used by default.
:param propensity_score_column: Column name that stores the
propensity score. Default='propensity_score'
:param kwargs: (optional) Additional estimator-specific parameters
"""
super().__init__(
identified_estimand=identified_estimand,
test_significance=test_significance,
evaluate_effect_strength=evaluate_effect_strength,
confidence_intervals=confidence_intervals,
num_null_simulations=num_null_simulations,
num_simulations=num_simulations,
sample_size_fraction=sample_size_fraction,
confidence_level=confidence_level,
need_conditional_estimates=need_conditional_estimates,
num_quantiles_to_discretize_cont_cols=num_quantiles_to_discretize_cont_cols,
propensity_score_model=propensity_score_model,
propensity_score_column=propensity_score_column,
min_ps_score=min_ps_score,
max_ps_score=max_ps_score,
weighting_scheme=weighting_scheme,
**kwargs,
)
self.logger.info("INFO: Using Propensity Score Weighting Estimator")
# Setting method specific parameters
self.weighting_scheme = weighting_scheme
self.min_ps_score = min_ps_score
self.max_ps_score = max_ps_score
def fit(
self,
data: pd.DataFrame,
treatment_name: str,
outcome_name: str,
effect_modifier_names: Optional[List[str]] = None,
):
"""
Fits the estimator with data for effect estimation
:param data: data frame containing the data
:param treatment_name: name of the treatment variable
:param outcome_name: name of the outcome variable
:param effect_modifier_names: Variables on which to compute separate
effects, or return a heterogeneous effect function. Not all
methods support this currently.
"""
super().fit(data, treatment_name, outcome_name, effect_modifier_names=effect_modifier_names)
self.symbolic_estimator = self.construct_symbolic_estimator(self._target_estimand)
self.logger.info(self.symbolic_estimator)
return self
def estimate_effect(
self, data: pd.DataFrame = None, treatment_value: Any = 1, control_value: Any = 0, target_units=None, **_
):
if data is None:
data = self._data
self._target_units = target_units
self._treatment_value = treatment_value
self._control_value = control_value
if self.propensity_score_column not in data:
self.estimate_propensity_score_column(data)
# trim propensity score weights
data[self.propensity_score_column] = np.minimum(self.max_ps_score, data[self.propensity_score_column])
data[self.propensity_score_column] = np.maximum(self.min_ps_score, data[self.propensity_score_column])
# ips ==> (isTreated(y)/ps(y)) + ((1-isTreated(y))/(1-ps(y)))
# nips ==> ips / (sum of ips over all units)
# icps ==> ps(y)/(1-ps(y)) / (sum of (ps(y)/(1-ps(y))) over all control units)
# itps ==> ps(y)/(1-ps(y)) / (sum of (ps(y)/(1-ps(y))) over all treatment units)
ipst_sum = sum(data[self._treatment_name[0]] / data[self.propensity_score_column])
ipsc_sum = sum((1 - data[self._treatment_name[0]]) / (1 - data[self.propensity_score_column]))
num_units = len(data[self._treatment_name[0]])
num_treatment_units = sum(data[self._treatment_name[0]])
num_control_units = num_units - num_treatment_units
# Vanilla IPS estimator
data["ips_weight"] = data[self._treatment_name[0]] / data[self.propensity_score_column] + (
1 - data[self._treatment_name[0]]
) / (1 - data[self.propensity_score_column])
data["tips_weight"] = data[self._treatment_name[0]] + (1 - data[self._treatment_name[0]]) * data[
self.propensity_score_column
] / (1 - data[self.propensity_score_column])
data["cips_weight"] = data[self._treatment_name[0]] * (1 - data[self.propensity_score_column]) / data[
self.propensity_score_column
] + (1 - data[self._treatment_name[0]])
# The Hajek estimator (or the self-normalized estimator)
data["ips_normalized_weight"] = (
data[self._treatment_name[0]] / data[self.propensity_score_column] / ipst_sum
+ (1 - data[self._treatment_name[0]]) / (1 - data[self.propensity_score_column]) / ipsc_sum
)
ipst_for_att_sum = sum(data[self._treatment_name[0]])
ipsc_for_att_sum = sum(
(1 - data[self._treatment_name[0]])
/ (1 - data[self.propensity_score_column])
* data[self.propensity_score_column]
)
data["tips_normalized_weight"] = (
data[self._treatment_name[0]] / ipst_for_att_sum
+ (1 - data[self._treatment_name[0]])
* data[self.propensity_score_column]
/ (1 - data[self.propensity_score_column])
/ ipsc_for_att_sum
)
ipst_for_atc_sum = sum(
data[self._treatment_name[0]]
/ data[self.propensity_score_column]
* (1 - data[self.propensity_score_column])
)
ipsc_for_atc_sum = sum((1 - data[self._treatment_name[0]]))
data["cips_normalized_weight"] = (
data[self._treatment_name[0]]
* (1 - data[self.propensity_score_column])
/ data[self.propensity_score_column]
/ ipst_for_atc_sum
+ (1 - data[self._treatment_name[0]]) / ipsc_for_atc_sum
)
# Stabilized weights (from Robins, Hernan, Brumback (2000))
# Paper: Marginal Structural Models and Causal Inference in Epidemiology
p_treatment = sum(data[self._treatment_name[0]]) / num_units
data["ips_stabilized_weight"] = data[self._treatment_name[0]] / data[
self.propensity_score_column
] * p_treatment + (1 - data[self._treatment_name[0]]) / (1 - data[self.propensity_score_column]) * (
1 - p_treatment
)
data["tips_stabilized_weight"] = data[self._treatment_name[0]] * p_treatment + (
1 - data[self._treatment_name[0]]
) * data[self.propensity_score_column] / (1 - data[self.propensity_score_column]) * (1 - p_treatment)
data["cips_stabilized_weight"] = data[self._treatment_name[0]] * (
1 - data[self.propensity_score_column]
) / data[self.propensity_score_column] * p_treatment + (1 - data[self._treatment_name[0]]) * (1 - p_treatment)
if isinstance(target_units, pd.DataFrame) or target_units == "ate":
weighting_scheme_name = self.weighting_scheme
elif target_units == "att":
weighting_scheme_name = "t" + self.weighting_scheme
elif target_units == "atc":
weighting_scheme_name = "c" + self.weighting_scheme
else:
raise ValueError(f"Target units value {target_units} not supported")
# Calculating the effect
data["d_y"] = data[weighting_scheme_name] * data[self._treatment_name[0]] * data[self._outcome_name]
data["dbar_y"] = data[weighting_scheme_name] * (1 - data[self._treatment_name[0]]) * data[self._outcome_name]
sum_dy_weights = np.sum(data[self._treatment_name[0]] * data[weighting_scheme_name])
sum_dbary_weights = np.sum((1 - data[self._treatment_name[0]]) * data[weighting_scheme_name])
# Subtracting the weighted means
est = data["d_y"].sum() / sum_dy_weights - data["dbar_y"].sum() / sum_dbary_weights
# TODO - how can we add additional information into the returned estimate?
estimate = CausalEstimate(
estimate=est,
control_value=control_value,
treatment_value=treatment_value,
target_estimand=self._target_estimand,
realized_estimand_expr=self.symbolic_estimator,
propensity_scores=data[self.propensity_score_column],
)
estimate.add_estimator(self)
return estimate
def construct_symbolic_estimator(self, estimand):
expr = "b: " + ",".join(estimand.outcome_variable) + "~"
# TODO -- fix: we are actually conditioning on positive treatment (d=1)
var_list = estimand.treatment_variable + estimand.get_backdoor_variables()
expr += "+".join(var_list)
return expr
| andresmor-ms | 11c4e0dafd6e824eb81ad14262457d954ae61468 | affe0952f4aba6845247355c171565510c2c1673 | The recalculate_propensity_score parameter is no longer needed. | amit-sharma | 237 |
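For intuition, the weighted difference of means that estimate_effect() computes for the default `ips_weight` scheme can be reproduced in a few lines of numpy. This sketch uses simulated data and the true propensity score in place of a fitted model; the final per-group normalization mirrors the sum_dy_weights / sum_dbary_weights step above.

import numpy as np

rng = np.random.default_rng(42)
n = 5000
w = rng.normal(size=n)                  # confounder
ps_true = 1 / (1 + np.exp(-w))          # true propensity score
t = rng.binomial(1, ps_true)            # binary treatment
y = 3.0 * t + 2.0 * w + rng.normal(size=n)

ps = np.clip(ps_true, 0.05, 0.95)       # plays the role of min/max_ps_score

ips = t / ps + (1 - t) / (1 - ps)       # the "ips_weight" column
d_y = ips * t * y
dbar_y = ips * (1 - t) * y
ate = d_y.sum() / (t * ips).sum() - dbar_y.sum() / ((1 - t) * ips).sum()
print(round(ate, 2))                    # close to the true effect, 3.0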
py-why/dowhy | 746 | Functional api/causal estimators | * Introduce `fit()` method to estimators.
* Refactor constructors to avoid using `*args` and `**kwargs` and have more explicit parameters.
* Refactor refuters and other parts of the code to use `fit()` and modify arguments to `estimate_effect()` | null | 2022-11-04 16:15:39+00:00 | 2022-12-03 17:07:53+00:00 | dowhy/causal_estimators/propensity_score_weighting_estimator.py | import numpy as np
import pandas as pd
from dowhy.causal_estimator import CausalEstimate
from dowhy.causal_estimators.propensity_score_estimator import PropensityScoreEstimator
class PropensityScoreWeightingEstimator(PropensityScoreEstimator):
"""Estimate effect of treatment by weighing the data by
inverse probability of occurrence.
Straightforward application of the back-door criterion.
For a list of standard args and kwargs, see documentation for
:class:`~dowhy.causal_estimator.CausalEstimator`.
Supports additional parameters as listed below.
"""
def __init__(
self,
*args,
min_ps_score=0.05,
max_ps_score=0.95,
weighting_scheme="ips_weight",
propensity_score_model=None,
recalculate_propensity_score=True,
propensity_score_column="propensity_score",
**kwargs,
):
"""
:param min_ps_score: Lower bound used to clip the propensity score.
Default=0.05
:param max_ps_score: Upper bound used to clip the propensity score.
Default=0.95
:param weighting_scheme: Weighting method to use. Can be inverse
propensity score ("ips_weight", default), stabilized IPS score
("ips_stabilized_weight"), or normalized IPS score
("ips_normalized_weight").
:param propensity_score_model: The model used to compute propensity
score. Can be any classification model that supports fit() and
predict_proba() methods. If None, use LogisticRegression model as
the default. Default=None
:param recalculate_propensity_score: If true, force the estimator to
estimate the propensity score. To use pre-computed propensity
scores, set this value to false. Default=True
:param propensity_score_column: Column name that stores the
propensity score. Default='propensity_score'
"""
# Required to ensure that self.method_params contains all the information
# to create an object of this class
args_dict = kwargs
args_dict.update(
{"min_ps_score": min_ps_score, "max_ps_score": max_ps_score, "weighting_scheme": weighting_scheme}
)
super().__init__(
*args,
propensity_score_model=propensity_score_model,
recalculate_propensity_score=recalculate_propensity_score,
propensity_score_column=propensity_score_column,
**args_dict,
)
self.logger.info("INFO: Using Propensity Score Weighting Estimator")
self.symbolic_estimator = self.construct_symbolic_estimator(self._target_estimand)
self.logger.info(self.symbolic_estimator)
# Setting method specific parameters
self.weighting_scheme = weighting_scheme
self.min_ps_score = min_ps_score
self.max_ps_score = max_ps_score
def _estimate_effect(self):
self._refresh_propensity_score()
# trim propensity score weights
self._data[self.propensity_score_column] = np.minimum(
self.max_ps_score, self._data[self.propensity_score_column]
)
self._data[self.propensity_score_column] = np.maximum(
self.min_ps_score, self._data[self.propensity_score_column]
)
# ips ==> (isTreated(y)/ps(y)) + ((1-isTreated(y))/(1-ps(y)))
# nips ==> ips / (sum of ips over all units)
# icps ==> ps(y)/(1-ps(y)) / (sum of (ps(y)/(1-ps(y))) over all control units)
# itps ==> ps(y)/(1-ps(y)) / (sum of (ps(y)/(1-ps(y))) over all treatment units)
ipst_sum = sum(self._data[self._treatment_name[0]] / self._data[self.propensity_score_column])
ipsc_sum = sum((1 - self._data[self._treatment_name[0]]) / (1 - self._data[self.propensity_score_column]))
num_units = len(self._data[self._treatment_name[0]])
num_treatment_units = sum(self._data[self._treatment_name[0]])
num_control_units = num_units - num_treatment_units
# Vanilla IPS estimator
self._data["ips_weight"] = self._data[self._treatment_name[0]] / self._data[self.propensity_score_column] + (
1 - self._data[self._treatment_name[0]]
) / (1 - self._data[self.propensity_score_column])
self._data["tips_weight"] = self._data[self._treatment_name[0]] + (
1 - self._data[self._treatment_name[0]]
) * self._data[self.propensity_score_column] / (1 - self._data[self.propensity_score_column])
self._data["cips_weight"] = self._data[self._treatment_name[0]] * (
1 - self._data[self.propensity_score_column]
) / self._data[self.propensity_score_column] + (1 - self._data[self._treatment_name[0]])
# The Hajek estimator (or the self-normalized estimator)
self._data["ips_normalized_weight"] = (
self._data[self._treatment_name[0]] / self._data[self.propensity_score_column] / ipst_sum
+ (1 - self._data[self._treatment_name[0]]) / (1 - self._data[self.propensity_score_column]) / ipsc_sum
)
ipst_for_att_sum = sum(self._data[self._treatment_name[0]])
ipsc_for_att_sum = sum(
(1 - self._data[self._treatment_name[0]])
/ (1 - self._data[self.propensity_score_column])
* self._data[self.propensity_score_column]
)
self._data["tips_normalized_weight"] = (
self._data[self._treatment_name[0]] / ipst_for_att_sum
+ (1 - self._data[self._treatment_name[0]])
* self._data[self.propensity_score_column]
/ (1 - self._data[self.propensity_score_column])
/ ipsc_for_att_sum
)
ipst_for_atc_sum = sum(
self._data[self._treatment_name[0]]
/ self._data[self.propensity_score_column]
* (1 - self._data[self.propensity_score_column])
)
ipsc_for_atc_sum = sum((1 - self._data[self._treatment_name[0]]))
self._data["cips_normalized_weight"] = (
self._data[self._treatment_name[0]]
* (1 - self._data[self.propensity_score_column])
/ self._data[self.propensity_score_column]
/ ipst_for_atc_sum
+ (1 - self._data[self._treatment_name[0]]) / ipsc_for_atc_sum
)
# Stabilized weights (from Robins, Hernan, Brumback (2000))
# Paper: Marginal Structural Models and Causal Inference in Epidemiology
p_treatment = sum(self._data[self._treatment_name[0]]) / num_units
self._data["ips_stabilized_weight"] = self._data[self._treatment_name[0]] / self._data[
self.propensity_score_column
] * p_treatment + (1 - self._data[self._treatment_name[0]]) / (1 - self._data[self.propensity_score_column]) * (
1 - p_treatment
)
self._data["tips_stabilized_weight"] = self._data[self._treatment_name[0]] * p_treatment + (
1 - self._data[self._treatment_name[0]]
) * self._data[self.propensity_score_column] / (1 - self._data[self.propensity_score_column]) * (
1 - p_treatment
)
self._data["cips_stabilized_weight"] = self._data[self._treatment_name[0]] * (
1 - self._data[self.propensity_score_column]
) / self._data[self.propensity_score_column] * p_treatment + (1 - self._data[self._treatment_name[0]]) * (
1 - p_treatment
)
if isinstance(self._target_units, pd.DataFrame) or self._target_units == "ate":
weighting_scheme_name = self.weighting_scheme
elif self._target_units == "att":
weighting_scheme_name = "t" + self.weighting_scheme
elif self._target_units == "atc":
weighting_scheme_name = "c" + self.weighting_scheme
else:
raise ValueError(f"Target units value {self._target_units} not supported")
# Calculating the effect
self._data["d_y"] = (
self._data[weighting_scheme_name] * self._data[self._treatment_name[0]] * self._data[self._outcome_name]
)
self._data["dbar_y"] = (
self._data[weighting_scheme_name]
* (1 - self._data[self._treatment_name[0]])
* self._data[self._outcome_name]
)
sum_dy_weights = np.sum(self._data[self._treatment_name[0]] * self._data[weighting_scheme_name])
sum_dbary_weights = np.sum((1 - self._data[self._treatment_name[0]]) * self._data[weighting_scheme_name])
# Subtracting the weighted means
est = self._data["d_y"].sum() / sum_dy_weights - self._data["dbar_y"].sum() / sum_dbary_weights
# TODO - how can we add additional information into the returned estimate?
estimate = CausalEstimate(
estimate=est,
control_value=self._control_value,
treatment_value=self._treatment_value,
target_estimand=self._target_estimand,
realized_estimand_expr=self.symbolic_estimator,
propensity_scores=self._data[self.propensity_score_column],
)
return estimate
def construct_symbolic_estimator(self, estimand):
expr = "b: " + ",".join(estimand.outcome_variable) + "~"
# TODO -- fix: we are actually conditioning on positive treatment (d=1)
var_list = estimand.treatment_variable + estimand.get_backdoor_variables()
expr += "+".join(var_list)
return expr
| from typing import Any, List, Optional, Union
import numpy as np
import pandas as pd
from dowhy.causal_estimator import CausalEstimate, CausalEstimator
from dowhy.causal_estimators.propensity_score_estimator import PropensityScoreEstimator
from dowhy.causal_identifier import IdentifiedEstimand
class PropensityScoreWeightingEstimator(PropensityScoreEstimator):
"""Estimate effect of treatment by weighing the data by
inverse probability of occurrence.
Straightforward application of the back-door criterion.
Supports additional parameters as listed below.
"""
def __init__(
self,
identified_estimand: IdentifiedEstimand,
test_significance: bool = False,
evaluate_effect_strength: bool = False,
confidence_intervals: bool = False,
num_null_simulations: int = CausalEstimator.DEFAULT_NUMBER_OF_SIMULATIONS_STAT_TEST,
num_simulations: int = CausalEstimator.DEFAULT_NUMBER_OF_SIMULATIONS_CI,
sample_size_fraction: int = CausalEstimator.DEFAULT_SAMPLE_SIZE_FRACTION,
confidence_level: float = CausalEstimator.DEFAULT_CONFIDENCE_LEVEL,
need_conditional_estimates: Union[bool, str] = "auto",
num_quantiles_to_discretize_cont_cols: int = CausalEstimator.NUM_QUANTILES_TO_DISCRETIZE_CONT_COLS,
min_ps_score: float = 0.05,
max_ps_score: float = 0.95,
weighting_scheme: str = "ips_weight",
propensity_score_model: Optional[Any] = None,
propensity_score_column: str = "propensity_score",
**kwargs,
):
"""
:param identified_estimand: probability expression
representing the target identified estimand to estimate.
:param test_significance: Binary flag or a string indicating whether to test significance and by which method. All estimators support test_significance="bootstrap" that estimates a p-value for the obtained estimate using the bootstrap method. Individual estimators can override this to support custom testing methods. The bootstrap method supports an optional parameter, num_null_simulations. If False, no testing is done. If True, significance of the estimate is tested using the custom method if available, otherwise by bootstrap.
:param evaluate_effect_strength: (Experimental) whether to evaluate the strength of effect
:param confidence_intervals: Binary flag or a string indicating whether the confidence intervals should be computed and which method should be used. All methods support estimation of confidence intervals using the bootstrap method by using the parameter confidence_intervals="bootstrap". The bootstrap method takes in two arguments (num_simulations and sample_size_fraction) that can be optionally specified in the params dictionary. Estimators may also override this to implement their own confidence interval method. If this parameter is False, no confidence intervals are computed. If True, confidence intervals are computed by the estimator's specific method if available, otherwise through bootstrap
:param num_null_simulations: The number of simulations for testing the
statistical significance of the estimator
:param num_simulations: The number of simulations for finding the
confidence interval (and/or standard error) for a estimate
:param sample_size_fraction: The size of the sample for the bootstrap
estimator
:param confidence_level: The confidence level of the confidence
interval estimate
:param need_conditional_estimates: Boolean flag indicating whether
conditional estimates should be computed. Defaults to True if
there are effect modifiers in the graph
:param num_quantiles_to_discretize_cont_cols: The number of quantiles
into which a numeric effect modifier is split, to enable
estimation of conditional treatment effect over it.
:param min_ps_score: Lower bound used to clip the propensity score.
Default=0.05
:param max_ps_score: Upper bound used to clip the propensity score.
Default=0.95
:param weighting_scheme: Weighting method to use. Can be inverse
propensity score ("ips_weight", default), stabilized IPS score
("ips_stabilized_weight"), or normalized IPS score
("ips_normalized_weight").
:param propensity_score_model: The model used to compute propensity
score. Can be any classification model that supports fit() and
predict_proba() methods. If None, use LogisticRegression model as
the default. Default=None
:param propensity_score_column: Column name that stores the
propensity score. Default='propensity_score'
:param kwargs: (optional) Additional estimator-specific parameters
"""
super().__init__(
identified_estimand=identified_estimand,
test_significance=test_significance,
evaluate_effect_strength=evaluate_effect_strength,
confidence_intervals=confidence_intervals,
num_null_simulations=num_null_simulations,
num_simulations=num_simulations,
sample_size_fraction=sample_size_fraction,
confidence_level=confidence_level,
need_conditional_estimates=need_conditional_estimates,
num_quantiles_to_discretize_cont_cols=num_quantiles_to_discretize_cont_cols,
propensity_score_model=propensity_score_model,
propensity_score_column=propensity_score_column,
min_ps_score=min_ps_score,
max_ps_score=max_ps_score,
weighting_scheme=weighting_scheme,
**kwargs,
)
self.logger.info("INFO: Using Propensity Score Weighting Estimator")
# Setting method specific parameters
self.weighting_scheme = weighting_scheme
self.min_ps_score = min_ps_score
self.max_ps_score = max_ps_score
def fit(
self,
data: pd.DataFrame,
treatment_name: str,
outcome_name: str,
effect_modifier_names: Optional[List[str]] = None,
):
"""
Fits the estimator with data for effect estimation
:param data: data frame containing the data
:param treatment: name of the treatment variable
:param outcome: name of the outcome variable
:param effect_modifiers: Variables on which to compute separate
effects, or return a heterogeneous effect function. Not all
methods support this currently.
"""
super().fit(data, treatment_name, outcome_name, effect_modifier_names=effect_modifier_names)
self.symbolic_estimator = self.construct_symbolic_estimator(self._target_estimand)
self.logger.info(self.symbolic_estimator)
return self
def estimate_effect(
self, data: pd.DataFrame = None, treatment_value: Any = 1, control_value: Any = 0, target_units=None, **_
):
if data is None:
data = self._data
self._target_units = target_units
self._treatment_value = treatment_value
self._control_value = control_value
if self.propensity_score_column not in data:
self.estimate_propensity_score_column(data)
# trim propensity score weights
data[self.propensity_score_column] = np.minimum(self.max_ps_score, data[self.propensity_score_column])
data[self.propensity_score_column] = np.maximum(self.min_ps_score, data[self.propensity_score_column])
# ips ==> (isTreated(y)/ps(y)) + ((1-isTreated(y))/(1-ps(y)))
# nips ==> ips / (sum of ips over all units)
# icps ==> ps(y)/(1-ps(y)) / (sum of (ps(y)/(1-ps(y))) over all control units)
# itps ==> ps(y)/(1-ps(y)) / (sum of (ps(y)/(1-ps(y))) over all treatment units)
ipst_sum = sum(data[self._treatment_name[0]] / data[self.propensity_score_column])
ipsc_sum = sum((1 - data[self._treatment_name[0]]) / (1 - data[self.propensity_score_column]))
num_units = len(data[self._treatment_name[0]])
num_treatment_units = sum(data[self._treatment_name[0]])
num_control_units = num_units - num_treatment_units
# Vanilla IPS estimator
data["ips_weight"] = data[self._treatment_name[0]] / data[self.propensity_score_column] + (
1 - data[self._treatment_name[0]]
) / (1 - data[self.propensity_score_column])
data["tips_weight"] = data[self._treatment_name[0]] + (1 - data[self._treatment_name[0]]) * data[
self.propensity_score_column
] / (1 - data[self.propensity_score_column])
data["cips_weight"] = data[self._treatment_name[0]] * (1 - data[self.propensity_score_column]) / data[
self.propensity_score_column
] + (1 - data[self._treatment_name[0]])
# The Hajek estimator (or the self-normalized estimator)
data["ips_normalized_weight"] = (
data[self._treatment_name[0]] / data[self.propensity_score_column] / ipst_sum
+ (1 - data[self._treatment_name[0]]) / (1 - data[self.propensity_score_column]) / ipsc_sum
)
ipst_for_att_sum = sum(data[self._treatment_name[0]])
ipsc_for_att_sum = sum(
(1 - data[self._treatment_name[0]])
/ (1 - data[self.propensity_score_column])
* data[self.propensity_score_column]
)
data["tips_normalized_weight"] = (
data[self._treatment_name[0]] / ipst_for_att_sum
+ (1 - data[self._treatment_name[0]])
* data[self.propensity_score_column]
/ (1 - data[self.propensity_score_column])
/ ipsc_for_att_sum
)
ipst_for_atc_sum = sum(
data[self._treatment_name[0]]
/ data[self.propensity_score_column]
* (1 - data[self.propensity_score_column])
)
ipsc_for_atc_sum = sum((1 - data[self._treatment_name[0]]))
data["cips_normalized_weight"] = (
data[self._treatment_name[0]]
* (1 - data[self.propensity_score_column])
/ data[self.propensity_score_column]
/ ipst_for_atc_sum
+ (1 - data[self._treatment_name[0]]) / ipsc_for_atc_sum
)
# Stabilized weights (from Robins, Hernan, Brumback (2000))
# Paper: Marginal Structural Models and Causal Inference in Epidemiology
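        # Stabilized weight: sw = P(T=1) * T / e(W) + P(T=0) * (1 - T) / (1 - e(W)).
        # Scaling the IPS weights by the marginal treatment probabilities keeps their
        # mean near one, which typically reduces the variance of the estimate.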
p_treatment = sum(data[self._treatment_name[0]]) / num_units
data["ips_stabilized_weight"] = data[self._treatment_name[0]] / data[
self.propensity_score_column
] * p_treatment + (1 - data[self._treatment_name[0]]) / (1 - data[self.propensity_score_column]) * (
1 - p_treatment
)
data["tips_stabilized_weight"] = data[self._treatment_name[0]] * p_treatment + (
1 - data[self._treatment_name[0]]
) * data[self.propensity_score_column] / (1 - data[self.propensity_score_column]) * (1 - p_treatment)
data["cips_stabilized_weight"] = data[self._treatment_name[0]] * (
1 - data[self.propensity_score_column]
) / data[self.propensity_score_column] * p_treatment + (1 - data[self._treatment_name[0]]) * (1 - p_treatment)
if isinstance(target_units, pd.DataFrame) or target_units == "ate":
weighting_scheme_name = self.weighting_scheme
elif target_units == "att":
weighting_scheme_name = "t" + self.weighting_scheme
elif target_units == "atc":
weighting_scheme_name = "c" + self.weighting_scheme
else:
raise ValueError(f"Target units value {target_units} not supported")
# Calculating the effect
data["d_y"] = data[weighting_scheme_name] * data[self._treatment_name[0]] * data[self._outcome_name]
data["dbar_y"] = data[weighting_scheme_name] * (1 - data[self._treatment_name[0]]) * data[self._outcome_name]
sum_dy_weights = np.sum(data[self._treatment_name[0]] * data[weighting_scheme_name])
sum_dbary_weights = np.sum((1 - data[self._treatment_name[0]]) * data[weighting_scheme_name])
# Subtracting the weighted means
est = data["d_y"].sum() / sum_dy_weights - data["dbar_y"].sum() / sum_dbary_weights
# TODO - how can we add additional information into the returned estimate?
estimate = CausalEstimate(
estimate=est,
control_value=control_value,
treatment_value=treatment_value,
target_estimand=self._target_estimand,
realized_estimand_expr=self.symbolic_estimator,
propensity_scores=data[self.propensity_score_column],
)
estimate.add_estimator(self)
return estimate
def construct_symbolic_estimator(self, estimand):
expr = "b: " + ",".join(estimand.outcome_variable) + "~"
# TODO -- fix: we are actually conditioning on positive treatment (d=1)
var_list = estimand.treatment_variable + estimand.get_backdoor_variables()
expr += "+".join(var_list)
return expr
| andresmor-ms | 11c4e0dafd6e824eb81ad14262457d954ae61468 | affe0952f4aba6845247355c171565510c2c1673 | same comment for refresh ps method | amit-sharma | 238 |
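
To make the weighting schemes above concrete, here is a minimal, self-contained sketch of the vanilla inverse propensity score (IPS) estimate that estimate_effect computes for the ATE. All names in it (confounder W0, binary treatment v0, outcome y) are illustrative assumptions, and it is a sketch of the idea rather than the library's exact code path.

import numpy as np
import pandas as pd
from sklearn.linear_model import LogisticRegression

# Toy data: treatment uptake depends on a single confounder (names assumed).
rng = np.random.default_rng(0)
n = 5000
w = rng.normal(size=n)
t = rng.binomial(1, 1 / (1 + np.exp(-w)))
y = 2.0 * t + w + rng.normal(size=n)  # true effect is 2.0
df = pd.DataFrame({"W0": w, "v0": t, "y": y})

# Propensity scores, clipped to [min_ps_score, max_ps_score] as in the estimator.
ps = LogisticRegression().fit(df[["W0"]], df["v0"]).predict_proba(df[["W0"]])[:, 1]
ps = np.clip(ps, 0.05, 0.95)

# Vanilla IPS weights T/e(W) + (1-T)/(1-e(W)), then the weighted-mean contrast,
# mirroring the d_y / dbar_y computation above.
ips = df["v0"] / ps + (1 - df["v0"]) / (1 - ps)
treated_mean = (ips * df["v0"] * df["y"]).sum() / (ips * df["v0"]).sum()
control_mean = (ips * (1 - df["v0"]) * df["y"]).sum() / (ips * (1 - df["v0"])).sum()
print(f"IPS ATE estimate: {treated_mean - control_mean:.3f} (true effect 2.0)")
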
py-why/dowhy | 746 | Functional api/causal estimators | * Introduce `fit()` method to estimators.
* Refactor constructors to avoid using `*args` and `**kwargs` and have more explicit parameters.
* Refactor refuters and other parts of the code to use `fit()` and modify arguments to `estimate_effect()` | null | 2022-11-04 16:15:39+00:00 | 2022-12-03 17:07:53+00:00 | dowhy/causal_estimators/regression_discontinuity_estimator.py | import numpy as np
import pandas as pd
from dowhy.causal_estimator import CausalEstimator
from dowhy.causal_estimators.instrumental_variable_estimator import InstrumentalVariableEstimator
class RegressionDiscontinuityEstimator(CausalEstimator):
"""Compute effect of treatment using the regression discontinuity method.
Estimates effect by transforming the problem to an instrumental variables
problem.
For a list of standard args and kwargs, see documentation for
:class:`~dowhy.causal_estimator.CausalEstimator`.
Supports additional parameters as listed below.
"""
def __init__(self, *args, rd_variable_name=None, rd_threshold_value=None, rd_bandwidth=None, **kwargs):
"""
:param rd_variable_name: Name of the variable on which the
discontinuity occurs. This is the instrument.
:param rd_threshold_value: Threshold at which the discontinuity occurs.
:param rd_bandwidth: Distance from the threshold within which
confounders can be considered the same between treatment and
control. Considered band is (threshold +- bandwidth)
"""
# Required to ensure that self.method_params contains all the information
# to create an object of this class
args_dict = {k: v for k, v in locals().items() if k not in type(self)._STD_INIT_ARGS}
args_dict.update(kwargs)
super().__init__(*args, **args_dict)
self.logger.info("Using Regression Discontinuity Estimator")
self.rd_variable_name = rd_variable_name
self.rd_threshold_value = rd_threshold_value
self.rd_bandwidth = rd_bandwidth
self.rd_variable = self._data[self.rd_variable_name]
self.symbolic_estimator = self.construct_symbolic_estimator(self._target_estimand)
self.logger.info(self.symbolic_estimator)
def _estimate_effect(self):
upper_limit = self.rd_threshold_value + self.rd_bandwidth
lower_limit = self.rd_threshold_value - self.rd_bandwidth
rows_filter = np.s_[(self.rd_variable >= lower_limit) & (self.rd_variable <= upper_limit)]
local_rd_variable = self.rd_variable[rows_filter]
local_treatment_variable = self._treatment[self._treatment_name[0]][
rows_filter
] # indexing by treatment name again since this method assumes a single-dimensional treatment
local_outcome_variable = self._outcome[rows_filter]
local_df = pd.DataFrame(
data={
"local_rd_variable": local_rd_variable,
"local_treatment": local_treatment_variable,
"local_outcome": local_outcome_variable,
}
)
self.logger.debug(local_df)
iv_estimator = InstrumentalVariableEstimator(
local_df,
self._target_estimand,
["local_treatment"],
["local_outcome"],
test_significance=self._significance_test,
iv_instrument_name="local_rd_variable",
)
est = iv_estimator.estimate_effect()
return est
def construct_symbolic_estimator(self, estimand):
return ""
| from typing import Any, List, Optional, Union
import numpy as np
import pandas as pd
from dowhy.causal_estimator import CausalEstimator
from dowhy.causal_estimators.instrumental_variable_estimator import InstrumentalVariableEstimator
from dowhy.causal_identifier import IdentifiedEstimand
class RegressionDiscontinuityEstimator(CausalEstimator):
"""Compute effect of treatment using the regression discontinuity method.
Estimates effect by transforming the problem to an instrumental variables
problem.
Supports additional parameters as listed below.
"""
def __init__(
self,
identified_estimand: IdentifiedEstimand,
test_significance: bool = False,
evaluate_effect_strength: bool = False,
confidence_intervals: bool = False,
num_null_simulations: int = CausalEstimator.DEFAULT_NUMBER_OF_SIMULATIONS_STAT_TEST,
num_simulations: int = CausalEstimator.DEFAULT_NUMBER_OF_SIMULATIONS_CI,
sample_size_fraction: int = CausalEstimator.DEFAULT_SAMPLE_SIZE_FRACTION,
confidence_level: float = CausalEstimator.DEFAULT_CONFIDENCE_LEVEL,
need_conditional_estimates: Union[bool, str] = "auto",
num_quantiles_to_discretize_cont_cols: int = CausalEstimator.NUM_QUANTILES_TO_DISCRETIZE_CONT_COLS,
rd_variable_name: Optional[str] = None,
rd_threshold_value: Optional[float] = None,
rd_bandwidth: Optional[float] = None,
**kwargs,
):
"""
:param identified_estimand: probability expression
representing the target identified estimand to estimate.
:param test_significance: Binary flag or a string indicating whether to test significance and by which method. All estimators support test_significance="bootstrap" that estimates a p-value for the obtained estimate using the bootstrap method. Individual estimators can override this to support custom testing methods. The bootstrap method supports an optional parameter, num_null_simulations. If False, no testing is done. If True, significance of the estimate is tested using the custom method if available, otherwise by bootstrap.
:param evaluate_effect_strength: (Experimental) whether to evaluate the strength of effect
:param confidence_intervals: Binary flag or a string indicating whether the confidence intervals should be computed and which method should be used. All methods support estimation of confidence intervals using the bootstrap method by using the parameter confidence_intervals="bootstrap". The bootstrap method takes in two arguments (num_simulations and sample_size_fraction) that can be optionally specified in the params dictionary. Estimators may also override this to implement their own confidence interval method. If this parameter is False, no confidence intervals are computed. If True, confidence intervals are computed by the estimator's specific method if available, otherwise through bootstrap
:param num_null_simulations: The number of simulations for testing the
statistical significance of the estimator
:param num_simulations: The number of simulations for finding the
            confidence interval (and/or standard error) for an estimate
:param sample_size_fraction: The size of the sample for the bootstrap
estimator
:param confidence_level: The confidence level of the confidence
interval estimate
:param need_conditional_estimates: Boolean flag indicating whether
conditional estimates should be computed. Defaults to True if
there are effect modifiers in the graph
:param num_quantiles_to_discretize_cont_cols: The number of quantiles
into which a numeric effect modifier is split, to enable
estimation of conditional treatment effect over it.
:param rd_variable_name: Name of the variable on which the
discontinuity occurs. This is the instrument.
:param rd_threshold_value: Threshold at which the discontinuity occurs.
:param rd_bandwidth: Distance from the threshold within which
confounders can be considered the same between treatment and
control. Considered band is (threshold +- bandwidth)
:param kwargs: (optional) Additional estimator-specific parameters
"""
super().__init__(
identified_estimand=identified_estimand,
test_significance=test_significance,
evaluate_effect_strength=evaluate_effect_strength,
confidence_intervals=confidence_intervals,
num_null_simulations=num_null_simulations,
num_simulations=num_simulations,
sample_size_fraction=sample_size_fraction,
confidence_level=confidence_level,
need_conditional_estimates=need_conditional_estimates,
num_quantiles_to_discretize_cont_cols=num_quantiles_to_discretize_cont_cols,
rd_variable_name=rd_variable_name,
rd_threshold_value=rd_threshold_value,
rd_bandwidth=rd_bandwidth,
**kwargs,
)
self.logger.info("Using Regression Discontinuity Estimator")
self.rd_variable_name = rd_variable_name
self.rd_threshold_value = rd_threshold_value
self.rd_bandwidth = rd_bandwidth
def fit(
self,
data: pd.DataFrame,
treatment_name: str,
outcome_name: str,
effect_modifier_names: Optional[List[str]] = None,
):
"""
Fits the estimator with data for effect estimation
:param data: data frame containing the data
        :param treatment_name: name of the treatment variable
        :param outcome_name: name of the outcome variable
        :param effect_modifier_names: names of the variables on which to compute
            separate effects, or return a heterogeneous effect function. Not all
            methods support this currently.
"""
self._set_data(data, treatment_name, outcome_name)
self._set_effect_modifiers(effect_modifier_names)
self.rd_variable = self._data[self.rd_variable_name]
self.symbolic_estimator = self.construct_symbolic_estimator(self._target_estimand)
self.logger.info(self.symbolic_estimator)
upper_limit = self.rd_threshold_value + self.rd_bandwidth
lower_limit = self.rd_threshold_value - self.rd_bandwidth
rows_filter = np.s_[(self.rd_variable >= lower_limit) & (self.rd_variable <= upper_limit)]
local_rd_variable = self.rd_variable[rows_filter]
local_treatment_variable = self._treatment[self._treatment_name[0]][
rows_filter
] # indexing by treatment name again since this method assumes a single-dimensional treatment
local_outcome_variable = self._outcome[rows_filter]
local_df = pd.DataFrame(
data={
"local_rd_variable": local_rd_variable,
"local_treatment": local_treatment_variable,
"local_outcome": local_outcome_variable,
}
)
self.logger.debug(local_df)
self.iv_estimator = InstrumentalVariableEstimator(
self._target_estimand,
test_significance=self._significance_test,
iv_instrument_name="local_rd_variable",
)
self.iv_estimator.fit(
local_df,
["local_treatment"],
["local_outcome"],
)
return self
def estimate_effect(
self, data: pd.DataFrame = None, treatment_value: Any = 1, control_value: Any = 0, target_units=None, **_
):
if data is None:
data = self._data
self._target_units = target_units
self._treatment_value = treatment_value
self._control_value = control_value
est = self.iv_estimator.estimate_effect(
treatment_value=treatment_value, control_value=control_value, target_units=target_units
)
est.add_estimator(self)
return est
def construct_symbolic_estimator(self, estimand):
return ""
| andresmor-ms | 11c4e0dafd6e824eb81ad14262457d954ae61468 | affe0952f4aba6845247355c171565510c2c1673 | up until this line, it may be better to move the code to the fit method. | amit-sharma | 239 |
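
The review comment above asks for the pre-processing to move into fit(), which is what the refactored fit() above does: it selects the local band and trains the inner instrumental variable estimator, while estimate_effect() only delegates. The following usage sketch shows that call pattern under this PR's API; the data, graph, and column names (Z0, v0, y) are assumptions made up for the example, and the snippet is a sketch rather than a tested recipe.

import numpy as np
import pandas as pd
from dowhy import CausalModel
from dowhy.causal_estimators.regression_discontinuity_estimator import (
    RegressionDiscontinuityEstimator,
)

# Toy regression discontinuity data (all names are illustrative assumptions).
rng = np.random.default_rng(1)
n = 10_000
z0 = rng.uniform(0, 1, n)                       # running variable
v0 = rng.binomial(1, 0.1 + 0.8 * (z0 >= 0.5))   # crossing the threshold raises uptake
y = 3.0 * v0 + rng.normal(size=n)               # true local effect is 3.0
df = pd.DataFrame({"Z0": z0, "v0": v0, "y": y})

model = CausalModel(data=df, treatment="v0", outcome="y",
                    graph="digraph {Z0 -> v0; v0 -> y;}")
identified_estimand = model.identify_effect(proceed_when_unidentifiable=True)

estimator = RegressionDiscontinuityEstimator(
    identified_estimand,
    rd_variable_name="Z0",
    rd_threshold_value=0.5,
    rd_bandwidth=0.1,
)
estimator.fit(df, treatment_name="v0", outcome_name="y")  # builds the local IV problem
estimate = estimator.estimate_effect(treatment_value=1, control_value=0, target_units="ate")
print(estimate.value)
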
py-why/dowhy | 746 | Functional api/causal estimators | * Introduce `fit()` method to estimators.
* Refactor constructors to avoid using `*args` and `**kwargs` and have more explicit parameters.
* Refactor refuters and other parts of the code to use `fit()` and modify arguments to `estimate_effect()` | null | 2022-11-04 16:15:39+00:00 | 2022-12-03 17:07:53+00:00 | dowhy/causal_estimators/regression_discontinuity_estimator.py | import numpy as np
import pandas as pd
from dowhy.causal_estimator import CausalEstimator
from dowhy.causal_estimators.instrumental_variable_estimator import InstrumentalVariableEstimator
class RegressionDiscontinuityEstimator(CausalEstimator):
"""Compute effect of treatment using the regression discontinuity method.
Estimates effect by transforming the problem to an instrumental variables
problem.
For a list of standard args and kwargs, see documentation for
:class:`~dowhy.causal_estimator.CausalEstimator`.
Supports additional parameters as listed below.
"""
def __init__(self, *args, rd_variable_name=None, rd_threshold_value=None, rd_bandwidth=None, **kwargs):
"""
:param rd_variable_name: Name of the variable on which the
discontinuity occurs. This is the instrument.
:param rd_threshold_value: Threshold at which the discontinuity occurs.
:param rd_bandwidth: Distance from the threshold within which
confounders can be considered the same between treatment and
control. Considered band is (threshold +- bandwidth)
"""
# Required to ensure that self.method_params contains all the information
# to create an object of this class
args_dict = {k: v for k, v in locals().items() if k not in type(self)._STD_INIT_ARGS}
args_dict.update(kwargs)
super().__init__(*args, **args_dict)
self.logger.info("Using Regression Discontinuity Estimator")
self.rd_variable_name = rd_variable_name
self.rd_threshold_value = rd_threshold_value
self.rd_bandwidth = rd_bandwidth
self.rd_variable = self._data[self.rd_variable_name]
self.symbolic_estimator = self.construct_symbolic_estimator(self._target_estimand)
self.logger.info(self.symbolic_estimator)
def _estimate_effect(self):
upper_limit = self.rd_threshold_value + self.rd_bandwidth
lower_limit = self.rd_threshold_value - self.rd_bandwidth
rows_filter = np.s_[(self.rd_variable >= lower_limit) & (self.rd_variable <= upper_limit)]
local_rd_variable = self.rd_variable[rows_filter]
local_treatment_variable = self._treatment[self._treatment_name[0]][
rows_filter
] # indexing by treatment name again since this method assumes a single-dimensional treatment
local_outcome_variable = self._outcome[rows_filter]
local_df = pd.DataFrame(
data={
"local_rd_variable": local_rd_variable,
"local_treatment": local_treatment_variable,
"local_outcome": local_outcome_variable,
}
)
self.logger.debug(local_df)
iv_estimator = InstrumentalVariableEstimator(
local_df,
self._target_estimand,
["local_treatment"],
["local_outcome"],
test_significance=self._significance_test,
iv_instrument_name="local_rd_variable",
)
est = iv_estimator.estimate_effect()
return est
def construct_symbolic_estimator(self, estimand):
return ""
| from typing import Any, List, Optional, Union
import numpy as np
import pandas as pd
from dowhy.causal_estimator import CausalEstimator
from dowhy.causal_estimators.instrumental_variable_estimator import InstrumentalVariableEstimator
from dowhy.causal_identifier import IdentifiedEstimand
class RegressionDiscontinuityEstimator(CausalEstimator):
"""Compute effect of treatment using the regression discontinuity method.
Estimates effect by transforming the problem to an instrumental variables
problem.
Supports additional parameters as listed below.
"""
def __init__(
self,
identified_estimand: IdentifiedEstimand,
test_significance: bool = False,
evaluate_effect_strength: bool = False,
confidence_intervals: bool = False,
num_null_simulations: int = CausalEstimator.DEFAULT_NUMBER_OF_SIMULATIONS_STAT_TEST,
num_simulations: int = CausalEstimator.DEFAULT_NUMBER_OF_SIMULATIONS_CI,
sample_size_fraction: int = CausalEstimator.DEFAULT_SAMPLE_SIZE_FRACTION,
confidence_level: float = CausalEstimator.DEFAULT_CONFIDENCE_LEVEL,
need_conditional_estimates: Union[bool, str] = "auto",
num_quantiles_to_discretize_cont_cols: int = CausalEstimator.NUM_QUANTILES_TO_DISCRETIZE_CONT_COLS,
rd_variable_name: Optional[str] = None,
rd_threshold_value: Optional[float] = None,
rd_bandwidth: Optional[float] = None,
**kwargs,
):
"""
:param identified_estimand: probability expression
representing the target identified estimand to estimate.
:param test_significance: Binary flag or a string indicating whether to test significance and by which method. All estimators support test_significance="bootstrap" that estimates a p-value for the obtained estimate using the bootstrap method. Individual estimators can override this to support custom testing methods. The bootstrap method supports an optional parameter, num_null_simulations. If False, no testing is done. If True, significance of the estimate is tested using the custom method if available, otherwise by bootstrap.
:param evaluate_effect_strength: (Experimental) whether to evaluate the strength of effect
:param confidence_intervals: Binary flag or a string indicating whether the confidence intervals should be computed and which method should be used. All methods support estimation of confidence intervals using the bootstrap method by using the parameter confidence_intervals="bootstrap". The bootstrap method takes in two arguments (num_simulations and sample_size_fraction) that can be optionally specified in the params dictionary. Estimators may also override this to implement their own confidence interval method. If this parameter is False, no confidence intervals are computed. If True, confidence intervals are computed by the estimator's specific method if available, otherwise through bootstrap
:param num_null_simulations: The number of simulations for testing the
statistical significance of the estimator
:param num_simulations: The number of simulations for finding the
            confidence interval (and/or standard error) for an estimate
:param sample_size_fraction: The size of the sample for the bootstrap
estimator
:param confidence_level: The confidence level of the confidence
interval estimate
:param need_conditional_estimates: Boolean flag indicating whether
conditional estimates should be computed. Defaults to True if
there are effect modifiers in the graph
:param num_quantiles_to_discretize_cont_cols: The number of quantiles
into which a numeric effect modifier is split, to enable
estimation of conditional treatment effect over it.
:param rd_variable_name: Name of the variable on which the
discontinuity occurs. This is the instrument.
:param rd_threshold_value: Threshold at which the discontinuity occurs.
:param rd_bandwidth: Distance from the threshold within which
confounders can be considered the same between treatment and
control. Considered band is (threshold +- bandwidth)
:param kwargs: (optional) Additional estimator-specific parameters
"""
super().__init__(
identified_estimand=identified_estimand,
test_significance=test_significance,
evaluate_effect_strength=evaluate_effect_strength,
confidence_intervals=confidence_intervals,
num_null_simulations=num_null_simulations,
num_simulations=num_simulations,
sample_size_fraction=sample_size_fraction,
confidence_level=confidence_level,
need_conditional_estimates=need_conditional_estimates,
num_quantiles_to_discretize_cont_cols=num_quantiles_to_discretize_cont_cols,
rd_variable_name=rd_variable_name,
rd_threshold_value=rd_threshold_value,
rd_bandwidth=rd_bandwidth,
**kwargs,
)
self.logger.info("Using Regression Discontinuity Estimator")
self.rd_variable_name = rd_variable_name
self.rd_threshold_value = rd_threshold_value
self.rd_bandwidth = rd_bandwidth
def fit(
self,
data: pd.DataFrame,
treatment_name: str,
outcome_name: str,
effect_modifier_names: Optional[List[str]] = None,
):
"""
Fits the estimator with data for effect estimation
:param data: data frame containing the data
        :param treatment_name: name of the treatment variable
        :param outcome_name: name of the outcome variable
        :param effect_modifier_names: names of the variables on which to compute
            separate effects, or return a heterogeneous effect function. Not all
            methods support this currently.
"""
self._set_data(data, treatment_name, outcome_name)
self._set_effect_modifiers(effect_modifier_names)
self.rd_variable = self._data[self.rd_variable_name]
self.symbolic_estimator = self.construct_symbolic_estimator(self._target_estimand)
self.logger.info(self.symbolic_estimator)
upper_limit = self.rd_threshold_value + self.rd_bandwidth
lower_limit = self.rd_threshold_value - self.rd_bandwidth
rows_filter = np.s_[(self.rd_variable >= lower_limit) & (self.rd_variable <= upper_limit)]
local_rd_variable = self.rd_variable[rows_filter]
local_treatment_variable = self._treatment[self._treatment_name[0]][
rows_filter
] # indexing by treatment name again since this method assumes a single-dimensional treatment
local_outcome_variable = self._outcome[rows_filter]
local_df = pd.DataFrame(
data={
"local_rd_variable": local_rd_variable,
"local_treatment": local_treatment_variable,
"local_outcome": local_outcome_variable,
}
)
self.logger.debug(local_df)
self.iv_estimator = InstrumentalVariableEstimator(
self._target_estimand,
test_significance=self._significance_test,
iv_instrument_name="local_rd_variable",
)
self.iv_estimator.fit(
local_df,
["local_treatment"],
["local_outcome"],
)
return self
def estimate_effect(
self, data: pd.DataFrame = None, treatment_value: Any = 1, control_value: Any = 0, target_units=None, **_
):
if data is None:
data = self._data
self._target_units = target_units
self._treatment_value = treatment_value
self._control_value = control_value
est = self.iv_estimator.estimate_effect(
treatment_value=treatment_value, control_value=control_value, target_units=target_units
)
est.add_estimator(self)
return est
def construct_symbolic_estimator(self, estimand):
return ""
| andresmor-ms | 11c4e0dafd6e824eb81ad14262457d954ae61468 | affe0952f4aba6845247355c171565510c2c1673 | treatment_value, control_value etc. should be passed to the IV estimate_effect. | amit-sharma | 240 |
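
The comment above concerns how treatment_value and control_value flow into the inner instrumental variable estimator. For intuition, here is a small self-contained sketch of the Wald-style IV estimate that the regression discontinuity method reduces to inside the bandwidth, with the threshold crossing acting as the instrument; the numbers and names are made up for illustration.

import numpy as np

rng = np.random.default_rng(2)
n = 10_000
running = rng.uniform(0, 1, n)
z = (running >= 0.5).astype(int)        # instrument: above the threshold
t = rng.binomial(1, 0.1 + 0.8 * z)      # crossing the threshold shifts treatment uptake
y = 3.0 * t + rng.normal(size=n)        # true local effect is 3.0

# Keep only units within threshold +- bandwidth, as in the estimator's rows_filter.
band = (running >= 0.4) & (running <= 0.6)
zb, tb, yb = z[band], t[band], y[band]

# Wald estimate: change in mean outcome over change in mean treatment across z.
wald = (yb[zb == 1].mean() - yb[zb == 0].mean()) / (tb[zb == 1].mean() - tb[zb == 0].mean())
print(f"Wald estimate inside the band: {wald:.3f} (true effect 3.0)")
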
py-why/dowhy | 746 | Functional api/causal estimators | * Introduce `fit()` method to estimators.
* Refactor constructors to avoid using `*args` and `**kwargs` and have more explicit parameters.
* Refactor refuters and other parts of the code to use `fit()` and modify arguments to `estimate_effect()` | null | 2022-11-04 16:15:39+00:00 | 2022-12-03 17:07:53+00:00 | dowhy/causal_estimators/regression_estimator.py | import numpy as np
import pandas as pd
import statsmodels.api as sm
from dowhy.causal_estimator import CausalEstimate, CausalEstimator
class RegressionEstimator(CausalEstimator):
"""Compute effect of treatment using some regression function.
Fits a regression model for estimating the outcome using treatment(s) and
confounders.
Base class for all regression models, inherited by
LinearRegressionEstimator and GeneralizedLinearModelEstimator.
"""
def __init__(self, *args, **kwargs):
"""For a list of standard args and kwargs, see documentation for
:class:`~dowhy.causal_estimator.CausalEstimator`.
"""
super().__init__(*args, **kwargs)
self.logger.debug("Back-door variables used:" + ",".join(self._target_estimand.get_backdoor_variables()))
self._observed_common_causes_names = self._target_estimand.get_backdoor_variables()
if len(self._observed_common_causes_names) > 0:
self._observed_common_causes = self._data[self._observed_common_causes_names]
self._observed_common_causes = pd.get_dummies(self._observed_common_causes, drop_first=True)
else:
self._observed_common_causes = None
self.symbolic_estimator = self.construct_symbolic_estimator(self._target_estimand)
self.logger.info(self.symbolic_estimator)
self.model = None
def _estimate_effect(self, data_df=None, need_conditional_estimates=None):
# TODO make treatment_value and control value also as local parameters
if data_df is None:
data_df = self._data
if need_conditional_estimates is None:
need_conditional_estimates = self.need_conditional_estimates
# Checking if the model is already trained
if not self.model:
# The model is always built on the entire data
_, self.model = self._build_model()
coefficients = self.model.params[1:] # first coefficient is the intercept
self.logger.debug("Coefficients of the fitted model: " + ",".join(map(str, coefficients)))
self.logger.debug(self.model.summary())
# All treatments are set to the same constant value
effect_estimate = self._do(self._treatment_value, data_df) - self._do(self._control_value, data_df)
conditional_effect_estimates = None
if need_conditional_estimates:
conditional_effect_estimates = self._estimate_conditional_effects(
self._estimate_effect_fn, effect_modifier_names=self._effect_modifier_names
)
intercept_parameter = self.model.params[0]
estimate = CausalEstimate(
estimate=effect_estimate,
control_value=self._control_value,
treatment_value=self._treatment_value,
conditional_estimates=conditional_effect_estimates,
target_estimand=self._target_estimand,
realized_estimand_expr=self.symbolic_estimator,
intercept=intercept_parameter,
)
return estimate
def _estimate_effect_fn(self, data_df):
est = self._estimate_effect(data_df, need_conditional_estimates=False)
return est.value
def _build_features(self, treatment_values=None, data_df=None):
# Using all data by default
if data_df is None:
data_df = self._data
treatment_vals = pd.get_dummies(self._treatment, drop_first=True)
observed_common_causes_vals = self._observed_common_causes
effect_modifiers_vals = self._effect_modifiers
else:
treatment_vals = pd.get_dummies(data_df[self._treatment_name], drop_first=True)
if len(self._observed_common_causes_names) > 0:
observed_common_causes_vals = data_df[self._observed_common_causes_names]
observed_common_causes_vals = pd.get_dummies(observed_common_causes_vals, drop_first=True)
if self._effect_modifier_names:
effect_modifiers_vals = data_df[self._effect_modifier_names]
effect_modifiers_vals = pd.get_dummies(effect_modifiers_vals, drop_first=True)
# Fixing treatment value to the specified value, if provided
if treatment_values is not None:
treatment_vals = treatment_values
if type(treatment_vals) is not np.ndarray:
treatment_vals = treatment_vals.to_numpy()
# treatment_vals and data_df should have same number of rows
if treatment_vals.shape[0] != data_df.shape[0]:
raise ValueError("Provided treatment values and dataframe should have the same length.")
        # Building the feature matrix
n_treatment_cols = 1 if len(treatment_vals.shape) == 1 else treatment_vals.shape[1]
n_samples = treatment_vals.shape[0]
treatment_2d = treatment_vals.reshape((n_samples, n_treatment_cols))
if len(self._observed_common_causes_names) > 0:
features = np.concatenate((treatment_2d, observed_common_causes_vals), axis=1)
else:
features = treatment_2d
if self._effect_modifier_names:
for i in range(treatment_2d.shape[1]):
curr_treatment = treatment_2d[:, i]
new_features = curr_treatment[:, np.newaxis] * effect_modifiers_vals.to_numpy()
features = np.concatenate((features, new_features), axis=1)
features = features.astype(
float, copy=False
) # converting to float in case of binary treatment and no other variables
features = sm.add_constant(features, has_constant="add") # to add an intercept term
return features
def _do(self, treatment_val, data_df=None):
if data_df is None:
data_df = self._data
if not self.model:
# The model is always built on the entire data
_, self.model = self._build_model()
# Replacing treatment values by given x
# First, create interventional tensor in original space
interventional_treatment_values = np.full((data_df.shape[0], len(self._treatment_name)), treatment_val)
# Then, use pandas to ensure that the dummies are assigned correctly for a categorical treatment
interventional_treatment_2d = pd.concat(
[
self._treatment.copy(),
pd.DataFrame(data=interventional_treatment_values, columns=self._treatment.columns),
],
axis=0,
).astype(self._treatment.dtypes, copy=False)
interventional_treatment_2d = pd.get_dummies(interventional_treatment_2d, drop_first=True)
interventional_treatment_2d = interventional_treatment_2d[self._treatment.shape[0] :]
new_features = self._build_features(treatment_values=interventional_treatment_2d, data_df=data_df)
interventional_outcomes = self.predict_fn(self.model, new_features)
return interventional_outcomes.mean()
| from typing import Any, List, Optional, Union
import numpy as np
import pandas as pd
import statsmodels.api as sm
from dowhy.causal_estimator import CausalEstimate, CausalEstimator, IdentifiedEstimand
class RegressionEstimator(CausalEstimator):
"""Compute effect of treatment using some regression function.
Fits a regression model for estimating the outcome using treatment(s) and
confounders.
Base class for all regression models, inherited by
LinearRegressionEstimator and GeneralizedLinearModelEstimator.
"""
def __init__(
self,
identified_estimand: IdentifiedEstimand,
test_significance: bool = False,
evaluate_effect_strength: bool = False,
confidence_intervals: bool = False,
num_null_simulations: int = CausalEstimator.DEFAULT_NUMBER_OF_SIMULATIONS_STAT_TEST,
num_simulations: int = CausalEstimator.DEFAULT_NUMBER_OF_SIMULATIONS_CI,
sample_size_fraction: int = CausalEstimator.DEFAULT_SAMPLE_SIZE_FRACTION,
confidence_level: float = CausalEstimator.DEFAULT_CONFIDENCE_LEVEL,
need_conditional_estimates: Union[bool, str] = "auto",
num_quantiles_to_discretize_cont_cols: int = CausalEstimator.NUM_QUANTILES_TO_DISCRETIZE_CONT_COLS,
**kwargs,
):
"""
:param identified_estimand: probability expression
representing the target identified estimand to estimate.
:param test_significance: Binary flag or a string indicating whether to test significance and by which method. All estimators support test_significance="bootstrap" that estimates a p-value for the obtained estimate using the bootstrap method. Individual estimators can override this to support custom testing methods. The bootstrap method supports an optional parameter, num_null_simulations. If False, no testing is done. If True, significance of the estimate is tested using the custom method if available, otherwise by bootstrap.
:param evaluate_effect_strength: (Experimental) whether to evaluate the strength of effect
:param confidence_intervals: Binary flag or a string indicating whether the confidence intervals should be computed and which method should be used. All methods support estimation of confidence intervals using the bootstrap method by using the parameter confidence_intervals="bootstrap". The bootstrap method takes in two arguments (num_simulations and sample_size_fraction) that can be optionally specified in the params dictionary. Estimators may also override this to implement their own confidence interval method. If this parameter is False, no confidence intervals are computed. If True, confidence intervals are computed by the estimator's specific method if available, otherwise through bootstrap
:param num_null_simulations: The number of simulations for testing the
statistical significance of the estimator
:param num_simulations: The number of simulations for finding the
            confidence interval (and/or standard error) for an estimate
:param sample_size_fraction: The size of the sample for the bootstrap
estimator
:param confidence_level: The confidence level of the confidence
interval estimate
:param need_conditional_estimates: Boolean flag indicating whether
conditional estimates should be computed. Defaults to True if
there are effect modifiers in the graph
:param num_quantiles_to_discretize_cont_cols: The number of quantiles
into which a numeric effect modifier is split, to enable
estimation of conditional treatment effect over it.
:param kwargs: (optional) Additional estimator-specific parameters
"""
super().__init__(
identified_estimand=identified_estimand,
test_significance=test_significance,
evaluate_effect_strength=evaluate_effect_strength,
confidence_intervals=confidence_intervals,
num_null_simulations=num_null_simulations,
num_simulations=num_simulations,
sample_size_fraction=sample_size_fraction,
confidence_level=confidence_level,
need_conditional_estimates=need_conditional_estimates,
num_quantiles_to_discretize_cont_cols=num_quantiles_to_discretize_cont_cols,
**kwargs,
)
self.model = None
def fit(
self,
data: pd.DataFrame,
treatment_name: str,
outcome_name: str,
effect_modifier_names: Optional[List[str]] = None,
):
"""
Fits the estimator with data for effect estimation
:param data: data frame containing the data
        :param treatment_name: name of the treatment variable
        :param outcome_name: name of the outcome variable
        :param effect_modifier_names: names of the variables on which to compute
            separate effects, or return a heterogeneous effect function. Not all
            methods support this currently.
"""
self._set_data(data, treatment_name, outcome_name)
self._set_effect_modifiers(effect_modifier_names)
self.logger.debug("Back-door variables used:" + ",".join(self._target_estimand.get_backdoor_variables()))
self._observed_common_causes_names = self._target_estimand.get_backdoor_variables()
if len(self._observed_common_causes_names) > 0:
self._observed_common_causes = self._data[self._observed_common_causes_names]
self._observed_common_causes = pd.get_dummies(self._observed_common_causes, drop_first=True)
else:
self._observed_common_causes = None
self.symbolic_estimator = self.construct_symbolic_estimator(self._target_estimand)
self.logger.info(self.symbolic_estimator)
# The model is always built on the entire data
_, self.model = self._build_model()
coefficients = self.model.params[1:] # first coefficient is the intercept
self.logger.debug("Coefficients of the fitted model: " + ",".join(map(str, coefficients)))
self.logger.debug(self.model.summary())
return self
def estimate_effect(
self,
data: pd.DataFrame = None,
treatment_value: Any = 1,
control_value: Any = 0,
target_units=None,
need_conditional_estimates=None,
**_,
):
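        """Estimate the average effect of the treatment on the outcome.

        :param data: data frame to estimate the effect on; defaults to the data passed to fit()
        :param treatment_value: value of the treatment variable for the treated units
        :param control_value: value of the treatment variable for the control units
        :param target_units: units of the population for which to estimate the effect
        :param need_conditional_estimates: whether to also compute conditional estimates over the effect modifiers
        """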
if data is None:
data = self._data
self._target_units = target_units
self._treatment_value = treatment_value
self._control_value = control_value
# TODO make treatment_value and control value also as local parameters
# All treatments are set to the same constant value
effect_estimate = self._do(treatment_value, data) - self._do(control_value, data)
conditional_effect_estimates = None
if need_conditional_estimates:
conditional_effect_estimates = self._estimate_conditional_effects(
self._estimate_effect_fn, effect_modifier_names=self._effect_modifier_names
)
intercept_parameter = self.model.params[0]
estimate = CausalEstimate(
estimate=effect_estimate,
control_value=control_value,
treatment_value=treatment_value,
conditional_estimates=conditional_effect_estimates,
target_estimand=self._target_estimand,
realized_estimand_expr=self.symbolic_estimator,
intercept=intercept_parameter,
)
estimate.add_estimator(self)
return estimate
def _estimate_effect_fn(self, data_df):
est = self.estimate_effect(data=data_df, need_conditional_estimates=False)
return est.value
def _build_features(self, treatment_values=None, data_df=None):
# Using all data by default
if data_df is None:
data_df = self._data
treatment_vals = pd.get_dummies(self._treatment, drop_first=True)
observed_common_causes_vals = self._observed_common_causes
effect_modifiers_vals = self._effect_modifiers
else:
treatment_vals = pd.get_dummies(data_df[self._treatment_name], drop_first=True)
if len(self._observed_common_causes_names) > 0:
observed_common_causes_vals = data_df[self._observed_common_causes_names]
observed_common_causes_vals = pd.get_dummies(observed_common_causes_vals, drop_first=True)
if self._effect_modifier_names:
effect_modifiers_vals = data_df[self._effect_modifier_names]
effect_modifiers_vals = pd.get_dummies(effect_modifiers_vals, drop_first=True)
# Fixing treatment value to the specified value, if provided
if treatment_values is not None:
treatment_vals = treatment_values
if type(treatment_vals) is not np.ndarray:
treatment_vals = treatment_vals.to_numpy()
# treatment_vals and data_df should have same number of rows
if treatment_vals.shape[0] != data_df.shape[0]:
raise ValueError("Provided treatment values and dataframe should have the same length.")
        # Building the feature matrix
n_treatment_cols = 1 if len(treatment_vals.shape) == 1 else treatment_vals.shape[1]
n_samples = treatment_vals.shape[0]
treatment_2d = treatment_vals.reshape((n_samples, n_treatment_cols))
if len(self._observed_common_causes_names) > 0:
features = np.concatenate((treatment_2d, observed_common_causes_vals), axis=1)
else:
features = treatment_2d
if self._effect_modifier_names:
for i in range(treatment_2d.shape[1]):
curr_treatment = treatment_2d[:, i]
new_features = curr_treatment[:, np.newaxis] * effect_modifiers_vals.to_numpy()
features = np.concatenate((features, new_features), axis=1)
features = features.astype(
float, copy=False
) # converting to float in case of binary treatment and no other variables
features = sm.add_constant(features, has_constant="add") # to add an intercept term
return features
def _do(self, treatment_val, data_df=None):
if data_df is None:
data_df = self._data
if not self.model:
# The model is always built on the entire data
_, self.model = self._build_model()
# Replacing treatment values by given x
# First, create interventional tensor in original space
interventional_treatment_values = np.full((data_df.shape[0], len(self._treatment_name)), treatment_val)
# Then, use pandas to ensure that the dummies are assigned correctly for a categorical treatment
interventional_treatment_2d = pd.concat(
[
self._treatment.copy(),
pd.DataFrame(data=interventional_treatment_values, columns=self._treatment.columns),
],
axis=0,
).astype(self._treatment.dtypes, copy=False)
interventional_treatment_2d = pd.get_dummies(interventional_treatment_2d, drop_first=True)
interventional_treatment_2d = interventional_treatment_2d[self._treatment.shape[0] :]
new_features = self._build_features(treatment_values=interventional_treatment_2d, data_df=data_df)
interventional_outcomes = self.predict_fn(self.model, new_features)
return interventional_outcomes.mean()
| andresmor-ms | 11c4e0dafd6e824eb81ad14262457d954ae61468 | affe0952f4aba6845247355c171565510c2c1673 | up until line 126, a model is being fit. All this code should be inside fit method. | amit-sharma | 241 |
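
The review comment above is about separating model fitting from effect estimation, which is the core of this refactor: fit() trains one regression of the outcome on treatment and confounders, and estimate_effect() contrasts average predictions under do(T=1) and do(T=0) via _do(). A minimal, self-contained sketch of that contrast (toy data, names assumed):

import numpy as np
import statsmodels.api as sm

rng = np.random.default_rng(3)
n = 2000
w = rng.normal(size=n)
t = rng.binomial(1, 1 / (1 + np.exp(-w)))
y = 1.5 * t + 0.5 * w + rng.normal(size=n)  # true effect is 1.5

X = sm.add_constant(np.column_stack([t, w]))
model = sm.OLS(y, X).fit()  # the fit() step: one model on the entire data

def do(model, t_val, w):
    # Set the treatment column to a constant and average the predictions;
    # has_constant="add" keeps the intercept even though T is now constant.
    X_int = sm.add_constant(np.column_stack([np.full_like(w, t_val), w]),
                            has_constant="add")
    return model.predict(X_int).mean()

effect = do(model, 1, w) - do(model, 0, w)  # the estimate_effect() step
print(f"Estimated effect: {effect:.3f} (true effect 1.5)")
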
py-why/dowhy | 746 | Functional api/causal estimators | * Introduce `fit()` method to estimators.
* Refactor constructors to avoid using `*args` and `**kwargs` and have more explicit parameters.
* Refactor refuters and other parts of the code to use `fit()` and modify arguments to `estimate_effect()` | null | 2022-11-04 16:15:39+00:00 | 2022-12-03 17:07:53+00:00 | dowhy/causal_estimators/regression_estimator.py | import numpy as np
import pandas as pd
import statsmodels.api as sm
from dowhy.causal_estimator import CausalEstimate, CausalEstimator
class RegressionEstimator(CausalEstimator):
"""Compute effect of treatment using some regression function.
Fits a regression model for estimating the outcome using treatment(s) and
confounders.
Base class for all regression models, inherited by
LinearRegressionEstimator and GeneralizedLinearModelEstimator.
"""
def __init__(self, *args, **kwargs):
"""For a list of standard args and kwargs, see documentation for
:class:`~dowhy.causal_estimator.CausalEstimator`.
"""
super().__init__(*args, **kwargs)
self.logger.debug("Back-door variables used:" + ",".join(self._target_estimand.get_backdoor_variables()))
self._observed_common_causes_names = self._target_estimand.get_backdoor_variables()
if len(self._observed_common_causes_names) > 0:
self._observed_common_causes = self._data[self._observed_common_causes_names]
self._observed_common_causes = pd.get_dummies(self._observed_common_causes, drop_first=True)
else:
self._observed_common_causes = None
self.symbolic_estimator = self.construct_symbolic_estimator(self._target_estimand)
self.logger.info(self.symbolic_estimator)
self.model = None
def _estimate_effect(self, data_df=None, need_conditional_estimates=None):
# TODO make treatment_value and control value also as local parameters
if data_df is None:
data_df = self._data
if need_conditional_estimates is None:
need_conditional_estimates = self.need_conditional_estimates
# Checking if the model is already trained
if not self.model:
# The model is always built on the entire data
_, self.model = self._build_model()
coefficients = self.model.params[1:] # first coefficient is the intercept
self.logger.debug("Coefficients of the fitted model: " + ",".join(map(str, coefficients)))
self.logger.debug(self.model.summary())
# All treatments are set to the same constant value
effect_estimate = self._do(self._treatment_value, data_df) - self._do(self._control_value, data_df)
conditional_effect_estimates = None
if need_conditional_estimates:
conditional_effect_estimates = self._estimate_conditional_effects(
self._estimate_effect_fn, effect_modifier_names=self._effect_modifier_names
)
intercept_parameter = self.model.params[0]
estimate = CausalEstimate(
estimate=effect_estimate,
control_value=self._control_value,
treatment_value=self._treatment_value,
conditional_estimates=conditional_effect_estimates,
target_estimand=self._target_estimand,
realized_estimand_expr=self.symbolic_estimator,
intercept=intercept_parameter,
)
return estimate
def _estimate_effect_fn(self, data_df):
est = self._estimate_effect(data_df, need_conditional_estimates=False)
return est.value
def _build_features(self, treatment_values=None, data_df=None):
# Using all data by default
if data_df is None:
data_df = self._data
treatment_vals = pd.get_dummies(self._treatment, drop_first=True)
observed_common_causes_vals = self._observed_common_causes
effect_modifiers_vals = self._effect_modifiers
else:
treatment_vals = pd.get_dummies(data_df[self._treatment_name], drop_first=True)
if len(self._observed_common_causes_names) > 0:
observed_common_causes_vals = data_df[self._observed_common_causes_names]
observed_common_causes_vals = pd.get_dummies(observed_common_causes_vals, drop_first=True)
if self._effect_modifier_names:
effect_modifiers_vals = data_df[self._effect_modifier_names]
effect_modifiers_vals = pd.get_dummies(effect_modifiers_vals, drop_first=True)
# Fixing treatment value to the specified value, if provided
if treatment_values is not None:
treatment_vals = treatment_values
if type(treatment_vals) is not np.ndarray:
treatment_vals = treatment_vals.to_numpy()
# treatment_vals and data_df should have same number of rows
if treatment_vals.shape[0] != data_df.shape[0]:
raise ValueError("Provided treatment values and dataframe should have the same length.")
        # Building the feature matrix
n_treatment_cols = 1 if len(treatment_vals.shape) == 1 else treatment_vals.shape[1]
n_samples = treatment_vals.shape[0]
treatment_2d = treatment_vals.reshape((n_samples, n_treatment_cols))
if len(self._observed_common_causes_names) > 0:
features = np.concatenate((treatment_2d, observed_common_causes_vals), axis=1)
else:
features = treatment_2d
if self._effect_modifier_names:
for i in range(treatment_2d.shape[1]):
curr_treatment = treatment_2d[:, i]
new_features = curr_treatment[:, np.newaxis] * effect_modifiers_vals.to_numpy()
features = np.concatenate((features, new_features), axis=1)
features = features.astype(
float, copy=False
) # converting to float in case of binary treatment and no other variables
features = sm.add_constant(features, has_constant="add") # to add an intercept term
return features
def _do(self, treatment_val, data_df=None):
if data_df is None:
data_df = self._data
if not self.model:
# The model is always built on the entire data
_, self.model = self._build_model()
# Replacing treatment values by given x
# First, create interventional tensor in original space
interventional_treatment_values = np.full((data_df.shape[0], len(self._treatment_name)), treatment_val)
# Then, use pandas to ensure that the dummies are assigned correctly for a categorical treatment
interventional_treatment_2d = pd.concat(
[
self._treatment.copy(),
pd.DataFrame(data=interventional_treatment_values, columns=self._treatment.columns),
],
axis=0,
).astype(self._treatment.dtypes, copy=False)
interventional_treatment_2d = pd.get_dummies(interventional_treatment_2d, drop_first=True)
interventional_treatment_2d = interventional_treatment_2d[self._treatment.shape[0] :]
new_features = self._build_features(treatment_values=interventional_treatment_2d, data_df=data_df)
interventional_outcomes = self.predict_fn(self.model, new_features)
return interventional_outcomes.mean()
| from typing import Any, List, Optional, Union
import numpy as np
import pandas as pd
import statsmodels.api as sm
from dowhy.causal_estimator import CausalEstimate, CausalEstimator, IdentifiedEstimand
class RegressionEstimator(CausalEstimator):
"""Compute effect of treatment using some regression function.
Fits a regression model for estimating the outcome using treatment(s) and
confounders.
Base class for all regression models, inherited by
LinearRegressionEstimator and GeneralizedLinearModelEstimator.
"""
def __init__(
self,
identified_estimand: IdentifiedEstimand,
test_significance: bool = False,
evaluate_effect_strength: bool = False,
confidence_intervals: bool = False,
num_null_simulations: int = CausalEstimator.DEFAULT_NUMBER_OF_SIMULATIONS_STAT_TEST,
num_simulations: int = CausalEstimator.DEFAULT_NUMBER_OF_SIMULATIONS_CI,
sample_size_fraction: int = CausalEstimator.DEFAULT_SAMPLE_SIZE_FRACTION,
confidence_level: float = CausalEstimator.DEFAULT_CONFIDENCE_LEVEL,
need_conditional_estimates: Union[bool, str] = "auto",
num_quantiles_to_discretize_cont_cols: int = CausalEstimator.NUM_QUANTILES_TO_DISCRETIZE_CONT_COLS,
**kwargs,
):
"""
:param identified_estimand: probability expression
representing the target identified estimand to estimate.
:param test_significance: Binary flag or a string indicating whether to test significance and by which method. All estimators support test_significance="bootstrap" that estimates a p-value for the obtained estimate using the bootstrap method. Individual estimators can override this to support custom testing methods. The bootstrap method supports an optional parameter, num_null_simulations. If False, no testing is done. If True, significance of the estimate is tested using the custom method if available, otherwise by bootstrap.
:param evaluate_effect_strength: (Experimental) whether to evaluate the strength of effect
:param confidence_intervals: Binary flag or a string indicating whether the confidence intervals should be computed and which method should be used. All methods support estimation of confidence intervals using the bootstrap method by using the parameter confidence_intervals="bootstrap". The bootstrap method takes in two arguments (num_simulations and sample_size_fraction) that can be optionally specified in the params dictionary. Estimators may also override this to implement their own confidence interval method. If this parameter is False, no confidence intervals are computed. If True, confidence intervals are computed by the estimator's specific method if available, otherwise through bootstrap
:param num_null_simulations: The number of simulations for testing the
statistical significance of the estimator
:param num_simulations: The number of simulations for finding the
            confidence interval (and/or standard error) for an estimate
:param sample_size_fraction: The size of the sample for the bootstrap
estimator
:param confidence_level: The confidence level of the confidence
interval estimate
:param need_conditional_estimates: Boolean flag indicating whether
conditional estimates should be computed. Defaults to True if
there are effect modifiers in the graph
:param num_quantiles_to_discretize_cont_cols: The number of quantiles
into which a numeric effect modifier is split, to enable
estimation of conditional treatment effect over it.
:param kwargs: (optional) Additional estimator-specific parameters
"""
super().__init__(
identified_estimand=identified_estimand,
test_significance=test_significance,
evaluate_effect_strength=evaluate_effect_strength,
confidence_intervals=confidence_intervals,
num_null_simulations=num_null_simulations,
num_simulations=num_simulations,
sample_size_fraction=sample_size_fraction,
confidence_level=confidence_level,
need_conditional_estimates=need_conditional_estimates,
num_quantiles_to_discretize_cont_cols=num_quantiles_to_discretize_cont_cols,
**kwargs,
)
self.model = None
def fit(
self,
data: pd.DataFrame,
treatment_name: str,
outcome_name: str,
effect_modifier_names: Optional[List[str]] = None,
):
"""
Fits the estimator with data for effect estimation
:param data: data frame containing the data
        :param treatment_name: name of the treatment variable
        :param outcome_name: name of the outcome variable
        :param effect_modifier_names: names of the variables on which to compute
            separate effects, or return a heterogeneous effect function. Not all
            methods support this currently.
"""
self._set_data(data, treatment_name, outcome_name)
self._set_effect_modifiers(effect_modifier_names)
self.logger.debug("Back-door variables used:" + ",".join(self._target_estimand.get_backdoor_variables()))
self._observed_common_causes_names = self._target_estimand.get_backdoor_variables()
if len(self._observed_common_causes_names) > 0:
self._observed_common_causes = self._data[self._observed_common_causes_names]
self._observed_common_causes = pd.get_dummies(self._observed_common_causes, drop_first=True)
else:
self._observed_common_causes = None
self.symbolic_estimator = self.construct_symbolic_estimator(self._target_estimand)
self.logger.info(self.symbolic_estimator)
# The model is always built on the entire data
_, self.model = self._build_model()
coefficients = self.model.params[1:] # first coefficient is the intercept
self.logger.debug("Coefficients of the fitted model: " + ",".join(map(str, coefficients)))
self.logger.debug(self.model.summary())
return self
def estimate_effect(
self,
data: pd.DataFrame = None,
treatment_value: Any = 1,
control_value: Any = 0,
target_units=None,
need_conditional_estimates=None,
**_,
):
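        """Estimate the average effect of the treatment on the outcome.

        :param data: data frame to estimate the effect on; defaults to the data passed to fit()
        :param treatment_value: value of the treatment variable for the treated units
        :param control_value: value of the treatment variable for the control units
        :param target_units: units of the population for which to estimate the effect
        :param need_conditional_estimates: whether to also compute conditional estimates over the effect modifiers
        """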
if data is None:
data = self._data
self._target_units = target_units
self._treatment_value = treatment_value
self._control_value = control_value
# TODO make treatment_value and control value also as local parameters
# All treatments are set to the same constant value
effect_estimate = self._do(treatment_value, data) - self._do(control_value, data)
conditional_effect_estimates = None
if need_conditional_estimates:
conditional_effect_estimates = self._estimate_conditional_effects(
self._estimate_effect_fn, effect_modifier_names=self._effect_modifier_names
)
intercept_parameter = self.model.params[0]
estimate = CausalEstimate(
estimate=effect_estimate,
control_value=control_value,
treatment_value=treatment_value,
conditional_estimates=conditional_effect_estimates,
target_estimand=self._target_estimand,
realized_estimand_expr=self.symbolic_estimator,
intercept=intercept_parameter,
)
estimate.add_estimator(self)
return estimate
def _estimate_effect_fn(self, data_df):
est = self.estimate_effect(data=data_df, need_conditional_estimates=False)
return est.value
def _build_features(self, treatment_values=None, data_df=None):
# Using all data by default
if data_df is None:
data_df = self._data
treatment_vals = pd.get_dummies(self._treatment, drop_first=True)
observed_common_causes_vals = self._observed_common_causes
effect_modifiers_vals = self._effect_modifiers
else:
treatment_vals = pd.get_dummies(data_df[self._treatment_name], drop_first=True)
if len(self._observed_common_causes_names) > 0:
observed_common_causes_vals = data_df[self._observed_common_causes_names]
observed_common_causes_vals = pd.get_dummies(observed_common_causes_vals, drop_first=True)
if self._effect_modifier_names:
effect_modifiers_vals = data_df[self._effect_modifier_names]
effect_modifiers_vals = pd.get_dummies(effect_modifiers_vals, drop_first=True)
# Fixing treatment value to the specified value, if provided
if treatment_values is not None:
treatment_vals = treatment_values
if type(treatment_vals) is not np.ndarray:
treatment_vals = treatment_vals.to_numpy()
# treatment_vals and data_df should have same number of rows
if treatment_vals.shape[0] != data_df.shape[0]:
raise ValueError("Provided treatment values and dataframe should have the same length.")
        # Building the feature matrix
n_treatment_cols = 1 if len(treatment_vals.shape) == 1 else treatment_vals.shape[1]
n_samples = treatment_vals.shape[0]
treatment_2d = treatment_vals.reshape((n_samples, n_treatment_cols))
if len(self._observed_common_causes_names) > 0:
features = np.concatenate((treatment_2d, observed_common_causes_vals), axis=1)
else:
features = treatment_2d
if self._effect_modifier_names:
for i in range(treatment_2d.shape[1]):
curr_treatment = treatment_2d[:, i]
new_features = curr_treatment[:, np.newaxis] * effect_modifiers_vals.to_numpy()
features = np.concatenate((features, new_features), axis=1)
features = features.astype(
float, copy=False
) # converting to float in case of binary treatment and no other variables
features = sm.add_constant(features, has_constant="add") # to add an intercept term
return features
def _do(self, treatment_val, data_df=None):
if data_df is None:
data_df = self._data
if not self.model:
# The model is always built on the entire data
_, self.model = self._build_model()
# Replacing treatment values by given x
# First, create interventional tensor in original space
interventional_treatment_values = np.full((data_df.shape[0], len(self._treatment_name)), treatment_val)
# Then, use pandas to ensure that the dummies are assigned correctly for a categorical treatment
interventional_treatment_2d = pd.concat(
[
self._treatment.copy(),
pd.DataFrame(data=interventional_treatment_values, columns=self._treatment.columns),
],
axis=0,
).astype(self._treatment.dtypes, copy=False)
interventional_treatment_2d = pd.get_dummies(interventional_treatment_2d, drop_first=True)
interventional_treatment_2d = interventional_treatment_2d[self._treatment.shape[0] :]
new_features = self._build_features(treatment_values=interventional_treatment_2d, data_df=data_df)
interventional_outcomes = self.predict_fn(self.model, new_features)
return interventional_outcomes.mean()
| andresmor-ms | 11c4e0dafd6e824eb81ad14262457d954ae61468 | affe0952f4aba6845247355c171565510c2c1673 | need docstring | amit-sharma | 242 |
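
Following up on the docstring request above, here is a short, hypothetical continuation of the earlier usage sketches showing how the conditional-estimate path of estimate_effect is exercised; the subclass, data frame, and column names (df, v0, y, X0) are assumptions, not taken from this PR.

from dowhy.causal_estimators.linear_regression_estimator import LinearRegressionEstimator

estimator = LinearRegressionEstimator(identified_estimand)
estimator.fit(df, treatment_name="v0", outcome_name="y", effect_modifier_names=["X0"])
estimate = estimator.estimate_effect(treatment_value=1, control_value=0,
                                     need_conditional_estimates=True)
print(estimate.value)
print(estimate.conditional_estimates)
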
py-why/dowhy | 746 | Functional api/causal estimators | * Introduce `fit()` method to estimators.
* Refactor constructors to avoid using `*args` and `**kwargs` and have more explicit parameters.
* Refactor refuters and other parts of the code to use `fit()` and modify arguments to `estimate_effect()` | null | 2022-11-04 16:15:39+00:00 | 2022-12-03 17:07:53+00:00 | dowhy/causal_estimators/regression_estimator.py | import numpy as np
import pandas as pd
import statsmodels.api as sm
from dowhy.causal_estimator import CausalEstimate, CausalEstimator
class RegressionEstimator(CausalEstimator):
"""Compute effect of treatment using some regression function.
Fits a regression model for estimating the outcome using treatment(s) and
confounders.
Base class for all regression models, inherited by
LinearRegressionEstimator and GeneralizedLinearModelEstimator.
"""
def __init__(self, *args, **kwargs):
"""For a list of standard args and kwargs, see documentation for
:class:`~dowhy.causal_estimator.CausalEstimator`.
"""
super().__init__(*args, **kwargs)
self.logger.debug("Back-door variables used:" + ",".join(self._target_estimand.get_backdoor_variables()))
self._observed_common_causes_names = self._target_estimand.get_backdoor_variables()
if len(self._observed_common_causes_names) > 0:
self._observed_common_causes = self._data[self._observed_common_causes_names]
self._observed_common_causes = pd.get_dummies(self._observed_common_causes, drop_first=True)
else:
self._observed_common_causes = None
self.symbolic_estimator = self.construct_symbolic_estimator(self._target_estimand)
self.logger.info(self.symbolic_estimator)
self.model = None
def _estimate_effect(self, data_df=None, need_conditional_estimates=None):
        # TODO: make treatment_value and control_value local parameters as well
if data_df is None:
data_df = self._data
if need_conditional_estimates is None:
need_conditional_estimates = self.need_conditional_estimates
# Checking if the model is already trained
if not self.model:
# The model is always built on the entire data
_, self.model = self._build_model()
coefficients = self.model.params[1:] # first coefficient is the intercept
self.logger.debug("Coefficients of the fitted model: " + ",".join(map(str, coefficients)))
self.logger.debug(self.model.summary())
# All treatments are set to the same constant value
effect_estimate = self._do(self._treatment_value, data_df) - self._do(self._control_value, data_df)
conditional_effect_estimates = None
if need_conditional_estimates:
conditional_effect_estimates = self._estimate_conditional_effects(
self._estimate_effect_fn, effect_modifier_names=self._effect_modifier_names
)
intercept_parameter = self.model.params[0]
estimate = CausalEstimate(
estimate=effect_estimate,
control_value=self._control_value,
treatment_value=self._treatment_value,
conditional_estimates=conditional_effect_estimates,
target_estimand=self._target_estimand,
realized_estimand_expr=self.symbolic_estimator,
intercept=intercept_parameter,
)
return estimate
def _estimate_effect_fn(self, data_df):
est = self._estimate_effect(data_df, need_conditional_estimates=False)
return est.value
def _build_features(self, treatment_values=None, data_df=None):
# Using all data by default
if data_df is None:
data_df = self._data
treatment_vals = pd.get_dummies(self._treatment, drop_first=True)
observed_common_causes_vals = self._observed_common_causes
effect_modifiers_vals = self._effect_modifiers
else:
treatment_vals = pd.get_dummies(data_df[self._treatment_name], drop_first=True)
if len(self._observed_common_causes_names) > 0:
observed_common_causes_vals = data_df[self._observed_common_causes_names]
observed_common_causes_vals = pd.get_dummies(observed_common_causes_vals, drop_first=True)
if self._effect_modifier_names:
effect_modifiers_vals = data_df[self._effect_modifier_names]
effect_modifiers_vals = pd.get_dummies(effect_modifiers_vals, drop_first=True)
# Fixing treatment value to the specified value, if provided
if treatment_values is not None:
treatment_vals = treatment_values
if type(treatment_vals) is not np.ndarray:
treatment_vals = treatment_vals.to_numpy()
# treatment_vals and data_df should have same number of rows
if treatment_vals.shape[0] != data_df.shape[0]:
raise ValueError("Provided treatment values and dataframe should have the same length.")
        # Building the feature matrix
n_treatment_cols = 1 if len(treatment_vals.shape) == 1 else treatment_vals.shape[1]
n_samples = treatment_vals.shape[0]
treatment_2d = treatment_vals.reshape((n_samples, n_treatment_cols))
if len(self._observed_common_causes_names) > 0:
features = np.concatenate((treatment_2d, observed_common_causes_vals), axis=1)
else:
features = treatment_2d
if self._effect_modifier_names:
for i in range(treatment_2d.shape[1]):
curr_treatment = treatment_2d[:, i]
new_features = curr_treatment[:, np.newaxis] * effect_modifiers_vals.to_numpy()
features = np.concatenate((features, new_features), axis=1)
features = features.astype(
float, copy=False
) # converting to float in case of binary treatment and no other variables
features = sm.add_constant(features, has_constant="add") # to add an intercept term
return features
def _do(self, treatment_val, data_df=None):
if data_df is None:
data_df = self._data
if not self.model:
# The model is always built on the entire data
_, self.model = self._build_model()
# Replacing treatment values by given x
# First, create interventional tensor in original space
interventional_treatment_values = np.full((data_df.shape[0], len(self._treatment_name)), treatment_val)
# Then, use pandas to ensure that the dummies are assigned correctly for a categorical treatment
interventional_treatment_2d = pd.concat(
[
self._treatment.copy(),
pd.DataFrame(data=interventional_treatment_values, columns=self._treatment.columns),
],
axis=0,
).astype(self._treatment.dtypes, copy=False)
interventional_treatment_2d = pd.get_dummies(interventional_treatment_2d, drop_first=True)
interventional_treatment_2d = interventional_treatment_2d[self._treatment.shape[0] :]
new_features = self._build_features(treatment_values=interventional_treatment_2d, data_df=data_df)
interventional_outcomes = self.predict_fn(self.model, new_features)
return interventional_outcomes.mean()
| from typing import Any, List, Optional, Union
import numpy as np
import pandas as pd
import statsmodels.api as sm
from dowhy.causal_estimator import CausalEstimate, CausalEstimator, IdentifiedEstimand
class RegressionEstimator(CausalEstimator):
"""Compute effect of treatment using some regression function.
Fits a regression model for estimating the outcome using treatment(s) and
confounders.
Base class for all regression models, inherited by
LinearRegressionEstimator and GeneralizedLinearModelEstimator.
"""
def __init__(
self,
identified_estimand: IdentifiedEstimand,
test_significance: bool = False,
evaluate_effect_strength: bool = False,
confidence_intervals: bool = False,
num_null_simulations: int = CausalEstimator.DEFAULT_NUMBER_OF_SIMULATIONS_STAT_TEST,
num_simulations: int = CausalEstimator.DEFAULT_NUMBER_OF_SIMULATIONS_CI,
sample_size_fraction: int = CausalEstimator.DEFAULT_SAMPLE_SIZE_FRACTION,
confidence_level: float = CausalEstimator.DEFAULT_CONFIDENCE_LEVEL,
need_conditional_estimates: Union[bool, str] = "auto",
num_quantiles_to_discretize_cont_cols: int = CausalEstimator.NUM_QUANTILES_TO_DISCRETIZE_CONT_COLS,
**kwargs,
):
"""
:param identified_estimand: probability expression
representing the target identified estimand to estimate.
:param test_significance: Binary flag or a string indicating whether to test significance and by which method. All estimators support test_significance="bootstrap" that estimates a p-value for the obtained estimate using the bootstrap method. Individual estimators can override this to support custom testing methods. The bootstrap method supports an optional parameter, num_null_simulations. If False, no testing is done. If True, significance of the estimate is tested using the custom method if available, otherwise by bootstrap.
:param evaluate_effect_strength: (Experimental) whether to evaluate the strength of effect
:param confidence_intervals: Binary flag or a string indicating whether the confidence intervals should be computed and which method should be used. All methods support estimation of confidence intervals using the bootstrap method by using the parameter confidence_intervals="bootstrap". The bootstrap method takes in two arguments (num_simulations and sample_size_fraction) that can be optionally specified in the params dictionary. Estimators may also override this to implement their own confidence interval method. If this parameter is False, no confidence intervals are computed. If True, confidence intervals are computed by the estimator's specific method if available, otherwise through bootstrap
:param num_null_simulations: The number of simulations for testing the
statistical significance of the estimator
:param num_simulations: The number of simulations for finding the
            confidence interval (and/or standard error) for an estimate
:param sample_size_fraction: The size of the sample for the bootstrap
estimator
:param confidence_level: The confidence level of the confidence
interval estimate
:param need_conditional_estimates: Boolean flag indicating whether
conditional estimates should be computed. Defaults to True if
there are effect modifiers in the graph
:param num_quantiles_to_discretize_cont_cols: The number of quantiles
into which a numeric effect modifier is split, to enable
estimation of conditional treatment effect over it.
:param kwargs: (optional) Additional estimator-specific parameters
"""
super().__init__(
identified_estimand=identified_estimand,
test_significance=test_significance,
evaluate_effect_strength=evaluate_effect_strength,
confidence_intervals=confidence_intervals,
num_null_simulations=num_null_simulations,
num_simulations=num_simulations,
sample_size_fraction=sample_size_fraction,
confidence_level=confidence_level,
need_conditional_estimates=need_conditional_estimates,
num_quantiles_to_discretize_cont_cols=num_quantiles_to_discretize_cont_cols,
**kwargs,
)
self.model = None
def fit(
self,
data: pd.DataFrame,
treatment_name: str,
outcome_name: str,
effect_modifier_names: Optional[List[str]] = None,
):
"""
        Fits the estimator with data for effect estimation.

        :param data: data frame containing the data
        :param treatment_name: name of the treatment variable
        :param outcome_name: name of the outcome variable
        :param effect_modifier_names: names of the variables on which to
            compute separate effects, or return a heterogeneous effect
            function. Not all methods support this currently.
"""
self._set_data(data, treatment_name, outcome_name)
self._set_effect_modifiers(effect_modifier_names)
self.logger.debug("Back-door variables used:" + ",".join(self._target_estimand.get_backdoor_variables()))
self._observed_common_causes_names = self._target_estimand.get_backdoor_variables()
if len(self._observed_common_causes_names) > 0:
self._observed_common_causes = self._data[self._observed_common_causes_names]
self._observed_common_causes = pd.get_dummies(self._observed_common_causes, drop_first=True)
else:
self._observed_common_causes = None
self.symbolic_estimator = self.construct_symbolic_estimator(self._target_estimand)
self.logger.info(self.symbolic_estimator)
# The model is always built on the entire data
_, self.model = self._build_model()
coefficients = self.model.params[1:] # first coefficient is the intercept
self.logger.debug("Coefficients of the fitted model: " + ",".join(map(str, coefficients)))
self.logger.debug(self.model.summary())
return self
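    # fit() returns self, so construction, fitting, and estimation can be
    # chained; a hypothetical sketch (variable names illustrative):
    #   est = LinearRegressionEstimator(estimand).fit(df, "v0", "y")
    #   effect = est.estimate_effect(control_value=0, treatment_value=1)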
def estimate_effect(
self,
data: pd.DataFrame = None,
treatment_value: Any = 1,
control_value: Any = 0,
target_units=None,
need_conditional_estimates=None,
**_,
):
if data is None:
data = self._data
self._target_units = target_units
self._treatment_value = treatment_value
self._control_value = control_value
        # TODO: make treatment_value and control_value local parameters as well
# All treatments are set to the same constant value
effect_estimate = self._do(treatment_value, data) - self._do(control_value, data)
conditional_effect_estimates = None
        if need_conditional_estimates is None:
            need_conditional_estimates = self.need_conditional_estimates
        if need_conditional_estimates:
conditional_effect_estimates = self._estimate_conditional_effects(
self._estimate_effect_fn, effect_modifier_names=self._effect_modifier_names
)
intercept_parameter = self.model.params[0]
estimate = CausalEstimate(
estimate=effect_estimate,
control_value=control_value,
treatment_value=treatment_value,
conditional_estimates=conditional_effect_estimates,
target_estimand=self._target_estimand,
realized_estimand_expr=self.symbolic_estimator,
intercept=intercept_parameter,
)
estimate.add_estimator(self)
return estimate
def _estimate_effect_fn(self, data_df):
est = self.estimate_effect(data=data_df, need_conditional_estimates=False)
return est.value
def _build_features(self, treatment_values=None, data_df=None):
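        """Build the design matrix for the regression model.

        The matrix consists of the (dummified) treatment columns, the
        dummified observed common causes, treatment-by-effect-modifier
        interaction terms, and an intercept column.
        """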
# Using all data by default
if data_df is None:
data_df = self._data
treatment_vals = pd.get_dummies(self._treatment, drop_first=True)
observed_common_causes_vals = self._observed_common_causes
effect_modifiers_vals = self._effect_modifiers
else:
treatment_vals = pd.get_dummies(data_df[self._treatment_name], drop_first=True)
if len(self._observed_common_causes_names) > 0:
observed_common_causes_vals = data_df[self._observed_common_causes_names]
observed_common_causes_vals = pd.get_dummies(observed_common_causes_vals, drop_first=True)
if self._effect_modifier_names:
effect_modifiers_vals = data_df[self._effect_modifier_names]
effect_modifiers_vals = pd.get_dummies(effect_modifiers_vals, drop_first=True)
# Fixing treatment value to the specified value, if provided
if treatment_values is not None:
treatment_vals = treatment_values
if type(treatment_vals) is not np.ndarray:
treatment_vals = treatment_vals.to_numpy()
# treatment_vals and data_df should have same number of rows
if treatment_vals.shape[0] != data_df.shape[0]:
raise ValueError("Provided treatment values and dataframe should have the same length.")
        # Building the feature matrix
n_treatment_cols = 1 if len(treatment_vals.shape) == 1 else treatment_vals.shape[1]
n_samples = treatment_vals.shape[0]
treatment_2d = treatment_vals.reshape((n_samples, n_treatment_cols))
if len(self._observed_common_causes_names) > 0:
features = np.concatenate((treatment_2d, observed_common_causes_vals), axis=1)
else:
features = treatment_2d
if self._effect_modifier_names:
for i in range(treatment_2d.shape[1]):
curr_treatment = treatment_2d[:, i]
new_features = curr_treatment[:, np.newaxis] * effect_modifiers_vals.to_numpy()
features = np.concatenate((features, new_features), axis=1)
features = features.astype(
float, copy=False
) # converting to float in case of binary treatment and no other variables
features = sm.add_constant(features, has_constant="add") # to add an intercept term
return features
def _do(self, treatment_val, data_df=None):
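        """Estimate the average interventional outcome E[Y | do(T=treatment_val)].

        Sets every treatment column to ``treatment_val``, rebuilds the
        feature matrix, and returns the mean outcome predicted by the fitted
        model over ``data_df`` (the estimator's own data if ``data_df`` is
        None).
        """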
if data_df is None:
data_df = self._data
if not self.model:
# The model is always built on the entire data
_, self.model = self._build_model()
# Replacing treatment values by given x
# First, create interventional tensor in original space
interventional_treatment_values = np.full((data_df.shape[0], len(self._treatment_name)), treatment_val)
# Then, use pandas to ensure that the dummies are assigned correctly for a categorical treatment
interventional_treatment_2d = pd.concat(
[
self._treatment.copy(),
pd.DataFrame(data=interventional_treatment_values, columns=self._treatment.columns),
],
axis=0,
).astype(self._treatment.dtypes, copy=False)
interventional_treatment_2d = pd.get_dummies(interventional_treatment_2d, drop_first=True)
interventional_treatment_2d = interventional_treatment_2d[self._treatment.shape[0] :]
new_features = self._build_features(treatment_values=interventional_treatment_2d, data_df=data_df)
interventional_outcomes = self.predict_fn(self.model, new_features)
return interventional_outcomes.mean()
| andresmor-ms | 11c4e0dafd6e824eb81ad14262457d954ae61468 | affe0952f4aba6845247355c171565510c2c1673 | It's the same as for the econml estimator. Additionally,
need_conditional_estimates: Boolean flag on whether treatment effect estimates conditional on the effect modifiers are needed. Otherwise, the average treatment effect is returned. | amit-sharma | 243 |
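To make the refactored API concrete, here is a minimal usage sketch of the fit()/estimate_effect() convention this PR introduces. This is a sketch only: it assumes DoWhy's existing dowhy.datasets.linear_dataset helper and CausalModel.identify_effect(), while the constructor, fit(), and estimate_effect() calls follow the refactored signatures shown in the file above; variable names are illustrative.

import dowhy.datasets
from dowhy import CausalModel
from dowhy.causal_estimators.linear_regression_estimator import LinearRegressionEstimator

data = dowhy.datasets.linear_dataset(beta=10, num_common_causes=3, num_samples=1000)
model = CausalModel(data=data["df"], treatment=data["treatment_name"],
                    outcome=data["outcome_name"], graph=data["gml_graph"])
estimand = model.identify_effect()

# New style: construct with the identified estimand, then fit, then estimate.
estimator = LinearRegressionEstimator(estimand)
estimator.fit(data["df"], data["treatment_name"], data["outcome_name"])
estimate = estimator.estimate_effect(control_value=0, treatment_value=1)
print(estimate.value)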
py-why/dowhy | 746 | Functional api/causal estimators | * Introduce `fit()` method to estimators.
* Refactor constructors to avoid using `*args` and `**kwargs` and have more explicit parameters.
* Refactor refuters and other parts of the code to use `fit()` and modify arguments to `estimate_effect()` | null | 2022-11-04 16:15:39+00:00 | 2022-12-03 17:07:53+00:00 | dowhy/causal_estimators/two_stage_regression_estimator.py | import copy
import itertools
import numpy as np
import pandas as pd
from dowhy.causal_estimator import CausalEstimate, CausalEstimator
from dowhy.causal_estimators.linear_regression_estimator import LinearRegressionEstimator
from dowhy.causal_identifier.identify_effect import EstimandType
from dowhy.utils.api import parse_state
class TwoStageRegressionEstimator(CausalEstimator):
"""Compute treatment effect whenever the effect is fully mediated by
another variable (front-door) or when there is an instrument available.
Currently only supports a linear model for the effects.
For a list of standard args and kwargs, see documentation for
:class:`~dowhy.causal_estimator.CausalEstimator`.
Supports additional parameters as listed below.
"""
# First stage statistical model
DEFAULT_FIRST_STAGE_MODEL = LinearRegressionEstimator
# Second stage statistical model
DEFAULT_SECOND_STAGE_MODEL = LinearRegressionEstimator
def __init__(self, *args, first_stage_model=None, second_stage_model=None, **kwargs):
"""
:param first_stage_model: First stage estimator to be used. Default is
linear regression.
:param second_stage_model: Second stage estimator to be used. Default
is linear regression.
"""
# Required to ensure that self.method_params contains all the
# parameters needed to create an object of this class
args_dict = {k: v for k, v in locals().items() if k not in type(self)._STD_INIT_ARGS}
args_dict.update(kwargs)
super().__init__(*args, **args_dict)
self.logger.info("INFO: Using Two Stage Regression Estimator")
# Check if the treatment is one-dimensional
if len(self._treatment_name) > 1:
error_msg = str(self.__class__) + "cannot handle more than one treatment variable"
raise Exception(error_msg)
if self._target_estimand.identifier_method == "frontdoor":
self.logger.debug("Front-door variable used:" + ",".join(self._target_estimand.get_frontdoor_variables()))
self._frontdoor_variables_names = self._target_estimand.get_frontdoor_variables()
if self._frontdoor_variables_names:
self._frontdoor_variables = self._data[self._frontdoor_variables_names]
else:
self._frontdoor_variables = None
error_msg = "No front-door variable present. Two stage regression is not applicable"
self.logger.error(error_msg)
elif self._target_estimand.identifier_method == "mediation":
self.logger.debug("Mediators used:" + ",".join(self._target_estimand.get_mediator_variables()))
self._mediators_names = self._target_estimand.get_mediator_variables()
if self._mediators_names:
self._mediators = self._data[self._mediators_names]
else:
self._mediators = None
error_msg = "No mediator variable present. Two stage regression is not applicable"
self.logger.error(error_msg)
elif self._target_estimand.identifier_method == "iv":
self.logger.debug(
"Instrumental variables used:" + ",".join(self._target_estimand.get_instrumental_variables())
)
self._instrumental_variables_names = self._target_estimand.get_instrumental_variables()
if self._instrumental_variables_names:
self._instrumental_variables = self._data[self._instrumental_variables_names]
else:
self._instrumental_variables = None
error_msg = "No instrumental variable present. Two stage regression is not applicable"
self.logger.error(error_msg)
if first_stage_model is not None:
self.first_stage_model = first_stage_model
else:
self.first_stage_model = self.__class__.DEFAULT_FIRST_STAGE_MODEL
self.logger.warning("First stage model not provided. Defaulting to sklearn.linear_model.LinearRegression.")
if second_stage_model is not None:
self.second_stage_model = second_stage_model
else:
self.second_stage_model = self.__class__.DEFAULT_SECOND_STAGE_MODEL
self.logger.warning("Second stage model not provided. Defaulting to backdoor.linear_regression.")
def _estimate_effect(self):
estimate_value = None
# First stage
modified_target_estimand = copy.deepcopy(self._target_estimand)
modified_target_estimand.identifier_method = "backdoor"
modified_target_estimand.backdoor_variables = self._target_estimand.mediation_first_stage_confounders
if self._target_estimand.identifier_method == "frontdoor":
modified_target_estimand.outcome_variable = parse_state(self._frontdoor_variables_names)
elif self._target_estimand.identifier_method == "mediation":
modified_target_estimand.outcome_variable = parse_state(self._mediators_names)
first_stage_estimate = self.first_stage_model(
self._data,
modified_target_estimand,
self._treatment_name,
parse_state(modified_target_estimand.outcome_variable),
control_value=self._control_value,
treatment_value=self._treatment_value,
test_significance=self._significance_test,
evaluate_effect_strength=self._effect_strength_eval,
confidence_intervals=self._confidence_intervals,
target_units=self._target_units,
effect_modifiers=self._effect_modifier_names,
**self.method_params,
)._estimate_effect()
# Second Stage
modified_target_estimand = copy.deepcopy(self._target_estimand)
modified_target_estimand.identifier_method = "backdoor"
modified_target_estimand.backdoor_variables = self._target_estimand.mediation_second_stage_confounders
if self._target_estimand.identifier_method == "frontdoor":
modified_target_estimand.treatment_variable = parse_state(self._frontdoor_variables_names)
elif self._target_estimand.identifier_method == "mediation":
modified_target_estimand.treatment_variable = parse_state(self._mediators_names)
second_stage_estimate = self.second_stage_model(
self._data,
modified_target_estimand,
parse_state(modified_target_estimand.treatment_variable),
parse_state(self._outcome_name), # to convert it to array before passing to causal estimator
control_value=self._control_value,
treatment_value=self._treatment_value,
test_significance=self._significance_test,
evaluate_effect_strength=self._effect_strength_eval,
confidence_intervals=self._confidence_intervals,
target_units=self._target_units,
effect_modifiers=self._effect_modifier_names,
**self.method_params,
)._estimate_effect()
# Combining the two estimates
natural_indirect_effect = first_stage_estimate.value * second_stage_estimate.value
# This same estimate is valid for frontdoor as well as mediation (NIE)
estimate_value = natural_indirect_effect
self.symbolic_estimator = self.construct_symbolic_estimator(
first_stage_estimate.realized_estimand_expr,
second_stage_estimate.realized_estimand_expr,
estimand_type=EstimandType.NONPARAMETRIC_NIE,
)
if self._target_estimand.estimand_type == EstimandType.NONPARAMETRIC_NDE:
# Total effect of treatment
modified_target_estimand = copy.deepcopy(self._target_estimand)
modified_target_estimand.identifier_method = "backdoor"
total_effect_estimate = self.second_stage_model(
self._data,
modified_target_estimand,
self._treatment_name,
parse_state(self._outcome_name),
control_value=self._control_value,
treatment_value=self._treatment_value,
test_significance=self._significance_test,
evaluate_effect_strength=self._effect_strength_eval,
confidence_intervals=self._confidence_intervals,
target_units=self._target_units,
effect_modifiers=self._effect_modifier_names,
**self.method_params,
)._estimate_effect()
natural_direct_effect = total_effect_estimate.value - natural_indirect_effect
estimate_value = natural_direct_effect
self.symbolic_estimator = self.construct_symbolic_estimator(
first_stage_estimate.realized_estimand_expr,
second_stage_estimate.realized_estimand_expr,
total_effect_estimate.realized_estimand_expr,
estimand_type=self._target_estimand.estimand_type,
)
return CausalEstimate(
estimate=estimate_value,
control_value=self._control_value,
treatment_value=self._treatment_value,
target_estimand=self._target_estimand,
realized_estimand_expr=self.symbolic_estimator,
)
def build_first_stage_features(self):
data_df = self._data
treatment_vals = data_df[self._treatment_name]
if len(self._observed_common_causes_names) > 0:
observed_common_causes_vals = data_df[self._observed_common_causes_names]
observed_common_causes_vals = pd.get_dummies(observed_common_causes_vals, drop_first=True)
if self._effect_modifier_names:
effect_modifiers_vals = data_df[self._effect_modifier_names]
effect_modifiers_vals = pd.get_dummies(effect_modifiers_vals, drop_first=True)
if type(treatment_vals) is not np.ndarray:
treatment_vals = treatment_vals.to_numpy()
if treatment_vals.shape[0] != data_df.shape[0]:
raise ValueError("Provided treatment values and dataframe should have the same length.")
        # Building the feature matrix
n_samples = treatment_vals.shape[0]
self.logger.debug("Number of samples" + str(n_samples) + str(len(self._treatment_name)))
treatment_2d = treatment_vals.reshape((n_samples, len(self._treatment_name)))
if len(self._observed_common_causes_names) > 0:
features = np.concatenate((treatment_2d, observed_common_causes_vals), axis=1)
else:
features = treatment_2d
if self._effect_modifier_names:
for i in range(treatment_2d.shape[1]):
curr_treatment = treatment_2d[:, i]
new_features = curr_treatment[:, np.newaxis] * effect_modifiers_vals.to_numpy()
features = np.concatenate((features, new_features), axis=1)
features = features.astype(
float, copy=False
) # converting to float in case of binary treatment and no other variables
# features = sm.add_constant(features, has_constant='add') # to add an intercept term
return features
def construct_symbolic_estimator(
self, first_stage_symbolic, second_stage_symbolic, total_effect_symbolic=None, estimand_type=None
):
nie_symbolic = "(" + first_stage_symbolic + ")*(" + second_stage_symbolic + ")"
if estimand_type == EstimandType.NONPARAMETRIC_NIE:
return nie_symbolic
elif estimand_type == EstimandType.NONPARAMETRIC_NDE:
return "(" + total_effect_symbolic + ") - (" + nie_symbolic + ")"
| import copy
from typing import Any, List, Optional, Type, Union
import numpy as np
import pandas as pd
from dowhy.causal_estimator import CausalEstimate, CausalEstimator
from dowhy.causal_estimators.linear_regression_estimator import LinearRegressionEstimator
from dowhy.causal_identifier import EstimandType, IdentifiedEstimand
from dowhy.utils.api import parse_state
class TwoStageRegressionEstimator(CausalEstimator):
"""Compute treatment effect whenever the effect is fully mediated by
another variable (front-door) or when there is an instrument available.
Currently only supports a linear model for the effects.
Supports additional parameters as listed below.
"""
# First stage statistical model
DEFAULT_FIRST_STAGE_MODEL = LinearRegressionEstimator
# Second stage statistical model
DEFAULT_SECOND_STAGE_MODEL = LinearRegressionEstimator
def __init__(
self,
identified_estimand: IdentifiedEstimand,
test_significance: bool = False,
evaluate_effect_strength: bool = False,
confidence_intervals: bool = False,
num_null_simulations: int = CausalEstimator.DEFAULT_NUMBER_OF_SIMULATIONS_STAT_TEST,
num_simulations: int = CausalEstimator.DEFAULT_NUMBER_OF_SIMULATIONS_CI,
sample_size_fraction: int = CausalEstimator.DEFAULT_SAMPLE_SIZE_FRACTION,
confidence_level: float = CausalEstimator.DEFAULT_CONFIDENCE_LEVEL,
need_conditional_estimates: Union[bool, str] = "auto",
num_quantiles_to_discretize_cont_cols: int = CausalEstimator.NUM_QUANTILES_TO_DISCRETIZE_CONT_COLS,
first_stage_model: Optional[Union[CausalEstimator, Type[CausalEstimator]]] = None,
second_stage_model: Optional[Union[CausalEstimator, Type[CausalEstimator]]] = None,
**kwargs,
):
"""
:param identified_estimand: probability expression
representing the target identified estimand to estimate.
:param test_significance: Binary flag or a string indicating whether to test significance and by which method. All estimators support test_significance="bootstrap" that estimates a p-value for the obtained estimate using the bootstrap method. Individual estimators can override this to support custom testing methods. The bootstrap method supports an optional parameter, num_null_simulations. If False, no testing is done. If True, significance of the estimate is tested using the custom method if available, otherwise by bootstrap.
:param evaluate_effect_strength: (Experimental) whether to evaluate the strength of effect
:param confidence_intervals: Binary flag or a string indicating whether the confidence intervals should be computed and which method should be used. All methods support estimation of confidence intervals using the bootstrap method by using the parameter confidence_intervals="bootstrap". The bootstrap method takes in two arguments (num_simulations and sample_size_fraction) that can be optionally specified in the params dictionary. Estimators may also override this to implement their own confidence interval method. If this parameter is False, no confidence intervals are computed. If True, confidence intervals are computed by the estimator's specific method if available, otherwise through bootstrap
:param num_null_simulations: The number of simulations for testing the
statistical significance of the estimator
:param num_simulations: The number of simulations for finding the
            confidence interval (and/or standard error) for an estimate
:param sample_size_fraction: The size of the sample for the bootstrap
estimator
:param confidence_level: The confidence level of the confidence
interval estimate
:param need_conditional_estimates: Boolean flag indicating whether
conditional estimates should be computed. Defaults to True if
there are effect modifiers in the graph
:param num_quantiles_to_discretize_cont_cols: The number of quantiles
into which a numeric effect modifier is split, to enable
estimation of conditional treatment effect over it.
:param first_stage_model: First stage estimator to be used. Default is
linear regression.
:param second_stage_model: Second stage estimator to be used. Default
is linear regression.
:param kwargs: (optional) Additional estimator-specific parameters
"""
super().__init__(
identified_estimand=identified_estimand,
test_significance=test_significance,
evaluate_effect_strength=evaluate_effect_strength,
confidence_intervals=confidence_intervals,
num_null_simulations=num_null_simulations,
num_simulations=num_simulations,
sample_size_fraction=sample_size_fraction,
confidence_level=confidence_level,
need_conditional_estimates=need_conditional_estimates,
num_quantiles_to_discretize_cont_cols=num_quantiles_to_discretize_cont_cols,
first_stage_model=first_stage_model,
second_stage_model=second_stage_model,
**kwargs,
)
self.logger.info("INFO: Using Two Stage Regression Estimator")
        # Construct the first-stage target estimand: backdoor adjustment on
        # the mediation first-stage confounders
modified_target_estimand = copy.deepcopy(self._target_estimand)
modified_target_estimand.identifier_method = "backdoor"
modified_target_estimand.backdoor_variables = self._target_estimand.mediation_first_stage_confounders
if first_stage_model is not None:
self._first_stage_model = (
first_stage_model
if isinstance(first_stage_model, CausalEstimator)
else first_stage_model(
modified_target_estimand,
test_significance=self._significance_test,
evaluate_effect_strength=self._effect_strength_eval,
confidence_intervals=self._confidence_intervals,
**kwargs,
)
)
else:
self._first_stage_model = self.__class__.DEFAULT_FIRST_STAGE_MODEL(
modified_target_estimand,
test_significance=self._significance_test,
evaluate_effect_strength=self._effect_strength_eval,
confidence_intervals=self._confidence_intervals,
**kwargs,
)
self.logger.warning("First stage model not provided. Defaulting to sklearn.linear_model.LinearRegression.")
modified_target_estimand = copy.deepcopy(self._target_estimand)
modified_target_estimand.identifier_method = "backdoor"
modified_target_estimand.backdoor_variables = self._target_estimand.mediation_second_stage_confounders
if second_stage_model is not None:
self._second_stage_model = (
second_stage_model
if isinstance(second_stage_model, CausalEstimator)
else second_stage_model(
modified_target_estimand,
test_significance=self._significance_test,
evaluate_effect_strength=self._effect_strength_eval,
confidence_intervals=self._confidence_intervals,
**kwargs,
)
)
else:
self._second_stage_model = self.__class__.DEFAULT_SECOND_STAGE_MODEL(
modified_target_estimand,
test_significance=self._significance_test,
evaluate_effect_strength=self._effect_strength_eval,
confidence_intervals=self._confidence_intervals,
**kwargs,
)
self.logger.warning("Second stage model not provided. Defaulting to backdoor.linear_regression.")
if self._target_estimand.estimand_type == EstimandType.NONPARAMETRIC_NDE:
            modified_target_estimand = copy.deepcopy(self._target_estimand)
            modified_target_estimand.identifier_method = "backdoor"
self._second_stage_model_nde = type(self._second_stage_model)(
modified_target_estimand,
test_significance=self._significance_test,
evaluate_effect_strength=self._effect_strength_eval,
confidence_intervals=self._confidence_intervals,
**kwargs,
)
def fit(
self,
data: pd.DataFrame,
treatment_name: str,
outcome_name: str,
effect_modifier_names: Optional[List[str]] = None,
**_,
):
"""
        Fits the estimator with data for effect estimation.

        :param data: data frame containing the data
        :param treatment_name: name of the treatment variable
        :param outcome_name: name of the outcome variable
        :param effect_modifier_names: names of the variables on which to
            compute separate effects, or return a heterogeneous effect
            function. Not all methods support this currently.
"""
self._set_data(data, treatment_name, outcome_name)
self._set_effect_modifiers(effect_modifier_names)
if len(self._treatment_name) > 1:
error_msg = str(self.__class__) + "cannot handle more than one treatment variable"
raise Exception(error_msg)
if self._target_estimand.identifier_method == "frontdoor":
self.logger.debug("Front-door variable used:" + ",".join(self._target_estimand.get_frontdoor_variables()))
self._frontdoor_variables_names = self._target_estimand.get_frontdoor_variables()
if self._frontdoor_variables_names:
self._frontdoor_variables = self._data[self._frontdoor_variables_names]
else:
self._frontdoor_variables = None
error_msg = "No front-door variable present. Two stage regression is not applicable"
self.logger.error(error_msg)
elif self._target_estimand.identifier_method == "mediation":
self.logger.debug("Mediators used:" + ",".join(self._target_estimand.get_mediator_variables()))
self._mediators_names = self._target_estimand.get_mediator_variables()
if self._mediators_names:
self._mediators = self._data[self._mediators_names]
else:
self._mediators = None
error_msg = "No mediator variable present. Two stage regression is not applicable"
self.logger.error(error_msg)
elif self._target_estimand.identifier_method == "iv":
self.logger.debug(
"Instrumental variables used:" + ",".join(self._target_estimand.get_instrumental_variables())
)
self._instrumental_variables_names = self._target_estimand.get_instrumental_variables()
if self._instrumental_variables_names:
self._instrumental_variables = self._data[self._instrumental_variables_names]
else:
self._instrumental_variables = None
error_msg = "No instrumental variable present. Two stage regression is not applicable"
self.logger.error(error_msg)
if self._target_estimand.identifier_method == "frontdoor":
self._first_stage_model._target_estimand.outcome_variable = parse_state(self._frontdoor_variables_names)
elif self._target_estimand.identifier_method == "mediation":
self._first_stage_model._target_estimand.outcome_variable = parse_state(self._mediators_names)
self._first_stage_model.fit(
data,
treatment_name,
parse_state(self._first_stage_model._target_estimand.outcome_variable),
effect_modifier_names=effect_modifier_names,
)
if self._target_estimand.identifier_method == "frontdoor":
self._second_stage_model._target_estimand.treatment_variable = parse_state(self._frontdoor_variables_names)
elif self._target_estimand.identifier_method == "mediation":
self._second_stage_model._target_estimand.treatment_variable = parse_state(self._mediators_names)
self._second_stage_model.fit(
data,
parse_state(self._second_stage_model._target_estimand.treatment_variable),
            parse_state(self._outcome_name),  # to convert it to an array before passing to the causal estimator
effect_modifier_names=effect_modifier_names,
)
if self._target_estimand.estimand_type == EstimandType.NONPARAMETRIC_NDE:
self._second_stage_model_nde._target_estimand.identifier_method = "backdoor"
self._second_stage_model_nde.fit(
data,
self._treatment_name,
                parse_state(self._outcome_name),  # to convert it to an array before passing to the causal estimator
effect_modifier_names=effect_modifier_names,
)
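        # At this point the first-stage model predicts the front-door/mediator
        # variable from the treatment, the second-stage model predicts the
        # outcome from that variable, and (for NDE) the extra model captures
        # the total effect of the treatment on the outcome.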
return self
def estimate_effect(
self, data: pd.DataFrame = None, treatment_value: Any = 1, control_value: Any = 0, target_units=None, **_
):
if data is None:
data = self._data
self._target_units = target_units
self._treatment_value = treatment_value
self._control_value = control_value
estimate_value = None
# First stage
first_stage_estimate = self._first_stage_model.estimate_effect(
control_value=control_value,
treatment_value=treatment_value,
target_units=target_units,
)
# Second Stage
second_stage_estimate = self._second_stage_model.estimate_effect(
control_value=control_value,
treatment_value=treatment_value,
target_units=target_units,
)
# Combining the two estimates
natural_indirect_effect = first_stage_estimate.value * second_stage_estimate.value
# This same estimate is valid for frontdoor as well as mediation (NIE)
estimate_value = natural_indirect_effect
self.symbolic_estimator = self.construct_symbolic_estimator(
first_stage_estimate.realized_estimand_expr,
second_stage_estimate.realized_estimand_expr,
estimand_type=EstimandType.NONPARAMETRIC_NIE,
)
if self._target_estimand.estimand_type == EstimandType.NONPARAMETRIC_NDE:
total_effect_estimate = self._second_stage_model_nde.estimate_effect(
control_value=control_value, treatment_value=treatment_value, target_units=target_units
)
natural_direct_effect = total_effect_estimate.value - natural_indirect_effect
estimate_value = natural_direct_effect
self.symbolic_estimator = self.construct_symbolic_estimator(
first_stage_estimate.realized_estimand_expr,
second_stage_estimate.realized_estimand_expr,
total_effect_estimate.realized_estimand_expr,
estimand_type=self._target_estimand.estimand_type,
)
estimate = CausalEstimate(
estimate=estimate_value,
control_value=control_value,
treatment_value=treatment_value,
target_estimand=self._target_estimand,
realized_estimand_expr=self.symbolic_estimator,
)
estimate.add_estimator(self)
return estimate
def build_first_stage_features(self):
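        """Build the first-stage design matrix: treatment columns, dummified
        observed common causes, and treatment-by-effect-modifier interaction
        terms (no intercept column is added here).
        """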
data_df = self._data
treatment_vals = data_df[self._treatment_name]
if len(self._observed_common_causes_names) > 0:
observed_common_causes_vals = data_df[self._observed_common_causes_names]
observed_common_causes_vals = pd.get_dummies(observed_common_causes_vals, drop_first=True)
if self._effect_modifier_names:
effect_modifiers_vals = data_df[self._effect_modifier_names]
effect_modifiers_vals = pd.get_dummies(effect_modifiers_vals, drop_first=True)
if type(treatment_vals) is not np.ndarray:
treatment_vals = treatment_vals.to_numpy()
if treatment_vals.shape[0] != data_df.shape[0]:
raise ValueError("Provided treatment values and dataframe should have the same length.")
        # Building the feature matrix
n_samples = treatment_vals.shape[0]
self.logger.debug("Number of samples" + str(n_samples) + str(len(self._treatment_name)))
treatment_2d = treatment_vals.reshape((n_samples, len(self._treatment_name)))
if len(self._observed_common_causes_names) > 0:
features = np.concatenate((treatment_2d, observed_common_causes_vals), axis=1)
else:
features = treatment_2d
if self._effect_modifier_names:
for i in range(treatment_2d.shape[1]):
curr_treatment = treatment_2d[:, i]
new_features = curr_treatment[:, np.newaxis] * effect_modifiers_vals.to_numpy()
features = np.concatenate((features, new_features), axis=1)
features = features.astype(
float, copy=False
) # converting to float in case of binary treatment and no other variables
# features = sm.add_constant(features, has_constant='add') # to add an intercept term
return features
def construct_symbolic_estimator(
self, first_stage_symbolic, second_stage_symbolic, total_effect_symbolic=None, estimand_type=None
):
nie_symbolic = "(" + first_stage_symbolic + ")*(" + second_stage_symbolic + ")"
if estimand_type == EstimandType.NONPARAMETRIC_NIE:
return nie_symbolic
elif estimand_type == EstimandType.NONPARAMETRIC_NDE:
return "(" + total_effect_symbolic + ") - (" + nie_symbolic + ")"
| andresmor-ms | 11c4e0dafd6e824eb81ad14262457d954ae61468 | affe0952f4aba6845247355c171565510c2c1673 | Okay, this is a major refactor. I did try to check the details, but I just wanted to confirm the logic: each of these models just changes the treatment or outcome column and calls the user-provided estimators. It will be good to double-check that the name changes have been done correctly. It looks good to me. | amit-sharma | 244
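The composition logic this comment confirms (each stage estimator is refit with its treatment or outcome column swapped to the front-door/mediator variable, and the stage estimates are then combined) can be restated as a small self-contained sketch; the function name is illustrative, not part of the PR:

def combine_two_stage_estimates(first_stage_value, second_stage_value, total_effect_value=None):
    # Natural indirect effect (NIE): product of the two stage estimates,
    # valid for both the front-door and the mediation (NIE) estimands.
    nie = first_stage_value * second_stage_value
    if total_effect_value is None:
        return nie
    # Natural direct effect (NDE): total effect minus the indirect effect.
    return total_effect_value - nie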
py-why/dowhy | 746 | Functional api/causal estimators | * Introduce `fit()` method to estimators.
* Refactor constructors to avoid using `*args` and `**kwargs` and have more explicit parameters.
* Refactor refuters and other parts of the code to use `fit()` and modify arguments to `estimate_effect()` | null | 2022-11-04 16:15:39+00:00 | 2022-12-03 17:07:53+00:00 | dowhy/causal_estimators/two_stage_regression_estimator.py | import copy
import itertools
import numpy as np
import pandas as pd
from dowhy.causal_estimator import CausalEstimate, CausalEstimator
from dowhy.causal_estimators.linear_regression_estimator import LinearRegressionEstimator
from dowhy.causal_identifier.identify_effect import EstimandType
from dowhy.utils.api import parse_state
class TwoStageRegressionEstimator(CausalEstimator):
"""Compute treatment effect whenever the effect is fully mediated by
another variable (front-door) or when there is an instrument available.
Currently only supports a linear model for the effects.
For a list of standard args and kwargs, see documentation for
:class:`~dowhy.causal_estimator.CausalEstimator`.
Supports additional parameters as listed below.
"""
# First stage statistical model
DEFAULT_FIRST_STAGE_MODEL = LinearRegressionEstimator
# Second stage statistical model
DEFAULT_SECOND_STAGE_MODEL = LinearRegressionEstimator
def __init__(self, *args, first_stage_model=None, second_stage_model=None, **kwargs):
"""
:param first_stage_model: First stage estimator to be used. Default is
linear regression.
:param second_stage_model: Second stage estimator to be used. Default
is linear regression.
"""
# Required to ensure that self.method_params contains all the
# parameters needed to create an object of this class
args_dict = {k: v for k, v in locals().items() if k not in type(self)._STD_INIT_ARGS}
args_dict.update(kwargs)
super().__init__(*args, **args_dict)
self.logger.info("INFO: Using Two Stage Regression Estimator")
# Check if the treatment is one-dimensional
if len(self._treatment_name) > 1:
error_msg = str(self.__class__) + "cannot handle more than one treatment variable"
raise Exception(error_msg)
if self._target_estimand.identifier_method == "frontdoor":
self.logger.debug("Front-door variable used:" + ",".join(self._target_estimand.get_frontdoor_variables()))
self._frontdoor_variables_names = self._target_estimand.get_frontdoor_variables()
if self._frontdoor_variables_names:
self._frontdoor_variables = self._data[self._frontdoor_variables_names]
else:
self._frontdoor_variables = None
error_msg = "No front-door variable present. Two stage regression is not applicable"
self.logger.error(error_msg)
elif self._target_estimand.identifier_method == "mediation":
self.logger.debug("Mediators used:" + ",".join(self._target_estimand.get_mediator_variables()))
self._mediators_names = self._target_estimand.get_mediator_variables()
if self._mediators_names:
self._mediators = self._data[self._mediators_names]
else:
self._mediators = None
error_msg = "No mediator variable present. Two stage regression is not applicable"
self.logger.error(error_msg)
elif self._target_estimand.identifier_method == "iv":
self.logger.debug(
"Instrumental variables used:" + ",".join(self._target_estimand.get_instrumental_variables())
)
self._instrumental_variables_names = self._target_estimand.get_instrumental_variables()
if self._instrumental_variables_names:
self._instrumental_variables = self._data[self._instrumental_variables_names]
else:
self._instrumental_variables = None
error_msg = "No instrumental variable present. Two stage regression is not applicable"
self.logger.error(error_msg)
if first_stage_model is not None:
self.first_stage_model = first_stage_model
else:
self.first_stage_model = self.__class__.DEFAULT_FIRST_STAGE_MODEL
self.logger.warning("First stage model not provided. Defaulting to sklearn.linear_model.LinearRegression.")
if second_stage_model is not None:
self.second_stage_model = second_stage_model
else:
self.second_stage_model = self.__class__.DEFAULT_SECOND_STAGE_MODEL
self.logger.warning("Second stage model not provided. Defaulting to backdoor.linear_regression.")
def _estimate_effect(self):
estimate_value = None
# First stage
modified_target_estimand = copy.deepcopy(self._target_estimand)
modified_target_estimand.identifier_method = "backdoor"
modified_target_estimand.backdoor_variables = self._target_estimand.mediation_first_stage_confounders
if self._target_estimand.identifier_method == "frontdoor":
modified_target_estimand.outcome_variable = parse_state(self._frontdoor_variables_names)
elif self._target_estimand.identifier_method == "mediation":
modified_target_estimand.outcome_variable = parse_state(self._mediators_names)
first_stage_estimate = self.first_stage_model(
self._data,
modified_target_estimand,
self._treatment_name,
parse_state(modified_target_estimand.outcome_variable),
control_value=self._control_value,
treatment_value=self._treatment_value,
test_significance=self._significance_test,
evaluate_effect_strength=self._effect_strength_eval,
confidence_intervals=self._confidence_intervals,
target_units=self._target_units,
effect_modifiers=self._effect_modifier_names,
**self.method_params,
)._estimate_effect()
# Second Stage
modified_target_estimand = copy.deepcopy(self._target_estimand)
modified_target_estimand.identifier_method = "backdoor"
modified_target_estimand.backdoor_variables = self._target_estimand.mediation_second_stage_confounders
if self._target_estimand.identifier_method == "frontdoor":
modified_target_estimand.treatment_variable = parse_state(self._frontdoor_variables_names)
elif self._target_estimand.identifier_method == "mediation":
modified_target_estimand.treatment_variable = parse_state(self._mediators_names)
second_stage_estimate = self.second_stage_model(
self._data,
modified_target_estimand,
parse_state(modified_target_estimand.treatment_variable),
parse_state(self._outcome_name), # to convert it to array before passing to causal estimator
control_value=self._control_value,
treatment_value=self._treatment_value,
test_significance=self._significance_test,
evaluate_effect_strength=self._effect_strength_eval,
confidence_intervals=self._confidence_intervals,
target_units=self._target_units,
effect_modifiers=self._effect_modifier_names,
**self.method_params,
)._estimate_effect()
# Combining the two estimates
natural_indirect_effect = first_stage_estimate.value * second_stage_estimate.value
# This same estimate is valid for frontdoor as well as mediation (NIE)
estimate_value = natural_indirect_effect
self.symbolic_estimator = self.construct_symbolic_estimator(
first_stage_estimate.realized_estimand_expr,
second_stage_estimate.realized_estimand_expr,
estimand_type=EstimandType.NONPARAMETRIC_NIE,
)
if self._target_estimand.estimand_type == EstimandType.NONPARAMETRIC_NDE:
# Total effect of treatment
modified_target_estimand = copy.deepcopy(self._target_estimand)
modified_target_estimand.identifier_method = "backdoor"
total_effect_estimate = self.second_stage_model(
self._data,
modified_target_estimand,
self._treatment_name,
parse_state(self._outcome_name),
control_value=self._control_value,
treatment_value=self._treatment_value,
test_significance=self._significance_test,
evaluate_effect_strength=self._effect_strength_eval,
confidence_intervals=self._confidence_intervals,
target_units=self._target_units,
effect_modifiers=self._effect_modifier_names,
**self.method_params,
)._estimate_effect()
natural_direct_effect = total_effect_estimate.value - natural_indirect_effect
estimate_value = natural_direct_effect
self.symbolic_estimator = self.construct_symbolic_estimator(
first_stage_estimate.realized_estimand_expr,
second_stage_estimate.realized_estimand_expr,
total_effect_estimate.realized_estimand_expr,
estimand_type=self._target_estimand.estimand_type,
)
return CausalEstimate(
estimate=estimate_value,
control_value=self._control_value,
treatment_value=self._treatment_value,
target_estimand=self._target_estimand,
realized_estimand_expr=self.symbolic_estimator,
)
def build_first_stage_features(self):
data_df = self._data
treatment_vals = data_df[self._treatment_name]
if len(self._observed_common_causes_names) > 0:
observed_common_causes_vals = data_df[self._observed_common_causes_names]
observed_common_causes_vals = pd.get_dummies(observed_common_causes_vals, drop_first=True)
if self._effect_modifier_names:
effect_modifiers_vals = data_df[self._effect_modifier_names]
effect_modifiers_vals = pd.get_dummies(effect_modifiers_vals, drop_first=True)
if type(treatment_vals) is not np.ndarray:
treatment_vals = treatment_vals.to_numpy()
if treatment_vals.shape[0] != data_df.shape[0]:
raise ValueError("Provided treatment values and dataframe should have the same length.")
# Bulding the feature matrix
n_samples = treatment_vals.shape[0]
self.logger.debug("Number of samples" + str(n_samples) + str(len(self._treatment_name)))
treatment_2d = treatment_vals.reshape((n_samples, len(self._treatment_name)))
if len(self._observed_common_causes_names) > 0:
features = np.concatenate((treatment_2d, observed_common_causes_vals), axis=1)
else:
features = treatment_2d
if self._effect_modifier_names:
for i in range(treatment_2d.shape[1]):
curr_treatment = treatment_2d[:, i]
new_features = curr_treatment[:, np.newaxis] * effect_modifiers_vals.to_numpy()
features = np.concatenate((features, new_features), axis=1)
features = features.astype(
float, copy=False
) # converting to float in case of binary treatment and no other variables
# features = sm.add_constant(features, has_constant='add') # to add an intercept term
return features
def construct_symbolic_estimator(
self, first_stage_symbolic, second_stage_symbolic, total_effect_symbolic=None, estimand_type=None
):
nie_symbolic = "(" + first_stage_symbolic + ")*(" + second_stage_symbolic + ")"
if estimand_type == EstimandType.NONPARAMETRIC_NIE:
return nie_symbolic
elif estimand_type == EstimandType.NONPARAMETRIC_NDE:
return "(" + total_effect_symbolic + ") - (" + nie_symbolic + ")"
| import copy
from typing import Any, List, Optional, Type, Union
import numpy as np
import pandas as pd
from dowhy.causal_estimator import CausalEstimate, CausalEstimator
from dowhy.causal_estimators.linear_regression_estimator import LinearRegressionEstimator
from dowhy.causal_identifier import EstimandType, IdentifiedEstimand
from dowhy.utils.api import parse_state
class TwoStageRegressionEstimator(CausalEstimator):
"""Compute treatment effect whenever the effect is fully mediated by
another variable (front-door) or when there is an instrument available.
Currently only supports a linear model for the effects.
Supports additional parameters as listed below.
"""
# First stage statistical model
DEFAULT_FIRST_STAGE_MODEL = LinearRegressionEstimator
# Second stage statistical model
DEFAULT_SECOND_STAGE_MODEL = LinearRegressionEstimator
def __init__(
self,
identified_estimand: IdentifiedEstimand,
test_significance: bool = False,
evaluate_effect_strength: bool = False,
confidence_intervals: bool = False,
num_null_simulations: int = CausalEstimator.DEFAULT_NUMBER_OF_SIMULATIONS_STAT_TEST,
num_simulations: int = CausalEstimator.DEFAULT_NUMBER_OF_SIMULATIONS_CI,
sample_size_fraction: int = CausalEstimator.DEFAULT_SAMPLE_SIZE_FRACTION,
confidence_level: float = CausalEstimator.DEFAULT_CONFIDENCE_LEVEL,
need_conditional_estimates: Union[bool, str] = "auto",
num_quantiles_to_discretize_cont_cols: int = CausalEstimator.NUM_QUANTILES_TO_DISCRETIZE_CONT_COLS,
first_stage_model: Optional[Union[CausalEstimator, Type[CausalEstimator]]] = None,
second_stage_model: Optional[Union[CausalEstimator, Type[CausalEstimator]]] = None,
**kwargs,
):
"""
:param identified_estimand: probability expression
representing the target identified estimand to estimate.
:param test_significance: Binary flag or a string indicating whether to test significance and by which method. All estimators support test_significance="bootstrap" that estimates a p-value for the obtained estimate using the bootstrap method. Individual estimators can override this to support custom testing methods. The bootstrap method supports an optional parameter, num_null_simulations. If False, no testing is done. If True, significance of the estimate is tested using the custom method if available, otherwise by bootstrap.
:param evaluate_effect_strength: (Experimental) whether to evaluate the strength of effect
:param confidence_intervals: Binary flag or a string indicating whether the confidence intervals should be computed and which method should be used. All methods support estimation of confidence intervals using the bootstrap method by using the parameter confidence_intervals="bootstrap". The bootstrap method takes in two arguments (num_simulations and sample_size_fraction) that can be optionally specified in the params dictionary. Estimators may also override this to implement their own confidence interval method. If this parameter is False, no confidence intervals are computed. If True, confidence intervals are computed by the estimator's specific method if available, otherwise through bootstrap
:param num_null_simulations: The number of simulations for testing the
statistical significance of the estimator
:param num_simulations: The number of simulations for finding the
confidence interval (and/or standard error) for a estimate
:param sample_size_fraction: The size of the sample for the bootstrap
estimator
:param confidence_level: The confidence level of the confidence
interval estimate
:param need_conditional_estimates: Boolean flag indicating whether
conditional estimates should be computed. Defaults to True if
there are effect modifiers in the graph
:param num_quantiles_to_discretize_cont_cols: The number of quantiles
into which a numeric effect modifier is split, to enable
estimation of conditional treatment effect over it.
:param first_stage_model: First stage estimator to be used. Default is
linear regression.
:param second_stage_model: Second stage estimator to be used. Default
is linear regression.
:param kwargs: (optional) Additional estimator-specific parameters
"""
super().__init__(
identified_estimand=identified_estimand,
test_significance=test_significance,
evaluate_effect_strength=evaluate_effect_strength,
confidence_intervals=confidence_intervals,
num_null_simulations=num_null_simulations,
num_simulations=num_simulations,
sample_size_fraction=sample_size_fraction,
confidence_level=confidence_level,
need_conditional_estimates=need_conditional_estimates,
num_quantiles_to_discretize_cont_cols=num_quantiles_to_discretize_cont_cols,
first_stage_model=first_stage_model,
second_stage_model=second_stage_model,
**kwargs,
)
self.logger.info("INFO: Using Two Stage Regression Estimator")
# Check if the treatment is one-dimensional
modified_target_estimand = copy.deepcopy(self._target_estimand)
modified_target_estimand.identifier_method = "backdoor"
modified_target_estimand.backdoor_variables = self._target_estimand.mediation_first_stage_confounders
if first_stage_model is not None:
self._first_stage_model = (
first_stage_model
if isinstance(first_stage_model, CausalEstimator)
else first_stage_model(
modified_target_estimand,
test_significance=self._significance_test,
evaluate_effect_strength=self._effect_strength_eval,
confidence_intervals=self._confidence_intervals,
**kwargs,
)
)
else:
self._first_stage_model = self.__class__.DEFAULT_FIRST_STAGE_MODEL(
modified_target_estimand,
test_significance=self._significance_test,
evaluate_effect_strength=self._effect_strength_eval,
confidence_intervals=self._confidence_intervals,
**kwargs,
)
self.logger.warning("First stage model not provided. Defaulting to sklearn.linear_model.LinearRegression.")
modified_target_estimand = copy.deepcopy(self._target_estimand)
modified_target_estimand.identifier_method = "backdoor"
modified_target_estimand.backdoor_variables = self._target_estimand.mediation_second_stage_confounders
if second_stage_model is not None:
self._second_stage_model = (
second_stage_model
if isinstance(second_stage_model, CausalEstimator)
else second_stage_model(
modified_target_estimand,
test_significance=self._significance_test,
evaluate_effect_strength=self._effect_strength_eval,
confidence_intervals=self._confidence_intervals,
**kwargs,
)
)
else:
self._second_stage_model = self.__class__.DEFAULT_SECOND_STAGE_MODEL(
modified_target_estimand,
test_significance=self._significance_test,
evaluate_effect_strength=self._effect_strength_eval,
confidence_intervals=self._confidence_intervals,
**kwargs,
)
self.logger.warning("Second stage model not provided. Defaulting to backdoor.linear_regression.")
if self._target_estimand.estimand_type == EstimandType.NONPARAMETRIC_NDE:
modified_target_estimand = copy.deepcopy(self._target_estimand)
modified_target_estimand.identifier_method = "backdoor"
self._second_stage_model_nde = type(self._second_stage_model)(
modified_target_estimand,
test_significance=self._significance_test,
evaluate_effect_strength=self._effect_strength_eval,
confidence_intervals=self._confidence_intervals,
**kwargs,
)
def fit(
self,
data: pd.DataFrame,
treatment_name: str,
outcome_name: str,
effect_modifier_names: Optional[List[str]] = None,
**_,
):
"""
Fits the estimator with data for effect estimation
:param data: data frame containing the data
:param treatment_name: name of the treatment variable
:param outcome_name: name of the outcome variable
:param effect_modifier_names: names of the variables on which to compute
separate effects, or return a heterogeneous effect function. Not all
methods support this currently.
"""
self._set_data(data, treatment_name, outcome_name)
self._set_effect_modifiers(effect_modifier_names)
if len(self._treatment_name) > 1:
error_msg = str(self.__class__) + "cannot handle more than one treatment variable"
raise Exception(error_msg)
if self._target_estimand.identifier_method == "frontdoor":
self.logger.debug("Front-door variable used:" + ",".join(self._target_estimand.get_frontdoor_variables()))
self._frontdoor_variables_names = self._target_estimand.get_frontdoor_variables()
if self._frontdoor_variables_names:
self._frontdoor_variables = self._data[self._frontdoor_variables_names]
else:
self._frontdoor_variables = None
error_msg = "No front-door variable present. Two stage regression is not applicable"
self.logger.error(error_msg)
elif self._target_estimand.identifier_method == "mediation":
self.logger.debug("Mediators used:" + ",".join(self._target_estimand.get_mediator_variables()))
self._mediators_names = self._target_estimand.get_mediator_variables()
if self._mediators_names:
self._mediators = self._data[self._mediators_names]
else:
self._mediators = None
error_msg = "No mediator variable present. Two stage regression is not applicable"
self.logger.error(error_msg)
elif self._target_estimand.identifier_method == "iv":
self.logger.debug(
"Instrumental variables used:" + ",".join(self._target_estimand.get_instrumental_variables())
)
self._instrumental_variables_names = self._target_estimand.get_instrumental_variables()
if self._instrumental_variables_names:
self._instrumental_variables = self._data[self._instrumental_variables_names]
else:
self._instrumental_variables = None
error_msg = "No instrumental variable present. Two stage regression is not applicable"
self.logger.error(error_msg)
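# Rewire the first stage: the front-door/mediator variable becomes the outcome of the first-stage model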
if self._target_estimand.identifier_method == "frontdoor":
self._first_stage_model._target_estimand.outcome_variable = parse_state(self._frontdoor_variables_names)
elif self._target_estimand.identifier_method == "mediation":
self._first_stage_model._target_estimand.outcome_variable = parse_state(self._mediators_names)
self._first_stage_model.fit(
data,
treatment_name,
parse_state(self._first_stage_model._target_estimand.outcome_variable),
effect_modifier_names=effect_modifier_names,
)
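# Rewire the second stage: the front-door/mediator variable becomes the treatment of the second-stage model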
if self._target_estimand.identifier_method == "frontdoor":
self._second_stage_model._target_estimand.treatment_variable = parse_state(self._frontdoor_variables_names)
elif self._target_estimand.identifier_method == "mediation":
self._second_stage_model._target_estimand.treatment_variable = parse_state(self._mediators_names)
self._second_stage_model.fit(
data,
parse_state(self._second_stage_model._target_estimand.treatment_variable),
parse_state(self._outcome_name),  # to convert it to array before passing to causal estimator
effect_modifier_names=effect_modifier_names,
)
if self._target_estimand.estimand_type == EstimandType.NONPARAMETRIC_NDE:
self._second_stage_model_nde._target_estimand.identifier_method = "backdoor"
self._second_stage_model_nde.fit(
data,
self._treatment_name,
parse_state(self._outcome_name),  # to convert it to array before passing to causal estimator
effect_modifier_names=effect_modifier_names,
)
return self
def estimate_effect(
self, data: pd.DataFrame = None, treatment_value: Any = 1, control_value: Any = 0, target_units=None, **_
):
if data is None:
data = self._data
self._target_units = target_units
self._treatment_value = treatment_value
self._control_value = control_value
estimate_value = None
# First stage
first_stage_estimate = self._first_stage_model.estimate_effect(
control_value=control_value,
treatment_value=treatment_value,
target_units=target_units,
)
# Second Stage
second_stage_estimate = self._second_stage_model.estimate_effect(
control_value=control_value,
treatment_value=treatment_value,
target_units=target_units,
)
# Combining the two estimates
natural_indirect_effect = first_stage_estimate.value * second_stage_estimate.value
# This same estimate is valid for frontdoor as well as mediation (NIE)
estimate_value = natural_indirect_effect
self.symbolic_estimator = self.construct_symbolic_estimator(
first_stage_estimate.realized_estimand_expr,
second_stage_estimate.realized_estimand_expr,
estimand_type=EstimandType.NONPARAMETRIC_NIE,
)
if self._target_estimand.estimand_type == EstimandType.NONPARAMETRIC_NDE:
total_effect_estimate = self._second_stage_model_nde.estimate_effect(
control_value=control_value, treatment_value=treatment_value, target_units=target_units
)
natural_direct_effect = total_effect_estimate.value - natural_indirect_effect
estimate_value = natural_direct_effect
self.symbolic_estimator = self.construct_symbolic_estimator(
first_stage_estimate.realized_estimand_expr,
second_stage_estimate.realized_estimand_expr,
total_effect_estimate.realized_estimand_expr,
estimand_type=self._target_estimand.estimand_type,
)
estimate = CausalEstimate(
estimate=estimate_value,
control_value=control_value,
treatment_value=treatment_value,
target_estimand=self._target_estimand,
realized_estimand_expr=self.symbolic_estimator,
)
estimate.add_estimator(self)
return estimate
def build_first_stage_features(self):
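"""Build the first-stage design matrix: treatment columns, one-hot encoded
common causes, and treatment-by-effect-modifier interaction terms."""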
data_df = self._data
treatment_vals = data_df[self._treatment_name]
if len(self._observed_common_causes_names) > 0:
observed_common_causes_vals = data_df[self._observed_common_causes_names]
observed_common_causes_vals = pd.get_dummies(observed_common_causes_vals, drop_first=True)
if self._effect_modifier_names:
effect_modifiers_vals = data_df[self._effect_modifier_names]
effect_modifiers_vals = pd.get_dummies(effect_modifiers_vals, drop_first=True)
if type(treatment_vals) is not np.ndarray:
treatment_vals = treatment_vals.to_numpy()
if treatment_vals.shape[0] != data_df.shape[0]:
raise ValueError("Provided treatment values and dataframe should have the same length.")
# Building the feature matrix
n_samples = treatment_vals.shape[0]
self.logger.debug("Number of samples" + str(n_samples) + str(len(self._treatment_name)))
treatment_2d = treatment_vals.reshape((n_samples, len(self._treatment_name)))
if len(self._observed_common_causes_names) > 0:
features = np.concatenate((treatment_2d, observed_common_causes_vals), axis=1)
else:
features = treatment_2d
if self._effect_modifier_names:
for i in range(treatment_2d.shape[1]):
curr_treatment = treatment_2d[:, i]
new_features = curr_treatment[:, np.newaxis] * effect_modifiers_vals.to_numpy()
features = np.concatenate((features, new_features), axis=1)
features = features.astype(
float, copy=False
) # converting to float in case of binary treatment and no other variables
# features = sm.add_constant(features, has_constant='add') # to add an intercept term
return features
def construct_symbolic_estimator(
self, first_stage_symbolic, second_stage_symbolic, total_effect_symbolic=None, estimand_type=None
):
nie_symbolic = "(" + first_stage_symbolic + ")*(" + second_stage_symbolic + ")"
if estimand_type == EstimandType.NONPARAMETRIC_NIE:
return nie_symbolic
elif estimand_type == EstimandType.NONPARAMETRIC_NDE:
return "(" + total_effect_symbolic + ") - (" + nie_symbolic + ")"
| andresmor-ms | 11c4e0dafd6e824eb81ad14262457d954ae61468 | affe0952f4aba6845247355c171565510c2c1673 | why not change the identifier_method to "backdoor" here? For others, we need the data, but do not need it for this one. | amit-sharma | 245 |
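The review comment above asks why the identifier_method rewrite cannot happen at construction time. A minimal sketch of that alternative, assuming the thread refers to the NDE branch (the assignment needs only the identified estimand, not the data, so it can run in __init__ and be dropped from fit()):

# sketch of an __init__-time alternative, not the merged code
modified_target_estimand = copy.deepcopy(self._target_estimand)
modified_target_estimand.identifier_method = "backdoor"  # set after the copy so it is not overwritten
self._second_stage_model_nde = type(self._second_stage_model)(modified_target_estimand, **kwargs)
# fit() could then drop:
#     self._second_stage_model_nde._target_estimand.identifier_method = "backdoor"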
py-why/dowhy | 746 | Functional api/causal estimators | * Introduce `fit()` method to estimators.
* Refactor constructors to avoid using `*args` and `**kwargs` and have more explicit parameters.
* Refactor refuters and other parts of the code to use `fit()` and modify arguments to `estimate_effect()` | null | 2022-11-04 16:15:39+00:00 | 2022-12-03 17:07:53+00:00 | dowhy/causal_estimators/two_stage_regression_estimator.py | import copy
import itertools
import numpy as np
import pandas as pd
from dowhy.causal_estimator import CausalEstimate, CausalEstimator
from dowhy.causal_estimators.linear_regression_estimator import LinearRegressionEstimator
from dowhy.causal_identifier.identify_effect import EstimandType
from dowhy.utils.api import parse_state
class TwoStageRegressionEstimator(CausalEstimator):
"""Compute treatment effect whenever the effect is fully mediated by
another variable (front-door) or when there is an instrument available.
Currently only supports a linear model for the effects.
For a list of standard args and kwargs, see documentation for
:class:`~dowhy.causal_estimator.CausalEstimator`.
Supports additional parameters as listed below.
"""
# First stage statistical model
DEFAULT_FIRST_STAGE_MODEL = LinearRegressionEstimator
# Second stage statistical model
DEFAULT_SECOND_STAGE_MODEL = LinearRegressionEstimator
def __init__(self, *args, first_stage_model=None, second_stage_model=None, **kwargs):
"""
:param first_stage_model: First stage estimator to be used. Default is
linear regression.
:param second_stage_model: Second stage estimator to be used. Default
is linear regression.
"""
# Required to ensure that self.method_params contains all the
# parameters needed to create an object of this class
args_dict = {k: v for k, v in locals().items() if k not in type(self)._STD_INIT_ARGS}
args_dict.update(kwargs)
super().__init__(*args, **args_dict)
self.logger.info("INFO: Using Two Stage Regression Estimator")
# Check if the treatment is one-dimensional
if len(self._treatment_name) > 1:
error_msg = str(self.__class__) + "cannot handle more than one treatment variable"
raise Exception(error_msg)
if self._target_estimand.identifier_method == "frontdoor":
self.logger.debug("Front-door variable used:" + ",".join(self._target_estimand.get_frontdoor_variables()))
self._frontdoor_variables_names = self._target_estimand.get_frontdoor_variables()
if self._frontdoor_variables_names:
self._frontdoor_variables = self._data[self._frontdoor_variables_names]
else:
self._frontdoor_variables = None
error_msg = "No front-door variable present. Two stage regression is not applicable"
self.logger.error(error_msg)
elif self._target_estimand.identifier_method == "mediation":
self.logger.debug("Mediators used:" + ",".join(self._target_estimand.get_mediator_variables()))
self._mediators_names = self._target_estimand.get_mediator_variables()
if self._mediators_names:
self._mediators = self._data[self._mediators_names]
else:
self._mediators = None
error_msg = "No mediator variable present. Two stage regression is not applicable"
self.logger.error(error_msg)
elif self._target_estimand.identifier_method == "iv":
self.logger.debug(
"Instrumental variables used:" + ",".join(self._target_estimand.get_instrumental_variables())
)
self._instrumental_variables_names = self._target_estimand.get_instrumental_variables()
if self._instrumental_variables_names:
self._instrumental_variables = self._data[self._instrumental_variables_names]
else:
self._instrumental_variables = None
error_msg = "No instrumental variable present. Two stage regression is not applicable"
self.logger.error(error_msg)
if first_stage_model is not None:
self.first_stage_model = first_stage_model
else:
self.first_stage_model = self.__class__.DEFAULT_FIRST_STAGE_MODEL
self.logger.warning("First stage model not provided. Defaulting to sklearn.linear_model.LinearRegression.")
if second_stage_model is not None:
self.second_stage_model = second_stage_model
else:
self.second_stage_model = self.__class__.DEFAULT_SECOND_STAGE_MODEL
self.logger.warning("Second stage model not provided. Defaulting to backdoor.linear_regression.")
def _estimate_effect(self):
estimate_value = None
# First stage
modified_target_estimand = copy.deepcopy(self._target_estimand)
modified_target_estimand.identifier_method = "backdoor"
modified_target_estimand.backdoor_variables = self._target_estimand.mediation_first_stage_confounders
if self._target_estimand.identifier_method == "frontdoor":
modified_target_estimand.outcome_variable = parse_state(self._frontdoor_variables_names)
elif self._target_estimand.identifier_method == "mediation":
modified_target_estimand.outcome_variable = parse_state(self._mediators_names)
first_stage_estimate = self.first_stage_model(
self._data,
modified_target_estimand,
self._treatment_name,
parse_state(modified_target_estimand.outcome_variable),
control_value=self._control_value,
treatment_value=self._treatment_value,
test_significance=self._significance_test,
evaluate_effect_strength=self._effect_strength_eval,
confidence_intervals=self._confidence_intervals,
target_units=self._target_units,
effect_modifiers=self._effect_modifier_names,
**self.method_params,
)._estimate_effect()
# Second Stage
modified_target_estimand = copy.deepcopy(self._target_estimand)
modified_target_estimand.identifier_method = "backdoor"
modified_target_estimand.backdoor_variables = self._target_estimand.mediation_second_stage_confounders
if self._target_estimand.identifier_method == "frontdoor":
modified_target_estimand.treatment_variable = parse_state(self._frontdoor_variables_names)
elif self._target_estimand.identifier_method == "mediation":
modified_target_estimand.treatment_variable = parse_state(self._mediators_names)
second_stage_estimate = self.second_stage_model(
self._data,
modified_target_estimand,
parse_state(modified_target_estimand.treatment_variable),
parse_state(self._outcome_name), # to convert it to array before passing to causal estimator
control_value=self._control_value,
treatment_value=self._treatment_value,
test_significance=self._significance_test,
evaluate_effect_strength=self._effect_strength_eval,
confidence_intervals=self._confidence_intervals,
target_units=self._target_units,
effect_modifiers=self._effect_modifier_names,
**self.method_params,
)._estimate_effect()
# Combining the two estimates
natural_indirect_effect = first_stage_estimate.value * second_stage_estimate.value
# This same estimate is valid for frontdoor as well as mediation (NIE)
estimate_value = natural_indirect_effect
self.symbolic_estimator = self.construct_symbolic_estimator(
first_stage_estimate.realized_estimand_expr,
second_stage_estimate.realized_estimand_expr,
estimand_type=EstimandType.NONPARAMETRIC_NIE,
)
if self._target_estimand.estimand_type == EstimandType.NONPARAMETRIC_NDE:
# Total effect of treatment
modified_target_estimand = copy.deepcopy(self._target_estimand)
modified_target_estimand.identifier_method = "backdoor"
total_effect_estimate = self.second_stage_model(
self._data,
modified_target_estimand,
self._treatment_name,
parse_state(self._outcome_name),
control_value=self._control_value,
treatment_value=self._treatment_value,
test_significance=self._significance_test,
evaluate_effect_strength=self._effect_strength_eval,
confidence_intervals=self._confidence_intervals,
target_units=self._target_units,
effect_modifiers=self._effect_modifier_names,
**self.method_params,
)._estimate_effect()
natural_direct_effect = total_effect_estimate.value - natural_indirect_effect
estimate_value = natural_direct_effect
self.symbolic_estimator = self.construct_symbolic_estimator(
first_stage_estimate.realized_estimand_expr,
second_stage_estimate.realized_estimand_expr,
total_effect_estimate.realized_estimand_expr,
estimand_type=self._target_estimand.estimand_type,
)
return CausalEstimate(
estimate=estimate_value,
control_value=self._control_value,
treatment_value=self._treatment_value,
target_estimand=self._target_estimand,
realized_estimand_expr=self.symbolic_estimator,
)
def build_first_stage_features(self):
data_df = self._data
treatment_vals = data_df[self._treatment_name]
if len(self._observed_common_causes_names) > 0:
observed_common_causes_vals = data_df[self._observed_common_causes_names]
observed_common_causes_vals = pd.get_dummies(observed_common_causes_vals, drop_first=True)
if self._effect_modifier_names:
effect_modifiers_vals = data_df[self._effect_modifier_names]
effect_modifiers_vals = pd.get_dummies(effect_modifiers_vals, drop_first=True)
if type(treatment_vals) is not np.ndarray:
treatment_vals = treatment_vals.to_numpy()
if treatment_vals.shape[0] != data_df.shape[0]:
raise ValueError("Provided treatment values and dataframe should have the same length.")
# Building the feature matrix
n_samples = treatment_vals.shape[0]
self.logger.debug("Number of samples" + str(n_samples) + str(len(self._treatment_name)))
treatment_2d = treatment_vals.reshape((n_samples, len(self._treatment_name)))
if len(self._observed_common_causes_names) > 0:
features = np.concatenate((treatment_2d, observed_common_causes_vals), axis=1)
else:
features = treatment_2d
if self._effect_modifier_names:
for i in range(treatment_2d.shape[1]):
curr_treatment = treatment_2d[:, i]
new_features = curr_treatment[:, np.newaxis] * effect_modifiers_vals.to_numpy()
features = np.concatenate((features, new_features), axis=1)
features = features.astype(
float, copy=False
) # converting to float in case of binary treatment and no other variables
# features = sm.add_constant(features, has_constant='add') # to add an intercept term
return features
def construct_symbolic_estimator(
self, first_stage_symbolic, second_stage_symbolic, total_effect_symbolic=None, estimand_type=None
):
nie_symbolic = "(" + first_stage_symbolic + ")*(" + second_stage_symbolic + ")"
if estimand_type == EstimandType.NONPARAMETRIC_NIE:
return nie_symbolic
elif estimand_type == EstimandType.NONPARAMETRIC_NDE:
return "(" + total_effect_symbolic + ") - (" + nie_symbolic + ")"
| andresmor-ms | 11c4e0dafd6e824eb81ad14262457d954ae61468 | affe0952f4aba6845247355c171565510c2c1673 | both these lines can be moved to the init method. Will make code easier to understand too | amit-sharma | 246 |
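For context on the API change this PR describes (constructors without data, a separate fit() step, and a slimmer estimate_effect()), a minimal usage sketch; the data frame, column names, and the identified estimand from the identification step are illustrative, not taken from this diff:

from dowhy.causal_estimators.two_stage_regression_estimator import TwoStageRegressionEstimator

estimator = TwoStageRegressionEstimator(identified_estimand)  # no data at construction time
estimator.fit(df, treatment_name="v0", outcome_name="y")      # data enters through fit()
estimate = estimator.estimate_effect(treatment_value=1, control_value=0)
print(estimate.value)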
py-why/dowhy | 746 | Functional api/causal estimators | * Introduce `fit()` method to estimators.
* Refactor constructors to avoid using `*args` and `**kwargs` and have more explicit parameters.
* Refactor refuters and other parts of the code to use `fit()` and modify arguments to `estimate_effect()` | null | 2022-11-04 16:15:39+00:00 | 2022-12-03 17:07:53+00:00 | dowhy/causal_estimators/two_stage_regression_estimator.py | import copy
import itertools
import numpy as np
import pandas as pd
from dowhy.causal_estimator import CausalEstimate, CausalEstimator
from dowhy.causal_estimators.linear_regression_estimator import LinearRegressionEstimator
from dowhy.causal_identifier.identify_effect import EstimandType
from dowhy.utils.api import parse_state
class TwoStageRegressionEstimator(CausalEstimator):
"""Compute treatment effect whenever the effect is fully mediated by
another variable (front-door) or when there is an instrument available.
Currently only supports a linear model for the effects.
For a list of standard args and kwargs, see documentation for
:class:`~dowhy.causal_estimator.CausalEstimator`.
Supports additional parameters as listed below.
"""
# First stage statistical model
DEFAULT_FIRST_STAGE_MODEL = LinearRegressionEstimator
# Second stage statistical model
DEFAULT_SECOND_STAGE_MODEL = LinearRegressionEstimator
def __init__(self, *args, first_stage_model=None, second_stage_model=None, **kwargs):
"""
:param first_stage_model: First stage estimator to be used. Default is
linear regression.
:param second_stage_model: Second stage estimator to be used. Default
is linear regression.
"""
# Required to ensure that self.method_params contains all the
# parameters needed to create an object of this class
args_dict = {k: v for k, v in locals().items() if k not in type(self)._STD_INIT_ARGS}
args_dict.update(kwargs)
super().__init__(*args, **args_dict)
self.logger.info("INFO: Using Two Stage Regression Estimator")
# Check if the treatment is one-dimensional
if len(self._treatment_name) > 1:
error_msg = str(self.__class__) + "cannot handle more than one treatment variable"
raise Exception(error_msg)
if self._target_estimand.identifier_method == "frontdoor":
self.logger.debug("Front-door variable used:" + ",".join(self._target_estimand.get_frontdoor_variables()))
self._frontdoor_variables_names = self._target_estimand.get_frontdoor_variables()
if self._frontdoor_variables_names:
self._frontdoor_variables = self._data[self._frontdoor_variables_names]
else:
self._frontdoor_variables = None
error_msg = "No front-door variable present. Two stage regression is not applicable"
self.logger.error(error_msg)
elif self._target_estimand.identifier_method == "mediation":
self.logger.debug("Mediators used:" + ",".join(self._target_estimand.get_mediator_variables()))
self._mediators_names = self._target_estimand.get_mediator_variables()
if self._mediators_names:
self._mediators = self._data[self._mediators_names]
else:
self._mediators = None
error_msg = "No mediator variable present. Two stage regression is not applicable"
self.logger.error(error_msg)
elif self._target_estimand.identifier_method == "iv":
self.logger.debug(
"Instrumental variables used:" + ",".join(self._target_estimand.get_instrumental_variables())
)
self._instrumental_variables_names = self._target_estimand.get_instrumental_variables()
if self._instrumental_variables_names:
self._instrumental_variables = self._data[self._instrumental_variables_names]
else:
self._instrumental_variables = None
error_msg = "No instrumental variable present. Two stage regression is not applicable"
self.logger.error(error_msg)
if first_stage_model is not None:
self.first_stage_model = first_stage_model
else:
self.first_stage_model = self.__class__.DEFAULT_FIRST_STAGE_MODEL
self.logger.warning("First stage model not provided. Defaulting to sklearn.linear_model.LinearRegression.")
if second_stage_model is not None:
self.second_stage_model = second_stage_model
else:
self.second_stage_model = self.__class__.DEFAULT_SECOND_STAGE_MODEL
self.logger.warning("Second stage model not provided. Defaulting to backdoor.linear_regression.")
def _estimate_effect(self):
estimate_value = None
# First stage
modified_target_estimand = copy.deepcopy(self._target_estimand)
modified_target_estimand.identifier_method = "backdoor"
modified_target_estimand.backdoor_variables = self._target_estimand.mediation_first_stage_confounders
if self._target_estimand.identifier_method == "frontdoor":
modified_target_estimand.outcome_variable = parse_state(self._frontdoor_variables_names)
elif self._target_estimand.identifier_method == "mediation":
modified_target_estimand.outcome_variable = parse_state(self._mediators_names)
first_stage_estimate = self.first_stage_model(
self._data,
modified_target_estimand,
self._treatment_name,
parse_state(modified_target_estimand.outcome_variable),
control_value=self._control_value,
treatment_value=self._treatment_value,
test_significance=self._significance_test,
evaluate_effect_strength=self._effect_strength_eval,
confidence_intervals=self._confidence_intervals,
target_units=self._target_units,
effect_modifiers=self._effect_modifier_names,
**self.method_params,
)._estimate_effect()
# Second Stage
modified_target_estimand = copy.deepcopy(self._target_estimand)
modified_target_estimand.identifier_method = "backdoor"
modified_target_estimand.backdoor_variables = self._target_estimand.mediation_second_stage_confounders
if self._target_estimand.identifier_method == "frontdoor":
modified_target_estimand.treatment_variable = parse_state(self._frontdoor_variables_names)
elif self._target_estimand.identifier_method == "mediation":
modified_target_estimand.treatment_variable = parse_state(self._mediators_names)
second_stage_estimate = self.second_stage_model(
self._data,
modified_target_estimand,
parse_state(modified_target_estimand.treatment_variable),
parse_state(self._outcome_name), # to convert it to array before passing to causal estimator
control_value=self._control_value,
treatment_value=self._treatment_value,
test_significance=self._significance_test,
evaluate_effect_strength=self._effect_strength_eval,
confidence_intervals=self._confidence_intervals,
target_units=self._target_units,
effect_modifiers=self._effect_modifier_names,
**self.method_params,
)._estimate_effect()
# Combining the two estimates
natural_indirect_effect = first_stage_estimate.value * second_stage_estimate.value
# This same estimate is valid for frontdoor as well as mediation (NIE)
estimate_value = natural_indirect_effect
self.symbolic_estimator = self.construct_symbolic_estimator(
first_stage_estimate.realized_estimand_expr,
second_stage_estimate.realized_estimand_expr,
estimand_type=EstimandType.NONPARAMETRIC_NIE,
)
if self._target_estimand.estimand_type == EstimandType.NONPARAMETRIC_NDE:
# Total effect of treatment
modified_target_estimand = copy.deepcopy(self._target_estimand)
modified_target_estimand.identifier_method = "backdoor"
total_effect_estimate = self.second_stage_model(
self._data,
modified_target_estimand,
self._treatment_name,
parse_state(self._outcome_name),
control_value=self._control_value,
treatment_value=self._treatment_value,
test_significance=self._significance_test,
evaluate_effect_strength=self._effect_strength_eval,
confidence_intervals=self._confidence_intervals,
target_units=self._target_units,
effect_modifiers=self._effect_modifier_names,
**self.method_params,
)._estimate_effect()
natural_direct_effect = total_effect_estimate.value - natural_indirect_effect
estimate_value = natural_direct_effect
self.symbolic_estimator = self.construct_symbolic_estimator(
first_stage_estimate.realized_estimand_expr,
second_stage_estimate.realized_estimand_expr,
total_effect_estimate.realized_estimand_expr,
estimand_type=self._target_estimand.estimand_type,
)
return CausalEstimate(
estimate=estimate_value,
control_value=self._control_value,
treatment_value=self._treatment_value,
target_estimand=self._target_estimand,
realized_estimand_expr=self.symbolic_estimator,
)
def build_first_stage_features(self):
data_df = self._data
treatment_vals = data_df[self._treatment_name]
if len(self._observed_common_causes_names) > 0:
observed_common_causes_vals = data_df[self._observed_common_causes_names]
observed_common_causes_vals = pd.get_dummies(observed_common_causes_vals, drop_first=True)
if self._effect_modifier_names:
effect_modifiers_vals = data_df[self._effect_modifier_names]
effect_modifiers_vals = pd.get_dummies(effect_modifiers_vals, drop_first=True)
if type(treatment_vals) is not np.ndarray:
treatment_vals = treatment_vals.to_numpy()
if treatment_vals.shape[0] != data_df.shape[0]:
raise ValueError("Provided treatment values and dataframe should have the same length.")
# Bulding the feature matrix
n_samples = treatment_vals.shape[0]
self.logger.debug("Number of samples" + str(n_samples) + str(len(self._treatment_name)))
treatment_2d = treatment_vals.reshape((n_samples, len(self._treatment_name)))
if len(self._observed_common_causes_names) > 0:
features = np.concatenate((treatment_2d, observed_common_causes_vals), axis=1)
else:
features = treatment_2d
if self._effect_modifier_names:
for i in range(treatment_2d.shape[1]):
curr_treatment = treatment_2d[:, i]
new_features = curr_treatment[:, np.newaxis] * effect_modifiers_vals.to_numpy()
features = np.concatenate((features, new_features), axis=1)
features = features.astype(
float, copy=False
) # converting to float in case of binary treatment and no other variables
# features = sm.add_constant(features, has_constant='add') # to add an intercept term
return features
def construct_symbolic_estimator(
self, first_stage_symbolic, second_stage_symbolic, total_effect_symbolic=None, estimand_type=None
):
nie_symbolic = "(" + first_stage_symbolic + ")*(" + second_stage_symbolic + ")"
if estimand_type == EstimandType.NONPARAMETRIC_NIE:
return nie_symbolic
elif estimand_type == EstimandType.NONPARAMETRIC_NDE:
return "(" + total_effect_symbolic + ") - (" + nie_symbolic + ")"
| import copy
from typing import Any, List, Optional, Type, Union
import numpy as np
import pandas as pd
from dowhy.causal_estimator import CausalEstimate, CausalEstimator
from dowhy.causal_estimators.linear_regression_estimator import LinearRegressionEstimator
from dowhy.causal_identifier import EstimandType, IdentifiedEstimand
from dowhy.utils.api import parse_state
class TwoStageRegressionEstimator(CausalEstimator):
"""Compute treatment effect whenever the effect is fully mediated by
another variable (front-door) or when there is an instrument available.
Currently only supports a linear model for the effects.
Supports additional parameters as listed below.
"""
# First stage statistical model
DEFAULT_FIRST_STAGE_MODEL = LinearRegressionEstimator
# Second stage statistical model
DEFAULT_SECOND_STAGE_MODEL = LinearRegressionEstimator
def __init__(
self,
identified_estimand: IdentifiedEstimand,
test_significance: bool = False,
evaluate_effect_strength: bool = False,
confidence_intervals: bool = False,
num_null_simulations: int = CausalEstimator.DEFAULT_NUMBER_OF_SIMULATIONS_STAT_TEST,
num_simulations: int = CausalEstimator.DEFAULT_NUMBER_OF_SIMULATIONS_CI,
sample_size_fraction: int = CausalEstimator.DEFAULT_SAMPLE_SIZE_FRACTION,
confidence_level: float = CausalEstimator.DEFAULT_CONFIDENCE_LEVEL,
need_conditional_estimates: Union[bool, str] = "auto",
num_quantiles_to_discretize_cont_cols: int = CausalEstimator.NUM_QUANTILES_TO_DISCRETIZE_CONT_COLS,
first_stage_model: Optional[Union[CausalEstimator, Type[CausalEstimator]]] = None,
second_stage_model: Optional[Union[CausalEstimator, Type[CausalEstimator]]] = None,
**kwargs,
):
"""
:param identified_estimand: probability expression
representing the target identified estimand to estimate.
:param test_significance: Binary flag or a string indicating whether to test significance and by which method. All estimators support test_significance="bootstrap" that estimates a p-value for the obtained estimate using the bootstrap method. Individual estimators can override this to support custom testing methods. The bootstrap method supports an optional parameter, num_null_simulations. If False, no testing is done. If True, significance of the estimate is tested using the custom method if available, otherwise by bootstrap.
:param evaluate_effect_strength: (Experimental) whether to evaluate the strength of effect
:param confidence_intervals: Binary flag or a string indicating whether the confidence intervals should be computed and which method should be used. All methods support estimation of confidence intervals using the bootstrap method by using the parameter confidence_intervals="bootstrap". The bootstrap method takes in two arguments (num_simulations and sample_size_fraction) that can be optionally specified in the params dictionary. Estimators may also override this to implement their own confidence interval method. If this parameter is False, no confidence intervals are computed. If True, confidence intervals are computed by the estimator's specific method if available, otherwise through bootstrap
:param num_null_simulations: The number of simulations for testing the
statistical significance of the estimator
:param num_simulations: The number of simulations for finding the
confidence interval (and/or standard error) for a estimate
:param sample_size_fraction: The size of the sample for the bootstrap
estimator
:param confidence_level: The confidence level of the confidence
interval estimate
:param need_conditional_estimates: Boolean flag indicating whether
conditional estimates should be computed. Defaults to True if
there are effect modifiers in the graph
:param num_quantiles_to_discretize_cont_cols: The number of quantiles
into which a numeric effect modifier is split, to enable
estimation of conditional treatment effect over it.
:param first_stage_model: First stage estimator to be used. Default is
linear regression.
:param second_stage_model: Second stage estimator to be used. Default
is linear regression.
:param kwargs: (optional) Additional estimator-specific parameters
"""
super().__init__(
identified_estimand=identified_estimand,
test_significance=test_significance,
evaluate_effect_strength=evaluate_effect_strength,
confidence_intervals=confidence_intervals,
num_null_simulations=num_null_simulations,
num_simulations=num_simulations,
sample_size_fraction=sample_size_fraction,
confidence_level=confidence_level,
need_conditional_estimates=need_conditional_estimates,
num_quantiles_to_discretize_cont_cols=num_quantiles_to_discretize_cont_cols,
first_stage_model=first_stage_model,
second_stage_model=second_stage_model,
**kwargs,
)
self.logger.info("INFO: Using Two Stage Regression Estimator")
# Check if the treatment is one-dimensional
modified_target_estimand = copy.deepcopy(self._target_estimand)
modified_target_estimand.identifier_method = "backdoor"
modified_target_estimand.backdoor_variables = self._target_estimand.mediation_first_stage_confounders
if first_stage_model is not None:
self._first_stage_model = (
first_stage_model
if isinstance(first_stage_model, CausalEstimator)
else first_stage_model(
modified_target_estimand,
test_significance=self._significance_test,
evaluate_effect_strength=self._effect_strength_eval,
confidence_intervals=self._confidence_intervals,
**kwargs,
)
)
else:
self._first_stage_model = self.__class__.DEFAULT_FIRST_STAGE_MODEL(
modified_target_estimand,
test_significance=self._significance_test,
evaluate_effect_strength=self._effect_strength_eval,
confidence_intervals=self._confidence_intervals,
**kwargs,
)
self.logger.warning("First stage model not provided. Defaulting to sklearn.linear_model.LinearRegression.")
modified_target_estimand = copy.deepcopy(self._target_estimand)
modified_target_estimand.identifier_method = "backdoor"
modified_target_estimand.backdoor_variables = self._target_estimand.mediation_second_stage_confounders
if second_stage_model is not None:
self._second_stage_model = (
second_stage_model
if isinstance(second_stage_model, CausalEstimator)
else second_stage_model(
modified_target_estimand,
test_significance=self._significance_test,
evaluate_effect_strength=self._effect_strength_eval,
confidence_intervals=self._confidence_intervals,
**kwargs,
)
)
else:
modified_target_estimand = copy.deepcopy(self._target_estimand)
self._second_stage_model = self.__class__.DEFAULT_SECOND_STAGE_MODEL(
modified_target_estimand,
test_significance=self._significance_test,
evaluate_effect_strength=self._effect_strength_eval,
confidence_intervals=self._confidence_intervals,
**kwargs,
)
self.logger.warning("Second stage model not provided. Defaulting to backdoor.linear_regression.")
if self._target_estimand.estimand_type == EstimandType.NONPARAMETRIC_NDE:
modified_target_estimand.identifier_method = "backdoor"
modified_target_estimand = copy.deepcopy(self._target_estimand)
self._second_stage_model_nde = type(self._second_stage_model)(
modified_target_estimand,
test_significance=self._significance_test,
evaluate_effect_strength=self._effect_strength_eval,
confidence_intervals=self._confidence_intervals,
**kwargs,
)
def fit(
self,
data: pd.DataFrame,
treatment_name: str,
outcome_name: str,
effect_modifier_names: Optional[List[str]] = None,
**_,
):
"""
Fits the estimator with data for effect estimation
:param data: data frame containing the data
:param treatment: name of the treatment variable
:param outcome: name of the outcome variable
:param iv_instrument_name: Name of the specific instrumental variable
to be used. Needs to be one of the IVs identified in the
identification step. Default is to use all the IV variables
from the identification step.
:param effect_modifiers: Variables on which to compute separate
effects, or return a heterogeneous effect function. Not all
methods support this currently.
"""
self._set_data(data, treatment_name, outcome_name)
self._set_effect_modifiers(effect_modifier_names)
if len(self._treatment_name) > 1:
error_msg = str(self.__class__) + "cannot handle more than one treatment variable"
raise Exception(error_msg)
if self._target_estimand.identifier_method == "frontdoor":
self.logger.debug("Front-door variable used:" + ",".join(self._target_estimand.get_frontdoor_variables()))
self._frontdoor_variables_names = self._target_estimand.get_frontdoor_variables()
if self._frontdoor_variables_names:
self._frontdoor_variables = self._data[self._frontdoor_variables_names]
else:
self._frontdoor_variables = None
error_msg = "No front-door variable present. Two stage regression is not applicable"
self.logger.error(error_msg)
elif self._target_estimand.identifier_method == "mediation":
self.logger.debug("Mediators used:" + ",".join(self._target_estimand.get_mediator_variables()))
self._mediators_names = self._target_estimand.get_mediator_variables()
if self._mediators_names:
self._mediators = self._data[self._mediators_names]
else:
self._mediators = None
error_msg = "No mediator variable present. Two stage regression is not applicable"
self.logger.error(error_msg)
elif self._target_estimand.identifier_method == "iv":
self.logger.debug(
"Instrumental variables used:" + ",".join(self._target_estimand.get_instrumental_variables())
)
self._instrumental_variables_names = self._target_estimand.get_instrumental_variables()
if self._instrumental_variables_names:
self._instrumental_variables = self._data[self._instrumental_variables_names]
else:
self._instrumental_variables = None
error_msg = "No instrumental variable present. Two stage regression is not applicable"
self.logger.error(error_msg)
if self._target_estimand.identifier_method == "frontdoor":
self._first_stage_model._target_estimand.outcome_variable = parse_state(self._frontdoor_variables_names)
elif self._target_estimand.identifier_method == "mediation":
self._first_stage_model._target_estimand.outcome_variable = parse_state(self._mediators_names)
self._first_stage_model.fit(
data,
treatment_name,
parse_state(self._first_stage_model._target_estimand.outcome_variable),
effect_modifier_names=effect_modifier_names,
)
if self._target_estimand.identifier_method == "frontdoor":
self._second_stage_model._target_estimand.treatment_variable = parse_state(self._frontdoor_variables_names)
elif self._target_estimand.identifier_method == "mediation":
self._second_stage_model._target_estimand.treatment_variable = parse_state(self._mediators_names)
self._second_stage_model.fit(
data,
parse_state(self._second_stage_model._target_estimand.treatment_variable),
parse_state(self._outcome_name), # to convert it to array before passing to causal estimator)
effect_modifier_names=effect_modifier_names,
)
if self._target_estimand.estimand_type == EstimandType.NONPARAMETRIC_NDE:
self._second_stage_model_nde._target_estimand.identifier_method = "backdoor"
self._second_stage_model_nde.fit(
data,
self._treatment_name,
parse_state(self._outcome_name), # to convert it to array before passing to causal estimator)
effect_modifier_names=effect_modifier_names,
)
return self
def estimate_effect(
self, data: pd.DataFrame = None, treatment_value: Any = 1, control_value: Any = 0, target_units=None, **_
):
if data is None:
data = self._data
self._target_units = target_units
self._treatment_value = treatment_value
self._control_value = control_value
estimate_value = None
# First stage
first_stage_estimate = self._first_stage_model.estimate_effect(
control_value=control_value,
treatment_value=treatment_value,
target_units=target_units,
)
# Second Stage
second_stage_estimate = self._second_stage_model.estimate_effect(
control_value=control_value,
treatment_value=treatment_value,
target_units=target_units,
)
# Combining the two estimates
natural_indirect_effect = first_stage_estimate.value * second_stage_estimate.value
# This same estimate is valid for frontdoor as well as mediation (NIE)
estimate_value = natural_indirect_effect
self.symbolic_estimator = self.construct_symbolic_estimator(
first_stage_estimate.realized_estimand_expr,
second_stage_estimate.realized_estimand_expr,
estimand_type=EstimandType.NONPARAMETRIC_NIE,
)
if self._target_estimand.estimand_type == EstimandType.NONPARAMETRIC_NDE:
total_effect_estimate = self._second_stage_model_nde.estimate_effect(
control_value=control_value, treatment_value=treatment_value, target_units=target_units
)
natural_direct_effect = total_effect_estimate.value - natural_indirect_effect
estimate_value = natural_direct_effect
self.symbolic_estimator = self.construct_symbolic_estimator(
first_stage_estimate.realized_estimand_expr,
second_stage_estimate.realized_estimand_expr,
total_effect_estimate.realized_estimand_expr,
estimand_type=self._target_estimand.estimand_type,
)
estimate = CausalEstimate(
estimate=estimate_value,
control_value=control_value,
treatment_value=treatment_value,
target_estimand=self._target_estimand,
realized_estimand_expr=self.symbolic_estimator,
)
estimate.add_estimator(self)
return estimate
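# A note on the arithmetic above, assuming both stages are linear (a sketch, not
# a general proof): if the first stage fits M = theta_1 * T and the second stage
# fits Y = theta_2 * M, then NIE = theta_1 * theta_2 and NDE = total effect - NIE.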
def build_first_stage_features(self):
data_df = self._data
treatment_vals = data_df[self._treatment_name]
if len(self._observed_common_causes_names) > 0:
observed_common_causes_vals = data_df[self._observed_common_causes_names]
observed_common_causes_vals = pd.get_dummies(observed_common_causes_vals, drop_first=True)
if self._effect_modifier_names:
effect_modifiers_vals = data_df[self._effect_modifier_names]
effect_modifiers_vals = pd.get_dummies(effect_modifiers_vals, drop_first=True)
if type(treatment_vals) is not np.ndarray:
treatment_vals = treatment_vals.to_numpy()
if treatment_vals.shape[0] != data_df.shape[0]:
raise ValueError("Provided treatment values and dataframe should have the same length.")
# Building the feature matrix
n_samples = treatment_vals.shape[0]
self.logger.debug("Number of samples" + str(n_samples) + str(len(self._treatment_name)))
treatment_2d = treatment_vals.reshape((n_samples, len(self._treatment_name)))
if len(self._observed_common_causes_names) > 0:
features = np.concatenate((treatment_2d, observed_common_causes_vals), axis=1)
else:
features = treatment_2d
if self._effect_modifier_names:
for i in range(treatment_2d.shape[1]):
curr_treatment = treatment_2d[:, i]
new_features = curr_treatment[:, np.newaxis] * effect_modifiers_vals.to_numpy()
features = np.concatenate((features, new_features), axis=1)
features = features.astype(
float, copy=False
) # converting to float in case of binary treatment and no other variables
# features = sm.add_constant(features, has_constant='add') # to add an intercept term
return features
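# Layout of the returned matrix (a sketch of the code above): one column per
# treatment, then dummy-encoded common causes, then one treatment-by-effect-modifier
# interaction block per treatment column, i.e. [T | W_dummies | T*X_dummies].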
def construct_symbolic_estimator(
self, first_stage_symbolic, second_stage_symbolic, total_effect_symbolic=None, estimand_type=None
):
nie_symbolic = "(" + first_stage_symbolic + ")*(" + second_stage_symbolic + ")"
if estimand_type == EstimandType.NONPARAMETRIC_NIE:
return nie_symbolic
elif estimand_type == EstimandType.NONPARAMETRIC_NDE:
return "(" + total_effect_symbolic + ") - (" + nie_symbolic + ")"
| andresmor-ms | 11c4e0dafd6e824eb81ad14262457d954ae61468 | affe0952f4aba6845247355c171565510c2c1673 | these lines can be moved to the init method since they are part of initializing the model correctly. | amit-sharma | 247 |
py-why/dowhy | 746 | Functional api/causal estimators | * Introduce `fit()` method to estimators.
* Refactor constructors to avoid using `*args` and `**kwargs` and have more explicit parameters.
* Refactor refuters and other parts of the code to use `fit()` and modify arguments to `estimate_effect()` | null | 2022-11-04 16:15:39+00:00 | 2022-12-03 17:07:53+00:00 | dowhy/causal_estimators/two_stage_regression_estimator.py | import copy
import itertools
import numpy as np
import pandas as pd
from dowhy.causal_estimator import CausalEstimate, CausalEstimator
from dowhy.causal_estimators.linear_regression_estimator import LinearRegressionEstimator
from dowhy.causal_identifier.identify_effect import EstimandType
from dowhy.utils.api import parse_state
class TwoStageRegressionEstimator(CausalEstimator):
"""Compute treatment effect whenever the effect is fully mediated by
another variable (front-door) or when there is an instrument available.
Currently only supports a linear model for the effects.
For a list of standard args and kwargs, see documentation for
:class:`~dowhy.causal_estimator.CausalEstimator`.
Supports additional parameters as listed below.
"""
# First stage statistical model
DEFAULT_FIRST_STAGE_MODEL = LinearRegressionEstimator
# Second stage statistical model
DEFAULT_SECOND_STAGE_MODEL = LinearRegressionEstimator
def __init__(self, *args, first_stage_model=None, second_stage_model=None, **kwargs):
"""
:param first_stage_model: First stage estimator to be used. Default is
linear regression.
:param second_stage_model: Second stage estimator to be used. Default
is linear regression.
"""
# Required to ensure that self.method_params contains all the
# parameters needed to create an object of this class
args_dict = {k: v for k, v in locals().items() if k not in type(self)._STD_INIT_ARGS}
args_dict.update(kwargs)
super().__init__(*args, **args_dict)
self.logger.info("INFO: Using Two Stage Regression Estimator")
# Check if the treatment is one-dimensional
if len(self._treatment_name) > 1:
error_msg = str(self.__class__) + " cannot handle more than one treatment variable"
raise Exception(error_msg)
if self._target_estimand.identifier_method == "frontdoor":
self.logger.debug("Front-door variable used:" + ",".join(self._target_estimand.get_frontdoor_variables()))
self._frontdoor_variables_names = self._target_estimand.get_frontdoor_variables()
if self._frontdoor_variables_names:
self._frontdoor_variables = self._data[self._frontdoor_variables_names]
else:
self._frontdoor_variables = None
error_msg = "No front-door variable present. Two stage regression is not applicable"
self.logger.error(error_msg)
elif self._target_estimand.identifier_method == "mediation":
self.logger.debug("Mediators used:" + ",".join(self._target_estimand.get_mediator_variables()))
self._mediators_names = self._target_estimand.get_mediator_variables()
if self._mediators_names:
self._mediators = self._data[self._mediators_names]
else:
self._mediators = None
error_msg = "No mediator variable present. Two stage regression is not applicable"
self.logger.error(error_msg)
elif self._target_estimand.identifier_method == "iv":
self.logger.debug(
"Instrumental variables used:" + ",".join(self._target_estimand.get_instrumental_variables())
)
self._instrumental_variables_names = self._target_estimand.get_instrumental_variables()
if self._instrumental_variables_names:
self._instrumental_variables = self._data[self._instrumental_variables_names]
else:
self._instrumental_variables = None
error_msg = "No instrumental variable present. Two stage regression is not applicable"
self.logger.error(error_msg)
if first_stage_model is not None:
self.first_stage_model = first_stage_model
else:
self.first_stage_model = self.__class__.DEFAULT_FIRST_STAGE_MODEL
self.logger.warning("First stage model not provided. Defaulting to sklearn.linear_model.LinearRegression.")
if second_stage_model is not None:
self.second_stage_model = second_stage_model
else:
self.second_stage_model = self.__class__.DEFAULT_SECOND_STAGE_MODEL
self.logger.warning("Second stage model not provided. Defaulting to backdoor.linear_regression.")
def _estimate_effect(self):
estimate_value = None
# First stage
modified_target_estimand = copy.deepcopy(self._target_estimand)
modified_target_estimand.identifier_method = "backdoor"
modified_target_estimand.backdoor_variables = self._target_estimand.mediation_first_stage_confounders
if self._target_estimand.identifier_method == "frontdoor":
modified_target_estimand.outcome_variable = parse_state(self._frontdoor_variables_names)
elif self._target_estimand.identifier_method == "mediation":
modified_target_estimand.outcome_variable = parse_state(self._mediators_names)
first_stage_estimate = self.first_stage_model(
self._data,
modified_target_estimand,
self._treatment_name,
parse_state(modified_target_estimand.outcome_variable),
control_value=self._control_value,
treatment_value=self._treatment_value,
test_significance=self._significance_test,
evaluate_effect_strength=self._effect_strength_eval,
confidence_intervals=self._confidence_intervals,
target_units=self._target_units,
effect_modifiers=self._effect_modifier_names,
**self.method_params,
)._estimate_effect()
# Second Stage
modified_target_estimand = copy.deepcopy(self._target_estimand)
modified_target_estimand.identifier_method = "backdoor"
modified_target_estimand.backdoor_variables = self._target_estimand.mediation_second_stage_confounders
if self._target_estimand.identifier_method == "frontdoor":
modified_target_estimand.treatment_variable = parse_state(self._frontdoor_variables_names)
elif self._target_estimand.identifier_method == "mediation":
modified_target_estimand.treatment_variable = parse_state(self._mediators_names)
second_stage_estimate = self.second_stage_model(
self._data,
modified_target_estimand,
parse_state(modified_target_estimand.treatment_variable),
parse_state(self._outcome_name), # to convert it to array before passing to causal estimator
control_value=self._control_value,
treatment_value=self._treatment_value,
test_significance=self._significance_test,
evaluate_effect_strength=self._effect_strength_eval,
confidence_intervals=self._confidence_intervals,
target_units=self._target_units,
effect_modifiers=self._effect_modifier_names,
**self.method_params,
)._estimate_effect()
# Combining the two estimates
natural_indirect_effect = first_stage_estimate.value * second_stage_estimate.value
# This same estimate is valid for frontdoor as well as mediation (NIE)
estimate_value = natural_indirect_effect
self.symbolic_estimator = self.construct_symbolic_estimator(
first_stage_estimate.realized_estimand_expr,
second_stage_estimate.realized_estimand_expr,
estimand_type=EstimandType.NONPARAMETRIC_NIE,
)
if self._target_estimand.estimand_type == EstimandType.NONPARAMETRIC_NDE:
# Total effect of treatment
modified_target_estimand = copy.deepcopy(self._target_estimand)
modified_target_estimand.identifier_method = "backdoor"
total_effect_estimate = self.second_stage_model(
self._data,
modified_target_estimand,
self._treatment_name,
parse_state(self._outcome_name),
control_value=self._control_value,
treatment_value=self._treatment_value,
test_significance=self._significance_test,
evaluate_effect_strength=self._effect_strength_eval,
confidence_intervals=self._confidence_intervals,
target_units=self._target_units,
effect_modifiers=self._effect_modifier_names,
**self.method_params,
)._estimate_effect()
natural_direct_effect = total_effect_estimate.value - natural_indirect_effect
estimate_value = natural_direct_effect
self.symbolic_estimator = self.construct_symbolic_estimator(
first_stage_estimate.realized_estimand_expr,
second_stage_estimate.realized_estimand_expr,
total_effect_estimate.realized_estimand_expr,
estimand_type=self._target_estimand.estimand_type,
)
return CausalEstimate(
estimate=estimate_value,
control_value=self._control_value,
treatment_value=self._treatment_value,
target_estimand=self._target_estimand,
realized_estimand_expr=self.symbolic_estimator,
)
def build_first_stage_features(self):
data_df = self._data
treatment_vals = data_df[self._treatment_name]
if len(self._observed_common_causes_names) > 0:
observed_common_causes_vals = data_df[self._observed_common_causes_names]
observed_common_causes_vals = pd.get_dummies(observed_common_causes_vals, drop_first=True)
if self._effect_modifier_names:
effect_modifiers_vals = data_df[self._effect_modifier_names]
effect_modifiers_vals = pd.get_dummies(effect_modifiers_vals, drop_first=True)
if type(treatment_vals) is not np.ndarray:
treatment_vals = treatment_vals.to_numpy()
if treatment_vals.shape[0] != data_df.shape[0]:
raise ValueError("Provided treatment values and dataframe should have the same length.")
# Building the feature matrix
n_samples = treatment_vals.shape[0]
self.logger.debug("Number of samples" + str(n_samples) + str(len(self._treatment_name)))
treatment_2d = treatment_vals.reshape((n_samples, len(self._treatment_name)))
if len(self._observed_common_causes_names) > 0:
features = np.concatenate((treatment_2d, observed_common_causes_vals), axis=1)
else:
features = treatment_2d
if self._effect_modifier_names:
for i in range(treatment_2d.shape[1]):
curr_treatment = treatment_2d[:, i]
new_features = curr_treatment[:, np.newaxis] * effect_modifiers_vals.to_numpy()
features = np.concatenate((features, new_features), axis=1)
features = features.astype(
float, copy=False
) # converting to float in case of binary treatment and no other variables
# features = sm.add_constant(features, has_constant='add') # to add an intercept term
return features
def construct_symbolic_estimator(
self, first_stage_symbolic, second_stage_symbolic, total_effect_symbolic=None, estimand_type=None
):
nie_symbolic = "(" + first_stage_symbolic + ")*(" + second_stage_symbolic + ")"
if estimand_type == EstimandType.NONPARAMETRIC_NIE:
return nie_symbolic
elif estimand_type == EstimandType.NONPARAMETRIC_NDE:
return "(" + total_effect_symbolic + ") - (" + nie_symbolic + ")"
| import copy
from typing import Any, List, Optional, Type, Union
import numpy as np
import pandas as pd
from dowhy.causal_estimator import CausalEstimate, CausalEstimator
from dowhy.causal_estimators.linear_regression_estimator import LinearRegressionEstimator
from dowhy.causal_identifier import EstimandType, IdentifiedEstimand
from dowhy.utils.api import parse_state
class TwoStageRegressionEstimator(CausalEstimator):
"""Compute treatment effect whenever the effect is fully mediated by
another variable (front-door) or when there is an instrument available.
Currently only supports a linear model for the effects.
Supports additional parameters as listed below.
"""
# First stage statistical model
DEFAULT_FIRST_STAGE_MODEL = LinearRegressionEstimator
# Second stage statistical model
DEFAULT_SECOND_STAGE_MODEL = LinearRegressionEstimator
def __init__(
self,
identified_estimand: IdentifiedEstimand,
test_significance: bool = False,
evaluate_effect_strength: bool = False,
confidence_intervals: bool = False,
num_null_simulations: int = CausalEstimator.DEFAULT_NUMBER_OF_SIMULATIONS_STAT_TEST,
num_simulations: int = CausalEstimator.DEFAULT_NUMBER_OF_SIMULATIONS_CI,
sample_size_fraction: int = CausalEstimator.DEFAULT_SAMPLE_SIZE_FRACTION,
confidence_level: float = CausalEstimator.DEFAULT_CONFIDENCE_LEVEL,
need_conditional_estimates: Union[bool, str] = "auto",
num_quantiles_to_discretize_cont_cols: int = CausalEstimator.NUM_QUANTILES_TO_DISCRETIZE_CONT_COLS,
first_stage_model: Optional[Union[CausalEstimator, Type[CausalEstimator]]] = None,
second_stage_model: Optional[Union[CausalEstimator, Type[CausalEstimator]]] = None,
**kwargs,
):
"""
:param identified_estimand: probability expression
representing the target identified estimand to estimate.
:param test_significance: Binary flag or a string indicating whether to test significance and by which method. All estimators support test_significance="bootstrap" that estimates a p-value for the obtained estimate using the bootstrap method. Individual estimators can override this to support custom testing methods. The bootstrap method supports an optional parameter, num_null_simulations. If False, no testing is done. If True, significance of the estimate is tested using the custom method if available, otherwise by bootstrap.
:param evaluate_effect_strength: (Experimental) whether to evaluate the strength of effect
:param confidence_intervals: Binary flag or a string indicating whether the confidence intervals should be computed and which method should be used. All methods support estimation of confidence intervals using the bootstrap method by using the parameter confidence_intervals="bootstrap". The bootstrap method takes in two arguments (num_simulations and sample_size_fraction) that can be optionally specified in the params dictionary. Estimators may also override this to implement their own confidence interval method. If this parameter is False, no confidence intervals are computed. If True, confidence intervals are computed by the estimator's specific method if available, otherwise through bootstrap
:param num_null_simulations: The number of simulations for testing the
statistical significance of the estimator
:param num_simulations: The number of simulations for finding the
confidence interval (and/or standard error) for an estimate
:param sample_size_fraction: The size of the sample for the bootstrap
estimator
:param confidence_level: The confidence level of the confidence
interval estimate
:param need_conditional_estimates: Boolean flag indicating whether
conditional estimates should be computed. Defaults to True if
there are effect modifiers in the graph
:param num_quantiles_to_discretize_cont_cols: The number of quantiles
into which a numeric effect modifier is split, to enable
estimation of conditional treatment effect over it.
:param first_stage_model: First stage estimator to be used. Default is
linear regression.
:param second_stage_model: Second stage estimator to be used. Default
is linear regression.
:param kwargs: (optional) Additional estimator-specific parameters
"""
super().__init__(
identified_estimand=identified_estimand,
test_significance=test_significance,
evaluate_effect_strength=evaluate_effect_strength,
confidence_intervals=confidence_intervals,
num_null_simulations=num_null_simulations,
num_simulations=num_simulations,
sample_size_fraction=sample_size_fraction,
confidence_level=confidence_level,
need_conditional_estimates=need_conditional_estimates,
num_quantiles_to_discretize_cont_cols=num_quantiles_to_discretize_cont_cols,
first_stage_model=first_stage_model,
second_stage_model=second_stage_model,
**kwargs,
)
self.logger.info("INFO: Using Two Stage Regression Estimator")
# Construct the modified estimand for the first-stage model
modified_target_estimand = copy.deepcopy(self._target_estimand)
modified_target_estimand.identifier_method = "backdoor"
modified_target_estimand.backdoor_variables = self._target_estimand.mediation_first_stage_confounders
if first_stage_model is not None:
self._first_stage_model = (
first_stage_model
if isinstance(first_stage_model, CausalEstimator)
else first_stage_model(
modified_target_estimand,
test_significance=self._significance_test,
evaluate_effect_strength=self._effect_strength_eval,
confidence_intervals=self._confidence_intervals,
**kwargs,
)
)
else:
self._first_stage_model = self.__class__.DEFAULT_FIRST_STAGE_MODEL(
modified_target_estimand,
test_significance=self._significance_test,
evaluate_effect_strength=self._effect_strength_eval,
confidence_intervals=self._confidence_intervals,
**kwargs,
)
self.logger.warning("First stage model not provided. Defaulting to sklearn.linear_model.LinearRegression.")
modified_target_estimand = copy.deepcopy(self._target_estimand)
modified_target_estimand.identifier_method = "backdoor"
modified_target_estimand.backdoor_variables = self._target_estimand.mediation_second_stage_confounders
if second_stage_model is not None:
self._second_stage_model = (
second_stage_model
if isinstance(second_stage_model, CausalEstimator)
else second_stage_model(
modified_target_estimand,
test_significance=self._significance_test,
evaluate_effect_strength=self._effect_strength_eval,
confidence_intervals=self._confidence_intervals,
**kwargs,
)
)
else:
modified_target_estimand = copy.deepcopy(self._target_estimand)
self._second_stage_model = self.__class__.DEFAULT_SECOND_STAGE_MODEL(
modified_target_estimand,
test_significance=self._significance_test,
evaluate_effect_strength=self._effect_strength_eval,
confidence_intervals=self._confidence_intervals,
**kwargs,
)
self.logger.warning("Second stage model not provided. Defaulting to backdoor.linear_regression.")
if self._target_estimand.estimand_type == EstimandType.NONPARAMETRIC_NDE:
modified_target_estimand = copy.deepcopy(self._target_estimand)
modified_target_estimand.identifier_method = "backdoor"
self._second_stage_model_nde = type(self._second_stage_model)(
modified_target_estimand,
test_significance=self._significance_test,
evaluate_effect_strength=self._effect_strength_eval,
confidence_intervals=self._confidence_intervals,
**kwargs,
)
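# A hedged construction sketch (the variable names below are placeholders for
# illustration, not names defined in this module):
#
# estimator = TwoStageRegressionEstimator(
# identified_estimand,
# first_stage_model=LinearRegressionEstimator, # a class or a CausalEstimator instance
# )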
def fit(
self,
data: pd.DataFrame,
treatment_name: str,
outcome_name: str,
effect_modifier_names: Optional[List[str]] = None,
**_,
):
"""
Fits the estimator with data for effect estimation.
:param data: data frame containing the data
:param treatment_name: name of the treatment variable
:param outcome_name: name of the outcome variable
:param effect_modifier_names: Names of effect modifier variables on which to
compute separate effects, or return a heterogeneous effect function. Not all
methods support this currently.
"""
self._set_data(data, treatment_name, outcome_name)
self._set_effect_modifiers(effect_modifier_names)
if len(self._treatment_name) > 1:
error_msg = str(self.__class__) + " cannot handle more than one treatment variable"
raise Exception(error_msg)
if self._target_estimand.identifier_method == "frontdoor":
self.logger.debug("Front-door variable used:" + ",".join(self._target_estimand.get_frontdoor_variables()))
self._frontdoor_variables_names = self._target_estimand.get_frontdoor_variables()
if self._frontdoor_variables_names:
self._frontdoor_variables = self._data[self._frontdoor_variables_names]
else:
self._frontdoor_variables = None
error_msg = "No front-door variable present. Two stage regression is not applicable"
self.logger.error(error_msg)
elif self._target_estimand.identifier_method == "mediation":
self.logger.debug("Mediators used:" + ",".join(self._target_estimand.get_mediator_variables()))
self._mediators_names = self._target_estimand.get_mediator_variables()
if self._mediators_names:
self._mediators = self._data[self._mediators_names]
else:
self._mediators = None
error_msg = "No mediator variable present. Two stage regression is not applicable"
self.logger.error(error_msg)
elif self._target_estimand.identifier_method == "iv":
self.logger.debug(
"Instrumental variables used:" + ",".join(self._target_estimand.get_instrumental_variables())
)
self._instrumental_variables_names = self._target_estimand.get_instrumental_variables()
if self._instrumental_variables_names:
self._instrumental_variables = self._data[self._instrumental_variables_names]
else:
self._instrumental_variables = None
error_msg = "No instrumental variable present. Two stage regression is not applicable"
self.logger.error(error_msg)
if self._target_estimand.identifier_method == "frontdoor":
self._first_stage_model._target_estimand.outcome_variable = parse_state(self._frontdoor_variables_names)
elif self._target_estimand.identifier_method == "mediation":
self._first_stage_model._target_estimand.outcome_variable = parse_state(self._mediators_names)
self._first_stage_model.fit(
data,
treatment_name,
parse_state(self._first_stage_model._target_estimand.outcome_variable),
effect_modifier_names=effect_modifier_names,
)
if self._target_estimand.identifier_method == "frontdoor":
self._second_stage_model._target_estimand.treatment_variable = parse_state(self._frontdoor_variables_names)
elif self._target_estimand.identifier_method == "mediation":
self._second_stage_model._target_estimand.treatment_variable = parse_state(self._mediators_names)
self._second_stage_model.fit(
data,
parse_state(self._second_stage_model._target_estimand.treatment_variable),
parse_state(self._outcome_name), # to convert it to an array before passing to the causal estimator
effect_modifier_names=effect_modifier_names,
)
if self._target_estimand.estimand_type == EstimandType.NONPARAMETRIC_NDE:
self._second_stage_model_nde._target_estimand.identifier_method = "backdoor"
self._second_stage_model_nde.fit(
data,
self._treatment_name,
parse_state(self._outcome_name), # to convert it to an array before passing to the causal estimator
effect_modifier_names=effect_modifier_names,
)
return self
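# Since fit() returns self, construction, fitting and estimation chain naturally
# (illustrative only; `df` and `estimand` are assumed placeholders):
#
# estimate = TwoStageRegressionEstimator(estimand).fit(df, "v0", "y").estimate_effect()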
def estimate_effect(
self, data: pd.DataFrame = None, treatment_value: Any = 1, control_value: Any = 0, target_units=None, **_
):
if data is None:
data = self._data
self._target_units = target_units
self._treatment_value = treatment_value
self._control_value = control_value
estimate_value = None
# First stage
first_stage_estimate = self._first_stage_model.estimate_effect(
control_value=control_value,
treatment_value=treatment_value,
target_units=target_units,
)
# Second Stage
second_stage_estimate = self._second_stage_model.estimate_effect(
control_value=control_value,
treatment_value=treatment_value,
target_units=target_units,
)
# Combining the two estimates
natural_indirect_effect = first_stage_estimate.value * second_stage_estimate.value
# This same estimate is valid for frontdoor as well as mediation (NIE)
estimate_value = natural_indirect_effect
self.symbolic_estimator = self.construct_symbolic_estimator(
first_stage_estimate.realized_estimand_expr,
second_stage_estimate.realized_estimand_expr,
estimand_type=EstimandType.NONPARAMETRIC_NIE,
)
if self._target_estimand.estimand_type == EstimandType.NONPARAMETRIC_NDE:
total_effect_estimate = self._second_stage_model_nde.estimate_effect(
control_value=control_value, treatment_value=treatment_value, target_units=target_units
)
natural_direct_effect = total_effect_estimate.value - natural_indirect_effect
estimate_value = natural_direct_effect
self.symbolic_estimator = self.construct_symbolic_estimator(
first_stage_estimate.realized_estimand_expr,
second_stage_estimate.realized_estimand_expr,
total_effect_estimate.realized_estimand_expr,
estimand_type=self._target_estimand.estimand_type,
)
estimate = CausalEstimate(
estimate=estimate_value,
control_value=control_value,
treatment_value=treatment_value,
target_estimand=self._target_estimand,
realized_estimand_expr=self.symbolic_estimator,
)
estimate.add_estimator(self)
return estimate
def build_first_stage_features(self):
data_df = self._data
treatment_vals = data_df[self._treatment_name]
if len(self._observed_common_causes_names) > 0:
observed_common_causes_vals = data_df[self._observed_common_causes_names]
observed_common_causes_vals = pd.get_dummies(observed_common_causes_vals, drop_first=True)
if self._effect_modifier_names:
effect_modifiers_vals = data_df[self._effect_modifier_names]
effect_modifiers_vals = pd.get_dummies(effect_modifiers_vals, drop_first=True)
if type(treatment_vals) is not np.ndarray:
treatment_vals = treatment_vals.to_numpy()
if treatment_vals.shape[0] != data_df.shape[0]:
raise ValueError("Provided treatment values and dataframe should have the same length.")
# Building the feature matrix
n_samples = treatment_vals.shape[0]
self.logger.debug("Number of samples" + str(n_samples) + str(len(self._treatment_name)))
treatment_2d = treatment_vals.reshape((n_samples, len(self._treatment_name)))
if len(self._observed_common_causes_names) > 0:
features = np.concatenate((treatment_2d, observed_common_causes_vals), axis=1)
else:
features = treatment_2d
if self._effect_modifier_names:
for i in range(treatment_2d.shape[1]):
curr_treatment = treatment_2d[:, i]
new_features = curr_treatment[:, np.newaxis] * effect_modifiers_vals.to_numpy()
features = np.concatenate((features, new_features), axis=1)
features = features.astype(
float, copy=False
) # converting to float in case of binary treatment and no other variables
# features = sm.add_constant(features, has_constant='add') # to add an intercept term
return features
def construct_symbolic_estimator(
self, first_stage_symbolic, second_stage_symbolic, total_effect_symbolic=None, estimand_type=None
):
nie_symbolic = "(" + first_stage_symbolic + ")*(" + second_stage_symbolic + ")"
if estimand_type == EstimandType.NONPARAMETRIC_NIE:
return nie_symbolic
elif estimand_type == EstimandType.NONPARAMETRIC_NDE:
return "(" + total_effect_symbolic + ") - (" + nie_symbolic + ")"
| andresmor-ms | 11c4e0dafd6e824eb81ad14262457d954ae61468 | affe0952f4aba6845247355c171565510c2c1673 | these lines can also be moved to init.
Essentially, init creates the constructor for all these estimators. And then fit just fits them. | amit-sharma | 248 |
py-why/dowhy | 746 | Functional api/causal estimators | * Introduce `fit()` method to estimators.
* Refactor constructors to avoid using `*args` and `**kwargs` and have more explicit parameters.
* Refactor refuters and other parts of the code to use `fit()` and modify arguments to `estimate_effect()` | null | 2022-11-04 16:15:39+00:00 | 2022-12-03 17:07:53+00:00 | dowhy/causal_model.py | """ Module containing the main model class for the dowhy package.
"""
import logging
from itertools import combinations
from sympy import init_printing
import dowhy.causal_estimators as causal_estimators
import dowhy.causal_refuters as causal_refuters
import dowhy.graph_learners as graph_learners
import dowhy.utils.cli_helpers as cli
from dowhy.causal_estimator import CausalEstimate, estimate_effect
from dowhy.causal_graph import CausalGraph
from dowhy.causal_identifier import AutoIdentifier, BackdoorAdjustment, IDIdentifier
from dowhy.causal_identifier.identify_effect import EstimandType
from dowhy.causal_refuters.graph_refuter import GraphRefuter
from dowhy.utils.api import parse_state
init_printing() # To display symbolic math symbols
class CausalModel:
"""Main class for storing the causal model state."""
def __init__(
self,
data,
treatment,
outcome,
graph=None,
common_causes=None,
instruments=None,
effect_modifiers=None,
estimand_type="nonparametric-ate",
proceed_when_unidentifiable=False,
missing_nodes_as_confounders=False,
identify_vars=False,
**kwargs,
):
"""Initialize data and create a causal graph instance.
Assigns treatment and outcome variables.
Also checks and finds the common causes and instruments for treatment
and outcome.
At least one of graph, common_causes or instruments must be provided. If
none of these variables are provided, then learn_graph() can be used later.
:param data: a pandas dataframe containing treatment, outcome and other
variables.
:param treatment: name of the treatment variable
:param outcome: name of the outcome variable
:param graph: path to DOT file containing a DAG or a string containing
a DAG specification in DOT format
:param common_causes: names of common causes of treatment and outcome. Only used when graph is None.
:param instruments: names of instrumental variables for the effect of
treatment on outcome. Only used when graph is None.
:param effect_modifiers: names of variables that can modify the treatment effect. If not provided, then the causal graph is used to find the effect modifiers. Estimators will return multiple different estimates based on each value of effect_modifiers.
:param estimand_type: the type of estimand requested (currently only "nonparametric-ate" is supported). In the future, may support other specific parametric forms of identification.
:param proceed_when_unidentifiable: does the identification proceed by ignoring potential unobserved confounders. Binary flag.
:param missing_nodes_as_confounders: Binary flag indicating whether variables in the dataframe that are not included in the causal graph, should be automatically included as confounder nodes.
:param identify_vars: Variable deciding whether to compute common causes, instruments and effect modifiers while initializing the class. identify_vars should be set to False when the user is providing common_causes, instruments or effect modifiers on their own (otherwise the identify_vars code can override the user-provided values). It also has no effect if no graph is given.
:returns: an instance of CausalModel class
"""
self._data = data
self._treatment = parse_state(treatment)
self._outcome = parse_state(outcome)
self._effect_modifiers = parse_state(effect_modifiers)
self._estimand_type = estimand_type
self._proceed_when_unidentifiable = proceed_when_unidentifiable
self._missing_nodes_as_confounders = missing_nodes_as_confounders
self.logger = logging.getLogger(__name__)
self._estimator_cache = {}
if graph is None:
self.logger.warning("Causal Graph not provided. DoWhy will construct a graph based on data inputs.")
self._common_causes = parse_state(common_causes)
self._instruments = parse_state(instruments)
if common_causes is not None and instruments is not None:
self._graph = CausalGraph(
self._treatment,
self._outcome,
common_cause_names=self._common_causes,
instrument_names=self._instruments,
effect_modifier_names=self._effect_modifiers,
observed_node_names=self._data.columns.tolist(),
)
elif common_causes is not None:
self._graph = CausalGraph(
self._treatment,
self._outcome,
common_cause_names=self._common_causes,
effect_modifier_names=self._effect_modifiers,
observed_node_names=self._data.columns.tolist(),
)
elif instruments is not None:
self._graph = CausalGraph(
self._treatment,
self._outcome,
instrument_names=self._instruments,
effect_modifier_names=self._effect_modifiers,
observed_node_names=self._data.columns.tolist(),
)
else:
self.logger.warning(
"Relevant variables to build causal graph not provided. You may want to use the learn_graph() function to construct the causal graph."
)
self._graph = CausalGraph(
self._treatment,
self._outcome,
effect_modifier_names=self._effect_modifiers,
observed_node_names=self._data.columns.tolist(),
)
else:
self.init_graph(graph=graph, identify_vars=identify_vars)
self._other_variables = kwargs
self.summary()
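# A minimal construction sketch (the dataframe and column names below are
# placeholders for illustration):
#
# model = CausalModel(
# data=df,
# treatment="v0",
# outcome="y",
# common_causes=["w0", "w1"],
# )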
def init_graph(self, graph, identify_vars=False):
"""
Initialize self._graph using graph provided by the user.
"""
# Create causal graph object
self._graph = CausalGraph(
self._treatment,
self._outcome,
graph,
effect_modifier_names=self._effect_modifiers,
observed_node_names=self._data.columns.tolist(),
missing_nodes_as_confounders=self._missing_nodes_as_confounders,
)
if identify_vars:
self._common_causes = self._graph.get_common_causes(self._treatment, self._outcome)
self._instruments = self._graph.get_instruments(self._treatment, self._outcome)
# Sometimes, effect modifiers from the graph may not match those provided by the user.
# (Because some effect modifiers may also be common causes)
# In such cases, the user-provided modifiers are used.
# If no effect modifiers are provided, then the ones from the graph are used.
if self._effect_modifiers is None or not self._effect_modifiers:
self._effect_modifiers = self._graph.get_effect_modifiers(self._treatment, self._outcome)
def get_common_causes(self):
self._common_causes = self._graph.get_common_causes(self._treatment, self._outcome)
return self._common_causes
def get_instruments(self):
self._instruments = self._graph.get_instruments(self._treatment, self._outcome)
return self._instruments
def get_effect_modifiers(self):
self._effect_modifiers = self._graph.get_effect_modifiers(self._treatment, self._outcome)
return self._effect_modifiers
def learn_graph(self, method_name="cdt.causality.graph.LiNGAM", *args, **kwargs):
"""
Learn causal graph from the data. This function takes the method name as input and initializes the
causal graph object using the learnt graph.
:param self: instance of the CausalModel class (or its subclass)
:param method_name: Exact method name of the object to be imported from the concerned library.
:returns: an instance of the CausalGraph class initialized with the learned graph.
"""
# Import causal discovery class
str_arr = method_name.split(".", maxsplit=1)
library_name = str_arr[0]
causal_discovery_class = graph_learners.get_discovery_class_object(library_name)
model = causal_discovery_class(self._data, method_name, *args, **kwargs)
graph = model.learn_graph()
# Initialize causal graph object
self.init_graph(graph=graph)
return self._graph
def identify_effect(
self, estimand_type=None, method_name="default", proceed_when_unidentifiable=None, optimize_backdoor=False
):
"""Identify the causal effect to be estimated, using properties of the causal graph.
:param method_name: Method name for identification algorithm. ("id-algorithm" or "default")
:param proceed_when_unidentifiable: Binary flag indicating whether identification should proceed in the presence of (potential) unobserved confounders.
:returns: a probability expression (estimand) for the causal effect if identified, else NULL
"""
if proceed_when_unidentifiable is None:
proceed_when_unidentifiable = self._proceed_when_unidentifiable
if estimand_type is None:
estimand_type = self._estimand_type
estimand_type = EstimandType(estimand_type)
if method_name == "id-algorithm":
identifier = IDIdentifier()
else:
identifier = AutoIdentifier(
estimand_type=estimand_type,
backdoor_adjustment=BackdoorAdjustment(method_name),
proceed_when_unidentifiable=proceed_when_unidentifiable,
optimize_backdoor=optimize_backdoor,
)
identified_estimand = identifier.identify_effect(
graph=self._graph, treatment_name=self._treatment, outcome_name=self._outcome
)
self.identifier = identifier
return identified_estimand
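# Example call (illustrative; `model` is a CausalModel constructed as above):
#
# identified_estimand = model.identify_effect(proceed_when_unidentifiable=True)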
def estimate_effect(
self,
identified_estimand,
method_name=None,
control_value=0,
treatment_value=1,
test_significance=None,
evaluate_effect_strength=False,
confidence_intervals=False,
target_units="ate",
effect_modifiers=None,
fit_estimator=True,
method_params=None,
):
"""Estimate the identified causal effect.
Currently requires an explicit method name to be specified. Method names follow the convention of identification method followed by the specific estimation method: "[backdoor/iv].estimation_method_name". Following methods are supported.
* Propensity Score Matching: "backdoor.propensity_score_matching"
* Propensity Score Stratification: "backdoor.propensity_score_stratification"
* Propensity Score-based Inverse Weighting: "backdoor.propensity_score_weighting"
* Linear Regression: "backdoor.linear_regression"
* Generalized Linear Models (e.g., logistic regression): "backdoor.generalized_linear_model"
* Instrumental Variables: "iv.instrumental_variable"
* Regression Discontinuity: "iv.regression_discontinuity"
In addition, you can directly call any of the EconML estimation methods. The convention is "backdoor.econml.path-to-estimator-class". For example, for the double machine learning estimator ("DML" class) that is located inside "dml" module of EconML, you can use the method name, "backdoor.econml.dml.DML". CausalML estimators can also be called. See `this demo notebook <https://py-why.github.io/dowhy/example_notebooks/dowhy-conditional-treatment-effects.html>`_.
:param identified_estimand: a probability expression
that represents the effect to be estimated. Output of
CausalModel.identify_effect method
:param method_name: name of the estimation method to be used.
:param control_value: Value of the treatment in the control group, for effect estimation. If treatment is multi-variate, this can be a list.
:param treatment_value: Value of the treatment in the treated group, for effect estimation. If treatment is multi-variate, this can be a list.
:param test_significance: Binary flag on whether to additionally do a statistical significance test for the estimate.
:param evaluate_effect_strength: (Experimental) Binary flag on whether to estimate the relative strength of the treatment's effect. This measure can be used to compare different treatments for the same outcome (by running this method with different treatments sequentially).
:param confidence_intervals: (Experimental) Binary flag indicating whether confidence intervals should be computed.
:param target_units: (Experimental) The units for which the treatment effect should be estimated. This can be of three types. (1) a string for common specifications of target units (namely, "ate", "att" and "atc"), (2) a lambda function that can be used as an index for the data (pandas DataFrame), or (3) a new DataFrame that contains values of the effect_modifiers and effect will be estimated only for this new data.
:param effect_modifiers: Names of effect modifier variables can be (optionally) specified here too, since they do not affect identification. If None, the effect_modifiers from the CausalModel are used.
:param fit_estimator: Boolean flag on whether to fit the estimator.
Setting it to False is useful to estimate the effect on new data using a previously fitted estimator.
:param method_params: Dictionary containing any method-specific parameters. These are passed directly to the estimating method. See the docs for each estimation method for allowed method-specific params.
:returns: An instance of the CausalEstimate class, containing the causal effect estimate
and other method-dependent information
"""
if effect_modifiers is None or len(effect_modifiers) == 0:
effect_modifiers = self._graph.get_effect_modifiers(self._treatment, self._outcome)
if method_name is None:
# TODO add propensity score as default backdoor method, iv as default iv method, add an informational message to show which method has been selected.
pass
else:
# TODO add dowhy as a prefix to all dowhy estimators
num_components = len(method_name.split("."))
str_arr = method_name.split(".", maxsplit=1)
identifier_name = str_arr[0]
estimator_name = str_arr[1]
# This is done as all dowhy estimator names have two parts and external ones have three or more parts
if num_components > 2:
estimator_package = estimator_name.split(".")[0]
if estimator_package == "dowhy": # For updated dowhy methods
estimator_method = estimator_name.split(".", maxsplit=1)[
1
] # discard dowhy from the full package name
causal_estimator_class = causal_estimators.get_class_object(estimator_method + "_estimator")
else:
third_party_estimator_package = estimator_package
causal_estimator_class = causal_estimators.get_class_object(
third_party_estimator_package, estimator_name
)
if method_params is None:
method_params = {}
# Define the third-party estimation method to be used
method_params[third_party_estimator_package + "_methodname"] = estimator_name
else: # For older dowhy methods
self.logger.info(estimator_name)
# Process the dowhy estimators
causal_estimator_class = causal_estimators.get_class_object(estimator_name + "_estimator")
if method_params is not None and (num_components <= 2 or estimator_package == "dowhy"):
extra_args = method_params.get("init_params", {})
else:
extra_args = {}
if method_params is None:
method_params = {}
identified_estimand.set_identifier_method(identifier_name)
if not fit_estimator and method_name in self._estimator_cache:
causal_estimator = self._estimator_cache[method_name]
else:
causal_estimator = causal_estimator_class(
self._data,
identified_estimand,
self._treatment,
self._outcome, # names of treatment and outcome
control_value=control_value,
treatment_value=treatment_value,
test_significance=test_significance,
evaluate_effect_strength=evaluate_effect_strength,
confidence_intervals=confidence_intervals,
target_units=target_units,
effect_modifiers=effect_modifiers,
**method_params,
**extra_args,
)
self._estimator_cache[method_name] = causal_estimator
return estimate_effect(
self._treatment,
self._outcome,
identified_estimand,
identifier_name,
causal_estimator,
control_value,
treatment_value,
test_significance,
evaluate_effect_strength,
confidence_intervals,
target_units,
effect_modifiers,
fit_estimator,
method_params,
)
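# Example call (illustrative; uses one of the method names listed in the
# docstring above):
#
# estimate = model.estimate_effect(
# identified_estimand,
# method_name="backdoor.propensity_score_matching",
# target_units="att",
# )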
def do(self, x, identified_estimand, method_name=None, fit_estimator=True, method_params=None):
"""Do operator for estimating values of the outcome after intervening on treatment.
:param x: interventional value of the treatment variable
:param identified_estimand: a probability expression
that represents the effect to be estimated. Output of
CausalModel.identify_effect method
:param method_name: any of the estimation method to be used. See docs
for estimate_effect method for a list of supported estimation methods.
:param fit_estimator: Boolean flag on whether to fit the estimator.
Setting it to False is useful to compute the do-operation on new
data using a previously fitted estimator.
:param method_params: Dictionary containing any method-specific parameters. These are passed directly to the estimating method.
:returns: an instance of the CausalEstimate class, containing the causal effect estimate
and other method-dependent information
"""
if method_name is None:
pass
else:
str_arr = method_name.split(".", maxsplit=1)
identifier_name = str_arr[0]
estimator_name = str_arr[1]
identified_estimand.set_identifier_method(identifier_name)
causal_estimator_class = causal_estimators.get_class_object(estimator_name + "_estimator")
# Check if estimator's target estimand is identified
if identified_estimand.estimands[identifier_name] is None:
self.logger.warning("No valid identified estimand for using instrumental variables method")
estimate = CausalEstimate(None, None, None, None, None)
else:
if fit_estimator:
# Note that while the name of the variable is the same,
# "self.causal_estimator", this estimator takes in less
# parameters than the same from the
# estimate_effect code. It is not advisable to use the
# estimator from this function to call estimate_effect
# with fit_estimator=False.
self.causal_estimator = causal_estimator_class(
self._data,
identified_estimand,
self._treatment,
self._outcome,
test_significance=False,
**method_params,
)
else:
# Estimator had been computed in a previous call
assert self.causal_estimator is not None
try:
estimate = self.causal_estimator.do(x)
except NotImplementedError:
self.logger.error("Do Operation not implemented or not supported for this estimator.")
raise NotImplementedError
return estimate
def refute_estimate(self, estimand, estimate, method_name=None, show_progress_bar=False, **kwargs):
"""Refute an estimated causal effect.
If method_name is provided, uses the provided method. In the future, we may support automatic selection of suitable refutation tests. Following refutation methods are supported.
* Adding a randomly-generated confounder: "random_common_cause"
* Adding a confounder that is associated with both treatment and outcome: "add_unobserved_common_cause"
* Replacing the treatment with a placebo (random) variable: "placebo_treatment_refuter"
* Removing a random subset of the data: "data_subset_refuter"
:param estimand: target estimand, an instance of the IdentifiedEstimand class (typically, the output of identify_effect)
:param estimate: estimate to be refuted, an instance of the CausalEstimate class (typically, the output of estimate_effect)
:param method_name: name of the refutation method
:param show_progress_bar: Boolean flag on whether to show a progress bar
:param kwargs: (optional) additional arguments that are passed directly to the refutation method. Can specify a random seed here to ensure reproducible results ('random_seed' parameter). For method-specific parameters, consult the documentation for the specific method. All refutation methods are in the causal_refuters subpackage.
:returns: an instance of the RefuteResult class
"""
if estimate is None or estimate.value is None:
self.logger.error("Aborting refutation! No estimate is provided.")
raise ValueError("Aborting refutation! No valid estimate is provided.")
if method_name is None:
pass
else:
refuter_class = causal_refuters.get_class_object(method_name)
refuter = refuter_class(self._data, identified_estimand=estimand, estimate=estimate, **kwargs)
res = refuter.refute_estimate(show_progress_bar)
return res
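# Example call (illustrative; uses a refuter name listed in the docstring above):
#
# refutation = model.refute_estimate(
# identified_estimand, estimate, method_name="placebo_treatment_refuter"
# )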
def view_model(self, layout="dot", size=(8, 6), file_name="causal_model"):
"""View the causal DAG.
:param layout: string specifying the layout of the graph.
:param size: tuple (x, y) specifying the width and height of the figure in inches.
:param file_name: string specifying the file name for the saved causal graph png.
:returns: a visualization of the graph
"""
self._graph.view_graph(layout, size, file_name)
def interpret(self, method_name=None, **kwargs):
"""Interpret the causal model.
:param method_name: method used for interpreting the model. If None,
then default interpreter is chosen that describes the model summary and shows the associated causal graph.
:param kwargs: Optional parameters that are directly passed to the interpreter method.
:returns: None
"""
if method_name is None:
self.summary(print_to_stdout=True)
self.view_model()
return
method_name_arr = parse_state(method_name)
import dowhy.interpreters as interpreters
for method in method_name_arr:
interpreter = interpreters.get_class_object(method)
interpreter(self, **kwargs).interpret()
def summary(self, print_to_stdout=False):
"""Print a text summary of the model.
:returns: a string containing the summary
"""
summary_text = "Model to find the causal effect of treatment {0} on outcome {1}".format(
self._treatment, self._outcome
)
self.logger.info(summary_text)
if print_to_stdout:
print(summary_text)
return summary_text
def refute_graph(self, k=1, independence_test=None, independence_constraints=None):
"""
Check if the dependencies in the input graph match the dataset -
( X ⫫ Y ) | Z
where X and Y are currently considered as singleton sets and
Z can have multiple variables
:param k: number of covariates in set Z
:param independence_test: dictionary containing methods to test conditional independence in data
:param independence_constraints: list of implications to be tested, input by the user in the format
[(x,y,(z1,z2)),
(x,y, (z3,))
]
:returns: an instance of the GraphRefuter class
"""
if independence_test is not None:
test_for_continuous = independence_test["test_for_continuous"]
test_for_discrete = independence_test["test_for_discrete"]
refuter = GraphRefuter(
data=self._data, method_name_continuous=test_for_continuous, method_name_discrete=test_for_discrete
)
else:
refuter = GraphRefuter(data=self._data)
if independence_constraints is None:
all_nodes = list(self._graph.get_all_nodes(include_unobserved=False))
num_nodes = len(all_nodes)
array_indices = list(range(0, num_nodes))
all_possible_combinations = list(
combinations(array_indices, 2)
) # Generating sets of indices of size 2 for different x and y
conditional_independences = []
self.logger.info("The followed conditional independences are true for the input graph")
for combination in all_possible_combinations: # Iterate over the unique 2-sized sets [x,y]
i = combination[0]
j = combination[1]
a = all_nodes[i]
b = all_nodes[j]
if i < j:
temp_arr = all_nodes[:i] + all_nodes[i + 1 : j] + all_nodes[j + 1 :]
else:
temp_arr = all_nodes[:j] + all_nodes[j + 1 : i] + all_nodes[i + 1 :]
k_sized_lists = list(combinations(temp_arr, k))
for k_list in k_sized_lists:
if self._graph.check_dseparation([str(a)], [str(b)], k_list):
self.logger.info(" %s and %s are CI given %s ", a, b, k_list)
conditional_independences.append([a, b, k_list])
independence_constraints = conditional_independences
res = refuter.refute_model(independence_constraints=independence_constraints)
self.logger.info(refuter._refutation_passed)
return res
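# Example call (illustrative), testing the implication ( x ⫫ y ) | {z1, z2} in the
# constraint format documented above:
#
# res = model.refute_graph(k=2, independence_constraints=[("x", "y", ("z1", "z2"))])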
| """ Module containing the main model class for the dowhy package.
"""
import logging
from itertools import combinations
from sympy import init_printing
import dowhy.causal_estimators as causal_estimators
import dowhy.causal_refuters as causal_refuters
import dowhy.graph_learners as graph_learners
import dowhy.utils.cli_helpers as cli
from dowhy.causal_estimator import CausalEstimate, estimate_effect
from dowhy.causal_graph import CausalGraph
from dowhy.causal_identifier import AutoIdentifier, BackdoorAdjustment, IDIdentifier
from dowhy.causal_identifier.identify_effect import EstimandType
from dowhy.causal_refuters.graph_refuter import GraphRefuter
from dowhy.utils.api import parse_state
init_printing() # To display symbolic math symbols
class CausalModel:
"""Main class for storing the causal model state."""
def __init__(
self,
data,
treatment,
outcome,
graph=None,
common_causes=None,
instruments=None,
effect_modifiers=None,
estimand_type="nonparametric-ate",
proceed_when_unidentifiable=False,
missing_nodes_as_confounders=False,
identify_vars=False,
**kwargs,
):
"""Initialize data and create a causal graph instance.
Assigns treatment and outcome variables.
Also checks and finds the common causes and instruments for treatment
and outcome.
At least one of graph, common_causes or instruments must be provided. If
none of these variables are provided, then learn_graph() can be used later.
:param data: a pandas dataframe containing treatment, outcome and other
variables.
:param treatment: name of the treatment variable
:param outcome: name of the outcome variable
:param graph: path to DOT file containing a DAG or a string containing
a DAG specification in DOT format
:param common_causes: names of common causes of treatment and outcome. Only used when graph is None.
:param instruments: names of instrumental variables for the effect of
treatment on outcome. Only used when graph is None.
:param effect_modifiers: names of variables that can modify the treatment effect. If not provided, then the causal graph is used to find the effect modifiers. Estimators will return multiple different estimates based on each value of effect_modifiers.
:param estimand_type: the type of estimand requested (currently only "nonparametric-ate" is supported). In the future, may support other specific parametric forms of identification.
:param proceed_when_unidentifiable: does the identification proceed by ignoring potential unobserved confounders. Binary flag.
:param missing_nodes_as_confounders: Binary flag indicating whether variables in the dataframe that are not included in the causal graph, should be automatically included as confounder nodes.
:param identify_vars: Variable deciding whether to compute common causes, instruments and effect modifiers while initializing the class. identify_vars should be set to False when the user is providing common_causes, instruments or effect modifiers on their own (otherwise the identify_vars code can override the user-provided values). It also has no effect if no graph is given.
:returns: an instance of CausalModel class
"""
self._data = data
self._treatment = parse_state(treatment)
self._outcome = parse_state(outcome)
self._effect_modifiers = parse_state(effect_modifiers)
self._estimand_type = estimand_type
self._proceed_when_unidentifiable = proceed_when_unidentifiable
self._missing_nodes_as_confounders = missing_nodes_as_confounders
self.logger = logging.getLogger(__name__)
self._estimator_cache = {}
if graph is None:
self.logger.warning("Causal Graph not provided. DoWhy will construct a graph based on data inputs.")
self._common_causes = parse_state(common_causes)
self._instruments = parse_state(instruments)
if common_causes is not None and instruments is not None:
self._graph = CausalGraph(
self._treatment,
self._outcome,
common_cause_names=self._common_causes,
instrument_names=self._instruments,
effect_modifier_names=self._effect_modifiers,
observed_node_names=self._data.columns.tolist(),
)
elif common_causes is not None:
self._graph = CausalGraph(
self._treatment,
self._outcome,
common_cause_names=self._common_causes,
effect_modifier_names=self._effect_modifiers,
observed_node_names=self._data.columns.tolist(),
)
elif instruments is not None:
self._graph = CausalGraph(
self._treatment,
self._outcome,
instrument_names=self._instruments,
effect_modifier_names=self._effect_modifiers,
observed_node_names=self._data.columns.tolist(),
)
else:
self.logger.warning(
"Relevant variables to build causal graph not provided. You may want to use the learn_graph() function to construct the causal graph."
)
self._graph = CausalGraph(
self._treatment,
self._outcome,
effect_modifier_names=self._effect_modifiers,
observed_node_names=self._data.columns.tolist(),
)
else:
self.init_graph(graph=graph, identify_vars=identify_vars)
self._other_variables = kwargs
self.summary()
def init_graph(self, graph, identify_vars=False):
"""
Initialize self._graph using graph provided by the user.
"""
# Create causal graph object
self._graph = CausalGraph(
self._treatment,
self._outcome,
graph,
effect_modifier_names=self._effect_modifiers,
observed_node_names=self._data.columns.tolist(),
missing_nodes_as_confounders=self._missing_nodes_as_confounders,
)
if identify_vars:
self._common_causes = self._graph.get_common_causes(self._treatment, self._outcome)
self._instruments = self._graph.get_instruments(self._treatment, self._outcome)
# Sometimes, effect modifiers from the graph may not match those provided by the user.
# (Because some effect modifiers may also be common causes)
# In such cases, the user-provided modifiers are used.
# If no effect modifiers are provided, then the ones from the graph are used.
if self._effect_modifiers is None or not self._effect_modifiers:
self._effect_modifiers = self._graph.get_effect_modifiers(self._treatment, self._outcome)
def get_common_causes(self):
self._common_causes = self._graph.get_common_causes(self._treatment, self._outcome)
return self._common_causes
def get_instruments(self):
self._instruments = self._graph.get_instruments(self._treatment, self._outcome)
return self._instruments
def get_effect_modifiers(self):
self._effect_modifiers = self._graph.get_effect_modifiers(self._treatment, self._outcome)
return self._effect_modifiers
def learn_graph(self, method_name="cdt.causality.graph.LiNGAM", *args, **kwargs):
"""
Learn causal graph from the data. This function takes the method name as input and initializes the
causal graph object using the learnt graph.
:param self: instance of the CausalModel class (or its subclass)
:param method_name: Exact method name of the object to be imported from the concerned library.
:returns: an instance of the CausalGraph class initialized with the learned graph.
"""
# Import causal discovery class
str_arr = method_name.split(".", maxsplit=1)
library_name = str_arr[0]
causal_discovery_class = graph_learners.get_discovery_class_object(library_name)
model = causal_discovery_class(self._data, method_name, *args, **kwargs)
graph = model.learn_graph()
# Initialize causal graph object
        self.init_graph(graph=graph, identify_vars=False)
return self._graph
def identify_effect(
self, estimand_type=None, method_name="default", proceed_when_unidentifiable=None, optimize_backdoor=False
):
"""Identify the causal effect to be estimated, using properties of the causal graph.
:param method_name: Method name for identification algorithm. ("id-algorithm" or "default")
:param proceed_when_unidentifiable: Binary flag indicating whether identification should proceed in the presence of (potential) unobserved confounders.
:returns: a probability expression (estimand) for the causal effect if identified, else NULL
"""
if proceed_when_unidentifiable is None:
proceed_when_unidentifiable = self._proceed_when_unidentifiable
if estimand_type is None:
estimand_type = self._estimand_type
estimand_type = EstimandType(estimand_type)
if method_name == "id-algorithm":
identifier = IDIdentifier()
else:
identifier = AutoIdentifier(
estimand_type=estimand_type,
backdoor_adjustment=BackdoorAdjustment(method_name),
proceed_when_unidentifiable=proceed_when_unidentifiable,
optimize_backdoor=optimize_backdoor,
)
identified_estimand = identifier.identify_effect(
graph=self._graph, treatment_name=self._treatment, outcome_name=self._outcome
)
self.identifier = identifier
return identified_estimand
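    # A hedged usage sketch: the identification step as it would typically be
    # invoked on the model above; "default" selects the AutoIdentifier and
    # "id-algorithm" the ID identifier, per the docstring.
    #
    #     identified_estimand = model.identify_effect(proceed_when_unidentifiable=True)
    #     print(identified_estimand)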
def estimate_effect(
self,
identified_estimand,
method_name=None,
control_value=0,
treatment_value=1,
test_significance=None,
evaluate_effect_strength=False,
confidence_intervals=False,
target_units="ate",
effect_modifiers=None,
fit_estimator=True,
method_params=None,
):
"""Estimate the identified causal effect.
        Currently requires an explicit method name to be specified. Method names follow the convention of identification method followed by the specific estimation method: "[backdoor/iv].estimation_method_name". The following methods are supported.
* Propensity Score Matching: "backdoor.propensity_score_matching"
* Propensity Score Stratification: "backdoor.propensity_score_stratification"
* Propensity Score-based Inverse Weighting: "backdoor.propensity_score_weighting"
* Linear Regression: "backdoor.linear_regression"
* Generalized Linear Models (e.g., logistic regression): "backdoor.generalized_linear_model"
* Instrumental Variables: "iv.instrumental_variable"
* Regression Discontinuity: "iv.regression_discontinuity"
In addition, you can directly call any of the EconML estimation methods. The convention is "backdoor.econml.path-to-estimator-class". For example, for the double machine learning estimator ("DML" class) that is located inside "dml" module of EconML, you can use the method name, "backdoor.econml.dml.DML". CausalML estimators can also be called. See `this demo notebook <https://py-why.github.io/dowhy/example_notebooks/dowhy-conditional-treatment-effects.html>`_.
:param identified_estimand: a probability expression
that represents the effect to be estimated. Output of
CausalModel.identify_effect method
:param method_name: name of the estimation method to be used.
:param control_value: Value of the treatment in the control group, for effect estimation. If treatment is multi-variate, this can be a list.
:param treatment_value: Value of the treatment in the treated group, for effect estimation. If treatment is multi-variate, this can be a list.
        :param test_significance: Binary flag on whether to additionally do a statistical significance test for the estimate.
:param evaluate_effect_strength: (Experimental) Binary flag on whether to estimate the relative strength of the treatment's effect. This measure can be used to compare different treatments for the same outcome (by running this method with different treatments sequentially).
:param confidence_intervals: (Experimental) Binary flag indicating whether confidence intervals should be computed.
:param target_units: (Experimental) The units for which the treatment effect should be estimated. This can be of three types. (1) a string for common specifications of target units (namely, "ate", "att" and "atc"), (2) a lambda function that can be used as an index for the data (pandas DataFrame), or (3) a new DataFrame that contains values of the effect_modifiers and effect will be estimated only for this new data.
:param effect_modifiers: Names of effect modifier variables can be (optionally) specified here too, since they do not affect identification. If None, the effect_modifiers from the CausalModel are used.
:param fit_estimator: Boolean flag on whether to fit the estimator.
Setting it to False is useful to estimate the effect on new data using a previously fitted estimator.
:param method_params: Dictionary containing any method-specific parameters. These are passed directly to the estimating method. See the docs for each estimation method for allowed method-specific params.
:returns: An instance of the CausalEstimate class, containing the causal effect estimate
and other method-dependent information
"""
if effect_modifiers is None or len(effect_modifiers) == 0:
effect_modifiers = self._graph.get_effect_modifiers(self._treatment, self._outcome)
if method_name is None:
# TODO add propensity score as default backdoor method, iv as default iv method, add an informational message to show which method has been selected.
pass
else:
# TODO add dowhy as a prefix to all dowhy estimators
num_components = len(method_name.split("."))
str_arr = method_name.split(".", maxsplit=1)
identifier_name = str_arr[0]
estimator_name = str_arr[1]
            # This is done as all dowhy estimators have two parts and external ones have three or more parts
if num_components > 2:
estimator_package = estimator_name.split(".")[0]
if estimator_package == "dowhy": # For updated dowhy methods
estimator_method = estimator_name.split(".", maxsplit=1)[
1
] # discard dowhy from the full package name
causal_estimator_class = causal_estimators.get_class_object(estimator_method + "_estimator")
else:
third_party_estimator_package = estimator_package
causal_estimator_class = causal_estimators.get_class_object(
third_party_estimator_package, estimator_name
)
if method_params is None:
method_params = {}
# Define the third-party estimation method to be used
method_params[third_party_estimator_package + "_estimator"] = estimator_name
else: # For older dowhy methods
self.logger.info(estimator_name)
# Process the dowhy estimators
causal_estimator_class = causal_estimators.get_class_object(estimator_name + "_estimator")
if method_params is not None and (num_components <= 2 or estimator_package == "dowhy"):
extra_args = method_params.get("init_params", {})
else:
extra_args = {}
if method_params is None:
method_params = {}
identified_estimand.set_identifier_method(identifier_name)
if not fit_estimator and method_name in self._estimator_cache:
causal_estimator = self._estimator_cache[method_name]
else:
causal_estimator = causal_estimator_class(
identified_estimand,
**method_params,
**extra_args,
)
self._estimator_cache[method_name] = causal_estimator
return estimate_effect(
self._data,
self._treatment,
self._outcome,
identifier_name,
causal_estimator,
control_value,
treatment_value,
target_units,
effect_modifiers,
fit_estimator,
method_params,
)
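    # A hedged usage sketch: a typical estimation call; estimator-specific
    # constructor arguments travel inside method_params["init_params"], which is
    # what populates `extra_args` above (left empty here; method name taken from
    # the docstring's supported list).
    #
    #     estimate = model.estimate_effect(
    #         identified_estimand,
    #         method_name="backdoor.propensity_score_matching",
    #         method_params={"init_params": {}},
    #     )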
def do(self, x, identified_estimand, method_name=None, fit_estimator=True, method_params=None):
"""Do operator for estimating values of the outcome after intervening on treatment.
:param x: interventional value of the treatment variable
:param identified_estimand: a probability expression
that represents the effect to be estimated. Output of
CausalModel.identify_effect method
        :param method_name: any of the estimation methods to be used. See docs
            for the estimate_effect method for a list of supported estimation methods.
:param fit_estimator: Boolean flag on whether to fit the estimator.
Setting it to False is useful to compute the do-operation on new
data using a previously fitted estimator.
:param method_params: Dictionary containing any method-specific parameters. These are passed directly to the estimating method.
:returns: an instance of the CausalEstimate class, containing the causal effect estimate
and other method-dependent information
"""
if method_name is None:
pass
else:
str_arr = method_name.split(".", maxsplit=1)
identifier_name = str_arr[0]
estimator_name = str_arr[1]
identified_estimand.set_identifier_method(identifier_name)
causal_estimator_class = causal_estimators.get_class_object(estimator_name + "_estimator")
# Check if estimator's target estimand is identified
if identified_estimand.estimands[identifier_name] is None:
            self.logger.warning("No valid identified estimand available for the chosen identification method")
estimate = CausalEstimate(None, None, None, None, None)
else:
if fit_estimator:
# Note that while the name of the variable is the same,
                # "self.causal_estimator", this estimator takes in fewer
                # parameters than the one constructed in the
# estimate_effect code. It is not advisable to use the
# estimator from this function to call estimate_effect
# with fit_estimator=False.
self.causal_estimator = causal_estimator_class(
identified_estimand,
**method_params,
)
self.causal_estimator.fit(
self._data,
self._treatment,
self._outcome,
)
else:
# Estimator had been computed in a previous call
assert self.causal_estimator is not None
try:
estimate = self.causal_estimator.do(x)
except NotImplementedError:
self.logger.error("Do Operation not implemented or not supported for this estimator.")
raise NotImplementedError
return estimate
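    # A hedged usage sketch: the do-operator as it would typically be called,
    # reusing an estimation method name from estimate_effect(); here `1` is the
    # interventional treatment value.
    #
    #     interventional_outcome = model.do(
    #         1, identified_estimand,
    #         method_name="backdoor.linear_regression", method_params={},
    #     )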
def refute_estimate(self, estimand, estimate, method_name=None, show_progress_bar=False, **kwargs):
"""Refute an estimated causal effect.
        If method_name is provided, uses the provided method. In the future, we may support automatic selection of suitable refutation tests. The following refutation methods are supported.
* Adding a randomly-generated confounder: "random_common_cause"
* Adding a confounder that is associated with both treatment and outcome: "add_unobserved_common_cause"
        * Replacing the treatment with a placebo (random) variable: "placebo_treatment_refuter"
* Removing a random subset of the data: "data_subset_refuter"
:param estimand: target estimand, an instance of the IdentifiedEstimand class (typically, the output of identify_effect)
:param estimate: estimate to be refuted, an instance of the CausalEstimate class (typically, the output of estimate_effect)
:param method_name: name of the refutation method
:param show_progress_bar: Boolean flag on whether to show a progress bar
:param kwargs: (optional) additional arguments that are passed directly to the refutation method. Can specify a random seed here to ensure reproducible results ('random_seed' parameter). For method-specific parameters, consult the documentation for the specific method. All refutation methods are in the causal_refuters subpackage.
:returns: an instance of the RefuteResult class
"""
if estimate is None or estimate.value is None:
self.logger.error("Aborting refutation! No estimate is provided.")
raise ValueError("Aborting refutation! No valid estimate is provided.")
if method_name is None:
pass
else:
refuter_class = causal_refuters.get_class_object(method_name)
refuter = refuter_class(self._data, identified_estimand=estimand, estimate=estimate, **kwargs)
res = refuter.refute_estimate(show_progress_bar)
return res
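    # A hedged usage sketch: refuting the estimate with one of the documented
    # methods; `random_seed` travels through **kwargs for reproducibility, as the
    # docstring notes.
    #
    #     refutation = model.refute_estimate(
    #         identified_estimand, estimate,
    #         method_name="placebo_treatment_refuter", random_seed=100,
    #     )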
def view_model(self, layout="dot", size=(8, 6), file_name="causal_model"):
"""View the causal DAG.
:param layout: string specifying the layout of the graph.
:param size: tuple (x, y) specifying the width and height of the figure in inches.
:param file_name: string specifying the file name for the saved causal graph png.
:returns: a visualization of the graph
"""
self._graph.view_graph(layout, size, file_name)
def interpret(self, method_name=None, **kwargs):
"""Interpret the causal model.
:param method_name: method used for interpreting the model. If None,
then default interpreter is chosen that describes the model summary and shows the associated causal graph.
        :param kwargs: Optional parameters that are directly passed to the interpreter method.
:returns: None
"""
if method_name is None:
self.summary(print_to_stdout=True)
self.view_model()
return
method_name_arr = parse_state(method_name)
import dowhy.interpreters as interpreters
for method in method_name_arr:
interpreter = interpreters.get_class_object(method)
interpreter(self, **kwargs).interpret()
def summary(self, print_to_stdout=False):
"""Print a text summary of the model.
        :returns: a string containing the summary
"""
summary_text = "Model to find the causal effect of treatment {0} on outcome {1}".format(
self._treatment, self._outcome
)
self.logger.info(summary_text)
if print_to_stdout:
print(summary_text)
return summary_text
def refute_graph(self, k=1, independence_test=None, independence_constraints=None):
"""
        Check if the dependencies in the input graph match the dataset -
( X ⫫ Y ) | Z
where X and Y are considered as singleton sets currently
Z can have multiple variables
:param k: number of covariates in set Z
        :param independence_test: dictionary containing methods to test conditional independence in data
        :param independence_constraints: list of implications to be tested, input by the user in the format
[(x,y,(z1,z2)),
(x,y, (z3,))
]
        :returns: an instance of GraphRefuter class
"""
if independence_test is not None:
test_for_continuous = independence_test["test_for_continuous"]
test_for_discrete = independence_test["test_for_discrete"]
refuter = GraphRefuter(
data=self._data, method_name_continuous=test_for_continuous, method_name_discrete=test_for_discrete
)
else:
refuter = GraphRefuter(data=self._data)
if independence_constraints is None:
all_nodes = list(self._graph.get_all_nodes(include_unobserved=False))
num_nodes = len(all_nodes)
array_indices = list(range(0, num_nodes))
all_possible_combinations = list(
combinations(array_indices, 2)
) # Generating sets of indices of size 2 for different x and y
conditional_independences = []
            self.logger.info("The following conditional independences are true for the input graph")
for combination in all_possible_combinations: # Iterate over the unique 2-sized sets [x,y]
i = combination[0]
j = combination[1]
a = all_nodes[i]
b = all_nodes[j]
if i < j:
temp_arr = all_nodes[:i] + all_nodes[i + 1 : j] + all_nodes[j + 1 :]
else:
temp_arr = all_nodes[:j] + all_nodes[j + 1 : i] + all_nodes[i + 1 :]
k_sized_lists = list(combinations(temp_arr, k))
for k_list in k_sized_lists:
if self._graph.check_dseparation([str(a)], [str(b)], k_list) == True:
self.logger.info(" %s and %s are CI given %s ", a, b, k_list)
conditional_independences.append([a, b, k_list])
independence_constraints = conditional_independences
res = refuter.refute_model(independence_constraints=independence_constraints)
self.logger.info(refuter._refutation_passed)
return res
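    # A hedged usage sketch of refute_graph() with explicit conditional-independence
    # tests; the test-name strings are assumptions borrowed from the dowhy docs and
    # may differ across versions.
    #
    #     result = model.refute_graph(
    #         k=1,
    #         independence_test={"test_for_continuous": "partial_correlation",
    #                            "test_for_discrete": "conditional_mutual_information"},
    #     )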
| andresmor-ms | 11c4e0dafd6e824eb81ad14262457d954ae61468 | affe0952f4aba6845247355c171565510c2c1673 | If I understand correctly, we do not pass the actual params here because we expect the `estimate_effect` call to take care of it, right? | amit-sharma | 249 |
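A minimal sketch of the pattern the reviewer is describing, using placeholder names (data, treatment_name, outcome_name); the actual call sites are in the diff above. The estimator is constructed from the estimand and method_params alone, and the effect-specific arguments are supplied later through estimate_effect():

causal_estimator = causal_estimator_class(identified_estimand, **method_params)
estimate = estimate_effect(
    data, treatment_name, outcome_name, identifier_name, causal_estimator,
    control_value, treatment_value, target_units, effect_modifiers,
    fit_estimator, method_params,
)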
py-why/dowhy | 746 | Functional api/causal estimators | * Introduce `fit()` method to estimators.
* Refactor constructors to avoid using `*args` and `**kwargs` and have more explicit parameters.
* Refactor refuters and other parts of the code to use `fit()` and modify arguments to `estimate_effect()` | null | 2022-11-04 16:15:39+00:00 | 2022-12-03 17:07:53+00:00 | dowhy/causal_model.py | """ Module containing the main model class for the dowhy package.
"""
import logging
from itertools import combinations
from sympy import init_printing
import dowhy.causal_estimators as causal_estimators
import dowhy.causal_refuters as causal_refuters
import dowhy.graph_learners as graph_learners
import dowhy.utils.cli_helpers as cli
from dowhy.causal_estimator import CausalEstimate, estimate_effect
from dowhy.causal_graph import CausalGraph
from dowhy.causal_identifier import AutoIdentifier, BackdoorAdjustment, IDIdentifier
from dowhy.causal_identifier.identify_effect import EstimandType
from dowhy.causal_refuters.graph_refuter import GraphRefuter
from dowhy.utils.api import parse_state
init_printing() # To display symbolic math symbols
class CausalModel:
"""Main class for storing the causal model state."""
def __init__(
self,
data,
treatment,
outcome,
graph=None,
common_causes=None,
instruments=None,
effect_modifiers=None,
estimand_type="nonparametric-ate",
proceed_when_unidentifiable=False,
missing_nodes_as_confounders=False,
identify_vars=False,
**kwargs,
):
"""Initialize data and create a causal graph instance.
Assigns treatment and outcome variables.
Also checks and finds the common causes and instruments for treatment
and outcome.
At least one of graph, common_causes or instruments must be provided. If
none of these variables are provided, then learn_graph() can be used later.
:param data: a pandas dataframe containing treatment, outcome and other
variables.
:param treatment: name of the treatment variable
:param outcome: name of the outcome variable
:param graph: path to DOT file containing a DAG or a string containing
a DAG specification in DOT format
        :param common_causes: names of common causes of treatment and outcome. Only used when graph is None.
:param instruments: names of instrumental variables for the effect of
treatment on outcome. Only used when graph is None.
:param effect_modifiers: names of variables that can modify the treatment effect. If not provided, then the causal graph is used to find the effect modifiers. Estimators will return multiple different estimates based on each value of effect_modifiers.
:param estimand_type: the type of estimand requested (currently only "nonparametric-ate" is supported). In the future, may support other specific parametric forms of identification.
:param proceed_when_unidentifiable: does the identification proceed by ignoring potential unobserved confounders. Binary flag.
:param missing_nodes_as_confounders: Binary flag indicating whether variables in the dataframe that are not included in the causal graph, should be automatically included as confounder nodes.
        :param identify_vars: Variable deciding whether to compute common causes, instruments and effect modifiers while initializing the class. identify_vars should be set to False when the user is providing common_causes, instruments or effect modifiers on their own (otherwise the identify_vars code can override the user-provided values). It also has no effect if no graph is given.
:returns: an instance of CausalModel class
"""
self._data = data
self._treatment = parse_state(treatment)
self._outcome = parse_state(outcome)
self._effect_modifiers = parse_state(effect_modifiers)
self._estimand_type = estimand_type
self._proceed_when_unidentifiable = proceed_when_unidentifiable
self._missing_nodes_as_confounders = missing_nodes_as_confounders
self.logger = logging.getLogger(__name__)
self._estimator_cache = {}
if graph is None:
self.logger.warning("Causal Graph not provided. DoWhy will construct a graph based on data inputs.")
self._common_causes = parse_state(common_causes)
self._instruments = parse_state(instruments)
if common_causes is not None and instruments is not None:
self._graph = CausalGraph(
self._treatment,
self._outcome,
common_cause_names=self._common_causes,
instrument_names=self._instruments,
effect_modifier_names=self._effect_modifiers,
observed_node_names=self._data.columns.tolist(),
)
elif common_causes is not None:
self._graph = CausalGraph(
self._treatment,
self._outcome,
common_cause_names=self._common_causes,
effect_modifier_names=self._effect_modifiers,
observed_node_names=self._data.columns.tolist(),
)
elif instruments is not None:
self._graph = CausalGraph(
self._treatment,
self._outcome,
instrument_names=self._instruments,
effect_modifier_names=self._effect_modifiers,
observed_node_names=self._data.columns.tolist(),
)
else:
self.logger.warning(
"Relevant variables to build causal graph not provided. You may want to use the learn_graph() function to construct the causal graph."
)
self._graph = CausalGraph(
self._treatment,
self._outcome,
effect_modifier_names=self._effect_modifiers,
observed_node_names=self._data.columns.tolist(),
)
else:
self.init_graph(graph=graph, identify_vars=identify_vars)
self._other_variables = kwargs
self.summary()
def init_graph(self, graph, identify_vars):
"""
Initialize self._graph using graph provided by the user.
"""
# Create causal graph object
self._graph = CausalGraph(
self._treatment,
self._outcome,
graph,
effect_modifier_names=self._effect_modifiers,
observed_node_names=self._data.columns.tolist(),
missing_nodes_as_confounders=self._missing_nodes_as_confounders,
)
if identify_vars:
self._common_causes = self._graph.get_common_causes(self._treatment, self._outcome)
self._instruments = self._graph.get_instruments(self._treatment, self._outcome)
# Sometimes, effect modifiers from the graph may not match those provided by the user.
# (Because some effect modifiers may also be common causes)
# In such cases, the user-provided modifiers are used.
# If no effect modifiers are provided, then the ones from the graph are used.
if self._effect_modifiers is None or not self._effect_modifiers:
self._effect_modifiers = self._graph.get_effect_modifiers(self._treatment, self._outcome)
def get_common_causes(self):
self._common_causes = self._graph.get_common_causes(self._treatment, self._outcome)
return self._common_causes
def get_instruments(self):
self._instruments = self._graph.get_instruments(self._treatment, self._outcome)
return self._instruments
def get_effect_modifiers(self):
self._effect_modifiers = self._graph.get_effect_modifiers(self._treatment, self._outcome)
return self._effect_modifiers
def learn_graph(self, method_name="cdt.causality.graph.LiNGAM", *args, **kwargs):
"""
Learn causal graph from the data. This function takes the method name as input and initializes the
causal graph object using the learnt graph.
:param self: instance of the CausalModel class (or its subclass)
:param method_name: Exact method name of the object to be imported from the concerned library.
:returns: an instance of the CausalGraph class initialized with the learned graph.
"""
# Import causal discovery class
str_arr = method_name.split(".", maxsplit=1)
library_name = str_arr[0]
causal_discovery_class = graph_learners.get_discovery_class_object(library_name)
model = causal_discovery_class(self._data, method_name, *args, **kwargs)
graph = model.learn_graph()
# Initialize causal graph object
        self.init_graph(graph=graph, identify_vars=False)
return self._graph
def identify_effect(
self, estimand_type=None, method_name="default", proceed_when_unidentifiable=None, optimize_backdoor=False
):
"""Identify the causal effect to be estimated, using properties of the causal graph.
:param method_name: Method name for identification algorithm. ("id-algorithm" or "default")
:param proceed_when_unidentifiable: Binary flag indicating whether identification should proceed in the presence of (potential) unobserved confounders.
:returns: a probability expression (estimand) for the causal effect if identified, else NULL
"""
if proceed_when_unidentifiable is None:
proceed_when_unidentifiable = self._proceed_when_unidentifiable
if estimand_type is None:
estimand_type = self._estimand_type
estimand_type = EstimandType(estimand_type)
if method_name == "id-algorithm":
identifier = IDIdentifier()
else:
identifier = AutoIdentifier(
estimand_type=estimand_type,
backdoor_adjustment=BackdoorAdjustment(method_name),
proceed_when_unidentifiable=proceed_when_unidentifiable,
optimize_backdoor=optimize_backdoor,
)
identified_estimand = identifier.identify_effect(
graph=self._graph, treatment_name=self._treatment, outcome_name=self._outcome
)
self.identifier = identifier
return identified_estimand
def estimate_effect(
self,
identified_estimand,
method_name=None,
control_value=0,
treatment_value=1,
test_significance=None,
evaluate_effect_strength=False,
confidence_intervals=False,
target_units="ate",
effect_modifiers=None,
fit_estimator=True,
method_params=None,
):
"""Estimate the identified causal effect.
        Currently requires an explicit method name to be specified. Method names follow the convention of identification method followed by the specific estimation method: "[backdoor/iv].estimation_method_name". The following methods are supported.
* Propensity Score Matching: "backdoor.propensity_score_matching"
* Propensity Score Stratification: "backdoor.propensity_score_stratification"
* Propensity Score-based Inverse Weighting: "backdoor.propensity_score_weighting"
* Linear Regression: "backdoor.linear_regression"
* Generalized Linear Models (e.g., logistic regression): "backdoor.generalized_linear_model"
* Instrumental Variables: "iv.instrumental_variable"
* Regression Discontinuity: "iv.regression_discontinuity"
In addition, you can directly call any of the EconML estimation methods. The convention is "backdoor.econml.path-to-estimator-class". For example, for the double machine learning estimator ("DML" class) that is located inside "dml" module of EconML, you can use the method name, "backdoor.econml.dml.DML". CausalML estimators can also be called. See `this demo notebook <https://py-why.github.io/dowhy/example_notebooks/dowhy-conditional-treatment-effects.html>`_.
:param identified_estimand: a probability expression
that represents the effect to be estimated. Output of
CausalModel.identify_effect method
:param method_name: name of the estimation method to be used.
:param control_value: Value of the treatment in the control group, for effect estimation. If treatment is multi-variate, this can be a list.
:param treatment_value: Value of the treatment in the treated group, for effect estimation. If treatment is multi-variate, this can be a list.
        :param test_significance: Binary flag on whether to additionally do a statistical significance test for the estimate.
:param evaluate_effect_strength: (Experimental) Binary flag on whether to estimate the relative strength of the treatment's effect. This measure can be used to compare different treatments for the same outcome (by running this method with different treatments sequentially).
:param confidence_intervals: (Experimental) Binary flag indicating whether confidence intervals should be computed.
:param target_units: (Experimental) The units for which the treatment effect should be estimated. This can be of three types. (1) a string for common specifications of target units (namely, "ate", "att" and "atc"), (2) a lambda function that can be used as an index for the data (pandas DataFrame), or (3) a new DataFrame that contains values of the effect_modifiers and effect will be estimated only for this new data.
:param effect_modifiers: Names of effect modifier variables can be (optionally) specified here too, since they do not affect identification. If None, the effect_modifiers from the CausalModel are used.
:param fit_estimator: Boolean flag on whether to fit the estimator.
Setting it to False is useful to estimate the effect on new data using a previously fitted estimator.
:param method_params: Dictionary containing any method-specific parameters. These are passed directly to the estimating method. See the docs for each estimation method for allowed method-specific params.
:returns: An instance of the CausalEstimate class, containing the causal effect estimate
and other method-dependent information
"""
if effect_modifiers is None or len(effect_modifiers) == 0:
effect_modifiers = self._graph.get_effect_modifiers(self._treatment, self._outcome)
if method_name is None:
# TODO add propensity score as default backdoor method, iv as default iv method, add an informational message to show which method has been selected.
pass
else:
# TODO add dowhy as a prefix to all dowhy estimators
num_components = len(method_name.split("."))
str_arr = method_name.split(".", maxsplit=1)
identifier_name = str_arr[0]
estimator_name = str_arr[1]
            # This is done as all dowhy estimators have two parts and external ones have three or more parts
if num_components > 2:
estimator_package = estimator_name.split(".")[0]
if estimator_package == "dowhy": # For updated dowhy methods
estimator_method = estimator_name.split(".", maxsplit=1)[
1
] # discard dowhy from the full package name
causal_estimator_class = causal_estimators.get_class_object(estimator_method + "_estimator")
else:
third_party_estimator_package = estimator_package
causal_estimator_class = causal_estimators.get_class_object(
third_party_estimator_package, estimator_name
)
if method_params is None:
method_params = {}
# Define the third-party estimation method to be used
method_params[third_party_estimator_package + "_methodname"] = estimator_name
else: # For older dowhy methods
self.logger.info(estimator_name)
# Process the dowhy estimators
causal_estimator_class = causal_estimators.get_class_object(estimator_name + "_estimator")
if method_params is not None and (num_components <= 2 or estimator_package == "dowhy"):
extra_args = method_params.get("init_params", {})
else:
extra_args = {}
if method_params is None:
method_params = {}
identified_estimand.set_identifier_method(identifier_name)
if not fit_estimator and method_name in self._estimator_cache:
causal_estimator = self._estimator_cache[method_name]
else:
causal_estimator = causal_estimator_class(
self._data,
identified_estimand,
self._treatment,
self._outcome, # names of treatment and outcome
control_value=control_value,
treatment_value=treatment_value,
test_significance=test_significance,
evaluate_effect_strength=evaluate_effect_strength,
confidence_intervals=confidence_intervals,
target_units=target_units,
effect_modifiers=effect_modifiers,
**method_params,
**extra_args,
)
self._estimator_cache[method_name] = causal_estimator
return estimate_effect(
self._treatment,
self._outcome,
identified_estimand,
identifier_name,
causal_estimator,
control_value,
treatment_value,
test_significance,
evaluate_effect_strength,
confidence_intervals,
target_units,
effect_modifiers,
fit_estimator,
method_params,
)
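    # A hedged editorial sketch: in this pre-PR version the estimator is built
    # with the data and effect parameters up front, roughly:
    #
    #     causal_estimator = causal_estimator_class(
    #         self._data, identified_estimand, self._treatment, self._outcome,
    #         control_value=control_value, treatment_value=treatment_value,
    #     )
    #
    # The PR above moves the data-dependent arguments into fit()/estimate_effect().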
def do(self, x, identified_estimand, method_name=None, fit_estimator=True, method_params=None):
"""Do operator for estimating values of the outcome after intervening on treatment.
:param x: interventional value of the treatment variable
:param identified_estimand: a probability expression
that represents the effect to be estimated. Output of
CausalModel.identify_effect method
        :param method_name: any of the estimation methods to be used. See docs
            for the estimate_effect method for a list of supported estimation methods.
:param fit_estimator: Boolean flag on whether to fit the estimator.
Setting it to False is useful to compute the do-operation on new
data using a previously fitted estimator.
:param method_params: Dictionary containing any method-specific parameters. These are passed directly to the estimating method.
:returns: an instance of the CausalEstimate class, containing the causal effect estimate
and other method-dependent information
"""
if method_name is None:
pass
else:
str_arr = method_name.split(".", maxsplit=1)
identifier_name = str_arr[0]
estimator_name = str_arr[1]
identified_estimand.set_identifier_method(identifier_name)
causal_estimator_class = causal_estimators.get_class_object(estimator_name + "_estimator")
# Check if estimator's target estimand is identified
if identified_estimand.estimands[identifier_name] is None:
            self.logger.warning("No valid identified estimand available for the chosen identification method")
estimate = CausalEstimate(None, None, None, None, None)
else:
if fit_estimator:
# Note that while the name of the variable is the same,
                # "self.causal_estimator", this estimator takes in fewer
                # parameters than the one constructed in the
# estimate_effect code. It is not advisable to use the
# estimator from this function to call estimate_effect
# with fit_estimator=False.
self.causal_estimator = causal_estimator_class(
self._data,
identified_estimand,
self._treatment,
self._outcome,
test_significance=False,
**method_params,
)
else:
# Estimator had been computed in a previous call
assert self.causal_estimator is not None
try:
estimate = self.causal_estimator.do(x)
except NotImplementedError:
self.logger.error("Do Operation not implemented or not supported for this estimator.")
raise NotImplementedError
return estimate
def refute_estimate(self, estimand, estimate, method_name=None, show_progress_bar=False, **kwargs):
"""Refute an estimated causal effect.
        If method_name is provided, uses the provided method. In the future, we may support automatic selection of suitable refutation tests. The following refutation methods are supported.
* Adding a randomly-generated confounder: "random_common_cause"
* Adding a confounder that is associated with both treatment and outcome: "add_unobserved_common_cause"
        * Replacing the treatment with a placebo (random) variable: "placebo_treatment_refuter"
* Removing a random subset of the data: "data_subset_refuter"
:param estimand: target estimand, an instance of the IdentifiedEstimand class (typically, the output of identify_effect)
:param estimate: estimate to be refuted, an instance of the CausalEstimate class (typically, the output of estimate_effect)
:param method_name: name of the refutation method
:param show_progress_bar: Boolean flag on whether to show a progress bar
:param kwargs: (optional) additional arguments that are passed directly to the refutation method. Can specify a random seed here to ensure reproducible results ('random_seed' parameter). For method-specific parameters, consult the documentation for the specific method. All refutation methods are in the causal_refuters subpackage.
:returns: an instance of the RefuteResult class
"""
if estimate is None or estimate.value is None:
self.logger.error("Aborting refutation! No estimate is provided.")
raise ValueError("Aborting refutation! No valid estimate is provided.")
if method_name is None:
pass
else:
refuter_class = causal_refuters.get_class_object(method_name)
refuter = refuter_class(self._data, identified_estimand=estimand, estimate=estimate, **kwargs)
res = refuter.refute_estimate(show_progress_bar)
return res
def view_model(self, layout="dot", size=(8, 6), file_name="causal_model"):
"""View the causal DAG.
:param layout: string specifying the layout of the graph.
:param size: tuple (x, y) specifying the width and height of the figure in inches.
:param file_name: string specifying the file name for the saved causal graph png.
:returns: a visualization of the graph
"""
self._graph.view_graph(layout, size, file_name)
def interpret(self, method_name=None, **kwargs):
"""Interpret the causal model.
:param method_name: method used for interpreting the model. If None,
then default interpreter is chosen that describes the model summary and shows the associated causal graph.
        :param kwargs: Optional parameters that are directly passed to the interpreter method.
:returns: None
"""
if method_name is None:
self.summary(print_to_stdout=True)
self.view_model()
return
method_name_arr = parse_state(method_name)
import dowhy.interpreters as interpreters
for method in method_name_arr:
interpreter = interpreters.get_class_object(method)
interpreter(self, **kwargs).interpret()
def summary(self, print_to_stdout=False):
"""Print a text summary of the model.
        :returns: a string containing the summary
"""
summary_text = "Model to find the causal effect of treatment {0} on outcome {1}".format(
self._treatment, self._outcome
)
self.logger.info(summary_text)
if print_to_stdout:
print(summary_text)
return summary_text
def refute_graph(self, k=1, independence_test=None, independence_constraints=None):
"""
        Check if the dependencies in the input graph match the dataset -
( X ⫫ Y ) | Z
where X and Y are considered as singleton sets currently
Z can have multiple variables
:param k: number of covariates in set Z
        :param independence_test: dictionary containing methods to test conditional independence in data
        :param independence_constraints: list of implications to be tested, input by the user in the format
[(x,y,(z1,z2)),
(x,y, (z3,))
]
        :returns: an instance of GraphRefuter class
"""
if independence_test is not None:
test_for_continuous = independence_test["test_for_continuous"]
test_for_discrete = independence_test["test_for_discrete"]
refuter = GraphRefuter(
data=self._data, method_name_continuous=test_for_continuous, method_name_discrete=test_for_discrete
)
else:
refuter = GraphRefuter(data=self._data)
if independence_constraints is None:
all_nodes = list(self._graph.get_all_nodes(include_unobserved=False))
num_nodes = len(all_nodes)
array_indices = list(range(0, num_nodes))
all_possible_combinations = list(
combinations(array_indices, 2)
) # Generating sets of indices of size 2 for different x and y
conditional_independences = []
            self.logger.info("The following conditional independences are true for the input graph")
for combination in all_possible_combinations: # Iterate over the unique 2-sized sets [x,y]
i = combination[0]
j = combination[1]
a = all_nodes[i]
b = all_nodes[j]
if i < j:
temp_arr = all_nodes[:i] + all_nodes[i + 1 : j] + all_nodes[j + 1 :]
else:
temp_arr = all_nodes[:j] + all_nodes[j + 1 : i] + all_nodes[i + 1 :]
k_sized_lists = list(combinations(temp_arr, k))
for k_list in k_sized_lists:
if self._graph.check_dseparation([str(a)], [str(b)], k_list) == True:
self.logger.info(" %s and %s are CI given %s ", a, b, k_list)
conditional_independences.append([a, b, k_list])
independence_constraints = conditional_independences
res = refuter.refute_model(independence_constraints=independence_constraints)
self.logger.info(refuter._refutation_passed)
return res
| """ Module containing the main model class for the dowhy package.
"""
import logging
from itertools import combinations
from sympy import init_printing
import dowhy.causal_estimators as causal_estimators
import dowhy.causal_refuters as causal_refuters
import dowhy.graph_learners as graph_learners
import dowhy.utils.cli_helpers as cli
from dowhy.causal_estimator import CausalEstimate, estimate_effect
from dowhy.causal_graph import CausalGraph
from dowhy.causal_identifier import AutoIdentifier, BackdoorAdjustment, IDIdentifier
from dowhy.causal_identifier.identify_effect import EstimandType
from dowhy.causal_refuters.graph_refuter import GraphRefuter
from dowhy.utils.api import parse_state
init_printing() # To display symbolic math symbols
class CausalModel:
"""Main class for storing the causal model state."""
def __init__(
self,
data,
treatment,
outcome,
graph=None,
common_causes=None,
instruments=None,
effect_modifiers=None,
estimand_type="nonparametric-ate",
proceed_when_unidentifiable=False,
missing_nodes_as_confounders=False,
identify_vars=False,
**kwargs,
):
"""Initialize data and create a causal graph instance.
Assigns treatment and outcome variables.
Also checks and finds the common causes and instruments for treatment
and outcome.
At least one of graph, common_causes or instruments must be provided. If
none of these variables are provided, then learn_graph() can be used later.
:param data: a pandas dataframe containing treatment, outcome and other
variables.
:param treatment: name of the treatment variable
:param outcome: name of the outcome variable
:param graph: path to DOT file containing a DAG or a string containing
a DAG specification in DOT format
        :param common_causes: names of common causes of treatment and outcome. Only used when graph is None.
:param instruments: names of instrumental variables for the effect of
treatment on outcome. Only used when graph is None.
:param effect_modifiers: names of variables that can modify the treatment effect. If not provided, then the causal graph is used to find the effect modifiers. Estimators will return multiple different estimates based on each value of effect_modifiers.
:param estimand_type: the type of estimand requested (currently only "nonparametric-ate" is supported). In the future, may support other specific parametric forms of identification.
:param proceed_when_unidentifiable: does the identification proceed by ignoring potential unobserved confounders. Binary flag.
:param missing_nodes_as_confounders: Binary flag indicating whether variables in the dataframe that are not included in the causal graph, should be automatically included as confounder nodes.
        :param identify_vars: Variable deciding whether to compute common causes, instruments and effect modifiers while initializing the class. identify_vars should be set to False when the user is providing common_causes, instruments or effect modifiers on their own (otherwise the identify_vars code can override the user-provided values). It also has no effect if no graph is given.
:returns: an instance of CausalModel class
"""
self._data = data
self._treatment = parse_state(treatment)
self._outcome = parse_state(outcome)
self._effect_modifiers = parse_state(effect_modifiers)
self._estimand_type = estimand_type
self._proceed_when_unidentifiable = proceed_when_unidentifiable
self._missing_nodes_as_confounders = missing_nodes_as_confounders
self.logger = logging.getLogger(__name__)
self._estimator_cache = {}
if graph is None:
self.logger.warning("Causal Graph not provided. DoWhy will construct a graph based on data inputs.")
self._common_causes = parse_state(common_causes)
self._instruments = parse_state(instruments)
if common_causes is not None and instruments is not None:
self._graph = CausalGraph(
self._treatment,
self._outcome,
common_cause_names=self._common_causes,
instrument_names=self._instruments,
effect_modifier_names=self._effect_modifiers,
observed_node_names=self._data.columns.tolist(),
)
elif common_causes is not None:
self._graph = CausalGraph(
self._treatment,
self._outcome,
common_cause_names=self._common_causes,
effect_modifier_names=self._effect_modifiers,
observed_node_names=self._data.columns.tolist(),
)
elif instruments is not None:
self._graph = CausalGraph(
self._treatment,
self._outcome,
instrument_names=self._instruments,
effect_modifier_names=self._effect_modifiers,
observed_node_names=self._data.columns.tolist(),
)
else:
self.logger.warning(
"Relevant variables to build causal graph not provided. You may want to use the learn_graph() function to construct the causal graph."
)
self._graph = CausalGraph(
self._treatment,
self._outcome,
effect_modifier_names=self._effect_modifiers,
observed_node_names=self._data.columns.tolist(),
)
else:
self.init_graph(graph=graph, identify_vars=identify_vars)
self._other_variables = kwargs
self.summary()
def init_graph(self, graph, identify_vars):
"""
Initialize self._graph using graph provided by the user.
"""
# Create causal graph object
self._graph = CausalGraph(
self._treatment,
self._outcome,
graph,
effect_modifier_names=self._effect_modifiers,
observed_node_names=self._data.columns.tolist(),
missing_nodes_as_confounders=self._missing_nodes_as_confounders,
)
if identify_vars:
self._common_causes = self._graph.get_common_causes(self._treatment, self._outcome)
self._instruments = self._graph.get_instruments(self._treatment, self._outcome)
# Sometimes, effect modifiers from the graph may not match those provided by the user.
# (Because some effect modifiers may also be common causes)
# In such cases, the user-provided modifiers are used.
# If no effect modifiers are provided, then the ones from the graph are used.
if self._effect_modifiers is None or not self._effect_modifiers:
self._effect_modifiers = self._graph.get_effect_modifiers(self._treatment, self._outcome)
def get_common_causes(self):
self._common_causes = self._graph.get_common_causes(self._treatment, self._outcome)
return self._common_causes
def get_instruments(self):
self._instruments = self._graph.get_instruments(self._treatment, self._outcome)
return self._instruments
def get_effect_modifiers(self):
self._effect_modifiers = self._graph.get_effect_modifiers(self._treatment, self._outcome)
return self._effect_modifiers
def learn_graph(self, method_name="cdt.causality.graph.LiNGAM", *args, **kwargs):
"""
Learn causal graph from the data. This function takes the method name as input and initializes the
causal graph object using the learnt graph.
:param self: instance of the CausalModel class (or its subclass)
:param method_name: Exact method name of the object to be imported from the concerned library.
:returns: an instance of the CausalGraph class initialized with the learned graph.
"""
# Import causal discovery class
str_arr = method_name.split(".", maxsplit=1)
library_name = str_arr[0]
causal_discovery_class = graph_learners.get_discovery_class_object(library_name)
model = causal_discovery_class(self._data, method_name, *args, **kwargs)
graph = model.learn_graph()
# Initialize causal graph object
        self.init_graph(graph=graph, identify_vars=False)
return self._graph
def identify_effect(
self, estimand_type=None, method_name="default", proceed_when_unidentifiable=None, optimize_backdoor=False
):
"""Identify the causal effect to be estimated, using properties of the causal graph.
:param method_name: Method name for identification algorithm. ("id-algorithm" or "default")
:param proceed_when_unidentifiable: Binary flag indicating whether identification should proceed in the presence of (potential) unobserved confounders.
:returns: a probability expression (estimand) for the causal effect if identified, else NULL
"""
if proceed_when_unidentifiable is None:
proceed_when_unidentifiable = self._proceed_when_unidentifiable
if estimand_type is None:
estimand_type = self._estimand_type
estimand_type = EstimandType(estimand_type)
if method_name == "id-algorithm":
identifier = IDIdentifier()
else:
identifier = AutoIdentifier(
estimand_type=estimand_type,
backdoor_adjustment=BackdoorAdjustment(method_name),
proceed_when_unidentifiable=proceed_when_unidentifiable,
optimize_backdoor=optimize_backdoor,
)
identified_estimand = identifier.identify_effect(
graph=self._graph, treatment_name=self._treatment, outcome_name=self._outcome
)
self.identifier = identifier
return identified_estimand
def estimate_effect(
self,
identified_estimand,
method_name=None,
control_value=0,
treatment_value=1,
test_significance=None,
evaluate_effect_strength=False,
confidence_intervals=False,
target_units="ate",
effect_modifiers=None,
fit_estimator=True,
method_params=None,
):
"""Estimate the identified causal effect.
        Currently requires an explicit method name to be specified. Method names follow the convention of identification method followed by the specific estimation method: "[backdoor/iv].estimation_method_name". The following methods are supported.
* Propensity Score Matching: "backdoor.propensity_score_matching"
* Propensity Score Stratification: "backdoor.propensity_score_stratification"
* Propensity Score-based Inverse Weighting: "backdoor.propensity_score_weighting"
* Linear Regression: "backdoor.linear_regression"
* Generalized Linear Models (e.g., logistic regression): "backdoor.generalized_linear_model"
* Instrumental Variables: "iv.instrumental_variable"
* Regression Discontinuity: "iv.regression_discontinuity"
In addition, you can directly call any of the EconML estimation methods. The convention is "backdoor.econml.path-to-estimator-class". For example, for the double machine learning estimator ("DML" class) that is located inside "dml" module of EconML, you can use the method name, "backdoor.econml.dml.DML". CausalML estimators can also be called. See `this demo notebook <https://py-why.github.io/dowhy/example_notebooks/dowhy-conditional-treatment-effects.html>`_.
:param identified_estimand: a probability expression
that represents the effect to be estimated. Output of
CausalModel.identify_effect method
:param method_name: name of the estimation method to be used.
:param control_value: Value of the treatment in the control group, for effect estimation. If treatment is multi-variate, this can be a list.
:param treatment_value: Value of the treatment in the treated group, for effect estimation. If treatment is multi-variate, this can be a list.
        :param test_significance: Binary flag on whether to additionally do a statistical significance test for the estimate.
:param evaluate_effect_strength: (Experimental) Binary flag on whether to estimate the relative strength of the treatment's effect. This measure can be used to compare different treatments for the same outcome (by running this method with different treatments sequentially).
:param confidence_intervals: (Experimental) Binary flag indicating whether confidence intervals should be computed.
:param target_units: (Experimental) The units for which the treatment effect should be estimated. This can be of three types. (1) a string for common specifications of target units (namely, "ate", "att" and "atc"), (2) a lambda function that can be used as an index for the data (pandas DataFrame), or (3) a new DataFrame that contains values of the effect_modifiers and effect will be estimated only for this new data.
:param effect_modifiers: Names of effect modifier variables can be (optionally) specified here too, since they do not affect identification. If None, the effect_modifiers from the CausalModel are used.
:param fit_estimator: Boolean flag on whether to fit the estimator.
Setting it to False is useful to estimate the effect on new data using a previously fitted estimator.
:param method_params: Dictionary containing any method-specific parameters. These are passed directly to the estimating method. See the docs for each estimation method for allowed method-specific params.
:returns: An instance of the CausalEstimate class, containing the causal effect estimate
and other method-dependent information
"""
if effect_modifiers is None or len(effect_modifiers) == 0:
effect_modifiers = self._graph.get_effect_modifiers(self._treatment, self._outcome)
if method_name is None:
# TODO add propensity score as default backdoor method, iv as default iv method, add an informational message to show which method has been selected.
pass
else:
# TODO add dowhy as a prefix to all dowhy estimators
num_components = len(method_name.split("."))
str_arr = method_name.split(".", maxsplit=1)
identifier_name = str_arr[0]
estimator_name = str_arr[1]
            # This is done as all dowhy estimators have two parts and external ones have three or more parts
if num_components > 2:
estimator_package = estimator_name.split(".")[0]
if estimator_package == "dowhy": # For updated dowhy methods
estimator_method = estimator_name.split(".", maxsplit=1)[
1
] # discard dowhy from the full package name
causal_estimator_class = causal_estimators.get_class_object(estimator_method + "_estimator")
else:
third_party_estimator_package = estimator_package
causal_estimator_class = causal_estimators.get_class_object(
third_party_estimator_package, estimator_name
)
if method_params is None:
method_params = {}
# Define the third-party estimation method to be used
method_params[third_party_estimator_package + "_estimator"] = estimator_name
else: # For older dowhy methods
self.logger.info(estimator_name)
# Process the dowhy estimators
causal_estimator_class = causal_estimators.get_class_object(estimator_name + "_estimator")
if method_params is not None and (num_components <= 2 or estimator_package == "dowhy"):
extra_args = method_params.get("init_params", {})
else:
extra_args = {}
if method_params is None:
method_params = {}
identified_estimand.set_identifier_method(identifier_name)
if not fit_estimator and method_name in self._estimator_cache:
causal_estimator = self._estimator_cache[method_name]
else:
causal_estimator = causal_estimator_class(
identified_estimand,
**method_params,
**extra_args,
)
self._estimator_cache[method_name] = causal_estimator
return estimate_effect(
self._data,
self._treatment,
self._outcome,
identifier_name,
causal_estimator,
control_value,
treatment_value,
target_units,
effect_modifiers,
fit_estimator,
method_params,
)
def do(self, x, identified_estimand, method_name=None, fit_estimator=True, method_params=None):
"""Do operator for estimating values of the outcome after intervening on treatment.
:param x: interventional value of the treatment variable
:param identified_estimand: a probability expression
that represents the effect to be estimated. Output of
CausalModel.identify_effect method
        :param method_name: any of the estimation methods to be used. See docs
            for the estimate_effect method for a list of supported estimation methods.
:param fit_estimator: Boolean flag on whether to fit the estimator.
Setting it to False is useful to compute the do-operation on new
data using a previously fitted estimator.
:param method_params: Dictionary containing any method-specific parameters. These are passed directly to the estimating method.
:returns: an instance of the CausalEstimate class, containing the causal effect estimate
and other method-dependent information
"""
if method_name is None:
pass
else:
str_arr = method_name.split(".", maxsplit=1)
identifier_name = str_arr[0]
estimator_name = str_arr[1]
identified_estimand.set_identifier_method(identifier_name)
causal_estimator_class = causal_estimators.get_class_object(estimator_name + "_estimator")
# Check if estimator's target estimand is identified
if identified_estimand.estimands[identifier_name] is None:
            self.logger.warning("No valid identified estimand available for the chosen identification method")
estimate = CausalEstimate(None, None, None, None, None)
else:
if fit_estimator:
# Note that while the name of the variable is the same,
                # "self.causal_estimator", this estimator takes in fewer
                # parameters than the one constructed in the
# estimate_effect code. It is not advisable to use the
# estimator from this function to call estimate_effect
# with fit_estimator=False.
self.causal_estimator = causal_estimator_class(
identified_estimand,
**method_params,
)
self.causal_estimator.fit(
self._data,
self._treatment,
self._outcome,
)
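                # A hedged sketch: this construct-then-fit pattern is the new API
                # introduced by the PR; outside of do() the same two steps would
                # look roughly like:
                #
                #     estimator = causal_estimator_class(identified_estimand, **method_params)
                #     estimator.fit(self._data, self._treatment, self._outcome)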
else:
# Estimator had been computed in a previous call
assert self.causal_estimator is not None
try:
estimate = self.causal_estimator.do(x)
except NotImplementedError:
self.logger.error("Do Operation not implemented or not supported for this estimator.")
raise NotImplementedError
return estimate
def refute_estimate(self, estimand, estimate, method_name=None, show_progress_bar=False, **kwargs):
"""Refute an estimated causal effect.
        If method_name is provided, uses the provided method. In the future, we may support automatic selection of suitable refutation tests. The following refutation methods are supported.
* Adding a randomly-generated confounder: "random_common_cause"
* Adding a confounder that is associated with both treatment and outcome: "add_unobserved_common_cause"
        * Replacing the treatment with a placebo (random) variable: "placebo_treatment_refuter"
* Removing a random subset of the data: "data_subset_refuter"
:param estimand: target estimand, an instance of the IdentifiedEstimand class (typically, the output of identify_effect)
:param estimate: estimate to be refuted, an instance of the CausalEstimate class (typically, the output of estimate_effect)
:param method_name: name of the refutation method
:param show_progress_bar: Boolean flag on whether to show a progress bar
:param kwargs: (optional) additional arguments that are passed directly to the refutation method. Can specify a random seed here to ensure reproducible results ('random_seed' parameter). For method-specific parameters, consult the documentation for the specific method. All refutation methods are in the causal_refuters subpackage.
:returns: an instance of the RefuteResult class
"""
if estimate is None or estimate.value is None:
self.logger.error("Aborting refutation! No estimate is provided.")
raise ValueError("Aborting refutation! No valid estimate is provided.")
if method_name is None:
pass
else:
refuter_class = causal_refuters.get_class_object(method_name)
refuter = refuter_class(self._data, identified_estimand=estimand, estimate=estimate, **kwargs)
res = refuter.refute_estimate(show_progress_bar)
return res
def view_model(self, layout="dot", size=(8, 6), file_name="causal_model"):
"""View the causal DAG.
:param layout: string specifying the layout of the graph.
:param size: tuple (x, y) specifying the width and height of the figure in inches.
:param file_name: string specifying the file name for the saved causal graph png.
:returns: a visualization of the graph
"""
self._graph.view_graph(layout, size, file_name)
def interpret(self, method_name=None, **kwargs):
"""Interpret the causal model.
:param method_name: method used for interpreting the model. If None,
then default interpreter is chosen that describes the model summary and shows the associated causal graph.
        :param kwargs: Optional parameters that are directly passed to the interpreter method.
:returns: None
"""
if method_name is None:
self.summary(print_to_stdout=True)
self.view_model()
return
method_name_arr = parse_state(method_name)
import dowhy.interpreters as interpreters
for method in method_name_arr:
interpreter = interpreters.get_class_object(method)
interpreter(self, **kwargs).interpret()
def summary(self, print_to_stdout=False):
"""Print a text summary of the model.
:returns: a string containing the summary
"""
summary_text = "Model to find the causal effect of treatment {0} on outcome {1}".format(
self._treatment, self._outcome
)
self.logger.info(summary_text)
if print_to_stdout:
print(summary_text)
return summary_text
def refute_graph(self, k=1, independence_test=None, independence_constraints=None):
"""
Check if the dependencies in the input graph match the dataset -
( X ⫫ Y ) | Z
where X and Y are considered as singleton sets currently
Z can have multiple variables
:param k: number of covariates in set Z
:param independence_test: dictionary containing methods to test conditional independence in data
:param independence_constraints: list of implications to be tested, input by the user in the format
[(x,y,(z1,z2)),
(x,y, (z3,))
]
:returns: an instance of GraphRefuter class
"""
if independence_test is not None:
test_for_continuous = independence_test["test_for_continuous"]
test_for_discrete = independence_test["test_for_discrete"]
refuter = GraphRefuter(
data=self._data, method_name_continuous=test_for_continuous, method_name_discrete=test_for_discrete
)
else:
refuter = GraphRefuter(data=self._data)
if independence_constraints is None:
all_nodes = list(self._graph.get_all_nodes(include_unobserved=False))
num_nodes = len(all_nodes)
array_indices = list(range(0, num_nodes))
all_possible_combinations = list(
combinations(array_indices, 2)
) # Generating sets of indices of size 2 for different x and y
conditional_independences = []
self.logger.info("The followed conditional independences are true for the input graph")
for combination in all_possible_combinations: # Iterate over the unique 2-sized sets [x,y]
i = combination[0]
j = combination[1]
a = all_nodes[i]
b = all_nodes[j]
if i < j:
temp_arr = all_nodes[:i] + all_nodes[i + 1 : j] + all_nodes[j + 1 :]
else:
temp_arr = all_nodes[:j] + all_nodes[j + 1 : i] + all_nodes[i + 1 :]
k_sized_lists = list(combinations(temp_arr, k))
for k_list in k_sized_lists:
if self._graph.check_dseparation([str(a)], [str(b)], k_list) == True:
self.logger.info(" %s and %s are CI given %s ", a, b, k_list)
conditional_independences.append([a, b, k_list])
independence_constraints = conditional_independences
res = refuter.refute_model(independence_constraints=independence_constraints)
self.logger.info(refuter._refutation_passed)
return res
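# Example (sketch): checking the graph's conditional independences
# against the data. The test names below are assumptions for
# illustration; consult GraphRefuter for the supported methods.
#
#   graph_refutation = model.refute_graph(
#       k=1,
#       independence_test={
#           "test_for_continuous": "partial_correlation",
#           "test_for_discrete": "conditional_mutual_information",
#       },
#   )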
| andresmor-ms | 11c4e0dafd6e824eb81ad14262457d954ae61468 | affe0952f4aba6845247355c171565510c2c1673 | extra_args should also be passed to estimate_effect? Right now, they are ignored. | amit-sharma | 250 |
py-why/dowhy | 746 | Functional api/causal estimators | * Introduce `fit()` method to estimators.
* Refactor constructors to avoid using `*args` and `**kwargs` and have more explicit parameters.
* Refactor refuters and other parts of the code to use `fit()` and modify arguments to `estimate_effect()` | null | 2022-11-04 16:15:39+00:00 | 2022-12-03 17:07:53+00:00 | dowhy/causal_model.py | """ Module containing the main model class for the dowhy package.
"""
import logging
from itertools import combinations
from sympy import init_printing
import dowhy.causal_estimators as causal_estimators
import dowhy.causal_refuters as causal_refuters
import dowhy.graph_learners as graph_learners
import dowhy.utils.cli_helpers as cli
from dowhy.causal_estimator import CausalEstimate, estimate_effect
from dowhy.causal_graph import CausalGraph
from dowhy.causal_identifier import AutoIdentifier, BackdoorAdjustment, IDIdentifier
from dowhy.causal_identifier.identify_effect import EstimandType
from dowhy.causal_refuters.graph_refuter import GraphRefuter
from dowhy.utils.api import parse_state
init_printing() # To display symbolic math symbols
class CausalModel:
"""Main class for storing the causal model state."""
def __init__(
self,
data,
treatment,
outcome,
graph=None,
common_causes=None,
instruments=None,
effect_modifiers=None,
estimand_type="nonparametric-ate",
proceed_when_unidentifiable=False,
missing_nodes_as_confounders=False,
identify_vars=False,
**kwargs,
):
"""Initialize data and create a causal graph instance.
Assigns treatment and outcome variables.
Also checks and finds the common causes and instruments for treatment
and outcome.
At least one of graph, common_causes or instruments must be provided. If
none of these variables are provided, then learn_graph() can be used later.
:param data: a pandas dataframe containing treatment, outcome and other
variables.
:param treatment: name of the treatment variable
:param outcome: name of the outcome variable
:param graph: path to DOT file containing a DAG or a string containing
a DAG specification in DOT format
:param common_causes: names of common causes of treatment and outcome. Only used when graph is None.
:param instruments: names of instrumental variables for the effect of
treatment on outcome. Only used when graph is None.
:param effect_modifiers: names of variables that can modify the treatment effect. If not provided, then the causal graph is used to find the effect modifiers. Estimators will return multiple different estimates based on each value of effect_modifiers.
:param estimand_type: the type of estimand requested (currently only "nonparametric-ate" is supported). In the future, may support other specific parametric forms of identification.
:param proceed_when_unidentifiable: does the identification proceed by ignoring potential unobserved confounders. Binary flag.
:param missing_nodes_as_confounders: Binary flag indicating whether variables in the dataframe that are not included in the causal graph, should be automatically included as confounder nodes.
:param identify_vars: Variable deciding whether to compute common causes, instruments and effect modifiers while initializing the class. identify_vars should be set to False when the user is providing common_causes, instruments or effect modifiers on their own (otherwise the identify_vars code can override the user-provided values). It also has no effect if no graph is given.
:returns: an instance of CausalModel class
"""
self._data = data
self._treatment = parse_state(treatment)
self._outcome = parse_state(outcome)
self._effect_modifiers = parse_state(effect_modifiers)
self._estimand_type = estimand_type
self._proceed_when_unidentifiable = proceed_when_unidentifiable
self._missing_nodes_as_confounders = missing_nodes_as_confounders
self.logger = logging.getLogger(__name__)
self._estimator_cache = {}
if graph is None:
self.logger.warning("Causal Graph not provided. DoWhy will construct a graph based on data inputs.")
self._common_causes = parse_state(common_causes)
self._instruments = parse_state(instruments)
if common_causes is not None and instruments is not None:
self._graph = CausalGraph(
self._treatment,
self._outcome,
common_cause_names=self._common_causes,
instrument_names=self._instruments,
effect_modifier_names=self._effect_modifiers,
observed_node_names=self._data.columns.tolist(),
)
elif common_causes is not None:
self._graph = CausalGraph(
self._treatment,
self._outcome,
common_cause_names=self._common_causes,
effect_modifier_names=self._effect_modifiers,
observed_node_names=self._data.columns.tolist(),
)
elif instruments is not None:
self._graph = CausalGraph(
self._treatment,
self._outcome,
instrument_names=self._instruments,
effect_modifier_names=self._effect_modifiers,
observed_node_names=self._data.columns.tolist(),
)
else:
self.logger.warning(
"Relevant variables to build causal graph not provided. You may want to use the learn_graph() function to construct the causal graph."
)
self._graph = CausalGraph(
self._treatment,
self._outcome,
effect_modifier_names=self._effect_modifiers,
observed_node_names=self._data.columns.tolist(),
)
else:
self.init_graph(graph=graph, identify_vars=identify_vars)
self._other_variables = kwargs
self.summary()
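# Example (sketch): constructing a model from a DOT string; `df` and
# the column names are placeholders.
#
#   model = CausalModel(
#       data=df,
#       treatment="v0",
#       outcome="y",
#       graph="digraph { W0 -> v0; W0 -> y; v0 -> y; }",
#   )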
def init_graph(self, graph, identify_vars):
"""
Initialize self._graph using graph provided by the user.
"""
# Create causal graph object
self._graph = CausalGraph(
self._treatment,
self._outcome,
graph,
effect_modifier_names=self._effect_modifiers,
observed_node_names=self._data.columns.tolist(),
missing_nodes_as_confounders=self._missing_nodes_as_confounders,
)
if identify_vars:
self._common_causes = self._graph.get_common_causes(self._treatment, self._outcome)
self._instruments = self._graph.get_instruments(self._treatment, self._outcome)
# Sometimes, effect modifiers from the graph may not match those provided by the user.
# (Because some effect modifiers may also be common causes)
# In such cases, the user-provided modifiers are used.
# If no effect modifiers are provided, then the ones from the graph are used.
if self._effect_modifiers is None or not self._effect_modifiers:
self._effect_modifiers = self._graph.get_effect_modifiers(self._treatment, self._outcome)
def get_common_causes(self):
self._common_causes = self._graph.get_common_causes(self._treatment, self._outcome)
return self._common_causes
def get_instruments(self):
self._instruments = self._graph.get_instruments(self._treatment, self._outcome)
return self._instruments
def get_effect_modifiers(self):
self._effect_modifiers = self._graph.get_effect_modifiers(self._treatment, self._outcome)
return self._effect_modifiers
def learn_graph(self, method_name="cdt.causality.graph.LiNGAM", *args, **kwargs):
"""
Learn causal graph from the data. This function takes the method name as input and initializes the
causal graph object using the learnt graph.
:param self: instance of the CausalModel class (or its subclass)
:param method_name: Exact method name of the object to be imported from the concerned library.
:returns: an instance of the CausalGraph class initialized with the learned graph.
"""
# Import causal discovery class
str_arr = method_name.split(".", maxsplit=1)
library_name = str_arr[0]
causal_discovery_class = graph_learners.get_discovery_class_object(library_name)
model = causal_discovery_class(self._data, method_name, *args, **kwargs)
graph = model.learn_graph()
# Initialize causal graph object
# init_graph requires identify_vars explicitly; False avoids overriding user-provided variables
self.init_graph(graph=graph, identify_vars=False)
return self._graph
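# Example (sketch): learning the graph with CDT's LiNGAM when no graph
# was provided at construction time (assumes the `cdt` package is
# installed).
#
#   model = CausalModel(data=df, treatment="t", outcome="y")
#   learned_graph = model.learn_graph(method_name="cdt.causality.graph.LiNGAM")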
def identify_effect(
self, estimand_type=None, method_name="default", proceed_when_unidentifiable=None, optimize_backdoor=False
):
"""Identify the causal effect to be estimated, using properties of the causal graph.
:param method_name: Method name for identification algorithm. ("id-algorithm" or "default")
:param proceed_when_unidentifiable: Binary flag indicating whether identification should proceed in the presence of (potential) unobserved confounders.
:returns: a probability expression (estimand) for the causal effect if identified, else NULL
"""
if proceed_when_unidentifiable is None:
proceed_when_unidentifiable = self._proceed_when_unidentifiable
if estimand_type is None:
estimand_type = self._estimand_type
estimand_type = EstimandType(estimand_type)
if method_name == "id-algorithm":
identifier = IDIdentifier()
else:
identifier = AutoIdentifier(
estimand_type=estimand_type,
backdoor_adjustment=BackdoorAdjustment(method_name),
proceed_when_unidentifiable=proceed_when_unidentifiable,
optimize_backdoor=optimize_backdoor,
)
identified_estimand = identifier.identify_effect(
graph=self._graph, treatment_name=self._treatment, outcome_name=self._outcome
)
self.identifier = identifier
return identified_estimand
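# Example (sketch): identification with the default auto identifier,
# proceeding even if unobserved confounders may exist.
#
#   estimand = model.identify_effect(proceed_when_unidentifiable=True)
#   print(estimand)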
def estimate_effect(
self,
identified_estimand,
method_name=None,
control_value=0,
treatment_value=1,
test_significance=None,
evaluate_effect_strength=False,
confidence_intervals=False,
target_units="ate",
effect_modifiers=None,
fit_estimator=True,
method_params=None,
):
"""Estimate the identified causal effect.
Currently requires an explicit method name to be specified. Method names follow the convention of identification method followed by the specific estimation method: "[backdoor/iv].estimation_method_name". The following methods are supported.
* Propensity Score Matching: "backdoor.propensity_score_matching"
* Propensity Score Stratification: "backdoor.propensity_score_stratification"
* Propensity Score-based Inverse Weighting: "backdoor.propensity_score_weighting"
* Linear Regression: "backdoor.linear_regression"
* Generalized Linear Models (e.g., logistic regression): "backdoor.generalized_linear_model"
* Instrumental Variables: "iv.instrumental_variable"
* Regression Discontinuity: "iv.regression_discontinuity"
In addition, you can directly call any of the EconML estimation methods. The convention is "backdoor.econml.path-to-estimator-class". For example, for the double machine learning estimator ("DML" class) that is located inside "dml" module of EconML, you can use the method name, "backdoor.econml.dml.DML". CausalML estimators can also be called. See `this demo notebook <https://py-why.github.io/dowhy/example_notebooks/dowhy-conditional-treatment-effects.html>`_.
:param identified_estimand: a probability expression
that represents the effect to be estimated. Output of
CausalModel.identify_effect method
:param method_name: name of the estimation method to be used.
:param control_value: Value of the treatment in the control group, for effect estimation. If treatment is multi-variate, this can be a list.
:param treatment_value: Value of the treatment in the treated group, for effect estimation. If treatment is multi-variate, this can be a list.
:param test_significance: Binary flag on whether to additionally do a statistical significance test for the estimate.
:param evaluate_effect_strength: (Experimental) Binary flag on whether to estimate the relative strength of the treatment's effect. This measure can be used to compare different treatments for the same outcome (by running this method with different treatments sequentially).
:param confidence_intervals: (Experimental) Binary flag indicating whether confidence intervals should be computed.
:param target_units: (Experimental) The units for which the treatment effect should be estimated. This can be of three types. (1) a string for common specifications of target units (namely, "ate", "att" and "atc"), (2) a lambda function that can be used as an index for the data (pandas DataFrame), or (3) a new DataFrame that contains values of the effect_modifiers and effect will be estimated only for this new data.
:param effect_modifiers: Names of effect modifier variables can be (optionally) specified here too, since they do not affect identification. If None, the effect_modifiers from the CausalModel are used.
:param fit_estimator: Boolean flag on whether to fit the estimator.
Setting it to False is useful to estimate the effect on new data using a previously fitted estimator.
:param method_params: Dictionary containing any method-specific parameters. These are passed directly to the estimating method. See the docs for each estimation method for allowed method-specific params.
:returns: An instance of the CausalEstimate class, containing the causal effect estimate
and other method-dependent information
"""
if effect_modifiers is None or len(effect_modifiers) == 0:
effect_modifiers = self._graph.get_effect_modifiers(self._treatment, self._outcome)
if method_name is None:
# TODO add propensity score as default backdoor method, iv as default iv method, add an informational message to show which method has been selected.
pass
else:
# TODO add dowhy as a prefix to all dowhy estimators
num_components = len(method_name.split("."))
str_arr = method_name.split(".", maxsplit=1)
identifier_name = str_arr[0]
estimator_name = str_arr[1]
# This is done as all dowhy estimators have two parts and external ones have two or more parts
if num_components > 2:
estimator_package = estimator_name.split(".")[0]
if estimator_package == "dowhy": # For updated dowhy methods
estimator_method = estimator_name.split(".", maxsplit=1)[
1
] # discard dowhy from the full package name
causal_estimator_class = causal_estimators.get_class_object(estimator_method + "_estimator")
else:
third_party_estimator_package = estimator_package
causal_estimator_class = causal_estimators.get_class_object(
third_party_estimator_package, estimator_name
)
if method_params is None:
method_params = {}
# Define the third-party estimation method to be used
method_params[third_party_estimator_package + "_methodname"] = estimator_name
else: # For older dowhy methods
self.logger.info(estimator_name)
# Process the dowhy estimators
causal_estimator_class = causal_estimators.get_class_object(estimator_name + "_estimator")
if method_params is not None and (num_components <= 2 or estimator_package == "dowhy"):
extra_args = method_params.get("init_params", {})
else:
extra_args = {}
if method_params is None:
method_params = {}
identified_estimand.set_identifier_method(identifier_name)
if not fit_estimator and method_name in self._estimator_cache:
causal_estimator = self._estimator_cache[method_name]
else:
causal_estimator = causal_estimator_class(
self._data,
identified_estimand,
self._treatment,
self._outcome, # names of treatment and outcome
control_value=control_value,
treatment_value=treatment_value,
test_significance=test_significance,
evaluate_effect_strength=evaluate_effect_strength,
confidence_intervals=confidence_intervals,
target_units=target_units,
effect_modifiers=effect_modifiers,
**method_params,
**extra_args,
)
self._estimator_cache[method_name] = causal_estimator
return estimate_effect(
self._treatment,
self._outcome,
identified_estimand,
identifier_name,
causal_estimator,
control_value,
treatment_value,
test_significance,
evaluate_effect_strength,
confidence_intervals,
target_units,
effect_modifiers,
fit_estimator,
method_params,
)
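# Example (sketch): estimating with a propensity-score backdoor method;
# estimator-specific constructor arguments would go under
# method_params["init_params"] for dowhy estimators in this flow.
#
#   estimate = model.estimate_effect(
#       estimand,
#       method_name="backdoor.propensity_score_stratification",
#       target_units="att",
#   )
#   print(estimate.value)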
def do(self, x, identified_estimand, method_name=None, fit_estimator=True, method_params=None):
"""Do operator for estimating values of the outcome after intervening on treatment.
:param x: interventional value of the treatment variable
:param identified_estimand: a probability expression
that represents the effect to be estimated. Output of
CausalModel.identify_effect method
:param method_name: any of the estimation method to be used. See docs
for estimate_effect method for a list of supported estimation methods.
:param fit_estimator: Boolean flag on whether to fit the estimator.
Setting it to False is useful to compute the do-operation on new
data using a previously fitted estimator.
:param method_params: Dictionary containing any method-specific parameters. These are passed directly to the estimating method.
:returns: an instance of the CausalEstimate class, containing the causal effect estimate
and other method-dependent information
"""
if method_name is None:
pass
else:
str_arr = method_name.split(".", maxsplit=1)
identifier_name = str_arr[0]
estimator_name = str_arr[1]
identified_estimand.set_identifier_method(identifier_name)
causal_estimator_class = causal_estimators.get_class_object(estimator_name + "_estimator")
# Check if estimator's target estimand is identified
if identified_estimand.estimands[identifier_name] is None:
self.logger.warning("No valid identified estimand for using instrumental variables method")
estimate = CausalEstimate(None, None, None, None, None)
else:
if fit_estimator:
# Note that while the name of the variable is the same,
# "self.causal_estimator", this estimator takes in less
# parameters than the same from the
# estimate_effect code. It is not advisable to use the
# estimator from this function to call estimate_effect
# with fit_estimator=False.
self.causal_estimator = causal_estimator_class(
self._data,
identified_estimand,
self._treatment,
self._outcome,
test_significance=False,
**method_params,
)
else:
# Estimator had been computed in a previous call
assert self.causal_estimator is not None
try:
estimate = self.causal_estimator.do(x)
except NotImplementedError:
self.logger.error("Do Operation not implemented or not supported for this estimator.")
raise NotImplementedError
return estimate
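# Example (sketch): in this flow the estimator is constructed inside
# do(); a later call on the same model can reuse it by passing
# fit_estimator=False.
#
#   y_at_0 = model.do(0, estimand, method_name="backdoor.linear_regression",
#                     method_params={})
#   y_at_1 = model.do(1, estimand, method_name="backdoor.linear_regression",
#                     method_params={}, fit_estimator=False)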
def refute_estimate(self, estimand, estimate, method_name=None, show_progress_bar=False, **kwargs):
"""Refute an estimated causal effect.
If method_name is provided, uses the provided method. In the future, we may support automatic selection of suitable refutation tests. The following refutation methods are supported.
* Adding a randomly-generated confounder: "random_common_cause"
* Adding a confounder that is associated with both treatment and outcome: "add_unobserved_common_cause"
* Replacing the treatment with a placebo (random) variable: "placebo_treatment_refuter"
* Removing a random subset of the data: "data_subset_refuter"
:param estimand: target estimand, an instance of the IdentifiedEstimand class (typically, the output of identify_effect)
:param estimate: estimate to be refuted, an instance of the CausalEstimate class (typically, the output of estimate_effect)
:param method_name: name of the refutation method
:param show_progress_bar: Boolean flag on whether to show a progress bar
:param kwargs: (optional) additional arguments that are passed directly to the refutation method. Can specify a random seed here to ensure reproducible results ('random_seed' parameter). For method-specific parameters, consult the documentation for the specific method. All refutation methods are in the causal_refuters subpackage.
:returns: an instance of the RefuteResult class
"""
if estimate is None or estimate.value is None:
self.logger.error("Aborting refutation! No estimate is provided.")
raise ValueError("Aborting refutation! No valid estimate is provided.")
if method_name is None:
pass
else:
refuter_class = causal_refuters.get_class_object(method_name)
refuter = refuter_class(self._data, identified_estimand=estimand, estimate=estimate, **kwargs)
res = refuter.refute_estimate(show_progress_bar)
return res
def view_model(self, layout="dot", size=(8, 6), file_name="causal_model"):
"""View the causal DAG.
:param layout: string specifying the layout of the graph.
:param size: tuple (x, y) specifying the width and height of the figure in inches.
:param file_name: string specifying the file name for the saved causal graph png.
:returns: a visualization of the graph
"""
self._graph.view_graph(layout, size, file_name)
def interpret(self, method_name=None, **kwargs):
"""Interpret the causal model.
:param method_name: method used for interpreting the model. If None,
then default interpreter is chosen that describes the model summary and shows the associated causal graph.
:param kwargs: Optional parameters that are directly passed to the interpreter method.
:returns: None
"""
if method_name is None:
self.summary(print_to_stdout=True)
self.view_model()
return
method_name_arr = parse_state(method_name)
import dowhy.interpreters as interpreters
for method in method_name_arr:
interpreter = interpreters.get_class_object(method)
interpreter(self, **kwargs).interpret()
def summary(self, print_to_stdout=False):
"""Print a text summary of the model.
:returns: a string containing the summary
"""
summary_text = "Model to find the causal effect of treatment {0} on outcome {1}".format(
self._treatment, self._outcome
)
self.logger.info(summary_text)
if print_to_stdout:
print(summary_text)
return summary_text
def refute_graph(self, k=1, independence_test=None, independence_constraints=None):
"""
Check if the dependencies in the input graph match the dataset -
( X ⫫ Y ) | Z
where X and Y are considered as singleton sets currently
Z can have multiple variables
:param k: number of covariates in set Z
:param independence_test: dictionary containing methods to test conditional independence in data
:param independence_constraints: list of implications to be tested, input by the user in the format
[(x,y,(z1,z2)),
(x,y, (z3,))
]
:returns: an instance of GraphRefuter class
"""
if independence_test is not None:
test_for_continuous = independence_test["test_for_continuous"]
test_for_discrete = independence_test["test_for_discrete"]
refuter = GraphRefuter(
data=self._data, method_name_continuous=test_for_continuous, method_name_discrete=test_for_discrete
)
else:
refuter = GraphRefuter(data=self._data)
if independence_constraints is None:
all_nodes = list(self._graph.get_all_nodes(include_unobserved=False))
num_nodes = len(all_nodes)
array_indices = list(range(0, num_nodes))
all_possible_combinations = list(
combinations(array_indices, 2)
) # Generating sets of indices of size 2 for different x and y
conditional_independences = []
self.logger.info("The followed conditional independences are true for the input graph")
for combination in all_possible_combinations: # Iterate over the unique 2-sized sets [x,y]
i = combination[0]
j = combination[1]
a = all_nodes[i]
b = all_nodes[j]
if i < j:
temp_arr = all_nodes[:i] + all_nodes[i + 1 : j] + all_nodes[j + 1 :]
else:
temp_arr = all_nodes[:j] + all_nodes[j + 1 : i] + all_nodes[i + 1 :]
k_sized_lists = list(combinations(temp_arr, k))
for k_list in k_sized_lists:
if self._graph.check_dseparation([str(a)], [str(b)], k_list) == True:
self.logger.info(" %s and %s are CI given %s ", a, b, k_list)
conditional_independences.append([a, b, k_list])
independence_constraints = conditional_independences
res = refuter.refute_model(independence_constraints=independence_constraints)
self.logger.info(refuter._refutation_passed)
return res
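# Example (sketch): supplying explicit independence constraints instead
# of letting the refuter enumerate them from the graph; node names are
# placeholders.
#
#   graph_refutation = model.refute_graph(
#       independence_constraints=[
#           ("W0", "W1", ("v0",)),
#       ],
#   )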
| """ Module containing the main model class for the dowhy package.
"""
import logging
from itertools import combinations
from sympy import init_printing
import dowhy.causal_estimators as causal_estimators
import dowhy.causal_refuters as causal_refuters
import dowhy.graph_learners as graph_learners
import dowhy.utils.cli_helpers as cli
from dowhy.causal_estimator import CausalEstimate, estimate_effect
from dowhy.causal_graph import CausalGraph
from dowhy.causal_identifier import AutoIdentifier, BackdoorAdjustment, IDIdentifier
from dowhy.causal_identifier.identify_effect import EstimandType
from dowhy.causal_refuters.graph_refuter import GraphRefuter
from dowhy.utils.api import parse_state
init_printing() # To display symbolic math symbols
class CausalModel:
"""Main class for storing the causal model state."""
def __init__(
self,
data,
treatment,
outcome,
graph=None,
common_causes=None,
instruments=None,
effect_modifiers=None,
estimand_type="nonparametric-ate",
proceed_when_unidentifiable=False,
missing_nodes_as_confounders=False,
identify_vars=False,
**kwargs,
):
"""Initialize data and create a causal graph instance.
Assigns treatment and outcome variables.
Also checks and finds the common causes and instruments for treatment
and outcome.
At least one of graph, common_causes or instruments must be provided. If
none of these variables are provided, then learn_graph() can be used later.
:param data: a pandas dataframe containing treatment, outcome and other
variables.
:param treatment: name of the treatment variable
:param outcome: name of the outcome variable
:param graph: path to DOT file containing a DAG or a string containing
a DAG specification in DOT format
:param common_causes: names of common causes of treatment and outcome. Only used when graph is None.
:param instruments: names of instrumental variables for the effect of
treatment on outcome. Only used when graph is None.
:param effect_modifiers: names of variables that can modify the treatment effect. If not provided, then the causal graph is used to find the effect modifiers. Estimators will return multiple different estimates based on each value of effect_modifiers.
:param estimand_type: the type of estimand requested (currently only "nonparametric-ate" is supported). In the future, may support other specific parametric forms of identification.
:param proceed_when_unidentifiable: does the identification proceed by ignoring potential unobserved confounders. Binary flag.
:param missing_nodes_as_confounders: Binary flag indicating whether variables in the dataframe that are not included in the causal graph, should be automatically included as confounder nodes.
:param identify_vars: Variable deciding whether to compute common causes, instruments and effect modifiers while initializing the class. identify_vars should be set to False when the user is providing common_causes, instruments or effect modifiers on their own (otherwise the identify_vars code can override the user-provided values). It also has no effect if no graph is given.
:returns: an instance of CausalModel class
"""
self._data = data
self._treatment = parse_state(treatment)
self._outcome = parse_state(outcome)
self._effect_modifiers = parse_state(effect_modifiers)
self._estimand_type = estimand_type
self._proceed_when_unidentifiable = proceed_when_unidentifiable
self._missing_nodes_as_confounders = missing_nodes_as_confounders
self.logger = logging.getLogger(__name__)
self._estimator_cache = {}
if graph is None:
self.logger.warning("Causal Graph not provided. DoWhy will construct a graph based on data inputs.")
self._common_causes = parse_state(common_causes)
self._instruments = parse_state(instruments)
if common_causes is not None and instruments is not None:
self._graph = CausalGraph(
self._treatment,
self._outcome,
common_cause_names=self._common_causes,
instrument_names=self._instruments,
effect_modifier_names=self._effect_modifiers,
observed_node_names=self._data.columns.tolist(),
)
elif common_causes is not None:
self._graph = CausalGraph(
self._treatment,
self._outcome,
common_cause_names=self._common_causes,
effect_modifier_names=self._effect_modifiers,
observed_node_names=self._data.columns.tolist(),
)
elif instruments is not None:
self._graph = CausalGraph(
self._treatment,
self._outcome,
instrument_names=self._instruments,
effect_modifier_names=self._effect_modifiers,
observed_node_names=self._data.columns.tolist(),
)
else:
self.logger.warning(
"Relevant variables to build causal graph not provided. You may want to use the learn_graph() function to construct the causal graph."
)
self._graph = CausalGraph(
self._treatment,
self._outcome,
effect_modifier_names=self._effect_modifiers,
observed_node_names=self._data.columns.tolist(),
)
else:
self.init_graph(graph=graph, identify_vars=identify_vars)
self._other_variables = kwargs
self.summary()
def init_graph(self, graph, identify_vars):
"""
Initialize self._graph using graph provided by the user.
"""
# Create causal graph object
self._graph = CausalGraph(
self._treatment,
self._outcome,
graph,
effect_modifier_names=self._effect_modifiers,
observed_node_names=self._data.columns.tolist(),
missing_nodes_as_confounders=self._missing_nodes_as_confounders,
)
if identify_vars:
self._common_causes = self._graph.get_common_causes(self._treatment, self._outcome)
self._instruments = self._graph.get_instruments(self._treatment, self._outcome)
# Sometimes, effect modifiers from the graph may not match those provided by the user.
# (Because some effect modifiers may also be common causes)
# In such cases, the user-provided modifiers are used.
# If no effect modifiers are provided, then the ones from the graph are used.
if self._effect_modifiers is None or not self._effect_modifiers:
self._effect_modifiers = self._graph.get_effect_modifiers(self._treatment, self._outcome)
def get_common_causes(self):
self._common_causes = self._graph.get_common_causes(self._treatment, self._outcome)
return self._common_causes
def get_instruments(self):
self._instruments = self._graph.get_instruments(self._treatment, self._outcome)
return self._instruments
def get_effect_modifiers(self):
self._effect_modifiers = self._graph.get_effect_modifiers(self._treatment, self._outcome)
return self._effect_modifiers
def learn_graph(self, method_name="cdt.causality.graph.LiNGAM", *args, **kwargs):
"""
Learn causal graph from the data. This function takes the method name as input and initializes the
causal graph object using the learnt graph.
:param self: instance of the CausalModel class (or its subclass)
:param method_name: Exact method name of the object to be imported from the concerned library.
:returns: an instance of the CausalGraph class initialized with the learned graph.
"""
# Import causal discovery class
str_arr = method_name.split(".", maxsplit=1)
library_name = str_arr[0]
causal_discovery_class = graph_learners.get_discovery_class_object(library_name)
model = causal_discovery_class(self._data, method_name, *args, **kwargs)
graph = model.learn_graph()
# Initialize causal graph object
# init_graph requires identify_vars explicitly; False avoids overriding user-provided variables
self.init_graph(graph=graph, identify_vars=False)
return self._graph
def identify_effect(
self, estimand_type=None, method_name="default", proceed_when_unidentifiable=None, optimize_backdoor=False
):
"""Identify the causal effect to be estimated, using properties of the causal graph.
:param method_name: Method name for identification algorithm. ("id-algorithm" or "default")
:param proceed_when_unidentifiable: Binary flag indicating whether identification should proceed in the presence of (potential) unobserved confounders.
:returns: a probability expression (estimand) for the causal effect if identified, else NULL
"""
if proceed_when_unidentifiable is None:
proceed_when_unidentifiable = self._proceed_when_unidentifiable
if estimand_type is None:
estimand_type = self._estimand_type
estimand_type = EstimandType(estimand_type)
if method_name == "id-algorithm":
identifier = IDIdentifier()
else:
identifier = AutoIdentifier(
estimand_type=estimand_type,
backdoor_adjustment=BackdoorAdjustment(method_name),
proceed_when_unidentifiable=proceed_when_unidentifiable,
optimize_backdoor=optimize_backdoor,
)
identified_estimand = identifier.identify_effect(
graph=self._graph, treatment_name=self._treatment, outcome_name=self._outcome
)
self.identifier = identifier
return identified_estimand
def estimate_effect(
self,
identified_estimand,
method_name=None,
control_value=0,
treatment_value=1,
test_significance=None,
evaluate_effect_strength=False,
confidence_intervals=False,
target_units="ate",
effect_modifiers=None,
fit_estimator=True,
method_params=None,
):
"""Estimate the identified causal effect.
Currently requires an explicit method name to be specified. Method names follow the convention of identification method followed by the specific estimation method: "[backdoor/iv].estimation_method_name". The following methods are supported.
* Propensity Score Matching: "backdoor.propensity_score_matching"
* Propensity Score Stratification: "backdoor.propensity_score_stratification"
* Propensity Score-based Inverse Weighting: "backdoor.propensity_score_weighting"
* Linear Regression: "backdoor.linear_regression"
* Generalized Linear Models (e.g., logistic regression): "backdoor.generalized_linear_model"
* Instrumental Variables: "iv.instrumental_variable"
* Regression Discontinuity: "iv.regression_discontinuity"
In addition, you can directly call any of the EconML estimation methods. The convention is "backdoor.econml.path-to-estimator-class". For example, for the double machine learning estimator ("DML" class) that is located inside "dml" module of EconML, you can use the method name, "backdoor.econml.dml.DML". CausalML estimators can also be called. See `this demo notebook <https://py-why.github.io/dowhy/example_notebooks/dowhy-conditional-treatment-effects.html>`_.
:param identified_estimand: a probability expression
that represents the effect to be estimated. Output of
CausalModel.identify_effect method
:param method_name: name of the estimation method to be used.
:param control_value: Value of the treatment in the control group, for effect estimation. If treatment is multi-variate, this can be a list.
:param treatment_value: Value of the treatment in the treated group, for effect estimation. If treatment is multi-variate, this can be a list.
:param test_significance: Binary flag on whether to additionally do a statistical significance test for the estimate.
:param evaluate_effect_strength: (Experimental) Binary flag on whether to estimate the relative strength of the treatment's effect. This measure can be used to compare different treatments for the same outcome (by running this method with different treatments sequentially).
:param confidence_intervals: (Experimental) Binary flag indicating whether confidence intervals should be computed.
:param target_units: (Experimental) The units for which the treatment effect should be estimated. This can be of three types. (1) a string for common specifications of target units (namely, "ate", "att" and "atc"), (2) a lambda function that can be used as an index for the data (pandas DataFrame), or (3) a new DataFrame that contains values of the effect_modifiers and effect will be estimated only for this new data.
:param effect_modifiers: Names of effect modifier variables can be (optionally) specified here too, since they do not affect identification. If None, the effect_modifiers from the CausalModel are used.
:param fit_estimator: Boolean flag on whether to fit the estimator.
Setting it to False is useful to estimate the effect on new data using a previously fitted estimator.
:param method_params: Dictionary containing any method-specific parameters. These are passed directly to the estimating method. See the docs for each estimation method for allowed method-specific params.
:returns: An instance of the CausalEstimate class, containing the causal effect estimate
and other method-dependent information
"""
if effect_modifiers is None or len(effect_modifiers) == 0:
effect_modifiers = self._graph.get_effect_modifiers(self._treatment, self._outcome)
if method_name is None:
# TODO add propensity score as default backdoor method, iv as default iv method, add an informational message to show which method has been selected.
pass
else:
# TODO add dowhy as a prefix to all dowhy estimators
num_components = len(method_name.split("."))
str_arr = method_name.split(".", maxsplit=1)
identifier_name = str_arr[0]
estimator_name = str_arr[1]
# This is done as all dowhy estimators have two parts and external ones have two or more parts
if num_components > 2:
estimator_package = estimator_name.split(".")[0]
if estimator_package == "dowhy": # For updated dowhy methods
estimator_method = estimator_name.split(".", maxsplit=1)[
1
] # discard dowhy from the full package name
causal_estimator_class = causal_estimators.get_class_object(estimator_method + "_estimator")
else:
third_party_estimator_package = estimator_package
causal_estimator_class = causal_estimators.get_class_object(
third_party_estimator_package, estimator_name
)
if method_params is None:
method_params = {}
# Define the third-party estimation method to be used
method_params[third_party_estimator_package + "_estimator"] = estimator_name
else: # For older dowhy methods
self.logger.info(estimator_name)
# Process the dowhy estimators
causal_estimator_class = causal_estimators.get_class_object(estimator_name + "_estimator")
if method_params is not None and (num_components <= 2 or estimator_package == "dowhy"):
extra_args = method_params.get("init_params", {})
else:
extra_args = {}
if method_params is None:
method_params = {}
identified_estimand.set_identifier_method(identifier_name)
if not fit_estimator and method_name in self._estimator_cache:
causal_estimator = self._estimator_cache[method_name]
else:
causal_estimator = causal_estimator_class(
identified_estimand,
**method_params,
**extra_args,
)
self._estimator_cache[method_name] = causal_estimator
return estimate_effect(
self._data,
self._treatment,
self._outcome,
identifier_name,
causal_estimator,
control_value,
treatment_value,
target_units,
effect_modifiers,
fit_estimator,
method_params,
)
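# Example (sketch): in the refactored API the estimator object is
# cached per method_name, so a later call with fit_estimator=False
# reuses the already fitted estimator.
#
#   est = model.estimate_effect(estimand,
#                               method_name="backdoor.linear_regression")
#   est_again = model.estimate_effect(estimand,
#                                     method_name="backdoor.linear_regression",
#                                     fit_estimator=False)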
def do(self, x, identified_estimand, method_name=None, fit_estimator=True, method_params=None):
"""Do operator for estimating values of the outcome after intervening on treatment.
:param x: interventional value of the treatment variable
:param identified_estimand: a probability expression
that represents the effect to be estimated. Output of
CausalModel.identify_effect method
:param method_name: any of the estimation method to be used. See docs
for estimate_effect method for a list of supported estimation methods.
:param fit_estimator: Boolean flag on whether to fit the estimator.
Setting it to False is useful to compute the do-operation on new
data using a previously fitted estimator.
:param method_params: Dictionary containing any method-specific parameters. These are passed directly to the estimating method.
:returns: an instance of the CausalEstimate class, containing the causal effect estimate
and other method-dependent information
"""
if method_name is None:
pass
else:
str_arr = method_name.split(".", maxsplit=1)
identifier_name = str_arr[0]
estimator_name = str_arr[1]
identified_estimand.set_identifier_method(identifier_name)
causal_estimator_class = causal_estimators.get_class_object(estimator_name + "_estimator")
# Check if estimator's target estimand is identified
if identified_estimand.estimands[identifier_name] is None:
self.logger.warning("No valid identified estimand for using instrumental variables method")
estimate = CausalEstimate(None, None, None, None, None)
else:
if fit_estimator:
# Note that while the name of the variable is the same,
# "self.causal_estimator", this estimator takes in less
# parameters than the same from the
# estimate_effect code. It is not advisable to use the
# estimator from this function to call estimate_effect
# with fit_estimator=False.
self.causal_estimator = causal_estimator_class(
identified_estimand,
**method_params,
)
self.causal_estimator.fit(
self._data,
self._treatment,
self._outcome,
)
else:
# Estimator had been computed in a previous call
assert self.causal_estimator is not None
try:
estimate = self.causal_estimator.do(x)
except NotImplementedError:
self.logger.error("Do Operation not implemented or not supported for this estimator.")
raise NotImplementedError
return estimate
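# Example (sketch): under the refactored API the estimator is first
# constructed from the estimand and then fitted on (data, treatment,
# outcome) before do() queries it.
#
#   y_do_1 = model.do(1, estimand, method_name="backdoor.linear_regression",
#                     method_params={})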
def refute_estimate(self, estimand, estimate, method_name=None, show_progress_bar=False, **kwargs):
"""Refute an estimated causal effect.
If method_name is provided, uses the provided method. In the future, we may support automatic selection of suitable refutation tests. The following refutation methods are supported.
* Adding a randomly-generated confounder: "random_common_cause"
* Adding a confounder that is associated with both treatment and outcome: "add_unobserved_common_cause"
* Replacing the treatment with a placebo (random) variable: "placebo_treatment_refuter"
* Removing a random subset of the data: "data_subset_refuter"
:param estimand: target estimand, an instance of the IdentifiedEstimand class (typically, the output of identify_effect)
:param estimate: estimate to be refuted, an instance of the CausalEstimate class (typically, the output of estimate_effect)
:param method_name: name of the refutation method
:param show_progress_bar: Boolean flag on whether to show a progress bar
:param kwargs: (optional) additional arguments that are passed directly to the refutation method. Can specify a random seed here to ensure reproducible results ('random_seed' parameter). For method-specific parameters, consult the documentation for the specific method. All refutation methods are in the causal_refuters subpackage.
:returns: an instance of the RefuteResult class
"""
if estimate is None or estimate.value is None:
self.logger.error("Aborting refutation! No estimate is provided.")
raise ValueError("Aborting refutation! No valid estimate is provided.")
if method_name is None:
pass
else:
refuter_class = causal_refuters.get_class_object(method_name)
refuter = refuter_class(self._data, identified_estimand=estimand, estimate=estimate, **kwargs)
res = refuter.refute_estimate(show_progress_bar)
return res
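# Example (sketch): method-specific kwargs, such as a random seed, are
# forwarded to the refuter; subset_fraction is assumed here for the
# data-subset refuter.
#
#   refutation = model.refute_estimate(
#       estimand, estimate,
#       method_name="data_subset_refuter",
#       subset_fraction=0.8,
#       random_seed=42,
#   )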
def view_model(self, layout="dot", size=(8, 6), file_name="causal_model"):
"""View the causal DAG.
:param layout: string specifying the layout of the graph.
:param size: tuple (x, y) specifying the width and height of the figure in inches.
:param file_name: string specifying the file name for the saved causal graph png.
:returns: a visualization of the graph
"""
self._graph.view_graph(layout, size, file_name)
def interpret(self, method_name=None, **kwargs):
"""Interpret the causal model.
:param method_name: method used for interpreting the model. If None,
then default interpreter is chosen that describes the model summary and shows the associated causal graph.
:param kwargs: Optional parameters that are directly passed to the interpreter method.
:returns: None
"""
if method_name is None:
self.summary(print_to_stdout=True)
self.view_model()
return
method_name_arr = parse_state(method_name)
import dowhy.interpreters as interpreters
for method in method_name_arr:
interpreter = interpreters.get_class_object(method)
interpreter(self, **kwargs).interpret()
def summary(self, print_to_stdout=False):
"""Print a text summary of the model.
:returns: a string containing the summary
"""
summary_text = "Model to find the causal effect of treatment {0} on outcome {1}".format(
self._treatment, self._outcome
)
self.logger.info(summary_text)
if print_to_stdout:
print(summary_text)
return summary_text
def refute_graph(self, k=1, independence_test=None, independence_constraints=None):
"""
Check if the dependencies in the input graph match the dataset -
( X ⫫ Y ) | Z
where X and Y are considered as singleton sets currently
Z can have multiple variables
:param k: number of covariates in set Z
:param independence_test: dictionary containing methods to test conditional independence in data
:param independence_constraints: list of implications to be tested, input by the user in the format
[(x,y,(z1,z2)),
(x,y, (z3,))
]
:returns: an instance of GraphRefuter class
"""
if independence_test is not None:
test_for_continuous = independence_test["test_for_continuous"]
test_for_discrete = independence_test["test_for_discrete"]
refuter = GraphRefuter(
data=self._data, method_name_continuous=test_for_continuous, method_name_discrete=test_for_discrete
)
else:
refuter = GraphRefuter(data=self._data)
if independence_constraints is None:
all_nodes = list(self._graph.get_all_nodes(include_unobserved=False))
num_nodes = len(all_nodes)
array_indices = list(range(0, num_nodes))
all_possible_combinations = list(
combinations(array_indices, 2)
) # Generating sets of indices of size 2 for different x and y
conditional_independences = []
self.logger.info("The followed conditional independences are true for the input graph")
for combination in all_possible_combinations: # Iterate over the unique 2-sized sets [x,y]
i = combination[0]
j = combination[1]
a = all_nodes[i]
b = all_nodes[j]
if i < j:
temp_arr = all_nodes[:i] + all_nodes[i + 1 : j] + all_nodes[j + 1 :]
else:
temp_arr = all_nodes[:j] + all_nodes[j + 1 : i] + all_nodes[i + 1 :]
k_sized_lists = list(combinations(temp_arr, k))
for k_list in k_sized_lists:
if self._graph.check_dseparation([str(a)], [str(b)], k_list) == True:
self.logger.info(" %s and %s are CI given %s ", a, b, k_list)
conditional_independences.append([a, b, k_list])
independence_constraints = conditional_independences
res = refuter.refute_model(independence_constraints=independence_constraints)
self.logger.info(refuter._refutation_passed)
return res
| andresmor-ms | 11c4e0dafd6e824eb81ad14262457d954ae61468 | affe0952f4aba6845247355c171565510c2c1673 | Yep, here we are initializing the estimator, just creating it, then we need to call `fit()` and then `estimate_effect()` | andresmor-ms | 251 |
py-why/dowhy | 746 | Functional api/causal estimators | * Introduce `fit()` method to estimators.
* Refactor constructors to avoid using `*args` and `**kwargs` and have more explicit parameters.
* Refactor refuters and other parts of the code to use `fit()` and modify arguments to `estimate_effect()` | null | 2022-11-04 16:15:39+00:00 | 2022-12-03 17:07:53+00:00 | dowhy/causal_model.py | """ Module containing the main model class for the dowhy package.
"""
import logging
from itertools import combinations
from sympy import init_printing
import dowhy.causal_estimators as causal_estimators
import dowhy.causal_refuters as causal_refuters
import dowhy.graph_learners as graph_learners
import dowhy.utils.cli_helpers as cli
from dowhy.causal_estimator import CausalEstimate, estimate_effect
from dowhy.causal_graph import CausalGraph
from dowhy.causal_identifier import AutoIdentifier, BackdoorAdjustment, IDIdentifier
from dowhy.causal_identifier.identify_effect import EstimandType
from dowhy.causal_refuters.graph_refuter import GraphRefuter
from dowhy.utils.api import parse_state
init_printing() # To display symbolic math symbols
class CausalModel:
"""Main class for storing the causal model state."""
def __init__(
self,
data,
treatment,
outcome,
graph=None,
common_causes=None,
instruments=None,
effect_modifiers=None,
estimand_type="nonparametric-ate",
proceed_when_unidentifiable=False,
missing_nodes_as_confounders=False,
identify_vars=False,
**kwargs,
):
"""Initialize data and create a causal graph instance.
Assigns treatment and outcome variables.
Also checks and finds the common causes and instruments for treatment
and outcome.
At least one of graph, common_causes or instruments must be provided. If
none of these variables are provided, then learn_graph() can be used later.
:param data: a pandas dataframe containing treatment, outcome and other
variables.
:param treatment: name of the treatment variable
:param outcome: name of the outcome variable
:param graph: path to DOT file containing a DAG or a string containing
a DAG specification in DOT format
:param common_causes: names of common causes of treatment and _outcome. Only used when graph is None.
:param instruments: names of instrumental variables for the effect of
treatment on outcome. Only used when graph is None.
:param effect_modifiers: names of variables that can modify the treatment effect. If not provided, then the causal graph is used to find the effect modifiers. Estimators will return multiple different estimates based on each value of effect_modifiers.
:param estimand_type: the type of estimand requested (currently only "nonparametric-ate" is supported). In the future, may support other specific parametric forms of identification.
:param proceed_when_unidentifiable: does the identification proceed by ignoring potential unobserved confounders. Binary flag.
:param missing_nodes_as_confounders: Binary flag indicating whether variables in the dataframe that are not included in the causal graph, should be automatically included as confounder nodes.
:param identify_vars: Variable deciding whether to compute common causes, instruments and effect modifiers while initializing the class. identify_vars should be set to False when user is providing common_causes, instruments or effect modifiers on their own(otherwise the identify_vars code can override the user provided values). Also it does not make sense if no graph is given.
:returns: an instance of CausalModel class
"""
self._data = data
self._treatment = parse_state(treatment)
self._outcome = parse_state(outcome)
self._effect_modifiers = parse_state(effect_modifiers)
self._estimand_type = estimand_type
self._proceed_when_unidentifiable = proceed_when_unidentifiable
self._missing_nodes_as_confounders = missing_nodes_as_confounders
self.logger = logging.getLogger(__name__)
self._estimator_cache = {}
if graph is None:
self.logger.warning("Causal Graph not provided. DoWhy will construct a graph based on data inputs.")
self._common_causes = parse_state(common_causes)
self._instruments = parse_state(instruments)
if common_causes is not None and instruments is not None:
self._graph = CausalGraph(
self._treatment,
self._outcome,
common_cause_names=self._common_causes,
instrument_names=self._instruments,
effect_modifier_names=self._effect_modifiers,
observed_node_names=self._data.columns.tolist(),
)
elif common_causes is not None:
self._graph = CausalGraph(
self._treatment,
self._outcome,
common_cause_names=self._common_causes,
effect_modifier_names=self._effect_modifiers,
observed_node_names=self._data.columns.tolist(),
)
elif instruments is not None:
self._graph = CausalGraph(
self._treatment,
self._outcome,
instrument_names=self._instruments,
effect_modifier_names=self._effect_modifiers,
observed_node_names=self._data.columns.tolist(),
)
else:
self.logger.warning(
"Relevant variables to build causal graph not provided. You may want to use the learn_graph() function to construct the causal graph."
)
self._graph = CausalGraph(
self._treatment,
self._outcome,
effect_modifier_names=self._effect_modifiers,
observed_node_names=self._data.columns.tolist(),
)
else:
self.init_graph(graph=graph, identify_vars=identify_vars)
self._other_variables = kwargs
self.summary()
def init_graph(self, graph, identify_vars):
"""
Initialize self._graph using graph provided by the user.
"""
# Create causal graph object
self._graph = CausalGraph(
self._treatment,
self._outcome,
graph,
effect_modifier_names=self._effect_modifiers,
observed_node_names=self._data.columns.tolist(),
missing_nodes_as_confounders=self._missing_nodes_as_confounders,
)
if identify_vars:
self._common_causes = self._graph.get_common_causes(self._treatment, self._outcome)
self._instruments = self._graph.get_instruments(self._treatment, self._outcome)
# Sometimes, effect modifiers from the graph may not match those provided by the user.
# (Because some effect modifiers may also be common causes)
# In such cases, the user-provided modifiers are used.
# If no effect modifiers are provided, then the ones from the graph are used.
if self._effect_modifiers is None or not self._effect_modifiers:
self._effect_modifiers = self._graph.get_effect_modifiers(self._treatment, self._outcome)
def get_common_causes(self):
self._common_causes = self._graph.get_common_causes(self._treatment, self._outcome)
return self._common_causes
def get_instruments(self):
self._instruments = self._graph.get_instruments(self._treatment, self._outcome)
return self._instruments
def get_effect_modifiers(self):
self._effect_modifiers = self._graph.get_effect_modifiers(self._treatment, self._outcome)
return self._effect_modifiers
def learn_graph(self, method_name="cdt.causality.graph.LiNGAM", *args, **kwargs):
"""
Learn causal graph from the data. This function takes the method name as input and initializes the
causal graph object using the learnt graph.
:param self: instance of the CausalModel class (or its subclass)
:param method_name: Exact method name of the object to be imported from the concerned library.
:returns: an instance of the CausalGraph class initialized with the learned graph.
"""
# Import causal discovery class
str_arr = method_name.split(".", maxsplit=1)
library_name = str_arr[0]
causal_discovery_class = graph_learners.get_discovery_class_object(library_name)
model = causal_discovery_class(self._data, method_name, *args, **kwargs)
graph = model.learn_graph()
# Initialize causal graph object
self.init_graph(graph=graph)
return self._graph
def identify_effect(
self, estimand_type=None, method_name="default", proceed_when_unidentifiable=None, optimize_backdoor=False
):
"""Identify the causal effect to be estimated, using properties of the causal graph.
:param method_name: Method name for identification algorithm. ("id-algorithm" or "default")
:param proceed_when_unidentifiable: Binary flag indicating whether identification should proceed in the presence of (potential) unobserved confounders.
:returns: a probability expression (estimand) for the causal effect if identified, else NULL
"""
if proceed_when_unidentifiable is None:
proceed_when_unidentifiable = self._proceed_when_unidentifiable
if estimand_type is None:
estimand_type = self._estimand_type
estimand_type = EstimandType(estimand_type)
if method_name == "id-algorithm":
identifier = IDIdentifier()
else:
identifier = AutoIdentifier(
estimand_type=estimand_type,
backdoor_adjustment=BackdoorAdjustment(method_name),
proceed_when_unidentifiable=proceed_when_unidentifiable,
optimize_backdoor=optimize_backdoor,
)
identified_estimand = identifier.identify_effect(
graph=self._graph, treatment_name=self._treatment, outcome_name=self._outcome
)
self.identifier = identifier
return identified_estimand
def estimate_effect(
self,
identified_estimand,
method_name=None,
control_value=0,
treatment_value=1,
test_significance=None,
evaluate_effect_strength=False,
confidence_intervals=False,
target_units="ate",
effect_modifiers=None,
fit_estimator=True,
method_params=None,
):
"""Estimate the identified causal effect.
Currently requires an explicit method name to be specified. Method names follow the convention of identification method followed by the specific estimation method: "[backdoor/iv].estimation_method_name". Following methods are supported.
* Propensity Score Matching: "backdoor.propensity_score_matching"
* Propensity Score Stratification: "backdoor.propensity_score_stratification"
* Propensity Score-based Inverse Weighting: "backdoor.propensity_score_weighting"
* Linear Regression: "backdoor.linear_regression"
* Generalized Linear Models (e.g., logistic regression): "backdoor.generalized_linear_model"
* Instrumental Variables: "iv.instrumental_variable"
* Regression Discontinuity: "iv.regression_discontinuity"
In addition, you can directly call any of the EconML estimation methods. The convention is "backdoor.econml.path-to-estimator-class". For example, for the double machine learning estimator ("DML" class) that is located inside "dml" module of EconML, you can use the method name, "backdoor.econml.dml.DML". CausalML estimators can also be called. See `this demo notebook <https://py-why.github.io/dowhy/example_notebooks/dowhy-conditional-treatment-effects.html>`_.
:param identified_estimand: a probability expression
that represents the effect to be estimated. Output of
CausalModel.identify_effect method
:param method_name: name of the estimation method to be used.
:param control_value: Value of the treatment in the control group, for effect estimation. If treatment is multi-variate, this can be a list.
:param treatment_value: Value of the treatment in the treated group, for effect estimation. If treatment is multi-variate, this can be a list.
:param test_significance: Binary flag on whether to additionally do a statistical significance test for the estimate.
:param evaluate_effect_strength: (Experimental) Binary flag on whether to estimate the relative strength of the treatment's effect. This measure can be used to compare different treatments for the same outcome (by running this method with different treatments sequentially).
:param confidence_intervals: (Experimental) Binary flag indicating whether confidence intervals should be computed.
:param target_units: (Experimental) The units for which the treatment effect should be estimated. This can be of three types. (1) a string for common specifications of target units (namely, "ate", "att" and "atc"), (2) a lambda function that can be used as an index for the data (pandas DataFrame), or (3) a new DataFrame that contains values of the effect_modifiers and effect will be estimated only for this new data.
:param effect_modifiers: Names of effect modifier variables can be (optionally) specified here too, since they do not affect identification. If None, the effect_modifiers from the CausalModel are used.
:param fit_estimator: Boolean flag on whether to fit the estimator.
Setting it to False is useful to estimate the effect on new data using a previously fitted estimator.
:param method_params: Dictionary containing any method-specific parameters. These are passed directly to the estimating method. See the docs for each estimation method for allowed method-specific params.
:returns: An instance of the CausalEstimate class, containing the causal effect estimate
and other method-dependent information
"""
if effect_modifiers is None or len(effect_modifiers) == 0:
effect_modifiers = self._graph.get_effect_modifiers(self._treatment, self._outcome)
if method_name is None:
# TODO add propensity score as default backdoor method, iv as default iv method, add an informational message to show which method has been selected.
pass
else:
# TODO add dowhy as a prefix to all dowhy estimators
num_components = len(method_name.split("."))
str_arr = method_name.split(".", maxsplit=1)
identifier_name = str_arr[0]
estimator_name = str_arr[1]
# This is done because all dowhy estimator names have two parts, while external ones have more than two
if num_components > 2:
estimator_package = estimator_name.split(".")[0]
if estimator_package == "dowhy": # For updated dowhy methods
estimator_method = estimator_name.split(".", maxsplit=1)[
1
] # discard dowhy from the full package name
causal_estimator_class = causal_estimators.get_class_object(estimator_method + "_estimator")
else:
third_party_estimator_package = estimator_package
causal_estimator_class = causal_estimators.get_class_object(
third_party_estimator_package, estimator_name
)
if method_params is None:
method_params = {}
# Define the third-party estimation method to be used
method_params[third_party_estimator_package + "_methodname"] = estimator_name
else: # For older dowhy methods
self.logger.info(estimator_name)
# Process the dowhy estimators
causal_estimator_class = causal_estimators.get_class_object(estimator_name + "_estimator")
if method_params is not None and (num_components <= 2 or estimator_package == "dowhy"):
extra_args = method_params.get("init_params", {})
else:
extra_args = {}
if method_params is None:
method_params = {}
identified_estimand.set_identifier_method(identifier_name)
if not fit_estimator and method_name in self._estimator_cache:
causal_estimator = self._estimator_cache[method_name]
else:
causal_estimator = causal_estimator_class(
self._data,
identified_estimand,
self._treatment,
self._outcome, # names of treatment and outcome
control_value=control_value,
treatment_value=treatment_value,
test_significance=test_significance,
evaluate_effect_strength=evaluate_effect_strength,
confidence_intervals=confidence_intervals,
target_units=target_units,
effect_modifiers=effect_modifiers,
**method_params,
**extra_args,
)
self._estimator_cache[method_name] = causal_estimator
return estimate_effect(
self._treatment,
self._outcome,
identified_estimand,
identifier_name,
causal_estimator,
control_value,
treatment_value,
test_significance,
evaluate_effect_strength,
confidence_intervals,
target_units,
effect_modifiers,
fit_estimator,
method_params,
)
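# Illustrative sketch: estimating the identified effect with one of the
# dowhy method names listed above; `identified_estimand` is assumed to come
# from identify_effect().
#
#   estimate = model.estimate_effect(
#       identified_estimand,
#       method_name="backdoor.linear_regression",
#       control_value=0,
#       treatment_value=1,
#       test_significance=True,
#   )
#   print(estimate.value)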
def do(self, x, identified_estimand, method_name=None, fit_estimator=True, method_params=None):
"""Do operator for estimating values of the outcome after intervening on treatment.
:param x: interventional value of the treatment variable
:param identified_estimand: a probability expression
that represents the effect to be estimated. Output of
CausalModel.identify_effect method
:param method_name: name of the estimation method to be used. See docs
for the estimate_effect method for a list of supported estimation methods.
:param fit_estimator: Boolean flag on whether to fit the estimator.
Setting it to False is useful to compute the do-operation on new
data using a previously fitted estimator.
:param method_params: Dictionary containing any method-specific parameters. These are passed directly to the estimating method.
:returns: an instance of the CausalEstimate class, containing the causal effect estimate
and other method-dependent information
"""
if method_name is None:
pass
else:
str_arr = method_name.split(".", maxsplit=1)
identifier_name = str_arr[0]
estimator_name = str_arr[1]
identified_estimand.set_identifier_method(identifier_name)
causal_estimator_class = causal_estimators.get_class_object(estimator_name + "_estimator")
# Check if estimator's target estimand is identified
if identified_estimand.estimands[identifier_name] is None:
self.logger.warning("No valid identified estimand for using instrumental variables method")
estimate = CausalEstimate(None, None, None, None, None)
else:
if fit_estimator:
# Note that while the variable name is the same,
# "self.causal_estimator", this estimator takes in fewer
# parameters than the one constructed in the
# estimate_effect code. It is not advisable to use the
# estimator from this function to call estimate_effect
# with fit_estimator=False.
self.causal_estimator = causal_estimator_class(
self._data,
identified_estimand,
self._treatment,
self._outcome,
test_significance=False,
**method_params,
)
else:
# Estimator had been computed in a previous call
assert self.causal_estimator is not None
try:
estimate = self.causal_estimator.do(x)
except NotImplementedError:
self.logger.error("Do Operation not implemented or not supported for this estimator.")
raise NotImplementedError
return estimate
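# Illustrative sketch: computing the interventional outcome E[Y | do(T=x)],
# assuming the chosen estimator implements do(). Passing an explicit (possibly
# empty) method_params dict avoids unpacking None below.
#
#   outcome_at_x = model.do(
#       x=1,
#       identified_estimand=identified_estimand,
#       method_name="backdoor.linear_regression",
#       method_params={},
#   )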
def refute_estimate(self, estimand, estimate, method_name=None, show_progress_bar=False, **kwargs):
"""Refute an estimated causal effect.
If method_name is provided, uses the provided method. In the future, we may support automatic selection of suitable refutation tests. The following refutation methods are supported.
* Adding a randomly-generated confounder: "random_common_cause"
* Adding a confounder that is associated with both treatment and outcome: "add_unobserved_common_cause"
* Replacing the treatment with a placebo (random) variable: "placebo_treatment_refuter"
* Removing a random subset of the data: "data_subset_refuter"
:param estimand: target estimand, an instance of the IdentifiedEstimand class (typically, the output of identify_effect)
:param estimate: estimate to be refuted, an instance of the CausalEstimate class (typically, the output of estimate_effect)
:param method_name: name of the refutation method
:param show_progress_bar: Boolean flag on whether to show a progress bar
:param kwargs: (optional) additional arguments that are passed directly to the refutation method. Can specify a random seed here to ensure reproducible results ('random_seed' parameter). For method-specific parameters, consult the documentation for the specific method. All refutation methods are in the causal_refuters subpackage.
:returns: an instance of the RefuteResult class
"""
if estimate is None or estimate.value is None:
self.logger.error("Aborting refutation! No estimate is provided.")
raise ValueError("Aborting refutation! No valid estimate is provided.")
if method_name is None:
pass
else:
refuter_class = causal_refuters.get_class_object(method_name)
refuter = refuter_class(self._data, identified_estimand=estimand, estimate=estimate, **kwargs)
res = refuter.refute_estimate(show_progress_bar)
return res
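# Illustrative sketch: refuting an estimate with a placebo treatment.
# `placebo_type` is a method-specific parameter and `random_seed` is
# forwarded to the refuter through **kwargs, as noted in the docstring.
#
#   refutation = model.refute_estimate(
#       identified_estimand,
#       estimate,
#       method_name="placebo_treatment_refuter",
#       placebo_type="permute",
#       random_seed=1,
#   )
#   print(refutation)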
def view_model(self, layout="dot", size=(8, 6), file_name="causal_model"):
"""View the causal DAG.
:param layout: string specifying the layout of the graph.
:param size: tuple (x, y) specifying the width and height of the figure in inches.
:param file_name: string specifying the file name for the saved causal graph png.
:returns: a visualization of the graph
"""
self._graph.view_graph(layout, size, file_name)
def interpret(self, method_name=None, **kwargs):
"""Interpret the causal model.
:param method_name: method used for interpreting the model. If None,
then default interpreter is chosen that describes the model summary and shows the associated causal graph.
:param kwargs: Optional parameters that are directly passed to the interpreter method.
:returns: None
"""
if method_name is None:
self.summary(print_to_stdout=True)
self.view_model()
return
method_name_arr = parse_state(method_name)
import dowhy.interpreters as interpreters
for method in method_name_arr:
interpreter = interpreters.get_class_object(method)
interpreter(self, **kwargs).interpret()
def summary(self, print_to_stdout=False):
"""Print a text summary of the model.
:returns: a string containing the summary
"""
summary_text = "Model to find the causal effect of treatment {0} on outcome {1}".format(
self._treatment, self._outcome
)
self.logger.info(summary_text)
if print_to_stdout:
print(summary_text)
return summary_text
def refute_graph(self, k=1, independence_test=None, independence_constraints=None):
"""
Check whether the conditional independencies implied by the input graph hold in the dataset -
( X ⫫ Y ) | Z
where X and Y are currently restricted to singleton sets and
Z can contain multiple variables.
:param k: number of covariates in set Z
:param independence_test: dictionary containing methods to test conditional independence in data
:param independence_constraints: list of implications to be tested, provided by the user, in the format
[(x,y,(z1,z2)),
(x,y, (z3,))
]
:returns: an instance of GraphRefuter class
"""
if independence_test is not None:
test_for_continuous = independence_test["test_for_continuous"]
test_for_discrete = independence_test["test_for_discrete"]
refuter = GraphRefuter(
data=self._data, method_name_continuous=test_for_continuous, method_name_discrete=test_for_discrete
)
else:
refuter = GraphRefuter(data=self._data)
if independence_constraints is None:
all_nodes = list(self._graph.get_all_nodes(include_unobserved=False))
num_nodes = len(all_nodes)
array_indices = list(range(0, num_nodes))
all_possible_combinations = list(
combinations(array_indices, 2)
) # Generating sets of indices of size 2 for different x and y
conditional_independences = []
self.logger.info("The followed conditional independences are true for the input graph")
for combination in all_possible_combinations: # Iterate over the unique 2-sized sets [x,y]
i = combination[0]
j = combination[1]
a = all_nodes[i]
b = all_nodes[j]
if i < j:
temp_arr = all_nodes[:i] + all_nodes[i + 1 : j] + all_nodes[j + 1 :]
else:
temp_arr = all_nodes[:j] + all_nodes[j + 1 : i] + all_nodes[i + 1 :]
k_sized_lists = list(combinations(temp_arr, k))
for k_list in k_sized_lists:
if self._graph.check_dseparation([str(a)], [str(b)], k_list) == True:
self.logger.info(" %s and %s are CI given %s ", a, b, k_list)
conditional_independences.append([a, b, k_list])
independence_constraints = conditional_independences
res = refuter.refute_model(independence_constraints=independence_constraints)
self.logger.info(refuter._refutation_passed)
return res
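# Illustrative sketch: testing the graph's implied conditional independences
# against data. The dictionary keys match those read in refute_graph() above;
# the specific test names are assumptions based on GraphRefuter's supported methods.
#
#   refutation = model.refute_graph(
#       k=1,
#       independence_test={
#           "test_for_continuous": "partial_correlation",
#           "test_for_discrete": "conditional_mutual_information",
#       },
#   )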
| """ Module containing the main model class for the dowhy package.
"""
import logging
from itertools import combinations
from sympy import init_printing
import dowhy.causal_estimators as causal_estimators
import dowhy.causal_refuters as causal_refuters
import dowhy.graph_learners as graph_learners
import dowhy.utils.cli_helpers as cli
from dowhy.causal_estimator import CausalEstimate, estimate_effect
from dowhy.causal_graph import CausalGraph
from dowhy.causal_identifier import AutoIdentifier, BackdoorAdjustment, IDIdentifier
from dowhy.causal_identifier.identify_effect import EstimandType
from dowhy.causal_refuters.graph_refuter import GraphRefuter
from dowhy.utils.api import parse_state
init_printing() # To display symbolic math symbols
class CausalModel:
"""Main class for storing the causal model state."""
def __init__(
self,
data,
treatment,
outcome,
graph=None,
common_causes=None,
instruments=None,
effect_modifiers=None,
estimand_type="nonparametric-ate",
proceed_when_unidentifiable=False,
missing_nodes_as_confounders=False,
identify_vars=False,
**kwargs,
):
"""Initialize data and create a causal graph instance.
Assigns treatment and outcome variables.
Also checks and finds the common causes and instruments for treatment
and outcome.
At least one of graph, common_causes or instruments must be provided. If
none of these variables are provided, then learn_graph() can be used later.
:param data: a pandas dataframe containing treatment, outcome and other
variables.
:param treatment: name of the treatment variable
:param outcome: name of the outcome variable
:param graph: path to DOT file containing a DAG or a string containing
a DAG specification in DOT format
:param common_causes: names of common causes of treatment and outcome. Only used when graph is None.
:param instruments: names of instrumental variables for the effect of
treatment on outcome. Only used when graph is None.
:param effect_modifiers: names of variables that can modify the treatment effect. If not provided, then the causal graph is used to find the effect modifiers. Estimators will return multiple different estimates based on each value of effect_modifiers.
:param estimand_type: the type of estimand requested (currently only "nonparametric-ate" is supported). In the future, may support other specific parametric forms of identification.
:param proceed_when_unidentifiable: does the identification proceed by ignoring potential unobserved confounders. Binary flag.
:param missing_nodes_as_confounders: Binary flag indicating whether variables in the dataframe that are not included in the causal graph, should be automatically included as confounder nodes.
:param identify_vars: Variable deciding whether to compute common causes, instruments and effect modifiers while initializing the class. identify_vars should be set to False when the user is providing common_causes, instruments or effect modifiers on their own (otherwise the identify_vars code can override the user-provided values). It also has no effect if no graph is given.
:returns: an instance of CausalModel class
"""
self._data = data
self._treatment = parse_state(treatment)
self._outcome = parse_state(outcome)
self._effect_modifiers = parse_state(effect_modifiers)
self._estimand_type = estimand_type
self._proceed_when_unidentifiable = proceed_when_unidentifiable
self._missing_nodes_as_confounders = missing_nodes_as_confounders
self.logger = logging.getLogger(__name__)
self._estimator_cache = {}
if graph is None:
self.logger.warning("Causal Graph not provided. DoWhy will construct a graph based on data inputs.")
self._common_causes = parse_state(common_causes)
self._instruments = parse_state(instruments)
if common_causes is not None and instruments is not None:
self._graph = CausalGraph(
self._treatment,
self._outcome,
common_cause_names=self._common_causes,
instrument_names=self._instruments,
effect_modifier_names=self._effect_modifiers,
observed_node_names=self._data.columns.tolist(),
)
elif common_causes is not None:
self._graph = CausalGraph(
self._treatment,
self._outcome,
common_cause_names=self._common_causes,
effect_modifier_names=self._effect_modifiers,
observed_node_names=self._data.columns.tolist(),
)
elif instruments is not None:
self._graph = CausalGraph(
self._treatment,
self._outcome,
instrument_names=self._instruments,
effect_modifier_names=self._effect_modifiers,
observed_node_names=self._data.columns.tolist(),
)
else:
self.logger.warning(
"Relevant variables to build causal graph not provided. You may want to use the learn_graph() function to construct the causal graph."
)
self._graph = CausalGraph(
self._treatment,
self._outcome,
effect_modifier_names=self._effect_modifiers,
observed_node_names=self._data.columns.tolist(),
)
else:
self.init_graph(graph=graph, identify_vars=identify_vars)
self._other_variables = kwargs
self.summary()
def init_graph(self, graph, identify_vars):
"""
Initialize self._graph using graph provided by the user.
"""
# Create causal graph object
self._graph = CausalGraph(
self._treatment,
self._outcome,
graph,
effect_modifier_names=self._effect_modifiers,
observed_node_names=self._data.columns.tolist(),
missing_nodes_as_confounders=self._missing_nodes_as_confounders,
)
if identify_vars:
self._common_causes = self._graph.get_common_causes(self._treatment, self._outcome)
self._instruments = self._graph.get_instruments(self._treatment, self._outcome)
# Sometimes, effect modifiers from the graph may not match those provided by the user.
# (Because some effect modifiers may also be common causes)
# In such cases, the user-provided modifiers are used.
# If no effect modifiers are provided, then the ones from the graph are used.
if self._effect_modifiers is None or not self._effect_modifiers:
self._effect_modifiers = self._graph.get_effect_modifiers(self._treatment, self._outcome)
def get_common_causes(self):
self._common_causes = self._graph.get_common_causes(self._treatment, self._outcome)
return self._common_causes
def get_instruments(self):
self._instruments = self._graph.get_instruments(self._treatment, self._outcome)
return self._instruments
def get_effect_modifiers(self):
self._effect_modifiers = self._graph.get_effect_modifiers(self._treatment, self._outcome)
return self._effect_modifiers
def learn_graph(self, method_name="cdt.causality.graph.LiNGAM", *args, **kwargs):
"""
Learn causal graph from the data. This function takes the method name as input and initializes the
causal graph object using the learnt graph.
:param self: instance of the CausalModel class (or its subclass)
:param method_name: Exact method name of the object to be imported from the relevant library.
:returns: an instance of the CausalGraph class initialized with the learned graph.
"""
# Import causal discovery class
str_arr = method_name.split(".", maxsplit=1)
library_name = str_arr[0]
causal_discovery_class = graph_learners.get_discovery_class_object(library_name)
model = causal_discovery_class(self._data, method_name, *args, **kwargs)
graph = model.learn_graph()
# Initialize causal graph object
self.init_graph(graph=graph)
return self._graph
def identify_effect(
self, estimand_type=None, method_name="default", proceed_when_unidentifiable=None, optimize_backdoor=False
):
"""Identify the causal effect to be estimated, using properties of the causal graph.
:param method_name: Method name for identification algorithm. ("id-algorithm" or "default")
:param proceed_when_unidentifiable: Binary flag indicating whether identification should proceed in the presence of (potential) unobserved confounders.
:returns: a probability expression (estimand) for the causal effect if identified, else NULL
"""
if proceed_when_unidentifiable is None:
proceed_when_unidentifiable = self._proceed_when_unidentifiable
if estimand_type is None:
estimand_type = self._estimand_type
estimand_type = EstimandType(estimand_type)
if method_name == "id-algorithm":
identifier = IDIdentifier()
else:
identifier = AutoIdentifier(
estimand_type=estimand_type,
backdoor_adjustment=BackdoorAdjustment(method_name),
proceed_when_unidentifiable=proceed_when_unidentifiable,
optimize_backdoor=optimize_backdoor,
)
identified_estimand = identifier.identify_effect(
graph=self._graph, treatment_name=self._treatment, outcome_name=self._outcome
)
self.identifier = identifier
return identified_estimand
def estimate_effect(
self,
identified_estimand,
method_name=None,
control_value=0,
treatment_value=1,
test_significance=None,
evaluate_effect_strength=False,
confidence_intervals=False,
target_units="ate",
effect_modifiers=None,
fit_estimator=True,
method_params=None,
):
"""Estimate the identified causal effect.
Currently requires an explicit method name to be specified. Method names follow the convention of identification method followed by the specific estimation method: "[backdoor/iv].estimation_method_name". The following methods are supported.
* Propensity Score Matching: "backdoor.propensity_score_matching"
* Propensity Score Stratification: "backdoor.propensity_score_stratification"
* Propensity Score-based Inverse Weighting: "backdoor.propensity_score_weighting"
* Linear Regression: "backdoor.linear_regression"
* Generalized Linear Models (e.g., logistic regression): "backdoor.generalized_linear_model"
* Instrumental Variables: "iv.instrumental_variable"
* Regression Discontinuity: "iv.regression_discontinuity"
In addition, you can directly call any of the EconML estimation methods. The convention is "backdoor.econml.path-to-estimator-class". For example, for the double machine learning estimator ("DML" class) that is located inside "dml" module of EconML, you can use the method name, "backdoor.econml.dml.DML". CausalML estimators can also be called. See `this demo notebook <https://py-why.github.io/dowhy/example_notebooks/dowhy-conditional-treatment-effects.html>`_.
:param identified_estimand: a probability expression
that represents the effect to be estimated. Output of
CausalModel.identify_effect method
:param method_name: name of the estimation method to be used.
:param control_value: Value of the treatment in the control group, for effect estimation. If treatment is multi-variate, this can be a list.
:param treatment_value: Value of the treatment in the treated group, for effect estimation. If treatment is multi-variate, this can be a list.
:param test_significance: Binary flag on whether to additionally do a statistical significance test for the estimate.
:param evaluate_effect_strength: (Experimental) Binary flag on whether to estimate the relative strength of the treatment's effect. This measure can be used to compare different treatments for the same outcome (by running this method with different treatments sequentially).
:param confidence_intervals: (Experimental) Binary flag indicating whether confidence intervals should be computed.
:param target_units: (Experimental) The units for which the treatment effect should be estimated. This can be of three types. (1) a string for common specifications of target units (namely, "ate", "att" and "atc"), (2) a lambda function that can be used as an index for the data (pandas DataFrame), or (3) a new DataFrame that contains values of the effect_modifiers and effect will be estimated only for this new data.
:param effect_modifiers: Names of effect modifier variables can be (optionally) specified here too, since they do not affect identification. If None, the effect_modifiers from the CausalModel are used.
:param fit_estimator: Boolean flag on whether to fit the estimator.
Setting it to False is useful to estimate the effect on new data using a previously fitted estimator.
:param method_params: Dictionary containing any method-specific parameters. These are passed directly to the estimating method. See the docs for each estimation method for allowed method-specific params.
:returns: An instance of the CausalEstimate class, containing the causal effect estimate
and other method-dependent information
"""
if effect_modifiers is None or len(effect_modifiers) == 0:
effect_modifiers = self._graph.get_effect_modifiers(self._treatment, self._outcome)
if method_name is None:
# TODO add propensity score as default backdoor method, iv as default iv method, add an informational message to show which method has been selected.
pass
else:
# TODO add dowhy as a prefix to all dowhy estimators
num_components = len(method_name.split("."))
str_arr = method_name.split(".", maxsplit=1)
identifier_name = str_arr[0]
estimator_name = str_arr[1]
# This is done because all dowhy estimator names have two parts, while external ones have more than two
if num_components > 2:
estimator_package = estimator_name.split(".")[0]
if estimator_package == "dowhy": # For updated dowhy methods
estimator_method = estimator_name.split(".", maxsplit=1)[
1
] # discard dowhy from the full package name
causal_estimator_class = causal_estimators.get_class_object(estimator_method + "_estimator")
else:
third_party_estimator_package = estimator_package
causal_estimator_class = causal_estimators.get_class_object(
third_party_estimator_package, estimator_name
)
if method_params is None:
method_params = {}
# Define the third-party estimation method to be used
method_params[third_party_estimator_package + "_estimator"] = estimator_name
else: # For older dowhy methods
self.logger.info(estimator_name)
# Process the dowhy estimators
causal_estimator_class = causal_estimators.get_class_object(estimator_name + "_estimator")
if method_params is not None and (num_components <= 2 or estimator_package == "dowhy"):
extra_args = method_params.get("init_params", {})
else:
extra_args = {}
if method_params is None:
method_params = {}
identified_estimand.set_identifier_method(identifier_name)
if not fit_estimator and method_name in self._estimator_cache:
causal_estimator = self._estimator_cache[method_name]
else:
causal_estimator = causal_estimator_class(
identified_estimand,
**method_params,
**extra_args,
)
self._estimator_cache[method_name] = causal_estimator
return estimate_effect(
self._data,
self._treatment,
self._outcome,
identifier_name,
causal_estimator,
control_value,
treatment_value,
target_units,
effect_modifiers,
fit_estimator,
method_params,
)
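# Illustrative sketch of the refactored flow above: the estimator is now
# constructed from the identified estimand plus method_params only, and the
# functional estimate_effect() helper receives the data and fits the
# estimator. A cached, already-fitted estimator is reused when
# fit_estimator=False.
#
#   estimate = model.estimate_effect(
#       identified_estimand, method_name="backdoor.linear_regression"
#   )
#   estimate_on_same_model = model.estimate_effect(
#       identified_estimand,
#       method_name="backdoor.linear_regression",
#       fit_estimator=False,  # reuse the estimator cached above
#   )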
def do(self, x, identified_estimand, method_name=None, fit_estimator=True, method_params=None):
"""Do operator for estimating values of the outcome after intervening on treatment.
:param x: interventional value of the treatment variable
:param identified_estimand: a probability expression
that represents the effect to be estimated. Output of
CausalModel.identify_effect method
:param method_name: name of the estimation method to be used. See docs
for the estimate_effect method for a list of supported estimation methods.
:param fit_estimator: Boolean flag on whether to fit the estimator.
Setting it to False is useful to compute the do-operation on new
data using a previously fitted estimator.
:param method_params: Dictionary containing any method-specific parameters. These are passed directly to the estimating method.
:returns: an instance of the CausalEstimate class, containing the causal effect estimate
and other method-dependent information
"""
if method_name is None:
pass
else:
str_arr = method_name.split(".", maxsplit=1)
identifier_name = str_arr[0]
estimator_name = str_arr[1]
identified_estimand.set_identifier_method(identifier_name)
causal_estimator_class = causal_estimators.get_class_object(estimator_name + "_estimator")
# Check if estimator's target estimand is identified
if identified_estimand.estimands[identifier_name] is None:
self.logger.warning("No valid identified estimand for using instrumental variables method")
estimate = CausalEstimate(None, None, None, None, None)
else:
if fit_estimator:
# Note that while the variable name is the same,
# "self.causal_estimator", this estimator takes in fewer
# parameters than the one constructed in the
# estimate_effect code. It is not advisable to use the
# estimator from this function to call estimate_effect
# with fit_estimator=False.
self.causal_estimator = causal_estimator_class(
identified_estimand,
**method_params,
)
self.causal_estimator.fit(
self._data,
self._treatment,
self._outcome,
)
else:
# Estimator had been computed in a previous call
assert self.causal_estimator is not None
try:
estimate = self.causal_estimator.do(x)
except NotImplementedError:
self.logger.error("Do Operation not implemented or not supported for this estimator.")
raise NotImplementedError
return estimate
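# Illustrative sketch of the refactored do() flow above: construction and
# fitting are now separate steps.
#
#   estimator = causal_estimator_class(identified_estimand)
#   estimator.fit(data, treatment_name, outcome_name)
#   outcome_at_x = estimator.do(x=1)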
def refute_estimate(self, estimand, estimate, method_name=None, show_progress_bar=False, **kwargs):
"""Refute an estimated causal effect.
If method_name is provided, uses the provided method. In the future, we may support automatic selection of suitable refutation tests. The following refutation methods are supported.
* Adding a randomly-generated confounder: "random_common_cause"
* Adding a confounder that is associated with both treatment and outcome: "add_unobserved_common_cause"
* Replacing the treatment with a placebo (random) variable: "placebo_treatment_refuter"
* Removing a random subset of the data: "data_subset_refuter"
:param estimand: target estimand, an instance of the IdentifiedEstimand class (typically, the output of identify_effect)
:param estimate: estimate to be refuted, an instance of the CausalEstimate class (typically, the output of estimate_effect)
:param method_name: name of the refutation method
:param show_progress_bar: Boolean flag on whether to show a progress bar
:param kwargs: (optional) additional arguments that are passed directly to the refutation method. Can specify a random seed here to ensure reproducible results ('random_seed' parameter). For method-specific parameters, consult the documentation for the specific method. All refutation methods are in the causal_refuters subpackage.
:returns: an instance of the RefuteResult class
"""
if estimate is None or estimate.value is None:
self.logger.error("Aborting refutation! No estimate is provided.")
raise ValueError("Aborting refutation! No valid estimate is provided.")
if method_name is None:
pass
else:
refuter_class = causal_refuters.get_class_object(method_name)
refuter = refuter_class(self._data, identified_estimand=estimand, estimate=estimate, **kwargs)
res = refuter.refute_estimate(show_progress_bar)
return res
def view_model(self, layout="dot", size=(8, 6), file_name="causal_model"):
"""View the causal DAG.
:param layout: string specifying the layout of the graph.
:param size: tuple (x, y) specifying the width and height of the figure in inches.
:param file_name: string specifying the file name for the saved causal graph png.
:returns: a visualization of the graph
"""
self._graph.view_graph(layout, size, file_name)
def interpret(self, method_name=None, **kwargs):
"""Interpret the causal model.
:param method_name: method used for interpreting the model. If None,
then default interpreter is chosen that describes the model summary and shows the associated causal graph.
:param kwargs: Optional parameters that are directly passed to the interpreter method.
:returns: None
"""
if method_name is None:
self.summary(print_to_stdout=True)
self.view_model()
return
method_name_arr = parse_state(method_name)
import dowhy.interpreters as interpreters
for method in method_name_arr:
interpreter = interpreters.get_class_object(method)
interpreter(self, **kwargs).interpret()
def summary(self, print_to_stdout=False):
"""Print a text summary of the model.
:returns: a string containing the summary
"""
summary_text = "Model to find the causal effect of treatment {0} on outcome {1}".format(
self._treatment, self._outcome
)
self.logger.info(summary_text)
if print_to_stdout:
print(summary_text)
return summary_text
def refute_graph(self, k=1, independence_test=None, independence_constraints=None):
"""
Check whether the conditional independencies implied by the input graph hold in the dataset -
( X ⫫ Y ) | Z
where X and Y are currently restricted to singleton sets and
Z can contain multiple variables.
:param k: number of covariates in set Z
:param independence_test: dictionary containing methods to test conditional independence in data
:param independence_constraints: list of implications to be tested, provided by the user, in the format
[(x,y,(z1,z2)),
(x,y, (z3,))
]
:returns: an instance of GraphRefuter class
"""
if independence_test is not None:
test_for_continuous = independence_test["test_for_continuous"]
test_for_discrete = independence_test["test_for_discrete"]
refuter = GraphRefuter(
data=self._data, method_name_continuous=test_for_continuous, method_name_discrete=test_for_discrete
)
else:
refuter = GraphRefuter(data=self._data)
if independence_constraints is None:
all_nodes = list(self._graph.get_all_nodes(include_unobserved=False))
num_nodes = len(all_nodes)
array_indices = list(range(0, num_nodes))
all_possible_combinations = list(
combinations(array_indices, 2)
) # Generating sets of indices of size 2 for different x and y
conditional_independences = []
self.logger.info("The followed conditional independences are true for the input graph")
for combination in all_possible_combinations: # Iterate over the unique 2-sized sets [x,y]
i = combination[0]
j = combination[1]
a = all_nodes[i]
b = all_nodes[j]
if i < j:
temp_arr = all_nodes[:i] + all_nodes[i + 1 : j] + all_nodes[j + 1 :]
else:
temp_arr = all_nodes[:j] + all_nodes[j + 1 : i] + all_nodes[i + 1 :]
k_sized_lists = list(combinations(temp_arr, k))
for k_list in k_sized_lists:
if self._graph.check_dseparation([str(a)], [str(b)], k_list) == True:
self.logger.info(" %s and %s are CI given %s ", a, b, k_list)
conditional_independences.append([a, b, k_list])
independence_constraints = conditional_independences
res = refuter.refute_model(independence_constraints=independence_constraints)
self.logger.info(refuter._refutation_passed)
return res
| andresmor-ms | 11c4e0dafd6e824eb81ad14262457d954ae61468 | affe0952f4aba6845247355c171565510c2c1673 | `extra_args` is only used for `init_params` which is used to instantiate estimators, method_params are the extra parameters for executing the estimate_effect method | andresmor-ms | 252 |
py-why/dowhy | 746 | Functional api/causal estimators | * Introduce `fit()` method to estimators.
* Refactor constructors to avoid using `*args` and `**kwargs` and have more explicit parameters.
* Refactor refuters and other parts of the code to use `fit()` and modify arguments to `estimate_effect()` | null | 2022-11-04 16:15:39+00:00 | 2022-12-03 17:07:53+00:00 | dowhy/causal_refuters/add_unobserved_common_cause.py | import copy
import logging
import math
from typing import Dict, List, Optional, Union
import numpy as np
import pandas as pd
import scipy.stats
import statsmodels.api as sm
from sklearn.linear_model import LogisticRegression
from sklearn.preprocessing import StandardScaler
from tqdm.auto import tqdm
from dowhy.causal_estimator import CausalEstimate, CausalEstimator
from dowhy.causal_estimators.linear_regression_estimator import LinearRegressionEstimator
from dowhy.causal_estimators.regression_estimator import RegressionEstimator
from dowhy.causal_identifier.identified_estimand import IdentifiedEstimand
from dowhy.causal_refuter import CausalRefutation, CausalRefuter, choose_variables
from dowhy.causal_refuters.evalue_sensitivity_analyzer import EValueSensitivityAnalyzer
from dowhy.causal_refuters.linear_sensitivity_analyzer import LinearSensitivityAnalyzer
from dowhy.causal_refuters.non_parametric_sensitivity_analyzer import NonParametricSensitivityAnalyzer
from dowhy.causal_refuters.partial_linear_sensitivity_analyzer import PartialLinearSensitivityAnalyzer
logger = logging.getLogger(__name__)
DEFAULT_CONVERGENCE_THRESHOLD = 0.1
DEFAULT_C_STAR_MAX = 1000
class AddUnobservedCommonCause(CausalRefuter):
"""Add an unobserved confounder for refutation.
AddUnobservedCommonCause class supports three methods:
1) Simulation of an unobserved confounder
2) Linear partial R2 : Sensitivity Analysis for linear models.
3) Non-Parametric partial R2 based : Sensitivity Analysis for non-parametric models.
Supports additional parameters that can be specified in the refute_estimate() method.
"""
def __init__(self, *args, **kwargs):
"""
Initialize the parameters required for the refuter.
For direct_simulation, if effect_strength_on_treatment or effect_strength_on_outcome is not
given, it is calculated automatically as a range between the
minimum and maximum effect strength of observed confounders on treatment
and outcome respectively.
:param simulation_method: The method to use for simulating effect of unobserved confounder. Possible values are ["direct-simulation", "linear-partial-R2", "non-parametric-partial-R2", "e-value"].
:param confounders_effect_on_treatment: str : The type of effect on the treatment due to the unobserved confounder. Possible values are ['binary_flip', 'linear']
:param confounders_effect_on_outcome: str : The type of effect on the outcome due to the unobserved confounder. Possible values are ['binary_flip', 'linear']
:param effect_strength_on_treatment: float, numpy.ndarray: [Used when simulation_method="direct-simulation"] Strength of the confounder's effect on treatment. When confounders_effect_on_treatment is linear, it is the regression coefficient. When the confounders_effect_on_treatment is binary flip, it is the probability with which effect of unobserved confounder can invert the value of the treatment.
:param effect_strength_on_outcome: float, numpy.ndarray: Strength of the confounder's effect on outcome. Its interpretation depends on confounders_effect_on_outcome and the simulation_method. When simulation_method is direct-simulation, for a linear effect it behaves like the regression coefficient and for a binary flip, it is the probability with which it can invert the value of the outcome.
:param partial_r2_confounder_treatment: float, numpy.ndarray: [Used when simulation_method is linear-partial-R2 or non-parametric-partial-R2] Partial R2 of the unobserved confounder wrt the treatment conditioned on the observed confounders. Only in the case of general non-parametric-partial-R2, it is the fraction of variance in the reisz representer that is explained by the unobserved confounder; specifically (1-r), where r is the ratio of variance of reisz representer, alpha^2, based on observed confounders and that based on all confounders.
:param partial_r2_confounder_outcome: float, numpy.ndarray: [Used when simulation_method is linear-partial-R2 or non-parametric-partial-R2] Partial R2 of the unobserved confounder wrt the outcome conditioned on the treatment and observed confounders.
:param frac_strength_treatment: float: This parameter decides the effect strength of the simulated confounder as a fraction of the effect strength of observed confounders on treatment. Defaults to 1.
:param frac_strength_outcome: float: This parameter decides the effect strength of the simulated confounder as a fraction of the effect strength of observed confounders on outcome. Defaults to 1.
:param plotmethod: string: Type of plot to be shown. If None, no plot is generated. This parameter is used only when more than one treatment confounder effect value or outcome confounder effect value is provided. Default is "colormesh". Supported values are "contour", "colormesh" when more than one value is provided for both confounder effect value parameters; "line" when provided for only one of them.
:param percent_change_estimate: It is the percentage of reduction of treatment estimate that could alter the results (default = 1).
if percent_change_estimate = 1, the robustness value describes the strength of association of confounders with treatment and outcome in order to reduce the estimate by 100%, i.e., bring it down to 0. (relevant only for Linear Sensitivity Analysis, ignore for rest)
:param confounder_increases_estimate: True implies that confounder increases the absolute value of estimate and vice versa. (Default = False). (relevant only for Linear Sensitivity Analysis, ignore for rest)
:param benchmark_common_causes: names of variables for bounding strength of confounders. (relevant only for partial-r2 based simulation methods)
:param significance_level: confidence interval for statistical inference (default = 0.05). (relevant only for partial-r2 based simulation methods)
:param null_hypothesis_effect: assumed effect under the null hypothesis. (relevant only for linear-partial-R2, ignore for rest)
:param plot_estimate: Generate contour plot for estimate while performing sensitivity analysis. (default = True).
(relevant only for partial-r2 based simulation methods)
:param num_splits: number of splits for cross validation. (default = 5). (relevant only for non-parametric-partial-R2 simulation method)
:param shuffle_data: whether to shuffle the data before splitting into folds (default = False). (relevant only for non-parametric-partial-R2 simulation method)
:param shuffle_random_seed: seed for randomly shuffling data. (relevant only for non-parametric-partial-R2 simulation method)
:param alpha_s_estimator_param_list: list of dictionaries with parameters for finding alpha_s. (relevant only for non-parametric-partial-R2 simulation method)
:param g_s_estimator_list: list of estimator objects for finding g_s. These objects should have fit() and predict() functions implemented. (relevant only for non-parametric-partial-R2 simulation method)
:param g_s_estimator_param_list: list of dictionaries with parameters for tuning respective estimators in "g_s_estimator_list". The order of the dictionaries in the list should be consistent with the estimator objects order in "g_s_estimator_list". (relevant only for non-parametric-partial-R2 simulation method)
"""
super().__init__(*args, **kwargs)
self.simulation_method = kwargs["simulation_method"] if "simulation_method" in kwargs else "direct-simulation"
self.effect_on_t = (
kwargs["confounders_effect_on_treatment"] if "confounders_effect_on_treatment" in kwargs else "binary_flip"
)
self.effect_on_y = (
kwargs["confounders_effect_on_outcome"] if "confounders_effect_on_outcome" in kwargs else "linear"
)
if self.simulation_method == "direct-simulation":
self.kappa_t = kwargs["effect_strength_on_treatment"] if "effect_strength_on_treatment" in kwargs else None
self.kappa_y = kwargs["effect_strength_on_outcome"] if "effect_strength_on_outcome" in kwargs else None
elif self.simulation_method in ["linear-partial-R2", "non-parametric-partial-R2"]:
self.kappa_t = (
kwargs["partial_r2_confounder_treatment"] if "partial_r2_confounder_treatment" in kwargs else None
)
self.kappa_y = (
kwargs["partial_r2_confounder_outcome"] if "partial_r2_confounder_outcome" in kwargs else None
)
elif self.simulation_method == "e-value":
pass
else:
raise ValueError(
"simulation method is not supported. Try direct-simulation, linear-partial-R2, non-parametric-partial-R2, or e-value"
)
self.frac_strength_treatment = (
kwargs["effect_fraction_on_treatment"] if "effect_fraction_on_treatment" in kwargs else 1
)
self.frac_strength_outcome = (
kwargs["effect_fraction_on_outcome"] if "effect_fraction_on_outcome" in kwargs else 1
)
self.plotmethod = kwargs["plotmethod"] if "plotmethod" in kwargs else "colormesh"
self.percent_change_estimate = kwargs["percent_change_estimate"] if "percent_change_estimate" in kwargs else 1.0
self.significance_level = kwargs["significance_level"] if "significance_level" in kwargs else 0.05
self.confounder_increases_estimate = (
kwargs["confounder_increases_estimate"] if "confounder_increases_estimate" in kwargs else False
)
self.benchmark_common_causes = (
kwargs["benchmark_common_causes"] if "benchmark_common_causes" in kwargs else None
)
self.null_hypothesis_effect = kwargs["null_hypothesis_effect"] if "null_hypothesis_effect" in kwargs else 0
self.plot_estimate = kwargs["plot_estimate"] if "plot_estimate" in kwargs else True
self.num_splits = kwargs["num_splits"] if "num_splits" in kwargs else 5
self.shuffle_data = kwargs["shuffle_data"] if "shuffle_data" in kwargs else False
self.shuffle_random_seed = kwargs["shuffle_random_seed"] if "shuffle_random_seed" in kwargs else None
self.alpha_s_estimator_param_list = (
kwargs["alpha_s_estimator_param_list"] if "alpha_s_estimator_param_list" in kwargs else None
)
self.alpha_s_estimator_list = kwargs["alpha_s_estimator_list"] if "alpha_s_estimator_list" in kwargs else None
self.g_s_estimator_list = kwargs["g_s_estimator_list"] if "g_s_estimator_list" in kwargs else None
self.g_s_estimator_param_list = (
kwargs["g_s_estimator_param_list"] if "g_s_estimator_param_list" in kwargs else None
)
self.plugin_reisz = kwargs["plugin_reisz"] if "plugin_reisz" in kwargs else False
self.logger = logging.getLogger(__name__)
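# Illustrative sketch: this refuter is typically reached through
# CausalModel.refute_estimate(); keyword arguments select the simulation
# method and its parameters, as parsed in __init__ above. "W0" is a
# hypothetical benchmark covariate name.
#
#   refutation = model.refute_estimate(
#       identified_estimand,
#       estimate,
#       method_name="add_unobserved_common_cause",
#       simulation_method="linear-partial-R2",
#       benchmark_common_causes=["W0"],
#       effect_fraction_on_treatment=[1, 2, 3],
#   )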
def refute_estimate(self, show_progress_bar=False):
if self.simulation_method == "linear-partial-R2":
return sensitivity_linear_partial_r2(
self._data,
self._estimate,
self._treatment_name,
self.frac_strength_treatment,
self.frac_strength_outcome,
self.percent_change_estimate,
self.benchmark_common_causes,
self.significance_level,
self.null_hypothesis_effect,
self.plot_estimate,
)
elif self.simulation_method == "non-parametric-partial-R2":
return sensitivity_non_parametric_partial_r2(
self._estimate,
self.kappa_t,
self.kappa_y,
self.frac_strength_treatment,
self.frac_strength_outcome,
self.benchmark_common_causes,
self.plot_estimate,
self.alpha_s_estimator_list,
self.alpha_s_estimator_param_list,
self.g_s_estimator_list,
self.g_s_estimator_param_list,
self.plugin_reisz,
)
elif self.simulation_method == "e-value":
return sensitivity_e_value(
self._data,
self._target_estimand,
self._estimate,
self._treatment_name,
self._outcome_name,
self.plot_estimate,
)
elif self.simulation_method == "direct-simulation":
refute = sensitivity_simulation(
self._data,
self._target_estimand,
self._estimate,
self._treatment_name,
self._outcome_name,
self.kappa_t,
self.kappa_y,
self.effect_on_t,
self.effect_on_y,
self.frac_strength_treatment,
self.frac_strength_outcome,
self.plotmethod,
show_progress_bar,
)
refute.add_refuter(self)
return refute
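# Illustrative sketch of the direct-simulation path, passing explicit effect
# strengths instead of the automatically inferred defaults:
#
#   refutation = model.refute_estimate(
#       identified_estimand,
#       estimate,
#       method_name="add_unobserved_common_cause",
#       simulation_method="direct-simulation",
#       confounders_effect_on_treatment="binary_flip",
#       confounders_effect_on_outcome="linear",
#       effect_strength_on_treatment=0.05,
#       effect_strength_on_outcome=0.02,
#   )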
def include_simulated_confounder(
self, convergence_threshold=DEFAULT_CONVERGENCE_THRESHOLD, c_star_max=DEFAULT_C_STAR_MAX
):
return include_simulated_confounder(
self._data,
self._treatment_name,
self._outcome_name,
self.kappa_t,
self.kappa_y,
self._variables_of_interest,
convergence_threshold,
c_star_max,
)
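# Illustrative sketch: materializing the simulated confounder as a series,
# given an already constructed refuter instance (hypothetical variable name):
#
#   simulated_u = refuter.include_simulated_confounder(
#       convergence_threshold=0.1, c_star_max=1000
#   )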
def _infer_default_kappa_t(
data: pd.DataFrame,
target_estimand: IdentifiedEstimand,
treatment_name: List[str],
effect_on_t: str,
frac_strength_treatment: float,
len_kappa_t: int = 10,
):
"""Infer default effect strength of simulated confounder on treatment."""
observed_common_causes_names = target_estimand.get_backdoor_variables()
if len(observed_common_causes_names) > 0:
observed_common_causes = data[observed_common_causes_names]
observed_common_causes = pd.get_dummies(observed_common_causes, drop_first=True)
else:
raise ValueError(
"There needs to be at least one common cause to"
+ "automatically compute the default value of kappa_t."
+ " Provide a value for kappa_t"
)
t = data[treatment_name]
# Standardizing the data
observed_common_causes = StandardScaler().fit_transform(observed_common_causes)
if effect_on_t == "binary_flip":
# Fit a model containing all confounders and compare predictions
# using all features compared to all features except a given
# confounder.
tmodel = LogisticRegression().fit(observed_common_causes, t)
tpred = tmodel.predict(observed_common_causes).astype(int)
flips = []
for i in range(observed_common_causes.shape[1]):
oldval = np.copy(observed_common_causes[:, i])
observed_common_causes[:, i] = 0
tcap = tmodel.predict(observed_common_causes).astype(int)
observed_common_causes[:, i] = oldval
flips.append(np.sum(abs(tcap - tpred)) / tpred.shape[0])
min_coeff, max_coeff = min(flips), max(flips)
elif effect_on_t == "linear":
# Estimating the regression coefficient from standardized features to t
corrcoef_var_t = np.corrcoef(observed_common_causes, t, rowvar=False)[-1, :-1]
std_dev_t = np.std(t)[0]
max_coeff = max(corrcoef_var_t) * std_dev_t
min_coeff = min(corrcoef_var_t) * std_dev_t
else:
raise NotImplementedError("'" + effect_on_t + "' method not supported for confounders' effect on treatment")
min_coeff, max_coeff = _compute_min_max_coeff(min_coeff, max_coeff, frac_strength_treatment)
# By default, return a plot with 10 points
# consider 10 values of the effect of the unobserved confounder
step = (max_coeff - min_coeff) / len_kappa_t
logger.info("(Min, Max) kappa_t for observed common causes, ({0}, {1})".format(min_coeff, max_coeff))
if np.equal(max_coeff, min_coeff):
return max_coeff
else:
return np.arange(min_coeff, max_coeff, step)
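# Minimal numeric sketch of the "linear" branch above: for a standardized
# regressor w, the simple-regression coefficient of t on w equals
# corr(w, t) * std(t), which is what the code computes per confounder.
#
#   import numpy as np
#   rng = np.random.default_rng(0)
#   w = rng.normal(size=1000)
#   t = 0.5 * w + rng.normal(size=1000)
#   w_std = (w - w.mean()) / w.std()
#   slope = np.polyfit(w_std, t, 1)[0]
#   assert np.isclose(slope, np.corrcoef(w_std, t)[0, 1] * t.std())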
def _compute_min_max_coeff(min_coeff: float, max_coeff: float, effect_strength_fraction: np.ndarray):
max_coeff = effect_strength_fraction * max_coeff
min_coeff = effect_strength_fraction * min_coeff
return min_coeff, max_coeff
def _infer_default_kappa_y(
data: pd.DataFrame,
target_estimand: IdentifiedEstimand,
outcome_name: List[str],
effect_on_y: str,
frac_strength_outcome: float,
len_kappa_y: int = 10,
):
"""Infer default effect strength of simulated confounder on treatment."""
observed_common_causes_names = target_estimand.get_backdoor_variables()
if len(observed_common_causes_names) > 0:
observed_common_causes = data[observed_common_causes_names]
observed_common_causes = pd.get_dummies(observed_common_causes, drop_first=True)
else:
raise ValueError(
"There needs to be at least one common cause to"
+ "automatically compute the default value of kappa_y."
+ " Provide a value for kappa_y"
)
y = data[outcome_name]
# Standardizing the data
observed_common_causes = StandardScaler().fit_transform(observed_common_causes)
if effect_on_y == "binary_flip":
# Fit a model containing all confounders and compare predictions
# using all features compared to all features except a given
# confounder.
ymodel = LogisticRegression().fit(observed_common_causes, y)
ypred = ymodel.predict(observed_common_causes).astype(int)
flips = []
for i in range(observed_common_causes.shape[1]):
oldval = np.copy(observed_common_causes[:, i])
observed_common_causes[:, i] = 0
ycap = ymodel.predict(observed_common_causes).astype(int)
observed_common_causes[:, i] = oldval
flips.append(np.sum(abs(ycap - ypred)) / ypred.shape[0])
min_coeff, max_coeff = min(flips), max(flips)
elif effect_on_y == "linear":
corrcoef_var_y = np.corrcoef(observed_common_causes, y, rowvar=False)[-1, :-1]
std_dev_y = np.std(y)[0]
max_coeff = max(corrcoef_var_y) * std_dev_y
min_coeff = min(corrcoef_var_y) * std_dev_y
else:
raise NotImplementedError("'" + effect_on_y + "' method not supported for confounders' effect on outcome")
min_coeff, max_coeff = _compute_min_max_coeff(min_coeff, max_coeff, frac_strength_outcome)
# By default, return a plot with 10 points
# consider 10 values of the effect of the unobserved confounder
step = (max_coeff - min_coeff) / len_kappa_y
logger.info("(Min, Max) kappa_y for observed common causes, ({0}, {1})".format(min_coeff, max_coeff))
if np.equal(max_coeff, min_coeff):
return max_coeff
else:
return np.arange(min_coeff, max_coeff, step)
def _include_confounders_effect(
data: pd.DataFrame,
new_data: pd.DataFrame,
effect_on_t: str,
treatment_name: str,
kappa_t: float,
effect_on_y: str,
outcome_name: str,
kappa_y: float,
):
"""
This function modifies the data to include the effect of the simulated unobserved confounder.
In the case of a binary flip, we flip only if the random number is beyond the threshold set.
In the case of a linear effect, we add the confounder's value scaled by the given regression coefficient.
:param new_data: pandas.DataFrame: The data to be changed due to the effects of the unobserved confounder.
:param kappa_t: numpy.float64: The value of the threshold for binary_flip or the value of the regression coefficient for linear effect.
:param kappa_y: numpy.float64: The value of the threshold for binary_flip or the value of the regression coefficient for linear effect.
:return: pandas.DataFrame: The DataFrame that includes the effects of the unobserved confounder.
"""
num_rows = data.shape[0]
stdnorm = scipy.stats.norm()
w_random = stdnorm.rvs(num_rows)
if effect_on_t == "binary_flip":
alpha = 2 * kappa_t - 1 if kappa_t >= 0.5 else 1 - 2 * kappa_t
interval = stdnorm.interval(alpha)
rel_interval = interval[0] if kappa_t >= 0.5 else interval[1]
new_data.loc[rel_interval <= w_random, treatment_name] = (
1 - new_data.loc[rel_interval <= w_random, treatment_name]
)
for tname in treatment_name:
if pd.api.types.is_bool_dtype(data[tname]):
new_data = new_data.astype({tname: "bool"}, copy=False)
elif effect_on_t == "linear":
confounder_t_effect = kappa_t * w_random
# By default, we add the effect of simulated confounder for treatment.
# But subtract it from outcome to create a negative correlation
# assuming that the original confounder's effect was positive on both.
# This is to remove the effect of the original confounder.
new_data[treatment_name] = new_data[treatment_name].values + np.ndarray(
shape=(num_rows, 1), buffer=confounder_t_effect
)
else:
raise NotImplementedError("'" + effect_on_t + "' method not supported for confounders' effect on treatment")
if effect_on_y == "binary_flip":
alpha = 2 * kappa_y - 1 if kappa_y >= 0.5 else 1 - 2 * kappa_y
interval = stdnorm.interval(alpha)
rel_interval = interval[0] if kappa_y >= 0.5 else interval[1]
new_data.loc[rel_interval <= w_random, outcome_name] = 1 - new_data.loc[rel_interval <= w_random, outcome_name]
for yname in outcome_name:
if pd.api.types.is_bool_dtype(data[yname]):
new_data = new_data.astype({yname: "bool"}, copy=False)
elif effect_on_y == "linear":
confounder_y_effect = (-1) * kappa_y * w_random
# By default, we add the effect of simulated confounder for treatment.
# But subtract it from outcome to create a negative correlation
# assuming that the original confounder's effect was positive on both.
# This is to remove the effect of the original confounder.
new_data[outcome_name] = new_data[outcome_name].values + np.ndarray(
shape=(num_rows, 1), buffer=confounder_y_effect
)
else:
raise NotImplementedError("'" + effect_on_y + "' method not supported for confounders' effect on outcome")
return new_data
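# Minimal numeric sketch of the binary_flip thresholding above: with this
# interval construction, roughly a kappa fraction of rows gets flipped,
# on either side of kappa = 0.5.
#
#   import numpy as np
#   import scipy.stats
#   kappa = 0.2
#   stdnorm = scipy.stats.norm()
#   w = stdnorm.rvs(size=100_000, random_state=0)
#   upper = stdnorm.interval(1 - 2 * kappa)[1]
#   print(np.mean(upper <= w))  # approximately 0.2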
def include_simulated_confounder(
data: pd.DataFrame,
treatment_name: str,
outcome_name: str,
kappa_t: float,
kappa_y: float,
variables_of_interest: List,
convergence_threshold: float = DEFAULT_CONVERGENCE_THRESHOLD,
c_star_max: int = DEFAULT_C_STAR_MAX,
):
"""
This function simulates an unobserved confounder based on the data using the following steps:
1. It calculates the "residuals" from the treatment and outcome model
i.) The outcome model has outcome as the dependent variable and all the observed variables including treatment as independent variables
ii.) The treatment model has treatment as the dependent variable and all the observed variables as independent variables.
2. U is an intermediate random variable drawn from the normal distribution with the weighted average of residuals as mean and a unit variance
U ~ N(c1*d_y + c2*d_t, 1)
where
*d_y and d_t are residuals from the treatment and outcome model
*c1 and c2 are coefficients to the residuals
3. The final U, which is the simulated unobserved confounder, is obtained by debiasing the intermediate variable U by residualising it with X
Choosing the coefficients c1 and c2:
The coefficients are chosen based on these basic assumptions:
1. There is a hyperbolic relationship satisfying c1*c2 = c_star
2. c_star is chosen from a range of possible values based on the correlation of the obtained simulated variable with outcome and treatment.
3. The product of correlations with treatment and outcome should be at a minimum distance to the maximum correlations with treatment and outcome in any of the observed confounders
4. The ratio of the weights should be such that they maintain the ratio of the maximum possible observed coefficients within some confidence interval
:param c_star_max: The maximum possible value for the hyperbolic curve on which the coefficients to the residuals lie. It defaults to 1000 in the code if not specified by the user.
:type int
:param convergence_threshold: The threshold to check the plateauing of the correlation while selecting a c_star. It defaults to 0.1 in the code if not specified by the user
:type float
:returns: The simulated values of the unobserved confounder based on the data
:type pandas.core.series.Series
"""
# Obtaining the list of observed variables
required_variables = True
observed_variables = choose_variables(required_variables, variables_of_interest)
observed_variables_with_treatment_and_outcome = observed_variables + treatment_name + outcome_name
# Taking a subset of the dataframe that has only observed variables
data = data[observed_variables_with_treatment_and_outcome]
# Residuals from the outcome model obtained by fitting a linear model
y = data[outcome_name[0]]
observed_variables_with_treatment = observed_variables + treatment_name
X = data[observed_variables_with_treatment]
model = sm.OLS(y, X.astype("float"))
results = model.fit()
residuals_y = y - results.fittedvalues
d_y = list(pd.Series(residuals_y))
# Residuals from the treatment model obtained by fitting a linear model
t = data[treatment_name[0]].astype("int64")
X = data[observed_variables]
model = sm.OLS(t, X)
results = model.fit()
residuals_t = t - results.fittedvalues
d_t = list(pd.Series(residuals_t))
# Initialising product_cor_metric_observed with a really low value as finding maximum
product_cor_metric_observed = -10000000000
for i in observed_variables:
current_obs_confounder = data[i]
outcome_values = data[outcome_name[0]]
correlation_y = current_obs_confounder.corr(outcome_values)
treatment_values = t
correlation_t = current_obs_confounder.corr(treatment_values)
product_cor_metric_current = correlation_y * correlation_t
if product_cor_metric_current >= product_cor_metric_observed:
product_cor_metric_observed = product_cor_metric_current
correlation_t_observed = correlation_t
correlation_y_observed = correlation_y
    # The user has an option to supply effect_strength_on_y and effect_strength_on_t, which are then used instead of the maximum correlation with treatment and outcome among the observed variables, since they specify the desired effect.
if kappa_t is not None:
correlation_t_observed = kappa_t
if kappa_y is not None:
correlation_y_observed = kappa_y
# Choosing a c_star based on the data.
    # The correlations stop increasing after a certain c_star, i.e., the curve plateaus, and we choose c_star to be the value at which it plateaus.
correlation_y_list = []
correlation_t_list = []
product_cor_metric_simulated_list = []
x_list = []
step = int(c_star_max / 10)
for i in range(0, int(c_star_max), step):
c1 = math.sqrt(i)
c2 = c1
final_U = _generate_confounder_from_residuals(c1, c2, d_y, d_t, X)
current_simulated_confounder = final_U
outcome_values = data[outcome_name[0]]
correlation_y = current_simulated_confounder.corr(outcome_values)
correlation_y_list.append(correlation_y)
treatment_values = t
correlation_t = current_simulated_confounder.corr(treatment_values)
correlation_t_list.append(correlation_t)
product_cor_metric_simulated = correlation_y * correlation_t
product_cor_metric_simulated_list.append(product_cor_metric_simulated)
x_list.append(i)
    # Default to the largest candidate so that c_star is always defined,
    # even if no plateau is detected by the loop below
    c_star = x_list[-1]
    index = 1
while index < len(correlation_y_list):
if (correlation_y_list[index] - correlation_y_list[index - 1]) <= convergence_threshold:
c_star = x_list[index]
break
index = index + 1
    # Once c_star is chosen, c1 and c2 are selected on the hyperbolic curve by iterating over combinations of
    # c1 and c2 values and choosing the combination that minimises the distance between the product of
    # correlations of the simulated variable and the product of the maximum correlations of the observed variables,
    # while additionally checking that the ratio of the weights maintains the ratio of the maximum possible
    # observed coefficients within some confidence interval
# c1_final and c2_final are initialised to the values on the hyperbolic curve such that c1_final = c2_final and c1_final*c2_final = c_star
c1_final = math.sqrt(c_star)
c2_final = math.sqrt(c_star)
# initialising min_distance_between_product_cor_metrics to be a value greater than 1
min_distance_between_product_cor_metrics = 1.5
i = 0.05
threshold = c_star / 0.05
while i <= threshold:
c2 = i
c1 = c_star / c2
final_U = _generate_confounder_from_residuals(c1, c2, d_y, d_t, X)
current_simulated_confounder = final_U
outcome_values = data[outcome_name[0]]
correlation_y = current_simulated_confounder.corr(outcome_values)
treatment_values = t
correlation_t = current_simulated_confounder.corr(treatment_values)
product_cor_metric_simulated = correlation_y * correlation_t
if min_distance_between_product_cor_metrics >= abs(product_cor_metric_simulated - product_cor_metric_observed):
min_distance_between_product_cor_metrics = abs(product_cor_metric_simulated - product_cor_metric_observed)
additional_condition = correlation_y_observed / correlation_t_observed
if ((c1 / c2) <= (additional_condition + 0.3 * additional_condition)) and (
(c1 / c2) >= (additional_condition - 0.3 * additional_condition)
): # choose minimum positive value
c1_final = c1
c2_final = c2
i = i * 1.5
"""#closed form solution
print("c_star_max before closed form", c_star_max)
if max_correlation_with_t == -1000:
c2 = 0
c1 = c_star_max
else:
additional_condition = abs(max_correlation_with_y/max_correlation_with_t)
print("additional_condition", additional_condition)
c2 = math.sqrt(c_star_max/additional_condition)
c1 = c_star_max/c2"""
final_U = _generate_confounder_from_residuals(c1_final, c2_final, d_y, d_t, X)
return final_U
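# Illustrative usage sketch (the column names "v0"/"y" are hypothetical;
# treatment_name and outcome_name are lists of column names):
#
#     u = include_simulated_confounder(
#         data,
#         treatment_name=["v0"],
#         outcome_name=["y"],
#         kappa_t=None,   # None falls back to the strongest observed correlations
#         kappa_y=None,
#         variables_of_interest=None,
#     )
#     data_with_u = data.assign(simulated_confounder=u)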
def _generate_confounder_from_residuals(c1, c2, d_y, d_t, X):
"""
This function takes the residuals from the treatment and outcome model and their coefficients and simulates the intermediate random variable U by taking
the row wise normal distribution corresponding to each residual value and then debiasing the intermediate variable to get the final variable.
:param c1: coefficient to the residual from the outcome model
:type float
:param c2: coefficient to the residual from the treatment model
:type float
:param d_y: residuals from the outcome model
:type list
:param d_t: residuals from the treatment model
:type list
:returns: The simulated values of the unobserved confounder based on the data
:type pandas.core.series.Series
"""
U = []
for j in range(len(d_t)):
simulated_variable_mean = c1 * d_y[j] + c2 * d_t[j]
simulated_variable_stddev = 1
U.append(np.random.normal(simulated_variable_mean, simulated_variable_stddev, 1))
U = np.array(U)
model = sm.OLS(U, X)
results = model.fit()
U = U.reshape(
-1,
)
    final_U = U - results.fittedvalues.values
    final_U = pd.Series(final_U)
return final_U
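# Note: the per-row loop above can be replaced by a single vectorized draw; a sketch
# of the equivalent computation (same distribution, no Python loop):
#
#     means = c1 * np.asarray(d_y) + c2 * np.asarray(d_t)
#     U = np.random.normal(means, 1.0)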
def sensitivity_linear_partial_r2(
data: pd.DataFrame,
estimate: CausalEstimate,
treatment_name: str,
frac_strength_treatment: float = 1.0,
frac_strength_outcome: float = 1.0,
percent_change_estimate: float = 1.0,
benchmark_common_causes: Optional[List[str]] = None,
significance_level: Optional[float] = None,
null_hypothesis_effect: Optional[float] = None,
plot_estimate: bool = True,
) -> LinearSensitivityAnalyzer:
"""Add an unobserved confounder for refutation using Linear partial R2 methond (Sensitivity Analysis for linear models).
:param data: pd.DataFrame: Data to run the refutation
:param estimate: CausalEstimate: Estimate to run the refutation
:param treatment_name: str: Name of the treatment
:param frac_strength_treatment: float: This parameter decides the effect strength of the simulated confounder as a fraction of the effect strength of observed confounders on treatment. Defaults to 1.
:param frac_strength_outcome: float: This parameter decides the effect strength of the simulated confounder as a fraction of the effect strength of observed confounders on outcome. Defaults to 1.
:param percent_change_estimate: It is the percentage of reduction of treatment estimate that could alter the results (default = 1).
        if percent_change_estimate = 1, the robustness value describes the strength of association of confounders with treatment and outcome needed to reduce the estimate by 100%, i.e., bring it down to 0. (relevant only for Linear Sensitivity Analysis, ignore for rest)
:param benchmark_common_causes: names of variables for bounding strength of confounders. (relevant only for partial-r2 based simulation methods)
    :param significance_level: significance level for statistical inference (default = 0.05). (relevant only for partial-r2 based simulation methods)
:param null_hypothesis_effect: assumed effect under the null hypothesis. (relevant only for linear-partial-R2, ignore for rest)
:param plot_estimate: Generate contour plot for estimate while performing sensitivity analysis. (default = True).
(relevant only for partial-r2 based simulation methods)
"""
if not (isinstance(estimate.estimator, LinearRegressionEstimator)):
raise NotImplementedError("Currently only LinearRegressionEstimator is supported for Sensitivity Analysis")
if len(estimate.estimator._effect_modifier_names) > 0:
raise NotImplementedError("The current implementation does not support effect modifiers")
if frac_strength_outcome == 1:
frac_strength_outcome = frac_strength_treatment
analyzer = LinearSensitivityAnalyzer(
estimator=estimate.estimator,
data=data,
treatment_name=treatment_name,
percent_change_estimate=percent_change_estimate,
significance_level=significance_level,
benchmark_common_causes=benchmark_common_causes,
null_hypothesis_effect=null_hypothesis_effect,
frac_strength_treatment=frac_strength_treatment,
frac_strength_outcome=frac_strength_outcome,
common_causes_order=estimate.estimator._observed_common_causes.columns,
)
analyzer.check_sensitivity(plot=plot_estimate)
return analyzer
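# Illustrative invocation through the public refuter API (a sketch; assumes a fitted
# CausalModel `model` and a hypothetical observed confounder "W0" for benchmarking):
#
#     refutation = model.refute_estimate(
#         identified_estimand,
#         estimate,
#         method_name="add_unobserved_common_cause",
#         simulation_method="linear-partial-R2",
#         benchmark_common_causes=["W0"],
#         effect_fraction_on_treatment=1,
#         effect_fraction_on_outcome=1,
#     )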
def sensitivity_non_parametric_partial_r2(
estimate: CausalEstimate,
kappa_t: Optional[Union[float, np.ndarray]] = None,
kappa_y: Optional[Union[float, np.ndarray]] = None,
frac_strength_treatment: float = 1.0,
frac_strength_outcome: float = 1.0,
benchmark_common_causes: Optional[List[str]] = None,
plot_estimate: bool = True,
alpha_s_estimator_list: Optional[List] = None,
alpha_s_estimator_param_list: Optional[List[Dict]] = None,
g_s_estimator_list: Optional[List] = None,
g_s_estimator_param_list: Optional[List[Dict]] = None,
plugin_reisz: bool = False,
) -> Union[PartialLinearSensitivityAnalyzer, NonParametricSensitivityAnalyzer]:
"""Add an unobserved confounder for refutation using Non-parametric partial R2 methond (Sensitivity Analysis for non-parametric models).
:param estimate: CausalEstimate: Estimate to run the refutation
:param kappa_t: float, numpy.ndarray: Partial R2 of the unobserved confounder wrt the treatment conditioned on the observed confounders. Only in the case of general non-parametric-partial-R2, it is the fraction of variance in the reisz representer that is explained by the unobserved confounder; specifically (1-r), where r is the ratio of variance of reisz representer, alpha^2, based on observed confounders and that based on all confounders.
:param kappa_y: float, numpy.ndarray: Partial R2 of the unobserved confounder wrt the outcome conditioned on the treatment and observed confounders.
:param frac_strength_treatment: float: This parameter decides the effect strength of the simulated confounder as a fraction of the effect strength of observed confounders on treatment. Defaults to 1.
:param frac_strength_outcome: float: This parameter decides the effect strength of the simulated confounder as a fraction of the effect strength of observed confounders on outcome. Defaults to 1.
:param benchmark_common_causes: names of variables for bounding strength of confounders. (relevant only for partial-r2 based simulation methods)
:param plot_estimate: Generate contour plot for estimate while performing sensitivity analysis. (default = True).
(relevant only for partial-r2 based simulation methods)
:param alpha_s_estimator_list: list of estimator objects for estimating alpha_s. These objects should have fit() and predict() methods (relevant only for non-parametric-partial-R2 method)
:param alpha_s_estimator_param_list: list of dictionaries with parameters for finding alpha_s. (relevant only for non-parametric-partial-R2 simulation method)
:param g_s_estimator_list: list of estimator objects for finding g_s. These objects should have fit() and predict() functions implemented. (relevant only for non-parametric-partial-R2 simulation method)
:param g_s_estimator_param_list: list of dictionaries with parameters for tuning respective estimators in "g_s_estimator_list". The order of the dictionaries in the list should be consistent with the estimator objects order in "g_s_estimator_list". (relevant only for non-parametric-partial-R2 simulation method)
    :param plugin_reisz: bool: Flag on whether to use the plugin estimator or the nonparametric estimator for the reisz representer function (alpha_s).
"""
import dowhy.causal_estimators.econml
# If the estimator used is LinearDML, partially linear sensitivity analysis will be automatically chosen
if isinstance(estimate.estimator, dowhy.causal_estimators.econml.Econml):
if estimate.estimator._econml_methodname == "econml.dml.LinearDML":
analyzer = PartialLinearSensitivityAnalyzer(
estimator=estimate._estimator_object,
observed_common_causes=estimate.estimator._observed_common_causes,
treatment=estimate.estimator._treatment,
outcome=estimate.estimator._outcome,
alpha_s_estimator_param_list=alpha_s_estimator_param_list,
g_s_estimator_list=g_s_estimator_list,
g_s_estimator_param_list=g_s_estimator_param_list,
effect_strength_treatment=kappa_t,
effect_strength_outcome=kappa_y,
benchmark_common_causes=benchmark_common_causes,
frac_strength_treatment=frac_strength_treatment,
frac_strength_outcome=frac_strength_outcome,
)
analyzer.check_sensitivity(plot=plot_estimate)
return analyzer
analyzer = NonParametricSensitivityAnalyzer(
estimator=estimate.estimator,
observed_common_causes=estimate.estimator._observed_common_causes,
treatment=estimate.estimator._treatment,
outcome=estimate.estimator._outcome,
alpha_s_estimator_list=alpha_s_estimator_list,
alpha_s_estimator_param_list=alpha_s_estimator_param_list,
g_s_estimator_list=g_s_estimator_list,
g_s_estimator_param_list=g_s_estimator_param_list,
effect_strength_treatment=kappa_t,
effect_strength_outcome=kappa_y,
benchmark_common_causes=benchmark_common_causes,
frac_strength_treatment=frac_strength_treatment,
frac_strength_outcome=frac_strength_outcome,
theta_s=estimate.value,
plugin_reisz=plugin_reisz,
)
analyzer.check_sensitivity(plot=plot_estimate)
return analyzer
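# Illustrative invocation for the non-parametric method (a sketch; the estimator and
# parameter choices below are assumptions, any objects with fit()/predict() work):
#
#     from sklearn.ensemble import GradientBoostingRegressor
#
#     refutation = model.refute_estimate(
#         identified_estimand,
#         estimate,
#         method_name="add_unobserved_common_cause",
#         simulation_method="non-parametric-partial-R2",
#         benchmark_common_causes=["W0"],
#         g_s_estimator_list=[GradientBoostingRegressor()],
#         g_s_estimator_param_list=[{"learning_rate": 0.01, "n_estimators": 600}],
#     )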
def sensitivity_e_value(
data: pd.DataFrame,
target_estimand: IdentifiedEstimand,
estimate: CausalEstimate,
treatment_name: List[str],
outcome_name: List[str],
plot_estimate: bool = True,
) -> EValueSensitivityAnalyzer:
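    """Add an unobserved confounder for refutation using the E-value method (sensitivity analysis for regression models).
    :param data: pd.DataFrame: Data to run the refutation
    :param target_estimand: IdentifiedEstimand: Identified estimand to run the refutation
    :param estimate: CausalEstimate: Estimate to run the refutation
    :param treatment_name: list: Name of the treatment
    :param outcome_name: list: Name of the outcome
    :param plot_estimate: Generate a plot while performing sensitivity analysis. (default = True)
    """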
if not isinstance(estimate.estimator, RegressionEstimator):
raise NotImplementedError("E-Value sensitivity analysis is currently only implemented RegressionEstimator.")
if len(estimate.estimator._effect_modifier_names) > 0:
raise NotImplementedError("The current implementation does not support effect modifiers")
analyzer = EValueSensitivityAnalyzer(
estimate=estimate,
estimand=target_estimand,
data=data,
treatment_name=treatment_name[0],
outcome_name=outcome_name[0],
)
analyzer.check_sensitivity(plot=plot_estimate)
return analyzer
def sensitivity_simulation(
data: pd.DataFrame,
target_estimand: IdentifiedEstimand,
estimate: CausalEstimate,
    treatment_name: List[str],
    outcome_name: List[str],
kappa_t: Optional[Union[float, np.ndarray]] = None,
kappa_y: Optional[Union[float, np.ndarray]] = None,
confounders_effect_on_treatment: str = "binary_flip",
confounders_effect_on_outcome: str = "linear",
frac_strength_treatment: float = 1.0,
frac_strength_outcome: float = 1.0,
plotmethod: Optional[str] = None,
show_progress_bar=False,
**_,
) -> CausalRefutation:
"""
    This function attempts to add an unobserved common cause to the outcome and the treatment. At present, the behavior is implemented for one-dimensional continuous
    and binary variables. The function can take either single-valued inputs or a range of inputs; it then inspects the data type of the input and decides on the course of
    action.
:param data: pd.DataFrame: Data to run the refutation
:param target_estimand: IdentifiedEstimand: Identified estimand to run the refutation
:param estimate: CausalEstimate: Estimate to run the refutation
    :param treatment_name: list: Name of the treatment
    :param outcome_name: list: Name of the outcome
:param kappa_t: float, numpy.ndarray: Strength of the confounder's effect on treatment. When confounders_effect_on_treatment is linear, it is the regression coefficient. When the confounders_effect_on_treatment is binary flip, it is the probability with which effect of unobserved confounder can invert the value of the treatment.
:param kappa_y: float, numpy.ndarray: Strength of the confounder's effect on outcome. Its interpretation depends on confounders_effect_on_outcome and the simulation_method. When simulation_method is direct-simulation, for a linear effect it behaves like the regression coefficient and for a binary flip, it is the probability with which it can invert the value of the outcome.
:param confounders_effect_on_treatment: str : The type of effect on the treatment due to the unobserved confounder. Possible values are ['binary_flip', 'linear']
:param confounders_effect_on_outcome: str : The type of effect on the outcome due to the unobserved confounder. Possible values are ['binary_flip', 'linear']
:param frac_strength_treatment: float: This parameter decides the effect strength of the simulated confounder as a fraction of the effect strength of observed confounders on treatment. Defaults to 1.
:param frac_strength_outcome: float: This parameter decides the effect strength of the simulated confounder as a fraction of the effect strength of observed confounders on outcome. Defaults to 1.
    :param plotmethod: string: Type of plot to be shown. If None, no plot is generated. This parameter is used only when more than one treatment confounder effect value or outcome confounder effect value is provided. Default is "colormesh". Supported values are "contour", "colormesh" when more than one value is provided for both confounder effect value parameters; "line" when provided for only one of them.
    :return: CausalRefutation: An object that contains the estimated effect, the new effect, and the name of the refutation used.
"""
if kappa_t is None:
kappa_t = _infer_default_kappa_t(
data, target_estimand, treatment_name, confounders_effect_on_treatment, frac_strength_treatment
)
if kappa_y is None:
kappa_y = _infer_default_kappa_y(
data, target_estimand, outcome_name, confounders_effect_on_outcome, frac_strength_outcome
)
if not isinstance(kappa_t, (list, np.ndarray)) and not isinstance(
kappa_y, (list, np.ndarray)
): # Deal with single value inputs
new_data = copy.deepcopy(data)
new_data = _include_confounders_effect(
data,
new_data,
confounders_effect_on_treatment,
treatment_name,
kappa_t,
confounders_effect_on_outcome,
outcome_name,
kappa_y,
)
new_estimator = CausalEstimator.get_estimator_object(new_data, target_estimand, estimate)
new_effect = new_estimator.estimate_effect()
refute = CausalRefutation(
estimate.value, new_effect.value, refutation_type="Refute: Add an Unobserved Common Cause"
)
refute.new_effect_array = np.array(new_effect.value)
refute.new_effect = new_effect.value
return refute
else: # Deal with multiple value inputs
if isinstance(kappa_t, (list, np.ndarray)) and isinstance(
kappa_y, (list, np.ndarray)
): # Deal with range inputs
# Get a 2D matrix of values
# x,y = np.meshgrid(self.kappa_t, self.kappa_y) # x,y are both MxN
            results_matrix = np.zeros((len(kappa_t), len(kappa_y)))  # Matrix to hold all the results; every entry is filled in below
orig_data = copy.deepcopy(data)
for i in tqdm(
range(len(kappa_t)),
colour=CausalRefuter.PROGRESS_BAR_COLOR,
disable=not show_progress_bar,
desc="Refuting Estimates: ",
):
for j in range(len(kappa_y)):
new_data = _include_confounders_effect(
data,
                        orig_data.copy(),  # pass a fresh copy so in-place edits do not accumulate across iterations
confounders_effect_on_treatment,
treatment_name,
kappa_t[i],
confounders_effect_on_outcome,
outcome_name,
kappa_y[j],
)
new_estimator = CausalEstimator.get_estimator_object(new_data, target_estimand, estimate)
new_effect = new_estimator.estimate_effect()
refute = CausalRefutation(
estimate.value,
new_effect.value,
refutation_type="Refute: Add an Unobserved Common Cause",
)
results_matrix[i][j] = refute.new_effect # Populate the results
refute.new_effect_array = results_matrix
refute.new_effect = (np.min(results_matrix), np.max(results_matrix))
# Store the values into the refute object
if plotmethod is None:
return refute
import matplotlib
import matplotlib.pyplot as plt
fig = plt.figure(figsize=(6, 5))
left, bottom, width, height = 0.1, 0.1, 0.8, 0.8
ax = fig.add_axes([left, bottom, width, height])
oe = estimate.value
contour_levels = [oe / 4.0, oe / 2.0, (3.0 / 4) * oe, oe]
contour_levels.extend([0, np.min(results_matrix), np.max(results_matrix)])
if plotmethod == "contour":
cp = plt.contourf(kappa_y, kappa_t, results_matrix, levels=sorted(contour_levels))
# Adding a label on the contour line for the original estimate
fmt = {}
trueeffect_index = np.where(cp.levels == oe)[0][0]
fmt[cp.levels[trueeffect_index]] = "Estimated Effect"
# Label every other level using strings
plt.clabel(cp, [cp.levels[trueeffect_index]], inline=True, fmt=fmt)
plt.colorbar(cp)
elif plotmethod == "colormesh":
cp = plt.pcolormesh(kappa_y, kappa_t, results_matrix, shading="nearest")
plt.colorbar(cp, ticks=contour_levels)
ax.yaxis.set_ticks(kappa_t)
ax.xaxis.set_ticks(kappa_y)
plt.xticks(rotation=45)
ax.set_title("Effect of Unobserved Common Cause")
ax.set_ylabel("Value of Linear Constant on Treatment")
ax.set_xlabel("Value of Linear Constant on Outcome")
plt.show()
return refute
elif isinstance(kappa_t, (list, np.ndarray)):
            outcomes = np.zeros(len(kappa_t))  # every entry is filled in by the loop below
orig_data = copy.deepcopy(data)
for i in tqdm(
range(0, len(kappa_t)),
colour=CausalRefuter.PROGRESS_BAR_COLOR,
disable=not show_progress_bar,
desc="Refuting Estimates: ",
):
new_data = _include_confounders_effect(
data,
                    orig_data.copy(),  # pass a fresh copy so in-place edits do not accumulate across iterations
confounders_effect_on_treatment,
treatment_name,
kappa_t[i],
confounders_effect_on_outcome,
outcome_name,
kappa_y,
)
new_estimator = CausalEstimator.get_estimator_object(new_data, target_estimand, estimate)
new_effect = new_estimator.estimate_effect()
refute = CausalRefutation(
estimate.value, new_effect.value, refutation_type="Refute: Add an Unobserved Common Cause"
)
logger.debug(refute)
outcomes[i] = refute.new_effect # Populate the results
refute.new_effect_array = outcomes
refute.new_effect = (np.min(outcomes), np.max(outcomes))
if plotmethod is None:
return refute
import matplotlib
import matplotlib.pyplot as plt
fig = plt.figure(figsize=(6, 5))
left, bottom, width, height = 0.1, 0.1, 0.8, 0.8
ax = fig.add_axes([left, bottom, width, height])
plt.plot(kappa_t, outcomes)
plt.axhline(estimate.value, linestyle="--", color="gray")
ax.set_title("Effect of Unobserved Common Cause")
ax.set_xlabel("Value of Linear Constant on Treatment")
ax.set_ylabel("Estimated Effect after adding the common cause")
plt.show()
return refute
elif isinstance(kappa_y, (list, np.ndarray)):
            outcomes = np.zeros(len(kappa_y))  # every entry is filled in by the loop below
orig_data = copy.deepcopy(data)
for i in tqdm(
range(0, len(kappa_y)),
colour=CausalRefuter.PROGRESS_BAR_COLOR,
disable=not show_progress_bar,
desc="Refuting Estimates: ",
):
new_data = _include_confounders_effect(
data,
                    orig_data.copy(),  # pass a fresh copy so in-place edits do not accumulate across iterations
confounders_effect_on_treatment,
treatment_name,
kappa_t,
confounders_effect_on_outcome,
outcome_name,
kappa_y[i],
)
new_estimator = CausalEstimator.get_estimator_object(new_data, target_estimand, estimate)
new_effect = new_estimator.estimate_effect()
refute = CausalRefutation(
estimate.value, new_effect.value, refutation_type="Refute: Add an Unobserved Common Cause"
)
logger.debug(refute)
outcomes[i] = refute.new_effect # Populate the results
refute.new_effect_array = outcomes
refute.new_effect = (np.min(outcomes), np.max(outcomes))
if plotmethod is None:
return refute
import matplotlib
import matplotlib.pyplot as plt
fig = plt.figure(figsize=(6, 5))
left, bottom, width, height = 0.1, 0.1, 0.8, 0.8
ax = fig.add_axes([left, bottom, width, height])
plt.plot(kappa_y, outcomes)
plt.axhline(estimate.value, linestyle="--", color="gray")
ax.set_title("Effect of Unobserved Common Cause")
ax.set_xlabel("Value of Linear Constant on Outcome")
ax.set_ylabel("Estimated Effect after adding the common cause")
plt.show()
return refute
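# Illustrative invocation of direct simulation (a sketch; the kappa ranges are
# hypothetical and exercise the contour/colormesh plotting handled above):
#
#     refutation = model.refute_estimate(
#         identified_estimand,
#         estimate,
#         method_name="add_unobserved_common_cause",
#         simulation_method="direct-simulation",
#         confounders_effect_on_treatment="binary_flip",
#         confounders_effect_on_outcome="linear",
#         effect_strength_on_treatment=np.arange(0.0, 0.05, 0.01),
#         effect_strength_on_outcome=np.arange(0.0, 0.05, 0.01),
#     )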
import copy
import logging
import math
from typing import Dict, List, Optional, Union
import numpy as np
import pandas as pd
import scipy.stats
import statsmodels.api as sm
from sklearn.linear_model import LogisticRegression
from sklearn.preprocessing import StandardScaler
from tqdm.auto import tqdm
from dowhy.causal_estimator import CausalEstimate, CausalEstimator
from dowhy.causal_estimators.econml import Econml
from dowhy.causal_estimators.linear_regression_estimator import LinearRegressionEstimator
from dowhy.causal_estimators.regression_estimator import RegressionEstimator
from dowhy.causal_identifier.identified_estimand import IdentifiedEstimand
from dowhy.causal_refuter import CausalRefutation, CausalRefuter, choose_variables
from dowhy.causal_refuters.evalue_sensitivity_analyzer import EValueSensitivityAnalyzer
from dowhy.causal_refuters.linear_sensitivity_analyzer import LinearSensitivityAnalyzer
from dowhy.causal_refuters.non_parametric_sensitivity_analyzer import NonParametricSensitivityAnalyzer
from dowhy.causal_refuters.partial_linear_sensitivity_analyzer import PartialLinearSensitivityAnalyzer
logger = logging.getLogger(__name__)
DEFAULT_CONVERGENCE_THRESHOLD = 0.1
DEFAULT_C_STAR_MAX = 1000
class AddUnobservedCommonCause(CausalRefuter):
"""Add an unobserved confounder for refutation.
AddUnobservedCommonCause class supports three methods:
1) Simulation of an unobserved confounder
2) Linear partial R2 : Sensitivity Analysis for linear models.
    3) Non-Parametric partial R2 based : Sensitivity Analysis for non-parametric models.
Supports additional parameters that can be specified in the refute_estimate() method.
"""
def __init__(self, *args, **kwargs):
"""
Initialize the parameters required for the refuter.
        For direct-simulation, if effect_strength_on_treatment or effect_strength_on_outcome is not
given, it is calculated automatically as a range between the
minimum and maximum effect strength of observed confounders on treatment
and outcome respectively.
:param simulation_method: The method to use for simulating effect of unobserved confounder. Possible values are ["direct-simulation", "linear-partial-R2", "non-parametric-partial-R2", "e-value"].
:param confounders_effect_on_treatment: str : The type of effect on the treatment due to the unobserved confounder. Possible values are ['binary_flip', 'linear']
:param confounders_effect_on_outcome: str : The type of effect on the outcome due to the unobserved confounder. Possible values are ['binary_flip', 'linear']
:param effect_strength_on_treatment: float, numpy.ndarray: [Used when simulation_method="direct-simulation"] Strength of the confounder's effect on treatment. When confounders_effect_on_treatment is linear, it is the regression coefficient. When the confounders_effect_on_treatment is binary flip, it is the probability with which effect of unobserved confounder can invert the value of the treatment.
:param effect_strength_on_outcome: float, numpy.ndarray: Strength of the confounder's effect on outcome. Its interpretation depends on confounders_effect_on_outcome and the simulation_method. When simulation_method is direct-simulation, for a linear effect it behaves like the regression coefficient and for a binary flip, it is the probability with which it can invert the value of the outcome.
:param partial_r2_confounder_treatment: float, numpy.ndarray: [Used when simulation_method is linear-partial-R2 or non-parametric-partial-R2] Partial R2 of the unobserved confounder wrt the treatment conditioned on the observed confounders. Only in the case of general non-parametric-partial-R2, it is the fraction of variance in the reisz representer that is explained by the unobserved confounder; specifically (1-r), where r is the ratio of variance of reisz representer, alpha^2, based on observed confounders and that based on all confounders.
:param partial_r2_confounder_outcome: float, numpy.ndarray: [Used when simulation_method is linear-partial-R2 or non-parametric-partial-R2] Partial R2 of the unobserved confounder wrt the outcome conditioned on the treatment and observed confounders.
:param frac_strength_treatment: float: This parameter decides the effect strength of the simulated confounder as a fraction of the effect strength of observed confounders on treatment. Defaults to 1.
:param frac_strength_outcome: float: This parameter decides the effect strength of the simulated confounder as a fraction of the effect strength of observed confounders on outcome. Defaults to 1.
        :param plotmethod: string: Type of plot to be shown. If None, no plot is generated. This parameter is used only when more than one treatment confounder effect value or outcome confounder effect value is provided. Default is "colormesh". Supported values are "contour", "colormesh" when more than one value is provided for both confounder effect value parameters; "line" when provided for only one of them.
:param percent_change_estimate: It is the percentage of reduction of treatment estimate that could alter the results (default = 1).
        if percent_change_estimate = 1, the robustness value describes the strength of association of confounders with treatment and outcome needed to reduce the estimate by 100%, i.e., bring it down to 0. (relevant only for Linear Sensitivity Analysis, ignore for rest)
:param confounder_increases_estimate: True implies that confounder increases the absolute value of estimate and vice versa. (Default = False). (relevant only for Linear Sensitivity Analysis, ignore for rest)
:param benchmark_common_causes: names of variables for bounding strength of confounders. (relevant only for partial-r2 based simulation methods)
        :param significance_level: significance level for statistical inference (default = 0.05). (relevant only for partial-r2 based simulation methods)
:param null_hypothesis_effect: assumed effect under the null hypothesis. (relevant only for linear-partial-R2, ignore for rest)
:param plot_estimate: Generate contour plot for estimate while performing sensitivity analysis. (default = True).
(relevant only for partial-r2 based simulation methods)
:param num_splits: number of splits for cross validation. (default = 5). (relevant only for non-parametric-partial-R2 simulation method)
:param shuffle_data : shuffle data or not before splitting into folds (default = False). (relevant only for non-parametric-partial-R2 simulation method)
:param shuffle_random_seed: seed for randomly shuffling data. (relevant only for non-parametric-partial-R2 simulation method)
:param alpha_s_estimator_param_list: list of dictionaries with parameters for finding alpha_s. (relevant only for non-parametric-partial-R2 simulation method)
:param g_s_estimator_list: list of estimator objects for finding g_s. These objects should have fit() and predict() functions implemented. (relevant only for non-parametric-partial-R2 simulation method)
:param g_s_estimator_param_list: list of dictionaries with parameters for tuning respective estimators in "g_s_estimator_list". The order of the dictionaries in the list should be consistent with the estimator objects order in "g_s_estimator_list". (relevant only for non-parametric-partial-R2 simulation method)
"""
super().__init__(*args, **kwargs)
self.simulation_method = kwargs["simulation_method"] if "simulation_method" in kwargs else "direct-simulation"
self.effect_on_t = (
kwargs["confounders_effect_on_treatment"] if "confounders_effect_on_treatment" in kwargs else "binary_flip"
)
self.effect_on_y = (
kwargs["confounders_effect_on_outcome"] if "confounders_effect_on_outcome" in kwargs else "linear"
)
if self.simulation_method == "direct-simulation":
self.kappa_t = kwargs["effect_strength_on_treatment"] if "effect_strength_on_treatment" in kwargs else None
self.kappa_y = kwargs["effect_strength_on_outcome"] if "effect_strength_on_outcome" in kwargs else None
elif self.simulation_method in ["linear-partial-R2", "non-parametric-partial-R2"]:
self.kappa_t = (
kwargs["partial_r2_confounder_treatment"] if "partial_r2_confounder_treatment" in kwargs else None
)
self.kappa_y = (
kwargs["partial_r2_confounder_outcome"] if "partial_r2_confounder_outcome" in kwargs else None
)
elif self.simulation_method == "e-value":
pass
else:
raise ValueError(
"simulation method is not supported. Try direct-simulation, linear-partial-R2, non-parametric-partial-R2, or e-value"
)
self.frac_strength_treatment = (
kwargs["effect_fraction_on_treatment"] if "effect_fraction_on_treatment" in kwargs else 1
)
self.frac_strength_outcome = (
kwargs["effect_fraction_on_outcome"] if "effect_fraction_on_outcome" in kwargs else 1
)
self.plotmethod = kwargs["plotmethod"] if "plotmethod" in kwargs else "colormesh"
self.percent_change_estimate = kwargs["percent_change_estimate"] if "percent_change_estimate" in kwargs else 1.0
self.significance_level = kwargs["significance_level"] if "significance_level" in kwargs else 0.05
self.confounder_increases_estimate = (
kwargs["confounder_increases_estimate"] if "confounder_increases_estimate" in kwargs else False
)
self.benchmark_common_causes = (
kwargs["benchmark_common_causes"] if "benchmark_common_causes" in kwargs else None
)
self.null_hypothesis_effect = kwargs["null_hypothesis_effect"] if "null_hypothesis_effect" in kwargs else 0
self.plot_estimate = kwargs["plot_estimate"] if "plot_estimate" in kwargs else True
self.num_splits = kwargs["num_splits"] if "num_splits" in kwargs else 5
self.shuffle_data = kwargs["shuffle_data"] if "shuffle_data" in kwargs else False
self.shuffle_random_seed = kwargs["shuffle_random_seed"] if "shuffle_random_seed" in kwargs else None
self.alpha_s_estimator_param_list = (
kwargs["alpha_s_estimator_param_list"] if "alpha_s_estimator_param_list" in kwargs else None
)
self.alpha_s_estimator_list = kwargs["alpha_s_estimator_list"] if "alpha_s_estimator_list" in kwargs else None
self.g_s_estimator_list = kwargs["g_s_estimator_list"] if "g_s_estimator_list" in kwargs else None
self.g_s_estimator_param_list = (
kwargs["g_s_estimator_param_list"] if "g_s_estimator_param_list" in kwargs else None
)
self.plugin_reisz = kwargs["plugin_reisz"] if "plugin_reisz" in kwargs else False
self.logger = logging.getLogger(__name__)
def refute_estimate(self, show_progress_bar=False):
if self.simulation_method == "linear-partial-R2":
return sensitivity_linear_partial_r2(
self._data,
self._estimate,
self._treatment_name,
self.frac_strength_treatment,
self.frac_strength_outcome,
self.percent_change_estimate,
self.benchmark_common_causes,
self.significance_level,
self.null_hypothesis_effect,
self.plot_estimate,
)
elif self.simulation_method == "non-parametric-partial-R2":
return sensitivity_non_parametric_partial_r2(
self._estimate,
self.kappa_t,
self.kappa_y,
self.frac_strength_treatment,
self.frac_strength_outcome,
self.benchmark_common_causes,
self.plot_estimate,
self.alpha_s_estimator_list,
self.alpha_s_estimator_param_list,
self.g_s_estimator_list,
self.g_s_estimator_param_list,
self.plugin_reisz,
)
elif self.simulation_method == "e-value":
return sensitivity_e_value(
self._data,
self._target_estimand,
self._estimate,
self._treatment_name,
self._outcome_name,
self.plot_estimate,
)
elif self.simulation_method == "direct-simulation":
refute = sensitivity_simulation(
self._data,
self._target_estimand,
self._estimate,
self._treatment_name,
self._outcome_name,
self.kappa_t,
self.kappa_y,
self.effect_on_t,
self.effect_on_y,
self.frac_strength_treatment,
self.frac_strength_outcome,
self.plotmethod,
show_progress_bar,
)
refute.add_refuter(self)
return refute
def include_simulated_confounder(
self, convergence_threshold=DEFAULT_CONVERGENCE_THRESHOLD, c_star_max=DEFAULT_C_STAR_MAX
):
return include_simulated_confounder(
self._data,
self._treatment_name,
self._outcome_name,
self.kappa_t,
self.kappa_y,
self._variables_of_interest,
convergence_threshold,
c_star_max,
)
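# Illustrative e-value invocation (a sketch via the public API; `model`,
# `identified_estimand` and `estimate` come from the usual DoWhy workflow):
#
#     refutation = model.refute_estimate(
#         identified_estimand,
#         estimate,
#         method_name="add_unobserved_common_cause",
#         simulation_method="e-value",
#     )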
def _infer_default_kappa_t(
data: pd.DataFrame,
target_estimand: IdentifiedEstimand,
treatment_name: List[str],
effect_on_t: str,
frac_strength_treatment: float,
len_kappa_t: int = 10,
):
"""Infer default effect strength of simulated confounder on treatment."""
observed_common_causes_names = target_estimand.get_backdoor_variables()
if len(observed_common_causes_names) > 0:
observed_common_causes = data[observed_common_causes_names]
observed_common_causes = pd.get_dummies(observed_common_causes, drop_first=True)
else:
        raise ValueError(
            "There needs to be at least one common cause to "
            "automatically compute the default value of kappa_t. "
            "Provide a value for kappa_t."
        )
t = data[treatment_name]
# Standardizing the data
observed_common_causes = StandardScaler().fit_transform(observed_common_causes)
if effect_on_t == "binary_flip":
# Fit a model containing all confounders and compare predictions
# using all features compared to all features except a given
# confounder.
tmodel = LogisticRegression().fit(observed_common_causes, t)
tpred = tmodel.predict(observed_common_causes).astype(int)
flips = []
for i in range(observed_common_causes.shape[1]):
oldval = np.copy(observed_common_causes[:, i])
observed_common_causes[:, i] = 0
tcap = tmodel.predict(observed_common_causes).astype(int)
observed_common_causes[:, i] = oldval
flips.append(np.sum(abs(tcap - tpred)) / tpred.shape[0])
min_coeff, max_coeff = min(flips), max(flips)
elif effect_on_t == "linear":
# Estimating the regression coefficient from standardized features to t
corrcoef_var_t = np.corrcoef(observed_common_causes, t, rowvar=False)[-1, :-1]
std_dev_t = np.std(t)[0]
max_coeff = max(corrcoef_var_t) * std_dev_t
min_coeff = min(corrcoef_var_t) * std_dev_t
else:
raise NotImplementedError("'" + effect_on_t + "' method not supported for confounders' effect on treatment")
min_coeff, max_coeff = _compute_min_max_coeff(min_coeff, max_coeff, frac_strength_treatment)
# By default, return a plot with 10 points
# consider 10 values of the effect of the unobserved confounder
step = (max_coeff - min_coeff) / len_kappa_t
logger.info("(Min, Max) kappa_t for observed common causes, ({0}, {1})".format(min_coeff, max_coeff))
if np.equal(max_coeff, min_coeff):
return max_coeff
else:
return np.arange(min_coeff, max_coeff, step)
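# Worked illustration (hypothetical numbers): if the weakest and strongest observed
# confounders yield min_coeff = 0.1 and max_coeff = 0.6 with frac_strength_treatment = 1,
# the refuter sweeps ten candidate strengths:
#
#     step = (0.6 - 0.1) / 10              # 0.05
#     kappas = np.arange(0.1, 0.6, step)   # [0.1, 0.15, ..., 0.55]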
def _compute_min_max_coeff(min_coeff: float, max_coeff: float, effect_strength_fraction: np.ndarray):
max_coeff = effect_strength_fraction * max_coeff
min_coeff = effect_strength_fraction * min_coeff
return min_coeff, max_coeff
def _infer_default_kappa_y(
data: pd.DataFrame,
target_estimand: IdentifiedEstimand,
outcome_name: List[str],
effect_on_y: str,
frac_strength_outcome: float,
len_kappa_y: int = 10,
):
"""Infer default effect strength of simulated confounder on treatment."""
observed_common_causes_names = target_estimand.get_backdoor_variables()
if len(observed_common_causes_names) > 0:
observed_common_causes = data[observed_common_causes_names]
observed_common_causes = pd.get_dummies(observed_common_causes, drop_first=True)
else:
        raise ValueError(
            "There needs to be at least one common cause to "
            "automatically compute the default value of kappa_y. "
            "Provide a value for kappa_y."
        )
y = data[outcome_name]
# Standardizing the data
observed_common_causes = StandardScaler().fit_transform(observed_common_causes)
if effect_on_y == "binary_flip":
# Fit a model containing all confounders and compare predictions
# using all features compared to all features except a given
# confounder.
ymodel = LogisticRegression().fit(observed_common_causes, y)
ypred = ymodel.predict(observed_common_causes).astype(int)
flips = []
for i in range(observed_common_causes.shape[1]):
oldval = np.copy(observed_common_causes[:, i])
observed_common_causes[:, i] = 0
ycap = ymodel.predict(observed_common_causes).astype(int)
observed_common_causes[:, i] = oldval
flips.append(np.sum(abs(ycap - ypred)) / ypred.shape[0])
min_coeff, max_coeff = min(flips), max(flips)
elif effect_on_y == "linear":
corrcoef_var_y = np.corrcoef(observed_common_causes, y, rowvar=False)[-1, :-1]
std_dev_y = np.std(y)[0]
max_coeff = max(corrcoef_var_y) * std_dev_y
min_coeff = min(corrcoef_var_y) * std_dev_y
else:
raise NotImplementedError("'" + effect_on_y + "' method not supported for confounders' effect on outcome")
min_coeff, max_coeff = _compute_min_max_coeff(min_coeff, max_coeff, frac_strength_outcome)
# By default, return a plot with 10 points
# consider 10 values of the effect of the unobserved confounder
step = (max_coeff - min_coeff) / len_kappa_y
logger.info("(Min, Max) kappa_y for observed common causes, ({0}, {1})".format(min_coeff, max_coeff))
if np.equal(max_coeff, min_coeff):
return max_coeff
else:
return np.arange(min_coeff, max_coeff, step)
def _include_confounders_effect(
data: pd.DataFrame,
new_data: pd.DataFrame,
effect_on_t: str,
    treatment_name: List[str],
    kappa_t: float,
    effect_on_y: str,
    outcome_name: List[str],
kappa_y: float,
):
"""
This function deals with the change in the value of the data due to the effect of the unobserved confounder.
In the case of a binary flip, we flip only if the random number is greater than the threshold set.
In the case of a linear effect, we use the variable as the linear regression constant.
:param new_data: pandas.DataFrame: The data to be changed due to the effects of the unobserved confounder.
:param kappa_t: numpy.float64: The value of the threshold for binary_flip or the value of the regression coefficient for linear effect.
:param kappa_y: numpy.float64: The value of the threshold for binary_flip or the value of the regression coefficient for linear effect.
:return: pandas.DataFrame: The DataFrame that includes the effects of the unobserved confounder.
"""
num_rows = data.shape[0]
stdnorm = scipy.stats.norm()
w_random = stdnorm.rvs(num_rows)
if effect_on_t == "binary_flip":
alpha = 2 * kappa_t - 1 if kappa_t >= 0.5 else 1 - 2 * kappa_t
interval = stdnorm.interval(alpha)
rel_interval = interval[0] if kappa_t >= 0.5 else interval[1]
new_data.loc[rel_interval <= w_random, treatment_name] = (
1 - new_data.loc[rel_interval <= w_random, treatment_name]
)
for tname in treatment_name:
if pd.api.types.is_bool_dtype(data[tname]):
new_data = new_data.astype({tname: "bool"}, copy=False)
elif effect_on_t == "linear":
confounder_t_effect = kappa_t * w_random
# By default, we add the effect of simulated confounder for treatment.
# But subtract it from outcome to create a negative correlation
# assuming that the original confounder's effect was positive on both.
# This is to remove the effect of the original confounder.
new_data[treatment_name] = new_data[treatment_name].values + np.ndarray(
shape=(num_rows, 1), buffer=confounder_t_effect
)
else:
raise NotImplementedError("'" + effect_on_t + "' method not supported for confounders' effect on treatment")
if effect_on_y == "binary_flip":
alpha = 2 * kappa_y - 1 if kappa_y >= 0.5 else 1 - 2 * kappa_y
interval = stdnorm.interval(alpha)
rel_interval = interval[0] if kappa_y >= 0.5 else interval[1]
new_data.loc[rel_interval <= w_random, outcome_name] = 1 - new_data.loc[rel_interval <= w_random, outcome_name]
for yname in outcome_name:
if pd.api.types.is_bool_dtype(data[yname]):
new_data = new_data.astype({yname: "bool"}, copy=False)
elif effect_on_y == "linear":
confounder_y_effect = (-1) * kappa_y * w_random
# By default, we add the effect of simulated confounder for treatment.
# But subtract it from outcome to create a negative correlation
# assuming that the original confounder's effect was positive on both.
# This is to remove the effect of the original confounder.
new_data[outcome_name] = new_data[outcome_name].values + np.ndarray(
shape=(num_rows, 1), buffer=confounder_y_effect
)
else:
raise NotImplementedError("'" + effect_on_y + "' method not supported for confounders' effect on outcome")
return new_data
def include_simulated_confounder(
data: pd.DataFrame,
    treatment_name: List[str],
    outcome_name: List[str],
kappa_t: float,
kappa_y: float,
variables_of_interest: List,
convergence_threshold: float = DEFAULT_CONVERGENCE_THRESHOLD,
c_star_max: int = DEFAULT_C_STAR_MAX,
):
"""
This function simulates an unobserved confounder based on the data using the following steps:
1. It calculates the "residuals" from the treatment and outcome model
i.) The outcome model has outcome as the dependent variable and all the observed variables including treatment as independent variables
ii.) The treatment model has treatment as the dependent variable and all the observed variables as independent variables.
2. U is an intermediate random variable drawn from the normal distribution with the weighted average of residuals as mean and a unit variance
U ~ N(c1*d_y + c2*d_t, 1)
where
*d_y and d_t are residuals from the treatment and outcome model
*c1 and c2 are coefficients to the residuals
3. The final U, which is the simulated unobserved confounder is obtained by debiasing the intermediate variable U by residualising it with X
Choosing the coefficients c1 and c2:
The coefficients are chosen based on these basic assumptions:
1. There is a hyperbolic relationship satisfying c1*c2 = c_star
2. c_star is chosen from a range of possible values based on the correlation of the obtained simulated variable with outcome and treatment.
3. The product of correlations with treatment and outcome should be at a minimum distance to the maximum correlations with treatment and outcome in any of the observed confounders
4. The ratio of the weights should be such that they maintain the ratio of the maximum possible observed coefficients within some confidence interval
:param c_star_max: The maximum possible value for the hyperbolic curve on which the coefficients to the residuals lie. It defaults to 1000 in the code if not specified by the user.
:type int
:param convergence_threshold: The threshold to check the plateauing of the correlation while selecting a c_star. It defaults to 0.1 in the code if not specified by the user
:type float
:returns: The simulated values of the unobserved confounder based on the data
:type pandas.core.series.Series
"""
# Obtaining the list of observed variables
required_variables = True
observed_variables = choose_variables(required_variables, variables_of_interest)
observed_variables_with_treatment_and_outcome = observed_variables + treatment_name + outcome_name
# Taking a subset of the dataframe that has only observed variables
data = data[observed_variables_with_treatment_and_outcome]
# Residuals from the outcome model obtained by fitting a linear model
y = data[outcome_name[0]]
observed_variables_with_treatment = observed_variables + treatment_name
X = data[observed_variables_with_treatment]
model = sm.OLS(y, X.astype("float"))
results = model.fit()
residuals_y = y - results.fittedvalues
d_y = list(pd.Series(residuals_y))
# Residuals from the treatment model obtained by fitting a linear model
t = data[treatment_name[0]].astype("int64")
X = data[observed_variables]
model = sm.OLS(t, X)
results = model.fit()
residuals_t = t - results.fittedvalues
d_t = list(pd.Series(residuals_t))
    # Initialising product_cor_metric_observed with a very low value, since we are searching for the maximum
product_cor_metric_observed = -10000000000
for i in observed_variables:
current_obs_confounder = data[i]
outcome_values = data[outcome_name[0]]
correlation_y = current_obs_confounder.corr(outcome_values)
treatment_values = t
correlation_t = current_obs_confounder.corr(treatment_values)
product_cor_metric_current = correlation_y * correlation_t
if product_cor_metric_current >= product_cor_metric_observed:
product_cor_metric_observed = product_cor_metric_current
correlation_t_observed = correlation_t
correlation_y_observed = correlation_y
    # The user has an option to supply effect_strength_on_y and effect_strength_on_t, which are then used instead of the maximum correlation with treatment and outcome among the observed variables, since they specify the desired effect.
if kappa_t is not None:
correlation_t_observed = kappa_t
if kappa_y is not None:
correlation_y_observed = kappa_y
# Choosing a c_star based on the data.
    # The correlations stop increasing after a certain c_star, i.e., the curve plateaus, and we choose c_star to be the value at which it plateaus.
correlation_y_list = []
correlation_t_list = []
product_cor_metric_simulated_list = []
x_list = []
step = int(c_star_max / 10)
for i in range(0, int(c_star_max), step):
c1 = math.sqrt(i)
c2 = c1
final_U = _generate_confounder_from_residuals(c1, c2, d_y, d_t, X)
current_simulated_confounder = final_U
outcome_values = data[outcome_name[0]]
correlation_y = current_simulated_confounder.corr(outcome_values)
correlation_y_list.append(correlation_y)
treatment_values = t
correlation_t = current_simulated_confounder.corr(treatment_values)
correlation_t_list.append(correlation_t)
product_cor_metric_simulated = correlation_y * correlation_t
product_cor_metric_simulated_list.append(product_cor_metric_simulated)
x_list.append(i)
    # Default to the largest candidate so that c_star is always defined,
    # even if no plateau is detected by the loop below
    c_star = x_list[-1]
    index = 1
while index < len(correlation_y_list):
if (correlation_y_list[index] - correlation_y_list[index - 1]) <= convergence_threshold:
c_star = x_list[index]
break
index = index + 1
    # Once c_star is chosen, c1 and c2 are selected on the hyperbolic curve by iterating over combinations of
    # c1 and c2 values and choosing the combination that minimises the distance between the product of
    # correlations of the simulated variable and the product of the maximum correlations of the observed variables,
    # while additionally checking that the ratio of the weights maintains the ratio of the maximum possible
    # observed coefficients within some confidence interval
# c1_final and c2_final are initialised to the values on the hyperbolic curve such that c1_final = c2_final and c1_final*c2_final = c_star
c1_final = math.sqrt(c_star)
c2_final = math.sqrt(c_star)
# initialising min_distance_between_product_cor_metrics to be a value greater than 1
min_distance_between_product_cor_metrics = 1.5
i = 0.05
threshold = c_star / 0.05
while i <= threshold:
c2 = i
c1 = c_star / c2
final_U = _generate_confounder_from_residuals(c1, c2, d_y, d_t, X)
current_simulated_confounder = final_U
outcome_values = data[outcome_name[0]]
correlation_y = current_simulated_confounder.corr(outcome_values)
treatment_values = t
correlation_t = current_simulated_confounder.corr(treatment_values)
product_cor_metric_simulated = correlation_y * correlation_t
if min_distance_between_product_cor_metrics >= abs(product_cor_metric_simulated - product_cor_metric_observed):
min_distance_between_product_cor_metrics = abs(product_cor_metric_simulated - product_cor_metric_observed)
additional_condition = correlation_y_observed / correlation_t_observed
if ((c1 / c2) <= (additional_condition + 0.3 * additional_condition)) and (
(c1 / c2) >= (additional_condition - 0.3 * additional_condition)
): # choose minimum positive value
c1_final = c1
c2_final = c2
i = i * 1.5
"""#closed form solution
print("c_star_max before closed form", c_star_max)
if max_correlation_with_t == -1000:
c2 = 0
c1 = c_star_max
else:
additional_condition = abs(max_correlation_with_y/max_correlation_with_t)
print("additional_condition", additional_condition)
c2 = math.sqrt(c_star_max/additional_condition)
c1 = c_star_max/c2"""
final_U = _generate_confounder_from_residuals(c1_final, c2_final, d_y, d_t, X)
return final_U
def _generate_confounder_from_residuals(c1, c2, d_y, d_t, X):
"""
This function takes the residuals from the treatment and outcome model and their coefficients and simulates the intermediate random variable U by taking
the row wise normal distribution corresponding to each residual value and then debiasing the intermediate variable to get the final variable.
:param c1: coefficient to the residual from the outcome model
:type float
:param c2: coefficient to the residual from the treatment model
:type float
:param d_y: residuals from the outcome model
:type list
:param d_t: residuals from the treatment model
:type list
:returns: The simulated values of the unobserved confounder based on the data
:type pandas.core.series.Series
"""
U = []
for j in range(len(d_t)):
simulated_variable_mean = c1 * d_y[j] + c2 * d_t[j]
simulated_variable_stddev = 1
U.append(np.random.normal(simulated_variable_mean, simulated_variable_stddev, 1))
U = np.array(U)
model = sm.OLS(U, X)
results = model.fit()
U = U.reshape(
-1,
)
    final_U = U - results.fittedvalues.values
    final_U = pd.Series(final_U)
return final_U
def sensitivity_linear_partial_r2(
data: pd.DataFrame,
estimate: CausalEstimate,
treatment_name: str,
frac_strength_treatment: float = 1.0,
frac_strength_outcome: float = 1.0,
percent_change_estimate: float = 1.0,
benchmark_common_causes: Optional[List[str]] = None,
significance_level: Optional[float] = None,
null_hypothesis_effect: Optional[float] = None,
plot_estimate: bool = True,
) -> LinearSensitivityAnalyzer:
"""Add an unobserved confounder for refutation using Linear partial R2 methond (Sensitivity Analysis for linear models).
:param data: pd.DataFrame: Data to run the refutation
:param estimate: CausalEstimate: Estimate to run the refutation
:param treatment_name: str: Name of the treatment
:param frac_strength_treatment: float: This parameter decides the effect strength of the simulated confounder as a fraction of the effect strength of observed confounders on treatment. Defaults to 1.
:param frac_strength_outcome: float: This parameter decides the effect strength of the simulated confounder as a fraction of the effect strength of observed confounders on outcome. Defaults to 1.
:param percent_change_estimate: It is the percentage of reduction of treatment estimate that could alter the results (default = 1).
        if percent_change_estimate = 1, the robustness value describes the strength of association of confounders with treatment and outcome needed to reduce the estimate by 100%, i.e., bring it down to 0. (relevant only for Linear Sensitivity Analysis, ignore for rest)
:param benchmark_common_causes: names of variables for bounding strength of confounders. (relevant only for partial-r2 based simulation methods)
    :param significance_level: significance level for statistical inference (default = 0.05). (relevant only for partial-r2 based simulation methods)
:param null_hypothesis_effect: assumed effect under the null hypothesis. (relevant only for linear-partial-R2, ignore for rest)
:param plot_estimate: Generate contour plot for estimate while performing sensitivity analysis. (default = True).
(relevant only for partial-r2 based simulation methods)
"""
if not (isinstance(estimate.estimator, LinearRegressionEstimator)):
raise NotImplementedError("Currently only LinearRegressionEstimator is supported for Sensitivity Analysis")
if estimate.estimator._effect_modifier_names is not None and len(estimate.estimator._effect_modifier_names) > 0:
raise NotImplementedError("The current implementation does not support effect modifiers")
if frac_strength_outcome == 1:
frac_strength_outcome = frac_strength_treatment
analyzer = LinearSensitivityAnalyzer(
estimator=estimate.estimator,
data=data,
treatment_name=treatment_name,
percent_change_estimate=percent_change_estimate,
significance_level=significance_level,
benchmark_common_causes=benchmark_common_causes,
null_hypothesis_effect=null_hypothesis_effect,
frac_strength_treatment=frac_strength_treatment,
frac_strength_outcome=frac_strength_outcome,
common_causes_order=estimate.estimator._observed_common_causes.columns,
)
analyzer.check_sensitivity(plot=plot_estimate)
return analyzer
def sensitivity_non_parametric_partial_r2(
estimate: CausalEstimate,
kappa_t: Optional[Union[float, np.ndarray]] = None,
kappa_y: Optional[Union[float, np.ndarray]] = None,
frac_strength_treatment: float = 1.0,
frac_strength_outcome: float = 1.0,
benchmark_common_causes: Optional[List[str]] = None,
plot_estimate: bool = True,
alpha_s_estimator_list: Optional[List] = None,
alpha_s_estimator_param_list: Optional[List[Dict]] = None,
g_s_estimator_list: Optional[List] = None,
g_s_estimator_param_list: Optional[List[Dict]] = None,
plugin_reisz: bool = False,
) -> Union[PartialLinearSensitivityAnalyzer, NonParametricSensitivityAnalyzer]:
"""Add an unobserved confounder for refutation using Non-parametric partial R2 methond (Sensitivity Analysis for non-parametric models).
:param estimate: CausalEstimate: Estimate to run the refutation
:param kappa_t: float, numpy.ndarray: Partial R2 of the unobserved confounder wrt the treatment conditioned on the observed confounders. Only in the case of general non-parametric-partial-R2, it is the fraction of variance in the reisz representer that is explained by the unobserved confounder; specifically (1-r), where r is the ratio of variance of reisz representer, alpha^2, based on observed confounders and that based on all confounders.
:param kappa_y: float, numpy.ndarray: Partial R2 of the unobserved confounder wrt the outcome conditioned on the treatment and observed confounders.
:param frac_strength_treatment: float: This parameter decides the effect strength of the simulated confounder as a fraction of the effect strength of observed confounders on treatment. Defaults to 1.
:param frac_strength_outcome: float: This parameter decides the effect strength of the simulated confounder as a fraction of the effect strength of observed confounders on outcome. Defaults to 1.
:param benchmark_common_causes: names of variables for bounding strength of confounders. (relevant only for partial-r2 based simulation methods)
:param plot_estimate: Generate contour plot for estimate while performing sensitivity analysis. (default = True).
(relevant only for partial-r2 based simulation methods)
:param alpha_s_estimator_list: list of estimator objects for estimating alpha_s. These objects should have fit() and predict() methods. (relevant only for non-parametric-partial-R2 method)
:param alpha_s_estimator_param_list: list of dictionaries with parameters for finding alpha_s. (relevant only for non-parametric-partial-R2 simulation method)
:param g_s_estimator_list: list of estimator objects for finding g_s. These objects should have fit() and predict() functions implemented. (relevant only for non-parametric-partial-R2 simulation method)
:param g_s_estimator_param_list: list of dictionaries with parameters for tuning respective estimators in "g_s_estimator_list". The order of the dictionaries in the list should be consistent with the estimator objects order in "g_s_estimator_list". (relevant only for non-parametric-partial-R2 simulation method)
:param plugin_reisz: bool: Flag on whether to use the plugin estimator or the nonparametric estimator for the reisz representer function (alpha_s).
"""
import dowhy.causal_estimators.econml
# If the estimator used is LinearDML, partially linear sensitivity analysis will be automatically chosen
if (
isinstance(estimate.estimator, dowhy.causal_estimators.econml.Econml)
and estimate.estimator.estimator.__class__.__name__ == "LinearDML"
):
analyzer = PartialLinearSensitivityAnalyzer(
estimator=estimate._estimator_object,
observed_common_causes=estimate.estimator._observed_common_causes,
treatment=estimate.estimator._treatment,
outcome=estimate.estimator._outcome,
alpha_s_estimator_param_list=alpha_s_estimator_param_list,
g_s_estimator_list=g_s_estimator_list,
g_s_estimator_param_list=g_s_estimator_param_list,
effect_strength_treatment=kappa_t,
effect_strength_outcome=kappa_y,
benchmark_common_causes=benchmark_common_causes,
frac_strength_treatment=frac_strength_treatment,
frac_strength_outcome=frac_strength_outcome,
)
analyzer.check_sensitivity(plot=plot_estimate)
return analyzer
analyzer = NonParametricSensitivityAnalyzer(
estimator=estimate.estimator,
observed_common_causes=estimate.estimator._observed_common_causes,
treatment=estimate.estimator._treatment,
outcome=estimate.estimator._outcome,
alpha_s_estimator_list=alpha_s_estimator_list,
alpha_s_estimator_param_list=alpha_s_estimator_param_list,
g_s_estimator_list=g_s_estimator_list,
g_s_estimator_param_list=g_s_estimator_param_list,
effect_strength_treatment=kappa_t,
effect_strength_outcome=kappa_y,
benchmark_common_causes=benchmark_common_causes,
frac_strength_treatment=frac_strength_treatment,
frac_strength_outcome=frac_strength_outcome,
theta_s=estimate.value,
plugin_reisz=plugin_reisz,
)
analyzer.check_sensitivity(plot=plot_estimate)
return analyzer
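# --- Hedged usage sketch (added for illustration; not part of the original module) ---
# Non-parametric partial-R2 analysis with user-supplied learners. The gradient-boosting
# learner and the parameter grid below are assumptions for this sketch, not defaults of
# the module; `estimate` is a hypothetical CausalEstimate from an earlier step.
def _example_sensitivity_non_parametric_partial_r2(estimate):
    from sklearn.ensemble import GradientBoostingRegressor

    analyzer = sensitivity_non_parametric_partial_r2(
        estimate=estimate,
        kappa_t=0.3,
        kappa_y=0.3,
        g_s_estimator_list=[GradientBoostingRegressor()],
        g_s_estimator_param_list=[{"n_estimators": [50, 100], "max_depth": [1, 2]}],
        plugin_reisz=True,
        plot_estimate=False,
    )
    return analyzer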
def sensitivity_e_value(
data: pd.DataFrame,
target_estimand: IdentifiedEstimand,
estimate: CausalEstimate,
treatment_name: List[str],
outcome_name: List[str],
plot_estimate: bool = True,
) -> EValueSensitivityAnalyzer:
if not isinstance(estimate.estimator, RegressionEstimator):
raise NotImplementedError("E-Value sensitivity analysis is currently only implemented RegressionEstimator.")
if estimate.estimator._effect_modifier_names is not None and len(estimate.estimator._effect_modifier_names) > 0:
raise NotImplementedError("The current implementation does not support effect modifiers")
analyzer = EValueSensitivityAnalyzer(
estimate=estimate,
estimand=target_estimand,
data=data,
treatment_name=treatment_name[0],
outcome_name=outcome_name[0],
)
analyzer.check_sensitivity(plot=plot_estimate)
return analyzer
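# --- Hedged usage sketch (added for illustration; not part of the original module) ---
# E-value analysis for a regression-based estimate. `df`, `estimand` and `estimate` are
# hypothetical objects from an earlier identification/estimation step; the treatment
# and outcome names are assumptions.
def _example_sensitivity_e_value(df, estimand, estimate):
    analyzer = sensitivity_e_value(
        data=df,
        target_estimand=estimand,
        estimate=estimate,
        treatment_name=["v0"],
        outcome_name=["y"],
        plot_estimate=False,
    )
    return analyzer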
def sensitivity_simulation(
data: pd.DataFrame,
target_estimand: IdentifiedEstimand,
estimate: CausalEstimate,
treatment_name: str,
outcome_name: str,
kappa_t: Optional[Union[float, np.ndarray]] = None,
kappa_y: Optional[Union[float, np.ndarray]] = None,
confounders_effect_on_treatment: str = "binary_flip",
confounders_effect_on_outcome: str = "linear",
frac_strength_treatment: float = 1.0,
frac_strength_outcome: float = 1.0,
plotmethod: Optional[str] = None,
show_progress_bar=False,
**_,
) -> CausalRefutation:
"""
This function attempts to add an unobserved common cause to the outcome and the treatment. At present, it supports
one-dimensional continuous and binary variables. The function can take either single-valued inputs or a range of
inputs; it then looks at the data type of the input and decides on the course of action.
:param data: pd.DataFrame: Data to run the refutation
:param target_estimand: IdentifiedEstimand: Identified estimand to run the refutation
:param estimate: CausalEstimate: Estimate to run the refutation
:param treatment_name: str: Name of the treatment
:param outcome_name: str: Name of the outcome
:param kappa_t: float, numpy.ndarray: Strength of the confounder's effect on treatment. When confounders_effect_on_treatment is linear, it is the regression coefficient. When the confounders_effect_on_treatment is binary flip, it is the probability with which effect of unobserved confounder can invert the value of the treatment.
:param kappa_y: float, numpy.ndarray: Strength of the confounder's effect on outcome. Its interpretation depends on confounders_effect_on_outcome and the simulation_method. When simulation_method is direct-simulation, for a linear effect it behaves like the regression coefficient and for a binary flip, it is the probability with which it can invert the value of the outcome.
:param confounders_effect_on_treatment: str : The type of effect on the treatment due to the unobserved confounder. Possible values are ['binary_flip', 'linear']
:param confounders_effect_on_outcome: str : The type of effect on the outcome due to the unobserved confounder. Possible values are ['binary_flip', 'linear']
:param frac_strength_treatment: float: This parameter decides the effect strength of the simulated confounder as a fraction of the effect strength of observed confounders on treatment. Defaults to 1.
:param frac_strength_outcome: float: This parameter decides the effect strength of the simulated confounder as a fraction of the effect strength of observed confounders on outcome. Defaults to 1.
:param plotmethod: string: Type of plot to be shown. If None, no plot is generated. This parameter is used only when more than one treatment confounder effect value or outcome confounder effect value is provided. Default is "colormesh". Supported values are "contour", "colormesh" when more than one value is provided for both confounder effect value parameters; "line" when provided for only one of them.
:return: CausalRefutation: An object that contains the estimated effect, the new effect, and the name of the refutation used.
"""
if kappa_t is None:
kappa_t = _infer_default_kappa_t(
data, target_estimand, treatment_name, confounders_effect_on_treatment, frac_strength_treatment
)
if kappa_y is None:
kappa_y = _infer_default_kappa_y(
data, target_estimand, outcome_name, confounders_effect_on_outcome, frac_strength_outcome
)
if not isinstance(kappa_t, (list, np.ndarray)) and not isinstance(
kappa_y, (list, np.ndarray)
): # Deal with single value inputs
new_data = copy.deepcopy(data)
new_data = _include_confounders_effect(
data,
new_data,
confounders_effect_on_treatment,
treatment_name,
kappa_t,
confounders_effect_on_outcome,
outcome_name,
kappa_y,
)
new_estimator = estimate.estimator.get_new_estimator_object(target_estimand)
new_estimator.fit(
new_data,
target_estimand.treatment_variable,
target_estimand.outcome_variable,
estimate.estimator._effect_modifier_names,
**new_estimator._econml_fit_params if isinstance(new_estimator, Econml) else {},
)
new_effect = new_estimator.estimate_effect(
control_value=estimate.control_value,
treatment_value=estimate.treatment_value,
target_units=estimate.estimator._target_units,
)
refute = CausalRefutation(
estimate.value, new_effect.value, refutation_type="Refute: Add an Unobserved Common Cause"
)
refute.new_effect_array = np.array(new_effect.value)
refute.new_effect = new_effect.value
return refute
else: # Deal with multiple value inputs
if isinstance(kappa_t, (list, np.ndarray)) and isinstance(
kappa_y, (list, np.ndarray)
): # Deal with range inputs
# Get a 2D matrix of values
# x, y = np.meshgrid(kappa_t, kappa_y)  # x, y are both MxN
results_matrix = np.random.rand(len(kappa_t), len(kappa_y))  # Matrix to hold all the results (len(kappa_t) x len(kappa_y))
orig_data = copy.deepcopy(data)
for i in tqdm(
range(len(kappa_t)),
colour=CausalRefuter.PROGRESS_BAR_COLOR,
disable=not show_progress_bar,
desc="Refuting Estimates: ",
):
for j in range(len(kappa_y)):
new_data = _include_confounders_effect(
data,
orig_data,
confounders_effect_on_treatment,
treatment_name,
kappa_t[i],
confounders_effect_on_outcome,
outcome_name,
kappa_y[j],
)
new_estimator = estimate.estimator.get_new_estimator_object(target_estimand)
new_estimator.fit(
new_data,
target_estimand.treatment_variable,
target_estimand.outcome_variable,
estimate.estimator._effect_modifier_names,
**new_estimator._econml_fit_params if isinstance(new_estimator, Econml) else {},
)
new_effect = new_estimator.estimate_effect(
control_value=estimate.control_value,
treatment_value=estimate.treatment_value,
target_units=estimate.estimator._target_units,
)
refute = CausalRefutation(
estimate.value,
new_effect.value,
refutation_type="Refute: Add an Unobserved Common Cause",
)
results_matrix[i][j] = refute.new_effect # Populate the results
refute.new_effect_array = results_matrix
refute.new_effect = (np.min(results_matrix), np.max(results_matrix))
# Store the values into the refute object
if plotmethod is None:
return refute
import matplotlib
import matplotlib.pyplot as plt
fig = plt.figure(figsize=(6, 5))
left, bottom, width, height = 0.1, 0.1, 0.8, 0.8
ax = fig.add_axes([left, bottom, width, height])
oe = estimate.value
contour_levels = [oe / 4.0, oe / 2.0, (3.0 / 4) * oe, oe]
contour_levels.extend([0, np.min(results_matrix), np.max(results_matrix)])
if plotmethod == "contour":
cp = plt.contourf(kappa_y, kappa_t, results_matrix, levels=sorted(contour_levels))
# Adding a label on the contour line for the original estimate
fmt = {}
trueeffect_index = np.where(cp.levels == oe)[0][0]
fmt[cp.levels[trueeffect_index]] = "Estimated Effect"
# Label the contour level corresponding to the original estimate
plt.clabel(cp, [cp.levels[trueeffect_index]], inline=True, fmt=fmt)
plt.colorbar(cp)
elif plotmethod == "colormesh":
cp = plt.pcolormesh(kappa_y, kappa_t, results_matrix, shading="nearest")
plt.colorbar(cp, ticks=contour_levels)
ax.yaxis.set_ticks(kappa_t)
ax.xaxis.set_ticks(kappa_y)
plt.xticks(rotation=45)
ax.set_title("Effect of Unobserved Common Cause")
ax.set_ylabel("Value of Linear Constant on Treatment")
ax.set_xlabel("Value of Linear Constant on Outcome")
plt.show()
return refute
elif isinstance(kappa_t, (list, np.ndarray)):
outcomes = np.random.rand(len(kappa_t))
orig_data = copy.deepcopy(data)
for i in tqdm(
range(0, len(kappa_t)),
colour=CausalRefuter.PROGRESS_BAR_COLOR,
disable=not show_progress_bar,
desc="Refuting Estimates: ",
):
new_data = _include_confounders_effect(
data,
orig_data,
confounders_effect_on_treatment,
treatment_name,
kappa_t[i],
confounders_effect_on_outcome,
outcome_name,
kappa_y,
)
new_estimator = estimate.estimator.get_new_estimator_object(target_estimand)
new_estimator.fit(
new_data,
target_estimand.treatment_variable,
target_estimand.outcome_variable,
estimate.estimator._effect_modifier_names,
**new_estimator._econml_fit_params if isinstance(new_estimator, Econml) else {},
)
new_effect = new_estimator.estimate_effect(
control_value=estimate.control_value,
treatment_value=estimate.treatment_value,
target_units=estimate.estimator._target_units,
)
refute = CausalRefutation(
estimate.value, new_effect.value, refutation_type="Refute: Add an Unobserved Common Cause"
)
logger.debug(refute)
outcomes[i] = refute.new_effect # Populate the results
refute.new_effect_array = outcomes
refute.new_effect = (np.min(outcomes), np.max(outcomes))
if plotmethod is None:
return refute
import matplotlib
import matplotlib.pyplot as plt
fig = plt.figure(figsize=(6, 5))
left, bottom, width, height = 0.1, 0.1, 0.8, 0.8
ax = fig.add_axes([left, bottom, width, height])
plt.plot(kappa_t, outcomes)
plt.axhline(estimate.value, linestyle="--", color="gray")
ax.set_title("Effect of Unobserved Common Cause")
ax.set_xlabel("Value of Linear Constant on Treatment")
ax.set_ylabel("Estimated Effect after adding the common cause")
plt.show()
return refute
elif isinstance(kappa_y, (list, np.ndarray)):
outcomes = np.random.rand(len(kappa_y))
orig_data = copy.deepcopy(data)
for i in tqdm(
range(0, len(kappa_y)),
colour=CausalRefuter.PROGRESS_BAR_COLOR,
disable=not show_progress_bar,
desc="Refuting Estimates: ",
):
new_data = _include_confounders_effect(
data,
orig_data,
confounders_effect_on_treatment,
treatment_name,
kappa_t,
confounders_effect_on_outcome,
outcome_name,
kappa_y[i],
)
new_estimator = estimate.estimator.get_new_estimator_object(target_estimand)
new_estimator.fit(
new_data,
target_estimand.treatment_variable,
target_estimand.outcome_variable,
estimate.estimator._effect_modifier_names,
**new_estimator._econml_fit_params if isinstance(new_estimator, Econml) else {},
)
new_effect = new_estimator.estimate_effect(
control_value=estimate.control_value,
treatment_value=estimate.treatment_value,
target_units=estimate.estimator._target_units,
)
refute = CausalRefutation(
estimate.value, new_effect.value, refutation_type="Refute: Add an Unobserved Common Cause"
)
logger.debug(refute)
outcomes[i] = refute.new_effect # Populate the results
refute.new_effect_array = outcomes
refute.new_effect = (np.min(outcomes), np.max(outcomes))
if plotmethod is None:
return refute
import matplotlib
import matplotlib.pyplot as plt
fig = plt.figure(figsize=(6, 5))
left, bottom, width, height = 0.1, 0.1, 0.8, 0.8
ax = fig.add_axes([left, bottom, width, height])
plt.plot(kappa_y, outcomes)
plt.axhline(estimate.value, linestyle="--", color="gray")
ax.set_title("Effect of Unobserved Common Cause")
ax.set_xlabel("Value of Linear Constant on Outcome")
ax.set_ylabel("Estimated Effect after adding the common cause")
plt.show()
return refute
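# --- Hedged usage sketch (added for illustration; not part of the original module) ---
# Simulation-based sensitivity analysis over a grid of confounder strengths. The kappa
# ranges and variable names are arbitrary assumptions; `df`, `estimand` and `estimate`
# come from a hypothetical earlier workflow.
def _example_sensitivity_simulation(df, estimand, estimate):
    refute = sensitivity_simulation(
        data=df,
        target_estimand=estimand,
        estimate=estimate,
        treatment_name="v0",
        outcome_name="y",
        kappa_t=np.arange(0.0, 0.05, 0.01),
        kappa_y=np.arange(0.0, 0.05, 0.01),
        confounders_effect_on_treatment="binary_flip",
        confounders_effect_on_outcome="linear",
        plotmethod="colormesh",
    )
    return refute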
| andresmor-ms | 11c4e0dafd6e824eb81ad14262457d954ae61468 | affe0952f4aba6845247355c171565510c2c1673 | add fit call for all estimators. | amit-sharma | 253 |
py-why/dowhy | 746 | Functional api/causal estimators | * Introduce `fit()` method to estimators.
* Refactor constructors to avoid using `*args` and `**kwargs` and have more explicit parameters.
* Refactor refuters and other parts of the code to use `fit()` and modify arguments to `estimate_effect()` | null | 2022-11-04 16:15:39+00:00 | 2022-12-03 17:07:53+00:00 | dowhy/causal_refuters/bootstrap_refuter.py | import logging
import random
from typing import List, Optional, Union
import numpy as np
import pandas as pd
from joblib import Parallel, delayed
from sklearn.utils import resample
from tqdm.auto import tqdm
from dowhy.causal_estimator import CausalEstimate, CausalEstimator
from dowhy.causal_identifier.identified_estimand import IdentifiedEstimand
from dowhy.causal_refuter import CausalRefutation, CausalRefuter, choose_variables, test_significance
logger = logging.getLogger(__name__)
class BootstrapRefuter(CausalRefuter):
"""
Refute an estimate by running it on a random sample of the data containing measurement error in the
confounders. This allows us to assess the ability of the estimator to recover the effect of the
treatment on the outcome.
It supports additional parameters that can be specified in the refute_estimate() method.
:param num_simulations: The number of simulations to be run, ``CausalRefuter.DEFAULT_NUM_SIMULATIONS`` by default
:type num_simulations: int, optional
:param sample_size: The size of each bootstrap sample; by default, the size of the original data
:type sample_size: int, optional
:param required_variables: The list of variables to be used as the input for ``y~f(W)``
This is ``True`` by default, which in turn selects all variables leaving the treatment and the outcome
:type required_variables: int, list, bool, optional
1. An integer argument refers to how many variables will be used for estimating the value of the outcome
2. A list explicitly refers to which variables will be used to estimate the outcome
Furthermore, it gives the ability to explicitly select or deselect the covariates present in the estimation of the
outcome. This is done by either adding or explicitly removing variables from the list as shown below:
.. note::
* We need to pass required_variables = ``[W0,W1]`` if we want ``W0`` and ``W1``.
* We need to pass required_variables = ``[-W0,-W1]`` if we want all variables excluding ``W0`` and ``W1``.
3. If the value is True, we wish to include all variables to estimate the value of the outcome.
.. warning:: A ``False`` value is ``INVALID`` and will result in an ``error``.
:param noise: The standard deviation of the noise to be added to the data, ``BootstrapRefuter.DEFAULT_STD_DEV`` by default
:type noise: float, optional
:param probability_of_change: It specifies the probability with which we change the data for a boolean or categorical variable.
It is ``noise`` by default, only if the value of ``noise`` is less than 1.
:type probability_of_change: float, optional
:param random_state: The seed value to be added if we wish to repeat the same random behavior. For this purpose, we repeat the same seed in the pseudo-random generator.
:type random_state: int, RandomState, optional
"""
DEFAULT_STD_DEV = 0.1
DEFAULT_SUCCESS_PROBABILITY = 0.5
DEFAULT_NUMBER_OF_TRIALS = 1
def __init__(self, *args, **kwargs):
super().__init__(*args, **kwargs)
self._num_simulations = kwargs.pop("num_simulations", CausalRefuter.DEFAULT_NUM_SIMULATIONS)
self._sample_size = kwargs.pop("sample_size", len(self._data))
self._required_variables = kwargs.pop("required_variables", True)
self._noise = kwargs.pop("noise", BootstrapRefuter.DEFAULT_STD_DEV)
self._probability_of_change = kwargs.pop("probability_of_change", None)
self._random_state = kwargs.pop("random_state", None)
self.logger = logging.getLogger(__name__)
def refute_estimate(self, show_progress_bar: bool = False, *args, **kwargs):
refute = refute_bootstrap(
data=self._data,
target_estimand=self._target_estimand,
estimate=self._estimate,
num_simulations=self._num_simulations,
random_state=self._random_state,
sample_size=self._sample_size,
required_variables=self._required_variables,
noise=self._noise,
probability_of_change=self._probability_of_change,
show_progress_bar=show_progress_bar,
n_jobs=self._n_jobs,
verbose=self._verbose,
)
refute.add_refuter(self)
return refute
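# --- Hedged usage sketch (added for illustration; not part of the original module) ---
# The class-based entry point is normally reached through CausalModel.refute_estimate
# with method_name="bootstrap_refuter"; the direct construction below, including the
# constructor argument names, is an assumption for this sketch.
def _example_bootstrap_refuter_class(df, estimand, estimate):
    refuter = BootstrapRefuter(df, identified_estimand=estimand, estimate=estimate, num_simulations=50)
    return refuter.refute_estimate()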
def _refute_once(
data: pd.DataFrame,
target_estimand: IdentifiedEstimand,
estimate: CausalEstimate,
chosen_variables: Optional[List] = None,
random_state: Optional[Union[int, np.random.RandomState]] = None,
sample_size: Optional[int] = None,
noise: float = 0.1,
probability_of_change: Optional[float] = None,
):
if random_state is None:
new_data = resample(data, n_samples=sample_size)
else:
new_data = resample(data, n_samples=sample_size, random_state=random_state)
if chosen_variables is not None:
for variable in chosen_variables:
if ("float" or "int") in new_data[variable].dtype.name:
scaling_factor = new_data[variable].std()
new_data[variable] += np.random.normal(loc=0.0, scale=noise * scaling_factor, size=sample_size)
elif "bool" in new_data[variable].dtype.name:
probs = np.random.uniform(0, 1, sample_size)
new_data[variable] = np.where(
probs < probability_of_change, np.logical_not(new_data[variable]), new_data[variable]
)
elif "category" in new_data[variable].dtype.name:
categories = new_data[variable].unique()
# Find the set difference for each row
changed_data = new_data[variable].apply(lambda row: list(set(categories) - set([row])))
# Choose one out of the remaining
changed_data = changed_data.apply(lambda row: random.choice(row))
new_data[variable] = np.where(probs < probability_of_change, changed_data)
new_data[variable].astype("category")
new_estimator = CausalEstimator.get_estimator_object(new_data, target_estimand, estimate)
new_effect = new_estimator.estimate_effect()
return new_effect.value
def refute_bootstrap(
data: pd.DataFrame,
target_estimand: IdentifiedEstimand,
estimate: CausalEstimate,
num_simulations: int = 100,
random_state: Optional[Union[int, np.random.RandomState]] = None,
sample_size: Optional[int] = None,
required_variables: bool = True,
noise: float = 0.1,
probability_of_change: Optional[float] = None,
show_progress_bar: bool = False,
n_jobs: int = 1,
verbose: int = 0,
**_,
) -> CausalRefutation:
"""Refute an estimate by running it on a random sample of the data containing measurement error in the
confounders. This allows us to assess the ability of the estimator to recover the effect of the
treatment on the outcome.
:param data: pd.DataFrame: Data to run the refutation
:param target_estimand: IdentifiedEstimand: Identified estimand to run the refutation
:param estimate: CausalEstimate: Estimate to run the refutation
:param num_simulations: The number of simulations to be run, ``CausalRefuter.DEFAULT_NUM_SIMULATIONS`` by default
:param random_state: The seed value to be added if we wish to repeat the same random behavior. For this purpose, we repeat the same seed in the pseudo-random generator.
:param sample_size: The size of each bootstrap sample; by default, the size of the original data
:param required_variables: The list of variables to be used as the input for ``y~f(W)``
This is ``True`` by default, which in turn selects all variables leaving the treatment and the outcome
1. An integer argument refers to how many variables will be used for estimating the value of the outcome
2. A list explicitly refers to which variables will be used to estimate the outcome
Furthermore, it gives the ability to explicitly select or deselect the covariates present in the estimation of the
outcome. This is done by either adding or explicitly removing variables from the list as shown below:
.. note::
* We need to pass required_variables = ``[W0,W1]`` if we want ``W0`` and ``W1``.
* We need to pass required_variables = ``[-W0,-W1]`` if we want all variables excluding ``W0`` and ``W1``.
3. If the value is True, we wish to include all variables to estimate the value of the outcome.
.. warning:: A ``False`` value is ``INVALID`` and will result in an ``error``.
:param noise: The standard deviation of the noise to be added to the data, ``BootstrapRefuter.DEFAULT_STD_DEV`` by default
:param probability_of_change: It specifies the probability with which we change the data for a boolean or categorical variable.
It is ``noise`` by default, only if the value of ``noise`` is less than 1.
:param n_jobs: The maximum number of concurrently running jobs. If -1 all CPUs are used. If 1 is given, no parallel computing code is used at all (this is the default).
:param verbose: The verbosity level: if non-zero, progress messages are printed. Above 50, the output is sent to stdout. The frequency of the messages increases with the verbosity level. If it is more than 10, all iterations are reported. The default is 0.
"""
if sample_size is None:
sample_size = len(data)
chosen_variables = choose_variables(
required_variables,
target_estimand.get_backdoor_variables()
+ target_estimand.instrumental_variables
+ estimate.params["effect_modifiers"],
)
if chosen_variables is None:
logger.info("INFO: There are no chosen variables")
else:
logger.info("INFO: The chosen variables are: " + ",".join(chosen_variables))
if probability_of_change is None and noise > 1:
logger.error("Error in using noise:{} for Binary Flip. The value is greater than 1".format(noise))
raise ValueError("The value for Binary Flip cannot be greater than 1")
elif probability_of_change is None and noise <= 1:
probability_of_change = noise
elif probability_of_change > 1:
logger.error(
"The probability of flip is: {}, However, this value cannot be greater than 1".format(probability_of_change)
)
raise ValueError("Probability of Flip cannot be greater than 1")
if sample_size > len(data):
logger.warning("The sample size is larger than the population size")
logger.info("Refutation over {} simulated datasets of size {} each".format(num_simulations, sample_size))
sample_estimates = Parallel(n_jobs=n_jobs, verbose=verbose)(
delayed(_refute_once)(
data, target_estimand, estimate, chosen_variables, random_state, sample_size, noise, probability_of_change
)
for _ in tqdm(
range(num_simulations),
colour=CausalRefuter.PROGRESS_BAR_COLOR,
disable=not show_progress_bar,
desc="Refuting Estimates: ",
)
)
sample_estimates = np.array(sample_estimates)
refute = CausalRefutation(
estimate.value, np.mean(sample_estimates), refutation_type="Refute: Bootstrap Sample Dataset"
)
# We want to see if the estimate falls in the same distribution as the one generated by the refuter
# Ideally that should be the case as running bootstrap should not have a significant effect on the ability
# of the treatment to affect the outcome
refute.add_significance_test_results(test_significance(estimate, sample_estimates))
return refute
| import logging
import random
from typing import List, Optional, Union
import numpy as np
import pandas as pd
from joblib import Parallel, delayed
from sklearn.utils import resample
from tqdm.auto import tqdm
from dowhy.causal_estimator import CausalEstimate, CausalEstimator
from dowhy.causal_estimators.econml import Econml
from dowhy.causal_identifier.identified_estimand import IdentifiedEstimand
from dowhy.causal_refuter import CausalRefutation, CausalRefuter, choose_variables, test_significance
logger = logging.getLogger(__name__)
class BootstrapRefuter(CausalRefuter):
"""
Refute an estimate by running it on a random sample of the data containing measurement error in the
confounders. This allows us to assess the ability of the estimator to recover the effect of the
treatment on the outcome.
It supports additional parameters that can be specified in the refute_estimate() method.
:param num_simulations: The number of simulations to be run, ``CausalRefuter.DEFAULT_NUM_SIMULATIONS`` by default
:type num_simulations: int, optional
:param sample_size: The size of each bootstrap sample; by default, the size of the original data
:type sample_size: int, optional
:param required_variables: The list of variables to be used as the input for ``y~f(W)``
This is ``True`` by default, which in turn selects all variables leaving the treatment and the outcome
:type required_variables: int, list, bool, optional
1. An integer argument refers to how many variables will be used for estimating the value of the outcome
2. A list explicitly refers to which variables will be used to estimate the outcome
Furthermore, it gives the ability to explicitly select or deselect the covariates present in the estimation of the
outcome. This is done by either adding or explicitly removing variables from the list as shown below:
.. note::
* We need to pass required_variables = ``[W0,W1]`` if we want ``W0`` and ``W1``.
* We need to pass required_variables = ``[-W0,-W1]`` if we want all variables excluding ``W0`` and ``W1``.
3. If the value is True, we wish to include all variables to estimate the value of the outcome.
.. warning:: A ``False`` value is ``INVALID`` and will result in an ``error``.
:param noise: The standard deviation of the noise to be added to the data, ``BootstrapRefuter.DEFAULT_STD_DEV`` by default
:type noise: float, optional
:param probability_of_change: It specifies the probability with which we change the data for a boolean or categorical variable.
It is ``noise`` by default, only if the value of ``noise`` is less than 1.
:type probability_of_change: float, optional
:param random_state: The seed value to be added if we wish to repeat the same random behavior. For this purpose, we repeat the same seed in the pseudo-random generator.
:type random_state: int, RandomState, optional
"""
DEFAULT_STD_DEV = 0.1
DEFAULT_SUCCESS_PROBABILITY = 0.5
DEFAULT_NUMBER_OF_TRIALS = 1
def __init__(self, *args, **kwargs):
super().__init__(*args, **kwargs)
self._num_simulations = kwargs.pop("num_simulations", CausalRefuter.DEFAULT_NUM_SIMULATIONS)
self._sample_size = kwargs.pop("sample_size", len(self._data))
self._required_variables = kwargs.pop("required_variables", True)
self._noise = kwargs.pop("noise", BootstrapRefuter.DEFAULT_STD_DEV)
self._probability_of_change = kwargs.pop("probability_of_change", None)
self._random_state = kwargs.pop("random_state", None)
self.logger = logging.getLogger(__name__)
def refute_estimate(self, show_progress_bar: bool = False, *args, **kwargs):
refute = refute_bootstrap(
data=self._data,
target_estimand=self._target_estimand,
estimate=self._estimate,
num_simulations=self._num_simulations,
random_state=self._random_state,
sample_size=self._sample_size,
required_variables=self._required_variables,
noise=self._noise,
probability_of_change=self._probability_of_change,
show_progress_bar=show_progress_bar,
n_jobs=self._n_jobs,
verbose=self._verbose,
)
refute.add_refuter(self)
return refute
def _refute_once(
data: pd.DataFrame,
target_estimand: IdentifiedEstimand,
estimate: CausalEstimate,
chosen_variables: Optional[List] = None,
random_state: Optional[Union[int, np.random.RandomState]] = None,
sample_size: Optional[int] = None,
noise: float = 0.1,
probability_of_change: Optional[float] = None,
):
if random_state is None:
new_data = resample(data, n_samples=sample_size)
else:
new_data = resample(data, n_samples=sample_size, random_state=random_state)
if chosen_variables is not None:
for variable in chosen_variables:
if ("float" or "int") in new_data[variable].dtype.name:
scaling_factor = new_data[variable].std()
new_data[variable] += np.random.normal(loc=0.0, scale=noise * scaling_factor, size=sample_size)
elif "bool" in new_data[variable].dtype.name:
probs = np.random.uniform(0, 1, sample_size)
new_data[variable] = np.where(
probs < probability_of_change, np.logical_not(new_data[variable]), new_data[variable]
)
elif "category" in new_data[variable].dtype.name:
categories = new_data[variable].unique()
# Find the set difference for each row
changed_data = new_data[variable].apply(lambda row: list(set(categories) - set([row])))
# Choose one out of the remaining
changed_data = changed_data.apply(lambda row: random.choice(row))
new_data[variable] = np.where(probs < probability_of_change, changed_data)
new_data[variable].astype("category")
new_estimator = estimate.estimator.get_new_estimator_object(target_estimand)
new_estimator.fit(
new_data,
target_estimand.treatment_variable,
target_estimand.outcome_variable,
estimate.estimator._effect_modifier_names,
**new_estimator._econml_fit_params if isinstance(new_estimator, Econml) else {},
)
new_effect = new_estimator.estimate_effect(
control_value=estimate.control_value,
treatment_value=estimate.treatment_value,
target_units=estimate.estimator._target_units,
)
return new_effect.value
def refute_bootstrap(
data: pd.DataFrame,
target_estimand: IdentifiedEstimand,
estimate: CausalEstimate,
num_simulations: int = 100,
random_state: Optional[Union[int, np.random.RandomState]] = None,
sample_size: Optional[int] = None,
required_variables: bool = True,
noise: float = 0.1,
probability_of_change: Optional[float] = None,
show_progress_bar: bool = False,
n_jobs: int = 1,
verbose: int = 0,
**_,
) -> CausalRefutation:
"""Refute an estimate by running it on a random sample of the data containing measurement error in the
confounders. This allows us to assess the ability of the estimator to recover the effect of the
treatment on the outcome.
:param data: pd.DataFrame: Data to run the refutation
:param target_estimand: IdentifiedEstimand: Identified estimand to run the refutation
:param estimate: CausalEstimate: Estimate to run the refutation
:param num_simulations: The number of simulations to be run, ``CausalRefuter.DEFAULT_NUM_SIMULATIONS`` by default
:param random_state: The seed value to be added if we wish to repeat the same random behavior. For this purpose, we repeat the same seed in the pseudo-random generator.
:param sample_size: The size of each bootstrap sample; by default, the size of the original data
:param required_variables: The list of variables to be used as the input for ``y~f(W)``
This is ``True`` by default, which in turn selects all variables leaving the treatment and the outcome
1. An integer argument refers to how many variables will be used for estimating the value of the outcome
2. A list explicitly refers to which variables will be used to estimate the outcome
Furthermore, it gives the ability to explicitly select or deselect the covariates present in the estimation of the
outcome. This is done by either adding or explicitly removing variables from the list as shown below:
.. note::
* We need to pass required_variables = ``[W0,W1]`` if we want ``W0`` and ``W1``.
* We need to pass required_variables = ``[-W0,-W1]`` if we want all variables excluding ``W0`` and ``W1``.
3. If the value is True, we wish to include all variables to estimate the value of the outcome.
.. warning:: A ``False`` value is ``INVALID`` and will result in an ``error``.
:param noise: The standard deviation of the noise to be added to the data, ``BootstrapRefuter.DEFAULT_STD_DEV`` by default
:param probability_of_change: It specifies the probability with which we change the data for a boolean or categorical variable.
It is ``noise`` by default, only if the value of ``noise`` is less than 1.
:param n_jobs: The maximum number of concurrently running jobs. If -1 all CPUs are used. If 1 is given, no parallel computing code is used at all (this is the default).
:param verbose: The verbosity level: if non-zero, progress messages are printed. Above 50, the output is sent to stdout. The frequency of the messages increases with the verbosity level. If it is more than 10, all iterations are reported. The default is 0.
"""
if sample_size is None:
sample_size = len(data)
chosen_variables = choose_variables(
required_variables,
target_estimand.get_backdoor_variables()
+ target_estimand.instrumental_variables
+ estimate.estimator._effect_modifier_names,
)
if chosen_variables is None:
logger.info("INFO: There are no chosen variables")
else:
logger.info("INFO: The chosen variables are: " + ",".join(chosen_variables))
if probability_of_change is None and noise > 1:
logger.error("Error in using noise:{} for Binary Flip. The value is greater than 1".format(noise))
raise ValueError("The value for Binary Flip cannot be greater than 1")
elif probability_of_change is None and noise <= 1:
probability_of_change = noise
elif probability_of_change > 1:
logger.error(
"The probability of flip is: {}, However, this value cannot be greater than 1".format(probability_of_change)
)
raise ValueError("Probability of Flip cannot be greater than 1")
if sample_size > len(data):
logger.warning("The sample size is larger than the population size")
logger.info("Refutation over {} simulated datasets of size {} each".format(num_simulations, sample_size))
sample_estimates = Parallel(n_jobs=n_jobs, verbose=verbose)(
delayed(_refute_once)(
data, target_estimand, estimate, chosen_variables, random_state, sample_size, noise, probability_of_change
)
for _ in tqdm(
range(num_simulations),
colour=CausalRefuter.PROGRESS_BAR_COLOR,
disable=not show_progress_bar,
desc="Refuting Estimates: ",
)
)
sample_estimates = np.array(sample_estimates)
refute = CausalRefutation(
estimate.value, np.mean(sample_estimates), refutation_type="Refute: Bootstrap Sample Dataset"
)
# We want to see if the estimate falls in the same distribution as the one generated by the refuter
# Ideally that should be the case as running bootstrap should not have a significant effect on the ability
# of the treatment to affect the outcome
refute.add_significance_test_results(test_significance(estimate, sample_estimates))
return refute
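# --- Hedged usage sketch (added for illustration; not part of the original module) ---
# Calling the functional refuter directly under the new fit()/estimate_effect() API
# introduced by this PR. `df`, `estimand` and `estimate` are hypothetical outputs of a
# prior identification/estimation step.
def _example_refute_bootstrap(df, estimand, estimate):
    result = refute_bootstrap(
        data=df,
        target_estimand=estimand,
        estimate=estimate,
        num_simulations=100,
        sample_size=len(df),
        show_progress_bar=True,
    )
    return result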
| andresmor-ms | 11c4e0dafd6e824eb81ad14262457d954ae61468 | affe0952f4aba6845247355c171565510c2c1673 | I think we should have an explicit fit method here, to be consistent with the new API. | amit-sharma | 254 |
py-why/dowhy | 746 | Functional api/causal estimators | * Introduce `fit()` method to estimators.
* Refactor constructors to avoid using `*args` and `**kwargs` and have more explicit parameters.
* Refactor refuters and other parts of the code to use `fit()` and modify arguments to `estimate_effect()` | null | 2022-11-04 16:15:39+00:00 | 2022-12-03 17:07:53+00:00 | dowhy/causal_refuters/bootstrap_refuter.py | import logging
import random
from typing import List, Optional, Union
import numpy as np
import pandas as pd
from joblib import Parallel, delayed
from sklearn.utils import resample
from tqdm.auto import tqdm
from dowhy.causal_estimator import CausalEstimate, CausalEstimator
from dowhy.causal_identifier.identified_estimand import IdentifiedEstimand
from dowhy.causal_refuter import CausalRefutation, CausalRefuter, choose_variables, test_significance
logger = logging.getLogger(__name__)
class BootstrapRefuter(CausalRefuter):
"""
Refute an estimate by running it on a random sample of the data containing measurement error in the
confounders. This allows us to find the ability of the estimator to find the effect of the
treatment on the outcome.
It supports additional parameters that can be specified in the refute_estimate() method.
:param num_simulations: The number of simulations to be run, ``CausalRefuter.DEFAULT_NUM_SIMULATIONS`` by default
:type num_simulations: int, optional
:param sample_size: The size of each bootstrap sample and is the size of the original data by default
:type sample_size: int, optional
:param required_variables: The list of variables to be used as the input for ``y~f(W)``
This is ``True`` by default, which in turn selects all variables leaving the treatment and the outcome
:type required_variables: int, list, bool, optional
1. An integer argument refers to how many variables will be used for estimating the value of the outcome
2. A list explicitly refers to which variables will be used to estimate the outcome
Furthermore, it gives the ability to explictly select or deselect the covariates present in the estimation of the
outcome. This is done by either adding or explicitly removing variables from the list as shown below:
.. note::
* We need to pass required_variables = ``[W0,W1]`` if we want ``W0`` and ``W1``.
* We need to pass required_variables = ``[-W0,-W1]`` if we want all variables excluding ``W0`` and ``W1``.
3. If the value is True, we wish to include all variables to estimate the value of the outcome.
.. warning:: A ``False`` value is ``INVALID`` and will result in an ``error``.
:param noise: The standard deviation of the noise to be added to the data and is ``BootstrapRefuter.DEFAULT_STD_DEV`` by default
:type noise: float, optional
:param probability_of_change: It specifies the probability with which we change the data for a boolean or categorical variable
It is ``noise`` by default, only if the value of ``noise`` is less than 1.
:type probability_of_change: float, optional
:param random_state: The seed value to be added if we wish to repeat the same random behavior. For this purpose, we repeat the same seed in the psuedo-random generator.
:type random_state: int, RandomState, optional
"""
DEFAULT_STD_DEV = 0.1
DEFAULT_SUCCESS_PROBABILITY = 0.5
DEFAULT_NUMBER_OF_TRIALS = 1
def __init__(self, *args, **kwargs):
super().__init__(*args, **kwargs)
self._num_simulations = kwargs.pop("num_simulations", CausalRefuter.DEFAULT_NUM_SIMULATIONS)
self._sample_size = kwargs.pop("sample_size", len(self._data))
self._required_variables = kwargs.pop("required_variables", True)
self._noise = kwargs.pop("noise", BootstrapRefuter.DEFAULT_STD_DEV)
self._probability_of_change = kwargs.pop("probability_of_change", None)
self._random_state = kwargs.pop("random_state", None)
self.logger = logging.getLogger(__name__)
def refute_estimate(self, show_progress_bar: bool = False, *args, **kwargs):
refute = refute_bootstrap(
data=self._data,
target_estimand=self._target_estimand,
estimate=self._estimate,
num_simulations=self._num_simulations,
random_state=self._random_state,
sample_size=self._sample_size,
required_variables=self._required_variables,
noise=self._noise,
probability_of_change=self._probability_of_change,
show_progress_bar=show_progress_bar,
n_jobs=self._n_jobs,
verbose=self._verbose,
)
refute.add_refuter(self)
return refute
def _refute_once(
data: pd.DataFrame,
target_estimand: IdentifiedEstimand,
estimate: CausalEstimate,
chosen_variables: Optional[List] = None,
random_state: Optional[Union[int, np.random.RandomState]] = None,
sample_size: Optional[int] = None,
noise: float = 0.1,
probability_of_change: Optional[float] = None,
):
if random_state is None:
new_data = resample(data, n_samples=sample_size)
else:
new_data = resample(data, n_samples=sample_size, random_state=random_state)
if chosen_variables is not None:
for variable in chosen_variables:
if ("float" or "int") in new_data[variable].dtype.name:
scaling_factor = new_data[variable].std()
new_data[variable] += np.random.normal(loc=0.0, scale=noise * scaling_factor, size=sample_size)
elif "bool" in new_data[variable].dtype.name:
probs = np.random.uniform(0, 1, sample_size)
new_data[variable] = np.where(
probs < probability_of_change, np.logical_not(new_data[variable]), new_data[variable]
)
elif "category" in new_data[variable].dtype.name:
categories = new_data[variable].unique()
# Find the set difference for each row
changed_data = new_data[variable].apply(lambda row: list(set(categories) - set([row])))
# Choose one out of the remaining
changed_data = changed_data.apply(lambda row: random.choice(row))
new_data[variable] = np.where(probs < probability_of_change, changed_data)
new_data[variable].astype("category")
new_estimator = CausalEstimator.get_estimator_object(new_data, target_estimand, estimate)
new_effect = new_estimator.estimate_effect()
return new_effect.value
def refute_bootstrap(
data: pd.DataFrame,
target_estimand: IdentifiedEstimand,
estimate: CausalEstimate,
num_simulations: int = 100,
random_state: Optional[Union[int, np.random.RandomState]] = None,
sample_size: Optional[int] = None,
required_variables: bool = True,
noise: float = 0.1,
probability_of_change: Optional[float] = None,
show_progress_bar: bool = False,
n_jobs: int = 1,
verbose: int = 0,
**_,
) -> CausalRefutation:
"""Refute an estimate by running it on a random sample of the data containing measurement error in the
confounders. This allows us to find the ability of the estimator to find the effect of the
treatment on the outcome.
:param data: pd.DataFrame: Data to run the refutation
:param target_estimand: IdentifiedEstimand: Identified estimand to run the refutation
:param estimate: CausalEstimate: Estimate to run the refutation
:param num_simulations: The number of simulations to be run, ``CausalRefuter.DEFAULT_NUM_SIMULATIONS`` by default
:param random_state: The seed value to be added if we wish to repeat the same random behavior. For this purpose, we repeat the same seed in the psuedo-random generator.
:param sample_size: The size of each bootstrap sample and is the size of the original data by default
:param required_variables: The list of variables to be used as the input for ``y~f(W)``
This is ``True`` by default, which in turn selects all variables leaving the treatment and the outcome
1. An integer argument refers to how many variables will be used for estimating the value of the outcome
2. A list explicitly refers to which variables will be used to estimate the outcome
Furthermore, it gives the ability to explictly select or deselect the covariates present in the estimation of the
outcome. This is done by either adding or explicitly removing variables from the list as shown below:
.. note::
* We need to pass required_variables = ``[W0,W1]`` if we want ``W0`` and ``W1``.
* We need to pass required_variables = ``[-W0,-W1]`` if we want all variables excluding ``W0`` and ``W1``.
3. If the value is True, we wish to include all variables to estimate the value of the outcome.
.. warning:: A ``False`` value is ``INVALID`` and will result in an ``error``.
:param noise: The standard deviation of the noise to be added to the data and is ``BootstrapRefuter.DEFAULT_STD_DEV`` by default
:param probability_of_change: It specifies the probability with which we change the data for a boolean or categorical variable
It is ``noise`` by default, only if the value of ``noise`` is less than 1.
:param n_jobs: The maximum number of concurrently running jobs. If -1 all CPUs are used. If 1 is given, no parallel computing code is used at all (this is the default).
:param verbose: The verbosity level: if non zero, progress messages are printed. Above 50, the output is sent to stdout. The frequency of the messages increases with the verbosity level. If it more than 10, all iterations are reported. The default is 0.
"""
if sample_size is None:
sample_size = len(data)
chosen_variables = choose_variables(
required_variables,
target_estimand.get_backdoor_variables()
+ target_estimand.instrumental_variables
+ estimate.params["effect_modifiers"],
)
if chosen_variables is None:
logger.info("INFO: There are no chosen variables")
else:
logger.info("INFO: The chosen variables are: " + ",".join(chosen_variables))
if probability_of_change is None and noise > 1:
logger.error("Error in using noise:{} for Binary Flip. The value is greater than 1".format(noise))
raise ValueError("The value for Binary Flip cannot be greater than 1")
elif probability_of_change is None and noise <= 1:
probability_of_change = noise
elif probability_of_change > 1:
logger.error(
"The probability of flip is: {}, However, this value cannot be greater than 1".format(probability_of_change)
)
raise ValueError("Probability of Flip cannot be greater than 1")
if sample_size > len(data):
logger.warning("The sample size is larger than the population size")
logger.info("Refutation over {} simulated datasets of size {} each".format(num_simulations, sample_size))
sample_estimates = Parallel(n_jobs=n_jobs, verbose=verbose)(
delayed(_refute_once)(
data, target_estimand, estimate, chosen_variables, random_state, sample_size, noise, probability_of_change
)
for _ in tqdm(
range(num_simulations),
colour=CausalRefuter.PROGRESS_BAR_COLOR,
disable=not show_progress_bar,
desc="Refuting Estimates: ",
)
)
sample_estimates = np.array(sample_estimates)
refute = CausalRefutation(
estimate.value, np.mean(sample_estimates), refutation_type="Refute: Bootstrap Sample Dataset"
)
# We want to see if the estimate falls in the same distribution as the one generated by the refuter
# Ideally that should be the case as running bootstrap should not have a significant effect on the ability
# of the treatment to affect the outcome
refute.add_significance_test_results(test_significance(estimate, sample_estimates))
return refute
| import logging
import random
from typing import List, Optional, Union
import numpy as np
import pandas as pd
from joblib import Parallel, delayed
from sklearn.utils import resample
from tqdm.auto import tqdm
from dowhy.causal_estimator import CausalEstimate, CausalEstimator
from dowhy.causal_estimators.econml import Econml
from dowhy.causal_identifier.identified_estimand import IdentifiedEstimand
from dowhy.causal_refuter import CausalRefutation, CausalRefuter, choose_variables, test_significance
logger = logging.getLogger(__name__)
class BootstrapRefuter(CausalRefuter):
"""
Refute an estimate by running it on a random sample of the data containing measurement error in the
confounders. This allows us to find the ability of the estimator to find the effect of the
treatment on the outcome.
It supports additional parameters that can be specified in the refute_estimate() method.
:param num_simulations: The number of simulations to be run, ``CausalRefuter.DEFAULT_NUM_SIMULATIONS`` by default
:type num_simulations: int, optional
:param sample_size: The size of each bootstrap sample and is the size of the original data by default
:type sample_size: int, optional
:param required_variables: The list of variables to be used as the input for ``y~f(W)``
This is ``True`` by default, which in turn selects all variables leaving the treatment and the outcome
:type required_variables: int, list, bool, optional
1. An integer argument refers to how many variables will be used for estimating the value of the outcome
2. A list explicitly refers to which variables will be used to estimate the outcome
Furthermore, it gives the ability to explictly select or deselect the covariates present in the estimation of the
outcome. This is done by either adding or explicitly removing variables from the list as shown below:
.. note::
* We need to pass required_variables = ``[W0,W1]`` if we want ``W0`` and ``W1``.
* We need to pass required_variables = ``[-W0,-W1]`` if we want all variables excluding ``W0`` and ``W1``.
3. If the value is True, we wish to include all variables to estimate the value of the outcome.
.. warning:: A ``False`` value is ``INVALID`` and will result in an ``error``.
:param noise: The standard deviation of the noise to be added to the data and is ``BootstrapRefuter.DEFAULT_STD_DEV`` by default
:type noise: float, optional
:param probability_of_change: It specifies the probability with which we change the data for a boolean or categorical variable
It is ``noise`` by default, only if the value of ``noise`` is less than 1.
:type probability_of_change: float, optional
:param random_state: The seed value to be added if we wish to repeat the same random behavior. For this purpose, we repeat the same seed in the psuedo-random generator.
:type random_state: int, RandomState, optional
"""
DEFAULT_STD_DEV = 0.1
DEFAULT_SUCCESS_PROBABILITY = 0.5
DEFAULT_NUMBER_OF_TRIALS = 1
def __init__(self, *args, **kwargs):
super().__init__(*args, **kwargs)
self._num_simulations = kwargs.pop("num_simulations", CausalRefuter.DEFAULT_NUM_SIMULATIONS)
self._sample_size = kwargs.pop("sample_size", len(self._data))
self._required_variables = kwargs.pop("required_variables", True)
self._noise = kwargs.pop("noise", BootstrapRefuter.DEFAULT_STD_DEV)
self._probability_of_change = kwargs.pop("probability_of_change", None)
self._random_state = kwargs.pop("random_state", None)
self.logger = logging.getLogger(__name__)
def refute_estimate(self, show_progress_bar: bool = False, *args, **kwargs):
refute = refute_bootstrap(
data=self._data,
target_estimand=self._target_estimand,
estimate=self._estimate,
num_simulations=self._num_simulations,
random_state=self._random_state,
sample_size=self._sample_size,
required_variables=self._required_variables,
noise=self._noise,
probability_of_change=self._probability_of_change,
show_progress_bar=show_progress_bar,
n_jobs=self._n_jobs,
verbose=self._verbose,
)
refute.add_refuter(self)
return refute
def _refute_once(
data: pd.DataFrame,
target_estimand: IdentifiedEstimand,
estimate: CausalEstimate,
chosen_variables: Optional[List] = None,
random_state: Optional[Union[int, np.random.RandomState]] = None,
sample_size: Optional[int] = None,
noise: float = 0.1,
probability_of_change: Optional[float] = None,
):
if random_state is None:
new_data = resample(data, n_samples=sample_size)
else:
new_data = resample(data, n_samples=sample_size, random_state=random_state)
if chosen_variables is not None:
for variable in chosen_variables:
if ("float" or "int") in new_data[variable].dtype.name:
scaling_factor = new_data[variable].std()
new_data[variable] += np.random.normal(loc=0.0, scale=noise * scaling_factor, size=sample_size)
elif "bool" in new_data[variable].dtype.name:
probs = np.random.uniform(0, 1, sample_size)
new_data[variable] = np.where(
probs < probability_of_change, np.logical_not(new_data[variable]), new_data[variable]
)
elif "category" in new_data[variable].dtype.name:
categories = new_data[variable].unique()
# Find the set difference for each row
changed_data = new_data[variable].apply(lambda row: list(set(categories) - set([row])))
# Choose one out of the remaining
changed_data = changed_data.apply(lambda row: random.choice(row))
new_data[variable] = np.where(probs < probability_of_change, changed_data)
new_data[variable].astype("category")
new_estimator = estimate.estimator.get_new_estimator_object(target_estimand)
new_estimator.fit(
new_data,
target_estimand.treatment_variable,
target_estimand.outcome_variable,
estimate.estimator._effect_modifier_names,
**new_estimator._econml_fit_params if isinstance(new_estimator, Econml) else {},
)
new_effect = new_estimator.estimate_effect(
control_value=estimate.control_value,
treatment_value=estimate.treatment_value,
target_units=estimate.estimator._target_units,
)
return new_effect.value
def refute_bootstrap(
data: pd.DataFrame,
target_estimand: IdentifiedEstimand,
estimate: CausalEstimate,
num_simulations: int = 100,
random_state: Optional[Union[int, np.random.RandomState]] = None,
sample_size: Optional[int] = None,
required_variables: bool = True,
noise: float = 0.1,
probability_of_change: Optional[float] = None,
show_progress_bar: bool = False,
n_jobs: int = 1,
verbose: int = 0,
**_,
) -> CausalRefutation:
"""Refute an estimate by running it on a random sample of the data containing measurement error in the
confounders. This allows us to find the ability of the estimator to find the effect of the
treatment on the outcome.
:param data: pd.DataFrame: Data to run the refutation
:param target_estimand: IdentifiedEstimand: Identified estimand to run the refutation
:param estimate: CausalEstimate: Estimate to run the refutation
:param num_simulations: The number of simulations to be run, ``CausalRefuter.DEFAULT_NUM_SIMULATIONS`` by default
:param random_state: The seed value to be added if we wish to repeat the same random behavior. For this purpose, we repeat the same seed in the psuedo-random generator.
:param sample_size: The size of each bootstrap sample and is the size of the original data by default
:param required_variables: The list of variables to be used as the input for ``y~f(W)``
This is ``True`` by default, which in turn selects all variables leaving the treatment and the outcome
1. An integer argument refers to how many variables will be used for estimating the value of the outcome
2. A list explicitly refers to which variables will be used to estimate the outcome
Furthermore, it gives the ability to explictly select or deselect the covariates present in the estimation of the
outcome. This is done by either adding or explicitly removing variables from the list as shown below:
.. note::
* We need to pass required_variables = ``[W0,W1]`` if we want ``W0`` and ``W1``.
* We need to pass required_variables = ``[-W0,-W1]`` if we want all variables excluding ``W0`` and ``W1``.
3. If the value is True, we wish to include all variables to estimate the value of the outcome.
.. warning:: A ``False`` value is ``INVALID`` and will result in an ``error``.
:param noise: The standard deviation of the noise to be added to the data and is ``BootstrapRefuter.DEFAULT_STD_DEV`` by default
:param probability_of_change: It specifies the probability with which we change the data for a boolean or categorical variable
It is ``noise`` by default, only if the value of ``noise`` is less than 1.
:param n_jobs: The maximum number of concurrently running jobs. If -1 all CPUs are used. If 1 is given, no parallel computing code is used at all (this is the default).
:param verbose: The verbosity level: if non zero, progress messages are printed. Above 50, the output is sent to stdout. The frequency of the messages increases with the verbosity level. If it more than 10, all iterations are reported. The default is 0.
"""
if sample_size is None:
sample_size = len(data)
chosen_variables = choose_variables(
required_variables,
target_estimand.get_backdoor_variables()
+ target_estimand.instrumental_variables
+ estimate.estimator._effect_modifier_names,
)
if chosen_variables is None:
logger.info("There are no chosen variables")
else:
logger.info("The chosen variables are: " + ",".join(chosen_variables))
if probability_of_change is None and noise > 1:
logger.error("Error in using noise:{} for Binary Flip. The value is greater than 1".format(noise))
raise ValueError("The value for Binary Flip cannot be greater than 1")
elif probability_of_change is None and noise <= 1:
probability_of_change = noise
elif probability_of_change > 1:
logger.error(
"The probability of flip is: {}, However, this value cannot be greater than 1".format(probability_of_change)
)
raise ValueError("Probability of Flip cannot be greater than 1")
if sample_size > len(data):
logger.warning("The sample size is larger than the population size")
logger.info("Refutation over {} simulated datasets of size {} each".format(num_simulations, sample_size))
sample_estimates = Parallel(n_jobs=n_jobs, verbose=verbose)(
delayed(_refute_once)(
data, target_estimand, estimate, chosen_variables, random_state, sample_size, noise, probability_of_change
)
for _ in tqdm(
range(num_simulations),
colour=CausalRefuter.PROGRESS_BAR_COLOR,
disable=not show_progress_bar,
desc="Refuting Estimates: ",
)
)
sample_estimates = np.array(sample_estimates)
refute = CausalRefutation(
estimate.value, np.mean(sample_estimates), refutation_type="Refute: Bootstrap Sample Dataset"
)
# We want to see if the estimate falls in the same distribution as the one generated by the refuter
# Ideally that should be the case as running bootstrap should not have a significant effect on the ability
# of the treatment to affect the outcome
refute.add_significance_test_results(test_significance(estimate, sample_estimates))
return refute
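# Usage sketch (illustrative only, not part of the original module). It assumes
# dowhy's synthetic linear_dataset, whose common causes are named W0, W1, ...
# by default, and demonstrates both required_variables conventions from the
# docstring above.
if __name__ == "__main__":
    import dowhy.datasets
    from dowhy import CausalModel

    sim = dowhy.datasets.linear_dataset(beta=10, num_common_causes=4, num_samples=1000, treatment_is_binary=True)
    model = CausalModel(data=sim["df"], treatment=sim["treatment_name"], outcome=sim["outcome_name"], graph=sim["gml_graph"])
    estimand = model.identify_effect(proceed_when_unidentifiable=True)
    estimate = model.estimate_effect(estimand, method_name="backdoor.linear_regression")

    # Add measurement error only to W0 and W1 ...
    print(refute_bootstrap(sim["df"], estimand, estimate, num_simulations=20, required_variables=["W0", "W1"]))
    # ... or to every eligible variable except W0 and W1.
    print(refute_bootstrap(sim["df"], estimand, estimate, num_simulations=20, required_variables=["-W0", "-W1"]))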
| andresmor-ms | 11c4e0dafd6e824eb81ad14262457d954ae61468 | affe0952f4aba6845247355c171565510c2c1673 | and then make sure that the right effect modifiers and other parameters are passed to the fit method. | amit-sharma | 255 |
py-why/dowhy | 746 | Functional api/causal estimators | * Introduce `fit()` method to estimators.
* Refactor constructors to avoid using `*args` and `**kwargs` and have more explicit parameters.
* Refactor refuters and other parts of the code to use `fit()` and modify arguments to `estimate_effect()` | null | 2022-11-04 16:15:39+00:00 | 2022-12-03 17:07:53+00:00 | dowhy/causal_refuters/data_subset_refuter.py | import logging
from typing import Optional, Union
import numpy as np
import pandas as pd
from joblib import Parallel, delayed
from tqdm.auto import tqdm
from dowhy.causal_estimator import CausalEstimate, CausalEstimator
from dowhy.causal_identifier import IdentifiedEstimand
from dowhy.causal_refuter import CausalRefutation, CausalRefuter, test_significance
logger = logging.getLogger(__name__)
class DataSubsetRefuter(CausalRefuter):
"""Refute an estimate by rerunning it on a random subset of the original data.
Supports additional parameters that can be specified in the refute_estimate() method. For joblib-related parameters (n_jobs, verbose), please refer to the joblib documentation for more details (https://joblib.readthedocs.io/en/latest/generated/joblib.Parallel.html).
:param subset_fraction: Fraction of the data to be used for re-estimation, which is ``DataSubsetRefuter.DEFAULT_SUBSET_FRACTION`` by default.
:type subset_fraction: float, optional
:param num_simulations: The number of simulations to be run, which is ``CausalRefuter.DEFAULT_NUM_SIMULATIONS`` by default
:type num_simulations: int, optional
:param random_state: The seed value to be added if we wish to repeat the same random behavior. If we wish to repeat the same behavior, we push the same seed into the pseudo-random generator.
:type random_state: int, RandomState, optional
:param n_jobs: The maximum number of concurrently running jobs. If -1 all CPUs are used. If 1 is given, no parallel computing code is used at all (this is the default).
:type n_jobs: int, optional
:param verbose: The verbosity level: if non-zero, progress messages are printed. Above 50, the output is sent to stdout. The frequency of the messages increases with the verbosity level. If it is more than 10, all iterations are reported. The default is 0.
:type verbose: int, optional
"""
# The default subset of the data to be used
DEFAULT_SUBSET_FRACTION = 0.8
def __init__(self, *args, **kwargs):
super().__init__(*args, **kwargs)
self._subset_fraction = kwargs.pop("subset_fraction", 0.8)
self._num_simulations = kwargs.pop("num_simulations", CausalRefuter.DEFAULT_NUM_SIMULATIONS)
self._random_state = kwargs.pop("random_state", None)
def refute_estimate(self, show_progress_bar: bool = False):
refute = refute_data_subset(
data=self._data,
target_estimand=self._target_estimand,
estimate=self._estimate,
subset_fraction=self._subset_fraction,
num_simulations=self._num_simulations,
random_state=self._random_state,
show_progress_bar=show_progress_bar,
n_jobs=self._n_jobs,
verbose=self._verbose,
)
refute.add_refuter(self)
return refute
def _refute_once(
data: pd.DataFrame,
target_estimand: IdentifiedEstimand,
estimate: CausalEstimate,
subset_fraction: float,
random_state: Optional[Union[int, np.random.RandomState]],
):
if random_state is None:
new_data = data.sample(frac=subset_fraction)
else:
new_data = data.sample(frac=subset_fraction, random_state=random_state)
new_estimator = CausalEstimator.get_estimator_object(new_data, target_estimand, estimate)
new_effect = new_estimator.estimate_effect()
return new_effect.value
def refute_data_subset(
data: pd.DataFrame,
target_estimand: IdentifiedEstimand,
estimate: CausalEstimate,
subset_fraction: float = 0.8,
num_simulations: int = 100,
random_state: Optional[Union[int, np.random.RandomState]] = None,
show_progress_bar: bool = False,
n_jobs: int = 1,
verbose: int = 0,
**_,
) -> CausalRefutation:
"""Refute an estimate by rerunning it on a random subset of the original data.
:param data: pd.DataFrame: Data to run the refutation
:param target_estimand: IdentifiedEstimand: Identified estimand to run the refutation
:param estimate: CausalEstimate: Estimate to run the refutation
:param subset_fraction: Fraction of the data to be used for re-estimation, which is ``DataSubsetRefuter.DEFAULT_SUBSET_FRACTION`` by default.
:param num_simulations: The number of simulations to be run, ``CausalRefuter.DEFAULT_NUM_SIMULATIONS`` by default
:param random_state: The seed value to be added if we wish to repeat the same random behavior. For this purpose, we reuse the same seed in the pseudo-random generator.
:param n_jobs: The maximum number of concurrently running jobs. If -1 all CPUs are used. If 1 is given, no parallel computing code is used at all (this is the default).
:param verbose: The verbosity level: if non-zero, progress messages are printed. Above 50, the output is sent to stdout. The frequency of the messages increases with the verbosity level. If it is more than 10, all iterations are reported. The default is 0.
"""
logger.info(
"Refutation over {} simulated datasets of size {} each".format(
subset_fraction, subset_fraction * len(data.index)
)
)
# Run refutation in parallel
sample_estimates = Parallel(n_jobs=n_jobs, verbose=verbose)(
delayed(_refute_once)(data, target_estimand, estimate, subset_fraction, random_state)
for _ in tqdm(
range(num_simulations),
colour=CausalRefuter.PROGRESS_BAR_COLOR,
disable=not show_progress_bar,
desc="Refuting Estimates: ",
)
)
sample_estimates = np.array(sample_estimates)
refute = CausalRefutation(estimate.value, np.mean(sample_estimates), refutation_type="Refute: Use a subset of data")
# We want to see if the estimate falls in the same distribution as the one generated by the refuter
# Ideally that should be the case as choosing a subset should not have a significant effect on the ability
# of the treatment to affect the outcome
refute.add_significance_test_results(test_significance(estimate, sample_estimates))
return refute
| import logging
from typing import Optional, Union
import numpy as np
import pandas as pd
from joblib import Parallel, delayed
from tqdm.auto import tqdm
from dowhy.causal_estimator import CausalEstimate, CausalEstimator
from dowhy.causal_estimators.econml import Econml
from dowhy.causal_identifier import IdentifiedEstimand
from dowhy.causal_refuter import CausalRefutation, CausalRefuter, test_significance
logger = logging.getLogger(__name__)
class DataSubsetRefuter(CausalRefuter):
"""Refute an estimate by rerunning it on a random subset of the original data.
Supports additional parameters that can be specified in the refute_estimate() method. For joblib-related parameters (n_jobs, verbose), please refer to the joblib documentation for more details (https://joblib.readthedocs.io/en/latest/generated/joblib.Parallel.html).
:param subset_fraction: Fraction of the data to be used for re-estimation, which is ``DataSubsetRefuter.DEFAULT_SUBSET_FRACTION`` by default.
:type subset_fraction: float, optional
:param num_simulations: The number of simulations to be run, which is ``CausalRefuter.DEFAULT_NUM_SIMULATIONS`` by default
:type num_simulations: int, optional
:param random_state: The seed value to be added if we wish to repeat the same random behavior. If we wish to repeat the same behavior, we push the same seed into the pseudo-random generator.
:type random_state: int, RandomState, optional
:param n_jobs: The maximum number of concurrently running jobs. If -1 all CPUs are used. If 1 is given, no parallel computing code is used at all (this is the default).
:type n_jobs: int, optional
:param verbose: The verbosity level: if non-zero, progress messages are printed. Above 50, the output is sent to stdout. The frequency of the messages increases with the verbosity level. If it is more than 10, all iterations are reported. The default is 0.
:type verbose: int, optional
"""
# The default subset of the data to be used
DEFAULT_SUBSET_FRACTION = 0.8
def __init__(self, *args, **kwargs):
super().__init__(*args, **kwargs)
self._subset_fraction = kwargs.pop("subset_fraction", 0.8)
self._num_simulations = kwargs.pop("num_simulations", CausalRefuter.DEFAULT_NUM_SIMULATIONS)
self._random_state = kwargs.pop("random_state", None)
def refute_estimate(self, show_progress_bar: bool = False):
refute = refute_data_subset(
data=self._data,
target_estimand=self._target_estimand,
estimate=self._estimate,
subset_fraction=self._subset_fraction,
num_simulations=self._num_simulations,
random_state=self._random_state,
show_progress_bar=show_progress_bar,
n_jobs=self._n_jobs,
verbose=self._verbose,
)
refute.add_refuter(self)
return refute
def _refute_once(
data: pd.DataFrame,
target_estimand: IdentifiedEstimand,
estimate: CausalEstimate,
subset_fraction: float,
random_state: Optional[Union[int, np.random.RandomState]],
):
if random_state is None:
new_data = data.sample(frac=subset_fraction)
else:
new_data = data.sample(frac=subset_fraction, random_state=random_state)
new_estimator = estimate.estimator.get_new_estimator_object(target_estimand)
new_estimator.fit(
new_data,
target_estimand.treatment_variable,
target_estimand.outcome_variable,
estimate.estimator._effect_modifier_names,
**new_estimator._econml_fit_params if isinstance(new_estimator, Econml) else {},
)
new_effect = new_estimator.estimate_effect(
control_value=estimate.control_value,
treatment_value=estimate.treatment_value,
target_units=estimate.estimator._target_units,
)
return new_effect.value
def refute_data_subset(
data: pd.DataFrame,
target_estimand: IdentifiedEstimand,
estimate: CausalEstimate,
subset_fraction: float = 0.8,
num_simulations: int = 100,
random_state: Optional[Union[int, np.random.RandomState]] = None,
show_progress_bar: bool = False,
n_jobs: int = 1,
verbose: int = 0,
**_,
) -> CausalRefutation:
"""Refute an estimate by rerunning it on a random subset of the original data.
:param data: pd.DataFrame: Data to run the refutation
:param target_estimand: IdentifiedEstimand: Identified estimand to run the refutation
:param estimate: CausalEstimate: Estimate to run the refutation
:param subset_fraction: Fraction of the data to be used for re-estimation, which is ``DataSubsetRefuter.DEFAULT_SUBSET_FRACTION`` by default.
:param num_simulations: The number of simulations to be run, ``CausalRefuter.DEFAULT_NUM_SIMULATIONS`` by default
:param random_state: The seed value to be added if we wish to repeat the same random behavior. For this purpose, we reuse the same seed in the pseudo-random generator.
:param n_jobs: The maximum number of concurrently running jobs. If -1 all CPUs are used. If 1 is given, no parallel computing code is used at all (this is the default).
:param verbose: The verbosity level: if non-zero, progress messages are printed. Above 50, the output is sent to stdout. The frequency of the messages increases with the verbosity level. If it is more than 10, all iterations are reported. The default is 0.
"""
logger.info(
"Refutation over {} simulated datasets of size {} each".format(
subset_fraction, subset_fraction * len(data.index)
)
)
# Run refutation in parallel
sample_estimates = Parallel(n_jobs=n_jobs, verbose=verbose)(
delayed(_refute_once)(data, target_estimand, estimate, subset_fraction, random_state)
for _ in tqdm(
range(num_simulations),
colour=CausalRefuter.PROGRESS_BAR_COLOR,
disable=not show_progress_bar,
desc="Refuting Estimates: ",
)
)
sample_estimates = np.array(sample_estimates)
refute = CausalRefutation(estimate.value, np.mean(sample_estimates), refutation_type="Refute: Use a subset of data")
# We want to see if the estimate falls in the same distribution as the one generated by the refuter
# Ideally that should be the case as choosing a subset should not have a significant effect on the ability
# of the treatment to affect the outcome
refute.add_significance_test_results(test_significance(estimate, sample_estimates))
return refute
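# Usage sketch (illustrative only, not part of the original module): re-estimate
# the effect on random 90% subsets of a synthetic dataset and compare the mean
# of the simulated estimates with the original one.
if __name__ == "__main__":
    import dowhy.datasets
    from dowhy import CausalModel

    sim = dowhy.datasets.linear_dataset(beta=10, num_common_causes=4, num_samples=1000, treatment_is_binary=True)
    model = CausalModel(data=sim["df"], treatment=sim["treatment_name"], outcome=sim["outcome_name"], graph=sim["gml_graph"])
    estimand = model.identify_effect(proceed_when_unidentifiable=True)
    estimate = model.estimate_effect(estimand, method_name="backdoor.linear_regression")
    print(refute_data_subset(sim["df"], estimand, estimate, subset_fraction=0.9, num_simulations=20, random_state=42))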
| andresmor-ms | 11c4e0dafd6e824eb81ad14262457d954ae61468 | affe0952f4aba6845247355c171565510c2c1673 | add fit method. Same comment for all refuters. | amit-sharma | 256 |
py-why/dowhy | 746 | Functional api/causal estimators | * Introduce `fit()` method to estimators.
* Refactor constructors to avoid using `*args` and `**kwargs` and have more explicit parameters.
* Refactor refuters and other parts of the code to use `fit()` and modify arguments to `estimate_effect()` | null | 2022-11-04 16:15:39+00:00 | 2022-12-03 17:07:53+00:00 | dowhy/causal_refuters/placebo_treatment_refuter.py | import copy
import logging
from enum import Enum
from typing import Dict, List, Optional, Union
import numpy as np
import pandas as pd
from joblib import Parallel, delayed
from tqdm.auto import tqdm
from dowhy.causal_estimator import CausalEstimate, CausalEstimator
from dowhy.causal_identifier.identified_estimand import IdentifiedEstimand
from dowhy.causal_refuter import CausalRefutation, CausalRefuter, test_significance
from dowhy.utils.api import parse_state
logger = logging.getLogger(__name__)
# Default success probability for the binomial distribution
DEFAULT_PROBABILITY_OF_BINOMIAL = 0.5
# Number of trials: the number of coin tosses used to decide whether a sample gets the treatment
DEFAULT_NUMBER_OF_TRIALS = 1
# Mean of the Normal Distribution
DEFAULT_MEAN_OF_NORMAL = 0
# Standard Deviation of the Normal Distribution
DEFAULT_STD_DEV_OF_NORMAL = 0
class PlaceboType(Enum):
DEFAULT = "Random Data"
PERMUTE = "permute"
class PlaceboTreatmentRefuter(CausalRefuter):
"""Refute an estimate by replacing treatment with a randomly-generated placebo variable.
Supports additional parameters that can be specified in the refute_estimate() method. For joblib-related parameters (n_jobs, verbose), please refer to the joblib documentation for more details (https://joblib.readthedocs.io/en/latest/generated/joblib.Parallel.html).
:param placebo_type: Default is to generate random values for the treatment. If placebo_type is "permute", then the original treatment values are permuted by row.
:type placebo_type: str, optional
:param num_simulations: The number of simulations to be run, which is ``CausalRefuter.DEFAULT_NUM_SIMULATIONS`` by default
:type num_simulations: int, optional
:param random_state: The seed value to be added if we wish to repeat the same random behavior. If we want to repeat the same behavior, we push the same seed into the pseudo-random generator.
:type random_state: int, RandomState, optional
:param n_jobs: The maximum number of concurrently running jobs. If -1 all CPUs are used. If 1 is given, no parallel computing code is used at all (this is the default).
:type n_jobs: int, optional
:param verbose: The verbosity level: if non-zero, progress messages are printed. Above 50, the output is sent to stdout. The frequency of the messages increases with the verbosity level. If it is more than 10, all iterations are reported. The default is 0.
:type verbose: int, optional
"""
def __init__(self, *args, **kwargs):
super().__init__(*args, **kwargs)
self._placebo_type = kwargs.pop("placebo_type", None)
if self._placebo_type is None:
self._placebo_type = "Random Data"
self._num_simulations = kwargs.pop("num_simulations", CausalRefuter.DEFAULT_NUM_SIMULATIONS)
self._random_state = kwargs.pop("random_state", None)
self.logger = logging.getLogger(__name__)
def refute_estimate(self, show_progress_bar=False):
refute = refute_placebo_treatment(
data=self._data,
target_estimand=self._target_estimand,
estimate=self._estimate,
treatment_names=self._treatment_name,
num_simulations=self._num_simulations,
placebo_type=PlaceboType(self._placebo_type),
random_state=self._random_state,
show_progress_bar=show_progress_bar,
n_jobs=self._n_jobs,
verbose=self._verbose,
)
refute.add_refuter(self)
return refute
def _refute_once(
data: pd.DataFrame,
target_estimand: IdentifiedEstimand,
estimate: CausalEstimate,
treatment_names: List[str],
type_dict: Dict,
placebo_type: PlaceboType = PlaceboType.DEFAULT,
random_state: Optional[np.random.RandomState] = None,
):
if placebo_type == PlaceboType.PERMUTE:
permuted_idx = None
if random_state is None:
permuted_idx = np.random.choice(data.shape[0], size=data.shape[0], replace=False)
else:
permuted_idx = random_state.choice(data.shape[0], size=data.shape[0], replace=False)
new_treatment = data[treatment_names].iloc[permuted_idx].values
if target_estimand.identifier_method.startswith("iv"):
new_instruments_values = data[estimate.estimator.estimating_instrument_names].iloc[permuted_idx].values
new_instruments_df = pd.DataFrame(
new_instruments_values,
columns=["placebo_" + s for s in data[estimate.estimator.estimating_instrument_names].columns],
)
else:
if "float" in type_dict[treatment_names[0]].name:
logger.info(
"Using a Normal Distribution with Mean:{} and Variance:{}".format(
DEFAULT_MEAN_OF_NORMAL,
DEFAULT_STD_DEV_OF_NORMAL,
)
)
new_treatment = np.random.randn(data.shape[0]) * DEFAULT_STD_DEV_OF_NORMAL + DEFAULT_MEAN_OF_NORMAL
elif "bool" in type_dict[treatment_names[0]].name:
logger.info(
"Using a Binomial Distribution with {} trials and {} probability of success".format(
DEFAULT_NUMBER_OF_TRIALS,
DEFAULT_PROBABILITY_OF_BINOMIAL,
)
)
new_treatment = np.random.binomial(
DEFAULT_NUMBER_OF_TRIALS,
DEFAULT_PROBABILITY_OF_BINOMIAL,
data.shape[0],
).astype(bool)
elif "int" in type_dict[treatment_names[0]].name:
logger.info(
"Using a Discrete Uniform Distribution lying between {} and {}".format(
data[treatment_names[0]].min(), data[treatment_names[0]].max()
)
)
new_treatment = np.random.randint(
low=data[treatment_names[0]].min(), high=data[treatment_names[0]].max() + 1, size=data.shape[0]
)
elif "category" in type_dict[treatment_names[0]].name:
categories = data[treatment_names[0]].unique()
logger.info("Using a Discrete Uniform Distribution with the following categories:{}".format(categories))
sample = np.random.choice(categories, size=data.shape[0])
new_treatment = pd.Series(sample, index=data.index).astype("category")
# Create a new column in the data by the name of placebo
new_data = data.assign(placebo=new_treatment)
if target_estimand.identifier_method.startswith("iv"):
new_data = pd.concat((new_data, new_instruments_df), axis=1)
# Sanity check the data
logger.debug(new_data[0:10])
new_estimator = CausalEstimator.get_estimator_object(new_data, target_estimand, estimate)
new_effect = new_estimator.estimate_effect()
return new_effect.value
def refute_placebo_treatment(
data: pd.DataFrame,
target_estimand: IdentifiedEstimand,
estimate: CausalEstimate,
treatment_names: List,
num_simulations: int = 100,
placebo_type: PlaceboType = PlaceboType.DEFAULT,
random_state: Optional[Union[int, np.random.RandomState]] = None,
show_progress_bar: bool = False,
n_jobs: int = 1,
verbose: int = 0,
**_,
) -> CausalRefutation:
"""Refute an estimate by replacing treatment with a randomly-generated placebo variable.
:param data: pd.DataFrame: Data to run the refutation
:param target_estimand: IdentifiedEstimand: Identified estimand to run the refutation
:param estimate: CausalEstimate: Estimate to run the refutation
:param treatment_names: list: List of treatments
:param num_simulations: The number of simulations to be run, which defaults to ``CausalRefuter.DEFAULT_NUM_SIMULATIONS``
:param placebo_type: Default is to generate random values for the treatment. If placebo_type is "permute", then the original treatment values are permuted by row.
:param random_state: The seed value to be added if we wish to repeat the same random behavior. If we want to repeat the same behavior, we push the same seed into the pseudo-random generator.
:param n_jobs: The maximum number of concurrently running jobs. If -1 all CPUs are used. If 1 is given, no parallel computing code is used at all (this is the default).
:param verbose: The verbosity level: if non-zero, progress messages are printed. Above 50, the output is sent to stdout. The frequency of the messages increases with the verbosity level. If it is more than 10, all iterations are reported. The default is 0.
"""
if isinstance(random_state, int):
random_state = np.random.RandomState(random_state)
# only permute is supported for iv methods
if target_estimand.identifier_method.startswith("iv"):
if placebo_type != PlaceboType.PERMUTE:
logger.error(
"Only placebo_type=''permute'' is supported for creating placebo for instrumental variable estimation methods"
)
raise ValueError(
"Only placebo_type=''permute'' is supported for creating placebo for instrumental variable estimation methods."
)
# We need to change the identified estimand
# We make a copy as a safety measure, we don't want to change the
# original DataFrame
identified_estimand = copy.deepcopy(target_estimand)
identified_estimand.treatment_variable = ["placebo"]
if target_estimand.identifier_method.startswith("iv"):
identified_estimand.instrumental_variables = [
"placebo_" + s for s in identified_estimand.instrumental_variables
]
# For IV methods, the estimating_instrument_names should also be
# changed. Create a copy to avoid modifying original object
if estimate.params["method_params"] is not None and "iv_instrument_name" in estimate.params["method_params"]:
estimate = copy.deepcopy(estimate)
estimate.params["method_params"]["iv_instrument_name"] = [
"placebo_" + s for s in parse_state(estimate.params["method_params"]["iv_instrument_name"])
]
logger.info("Refutation over {} simulated datasets of {} treatment".format(num_simulations, placebo_type))
type_dict = dict(data.dtypes)
# Run refutation in parallel
sample_estimates = Parallel(n_jobs=n_jobs, verbose=verbose)(
delayed(_refute_once)(
data, identified_estimand, estimate, treatment_names, type_dict, placebo_type, random_state
)
for _ in tqdm(
range(num_simulations),
disable=not show_progress_bar,
colour=CausalRefuter.PROGRESS_BAR_COLOR,
desc="Refuting Estimates: ",
)
)
sample_estimates = np.array(sample_estimates)
refute = CausalRefutation(
estimate.value, np.mean(sample_estimates), refutation_type="Refute: Use a Placebo Treatment"
)
# Note: We hardcode the estimate value to ZERO as we want to check if it falls in the distribution of the refuter
# Ideally we should expect that ZERO should fall in the distribution of the effect estimates as we have severed any causal
# relationship between the treatment and the outcome.
dummy_estimator = CausalEstimate(
estimate=0,
control_value=estimate.control_value,
treatment_value=estimate.treatment_value,
target_estimand=estimate.target_estimand,
realized_estimand_expr=estimate.realized_estimand_expr,
)
refute.add_significance_test_results(test_significance(dummy_estimator, sample_estimates))
return refute
| import copy
import logging
from enum import Enum
from typing import Dict, List, Optional, Union
import numpy as np
import pandas as pd
from joblib import Parallel, delayed
from tqdm.auto import tqdm
from dowhy.causal_estimator import CausalEstimate
from dowhy.causal_estimators.econml import Econml
from dowhy.causal_estimators.instrumental_variable_estimator import InstrumentalVariableEstimator
from dowhy.causal_identifier.identified_estimand import IdentifiedEstimand
from dowhy.causal_refuter import CausalRefutation, CausalRefuter, test_significance
from dowhy.utils.api import parse_state
logger = logging.getLogger(__name__)
# Default success probability for the binomial distribution
DEFAULT_PROBABILITY_OF_BINOMIAL = 0.5
# Number of trials: the number of coin tosses used to decide whether a sample gets the treatment
DEFAULT_NUMBER_OF_TRIALS = 1
# Mean of the Normal Distribution
DEFAULT_MEAN_OF_NORMAL = 0
# Standard Deviation of the Normal Distribution
DEFAULT_STD_DEV_OF_NORMAL = 0
class PlaceboType(Enum):
DEFAULT = "Random Data"
PERMUTE = "permute"
class PlaceboTreatmentRefuter(CausalRefuter):
"""Refute an estimate by replacing treatment with a randomly-generated placebo variable.
Supports additional parameters that can be specified in the refute_estimate() method. For joblib-related parameters (n_jobs, verbose), please refer to the joblib documentation for more details (https://joblib.readthedocs.io/en/latest/generated/joblib.Parallel.html).
:param placebo_type: Default is to generate random values for the treatment. If placebo_type is "permute", then the original treatment values are permuted by row.
:type placebo_type: str, optional
:param num_simulations: The number of simulations to be run, which is ``CausalRefuter.DEFAULT_NUM_SIMULATIONS`` by default
:type num_simulations: int, optional
:param random_state: The seed value to be added if we wish to repeat the same random behavior. If we want to repeat the same behavior, we push the same seed into the pseudo-random generator.
:type random_state: int, RandomState, optional
:param n_jobs: The maximum number of concurrently running jobs. If -1 all CPUs are used. If 1 is given, no parallel computing code is used at all (this is the default).
:type n_jobs: int, optional
:param verbose: The verbosity level: if non-zero, progress messages are printed. Above 50, the output is sent to stdout. The frequency of the messages increases with the verbosity level. If it is more than 10, all iterations are reported. The default is 0.
:type verbose: int, optional
"""
def __init__(self, *args, **kwargs):
super().__init__(*args, **kwargs)
self._placebo_type = kwargs.pop("placebo_type", None)
if self._placebo_type is None:
self._placebo_type = "Random Data"
self._num_simulations = kwargs.pop("num_simulations", CausalRefuter.DEFAULT_NUM_SIMULATIONS)
self._random_state = kwargs.pop("random_state", None)
self.logger = logging.getLogger(__name__)
def refute_estimate(self, show_progress_bar=False):
refute = refute_placebo_treatment(
data=self._data,
target_estimand=self._target_estimand,
estimate=self._estimate,
treatment_names=self._treatment_name,
num_simulations=self._num_simulations,
placebo_type=PlaceboType(self._placebo_type),
random_state=self._random_state,
show_progress_bar=show_progress_bar,
n_jobs=self._n_jobs,
verbose=self._verbose,
)
refute.add_refuter(self)
return refute
def _refute_once(
data: pd.DataFrame,
target_estimand: IdentifiedEstimand,
estimate: CausalEstimate,
treatment_names: List[str],
type_dict: Dict,
placebo_type: PlaceboType = PlaceboType.DEFAULT,
random_state: Optional[np.random.RandomState] = None,
):
if placebo_type == PlaceboType.PERMUTE:
permuted_idx = None
if random_state is None:
permuted_idx = np.random.choice(data.shape[0], size=data.shape[0], replace=False)
else:
permuted_idx = random_state.choice(data.shape[0], size=data.shape[0], replace=False)
new_treatment = data[treatment_names].iloc[permuted_idx].values
if target_estimand.identifier_method.startswith("iv"):
new_instruments_values = data[estimate.estimator.estimating_instrument_names].iloc[permuted_idx].values
new_instruments_df = pd.DataFrame(
new_instruments_values,
columns=["placebo_" + s for s in data[estimate.estimator.estimating_instrument_names].columns],
)
else:
if "float" in type_dict[treatment_names[0]].name:
logger.info(
"Using a Normal Distribution with Mean:{} and Variance:{}".format(
DEFAULT_MEAN_OF_NORMAL,
DEFAULT_STD_DEV_OF_NORMAL,
)
)
new_treatment = np.random.randn(data.shape[0]) * DEFAULT_STD_DEV_OF_NORMAL + DEFAULT_MEAN_OF_NORMAL
elif "bool" in type_dict[treatment_names[0]].name:
logger.info(
"Using a Binomial Distribution with {} trials and {} probability of success".format(
DEFAULT_NUMBER_OF_TRIALS,
DEFAULT_PROBABILITY_OF_BINOMIAL,
)
)
new_treatment = np.random.binomial(
DEFAULT_NUMBER_OF_TRIALS,
DEFAULT_PROBABILITY_OF_BINOMIAL,
data.shape[0],
).astype(bool)
elif "int" in type_dict[treatment_names[0]].name:
logger.info(
"Using a Discrete Uniform Distribution lying between {} and {}".format(
data[treatment_names[0]].min(), data[treatment_names[0]].max()
)
)
new_treatment = np.random.randint(
low=data[treatment_names[0]].min(), high=data[treatment_names[0]].max() + 1, size=data.shape[0]
)
elif "category" in type_dict[treatment_names[0]].name:
categories = data[treatment_names[0]].unique()
logger.info("Using a Discrete Uniform Distribution with the following categories:{}".format(categories))
sample = np.random.choice(categories, size=data.shape[0])
new_treatment = pd.Series(sample, index=data.index).astype("category")
# Create a new column in the data by the name of placebo
new_data = data.assign(placebo=new_treatment)
if target_estimand.identifier_method.startswith("iv"):
new_data = pd.concat((new_data, new_instruments_df), axis=1)
# Sanity check the data
logger.debug(new_data[0:10])
new_estimator = estimate.estimator.get_new_estimator_object(target_estimand)
new_estimator.fit(
new_data,
target_estimand.treatment_variable,
target_estimand.outcome_variable,
estimate.estimator._effect_modifier_names,
**new_estimator._econml_fit_params if isinstance(new_estimator, Econml) else {},
)
new_effect = new_estimator.estimate_effect(
control_value=estimate.control_value,
treatment_value=estimate.treatment_value,
target_units=estimate.estimator._target_units,
)
return new_effect.value
def refute_placebo_treatment(
data: pd.DataFrame,
target_estimand: IdentifiedEstimand,
estimate: CausalEstimate,
treatment_names: List,
num_simulations: int = 100,
placebo_type: PlaceboType = PlaceboType.DEFAULT,
random_state: Optional[Union[int, np.random.RandomState]] = None,
show_progress_bar: bool = False,
n_jobs: int = 1,
verbose: int = 0,
**_,
) -> CausalRefutation:
"""Refute an estimate by replacing treatment with a randomly-generated placebo variable.
:param data: pd.DataFrame: Data to run the refutation
:param target_estimand: IdentifiedEstimand: Identified estimand to run the refutation
:param estimate: CausalEstimate: Estimate to run the refutation
:param treatment_names: list: List of treatments
:param num_simulations: The number of simulations to be run, which defaults to ``CausalRefuter.DEFAULT_NUM_SIMULATIONS``
:param placebo_type: Default is to generate random values for the treatment. If placebo_type is "permute", then the original treatment values are permuted by row.
:param random_state: The seed value to be added if we wish to repeat the same random behavior. If we want to repeat the same behavior, we push the same seed into the pseudo-random generator.
:param n_jobs: The maximum number of concurrently running jobs. If -1 all CPUs are used. If 1 is given, no parallel computing code is used at all (this is the default).
:param verbose: The verbosity level: if non-zero, progress messages are printed. Above 50, the output is sent to stdout. The frequency of the messages increases with the verbosity level. If it is more than 10, all iterations are reported. The default is 0.
"""
if isinstance(random_state, int):
random_state = np.random.RandomState(random_state)
# only permute is supported for iv methods
if target_estimand.identifier_method.startswith("iv"):
if placebo_type != PlaceboType.PERMUTE:
logger.error(
"Only placebo_type=''permute'' is supported for creating placebo for instrumental variable estimation methods"
)
raise ValueError(
"Only placebo_type=''permute'' is supported for creating placebo for instrumental variable estimation methods."
)
# For IV methods, the estimating_instrument_names should also be
# changed. Create a copy to avoid modifying original object
if isinstance(estimate, InstrumentalVariableEstimator):
estimate = copy.deepcopy(estimate)
estimate.iv_instrument_name = ["placebo_" + s for s in parse_state(estimate.iv_instrument_name)]
# We need to change the identified estimand
# We make a copy as a safety measure, we don't want to change the
# original DataFrame
identified_estimand = copy.deepcopy(target_estimand)
identified_estimand.treatment_variable = ["placebo"]
if target_estimand.identifier_method.startswith("iv"):
identified_estimand.instrumental_variables = [
"placebo_" + s for s in identified_estimand.instrumental_variables
]
logger.info("Refutation over {} simulated datasets of {} treatment".format(num_simulations, placebo_type))
type_dict = dict(data.dtypes)
# Run refutation in parallel
sample_estimates = Parallel(n_jobs=n_jobs, verbose=verbose)(
delayed(_refute_once)(
data, identified_estimand, estimate, treatment_names, type_dict, placebo_type, random_state
)
for _ in tqdm(
range(num_simulations),
disable=not show_progress_bar,
colour=CausalRefuter.PROGRESS_BAR_COLOR,
desc="Refuting Estimates: ",
)
)
sample_estimates = np.array(sample_estimates)
refute = CausalRefutation(
estimate.value, np.mean(sample_estimates), refutation_type="Refute: Use a Placebo Treatment"
)
# Note: We hardcode the estimate value to ZERO as we want to check if it falls in the distribution of the refuter
# Ideally we should expect that ZERO should fall in the distribution of the effect estimates as we have severed any causal
# relationship between the treatment and the outcome.
dummy_estimator = CausalEstimate(
estimate=0,
control_value=estimate.control_value,
treatment_value=estimate.treatment_value,
target_estimand=estimate.target_estimand,
realized_estimand_expr=estimate.realized_estimand_expr,
)
refute.add_significance_test_results(test_significance(dummy_estimator, sample_estimates))
return refute
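# Usage sketch (illustrative only, not part of the original module): with a
# randomly generated placebo treatment the re-estimated effect should be close
# to zero, which is what the significance test above encodes.
if __name__ == "__main__":
    import dowhy.datasets
    from dowhy import CausalModel

    sim = dowhy.datasets.linear_dataset(beta=10, num_common_causes=4, num_samples=1000, treatment_is_binary=True)
    model = CausalModel(data=sim["df"], treatment=sim["treatment_name"], outcome=sim["outcome_name"], graph=sim["gml_graph"])
    estimand = model.identify_effect(proceed_when_unidentifiable=True)
    estimate = model.estimate_effect(estimand, method_name="backdoor.linear_regression")
    print(refute_placebo_treatment(sim["df"], estimand, estimate, treatment_names=[sim["treatment_name"]], num_simulations=20))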
| andresmor-ms | 11c4e0dafd6e824eb81ad14262457d954ae61468 | affe0952f4aba6845247355c171565510c2c1673 | do we need this InstrumentalVariableEstimator import? | amit-sharma | 257 |
py-why/dowhy | 746 | Functional api/causal estimators | * Introduce `fit()` method to estimators.
* Refactor constructors to avoid using `*args` and `**kwargs` and have more explicit parameters.
* Refactor refuters and other parts of the code to use `fit()` and modify arguments to `estimate_effect()` | null | 2022-11-04 16:15:39+00:00 | 2022-12-03 17:07:53+00:00 | dowhy/causal_refuters/placebo_treatment_refuter.py | import copy
import logging
from enum import Enum
from typing import Dict, List, Optional, Union
import numpy as np
import pandas as pd
from joblib import Parallel, delayed
from tqdm.auto import tqdm
from dowhy.causal_estimator import CausalEstimate, CausalEstimator
from dowhy.causal_identifier.identified_estimand import IdentifiedEstimand
from dowhy.causal_refuter import CausalRefutation, CausalRefuter, test_significance
from dowhy.utils.api import parse_state
logger = logging.getLogger(__name__)
# Default success probability for the binomial distribution
DEFAULT_PROBABILITY_OF_BINOMIAL = 0.5
# Number of trials: the number of coin tosses used to decide whether a sample gets the treatment
DEFAULT_NUMBER_OF_TRIALS = 1
# Mean of the Normal Distribution
DEFAULT_MEAN_OF_NORMAL = 0
# Standard Deviation of the Normal Distribution
DEFAULT_STD_DEV_OF_NORMAL = 0
class PlaceboType(Enum):
DEFAULT = "Random Data"
PERMUTE = "permute"
class PlaceboTreatmentRefuter(CausalRefuter):
"""Refute an estimate by replacing treatment with a randomly-generated placebo variable.
Supports additional parameters that can be specified in the refute_estimate() method. For joblib-related parameters (n_jobs, verbose), please refer to the joblib documentation for more details (https://joblib.readthedocs.io/en/latest/generated/joblib.Parallel.html).
:param placebo_type: Default is to generate random values for the treatment. If placebo_type is "permute", then the original treatment values are permuted by row.
:type placebo_type: str, optional
:param num_simulations: The number of simulations to be run, which is ``CausalRefuter.DEFAULT_NUM_SIMULATIONS`` by default
:type num_simulations: int, optional
:param random_state: The seed value to be added if we wish to repeat the same random behavior. If we want to repeat the same behavior, we push the same seed into the pseudo-random generator.
:type random_state: int, RandomState, optional
:param n_jobs: The maximum number of concurrently running jobs. If -1 all CPUs are used. If 1 is given, no parallel computing code is used at all (this is the default).
:type n_jobs: int, optional
:param verbose: The verbosity level: if non-zero, progress messages are printed. Above 50, the output is sent to stdout. The frequency of the messages increases with the verbosity level. If it is more than 10, all iterations are reported. The default is 0.
:type verbose: int, optional
"""
def __init__(self, *args, **kwargs):
super().__init__(*args, **kwargs)
self._placebo_type = kwargs.pop("placebo_type", None)
if self._placebo_type is None:
self._placebo_type = "Random Data"
self._num_simulations = kwargs.pop("num_simulations", CausalRefuter.DEFAULT_NUM_SIMULATIONS)
self._random_state = kwargs.pop("random_state", None)
self.logger = logging.getLogger(__name__)
def refute_estimate(self, show_progress_bar=False):
refute = refute_placebo_treatment(
data=self._data,
target_estimand=self._target_estimand,
estimate=self._estimate,
treatment_names=self._treatment_name,
num_simulations=self._num_simulations,
placebo_type=PlaceboType(self._placebo_type),
random_state=self._random_state,
show_progress_bar=show_progress_bar,
n_jobs=self._n_jobs,
verbose=self._verbose,
)
refute.add_refuter(self)
return refute
def _refute_once(
data: pd.DataFrame,
target_estimand: IdentifiedEstimand,
estimate: CausalEstimate,
treatment_names: List[str],
type_dict: Dict,
placebo_type: PlaceboType = PlaceboType.DEFAULT,
random_state: Optional[np.random.RandomState] = None,
):
if placebo_type == PlaceboType.PERMUTE:
permuted_idx = None
if random_state is None:
permuted_idx = np.random.choice(data.shape[0], size=data.shape[0], replace=False)
else:
permuted_idx = random_state.choice(data.shape[0], size=data.shape[0], replace=False)
new_treatment = data[treatment_names].iloc[permuted_idx].values
if target_estimand.identifier_method.startswith("iv"):
new_instruments_values = data[estimate.estimator.estimating_instrument_names].iloc[permuted_idx].values
new_instruments_df = pd.DataFrame(
new_instruments_values,
columns=["placebo_" + s for s in data[estimate.estimator.estimating_instrument_names].columns],
)
else:
if "float" in type_dict[treatment_names[0]].name:
logger.info(
"Using a Normal Distribution with Mean:{} and Variance:{}".format(
DEFAULT_MEAN_OF_NORMAL,
DEFAULT_STD_DEV_OF_NORMAL,
)
)
new_treatment = np.random.randn(data.shape[0]) * DEFAULT_STD_DEV_OF_NORMAL + DEFAULT_MEAN_OF_NORMAL
elif "bool" in type_dict[treatment_names[0]].name:
logger.info(
"Using a Binomial Distribution with {} trials and {} probability of success".format(
DEFAULT_NUMBER_OF_TRIALS,
DEFAULT_PROBABILITY_OF_BINOMIAL,
)
)
new_treatment = np.random.binomial(
DEFAULT_NUMBER_OF_TRIALS,
DEFAULT_PROBABILITY_OF_BINOMIAL,
data.shape[0],
).astype(bool)
elif "int" in type_dict[treatment_names[0]].name:
logger.info(
"Using a Discrete Uniform Distribution lying between {} and {}".format(
data[treatment_names[0]].min(), data[treatment_names[0]].max()
)
)
new_treatment = np.random.randint(
low=data[treatment_names[0]].min(), high=data[treatment_names[0]].max() + 1, size=data.shape[0]
)
elif "category" in type_dict[treatment_names[0]].name:
categories = data[treatment_names[0]].unique()
logger.info("Using a Discrete Uniform Distribution with the following categories:{}".format(categories))
sample = np.random.choice(categories, size=data.shape[0])
new_treatment = pd.Series(sample, index=data.index).astype("category")
# Create a new column in the data by the name of placebo
new_data = data.assign(placebo=new_treatment)
if target_estimand.identifier_method.startswith("iv"):
new_data = pd.concat((new_data, new_instruments_df), axis=1)
# Sanity check the data
logger.debug(new_data[0:10])
new_estimator = CausalEstimator.get_estimator_object(new_data, target_estimand, estimate)
new_effect = new_estimator.estimate_effect()
return new_effect.value
def refute_placebo_treatment(
data: pd.DataFrame,
target_estimand: IdentifiedEstimand,
estimate: CausalEstimate,
treatment_names: List,
num_simulations: int = 100,
placebo_type: PlaceboType = PlaceboType.DEFAULT,
random_state: Optional[Union[int, np.random.RandomState]] = None,
show_progress_bar: bool = False,
n_jobs: int = 1,
verbose: int = 0,
**_,
) -> CausalRefutation:
"""Refute an estimate by replacing treatment with a randomly-generated placebo variable.
:param data: pd.DataFrame: Data to run the refutation
:param target_estimand: IdentifiedEstimand: Identified estimand to run the refutation
:param estimate: CausalEstimate: Estimate to run the refutation
:param treatment_names: list: List of treatments
:param num_simulations: The number of simulations to be run, which defaults to ``CausalRefuter.DEFAULT_NUM_SIMULATIONS``
:param placebo_type: Default is to generate random values for the treatment. If placebo_type is "permute", then the original treatment values are permuted by row.
:param random_state: The seed value to be added if we wish to repeat the same random behavior. If we want to repeat the same behavior, we push the same seed into the pseudo-random generator.
:param n_jobs: The maximum number of concurrently running jobs. If -1 all CPUs are used. If 1 is given, no parallel computing code is used at all (this is the default).
:param verbose: The verbosity level: if non-zero, progress messages are printed. Above 50, the output is sent to stdout. The frequency of the messages increases with the verbosity level. If it is more than 10, all iterations are reported. The default is 0.
"""
if isinstance(random_state, int):
random_state = np.random.RandomState(random_state)
# only permute is supported for iv methods
if target_estimand.identifier_method.startswith("iv"):
if placebo_type != PlaceboType.PERMUTE:
logger.error(
"Only placebo_type=''permute'' is supported for creating placebo for instrumental variable estimation methods"
)
raise ValueError(
"Only placebo_type=''permute'' is supported for creating placebo for instrumental variable estimation methods."
)
# We need to change the identified estimand
# We make a copy as a safety measure, we don't want to change the
# original DataFrame
identified_estimand = copy.deepcopy(target_estimand)
identified_estimand.treatment_variable = ["placebo"]
if target_estimand.identifier_method.startswith("iv"):
identified_estimand.instrumental_variables = [
"placebo_" + s for s in identified_estimand.instrumental_variables
]
# For IV methods, the estimating_instrument_names should also be
# changed. Create a copy to avoid modifying original object
if estimate.params["method_params"] is not None and "iv_instrument_name" in estimate.params["method_params"]:
estimate = copy.deepcopy(estimate)
estimate.params["method_params"]["iv_instrument_name"] = [
"placebo_" + s for s in parse_state(estimate.params["method_params"]["iv_instrument_name"])
]
logger.info("Refutation over {} simulated datasets of {} treatment".format(num_simulations, placebo_type))
type_dict = dict(data.dtypes)
# Run refutation in parallel
sample_estimates = Parallel(n_jobs=n_jobs, verbose=verbose)(
delayed(_refute_once)(
data, identified_estimand, estimate, treatment_names, type_dict, placebo_type, random_state
)
for _ in tqdm(
range(num_simulations),
disable=not show_progress_bar,
colour=CausalRefuter.PROGRESS_BAR_COLOR,
desc="Refuting Estimates: ",
)
)
sample_estimates = np.array(sample_estimates)
refute = CausalRefutation(
estimate.value, np.mean(sample_estimates), refutation_type="Refute: Use a Placebo Treatment"
)
# Note: We hardcode the estimate value to ZERO as we want to check if it falls in the distribution of the refuter
# Ideally we should expect that ZERO should fall in the distribution of the effect estimates as we have severed any causal
# relationship between the treatment and the outcome.
dummy_estimator = CausalEstimate(
estimate=0,
control_value=estimate.control_value,
treatment_value=estimate.treatment_value,
target_estimand=estimate.target_estimand,
realized_estimand_expr=estimate.realized_estimand_expr,
)
refute.add_significance_test_results(test_significance(dummy_estimator, sample_estimates))
return refute
| import copy
import logging
from enum import Enum
from typing import Dict, List, Optional, Union
import numpy as np
import pandas as pd
from joblib import Parallel, delayed
from tqdm.auto import tqdm
from dowhy.causal_estimator import CausalEstimate
from dowhy.causal_estimators.econml import Econml
from dowhy.causal_estimators.instrumental_variable_estimator import InstrumentalVariableEstimator
from dowhy.causal_identifier.identified_estimand import IdentifiedEstimand
from dowhy.causal_refuter import CausalRefutation, CausalRefuter, test_significance
from dowhy.utils.api import parse_state
logger = logging.getLogger(__name__)
# Default success probability for the binomial distribution
DEFAULT_PROBABILITY_OF_BINOMIAL = 0.5
# Number of trials: the number of coin tosses used to decide whether a sample gets the treatment
DEFAULT_NUMBER_OF_TRIALS = 1
# Mean of the Normal Distribution
DEFAULT_MEAN_OF_NORMAL = 0
# Standard Deviation of the Normal Distribution
DEFAULT_STD_DEV_OF_NORMAL = 0
class PlaceboType(Enum):
DEFAULT = "Random Data"
PERMUTE = "permute"
class PlaceboTreatmentRefuter(CausalRefuter):
"""Refute an estimate by replacing treatment with a randomly-generated placebo variable.
Supports additional parameters that can be specified in the refute_estimate() method. For joblib-related parameters (n_jobs, verbose), please refer to the joblib documentation for more details (https://joblib.readthedocs.io/en/latest/generated/joblib.Parallel.html).
:param placebo_type: Default is to generate random values for the treatment. If placebo_type is "permute", then the original treatment values are permuted by row.
:type placebo_type: str, optional
:param num_simulations: The number of simulations to be run, which is ``CausalRefuter.DEFAULT_NUM_SIMULATIONS`` by default
:type num_simulations: int, optional
:param random_state: The seed value to be added if we wish to repeat the same random behavior. If we want to repeat the same behavior, we push the same seed into the pseudo-random generator.
:type random_state: int, RandomState, optional
:param n_jobs: The maximum number of concurrently running jobs. If -1 all CPUs are used. If 1 is given, no parallel computing code is used at all (this is the default).
:type n_jobs: int, optional
:param verbose: The verbosity level: if non-zero, progress messages are printed. Above 50, the output is sent to stdout. The frequency of the messages increases with the verbosity level. If it is more than 10, all iterations are reported. The default is 0.
:type verbose: int, optional
"""
def __init__(self, *args, **kwargs):
super().__init__(*args, **kwargs)
self._placebo_type = kwargs.pop("placebo_type", None)
if self._placebo_type is None:
self._placebo_type = "Random Data"
self._num_simulations = kwargs.pop("num_simulations", CausalRefuter.DEFAULT_NUM_SIMULATIONS)
self._random_state = kwargs.pop("random_state", None)
self.logger = logging.getLogger(__name__)
def refute_estimate(self, show_progress_bar=False):
refute = refute_placebo_treatment(
data=self._data,
target_estimand=self._target_estimand,
estimate=self._estimate,
treatment_names=self._treatment_name,
num_simulations=self._num_simulations,
placebo_type=PlaceboType(self._placebo_type),
random_state=self._random_state,
show_progress_bar=show_progress_bar,
n_jobs=self._n_jobs,
verbose=self._verbose,
)
refute.add_refuter(self)
return refute
def _refute_once(
data: pd.DataFrame,
target_estimand: IdentifiedEstimand,
estimate: CausalEstimate,
treatment_names: List[str],
type_dict: Dict,
placebo_type: PlaceboType = PlaceboType.DEFAULT,
random_state: Optional[np.random.RandomState] = None,
):
if placebo_type == PlaceboType.PERMUTE:
permuted_idx = None
if random_state is None:
permuted_idx = np.random.choice(data.shape[0], size=data.shape[0], replace=False)
else:
permuted_idx = random_state.choice(data.shape[0], size=data.shape[0], replace=False)
new_treatment = data[treatment_names].iloc[permuted_idx].values
if target_estimand.identifier_method.startswith("iv"):
new_instruments_values = data[estimate.estimator.estimating_instrument_names].iloc[permuted_idx].values
new_instruments_df = pd.DataFrame(
new_instruments_values,
columns=["placebo_" + s for s in data[estimate.estimator.estimating_instrument_names].columns],
)
else:
if "float" in type_dict[treatment_names[0]].name:
logger.info(
"Using a Normal Distribution with Mean:{} and Variance:{}".format(
DEFAULT_MEAN_OF_NORMAL,
DEFAULT_STD_DEV_OF_NORMAL,
)
)
new_treatment = np.random.randn(data.shape[0]) * DEFAULT_STD_DEV_OF_NORMAL + DEFAULT_MEAN_OF_NORMAL
elif "bool" in type_dict[treatment_names[0]].name:
logger.info(
"Using a Binomial Distribution with {} trials and {} probability of success".format(
DEFAULT_NUMBER_OF_TRIALS,
DEFAULT_PROBABILITY_OF_BINOMIAL,
)
)
new_treatment = np.random.binomial(
DEFAULT_NUMBER_OF_TRIALS,
DEFAULT_PROBABILITY_OF_BINOMIAL,
data.shape[0],
).astype(bool)
elif "int" in type_dict[treatment_names[0]].name:
logger.info(
"Using a Discrete Uniform Distribution lying between {} and {}".format(
data[treatment_names[0]].min(), data[treatment_names[0]].max()
)
)
new_treatment = np.random.randint(
low=data[treatment_names[0]].min(), high=data[treatment_names[0]].max() + 1, size=data.shape[0]
)
elif "category" in type_dict[treatment_names[0]].name:
categories = data[treatment_names[0]].unique()
logger.info("Using a Discrete Uniform Distribution with the following categories:{}".format(categories))
sample = np.random.choice(categories, size=data.shape[0])
new_treatment = pd.Series(sample, index=data.index).astype("category")
# Create a new column in the data by the name of placebo
new_data = data.assign(placebo=new_treatment)
if target_estimand.identifier_method.startswith("iv"):
new_data = pd.concat((new_data, new_instruments_df), axis=1)
# Sanity check the data
logger.debug(new_data[0:10])
new_estimator = estimate.estimator.get_new_estimator_object(target_estimand)
new_estimator.fit(
new_data,
target_estimand.treatment_variable,
target_estimand.outcome_variable,
estimate.estimator._effect_modifier_names,
**new_estimator._econml_fit_params if isinstance(new_estimator, Econml) else {},
)
new_effect = new_estimator.estimate_effect(
control_value=estimate.control_value,
treatment_value=estimate.treatment_value,
target_units=estimate.estimator._target_units,
)
return new_effect.value
def refute_placebo_treatment(
data: pd.DataFrame,
target_estimand: IdentifiedEstimand,
estimate: CausalEstimate,
treatment_names: List,
num_simulations: int = 100,
placebo_type: PlaceboType = PlaceboType.DEFAULT,
random_state: Optional[Union[int, np.random.RandomState]] = None,
show_progress_bar: bool = False,
n_jobs: int = 1,
verbose: int = 0,
**_,
) -> CausalRefutation:
"""Refute an estimate by replacing treatment with a randomly-generated placebo variable.
:param data: pd.DataFrame: Data to run the refutation
:param target_estimand: IdentifiedEstimand: Identified estimand to run the refutation
:param estimate: CausalEstimate: Estimate to run the refutation
:param treatment_names: list: List of treatments
:param num_simulations: The number of simulations to be run, which defaults to ``CausalRefuter.DEFAULT_NUM_SIMULATIONS``
:param placebo_type: Default is to generate random values for the treatment. If placebo_type is "permute", then the original treatment values are permuted by row.
:param random_state: The seed value to be added if we wish to repeat the same random behavior. If we want to repeat the same behavior, we push the same seed into the pseudo-random generator.
:param n_jobs: The maximum number of concurrently running jobs. If -1 all CPUs are used. If 1 is given, no parallel computing code is used at all (this is the default).
:param verbose: The verbosity level: if non-zero, progress messages are printed. Above 50, the output is sent to stdout. The frequency of the messages increases with the verbosity level. If it is more than 10, all iterations are reported. The default is 0.
"""
if isinstance(random_state, int):
random_state = np.random.RandomState(random_state)
# only permute is supported for iv methods
if target_estimand.identifier_method.startswith("iv"):
if placebo_type != PlaceboType.PERMUTE:
logger.error(
"Only placebo_type=''permute'' is supported for creating placebo for instrumental variable estimation methods"
)
raise ValueError(
"Only placebo_type=''permute'' is supported for creating placebo for instrumental variable estimation methods."
)
# For IV methods, the estimating_instrument_names should also be
# changed. Create a copy to avoid modifying original object
if isinstance(estimate, InstrumentalVariableEstimator):
estimate = copy.deepcopy(estimate)
estimate.iv_instrument_name = ["placebo_" + s for s in parse_state(estimate.iv_instrument_name)]
# We need to change the identified estimand
# We make a copy as a safety measure, we don't want to change the
# original DataFrame
identified_estimand = copy.deepcopy(target_estimand)
identified_estimand.treatment_variable = ["placebo"]
if target_estimand.identifier_method.startswith("iv"):
identified_estimand.instrumental_variables = [
"placebo_" + s for s in identified_estimand.instrumental_variables
]
logger.info("Refutation over {} simulated datasets of {} treatment".format(num_simulations, placebo_type))
type_dict = dict(data.dtypes)
# Run refutation in parallel
sample_estimates = Parallel(n_jobs=n_jobs, verbose=verbose)(
delayed(_refute_once)(
data, identified_estimand, estimate, treatment_names, type_dict, placebo_type, random_state
)
for _ in tqdm(
range(num_simulations),
disable=not show_progress_bar,
colour=CausalRefuter.PROGRESS_BAR_COLOR,
desc="Refuting Estimates: ",
)
)
sample_estimates = np.array(sample_estimates)
refute = CausalRefutation(
estimate.value, np.mean(sample_estimates), refutation_type="Refute: Use a Placebo Treatment"
)
# Note: We hardcode the estimate value to ZERO as we want to check if it falls in the distribution of the refuter
# Ideally we should expect that ZERO should fall in the distribution of the effect estimates as we have severed any causal
# relationship between the treatment and the outcome.
dummy_estimator = CausalEstimate(
estimate=0,
control_value=estimate.control_value,
treatment_value=estimate.treatment_value,
target_estimand=estimate.target_estimand,
realized_estimand_expr=estimate.realized_estimand_expr,
)
refute.add_significance_test_results(test_significance(dummy_estimator, sample_estimates))
return refute
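# Usage sketch for the instrumental-variable path (illustrative only, not part
# of the original module): IV estimands accept only placebo_type="permute",
# which permutes the treatment and renames the permuted instrument columns.
if __name__ == "__main__":
    import dowhy.datasets
    from dowhy import CausalModel

    sim = dowhy.datasets.linear_dataset(beta=10, num_common_causes=4, num_instruments=1, num_samples=1000, treatment_is_binary=True)
    model = CausalModel(data=sim["df"], treatment=sim["treatment_name"], outcome=sim["outcome_name"], graph=sim["gml_graph"])
    iv_estimand = model.identify_effect(proceed_when_unidentifiable=True)
    iv_estimate = model.estimate_effect(iv_estimand, method_name="iv.instrumental_variable")
    print(refute_placebo_treatment(sim["df"], iv_estimand, iv_estimate, treatment_names=[sim["treatment_name"]], placebo_type=PlaceboType.PERMUTE, num_simulations=10))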
| andresmor-ms | 11c4e0dafd6e824eb81ad14262457d954ae61468 | affe0952f4aba6845247355c171565510c2c1673 | why is this IV code removed? is it redundant? | amit-sharma | 258 |
py-why/dowhy | 746 | Functional api/causal estimators | * Introduce `fit()` method to estimators.
* Refactor constructors to avoid using `*args` and `**kwargs` and have more explicit parameters.
* Refactor refuters and other parts of the code to use `fit()` and modify arguments to `estimate_effect()` | null | 2022-11-04 16:15:39+00:00 | 2022-12-03 17:07:53+00:00 | dowhy/causal_refuters/placebo_treatment_refuter.py | import copy
import logging
from enum import Enum
from typing import Dict, List, Optional, Union
import numpy as np
import pandas as pd
from joblib import Parallel, delayed
from tqdm.auto import tqdm
from dowhy.causal_estimator import CausalEstimate, CausalEstimator
from dowhy.causal_identifier.identified_estimand import IdentifiedEstimand
from dowhy.causal_refuter import CausalRefutation, CausalRefuter, test_significance
from dowhy.utils.api import parse_state
logger = logging.getLogger(__name__)
# Default value of the p value taken for the distribution
DEFAULT_PROBABILITY_OF_BINOMIAL = 0.5
# Number of trials: number of coin tosses used to decide whether a sample gets the treatment
DEFAULT_NUMBER_OF_TRIALS = 1
# Mean of the Normal Distribution
DEFAULT_MEAN_OF_NORMAL = 0
# Standard Deviation of the Normal Distribution
DEFAULT_STD_DEV_OF_NORMAL = 0
class PlaceboType(Enum):
DEFAULT = "Random Data"
PERMUTE = "permute"
class PlaceboTreatmentRefuter(CausalRefuter):
"""Refute an estimate by replacing treatment with a randomly-generated placebo variable.
Supports additional parameters that can be specified in the refute_estimate() method. For joblib-related parameters (n_jobs, verbose), please refer to the joblib documentation for more details (https://joblib.readthedocs.io/en/latest/generated/joblib.Parallel.html).
:param placebo_type: Default is to generate random values for the treatment. If placebo_type is "permute", then the original treatment values are permuted by row.
:type placebo_type: str, optional
:param num_simulations: The number of simulations to be run, which is ``CausalRefuter.DEFAULT_NUM_SIMULATIONS`` by default
:type num_simulations: int, optional
    :param random_state: The seed value for the pseudo-random generator. Pass the same seed to reproduce the same random behavior.
    :type random_state: int, RandomState, optional
    :param n_jobs: The maximum number of concurrently running jobs. If -1, all CPUs are used. If 1 is given, no parallel computing code is used at all (this is the default).
    :type n_jobs: int, optional
    :param verbose: The verbosity level: if non-zero, progress messages are printed. Above 50, the output is sent to stdout. The frequency of the messages increases with the verbosity level. If it is more than 10, all iterations are reported. The default is 0.
    :type verbose: int, optional
"""
def __init__(self, *args, **kwargs):
super().__init__(*args, **kwargs)
self._placebo_type = kwargs.pop("placebo_type", None)
if self._placebo_type is None:
self._placebo_type = "Random Data"
self._num_simulations = kwargs.pop("num_simulations", CausalRefuter.DEFAULT_NUM_SIMULATIONS)
self._random_state = kwargs.pop("random_state", None)
self.logger = logging.getLogger(__name__)
def refute_estimate(self, show_progress_bar=False):
refute = refute_placebo_treatment(
data=self._data,
target_estimand=self._target_estimand,
estimate=self._estimate,
treatment_names=self._treatment_name,
num_simulations=self._num_simulations,
placebo_type=PlaceboType(self._placebo_type),
random_state=self._random_state,
show_progress_bar=show_progress_bar,
n_jobs=self._n_jobs,
verbose=self._verbose,
)
refute.add_refuter(self)
return refute
def _refute_once(
data: pd.DataFrame,
target_estimand: IdentifiedEstimand,
estimate: CausalEstimate,
treatment_names: List[str],
type_dict: Dict,
placebo_type: PlaceboType = PlaceboType.DEFAULT,
random_state: Optional[np.random.RandomState] = None,
):
if placebo_type == PlaceboType.PERMUTE:
permuted_idx = None
if random_state is None:
permuted_idx = np.random.choice(data.shape[0], size=data.shape[0], replace=False)
else:
permuted_idx = random_state.choice(data.shape[0], size=data.shape[0], replace=False)
new_treatment = data[treatment_names].iloc[permuted_idx].values
if target_estimand.identifier_method.startswith("iv"):
new_instruments_values = data[estimate.estimator.estimating_instrument_names].iloc[permuted_idx].values
new_instruments_df = pd.DataFrame(
new_instruments_values,
columns=["placebo_" + s for s in data[estimate.estimator.estimating_instrument_names].columns],
)
else:
if "float" in type_dict[treatment_names[0]].name:
logger.info(
"Using a Normal Distribution with Mean:{} and Variance:{}".format(
DEFAULT_MEAN_OF_NORMAL,
DEFAULT_STD_DEV_OF_NORMAL,
)
)
new_treatment = np.random.randn(data.shape[0]) * DEFAULT_STD_DEV_OF_NORMAL + DEFAULT_MEAN_OF_NORMAL
elif "bool" in type_dict[treatment_names[0]].name:
logger.info(
"Using a Binomial Distribution with {} trials and {} probability of success".format(
DEFAULT_NUMBER_OF_TRIALS,
DEFAULT_PROBABILITY_OF_BINOMIAL,
)
)
new_treatment = np.random.binomial(
DEFAULT_NUMBER_OF_TRIALS,
DEFAULT_PROBABILITY_OF_BINOMIAL,
data.shape[0],
).astype(bool)
elif "int" in type_dict[treatment_names[0]].name:
logger.info(
"Using a Discrete Uniform Distribution lying between {} and {}".format(
data[treatment_names[0]].min(), data[treatment_names[0]].max()
)
)
new_treatment = np.random.randint(
low=data[treatment_names[0]].min(), high=data[treatment_names[0]].max() + 1, size=data.shape[0]
)
elif "category" in type_dict[treatment_names[0]].name:
categories = data[treatment_names[0]].unique()
logger.info("Using a Discrete Uniform Distribution with the following categories:{}".format(categories))
sample = np.random.choice(categories, size=data.shape[0])
new_treatment = pd.Series(sample, index=data.index).astype("category")
# Create a new column in the data by the name of placebo
new_data = data.assign(placebo=new_treatment)
if target_estimand.identifier_method.startswith("iv"):
new_data = pd.concat((new_data, new_instruments_df), axis=1)
# Sanity check the data
logger.debug(new_data[0:10])
new_estimator = CausalEstimator.get_estimator_object(new_data, target_estimand, estimate)
new_effect = new_estimator.estimate_effect()
return new_effect.value
def refute_placebo_treatment(
data: pd.DataFrame,
target_estimand: IdentifiedEstimand,
estimate: CausalEstimate,
treatment_names: List,
num_simulations: int = 100,
placebo_type: PlaceboType = PlaceboType.DEFAULT,
random_state: Optional[Union[int, np.random.RandomState]] = None,
show_progress_bar: bool = False,
n_jobs: int = 1,
verbose: int = 0,
**_,
) -> CausalRefutation:
"""Refute an estimate by replacing treatment with a randomly-generated placebo variable.
:param data: pd.DataFrame: Data to run the refutation
:param target_estimand: IdentifiedEstimand: Identified estimand to run the refutation
:param estimate: CausalEstimate: Estimate to run the refutation
:param treatment_names: list: List of treatments
:param num_simulations: The number of simulations to be run, which defaults to ``CausalRefuter.DEFAULT_NUM_SIMULATIONS``
:param placebo_type: Default is to generate random values for the treatment. If placebo_type is "permute", then the original treatment values are permuted by row.
    :param random_state: The seed value for the pseudo-random generator. Pass the same seed to reproduce the same random behavior.
    :param n_jobs: The maximum number of concurrently running jobs. If -1, all CPUs are used. If 1 is given, no parallel computing code is used at all (this is the default).
    :param verbose: The verbosity level: if non-zero, progress messages are printed. Above 50, the output is sent to stdout. The frequency of the messages increases with the verbosity level. If it is more than 10, all iterations are reported. The default is 0.
"""
if isinstance(random_state, int):
random_state = np.random.RandomState(random_state)
# only permute is supported for iv methods
if target_estimand.identifier_method.startswith("iv"):
if placebo_type != PlaceboType.PERMUTE:
            logger.error(
                "Only placebo_type='permute' is supported for creating a placebo for instrumental variable estimation methods"
            )
            raise ValueError(
                "Only placebo_type='permute' is supported for creating a placebo for instrumental variable estimation methods."
            )
# We need to change the identified estimand
# We make a copy as a safety measure, we don't want to change the
# original DataFrame
identified_estimand = copy.deepcopy(target_estimand)
identified_estimand.treatment_variable = ["placebo"]
if target_estimand.identifier_method.startswith("iv"):
identified_estimand.instrumental_variables = [
"placebo_" + s for s in identified_estimand.instrumental_variables
]
# For IV methods, the estimating_instrument_names should also be
# changed. Create a copy to avoid modifying original object
if estimate.params["method_params"] is not None and "iv_instrument_name" in estimate.params["method_params"]:
estimate = copy.deepcopy(estimate)
estimate.params["method_params"]["iv_instrument_name"] = [
"placebo_" + s for s in parse_state(estimate.params["method_params"]["iv_instrument_name"])
]
logger.info("Refutation over {} simulated datasets of {} treatment".format(num_simulations, placebo_type))
type_dict = dict(data.dtypes)
# Run refutation in parallel
sample_estimates = Parallel(n_jobs=n_jobs, verbose=verbose)(
delayed(_refute_once)(
data, identified_estimand, estimate, treatment_names, type_dict, placebo_type, random_state
)
for _ in tqdm(
range(num_simulations),
disable=not show_progress_bar,
colour=CausalRefuter.PROGRESS_BAR_COLOR,
desc="Refuting Estimates: ",
)
)
sample_estimates = np.array(sample_estimates)
refute = CausalRefutation(
estimate.value, np.mean(sample_estimates), refutation_type="Refute: Use a Placebo Treatment"
)
    # Note: We hardcode the estimate value to ZERO because we want to check whether it falls within the
    # distribution of the refuted estimates. Ideally, ZERO should fall in that distribution, as we have
    # severed any causal relationship between the treatment and the outcome.
dummy_estimator = CausalEstimate(
estimate=0,
control_value=estimate.control_value,
treatment_value=estimate.treatment_value,
target_estimand=estimate.target_estimand,
realized_estimand_expr=estimate.realized_estimand_expr,
)
refute.add_significance_test_results(test_significance(dummy_estimator, sample_estimates))
return refute
| import copy
import logging
from enum import Enum
from typing import Dict, List, Optional, Union
import numpy as np
import pandas as pd
from joblib import Parallel, delayed
from tqdm.auto import tqdm
from dowhy.causal_estimator import CausalEstimate
from dowhy.causal_estimators.econml import Econml
from dowhy.causal_estimators.instrumental_variable_estimator import InstrumentalVariableEstimator
from dowhy.causal_identifier.identified_estimand import IdentifiedEstimand
from dowhy.causal_refuter import CausalRefutation, CausalRefuter, test_significance
from dowhy.utils.api import parse_state
logger = logging.getLogger(__name__)
# Default value of the p value taken for the distribution
DEFAULT_PROBABILITY_OF_BINOMIAL = 0.5
# Number of trials: number of coin tosses used to decide whether a sample gets the treatment
DEFAULT_NUMBER_OF_TRIALS = 1
# Mean of the Normal Distribution
DEFAULT_MEAN_OF_NORMAL = 0
# Standard Deviation of the Normal Distribution
DEFAULT_STD_DEV_OF_NORMAL = 0
class PlaceboType(Enum):
DEFAULT = "Random Data"
PERMUTE = "permute"
class PlaceboTreatmentRefuter(CausalRefuter):
"""Refute an estimate by replacing treatment with a randomly-generated placebo variable.
Supports additional parameters that can be specified in the refute_estimate() method. For joblib-related parameters (n_jobs, verbose), please refer to the joblib documentation for more details (https://joblib.readthedocs.io/en/latest/generated/joblib.Parallel.html).
:param placebo_type: Default is to generate random values for the treatment. If placebo_type is "permute", then the original treatment values are permuted by row.
:type placebo_type: str, optional
:param num_simulations: The number of simulations to be run, which is ``CausalRefuter.DEFAULT_NUM_SIMULATIONS`` by default
:type num_simulations: int, optional
    :param random_state: The seed value for the pseudo-random generator. Pass the same seed to reproduce the same random behavior.
    :type random_state: int, RandomState, optional
    :param n_jobs: The maximum number of concurrently running jobs. If -1, all CPUs are used. If 1 is given, no parallel computing code is used at all (this is the default).
    :type n_jobs: int, optional
    :param verbose: The verbosity level: if non-zero, progress messages are printed. Above 50, the output is sent to stdout. The frequency of the messages increases with the verbosity level. If it is more than 10, all iterations are reported. The default is 0.
    :type verbose: int, optional
"""
def __init__(self, *args, **kwargs):
super().__init__(*args, **kwargs)
self._placebo_type = kwargs.pop("placebo_type", None)
if self._placebo_type is None:
self._placebo_type = "Random Data"
self._num_simulations = kwargs.pop("num_simulations", CausalRefuter.DEFAULT_NUM_SIMULATIONS)
self._random_state = kwargs.pop("random_state", None)
self.logger = logging.getLogger(__name__)
def refute_estimate(self, show_progress_bar=False):
refute = refute_placebo_treatment(
data=self._data,
target_estimand=self._target_estimand,
estimate=self._estimate,
treatment_names=self._treatment_name,
num_simulations=self._num_simulations,
placebo_type=PlaceboType(self._placebo_type),
random_state=self._random_state,
show_progress_bar=show_progress_bar,
n_jobs=self._n_jobs,
verbose=self._verbose,
)
refute.add_refuter(self)
return refute
def _refute_once(
data: pd.DataFrame,
target_estimand: IdentifiedEstimand,
estimate: CausalEstimate,
treatment_names: List[str],
type_dict: Dict,
placebo_type: PlaceboType = PlaceboType.DEFAULT,
random_state: Optional[np.random.RandomState] = None,
):
if placebo_type == PlaceboType.PERMUTE:
permuted_idx = None
if random_state is None:
permuted_idx = np.random.choice(data.shape[0], size=data.shape[0], replace=False)
else:
permuted_idx = random_state.choice(data.shape[0], size=data.shape[0], replace=False)
new_treatment = data[treatment_names].iloc[permuted_idx].values
if target_estimand.identifier_method.startswith("iv"):
new_instruments_values = data[estimate.estimator.estimating_instrument_names].iloc[permuted_idx].values
new_instruments_df = pd.DataFrame(
new_instruments_values,
columns=["placebo_" + s for s in data[estimate.estimator.estimating_instrument_names].columns],
)
else:
if "float" in type_dict[treatment_names[0]].name:
logger.info(
"Using a Normal Distribution with Mean:{} and Variance:{}".format(
DEFAULT_MEAN_OF_NORMAL,
DEFAULT_STD_DEV_OF_NORMAL,
)
)
new_treatment = np.random.randn(data.shape[0]) * DEFAULT_STD_DEV_OF_NORMAL + DEFAULT_MEAN_OF_NORMAL
elif "bool" in type_dict[treatment_names[0]].name:
logger.info(
"Using a Binomial Distribution with {} trials and {} probability of success".format(
DEFAULT_NUMBER_OF_TRIALS,
DEFAULT_PROBABILITY_OF_BINOMIAL,
)
)
new_treatment = np.random.binomial(
DEFAULT_NUMBER_OF_TRIALS,
DEFAULT_PROBABILITY_OF_BINOMIAL,
data.shape[0],
).astype(bool)
elif "int" in type_dict[treatment_names[0]].name:
logger.info(
"Using a Discrete Uniform Distribution lying between {} and {}".format(
data[treatment_names[0]].min(), data[treatment_names[0]].max()
)
)
new_treatment = np.random.randint(
low=data[treatment_names[0]].min(), high=data[treatment_names[0]].max() + 1, size=data.shape[0]
)
elif "category" in type_dict[treatment_names[0]].name:
categories = data[treatment_names[0]].unique()
logger.info("Using a Discrete Uniform Distribution with the following categories:{}".format(categories))
sample = np.random.choice(categories, size=data.shape[0])
new_treatment = pd.Series(sample, index=data.index).astype("category")
# Create a new column in the data by the name of placebo
new_data = data.assign(placebo=new_treatment)
if target_estimand.identifier_method.startswith("iv"):
new_data = pd.concat((new_data, new_instruments_df), axis=1)
# Sanity check the data
logger.debug(new_data[0:10])
new_estimator = estimate.estimator.get_new_estimator_object(target_estimand)
new_estimator.fit(
new_data,
target_estimand.treatment_variable,
target_estimand.outcome_variable,
estimate.estimator._effect_modifier_names,
**new_estimator._econml_fit_params if isinstance(new_estimator, Econml) else {},
)
new_effect = new_estimator.estimate_effect(
control_value=estimate.control_value,
treatment_value=estimate.treatment_value,
target_units=estimate.estimator._target_units,
)
return new_effect.value
def refute_placebo_treatment(
data: pd.DataFrame,
target_estimand: IdentifiedEstimand,
estimate: CausalEstimate,
treatment_names: List,
num_simulations: int = 100,
placebo_type: PlaceboType = PlaceboType.DEFAULT,
random_state: Optional[Union[int, np.random.RandomState]] = None,
show_progress_bar: bool = False,
n_jobs: int = 1,
verbose: int = 0,
**_,
) -> CausalRefutation:
"""Refute an estimate by replacing treatment with a randomly-generated placebo variable.
:param data: pd.DataFrame: Data to run the refutation
:param target_estimand: IdentifiedEstimand: Identified estimand to run the refutation
:param estimate: CausalEstimate: Estimate to run the refutation
:param treatment_names: list: List of treatments
:param num_simulations: The number of simulations to be run, which defaults to ``CausalRefuter.DEFAULT_NUM_SIMULATIONS``
:param placebo_type: Default is to generate random values for the treatment. If placebo_type is "permute", then the original treatment values are permuted by row.
    :param random_state: The seed value for the pseudo-random generator. Pass the same seed to reproduce the same random behavior.
    :param n_jobs: The maximum number of concurrently running jobs. If -1, all CPUs are used. If 1 is given, no parallel computing code is used at all (this is the default).
    :param verbose: The verbosity level: if non-zero, progress messages are printed. Above 50, the output is sent to stdout. The frequency of the messages increases with the verbosity level. If it is more than 10, all iterations are reported. The default is 0.
"""
if isinstance(random_state, int):
random_state = np.random.RandomState(random_state)
# only permute is supported for iv methods
if target_estimand.identifier_method.startswith("iv"):
if placebo_type != PlaceboType.PERMUTE:
            logger.error(
                "Only placebo_type='permute' is supported for creating a placebo for instrumental variable estimation methods"
            )
            raise ValueError(
                "Only placebo_type='permute' is supported for creating a placebo for instrumental variable estimation methods."
            )
# For IV methods, the estimating_instrument_names should also be
# changed. Create a copy to avoid modifying original object
if isinstance(estimate, InstrumentalVariableEstimator):
estimate = copy.deepcopy(estimate)
estimate.iv_instrument_name = ["placebo_" + s for s in parse_state(estimate.iv_instrument_name)]
# We need to change the identified estimand
# We make a copy as a safety measure, we don't want to change the
# original DataFrame
identified_estimand = copy.deepcopy(target_estimand)
identified_estimand.treatment_variable = ["placebo"]
if target_estimand.identifier_method.startswith("iv"):
identified_estimand.instrumental_variables = [
"placebo_" + s for s in identified_estimand.instrumental_variables
]
logger.info("Refutation over {} simulated datasets of {} treatment".format(num_simulations, placebo_type))
type_dict = dict(data.dtypes)
# Run refutation in parallel
sample_estimates = Parallel(n_jobs=n_jobs, verbose=verbose)(
delayed(_refute_once)(
data, identified_estimand, estimate, treatment_names, type_dict, placebo_type, random_state
)
for _ in tqdm(
range(num_simulations),
disable=not show_progress_bar,
colour=CausalRefuter.PROGRESS_BAR_COLOR,
desc="Refuting Estimates: ",
)
)
sample_estimates = np.array(sample_estimates)
refute = CausalRefutation(
estimate.value, np.mean(sample_estimates), refutation_type="Refute: Use a Placebo Treatment"
)
    # Note: We hardcode the estimate value to ZERO because we want to check whether it falls within the
    # distribution of the refuted estimates. Ideally, ZERO should fall in that distribution, as we have
    # severed any causal relationship between the treatment and the outcome.
dummy_estimator = CausalEstimate(
estimate=0,
control_value=estimate.control_value,
treatment_value=estimate.treatment_value,
target_estimand=estimate.target_estimand,
realized_estimand_expr=estimate.realized_estimand_expr,
)
refute.add_significance_test_results(test_significance(dummy_estimator, sample_estimates))
return refute
| andresmor-ms | 11c4e0dafd6e824eb81ad14262457d954ae61468 | affe0952f4aba6845247355c171565510c2c1673 | it is used now by updating a piece of code i removed by mistake | andresmor-ms | 259 |
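The substantive change in the diff above is inside _refute_once: estimator construction, fitting, and effect estimation are now three explicit steps instead of one bundled call. A sketch of the new pattern, mirroring the diff (the Econml-specific fit parameters are omitted for brevity; refit_on_placebo_data is an illustrative name, not part of dowhy):

import pandas as pd

from dowhy.causal_estimator import CausalEstimate
from dowhy.causal_identifier.identified_estimand import IdentifiedEstimand


def refit_on_placebo_data(new_data: pd.DataFrame, target_estimand: IdentifiedEstimand, estimate: CausalEstimate):
    # get_new_estimator_object() copies the estimator's configuration without its fitted state.
    new_estimator = estimate.estimator.get_new_estimator_object(target_estimand)
    # fit() binds the fresh estimator to the placebo dataset explicitly.
    new_estimator.fit(
        new_data,
        target_estimand.treatment_variable,
        target_estimand.outcome_variable,
        estimate.estimator._effect_modifier_names,
    )
    # estimate_effect() now receives the counterfactual values as arguments
    # rather than reading them from constructor-time state.
    new_effect = new_estimator.estimate_effect(
        control_value=estimate.control_value,
        treatment_value=estimate.treatment_value,
        target_units=estimate.estimator._target_units,
    )
    return new_effect.value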
py-why/dowhy | 746 | Functional api/causal estimators | * Introduce `fit()` method to estimators.
* Refactor constructors to avoid using `*args` and `**kwargs` and have more explicit parameters.
* Refactor refuters and other parts of the code to use `fit()` and modify arguments to `estimate_effect()` | null | 2022-11-04 16:15:39+00:00 | 2022-12-03 17:07:53+00:00 | dowhy/causal_refuters/placebo_treatment_refuter.py | (before_content and after_content identical to the previous row) | andresmor-ms | 11c4e0dafd6e824eb81ad14262457d954ae61468 | affe0952f4aba6845247355c171565510c2c1673 | good catch, I removed it by mistake. | andresmor-ms | 260
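From the user's side, the refuter is still driven through the public CausalModel API; only the internals changed in this PR. A minimal usage sketch, assuming the standard dowhy entry points (dataset shape and parameter values are arbitrary):

import dowhy.datasets
from dowhy import CausalModel

# Simulated data with a known linear effect of the treatment on the outcome.
data = dowhy.datasets.linear_dataset(beta=10, num_common_causes=3, num_samples=1000, treatment_is_binary=True)
model = CausalModel(
    data=data["df"],
    treatment=data["treatment_name"],
    outcome=data["outcome_name"],
    graph=data["gml_graph"],
)
identified_estimand = model.identify_effect(proceed_when_unidentifiable=True)
estimate = model.estimate_effect(identified_estimand, method_name="backdoor.propensity_score_matching")

# On permuted (placebo) treatments the refuter should report an effect near zero.
refutation = model.refute_estimate(
    identified_estimand,
    estimate,
    method_name="placebo_treatment_refuter",
    placebo_type="permute",
    num_simulations=20,
)
print(refutation)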
py-why/dowhy | 746 | Functional api/causal estimators | * Introduce `fit()` method to estimators.
* Refactor constructors to avoid using `*args` and `**kwargs` and have more explicit parameters.
* Refactor refuters and other parts of the code to use `fit()` and modify arguments to `estimate_effect()` | null | 2022-11-04 16:15:39+00:00 | 2022-12-03 17:07:53+00:00 | poetry.lock | [[package]]
name = "absl-py"
version = "1.3.0"
description = "Abseil Python Common Libraries, see https://github.com/abseil/abseil-py."
category = "dev"
optional = false
python-versions = ">=3.6"
[[package]]
name = "alabaster"
version = "0.7.12"
description = "A configurable sidebar-enabled Sphinx theme"
category = "main"
optional = false
python-versions = "*"
[[package]]
name = "anyio"
version = "3.6.2"
description = "High level compatibility layer for multiple asynchronous event loop implementations"
category = "dev"
optional = false
python-versions = ">=3.6.2"
[package.dependencies]
idna = ">=2.8"
sniffio = ">=1.1"
[package.extras]
doc = ["packaging", "sphinx-autodoc-typehints (>=1.2.0)", "sphinx-rtd-theme"]
test = ["contextlib2", "coverage[toml] (>=4.5)", "hypothesis (>=4.0)", "mock (>=4)", "pytest (>=7.0)", "pytest-mock (>=3.6.1)", "trustme", "uvloop (<0.15)", "uvloop (>=0.15)"]
trio = ["trio (>=0.16,<0.22)"]
[[package]]
name = "appnope"
version = "0.1.3"
description = "Disable App Nap on macOS >= 10.9"
category = "dev"
optional = false
python-versions = "*"
[[package]]
name = "argon2-cffi"
version = "21.3.0"
description = "The secure Argon2 password hashing algorithm."
category = "dev"
optional = false
python-versions = ">=3.6"
[package.dependencies]
argon2-cffi-bindings = "*"
[package.extras]
dev = ["cogapp", "coverage[toml] (>=5.0.2)", "furo", "hypothesis", "pre-commit", "pytest", "sphinx", "sphinx-notfound-page", "tomli"]
docs = ["furo", "sphinx", "sphinx-notfound-page"]
tests = ["coverage[toml] (>=5.0.2)", "hypothesis", "pytest"]
[[package]]
name = "argon2-cffi-bindings"
version = "21.2.0"
description = "Low-level CFFI bindings for Argon2"
category = "dev"
optional = false
python-versions = ">=3.6"
[package.dependencies]
cffi = ">=1.0.1"
[package.extras]
dev = ["cogapp", "pre-commit", "pytest", "wheel"]
tests = ["pytest"]
[[package]]
name = "asttokens"
version = "2.1.0"
description = "Annotate AST trees with source code positions"
category = "dev"
optional = false
python-versions = "*"
[package.dependencies]
six = "*"
[package.extras]
test = ["astroid (<=2.5.3)", "pytest"]
[[package]]
name = "astunparse"
version = "1.6.3"
description = "An AST unparser for Python"
category = "dev"
optional = false
python-versions = "*"
[package.dependencies]
six = ">=1.6.1,<2.0"
wheel = ">=0.23.0,<1.0"
[[package]]
name = "attrs"
version = "22.1.0"
description = "Classes Without Boilerplate"
category = "dev"
optional = false
python-versions = ">=3.5"
[package.extras]
dev = ["cloudpickle", "coverage[toml] (>=5.0.2)", "furo", "hypothesis", "mypy (>=0.900,!=0.940)", "pre-commit", "pympler", "pytest (>=4.3.0)", "pytest-mypy-plugins", "sphinx", "sphinx-notfound-page", "zope.interface"]
docs = ["furo", "sphinx", "sphinx-notfound-page", "zope.interface"]
tests = ["cloudpickle", "coverage[toml] (>=5.0.2)", "hypothesis", "mypy (>=0.900,!=0.940)", "pympler", "pytest (>=4.3.0)", "pytest-mypy-plugins", "zope.interface"]
tests-no-zope = ["cloudpickle", "coverage[toml] (>=5.0.2)", "hypothesis", "mypy (>=0.900,!=0.940)", "pympler", "pytest (>=4.3.0)", "pytest-mypy-plugins"]
[[package]]
name = "autogluon-common"
version = "0.6.0"
description = "AutoML for Image, Text, and Tabular Data"
category = "main"
optional = false
python-versions = ">=3.7, <3.10"
[package.dependencies]
boto3 = "*"
numpy = ">=1.21,<1.24"
pandas = ">=1.2.5,<1.4.0 || >1.4.0,<1.6"
setuptools = "*"
[package.extras]
tests = ["pytest", "pytest-mypy", "types-requests", "types-setuptools"]
[[package]]
name = "autogluon-core"
version = "0.6.0"
description = "AutoML for Image, Text, and Tabular Data"
category = "main"
optional = false
python-versions = ">=3.7, <3.10"
[package.dependencies]
"autogluon.common" = "0.6.0"
boto3 = "*"
dask = ">=2021.09.1,<=2021.11.2"
distributed = ">=2021.09.1,<=2021.11.2"
matplotlib = "*"
numpy = ">=1.21,<1.24"
pandas = ">=1.2.5,<1.4.0 || >1.4.0,<1.6"
requests = "*"
scikit-learn = ">=1.0.0,<1.2"
scipy = ">=1.5.4,<1.10.0"
tqdm = ">=4.38.0"
[package.extras]
all = ["hyperopt (>=0.2.7,<0.2.8)", "ray (>=2.0,<2.1)", "ray[tune] (>=2.0,<2.1)"]
ray = ["ray (>=2.0,<2.1)"]
raytune = ["hyperopt (>=0.2.7,<0.2.8)", "ray[tune] (>=2.0,<2.1)"]
tests = ["pytest", "pytest-mypy", "types-requests", "types-setuptools"]
[[package]]
name = "autogluon-features"
version = "0.6.0"
description = "AutoML for Image, Text, and Tabular Data"
category = "main"
optional = false
python-versions = ">=3.7, <3.10"
[package.dependencies]
"autogluon.common" = "0.6.0"
numpy = ">=1.21,<1.24"
pandas = ">=1.2.5,<1.4.0 || >1.4.0,<1.6"
psutil = ">=5.7.3,<6"
scikit-learn = ">=1.0.0,<1.2"
[[package]]
name = "autogluon-tabular"
version = "0.6.0"
description = "AutoML for Image, Text, and Tabular Data"
category = "main"
optional = false
python-versions = ">=3.7, <3.10"
[package.dependencies]
"autogluon.core" = "0.6.0"
"autogluon.features" = "0.6.0"
catboost = {version = ">=1.0,<1.2", optional = true, markers = "extra == \"all\""}
fastai = {version = ">=2.3.1,<2.8", optional = true, markers = "extra == \"all\""}
lightgbm = {version = ">=3.3,<3.4", optional = true, markers = "extra == \"all\""}
networkx = ">=2.3,<3.0"
numpy = ">=1.21,<1.24"
pandas = ">=1.2.5,<1.4.0 || >1.4.0,<1.6"
psutil = ">=5.7.3,<6"
scikit-learn = ">=1.0.0,<1.2"
scipy = ">=1.5.4,<1.10.0"
torch = {version = ">=1.0,<1.13", optional = true, markers = "extra == \"all\""}
xgboost = {version = ">=1.6,<1.8", optional = true, markers = "extra == \"all\""}
[package.extras]
all = ["catboost (>=1.0,<1.2)", "fastai (>=2.3.1,<2.8)", "lightgbm (>=3.3,<3.4)", "torch (>=1.0,<1.13)", "xgboost (>=1.6,<1.8)"]
catboost = ["catboost (>=1.0,<1.2)"]
fastai = ["fastai (>=2.3.1,<2.8)", "torch (>=1.0,<1.13)"]
imodels = ["imodels (>=1.3.0)"]
lightgbm = ["lightgbm (>=3.3,<3.4)"]
skex = ["scikit-learn-intelex (>=2021.5,<2021.6)"]
skl2onnx = ["skl2onnx (>=1.12.0,<1.13.0)"]
tests = ["imodels (>=1.3.0)", "skl2onnx (>=1.12.0,<1.13.0)", "vowpalwabbit (>=8.10,<8.11)"]
vowpalwabbit = ["vowpalwabbit (>=8.10,<8.11)"]
xgboost = ["xgboost (>=1.6,<1.8)"]
[[package]]
name = "babel"
version = "2.11.0"
description = "Internationalization utilities"
category = "main"
optional = false
python-versions = ">=3.6"
[package.dependencies]
pytz = ">=2015.7"
[[package]]
name = "backcall"
version = "0.2.0"
description = "Specifications for callback functions passed in to an API"
category = "dev"
optional = false
python-versions = "*"
[[package]]
name = "backports-zoneinfo"
version = "0.2.1"
description = "Backport of the standard library zoneinfo module"
category = "dev"
optional = false
python-versions = ">=3.6"
[package.extras]
tzdata = ["tzdata"]
[[package]]
name = "beautifulsoup4"
version = "4.11.1"
description = "Screen-scraping library"
category = "dev"
optional = false
python-versions = ">=3.6.0"
[package.dependencies]
soupsieve = ">1.2"
[package.extras]
html5lib = ["html5lib"]
lxml = ["lxml"]
[[package]]
name = "black"
version = "22.10.0"
description = "The uncompromising code formatter."
category = "dev"
optional = false
python-versions = ">=3.7"
[package.dependencies]
click = ">=8.0.0"
ipython = {version = ">=7.8.0", optional = true, markers = "extra == \"jupyter\""}
mypy-extensions = ">=0.4.3"
pathspec = ">=0.9.0"
platformdirs = ">=2"
tokenize-rt = {version = ">=3.2.0", optional = true, markers = "extra == \"jupyter\""}
tomli = {version = ">=1.1.0", markers = "python_full_version < \"3.11.0a7\""}
typing-extensions = {version = ">=3.10.0.0", markers = "python_version < \"3.10\""}
[package.extras]
colorama = ["colorama (>=0.4.3)"]
d = ["aiohttp (>=3.7.4)"]
jupyter = ["ipython (>=7.8.0)", "tokenize-rt (>=3.2.0)"]
uvloop = ["uvloop (>=0.15.2)"]
[[package]]
name = "bleach"
version = "5.0.1"
description = "An easy safelist-based HTML-sanitizing tool."
category = "dev"
optional = false
python-versions = ">=3.7"
[package.dependencies]
six = ">=1.9.0"
webencodings = "*"
[package.extras]
css = ["tinycss2 (>=1.1.0,<1.2)"]
dev = ["Sphinx (==4.3.2)", "black (==22.3.0)", "build (==0.8.0)", "flake8 (==4.0.1)", "hashin (==0.17.0)", "mypy (==0.961)", "pip-tools (==6.6.2)", "pytest (==7.1.2)", "tox (==3.25.0)", "twine (==4.0.1)", "wheel (==0.37.1)"]
[[package]]
name = "blis"
version = "0.7.9"
description = "The Blis BLAS-like linear algebra library, as a self-contained C-extension."
category = "main"
optional = false
python-versions = "*"
[package.dependencies]
numpy = ">=1.15.0"
[[package]]
name = "boto3"
version = "1.26.15"
description = "The AWS SDK for Python"
category = "main"
optional = false
python-versions = ">= 3.7"
[package.dependencies]
botocore = ">=1.29.15,<1.30.0"
jmespath = ">=0.7.1,<2.0.0"
s3transfer = ">=0.6.0,<0.7.0"
[package.extras]
crt = ["botocore[crt] (>=1.21.0,<2.0a0)"]
[[package]]
name = "botocore"
version = "1.29.15"
description = "Low-level, data-driven core of boto 3."
category = "main"
optional = false
python-versions = ">= 3.7"
[package.dependencies]
jmespath = ">=0.7.1,<2.0.0"
python-dateutil = ">=2.1,<3.0.0"
urllib3 = ">=1.25.4,<1.27"
[package.extras]
crt = ["awscrt (==0.14.0)"]
[[package]]
name = "cachetools"
version = "5.2.0"
description = "Extensible memoizing collections and decorators"
category = "dev"
optional = false
python-versions = "~=3.7"
[[package]]
name = "catalogue"
version = "2.0.8"
description = "Super lightweight function registries for your library"
category = "main"
optional = false
python-versions = ">=3.6"
[[package]]
name = "catboost"
version = "1.1.1"
description = "Catboost Python Package"
category = "main"
optional = false
python-versions = "*"
[package.dependencies]
graphviz = "*"
matplotlib = "*"
numpy = ">=1.16.0"
pandas = ">=0.24.0"
plotly = "*"
scipy = "*"
six = "*"
[[package]]
name = "causal-learn"
version = "0.1.3.0"
description = "causal-learn Python Package"
category = "main"
optional = false
python-versions = ">=3.6"
[package.dependencies]
graphviz = "*"
matplotlib = "*"
networkx = "*"
numpy = "*"
pandas = "*"
pydot = "*"
scikit-learn = "*"
scipy = "*"
statsmodels = "*"
tqdm = "*"
[[package]]
name = "causalml"
version = "0.13.0"
description = "Python Package for Uplift Modeling and Causal Inference with Machine Learning Algorithms"
category = "main"
optional = true
python-versions = ">=3.7"
develop = false
[package.dependencies]
Cython = ">=0.28.0"
dill = "*"
forestci = "0.6"
graphviz = "*"
lightgbm = "*"
matplotlib = "*"
numpy = ">=1.18.5"
packaging = "*"
pandas = ">=0.24.1"
pathos = "0.2.9"
pip = ">=10.0"
pydotplus = "*"
pygam = "*"
pyro-ppl = "*"
scikit-learn = "<=1.0.2"
scipy = ">=1.4.1"
seaborn = "*"
setuptools = ">=41.0.0"
shap = "*"
statsmodels = ">=0.9.0"
torch = "*"
tqdm = "*"
xgboost = "*"
[package.extras]
tf = ["tensorflow (>=2.4.0)"]
[package.source]
type = "git"
url = "https://github.com/uber/causalml"
reference = "master"
resolved_reference = "7050c74c257254de3600f69d49bda84a3ac152e2"
[[package]]
name = "certifi"
version = "2022.9.24"
description = "Python package for providing Mozilla's CA Bundle."
category = "main"
optional = false
python-versions = ">=3.6"
[[package]]
name = "cffi"
version = "1.15.1"
description = "Foreign Function Interface for Python calling C code."
category = "dev"
optional = false
python-versions = "*"
[package.dependencies]
pycparser = "*"
[[package]]
name = "charset-normalizer"
version = "2.1.1"
description = "The Real First Universal Charset Detector. Open, modern and actively maintained alternative to Chardet."
category = "main"
optional = false
python-versions = ">=3.6.0"
[package.extras]
unicode-backport = ["unicodedata2"]
[[package]]
name = "click"
version = "8.1.3"
description = "Composable command line interface toolkit"
category = "main"
optional = false
python-versions = ">=3.7"
[package.dependencies]
colorama = {version = "*", markers = "platform_system == \"Windows\""}
[[package]]
name = "cloudpickle"
version = "2.2.0"
description = "Extended pickling support for Python objects"
category = "main"
optional = false
python-versions = ">=3.6"
[[package]]
name = "colorama"
version = "0.4.6"
description = "Cross-platform colored terminal text."
category = "main"
optional = false
python-versions = "!=3.0.*,!=3.1.*,!=3.2.*,!=3.3.*,!=3.4.*,!=3.5.*,!=3.6.*,>=2.7"
[[package]]
name = "confection"
version = "0.0.3"
description = "The sweetest config system for Python"
category = "main"
optional = false
python-versions = ">=3.6"
[package.dependencies]
pydantic = ">=1.7.4,<1.8 || >1.8,<1.8.1 || >1.8.1,<1.11.0"
srsly = ">=2.4.0,<3.0.0"
[[package]]
name = "contourpy"
version = "1.0.6"
description = "Python library for calculating contours of 2D quadrilateral grids"
category = "main"
optional = false
python-versions = ">=3.7"
[package.dependencies]
numpy = ">=1.16"
[package.extras]
bokeh = ["bokeh", "selenium"]
docs = ["docutils (<0.18)", "sphinx (<=5.2.0)", "sphinx-rtd-theme"]
test = ["Pillow", "flake8", "isort", "matplotlib", "pytest"]
test-minimal = ["pytest"]
test-no-codebase = ["Pillow", "matplotlib", "pytest"]
[[package]]
name = "coverage"
version = "6.5.0"
description = "Code coverage measurement for Python"
category = "dev"
optional = false
python-versions = ">=3.7"
[package.dependencies]
tomli = {version = "*", optional = true, markers = "python_full_version <= \"3.11.0a6\" and extra == \"toml\""}
[package.extras]
toml = ["tomli"]
[[package]]
name = "cycler"
version = "0.11.0"
description = "Composable style cycles"
category = "main"
optional = false
python-versions = ">=3.6"
[[package]]
name = "cymem"
version = "2.0.7"
description = "Manage calls to calloc/free through Cython"
category = "main"
optional = false
python-versions = "*"
[[package]]
name = "cython"
version = "0.29.32"
description = "The Cython compiler for writing C extensions for the Python language."
category = "main"
optional = false
python-versions = ">=2.6, !=3.0.*, !=3.1.*, !=3.2.*"
[[package]]
name = "dask"
version = "2021.11.2"
description = "Parallel PyData with Task Scheduling"
category = "main"
optional = false
python-versions = ">=3.7"
[package.dependencies]
cloudpickle = ">=1.1.1"
fsspec = ">=0.6.0"
packaging = ">=20.0"
partd = ">=0.3.10"
pyyaml = "*"
toolz = ">=0.8.2"
[package.extras]
array = ["numpy (>=1.18)"]
complete = ["bokeh (>=1.0.0,!=2.0.0)", "distributed (==2021.11.2)", "jinja2", "numpy (>=1.18)", "pandas (>=1.0)"]
dataframe = ["numpy (>=1.18)", "pandas (>=1.0)"]
diagnostics = ["bokeh (>=1.0.0,!=2.0.0)", "jinja2"]
distributed = ["distributed (==2021.11.2)"]
test = ["pre-commit", "pytest", "pytest-rerunfailures", "pytest-xdist"]
[[package]]
name = "debugpy"
version = "1.6.3"
description = "An implementation of the Debug Adapter Protocol for Python"
category = "dev"
optional = false
python-versions = ">=3.7"
[[package]]
name = "decorator"
version = "5.1.1"
description = "Decorators for Humans"
category = "dev"
optional = false
python-versions = ">=3.5"
[[package]]
name = "defusedxml"
version = "0.7.1"
description = "XML bomb protection for Python stdlib modules"
category = "dev"
optional = false
python-versions = ">=2.7, !=3.0.*, !=3.1.*, !=3.2.*, !=3.3.*, !=3.4.*"
[[package]]
name = "dill"
version = "0.3.6"
description = "serialize all of python"
category = "main"
optional = true
python-versions = ">=3.7"
[package.extras]
graph = ["objgraph (>=1.7.2)"]
[[package]]
name = "distributed"
version = "2021.11.2"
description = "Distributed scheduler for Dask"
category = "main"
optional = false
python-versions = ">=3.7"
[package.dependencies]
click = ">=6.6"
cloudpickle = ">=1.5.0"
dask = "2021.11.2"
jinja2 = "*"
msgpack = ">=0.6.0"
psutil = ">=5.0"
pyyaml = "*"
setuptools = "*"
sortedcontainers = "<2.0.0 || >2.0.0,<2.0.1 || >2.0.1"
tblib = ">=1.6.0"
toolz = ">=0.8.2"
tornado = {version = ">=6.0.3", markers = "python_version >= \"3.8\""}
zict = ">=0.1.3"
[[package]]
name = "docutils"
version = "0.17.1"
description = "Docutils -- Python Documentation Utilities"
category = "main"
optional = false
python-versions = ">=2.7, !=3.0.*, !=3.1.*, !=3.2.*, !=3.3.*, !=3.4.*"
[[package]]
name = "econml"
version = "0.14.0"
description = "This package contains several methods for calculating Conditional Average Treatment Effects"
category = "main"
optional = false
python-versions = "*"
[package.dependencies]
joblib = ">=0.13.0"
lightgbm = "*"
numpy = "*"
pandas = "*"
scikit-learn = ">0.22.0,<1.2"
scipy = ">1.4.0"
shap = ">=0.38.1,<0.41.0"
sparse = "*"
statsmodels = ">=0.10"
[package.extras]
all = ["azure-cli", "dowhy (<0.9)", "keras (<2.4)", "matplotlib (<3.6.0)", "protobuf (<4)", "tensorflow (>1.10,<2.3)"]
automl = ["azure-cli"]
dowhy = ["dowhy (<0.9)"]
plt = ["graphviz", "matplotlib (<3.6.0)"]
tf = ["keras (<2.4)", "protobuf (<4)", "tensorflow (>1.10,<2.3)"]
[[package]]
name = "entrypoints"
version = "0.4"
description = "Discover and load entry points from installed packages."
category = "dev"
optional = false
python-versions = ">=3.6"
[[package]]
name = "exceptiongroup"
version = "1.0.4"
description = "Backport of PEP 654 (exception groups)"
category = "dev"
optional = false
python-versions = ">=3.7"
[package.extras]
test = ["pytest (>=6)"]
[[package]]
name = "executing"
version = "1.2.0"
description = "Get the currently executing AST node of a frame, and other information"
category = "dev"
optional = false
python-versions = "*"
[package.extras]
tests = ["asttokens", "littleutils", "pytest", "rich"]
[[package]]
name = "fastai"
version = "2.7.10"
description = "fastai simplifies training fast and accurate neural nets using modern best practices"
category = "main"
optional = false
python-versions = ">=3.7"
[package.dependencies]
fastcore = ">=1.4.5,<1.6"
fastdownload = ">=0.0.5,<2"
fastprogress = ">=0.2.4"
matplotlib = "*"
packaging = "*"
pandas = "*"
pillow = ">6.0.0"
pip = "*"
pyyaml = "*"
requests = "*"
scikit-learn = "*"
scipy = "*"
spacy = "<4"
torch = ">=1.7,<1.14"
torchvision = ">=0.8.2"
[package.extras]
dev = ["accelerate (>=0.10.0)", "albumentations", "captum (>=0.3)", "catalyst", "comet-ml", "flask", "flask-compress", "ipywidgets", "kornia", "neptune-client", "ninja", "opencv-python", "pyarrow", "pydicom", "pytorch-ignite", "pytorch-lightning", "scikit-image", "sentencepiece", "tensorboard", "timm (>=0.6.2.dev)", "transformers", "wandb"]
[[package]]
name = "fastcore"
version = "1.5.27"
description = "Python supercharged for fastai development"
category = "main"
optional = false
python-versions = ">=3.7"
[package.dependencies]
packaging = "*"
pip = "*"
[package.extras]
dev = ["jupyterlab", "matplotlib", "nbdev (>=0.2.39)", "numpy", "pandas", "pillow", "torch"]
[[package]]
name = "fastdownload"
version = "0.0.7"
description = "A general purpose data downloading library."
category = "main"
optional = false
python-versions = ">=3.6"
[package.dependencies]
fastcore = ">=1.3.26"
fastprogress = "*"
[[package]]
name = "fastjsonschema"
version = "2.16.2"
description = "Fastest Python implementation of JSON schema"
category = "dev"
optional = false
python-versions = "*"
[package.extras]
devel = ["colorama", "json-spec", "jsonschema", "pylint", "pytest", "pytest-benchmark", "pytest-cache", "validictory"]
[[package]]
name = "fastprogress"
version = "1.0.3"
description = "A nested progress with plotting options for fastai"
category = "main"
optional = false
python-versions = ">=3.6"
[[package]]
name = "flake8"
version = "4.0.1"
description = "the modular source code checker: pep8 pyflakes and co"
category = "dev"
optional = false
python-versions = ">=3.6"
[package.dependencies]
mccabe = ">=0.6.0,<0.7.0"
pycodestyle = ">=2.8.0,<2.9.0"
pyflakes = ">=2.4.0,<2.5.0"
[[package]]
name = "flaky"
version = "3.7.0"
description = "Plugin for nose or pytest that automatically reruns flaky tests."
category = "dev"
optional = false
python-versions = ">=2.7, !=3.0.*, !=3.1.*, !=3.2.*, !=3.3.*"
[[package]]
name = "flatbuffers"
version = "22.10.26"
description = "The FlatBuffers serialization format for Python"
category = "dev"
optional = false
python-versions = "*"
[[package]]
name = "fonttools"
version = "4.38.0"
description = "Tools to manipulate font files"
category = "main"
optional = false
python-versions = ">=3.7"
[package.extras]
all = ["brotli (>=1.0.1)", "brotlicffi (>=0.8.0)", "fs (>=2.2.0,<3)", "lxml (>=4.0,<5)", "lz4 (>=1.7.4.2)", "matplotlib", "munkres", "scipy", "skia-pathops (>=0.5.0)", "sympy", "uharfbuzz (>=0.23.0)", "unicodedata2 (>=14.0.0)", "xattr", "zopfli (>=0.1.4)"]
graphite = ["lz4 (>=1.7.4.2)"]
interpolatable = ["munkres", "scipy"]
lxml = ["lxml (>=4.0,<5)"]
pathops = ["skia-pathops (>=0.5.0)"]
plot = ["matplotlib"]
repacker = ["uharfbuzz (>=0.23.0)"]
symfont = ["sympy"]
type1 = ["xattr"]
ufo = ["fs (>=2.2.0,<3)"]
unicode = ["unicodedata2 (>=14.0.0)"]
woff = ["brotli (>=1.0.1)", "brotlicffi (>=0.8.0)", "zopfli (>=0.1.4)"]
[[package]]
name = "forestci"
version = "0.6"
description = "forestci: confidence intervals for scikit-learn forest algorithms"
category = "main"
optional = true
python-versions = "*"
[package.dependencies]
numpy = ">=1.20"
scikit-learn = ">=0.23.1"
[[package]]
name = "fsspec"
version = "2022.11.0"
description = "File-system specification"
category = "main"
optional = false
python-versions = ">=3.7"
[package.extras]
abfs = ["adlfs"]
adl = ["adlfs"]
arrow = ["pyarrow (>=1)"]
dask = ["dask", "distributed"]
dropbox = ["dropbox", "dropboxdrivefs", "requests"]
entrypoints = ["importlib-metadata"]
fuse = ["fusepy"]
gcs = ["gcsfs"]
git = ["pygit2"]
github = ["requests"]
gs = ["gcsfs"]
gui = ["panel"]
hdfs = ["pyarrow (>=1)"]
http = ["aiohttp (!=4.0.0a0,!=4.0.0a1)", "requests"]
libarchive = ["libarchive-c"]
oci = ["ocifs"]
s3 = ["s3fs"]
sftp = ["paramiko"]
smb = ["smbprotocol"]
ssh = ["paramiko"]
tqdm = ["tqdm"]
[[package]]
name = "future"
version = "0.18.2"
description = "Clean single-source support for Python 3 and 2"
category = "main"
optional = true
python-versions = ">=2.6, !=3.0.*, !=3.1.*, !=3.2.*"
[[package]]
name = "gast"
version = "0.4.0"
description = "Python AST that abstracts the underlying Python version"
category = "dev"
optional = false
python-versions = ">=2.7, !=3.0.*, !=3.1.*, !=3.2.*, !=3.3.*"
[[package]]
name = "google-auth"
version = "2.14.1"
description = "Google Authentication Library"
category = "dev"
optional = false
python-versions = ">=2.7,!=3.0.*,!=3.1.*,!=3.2.*,!=3.3.*,!=3.4.*,!=3.5.*"
[package.dependencies]
cachetools = ">=2.0.0,<6.0"
pyasn1-modules = ">=0.2.1"
rsa = {version = ">=3.1.4,<5", markers = "python_version >= \"3.6\""}
six = ">=1.9.0"
[package.extras]
aiohttp = ["aiohttp (>=3.6.2,<4.0.0dev)", "requests (>=2.20.0,<3.0.0dev)"]
enterprise-cert = ["cryptography (==36.0.2)", "pyopenssl (==22.0.0)"]
pyopenssl = ["cryptography (>=38.0.3)", "pyopenssl (>=20.0.0)"]
reauth = ["pyu2f (>=0.1.5)"]
[[package]]
name = "google-auth-oauthlib"
version = "0.4.6"
description = "Google Authentication Library"
category = "dev"
optional = false
python-versions = ">=3.6"
[package.dependencies]
google-auth = ">=1.0.0"
requests-oauthlib = ">=0.7.0"
[package.extras]
tool = ["click (>=6.0.0)"]
[[package]]
name = "google-pasta"
version = "0.2.0"
description = "pasta is an AST-based Python refactoring library"
category = "dev"
optional = false
python-versions = "*"
[package.dependencies]
six = "*"
[[package]]
name = "graphviz"
version = "0.20.1"
description = "Simple Python interface for Graphviz"
category = "main"
optional = false
python-versions = ">=3.7"
[package.extras]
dev = ["flake8", "pep8-naming", "tox (>=3)", "twine", "wheel"]
docs = ["sphinx (>=5)", "sphinx-autodoc-typehints", "sphinx-rtd-theme"]
test = ["coverage", "mock (>=4)", "pytest (>=7)", "pytest-cov", "pytest-mock (>=3)"]
[[package]]
name = "grpcio"
version = "1.50.0"
description = "HTTP/2-based RPC framework"
category = "dev"
optional = false
python-versions = ">=3.7"
[package.dependencies]
six = ">=1.5.2"
[package.extras]
protobuf = ["grpcio-tools (>=1.50.0)"]
[[package]]
name = "h5py"
version = "3.7.0"
description = "Read and write HDF5 files from Python"
category = "dev"
optional = false
python-versions = ">=3.7"
[package.dependencies]
numpy = ">=1.14.5"
[[package]]
name = "heapdict"
version = "1.0.1"
description = "a heap with decrease-key and increase-key operations"
category = "main"
optional = false
python-versions = "*"
[[package]]
name = "idna"
version = "3.4"
description = "Internationalized Domain Names in Applications (IDNA)"
category = "main"
optional = false
python-versions = ">=3.5"
[[package]]
name = "imagesize"
version = "1.4.1"
description = "Getting image size from png/jpeg/jpeg2000/gif file"
category = "main"
optional = false
python-versions = ">=2.7, !=3.0.*, !=3.1.*, !=3.2.*, !=3.3.*"
[[package]]
name = "importlib-metadata"
version = "5.0.0"
description = "Read metadata from Python packages"
category = "main"
optional = false
python-versions = ">=3.7"
[package.dependencies]
zipp = ">=0.5"
[package.extras]
docs = ["furo", "jaraco.packaging (>=9)", "jaraco.tidelift (>=1.4)", "rst.linker (>=1.9)", "sphinx (>=3.5)"]
perf = ["ipython"]
testing = ["flake8 (<5)", "flufl.flake8", "importlib-resources (>=1.3)", "packaging", "pyfakefs", "pytest (>=6)", "pytest-black (>=0.3.7)", "pytest-checkdocs (>=2.4)", "pytest-cov", "pytest-enabler (>=1.3)", "pytest-flake8", "pytest-mypy (>=0.9.1)", "pytest-perf (>=0.9.2)"]
[[package]]
name = "importlib-resources"
version = "5.10.0"
description = "Read resources from Python packages"
category = "dev"
optional = false
python-versions = ">=3.7"
[package.dependencies]
zipp = {version = ">=3.1.0", markers = "python_version < \"3.10\""}
[package.extras]
docs = ["furo", "jaraco.packaging (>=9)", "jaraco.tidelift (>=1.4)", "rst.linker (>=1.9)", "sphinx (>=3.5)"]
testing = ["flake8 (<5)", "pytest (>=6)", "pytest-black (>=0.3.7)", "pytest-checkdocs (>=2.4)", "pytest-cov", "pytest-enabler (>=1.3)", "pytest-flake8", "pytest-mypy (>=0.9.1)"]
[[package]]
name = "iniconfig"
version = "1.1.1"
description = "iniconfig: brain-dead simple config-ini parsing"
category = "dev"
optional = false
python-versions = "*"
[[package]]
name = "ipykernel"
version = "6.17.1"
description = "IPython Kernel for Jupyter"
category = "dev"
optional = false
python-versions = ">=3.8"
[package.dependencies]
appnope = {version = "*", markers = "platform_system == \"Darwin\""}
debugpy = ">=1.0"
ipython = ">=7.23.1"
jupyter-client = ">=6.1.12"
matplotlib-inline = ">=0.1"
nest-asyncio = "*"
packaging = "*"
psutil = "*"
pyzmq = ">=17"
tornado = ">=6.1"
traitlets = ">=5.1.0"
[package.extras]
docs = ["myst-parser", "pydata-sphinx-theme", "sphinx", "sphinxcontrib-github-alt"]
test = ["flaky", "ipyparallel", "pre-commit", "pytest (>=7.0)", "pytest-cov", "pytest-timeout"]
[[package]]
name = "ipython"
version = "8.6.0"
description = "IPython: Productive Interactive Computing"
category = "dev"
optional = false
python-versions = ">=3.8"
[package.dependencies]
appnope = {version = "*", markers = "sys_platform == \"darwin\""}
backcall = "*"
colorama = {version = "*", markers = "sys_platform == \"win32\""}
decorator = "*"
jedi = ">=0.16"
matplotlib-inline = "*"
pexpect = {version = ">4.3", markers = "sys_platform != \"win32\""}
pickleshare = "*"
prompt-toolkit = ">3.0.1,<3.1.0"
pygments = ">=2.4.0"
stack-data = "*"
traitlets = ">=5"
[package.extras]
all = ["black", "curio", "docrepr", "ipykernel", "ipyparallel", "ipywidgets", "matplotlib", "matplotlib (!=3.2.0)", "nbconvert", "nbformat", "notebook", "numpy (>=1.20)", "pandas", "pytest (<7)", "pytest (<7.1)", "pytest-asyncio", "qtconsole", "setuptools (>=18.5)", "sphinx (>=1.3)", "sphinx-rtd-theme", "stack-data", "testpath", "trio", "typing-extensions"]
black = ["black"]
doc = ["docrepr", "ipykernel", "matplotlib", "pytest (<7)", "pytest (<7.1)", "pytest-asyncio", "setuptools (>=18.5)", "sphinx (>=1.3)", "sphinx-rtd-theme", "stack-data", "testpath", "typing-extensions"]
kernel = ["ipykernel"]
nbconvert = ["nbconvert"]
nbformat = ["nbformat"]
notebook = ["ipywidgets", "notebook"]
parallel = ["ipyparallel"]
qtconsole = ["qtconsole"]
test = ["pytest (<7.1)", "pytest-asyncio", "testpath"]
test-extra = ["curio", "matplotlib (!=3.2.0)", "nbformat", "numpy (>=1.20)", "pandas", "pytest (<7.1)", "pytest-asyncio", "testpath", "trio"]
[[package]]
name = "ipython-genutils"
version = "0.2.0"
description = "Vestigial utilities from IPython"
category = "dev"
optional = false
python-versions = "*"
[[package]]
name = "ipywidgets"
version = "8.0.2"
description = "Jupyter interactive widgets"
category = "dev"
optional = false
python-versions = ">=3.7"
[package.dependencies]
ipykernel = ">=4.5.1"
ipython = ">=6.1.0"
jupyterlab-widgets = ">=3.0,<4.0"
traitlets = ">=4.3.1"
widgetsnbextension = ">=4.0,<5.0"
[package.extras]
test = ["jsonschema", "pytest (>=3.6.0)", "pytest-cov", "pytz"]
[[package]]
name = "isort"
version = "5.10.1"
description = "A Python utility / library to sort Python imports."
category = "dev"
optional = false
python-versions = ">=3.6.1,<4.0"
[package.extras]
colors = ["colorama (>=0.4.3,<0.5.0)"]
pipfile-deprecated-finder = ["pipreqs", "requirementslib"]
plugins = ["setuptools"]
requirements-deprecated-finder = ["pip-api", "pipreqs"]
[[package]]
name = "jedi"
version = "0.18.2"
description = "An autocompletion tool for Python that can be used for text editors."
category = "dev"
optional = false
python-versions = ">=3.6"
[package.dependencies]
parso = ">=0.8.0,<0.9.0"
[package.extras]
docs = ["Jinja2 (==2.11.3)", "MarkupSafe (==1.1.1)", "Pygments (==2.8.1)", "alabaster (==0.7.12)", "babel (==2.9.1)", "chardet (==4.0.0)", "commonmark (==0.8.1)", "docutils (==0.17.1)", "future (==0.18.2)", "idna (==2.10)", "imagesize (==1.2.0)", "mock (==1.0.1)", "packaging (==20.9)", "pyparsing (==2.4.7)", "pytz (==2021.1)", "readthedocs-sphinx-ext (==2.1.4)", "recommonmark (==0.5.0)", "requests (==2.25.1)", "six (==1.15.0)", "snowballstemmer (==2.1.0)", "sphinx (==1.8.5)", "sphinx-rtd-theme (==0.4.3)", "sphinxcontrib-serializinghtml (==1.1.4)", "sphinxcontrib-websupport (==1.2.4)", "urllib3 (==1.26.4)"]
qa = ["flake8 (==3.8.3)", "mypy (==0.782)"]
testing = ["Django (<3.1)", "attrs", "colorama", "docopt", "pytest (<7.0.0)"]
[[package]]
name = "jinja2"
version = "3.1.2"
description = "A very fast and expressive template engine."
category = "main"
optional = false
python-versions = ">=3.7"
[package.dependencies]
MarkupSafe = ">=2.0"
[package.extras]
i18n = ["Babel (>=2.7)"]
[[package]]
name = "jmespath"
version = "1.0.1"
description = "JSON Matching Expressions"
category = "main"
optional = false
python-versions = ">=3.7"
[[package]]
name = "joblib"
version = "1.2.0"
description = "Lightweight pipelining with Python functions"
category = "main"
optional = false
python-versions = ">=3.7"
[[package]]
name = "jsonschema"
version = "4.17.1"
description = "An implementation of JSON Schema validation for Python"
category = "dev"
optional = false
python-versions = ">=3.7"
[package.dependencies]
attrs = ">=17.4.0"
importlib-resources = {version = ">=1.4.0", markers = "python_version < \"3.9\""}
pkgutil-resolve-name = {version = ">=1.3.10", markers = "python_version < \"3.9\""}
pyrsistent = ">=0.14.0,<0.17.0 || >0.17.0,<0.17.1 || >0.17.1,<0.17.2 || >0.17.2"
[package.extras]
format = ["fqdn", "idna", "isoduration", "jsonpointer (>1.13)", "rfc3339-validator", "rfc3987", "uri-template", "webcolors (>=1.11)"]
format-nongpl = ["fqdn", "idna", "isoduration", "jsonpointer (>1.13)", "rfc3339-validator", "rfc3986-validator (>0.1.0)", "uri-template", "webcolors (>=1.11)"]
[[package]]
name = "jupyter"
version = "1.0.0"
description = "Jupyter metapackage. Install all the Jupyter components in one go."
category = "dev"
optional = false
python-versions = "*"
[package.dependencies]
ipykernel = "*"
ipywidgets = "*"
jupyter-console = "*"
nbconvert = "*"
notebook = "*"
qtconsole = "*"
[[package]]
name = "jupyter-client"
version = "7.4.7"
description = "Jupyter protocol implementation and client libraries"
category = "dev"
optional = false
python-versions = ">=3.7"
[package.dependencies]
entrypoints = "*"
jupyter-core = ">=4.9.2"
nest-asyncio = ">=1.5.4"
python-dateutil = ">=2.8.2"
pyzmq = ">=23.0"
tornado = ">=6.2"
traitlets = "*"
[package.extras]
doc = ["ipykernel", "myst-parser", "sphinx (>=1.3.6)", "sphinx-rtd-theme", "sphinxcontrib-github-alt"]
test = ["codecov", "coverage", "ipykernel (>=6.12)", "ipython", "mypy", "pre-commit", "pytest", "pytest-asyncio (>=0.18)", "pytest-cov", "pytest-timeout"]
[[package]]
name = "jupyter-console"
version = "6.4.4"
description = "Jupyter terminal console"
category = "dev"
optional = false
python-versions = ">=3.7"
[package.dependencies]
ipykernel = "*"
ipython = "*"
jupyter-client = ">=7.0.0"
prompt-toolkit = ">=2.0.0,<3.0.0 || >3.0.0,<3.0.1 || >3.0.1,<3.1.0"
pygments = "*"
[package.extras]
test = ["pexpect"]
[[package]]
name = "jupyter-core"
version = "5.0.0"
description = "Jupyter core package. A base package on which Jupyter projects rely."
category = "dev"
optional = false
python-versions = ">=3.8"
[package.dependencies]
platformdirs = "*"
pywin32 = {version = ">=1.0", markers = "sys_platform == \"win32\" and platform_python_implementation != \"PyPy\""}
traitlets = "*"
[package.extras]
test = ["ipykernel", "pre-commit", "pytest", "pytest-cov", "pytest-timeout"]
[[package]]
name = "jupyter-server"
version = "1.23.3"
description = "The backend—i.e. core services, APIs, and REST endpoints—to Jupyter web applications."
category = "dev"
optional = false
python-versions = ">=3.7"
[package.dependencies]
anyio = ">=3.1.0,<4"
argon2-cffi = "*"
jinja2 = "*"
jupyter-client = ">=6.1.12"
jupyter-core = ">=4.7.0"
nbconvert = ">=6.4.4"
nbformat = ">=5.2.0"
packaging = "*"
prometheus-client = "*"
pywinpty = {version = "*", markers = "os_name == \"nt\""}
pyzmq = ">=17"
Send2Trash = "*"
terminado = ">=0.8.3"
tornado = ">=6.1.0"
traitlets = ">=5.1"
websocket-client = "*"
[package.extras]
test = ["coverage", "ipykernel", "pre-commit", "pytest (>=7.0)", "pytest-console-scripts", "pytest-cov", "pytest-mock", "pytest-timeout", "pytest-tornasync", "requests"]
[[package]]
name = "jupyterlab-pygments"
version = "0.2.2"
description = "Pygments theme using JupyterLab CSS variables"
category = "dev"
optional = false
python-versions = ">=3.7"
[[package]]
name = "jupyterlab-widgets"
version = "3.0.3"
description = "Jupyter interactive widgets for JupyterLab"
category = "dev"
optional = false
python-versions = ">=3.7"
[[package]]
name = "keras"
version = "2.11.0"
description = "Deep learning for humans."
category = "dev"
optional = false
python-versions = ">=3.7"
[[package]]
name = "kiwisolver"
version = "1.4.4"
description = "A fast implementation of the Cassowary constraint solver"
category = "main"
optional = false
python-versions = ">=3.7"
[[package]]
name = "langcodes"
version = "3.3.0"
description = "Tools for labeling human languages with IETF language tags"
category = "main"
optional = false
python-versions = ">=3.6"
[package.extras]
data = ["language-data (>=1.1,<2.0)"]
[[package]]
name = "libclang"
version = "14.0.6"
description = "Clang Python Bindings, mirrored from the official LLVM repo: https://github.com/llvm/llvm-project/tree/main/clang/bindings/python, to make the installation process easier."
category = "dev"
optional = false
python-versions = "*"
[[package]]
name = "lightgbm"
version = "3.3.3"
description = "LightGBM Python Package"
category = "main"
optional = false
python-versions = "*"
[package.dependencies]
numpy = "*"
scikit-learn = "!=0.22.0"
scipy = "*"
wheel = "*"
[package.extras]
dask = ["dask[array] (>=2.0.0)", "dask[dataframe] (>=2.0.0)", "dask[distributed] (>=2.0.0)", "pandas"]
[[package]]
name = "llvmlite"
version = "0.36.0"
description = "lightweight wrapper around basic LLVM functionality"
category = "main"
optional = false
python-versions = ">=3.6,<3.10"
[[package]]
name = "locket"
version = "1.0.0"
description = "File-based locks for Python on Linux and Windows"
category = "main"
optional = false
python-versions = ">=2.7, !=3.0.*, !=3.1.*, !=3.2.*, !=3.3.*"
[[package]]
name = "markdown"
version = "3.4.1"
description = "Python implementation of Markdown."
category = "dev"
optional = false
python-versions = ">=3.7"
[package.dependencies]
importlib-metadata = {version = ">=4.4", markers = "python_version < \"3.10\""}
[package.extras]
testing = ["coverage", "pyyaml"]
[[package]]
name = "markupsafe"
version = "2.1.1"
description = "Safely add untrusted strings to HTML/XML markup."
category = "main"
optional = false
python-versions = ">=3.7"
[[package]]
name = "matplotlib"
version = "3.6.2"
description = "Python plotting package"
category = "main"
optional = false
python-versions = ">=3.8"
[package.dependencies]
contourpy = ">=1.0.1"
cycler = ">=0.10"
fonttools = ">=4.22.0"
kiwisolver = ">=1.0.1"
numpy = ">=1.19"
packaging = ">=20.0"
pillow = ">=6.2.0"
pyparsing = ">=2.2.1"
python-dateutil = ">=2.7"
setuptools_scm = ">=7"
[[package]]
name = "matplotlib-inline"
version = "0.1.6"
description = "Inline Matplotlib backend for Jupyter"
category = "dev"
optional = false
python-versions = ">=3.5"
[package.dependencies]
traitlets = "*"
[[package]]
name = "mccabe"
version = "0.6.1"
description = "McCabe checker, plugin for flake8"
category = "dev"
optional = false
python-versions = "*"
[[package]]
name = "mistune"
version = "2.0.4"
description = "A sane Markdown parser with useful plugins and renderers"
category = "dev"
optional = false
python-versions = "*"
[[package]]
name = "mpmath"
version = "1.2.1"
description = "Python library for arbitrary-precision floating-point arithmetic"
category = "main"
optional = false
python-versions = "*"
[package.extras]
develop = ["codecov", "pycodestyle", "pytest (>=4.6)", "pytest-cov", "wheel"]
tests = ["pytest (>=4.6)"]
[[package]]
name = "msgpack"
version = "1.0.4"
description = "MessagePack serializer"
category = "main"
optional = false
python-versions = "*"
[[package]]
name = "multiprocess"
version = "0.70.14"
description = "better multiprocessing and multithreading in python"
category = "main"
optional = true
python-versions = ">=3.7"
[package.dependencies]
dill = ">=0.3.6"
[[package]]
name = "murmurhash"
version = "1.0.9"
description = "Cython bindings for MurmurHash"
category = "main"
optional = false
python-versions = ">=3.6"
[[package]]
name = "mypy"
version = "0.971"
description = "Optional static typing for Python"
category = "dev"
optional = false
python-versions = ">=3.6"
[package.dependencies]
mypy-extensions = ">=0.4.3"
tomli = {version = ">=1.1.0", markers = "python_version < \"3.11\""}
typing-extensions = ">=3.10"
[package.extras]
dmypy = ["psutil (>=4.0)"]
python2 = ["typed-ast (>=1.4.0,<2)"]
reports = ["lxml"]
[[package]]
name = "mypy-extensions"
version = "0.4.3"
description = "Experimental type system extensions for programs checked with the mypy typechecker."
category = "dev"
optional = false
python-versions = "*"
[[package]]
name = "nbclassic"
version = "0.4.8"
description = "A web-based notebook environment for interactive computing"
category = "dev"
optional = false
python-versions = ">=3.7"
[package.dependencies]
argon2-cffi = "*"
ipykernel = "*"
ipython-genutils = "*"
jinja2 = "*"
jupyter-client = ">=6.1.1"
jupyter-core = ">=4.6.1"
jupyter-server = ">=1.8"
nbconvert = ">=5"
nbformat = "*"
nest-asyncio = ">=1.5"
notebook-shim = ">=0.1.0"
prometheus-client = "*"
pyzmq = ">=17"
Send2Trash = ">=1.8.0"
terminado = ">=0.8.3"
tornado = ">=6.1"
traitlets = ">=4.2.1"
[package.extras]
docs = ["myst-parser", "nbsphinx", "sphinx", "sphinx-rtd-theme", "sphinxcontrib-github-alt"]
json-logging = ["json-logging"]
test = ["coverage", "nbval", "pytest", "pytest-cov", "pytest-playwright", "pytest-tornasync", "requests", "requests-unixsocket", "testpath"]
[[package]]
name = "nbclient"
version = "0.7.0"
description = "A client library for executing notebooks. Formerly nbconvert's ExecutePreprocessor."
category = "dev"
optional = false
python-versions = ">=3.7.0"
[package.dependencies]
jupyter-client = ">=6.1.5"
nbformat = ">=5.0"
nest-asyncio = "*"
traitlets = ">=5.2.2"
[package.extras]
sphinx = ["Sphinx (>=1.7)", "autodoc-traits", "mock", "moto", "myst-parser", "sphinx-book-theme"]
test = ["black", "check-manifest", "flake8", "ipykernel", "ipython", "ipywidgets", "mypy", "nbconvert", "pip (>=18.1)", "pre-commit", "pytest (>=4.1)", "pytest-asyncio", "pytest-cov (>=2.6.1)", "setuptools (>=60.0)", "testpath", "twine (>=1.11.0)", "xmltodict"]
[[package]]
name = "nbconvert"
version = "7.0.0rc3"
description = "Converting Jupyter Notebooks"
category = "dev"
optional = false
python-versions = ">=3.7"
[package.dependencies]
beautifulsoup4 = "*"
bleach = "*"
defusedxml = "*"
importlib-metadata = {version = ">=3.6", markers = "python_version < \"3.10\""}
jinja2 = ">=3.0"
jupyter-core = ">=4.7"
jupyterlab-pygments = "*"
markupsafe = ">=2.0"
mistune = ">=2.0.2,<3"
nbclient = ">=0.5.0"
nbformat = ">=5.1"
packaging = "*"
pandocfilters = ">=1.4.1"
pygments = ">=2.4.1"
tinycss2 = "*"
traitlets = ">=5.0"
[package.extras]
all = ["ipykernel", "ipython", "ipywidgets (>=7)", "nbsphinx (>=0.2.12)", "pre-commit", "pyppeteer (>=1,<1.1)", "pytest", "pytest-cov", "pytest-dependency", "sphinx (>=1.5.1)", "sphinx-rtd-theme", "tornado (>=6.1)"]
docs = ["ipython", "nbsphinx (>=0.2.12)", "sphinx (>=1.5.1)", "sphinx-rtd-theme"]
serve = ["tornado (>=6.1)"]
test = ["ipykernel", "ipywidgets (>=7)", "pre-commit", "pyppeteer (>=1,<1.1)", "pytest", "pytest-cov", "pytest-dependency"]
webpdf = ["pyppeteer (>=1,<1.1)"]
[[package]]
name = "nbformat"
version = "5.7.0"
description = "The Jupyter Notebook format"
category = "dev"
optional = false
python-versions = ">=3.7"
[package.dependencies]
fastjsonschema = "*"
jsonschema = ">=2.6"
jupyter-core = "*"
traitlets = ">=5.1"
[package.extras]
test = ["check-manifest", "pep440", "pre-commit", "pytest", "testpath"]
[[package]]
name = "nbsphinx"
version = "0.8.10"
description = "Jupyter Notebook Tools for Sphinx"
category = "dev"
optional = false
python-versions = ">=3.6"
[package.dependencies]
docutils = "*"
jinja2 = "*"
nbconvert = "!=5.4"
nbformat = "*"
sphinx = ">=1.8"
traitlets = ">=5"
[[package]]
name = "nest-asyncio"
version = "1.5.6"
description = "Patch asyncio to allow nested event loops"
category = "dev"
optional = false
python-versions = ">=3.5"
[[package]]
name = "networkx"
version = "2.8.8"
description = "Python package for creating and manipulating graphs and networks"
category = "main"
optional = false
python-versions = ">=3.8"
[package.extras]
default = ["matplotlib (>=3.4)", "numpy (>=1.19)", "pandas (>=1.3)", "scipy (>=1.8)"]
developer = ["mypy (>=0.982)", "pre-commit (>=2.20)"]
doc = ["nb2plots (>=0.6)", "numpydoc (>=1.5)", "pillow (>=9.2)", "pydata-sphinx-theme (>=0.11)", "sphinx (>=5.2)", "sphinx-gallery (>=0.11)", "texext (>=0.6.6)"]
extra = ["lxml (>=4.6)", "pydot (>=1.4.2)", "pygraphviz (>=1.9)", "sympy (>=1.10)"]
test = ["codecov (>=2.1)", "pytest (>=7.2)", "pytest-cov (>=4.0)"]
[[package]]
name = "notebook"
version = "6.5.2"
description = "A web-based notebook environment for interactive computing"
category = "dev"
optional = false
python-versions = ">=3.7"
[package.dependencies]
argon2-cffi = "*"
ipykernel = "*"
ipython-genutils = "*"
jinja2 = "*"
jupyter-client = ">=5.3.4"
jupyter-core = ">=4.6.1"
nbclassic = ">=0.4.7"
nbconvert = ">=5"
nbformat = "*"
nest-asyncio = ">=1.5"
prometheus-client = "*"
pyzmq = ">=17"
Send2Trash = ">=1.8.0"
terminado = ">=0.8.3"
tornado = ">=6.1"
traitlets = ">=4.2.1"
[package.extras]
docs = ["myst-parser", "nbsphinx", "sphinx", "sphinx-rtd-theme", "sphinxcontrib-github-alt"]
json-logging = ["json-logging"]
test = ["coverage", "nbval", "pytest", "pytest-cov", "requests", "requests-unixsocket", "selenium (==4.1.5)", "testpath"]
[[package]]
name = "notebook-shim"
version = "0.2.2"
description = "A shim layer for notebook traits and config"
category = "dev"
optional = false
python-versions = ">=3.7"
[package.dependencies]
jupyter-server = ">=1.8,<3"
[package.extras]
test = ["pytest", "pytest-console-scripts", "pytest-tornasync"]
[[package]]
name = "numba"
version = "0.53.1"
description = "compiling Python code using LLVM"
category = "main"
optional = false
python-versions = ">=3.6,<3.10"
[package.dependencies]
llvmlite = ">=0.36.0rc1,<0.37"
numpy = ">=1.15"
setuptools = "*"
[[package]]
name = "numpy"
version = "1.23.5"
description = "NumPy is the fundamental package for array computing with Python."
category = "main"
optional = false
python-versions = ">=3.8"
[[package]]
name = "oauthlib"
version = "3.2.2"
description = "A generic, spec-compliant, thorough implementation of the OAuth request-signing logic"
category = "dev"
optional = false
python-versions = ">=3.6"
[package.extras]
rsa = ["cryptography (>=3.0.0)"]
signals = ["blinker (>=1.4.0)"]
signedtoken = ["cryptography (>=3.0.0)", "pyjwt (>=2.0.0,<3)"]
[[package]]
name = "opt-einsum"
version = "3.3.0"
description = "Optimizing numpys einsum function"
category = "main"
optional = false
python-versions = ">=3.5"
[package.dependencies]
numpy = ">=1.7"
[package.extras]
docs = ["numpydoc", "sphinx (==1.2.3)", "sphinx-rtd-theme", "sphinxcontrib-napoleon"]
tests = ["pytest", "pytest-cov", "pytest-pep8"]
[[package]]
name = "packaging"
version = "21.3"
description = "Core utilities for Python packages"
category = "main"
optional = false
python-versions = ">=3.6"
[package.dependencies]
pyparsing = ">=2.0.2,<3.0.5 || >3.0.5"
[[package]]
name = "pandas"
version = "1.5.2"
description = "Powerful data structures for data analysis, time series, and statistics"
category = "main"
optional = false
python-versions = ">=3.8"
[package.dependencies]
numpy = {version = ">=1.20.3", markers = "python_version < \"3.10\""}
python-dateutil = ">=2.8.1"
pytz = ">=2020.1"
[package.extras]
test = ["hypothesis (>=5.5.3)", "pytest (>=6.0)", "pytest-xdist (>=1.31)"]
[[package]]
name = "pandocfilters"
version = "1.5.0"
description = "Utilities for writing pandoc filters in python"
category = "dev"
optional = false
python-versions = ">=2.7, !=3.0.*, !=3.1.*, !=3.2.*, !=3.3.*"
[[package]]
name = "parso"
version = "0.8.3"
description = "A Python Parser"
category = "dev"
optional = false
python-versions = ">=3.6"
[package.extras]
qa = ["flake8 (==3.8.3)", "mypy (==0.782)"]
testing = ["docopt", "pytest (<6.0.0)"]
[[package]]
name = "partd"
version = "1.3.0"
description = "Appendable key-value storage"
category = "main"
optional = false
python-versions = ">=3.7"
[package.dependencies]
locket = "*"
toolz = "*"
[package.extras]
complete = ["blosc", "numpy (>=1.9.0)", "pandas (>=0.19.0)", "pyzmq"]
[[package]]
name = "pastel"
version = "0.2.1"
description = "Bring colors to your terminal."
category = "dev"
optional = false
python-versions = ">=2.7, !=3.0.*, !=3.1.*, !=3.2.*, !=3.3.*"
[[package]]
name = "pathos"
version = "0.2.9"
description = "parallel graph management and execution in heterogeneous computing"
category = "main"
optional = true
python-versions = ">=2.7, !=3.0.*, !=3.1.*, !=3.2.*, !=3.3.*, !=3.4.*, !=3.5.*, !=3.6.*"
[package.dependencies]
dill = ">=0.3.5.1"
multiprocess = ">=0.70.13"
pox = ">=0.3.1"
ppft = ">=1.7.6.5"
[[package]]
name = "pathspec"
version = "0.10.2"
description = "Utility library for gitignore style pattern matching of file paths."
category = "dev"
optional = false
python-versions = ">=3.7"
[[package]]
name = "pathy"
version = "0.9.0"
description = "pathlib.Path subclasses for local and cloud bucket storage"
category = "main"
optional = false
python-versions = ">= 3.6"
[package.dependencies]
smart-open = ">=5.2.1,<6.0.0"
typer = ">=0.3.0,<1.0.0"
[package.extras]
all = ["azure-storage-blob", "boto3", "google-cloud-storage (>=1.26.0,<2.0.0)", "mock", "pytest", "pytest-coverage", "typer-cli"]
azure = ["azure-storage-blob"]
gcs = ["google-cloud-storage (>=1.26.0,<2.0.0)"]
s3 = ["boto3"]
test = ["mock", "pytest", "pytest-coverage", "typer-cli"]
[[package]]
name = "patsy"
version = "0.5.3"
description = "A Python package for describing statistical models and for building design matrices."
category = "main"
optional = false
python-versions = "*"
[package.dependencies]
numpy = ">=1.4"
six = "*"
[package.extras]
test = ["pytest", "pytest-cov", "scipy"]
[[package]]
name = "pexpect"
version = "4.8.0"
description = "Pexpect allows easy control of interactive console applications."
category = "dev"
optional = false
python-versions = "*"
[package.dependencies]
ptyprocess = ">=0.5"
[[package]]
name = "pickleshare"
version = "0.7.5"
description = "Tiny 'shelve'-like database with concurrency support"
category = "dev"
optional = false
python-versions = "*"
[[package]]
name = "pillow"
version = "9.3.0"
description = "Python Imaging Library (Fork)"
category = "main"
optional = false
python-versions = ">=3.7"
[package.extras]
docs = ["furo", "olefile", "sphinx (>=2.4)", "sphinx-copybutton", "sphinx-issues (>=3.0.1)", "sphinx-removed-in", "sphinxext-opengraph"]
tests = ["check-manifest", "coverage", "defusedxml", "markdown2", "olefile", "packaging", "pyroma", "pytest", "pytest-cov", "pytest-timeout"]
[[package]]
name = "pip"
version = "22.3.1"
description = "The PyPA recommended tool for installing Python packages."
category = "main"
optional = false
python-versions = ">=3.7"
[[package]]
name = "pkgutil-resolve-name"
version = "1.3.10"
description = "Resolve a name to an object."
category = "dev"
optional = false
python-versions = ">=3.6"
[[package]]
name = "platformdirs"
version = "2.5.4"
description = "A small Python package for determining appropriate platform-specific dirs, e.g. a \"user data dir\"."
category = "dev"
optional = false
python-versions = ">=3.7"
[package.extras]
docs = ["furo (>=2022.9.29)", "proselint (>=0.13)", "sphinx (>=5.3)", "sphinx-autodoc-typehints (>=1.19.4)"]
test = ["appdirs (==1.4.4)", "pytest (>=7.2)", "pytest-cov (>=4)", "pytest-mock (>=3.10)"]
[[package]]
name = "plotly"
version = "5.11.0"
description = "An open-source, interactive data visualization library for Python"
category = "main"
optional = false
python-versions = ">=3.6"
[package.dependencies]
tenacity = ">=6.2.0"
[[package]]
name = "pluggy"
version = "1.0.0"
description = "plugin and hook calling mechanisms for python"
category = "dev"
optional = false
python-versions = ">=3.6"
[package.extras]
dev = ["pre-commit", "tox"]
testing = ["pytest", "pytest-benchmark"]
[[package]]
name = "poethepoet"
version = "0.16.4"
description = "A task runner that works well with poetry."
category = "dev"
optional = false
python-versions = ">=3.7"
[package.dependencies]
pastel = ">=0.2.1,<0.3.0"
tomli = ">=1.2.2"
[package.extras]
poetry-plugin = ["poetry (>=1.0,<2.0)"]
[[package]]
name = "pox"
version = "0.3.2"
description = "utilities for filesystem exploration and automated builds"
category = "main"
optional = true
python-versions = ">=3.7"
[[package]]
name = "ppft"
version = "1.7.6.6"
description = "distributed and parallel python"
category = "main"
optional = true
python-versions = ">=3.7"
[package.extras]
dill = ["dill (>=0.3.6)"]
[[package]]
name = "preshed"
version = "3.0.8"
description = "Cython hash table that trusts the keys are pre-hashed"
category = "main"
optional = false
python-versions = ">=3.6"
[package.dependencies]
cymem = ">=2.0.2,<2.1.0"
murmurhash = ">=0.28.0,<1.1.0"
[[package]]
name = "progressbar2"
version = "4.2.0"
description = "A Python Progressbar library to provide visual (yet text based) progress to long running operations."
category = "main"
optional = true
python-versions = ">=3.7.0"
[package.dependencies]
python-utils = ">=3.0.0"
[package.extras]
docs = ["sphinx (>=1.8.5)"]
tests = ["flake8 (>=3.7.7)", "freezegun (>=0.3.11)", "pytest (>=4.6.9)", "pytest-cov (>=2.6.1)", "pytest-mypy", "sphinx (>=1.8.5)"]
[[package]]
name = "prometheus-client"
version = "0.15.0"
description = "Python client for the Prometheus monitoring system."
category = "dev"
optional = false
python-versions = ">=3.6"
[package.extras]
twisted = ["twisted"]
[[package]]
name = "prompt-toolkit"
version = "3.0.33"
description = "Library for building powerful interactive command lines in Python"
category = "dev"
optional = false
python-versions = ">=3.6.2"
[package.dependencies]
wcwidth = "*"
[[package]]
name = "protobuf"
version = "3.19.6"
description = "Protocol Buffers"
category = "dev"
optional = false
python-versions = ">=3.5"
[[package]]
name = "psutil"
version = "5.9.4"
description = "Cross-platform lib for process and system monitoring in Python."
category = "main"
optional = false
python-versions = ">=2.7, !=3.0.*, !=3.1.*, !=3.2.*, !=3.3.*"
[package.extras]
test = ["enum34", "ipaddress", "mock", "pywin32", "wmi"]
[[package]]
name = "ptyprocess"
version = "0.7.0"
description = "Run a subprocess in a pseudo terminal"
category = "dev"
optional = false
python-versions = "*"
[[package]]
name = "pure-eval"
version = "0.2.2"
description = "Safely evaluate AST nodes without side effects"
category = "dev"
optional = false
python-versions = "*"
[package.extras]
tests = ["pytest"]
[[package]]
name = "py"
version = "1.11.0"
description = "library with cross-python path, ini-parsing, io, code, log facilities"
category = "dev"
optional = false
python-versions = ">=2.7, !=3.0.*, !=3.1.*, !=3.2.*, !=3.3.*, !=3.4.*"
[[package]]
name = "pyasn1"
version = "0.4.8"
description = "ASN.1 types and codecs"
category = "dev"
optional = false
python-versions = "*"
[[package]]
name = "pyasn1-modules"
version = "0.2.8"
description = "A collection of ASN.1-based protocols modules."
category = "dev"
optional = false
python-versions = "*"
[package.dependencies]
pyasn1 = ">=0.4.6,<0.5.0"
[[package]]
name = "pycodestyle"
version = "2.8.0"
description = "Python style guide checker"
category = "dev"
optional = false
python-versions = ">=2.7, !=3.0.*, !=3.1.*, !=3.2.*, !=3.3.*, !=3.4.*"
[[package]]
name = "pycparser"
version = "2.21"
description = "C parser in Python"
category = "dev"
optional = false
python-versions = ">=2.7, !=3.0.*, !=3.1.*, !=3.2.*, !=3.3.*"
[[package]]
name = "pydantic"
version = "1.10.2"
description = "Data validation and settings management using python type hints"
category = "main"
optional = false
python-versions = ">=3.7"
[package.dependencies]
typing-extensions = ">=4.1.0"
[package.extras]
dotenv = ["python-dotenv (>=0.10.4)"]
email = ["email-validator (>=1.0.3)"]
[[package]]
name = "pydata-sphinx-theme"
version = "0.9.0"
description = "Bootstrap-based Sphinx theme from the PyData community"
category = "dev"
optional = false
python-versions = ">=3.7"
[package.dependencies]
beautifulsoup4 = "*"
docutils = "!=0.17.0"
packaging = "*"
sphinx = ">=4.0.2"
[package.extras]
coverage = ["codecov", "pydata-sphinx-theme[test]", "pytest-cov"]
dev = ["nox", "pre-commit", "pydata-sphinx-theme[coverage]", "pyyaml"]
doc = ["jupyter_sphinx", "myst-parser", "numpy", "numpydoc", "pandas", "plotly", "pytest", "pytest-regressions", "sphinx-design", "sphinx-sitemap", "sphinxext-rediraffe", "xarray"]
test = ["pydata-sphinx-theme[doc]", "pytest"]
[[package]]
name = "pydot"
version = "1.4.2"
description = "Python interface to Graphviz's Dot"
category = "main"
optional = false
python-versions = ">=2.7, !=3.0.*, !=3.1.*, !=3.2.*, !=3.3.*"
[package.dependencies]
pyparsing = ">=2.1.4"
[[package]]
name = "pydotplus"
version = "2.0.2"
description = "Python interface to Graphviz's Dot language"
category = "main"
optional = true
python-versions = "*"
[package.dependencies]
pyparsing = ">=2.0.1"
[[package]]
name = "pyflakes"
version = "2.4.0"
description = "passive checker of Python programs"
category = "dev"
optional = false
python-versions = ">=2.7, !=3.0.*, !=3.1.*, !=3.2.*, !=3.3.*"
[[package]]
name = "pygam"
version = "0.8.0"
description = "GAM toolkit"
category = "main"
optional = true
python-versions = "*"
[package.dependencies]
future = "*"
numpy = "*"
progressbar2 = "*"
scipy = "*"
[[package]]
name = "pygments"
version = "2.13.0"
description = "Pygments is a syntax highlighting package written in Python."
category = "main"
optional = false
python-versions = ">=3.6"
[package.extras]
plugins = ["importlib-metadata"]
[[package]]
name = "pygraphviz"
version = "1.10"
description = "Python interface to Graphviz"
category = "main"
optional = false
python-versions = ">=3.8"
[[package]]
name = "pyparsing"
version = "3.0.9"
description = "pyparsing module - Classes and methods to define and execute parsing grammars"
category = "main"
optional = false
python-versions = ">=3.6.8"
[package.extras]
diagrams = ["jinja2", "railroad-diagrams"]
[[package]]
name = "pyro-api"
version = "0.1.2"
description = "Generic API for dispatch to Pyro backends."
category = "main"
optional = true
python-versions = "*"
[package.extras]
dev = ["ipython", "sphinx (>=2.0)", "sphinx-rtd-theme"]
test = ["flake8", "pytest (>=5.0)"]
[[package]]
name = "pyro-ppl"
version = "1.8.3"
description = "A Python library for probabilistic modeling and inference"
category = "main"
optional = true
python-versions = ">=3.7"
[package.dependencies]
numpy = ">=1.7"
opt-einsum = ">=2.3.2"
pyro-api = ">=0.1.1"
torch = ">=1.11.0"
tqdm = ">=4.36"
[package.extras]
dev = ["black (>=21.4b0)", "flake8", "graphviz (>=0.8)", "isort (>=5.0)", "jupyter (>=1.0.0)", "lap", "matplotlib (>=1.3)", "mypy (>=0.812)", "nbformat", "nbsphinx (>=0.3.2)", "nbstripout", "nbval", "ninja", "pandas", "pillow (==8.2.0)", "pypandoc", "pytest (>=5.0)", "pytest-xdist", "scikit-learn", "scipy (>=1.1)", "seaborn (>=0.11.0)", "sphinx", "sphinx-rtd-theme", "torchvision (>=0.12.0)", "visdom (>=0.1.4,<0.2.2)", "wget", "yapf"]
extras = ["graphviz (>=0.8)", "jupyter (>=1.0.0)", "lap", "matplotlib (>=1.3)", "pandas", "pillow (==8.2.0)", "scikit-learn", "seaborn (>=0.11.0)", "torchvision (>=0.12.0)", "visdom (>=0.1.4,<0.2.2)", "wget"]
funsor = ["funsor[torch] (==0.4.3)"]
horovod = ["horovod[pytorch] (>=0.19)"]
profile = ["prettytable", "pytest-benchmark", "snakeviz"]
test = ["black (>=21.4b0)", "flake8", "graphviz (>=0.8)", "jupyter (>=1.0.0)", "lap", "matplotlib (>=1.3)", "nbval", "pandas", "pillow (==8.2.0)", "pytest (>=5.0)", "pytest-cov", "scikit-learn", "scipy (>=1.1)", "seaborn (>=0.11.0)", "torchvision (>=0.12.0)", "visdom (>=0.1.4,<0.2.2)", "wget"]
[[package]]
name = "pyrsistent"
version = "0.19.2"
description = "Persistent/Functional/Immutable data structures"
category = "dev"
optional = false
python-versions = ">=3.7"
[[package]]
name = "pytest"
version = "7.2.0"
description = "pytest: simple powerful testing with Python"
category = "dev"
optional = false
python-versions = ">=3.7"
[package.dependencies]
attrs = ">=19.2.0"
colorama = {version = "*", markers = "sys_platform == \"win32\""}
exceptiongroup = {version = ">=1.0.0rc8", markers = "python_version < \"3.11\""}
iniconfig = "*"
packaging = "*"
pluggy = ">=0.12,<2.0"
tomli = {version = ">=1.0.0", markers = "python_version < \"3.11\""}
[package.extras]
testing = ["argcomplete", "hypothesis (>=3.56)", "mock", "nose", "pygments (>=2.7.2)", "requests", "xmlschema"]
[[package]]
name = "pytest-cov"
version = "3.0.0"
description = "Pytest plugin for measuring coverage."
category = "dev"
optional = false
python-versions = ">=3.6"
[package.dependencies]
coverage = {version = ">=5.2.1", extras = ["toml"]}
pytest = ">=4.6"
[package.extras]
testing = ["fields", "hunter", "process-tests", "pytest-xdist", "six", "virtualenv"]
[[package]]
name = "pytest-split"
version = "0.8.0"
description = "Pytest plugin which splits the test suite to equally sized sub suites based on test execution time."
category = "dev"
optional = false
python-versions = ">=3.7.1,<4.0"
[package.dependencies]
pytest = ">=5,<8"
[[package]]
name = "python-dateutil"
version = "2.8.2"
description = "Extensions to the standard Python datetime module"
category = "main"
optional = false
python-versions = "!=3.0.*,!=3.1.*,!=3.2.*,>=2.7"
[package.dependencies]
six = ">=1.5"
[[package]]
name = "python-utils"
version = "3.4.5"
description = "Python Utils is a module with some convenient utilities not included with the standard Python install"
category = "main"
optional = true
python-versions = ">3.6.0"
[package.extras]
docs = ["mock", "python-utils", "sphinx"]
loguru = ["loguru"]
tests = ["flake8", "loguru", "pytest", "pytest-asyncio", "pytest-cov", "pytest-mypy", "sphinx", "types-setuptools"]
[[package]]
name = "pytz"
version = "2022.6"
description = "World timezone definitions, modern and historical"
category = "main"
optional = false
python-versions = "*"
[[package]]
name = "pytz-deprecation-shim"
version = "0.1.0.post0"
description = "Shims to make deprecation of pytz easier"
category = "dev"
optional = false
python-versions = "!=3.0.*,!=3.1.*,!=3.2.*,!=3.3.*,!=3.4.*,!=3.5.*,>=2.7"
[package.dependencies]
"backports.zoneinfo" = {version = "*", markers = "python_version >= \"3.6\" and python_version < \"3.9\""}
tzdata = {version = "*", markers = "python_version >= \"3.6\""}
[[package]]
name = "pywin32"
version = "305"
description = "Python for Window Extensions"
category = "dev"
optional = false
python-versions = "*"
[[package]]
name = "pywinpty"
version = "2.0.9"
description = "Pseudo terminal support for Windows from Python."
category = "dev"
optional = false
python-versions = ">=3.7"
[[package]]
name = "pyyaml"
version = "6.0"
description = "YAML parser and emitter for Python"
category = "main"
optional = false
python-versions = ">=3.6"
[[package]]
name = "pyzmq"
version = "24.0.1"
description = "Python bindings for 0MQ"
category = "dev"
optional = false
python-versions = ">=3.6"
[package.dependencies]
cffi = {version = "*", markers = "implementation_name == \"pypy\""}
py = {version = "*", markers = "implementation_name == \"pypy\""}
[[package]]
name = "qtconsole"
version = "5.4.0"
description = "Jupyter Qt console"
category = "dev"
optional = false
python-versions = ">= 3.7"
[package.dependencies]
ipykernel = ">=4.1"
ipython-genutils = "*"
jupyter-client = ">=4.1"
jupyter-core = "*"
pygments = "*"
pyzmq = ">=17.1"
qtpy = ">=2.0.1"
traitlets = "<5.2.1 || >5.2.1,<5.2.2 || >5.2.2"
[package.extras]
doc = ["Sphinx (>=1.3)"]
test = ["flaky", "pytest", "pytest-qt"]
[[package]]
name = "qtpy"
version = "2.3.0"
description = "Provides an abstraction layer on top of the various Qt bindings (PyQt5/6 and PySide2/6)."
category = "dev"
optional = false
python-versions = ">=3.7"
[package.dependencies]
packaging = "*"
[package.extras]
test = ["pytest (>=6,!=7.0.0,!=7.0.1)", "pytest-cov (>=3.0.0)", "pytest-qt"]
[[package]]
name = "requests"
version = "2.28.1"
description = "Python HTTP for Humans."
category = "main"
optional = false
python-versions = ">=3.7, <4"
[package.dependencies]
certifi = ">=2017.4.17"
charset-normalizer = ">=2,<3"
idna = ">=2.5,<4"
urllib3 = ">=1.21.1,<1.27"
[package.extras]
socks = ["PySocks (>=1.5.6,!=1.5.7)"]
use-chardet-on-py3 = ["chardet (>=3.0.2,<6)"]
[[package]]
name = "requests-oauthlib"
version = "1.3.1"
description = "OAuthlib authentication support for Requests."
category = "dev"
optional = false
python-versions = ">=2.7, !=3.0.*, !=3.1.*, !=3.2.*, !=3.3.*"
[package.dependencies]
oauthlib = ">=3.0.0"
requests = ">=2.0.0"
[package.extras]
rsa = ["oauthlib[signedtoken] (>=3.0.0)"]
[[package]]
name = "rpy2"
version = "3.5.6"
description = "Python interface to the R language (embedded R)"
category = "dev"
optional = false
python-versions = ">=3.7"
[package.dependencies]
cffi = ">=1.10.0"
jinja2 = "*"
packaging = {version = "*", markers = "platform_system == \"Windows\""}
pytz = "*"
tzlocal = "*"
[package.extras]
all = ["ipython", "numpy", "pandas", "pytest"]
numpy = ["pandas"]
pandas = ["numpy", "pandas"]
test = ["ipython", "numpy", "pandas", "pytest"]
[[package]]
name = "rsa"
version = "4.9"
description = "Pure-Python RSA implementation"
category = "dev"
optional = false
python-versions = ">=3.6,<4"
[package.dependencies]
pyasn1 = ">=0.1.3"
[[package]]
name = "s3transfer"
version = "0.6.0"
description = "An Amazon S3 Transfer Manager"
category = "main"
optional = false
python-versions = ">= 3.7"
[package.dependencies]
botocore = ">=1.12.36,<2.0a.0"
[package.extras]
crt = ["botocore[crt] (>=1.20.29,<2.0a.0)"]
[[package]]
name = "scikit-learn"
version = "1.0.2"
description = "A set of python modules for machine learning and data mining"
category = "main"
optional = false
python-versions = ">=3.7"
[package.dependencies]
joblib = ">=0.11"
numpy = ">=1.14.6"
scipy = ">=1.1.0"
threadpoolctl = ">=2.0.0"
[package.extras]
benchmark = ["matplotlib (>=2.2.3)", "memory-profiler (>=0.57.0)", "pandas (>=0.25.0)"]
docs = ["Pillow (>=7.1.2)", "matplotlib (>=2.2.3)", "memory-profiler (>=0.57.0)", "numpydoc (>=1.0.0)", "pandas (>=0.25.0)", "scikit-image (>=0.14.5)", "seaborn (>=0.9.0)", "sphinx (>=4.0.1)", "sphinx-gallery (>=0.7.0)", "sphinx-prompt (>=1.3.0)", "sphinxext-opengraph (>=0.4.2)"]
examples = ["matplotlib (>=2.2.3)", "pandas (>=0.25.0)", "scikit-image (>=0.14.5)", "seaborn (>=0.9.0)"]
tests = ["black (>=21.6b0)", "flake8 (>=3.8.2)", "matplotlib (>=2.2.3)", "mypy (>=0.770)", "pandas (>=0.25.0)", "pyamg (>=4.0.0)", "pytest (>=5.0.1)", "pytest-cov (>=2.9.0)", "scikit-image (>=0.14.5)"]
[[package]]
name = "scipy"
version = "1.8.1"
description = "SciPy: Scientific Library for Python"
category = "main"
optional = false
python-versions = ">=3.8,<3.11"
[package.dependencies]
numpy = ">=1.17.3,<1.25.0"
[[package]]
name = "scipy"
version = "1.9.3"
description = "Fundamental algorithms for scientific computing in Python"
category = "main"
optional = false
python-versions = ">=3.8"
[package.dependencies]
numpy = ">=1.18.5,<1.26.0"
[package.extras]
dev = ["flake8", "mypy", "pycodestyle", "typing_extensions"]
doc = ["matplotlib (>2)", "numpydoc", "pydata-sphinx-theme (==0.9.0)", "sphinx (!=4.1.0)", "sphinx-panels (>=0.5.2)", "sphinx-tabs"]
test = ["asv", "gmpy2", "mpmath", "pytest", "pytest-cov", "pytest-xdist", "scikit-umfpack", "threadpoolctl"]
[[package]]
name = "seaborn"
version = "0.12.1"
description = "Statistical data visualization"
category = "main"
optional = true
python-versions = ">=3.7"
[package.dependencies]
matplotlib = ">=3.1,<3.6.1 || >3.6.1"
numpy = ">=1.17"
pandas = ">=0.25"
[package.extras]
dev = ["flake8", "mypy", "pandas-stubs", "pre-commit", "pytest", "pytest-cov", "pytest-xdist"]
docs = ["ipykernel", "nbconvert", "numpydoc", "pydata_sphinx_theme (==0.10.0rc2)", "pyyaml", "sphinx-copybutton", "sphinx-design", "sphinx-issues"]
stats = ["scipy (>=1.3)", "statsmodels (>=0.10)"]
[[package]]
name = "send2trash"
version = "1.8.0"
description = "Send file to trash natively under Mac OS X, Windows and Linux."
category = "dev"
optional = false
python-versions = "*"
[package.extras]
nativelib = ["pyobjc-framework-Cocoa", "pywin32"]
objc = ["pyobjc-framework-Cocoa"]
win32 = ["pywin32"]
[[package]]
name = "setuptools"
version = "65.6.1"
description = "Easily download, build, install, upgrade, and uninstall Python packages"
category = "main"
optional = false
python-versions = ">=3.7"
[package.extras]
docs = ["furo", "jaraco.packaging (>=9)", "jaraco.tidelift (>=1.4)", "pygments-github-lexers (==0.0.5)", "rst.linker (>=1.9)", "sphinx (>=3.5)", "sphinx-favicon", "sphinx-hoverxref (<2)", "sphinx-inline-tabs", "sphinx-notfound-page (==0.8.3)", "sphinx-reredirects", "sphinxcontrib-towncrier"]
testing = ["build[virtualenv]", "filelock (>=3.4.0)", "flake8 (<5)", "flake8-2020", "ini2toml[lite] (>=0.9)", "jaraco.envs (>=2.2)", "jaraco.path (>=3.2.0)", "pip (>=19.1)", "pip-run (>=8.8)", "pytest (>=6)", "pytest-black (>=0.3.7)", "pytest-checkdocs (>=2.4)", "pytest-cov", "pytest-enabler (>=1.3)", "pytest-flake8", "pytest-mypy (>=0.9.1)", "pytest-perf", "pytest-timeout", "pytest-xdist", "tomli-w (>=1.0.0)", "virtualenv (>=13.0.0)", "wheel"]
testing-integration = ["build[virtualenv]", "filelock (>=3.4.0)", "jaraco.envs (>=2.2)", "jaraco.path (>=3.2.0)", "pytest", "pytest-enabler", "pytest-xdist", "tomli", "virtualenv (>=13.0.0)", "wheel"]
[[package]]
name = "setuptools-scm"
version = "7.0.5"
description = "the blessed package to manage your versions by scm tags"
category = "main"
optional = false
python-versions = ">=3.7"
[package.dependencies]
packaging = ">=20.0"
setuptools = "*"
tomli = ">=1.0.0"
typing-extensions = "*"
[package.extras]
test = ["pytest (>=6.2)", "virtualenv (>20)"]
toml = ["setuptools (>=42)"]
[[package]]
name = "shap"
version = "0.40.0"
description = "A unified approach to explain the output of any machine learning model."
category = "main"
optional = false
python-versions = "*"
[package.dependencies]
cloudpickle = "*"
numba = "*"
numpy = "*"
packaging = ">20.9"
pandas = "*"
scikit-learn = "*"
scipy = "*"
slicer = "0.0.7"
tqdm = ">4.25.0"
[package.extras]
all = ["catboost", "ipython", "lightgbm", "lime", "matplotlib", "nbsphinx", "numpydoc", "opencv-python", "pyod", "pyspark", "pytest", "pytest-cov", "pytest-mpl", "sentencepiece", "sphinx", "sphinx_rtd_theme", "torch", "transformers", "xgboost"]
docs = ["ipython", "matplotlib", "nbsphinx", "numpydoc", "sphinx", "sphinx_rtd_theme"]
others = ["lime"]
plots = ["ipython", "matplotlib"]
test = ["catboost", "lightgbm", "opencv-python", "pyod", "pyspark", "pytest", "pytest-cov", "pytest-mpl", "sentencepiece", "torch", "transformers", "xgboost"]
[[package]]
name = "six"
version = "1.16.0"
description = "Python 2 and 3 compatibility utilities"
category = "main"
optional = false
python-versions = ">=2.7, !=3.0.*, !=3.1.*, !=3.2.*"
[[package]]
name = "slicer"
version = "0.0.7"
description = "A small package for big slicing."
category = "main"
optional = false
python-versions = ">=3.6"
[[package]]
name = "smart-open"
version = "5.2.1"
description = "Utils for streaming large files (S3, HDFS, GCS, Azure Blob Storage, gzip, bz2...)"
category = "main"
optional = false
python-versions = ">=3.6,<4.0"
[package.extras]
all = ["azure-common", "azure-core", "azure-storage-blob", "boto3", "google-cloud-storage", "requests"]
azure = ["azure-common", "azure-core", "azure-storage-blob"]
gcs = ["google-cloud-storage"]
http = ["requests"]
s3 = ["boto3"]
test = ["azure-common", "azure-core", "azure-storage-blob", "boto3", "google-cloud-storage", "moto[server] (==1.3.14)", "parameterizedtestcase", "paramiko", "pathlib2", "pytest", "pytest-rerunfailures", "requests", "responses"]
webhdfs = ["requests"]
[[package]]
name = "sniffio"
version = "1.3.0"
description = "Sniff out which async library your code is running under"
category = "dev"
optional = false
python-versions = ">=3.7"
[[package]]
name = "snowballstemmer"
version = "2.2.0"
description = "This package provides 29 stemmers for 28 languages generated from Snowball algorithms."
category = "main"
optional = false
python-versions = "*"
[[package]]
name = "sortedcontainers"
version = "2.4.0"
description = "Sorted Containers -- Sorted List, Sorted Dict, Sorted Set"
category = "main"
optional = false
python-versions = "*"
[[package]]
name = "soupsieve"
version = "2.3.2.post1"
description = "A modern CSS selector implementation for Beautiful Soup."
category = "dev"
optional = false
python-versions = ">=3.6"
[[package]]
name = "spacy"
version = "3.4.3"
description = "Industrial-strength Natural Language Processing (NLP) in Python"
category = "main"
optional = false
python-versions = ">=3.6"
[package.dependencies]
catalogue = ">=2.0.6,<2.1.0"
cymem = ">=2.0.2,<2.1.0"
jinja2 = "*"
langcodes = ">=3.2.0,<4.0.0"
murmurhash = ">=0.28.0,<1.1.0"
numpy = ">=1.15.0"
packaging = ">=20.0"
pathy = ">=0.3.5"
preshed = ">=3.0.2,<3.1.0"
pydantic = ">=1.7.4,<1.8 || >1.8,<1.8.1 || >1.8.1,<1.11.0"
requests = ">=2.13.0,<3.0.0"
setuptools = "*"
spacy-legacy = ">=3.0.10,<3.1.0"
spacy-loggers = ">=1.0.0,<2.0.0"
srsly = ">=2.4.3,<3.0.0"
thinc = ">=8.1.0,<8.2.0"
tqdm = ">=4.38.0,<5.0.0"
typer = ">=0.3.0,<0.8.0"
wasabi = ">=0.9.1,<1.1.0"
[package.extras]
apple = ["thinc-apple-ops (>=0.1.0.dev0,<1.0.0)"]
cuda = ["cupy (>=5.0.0b4,<12.0.0)"]
cuda-autodetect = ["cupy-wheel (>=11.0.0,<12.0.0)"]
cuda100 = ["cupy-cuda100 (>=5.0.0b4,<12.0.0)"]
cuda101 = ["cupy-cuda101 (>=5.0.0b4,<12.0.0)"]
cuda102 = ["cupy-cuda102 (>=5.0.0b4,<12.0.0)"]
cuda110 = ["cupy-cuda110 (>=5.0.0b4,<12.0.0)"]
cuda111 = ["cupy-cuda111 (>=5.0.0b4,<12.0.0)"]
cuda112 = ["cupy-cuda112 (>=5.0.0b4,<12.0.0)"]
cuda113 = ["cupy-cuda113 (>=5.0.0b4,<12.0.0)"]
cuda114 = ["cupy-cuda114 (>=5.0.0b4,<12.0.0)"]
cuda115 = ["cupy-cuda115 (>=5.0.0b4,<12.0.0)"]
cuda116 = ["cupy-cuda116 (>=5.0.0b4,<12.0.0)"]
cuda117 = ["cupy-cuda117 (>=5.0.0b4,<12.0.0)"]
cuda11x = ["cupy-cuda11x (>=11.0.0,<12.0.0)"]
cuda80 = ["cupy-cuda80 (>=5.0.0b4,<12.0.0)"]
cuda90 = ["cupy-cuda90 (>=5.0.0b4,<12.0.0)"]
cuda91 = ["cupy-cuda91 (>=5.0.0b4,<12.0.0)"]
cuda92 = ["cupy-cuda92 (>=5.0.0b4,<12.0.0)"]
ja = ["sudachidict-core (>=20211220)", "sudachipy (>=0.5.2,!=0.6.1)"]
ko = ["natto-py (>=0.9.0)"]
lookups = ["spacy-lookups-data (>=1.0.3,<1.1.0)"]
ray = ["spacy-ray (>=0.1.0,<1.0.0)"]
th = ["pythainlp (>=2.0)"]
transformers = ["spacy-transformers (>=1.1.2,<1.2.0)"]
[[package]]
name = "spacy-legacy"
version = "3.0.10"
description = "Legacy registered functions for spaCy backwards compatibility"
category = "main"
optional = false
python-versions = ">=3.6"
[[package]]
name = "spacy-loggers"
version = "1.0.3"
description = "Logging utilities for SpaCy"
category = "main"
optional = false
python-versions = ">=3.6"
[package.dependencies]
wasabi = ">=0.8.1,<1.1.0"
[[package]]
name = "sparse"
version = "0.13.0"
description = "Sparse n-dimensional arrays"
category = "main"
optional = false
python-versions = ">=3.6, <4"
[package.dependencies]
numba = ">=0.49"
numpy = ">=1.17"
scipy = ">=0.19"
[package.extras]
all = ["dask[array]", "pytest (>=3.5)", "pytest-black", "pytest-cov", "sphinx", "sphinx-rtd-theme", "tox"]
docs = ["sphinx", "sphinx-rtd-theme"]
tests = ["dask[array]", "pytest (>=3.5)", "pytest-black", "pytest-cov"]
tox = ["dask[array]", "pytest (>=3.5)", "pytest-black", "pytest-cov", "tox"]
[[package]]
name = "sphinx"
version = "5.3.0"
description = "Python documentation generator"
category = "main"
optional = false
python-versions = ">=3.6"
[package.dependencies]
alabaster = ">=0.7,<0.8"
babel = ">=2.9"
colorama = {version = ">=0.4.5", markers = "sys_platform == \"win32\""}
docutils = ">=0.14,<0.20"
imagesize = ">=1.3"
importlib-metadata = {version = ">=4.8", markers = "python_version < \"3.10\""}
Jinja2 = ">=3.0"
packaging = ">=21.0"
Pygments = ">=2.12"
requests = ">=2.5.0"
snowballstemmer = ">=2.0"
sphinxcontrib-applehelp = "*"
sphinxcontrib-devhelp = "*"
sphinxcontrib-htmlhelp = ">=2.0.0"
sphinxcontrib-jsmath = "*"
sphinxcontrib-qthelp = "*"
sphinxcontrib-serializinghtml = ">=1.1.5"
[package.extras]
docs = ["sphinxcontrib-websupport"]
lint = ["docutils-stubs", "flake8 (>=3.5.0)", "flake8-bugbear", "flake8-comprehensions", "flake8-simplify", "isort", "mypy (>=0.981)", "sphinx-lint", "types-requests", "types-typed-ast"]
test = ["cython", "html5lib", "pytest (>=4.6)", "typed_ast"]
[[package]]
name = "sphinx-copybutton"
version = "0.5.0"
description = "Add a copy button to each of your code cells."
category = "dev"
optional = false
python-versions = ">=3.6"
[package.dependencies]
sphinx = ">=1.8"
[package.extras]
code-style = ["pre-commit (==2.12.1)"]
rtd = ["ipython", "myst-nb", "sphinx", "sphinx-book-theme"]
[[package]]
name = "sphinx-design"
version = "0.3.0"
description = "A sphinx extension for designing beautiful, view size responsive web components."
category = "main"
optional = false
python-versions = ">=3.7"
[package.dependencies]
sphinx = ">=4,<6"
[package.extras]
code-style = ["pre-commit (>=2.12,<3.0)"]
rtd = ["myst-parser (>=0.18.0,<0.19.0)"]
testing = ["myst-parser (>=0.18.0,<0.19.0)", "pytest (>=7.1,<8.0)", "pytest-cov", "pytest-regressions"]
theme-furo = ["furo (>=2022.06.04,<2022.07)"]
theme-pydata = ["pydata-sphinx-theme (>=0.9.0,<0.10.0)"]
theme-rtd = ["sphinx-rtd-theme (>=1.0,<2.0)"]
theme-sbt = ["sphinx-book-theme (>=0.3.0,<0.4.0)"]
[[package]]
name = "sphinx-rtd-theme"
version = "1.1.1"
description = "Read the Docs theme for Sphinx"
category = "dev"
optional = false
python-versions = "!=3.0.*,!=3.1.*,!=3.2.*,!=3.3.*,>=2.7"
[package.dependencies]
docutils = "<0.18"
sphinx = ">=1.6,<6"
[package.extras]
dev = ["bump2version", "sphinxcontrib-httpdomain", "transifex-client", "wheel"]
[[package]]
name = "sphinxcontrib-applehelp"
version = "1.0.2"
description = "sphinxcontrib-applehelp is a sphinx extension which outputs Apple help books"
category = "main"
optional = false
python-versions = ">=3.5"
[package.extras]
lint = ["docutils-stubs", "flake8", "mypy"]
test = ["pytest"]
[[package]]
name = "sphinxcontrib-devhelp"
version = "1.0.2"
description = "sphinxcontrib-devhelp is a sphinx extension which outputs Devhelp document."
category = "main"
optional = false
python-versions = ">=3.5"
[package.extras]
lint = ["docutils-stubs", "flake8", "mypy"]
test = ["pytest"]
[[package]]
name = "sphinxcontrib-googleanalytics"
version = "0.2.dev20220919"
description = "Sphinx extension googleanalytics"
category = "dev"
optional = false
python-versions = "*"
develop = false
[package.dependencies]
Sphinx = ">=0.6"
[package.source]
type = "git"
url = "https://github.com/sphinx-contrib/googleanalytics.git"
reference = "master"
resolved_reference = "42b3df99fdc01a136b9c575f3f251ae80cdfbe1d"
[[package]]
name = "sphinxcontrib-htmlhelp"
version = "2.0.0"
description = "sphinxcontrib-htmlhelp is a sphinx extension which renders HTML help files"
category = "main"
optional = false
python-versions = ">=3.6"
[package.extras]
lint = ["docutils-stubs", "flake8", "mypy"]
test = ["html5lib", "pytest"]
[[package]]
name = "sphinxcontrib-jsmath"
version = "1.0.1"
description = "A sphinx extension which renders display math in HTML via JavaScript"
category = "main"
optional = false
python-versions = ">=3.5"
[package.extras]
test = ["flake8", "mypy", "pytest"]
[[package]]
name = "sphinxcontrib-qthelp"
version = "1.0.3"
description = "sphinxcontrib-qthelp is a sphinx extension which outputs QtHelp document."
category = "main"
optional = false
python-versions = ">=3.5"
[package.extras]
lint = ["docutils-stubs", "flake8", "mypy"]
test = ["pytest"]
[[package]]
name = "sphinxcontrib-serializinghtml"
version = "1.1.5"
description = "sphinxcontrib-serializinghtml is a sphinx extension which outputs \"serialized\" HTML files (json and pickle)."
category = "main"
optional = false
python-versions = ">=3.5"
[package.extras]
lint = ["docutils-stubs", "flake8", "mypy"]
test = ["pytest"]
[[package]]
name = "srsly"
version = "2.4.5"
description = "Modern high-performance serialization utilities for Python"
category = "main"
optional = false
python-versions = ">=3.6"
[package.dependencies]
catalogue = ">=2.0.3,<2.1.0"
[[package]]
name = "stack-data"
version = "0.6.1"
description = "Extract data from python stack frames and tracebacks for informative displays"
category = "dev"
optional = false
python-versions = "*"
[package.dependencies]
asttokens = ">=2.1.0"
executing = ">=1.2.0"
pure-eval = "*"
[package.extras]
tests = ["cython", "littleutils", "pygments", "pytest", "typeguard"]
[[package]]
name = "statsmodels"
version = "0.13.5"
description = "Statistical computations and models for Python"
category = "main"
optional = false
python-versions = ">=3.7"
[package.dependencies]
numpy = {version = ">=1.17", markers = "python_version != \"3.10\" or platform_system != \"Windows\" or platform_python_implementation == \"PyPy\""}
packaging = ">=21.3"
pandas = ">=0.25"
patsy = ">=0.5.2"
scipy = [
{version = ">=1.3", markers = "(python_version > \"3.9\" or platform_system != \"Windows\" or platform_machine != \"x86\") and python_version < \"3.12\""},
{version = ">=1.3,<1.9", markers = "python_version == \"3.8\" and platform_system == \"Windows\" and platform_machine == \"x86\" or python_version == \"3.9\" and platform_system == \"Windows\" and platform_machine == \"x86\""},
]
[package.extras]
build = ["cython (>=0.29.32)"]
develop = ["Jinja2", "colorama", "cython (>=0.29.32)", "cython (>=0.29.32,<3.0.0)", "flake8", "isort", "joblib", "matplotlib (>=3)", "oldest-supported-numpy (>=2022.4.18)", "pytest (>=7.0.1,<7.1.0)", "pytest-randomly", "pytest-xdist", "pywinpty", "setuptools-scm[toml] (>=7.0.0,<7.1.0)"]
docs = ["ipykernel", "jupyter-client", "matplotlib", "nbconvert", "nbformat", "numpydoc", "pandas-datareader", "sphinx"]
[[package]]
name = "sympy"
version = "1.11.1"
description = "Computer algebra system (CAS) in Python"
category = "main"
optional = false
python-versions = ">=3.8"
[package.dependencies]
mpmath = ">=0.19"
[[package]]
name = "tblib"
version = "1.7.0"
description = "Traceback serialization library."
category = "main"
optional = false
python-versions = ">=2.7, !=3.0.*, !=3.1.*, !=3.2.*, !=3.3.*, !=3.4.*"
[[package]]
name = "tenacity"
version = "8.1.0"
description = "Retry code until it succeeds"
category = "main"
optional = false
python-versions = ">=3.6"
[package.extras]
doc = ["reno", "sphinx", "tornado (>=4.5)"]
[[package]]
name = "tensorboard"
version = "2.11.0"
description = "TensorBoard lets you watch Tensors Flow"
category = "dev"
optional = false
python-versions = ">=3.7"
[package.dependencies]
absl-py = ">=0.4"
google-auth = ">=1.6.3,<3"
google-auth-oauthlib = ">=0.4.1,<0.5"
grpcio = ">=1.24.3"
markdown = ">=2.6.8"
numpy = ">=1.12.0"
protobuf = ">=3.9.2,<4"
requests = ">=2.21.0,<3"
setuptools = ">=41.0.0"
tensorboard-data-server = ">=0.6.0,<0.7.0"
tensorboard-plugin-wit = ">=1.6.0"
werkzeug = ">=1.0.1"
wheel = ">=0.26"
[[package]]
name = "tensorboard-data-server"
version = "0.6.1"
description = "Fast data loading for TensorBoard"
category = "dev"
optional = false
python-versions = ">=3.6"
[[package]]
name = "tensorboard-plugin-wit"
version = "1.8.1"
description = "What-If Tool TensorBoard plugin."
category = "dev"
optional = false
python-versions = "*"
[[package]]
name = "tensorflow"
version = "2.11.0"
description = "TensorFlow is an open source machine learning framework for everyone."
category = "dev"
optional = false
python-versions = ">=3.7"
[package.dependencies]
absl-py = ">=1.0.0"
astunparse = ">=1.6.0"
flatbuffers = ">=2.0"
gast = ">=0.2.1,<=0.4.0"
google-pasta = ">=0.1.1"
grpcio = ">=1.24.3,<2.0"
h5py = ">=2.9.0"
keras = ">=2.11.0,<2.12"
libclang = ">=13.0.0"
numpy = ">=1.20"
opt-einsum = ">=2.3.2"
packaging = "*"
protobuf = ">=3.9.2,<3.20"
setuptools = "*"
six = ">=1.12.0"
tensorboard = ">=2.11,<2.12"
tensorflow-estimator = ">=2.11.0,<2.12"
tensorflow-io-gcs-filesystem = {version = ">=0.23.1", markers = "platform_machine != \"arm64\" or platform_system != \"Darwin\""}
termcolor = ">=1.1.0"
typing-extensions = ">=3.6.6"
wrapt = ">=1.11.0"
[[package]]
name = "tensorflow-estimator"
version = "2.11.0"
description = "TensorFlow Estimator."
category = "dev"
optional = false
python-versions = ">=3.7"
[[package]]
name = "tensorflow-io-gcs-filesystem"
version = "0.28.0"
description = "TensorFlow IO"
category = "dev"
optional = false
python-versions = ">=3.7, <3.11"
[package.extras]
tensorflow = ["tensorflow (>=2.11.0,<2.12.0)"]
tensorflow-aarch64 = ["tensorflow-aarch64 (>=2.11.0,<2.12.0)"]
tensorflow-cpu = ["tensorflow-cpu (>=2.11.0,<2.12.0)"]
tensorflow-gpu = ["tensorflow-gpu (>=2.11.0,<2.12.0)"]
tensorflow-rocm = ["tensorflow-rocm (>=2.11.0,<2.12.0)"]
[[package]]
name = "termcolor"
version = "2.1.1"
description = "ANSI color formatting for output in terminal"
category = "dev"
optional = false
python-versions = ">=3.7"
[package.extras]
tests = ["pytest", "pytest-cov"]
[[package]]
name = "terminado"
version = "0.17.0"
description = "Tornado websocket backend for the Xterm.js Javascript terminal emulator library."
category = "dev"
optional = false
python-versions = ">=3.7"
[package.dependencies]
ptyprocess = {version = "*", markers = "os_name != \"nt\""}
pywinpty = {version = ">=1.1.0", markers = "os_name == \"nt\""}
tornado = ">=6.1.0"
[package.extras]
docs = ["pydata-sphinx-theme", "sphinx"]
test = ["pre-commit", "pytest (>=7.0)", "pytest-timeout"]
[[package]]
name = "thinc"
version = "8.1.5"
description = "A refreshing functional take on deep learning, compatible with your favorite libraries"
category = "main"
optional = false
python-versions = ">=3.6"
[package.dependencies]
blis = ">=0.7.8,<0.8.0"
catalogue = ">=2.0.4,<2.1.0"
confection = ">=0.0.1,<1.0.0"
cymem = ">=2.0.2,<2.1.0"
murmurhash = ">=1.0.2,<1.1.0"
numpy = ">=1.15.0"
preshed = ">=3.0.2,<3.1.0"
pydantic = ">=1.7.4,<1.8 || >1.8,<1.8.1 || >1.8.1,<1.11.0"
setuptools = "*"
srsly = ">=2.4.0,<3.0.0"
wasabi = ">=0.8.1,<1.1.0"
[package.extras]
cuda = ["cupy (>=5.0.0b4)"]
cuda-autodetect = ["cupy-wheel (>=11.0.0)"]
cuda100 = ["cupy-cuda100 (>=5.0.0b4)"]
cuda101 = ["cupy-cuda101 (>=5.0.0b4)"]
cuda102 = ["cupy-cuda102 (>=5.0.0b4)"]
cuda110 = ["cupy-cuda110 (>=5.0.0b4)"]
cuda111 = ["cupy-cuda111 (>=5.0.0b4)"]
cuda112 = ["cupy-cuda112 (>=5.0.0b4)"]
cuda113 = ["cupy-cuda113 (>=5.0.0b4)"]
cuda114 = ["cupy-cuda114 (>=5.0.0b4)"]
cuda115 = ["cupy-cuda115 (>=5.0.0b4)"]
cuda116 = ["cupy-cuda116 (>=5.0.0b4)"]
cuda117 = ["cupy-cuda117 (>=5.0.0b4)"]
cuda11x = ["cupy-cuda11x (>=11.0.0)"]
cuda80 = ["cupy-cuda80 (>=5.0.0b4)"]
cuda90 = ["cupy-cuda90 (>=5.0.0b4)"]
cuda91 = ["cupy-cuda91 (>=5.0.0b4)"]
cuda92 = ["cupy-cuda92 (>=5.0.0b4)"]
datasets = ["ml-datasets (>=0.2.0,<0.3.0)"]
mxnet = ["mxnet (>=1.5.1,<1.6.0)"]
tensorflow = ["tensorflow (>=2.0.0,<2.6.0)"]
torch = ["torch (>=1.6.0)"]
[[package]]
name = "threadpoolctl"
version = "3.1.0"
description = "threadpoolctl"
category = "main"
optional = false
python-versions = ">=3.6"
[[package]]
name = "tinycss2"
version = "1.2.1"
description = "A tiny CSS parser"
category = "dev"
optional = false
python-versions = ">=3.7"
[package.dependencies]
webencodings = ">=0.4"
[package.extras]
doc = ["sphinx", "sphinx_rtd_theme"]
test = ["flake8", "isort", "pytest"]
[[package]]
name = "tokenize-rt"
version = "5.0.0"
description = "A wrapper around the stdlib `tokenize` which roundtrips."
category = "dev"
optional = false
python-versions = ">=3.7"
[[package]]
name = "tomli"
version = "2.0.1"
description = "A lil' TOML parser"
category = "main"
optional = false
python-versions = ">=3.7"
[[package]]
name = "toolz"
version = "0.12.0"
description = "List processing tools and functional utilities"
category = "main"
optional = false
python-versions = ">=3.5"
[[package]]
name = "torch"
version = "1.12.1"
description = "Tensors and Dynamic neural networks in Python with strong GPU acceleration"
category = "main"
optional = false
python-versions = ">=3.7.0"
[package.dependencies]
typing-extensions = "*"
[[package]]
name = "torchvision"
version = "0.13.1"
description = "image and video datasets and models for torch deep learning"
category = "main"
optional = false
python-versions = ">=3.7"
[package.dependencies]
numpy = "*"
pillow = ">=5.3.0,<8.3.0 || >=8.4.0"
requests = "*"
torch = "1.12.1"
typing-extensions = "*"
[package.extras]
scipy = ["scipy"]
[[package]]
name = "tornado"
version = "6.2"
description = "Tornado is a Python web framework and asynchronous networking library, originally developed at FriendFeed."
category = "main"
optional = false
python-versions = ">= 3.7"
[[package]]
name = "tqdm"
version = "4.64.1"
description = "Fast, Extensible Progress Meter"
category = "main"
optional = false
python-versions = "!=3.0.*,!=3.1.*,!=3.2.*,!=3.3.*,>=2.7"
[package.dependencies]
colorama = {version = "*", markers = "platform_system == \"Windows\""}
[package.extras]
dev = ["py-make (>=0.1.0)", "twine", "wheel"]
notebook = ["ipywidgets (>=6)"]
slack = ["slack-sdk"]
telegram = ["requests"]
[[package]]
name = "traitlets"
version = "5.5.0"
description = ""
category = "dev"
optional = false
python-versions = ">=3.7"
[package.extras]
docs = ["myst-parser", "pydata-sphinx-theme", "sphinx"]
test = ["pre-commit", "pytest"]
[[package]]
name = "typer"
version = "0.7.0"
description = "Typer, build great CLIs. Easy to code. Based on Python type hints."
category = "main"
optional = false
python-versions = ">=3.6"
[package.dependencies]
click = ">=7.1.1,<9.0.0"
[package.extras]
all = ["colorama (>=0.4.3,<0.5.0)", "rich (>=10.11.0,<13.0.0)", "shellingham (>=1.3.0,<2.0.0)"]
dev = ["autoflake (>=1.3.1,<2.0.0)", "flake8 (>=3.8.3,<4.0.0)", "pre-commit (>=2.17.0,<3.0.0)"]
doc = ["cairosvg (>=2.5.2,<3.0.0)", "mdx-include (>=1.4.1,<2.0.0)", "mkdocs (>=1.1.2,<2.0.0)", "mkdocs-material (>=8.1.4,<9.0.0)", "pillow (>=9.3.0,<10.0.0)"]
test = ["black (>=22.3.0,<23.0.0)", "coverage (>=6.2,<7.0)", "isort (>=5.0.6,<6.0.0)", "mypy (==0.910)", "pytest (>=4.4.0,<8.0.0)", "pytest-cov (>=2.10.0,<5.0.0)", "pytest-sugar (>=0.9.4,<0.10.0)", "pytest-xdist (>=1.32.0,<4.0.0)", "rich (>=10.11.0,<13.0.0)", "shellingham (>=1.3.0,<2.0.0)"]
[[package]]
name = "typing-extensions"
version = "4.4.0"
description = "Backported and Experimental Type Hints for Python 3.7+"
category = "main"
optional = false
python-versions = ">=3.7"
[[package]]
name = "tzdata"
version = "2022.6"
description = "Provider of IANA time zone data"
category = "dev"
optional = false
python-versions = ">=2"
[[package]]
name = "tzlocal"
version = "4.2"
description = "tzinfo object for the local timezone"
category = "dev"
optional = false
python-versions = ">=3.6"
[package.dependencies]
"backports.zoneinfo" = {version = "*", markers = "python_version < \"3.9\""}
pytz-deprecation-shim = "*"
tzdata = {version = "*", markers = "platform_system == \"Windows\""}
[package.extras]
devenv = ["black", "pyroma", "pytest-cov", "zest.releaser"]
test = ["pytest (>=4.3)", "pytest-mock (>=3.3)"]
[[package]]
name = "urllib3"
version = "1.26.12"
description = "HTTP library with thread-safe connection pooling, file post, and more."
category = "main"
optional = false
python-versions = ">=2.7, !=3.0.*, !=3.1.*, !=3.2.*, !=3.3.*, !=3.4.*, !=3.5.*, <4"
[package.extras]
brotli = ["brotli (>=1.0.9)", "brotlicffi (>=0.8.0)", "brotlipy (>=0.6.0)"]
secure = ["certifi", "cryptography (>=1.3.4)", "idna (>=2.0.0)", "ipaddress", "pyOpenSSL (>=0.14)", "urllib3-secure-extra"]
socks = ["PySocks (>=1.5.6,!=1.5.7,<2.0)"]
[[package]]
name = "wasabi"
version = "0.10.1"
description = "A lightweight console printing and formatting toolkit"
category = "main"
optional = false
python-versions = "*"
[[package]]
name = "wcwidth"
version = "0.2.5"
description = "Measures the displayed width of unicode strings in a terminal"
category = "dev"
optional = false
python-versions = "*"
[[package]]
name = "webencodings"
version = "0.5.1"
description = "Character encoding aliases for legacy web content"
category = "dev"
optional = false
python-versions = "*"
[[package]]
name = "websocket-client"
version = "1.4.2"
description = "WebSocket client for Python with low level API options"
category = "dev"
optional = false
python-versions = ">=3.7"
[package.extras]
docs = ["Sphinx (>=3.4)", "sphinx-rtd-theme (>=0.5)"]
optional = ["python-socks", "wsaccel"]
test = ["websockets"]
[[package]]
name = "werkzeug"
version = "2.2.2"
description = "The comprehensive WSGI web application library."
category = "dev"
optional = false
python-versions = ">=3.7"
[package.dependencies]
MarkupSafe = ">=2.1.1"
[package.extras]
watchdog = ["watchdog"]
[[package]]
name = "wheel"
version = "0.38.4"
description = "A built-package format for Python"
category = "main"
optional = false
python-versions = ">=3.7"
[package.extras]
test = ["pytest (>=3.0.0)"]
[[package]]
name = "widgetsnbextension"
version = "4.0.3"
description = "Jupyter interactive widgets for Jupyter Notebook"
category = "dev"
optional = false
python-versions = ">=3.7"
[[package]]
name = "wrapt"
version = "1.14.1"
description = "Module for decorators, wrappers and monkey patching."
category = "dev"
optional = false
python-versions = "!=3.0.*,!=3.1.*,!=3.2.*,!=3.3.*,!=3.4.*,>=2.7"
[[package]]
name = "xgboost"
version = "1.7.1"
description = "XGBoost Python Package"
category = "main"
optional = false
python-versions = ">=3.8"
[package.dependencies]
numpy = "*"
scipy = "*"
[package.extras]
dask = ["dask", "distributed", "pandas"]
datatable = ["datatable"]
pandas = ["pandas"]
plotting = ["graphviz", "matplotlib"]
pyspark = ["cloudpickle", "pyspark", "scikit-learn"]
scikit-learn = ["scikit-learn"]
[[package]]
name = "zict"
version = "2.2.0"
description = "Mutable mapping tools"
category = "main"
optional = false
python-versions = ">=3.7"
[package.dependencies]
heapdict = "*"
[[package]]
name = "zipp"
version = "3.10.0"
description = "Backport of pathlib-compatible object wrapper for zip files"
category = "main"
optional = false
python-versions = ">=3.7"
[package.extras]
docs = ["furo", "jaraco.packaging (>=9)", "jaraco.tidelift (>=1.4)", "rst.linker (>=1.9)", "sphinx (>=3.5)"]
testing = ["flake8 (<5)", "func-timeout", "jaraco.functools", "jaraco.itertools", "more-itertools", "pytest (>=6)", "pytest-black (>=0.3.7)", "pytest-checkdocs (>=2.4)", "pytest-cov", "pytest-enabler (>=1.3)", "pytest-flake8", "pytest-mypy (>=0.9.1)"]
[extras]
causalml = ["causalml", "llvmlite", "cython"]
econml = ["econml"]
plotting = ["matplotlib"]
pydot = ["pydot"]
pygraphviz = ["pygraphviz"]
[metadata]
lock-version = "1.1"
python-versions = ">=3.8,<3.10"
content-hash = "12d40b6d9616d209cd632e2315aafc72f78d3e35efdf6e52ca410588465787cc"
[metadata.files]
absl-py = [
{file = "absl-py-1.3.0.tar.gz", hash = "sha256:463c38a08d2e4cef6c498b76ba5bd4858e4c6ef51da1a5a1f27139a022e20248"},
{file = "absl_py-1.3.0-py3-none-any.whl", hash = "sha256:34995df9bd7a09b3b8749e230408f5a2a2dd7a68a0d33c12a3d0cb15a041a507"},
]
alabaster = [
{file = "alabaster-0.7.12-py2.py3-none-any.whl", hash = "sha256:446438bdcca0e05bd45ea2de1668c1d9b032e1a9154c2c259092d77031ddd359"},
{file = "alabaster-0.7.12.tar.gz", hash = "sha256:a661d72d58e6ea8a57f7a86e37d86716863ee5e92788398526d58b26a4e4dc02"},
]
anyio = [
{file = "anyio-3.6.2-py3-none-any.whl", hash = "sha256:fbbe32bd270d2a2ef3ed1c5d45041250284e31fc0a4df4a5a6071842051a51e3"},
{file = "anyio-3.6.2.tar.gz", hash = "sha256:25ea0d673ae30af41a0c442f81cf3b38c7e79fdc7b60335a4c14e05eb0947421"},
]
appnope = [
{file = "appnope-0.1.3-py2.py3-none-any.whl", hash = "sha256:265a455292d0bd8a72453494fa24df5a11eb18373a60c7c0430889f22548605e"},
{file = "appnope-0.1.3.tar.gz", hash = "sha256:02bd91c4de869fbb1e1c50aafc4098827a7a54ab2f39d9dcba6c9547ed920e24"},
]
argon2-cffi = [
{file = "argon2-cffi-21.3.0.tar.gz", hash = "sha256:d384164d944190a7dd7ef22c6aa3ff197da12962bd04b17f64d4e93d934dba5b"},
{file = "argon2_cffi-21.3.0-py3-none-any.whl", hash = "sha256:8c976986f2c5c0e5000919e6de187906cfd81fb1c72bf9d88c01177e77da7f80"},
]
argon2-cffi-bindings = [
{file = "argon2-cffi-bindings-21.2.0.tar.gz", hash = "sha256:bb89ceffa6c791807d1305ceb77dbfacc5aa499891d2c55661c6459651fc39e3"},
{file = "argon2_cffi_bindings-21.2.0-cp36-abi3-macosx_10_9_x86_64.whl", hash = "sha256:ccb949252cb2ab3a08c02024acb77cfb179492d5701c7cbdbfd776124d4d2367"},
{file = "argon2_cffi_bindings-21.2.0-cp36-abi3-manylinux_2_17_aarch64.manylinux2014_aarch64.whl", hash = "sha256:9524464572e12979364b7d600abf96181d3541da11e23ddf565a32e70bd4dc0d"},
{file = "argon2_cffi_bindings-21.2.0-cp36-abi3-manylinux_2_17_x86_64.manylinux2014_x86_64.whl", hash = "sha256:b746dba803a79238e925d9046a63aa26bf86ab2a2fe74ce6b009a1c3f5c8f2ae"},
{file = "argon2_cffi_bindings-21.2.0-cp36-abi3-manylinux_2_5_i686.manylinux1_i686.manylinux_2_17_i686.manylinux2014_i686.whl", hash = "sha256:58ed19212051f49a523abb1dbe954337dc82d947fb6e5a0da60f7c8471a8476c"},
{file = "argon2_cffi_bindings-21.2.0-cp36-abi3-musllinux_1_1_aarch64.whl", hash = "sha256:bd46088725ef7f58b5a1ef7ca06647ebaf0eb4baff7d1d0d177c6cc8744abd86"},
{file = "argon2_cffi_bindings-21.2.0-cp36-abi3-musllinux_1_1_i686.whl", hash = "sha256:8cd69c07dd875537a824deec19f978e0f2078fdda07fd5c42ac29668dda5f40f"},
{file = "argon2_cffi_bindings-21.2.0-cp36-abi3-musllinux_1_1_x86_64.whl", hash = "sha256:f1152ac548bd5b8bcecfb0b0371f082037e47128653df2e8ba6e914d384f3c3e"},
{file = "argon2_cffi_bindings-21.2.0-cp36-abi3-win32.whl", hash = "sha256:603ca0aba86b1349b147cab91ae970c63118a0f30444d4bc80355937c950c082"},
{file = "argon2_cffi_bindings-21.2.0-cp36-abi3-win_amd64.whl", hash = "sha256:b2ef1c30440dbbcba7a5dc3e319408b59676e2e039e2ae11a8775ecf482b192f"},
{file = "argon2_cffi_bindings-21.2.0-cp38-abi3-macosx_10_9_universal2.whl", hash = "sha256:e415e3f62c8d124ee16018e491a009937f8cf7ebf5eb430ffc5de21b900dad93"},
{file = "argon2_cffi_bindings-21.2.0-pp37-pypy37_pp73-macosx_10_9_x86_64.whl", hash = "sha256:3e385d1c39c520c08b53d63300c3ecc28622f076f4c2b0e6d7e796e9f6502194"},
{file = "argon2_cffi_bindings-21.2.0-pp37-pypy37_pp73-manylinux_2_17_aarch64.manylinux2014_aarch64.whl", hash = "sha256:2c3e3cc67fdb7d82c4718f19b4e7a87123caf8a93fde7e23cf66ac0337d3cb3f"},
{file = "argon2_cffi_bindings-21.2.0-pp37-pypy37_pp73-manylinux_2_17_x86_64.manylinux2014_x86_64.whl", hash = "sha256:6a22ad9800121b71099d0fb0a65323810a15f2e292f2ba450810a7316e128ee5"},
{file = "argon2_cffi_bindings-21.2.0-pp37-pypy37_pp73-manylinux_2_5_i686.manylinux1_i686.manylinux_2_17_i686.manylinux2014_i686.whl", hash = "sha256:f9f8b450ed0547e3d473fdc8612083fd08dd2120d6ac8f73828df9b7d45bb351"},
{file = "argon2_cffi_bindings-21.2.0-pp37-pypy37_pp73-win_amd64.whl", hash = "sha256:93f9bf70084f97245ba10ee36575f0c3f1e7d7724d67d8e5b08e61787c320ed7"},
{file = "argon2_cffi_bindings-21.2.0-pp38-pypy38_pp73-macosx_10_9_x86_64.whl", hash = "sha256:3b9ef65804859d335dc6b31582cad2c5166f0c3e7975f324d9ffaa34ee7e6583"},
{file = "argon2_cffi_bindings-21.2.0-pp38-pypy38_pp73-manylinux_2_17_aarch64.manylinux2014_aarch64.whl", hash = "sha256:d4966ef5848d820776f5f562a7d45fdd70c2f330c961d0d745b784034bd9f48d"},
{file = "argon2_cffi_bindings-21.2.0-pp38-pypy38_pp73-manylinux_2_17_x86_64.manylinux2014_x86_64.whl", hash = "sha256:20ef543a89dee4db46a1a6e206cd015360e5a75822f76df533845c3cbaf72670"},
{file = "argon2_cffi_bindings-21.2.0-pp38-pypy38_pp73-manylinux_2_5_i686.manylinux1_i686.manylinux_2_17_i686.manylinux2014_i686.whl", hash = "sha256:ed2937d286e2ad0cc79a7087d3c272832865f779430e0cc2b4f3718d3159b0cb"},
{file = "argon2_cffi_bindings-21.2.0-pp38-pypy38_pp73-win_amd64.whl", hash = "sha256:5e00316dabdaea0b2dd82d141cc66889ced0cdcbfa599e8b471cf22c620c329a"},
]
asttokens = [
{file = "asttokens-2.1.0-py2.py3-none-any.whl", hash = "sha256:1b28ed85e254b724439afc783d4bee767f780b936c3fe8b3275332f42cf5f561"},
{file = "asttokens-2.1.0.tar.gz", hash = "sha256:4aa76401a151c8cc572d906aad7aea2a841780834a19d780f4321c0fe1b54635"},
]
astunparse = [
{file = "astunparse-1.6.3-py2.py3-none-any.whl", hash = "sha256:c2652417f2c8b5bb325c885ae329bdf3f86424075c4fd1a128674bc6fba4b8e8"},
{file = "astunparse-1.6.3.tar.gz", hash = "sha256:5ad93a8456f0d084c3456d059fd9a92cce667963232cbf763eac3bc5b7940872"},
]
attrs = [
{file = "attrs-22.1.0-py2.py3-none-any.whl", hash = "sha256:86efa402f67bf2df34f51a335487cf46b1ec130d02b8d39fd248abfd30da551c"},
{file = "attrs-22.1.0.tar.gz", hash = "sha256:29adc2665447e5191d0e7c568fde78b21f9672d344281d0c6e1ab085429b22b6"},
]
autogluon-common = [
{file = "autogluon.common-0.6.0-py3-none-any.whl", hash = "sha256:8e1a46efaab051069589b875e417df30b38150a908e9aa2ff3ab479747a487ce"},
{file = "autogluon.common-0.6.0.tar.gz", hash = "sha256:d967844c728ad8e9a5c0f9e0deddbe6c4beb0e47cdf829a44a4834b5917798e0"},
]
autogluon-core = [
{file = "autogluon.core-0.6.0-py3-none-any.whl", hash = "sha256:b7efd2dfebfc9a3be0e39d1bf1bd352f45b23cccd503cf32afb9f5f23d58126b"},
{file = "autogluon.core-0.6.0.tar.gz", hash = "sha256:a6b6d57ec38d4193afab6b121cde63a6085446a51f84b9fa358221b7fed71ff4"},
]
autogluon-features = [
{file = "autogluon.features-0.6.0-py3-none-any.whl", hash = "sha256:ecff1a69cc768bc55777b3f7453ee89859352162dd43adda4451faadc9e583bf"},
{file = "autogluon.features-0.6.0.tar.gz", hash = "sha256:dced399ac2652c7c872da5208d0a0383778aeca3706a1b987b9781c9420d80c7"},
]
autogluon-tabular = [
{file = "autogluon.tabular-0.6.0-py3-none-any.whl", hash = "sha256:16404037c475e8746d61a7b1c977d5fd14afd853ebc9777fb0eafc851d37f8ad"},
{file = "autogluon.tabular-0.6.0.tar.gz", hash = "sha256:91892b7c9749942526eabfdd1bbb6d9daae2c24f785570a0552b2c7b9b851ab4"},
]
babel = [
{file = "Babel-2.11.0-py3-none-any.whl", hash = "sha256:1ad3eca1c885218f6dce2ab67291178944f810a10a9b5f3cb8382a5a232b64fe"},
{file = "Babel-2.11.0.tar.gz", hash = "sha256:5ef4b3226b0180dedded4229651c8b0e1a3a6a2837d45a073272f313e4cf97f6"},
]
backcall = [
{file = "backcall-0.2.0-py2.py3-none-any.whl", hash = "sha256:fbbce6a29f263178a1f7915c1940bde0ec2b2a967566fe1c65c1dfb7422bd255"},
{file = "backcall-0.2.0.tar.gz", hash = "sha256:5cbdbf27be5e7cfadb448baf0aa95508f91f2bbc6c6437cd9cd06e2a4c215e1e"},
]
backports-zoneinfo = [
{file = "backports.zoneinfo-0.2.1-cp36-cp36m-macosx_10_14_x86_64.whl", hash = "sha256:da6013fd84a690242c310d77ddb8441a559e9cb3d3d59ebac9aca1a57b2e18bc"},
{file = "backports.zoneinfo-0.2.1-cp36-cp36m-manylinux1_i686.whl", hash = "sha256:89a48c0d158a3cc3f654da4c2de1ceba85263fafb861b98b59040a5086259722"},
{file = "backports.zoneinfo-0.2.1-cp36-cp36m-manylinux1_x86_64.whl", hash = "sha256:1c5742112073a563c81f786e77514969acb58649bcdf6cdf0b4ed31a348d4546"},
{file = "backports.zoneinfo-0.2.1-cp36-cp36m-win32.whl", hash = "sha256:e8236383a20872c0cdf5a62b554b27538db7fa1bbec52429d8d106effbaeca08"},
{file = "backports.zoneinfo-0.2.1-cp36-cp36m-win_amd64.whl", hash = "sha256:8439c030a11780786a2002261569bdf362264f605dfa4d65090b64b05c9f79a7"},
{file = "backports.zoneinfo-0.2.1-cp37-cp37m-macosx_10_14_x86_64.whl", hash = "sha256:f04e857b59d9d1ccc39ce2da1021d196e47234873820cbeaad210724b1ee28ac"},
{file = "backports.zoneinfo-0.2.1-cp37-cp37m-manylinux1_i686.whl", hash = "sha256:17746bd546106fa389c51dbea67c8b7c8f0d14b5526a579ca6ccf5ed72c526cf"},
{file = "backports.zoneinfo-0.2.1-cp37-cp37m-manylinux1_x86_64.whl", hash = "sha256:5c144945a7752ca544b4b78c8c41544cdfaf9786f25fe5ffb10e838e19a27570"},
{file = "backports.zoneinfo-0.2.1-cp37-cp37m-win32.whl", hash = "sha256:e55b384612d93be96506932a786bbcde5a2db7a9e6a4bb4bffe8b733f5b9036b"},
{file = "backports.zoneinfo-0.2.1-cp37-cp37m-win_amd64.whl", hash = "sha256:a76b38c52400b762e48131494ba26be363491ac4f9a04c1b7e92483d169f6582"},
{file = "backports.zoneinfo-0.2.1-cp38-cp38-macosx_10_14_x86_64.whl", hash = "sha256:8961c0f32cd0336fb8e8ead11a1f8cd99ec07145ec2931122faaac1c8f7fd987"},
{file = "backports.zoneinfo-0.2.1-cp38-cp38-manylinux1_i686.whl", hash = "sha256:e81b76cace8eda1fca50e345242ba977f9be6ae3945af8d46326d776b4cf78d1"},
{file = "backports.zoneinfo-0.2.1-cp38-cp38-manylinux1_x86_64.whl", hash = "sha256:7b0a64cda4145548fed9efc10322770f929b944ce5cee6c0dfe0c87bf4c0c8c9"},
{file = "backports.zoneinfo-0.2.1-cp38-cp38-win32.whl", hash = "sha256:1b13e654a55cd45672cb54ed12148cd33628f672548f373963b0bff67b217328"},
{file = "backports.zoneinfo-0.2.1-cp38-cp38-win_amd64.whl", hash = "sha256:4a0f800587060bf8880f954dbef70de6c11bbe59c673c3d818921f042f9954a6"},
{file = "backports.zoneinfo-0.2.1.tar.gz", hash = "sha256:fadbfe37f74051d024037f223b8e001611eac868b5c5b06144ef4d8b799862f2"},
]
beautifulsoup4 = [
{file = "beautifulsoup4-4.11.1-py3-none-any.whl", hash = "sha256:58d5c3d29f5a36ffeb94f02f0d786cd53014cf9b3b3951d42e0080d8a9498d30"},
{file = "beautifulsoup4-4.11.1.tar.gz", hash = "sha256:ad9aa55b65ef2808eb405f46cf74df7fcb7044d5cbc26487f96eb2ef2e436693"},
]
black = [
{file = "black-22.10.0-1fixedarch-cp310-cp310-macosx_11_0_x86_64.whl", hash = "sha256:5cc42ca67989e9c3cf859e84c2bf014f6633db63d1cbdf8fdb666dcd9e77e3fa"},
{file = "black-22.10.0-1fixedarch-cp311-cp311-macosx_11_0_x86_64.whl", hash = "sha256:5d8f74030e67087b219b032aa33a919fae8806d49c867846bfacde57f43972ef"},
{file = "black-22.10.0-1fixedarch-cp37-cp37m-macosx_10_16_x86_64.whl", hash = "sha256:197df8509263b0b8614e1df1756b1dd41be6738eed2ba9e9769f3880c2b9d7b6"},
{file = "black-22.10.0-1fixedarch-cp38-cp38-macosx_10_16_x86_64.whl", hash = "sha256:2644b5d63633702bc2c5f3754b1b475378fbbfb481f62319388235d0cd104c2d"},
{file = "black-22.10.0-1fixedarch-cp39-cp39-macosx_11_0_x86_64.whl", hash = "sha256:e41a86c6c650bcecc6633ee3180d80a025db041a8e2398dcc059b3afa8382cd4"},
{file = "black-22.10.0-cp310-cp310-macosx_11_0_arm64.whl", hash = "sha256:2039230db3c6c639bd84efe3292ec7b06e9214a2992cd9beb293d639c6402edb"},
{file = "black-22.10.0-cp310-cp310-manylinux_2_17_x86_64.manylinux2014_x86_64.whl", hash = "sha256:14ff67aec0a47c424bc99b71005202045dc09270da44a27848d534600ac64fc7"},
{file = "black-22.10.0-cp310-cp310-win_amd64.whl", hash = "sha256:819dc789f4498ecc91438a7de64427c73b45035e2e3680c92e18795a839ebb66"},
{file = "black-22.10.0-cp311-cp311-macosx_11_0_arm64.whl", hash = "sha256:5b9b29da4f564ba8787c119f37d174f2b69cdfdf9015b7d8c5c16121ddc054ae"},
{file = "black-22.10.0-cp311-cp311-manylinux_2_17_x86_64.manylinux2014_x86_64.whl", hash = "sha256:b8b49776299fece66bffaafe357d929ca9451450f5466e997a7285ab0fe28e3b"},
{file = "black-22.10.0-cp311-cp311-win_amd64.whl", hash = "sha256:21199526696b8f09c3997e2b4db8d0b108d801a348414264d2eb8eb2532e540d"},
{file = "black-22.10.0-cp37-cp37m-manylinux_2_17_x86_64.manylinux2014_x86_64.whl", hash = "sha256:1e464456d24e23d11fced2bc8c47ef66d471f845c7b7a42f3bd77bf3d1789650"},
{file = "black-22.10.0-cp37-cp37m-win_amd64.whl", hash = "sha256:9311e99228ae10023300ecac05be5a296f60d2fd10fff31cf5c1fa4ca4b1988d"},
{file = "black-22.10.0-cp38-cp38-macosx_11_0_arm64.whl", hash = "sha256:fba8a281e570adafb79f7755ac8721b6cf1bbf691186a287e990c7929c7692ff"},
{file = "black-22.10.0-cp38-cp38-manylinux_2_17_x86_64.manylinux2014_x86_64.whl", hash = "sha256:915ace4ff03fdfff953962fa672d44be269deb2eaf88499a0f8805221bc68c87"},
{file = "black-22.10.0-cp38-cp38-win_amd64.whl", hash = "sha256:444ebfb4e441254e87bad00c661fe32df9969b2bf224373a448d8aca2132b395"},
{file = "black-22.10.0-cp39-cp39-macosx_11_0_arm64.whl", hash = "sha256:974308c58d057a651d182208a484ce80a26dac0caef2895836a92dd6ebd725e0"},
{file = "black-22.10.0-cp39-cp39-manylinux_2_17_x86_64.manylinux2014_x86_64.whl", hash = "sha256:72ef3925f30e12a184889aac03d77d031056860ccae8a1e519f6cbb742736383"},
{file = "black-22.10.0-cp39-cp39-win_amd64.whl", hash = "sha256:432247333090c8c5366e69627ccb363bc58514ae3e63f7fc75c54b1ea80fa7de"},
{file = "black-22.10.0-py3-none-any.whl", hash = "sha256:c957b2b4ea88587b46cf49d1dc17681c1e672864fd7af32fc1e9664d572b3458"},
{file = "black-22.10.0.tar.gz", hash = "sha256:f513588da599943e0cde4e32cc9879e825d58720d6557062d1098c5ad80080e1"},
]
bleach = [
{file = "bleach-5.0.1-py3-none-any.whl", hash = "sha256:085f7f33c15bd408dd9b17a4ad77c577db66d76203e5984b1bd59baeee948b2a"},
{file = "bleach-5.0.1.tar.gz", hash = "sha256:0d03255c47eb9bd2f26aa9bb7f2107732e7e8fe195ca2f64709fcf3b0a4a085c"},
]
blis = [
{file = "blis-0.7.9-cp310-cp310-macosx_10_9_x86_64.whl", hash = "sha256:b3ea73707a7938304c08363a0b990600e579bfb52dece7c674eafac4bf2df9f7"},
{file = "blis-0.7.9-cp310-cp310-macosx_11_0_arm64.whl", hash = "sha256:e85993364cae82707bfe7e637bee64ec96e232af31301e5c81a351778cb394b9"},
{file = "blis-0.7.9-cp310-cp310-manylinux_2_17_aarch64.manylinux2014_aarch64.whl", hash = "sha256:d205a7e69523e2bacdd67ea906b82b84034067e0de83b33bd83eb96b9e844ae3"},
{file = "blis-0.7.9-cp310-cp310-manylinux_2_17_x86_64.manylinux2014_x86_64.whl", hash = "sha256:b9737035636452fb6d08e7ab79e5a9904be18a0736868a129179cd9f9ab59825"},
{file = "blis-0.7.9-cp310-cp310-win_amd64.whl", hash = "sha256:d3882b4f44a33367812b5e287c0690027092830ffb1cce124b02f64e761819a4"},
{file = "blis-0.7.9-cp311-cp311-macosx_10_9_x86_64.whl", hash = "sha256:3dbb44311029263a6f65ed55a35f970aeb1d20b18bfac4c025de5aadf7889a8c"},
{file = "blis-0.7.9-cp311-cp311-macosx_11_0_arm64.whl", hash = "sha256:6fd5941bd5a21082b19d1dd0f6d62cd35609c25eb769aa3457d9877ef2ce37a9"},
{file = "blis-0.7.9-cp311-cp311-manylinux_2_17_aarch64.manylinux2014_aarch64.whl", hash = "sha256:97ad55e9ef36e4ff06b35802d0cf7bfc56f9697c6bc9427f59c90956bb98377d"},
{file = "blis-0.7.9-cp311-cp311-manylinux_2_17_x86_64.manylinux2014_x86_64.whl", hash = "sha256:f7b6315d7b1ac5546bc0350f5f8d7cc064438d23db19a5c21aaa6ae7d93c1ab5"},
{file = "blis-0.7.9-cp311-cp311-win_amd64.whl", hash = "sha256:5fd46c649acd1920482b4f5556d1c88693cba9bf6a494a020b00f14b42e1132f"},
{file = "blis-0.7.9-cp36-cp36m-manylinux_2_17_aarch64.manylinux2014_aarch64.whl", hash = "sha256:db2959560dcb34e912dad0e0d091f19b05b61363bac15d78307c01334a4e5d9d"},
{file = "blis-0.7.9-cp36-cp36m-manylinux_2_17_x86_64.manylinux2014_x86_64.whl", hash = "sha256:c0521231bc95ab522f280da3bbb096299c910a62cac2376d48d4a1d403c54393"},
{file = "blis-0.7.9-cp36-cp36m-win_amd64.whl", hash = "sha256:d811e88480203d75e6e959f313fdbf3326393b4e2b317067d952347f5c56216e"},
{file = "blis-0.7.9-cp37-cp37m-macosx_10_9_x86_64.whl", hash = "sha256:5cb1db88ab629ccb39eac110b742b98e3511d48ce9caa82ca32609d9169a9c9c"},
{file = "blis-0.7.9-cp37-cp37m-manylinux_2_17_aarch64.manylinux2014_aarch64.whl", hash = "sha256:c399a03de4059bf8e700b921f9ff5d72b2a86673616c40db40cd0592051bdd07"},
{file = "blis-0.7.9-cp37-cp37m-manylinux_2_17_x86_64.manylinux2014_x86_64.whl", hash = "sha256:d4eb70a79562a211bd2e6b6db63f1e2eed32c0ab3e9ef921d86f657ae8375845"},
{file = "blis-0.7.9-cp37-cp37m-win_amd64.whl", hash = "sha256:3e3f95e035c7456a1f5f3b5a3cfe708483a00335a3a8ad2211d57ba4d5f749a5"},
{file = "blis-0.7.9-cp38-cp38-macosx_10_9_x86_64.whl", hash = "sha256:179037cb5e6744c2e93b6b5facc6e4a0073776d514933c3db1e1f064a3253425"},
{file = "blis-0.7.9-cp38-cp38-macosx_11_0_arm64.whl", hash = "sha256:d0e82a6e0337d5231129a4e8b36978fa7b973ad3bb0257fd8e3714a9b35ceffd"},
{file = "blis-0.7.9-cp38-cp38-manylinux_2_17_aarch64.manylinux2014_aarch64.whl", hash = "sha256:6d12475e588a322e66a18346a3faa9eb92523504042e665c193d1b9b0b3f0482"},
{file = "blis-0.7.9-cp38-cp38-manylinux_2_17_x86_64.manylinux2014_x86_64.whl", hash = "sha256:4d5755ef37a573647be62684ca1545698879d07321f1e5b89a4fd669ce355eb0"},
{file = "blis-0.7.9-cp38-cp38-win_amd64.whl", hash = "sha256:b8a1fcd2eb267301ab13e1e4209c165d172cdf9c0c9e08186a9e234bf91daa16"},
{file = "blis-0.7.9-cp39-cp39-macosx_10_9_x86_64.whl", hash = "sha256:8275f6b6eee714b85f00bf882720f508ed6a60974bcde489715d37fd35529da8"},
{file = "blis-0.7.9-cp39-cp39-macosx_11_0_arm64.whl", hash = "sha256:7417667c221e29fe8662c3b2ff9bc201c6a5214bbb5eb6cc290484868802258d"},
{file = "blis-0.7.9-cp39-cp39-manylinux_2_17_aarch64.manylinux2014_aarch64.whl", hash = "sha256:b5f4691bf62013eccc167c38a85c09a0bf0c6e3e80d4c2229cdf2668c1124eb0"},
{file = "blis-0.7.9-cp39-cp39-manylinux_2_17_x86_64.manylinux2014_x86_64.whl", hash = "sha256:f5cec812ee47b29107eb36af9b457be7191163eab65d61775ed63538232c59d5"},
{file = "blis-0.7.9-cp39-cp39-win_amd64.whl", hash = "sha256:d81c3f627d33545fc25c9dcb5fee66c476d89288a27d63ac16ea63453401ffd5"},
{file = "blis-0.7.9.tar.gz", hash = "sha256:29ef4c25007785a90ffc2f0ab3d3bd3b75cd2d7856a9a482b7d0dac8d511a09d"},
]
boto3 = [
{file = "boto3-1.26.15-py3-none-any.whl", hash = "sha256:0e455bc50190cec1af819c9e4a257130661c4f2fad1e211b4dd2cb8f9af89464"},
{file = "boto3-1.26.15.tar.gz", hash = "sha256:e2bfc955fb70053951589d01919c9233c6ef091ae1404bb5249a0f27e05b6b36"},
]
botocore = [
{file = "botocore-1.29.15-py3-none-any.whl", hash = "sha256:02cfa6d060c50853a028b36ada96f4ddb225948bf9e7e0a4dc5b72f9e3878f15"},
{file = "botocore-1.29.15.tar.gz", hash = "sha256:7d4e148870c98bbaab04b0c85b4d3565fc00fec6148cab9da96ab4419dbfb941"},
]
cachetools = [
{file = "cachetools-5.2.0-py3-none-any.whl", hash = "sha256:f9f17d2aec496a9aa6b76f53e3b614c965223c061982d434d160f930c698a9db"},
{file = "cachetools-5.2.0.tar.gz", hash = "sha256:6a94c6402995a99c3970cc7e4884bb60b4a8639938157eeed436098bf9831757"},
]
catalogue = [
{file = "catalogue-2.0.8-py3-none-any.whl", hash = "sha256:2d786e229d8d202b4f8a2a059858e45a2331201d831e39746732daa704b99f69"},
{file = "catalogue-2.0.8.tar.gz", hash = "sha256:b325c77659208bfb6af1b0d93b1a1aa4112e1bb29a4c5ced816758a722f0e388"},
]
catboost = [
{file = "catboost-1.1.1-cp310-none-macosx_10_6_universal2.whl", hash = "sha256:93532f6807228f74db9c8184a0893ab222232d23fc5b3db534e2d8fedbba42cf"},
{file = "catboost-1.1.1-cp310-none-manylinux1_x86_64.whl", hash = "sha256:7c7364d79d5ff9deb56956560ba91a1b62b84204961d540bffd97f7b995e8cba"},
{file = "catboost-1.1.1-cp310-none-win_amd64.whl", hash = "sha256:5ec0c9bd65e53ae6c26d17c06f9c28e4febbd7cbdeb858460eb3d34249a10f30"},
{file = "catboost-1.1.1-cp36-none-macosx_10_6_universal2.whl", hash = "sha256:60acc4448eb45242f4d30aea6ccdf45bfaa8646bbc4ede3200cf25ba0d6bcf3d"},
{file = "catboost-1.1.1-cp36-none-manylinux1_x86_64.whl", hash = "sha256:b7443b40b5ddb141c6d14bff16c13f7cf4852893b57d7eda5dff30fb7517e14d"},
{file = "catboost-1.1.1-cp36-none-win_amd64.whl", hash = "sha256:190828590270e3dea5fb58f0fd13715ee2324f6ee321866592c422a1da141961"},
{file = "catboost-1.1.1-cp37-none-macosx_10_6_universal2.whl", hash = "sha256:a2fe4d08a360c3c3cabfa3a94c586f2261b93a3fff043ae2b43d2d4de121c2ce"},
{file = "catboost-1.1.1-cp37-none-manylinux1_x86_64.whl", hash = "sha256:4e350c40920dbd9644f1c7b88cb74cb8b96f1ecbbd7c12f6223964465d83b968"},
{file = "catboost-1.1.1-cp37-none-win_amd64.whl", hash = "sha256:0033569f2e6314a04a84ec83eecd39f77402426b52571b78991e629d7252c6f7"},
{file = "catboost-1.1.1-cp38-none-macosx_10_6_universal2.whl", hash = "sha256:454aae50922b10172b94971033d4b0607128a2e2ca8a5845cf8879ea28d80942"},
{file = "catboost-1.1.1-cp38-none-manylinux1_x86_64.whl", hash = "sha256:3fd12d9f1f89440292c63b242ccabdab012d313250e2b1e8a779d6618c734b32"},
{file = "catboost-1.1.1-cp38-none-win_amd64.whl", hash = "sha256:840348bf56dd11f6096030208601cbce87f1e6426ef33140fb6cc97bceb5fef3"},
{file = "catboost-1.1.1-cp39-none-macosx_10_6_universal2.whl", hash = "sha256:9e7c47050c8840ccaff4d394907d443bda01280a30778ae9d71939a7528f5ae3"},
{file = "catboost-1.1.1-cp39-none-manylinux1_x86_64.whl", hash = "sha256:a60ae2630f7b3752f262515a51b265521a4993df75dea26fa60777ec6e479395"},
{file = "catboost-1.1.1-cp39-none-win_amd64.whl", hash = "sha256:156264dbe9e841cb0b6333383e928cb8f65df4d00429a9771eb8b06b9bcfa17c"},
]
causal-learn = [
{file = "causal-learn-0.1.3.0.tar.gz", hash = "sha256:8242bced95e11eb4b4ee5f8085c528a25496d20c87bd5f3fcdb17d4678d7de63"},
{file = "causal_learn-0.1.3.0-py3-none-any.whl", hash = "sha256:d7271b0a60e839b725735373c4c5c012446dd216f17cc4b46aed550e08054d72"},
]
causalml = []
certifi = [
{file = "certifi-2022.9.24-py3-none-any.whl", hash = "sha256:90c1a32f1d68f940488354e36370f6cca89f0f106db09518524c88d6ed83f382"},
{file = "certifi-2022.9.24.tar.gz", hash = "sha256:0d9c601124e5a6ba9712dbc60d9c53c21e34f5f641fe83002317394311bdce14"},
]
cffi = [
{file = "cffi-1.15.1-cp27-cp27m-macosx_10_9_x86_64.whl", hash = "sha256:a66d3508133af6e8548451b25058d5812812ec3798c886bf38ed24a98216fab2"},
{file = "cffi-1.15.1-cp27-cp27m-manylinux1_i686.whl", hash = "sha256:470c103ae716238bbe698d67ad020e1db9d9dba34fa5a899b5e21577e6d52ed2"},
{file = "cffi-1.15.1-cp27-cp27m-manylinux1_x86_64.whl", hash = "sha256:9ad5db27f9cabae298d151c85cf2bad1d359a1b9c686a275df03385758e2f914"},
{file = "cffi-1.15.1-cp27-cp27m-win32.whl", hash = "sha256:b3bbeb01c2b273cca1e1e0c5df57f12dce9a4dd331b4fa1635b8bec26350bde3"},
{file = "cffi-1.15.1-cp27-cp27m-win_amd64.whl", hash = "sha256:e00b098126fd45523dd056d2efba6c5a63b71ffe9f2bbe1a4fe1716e1d0c331e"},
{file = "cffi-1.15.1-cp27-cp27mu-manylinux1_i686.whl", hash = "sha256:d61f4695e6c866a23a21acab0509af1cdfd2c013cf256bbf5b6b5e2695827162"},
{file = "cffi-1.15.1-cp27-cp27mu-manylinux1_x86_64.whl", hash = "sha256:ed9cb427ba5504c1dc15ede7d516b84757c3e3d7868ccc85121d9310d27eed0b"},
{file = "cffi-1.15.1-cp310-cp310-macosx_10_9_x86_64.whl", hash = "sha256:39d39875251ca8f612b6f33e6b1195af86d1b3e60086068be9cc053aa4376e21"},
{file = "cffi-1.15.1-cp310-cp310-macosx_11_0_arm64.whl", hash = "sha256:285d29981935eb726a4399badae8f0ffdff4f5050eaa6d0cfc3f64b857b77185"},
{file = "cffi-1.15.1-cp310-cp310-manylinux_2_12_i686.manylinux2010_i686.manylinux_2_17_i686.manylinux2014_i686.whl", hash = "sha256:3eb6971dcff08619f8d91607cfc726518b6fa2a9eba42856be181c6d0d9515fd"},
{file = "cffi-1.15.1-cp310-cp310-manylinux_2_17_aarch64.manylinux2014_aarch64.whl", hash = "sha256:21157295583fe8943475029ed5abdcf71eb3911894724e360acff1d61c1d54bc"},
{file = "cffi-1.15.1-cp310-cp310-manylinux_2_17_ppc64le.manylinux2014_ppc64le.whl", hash = "sha256:5635bd9cb9731e6d4a1132a498dd34f764034a8ce60cef4f5319c0541159392f"},
{file = "cffi-1.15.1-cp310-cp310-manylinux_2_17_s390x.manylinux2014_s390x.whl", hash = "sha256:2012c72d854c2d03e45d06ae57f40d78e5770d252f195b93f581acf3ba44496e"},
{file = "cffi-1.15.1-cp310-cp310-manylinux_2_17_x86_64.manylinux2014_x86_64.whl", hash = "sha256:dd86c085fae2efd48ac91dd7ccffcfc0571387fe1193d33b6394db7ef31fe2a4"},
{file = "cffi-1.15.1-cp310-cp310-musllinux_1_1_i686.whl", hash = "sha256:fa6693661a4c91757f4412306191b6dc88c1703f780c8234035eac011922bc01"},
{file = "cffi-1.15.1-cp310-cp310-musllinux_1_1_x86_64.whl", hash = "sha256:59c0b02d0a6c384d453fece7566d1c7e6b7bae4fc5874ef2ef46d56776d61c9e"},
{file = "cffi-1.15.1-cp310-cp310-win32.whl", hash = "sha256:cba9d6b9a7d64d4bd46167096fc9d2f835e25d7e4c121fb2ddfc6528fb0413b2"},
{file = "cffi-1.15.1-cp310-cp310-win_amd64.whl", hash = "sha256:ce4bcc037df4fc5e3d184794f27bdaab018943698f4ca31630bc7f84a7b69c6d"},
{file = "cffi-1.15.1-cp311-cp311-macosx_10_9_x86_64.whl", hash = "sha256:3d08afd128ddaa624a48cf2b859afef385b720bb4b43df214f85616922e6a5ac"},
{file = "cffi-1.15.1-cp311-cp311-macosx_11_0_arm64.whl", hash = "sha256:3799aecf2e17cf585d977b780ce79ff0dc9b78d799fc694221ce814c2c19db83"},
{file = "cffi-1.15.1-cp311-cp311-manylinux_2_12_i686.manylinux2010_i686.manylinux_2_17_i686.manylinux2014_i686.whl", hash = "sha256:a591fe9e525846e4d154205572a029f653ada1a78b93697f3b5a8f1f2bc055b9"},
{file = "cffi-1.15.1-cp311-cp311-manylinux_2_17_aarch64.manylinux2014_aarch64.whl", hash = "sha256:3548db281cd7d2561c9ad9984681c95f7b0e38881201e157833a2342c30d5e8c"},
{file = "cffi-1.15.1-cp311-cp311-manylinux_2_17_ppc64le.manylinux2014_ppc64le.whl", hash = "sha256:91fc98adde3d7881af9b59ed0294046f3806221863722ba7d8d120c575314325"},
{file = "cffi-1.15.1-cp311-cp311-manylinux_2_17_x86_64.manylinux2014_x86_64.whl", hash = "sha256:94411f22c3985acaec6f83c6df553f2dbe17b698cc7f8ae751ff2237d96b9e3c"},
{file = "cffi-1.15.1-cp311-cp311-musllinux_1_1_i686.whl", hash = "sha256:03425bdae262c76aad70202debd780501fabeaca237cdfddc008987c0e0f59ef"},
{file = "cffi-1.15.1-cp311-cp311-musllinux_1_1_x86_64.whl", hash = "sha256:cc4d65aeeaa04136a12677d3dd0b1c0c94dc43abac5860ab33cceb42b801c1e8"},
{file = "cffi-1.15.1-cp311-cp311-win32.whl", hash = "sha256:a0f100c8912c114ff53e1202d0078b425bee3649ae34d7b070e9697f93c5d52d"},
{file = "cffi-1.15.1-cp311-cp311-win_amd64.whl", hash = "sha256:04ed324bda3cda42b9b695d51bb7d54b680b9719cfab04227cdd1e04e5de3104"},
{file = "cffi-1.15.1-cp36-cp36m-macosx_10_9_x86_64.whl", hash = "sha256:50a74364d85fd319352182ef59c5c790484a336f6db772c1a9231f1c3ed0cbd7"},
{file = "cffi-1.15.1-cp36-cp36m-manylinux_2_17_aarch64.manylinux2014_aarch64.whl", hash = "sha256:e263d77ee3dd201c3a142934a086a4450861778baaeeb45db4591ef65550b0a6"},
{file = "cffi-1.15.1-cp36-cp36m-manylinux_2_17_ppc64le.manylinux2014_ppc64le.whl", hash = "sha256:cec7d9412a9102bdc577382c3929b337320c4c4c4849f2c5cdd14d7368c5562d"},
{file = "cffi-1.15.1-cp36-cp36m-manylinux_2_17_s390x.manylinux2014_s390x.whl", hash = "sha256:4289fc34b2f5316fbb762d75362931e351941fa95fa18789191b33fc4cf9504a"},
{file = "cffi-1.15.1-cp36-cp36m-manylinux_2_5_i686.manylinux1_i686.whl", hash = "sha256:173379135477dc8cac4bc58f45db08ab45d228b3363adb7af79436135d028405"},
{file = "cffi-1.15.1-cp36-cp36m-manylinux_2_5_x86_64.manylinux1_x86_64.whl", hash = "sha256:6975a3fac6bc83c4a65c9f9fcab9e47019a11d3d2cf7f3c0d03431bf145a941e"},
{file = "cffi-1.15.1-cp36-cp36m-win32.whl", hash = "sha256:2470043b93ff09bf8fb1d46d1cb756ce6132c54826661a32d4e4d132e1977adf"},
{file = "cffi-1.15.1-cp36-cp36m-win_amd64.whl", hash = "sha256:30d78fbc8ebf9c92c9b7823ee18eb92f2e6ef79b45ac84db507f52fbe3ec4497"},
{file = "cffi-1.15.1-cp37-cp37m-macosx_10_9_x86_64.whl", hash = "sha256:198caafb44239b60e252492445da556afafc7d1e3ab7a1fb3f0584ef6d742375"},
{file = "cffi-1.15.1-cp37-cp37m-manylinux_2_12_i686.manylinux2010_i686.manylinux_2_17_i686.manylinux2014_i686.whl", hash = "sha256:5ef34d190326c3b1f822a5b7a45f6c4535e2f47ed06fec77d3d799c450b2651e"},
{file = "cffi-1.15.1-cp37-cp37m-manylinux_2_17_aarch64.manylinux2014_aarch64.whl", hash = "sha256:8102eaf27e1e448db915d08afa8b41d6c7ca7a04b7d73af6514df10a3e74bd82"},
{file = "cffi-1.15.1-cp37-cp37m-manylinux_2_17_ppc64le.manylinux2014_ppc64le.whl", hash = "sha256:5df2768244d19ab7f60546d0c7c63ce1581f7af8b5de3eb3004b9b6fc8a9f84b"},
{file = "cffi-1.15.1-cp37-cp37m-manylinux_2_17_s390x.manylinux2014_s390x.whl", hash = "sha256:a8c4917bd7ad33e8eb21e9a5bbba979b49d9a97acb3a803092cbc1133e20343c"},
{file = "cffi-1.15.1-cp37-cp37m-manylinux_2_17_x86_64.manylinux2014_x86_64.whl", hash = "sha256:0e2642fe3142e4cc4af0799748233ad6da94c62a8bec3a6648bf8ee68b1c7426"},
{file = "cffi-1.15.1-cp37-cp37m-win32.whl", hash = "sha256:e229a521186c75c8ad9490854fd8bbdd9a0c9aa3a524326b55be83b54d4e0ad9"},
{file = "cffi-1.15.1-cp37-cp37m-win_amd64.whl", hash = "sha256:a0b71b1b8fbf2b96e41c4d990244165e2c9be83d54962a9a1d118fd8657d2045"},
{file = "cffi-1.15.1-cp38-cp38-macosx_10_9_x86_64.whl", hash = "sha256:320dab6e7cb2eacdf0e658569d2575c4dad258c0fcc794f46215e1e39f90f2c3"},
{file = "cffi-1.15.1-cp38-cp38-manylinux_2_12_i686.manylinux2010_i686.manylinux_2_17_i686.manylinux2014_i686.whl", hash = "sha256:1e74c6b51a9ed6589199c787bf5f9875612ca4a8a0785fb2d4a84429badaf22a"},
{file = "cffi-1.15.1-cp38-cp38-manylinux_2_17_aarch64.manylinux2014_aarch64.whl", hash = "sha256:a5c84c68147988265e60416b57fc83425a78058853509c1b0629c180094904a5"},
{file = "cffi-1.15.1-cp38-cp38-manylinux_2_17_ppc64le.manylinux2014_ppc64le.whl", hash = "sha256:3b926aa83d1edb5aa5b427b4053dc420ec295a08e40911296b9eb1b6170f6cca"},
{file = "cffi-1.15.1-cp38-cp38-manylinux_2_17_s390x.manylinux2014_s390x.whl", hash = "sha256:87c450779d0914f2861b8526e035c5e6da0a3199d8f1add1a665e1cbc6fc6d02"},
{file = "cffi-1.15.1-cp38-cp38-manylinux_2_17_x86_64.manylinux2014_x86_64.whl", hash = "sha256:4f2c9f67e9821cad2e5f480bc8d83b8742896f1242dba247911072d4fa94c192"},
{file = "cffi-1.15.1-cp38-cp38-win32.whl", hash = "sha256:8b7ee99e510d7b66cdb6c593f21c043c248537a32e0bedf02e01e9553a172314"},
{file = "cffi-1.15.1-cp38-cp38-win_amd64.whl", hash = "sha256:00a9ed42e88df81ffae7a8ab6d9356b371399b91dbdf0c3cb1e84c03a13aceb5"},
{file = "cffi-1.15.1-cp39-cp39-macosx_10_9_x86_64.whl", hash = "sha256:54a2db7b78338edd780e7ef7f9f6c442500fb0d41a5a4ea24fff1c929d5af585"},
{file = "cffi-1.15.1-cp39-cp39-macosx_11_0_arm64.whl", hash = "sha256:fcd131dd944808b5bdb38e6f5b53013c5aa4f334c5cad0c72742f6eba4b73db0"},
{file = "cffi-1.15.1-cp39-cp39-manylinux_2_12_i686.manylinux2010_i686.manylinux_2_17_i686.manylinux2014_i686.whl", hash = "sha256:7473e861101c9e72452f9bf8acb984947aa1661a7704553a9f6e4baa5ba64415"},
{file = "cffi-1.15.1-cp39-cp39-manylinux_2_17_aarch64.manylinux2014_aarch64.whl", hash = "sha256:6c9a799e985904922a4d207a94eae35c78ebae90e128f0c4e521ce339396be9d"},
{file = "cffi-1.15.1-cp39-cp39-manylinux_2_17_ppc64le.manylinux2014_ppc64le.whl", hash = "sha256:3bcde07039e586f91b45c88f8583ea7cf7a0770df3a1649627bf598332cb6984"},
{file = "cffi-1.15.1-cp39-cp39-manylinux_2_17_s390x.manylinux2014_s390x.whl", hash = "sha256:33ab79603146aace82c2427da5ca6e58f2b3f2fb5da893ceac0c42218a40be35"},
{file = "cffi-1.15.1-cp39-cp39-manylinux_2_17_x86_64.manylinux2014_x86_64.whl", hash = "sha256:5d598b938678ebf3c67377cdd45e09d431369c3b1a5b331058c338e201f12b27"},
{file = "cffi-1.15.1-cp39-cp39-musllinux_1_1_i686.whl", hash = "sha256:db0fbb9c62743ce59a9ff687eb5f4afbe77e5e8403d6697f7446e5f609976f76"},
{file = "cffi-1.15.1-cp39-cp39-musllinux_1_1_x86_64.whl", hash = "sha256:98d85c6a2bef81588d9227dde12db8a7f47f639f4a17c9ae08e773aa9c697bf3"},
{file = "cffi-1.15.1-cp39-cp39-win32.whl", hash = "sha256:40f4774f5a9d4f5e344f31a32b5096977b5d48560c5592e2f3d2c4374bd543ee"},
{file = "cffi-1.15.1-cp39-cp39-win_amd64.whl", hash = "sha256:70df4e3b545a17496c9b3f41f5115e69a4f2e77e94e1d2a8e1070bc0c38c8a3c"},
{file = "cffi-1.15.1.tar.gz", hash = "sha256:d400bfb9a37b1351253cb402671cea7e89bdecc294e8016a707f6d1d8ac934f9"},
]
charset-normalizer = [
{file = "charset-normalizer-2.1.1.tar.gz", hash = "sha256:5a3d016c7c547f69d6f81fb0db9449ce888b418b5b9952cc5e6e66843e9dd845"},
{file = "charset_normalizer-2.1.1-py3-none-any.whl", hash = "sha256:83e9a75d1911279afd89352c68b45348559d1fc0506b054b346651b5e7fee29f"},
]
click = [
{file = "click-8.1.3-py3-none-any.whl", hash = "sha256:bb4d8133cb15a609f44e8213d9b391b0809795062913b383c62be0ee95b1db48"},
{file = "click-8.1.3.tar.gz", hash = "sha256:7682dc8afb30297001674575ea00d1814d808d6a36af415a82bd481d37ba7b8e"},
]
cloudpickle = [
{file = "cloudpickle-2.2.0-py3-none-any.whl", hash = "sha256:7428798d5926d8fcbfd092d18d01a2a03daf8237d8fcdc8095d256b8490796f0"},
{file = "cloudpickle-2.2.0.tar.gz", hash = "sha256:3f4219469c55453cfe4737e564b67c2a149109dabf7f242478948b895f61106f"},
]
colorama = [
{file = "colorama-0.4.6-py2.py3-none-any.whl", hash = "sha256:4f1d9991f5acc0ca119f9d443620b77f9d6b33703e51011c16baf57afb285fc6"},
{file = "colorama-0.4.6.tar.gz", hash = "sha256:08695f5cb7ed6e0531a20572697297273c47b8cae5a63ffc6d6ed5c201be6e44"},
]
confection = [
{file = "confection-0.0.3-py3-none-any.whl", hash = "sha256:51af839c1240430421da2b248541ebc95f9d0ee385bcafa768b8acdbd2b0111d"},
{file = "confection-0.0.3.tar.gz", hash = "sha256:4fec47190057c43c9acbecb8b1b87a9bf31c469caa0d6888a5b9384432fdba5a"},
]
contourpy = [
{file = "contourpy-1.0.6-cp310-cp310-macosx_10_9_universal2.whl", hash = "sha256:613c665529899b5d9fade7e5d1760111a0b011231277a0d36c49f0d3d6914bd6"},
{file = "contourpy-1.0.6-cp310-cp310-macosx_10_9_x86_64.whl", hash = "sha256:78ced51807ccb2f45d4ea73aca339756d75d021069604c2fccd05390dc3c28eb"},
{file = "contourpy-1.0.6-cp310-cp310-macosx_11_0_arm64.whl", hash = "sha256:b3b1bd7577c530eaf9d2bc52d1a93fef50ac516a8b1062c3d1b9bcec9ebe329b"},
{file = "contourpy-1.0.6-cp310-cp310-manylinux_2_17_aarch64.manylinux2014_aarch64.whl", hash = "sha256:d8834c14b8c3dd849005e06703469db9bf96ba2d66a3f88ecc539c9a8982e0ee"},
{file = "contourpy-1.0.6-cp310-cp310-manylinux_2_17_i686.manylinux2014_i686.whl", hash = "sha256:f4052a8a4926d4468416fc7d4b2a7b2a3e35f25b39f4061a7e2a3a2748c4fc48"},
{file = "contourpy-1.0.6-cp310-cp310-manylinux_2_17_x86_64.manylinux2014_x86_64.whl", hash = "sha256:1c0e1308307a75e07d1f1b5f0f56b5af84538a5e9027109a7bcf6cb47c434e72"},
{file = "contourpy-1.0.6-cp310-cp310-musllinux_1_1_aarch64.whl", hash = "sha256:9fc4e7973ed0e1fe689435842a6e6b330eb7ccc696080dda9a97b1a1b78e41db"},
{file = "contourpy-1.0.6-cp310-cp310-musllinux_1_1_i686.whl", hash = "sha256:08e8d09d96219ace6cb596506fb9b64ea5f270b2fb9121158b976d88871fcfd1"},
{file = "contourpy-1.0.6-cp310-cp310-musllinux_1_1_x86_64.whl", hash = "sha256:f33da6b5d19ad1bb5e7ad38bb8ba5c426d2178928bc2b2c44e8823ea0ecb6ff3"},
{file = "contourpy-1.0.6-cp310-cp310-win32.whl", hash = "sha256:12a7dc8439544ed05c6553bf026d5e8fa7fad48d63958a95d61698df0e00092b"},
{file = "contourpy-1.0.6-cp310-cp310-win_amd64.whl", hash = "sha256:eadad75bf91897f922e0fb3dca1b322a58b1726a953f98c2e5f0606bd8408621"},
{file = "contourpy-1.0.6-cp311-cp311-macosx_10_9_universal2.whl", hash = "sha256:913bac9d064cff033cf3719e855d4f1db9f1c179e0ecf3ba9fdef21c21c6a16a"},
{file = "contourpy-1.0.6-cp311-cp311-macosx_10_9_x86_64.whl", hash = "sha256:46deb310a276cc5c1fd27958e358cce68b1e8a515fa5a574c670a504c3a3fe30"},
{file = "contourpy-1.0.6-cp311-cp311-macosx_11_0_arm64.whl", hash = "sha256:b64f747e92af7da3b85631a55d68c45a2d728b4036b03cdaba4bd94bcc85bd6f"},
{file = "contourpy-1.0.6-cp311-cp311-manylinux_2_17_aarch64.manylinux2014_aarch64.whl", hash = "sha256:50627bf76abb6ba291ad08db583161939c2c5fab38c38181b7833423ab9c7de3"},
{file = "contourpy-1.0.6-cp311-cp311-manylinux_2_17_i686.manylinux2014_i686.whl", hash = "sha256:358f6364e4873f4d73360b35da30066f40387dd3c427a3e5432c6b28dd24a8fa"},
{file = "contourpy-1.0.6-cp311-cp311-manylinux_2_17_x86_64.manylinux2014_x86_64.whl", hash = "sha256:c78bfbc1a7bff053baf7e508449d2765964d67735c909b583204e3240a2aca45"},
{file = "contourpy-1.0.6-cp311-cp311-musllinux_1_1_aarch64.whl", hash = "sha256:e43255a83835a129ef98f75d13d643844d8c646b258bebd11e4a0975203e018f"},
{file = "contourpy-1.0.6-cp311-cp311-musllinux_1_1_i686.whl", hash = "sha256:375d81366afd547b8558c4720337218345148bc2fcffa3a9870cab82b29667f2"},
{file = "contourpy-1.0.6-cp311-cp311-musllinux_1_1_x86_64.whl", hash = "sha256:b98c820608e2dca6442e786817f646d11057c09a23b68d2b3737e6dcb6e4a49b"},
{file = "contourpy-1.0.6-cp311-cp311-win32.whl", hash = "sha256:0e4854cc02006ad6684ce092bdadab6f0912d131f91c2450ce6dbdea78ee3c0b"},
{file = "contourpy-1.0.6-cp311-cp311-win_amd64.whl", hash = "sha256:d2eff2af97ea0b61381828b1ad6cd249bbd41d280e53aea5cccd7b2b31b8225c"},
{file = "contourpy-1.0.6-cp37-cp37m-macosx_10_9_x86_64.whl", hash = "sha256:5b117d29433fc8393b18a696d794961464e37afb34a6eeb8b2c37b5f4128a83e"},
{file = "contourpy-1.0.6-cp37-cp37m-manylinux_2_17_aarch64.manylinux2014_aarch64.whl", hash = "sha256:341330ed19074f956cb20877ad8d2ae50e458884bfa6a6df3ae28487cc76c768"},
{file = "contourpy-1.0.6-cp37-cp37m-manylinux_2_17_i686.manylinux2014_i686.whl", hash = "sha256:371f6570a81dfdddbb837ba432293a63b4babb942a9eb7aaa699997adfb53278"},
{file = "contourpy-1.0.6-cp37-cp37m-manylinux_2_17_x86_64.manylinux2014_x86_64.whl", hash = "sha256:9447c45df407d3ecb717d837af3b70cfef432138530712263730783b3d016512"},
{file = "contourpy-1.0.6-cp37-cp37m-musllinux_1_1_aarch64.whl", hash = "sha256:730c27978a0003b47b359935478b7d63fd8386dbb2dcd36c1e8de88cbfc1e9de"},
{file = "contourpy-1.0.6-cp37-cp37m-musllinux_1_1_i686.whl", hash = "sha256:da1ef35fd79be2926ba80fbb36327463e3656c02526e9b5b4c2b366588b74d9a"},
{file = "contourpy-1.0.6-cp37-cp37m-musllinux_1_1_x86_64.whl", hash = "sha256:cd2bc0c8f2e8de7dd89a7f1c10b8844e291bca17d359373203ef2e6100819edd"},
{file = "contourpy-1.0.6-cp37-cp37m-win32.whl", hash = "sha256:3a1917d3941dd58732c449c810fa7ce46cc305ce9325a11261d740118b85e6f3"},
{file = "contourpy-1.0.6-cp37-cp37m-win_amd64.whl", hash = "sha256:06ca79e1efbbe2df795822df2fa173d1a2b38b6e0f047a0ec7903fbca1d1847e"},
{file = "contourpy-1.0.6-cp38-cp38-macosx_10_9_universal2.whl", hash = "sha256:e626cefff8491bce356221c22af5a3ea528b0b41fbabc719c00ae233819ea0bf"},
{file = "contourpy-1.0.6-cp38-cp38-macosx_10_9_x86_64.whl", hash = "sha256:dbe6fe7a1166b1ddd7b6d887ea6fa8389d3f28b5ed3f73a8f40ece1fc5a3d340"},
{file = "contourpy-1.0.6-cp38-cp38-macosx_11_0_arm64.whl", hash = "sha256:e13b31d1b4b68db60b3b29f8e337908f328c7f05b9add4b1b5c74e0691180109"},
{file = "contourpy-1.0.6-cp38-cp38-manylinux_2_17_aarch64.manylinux2014_aarch64.whl", hash = "sha256:a79d239fc22c3b8d9d3de492aa0c245533f4f4c7608e5749af866949c0f1b1b9"},
{file = "contourpy-1.0.6-cp38-cp38-manylinux_2_17_i686.manylinux2014_i686.whl", hash = "sha256:9e8e686a6db92a46111a1ee0ee6f7fbfae4048f0019de207149f43ac1812cf95"},
{file = "contourpy-1.0.6-cp38-cp38-manylinux_2_17_x86_64.manylinux2014_x86_64.whl", hash = "sha256:acd2bd02f1a7adff3a1f33e431eb96ab6d7987b039d2946a9b39fe6fb16a1036"},
{file = "contourpy-1.0.6-cp38-cp38-musllinux_1_1_aarch64.whl", hash = "sha256:03d1b9c6b44a9e30d554654c72be89af94fab7510b4b9f62356c64c81cec8b7d"},
{file = "contourpy-1.0.6-cp38-cp38-musllinux_1_1_i686.whl", hash = "sha256:b48d94386f1994db7c70c76b5808c12e23ed7a4ee13693c2fc5ab109d60243c0"},
{file = "contourpy-1.0.6-cp38-cp38-musllinux_1_1_x86_64.whl", hash = "sha256:208bc904889c910d95aafcf7be9e677726df9ef71e216780170dbb7e37d118fa"},
{file = "contourpy-1.0.6-cp38-cp38-win32.whl", hash = "sha256:444fb776f58f4906d8d354eb6f6ce59d0a60f7b6a720da6c1ccb839db7c80eb9"},
{file = "contourpy-1.0.6-cp38-cp38-win_amd64.whl", hash = "sha256:9bc407a6af672da20da74823443707e38ece8b93a04009dca25856c2d9adadb1"},
{file = "contourpy-1.0.6-cp39-cp39-macosx_10_9_universal2.whl", hash = "sha256:aa4674cf3fa2bd9c322982644967f01eed0c91bb890f624e0e0daf7a5c3383e9"},
{file = "contourpy-1.0.6-cp39-cp39-macosx_10_9_x86_64.whl", hash = "sha256:6f56515e7c6fae4529b731f6c117752247bef9cdad2b12fc5ddf8ca6a50965a5"},
{file = "contourpy-1.0.6-cp39-cp39-macosx_11_0_arm64.whl", hash = "sha256:344cb3badf6fc7316ad51835f56ac387bdf86c8e1b670904f18f437d70da4183"},
{file = "contourpy-1.0.6-cp39-cp39-manylinux_2_17_aarch64.manylinux2014_aarch64.whl", hash = "sha256:0b1e66346acfb17694d46175a0cea7d9036f12ed0c31dfe86f0f405eedde2bdd"},
{file = "contourpy-1.0.6-cp39-cp39-manylinux_2_17_i686.manylinux2014_i686.whl", hash = "sha256:8468b40528fa1e15181cccec4198623b55dcd58306f8815a793803f51f6c474a"},
{file = "contourpy-1.0.6-cp39-cp39-manylinux_2_17_x86_64.manylinux2014_x86_64.whl", hash = "sha256:1dedf4c64185a216c35eb488e6f433297c660321275734401760dafaeb0ad5c2"},
{file = "contourpy-1.0.6-cp39-cp39-musllinux_1_1_aarch64.whl", hash = "sha256:494efed2c761f0f37262815f9e3c4bb9917c5c69806abdee1d1cb6611a7174a0"},
{file = "contourpy-1.0.6-cp39-cp39-musllinux_1_1_i686.whl", hash = "sha256:75a2e638042118118ab39d337da4c7908c1af74a8464cad59f19fbc5bbafec9b"},
{file = "contourpy-1.0.6-cp39-cp39-musllinux_1_1_x86_64.whl", hash = "sha256:a628bba09ba72e472bf7b31018b6281fd4cc903f0888049a3724afba13b6e0b8"},
{file = "contourpy-1.0.6-cp39-cp39-win32.whl", hash = "sha256:e1739496c2f0108013629aa095cc32a8c6363444361960c07493818d0dea2da4"},
{file = "contourpy-1.0.6-cp39-cp39-win_amd64.whl", hash = "sha256:a457ee72d9032e86730f62c5eeddf402e732fdf5ca8b13b41772aa8ae13a4563"},
{file = "contourpy-1.0.6-pp37-pypy37_pp73-macosx_10_9_x86_64.whl", hash = "sha256:d912f0154a20a80ea449daada904a7eb6941c83281a9fab95de50529bfc3a1da"},
{file = "contourpy-1.0.6-pp37-pypy37_pp73-manylinux_2_17_aarch64.manylinux2014_aarch64.whl", hash = "sha256:4081918147fc4c29fad328d5066cfc751da100a1098398742f9f364be63803fc"},
{file = "contourpy-1.0.6-pp37-pypy37_pp73-manylinux_2_17_i686.manylinux2014_i686.whl", hash = "sha256:0537cc1195245bbe24f2913d1f9211b8f04eb203de9044630abd3664c6cc339c"},
{file = "contourpy-1.0.6-pp37-pypy37_pp73-manylinux_2_17_x86_64.manylinux2014_x86_64.whl", hash = "sha256:dcd556c8fc37a342dd636d7eef150b1399f823a4462f8c968e11e1ebeabee769"},
{file = "contourpy-1.0.6-pp37-pypy37_pp73-win_amd64.whl", hash = "sha256:f6ca38dd8d988eca8f07305125dec6f54ac1c518f1aaddcc14d08c01aebb6efc"},
{file = "contourpy-1.0.6-pp38-pypy38_pp73-macosx_10_9_x86_64.whl", hash = "sha256:c1baa49ab9fedbf19d40d93163b7d3e735d9cd8d5efe4cce9907902a6dad391f"},
{file = "contourpy-1.0.6-pp38-pypy38_pp73-manylinux_2_17_aarch64.manylinux2014_aarch64.whl", hash = "sha256:211dfe2bd43bf5791d23afbe23a7952e8ac8b67591d24be3638cabb648b3a6eb"},
{file = "contourpy-1.0.6-pp38-pypy38_pp73-manylinux_2_17_i686.manylinux2014_i686.whl", hash = "sha256:c38c6536c2d71ca2f7e418acaf5bca30a3af7f2a2fa106083c7d738337848dbe"},
{file = "contourpy-1.0.6-pp38-pypy38_pp73-manylinux_2_17_x86_64.manylinux2014_x86_64.whl", hash = "sha256:1b1ee48a130da4dd0eb8055bbab34abf3f6262957832fd575e0cab4979a15a41"},
{file = "contourpy-1.0.6-pp38-pypy38_pp73-win_amd64.whl", hash = "sha256:5641927cc5ae66155d0c80195dc35726eae060e7defc18b7ab27600f39dd1fe7"},
{file = "contourpy-1.0.6-pp39-pypy39_pp73-macosx_10_9_x86_64.whl", hash = "sha256:7ee394502026d68652c2824348a40bf50f31351a668977b51437131a90d777ea"},
{file = "contourpy-1.0.6-pp39-pypy39_pp73-manylinux_2_17_aarch64.manylinux2014_aarch64.whl", hash = "sha256:0b97454ed5b1368b66ed414c754cba15b9750ce69938fc6153679787402e4cdf"},
{file = "contourpy-1.0.6-pp39-pypy39_pp73-manylinux_2_17_i686.manylinux2014_i686.whl", hash = "sha256:0236875c5a0784215b49d00ebbe80c5b6b5d5244b3655a36dda88105334dea17"},
{file = "contourpy-1.0.6-pp39-pypy39_pp73-manylinux_2_17_x86_64.manylinux2014_x86_64.whl", hash = "sha256:84c593aeff7a0171f639da92cb86d24954bbb61f8a1b530f74eb750a14685832"},
{file = "contourpy-1.0.6-pp39-pypy39_pp73-win_amd64.whl", hash = "sha256:9b0e7fe7f949fb719b206548e5cde2518ffb29936afa4303d8a1c4db43dcb675"},
{file = "contourpy-1.0.6.tar.gz", hash = "sha256:6e459ebb8bb5ee4c22c19cc000174f8059981971a33ce11e17dddf6aca97a142"},
]
coverage = [
{file = "coverage-6.5.0-cp310-cp310-macosx_10_9_x86_64.whl", hash = "sha256:ef8674b0ee8cc11e2d574e3e2998aea5df5ab242e012286824ea3c6970580e53"},
{file = "coverage-6.5.0-cp310-cp310-macosx_11_0_arm64.whl", hash = "sha256:784f53ebc9f3fd0e2a3f6a78b2be1bd1f5575d7863e10c6e12504f240fd06660"},
{file = "coverage-6.5.0-cp310-cp310-manylinux_2_17_aarch64.manylinux2014_aarch64.whl", hash = "sha256:b4a5be1748d538a710f87542f22c2cad22f80545a847ad91ce45e77417293eb4"},
{file = "coverage-6.5.0-cp310-cp310-manylinux_2_5_i686.manylinux1_i686.manylinux_2_17_i686.manylinux2014_i686.whl", hash = "sha256:83516205e254a0cb77d2d7bb3632ee019d93d9f4005de31dca0a8c3667d5bc04"},
{file = "coverage-6.5.0-cp310-cp310-manylinux_2_5_x86_64.manylinux1_x86_64.manylinux_2_17_x86_64.manylinux2014_x86_64.whl", hash = "sha256:af4fffaffc4067232253715065e30c5a7ec6faac36f8fc8d6f64263b15f74db0"},
{file = "coverage-6.5.0-cp310-cp310-musllinux_1_1_aarch64.whl", hash = "sha256:97117225cdd992a9c2a5515db1f66b59db634f59d0679ca1fa3fe8da32749cae"},
{file = "coverage-6.5.0-cp310-cp310-musllinux_1_1_i686.whl", hash = "sha256:a1170fa54185845505fbfa672f1c1ab175446c887cce8212c44149581cf2d466"},
{file = "coverage-6.5.0-cp310-cp310-musllinux_1_1_x86_64.whl", hash = "sha256:11b990d520ea75e7ee8dcab5bc908072aaada194a794db9f6d7d5cfd19661e5a"},
{file = "coverage-6.5.0-cp310-cp310-win32.whl", hash = "sha256:5dbec3b9095749390c09ab7c89d314727f18800060d8d24e87f01fb9cfb40b32"},
{file = "coverage-6.5.0-cp310-cp310-win_amd64.whl", hash = "sha256:59f53f1dc5b656cafb1badd0feb428c1e7bc19b867479ff72f7a9dd9b479f10e"},
{file = "coverage-6.5.0-cp311-cp311-macosx_10_9_x86_64.whl", hash = "sha256:4a5375e28c5191ac38cca59b38edd33ef4cc914732c916f2929029b4bfb50795"},
{file = "coverage-6.5.0-cp311-cp311-manylinux_2_17_aarch64.manylinux2014_aarch64.whl", hash = "sha256:c4ed2820d919351f4167e52425e096af41bfabacb1857186c1ea32ff9983ed75"},
{file = "coverage-6.5.0-cp311-cp311-manylinux_2_5_i686.manylinux1_i686.manylinux_2_17_i686.manylinux2014_i686.whl", hash = "sha256:33a7da4376d5977fbf0a8ed91c4dffaaa8dbf0ddbf4c8eea500a2486d8bc4d7b"},
{file = "coverage-6.5.0-cp311-cp311-manylinux_2_5_x86_64.manylinux1_x86_64.manylinux_2_17_x86_64.manylinux2014_x86_64.whl", hash = "sha256:a8fb6cf131ac4070c9c5a3e21de0f7dc5a0fbe8bc77c9456ced896c12fcdad91"},
{file = "coverage-6.5.0-cp311-cp311-musllinux_1_1_aarch64.whl", hash = "sha256:a6b7d95969b8845250586f269e81e5dfdd8ff828ddeb8567a4a2eaa7313460c4"},
{file = "coverage-6.5.0-cp311-cp311-musllinux_1_1_i686.whl", hash = "sha256:1ef221513e6f68b69ee9e159506d583d31aa3567e0ae84eaad9d6ec1107dddaa"},
{file = "coverage-6.5.0-cp311-cp311-musllinux_1_1_x86_64.whl", hash = "sha256:cca4435eebea7962a52bdb216dec27215d0df64cf27fc1dd538415f5d2b9da6b"},
{file = "coverage-6.5.0-cp311-cp311-win32.whl", hash = "sha256:98e8a10b7a314f454d9eff4216a9a94d143a7ee65018dd12442e898ee2310578"},
{file = "coverage-6.5.0-cp311-cp311-win_amd64.whl", hash = "sha256:bc8ef5e043a2af066fa8cbfc6e708d58017024dc4345a1f9757b329a249f041b"},
{file = "coverage-6.5.0-cp37-cp37m-macosx_10_9_x86_64.whl", hash = "sha256:4433b90fae13f86fafff0b326453dd42fc9a639a0d9e4eec4d366436d1a41b6d"},
{file = "coverage-6.5.0-cp37-cp37m-manylinux_2_17_aarch64.manylinux2014_aarch64.whl", hash = "sha256:f4f05d88d9a80ad3cac6244d36dd89a3c00abc16371769f1340101d3cb899fc3"},
{file = "coverage-6.5.0-cp37-cp37m-manylinux_2_5_i686.manylinux1_i686.manylinux_2_17_i686.manylinux2014_i686.whl", hash = "sha256:94e2565443291bd778421856bc975d351738963071e9b8839ca1fc08b42d4bef"},
{file = "coverage-6.5.0-cp37-cp37m-manylinux_2_5_x86_64.manylinux1_x86_64.manylinux_2_17_x86_64.manylinux2014_x86_64.whl", hash = "sha256:027018943386e7b942fa832372ebc120155fd970837489896099f5cfa2890f79"},
{file = "coverage-6.5.0-cp37-cp37m-musllinux_1_1_aarch64.whl", hash = "sha256:255758a1e3b61db372ec2736c8e2a1fdfaf563977eedbdf131de003ca5779b7d"},
{file = "coverage-6.5.0-cp37-cp37m-musllinux_1_1_i686.whl", hash = "sha256:851cf4ff24062c6aec510a454b2584f6e998cada52d4cb58c5e233d07172e50c"},
{file = "coverage-6.5.0-cp37-cp37m-musllinux_1_1_x86_64.whl", hash = "sha256:12adf310e4aafddc58afdb04d686795f33f4d7a6fa67a7a9d4ce7d6ae24d949f"},
{file = "coverage-6.5.0-cp37-cp37m-win32.whl", hash = "sha256:b5604380f3415ba69de87a289a2b56687faa4fe04dbee0754bfcae433489316b"},
{file = "coverage-6.5.0-cp37-cp37m-win_amd64.whl", hash = "sha256:4a8dbc1f0fbb2ae3de73eb0bdbb914180c7abfbf258e90b311dcd4f585d44bd2"},
{file = "coverage-6.5.0-cp38-cp38-macosx_10_9_x86_64.whl", hash = "sha256:d900bb429fdfd7f511f868cedd03a6bbb142f3f9118c09b99ef8dc9bf9643c3c"},
{file = "coverage-6.5.0-cp38-cp38-macosx_11_0_arm64.whl", hash = "sha256:2198ea6fc548de52adc826f62cb18554caedfb1d26548c1b7c88d8f7faa8f6ba"},
{file = "coverage-6.5.0-cp38-cp38-manylinux_2_17_aarch64.manylinux2014_aarch64.whl", hash = "sha256:6c4459b3de97b75e3bd6b7d4b7f0db13f17f504f3d13e2a7c623786289dd670e"},
{file = "coverage-6.5.0-cp38-cp38-manylinux_2_5_i686.manylinux1_i686.manylinux_2_17_i686.manylinux2014_i686.whl", hash = "sha256:20c8ac5386253717e5ccc827caad43ed66fea0efe255727b1053a8154d952398"},
{file = "coverage-6.5.0-cp38-cp38-manylinux_2_5_x86_64.manylinux1_x86_64.manylinux_2_17_x86_64.manylinux2014_x86_64.whl", hash = "sha256:6b07130585d54fe8dff3d97b93b0e20290de974dc8177c320aeaf23459219c0b"},
{file = "coverage-6.5.0-cp38-cp38-musllinux_1_1_aarch64.whl", hash = "sha256:dbdb91cd8c048c2b09eb17713b0c12a54fbd587d79adcebad543bc0cd9a3410b"},
{file = "coverage-6.5.0-cp38-cp38-musllinux_1_1_i686.whl", hash = "sha256:de3001a203182842a4630e7b8d1a2c7c07ec1b45d3084a83d5d227a3806f530f"},
{file = "coverage-6.5.0-cp38-cp38-musllinux_1_1_x86_64.whl", hash = "sha256:e07f4a4a9b41583d6eabec04f8b68076ab3cd44c20bd29332c6572dda36f372e"},
{file = "coverage-6.5.0-cp38-cp38-win32.whl", hash = "sha256:6d4817234349a80dbf03640cec6109cd90cba068330703fa65ddf56b60223a6d"},
{file = "coverage-6.5.0-cp38-cp38-win_amd64.whl", hash = "sha256:7ccf362abd726b0410bf8911c31fbf97f09f8f1061f8c1cf03dfc4b6372848f6"},
{file = "coverage-6.5.0-cp39-cp39-macosx_10_9_x86_64.whl", hash = "sha256:633713d70ad6bfc49b34ead4060531658dc6dfc9b3eb7d8a716d5873377ab745"},
{file = "coverage-6.5.0-cp39-cp39-macosx_11_0_arm64.whl", hash = "sha256:95203854f974e07af96358c0b261f1048d8e1083f2de9b1c565e1be4a3a48cfc"},
{file = "coverage-6.5.0-cp39-cp39-manylinux_2_17_aarch64.manylinux2014_aarch64.whl", hash = "sha256:b9023e237f4c02ff739581ef35969c3739445fb059b060ca51771e69101efffe"},
{file = "coverage-6.5.0-cp39-cp39-manylinux_2_5_i686.manylinux1_i686.manylinux_2_17_i686.manylinux2014_i686.whl", hash = "sha256:265de0fa6778d07de30bcf4d9dc471c3dc4314a23a3c6603d356a3c9abc2dfcf"},
{file = "coverage-6.5.0-cp39-cp39-manylinux_2_5_x86_64.manylinux1_x86_64.manylinux_2_17_x86_64.manylinux2014_x86_64.whl", hash = "sha256:8f830ed581b45b82451a40faabb89c84e1a998124ee4212d440e9c6cf70083e5"},
{file = "coverage-6.5.0-cp39-cp39-musllinux_1_1_aarch64.whl", hash = "sha256:7b6be138d61e458e18d8e6ddcddd36dd96215edfe5f1168de0b1b32635839b62"},
{file = "coverage-6.5.0-cp39-cp39-musllinux_1_1_i686.whl", hash = "sha256:42eafe6778551cf006a7c43153af1211c3aaab658d4d66fa5fcc021613d02518"},
{file = "coverage-6.5.0-cp39-cp39-musllinux_1_1_x86_64.whl", hash = "sha256:723e8130d4ecc8f56e9a611e73b31219595baa3bb252d539206f7bbbab6ffc1f"},
{file = "coverage-6.5.0-cp39-cp39-win32.whl", hash = "sha256:d9ecf0829c6a62b9b573c7bb6d4dcd6ba8b6f80be9ba4fc7ed50bf4ac9aecd72"},
{file = "coverage-6.5.0-cp39-cp39-win_amd64.whl", hash = "sha256:fc2af30ed0d5ae0b1abdb4ebdce598eafd5b35397d4d75deb341a614d333d987"},
{file = "coverage-6.5.0-pp36.pp37.pp38-none-any.whl", hash = "sha256:1431986dac3923c5945271f169f59c45b8802a114c8f548d611f2015133df77a"},
{file = "coverage-6.5.0.tar.gz", hash = "sha256:f642e90754ee3e06b0e7e51bce3379590e76b7f76b708e1a71ff043f87025c84"},
]
cycler = [
{file = "cycler-0.11.0-py3-none-any.whl", hash = "sha256:3a27e95f763a428a739d2add979fa7494c912a32c17c4c38c4d5f082cad165a3"},
{file = "cycler-0.11.0.tar.gz", hash = "sha256:9c87405839a19696e837b3b818fed3f5f69f16f1eec1a1ad77e043dcea9c772f"},
]
cymem = [
{file = "cymem-2.0.7-cp310-cp310-macosx_10_9_x86_64.whl", hash = "sha256:4981fc9182cc1fe54bfedf5f73bfec3ce0c27582d9be71e130c46e35958beef0"},
{file = "cymem-2.0.7-cp310-cp310-macosx_11_0_arm64.whl", hash = "sha256:42aedfd2e77aa0518a24a2a60a2147308903abc8b13c84504af58539c39e52a3"},
{file = "cymem-2.0.7-cp310-cp310-manylinux_2_17_aarch64.manylinux2014_aarch64.whl", hash = "sha256:c183257dc5ab237b664f64156c743e788f562417c74ea58c5a3939fe2d48d6f6"},
{file = "cymem-2.0.7-cp310-cp310-manylinux_2_17_x86_64.manylinux2014_x86_64.whl", hash = "sha256:d18250f97eeb13af2e8b19d3cefe4bf743b963d93320b0a2e729771410fd8cf4"},
{file = "cymem-2.0.7-cp310-cp310-win_amd64.whl", hash = "sha256:864701e626b65eb2256060564ed8eb034ebb0a8f14ce3fbef337e88352cdee9f"},
{file = "cymem-2.0.7-cp311-cp311-macosx_10_9_x86_64.whl", hash = "sha256:314273be1f143da674388e0a125d409e2721fbf669c380ae27c5cbae4011e26d"},
{file = "cymem-2.0.7-cp311-cp311-macosx_11_0_arm64.whl", hash = "sha256:df543a36e7000808fe0a03d92fd6cd8bf23fa8737c3f7ae791a5386de797bf79"},
{file = "cymem-2.0.7-cp311-cp311-manylinux_2_17_aarch64.manylinux2014_aarch64.whl", hash = "sha256:9e5e1b7de7952d89508d07601b9e95b2244e70d7ef60fbc161b3ad68f22815f8"},
{file = "cymem-2.0.7-cp311-cp311-manylinux_2_17_x86_64.manylinux2014_x86_64.whl", hash = "sha256:2aa33f1dbd7ceda37970e174c38fd1cf106817a261aa58521ba9918156868231"},
{file = "cymem-2.0.7-cp311-cp311-win_amd64.whl", hash = "sha256:10178e402bb512b2686b8c2f41f930111e597237ca8f85cb583ea93822ef798d"},
{file = "cymem-2.0.7-cp36-cp36m-manylinux_2_17_aarch64.manylinux2014_aarch64.whl", hash = "sha256:a2971b7da5aa2e65d8fbbe9f2acfc19ff8e73f1896e3d6e1223cc9bf275a0207"},
{file = "cymem-2.0.7-cp36-cp36m-manylinux_2_17_x86_64.manylinux2014_x86_64.whl", hash = "sha256:85359ab7b490e6c897c04863704481600bd45188a0e2ca7375eb5db193e13cb7"},
{file = "cymem-2.0.7-cp36-cp36m-win_amd64.whl", hash = "sha256:0ac45088abffbae9b7db2c597f098de51b7e3c1023cb314e55c0f7f08440cf66"},
{file = "cymem-2.0.7-cp37-cp37m-macosx_10_9_x86_64.whl", hash = "sha256:26e5d5c6958855d2fe3d5629afe85a6aae5531abaa76f4bc21b9abf9caaccdfe"},
{file = "cymem-2.0.7-cp37-cp37m-manylinux_2_17_aarch64.manylinux2014_aarch64.whl", hash = "sha256:011039e12d3144ac1bf3a6b38f5722b817f0d6487c8184e88c891b360b69f533"},
{file = "cymem-2.0.7-cp37-cp37m-manylinux_2_17_x86_64.manylinux2014_x86_64.whl", hash = "sha256:6f9e63e5ad4ed6ffa21fd8db1c03b05be3fea2f32e32fdace67a840ea2702c3d"},
{file = "cymem-2.0.7-cp37-cp37m-win_amd64.whl", hash = "sha256:5ea6b027fdad0c3e9a4f1b94d28d213be08c466a60c72c633eb9db76cf30e53a"},
{file = "cymem-2.0.7-cp38-cp38-macosx_10_9_x86_64.whl", hash = "sha256:4302df5793a320c4f4a263c7785d2fa7f29928d72cb83ebeb34d64a610f8d819"},
{file = "cymem-2.0.7-cp38-cp38-macosx_11_0_arm64.whl", hash = "sha256:24b779046484674c054af1e779c68cb224dc9694200ac13b22129d7fb7e99e6d"},
{file = "cymem-2.0.7-cp38-cp38-manylinux_2_17_aarch64.manylinux2014_aarch64.whl", hash = "sha256:6c50794c612801ed8b599cd4af1ed810a0d39011711c8224f93e1153c00e08d1"},
{file = "cymem-2.0.7-cp38-cp38-manylinux_2_17_x86_64.manylinux2014_x86_64.whl", hash = "sha256:a9525ad563b36dc1e30889d0087a0daa67dd7bb7d3e1530c4b61cd65cc756a5b"},
{file = "cymem-2.0.7-cp38-cp38-win_amd64.whl", hash = "sha256:48b98da6b906fe976865263e27734ebc64f972a978a999d447ad6c83334e3f90"},
{file = "cymem-2.0.7-cp39-cp39-macosx_10_9_x86_64.whl", hash = "sha256:e156788d32ad8f7141330913c5d5d2aa67182fca8f15ae22645e9f379abe8a4c"},
{file = "cymem-2.0.7-cp39-cp39-macosx_11_0_arm64.whl", hash = "sha256:3da89464021fe669932fce1578343fcaf701e47e3206f50d320f4f21e6683ca5"},
{file = "cymem-2.0.7-cp39-cp39-manylinux_2_17_aarch64.manylinux2014_aarch64.whl", hash = "sha256:4f359cab9f16e25b3098f816c40acbf1697a3b614a8d02c56e6ebcb9c89a06b3"},
{file = "cymem-2.0.7-cp39-cp39-manylinux_2_17_x86_64.manylinux2014_x86_64.whl", hash = "sha256:f165d7bce55d6730930e29d8294569788aa127f1be8d1642d9550ed96223cb37"},
{file = "cymem-2.0.7-cp39-cp39-win_amd64.whl", hash = "sha256:59a09cf0e71b1b88bfa0de544b801585d81d06ea123c1725e7c5da05b7ca0d20"},
{file = "cymem-2.0.7.tar.gz", hash = "sha256:e6034badb5dd4e10344211c81f16505a55553a7164adc314c75bd80cf07e57a8"},
]
cython = [
{file = "Cython-0.29.32-cp27-cp27m-manylinux_2_5_i686.manylinux1_i686.whl", hash = "sha256:39afb4679b8c6bf7ccb15b24025568f4f9b4d7f9bf3cbd981021f542acecd75b"},
{file = "Cython-0.29.32-cp27-cp27m-manylinux_2_5_x86_64.manylinux1_x86_64.whl", hash = "sha256:dbee03b8d42dca924e6aa057b836a064c769ddfd2a4c2919e65da2c8a362d528"},
{file = "Cython-0.29.32-cp27-cp27mu-manylinux_2_5_i686.manylinux1_i686.whl", hash = "sha256:5ba622326f2862f9c1f99ca8d47ade49871241920a352c917e16861e25b0e5c3"},
{file = "Cython-0.29.32-cp27-cp27mu-manylinux_2_5_x86_64.manylinux1_x86_64.whl", hash = "sha256:e6ffa08aa1c111a1ebcbd1cf4afaaec120bc0bbdec3f2545f8bb7d3e8e77a1cd"},
{file = "Cython-0.29.32-cp310-cp310-manylinux_2_17_aarch64.manylinux2014_aarch64.manylinux_2_24_aarch64.whl", hash = "sha256:97335b2cd4acebf30d14e2855d882de83ad838491a09be2011745579ac975833"},
{file = "Cython-0.29.32-cp310-cp310-manylinux_2_17_x86_64.manylinux2014_x86_64.manylinux_2_24_x86_64.whl", hash = "sha256:06be83490c906b6429b4389e13487a26254ccaad2eef6f3d4ee21d8d3a4aaa2b"},
{file = "Cython-0.29.32-cp310-cp310-manylinux_2_5_i686.manylinux1_i686.manylinux_2_24_i686.whl", hash = "sha256:eefd2b9a5f38ded8d859fe96cc28d7d06e098dc3f677e7adbafda4dcdd4a461c"},
{file = "Cython-0.29.32-cp310-cp310-musllinux_1_1_x86_64.whl", hash = "sha256:5514f3b4122cb22317122a48e175a7194e18e1803ca555c4c959d7dfe68eaf98"},
{file = "Cython-0.29.32-cp311-cp311-manylinux_2_17_aarch64.manylinux2014_aarch64.manylinux_2_24_aarch64.whl", hash = "sha256:656dc5ff1d269de4d11ee8542f2ffd15ab466c447c1f10e5b8aba6f561967276"},
{file = "Cython-0.29.32-cp311-cp311-manylinux_2_17_x86_64.manylinux2014_x86_64.manylinux_2_24_x86_64.whl", hash = "sha256:cdf10af3e2e3279dc09fdc5f95deaa624850a53913f30350ceee824dc14fc1a6"},
{file = "Cython-0.29.32-cp311-cp311-manylinux_2_5_i686.manylinux1_i686.manylinux_2_24_i686.whl", hash = "sha256:3875c2b2ea752816a4d7ae59d45bb546e7c4c79093c83e3ba7f4d9051dd02928"},
{file = "Cython-0.29.32-cp311-cp311-musllinux_1_1_x86_64.whl", hash = "sha256:79e3bab19cf1b021b613567c22eb18b76c0c547b9bc3903881a07bfd9e7e64cf"},
{file = "Cython-0.29.32-cp35-cp35m-manylinux_2_5_i686.manylinux1_i686.whl", hash = "sha256:b0595aee62809ba353cebc5c7978e0e443760c3e882e2c7672c73ffe46383673"},
{file = "Cython-0.29.32-cp35-cp35m-manylinux_2_5_x86_64.manylinux1_x86_64.whl", hash = "sha256:0ea8267fc373a2c5064ad77d8ff7bf0ea8b88f7407098ff51829381f8ec1d5d9"},
{file = "Cython-0.29.32-cp36-cp36m-manylinux_2_17_aarch64.manylinux2014_aarch64.manylinux_2_24_aarch64.whl", hash = "sha256:c8e8025f496b5acb6ba95da2fb3e9dacffc97d9a92711aacfdd42f9c5927e094"},
{file = "Cython-0.29.32-cp36-cp36m-manylinux_2_17_x86_64.manylinux2014_x86_64.manylinux_2_24_x86_64.whl", hash = "sha256:afbce249133a830f121b917f8c9404a44f2950e0e4f5d1e68f043da4c2e9f457"},
{file = "Cython-0.29.32-cp36-cp36m-manylinux_2_5_i686.manylinux1_i686.manylinux_2_24_i686.whl", hash = "sha256:513e9707407608ac0d306c8b09d55a28be23ea4152cbd356ceaec0f32ef08d65"},
{file = "Cython-0.29.32-cp36-cp36m-manylinux_2_5_i686.manylinux1_i686.whl", hash = "sha256:e83228e0994497900af954adcac27f64c9a57cd70a9ec768ab0cb2c01fd15cf1"},
{file = "Cython-0.29.32-cp36-cp36m-manylinux_2_5_x86_64.manylinux1_x86_64.whl", hash = "sha256:ea1dcc07bfb37367b639415333cfbfe4a93c3be340edf1db10964bc27d42ed64"},
{file = "Cython-0.29.32-cp36-cp36m-musllinux_1_1_x86_64.whl", hash = "sha256:8669cadeb26d9a58a5e6b8ce34d2c8986cc3b5c0bfa77eda6ceb471596cb2ec3"},
{file = "Cython-0.29.32-cp37-cp37m-manylinux_2_17_aarch64.manylinux2014_aarch64.manylinux_2_24_aarch64.whl", hash = "sha256:ed087eeb88a8cf96c60fb76c5c3b5fb87188adee5e179f89ec9ad9a43c0c54b3"},
{file = "Cython-0.29.32-cp37-cp37m-manylinux_2_17_x86_64.manylinux2014_x86_64.manylinux_2_24_x86_64.whl", hash = "sha256:3f85eb2343d20d91a4ea9cf14e5748092b376a64b7e07fc224e85b2753e9070b"},
{file = "Cython-0.29.32-cp37-cp37m-manylinux_2_5_i686.manylinux1_i686.manylinux_2_24_i686.whl", hash = "sha256:63b79d9e1f7c4d1f498ab1322156a0d7dc1b6004bf981a8abda3f66800e140cd"},
{file = "Cython-0.29.32-cp37-cp37m-manylinux_2_5_i686.manylinux1_i686.whl", hash = "sha256:e1958e0227a4a6a2c06fd6e35b7469de50adf174102454db397cec6e1403cce3"},
{file = "Cython-0.29.32-cp37-cp37m-manylinux_2_5_x86_64.manylinux1_x86_64.whl", hash = "sha256:856d2fec682b3f31583719cb6925c6cdbb9aa30f03122bcc45c65c8b6f515754"},
{file = "Cython-0.29.32-cp37-cp37m-musllinux_1_1_x86_64.whl", hash = "sha256:479690d2892ca56d34812fe6ab8f58e4b2e0129140f3d94518f15993c40553da"},
{file = "Cython-0.29.32-cp38-cp38-manylinux_2_17_aarch64.manylinux2014_aarch64.manylinux_2_24_aarch64.whl", hash = "sha256:67fdd2f652f8d4840042e2d2d91e15636ba2bcdcd92e7e5ffbc68e6ef633a754"},
{file = "Cython-0.29.32-cp38-cp38-manylinux_2_17_x86_64.manylinux2014_x86_64.manylinux_2_24_x86_64.whl", hash = "sha256:4a4b03ab483271f69221c3210f7cde0dcc456749ecf8243b95bc7a701e5677e0"},
{file = "Cython-0.29.32-cp38-cp38-manylinux_2_5_i686.manylinux1_i686.manylinux_2_24_i686.whl", hash = "sha256:40eff7aa26e91cf108fd740ffd4daf49f39b2fdffadabc7292b4b7dc5df879f0"},
{file = "Cython-0.29.32-cp38-cp38-manylinux_2_5_i686.manylinux1_i686.whl", hash = "sha256:0bbc27abdf6aebfa1bce34cd92bd403070356f28b0ecb3198ff8a182791d58b9"},
{file = "Cython-0.29.32-cp38-cp38-manylinux_2_5_x86_64.manylinux1_x86_64.whl", hash = "sha256:cddc47ec746a08603037731f5d10aebf770ced08666100bd2cdcaf06a85d4d1b"},
{file = "Cython-0.29.32-cp38-cp38-musllinux_1_1_x86_64.whl", hash = "sha256:eca3065a1279456e81c615211d025ea11bfe4e19f0c5650b859868ca04b3fcbd"},
{file = "Cython-0.29.32-cp39-cp39-manylinux_2_17_aarch64.manylinux2014_aarch64.manylinux_2_24_aarch64.whl", hash = "sha256:d968ffc403d92addf20b68924d95428d523436adfd25cf505d427ed7ba3bee8b"},
{file = "Cython-0.29.32-cp39-cp39-manylinux_2_17_x86_64.manylinux2014_x86_64.manylinux_2_24_x86_64.whl", hash = "sha256:f3fd44cc362eee8ae569025f070d56208908916794b6ab21e139cea56470a2b3"},
{file = "Cython-0.29.32-cp39-cp39-manylinux_2_5_i686.manylinux1_i686.manylinux_2_24_i686.whl", hash = "sha256:b6da3063c5c476f5311fd76854abae6c315f1513ef7d7904deed2e774623bbb9"},
{file = "Cython-0.29.32-cp39-cp39-manylinux_2_5_i686.manylinux1_i686.whl", hash = "sha256:061e25151c38f2361bc790d3bcf7f9d9828a0b6a4d5afa56fbed3bd33fb2373a"},
{file = "Cython-0.29.32-cp39-cp39-manylinux_2_5_x86_64.manylinux1_x86_64.whl", hash = "sha256:f9944013588a3543fca795fffb0a070a31a243aa4f2d212f118aa95e69485831"},
{file = "Cython-0.29.32-cp39-cp39-musllinux_1_1_x86_64.whl", hash = "sha256:07d173d3289415bb496e72cb0ddd609961be08fe2968c39094d5712ffb78672b"},
{file = "Cython-0.29.32-py2.py3-none-any.whl", hash = "sha256:eeb475eb6f0ccf6c039035eb4f0f928eb53ead88777e0a760eccb140ad90930b"},
{file = "Cython-0.29.32.tar.gz", hash = "sha256:8733cf4758b79304f2a4e39ebfac5e92341bce47bcceb26c1254398b2f8c1af7"},
]
dask = [
{file = "dask-2021.11.2-py3-none-any.whl", hash = "sha256:2b0ad7beba8950add4fdc7c5cb94fa9444915ddb00c711d5743e2c4bb0a95ef5"},
{file = "dask-2021.11.2.tar.gz", hash = "sha256:e12bfe272928d62fa99623d98d0e0b0c045b33a47509ef31a22175aa5fd10917"},
]
debugpy = [
{file = "debugpy-1.6.3-cp310-cp310-macosx_10_15_x86_64.whl", hash = "sha256:c4b2bd5c245eeb49824bf7e539f95fb17f9a756186e51c3e513e32999d8846f3"},
{file = "debugpy-1.6.3-cp310-cp310-manylinux_2_5_x86_64.manylinux1_x86_64.manylinux_2_12_x86_64.manylinux2010_x86_64.whl", hash = "sha256:b8deaeb779699350deeed835322730a3efec170b88927debc9ba07a1a38e2585"},
{file = "debugpy-1.6.3-cp310-cp310-win32.whl", hash = "sha256:fc233a0160f3b117b20216f1169e7211b83235e3cd6749bcdd8dbb72177030c7"},
{file = "debugpy-1.6.3-cp310-cp310-win_amd64.whl", hash = "sha256:dda8652520eae3945833e061cbe2993ad94a0b545aebd62e4e6b80ee616c76b2"},
{file = "debugpy-1.6.3-cp37-cp37m-macosx_10_15_x86_64.whl", hash = "sha256:d5c814596a170a0a58fa6fad74947e30bfd7e192a5d2d7bd6a12156c2899e13a"},
{file = "debugpy-1.6.3-cp37-cp37m-manylinux_2_5_x86_64.manylinux1_x86_64.manylinux_2_12_x86_64.manylinux2010_x86_64.whl", hash = "sha256:c4cd6f37e3c168080d61d698390dfe2cd9e74ebf80b448069822a15dadcda57d"},
{file = "debugpy-1.6.3-cp37-cp37m-win32.whl", hash = "sha256:3c9f985944a30cfc9ae4306ac6a27b9c31dba72ca943214dad4a0ab3840f6161"},
{file = "debugpy-1.6.3-cp37-cp37m-win_amd64.whl", hash = "sha256:5ad571a36cec137ae6ed951d0ff75b5e092e9af6683da084753231150cbc5b25"},
{file = "debugpy-1.6.3-cp38-cp38-macosx_10_15_x86_64.whl", hash = "sha256:adcfea5ea06d55d505375995e150c06445e2b20cd12885bcae566148c076636b"},
{file = "debugpy-1.6.3-cp38-cp38-manylinux_2_5_x86_64.manylinux1_x86_64.manylinux_2_12_x86_64.manylinux2010_x86_64.whl", hash = "sha256:daadab4403427abd090eccb38d8901afd8b393e01fd243048fab3f1d7132abb4"},
{file = "debugpy-1.6.3-cp38-cp38-win32.whl", hash = "sha256:6efc30325b68e451118b795eff6fe8488253ca3958251d5158106d9c87581bc6"},
{file = "debugpy-1.6.3-cp38-cp38-win_amd64.whl", hash = "sha256:86d784b72c5411c833af1cd45b83d80c252b77c3bfdb43db17c441d772f4c734"},
{file = "debugpy-1.6.3-cp39-cp39-macosx_10_15_x86_64.whl", hash = "sha256:4e255982552b0edfe3a6264438dbd62d404baa6556a81a88f9420d3ed79b06ae"},
{file = "debugpy-1.6.3-cp39-cp39-manylinux_2_5_x86_64.manylinux1_x86_64.manylinux_2_12_x86_64.manylinux2010_x86_64.whl", hash = "sha256:cca23cb6161ac89698d629d892520327dd1be9321c0960e610bbcb807232b45d"},
{file = "debugpy-1.6.3-cp39-cp39-win32.whl", hash = "sha256:7c302095a81be0d5c19f6529b600bac971440db3e226dce85347cc27e6a61908"},
{file = "debugpy-1.6.3-cp39-cp39-win_amd64.whl", hash = "sha256:34d2cdd3a7c87302ba5322b86e79c32c2115be396f3f09ca13306d8a04fe0f16"},
{file = "debugpy-1.6.3-py2.py3-none-any.whl", hash = "sha256:84c39940a0cac410bf6aa4db00ba174f973eef521fbe9dd058e26bcabad89c4f"},
{file = "debugpy-1.6.3.zip", hash = "sha256:e8922090514a890eec99cfb991bab872dd2e353ebb793164d5f01c362b9a40bf"},
]
decorator = [
{file = "decorator-5.1.1-py3-none-any.whl", hash = "sha256:b8c3f85900b9dc423225913c5aace94729fe1fa9763b38939a95226f02d37186"},
{file = "decorator-5.1.1.tar.gz", hash = "sha256:637996211036b6385ef91435e4fae22989472f9d571faba8927ba8253acbc330"},
]
defusedxml = [
{file = "defusedxml-0.7.1-py2.py3-none-any.whl", hash = "sha256:a352e7e428770286cc899e2542b6cdaedb2b4953ff269a210103ec58f6198a61"},
{file = "defusedxml-0.7.1.tar.gz", hash = "sha256:1bb3032db185915b62d7c6209c5a8792be6a32ab2fedacc84e01b52c51aa3e69"},
]
dill = [
{file = "dill-0.3.6-py3-none-any.whl", hash = "sha256:a07ffd2351b8c678dfc4a856a3005f8067aea51d6ba6c700796a4d9e280f39f0"},
{file = "dill-0.3.6.tar.gz", hash = "sha256:e5db55f3687856d8fbdab002ed78544e1c4559a130302693d839dfe8f93f2373"},
]
distributed = [
{file = "distributed-2021.11.2-py3-none-any.whl", hash = "sha256:af1f7b98d85d43886fefe2354379c848c7a5aa6ae4d2313a7aca9ab9081a7e56"},
{file = "distributed-2021.11.2.tar.gz", hash = "sha256:f86a01a2e1e678865d2e42300c47552b5012cd81a2d354e47827a1fd074cc302"},
]
docutils = [
{file = "docutils-0.17.1-py2.py3-none-any.whl", hash = "sha256:cf316c8370a737a022b72b56874f6602acf974a37a9fba42ec2876387549fc61"},
{file = "docutils-0.17.1.tar.gz", hash = "sha256:686577d2e4c32380bb50cbb22f575ed742d58168cee37e99117a854bcd88f125"},
]
econml = [
{file = "econml-0.14.0-cp310-cp310-macosx_10_9_x86_64.whl", hash = "sha256:9c2fc1d67d98774d00bfe8e76d76af3de5ebc8d5f7a440da3c667d5ad244f971"},
{file = "econml-0.14.0-cp310-cp310-manylinux_2_17_x86_64.manylinux2014_x86_64.whl", hash = "sha256:9b02aca395eaa905bff080c3efd4f74bf281f168c674d74bdf899fc9467311e1"},
{file = "econml-0.14.0-cp310-cp310-win_amd64.whl", hash = "sha256:d2cca82486826c2b13f47ed0140f3fc85d8016fb43153a1b2de025345b190c6c"},
{file = "econml-0.14.0-cp37-cp37m-macosx_10_9_x86_64.whl", hash = "sha256:ce98668ba93d33856b60750e23312b9a6d503af6890b5588ab708db9de05ff49"},
{file = "econml-0.14.0-cp37-cp37m-manylinux_2_17_x86_64.manylinux2014_x86_64.whl", hash = "sha256:3b6b9938a2f48bf3055ae0ea47ac5a627d1c180f22e62531943961427769b0ef"},
{file = "econml-0.14.0-cp37-cp37m-win_amd64.whl", hash = "sha256:3c780c49a97bd688475f8863a7bdad2cbe19fdb4417708e3874f2bdae102852f"},
{file = "econml-0.14.0-cp38-cp38-macosx_10_9_x86_64.whl", hash = "sha256:7f2930eb311ea576195718b97fde83b4f2d29f3f3dc57ce0834b52fee410bfac"},
{file = "econml-0.14.0-cp38-cp38-manylinux_2_17_x86_64.manylinux2014_x86_64.whl", hash = "sha256:36be15da6ff3b295bc5cf80b95753e19bc123a1103bf53a2a0744daef49273e5"},
{file = "econml-0.14.0-cp38-cp38-win_amd64.whl", hash = "sha256:f71ab406f37b64dead4bee1b4c4869204faf9c55887dc8117bd9396d977edaf3"},
{file = "econml-0.14.0-cp39-cp39-macosx_10_9_x86_64.whl", hash = "sha256:1b0e67419c4eff2acdf8138f208de333a85c3e6fded831a6664bb02d6f4bcbe1"},
{file = "econml-0.14.0-cp39-cp39-manylinux_2_17_x86_64.manylinux2014_x86_64.whl", hash = "sha256:376724e0535ad9cbc585f768110eb23bfd3b3218032a61cac8793a09ee3bce95"},
{file = "econml-0.14.0-cp39-cp39-win_amd64.whl", hash = "sha256:6e1f0554d0f930dc639dbf3d7cb171297aa113dd64b7db322e0abb7d12eaa4dc"},
{file = "econml-0.14.0.tar.gz", hash = "sha256:5637d36c7548fb3ad01956d091cc6a9f788b090bc8b892bd527012e5bdbce041"},
]
entrypoints = [
{file = "entrypoints-0.4-py3-none-any.whl", hash = "sha256:f174b5ff827504fd3cd97cc3f8649f3693f51538c7e4bdf3ef002c8429d42f9f"},
{file = "entrypoints-0.4.tar.gz", hash = "sha256:b706eddaa9218a19ebcd67b56818f05bb27589b1ca9e8d797b74affad4ccacd4"},
]
exceptiongroup = [
{file = "exceptiongroup-1.0.4-py3-none-any.whl", hash = "sha256:542adf9dea4055530d6e1279602fa5cb11dab2395fa650b8674eaec35fc4a828"},
{file = "exceptiongroup-1.0.4.tar.gz", hash = "sha256:bd14967b79cd9bdb54d97323216f8fdf533e278df937aa2a90089e7d6e06e5ec"},
]
executing = [
{file = "executing-1.2.0-py2.py3-none-any.whl", hash = "sha256:0314a69e37426e3608aada02473b4161d4caf5a4b244d1d0c48072b8fee7bacc"},
{file = "executing-1.2.0.tar.gz", hash = "sha256:19da64c18d2d851112f09c287f8d3dbbdf725ab0e569077efb6cdcbd3497c107"},
]
fastai = [
{file = "fastai-2.7.10-py3-none-any.whl", hash = "sha256:db3709d6ff9ede9cd29111420b3669238248fa4f5a29d98daf37d52d122d9424"},
{file = "fastai-2.7.10.tar.gz", hash = "sha256:ccef6a185ae3a637efc9bcd9fea8e48b75f454d0ebad3b6df426f22fae20039d"},
]
fastcore = [
{file = "fastcore-1.5.27-py3-none-any.whl", hash = "sha256:79dffaa3de96066e4d7f2b8793f1a8a9468c82bc97d3d48ec002de34097b2a9f"},
{file = "fastcore-1.5.27.tar.gz", hash = "sha256:c6b66b35569d17251e25999bafc7d9bcdd6446c1e710503c08670c3ff1eef271"},
]
fastdownload = [
{file = "fastdownload-0.0.7-py3-none-any.whl", hash = "sha256:b791fa3406a2da003ba64615f03c60e2ea041c3c555796450b9a9a601bc0bbac"},
{file = "fastdownload-0.0.7.tar.gz", hash = "sha256:20507edb8e89406a1fbd7775e6e2a3d81a4dd633dd506b0e9cf0e1613e831d6a"},
]
fastjsonschema = [
{file = "fastjsonschema-2.16.2-py3-none-any.whl", hash = "sha256:21f918e8d9a1a4ba9c22e09574ba72267a6762d47822db9add95f6454e51cc1c"},
{file = "fastjsonschema-2.16.2.tar.gz", hash = "sha256:01e366f25d9047816fe3d288cbfc3e10541daf0af2044763f3d0ade42476da18"},
]
fastprogress = [
{file = "fastprogress-1.0.3-py3-none-any.whl", hash = "sha256:6dfea88f7a4717b0a8d6ee2048beae5dbed369f932a368c5dd9caff34796f7c5"},
{file = "fastprogress-1.0.3.tar.gz", hash = "sha256:7a17d2b438890f838c048eefce32c4ded47197ecc8ea042cecc33d3deb8022f5"},
]
flake8 = [
{file = "flake8-4.0.1-py2.py3-none-any.whl", hash = "sha256:479b1304f72536a55948cb40a32dce8bb0ffe3501e26eaf292c7e60eb5e0428d"},
{file = "flake8-4.0.1.tar.gz", hash = "sha256:806e034dda44114815e23c16ef92f95c91e4c71100ff52813adf7132a6ad870d"},
]
flaky = [
{file = "flaky-3.7.0-py2.py3-none-any.whl", hash = "sha256:d6eda73cab5ae7364504b7c44670f70abed9e75f77dd116352f662817592ec9c"},
{file = "flaky-3.7.0.tar.gz", hash = "sha256:3ad100780721a1911f57a165809b7ea265a7863305acb66708220820caf8aa0d"},
]
flatbuffers = [
{file = "flatbuffers-22.10.26-py2.py3-none-any.whl", hash = "sha256:e36d5ba7a5e9483ff0ec1d238fdc3011c866aab7f8ce77d5e9d445ac12071d84"},
{file = "flatbuffers-22.10.26.tar.gz", hash = "sha256:8698aaa635ca8cf805c7d8414d4a4a8ecbffadca0325fa60551cb3ca78612356"},
]
fonttools = [
{file = "fonttools-4.38.0-py3-none-any.whl", hash = "sha256:820466f43c8be8c3009aef8b87e785014133508f0de64ec469e4efb643ae54fb"},
{file = "fonttools-4.38.0.zip", hash = "sha256:2bb244009f9bf3fa100fc3ead6aeb99febe5985fa20afbfbaa2f8946c2fbdaf1"},
]
forestci = [
{file = "forestci-0.6-py3-none-any.whl", hash = "sha256:025e76b20e23ddbdfc0a9c9c7f261751ee376b33a7b257b86e72fbad8312d650"},
{file = "forestci-0.6.tar.gz", hash = "sha256:f74f51eba9a7c189fdb673203cea10383f0a34504d2d28dee0fd712d19945b5a"},
]
fsspec = [
{file = "fsspec-2022.11.0-py3-none-any.whl", hash = "sha256:d6e462003e3dcdcb8c7aa84c73a228f8227e72453cd22570e2363e8844edfe7b"},
{file = "fsspec-2022.11.0.tar.gz", hash = "sha256:259d5fd5c8e756ff2ea72f42e7613c32667dc2049a4ac3d84364a7ca034acb8b"},
]
future = [
{file = "future-0.18.2.tar.gz", hash = "sha256:b1bead90b70cf6ec3f0710ae53a525360fa360d306a86583adc6bf83a4db537d"},
]
gast = [
{file = "gast-0.4.0-py3-none-any.whl", hash = "sha256:b7adcdd5adbebf1adf17378da5ba3f543684dbec47b1cda1f3997e573cd542c4"},
{file = "gast-0.4.0.tar.gz", hash = "sha256:40feb7b8b8434785585ab224d1568b857edb18297e5a3047f1ba012bc83b42c1"},
]
google-auth = [
{file = "google-auth-2.14.1.tar.gz", hash = "sha256:ccaa901f31ad5cbb562615eb8b664b3dd0bf5404a67618e642307f00613eda4d"},
{file = "google_auth-2.14.1-py2.py3-none-any.whl", hash = "sha256:f5d8701633bebc12e0deea4df8abd8aff31c28b355360597f7f2ee60f2e4d016"},
]
google-auth-oauthlib = [
{file = "google-auth-oauthlib-0.4.6.tar.gz", hash = "sha256:a90a072f6993f2c327067bf65270046384cda5a8ecb20b94ea9a687f1f233a7a"},
{file = "google_auth_oauthlib-0.4.6-py2.py3-none-any.whl", hash = "sha256:3f2a6e802eebbb6fb736a370fbf3b055edcb6b52878bf2f26330b5e041316c73"},
]
google-pasta = [
{file = "google-pasta-0.2.0.tar.gz", hash = "sha256:c9f2c8dfc8f96d0d5808299920721be30c9eec37f2389f28904f454565c8a16e"},
{file = "google_pasta-0.2.0-py2-none-any.whl", hash = "sha256:4612951da876b1a10fe3960d7226f0c7682cf901e16ac06e473b267a5afa8954"},
{file = "google_pasta-0.2.0-py3-none-any.whl", hash = "sha256:b32482794a366b5366a32c92a9a9201b107821889935a02b3e51f6b432ea84ed"},
]
graphviz = [
{file = "graphviz-0.20.1-py3-none-any.whl", hash = "sha256:587c58a223b51611c0cf461132da386edd896a029524ca61a1462b880bf97977"},
{file = "graphviz-0.20.1.zip", hash = "sha256:8c58f14adaa3b947daf26c19bc1e98c4e0702cdc31cf99153e6f06904d492bf8"},
]
grpcio = [
{file = "grpcio-1.50.0-cp310-cp310-linux_armv7l.whl", hash = "sha256:906f4d1beb83b3496be91684c47a5d870ee628715227d5d7c54b04a8de802974"},
{file = "grpcio-1.50.0-cp310-cp310-macosx_12_0_x86_64.whl", hash = "sha256:2d9fd6e38b16c4d286a01e1776fdf6c7a4123d99ae8d6b3f0b4a03a34bf6ce45"},
{file = "grpcio-1.50.0-cp310-cp310-manylinux_2_17_aarch64.whl", hash = "sha256:4b123fbb7a777a2fedec684ca0b723d85e1d2379b6032a9a9b7851829ed3ca9a"},
{file = "grpcio-1.50.0-cp310-cp310-manylinux_2_17_i686.manylinux2014_i686.whl", hash = "sha256:b2f77a90ba7b85bfb31329f8eab9d9540da2cf8a302128fb1241d7ea239a5469"},
{file = "grpcio-1.50.0-cp310-cp310-manylinux_2_17_x86_64.manylinux2014_x86_64.whl", hash = "sha256:9eea18a878cffc804506d39c6682d71f6b42ec1c151d21865a95fae743fda500"},
{file = "grpcio-1.50.0-cp310-cp310-musllinux_1_1_i686.whl", hash = "sha256:2b71916fa8f9eb2abd93151fafe12e18cebb302686b924bd4ec39266211da525"},
{file = "grpcio-1.50.0-cp310-cp310-musllinux_1_1_x86_64.whl", hash = "sha256:95ce51f7a09491fb3da8cf3935005bff19983b77c4e9437ef77235d787b06842"},
{file = "grpcio-1.50.0-cp310-cp310-win32.whl", hash = "sha256:f7025930039a011ed7d7e7ef95a1cb5f516e23c5a6ecc7947259b67bea8e06ca"},
{file = "grpcio-1.50.0-cp310-cp310-win_amd64.whl", hash = "sha256:05f7c248e440f538aaad13eee78ef35f0541e73498dd6f832fe284542ac4b298"},
{file = "grpcio-1.50.0-cp311-cp311-linux_armv7l.whl", hash = "sha256:ca8a2254ab88482936ce941485c1c20cdeaef0efa71a61dbad171ab6758ec998"},
{file = "grpcio-1.50.0-cp311-cp311-macosx_10_10_x86_64.whl", hash = "sha256:3b611b3de3dfd2c47549ca01abfa9bbb95937eb0ea546ea1d762a335739887be"},
{file = "grpcio-1.50.0-cp311-cp311-manylinux_2_17_i686.manylinux2014_i686.whl", hash = "sha256:1a4cd8cb09d1bc70b3ea37802be484c5ae5a576108bad14728f2516279165dd7"},
{file = "grpcio-1.50.0-cp311-cp311-manylinux_2_17_x86_64.manylinux2014_x86_64.whl", hash = "sha256:156f8009e36780fab48c979c5605eda646065d4695deea4cfcbcfdd06627ddb6"},
{file = "grpcio-1.50.0-cp311-cp311-musllinux_1_1_i686.whl", hash = "sha256:de411d2b030134b642c092e986d21aefb9d26a28bf5a18c47dd08ded411a3bc5"},
{file = "grpcio-1.50.0-cp311-cp311-musllinux_1_1_x86_64.whl", hash = "sha256:d144ad10eeca4c1d1ce930faa105899f86f5d99cecfe0d7224f3c4c76265c15e"},
{file = "grpcio-1.50.0-cp311-cp311-win32.whl", hash = "sha256:92d7635d1059d40d2ec29c8bf5ec58900120b3ce5150ef7414119430a4b2dd5c"},
{file = "grpcio-1.50.0-cp311-cp311-win_amd64.whl", hash = "sha256:ce8513aee0af9c159319692bfbf488b718d1793d764798c3d5cff827a09e25ef"},
{file = "grpcio-1.50.0-cp37-cp37m-linux_armv7l.whl", hash = "sha256:8e8999a097ad89b30d584c034929f7c0be280cd7851ac23e9067111167dcbf55"},
{file = "grpcio-1.50.0-cp37-cp37m-macosx_10_10_x86_64.whl", hash = "sha256:a50a1be449b9e238b9bd43d3857d40edf65df9416dea988929891d92a9f8a778"},
{file = "grpcio-1.50.0-cp37-cp37m-manylinux_2_17_aarch64.whl", hash = "sha256:cf151f97f5f381163912e8952eb5b3afe89dec9ed723d1561d59cabf1e219a35"},
{file = "grpcio-1.50.0-cp37-cp37m-manylinux_2_17_i686.manylinux2014_i686.whl", hash = "sha256:a23d47f2fc7111869f0ff547f771733661ff2818562b04b9ed674fa208e261f4"},
{file = "grpcio-1.50.0-cp37-cp37m-manylinux_2_17_x86_64.manylinux2014_x86_64.whl", hash = "sha256:d84d04dec64cc4ed726d07c5d17b73c343c8ddcd6b59c7199c801d6bbb9d9ed1"},
{file = "grpcio-1.50.0-cp37-cp37m-musllinux_1_1_i686.whl", hash = "sha256:67dd41a31f6fc5c7db097a5c14a3fa588af54736ffc174af4411d34c4f306f68"},
{file = "grpcio-1.50.0-cp37-cp37m-musllinux_1_1_x86_64.whl", hash = "sha256:8d4c8e73bf20fb53fe5a7318e768b9734cf122fe671fcce75654b98ba12dfb75"},
{file = "grpcio-1.50.0-cp37-cp37m-win32.whl", hash = "sha256:7489dbb901f4fdf7aec8d3753eadd40839c9085967737606d2c35b43074eea24"},
{file = "grpcio-1.50.0-cp37-cp37m-win_amd64.whl", hash = "sha256:531f8b46f3d3db91d9ef285191825d108090856b3bc86a75b7c3930f16ce432f"},
{file = "grpcio-1.50.0-cp38-cp38-linux_armv7l.whl", hash = "sha256:d534d169673dd5e6e12fb57cc67664c2641361e1a0885545495e65a7b761b0f4"},
{file = "grpcio-1.50.0-cp38-cp38-macosx_10_10_x86_64.whl", hash = "sha256:1d8d02dbb616c0a9260ce587eb751c9c7dc689bc39efa6a88cc4fa3e9c138a7b"},
{file = "grpcio-1.50.0-cp38-cp38-manylinux_2_17_aarch64.whl", hash = "sha256:baab51dcc4f2aecabf4ed1e2f57bceab240987c8b03533f1cef90890e6502067"},
{file = "grpcio-1.50.0-cp38-cp38-manylinux_2_17_i686.manylinux2014_i686.whl", hash = "sha256:40838061e24f960b853d7bce85086c8e1b81c6342b1f4c47ff0edd44bbae2722"},
{file = "grpcio-1.50.0-cp38-cp38-manylinux_2_17_x86_64.manylinux2014_x86_64.whl", hash = "sha256:931e746d0f75b2a5cff0a1197d21827a3a2f400c06bace036762110f19d3d507"},
{file = "grpcio-1.50.0-cp38-cp38-musllinux_1_1_i686.whl", hash = "sha256:15f9e6d7f564e8f0776770e6ef32dac172c6f9960c478616c366862933fa08b4"},
{file = "grpcio-1.50.0-cp38-cp38-musllinux_1_1_x86_64.whl", hash = "sha256:a4c23e54f58e016761b576976da6a34d876420b993f45f66a2bfb00363ecc1f9"},
{file = "grpcio-1.50.0-cp38-cp38-win32.whl", hash = "sha256:3e4244c09cc1b65c286d709658c061f12c61c814be0b7030a2d9966ff02611e0"},
{file = "grpcio-1.50.0-cp38-cp38-win_amd64.whl", hash = "sha256:8e69aa4e9b7f065f01d3fdcecbe0397895a772d99954bb82eefbb1682d274518"},
{file = "grpcio-1.50.0-cp39-cp39-linux_armv7l.whl", hash = "sha256:af98d49e56605a2912cf330b4627e5286243242706c3a9fa0bcec6e6f68646fc"},
{file = "grpcio-1.50.0-cp39-cp39-macosx_10_10_x86_64.whl", hash = "sha256:080b66253f29e1646ac53ef288c12944b131a2829488ac3bac8f52abb4413c0d"},
{file = "grpcio-1.50.0-cp39-cp39-manylinux_2_17_aarch64.whl", hash = "sha256:ab5d0e3590f0a16cb88de4a3fa78d10eb66a84ca80901eb2c17c1d2c308c230f"},
{file = "grpcio-1.50.0-cp39-cp39-manylinux_2_17_i686.manylinux2014_i686.whl", hash = "sha256:cb11464f480e6103c59d558a3875bd84eed6723f0921290325ebe97262ae1347"},
{file = "grpcio-1.50.0-cp39-cp39-manylinux_2_17_x86_64.manylinux2014_x86_64.whl", hash = "sha256:e07fe0d7ae395897981d16be61f0db9791f482f03fee7d1851fe20ddb4f69c03"},
{file = "grpcio-1.50.0-cp39-cp39-musllinux_1_1_i686.whl", hash = "sha256:d75061367a69808ab2e84c960e9dce54749bcc1e44ad3f85deee3a6c75b4ede9"},
{file = "grpcio-1.50.0-cp39-cp39-musllinux_1_1_x86_64.whl", hash = "sha256:ae23daa7eda93c1c49a9ecc316e027ceb99adbad750fbd3a56fa9e4a2ffd5ae0"},
{file = "grpcio-1.50.0-cp39-cp39-win32.whl", hash = "sha256:177afaa7dba3ab5bfc211a71b90da1b887d441df33732e94e26860b3321434d9"},
{file = "grpcio-1.50.0-cp39-cp39-win_amd64.whl", hash = "sha256:ea8ccf95e4c7e20419b7827aa5b6da6f02720270686ac63bd3493a651830235c"},
{file = "grpcio-1.50.0.tar.gz", hash = "sha256:12b479839a5e753580b5e6053571de14006157f2ef9b71f38c56dc9b23b95ad6"},
]
h5py = [
{file = "h5py-3.7.0-cp310-cp310-macosx_10_9_x86_64.whl", hash = "sha256:d77af42cb751ad6cc44f11bae73075a07429a5cf2094dfde2b1e716e059b3911"},
{file = "h5py-3.7.0-cp310-cp310-macosx_11_0_arm64.whl", hash = "sha256:63beb8b7b47d0896c50de6efb9a1eaa81dbe211f3767e7dd7db159cea51ba37a"},
{file = "h5py-3.7.0-cp310-cp310-manylinux_2_12_x86_64.manylinux2010_x86_64.whl", hash = "sha256:04e2e1e2fc51b8873e972a08d2f89625ef999b1f2d276199011af57bb9fc7851"},
{file = "h5py-3.7.0-cp310-cp310-manylinux_2_17_aarch64.manylinux2014_aarch64.whl", hash = "sha256:f73307c876af49aa869ec5df1818e9bb0bdcfcf8a5ba773cc45a4fba5a286a5c"},
{file = "h5py-3.7.0-cp310-cp310-win_amd64.whl", hash = "sha256:f514b24cacdd983e61f8d371edac8c1b780c279d0acb8485639e97339c866073"},
{file = "h5py-3.7.0-cp37-cp37m-macosx_10_9_x86_64.whl", hash = "sha256:43fed4d13743cf02798a9a03a360a88e589d81285e72b83f47d37bb64ed44881"},
{file = "h5py-3.7.0-cp37-cp37m-manylinux_2_12_x86_64.manylinux2010_x86_64.whl", hash = "sha256:c038399ce09a58ff8d89ec3e62f00aa7cb82d14f34e24735b920e2a811a3a426"},
{file = "h5py-3.7.0-cp37-cp37m-manylinux_2_17_aarch64.manylinux2014_aarch64.whl", hash = "sha256:03d64fb86bb86b978928bad923b64419a23e836499ec6363e305ad28afd9d287"},
{file = "h5py-3.7.0-cp37-cp37m-win_amd64.whl", hash = "sha256:e5b7820b75f9519499d76cc708e27242ccfdd9dfb511d6deb98701961d0445aa"},
{file = "h5py-3.7.0-cp38-cp38-macosx_10_9_x86_64.whl", hash = "sha256:a9351d729ea754db36d175098361b920573fdad334125f86ac1dd3a083355e20"},
{file = "h5py-3.7.0-cp38-cp38-macosx_11_0_arm64.whl", hash = "sha256:6776d896fb90c5938de8acb925e057e2f9f28755f67ec3edcbc8344832616c38"},
{file = "h5py-3.7.0-cp38-cp38-manylinux_2_12_x86_64.manylinux2010_x86_64.whl", hash = "sha256:0a047fddbe6951bce40e9cde63373c838a978c5e05a011a682db9ba6334b8e85"},
{file = "h5py-3.7.0-cp38-cp38-manylinux_2_17_aarch64.manylinux2014_aarch64.whl", hash = "sha256:0798a9c0ff45f17d0192e4d7114d734cac9f8b2b2c76dd1d923c4d0923f27bb6"},
{file = "h5py-3.7.0-cp38-cp38-win_amd64.whl", hash = "sha256:0d8de8cb619fc597da7cf8cdcbf3b7ff8c5f6db836568afc7dc16d21f59b2b49"},
{file = "h5py-3.7.0-cp39-cp39-macosx_10_9_x86_64.whl", hash = "sha256:f084bbe816907dfe59006756f8f2d16d352faff2d107f4ffeb1d8de126fc5dc7"},
{file = "h5py-3.7.0-cp39-cp39-macosx_11_0_arm64.whl", hash = "sha256:1fcb11a2dc8eb7ddcae08afd8fae02ba10467753a857fa07a404d700a93f3d53"},
{file = "h5py-3.7.0-cp39-cp39-manylinux_2_12_x86_64.manylinux2010_x86_64.whl", hash = "sha256:ed43e2cc4f511756fd664fb45d6b66c3cbed4e3bd0f70e29c37809b2ae013c44"},
{file = "h5py-3.7.0-cp39-cp39-manylinux_2_17_aarch64.manylinux2014_aarch64.whl", hash = "sha256:9e7535df5ee3dc3e5d1f408fdfc0b33b46bc9b34db82743c82cd674d8239b9ad"},
{file = "h5py-3.7.0-cp39-cp39-win_amd64.whl", hash = "sha256:9e2ad2aa000f5b1e73b5dfe22f358ca46bf1a2b6ca394d9659874d7fc251731a"},
{file = "h5py-3.7.0.tar.gz", hash = "sha256:3fcf37884383c5da64846ab510190720027dca0768def34dd8dcb659dbe5cbf3"},
]
heapdict = [
{file = "HeapDict-1.0.1-py3-none-any.whl", hash = "sha256:6065f90933ab1bb7e50db403b90cab653c853690c5992e69294c2de2b253fc92"},
{file = "HeapDict-1.0.1.tar.gz", hash = "sha256:8495f57b3e03d8e46d5f1b2cc62ca881aca392fd5cc048dc0aa2e1a6d23ecdb6"},
]
idna = [
{file = "idna-3.4-py3-none-any.whl", hash = "sha256:90b77e79eaa3eba6de819a0c442c0b4ceefc341a7a2ab77d7562bf49f425c5c2"},
{file = "idna-3.4.tar.gz", hash = "sha256:814f528e8dead7d329833b91c5faa87d60bf71824cd12a7530b5526063d02cb4"},
]
imagesize = [
{file = "imagesize-1.4.1-py2.py3-none-any.whl", hash = "sha256:0d8d18d08f840c19d0ee7ca1fd82490fdc3729b7ac93f49870406ddde8ef8d8b"},
{file = "imagesize-1.4.1.tar.gz", hash = "sha256:69150444affb9cb0d5cc5a92b3676f0b2fb7cd9ae39e947a5e11a36b4497cd4a"},
]
importlib-metadata = [
{file = "importlib_metadata-5.0.0-py3-none-any.whl", hash = "sha256:ddb0e35065e8938f867ed4928d0ae5bf2a53b7773871bfe6bcc7e4fcdc7dea43"},
{file = "importlib_metadata-5.0.0.tar.gz", hash = "sha256:da31db32b304314d044d3c12c79bd59e307889b287ad12ff387b3500835fc2ab"},
]
importlib-resources = [
{file = "importlib_resources-5.10.0-py3-none-any.whl", hash = "sha256:ee17ec648f85480d523596ce49eae8ead87d5631ae1551f913c0100b5edd3437"},
{file = "importlib_resources-5.10.0.tar.gz", hash = "sha256:c01b1b94210d9849f286b86bb51bcea7cd56dde0600d8db721d7b81330711668"},
]
iniconfig = [
{file = "iniconfig-1.1.1-py2.py3-none-any.whl", hash = "sha256:011e24c64b7f47f6ebd835bb12a743f2fbe9a26d4cecaa7f53bc4f35ee9da8b3"},
{file = "iniconfig-1.1.1.tar.gz", hash = "sha256:bc3af051d7d14b2ee5ef9969666def0cd1a000e121eaea580d4a313df4b37f32"},
]
ipykernel = [
{file = "ipykernel-6.17.1-py3-none-any.whl", hash = "sha256:3a9a1b2ad6dbbd5879855aabb4557f08e63fa2208bffed897f03070e2bb436f6"},
{file = "ipykernel-6.17.1.tar.gz", hash = "sha256:e178c1788399f93a459c241fe07c3b810771c607b1fb064a99d2c5d40c90c5d4"},
]
ipython = [
{file = "ipython-8.6.0-py3-none-any.whl", hash = "sha256:91ef03016bcf72dd17190f863476e7c799c6126ec7e8be97719d1bc9a78a59a4"},
{file = "ipython-8.6.0.tar.gz", hash = "sha256:7c959e3dedbf7ed81f9b9d8833df252c430610e2a4a6464ec13cd20975ce20a5"},
]
ipython-genutils = [
{file = "ipython_genutils-0.2.0-py2.py3-none-any.whl", hash = "sha256:72dd37233799e619666c9f639a9da83c34013a73e8bbc79a7a6348d93c61fab8"},
{file = "ipython_genutils-0.2.0.tar.gz", hash = "sha256:eb2e116e75ecef9d4d228fdc66af54269afa26ab4463042e33785b887c628ba8"},
]
ipywidgets = [
{file = "ipywidgets-8.0.2-py3-none-any.whl", hash = "sha256:1dc3dd4ee19ded045ea7c86eb273033d238d8e43f9e7872c52d092683f263891"},
{file = "ipywidgets-8.0.2.tar.gz", hash = "sha256:08cb75c6e0a96836147cbfdc55580ae04d13e05d26ffbc377b4e1c68baa28b1f"},
]
isort = [
{file = "isort-5.10.1-py3-none-any.whl", hash = "sha256:6f62d78e2f89b4500b080fe3a81690850cd254227f27f75c3a0c491a1f351ba7"},
{file = "isort-5.10.1.tar.gz", hash = "sha256:e8443a5e7a020e9d7f97f1d7d9cd17c88bcb3bc7e218bf9cf5095fe550be2951"},
]
jedi = [
{file = "jedi-0.18.2-py2.py3-none-any.whl", hash = "sha256:203c1fd9d969ab8f2119ec0a3342e0b49910045abe6af0a3ae83a5764d54639e"},
{file = "jedi-0.18.2.tar.gz", hash = "sha256:bae794c30d07f6d910d32a7048af09b5a39ed740918da923c6b780790ebac612"},
]
jinja2 = [
{file = "Jinja2-3.1.2-py3-none-any.whl", hash = "sha256:6088930bfe239f0e6710546ab9c19c9ef35e29792895fed6e6e31a023a182a61"},
{file = "Jinja2-3.1.2.tar.gz", hash = "sha256:31351a702a408a9e7595a8fc6150fc3f43bb6bf7e319770cbc0db9df9437e852"},
]
jmespath = [
{file = "jmespath-1.0.1-py3-none-any.whl", hash = "sha256:02e2e4cc71b5bcab88332eebf907519190dd9e6e82107fa7f83b1003a6252980"},
{file = "jmespath-1.0.1.tar.gz", hash = "sha256:90261b206d6defd58fdd5e85f478bf633a2901798906be2ad389150c5c60edbe"},
]
joblib = [
{file = "joblib-1.2.0-py3-none-any.whl", hash = "sha256:091138ed78f800342968c523bdde947e7a305b8594b910a0fea2ab83c3c6d385"},
{file = "joblib-1.2.0.tar.gz", hash = "sha256:e1cee4a79e4af22881164f218d4311f60074197fb707e082e803b61f6d137018"},
]
jsonschema = [
{file = "jsonschema-4.17.1-py3-none-any.whl", hash = "sha256:410ef23dcdbca4eaedc08b850079179883c2ed09378bd1f760d4af4aacfa28d7"},
{file = "jsonschema-4.17.1.tar.gz", hash = "sha256:05b2d22c83640cde0b7e0aa329ca7754fbd98ea66ad8ae24aa61328dfe057fa3"},
]
jupyter = [
{file = "jupyter-1.0.0-py2.py3-none-any.whl", hash = "sha256:5b290f93b98ffbc21c0c7e749f054b3267782166d72fa5e3ed1ed4eaf34a2b78"},
{file = "jupyter-1.0.0.tar.gz", hash = "sha256:d9dc4b3318f310e34c82951ea5d6683f67bed7def4b259fafbfe4f1beb1d8e5f"},
{file = "jupyter-1.0.0.zip", hash = "sha256:3e1f86076bbb7c8c207829390305a2b1fe836d471ed54be66a3b8c41e7f46cc7"},
]
jupyter-client = [
{file = "jupyter_client-7.4.7-py3-none-any.whl", hash = "sha256:df56ae23b8e1da1b66f89dee1368e948b24a7f780fa822c5735187589fc4c157"},
{file = "jupyter_client-7.4.7.tar.gz", hash = "sha256:330f6b627e0b4bf2f54a3a0dd9e4a22d2b649c8518168afedce2c96a1ceb2860"},
]
jupyter-console = [
{file = "jupyter_console-6.4.4-py3-none-any.whl", hash = "sha256:756df7f4f60c986e7bc0172e4493d3830a7e6e75c08750bbe59c0a5403ad6dee"},
{file = "jupyter_console-6.4.4.tar.gz", hash = "sha256:172f5335e31d600df61613a97b7f0352f2c8250bbd1092ef2d658f77249f89fb"},
]
jupyter-core = [
{file = "jupyter_core-5.0.0-py3-none-any.whl", hash = "sha256:6da1fae48190da8551e1b5dbbb19d51d00b079d59a073c7030407ecaf96dbb1e"},
{file = "jupyter_core-5.0.0.tar.gz", hash = "sha256:4ed68b7c606197c7e344a24b7195eef57898157075a69655a886074b6beb7043"},
]
jupyter-server = [
{file = "jupyter_server-1.23.3-py3-none-any.whl", hash = "sha256:438496cac509709cc85e60172e5538ca45b4c8a0862bb97cd73e49f2ace419cb"},
{file = "jupyter_server-1.23.3.tar.gz", hash = "sha256:f7f7a2f9d36f4150ad125afef0e20b1c76c8ff83eb5e39fb02d3b9df0f9b79ab"},
]
jupyterlab-pygments = [
{file = "jupyterlab_pygments-0.2.2-py2.py3-none-any.whl", hash = "sha256:2405800db07c9f770863bcf8049a529c3dd4d3e28536638bd7c1c01d2748309f"},
{file = "jupyterlab_pygments-0.2.2.tar.gz", hash = "sha256:7405d7fde60819d905a9fa8ce89e4cd830e318cdad22a0030f7a901da705585d"},
]
jupyterlab-widgets = [
{file = "jupyterlab_widgets-3.0.3-py3-none-any.whl", hash = "sha256:6aa1bc0045470d54d76b9c0b7609a8f8f0087573bae25700a370c11f82cb38c8"},
{file = "jupyterlab_widgets-3.0.3.tar.gz", hash = "sha256:c767181399b4ca8b647befe2d913b1260f51bf9d8ef9b7a14632d4c1a7b536bd"},
]
keras = [
{file = "keras-2.11.0-py2.py3-none-any.whl", hash = "sha256:38c6fff0ea9a8b06a2717736565c92a73c8cd9b1c239e7125ccb188b7848f65e"},
]
kiwisolver = [
{file = "kiwisolver-1.4.4-cp310-cp310-macosx_10_9_universal2.whl", hash = "sha256:2f5e60fabb7343a836360c4f0919b8cd0d6dbf08ad2ca6b9cf90bf0c76a3c4f6"},
{file = "kiwisolver-1.4.4-cp310-cp310-macosx_10_9_x86_64.whl", hash = "sha256:10ee06759482c78bdb864f4109886dff7b8a56529bc1609d4f1112b93fe6423c"},
{file = "kiwisolver-1.4.4-cp310-cp310-macosx_11_0_arm64.whl", hash = "sha256:c79ebe8f3676a4c6630fd3f777f3cfecf9289666c84e775a67d1d358578dc2e3"},
{file = "kiwisolver-1.4.4-cp310-cp310-manylinux_2_12_i686.manylinux2010_i686.whl", hash = "sha256:abbe9fa13da955feb8202e215c4018f4bb57469b1b78c7a4c5c7b93001699938"},
{file = "kiwisolver-1.4.4-cp310-cp310-manylinux_2_12_x86_64.manylinux2010_x86_64.whl", hash = "sha256:7577c1987baa3adc4b3c62c33bd1118c3ef5c8ddef36f0f2c950ae0b199e100d"},
{file = "kiwisolver-1.4.4-cp310-cp310-manylinux_2_17_aarch64.manylinux2014_aarch64.whl", hash = "sha256:f8ad8285b01b0d4695102546b342b493b3ccc6781fc28c8c6a1bb63e95d22f09"},
{file = "kiwisolver-1.4.4-cp310-cp310-manylinux_2_17_ppc64le.manylinux2014_ppc64le.whl", hash = "sha256:8ed58b8acf29798b036d347791141767ccf65eee7f26bde03a71c944449e53de"},
{file = "kiwisolver-1.4.4-cp310-cp310-manylinux_2_17_s390x.manylinux2014_s390x.whl", hash = "sha256:a68b62a02953b9841730db7797422f983935aeefceb1679f0fc85cbfbd311c32"},
{file = "kiwisolver-1.4.4-cp310-cp310-win32.whl", hash = "sha256:e92a513161077b53447160b9bd8f522edfbed4bd9759e4c18ab05d7ef7e49408"},
{file = "kiwisolver-1.4.4-cp310-cp310-win_amd64.whl", hash = "sha256:3fe20f63c9ecee44560d0e7f116b3a747a5d7203376abeea292ab3152334d004"},
{file = "kiwisolver-1.4.4-cp311-cp311-macosx_10_9_universal2.whl", hash = "sha256:e0ea21f66820452a3f5d1655f8704a60d66ba1191359b96541eaf457710a5fc6"},
{file = "kiwisolver-1.4.4-cp311-cp311-macosx_10_9_x86_64.whl", hash = "sha256:bc9db8a3efb3e403e4ecc6cd9489ea2bac94244f80c78e27c31dcc00d2790ac2"},
{file = "kiwisolver-1.4.4-cp311-cp311-macosx_11_0_arm64.whl", hash = "sha256:d5b61785a9ce44e5a4b880272baa7cf6c8f48a5180c3e81c59553ba0cb0821ca"},
{file = "kiwisolver-1.4.4-cp311-cp311-manylinux_2_12_i686.manylinux2010_i686.manylinux_2_17_i686.manylinux2014_i686.whl", hash = "sha256:c2dbb44c3f7e6c4d3487b31037b1bdbf424d97687c1747ce4ff2895795c9bf69"},
{file = "kiwisolver-1.4.4-cp311-cp311-manylinux_2_17_aarch64.manylinux2014_aarch64.whl", hash = "sha256:6295ecd49304dcf3bfbfa45d9a081c96509e95f4b9d0eb7ee4ec0530c4a96514"},
{file = "kiwisolver-1.4.4-cp311-cp311-manylinux_2_17_ppc64le.manylinux2014_ppc64le.whl", hash = "sha256:4bd472dbe5e136f96a4b18f295d159d7f26fd399136f5b17b08c4e5f498cd494"},
{file = "kiwisolver-1.4.4-cp311-cp311-manylinux_2_17_s390x.manylinux2014_s390x.whl", hash = "sha256:bf7d9fce9bcc4752ca4a1b80aabd38f6d19009ea5cbda0e0856983cf6d0023f5"},
{file = "kiwisolver-1.4.4-cp311-cp311-manylinux_2_17_x86_64.manylinux2014_x86_64.whl", hash = "sha256:78d6601aed50c74e0ef02f4204da1816147a6d3fbdc8b3872d263338a9052c51"},
{file = "kiwisolver-1.4.4-cp311-cp311-musllinux_1_1_aarch64.whl", hash = "sha256:877272cf6b4b7e94c9614f9b10140e198d2186363728ed0f701c6eee1baec1da"},
{file = "kiwisolver-1.4.4-cp311-cp311-musllinux_1_1_i686.whl", hash = "sha256:db608a6757adabb32f1cfe6066e39b3706d8c3aa69bbc353a5b61edad36a5cb4"},
{file = "kiwisolver-1.4.4-cp311-cp311-musllinux_1_1_ppc64le.whl", hash = "sha256:5853eb494c71e267912275e5586fe281444eb5e722de4e131cddf9d442615626"},
{file = "kiwisolver-1.4.4-cp311-cp311-musllinux_1_1_s390x.whl", hash = "sha256:f0a1dbdb5ecbef0d34eb77e56fcb3e95bbd7e50835d9782a45df81cc46949750"},
{file = "kiwisolver-1.4.4-cp311-cp311-musllinux_1_1_x86_64.whl", hash = "sha256:283dffbf061a4ec60391d51e6155e372a1f7a4f5b15d59c8505339454f8989e4"},
{file = "kiwisolver-1.4.4-cp311-cp311-win32.whl", hash = "sha256:d06adcfa62a4431d404c31216f0f8ac97397d799cd53800e9d3efc2fbb3cf14e"},
{file = "kiwisolver-1.4.4-cp311-cp311-win_amd64.whl", hash = "sha256:e7da3fec7408813a7cebc9e4ec55afed2d0fd65c4754bc376bf03498d4e92686"},
{file = "kiwisolver-1.4.4-cp37-cp37m-macosx_10_9_x86_64.whl", hash = "sha256:62ac9cc684da4cf1778d07a89bf5f81b35834cb96ca523d3a7fb32509380cbf6"},
{file = "kiwisolver-1.4.4-cp37-cp37m-manylinux_2_17_aarch64.manylinux2014_aarch64.whl", hash = "sha256:41dae968a94b1ef1897cb322b39360a0812661dba7c682aa45098eb8e193dbdf"},
{file = "kiwisolver-1.4.4-cp37-cp37m-manylinux_2_17_ppc64le.manylinux2014_ppc64le.whl", hash = "sha256:02f79693ec433cb4b5f51694e8477ae83b3205768a6fb48ffba60549080e295b"},
{file = "kiwisolver-1.4.4-cp37-cp37m-manylinux_2_17_s390x.manylinux2014_s390x.whl", hash = "sha256:d0611a0a2a518464c05ddd5a3a1a0e856ccc10e67079bb17f265ad19ab3c7597"},
{file = "kiwisolver-1.4.4-cp37-cp37m-manylinux_2_5_i686.manylinux1_i686.whl", hash = "sha256:db5283d90da4174865d520e7366801a93777201e91e79bacbac6e6927cbceede"},
{file = "kiwisolver-1.4.4-cp37-cp37m-manylinux_2_5_x86_64.manylinux1_x86_64.whl", hash = "sha256:1041feb4cda8708ce73bb4dcb9ce1ccf49d553bf87c3954bdfa46f0c3f77252c"},
{file = "kiwisolver-1.4.4-cp37-cp37m-win32.whl", hash = "sha256:a553dadda40fef6bfa1456dc4be49b113aa92c2a9a9e8711e955618cd69622e3"},
{file = "kiwisolver-1.4.4-cp37-cp37m-win_amd64.whl", hash = "sha256:03baab2d6b4a54ddbb43bba1a3a2d1627e82d205c5cf8f4c924dc49284b87166"},
{file = "kiwisolver-1.4.4-cp38-cp38-macosx_10_9_universal2.whl", hash = "sha256:841293b17ad704d70c578f1f0013c890e219952169ce8a24ebc063eecf775454"},
{file = "kiwisolver-1.4.4-cp38-cp38-macosx_10_9_x86_64.whl", hash = "sha256:f4f270de01dd3e129a72efad823da90cc4d6aafb64c410c9033aba70db9f1ff0"},
{file = "kiwisolver-1.4.4-cp38-cp38-macosx_11_0_arm64.whl", hash = "sha256:f9f39e2f049db33a908319cf46624a569b36983c7c78318e9726a4cb8923b26c"},
{file = "kiwisolver-1.4.4-cp38-cp38-manylinux_2_17_aarch64.manylinux2014_aarch64.whl", hash = "sha256:c97528e64cb9ebeff9701e7938653a9951922f2a38bd847787d4a8e498cc83ae"},
{file = "kiwisolver-1.4.4-cp38-cp38-manylinux_2_17_ppc64le.manylinux2014_ppc64le.whl", hash = "sha256:1d1573129aa0fd901076e2bfb4275a35f5b7aa60fbfb984499d661ec950320b0"},
{file = "kiwisolver-1.4.4-cp38-cp38-manylinux_2_17_s390x.manylinux2014_s390x.whl", hash = "sha256:ad881edc7ccb9d65b0224f4e4d05a1e85cf62d73aab798943df6d48ab0cd79a1"},
{file = "kiwisolver-1.4.4-cp38-cp38-manylinux_2_5_i686.manylinux1_i686.whl", hash = "sha256:b428ef021242344340460fa4c9185d0b1f66fbdbfecc6c63eff4b7c29fad429d"},
{file = "kiwisolver-1.4.4-cp38-cp38-manylinux_2_5_x86_64.manylinux1_x86_64.whl", hash = "sha256:2e407cb4bd5a13984a6c2c0fe1845e4e41e96f183e5e5cd4d77a857d9693494c"},
{file = "kiwisolver-1.4.4-cp38-cp38-win32.whl", hash = "sha256:75facbe9606748f43428fc91a43edb46c7ff68889b91fa31f53b58894503a191"},
{file = "kiwisolver-1.4.4-cp38-cp38-win_amd64.whl", hash = "sha256:5bce61af018b0cb2055e0e72e7d65290d822d3feee430b7b8203d8a855e78766"},
{file = "kiwisolver-1.4.4-cp39-cp39-macosx_10_9_universal2.whl", hash = "sha256:8c808594c88a025d4e322d5bb549282c93c8e1ba71b790f539567932722d7bd8"},
{file = "kiwisolver-1.4.4-cp39-cp39-macosx_10_9_x86_64.whl", hash = "sha256:f0a71d85ecdd570ded8ac3d1c0f480842f49a40beb423bb8014539a9f32a5897"},
{file = "kiwisolver-1.4.4-cp39-cp39-macosx_11_0_arm64.whl", hash = "sha256:b533558eae785e33e8c148a8d9921692a9fe5aa516efbdff8606e7d87b9d5824"},
{file = "kiwisolver-1.4.4-cp39-cp39-manylinux_2_12_i686.manylinux2010_i686.whl", hash = "sha256:efda5fc8cc1c61e4f639b8067d118e742b812c930f708e6667a5ce0d13499e29"},
{file = "kiwisolver-1.4.4-cp39-cp39-manylinux_2_12_x86_64.manylinux2010_x86_64.whl", hash = "sha256:7c43e1e1206cd421cd92e6b3280d4385d41d7166b3ed577ac20444b6995a445f"},
{file = "kiwisolver-1.4.4-cp39-cp39-manylinux_2_17_aarch64.manylinux2014_aarch64.whl", hash = "sha256:bc8d3bd6c72b2dd9decf16ce70e20abcb3274ba01b4e1c96031e0c4067d1e7cd"},
{file = "kiwisolver-1.4.4-cp39-cp39-manylinux_2_17_ppc64le.manylinux2014_ppc64le.whl", hash = "sha256:4ea39b0ccc4f5d803e3337dd46bcce60b702be4d86fd0b3d7531ef10fd99a1ac"},
{file = "kiwisolver-1.4.4-cp39-cp39-manylinux_2_17_s390x.manylinux2014_s390x.whl", hash = "sha256:968f44fdbf6dd757d12920d63b566eeb4d5b395fd2d00d29d7ef00a00582aac9"},
{file = "kiwisolver-1.4.4-cp39-cp39-win32.whl", hash = "sha256:da7e547706e69e45d95e116e6939488d62174e033b763ab1496b4c29b76fabea"},
{file = "kiwisolver-1.4.4-cp39-cp39-win_amd64.whl", hash = "sha256:ba59c92039ec0a66103b1d5fe588fa546373587a7d68f5c96f743c3396afc04b"},
{file = "kiwisolver-1.4.4-pp37-pypy37_pp73-manylinux_2_12_i686.manylinux2010_i686.whl", hash = "sha256:91672bacaa030f92fc2f43b620d7b337fd9a5af28b0d6ed3f77afc43c4a64b5a"},
{file = "kiwisolver-1.4.4-pp37-pypy37_pp73-manylinux_2_12_x86_64.manylinux2010_x86_64.whl", hash = "sha256:787518a6789009c159453da4d6b683f468ef7a65bbde796bcea803ccf191058d"},
{file = "kiwisolver-1.4.4-pp37-pypy37_pp73-manylinux_2_17_aarch64.manylinux2014_aarch64.whl", hash = "sha256:da152d8cdcab0e56e4f45eb08b9aea6455845ec83172092f09b0e077ece2cf7a"},
{file = "kiwisolver-1.4.4-pp37-pypy37_pp73-win_amd64.whl", hash = "sha256:ecb1fa0db7bf4cff9dac752abb19505a233c7f16684c5826d1f11ebd9472b871"},
{file = "kiwisolver-1.4.4-pp38-pypy38_pp73-macosx_10_9_x86_64.whl", hash = "sha256:28bc5b299f48150b5f822ce68624e445040595a4ac3d59251703779836eceff9"},
{file = "kiwisolver-1.4.4-pp38-pypy38_pp73-manylinux_2_12_i686.manylinux2010_i686.whl", hash = "sha256:81e38381b782cc7e1e46c4e14cd997ee6040768101aefc8fa3c24a4cc58e98f8"},
{file = "kiwisolver-1.4.4-pp38-pypy38_pp73-manylinux_2_12_x86_64.manylinux2010_x86_64.whl", hash = "sha256:2a66fdfb34e05b705620dd567f5a03f239a088d5a3f321e7b6ac3239d22aa286"},
{file = "kiwisolver-1.4.4-pp38-pypy38_pp73-manylinux_2_17_aarch64.manylinux2014_aarch64.whl", hash = "sha256:872b8ca05c40d309ed13eb2e582cab0c5a05e81e987ab9c521bf05ad1d5cf5cb"},
{file = "kiwisolver-1.4.4-pp38-pypy38_pp73-win_amd64.whl", hash = "sha256:70e7c2e7b750585569564e2e5ca9845acfaa5da56ac46df68414f29fea97be9f"},
{file = "kiwisolver-1.4.4-pp39-pypy39_pp73-macosx_10_9_x86_64.whl", hash = "sha256:9f85003f5dfa867e86d53fac6f7e6f30c045673fa27b603c397753bebadc3008"},
{file = "kiwisolver-1.4.4-pp39-pypy39_pp73-manylinux_2_12_i686.manylinux2010_i686.manylinux_2_17_i686.manylinux2014_i686.whl", hash = "sha256:2e307eb9bd99801f82789b44bb45e9f541961831c7311521b13a6c85afc09767"},
{file = "kiwisolver-1.4.4-pp39-pypy39_pp73-manylinux_2_17_aarch64.manylinux2014_aarch64.whl", hash = "sha256:b1792d939ec70abe76f5054d3f36ed5656021dcad1322d1cc996d4e54165cef9"},
{file = "kiwisolver-1.4.4-pp39-pypy39_pp73-manylinux_2_17_x86_64.manylinux2014_x86_64.whl", hash = "sha256:f6cb459eea32a4e2cf18ba5fcece2dbdf496384413bc1bae15583f19e567f3b2"},
{file = "kiwisolver-1.4.4-pp39-pypy39_pp73-win_amd64.whl", hash = "sha256:36dafec3d6d6088d34e2de6b85f9d8e2324eb734162fba59d2ba9ed7a2043d5b"},
{file = "kiwisolver-1.4.4.tar.gz", hash = "sha256:d41997519fcba4a1e46eb4a2fe31bc12f0ff957b2b81bac28db24744f333e955"},
]
langcodes = [
{file = "langcodes-3.3.0-py3-none-any.whl", hash = "sha256:4d89fc9acb6e9c8fdef70bcdf376113a3db09b67285d9e1d534de6d8818e7e69"},
{file = "langcodes-3.3.0.tar.gz", hash = "sha256:794d07d5a28781231ac335a1561b8442f8648ca07cd518310aeb45d6f0807ef6"},
]
libclang = [
{file = "libclang-14.0.6-py2.py3-none-macosx_10_9_x86_64.whl", hash = "sha256:8791cf3c3b087c373a6d61e9199da7a541da922c9ddcfed1122090586b996d6e"},
{file = "libclang-14.0.6-py2.py3-none-macosx_11_0_arm64.whl", hash = "sha256:7b06fc76bd1e67c8b04b5719bf2ac5d6a323b289b245dfa9e468561d99538188"},
{file = "libclang-14.0.6-py2.py3-none-manylinux1_x86_64.whl", hash = "sha256:e429853939423f276a25140b0b702442d7da9a09e001c05e48df888336947614"},
{file = "libclang-14.0.6-py2.py3-none-manylinux2010_x86_64.whl", hash = "sha256:206d2789e4450a37d054e63b70451a6fc1873466397443fa13de2b3d4adb2796"},
{file = "libclang-14.0.6-py2.py3-none-manylinux2014_aarch64.whl", hash = "sha256:e2add1703129b2abe066fb1890afa880870a89fd6ab4ec5d2a7a8dc8d271677e"},
{file = "libclang-14.0.6-py2.py3-none-manylinux2014_armv7l.whl", hash = "sha256:5dd3c6fca1b007d308a4114afa8e4e9d32f32b2572520701d45fcc626ac5cd6c"},
{file = "libclang-14.0.6-py2.py3-none-musllinux_1_2_x86_64.whl", hash = "sha256:cfb0e892ebb5dff6bd498ab5778adb8581f26a00fd8347b3c76c989fe2fd04f7"},
{file = "libclang-14.0.6-py2.py3-none-win_amd64.whl", hash = "sha256:ea03c12675151837660cdd5dce65bd89320896ac3421efef43a36678f113ce95"},
{file = "libclang-14.0.6-py2.py3-none-win_arm64.whl", hash = "sha256:2e4303e04517fcd11173cb2e51a7070eed71e16ef45d4e26a82c5e881cac3d27"},
{file = "libclang-14.0.6.tar.gz", hash = "sha256:9052a8284d8846984f6fa826b1d7460a66d3b23a486d782633b42b6e3b418789"},
]
lightgbm = [
{file = "lightgbm-3.3.3-py3-none-macosx_10_15_x86_64.macosx_11_6_x86_64.macosx_12_0_x86_64.whl", hash = "sha256:27b0ae82549d6c59ede4fa3245f4b21a6bf71ab5ec5c55601cf5a962a18c6f80"},
{file = "lightgbm-3.3.3-py3-none-manylinux1_x86_64.whl", hash = "sha256:389edda68b7f24a1755a6af4dad06e16236e374e9de64253a105b12982b153e2"},
{file = "lightgbm-3.3.3-py3-none-manylinux2014_aarch64.whl", hash = "sha256:b0af55bd476785726eaacbd3c880f8168d362d4bba098790f55cd10fe928591b"},
{file = "lightgbm-3.3.3-py3-none-win_amd64.whl", hash = "sha256:b334dbcd670e3d87f4ff3cfe31d652ab18eb88ad9092a02010916320549b7d10"},
{file = "lightgbm-3.3.3.tar.gz", hash = "sha256:857e559ae84a22963ce2b62168292969d21add30bc9246a84d4e7eedae67966d"},
]
llvmlite = [
{file = "llvmlite-0.36.0-cp36-cp36m-macosx_10_9_x86_64.whl", hash = "sha256:cc0f9b9644b4ab0e4a5edb17f1531d791630c88858220d3cc688d6edf10da100"},
{file = "llvmlite-0.36.0-cp36-cp36m-manylinux2010_i686.whl", hash = "sha256:f7918dbac02b1ebbfd7302ad8e8307d7877ab57d782d5f04b70ff9696b53c21b"},
{file = "llvmlite-0.36.0-cp36-cp36m-manylinux2010_x86_64.whl", hash = "sha256:7768658646c418b9b3beccb7044277a608bc8c62b82a85e73c7e5c065e4157c2"},
{file = "llvmlite-0.36.0-cp36-cp36m-win32.whl", hash = "sha256:05f807209a360d39526d98141b6f281b9c7c771c77a4d1fc22002440642c8de2"},
{file = "llvmlite-0.36.0-cp36-cp36m-win_amd64.whl", hash = "sha256:d1fdd63c371626c25ad834e1c6297eb76cf2f093a40dbb401a87b6476ab4e34e"},
{file = "llvmlite-0.36.0-cp37-cp37m-macosx_10_9_x86_64.whl", hash = "sha256:7c4e7066447305d5095d0b0a9cae7b835d2f0fde143456b3124110eab0856426"},
{file = "llvmlite-0.36.0-cp37-cp37m-manylinux2010_i686.whl", hash = "sha256:9dad7e4bb042492914292aea3f4172eca84db731f9478250240955aedba95e08"},
{file = "llvmlite-0.36.0-cp37-cp37m-manylinux2010_x86_64.whl", hash = "sha256:1ce5bc0a638d874a08d4222be0a7e48e5df305d094c2ff8dec525ef32b581551"},
{file = "llvmlite-0.36.0-cp37-cp37m-win32.whl", hash = "sha256:dbedff0f6d417b374253a6bab39aa4b5364f1caab30c06ba8726904776fcf1cb"},
{file = "llvmlite-0.36.0-cp37-cp37m-win_amd64.whl", hash = "sha256:3b17fc4b0dd17bd29d7297d054e2915fad535889907c3f65232ee21f483447c5"},
{file = "llvmlite-0.36.0-cp38-cp38-macosx_10_9_x86_64.whl", hash = "sha256:b3a77e46e6053e2a86e607e87b97651dda81e619febb914824a927bff4e88737"},
{file = "llvmlite-0.36.0-cp38-cp38-manylinux2010_i686.whl", hash = "sha256:048a7c117641c9be87b90005684e64a6f33ea0897ebab1df8a01214a10d6e79a"},
{file = "llvmlite-0.36.0-cp38-cp38-manylinux2010_x86_64.whl", hash = "sha256:7db4b0eef93125af1c4092c64a3c73c7dc904101117ef53f8d78a1a499b8d5f4"},
{file = "llvmlite-0.36.0-cp38-cp38-win32.whl", hash = "sha256:50b1828bde514b31431b2bba1aa20b387f5625b81ad6e12fede430a04645e47a"},
{file = "llvmlite-0.36.0-cp38-cp38-win_amd64.whl", hash = "sha256:f608bae781b2d343e15e080c546468c5a6f35f57f0446923ea198dd21f23757e"},
{file = "llvmlite-0.36.0-cp39-cp39-macosx_10_9_x86_64.whl", hash = "sha256:6a3abc8a8889aeb06bf9c4a7e5df5bc7bb1aa0aedd91a599813809abeec80b5a"},
{file = "llvmlite-0.36.0-cp39-cp39-manylinux2010_i686.whl", hash = "sha256:705f0323d931684428bb3451549603299bb5e17dd60fb979d67c3807de0debc1"},
{file = "llvmlite-0.36.0-cp39-cp39-manylinux2010_x86_64.whl", hash = "sha256:5a6548b4899facb182145147185e9166c69826fb424895f227e6b7cf924a8da1"},
{file = "llvmlite-0.36.0-cp39-cp39-win32.whl", hash = "sha256:ff52fb9c2be66b95b0e67d56fce11038397e5be1ea410ee53f5f1175fdbb107a"},
{file = "llvmlite-0.36.0-cp39-cp39-win_amd64.whl", hash = "sha256:1dee416ea49fd338c74ec15c0c013e5273b0961528169af06ff90772614f7f6c"},
{file = "llvmlite-0.36.0.tar.gz", hash = "sha256:765128fdf5f149ed0b889ffbe2b05eb1717f8e20a5c87fa2b4018fbcce0fcfc9"},
]
locket = [
{file = "locket-1.0.0-py2.py3-none-any.whl", hash = "sha256:b6c819a722f7b6bd955b80781788e4a66a55628b858d347536b7e81325a3a5e3"},
{file = "locket-1.0.0.tar.gz", hash = "sha256:5c0d4c052a8bbbf750e056a8e65ccd309086f4f0f18a2eac306a8dfa4112a632"},
]
markdown = [
{file = "Markdown-3.4.1-py3-none-any.whl", hash = "sha256:08fb8465cffd03d10b9dd34a5c3fea908e20391a2a90b88d66362cb05beed186"},
{file = "Markdown-3.4.1.tar.gz", hash = "sha256:3b809086bb6efad416156e00a0da66fe47618a5d6918dd688f53f40c8e4cfeff"},
]
markupsafe = [
{file = "MarkupSafe-2.1.1-cp310-cp310-macosx_10_9_universal2.whl", hash = "sha256:86b1f75c4e7c2ac2ccdaec2b9022845dbb81880ca318bb7a0a01fbf7813e3812"},
{file = "MarkupSafe-2.1.1-cp310-cp310-macosx_10_9_x86_64.whl", hash = "sha256:f121a1420d4e173a5d96e47e9a0c0dcff965afdf1626d28de1460815f7c4ee7a"},
{file = "MarkupSafe-2.1.1-cp310-cp310-manylinux_2_17_aarch64.manylinux2014_aarch64.whl", hash = "sha256:a49907dd8420c5685cfa064a1335b6754b74541bbb3706c259c02ed65b644b3e"},
{file = "MarkupSafe-2.1.1-cp310-cp310-manylinux_2_17_x86_64.manylinux2014_x86_64.whl", hash = "sha256:10c1bfff05d95783da83491be968e8fe789263689c02724e0c691933c52994f5"},
{file = "MarkupSafe-2.1.1-cp310-cp310-manylinux_2_5_i686.manylinux1_i686.manylinux_2_17_i686.manylinux2014_i686.whl", hash = "sha256:b7bd98b796e2b6553da7225aeb61f447f80a1ca64f41d83612e6139ca5213aa4"},
{file = "MarkupSafe-2.1.1-cp310-cp310-musllinux_1_1_aarch64.whl", hash = "sha256:b09bf97215625a311f669476f44b8b318b075847b49316d3e28c08e41a7a573f"},
{file = "MarkupSafe-2.1.1-cp310-cp310-musllinux_1_1_i686.whl", hash = "sha256:694deca8d702d5db21ec83983ce0bb4b26a578e71fbdbd4fdcd387daa90e4d5e"},
{file = "MarkupSafe-2.1.1-cp310-cp310-musllinux_1_1_x86_64.whl", hash = "sha256:efc1913fd2ca4f334418481c7e595c00aad186563bbc1ec76067848c7ca0a933"},
{file = "MarkupSafe-2.1.1-cp310-cp310-win32.whl", hash = "sha256:4a33dea2b688b3190ee12bd7cfa29d39c9ed176bda40bfa11099a3ce5d3a7ac6"},
{file = "MarkupSafe-2.1.1-cp310-cp310-win_amd64.whl", hash = "sha256:dda30ba7e87fbbb7eab1ec9f58678558fd9a6b8b853530e176eabd064da81417"},
{file = "MarkupSafe-2.1.1-cp37-cp37m-macosx_10_9_x86_64.whl", hash = "sha256:671cd1187ed5e62818414afe79ed29da836dde67166a9fac6d435873c44fdd02"},
{file = "MarkupSafe-2.1.1-cp37-cp37m-manylinux_2_17_aarch64.manylinux2014_aarch64.whl", hash = "sha256:3799351e2336dc91ea70b034983ee71cf2f9533cdff7c14c90ea126bfd95d65a"},
{file = "MarkupSafe-2.1.1-cp37-cp37m-manylinux_2_17_x86_64.manylinux2014_x86_64.whl", hash = "sha256:e72591e9ecd94d7feb70c1cbd7be7b3ebea3f548870aa91e2732960fa4d57a37"},
{file = "MarkupSafe-2.1.1-cp37-cp37m-manylinux_2_5_i686.manylinux1_i686.manylinux_2_17_i686.manylinux2014_i686.whl", hash = "sha256:6fbf47b5d3728c6aea2abb0589b5d30459e369baa772e0f37a0320185e87c980"},
{file = "MarkupSafe-2.1.1-cp37-cp37m-musllinux_1_1_aarch64.whl", hash = "sha256:d5ee4f386140395a2c818d149221149c54849dfcfcb9f1debfe07a8b8bd63f9a"},
{file = "MarkupSafe-2.1.1-cp37-cp37m-musllinux_1_1_i686.whl", hash = "sha256:bcb3ed405ed3222f9904899563d6fc492ff75cce56cba05e32eff40e6acbeaa3"},
{file = "MarkupSafe-2.1.1-cp37-cp37m-musllinux_1_1_x86_64.whl", hash = "sha256:e1c0b87e09fa55a220f058d1d49d3fb8df88fbfab58558f1198e08c1e1de842a"},
{file = "MarkupSafe-2.1.1-cp37-cp37m-win32.whl", hash = "sha256:8dc1c72a69aa7e082593c4a203dcf94ddb74bb5c8a731e4e1eb68d031e8498ff"},
{file = "MarkupSafe-2.1.1-cp37-cp37m-win_amd64.whl", hash = "sha256:97a68e6ada378df82bc9f16b800ab77cbf4b2fada0081794318520138c088e4a"},
{file = "MarkupSafe-2.1.1-cp38-cp38-macosx_10_9_universal2.whl", hash = "sha256:e8c843bbcda3a2f1e3c2ab25913c80a3c5376cd00c6e8c4a86a89a28c8dc5452"},
{file = "MarkupSafe-2.1.1-cp38-cp38-macosx_10_9_x86_64.whl", hash = "sha256:0212a68688482dc52b2d45013df70d169f542b7394fc744c02a57374a4207003"},
{file = "MarkupSafe-2.1.1-cp38-cp38-manylinux_2_17_aarch64.manylinux2014_aarch64.whl", hash = "sha256:8e576a51ad59e4bfaac456023a78f6b5e6e7651dcd383bcc3e18d06f9b55d6d1"},
{file = "MarkupSafe-2.1.1-cp38-cp38-manylinux_2_17_x86_64.manylinux2014_x86_64.whl", hash = "sha256:4b9fe39a2ccc108a4accc2676e77da025ce383c108593d65cc909add5c3bd601"},
{file = "MarkupSafe-2.1.1-cp38-cp38-manylinux_2_5_i686.manylinux1_i686.manylinux_2_17_i686.manylinux2014_i686.whl", hash = "sha256:96e37a3dc86e80bf81758c152fe66dbf60ed5eca3d26305edf01892257049925"},
{file = "MarkupSafe-2.1.1-cp38-cp38-musllinux_1_1_aarch64.whl", hash = "sha256:6d0072fea50feec76a4c418096652f2c3238eaa014b2f94aeb1d56a66b41403f"},
{file = "MarkupSafe-2.1.1-cp38-cp38-musllinux_1_1_i686.whl", hash = "sha256:089cf3dbf0cd6c100f02945abeb18484bd1ee57a079aefd52cffd17fba910b88"},
{file = "MarkupSafe-2.1.1-cp38-cp38-musllinux_1_1_x86_64.whl", hash = "sha256:6a074d34ee7a5ce3effbc526b7083ec9731bb3cbf921bbe1d3005d4d2bdb3a63"},
{file = "MarkupSafe-2.1.1-cp38-cp38-win32.whl", hash = "sha256:421be9fbf0ffe9ffd7a378aafebbf6f4602d564d34be190fc19a193232fd12b1"},
{file = "MarkupSafe-2.1.1-cp38-cp38-win_amd64.whl", hash = "sha256:fc7b548b17d238737688817ab67deebb30e8073c95749d55538ed473130ec0c7"},
{file = "MarkupSafe-2.1.1-cp39-cp39-macosx_10_9_universal2.whl", hash = "sha256:e04e26803c9c3851c931eac40c695602c6295b8d432cbe78609649ad9bd2da8a"},
{file = "MarkupSafe-2.1.1-cp39-cp39-macosx_10_9_x86_64.whl", hash = "sha256:b87db4360013327109564f0e591bd2a3b318547bcef31b468a92ee504d07ae4f"},
{file = "MarkupSafe-2.1.1-cp39-cp39-manylinux_2_17_aarch64.manylinux2014_aarch64.whl", hash = "sha256:99a2a507ed3ac881b975a2976d59f38c19386d128e7a9a18b7df6fff1fd4c1d6"},
{file = "MarkupSafe-2.1.1-cp39-cp39-manylinux_2_17_x86_64.manylinux2014_x86_64.whl", hash = "sha256:56442863ed2b06d19c37f94d999035e15ee982988920e12a5b4ba29b62ad1f77"},
{file = "MarkupSafe-2.1.1-cp39-cp39-manylinux_2_5_i686.manylinux1_i686.manylinux_2_17_i686.manylinux2014_i686.whl", hash = "sha256:3ce11ee3f23f79dbd06fb3d63e2f6af7b12db1d46932fe7bd8afa259a5996603"},
{file = "MarkupSafe-2.1.1-cp39-cp39-musllinux_1_1_aarch64.whl", hash = "sha256:33b74d289bd2f5e527beadcaa3f401e0df0a89927c1559c8566c066fa4248ab7"},
{file = "MarkupSafe-2.1.1-cp39-cp39-musllinux_1_1_i686.whl", hash = "sha256:43093fb83d8343aac0b1baa75516da6092f58f41200907ef92448ecab8825135"},
{file = "MarkupSafe-2.1.1-cp39-cp39-musllinux_1_1_x86_64.whl", hash = "sha256:8e3dcf21f367459434c18e71b2a9532d96547aef8a871872a5bd69a715c15f96"},
{file = "MarkupSafe-2.1.1-cp39-cp39-win32.whl", hash = "sha256:d4306c36ca495956b6d568d276ac11fdd9c30a36f1b6eb928070dc5360b22e1c"},
{file = "MarkupSafe-2.1.1-cp39-cp39-win_amd64.whl", hash = "sha256:46d00d6cfecdde84d40e572d63735ef81423ad31184100411e6e3388d405e247"},
{file = "MarkupSafe-2.1.1.tar.gz", hash = "sha256:7f91197cc9e48f989d12e4e6fbc46495c446636dfc81b9ccf50bb0ec74b91d4b"},
]
matplotlib = [
{file = "matplotlib-3.6.2-cp310-cp310-macosx_10_12_universal2.whl", hash = "sha256:8d0068e40837c1d0df6e3abf1cdc9a34a6d2611d90e29610fa1d2455aeb4e2e5"},
{file = "matplotlib-3.6.2-cp310-cp310-macosx_10_12_x86_64.whl", hash = "sha256:252957e208c23db72ca9918cb33e160c7833faebf295aaedb43f5b083832a267"},
{file = "matplotlib-3.6.2-cp310-cp310-macosx_11_0_arm64.whl", hash = "sha256:d50e8c1e571ee39b5dfbc295c11ad65988879f68009dd281a6e1edbc2ff6c18c"},
{file = "matplotlib-3.6.2-cp310-cp310-manylinux_2_17_aarch64.manylinux2014_aarch64.whl", hash = "sha256:d840adcad7354be6f2ec28d0706528b0026e4c3934cc6566b84eac18633eab1b"},
{file = "matplotlib-3.6.2-cp310-cp310-manylinux_2_17_i686.manylinux2014_i686.whl", hash = "sha256:78ec3c3412cf277e6252764ee4acbdbec6920cc87ad65862272aaa0e24381eee"},
{file = "matplotlib-3.6.2-cp310-cp310-manylinux_2_17_x86_64.manylinux2014_x86_64.whl", hash = "sha256:9347cc6822f38db2b1d1ce992f375289670e595a2d1c15961aacbe0977407dfc"},
{file = "matplotlib-3.6.2-cp310-cp310-win32.whl", hash = "sha256:e0bbee6c2a5bf2a0017a9b5e397babb88f230e6f07c3cdff4a4c4bc75ed7c617"},
{file = "matplotlib-3.6.2-cp310-cp310-win_amd64.whl", hash = "sha256:8a0ae37576ed444fe853709bdceb2be4c7df6f7acae17b8378765bd28e61b3ae"},
{file = "matplotlib-3.6.2-cp311-cp311-macosx_10_12_universal2.whl", hash = "sha256:5ecfc6559132116dedfc482d0ad9df8a89dc5909eebffd22f3deb684132d002f"},
{file = "matplotlib-3.6.2-cp311-cp311-macosx_10_12_x86_64.whl", hash = "sha256:9f335e5625feb90e323d7e3868ec337f7b9ad88b5d633f876e3b778813021dab"},
{file = "matplotlib-3.6.2-cp311-cp311-macosx_11_0_arm64.whl", hash = "sha256:b2604c6450f9dd2c42e223b1f5dca9643a23cfecc9fde4a94bb38e0d2693b136"},
{file = "matplotlib-3.6.2-cp311-cp311-manylinux_2_17_aarch64.manylinux2014_aarch64.whl", hash = "sha256:e5afe0a7ea0e3a7a257907060bee6724a6002b7eec55d0db16fd32409795f3e1"},
{file = "matplotlib-3.6.2-cp311-cp311-manylinux_2_17_i686.manylinux2014_i686.whl", hash = "sha256:ca0e7a658fbafcddcaefaa07ba8dae9384be2343468a8e011061791588d839fa"},
{file = "matplotlib-3.6.2-cp311-cp311-manylinux_2_17_x86_64.manylinux2014_x86_64.whl", hash = "sha256:32d29c8c26362169c80c5718ce367e8c64f4dd068a424e7110df1dd2ed7bd428"},
{file = "matplotlib-3.6.2-cp311-cp311-win32.whl", hash = "sha256:5024b8ed83d7f8809982d095d8ab0b179bebc07616a9713f86d30cf4944acb73"},
{file = "matplotlib-3.6.2-cp311-cp311-win_amd64.whl", hash = "sha256:52c2bdd7cd0bf9d5ccdf9c1816568fd4ccd51a4d82419cc5480f548981b47dd0"},
{file = "matplotlib-3.6.2-cp38-cp38-macosx_10_12_universal2.whl", hash = "sha256:8a8dbe2cb7f33ff54b16bb5c500673502a35f18ac1ed48625e997d40c922f9cc"},
{file = "matplotlib-3.6.2-cp38-cp38-macosx_10_12_x86_64.whl", hash = "sha256:380d48c15ec41102a2b70858ab1dedfa33eb77b2c0982cb65a200ae67a48e9cb"},
{file = "matplotlib-3.6.2-cp38-cp38-macosx_11_0_arm64.whl", hash = "sha256:0844523dfaaff566e39dbfa74e6f6dc42e92f7a365ce80929c5030b84caa563a"},
{file = "matplotlib-3.6.2-cp38-cp38-manylinux_2_12_i686.manylinux2010_i686.whl", hash = "sha256:7f716b6af94dc1b6b97c46401774472f0867e44595990fe80a8ba390f7a0a028"},
{file = "matplotlib-3.6.2-cp38-cp38-manylinux_2_12_x86_64.manylinux2010_x86_64.whl", hash = "sha256:74153008bd24366cf099d1f1e83808d179d618c4e32edb0d489d526523a94d9f"},
{file = "matplotlib-3.6.2-cp38-cp38-manylinux_2_17_aarch64.manylinux2014_aarch64.whl", hash = "sha256:f41e57ad63d336fe50d3a67bb8eaa26c09f6dda6a59f76777a99b8ccd8e26aec"},
{file = "matplotlib-3.6.2-cp38-cp38-win32.whl", hash = "sha256:d0e9ac04065a814d4cf2c6791a2ad563f739ae3ae830d716d54245c2b96fead6"},
{file = "matplotlib-3.6.2-cp38-cp38-win_amd64.whl", hash = "sha256:8a9d899953c722b9afd7e88dbefd8fb276c686c3116a43c577cfabf636180558"},
{file = "matplotlib-3.6.2-cp39-cp39-macosx_10_12_universal2.whl", hash = "sha256:f04f97797df35e442ed09f529ad1235d1f1c0f30878e2fe09a2676b71a8801e0"},
{file = "matplotlib-3.6.2-cp39-cp39-macosx_10_12_x86_64.whl", hash = "sha256:3964934731fd7a289a91d315919cf757f293969a4244941ab10513d2351b4e83"},
{file = "matplotlib-3.6.2-cp39-cp39-macosx_11_0_arm64.whl", hash = "sha256:168093410b99f647ba61361b208f7b0d64dde1172b5b1796d765cd243cadb501"},
{file = "matplotlib-3.6.2-cp39-cp39-manylinux_2_17_aarch64.manylinux2014_aarch64.whl", hash = "sha256:5e16dcaecffd55b955aa5e2b8a804379789c15987e8ebd2f32f01398a81e975b"},
{file = "matplotlib-3.6.2-cp39-cp39-manylinux_2_17_i686.manylinux2014_i686.whl", hash = "sha256:83dc89c5fd728fdb03b76f122f43b4dcee8c61f1489e232d9ad0f58020523e1c"},
{file = "matplotlib-3.6.2-cp39-cp39-manylinux_2_17_x86_64.manylinux2014_x86_64.whl", hash = "sha256:795ad83940732b45d39b82571f87af0081c120feff2b12e748d96bb191169e33"},
{file = "matplotlib-3.6.2-cp39-cp39-win32.whl", hash = "sha256:19d61ee6414c44a04addbe33005ab1f87539d9f395e25afcbe9a3c50ce77c65c"},
{file = "matplotlib-3.6.2-cp39-cp39-win_amd64.whl", hash = "sha256:5ba73aa3aca35d2981e0b31230d58abb7b5d7ca104e543ae49709208d8ce706a"},
{file = "matplotlib-3.6.2-pp38-pypy38_pp73-macosx_10_12_x86_64.whl", hash = "sha256:1836f366272b1557a613f8265db220eb8dd883202bbbabe01bad5a4eadfd0c95"},
{file = "matplotlib-3.6.2-pp38-pypy38_pp73-manylinux_2_17_i686.manylinux2014_i686.whl", hash = "sha256:0eda9d1b43f265da91fb9ae10d6922b5a986e2234470a524e6b18f14095b20d2"},
{file = "matplotlib-3.6.2-pp38-pypy38_pp73-manylinux_2_17_x86_64.manylinux2014_x86_64.whl", hash = "sha256:ec9be0f4826cdb3a3a517509dcc5f87f370251b76362051ab59e42b6b765f8c4"},
{file = "matplotlib-3.6.2-pp38-pypy38_pp73-win_amd64.whl", hash = "sha256:3cef89888a466228fc4e4b2954e740ce8e9afde7c4315fdd18caa1b8de58ca17"},
{file = "matplotlib-3.6.2-pp39-pypy39_pp73-macosx_10_12_x86_64.whl", hash = "sha256:54fa9fe27f5466b86126ff38123261188bed568c1019e4716af01f97a12fe812"},
{file = "matplotlib-3.6.2-pp39-pypy39_pp73-manylinux_2_17_i686.manylinux2014_i686.whl", hash = "sha256:e68be81cd8c22b029924b6d0ee814c337c0e706b8d88495a617319e5dd5441c3"},
{file = "matplotlib-3.6.2-pp39-pypy39_pp73-manylinux_2_17_x86_64.manylinux2014_x86_64.whl", hash = "sha256:b0ca2c60d3966dfd6608f5f8c49b8a0fcf76de6654f2eda55fc6ef038d5a6f27"},
{file = "matplotlib-3.6.2-pp39-pypy39_pp73-win_amd64.whl", hash = "sha256:4426c74761790bff46e3d906c14c7aab727543293eed5a924300a952e1a3a3c1"},
{file = "matplotlib-3.6.2.tar.gz", hash = "sha256:b03fd10a1709d0101c054883b550f7c4c5e974f751e2680318759af005964990"},
]
matplotlib-inline = [
{file = "matplotlib-inline-0.1.6.tar.gz", hash = "sha256:f887e5f10ba98e8d2b150ddcf4702c1e5f8b3a20005eb0f74bfdbd360ee6f304"},
{file = "matplotlib_inline-0.1.6-py3-none-any.whl", hash = "sha256:f1f41aab5328aa5aaea9b16d083b128102f8712542f819fe7e6a420ff581b311"},
]
mccabe = [
{file = "mccabe-0.6.1-py2.py3-none-any.whl", hash = "sha256:ab8a6258860da4b6677da4bd2fe5dc2c659cff31b3ee4f7f5d64e79735b80d42"},
{file = "mccabe-0.6.1.tar.gz", hash = "sha256:dd8d182285a0fe56bace7f45b5e7d1a6ebcbf524e8f3bd87eb0f125271b8831f"},
]
mistune = [
{file = "mistune-2.0.4-py2.py3-none-any.whl", hash = "sha256:182cc5ee6f8ed1b807de6b7bb50155df7b66495412836b9a74c8fbdfc75fe36d"},
{file = "mistune-2.0.4.tar.gz", hash = "sha256:9ee0a66053e2267aba772c71e06891fa8f1af6d4b01d5e84e267b4570d4d9808"},
]
mpmath = [
{file = "mpmath-1.2.1-py3-none-any.whl", hash = "sha256:604bc21bd22d2322a177c73bdb573994ef76e62edd595d17e00aff24b0667e5c"},
{file = "mpmath-1.2.1.tar.gz", hash = "sha256:79ffb45cf9f4b101a807595bcb3e72e0396202e0b1d25d689134b48c4216a81a"},
]
msgpack = [
{file = "msgpack-1.0.4-cp310-cp310-macosx_10_9_universal2.whl", hash = "sha256:4ab251d229d10498e9a2f3b1e68ef64cb393394ec477e3370c457f9430ce9250"},
{file = "msgpack-1.0.4-cp310-cp310-macosx_10_9_x86_64.whl", hash = "sha256:112b0f93202d7c0fef0b7810d465fde23c746a2d482e1e2de2aafd2ce1492c88"},
{file = "msgpack-1.0.4-cp310-cp310-macosx_11_0_arm64.whl", hash = "sha256:002b5c72b6cd9b4bafd790f364b8480e859b4712e91f43014fe01e4f957b8467"},
{file = "msgpack-1.0.4-cp310-cp310-manylinux_2_17_aarch64.manylinux2014_aarch64.whl", hash = "sha256:35bc0faa494b0f1d851fd29129b2575b2e26d41d177caacd4206d81502d4c6a6"},
{file = "msgpack-1.0.4-cp310-cp310-manylinux_2_17_x86_64.manylinux2014_x86_64.whl", hash = "sha256:4733359808c56d5d7756628736061c432ded018e7a1dff2d35a02439043321aa"},
{file = "msgpack-1.0.4-cp310-cp310-manylinux_2_5_i686.manylinux1_i686.manylinux_2_17_i686.manylinux2014_i686.whl", hash = "sha256:eb514ad14edf07a1dbe63761fd30f89ae79b42625731e1ccf5e1f1092950eaa6"},
{file = "msgpack-1.0.4-cp310-cp310-musllinux_1_1_aarch64.whl", hash = "sha256:c23080fdeec4716aede32b4e0ef7e213c7b1093eede9ee010949f2a418ced6ba"},
{file = "msgpack-1.0.4-cp310-cp310-musllinux_1_1_i686.whl", hash = "sha256:49565b0e3d7896d9ea71d9095df15b7f75a035c49be733051c34762ca95bbf7e"},
{file = "msgpack-1.0.4-cp310-cp310-musllinux_1_1_x86_64.whl", hash = "sha256:aca0f1644d6b5a73eb3e74d4d64d5d8c6c3d577e753a04c9e9c87d07692c58db"},
{file = "msgpack-1.0.4-cp310-cp310-win32.whl", hash = "sha256:0dfe3947db5fb9ce52aaea6ca28112a170db9eae75adf9339a1aec434dc954ef"},
{file = "msgpack-1.0.4-cp310-cp310-win_amd64.whl", hash = "sha256:4dea20515f660aa6b7e964433b1808d098dcfcabbebeaaad240d11f909298075"},
{file = "msgpack-1.0.4-cp36-cp36m-macosx_10_9_x86_64.whl", hash = "sha256:e83f80a7fec1a62cf4e6c9a660e39c7f878f603737a0cdac8c13131d11d97f52"},
{file = "msgpack-1.0.4-cp36-cp36m-manylinux_2_17_aarch64.manylinux2014_aarch64.whl", hash = "sha256:3c11a48cf5e59026ad7cb0dc29e29a01b5a66a3e333dc11c04f7e991fc5510a9"},
{file = "msgpack-1.0.4-cp36-cp36m-manylinux_2_17_x86_64.manylinux2014_x86_64.whl", hash = "sha256:1276e8f34e139aeff1c77a3cefb295598b504ac5314d32c8c3d54d24fadb94c9"},
{file = "msgpack-1.0.4-cp36-cp36m-manylinux_2_5_i686.manylinux1_i686.manylinux_2_17_i686.manylinux2014_i686.whl", hash = "sha256:6c9566f2c39ccced0a38d37c26cc3570983b97833c365a6044edef3574a00c08"},
{file = "msgpack-1.0.4-cp36-cp36m-musllinux_1_1_aarch64.whl", hash = "sha256:fcb8a47f43acc113e24e910399376f7277cf8508b27e5b88499f053de6b115a8"},
{file = "msgpack-1.0.4-cp36-cp36m-musllinux_1_1_i686.whl", hash = "sha256:76ee788122de3a68a02ed6f3a16bbcd97bc7c2e39bd4d94be2f1821e7c4a64e6"},
{file = "msgpack-1.0.4-cp36-cp36m-musllinux_1_1_x86_64.whl", hash = "sha256:0a68d3ac0104e2d3510de90a1091720157c319ceeb90d74f7b5295a6bee51bae"},
{file = "msgpack-1.0.4-cp36-cp36m-win32.whl", hash = "sha256:85f279d88d8e833ec015650fd15ae5eddce0791e1e8a59165318f371158efec6"},
{file = "msgpack-1.0.4-cp36-cp36m-win_amd64.whl", hash = "sha256:c1683841cd4fa45ac427c18854c3ec3cd9b681694caf5bff04edb9387602d661"},
{file = "msgpack-1.0.4-cp37-cp37m-macosx_10_9_x86_64.whl", hash = "sha256:a75dfb03f8b06f4ab093dafe3ddcc2d633259e6c3f74bb1b01996f5d8aa5868c"},
{file = "msgpack-1.0.4-cp37-cp37m-manylinux_2_17_aarch64.manylinux2014_aarch64.whl", hash = "sha256:9667bdfdf523c40d2511f0e98a6c9d3603be6b371ae9a238b7ef2dc4e7a427b0"},
{file = "msgpack-1.0.4-cp37-cp37m-manylinux_2_17_x86_64.manylinux2014_x86_64.whl", hash = "sha256:11184bc7e56fd74c00ead4f9cc9a3091d62ecb96e97653add7a879a14b003227"},
{file = "msgpack-1.0.4-cp37-cp37m-manylinux_2_5_i686.manylinux1_i686.manylinux_2_17_i686.manylinux2014_i686.whl", hash = "sha256:ac5bd7901487c4a1dd51a8c58f2632b15d838d07ceedaa5e4c080f7190925bff"},
{file = "msgpack-1.0.4-cp37-cp37m-musllinux_1_1_aarch64.whl", hash = "sha256:1e91d641d2bfe91ba4c52039adc5bccf27c335356055825c7f88742c8bb900dd"},
{file = "msgpack-1.0.4-cp37-cp37m-musllinux_1_1_i686.whl", hash = "sha256:2a2df1b55a78eb5f5b7d2a4bb221cd8363913830145fad05374a80bf0877cb1e"},
{file = "msgpack-1.0.4-cp37-cp37m-musllinux_1_1_x86_64.whl", hash = "sha256:545e3cf0cf74f3e48b470f68ed19551ae6f9722814ea969305794645da091236"},
{file = "msgpack-1.0.4-cp37-cp37m-win32.whl", hash = "sha256:2cc5ca2712ac0003bcb625c96368fd08a0f86bbc1a5578802512d87bc592fe44"},
{file = "msgpack-1.0.4-cp37-cp37m-win_amd64.whl", hash = "sha256:eba96145051ccec0ec86611fe9cf693ce55f2a3ce89c06ed307de0e085730ec1"},
{file = "msgpack-1.0.4-cp38-cp38-macosx_10_9_universal2.whl", hash = "sha256:7760f85956c415578c17edb39eed99f9181a48375b0d4a94076d84148cf67b2d"},
{file = "msgpack-1.0.4-cp38-cp38-macosx_10_9_x86_64.whl", hash = "sha256:449e57cc1ff18d3b444eb554e44613cffcccb32805d16726a5494038c3b93dab"},
{file = "msgpack-1.0.4-cp38-cp38-macosx_11_0_arm64.whl", hash = "sha256:d603de2b8d2ea3f3bcb2efe286849aa7a81531abc52d8454da12f46235092bcb"},
{file = "msgpack-1.0.4-cp38-cp38-manylinux_2_17_aarch64.manylinux2014_aarch64.whl", hash = "sha256:48f5d88c99f64c456413d74a975bd605a9b0526293218a3b77220a2c15458ba9"},
{file = "msgpack-1.0.4-cp38-cp38-manylinux_2_17_x86_64.manylinux2014_x86_64.whl", hash = "sha256:6916c78f33602ecf0509cc40379271ba0f9ab572b066bd4bdafd7434dee4bc6e"},
{file = "msgpack-1.0.4-cp38-cp38-manylinux_2_5_i686.manylinux1_i686.manylinux_2_17_i686.manylinux2014_i686.whl", hash = "sha256:81fc7ba725464651190b196f3cd848e8553d4d510114a954681fd0b9c479d7e1"},
{file = "msgpack-1.0.4-cp38-cp38-musllinux_1_1_aarch64.whl", hash = "sha256:d5b5b962221fa2c5d3a7f8133f9abffc114fe218eb4365e40f17732ade576c8e"},
{file = "msgpack-1.0.4-cp38-cp38-musllinux_1_1_i686.whl", hash = "sha256:77ccd2af37f3db0ea59fb280fa2165bf1b096510ba9fe0cc2bf8fa92a22fdb43"},
{file = "msgpack-1.0.4-cp38-cp38-musllinux_1_1_x86_64.whl", hash = "sha256:b17be2478b622939e39b816e0aa8242611cc8d3583d1cd8ec31b249f04623243"},
{file = "msgpack-1.0.4-cp38-cp38-win32.whl", hash = "sha256:2bb8cdf50dd623392fa75525cce44a65a12a00c98e1e37bf0fb08ddce2ff60d2"},
{file = "msgpack-1.0.4-cp38-cp38-win_amd64.whl", hash = "sha256:26b8feaca40a90cbe031b03d82b2898bf560027160d3eae1423f4a67654ec5d6"},
{file = "msgpack-1.0.4-cp39-cp39-macosx_10_9_universal2.whl", hash = "sha256:462497af5fd4e0edbb1559c352ad84f6c577ffbbb708566a0abaaa84acd9f3ae"},
{file = "msgpack-1.0.4-cp39-cp39-macosx_10_9_x86_64.whl", hash = "sha256:2999623886c5c02deefe156e8f869c3b0aaeba14bfc50aa2486a0415178fce55"},
{file = "msgpack-1.0.4-cp39-cp39-macosx_11_0_arm64.whl", hash = "sha256:f0029245c51fd9473dc1aede1160b0a29f4a912e6b1dd353fa6d317085b219da"},
{file = "msgpack-1.0.4-cp39-cp39-manylinux_2_17_aarch64.manylinux2014_aarch64.whl", hash = "sha256:ed6f7b854a823ea44cf94919ba3f727e230da29feb4a99711433f25800cf747f"},
{file = "msgpack-1.0.4-cp39-cp39-manylinux_2_17_x86_64.manylinux2014_x86_64.whl", hash = "sha256:0df96d6eaf45ceca04b3f3b4b111b86b33785683d682c655063ef8057d61fd92"},
{file = "msgpack-1.0.4-cp39-cp39-manylinux_2_5_i686.manylinux1_i686.manylinux_2_17_i686.manylinux2014_i686.whl", hash = "sha256:6a4192b1ab40f8dca3f2877b70e63799d95c62c068c84dc028b40a6cb03ccd0f"},
{file = "msgpack-1.0.4-cp39-cp39-musllinux_1_1_aarch64.whl", hash = "sha256:0e3590f9fb9f7fbc36df366267870e77269c03172d086fa76bb4eba8b2b46624"},
{file = "msgpack-1.0.4-cp39-cp39-musllinux_1_1_i686.whl", hash = "sha256:1576bd97527a93c44fa856770197dec00d223b0b9f36ef03f65bac60197cedf8"},
{file = "msgpack-1.0.4-cp39-cp39-musllinux_1_1_x86_64.whl", hash = "sha256:63e29d6e8c9ca22b21846234913c3466b7e4ee6e422f205a2988083de3b08cae"},
{file = "msgpack-1.0.4-cp39-cp39-win32.whl", hash = "sha256:fb62ea4b62bfcb0b380d5680f9a4b3f9a2d166d9394e9bbd9666c0ee09a3645c"},
{file = "msgpack-1.0.4-cp39-cp39-win_amd64.whl", hash = "sha256:4d5834a2a48965a349da1c5a79760d94a1a0172fbb5ab6b5b33cbf8447e109ce"},
{file = "msgpack-1.0.4.tar.gz", hash = "sha256:f5d869c18f030202eb412f08b28d2afeea553d6613aee89e200d7aca7ef01f5f"},
]
multiprocess = [
{file = "multiprocess-0.70.14-pp37-pypy37_pp73-macosx_10_9_x86_64.whl", hash = "sha256:560a27540daef4ce8b24ed3cc2496a3c670df66c96d02461a4da67473685adf3"},
{file = "multiprocess-0.70.14-pp37-pypy37_pp73-manylinux_2_24_i686.whl", hash = "sha256:bfbbfa36f400b81d1978c940616bc77776424e5e34cb0c94974b178d727cfcd5"},
{file = "multiprocess-0.70.14-pp37-pypy37_pp73-manylinux_2_24_x86_64.whl", hash = "sha256:89fed99553a04ec4f9067031f83a886d7fdec5952005551a896a4b6a59575bb9"},
{file = "multiprocess-0.70.14-pp38-pypy38_pp73-macosx_10_9_x86_64.whl", hash = "sha256:40a5e3685462079e5fdee7c6789e3ef270595e1755199f0d50685e72523e1d2a"},
{file = "multiprocess-0.70.14-pp38-pypy38_pp73-manylinux_2_24_i686.whl", hash = "sha256:44936b2978d3f2648727b3eaeab6d7fa0bedf072dc5207bf35a96d5ee7c004cf"},
{file = "multiprocess-0.70.14-pp38-pypy38_pp73-manylinux_2_24_x86_64.whl", hash = "sha256:e628503187b5d494bf29ffc52d3e1e57bb770ce7ce05d67c4bbdb3a0c7d3b05f"},
{file = "multiprocess-0.70.14-pp39-pypy39_pp73-macosx_10_9_x86_64.whl", hash = "sha256:0d5da0fc84aacb0e4bd69c41b31edbf71b39fe2fb32a54eaedcaea241050855c"},
{file = "multiprocess-0.70.14-pp39-pypy39_pp73-manylinux_2_24_i686.whl", hash = "sha256:6a7b03a5b98e911a7785b9116805bd782815c5e2bd6c91c6a320f26fd3e7b7ad"},
{file = "multiprocess-0.70.14-pp39-pypy39_pp73-manylinux_2_24_x86_64.whl", hash = "sha256:cea5bdedd10aace3c660fedeac8b087136b4366d4ee49a30f1ebf7409bce00ae"},
{file = "multiprocess-0.70.14-py310-none-any.whl", hash = "sha256:7dc1f2f6a1d34894c8a9a013fbc807971e336e7cc3f3ff233e61b9dc679b3b5c"},
{file = "multiprocess-0.70.14-py37-none-any.whl", hash = "sha256:93a8208ca0926d05cdbb5b9250a604c401bed677579e96c14da3090beb798193"},
{file = "multiprocess-0.70.14-py38-none-any.whl", hash = "sha256:6725bc79666bbd29a73ca148a0fb5f4ea22eed4a8f22fce58296492a02d18a7b"},
{file = "multiprocess-0.70.14-py39-none-any.whl", hash = "sha256:63cee628b74a2c0631ef15da5534c8aedbc10c38910b9c8b18dcd327528d1ec7"},
{file = "multiprocess-0.70.14.tar.gz", hash = "sha256:3eddafc12f2260d27ae03fe6069b12570ab4764ab59a75e81624fac453fbf46a"},
]
murmurhash = [
{file = "murmurhash-1.0.9-cp310-cp310-macosx_10_9_x86_64.whl", hash = "sha256:697ed01454d92681c7ae26eb1adcdc654b54062bcc59db38ed03cad71b23d449"},
{file = "murmurhash-1.0.9-cp310-cp310-macosx_11_0_arm64.whl", hash = "sha256:5ef31b5c11be2c064dbbdd0e22ab3effa9ceb5b11ae735295c717c120087dd94"},
{file = "murmurhash-1.0.9-cp310-cp310-manylinux_2_17_aarch64.manylinux2014_aarch64.whl", hash = "sha256:d7a2bd203377a31bbb2d83fe3f968756d6c9bbfa36c64c6ebfc3c6494fc680bc"},
{file = "murmurhash-1.0.9-cp310-cp310-manylinux_2_5_x86_64.manylinux1_x86_64.manylinux_2_17_x86_64.manylinux2014_x86_64.whl", hash = "sha256:0eb0f8e652431ea238c11bcb671fef5c03aff0544bf7e098df81ea4b6d495405"},
{file = "murmurhash-1.0.9-cp310-cp310-win_amd64.whl", hash = "sha256:cf0b3fe54dca598f5b18c9951e70812e070ecb4c0672ad2cc32efde8a33b3df6"},
{file = "murmurhash-1.0.9-cp311-cp311-macosx_10_9_x86_64.whl", hash = "sha256:5dc41be79ba4d09aab7e9110a8a4d4b37b184b63767b1b247411667cdb1057a3"},
{file = "murmurhash-1.0.9-cp311-cp311-macosx_11_0_arm64.whl", hash = "sha256:c0f84ecdf37c06eda0222f2f9e81c0974e1a7659c35b755ab2fdc642ebd366db"},
{file = "murmurhash-1.0.9-cp311-cp311-manylinux_2_17_aarch64.manylinux2014_aarch64.whl", hash = "sha256:241693c1c819148eac29d7882739b1099c891f1f7431127b2652c23f81722cec"},
{file = "murmurhash-1.0.9-cp311-cp311-manylinux_2_5_x86_64.manylinux1_x86_64.manylinux_2_17_x86_64.manylinux2014_x86_64.whl", hash = "sha256:47f5ca56c430230d3b581dfdbc54eb3ad8b0406dcc9afdd978da2e662c71d370"},
{file = "murmurhash-1.0.9-cp311-cp311-win_amd64.whl", hash = "sha256:660ae41fc6609abc05130543011a45b33ca5d8318ae5c70e66bbd351ca936063"},
{file = "murmurhash-1.0.9-cp36-cp36m-manylinux_2_17_aarch64.manylinux2014_aarch64.whl", hash = "sha256:01137d688a6b259bde642513506b062364ea4e1609f886d9bd095c3ae6da0b94"},
{file = "murmurhash-1.0.9-cp36-cp36m-manylinux_2_5_x86_64.manylinux1_x86_64.manylinux_2_17_x86_64.manylinux2014_x86_64.whl", hash = "sha256:1b70bbf55d89713873a35bd4002bc231d38e530e1051d57ca5d15f96c01fd778"},
{file = "murmurhash-1.0.9-cp36-cp36m-win_amd64.whl", hash = "sha256:3e802fa5b0e618ee99e8c114ce99fc91677f14e9de6e18b945d91323a93c84e8"},
{file = "murmurhash-1.0.9-cp37-cp37m-macosx_10_9_x86_64.whl", hash = "sha256:213d0248e586082e1cab6157d9945b846fd2b6be34357ad5ea0d03a1931d82ba"},
{file = "murmurhash-1.0.9-cp37-cp37m-manylinux_2_17_aarch64.manylinux2014_aarch64.whl", hash = "sha256:94b89d02aeab5e6bad5056f9d08df03ac7cfe06e61ff4b6340feb227fda80ce8"},
{file = "murmurhash-1.0.9-cp37-cp37m-manylinux_2_5_x86_64.manylinux1_x86_64.manylinux_2_17_x86_64.manylinux2014_x86_64.whl", hash = "sha256:0c2e2ee2d91a87952fe0f80212e86119aa1fd7681f03e6c99b279e50790dc2b3"},
{file = "murmurhash-1.0.9-cp37-cp37m-win_amd64.whl", hash = "sha256:8c3d69fb649c77c74a55624ebf7a0df3c81629e6ea6e80048134f015da57b2ea"},
{file = "murmurhash-1.0.9-cp38-cp38-macosx_10_9_x86_64.whl", hash = "sha256:ab78675510f83e7a3c6bd0abdc448a9a2b0b385b0d7ee766cbbfc5cc278a3042"},
{file = "murmurhash-1.0.9-cp38-cp38-macosx_11_0_arm64.whl", hash = "sha256:0ac5530c250d2b0073ed058555847c8d88d2d00229e483d45658c13b32398523"},
{file = "murmurhash-1.0.9-cp38-cp38-manylinux_2_17_aarch64.manylinux2014_aarch64.whl", hash = "sha256:69157e8fa6b25c4383645227069f6a1f8738d32ed2a83558961019ca3ebef56a"},
{file = "murmurhash-1.0.9-cp38-cp38-manylinux_2_5_x86_64.manylinux1_x86_64.manylinux_2_17_x86_64.manylinux2014_x86_64.whl", hash = "sha256:2aebe2ae016525a662ff772b72a2c9244a673e3215fcd49897f494258b96f3e7"},
{file = "murmurhash-1.0.9-cp38-cp38-win_amd64.whl", hash = "sha256:a5952f9c18a717fa17579e27f57bfa619299546011a8378a8f73e14eece332f6"},
{file = "murmurhash-1.0.9-cp39-cp39-macosx_10_9_x86_64.whl", hash = "sha256:ef79202feeac68e83971239169a05fa6514ecc2815ce04c8302076d267870f6e"},
{file = "murmurhash-1.0.9-cp39-cp39-macosx_11_0_arm64.whl", hash = "sha256:799fcbca5693ad6a40f565ae6b8e9718e5875a63deddf343825c0f31c32348fa"},
{file = "murmurhash-1.0.9-cp39-cp39-manylinux_2_17_aarch64.manylinux2014_aarch64.whl", hash = "sha256:f9b995bc82eaf9223e045210207b8878fdfe099a788dd8abd708d9ee58459a9d"},
{file = "murmurhash-1.0.9-cp39-cp39-manylinux_2_5_x86_64.manylinux1_x86_64.manylinux_2_17_x86_64.manylinux2014_x86_64.whl", hash = "sha256:b129e1c5ebd772e6ff5ef925bcce695df13169bd885337e6074b923ab6edcfc8"},
{file = "murmurhash-1.0.9-cp39-cp39-win_amd64.whl", hash = "sha256:379bf6b414bd27dd36772dd1570565a7d69918e980457370838bd514df0d91e9"},
{file = "murmurhash-1.0.9.tar.gz", hash = "sha256:fe7a38cb0d3d87c14ec9dddc4932ffe2dbc77d75469ab80fd5014689b0e07b58"},
]
mypy = [
{file = "mypy-0.971-cp310-cp310-macosx_10_9_universal2.whl", hash = "sha256:f2899a3cbd394da157194f913a931edfd4be5f274a88041c9dc2d9cdcb1c315c"},
{file = "mypy-0.971-cp310-cp310-macosx_10_9_x86_64.whl", hash = "sha256:98e02d56ebe93981c41211c05adb630d1d26c14195d04d95e49cd97dbc046dc5"},
{file = "mypy-0.971-cp310-cp310-macosx_11_0_arm64.whl", hash = "sha256:19830b7dba7d5356d3e26e2427a2ec91c994cd92d983142cbd025ebe81d69cf3"},
{file = "mypy-0.971-cp310-cp310-manylinux_2_5_x86_64.manylinux1_x86_64.manylinux_2_12_x86_64.manylinux2010_x86_64.whl", hash = "sha256:02ef476f6dcb86e6f502ae39a16b93285fef97e7f1ff22932b657d1ef1f28655"},
{file = "mypy-0.971-cp310-cp310-win_amd64.whl", hash = "sha256:25c5750ba5609a0c7550b73a33deb314ecfb559c350bb050b655505e8aed4103"},
{file = "mypy-0.971-cp36-cp36m-macosx_10_9_x86_64.whl", hash = "sha256:d3348e7eb2eea2472db611486846742d5d52d1290576de99d59edeb7cd4a42ca"},
{file = "mypy-0.971-cp36-cp36m-manylinux_2_5_x86_64.manylinux1_x86_64.manylinux_2_12_x86_64.manylinux2010_x86_64.whl", hash = "sha256:3fa7a477b9900be9b7dd4bab30a12759e5abe9586574ceb944bc29cddf8f0417"},
{file = "mypy-0.971-cp36-cp36m-win_amd64.whl", hash = "sha256:2ad53cf9c3adc43cf3bea0a7d01a2f2e86db9fe7596dfecb4496a5dda63cbb09"},
{file = "mypy-0.971-cp37-cp37m-macosx_10_9_x86_64.whl", hash = "sha256:855048b6feb6dfe09d3353466004490b1872887150c5bb5caad7838b57328cc8"},
{file = "mypy-0.971-cp37-cp37m-manylinux_2_5_x86_64.manylinux1_x86_64.manylinux_2_12_x86_64.manylinux2010_x86_64.whl", hash = "sha256:23488a14a83bca6e54402c2e6435467a4138785df93ec85aeff64c6170077fb0"},
{file = "mypy-0.971-cp37-cp37m-win_amd64.whl", hash = "sha256:4b21e5b1a70dfb972490035128f305c39bc4bc253f34e96a4adf9127cf943eb2"},
{file = "mypy-0.971-cp38-cp38-macosx_10_9_universal2.whl", hash = "sha256:9796a2ba7b4b538649caa5cecd398d873f4022ed2333ffde58eaf604c4d2cb27"},
{file = "mypy-0.971-cp38-cp38-macosx_10_9_x86_64.whl", hash = "sha256:5a361d92635ad4ada1b1b2d3630fc2f53f2127d51cf2def9db83cba32e47c856"},
{file = "mypy-0.971-cp38-cp38-macosx_11_0_arm64.whl", hash = "sha256:b793b899f7cf563b1e7044a5c97361196b938e92f0a4343a5d27966a53d2ec71"},
{file = "mypy-0.971-cp38-cp38-manylinux_2_5_x86_64.manylinux1_x86_64.manylinux_2_12_x86_64.manylinux2010_x86_64.whl", hash = "sha256:d1ea5d12c8e2d266b5fb8c7a5d2e9c0219fedfeb493b7ed60cd350322384ac27"},
{file = "mypy-0.971-cp38-cp38-win_amd64.whl", hash = "sha256:23c7ff43fff4b0df93a186581885c8512bc50fc4d4910e0f838e35d6bb6b5e58"},
{file = "mypy-0.971-cp39-cp39-macosx_10_9_universal2.whl", hash = "sha256:1f7656b69974a6933e987ee8ffb951d836272d6c0f81d727f1d0e2696074d9e6"},
{file = "mypy-0.971-cp39-cp39-macosx_10_9_x86_64.whl", hash = "sha256:d2022bfadb7a5c2ef410d6a7c9763188afdb7f3533f22a0a32be10d571ee4bbe"},
{file = "mypy-0.971-cp39-cp39-macosx_11_0_arm64.whl", hash = "sha256:ef943c72a786b0f8d90fd76e9b39ce81fb7171172daf84bf43eaf937e9f220a9"},
{file = "mypy-0.971-cp39-cp39-manylinux_2_5_x86_64.manylinux1_x86_64.manylinux_2_12_x86_64.manylinux2010_x86_64.whl", hash = "sha256:d744f72eb39f69312bc6c2abf8ff6656973120e2eb3f3ec4f758ed47e414a4bf"},
{file = "mypy-0.971-cp39-cp39-win_amd64.whl", hash = "sha256:77a514ea15d3007d33a9e2157b0ba9c267496acf12a7f2b9b9f8446337aac5b0"},
{file = "mypy-0.971-py3-none-any.whl", hash = "sha256:0d054ef16b071149917085f51f89555a576e2618d5d9dd70bd6eea6410af3ac9"},
{file = "mypy-0.971.tar.gz", hash = "sha256:40b0f21484238269ae6a57200c807d80debc6459d444c0489a102d7c6a75fa56"},
]
mypy-extensions = [
{file = "mypy_extensions-0.4.3-py2.py3-none-any.whl", hash = "sha256:090fedd75945a69ae91ce1303b5824f428daf5a028d2f6ab8a299250a846f15d"},
{file = "mypy_extensions-0.4.3.tar.gz", hash = "sha256:2d82818f5bb3e369420cb3c4060a7970edba416647068eb4c5343488a6c604a8"},
]
nbclassic = [
{file = "nbclassic-0.4.8-py3-none-any.whl", hash = "sha256:cbf05df5842b420d5cece0143462380ea9d308ff57c2dc0eb4d6e035b18fbfb3"},
{file = "nbclassic-0.4.8.tar.gz", hash = "sha256:c74d8a500f8e058d46b576a41e5bc640711e1032cf7541dde5f73ea49497e283"},
]
nbclient = [
{file = "nbclient-0.7.0-py3-none-any.whl", hash = "sha256:434c91385cf3e53084185334d675a0d33c615108b391e260915d1aa8e86661b8"},
{file = "nbclient-0.7.0.tar.gz", hash = "sha256:a1d844efd6da9bc39d2209bf996dbd8e07bf0f36b796edfabaa8f8a9ab77c3aa"},
]
nbconvert = [
{file = "nbconvert-7.0.0rc3-py3-none-any.whl", hash = "sha256:6774a0bf293d76fa2e886255812d953b750059330c3d7305ad271c02590f1957"},
{file = "nbconvert-7.0.0rc3.tar.gz", hash = "sha256:efb9aae47dad2eae02dd9e7d2cc8add6b7e8f15c6548c0de3363f6d2f8a39146"},
]
nbformat = [
{file = "nbformat-5.7.0-py3-none-any.whl", hash = "sha256:1b05ec2c552c2f1adc745f4eddce1eac8ca9ffd59bb9fd859e827eaa031319f9"},
{file = "nbformat-5.7.0.tar.gz", hash = "sha256:1d4760c15c1a04269ef5caf375be8b98dd2f696e5eb9e603ec2bf091f9b0d3f3"},
]
nbsphinx = [
{file = "nbsphinx-0.8.10-py3-none-any.whl", hash = "sha256:6076fba58020420927899362579f12779a43091eb238f414519ec25b4a8cfc96"},
{file = "nbsphinx-0.8.10.tar.gz", hash = "sha256:a8d68046f8aab916e2940b9b3819bd3ef9ddce868aa38845ea366645cabb6254"},
]
nest-asyncio = [
{file = "nest_asyncio-1.5.6-py3-none-any.whl", hash = "sha256:b9a953fb40dceaa587d109609098db21900182b16440652454a146cffb06e8b8"},
{file = "nest_asyncio-1.5.6.tar.gz", hash = "sha256:d267cc1ff794403f7df692964d1d2a3fa9418ffea2a3f6859a439ff482fef290"},
]
networkx = [
{file = "networkx-2.8.8-py3-none-any.whl", hash = "sha256:e435dfa75b1d7195c7b8378c3859f0445cd88c6b0375c181ed66823a9ceb7524"},
{file = "networkx-2.8.8.tar.gz", hash = "sha256:230d388117af870fce5647a3c52401fcf753e94720e6ea6b4197a5355648885e"},
]
notebook = [
{file = "notebook-6.5.2-py3-none-any.whl", hash = "sha256:e04f9018ceb86e4fa841e92ea8fb214f8d23c1cedfde530cc96f92446924f0e4"},
{file = "notebook-6.5.2.tar.gz", hash = "sha256:c1897e5317e225fc78b45549a6ab4b668e4c996fd03a04e938fe5e7af2bfffd0"},
]
notebook-shim = [
{file = "notebook_shim-0.2.2-py3-none-any.whl", hash = "sha256:9c6c30f74c4fbea6fce55c1be58e7fd0409b1c681b075dcedceb005db5026949"},
{file = "notebook_shim-0.2.2.tar.gz", hash = "sha256:090e0baf9a5582ff59b607af523ca2db68ff216da0c69956b62cab2ef4fc9c3f"},
]
numba = [
{file = "numba-0.53.1-cp36-cp36m-macosx_10_14_x86_64.whl", hash = "sha256:b23de6b6837c132087d06b8b92d343edb54b885873b824a037967fbd5272ebb7"},
{file = "numba-0.53.1-cp36-cp36m-manylinux2014_i686.whl", hash = "sha256:6545b9e9b0c112b81de7f88a3c787469a357eeff8211e90b8f45ee243d521cc2"},
{file = "numba-0.53.1-cp36-cp36m-manylinux2014_x86_64.whl", hash = "sha256:8fa5c963a43855050a868106a87cd614f3c3f459951c8fc468aec263ef80d063"},
{file = "numba-0.53.1-cp36-cp36m-win32.whl", hash = "sha256:aaa6ebf56afb0b6752607b9f3bf39e99b0efe3c1fa6849698373925ee6838fd7"},
{file = "numba-0.53.1-cp36-cp36m-win_amd64.whl", hash = "sha256:b08b3df38aab769df79ed948d70f0a54a3cdda49d58af65369235c204ec5d0f3"},
{file = "numba-0.53.1-cp37-cp37m-macosx_10_14_x86_64.whl", hash = "sha256:bf5c463b62d013e3f709cc8277adf2f4f4d8cc6757293e29c6db121b77e6b760"},
{file = "numba-0.53.1-cp37-cp37m-manylinux2014_i686.whl", hash = "sha256:74df02e73155f669e60dcff07c4eef4a03dbf5b388594db74142ab40914fe4f5"},
{file = "numba-0.53.1-cp37-cp37m-manylinux2014_x86_64.whl", hash = "sha256:5165709bf62f28667e10b9afe6df0ce1037722adab92d620f59cb8bbb8104641"},
{file = "numba-0.53.1-cp37-cp37m-win32.whl", hash = "sha256:2e96958ed2ca7e6d967b2ce29c8da0ca47117e1de28e7c30b2c8c57386506fa5"},
{file = "numba-0.53.1-cp37-cp37m-win_amd64.whl", hash = "sha256:276f9d1674fe08d95872d81b97267c6b39dd830f05eb992608cbede50fcf48a9"},
{file = "numba-0.53.1-cp38-cp38-macosx_10_14_x86_64.whl", hash = "sha256:4c4c8d102512ae472af52c76ad9522da718c392cb59f4cd6785d711fa5051a2a"},
{file = "numba-0.53.1-cp38-cp38-manylinux2014_i686.whl", hash = "sha256:691adbeac17dbdf6ed7c759e9e33a522351f07d2065fe926b264b6b2c15fd89b"},
{file = "numba-0.53.1-cp38-cp38-manylinux2014_x86_64.whl", hash = "sha256:94aab3e0e9e8754116325ce026e1b29ae72443c706a3104cf7f3368dc3012912"},
{file = "numba-0.53.1-cp38-cp38-win32.whl", hash = "sha256:aabeec89bb3e3162136eea492cea7ee8882ddcda2201f05caecdece192c40896"},
{file = "numba-0.53.1-cp38-cp38-win_amd64.whl", hash = "sha256:1895ebd256819ff22256cd6fe24aa8f7470b18acc73e7917e8e93c9ac7f565dc"},
{file = "numba-0.53.1-cp39-cp39-macosx_10_14_x86_64.whl", hash = "sha256:224d197a46a9e602a16780d87636e199e2cdef528caef084a4d8fd8909c2455c"},
{file = "numba-0.53.1-cp39-cp39-manylinux2014_i686.whl", hash = "sha256:aba7acb247a09d7f12bd17a8e28bbb04e8adef9fc20ca29835d03b7894e1b49f"},
{file = "numba-0.53.1-cp39-cp39-manylinux2014_x86_64.whl", hash = "sha256:bd126f1f49da6fc4b3169cf1d96f1c3b3f84a7badd11fe22da344b923a00e744"},
{file = "numba-0.53.1-cp39-cp39-win32.whl", hash = "sha256:0ef9d1f347b251282ae46e5a5033600aa2d0dfa1ee8c16cb8137b8cd6f79e221"},
{file = "numba-0.53.1-cp39-cp39-win_amd64.whl", hash = "sha256:17146885cbe4e89c9d4abd4fcb8886dee06d4591943dc4343500c36ce2fcfa69"},
{file = "numba-0.53.1.tar.gz", hash = "sha256:9cd4e5216acdc66c4e9dab2dfd22ddb5bef151185c070d4a3cd8e78638aff5b0"},
]
numpy = [
{file = "numpy-1.23.5-cp310-cp310-macosx_10_9_x86_64.whl", hash = "sha256:9c88793f78fca17da0145455f0d7826bcb9f37da4764af27ac945488116efe63"},
{file = "numpy-1.23.5-cp310-cp310-macosx_11_0_arm64.whl", hash = "sha256:e9f4c4e51567b616be64e05d517c79a8a22f3606499941d97bb76f2ca59f982d"},
{file = "numpy-1.23.5-cp310-cp310-manylinux_2_17_aarch64.manylinux2014_aarch64.whl", hash = "sha256:7903ba8ab592b82014713c491f6c5d3a1cde5b4a3bf116404e08f5b52f6daf43"},
{file = "numpy-1.23.5-cp310-cp310-manylinux_2_17_x86_64.manylinux2014_x86_64.whl", hash = "sha256:5e05b1c973a9f858c74367553e236f287e749465f773328c8ef31abe18f691e1"},
{file = "numpy-1.23.5-cp310-cp310-win32.whl", hash = "sha256:522e26bbf6377e4d76403826ed689c295b0b238f46c28a7251ab94716da0b280"},
{file = "numpy-1.23.5-cp310-cp310-win_amd64.whl", hash = "sha256:dbee87b469018961d1ad79b1a5d50c0ae850000b639bcb1b694e9981083243b6"},
{file = "numpy-1.23.5-cp311-cp311-macosx_10_9_x86_64.whl", hash = "sha256:ce571367b6dfe60af04e04a1834ca2dc5f46004ac1cc756fb95319f64c095a96"},
{file = "numpy-1.23.5-cp311-cp311-macosx_11_0_arm64.whl", hash = "sha256:56e454c7833e94ec9769fa0f86e6ff8e42ee38ce0ce1fa4cbb747ea7e06d56aa"},
{file = "numpy-1.23.5-cp311-cp311-manylinux_2_17_aarch64.manylinux2014_aarch64.whl", hash = "sha256:5039f55555e1eab31124a5768898c9e22c25a65c1e0037f4d7c495a45778c9f2"},
{file = "numpy-1.23.5-cp311-cp311-manylinux_2_17_x86_64.manylinux2014_x86_64.whl", hash = "sha256:58f545efd1108e647604a1b5aa809591ccd2540f468a880bedb97247e72db387"},
{file = "numpy-1.23.5-cp311-cp311-win32.whl", hash = "sha256:b2a9ab7c279c91974f756c84c365a669a887efa287365a8e2c418f8b3ba73fb0"},
{file = "numpy-1.23.5-cp311-cp311-win_amd64.whl", hash = "sha256:0cbe9848fad08baf71de1a39e12d1b6310f1d5b2d0ea4de051058e6e1076852d"},
{file = "numpy-1.23.5-cp38-cp38-macosx_10_9_x86_64.whl", hash = "sha256:f063b69b090c9d918f9df0a12116029e274daf0181df392839661c4c7ec9018a"},
{file = "numpy-1.23.5-cp38-cp38-macosx_11_0_arm64.whl", hash = "sha256:0aaee12d8883552fadfc41e96b4c82ee7d794949e2a7c3b3a7201e968c7ecab9"},
{file = "numpy-1.23.5-cp38-cp38-manylinux_2_17_aarch64.manylinux2014_aarch64.whl", hash = "sha256:92c8c1e89a1f5028a4c6d9e3ccbe311b6ba53694811269b992c0b224269e2398"},
{file = "numpy-1.23.5-cp38-cp38-manylinux_2_17_x86_64.manylinux2014_x86_64.whl", hash = "sha256:d208a0f8729f3fb790ed18a003f3a57895b989b40ea4dce4717e9cf4af62c6bb"},
{file = "numpy-1.23.5-cp38-cp38-win32.whl", hash = "sha256:06005a2ef6014e9956c09ba07654f9837d9e26696a0470e42beedadb78c11b07"},
{file = "numpy-1.23.5-cp38-cp38-win_amd64.whl", hash = "sha256:ca51fcfcc5f9354c45f400059e88bc09215fb71a48d3768fb80e357f3b457e1e"},
{file = "numpy-1.23.5-cp39-cp39-macosx_10_9_x86_64.whl", hash = "sha256:8969bfd28e85c81f3f94eb4a66bc2cf1dbdc5c18efc320af34bffc54d6b1e38f"},
{file = "numpy-1.23.5-cp39-cp39-macosx_11_0_arm64.whl", hash = "sha256:a7ac231a08bb37f852849bbb387a20a57574a97cfc7b6cabb488a4fc8be176de"},
{file = "numpy-1.23.5-cp39-cp39-manylinux_2_17_aarch64.manylinux2014_aarch64.whl", hash = "sha256:bf837dc63ba5c06dc8797c398db1e223a466c7ece27a1f7b5232ba3466aafe3d"},
{file = "numpy-1.23.5-cp39-cp39-manylinux_2_17_x86_64.manylinux2014_x86_64.whl", hash = "sha256:33161613d2269025873025b33e879825ec7b1d831317e68f4f2f0f84ed14c719"},
{file = "numpy-1.23.5-cp39-cp39-win32.whl", hash = "sha256:af1da88f6bc3d2338ebbf0e22fe487821ea4d8e89053e25fa59d1d79786e7481"},
{file = "numpy-1.23.5-cp39-cp39-win_amd64.whl", hash = "sha256:09b7847f7e83ca37c6e627682f145856de331049013853f344f37b0c9690e3df"},
{file = "numpy-1.23.5-pp38-pypy38_pp73-macosx_10_9_x86_64.whl", hash = "sha256:abdde9f795cf292fb9651ed48185503a2ff29be87770c3b8e2a14b0cd7aa16f8"},
{file = "numpy-1.23.5-pp38-pypy38_pp73-manylinux_2_17_x86_64.manylinux2014_x86_64.whl", hash = "sha256:f9a909a8bae284d46bbfdefbdd4a262ba19d3bc9921b1e76126b1d21c3c34135"},
{file = "numpy-1.23.5-pp38-pypy38_pp73-win_amd64.whl", hash = "sha256:01dd17cbb340bf0fc23981e52e1d18a9d4050792e8fb8363cecbf066a84b827d"},
{file = "numpy-1.23.5.tar.gz", hash = "sha256:1b1766d6f397c18153d40015ddfc79ddb715cabadc04d2d228d4e5a8bc4ded1a"},
]
oauthlib = [
{file = "oauthlib-3.2.2-py3-none-any.whl", hash = "sha256:8139f29aac13e25d502680e9e19963e83f16838d48a0d71c287fe40e7067fbca"},
{file = "oauthlib-3.2.2.tar.gz", hash = "sha256:9859c40929662bec5d64f34d01c99e093149682a3f38915dc0655d5a633dd918"},
]
opt-einsum = [
{file = "opt_einsum-3.3.0-py3-none-any.whl", hash = "sha256:2455e59e3947d3c275477df7f5205b30635e266fe6dc300e3d9f9646bfcea147"},
{file = "opt_einsum-3.3.0.tar.gz", hash = "sha256:59f6475f77bbc37dcf7cd748519c0ec60722e91e63ca114e68821c0c54a46549"},
]
packaging = [
{file = "packaging-21.3-py3-none-any.whl", hash = "sha256:ef103e05f519cdc783ae24ea4e2e0f508a9c99b2d4969652eed6a2e1ea5bd522"},
{file = "packaging-21.3.tar.gz", hash = "sha256:dd47c42927d89ab911e606518907cc2d3a1f38bbd026385970643f9c5b8ecfeb"},
]
pandas = [
{file = "pandas-1.5.2-cp310-cp310-macosx_10_9_universal2.whl", hash = "sha256:e9dbacd22555c2d47f262ef96bb4e30880e5956169741400af8b306bbb24a273"},
{file = "pandas-1.5.2-cp310-cp310-macosx_10_9_x86_64.whl", hash = "sha256:e2b83abd292194f350bb04e188f9379d36b8dfac24dd445d5c87575f3beaf789"},
{file = "pandas-1.5.2-cp310-cp310-macosx_11_0_arm64.whl", hash = "sha256:2552bffc808641c6eb471e55aa6899fa002ac94e4eebfa9ec058649122db5824"},
{file = "pandas-1.5.2-cp310-cp310-manylinux_2_17_aarch64.manylinux2014_aarch64.whl", hash = "sha256:1fc87eac0541a7d24648a001d553406f4256e744d92df1df8ebe41829a915028"},
{file = "pandas-1.5.2-cp310-cp310-manylinux_2_17_x86_64.manylinux2014_x86_64.whl", hash = "sha256:d0d8fd58df5d17ddb8c72a5075d87cd80d71b542571b5f78178fb067fa4e9c72"},
{file = "pandas-1.5.2-cp310-cp310-win_amd64.whl", hash = "sha256:4aed257c7484d01c9a194d9a94758b37d3d751849c05a0050c087a358c41ad1f"},
{file = "pandas-1.5.2-cp311-cp311-macosx_10_9_universal2.whl", hash = "sha256:375262829c8c700c3e7cbb336810b94367b9c4889818bbd910d0ecb4e45dc261"},
{file = "pandas-1.5.2-cp311-cp311-macosx_10_9_x86_64.whl", hash = "sha256:cc3cd122bea268998b79adebbb8343b735a5511ec14efb70a39e7acbc11ccbdc"},
{file = "pandas-1.5.2-cp311-cp311-macosx_11_0_arm64.whl", hash = "sha256:b4f5a82afa4f1ff482ab8ded2ae8a453a2cdfde2001567b3ca24a4c5c5ca0db3"},
{file = "pandas-1.5.2-cp311-cp311-manylinux_2_17_aarch64.manylinux2014_aarch64.whl", hash = "sha256:8092a368d3eb7116e270525329a3e5c15ae796ccdf7ccb17839a73b4f5084a39"},
{file = "pandas-1.5.2-cp311-cp311-manylinux_2_17_x86_64.manylinux2014_x86_64.whl", hash = "sha256:f6257b314fc14958f8122779e5a1557517b0f8e500cfb2bd53fa1f75a8ad0af2"},
{file = "pandas-1.5.2-cp311-cp311-win_amd64.whl", hash = "sha256:82ae615826da838a8e5d4d630eb70c993ab8636f0eff13cb28aafc4291b632b5"},
{file = "pandas-1.5.2-cp38-cp38-macosx_10_9_universal2.whl", hash = "sha256:457d8c3d42314ff47cc2d6c54f8fc0d23954b47977b2caed09cd9635cb75388b"},
{file = "pandas-1.5.2-cp38-cp38-macosx_10_9_x86_64.whl", hash = "sha256:c009a92e81ce836212ce7aa98b219db7961a8b95999b97af566b8dc8c33e9519"},
{file = "pandas-1.5.2-cp38-cp38-macosx_11_0_arm64.whl", hash = "sha256:71f510b0efe1629bf2f7c0eadb1ff0b9cf611e87b73cd017e6b7d6adb40e2b3a"},
{file = "pandas-1.5.2-cp38-cp38-manylinux_2_17_aarch64.manylinux2014_aarch64.whl", hash = "sha256:a40dd1e9f22e01e66ed534d6a965eb99546b41d4d52dbdb66565608fde48203f"},
{file = "pandas-1.5.2-cp38-cp38-manylinux_2_17_x86_64.manylinux2014_x86_64.whl", hash = "sha256:5ae7e989f12628f41e804847a8cc2943d362440132919a69429d4dea1f164da0"},
{file = "pandas-1.5.2-cp38-cp38-win32.whl", hash = "sha256:530948945e7b6c95e6fa7aa4be2be25764af53fba93fe76d912e35d1c9ee46f5"},
{file = "pandas-1.5.2-cp38-cp38-win_amd64.whl", hash = "sha256:73f219fdc1777cf3c45fde7f0708732ec6950dfc598afc50588d0d285fddaefc"},
{file = "pandas-1.5.2-cp39-cp39-macosx_10_9_universal2.whl", hash = "sha256:9608000a5a45f663be6af5c70c3cbe634fa19243e720eb380c0d378666bc7702"},
{file = "pandas-1.5.2-cp39-cp39-macosx_10_9_x86_64.whl", hash = "sha256:315e19a3e5c2ab47a67467fc0362cb36c7c60a93b6457f675d7d9615edad2ebe"},
{file = "pandas-1.5.2-cp39-cp39-macosx_11_0_arm64.whl", hash = "sha256:e18bc3764cbb5e118be139b3b611bc3fbc5d3be42a7e827d1096f46087b395eb"},
{file = "pandas-1.5.2-cp39-cp39-manylinux_2_17_aarch64.manylinux2014_aarch64.whl", hash = "sha256:0183cb04a057cc38fde5244909fca9826d5d57c4a5b7390c0cc3fa7acd9fa883"},
{file = "pandas-1.5.2-cp39-cp39-manylinux_2_17_x86_64.manylinux2014_x86_64.whl", hash = "sha256:344021ed3e639e017b452aa8f5f6bf38a8806f5852e217a7594417fb9bbfa00e"},
{file = "pandas-1.5.2-cp39-cp39-win32.whl", hash = "sha256:e7469271497960b6a781eaa930cba8af400dd59b62ec9ca2f4d31a19f2f91090"},
{file = "pandas-1.5.2-cp39-cp39-win_amd64.whl", hash = "sha256:c218796d59d5abd8780170c937b812c9637e84c32f8271bbf9845970f8c1351f"},
{file = "pandas-1.5.2.tar.gz", hash = "sha256:220b98d15cee0b2cd839a6358bd1f273d0356bf964c1a1aeb32d47db0215488b"},
]
pandocfilters = [
{file = "pandocfilters-1.5.0-py2.py3-none-any.whl", hash = "sha256:33aae3f25fd1a026079f5d27bdd52496f0e0803b3469282162bafdcbdf6ef14f"},
{file = "pandocfilters-1.5.0.tar.gz", hash = "sha256:0b679503337d233b4339a817bfc8c50064e2eff681314376a47cb582305a7a38"},
]
parso = [
{file = "parso-0.8.3-py2.py3-none-any.whl", hash = "sha256:c001d4636cd3aecdaf33cbb40aebb59b094be2a74c556778ef5576c175e19e75"},
{file = "parso-0.8.3.tar.gz", hash = "sha256:8c07be290bb59f03588915921e29e8a50002acaf2cdc5fa0e0114f91709fafa0"},
]
partd = [
{file = "partd-1.3.0-py3-none-any.whl", hash = "sha256:6393a0c898a0ad945728e34e52de0df3ae295c5aff2e2926ba7cc3c60a734a15"},
{file = "partd-1.3.0.tar.gz", hash = "sha256:ce91abcdc6178d668bcaa431791a5a917d902341cb193f543fe445d494660485"},
]
pastel = [
{file = "pastel-0.2.1-py2.py3-none-any.whl", hash = "sha256:4349225fcdf6c2bb34d483e523475de5bb04a5c10ef711263452cb37d7dd4364"},
{file = "pastel-0.2.1.tar.gz", hash = "sha256:e6581ac04e973cac858828c6202c1e1e81fee1dc7de7683f3e1ffe0bfd8a573d"},
]
pathos = [
{file = "pathos-0.2.9-py2-none-any.whl", hash = "sha256:6a6ddb514ce2719f63fb88d5ec4f4490e436b636b54f1102d952c9f7c52f18e2"},
{file = "pathos-0.2.9-py3-none-any.whl", hash = "sha256:1c44373d8692897d5d15a8aa3b3a442ddc0814c5e848f4ff0ded5491f34b1dac"},
{file = "pathos-0.2.9.tar.gz", hash = "sha256:a8dbddcd3d9af32ada7c6dc088d845588c513a29a0ba19ab9f64c5cd83692934"},
]
pathspec = [
{file = "pathspec-0.10.2-py3-none-any.whl", hash = "sha256:88c2606f2c1e818b978540f73ecc908e13999c6c3a383daf3705652ae79807a5"},
{file = "pathspec-0.10.2.tar.gz", hash = "sha256:8f6bf73e5758fd365ef5d58ce09ac7c27d2833a8d7da51712eac6e27e35141b0"},
]
pathy = [
{file = "pathy-0.9.0-py3-none-any.whl", hash = "sha256:7ac1ddae1d3013b83e693a2236f29661983cc8c0bcc52efca683f48d3663adae"},
{file = "pathy-0.9.0.tar.gz", hash = "sha256:5a9bd1d33b6a7980e6616e055814445b4646443151ef08fdd130fcbc7a2579c4"},
]
patsy = [
{file = "patsy-0.5.3-py2.py3-none-any.whl", hash = "sha256:7eb5349754ed6aa982af81f636479b1b8db9d5b1a6e957a6016ec0534b5c86b7"},
{file = "patsy-0.5.3.tar.gz", hash = "sha256:bdc18001875e319bc91c812c1eb6a10be4bb13cb81eb763f466179dca3b67277"},
]
pexpect = [
{file = "pexpect-4.8.0-py2.py3-none-any.whl", hash = "sha256:0b48a55dcb3c05f3329815901ea4fc1537514d6ba867a152b581d69ae3710937"},
{file = "pexpect-4.8.0.tar.gz", hash = "sha256:fc65a43959d153d0114afe13997d439c22823a27cefceb5ff35c2178c6784c0c"},
]
pickleshare = [
{file = "pickleshare-0.7.5-py2.py3-none-any.whl", hash = "sha256:9649af414d74d4df115d5d718f82acb59c9d418196b7b4290ed47a12ce62df56"},
{file = "pickleshare-0.7.5.tar.gz", hash = "sha256:87683d47965c1da65cdacaf31c8441d12b8044cdec9aca500cd78fc2c683afca"},
]
pillow = [
{file = "Pillow-9.3.0-1-cp37-cp37m-win32.whl", hash = "sha256:e6ea6b856a74d560d9326c0f5895ef8050126acfdc7ca08ad703eb0081e82b74"},
{file = "Pillow-9.3.0-1-cp37-cp37m-win_amd64.whl", hash = "sha256:32a44128c4bdca7f31de5be641187367fe2a450ad83b833ef78910397db491aa"},
{file = "Pillow-9.3.0-cp310-cp310-macosx_10_10_x86_64.whl", hash = "sha256:0b7257127d646ff8676ec8a15520013a698d1fdc48bc2a79ba4e53df792526f2"},
{file = "Pillow-9.3.0-cp310-cp310-macosx_11_0_arm64.whl", hash = "sha256:b90f7616ea170e92820775ed47e136208e04c967271c9ef615b6fbd08d9af0e3"},
{file = "Pillow-9.3.0-cp310-cp310-manylinux_2_17_aarch64.manylinux2014_aarch64.whl", hash = "sha256:68943d632f1f9e3dce98908e873b3a090f6cba1cbb1b892a9e8d97c938871fbe"},
{file = "Pillow-9.3.0-cp310-cp310-manylinux_2_17_i686.manylinux2014_i686.whl", hash = "sha256:be55f8457cd1eac957af0c3f5ece7bc3f033f89b114ef30f710882717670b2a8"},
{file = "Pillow-9.3.0-cp310-cp310-manylinux_2_17_x86_64.manylinux2014_x86_64.whl", hash = "sha256:5d77adcd56a42d00cc1be30843d3426aa4e660cab4a61021dc84467123f7a00c"},
{file = "Pillow-9.3.0-cp310-cp310-manylinux_2_28_aarch64.whl", hash = "sha256:829f97c8e258593b9daa80638aee3789b7df9da5cf1336035016d76f03b8860c"},
{file = "Pillow-9.3.0-cp310-cp310-manylinux_2_28_x86_64.whl", hash = "sha256:801ec82e4188e935c7f5e22e006d01611d6b41661bba9fe45b60e7ac1a8f84de"},
{file = "Pillow-9.3.0-cp310-cp310-musllinux_1_1_x86_64.whl", hash = "sha256:871b72c3643e516db4ecf20efe735deb27fe30ca17800e661d769faab45a18d7"},
{file = "Pillow-9.3.0-cp310-cp310-win32.whl", hash = "sha256:655a83b0058ba47c7c52e4e2df5ecf484c1b0b0349805896dd350cbc416bdd91"},
{file = "Pillow-9.3.0-cp310-cp310-win_amd64.whl", hash = "sha256:9f47eabcd2ded7698106b05c2c338672d16a6f2a485e74481f524e2a23c2794b"},
{file = "Pillow-9.3.0-cp311-cp311-macosx_10_10_x86_64.whl", hash = "sha256:57751894f6618fd4308ed8e0c36c333e2f5469744c34729a27532b3db106ee20"},
{file = "Pillow-9.3.0-cp311-cp311-macosx_11_0_arm64.whl", hash = "sha256:7db8b751ad307d7cf238f02101e8e36a128a6cb199326e867d1398067381bff4"},
{file = "Pillow-9.3.0-cp311-cp311-manylinux_2_17_aarch64.manylinux2014_aarch64.whl", hash = "sha256:3033fbe1feb1b59394615a1cafaee85e49d01b51d54de0cbf6aa8e64182518a1"},
{file = "Pillow-9.3.0-cp311-cp311-manylinux_2_17_i686.manylinux2014_i686.whl", hash = "sha256:22b012ea2d065fd163ca096f4e37e47cd8b59cf4b0fd47bfca6abb93df70b34c"},
{file = "Pillow-9.3.0-cp311-cp311-manylinux_2_17_x86_64.manylinux2014_x86_64.whl", hash = "sha256:b9a65733d103311331875c1dca05cb4606997fd33d6acfed695b1232ba1df193"},
{file = "Pillow-9.3.0-cp311-cp311-manylinux_2_28_aarch64.whl", hash = "sha256:502526a2cbfa431d9fc2a079bdd9061a2397b842bb6bc4239bb176da00993812"},
{file = "Pillow-9.3.0-cp311-cp311-manylinux_2_28_x86_64.whl", hash = "sha256:90fb88843d3902fe7c9586d439d1e8c05258f41da473952aa8b328d8b907498c"},
{file = "Pillow-9.3.0-cp311-cp311-musllinux_1_1_x86_64.whl", hash = "sha256:89dca0ce00a2b49024df6325925555d406b14aa3efc2f752dbb5940c52c56b11"},
{file = "Pillow-9.3.0-cp311-cp311-win32.whl", hash = "sha256:3168434d303babf495d4ba58fc22d6604f6e2afb97adc6a423e917dab828939c"},
{file = "Pillow-9.3.0-cp311-cp311-win_amd64.whl", hash = "sha256:18498994b29e1cf86d505edcb7edbe814d133d2232d256db8c7a8ceb34d18cef"},
{file = "Pillow-9.3.0-cp37-cp37m-macosx_10_10_x86_64.whl", hash = "sha256:772a91fc0e03eaf922c63badeca75e91baa80fe2f5f87bdaed4280662aad25c9"},
{file = "Pillow-9.3.0-cp37-cp37m-manylinux_2_17_aarch64.manylinux2014_aarch64.whl", hash = "sha256:afa4107d1b306cdf8953edde0534562607fe8811b6c4d9a486298ad31de733b2"},
{file = "Pillow-9.3.0-cp37-cp37m-manylinux_2_17_i686.manylinux2014_i686.whl", hash = "sha256:b4012d06c846dc2b80651b120e2cdd787b013deb39c09f407727ba90015c684f"},
{file = "Pillow-9.3.0-cp37-cp37m-manylinux_2_17_x86_64.manylinux2014_x86_64.whl", hash = "sha256:77ec3e7be99629898c9a6d24a09de089fa5356ee408cdffffe62d67bb75fdd72"},
{file = "Pillow-9.3.0-cp37-cp37m-manylinux_2_28_aarch64.whl", hash = "sha256:6c738585d7a9961d8c2821a1eb3dcb978d14e238be3d70f0a706f7fa9316946b"},
{file = "Pillow-9.3.0-cp37-cp37m-manylinux_2_28_x86_64.whl", hash = "sha256:828989c45c245518065a110434246c44a56a8b2b2f6347d1409c787e6e4651ee"},
{file = "Pillow-9.3.0-cp37-cp37m-win32.whl", hash = "sha256:82409ffe29d70fd733ff3c1025a602abb3e67405d41b9403b00b01debc4c9a29"},
{file = "Pillow-9.3.0-cp37-cp37m-win_amd64.whl", hash = "sha256:41e0051336807468be450d52b8edd12ac60bebaa97fe10c8b660f116e50b30e4"},
{file = "Pillow-9.3.0-cp38-cp38-macosx_10_10_x86_64.whl", hash = "sha256:b03ae6f1a1878233ac620c98f3459f79fd77c7e3c2b20d460284e1fb370557d4"},
{file = "Pillow-9.3.0-cp38-cp38-macosx_11_0_arm64.whl", hash = "sha256:4390e9ce199fc1951fcfa65795f239a8a4944117b5935a9317fb320e7767b40f"},
{file = "Pillow-9.3.0-cp38-cp38-manylinux_2_17_aarch64.manylinux2014_aarch64.whl", hash = "sha256:40e1ce476a7804b0fb74bcfa80b0a2206ea6a882938eaba917f7a0f004b42502"},
{file = "Pillow-9.3.0-cp38-cp38-manylinux_2_17_i686.manylinux2014_i686.whl", hash = "sha256:a0a06a052c5f37b4ed81c613a455a81f9a3a69429b4fd7bb913c3fa98abefc20"},
{file = "Pillow-9.3.0-cp38-cp38-manylinux_2_17_x86_64.manylinux2014_x86_64.whl", hash = "sha256:03150abd92771742d4a8cd6f2fa6246d847dcd2e332a18d0c15cc75bf6703040"},
{file = "Pillow-9.3.0-cp38-cp38-manylinux_2_28_aarch64.whl", hash = "sha256:15c42fb9dea42465dfd902fb0ecf584b8848ceb28b41ee2b58f866411be33f07"},
{file = "Pillow-9.3.0-cp38-cp38-manylinux_2_28_x86_64.whl", hash = "sha256:51e0e543a33ed92db9f5ef69a0356e0b1a7a6b6a71b80df99f1d181ae5875636"},
{file = "Pillow-9.3.0-cp38-cp38-musllinux_1_1_x86_64.whl", hash = "sha256:3dd6caf940756101205dffc5367babf288a30043d35f80936f9bfb37f8355b32"},
{file = "Pillow-9.3.0-cp38-cp38-win32.whl", hash = "sha256:f1ff2ee69f10f13a9596480335f406dd1f70c3650349e2be67ca3139280cade0"},
{file = "Pillow-9.3.0-cp38-cp38-win_amd64.whl", hash = "sha256:276a5ca930c913f714e372b2591a22c4bd3b81a418c0f6635ba832daec1cbcfc"},
{file = "Pillow-9.3.0-cp39-cp39-macosx_10_10_x86_64.whl", hash = "sha256:73bd195e43f3fadecfc50c682f5055ec32ee2c933243cafbfdec69ab1aa87cad"},
{file = "Pillow-9.3.0-cp39-cp39-macosx_11_0_arm64.whl", hash = "sha256:1c7c8ae3864846fc95f4611c78129301e203aaa2af813b703c55d10cc1628535"},
{file = "Pillow-9.3.0-cp39-cp39-manylinux_2_17_aarch64.manylinux2014_aarch64.whl", hash = "sha256:2e0918e03aa0c72ea56edbb00d4d664294815aa11291a11504a377ea018330d3"},
{file = "Pillow-9.3.0-cp39-cp39-manylinux_2_17_i686.manylinux2014_i686.whl", hash = "sha256:b0915e734b33a474d76c28e07292f196cdf2a590a0d25bcc06e64e545f2d146c"},
{file = "Pillow-9.3.0-cp39-cp39-manylinux_2_17_x86_64.manylinux2014_x86_64.whl", hash = "sha256:af0372acb5d3598f36ec0914deed2a63f6bcdb7b606da04dc19a88d31bf0c05b"},
{file = "Pillow-9.3.0-cp39-cp39-manylinux_2_28_aarch64.whl", hash = "sha256:ad58d27a5b0262c0c19b47d54c5802db9b34d38bbf886665b626aff83c74bacd"},
{file = "Pillow-9.3.0-cp39-cp39-manylinux_2_28_x86_64.whl", hash = "sha256:97aabc5c50312afa5e0a2b07c17d4ac5e865b250986f8afe2b02d772567a380c"},
{file = "Pillow-9.3.0-cp39-cp39-musllinux_1_1_x86_64.whl", hash = "sha256:9aaa107275d8527e9d6e7670b64aabaaa36e5b6bd71a1015ddd21da0d4e06448"},
{file = "Pillow-9.3.0-cp39-cp39-win32.whl", hash = "sha256:bac18ab8d2d1e6b4ce25e3424f709aceef668347db8637c2296bcf41acb7cf48"},
{file = "Pillow-9.3.0-cp39-cp39-win_amd64.whl", hash = "sha256:b472b5ea442148d1c3e2209f20f1e0bb0eb556538690fa70b5e1f79fa0ba8dc2"},
{file = "Pillow-9.3.0-pp37-pypy37_pp73-macosx_10_10_x86_64.whl", hash = "sha256:ab388aaa3f6ce52ac1cb8e122c4bd46657c15905904b3120a6248b5b8b0bc228"},
{file = "Pillow-9.3.0-pp37-pypy37_pp73-manylinux_2_17_i686.manylinux2014_i686.whl", hash = "sha256:dbb8e7f2abee51cef77673be97760abff1674ed32847ce04b4af90f610144c7b"},
{file = "Pillow-9.3.0-pp37-pypy37_pp73-manylinux_2_17_x86_64.manylinux2014_x86_64.whl", hash = "sha256:bca31dd6014cb8b0b2db1e46081b0ca7d936f856da3b39744aef499db5d84d02"},
{file = "Pillow-9.3.0-pp37-pypy37_pp73-manylinux_2_28_x86_64.whl", hash = "sha256:c7025dce65566eb6e89f56c9509d4f628fddcedb131d9465cacd3d8bac337e7e"},
{file = "Pillow-9.3.0-pp37-pypy37_pp73-win_amd64.whl", hash = "sha256:ebf2029c1f464c59b8bdbe5143c79fa2045a581ac53679733d3a91d400ff9efb"},
{file = "Pillow-9.3.0-pp38-pypy38_pp73-macosx_10_10_x86_64.whl", hash = "sha256:b59430236b8e58840a0dfb4099a0e8717ffb779c952426a69ae435ca1f57210c"},
{file = "Pillow-9.3.0-pp38-pypy38_pp73-manylinux_2_17_i686.manylinux2014_i686.whl", hash = "sha256:12ce4932caf2ddf3e41d17fc9c02d67126935a44b86df6a206cf0d7161548627"},
{file = "Pillow-9.3.0-pp38-pypy38_pp73-manylinux_2_17_x86_64.manylinux2014_x86_64.whl", hash = "sha256:ae5331c23ce118c53b172fa64a4c037eb83c9165aba3a7ba9ddd3ec9fa64a699"},
{file = "Pillow-9.3.0-pp38-pypy38_pp73-manylinux_2_28_x86_64.whl", hash = "sha256:0b07fffc13f474264c336298d1b4ce01d9c5a011415b79d4ee5527bb69ae6f65"},
{file = "Pillow-9.3.0-pp38-pypy38_pp73-win_amd64.whl", hash = "sha256:073adb2ae23431d3b9bcbcff3fe698b62ed47211d0716b067385538a1b0f28b8"},
{file = "Pillow-9.3.0.tar.gz", hash = "sha256:c935a22a557a560108d780f9a0fc426dd7459940dc54faa49d83249c8d3e760f"},
]
pip = [
{file = "pip-22.3.1-py3-none-any.whl", hash = "sha256:908c78e6bc29b676ede1c4d57981d490cb892eb45cd8c214ab6298125119e077"},
{file = "pip-22.3.1.tar.gz", hash = "sha256:65fd48317359f3af8e593943e6ae1506b66325085ea64b706a998c6e83eeaf38"},
]
pkgutil-resolve-name = [
{file = "pkgutil_resolve_name-1.3.10-py3-none-any.whl", hash = "sha256:ca27cc078d25c5ad71a9de0a7a330146c4e014c2462d9af19c6b828280649c5e"},
{file = "pkgutil_resolve_name-1.3.10.tar.gz", hash = "sha256:357d6c9e6a755653cfd78893817c0853af365dd51ec97f3d358a819373bbd174"},
]
platformdirs = [
{file = "platformdirs-2.5.4-py3-none-any.whl", hash = "sha256:af0276409f9a02373d540bf8480021a048711d572745aef4b7842dad245eba10"},
{file = "platformdirs-2.5.4.tar.gz", hash = "sha256:1006647646d80f16130f052404c6b901e80ee4ed6bef6792e1f238a8969106f7"},
]
plotly = [
{file = "plotly-5.11.0-py2.py3-none-any.whl", hash = "sha256:52fd74b08aa4fd5a55b9d3034a30dbb746e572d7ed84897422f927fdf687ea5f"},
{file = "plotly-5.11.0.tar.gz", hash = "sha256:4efef479c2ec1d86dcdac8405b6ca70ca65649a77408e39a7e84a1ea2db6c787"},
]
pluggy = [
{file = "pluggy-1.0.0-py2.py3-none-any.whl", hash = "sha256:74134bbf457f031a36d68416e1509f34bd5ccc019f0bcc952c7b909d06b37bd3"},
{file = "pluggy-1.0.0.tar.gz", hash = "sha256:4224373bacce55f955a878bf9cfa763c1e360858e330072059e10bad68531159"},
]
poethepoet = [
{file = "poethepoet-0.16.4-py3-none-any.whl", hash = "sha256:1f05dce92ca6457d018696b614ba2149261380f30ceb21c196daf19c0c2e1fcd"},
{file = "poethepoet-0.16.4.tar.gz", hash = "sha256:a80f6bba64812515c406ffc218aff833951b17854eb111f724b48c44f9759af5"},
]
pox = [
{file = "pox-0.3.2-py3-none-any.whl", hash = "sha256:56fe2f099ecd8a557b8948082504492de90e8598c34733c9b1fdeca8f7b6de61"},
{file = "pox-0.3.2.tar.gz", hash = "sha256:e825225297638d6e3d49415f8cfb65407a5d15e56f2fb7fe9d9b9e3050c65ee1"},
]
ppft = [
{file = "ppft-1.7.6.6-py3-none-any.whl", hash = "sha256:f355d2caeed8bd7c9e4a860c471f31f7e66d1ada2791ab5458ea7dca15a51e41"},
{file = "ppft-1.7.6.6.tar.gz", hash = "sha256:f933f0404f3e808bc860745acb3b79cd4fe31ea19a20889a645f900415be60f1"},
]
preshed = [
{file = "preshed-3.0.8-cp310-cp310-macosx_10_9_x86_64.whl", hash = "sha256:ea4b6df8ef7af38e864235256793bc3056e9699d991afcf6256fa298858582fc"},
{file = "preshed-3.0.8-cp310-cp310-macosx_11_0_arm64.whl", hash = "sha256:8e945fc814bdc29564a2ce137c237b3a9848aa1e76a1160369b6e0d328151fdd"},
{file = "preshed-3.0.8-cp310-cp310-manylinux_2_17_aarch64.manylinux2014_aarch64.whl", hash = "sha256:f9a4833530fe53001c351974e0c8bb660211b8d0358e592af185fec1ae12b2d0"},
{file = "preshed-3.0.8-cp310-cp310-manylinux_2_5_x86_64.manylinux1_x86_64.manylinux_2_17_x86_64.manylinux2014_x86_64.whl", hash = "sha256:e1472ee231f323b4f4368b1b5f8f08481ed43af89697d45450c6ae4af46ac08a"},
{file = "preshed-3.0.8-cp310-cp310-win_amd64.whl", hash = "sha256:c8a2e2931eea7e500fbf8e014b69022f3fab2e35a70da882e2fc753e5e487ae3"},
{file = "preshed-3.0.8-cp311-cp311-macosx_10_9_x86_64.whl", hash = "sha256:0e1bb8701df7861af26a312225bdf7c4822ac06fcf75aeb60fe2b0a20e64c222"},
{file = "preshed-3.0.8-cp311-cp311-macosx_11_0_arm64.whl", hash = "sha256:e9aef2b0b7687aecef48b1c6ff657d407ff24e75462877dcb888fa904c4a9c6d"},
{file = "preshed-3.0.8-cp311-cp311-manylinux_2_17_aarch64.manylinux2014_aarch64.whl", hash = "sha256:854d58a8913ebf3b193b0dc8064155b034e8987de25f26838dfeca09151fda8a"},
{file = "preshed-3.0.8-cp311-cp311-manylinux_2_5_x86_64.manylinux1_x86_64.manylinux_2_17_x86_64.manylinux2014_x86_64.whl", hash = "sha256:135e2ac0db1a3948d6ec295598c7e182b52c394663f2fcfe36a97ae51186be21"},
{file = "preshed-3.0.8-cp311-cp311-win_amd64.whl", hash = "sha256:019d8fa4161035811fb2804d03214143298739e162d0ad24e087bd46c50970f5"},
{file = "preshed-3.0.8-cp36-cp36m-manylinux_2_17_aarch64.manylinux2014_aarch64.whl", hash = "sha256:6a49ce52856fbb3ef4f1cc744c53f5d7e1ca370b1939620ac2509a6d25e02a50"},
{file = "preshed-3.0.8-cp36-cp36m-manylinux_2_5_x86_64.manylinux1_x86_64.manylinux_2_17_x86_64.manylinux2014_x86_64.whl", hash = "sha256:fdbc2957b36115a576c515ffe963919f19d2683f3c76c9304ae88ef59f6b5ca6"},
{file = "preshed-3.0.8-cp36-cp36m-win_amd64.whl", hash = "sha256:09cc9da2ac1b23010ce7d88a5e20f1033595e6dd80be14318e43b9409f4c7697"},
{file = "preshed-3.0.8-cp37-cp37m-macosx_10_9_x86_64.whl", hash = "sha256:e19c8069f1a1450f835f23d47724530cf716d581fcafb398f534d044f806b8c2"},
{file = "preshed-3.0.8-cp37-cp37m-manylinux_2_17_aarch64.manylinux2014_aarch64.whl", hash = "sha256:25b5ef5e387a0e17ff41202a8c1816184ab6fb3c0d0b847bf8add0ed5941eb8d"},
{file = "preshed-3.0.8-cp37-cp37m-manylinux_2_5_x86_64.manylinux1_x86_64.manylinux_2_17_x86_64.manylinux2014_x86_64.whl", hash = "sha256:53d3e2456a085425c66af7baba62d7eaa24aa5e460e1a9e02c401a2ed59abd7b"},
{file = "preshed-3.0.8-cp37-cp37m-win_amd64.whl", hash = "sha256:85e98a618fb36cdcc37501d8b9b8c1246651cc2f2db3a70702832523e0ae12f4"},
{file = "preshed-3.0.8-cp38-cp38-macosx_10_9_x86_64.whl", hash = "sha256:7f8837bf616335464f3713cbf562a3dcaad22c3ca9193f957018964ef871a68b"},
{file = "preshed-3.0.8-cp38-cp38-macosx_11_0_arm64.whl", hash = "sha256:720593baf2c2e295f855192974799e486da5f50d4548db93c44f5726a43cefb9"},
{file = "preshed-3.0.8-cp38-cp38-manylinux_2_17_aarch64.manylinux2014_aarch64.whl", hash = "sha256:e0ad3d860b9ce88a74cf7414bb4b1c6fd833813e7b818e76f49272c4974b19ce"},
{file = "preshed-3.0.8-cp38-cp38-manylinux_2_5_x86_64.manylinux1_x86_64.manylinux_2_17_x86_64.manylinux2014_x86_64.whl", hash = "sha256:fd19d48440b152657966a52e627780c0ddbe9d907b8d7ee4598505e80a3c55c7"},
{file = "preshed-3.0.8-cp38-cp38-win_amd64.whl", hash = "sha256:246e7c6890dc7fe9b10f0e31de3346b906e3862b6ef42fcbede37968f46a73bf"},
{file = "preshed-3.0.8-cp39-cp39-macosx_10_9_x86_64.whl", hash = "sha256:67643e66691770dc3434b01671648f481e3455209ce953727ef2330b16790aaa"},
{file = "preshed-3.0.8-cp39-cp39-macosx_11_0_arm64.whl", hash = "sha256:0ae25a010c9f551aa2247ee621457f679e07c57fc99d3fd44f84cb40b925f12c"},
{file = "preshed-3.0.8-cp39-cp39-manylinux_2_17_aarch64.manylinux2014_aarch64.whl", hash = "sha256:5a6a7fcf7dd2e7711051b3f0432da9ec9c748954c989f49d2cd8eabf8c2d953e"},
{file = "preshed-3.0.8-cp39-cp39-manylinux_2_5_x86_64.manylinux1_x86_64.manylinux_2_17_x86_64.manylinux2014_x86_64.whl", hash = "sha256:5942858170c4f53d9afc6352a86bbc72fc96cc4d8964b6415492114a5920d3ed"},
{file = "preshed-3.0.8-cp39-cp39-win_amd64.whl", hash = "sha256:06793022a56782ef51d74f1399925a2ba958e50c5cfbc6fa5b25c4945e158a07"},
{file = "preshed-3.0.8.tar.gz", hash = "sha256:6c74c70078809bfddda17be96483c41d06d717934b07cab7921011d81758b357"},
]
progressbar2 = [
{file = "progressbar2-4.2.0-py2.py3-none-any.whl", hash = "sha256:1a8e201211f99a85df55f720b3b6da7fb5c8cdef56792c4547205be2de5ea606"},
{file = "progressbar2-4.2.0.tar.gz", hash = "sha256:1393922fcb64598944ad457569fbeb4b3ac189ef50b5adb9cef3284e87e394ce"},
]
prometheus-client = [
{file = "prometheus_client-0.15.0-py3-none-any.whl", hash = "sha256:db7c05cbd13a0f79975592d112320f2605a325969b270a94b71dcabc47b931d2"},
{file = "prometheus_client-0.15.0.tar.gz", hash = "sha256:be26aa452490cfcf6da953f9436e95a9f2b4d578ca80094b4458930e5f584ab1"},
]
prompt-toolkit = [
{file = "prompt_toolkit-3.0.33-py3-none-any.whl", hash = "sha256:ced598b222f6f4029c0800cefaa6a17373fb580cd093223003475ce32805c35b"},
{file = "prompt_toolkit-3.0.33.tar.gz", hash = "sha256:535c29c31216c77302877d5120aef6c94ff573748a5b5ca5b1b1f76f5e700c73"},
]
protobuf = [
{file = "protobuf-3.19.6-cp310-cp310-manylinux2014_aarch64.whl", hash = "sha256:010be24d5a44be7b0613750ab40bc8b8cedc796db468eae6c779b395f50d1fa1"},
{file = "protobuf-3.19.6-cp310-cp310-manylinux_2_17_x86_64.manylinux2014_x86_64.whl", hash = "sha256:11478547958c2dfea921920617eb457bc26867b0d1aa065ab05f35080c5d9eb6"},
{file = "protobuf-3.19.6-cp310-cp310-win32.whl", hash = "sha256:559670e006e3173308c9254d63facb2c03865818f22204037ab76f7a0ff70b5f"},
{file = "protobuf-3.19.6-cp310-cp310-win_amd64.whl", hash = "sha256:347b393d4dd06fb93a77620781e11c058b3b0a5289262f094379ada2920a3730"},
{file = "protobuf-3.19.6-cp36-cp36m-macosx_10_9_x86_64.whl", hash = "sha256:a8ce5ae0de28b51dff886fb922012dad885e66176663950cb2344c0439ecb473"},
{file = "protobuf-3.19.6-cp36-cp36m-manylinux_2_17_x86_64.manylinux2014_x86_64.whl", hash = "sha256:90b0d02163c4e67279ddb6dc25e063db0130fc299aefabb5d481053509fae5c8"},
{file = "protobuf-3.19.6-cp36-cp36m-win32.whl", hash = "sha256:30f5370d50295b246eaa0296533403961f7e64b03ea12265d6dfce3a391d8992"},
{file = "protobuf-3.19.6-cp36-cp36m-win_amd64.whl", hash = "sha256:0c0714b025ec057b5a7600cb66ce7c693815f897cfda6d6efb58201c472e3437"},
{file = "protobuf-3.19.6-cp37-cp37m-macosx_10_9_x86_64.whl", hash = "sha256:5057c64052a1f1dd7d4450e9aac25af6bf36cfbfb3a1cd89d16393a036c49157"},
{file = "protobuf-3.19.6-cp37-cp37m-manylinux2014_aarch64.whl", hash = "sha256:bb6776bd18f01ffe9920e78e03a8676530a5d6c5911934c6a1ac6eb78973ecb6"},
{file = "protobuf-3.19.6-cp37-cp37m-manylinux_2_17_x86_64.manylinux2014_x86_64.whl", hash = "sha256:84a04134866861b11556a82dd91ea6daf1f4925746b992f277b84013a7cc1229"},
{file = "protobuf-3.19.6-cp37-cp37m-win32.whl", hash = "sha256:4bc98de3cdccfb5cd769620d5785b92c662b6bfad03a202b83799b6ed3fa1fa7"},
{file = "protobuf-3.19.6-cp37-cp37m-win_amd64.whl", hash = "sha256:aa3b82ca1f24ab5326dcf4ea00fcbda703e986b22f3d27541654f749564d778b"},
{file = "protobuf-3.19.6-cp38-cp38-macosx_10_9_x86_64.whl", hash = "sha256:2b2d2913bcda0e0ec9a784d194bc490f5dc3d9d71d322d070b11a0ade32ff6ba"},
{file = "protobuf-3.19.6-cp38-cp38-manylinux2014_aarch64.whl", hash = "sha256:d0b635cefebd7a8a0f92020562dead912f81f401af7e71f16bf9506ff3bdbb38"},
{file = "protobuf-3.19.6-cp38-cp38-manylinux_2_17_x86_64.manylinux2014_x86_64.whl", hash = "sha256:7a552af4dc34793803f4e735aabe97ffc45962dfd3a237bdde242bff5a3de684"},
{file = "protobuf-3.19.6-cp38-cp38-win32.whl", hash = "sha256:0469bc66160180165e4e29de7f445e57a34ab68f49357392c5b2f54c656ab25e"},
{file = "protobuf-3.19.6-cp38-cp38-win_amd64.whl", hash = "sha256:91d5f1e139ff92c37e0ff07f391101df77e55ebb97f46bbc1535298d72019462"},
{file = "protobuf-3.19.6-cp39-cp39-macosx_10_9_x86_64.whl", hash = "sha256:c0ccd3f940fe7f3b35a261b1dd1b4fc850c8fde9f74207015431f174be5976b3"},
{file = "protobuf-3.19.6-cp39-cp39-manylinux2014_aarch64.whl", hash = "sha256:30a15015d86b9c3b8d6bf78d5b8c7749f2512c29f168ca259c9d7727604d0e39"},
{file = "protobuf-3.19.6-cp39-cp39-manylinux_2_17_x86_64.manylinux2014_x86_64.whl", hash = "sha256:878b4cd080a21ddda6ac6d1e163403ec6eea2e206cf225982ae04567d39be7b0"},
{file = "protobuf-3.19.6-cp39-cp39-win32.whl", hash = "sha256:5a0d7539a1b1fb7e76bf5faa0b44b30f812758e989e59c40f77a7dab320e79b9"},
{file = "protobuf-3.19.6-cp39-cp39-win_amd64.whl", hash = "sha256:bbf5cea5048272e1c60d235c7bd12ce1b14b8a16e76917f371c718bd3005f045"},
{file = "protobuf-3.19.6-py2.py3-none-any.whl", hash = "sha256:14082457dc02be946f60b15aad35e9f5c69e738f80ebbc0900a19bc83734a5a4"},
{file = "protobuf-3.19.6.tar.gz", hash = "sha256:5f5540d57a43042389e87661c6eaa50f47c19c6176e8cf1c4f287aeefeccb5c4"},
]
psutil = [
{file = "psutil-5.9.4-cp27-cp27m-macosx_10_9_x86_64.whl", hash = "sha256:c1ca331af862803a42677c120aff8a814a804e09832f166f226bfd22b56feee8"},
{file = "psutil-5.9.4-cp27-cp27m-manylinux2010_i686.whl", hash = "sha256:68908971daf802203f3d37e78d3f8831b6d1014864d7a85937941bb35f09aefe"},
{file = "psutil-5.9.4-cp27-cp27m-manylinux2010_x86_64.whl", hash = "sha256:3ff89f9b835100a825b14c2808a106b6fdcc4b15483141482a12c725e7f78549"},
{file = "psutil-5.9.4-cp27-cp27m-win32.whl", hash = "sha256:852dd5d9f8a47169fe62fd4a971aa07859476c2ba22c2254d4a1baa4e10b95ad"},
{file = "psutil-5.9.4-cp27-cp27m-win_amd64.whl", hash = "sha256:9120cd39dca5c5e1c54b59a41d205023d436799b1c8c4d3ff71af18535728e94"},
{file = "psutil-5.9.4-cp27-cp27mu-manylinux2010_i686.whl", hash = "sha256:6b92c532979bafc2df23ddc785ed116fced1f492ad90a6830cf24f4d1ea27d24"},
{file = "psutil-5.9.4-cp27-cp27mu-manylinux2010_x86_64.whl", hash = "sha256:efeae04f9516907be44904cc7ce08defb6b665128992a56957abc9b61dca94b7"},
{file = "psutil-5.9.4-cp36-abi3-macosx_10_9_x86_64.whl", hash = "sha256:54d5b184728298f2ca8567bf83c422b706200bcbbfafdc06718264f9393cfeb7"},
{file = "psutil-5.9.4-cp36-abi3-manylinux_2_12_i686.manylinux2010_i686.manylinux_2_17_i686.manylinux2014_i686.whl", hash = "sha256:16653106f3b59386ffe10e0bad3bb6299e169d5327d3f187614b1cb8f24cf2e1"},
{file = "psutil-5.9.4-cp36-abi3-manylinux_2_12_x86_64.manylinux2010_x86_64.manylinux_2_17_x86_64.manylinux2014_x86_64.whl", hash = "sha256:54c0d3d8e0078b7666984e11b12b88af2db11d11249a8ac8920dd5ef68a66e08"},
{file = "psutil-5.9.4-cp36-abi3-win32.whl", hash = "sha256:149555f59a69b33f056ba1c4eb22bb7bf24332ce631c44a319cec09f876aaeff"},
{file = "psutil-5.9.4-cp36-abi3-win_amd64.whl", hash = "sha256:fd8522436a6ada7b4aad6638662966de0d61d241cb821239b2ae7013d41a43d4"},
{file = "psutil-5.9.4-cp38-abi3-macosx_11_0_arm64.whl", hash = "sha256:6001c809253a29599bc0dfd5179d9f8a5779f9dffea1da0f13c53ee568115e1e"},
{file = "psutil-5.9.4.tar.gz", hash = "sha256:3d7f9739eb435d4b1338944abe23f49584bde5395f27487d2ee25ad9a8774a62"},
]
ptyprocess = [
{file = "ptyprocess-0.7.0-py2.py3-none-any.whl", hash = "sha256:4b41f3967fce3af57cc7e94b888626c18bf37a083e3651ca8feeb66d492fef35"},
{file = "ptyprocess-0.7.0.tar.gz", hash = "sha256:5c5d0a3b48ceee0b48485e0c26037c0acd7d29765ca3fbb5cb3831d347423220"},
]
pure-eval = [
{file = "pure_eval-0.2.2-py3-none-any.whl", hash = "sha256:01eaab343580944bc56080ebe0a674b39ec44a945e6d09ba7db3cb8cec289350"},
{file = "pure_eval-0.2.2.tar.gz", hash = "sha256:2b45320af6dfaa1750f543d714b6d1c520a1688dec6fd24d339063ce0aaa9ac3"},
]
py = [
{file = "py-1.11.0-py2.py3-none-any.whl", hash = "sha256:607c53218732647dff4acdfcd50cb62615cedf612e72d1724fb1a0cc6405b378"},
{file = "py-1.11.0.tar.gz", hash = "sha256:51c75c4126074b472f746a24399ad32f6053d1b34b68d2fa41e558e6f4a98719"},
]
pyasn1 = [
{file = "pyasn1-0.4.8-py2.py3-none-any.whl", hash = "sha256:39c7e2ec30515947ff4e87fb6f456dfc6e84857d34be479c9d4a4ba4bf46aa5d"},
{file = "pyasn1-0.4.8.tar.gz", hash = "sha256:aef77c9fb94a3ac588e87841208bdec464471d9871bd5050a287cc9a475cd0ba"},
]
pyasn1-modules = [
{file = "pyasn1-modules-0.2.8.tar.gz", hash = "sha256:905f84c712230b2c592c19470d3ca8d552de726050d1d1716282a1f6146be65e"},
{file = "pyasn1_modules-0.2.8-py2.py3-none-any.whl", hash = "sha256:a50b808ffeb97cb3601dd25981f6b016cbb3d31fbf57a8b8a87428e6158d0c74"},
]
pycodestyle = [
{file = "pycodestyle-2.8.0-py2.py3-none-any.whl", hash = "sha256:720f8b39dde8b293825e7ff02c475f3077124006db4f440dcbc9a20b76548a20"},
{file = "pycodestyle-2.8.0.tar.gz", hash = "sha256:eddd5847ef438ea1c7870ca7eb78a9d47ce0cdb4851a5523949f2601d0cbbe7f"},
]
pycparser = [
{file = "pycparser-2.21-py2.py3-none-any.whl", hash = "sha256:8ee45429555515e1f6b185e78100aea234072576aa43ab53aefcae078162fca9"},
{file = "pycparser-2.21.tar.gz", hash = "sha256:e644fdec12f7872f86c58ff790da456218b10f863970249516d60a5eaca77206"},
]
pydantic = [
{file = "pydantic-1.10.2-cp310-cp310-macosx_10_9_x86_64.whl", hash = "sha256:bb6ad4489af1bac6955d38ebcb95079a836af31e4c4f74aba1ca05bb9f6027bd"},
{file = "pydantic-1.10.2-cp310-cp310-macosx_11_0_arm64.whl", hash = "sha256:a1f5a63a6dfe19d719b1b6e6106561869d2efaca6167f84f5ab9347887d78b98"},
{file = "pydantic-1.10.2-cp310-cp310-manylinux_2_17_x86_64.manylinux2014_x86_64.whl", hash = "sha256:352aedb1d71b8b0736c6d56ad2bd34c6982720644b0624462059ab29bd6e5912"},
{file = "pydantic-1.10.2-cp310-cp310-manylinux_2_5_i686.manylinux1_i686.manylinux_2_17_i686.manylinux2014_i686.whl", hash = "sha256:19b3b9ccf97af2b7519c42032441a891a5e05c68368f40865a90eb88833c2559"},
{file = "pydantic-1.10.2-cp310-cp310-musllinux_1_1_i686.whl", hash = "sha256:e9069e1b01525a96e6ff49e25876d90d5a563bc31c658289a8772ae186552236"},
{file = "pydantic-1.10.2-cp310-cp310-musllinux_1_1_x86_64.whl", hash = "sha256:355639d9afc76bcb9b0c3000ddcd08472ae75318a6eb67a15866b87e2efa168c"},
{file = "pydantic-1.10.2-cp310-cp310-win_amd64.whl", hash = "sha256:ae544c47bec47a86bc7d350f965d8b15540e27e5aa4f55170ac6a75e5f73b644"},
{file = "pydantic-1.10.2-cp311-cp311-macosx_10_9_x86_64.whl", hash = "sha256:a4c805731c33a8db4b6ace45ce440c4ef5336e712508b4d9e1aafa617dc9907f"},
{file = "pydantic-1.10.2-cp311-cp311-macosx_11_0_arm64.whl", hash = "sha256:d49f3db871575e0426b12e2f32fdb25e579dea16486a26e5a0474af87cb1ab0a"},
{file = "pydantic-1.10.2-cp311-cp311-manylinux_2_17_x86_64.manylinux2014_x86_64.whl", hash = "sha256:37c90345ec7dd2f1bcef82ce49b6235b40f282b94d3eec47e801baf864d15525"},
{file = "pydantic-1.10.2-cp311-cp311-manylinux_2_5_i686.manylinux1_i686.manylinux_2_17_i686.manylinux2014_i686.whl", hash = "sha256:7b5ba54d026c2bd2cb769d3468885f23f43710f651688e91f5fb1edcf0ee9283"},
{file = "pydantic-1.10.2-cp311-cp311-musllinux_1_1_i686.whl", hash = "sha256:05e00dbebbe810b33c7a7362f231893183bcc4251f3f2ff991c31d5c08240c42"},
{file = "pydantic-1.10.2-cp311-cp311-musllinux_1_1_x86_64.whl", hash = "sha256:2d0567e60eb01bccda3a4df01df677adf6b437958d35c12a3ac3e0f078b0ee52"},
{file = "pydantic-1.10.2-cp311-cp311-win_amd64.whl", hash = "sha256:c6f981882aea41e021f72779ce2a4e87267458cc4d39ea990729e21ef18f0f8c"},
{file = "pydantic-1.10.2-cp37-cp37m-macosx_10_9_x86_64.whl", hash = "sha256:c4aac8e7103bf598373208f6299fa9a5cfd1fc571f2d40bf1dd1955a63d6eeb5"},
{file = "pydantic-1.10.2-cp37-cp37m-manylinux_2_17_x86_64.manylinux2014_x86_64.whl", hash = "sha256:81a7b66c3f499108b448f3f004801fcd7d7165fb4200acb03f1c2402da73ce4c"},
{file = "pydantic-1.10.2-cp37-cp37m-manylinux_2_5_i686.manylinux1_i686.manylinux_2_17_i686.manylinux2014_i686.whl", hash = "sha256:bedf309630209e78582ffacda64a21f96f3ed2e51fbf3962d4d488e503420254"},
{file = "pydantic-1.10.2-cp37-cp37m-musllinux_1_1_i686.whl", hash = "sha256:9300fcbebf85f6339a02c6994b2eb3ff1b9c8c14f502058b5bf349d42447dcf5"},
{file = "pydantic-1.10.2-cp37-cp37m-musllinux_1_1_x86_64.whl", hash = "sha256:216f3bcbf19c726b1cc22b099dd409aa371f55c08800bcea4c44c8f74b73478d"},
{file = "pydantic-1.10.2-cp37-cp37m-win_amd64.whl", hash = "sha256:dd3f9a40c16daf323cf913593083698caee97df2804aa36c4b3175d5ac1b92a2"},
{file = "pydantic-1.10.2-cp38-cp38-macosx_10_9_x86_64.whl", hash = "sha256:b97890e56a694486f772d36efd2ba31612739bc6f3caeee50e9e7e3ebd2fdd13"},
{file = "pydantic-1.10.2-cp38-cp38-macosx_11_0_arm64.whl", hash = "sha256:9cabf4a7f05a776e7793e72793cd92cc865ea0e83a819f9ae4ecccb1b8aa6116"},
{file = "pydantic-1.10.2-cp38-cp38-manylinux_2_17_x86_64.manylinux2014_x86_64.whl", hash = "sha256:06094d18dd5e6f2bbf93efa54991c3240964bb663b87729ac340eb5014310624"},
{file = "pydantic-1.10.2-cp38-cp38-manylinux_2_5_i686.manylinux1_i686.manylinux_2_17_i686.manylinux2014_i686.whl", hash = "sha256:cc78cc83110d2f275ec1970e7a831f4e371ee92405332ebfe9860a715f8336e1"},
{file = "pydantic-1.10.2-cp38-cp38-musllinux_1_1_i686.whl", hash = "sha256:1ee433e274268a4b0c8fde7ad9d58ecba12b069a033ecc4645bb6303c062d2e9"},
{file = "pydantic-1.10.2-cp38-cp38-musllinux_1_1_x86_64.whl", hash = "sha256:7c2abc4393dea97a4ccbb4ec7d8658d4e22c4765b7b9b9445588f16c71ad9965"},
{file = "pydantic-1.10.2-cp38-cp38-win_amd64.whl", hash = "sha256:0b959f4d8211fc964772b595ebb25f7652da3f22322c007b6fed26846a40685e"},
{file = "pydantic-1.10.2-cp39-cp39-macosx_10_9_x86_64.whl", hash = "sha256:c33602f93bfb67779f9c507e4d69451664524389546bacfe1bee13cae6dc7488"},
{file = "pydantic-1.10.2-cp39-cp39-macosx_11_0_arm64.whl", hash = "sha256:5760e164b807a48a8f25f8aa1a6d857e6ce62e7ec83ea5d5c5a802eac81bad41"},
{file = "pydantic-1.10.2-cp39-cp39-manylinux_2_17_x86_64.manylinux2014_x86_64.whl", hash = "sha256:6eb843dcc411b6a2237a694f5e1d649fc66c6064d02b204a7e9d194dff81eb4b"},
{file = "pydantic-1.10.2-cp39-cp39-manylinux_2_5_i686.manylinux1_i686.manylinux_2_17_i686.manylinux2014_i686.whl", hash = "sha256:4b8795290deaae348c4eba0cebb196e1c6b98bdbe7f50b2d0d9a4a99716342fe"},
{file = "pydantic-1.10.2-cp39-cp39-musllinux_1_1_i686.whl", hash = "sha256:e0bedafe4bc165ad0a56ac0bd7695df25c50f76961da29c050712596cf092d6d"},
{file = "pydantic-1.10.2-cp39-cp39-musllinux_1_1_x86_64.whl", hash = "sha256:2e05aed07fa02231dbf03d0adb1be1d79cabb09025dd45aa094aa8b4e7b9dcda"},
{file = "pydantic-1.10.2-cp39-cp39-win_amd64.whl", hash = "sha256:c1ba1afb396148bbc70e9eaa8c06c1716fdddabaf86e7027c5988bae2a829ab6"},
{file = "pydantic-1.10.2-py3-none-any.whl", hash = "sha256:1b6ee725bd6e83ec78b1aa32c5b1fa67a3a65badddde3976bca5fe4568f27709"},
{file = "pydantic-1.10.2.tar.gz", hash = "sha256:91b8e218852ef6007c2b98cd861601c6a09f1aa32bbbb74fab5b1c33d4a1e410"},
]
pydata-sphinx-theme = [
{file = "pydata_sphinx_theme-0.9.0-py3-none-any.whl", hash = "sha256:b22b442a6d6437e5eaf0a1f057169ffcb31eaa9f10be7d5481a125e735c71c12"},
{file = "pydata_sphinx_theme-0.9.0.tar.gz", hash = "sha256:03598a86915b596f4bf80bef79a4d33276a83e670bf360def699dbb9f99dc57a"},
]
pydot = [
{file = "pydot-1.4.2-py2.py3-none-any.whl", hash = "sha256:66c98190c65b8d2e2382a441b4c0edfdb4f4c025ef9cb9874de478fb0793a451"},
{file = "pydot-1.4.2.tar.gz", hash = "sha256:248081a39bcb56784deb018977e428605c1c758f10897a339fce1dd728ff007d"},
]
pydotplus = [
{file = "pydotplus-2.0.2.tar.gz", hash = "sha256:91e85e9ee9b85d2391ead7d635e3d9c7f5f44fd60a60e59b13e2403fa66505c4"},
]
pyflakes = [
{file = "pyflakes-2.4.0-py2.py3-none-any.whl", hash = "sha256:3bb3a3f256f4b7968c9c788781e4ff07dce46bdf12339dcda61053375426ee2e"},
{file = "pyflakes-2.4.0.tar.gz", hash = "sha256:05a85c2872edf37a4ed30b0cce2f6093e1d0581f8c19d7393122da7e25b2b24c"},
]
pygam = [
{file = "pygam-0.8.0-py2.py3-none-any.whl", hash = "sha256:198bd478700520b7c399cc4bcbc011e46850969c32fb09ef0b7a4bbb14e842a5"},
{file = "pygam-0.8.0.tar.gz", hash = "sha256:5cae01aea8b2fede72a6da0aba1490213af54b3476745666af26bbe700479166"},
]
pygments = [
{file = "Pygments-2.13.0-py3-none-any.whl", hash = "sha256:f643f331ab57ba3c9d89212ee4a2dabc6e94f117cf4eefde99a0574720d14c42"},
{file = "Pygments-2.13.0.tar.gz", hash = "sha256:56a8508ae95f98e2b9bdf93a6be5ae3f7d8af858b43e02c5a2ff083726be40c1"},
]
pygraphviz = [
{file = "pygraphviz-1.10.zip", hash = "sha256:457e093a888128903251a266a8cc16b4ba93f3f6334b3ebfed92c7471a74d867"},
]
pyparsing = [
{file = "pyparsing-3.0.9-py3-none-any.whl", hash = "sha256:5026bae9a10eeaefb61dab2f09052b9f4307d44aee4eda64b309723d8d206bbc"},
{file = "pyparsing-3.0.9.tar.gz", hash = "sha256:2b020ecf7d21b687f219b71ecad3631f644a47f01403fa1d1036b0c6416d70fb"},
]
pyro-api = [
{file = "pyro-api-0.1.2.tar.gz", hash = "sha256:a1b900d9580aa1c2fab3b123ab7ff33413744da7c5f440bd4aadc4d40d14d920"},
{file = "pyro_api-0.1.2-py3-none-any.whl", hash = "sha256:10e0e42e9e4401ce464dab79c870e50dfb4f413d326fa777f3582928ef9caf8f"},
]
pyro-ppl = [
{file = "pyro-ppl-1.8.3.tar.gz", hash = "sha256:3edd4381b020d12e8ab50ebe0298c7a68d150b8a024f998ad86fdac7a308d50e"},
{file = "pyro_ppl-1.8.3-py3-none-any.whl", hash = "sha256:cf642cb8bd1a54ad9c69960a5910e423b33f5de3480589b5dcc5f11236b403fb"},
]
pyrsistent = [
{file = "pyrsistent-0.19.2-cp310-cp310-macosx_10_9_universal2.whl", hash = "sha256:d6982b5a0237e1b7d876b60265564648a69b14017f3b5f908c5be2de3f9abb7a"},
{file = "pyrsistent-0.19.2-cp310-cp310-manylinux_2_17_aarch64.manylinux2014_aarch64.whl", hash = "sha256:187d5730b0507d9285a96fca9716310d572e5464cadd19f22b63a6976254d77a"},
{file = "pyrsistent-0.19.2-cp310-cp310-manylinux_2_5_i686.manylinux1_i686.manylinux_2_17_i686.manylinux2014_i686.whl", hash = "sha256:055ab45d5911d7cae397dc418808d8802fb95262751872c841c170b0dbf51eed"},
{file = "pyrsistent-0.19.2-cp310-cp310-win32.whl", hash = "sha256:456cb30ca8bff00596519f2c53e42c245c09e1a4543945703acd4312949bfd41"},
{file = "pyrsistent-0.19.2-cp310-cp310-win_amd64.whl", hash = "sha256:b39725209e06759217d1ac5fcdb510e98670af9e37223985f330b611f62e7425"},
{file = "pyrsistent-0.19.2-cp37-cp37m-macosx_10_9_x86_64.whl", hash = "sha256:2aede922a488861de0ad00c7630a6e2d57e8023e4be72d9d7147a9fcd2d30712"},
{file = "pyrsistent-0.19.2-cp37-cp37m-manylinux_2_17_aarch64.manylinux2014_aarch64.whl", hash = "sha256:879b4c2f4d41585c42df4d7654ddffff1239dc4065bc88b745f0341828b83e78"},
{file = "pyrsistent-0.19.2-cp37-cp37m-manylinux_2_5_i686.manylinux1_i686.manylinux_2_17_i686.manylinux2014_i686.whl", hash = "sha256:c43bec251bbd10e3cb58ced80609c5c1eb238da9ca78b964aea410fb820d00d6"},
{file = "pyrsistent-0.19.2-cp37-cp37m-win32.whl", hash = "sha256:d690b18ac4b3e3cab73b0b7aa7dbe65978a172ff94970ff98d82f2031f8971c2"},
{file = "pyrsistent-0.19.2-cp37-cp37m-win_amd64.whl", hash = "sha256:3ba4134a3ff0fc7ad225b6b457d1309f4698108fb6b35532d015dca8f5abed73"},
{file = "pyrsistent-0.19.2-cp38-cp38-macosx_10_9_universal2.whl", hash = "sha256:a178209e2df710e3f142cbd05313ba0c5ebed0a55d78d9945ac7a4e09d923308"},
{file = "pyrsistent-0.19.2-cp38-cp38-manylinux_2_17_aarch64.manylinux2014_aarch64.whl", hash = "sha256:e371b844cec09d8dc424d940e54bba8f67a03ebea20ff7b7b0d56f526c71d584"},
{file = "pyrsistent-0.19.2-cp38-cp38-manylinux_2_5_i686.manylinux1_i686.manylinux_2_17_i686.manylinux2014_i686.whl", hash = "sha256:111156137b2e71f3a9936baf27cb322e8024dac3dc54ec7fb9f0bcf3249e68bb"},
{file = "pyrsistent-0.19.2-cp38-cp38-win32.whl", hash = "sha256:e5d8f84d81e3729c3b506657dddfe46e8ba9c330bf1858ee33108f8bb2adb38a"},
{file = "pyrsistent-0.19.2-cp38-cp38-win_amd64.whl", hash = "sha256:9cd3e9978d12b5d99cbdc727a3022da0430ad007dacf33d0bf554b96427f33ab"},
{file = "pyrsistent-0.19.2-cp39-cp39-macosx_10_9_universal2.whl", hash = "sha256:f1258f4e6c42ad0b20f9cfcc3ada5bd6b83374516cd01c0960e3cb75fdca6770"},
{file = "pyrsistent-0.19.2-cp39-cp39-manylinux_2_17_aarch64.manylinux2014_aarch64.whl", hash = "sha256:21455e2b16000440e896ab99e8304617151981ed40c29e9507ef1c2e4314ee95"},
{file = "pyrsistent-0.19.2-cp39-cp39-manylinux_2_5_i686.manylinux1_i686.manylinux_2_17_i686.manylinux2014_i686.whl", hash = "sha256:bfd880614c6237243ff53a0539f1cb26987a6dc8ac6e66e0c5a40617296a045e"},
{file = "pyrsistent-0.19.2-cp39-cp39-win32.whl", hash = "sha256:71d332b0320642b3261e9fee47ab9e65872c2bd90260e5d225dabeed93cbd42b"},
{file = "pyrsistent-0.19.2-cp39-cp39-win_amd64.whl", hash = "sha256:dec3eac7549869365fe263831f576c8457f6c833937c68542d08fde73457d291"},
{file = "pyrsistent-0.19.2-py3-none-any.whl", hash = "sha256:ea6b79a02a28550c98b6ca9c35b9f492beaa54d7c5c9e9949555893c8a9234d0"},
{file = "pyrsistent-0.19.2.tar.gz", hash = "sha256:bfa0351be89c9fcbcb8c9879b826f4353be10f58f8a677efab0c017bf7137ec2"},
]
pytest = [
{file = "pytest-7.2.0-py3-none-any.whl", hash = "sha256:892f933d339f068883b6fd5a459f03d85bfcb355e4981e146d2c7616c21fef71"},
{file = "pytest-7.2.0.tar.gz", hash = "sha256:c4014eb40e10f11f355ad4e3c2fb2c6c6d1919c73f3b5a433de4708202cade59"},
]
pytest-cov = [
{file = "pytest-cov-3.0.0.tar.gz", hash = "sha256:e7f0f5b1617d2210a2cabc266dfe2f4c75a8d32fb89eafb7ad9d06f6d076d470"},
{file = "pytest_cov-3.0.0-py3-none-any.whl", hash = "sha256:578d5d15ac4a25e5f961c938b85a05b09fdaae9deef3bb6de9a6e766622ca7a6"},
]
pytest-split = [
{file = "pytest-split-0.8.0.tar.gz", hash = "sha256:8571a3f60ca8656c698ed86b0a3212bb9e79586ecb201daef9988c336ff0e6ff"},
{file = "pytest_split-0.8.0-py3-none-any.whl", hash = "sha256:2e06b8b1ab7ceb19d0b001548271abaf91d12415a8687086cf40581c555d309f"},
]
python-dateutil = [
{file = "python-dateutil-2.8.2.tar.gz", hash = "sha256:0123cacc1627ae19ddf3c27a5de5bd67ee4586fbdd6440d9748f8abb483d3e86"},
{file = "python_dateutil-2.8.2-py2.py3-none-any.whl", hash = "sha256:961d03dc3453ebbc59dbdea9e4e11c5651520a876d0f4db161e8674aae935da9"},
]
python-utils = [
{file = "python-utils-3.4.5.tar.gz", hash = "sha256:7e329c427a6d23036cfcc4501638afb31b2ddc8896f25393562833874b8c6e0a"},
{file = "python_utils-3.4.5-py2.py3-none-any.whl", hash = "sha256:22990259324eae88faa3389d302861a825dbdd217ab40e3ec701851b3337d592"},
]
pytz = [
{file = "pytz-2022.6-py2.py3-none-any.whl", hash = "sha256:222439474e9c98fced559f1709d89e6c9cbf8d79c794ff3eb9f8800064291427"},
{file = "pytz-2022.6.tar.gz", hash = "sha256:e89512406b793ca39f5971bc999cc538ce125c0e51c27941bef4568b460095e2"},
]
pytz-deprecation-shim = [
{file = "pytz_deprecation_shim-0.1.0.post0-py2.py3-none-any.whl", hash = "sha256:8314c9692a636c8eb3bda879b9f119e350e93223ae83e70e80c31675a0fdc1a6"},
{file = "pytz_deprecation_shim-0.1.0.post0.tar.gz", hash = "sha256:af097bae1b616dde5c5744441e2ddc69e74dfdcb0c263129610d85b87445a59d"},
]
pywin32 = [
{file = "pywin32-305-cp310-cp310-win32.whl", hash = "sha256:421f6cd86e84bbb696d54563c48014b12a23ef95a14e0bdba526be756d89f116"},
{file = "pywin32-305-cp310-cp310-win_amd64.whl", hash = "sha256:73e819c6bed89f44ff1d690498c0a811948f73777e5f97c494c152b850fad478"},
{file = "pywin32-305-cp310-cp310-win_arm64.whl", hash = "sha256:742eb905ce2187133a29365b428e6c3b9001d79accdc30aa8969afba1d8470f4"},
{file = "pywin32-305-cp311-cp311-win32.whl", hash = "sha256:19ca459cd2e66c0e2cc9a09d589f71d827f26d47fe4a9d09175f6aa0256b51c2"},
{file = "pywin32-305-cp311-cp311-win_amd64.whl", hash = "sha256:326f42ab4cfff56e77e3e595aeaf6c216712bbdd91e464d167c6434b28d65990"},
{file = "pywin32-305-cp311-cp311-win_arm64.whl", hash = "sha256:4ecd404b2c6eceaca52f8b2e3e91b2187850a1ad3f8b746d0796a98b4cea04db"},
{file = "pywin32-305-cp36-cp36m-win32.whl", hash = "sha256:48d8b1659284f3c17b68587af047d110d8c44837736b8932c034091683e05863"},
{file = "pywin32-305-cp36-cp36m-win_amd64.whl", hash = "sha256:13362cc5aa93c2beaf489c9c9017c793722aeb56d3e5166dadd5ef82da021fe1"},
{file = "pywin32-305-cp37-cp37m-win32.whl", hash = "sha256:a55db448124d1c1484df22fa8bbcbc45c64da5e6eae74ab095b9ea62e6d00496"},
{file = "pywin32-305-cp37-cp37m-win_amd64.whl", hash = "sha256:109f98980bfb27e78f4df8a51a8198e10b0f347257d1e265bb1a32993d0c973d"},
{file = "pywin32-305-cp38-cp38-win32.whl", hash = "sha256:9dd98384da775afa009bc04863426cb30596fd78c6f8e4e2e5bbf4edf8029504"},
{file = "pywin32-305-cp38-cp38-win_amd64.whl", hash = "sha256:56d7a9c6e1a6835f521788f53b5af7912090674bb84ef5611663ee1595860fc7"},
{file = "pywin32-305-cp39-cp39-win32.whl", hash = "sha256:9d968c677ac4d5cbdaa62fd3014ab241718e619d8e36ef8e11fb930515a1e918"},
{file = "pywin32-305-cp39-cp39-win_amd64.whl", hash = "sha256:50768c6b7c3f0b38b7fb14dd4104da93ebced5f1a50dc0e834594bff6fbe1271"},
]
pywinpty = [
{file = "pywinpty-2.0.9-cp310-none-win_amd64.whl", hash = "sha256:30a7b371446a694a6ce5ef906d70ac04e569de5308c42a2bdc9c3bc9275ec51f"},
{file = "pywinpty-2.0.9-cp311-none-win_amd64.whl", hash = "sha256:d78ef6f4bd7a6c6f94dc1a39ba8fb028540cc39f5cb593e756506db17843125f"},
{file = "pywinpty-2.0.9-cp37-none-win_amd64.whl", hash = "sha256:5ed36aa087e35a3a183f833631b3e4c1ae92fe2faabfce0fa91b77ed3f0f1382"},
{file = "pywinpty-2.0.9-cp38-none-win_amd64.whl", hash = "sha256:2352f44ee913faaec0a02d3c112595e56b8af7feeb8100efc6dc1a8685044199"},
{file = "pywinpty-2.0.9-cp39-none-win_amd64.whl", hash = "sha256:ba75ec55f46c9e17db961d26485b033deb20758b1731e8e208e1e8a387fcf70c"},
{file = "pywinpty-2.0.9.tar.gz", hash = "sha256:01b6400dd79212f50a2f01af1c65b781290ff39610853db99bf03962eb9a615f"},
]
pyyaml = [
{file = "PyYAML-6.0-cp310-cp310-macosx_10_9_x86_64.whl", hash = "sha256:d4db7c7aef085872ef65a8fd7d6d09a14ae91f691dec3e87ee5ee0539d516f53"},
{file = "PyYAML-6.0-cp310-cp310-macosx_11_0_arm64.whl", hash = "sha256:9df7ed3b3d2e0ecfe09e14741b857df43adb5a3ddadc919a2d94fbdf78fea53c"},
{file = "PyYAML-6.0-cp310-cp310-manylinux_2_17_aarch64.manylinux2014_aarch64.whl", hash = "sha256:77f396e6ef4c73fdc33a9157446466f1cff553d979bd00ecb64385760c6babdc"},
{file = "PyYAML-6.0-cp310-cp310-manylinux_2_17_s390x.manylinux2014_s390x.whl", hash = "sha256:a80a78046a72361de73f8f395f1f1e49f956c6be882eed58505a15f3e430962b"},
{file = "PyYAML-6.0-cp310-cp310-manylinux_2_5_x86_64.manylinux1_x86_64.manylinux_2_12_x86_64.manylinux2010_x86_64.whl", hash = "sha256:f84fbc98b019fef2ee9a1cb3ce93e3187a6df0b2538a651bfb890254ba9f90b5"},
{file = "PyYAML-6.0-cp310-cp310-win32.whl", hash = "sha256:2cd5df3de48857ed0544b34e2d40e9fac445930039f3cfe4bcc592a1f836d513"},
{file = "PyYAML-6.0-cp310-cp310-win_amd64.whl", hash = "sha256:daf496c58a8c52083df09b80c860005194014c3698698d1a57cbcfa182142a3a"},
{file = "PyYAML-6.0-cp311-cp311-macosx_10_9_x86_64.whl", hash = "sha256:d4b0ba9512519522b118090257be113b9468d804b19d63c71dbcf4a48fa32358"},
{file = "PyYAML-6.0-cp311-cp311-macosx_11_0_arm64.whl", hash = "sha256:81957921f441d50af23654aa6c5e5eaf9b06aba7f0a19c18a538dc7ef291c5a1"},
{file = "PyYAML-6.0-cp311-cp311-manylinux_2_17_aarch64.manylinux2014_aarch64.whl", hash = "sha256:afa17f5bc4d1b10afd4466fd3a44dc0e245382deca5b3c353d8b757f9e3ecb8d"},
{file = "PyYAML-6.0-cp311-cp311-manylinux_2_17_s390x.manylinux2014_s390x.whl", hash = "sha256:dbad0e9d368bb989f4515da330b88a057617d16b6a8245084f1b05400f24609f"},
{file = "PyYAML-6.0-cp311-cp311-manylinux_2_17_x86_64.manylinux2014_x86_64.whl", hash = "sha256:432557aa2c09802be39460360ddffd48156e30721f5e8d917f01d31694216782"},
{file = "PyYAML-6.0-cp311-cp311-win32.whl", hash = "sha256:bfaef573a63ba8923503d27530362590ff4f576c626d86a9fed95822a8255fd7"},
{file = "PyYAML-6.0-cp311-cp311-win_amd64.whl", hash = "sha256:01b45c0191e6d66c470b6cf1b9531a771a83c1c4208272ead47a3ae4f2f603bf"},
{file = "PyYAML-6.0-cp36-cp36m-macosx_10_9_x86_64.whl", hash = "sha256:897b80890765f037df3403d22bab41627ca8811ae55e9a722fd0392850ec4d86"},
{file = "PyYAML-6.0-cp36-cp36m-manylinux_2_17_aarch64.manylinux2014_aarch64.whl", hash = "sha256:50602afada6d6cbfad699b0c7bb50d5ccffa7e46a3d738092afddc1f9758427f"},
{file = "PyYAML-6.0-cp36-cp36m-manylinux_2_17_s390x.manylinux2014_s390x.whl", hash = "sha256:48c346915c114f5fdb3ead70312bd042a953a8ce5c7106d5bfb1a5254e47da92"},
{file = "PyYAML-6.0-cp36-cp36m-manylinux_2_5_x86_64.manylinux1_x86_64.manylinux_2_12_x86_64.manylinux2010_x86_64.whl", hash = "sha256:98c4d36e99714e55cfbaaee6dd5badbc9a1ec339ebfc3b1f52e293aee6bb71a4"},
{file = "PyYAML-6.0-cp36-cp36m-win32.whl", hash = "sha256:0283c35a6a9fbf047493e3a0ce8d79ef5030852c51e9d911a27badfde0605293"},
{file = "PyYAML-6.0-cp36-cp36m-win_amd64.whl", hash = "sha256:07751360502caac1c067a8132d150cf3d61339af5691fe9e87803040dbc5db57"},
{file = "PyYAML-6.0-cp37-cp37m-macosx_10_9_x86_64.whl", hash = "sha256:819b3830a1543db06c4d4b865e70ded25be52a2e0631ccd2f6a47a2822f2fd7c"},
{file = "PyYAML-6.0-cp37-cp37m-manylinux_2_17_aarch64.manylinux2014_aarch64.whl", hash = "sha256:473f9edb243cb1935ab5a084eb238d842fb8f404ed2193a915d1784b5a6b5fc0"},
{file = "PyYAML-6.0-cp37-cp37m-manylinux_2_17_s390x.manylinux2014_s390x.whl", hash = "sha256:0ce82d761c532fe4ec3f87fc45688bdd3a4c1dc5e0b4a19814b9009a29baefd4"},
{file = "PyYAML-6.0-cp37-cp37m-manylinux_2_5_x86_64.manylinux1_x86_64.manylinux_2_12_x86_64.manylinux2010_x86_64.whl", hash = "sha256:231710d57adfd809ef5d34183b8ed1eeae3f76459c18fb4a0b373ad56bedcdd9"},
{file = "PyYAML-6.0-cp37-cp37m-win32.whl", hash = "sha256:c5687b8d43cf58545ade1fe3e055f70eac7a5a1a0bf42824308d868289a95737"},
{file = "PyYAML-6.0-cp37-cp37m-win_amd64.whl", hash = "sha256:d15a181d1ecd0d4270dc32edb46f7cb7733c7c508857278d3d378d14d606db2d"},
{file = "PyYAML-6.0-cp38-cp38-macosx_10_9_x86_64.whl", hash = "sha256:0b4624f379dab24d3725ffde76559cff63d9ec94e1736b556dacdfebe5ab6d4b"},
{file = "PyYAML-6.0-cp38-cp38-manylinux_2_17_aarch64.manylinux2014_aarch64.whl", hash = "sha256:213c60cd50106436cc818accf5baa1aba61c0189ff610f64f4a3e8c6726218ba"},
{file = "PyYAML-6.0-cp38-cp38-manylinux_2_17_s390x.manylinux2014_s390x.whl", hash = "sha256:9fa600030013c4de8165339db93d182b9431076eb98eb40ee068700c9c813e34"},
{file = "PyYAML-6.0-cp38-cp38-manylinux_2_5_x86_64.manylinux1_x86_64.manylinux_2_12_x86_64.manylinux2010_x86_64.whl", hash = "sha256:277a0ef2981ca40581a47093e9e2d13b3f1fbbeffae064c1d21bfceba2030287"},
{file = "PyYAML-6.0-cp38-cp38-win32.whl", hash = "sha256:d4eccecf9adf6fbcc6861a38015c2a64f38b9d94838ac1810a9023a0609e1b78"},
{file = "PyYAML-6.0-cp38-cp38-win_amd64.whl", hash = "sha256:1e4747bc279b4f613a09eb64bba2ba602d8a6664c6ce6396a4d0cd413a50ce07"},
{file = "PyYAML-6.0-cp39-cp39-macosx_10_9_x86_64.whl", hash = "sha256:055d937d65826939cb044fc8c9b08889e8c743fdc6a32b33e2390f66013e449b"},
{file = "PyYAML-6.0-cp39-cp39-macosx_11_0_arm64.whl", hash = "sha256:e61ceaab6f49fb8bdfaa0f92c4b57bcfbea54c09277b1b4f7ac376bfb7a7c174"},
{file = "PyYAML-6.0-cp39-cp39-manylinux_2_17_aarch64.manylinux2014_aarch64.whl", hash = "sha256:d67d839ede4ed1b28a4e8909735fc992a923cdb84e618544973d7dfc71540803"},
{file = "PyYAML-6.0-cp39-cp39-manylinux_2_17_s390x.manylinux2014_s390x.whl", hash = "sha256:cba8c411ef271aa037d7357a2bc8f9ee8b58b9965831d9e51baf703280dc73d3"},
{file = "PyYAML-6.0-cp39-cp39-manylinux_2_5_x86_64.manylinux1_x86_64.manylinux_2_12_x86_64.manylinux2010_x86_64.whl", hash = "sha256:40527857252b61eacd1d9af500c3337ba8deb8fc298940291486c465c8b46ec0"},
{file = "PyYAML-6.0-cp39-cp39-win32.whl", hash = "sha256:b5b9eccad747aabaaffbc6064800670f0c297e52c12754eb1d976c57e4f74dcb"},
{file = "PyYAML-6.0-cp39-cp39-win_amd64.whl", hash = "sha256:b3d267842bf12586ba6c734f89d1f5b871df0273157918b0ccefa29deb05c21c"},
{file = "PyYAML-6.0.tar.gz", hash = "sha256:68fb519c14306fec9720a2a5b45bc9f0c8d1b9c72adf45c37baedfcd949c35a2"},
]
pyzmq = [
{file = "pyzmq-24.0.1-cp310-cp310-macosx_10_15_universal2.whl", hash = "sha256:28b119ba97129d3001673a697b7cce47fe6de1f7255d104c2f01108a5179a066"},
{file = "pyzmq-24.0.1-cp310-cp310-macosx_10_9_x86_64.whl", hash = "sha256:bcbebd369493d68162cddb74a9c1fcebd139dfbb7ddb23d8f8e43e6c87bac3a6"},
{file = "pyzmq-24.0.1-cp310-cp310-manylinux_2_17_aarch64.manylinux2014_aarch64.whl", hash = "sha256:ae61446166983c663cee42c852ed63899e43e484abf080089f771df4b9d272ef"},
{file = "pyzmq-24.0.1-cp310-cp310-manylinux_2_17_i686.manylinux2014_i686.whl", hash = "sha256:87f7ac99b15270db8d53f28c3c7b968612993a90a5cf359da354efe96f5372b4"},
{file = "pyzmq-24.0.1-cp310-cp310-manylinux_2_17_x86_64.manylinux2014_x86_64.whl", hash = "sha256:9dca7c3956b03b7663fac4d150f5e6d4f6f38b2462c1e9afd83bcf7019f17913"},
{file = "pyzmq-24.0.1-cp310-cp310-musllinux_1_1_aarch64.whl", hash = "sha256:8c78bfe20d4c890cb5580a3b9290f700c570e167d4cdcc55feec07030297a5e3"},
{file = "pyzmq-24.0.1-cp310-cp310-musllinux_1_1_i686.whl", hash = "sha256:48f721f070726cd2a6e44f3c33f8ee4b24188e4b816e6dd8ba542c8c3bb5b246"},
{file = "pyzmq-24.0.1-cp310-cp310-musllinux_1_1_x86_64.whl", hash = "sha256:afe1f3bc486d0ce40abb0a0c9adb39aed3bbac36ebdc596487b0cceba55c21c1"},
{file = "pyzmq-24.0.1-cp310-cp310-win32.whl", hash = "sha256:3e6192dbcefaaa52ed81be88525a54a445f4b4fe2fffcae7fe40ebb58bd06bfd"},
{file = "pyzmq-24.0.1-cp310-cp310-win_amd64.whl", hash = "sha256:86de64468cad9c6d269f32a6390e210ca5ada568c7a55de8e681ca3b897bb340"},
{file = "pyzmq-24.0.1-cp311-cp311-macosx_10_15_universal2.whl", hash = "sha256:838812c65ed5f7c2bd11f7b098d2e5d01685a3f6d1f82849423b570bae698c00"},
{file = "pyzmq-24.0.1-cp311-cp311-macosx_10_9_x86_64.whl", hash = "sha256:dfb992dbcd88d8254471760879d48fb20836d91baa90f181c957122f9592b3dc"},
{file = "pyzmq-24.0.1-cp311-cp311-manylinux_2_17_aarch64.manylinux2014_aarch64.whl", hash = "sha256:7abddb2bd5489d30ffeb4b93a428130886c171b4d355ccd226e83254fcb6b9ef"},
{file = "pyzmq-24.0.1-cp311-cp311-manylinux_2_17_i686.manylinux2014_i686.whl", hash = "sha256:94010bd61bc168c103a5b3b0f56ed3b616688192db7cd5b1d626e49f28ff51b3"},
{file = "pyzmq-24.0.1-cp311-cp311-manylinux_2_28_x86_64.whl", hash = "sha256:8242543c522d84d033fe79be04cb559b80d7eb98ad81b137ff7e0a9020f00ace"},
{file = "pyzmq-24.0.1-cp311-cp311-musllinux_1_1_aarch64.whl", hash = "sha256:ccb94342d13e3bf3ffa6e62f95b5e3f0bc6bfa94558cb37f4b3d09d6feb536ff"},
{file = "pyzmq-24.0.1-cp311-cp311-musllinux_1_1_i686.whl", hash = "sha256:6640f83df0ae4ae1104d4c62b77e9ef39be85ebe53f636388707d532bee2b7b8"},
{file = "pyzmq-24.0.1-cp311-cp311-musllinux_1_1_x86_64.whl", hash = "sha256:a180dbd5ea5d47c2d3b716d5c19cc3fb162d1c8db93b21a1295d69585bfddac1"},
{file = "pyzmq-24.0.1-cp311-cp311-win32.whl", hash = "sha256:624321120f7e60336be8ec74a172ae7fba5c3ed5bf787cc85f7e9986c9e0ebc2"},
{file = "pyzmq-24.0.1-cp311-cp311-win_amd64.whl", hash = "sha256:1724117bae69e091309ffb8255412c4651d3f6355560d9af312d547f6c5bc8b8"},
{file = "pyzmq-24.0.1-cp36-cp36m-macosx_10_9_x86_64.whl", hash = "sha256:15975747462ec49fdc863af906bab87c43b2491403ab37a6d88410635786b0f4"},
{file = "pyzmq-24.0.1-cp36-cp36m-manylinux_2_17_aarch64.manylinux2014_aarch64.whl", hash = "sha256:b947e264f0e77d30dcbccbb00f49f900b204b922eb0c3a9f0afd61aaa1cedc3d"},
{file = "pyzmq-24.0.1-cp36-cp36m-manylinux_2_5_i686.manylinux1_i686.whl", hash = "sha256:0ec91f1bad66f3ee8c6deb65fa1fe418e8ad803efedd69c35f3b5502f43bd1dc"},
{file = "pyzmq-24.0.1-cp36-cp36m-manylinux_2_5_x86_64.manylinux1_x86_64.whl", hash = "sha256:db03704b3506455d86ec72c3358a779e9b1d07b61220dfb43702b7b668edcd0d"},
{file = "pyzmq-24.0.1-cp36-cp36m-musllinux_1_1_aarch64.whl", hash = "sha256:e7e66b4e403c2836ac74f26c4b65d8ac0ca1eef41dfcac2d013b7482befaad83"},
{file = "pyzmq-24.0.1-cp36-cp36m-musllinux_1_1_i686.whl", hash = "sha256:7a23ccc1083c260fa9685c93e3b170baba45aeed4b524deb3f426b0c40c11639"},
{file = "pyzmq-24.0.1-cp36-cp36m-musllinux_1_1_x86_64.whl", hash = "sha256:fa0ae3275ef706c0309556061185dd0e4c4cd3b7d6f67ae617e4e677c7a41e2e"},
{file = "pyzmq-24.0.1-cp36-cp36m-win32.whl", hash = "sha256:f01de4ec083daebf210531e2cca3bdb1608dbbbe00a9723e261d92087a1f6ebc"},
{file = "pyzmq-24.0.1-cp36-cp36m-win_amd64.whl", hash = "sha256:de4217b9eb8b541cf2b7fde4401ce9d9a411cc0af85d410f9d6f4333f43640be"},
{file = "pyzmq-24.0.1-cp37-cp37m-macosx_10_9_x86_64.whl", hash = "sha256:78068e8678ca023594e4a0ab558905c1033b2d3e806a0ad9e3094e231e115a33"},
{file = "pyzmq-24.0.1-cp37-cp37m-manylinux_2_17_aarch64.manylinux2014_aarch64.whl", hash = "sha256:77c2713faf25a953c69cf0f723d1b7dd83827b0834e6c41e3fb3bbc6765914a1"},
{file = "pyzmq-24.0.1-cp37-cp37m-manylinux_2_5_i686.manylinux1_i686.whl", hash = "sha256:8bb4af15f305056e95ca1bd086239b9ebc6ad55e9f49076d27d80027f72752f6"},
{file = "pyzmq-24.0.1-cp37-cp37m-manylinux_2_5_x86_64.manylinux1_x86_64.whl", hash = "sha256:0f14cffd32e9c4c73da66db97853a6aeceaac34acdc0fae9e5bbc9370281864c"},
{file = "pyzmq-24.0.1-cp37-cp37m-musllinux_1_1_aarch64.whl", hash = "sha256:0108358dab8c6b27ff6b985c2af4b12665c1bc659648284153ee501000f5c107"},
{file = "pyzmq-24.0.1-cp37-cp37m-musllinux_1_1_i686.whl", hash = "sha256:d66689e840e75221b0b290b0befa86f059fb35e1ee6443bce51516d4d61b6b99"},
{file = "pyzmq-24.0.1-cp37-cp37m-musllinux_1_1_x86_64.whl", hash = "sha256:ae08ac90aa8fa14caafc7a6251bd218bf6dac518b7bff09caaa5e781119ba3f2"},
{file = "pyzmq-24.0.1-cp37-cp37m-win32.whl", hash = "sha256:8421aa8c9b45ea608c205db9e1c0c855c7e54d0e9c2c2f337ce024f6843cab3b"},
{file = "pyzmq-24.0.1-cp37-cp37m-win_amd64.whl", hash = "sha256:54d8b9c5e288362ec8595c1d98666d36f2070fd0c2f76e2b3c60fbad9bd76227"},
{file = "pyzmq-24.0.1-cp38-cp38-macosx_10_15_universal2.whl", hash = "sha256:acbd0a6d61cc954b9f535daaa9ec26b0a60a0d4353c5f7c1438ebc88a359a47e"},
{file = "pyzmq-24.0.1-cp38-cp38-macosx_10_9_x86_64.whl", hash = "sha256:47b11a729d61a47df56346283a4a800fa379ae6a85870d5a2e1e4956c828eedc"},
{file = "pyzmq-24.0.1-cp38-cp38-manylinux_2_12_i686.manylinux2010_i686.whl", hash = "sha256:abe6eb10122f0d746a0d510c2039ae8edb27bc9af29f6d1b05a66cc2401353ff"},
{file = "pyzmq-24.0.1-cp38-cp38-manylinux_2_12_x86_64.manylinux2010_x86_64.whl", hash = "sha256:07bec1a1b22dacf718f2c0e71b49600bb6a31a88f06527dfd0b5aababe3fa3f7"},
{file = "pyzmq-24.0.1-cp38-cp38-manylinux_2_17_aarch64.manylinux2014_aarch64.whl", hash = "sha256:f0d945a85b70da97ae86113faf9f1b9294efe66bd4a5d6f82f2676d567338b66"},
{file = "pyzmq-24.0.1-cp38-cp38-musllinux_1_1_aarch64.whl", hash = "sha256:1b7928bb7580736ffac5baf814097be342ba08d3cfdfb48e52773ec959572287"},
{file = "pyzmq-24.0.1-cp38-cp38-musllinux_1_1_i686.whl", hash = "sha256:b946da90dc2799bcafa682692c1d2139b2a96ec3c24fa9fc6f5b0da782675330"},
{file = "pyzmq-24.0.1-cp38-cp38-musllinux_1_1_x86_64.whl", hash = "sha256:c8840f064b1fb377cffd3efeaad2b190c14d4c8da02316dae07571252d20b31f"},
{file = "pyzmq-24.0.1-cp38-cp38-win32.whl", hash = "sha256:4854f9edc5208f63f0841c0c667260ae8d6846cfa233c479e29fdc85d42ebd58"},
{file = "pyzmq-24.0.1-cp38-cp38-win_amd64.whl", hash = "sha256:42d4f97b9795a7aafa152a36fe2ad44549b83a743fd3e77011136def512e6c2a"},
{file = "pyzmq-24.0.1-cp39-cp39-macosx_10_15_universal2.whl", hash = "sha256:52afb0ac962963fff30cf1be775bc51ae083ef4c1e354266ab20e5382057dd62"},
{file = "pyzmq-24.0.1-cp39-cp39-macosx_10_9_x86_64.whl", hash = "sha256:8bad8210ad4df68c44ff3685cca3cda448ee46e20d13edcff8909eba6ec01ca4"},
{file = "pyzmq-24.0.1-cp39-cp39-manylinux_2_12_i686.manylinux2010_i686.whl", hash = "sha256:dabf1a05318d95b1537fd61d9330ef4313ea1216eea128a17615038859da3b3b"},
{file = "pyzmq-24.0.1-cp39-cp39-manylinux_2_12_x86_64.manylinux2010_x86_64.whl", hash = "sha256:5bd3d7dfd9cd058eb68d9a905dec854f86649f64d4ddf21f3ec289341386c44b"},
{file = "pyzmq-24.0.1-cp39-cp39-manylinux_2_17_aarch64.manylinux2014_aarch64.whl", hash = "sha256:e8012bce6836d3f20a6c9599f81dfa945f433dab4dbd0c4917a6fb1f998ab33d"},
{file = "pyzmq-24.0.1-cp39-cp39-musllinux_1_1_aarch64.whl", hash = "sha256:c31805d2c8ade9b11feca4674eee2b9cce1fec3e8ddb7bbdd961a09dc76a80ea"},
{file = "pyzmq-24.0.1-cp39-cp39-musllinux_1_1_i686.whl", hash = "sha256:3104f4b084ad5d9c0cb87445cc8cfd96bba710bef4a66c2674910127044df209"},
{file = "pyzmq-24.0.1-cp39-cp39-musllinux_1_1_x86_64.whl", hash = "sha256:df0841f94928f8af9c7a1f0aaaffba1fb74607af023a152f59379c01c53aee58"},
{file = "pyzmq-24.0.1-cp39-cp39-win32.whl", hash = "sha256:a435ef8a3bd95c8a2d316d6e0ff70d0db524f6037411652803e118871d703333"},
{file = "pyzmq-24.0.1-cp39-cp39-win_amd64.whl", hash = "sha256:2032d9cb994ce3b4cba2b8dfae08c7e25bc14ba484c770d4d3be33c27de8c45b"},
{file = "pyzmq-24.0.1-pp37-pypy37_pp73-macosx_10_9_x86_64.whl", hash = "sha256:bb5635c851eef3a7a54becde6da99485eecf7d068bd885ac8e6d173c4ecd68b0"},
{file = "pyzmq-24.0.1-pp37-pypy37_pp73-manylinux_2_12_i686.manylinux2010_i686.whl", hash = "sha256:83ea1a398f192957cb986d9206ce229efe0ee75e3c6635baff53ddf39bd718d5"},
{file = "pyzmq-24.0.1-pp37-pypy37_pp73-manylinux_2_12_x86_64.manylinux2010_x86_64.whl", hash = "sha256:941fab0073f0a54dc33d1a0460cb04e0d85893cb0c5e1476c785000f8b359409"},
{file = "pyzmq-24.0.1-pp37-pypy37_pp73-manylinux_2_17_aarch64.manylinux2014_aarch64.whl", hash = "sha256:0e8f482c44ccb5884bf3f638f29bea0f8dc68c97e38b2061769c4cb697f6140d"},
{file = "pyzmq-24.0.1-pp37-pypy37_pp73-win_amd64.whl", hash = "sha256:613010b5d17906c4367609e6f52e9a2595e35d5cc27d36ff3f1b6fa6e954d944"},
{file = "pyzmq-24.0.1-pp38-pypy38_pp73-macosx_10_9_x86_64.whl", hash = "sha256:65c94410b5a8355cfcf12fd600a313efee46ce96a09e911ea92cf2acf6708804"},
{file = "pyzmq-24.0.1-pp38-pypy38_pp73-manylinux_2_12_i686.manylinux2010_i686.whl", hash = "sha256:20e7eeb1166087db636c06cae04a1ef59298627f56fb17da10528ab52a14c87f"},
{file = "pyzmq-24.0.1-pp38-pypy38_pp73-manylinux_2_12_x86_64.manylinux2010_x86_64.whl", hash = "sha256:a2712aee7b3834ace51738c15d9ee152cc5a98dc7d57dd93300461b792ab7b43"},
{file = "pyzmq-24.0.1-pp38-pypy38_pp73-manylinux_2_17_aarch64.manylinux2014_aarch64.whl", hash = "sha256:1a7c280185c4da99e0cc06c63bdf91f5b0b71deb70d8717f0ab870a43e376db8"},
{file = "pyzmq-24.0.1-pp38-pypy38_pp73-win_amd64.whl", hash = "sha256:858375573c9225cc8e5b49bfac846a77b696b8d5e815711b8d4ba3141e6e8879"},
{file = "pyzmq-24.0.1-pp39-pypy39_pp73-macosx_10_9_x86_64.whl", hash = "sha256:80093b595921eed1a2cead546a683b9e2ae7f4a4592bb2ab22f70d30174f003a"},
{file = "pyzmq-24.0.1-pp39-pypy39_pp73-manylinux_2_17_aarch64.manylinux2014_aarch64.whl", hash = "sha256:8f3f3154fde2b1ff3aa7b4f9326347ebc89c8ef425ca1db8f665175e6d3bd42f"},
{file = "pyzmq-24.0.1-pp39-pypy39_pp73-manylinux_2_17_i686.manylinux2014_i686.whl", hash = "sha256:abb756147314430bee5d10919b8493c0ccb109ddb7f5dfd2fcd7441266a25b75"},
{file = "pyzmq-24.0.1-pp39-pypy39_pp73-manylinux_2_17_x86_64.manylinux2014_x86_64.whl", hash = "sha256:44e706bac34e9f50779cb8c39f10b53a4d15aebb97235643d3112ac20bd577b4"},
{file = "pyzmq-24.0.1-pp39-pypy39_pp73-win_amd64.whl", hash = "sha256:687700f8371643916a1d2c61f3fdaa630407dd205c38afff936545d7b7466066"},
{file = "pyzmq-24.0.1.tar.gz", hash = "sha256:216f5d7dbb67166759e59b0479bca82b8acf9bed6015b526b8eb10143fb08e77"},
]
qtconsole = [
{file = "qtconsole-5.4.0-py3-none-any.whl", hash = "sha256:be13560c19bdb3b54ed9741a915aa701a68d424519e8341ac479a91209e694b2"},
{file = "qtconsole-5.4.0.tar.gz", hash = "sha256:57748ea2fd26320a0b77adba20131cfbb13818c7c96d83fafcb110ff55f58b35"},
]
qtpy = [
{file = "QtPy-2.3.0-py3-none-any.whl", hash = "sha256:8d6d544fc20facd27360ea189592e6135c614785f0dec0b4f083289de6beb408"},
{file = "QtPy-2.3.0.tar.gz", hash = "sha256:0603c9c83ccc035a4717a12908bf6bc6cb22509827ea2ec0e94c2da7c9ed57c5"},
]
requests = [
{file = "requests-2.28.1-py3-none-any.whl", hash = "sha256:8fefa2a1a1365bf5520aac41836fbee479da67864514bdb821f31ce07ce65349"},
{file = "requests-2.28.1.tar.gz", hash = "sha256:7c5599b102feddaa661c826c56ab4fee28bfd17f5abca1ebbe3e7f19d7c97983"},
]
requests-oauthlib = [
{file = "requests-oauthlib-1.3.1.tar.gz", hash = "sha256:75beac4a47881eeb94d5ea5d6ad31ef88856affe2332b9aafb52c6452ccf0d7a"},
{file = "requests_oauthlib-1.3.1-py2.py3-none-any.whl", hash = "sha256:2577c501a2fb8d05a304c09d090d6e47c306fef15809d102b327cf8364bddab5"},
]
rpy2 = [
{file = "rpy2-3.5.6-cp310-cp310-macosx_10_15_x86_64.whl", hash = "sha256:7f56bb66d95aaa59f52c82bdff3bb268a5745cc3779839ca1ac9aecfc411c17a"},
{file = "rpy2-3.5.6-cp37-cp37m-macosx_10_15_x86_64.whl", hash = "sha256:defff796b43fe230e1e698a1bc353b7a4a25d4d9de856ee1bcffd6831edc825c"},
{file = "rpy2-3.5.6-cp38-cp38-macosx_10_15_x86_64.whl", hash = "sha256:a3f74cd54bd2e21a94274ae5306113e24f8a15c034b15be931188939292b49f7"},
{file = "rpy2-3.5.6-cp39-cp39-macosx_10_15_x86_64.whl", hash = "sha256:6a2e4be001b98c00f084a561cfcf9ca52f938cd8fcd8acfa0fbfc6a8be219339"},
{file = "rpy2-3.5.6.tar.gz", hash = "sha256:3404f1031d2d8ff8a1002656ab8e394b8ac16dd34ca43af68deed102f396e771"},
]
rsa = [
{file = "rsa-4.9-py3-none-any.whl", hash = "sha256:90260d9058e514786967344d0ef75fa8727eed8a7d2e43ce9f4bcf1b536174f7"},
{file = "rsa-4.9.tar.gz", hash = "sha256:e38464a49c6c85d7f1351b0126661487a7e0a14a50f1675ec50eb34d4f20ef21"},
]
s3transfer = [
{file = "s3transfer-0.6.0-py3-none-any.whl", hash = "sha256:06176b74f3a15f61f1b4f25a1fc29a4429040b7647133a463da8fa5bd28d5ecd"},
{file = "s3transfer-0.6.0.tar.gz", hash = "sha256:2ed07d3866f523cc561bf4a00fc5535827981b117dd7876f036b0c1aca42c947"},
]
scikit-learn = [
{file = "scikit-learn-1.0.2.tar.gz", hash = "sha256:b5870959a5484b614f26d31ca4c17524b1b0317522199dc985c3b4256e030767"},
{file = "scikit_learn-1.0.2-cp310-cp310-macosx_10_13_x86_64.whl", hash = "sha256:da3c84694ff693b5b3194d8752ccf935a665b8b5edc33a283122f4273ca3e687"},
{file = "scikit_learn-1.0.2-cp310-cp310-macosx_12_0_arm64.whl", hash = "sha256:75307d9ea39236cad7eea87143155eea24d48f93f3a2f9389c817f7019f00705"},
{file = "scikit_learn-1.0.2-cp310-cp310-manylinux_2_17_aarch64.manylinux2014_aarch64.whl", hash = "sha256:f14517e174bd7332f1cca2c959e704696a5e0ba246eb8763e6c24876d8710049"},
{file = "scikit_learn-1.0.2-cp310-cp310-manylinux_2_17_x86_64.manylinux2014_x86_64.whl", hash = "sha256:d9aac97e57c196206179f674f09bc6bffcd0284e2ba95b7fe0b402ac3f986023"},
{file = "scikit_learn-1.0.2-cp310-cp310-win_amd64.whl", hash = "sha256:d93d4c28370aea8a7cbf6015e8a669cd5d69f856cc2aa44e7a590fb805bb5583"},
{file = "scikit_learn-1.0.2-cp37-cp37m-macosx_10_13_x86_64.whl", hash = "sha256:85260fb430b795d806251dd3bb05e6f48cdc777ac31f2bcf2bc8bbed3270a8f5"},
{file = "scikit_learn-1.0.2-cp37-cp37m-manylinux_2_12_i686.manylinux2010_i686.whl", hash = "sha256:a053a6a527c87c5c4fa7bf1ab2556fa16d8345cf99b6c5a19030a4a7cd8fd2c0"},
{file = "scikit_learn-1.0.2-cp37-cp37m-manylinux_2_12_x86_64.manylinux2010_x86_64.whl", hash = "sha256:245c9b5a67445f6f044411e16a93a554edc1efdcce94d3fc0bc6a4b9ac30b752"},
{file = "scikit_learn-1.0.2-cp37-cp37m-manylinux_2_17_aarch64.manylinux2014_aarch64.whl", hash = "sha256:158faf30684c92a78e12da19c73feff9641a928a8024b4fa5ec11d583f3d8a87"},
{file = "scikit_learn-1.0.2-cp37-cp37m-manylinux_2_17_i686.manylinux2014_i686.whl", hash = "sha256:08ef968f6b72033c16c479c966bf37ccd49b06ea91b765e1cc27afefe723920b"},
{file = "scikit_learn-1.0.2-cp37-cp37m-manylinux_2_17_x86_64.manylinux2014_x86_64.whl", hash = "sha256:16455ace947d8d9e5391435c2977178d0ff03a261571e67f627c8fee0f9d431a"},
{file = "scikit_learn-1.0.2-cp37-cp37m-win32.whl", hash = "sha256:2f3b453e0b149898577e301d27e098dfe1a36943f7bb0ad704d1e548efc3b448"},
{file = "scikit_learn-1.0.2-cp37-cp37m-win_amd64.whl", hash = "sha256:46f431ec59dead665e1370314dbebc99ead05e1c0a9df42f22d6a0e00044820f"},
{file = "scikit_learn-1.0.2-cp38-cp38-macosx_10_13_x86_64.whl", hash = "sha256:ff3fa8ea0e09e38677762afc6e14cad77b5e125b0ea70c9bba1992f02c93b028"},
{file = "scikit_learn-1.0.2-cp38-cp38-macosx_12_0_arm64.whl", hash = "sha256:9369b030e155f8188743eb4893ac17a27f81d28a884af460870c7c072f114243"},
{file = "scikit_learn-1.0.2-cp38-cp38-manylinux_2_12_i686.manylinux2010_i686.whl", hash = "sha256:7d6b2475f1c23a698b48515217eb26b45a6598c7b1840ba23b3c5acece658dbb"},
{file = "scikit_learn-1.0.2-cp38-cp38-manylinux_2_12_x86_64.manylinux2010_x86_64.whl", hash = "sha256:285db0352e635b9e3392b0b426bc48c3b485512d3b4ac3c7a44ec2a2ba061e66"},
{file = "scikit_learn-1.0.2-cp38-cp38-manylinux_2_17_aarch64.manylinux2014_aarch64.whl", hash = "sha256:5cb33fe1dc6f73dc19e67b264dbb5dde2a0539b986435fdd78ed978c14654830"},
{file = "scikit_learn-1.0.2-cp38-cp38-manylinux_2_17_i686.manylinux2014_i686.whl", hash = "sha256:b1391d1a6e2268485a63c3073111fe3ba6ec5145fc957481cfd0652be571226d"},
{file = "scikit_learn-1.0.2-cp38-cp38-manylinux_2_17_x86_64.manylinux2014_x86_64.whl", hash = "sha256:bc3744dabc56b50bec73624aeca02e0def06b03cb287de26836e730659c5d29c"},
{file = "scikit_learn-1.0.2-cp38-cp38-win32.whl", hash = "sha256:a999c9f02ff9570c783069f1074f06fe7386ec65b84c983db5aeb8144356a355"},
{file = "scikit_learn-1.0.2-cp38-cp38-win_amd64.whl", hash = "sha256:7626a34eabbf370a638f32d1a3ad50526844ba58d63e3ab81ba91e2a7c6d037e"},
{file = "scikit_learn-1.0.2-cp39-cp39-macosx_10_13_x86_64.whl", hash = "sha256:a90b60048f9ffdd962d2ad2fb16367a87ac34d76e02550968719eb7b5716fd10"},
{file = "scikit_learn-1.0.2-cp39-cp39-macosx_12_0_arm64.whl", hash = "sha256:7a93c1292799620df90348800d5ac06f3794c1316ca247525fa31169f6d25855"},
{file = "scikit_learn-1.0.2-cp39-cp39-manylinux_2_12_i686.manylinux2010_i686.whl", hash = "sha256:eabceab574f471de0b0eb3f2ecf2eee9f10b3106570481d007ed1c84ebf6d6a1"},
{file = "scikit_learn-1.0.2-cp39-cp39-manylinux_2_12_x86_64.manylinux2010_x86_64.whl", hash = "sha256:55f2f3a8414e14fbee03782f9fe16cca0f141d639d2b1c1a36779fa069e1db57"},
{file = "scikit_learn-1.0.2-cp39-cp39-manylinux_2_17_aarch64.manylinux2014_aarch64.whl", hash = "sha256:80095a1e4b93bd33261ef03b9bc86d6db649f988ea4dbcf7110d0cded8d7213d"},
{file = "scikit_learn-1.0.2-cp39-cp39-manylinux_2_17_i686.manylinux2014_i686.whl", hash = "sha256:fa38a1b9b38ae1fad2863eff5e0d69608567453fdfc850c992e6e47eb764e846"},
{file = "scikit_learn-1.0.2-cp39-cp39-manylinux_2_17_x86_64.manylinux2014_x86_64.whl", hash = "sha256:ff746a69ff2ef25f62b36338c615dd15954ddc3ab8e73530237dd73235e76d62"},
{file = "scikit_learn-1.0.2-cp39-cp39-win32.whl", hash = "sha256:e174242caecb11e4abf169342641778f68e1bfaba80cd18acd6bc84286b9a534"},
{file = "scikit_learn-1.0.2-cp39-cp39-win_amd64.whl", hash = "sha256:b54a62c6e318ddbfa7d22c383466d38d2ee770ebdb5ddb668d56a099f6eaf75f"},
]
scipy = [
{file = "scipy-1.8.1-cp310-cp310-macosx_10_9_x86_64.whl", hash = "sha256:65b77f20202599c51eb2771d11a6b899b97989159b7975e9b5259594f1d35ef4"},
{file = "scipy-1.8.1-cp310-cp310-macosx_12_0_arm64.whl", hash = "sha256:e013aed00ed776d790be4cb32826adb72799c61e318676172495383ba4570aa4"},
{file = "scipy-1.8.1-cp310-cp310-macosx_12_0_universal2.macosx_10_9_x86_64.whl", hash = "sha256:02b567e722d62bddd4ac253dafb01ce7ed8742cf8031aea030a41414b86c1125"},
{file = "scipy-1.8.1-cp310-cp310-manylinux_2_17_aarch64.manylinux2014_aarch64.whl", hash = "sha256:1da52b45ce1a24a4a22db6c157c38b39885a990a566748fc904ec9f03ed8c6ba"},
{file = "scipy-1.8.1-cp310-cp310-manylinux_2_17_x86_64.manylinux2014_x86_64.whl", hash = "sha256:a0aa8220b89b2e3748a2836fbfa116194378910f1a6e78e4675a095bcd2c762d"},
{file = "scipy-1.8.1-cp310-cp310-win_amd64.whl", hash = "sha256:4e53a55f6a4f22de01ffe1d2f016e30adedb67a699a310cdcac312806807ca81"},
{file = "scipy-1.8.1-cp38-cp38-macosx_10_9_x86_64.whl", hash = "sha256:28d2cab0c6ac5aa131cc5071a3a1d8e1366dad82288d9ec2ca44df78fb50e649"},
{file = "scipy-1.8.1-cp38-cp38-macosx_12_0_arm64.whl", hash = "sha256:6311e3ae9cc75f77c33076cb2794fb0606f14c8f1b1c9ff8ce6005ba2c283621"},
{file = "scipy-1.8.1-cp38-cp38-macosx_12_0_universal2.macosx_10_9_x86_64.whl", hash = "sha256:3b69b90c9419884efeffaac2c38376d6ef566e6e730a231e15722b0ab58f0328"},
{file = "scipy-1.8.1-cp38-cp38-manylinux_2_12_i686.manylinux2010_i686.whl", hash = "sha256:6cc6b33139eb63f30725d5f7fa175763dc2df6a8f38ddf8df971f7c345b652dc"},
{file = "scipy-1.8.1-cp38-cp38-manylinux_2_17_aarch64.manylinux2014_aarch64.whl", hash = "sha256:9c4e3ae8a716c8b3151e16c05edb1daf4cb4d866caa385e861556aff41300c14"},
{file = "scipy-1.8.1-cp38-cp38-manylinux_2_17_x86_64.manylinux2014_x86_64.whl", hash = "sha256:23b22fbeef3807966ea42d8163322366dd89da9bebdc075da7034cee3a1441ca"},
{file = "scipy-1.8.1-cp38-cp38-win32.whl", hash = "sha256:4b93ec6f4c3c4d041b26b5f179a6aab8f5045423117ae7a45ba9710301d7e462"},
{file = "scipy-1.8.1-cp38-cp38-win_amd64.whl", hash = "sha256:70ebc84134cf0c504ce6a5f12d6db92cb2a8a53a49437a6bb4edca0bc101f11c"},
{file = "scipy-1.8.1-cp39-cp39-macosx_10_9_x86_64.whl", hash = "sha256:f3e7a8867f307e3359cc0ed2c63b61a1e33a19080f92fe377bc7d49f646f2ec1"},
{file = "scipy-1.8.1-cp39-cp39-macosx_12_0_arm64.whl", hash = "sha256:2ef0fbc8bcf102c1998c1f16f15befe7cffba90895d6e84861cd6c6a33fb54f6"},
{file = "scipy-1.8.1-cp39-cp39-macosx_12_0_universal2.macosx_10_9_x86_64.whl", hash = "sha256:83606129247e7610b58d0e1e93d2c5133959e9cf93555d3c27e536892f1ba1f2"},
{file = "scipy-1.8.1-cp39-cp39-manylinux_2_12_i686.manylinux2010_i686.whl", hash = "sha256:93d07494a8900d55492401917a119948ed330b8c3f1d700e0b904a578f10ead4"},
{file = "scipy-1.8.1-cp39-cp39-manylinux_2_17_aarch64.manylinux2014_aarch64.whl", hash = "sha256:d3b3c8924252caaffc54d4a99f1360aeec001e61267595561089f8b5900821bb"},
{file = "scipy-1.8.1-cp39-cp39-manylinux_2_17_x86_64.manylinux2014_x86_64.whl", hash = "sha256:70de2f11bf64ca9921fda018864c78af7147025e467ce9f4a11bc877266900a6"},
{file = "scipy-1.8.1-cp39-cp39-win32.whl", hash = "sha256:1166514aa3bbf04cb5941027c6e294a000bba0cf00f5cdac6c77f2dad479b434"},
{file = "scipy-1.8.1-cp39-cp39-win_amd64.whl", hash = "sha256:9dd4012ac599a1e7eb63c114d1eee1bcfc6dc75a29b589ff0ad0bb3d9412034f"},
{file = "scipy-1.8.1.tar.gz", hash = "sha256:9e3fb1b0e896f14a85aa9a28d5f755daaeeb54c897b746df7a55ccb02b340f33"},
{file = "scipy-1.9.3-cp310-cp310-macosx_10_9_x86_64.whl", hash = "sha256:1884b66a54887e21addf9c16fb588720a8309a57b2e258ae1c7986d4444d3bc0"},
{file = "scipy-1.9.3-cp310-cp310-macosx_12_0_arm64.whl", hash = "sha256:83b89e9586c62e787f5012e8475fbb12185bafb996a03257e9675cd73d3736dd"},
{file = "scipy-1.9.3-cp310-cp310-manylinux_2_17_aarch64.manylinux2014_aarch64.whl", hash = "sha256:1a72d885fa44247f92743fc20732ae55564ff2a519e8302fb7e18717c5355a8b"},
{file = "scipy-1.9.3-cp310-cp310-manylinux_2_17_x86_64.manylinux2014_x86_64.whl", hash = "sha256:d01e1dd7b15bd2449c8bfc6b7cc67d630700ed655654f0dfcf121600bad205c9"},
{file = "scipy-1.9.3-cp310-cp310-win_amd64.whl", hash = "sha256:68239b6aa6f9c593da8be1509a05cb7f9efe98b80f43a5861cd24c7557e98523"},
{file = "scipy-1.9.3-cp311-cp311-macosx_10_9_x86_64.whl", hash = "sha256:b41bc822679ad1c9a5f023bc93f6d0543129ca0f37c1ce294dd9d386f0a21096"},
{file = "scipy-1.9.3-cp311-cp311-macosx_12_0_arm64.whl", hash = "sha256:90453d2b93ea82a9f434e4e1cba043e779ff67b92f7a0e85d05d286a3625df3c"},
{file = "scipy-1.9.3-cp311-cp311-manylinux_2_17_aarch64.manylinux2014_aarch64.whl", hash = "sha256:83c06e62a390a9167da60bedd4575a14c1f58ca9dfde59830fc42e5197283dab"},
{file = "scipy-1.9.3-cp311-cp311-manylinux_2_17_x86_64.manylinux2014_x86_64.whl", hash = "sha256:abaf921531b5aeaafced90157db505e10345e45038c39e5d9b6c7922d68085cb"},
{file = "scipy-1.9.3-cp311-cp311-win_amd64.whl", hash = "sha256:06d2e1b4c491dc7d8eacea139a1b0b295f74e1a1a0f704c375028f8320d16e31"},
{file = "scipy-1.9.3-cp38-cp38-macosx_10_9_x86_64.whl", hash = "sha256:5a04cd7d0d3eff6ea4719371cbc44df31411862b9646db617c99718ff68d4840"},
{file = "scipy-1.9.3-cp38-cp38-macosx_12_0_arm64.whl", hash = "sha256:545c83ffb518094d8c9d83cce216c0c32f8c04aaf28b92cc8283eda0685162d5"},
{file = "scipy-1.9.3-cp38-cp38-manylinux_2_17_aarch64.manylinux2014_aarch64.whl", hash = "sha256:0d54222d7a3ba6022fdf5773931b5d7c56efe41ede7f7128c7b1637700409108"},
{file = "scipy-1.9.3-cp38-cp38-manylinux_2_17_x86_64.manylinux2014_x86_64.whl", hash = "sha256:cff3a5295234037e39500d35316a4c5794739433528310e117b8a9a0c76d20fc"},
{file = "scipy-1.9.3-cp38-cp38-win_amd64.whl", hash = "sha256:2318bef588acc7a574f5bfdff9c172d0b1bf2c8143d9582e05f878e580a3781e"},
{file = "scipy-1.9.3-cp39-cp39-macosx_10_9_x86_64.whl", hash = "sha256:d644a64e174c16cb4b2e41dfea6af722053e83d066da7343f333a54dae9bc31c"},
{file = "scipy-1.9.3-cp39-cp39-macosx_12_0_arm64.whl", hash = "sha256:da8245491d73ed0a994ed9c2e380fd058ce2fa8a18da204681f2fe1f57f98f95"},
{file = "scipy-1.9.3-cp39-cp39-manylinux_2_17_aarch64.manylinux2014_aarch64.whl", hash = "sha256:4db5b30849606a95dcf519763dd3ab6fe9bd91df49eba517359e450a7d80ce2e"},
{file = "scipy-1.9.3-cp39-cp39-manylinux_2_17_x86_64.manylinux2014_x86_64.whl", hash = "sha256:c68db6b290cbd4049012990d7fe71a2abd9ffbe82c0056ebe0f01df8be5436b0"},
{file = "scipy-1.9.3-cp39-cp39-win_amd64.whl", hash = "sha256:5b88e6d91ad9d59478fafe92a7c757d00c59e3bdc3331be8ada76a4f8d683f58"},
{file = "scipy-1.9.3.tar.gz", hash = "sha256:fbc5c05c85c1a02be77b1ff591087c83bc44579c6d2bd9fb798bb64ea5e1a027"},
]
seaborn = [
{file = "seaborn-0.12.1-py3-none-any.whl", hash = "sha256:a9eb39cba095fcb1e4c89a7fab1c57137d70a715a7f2eefcd41c9913c4d4ed65"},
{file = "seaborn-0.12.1.tar.gz", hash = "sha256:bb1eb1d51d3097368c187c3ef089c0288ec1fe8aa1c69fb324c68aa1d02df4c1"},
]
send2trash = [
{file = "Send2Trash-1.8.0-py3-none-any.whl", hash = "sha256:f20eaadfdb517eaca5ce077640cb261c7d2698385a6a0f072a4a5447fd49fa08"},
{file = "Send2Trash-1.8.0.tar.gz", hash = "sha256:d2c24762fd3759860a0aff155e45871447ea58d2be6bdd39b5c8f966a0c99c2d"},
]
setuptools = [
{file = "setuptools-65.6.1-py3-none-any.whl", hash = "sha256:9b1b1b4129877c74b0f77de72b64a1084a57ccb106e7252f5fb70f192b3d9055"},
{file = "setuptools-65.6.1.tar.gz", hash = "sha256:1da770a0ee69681e4d2a8196d0b30c16f25d1c8b3d3e755baaedc90f8db04963"},
]
setuptools-scm = [
{file = "setuptools_scm-7.0.5-py3-none-any.whl", hash = "sha256:7930f720905e03ccd1e1d821db521bff7ec2ac9cf0ceb6552dd73d24a45d3b02"},
{file = "setuptools_scm-7.0.5.tar.gz", hash = "sha256:031e13af771d6f892b941adb6ea04545bbf91ebc5ce68c78aaf3fff6e1fb4844"},
]
shap = [
{file = "shap-0.40.0-cp36-cp36m-macosx_10_9_x86_64.whl", hash = "sha256:8bb8b4c01bd33592412dae5246286f62efbb24ad774b63e59b8b16969b915b6d"},
{file = "shap-0.40.0-cp36-cp36m-manylinux2010_i686.whl", hash = "sha256:d2844acab55e18bcb3d691237a720301223a38805e6e43752e6717f3a8b2cc28"},
{file = "shap-0.40.0-cp36-cp36m-manylinux2010_x86_64.whl", hash = "sha256:e7dd3040b0ec91bc9f477a354973d231d3a6beebe2fa7a5c6a565a79ba7746e8"},
{file = "shap-0.40.0-cp36-cp36m-win32.whl", hash = "sha256:86ea1466244c7e0d0c5dd91d26a90e0b645f5c9d7066810462a921263463529b"},
{file = "shap-0.40.0-cp36-cp36m-win_amd64.whl", hash = "sha256:bbf0cfa30cd8c51f8830d3f25c3881b9949e062124cd0d0b3d8efdc7e0cf5136"},
{file = "shap-0.40.0-cp37-cp37m-macosx_10_9_x86_64.whl", hash = "sha256:3d3c5ace8bd5222b455fa5650f9043146e19d80d701f95b25c4c5fb81f628547"},
{file = "shap-0.40.0-cp37-cp37m-manylinux2010_i686.whl", hash = "sha256:18b4ca36a43409b784dc76810f76aaa504c467eac17fa89ef5ee330cb460b2b7"},
{file = "shap-0.40.0-cp37-cp37m-manylinux2010_x86_64.whl", hash = "sha256:dbb1ec9b2c05c3939425529437c5f3cfba7a3929fed0e820fb84a42e82358cdd"},
{file = "shap-0.40.0-cp37-cp37m-win32.whl", hash = "sha256:0d12f7d86481afd000d5f144c10cadb31d52fb1f77f68659472d6f6d89f7843b"},
{file = "shap-0.40.0-cp37-cp37m-win_amd64.whl", hash = "sha256:dbd07e48fc7f4d5916f6cdd9dbb8d29b7711a265cc9beac92e7d4a4d9e738bc7"},
{file = "shap-0.40.0-cp38-cp38-macosx_10_9_x86_64.whl", hash = "sha256:399325caecc7306eb7de17ac19aa797abbf2fcda47d2bb4588d9492adb2dce65"},
{file = "shap-0.40.0-cp38-cp38-manylinux2010_i686.whl", hash = "sha256:4ec50bd0aa24efe1add177371b8b62080484efb87c6dbcf321895c5a08cf68d6"},
{file = "shap-0.40.0-cp38-cp38-manylinux2010_x86_64.whl", hash = "sha256:e2b5f2d3cac82de0c49afde6529bebb6d5b20334325640267bf25dce572175a1"},
{file = "shap-0.40.0-cp38-cp38-win32.whl", hash = "sha256:ba06256568747aaab9ad0091306550bfe826c1f195bf2cf57b405ae1de16faed"},
{file = "shap-0.40.0-cp38-cp38-win_amd64.whl", hash = "sha256:fb1b325a55fdf58061d332ed3308d44162084d4cb5f53f2c7774ce943d60b0ad"},
{file = "shap-0.40.0-cp39-cp39-macosx_10_9_x86_64.whl", hash = "sha256:f282fa12ca6fc594bcadca389309d733f73fe071e29ab49cb6e51beaa8b01a1a"},
{file = "shap-0.40.0-cp39-cp39-manylinux2010_i686.whl", hash = "sha256:2e72a47407f010f845b3ed6cb4f5160f0907ec8ab97df2bca164ebcb263b4205"},
{file = "shap-0.40.0-cp39-cp39-manylinux2010_x86_64.whl", hash = "sha256:649c905f9a4629839142e1769235989fb61730eb789a70d27ec7593eb02186a7"},
{file = "shap-0.40.0-cp39-cp39-win32.whl", hash = "sha256:5c220632ba57426d450dcc8ca43c55f657fe18e18f5d223d2a4e2aa02d905047"},
{file = "shap-0.40.0-cp39-cp39-win_amd64.whl", hash = "sha256:46e7084ce021eea450306bf7434adaead53921fd32504f04d1804569839e2979"},
{file = "shap-0.40.0.tar.gz", hash = "sha256:add0a27bb4eb57f0a363c2c4265b1a1328a8c15b01c14c7d432d9cc387dd8579"},
]
six = [
{file = "six-1.16.0-py2.py3-none-any.whl", hash = "sha256:8abb2f1d86890a2dfb989f9a77cfcfd3e47c2a354b01111771326f8aa26e0254"},
{file = "six-1.16.0.tar.gz", hash = "sha256:1e61c37477a1626458e36f7b1d82aa5c9b094fa4802892072e49de9c60c4c926"},
]
slicer = [
{file = "slicer-0.0.7-py3-none-any.whl", hash = "sha256:0b94faa5251c0f23782c03f7b7eedda91d80144059645f452c4bc80fab875976"},
{file = "slicer-0.0.7.tar.gz", hash = "sha256:f5d5f7b45f98d155b9c0ba6554fa9770c6b26d5793a3e77a1030fb56910ebeec"},
]
smart-open = [
{file = "smart_open-5.2.1-py3-none-any.whl", hash = "sha256:71d14489da58b60ce12fc3ecb823facc59a8b23cd1b58edb97175640350d3a62"},
{file = "smart_open-5.2.1.tar.gz", hash = "sha256:75abf758717a92a8f53aa96953f0c245c8cedf8e1e4184903db3659b419d4c17"},
]
sniffio = [
{file = "sniffio-1.3.0-py3-none-any.whl", hash = "sha256:eecefdce1e5bbfb7ad2eeaabf7c1eeb404d7757c379bd1f7e5cce9d8bf425384"},
{file = "sniffio-1.3.0.tar.gz", hash = "sha256:e60305c5e5d314f5389259b7f22aaa33d8f7dee49763119234af3755c55b9101"},
]
snowballstemmer = [
{file = "snowballstemmer-2.2.0-py2.py3-none-any.whl", hash = "sha256:c8e1716e83cc398ae16824e5572ae04e0d9fc2c6b985fb0f900f5f0c96ecba1a"},
{file = "snowballstemmer-2.2.0.tar.gz", hash = "sha256:09b16deb8547d3412ad7b590689584cd0fe25ec8db3be37788be3810cbf19cb1"},
]
sortedcontainers = [
{file = "sortedcontainers-2.4.0-py2.py3-none-any.whl", hash = "sha256:a163dcaede0f1c021485e957a39245190e74249897e2ae4b2aa38595db237ee0"},
{file = "sortedcontainers-2.4.0.tar.gz", hash = "sha256:25caa5a06cc30b6b83d11423433f65d1f9d76c4c6a0c90e3379eaa43b9bfdb88"},
]
soupsieve = [
{file = "soupsieve-2.3.2.post1-py3-none-any.whl", hash = "sha256:3b2503d3c7084a42b1ebd08116e5f81aadfaea95863628c80a3b774a11b7c759"},
{file = "soupsieve-2.3.2.post1.tar.gz", hash = "sha256:fc53893b3da2c33de295667a0e19f078c14bf86544af307354de5fcf12a3f30d"},
]
spacy = [
{file = "spacy-3.4.3-cp310-cp310-macosx_10_9_x86_64.whl", hash = "sha256:e546b314f619502ae03e5eb9a0cfd09ca7a9db265bcdd8a3af83cfb0f1432e55"},
{file = "spacy-3.4.3-cp310-cp310-macosx_11_0_arm64.whl", hash = "sha256:ded11aa8966236aab145b4d2d024b3eb61ac50078362d77d9ed7d8c240ef0f4a"},
{file = "spacy-3.4.3-cp310-cp310-manylinux_2_17_aarch64.manylinux2014_aarch64.whl", hash = "sha256:462e141f514d78cff85685b5b12eb8cadac0bad2f7820149cbe18d03ccb2e59c"},
{file = "spacy-3.4.3-cp310-cp310-manylinux_2_17_x86_64.manylinux2014_x86_64.whl", hash = "sha256:c966d25b3f3e49f5de08546b3638928f49678c365cbbebd0eec28f74e0adb539"},
{file = "spacy-3.4.3-cp310-cp310-win_amd64.whl", hash = "sha256:2ddba486c4c981abe6f1e3fd72648dc8811966e5f0e05808f9c9fab155c388d7"},
{file = "spacy-3.4.3-cp311-cp311-macosx_10_9_x86_64.whl", hash = "sha256:3c87117dd335fba44d1c0d77602f0763c3addf4e7ef9bdbe9a495466c3484c69"},
{file = "spacy-3.4.3-cp311-cp311-macosx_11_0_arm64.whl", hash = "sha256:3ce3938720f48eaeeb360a7f623f15a0d9efd1a688d5d740e3d4cdcd6f6da8a3"},
{file = "spacy-3.4.3-cp311-cp311-manylinux_2_17_aarch64.manylinux2014_aarch64.whl", hash = "sha256:6ad6bf5e4e7f0bc2ef94b7ff6fe59abd766f74c192bca2f17430a3b3cd5bda5a"},
{file = "spacy-3.4.3-cp311-cp311-manylinux_2_17_x86_64.manylinux2014_x86_64.whl", hash = "sha256:f6644c678bd7af567c6ce679f71d64119282e7d6f1a6f787162a91be3ea39333"},
{file = "spacy-3.4.3-cp311-cp311-win_amd64.whl", hash = "sha256:e6b871de8857a6820140358db3943180fdbe03d44ed792155cee6cb95f4ac4ea"},
{file = "spacy-3.4.3-cp36-cp36m-manylinux_2_17_aarch64.manylinux2014_aarch64.whl", hash = "sha256:d211c2b8894354bf8d961af9a9dcab38f764e1dcddd7b80760e438fcd4c9fe43"},
{file = "spacy-3.4.3-cp36-cp36m-manylinux_2_17_x86_64.manylinux2014_x86_64.whl", hash = "sha256:2ea41f9de30435456235c4182d8bc2eb54a0a64719856e66e780350bb4c8cfbe"},
{file = "spacy-3.4.3-cp36-cp36m-win_amd64.whl", hash = "sha256:afaf6e716cbac4a0fbfa9e9bf95decff223936597ddd03ea869118a7576aa1b1"},
{file = "spacy-3.4.3-cp37-cp37m-macosx_10_9_x86_64.whl", hash = "sha256:7115da36369b3c537caf2fe08e0b45528bd091c7f56ba3580af1e6fdfa9b1081"},
{file = "spacy-3.4.3-cp37-cp37m-manylinux_2_17_aarch64.manylinux2014_aarch64.whl", hash = "sha256:3b3e629c889cac9656151286ec1232c6a948ce0d44a39f1ef5e60fed4f183a10"},
{file = "spacy-3.4.3-cp37-cp37m-manylinux_2_17_x86_64.manylinux2014_x86_64.whl", hash = "sha256:9277cd0fcb96ee5dd885f7e96c639f21afd96198d61ca32100446afbff4dfbef"},
{file = "spacy-3.4.3-cp37-cp37m-win_amd64.whl", hash = "sha256:a36bd06a5a147350e5f5f6903c4777296c37b18199251bb41056c3a73aa4494f"},
{file = "spacy-3.4.3-cp38-cp38-macosx_10_9_x86_64.whl", hash = "sha256:bdafcd0823ca804c39d0bed9e677eb7d0235b1259563d0fd4d3a201c71108af8"},
{file = "spacy-3.4.3-cp38-cp38-macosx_11_0_arm64.whl", hash = "sha256:0cdc23a48e6543402b4c56ebf2d36246001175c29fd56d3081efcec684651abc"},
{file = "spacy-3.4.3-cp38-cp38-manylinux_2_17_aarch64.manylinux2014_aarch64.whl", hash = "sha256:455c2fbd1de24b6fe34fa121d87525134d7498f9f458ebc8274d7940b473999e"},
{file = "spacy-3.4.3-cp38-cp38-manylinux_2_17_x86_64.manylinux2014_x86_64.whl", hash = "sha256:d1c85279fbb6b75d7fb8d7c59c2b734502e51271cad90926e8df1d21b67da5aa"},
{file = "spacy-3.4.3-cp38-cp38-win_amd64.whl", hash = "sha256:5c0d65f39184f522b4e67b965a42d121a3b2d799362682fe8847b64b0ce5bc7c"},
{file = "spacy-3.4.3-cp39-cp39-macosx_10_9_x86_64.whl", hash = "sha256:a7b97ec21ed773edb2479ae5d6c7686b8034f418df6bccd9218f5c3c2b7cf888"},
{file = "spacy-3.4.3-cp39-cp39-macosx_11_0_arm64.whl", hash = "sha256:36a9a506029842795099fd97ad95f0da2845c319020fcc7164cbf33650726f83"},
{file = "spacy-3.4.3-cp39-cp39-manylinux_2_17_aarch64.manylinux2014_aarch64.whl", hash = "sha256:5ab293eb1423fa05c7ee71b2fedda57c2b4a4ca8dc054ce678809457287b01dc"},
{file = "spacy-3.4.3-cp39-cp39-manylinux_2_17_x86_64.manylinux2014_x86_64.whl", hash = "sha256:bb6d0f185126decc8392cde7d28eb6e85ba4bca15424713288cccc49c2a3c52b"},
{file = "spacy-3.4.3-cp39-cp39-win_amd64.whl", hash = "sha256:676ab9ab2cf94ba48caa306f185a166e85bd35b388ec24512c8ba7dfcbc7517e"},
{file = "spacy-3.4.3.tar.gz", hash = "sha256:22698cf5175e2b697e82699fcccee3092b42137a57d352df208d71657fd693bb"},
]
spacy-legacy = [
{file = "spacy-legacy-3.0.10.tar.gz", hash = "sha256:16104595d8ab1b7267f817a449ad1f986eb1f2a2edf1050748f08739a479679a"},
{file = "spacy_legacy-3.0.10-py2.py3-none-any.whl", hash = "sha256:8526a54d178dee9b7f218d43e5c21362c59056c5da23380b319b56043e9211f3"},
]
spacy-loggers = [
{file = "spacy-loggers-1.0.3.tar.gz", hash = "sha256:00f6fd554db9fd1fde6501b23e1f0e72f6eef14bb1e7fc15456d11d1d2de92ca"},
{file = "spacy_loggers-1.0.3-py3-none-any.whl", hash = "sha256:f74386b390a023f9615dcb499b7b4ad63338236a8187f0ec4dfe265a9f665ee8"},
]
sparse = [
{file = "sparse-0.13.0-py2.py3-none-any.whl", hash = "sha256:95ed0b649a0663b1488756ad4cf242b0a9bb2c9a25bc752a7c6ca9fbe8258966"},
{file = "sparse-0.13.0.tar.gz", hash = "sha256:685dc994aa770ee1b23f2d5392819c8429f27958771f8dceb2c4fb80210d5915"},
]
sphinx = [
{file = "Sphinx-5.3.0.tar.gz", hash = "sha256:51026de0a9ff9fc13c05d74913ad66047e104f56a129ff73e174eb5c3ee794b5"},
{file = "sphinx-5.3.0-py3-none-any.whl", hash = "sha256:060ca5c9f7ba57a08a1219e547b269fadf125ae25b06b9fa7f66768efb652d6d"},
]
sphinx-copybutton = [
{file = "sphinx-copybutton-0.5.0.tar.gz", hash = "sha256:a0c059daadd03c27ba750da534a92a63e7a36a7736dcf684f26ee346199787f6"},
{file = "sphinx_copybutton-0.5.0-py3-none-any.whl", hash = "sha256:9684dec7434bd73f0eea58dda93f9bb879d24bff2d8b187b1f2ec08dfe7b5f48"},
]
sphinx-design = [
{file = "sphinx_design-0.3.0-py3-none-any.whl", hash = "sha256:823c1dd74f31efb3285ec2f1254caefed29d762a40cd676f58413a1e4ed5cc96"},
{file = "sphinx_design-0.3.0.tar.gz", hash = "sha256:7183fa1fae55b37ef01bda5125a21ee841f5bbcbf59a35382be598180c4cefba"},
]
sphinx-rtd-theme = [
{file = "sphinx_rtd_theme-1.1.1-py2.py3-none-any.whl", hash = "sha256:31faa07d3e97c8955637fc3f1423a5ab2c44b74b8cc558a51498c202ce5cbda7"},
{file = "sphinx_rtd_theme-1.1.1.tar.gz", hash = "sha256:6146c845f1e1947b3c3dd4432c28998a1693ccc742b4f9ad7c63129f0757c103"},
]
sphinxcontrib-applehelp = [
{file = "sphinxcontrib-applehelp-1.0.2.tar.gz", hash = "sha256:a072735ec80e7675e3f432fcae8610ecf509c5f1869d17e2eecff44389cdbc58"},
{file = "sphinxcontrib_applehelp-1.0.2-py2.py3-none-any.whl", hash = "sha256:806111e5e962be97c29ec4c1e7fe277bfd19e9652fb1a4392105b43e01af885a"},
]
sphinxcontrib-devhelp = [
{file = "sphinxcontrib-devhelp-1.0.2.tar.gz", hash = "sha256:ff7f1afa7b9642e7060379360a67e9c41e8f3121f2ce9164266f61b9f4b338e4"},
{file = "sphinxcontrib_devhelp-1.0.2-py2.py3-none-any.whl", hash = "sha256:8165223f9a335cc1af7ffe1ed31d2871f325254c0423bc0c4c7cd1c1e4734a2e"},
]
sphinxcontrib-googleanalytics = []
sphinxcontrib-htmlhelp = [
{file = "sphinxcontrib-htmlhelp-2.0.0.tar.gz", hash = "sha256:f5f8bb2d0d629f398bf47d0d69c07bc13b65f75a81ad9e2f71a63d4b7a2f6db2"},
{file = "sphinxcontrib_htmlhelp-2.0.0-py2.py3-none-any.whl", hash = "sha256:d412243dfb797ae3ec2b59eca0e52dac12e75a241bf0e4eb861e450d06c6ed07"},
]
sphinxcontrib-jsmath = [
{file = "sphinxcontrib-jsmath-1.0.1.tar.gz", hash = "sha256:a9925e4a4587247ed2191a22df5f6970656cb8ca2bd6284309578f2153e0c4b8"},
{file = "sphinxcontrib_jsmath-1.0.1-py2.py3-none-any.whl", hash = "sha256:2ec2eaebfb78f3f2078e73666b1415417a116cc848b72e5172e596c871103178"},
]
sphinxcontrib-qthelp = [
{file = "sphinxcontrib-qthelp-1.0.3.tar.gz", hash = "sha256:4c33767ee058b70dba89a6fc5c1892c0d57a54be67ddd3e7875a18d14cba5a72"},
{file = "sphinxcontrib_qthelp-1.0.3-py2.py3-none-any.whl", hash = "sha256:bd9fc24bcb748a8d51fd4ecaade681350aa63009a347a8c14e637895444dfab6"},
]
sphinxcontrib-serializinghtml = [
{file = "sphinxcontrib-serializinghtml-1.1.5.tar.gz", hash = "sha256:aa5f6de5dfdf809ef505c4895e51ef5c9eac17d0f287933eb49ec495280b6952"},
{file = "sphinxcontrib_serializinghtml-1.1.5-py2.py3-none-any.whl", hash = "sha256:352a9a00ae864471d3a7ead8d7d79f5fc0b57e8b3f95e9867eb9eb28999b92fd"},
]
srsly = [
{file = "srsly-2.4.5-cp310-cp310-macosx_10_9_x86_64.whl", hash = "sha256:8fed31ef8acbb5fead2152824ef39e12d749fcd254968689ba5991dd257b63b4"},
{file = "srsly-2.4.5-cp310-cp310-macosx_11_0_arm64.whl", hash = "sha256:04d0b4cd91e098cdac12d2c28e256b1181ba98bcd00e460b8e42dee3e8542804"},
{file = "srsly-2.4.5-cp310-cp310-manylinux_2_17_aarch64.manylinux2014_aarch64.whl", hash = "sha256:6d83bea1f774b54d9313a374a95f11a776d37bcedcda93c526bf7f1cb5f26428"},
{file = "srsly-2.4.5-cp310-cp310-manylinux_2_17_x86_64.manylinux2014_x86_64.whl", hash = "sha256:cae5d48a0bda55a3728f49976ea0b652f508dbc5ac3e849f41b64a5753ec7f0a"},
{file = "srsly-2.4.5-cp310-cp310-win_amd64.whl", hash = "sha256:f74c64934423bcc2d3508cf3a079c7034e5cde988255dc57c7a09794c78f0610"},
{file = "srsly-2.4.5-cp311-cp311-macosx_10_9_x86_64.whl", hash = "sha256:0f9abb7857f9363f1ac52123db94dfe1c4af8959a39d698eff791d17e45e00b6"},
{file = "srsly-2.4.5-cp311-cp311-macosx_11_0_arm64.whl", hash = "sha256:f48d40c3b3d20e38410e7a95fa5b4050c035f467b0793aaf67188b1edad37fe3"},
{file = "srsly-2.4.5-cp311-cp311-manylinux_2_17_aarch64.manylinux2014_aarch64.whl", hash = "sha256:1434759effec2ee266a24acd9b53793a81cac01fc1e6321c623195eda1b9c7df"},
{file = "srsly-2.4.5-cp311-cp311-manylinux_2_17_x86_64.manylinux2014_x86_64.whl", hash = "sha256:5e7b0cd9853b0d9e00ad23d26199c1e44d8fd74096cbbbabc92447a915bcfd78"},
{file = "srsly-2.4.5-cp311-cp311-win_amd64.whl", hash = "sha256:874010587a807264963de9a1c91668c43cee9ed2f683f5406bdf5a34dfe12cca"},
{file = "srsly-2.4.5-cp36-cp36m-manylinux_2_17_aarch64.manylinux2014_aarch64.whl", hash = "sha256:afa4e1fe143275339d1c4a74e46d4c75168eed8b200f44f2ea023d45ff089a2f"},
{file = "srsly-2.4.5-cp36-cp36m-manylinux_2_17_x86_64.manylinux2014_x86_64.whl", hash = "sha256:1c4291ee125796fb05e778e9ca8f9a829e8c314b757826f2e1d533e424a93531"},
{file = "srsly-2.4.5-cp36-cp36m-win_amd64.whl", hash = "sha256:8f258ee69aefb053258ac2e4f4b9d597e622b79f78874534430e864cef0be199"},
{file = "srsly-2.4.5-cp37-cp37m-macosx_10_9_x86_64.whl", hash = "sha256:ace951c3088204bd66f30326f93ab6e615ce1562a461a8a464759d99fa9c2a02"},
{file = "srsly-2.4.5-cp37-cp37m-manylinux_2_17_aarch64.manylinux2014_aarch64.whl", hash = "sha256:facab907801fbcb0e54b3532e04bc6a0709184d68004ef3a129e8c7e3ca63d82"},
{file = "srsly-2.4.5-cp37-cp37m-manylinux_2_17_x86_64.manylinux2014_x86_64.whl", hash = "sha256:a49c089541a9a0a27ccb841a596350b7ee1d6adfc7ebd28eddedfd34dc9f12c5"},
{file = "srsly-2.4.5-cp37-cp37m-win_amd64.whl", hash = "sha256:db6bc02bd1e3372a3636e47b22098107c9df2cf12d220321b51c586ba17904b3"},
{file = "srsly-2.4.5-cp38-cp38-macosx_10_9_x86_64.whl", hash = "sha256:9a95c682de8c6e6145199f10a7c597647ff7d398fb28874f845ba7d34a86a033"},
{file = "srsly-2.4.5-cp38-cp38-macosx_11_0_arm64.whl", hash = "sha256:8c26c5c0e07ea7bb7b8b8735e1b2261fea308c2c883b99211d11747162c6d897"},
{file = "srsly-2.4.5-cp38-cp38-manylinux_2_17_aarch64.manylinux2014_aarch64.whl", hash = "sha256:e0043eff95be45acb5ce09cebb80ebdb9f2b6856aa3a15979e6fe3cc9a486753"},
{file = "srsly-2.4.5-cp38-cp38-manylinux_2_17_x86_64.manylinux2014_x86_64.whl", hash = "sha256:a2075124d4872e754af966e76f3258cd526eeac84f0995ee8cd561fd4cf1b68e"},
{file = "srsly-2.4.5-cp38-cp38-win_amd64.whl", hash = "sha256:1a41e5b10902c885cabe326ba86d549d7011e38534c45bed158ecb8abd4b44ce"},
{file = "srsly-2.4.5-cp39-cp39-macosx_10_9_x86_64.whl", hash = "sha256:b5a96f0ae15b651fa3fd87421bd93e61c6dc46c0831cbe275c9b790d253126b5"},
{file = "srsly-2.4.5-cp39-cp39-macosx_11_0_arm64.whl", hash = "sha256:764906e9f4c2ac5f748c49d95c8bf79648404ebc548864f9cb1fa0707942d830"},
{file = "srsly-2.4.5-cp39-cp39-manylinux_2_17_aarch64.manylinux2014_aarch64.whl", hash = "sha256:95afe9625badaf5ce326e37b21362423d7e8578a5ec9c85b15c3fca93205a883"},
{file = "srsly-2.4.5-cp39-cp39-manylinux_2_17_x86_64.manylinux2014_x86_64.whl", hash = "sha256:90359cc3c5601afd45ec12c52bde1cf1ccbe0dc7d4244fd1f8d0c9e100c71707"},
{file = "srsly-2.4.5-cp39-cp39-win_amd64.whl", hash = "sha256:2d3b0d32be2267fb489da172d71399ac59f763189b47dbe68eedb0817afaa6dc"},
{file = "srsly-2.4.5.tar.gz", hash = "sha256:c842258967baa527cea9367986e42b8143a1a890e7d4a18d25a36edc3c7a33c7"},
]
stack-data = [
{file = "stack_data-0.6.1-py3-none-any.whl", hash = "sha256:960cb054d6a1b2fdd9cbd529e365b3c163e8dabf1272e02cfe36b58403cff5c6"},
{file = "stack_data-0.6.1.tar.gz", hash = "sha256:6c9a10eb5f342415fe085db551d673955611afb821551f554d91772415464315"},
]
statsmodels = [
{file = "statsmodels-0.13.5-cp310-cp310-macosx_10_9_x86_64.whl", hash = "sha256:c75319fddded9507cc310fc3980e4ae4d64e3ff37b322ad5e203a84f89d85203"},
{file = "statsmodels-0.13.5-cp310-cp310-macosx_11_0_arm64.whl", hash = "sha256:6f148920ef27c7ba69a5735724f65de9422c0c8bcef71b50c846b823ceab8840"},
{file = "statsmodels-0.13.5-cp310-cp310-manylinux_2_17_aarch64.manylinux2014_aarch64.whl", hash = "sha256:5cc4d3e866bfe0c4f804bca362d0e7e29d24b840aaba8d35a754387e16d2a119"},
{file = "statsmodels-0.13.5-cp310-cp310-manylinux_2_17_x86_64.manylinux2014_x86_64.whl", hash = "sha256:072950d6f7820a6b0bd6a27b2d792a6d6f952a1d2f62f0dcf8dd808799475855"},
{file = "statsmodels-0.13.5-cp310-cp310-win_amd64.whl", hash = "sha256:159ae9962c61b31dcffe6356d72ae3d074bc597ad9273ec93ae653fe607b8516"},
{file = "statsmodels-0.13.5-cp311-cp311-macosx_10_9_x86_64.whl", hash = "sha256:9061c0d5ee4f3038b590afedd527a925e5de27195dc342381bac7675b2c5efe4"},
{file = "statsmodels-0.13.5-cp311-cp311-macosx_11_0_arm64.whl", hash = "sha256:e1d89cba5fafc1bf8e75296fdfad0b619de2bfb5e6c132913991d207f3ead675"},
{file = "statsmodels-0.13.5-cp311-cp311-manylinux_2_17_aarch64.manylinux2014_aarch64.whl", hash = "sha256:01bc16e7c66acb30cd3dda6004c43212c758223d1966131226024a5c99ec5a7e"},
{file = "statsmodels-0.13.5-cp311-cp311-manylinux_2_17_x86_64.manylinux2014_x86_64.whl", hash = "sha256:5d5cd9ab5de2c7489b890213cba2aec3d6468eaaec547041c2dfcb1e03411f7e"},
{file = "statsmodels-0.13.5-cp311-cp311-win_amd64.whl", hash = "sha256:857d5c0564a68a7ef77dc2252bb43c994c0699919b4e1f06a9852c2fbb588765"},
{file = "statsmodels-0.13.5-cp37-cp37m-macosx_10_9_x86_64.whl", hash = "sha256:5a5348b2757ab31c5c31b498f25eff2ea3c42086bef3d3b88847c25a30bdab9c"},
{file = "statsmodels-0.13.5-cp37-cp37m-manylinux_2_17_aarch64.manylinux2014_aarch64.whl", hash = "sha256:9b21648e3a8e7514839ba000a48e495cdd8bb55f1b71c608cf314b05541e283b"},
{file = "statsmodels-0.13.5-cp37-cp37m-manylinux_2_17_x86_64.manylinux2014_x86_64.whl", hash = "sha256:1b829eada6cec07990f5e6820a152af4871c601fd458f76a896fb79ae2114985"},
{file = "statsmodels-0.13.5-cp37-cp37m-win_amd64.whl", hash = "sha256:872b3a8186ef20f647c7ab5ace512a8fc050148f3c2f366460ab359eec3d9695"},
{file = "statsmodels-0.13.5-cp38-cp38-macosx_10_9_x86_64.whl", hash = "sha256:bc1abb81d24f56425febd5a22bb852a1b98e53b80c4a67f50938f9512f154141"},
{file = "statsmodels-0.13.5-cp38-cp38-macosx_11_0_arm64.whl", hash = "sha256:a2c46f1b0811a9736db37badeb102c0903f33bec80145ced3aa54df61aee5c2b"},
{file = "statsmodels-0.13.5-cp38-cp38-manylinux_2_17_aarch64.manylinux2014_aarch64.whl", hash = "sha256:947f79ba9662359f1cfa6e943851f17f72b06e55f4a7c7a2928ed3bc57ed6cb8"},
{file = "statsmodels-0.13.5-cp38-cp38-manylinux_2_17_x86_64.manylinux2014_x86_64.whl", hash = "sha256:046251c939c51e7632bcc8c6d6f31b8ca0eaffdf726d2498463f8de3735c9a82"},
{file = "statsmodels-0.13.5-cp38-cp38-win_amd64.whl", hash = "sha256:84f720e8d611ef8f297e6d2ffa7248764e223ef7221a3fc136e47ae089609611"},
{file = "statsmodels-0.13.5-cp39-cp39-macosx_10_9_x86_64.whl", hash = "sha256:b0d1d24e4adf96ec3c64d9a027dcee2c5d5096bb0dad33b4d91034c0a3c40371"},
{file = "statsmodels-0.13.5-cp39-cp39-macosx_11_0_arm64.whl", hash = "sha256:0f0e5c9c58fb6cba41db01504ec8dd018c96a95152266b7d5d67e0de98840474"},
{file = "statsmodels-0.13.5-cp39-cp39-manylinux_2_17_aarch64.manylinux2014_aarch64.whl", hash = "sha256:5b034aa4b9ad4f4d21abc4dd4841be0809a446db14c7aa5c8a65090aea9f1143"},
{file = "statsmodels-0.13.5-cp39-cp39-manylinux_2_17_x86_64.manylinux2014_x86_64.whl", hash = "sha256:73f97565c29241e839ffcef74fa995afdfe781910ccc27c189e5890193085958"},
{file = "statsmodels-0.13.5-cp39-cp39-win_amd64.whl", hash = "sha256:2ff331e508f2d1a53d3a188305477f4cf05cd8c52beb6483885eb3d51c8be3ad"},
{file = "statsmodels-0.13.5.tar.gz", hash = "sha256:593526acae1c0fda0ea6c48439f67c3943094c542fe769f8b90fe9e6c6cc4871"},
]
sympy = [
{file = "sympy-1.11.1-py3-none-any.whl", hash = "sha256:938f984ee2b1e8eae8a07b884c8b7a1146010040fccddc6539c54f401c8f6fcf"},
{file = "sympy-1.11.1.tar.gz", hash = "sha256:e32380dce63cb7c0108ed525570092fd45168bdae2faa17e528221ef72e88658"},
]
tblib = [
{file = "tblib-1.7.0-py2.py3-none-any.whl", hash = "sha256:289fa7359e580950e7d9743eab36b0691f0310fce64dee7d9c31065b8f723e23"},
{file = "tblib-1.7.0.tar.gz", hash = "sha256:059bd77306ea7b419d4f76016aef6d7027cc8a0785579b5aad198803435f882c"},
]
tenacity = [
{file = "tenacity-8.1.0-py3-none-any.whl", hash = "sha256:35525cd47f82830069f0d6b73f7eb83bc5b73ee2fff0437952cedf98b27653ac"},
{file = "tenacity-8.1.0.tar.gz", hash = "sha256:e48c437fdf9340f5666b92cd7990e96bc5fc955e1298baf4a907e3972067a445"},
]
tensorboard = [
{file = "tensorboard-2.11.0-py3-none-any.whl", hash = "sha256:a0e592ee87962e17af3f0dce7faae3fbbd239030159e9e625cce810b7e35c53d"},
]
tensorboard-data-server = [
{file = "tensorboard_data_server-0.6.1-py3-none-any.whl", hash = "sha256:809fe9887682d35c1f7d1f54f0f40f98bb1f771b14265b453ca051e2ce58fca7"},
{file = "tensorboard_data_server-0.6.1-py3-none-macosx_10_9_x86_64.whl", hash = "sha256:fa8cef9be4fcae2f2363c88176638baf2da19c5ec90addb49b1cde05c95c88ee"},
{file = "tensorboard_data_server-0.6.1-py3-none-manylinux2010_x86_64.whl", hash = "sha256:d8237580755e58eff68d1f3abefb5b1e39ae5c8b127cc40920f9c4fb33f4b98a"},
]
tensorboard-plugin-wit = [
{file = "tensorboard_plugin_wit-1.8.1-py3-none-any.whl", hash = "sha256:ff26bdd583d155aa951ee3b152b3d0cffae8005dc697f72b44a8e8c2a77a8cbe"},
]
tensorflow = [
{file = "tensorflow-2.11.0-cp310-cp310-macosx_10_14_x86_64.whl", hash = "sha256:6c049fec6c2040685d6f43a63e17ccc5d6b0abc16b70cc6f5e7d691262b5d2d0"},
{file = "tensorflow-2.11.0-cp310-cp310-manylinux_2_17_aarch64.manylinux2014_aarch64.whl", hash = "sha256:bcc8380820cea8f68f6c90b8aee5432e8537e5bb9ec79ac61a98e6a9a02c7d40"},
{file = "tensorflow-2.11.0-cp310-cp310-manylinux_2_17_x86_64.manylinux2014_x86_64.whl", hash = "sha256:d973458241c8771bf95d4ba68ad5d67b094f72dd181c2d562ffab538c1b0dad7"},
{file = "tensorflow-2.11.0-cp310-cp310-win_amd64.whl", hash = "sha256:d470b772ee3c291a8c7be2331e7c379e0c338223c0bf532f5906d4556f17580d"},
{file = "tensorflow-2.11.0-cp37-cp37m-macosx_10_14_x86_64.whl", hash = "sha256:d29c1179149fa469ad68234c52c83081d037ead243f90e826074e2563a0f938a"},
{file = "tensorflow-2.11.0-cp37-cp37m-manylinux_2_17_aarch64.manylinux2014_aarch64.whl", hash = "sha256:2cdba2fce00d6c924470d4fb65d5e95a4b6571a863860608c0c13f0393f4ca0d"},
{file = "tensorflow-2.11.0-cp37-cp37m-manylinux_2_17_x86_64.manylinux2014_x86_64.whl", hash = "sha256:4f2ab20f93d2b52a44b414ec6dcf82aa12110e90e0920039a27108de28ae2728"},
{file = "tensorflow-2.11.0-cp37-cp37m-win_amd64.whl", hash = "sha256:445510f092f7827e1f60f59b8bfb58e664aaf05d07daaa21c5735a7f76ca2b25"},
{file = "tensorflow-2.11.0-cp38-cp38-macosx_10_14_x86_64.whl", hash = "sha256:056d29f2212342536ce3856aa47910a2515eb97ec0a6cc29ed47fc4be1369ec8"},
{file = "tensorflow-2.11.0-cp38-cp38-manylinux_2_17_aarch64.manylinux2014_aarch64.whl", hash = "sha256:17b29d6d360fad545ab1127db52592efd3f19ac55c1a45e5014da328ae867ab4"},
{file = "tensorflow-2.11.0-cp38-cp38-manylinux_2_17_x86_64.manylinux2014_x86_64.whl", hash = "sha256:335ab5cccd7a1c46e3d89d9d46913f0715e8032df8d7438f9743b3fb97b39f69"},
{file = "tensorflow-2.11.0-cp38-cp38-win_amd64.whl", hash = "sha256:d48da37c8ae711eb38047a56a052ca8bb4ee018a91a479e42b7a8d117628c32e"},
{file = "tensorflow-2.11.0-cp39-cp39-macosx_10_14_x86_64.whl", hash = "sha256:d9cf25bca641f2e5c77caa3bfd8dd6b892a7aec0695c54d2a7c9f52a54a8d487"},
{file = "tensorflow-2.11.0-cp39-cp39-manylinux_2_17_aarch64.manylinux2014_aarch64.whl", hash = "sha256:0d28f9691ebc48c0075e271023b3f147ae2bc29a3d3a7f42d45019c6b4a700d2"},
{file = "tensorflow-2.11.0-cp39-cp39-manylinux_2_17_x86_64.manylinux2014_x86_64.whl", hash = "sha256:276a44210d956701899dc78ad0aa116a0071f22fb0bcc1ea6bb59f7646b08d11"},
{file = "tensorflow-2.11.0-cp39-cp39-win_amd64.whl", hash = "sha256:cc3444fe1d58c65a195a69656bf56015bf19dc2916da607d784b0a1e215ec008"},
]
tensorflow-estimator = [
{file = "tensorflow_estimator-2.11.0-py2.py3-none-any.whl", hash = "sha256:ea3b64acfff3d9a244f06178c9bdedcbdd3f125b67d0888dba8229498d06468b"},
]
tensorflow-io-gcs-filesystem = [
{file = "tensorflow_io_gcs_filesystem-0.28.0-cp310-cp310-macosx_10_14_x86_64.whl", hash = "sha256:22753dc28c949bfaf29b573ee376370762c88d80330fe95cfb291261eb5e927a"},
{file = "tensorflow_io_gcs_filesystem-0.28.0-cp310-cp310-manylinux_2_12_x86_64.manylinux2010_x86_64.whl", hash = "sha256:52988659f405166df79905e9859bc84ae2a71e3ff61522ba32a95e4dce8e66d2"},
{file = "tensorflow_io_gcs_filesystem-0.28.0-cp310-cp310-win_amd64.whl", hash = "sha256:698d7f89e09812b9afeb47c3860797343a22f997c64ab9dab98132c61daa8a7d"},
{file = "tensorflow_io_gcs_filesystem-0.28.0-cp311-cp311-macosx_10_14_x86_64.whl", hash = "sha256:bbf245883aa52ec687b66d0fcbe0f5f0a92d98c0b1c53e6a736039a3548d29a1"},
{file = "tensorflow_io_gcs_filesystem-0.28.0-cp311-cp311-manylinux_2_12_x86_64.manylinux2010_x86_64.whl", hash = "sha256:6d95f306ff225c5053fd06deeab3e3a2716357923cb40c44d566c11be779caa3"},
{file = "tensorflow_io_gcs_filesystem-0.28.0-cp311-cp311-win_amd64.whl", hash = "sha256:5fbef5836e70026245d8d9e692c44dae2c6dbc208c743d01f5b7a2978d6b6bc6"},
{file = "tensorflow_io_gcs_filesystem-0.28.0-cp37-cp37m-macosx_10_14_x86_64.whl", hash = "sha256:00cf6a92f1f9f90b2ba2d728870bcd2a70b116316d0817ab0b91dd390c25b3fd"},
{file = "tensorflow_io_gcs_filesystem-0.28.0-cp37-cp37m-manylinux_2_12_x86_64.manylinux2010_x86_64.whl", hash = "sha256:f76cbe1a784841c223f6861e5f6c7e53aa6232cb626d57e76881a0638c365de6"},
{file = "tensorflow_io_gcs_filesystem-0.28.0-cp37-cp37m-win_amd64.whl", hash = "sha256:c5d99f56c12a349905ff684142e4d2df06ae68ecf50c4aad5449a5f81731d858"},
{file = "tensorflow_io_gcs_filesystem-0.28.0-cp38-cp38-macosx_10_14_x86_64.whl", hash = "sha256:b6e2d275020fb4d1a952cd3fa546483f4e46ad91d64e90d3458e5ca3d12f6477"},
{file = "tensorflow_io_gcs_filesystem-0.28.0-cp38-cp38-manylinux_2_12_x86_64.manylinux2010_x86_64.whl", hash = "sha256:a6670e0da16c884267e896ea5c3334d6fd319bd6ff7cf917043a9f3b2babb1b3"},
{file = "tensorflow_io_gcs_filesystem-0.28.0-cp38-cp38-win_amd64.whl", hash = "sha256:bfed720fc691d3f45802a7bed420716805aef0939c11cebf25798906201f626e"},
{file = "tensorflow_io_gcs_filesystem-0.28.0-cp39-cp39-macosx_10_14_x86_64.whl", hash = "sha256:cc062ce13ec95fb64b1fd426818a6d2b0e5be9692bc0e43a19cce115b6da4336"},
{file = "tensorflow_io_gcs_filesystem-0.28.0-cp39-cp39-manylinux_2_12_x86_64.manylinux2010_x86_64.whl", hash = "sha256:366e1eff8dbd6b64333d7061e2a8efd081ae4742614f717ced08d8cc9379eb50"},
{file = "tensorflow_io_gcs_filesystem-0.28.0-cp39-cp39-win_amd64.whl", hash = "sha256:9484893779324b2d34874b0aacf3b824eb4f22d782e75df029cbccab2e607974"},
]
termcolor = [
{file = "termcolor-2.1.1-py3-none-any.whl", hash = "sha256:fa852e957f97252205e105dd55bbc23b419a70fec0085708fc0515e399f304fd"},
{file = "termcolor-2.1.1.tar.gz", hash = "sha256:67cee2009adc6449c650f6bcf3bdeed00c8ba53a8cda5362733c53e0a39fb70b"},
]
terminado = [
{file = "terminado-0.17.0-py3-none-any.whl", hash = "sha256:bf6fe52accd06d0661d7611cc73202121ec6ee51e46d8185d489ac074ca457c2"},
{file = "terminado-0.17.0.tar.gz", hash = "sha256:520feaa3aeab8ad64a69ca779be54be9234edb2d0d6567e76c93c2c9a4e6e43f"},
]
thinc = [
{file = "thinc-8.1.5-cp310-cp310-macosx_10_9_x86_64.whl", hash = "sha256:5dc6629e4770a13dec34eda3c4d89302f1b5c91ac4663cd53f876a4e761fcc00"},
{file = "thinc-8.1.5-cp310-cp310-macosx_11_0_arm64.whl", hash = "sha256:8af5639de41a08d358fac073ac116faefe75289d9bed5c1fbf6c7a54724529ea"},
{file = "thinc-8.1.5-cp310-cp310-manylinux_2_17_aarch64.manylinux2014_aarch64.whl", hash = "sha256:4d66eeacc29769bf4238a0666f05e38d75dce60ab609eea5089975e6d8b82721"},
{file = "thinc-8.1.5-cp310-cp310-manylinux_2_17_x86_64.manylinux2014_x86_64.whl", hash = "sha256:25fcf9b53317f3addca048f1295d4708a95c526821295fe42398e23520514373"},
{file = "thinc-8.1.5-cp310-cp310-win_amd64.whl", hash = "sha256:a683f5280601f2fa1625e738e2b6ce481d17b07350823164f5863aab6b8b8a5d"},
{file = "thinc-8.1.5-cp311-cp311-macosx_10_9_x86_64.whl", hash = "sha256:404af2a714d6e688d27f7816042bca85766cbc57808aa9afb3309ad786000726"},
{file = "thinc-8.1.5-cp311-cp311-macosx_11_0_arm64.whl", hash = "sha256:ee28aa9773cb69d6c95d0c58b3fa9997c88840ad1eb877576f407a5b3b0f93c0"},
{file = "thinc-8.1.5-cp311-cp311-manylinux_2_17_aarch64.manylinux2014_aarch64.whl", hash = "sha256:7acccd5fb2fcd6caab1f3ad9d3f6acd1c6194a638dceccb5a33bd6f1875221ab"},
{file = "thinc-8.1.5-cp311-cp311-manylinux_2_17_x86_64.manylinux2014_x86_64.whl", hash = "sha256:1dc59ab558c85f901ac8299eb8ff1be14404b4d47e5ed3f94f897e25496e4f80"},
{file = "thinc-8.1.5-cp311-cp311-win_amd64.whl", hash = "sha256:07a4cf13c6f0259f32c9d023e2d32d0f5e0aa12ce0422792dbadd24fa1e0379e"},
{file = "thinc-8.1.5-cp36-cp36m-manylinux_2_17_aarch64.manylinux2014_aarch64.whl", hash = "sha256:3ad722c4b1351a712bf8759307ea1213f236aee4a170b2ff31f7908f31b34261"},
{file = "thinc-8.1.5-cp36-cp36m-manylinux_2_17_x86_64.manylinux2014_x86_64.whl", hash = "sha256:076d68f6c27862b66e15af3622651c58f66b3d3b1c69beadbf1c13da294f05cc"},
{file = "thinc-8.1.5-cp36-cp36m-win_amd64.whl", hash = "sha256:91a8ef8dd565b6aa9b3161b97eece079993109be156f4e8501c8bd36e02b6f3f"},
{file = "thinc-8.1.5-cp37-cp37m-macosx_10_9_x86_64.whl", hash = "sha256:73538c0e596d1f281678354f6508d4af5fad3ae0743b069a96628f2a96085fa5"},
{file = "thinc-8.1.5-cp37-cp37m-manylinux_2_17_aarch64.manylinux2014_aarch64.whl", hash = "sha256:ea5e6502565fe72f9a975f6fe5d1be9d19914d2a3abb3158da08b4adffaa97c6"},
{file = "thinc-8.1.5-cp37-cp37m-manylinux_2_17_x86_64.manylinux2014_x86_64.whl", hash = "sha256:d202e79e3d785a2931d580d3dafaa6ca357c5656c82341121731a3491a1c8887"},
{file = "thinc-8.1.5-cp37-cp37m-win_amd64.whl", hash = "sha256:61dfa235c891c1fa24f9607cd0cad264806adeb70d267162c6e5d91fb9f78640"},
{file = "thinc-8.1.5-cp38-cp38-macosx_10_9_x86_64.whl", hash = "sha256:b62a4247cce4c3a07014b9386b9045dbc15a83aa46102a7fcd5d8eec21fa463a"},
{file = "thinc-8.1.5-cp38-cp38-macosx_11_0_arm64.whl", hash = "sha256:345d15eb45743b305a35dd1dc77d282248e55e45a0a84c38d2dfc9fad6130125"},
{file = "thinc-8.1.5-cp38-cp38-manylinux_2_17_aarch64.manylinux2014_aarch64.whl", hash = "sha256:6793340b5ada30f11d9beaa6001ade6d80cf3a7877d701ec1710552145dabb33"},
{file = "thinc-8.1.5-cp38-cp38-manylinux_2_17_x86_64.manylinux2014_x86_64.whl", hash = "sha256:fa07750e65cc7d3bd922bf2046a10ef28cf22497990da13c3ca154b25449b758"},
{file = "thinc-8.1.5-cp38-cp38-win_amd64.whl", hash = "sha256:b7c1b8417e6bebcebe0bbded816b7b6587a1e239539109897e15cf8463dbed10"},
{file = "thinc-8.1.5-cp39-cp39-macosx_10_9_x86_64.whl", hash = "sha256:ad96acada56e4a0509b834c2e0950a5066727ddfc8d2201b83f7bca8751886aa"},
{file = "thinc-8.1.5-cp39-cp39-macosx_11_0_arm64.whl", hash = "sha256:5d0144cccb3fb08b15bba73a97f83c0f311a388417fb89d5bb4451abe559b0a2"},
{file = "thinc-8.1.5-cp39-cp39-manylinux_2_17_aarch64.manylinux2014_aarch64.whl", hash = "sha256:ced446d2af306a29b0c9ba8940a6631e2e9ef287f9643f4a1d539d69e9fc7266"},
{file = "thinc-8.1.5-cp39-cp39-manylinux_2_17_x86_64.manylinux2014_x86_64.whl", hash = "sha256:6bb376234c44f173445651c9bf397d05622e31c09a98f81cee98f5908d674380"},
{file = "thinc-8.1.5-cp39-cp39-win_amd64.whl", hash = "sha256:16be051c6f71d967fe87c3bda3a760699539cf75fee6b32527ea38feb3002e56"},
{file = "thinc-8.1.5.tar.gz", hash = "sha256:4d3e4de33d2d0eae7c1455c60c680e453b0204c29e3d2d548d7a9e7fe08ccfbd"},
]
threadpoolctl = [
{file = "threadpoolctl-3.1.0-py3-none-any.whl", hash = "sha256:8b99adda265feb6773280df41eece7b2e6561b772d21ffd52e372f999024907b"},
{file = "threadpoolctl-3.1.0.tar.gz", hash = "sha256:a335baacfaa4400ae1f0d8e3a58d6674d2f8828e3716bb2802c44955ad391380"},
]
tinycss2 = [
{file = "tinycss2-1.2.1-py3-none-any.whl", hash = "sha256:2b80a96d41e7c3914b8cda8bc7f705a4d9c49275616e886103dd839dfc847847"},
{file = "tinycss2-1.2.1.tar.gz", hash = "sha256:8cff3a8f066c2ec677c06dbc7b45619804a6938478d9d73c284b29d14ecb0627"},
]
tokenize-rt = [
{file = "tokenize_rt-5.0.0-py2.py3-none-any.whl", hash = "sha256:c67772c662c6b3dc65edf66808577968fb10badfc2042e3027196bed4daf9e5a"},
{file = "tokenize_rt-5.0.0.tar.gz", hash = "sha256:3160bc0c3e8491312d0485171dea861fc160a240f5f5766b72a1165408d10740"},
]
tomli = [
{file = "tomli-2.0.1-py3-none-any.whl", hash = "sha256:939de3e7a6161af0c887ef91b7d41a53e7c5a1ca976325f429cb46ea9bc30ecc"},
{file = "tomli-2.0.1.tar.gz", hash = "sha256:de526c12914f0c550d15924c62d72abc48d6fe7364aa87328337a31007fe8a4f"},
]
toolz = [
{file = "toolz-0.12.0-py3-none-any.whl", hash = "sha256:2059bd4148deb1884bb0eb770a3cde70e7f954cfbbdc2285f1f2de01fd21eb6f"},
{file = "toolz-0.12.0.tar.gz", hash = "sha256:88c570861c440ee3f2f6037c4654613228ff40c93a6c25e0eba70d17282c6194"},
]
torch = [
{file = "torch-1.12.1-cp310-cp310-manylinux1_x86_64.whl", hash = "sha256:9c038662db894a23e49e385df13d47b2a777ffd56d9bcd5b832593fab0a7e286"},
{file = "torch-1.12.1-cp310-cp310-manylinux2014_aarch64.whl", hash = "sha256:4e1b9c14cf13fd2ab8d769529050629a0e68a6fc5cb8e84b4a3cc1dd8c4fe541"},
{file = "torch-1.12.1-cp310-cp310-win_amd64.whl", hash = "sha256:e9c8f4a311ac29fc7e8e955cfb7733deb5dbe1bdaabf5d4af2765695824b7e0d"},
{file = "torch-1.12.1-cp310-none-macosx_10_9_x86_64.whl", hash = "sha256:976c3f997cea38ee91a0dd3c3a42322785414748d1761ef926b789dfa97c6134"},
{file = "torch-1.12.1-cp310-none-macosx_11_0_arm64.whl", hash = "sha256:68104e4715a55c4bb29a85c6a8d57d820e0757da363be1ba680fa8cc5be17b52"},
{file = "torch-1.12.1-cp37-cp37m-manylinux1_x86_64.whl", hash = "sha256:743784ccea0dc8f2a3fe6a536bec8c4763bd82c1352f314937cb4008d4805de1"},
{file = "torch-1.12.1-cp37-cp37m-manylinux2014_aarch64.whl", hash = "sha256:b5dbcca369800ce99ba7ae6dee3466607a66958afca3b740690d88168752abcf"},
{file = "torch-1.12.1-cp37-cp37m-win_amd64.whl", hash = "sha256:f3b52a634e62821e747e872084ab32fbcb01b7fa7dbb7471b6218279f02a178a"},
{file = "torch-1.12.1-cp37-none-macosx_10_9_x86_64.whl", hash = "sha256:8a34a2fbbaa07c921e1b203f59d3d6e00ed379f2b384445773bd14e328a5b6c8"},
{file = "torch-1.12.1-cp37-none-macosx_11_0_arm64.whl", hash = "sha256:42f639501928caabb9d1d55ddd17f07cd694de146686c24489ab8c615c2871f2"},
{file = "torch-1.12.1-cp38-cp38-manylinux1_x86_64.whl", hash = "sha256:0b44601ec56f7dd44ad8afc00846051162ef9c26a8579dda0a02194327f2d55e"},
{file = "torch-1.12.1-cp38-cp38-manylinux2014_aarch64.whl", hash = "sha256:cd26d8c5640c3a28c526d41ccdca14cf1cbca0d0f2e14e8263a7ac17194ab1d2"},
{file = "torch-1.12.1-cp38-cp38-win_amd64.whl", hash = "sha256:42e115dab26f60c29e298559dbec88444175528b729ae994ec4c65d56fe267dd"},
{file = "torch-1.12.1-cp38-none-macosx_10_9_x86_64.whl", hash = "sha256:a8320ba9ad87e80ca5a6a016e46ada4d1ba0c54626e135d99b2129a4541c509d"},
{file = "torch-1.12.1-cp38-none-macosx_11_0_arm64.whl", hash = "sha256:03e31c37711db2cd201e02de5826de875529e45a55631d317aadce2f1ed45aa8"},
{file = "torch-1.12.1-cp39-cp39-manylinux1_x86_64.whl", hash = "sha256:9b356aea223772cd754edb4d9ecf2a025909b8615a7668ac7d5130f86e7ec421"},
{file = "torch-1.12.1-cp39-cp39-manylinux2014_aarch64.whl", hash = "sha256:6cf6f54b43c0c30335428195589bd00e764a6d27f3b9ba637aaa8c11aaf93073"},
{file = "torch-1.12.1-cp39-cp39-win_amd64.whl", hash = "sha256:f00c721f489089dc6364a01fd84906348fe02243d0af737f944fddb36003400d"},
{file = "torch-1.12.1-cp39-none-macosx_10_9_x86_64.whl", hash = "sha256:bfec2843daa654f04fda23ba823af03e7b6f7650a873cdb726752d0e3718dada"},
{file = "torch-1.12.1-cp39-none-macosx_11_0_arm64.whl", hash = "sha256:69fe2cae7c39ccadd65a123793d30e0db881f1c1927945519c5c17323131437e"},
]
torchvision = [
{file = "torchvision-0.13.1-cp310-cp310-macosx_10_9_x86_64.whl", hash = "sha256:19286a733c69dcbd417b86793df807bd227db5786ed787c17297741a9b0d0fc7"},
{file = "torchvision-0.13.1-cp310-cp310-macosx_11_0_arm64.whl", hash = "sha256:08f592ea61836ebeceb5c97f4d7a813b9d7dc651bbf7ce4401563ccfae6a21fc"},
{file = "torchvision-0.13.1-cp310-cp310-manylinux1_x86_64.whl", hash = "sha256:ef5fe3ec1848123cd0ec74c07658192b3147dcd38e507308c790d5943e87b88c"},
{file = "torchvision-0.13.1-cp310-cp310-manylinux2014_aarch64.whl", hash = "sha256:099874088df104d54d8008f2a28539ca0117b512daed8bf3c2bbfa2b7ccb187a"},
{file = "torchvision-0.13.1-cp310-cp310-win_amd64.whl", hash = "sha256:8e4d02e4d8a203e0c09c10dfb478214c224d080d31efc0dbf36d9c4051f7f3c6"},
{file = "torchvision-0.13.1-cp37-cp37m-macosx_10_9_x86_64.whl", hash = "sha256:5e631241bee3661de64f83616656224af2e3512eb2580da7c08e08b8c965a8ac"},
{file = "torchvision-0.13.1-cp37-cp37m-manylinux1_x86_64.whl", hash = "sha256:899eec0b9f3b99b96d6f85b9aa58c002db41c672437677b553015b9135b3be7e"},
{file = "torchvision-0.13.1-cp37-cp37m-manylinux2014_aarch64.whl", hash = "sha256:83e9e2457f23110fd53b0177e1bc621518d6ea2108f570e853b768ce36b7c679"},
{file = "torchvision-0.13.1-cp37-cp37m-win_amd64.whl", hash = "sha256:7552e80fa222252b8b217a951c85e172a710ea4cad0ae0c06fbb67addece7871"},
{file = "torchvision-0.13.1-cp38-cp38-macosx_10_9_x86_64.whl", hash = "sha256:f230a1a40ed70d51e463ce43df243ec520902f8725de2502e485efc5eea9d864"},
{file = "torchvision-0.13.1-cp38-cp38-macosx_11_0_arm64.whl", hash = "sha256:e9a563894f9fa40692e24d1aa58c3ef040450017cfed3598ff9637f404f3fe3b"},
{file = "torchvision-0.13.1-cp38-cp38-manylinux1_x86_64.whl", hash = "sha256:7cb789ceefe6dcd0dc8eeda37bfc45efb7cf34770eac9533861d51ca508eb5b3"},
{file = "torchvision-0.13.1-cp38-cp38-manylinux2014_aarch64.whl", hash = "sha256:87c137f343197769a51333076e66bfcd576301d2cd8614b06657187c71b06c4f"},
{file = "torchvision-0.13.1-cp38-cp38-win_amd64.whl", hash = "sha256:4d8bf321c4380854ef04613935fdd415dce29d1088a7ff99e06e113f0efe9203"},
{file = "torchvision-0.13.1-cp39-cp39-macosx_10_9_x86_64.whl", hash = "sha256:0298bae3b09ac361866088434008d82b99d6458fe8888c8df90720ef4b347d44"},
{file = "torchvision-0.13.1-cp39-cp39-macosx_11_0_arm64.whl", hash = "sha256:c5ed609c8bc88c575226400b2232e0309094477c82af38952e0373edef0003fd"},
{file = "torchvision-0.13.1-cp39-cp39-manylinux1_x86_64.whl", hash = "sha256:3567fb3def829229ec217c1e38f08c5128ff7fb65854cac17ebac358ff7aa309"},
{file = "torchvision-0.13.1-cp39-cp39-manylinux2014_aarch64.whl", hash = "sha256:b167934a5943242da7b1e59318f911d2d253feeca0d13ad5d832b58eed943401"},
{file = "torchvision-0.13.1-cp39-cp39-win_amd64.whl", hash = "sha256:0e77706cc90462653620e336bb90daf03d7bf1b88c3a9a3037df8d111823a56e"},
]
tornado = [
{file = "tornado-6.2-cp37-abi3-macosx_10_9_universal2.whl", hash = "sha256:20f638fd8cc85f3cbae3c732326e96addff0a15e22d80f049e00121651e82e72"},
{file = "tornado-6.2-cp37-abi3-macosx_10_9_x86_64.whl", hash = "sha256:87dcafae3e884462f90c90ecc200defe5e580a7fbbb4365eda7c7c1eb809ebc9"},
{file = "tornado-6.2-cp37-abi3-manylinux_2_17_aarch64.manylinux2014_aarch64.whl", hash = "sha256:ba09ef14ca9893954244fd872798b4ccb2367c165946ce2dd7376aebdde8e3ac"},
{file = "tornado-6.2-cp37-abi3-manylinux_2_5_i686.manylinux1_i686.manylinux_2_17_i686.manylinux2014_i686.whl", hash = "sha256:b8150f721c101abdef99073bf66d3903e292d851bee51910839831caba341a75"},
{file = "tornado-6.2-cp37-abi3-manylinux_2_5_x86_64.manylinux1_x86_64.manylinux_2_17_x86_64.manylinux2014_x86_64.whl", hash = "sha256:d3a2f5999215a3a06a4fc218026cd84c61b8b2b40ac5296a6db1f1451ef04c1e"},
{file = "tornado-6.2-cp37-abi3-musllinux_1_1_aarch64.whl", hash = "sha256:5f8c52d219d4995388119af7ccaa0bcec289535747620116a58d830e7c25d8a8"},
{file = "tornado-6.2-cp37-abi3-musllinux_1_1_i686.whl", hash = "sha256:6fdfabffd8dfcb6cf887428849d30cf19a3ea34c2c248461e1f7d718ad30b66b"},
{file = "tornado-6.2-cp37-abi3-musllinux_1_1_x86_64.whl", hash = "sha256:1d54d13ab8414ed44de07efecb97d4ef7c39f7438cf5e976ccd356bebb1b5fca"},
{file = "tornado-6.2-cp37-abi3-win32.whl", hash = "sha256:5c87076709343557ef8032934ce5f637dbb552efa7b21d08e89ae7619ed0eb23"},
{file = "tornado-6.2-cp37-abi3-win_amd64.whl", hash = "sha256:e5f923aa6a47e133d1cf87d60700889d7eae68988704e20c75fb2d65677a8e4b"},
{file = "tornado-6.2.tar.gz", hash = "sha256:9b630419bde84ec666bfd7ea0a4cb2a8a651c2d5cccdbdd1972a0c859dfc3c13"},
]
tqdm = [
{file = "tqdm-4.64.1-py2.py3-none-any.whl", hash = "sha256:6fee160d6ffcd1b1c68c65f14c829c22832bc401726335ce92c52d395944a6a1"},
{file = "tqdm-4.64.1.tar.gz", hash = "sha256:5f4f682a004951c1b450bc753c710e9280c5746ce6ffedee253ddbcbf54cf1e4"},
]
traitlets = [
{file = "traitlets-5.5.0-py3-none-any.whl", hash = "sha256:1201b2c9f76097195989cdf7f65db9897593b0dfd69e4ac96016661bb6f0d30f"},
{file = "traitlets-5.5.0.tar.gz", hash = "sha256:b122f9ff2f2f6c1709dab289a05555be011c87828e911c0cf4074b85cb780a79"},
]
typer = [
{file = "typer-0.7.0-py3-none-any.whl", hash = "sha256:b5e704f4e48ec263de1c0b3a2387cd405a13767d2f907f44c1a08cbad96f606d"},
{file = "typer-0.7.0.tar.gz", hash = "sha256:ff797846578a9f2a201b53442aedeb543319466870fbe1c701eab66dd7681165"},
]
typing-extensions = [
{file = "typing_extensions-4.4.0-py3-none-any.whl", hash = "sha256:16fa4864408f655d35ec496218b85f79b3437c829e93320c7c9215ccfd92489e"},
{file = "typing_extensions-4.4.0.tar.gz", hash = "sha256:1511434bb92bf8dd198c12b1cc812e800d4181cfcb867674e0f8279cc93087aa"},
]
tzdata = [
{file = "tzdata-2022.6-py2.py3-none-any.whl", hash = "sha256:04a680bdc5b15750c39c12a448885a51134a27ec9af83667663f0b3a1bf3f342"},
{file = "tzdata-2022.6.tar.gz", hash = "sha256:91f11db4503385928c15598c98573e3af07e7229181bee5375bd30f1695ddcae"},
]
tzlocal = [
{file = "tzlocal-4.2-py3-none-any.whl", hash = "sha256:89885494684c929d9191c57aa27502afc87a579be5cdd3225c77c463ea043745"},
{file = "tzlocal-4.2.tar.gz", hash = "sha256:ee5842fa3a795f023514ac2d801c4a81d1743bbe642e3940143326b3a00addd7"},
]
urllib3 = [
{file = "urllib3-1.26.12-py2.py3-none-any.whl", hash = "sha256:b930dd878d5a8afb066a637fbb35144fe7901e3b209d1cd4f524bd0e9deee997"},
{file = "urllib3-1.26.12.tar.gz", hash = "sha256:3fa96cf423e6987997fc326ae8df396db2a8b7c667747d47ddd8ecba91f4a74e"},
]
wasabi = [
{file = "wasabi-0.10.1-py3-none-any.whl", hash = "sha256:fe862cc24034fbc9f04717cd312ab884f71f51a8ecabebc3449b751c2a649d83"},
{file = "wasabi-0.10.1.tar.gz", hash = "sha256:c8e372781be19272942382b14d99314d175518d7822057cb7a97010c4259d249"},
]
wcwidth = [
{file = "wcwidth-0.2.5-py2.py3-none-any.whl", hash = "sha256:beb4802a9cebb9144e99086eff703a642a13d6a0052920003a230f3294bbe784"},
{file = "wcwidth-0.2.5.tar.gz", hash = "sha256:c4d647b99872929fdb7bdcaa4fbe7f01413ed3d98077df798530e5b04f116c83"},
]
webencodings = [
{file = "webencodings-0.5.1-py2.py3-none-any.whl", hash = "sha256:a0af1213f3c2226497a97e2b3aa01a7e4bee4f403f95be16fc9acd2947514a78"},
{file = "webencodings-0.5.1.tar.gz", hash = "sha256:b36a1c245f2d304965eb4e0a82848379241dc04b865afcc4aab16748587e1923"},
]
websocket-client = [
{file = "websocket-client-1.4.2.tar.gz", hash = "sha256:d6e8f90ca8e2dd4e8027c4561adeb9456b54044312dba655e7cae652ceb9ae59"},
{file = "websocket_client-1.4.2-py3-none-any.whl", hash = "sha256:d6b06432f184438d99ac1f456eaf22fe1ade524c3dd16e661142dc54e9cba574"},
]
werkzeug = [
{file = "Werkzeug-2.2.2-py3-none-any.whl", hash = "sha256:f979ab81f58d7318e064e99c4506445d60135ac5cd2e177a2de0089bfd4c9bd5"},
{file = "Werkzeug-2.2.2.tar.gz", hash = "sha256:7ea2d48322cc7c0f8b3a215ed73eabd7b5d75d0b50e31ab006286ccff9e00b8f"},
]
wheel = [
{file = "wheel-0.38.4-py3-none-any.whl", hash = "sha256:b60533f3f5d530e971d6737ca6d58681ee434818fab630c83a734bb10c083ce8"},
{file = "wheel-0.38.4.tar.gz", hash = "sha256:965f5259b566725405b05e7cf774052044b1ed30119b5d586b2703aafe8719ac"},
]
widgetsnbextension = [
{file = "widgetsnbextension-4.0.3-py3-none-any.whl", hash = "sha256:7f3b0de8fda692d31ef03743b598620e31c2668b835edbd3962d080ccecf31eb"},
{file = "widgetsnbextension-4.0.3.tar.gz", hash = "sha256:34824864c062b0b3030ad78210db5ae6a3960dfb61d5b27562d6631774de0286"},
]
wrapt = [
{file = "wrapt-1.14.1-cp27-cp27m-macosx_10_9_x86_64.whl", hash = "sha256:1b376b3f4896e7930f1f772ac4b064ac12598d1c38d04907e696cc4d794b43d3"},
{file = "wrapt-1.14.1-cp27-cp27m-manylinux1_i686.whl", hash = "sha256:903500616422a40a98a5a3c4ff4ed9d0066f3b4c951fa286018ecdf0750194ef"},
{file = "wrapt-1.14.1-cp27-cp27m-manylinux1_x86_64.whl", hash = "sha256:5a9a0d155deafd9448baff28c08e150d9b24ff010e899311ddd63c45c2445e28"},
{file = "wrapt-1.14.1-cp27-cp27m-manylinux2010_i686.whl", hash = "sha256:ddaea91abf8b0d13443f6dac52e89051a5063c7d014710dcb4d4abb2ff811a59"},
{file = "wrapt-1.14.1-cp27-cp27m-manylinux2010_x86_64.whl", hash = "sha256:36f582d0c6bc99d5f39cd3ac2a9062e57f3cf606ade29a0a0d6b323462f4dd87"},
{file = "wrapt-1.14.1-cp27-cp27mu-manylinux1_i686.whl", hash = "sha256:7ef58fb89674095bfc57c4069e95d7a31cfdc0939e2a579882ac7d55aadfd2a1"},
{file = "wrapt-1.14.1-cp27-cp27mu-manylinux1_x86_64.whl", hash = "sha256:e2f83e18fe2f4c9e7db597e988f72712c0c3676d337d8b101f6758107c42425b"},
{file = "wrapt-1.14.1-cp27-cp27mu-manylinux2010_i686.whl", hash = "sha256:ee2b1b1769f6707a8a445162ea16dddf74285c3964f605877a20e38545c3c462"},
{file = "wrapt-1.14.1-cp27-cp27mu-manylinux2010_x86_64.whl", hash = "sha256:833b58d5d0b7e5b9832869f039203389ac7cbf01765639c7309fd50ef619e0b1"},
{file = "wrapt-1.14.1-cp310-cp310-macosx_10_9_x86_64.whl", hash = "sha256:80bb5c256f1415f747011dc3604b59bc1f91c6e7150bd7db03b19170ee06b320"},
{file = "wrapt-1.14.1-cp310-cp310-macosx_11_0_arm64.whl", hash = "sha256:07f7a7d0f388028b2df1d916e94bbb40624c59b48ecc6cbc232546706fac74c2"},
{file = "wrapt-1.14.1-cp310-cp310-manylinux_2_17_aarch64.manylinux2014_aarch64.whl", hash = "sha256:02b41b633c6261feff8ddd8d11c711df6842aba629fdd3da10249a53211a72c4"},
{file = "wrapt-1.14.1-cp310-cp310-manylinux_2_5_i686.manylinux1_i686.manylinux_2_17_i686.manylinux2014_i686.whl", hash = "sha256:2fe803deacd09a233e4762a1adcea5db5d31e6be577a43352936179d14d90069"},
{file = "wrapt-1.14.1-cp310-cp310-manylinux_2_5_x86_64.manylinux1_x86_64.manylinux_2_17_x86_64.manylinux2014_x86_64.whl", hash = "sha256:257fd78c513e0fb5cdbe058c27a0624c9884e735bbd131935fd49e9fe719d310"},
{file = "wrapt-1.14.1-cp310-cp310-musllinux_1_1_aarch64.whl", hash = "sha256:4fcc4649dc762cddacd193e6b55bc02edca674067f5f98166d7713b193932b7f"},
{file = "wrapt-1.14.1-cp310-cp310-musllinux_1_1_i686.whl", hash = "sha256:11871514607b15cfeb87c547a49bca19fde402f32e2b1c24a632506c0a756656"},
{file = "wrapt-1.14.1-cp310-cp310-musllinux_1_1_x86_64.whl", hash = "sha256:8ad85f7f4e20964db4daadcab70b47ab05c7c1cf2a7c1e51087bfaa83831854c"},
{file = "wrapt-1.14.1-cp310-cp310-win32.whl", hash = "sha256:a9a52172be0b5aae932bef82a79ec0a0ce87288c7d132946d645eba03f0ad8a8"},
{file = "wrapt-1.14.1-cp310-cp310-win_amd64.whl", hash = "sha256:6d323e1554b3d22cfc03cd3243b5bb815a51f5249fdcbb86fda4bf62bab9e164"},
{file = "wrapt-1.14.1-cp35-cp35m-manylinux1_i686.whl", hash = "sha256:43ca3bbbe97af00f49efb06e352eae40434ca9d915906f77def219b88e85d907"},
{file = "wrapt-1.14.1-cp35-cp35m-manylinux1_x86_64.whl", hash = "sha256:6b1a564e6cb69922c7fe3a678b9f9a3c54e72b469875aa8018f18b4d1dd1adf3"},
{file = "wrapt-1.14.1-cp35-cp35m-manylinux2010_i686.whl", hash = "sha256:00b6d4ea20a906c0ca56d84f93065b398ab74b927a7a3dbd470f6fc503f95dc3"},
{file = "wrapt-1.14.1-cp35-cp35m-manylinux2010_x86_64.whl", hash = "sha256:a85d2b46be66a71bedde836d9e41859879cc54a2a04fad1191eb50c2066f6e9d"},
{file = "wrapt-1.14.1-cp35-cp35m-win32.whl", hash = "sha256:dbcda74c67263139358f4d188ae5faae95c30929281bc6866d00573783c422b7"},
{file = "wrapt-1.14.1-cp35-cp35m-win_amd64.whl", hash = "sha256:b21bb4c09ffabfa0e85e3a6b623e19b80e7acd709b9f91452b8297ace2a8ab00"},
{file = "wrapt-1.14.1-cp36-cp36m-macosx_10_9_x86_64.whl", hash = "sha256:9e0fd32e0148dd5dea6af5fee42beb949098564cc23211a88d799e434255a1f4"},
{file = "wrapt-1.14.1-cp36-cp36m-manylinux_2_17_aarch64.manylinux2014_aarch64.whl", hash = "sha256:9736af4641846491aedb3c3f56b9bc5568d92b0692303b5a305301a95dfd38b1"},
{file = "wrapt-1.14.1-cp36-cp36m-manylinux_2_5_i686.manylinux1_i686.manylinux_2_17_i686.manylinux2014_i686.whl", hash = "sha256:5b02d65b9ccf0ef6c34cba6cf5bf2aab1bb2f49c6090bafeecc9cd81ad4ea1c1"},
{file = "wrapt-1.14.1-cp36-cp36m-manylinux_2_5_x86_64.manylinux1_x86_64.manylinux_2_17_x86_64.manylinux2014_x86_64.whl", hash = "sha256:21ac0156c4b089b330b7666db40feee30a5d52634cc4560e1905d6529a3897ff"},
{file = "wrapt-1.14.1-cp36-cp36m-musllinux_1_1_aarch64.whl", hash = "sha256:9f3e6f9e05148ff90002b884fbc2a86bd303ae847e472f44ecc06c2cd2fcdb2d"},
{file = "wrapt-1.14.1-cp36-cp36m-musllinux_1_1_i686.whl", hash = "sha256:6e743de5e9c3d1b7185870f480587b75b1cb604832e380d64f9504a0535912d1"},
{file = "wrapt-1.14.1-cp36-cp36m-musllinux_1_1_x86_64.whl", hash = "sha256:d79d7d5dc8a32b7093e81e97dad755127ff77bcc899e845f41bf71747af0c569"},
{file = "wrapt-1.14.1-cp36-cp36m-win32.whl", hash = "sha256:81b19725065dcb43df02b37e03278c011a09e49757287dca60c5aecdd5a0b8ed"},
{file = "wrapt-1.14.1-cp36-cp36m-win_amd64.whl", hash = "sha256:b014c23646a467558be7da3d6b9fa409b2c567d2110599b7cf9a0c5992b3b471"},
{file = "wrapt-1.14.1-cp37-cp37m-macosx_10_9_x86_64.whl", hash = "sha256:88bd7b6bd70a5b6803c1abf6bca012f7ed963e58c68d76ee20b9d751c74a3248"},
{file = "wrapt-1.14.1-cp37-cp37m-manylinux_2_17_aarch64.manylinux2014_aarch64.whl", hash = "sha256:b5901a312f4d14c59918c221323068fad0540e34324925c8475263841dbdfe68"},
{file = "wrapt-1.14.1-cp37-cp37m-manylinux_2_5_i686.manylinux1_i686.manylinux_2_17_i686.manylinux2014_i686.whl", hash = "sha256:d77c85fedff92cf788face9bfa3ebaa364448ebb1d765302e9af11bf449ca36d"},
{file = "wrapt-1.14.1-cp37-cp37m-manylinux_2_5_x86_64.manylinux1_x86_64.manylinux_2_17_x86_64.manylinux2014_x86_64.whl", hash = "sha256:8d649d616e5c6a678b26d15ece345354f7c2286acd6db868e65fcc5ff7c24a77"},
{file = "wrapt-1.14.1-cp37-cp37m-musllinux_1_1_aarch64.whl", hash = "sha256:7d2872609603cb35ca513d7404a94d6d608fc13211563571117046c9d2bcc3d7"},
{file = "wrapt-1.14.1-cp37-cp37m-musllinux_1_1_i686.whl", hash = "sha256:ee6acae74a2b91865910eef5e7de37dc6895ad96fa23603d1d27ea69df545015"},
{file = "wrapt-1.14.1-cp37-cp37m-musllinux_1_1_x86_64.whl", hash = "sha256:2b39d38039a1fdad98c87279b48bc5dce2c0ca0d73483b12cb72aa9609278e8a"},
{file = "wrapt-1.14.1-cp37-cp37m-win32.whl", hash = "sha256:60db23fa423575eeb65ea430cee741acb7c26a1365d103f7b0f6ec412b893853"},
{file = "wrapt-1.14.1-cp37-cp37m-win_amd64.whl", hash = "sha256:709fe01086a55cf79d20f741f39325018f4df051ef39fe921b1ebe780a66184c"},
{file = "wrapt-1.14.1-cp38-cp38-macosx_10_9_x86_64.whl", hash = "sha256:8c0ce1e99116d5ab21355d8ebe53d9460366704ea38ae4d9f6933188f327b456"},
{file = "wrapt-1.14.1-cp38-cp38-macosx_11_0_arm64.whl", hash = "sha256:e3fb1677c720409d5f671e39bac6c9e0e422584e5f518bfd50aa4cbbea02433f"},
{file = "wrapt-1.14.1-cp38-cp38-manylinux_2_17_aarch64.manylinux2014_aarch64.whl", hash = "sha256:642c2e7a804fcf18c222e1060df25fc210b9c58db7c91416fb055897fc27e8cc"},
{file = "wrapt-1.14.1-cp38-cp38-manylinux_2_5_i686.manylinux1_i686.manylinux_2_17_i686.manylinux2014_i686.whl", hash = "sha256:7b7c050ae976e286906dd3f26009e117eb000fb2cf3533398c5ad9ccc86867b1"},
{file = "wrapt-1.14.1-cp38-cp38-manylinux_2_5_x86_64.manylinux1_x86_64.manylinux_2_17_x86_64.manylinux2014_x86_64.whl", hash = "sha256:ef3f72c9666bba2bab70d2a8b79f2c6d2c1a42a7f7e2b0ec83bb2f9e383950af"},
{file = "wrapt-1.14.1-cp38-cp38-musllinux_1_1_aarch64.whl", hash = "sha256:01c205616a89d09827986bc4e859bcabd64f5a0662a7fe95e0d359424e0e071b"},
{file = "wrapt-1.14.1-cp38-cp38-musllinux_1_1_i686.whl", hash = "sha256:5a0f54ce2c092aaf439813735584b9537cad479575a09892b8352fea5e988dc0"},
{file = "wrapt-1.14.1-cp38-cp38-musllinux_1_1_x86_64.whl", hash = "sha256:2cf71233a0ed05ccdabe209c606fe0bac7379fdcf687f39b944420d2a09fdb57"},
{file = "wrapt-1.14.1-cp38-cp38-win32.whl", hash = "sha256:aa31fdcc33fef9eb2552cbcbfee7773d5a6792c137b359e82879c101e98584c5"},
{file = "wrapt-1.14.1-cp38-cp38-win_amd64.whl", hash = "sha256:d1967f46ea8f2db647c786e78d8cc7e4313dbd1b0aca360592d8027b8508e24d"},
{file = "wrapt-1.14.1-cp39-cp39-macosx_10_9_x86_64.whl", hash = "sha256:3232822c7d98d23895ccc443bbdf57c7412c5a65996c30442ebe6ed3df335383"},
{file = "wrapt-1.14.1-cp39-cp39-macosx_11_0_arm64.whl", hash = "sha256:988635d122aaf2bdcef9e795435662bcd65b02f4f4c1ae37fbee7401c440b3a7"},
{file = "wrapt-1.14.1-cp39-cp39-manylinux_2_17_aarch64.manylinux2014_aarch64.whl", hash = "sha256:9cca3c2cdadb362116235fdbd411735de4328c61425b0aa9f872fd76d02c4e86"},
{file = "wrapt-1.14.1-cp39-cp39-manylinux_2_5_i686.manylinux1_i686.manylinux_2_17_i686.manylinux2014_i686.whl", hash = "sha256:d52a25136894c63de15a35bc0bdc5adb4b0e173b9c0d07a2be9d3ca64a332735"},
{file = "wrapt-1.14.1-cp39-cp39-manylinux_2_5_x86_64.manylinux1_x86_64.manylinux_2_17_x86_64.manylinux2014_x86_64.whl", hash = "sha256:40e7bc81c9e2b2734ea4bc1aceb8a8f0ceaac7c5299bc5d69e37c44d9081d43b"},
{file = "wrapt-1.14.1-cp39-cp39-musllinux_1_1_aarch64.whl", hash = "sha256:b9b7a708dd92306328117d8c4b62e2194d00c365f18eff11a9b53c6f923b01e3"},
{file = "wrapt-1.14.1-cp39-cp39-musllinux_1_1_i686.whl", hash = "sha256:6a9a25751acb379b466ff6be78a315e2b439d4c94c1e99cb7266d40a537995d3"},
{file = "wrapt-1.14.1-cp39-cp39-musllinux_1_1_x86_64.whl", hash = "sha256:34aa51c45f28ba7f12accd624225e2b1e5a3a45206aa191f6f9aac931d9d56fe"},
{file = "wrapt-1.14.1-cp39-cp39-win32.whl", hash = "sha256:dee0ce50c6a2dd9056c20db781e9c1cfd33e77d2d569f5d1d9321c641bb903d5"},
{file = "wrapt-1.14.1-cp39-cp39-win_amd64.whl", hash = "sha256:dee60e1de1898bde3b238f18340eec6148986da0455d8ba7848d50470a7a32fb"},
{file = "wrapt-1.14.1.tar.gz", hash = "sha256:380a85cf89e0e69b7cfbe2ea9f765f004ff419f34194018a6827ac0e3edfed4d"},
]
xgboost = [
{file = "xgboost-1.7.1-py3-none-macosx_10_15_x86_64.macosx_11_0_x86_64.macosx_12_0_x86_64.whl", hash = "sha256:373d8e95f2f0c0a680ee625a96141b0009f334e132be8493e0f6c69026221bbd"},
{file = "xgboost-1.7.1-py3-none-macosx_12_0_arm64.whl", hash = "sha256:91dfd4af12c01c6e683b0412f48744d2d30d6754e33b297e40845e2d136b3d30"},
{file = "xgboost-1.7.1-py3-none-manylinux2014_aarch64.whl", hash = "sha256:18b9fbad68d2af60737618072e77a43f88eec1113a143f9498698eb5db0d9c41"},
{file = "xgboost-1.7.1-py3-none-manylinux2014_x86_64.whl", hash = "sha256:e96305eb8c8b6061d83ac9fef25437e8ebc8d9c9300e75b8d07f35de1031166b"},
{file = "xgboost-1.7.1-py3-none-win_amd64.whl", hash = "sha256:fbe06896e1b12843c7f428ae56da6ac1c5975545d8785f137f73fd591c54e5f5"},
{file = "xgboost-1.7.1.tar.gz", hash = "sha256:bb302c5c33e14bab94603940987940f29203ecb8767a7a719daf579fbfaace64"},
]
zict = [
{file = "zict-2.2.0-py2.py3-none-any.whl", hash = "sha256:dabcc8c8b6833aa3b6602daad50f03da068322c1a90999ff78aed9eecc8fa92c"},
{file = "zict-2.2.0.tar.gz", hash = "sha256:d7366c2e2293314112dcf2432108428a67b927b00005619feefc310d12d833f3"},
]
zipp = [
{file = "zipp-3.10.0-py3-none-any.whl", hash = "sha256:4fcb6f278987a6605757302a6e40e896257570d11c51628968ccb2a47e80c6c1"},
{file = "zipp-3.10.0.tar.gz", hash = "sha256:7a7262fd930bd3e36c50b9a64897aec3fafff3dfdeec9623ae22b40e93f99bb8"},
]

[[package]]
name = "absl-py"
version = "1.3.0"
description = "Abseil Python Common Libraries, see https://github.com/abseil/abseil-py."
category = "dev"
optional = false
python-versions = ">=3.6"
[[package]]
name = "alabaster"
version = "0.7.12"
description = "A configurable sidebar-enabled Sphinx theme"
category = "main"
optional = false
python-versions = "*"
[[package]]
name = "anyio"
version = "3.6.2"
description = "High level compatibility layer for multiple asynchronous event loop implementations"
category = "dev"
optional = false
python-versions = ">=3.6.2"
[package.dependencies]
idna = ">=2.8"
sniffio = ">=1.1"
[package.extras]
doc = ["packaging", "sphinx-autodoc-typehints (>=1.2.0)", "sphinx-rtd-theme"]
test = ["contextlib2", "coverage[toml] (>=4.5)", "hypothesis (>=4.0)", "mock (>=4)", "pytest (>=7.0)", "pytest-mock (>=3.6.1)", "trustme", "uvloop (<0.15)", "uvloop (>=0.15)"]
trio = ["trio (>=0.16,<0.22)"]
[[package]]
name = "appnope"
version = "0.1.3"
description = "Disable App Nap on macOS >= 10.9"
category = "dev"
optional = false
python-versions = "*"
[[package]]
name = "argon2-cffi"
version = "21.3.0"
description = "The secure Argon2 password hashing algorithm."
category = "dev"
optional = false
python-versions = ">=3.6"
[package.dependencies]
argon2-cffi-bindings = "*"
[package.extras]
dev = ["cogapp", "coverage[toml] (>=5.0.2)", "furo", "hypothesis", "pre-commit", "pytest", "sphinx", "sphinx-notfound-page", "tomli"]
docs = ["furo", "sphinx", "sphinx-notfound-page"]
tests = ["coverage[toml] (>=5.0.2)", "hypothesis", "pytest"]
[[package]]
name = "argon2-cffi-bindings"
version = "21.2.0"
description = "Low-level CFFI bindings for Argon2"
category = "dev"
optional = false
python-versions = ">=3.6"
[package.dependencies]
cffi = ">=1.0.1"
[package.extras]
dev = ["cogapp", "pre-commit", "pytest", "wheel"]
tests = ["pytest"]
[[package]]
name = "asttokens"
version = "2.1.0"
description = "Annotate AST trees with source code positions"
category = "dev"
optional = false
python-versions = "*"
[package.dependencies]
six = "*"
[package.extras]
test = ["astroid (<=2.5.3)", "pytest"]
[[package]]
name = "astunparse"
version = "1.6.3"
description = "An AST unparser for Python"
category = "dev"
optional = false
python-versions = "*"
[package.dependencies]
six = ">=1.6.1,<2.0"
wheel = ">=0.23.0,<1.0"
[[package]]
name = "attrs"
version = "22.1.0"
description = "Classes Without Boilerplate"
category = "dev"
optional = false
python-versions = ">=3.5"
[package.extras]
dev = ["cloudpickle", "coverage[toml] (>=5.0.2)", "furo", "hypothesis", "mypy (>=0.900,!=0.940)", "pre-commit", "pympler", "pytest (>=4.3.0)", "pytest-mypy-plugins", "sphinx", "sphinx-notfound-page", "zope.interface"]
docs = ["furo", "sphinx", "sphinx-notfound-page", "zope.interface"]
tests = ["cloudpickle", "coverage[toml] (>=5.0.2)", "hypothesis", "mypy (>=0.900,!=0.940)", "pympler", "pytest (>=4.3.0)", "pytest-mypy-plugins", "zope.interface"]
tests_no_zope = ["cloudpickle", "coverage[toml] (>=5.0.2)", "hypothesis", "mypy (>=0.900,!=0.940)", "pympler", "pytest (>=4.3.0)", "pytest-mypy-plugins"]
[[package]]
name = "autogluon.common"
version = "0.6.0"
description = "AutoML for Image, Text, and Tabular Data"
category = "main"
optional = false
python-versions = ">=3.7, <3.10"
[package.dependencies]
boto3 = "*"
numpy = ">=1.21,<1.24"
pandas = ">=1.2.5,<1.4.0 || >1.4.0,<1.6"
setuptools = "*"
[package.extras]
tests = ["pytest", "pytest-mypy", "types-requests", "types-setuptools"]
[[package]]
name = "autogluon.core"
version = "0.6.0"
description = "AutoML for Image, Text, and Tabular Data"
category = "main"
optional = false
python-versions = ">=3.7, <3.10"
[package.dependencies]
"autogluon.common" = "0.6.0"
boto3 = "*"
dask = ">=2021.09.1,<=2021.11.2"
distributed = ">=2021.09.1,<=2021.11.2"
matplotlib = "*"
numpy = ">=1.21,<1.24"
pandas = ">=1.2.5,<1.4.0 || >1.4.0,<1.6"
requests = "*"
scikit-learn = ">=1.0.0,<1.2"
scipy = ">=1.5.4,<1.10.0"
tqdm = ">=4.38.0"
[package.extras]
all = ["hyperopt (>=0.2.7,<0.2.8)", "ray (>=2.0,<2.1)", "ray[tune] (>=2.0,<2.1)"]
ray = ["ray (>=2.0,<2.1)"]
raytune = ["hyperopt (>=0.2.7,<0.2.8)", "ray[tune] (>=2.0,<2.1)"]
tests = ["pytest", "pytest-mypy", "types-requests", "types-setuptools"]
[[package]]
name = "autogluon.features"
version = "0.6.0"
description = "AutoML for Image, Text, and Tabular Data"
category = "main"
optional = false
python-versions = ">=3.7, <3.10"
[package.dependencies]
"autogluon.common" = "0.6.0"
numpy = ">=1.21,<1.24"
pandas = ">=1.2.5,<1.4.0 || >1.4.0,<1.6"
psutil = ">=5.7.3,<6"
scikit-learn = ">=1.0.0,<1.2"
[[package]]
name = "autogluon.tabular"
version = "0.6.0"
description = "AutoML for Image, Text, and Tabular Data"
category = "main"
optional = false
python-versions = ">=3.7, <3.10"
[package.dependencies]
"autogluon.core" = "0.6.0"
"autogluon.features" = "0.6.0"
catboost = {version = ">=1.0,<1.2", optional = true, markers = "extra == \"all\""}
fastai = {version = ">=2.3.1,<2.8", optional = true, markers = "extra == \"all\""}
lightgbm = {version = ">=3.3,<3.4", optional = true, markers = "extra == \"all\""}
networkx = ">=2.3,<3.0"
numpy = ">=1.21,<1.24"
pandas = ">=1.2.5,<1.4.0 || >1.4.0,<1.6"
psutil = ">=5.7.3,<6"
scikit-learn = ">=1.0.0,<1.2"
scipy = ">=1.5.4,<1.10.0"
torch = {version = ">=1.0,<1.13", optional = true, markers = "extra == \"all\""}
xgboost = {version = ">=1.6,<1.8", optional = true, markers = "extra == \"all\""}
[package.extras]
all = ["catboost (>=1.0,<1.2)", "fastai (>=2.3.1,<2.8)", "lightgbm (>=3.3,<3.4)", "torch (>=1.0,<1.13)", "xgboost (>=1.6,<1.8)"]
catboost = ["catboost (>=1.0,<1.2)"]
fastai = ["fastai (>=2.3.1,<2.8)", "torch (>=1.0,<1.13)"]
imodels = ["imodels (>=1.3.0)"]
lightgbm = ["lightgbm (>=3.3,<3.4)"]
skex = ["scikit-learn-intelex (>=2021.5,<2021.6)"]
skl2onnx = ["skl2onnx (>=1.12.0,<1.13.0)"]
tests = ["imodels (>=1.3.0)", "skl2onnx (>=1.12.0,<1.13.0)", "vowpalwabbit (>=8.10,<8.11)"]
vowpalwabbit = ["vowpalwabbit (>=8.10,<8.11)"]
xgboost = ["xgboost (>=1.6,<1.8)"]
[[package]]
name = "Babel"
version = "2.11.0"
description = "Internationalization utilities"
category = "main"
optional = false
python-versions = ">=3.6"
[package.dependencies]
pytz = ">=2015.7"
[[package]]
name = "backcall"
version = "0.2.0"
description = "Specifications for callback functions passed in to an API"
category = "dev"
optional = false
python-versions = "*"
[[package]]
name = "backports.zoneinfo"
version = "0.2.1"
description = "Backport of the standard library zoneinfo module"
category = "dev"
optional = false
python-versions = ">=3.6"
[package.extras]
tzdata = ["tzdata"]
[[package]]
name = "beautifulsoup4"
version = "4.11.1"
description = "Screen-scraping library"
category = "dev"
optional = false
python-versions = ">=3.6.0"
[package.dependencies]
soupsieve = ">1.2"
[package.extras]
html5lib = ["html5lib"]
lxml = ["lxml"]
[[package]]
name = "black"
version = "22.10.0"
description = "The uncompromising code formatter."
category = "dev"
optional = false
python-versions = ">=3.7"
[package.dependencies]
click = ">=8.0.0"
ipython = {version = ">=7.8.0", optional = true, markers = "extra == \"jupyter\""}
mypy-extensions = ">=0.4.3"
pathspec = ">=0.9.0"
platformdirs = ">=2"
tokenize-rt = {version = ">=3.2.0", optional = true, markers = "extra == \"jupyter\""}
tomli = {version = ">=1.1.0", markers = "python_full_version < \"3.11.0a7\""}
typing-extensions = {version = ">=3.10.0.0", markers = "python_version < \"3.10\""}
[package.extras]
colorama = ["colorama (>=0.4.3)"]
d = ["aiohttp (>=3.7.4)"]
jupyter = ["ipython (>=7.8.0)", "tokenize-rt (>=3.2.0)"]
uvloop = ["uvloop (>=0.15.2)"]
[[package]]
name = "bleach"
version = "5.0.1"
description = "An easy safelist-based HTML-sanitizing tool."
category = "dev"
optional = false
python-versions = ">=3.7"
[package.dependencies]
six = ">=1.9.0"
webencodings = "*"
[package.extras]
css = ["tinycss2 (>=1.1.0,<1.2)"]
dev = ["Sphinx (==4.3.2)", "black (==22.3.0)", "build (==0.8.0)", "flake8 (==4.0.1)", "hashin (==0.17.0)", "mypy (==0.961)", "pip-tools (==6.6.2)", "pytest (==7.1.2)", "tox (==3.25.0)", "twine (==4.0.1)", "wheel (==0.37.1)"]
[[package]]
name = "blis"
version = "0.7.9"
description = "The Blis BLAS-like linear algebra library, as a self-contained C-extension."
category = "main"
optional = false
python-versions = "*"
[package.dependencies]
numpy = ">=1.15.0"
[[package]]
name = "boto3"
version = "1.26.17"
description = "The AWS SDK for Python"
category = "main"
optional = false
python-versions = ">= 3.7"
[package.dependencies]
botocore = ">=1.29.17,<1.30.0"
jmespath = ">=0.7.1,<2.0.0"
s3transfer = ">=0.6.0,<0.7.0"
[package.extras]
crt = ["botocore[crt] (>=1.21.0,<2.0a0)"]
[[package]]
name = "botocore"
version = "1.29.17"
description = "Low-level, data-driven core of boto 3."
category = "main"
optional = false
python-versions = ">= 3.7"
[package.dependencies]
jmespath = ">=0.7.1,<2.0.0"
python-dateutil = ">=2.1,<3.0.0"
urllib3 = ">=1.25.4,<1.27"
[package.extras]
crt = ["awscrt (==0.14.0)"]
[[package]]
name = "cachetools"
version = "5.2.0"
description = "Extensible memoizing collections and decorators"
category = "dev"
optional = false
python-versions = "~=3.7"
[[package]]
name = "catalogue"
version = "2.0.8"
description = "Super lightweight function registries for your library"
category = "main"
optional = false
python-versions = ">=3.6"
[[package]]
name = "catboost"
version = "1.1.1"
description = "Catboost Python Package"
category = "main"
optional = false
python-versions = "*"
[package.dependencies]
graphviz = "*"
matplotlib = "*"
numpy = ">=1.16.0"
pandas = ">=0.24.0"
plotly = "*"
scipy = "*"
six = "*"
[[package]]
name = "causal-learn"
version = "0.1.3.0"
description = "causal-learn Python Package"
category = "main"
optional = false
python-versions = ">=3.6"
[package.dependencies]
graphviz = "*"
matplotlib = "*"
networkx = "*"
numpy = "*"
pandas = "*"
pydot = "*"
scikit-learn = "*"
scipy = "*"
statsmodels = "*"
tqdm = "*"
[[package]]
name = "causalml"
version = "0.13.0"
description = "Python Package for Uplift Modeling and Causal Inference with Machine Learning Algorithms"
category = "main"
optional = true
python-versions = ">=3.7"
develop = false
[package.dependencies]
Cython = ">=0.28.0"
dill = "*"
forestci = "0.6"
graphviz = "*"
lightgbm = "*"
matplotlib = "*"
numpy = ">=1.18.5"
packaging = "*"
pandas = ">=0.24.1"
pathos = "0.2.9"
pip = ">=10.0"
pydotplus = "*"
pygam = "*"
pyro-ppl = "*"
scikit-learn = "<=1.0.2"
scipy = ">=1.4.1"
seaborn = "*"
setuptools = ">=41.0.0"
shap = "*"
statsmodels = ">=0.9.0"
torch = "*"
tqdm = "*"
xgboost = "*"
[package.extras]
tf = ["tensorflow (>=2.4.0)"]
[package.source]
type = "git"
url = "https://github.com/uber/causalml"
reference = "master"
resolved_reference = "7050c74c257254de3600f69d49bda84a3ac152e2"
[[package]]
name = "certifi"
version = "2022.9.24"
description = "Python package for providing Mozilla's CA Bundle."
category = "main"
optional = false
python-versions = ">=3.6"
[[package]]
name = "cffi"
version = "1.15.1"
description = "Foreign Function Interface for Python calling C code."
category = "dev"
optional = false
python-versions = "*"
[package.dependencies]
pycparser = "*"
[[package]]
name = "charset-normalizer"
version = "2.1.1"
description = "The Real First Universal Charset Detector. Open, modern and actively maintained alternative to Chardet."
category = "main"
optional = false
python-versions = ">=3.6.0"
[package.extras]
unicode_backport = ["unicodedata2"]
[[package]]
name = "click"
version = "8.1.3"
description = "Composable command line interface toolkit"
category = "main"
optional = false
python-versions = ">=3.7"
[package.dependencies]
colorama = {version = "*", markers = "platform_system == \"Windows\""}
[[package]]
name = "cloudpickle"
version = "2.2.0"
description = "Extended pickling support for Python objects"
category = "main"
optional = false
python-versions = ">=3.6"
[[package]]
name = "colorama"
version = "0.4.6"
description = "Cross-platform colored terminal text."
category = "main"
optional = false
python-versions = "!=3.0.*,!=3.1.*,!=3.2.*,!=3.3.*,!=3.4.*,!=3.5.*,!=3.6.*,>=2.7"
[[package]]
name = "comm"
version = "0.1.1"
description = "Jupyter Python Comm implementation, for usage in ipykernel, xeus-python etc."
category = "dev"
optional = false
python-versions = ">=3.6"
[package.dependencies]
traitlets = ">5.3"
[package.extras]
test = ["pytest"]
[[package]]
name = "confection"
version = "0.0.3"
description = "The sweetest config system for Python"
category = "main"
optional = false
python-versions = ">=3.6"
[package.dependencies]
pydantic = ">=1.7.4,<1.8 || >1.8,<1.8.1 || >1.8.1,<1.11.0"
srsly = ">=2.4.0,<3.0.0"
[[package]]
name = "contourpy"
version = "1.0.6"
description = "Python library for calculating contours of 2D quadrilateral grids"
category = "main"
optional = false
python-versions = ">=3.7"
[package.dependencies]
numpy = ">=1.16"
[package.extras]
bokeh = ["bokeh", "selenium"]
docs = ["docutils (<0.18)", "sphinx (<=5.2.0)", "sphinx-rtd-theme"]
test = ["Pillow", "flake8", "isort", "matplotlib", "pytest"]
test-minimal = ["pytest"]
test-no-codebase = ["Pillow", "matplotlib", "pytest"]
[[package]]
name = "coverage"
version = "6.5.0"
description = "Code coverage measurement for Python"
category = "dev"
optional = false
python-versions = ">=3.7"
[package.dependencies]
tomli = {version = "*", optional = true, markers = "python_full_version <= \"3.11.0a6\" and extra == \"toml\""}
[package.extras]
toml = ["tomli"]
[[package]]
name = "cycler"
version = "0.11.0"
description = "Composable style cycles"
category = "main"
optional = false
python-versions = ">=3.6"
[[package]]
name = "cymem"
version = "2.0.7"
description = "Manage calls to calloc/free through Cython"
category = "main"
optional = false
python-versions = "*"
[[package]]
name = "Cython"
version = "0.29.32"
description = "The Cython compiler for writing C extensions for the Python language."
category = "main"
optional = false
python-versions = ">=2.6, !=3.0.*, !=3.1.*, !=3.2.*"
[[package]]
name = "dask"
version = "2021.11.2"
description = "Parallel PyData with Task Scheduling"
category = "main"
optional = false
python-versions = ">=3.7"
[package.dependencies]
cloudpickle = ">=1.1.1"
fsspec = ">=0.6.0"
packaging = ">=20.0"
partd = ">=0.3.10"
pyyaml = "*"
toolz = ">=0.8.2"
[package.extras]
array = ["numpy (>=1.18)"]
complete = ["bokeh (>=1.0.0,!=2.0.0)", "distributed (==2021.11.2)", "jinja2", "numpy (>=1.18)", "pandas (>=1.0)"]
dataframe = ["numpy (>=1.18)", "pandas (>=1.0)"]
diagnostics = ["bokeh (>=1.0.0,!=2.0.0)", "jinja2"]
distributed = ["distributed (==2021.11.2)"]
test = ["pre-commit", "pytest", "pytest-rerunfailures", "pytest-xdist"]
[[package]]
name = "debugpy"
version = "1.6.3"
description = "An implementation of the Debug Adapter Protocol for Python"
category = "dev"
optional = false
python-versions = ">=3.7"
[[package]]
name = "decorator"
version = "5.1.1"
description = "Decorators for Humans"
category = "dev"
optional = false
python-versions = ">=3.5"
[[package]]
name = "defusedxml"
version = "0.7.1"
description = "XML bomb protection for Python stdlib modules"
category = "dev"
optional = false
python-versions = ">=2.7, !=3.0.*, !=3.1.*, !=3.2.*, !=3.3.*, !=3.4.*"
[[package]]
name = "dill"
version = "0.3.6"
description = "serialize all of python"
category = "main"
optional = true
python-versions = ">=3.7"
[package.extras]
graph = ["objgraph (>=1.7.2)"]
[[package]]
name = "distributed"
version = "2021.11.2"
description = "Distributed scheduler for Dask"
category = "main"
optional = false
python-versions = ">=3.7"
[package.dependencies]
click = ">=6.6"
cloudpickle = ">=1.5.0"
dask = "2021.11.2"
jinja2 = "*"
msgpack = ">=0.6.0"
psutil = ">=5.0"
pyyaml = "*"
setuptools = "*"
sortedcontainers = "<2.0.0 || >2.0.0,<2.0.1 || >2.0.1"
tblib = ">=1.6.0"
toolz = ">=0.8.2"
tornado = {version = ">=6.0.3", markers = "python_version >= \"3.8\""}
zict = ">=0.1.3"
[[package]]
name = "docutils"
version = "0.17.1"
description = "Docutils -- Python Documentation Utilities"
category = "main"
optional = false
python-versions = ">=2.7, !=3.0.*, !=3.1.*, !=3.2.*, !=3.3.*, !=3.4.*"
[[package]]
name = "econml"
version = "0.14.0"
description = "This package contains several methods for calculating Conditional Average Treatment Effects"
category = "main"
optional = false
python-versions = "*"
[package.dependencies]
joblib = ">=0.13.0"
lightgbm = "*"
numpy = "*"
pandas = "*"
scikit-learn = ">0.22.0,<1.2"
scipy = ">1.4.0"
shap = ">=0.38.1,<0.41.0"
sparse = "*"
statsmodels = ">=0.10"
[package.extras]
all = ["azure-cli", "dowhy (<0.9)", "keras (<2.4)", "matplotlib (<3.6.0)", "protobuf (<4)", "tensorflow (>1.10,<2.3)"]
automl = ["azure-cli"]
dowhy = ["dowhy (<0.9)"]
plt = ["graphviz", "matplotlib (<3.6.0)"]
tf = ["keras (<2.4)", "protobuf (<4)", "tensorflow (>1.10,<2.3)"]
[[package]]
name = "entrypoints"
version = "0.4"
description = "Discover and load entry points from installed packages."
category = "dev"
optional = false
python-versions = ">=3.6"
[[package]]
name = "exceptiongroup"
version = "1.0.4"
description = "Backport of PEP 654 (exception groups)"
category = "dev"
optional = false
python-versions = ">=3.7"
[package.extras]
test = ["pytest (>=6)"]
[[package]]
name = "executing"
version = "1.2.0"
description = "Get the currently executing AST node of a frame, and other information"
category = "dev"
optional = false
python-versions = "*"
[package.extras]
tests = ["asttokens", "littleutils", "pytest", "rich"]
[[package]]
name = "fastai"
version = "2.7.10"
description = "fastai simplifies training fast and accurate neural nets using modern best practices"
category = "main"
optional = false
python-versions = ">=3.7"
[package.dependencies]
fastcore = ">=1.4.5,<1.6"
fastdownload = ">=0.0.5,<2"
fastprogress = ">=0.2.4"
matplotlib = "*"
packaging = "*"
pandas = "*"
pillow = ">6.0.0"
pip = "*"
pyyaml = "*"
requests = "*"
scikit-learn = "*"
scipy = "*"
spacy = "<4"
torch = ">=1.7,<1.14"
torchvision = ">=0.8.2"
[package.extras]
dev = ["accelerate (>=0.10.0)", "albumentations", "captum (>=0.3)", "catalyst", "comet-ml", "flask", "flask-compress", "ipywidgets", "kornia", "neptune-client", "ninja", "opencv-python", "pyarrow", "pydicom", "pytorch-ignite", "pytorch-lightning", "scikit-image", "sentencepiece", "tensorboard", "timm (>=0.6.2.dev)", "transformers", "wandb"]
[[package]]
name = "fastcore"
version = "1.5.27"
description = "Python supercharged for fastai development"
category = "main"
optional = false
python-versions = ">=3.7"
[package.dependencies]
packaging = "*"
pip = "*"
[package.extras]
dev = ["jupyterlab", "matplotlib", "nbdev (>=0.2.39)", "numpy", "pandas", "pillow", "torch"]
[[package]]
name = "fastdownload"
version = "0.0.7"
description = "A general purpose data downloading library."
category = "main"
optional = false
python-versions = ">=3.6"
[package.dependencies]
fastcore = ">=1.3.26"
fastprogress = "*"
[[package]]
name = "fastjsonschema"
version = "2.16.2"
description = "Fastest Python implementation of JSON schema"
category = "dev"
optional = false
python-versions = "*"
[package.extras]
devel = ["colorama", "json-spec", "jsonschema", "pylint", "pytest", "pytest-benchmark", "pytest-cache", "validictory"]
[[package]]
name = "fastprogress"
version = "1.0.3"
description = "A nested progress with plotting options for fastai"
category = "main"
optional = false
python-versions = ">=3.6"
[[package]]
name = "flake8"
version = "4.0.1"
description = "the modular source code checker: pep8 pyflakes and co"
category = "dev"
optional = false
python-versions = ">=3.6"
[package.dependencies]
mccabe = ">=0.6.0,<0.7.0"
pycodestyle = ">=2.8.0,<2.9.0"
pyflakes = ">=2.4.0,<2.5.0"
[[package]]
name = "flaky"
version = "3.7.0"
description = "Plugin for nose or pytest that automatically reruns flaky tests."
category = "dev"
optional = false
python-versions = ">=2.7, !=3.0.*, !=3.1.*, !=3.2.*, !=3.3.*"
[[package]]
name = "flatbuffers"
version = "22.11.23"
description = "The FlatBuffers serialization format for Python"
category = "dev"
optional = false
python-versions = "*"
[[package]]
name = "fonttools"
version = "4.38.0"
description = "Tools to manipulate font files"
category = "main"
optional = false
python-versions = ">=3.7"
[package.extras]
all = ["brotli (>=1.0.1)", "brotlicffi (>=0.8.0)", "fs (>=2.2.0,<3)", "lxml (>=4.0,<5)", "lz4 (>=1.7.4.2)", "matplotlib", "munkres", "scipy", "skia-pathops (>=0.5.0)", "sympy", "uharfbuzz (>=0.23.0)", "unicodedata2 (>=14.0.0)", "xattr", "zopfli (>=0.1.4)"]
graphite = ["lz4 (>=1.7.4.2)"]
interpolatable = ["munkres", "scipy"]
lxml = ["lxml (>=4.0,<5)"]
pathops = ["skia-pathops (>=0.5.0)"]
plot = ["matplotlib"]
repacker = ["uharfbuzz (>=0.23.0)"]
symfont = ["sympy"]
type1 = ["xattr"]
ufo = ["fs (>=2.2.0,<3)"]
unicode = ["unicodedata2 (>=14.0.0)"]
woff = ["brotli (>=1.0.1)", "brotlicffi (>=0.8.0)", "zopfli (>=0.1.4)"]
[[package]]
name = "forestci"
version = "0.6"
description = "forestci: confidence intervals for scikit-learn forest algorithms"
category = "main"
optional = true
python-versions = "*"
[package.dependencies]
numpy = ">=1.20"
scikit-learn = ">=0.23.1"
[[package]]
name = "fsspec"
version = "2022.11.0"
description = "File-system specification"
category = "main"
optional = false
python-versions = ">=3.7"
[package.extras]
abfs = ["adlfs"]
adl = ["adlfs"]
arrow = ["pyarrow (>=1)"]
dask = ["dask", "distributed"]
dropbox = ["dropbox", "dropboxdrivefs", "requests"]
entrypoints = ["importlib-metadata"]
fuse = ["fusepy"]
gcs = ["gcsfs"]
git = ["pygit2"]
github = ["requests"]
gs = ["gcsfs"]
gui = ["panel"]
hdfs = ["pyarrow (>=1)"]
http = ["aiohttp (!=4.0.0a0,!=4.0.0a1)", "requests"]
libarchive = ["libarchive-c"]
oci = ["ocifs"]
s3 = ["s3fs"]
sftp = ["paramiko"]
smb = ["smbprotocol"]
ssh = ["paramiko"]
tqdm = ["tqdm"]
[[package]]
name = "future"
version = "0.18.2"
description = "Clean single-source support for Python 3 and 2"
category = "main"
optional = true
python-versions = ">=2.6, !=3.0.*, !=3.1.*, !=3.2.*"
[[package]]
name = "gast"
version = "0.4.0"
description = "Python AST that abstracts the underlying Python version"
category = "dev"
optional = false
python-versions = ">=2.7, !=3.0.*, !=3.1.*, !=3.2.*, !=3.3.*"
[[package]]
name = "google-auth"
version = "2.14.1"
description = "Google Authentication Library"
category = "dev"
optional = false
python-versions = ">=2.7,!=3.0.*,!=3.1.*,!=3.2.*,!=3.3.*,!=3.4.*,!=3.5.*"
[package.dependencies]
cachetools = ">=2.0.0,<6.0"
pyasn1-modules = ">=0.2.1"
rsa = {version = ">=3.1.4,<5", markers = "python_version >= \"3.6\""}
six = ">=1.9.0"
[package.extras]
aiohttp = ["aiohttp (>=3.6.2,<4.0.0dev)", "requests (>=2.20.0,<3.0.0dev)"]
enterprise_cert = ["cryptography (==36.0.2)", "pyopenssl (==22.0.0)"]
pyopenssl = ["cryptography (>=38.0.3)", "pyopenssl (>=20.0.0)"]
reauth = ["pyu2f (>=0.1.5)"]
[[package]]
name = "google-auth-oauthlib"
version = "0.4.6"
description = "Google Authentication Library"
category = "dev"
optional = false
python-versions = ">=3.6"
[package.dependencies]
google-auth = ">=1.0.0"
requests-oauthlib = ">=0.7.0"
[package.extras]
tool = ["click (>=6.0.0)"]
[[package]]
name = "google-pasta"
version = "0.2.0"
description = "pasta is an AST-based Python refactoring library"
category = "dev"
optional = false
python-versions = "*"
[package.dependencies]
six = "*"
[[package]]
name = "graphviz"
version = "0.20.1"
description = "Simple Python interface for Graphviz"
category = "main"
optional = false
python-versions = ">=3.7"
[package.extras]
dev = ["flake8", "pep8-naming", "tox (>=3)", "twine", "wheel"]
docs = ["sphinx (>=5)", "sphinx-autodoc-typehints", "sphinx-rtd-theme"]
test = ["coverage", "mock (>=4)", "pytest (>=7)", "pytest-cov", "pytest-mock (>=3)"]
[[package]]
name = "grpcio"
version = "1.50.0"
description = "HTTP/2-based RPC framework"
category = "dev"
optional = false
python-versions = ">=3.7"
[package.dependencies]
six = ">=1.5.2"
[package.extras]
protobuf = ["grpcio-tools (>=1.50.0)"]
[[package]]
name = "h5py"
version = "3.7.0"
description = "Read and write HDF5 files from Python"
category = "dev"
optional = false
python-versions = ">=3.7"
[package.dependencies]
numpy = ">=1.14.5"
[[package]]
name = "HeapDict"
version = "1.0.1"
description = "a heap with decrease-key and increase-key operations"
category = "main"
optional = false
python-versions = "*"
[[package]]
name = "idna"
version = "3.4"
description = "Internationalized Domain Names in Applications (IDNA)"
category = "main"
optional = false
python-versions = ">=3.5"
[[package]]
name = "imagesize"
version = "1.4.1"
description = "Getting image size from png/jpeg/jpeg2000/gif file"
category = "main"
optional = false
python-versions = ">=2.7, !=3.0.*, !=3.1.*, !=3.2.*, !=3.3.*"
[[package]]
name = "importlib-metadata"
version = "5.1.0"
description = "Read metadata from Python packages"
category = "main"
optional = false
python-versions = ">=3.7"
[package.dependencies]
zipp = ">=0.5"
[package.extras]
docs = ["furo", "jaraco.packaging (>=9)", "jaraco.tidelift (>=1.4)", "rst.linker (>=1.9)", "sphinx (>=3.5)"]
perf = ["ipython"]
testing = ["flake8 (<5)", "flufl.flake8", "importlib-resources (>=1.3)", "packaging", "pyfakefs", "pytest (>=6)", "pytest-black (>=0.3.7)", "pytest-checkdocs (>=2.4)", "pytest-cov", "pytest-enabler (>=1.3)", "pytest-flake8", "pytest-mypy (>=0.9.1)", "pytest-perf (>=0.9.2)"]
[[package]]
name = "importlib-resources"
version = "5.10.0"
description = "Read resources from Python packages"
category = "dev"
optional = false
python-versions = ">=3.7"
[package.dependencies]
zipp = {version = ">=3.1.0", markers = "python_version < \"3.10\""}
[package.extras]
docs = ["furo", "jaraco.packaging (>=9)", "jaraco.tidelift (>=1.4)", "rst.linker (>=1.9)", "sphinx (>=3.5)"]
testing = ["flake8 (<5)", "pytest (>=6)", "pytest-black (>=0.3.7)", "pytest-checkdocs (>=2.4)", "pytest-cov", "pytest-enabler (>=1.3)", "pytest-flake8", "pytest-mypy (>=0.9.1)"]
[[package]]
name = "iniconfig"
version = "1.1.1"
description = "iniconfig: brain-dead simple config-ini parsing"
category = "dev"
optional = false
python-versions = "*"
[[package]]
name = "ipykernel"
version = "6.18.1"
description = "IPython Kernel for Jupyter"
category = "dev"
optional = false
python-versions = ">=3.8"
[package.dependencies]
appnope = {version = "*", markers = "platform_system == \"Darwin\""}
comm = ">=0.1"
debugpy = ">=1.0"
ipython = ">=7.23.1"
jupyter-client = ">=6.1.12"
matplotlib-inline = ">=0.1"
nest-asyncio = "*"
packaging = "*"
psutil = "*"
pyzmq = ">=17"
tornado = ">=6.1"
traitlets = ">=5.1.0"
[package.extras]
cov = ["coverage[toml]", "curio", "matplotlib", "pytest-cov", "trio"]
docs = ["myst-parser", "pydata-sphinx-theme", "sphinx", "sphinxcontrib-github-alt"]
test = ["flaky", "ipyparallel", "pre-commit", "pytest (>=7.0)", "pytest-asyncio", "pytest-cov", "pytest-timeout"]
[[package]]
name = "ipython"
version = "8.7.0"
description = "IPython: Productive Interactive Computing"
category = "dev"
optional = false
python-versions = ">=3.8"
[package.dependencies]
appnope = {version = "*", markers = "sys_platform == \"darwin\""}
backcall = "*"
colorama = {version = "*", markers = "sys_platform == \"win32\""}
decorator = "*"
jedi = ">=0.16"
matplotlib-inline = "*"
pexpect = {version = ">4.3", markers = "sys_platform != \"win32\""}
pickleshare = "*"
prompt-toolkit = ">=3.0.11,<3.1.0"
pygments = ">=2.4.0"
stack-data = "*"
traitlets = ">=5"
[package.extras]
all = ["black", "curio", "docrepr", "ipykernel", "ipyparallel", "ipywidgets", "matplotlib", "matplotlib (!=3.2.0)", "nbconvert", "nbformat", "notebook", "numpy (>=1.20)", "pandas", "pytest (<7)", "pytest (<7.1)", "pytest-asyncio", "qtconsole", "setuptools (>=18.5)", "sphinx (>=1.3)", "sphinx-rtd-theme", "stack-data", "testpath", "trio", "typing-extensions"]
black = ["black"]
doc = ["docrepr", "ipykernel", "matplotlib", "pytest (<7)", "pytest (<7.1)", "pytest-asyncio", "setuptools (>=18.5)", "sphinx (>=1.3)", "sphinx-rtd-theme", "stack-data", "testpath", "typing-extensions"]
kernel = ["ipykernel"]
nbconvert = ["nbconvert"]
nbformat = ["nbformat"]
notebook = ["ipywidgets", "notebook"]
parallel = ["ipyparallel"]
qtconsole = ["qtconsole"]
test = ["pytest (<7.1)", "pytest-asyncio", "testpath"]
test_extra = ["curio", "matplotlib (!=3.2.0)", "nbformat", "numpy (>=1.20)", "pandas", "pytest (<7.1)", "pytest-asyncio", "testpath", "trio"]
[[package]]
name = "ipython_genutils"
version = "0.2.0"
description = "Vestigial utilities from IPython"
category = "dev"
optional = false
python-versions = "*"
[[package]]
name = "ipywidgets"
version = "8.0.2"
description = "Jupyter interactive widgets"
category = "dev"
optional = false
python-versions = ">=3.7"
[package.dependencies]
ipykernel = ">=4.5.1"
ipython = ">=6.1.0"
jupyterlab-widgets = ">=3.0,<4.0"
traitlets = ">=4.3.1"
widgetsnbextension = ">=4.0,<5.0"
[package.extras]
test = ["jsonschema", "pytest (>=3.6.0)", "pytest-cov", "pytz"]
[[package]]
name = "isort"
version = "5.10.1"
description = "A Python utility / library to sort Python imports."
category = "dev"
optional = false
python-versions = ">=3.6.1,<4.0"
[package.extras]
colors = ["colorama (>=0.4.3,<0.5.0)"]
pipfile_deprecated_finder = ["pipreqs", "requirementslib"]
plugins = ["setuptools"]
requirements_deprecated_finder = ["pip-api", "pipreqs"]
[[package]]
name = "jedi"
version = "0.18.2"
description = "An autocompletion tool for Python that can be used for text editors."
category = "dev"
optional = false
python-versions = ">=3.6"
[package.dependencies]
parso = ">=0.8.0,<0.9.0"
[package.extras]
docs = ["Jinja2 (==2.11.3)", "MarkupSafe (==1.1.1)", "Pygments (==2.8.1)", "alabaster (==0.7.12)", "babel (==2.9.1)", "chardet (==4.0.0)", "commonmark (==0.8.1)", "docutils (==0.17.1)", "future (==0.18.2)", "idna (==2.10)", "imagesize (==1.2.0)", "mock (==1.0.1)", "packaging (==20.9)", "pyparsing (==2.4.7)", "pytz (==2021.1)", "readthedocs-sphinx-ext (==2.1.4)", "recommonmark (==0.5.0)", "requests (==2.25.1)", "six (==1.15.0)", "snowballstemmer (==2.1.0)", "sphinx (==1.8.5)", "sphinx-rtd-theme (==0.4.3)", "sphinxcontrib-serializinghtml (==1.1.4)", "sphinxcontrib-websupport (==1.2.4)", "urllib3 (==1.26.4)"]
qa = ["flake8 (==3.8.3)", "mypy (==0.782)"]
testing = ["Django (<3.1)", "attrs", "colorama", "docopt", "pytest (<7.0.0)"]
[[package]]
name = "Jinja2"
version = "3.1.2"
description = "A very fast and expressive template engine."
category = "main"
optional = false
python-versions = ">=3.7"
[package.dependencies]
MarkupSafe = ">=2.0"
[package.extras]
i18n = ["Babel (>=2.7)"]
[[package]]
name = "jmespath"
version = "1.0.1"
description = "JSON Matching Expressions"
category = "main"
optional = false
python-versions = ">=3.7"
[[package]]
name = "joblib"
version = "1.2.0"
description = "Lightweight pipelining with Python functions"
category = "main"
optional = false
python-versions = ">=3.7"
[[package]]
name = "jsonschema"
version = "4.17.1"
description = "An implementation of JSON Schema validation for Python"
category = "dev"
optional = false
python-versions = ">=3.7"
[package.dependencies]
attrs = ">=17.4.0"
importlib-resources = {version = ">=1.4.0", markers = "python_version < \"3.9\""}
pkgutil-resolve-name = {version = ">=1.3.10", markers = "python_version < \"3.9\""}
pyrsistent = ">=0.14.0,<0.17.0 || >0.17.0,<0.17.1 || >0.17.1,<0.17.2 || >0.17.2"
[package.extras]
format = ["fqdn", "idna", "isoduration", "jsonpointer (>1.13)", "rfc3339-validator", "rfc3987", "uri-template", "webcolors (>=1.11)"]
format-nongpl = ["fqdn", "idna", "isoduration", "jsonpointer (>1.13)", "rfc3339-validator", "rfc3986-validator (>0.1.0)", "uri-template", "webcolors (>=1.11)"]
[[package]]
name = "jupyter"
version = "1.0.0"
description = "Jupyter metapackage. Install all the Jupyter components in one go."
category = "dev"
optional = false
python-versions = "*"
[package.dependencies]
ipykernel = "*"
ipywidgets = "*"
jupyter-console = "*"
nbconvert = "*"
notebook = "*"
qtconsole = "*"
[[package]]
name = "jupyter-client"
version = "7.4.7"
description = "Jupyter protocol implementation and client libraries"
category = "dev"
optional = false
python-versions = ">=3.7"
[package.dependencies]
entrypoints = "*"
jupyter-core = ">=4.9.2"
nest-asyncio = ">=1.5.4"
python-dateutil = ">=2.8.2"
pyzmq = ">=23.0"
tornado = ">=6.2"
traitlets = "*"
[package.extras]
doc = ["ipykernel", "myst-parser", "sphinx (>=1.3.6)", "sphinx-rtd-theme", "sphinxcontrib-github-alt"]
test = ["codecov", "coverage", "ipykernel (>=6.12)", "ipython", "mypy", "pre-commit", "pytest", "pytest-asyncio (>=0.18)", "pytest-cov", "pytest-timeout"]
[[package]]
name = "jupyter-console"
version = "6.4.4"
description = "Jupyter terminal console"
category = "dev"
optional = false
python-versions = ">=3.7"
[package.dependencies]
ipykernel = "*"
ipython = "*"
jupyter-client = ">=7.0.0"
prompt-toolkit = ">=2.0.0,<3.0.0 || >3.0.0,<3.0.1 || >3.0.1,<3.1.0"
pygments = "*"
[package.extras]
test = ["pexpect"]
[[package]]
name = "jupyter-core"
version = "5.1.0"
description = "Jupyter core package. A base package on which Jupyter projects rely."
category = "dev"
optional = false
python-versions = ">=3.8"
[package.dependencies]
platformdirs = ">=2.5"
pywin32 = {version = ">=1.0", markers = "sys_platform == \"win32\" and platform_python_implementation != \"PyPy\""}
traitlets = ">=5.3"
[package.extras]
docs = ["myst-parser", "sphinxcontrib-github-alt", "traitlets"]
test = ["ipykernel", "pre-commit", "pytest", "pytest-cov", "pytest-timeout"]
[[package]]
name = "jupyter-server"
version = "1.23.3"
description = "The backend—i.e. core services, APIs, and REST endpoints—to Jupyter web applications."
category = "dev"
optional = false
python-versions = ">=3.7"
[package.dependencies]
anyio = ">=3.1.0,<4"
argon2-cffi = "*"
jinja2 = "*"
jupyter-client = ">=6.1.12"
jupyter-core = ">=4.7.0"
nbconvert = ">=6.4.4"
nbformat = ">=5.2.0"
packaging = "*"
prometheus-client = "*"
pywinpty = {version = "*", markers = "os_name == \"nt\""}
pyzmq = ">=17"
Send2Trash = "*"
terminado = ">=0.8.3"
tornado = ">=6.1.0"
traitlets = ">=5.1"
websocket-client = "*"
[package.extras]
test = ["coverage", "ipykernel", "pre-commit", "pytest (>=7.0)", "pytest-console-scripts", "pytest-cov", "pytest-mock", "pytest-timeout", "pytest-tornasync", "requests"]
[[package]]
name = "jupyterlab-pygments"
version = "0.2.2"
description = "Pygments theme using JupyterLab CSS variables"
category = "dev"
optional = false
python-versions = ">=3.7"
[[package]]
name = "jupyterlab-widgets"
version = "3.0.3"
description = "Jupyter interactive widgets for JupyterLab"
category = "dev"
optional = false
python-versions = ">=3.7"
[[package]]
name = "keras"
version = "2.11.0"
description = "Deep learning for humans."
category = "dev"
optional = false
python-versions = ">=3.7"
[[package]]
name = "kiwisolver"
version = "1.4.4"
description = "A fast implementation of the Cassowary constraint solver"
category = "main"
optional = false
python-versions = ">=3.7"
[[package]]
name = "langcodes"
version = "3.3.0"
description = "Tools for labeling human languages with IETF language tags"
category = "main"
optional = false
python-versions = ">=3.6"
[package.extras]
data = ["language-data (>=1.1,<2.0)"]
[[package]]
name = "libclang"
version = "14.0.6"
description = "Clang Python Bindings, mirrored from the official LLVM repo: https://github.com/llvm/llvm-project/tree/main/clang/bindings/python, to make the installation process easier."
category = "dev"
optional = false
python-versions = "*"
[[package]]
name = "lightgbm"
version = "3.3.3"
description = "LightGBM Python Package"
category = "main"
optional = false
python-versions = "*"
[package.dependencies]
numpy = "*"
scikit-learn = "!=0.22.0"
scipy = "*"
wheel = "*"
[package.extras]
dask = ["dask[array] (>=2.0.0)", "dask[dataframe] (>=2.0.0)", "dask[distributed] (>=2.0.0)", "pandas"]
[[package]]
name = "llvmlite"
version = "0.36.0"
description = "lightweight wrapper around basic LLVM functionality"
category = "main"
optional = false
python-versions = ">=3.6,<3.10"
[[package]]
name = "locket"
version = "1.0.0"
description = "File-based locks for Python on Linux and Windows"
category = "main"
optional = false
python-versions = ">=2.7, !=3.0.*, !=3.1.*, !=3.2.*, !=3.3.*"
[[package]]
name = "Markdown"
version = "3.4.1"
description = "Python implementation of Markdown."
category = "dev"
optional = false
python-versions = ">=3.7"
[package.dependencies]
importlib-metadata = {version = ">=4.4", markers = "python_version < \"3.10\""}
[package.extras]
testing = ["coverage", "pyyaml"]
[[package]]
name = "MarkupSafe"
version = "2.1.1"
description = "Safely add untrusted strings to HTML/XML markup."
category = "main"
optional = false
python-versions = ">=3.7"
[[package]]
name = "matplotlib"
version = "3.6.2"
description = "Python plotting package"
category = "main"
optional = false
python-versions = ">=3.8"
[package.dependencies]
contourpy = ">=1.0.1"
cycler = ">=0.10"
fonttools = ">=4.22.0"
kiwisolver = ">=1.0.1"
numpy = ">=1.19"
packaging = ">=20.0"
pillow = ">=6.2.0"
pyparsing = ">=2.2.1"
python-dateutil = ">=2.7"
setuptools_scm = ">=7"
[[package]]
name = "matplotlib-inline"
version = "0.1.6"
description = "Inline Matplotlib backend for Jupyter"
category = "dev"
optional = false
python-versions = ">=3.5"
[package.dependencies]
traitlets = "*"
[[package]]
name = "mccabe"
version = "0.6.1"
description = "McCabe checker, plugin for flake8"
category = "dev"
optional = false
python-versions = "*"
[[package]]
name = "mistune"
version = "2.0.4"
description = "A sane Markdown parser with useful plugins and renderers"
category = "dev"
optional = false
python-versions = "*"
[[package]]
name = "mpmath"
version = "1.2.1"
description = "Python library for arbitrary-precision floating-point arithmetic"
category = "main"
optional = false
python-versions = "*"
[package.extras]
develop = ["codecov", "pycodestyle", "pytest (>=4.6)", "pytest-cov", "wheel"]
tests = ["pytest (>=4.6)"]
[[package]]
name = "msgpack"
version = "1.0.4"
description = "MessagePack serializer"
category = "main"
optional = false
python-versions = "*"
[[package]]
name = "multiprocess"
version = "0.70.14"
description = "better multiprocessing and multithreading in python"
category = "main"
optional = true
python-versions = ">=3.7"
[package.dependencies]
dill = ">=0.3.6"
[[package]]
name = "murmurhash"
version = "1.0.9"
description = "Cython bindings for MurmurHash"
category = "main"
optional = false
python-versions = ">=3.6"
[[package]]
name = "mypy"
version = "0.971"
description = "Optional static typing for Python"
category = "dev"
optional = false
python-versions = ">=3.6"
[package.dependencies]
mypy-extensions = ">=0.4.3"
tomli = {version = ">=1.1.0", markers = "python_version < \"3.11\""}
typing-extensions = ">=3.10"
[package.extras]
dmypy = ["psutil (>=4.0)"]
python2 = ["typed-ast (>=1.4.0,<2)"]
reports = ["lxml"]
[[package]]
name = "mypy-extensions"
version = "0.4.3"
description = "Experimental type system extensions for programs checked with the mypy typechecker."
category = "dev"
optional = false
python-versions = "*"
[[package]]
name = "nbclassic"
version = "0.4.8"
description = "A web-based notebook environment for interactive computing"
category = "dev"
optional = false
python-versions = ">=3.7"
[package.dependencies]
argon2-cffi = "*"
ipykernel = "*"
ipython-genutils = "*"
jinja2 = "*"
jupyter-client = ">=6.1.1"
jupyter-core = ">=4.6.1"
jupyter-server = ">=1.8"
nbconvert = ">=5"
nbformat = "*"
nest-asyncio = ">=1.5"
notebook-shim = ">=0.1.0"
prometheus-client = "*"
pyzmq = ">=17"
Send2Trash = ">=1.8.0"
terminado = ">=0.8.3"
tornado = ">=6.1"
traitlets = ">=4.2.1"
[package.extras]
docs = ["myst-parser", "nbsphinx", "sphinx", "sphinx-rtd-theme", "sphinxcontrib-github-alt"]
json-logging = ["json-logging"]
test = ["coverage", "nbval", "pytest", "pytest-cov", "pytest-playwright", "pytest-tornasync", "requests", "requests-unixsocket", "testpath"]
[[package]]
name = "nbclient"
version = "0.7.0"
description = "A client library for executing notebooks. Formerly nbconvert's ExecutePreprocessor."
category = "dev"
optional = false
python-versions = ">=3.7.0"
[package.dependencies]
jupyter-client = ">=6.1.5"
nbformat = ">=5.0"
nest-asyncio = "*"
traitlets = ">=5.2.2"
[package.extras]
sphinx = ["Sphinx (>=1.7)", "autodoc-traits", "mock", "moto", "myst-parser", "sphinx-book-theme"]
test = ["black", "check-manifest", "flake8", "ipykernel", "ipython", "ipywidgets", "mypy", "nbconvert", "pip (>=18.1)", "pre-commit", "pytest (>=4.1)", "pytest-asyncio", "pytest-cov (>=2.6.1)", "setuptools (>=60.0)", "testpath", "twine (>=1.11.0)", "xmltodict"]
[[package]]
name = "nbconvert"
version = "7.0.0rc3"
description = "Converting Jupyter Notebooks"
category = "dev"
optional = false
python-versions = ">=3.7"
[package.dependencies]
beautifulsoup4 = "*"
bleach = "*"
defusedxml = "*"
importlib-metadata = {version = ">=3.6", markers = "python_version < \"3.10\""}
jinja2 = ">=3.0"
jupyter-core = ">=4.7"
jupyterlab-pygments = "*"
markupsafe = ">=2.0"
mistune = ">=2.0.2,<3"
nbclient = ">=0.5.0"
nbformat = ">=5.1"
packaging = "*"
pandocfilters = ">=1.4.1"
pygments = ">=2.4.1"
tinycss2 = "*"
traitlets = ">=5.0"
[package.extras]
all = ["ipykernel", "ipython", "ipywidgets (>=7)", "nbsphinx (>=0.2.12)", "pre-commit", "pyppeteer (>=1,<1.1)", "pytest", "pytest-cov", "pytest-dependency", "sphinx (>=1.5.1)", "sphinx-rtd-theme", "tornado (>=6.1)"]
docs = ["ipython", "nbsphinx (>=0.2.12)", "sphinx (>=1.5.1)", "sphinx-rtd-theme"]
serve = ["tornado (>=6.1)"]
test = ["ipykernel", "ipywidgets (>=7)", "pre-commit", "pyppeteer (>=1,<1.1)", "pytest", "pytest-cov", "pytest-dependency"]
webpdf = ["pyppeteer (>=1,<1.1)"]
[[package]]
name = "nbformat"
version = "5.7.0"
description = "The Jupyter Notebook format"
category = "dev"
optional = false
python-versions = ">=3.7"
[package.dependencies]
fastjsonschema = "*"
jsonschema = ">=2.6"
jupyter-core = "*"
traitlets = ">=5.1"
[package.extras]
test = ["check-manifest", "pep440", "pre-commit", "pytest", "testpath"]
[[package]]
name = "nbsphinx"
version = "0.8.10"
description = "Jupyter Notebook Tools for Sphinx"
category = "dev"
optional = false
python-versions = ">=3.6"
[package.dependencies]
docutils = "*"
jinja2 = "*"
nbconvert = "!=5.4"
nbformat = "*"
sphinx = ">=1.8"
traitlets = ">=5"
[[package]]
name = "nest-asyncio"
version = "1.5.6"
description = "Patch asyncio to allow nested event loops"
category = "dev"
optional = false
python-versions = ">=3.5"
[[package]]
name = "networkx"
version = "2.8.8"
description = "Python package for creating and manipulating graphs and networks"
category = "main"
optional = false
python-versions = ">=3.8"
[package.extras]
default = ["matplotlib (>=3.4)", "numpy (>=1.19)", "pandas (>=1.3)", "scipy (>=1.8)"]
developer = ["mypy (>=0.982)", "pre-commit (>=2.20)"]
doc = ["nb2plots (>=0.6)", "numpydoc (>=1.5)", "pillow (>=9.2)", "pydata-sphinx-theme (>=0.11)", "sphinx (>=5.2)", "sphinx-gallery (>=0.11)", "texext (>=0.6.6)"]
extra = ["lxml (>=4.6)", "pydot (>=1.4.2)", "pygraphviz (>=1.9)", "sympy (>=1.10)"]
test = ["codecov (>=2.1)", "pytest (>=7.2)", "pytest-cov (>=4.0)"]
[[package]]
name = "notebook"
version = "6.5.2"
description = "A web-based notebook environment for interactive computing"
category = "dev"
optional = false
python-versions = ">=3.7"
[package.dependencies]
argon2-cffi = "*"
ipykernel = "*"
ipython-genutils = "*"
jinja2 = "*"
jupyter-client = ">=5.3.4"
jupyter-core = ">=4.6.1"
nbclassic = ">=0.4.7"
nbconvert = ">=5"
nbformat = "*"
nest-asyncio = ">=1.5"
prometheus-client = "*"
pyzmq = ">=17"
Send2Trash = ">=1.8.0"
terminado = ">=0.8.3"
tornado = ">=6.1"
traitlets = ">=4.2.1"
[package.extras]
docs = ["myst-parser", "nbsphinx", "sphinx", "sphinx-rtd-theme", "sphinxcontrib-github-alt"]
json-logging = ["json-logging"]
test = ["coverage", "nbval", "pytest", "pytest-cov", "requests", "requests-unixsocket", "selenium (==4.1.5)", "testpath"]
[[package]]
name = "notebook-shim"
version = "0.2.2"
description = "A shim layer for notebook traits and config"
category = "dev"
optional = false
python-versions = ">=3.7"
[package.dependencies]
jupyter-server = ">=1.8,<3"
[package.extras]
test = ["pytest", "pytest-console-scripts", "pytest-tornasync"]
[[package]]
name = "numba"
version = "0.53.1"
description = "compiling Python code using LLVM"
category = "main"
optional = false
python-versions = ">=3.6,<3.10"
[package.dependencies]
llvmlite = ">=0.36.0rc1,<0.37"
numpy = ">=1.15"
setuptools = "*"
[[package]]
name = "numpy"
version = "1.23.5"
description = "NumPy is the fundamental package for array computing with Python."
category = "main"
optional = false
python-versions = ">=3.8"
[[package]]
name = "oauthlib"
version = "3.2.2"
description = "A generic, spec-compliant, thorough implementation of the OAuth request-signing logic"
category = "dev"
optional = false
python-versions = ">=3.6"
[package.extras]
rsa = ["cryptography (>=3.0.0)"]
signals = ["blinker (>=1.4.0)"]
signedtoken = ["cryptography (>=3.0.0)", "pyjwt (>=2.0.0,<3)"]
[[package]]
name = "opt-einsum"
version = "3.3.0"
description = "Optimizing numpys einsum function"
category = "main"
optional = false
python-versions = ">=3.5"
[package.dependencies]
numpy = ">=1.7"
[package.extras]
docs = ["numpydoc", "sphinx (==1.2.3)", "sphinx-rtd-theme", "sphinxcontrib-napoleon"]
tests = ["pytest", "pytest-cov", "pytest-pep8"]
[[package]]
name = "packaging"
version = "21.3"
description = "Core utilities for Python packages"
category = "main"
optional = false
python-versions = ">=3.6"
[package.dependencies]
pyparsing = ">=2.0.2,<3.0.5 || >3.0.5"
[[package]]
name = "pandas"
version = "1.5.2"
description = "Powerful data structures for data analysis, time series, and statistics"
category = "main"
optional = false
python-versions = ">=3.8"
[package.dependencies]
numpy = {version = ">=1.20.3", markers = "python_version < \"3.10\""}
python-dateutil = ">=2.8.1"
pytz = ">=2020.1"
[package.extras]
test = ["hypothesis (>=5.5.3)", "pytest (>=6.0)", "pytest-xdist (>=1.31)"]
[[package]]
name = "pandocfilters"
version = "1.5.0"
description = "Utilities for writing pandoc filters in python"
category = "dev"
optional = false
python-versions = ">=2.7, !=3.0.*, !=3.1.*, !=3.2.*, !=3.3.*"
[[package]]
name = "parso"
version = "0.8.3"
description = "A Python Parser"
category = "dev"
optional = false
python-versions = ">=3.6"
[package.extras]
qa = ["flake8 (==3.8.3)", "mypy (==0.782)"]
testing = ["docopt", "pytest (<6.0.0)"]
[[package]]
name = "partd"
version = "1.3.0"
description = "Appendable key-value storage"
category = "main"
optional = false
python-versions = ">=3.7"
[package.dependencies]
locket = "*"
toolz = "*"
[package.extras]
complete = ["blosc", "numpy (>=1.9.0)", "pandas (>=0.19.0)", "pyzmq"]
[[package]]
name = "pastel"
version = "0.2.1"
description = "Bring colors to your terminal."
category = "dev"
optional = false
python-versions = ">=2.7, !=3.0.*, !=3.1.*, !=3.2.*, !=3.3.*"
[[package]]
name = "pathos"
version = "0.2.9"
description = "parallel graph management and execution in heterogeneous computing"
category = "main"
optional = true
python-versions = ">=2.7, !=3.0.*, !=3.1.*, !=3.2.*, !=3.3.*, !=3.4.*, !=3.5.*, !=3.6.*"
[package.dependencies]
dill = ">=0.3.5.1"
multiprocess = ">=0.70.13"
pox = ">=0.3.1"
ppft = ">=1.7.6.5"
[[package]]
name = "pathspec"
version = "0.10.2"
description = "Utility library for gitignore style pattern matching of file paths."
category = "dev"
optional = false
python-versions = ">=3.7"
[[package]]
name = "pathy"
version = "0.10.0"
description = "pathlib.Path subclasses for local and cloud bucket storage"
category = "main"
optional = false
python-versions = ">= 3.6"
[package.dependencies]
smart-open = ">=5.2.1,<6.0.0"
typer = ">=0.3.0,<1.0.0"
[package.extras]
all = ["azure-storage-blob", "boto3", "google-cloud-storage (>=1.26.0,<2.0.0)", "mock", "pytest", "pytest-coverage", "typer-cli"]
azure = ["azure-storage-blob"]
gcs = ["google-cloud-storage (>=1.26.0,<2.0.0)"]
s3 = ["boto3"]
test = ["mock", "pytest", "pytest-coverage", "typer-cli"]
[[package]]
name = "patsy"
version = "0.5.3"
description = "A Python package for describing statistical models and for building design matrices."
category = "main"
optional = false
python-versions = "*"
[package.dependencies]
numpy = ">=1.4"
six = "*"
[package.extras]
test = ["pytest", "pytest-cov", "scipy"]
[[package]]
name = "pexpect"
version = "4.8.0"
description = "Pexpect allows easy control of interactive console applications."
category = "dev"
optional = false
python-versions = "*"
[package.dependencies]
ptyprocess = ">=0.5"
[[package]]
name = "pickleshare"
version = "0.7.5"
description = "Tiny 'shelve'-like database with concurrency support"
category = "dev"
optional = false
python-versions = "*"
[[package]]
name = "Pillow"
version = "9.3.0"
description = "Python Imaging Library (Fork)"
category = "main"
optional = false
python-versions = ">=3.7"
[package.extras]
docs = ["furo", "olefile", "sphinx (>=2.4)", "sphinx-copybutton", "sphinx-issues (>=3.0.1)", "sphinx-removed-in", "sphinxext-opengraph"]
tests = ["check-manifest", "coverage", "defusedxml", "markdown2", "olefile", "packaging", "pyroma", "pytest", "pytest-cov", "pytest-timeout"]
[[package]]
name = "pip"
version = "22.3.1"
description = "The PyPA recommended tool for installing Python packages."
category = "main"
optional = false
python-versions = ">=3.7"
[[package]]
name = "pkgutil_resolve_name"
version = "1.3.10"
description = "Resolve a name to an object."
category = "dev"
optional = false
python-versions = ">=3.6"
[[package]]
name = "platformdirs"
version = "2.5.4"
description = "A small Python package for determining appropriate platform-specific dirs, e.g. a \"user data dir\"."
category = "dev"
optional = false
python-versions = ">=3.7"
[package.extras]
docs = ["furo (>=2022.9.29)", "proselint (>=0.13)", "sphinx (>=5.3)", "sphinx-autodoc-typehints (>=1.19.4)"]
test = ["appdirs (==1.4.4)", "pytest (>=7.2)", "pytest-cov (>=4)", "pytest-mock (>=3.10)"]
[[package]]
name = "plotly"
version = "5.11.0"
description = "An open-source, interactive data visualization library for Python"
category = "main"
optional = false
python-versions = ">=3.6"
[package.dependencies]
tenacity = ">=6.2.0"
[[package]]
name = "pluggy"
version = "1.0.0"
description = "plugin and hook calling mechanisms for python"
category = "dev"
optional = false
python-versions = ">=3.6"
[package.extras]
dev = ["pre-commit", "tox"]
testing = ["pytest", "pytest-benchmark"]
[[package]]
name = "poethepoet"
version = "0.16.5"
description = "A task runner that works well with poetry."
category = "dev"
optional = false
python-versions = ">=3.7"
[package.dependencies]
pastel = ">=0.2.1,<0.3.0"
tomli = ">=1.2.2"
[package.extras]
poetry-plugin = ["poetry (>=1.0,<2.0)"]
[[package]]
name = "pox"
version = "0.3.2"
description = "utilities for filesystem exploration and automated builds"
category = "main"
optional = true
python-versions = ">=3.7"
[[package]]
name = "ppft"
version = "1.7.6.6"
description = "distributed and parallel python"
category = "main"
optional = true
python-versions = ">=3.7"
[package.extras]
dill = ["dill (>=0.3.6)"]
[[package]]
name = "preshed"
version = "3.0.8"
description = "Cython hash table that trusts the keys are pre-hashed"
category = "main"
optional = false
python-versions = ">=3.6"
[package.dependencies]
cymem = ">=2.0.2,<2.1.0"
murmurhash = ">=0.28.0,<1.1.0"
[[package]]
name = "progressbar2"
version = "4.2.0"
description = "A Python Progressbar library to provide visual (yet text based) progress to long running operations."
category = "main"
optional = true
python-versions = ">=3.7.0"
[package.dependencies]
python-utils = ">=3.0.0"
[package.extras]
docs = ["sphinx (>=1.8.5)"]
tests = ["flake8 (>=3.7.7)", "freezegun (>=0.3.11)", "pytest (>=4.6.9)", "pytest-cov (>=2.6.1)", "pytest-mypy", "sphinx (>=1.8.5)"]
[[package]]
name = "prometheus-client"
version = "0.15.0"
description = "Python client for the Prometheus monitoring system."
category = "dev"
optional = false
python-versions = ">=3.6"
[package.extras]
twisted = ["twisted"]
[[package]]
name = "prompt-toolkit"
version = "3.0.33"
description = "Library for building powerful interactive command lines in Python"
category = "dev"
optional = false
python-versions = ">=3.6.2"
[package.dependencies]
wcwidth = "*"
[[package]]
name = "protobuf"
version = "3.19.6"
description = "Protocol Buffers"
category = "dev"
optional = false
python-versions = ">=3.5"
[[package]]
name = "psutil"
version = "5.9.4"
description = "Cross-platform lib for process and system monitoring in Python."
category = "main"
optional = false
python-versions = ">=2.7, !=3.0.*, !=3.1.*, !=3.2.*, !=3.3.*"
[package.extras]
test = ["enum34", "ipaddress", "mock", "pywin32", "wmi"]
[[package]]
name = "ptyprocess"
version = "0.7.0"
description = "Run a subprocess in a pseudo terminal"
category = "dev"
optional = false
python-versions = "*"
[[package]]
name = "pure-eval"
version = "0.2.2"
description = "Safely evaluate AST nodes without side effects"
category = "dev"
optional = false
python-versions = "*"
[package.extras]
tests = ["pytest"]
[[package]]
name = "py"
version = "1.11.0"
description = "library with cross-python path, ini-parsing, io, code, log facilities"
category = "dev"
optional = false
python-versions = ">=2.7, !=3.0.*, !=3.1.*, !=3.2.*, !=3.3.*, !=3.4.*"
[[package]]
name = "pyasn1"
version = "0.4.8"
description = "ASN.1 types and codecs"
category = "dev"
optional = false
python-versions = "*"
[[package]]
name = "pyasn1-modules"
version = "0.2.8"
description = "A collection of ASN.1-based protocols modules."
category = "dev"
optional = false
python-versions = "*"
[package.dependencies]
pyasn1 = ">=0.4.6,<0.5.0"
[[package]]
name = "pycodestyle"
version = "2.8.0"
description = "Python style guide checker"
category = "dev"
optional = false
python-versions = ">=2.7, !=3.0.*, !=3.1.*, !=3.2.*, !=3.3.*, !=3.4.*"
[[package]]
name = "pycparser"
version = "2.21"
description = "C parser in Python"
category = "dev"
optional = false
python-versions = ">=2.7, !=3.0.*, !=3.1.*, !=3.2.*, !=3.3.*"
[[package]]
name = "pydantic"
version = "1.10.2"
description = "Data validation and settings management using python type hints"
category = "main"
optional = false
python-versions = ">=3.7"
[package.dependencies]
typing-extensions = ">=4.1.0"
[package.extras]
dotenv = ["python-dotenv (>=0.10.4)"]
email = ["email-validator (>=1.0.3)"]
[[package]]
name = "pydata-sphinx-theme"
version = "0.9.0"
description = "Bootstrap-based Sphinx theme from the PyData community"
category = "dev"
optional = false
python-versions = ">=3.7"
[package.dependencies]
beautifulsoup4 = "*"
docutils = "!=0.17.0"
packaging = "*"
sphinx = ">=4.0.2"
[package.extras]
coverage = ["codecov", "pydata-sphinx-theme[test]", "pytest-cov"]
dev = ["nox", "pre-commit", "pydata-sphinx-theme[coverage]", "pyyaml"]
doc = ["jupyter_sphinx", "myst-parser", "numpy", "numpydoc", "pandas", "plotly", "pytest", "pytest-regressions", "sphinx-design", "sphinx-sitemap", "sphinxext-rediraffe", "xarray"]
test = ["pydata-sphinx-theme[doc]", "pytest"]
[[package]]
name = "pydot"
version = "1.4.2"
description = "Python interface to Graphviz's Dot"
category = "main"
optional = false
python-versions = ">=2.7, !=3.0.*, !=3.1.*, !=3.2.*, !=3.3.*"
[package.dependencies]
pyparsing = ">=2.1.4"
[[package]]
name = "pydotplus"
version = "2.0.2"
description = "Python interface to Graphviz's Dot language"
category = "main"
optional = true
python-versions = "*"
[package.dependencies]
pyparsing = ">=2.0.1"
[[package]]
name = "pyflakes"
version = "2.4.0"
description = "passive checker of Python programs"
category = "dev"
optional = false
python-versions = ">=2.7, !=3.0.*, !=3.1.*, !=3.2.*, !=3.3.*"
[[package]]
name = "pygam"
version = "0.8.0"
description = "GAM toolkit"
category = "main"
optional = true
python-versions = "*"
[package.dependencies]
future = "*"
numpy = "*"
progressbar2 = "*"
scipy = "*"
[[package]]
name = "Pygments"
version = "2.13.0"
description = "Pygments is a syntax highlighting package written in Python."
category = "main"
optional = false
python-versions = ">=3.6"
[package.extras]
plugins = ["importlib-metadata"]
[[package]]
name = "pygraphviz"
version = "1.10"
description = "Python interface to Graphviz"
category = "main"
optional = false
python-versions = ">=3.8"
[[package]]
name = "pyparsing"
version = "3.0.9"
description = "pyparsing module - Classes and methods to define and execute parsing grammars"
category = "main"
optional = false
python-versions = ">=3.6.8"
[package.extras]
diagrams = ["jinja2", "railroad-diagrams"]
[[package]]
name = "pyro-api"
version = "0.1.2"
description = "Generic API for dispatch to Pyro backends."
category = "main"
optional = true
python-versions = "*"
[package.extras]
dev = ["ipython", "sphinx (>=2.0)", "sphinx-rtd-theme"]
test = ["flake8", "pytest (>=5.0)"]
[[package]]
name = "pyro-ppl"
version = "1.8.3"
description = "A Python library for probabilistic modeling and inference"
category = "main"
optional = true
python-versions = ">=3.7"
[package.dependencies]
numpy = ">=1.7"
opt-einsum = ">=2.3.2"
pyro-api = ">=0.1.1"
torch = ">=1.11.0"
tqdm = ">=4.36"
[package.extras]
dev = ["black (>=21.4b0)", "flake8", "graphviz (>=0.8)", "isort (>=5.0)", "jupyter (>=1.0.0)", "lap", "matplotlib (>=1.3)", "mypy (>=0.812)", "nbformat", "nbsphinx (>=0.3.2)", "nbstripout", "nbval", "ninja", "pandas", "pillow (==8.2.0)", "pypandoc", "pytest (>=5.0)", "pytest-xdist", "scikit-learn", "scipy (>=1.1)", "seaborn (>=0.11.0)", "sphinx", "sphinx-rtd-theme", "torchvision (>=0.12.0)", "visdom (>=0.1.4,<0.2.2)", "wget", "yapf"]
extras = ["graphviz (>=0.8)", "jupyter (>=1.0.0)", "lap", "matplotlib (>=1.3)", "pandas", "pillow (==8.2.0)", "scikit-learn", "seaborn (>=0.11.0)", "torchvision (>=0.12.0)", "visdom (>=0.1.4,<0.2.2)", "wget"]
funsor = ["funsor[torch] (==0.4.3)"]
horovod = ["horovod[pytorch] (>=0.19)"]
profile = ["prettytable", "pytest-benchmark", "snakeviz"]
test = ["black (>=21.4b0)", "flake8", "graphviz (>=0.8)", "jupyter (>=1.0.0)", "lap", "matplotlib (>=1.3)", "nbval", "pandas", "pillow (==8.2.0)", "pytest (>=5.0)", "pytest-cov", "scikit-learn", "scipy (>=1.1)", "seaborn (>=0.11.0)", "torchvision (>=0.12.0)", "visdom (>=0.1.4,<0.2.2)", "wget"]
[[package]]
name = "pyrsistent"
version = "0.19.2"
description = "Persistent/Functional/Immutable data structures"
category = "dev"
optional = false
python-versions = ">=3.7"
[[package]]
name = "pytest"
version = "7.2.0"
description = "pytest: simple powerful testing with Python"
category = "dev"
optional = false
python-versions = ">=3.7"
[package.dependencies]
attrs = ">=19.2.0"
colorama = {version = "*", markers = "sys_platform == \"win32\""}
exceptiongroup = {version = ">=1.0.0rc8", markers = "python_version < \"3.11\""}
iniconfig = "*"
packaging = "*"
pluggy = ">=0.12,<2.0"
tomli = {version = ">=1.0.0", markers = "python_version < \"3.11\""}
[package.extras]
testing = ["argcomplete", "hypothesis (>=3.56)", "mock", "nose", "pygments (>=2.7.2)", "requests", "xmlschema"]
[[package]]
name = "pytest-cov"
version = "3.0.0"
description = "Pytest plugin for measuring coverage."
category = "dev"
optional = false
python-versions = ">=3.6"
[package.dependencies]
coverage = {version = ">=5.2.1", extras = ["toml"]}
pytest = ">=4.6"
[package.extras]
testing = ["fields", "hunter", "process-tests", "pytest-xdist", "six", "virtualenv"]
[[package]]
name = "pytest-split"
version = "0.8.0"
description = "Pytest plugin which splits the test suite to equally sized sub suites based on test execution time."
category = "dev"
optional = false
python-versions = ">=3.7.1,<4.0"
[package.dependencies]
pytest = ">=5,<8"
[[package]]
name = "python-dateutil"
version = "2.8.2"
description = "Extensions to the standard Python datetime module"
category = "main"
optional = false
python-versions = "!=3.0.*,!=3.1.*,!=3.2.*,>=2.7"
[package.dependencies]
six = ">=1.5"
[[package]]
name = "python-utils"
version = "3.4.5"
description = "Python Utils is a module with some convenient utilities not included with the standard Python install"
category = "main"
optional = true
python-versions = ">3.6.0"
[package.extras]
docs = ["mock", "python-utils", "sphinx"]
loguru = ["loguru"]
tests = ["flake8", "loguru", "pytest", "pytest-asyncio", "pytest-cov", "pytest-mypy", "sphinx", "types-setuptools"]
[[package]]
name = "pytz"
version = "2022.6"
description = "World timezone definitions, modern and historical"
category = "main"
optional = false
python-versions = "*"
[[package]]
name = "pytz-deprecation-shim"
version = "0.1.0.post0"
description = "Shims to make deprecation of pytz easier"
category = "dev"
optional = false
python-versions = "!=3.0.*,!=3.1.*,!=3.2.*,!=3.3.*,!=3.4.*,!=3.5.*,>=2.7"
[package.dependencies]
"backports.zoneinfo" = {version = "*", markers = "python_version >= \"3.6\" and python_version < \"3.9\""}
tzdata = {version = "*", markers = "python_version >= \"3.6\""}
[[package]]
name = "pywin32"
version = "305"
description = "Python for Window Extensions"
category = "dev"
optional = false
python-versions = "*"
[[package]]
name = "pywinpty"
version = "2.0.9"
description = "Pseudo terminal support for Windows from Python."
category = "dev"
optional = false
python-versions = ">=3.7"
[[package]]
name = "PyYAML"
version = "6.0"
description = "YAML parser and emitter for Python"
category = "main"
optional = false
python-versions = ">=3.6"
[[package]]
name = "pyzmq"
version = "24.0.1"
description = "Python bindings for 0MQ"
category = "dev"
optional = false
python-versions = ">=3.6"
[package.dependencies]
cffi = {version = "*", markers = "implementation_name == \"pypy\""}
py = {version = "*", markers = "implementation_name == \"pypy\""}
[[package]]
name = "qtconsole"
version = "5.4.0"
description = "Jupyter Qt console"
category = "dev"
optional = false
python-versions = ">= 3.7"
[package.dependencies]
ipykernel = ">=4.1"
ipython-genutils = "*"
jupyter-client = ">=4.1"
jupyter-core = "*"
pygments = "*"
pyzmq = ">=17.1"
qtpy = ">=2.0.1"
traitlets = "<5.2.1 || >5.2.1,<5.2.2 || >5.2.2"
[package.extras]
doc = ["Sphinx (>=1.3)"]
test = ["flaky", "pytest", "pytest-qt"]
[[package]]
name = "QtPy"
version = "2.3.0"
description = "Provides an abstraction layer on top of the various Qt bindings (PyQt5/6 and PySide2/6)."
category = "dev"
optional = false
python-versions = ">=3.7"
[package.dependencies]
packaging = "*"
[package.extras]
test = ["pytest (>=6,!=7.0.0,!=7.0.1)", "pytest-cov (>=3.0.0)", "pytest-qt"]
[[package]]
name = "requests"
version = "2.28.1"
description = "Python HTTP for Humans."
category = "main"
optional = false
python-versions = ">=3.7, <4"
[package.dependencies]
certifi = ">=2017.4.17"
charset-normalizer = ">=2,<3"
idna = ">=2.5,<4"
urllib3 = ">=1.21.1,<1.27"
[package.extras]
socks = ["PySocks (>=1.5.6,!=1.5.7)"]
use_chardet_on_py3 = ["chardet (>=3.0.2,<6)"]
[[package]]
name = "requests-oauthlib"
version = "1.3.1"
description = "OAuthlib authentication support for Requests."
category = "dev"
optional = false
python-versions = ">=2.7, !=3.0.*, !=3.1.*, !=3.2.*, !=3.3.*"
[package.dependencies]
oauthlib = ">=3.0.0"
requests = ">=2.0.0"
[package.extras]
rsa = ["oauthlib[signedtoken] (>=3.0.0)"]
[[package]]
name = "rpy2"
version = "3.5.6"
description = "Python interface to the R language (embedded R)"
category = "dev"
optional = false
python-versions = ">=3.7"
[package.dependencies]
cffi = ">=1.10.0"
jinja2 = "*"
packaging = {version = "*", markers = "platform_system == \"Windows\""}
pytz = "*"
tzlocal = "*"
[package.extras]
all = ["ipython", "numpy", "pandas", "pytest"]
numpy = ["pandas"]
pandas = ["numpy", "pandas"]
test = ["ipython", "numpy", "pandas", "pytest"]
[[package]]
name = "rsa"
version = "4.9"
description = "Pure-Python RSA implementation"
category = "dev"
optional = false
python-versions = ">=3.6,<4"
[package.dependencies]
pyasn1 = ">=0.1.3"
[[package]]
name = "s3transfer"
version = "0.6.0"
description = "An Amazon S3 Transfer Manager"
category = "main"
optional = false
python-versions = ">= 3.7"
[package.dependencies]
botocore = ">=1.12.36,<2.0a.0"
[package.extras]
crt = ["botocore[crt] (>=1.20.29,<2.0a.0)"]
[[package]]
name = "scikit-learn"
version = "1.0.2"
description = "A set of python modules for machine learning and data mining"
category = "main"
optional = false
python-versions = ">=3.7"
[package.dependencies]
joblib = ">=0.11"
numpy = ">=1.14.6"
scipy = ">=1.1.0"
threadpoolctl = ">=2.0.0"
[package.extras]
benchmark = ["matplotlib (>=2.2.3)", "memory-profiler (>=0.57.0)", "pandas (>=0.25.0)"]
docs = ["Pillow (>=7.1.2)", "matplotlib (>=2.2.3)", "memory-profiler (>=0.57.0)", "numpydoc (>=1.0.0)", "pandas (>=0.25.0)", "scikit-image (>=0.14.5)", "seaborn (>=0.9.0)", "sphinx (>=4.0.1)", "sphinx-gallery (>=0.7.0)", "sphinx-prompt (>=1.3.0)", "sphinxext-opengraph (>=0.4.2)"]
examples = ["matplotlib (>=2.2.3)", "pandas (>=0.25.0)", "scikit-image (>=0.14.5)", "seaborn (>=0.9.0)"]
tests = ["black (>=21.6b0)", "flake8 (>=3.8.2)", "matplotlib (>=2.2.3)", "mypy (>=0.770)", "pandas (>=0.25.0)", "pyamg (>=4.0.0)", "pytest (>=5.0.1)", "pytest-cov (>=2.9.0)", "scikit-image (>=0.14.5)"]
[[package]]
name = "scipy"
version = "1.8.1"
description = "SciPy: Scientific Library for Python"
category = "main"
optional = false
python-versions = ">=3.8,<3.11"
[package.dependencies]
numpy = ">=1.17.3,<1.25.0"
[[package]]
name = "scipy"
version = "1.9.3"
description = "Fundamental algorithms for scientific computing in Python"
category = "main"
optional = false
python-versions = ">=3.8"
[package.dependencies]
numpy = ">=1.18.5,<1.26.0"
[package.extras]
dev = ["flake8", "mypy", "pycodestyle", "typing_extensions"]
doc = ["matplotlib (>2)", "numpydoc", "pydata-sphinx-theme (==0.9.0)", "sphinx (!=4.1.0)", "sphinx-panels (>=0.5.2)", "sphinx-tabs"]
test = ["asv", "gmpy2", "mpmath", "pytest", "pytest-cov", "pytest-xdist", "scikit-umfpack", "threadpoolctl"]
[[package]]
name = "seaborn"
version = "0.12.1"
description = "Statistical data visualization"
category = "main"
optional = true
python-versions = ">=3.7"
[package.dependencies]
matplotlib = ">=3.1,<3.6.1 || >3.6.1"
numpy = ">=1.17"
pandas = ">=0.25"
[package.extras]
dev = ["flake8", "mypy", "pandas-stubs", "pre-commit", "pytest", "pytest-cov", "pytest-xdist"]
docs = ["ipykernel", "nbconvert", "numpydoc", "pydata_sphinx_theme (==0.10.0rc2)", "pyyaml", "sphinx-copybutton", "sphinx-design", "sphinx-issues"]
stats = ["scipy (>=1.3)", "statsmodels (>=0.10)"]
[[package]]
name = "Send2Trash"
version = "1.8.0"
description = "Send file to trash natively under Mac OS X, Windows and Linux."
category = "dev"
optional = false
python-versions = "*"
[package.extras]
nativelib = ["pyobjc-framework-Cocoa", "pywin32"]
objc = ["pyobjc-framework-Cocoa"]
win32 = ["pywin32"]
[[package]]
name = "setuptools"
version = "65.6.3"
description = "Easily download, build, install, upgrade, and uninstall Python packages"
category = "main"
optional = false
python-versions = ">=3.7"
[package.extras]
docs = ["furo", "jaraco.packaging (>=9)", "jaraco.tidelift (>=1.4)", "pygments-github-lexers (==0.0.5)", "rst.linker (>=1.9)", "sphinx (>=3.5)", "sphinx-favicon", "sphinx-hoverxref (<2)", "sphinx-inline-tabs", "sphinx-notfound-page (==0.8.3)", "sphinx-reredirects", "sphinxcontrib-towncrier"]
testing = ["build[virtualenv]", "filelock (>=3.4.0)", "flake8 (<5)", "flake8-2020", "ini2toml[lite] (>=0.9)", "jaraco.envs (>=2.2)", "jaraco.path (>=3.2.0)", "pip (>=19.1)", "pip-run (>=8.8)", "pytest (>=6)", "pytest-black (>=0.3.7)", "pytest-checkdocs (>=2.4)", "pytest-cov", "pytest-enabler (>=1.3)", "pytest-flake8", "pytest-mypy (>=0.9.1)", "pytest-perf", "pytest-timeout", "pytest-xdist", "tomli-w (>=1.0.0)", "virtualenv (>=13.0.0)", "wheel"]
testing-integration = ["build[virtualenv]", "filelock (>=3.4.0)", "jaraco.envs (>=2.2)", "jaraco.path (>=3.2.0)", "pytest", "pytest-enabler", "pytest-xdist", "tomli", "virtualenv (>=13.0.0)", "wheel"]
[[package]]
name = "setuptools-scm"
version = "7.0.5"
description = "the blessed package to manage your versions by scm tags"
category = "main"
optional = false
python-versions = ">=3.7"
[package.dependencies]
packaging = ">=20.0"
setuptools = "*"
tomli = ">=1.0.0"
typing-extensions = "*"
[package.extras]
test = ["pytest (>=6.2)", "virtualenv (>20)"]
toml = ["setuptools (>=42)"]
[[package]]
name = "shap"
version = "0.40.0"
description = "A unified approach to explain the output of any machine learning model."
category = "main"
optional = false
python-versions = "*"
[package.dependencies]
cloudpickle = "*"
numba = "*"
numpy = "*"
packaging = ">20.9"
pandas = "*"
scikit-learn = "*"
scipy = "*"
slicer = "0.0.7"
tqdm = ">4.25.0"
[package.extras]
all = ["catboost", "ipython", "lightgbm", "lime", "matplotlib", "nbsphinx", "numpydoc", "opencv-python", "pyod", "pyspark", "pytest", "pytest-cov", "pytest-mpl", "sentencepiece", "sphinx", "sphinx_rtd_theme", "torch", "transformers", "xgboost"]
docs = ["ipython", "matplotlib", "nbsphinx", "numpydoc", "sphinx", "sphinx_rtd_theme"]
others = ["lime"]
plots = ["ipython", "matplotlib"]
test = ["catboost", "lightgbm", "opencv-python", "pyod", "pyspark", "pytest", "pytest-cov", "pytest-mpl", "sentencepiece", "torch", "transformers", "xgboost"]
[[package]]
name = "six"
version = "1.16.0"
description = "Python 2 and 3 compatibility utilities"
category = "main"
optional = false
python-versions = ">=2.7, !=3.0.*, !=3.1.*, !=3.2.*"
[[package]]
name = "slicer"
version = "0.0.7"
description = "A small package for big slicing."
category = "main"
optional = false
python-versions = ">=3.6"
[[package]]
name = "smart-open"
version = "5.2.1"
description = "Utils for streaming large files (S3, HDFS, GCS, Azure Blob Storage, gzip, bz2...)"
category = "main"
optional = false
python-versions = ">=3.6,<4.0"
[package.extras]
all = ["azure-common", "azure-core", "azure-storage-blob", "boto3", "google-cloud-storage", "requests"]
azure = ["azure-common", "azure-core", "azure-storage-blob"]
gcs = ["google-cloud-storage"]
http = ["requests"]
s3 = ["boto3"]
test = ["azure-common", "azure-core", "azure-storage-blob", "boto3", "google-cloud-storage", "moto[server] (==1.3.14)", "parameterizedtestcase", "paramiko", "pathlib2", "pytest", "pytest-rerunfailures", "requests", "responses"]
webhdfs = ["requests"]
[[package]]
name = "sniffio"
version = "1.3.0"
description = "Sniff out which async library your code is running under"
category = "dev"
optional = false
python-versions = ">=3.7"
[[package]]
name = "snowballstemmer"
version = "2.2.0"
description = "This package provides 29 stemmers for 28 languages generated from Snowball algorithms."
category = "main"
optional = false
python-versions = "*"
[[package]]
name = "sortedcontainers"
version = "2.4.0"
description = "Sorted Containers -- Sorted List, Sorted Dict, Sorted Set"
category = "main"
optional = false
python-versions = "*"
[[package]]
name = "soupsieve"
version = "2.3.2.post1"
description = "A modern CSS selector implementation for Beautiful Soup."
category = "dev"
optional = false
python-versions = ">=3.6"
[[package]]
name = "spacy"
version = "3.4.3"
description = "Industrial-strength Natural Language Processing (NLP) in Python"
category = "main"
optional = false
python-versions = ">=3.6"
[package.dependencies]
catalogue = ">=2.0.6,<2.1.0"
cymem = ">=2.0.2,<2.1.0"
jinja2 = "*"
langcodes = ">=3.2.0,<4.0.0"
murmurhash = ">=0.28.0,<1.1.0"
numpy = ">=1.15.0"
packaging = ">=20.0"
pathy = ">=0.3.5"
preshed = ">=3.0.2,<3.1.0"
pydantic = ">=1.7.4,<1.8 || >1.8,<1.8.1 || >1.8.1,<1.11.0"
requests = ">=2.13.0,<3.0.0"
setuptools = "*"
spacy-legacy = ">=3.0.10,<3.1.0"
spacy-loggers = ">=1.0.0,<2.0.0"
srsly = ">=2.4.3,<3.0.0"
thinc = ">=8.1.0,<8.2.0"
tqdm = ">=4.38.0,<5.0.0"
typer = ">=0.3.0,<0.8.0"
wasabi = ">=0.9.1,<1.1.0"
[package.extras]
apple = ["thinc-apple-ops (>=0.1.0.dev0,<1.0.0)"]
cuda = ["cupy (>=5.0.0b4,<12.0.0)"]
cuda-autodetect = ["cupy-wheel (>=11.0.0,<12.0.0)"]
cuda100 = ["cupy-cuda100 (>=5.0.0b4,<12.0.0)"]
cuda101 = ["cupy-cuda101 (>=5.0.0b4,<12.0.0)"]
cuda102 = ["cupy-cuda102 (>=5.0.0b4,<12.0.0)"]
cuda110 = ["cupy-cuda110 (>=5.0.0b4,<12.0.0)"]
cuda111 = ["cupy-cuda111 (>=5.0.0b4,<12.0.0)"]
cuda112 = ["cupy-cuda112 (>=5.0.0b4,<12.0.0)"]
cuda113 = ["cupy-cuda113 (>=5.0.0b4,<12.0.0)"]
cuda114 = ["cupy-cuda114 (>=5.0.0b4,<12.0.0)"]
cuda115 = ["cupy-cuda115 (>=5.0.0b4,<12.0.0)"]
cuda116 = ["cupy-cuda116 (>=5.0.0b4,<12.0.0)"]
cuda117 = ["cupy-cuda117 (>=5.0.0b4,<12.0.0)"]
cuda11x = ["cupy-cuda11x (>=11.0.0,<12.0.0)"]
cuda80 = ["cupy-cuda80 (>=5.0.0b4,<12.0.0)"]
cuda90 = ["cupy-cuda90 (>=5.0.0b4,<12.0.0)"]
cuda91 = ["cupy-cuda91 (>=5.0.0b4,<12.0.0)"]
cuda92 = ["cupy-cuda92 (>=5.0.0b4,<12.0.0)"]
ja = ["sudachidict-core (>=20211220)", "sudachipy (>=0.5.2,!=0.6.1)"]
ko = ["natto-py (>=0.9.0)"]
lookups = ["spacy-lookups-data (>=1.0.3,<1.1.0)"]
ray = ["spacy-ray (>=0.1.0,<1.0.0)"]
th = ["pythainlp (>=2.0)"]
transformers = ["spacy-transformers (>=1.1.2,<1.2.0)"]
[[package]]
name = "spacy-legacy"
version = "3.0.10"
description = "Legacy registered functions for spaCy backwards compatibility"
category = "main"
optional = false
python-versions = ">=3.6"
[[package]]
name = "spacy-loggers"
version = "1.0.3"
description = "Logging utilities for SpaCy"
category = "main"
optional = false
python-versions = ">=3.6"
[package.dependencies]
wasabi = ">=0.8.1,<1.1.0"
[[package]]
name = "sparse"
version = "0.13.0"
description = "Sparse n-dimensional arrays"
category = "main"
optional = false
python-versions = ">=3.6, <4"
[package.dependencies]
numba = ">=0.49"
numpy = ">=1.17"
scipy = ">=0.19"
[package.extras]
all = ["dask[array]", "pytest (>=3.5)", "pytest-black", "pytest-cov", "sphinx", "sphinx-rtd-theme", "tox"]
docs = ["sphinx", "sphinx-rtd-theme"]
tests = ["dask[array]", "pytest (>=3.5)", "pytest-black", "pytest-cov"]
tox = ["dask[array]", "pytest (>=3.5)", "pytest-black", "pytest-cov", "tox"]
[[package]]
name = "Sphinx"
version = "5.3.0"
description = "Python documentation generator"
category = "main"
optional = false
python-versions = ">=3.6"
[package.dependencies]
alabaster = ">=0.7,<0.8"
babel = ">=2.9"
colorama = {version = ">=0.4.5", markers = "sys_platform == \"win32\""}
docutils = ">=0.14,<0.20"
imagesize = ">=1.3"
importlib-metadata = {version = ">=4.8", markers = "python_version < \"3.10\""}
Jinja2 = ">=3.0"
packaging = ">=21.0"
Pygments = ">=2.12"
requests = ">=2.5.0"
snowballstemmer = ">=2.0"
sphinxcontrib-applehelp = "*"
sphinxcontrib-devhelp = "*"
sphinxcontrib-htmlhelp = ">=2.0.0"
sphinxcontrib-jsmath = "*"
sphinxcontrib-qthelp = "*"
sphinxcontrib-serializinghtml = ">=1.1.5"
[package.extras]
docs = ["sphinxcontrib-websupport"]
lint = ["docutils-stubs", "flake8 (>=3.5.0)", "flake8-bugbear", "flake8-comprehensions", "flake8-simplify", "isort", "mypy (>=0.981)", "sphinx-lint", "types-requests", "types-typed-ast"]
test = ["cython", "html5lib", "pytest (>=4.6)", "typed_ast"]
[[package]]
name = "sphinx-copybutton"
version = "0.5.0"
description = "Add a copy button to each of your code cells."
category = "dev"
optional = false
python-versions = ">=3.6"
[package.dependencies]
sphinx = ">=1.8"
[package.extras]
code_style = ["pre-commit (==2.12.1)"]
rtd = ["ipython", "myst-nb", "sphinx", "sphinx-book-theme"]
[[package]]
name = "sphinx_design"
version = "0.3.0"
description = "A sphinx extension for designing beautiful, view size responsive web components."
category = "main"
optional = false
python-versions = ">=3.7"
[package.dependencies]
sphinx = ">=4,<6"
[package.extras]
code_style = ["pre-commit (>=2.12,<3.0)"]
rtd = ["myst-parser (>=0.18.0,<0.19.0)"]
testing = ["myst-parser (>=0.18.0,<0.19.0)", "pytest (>=7.1,<8.0)", "pytest-cov", "pytest-regressions"]
theme_furo = ["furo (>=2022.06.04,<2022.07)"]
theme_pydata = ["pydata-sphinx-theme (>=0.9.0,<0.10.0)"]
theme_rtd = ["sphinx-rtd-theme (>=1.0,<2.0)"]
theme_sbt = ["sphinx-book-theme (>=0.3.0,<0.4.0)"]
[[package]]
name = "sphinx-rtd-theme"
version = "1.1.1"
description = "Read the Docs theme for Sphinx"
category = "dev"
optional = false
python-versions = "!=3.0.*,!=3.1.*,!=3.2.*,!=3.3.*,>=2.7"
[package.dependencies]
docutils = "<0.18"
sphinx = ">=1.6,<6"
[package.extras]
dev = ["bump2version", "sphinxcontrib-httpdomain", "transifex-client", "wheel"]
[[package]]
name = "sphinxcontrib-applehelp"
version = "1.0.2"
description = "sphinxcontrib-applehelp is a sphinx extension which outputs Apple help books"
category = "main"
optional = false
python-versions = ">=3.5"
[package.extras]
lint = ["docutils-stubs", "flake8", "mypy"]
test = ["pytest"]
[[package]]
name = "sphinxcontrib-devhelp"
version = "1.0.2"
description = "sphinxcontrib-devhelp is a sphinx extension which outputs Devhelp document."
category = "main"
optional = false
python-versions = ">=3.5"
[package.extras]
lint = ["docutils-stubs", "flake8", "mypy"]
test = ["pytest"]
[[package]]
name = "sphinxcontrib-googleanalytics"
version = "0.2"
description = ""
category = "dev"
optional = false
python-versions = "*"
develop = false
[package.dependencies]
Sphinx = ">=0.6"
[package.source]
type = "git"
url = "https://github.com/sphinx-contrib/googleanalytics.git"
reference = "master"
resolved_reference = "42b3df99fdc01a136b9c575f3f251ae80cdfbe1d"
[[package]]
name = "sphinxcontrib-htmlhelp"
version = "2.0.0"
description = "sphinxcontrib-htmlhelp is a sphinx extension which renders HTML help files"
category = "main"
optional = false
python-versions = ">=3.6"
[package.extras]
lint = ["docutils-stubs", "flake8", "mypy"]
test = ["html5lib", "pytest"]
[[package]]
name = "sphinxcontrib-jsmath"
version = "1.0.1"
description = "A sphinx extension which renders display math in HTML via JavaScript"
category = "main"
optional = false
python-versions = ">=3.5"
[package.extras]
test = ["flake8", "mypy", "pytest"]
[[package]]
name = "sphinxcontrib-qthelp"
version = "1.0.3"
description = "sphinxcontrib-qthelp is a sphinx extension which outputs QtHelp document."
category = "main"
optional = false
python-versions = ">=3.5"
[package.extras]
lint = ["docutils-stubs", "flake8", "mypy"]
test = ["pytest"]
[[package]]
name = "sphinxcontrib-serializinghtml"
version = "1.1.5"
description = "sphinxcontrib-serializinghtml is a sphinx extension which outputs \"serialized\" HTML files (json and pickle)."
category = "main"
optional = false
python-versions = ">=3.5"
[package.extras]
lint = ["docutils-stubs", "flake8", "mypy"]
test = ["pytest"]
[[package]]
name = "srsly"
version = "2.4.5"
description = "Modern high-performance serialization utilities for Python"
category = "main"
optional = false
python-versions = ">=3.6"
[package.dependencies]
catalogue = ">=2.0.3,<2.1.0"
[[package]]
name = "stack-data"
version = "0.6.2"
description = "Extract data from python stack frames and tracebacks for informative displays"
category = "dev"
optional = false
python-versions = "*"
[package.dependencies]
asttokens = ">=2.1.0"
executing = ">=1.2.0"
pure-eval = "*"
[package.extras]
tests = ["cython", "littleutils", "pygments", "pytest", "typeguard"]
[[package]]
name = "statsmodels"
version = "0.13.5"
description = "Statistical computations and models for Python"
category = "main"
optional = false
python-versions = ">=3.7"
[package.dependencies]
numpy = {version = ">=1.17", markers = "python_version != \"3.10\" or platform_system != \"Windows\" or platform_python_implementation == \"PyPy\""}
packaging = ">=21.3"
pandas = ">=0.25"
patsy = ">=0.5.2"
scipy = [
{version = ">=1.3", markers = "(python_version > \"3.9\" or platform_system != \"Windows\" or platform_machine != \"x86\") and python_version < \"3.12\""},
{version = ">=1.3,<1.9", markers = "python_version == \"3.8\" and platform_system == \"Windows\" and platform_machine == \"x86\" or python_version == \"3.9\" and platform_system == \"Windows\" and platform_machine == \"x86\""},
]
[package.extras]
build = ["cython (>=0.29.32)"]
develop = ["Jinja2", "colorama", "cython (>=0.29.32)", "cython (>=0.29.32,<3.0.0)", "flake8", "isort", "joblib", "matplotlib (>=3)", "oldest-supported-numpy (>=2022.4.18)", "pytest (>=7.0.1,<7.1.0)", "pytest-randomly", "pytest-xdist", "pywinpty", "setuptools-scm[toml] (>=7.0.0,<7.1.0)"]
docs = ["ipykernel", "jupyter-client", "matplotlib", "nbconvert", "nbformat", "numpydoc", "pandas-datareader", "sphinx"]
[[package]]
name = "sympy"
version = "1.11.1"
description = "Computer algebra system (CAS) in Python"
category = "main"
optional = false
python-versions = ">=3.8"
[package.dependencies]
mpmath = ">=0.19"
[[package]]
name = "tblib"
version = "1.7.0"
description = "Traceback serialization library."
category = "main"
optional = false
python-versions = ">=2.7, !=3.0.*, !=3.1.*, !=3.2.*, !=3.3.*, !=3.4.*"
[[package]]
name = "tenacity"
version = "8.1.0"
description = "Retry code until it succeeds"
category = "main"
optional = false
python-versions = ">=3.6"
[package.extras]
doc = ["reno", "sphinx", "tornado (>=4.5)"]
[[package]]
name = "tensorboard"
version = "2.11.0"
description = "TensorBoard lets you watch Tensors Flow"
category = "dev"
optional = false
python-versions = ">=3.7"
[package.dependencies]
absl-py = ">=0.4"
google-auth = ">=1.6.3,<3"
google-auth-oauthlib = ">=0.4.1,<0.5"
grpcio = ">=1.24.3"
markdown = ">=2.6.8"
numpy = ">=1.12.0"
protobuf = ">=3.9.2,<4"
requests = ">=2.21.0,<3"
setuptools = ">=41.0.0"
tensorboard-data-server = ">=0.6.0,<0.7.0"
tensorboard-plugin-wit = ">=1.6.0"
werkzeug = ">=1.0.1"
wheel = ">=0.26"
[[package]]
name = "tensorboard-data-server"
version = "0.6.1"
description = "Fast data loading for TensorBoard"
category = "dev"
optional = false
python-versions = ">=3.6"
[[package]]
name = "tensorboard-plugin-wit"
version = "1.8.1"
description = "What-If Tool TensorBoard plugin."
category = "dev"
optional = false
python-versions = "*"
[[package]]
name = "tensorflow"
version = "2.11.0"
description = "TensorFlow is an open source machine learning framework for everyone."
category = "dev"
optional = false
python-versions = ">=3.7"
[package.dependencies]
absl-py = ">=1.0.0"
astunparse = ">=1.6.0"
flatbuffers = ">=2.0"
gast = ">=0.2.1,<=0.4.0"
google-pasta = ">=0.1.1"
grpcio = ">=1.24.3,<2.0"
h5py = ">=2.9.0"
keras = ">=2.11.0,<2.12"
libclang = ">=13.0.0"
numpy = ">=1.20"
opt-einsum = ">=2.3.2"
packaging = "*"
protobuf = ">=3.9.2,<3.20"
setuptools = "*"
six = ">=1.12.0"
tensorboard = ">=2.11,<2.12"
tensorflow-estimator = ">=2.11.0,<2.12"
tensorflow-io-gcs-filesystem = {version = ">=0.23.1", markers = "platform_machine != \"arm64\" or platform_system != \"Darwin\""}
termcolor = ">=1.1.0"
typing-extensions = ">=3.6.6"
wrapt = ">=1.11.0"
[[package]]
name = "tensorflow-estimator"
version = "2.11.0"
description = "TensorFlow Estimator."
category = "dev"
optional = false
python-versions = ">=3.7"
[[package]]
name = "tensorflow-io-gcs-filesystem"
version = "0.28.0"
description = "TensorFlow IO"
category = "dev"
optional = false
python-versions = ">=3.7, <3.11"
[package.extras]
tensorflow = ["tensorflow (>=2.11.0,<2.12.0)"]
tensorflow-aarch64 = ["tensorflow-aarch64 (>=2.11.0,<2.12.0)"]
tensorflow-cpu = ["tensorflow-cpu (>=2.11.0,<2.12.0)"]
tensorflow-gpu = ["tensorflow-gpu (>=2.11.0,<2.12.0)"]
tensorflow-rocm = ["tensorflow-rocm (>=2.11.0,<2.12.0)"]
[[package]]
name = "termcolor"
version = "2.1.1"
description = "ANSI color formatting for output in terminal"
category = "dev"
optional = false
python-versions = ">=3.7"
[package.extras]
tests = ["pytest", "pytest-cov"]
[[package]]
name = "terminado"
version = "0.17.0"
description = "Tornado websocket backend for the Xterm.js Javascript terminal emulator library."
category = "dev"
optional = false
python-versions = ">=3.7"
[package.dependencies]
ptyprocess = {version = "*", markers = "os_name != \"nt\""}
pywinpty = {version = ">=1.1.0", markers = "os_name == \"nt\""}
tornado = ">=6.1.0"
[package.extras]
docs = ["pydata-sphinx-theme", "sphinx"]
test = ["pre-commit", "pytest (>=7.0)", "pytest-timeout"]
[[package]]
name = "thinc"
version = "8.1.5"
description = "A refreshing functional take on deep learning, compatible with your favorite libraries"
category = "main"
optional = false
python-versions = ">=3.6"
[package.dependencies]
blis = ">=0.7.8,<0.8.0"
catalogue = ">=2.0.4,<2.1.0"
confection = ">=0.0.1,<1.0.0"
cymem = ">=2.0.2,<2.1.0"
murmurhash = ">=1.0.2,<1.1.0"
numpy = ">=1.15.0"
preshed = ">=3.0.2,<3.1.0"
pydantic = ">=1.7.4,<1.8 || >1.8,<1.8.1 || >1.8.1,<1.11.0"
setuptools = "*"
srsly = ">=2.4.0,<3.0.0"
wasabi = ">=0.8.1,<1.1.0"
[package.extras]
cuda = ["cupy (>=5.0.0b4)"]
cuda-autodetect = ["cupy-wheel (>=11.0.0)"]
cuda100 = ["cupy-cuda100 (>=5.0.0b4)"]
cuda101 = ["cupy-cuda101 (>=5.0.0b4)"]
cuda102 = ["cupy-cuda102 (>=5.0.0b4)"]
cuda110 = ["cupy-cuda110 (>=5.0.0b4)"]
cuda111 = ["cupy-cuda111 (>=5.0.0b4)"]
cuda112 = ["cupy-cuda112 (>=5.0.0b4)"]
cuda113 = ["cupy-cuda113 (>=5.0.0b4)"]
cuda114 = ["cupy-cuda114 (>=5.0.0b4)"]
cuda115 = ["cupy-cuda115 (>=5.0.0b4)"]
cuda116 = ["cupy-cuda116 (>=5.0.0b4)"]
cuda117 = ["cupy-cuda117 (>=5.0.0b4)"]
cuda11x = ["cupy-cuda11x (>=11.0.0)"]
cuda80 = ["cupy-cuda80 (>=5.0.0b4)"]
cuda90 = ["cupy-cuda90 (>=5.0.0b4)"]
cuda91 = ["cupy-cuda91 (>=5.0.0b4)"]
cuda92 = ["cupy-cuda92 (>=5.0.0b4)"]
datasets = ["ml-datasets (>=0.2.0,<0.3.0)"]
mxnet = ["mxnet (>=1.5.1,<1.6.0)"]
tensorflow = ["tensorflow (>=2.0.0,<2.6.0)"]
torch = ["torch (>=1.6.0)"]
[[package]]
name = "threadpoolctl"
version = "3.1.0"
description = "threadpoolctl"
category = "main"
optional = false
python-versions = ">=3.6"
[[package]]
name = "tinycss2"
version = "1.2.1"
description = "A tiny CSS parser"
category = "dev"
optional = false
python-versions = ">=3.7"
[package.dependencies]
webencodings = ">=0.4"
[package.extras]
doc = ["sphinx", "sphinx_rtd_theme"]
test = ["flake8", "isort", "pytest"]
[[package]]
name = "tokenize-rt"
version = "5.0.0"
description = "A wrapper around the stdlib `tokenize` which roundtrips."
category = "dev"
optional = false
python-versions = ">=3.7"
[[package]]
name = "tomli"
version = "2.0.1"
description = "A lil' TOML parser"
category = "main"
optional = false
python-versions = ">=3.7"
[[package]]
name = "toolz"
version = "0.12.0"
description = "List processing tools and functional utilities"
category = "main"
optional = false
python-versions = ">=3.5"
[[package]]
name = "torch"
version = "1.12.1"
description = "Tensors and Dynamic neural networks in Python with strong GPU acceleration"
category = "main"
optional = false
python-versions = ">=3.7.0"
[package.dependencies]
typing-extensions = "*"
[[package]]
name = "torchvision"
version = "0.13.1"
description = "image and video datasets and models for torch deep learning"
category = "main"
optional = false
python-versions = ">=3.7"
[package.dependencies]
numpy = "*"
pillow = ">=5.3.0,<8.3.0 || >=8.4.0"
requests = "*"
torch = "1.12.1"
typing-extensions = "*"
[package.extras]
scipy = ["scipy"]
[[package]]
name = "tornado"
version = "6.2"
description = "Tornado is a Python web framework and asynchronous networking library, originally developed at FriendFeed."
category = "main"
optional = false
python-versions = ">= 3.7"
[[package]]
name = "tqdm"
version = "4.64.1"
description = "Fast, Extensible Progress Meter"
category = "main"
optional = false
python-versions = "!=3.0.*,!=3.1.*,!=3.2.*,!=3.3.*,>=2.7"
[package.dependencies]
colorama = {version = "*", markers = "platform_system == \"Windows\""}
[package.extras]
dev = ["py-make (>=0.1.0)", "twine", "wheel"]
notebook = ["ipywidgets (>=6)"]
slack = ["slack-sdk"]
telegram = ["requests"]
[[package]]
name = "traitlets"
version = "5.5.0"
description = ""
category = "dev"
optional = false
python-versions = ">=3.7"
[package.extras]
docs = ["myst-parser", "pydata-sphinx-theme", "sphinx"]
test = ["pre-commit", "pytest"]
[[package]]
name = "typer"
version = "0.7.0"
description = "Typer, build great CLIs. Easy to code. Based on Python type hints."
category = "main"
optional = false
python-versions = ">=3.6"
[package.dependencies]
click = ">=7.1.1,<9.0.0"
[package.extras]
all = ["colorama (>=0.4.3,<0.5.0)", "rich (>=10.11.0,<13.0.0)", "shellingham (>=1.3.0,<2.0.0)"]
dev = ["autoflake (>=1.3.1,<2.0.0)", "flake8 (>=3.8.3,<4.0.0)", "pre-commit (>=2.17.0,<3.0.0)"]
doc = ["cairosvg (>=2.5.2,<3.0.0)", "mdx-include (>=1.4.1,<2.0.0)", "mkdocs (>=1.1.2,<2.0.0)", "mkdocs-material (>=8.1.4,<9.0.0)", "pillow (>=9.3.0,<10.0.0)"]
test = ["black (>=22.3.0,<23.0.0)", "coverage (>=6.2,<7.0)", "isort (>=5.0.6,<6.0.0)", "mypy (==0.910)", "pytest (>=4.4.0,<8.0.0)", "pytest-cov (>=2.10.0,<5.0.0)", "pytest-sugar (>=0.9.4,<0.10.0)", "pytest-xdist (>=1.32.0,<4.0.0)", "rich (>=10.11.0,<13.0.0)", "shellingham (>=1.3.0,<2.0.0)"]
[[package]]
name = "typing-extensions"
version = "4.4.0"
description = "Backported and Experimental Type Hints for Python 3.7+"
category = "main"
optional = false
python-versions = ">=3.7"
[[package]]
name = "tzdata"
version = "2022.6"
description = "Provider of IANA time zone data"
category = "dev"
optional = false
python-versions = ">=2"
[[package]]
name = "tzlocal"
version = "4.2"
description = "tzinfo object for the local timezone"
category = "dev"
optional = false
python-versions = ">=3.6"
[package.dependencies]
"backports.zoneinfo" = {version = "*", markers = "python_version < \"3.9\""}
pytz-deprecation-shim = "*"
tzdata = {version = "*", markers = "platform_system == \"Windows\""}
[package.extras]
devenv = ["black", "pyroma", "pytest-cov", "zest.releaser"]
test = ["pytest (>=4.3)", "pytest-mock (>=3.3)"]
[[package]]
name = "urllib3"
version = "1.26.13"
description = "HTTP library with thread-safe connection pooling, file post, and more."
category = "main"
optional = false
python-versions = ">=2.7, !=3.0.*, !=3.1.*, !=3.2.*, !=3.3.*, !=3.4.*, !=3.5.*"
[package.extras]
brotli = ["brotli (>=1.0.9)", "brotlicffi (>=0.8.0)", "brotlipy (>=0.6.0)"]
secure = ["certifi", "cryptography (>=1.3.4)", "idna (>=2.0.0)", "ipaddress", "pyOpenSSL (>=0.14)", "urllib3-secure-extra"]
socks = ["PySocks (>=1.5.6,!=1.5.7,<2.0)"]
[[package]]
name = "wasabi"
version = "0.10.1"
description = "A lightweight console printing and formatting toolkit"
category = "main"
optional = false
python-versions = "*"
[[package]]
name = "wcwidth"
version = "0.2.5"
description = "Measures the displayed width of unicode strings in a terminal"
category = "dev"
optional = false
python-versions = "*"
[[package]]
name = "webencodings"
version = "0.5.1"
description = "Character encoding aliases for legacy web content"
category = "dev"
optional = false
python-versions = "*"
[[package]]
name = "websocket-client"
version = "1.4.2"
description = "WebSocket client for Python with low level API options"
category = "dev"
optional = false
python-versions = ">=3.7"
[package.extras]
docs = ["Sphinx (>=3.4)", "sphinx-rtd-theme (>=0.5)"]
optional = ["python-socks", "wsaccel"]
test = ["websockets"]
[[package]]
name = "Werkzeug"
version = "2.2.2"
description = "The comprehensive WSGI web application library."
category = "dev"
optional = false
python-versions = ">=3.7"
[package.dependencies]
MarkupSafe = ">=2.1.1"
[package.extras]
watchdog = ["watchdog"]
[[package]]
name = "wheel"
version = "0.38.4"
description = "A built-package format for Python"
category = "main"
optional = false
python-versions = ">=3.7"
[package.extras]
test = ["pytest (>=3.0.0)"]
[[package]]
name = "widgetsnbextension"
version = "4.0.3"
description = "Jupyter interactive widgets for Jupyter Notebook"
category = "dev"
optional = false
python-versions = ">=3.7"
[[package]]
name = "wrapt"
version = "1.14.1"
description = "Module for decorators, wrappers and monkey patching."
category = "dev"
optional = false
python-versions = "!=3.0.*,!=3.1.*,!=3.2.*,!=3.3.*,!=3.4.*,>=2.7"
[[package]]
name = "xgboost"
version = "1.7.1"
description = "XGBoost Python Package"
category = "main"
optional = false
python-versions = ">=3.8"
[package.dependencies]
numpy = "*"
scipy = "*"
[package.extras]
dask = ["dask", "distributed", "pandas"]
datatable = ["datatable"]
pandas = ["pandas"]
plotting = ["graphviz", "matplotlib"]
pyspark = ["cloudpickle", "pyspark", "scikit-learn"]
scikit-learn = ["scikit-learn"]
[[package]]
name = "zict"
version = "2.2.0"
description = "Mutable mapping tools"
category = "main"
optional = false
python-versions = ">=3.7"
[package.dependencies]
heapdict = "*"
[[package]]
name = "zipp"
version = "3.11.0"
description = "Backport of pathlib-compatible object wrapper for zip files"
category = "main"
optional = false
python-versions = ">=3.7"
[package.extras]
docs = ["furo", "jaraco.packaging (>=9)", "jaraco.tidelift (>=1.4)", "rst.linker (>=1.9)", "sphinx (>=3.5)"]
testing = ["flake8 (<5)", "func-timeout", "jaraco.functools", "jaraco.itertools", "more-itertools", "pytest (>=6)", "pytest-black (>=0.3.7)", "pytest-checkdocs (>=2.4)", "pytest-cov", "pytest-enabler (>=1.3)", "pytest-flake8", "pytest-mypy (>=0.9.1)"]
[extras]
causalml = ["causalml", "llvmlite", "cython"]
econml = ["econml"]
plotting = ["matplotlib"]
pydot = ["pydot"]
pygraphviz = ["pygraphviz"]
[metadata]
lock-version = "1.1"
python-versions = ">=3.8,<3.10"
content-hash = "12d40b6d9616d209cd632e2315aafc72f78d3e35efdf6e52ca410588465787cc"
[metadata.files]
absl-py = [
{file = "absl-py-1.3.0.tar.gz", hash = "sha256:463c38a08d2e4cef6c498b76ba5bd4858e4c6ef51da1a5a1f27139a022e20248"},
{file = "absl_py-1.3.0-py3-none-any.whl", hash = "sha256:34995df9bd7a09b3b8749e230408f5a2a2dd7a68a0d33c12a3d0cb15a041a507"},
]
alabaster = [
{file = "alabaster-0.7.12-py2.py3-none-any.whl", hash = "sha256:446438bdcca0e05bd45ea2de1668c1d9b032e1a9154c2c259092d77031ddd359"},
{file = "alabaster-0.7.12.tar.gz", hash = "sha256:a661d72d58e6ea8a57f7a86e37d86716863ee5e92788398526d58b26a4e4dc02"},
]
anyio = [
{file = "anyio-3.6.2-py3-none-any.whl", hash = "sha256:fbbe32bd270d2a2ef3ed1c5d45041250284e31fc0a4df4a5a6071842051a51e3"},
{file = "anyio-3.6.2.tar.gz", hash = "sha256:25ea0d673ae30af41a0c442f81cf3b38c7e79fdc7b60335a4c14e05eb0947421"},
]
appnope = [
{file = "appnope-0.1.3-py2.py3-none-any.whl", hash = "sha256:265a455292d0bd8a72453494fa24df5a11eb18373a60c7c0430889f22548605e"},
{file = "appnope-0.1.3.tar.gz", hash = "sha256:02bd91c4de869fbb1e1c50aafc4098827a7a54ab2f39d9dcba6c9547ed920e24"},
]
argon2-cffi = [
{file = "argon2-cffi-21.3.0.tar.gz", hash = "sha256:d384164d944190a7dd7ef22c6aa3ff197da12962bd04b17f64d4e93d934dba5b"},
{file = "argon2_cffi-21.3.0-py3-none-any.whl", hash = "sha256:8c976986f2c5c0e5000919e6de187906cfd81fb1c72bf9d88c01177e77da7f80"},
]
argon2-cffi-bindings = [
{file = "argon2-cffi-bindings-21.2.0.tar.gz", hash = "sha256:bb89ceffa6c791807d1305ceb77dbfacc5aa499891d2c55661c6459651fc39e3"},
{file = "argon2_cffi_bindings-21.2.0-cp36-abi3-macosx_10_9_x86_64.whl", hash = "sha256:ccb949252cb2ab3a08c02024acb77cfb179492d5701c7cbdbfd776124d4d2367"},
{file = "argon2_cffi_bindings-21.2.0-cp36-abi3-manylinux_2_17_aarch64.manylinux2014_aarch64.whl", hash = "sha256:9524464572e12979364b7d600abf96181d3541da11e23ddf565a32e70bd4dc0d"},
{file = "argon2_cffi_bindings-21.2.0-cp36-abi3-manylinux_2_17_x86_64.manylinux2014_x86_64.whl", hash = "sha256:b746dba803a79238e925d9046a63aa26bf86ab2a2fe74ce6b009a1c3f5c8f2ae"},
{file = "argon2_cffi_bindings-21.2.0-cp36-abi3-manylinux_2_5_i686.manylinux1_i686.manylinux_2_17_i686.manylinux2014_i686.whl", hash = "sha256:58ed19212051f49a523abb1dbe954337dc82d947fb6e5a0da60f7c8471a8476c"},
{file = "argon2_cffi_bindings-21.2.0-cp36-abi3-musllinux_1_1_aarch64.whl", hash = "sha256:bd46088725ef7f58b5a1ef7ca06647ebaf0eb4baff7d1d0d177c6cc8744abd86"},
{file = "argon2_cffi_bindings-21.2.0-cp36-abi3-musllinux_1_1_i686.whl", hash = "sha256:8cd69c07dd875537a824deec19f978e0f2078fdda07fd5c42ac29668dda5f40f"},
{file = "argon2_cffi_bindings-21.2.0-cp36-abi3-musllinux_1_1_x86_64.whl", hash = "sha256:f1152ac548bd5b8bcecfb0b0371f082037e47128653df2e8ba6e914d384f3c3e"},
{file = "argon2_cffi_bindings-21.2.0-cp36-abi3-win32.whl", hash = "sha256:603ca0aba86b1349b147cab91ae970c63118a0f30444d4bc80355937c950c082"},
{file = "argon2_cffi_bindings-21.2.0-cp36-abi3-win_amd64.whl", hash = "sha256:b2ef1c30440dbbcba7a5dc3e319408b59676e2e039e2ae11a8775ecf482b192f"},
{file = "argon2_cffi_bindings-21.2.0-cp38-abi3-macosx_10_9_universal2.whl", hash = "sha256:e415e3f62c8d124ee16018e491a009937f8cf7ebf5eb430ffc5de21b900dad93"},
{file = "argon2_cffi_bindings-21.2.0-pp37-pypy37_pp73-macosx_10_9_x86_64.whl", hash = "sha256:3e385d1c39c520c08b53d63300c3ecc28622f076f4c2b0e6d7e796e9f6502194"},
{file = "argon2_cffi_bindings-21.2.0-pp37-pypy37_pp73-manylinux_2_17_aarch64.manylinux2014_aarch64.whl", hash = "sha256:2c3e3cc67fdb7d82c4718f19b4e7a87123caf8a93fde7e23cf66ac0337d3cb3f"},
{file = "argon2_cffi_bindings-21.2.0-pp37-pypy37_pp73-manylinux_2_17_x86_64.manylinux2014_x86_64.whl", hash = "sha256:6a22ad9800121b71099d0fb0a65323810a15f2e292f2ba450810a7316e128ee5"},
{file = "argon2_cffi_bindings-21.2.0-pp37-pypy37_pp73-manylinux_2_5_i686.manylinux1_i686.manylinux_2_17_i686.manylinux2014_i686.whl", hash = "sha256:f9f8b450ed0547e3d473fdc8612083fd08dd2120d6ac8f73828df9b7d45bb351"},
{file = "argon2_cffi_bindings-21.2.0-pp37-pypy37_pp73-win_amd64.whl", hash = "sha256:93f9bf70084f97245ba10ee36575f0c3f1e7d7724d67d8e5b08e61787c320ed7"},
{file = "argon2_cffi_bindings-21.2.0-pp38-pypy38_pp73-macosx_10_9_x86_64.whl", hash = "sha256:3b9ef65804859d335dc6b31582cad2c5166f0c3e7975f324d9ffaa34ee7e6583"},
{file = "argon2_cffi_bindings-21.2.0-pp38-pypy38_pp73-manylinux_2_17_aarch64.manylinux2014_aarch64.whl", hash = "sha256:d4966ef5848d820776f5f562a7d45fdd70c2f330c961d0d745b784034bd9f48d"},
{file = "argon2_cffi_bindings-21.2.0-pp38-pypy38_pp73-manylinux_2_17_x86_64.manylinux2014_x86_64.whl", hash = "sha256:20ef543a89dee4db46a1a6e206cd015360e5a75822f76df533845c3cbaf72670"},
{file = "argon2_cffi_bindings-21.2.0-pp38-pypy38_pp73-manylinux_2_5_i686.manylinux1_i686.manylinux_2_17_i686.manylinux2014_i686.whl", hash = "sha256:ed2937d286e2ad0cc79a7087d3c272832865f779430e0cc2b4f3718d3159b0cb"},
{file = "argon2_cffi_bindings-21.2.0-pp38-pypy38_pp73-win_amd64.whl", hash = "sha256:5e00316dabdaea0b2dd82d141cc66889ced0cdcbfa599e8b471cf22c620c329a"},
]
asttokens = [
{file = "asttokens-2.1.0-py2.py3-none-any.whl", hash = "sha256:1b28ed85e254b724439afc783d4bee767f780b936c3fe8b3275332f42cf5f561"},
{file = "asttokens-2.1.0.tar.gz", hash = "sha256:4aa76401a151c8cc572d906aad7aea2a841780834a19d780f4321c0fe1b54635"},
]
astunparse = [
{file = "astunparse-1.6.3-py2.py3-none-any.whl", hash = "sha256:c2652417f2c8b5bb325c885ae329bdf3f86424075c4fd1a128674bc6fba4b8e8"},
{file = "astunparse-1.6.3.tar.gz", hash = "sha256:5ad93a8456f0d084c3456d059fd9a92cce667963232cbf763eac3bc5b7940872"},
]
attrs = [
{file = "attrs-22.1.0-py2.py3-none-any.whl", hash = "sha256:86efa402f67bf2df34f51a335487cf46b1ec130d02b8d39fd248abfd30da551c"},
{file = "attrs-22.1.0.tar.gz", hash = "sha256:29adc2665447e5191d0e7c568fde78b21f9672d344281d0c6e1ab085429b22b6"},
]
"autogluon.common" = [
{file = "autogluon.common-0.6.0-py3-none-any.whl", hash = "sha256:8e1a46efaab051069589b875e417df30b38150a908e9aa2ff3ab479747a487ce"},
{file = "autogluon.common-0.6.0.tar.gz", hash = "sha256:d967844c728ad8e9a5c0f9e0deddbe6c4beb0e47cdf829a44a4834b5917798e0"},
]
"autogluon.core" = [
{file = "autogluon.core-0.6.0-py3-none-any.whl", hash = "sha256:b7efd2dfebfc9a3be0e39d1bf1bd352f45b23cccd503cf32afb9f5f23d58126b"},
{file = "autogluon.core-0.6.0.tar.gz", hash = "sha256:a6b6d57ec38d4193afab6b121cde63a6085446a51f84b9fa358221b7fed71ff4"},
]
"autogluon.features" = [
{file = "autogluon.features-0.6.0-py3-none-any.whl", hash = "sha256:ecff1a69cc768bc55777b3f7453ee89859352162dd43adda4451faadc9e583bf"},
{file = "autogluon.features-0.6.0.tar.gz", hash = "sha256:dced399ac2652c7c872da5208d0a0383778aeca3706a1b987b9781c9420d80c7"},
]
"autogluon.tabular" = [
{file = "autogluon.tabular-0.6.0-py3-none-any.whl", hash = "sha256:16404037c475e8746d61a7b1c977d5fd14afd853ebc9777fb0eafc851d37f8ad"},
{file = "autogluon.tabular-0.6.0.tar.gz", hash = "sha256:91892b7c9749942526eabfdd1bbb6d9daae2c24f785570a0552b2c7b9b851ab4"},
]
Babel = [
{file = "Babel-2.11.0-py3-none-any.whl", hash = "sha256:1ad3eca1c885218f6dce2ab67291178944f810a10a9b5f3cb8382a5a232b64fe"},
{file = "Babel-2.11.0.tar.gz", hash = "sha256:5ef4b3226b0180dedded4229651c8b0e1a3a6a2837d45a073272f313e4cf97f6"},
]
backcall = [
{file = "backcall-0.2.0-py2.py3-none-any.whl", hash = "sha256:fbbce6a29f263178a1f7915c1940bde0ec2b2a967566fe1c65c1dfb7422bd255"},
{file = "backcall-0.2.0.tar.gz", hash = "sha256:5cbdbf27be5e7cfadb448baf0aa95508f91f2bbc6c6437cd9cd06e2a4c215e1e"},
]
"backports.zoneinfo" = [
{file = "backports.zoneinfo-0.2.1-cp36-cp36m-macosx_10_14_x86_64.whl", hash = "sha256:da6013fd84a690242c310d77ddb8441a559e9cb3d3d59ebac9aca1a57b2e18bc"},
{file = "backports.zoneinfo-0.2.1-cp36-cp36m-manylinux1_i686.whl", hash = "sha256:89a48c0d158a3cc3f654da4c2de1ceba85263fafb861b98b59040a5086259722"},
{file = "backports.zoneinfo-0.2.1-cp36-cp36m-manylinux1_x86_64.whl", hash = "sha256:1c5742112073a563c81f786e77514969acb58649bcdf6cdf0b4ed31a348d4546"},
{file = "backports.zoneinfo-0.2.1-cp36-cp36m-win32.whl", hash = "sha256:e8236383a20872c0cdf5a62b554b27538db7fa1bbec52429d8d106effbaeca08"},
{file = "backports.zoneinfo-0.2.1-cp36-cp36m-win_amd64.whl", hash = "sha256:8439c030a11780786a2002261569bdf362264f605dfa4d65090b64b05c9f79a7"},
{file = "backports.zoneinfo-0.2.1-cp37-cp37m-macosx_10_14_x86_64.whl", hash = "sha256:f04e857b59d9d1ccc39ce2da1021d196e47234873820cbeaad210724b1ee28ac"},
{file = "backports.zoneinfo-0.2.1-cp37-cp37m-manylinux1_i686.whl", hash = "sha256:17746bd546106fa389c51dbea67c8b7c8f0d14b5526a579ca6ccf5ed72c526cf"},
{file = "backports.zoneinfo-0.2.1-cp37-cp37m-manylinux1_x86_64.whl", hash = "sha256:5c144945a7752ca544b4b78c8c41544cdfaf9786f25fe5ffb10e838e19a27570"},
{file = "backports.zoneinfo-0.2.1-cp37-cp37m-win32.whl", hash = "sha256:e55b384612d93be96506932a786bbcde5a2db7a9e6a4bb4bffe8b733f5b9036b"},
{file = "backports.zoneinfo-0.2.1-cp37-cp37m-win_amd64.whl", hash = "sha256:a76b38c52400b762e48131494ba26be363491ac4f9a04c1b7e92483d169f6582"},
{file = "backports.zoneinfo-0.2.1-cp38-cp38-macosx_10_14_x86_64.whl", hash = "sha256:8961c0f32cd0336fb8e8ead11a1f8cd99ec07145ec2931122faaac1c8f7fd987"},
{file = "backports.zoneinfo-0.2.1-cp38-cp38-manylinux1_i686.whl", hash = "sha256:e81b76cace8eda1fca50e345242ba977f9be6ae3945af8d46326d776b4cf78d1"},
{file = "backports.zoneinfo-0.2.1-cp38-cp38-manylinux1_x86_64.whl", hash = "sha256:7b0a64cda4145548fed9efc10322770f929b944ce5cee6c0dfe0c87bf4c0c8c9"},
{file = "backports.zoneinfo-0.2.1-cp38-cp38-win32.whl", hash = "sha256:1b13e654a55cd45672cb54ed12148cd33628f672548f373963b0bff67b217328"},
{file = "backports.zoneinfo-0.2.1-cp38-cp38-win_amd64.whl", hash = "sha256:4a0f800587060bf8880f954dbef70de6c11bbe59c673c3d818921f042f9954a6"},
{file = "backports.zoneinfo-0.2.1.tar.gz", hash = "sha256:fadbfe37f74051d024037f223b8e001611eac868b5c5b06144ef4d8b799862f2"},
]
beautifulsoup4 = [
{file = "beautifulsoup4-4.11.1-py3-none-any.whl", hash = "sha256:58d5c3d29f5a36ffeb94f02f0d786cd53014cf9b3b3951d42e0080d8a9498d30"},
{file = "beautifulsoup4-4.11.1.tar.gz", hash = "sha256:ad9aa55b65ef2808eb405f46cf74df7fcb7044d5cbc26487f96eb2ef2e436693"},
]
black = [
{file = "black-22.10.0-1fixedarch-cp310-cp310-macosx_11_0_x86_64.whl", hash = "sha256:5cc42ca67989e9c3cf859e84c2bf014f6633db63d1cbdf8fdb666dcd9e77e3fa"},
{file = "black-22.10.0-1fixedarch-cp311-cp311-macosx_11_0_x86_64.whl", hash = "sha256:5d8f74030e67087b219b032aa33a919fae8806d49c867846bfacde57f43972ef"},
{file = "black-22.10.0-1fixedarch-cp37-cp37m-macosx_10_16_x86_64.whl", hash = "sha256:197df8509263b0b8614e1df1756b1dd41be6738eed2ba9e9769f3880c2b9d7b6"},
{file = "black-22.10.0-1fixedarch-cp38-cp38-macosx_10_16_x86_64.whl", hash = "sha256:2644b5d63633702bc2c5f3754b1b475378fbbfb481f62319388235d0cd104c2d"},
{file = "black-22.10.0-1fixedarch-cp39-cp39-macosx_11_0_x86_64.whl", hash = "sha256:e41a86c6c650bcecc6633ee3180d80a025db041a8e2398dcc059b3afa8382cd4"},
{file = "black-22.10.0-cp310-cp310-macosx_11_0_arm64.whl", hash = "sha256:2039230db3c6c639bd84efe3292ec7b06e9214a2992cd9beb293d639c6402edb"},
{file = "black-22.10.0-cp310-cp310-manylinux_2_17_x86_64.manylinux2014_x86_64.whl", hash = "sha256:14ff67aec0a47c424bc99b71005202045dc09270da44a27848d534600ac64fc7"},
{file = "black-22.10.0-cp310-cp310-win_amd64.whl", hash = "sha256:819dc789f4498ecc91438a7de64427c73b45035e2e3680c92e18795a839ebb66"},
{file = "black-22.10.0-cp311-cp311-macosx_11_0_arm64.whl", hash = "sha256:5b9b29da4f564ba8787c119f37d174f2b69cdfdf9015b7d8c5c16121ddc054ae"},
{file = "black-22.10.0-cp311-cp311-manylinux_2_17_x86_64.manylinux2014_x86_64.whl", hash = "sha256:b8b49776299fece66bffaafe357d929ca9451450f5466e997a7285ab0fe28e3b"},
{file = "black-22.10.0-cp311-cp311-win_amd64.whl", hash = "sha256:21199526696b8f09c3997e2b4db8d0b108d801a348414264d2eb8eb2532e540d"},
{file = "black-22.10.0-cp37-cp37m-manylinux_2_17_x86_64.manylinux2014_x86_64.whl", hash = "sha256:1e464456d24e23d11fced2bc8c47ef66d471f845c7b7a42f3bd77bf3d1789650"},
{file = "black-22.10.0-cp37-cp37m-win_amd64.whl", hash = "sha256:9311e99228ae10023300ecac05be5a296f60d2fd10fff31cf5c1fa4ca4b1988d"},
{file = "black-22.10.0-cp38-cp38-macosx_11_0_arm64.whl", hash = "sha256:fba8a281e570adafb79f7755ac8721b6cf1bbf691186a287e990c7929c7692ff"},
{file = "black-22.10.0-cp38-cp38-manylinux_2_17_x86_64.manylinux2014_x86_64.whl", hash = "sha256:915ace4ff03fdfff953962fa672d44be269deb2eaf88499a0f8805221bc68c87"},
{file = "black-22.10.0-cp38-cp38-win_amd64.whl", hash = "sha256:444ebfb4e441254e87bad00c661fe32df9969b2bf224373a448d8aca2132b395"},
{file = "black-22.10.0-cp39-cp39-macosx_11_0_arm64.whl", hash = "sha256:974308c58d057a651d182208a484ce80a26dac0caef2895836a92dd6ebd725e0"},
{file = "black-22.10.0-cp39-cp39-manylinux_2_17_x86_64.manylinux2014_x86_64.whl", hash = "sha256:72ef3925f30e12a184889aac03d77d031056860ccae8a1e519f6cbb742736383"},
{file = "black-22.10.0-cp39-cp39-win_amd64.whl", hash = "sha256:432247333090c8c5366e69627ccb363bc58514ae3e63f7fc75c54b1ea80fa7de"},
{file = "black-22.10.0-py3-none-any.whl", hash = "sha256:c957b2b4ea88587b46cf49d1dc17681c1e672864fd7af32fc1e9664d572b3458"},
{file = "black-22.10.0.tar.gz", hash = "sha256:f513588da599943e0cde4e32cc9879e825d58720d6557062d1098c5ad80080e1"},
]
bleach = [
{file = "bleach-5.0.1-py3-none-any.whl", hash = "sha256:085f7f33c15bd408dd9b17a4ad77c577db66d76203e5984b1bd59baeee948b2a"},
{file = "bleach-5.0.1.tar.gz", hash = "sha256:0d03255c47eb9bd2f26aa9bb7f2107732e7e8fe195ca2f64709fcf3b0a4a085c"},
]
blis = [
{file = "blis-0.7.9-cp310-cp310-macosx_10_9_x86_64.whl", hash = "sha256:b3ea73707a7938304c08363a0b990600e579bfb52dece7c674eafac4bf2df9f7"},
{file = "blis-0.7.9-cp310-cp310-macosx_11_0_arm64.whl", hash = "sha256:e85993364cae82707bfe7e637bee64ec96e232af31301e5c81a351778cb394b9"},
{file = "blis-0.7.9-cp310-cp310-manylinux_2_17_aarch64.manylinux2014_aarch64.whl", hash = "sha256:d205a7e69523e2bacdd67ea906b82b84034067e0de83b33bd83eb96b9e844ae3"},
{file = "blis-0.7.9-cp310-cp310-manylinux_2_17_x86_64.manylinux2014_x86_64.whl", hash = "sha256:b9737035636452fb6d08e7ab79e5a9904be18a0736868a129179cd9f9ab59825"},
{file = "blis-0.7.9-cp310-cp310-win_amd64.whl", hash = "sha256:d3882b4f44a33367812b5e287c0690027092830ffb1cce124b02f64e761819a4"},
{file = "blis-0.7.9-cp311-cp311-macosx_10_9_x86_64.whl", hash = "sha256:3dbb44311029263a6f65ed55a35f970aeb1d20b18bfac4c025de5aadf7889a8c"},
{file = "blis-0.7.9-cp311-cp311-macosx_11_0_arm64.whl", hash = "sha256:6fd5941bd5a21082b19d1dd0f6d62cd35609c25eb769aa3457d9877ef2ce37a9"},
{file = "blis-0.7.9-cp311-cp311-manylinux_2_17_aarch64.manylinux2014_aarch64.whl", hash = "sha256:97ad55e9ef36e4ff06b35802d0cf7bfc56f9697c6bc9427f59c90956bb98377d"},
{file = "blis-0.7.9-cp311-cp311-manylinux_2_17_x86_64.manylinux2014_x86_64.whl", hash = "sha256:f7b6315d7b1ac5546bc0350f5f8d7cc064438d23db19a5c21aaa6ae7d93c1ab5"},
{file = "blis-0.7.9-cp311-cp311-win_amd64.whl", hash = "sha256:5fd46c649acd1920482b4f5556d1c88693cba9bf6a494a020b00f14b42e1132f"},
{file = "blis-0.7.9-cp36-cp36m-manylinux_2_17_aarch64.manylinux2014_aarch64.whl", hash = "sha256:db2959560dcb34e912dad0e0d091f19b05b61363bac15d78307c01334a4e5d9d"},
{file = "blis-0.7.9-cp36-cp36m-manylinux_2_17_x86_64.manylinux2014_x86_64.whl", hash = "sha256:c0521231bc95ab522f280da3bbb096299c910a62cac2376d48d4a1d403c54393"},
{file = "blis-0.7.9-cp36-cp36m-win_amd64.whl", hash = "sha256:d811e88480203d75e6e959f313fdbf3326393b4e2b317067d952347f5c56216e"},
{file = "blis-0.7.9-cp37-cp37m-macosx_10_9_x86_64.whl", hash = "sha256:5cb1db88ab629ccb39eac110b742b98e3511d48ce9caa82ca32609d9169a9c9c"},
{file = "blis-0.7.9-cp37-cp37m-manylinux_2_17_aarch64.manylinux2014_aarch64.whl", hash = "sha256:c399a03de4059bf8e700b921f9ff5d72b2a86673616c40db40cd0592051bdd07"},
{file = "blis-0.7.9-cp37-cp37m-manylinux_2_17_x86_64.manylinux2014_x86_64.whl", hash = "sha256:d4eb70a79562a211bd2e6b6db63f1e2eed32c0ab3e9ef921d86f657ae8375845"},
{file = "blis-0.7.9-cp37-cp37m-win_amd64.whl", hash = "sha256:3e3f95e035c7456a1f5f3b5a3cfe708483a00335a3a8ad2211d57ba4d5f749a5"},
{file = "blis-0.7.9-cp38-cp38-macosx_10_9_x86_64.whl", hash = "sha256:179037cb5e6744c2e93b6b5facc6e4a0073776d514933c3db1e1f064a3253425"},
{file = "blis-0.7.9-cp38-cp38-macosx_11_0_arm64.whl", hash = "sha256:d0e82a6e0337d5231129a4e8b36978fa7b973ad3bb0257fd8e3714a9b35ceffd"},
{file = "blis-0.7.9-cp38-cp38-manylinux_2_17_aarch64.manylinux2014_aarch64.whl", hash = "sha256:6d12475e588a322e66a18346a3faa9eb92523504042e665c193d1b9b0b3f0482"},
{file = "blis-0.7.9-cp38-cp38-manylinux_2_17_x86_64.manylinux2014_x86_64.whl", hash = "sha256:4d5755ef37a573647be62684ca1545698879d07321f1e5b89a4fd669ce355eb0"},
{file = "blis-0.7.9-cp38-cp38-win_amd64.whl", hash = "sha256:b8a1fcd2eb267301ab13e1e4209c165d172cdf9c0c9e08186a9e234bf91daa16"},
{file = "blis-0.7.9-cp39-cp39-macosx_10_9_x86_64.whl", hash = "sha256:8275f6b6eee714b85f00bf882720f508ed6a60974bcde489715d37fd35529da8"},
{file = "blis-0.7.9-cp39-cp39-macosx_11_0_arm64.whl", hash = "sha256:7417667c221e29fe8662c3b2ff9bc201c6a5214bbb5eb6cc290484868802258d"},
{file = "blis-0.7.9-cp39-cp39-manylinux_2_17_aarch64.manylinux2014_aarch64.whl", hash = "sha256:b5f4691bf62013eccc167c38a85c09a0bf0c6e3e80d4c2229cdf2668c1124eb0"},
{file = "blis-0.7.9-cp39-cp39-manylinux_2_17_x86_64.manylinux2014_x86_64.whl", hash = "sha256:f5cec812ee47b29107eb36af9b457be7191163eab65d61775ed63538232c59d5"},
{file = "blis-0.7.9-cp39-cp39-win_amd64.whl", hash = "sha256:d81c3f627d33545fc25c9dcb5fee66c476d89288a27d63ac16ea63453401ffd5"},
{file = "blis-0.7.9.tar.gz", hash = "sha256:29ef4c25007785a90ffc2f0ab3d3bd3b75cd2d7856a9a482b7d0dac8d511a09d"},
]
boto3 = [
{file = "boto3-1.26.17-py3-none-any.whl", hash = "sha256:c39b7e87b27b00dcf452b2fc80252d311e275036f3d48464af34d0123077f985"},
{file = "boto3-1.26.17.tar.gz", hash = "sha256:bb40a9788dd2234851cdd1110eec0e3f6b3af6b98280924fa44c25199ced5737"},
]
botocore = [
{file = "botocore-1.29.17-py3-none-any.whl", hash = "sha256:d4bab7d42acdb18effa33fee53d137b8b1bdedc2da196428a2d1e04a123778bc"},
{file = "botocore-1.29.17.tar.gz", hash = "sha256:4be7ca8c581dbc6e8584876c4347dcc4f4bc6aa6e6e8131901fc11816fc8151b"},
]
cachetools = [
{file = "cachetools-5.2.0-py3-none-any.whl", hash = "sha256:f9f17d2aec496a9aa6b76f53e3b614c965223c061982d434d160f930c698a9db"},
{file = "cachetools-5.2.0.tar.gz", hash = "sha256:6a94c6402995a99c3970cc7e4884bb60b4a8639938157eeed436098bf9831757"},
]
catalogue = [
{file = "catalogue-2.0.8-py3-none-any.whl", hash = "sha256:2d786e229d8d202b4f8a2a059858e45a2331201d831e39746732daa704b99f69"},
{file = "catalogue-2.0.8.tar.gz", hash = "sha256:b325c77659208bfb6af1b0d93b1a1aa4112e1bb29a4c5ced816758a722f0e388"},
]
catboost = [
{file = "catboost-1.1.1-cp310-none-macosx_10_6_universal2.whl", hash = "sha256:93532f6807228f74db9c8184a0893ab222232d23fc5b3db534e2d8fedbba42cf"},
{file = "catboost-1.1.1-cp310-none-manylinux1_x86_64.whl", hash = "sha256:7c7364d79d5ff9deb56956560ba91a1b62b84204961d540bffd97f7b995e8cba"},
{file = "catboost-1.1.1-cp310-none-win_amd64.whl", hash = "sha256:5ec0c9bd65e53ae6c26d17c06f9c28e4febbd7cbdeb858460eb3d34249a10f30"},
{file = "catboost-1.1.1-cp36-none-macosx_10_6_universal2.whl", hash = "sha256:60acc4448eb45242f4d30aea6ccdf45bfaa8646bbc4ede3200cf25ba0d6bcf3d"},
{file = "catboost-1.1.1-cp36-none-manylinux1_x86_64.whl", hash = "sha256:b7443b40b5ddb141c6d14bff16c13f7cf4852893b57d7eda5dff30fb7517e14d"},
{file = "catboost-1.1.1-cp36-none-win_amd64.whl", hash = "sha256:190828590270e3dea5fb58f0fd13715ee2324f6ee321866592c422a1da141961"},
{file = "catboost-1.1.1-cp37-none-macosx_10_6_universal2.whl", hash = "sha256:a2fe4d08a360c3c3cabfa3a94c586f2261b93a3fff043ae2b43d2d4de121c2ce"},
{file = "catboost-1.1.1-cp37-none-manylinux1_x86_64.whl", hash = "sha256:4e350c40920dbd9644f1c7b88cb74cb8b96f1ecbbd7c12f6223964465d83b968"},
{file = "catboost-1.1.1-cp37-none-win_amd64.whl", hash = "sha256:0033569f2e6314a04a84ec83eecd39f77402426b52571b78991e629d7252c6f7"},
{file = "catboost-1.1.1-cp38-none-macosx_10_6_universal2.whl", hash = "sha256:454aae50922b10172b94971033d4b0607128a2e2ca8a5845cf8879ea28d80942"},
{file = "catboost-1.1.1-cp38-none-manylinux1_x86_64.whl", hash = "sha256:3fd12d9f1f89440292c63b242ccabdab012d313250e2b1e8a779d6618c734b32"},
{file = "catboost-1.1.1-cp38-none-win_amd64.whl", hash = "sha256:840348bf56dd11f6096030208601cbce87f1e6426ef33140fb6cc97bceb5fef3"},
{file = "catboost-1.1.1-cp39-none-macosx_10_6_universal2.whl", hash = "sha256:9e7c47050c8840ccaff4d394907d443bda01280a30778ae9d71939a7528f5ae3"},
{file = "catboost-1.1.1-cp39-none-manylinux1_x86_64.whl", hash = "sha256:a60ae2630f7b3752f262515a51b265521a4993df75dea26fa60777ec6e479395"},
{file = "catboost-1.1.1-cp39-none-win_amd64.whl", hash = "sha256:156264dbe9e841cb0b6333383e928cb8f65df4d00429a9771eb8b06b9bcfa17c"},
]
causal-learn = [
{file = "causal-learn-0.1.3.0.tar.gz", hash = "sha256:8242bced95e11eb4b4ee5f8085c528a25496d20c87bd5f3fcdb17d4678d7de63"},
{file = "causal_learn-0.1.3.0-py3-none-any.whl", hash = "sha256:d7271b0a60e839b725735373c4c5c012446dd216f17cc4b46aed550e08054d72"},
]
causalml = []
certifi = [
{file = "certifi-2022.9.24-py3-none-any.whl", hash = "sha256:90c1a32f1d68f940488354e36370f6cca89f0f106db09518524c88d6ed83f382"},
{file = "certifi-2022.9.24.tar.gz", hash = "sha256:0d9c601124e5a6ba9712dbc60d9c53c21e34f5f641fe83002317394311bdce14"},
]
cffi = [
{file = "cffi-1.15.1-cp27-cp27m-macosx_10_9_x86_64.whl", hash = "sha256:a66d3508133af6e8548451b25058d5812812ec3798c886bf38ed24a98216fab2"},
{file = "cffi-1.15.1-cp27-cp27m-manylinux1_i686.whl", hash = "sha256:470c103ae716238bbe698d67ad020e1db9d9dba34fa5a899b5e21577e6d52ed2"},
{file = "cffi-1.15.1-cp27-cp27m-manylinux1_x86_64.whl", hash = "sha256:9ad5db27f9cabae298d151c85cf2bad1d359a1b9c686a275df03385758e2f914"},
{file = "cffi-1.15.1-cp27-cp27m-win32.whl", hash = "sha256:b3bbeb01c2b273cca1e1e0c5df57f12dce9a4dd331b4fa1635b8bec26350bde3"},
{file = "cffi-1.15.1-cp27-cp27m-win_amd64.whl", hash = "sha256:e00b098126fd45523dd056d2efba6c5a63b71ffe9f2bbe1a4fe1716e1d0c331e"},
{file = "cffi-1.15.1-cp27-cp27mu-manylinux1_i686.whl", hash = "sha256:d61f4695e6c866a23a21acab0509af1cdfd2c013cf256bbf5b6b5e2695827162"},
{file = "cffi-1.15.1-cp27-cp27mu-manylinux1_x86_64.whl", hash = "sha256:ed9cb427ba5504c1dc15ede7d516b84757c3e3d7868ccc85121d9310d27eed0b"},
{file = "cffi-1.15.1-cp310-cp310-macosx_10_9_x86_64.whl", hash = "sha256:39d39875251ca8f612b6f33e6b1195af86d1b3e60086068be9cc053aa4376e21"},
{file = "cffi-1.15.1-cp310-cp310-macosx_11_0_arm64.whl", hash = "sha256:285d29981935eb726a4399badae8f0ffdff4f5050eaa6d0cfc3f64b857b77185"},
{file = "cffi-1.15.1-cp310-cp310-manylinux_2_12_i686.manylinux2010_i686.manylinux_2_17_i686.manylinux2014_i686.whl", hash = "sha256:3eb6971dcff08619f8d91607cfc726518b6fa2a9eba42856be181c6d0d9515fd"},
{file = "cffi-1.15.1-cp310-cp310-manylinux_2_17_aarch64.manylinux2014_aarch64.whl", hash = "sha256:21157295583fe8943475029ed5abdcf71eb3911894724e360acff1d61c1d54bc"},
{file = "cffi-1.15.1-cp310-cp310-manylinux_2_17_ppc64le.manylinux2014_ppc64le.whl", hash = "sha256:5635bd9cb9731e6d4a1132a498dd34f764034a8ce60cef4f5319c0541159392f"},
{file = "cffi-1.15.1-cp310-cp310-manylinux_2_17_s390x.manylinux2014_s390x.whl", hash = "sha256:2012c72d854c2d03e45d06ae57f40d78e5770d252f195b93f581acf3ba44496e"},
{file = "cffi-1.15.1-cp310-cp310-manylinux_2_17_x86_64.manylinux2014_x86_64.whl", hash = "sha256:dd86c085fae2efd48ac91dd7ccffcfc0571387fe1193d33b6394db7ef31fe2a4"},
{file = "cffi-1.15.1-cp310-cp310-musllinux_1_1_i686.whl", hash = "sha256:fa6693661a4c91757f4412306191b6dc88c1703f780c8234035eac011922bc01"},
{file = "cffi-1.15.1-cp310-cp310-musllinux_1_1_x86_64.whl", hash = "sha256:59c0b02d0a6c384d453fece7566d1c7e6b7bae4fc5874ef2ef46d56776d61c9e"},
{file = "cffi-1.15.1-cp310-cp310-win32.whl", hash = "sha256:cba9d6b9a7d64d4bd46167096fc9d2f835e25d7e4c121fb2ddfc6528fb0413b2"},
{file = "cffi-1.15.1-cp310-cp310-win_amd64.whl", hash = "sha256:ce4bcc037df4fc5e3d184794f27bdaab018943698f4ca31630bc7f84a7b69c6d"},
{file = "cffi-1.15.1-cp311-cp311-macosx_10_9_x86_64.whl", hash = "sha256:3d08afd128ddaa624a48cf2b859afef385b720bb4b43df214f85616922e6a5ac"},
{file = "cffi-1.15.1-cp311-cp311-macosx_11_0_arm64.whl", hash = "sha256:3799aecf2e17cf585d977b780ce79ff0dc9b78d799fc694221ce814c2c19db83"},
{file = "cffi-1.15.1-cp311-cp311-manylinux_2_12_i686.manylinux2010_i686.manylinux_2_17_i686.manylinux2014_i686.whl", hash = "sha256:a591fe9e525846e4d154205572a029f653ada1a78b93697f3b5a8f1f2bc055b9"},
{file = "cffi-1.15.1-cp311-cp311-manylinux_2_17_aarch64.manylinux2014_aarch64.whl", hash = "sha256:3548db281cd7d2561c9ad9984681c95f7b0e38881201e157833a2342c30d5e8c"},
{file = "cffi-1.15.1-cp311-cp311-manylinux_2_17_ppc64le.manylinux2014_ppc64le.whl", hash = "sha256:91fc98adde3d7881af9b59ed0294046f3806221863722ba7d8d120c575314325"},
{file = "cffi-1.15.1-cp311-cp311-manylinux_2_17_x86_64.manylinux2014_x86_64.whl", hash = "sha256:94411f22c3985acaec6f83c6df553f2dbe17b698cc7f8ae751ff2237d96b9e3c"},
{file = "cffi-1.15.1-cp311-cp311-musllinux_1_1_i686.whl", hash = "sha256:03425bdae262c76aad70202debd780501fabeaca237cdfddc008987c0e0f59ef"},
{file = "cffi-1.15.1-cp311-cp311-musllinux_1_1_x86_64.whl", hash = "sha256:cc4d65aeeaa04136a12677d3dd0b1c0c94dc43abac5860ab33cceb42b801c1e8"},
{file = "cffi-1.15.1-cp311-cp311-win32.whl", hash = "sha256:a0f100c8912c114ff53e1202d0078b425bee3649ae34d7b070e9697f93c5d52d"},
{file = "cffi-1.15.1-cp311-cp311-win_amd64.whl", hash = "sha256:04ed324bda3cda42b9b695d51bb7d54b680b9719cfab04227cdd1e04e5de3104"},
{file = "cffi-1.15.1-cp36-cp36m-macosx_10_9_x86_64.whl", hash = "sha256:50a74364d85fd319352182ef59c5c790484a336f6db772c1a9231f1c3ed0cbd7"},
{file = "cffi-1.15.1-cp36-cp36m-manylinux_2_17_aarch64.manylinux2014_aarch64.whl", hash = "sha256:e263d77ee3dd201c3a142934a086a4450861778baaeeb45db4591ef65550b0a6"},
{file = "cffi-1.15.1-cp36-cp36m-manylinux_2_17_ppc64le.manylinux2014_ppc64le.whl", hash = "sha256:cec7d9412a9102bdc577382c3929b337320c4c4c4849f2c5cdd14d7368c5562d"},
{file = "cffi-1.15.1-cp36-cp36m-manylinux_2_17_s390x.manylinux2014_s390x.whl", hash = "sha256:4289fc34b2f5316fbb762d75362931e351941fa95fa18789191b33fc4cf9504a"},
{file = "cffi-1.15.1-cp36-cp36m-manylinux_2_5_i686.manylinux1_i686.whl", hash = "sha256:173379135477dc8cac4bc58f45db08ab45d228b3363adb7af79436135d028405"},
{file = "cffi-1.15.1-cp36-cp36m-manylinux_2_5_x86_64.manylinux1_x86_64.whl", hash = "sha256:6975a3fac6bc83c4a65c9f9fcab9e47019a11d3d2cf7f3c0d03431bf145a941e"},
{file = "cffi-1.15.1-cp36-cp36m-win32.whl", hash = "sha256:2470043b93ff09bf8fb1d46d1cb756ce6132c54826661a32d4e4d132e1977adf"},
{file = "cffi-1.15.1-cp36-cp36m-win_amd64.whl", hash = "sha256:30d78fbc8ebf9c92c9b7823ee18eb92f2e6ef79b45ac84db507f52fbe3ec4497"},
{file = "cffi-1.15.1-cp37-cp37m-macosx_10_9_x86_64.whl", hash = "sha256:198caafb44239b60e252492445da556afafc7d1e3ab7a1fb3f0584ef6d742375"},
{file = "cffi-1.15.1-cp37-cp37m-manylinux_2_12_i686.manylinux2010_i686.manylinux_2_17_i686.manylinux2014_i686.whl", hash = "sha256:5ef34d190326c3b1f822a5b7a45f6c4535e2f47ed06fec77d3d799c450b2651e"},
{file = "cffi-1.15.1-cp37-cp37m-manylinux_2_17_aarch64.manylinux2014_aarch64.whl", hash = "sha256:8102eaf27e1e448db915d08afa8b41d6c7ca7a04b7d73af6514df10a3e74bd82"},
{file = "cffi-1.15.1-cp37-cp37m-manylinux_2_17_ppc64le.manylinux2014_ppc64le.whl", hash = "sha256:5df2768244d19ab7f60546d0c7c63ce1581f7af8b5de3eb3004b9b6fc8a9f84b"},
{file = "cffi-1.15.1-cp37-cp37m-manylinux_2_17_s390x.manylinux2014_s390x.whl", hash = "sha256:a8c4917bd7ad33e8eb21e9a5bbba979b49d9a97acb3a803092cbc1133e20343c"},
{file = "cffi-1.15.1-cp37-cp37m-manylinux_2_17_x86_64.manylinux2014_x86_64.whl", hash = "sha256:0e2642fe3142e4cc4af0799748233ad6da94c62a8bec3a6648bf8ee68b1c7426"},
{file = "cffi-1.15.1-cp37-cp37m-win32.whl", hash = "sha256:e229a521186c75c8ad9490854fd8bbdd9a0c9aa3a524326b55be83b54d4e0ad9"},
{file = "cffi-1.15.1-cp37-cp37m-win_amd64.whl", hash = "sha256:a0b71b1b8fbf2b96e41c4d990244165e2c9be83d54962a9a1d118fd8657d2045"},
{file = "cffi-1.15.1-cp38-cp38-macosx_10_9_x86_64.whl", hash = "sha256:320dab6e7cb2eacdf0e658569d2575c4dad258c0fcc794f46215e1e39f90f2c3"},
{file = "cffi-1.15.1-cp38-cp38-manylinux_2_12_i686.manylinux2010_i686.manylinux_2_17_i686.manylinux2014_i686.whl", hash = "sha256:1e74c6b51a9ed6589199c787bf5f9875612ca4a8a0785fb2d4a84429badaf22a"},
{file = "cffi-1.15.1-cp38-cp38-manylinux_2_17_aarch64.manylinux2014_aarch64.whl", hash = "sha256:a5c84c68147988265e60416b57fc83425a78058853509c1b0629c180094904a5"},
{file = "cffi-1.15.1-cp38-cp38-manylinux_2_17_ppc64le.manylinux2014_ppc64le.whl", hash = "sha256:3b926aa83d1edb5aa5b427b4053dc420ec295a08e40911296b9eb1b6170f6cca"},
{file = "cffi-1.15.1-cp38-cp38-manylinux_2_17_s390x.manylinux2014_s390x.whl", hash = "sha256:87c450779d0914f2861b8526e035c5e6da0a3199d8f1add1a665e1cbc6fc6d02"},
{file = "cffi-1.15.1-cp38-cp38-manylinux_2_17_x86_64.manylinux2014_x86_64.whl", hash = "sha256:4f2c9f67e9821cad2e5f480bc8d83b8742896f1242dba247911072d4fa94c192"},
{file = "cffi-1.15.1-cp38-cp38-win32.whl", hash = "sha256:8b7ee99e510d7b66cdb6c593f21c043c248537a32e0bedf02e01e9553a172314"},
{file = "cffi-1.15.1-cp38-cp38-win_amd64.whl", hash = "sha256:00a9ed42e88df81ffae7a8ab6d9356b371399b91dbdf0c3cb1e84c03a13aceb5"},
{file = "cffi-1.15.1-cp39-cp39-macosx_10_9_x86_64.whl", hash = "sha256:54a2db7b78338edd780e7ef7f9f6c442500fb0d41a5a4ea24fff1c929d5af585"},
{file = "cffi-1.15.1-cp39-cp39-macosx_11_0_arm64.whl", hash = "sha256:fcd131dd944808b5bdb38e6f5b53013c5aa4f334c5cad0c72742f6eba4b73db0"},
{file = "cffi-1.15.1-cp39-cp39-manylinux_2_12_i686.manylinux2010_i686.manylinux_2_17_i686.manylinux2014_i686.whl", hash = "sha256:7473e861101c9e72452f9bf8acb984947aa1661a7704553a9f6e4baa5ba64415"},
{file = "cffi-1.15.1-cp39-cp39-manylinux_2_17_aarch64.manylinux2014_aarch64.whl", hash = "sha256:6c9a799e985904922a4d207a94eae35c78ebae90e128f0c4e521ce339396be9d"},
{file = "cffi-1.15.1-cp39-cp39-manylinux_2_17_ppc64le.manylinux2014_ppc64le.whl", hash = "sha256:3bcde07039e586f91b45c88f8583ea7cf7a0770df3a1649627bf598332cb6984"},
{file = "cffi-1.15.1-cp39-cp39-manylinux_2_17_s390x.manylinux2014_s390x.whl", hash = "sha256:33ab79603146aace82c2427da5ca6e58f2b3f2fb5da893ceac0c42218a40be35"},
{file = "cffi-1.15.1-cp39-cp39-manylinux_2_17_x86_64.manylinux2014_x86_64.whl", hash = "sha256:5d598b938678ebf3c67377cdd45e09d431369c3b1a5b331058c338e201f12b27"},
{file = "cffi-1.15.1-cp39-cp39-musllinux_1_1_i686.whl", hash = "sha256:db0fbb9c62743ce59a9ff687eb5f4afbe77e5e8403d6697f7446e5f609976f76"},
{file = "cffi-1.15.1-cp39-cp39-musllinux_1_1_x86_64.whl", hash = "sha256:98d85c6a2bef81588d9227dde12db8a7f47f639f4a17c9ae08e773aa9c697bf3"},
{file = "cffi-1.15.1-cp39-cp39-win32.whl", hash = "sha256:40f4774f5a9d4f5e344f31a32b5096977b5d48560c5592e2f3d2c4374bd543ee"},
{file = "cffi-1.15.1-cp39-cp39-win_amd64.whl", hash = "sha256:70df4e3b545a17496c9b3f41f5115e69a4f2e77e94e1d2a8e1070bc0c38c8a3c"},
{file = "cffi-1.15.1.tar.gz", hash = "sha256:d400bfb9a37b1351253cb402671cea7e89bdecc294e8016a707f6d1d8ac934f9"},
]
charset-normalizer = [
{file = "charset-normalizer-2.1.1.tar.gz", hash = "sha256:5a3d016c7c547f69d6f81fb0db9449ce888b418b5b9952cc5e6e66843e9dd845"},
{file = "charset_normalizer-2.1.1-py3-none-any.whl", hash = "sha256:83e9a75d1911279afd89352c68b45348559d1fc0506b054b346651b5e7fee29f"},
]
click = [
{file = "click-8.1.3-py3-none-any.whl", hash = "sha256:bb4d8133cb15a609f44e8213d9b391b0809795062913b383c62be0ee95b1db48"},
{file = "click-8.1.3.tar.gz", hash = "sha256:7682dc8afb30297001674575ea00d1814d808d6a36af415a82bd481d37ba7b8e"},
]
cloudpickle = [
{file = "cloudpickle-2.2.0-py3-none-any.whl", hash = "sha256:7428798d5926d8fcbfd092d18d01a2a03daf8237d8fcdc8095d256b8490796f0"},
{file = "cloudpickle-2.2.0.tar.gz", hash = "sha256:3f4219469c55453cfe4737e564b67c2a149109dabf7f242478948b895f61106f"},
]
colorama = [
{file = "colorama-0.4.6-py2.py3-none-any.whl", hash = "sha256:4f1d9991f5acc0ca119f9d443620b77f9d6b33703e51011c16baf57afb285fc6"},
{file = "colorama-0.4.6.tar.gz", hash = "sha256:08695f5cb7ed6e0531a20572697297273c47b8cae5a63ffc6d6ed5c201be6e44"},
]
comm = [
{file = "comm-0.1.1-py3-none-any.whl", hash = "sha256:788a4ec961956c1cb2b0ba3c21f2458ff5757bb2f552032b140787af88d670a3"},
{file = "comm-0.1.1.tar.gz", hash = "sha256:f395ea64f4f261f35ffc2fbf80a62ec071375dac48cd3ea56092711e74dd063e"},
]
confection = [
{file = "confection-0.0.3-py3-none-any.whl", hash = "sha256:51af839c1240430421da2b248541ebc95f9d0ee385bcafa768b8acdbd2b0111d"},
{file = "confection-0.0.3.tar.gz", hash = "sha256:4fec47190057c43c9acbecb8b1b87a9bf31c469caa0d6888a5b9384432fdba5a"},
]
contourpy = [
{file = "contourpy-1.0.6-cp310-cp310-macosx_10_9_universal2.whl", hash = "sha256:613c665529899b5d9fade7e5d1760111a0b011231277a0d36c49f0d3d6914bd6"},
{file = "contourpy-1.0.6-cp310-cp310-macosx_10_9_x86_64.whl", hash = "sha256:78ced51807ccb2f45d4ea73aca339756d75d021069604c2fccd05390dc3c28eb"},
{file = "contourpy-1.0.6-cp310-cp310-macosx_11_0_arm64.whl", hash = "sha256:b3b1bd7577c530eaf9d2bc52d1a93fef50ac516a8b1062c3d1b9bcec9ebe329b"},
{file = "contourpy-1.0.6-cp310-cp310-manylinux_2_17_aarch64.manylinux2014_aarch64.whl", hash = "sha256:d8834c14b8c3dd849005e06703469db9bf96ba2d66a3f88ecc539c9a8982e0ee"},
{file = "contourpy-1.0.6-cp310-cp310-manylinux_2_17_i686.manylinux2014_i686.whl", hash = "sha256:f4052a8a4926d4468416fc7d4b2a7b2a3e35f25b39f4061a7e2a3a2748c4fc48"},
{file = "contourpy-1.0.6-cp310-cp310-manylinux_2_17_x86_64.manylinux2014_x86_64.whl", hash = "sha256:1c0e1308307a75e07d1f1b5f0f56b5af84538a5e9027109a7bcf6cb47c434e72"},
{file = "contourpy-1.0.6-cp310-cp310-musllinux_1_1_aarch64.whl", hash = "sha256:9fc4e7973ed0e1fe689435842a6e6b330eb7ccc696080dda9a97b1a1b78e41db"},
{file = "contourpy-1.0.6-cp310-cp310-musllinux_1_1_i686.whl", hash = "sha256:08e8d09d96219ace6cb596506fb9b64ea5f270b2fb9121158b976d88871fcfd1"},
{file = "contourpy-1.0.6-cp310-cp310-musllinux_1_1_x86_64.whl", hash = "sha256:f33da6b5d19ad1bb5e7ad38bb8ba5c426d2178928bc2b2c44e8823ea0ecb6ff3"},
{file = "contourpy-1.0.6-cp310-cp310-win32.whl", hash = "sha256:12a7dc8439544ed05c6553bf026d5e8fa7fad48d63958a95d61698df0e00092b"},
{file = "contourpy-1.0.6-cp310-cp310-win_amd64.whl", hash = "sha256:eadad75bf91897f922e0fb3dca1b322a58b1726a953f98c2e5f0606bd8408621"},
{file = "contourpy-1.0.6-cp311-cp311-macosx_10_9_universal2.whl", hash = "sha256:913bac9d064cff033cf3719e855d4f1db9f1c179e0ecf3ba9fdef21c21c6a16a"},
{file = "contourpy-1.0.6-cp311-cp311-macosx_10_9_x86_64.whl", hash = "sha256:46deb310a276cc5c1fd27958e358cce68b1e8a515fa5a574c670a504c3a3fe30"},
{file = "contourpy-1.0.6-cp311-cp311-macosx_11_0_arm64.whl", hash = "sha256:b64f747e92af7da3b85631a55d68c45a2d728b4036b03cdaba4bd94bcc85bd6f"},
{file = "contourpy-1.0.6-cp311-cp311-manylinux_2_17_aarch64.manylinux2014_aarch64.whl", hash = "sha256:50627bf76abb6ba291ad08db583161939c2c5fab38c38181b7833423ab9c7de3"},
{file = "contourpy-1.0.6-cp311-cp311-manylinux_2_17_i686.manylinux2014_i686.whl", hash = "sha256:358f6364e4873f4d73360b35da30066f40387dd3c427a3e5432c6b28dd24a8fa"},
{file = "contourpy-1.0.6-cp311-cp311-manylinux_2_17_x86_64.manylinux2014_x86_64.whl", hash = "sha256:c78bfbc1a7bff053baf7e508449d2765964d67735c909b583204e3240a2aca45"},
{file = "contourpy-1.0.6-cp311-cp311-musllinux_1_1_aarch64.whl", hash = "sha256:e43255a83835a129ef98f75d13d643844d8c646b258bebd11e4a0975203e018f"},
{file = "contourpy-1.0.6-cp311-cp311-musllinux_1_1_i686.whl", hash = "sha256:375d81366afd547b8558c4720337218345148bc2fcffa3a9870cab82b29667f2"},
{file = "contourpy-1.0.6-cp311-cp311-musllinux_1_1_x86_64.whl", hash = "sha256:b98c820608e2dca6442e786817f646d11057c09a23b68d2b3737e6dcb6e4a49b"},
{file = "contourpy-1.0.6-cp311-cp311-win32.whl", hash = "sha256:0e4854cc02006ad6684ce092bdadab6f0912d131f91c2450ce6dbdea78ee3c0b"},
{file = "contourpy-1.0.6-cp311-cp311-win_amd64.whl", hash = "sha256:d2eff2af97ea0b61381828b1ad6cd249bbd41d280e53aea5cccd7b2b31b8225c"},
{file = "contourpy-1.0.6-cp37-cp37m-macosx_10_9_x86_64.whl", hash = "sha256:5b117d29433fc8393b18a696d794961464e37afb34a6eeb8b2c37b5f4128a83e"},
{file = "contourpy-1.0.6-cp37-cp37m-manylinux_2_17_aarch64.manylinux2014_aarch64.whl", hash = "sha256:341330ed19074f956cb20877ad8d2ae50e458884bfa6a6df3ae28487cc76c768"},
{file = "contourpy-1.0.6-cp37-cp37m-manylinux_2_17_i686.manylinux2014_i686.whl", hash = "sha256:371f6570a81dfdddbb837ba432293a63b4babb942a9eb7aaa699997adfb53278"},
{file = "contourpy-1.0.6-cp37-cp37m-manylinux_2_17_x86_64.manylinux2014_x86_64.whl", hash = "sha256:9447c45df407d3ecb717d837af3b70cfef432138530712263730783b3d016512"},
{file = "contourpy-1.0.6-cp37-cp37m-musllinux_1_1_aarch64.whl", hash = "sha256:730c27978a0003b47b359935478b7d63fd8386dbb2dcd36c1e8de88cbfc1e9de"},
{file = "contourpy-1.0.6-cp37-cp37m-musllinux_1_1_i686.whl", hash = "sha256:da1ef35fd79be2926ba80fbb36327463e3656c02526e9b5b4c2b366588b74d9a"},
{file = "contourpy-1.0.6-cp37-cp37m-musllinux_1_1_x86_64.whl", hash = "sha256:cd2bc0c8f2e8de7dd89a7f1c10b8844e291bca17d359373203ef2e6100819edd"},
{file = "contourpy-1.0.6-cp37-cp37m-win32.whl", hash = "sha256:3a1917d3941dd58732c449c810fa7ce46cc305ce9325a11261d740118b85e6f3"},
{file = "contourpy-1.0.6-cp37-cp37m-win_amd64.whl", hash = "sha256:06ca79e1efbbe2df795822df2fa173d1a2b38b6e0f047a0ec7903fbca1d1847e"},
{file = "contourpy-1.0.6-cp38-cp38-macosx_10_9_universal2.whl", hash = "sha256:e626cefff8491bce356221c22af5a3ea528b0b41fbabc719c00ae233819ea0bf"},
{file = "contourpy-1.0.6-cp38-cp38-macosx_10_9_x86_64.whl", hash = "sha256:dbe6fe7a1166b1ddd7b6d887ea6fa8389d3f28b5ed3f73a8f40ece1fc5a3d340"},
{file = "contourpy-1.0.6-cp38-cp38-macosx_11_0_arm64.whl", hash = "sha256:e13b31d1b4b68db60b3b29f8e337908f328c7f05b9add4b1b5c74e0691180109"},
{file = "contourpy-1.0.6-cp38-cp38-manylinux_2_17_aarch64.manylinux2014_aarch64.whl", hash = "sha256:a79d239fc22c3b8d9d3de492aa0c245533f4f4c7608e5749af866949c0f1b1b9"},
{file = "contourpy-1.0.6-cp38-cp38-manylinux_2_17_i686.manylinux2014_i686.whl", hash = "sha256:9e8e686a6db92a46111a1ee0ee6f7fbfae4048f0019de207149f43ac1812cf95"},
{file = "contourpy-1.0.6-cp38-cp38-manylinux_2_17_x86_64.manylinux2014_x86_64.whl", hash = "sha256:acd2bd02f1a7adff3a1f33e431eb96ab6d7987b039d2946a9b39fe6fb16a1036"},
{file = "contourpy-1.0.6-cp38-cp38-musllinux_1_1_aarch64.whl", hash = "sha256:03d1b9c6b44a9e30d554654c72be89af94fab7510b4b9f62356c64c81cec8b7d"},
{file = "contourpy-1.0.6-cp38-cp38-musllinux_1_1_i686.whl", hash = "sha256:b48d94386f1994db7c70c76b5808c12e23ed7a4ee13693c2fc5ab109d60243c0"},
{file = "contourpy-1.0.6-cp38-cp38-musllinux_1_1_x86_64.whl", hash = "sha256:208bc904889c910d95aafcf7be9e677726df9ef71e216780170dbb7e37d118fa"},
{file = "contourpy-1.0.6-cp38-cp38-win32.whl", hash = "sha256:444fb776f58f4906d8d354eb6f6ce59d0a60f7b6a720da6c1ccb839db7c80eb9"},
{file = "contourpy-1.0.6-cp38-cp38-win_amd64.whl", hash = "sha256:9bc407a6af672da20da74823443707e38ece8b93a04009dca25856c2d9adadb1"},
{file = "contourpy-1.0.6-cp39-cp39-macosx_10_9_universal2.whl", hash = "sha256:aa4674cf3fa2bd9c322982644967f01eed0c91bb890f624e0e0daf7a5c3383e9"},
{file = "contourpy-1.0.6-cp39-cp39-macosx_10_9_x86_64.whl", hash = "sha256:6f56515e7c6fae4529b731f6c117752247bef9cdad2b12fc5ddf8ca6a50965a5"},
{file = "contourpy-1.0.6-cp39-cp39-macosx_11_0_arm64.whl", hash = "sha256:344cb3badf6fc7316ad51835f56ac387bdf86c8e1b670904f18f437d70da4183"},
{file = "contourpy-1.0.6-cp39-cp39-manylinux_2_17_aarch64.manylinux2014_aarch64.whl", hash = "sha256:0b1e66346acfb17694d46175a0cea7d9036f12ed0c31dfe86f0f405eedde2bdd"},
{file = "contourpy-1.0.6-cp39-cp39-manylinux_2_17_i686.manylinux2014_i686.whl", hash = "sha256:8468b40528fa1e15181cccec4198623b55dcd58306f8815a793803f51f6c474a"},
{file = "contourpy-1.0.6-cp39-cp39-manylinux_2_17_x86_64.manylinux2014_x86_64.whl", hash = "sha256:1dedf4c64185a216c35eb488e6f433297c660321275734401760dafaeb0ad5c2"},
{file = "contourpy-1.0.6-cp39-cp39-musllinux_1_1_aarch64.whl", hash = "sha256:494efed2c761f0f37262815f9e3c4bb9917c5c69806abdee1d1cb6611a7174a0"},
{file = "contourpy-1.0.6-cp39-cp39-musllinux_1_1_i686.whl", hash = "sha256:75a2e638042118118ab39d337da4c7908c1af74a8464cad59f19fbc5bbafec9b"},
{file = "contourpy-1.0.6-cp39-cp39-musllinux_1_1_x86_64.whl", hash = "sha256:a628bba09ba72e472bf7b31018b6281fd4cc903f0888049a3724afba13b6e0b8"},
{file = "contourpy-1.0.6-cp39-cp39-win32.whl", hash = "sha256:e1739496c2f0108013629aa095cc32a8c6363444361960c07493818d0dea2da4"},
{file = "contourpy-1.0.6-cp39-cp39-win_amd64.whl", hash = "sha256:a457ee72d9032e86730f62c5eeddf402e732fdf5ca8b13b41772aa8ae13a4563"},
{file = "contourpy-1.0.6-pp37-pypy37_pp73-macosx_10_9_x86_64.whl", hash = "sha256:d912f0154a20a80ea449daada904a7eb6941c83281a9fab95de50529bfc3a1da"},
{file = "contourpy-1.0.6-pp37-pypy37_pp73-manylinux_2_17_aarch64.manylinux2014_aarch64.whl", hash = "sha256:4081918147fc4c29fad328d5066cfc751da100a1098398742f9f364be63803fc"},
{file = "contourpy-1.0.6-pp37-pypy37_pp73-manylinux_2_17_i686.manylinux2014_i686.whl", hash = "sha256:0537cc1195245bbe24f2913d1f9211b8f04eb203de9044630abd3664c6cc339c"},
{file = "contourpy-1.0.6-pp37-pypy37_pp73-manylinux_2_17_x86_64.manylinux2014_x86_64.whl", hash = "sha256:dcd556c8fc37a342dd636d7eef150b1399f823a4462f8c968e11e1ebeabee769"},
{file = "contourpy-1.0.6-pp37-pypy37_pp73-win_amd64.whl", hash = "sha256:f6ca38dd8d988eca8f07305125dec6f54ac1c518f1aaddcc14d08c01aebb6efc"},
{file = "contourpy-1.0.6-pp38-pypy38_pp73-macosx_10_9_x86_64.whl", hash = "sha256:c1baa49ab9fedbf19d40d93163b7d3e735d9cd8d5efe4cce9907902a6dad391f"},
{file = "contourpy-1.0.6-pp38-pypy38_pp73-manylinux_2_17_aarch64.manylinux2014_aarch64.whl", hash = "sha256:211dfe2bd43bf5791d23afbe23a7952e8ac8b67591d24be3638cabb648b3a6eb"},
{file = "contourpy-1.0.6-pp38-pypy38_pp73-manylinux_2_17_i686.manylinux2014_i686.whl", hash = "sha256:c38c6536c2d71ca2f7e418acaf5bca30a3af7f2a2fa106083c7d738337848dbe"},
{file = "contourpy-1.0.6-pp38-pypy38_pp73-manylinux_2_17_x86_64.manylinux2014_x86_64.whl", hash = "sha256:1b1ee48a130da4dd0eb8055bbab34abf3f6262957832fd575e0cab4979a15a41"},
{file = "contourpy-1.0.6-pp38-pypy38_pp73-win_amd64.whl", hash = "sha256:5641927cc5ae66155d0c80195dc35726eae060e7defc18b7ab27600f39dd1fe7"},
{file = "contourpy-1.0.6-pp39-pypy39_pp73-macosx_10_9_x86_64.whl", hash = "sha256:7ee394502026d68652c2824348a40bf50f31351a668977b51437131a90d777ea"},
{file = "contourpy-1.0.6-pp39-pypy39_pp73-manylinux_2_17_aarch64.manylinux2014_aarch64.whl", hash = "sha256:0b97454ed5b1368b66ed414c754cba15b9750ce69938fc6153679787402e4cdf"},
{file = "contourpy-1.0.6-pp39-pypy39_pp73-manylinux_2_17_i686.manylinux2014_i686.whl", hash = "sha256:0236875c5a0784215b49d00ebbe80c5b6b5d5244b3655a36dda88105334dea17"},
{file = "contourpy-1.0.6-pp39-pypy39_pp73-manylinux_2_17_x86_64.manylinux2014_x86_64.whl", hash = "sha256:84c593aeff7a0171f639da92cb86d24954bbb61f8a1b530f74eb750a14685832"},
{file = "contourpy-1.0.6-pp39-pypy39_pp73-win_amd64.whl", hash = "sha256:9b0e7fe7f949fb719b206548e5cde2518ffb29936afa4303d8a1c4db43dcb675"},
{file = "contourpy-1.0.6.tar.gz", hash = "sha256:6e459ebb8bb5ee4c22c19cc000174f8059981971a33ce11e17dddf6aca97a142"},
]
coverage = [
{file = "coverage-6.5.0-cp310-cp310-macosx_10_9_x86_64.whl", hash = "sha256:ef8674b0ee8cc11e2d574e3e2998aea5df5ab242e012286824ea3c6970580e53"},
{file = "coverage-6.5.0-cp310-cp310-macosx_11_0_arm64.whl", hash = "sha256:784f53ebc9f3fd0e2a3f6a78b2be1bd1f5575d7863e10c6e12504f240fd06660"},
{file = "coverage-6.5.0-cp310-cp310-manylinux_2_17_aarch64.manylinux2014_aarch64.whl", hash = "sha256:b4a5be1748d538a710f87542f22c2cad22f80545a847ad91ce45e77417293eb4"},
{file = "coverage-6.5.0-cp310-cp310-manylinux_2_5_i686.manylinux1_i686.manylinux_2_17_i686.manylinux2014_i686.whl", hash = "sha256:83516205e254a0cb77d2d7bb3632ee019d93d9f4005de31dca0a8c3667d5bc04"},
{file = "coverage-6.5.0-cp310-cp310-manylinux_2_5_x86_64.manylinux1_x86_64.manylinux_2_17_x86_64.manylinux2014_x86_64.whl", hash = "sha256:af4fffaffc4067232253715065e30c5a7ec6faac36f8fc8d6f64263b15f74db0"},
{file = "coverage-6.5.0-cp310-cp310-musllinux_1_1_aarch64.whl", hash = "sha256:97117225cdd992a9c2a5515db1f66b59db634f59d0679ca1fa3fe8da32749cae"},
{file = "coverage-6.5.0-cp310-cp310-musllinux_1_1_i686.whl", hash = "sha256:a1170fa54185845505fbfa672f1c1ab175446c887cce8212c44149581cf2d466"},
{file = "coverage-6.5.0-cp310-cp310-musllinux_1_1_x86_64.whl", hash = "sha256:11b990d520ea75e7ee8dcab5bc908072aaada194a794db9f6d7d5cfd19661e5a"},
{file = "coverage-6.5.0-cp310-cp310-win32.whl", hash = "sha256:5dbec3b9095749390c09ab7c89d314727f18800060d8d24e87f01fb9cfb40b32"},
{file = "coverage-6.5.0-cp310-cp310-win_amd64.whl", hash = "sha256:59f53f1dc5b656cafb1badd0feb428c1e7bc19b867479ff72f7a9dd9b479f10e"},
{file = "coverage-6.5.0-cp311-cp311-macosx_10_9_x86_64.whl", hash = "sha256:4a5375e28c5191ac38cca59b38edd33ef4cc914732c916f2929029b4bfb50795"},
{file = "coverage-6.5.0-cp311-cp311-manylinux_2_17_aarch64.manylinux2014_aarch64.whl", hash = "sha256:c4ed2820d919351f4167e52425e096af41bfabacb1857186c1ea32ff9983ed75"},
{file = "coverage-6.5.0-cp311-cp311-manylinux_2_5_i686.manylinux1_i686.manylinux_2_17_i686.manylinux2014_i686.whl", hash = "sha256:33a7da4376d5977fbf0a8ed91c4dffaaa8dbf0ddbf4c8eea500a2486d8bc4d7b"},
{file = "coverage-6.5.0-cp311-cp311-manylinux_2_5_x86_64.manylinux1_x86_64.manylinux_2_17_x86_64.manylinux2014_x86_64.whl", hash = "sha256:a8fb6cf131ac4070c9c5a3e21de0f7dc5a0fbe8bc77c9456ced896c12fcdad91"},
{file = "coverage-6.5.0-cp311-cp311-musllinux_1_1_aarch64.whl", hash = "sha256:a6b7d95969b8845250586f269e81e5dfdd8ff828ddeb8567a4a2eaa7313460c4"},
{file = "coverage-6.5.0-cp311-cp311-musllinux_1_1_i686.whl", hash = "sha256:1ef221513e6f68b69ee9e159506d583d31aa3567e0ae84eaad9d6ec1107dddaa"},
{file = "coverage-6.5.0-cp311-cp311-musllinux_1_1_x86_64.whl", hash = "sha256:cca4435eebea7962a52bdb216dec27215d0df64cf27fc1dd538415f5d2b9da6b"},
{file = "coverage-6.5.0-cp311-cp311-win32.whl", hash = "sha256:98e8a10b7a314f454d9eff4216a9a94d143a7ee65018dd12442e898ee2310578"},
{file = "coverage-6.5.0-cp311-cp311-win_amd64.whl", hash = "sha256:bc8ef5e043a2af066fa8cbfc6e708d58017024dc4345a1f9757b329a249f041b"},
{file = "coverage-6.5.0-cp37-cp37m-macosx_10_9_x86_64.whl", hash = "sha256:4433b90fae13f86fafff0b326453dd42fc9a639a0d9e4eec4d366436d1a41b6d"},
{file = "coverage-6.5.0-cp37-cp37m-manylinux_2_17_aarch64.manylinux2014_aarch64.whl", hash = "sha256:f4f05d88d9a80ad3cac6244d36dd89a3c00abc16371769f1340101d3cb899fc3"},
{file = "coverage-6.5.0-cp37-cp37m-manylinux_2_5_i686.manylinux1_i686.manylinux_2_17_i686.manylinux2014_i686.whl", hash = "sha256:94e2565443291bd778421856bc975d351738963071e9b8839ca1fc08b42d4bef"},
{file = "coverage-6.5.0-cp37-cp37m-manylinux_2_5_x86_64.manylinux1_x86_64.manylinux_2_17_x86_64.manylinux2014_x86_64.whl", hash = "sha256:027018943386e7b942fa832372ebc120155fd970837489896099f5cfa2890f79"},
{file = "coverage-6.5.0-cp37-cp37m-musllinux_1_1_aarch64.whl", hash = "sha256:255758a1e3b61db372ec2736c8e2a1fdfaf563977eedbdf131de003ca5779b7d"},
{file = "coverage-6.5.0-cp37-cp37m-musllinux_1_1_i686.whl", hash = "sha256:851cf4ff24062c6aec510a454b2584f6e998cada52d4cb58c5e233d07172e50c"},
{file = "coverage-6.5.0-cp37-cp37m-musllinux_1_1_x86_64.whl", hash = "sha256:12adf310e4aafddc58afdb04d686795f33f4d7a6fa67a7a9d4ce7d6ae24d949f"},
{file = "coverage-6.5.0-cp37-cp37m-win32.whl", hash = "sha256:b5604380f3415ba69de87a289a2b56687faa4fe04dbee0754bfcae433489316b"},
{file = "coverage-6.5.0-cp37-cp37m-win_amd64.whl", hash = "sha256:4a8dbc1f0fbb2ae3de73eb0bdbb914180c7abfbf258e90b311dcd4f585d44bd2"},
{file = "coverage-6.5.0-cp38-cp38-macosx_10_9_x86_64.whl", hash = "sha256:d900bb429fdfd7f511f868cedd03a6bbb142f3f9118c09b99ef8dc9bf9643c3c"},
{file = "coverage-6.5.0-cp38-cp38-macosx_11_0_arm64.whl", hash = "sha256:2198ea6fc548de52adc826f62cb18554caedfb1d26548c1b7c88d8f7faa8f6ba"},
{file = "coverage-6.5.0-cp38-cp38-manylinux_2_17_aarch64.manylinux2014_aarch64.whl", hash = "sha256:6c4459b3de97b75e3bd6b7d4b7f0db13f17f504f3d13e2a7c623786289dd670e"},
{file = "coverage-6.5.0-cp38-cp38-manylinux_2_5_i686.manylinux1_i686.manylinux_2_17_i686.manylinux2014_i686.whl", hash = "sha256:20c8ac5386253717e5ccc827caad43ed66fea0efe255727b1053a8154d952398"},
{file = "coverage-6.5.0-cp38-cp38-manylinux_2_5_x86_64.manylinux1_x86_64.manylinux_2_17_x86_64.manylinux2014_x86_64.whl", hash = "sha256:6b07130585d54fe8dff3d97b93b0e20290de974dc8177c320aeaf23459219c0b"},
{file = "coverage-6.5.0-cp38-cp38-musllinux_1_1_aarch64.whl", hash = "sha256:dbdb91cd8c048c2b09eb17713b0c12a54fbd587d79adcebad543bc0cd9a3410b"},
{file = "coverage-6.5.0-cp38-cp38-musllinux_1_1_i686.whl", hash = "sha256:de3001a203182842a4630e7b8d1a2c7c07ec1b45d3084a83d5d227a3806f530f"},
{file = "coverage-6.5.0-cp38-cp38-musllinux_1_1_x86_64.whl", hash = "sha256:e07f4a4a9b41583d6eabec04f8b68076ab3cd44c20bd29332c6572dda36f372e"},
{file = "coverage-6.5.0-cp38-cp38-win32.whl", hash = "sha256:6d4817234349a80dbf03640cec6109cd90cba068330703fa65ddf56b60223a6d"},
{file = "coverage-6.5.0-cp38-cp38-win_amd64.whl", hash = "sha256:7ccf362abd726b0410bf8911c31fbf97f09f8f1061f8c1cf03dfc4b6372848f6"},
{file = "coverage-6.5.0-cp39-cp39-macosx_10_9_x86_64.whl", hash = "sha256:633713d70ad6bfc49b34ead4060531658dc6dfc9b3eb7d8a716d5873377ab745"},
{file = "coverage-6.5.0-cp39-cp39-macosx_11_0_arm64.whl", hash = "sha256:95203854f974e07af96358c0b261f1048d8e1083f2de9b1c565e1be4a3a48cfc"},
{file = "coverage-6.5.0-cp39-cp39-manylinux_2_17_aarch64.manylinux2014_aarch64.whl", hash = "sha256:b9023e237f4c02ff739581ef35969c3739445fb059b060ca51771e69101efffe"},
{file = "coverage-6.5.0-cp39-cp39-manylinux_2_5_i686.manylinux1_i686.manylinux_2_17_i686.manylinux2014_i686.whl", hash = "sha256:265de0fa6778d07de30bcf4d9dc471c3dc4314a23a3c6603d356a3c9abc2dfcf"},
{file = "coverage-6.5.0-cp39-cp39-manylinux_2_5_x86_64.manylinux1_x86_64.manylinux_2_17_x86_64.manylinux2014_x86_64.whl", hash = "sha256:8f830ed581b45b82451a40faabb89c84e1a998124ee4212d440e9c6cf70083e5"},
{file = "coverage-6.5.0-cp39-cp39-musllinux_1_1_aarch64.whl", hash = "sha256:7b6be138d61e458e18d8e6ddcddd36dd96215edfe5f1168de0b1b32635839b62"},
{file = "coverage-6.5.0-cp39-cp39-musllinux_1_1_i686.whl", hash = "sha256:42eafe6778551cf006a7c43153af1211c3aaab658d4d66fa5fcc021613d02518"},
{file = "coverage-6.5.0-cp39-cp39-musllinux_1_1_x86_64.whl", hash = "sha256:723e8130d4ecc8f56e9a611e73b31219595baa3bb252d539206f7bbbab6ffc1f"},
{file = "coverage-6.5.0-cp39-cp39-win32.whl", hash = "sha256:d9ecf0829c6a62b9b573c7bb6d4dcd6ba8b6f80be9ba4fc7ed50bf4ac9aecd72"},
{file = "coverage-6.5.0-cp39-cp39-win_amd64.whl", hash = "sha256:fc2af30ed0d5ae0b1abdb4ebdce598eafd5b35397d4d75deb341a614d333d987"},
{file = "coverage-6.5.0-pp36.pp37.pp38-none-any.whl", hash = "sha256:1431986dac3923c5945271f169f59c45b8802a114c8f548d611f2015133df77a"},
{file = "coverage-6.5.0.tar.gz", hash = "sha256:f642e90754ee3e06b0e7e51bce3379590e76b7f76b708e1a71ff043f87025c84"},
]
cycler = [
{file = "cycler-0.11.0-py3-none-any.whl", hash = "sha256:3a27e95f763a428a739d2add979fa7494c912a32c17c4c38c4d5f082cad165a3"},
{file = "cycler-0.11.0.tar.gz", hash = "sha256:9c87405839a19696e837b3b818fed3f5f69f16f1eec1a1ad77e043dcea9c772f"},
]
cymem = [
{file = "cymem-2.0.7-cp310-cp310-macosx_10_9_x86_64.whl", hash = "sha256:4981fc9182cc1fe54bfedf5f73bfec3ce0c27582d9be71e130c46e35958beef0"},
{file = "cymem-2.0.7-cp310-cp310-macosx_11_0_arm64.whl", hash = "sha256:42aedfd2e77aa0518a24a2a60a2147308903abc8b13c84504af58539c39e52a3"},
{file = "cymem-2.0.7-cp310-cp310-manylinux_2_17_aarch64.manylinux2014_aarch64.whl", hash = "sha256:c183257dc5ab237b664f64156c743e788f562417c74ea58c5a3939fe2d48d6f6"},
{file = "cymem-2.0.7-cp310-cp310-manylinux_2_17_x86_64.manylinux2014_x86_64.whl", hash = "sha256:d18250f97eeb13af2e8b19d3cefe4bf743b963d93320b0a2e729771410fd8cf4"},
{file = "cymem-2.0.7-cp310-cp310-win_amd64.whl", hash = "sha256:864701e626b65eb2256060564ed8eb034ebb0a8f14ce3fbef337e88352cdee9f"},
{file = "cymem-2.0.7-cp311-cp311-macosx_10_9_x86_64.whl", hash = "sha256:314273be1f143da674388e0a125d409e2721fbf669c380ae27c5cbae4011e26d"},
{file = "cymem-2.0.7-cp311-cp311-macosx_11_0_arm64.whl", hash = "sha256:df543a36e7000808fe0a03d92fd6cd8bf23fa8737c3f7ae791a5386de797bf79"},
{file = "cymem-2.0.7-cp311-cp311-manylinux_2_17_aarch64.manylinux2014_aarch64.whl", hash = "sha256:9e5e1b7de7952d89508d07601b9e95b2244e70d7ef60fbc161b3ad68f22815f8"},
{file = "cymem-2.0.7-cp311-cp311-manylinux_2_17_x86_64.manylinux2014_x86_64.whl", hash = "sha256:2aa33f1dbd7ceda37970e174c38fd1cf106817a261aa58521ba9918156868231"},
{file = "cymem-2.0.7-cp311-cp311-win_amd64.whl", hash = "sha256:10178e402bb512b2686b8c2f41f930111e597237ca8f85cb583ea93822ef798d"},
{file = "cymem-2.0.7-cp36-cp36m-manylinux_2_17_aarch64.manylinux2014_aarch64.whl", hash = "sha256:a2971b7da5aa2e65d8fbbe9f2acfc19ff8e73f1896e3d6e1223cc9bf275a0207"},
{file = "cymem-2.0.7-cp36-cp36m-manylinux_2_17_x86_64.manylinux2014_x86_64.whl", hash = "sha256:85359ab7b490e6c897c04863704481600bd45188a0e2ca7375eb5db193e13cb7"},
{file = "cymem-2.0.7-cp36-cp36m-win_amd64.whl", hash = "sha256:0ac45088abffbae9b7db2c597f098de51b7e3c1023cb314e55c0f7f08440cf66"},
{file = "cymem-2.0.7-cp37-cp37m-macosx_10_9_x86_64.whl", hash = "sha256:26e5d5c6958855d2fe3d5629afe85a6aae5531abaa76f4bc21b9abf9caaccdfe"},
{file = "cymem-2.0.7-cp37-cp37m-manylinux_2_17_aarch64.manylinux2014_aarch64.whl", hash = "sha256:011039e12d3144ac1bf3a6b38f5722b817f0d6487c8184e88c891b360b69f533"},
{file = "cymem-2.0.7-cp37-cp37m-manylinux_2_17_x86_64.manylinux2014_x86_64.whl", hash = "sha256:6f9e63e5ad4ed6ffa21fd8db1c03b05be3fea2f32e32fdace67a840ea2702c3d"},
{file = "cymem-2.0.7-cp37-cp37m-win_amd64.whl", hash = "sha256:5ea6b027fdad0c3e9a4f1b94d28d213be08c466a60c72c633eb9db76cf30e53a"},
{file = "cymem-2.0.7-cp38-cp38-macosx_10_9_x86_64.whl", hash = "sha256:4302df5793a320c4f4a263c7785d2fa7f29928d72cb83ebeb34d64a610f8d819"},
{file = "cymem-2.0.7-cp38-cp38-macosx_11_0_arm64.whl", hash = "sha256:24b779046484674c054af1e779c68cb224dc9694200ac13b22129d7fb7e99e6d"},
{file = "cymem-2.0.7-cp38-cp38-manylinux_2_17_aarch64.manylinux2014_aarch64.whl", hash = "sha256:6c50794c612801ed8b599cd4af1ed810a0d39011711c8224f93e1153c00e08d1"},
{file = "cymem-2.0.7-cp38-cp38-manylinux_2_17_x86_64.manylinux2014_x86_64.whl", hash = "sha256:a9525ad563b36dc1e30889d0087a0daa67dd7bb7d3e1530c4b61cd65cc756a5b"},
{file = "cymem-2.0.7-cp38-cp38-win_amd64.whl", hash = "sha256:48b98da6b906fe976865263e27734ebc64f972a978a999d447ad6c83334e3f90"},
{file = "cymem-2.0.7-cp39-cp39-macosx_10_9_x86_64.whl", hash = "sha256:e156788d32ad8f7141330913c5d5d2aa67182fca8f15ae22645e9f379abe8a4c"},
{file = "cymem-2.0.7-cp39-cp39-macosx_11_0_arm64.whl", hash = "sha256:3da89464021fe669932fce1578343fcaf701e47e3206f50d320f4f21e6683ca5"},
{file = "cymem-2.0.7-cp39-cp39-manylinux_2_17_aarch64.manylinux2014_aarch64.whl", hash = "sha256:4f359cab9f16e25b3098f816c40acbf1697a3b614a8d02c56e6ebcb9c89a06b3"},
{file = "cymem-2.0.7-cp39-cp39-manylinux_2_17_x86_64.manylinux2014_x86_64.whl", hash = "sha256:f165d7bce55d6730930e29d8294569788aa127f1be8d1642d9550ed96223cb37"},
{file = "cymem-2.0.7-cp39-cp39-win_amd64.whl", hash = "sha256:59a09cf0e71b1b88bfa0de544b801585d81d06ea123c1725e7c5da05b7ca0d20"},
{file = "cymem-2.0.7.tar.gz", hash = "sha256:e6034badb5dd4e10344211c81f16505a55553a7164adc314c75bd80cf07e57a8"},
]
Cython = [
{file = "Cython-0.29.32-cp27-cp27m-manylinux_2_5_i686.manylinux1_i686.whl", hash = "sha256:39afb4679b8c6bf7ccb15b24025568f4f9b4d7f9bf3cbd981021f542acecd75b"},
{file = "Cython-0.29.32-cp27-cp27m-manylinux_2_5_x86_64.manylinux1_x86_64.whl", hash = "sha256:dbee03b8d42dca924e6aa057b836a064c769ddfd2a4c2919e65da2c8a362d528"},
{file = "Cython-0.29.32-cp27-cp27mu-manylinux_2_5_i686.manylinux1_i686.whl", hash = "sha256:5ba622326f2862f9c1f99ca8d47ade49871241920a352c917e16861e25b0e5c3"},
{file = "Cython-0.29.32-cp27-cp27mu-manylinux_2_5_x86_64.manylinux1_x86_64.whl", hash = "sha256:e6ffa08aa1c111a1ebcbd1cf4afaaec120bc0bbdec3f2545f8bb7d3e8e77a1cd"},
{file = "Cython-0.29.32-cp310-cp310-manylinux_2_17_aarch64.manylinux2014_aarch64.manylinux_2_24_aarch64.whl", hash = "sha256:97335b2cd4acebf30d14e2855d882de83ad838491a09be2011745579ac975833"},
{file = "Cython-0.29.32-cp310-cp310-manylinux_2_17_x86_64.manylinux2014_x86_64.manylinux_2_24_x86_64.whl", hash = "sha256:06be83490c906b6429b4389e13487a26254ccaad2eef6f3d4ee21d8d3a4aaa2b"},
{file = "Cython-0.29.32-cp310-cp310-manylinux_2_5_i686.manylinux1_i686.manylinux_2_24_i686.whl", hash = "sha256:eefd2b9a5f38ded8d859fe96cc28d7d06e098dc3f677e7adbafda4dcdd4a461c"},
{file = "Cython-0.29.32-cp310-cp310-musllinux_1_1_x86_64.whl", hash = "sha256:5514f3b4122cb22317122a48e175a7194e18e1803ca555c4c959d7dfe68eaf98"},
{file = "Cython-0.29.32-cp311-cp311-manylinux_2_17_aarch64.manylinux2014_aarch64.manylinux_2_24_aarch64.whl", hash = "sha256:656dc5ff1d269de4d11ee8542f2ffd15ab466c447c1f10e5b8aba6f561967276"},
{file = "Cython-0.29.32-cp311-cp311-manylinux_2_17_x86_64.manylinux2014_x86_64.manylinux_2_24_x86_64.whl", hash = "sha256:cdf10af3e2e3279dc09fdc5f95deaa624850a53913f30350ceee824dc14fc1a6"},
{file = "Cython-0.29.32-cp311-cp311-manylinux_2_5_i686.manylinux1_i686.manylinux_2_24_i686.whl", hash = "sha256:3875c2b2ea752816a4d7ae59d45bb546e7c4c79093c83e3ba7f4d9051dd02928"},
{file = "Cython-0.29.32-cp311-cp311-musllinux_1_1_x86_64.whl", hash = "sha256:79e3bab19cf1b021b613567c22eb18b76c0c547b9bc3903881a07bfd9e7e64cf"},
{file = "Cython-0.29.32-cp35-cp35m-manylinux_2_5_i686.manylinux1_i686.whl", hash = "sha256:b0595aee62809ba353cebc5c7978e0e443760c3e882e2c7672c73ffe46383673"},
{file = "Cython-0.29.32-cp35-cp35m-manylinux_2_5_x86_64.manylinux1_x86_64.whl", hash = "sha256:0ea8267fc373a2c5064ad77d8ff7bf0ea8b88f7407098ff51829381f8ec1d5d9"},
{file = "Cython-0.29.32-cp36-cp36m-manylinux_2_17_aarch64.manylinux2014_aarch64.manylinux_2_24_aarch64.whl", hash = "sha256:c8e8025f496b5acb6ba95da2fb3e9dacffc97d9a92711aacfdd42f9c5927e094"},
{file = "Cython-0.29.32-cp36-cp36m-manylinux_2_17_x86_64.manylinux2014_x86_64.manylinux_2_24_x86_64.whl", hash = "sha256:afbce249133a830f121b917f8c9404a44f2950e0e4f5d1e68f043da4c2e9f457"},
{file = "Cython-0.29.32-cp36-cp36m-manylinux_2_5_i686.manylinux1_i686.manylinux_2_24_i686.whl", hash = "sha256:513e9707407608ac0d306c8b09d55a28be23ea4152cbd356ceaec0f32ef08d65"},
{file = "Cython-0.29.32-cp36-cp36m-manylinux_2_5_i686.manylinux1_i686.whl", hash = "sha256:e83228e0994497900af954adcac27f64c9a57cd70a9ec768ab0cb2c01fd15cf1"},
{file = "Cython-0.29.32-cp36-cp36m-manylinux_2_5_x86_64.manylinux1_x86_64.whl", hash = "sha256:ea1dcc07bfb37367b639415333cfbfe4a93c3be340edf1db10964bc27d42ed64"},
{file = "Cython-0.29.32-cp36-cp36m-musllinux_1_1_x86_64.whl", hash = "sha256:8669cadeb26d9a58a5e6b8ce34d2c8986cc3b5c0bfa77eda6ceb471596cb2ec3"},
{file = "Cython-0.29.32-cp37-cp37m-manylinux_2_17_aarch64.manylinux2014_aarch64.manylinux_2_24_aarch64.whl", hash = "sha256:ed087eeb88a8cf96c60fb76c5c3b5fb87188adee5e179f89ec9ad9a43c0c54b3"},
{file = "Cython-0.29.32-cp37-cp37m-manylinux_2_17_x86_64.manylinux2014_x86_64.manylinux_2_24_x86_64.whl", hash = "sha256:3f85eb2343d20d91a4ea9cf14e5748092b376a64b7e07fc224e85b2753e9070b"},
{file = "Cython-0.29.32-cp37-cp37m-manylinux_2_5_i686.manylinux1_i686.manylinux_2_24_i686.whl", hash = "sha256:63b79d9e1f7c4d1f498ab1322156a0d7dc1b6004bf981a8abda3f66800e140cd"},
{file = "Cython-0.29.32-cp37-cp37m-manylinux_2_5_i686.manylinux1_i686.whl", hash = "sha256:e1958e0227a4a6a2c06fd6e35b7469de50adf174102454db397cec6e1403cce3"},
{file = "Cython-0.29.32-cp37-cp37m-manylinux_2_5_x86_64.manylinux1_x86_64.whl", hash = "sha256:856d2fec682b3f31583719cb6925c6cdbb9aa30f03122bcc45c65c8b6f515754"},
{file = "Cython-0.29.32-cp37-cp37m-musllinux_1_1_x86_64.whl", hash = "sha256:479690d2892ca56d34812fe6ab8f58e4b2e0129140f3d94518f15993c40553da"},
{file = "Cython-0.29.32-cp38-cp38-manylinux_2_17_aarch64.manylinux2014_aarch64.manylinux_2_24_aarch64.whl", hash = "sha256:67fdd2f652f8d4840042e2d2d91e15636ba2bcdcd92e7e5ffbc68e6ef633a754"},
{file = "Cython-0.29.32-cp38-cp38-manylinux_2_17_x86_64.manylinux2014_x86_64.manylinux_2_24_x86_64.whl", hash = "sha256:4a4b03ab483271f69221c3210f7cde0dcc456749ecf8243b95bc7a701e5677e0"},
{file = "Cython-0.29.32-cp38-cp38-manylinux_2_5_i686.manylinux1_i686.manylinux_2_24_i686.whl", hash = "sha256:40eff7aa26e91cf108fd740ffd4daf49f39b2fdffadabc7292b4b7dc5df879f0"},
{file = "Cython-0.29.32-cp38-cp38-manylinux_2_5_i686.manylinux1_i686.whl", hash = "sha256:0bbc27abdf6aebfa1bce34cd92bd403070356f28b0ecb3198ff8a182791d58b9"},
{file = "Cython-0.29.32-cp38-cp38-manylinux_2_5_x86_64.manylinux1_x86_64.whl", hash = "sha256:cddc47ec746a08603037731f5d10aebf770ced08666100bd2cdcaf06a85d4d1b"},
{file = "Cython-0.29.32-cp38-cp38-musllinux_1_1_x86_64.whl", hash = "sha256:eca3065a1279456e81c615211d025ea11bfe4e19f0c5650b859868ca04b3fcbd"},
{file = "Cython-0.29.32-cp39-cp39-manylinux_2_17_aarch64.manylinux2014_aarch64.manylinux_2_24_aarch64.whl", hash = "sha256:d968ffc403d92addf20b68924d95428d523436adfd25cf505d427ed7ba3bee8b"},
{file = "Cython-0.29.32-cp39-cp39-manylinux_2_17_x86_64.manylinux2014_x86_64.manylinux_2_24_x86_64.whl", hash = "sha256:f3fd44cc362eee8ae569025f070d56208908916794b6ab21e139cea56470a2b3"},
{file = "Cython-0.29.32-cp39-cp39-manylinux_2_5_i686.manylinux1_i686.manylinux_2_24_i686.whl", hash = "sha256:b6da3063c5c476f5311fd76854abae6c315f1513ef7d7904deed2e774623bbb9"},
{file = "Cython-0.29.32-cp39-cp39-manylinux_2_5_i686.manylinux1_i686.whl", hash = "sha256:061e25151c38f2361bc790d3bcf7f9d9828a0b6a4d5afa56fbed3bd33fb2373a"},
{file = "Cython-0.29.32-cp39-cp39-manylinux_2_5_x86_64.manylinux1_x86_64.whl", hash = "sha256:f9944013588a3543fca795fffb0a070a31a243aa4f2d212f118aa95e69485831"},
{file = "Cython-0.29.32-cp39-cp39-musllinux_1_1_x86_64.whl", hash = "sha256:07d173d3289415bb496e72cb0ddd609961be08fe2968c39094d5712ffb78672b"},
{file = "Cython-0.29.32-py2.py3-none-any.whl", hash = "sha256:eeb475eb6f0ccf6c039035eb4f0f928eb53ead88777e0a760eccb140ad90930b"},
{file = "Cython-0.29.32.tar.gz", hash = "sha256:8733cf4758b79304f2a4e39ebfac5e92341bce47bcceb26c1254398b2f8c1af7"},
]
dask = [
{file = "dask-2021.11.2-py3-none-any.whl", hash = "sha256:2b0ad7beba8950add4fdc7c5cb94fa9444915ddb00c711d5743e2c4bb0a95ef5"},
{file = "dask-2021.11.2.tar.gz", hash = "sha256:e12bfe272928d62fa99623d98d0e0b0c045b33a47509ef31a22175aa5fd10917"},
]
debugpy = [
{file = "debugpy-1.6.3-cp310-cp310-macosx_10_15_x86_64.whl", hash = "sha256:c4b2bd5c245eeb49824bf7e539f95fb17f9a756186e51c3e513e32999d8846f3"},
{file = "debugpy-1.6.3-cp310-cp310-manylinux_2_5_x86_64.manylinux1_x86_64.manylinux_2_12_x86_64.manylinux2010_x86_64.whl", hash = "sha256:b8deaeb779699350deeed835322730a3efec170b88927debc9ba07a1a38e2585"},
{file = "debugpy-1.6.3-cp310-cp310-win32.whl", hash = "sha256:fc233a0160f3b117b20216f1169e7211b83235e3cd6749bcdd8dbb72177030c7"},
{file = "debugpy-1.6.3-cp310-cp310-win_amd64.whl", hash = "sha256:dda8652520eae3945833e061cbe2993ad94a0b545aebd62e4e6b80ee616c76b2"},
{file = "debugpy-1.6.3-cp37-cp37m-macosx_10_15_x86_64.whl", hash = "sha256:d5c814596a170a0a58fa6fad74947e30bfd7e192a5d2d7bd6a12156c2899e13a"},
{file = "debugpy-1.6.3-cp37-cp37m-manylinux_2_5_x86_64.manylinux1_x86_64.manylinux_2_12_x86_64.manylinux2010_x86_64.whl", hash = "sha256:c4cd6f37e3c168080d61d698390dfe2cd9e74ebf80b448069822a15dadcda57d"},
{file = "debugpy-1.6.3-cp37-cp37m-win32.whl", hash = "sha256:3c9f985944a30cfc9ae4306ac6a27b9c31dba72ca943214dad4a0ab3840f6161"},
{file = "debugpy-1.6.3-cp37-cp37m-win_amd64.whl", hash = "sha256:5ad571a36cec137ae6ed951d0ff75b5e092e9af6683da084753231150cbc5b25"},
{file = "debugpy-1.6.3-cp38-cp38-macosx_10_15_x86_64.whl", hash = "sha256:adcfea5ea06d55d505375995e150c06445e2b20cd12885bcae566148c076636b"},
{file = "debugpy-1.6.3-cp38-cp38-manylinux_2_5_x86_64.manylinux1_x86_64.manylinux_2_12_x86_64.manylinux2010_x86_64.whl", hash = "sha256:daadab4403427abd090eccb38d8901afd8b393e01fd243048fab3f1d7132abb4"},
{file = "debugpy-1.6.3-cp38-cp38-win32.whl", hash = "sha256:6efc30325b68e451118b795eff6fe8488253ca3958251d5158106d9c87581bc6"},
{file = "debugpy-1.6.3-cp38-cp38-win_amd64.whl", hash = "sha256:86d784b72c5411c833af1cd45b83d80c252b77c3bfdb43db17c441d772f4c734"},
{file = "debugpy-1.6.3-cp39-cp39-macosx_10_15_x86_64.whl", hash = "sha256:4e255982552b0edfe3a6264438dbd62d404baa6556a81a88f9420d3ed79b06ae"},
{file = "debugpy-1.6.3-cp39-cp39-manylinux_2_5_x86_64.manylinux1_x86_64.manylinux_2_12_x86_64.manylinux2010_x86_64.whl", hash = "sha256:cca23cb6161ac89698d629d892520327dd1be9321c0960e610bbcb807232b45d"},
{file = "debugpy-1.6.3-cp39-cp39-win32.whl", hash = "sha256:7c302095a81be0d5c19f6529b600bac971440db3e226dce85347cc27e6a61908"},
{file = "debugpy-1.6.3-cp39-cp39-win_amd64.whl", hash = "sha256:34d2cdd3a7c87302ba5322b86e79c32c2115be396f3f09ca13306d8a04fe0f16"},
{file = "debugpy-1.6.3-py2.py3-none-any.whl", hash = "sha256:84c39940a0cac410bf6aa4db00ba174f973eef521fbe9dd058e26bcabad89c4f"},
{file = "debugpy-1.6.3.zip", hash = "sha256:e8922090514a890eec99cfb991bab872dd2e353ebb793164d5f01c362b9a40bf"},
]
decorator = [
{file = "decorator-5.1.1-py3-none-any.whl", hash = "sha256:b8c3f85900b9dc423225913c5aace94729fe1fa9763b38939a95226f02d37186"},
{file = "decorator-5.1.1.tar.gz", hash = "sha256:637996211036b6385ef91435e4fae22989472f9d571faba8927ba8253acbc330"},
]
defusedxml = [
{file = "defusedxml-0.7.1-py2.py3-none-any.whl", hash = "sha256:a352e7e428770286cc899e2542b6cdaedb2b4953ff269a210103ec58f6198a61"},
{file = "defusedxml-0.7.1.tar.gz", hash = "sha256:1bb3032db185915b62d7c6209c5a8792be6a32ab2fedacc84e01b52c51aa3e69"},
]
dill = [
{file = "dill-0.3.6-py3-none-any.whl", hash = "sha256:a07ffd2351b8c678dfc4a856a3005f8067aea51d6ba6c700796a4d9e280f39f0"},
{file = "dill-0.3.6.tar.gz", hash = "sha256:e5db55f3687856d8fbdab002ed78544e1c4559a130302693d839dfe8f93f2373"},
]
distributed = [
{file = "distributed-2021.11.2-py3-none-any.whl", hash = "sha256:af1f7b98d85d43886fefe2354379c848c7a5aa6ae4d2313a7aca9ab9081a7e56"},
{file = "distributed-2021.11.2.tar.gz", hash = "sha256:f86a01a2e1e678865d2e42300c47552b5012cd81a2d354e47827a1fd074cc302"},
]
docutils = [
{file = "docutils-0.17.1-py2.py3-none-any.whl", hash = "sha256:cf316c8370a737a022b72b56874f6602acf974a37a9fba42ec2876387549fc61"},
{file = "docutils-0.17.1.tar.gz", hash = "sha256:686577d2e4c32380bb50cbb22f575ed742d58168cee37e99117a854bcd88f125"},
]
econml = [
{file = "econml-0.14.0-cp310-cp310-macosx_10_9_x86_64.whl", hash = "sha256:9c2fc1d67d98774d00bfe8e76d76af3de5ebc8d5f7a440da3c667d5ad244f971"},
{file = "econml-0.14.0-cp310-cp310-manylinux_2_17_x86_64.manylinux2014_x86_64.whl", hash = "sha256:9b02aca395eaa905bff080c3efd4f74bf281f168c674d74bdf899fc9467311e1"},
{file = "econml-0.14.0-cp310-cp310-win_amd64.whl", hash = "sha256:d2cca82486826c2b13f47ed0140f3fc85d8016fb43153a1b2de025345b190c6c"},
{file = "econml-0.14.0-cp37-cp37m-macosx_10_9_x86_64.whl", hash = "sha256:ce98668ba93d33856b60750e23312b9a6d503af6890b5588ab708db9de05ff49"},
{file = "econml-0.14.0-cp37-cp37m-manylinux_2_17_x86_64.manylinux2014_x86_64.whl", hash = "sha256:3b6b9938a2f48bf3055ae0ea47ac5a627d1c180f22e62531943961427769b0ef"},
{file = "econml-0.14.0-cp37-cp37m-win_amd64.whl", hash = "sha256:3c780c49a97bd688475f8863a7bdad2cbe19fdb4417708e3874f2bdae102852f"},
{file = "econml-0.14.0-cp38-cp38-macosx_10_9_x86_64.whl", hash = "sha256:7f2930eb311ea576195718b97fde83b4f2d29f3f3dc57ce0834b52fee410bfac"},
{file = "econml-0.14.0-cp38-cp38-manylinux_2_17_x86_64.manylinux2014_x86_64.whl", hash = "sha256:36be15da6ff3b295bc5cf80b95753e19bc123a1103bf53a2a0744daef49273e5"},
{file = "econml-0.14.0-cp38-cp38-win_amd64.whl", hash = "sha256:f71ab406f37b64dead4bee1b4c4869204faf9c55887dc8117bd9396d977edaf3"},
{file = "econml-0.14.0-cp39-cp39-macosx_10_9_x86_64.whl", hash = "sha256:1b0e67419c4eff2acdf8138f208de333a85c3e6fded831a6664bb02d6f4bcbe1"},
{file = "econml-0.14.0-cp39-cp39-manylinux_2_17_x86_64.manylinux2014_x86_64.whl", hash = "sha256:376724e0535ad9cbc585f768110eb23bfd3b3218032a61cac8793a09ee3bce95"},
{file = "econml-0.14.0-cp39-cp39-win_amd64.whl", hash = "sha256:6e1f0554d0f930dc639dbf3d7cb171297aa113dd64b7db322e0abb7d12eaa4dc"},
{file = "econml-0.14.0.tar.gz", hash = "sha256:5637d36c7548fb3ad01956d091cc6a9f788b090bc8b892bd527012e5bdbce041"},
]
entrypoints = [
{file = "entrypoints-0.4-py3-none-any.whl", hash = "sha256:f174b5ff827504fd3cd97cc3f8649f3693f51538c7e4bdf3ef002c8429d42f9f"},
{file = "entrypoints-0.4.tar.gz", hash = "sha256:b706eddaa9218a19ebcd67b56818f05bb27589b1ca9e8d797b74affad4ccacd4"},
]
exceptiongroup = [
{file = "exceptiongroup-1.0.4-py3-none-any.whl", hash = "sha256:542adf9dea4055530d6e1279602fa5cb11dab2395fa650b8674eaec35fc4a828"},
{file = "exceptiongroup-1.0.4.tar.gz", hash = "sha256:bd14967b79cd9bdb54d97323216f8fdf533e278df937aa2a90089e7d6e06e5ec"},
]
executing = [
{file = "executing-1.2.0-py2.py3-none-any.whl", hash = "sha256:0314a69e37426e3608aada02473b4161d4caf5a4b244d1d0c48072b8fee7bacc"},
{file = "executing-1.2.0.tar.gz", hash = "sha256:19da64c18d2d851112f09c287f8d3dbbdf725ab0e569077efb6cdcbd3497c107"},
]
fastai = [
{file = "fastai-2.7.10-py3-none-any.whl", hash = "sha256:db3709d6ff9ede9cd29111420b3669238248fa4f5a29d98daf37d52d122d9424"},
{file = "fastai-2.7.10.tar.gz", hash = "sha256:ccef6a185ae3a637efc9bcd9fea8e48b75f454d0ebad3b6df426f22fae20039d"},
]
fastcore = [
{file = "fastcore-1.5.27-py3-none-any.whl", hash = "sha256:79dffaa3de96066e4d7f2b8793f1a8a9468c82bc97d3d48ec002de34097b2a9f"},
{file = "fastcore-1.5.27.tar.gz", hash = "sha256:c6b66b35569d17251e25999bafc7d9bcdd6446c1e710503c08670c3ff1eef271"},
]
fastdownload = [
{file = "fastdownload-0.0.7-py3-none-any.whl", hash = "sha256:b791fa3406a2da003ba64615f03c60e2ea041c3c555796450b9a9a601bc0bbac"},
{file = "fastdownload-0.0.7.tar.gz", hash = "sha256:20507edb8e89406a1fbd7775e6e2a3d81a4dd633dd506b0e9cf0e1613e831d6a"},
]
fastjsonschema = [
{file = "fastjsonschema-2.16.2-py3-none-any.whl", hash = "sha256:21f918e8d9a1a4ba9c22e09574ba72267a6762d47822db9add95f6454e51cc1c"},
{file = "fastjsonschema-2.16.2.tar.gz", hash = "sha256:01e366f25d9047816fe3d288cbfc3e10541daf0af2044763f3d0ade42476da18"},
]
fastprogress = [
{file = "fastprogress-1.0.3-py3-none-any.whl", hash = "sha256:6dfea88f7a4717b0a8d6ee2048beae5dbed369f932a368c5dd9caff34796f7c5"},
{file = "fastprogress-1.0.3.tar.gz", hash = "sha256:7a17d2b438890f838c048eefce32c4ded47197ecc8ea042cecc33d3deb8022f5"},
]
flake8 = [
{file = "flake8-4.0.1-py2.py3-none-any.whl", hash = "sha256:479b1304f72536a55948cb40a32dce8bb0ffe3501e26eaf292c7e60eb5e0428d"},
{file = "flake8-4.0.1.tar.gz", hash = "sha256:806e034dda44114815e23c16ef92f95c91e4c71100ff52813adf7132a6ad870d"},
]
flaky = [
{file = "flaky-3.7.0-py2.py3-none-any.whl", hash = "sha256:d6eda73cab5ae7364504b7c44670f70abed9e75f77dd116352f662817592ec9c"},
{file = "flaky-3.7.0.tar.gz", hash = "sha256:3ad100780721a1911f57a165809b7ea265a7863305acb66708220820caf8aa0d"},
]
flatbuffers = [
{file = "flatbuffers-22.11.23-py2.py3-none-any.whl", hash = "sha256:13043a5deba77e55b73064750195d2c5b494754d52b7d4ad01bc52cad5c3c9f2"},
{file = "flatbuffers-22.11.23.tar.gz", hash = "sha256:2a82b85eea7f6712ab41077086dae1a89382862fe64414c8ebdf976123d1a095"},
]
fonttools = [
{file = "fonttools-4.38.0-py3-none-any.whl", hash = "sha256:820466f43c8be8c3009aef8b87e785014133508f0de64ec469e4efb643ae54fb"},
{file = "fonttools-4.38.0.zip", hash = "sha256:2bb244009f9bf3fa100fc3ead6aeb99febe5985fa20afbfbaa2f8946c2fbdaf1"},
]
forestci = [
{file = "forestci-0.6-py3-none-any.whl", hash = "sha256:025e76b20e23ddbdfc0a9c9c7f261751ee376b33a7b257b86e72fbad8312d650"},
{file = "forestci-0.6.tar.gz", hash = "sha256:f74f51eba9a7c189fdb673203cea10383f0a34504d2d28dee0fd712d19945b5a"},
]
fsspec = [
{file = "fsspec-2022.11.0-py3-none-any.whl", hash = "sha256:d6e462003e3dcdcb8c7aa84c73a228f8227e72453cd22570e2363e8844edfe7b"},
{file = "fsspec-2022.11.0.tar.gz", hash = "sha256:259d5fd5c8e756ff2ea72f42e7613c32667dc2049a4ac3d84364a7ca034acb8b"},
]
future = [
{file = "future-0.18.2.tar.gz", hash = "sha256:b1bead90b70cf6ec3f0710ae53a525360fa360d306a86583adc6bf83a4db537d"},
]
gast = [
{file = "gast-0.4.0-py3-none-any.whl", hash = "sha256:b7adcdd5adbebf1adf17378da5ba3f543684dbec47b1cda1f3997e573cd542c4"},
{file = "gast-0.4.0.tar.gz", hash = "sha256:40feb7b8b8434785585ab224d1568b857edb18297e5a3047f1ba012bc83b42c1"},
]
google-auth = [
{file = "google-auth-2.14.1.tar.gz", hash = "sha256:ccaa901f31ad5cbb562615eb8b664b3dd0bf5404a67618e642307f00613eda4d"},
{file = "google_auth-2.14.1-py2.py3-none-any.whl", hash = "sha256:f5d8701633bebc12e0deea4df8abd8aff31c28b355360597f7f2ee60f2e4d016"},
]
google-auth-oauthlib = [
{file = "google-auth-oauthlib-0.4.6.tar.gz", hash = "sha256:a90a072f6993f2c327067bf65270046384cda5a8ecb20b94ea9a687f1f233a7a"},
{file = "google_auth_oauthlib-0.4.6-py2.py3-none-any.whl", hash = "sha256:3f2a6e802eebbb6fb736a370fbf3b055edcb6b52878bf2f26330b5e041316c73"},
]
google-pasta = [
{file = "google-pasta-0.2.0.tar.gz", hash = "sha256:c9f2c8dfc8f96d0d5808299920721be30c9eec37f2389f28904f454565c8a16e"},
{file = "google_pasta-0.2.0-py2-none-any.whl", hash = "sha256:4612951da876b1a10fe3960d7226f0c7682cf901e16ac06e473b267a5afa8954"},
{file = "google_pasta-0.2.0-py3-none-any.whl", hash = "sha256:b32482794a366b5366a32c92a9a9201b107821889935a02b3e51f6b432ea84ed"},
]
graphviz = [
{file = "graphviz-0.20.1-py3-none-any.whl", hash = "sha256:587c58a223b51611c0cf461132da386edd896a029524ca61a1462b880bf97977"},
{file = "graphviz-0.20.1.zip", hash = "sha256:8c58f14adaa3b947daf26c19bc1e98c4e0702cdc31cf99153e6f06904d492bf8"},
]
grpcio = [
{file = "grpcio-1.50.0-cp310-cp310-linux_armv7l.whl", hash = "sha256:906f4d1beb83b3496be91684c47a5d870ee628715227d5d7c54b04a8de802974"},
{file = "grpcio-1.50.0-cp310-cp310-macosx_12_0_x86_64.whl", hash = "sha256:2d9fd6e38b16c4d286a01e1776fdf6c7a4123d99ae8d6b3f0b4a03a34bf6ce45"},
{file = "grpcio-1.50.0-cp310-cp310-manylinux_2_17_aarch64.whl", hash = "sha256:4b123fbb7a777a2fedec684ca0b723d85e1d2379b6032a9a9b7851829ed3ca9a"},
{file = "grpcio-1.50.0-cp310-cp310-manylinux_2_17_i686.manylinux2014_i686.whl", hash = "sha256:b2f77a90ba7b85bfb31329f8eab9d9540da2cf8a302128fb1241d7ea239a5469"},
{file = "grpcio-1.50.0-cp310-cp310-manylinux_2_17_x86_64.manylinux2014_x86_64.whl", hash = "sha256:9eea18a878cffc804506d39c6682d71f6b42ec1c151d21865a95fae743fda500"},
{file = "grpcio-1.50.0-cp310-cp310-musllinux_1_1_i686.whl", hash = "sha256:2b71916fa8f9eb2abd93151fafe12e18cebb302686b924bd4ec39266211da525"},
{file = "grpcio-1.50.0-cp310-cp310-musllinux_1_1_x86_64.whl", hash = "sha256:95ce51f7a09491fb3da8cf3935005bff19983b77c4e9437ef77235d787b06842"},
{file = "grpcio-1.50.0-cp310-cp310-win32.whl", hash = "sha256:f7025930039a011ed7d7e7ef95a1cb5f516e23c5a6ecc7947259b67bea8e06ca"},
{file = "grpcio-1.50.0-cp310-cp310-win_amd64.whl", hash = "sha256:05f7c248e440f538aaad13eee78ef35f0541e73498dd6f832fe284542ac4b298"},
{file = "grpcio-1.50.0-cp311-cp311-linux_armv7l.whl", hash = "sha256:ca8a2254ab88482936ce941485c1c20cdeaef0efa71a61dbad171ab6758ec998"},
{file = "grpcio-1.50.0-cp311-cp311-macosx_10_10_x86_64.whl", hash = "sha256:3b611b3de3dfd2c47549ca01abfa9bbb95937eb0ea546ea1d762a335739887be"},
{file = "grpcio-1.50.0-cp311-cp311-manylinux_2_17_i686.manylinux2014_i686.whl", hash = "sha256:1a4cd8cb09d1bc70b3ea37802be484c5ae5a576108bad14728f2516279165dd7"},
{file = "grpcio-1.50.0-cp311-cp311-manylinux_2_17_x86_64.manylinux2014_x86_64.whl", hash = "sha256:156f8009e36780fab48c979c5605eda646065d4695deea4cfcbcfdd06627ddb6"},
{file = "grpcio-1.50.0-cp311-cp311-musllinux_1_1_i686.whl", hash = "sha256:de411d2b030134b642c092e986d21aefb9d26a28bf5a18c47dd08ded411a3bc5"},
{file = "grpcio-1.50.0-cp311-cp311-musllinux_1_1_x86_64.whl", hash = "sha256:d144ad10eeca4c1d1ce930faa105899f86f5d99cecfe0d7224f3c4c76265c15e"},
{file = "grpcio-1.50.0-cp311-cp311-win32.whl", hash = "sha256:92d7635d1059d40d2ec29c8bf5ec58900120b3ce5150ef7414119430a4b2dd5c"},
{file = "grpcio-1.50.0-cp311-cp311-win_amd64.whl", hash = "sha256:ce8513aee0af9c159319692bfbf488b718d1793d764798c3d5cff827a09e25ef"},
{file = "grpcio-1.50.0-cp37-cp37m-linux_armv7l.whl", hash = "sha256:8e8999a097ad89b30d584c034929f7c0be280cd7851ac23e9067111167dcbf55"},
{file = "grpcio-1.50.0-cp37-cp37m-macosx_10_10_x86_64.whl", hash = "sha256:a50a1be449b9e238b9bd43d3857d40edf65df9416dea988929891d92a9f8a778"},
{file = "grpcio-1.50.0-cp37-cp37m-manylinux_2_17_aarch64.whl", hash = "sha256:cf151f97f5f381163912e8952eb5b3afe89dec9ed723d1561d59cabf1e219a35"},
{file = "grpcio-1.50.0-cp37-cp37m-manylinux_2_17_i686.manylinux2014_i686.whl", hash = "sha256:a23d47f2fc7111869f0ff547f771733661ff2818562b04b9ed674fa208e261f4"},
{file = "grpcio-1.50.0-cp37-cp37m-manylinux_2_17_x86_64.manylinux2014_x86_64.whl", hash = "sha256:d84d04dec64cc4ed726d07c5d17b73c343c8ddcd6b59c7199c801d6bbb9d9ed1"},
{file = "grpcio-1.50.0-cp37-cp37m-musllinux_1_1_i686.whl", hash = "sha256:67dd41a31f6fc5c7db097a5c14a3fa588af54736ffc174af4411d34c4f306f68"},
{file = "grpcio-1.50.0-cp37-cp37m-musllinux_1_1_x86_64.whl", hash = "sha256:8d4c8e73bf20fb53fe5a7318e768b9734cf122fe671fcce75654b98ba12dfb75"},
{file = "grpcio-1.50.0-cp37-cp37m-win32.whl", hash = "sha256:7489dbb901f4fdf7aec8d3753eadd40839c9085967737606d2c35b43074eea24"},
{file = "grpcio-1.50.0-cp37-cp37m-win_amd64.whl", hash = "sha256:531f8b46f3d3db91d9ef285191825d108090856b3bc86a75b7c3930f16ce432f"},
{file = "grpcio-1.50.0-cp38-cp38-linux_armv7l.whl", hash = "sha256:d534d169673dd5e6e12fb57cc67664c2641361e1a0885545495e65a7b761b0f4"},
{file = "grpcio-1.50.0-cp38-cp38-macosx_10_10_x86_64.whl", hash = "sha256:1d8d02dbb616c0a9260ce587eb751c9c7dc689bc39efa6a88cc4fa3e9c138a7b"},
{file = "grpcio-1.50.0-cp38-cp38-manylinux_2_17_aarch64.whl", hash = "sha256:baab51dcc4f2aecabf4ed1e2f57bceab240987c8b03533f1cef90890e6502067"},
{file = "grpcio-1.50.0-cp38-cp38-manylinux_2_17_i686.manylinux2014_i686.whl", hash = "sha256:40838061e24f960b853d7bce85086c8e1b81c6342b1f4c47ff0edd44bbae2722"},
{file = "grpcio-1.50.0-cp38-cp38-manylinux_2_17_x86_64.manylinux2014_x86_64.whl", hash = "sha256:931e746d0f75b2a5cff0a1197d21827a3a2f400c06bace036762110f19d3d507"},
{file = "grpcio-1.50.0-cp38-cp38-musllinux_1_1_i686.whl", hash = "sha256:15f9e6d7f564e8f0776770e6ef32dac172c6f9960c478616c366862933fa08b4"},
{file = "grpcio-1.50.0-cp38-cp38-musllinux_1_1_x86_64.whl", hash = "sha256:a4c23e54f58e016761b576976da6a34d876420b993f45f66a2bfb00363ecc1f9"},
{file = "grpcio-1.50.0-cp38-cp38-win32.whl", hash = "sha256:3e4244c09cc1b65c286d709658c061f12c61c814be0b7030a2d9966ff02611e0"},
{file = "grpcio-1.50.0-cp38-cp38-win_amd64.whl", hash = "sha256:8e69aa4e9b7f065f01d3fdcecbe0397895a772d99954bb82eefbb1682d274518"},
{file = "grpcio-1.50.0-cp39-cp39-linux_armv7l.whl", hash = "sha256:af98d49e56605a2912cf330b4627e5286243242706c3a9fa0bcec6e6f68646fc"},
{file = "grpcio-1.50.0-cp39-cp39-macosx_10_10_x86_64.whl", hash = "sha256:080b66253f29e1646ac53ef288c12944b131a2829488ac3bac8f52abb4413c0d"},
{file = "grpcio-1.50.0-cp39-cp39-manylinux_2_17_aarch64.whl", hash = "sha256:ab5d0e3590f0a16cb88de4a3fa78d10eb66a84ca80901eb2c17c1d2c308c230f"},
{file = "grpcio-1.50.0-cp39-cp39-manylinux_2_17_i686.manylinux2014_i686.whl", hash = "sha256:cb11464f480e6103c59d558a3875bd84eed6723f0921290325ebe97262ae1347"},
{file = "grpcio-1.50.0-cp39-cp39-manylinux_2_17_x86_64.manylinux2014_x86_64.whl", hash = "sha256:e07fe0d7ae395897981d16be61f0db9791f482f03fee7d1851fe20ddb4f69c03"},
{file = "grpcio-1.50.0-cp39-cp39-musllinux_1_1_i686.whl", hash = "sha256:d75061367a69808ab2e84c960e9dce54749bcc1e44ad3f85deee3a6c75b4ede9"},
{file = "grpcio-1.50.0-cp39-cp39-musllinux_1_1_x86_64.whl", hash = "sha256:ae23daa7eda93c1c49a9ecc316e027ceb99adbad750fbd3a56fa9e4a2ffd5ae0"},
{file = "grpcio-1.50.0-cp39-cp39-win32.whl", hash = "sha256:177afaa7dba3ab5bfc211a71b90da1b887d441df33732e94e26860b3321434d9"},
{file = "grpcio-1.50.0-cp39-cp39-win_amd64.whl", hash = "sha256:ea8ccf95e4c7e20419b7827aa5b6da6f02720270686ac63bd3493a651830235c"},
{file = "grpcio-1.50.0.tar.gz", hash = "sha256:12b479839a5e753580b5e6053571de14006157f2ef9b71f38c56dc9b23b95ad6"},
]
h5py = [
{file = "h5py-3.7.0-cp310-cp310-macosx_10_9_x86_64.whl", hash = "sha256:d77af42cb751ad6cc44f11bae73075a07429a5cf2094dfde2b1e716e059b3911"},
{file = "h5py-3.7.0-cp310-cp310-macosx_11_0_arm64.whl", hash = "sha256:63beb8b7b47d0896c50de6efb9a1eaa81dbe211f3767e7dd7db159cea51ba37a"},
{file = "h5py-3.7.0-cp310-cp310-manylinux_2_12_x86_64.manylinux2010_x86_64.whl", hash = "sha256:04e2e1e2fc51b8873e972a08d2f89625ef999b1f2d276199011af57bb9fc7851"},
{file = "h5py-3.7.0-cp310-cp310-manylinux_2_17_aarch64.manylinux2014_aarch64.whl", hash = "sha256:f73307c876af49aa869ec5df1818e9bb0bdcfcf8a5ba773cc45a4fba5a286a5c"},
{file = "h5py-3.7.0-cp310-cp310-win_amd64.whl", hash = "sha256:f514b24cacdd983e61f8d371edac8c1b780c279d0acb8485639e97339c866073"},
{file = "h5py-3.7.0-cp37-cp37m-macosx_10_9_x86_64.whl", hash = "sha256:43fed4d13743cf02798a9a03a360a88e589d81285e72b83f47d37bb64ed44881"},
{file = "h5py-3.7.0-cp37-cp37m-manylinux_2_12_x86_64.manylinux2010_x86_64.whl", hash = "sha256:c038399ce09a58ff8d89ec3e62f00aa7cb82d14f34e24735b920e2a811a3a426"},
{file = "h5py-3.7.0-cp37-cp37m-manylinux_2_17_aarch64.manylinux2014_aarch64.whl", hash = "sha256:03d64fb86bb86b978928bad923b64419a23e836499ec6363e305ad28afd9d287"},
{file = "h5py-3.7.0-cp37-cp37m-win_amd64.whl", hash = "sha256:e5b7820b75f9519499d76cc708e27242ccfdd9dfb511d6deb98701961d0445aa"},
{file = "h5py-3.7.0-cp38-cp38-macosx_10_9_x86_64.whl", hash = "sha256:a9351d729ea754db36d175098361b920573fdad334125f86ac1dd3a083355e20"},
{file = "h5py-3.7.0-cp38-cp38-macosx_11_0_arm64.whl", hash = "sha256:6776d896fb90c5938de8acb925e057e2f9f28755f67ec3edcbc8344832616c38"},
{file = "h5py-3.7.0-cp38-cp38-manylinux_2_12_x86_64.manylinux2010_x86_64.whl", hash = "sha256:0a047fddbe6951bce40e9cde63373c838a978c5e05a011a682db9ba6334b8e85"},
{file = "h5py-3.7.0-cp38-cp38-manylinux_2_17_aarch64.manylinux2014_aarch64.whl", hash = "sha256:0798a9c0ff45f17d0192e4d7114d734cac9f8b2b2c76dd1d923c4d0923f27bb6"},
{file = "h5py-3.7.0-cp38-cp38-win_amd64.whl", hash = "sha256:0d8de8cb619fc597da7cf8cdcbf3b7ff8c5f6db836568afc7dc16d21f59b2b49"},
{file = "h5py-3.7.0-cp39-cp39-macosx_10_9_x86_64.whl", hash = "sha256:f084bbe816907dfe59006756f8f2d16d352faff2d107f4ffeb1d8de126fc5dc7"},
{file = "h5py-3.7.0-cp39-cp39-macosx_11_0_arm64.whl", hash = "sha256:1fcb11a2dc8eb7ddcae08afd8fae02ba10467753a857fa07a404d700a93f3d53"},
{file = "h5py-3.7.0-cp39-cp39-manylinux_2_12_x86_64.manylinux2010_x86_64.whl", hash = "sha256:ed43e2cc4f511756fd664fb45d6b66c3cbed4e3bd0f70e29c37809b2ae013c44"},
{file = "h5py-3.7.0-cp39-cp39-manylinux_2_17_aarch64.manylinux2014_aarch64.whl", hash = "sha256:9e7535df5ee3dc3e5d1f408fdfc0b33b46bc9b34db82743c82cd674d8239b9ad"},
{file = "h5py-3.7.0-cp39-cp39-win_amd64.whl", hash = "sha256:9e2ad2aa000f5b1e73b5dfe22f358ca46bf1a2b6ca394d9659874d7fc251731a"},
{file = "h5py-3.7.0.tar.gz", hash = "sha256:3fcf37884383c5da64846ab510190720027dca0768def34dd8dcb659dbe5cbf3"},
]
HeapDict = [
{file = "HeapDict-1.0.1-py3-none-any.whl", hash = "sha256:6065f90933ab1bb7e50db403b90cab653c853690c5992e69294c2de2b253fc92"},
{file = "HeapDict-1.0.1.tar.gz", hash = "sha256:8495f57b3e03d8e46d5f1b2cc62ca881aca392fd5cc048dc0aa2e1a6d23ecdb6"},
]
idna = [
{file = "idna-3.4-py3-none-any.whl", hash = "sha256:90b77e79eaa3eba6de819a0c442c0b4ceefc341a7a2ab77d7562bf49f425c5c2"},
{file = "idna-3.4.tar.gz", hash = "sha256:814f528e8dead7d329833b91c5faa87d60bf71824cd12a7530b5526063d02cb4"},
]
imagesize = [
{file = "imagesize-1.4.1-py2.py3-none-any.whl", hash = "sha256:0d8d18d08f840c19d0ee7ca1fd82490fdc3729b7ac93f49870406ddde8ef8d8b"},
{file = "imagesize-1.4.1.tar.gz", hash = "sha256:69150444affb9cb0d5cc5a92b3676f0b2fb7cd9ae39e947a5e11a36b4497cd4a"},
]
importlib-metadata = [
{file = "importlib_metadata-5.1.0-py3-none-any.whl", hash = "sha256:d84d17e21670ec07990e1044a99efe8d615d860fd176fc29ef5c306068fda313"},
{file = "importlib_metadata-5.1.0.tar.gz", hash = "sha256:d5059f9f1e8e41f80e9c56c2ee58811450c31984dfa625329ffd7c0dad88a73b"},
]
importlib-resources = [
{file = "importlib_resources-5.10.0-py3-none-any.whl", hash = "sha256:ee17ec648f85480d523596ce49eae8ead87d5631ae1551f913c0100b5edd3437"},
{file = "importlib_resources-5.10.0.tar.gz", hash = "sha256:c01b1b94210d9849f286b86bb51bcea7cd56dde0600d8db721d7b81330711668"},
]
iniconfig = [
{file = "iniconfig-1.1.1-py2.py3-none-any.whl", hash = "sha256:011e24c64b7f47f6ebd835bb12a743f2fbe9a26d4cecaa7f53bc4f35ee9da8b3"},
{file = "iniconfig-1.1.1.tar.gz", hash = "sha256:bc3af051d7d14b2ee5ef9969666def0cd1a000e121eaea580d4a313df4b37f32"},
]
ipykernel = [
{file = "ipykernel-6.18.1-py3-none-any.whl", hash = "sha256:18c298565218e602939dd03b56206912433ebdb6b5800afd9177bbce8d96318b"},
{file = "ipykernel-6.18.1.tar.gz", hash = "sha256:71f21ce281da5a4e73ec4a7ecdf98802d9e65d58cdb7e22ff824ca994ce5114b"},
]
ipython = [
{file = "ipython-8.7.0-py3-none-any.whl", hash = "sha256:352042ddcb019f7c04e48171b4dd78e4c4bb67bf97030d170e154aac42b656d9"},
{file = "ipython-8.7.0.tar.gz", hash = "sha256:882899fe78d5417a0aa07f995db298fa28b58faeba2112d2e3a4c95fe14bb738"},
]
ipython_genutils = [
{file = "ipython_genutils-0.2.0-py2.py3-none-any.whl", hash = "sha256:72dd37233799e619666c9f639a9da83c34013a73e8bbc79a7a6348d93c61fab8"},
{file = "ipython_genutils-0.2.0.tar.gz", hash = "sha256:eb2e116e75ecef9d4d228fdc66af54269afa26ab4463042e33785b887c628ba8"},
]
ipywidgets = [
{file = "ipywidgets-8.0.2-py3-none-any.whl", hash = "sha256:1dc3dd4ee19ded045ea7c86eb273033d238d8e43f9e7872c52d092683f263891"},
{file = "ipywidgets-8.0.2.tar.gz", hash = "sha256:08cb75c6e0a96836147cbfdc55580ae04d13e05d26ffbc377b4e1c68baa28b1f"},
]
isort = [
{file = "isort-5.10.1-py3-none-any.whl", hash = "sha256:6f62d78e2f89b4500b080fe3a81690850cd254227f27f75c3a0c491a1f351ba7"},
{file = "isort-5.10.1.tar.gz", hash = "sha256:e8443a5e7a020e9d7f97f1d7d9cd17c88bcb3bc7e218bf9cf5095fe550be2951"},
]
jedi = [
{file = "jedi-0.18.2-py2.py3-none-any.whl", hash = "sha256:203c1fd9d969ab8f2119ec0a3342e0b49910045abe6af0a3ae83a5764d54639e"},
{file = "jedi-0.18.2.tar.gz", hash = "sha256:bae794c30d07f6d910d32a7048af09b5a39ed740918da923c6b780790ebac612"},
]
Jinja2 = [
{file = "Jinja2-3.1.2-py3-none-any.whl", hash = "sha256:6088930bfe239f0e6710546ab9c19c9ef35e29792895fed6e6e31a023a182a61"},
{file = "Jinja2-3.1.2.tar.gz", hash = "sha256:31351a702a408a9e7595a8fc6150fc3f43bb6bf7e319770cbc0db9df9437e852"},
]
jmespath = [
{file = "jmespath-1.0.1-py3-none-any.whl", hash = "sha256:02e2e4cc71b5bcab88332eebf907519190dd9e6e82107fa7f83b1003a6252980"},
{file = "jmespath-1.0.1.tar.gz", hash = "sha256:90261b206d6defd58fdd5e85f478bf633a2901798906be2ad389150c5c60edbe"},
]
joblib = [
{file = "joblib-1.2.0-py3-none-any.whl", hash = "sha256:091138ed78f800342968c523bdde947e7a305b8594b910a0fea2ab83c3c6d385"},
{file = "joblib-1.2.0.tar.gz", hash = "sha256:e1cee4a79e4af22881164f218d4311f60074197fb707e082e803b61f6d137018"},
]
jsonschema = [
{file = "jsonschema-4.17.1-py3-none-any.whl", hash = "sha256:410ef23dcdbca4eaedc08b850079179883c2ed09378bd1f760d4af4aacfa28d7"},
{file = "jsonschema-4.17.1.tar.gz", hash = "sha256:05b2d22c83640cde0b7e0aa329ca7754fbd98ea66ad8ae24aa61328dfe057fa3"},
]
jupyter = [
{file = "jupyter-1.0.0-py2.py3-none-any.whl", hash = "sha256:5b290f93b98ffbc21c0c7e749f054b3267782166d72fa5e3ed1ed4eaf34a2b78"},
{file = "jupyter-1.0.0.tar.gz", hash = "sha256:d9dc4b3318f310e34c82951ea5d6683f67bed7def4b259fafbfe4f1beb1d8e5f"},
{file = "jupyter-1.0.0.zip", hash = "sha256:3e1f86076bbb7c8c207829390305a2b1fe836d471ed54be66a3b8c41e7f46cc7"},
]
jupyter-client = [
{file = "jupyter_client-7.4.7-py3-none-any.whl", hash = "sha256:df56ae23b8e1da1b66f89dee1368e948b24a7f780fa822c5735187589fc4c157"},
{file = "jupyter_client-7.4.7.tar.gz", hash = "sha256:330f6b627e0b4bf2f54a3a0dd9e4a22d2b649c8518168afedce2c96a1ceb2860"},
]
jupyter-console = [
{file = "jupyter_console-6.4.4-py3-none-any.whl", hash = "sha256:756df7f4f60c986e7bc0172e4493d3830a7e6e75c08750bbe59c0a5403ad6dee"},
{file = "jupyter_console-6.4.4.tar.gz", hash = "sha256:172f5335e31d600df61613a97b7f0352f2c8250bbd1092ef2d658f77249f89fb"},
]
jupyter-core = [
{file = "jupyter_core-5.1.0-py3-none-any.whl", hash = "sha256:f5740d99606958544396914b08e67b668f45e7eff99ab47a7f4bcead419c02f4"},
{file = "jupyter_core-5.1.0.tar.gz", hash = "sha256:a5ae7c09c55c0b26f692ec69323ba2b62e8d7295354d20f6cd57b749de4a05bf"},
]
jupyter-server = [
{file = "jupyter_server-1.23.3-py3-none-any.whl", hash = "sha256:438496cac509709cc85e60172e5538ca45b4c8a0862bb97cd73e49f2ace419cb"},
{file = "jupyter_server-1.23.3.tar.gz", hash = "sha256:f7f7a2f9d36f4150ad125afef0e20b1c76c8ff83eb5e39fb02d3b9df0f9b79ab"},
]
jupyterlab-pygments = [
{file = "jupyterlab_pygments-0.2.2-py2.py3-none-any.whl", hash = "sha256:2405800db07c9f770863bcf8049a529c3dd4d3e28536638bd7c1c01d2748309f"},
{file = "jupyterlab_pygments-0.2.2.tar.gz", hash = "sha256:7405d7fde60819d905a9fa8ce89e4cd830e318cdad22a0030f7a901da705585d"},
]
jupyterlab-widgets = [
{file = "jupyterlab_widgets-3.0.3-py3-none-any.whl", hash = "sha256:6aa1bc0045470d54d76b9c0b7609a8f8f0087573bae25700a370c11f82cb38c8"},
{file = "jupyterlab_widgets-3.0.3.tar.gz", hash = "sha256:c767181399b4ca8b647befe2d913b1260f51bf9d8ef9b7a14632d4c1a7b536bd"},
]
keras = [
{file = "keras-2.11.0-py2.py3-none-any.whl", hash = "sha256:38c6fff0ea9a8b06a2717736565c92a73c8cd9b1c239e7125ccb188b7848f65e"},
]
kiwisolver = [
{file = "kiwisolver-1.4.4-cp310-cp310-macosx_10_9_universal2.whl", hash = "sha256:2f5e60fabb7343a836360c4f0919b8cd0d6dbf08ad2ca6b9cf90bf0c76a3c4f6"},
{file = "kiwisolver-1.4.4-cp310-cp310-macosx_10_9_x86_64.whl", hash = "sha256:10ee06759482c78bdb864f4109886dff7b8a56529bc1609d4f1112b93fe6423c"},
{file = "kiwisolver-1.4.4-cp310-cp310-macosx_11_0_arm64.whl", hash = "sha256:c79ebe8f3676a4c6630fd3f777f3cfecf9289666c84e775a67d1d358578dc2e3"},
{file = "kiwisolver-1.4.4-cp310-cp310-manylinux_2_12_i686.manylinux2010_i686.whl", hash = "sha256:abbe9fa13da955feb8202e215c4018f4bb57469b1b78c7a4c5c7b93001699938"},
{file = "kiwisolver-1.4.4-cp310-cp310-manylinux_2_12_x86_64.manylinux2010_x86_64.whl", hash = "sha256:7577c1987baa3adc4b3c62c33bd1118c3ef5c8ddef36f0f2c950ae0b199e100d"},
{file = "kiwisolver-1.4.4-cp310-cp310-manylinux_2_17_aarch64.manylinux2014_aarch64.whl", hash = "sha256:f8ad8285b01b0d4695102546b342b493b3ccc6781fc28c8c6a1bb63e95d22f09"},
{file = "kiwisolver-1.4.4-cp310-cp310-manylinux_2_17_ppc64le.manylinux2014_ppc64le.whl", hash = "sha256:8ed58b8acf29798b036d347791141767ccf65eee7f26bde03a71c944449e53de"},
{file = "kiwisolver-1.4.4-cp310-cp310-manylinux_2_17_s390x.manylinux2014_s390x.whl", hash = "sha256:a68b62a02953b9841730db7797422f983935aeefceb1679f0fc85cbfbd311c32"},
{file = "kiwisolver-1.4.4-cp310-cp310-win32.whl", hash = "sha256:e92a513161077b53447160b9bd8f522edfbed4bd9759e4c18ab05d7ef7e49408"},
{file = "kiwisolver-1.4.4-cp310-cp310-win_amd64.whl", hash = "sha256:3fe20f63c9ecee44560d0e7f116b3a747a5d7203376abeea292ab3152334d004"},
{file = "kiwisolver-1.4.4-cp311-cp311-macosx_10_9_universal2.whl", hash = "sha256:e0ea21f66820452a3f5d1655f8704a60d66ba1191359b96541eaf457710a5fc6"},
{file = "kiwisolver-1.4.4-cp311-cp311-macosx_10_9_x86_64.whl", hash = "sha256:bc9db8a3efb3e403e4ecc6cd9489ea2bac94244f80c78e27c31dcc00d2790ac2"},
{file = "kiwisolver-1.4.4-cp311-cp311-macosx_11_0_arm64.whl", hash = "sha256:d5b61785a9ce44e5a4b880272baa7cf6c8f48a5180c3e81c59553ba0cb0821ca"},
{file = "kiwisolver-1.4.4-cp311-cp311-manylinux_2_12_i686.manylinux2010_i686.manylinux_2_17_i686.manylinux2014_i686.whl", hash = "sha256:c2dbb44c3f7e6c4d3487b31037b1bdbf424d97687c1747ce4ff2895795c9bf69"},
{file = "kiwisolver-1.4.4-cp311-cp311-manylinux_2_17_aarch64.manylinux2014_aarch64.whl", hash = "sha256:6295ecd49304dcf3bfbfa45d9a081c96509e95f4b9d0eb7ee4ec0530c4a96514"},
{file = "kiwisolver-1.4.4-cp311-cp311-manylinux_2_17_ppc64le.manylinux2014_ppc64le.whl", hash = "sha256:4bd472dbe5e136f96a4b18f295d159d7f26fd399136f5b17b08c4e5f498cd494"},
{file = "kiwisolver-1.4.4-cp311-cp311-manylinux_2_17_s390x.manylinux2014_s390x.whl", hash = "sha256:bf7d9fce9bcc4752ca4a1b80aabd38f6d19009ea5cbda0e0856983cf6d0023f5"},
{file = "kiwisolver-1.4.4-cp311-cp311-manylinux_2_17_x86_64.manylinux2014_x86_64.whl", hash = "sha256:78d6601aed50c74e0ef02f4204da1816147a6d3fbdc8b3872d263338a9052c51"},
{file = "kiwisolver-1.4.4-cp311-cp311-musllinux_1_1_aarch64.whl", hash = "sha256:877272cf6b4b7e94c9614f9b10140e198d2186363728ed0f701c6eee1baec1da"},
{file = "kiwisolver-1.4.4-cp311-cp311-musllinux_1_1_i686.whl", hash = "sha256:db608a6757adabb32f1cfe6066e39b3706d8c3aa69bbc353a5b61edad36a5cb4"},
{file = "kiwisolver-1.4.4-cp311-cp311-musllinux_1_1_ppc64le.whl", hash = "sha256:5853eb494c71e267912275e5586fe281444eb5e722de4e131cddf9d442615626"},
{file = "kiwisolver-1.4.4-cp311-cp311-musllinux_1_1_s390x.whl", hash = "sha256:f0a1dbdb5ecbef0d34eb77e56fcb3e95bbd7e50835d9782a45df81cc46949750"},
{file = "kiwisolver-1.4.4-cp311-cp311-musllinux_1_1_x86_64.whl", hash = "sha256:283dffbf061a4ec60391d51e6155e372a1f7a4f5b15d59c8505339454f8989e4"},
{file = "kiwisolver-1.4.4-cp311-cp311-win32.whl", hash = "sha256:d06adcfa62a4431d404c31216f0f8ac97397d799cd53800e9d3efc2fbb3cf14e"},
{file = "kiwisolver-1.4.4-cp311-cp311-win_amd64.whl", hash = "sha256:e7da3fec7408813a7cebc9e4ec55afed2d0fd65c4754bc376bf03498d4e92686"},
{file = "kiwisolver-1.4.4-cp37-cp37m-macosx_10_9_x86_64.whl", hash = "sha256:62ac9cc684da4cf1778d07a89bf5f81b35834cb96ca523d3a7fb32509380cbf6"},
{file = "kiwisolver-1.4.4-cp37-cp37m-manylinux_2_17_aarch64.manylinux2014_aarch64.whl", hash = "sha256:41dae968a94b1ef1897cb322b39360a0812661dba7c682aa45098eb8e193dbdf"},
{file = "kiwisolver-1.4.4-cp37-cp37m-manylinux_2_17_ppc64le.manylinux2014_ppc64le.whl", hash = "sha256:02f79693ec433cb4b5f51694e8477ae83b3205768a6fb48ffba60549080e295b"},
{file = "kiwisolver-1.4.4-cp37-cp37m-manylinux_2_17_s390x.manylinux2014_s390x.whl", hash = "sha256:d0611a0a2a518464c05ddd5a3a1a0e856ccc10e67079bb17f265ad19ab3c7597"},
{file = "kiwisolver-1.4.4-cp37-cp37m-manylinux_2_5_i686.manylinux1_i686.whl", hash = "sha256:db5283d90da4174865d520e7366801a93777201e91e79bacbac6e6927cbceede"},
{file = "kiwisolver-1.4.4-cp37-cp37m-manylinux_2_5_x86_64.manylinux1_x86_64.whl", hash = "sha256:1041feb4cda8708ce73bb4dcb9ce1ccf49d553bf87c3954bdfa46f0c3f77252c"},
{file = "kiwisolver-1.4.4-cp37-cp37m-win32.whl", hash = "sha256:a553dadda40fef6bfa1456dc4be49b113aa92c2a9a9e8711e955618cd69622e3"},
{file = "kiwisolver-1.4.4-cp37-cp37m-win_amd64.whl", hash = "sha256:03baab2d6b4a54ddbb43bba1a3a2d1627e82d205c5cf8f4c924dc49284b87166"},
{file = "kiwisolver-1.4.4-cp38-cp38-macosx_10_9_universal2.whl", hash = "sha256:841293b17ad704d70c578f1f0013c890e219952169ce8a24ebc063eecf775454"},
{file = "kiwisolver-1.4.4-cp38-cp38-macosx_10_9_x86_64.whl", hash = "sha256:f4f270de01dd3e129a72efad823da90cc4d6aafb64c410c9033aba70db9f1ff0"},
{file = "kiwisolver-1.4.4-cp38-cp38-macosx_11_0_arm64.whl", hash = "sha256:f9f39e2f049db33a908319cf46624a569b36983c7c78318e9726a4cb8923b26c"},
{file = "kiwisolver-1.4.4-cp38-cp38-manylinux_2_17_aarch64.manylinux2014_aarch64.whl", hash = "sha256:c97528e64cb9ebeff9701e7938653a9951922f2a38bd847787d4a8e498cc83ae"},
{file = "kiwisolver-1.4.4-cp38-cp38-manylinux_2_17_ppc64le.manylinux2014_ppc64le.whl", hash = "sha256:1d1573129aa0fd901076e2bfb4275a35f5b7aa60fbfb984499d661ec950320b0"},
{file = "kiwisolver-1.4.4-cp38-cp38-manylinux_2_17_s390x.manylinux2014_s390x.whl", hash = "sha256:ad881edc7ccb9d65b0224f4e4d05a1e85cf62d73aab798943df6d48ab0cd79a1"},
{file = "kiwisolver-1.4.4-cp38-cp38-manylinux_2_5_i686.manylinux1_i686.whl", hash = "sha256:b428ef021242344340460fa4c9185d0b1f66fbdbfecc6c63eff4b7c29fad429d"},
{file = "kiwisolver-1.4.4-cp38-cp38-manylinux_2_5_x86_64.manylinux1_x86_64.whl", hash = "sha256:2e407cb4bd5a13984a6c2c0fe1845e4e41e96f183e5e5cd4d77a857d9693494c"},
{file = "kiwisolver-1.4.4-cp38-cp38-win32.whl", hash = "sha256:75facbe9606748f43428fc91a43edb46c7ff68889b91fa31f53b58894503a191"},
{file = "kiwisolver-1.4.4-cp38-cp38-win_amd64.whl", hash = "sha256:5bce61af018b0cb2055e0e72e7d65290d822d3feee430b7b8203d8a855e78766"},
{file = "kiwisolver-1.4.4-cp39-cp39-macosx_10_9_universal2.whl", hash = "sha256:8c808594c88a025d4e322d5bb549282c93c8e1ba71b790f539567932722d7bd8"},
{file = "kiwisolver-1.4.4-cp39-cp39-macosx_10_9_x86_64.whl", hash = "sha256:f0a71d85ecdd570ded8ac3d1c0f480842f49a40beb423bb8014539a9f32a5897"},
{file = "kiwisolver-1.4.4-cp39-cp39-macosx_11_0_arm64.whl", hash = "sha256:b533558eae785e33e8c148a8d9921692a9fe5aa516efbdff8606e7d87b9d5824"},
{file = "kiwisolver-1.4.4-cp39-cp39-manylinux_2_12_i686.manylinux2010_i686.whl", hash = "sha256:efda5fc8cc1c61e4f639b8067d118e742b812c930f708e6667a5ce0d13499e29"},
{file = "kiwisolver-1.4.4-cp39-cp39-manylinux_2_12_x86_64.manylinux2010_x86_64.whl", hash = "sha256:7c43e1e1206cd421cd92e6b3280d4385d41d7166b3ed577ac20444b6995a445f"},
{file = "kiwisolver-1.4.4-cp39-cp39-manylinux_2_17_aarch64.manylinux2014_aarch64.whl", hash = "sha256:bc8d3bd6c72b2dd9decf16ce70e20abcb3274ba01b4e1c96031e0c4067d1e7cd"},
{file = "kiwisolver-1.4.4-cp39-cp39-manylinux_2_17_ppc64le.manylinux2014_ppc64le.whl", hash = "sha256:4ea39b0ccc4f5d803e3337dd46bcce60b702be4d86fd0b3d7531ef10fd99a1ac"},
{file = "kiwisolver-1.4.4-cp39-cp39-manylinux_2_17_s390x.manylinux2014_s390x.whl", hash = "sha256:968f44fdbf6dd757d12920d63b566eeb4d5b395fd2d00d29d7ef00a00582aac9"},
{file = "kiwisolver-1.4.4-cp39-cp39-win32.whl", hash = "sha256:da7e547706e69e45d95e116e6939488d62174e033b763ab1496b4c29b76fabea"},
{file = "kiwisolver-1.4.4-cp39-cp39-win_amd64.whl", hash = "sha256:ba59c92039ec0a66103b1d5fe588fa546373587a7d68f5c96f743c3396afc04b"},
{file = "kiwisolver-1.4.4-pp37-pypy37_pp73-manylinux_2_12_i686.manylinux2010_i686.whl", hash = "sha256:91672bacaa030f92fc2f43b620d7b337fd9a5af28b0d6ed3f77afc43c4a64b5a"},
{file = "kiwisolver-1.4.4-pp37-pypy37_pp73-manylinux_2_12_x86_64.manylinux2010_x86_64.whl", hash = "sha256:787518a6789009c159453da4d6b683f468ef7a65bbde796bcea803ccf191058d"},
{file = "kiwisolver-1.4.4-pp37-pypy37_pp73-manylinux_2_17_aarch64.manylinux2014_aarch64.whl", hash = "sha256:da152d8cdcab0e56e4f45eb08b9aea6455845ec83172092f09b0e077ece2cf7a"},
{file = "kiwisolver-1.4.4-pp37-pypy37_pp73-win_amd64.whl", hash = "sha256:ecb1fa0db7bf4cff9dac752abb19505a233c7f16684c5826d1f11ebd9472b871"},
{file = "kiwisolver-1.4.4-pp38-pypy38_pp73-macosx_10_9_x86_64.whl", hash = "sha256:28bc5b299f48150b5f822ce68624e445040595a4ac3d59251703779836eceff9"},
{file = "kiwisolver-1.4.4-pp38-pypy38_pp73-manylinux_2_12_i686.manylinux2010_i686.whl", hash = "sha256:81e38381b782cc7e1e46c4e14cd997ee6040768101aefc8fa3c24a4cc58e98f8"},
{file = "kiwisolver-1.4.4-pp38-pypy38_pp73-manylinux_2_12_x86_64.manylinux2010_x86_64.whl", hash = "sha256:2a66fdfb34e05b705620dd567f5a03f239a088d5a3f321e7b6ac3239d22aa286"},
{file = "kiwisolver-1.4.4-pp38-pypy38_pp73-manylinux_2_17_aarch64.manylinux2014_aarch64.whl", hash = "sha256:872b8ca05c40d309ed13eb2e582cab0c5a05e81e987ab9c521bf05ad1d5cf5cb"},
{file = "kiwisolver-1.4.4-pp38-pypy38_pp73-win_amd64.whl", hash = "sha256:70e7c2e7b750585569564e2e5ca9845acfaa5da56ac46df68414f29fea97be9f"},
{file = "kiwisolver-1.4.4-pp39-pypy39_pp73-macosx_10_9_x86_64.whl", hash = "sha256:9f85003f5dfa867e86d53fac6f7e6f30c045673fa27b603c397753bebadc3008"},
{file = "kiwisolver-1.4.4-pp39-pypy39_pp73-manylinux_2_12_i686.manylinux2010_i686.manylinux_2_17_i686.manylinux2014_i686.whl", hash = "sha256:2e307eb9bd99801f82789b44bb45e9f541961831c7311521b13a6c85afc09767"},
{file = "kiwisolver-1.4.4-pp39-pypy39_pp73-manylinux_2_17_aarch64.manylinux2014_aarch64.whl", hash = "sha256:b1792d939ec70abe76f5054d3f36ed5656021dcad1322d1cc996d4e54165cef9"},
{file = "kiwisolver-1.4.4-pp39-pypy39_pp73-manylinux_2_17_x86_64.manylinux2014_x86_64.whl", hash = "sha256:f6cb459eea32a4e2cf18ba5fcece2dbdf496384413bc1bae15583f19e567f3b2"},
{file = "kiwisolver-1.4.4-pp39-pypy39_pp73-win_amd64.whl", hash = "sha256:36dafec3d6d6088d34e2de6b85f9d8e2324eb734162fba59d2ba9ed7a2043d5b"},
{file = "kiwisolver-1.4.4.tar.gz", hash = "sha256:d41997519fcba4a1e46eb4a2fe31bc12f0ff957b2b81bac28db24744f333e955"},
]
langcodes = [
{file = "langcodes-3.3.0-py3-none-any.whl", hash = "sha256:4d89fc9acb6e9c8fdef70bcdf376113a3db09b67285d9e1d534de6d8818e7e69"},
{file = "langcodes-3.3.0.tar.gz", hash = "sha256:794d07d5a28781231ac335a1561b8442f8648ca07cd518310aeb45d6f0807ef6"},
]
libclang = [
{file = "libclang-14.0.6-py2.py3-none-macosx_10_9_x86_64.whl", hash = "sha256:8791cf3c3b087c373a6d61e9199da7a541da922c9ddcfed1122090586b996d6e"},
{file = "libclang-14.0.6-py2.py3-none-macosx_11_0_arm64.whl", hash = "sha256:7b06fc76bd1e67c8b04b5719bf2ac5d6a323b289b245dfa9e468561d99538188"},
{file = "libclang-14.0.6-py2.py3-none-manylinux1_x86_64.whl", hash = "sha256:e429853939423f276a25140b0b702442d7da9a09e001c05e48df888336947614"},
{file = "libclang-14.0.6-py2.py3-none-manylinux2010_x86_64.whl", hash = "sha256:206d2789e4450a37d054e63b70451a6fc1873466397443fa13de2b3d4adb2796"},
{file = "libclang-14.0.6-py2.py3-none-manylinux2014_aarch64.whl", hash = "sha256:e2add1703129b2abe066fb1890afa880870a89fd6ab4ec5d2a7a8dc8d271677e"},
{file = "libclang-14.0.6-py2.py3-none-manylinux2014_armv7l.whl", hash = "sha256:5dd3c6fca1b007d308a4114afa8e4e9d32f32b2572520701d45fcc626ac5cd6c"},
{file = "libclang-14.0.6-py2.py3-none-musllinux_1_2_x86_64.whl", hash = "sha256:cfb0e892ebb5dff6bd498ab5778adb8581f26a00fd8347b3c76c989fe2fd04f7"},
{file = "libclang-14.0.6-py2.py3-none-win_amd64.whl", hash = "sha256:ea03c12675151837660cdd5dce65bd89320896ac3421efef43a36678f113ce95"},
{file = "libclang-14.0.6-py2.py3-none-win_arm64.whl", hash = "sha256:2e4303e04517fcd11173cb2e51a7070eed71e16ef45d4e26a82c5e881cac3d27"},
{file = "libclang-14.0.6.tar.gz", hash = "sha256:9052a8284d8846984f6fa826b1d7460a66d3b23a486d782633b42b6e3b418789"},
]
lightgbm = [
{file = "lightgbm-3.3.3-py3-none-macosx_10_15_x86_64.macosx_11_6_x86_64.macosx_12_0_x86_64.whl", hash = "sha256:27b0ae82549d6c59ede4fa3245f4b21a6bf71ab5ec5c55601cf5a962a18c6f80"},
{file = "lightgbm-3.3.3-py3-none-manylinux1_x86_64.whl", hash = "sha256:389edda68b7f24a1755a6af4dad06e16236e374e9de64253a105b12982b153e2"},
{file = "lightgbm-3.3.3-py3-none-manylinux2014_aarch64.whl", hash = "sha256:b0af55bd476785726eaacbd3c880f8168d362d4bba098790f55cd10fe928591b"},
{file = "lightgbm-3.3.3-py3-none-win_amd64.whl", hash = "sha256:b334dbcd670e3d87f4ff3cfe31d652ab18eb88ad9092a02010916320549b7d10"},
{file = "lightgbm-3.3.3.tar.gz", hash = "sha256:857e559ae84a22963ce2b62168292969d21add30bc9246a84d4e7eedae67966d"},
]
llvmlite = [
{file = "llvmlite-0.36.0-cp36-cp36m-macosx_10_9_x86_64.whl", hash = "sha256:cc0f9b9644b4ab0e4a5edb17f1531d791630c88858220d3cc688d6edf10da100"},
{file = "llvmlite-0.36.0-cp36-cp36m-manylinux2010_i686.whl", hash = "sha256:f7918dbac02b1ebbfd7302ad8e8307d7877ab57d782d5f04b70ff9696b53c21b"},
{file = "llvmlite-0.36.0-cp36-cp36m-manylinux2010_x86_64.whl", hash = "sha256:7768658646c418b9b3beccb7044277a608bc8c62b82a85e73c7e5c065e4157c2"},
{file = "llvmlite-0.36.0-cp36-cp36m-win32.whl", hash = "sha256:05f807209a360d39526d98141b6f281b9c7c771c77a4d1fc22002440642c8de2"},
{file = "llvmlite-0.36.0-cp36-cp36m-win_amd64.whl", hash = "sha256:d1fdd63c371626c25ad834e1c6297eb76cf2f093a40dbb401a87b6476ab4e34e"},
{file = "llvmlite-0.36.0-cp37-cp37m-macosx_10_9_x86_64.whl", hash = "sha256:7c4e7066447305d5095d0b0a9cae7b835d2f0fde143456b3124110eab0856426"},
{file = "llvmlite-0.36.0-cp37-cp37m-manylinux2010_i686.whl", hash = "sha256:9dad7e4bb042492914292aea3f4172eca84db731f9478250240955aedba95e08"},
{file = "llvmlite-0.36.0-cp37-cp37m-manylinux2010_x86_64.whl", hash = "sha256:1ce5bc0a638d874a08d4222be0a7e48e5df305d094c2ff8dec525ef32b581551"},
{file = "llvmlite-0.36.0-cp37-cp37m-win32.whl", hash = "sha256:dbedff0f6d417b374253a6bab39aa4b5364f1caab30c06ba8726904776fcf1cb"},
{file = "llvmlite-0.36.0-cp37-cp37m-win_amd64.whl", hash = "sha256:3b17fc4b0dd17bd29d7297d054e2915fad535889907c3f65232ee21f483447c5"},
{file = "llvmlite-0.36.0-cp38-cp38-macosx_10_9_x86_64.whl", hash = "sha256:b3a77e46e6053e2a86e607e87b97651dda81e619febb914824a927bff4e88737"},
{file = "llvmlite-0.36.0-cp38-cp38-manylinux2010_i686.whl", hash = "sha256:048a7c117641c9be87b90005684e64a6f33ea0897ebab1df8a01214a10d6e79a"},
{file = "llvmlite-0.36.0-cp38-cp38-manylinux2010_x86_64.whl", hash = "sha256:7db4b0eef93125af1c4092c64a3c73c7dc904101117ef53f8d78a1a499b8d5f4"},
{file = "llvmlite-0.36.0-cp38-cp38-win32.whl", hash = "sha256:50b1828bde514b31431b2bba1aa20b387f5625b81ad6e12fede430a04645e47a"},
{file = "llvmlite-0.36.0-cp38-cp38-win_amd64.whl", hash = "sha256:f608bae781b2d343e15e080c546468c5a6f35f57f0446923ea198dd21f23757e"},
{file = "llvmlite-0.36.0-cp39-cp39-macosx_10_9_x86_64.whl", hash = "sha256:6a3abc8a8889aeb06bf9c4a7e5df5bc7bb1aa0aedd91a599813809abeec80b5a"},
{file = "llvmlite-0.36.0-cp39-cp39-manylinux2010_i686.whl", hash = "sha256:705f0323d931684428bb3451549603299bb5e17dd60fb979d67c3807de0debc1"},
{file = "llvmlite-0.36.0-cp39-cp39-manylinux2010_x86_64.whl", hash = "sha256:5a6548b4899facb182145147185e9166c69826fb424895f227e6b7cf924a8da1"},
{file = "llvmlite-0.36.0-cp39-cp39-win32.whl", hash = "sha256:ff52fb9c2be66b95b0e67d56fce11038397e5be1ea410ee53f5f1175fdbb107a"},
{file = "llvmlite-0.36.0-cp39-cp39-win_amd64.whl", hash = "sha256:1dee416ea49fd338c74ec15c0c013e5273b0961528169af06ff90772614f7f6c"},
{file = "llvmlite-0.36.0.tar.gz", hash = "sha256:765128fdf5f149ed0b889ffbe2b05eb1717f8e20a5c87fa2b4018fbcce0fcfc9"},
]
locket = [
{file = "locket-1.0.0-py2.py3-none-any.whl", hash = "sha256:b6c819a722f7b6bd955b80781788e4a66a55628b858d347536b7e81325a3a5e3"},
{file = "locket-1.0.0.tar.gz", hash = "sha256:5c0d4c052a8bbbf750e056a8e65ccd309086f4f0f18a2eac306a8dfa4112a632"},
]
Markdown = [
{file = "Markdown-3.4.1-py3-none-any.whl", hash = "sha256:08fb8465cffd03d10b9dd34a5c3fea908e20391a2a90b88d66362cb05beed186"},
{file = "Markdown-3.4.1.tar.gz", hash = "sha256:3b809086bb6efad416156e00a0da66fe47618a5d6918dd688f53f40c8e4cfeff"},
]
MarkupSafe = [
{file = "MarkupSafe-2.1.1-cp310-cp310-macosx_10_9_universal2.whl", hash = "sha256:86b1f75c4e7c2ac2ccdaec2b9022845dbb81880ca318bb7a0a01fbf7813e3812"},
{file = "MarkupSafe-2.1.1-cp310-cp310-macosx_10_9_x86_64.whl", hash = "sha256:f121a1420d4e173a5d96e47e9a0c0dcff965afdf1626d28de1460815f7c4ee7a"},
{file = "MarkupSafe-2.1.1-cp310-cp310-manylinux_2_17_aarch64.manylinux2014_aarch64.whl", hash = "sha256:a49907dd8420c5685cfa064a1335b6754b74541bbb3706c259c02ed65b644b3e"},
{file = "MarkupSafe-2.1.1-cp310-cp310-manylinux_2_17_x86_64.manylinux2014_x86_64.whl", hash = "sha256:10c1bfff05d95783da83491be968e8fe789263689c02724e0c691933c52994f5"},
{file = "MarkupSafe-2.1.1-cp310-cp310-manylinux_2_5_i686.manylinux1_i686.manylinux_2_17_i686.manylinux2014_i686.whl", hash = "sha256:b7bd98b796e2b6553da7225aeb61f447f80a1ca64f41d83612e6139ca5213aa4"},
{file = "MarkupSafe-2.1.1-cp310-cp310-musllinux_1_1_aarch64.whl", hash = "sha256:b09bf97215625a311f669476f44b8b318b075847b49316d3e28c08e41a7a573f"},
{file = "MarkupSafe-2.1.1-cp310-cp310-musllinux_1_1_i686.whl", hash = "sha256:694deca8d702d5db21ec83983ce0bb4b26a578e71fbdbd4fdcd387daa90e4d5e"},
{file = "MarkupSafe-2.1.1-cp310-cp310-musllinux_1_1_x86_64.whl", hash = "sha256:efc1913fd2ca4f334418481c7e595c00aad186563bbc1ec76067848c7ca0a933"},
{file = "MarkupSafe-2.1.1-cp310-cp310-win32.whl", hash = "sha256:4a33dea2b688b3190ee12bd7cfa29d39c9ed176bda40bfa11099a3ce5d3a7ac6"},
{file = "MarkupSafe-2.1.1-cp310-cp310-win_amd64.whl", hash = "sha256:dda30ba7e87fbbb7eab1ec9f58678558fd9a6b8b853530e176eabd064da81417"},
{file = "MarkupSafe-2.1.1-cp37-cp37m-macosx_10_9_x86_64.whl", hash = "sha256:671cd1187ed5e62818414afe79ed29da836dde67166a9fac6d435873c44fdd02"},
{file = "MarkupSafe-2.1.1-cp37-cp37m-manylinux_2_17_aarch64.manylinux2014_aarch64.whl", hash = "sha256:3799351e2336dc91ea70b034983ee71cf2f9533cdff7c14c90ea126bfd95d65a"},
{file = "MarkupSafe-2.1.1-cp37-cp37m-manylinux_2_17_x86_64.manylinux2014_x86_64.whl", hash = "sha256:e72591e9ecd94d7feb70c1cbd7be7b3ebea3f548870aa91e2732960fa4d57a37"},
{file = "MarkupSafe-2.1.1-cp37-cp37m-manylinux_2_5_i686.manylinux1_i686.manylinux_2_17_i686.manylinux2014_i686.whl", hash = "sha256:6fbf47b5d3728c6aea2abb0589b5d30459e369baa772e0f37a0320185e87c980"},
{file = "MarkupSafe-2.1.1-cp37-cp37m-musllinux_1_1_aarch64.whl", hash = "sha256:d5ee4f386140395a2c818d149221149c54849dfcfcb9f1debfe07a8b8bd63f9a"},
{file = "MarkupSafe-2.1.1-cp37-cp37m-musllinux_1_1_i686.whl", hash = "sha256:bcb3ed405ed3222f9904899563d6fc492ff75cce56cba05e32eff40e6acbeaa3"},
{file = "MarkupSafe-2.1.1-cp37-cp37m-musllinux_1_1_x86_64.whl", hash = "sha256:e1c0b87e09fa55a220f058d1d49d3fb8df88fbfab58558f1198e08c1e1de842a"},
{file = "MarkupSafe-2.1.1-cp37-cp37m-win32.whl", hash = "sha256:8dc1c72a69aa7e082593c4a203dcf94ddb74bb5c8a731e4e1eb68d031e8498ff"},
{file = "MarkupSafe-2.1.1-cp37-cp37m-win_amd64.whl", hash = "sha256:97a68e6ada378df82bc9f16b800ab77cbf4b2fada0081794318520138c088e4a"},
{file = "MarkupSafe-2.1.1-cp38-cp38-macosx_10_9_universal2.whl", hash = "sha256:e8c843bbcda3a2f1e3c2ab25913c80a3c5376cd00c6e8c4a86a89a28c8dc5452"},
{file = "MarkupSafe-2.1.1-cp38-cp38-macosx_10_9_x86_64.whl", hash = "sha256:0212a68688482dc52b2d45013df70d169f542b7394fc744c02a57374a4207003"},
{file = "MarkupSafe-2.1.1-cp38-cp38-manylinux_2_17_aarch64.manylinux2014_aarch64.whl", hash = "sha256:8e576a51ad59e4bfaac456023a78f6b5e6e7651dcd383bcc3e18d06f9b55d6d1"},
{file = "MarkupSafe-2.1.1-cp38-cp38-manylinux_2_17_x86_64.manylinux2014_x86_64.whl", hash = "sha256:4b9fe39a2ccc108a4accc2676e77da025ce383c108593d65cc909add5c3bd601"},
{file = "MarkupSafe-2.1.1-cp38-cp38-manylinux_2_5_i686.manylinux1_i686.manylinux_2_17_i686.manylinux2014_i686.whl", hash = "sha256:96e37a3dc86e80bf81758c152fe66dbf60ed5eca3d26305edf01892257049925"},
{file = "MarkupSafe-2.1.1-cp38-cp38-musllinux_1_1_aarch64.whl", hash = "sha256:6d0072fea50feec76a4c418096652f2c3238eaa014b2f94aeb1d56a66b41403f"},
{file = "MarkupSafe-2.1.1-cp38-cp38-musllinux_1_1_i686.whl", hash = "sha256:089cf3dbf0cd6c100f02945abeb18484bd1ee57a079aefd52cffd17fba910b88"},
{file = "MarkupSafe-2.1.1-cp38-cp38-musllinux_1_1_x86_64.whl", hash = "sha256:6a074d34ee7a5ce3effbc526b7083ec9731bb3cbf921bbe1d3005d4d2bdb3a63"},
{file = "MarkupSafe-2.1.1-cp38-cp38-win32.whl", hash = "sha256:421be9fbf0ffe9ffd7a378aafebbf6f4602d564d34be190fc19a193232fd12b1"},
{file = "MarkupSafe-2.1.1-cp38-cp38-win_amd64.whl", hash = "sha256:fc7b548b17d238737688817ab67deebb30e8073c95749d55538ed473130ec0c7"},
{file = "MarkupSafe-2.1.1-cp39-cp39-macosx_10_9_universal2.whl", hash = "sha256:e04e26803c9c3851c931eac40c695602c6295b8d432cbe78609649ad9bd2da8a"},
{file = "MarkupSafe-2.1.1-cp39-cp39-macosx_10_9_x86_64.whl", hash = "sha256:b87db4360013327109564f0e591bd2a3b318547bcef31b468a92ee504d07ae4f"},
{file = "MarkupSafe-2.1.1-cp39-cp39-manylinux_2_17_aarch64.manylinux2014_aarch64.whl", hash = "sha256:99a2a507ed3ac881b975a2976d59f38c19386d128e7a9a18b7df6fff1fd4c1d6"},
{file = "MarkupSafe-2.1.1-cp39-cp39-manylinux_2_17_x86_64.manylinux2014_x86_64.whl", hash = "sha256:56442863ed2b06d19c37f94d999035e15ee982988920e12a5b4ba29b62ad1f77"},
{file = "MarkupSafe-2.1.1-cp39-cp39-manylinux_2_5_i686.manylinux1_i686.manylinux_2_17_i686.manylinux2014_i686.whl", hash = "sha256:3ce11ee3f23f79dbd06fb3d63e2f6af7b12db1d46932fe7bd8afa259a5996603"},
{file = "MarkupSafe-2.1.1-cp39-cp39-musllinux_1_1_aarch64.whl", hash = "sha256:33b74d289bd2f5e527beadcaa3f401e0df0a89927c1559c8566c066fa4248ab7"},
{file = "MarkupSafe-2.1.1-cp39-cp39-musllinux_1_1_i686.whl", hash = "sha256:43093fb83d8343aac0b1baa75516da6092f58f41200907ef92448ecab8825135"},
{file = "MarkupSafe-2.1.1-cp39-cp39-musllinux_1_1_x86_64.whl", hash = "sha256:8e3dcf21f367459434c18e71b2a9532d96547aef8a871872a5bd69a715c15f96"},
{file = "MarkupSafe-2.1.1-cp39-cp39-win32.whl", hash = "sha256:d4306c36ca495956b6d568d276ac11fdd9c30a36f1b6eb928070dc5360b22e1c"},
{file = "MarkupSafe-2.1.1-cp39-cp39-win_amd64.whl", hash = "sha256:46d00d6cfecdde84d40e572d63735ef81423ad31184100411e6e3388d405e247"},
{file = "MarkupSafe-2.1.1.tar.gz", hash = "sha256:7f91197cc9e48f989d12e4e6fbc46495c446636dfc81b9ccf50bb0ec74b91d4b"},
]
matplotlib = [
{file = "matplotlib-3.6.2-cp310-cp310-macosx_10_12_universal2.whl", hash = "sha256:8d0068e40837c1d0df6e3abf1cdc9a34a6d2611d90e29610fa1d2455aeb4e2e5"},
{file = "matplotlib-3.6.2-cp310-cp310-macosx_10_12_x86_64.whl", hash = "sha256:252957e208c23db72ca9918cb33e160c7833faebf295aaedb43f5b083832a267"},
{file = "matplotlib-3.6.2-cp310-cp310-macosx_11_0_arm64.whl", hash = "sha256:d50e8c1e571ee39b5dfbc295c11ad65988879f68009dd281a6e1edbc2ff6c18c"},
{file = "matplotlib-3.6.2-cp310-cp310-manylinux_2_17_aarch64.manylinux2014_aarch64.whl", hash = "sha256:d840adcad7354be6f2ec28d0706528b0026e4c3934cc6566b84eac18633eab1b"},
{file = "matplotlib-3.6.2-cp310-cp310-manylinux_2_17_i686.manylinux2014_i686.whl", hash = "sha256:78ec3c3412cf277e6252764ee4acbdbec6920cc87ad65862272aaa0e24381eee"},
{file = "matplotlib-3.6.2-cp310-cp310-manylinux_2_17_x86_64.manylinux2014_x86_64.whl", hash = "sha256:9347cc6822f38db2b1d1ce992f375289670e595a2d1c15961aacbe0977407dfc"},
{file = "matplotlib-3.6.2-cp310-cp310-win32.whl", hash = "sha256:e0bbee6c2a5bf2a0017a9b5e397babb88f230e6f07c3cdff4a4c4bc75ed7c617"},
{file = "matplotlib-3.6.2-cp310-cp310-win_amd64.whl", hash = "sha256:8a0ae37576ed444fe853709bdceb2be4c7df6f7acae17b8378765bd28e61b3ae"},
{file = "matplotlib-3.6.2-cp311-cp311-macosx_10_12_universal2.whl", hash = "sha256:5ecfc6559132116dedfc482d0ad9df8a89dc5909eebffd22f3deb684132d002f"},
{file = "matplotlib-3.6.2-cp311-cp311-macosx_10_12_x86_64.whl", hash = "sha256:9f335e5625feb90e323d7e3868ec337f7b9ad88b5d633f876e3b778813021dab"},
{file = "matplotlib-3.6.2-cp311-cp311-macosx_11_0_arm64.whl", hash = "sha256:b2604c6450f9dd2c42e223b1f5dca9643a23cfecc9fde4a94bb38e0d2693b136"},
{file = "matplotlib-3.6.2-cp311-cp311-manylinux_2_17_aarch64.manylinux2014_aarch64.whl", hash = "sha256:e5afe0a7ea0e3a7a257907060bee6724a6002b7eec55d0db16fd32409795f3e1"},
{file = "matplotlib-3.6.2-cp311-cp311-manylinux_2_17_i686.manylinux2014_i686.whl", hash = "sha256:ca0e7a658fbafcddcaefaa07ba8dae9384be2343468a8e011061791588d839fa"},
{file = "matplotlib-3.6.2-cp311-cp311-manylinux_2_17_x86_64.manylinux2014_x86_64.whl", hash = "sha256:32d29c8c26362169c80c5718ce367e8c64f4dd068a424e7110df1dd2ed7bd428"},
{file = "matplotlib-3.6.2-cp311-cp311-win32.whl", hash = "sha256:5024b8ed83d7f8809982d095d8ab0b179bebc07616a9713f86d30cf4944acb73"},
{file = "matplotlib-3.6.2-cp311-cp311-win_amd64.whl", hash = "sha256:52c2bdd7cd0bf9d5ccdf9c1816568fd4ccd51a4d82419cc5480f548981b47dd0"},
{file = "matplotlib-3.6.2-cp38-cp38-macosx_10_12_universal2.whl", hash = "sha256:8a8dbe2cb7f33ff54b16bb5c500673502a35f18ac1ed48625e997d40c922f9cc"},
{file = "matplotlib-3.6.2-cp38-cp38-macosx_10_12_x86_64.whl", hash = "sha256:380d48c15ec41102a2b70858ab1dedfa33eb77b2c0982cb65a200ae67a48e9cb"},
{file = "matplotlib-3.6.2-cp38-cp38-macosx_11_0_arm64.whl", hash = "sha256:0844523dfaaff566e39dbfa74e6f6dc42e92f7a365ce80929c5030b84caa563a"},
{file = "matplotlib-3.6.2-cp38-cp38-manylinux_2_12_i686.manylinux2010_i686.whl", hash = "sha256:7f716b6af94dc1b6b97c46401774472f0867e44595990fe80a8ba390f7a0a028"},
{file = "matplotlib-3.6.2-cp38-cp38-manylinux_2_12_x86_64.manylinux2010_x86_64.whl", hash = "sha256:74153008bd24366cf099d1f1e83808d179d618c4e32edb0d489d526523a94d9f"},
{file = "matplotlib-3.6.2-cp38-cp38-manylinux_2_17_aarch64.manylinux2014_aarch64.whl", hash = "sha256:f41e57ad63d336fe50d3a67bb8eaa26c09f6dda6a59f76777a99b8ccd8e26aec"},
{file = "matplotlib-3.6.2-cp38-cp38-win32.whl", hash = "sha256:d0e9ac04065a814d4cf2c6791a2ad563f739ae3ae830d716d54245c2b96fead6"},
{file = "matplotlib-3.6.2-cp38-cp38-win_amd64.whl", hash = "sha256:8a9d899953c722b9afd7e88dbefd8fb276c686c3116a43c577cfabf636180558"},
{file = "matplotlib-3.6.2-cp39-cp39-macosx_10_12_universal2.whl", hash = "sha256:f04f97797df35e442ed09f529ad1235d1f1c0f30878e2fe09a2676b71a8801e0"},
{file = "matplotlib-3.6.2-cp39-cp39-macosx_10_12_x86_64.whl", hash = "sha256:3964934731fd7a289a91d315919cf757f293969a4244941ab10513d2351b4e83"},
{file = "matplotlib-3.6.2-cp39-cp39-macosx_11_0_arm64.whl", hash = "sha256:168093410b99f647ba61361b208f7b0d64dde1172b5b1796d765cd243cadb501"},
{file = "matplotlib-3.6.2-cp39-cp39-manylinux_2_17_aarch64.manylinux2014_aarch64.whl", hash = "sha256:5e16dcaecffd55b955aa5e2b8a804379789c15987e8ebd2f32f01398a81e975b"},
{file = "matplotlib-3.6.2-cp39-cp39-manylinux_2_17_i686.manylinux2014_i686.whl", hash = "sha256:83dc89c5fd728fdb03b76f122f43b4dcee8c61f1489e232d9ad0f58020523e1c"},
{file = "matplotlib-3.6.2-cp39-cp39-manylinux_2_17_x86_64.manylinux2014_x86_64.whl", hash = "sha256:795ad83940732b45d39b82571f87af0081c120feff2b12e748d96bb191169e33"},
{file = "matplotlib-3.6.2-cp39-cp39-win32.whl", hash = "sha256:19d61ee6414c44a04addbe33005ab1f87539d9f395e25afcbe9a3c50ce77c65c"},
{file = "matplotlib-3.6.2-cp39-cp39-win_amd64.whl", hash = "sha256:5ba73aa3aca35d2981e0b31230d58abb7b5d7ca104e543ae49709208d8ce706a"},
{file = "matplotlib-3.6.2-pp38-pypy38_pp73-macosx_10_12_x86_64.whl", hash = "sha256:1836f366272b1557a613f8265db220eb8dd883202bbbabe01bad5a4eadfd0c95"},
{file = "matplotlib-3.6.2-pp38-pypy38_pp73-manylinux_2_17_i686.manylinux2014_i686.whl", hash = "sha256:0eda9d1b43f265da91fb9ae10d6922b5a986e2234470a524e6b18f14095b20d2"},
{file = "matplotlib-3.6.2-pp38-pypy38_pp73-manylinux_2_17_x86_64.manylinux2014_x86_64.whl", hash = "sha256:ec9be0f4826cdb3a3a517509dcc5f87f370251b76362051ab59e42b6b765f8c4"},
{file = "matplotlib-3.6.2-pp38-pypy38_pp73-win_amd64.whl", hash = "sha256:3cef89888a466228fc4e4b2954e740ce8e9afde7c4315fdd18caa1b8de58ca17"},
{file = "matplotlib-3.6.2-pp39-pypy39_pp73-macosx_10_12_x86_64.whl", hash = "sha256:54fa9fe27f5466b86126ff38123261188bed568c1019e4716af01f97a12fe812"},
{file = "matplotlib-3.6.2-pp39-pypy39_pp73-manylinux_2_17_i686.manylinux2014_i686.whl", hash = "sha256:e68be81cd8c22b029924b6d0ee814c337c0e706b8d88495a617319e5dd5441c3"},
{file = "matplotlib-3.6.2-pp39-pypy39_pp73-manylinux_2_17_x86_64.manylinux2014_x86_64.whl", hash = "sha256:b0ca2c60d3966dfd6608f5f8c49b8a0fcf76de6654f2eda55fc6ef038d5a6f27"},
{file = "matplotlib-3.6.2-pp39-pypy39_pp73-win_amd64.whl", hash = "sha256:4426c74761790bff46e3d906c14c7aab727543293eed5a924300a952e1a3a3c1"},
{file = "matplotlib-3.6.2.tar.gz", hash = "sha256:b03fd10a1709d0101c054883b550f7c4c5e974f751e2680318759af005964990"},
]
matplotlib-inline = [
{file = "matplotlib-inline-0.1.6.tar.gz", hash = "sha256:f887e5f10ba98e8d2b150ddcf4702c1e5f8b3a20005eb0f74bfdbd360ee6f304"},
{file = "matplotlib_inline-0.1.6-py3-none-any.whl", hash = "sha256:f1f41aab5328aa5aaea9b16d083b128102f8712542f819fe7e6a420ff581b311"},
]
mccabe = [
{file = "mccabe-0.6.1-py2.py3-none-any.whl", hash = "sha256:ab8a6258860da4b6677da4bd2fe5dc2c659cff31b3ee4f7f5d64e79735b80d42"},
{file = "mccabe-0.6.1.tar.gz", hash = "sha256:dd8d182285a0fe56bace7f45b5e7d1a6ebcbf524e8f3bd87eb0f125271b8831f"},
]
mistune = [
{file = "mistune-2.0.4-py2.py3-none-any.whl", hash = "sha256:182cc5ee6f8ed1b807de6b7bb50155df7b66495412836b9a74c8fbdfc75fe36d"},
{file = "mistune-2.0.4.tar.gz", hash = "sha256:9ee0a66053e2267aba772c71e06891fa8f1af6d4b01d5e84e267b4570d4d9808"},
]
mpmath = [
{file = "mpmath-1.2.1-py3-none-any.whl", hash = "sha256:604bc21bd22d2322a177c73bdb573994ef76e62edd595d17e00aff24b0667e5c"},
{file = "mpmath-1.2.1.tar.gz", hash = "sha256:79ffb45cf9f4b101a807595bcb3e72e0396202e0b1d25d689134b48c4216a81a"},
]
msgpack = [
{file = "msgpack-1.0.4-cp310-cp310-macosx_10_9_universal2.whl", hash = "sha256:4ab251d229d10498e9a2f3b1e68ef64cb393394ec477e3370c457f9430ce9250"},
{file = "msgpack-1.0.4-cp310-cp310-macosx_10_9_x86_64.whl", hash = "sha256:112b0f93202d7c0fef0b7810d465fde23c746a2d482e1e2de2aafd2ce1492c88"},
{file = "msgpack-1.0.4-cp310-cp310-macosx_11_0_arm64.whl", hash = "sha256:002b5c72b6cd9b4bafd790f364b8480e859b4712e91f43014fe01e4f957b8467"},
{file = "msgpack-1.0.4-cp310-cp310-manylinux_2_17_aarch64.manylinux2014_aarch64.whl", hash = "sha256:35bc0faa494b0f1d851fd29129b2575b2e26d41d177caacd4206d81502d4c6a6"},
{file = "msgpack-1.0.4-cp310-cp310-manylinux_2_17_x86_64.manylinux2014_x86_64.whl", hash = "sha256:4733359808c56d5d7756628736061c432ded018e7a1dff2d35a02439043321aa"},
{file = "msgpack-1.0.4-cp310-cp310-manylinux_2_5_i686.manylinux1_i686.manylinux_2_17_i686.manylinux2014_i686.whl", hash = "sha256:eb514ad14edf07a1dbe63761fd30f89ae79b42625731e1ccf5e1f1092950eaa6"},
{file = "msgpack-1.0.4-cp310-cp310-musllinux_1_1_aarch64.whl", hash = "sha256:c23080fdeec4716aede32b4e0ef7e213c7b1093eede9ee010949f2a418ced6ba"},
{file = "msgpack-1.0.4-cp310-cp310-musllinux_1_1_i686.whl", hash = "sha256:49565b0e3d7896d9ea71d9095df15b7f75a035c49be733051c34762ca95bbf7e"},
{file = "msgpack-1.0.4-cp310-cp310-musllinux_1_1_x86_64.whl", hash = "sha256:aca0f1644d6b5a73eb3e74d4d64d5d8c6c3d577e753a04c9e9c87d07692c58db"},
{file = "msgpack-1.0.4-cp310-cp310-win32.whl", hash = "sha256:0dfe3947db5fb9ce52aaea6ca28112a170db9eae75adf9339a1aec434dc954ef"},
{file = "msgpack-1.0.4-cp310-cp310-win_amd64.whl", hash = "sha256:4dea20515f660aa6b7e964433b1808d098dcfcabbebeaaad240d11f909298075"},
{file = "msgpack-1.0.4-cp36-cp36m-macosx_10_9_x86_64.whl", hash = "sha256:e83f80a7fec1a62cf4e6c9a660e39c7f878f603737a0cdac8c13131d11d97f52"},
{file = "msgpack-1.0.4-cp36-cp36m-manylinux_2_17_aarch64.manylinux2014_aarch64.whl", hash = "sha256:3c11a48cf5e59026ad7cb0dc29e29a01b5a66a3e333dc11c04f7e991fc5510a9"},
{file = "msgpack-1.0.4-cp36-cp36m-manylinux_2_17_x86_64.manylinux2014_x86_64.whl", hash = "sha256:1276e8f34e139aeff1c77a3cefb295598b504ac5314d32c8c3d54d24fadb94c9"},
{file = "msgpack-1.0.4-cp36-cp36m-manylinux_2_5_i686.manylinux1_i686.manylinux_2_17_i686.manylinux2014_i686.whl", hash = "sha256:6c9566f2c39ccced0a38d37c26cc3570983b97833c365a6044edef3574a00c08"},
{file = "msgpack-1.0.4-cp36-cp36m-musllinux_1_1_aarch64.whl", hash = "sha256:fcb8a47f43acc113e24e910399376f7277cf8508b27e5b88499f053de6b115a8"},
{file = "msgpack-1.0.4-cp36-cp36m-musllinux_1_1_i686.whl", hash = "sha256:76ee788122de3a68a02ed6f3a16bbcd97bc7c2e39bd4d94be2f1821e7c4a64e6"},
{file = "msgpack-1.0.4-cp36-cp36m-musllinux_1_1_x86_64.whl", hash = "sha256:0a68d3ac0104e2d3510de90a1091720157c319ceeb90d74f7b5295a6bee51bae"},
{file = "msgpack-1.0.4-cp36-cp36m-win32.whl", hash = "sha256:85f279d88d8e833ec015650fd15ae5eddce0791e1e8a59165318f371158efec6"},
{file = "msgpack-1.0.4-cp36-cp36m-win_amd64.whl", hash = "sha256:c1683841cd4fa45ac427c18854c3ec3cd9b681694caf5bff04edb9387602d661"},
{file = "msgpack-1.0.4-cp37-cp37m-macosx_10_9_x86_64.whl", hash = "sha256:a75dfb03f8b06f4ab093dafe3ddcc2d633259e6c3f74bb1b01996f5d8aa5868c"},
{file = "msgpack-1.0.4-cp37-cp37m-manylinux_2_17_aarch64.manylinux2014_aarch64.whl", hash = "sha256:9667bdfdf523c40d2511f0e98a6c9d3603be6b371ae9a238b7ef2dc4e7a427b0"},
{file = "msgpack-1.0.4-cp37-cp37m-manylinux_2_17_x86_64.manylinux2014_x86_64.whl", hash = "sha256:11184bc7e56fd74c00ead4f9cc9a3091d62ecb96e97653add7a879a14b003227"},
{file = "msgpack-1.0.4-cp37-cp37m-manylinux_2_5_i686.manylinux1_i686.manylinux_2_17_i686.manylinux2014_i686.whl", hash = "sha256:ac5bd7901487c4a1dd51a8c58f2632b15d838d07ceedaa5e4c080f7190925bff"},
{file = "msgpack-1.0.4-cp37-cp37m-musllinux_1_1_aarch64.whl", hash = "sha256:1e91d641d2bfe91ba4c52039adc5bccf27c335356055825c7f88742c8bb900dd"},
{file = "msgpack-1.0.4-cp37-cp37m-musllinux_1_1_i686.whl", hash = "sha256:2a2df1b55a78eb5f5b7d2a4bb221cd8363913830145fad05374a80bf0877cb1e"},
{file = "msgpack-1.0.4-cp37-cp37m-musllinux_1_1_x86_64.whl", hash = "sha256:545e3cf0cf74f3e48b470f68ed19551ae6f9722814ea969305794645da091236"},
{file = "msgpack-1.0.4-cp37-cp37m-win32.whl", hash = "sha256:2cc5ca2712ac0003bcb625c96368fd08a0f86bbc1a5578802512d87bc592fe44"},
{file = "msgpack-1.0.4-cp37-cp37m-win_amd64.whl", hash = "sha256:eba96145051ccec0ec86611fe9cf693ce55f2a3ce89c06ed307de0e085730ec1"},
{file = "msgpack-1.0.4-cp38-cp38-macosx_10_9_universal2.whl", hash = "sha256:7760f85956c415578c17edb39eed99f9181a48375b0d4a94076d84148cf67b2d"},
{file = "msgpack-1.0.4-cp38-cp38-macosx_10_9_x86_64.whl", hash = "sha256:449e57cc1ff18d3b444eb554e44613cffcccb32805d16726a5494038c3b93dab"},
{file = "msgpack-1.0.4-cp38-cp38-macosx_11_0_arm64.whl", hash = "sha256:d603de2b8d2ea3f3bcb2efe286849aa7a81531abc52d8454da12f46235092bcb"},
{file = "msgpack-1.0.4-cp38-cp38-manylinux_2_17_aarch64.manylinux2014_aarch64.whl", hash = "sha256:48f5d88c99f64c456413d74a975bd605a9b0526293218a3b77220a2c15458ba9"},
{file = "msgpack-1.0.4-cp38-cp38-manylinux_2_17_x86_64.manylinux2014_x86_64.whl", hash = "sha256:6916c78f33602ecf0509cc40379271ba0f9ab572b066bd4bdafd7434dee4bc6e"},
{file = "msgpack-1.0.4-cp38-cp38-manylinux_2_5_i686.manylinux1_i686.manylinux_2_17_i686.manylinux2014_i686.whl", hash = "sha256:81fc7ba725464651190b196f3cd848e8553d4d510114a954681fd0b9c479d7e1"},
{file = "msgpack-1.0.4-cp38-cp38-musllinux_1_1_aarch64.whl", hash = "sha256:d5b5b962221fa2c5d3a7f8133f9abffc114fe218eb4365e40f17732ade576c8e"},
{file = "msgpack-1.0.4-cp38-cp38-musllinux_1_1_i686.whl", hash = "sha256:77ccd2af37f3db0ea59fb280fa2165bf1b096510ba9fe0cc2bf8fa92a22fdb43"},
{file = "msgpack-1.0.4-cp38-cp38-musllinux_1_1_x86_64.whl", hash = "sha256:b17be2478b622939e39b816e0aa8242611cc8d3583d1cd8ec31b249f04623243"},
{file = "msgpack-1.0.4-cp38-cp38-win32.whl", hash = "sha256:2bb8cdf50dd623392fa75525cce44a65a12a00c98e1e37bf0fb08ddce2ff60d2"},
{file = "msgpack-1.0.4-cp38-cp38-win_amd64.whl", hash = "sha256:26b8feaca40a90cbe031b03d82b2898bf560027160d3eae1423f4a67654ec5d6"},
{file = "msgpack-1.0.4-cp39-cp39-macosx_10_9_universal2.whl", hash = "sha256:462497af5fd4e0edbb1559c352ad84f6c577ffbbb708566a0abaaa84acd9f3ae"},
{file = "msgpack-1.0.4-cp39-cp39-macosx_10_9_x86_64.whl", hash = "sha256:2999623886c5c02deefe156e8f869c3b0aaeba14bfc50aa2486a0415178fce55"},
{file = "msgpack-1.0.4-cp39-cp39-macosx_11_0_arm64.whl", hash = "sha256:f0029245c51fd9473dc1aede1160b0a29f4a912e6b1dd353fa6d317085b219da"},
{file = "msgpack-1.0.4-cp39-cp39-manylinux_2_17_aarch64.manylinux2014_aarch64.whl", hash = "sha256:ed6f7b854a823ea44cf94919ba3f727e230da29feb4a99711433f25800cf747f"},
{file = "msgpack-1.0.4-cp39-cp39-manylinux_2_17_x86_64.manylinux2014_x86_64.whl", hash = "sha256:0df96d6eaf45ceca04b3f3b4b111b86b33785683d682c655063ef8057d61fd92"},
{file = "msgpack-1.0.4-cp39-cp39-manylinux_2_5_i686.manylinux1_i686.manylinux_2_17_i686.manylinux2014_i686.whl", hash = "sha256:6a4192b1ab40f8dca3f2877b70e63799d95c62c068c84dc028b40a6cb03ccd0f"},
{file = "msgpack-1.0.4-cp39-cp39-musllinux_1_1_aarch64.whl", hash = "sha256:0e3590f9fb9f7fbc36df366267870e77269c03172d086fa76bb4eba8b2b46624"},
{file = "msgpack-1.0.4-cp39-cp39-musllinux_1_1_i686.whl", hash = "sha256:1576bd97527a93c44fa856770197dec00d223b0b9f36ef03f65bac60197cedf8"},
{file = "msgpack-1.0.4-cp39-cp39-musllinux_1_1_x86_64.whl", hash = "sha256:63e29d6e8c9ca22b21846234913c3466b7e4ee6e422f205a2988083de3b08cae"},
{file = "msgpack-1.0.4-cp39-cp39-win32.whl", hash = "sha256:fb62ea4b62bfcb0b380d5680f9a4b3f9a2d166d9394e9bbd9666c0ee09a3645c"},
{file = "msgpack-1.0.4-cp39-cp39-win_amd64.whl", hash = "sha256:4d5834a2a48965a349da1c5a79760d94a1a0172fbb5ab6b5b33cbf8447e109ce"},
{file = "msgpack-1.0.4.tar.gz", hash = "sha256:f5d869c18f030202eb412f08b28d2afeea553d6613aee89e200d7aca7ef01f5f"},
]
multiprocess = [
{file = "multiprocess-0.70.14-pp37-pypy37_pp73-macosx_10_9_x86_64.whl", hash = "sha256:560a27540daef4ce8b24ed3cc2496a3c670df66c96d02461a4da67473685adf3"},
{file = "multiprocess-0.70.14-pp37-pypy37_pp73-manylinux_2_24_i686.whl", hash = "sha256:bfbbfa36f400b81d1978c940616bc77776424e5e34cb0c94974b178d727cfcd5"},
{file = "multiprocess-0.70.14-pp37-pypy37_pp73-manylinux_2_24_x86_64.whl", hash = "sha256:89fed99553a04ec4f9067031f83a886d7fdec5952005551a896a4b6a59575bb9"},
{file = "multiprocess-0.70.14-pp38-pypy38_pp73-macosx_10_9_x86_64.whl", hash = "sha256:40a5e3685462079e5fdee7c6789e3ef270595e1755199f0d50685e72523e1d2a"},
{file = "multiprocess-0.70.14-pp38-pypy38_pp73-manylinux_2_24_i686.whl", hash = "sha256:44936b2978d3f2648727b3eaeab6d7fa0bedf072dc5207bf35a96d5ee7c004cf"},
{file = "multiprocess-0.70.14-pp38-pypy38_pp73-manylinux_2_24_x86_64.whl", hash = "sha256:e628503187b5d494bf29ffc52d3e1e57bb770ce7ce05d67c4bbdb3a0c7d3b05f"},
{file = "multiprocess-0.70.14-pp39-pypy39_pp73-macosx_10_9_x86_64.whl", hash = "sha256:0d5da0fc84aacb0e4bd69c41b31edbf71b39fe2fb32a54eaedcaea241050855c"},
{file = "multiprocess-0.70.14-pp39-pypy39_pp73-manylinux_2_24_i686.whl", hash = "sha256:6a7b03a5b98e911a7785b9116805bd782815c5e2bd6c91c6a320f26fd3e7b7ad"},
{file = "multiprocess-0.70.14-pp39-pypy39_pp73-manylinux_2_24_x86_64.whl", hash = "sha256:cea5bdedd10aace3c660fedeac8b087136b4366d4ee49a30f1ebf7409bce00ae"},
{file = "multiprocess-0.70.14-py310-none-any.whl", hash = "sha256:7dc1f2f6a1d34894c8a9a013fbc807971e336e7cc3f3ff233e61b9dc679b3b5c"},
{file = "multiprocess-0.70.14-py37-none-any.whl", hash = "sha256:93a8208ca0926d05cdbb5b9250a604c401bed677579e96c14da3090beb798193"},
{file = "multiprocess-0.70.14-py38-none-any.whl", hash = "sha256:6725bc79666bbd29a73ca148a0fb5f4ea22eed4a8f22fce58296492a02d18a7b"},
{file = "multiprocess-0.70.14-py39-none-any.whl", hash = "sha256:63cee628b74a2c0631ef15da5534c8aedbc10c38910b9c8b18dcd327528d1ec7"},
{file = "multiprocess-0.70.14.tar.gz", hash = "sha256:3eddafc12f2260d27ae03fe6069b12570ab4764ab59a75e81624fac453fbf46a"},
]
murmurhash = [
{file = "murmurhash-1.0.9-cp310-cp310-macosx_10_9_x86_64.whl", hash = "sha256:697ed01454d92681c7ae26eb1adcdc654b54062bcc59db38ed03cad71b23d449"},
{file = "murmurhash-1.0.9-cp310-cp310-macosx_11_0_arm64.whl", hash = "sha256:5ef31b5c11be2c064dbbdd0e22ab3effa9ceb5b11ae735295c717c120087dd94"},
{file = "murmurhash-1.0.9-cp310-cp310-manylinux_2_17_aarch64.manylinux2014_aarch64.whl", hash = "sha256:d7a2bd203377a31bbb2d83fe3f968756d6c9bbfa36c64c6ebfc3c6494fc680bc"},
{file = "murmurhash-1.0.9-cp310-cp310-manylinux_2_5_x86_64.manylinux1_x86_64.manylinux_2_17_x86_64.manylinux2014_x86_64.whl", hash = "sha256:0eb0f8e652431ea238c11bcb671fef5c03aff0544bf7e098df81ea4b6d495405"},
{file = "murmurhash-1.0.9-cp310-cp310-win_amd64.whl", hash = "sha256:cf0b3fe54dca598f5b18c9951e70812e070ecb4c0672ad2cc32efde8a33b3df6"},
{file = "murmurhash-1.0.9-cp311-cp311-macosx_10_9_x86_64.whl", hash = "sha256:5dc41be79ba4d09aab7e9110a8a4d4b37b184b63767b1b247411667cdb1057a3"},
{file = "murmurhash-1.0.9-cp311-cp311-macosx_11_0_arm64.whl", hash = "sha256:c0f84ecdf37c06eda0222f2f9e81c0974e1a7659c35b755ab2fdc642ebd366db"},
{file = "murmurhash-1.0.9-cp311-cp311-manylinux_2_17_aarch64.manylinux2014_aarch64.whl", hash = "sha256:241693c1c819148eac29d7882739b1099c891f1f7431127b2652c23f81722cec"},
{file = "murmurhash-1.0.9-cp311-cp311-manylinux_2_5_x86_64.manylinux1_x86_64.manylinux_2_17_x86_64.manylinux2014_x86_64.whl", hash = "sha256:47f5ca56c430230d3b581dfdbc54eb3ad8b0406dcc9afdd978da2e662c71d370"},
{file = "murmurhash-1.0.9-cp311-cp311-win_amd64.whl", hash = "sha256:660ae41fc6609abc05130543011a45b33ca5d8318ae5c70e66bbd351ca936063"},
{file = "murmurhash-1.0.9-cp36-cp36m-manylinux_2_17_aarch64.manylinux2014_aarch64.whl", hash = "sha256:01137d688a6b259bde642513506b062364ea4e1609f886d9bd095c3ae6da0b94"},
{file = "murmurhash-1.0.9-cp36-cp36m-manylinux_2_5_x86_64.manylinux1_x86_64.manylinux_2_17_x86_64.manylinux2014_x86_64.whl", hash = "sha256:1b70bbf55d89713873a35bd4002bc231d38e530e1051d57ca5d15f96c01fd778"},
{file = "murmurhash-1.0.9-cp36-cp36m-win_amd64.whl", hash = "sha256:3e802fa5b0e618ee99e8c114ce99fc91677f14e9de6e18b945d91323a93c84e8"},
{file = "murmurhash-1.0.9-cp37-cp37m-macosx_10_9_x86_64.whl", hash = "sha256:213d0248e586082e1cab6157d9945b846fd2b6be34357ad5ea0d03a1931d82ba"},
{file = "murmurhash-1.0.9-cp37-cp37m-manylinux_2_17_aarch64.manylinux2014_aarch64.whl", hash = "sha256:94b89d02aeab5e6bad5056f9d08df03ac7cfe06e61ff4b6340feb227fda80ce8"},
{file = "murmurhash-1.0.9-cp37-cp37m-manylinux_2_5_x86_64.manylinux1_x86_64.manylinux_2_17_x86_64.manylinux2014_x86_64.whl", hash = "sha256:0c2e2ee2d91a87952fe0f80212e86119aa1fd7681f03e6c99b279e50790dc2b3"},
{file = "murmurhash-1.0.9-cp37-cp37m-win_amd64.whl", hash = "sha256:8c3d69fb649c77c74a55624ebf7a0df3c81629e6ea6e80048134f015da57b2ea"},
{file = "murmurhash-1.0.9-cp38-cp38-macosx_10_9_x86_64.whl", hash = "sha256:ab78675510f83e7a3c6bd0abdc448a9a2b0b385b0d7ee766cbbfc5cc278a3042"},
{file = "murmurhash-1.0.9-cp38-cp38-macosx_11_0_arm64.whl", hash = "sha256:0ac5530c250d2b0073ed058555847c8d88d2d00229e483d45658c13b32398523"},
{file = "murmurhash-1.0.9-cp38-cp38-manylinux_2_17_aarch64.manylinux2014_aarch64.whl", hash = "sha256:69157e8fa6b25c4383645227069f6a1f8738d32ed2a83558961019ca3ebef56a"},
{file = "murmurhash-1.0.9-cp38-cp38-manylinux_2_5_x86_64.manylinux1_x86_64.manylinux_2_17_x86_64.manylinux2014_x86_64.whl", hash = "sha256:2aebe2ae016525a662ff772b72a2c9244a673e3215fcd49897f494258b96f3e7"},
{file = "murmurhash-1.0.9-cp38-cp38-win_amd64.whl", hash = "sha256:a5952f9c18a717fa17579e27f57bfa619299546011a8378a8f73e14eece332f6"},
{file = "murmurhash-1.0.9-cp39-cp39-macosx_10_9_x86_64.whl", hash = "sha256:ef79202feeac68e83971239169a05fa6514ecc2815ce04c8302076d267870f6e"},
{file = "murmurhash-1.0.9-cp39-cp39-macosx_11_0_arm64.whl", hash = "sha256:799fcbca5693ad6a40f565ae6b8e9718e5875a63deddf343825c0f31c32348fa"},
{file = "murmurhash-1.0.9-cp39-cp39-manylinux_2_17_aarch64.manylinux2014_aarch64.whl", hash = "sha256:f9b995bc82eaf9223e045210207b8878fdfe099a788dd8abd708d9ee58459a9d"},
{file = "murmurhash-1.0.9-cp39-cp39-manylinux_2_5_x86_64.manylinux1_x86_64.manylinux_2_17_x86_64.manylinux2014_x86_64.whl", hash = "sha256:b129e1c5ebd772e6ff5ef925bcce695df13169bd885337e6074b923ab6edcfc8"},
{file = "murmurhash-1.0.9-cp39-cp39-win_amd64.whl", hash = "sha256:379bf6b414bd27dd36772dd1570565a7d69918e980457370838bd514df0d91e9"},
{file = "murmurhash-1.0.9.tar.gz", hash = "sha256:fe7a38cb0d3d87c14ec9dddc4932ffe2dbc77d75469ab80fd5014689b0e07b58"},
]
mypy = [
{file = "mypy-0.971-cp310-cp310-macosx_10_9_universal2.whl", hash = "sha256:f2899a3cbd394da157194f913a931edfd4be5f274a88041c9dc2d9cdcb1c315c"},
{file = "mypy-0.971-cp310-cp310-macosx_10_9_x86_64.whl", hash = "sha256:98e02d56ebe93981c41211c05adb630d1d26c14195d04d95e49cd97dbc046dc5"},
{file = "mypy-0.971-cp310-cp310-macosx_11_0_arm64.whl", hash = "sha256:19830b7dba7d5356d3e26e2427a2ec91c994cd92d983142cbd025ebe81d69cf3"},
{file = "mypy-0.971-cp310-cp310-manylinux_2_5_x86_64.manylinux1_x86_64.manylinux_2_12_x86_64.manylinux2010_x86_64.whl", hash = "sha256:02ef476f6dcb86e6f502ae39a16b93285fef97e7f1ff22932b657d1ef1f28655"},
{file = "mypy-0.971-cp310-cp310-win_amd64.whl", hash = "sha256:25c5750ba5609a0c7550b73a33deb314ecfb559c350bb050b655505e8aed4103"},
{file = "mypy-0.971-cp36-cp36m-macosx_10_9_x86_64.whl", hash = "sha256:d3348e7eb2eea2472db611486846742d5d52d1290576de99d59edeb7cd4a42ca"},
{file = "mypy-0.971-cp36-cp36m-manylinux_2_5_x86_64.manylinux1_x86_64.manylinux_2_12_x86_64.manylinux2010_x86_64.whl", hash = "sha256:3fa7a477b9900be9b7dd4bab30a12759e5abe9586574ceb944bc29cddf8f0417"},
{file = "mypy-0.971-cp36-cp36m-win_amd64.whl", hash = "sha256:2ad53cf9c3adc43cf3bea0a7d01a2f2e86db9fe7596dfecb4496a5dda63cbb09"},
{file = "mypy-0.971-cp37-cp37m-macosx_10_9_x86_64.whl", hash = "sha256:855048b6feb6dfe09d3353466004490b1872887150c5bb5caad7838b57328cc8"},
{file = "mypy-0.971-cp37-cp37m-manylinux_2_5_x86_64.manylinux1_x86_64.manylinux_2_12_x86_64.manylinux2010_x86_64.whl", hash = "sha256:23488a14a83bca6e54402c2e6435467a4138785df93ec85aeff64c6170077fb0"},
{file = "mypy-0.971-cp37-cp37m-win_amd64.whl", hash = "sha256:4b21e5b1a70dfb972490035128f305c39bc4bc253f34e96a4adf9127cf943eb2"},
{file = "mypy-0.971-cp38-cp38-macosx_10_9_universal2.whl", hash = "sha256:9796a2ba7b4b538649caa5cecd398d873f4022ed2333ffde58eaf604c4d2cb27"},
{file = "mypy-0.971-cp38-cp38-macosx_10_9_x86_64.whl", hash = "sha256:5a361d92635ad4ada1b1b2d3630fc2f53f2127d51cf2def9db83cba32e47c856"},
{file = "mypy-0.971-cp38-cp38-macosx_11_0_arm64.whl", hash = "sha256:b793b899f7cf563b1e7044a5c97361196b938e92f0a4343a5d27966a53d2ec71"},
{file = "mypy-0.971-cp38-cp38-manylinux_2_5_x86_64.manylinux1_x86_64.manylinux_2_12_x86_64.manylinux2010_x86_64.whl", hash = "sha256:d1ea5d12c8e2d266b5fb8c7a5d2e9c0219fedfeb493b7ed60cd350322384ac27"},
{file = "mypy-0.971-cp38-cp38-win_amd64.whl", hash = "sha256:23c7ff43fff4b0df93a186581885c8512bc50fc4d4910e0f838e35d6bb6b5e58"},
{file = "mypy-0.971-cp39-cp39-macosx_10_9_universal2.whl", hash = "sha256:1f7656b69974a6933e987ee8ffb951d836272d6c0f81d727f1d0e2696074d9e6"},
{file = "mypy-0.971-cp39-cp39-macosx_10_9_x86_64.whl", hash = "sha256:d2022bfadb7a5c2ef410d6a7c9763188afdb7f3533f22a0a32be10d571ee4bbe"},
{file = "mypy-0.971-cp39-cp39-macosx_11_0_arm64.whl", hash = "sha256:ef943c72a786b0f8d90fd76e9b39ce81fb7171172daf84bf43eaf937e9f220a9"},
{file = "mypy-0.971-cp39-cp39-manylinux_2_5_x86_64.manylinux1_x86_64.manylinux_2_12_x86_64.manylinux2010_x86_64.whl", hash = "sha256:d744f72eb39f69312bc6c2abf8ff6656973120e2eb3f3ec4f758ed47e414a4bf"},
{file = "mypy-0.971-cp39-cp39-win_amd64.whl", hash = "sha256:77a514ea15d3007d33a9e2157b0ba9c267496acf12a7f2b9b9f8446337aac5b0"},
{file = "mypy-0.971-py3-none-any.whl", hash = "sha256:0d054ef16b071149917085f51f89555a576e2618d5d9dd70bd6eea6410af3ac9"},
{file = "mypy-0.971.tar.gz", hash = "sha256:40b0f21484238269ae6a57200c807d80debc6459d444c0489a102d7c6a75fa56"},
]
mypy-extensions = [
{file = "mypy_extensions-0.4.3-py2.py3-none-any.whl", hash = "sha256:090fedd75945a69ae91ce1303b5824f428daf5a028d2f6ab8a299250a846f15d"},
{file = "mypy_extensions-0.4.3.tar.gz", hash = "sha256:2d82818f5bb3e369420cb3c4060a7970edba416647068eb4c5343488a6c604a8"},
]
nbclassic = [
{file = "nbclassic-0.4.8-py3-none-any.whl", hash = "sha256:cbf05df5842b420d5cece0143462380ea9d308ff57c2dc0eb4d6e035b18fbfb3"},
{file = "nbclassic-0.4.8.tar.gz", hash = "sha256:c74d8a500f8e058d46b576a41e5bc640711e1032cf7541dde5f73ea49497e283"},
]
nbclient = [
{file = "nbclient-0.7.0-py3-none-any.whl", hash = "sha256:434c91385cf3e53084185334d675a0d33c615108b391e260915d1aa8e86661b8"},
{file = "nbclient-0.7.0.tar.gz", hash = "sha256:a1d844efd6da9bc39d2209bf996dbd8e07bf0f36b796edfabaa8f8a9ab77c3aa"},
]
nbconvert = [
{file = "nbconvert-7.0.0rc3-py3-none-any.whl", hash = "sha256:6774a0bf293d76fa2e886255812d953b750059330c3d7305ad271c02590f1957"},
{file = "nbconvert-7.0.0rc3.tar.gz", hash = "sha256:efb9aae47dad2eae02dd9e7d2cc8add6b7e8f15c6548c0de3363f6d2f8a39146"},
]
nbformat = [
{file = "nbformat-5.7.0-py3-none-any.whl", hash = "sha256:1b05ec2c552c2f1adc745f4eddce1eac8ca9ffd59bb9fd859e827eaa031319f9"},
{file = "nbformat-5.7.0.tar.gz", hash = "sha256:1d4760c15c1a04269ef5caf375be8b98dd2f696e5eb9e603ec2bf091f9b0d3f3"},
]
nbsphinx = [
{file = "nbsphinx-0.8.10-py3-none-any.whl", hash = "sha256:6076fba58020420927899362579f12779a43091eb238f414519ec25b4a8cfc96"},
{file = "nbsphinx-0.8.10.tar.gz", hash = "sha256:a8d68046f8aab916e2940b9b3819bd3ef9ddce868aa38845ea366645cabb6254"},
]
nest-asyncio = [
{file = "nest_asyncio-1.5.6-py3-none-any.whl", hash = "sha256:b9a953fb40dceaa587d109609098db21900182b16440652454a146cffb06e8b8"},
{file = "nest_asyncio-1.5.6.tar.gz", hash = "sha256:d267cc1ff794403f7df692964d1d2a3fa9418ffea2a3f6859a439ff482fef290"},
]
networkx = [
{file = "networkx-2.8.8-py3-none-any.whl", hash = "sha256:e435dfa75b1d7195c7b8378c3859f0445cd88c6b0375c181ed66823a9ceb7524"},
{file = "networkx-2.8.8.tar.gz", hash = "sha256:230d388117af870fce5647a3c52401fcf753e94720e6ea6b4197a5355648885e"},
]
notebook = [
{file = "notebook-6.5.2-py3-none-any.whl", hash = "sha256:e04f9018ceb86e4fa841e92ea8fb214f8d23c1cedfde530cc96f92446924f0e4"},
{file = "notebook-6.5.2.tar.gz", hash = "sha256:c1897e5317e225fc78b45549a6ab4b668e4c996fd03a04e938fe5e7af2bfffd0"},
]
notebook-shim = [
{file = "notebook_shim-0.2.2-py3-none-any.whl", hash = "sha256:9c6c30f74c4fbea6fce55c1be58e7fd0409b1c681b075dcedceb005db5026949"},
{file = "notebook_shim-0.2.2.tar.gz", hash = "sha256:090e0baf9a5582ff59b607af523ca2db68ff216da0c69956b62cab2ef4fc9c3f"},
]
numba = [
{file = "numba-0.53.1-cp36-cp36m-macosx_10_14_x86_64.whl", hash = "sha256:b23de6b6837c132087d06b8b92d343edb54b885873b824a037967fbd5272ebb7"},
{file = "numba-0.53.1-cp36-cp36m-manylinux2014_i686.whl", hash = "sha256:6545b9e9b0c112b81de7f88a3c787469a357eeff8211e90b8f45ee243d521cc2"},
{file = "numba-0.53.1-cp36-cp36m-manylinux2014_x86_64.whl", hash = "sha256:8fa5c963a43855050a868106a87cd614f3c3f459951c8fc468aec263ef80d063"},
{file = "numba-0.53.1-cp36-cp36m-win32.whl", hash = "sha256:aaa6ebf56afb0b6752607b9f3bf39e99b0efe3c1fa6849698373925ee6838fd7"},
{file = "numba-0.53.1-cp36-cp36m-win_amd64.whl", hash = "sha256:b08b3df38aab769df79ed948d70f0a54a3cdda49d58af65369235c204ec5d0f3"},
{file = "numba-0.53.1-cp37-cp37m-macosx_10_14_x86_64.whl", hash = "sha256:bf5c463b62d013e3f709cc8277adf2f4f4d8cc6757293e29c6db121b77e6b760"},
{file = "numba-0.53.1-cp37-cp37m-manylinux2014_i686.whl", hash = "sha256:74df02e73155f669e60dcff07c4eef4a03dbf5b388594db74142ab40914fe4f5"},
{file = "numba-0.53.1-cp37-cp37m-manylinux2014_x86_64.whl", hash = "sha256:5165709bf62f28667e10b9afe6df0ce1037722adab92d620f59cb8bbb8104641"},
{file = "numba-0.53.1-cp37-cp37m-win32.whl", hash = "sha256:2e96958ed2ca7e6d967b2ce29c8da0ca47117e1de28e7c30b2c8c57386506fa5"},
{file = "numba-0.53.1-cp37-cp37m-win_amd64.whl", hash = "sha256:276f9d1674fe08d95872d81b97267c6b39dd830f05eb992608cbede50fcf48a9"},
{file = "numba-0.53.1-cp38-cp38-macosx_10_14_x86_64.whl", hash = "sha256:4c4c8d102512ae472af52c76ad9522da718c392cb59f4cd6785d711fa5051a2a"},
{file = "numba-0.53.1-cp38-cp38-manylinux2014_i686.whl", hash = "sha256:691adbeac17dbdf6ed7c759e9e33a522351f07d2065fe926b264b6b2c15fd89b"},
{file = "numba-0.53.1-cp38-cp38-manylinux2014_x86_64.whl", hash = "sha256:94aab3e0e9e8754116325ce026e1b29ae72443c706a3104cf7f3368dc3012912"},
{file = "numba-0.53.1-cp38-cp38-win32.whl", hash = "sha256:aabeec89bb3e3162136eea492cea7ee8882ddcda2201f05caecdece192c40896"},
{file = "numba-0.53.1-cp38-cp38-win_amd64.whl", hash = "sha256:1895ebd256819ff22256cd6fe24aa8f7470b18acc73e7917e8e93c9ac7f565dc"},
{file = "numba-0.53.1-cp39-cp39-macosx_10_14_x86_64.whl", hash = "sha256:224d197a46a9e602a16780d87636e199e2cdef528caef084a4d8fd8909c2455c"},
{file = "numba-0.53.1-cp39-cp39-manylinux2014_i686.whl", hash = "sha256:aba7acb247a09d7f12bd17a8e28bbb04e8adef9fc20ca29835d03b7894e1b49f"},
{file = "numba-0.53.1-cp39-cp39-manylinux2014_x86_64.whl", hash = "sha256:bd126f1f49da6fc4b3169cf1d96f1c3b3f84a7badd11fe22da344b923a00e744"},
{file = "numba-0.53.1-cp39-cp39-win32.whl", hash = "sha256:0ef9d1f347b251282ae46e5a5033600aa2d0dfa1ee8c16cb8137b8cd6f79e221"},
{file = "numba-0.53.1-cp39-cp39-win_amd64.whl", hash = "sha256:17146885cbe4e89c9d4abd4fcb8886dee06d4591943dc4343500c36ce2fcfa69"},
{file = "numba-0.53.1.tar.gz", hash = "sha256:9cd4e5216acdc66c4e9dab2dfd22ddb5bef151185c070d4a3cd8e78638aff5b0"},
]
numpy = [
{file = "numpy-1.23.5-cp310-cp310-macosx_10_9_x86_64.whl", hash = "sha256:9c88793f78fca17da0145455f0d7826bcb9f37da4764af27ac945488116efe63"},
{file = "numpy-1.23.5-cp310-cp310-macosx_11_0_arm64.whl", hash = "sha256:e9f4c4e51567b616be64e05d517c79a8a22f3606499941d97bb76f2ca59f982d"},
{file = "numpy-1.23.5-cp310-cp310-manylinux_2_17_aarch64.manylinux2014_aarch64.whl", hash = "sha256:7903ba8ab592b82014713c491f6c5d3a1cde5b4a3bf116404e08f5b52f6daf43"},
{file = "numpy-1.23.5-cp310-cp310-manylinux_2_17_x86_64.manylinux2014_x86_64.whl", hash = "sha256:5e05b1c973a9f858c74367553e236f287e749465f773328c8ef31abe18f691e1"},
{file = "numpy-1.23.5-cp310-cp310-win32.whl", hash = "sha256:522e26bbf6377e4d76403826ed689c295b0b238f46c28a7251ab94716da0b280"},
{file = "numpy-1.23.5-cp310-cp310-win_amd64.whl", hash = "sha256:dbee87b469018961d1ad79b1a5d50c0ae850000b639bcb1b694e9981083243b6"},
{file = "numpy-1.23.5-cp311-cp311-macosx_10_9_x86_64.whl", hash = "sha256:ce571367b6dfe60af04e04a1834ca2dc5f46004ac1cc756fb95319f64c095a96"},
{file = "numpy-1.23.5-cp311-cp311-macosx_11_0_arm64.whl", hash = "sha256:56e454c7833e94ec9769fa0f86e6ff8e42ee38ce0ce1fa4cbb747ea7e06d56aa"},
{file = "numpy-1.23.5-cp311-cp311-manylinux_2_17_aarch64.manylinux2014_aarch64.whl", hash = "sha256:5039f55555e1eab31124a5768898c9e22c25a65c1e0037f4d7c495a45778c9f2"},
{file = "numpy-1.23.5-cp311-cp311-manylinux_2_17_x86_64.manylinux2014_x86_64.whl", hash = "sha256:58f545efd1108e647604a1b5aa809591ccd2540f468a880bedb97247e72db387"},
{file = "numpy-1.23.5-cp311-cp311-win32.whl", hash = "sha256:b2a9ab7c279c91974f756c84c365a669a887efa287365a8e2c418f8b3ba73fb0"},
{file = "numpy-1.23.5-cp311-cp311-win_amd64.whl", hash = "sha256:0cbe9848fad08baf71de1a39e12d1b6310f1d5b2d0ea4de051058e6e1076852d"},
{file = "numpy-1.23.5-cp38-cp38-macosx_10_9_x86_64.whl", hash = "sha256:f063b69b090c9d918f9df0a12116029e274daf0181df392839661c4c7ec9018a"},
{file = "numpy-1.23.5-cp38-cp38-macosx_11_0_arm64.whl", hash = "sha256:0aaee12d8883552fadfc41e96b4c82ee7d794949e2a7c3b3a7201e968c7ecab9"},
{file = "numpy-1.23.5-cp38-cp38-manylinux_2_17_aarch64.manylinux2014_aarch64.whl", hash = "sha256:92c8c1e89a1f5028a4c6d9e3ccbe311b6ba53694811269b992c0b224269e2398"},
{file = "numpy-1.23.5-cp38-cp38-manylinux_2_17_x86_64.manylinux2014_x86_64.whl", hash = "sha256:d208a0f8729f3fb790ed18a003f3a57895b989b40ea4dce4717e9cf4af62c6bb"},
{file = "numpy-1.23.5-cp38-cp38-win32.whl", hash = "sha256:06005a2ef6014e9956c09ba07654f9837d9e26696a0470e42beedadb78c11b07"},
{file = "numpy-1.23.5-cp38-cp38-win_amd64.whl", hash = "sha256:ca51fcfcc5f9354c45f400059e88bc09215fb71a48d3768fb80e357f3b457e1e"},
{file = "numpy-1.23.5-cp39-cp39-macosx_10_9_x86_64.whl", hash = "sha256:8969bfd28e85c81f3f94eb4a66bc2cf1dbdc5c18efc320af34bffc54d6b1e38f"},
{file = "numpy-1.23.5-cp39-cp39-macosx_11_0_arm64.whl", hash = "sha256:a7ac231a08bb37f852849bbb387a20a57574a97cfc7b6cabb488a4fc8be176de"},
{file = "numpy-1.23.5-cp39-cp39-manylinux_2_17_aarch64.manylinux2014_aarch64.whl", hash = "sha256:bf837dc63ba5c06dc8797c398db1e223a466c7ece27a1f7b5232ba3466aafe3d"},
{file = "numpy-1.23.5-cp39-cp39-manylinux_2_17_x86_64.manylinux2014_x86_64.whl", hash = "sha256:33161613d2269025873025b33e879825ec7b1d831317e68f4f2f0f84ed14c719"},
{file = "numpy-1.23.5-cp39-cp39-win32.whl", hash = "sha256:af1da88f6bc3d2338ebbf0e22fe487821ea4d8e89053e25fa59d1d79786e7481"},
{file = "numpy-1.23.5-cp39-cp39-win_amd64.whl", hash = "sha256:09b7847f7e83ca37c6e627682f145856de331049013853f344f37b0c9690e3df"},
{file = "numpy-1.23.5-pp38-pypy38_pp73-macosx_10_9_x86_64.whl", hash = "sha256:abdde9f795cf292fb9651ed48185503a2ff29be87770c3b8e2a14b0cd7aa16f8"},
{file = "numpy-1.23.5-pp38-pypy38_pp73-manylinux_2_17_x86_64.manylinux2014_x86_64.whl", hash = "sha256:f9a909a8bae284d46bbfdefbdd4a262ba19d3bc9921b1e76126b1d21c3c34135"},
{file = "numpy-1.23.5-pp38-pypy38_pp73-win_amd64.whl", hash = "sha256:01dd17cbb340bf0fc23981e52e1d18a9d4050792e8fb8363cecbf066a84b827d"},
{file = "numpy-1.23.5.tar.gz", hash = "sha256:1b1766d6f397c18153d40015ddfc79ddb715cabadc04d2d228d4e5a8bc4ded1a"},
]
oauthlib = [
{file = "oauthlib-3.2.2-py3-none-any.whl", hash = "sha256:8139f29aac13e25d502680e9e19963e83f16838d48a0d71c287fe40e7067fbca"},
{file = "oauthlib-3.2.2.tar.gz", hash = "sha256:9859c40929662bec5d64f34d01c99e093149682a3f38915dc0655d5a633dd918"},
]
opt-einsum = [
{file = "opt_einsum-3.3.0-py3-none-any.whl", hash = "sha256:2455e59e3947d3c275477df7f5205b30635e266fe6dc300e3d9f9646bfcea147"},
{file = "opt_einsum-3.3.0.tar.gz", hash = "sha256:59f6475f77bbc37dcf7cd748519c0ec60722e91e63ca114e68821c0c54a46549"},
]
packaging = [
{file = "packaging-21.3-py3-none-any.whl", hash = "sha256:ef103e05f519cdc783ae24ea4e2e0f508a9c99b2d4969652eed6a2e1ea5bd522"},
{file = "packaging-21.3.tar.gz", hash = "sha256:dd47c42927d89ab911e606518907cc2d3a1f38bbd026385970643f9c5b8ecfeb"},
]
pandas = [
{file = "pandas-1.5.2-cp310-cp310-macosx_10_9_universal2.whl", hash = "sha256:e9dbacd22555c2d47f262ef96bb4e30880e5956169741400af8b306bbb24a273"},
{file = "pandas-1.5.2-cp310-cp310-macosx_10_9_x86_64.whl", hash = "sha256:e2b83abd292194f350bb04e188f9379d36b8dfac24dd445d5c87575f3beaf789"},
{file = "pandas-1.5.2-cp310-cp310-macosx_11_0_arm64.whl", hash = "sha256:2552bffc808641c6eb471e55aa6899fa002ac94e4eebfa9ec058649122db5824"},
{file = "pandas-1.5.2-cp310-cp310-manylinux_2_17_aarch64.manylinux2014_aarch64.whl", hash = "sha256:1fc87eac0541a7d24648a001d553406f4256e744d92df1df8ebe41829a915028"},
{file = "pandas-1.5.2-cp310-cp310-manylinux_2_17_x86_64.manylinux2014_x86_64.whl", hash = "sha256:d0d8fd58df5d17ddb8c72a5075d87cd80d71b542571b5f78178fb067fa4e9c72"},
{file = "pandas-1.5.2-cp310-cp310-win_amd64.whl", hash = "sha256:4aed257c7484d01c9a194d9a94758b37d3d751849c05a0050c087a358c41ad1f"},
{file = "pandas-1.5.2-cp311-cp311-macosx_10_9_universal2.whl", hash = "sha256:375262829c8c700c3e7cbb336810b94367b9c4889818bbd910d0ecb4e45dc261"},
{file = "pandas-1.5.2-cp311-cp311-macosx_10_9_x86_64.whl", hash = "sha256:cc3cd122bea268998b79adebbb8343b735a5511ec14efb70a39e7acbc11ccbdc"},
{file = "pandas-1.5.2-cp311-cp311-macosx_11_0_arm64.whl", hash = "sha256:b4f5a82afa4f1ff482ab8ded2ae8a453a2cdfde2001567b3ca24a4c5c5ca0db3"},
{file = "pandas-1.5.2-cp311-cp311-manylinux_2_17_aarch64.manylinux2014_aarch64.whl", hash = "sha256:8092a368d3eb7116e270525329a3e5c15ae796ccdf7ccb17839a73b4f5084a39"},
{file = "pandas-1.5.2-cp311-cp311-manylinux_2_17_x86_64.manylinux2014_x86_64.whl", hash = "sha256:f6257b314fc14958f8122779e5a1557517b0f8e500cfb2bd53fa1f75a8ad0af2"},
{file = "pandas-1.5.2-cp311-cp311-win_amd64.whl", hash = "sha256:82ae615826da838a8e5d4d630eb70c993ab8636f0eff13cb28aafc4291b632b5"},
{file = "pandas-1.5.2-cp38-cp38-macosx_10_9_universal2.whl", hash = "sha256:457d8c3d42314ff47cc2d6c54f8fc0d23954b47977b2caed09cd9635cb75388b"},
{file = "pandas-1.5.2-cp38-cp38-macosx_10_9_x86_64.whl", hash = "sha256:c009a92e81ce836212ce7aa98b219db7961a8b95999b97af566b8dc8c33e9519"},
{file = "pandas-1.5.2-cp38-cp38-macosx_11_0_arm64.whl", hash = "sha256:71f510b0efe1629bf2f7c0eadb1ff0b9cf611e87b73cd017e6b7d6adb40e2b3a"},
{file = "pandas-1.5.2-cp38-cp38-manylinux_2_17_aarch64.manylinux2014_aarch64.whl", hash = "sha256:a40dd1e9f22e01e66ed534d6a965eb99546b41d4d52dbdb66565608fde48203f"},
{file = "pandas-1.5.2-cp38-cp38-manylinux_2_17_x86_64.manylinux2014_x86_64.whl", hash = "sha256:5ae7e989f12628f41e804847a8cc2943d362440132919a69429d4dea1f164da0"},
{file = "pandas-1.5.2-cp38-cp38-win32.whl", hash = "sha256:530948945e7b6c95e6fa7aa4be2be25764af53fba93fe76d912e35d1c9ee46f5"},
{file = "pandas-1.5.2-cp38-cp38-win_amd64.whl", hash = "sha256:73f219fdc1777cf3c45fde7f0708732ec6950dfc598afc50588d0d285fddaefc"},
{file = "pandas-1.5.2-cp39-cp39-macosx_10_9_universal2.whl", hash = "sha256:9608000a5a45f663be6af5c70c3cbe634fa19243e720eb380c0d378666bc7702"},
{file = "pandas-1.5.2-cp39-cp39-macosx_10_9_x86_64.whl", hash = "sha256:315e19a3e5c2ab47a67467fc0362cb36c7c60a93b6457f675d7d9615edad2ebe"},
{file = "pandas-1.5.2-cp39-cp39-macosx_11_0_arm64.whl", hash = "sha256:e18bc3764cbb5e118be139b3b611bc3fbc5d3be42a7e827d1096f46087b395eb"},
{file = "pandas-1.5.2-cp39-cp39-manylinux_2_17_aarch64.manylinux2014_aarch64.whl", hash = "sha256:0183cb04a057cc38fde5244909fca9826d5d57c4a5b7390c0cc3fa7acd9fa883"},
{file = "pandas-1.5.2-cp39-cp39-manylinux_2_17_x86_64.manylinux2014_x86_64.whl", hash = "sha256:344021ed3e639e017b452aa8f5f6bf38a8806f5852e217a7594417fb9bbfa00e"},
{file = "pandas-1.5.2-cp39-cp39-win32.whl", hash = "sha256:e7469271497960b6a781eaa930cba8af400dd59b62ec9ca2f4d31a19f2f91090"},
{file = "pandas-1.5.2-cp39-cp39-win_amd64.whl", hash = "sha256:c218796d59d5abd8780170c937b812c9637e84c32f8271bbf9845970f8c1351f"},
{file = "pandas-1.5.2.tar.gz", hash = "sha256:220b98d15cee0b2cd839a6358bd1f273d0356bf964c1a1aeb32d47db0215488b"},
]
pandocfilters = [
{file = "pandocfilters-1.5.0-py2.py3-none-any.whl", hash = "sha256:33aae3f25fd1a026079f5d27bdd52496f0e0803b3469282162bafdcbdf6ef14f"},
{file = "pandocfilters-1.5.0.tar.gz", hash = "sha256:0b679503337d233b4339a817bfc8c50064e2eff681314376a47cb582305a7a38"},
]
parso = [
{file = "parso-0.8.3-py2.py3-none-any.whl", hash = "sha256:c001d4636cd3aecdaf33cbb40aebb59b094be2a74c556778ef5576c175e19e75"},
{file = "parso-0.8.3.tar.gz", hash = "sha256:8c07be290bb59f03588915921e29e8a50002acaf2cdc5fa0e0114f91709fafa0"},
]
partd = [
{file = "partd-1.3.0-py3-none-any.whl", hash = "sha256:6393a0c898a0ad945728e34e52de0df3ae295c5aff2e2926ba7cc3c60a734a15"},
{file = "partd-1.3.0.tar.gz", hash = "sha256:ce91abcdc6178d668bcaa431791a5a917d902341cb193f543fe445d494660485"},
]
pastel = [
{file = "pastel-0.2.1-py2.py3-none-any.whl", hash = "sha256:4349225fcdf6c2bb34d483e523475de5bb04a5c10ef711263452cb37d7dd4364"},
{file = "pastel-0.2.1.tar.gz", hash = "sha256:e6581ac04e973cac858828c6202c1e1e81fee1dc7de7683f3e1ffe0bfd8a573d"},
]
pathos = [
{file = "pathos-0.2.9-py2-none-any.whl", hash = "sha256:6a6ddb514ce2719f63fb88d5ec4f4490e436b636b54f1102d952c9f7c52f18e2"},
{file = "pathos-0.2.9-py3-none-any.whl", hash = "sha256:1c44373d8692897d5d15a8aa3b3a442ddc0814c5e848f4ff0ded5491f34b1dac"},
{file = "pathos-0.2.9.tar.gz", hash = "sha256:a8dbddcd3d9af32ada7c6dc088d845588c513a29a0ba19ab9f64c5cd83692934"},
]
pathspec = [
{file = "pathspec-0.10.2-py3-none-any.whl", hash = "sha256:88c2606f2c1e818b978540f73ecc908e13999c6c3a383daf3705652ae79807a5"},
{file = "pathspec-0.10.2.tar.gz", hash = "sha256:8f6bf73e5758fd365ef5d58ce09ac7c27d2833a8d7da51712eac6e27e35141b0"},
]
pathy = [
{file = "pathy-0.10.0-py3-none-any.whl", hash = "sha256:205d6da31c47334227d364ad8c13b848eb3254701553eb179f3faaec3abd0204"},
{file = "pathy-0.10.0.tar.gz", hash = "sha256:939822c326913cd0ab48f5928c8c40afcc59c5b093eac328348dd16700ab49e9"},
]
patsy = [
{file = "patsy-0.5.3-py2.py3-none-any.whl", hash = "sha256:7eb5349754ed6aa982af81f636479b1b8db9d5b1a6e957a6016ec0534b5c86b7"},
{file = "patsy-0.5.3.tar.gz", hash = "sha256:bdc18001875e319bc91c812c1eb6a10be4bb13cb81eb763f466179dca3b67277"},
]
pexpect = [
{file = "pexpect-4.8.0-py2.py3-none-any.whl", hash = "sha256:0b48a55dcb3c05f3329815901ea4fc1537514d6ba867a152b581d69ae3710937"},
{file = "pexpect-4.8.0.tar.gz", hash = "sha256:fc65a43959d153d0114afe13997d439c22823a27cefceb5ff35c2178c6784c0c"},
]
pickleshare = [
{file = "pickleshare-0.7.5-py2.py3-none-any.whl", hash = "sha256:9649af414d74d4df115d5d718f82acb59c9d418196b7b4290ed47a12ce62df56"},
{file = "pickleshare-0.7.5.tar.gz", hash = "sha256:87683d47965c1da65cdacaf31c8441d12b8044cdec9aca500cd78fc2c683afca"},
]
Pillow = [
{file = "Pillow-9.3.0-1-cp37-cp37m-win32.whl", hash = "sha256:e6ea6b856a74d560d9326c0f5895ef8050126acfdc7ca08ad703eb0081e82b74"},
{file = "Pillow-9.3.0-1-cp37-cp37m-win_amd64.whl", hash = "sha256:32a44128c4bdca7f31de5be641187367fe2a450ad83b833ef78910397db491aa"},
{file = "Pillow-9.3.0-cp310-cp310-macosx_10_10_x86_64.whl", hash = "sha256:0b7257127d646ff8676ec8a15520013a698d1fdc48bc2a79ba4e53df792526f2"},
{file = "Pillow-9.3.0-cp310-cp310-macosx_11_0_arm64.whl", hash = "sha256:b90f7616ea170e92820775ed47e136208e04c967271c9ef615b6fbd08d9af0e3"},
{file = "Pillow-9.3.0-cp310-cp310-manylinux_2_17_aarch64.manylinux2014_aarch64.whl", hash = "sha256:68943d632f1f9e3dce98908e873b3a090f6cba1cbb1b892a9e8d97c938871fbe"},
{file = "Pillow-9.3.0-cp310-cp310-manylinux_2_17_i686.manylinux2014_i686.whl", hash = "sha256:be55f8457cd1eac957af0c3f5ece7bc3f033f89b114ef30f710882717670b2a8"},
{file = "Pillow-9.3.0-cp310-cp310-manylinux_2_17_x86_64.manylinux2014_x86_64.whl", hash = "sha256:5d77adcd56a42d00cc1be30843d3426aa4e660cab4a61021dc84467123f7a00c"},
{file = "Pillow-9.3.0-cp310-cp310-manylinux_2_28_aarch64.whl", hash = "sha256:829f97c8e258593b9daa80638aee3789b7df9da5cf1336035016d76f03b8860c"},
{file = "Pillow-9.3.0-cp310-cp310-manylinux_2_28_x86_64.whl", hash = "sha256:801ec82e4188e935c7f5e22e006d01611d6b41661bba9fe45b60e7ac1a8f84de"},
{file = "Pillow-9.3.0-cp310-cp310-musllinux_1_1_x86_64.whl", hash = "sha256:871b72c3643e516db4ecf20efe735deb27fe30ca17800e661d769faab45a18d7"},
{file = "Pillow-9.3.0-cp310-cp310-win32.whl", hash = "sha256:655a83b0058ba47c7c52e4e2df5ecf484c1b0b0349805896dd350cbc416bdd91"},
{file = "Pillow-9.3.0-cp310-cp310-win_amd64.whl", hash = "sha256:9f47eabcd2ded7698106b05c2c338672d16a6f2a485e74481f524e2a23c2794b"},
{file = "Pillow-9.3.0-cp311-cp311-macosx_10_10_x86_64.whl", hash = "sha256:57751894f6618fd4308ed8e0c36c333e2f5469744c34729a27532b3db106ee20"},
{file = "Pillow-9.3.0-cp311-cp311-macosx_11_0_arm64.whl", hash = "sha256:7db8b751ad307d7cf238f02101e8e36a128a6cb199326e867d1398067381bff4"},
{file = "Pillow-9.3.0-cp311-cp311-manylinux_2_17_aarch64.manylinux2014_aarch64.whl", hash = "sha256:3033fbe1feb1b59394615a1cafaee85e49d01b51d54de0cbf6aa8e64182518a1"},
{file = "Pillow-9.3.0-cp311-cp311-manylinux_2_17_i686.manylinux2014_i686.whl", hash = "sha256:22b012ea2d065fd163ca096f4e37e47cd8b59cf4b0fd47bfca6abb93df70b34c"},
{file = "Pillow-9.3.0-cp311-cp311-manylinux_2_17_x86_64.manylinux2014_x86_64.whl", hash = "sha256:b9a65733d103311331875c1dca05cb4606997fd33d6acfed695b1232ba1df193"},
{file = "Pillow-9.3.0-cp311-cp311-manylinux_2_28_aarch64.whl", hash = "sha256:502526a2cbfa431d9fc2a079bdd9061a2397b842bb6bc4239bb176da00993812"},
{file = "Pillow-9.3.0-cp311-cp311-manylinux_2_28_x86_64.whl", hash = "sha256:90fb88843d3902fe7c9586d439d1e8c05258f41da473952aa8b328d8b907498c"},
{file = "Pillow-9.3.0-cp311-cp311-musllinux_1_1_x86_64.whl", hash = "sha256:89dca0ce00a2b49024df6325925555d406b14aa3efc2f752dbb5940c52c56b11"},
{file = "Pillow-9.3.0-cp311-cp311-win32.whl", hash = "sha256:3168434d303babf495d4ba58fc22d6604f6e2afb97adc6a423e917dab828939c"},
{file = "Pillow-9.3.0-cp311-cp311-win_amd64.whl", hash = "sha256:18498994b29e1cf86d505edcb7edbe814d133d2232d256db8c7a8ceb34d18cef"},
{file = "Pillow-9.3.0-cp37-cp37m-macosx_10_10_x86_64.whl", hash = "sha256:772a91fc0e03eaf922c63badeca75e91baa80fe2f5f87bdaed4280662aad25c9"},
{file = "Pillow-9.3.0-cp37-cp37m-manylinux_2_17_aarch64.manylinux2014_aarch64.whl", hash = "sha256:afa4107d1b306cdf8953edde0534562607fe8811b6c4d9a486298ad31de733b2"},
{file = "Pillow-9.3.0-cp37-cp37m-manylinux_2_17_i686.manylinux2014_i686.whl", hash = "sha256:b4012d06c846dc2b80651b120e2cdd787b013deb39c09f407727ba90015c684f"},
{file = "Pillow-9.3.0-cp37-cp37m-manylinux_2_17_x86_64.manylinux2014_x86_64.whl", hash = "sha256:77ec3e7be99629898c9a6d24a09de089fa5356ee408cdffffe62d67bb75fdd72"},
{file = "Pillow-9.3.0-cp37-cp37m-manylinux_2_28_aarch64.whl", hash = "sha256:6c738585d7a9961d8c2821a1eb3dcb978d14e238be3d70f0a706f7fa9316946b"},
{file = "Pillow-9.3.0-cp37-cp37m-manylinux_2_28_x86_64.whl", hash = "sha256:828989c45c245518065a110434246c44a56a8b2b2f6347d1409c787e6e4651ee"},
{file = "Pillow-9.3.0-cp37-cp37m-win32.whl", hash = "sha256:82409ffe29d70fd733ff3c1025a602abb3e67405d41b9403b00b01debc4c9a29"},
{file = "Pillow-9.3.0-cp37-cp37m-win_amd64.whl", hash = "sha256:41e0051336807468be450d52b8edd12ac60bebaa97fe10c8b660f116e50b30e4"},
{file = "Pillow-9.3.0-cp38-cp38-macosx_10_10_x86_64.whl", hash = "sha256:b03ae6f1a1878233ac620c98f3459f79fd77c7e3c2b20d460284e1fb370557d4"},
{file = "Pillow-9.3.0-cp38-cp38-macosx_11_0_arm64.whl", hash = "sha256:4390e9ce199fc1951fcfa65795f239a8a4944117b5935a9317fb320e7767b40f"},
{file = "Pillow-9.3.0-cp38-cp38-manylinux_2_17_aarch64.manylinux2014_aarch64.whl", hash = "sha256:40e1ce476a7804b0fb74bcfa80b0a2206ea6a882938eaba917f7a0f004b42502"},
{file = "Pillow-9.3.0-cp38-cp38-manylinux_2_17_i686.manylinux2014_i686.whl", hash = "sha256:a0a06a052c5f37b4ed81c613a455a81f9a3a69429b4fd7bb913c3fa98abefc20"},
{file = "Pillow-9.3.0-cp38-cp38-manylinux_2_17_x86_64.manylinux2014_x86_64.whl", hash = "sha256:03150abd92771742d4a8cd6f2fa6246d847dcd2e332a18d0c15cc75bf6703040"},
{file = "Pillow-9.3.0-cp38-cp38-manylinux_2_28_aarch64.whl", hash = "sha256:15c42fb9dea42465dfd902fb0ecf584b8848ceb28b41ee2b58f866411be33f07"},
{file = "Pillow-9.3.0-cp38-cp38-manylinux_2_28_x86_64.whl", hash = "sha256:51e0e543a33ed92db9f5ef69a0356e0b1a7a6b6a71b80df99f1d181ae5875636"},
{file = "Pillow-9.3.0-cp38-cp38-musllinux_1_1_x86_64.whl", hash = "sha256:3dd6caf940756101205dffc5367babf288a30043d35f80936f9bfb37f8355b32"},
{file = "Pillow-9.3.0-cp38-cp38-win32.whl", hash = "sha256:f1ff2ee69f10f13a9596480335f406dd1f70c3650349e2be67ca3139280cade0"},
{file = "Pillow-9.3.0-cp38-cp38-win_amd64.whl", hash = "sha256:276a5ca930c913f714e372b2591a22c4bd3b81a418c0f6635ba832daec1cbcfc"},
{file = "Pillow-9.3.0-cp39-cp39-macosx_10_10_x86_64.whl", hash = "sha256:73bd195e43f3fadecfc50c682f5055ec32ee2c933243cafbfdec69ab1aa87cad"},
{file = "Pillow-9.3.0-cp39-cp39-macosx_11_0_arm64.whl", hash = "sha256:1c7c8ae3864846fc95f4611c78129301e203aaa2af813b703c55d10cc1628535"},
{file = "Pillow-9.3.0-cp39-cp39-manylinux_2_17_aarch64.manylinux2014_aarch64.whl", hash = "sha256:2e0918e03aa0c72ea56edbb00d4d664294815aa11291a11504a377ea018330d3"},
{file = "Pillow-9.3.0-cp39-cp39-manylinux_2_17_i686.manylinux2014_i686.whl", hash = "sha256:b0915e734b33a474d76c28e07292f196cdf2a590a0d25bcc06e64e545f2d146c"},
{file = "Pillow-9.3.0-cp39-cp39-manylinux_2_17_x86_64.manylinux2014_x86_64.whl", hash = "sha256:af0372acb5d3598f36ec0914deed2a63f6bcdb7b606da04dc19a88d31bf0c05b"},
{file = "Pillow-9.3.0-cp39-cp39-manylinux_2_28_aarch64.whl", hash = "sha256:ad58d27a5b0262c0c19b47d54c5802db9b34d38bbf886665b626aff83c74bacd"},
{file = "Pillow-9.3.0-cp39-cp39-manylinux_2_28_x86_64.whl", hash = "sha256:97aabc5c50312afa5e0a2b07c17d4ac5e865b250986f8afe2b02d772567a380c"},
{file = "Pillow-9.3.0-cp39-cp39-musllinux_1_1_x86_64.whl", hash = "sha256:9aaa107275d8527e9d6e7670b64aabaaa36e5b6bd71a1015ddd21da0d4e06448"},
{file = "Pillow-9.3.0-cp39-cp39-win32.whl", hash = "sha256:bac18ab8d2d1e6b4ce25e3424f709aceef668347db8637c2296bcf41acb7cf48"},
{file = "Pillow-9.3.0-cp39-cp39-win_amd64.whl", hash = "sha256:b472b5ea442148d1c3e2209f20f1e0bb0eb556538690fa70b5e1f79fa0ba8dc2"},
{file = "Pillow-9.3.0-pp37-pypy37_pp73-macosx_10_10_x86_64.whl", hash = "sha256:ab388aaa3f6ce52ac1cb8e122c4bd46657c15905904b3120a6248b5b8b0bc228"},
{file = "Pillow-9.3.0-pp37-pypy37_pp73-manylinux_2_17_i686.manylinux2014_i686.whl", hash = "sha256:dbb8e7f2abee51cef77673be97760abff1674ed32847ce04b4af90f610144c7b"},
{file = "Pillow-9.3.0-pp37-pypy37_pp73-manylinux_2_17_x86_64.manylinux2014_x86_64.whl", hash = "sha256:bca31dd6014cb8b0b2db1e46081b0ca7d936f856da3b39744aef499db5d84d02"},
{file = "Pillow-9.3.0-pp37-pypy37_pp73-manylinux_2_28_x86_64.whl", hash = "sha256:c7025dce65566eb6e89f56c9509d4f628fddcedb131d9465cacd3d8bac337e7e"},
{file = "Pillow-9.3.0-pp37-pypy37_pp73-win_amd64.whl", hash = "sha256:ebf2029c1f464c59b8bdbe5143c79fa2045a581ac53679733d3a91d400ff9efb"},
{file = "Pillow-9.3.0-pp38-pypy38_pp73-macosx_10_10_x86_64.whl", hash = "sha256:b59430236b8e58840a0dfb4099a0e8717ffb779c952426a69ae435ca1f57210c"},
{file = "Pillow-9.3.0-pp38-pypy38_pp73-manylinux_2_17_i686.manylinux2014_i686.whl", hash = "sha256:12ce4932caf2ddf3e41d17fc9c02d67126935a44b86df6a206cf0d7161548627"},
{file = "Pillow-9.3.0-pp38-pypy38_pp73-manylinux_2_17_x86_64.manylinux2014_x86_64.whl", hash = "sha256:ae5331c23ce118c53b172fa64a4c037eb83c9165aba3a7ba9ddd3ec9fa64a699"},
{file = "Pillow-9.3.0-pp38-pypy38_pp73-manylinux_2_28_x86_64.whl", hash = "sha256:0b07fffc13f474264c336298d1b4ce01d9c5a011415b79d4ee5527bb69ae6f65"},
{file = "Pillow-9.3.0-pp38-pypy38_pp73-win_amd64.whl", hash = "sha256:073adb2ae23431d3b9bcbcff3fe698b62ed47211d0716b067385538a1b0f28b8"},
{file = "Pillow-9.3.0.tar.gz", hash = "sha256:c935a22a557a560108d780f9a0fc426dd7459940dc54faa49d83249c8d3e760f"},
]
pip = [
{file = "pip-22.3.1-py3-none-any.whl", hash = "sha256:908c78e6bc29b676ede1c4d57981d490cb892eb45cd8c214ab6298125119e077"},
{file = "pip-22.3.1.tar.gz", hash = "sha256:65fd48317359f3af8e593943e6ae1506b66325085ea64b706a998c6e83eeaf38"},
]
pkgutil_resolve_name = [
{file = "pkgutil_resolve_name-1.3.10-py3-none-any.whl", hash = "sha256:ca27cc078d25c5ad71a9de0a7a330146c4e014c2462d9af19c6b828280649c5e"},
{file = "pkgutil_resolve_name-1.3.10.tar.gz", hash = "sha256:357d6c9e6a755653cfd78893817c0853af365dd51ec97f3d358a819373bbd174"},
]
platformdirs = [
{file = "platformdirs-2.5.4-py3-none-any.whl", hash = "sha256:af0276409f9a02373d540bf8480021a048711d572745aef4b7842dad245eba10"},
{file = "platformdirs-2.5.4.tar.gz", hash = "sha256:1006647646d80f16130f052404c6b901e80ee4ed6bef6792e1f238a8969106f7"},
]
plotly = [
{file = "plotly-5.11.0-py2.py3-none-any.whl", hash = "sha256:52fd74b08aa4fd5a55b9d3034a30dbb746e572d7ed84897422f927fdf687ea5f"},
{file = "plotly-5.11.0.tar.gz", hash = "sha256:4efef479c2ec1d86dcdac8405b6ca70ca65649a77408e39a7e84a1ea2db6c787"},
]
pluggy = [
{file = "pluggy-1.0.0-py2.py3-none-any.whl", hash = "sha256:74134bbf457f031a36d68416e1509f34bd5ccc019f0bcc952c7b909d06b37bd3"},
{file = "pluggy-1.0.0.tar.gz", hash = "sha256:4224373bacce55f955a878bf9cfa763c1e360858e330072059e10bad68531159"},
]
poethepoet = [
{file = "poethepoet-0.16.5-py3-none-any.whl", hash = "sha256:493d5d47b4cb0894dde6a69d14129ba39ef3f124fabda1f83ebb39bbf737a40e"},
{file = "poethepoet-0.16.5.tar.gz", hash = "sha256:3c958792ce488661ba09df67ba832a1b3141aa640236505ee60c23f4b1db4dbc"},
]
pox = [
{file = "pox-0.3.2-py3-none-any.whl", hash = "sha256:56fe2f099ecd8a557b8948082504492de90e8598c34733c9b1fdeca8f7b6de61"},
{file = "pox-0.3.2.tar.gz", hash = "sha256:e825225297638d6e3d49415f8cfb65407a5d15e56f2fb7fe9d9b9e3050c65ee1"},
]
ppft = [
{file = "ppft-1.7.6.6-py3-none-any.whl", hash = "sha256:f355d2caeed8bd7c9e4a860c471f31f7e66d1ada2791ab5458ea7dca15a51e41"},
{file = "ppft-1.7.6.6.tar.gz", hash = "sha256:f933f0404f3e808bc860745acb3b79cd4fe31ea19a20889a645f900415be60f1"},
]
preshed = [
{file = "preshed-3.0.8-cp310-cp310-macosx_10_9_x86_64.whl", hash = "sha256:ea4b6df8ef7af38e864235256793bc3056e9699d991afcf6256fa298858582fc"},
{file = "preshed-3.0.8-cp310-cp310-macosx_11_0_arm64.whl", hash = "sha256:8e945fc814bdc29564a2ce137c237b3a9848aa1e76a1160369b6e0d328151fdd"},
{file = "preshed-3.0.8-cp310-cp310-manylinux_2_17_aarch64.manylinux2014_aarch64.whl", hash = "sha256:f9a4833530fe53001c351974e0c8bb660211b8d0358e592af185fec1ae12b2d0"},
{file = "preshed-3.0.8-cp310-cp310-manylinux_2_5_x86_64.manylinux1_x86_64.manylinux_2_17_x86_64.manylinux2014_x86_64.whl", hash = "sha256:e1472ee231f323b4f4368b1b5f8f08481ed43af89697d45450c6ae4af46ac08a"},
{file = "preshed-3.0.8-cp310-cp310-win_amd64.whl", hash = "sha256:c8a2e2931eea7e500fbf8e014b69022f3fab2e35a70da882e2fc753e5e487ae3"},
{file = "preshed-3.0.8-cp311-cp311-macosx_10_9_x86_64.whl", hash = "sha256:0e1bb8701df7861af26a312225bdf7c4822ac06fcf75aeb60fe2b0a20e64c222"},
{file = "preshed-3.0.8-cp311-cp311-macosx_11_0_arm64.whl", hash = "sha256:e9aef2b0b7687aecef48b1c6ff657d407ff24e75462877dcb888fa904c4a9c6d"},
{file = "preshed-3.0.8-cp311-cp311-manylinux_2_17_aarch64.manylinux2014_aarch64.whl", hash = "sha256:854d58a8913ebf3b193b0dc8064155b034e8987de25f26838dfeca09151fda8a"},
{file = "preshed-3.0.8-cp311-cp311-manylinux_2_5_x86_64.manylinux1_x86_64.manylinux_2_17_x86_64.manylinux2014_x86_64.whl", hash = "sha256:135e2ac0db1a3948d6ec295598c7e182b52c394663f2fcfe36a97ae51186be21"},
{file = "preshed-3.0.8-cp311-cp311-win_amd64.whl", hash = "sha256:019d8fa4161035811fb2804d03214143298739e162d0ad24e087bd46c50970f5"},
{file = "preshed-3.0.8-cp36-cp36m-manylinux_2_17_aarch64.manylinux2014_aarch64.whl", hash = "sha256:6a49ce52856fbb3ef4f1cc744c53f5d7e1ca370b1939620ac2509a6d25e02a50"},
{file = "preshed-3.0.8-cp36-cp36m-manylinux_2_5_x86_64.manylinux1_x86_64.manylinux_2_17_x86_64.manylinux2014_x86_64.whl", hash = "sha256:fdbc2957b36115a576c515ffe963919f19d2683f3c76c9304ae88ef59f6b5ca6"},
{file = "preshed-3.0.8-cp36-cp36m-win_amd64.whl", hash = "sha256:09cc9da2ac1b23010ce7d88a5e20f1033595e6dd80be14318e43b9409f4c7697"},
{file = "preshed-3.0.8-cp37-cp37m-macosx_10_9_x86_64.whl", hash = "sha256:e19c8069f1a1450f835f23d47724530cf716d581fcafb398f534d044f806b8c2"},
{file = "preshed-3.0.8-cp37-cp37m-manylinux_2_17_aarch64.manylinux2014_aarch64.whl", hash = "sha256:25b5ef5e387a0e17ff41202a8c1816184ab6fb3c0d0b847bf8add0ed5941eb8d"},
{file = "preshed-3.0.8-cp37-cp37m-manylinux_2_5_x86_64.manylinux1_x86_64.manylinux_2_17_x86_64.manylinux2014_x86_64.whl", hash = "sha256:53d3e2456a085425c66af7baba62d7eaa24aa5e460e1a9e02c401a2ed59abd7b"},
{file = "preshed-3.0.8-cp37-cp37m-win_amd64.whl", hash = "sha256:85e98a618fb36cdcc37501d8b9b8c1246651cc2f2db3a70702832523e0ae12f4"},
{file = "preshed-3.0.8-cp38-cp38-macosx_10_9_x86_64.whl", hash = "sha256:7f8837bf616335464f3713cbf562a3dcaad22c3ca9193f957018964ef871a68b"},
{file = "preshed-3.0.8-cp38-cp38-macosx_11_0_arm64.whl", hash = "sha256:720593baf2c2e295f855192974799e486da5f50d4548db93c44f5726a43cefb9"},
{file = "preshed-3.0.8-cp38-cp38-manylinux_2_17_aarch64.manylinux2014_aarch64.whl", hash = "sha256:e0ad3d860b9ce88a74cf7414bb4b1c6fd833813e7b818e76f49272c4974b19ce"},
{file = "preshed-3.0.8-cp38-cp38-manylinux_2_5_x86_64.manylinux1_x86_64.manylinux_2_17_x86_64.manylinux2014_x86_64.whl", hash = "sha256:fd19d48440b152657966a52e627780c0ddbe9d907b8d7ee4598505e80a3c55c7"},
{file = "preshed-3.0.8-cp38-cp38-win_amd64.whl", hash = "sha256:246e7c6890dc7fe9b10f0e31de3346b906e3862b6ef42fcbede37968f46a73bf"},
{file = "preshed-3.0.8-cp39-cp39-macosx_10_9_x86_64.whl", hash = "sha256:67643e66691770dc3434b01671648f481e3455209ce953727ef2330b16790aaa"},
{file = "preshed-3.0.8-cp39-cp39-macosx_11_0_arm64.whl", hash = "sha256:0ae25a010c9f551aa2247ee621457f679e07c57fc99d3fd44f84cb40b925f12c"},
{file = "preshed-3.0.8-cp39-cp39-manylinux_2_17_aarch64.manylinux2014_aarch64.whl", hash = "sha256:5a6a7fcf7dd2e7711051b3f0432da9ec9c748954c989f49d2cd8eabf8c2d953e"},
{file = "preshed-3.0.8-cp39-cp39-manylinux_2_5_x86_64.manylinux1_x86_64.manylinux_2_17_x86_64.manylinux2014_x86_64.whl", hash = "sha256:5942858170c4f53d9afc6352a86bbc72fc96cc4d8964b6415492114a5920d3ed"},
{file = "preshed-3.0.8-cp39-cp39-win_amd64.whl", hash = "sha256:06793022a56782ef51d74f1399925a2ba958e50c5cfbc6fa5b25c4945e158a07"},
{file = "preshed-3.0.8.tar.gz", hash = "sha256:6c74c70078809bfddda17be96483c41d06d717934b07cab7921011d81758b357"},
]
progressbar2 = [
{file = "progressbar2-4.2.0-py2.py3-none-any.whl", hash = "sha256:1a8e201211f99a85df55f720b3b6da7fb5c8cdef56792c4547205be2de5ea606"},
{file = "progressbar2-4.2.0.tar.gz", hash = "sha256:1393922fcb64598944ad457569fbeb4b3ac189ef50b5adb9cef3284e87e394ce"},
]
prometheus-client = [
{file = "prometheus_client-0.15.0-py3-none-any.whl", hash = "sha256:db7c05cbd13a0f79975592d112320f2605a325969b270a94b71dcabc47b931d2"},
{file = "prometheus_client-0.15.0.tar.gz", hash = "sha256:be26aa452490cfcf6da953f9436e95a9f2b4d578ca80094b4458930e5f584ab1"},
]
prompt-toolkit = [
{file = "prompt_toolkit-3.0.33-py3-none-any.whl", hash = "sha256:ced598b222f6f4029c0800cefaa6a17373fb580cd093223003475ce32805c35b"},
{file = "prompt_toolkit-3.0.33.tar.gz", hash = "sha256:535c29c31216c77302877d5120aef6c94ff573748a5b5ca5b1b1f76f5e700c73"},
]
protobuf = [
{file = "protobuf-3.19.6-cp310-cp310-manylinux2014_aarch64.whl", hash = "sha256:010be24d5a44be7b0613750ab40bc8b8cedc796db468eae6c779b395f50d1fa1"},
{file = "protobuf-3.19.6-cp310-cp310-manylinux_2_17_x86_64.manylinux2014_x86_64.whl", hash = "sha256:11478547958c2dfea921920617eb457bc26867b0d1aa065ab05f35080c5d9eb6"},
{file = "protobuf-3.19.6-cp310-cp310-win32.whl", hash = "sha256:559670e006e3173308c9254d63facb2c03865818f22204037ab76f7a0ff70b5f"},
{file = "protobuf-3.19.6-cp310-cp310-win_amd64.whl", hash = "sha256:347b393d4dd06fb93a77620781e11c058b3b0a5289262f094379ada2920a3730"},
{file = "protobuf-3.19.6-cp36-cp36m-macosx_10_9_x86_64.whl", hash = "sha256:a8ce5ae0de28b51dff886fb922012dad885e66176663950cb2344c0439ecb473"},
{file = "protobuf-3.19.6-cp36-cp36m-manylinux_2_17_x86_64.manylinux2014_x86_64.whl", hash = "sha256:90b0d02163c4e67279ddb6dc25e063db0130fc299aefabb5d481053509fae5c8"},
{file = "protobuf-3.19.6-cp36-cp36m-win32.whl", hash = "sha256:30f5370d50295b246eaa0296533403961f7e64b03ea12265d6dfce3a391d8992"},
{file = "protobuf-3.19.6-cp36-cp36m-win_amd64.whl", hash = "sha256:0c0714b025ec057b5a7600cb66ce7c693815f897cfda6d6efb58201c472e3437"},
{file = "protobuf-3.19.6-cp37-cp37m-macosx_10_9_x86_64.whl", hash = "sha256:5057c64052a1f1dd7d4450e9aac25af6bf36cfbfb3a1cd89d16393a036c49157"},
{file = "protobuf-3.19.6-cp37-cp37m-manylinux2014_aarch64.whl", hash = "sha256:bb6776bd18f01ffe9920e78e03a8676530a5d6c5911934c6a1ac6eb78973ecb6"},
{file = "protobuf-3.19.6-cp37-cp37m-manylinux_2_17_x86_64.manylinux2014_x86_64.whl", hash = "sha256:84a04134866861b11556a82dd91ea6daf1f4925746b992f277b84013a7cc1229"},
{file = "protobuf-3.19.6-cp37-cp37m-win32.whl", hash = "sha256:4bc98de3cdccfb5cd769620d5785b92c662b6bfad03a202b83799b6ed3fa1fa7"},
{file = "protobuf-3.19.6-cp37-cp37m-win_amd64.whl", hash = "sha256:aa3b82ca1f24ab5326dcf4ea00fcbda703e986b22f3d27541654f749564d778b"},
{file = "protobuf-3.19.6-cp38-cp38-macosx_10_9_x86_64.whl", hash = "sha256:2b2d2913bcda0e0ec9a784d194bc490f5dc3d9d71d322d070b11a0ade32ff6ba"},
{file = "protobuf-3.19.6-cp38-cp38-manylinux2014_aarch64.whl", hash = "sha256:d0b635cefebd7a8a0f92020562dead912f81f401af7e71f16bf9506ff3bdbb38"},
{file = "protobuf-3.19.6-cp38-cp38-manylinux_2_17_x86_64.manylinux2014_x86_64.whl", hash = "sha256:7a552af4dc34793803f4e735aabe97ffc45962dfd3a237bdde242bff5a3de684"},
{file = "protobuf-3.19.6-cp38-cp38-win32.whl", hash = "sha256:0469bc66160180165e4e29de7f445e57a34ab68f49357392c5b2f54c656ab25e"},
{file = "protobuf-3.19.6-cp38-cp38-win_amd64.whl", hash = "sha256:91d5f1e139ff92c37e0ff07f391101df77e55ebb97f46bbc1535298d72019462"},
{file = "protobuf-3.19.6-cp39-cp39-macosx_10_9_x86_64.whl", hash = "sha256:c0ccd3f940fe7f3b35a261b1dd1b4fc850c8fde9f74207015431f174be5976b3"},
{file = "protobuf-3.19.6-cp39-cp39-manylinux2014_aarch64.whl", hash = "sha256:30a15015d86b9c3b8d6bf78d5b8c7749f2512c29f168ca259c9d7727604d0e39"},
{file = "protobuf-3.19.6-cp39-cp39-manylinux_2_17_x86_64.manylinux2014_x86_64.whl", hash = "sha256:878b4cd080a21ddda6ac6d1e163403ec6eea2e206cf225982ae04567d39be7b0"},
{file = "protobuf-3.19.6-cp39-cp39-win32.whl", hash = "sha256:5a0d7539a1b1fb7e76bf5faa0b44b30f812758e989e59c40f77a7dab320e79b9"},
{file = "protobuf-3.19.6-cp39-cp39-win_amd64.whl", hash = "sha256:bbf5cea5048272e1c60d235c7bd12ce1b14b8a16e76917f371c718bd3005f045"},
{file = "protobuf-3.19.6-py2.py3-none-any.whl", hash = "sha256:14082457dc02be946f60b15aad35e9f5c69e738f80ebbc0900a19bc83734a5a4"},
{file = "protobuf-3.19.6.tar.gz", hash = "sha256:5f5540d57a43042389e87661c6eaa50f47c19c6176e8cf1c4f287aeefeccb5c4"},
]
psutil = [
{file = "psutil-5.9.4-cp27-cp27m-macosx_10_9_x86_64.whl", hash = "sha256:c1ca331af862803a42677c120aff8a814a804e09832f166f226bfd22b56feee8"},
{file = "psutil-5.9.4-cp27-cp27m-manylinux2010_i686.whl", hash = "sha256:68908971daf802203f3d37e78d3f8831b6d1014864d7a85937941bb35f09aefe"},
{file = "psutil-5.9.4-cp27-cp27m-manylinux2010_x86_64.whl", hash = "sha256:3ff89f9b835100a825b14c2808a106b6fdcc4b15483141482a12c725e7f78549"},
{file = "psutil-5.9.4-cp27-cp27m-win32.whl", hash = "sha256:852dd5d9f8a47169fe62fd4a971aa07859476c2ba22c2254d4a1baa4e10b95ad"},
{file = "psutil-5.9.4-cp27-cp27m-win_amd64.whl", hash = "sha256:9120cd39dca5c5e1c54b59a41d205023d436799b1c8c4d3ff71af18535728e94"},
{file = "psutil-5.9.4-cp27-cp27mu-manylinux2010_i686.whl", hash = "sha256:6b92c532979bafc2df23ddc785ed116fced1f492ad90a6830cf24f4d1ea27d24"},
{file = "psutil-5.9.4-cp27-cp27mu-manylinux2010_x86_64.whl", hash = "sha256:efeae04f9516907be44904cc7ce08defb6b665128992a56957abc9b61dca94b7"},
{file = "psutil-5.9.4-cp36-abi3-macosx_10_9_x86_64.whl", hash = "sha256:54d5b184728298f2ca8567bf83c422b706200bcbbfafdc06718264f9393cfeb7"},
{file = "psutil-5.9.4-cp36-abi3-manylinux_2_12_i686.manylinux2010_i686.manylinux_2_17_i686.manylinux2014_i686.whl", hash = "sha256:16653106f3b59386ffe10e0bad3bb6299e169d5327d3f187614b1cb8f24cf2e1"},
{file = "psutil-5.9.4-cp36-abi3-manylinux_2_12_x86_64.manylinux2010_x86_64.manylinux_2_17_x86_64.manylinux2014_x86_64.whl", hash = "sha256:54c0d3d8e0078b7666984e11b12b88af2db11d11249a8ac8920dd5ef68a66e08"},
{file = "psutil-5.9.4-cp36-abi3-win32.whl", hash = "sha256:149555f59a69b33f056ba1c4eb22bb7bf24332ce631c44a319cec09f876aaeff"},
{file = "psutil-5.9.4-cp36-abi3-win_amd64.whl", hash = "sha256:fd8522436a6ada7b4aad6638662966de0d61d241cb821239b2ae7013d41a43d4"},
{file = "psutil-5.9.4-cp38-abi3-macosx_11_0_arm64.whl", hash = "sha256:6001c809253a29599bc0dfd5179d9f8a5779f9dffea1da0f13c53ee568115e1e"},
{file = "psutil-5.9.4.tar.gz", hash = "sha256:3d7f9739eb435d4b1338944abe23f49584bde5395f27487d2ee25ad9a8774a62"},
]
ptyprocess = [
{file = "ptyprocess-0.7.0-py2.py3-none-any.whl", hash = "sha256:4b41f3967fce3af57cc7e94b888626c18bf37a083e3651ca8feeb66d492fef35"},
{file = "ptyprocess-0.7.0.tar.gz", hash = "sha256:5c5d0a3b48ceee0b48485e0c26037c0acd7d29765ca3fbb5cb3831d347423220"},
]
pure-eval = [
{file = "pure_eval-0.2.2-py3-none-any.whl", hash = "sha256:01eaab343580944bc56080ebe0a674b39ec44a945e6d09ba7db3cb8cec289350"},
{file = "pure_eval-0.2.2.tar.gz", hash = "sha256:2b45320af6dfaa1750f543d714b6d1c520a1688dec6fd24d339063ce0aaa9ac3"},
]
py = [
{file = "py-1.11.0-py2.py3-none-any.whl", hash = "sha256:607c53218732647dff4acdfcd50cb62615cedf612e72d1724fb1a0cc6405b378"},
{file = "py-1.11.0.tar.gz", hash = "sha256:51c75c4126074b472f746a24399ad32f6053d1b34b68d2fa41e558e6f4a98719"},
]
pyasn1 = [
{file = "pyasn1-0.4.8-py2.py3-none-any.whl", hash = "sha256:39c7e2ec30515947ff4e87fb6f456dfc6e84857d34be479c9d4a4ba4bf46aa5d"},
{file = "pyasn1-0.4.8.tar.gz", hash = "sha256:aef77c9fb94a3ac588e87841208bdec464471d9871bd5050a287cc9a475cd0ba"},
]
pyasn1-modules = [
{file = "pyasn1-modules-0.2.8.tar.gz", hash = "sha256:905f84c712230b2c592c19470d3ca8d552de726050d1d1716282a1f6146be65e"},
{file = "pyasn1_modules-0.2.8-py2.py3-none-any.whl", hash = "sha256:a50b808ffeb97cb3601dd25981f6b016cbb3d31fbf57a8b8a87428e6158d0c74"},
]
pycodestyle = [
{file = "pycodestyle-2.8.0-py2.py3-none-any.whl", hash = "sha256:720f8b39dde8b293825e7ff02c475f3077124006db4f440dcbc9a20b76548a20"},
{file = "pycodestyle-2.8.0.tar.gz", hash = "sha256:eddd5847ef438ea1c7870ca7eb78a9d47ce0cdb4851a5523949f2601d0cbbe7f"},
]
pycparser = [
{file = "pycparser-2.21-py2.py3-none-any.whl", hash = "sha256:8ee45429555515e1f6b185e78100aea234072576aa43ab53aefcae078162fca9"},
{file = "pycparser-2.21.tar.gz", hash = "sha256:e644fdec12f7872f86c58ff790da456218b10f863970249516d60a5eaca77206"},
]
pydantic = [
{file = "pydantic-1.10.2-cp310-cp310-macosx_10_9_x86_64.whl", hash = "sha256:bb6ad4489af1bac6955d38ebcb95079a836af31e4c4f74aba1ca05bb9f6027bd"},
{file = "pydantic-1.10.2-cp310-cp310-macosx_11_0_arm64.whl", hash = "sha256:a1f5a63a6dfe19d719b1b6e6106561869d2efaca6167f84f5ab9347887d78b98"},
{file = "pydantic-1.10.2-cp310-cp310-manylinux_2_17_x86_64.manylinux2014_x86_64.whl", hash = "sha256:352aedb1d71b8b0736c6d56ad2bd34c6982720644b0624462059ab29bd6e5912"},
{file = "pydantic-1.10.2-cp310-cp310-manylinux_2_5_i686.manylinux1_i686.manylinux_2_17_i686.manylinux2014_i686.whl", hash = "sha256:19b3b9ccf97af2b7519c42032441a891a5e05c68368f40865a90eb88833c2559"},
{file = "pydantic-1.10.2-cp310-cp310-musllinux_1_1_i686.whl", hash = "sha256:e9069e1b01525a96e6ff49e25876d90d5a563bc31c658289a8772ae186552236"},
{file = "pydantic-1.10.2-cp310-cp310-musllinux_1_1_x86_64.whl", hash = "sha256:355639d9afc76bcb9b0c3000ddcd08472ae75318a6eb67a15866b87e2efa168c"},
{file = "pydantic-1.10.2-cp310-cp310-win_amd64.whl", hash = "sha256:ae544c47bec47a86bc7d350f965d8b15540e27e5aa4f55170ac6a75e5f73b644"},
{file = "pydantic-1.10.2-cp311-cp311-macosx_10_9_x86_64.whl", hash = "sha256:a4c805731c33a8db4b6ace45ce440c4ef5336e712508b4d9e1aafa617dc9907f"},
{file = "pydantic-1.10.2-cp311-cp311-macosx_11_0_arm64.whl", hash = "sha256:d49f3db871575e0426b12e2f32fdb25e579dea16486a26e5a0474af87cb1ab0a"},
{file = "pydantic-1.10.2-cp311-cp311-manylinux_2_17_x86_64.manylinux2014_x86_64.whl", hash = "sha256:37c90345ec7dd2f1bcef82ce49b6235b40f282b94d3eec47e801baf864d15525"},
{file = "pydantic-1.10.2-cp311-cp311-manylinux_2_5_i686.manylinux1_i686.manylinux_2_17_i686.manylinux2014_i686.whl", hash = "sha256:7b5ba54d026c2bd2cb769d3468885f23f43710f651688e91f5fb1edcf0ee9283"},
{file = "pydantic-1.10.2-cp311-cp311-musllinux_1_1_i686.whl", hash = "sha256:05e00dbebbe810b33c7a7362f231893183bcc4251f3f2ff991c31d5c08240c42"},
{file = "pydantic-1.10.2-cp311-cp311-musllinux_1_1_x86_64.whl", hash = "sha256:2d0567e60eb01bccda3a4df01df677adf6b437958d35c12a3ac3e0f078b0ee52"},
{file = "pydantic-1.10.2-cp311-cp311-win_amd64.whl", hash = "sha256:c6f981882aea41e021f72779ce2a4e87267458cc4d39ea990729e21ef18f0f8c"},
{file = "pydantic-1.10.2-cp37-cp37m-macosx_10_9_x86_64.whl", hash = "sha256:c4aac8e7103bf598373208f6299fa9a5cfd1fc571f2d40bf1dd1955a63d6eeb5"},
{file = "pydantic-1.10.2-cp37-cp37m-manylinux_2_17_x86_64.manylinux2014_x86_64.whl", hash = "sha256:81a7b66c3f499108b448f3f004801fcd7d7165fb4200acb03f1c2402da73ce4c"},
{file = "pydantic-1.10.2-cp37-cp37m-manylinux_2_5_i686.manylinux1_i686.manylinux_2_17_i686.manylinux2014_i686.whl", hash = "sha256:bedf309630209e78582ffacda64a21f96f3ed2e51fbf3962d4d488e503420254"},
{file = "pydantic-1.10.2-cp37-cp37m-musllinux_1_1_i686.whl", hash = "sha256:9300fcbebf85f6339a02c6994b2eb3ff1b9c8c14f502058b5bf349d42447dcf5"},
{file = "pydantic-1.10.2-cp37-cp37m-musllinux_1_1_x86_64.whl", hash = "sha256:216f3bcbf19c726b1cc22b099dd409aa371f55c08800bcea4c44c8f74b73478d"},
{file = "pydantic-1.10.2-cp37-cp37m-win_amd64.whl", hash = "sha256:dd3f9a40c16daf323cf913593083698caee97df2804aa36c4b3175d5ac1b92a2"},
{file = "pydantic-1.10.2-cp38-cp38-macosx_10_9_x86_64.whl", hash = "sha256:b97890e56a694486f772d36efd2ba31612739bc6f3caeee50e9e7e3ebd2fdd13"},
{file = "pydantic-1.10.2-cp38-cp38-macosx_11_0_arm64.whl", hash = "sha256:9cabf4a7f05a776e7793e72793cd92cc865ea0e83a819f9ae4ecccb1b8aa6116"},
{file = "pydantic-1.10.2-cp38-cp38-manylinux_2_17_x86_64.manylinux2014_x86_64.whl", hash = "sha256:06094d18dd5e6f2bbf93efa54991c3240964bb663b87729ac340eb5014310624"},
{file = "pydantic-1.10.2-cp38-cp38-manylinux_2_5_i686.manylinux1_i686.manylinux_2_17_i686.manylinux2014_i686.whl", hash = "sha256:cc78cc83110d2f275ec1970e7a831f4e371ee92405332ebfe9860a715f8336e1"},
{file = "pydantic-1.10.2-cp38-cp38-musllinux_1_1_i686.whl", hash = "sha256:1ee433e274268a4b0c8fde7ad9d58ecba12b069a033ecc4645bb6303c062d2e9"},
{file = "pydantic-1.10.2-cp38-cp38-musllinux_1_1_x86_64.whl", hash = "sha256:7c2abc4393dea97a4ccbb4ec7d8658d4e22c4765b7b9b9445588f16c71ad9965"},
{file = "pydantic-1.10.2-cp38-cp38-win_amd64.whl", hash = "sha256:0b959f4d8211fc964772b595ebb25f7652da3f22322c007b6fed26846a40685e"},
{file = "pydantic-1.10.2-cp39-cp39-macosx_10_9_x86_64.whl", hash = "sha256:c33602f93bfb67779f9c507e4d69451664524389546bacfe1bee13cae6dc7488"},
{file = "pydantic-1.10.2-cp39-cp39-macosx_11_0_arm64.whl", hash = "sha256:5760e164b807a48a8f25f8aa1a6d857e6ce62e7ec83ea5d5c5a802eac81bad41"},
{file = "pydantic-1.10.2-cp39-cp39-manylinux_2_17_x86_64.manylinux2014_x86_64.whl", hash = "sha256:6eb843dcc411b6a2237a694f5e1d649fc66c6064d02b204a7e9d194dff81eb4b"},
{file = "pydantic-1.10.2-cp39-cp39-manylinux_2_5_i686.manylinux1_i686.manylinux_2_17_i686.manylinux2014_i686.whl", hash = "sha256:4b8795290deaae348c4eba0cebb196e1c6b98bdbe7f50b2d0d9a4a99716342fe"},
{file = "pydantic-1.10.2-cp39-cp39-musllinux_1_1_i686.whl", hash = "sha256:e0bedafe4bc165ad0a56ac0bd7695df25c50f76961da29c050712596cf092d6d"},
{file = "pydantic-1.10.2-cp39-cp39-musllinux_1_1_x86_64.whl", hash = "sha256:2e05aed07fa02231dbf03d0adb1be1d79cabb09025dd45aa094aa8b4e7b9dcda"},
{file = "pydantic-1.10.2-cp39-cp39-win_amd64.whl", hash = "sha256:c1ba1afb396148bbc70e9eaa8c06c1716fdddabaf86e7027c5988bae2a829ab6"},
{file = "pydantic-1.10.2-py3-none-any.whl", hash = "sha256:1b6ee725bd6e83ec78b1aa32c5b1fa67a3a65badddde3976bca5fe4568f27709"},
{file = "pydantic-1.10.2.tar.gz", hash = "sha256:91b8e218852ef6007c2b98cd861601c6a09f1aa32bbbb74fab5b1c33d4a1e410"},
]
pydata-sphinx-theme = [
{file = "pydata_sphinx_theme-0.9.0-py3-none-any.whl", hash = "sha256:b22b442a6d6437e5eaf0a1f057169ffcb31eaa9f10be7d5481a125e735c71c12"},
{file = "pydata_sphinx_theme-0.9.0.tar.gz", hash = "sha256:03598a86915b596f4bf80bef79a4d33276a83e670bf360def699dbb9f99dc57a"},
]
pydot = [
{file = "pydot-1.4.2-py2.py3-none-any.whl", hash = "sha256:66c98190c65b8d2e2382a441b4c0edfdb4f4c025ef9cb9874de478fb0793a451"},
{file = "pydot-1.4.2.tar.gz", hash = "sha256:248081a39bcb56784deb018977e428605c1c758f10897a339fce1dd728ff007d"},
]
pydotplus = [
{file = "pydotplus-2.0.2.tar.gz", hash = "sha256:91e85e9ee9b85d2391ead7d635e3d9c7f5f44fd60a60e59b13e2403fa66505c4"},
]
pyflakes = [
{file = "pyflakes-2.4.0-py2.py3-none-any.whl", hash = "sha256:3bb3a3f256f4b7968c9c788781e4ff07dce46bdf12339dcda61053375426ee2e"},
{file = "pyflakes-2.4.0.tar.gz", hash = "sha256:05a85c2872edf37a4ed30b0cce2f6093e1d0581f8c19d7393122da7e25b2b24c"},
]
pygam = [
{file = "pygam-0.8.0-py2.py3-none-any.whl", hash = "sha256:198bd478700520b7c399cc4bcbc011e46850969c32fb09ef0b7a4bbb14e842a5"},
{file = "pygam-0.8.0.tar.gz", hash = "sha256:5cae01aea8b2fede72a6da0aba1490213af54b3476745666af26bbe700479166"},
]
Pygments = [
{file = "Pygments-2.13.0-py3-none-any.whl", hash = "sha256:f643f331ab57ba3c9d89212ee4a2dabc6e94f117cf4eefde99a0574720d14c42"},
{file = "Pygments-2.13.0.tar.gz", hash = "sha256:56a8508ae95f98e2b9bdf93a6be5ae3f7d8af858b43e02c5a2ff083726be40c1"},
]
pygraphviz = [
{file = "pygraphviz-1.10.zip", hash = "sha256:457e093a888128903251a266a8cc16b4ba93f3f6334b3ebfed92c7471a74d867"},
]
pyparsing = [
{file = "pyparsing-3.0.9-py3-none-any.whl", hash = "sha256:5026bae9a10eeaefb61dab2f09052b9f4307d44aee4eda64b309723d8d206bbc"},
{file = "pyparsing-3.0.9.tar.gz", hash = "sha256:2b020ecf7d21b687f219b71ecad3631f644a47f01403fa1d1036b0c6416d70fb"},
]
pyro-api = [
{file = "pyro-api-0.1.2.tar.gz", hash = "sha256:a1b900d9580aa1c2fab3b123ab7ff33413744da7c5f440bd4aadc4d40d14d920"},
{file = "pyro_api-0.1.2-py3-none-any.whl", hash = "sha256:10e0e42e9e4401ce464dab79c870e50dfb4f413d326fa777f3582928ef9caf8f"},
]
pyro-ppl = [
{file = "pyro-ppl-1.8.3.tar.gz", hash = "sha256:3edd4381b020d12e8ab50ebe0298c7a68d150b8a024f998ad86fdac7a308d50e"},
{file = "pyro_ppl-1.8.3-py3-none-any.whl", hash = "sha256:cf642cb8bd1a54ad9c69960a5910e423b33f5de3480589b5dcc5f11236b403fb"},
]
pyrsistent = [
{file = "pyrsistent-0.19.2-cp310-cp310-macosx_10_9_universal2.whl", hash = "sha256:d6982b5a0237e1b7d876b60265564648a69b14017f3b5f908c5be2de3f9abb7a"},
{file = "pyrsistent-0.19.2-cp310-cp310-manylinux_2_17_aarch64.manylinux2014_aarch64.whl", hash = "sha256:187d5730b0507d9285a96fca9716310d572e5464cadd19f22b63a6976254d77a"},
{file = "pyrsistent-0.19.2-cp310-cp310-manylinux_2_5_i686.manylinux1_i686.manylinux_2_17_i686.manylinux2014_i686.whl", hash = "sha256:055ab45d5911d7cae397dc418808d8802fb95262751872c841c170b0dbf51eed"},
{file = "pyrsistent-0.19.2-cp310-cp310-win32.whl", hash = "sha256:456cb30ca8bff00596519f2c53e42c245c09e1a4543945703acd4312949bfd41"},
{file = "pyrsistent-0.19.2-cp310-cp310-win_amd64.whl", hash = "sha256:b39725209e06759217d1ac5fcdb510e98670af9e37223985f330b611f62e7425"},
{file = "pyrsistent-0.19.2-cp37-cp37m-macosx_10_9_x86_64.whl", hash = "sha256:2aede922a488861de0ad00c7630a6e2d57e8023e4be72d9d7147a9fcd2d30712"},
{file = "pyrsistent-0.19.2-cp37-cp37m-manylinux_2_17_aarch64.manylinux2014_aarch64.whl", hash = "sha256:879b4c2f4d41585c42df4d7654ddffff1239dc4065bc88b745f0341828b83e78"},
{file = "pyrsistent-0.19.2-cp37-cp37m-manylinux_2_5_i686.manylinux1_i686.manylinux_2_17_i686.manylinux2014_i686.whl", hash = "sha256:c43bec251bbd10e3cb58ced80609c5c1eb238da9ca78b964aea410fb820d00d6"},
{file = "pyrsistent-0.19.2-cp37-cp37m-win32.whl", hash = "sha256:d690b18ac4b3e3cab73b0b7aa7dbe65978a172ff94970ff98d82f2031f8971c2"},
{file = "pyrsistent-0.19.2-cp37-cp37m-win_amd64.whl", hash = "sha256:3ba4134a3ff0fc7ad225b6b457d1309f4698108fb6b35532d015dca8f5abed73"},
{file = "pyrsistent-0.19.2-cp38-cp38-macosx_10_9_universal2.whl", hash = "sha256:a178209e2df710e3f142cbd05313ba0c5ebed0a55d78d9945ac7a4e09d923308"},
{file = "pyrsistent-0.19.2-cp38-cp38-manylinux_2_17_aarch64.manylinux2014_aarch64.whl", hash = "sha256:e371b844cec09d8dc424d940e54bba8f67a03ebea20ff7b7b0d56f526c71d584"},
{file = "pyrsistent-0.19.2-cp38-cp38-manylinux_2_5_i686.manylinux1_i686.manylinux_2_17_i686.manylinux2014_i686.whl", hash = "sha256:111156137b2e71f3a9936baf27cb322e8024dac3dc54ec7fb9f0bcf3249e68bb"},
{file = "pyrsistent-0.19.2-cp38-cp38-win32.whl", hash = "sha256:e5d8f84d81e3729c3b506657dddfe46e8ba9c330bf1858ee33108f8bb2adb38a"},
{file = "pyrsistent-0.19.2-cp38-cp38-win_amd64.whl", hash = "sha256:9cd3e9978d12b5d99cbdc727a3022da0430ad007dacf33d0bf554b96427f33ab"},
{file = "pyrsistent-0.19.2-cp39-cp39-macosx_10_9_universal2.whl", hash = "sha256:f1258f4e6c42ad0b20f9cfcc3ada5bd6b83374516cd01c0960e3cb75fdca6770"},
{file = "pyrsistent-0.19.2-cp39-cp39-manylinux_2_17_aarch64.manylinux2014_aarch64.whl", hash = "sha256:21455e2b16000440e896ab99e8304617151981ed40c29e9507ef1c2e4314ee95"},
{file = "pyrsistent-0.19.2-cp39-cp39-manylinux_2_5_i686.manylinux1_i686.manylinux_2_17_i686.manylinux2014_i686.whl", hash = "sha256:bfd880614c6237243ff53a0539f1cb26987a6dc8ac6e66e0c5a40617296a045e"},
{file = "pyrsistent-0.19.2-cp39-cp39-win32.whl", hash = "sha256:71d332b0320642b3261e9fee47ab9e65872c2bd90260e5d225dabeed93cbd42b"},
{file = "pyrsistent-0.19.2-cp39-cp39-win_amd64.whl", hash = "sha256:dec3eac7549869365fe263831f576c8457f6c833937c68542d08fde73457d291"},
{file = "pyrsistent-0.19.2-py3-none-any.whl", hash = "sha256:ea6b79a02a28550c98b6ca9c35b9f492beaa54d7c5c9e9949555893c8a9234d0"},
{file = "pyrsistent-0.19.2.tar.gz", hash = "sha256:bfa0351be89c9fcbcb8c9879b826f4353be10f58f8a677efab0c017bf7137ec2"},
]
pytest = [
{file = "pytest-7.2.0-py3-none-any.whl", hash = "sha256:892f933d339f068883b6fd5a459f03d85bfcb355e4981e146d2c7616c21fef71"},
{file = "pytest-7.2.0.tar.gz", hash = "sha256:c4014eb40e10f11f355ad4e3c2fb2c6c6d1919c73f3b5a433de4708202cade59"},
]
pytest-cov = [
{file = "pytest-cov-3.0.0.tar.gz", hash = "sha256:e7f0f5b1617d2210a2cabc266dfe2f4c75a8d32fb89eafb7ad9d06f6d076d470"},
{file = "pytest_cov-3.0.0-py3-none-any.whl", hash = "sha256:578d5d15ac4a25e5f961c938b85a05b09fdaae9deef3bb6de9a6e766622ca7a6"},
]
pytest-split = [
{file = "pytest-split-0.8.0.tar.gz", hash = "sha256:8571a3f60ca8656c698ed86b0a3212bb9e79586ecb201daef9988c336ff0e6ff"},
{file = "pytest_split-0.8.0-py3-none-any.whl", hash = "sha256:2e06b8b1ab7ceb19d0b001548271abaf91d12415a8687086cf40581c555d309f"},
]
python-dateutil = [
{file = "python-dateutil-2.8.2.tar.gz", hash = "sha256:0123cacc1627ae19ddf3c27a5de5bd67ee4586fbdd6440d9748f8abb483d3e86"},
{file = "python_dateutil-2.8.2-py2.py3-none-any.whl", hash = "sha256:961d03dc3453ebbc59dbdea9e4e11c5651520a876d0f4db161e8674aae935da9"},
]
python-utils = [
{file = "python-utils-3.4.5.tar.gz", hash = "sha256:7e329c427a6d23036cfcc4501638afb31b2ddc8896f25393562833874b8c6e0a"},
{file = "python_utils-3.4.5-py2.py3-none-any.whl", hash = "sha256:22990259324eae88faa3389d302861a825dbdd217ab40e3ec701851b3337d592"},
]
pytz = [
{file = "pytz-2022.6-py2.py3-none-any.whl", hash = "sha256:222439474e9c98fced559f1709d89e6c9cbf8d79c794ff3eb9f8800064291427"},
{file = "pytz-2022.6.tar.gz", hash = "sha256:e89512406b793ca39f5971bc999cc538ce125c0e51c27941bef4568b460095e2"},
]
pytz-deprecation-shim = [
{file = "pytz_deprecation_shim-0.1.0.post0-py2.py3-none-any.whl", hash = "sha256:8314c9692a636c8eb3bda879b9f119e350e93223ae83e70e80c31675a0fdc1a6"},
{file = "pytz_deprecation_shim-0.1.0.post0.tar.gz", hash = "sha256:af097bae1b616dde5c5744441e2ddc69e74dfdcb0c263129610d85b87445a59d"},
]
pywin32 = [
{file = "pywin32-305-cp310-cp310-win32.whl", hash = "sha256:421f6cd86e84bbb696d54563c48014b12a23ef95a14e0bdba526be756d89f116"},
{file = "pywin32-305-cp310-cp310-win_amd64.whl", hash = "sha256:73e819c6bed89f44ff1d690498c0a811948f73777e5f97c494c152b850fad478"},
{file = "pywin32-305-cp310-cp310-win_arm64.whl", hash = "sha256:742eb905ce2187133a29365b428e6c3b9001d79accdc30aa8969afba1d8470f4"},
{file = "pywin32-305-cp311-cp311-win32.whl", hash = "sha256:19ca459cd2e66c0e2cc9a09d589f71d827f26d47fe4a9d09175f6aa0256b51c2"},
{file = "pywin32-305-cp311-cp311-win_amd64.whl", hash = "sha256:326f42ab4cfff56e77e3e595aeaf6c216712bbdd91e464d167c6434b28d65990"},
{file = "pywin32-305-cp311-cp311-win_arm64.whl", hash = "sha256:4ecd404b2c6eceaca52f8b2e3e91b2187850a1ad3f8b746d0796a98b4cea04db"},
{file = "pywin32-305-cp36-cp36m-win32.whl", hash = "sha256:48d8b1659284f3c17b68587af047d110d8c44837736b8932c034091683e05863"},
{file = "pywin32-305-cp36-cp36m-win_amd64.whl", hash = "sha256:13362cc5aa93c2beaf489c9c9017c793722aeb56d3e5166dadd5ef82da021fe1"},
{file = "pywin32-305-cp37-cp37m-win32.whl", hash = "sha256:a55db448124d1c1484df22fa8bbcbc45c64da5e6eae74ab095b9ea62e6d00496"},
{file = "pywin32-305-cp37-cp37m-win_amd64.whl", hash = "sha256:109f98980bfb27e78f4df8a51a8198e10b0f347257d1e265bb1a32993d0c973d"},
{file = "pywin32-305-cp38-cp38-win32.whl", hash = "sha256:9dd98384da775afa009bc04863426cb30596fd78c6f8e4e2e5bbf4edf8029504"},
{file = "pywin32-305-cp38-cp38-win_amd64.whl", hash = "sha256:56d7a9c6e1a6835f521788f53b5af7912090674bb84ef5611663ee1595860fc7"},
{file = "pywin32-305-cp39-cp39-win32.whl", hash = "sha256:9d968c677ac4d5cbdaa62fd3014ab241718e619d8e36ef8e11fb930515a1e918"},
{file = "pywin32-305-cp39-cp39-win_amd64.whl", hash = "sha256:50768c6b7c3f0b38b7fb14dd4104da93ebced5f1a50dc0e834594bff6fbe1271"},
]
pywinpty = [
{file = "pywinpty-2.0.9-cp310-none-win_amd64.whl", hash = "sha256:30a7b371446a694a6ce5ef906d70ac04e569de5308c42a2bdc9c3bc9275ec51f"},
{file = "pywinpty-2.0.9-cp311-none-win_amd64.whl", hash = "sha256:d78ef6f4bd7a6c6f94dc1a39ba8fb028540cc39f5cb593e756506db17843125f"},
{file = "pywinpty-2.0.9-cp37-none-win_amd64.whl", hash = "sha256:5ed36aa087e35a3a183f833631b3e4c1ae92fe2faabfce0fa91b77ed3f0f1382"},
{file = "pywinpty-2.0.9-cp38-none-win_amd64.whl", hash = "sha256:2352f44ee913faaec0a02d3c112595e56b8af7feeb8100efc6dc1a8685044199"},
{file = "pywinpty-2.0.9-cp39-none-win_amd64.whl", hash = "sha256:ba75ec55f46c9e17db961d26485b033deb20758b1731e8e208e1e8a387fcf70c"},
{file = "pywinpty-2.0.9.tar.gz", hash = "sha256:01b6400dd79212f50a2f01af1c65b781290ff39610853db99bf03962eb9a615f"},
]
PyYAML = [
{file = "PyYAML-6.0-cp310-cp310-macosx_10_9_x86_64.whl", hash = "sha256:d4db7c7aef085872ef65a8fd7d6d09a14ae91f691dec3e87ee5ee0539d516f53"},
{file = "PyYAML-6.0-cp310-cp310-macosx_11_0_arm64.whl", hash = "sha256:9df7ed3b3d2e0ecfe09e14741b857df43adb5a3ddadc919a2d94fbdf78fea53c"},
{file = "PyYAML-6.0-cp310-cp310-manylinux_2_17_aarch64.manylinux2014_aarch64.whl", hash = "sha256:77f396e6ef4c73fdc33a9157446466f1cff553d979bd00ecb64385760c6babdc"},
{file = "PyYAML-6.0-cp310-cp310-manylinux_2_17_s390x.manylinux2014_s390x.whl", hash = "sha256:a80a78046a72361de73f8f395f1f1e49f956c6be882eed58505a15f3e430962b"},
{file = "PyYAML-6.0-cp310-cp310-manylinux_2_5_x86_64.manylinux1_x86_64.manylinux_2_12_x86_64.manylinux2010_x86_64.whl", hash = "sha256:f84fbc98b019fef2ee9a1cb3ce93e3187a6df0b2538a651bfb890254ba9f90b5"},
{file = "PyYAML-6.0-cp310-cp310-win32.whl", hash = "sha256:2cd5df3de48857ed0544b34e2d40e9fac445930039f3cfe4bcc592a1f836d513"},
{file = "PyYAML-6.0-cp310-cp310-win_amd64.whl", hash = "sha256:daf496c58a8c52083df09b80c860005194014c3698698d1a57cbcfa182142a3a"},
{file = "PyYAML-6.0-cp311-cp311-macosx_10_9_x86_64.whl", hash = "sha256:d4b0ba9512519522b118090257be113b9468d804b19d63c71dbcf4a48fa32358"},
{file = "PyYAML-6.0-cp311-cp311-macosx_11_0_arm64.whl", hash = "sha256:81957921f441d50af23654aa6c5e5eaf9b06aba7f0a19c18a538dc7ef291c5a1"},
{file = "PyYAML-6.0-cp311-cp311-manylinux_2_17_aarch64.manylinux2014_aarch64.whl", hash = "sha256:afa17f5bc4d1b10afd4466fd3a44dc0e245382deca5b3c353d8b757f9e3ecb8d"},
{file = "PyYAML-6.0-cp311-cp311-manylinux_2_17_s390x.manylinux2014_s390x.whl", hash = "sha256:dbad0e9d368bb989f4515da330b88a057617d16b6a8245084f1b05400f24609f"},
{file = "PyYAML-6.0-cp311-cp311-manylinux_2_17_x86_64.manylinux2014_x86_64.whl", hash = "sha256:432557aa2c09802be39460360ddffd48156e30721f5e8d917f01d31694216782"},
{file = "PyYAML-6.0-cp311-cp311-win32.whl", hash = "sha256:bfaef573a63ba8923503d27530362590ff4f576c626d86a9fed95822a8255fd7"},
{file = "PyYAML-6.0-cp311-cp311-win_amd64.whl", hash = "sha256:01b45c0191e6d66c470b6cf1b9531a771a83c1c4208272ead47a3ae4f2f603bf"},
{file = "PyYAML-6.0-cp36-cp36m-macosx_10_9_x86_64.whl", hash = "sha256:897b80890765f037df3403d22bab41627ca8811ae55e9a722fd0392850ec4d86"},
{file = "PyYAML-6.0-cp36-cp36m-manylinux_2_17_aarch64.manylinux2014_aarch64.whl", hash = "sha256:50602afada6d6cbfad699b0c7bb50d5ccffa7e46a3d738092afddc1f9758427f"},
{file = "PyYAML-6.0-cp36-cp36m-manylinux_2_17_s390x.manylinux2014_s390x.whl", hash = "sha256:48c346915c114f5fdb3ead70312bd042a953a8ce5c7106d5bfb1a5254e47da92"},
{file = "PyYAML-6.0-cp36-cp36m-manylinux_2_5_x86_64.manylinux1_x86_64.manylinux_2_12_x86_64.manylinux2010_x86_64.whl", hash = "sha256:98c4d36e99714e55cfbaaee6dd5badbc9a1ec339ebfc3b1f52e293aee6bb71a4"},
{file = "PyYAML-6.0-cp36-cp36m-win32.whl", hash = "sha256:0283c35a6a9fbf047493e3a0ce8d79ef5030852c51e9d911a27badfde0605293"},
{file = "PyYAML-6.0-cp36-cp36m-win_amd64.whl", hash = "sha256:07751360502caac1c067a8132d150cf3d61339af5691fe9e87803040dbc5db57"},
{file = "PyYAML-6.0-cp37-cp37m-macosx_10_9_x86_64.whl", hash = "sha256:819b3830a1543db06c4d4b865e70ded25be52a2e0631ccd2f6a47a2822f2fd7c"},
{file = "PyYAML-6.0-cp37-cp37m-manylinux_2_17_aarch64.manylinux2014_aarch64.whl", hash = "sha256:473f9edb243cb1935ab5a084eb238d842fb8f404ed2193a915d1784b5a6b5fc0"},
{file = "PyYAML-6.0-cp37-cp37m-manylinux_2_17_s390x.manylinux2014_s390x.whl", hash = "sha256:0ce82d761c532fe4ec3f87fc45688bdd3a4c1dc5e0b4a19814b9009a29baefd4"},
{file = "PyYAML-6.0-cp37-cp37m-manylinux_2_5_x86_64.manylinux1_x86_64.manylinux_2_12_x86_64.manylinux2010_x86_64.whl", hash = "sha256:231710d57adfd809ef5d34183b8ed1eeae3f76459c18fb4a0b373ad56bedcdd9"},
{file = "PyYAML-6.0-cp37-cp37m-win32.whl", hash = "sha256:c5687b8d43cf58545ade1fe3e055f70eac7a5a1a0bf42824308d868289a95737"},
{file = "PyYAML-6.0-cp37-cp37m-win_amd64.whl", hash = "sha256:d15a181d1ecd0d4270dc32edb46f7cb7733c7c508857278d3d378d14d606db2d"},
{file = "PyYAML-6.0-cp38-cp38-macosx_10_9_x86_64.whl", hash = "sha256:0b4624f379dab24d3725ffde76559cff63d9ec94e1736b556dacdfebe5ab6d4b"},
{file = "PyYAML-6.0-cp38-cp38-manylinux_2_17_aarch64.manylinux2014_aarch64.whl", hash = "sha256:213c60cd50106436cc818accf5baa1aba61c0189ff610f64f4a3e8c6726218ba"},
{file = "PyYAML-6.0-cp38-cp38-manylinux_2_17_s390x.manylinux2014_s390x.whl", hash = "sha256:9fa600030013c4de8165339db93d182b9431076eb98eb40ee068700c9c813e34"},
{file = "PyYAML-6.0-cp38-cp38-manylinux_2_5_x86_64.manylinux1_x86_64.manylinux_2_12_x86_64.manylinux2010_x86_64.whl", hash = "sha256:277a0ef2981ca40581a47093e9e2d13b3f1fbbeffae064c1d21bfceba2030287"},
{file = "PyYAML-6.0-cp38-cp38-win32.whl", hash = "sha256:d4eccecf9adf6fbcc6861a38015c2a64f38b9d94838ac1810a9023a0609e1b78"},
{file = "PyYAML-6.0-cp38-cp38-win_amd64.whl", hash = "sha256:1e4747bc279b4f613a09eb64bba2ba602d8a6664c6ce6396a4d0cd413a50ce07"},
{file = "PyYAML-6.0-cp39-cp39-macosx_10_9_x86_64.whl", hash = "sha256:055d937d65826939cb044fc8c9b08889e8c743fdc6a32b33e2390f66013e449b"},
{file = "PyYAML-6.0-cp39-cp39-macosx_11_0_arm64.whl", hash = "sha256:e61ceaab6f49fb8bdfaa0f92c4b57bcfbea54c09277b1b4f7ac376bfb7a7c174"},
{file = "PyYAML-6.0-cp39-cp39-manylinux_2_17_aarch64.manylinux2014_aarch64.whl", hash = "sha256:d67d839ede4ed1b28a4e8909735fc992a923cdb84e618544973d7dfc71540803"},
{file = "PyYAML-6.0-cp39-cp39-manylinux_2_17_s390x.manylinux2014_s390x.whl", hash = "sha256:cba8c411ef271aa037d7357a2bc8f9ee8b58b9965831d9e51baf703280dc73d3"},
{file = "PyYAML-6.0-cp39-cp39-manylinux_2_5_x86_64.manylinux1_x86_64.manylinux_2_12_x86_64.manylinux2010_x86_64.whl", hash = "sha256:40527857252b61eacd1d9af500c3337ba8deb8fc298940291486c465c8b46ec0"},
{file = "PyYAML-6.0-cp39-cp39-win32.whl", hash = "sha256:b5b9eccad747aabaaffbc6064800670f0c297e52c12754eb1d976c57e4f74dcb"},
{file = "PyYAML-6.0-cp39-cp39-win_amd64.whl", hash = "sha256:b3d267842bf12586ba6c734f89d1f5b871df0273157918b0ccefa29deb05c21c"},
{file = "PyYAML-6.0.tar.gz", hash = "sha256:68fb519c14306fec9720a2a5b45bc9f0c8d1b9c72adf45c37baedfcd949c35a2"},
]
pyzmq = [
{file = "pyzmq-24.0.1-cp310-cp310-macosx_10_15_universal2.whl", hash = "sha256:28b119ba97129d3001673a697b7cce47fe6de1f7255d104c2f01108a5179a066"},
{file = "pyzmq-24.0.1-cp310-cp310-macosx_10_9_x86_64.whl", hash = "sha256:bcbebd369493d68162cddb74a9c1fcebd139dfbb7ddb23d8f8e43e6c87bac3a6"},
{file = "pyzmq-24.0.1-cp310-cp310-manylinux_2_17_aarch64.manylinux2014_aarch64.whl", hash = "sha256:ae61446166983c663cee42c852ed63899e43e484abf080089f771df4b9d272ef"},
{file = "pyzmq-24.0.1-cp310-cp310-manylinux_2_17_i686.manylinux2014_i686.whl", hash = "sha256:87f7ac99b15270db8d53f28c3c7b968612993a90a5cf359da354efe96f5372b4"},
{file = "pyzmq-24.0.1-cp310-cp310-manylinux_2_17_x86_64.manylinux2014_x86_64.whl", hash = "sha256:9dca7c3956b03b7663fac4d150f5e6d4f6f38b2462c1e9afd83bcf7019f17913"},
{file = "pyzmq-24.0.1-cp310-cp310-musllinux_1_1_aarch64.whl", hash = "sha256:8c78bfe20d4c890cb5580a3b9290f700c570e167d4cdcc55feec07030297a5e3"},
{file = "pyzmq-24.0.1-cp310-cp310-musllinux_1_1_i686.whl", hash = "sha256:48f721f070726cd2a6e44f3c33f8ee4b24188e4b816e6dd8ba542c8c3bb5b246"},
{file = "pyzmq-24.0.1-cp310-cp310-musllinux_1_1_x86_64.whl", hash = "sha256:afe1f3bc486d0ce40abb0a0c9adb39aed3bbac36ebdc596487b0cceba55c21c1"},
{file = "pyzmq-24.0.1-cp310-cp310-win32.whl", hash = "sha256:3e6192dbcefaaa52ed81be88525a54a445f4b4fe2fffcae7fe40ebb58bd06bfd"},
{file = "pyzmq-24.0.1-cp310-cp310-win_amd64.whl", hash = "sha256:86de64468cad9c6d269f32a6390e210ca5ada568c7a55de8e681ca3b897bb340"},
{file = "pyzmq-24.0.1-cp311-cp311-macosx_10_15_universal2.whl", hash = "sha256:838812c65ed5f7c2bd11f7b098d2e5d01685a3f6d1f82849423b570bae698c00"},
{file = "pyzmq-24.0.1-cp311-cp311-macosx_10_9_x86_64.whl", hash = "sha256:dfb992dbcd88d8254471760879d48fb20836d91baa90f181c957122f9592b3dc"},
{file = "pyzmq-24.0.1-cp311-cp311-manylinux_2_17_aarch64.manylinux2014_aarch64.whl", hash = "sha256:7abddb2bd5489d30ffeb4b93a428130886c171b4d355ccd226e83254fcb6b9ef"},
{file = "pyzmq-24.0.1-cp311-cp311-manylinux_2_17_i686.manylinux2014_i686.whl", hash = "sha256:94010bd61bc168c103a5b3b0f56ed3b616688192db7cd5b1d626e49f28ff51b3"},
{file = "pyzmq-24.0.1-cp311-cp311-manylinux_2_28_x86_64.whl", hash = "sha256:8242543c522d84d033fe79be04cb559b80d7eb98ad81b137ff7e0a9020f00ace"},
{file = "pyzmq-24.0.1-cp311-cp311-musllinux_1_1_aarch64.whl", hash = "sha256:ccb94342d13e3bf3ffa6e62f95b5e3f0bc6bfa94558cb37f4b3d09d6feb536ff"},
{file = "pyzmq-24.0.1-cp311-cp311-musllinux_1_1_i686.whl", hash = "sha256:6640f83df0ae4ae1104d4c62b77e9ef39be85ebe53f636388707d532bee2b7b8"},
{file = "pyzmq-24.0.1-cp311-cp311-musllinux_1_1_x86_64.whl", hash = "sha256:a180dbd5ea5d47c2d3b716d5c19cc3fb162d1c8db93b21a1295d69585bfddac1"},
{file = "pyzmq-24.0.1-cp311-cp311-win32.whl", hash = "sha256:624321120f7e60336be8ec74a172ae7fba5c3ed5bf787cc85f7e9986c9e0ebc2"},
{file = "pyzmq-24.0.1-cp311-cp311-win_amd64.whl", hash = "sha256:1724117bae69e091309ffb8255412c4651d3f6355560d9af312d547f6c5bc8b8"},
{file = "pyzmq-24.0.1-cp36-cp36m-macosx_10_9_x86_64.whl", hash = "sha256:15975747462ec49fdc863af906bab87c43b2491403ab37a6d88410635786b0f4"},
{file = "pyzmq-24.0.1-cp36-cp36m-manylinux_2_17_aarch64.manylinux2014_aarch64.whl", hash = "sha256:b947e264f0e77d30dcbccbb00f49f900b204b922eb0c3a9f0afd61aaa1cedc3d"},
{file = "pyzmq-24.0.1-cp36-cp36m-manylinux_2_5_i686.manylinux1_i686.whl", hash = "sha256:0ec91f1bad66f3ee8c6deb65fa1fe418e8ad803efedd69c35f3b5502f43bd1dc"},
{file = "pyzmq-24.0.1-cp36-cp36m-manylinux_2_5_x86_64.manylinux1_x86_64.whl", hash = "sha256:db03704b3506455d86ec72c3358a779e9b1d07b61220dfb43702b7b668edcd0d"},
{file = "pyzmq-24.0.1-cp36-cp36m-musllinux_1_1_aarch64.whl", hash = "sha256:e7e66b4e403c2836ac74f26c4b65d8ac0ca1eef41dfcac2d013b7482befaad83"},
{file = "pyzmq-24.0.1-cp36-cp36m-musllinux_1_1_i686.whl", hash = "sha256:7a23ccc1083c260fa9685c93e3b170baba45aeed4b524deb3f426b0c40c11639"},
{file = "pyzmq-24.0.1-cp36-cp36m-musllinux_1_1_x86_64.whl", hash = "sha256:fa0ae3275ef706c0309556061185dd0e4c4cd3b7d6f67ae617e4e677c7a41e2e"},
{file = "pyzmq-24.0.1-cp36-cp36m-win32.whl", hash = "sha256:f01de4ec083daebf210531e2cca3bdb1608dbbbe00a9723e261d92087a1f6ebc"},
{file = "pyzmq-24.0.1-cp36-cp36m-win_amd64.whl", hash = "sha256:de4217b9eb8b541cf2b7fde4401ce9d9a411cc0af85d410f9d6f4333f43640be"},
{file = "pyzmq-24.0.1-cp37-cp37m-macosx_10_9_x86_64.whl", hash = "sha256:78068e8678ca023594e4a0ab558905c1033b2d3e806a0ad9e3094e231e115a33"},
{file = "pyzmq-24.0.1-cp37-cp37m-manylinux_2_17_aarch64.manylinux2014_aarch64.whl", hash = "sha256:77c2713faf25a953c69cf0f723d1b7dd83827b0834e6c41e3fb3bbc6765914a1"},
{file = "pyzmq-24.0.1-cp37-cp37m-manylinux_2_5_i686.manylinux1_i686.whl", hash = "sha256:8bb4af15f305056e95ca1bd086239b9ebc6ad55e9f49076d27d80027f72752f6"},
{file = "pyzmq-24.0.1-cp37-cp37m-manylinux_2_5_x86_64.manylinux1_x86_64.whl", hash = "sha256:0f14cffd32e9c4c73da66db97853a6aeceaac34acdc0fae9e5bbc9370281864c"},
{file = "pyzmq-24.0.1-cp37-cp37m-musllinux_1_1_aarch64.whl", hash = "sha256:0108358dab8c6b27ff6b985c2af4b12665c1bc659648284153ee501000f5c107"},
{file = "pyzmq-24.0.1-cp37-cp37m-musllinux_1_1_i686.whl", hash = "sha256:d66689e840e75221b0b290b0befa86f059fb35e1ee6443bce51516d4d61b6b99"},
{file = "pyzmq-24.0.1-cp37-cp37m-musllinux_1_1_x86_64.whl", hash = "sha256:ae08ac90aa8fa14caafc7a6251bd218bf6dac518b7bff09caaa5e781119ba3f2"},
{file = "pyzmq-24.0.1-cp37-cp37m-win32.whl", hash = "sha256:8421aa8c9b45ea608c205db9e1c0c855c7e54d0e9c2c2f337ce024f6843cab3b"},
{file = "pyzmq-24.0.1-cp37-cp37m-win_amd64.whl", hash = "sha256:54d8b9c5e288362ec8595c1d98666d36f2070fd0c2f76e2b3c60fbad9bd76227"},
{file = "pyzmq-24.0.1-cp38-cp38-macosx_10_15_universal2.whl", hash = "sha256:acbd0a6d61cc954b9f535daaa9ec26b0a60a0d4353c5f7c1438ebc88a359a47e"},
{file = "pyzmq-24.0.1-cp38-cp38-macosx_10_9_x86_64.whl", hash = "sha256:47b11a729d61a47df56346283a4a800fa379ae6a85870d5a2e1e4956c828eedc"},
{file = "pyzmq-24.0.1-cp38-cp38-manylinux_2_12_i686.manylinux2010_i686.whl", hash = "sha256:abe6eb10122f0d746a0d510c2039ae8edb27bc9af29f6d1b05a66cc2401353ff"},
{file = "pyzmq-24.0.1-cp38-cp38-manylinux_2_12_x86_64.manylinux2010_x86_64.whl", hash = "sha256:07bec1a1b22dacf718f2c0e71b49600bb6a31a88f06527dfd0b5aababe3fa3f7"},
{file = "pyzmq-24.0.1-cp38-cp38-manylinux_2_17_aarch64.manylinux2014_aarch64.whl", hash = "sha256:f0d945a85b70da97ae86113faf9f1b9294efe66bd4a5d6f82f2676d567338b66"},
{file = "pyzmq-24.0.1-cp38-cp38-musllinux_1_1_aarch64.whl", hash = "sha256:1b7928bb7580736ffac5baf814097be342ba08d3cfdfb48e52773ec959572287"},
{file = "pyzmq-24.0.1-cp38-cp38-musllinux_1_1_i686.whl", hash = "sha256:b946da90dc2799bcafa682692c1d2139b2a96ec3c24fa9fc6f5b0da782675330"},
{file = "pyzmq-24.0.1-cp38-cp38-musllinux_1_1_x86_64.whl", hash = "sha256:c8840f064b1fb377cffd3efeaad2b190c14d4c8da02316dae07571252d20b31f"},
{file = "pyzmq-24.0.1-cp38-cp38-win32.whl", hash = "sha256:4854f9edc5208f63f0841c0c667260ae8d6846cfa233c479e29fdc85d42ebd58"},
{file = "pyzmq-24.0.1-cp38-cp38-win_amd64.whl", hash = "sha256:42d4f97b9795a7aafa152a36fe2ad44549b83a743fd3e77011136def512e6c2a"},
{file = "pyzmq-24.0.1-cp39-cp39-macosx_10_15_universal2.whl", hash = "sha256:52afb0ac962963fff30cf1be775bc51ae083ef4c1e354266ab20e5382057dd62"},
{file = "pyzmq-24.0.1-cp39-cp39-macosx_10_9_x86_64.whl", hash = "sha256:8bad8210ad4df68c44ff3685cca3cda448ee46e20d13edcff8909eba6ec01ca4"},
{file = "pyzmq-24.0.1-cp39-cp39-manylinux_2_12_i686.manylinux2010_i686.whl", hash = "sha256:dabf1a05318d95b1537fd61d9330ef4313ea1216eea128a17615038859da3b3b"},
{file = "pyzmq-24.0.1-cp39-cp39-manylinux_2_12_x86_64.manylinux2010_x86_64.whl", hash = "sha256:5bd3d7dfd9cd058eb68d9a905dec854f86649f64d4ddf21f3ec289341386c44b"},
{file = "pyzmq-24.0.1-cp39-cp39-manylinux_2_17_aarch64.manylinux2014_aarch64.whl", hash = "sha256:e8012bce6836d3f20a6c9599f81dfa945f433dab4dbd0c4917a6fb1f998ab33d"},
{file = "pyzmq-24.0.1-cp39-cp39-musllinux_1_1_aarch64.whl", hash = "sha256:c31805d2c8ade9b11feca4674eee2b9cce1fec3e8ddb7bbdd961a09dc76a80ea"},
{file = "pyzmq-24.0.1-cp39-cp39-musllinux_1_1_i686.whl", hash = "sha256:3104f4b084ad5d9c0cb87445cc8cfd96bba710bef4a66c2674910127044df209"},
{file = "pyzmq-24.0.1-cp39-cp39-musllinux_1_1_x86_64.whl", hash = "sha256:df0841f94928f8af9c7a1f0aaaffba1fb74607af023a152f59379c01c53aee58"},
{file = "pyzmq-24.0.1-cp39-cp39-win32.whl", hash = "sha256:a435ef8a3bd95c8a2d316d6e0ff70d0db524f6037411652803e118871d703333"},
{file = "pyzmq-24.0.1-cp39-cp39-win_amd64.whl", hash = "sha256:2032d9cb994ce3b4cba2b8dfae08c7e25bc14ba484c770d4d3be33c27de8c45b"},
{file = "pyzmq-24.0.1-pp37-pypy37_pp73-macosx_10_9_x86_64.whl", hash = "sha256:bb5635c851eef3a7a54becde6da99485eecf7d068bd885ac8e6d173c4ecd68b0"},
{file = "pyzmq-24.0.1-pp37-pypy37_pp73-manylinux_2_12_i686.manylinux2010_i686.whl", hash = "sha256:83ea1a398f192957cb986d9206ce229efe0ee75e3c6635baff53ddf39bd718d5"},
{file = "pyzmq-24.0.1-pp37-pypy37_pp73-manylinux_2_12_x86_64.manylinux2010_x86_64.whl", hash = "sha256:941fab0073f0a54dc33d1a0460cb04e0d85893cb0c5e1476c785000f8b359409"},
{file = "pyzmq-24.0.1-pp37-pypy37_pp73-manylinux_2_17_aarch64.manylinux2014_aarch64.whl", hash = "sha256:0e8f482c44ccb5884bf3f638f29bea0f8dc68c97e38b2061769c4cb697f6140d"},
{file = "pyzmq-24.0.1-pp37-pypy37_pp73-win_amd64.whl", hash = "sha256:613010b5d17906c4367609e6f52e9a2595e35d5cc27d36ff3f1b6fa6e954d944"},
{file = "pyzmq-24.0.1-pp38-pypy38_pp73-macosx_10_9_x86_64.whl", hash = "sha256:65c94410b5a8355cfcf12fd600a313efee46ce96a09e911ea92cf2acf6708804"},
{file = "pyzmq-24.0.1-pp38-pypy38_pp73-manylinux_2_12_i686.manylinux2010_i686.whl", hash = "sha256:20e7eeb1166087db636c06cae04a1ef59298627f56fb17da10528ab52a14c87f"},
{file = "pyzmq-24.0.1-pp38-pypy38_pp73-manylinux_2_12_x86_64.manylinux2010_x86_64.whl", hash = "sha256:a2712aee7b3834ace51738c15d9ee152cc5a98dc7d57dd93300461b792ab7b43"},
{file = "pyzmq-24.0.1-pp38-pypy38_pp73-manylinux_2_17_aarch64.manylinux2014_aarch64.whl", hash = "sha256:1a7c280185c4da99e0cc06c63bdf91f5b0b71deb70d8717f0ab870a43e376db8"},
{file = "pyzmq-24.0.1-pp38-pypy38_pp73-win_amd64.whl", hash = "sha256:858375573c9225cc8e5b49bfac846a77b696b8d5e815711b8d4ba3141e6e8879"},
{file = "pyzmq-24.0.1-pp39-pypy39_pp73-macosx_10_9_x86_64.whl", hash = "sha256:80093b595921eed1a2cead546a683b9e2ae7f4a4592bb2ab22f70d30174f003a"},
{file = "pyzmq-24.0.1-pp39-pypy39_pp73-manylinux_2_17_aarch64.manylinux2014_aarch64.whl", hash = "sha256:8f3f3154fde2b1ff3aa7b4f9326347ebc89c8ef425ca1db8f665175e6d3bd42f"},
{file = "pyzmq-24.0.1-pp39-pypy39_pp73-manylinux_2_17_i686.manylinux2014_i686.whl", hash = "sha256:abb756147314430bee5d10919b8493c0ccb109ddb7f5dfd2fcd7441266a25b75"},
{file = "pyzmq-24.0.1-pp39-pypy39_pp73-manylinux_2_17_x86_64.manylinux2014_x86_64.whl", hash = "sha256:44e706bac34e9f50779cb8c39f10b53a4d15aebb97235643d3112ac20bd577b4"},
{file = "pyzmq-24.0.1-pp39-pypy39_pp73-win_amd64.whl", hash = "sha256:687700f8371643916a1d2c61f3fdaa630407dd205c38afff936545d7b7466066"},
{file = "pyzmq-24.0.1.tar.gz", hash = "sha256:216f5d7dbb67166759e59b0479bca82b8acf9bed6015b526b8eb10143fb08e77"},
]
qtconsole = [
{file = "qtconsole-5.4.0-py3-none-any.whl", hash = "sha256:be13560c19bdb3b54ed9741a915aa701a68d424519e8341ac479a91209e694b2"},
{file = "qtconsole-5.4.0.tar.gz", hash = "sha256:57748ea2fd26320a0b77adba20131cfbb13818c7c96d83fafcb110ff55f58b35"},
]
QtPy = [
{file = "QtPy-2.3.0-py3-none-any.whl", hash = "sha256:8d6d544fc20facd27360ea189592e6135c614785f0dec0b4f083289de6beb408"},
{file = "QtPy-2.3.0.tar.gz", hash = "sha256:0603c9c83ccc035a4717a12908bf6bc6cb22509827ea2ec0e94c2da7c9ed57c5"},
]
requests = [
{file = "requests-2.28.1-py3-none-any.whl", hash = "sha256:8fefa2a1a1365bf5520aac41836fbee479da67864514bdb821f31ce07ce65349"},
{file = "requests-2.28.1.tar.gz", hash = "sha256:7c5599b102feddaa661c826c56ab4fee28bfd17f5abca1ebbe3e7f19d7c97983"},
]
requests-oauthlib = [
{file = "requests-oauthlib-1.3.1.tar.gz", hash = "sha256:75beac4a47881eeb94d5ea5d6ad31ef88856affe2332b9aafb52c6452ccf0d7a"},
{file = "requests_oauthlib-1.3.1-py2.py3-none-any.whl", hash = "sha256:2577c501a2fb8d05a304c09d090d6e47c306fef15809d102b327cf8364bddab5"},
]
rpy2 = [
{file = "rpy2-3.5.6-cp310-cp310-macosx_10_15_x86_64.whl", hash = "sha256:7f56bb66d95aaa59f52c82bdff3bb268a5745cc3779839ca1ac9aecfc411c17a"},
{file = "rpy2-3.5.6-cp37-cp37m-macosx_10_15_x86_64.whl", hash = "sha256:defff796b43fe230e1e698a1bc353b7a4a25d4d9de856ee1bcffd6831edc825c"},
{file = "rpy2-3.5.6-cp38-cp38-macosx_10_15_x86_64.whl", hash = "sha256:a3f74cd54bd2e21a94274ae5306113e24f8a15c034b15be931188939292b49f7"},
{file = "rpy2-3.5.6-cp39-cp39-macosx_10_15_x86_64.whl", hash = "sha256:6a2e4be001b98c00f084a561cfcf9ca52f938cd8fcd8acfa0fbfc6a8be219339"},
{file = "rpy2-3.5.6.tar.gz", hash = "sha256:3404f1031d2d8ff8a1002656ab8e394b8ac16dd34ca43af68deed102f396e771"},
]
rsa = [
{file = "rsa-4.9-py3-none-any.whl", hash = "sha256:90260d9058e514786967344d0ef75fa8727eed8a7d2e43ce9f4bcf1b536174f7"},
{file = "rsa-4.9.tar.gz", hash = "sha256:e38464a49c6c85d7f1351b0126661487a7e0a14a50f1675ec50eb34d4f20ef21"},
]
s3transfer = [
{file = "s3transfer-0.6.0-py3-none-any.whl", hash = "sha256:06176b74f3a15f61f1b4f25a1fc29a4429040b7647133a463da8fa5bd28d5ecd"},
{file = "s3transfer-0.6.0.tar.gz", hash = "sha256:2ed07d3866f523cc561bf4a00fc5535827981b117dd7876f036b0c1aca42c947"},
]
scikit-learn = [
{file = "scikit-learn-1.0.2.tar.gz", hash = "sha256:b5870959a5484b614f26d31ca4c17524b1b0317522199dc985c3b4256e030767"},
{file = "scikit_learn-1.0.2-cp310-cp310-macosx_10_13_x86_64.whl", hash = "sha256:da3c84694ff693b5b3194d8752ccf935a665b8b5edc33a283122f4273ca3e687"},
{file = "scikit_learn-1.0.2-cp310-cp310-macosx_12_0_arm64.whl", hash = "sha256:75307d9ea39236cad7eea87143155eea24d48f93f3a2f9389c817f7019f00705"},
{file = "scikit_learn-1.0.2-cp310-cp310-manylinux_2_17_aarch64.manylinux2014_aarch64.whl", hash = "sha256:f14517e174bd7332f1cca2c959e704696a5e0ba246eb8763e6c24876d8710049"},
{file = "scikit_learn-1.0.2-cp310-cp310-manylinux_2_17_x86_64.manylinux2014_x86_64.whl", hash = "sha256:d9aac97e57c196206179f674f09bc6bffcd0284e2ba95b7fe0b402ac3f986023"},
{file = "scikit_learn-1.0.2-cp310-cp310-win_amd64.whl", hash = "sha256:d93d4c28370aea8a7cbf6015e8a669cd5d69f856cc2aa44e7a590fb805bb5583"},
{file = "scikit_learn-1.0.2-cp37-cp37m-macosx_10_13_x86_64.whl", hash = "sha256:85260fb430b795d806251dd3bb05e6f48cdc777ac31f2bcf2bc8bbed3270a8f5"},
{file = "scikit_learn-1.0.2-cp37-cp37m-manylinux_2_12_i686.manylinux2010_i686.whl", hash = "sha256:a053a6a527c87c5c4fa7bf1ab2556fa16d8345cf99b6c5a19030a4a7cd8fd2c0"},
{file = "scikit_learn-1.0.2-cp37-cp37m-manylinux_2_12_x86_64.manylinux2010_x86_64.whl", hash = "sha256:245c9b5a67445f6f044411e16a93a554edc1efdcce94d3fc0bc6a4b9ac30b752"},
{file = "scikit_learn-1.0.2-cp37-cp37m-manylinux_2_17_aarch64.manylinux2014_aarch64.whl", hash = "sha256:158faf30684c92a78e12da19c73feff9641a928a8024b4fa5ec11d583f3d8a87"},
{file = "scikit_learn-1.0.2-cp37-cp37m-manylinux_2_17_i686.manylinux2014_i686.whl", hash = "sha256:08ef968f6b72033c16c479c966bf37ccd49b06ea91b765e1cc27afefe723920b"},
{file = "scikit_learn-1.0.2-cp37-cp37m-manylinux_2_17_x86_64.manylinux2014_x86_64.whl", hash = "sha256:16455ace947d8d9e5391435c2977178d0ff03a261571e67f627c8fee0f9d431a"},
{file = "scikit_learn-1.0.2-cp37-cp37m-win32.whl", hash = "sha256:2f3b453e0b149898577e301d27e098dfe1a36943f7bb0ad704d1e548efc3b448"},
{file = "scikit_learn-1.0.2-cp37-cp37m-win_amd64.whl", hash = "sha256:46f431ec59dead665e1370314dbebc99ead05e1c0a9df42f22d6a0e00044820f"},
{file = "scikit_learn-1.0.2-cp38-cp38-macosx_10_13_x86_64.whl", hash = "sha256:ff3fa8ea0e09e38677762afc6e14cad77b5e125b0ea70c9bba1992f02c93b028"},
{file = "scikit_learn-1.0.2-cp38-cp38-macosx_12_0_arm64.whl", hash = "sha256:9369b030e155f8188743eb4893ac17a27f81d28a884af460870c7c072f114243"},
{file = "scikit_learn-1.0.2-cp38-cp38-manylinux_2_12_i686.manylinux2010_i686.whl", hash = "sha256:7d6b2475f1c23a698b48515217eb26b45a6598c7b1840ba23b3c5acece658dbb"},
{file = "scikit_learn-1.0.2-cp38-cp38-manylinux_2_12_x86_64.manylinux2010_x86_64.whl", hash = "sha256:285db0352e635b9e3392b0b426bc48c3b485512d3b4ac3c7a44ec2a2ba061e66"},
{file = "scikit_learn-1.0.2-cp38-cp38-manylinux_2_17_aarch64.manylinux2014_aarch64.whl", hash = "sha256:5cb33fe1dc6f73dc19e67b264dbb5dde2a0539b986435fdd78ed978c14654830"},
{file = "scikit_learn-1.0.2-cp38-cp38-manylinux_2_17_i686.manylinux2014_i686.whl", hash = "sha256:b1391d1a6e2268485a63c3073111fe3ba6ec5145fc957481cfd0652be571226d"},
{file = "scikit_learn-1.0.2-cp38-cp38-manylinux_2_17_x86_64.manylinux2014_x86_64.whl", hash = "sha256:bc3744dabc56b50bec73624aeca02e0def06b03cb287de26836e730659c5d29c"},
{file = "scikit_learn-1.0.2-cp38-cp38-win32.whl", hash = "sha256:a999c9f02ff9570c783069f1074f06fe7386ec65b84c983db5aeb8144356a355"},
{file = "scikit_learn-1.0.2-cp38-cp38-win_amd64.whl", hash = "sha256:7626a34eabbf370a638f32d1a3ad50526844ba58d63e3ab81ba91e2a7c6d037e"},
{file = "scikit_learn-1.0.2-cp39-cp39-macosx_10_13_x86_64.whl", hash = "sha256:a90b60048f9ffdd962d2ad2fb16367a87ac34d76e02550968719eb7b5716fd10"},
{file = "scikit_learn-1.0.2-cp39-cp39-macosx_12_0_arm64.whl", hash = "sha256:7a93c1292799620df90348800d5ac06f3794c1316ca247525fa31169f6d25855"},
{file = "scikit_learn-1.0.2-cp39-cp39-manylinux_2_12_i686.manylinux2010_i686.whl", hash = "sha256:eabceab574f471de0b0eb3f2ecf2eee9f10b3106570481d007ed1c84ebf6d6a1"},
{file = "scikit_learn-1.0.2-cp39-cp39-manylinux_2_12_x86_64.manylinux2010_x86_64.whl", hash = "sha256:55f2f3a8414e14fbee03782f9fe16cca0f141d639d2b1c1a36779fa069e1db57"},
{file = "scikit_learn-1.0.2-cp39-cp39-manylinux_2_17_aarch64.manylinux2014_aarch64.whl", hash = "sha256:80095a1e4b93bd33261ef03b9bc86d6db649f988ea4dbcf7110d0cded8d7213d"},
{file = "scikit_learn-1.0.2-cp39-cp39-manylinux_2_17_i686.manylinux2014_i686.whl", hash = "sha256:fa38a1b9b38ae1fad2863eff5e0d69608567453fdfc850c992e6e47eb764e846"},
{file = "scikit_learn-1.0.2-cp39-cp39-manylinux_2_17_x86_64.manylinux2014_x86_64.whl", hash = "sha256:ff746a69ff2ef25f62b36338c615dd15954ddc3ab8e73530237dd73235e76d62"},
{file = "scikit_learn-1.0.2-cp39-cp39-win32.whl", hash = "sha256:e174242caecb11e4abf169342641778f68e1bfaba80cd18acd6bc84286b9a534"},
{file = "scikit_learn-1.0.2-cp39-cp39-win_amd64.whl", hash = "sha256:b54a62c6e318ddbfa7d22c383466d38d2ee770ebdb5ddb668d56a099f6eaf75f"},
]
scipy = [
{file = "scipy-1.8.1-cp310-cp310-macosx_10_9_x86_64.whl", hash = "sha256:65b77f20202599c51eb2771d11a6b899b97989159b7975e9b5259594f1d35ef4"},
{file = "scipy-1.8.1-cp310-cp310-macosx_12_0_arm64.whl", hash = "sha256:e013aed00ed776d790be4cb32826adb72799c61e318676172495383ba4570aa4"},
{file = "scipy-1.8.1-cp310-cp310-macosx_12_0_universal2.macosx_10_9_x86_64.whl", hash = "sha256:02b567e722d62bddd4ac253dafb01ce7ed8742cf8031aea030a41414b86c1125"},
{file = "scipy-1.8.1-cp310-cp310-manylinux_2_17_aarch64.manylinux2014_aarch64.whl", hash = "sha256:1da52b45ce1a24a4a22db6c157c38b39885a990a566748fc904ec9f03ed8c6ba"},
{file = "scipy-1.8.1-cp310-cp310-manylinux_2_17_x86_64.manylinux2014_x86_64.whl", hash = "sha256:a0aa8220b89b2e3748a2836fbfa116194378910f1a6e78e4675a095bcd2c762d"},
{file = "scipy-1.8.1-cp310-cp310-win_amd64.whl", hash = "sha256:4e53a55f6a4f22de01ffe1d2f016e30adedb67a699a310cdcac312806807ca81"},
{file = "scipy-1.8.1-cp38-cp38-macosx_10_9_x86_64.whl", hash = "sha256:28d2cab0c6ac5aa131cc5071a3a1d8e1366dad82288d9ec2ca44df78fb50e649"},
{file = "scipy-1.8.1-cp38-cp38-macosx_12_0_arm64.whl", hash = "sha256:6311e3ae9cc75f77c33076cb2794fb0606f14c8f1b1c9ff8ce6005ba2c283621"},
{file = "scipy-1.8.1-cp38-cp38-macosx_12_0_universal2.macosx_10_9_x86_64.whl", hash = "sha256:3b69b90c9419884efeffaac2c38376d6ef566e6e730a231e15722b0ab58f0328"},
{file = "scipy-1.8.1-cp38-cp38-manylinux_2_12_i686.manylinux2010_i686.whl", hash = "sha256:6cc6b33139eb63f30725d5f7fa175763dc2df6a8f38ddf8df971f7c345b652dc"},
{file = "scipy-1.8.1-cp38-cp38-manylinux_2_17_aarch64.manylinux2014_aarch64.whl", hash = "sha256:9c4e3ae8a716c8b3151e16c05edb1daf4cb4d866caa385e861556aff41300c14"},
{file = "scipy-1.8.1-cp38-cp38-manylinux_2_17_x86_64.manylinux2014_x86_64.whl", hash = "sha256:23b22fbeef3807966ea42d8163322366dd89da9bebdc075da7034cee3a1441ca"},
{file = "scipy-1.8.1-cp38-cp38-win32.whl", hash = "sha256:4b93ec6f4c3c4d041b26b5f179a6aab8f5045423117ae7a45ba9710301d7e462"},
{file = "scipy-1.8.1-cp38-cp38-win_amd64.whl", hash = "sha256:70ebc84134cf0c504ce6a5f12d6db92cb2a8a53a49437a6bb4edca0bc101f11c"},
{file = "scipy-1.8.1-cp39-cp39-macosx_10_9_x86_64.whl", hash = "sha256:f3e7a8867f307e3359cc0ed2c63b61a1e33a19080f92fe377bc7d49f646f2ec1"},
{file = "scipy-1.8.1-cp39-cp39-macosx_12_0_arm64.whl", hash = "sha256:2ef0fbc8bcf102c1998c1f16f15befe7cffba90895d6e84861cd6c6a33fb54f6"},
{file = "scipy-1.8.1-cp39-cp39-macosx_12_0_universal2.macosx_10_9_x86_64.whl", hash = "sha256:83606129247e7610b58d0e1e93d2c5133959e9cf93555d3c27e536892f1ba1f2"},
{file = "scipy-1.8.1-cp39-cp39-manylinux_2_12_i686.manylinux2010_i686.whl", hash = "sha256:93d07494a8900d55492401917a119948ed330b8c3f1d700e0b904a578f10ead4"},
{file = "scipy-1.8.1-cp39-cp39-manylinux_2_17_aarch64.manylinux2014_aarch64.whl", hash = "sha256:d3b3c8924252caaffc54d4a99f1360aeec001e61267595561089f8b5900821bb"},
{file = "scipy-1.8.1-cp39-cp39-manylinux_2_17_x86_64.manylinux2014_x86_64.whl", hash = "sha256:70de2f11bf64ca9921fda018864c78af7147025e467ce9f4a11bc877266900a6"},
{file = "scipy-1.8.1-cp39-cp39-win32.whl", hash = "sha256:1166514aa3bbf04cb5941027c6e294a000bba0cf00f5cdac6c77f2dad479b434"},
{file = "scipy-1.8.1-cp39-cp39-win_amd64.whl", hash = "sha256:9dd4012ac599a1e7eb63c114d1eee1bcfc6dc75a29b589ff0ad0bb3d9412034f"},
{file = "scipy-1.8.1.tar.gz", hash = "sha256:9e3fb1b0e896f14a85aa9a28d5f755daaeeb54c897b746df7a55ccb02b340f33"},
{file = "scipy-1.9.3-cp310-cp310-macosx_10_9_x86_64.whl", hash = "sha256:1884b66a54887e21addf9c16fb588720a8309a57b2e258ae1c7986d4444d3bc0"},
{file = "scipy-1.9.3-cp310-cp310-macosx_12_0_arm64.whl", hash = "sha256:83b89e9586c62e787f5012e8475fbb12185bafb996a03257e9675cd73d3736dd"},
{file = "scipy-1.9.3-cp310-cp310-manylinux_2_17_aarch64.manylinux2014_aarch64.whl", hash = "sha256:1a72d885fa44247f92743fc20732ae55564ff2a519e8302fb7e18717c5355a8b"},
{file = "scipy-1.9.3-cp310-cp310-manylinux_2_17_x86_64.manylinux2014_x86_64.whl", hash = "sha256:d01e1dd7b15bd2449c8bfc6b7cc67d630700ed655654f0dfcf121600bad205c9"},
{file = "scipy-1.9.3-cp310-cp310-win_amd64.whl", hash = "sha256:68239b6aa6f9c593da8be1509a05cb7f9efe98b80f43a5861cd24c7557e98523"},
{file = "scipy-1.9.3-cp311-cp311-macosx_10_9_x86_64.whl", hash = "sha256:b41bc822679ad1c9a5f023bc93f6d0543129ca0f37c1ce294dd9d386f0a21096"},
{file = "scipy-1.9.3-cp311-cp311-macosx_12_0_arm64.whl", hash = "sha256:90453d2b93ea82a9f434e4e1cba043e779ff67b92f7a0e85d05d286a3625df3c"},
{file = "scipy-1.9.3-cp311-cp311-manylinux_2_17_aarch64.manylinux2014_aarch64.whl", hash = "sha256:83c06e62a390a9167da60bedd4575a14c1f58ca9dfde59830fc42e5197283dab"},
{file = "scipy-1.9.3-cp311-cp311-manylinux_2_17_x86_64.manylinux2014_x86_64.whl", hash = "sha256:abaf921531b5aeaafced90157db505e10345e45038c39e5d9b6c7922d68085cb"},
{file = "scipy-1.9.3-cp311-cp311-win_amd64.whl", hash = "sha256:06d2e1b4c491dc7d8eacea139a1b0b295f74e1a1a0f704c375028f8320d16e31"},
{file = "scipy-1.9.3-cp38-cp38-macosx_10_9_x86_64.whl", hash = "sha256:5a04cd7d0d3eff6ea4719371cbc44df31411862b9646db617c99718ff68d4840"},
{file = "scipy-1.9.3-cp38-cp38-macosx_12_0_arm64.whl", hash = "sha256:545c83ffb518094d8c9d83cce216c0c32f8c04aaf28b92cc8283eda0685162d5"},
{file = "scipy-1.9.3-cp38-cp38-manylinux_2_17_aarch64.manylinux2014_aarch64.whl", hash = "sha256:0d54222d7a3ba6022fdf5773931b5d7c56efe41ede7f7128c7b1637700409108"},
{file = "scipy-1.9.3-cp38-cp38-manylinux_2_17_x86_64.manylinux2014_x86_64.whl", hash = "sha256:cff3a5295234037e39500d35316a4c5794739433528310e117b8a9a0c76d20fc"},
{file = "scipy-1.9.3-cp38-cp38-win_amd64.whl", hash = "sha256:2318bef588acc7a574f5bfdff9c172d0b1bf2c8143d9582e05f878e580a3781e"},
{file = "scipy-1.9.3-cp39-cp39-macosx_10_9_x86_64.whl", hash = "sha256:d644a64e174c16cb4b2e41dfea6af722053e83d066da7343f333a54dae9bc31c"},
{file = "scipy-1.9.3-cp39-cp39-macosx_12_0_arm64.whl", hash = "sha256:da8245491d73ed0a994ed9c2e380fd058ce2fa8a18da204681f2fe1f57f98f95"},
{file = "scipy-1.9.3-cp39-cp39-manylinux_2_17_aarch64.manylinux2014_aarch64.whl", hash = "sha256:4db5b30849606a95dcf519763dd3ab6fe9bd91df49eba517359e450a7d80ce2e"},
{file = "scipy-1.9.3-cp39-cp39-manylinux_2_17_x86_64.manylinux2014_x86_64.whl", hash = "sha256:c68db6b290cbd4049012990d7fe71a2abd9ffbe82c0056ebe0f01df8be5436b0"},
{file = "scipy-1.9.3-cp39-cp39-win_amd64.whl", hash = "sha256:5b88e6d91ad9d59478fafe92a7c757d00c59e3bdc3331be8ada76a4f8d683f58"},
{file = "scipy-1.9.3.tar.gz", hash = "sha256:fbc5c05c85c1a02be77b1ff591087c83bc44579c6d2bd9fb798bb64ea5e1a027"},
]
seaborn = [
{file = "seaborn-0.12.1-py3-none-any.whl", hash = "sha256:a9eb39cba095fcb1e4c89a7fab1c57137d70a715a7f2eefcd41c9913c4d4ed65"},
{file = "seaborn-0.12.1.tar.gz", hash = "sha256:bb1eb1d51d3097368c187c3ef089c0288ec1fe8aa1c69fb324c68aa1d02df4c1"},
]
Send2Trash = [
{file = "Send2Trash-1.8.0-py3-none-any.whl", hash = "sha256:f20eaadfdb517eaca5ce077640cb261c7d2698385a6a0f072a4a5447fd49fa08"},
{file = "Send2Trash-1.8.0.tar.gz", hash = "sha256:d2c24762fd3759860a0aff155e45871447ea58d2be6bdd39b5c8f966a0c99c2d"},
]
setuptools = [
{file = "setuptools-65.6.3-py3-none-any.whl", hash = "sha256:57f6f22bde4e042978bcd50176fdb381d7c21a9efa4041202288d3737a0c6a54"},
{file = "setuptools-65.6.3.tar.gz", hash = "sha256:a7620757bf984b58deaf32fc8a4577a9bbc0850cf92c20e1ce41c38c19e5fb75"},
]
setuptools-scm = [
{file = "setuptools_scm-7.0.5-py3-none-any.whl", hash = "sha256:7930f720905e03ccd1e1d821db521bff7ec2ac9cf0ceb6552dd73d24a45d3b02"},
{file = "setuptools_scm-7.0.5.tar.gz", hash = "sha256:031e13af771d6f892b941adb6ea04545bbf91ebc5ce68c78aaf3fff6e1fb4844"},
]
shap = [
{file = "shap-0.40.0-cp36-cp36m-macosx_10_9_x86_64.whl", hash = "sha256:8bb8b4c01bd33592412dae5246286f62efbb24ad774b63e59b8b16969b915b6d"},
{file = "shap-0.40.0-cp36-cp36m-manylinux2010_i686.whl", hash = "sha256:d2844acab55e18bcb3d691237a720301223a38805e6e43752e6717f3a8b2cc28"},
{file = "shap-0.40.0-cp36-cp36m-manylinux2010_x86_64.whl", hash = "sha256:e7dd3040b0ec91bc9f477a354973d231d3a6beebe2fa7a5c6a565a79ba7746e8"},
{file = "shap-0.40.0-cp36-cp36m-win32.whl", hash = "sha256:86ea1466244c7e0d0c5dd91d26a90e0b645f5c9d7066810462a921263463529b"},
{file = "shap-0.40.0-cp36-cp36m-win_amd64.whl", hash = "sha256:bbf0cfa30cd8c51f8830d3f25c3881b9949e062124cd0d0b3d8efdc7e0cf5136"},
{file = "shap-0.40.0-cp37-cp37m-macosx_10_9_x86_64.whl", hash = "sha256:3d3c5ace8bd5222b455fa5650f9043146e19d80d701f95b25c4c5fb81f628547"},
{file = "shap-0.40.0-cp37-cp37m-manylinux2010_i686.whl", hash = "sha256:18b4ca36a43409b784dc76810f76aaa504c467eac17fa89ef5ee330cb460b2b7"},
{file = "shap-0.40.0-cp37-cp37m-manylinux2010_x86_64.whl", hash = "sha256:dbb1ec9b2c05c3939425529437c5f3cfba7a3929fed0e820fb84a42e82358cdd"},
{file = "shap-0.40.0-cp37-cp37m-win32.whl", hash = "sha256:0d12f7d86481afd000d5f144c10cadb31d52fb1f77f68659472d6f6d89f7843b"},
{file = "shap-0.40.0-cp37-cp37m-win_amd64.whl", hash = "sha256:dbd07e48fc7f4d5916f6cdd9dbb8d29b7711a265cc9beac92e7d4a4d9e738bc7"},
{file = "shap-0.40.0-cp38-cp38-macosx_10_9_x86_64.whl", hash = "sha256:399325caecc7306eb7de17ac19aa797abbf2fcda47d2bb4588d9492adb2dce65"},
{file = "shap-0.40.0-cp38-cp38-manylinux2010_i686.whl", hash = "sha256:4ec50bd0aa24efe1add177371b8b62080484efb87c6dbcf321895c5a08cf68d6"},
{file = "shap-0.40.0-cp38-cp38-manylinux2010_x86_64.whl", hash = "sha256:e2b5f2d3cac82de0c49afde6529bebb6d5b20334325640267bf25dce572175a1"},
{file = "shap-0.40.0-cp38-cp38-win32.whl", hash = "sha256:ba06256568747aaab9ad0091306550bfe826c1f195bf2cf57b405ae1de16faed"},
{file = "shap-0.40.0-cp38-cp38-win_amd64.whl", hash = "sha256:fb1b325a55fdf58061d332ed3308d44162084d4cb5f53f2c7774ce943d60b0ad"},
{file = "shap-0.40.0-cp39-cp39-macosx_10_9_x86_64.whl", hash = "sha256:f282fa12ca6fc594bcadca389309d733f73fe071e29ab49cb6e51beaa8b01a1a"},
{file = "shap-0.40.0-cp39-cp39-manylinux2010_i686.whl", hash = "sha256:2e72a47407f010f845b3ed6cb4f5160f0907ec8ab97df2bca164ebcb263b4205"},
{file = "shap-0.40.0-cp39-cp39-manylinux2010_x86_64.whl", hash = "sha256:649c905f9a4629839142e1769235989fb61730eb789a70d27ec7593eb02186a7"},
{file = "shap-0.40.0-cp39-cp39-win32.whl", hash = "sha256:5c220632ba57426d450dcc8ca43c55f657fe18e18f5d223d2a4e2aa02d905047"},
{file = "shap-0.40.0-cp39-cp39-win_amd64.whl", hash = "sha256:46e7084ce021eea450306bf7434adaead53921fd32504f04d1804569839e2979"},
{file = "shap-0.40.0.tar.gz", hash = "sha256:add0a27bb4eb57f0a363c2c4265b1a1328a8c15b01c14c7d432d9cc387dd8579"},
]
six = [
{file = "six-1.16.0-py2.py3-none-any.whl", hash = "sha256:8abb2f1d86890a2dfb989f9a77cfcfd3e47c2a354b01111771326f8aa26e0254"},
{file = "six-1.16.0.tar.gz", hash = "sha256:1e61c37477a1626458e36f7b1d82aa5c9b094fa4802892072e49de9c60c4c926"},
]
slicer = [
{file = "slicer-0.0.7-py3-none-any.whl", hash = "sha256:0b94faa5251c0f23782c03f7b7eedda91d80144059645f452c4bc80fab875976"},
{file = "slicer-0.0.7.tar.gz", hash = "sha256:f5d5f7b45f98d155b9c0ba6554fa9770c6b26d5793a3e77a1030fb56910ebeec"},
]
smart-open = [
{file = "smart_open-5.2.1-py3-none-any.whl", hash = "sha256:71d14489da58b60ce12fc3ecb823facc59a8b23cd1b58edb97175640350d3a62"},
{file = "smart_open-5.2.1.tar.gz", hash = "sha256:75abf758717a92a8f53aa96953f0c245c8cedf8e1e4184903db3659b419d4c17"},
]
sniffio = [
{file = "sniffio-1.3.0-py3-none-any.whl", hash = "sha256:eecefdce1e5bbfb7ad2eeaabf7c1eeb404d7757c379bd1f7e5cce9d8bf425384"},
{file = "sniffio-1.3.0.tar.gz", hash = "sha256:e60305c5e5d314f5389259b7f22aaa33d8f7dee49763119234af3755c55b9101"},
]
snowballstemmer = [
{file = "snowballstemmer-2.2.0-py2.py3-none-any.whl", hash = "sha256:c8e1716e83cc398ae16824e5572ae04e0d9fc2c6b985fb0f900f5f0c96ecba1a"},
{file = "snowballstemmer-2.2.0.tar.gz", hash = "sha256:09b16deb8547d3412ad7b590689584cd0fe25ec8db3be37788be3810cbf19cb1"},
]
sortedcontainers = [
{file = "sortedcontainers-2.4.0-py2.py3-none-any.whl", hash = "sha256:a163dcaede0f1c021485e957a39245190e74249897e2ae4b2aa38595db237ee0"},
{file = "sortedcontainers-2.4.0.tar.gz", hash = "sha256:25caa5a06cc30b6b83d11423433f65d1f9d76c4c6a0c90e3379eaa43b9bfdb88"},
]
soupsieve = [
{file = "soupsieve-2.3.2.post1-py3-none-any.whl", hash = "sha256:3b2503d3c7084a42b1ebd08116e5f81aadfaea95863628c80a3b774a11b7c759"},
{file = "soupsieve-2.3.2.post1.tar.gz", hash = "sha256:fc53893b3da2c33de295667a0e19f078c14bf86544af307354de5fcf12a3f30d"},
]
spacy = [
{file = "spacy-3.4.3-cp310-cp310-macosx_10_9_x86_64.whl", hash = "sha256:e546b314f619502ae03e5eb9a0cfd09ca7a9db265bcdd8a3af83cfb0f1432e55"},
{file = "spacy-3.4.3-cp310-cp310-macosx_11_0_arm64.whl", hash = "sha256:ded11aa8966236aab145b4d2d024b3eb61ac50078362d77d9ed7d8c240ef0f4a"},
{file = "spacy-3.4.3-cp310-cp310-manylinux_2_17_aarch64.manylinux2014_aarch64.whl", hash = "sha256:462e141f514d78cff85685b5b12eb8cadac0bad2f7820149cbe18d03ccb2e59c"},
{file = "spacy-3.4.3-cp310-cp310-manylinux_2_17_x86_64.manylinux2014_x86_64.whl", hash = "sha256:c966d25b3f3e49f5de08546b3638928f49678c365cbbebd0eec28f74e0adb539"},
{file = "spacy-3.4.3-cp310-cp310-win_amd64.whl", hash = "sha256:2ddba486c4c981abe6f1e3fd72648dc8811966e5f0e05808f9c9fab155c388d7"},
{file = "spacy-3.4.3-cp311-cp311-macosx_10_9_x86_64.whl", hash = "sha256:3c87117dd335fba44d1c0d77602f0763c3addf4e7ef9bdbe9a495466c3484c69"},
{file = "spacy-3.4.3-cp311-cp311-macosx_11_0_arm64.whl", hash = "sha256:3ce3938720f48eaeeb360a7f623f15a0d9efd1a688d5d740e3d4cdcd6f6da8a3"},
{file = "spacy-3.4.3-cp311-cp311-manylinux_2_17_aarch64.manylinux2014_aarch64.whl", hash = "sha256:6ad6bf5e4e7f0bc2ef94b7ff6fe59abd766f74c192bca2f17430a3b3cd5bda5a"},
{file = "spacy-3.4.3-cp311-cp311-manylinux_2_17_x86_64.manylinux2014_x86_64.whl", hash = "sha256:f6644c678bd7af567c6ce679f71d64119282e7d6f1a6f787162a91be3ea39333"},
{file = "spacy-3.4.3-cp311-cp311-win_amd64.whl", hash = "sha256:e6b871de8857a6820140358db3943180fdbe03d44ed792155cee6cb95f4ac4ea"},
{file = "spacy-3.4.3-cp36-cp36m-manylinux_2_17_aarch64.manylinux2014_aarch64.whl", hash = "sha256:d211c2b8894354bf8d961af9a9dcab38f764e1dcddd7b80760e438fcd4c9fe43"},
{file = "spacy-3.4.3-cp36-cp36m-manylinux_2_17_x86_64.manylinux2014_x86_64.whl", hash = "sha256:2ea41f9de30435456235c4182d8bc2eb54a0a64719856e66e780350bb4c8cfbe"},
{file = "spacy-3.4.3-cp36-cp36m-win_amd64.whl", hash = "sha256:afaf6e716cbac4a0fbfa9e9bf95decff223936597ddd03ea869118a7576aa1b1"},
{file = "spacy-3.4.3-cp37-cp37m-macosx_10_9_x86_64.whl", hash = "sha256:7115da36369b3c537caf2fe08e0b45528bd091c7f56ba3580af1e6fdfa9b1081"},
{file = "spacy-3.4.3-cp37-cp37m-manylinux_2_17_aarch64.manylinux2014_aarch64.whl", hash = "sha256:3b3e629c889cac9656151286ec1232c6a948ce0d44a39f1ef5e60fed4f183a10"},
{file = "spacy-3.4.3-cp37-cp37m-manylinux_2_17_x86_64.manylinux2014_x86_64.whl", hash = "sha256:9277cd0fcb96ee5dd885f7e96c639f21afd96198d61ca32100446afbff4dfbef"},
{file = "spacy-3.4.3-cp37-cp37m-win_amd64.whl", hash = "sha256:a36bd06a5a147350e5f5f6903c4777296c37b18199251bb41056c3a73aa4494f"},
{file = "spacy-3.4.3-cp38-cp38-macosx_10_9_x86_64.whl", hash = "sha256:bdafcd0823ca804c39d0bed9e677eb7d0235b1259563d0fd4d3a201c71108af8"},
{file = "spacy-3.4.3-cp38-cp38-macosx_11_0_arm64.whl", hash = "sha256:0cdc23a48e6543402b4c56ebf2d36246001175c29fd56d3081efcec684651abc"},
{file = "spacy-3.4.3-cp38-cp38-manylinux_2_17_aarch64.manylinux2014_aarch64.whl", hash = "sha256:455c2fbd1de24b6fe34fa121d87525134d7498f9f458ebc8274d7940b473999e"},
{file = "spacy-3.4.3-cp38-cp38-manylinux_2_17_x86_64.manylinux2014_x86_64.whl", hash = "sha256:d1c85279fbb6b75d7fb8d7c59c2b734502e51271cad90926e8df1d21b67da5aa"},
{file = "spacy-3.4.3-cp38-cp38-win_amd64.whl", hash = "sha256:5c0d65f39184f522b4e67b965a42d121a3b2d799362682fe8847b64b0ce5bc7c"},
{file = "spacy-3.4.3-cp39-cp39-macosx_10_9_x86_64.whl", hash = "sha256:a7b97ec21ed773edb2479ae5d6c7686b8034f418df6bccd9218f5c3c2b7cf888"},
{file = "spacy-3.4.3-cp39-cp39-macosx_11_0_arm64.whl", hash = "sha256:36a9a506029842795099fd97ad95f0da2845c319020fcc7164cbf33650726f83"},
{file = "spacy-3.4.3-cp39-cp39-manylinux_2_17_aarch64.manylinux2014_aarch64.whl", hash = "sha256:5ab293eb1423fa05c7ee71b2fedda57c2b4a4ca8dc054ce678809457287b01dc"},
{file = "spacy-3.4.3-cp39-cp39-manylinux_2_17_x86_64.manylinux2014_x86_64.whl", hash = "sha256:bb6d0f185126decc8392cde7d28eb6e85ba4bca15424713288cccc49c2a3c52b"},
{file = "spacy-3.4.3-cp39-cp39-win_amd64.whl", hash = "sha256:676ab9ab2cf94ba48caa306f185a166e85bd35b388ec24512c8ba7dfcbc7517e"},
{file = "spacy-3.4.3.tar.gz", hash = "sha256:22698cf5175e2b697e82699fcccee3092b42137a57d352df208d71657fd693bb"},
]
spacy-legacy = [
{file = "spacy-legacy-3.0.10.tar.gz", hash = "sha256:16104595d8ab1b7267f817a449ad1f986eb1f2a2edf1050748f08739a479679a"},
{file = "spacy_legacy-3.0.10-py2.py3-none-any.whl", hash = "sha256:8526a54d178dee9b7f218d43e5c21362c59056c5da23380b319b56043e9211f3"},
]
spacy-loggers = [
{file = "spacy-loggers-1.0.3.tar.gz", hash = "sha256:00f6fd554db9fd1fde6501b23e1f0e72f6eef14bb1e7fc15456d11d1d2de92ca"},
{file = "spacy_loggers-1.0.3-py3-none-any.whl", hash = "sha256:f74386b390a023f9615dcb499b7b4ad63338236a8187f0ec4dfe265a9f665ee8"},
]
sparse = [
{file = "sparse-0.13.0-py2.py3-none-any.whl", hash = "sha256:95ed0b649a0663b1488756ad4cf242b0a9bb2c9a25bc752a7c6ca9fbe8258966"},
{file = "sparse-0.13.0.tar.gz", hash = "sha256:685dc994aa770ee1b23f2d5392819c8429f27958771f8dceb2c4fb80210d5915"},
]
Sphinx = [
{file = "Sphinx-5.3.0.tar.gz", hash = "sha256:51026de0a9ff9fc13c05d74913ad66047e104f56a129ff73e174eb5c3ee794b5"},
{file = "sphinx-5.3.0-py3-none-any.whl", hash = "sha256:060ca5c9f7ba57a08a1219e547b269fadf125ae25b06b9fa7f66768efb652d6d"},
]
sphinx-copybutton = [
{file = "sphinx-copybutton-0.5.0.tar.gz", hash = "sha256:a0c059daadd03c27ba750da534a92a63e7a36a7736dcf684f26ee346199787f6"},
{file = "sphinx_copybutton-0.5.0-py3-none-any.whl", hash = "sha256:9684dec7434bd73f0eea58dda93f9bb879d24bff2d8b187b1f2ec08dfe7b5f48"},
]
sphinx_design = [
{file = "sphinx_design-0.3.0-py3-none-any.whl", hash = "sha256:823c1dd74f31efb3285ec2f1254caefed29d762a40cd676f58413a1e4ed5cc96"},
{file = "sphinx_design-0.3.0.tar.gz", hash = "sha256:7183fa1fae55b37ef01bda5125a21ee841f5bbcbf59a35382be598180c4cefba"},
]
sphinx-rtd-theme = [
{file = "sphinx_rtd_theme-1.1.1-py2.py3-none-any.whl", hash = "sha256:31faa07d3e97c8955637fc3f1423a5ab2c44b74b8cc558a51498c202ce5cbda7"},
{file = "sphinx_rtd_theme-1.1.1.tar.gz", hash = "sha256:6146c845f1e1947b3c3dd4432c28998a1693ccc742b4f9ad7c63129f0757c103"},
]
sphinxcontrib-applehelp = [
{file = "sphinxcontrib-applehelp-1.0.2.tar.gz", hash = "sha256:a072735ec80e7675e3f432fcae8610ecf509c5f1869d17e2eecff44389cdbc58"},
{file = "sphinxcontrib_applehelp-1.0.2-py2.py3-none-any.whl", hash = "sha256:806111e5e962be97c29ec4c1e7fe277bfd19e9652fb1a4392105b43e01af885a"},
]
sphinxcontrib-devhelp = [
{file = "sphinxcontrib-devhelp-1.0.2.tar.gz", hash = "sha256:ff7f1afa7b9642e7060379360a67e9c41e8f3121f2ce9164266f61b9f4b338e4"},
{file = "sphinxcontrib_devhelp-1.0.2-py2.py3-none-any.whl", hash = "sha256:8165223f9a335cc1af7ffe1ed31d2871f325254c0423bc0c4c7cd1c1e4734a2e"},
]
sphinxcontrib-googleanalytics = []
sphinxcontrib-htmlhelp = [
{file = "sphinxcontrib-htmlhelp-2.0.0.tar.gz", hash = "sha256:f5f8bb2d0d629f398bf47d0d69c07bc13b65f75a81ad9e2f71a63d4b7a2f6db2"},
{file = "sphinxcontrib_htmlhelp-2.0.0-py2.py3-none-any.whl", hash = "sha256:d412243dfb797ae3ec2b59eca0e52dac12e75a241bf0e4eb861e450d06c6ed07"},
]
sphinxcontrib-jsmath = [
{file = "sphinxcontrib-jsmath-1.0.1.tar.gz", hash = "sha256:a9925e4a4587247ed2191a22df5f6970656cb8ca2bd6284309578f2153e0c4b8"},
{file = "sphinxcontrib_jsmath-1.0.1-py2.py3-none-any.whl", hash = "sha256:2ec2eaebfb78f3f2078e73666b1415417a116cc848b72e5172e596c871103178"},
]
sphinxcontrib-qthelp = [
{file = "sphinxcontrib-qthelp-1.0.3.tar.gz", hash = "sha256:4c33767ee058b70dba89a6fc5c1892c0d57a54be67ddd3e7875a18d14cba5a72"},
{file = "sphinxcontrib_qthelp-1.0.3-py2.py3-none-any.whl", hash = "sha256:bd9fc24bcb748a8d51fd4ecaade681350aa63009a347a8c14e637895444dfab6"},
]
sphinxcontrib-serializinghtml = [
{file = "sphinxcontrib-serializinghtml-1.1.5.tar.gz", hash = "sha256:aa5f6de5dfdf809ef505c4895e51ef5c9eac17d0f287933eb49ec495280b6952"},
{file = "sphinxcontrib_serializinghtml-1.1.5-py2.py3-none-any.whl", hash = "sha256:352a9a00ae864471d3a7ead8d7d79f5fc0b57e8b3f95e9867eb9eb28999b92fd"},
]
srsly = [
{file = "srsly-2.4.5-cp310-cp310-macosx_10_9_x86_64.whl", hash = "sha256:8fed31ef8acbb5fead2152824ef39e12d749fcd254968689ba5991dd257b63b4"},
{file = "srsly-2.4.5-cp310-cp310-macosx_11_0_arm64.whl", hash = "sha256:04d0b4cd91e098cdac12d2c28e256b1181ba98bcd00e460b8e42dee3e8542804"},
{file = "srsly-2.4.5-cp310-cp310-manylinux_2_17_aarch64.manylinux2014_aarch64.whl", hash = "sha256:6d83bea1f774b54d9313a374a95f11a776d37bcedcda93c526bf7f1cb5f26428"},
{file = "srsly-2.4.5-cp310-cp310-manylinux_2_17_x86_64.manylinux2014_x86_64.whl", hash = "sha256:cae5d48a0bda55a3728f49976ea0b652f508dbc5ac3e849f41b64a5753ec7f0a"},
{file = "srsly-2.4.5-cp310-cp310-win_amd64.whl", hash = "sha256:f74c64934423bcc2d3508cf3a079c7034e5cde988255dc57c7a09794c78f0610"},
{file = "srsly-2.4.5-cp311-cp311-macosx_10_9_x86_64.whl", hash = "sha256:0f9abb7857f9363f1ac52123db94dfe1c4af8959a39d698eff791d17e45e00b6"},
{file = "srsly-2.4.5-cp311-cp311-macosx_11_0_arm64.whl", hash = "sha256:f48d40c3b3d20e38410e7a95fa5b4050c035f467b0793aaf67188b1edad37fe3"},
{file = "srsly-2.4.5-cp311-cp311-manylinux_2_17_aarch64.manylinux2014_aarch64.whl", hash = "sha256:1434759effec2ee266a24acd9b53793a81cac01fc1e6321c623195eda1b9c7df"},
{file = "srsly-2.4.5-cp311-cp311-manylinux_2_17_x86_64.manylinux2014_x86_64.whl", hash = "sha256:5e7b0cd9853b0d9e00ad23d26199c1e44d8fd74096cbbbabc92447a915bcfd78"},
{file = "srsly-2.4.5-cp311-cp311-win_amd64.whl", hash = "sha256:874010587a807264963de9a1c91668c43cee9ed2f683f5406bdf5a34dfe12cca"},
{file = "srsly-2.4.5-cp36-cp36m-manylinux_2_17_aarch64.manylinux2014_aarch64.whl", hash = "sha256:afa4e1fe143275339d1c4a74e46d4c75168eed8b200f44f2ea023d45ff089a2f"},
{file = "srsly-2.4.5-cp36-cp36m-manylinux_2_17_x86_64.manylinux2014_x86_64.whl", hash = "sha256:1c4291ee125796fb05e778e9ca8f9a829e8c314b757826f2e1d533e424a93531"},
{file = "srsly-2.4.5-cp36-cp36m-win_amd64.whl", hash = "sha256:8f258ee69aefb053258ac2e4f4b9d597e622b79f78874534430e864cef0be199"},
{file = "srsly-2.4.5-cp37-cp37m-macosx_10_9_x86_64.whl", hash = "sha256:ace951c3088204bd66f30326f93ab6e615ce1562a461a8a464759d99fa9c2a02"},
{file = "srsly-2.4.5-cp37-cp37m-manylinux_2_17_aarch64.manylinux2014_aarch64.whl", hash = "sha256:facab907801fbcb0e54b3532e04bc6a0709184d68004ef3a129e8c7e3ca63d82"},
{file = "srsly-2.4.5-cp37-cp37m-manylinux_2_17_x86_64.manylinux2014_x86_64.whl", hash = "sha256:a49c089541a9a0a27ccb841a596350b7ee1d6adfc7ebd28eddedfd34dc9f12c5"},
{file = "srsly-2.4.5-cp37-cp37m-win_amd64.whl", hash = "sha256:db6bc02bd1e3372a3636e47b22098107c9df2cf12d220321b51c586ba17904b3"},
{file = "srsly-2.4.5-cp38-cp38-macosx_10_9_x86_64.whl", hash = "sha256:9a95c682de8c6e6145199f10a7c597647ff7d398fb28874f845ba7d34a86a033"},
{file = "srsly-2.4.5-cp38-cp38-macosx_11_0_arm64.whl", hash = "sha256:8c26c5c0e07ea7bb7b8b8735e1b2261fea308c2c883b99211d11747162c6d897"},
{file = "srsly-2.4.5-cp38-cp38-manylinux_2_17_aarch64.manylinux2014_aarch64.whl", hash = "sha256:e0043eff95be45acb5ce09cebb80ebdb9f2b6856aa3a15979e6fe3cc9a486753"},
{file = "srsly-2.4.5-cp38-cp38-manylinux_2_17_x86_64.manylinux2014_x86_64.whl", hash = "sha256:a2075124d4872e754af966e76f3258cd526eeac84f0995ee8cd561fd4cf1b68e"},
{file = "srsly-2.4.5-cp38-cp38-win_amd64.whl", hash = "sha256:1a41e5b10902c885cabe326ba86d549d7011e38534c45bed158ecb8abd4b44ce"},
{file = "srsly-2.4.5-cp39-cp39-macosx_10_9_x86_64.whl", hash = "sha256:b5a96f0ae15b651fa3fd87421bd93e61c6dc46c0831cbe275c9b790d253126b5"},
{file = "srsly-2.4.5-cp39-cp39-macosx_11_0_arm64.whl", hash = "sha256:764906e9f4c2ac5f748c49d95c8bf79648404ebc548864f9cb1fa0707942d830"},
{file = "srsly-2.4.5-cp39-cp39-manylinux_2_17_aarch64.manylinux2014_aarch64.whl", hash = "sha256:95afe9625badaf5ce326e37b21362423d7e8578a5ec9c85b15c3fca93205a883"},
{file = "srsly-2.4.5-cp39-cp39-manylinux_2_17_x86_64.manylinux2014_x86_64.whl", hash = "sha256:90359cc3c5601afd45ec12c52bde1cf1ccbe0dc7d4244fd1f8d0c9e100c71707"},
{file = "srsly-2.4.5-cp39-cp39-win_amd64.whl", hash = "sha256:2d3b0d32be2267fb489da172d71399ac59f763189b47dbe68eedb0817afaa6dc"},
{file = "srsly-2.4.5.tar.gz", hash = "sha256:c842258967baa527cea9367986e42b8143a1a890e7d4a18d25a36edc3c7a33c7"},
]
stack-data = [
{file = "stack_data-0.6.2-py3-none-any.whl", hash = "sha256:cbb2a53eb64e5785878201a97ed7c7b94883f48b87bfb0bbe8b623c74679e4a8"},
{file = "stack_data-0.6.2.tar.gz", hash = "sha256:32d2dd0376772d01b6cb9fc996f3c8b57a357089dec328ed4b6553d037eaf815"},
]
statsmodels = [
{file = "statsmodels-0.13.5-cp310-cp310-macosx_10_9_x86_64.whl", hash = "sha256:c75319fddded9507cc310fc3980e4ae4d64e3ff37b322ad5e203a84f89d85203"},
{file = "statsmodels-0.13.5-cp310-cp310-macosx_11_0_arm64.whl", hash = "sha256:6f148920ef27c7ba69a5735724f65de9422c0c8bcef71b50c846b823ceab8840"},
{file = "statsmodels-0.13.5-cp310-cp310-manylinux_2_17_aarch64.manylinux2014_aarch64.whl", hash = "sha256:5cc4d3e866bfe0c4f804bca362d0e7e29d24b840aaba8d35a754387e16d2a119"},
{file = "statsmodels-0.13.5-cp310-cp310-manylinux_2_17_x86_64.manylinux2014_x86_64.whl", hash = "sha256:072950d6f7820a6b0bd6a27b2d792a6d6f952a1d2f62f0dcf8dd808799475855"},
{file = "statsmodels-0.13.5-cp310-cp310-win_amd64.whl", hash = "sha256:159ae9962c61b31dcffe6356d72ae3d074bc597ad9273ec93ae653fe607b8516"},
{file = "statsmodels-0.13.5-cp311-cp311-macosx_10_9_x86_64.whl", hash = "sha256:9061c0d5ee4f3038b590afedd527a925e5de27195dc342381bac7675b2c5efe4"},
{file = "statsmodels-0.13.5-cp311-cp311-macosx_11_0_arm64.whl", hash = "sha256:e1d89cba5fafc1bf8e75296fdfad0b619de2bfb5e6c132913991d207f3ead675"},
{file = "statsmodels-0.13.5-cp311-cp311-manylinux_2_17_aarch64.manylinux2014_aarch64.whl", hash = "sha256:01bc16e7c66acb30cd3dda6004c43212c758223d1966131226024a5c99ec5a7e"},
{file = "statsmodels-0.13.5-cp311-cp311-manylinux_2_17_x86_64.manylinux2014_x86_64.whl", hash = "sha256:5d5cd9ab5de2c7489b890213cba2aec3d6468eaaec547041c2dfcb1e03411f7e"},
{file = "statsmodels-0.13.5-cp311-cp311-win_amd64.whl", hash = "sha256:857d5c0564a68a7ef77dc2252bb43c994c0699919b4e1f06a9852c2fbb588765"},
{file = "statsmodels-0.13.5-cp37-cp37m-macosx_10_9_x86_64.whl", hash = "sha256:5a5348b2757ab31c5c31b498f25eff2ea3c42086bef3d3b88847c25a30bdab9c"},
{file = "statsmodels-0.13.5-cp37-cp37m-manylinux_2_17_aarch64.manylinux2014_aarch64.whl", hash = "sha256:9b21648e3a8e7514839ba000a48e495cdd8bb55f1b71c608cf314b05541e283b"},
{file = "statsmodels-0.13.5-cp37-cp37m-manylinux_2_17_x86_64.manylinux2014_x86_64.whl", hash = "sha256:1b829eada6cec07990f5e6820a152af4871c601fd458f76a896fb79ae2114985"},
{file = "statsmodels-0.13.5-cp37-cp37m-win_amd64.whl", hash = "sha256:872b3a8186ef20f647c7ab5ace512a8fc050148f3c2f366460ab359eec3d9695"},
{file = "statsmodels-0.13.5-cp38-cp38-macosx_10_9_x86_64.whl", hash = "sha256:bc1abb81d24f56425febd5a22bb852a1b98e53b80c4a67f50938f9512f154141"},
{file = "statsmodels-0.13.5-cp38-cp38-macosx_11_0_arm64.whl", hash = "sha256:a2c46f1b0811a9736db37badeb102c0903f33bec80145ced3aa54df61aee5c2b"},
{file = "statsmodels-0.13.5-cp38-cp38-manylinux_2_17_aarch64.manylinux2014_aarch64.whl", hash = "sha256:947f79ba9662359f1cfa6e943851f17f72b06e55f4a7c7a2928ed3bc57ed6cb8"},
{file = "statsmodels-0.13.5-cp38-cp38-manylinux_2_17_x86_64.manylinux2014_x86_64.whl", hash = "sha256:046251c939c51e7632bcc8c6d6f31b8ca0eaffdf726d2498463f8de3735c9a82"},
{file = "statsmodels-0.13.5-cp38-cp38-win_amd64.whl", hash = "sha256:84f720e8d611ef8f297e6d2ffa7248764e223ef7221a3fc136e47ae089609611"},
{file = "statsmodels-0.13.5-cp39-cp39-macosx_10_9_x86_64.whl", hash = "sha256:b0d1d24e4adf96ec3c64d9a027dcee2c5d5096bb0dad33b4d91034c0a3c40371"},
{file = "statsmodels-0.13.5-cp39-cp39-macosx_11_0_arm64.whl", hash = "sha256:0f0e5c9c58fb6cba41db01504ec8dd018c96a95152266b7d5d67e0de98840474"},
{file = "statsmodels-0.13.5-cp39-cp39-manylinux_2_17_aarch64.manylinux2014_aarch64.whl", hash = "sha256:5b034aa4b9ad4f4d21abc4dd4841be0809a446db14c7aa5c8a65090aea9f1143"},
{file = "statsmodels-0.13.5-cp39-cp39-manylinux_2_17_x86_64.manylinux2014_x86_64.whl", hash = "sha256:73f97565c29241e839ffcef74fa995afdfe781910ccc27c189e5890193085958"},
{file = "statsmodels-0.13.5-cp39-cp39-win_amd64.whl", hash = "sha256:2ff331e508f2d1a53d3a188305477f4cf05cd8c52beb6483885eb3d51c8be3ad"},
{file = "statsmodels-0.13.5.tar.gz", hash = "sha256:593526acae1c0fda0ea6c48439f67c3943094c542fe769f8b90fe9e6c6cc4871"},
]
sympy = [
{file = "sympy-1.11.1-py3-none-any.whl", hash = "sha256:938f984ee2b1e8eae8a07b884c8b7a1146010040fccddc6539c54f401c8f6fcf"},
{file = "sympy-1.11.1.tar.gz", hash = "sha256:e32380dce63cb7c0108ed525570092fd45168bdae2faa17e528221ef72e88658"},
]
tblib = [
{file = "tblib-1.7.0-py2.py3-none-any.whl", hash = "sha256:289fa7359e580950e7d9743eab36b0691f0310fce64dee7d9c31065b8f723e23"},
{file = "tblib-1.7.0.tar.gz", hash = "sha256:059bd77306ea7b419d4f76016aef6d7027cc8a0785579b5aad198803435f882c"},
]
tenacity = [
{file = "tenacity-8.1.0-py3-none-any.whl", hash = "sha256:35525cd47f82830069f0d6b73f7eb83bc5b73ee2fff0437952cedf98b27653ac"},
{file = "tenacity-8.1.0.tar.gz", hash = "sha256:e48c437fdf9340f5666b92cd7990e96bc5fc955e1298baf4a907e3972067a445"},
]
tensorboard = [
{file = "tensorboard-2.11.0-py3-none-any.whl", hash = "sha256:a0e592ee87962e17af3f0dce7faae3fbbd239030159e9e625cce810b7e35c53d"},
]
tensorboard-data-server = [
{file = "tensorboard_data_server-0.6.1-py3-none-any.whl", hash = "sha256:809fe9887682d35c1f7d1f54f0f40f98bb1f771b14265b453ca051e2ce58fca7"},
{file = "tensorboard_data_server-0.6.1-py3-none-macosx_10_9_x86_64.whl", hash = "sha256:fa8cef9be4fcae2f2363c88176638baf2da19c5ec90addb49b1cde05c95c88ee"},
{file = "tensorboard_data_server-0.6.1-py3-none-manylinux2010_x86_64.whl", hash = "sha256:d8237580755e58eff68d1f3abefb5b1e39ae5c8b127cc40920f9c4fb33f4b98a"},
]
tensorboard-plugin-wit = [
{file = "tensorboard_plugin_wit-1.8.1-py3-none-any.whl", hash = "sha256:ff26bdd583d155aa951ee3b152b3d0cffae8005dc697f72b44a8e8c2a77a8cbe"},
]
tensorflow = [
{file = "tensorflow-2.11.0-cp310-cp310-macosx_10_14_x86_64.whl", hash = "sha256:6c049fec6c2040685d6f43a63e17ccc5d6b0abc16b70cc6f5e7d691262b5d2d0"},
{file = "tensorflow-2.11.0-cp310-cp310-manylinux_2_17_aarch64.manylinux2014_aarch64.whl", hash = "sha256:bcc8380820cea8f68f6c90b8aee5432e8537e5bb9ec79ac61a98e6a9a02c7d40"},
{file = "tensorflow-2.11.0-cp310-cp310-manylinux_2_17_x86_64.manylinux2014_x86_64.whl", hash = "sha256:d973458241c8771bf95d4ba68ad5d67b094f72dd181c2d562ffab538c1b0dad7"},
{file = "tensorflow-2.11.0-cp310-cp310-win_amd64.whl", hash = "sha256:d470b772ee3c291a8c7be2331e7c379e0c338223c0bf532f5906d4556f17580d"},
{file = "tensorflow-2.11.0-cp37-cp37m-macosx_10_14_x86_64.whl", hash = "sha256:d29c1179149fa469ad68234c52c83081d037ead243f90e826074e2563a0f938a"},
{file = "tensorflow-2.11.0-cp37-cp37m-manylinux_2_17_aarch64.manylinux2014_aarch64.whl", hash = "sha256:2cdba2fce00d6c924470d4fb65d5e95a4b6571a863860608c0c13f0393f4ca0d"},
{file = "tensorflow-2.11.0-cp37-cp37m-manylinux_2_17_x86_64.manylinux2014_x86_64.whl", hash = "sha256:4f2ab20f93d2b52a44b414ec6dcf82aa12110e90e0920039a27108de28ae2728"},
{file = "tensorflow-2.11.0-cp37-cp37m-win_amd64.whl", hash = "sha256:445510f092f7827e1f60f59b8bfb58e664aaf05d07daaa21c5735a7f76ca2b25"},
{file = "tensorflow-2.11.0-cp38-cp38-macosx_10_14_x86_64.whl", hash = "sha256:056d29f2212342536ce3856aa47910a2515eb97ec0a6cc29ed47fc4be1369ec8"},
{file = "tensorflow-2.11.0-cp38-cp38-manylinux_2_17_aarch64.manylinux2014_aarch64.whl", hash = "sha256:17b29d6d360fad545ab1127db52592efd3f19ac55c1a45e5014da328ae867ab4"},
{file = "tensorflow-2.11.0-cp38-cp38-manylinux_2_17_x86_64.manylinux2014_x86_64.whl", hash = "sha256:335ab5cccd7a1c46e3d89d9d46913f0715e8032df8d7438f9743b3fb97b39f69"},
{file = "tensorflow-2.11.0-cp38-cp38-win_amd64.whl", hash = "sha256:d48da37c8ae711eb38047a56a052ca8bb4ee018a91a479e42b7a8d117628c32e"},
{file = "tensorflow-2.11.0-cp39-cp39-macosx_10_14_x86_64.whl", hash = "sha256:d9cf25bca641f2e5c77caa3bfd8dd6b892a7aec0695c54d2a7c9f52a54a8d487"},
{file = "tensorflow-2.11.0-cp39-cp39-manylinux_2_17_aarch64.manylinux2014_aarch64.whl", hash = "sha256:0d28f9691ebc48c0075e271023b3f147ae2bc29a3d3a7f42d45019c6b4a700d2"},
{file = "tensorflow-2.11.0-cp39-cp39-manylinux_2_17_x86_64.manylinux2014_x86_64.whl", hash = "sha256:276a44210d956701899dc78ad0aa116a0071f22fb0bcc1ea6bb59f7646b08d11"},
{file = "tensorflow-2.11.0-cp39-cp39-win_amd64.whl", hash = "sha256:cc3444fe1d58c65a195a69656bf56015bf19dc2916da607d784b0a1e215ec008"},
]
tensorflow-estimator = [
{file = "tensorflow_estimator-2.11.0-py2.py3-none-any.whl", hash = "sha256:ea3b64acfff3d9a244f06178c9bdedcbdd3f125b67d0888dba8229498d06468b"},
]
tensorflow-io-gcs-filesystem = [
{file = "tensorflow_io_gcs_filesystem-0.28.0-cp310-cp310-macosx_10_14_x86_64.whl", hash = "sha256:22753dc28c949bfaf29b573ee376370762c88d80330fe95cfb291261eb5e927a"},
{file = "tensorflow_io_gcs_filesystem-0.28.0-cp310-cp310-manylinux_2_12_x86_64.manylinux2010_x86_64.whl", hash = "sha256:52988659f405166df79905e9859bc84ae2a71e3ff61522ba32a95e4dce8e66d2"},
{file = "tensorflow_io_gcs_filesystem-0.28.0-cp310-cp310-win_amd64.whl", hash = "sha256:698d7f89e09812b9afeb47c3860797343a22f997c64ab9dab98132c61daa8a7d"},
{file = "tensorflow_io_gcs_filesystem-0.28.0-cp311-cp311-macosx_10_14_x86_64.whl", hash = "sha256:bbf245883aa52ec687b66d0fcbe0f5f0a92d98c0b1c53e6a736039a3548d29a1"},
{file = "tensorflow_io_gcs_filesystem-0.28.0-cp311-cp311-manylinux_2_12_x86_64.manylinux2010_x86_64.whl", hash = "sha256:6d95f306ff225c5053fd06deeab3e3a2716357923cb40c44d566c11be779caa3"},
{file = "tensorflow_io_gcs_filesystem-0.28.0-cp311-cp311-win_amd64.whl", hash = "sha256:5fbef5836e70026245d8d9e692c44dae2c6dbc208c743d01f5b7a2978d6b6bc6"},
{file = "tensorflow_io_gcs_filesystem-0.28.0-cp37-cp37m-macosx_10_14_x86_64.whl", hash = "sha256:00cf6a92f1f9f90b2ba2d728870bcd2a70b116316d0817ab0b91dd390c25b3fd"},
{file = "tensorflow_io_gcs_filesystem-0.28.0-cp37-cp37m-manylinux_2_12_x86_64.manylinux2010_x86_64.whl", hash = "sha256:f76cbe1a784841c223f6861e5f6c7e53aa6232cb626d57e76881a0638c365de6"},
{file = "tensorflow_io_gcs_filesystem-0.28.0-cp37-cp37m-win_amd64.whl", hash = "sha256:c5d99f56c12a349905ff684142e4d2df06ae68ecf50c4aad5449a5f81731d858"},
{file = "tensorflow_io_gcs_filesystem-0.28.0-cp38-cp38-macosx_10_14_x86_64.whl", hash = "sha256:b6e2d275020fb4d1a952cd3fa546483f4e46ad91d64e90d3458e5ca3d12f6477"},
{file = "tensorflow_io_gcs_filesystem-0.28.0-cp38-cp38-manylinux_2_12_x86_64.manylinux2010_x86_64.whl", hash = "sha256:a6670e0da16c884267e896ea5c3334d6fd319bd6ff7cf917043a9f3b2babb1b3"},
{file = "tensorflow_io_gcs_filesystem-0.28.0-cp38-cp38-win_amd64.whl", hash = "sha256:bfed720fc691d3f45802a7bed420716805aef0939c11cebf25798906201f626e"},
{file = "tensorflow_io_gcs_filesystem-0.28.0-cp39-cp39-macosx_10_14_x86_64.whl", hash = "sha256:cc062ce13ec95fb64b1fd426818a6d2b0e5be9692bc0e43a19cce115b6da4336"},
{file = "tensorflow_io_gcs_filesystem-0.28.0-cp39-cp39-manylinux_2_12_x86_64.manylinux2010_x86_64.whl", hash = "sha256:366e1eff8dbd6b64333d7061e2a8efd081ae4742614f717ced08d8cc9379eb50"},
{file = "tensorflow_io_gcs_filesystem-0.28.0-cp39-cp39-win_amd64.whl", hash = "sha256:9484893779324b2d34874b0aacf3b824eb4f22d782e75df029cbccab2e607974"},
]
termcolor = [
{file = "termcolor-2.1.1-py3-none-any.whl", hash = "sha256:fa852e957f97252205e105dd55bbc23b419a70fec0085708fc0515e399f304fd"},
{file = "termcolor-2.1.1.tar.gz", hash = "sha256:67cee2009adc6449c650f6bcf3bdeed00c8ba53a8cda5362733c53e0a39fb70b"},
]
terminado = [
{file = "terminado-0.17.0-py3-none-any.whl", hash = "sha256:bf6fe52accd06d0661d7611cc73202121ec6ee51e46d8185d489ac074ca457c2"},
{file = "terminado-0.17.0.tar.gz", hash = "sha256:520feaa3aeab8ad64a69ca779be54be9234edb2d0d6567e76c93c2c9a4e6e43f"},
]
thinc = [
{file = "thinc-8.1.5-cp310-cp310-macosx_10_9_x86_64.whl", hash = "sha256:5dc6629e4770a13dec34eda3c4d89302f1b5c91ac4663cd53f876a4e761fcc00"},
{file = "thinc-8.1.5-cp310-cp310-macosx_11_0_arm64.whl", hash = "sha256:8af5639de41a08d358fac073ac116faefe75289d9bed5c1fbf6c7a54724529ea"},
{file = "thinc-8.1.5-cp310-cp310-manylinux_2_17_aarch64.manylinux2014_aarch64.whl", hash = "sha256:4d66eeacc29769bf4238a0666f05e38d75dce60ab609eea5089975e6d8b82721"},
{file = "thinc-8.1.5-cp310-cp310-manylinux_2_17_x86_64.manylinux2014_x86_64.whl", hash = "sha256:25fcf9b53317f3addca048f1295d4708a95c526821295fe42398e23520514373"},
{file = "thinc-8.1.5-cp310-cp310-win_amd64.whl", hash = "sha256:a683f5280601f2fa1625e738e2b6ce481d17b07350823164f5863aab6b8b8a5d"},
{file = "thinc-8.1.5-cp311-cp311-macosx_10_9_x86_64.whl", hash = "sha256:404af2a714d6e688d27f7816042bca85766cbc57808aa9afb3309ad786000726"},
{file = "thinc-8.1.5-cp311-cp311-macosx_11_0_arm64.whl", hash = "sha256:ee28aa9773cb69d6c95d0c58b3fa9997c88840ad1eb877576f407a5b3b0f93c0"},
{file = "thinc-8.1.5-cp311-cp311-manylinux_2_17_aarch64.manylinux2014_aarch64.whl", hash = "sha256:7acccd5fb2fcd6caab1f3ad9d3f6acd1c6194a638dceccb5a33bd6f1875221ab"},
{file = "thinc-8.1.5-cp311-cp311-manylinux_2_17_x86_64.manylinux2014_x86_64.whl", hash = "sha256:1dc59ab558c85f901ac8299eb8ff1be14404b4d47e5ed3f94f897e25496e4f80"},
{file = "thinc-8.1.5-cp311-cp311-win_amd64.whl", hash = "sha256:07a4cf13c6f0259f32c9d023e2d32d0f5e0aa12ce0422792dbadd24fa1e0379e"},
{file = "thinc-8.1.5-cp36-cp36m-manylinux_2_17_aarch64.manylinux2014_aarch64.whl", hash = "sha256:3ad722c4b1351a712bf8759307ea1213f236aee4a170b2ff31f7908f31b34261"},
{file = "thinc-8.1.5-cp36-cp36m-manylinux_2_17_x86_64.manylinux2014_x86_64.whl", hash = "sha256:076d68f6c27862b66e15af3622651c58f66b3d3b1c69beadbf1c13da294f05cc"},
{file = "thinc-8.1.5-cp36-cp36m-win_amd64.whl", hash = "sha256:91a8ef8dd565b6aa9b3161b97eece079993109be156f4e8501c8bd36e02b6f3f"},
{file = "thinc-8.1.5-cp37-cp37m-macosx_10_9_x86_64.whl", hash = "sha256:73538c0e596d1f281678354f6508d4af5fad3ae0743b069a96628f2a96085fa5"},
{file = "thinc-8.1.5-cp37-cp37m-manylinux_2_17_aarch64.manylinux2014_aarch64.whl", hash = "sha256:ea5e6502565fe72f9a975f6fe5d1be9d19914d2a3abb3158da08b4adffaa97c6"},
{file = "thinc-8.1.5-cp37-cp37m-manylinux_2_17_x86_64.manylinux2014_x86_64.whl", hash = "sha256:d202e79e3d785a2931d580d3dafaa6ca357c5656c82341121731a3491a1c8887"},
{file = "thinc-8.1.5-cp37-cp37m-win_amd64.whl", hash = "sha256:61dfa235c891c1fa24f9607cd0cad264806adeb70d267162c6e5d91fb9f78640"},
{file = "thinc-8.1.5-cp38-cp38-macosx_10_9_x86_64.whl", hash = "sha256:b62a4247cce4c3a07014b9386b9045dbc15a83aa46102a7fcd5d8eec21fa463a"},
{file = "thinc-8.1.5-cp38-cp38-macosx_11_0_arm64.whl", hash = "sha256:345d15eb45743b305a35dd1dc77d282248e55e45a0a84c38d2dfc9fad6130125"},
{file = "thinc-8.1.5-cp38-cp38-manylinux_2_17_aarch64.manylinux2014_aarch64.whl", hash = "sha256:6793340b5ada30f11d9beaa6001ade6d80cf3a7877d701ec1710552145dabb33"},
{file = "thinc-8.1.5-cp38-cp38-manylinux_2_17_x86_64.manylinux2014_x86_64.whl", hash = "sha256:fa07750e65cc7d3bd922bf2046a10ef28cf22497990da13c3ca154b25449b758"},
{file = "thinc-8.1.5-cp38-cp38-win_amd64.whl", hash = "sha256:b7c1b8417e6bebcebe0bbded816b7b6587a1e239539109897e15cf8463dbed10"},
{file = "thinc-8.1.5-cp39-cp39-macosx_10_9_x86_64.whl", hash = "sha256:ad96acada56e4a0509b834c2e0950a5066727ddfc8d2201b83f7bca8751886aa"},
{file = "thinc-8.1.5-cp39-cp39-macosx_11_0_arm64.whl", hash = "sha256:5d0144cccb3fb08b15bba73a97f83c0f311a388417fb89d5bb4451abe559b0a2"},
{file = "thinc-8.1.5-cp39-cp39-manylinux_2_17_aarch64.manylinux2014_aarch64.whl", hash = "sha256:ced446d2af306a29b0c9ba8940a6631e2e9ef287f9643f4a1d539d69e9fc7266"},
{file = "thinc-8.1.5-cp39-cp39-manylinux_2_17_x86_64.manylinux2014_x86_64.whl", hash = "sha256:6bb376234c44f173445651c9bf397d05622e31c09a98f81cee98f5908d674380"},
{file = "thinc-8.1.5-cp39-cp39-win_amd64.whl", hash = "sha256:16be051c6f71d967fe87c3bda3a760699539cf75fee6b32527ea38feb3002e56"},
{file = "thinc-8.1.5.tar.gz", hash = "sha256:4d3e4de33d2d0eae7c1455c60c680e453b0204c29e3d2d548d7a9e7fe08ccfbd"},
]
threadpoolctl = [
{file = "threadpoolctl-3.1.0-py3-none-any.whl", hash = "sha256:8b99adda265feb6773280df41eece7b2e6561b772d21ffd52e372f999024907b"},
{file = "threadpoolctl-3.1.0.tar.gz", hash = "sha256:a335baacfaa4400ae1f0d8e3a58d6674d2f8828e3716bb2802c44955ad391380"},
]
tinycss2 = [
{file = "tinycss2-1.2.1-py3-none-any.whl", hash = "sha256:2b80a96d41e7c3914b8cda8bc7f705a4d9c49275616e886103dd839dfc847847"},
{file = "tinycss2-1.2.1.tar.gz", hash = "sha256:8cff3a8f066c2ec677c06dbc7b45619804a6938478d9d73c284b29d14ecb0627"},
]
tokenize-rt = [
{file = "tokenize_rt-5.0.0-py2.py3-none-any.whl", hash = "sha256:c67772c662c6b3dc65edf66808577968fb10badfc2042e3027196bed4daf9e5a"},
{file = "tokenize_rt-5.0.0.tar.gz", hash = "sha256:3160bc0c3e8491312d0485171dea861fc160a240f5f5766b72a1165408d10740"},
]
tomli = [
{file = "tomli-2.0.1-py3-none-any.whl", hash = "sha256:939de3e7a6161af0c887ef91b7d41a53e7c5a1ca976325f429cb46ea9bc30ecc"},
{file = "tomli-2.0.1.tar.gz", hash = "sha256:de526c12914f0c550d15924c62d72abc48d6fe7364aa87328337a31007fe8a4f"},
]
toolz = [
{file = "toolz-0.12.0-py3-none-any.whl", hash = "sha256:2059bd4148deb1884bb0eb770a3cde70e7f954cfbbdc2285f1f2de01fd21eb6f"},
{file = "toolz-0.12.0.tar.gz", hash = "sha256:88c570861c440ee3f2f6037c4654613228ff40c93a6c25e0eba70d17282c6194"},
]
torch = [
{file = "torch-1.12.1-cp310-cp310-manylinux1_x86_64.whl", hash = "sha256:9c038662db894a23e49e385df13d47b2a777ffd56d9bcd5b832593fab0a7e286"},
{file = "torch-1.12.1-cp310-cp310-manylinux2014_aarch64.whl", hash = "sha256:4e1b9c14cf13fd2ab8d769529050629a0e68a6fc5cb8e84b4a3cc1dd8c4fe541"},
{file = "torch-1.12.1-cp310-cp310-win_amd64.whl", hash = "sha256:e9c8f4a311ac29fc7e8e955cfb7733deb5dbe1bdaabf5d4af2765695824b7e0d"},
{file = "torch-1.12.1-cp310-none-macosx_10_9_x86_64.whl", hash = "sha256:976c3f997cea38ee91a0dd3c3a42322785414748d1761ef926b789dfa97c6134"},
{file = "torch-1.12.1-cp310-none-macosx_11_0_arm64.whl", hash = "sha256:68104e4715a55c4bb29a85c6a8d57d820e0757da363be1ba680fa8cc5be17b52"},
{file = "torch-1.12.1-cp37-cp37m-manylinux1_x86_64.whl", hash = "sha256:743784ccea0dc8f2a3fe6a536bec8c4763bd82c1352f314937cb4008d4805de1"},
{file = "torch-1.12.1-cp37-cp37m-manylinux2014_aarch64.whl", hash = "sha256:b5dbcca369800ce99ba7ae6dee3466607a66958afca3b740690d88168752abcf"},
{file = "torch-1.12.1-cp37-cp37m-win_amd64.whl", hash = "sha256:f3b52a634e62821e747e872084ab32fbcb01b7fa7dbb7471b6218279f02a178a"},
{file = "torch-1.12.1-cp37-none-macosx_10_9_x86_64.whl", hash = "sha256:8a34a2fbbaa07c921e1b203f59d3d6e00ed379f2b384445773bd14e328a5b6c8"},
{file = "torch-1.12.1-cp37-none-macosx_11_0_arm64.whl", hash = "sha256:42f639501928caabb9d1d55ddd17f07cd694de146686c24489ab8c615c2871f2"},
{file = "torch-1.12.1-cp38-cp38-manylinux1_x86_64.whl", hash = "sha256:0b44601ec56f7dd44ad8afc00846051162ef9c26a8579dda0a02194327f2d55e"},
{file = "torch-1.12.1-cp38-cp38-manylinux2014_aarch64.whl", hash = "sha256:cd26d8c5640c3a28c526d41ccdca14cf1cbca0d0f2e14e8263a7ac17194ab1d2"},
{file = "torch-1.12.1-cp38-cp38-win_amd64.whl", hash = "sha256:42e115dab26f60c29e298559dbec88444175528b729ae994ec4c65d56fe267dd"},
{file = "torch-1.12.1-cp38-none-macosx_10_9_x86_64.whl", hash = "sha256:a8320ba9ad87e80ca5a6a016e46ada4d1ba0c54626e135d99b2129a4541c509d"},
{file = "torch-1.12.1-cp38-none-macosx_11_0_arm64.whl", hash = "sha256:03e31c37711db2cd201e02de5826de875529e45a55631d317aadce2f1ed45aa8"},
{file = "torch-1.12.1-cp39-cp39-manylinux1_x86_64.whl", hash = "sha256:9b356aea223772cd754edb4d9ecf2a025909b8615a7668ac7d5130f86e7ec421"},
{file = "torch-1.12.1-cp39-cp39-manylinux2014_aarch64.whl", hash = "sha256:6cf6f54b43c0c30335428195589bd00e764a6d27f3b9ba637aaa8c11aaf93073"},
{file = "torch-1.12.1-cp39-cp39-win_amd64.whl", hash = "sha256:f00c721f489089dc6364a01fd84906348fe02243d0af737f944fddb36003400d"},
{file = "torch-1.12.1-cp39-none-macosx_10_9_x86_64.whl", hash = "sha256:bfec2843daa654f04fda23ba823af03e7b6f7650a873cdb726752d0e3718dada"},
{file = "torch-1.12.1-cp39-none-macosx_11_0_arm64.whl", hash = "sha256:69fe2cae7c39ccadd65a123793d30e0db881f1c1927945519c5c17323131437e"},
]
torchvision = [
{file = "torchvision-0.13.1-cp310-cp310-macosx_10_9_x86_64.whl", hash = "sha256:19286a733c69dcbd417b86793df807bd227db5786ed787c17297741a9b0d0fc7"},
{file = "torchvision-0.13.1-cp310-cp310-macosx_11_0_arm64.whl", hash = "sha256:08f592ea61836ebeceb5c97f4d7a813b9d7dc651bbf7ce4401563ccfae6a21fc"},
{file = "torchvision-0.13.1-cp310-cp310-manylinux1_x86_64.whl", hash = "sha256:ef5fe3ec1848123cd0ec74c07658192b3147dcd38e507308c790d5943e87b88c"},
{file = "torchvision-0.13.1-cp310-cp310-manylinux2014_aarch64.whl", hash = "sha256:099874088df104d54d8008f2a28539ca0117b512daed8bf3c2bbfa2b7ccb187a"},
{file = "torchvision-0.13.1-cp310-cp310-win_amd64.whl", hash = "sha256:8e4d02e4d8a203e0c09c10dfb478214c224d080d31efc0dbf36d9c4051f7f3c6"},
{file = "torchvision-0.13.1-cp37-cp37m-macosx_10_9_x86_64.whl", hash = "sha256:5e631241bee3661de64f83616656224af2e3512eb2580da7c08e08b8c965a8ac"},
{file = "torchvision-0.13.1-cp37-cp37m-manylinux1_x86_64.whl", hash = "sha256:899eec0b9f3b99b96d6f85b9aa58c002db41c672437677b553015b9135b3be7e"},
{file = "torchvision-0.13.1-cp37-cp37m-manylinux2014_aarch64.whl", hash = "sha256:83e9e2457f23110fd53b0177e1bc621518d6ea2108f570e853b768ce36b7c679"},
{file = "torchvision-0.13.1-cp37-cp37m-win_amd64.whl", hash = "sha256:7552e80fa222252b8b217a951c85e172a710ea4cad0ae0c06fbb67addece7871"},
{file = "torchvision-0.13.1-cp38-cp38-macosx_10_9_x86_64.whl", hash = "sha256:f230a1a40ed70d51e463ce43df243ec520902f8725de2502e485efc5eea9d864"},
{file = "torchvision-0.13.1-cp38-cp38-macosx_11_0_arm64.whl", hash = "sha256:e9a563894f9fa40692e24d1aa58c3ef040450017cfed3598ff9637f404f3fe3b"},
{file = "torchvision-0.13.1-cp38-cp38-manylinux1_x86_64.whl", hash = "sha256:7cb789ceefe6dcd0dc8eeda37bfc45efb7cf34770eac9533861d51ca508eb5b3"},
{file = "torchvision-0.13.1-cp38-cp38-manylinux2014_aarch64.whl", hash = "sha256:87c137f343197769a51333076e66bfcd576301d2cd8614b06657187c71b06c4f"},
{file = "torchvision-0.13.1-cp38-cp38-win_amd64.whl", hash = "sha256:4d8bf321c4380854ef04613935fdd415dce29d1088a7ff99e06e113f0efe9203"},
{file = "torchvision-0.13.1-cp39-cp39-macosx_10_9_x86_64.whl", hash = "sha256:0298bae3b09ac361866088434008d82b99d6458fe8888c8df90720ef4b347d44"},
{file = "torchvision-0.13.1-cp39-cp39-macosx_11_0_arm64.whl", hash = "sha256:c5ed609c8bc88c575226400b2232e0309094477c82af38952e0373edef0003fd"},
{file = "torchvision-0.13.1-cp39-cp39-manylinux1_x86_64.whl", hash = "sha256:3567fb3def829229ec217c1e38f08c5128ff7fb65854cac17ebac358ff7aa309"},
{file = "torchvision-0.13.1-cp39-cp39-manylinux2014_aarch64.whl", hash = "sha256:b167934a5943242da7b1e59318f911d2d253feeca0d13ad5d832b58eed943401"},
{file = "torchvision-0.13.1-cp39-cp39-win_amd64.whl", hash = "sha256:0e77706cc90462653620e336bb90daf03d7bf1b88c3a9a3037df8d111823a56e"},
]
tornado = [
{file = "tornado-6.2-cp37-abi3-macosx_10_9_universal2.whl", hash = "sha256:20f638fd8cc85f3cbae3c732326e96addff0a15e22d80f049e00121651e82e72"},
{file = "tornado-6.2-cp37-abi3-macosx_10_9_x86_64.whl", hash = "sha256:87dcafae3e884462f90c90ecc200defe5e580a7fbbb4365eda7c7c1eb809ebc9"},
{file = "tornado-6.2-cp37-abi3-manylinux_2_17_aarch64.manylinux2014_aarch64.whl", hash = "sha256:ba09ef14ca9893954244fd872798b4ccb2367c165946ce2dd7376aebdde8e3ac"},
{file = "tornado-6.2-cp37-abi3-manylinux_2_5_i686.manylinux1_i686.manylinux_2_17_i686.manylinux2014_i686.whl", hash = "sha256:b8150f721c101abdef99073bf66d3903e292d851bee51910839831caba341a75"},
{file = "tornado-6.2-cp37-abi3-manylinux_2_5_x86_64.manylinux1_x86_64.manylinux_2_17_x86_64.manylinux2014_x86_64.whl", hash = "sha256:d3a2f5999215a3a06a4fc218026cd84c61b8b2b40ac5296a6db1f1451ef04c1e"},
{file = "tornado-6.2-cp37-abi3-musllinux_1_1_aarch64.whl", hash = "sha256:5f8c52d219d4995388119af7ccaa0bcec289535747620116a58d830e7c25d8a8"},
{file = "tornado-6.2-cp37-abi3-musllinux_1_1_i686.whl", hash = "sha256:6fdfabffd8dfcb6cf887428849d30cf19a3ea34c2c248461e1f7d718ad30b66b"},
{file = "tornado-6.2-cp37-abi3-musllinux_1_1_x86_64.whl", hash = "sha256:1d54d13ab8414ed44de07efecb97d4ef7c39f7438cf5e976ccd356bebb1b5fca"},
{file = "tornado-6.2-cp37-abi3-win32.whl", hash = "sha256:5c87076709343557ef8032934ce5f637dbb552efa7b21d08e89ae7619ed0eb23"},
{file = "tornado-6.2-cp37-abi3-win_amd64.whl", hash = "sha256:e5f923aa6a47e133d1cf87d60700889d7eae68988704e20c75fb2d65677a8e4b"},
{file = "tornado-6.2.tar.gz", hash = "sha256:9b630419bde84ec666bfd7ea0a4cb2a8a651c2d5cccdbdd1972a0c859dfc3c13"},
]
tqdm = [
{file = "tqdm-4.64.1-py2.py3-none-any.whl", hash = "sha256:6fee160d6ffcd1b1c68c65f14c829c22832bc401726335ce92c52d395944a6a1"},
{file = "tqdm-4.64.1.tar.gz", hash = "sha256:5f4f682a004951c1b450bc753c710e9280c5746ce6ffedee253ddbcbf54cf1e4"},
]
traitlets = [
{file = "traitlets-5.5.0-py3-none-any.whl", hash = "sha256:1201b2c9f76097195989cdf7f65db9897593b0dfd69e4ac96016661bb6f0d30f"},
{file = "traitlets-5.5.0.tar.gz", hash = "sha256:b122f9ff2f2f6c1709dab289a05555be011c87828e911c0cf4074b85cb780a79"},
]
typer = [
{file = "typer-0.7.0-py3-none-any.whl", hash = "sha256:b5e704f4e48ec263de1c0b3a2387cd405a13767d2f907f44c1a08cbad96f606d"},
{file = "typer-0.7.0.tar.gz", hash = "sha256:ff797846578a9f2a201b53442aedeb543319466870fbe1c701eab66dd7681165"},
]
typing-extensions = [
{file = "typing_extensions-4.4.0-py3-none-any.whl", hash = "sha256:16fa4864408f655d35ec496218b85f79b3437c829e93320c7c9215ccfd92489e"},
{file = "typing_extensions-4.4.0.tar.gz", hash = "sha256:1511434bb92bf8dd198c12b1cc812e800d4181cfcb867674e0f8279cc93087aa"},
]
tzdata = [
{file = "tzdata-2022.6-py2.py3-none-any.whl", hash = "sha256:04a680bdc5b15750c39c12a448885a51134a27ec9af83667663f0b3a1bf3f342"},
{file = "tzdata-2022.6.tar.gz", hash = "sha256:91f11db4503385928c15598c98573e3af07e7229181bee5375bd30f1695ddcae"},
]
tzlocal = [
{file = "tzlocal-4.2-py3-none-any.whl", hash = "sha256:89885494684c929d9191c57aa27502afc87a579be5cdd3225c77c463ea043745"},
{file = "tzlocal-4.2.tar.gz", hash = "sha256:ee5842fa3a795f023514ac2d801c4a81d1743bbe642e3940143326b3a00addd7"},
]
urllib3 = [
{file = "urllib3-1.26.13-py2.py3-none-any.whl", hash = "sha256:47cc05d99aaa09c9e72ed5809b60e7ba354e64b59c9c173ac3018642d8bb41fc"},
{file = "urllib3-1.26.13.tar.gz", hash = "sha256:c083dd0dce68dbfbe1129d5271cb90f9447dea7d52097c6e0126120c521ddea8"},
]
wasabi = [
{file = "wasabi-0.10.1-py3-none-any.whl", hash = "sha256:fe862cc24034fbc9f04717cd312ab884f71f51a8ecabebc3449b751c2a649d83"},
{file = "wasabi-0.10.1.tar.gz", hash = "sha256:c8e372781be19272942382b14d99314d175518d7822057cb7a97010c4259d249"},
]
wcwidth = [
{file = "wcwidth-0.2.5-py2.py3-none-any.whl", hash = "sha256:beb4802a9cebb9144e99086eff703a642a13d6a0052920003a230f3294bbe784"},
{file = "wcwidth-0.2.5.tar.gz", hash = "sha256:c4d647b99872929fdb7bdcaa4fbe7f01413ed3d98077df798530e5b04f116c83"},
]
webencodings = [
{file = "webencodings-0.5.1-py2.py3-none-any.whl", hash = "sha256:a0af1213f3c2226497a97e2b3aa01a7e4bee4f403f95be16fc9acd2947514a78"},
{file = "webencodings-0.5.1.tar.gz", hash = "sha256:b36a1c245f2d304965eb4e0a82848379241dc04b865afcc4aab16748587e1923"},
]
websocket-client = [
{file = "websocket-client-1.4.2.tar.gz", hash = "sha256:d6e8f90ca8e2dd4e8027c4561adeb9456b54044312dba655e7cae652ceb9ae59"},
{file = "websocket_client-1.4.2-py3-none-any.whl", hash = "sha256:d6b06432f184438d99ac1f456eaf22fe1ade524c3dd16e661142dc54e9cba574"},
]
Werkzeug = [
{file = "Werkzeug-2.2.2-py3-none-any.whl", hash = "sha256:f979ab81f58d7318e064e99c4506445d60135ac5cd2e177a2de0089bfd4c9bd5"},
{file = "Werkzeug-2.2.2.tar.gz", hash = "sha256:7ea2d48322cc7c0f8b3a215ed73eabd7b5d75d0b50e31ab006286ccff9e00b8f"},
]
wheel = [
{file = "wheel-0.38.4-py3-none-any.whl", hash = "sha256:b60533f3f5d530e971d6737ca6d58681ee434818fab630c83a734bb10c083ce8"},
{file = "wheel-0.38.4.tar.gz", hash = "sha256:965f5259b566725405b05e7cf774052044b1ed30119b5d586b2703aafe8719ac"},
]
widgetsnbextension = [
{file = "widgetsnbextension-4.0.3-py3-none-any.whl", hash = "sha256:7f3b0de8fda692d31ef03743b598620e31c2668b835edbd3962d080ccecf31eb"},
{file = "widgetsnbextension-4.0.3.tar.gz", hash = "sha256:34824864c062b0b3030ad78210db5ae6a3960dfb61d5b27562d6631774de0286"},
]
wrapt = [
{file = "wrapt-1.14.1-cp27-cp27m-macosx_10_9_x86_64.whl", hash = "sha256:1b376b3f4896e7930f1f772ac4b064ac12598d1c38d04907e696cc4d794b43d3"},
{file = "wrapt-1.14.1-cp27-cp27m-manylinux1_i686.whl", hash = "sha256:903500616422a40a98a5a3c4ff4ed9d0066f3b4c951fa286018ecdf0750194ef"},
{file = "wrapt-1.14.1-cp27-cp27m-manylinux1_x86_64.whl", hash = "sha256:5a9a0d155deafd9448baff28c08e150d9b24ff010e899311ddd63c45c2445e28"},
{file = "wrapt-1.14.1-cp27-cp27m-manylinux2010_i686.whl", hash = "sha256:ddaea91abf8b0d13443f6dac52e89051a5063c7d014710dcb4d4abb2ff811a59"},
{file = "wrapt-1.14.1-cp27-cp27m-manylinux2010_x86_64.whl", hash = "sha256:36f582d0c6bc99d5f39cd3ac2a9062e57f3cf606ade29a0a0d6b323462f4dd87"},
{file = "wrapt-1.14.1-cp27-cp27mu-manylinux1_i686.whl", hash = "sha256:7ef58fb89674095bfc57c4069e95d7a31cfdc0939e2a579882ac7d55aadfd2a1"},
{file = "wrapt-1.14.1-cp27-cp27mu-manylinux1_x86_64.whl", hash = "sha256:e2f83e18fe2f4c9e7db597e988f72712c0c3676d337d8b101f6758107c42425b"},
{file = "wrapt-1.14.1-cp27-cp27mu-manylinux2010_i686.whl", hash = "sha256:ee2b1b1769f6707a8a445162ea16dddf74285c3964f605877a20e38545c3c462"},
{file = "wrapt-1.14.1-cp27-cp27mu-manylinux2010_x86_64.whl", hash = "sha256:833b58d5d0b7e5b9832869f039203389ac7cbf01765639c7309fd50ef619e0b1"},
{file = "wrapt-1.14.1-cp310-cp310-macosx_10_9_x86_64.whl", hash = "sha256:80bb5c256f1415f747011dc3604b59bc1f91c6e7150bd7db03b19170ee06b320"},
{file = "wrapt-1.14.1-cp310-cp310-macosx_11_0_arm64.whl", hash = "sha256:07f7a7d0f388028b2df1d916e94bbb40624c59b48ecc6cbc232546706fac74c2"},
{file = "wrapt-1.14.1-cp310-cp310-manylinux_2_17_aarch64.manylinux2014_aarch64.whl", hash = "sha256:02b41b633c6261feff8ddd8d11c711df6842aba629fdd3da10249a53211a72c4"},
{file = "wrapt-1.14.1-cp310-cp310-manylinux_2_5_i686.manylinux1_i686.manylinux_2_17_i686.manylinux2014_i686.whl", hash = "sha256:2fe803deacd09a233e4762a1adcea5db5d31e6be577a43352936179d14d90069"},
{file = "wrapt-1.14.1-cp310-cp310-manylinux_2_5_x86_64.manylinux1_x86_64.manylinux_2_17_x86_64.manylinux2014_x86_64.whl", hash = "sha256:257fd78c513e0fb5cdbe058c27a0624c9884e735bbd131935fd49e9fe719d310"},
{file = "wrapt-1.14.1-cp310-cp310-musllinux_1_1_aarch64.whl", hash = "sha256:4fcc4649dc762cddacd193e6b55bc02edca674067f5f98166d7713b193932b7f"},
{file = "wrapt-1.14.1-cp310-cp310-musllinux_1_1_i686.whl", hash = "sha256:11871514607b15cfeb87c547a49bca19fde402f32e2b1c24a632506c0a756656"},
{file = "wrapt-1.14.1-cp310-cp310-musllinux_1_1_x86_64.whl", hash = "sha256:8ad85f7f4e20964db4daadcab70b47ab05c7c1cf2a7c1e51087bfaa83831854c"},
{file = "wrapt-1.14.1-cp310-cp310-win32.whl", hash = "sha256:a9a52172be0b5aae932bef82a79ec0a0ce87288c7d132946d645eba03f0ad8a8"},
{file = "wrapt-1.14.1-cp310-cp310-win_amd64.whl", hash = "sha256:6d323e1554b3d22cfc03cd3243b5bb815a51f5249fdcbb86fda4bf62bab9e164"},
{file = "wrapt-1.14.1-cp35-cp35m-manylinux1_i686.whl", hash = "sha256:43ca3bbbe97af00f49efb06e352eae40434ca9d915906f77def219b88e85d907"},
{file = "wrapt-1.14.1-cp35-cp35m-manylinux1_x86_64.whl", hash = "sha256:6b1a564e6cb69922c7fe3a678b9f9a3c54e72b469875aa8018f18b4d1dd1adf3"},
{file = "wrapt-1.14.1-cp35-cp35m-manylinux2010_i686.whl", hash = "sha256:00b6d4ea20a906c0ca56d84f93065b398ab74b927a7a3dbd470f6fc503f95dc3"},
{file = "wrapt-1.14.1-cp35-cp35m-manylinux2010_x86_64.whl", hash = "sha256:a85d2b46be66a71bedde836d9e41859879cc54a2a04fad1191eb50c2066f6e9d"},
{file = "wrapt-1.14.1-cp35-cp35m-win32.whl", hash = "sha256:dbcda74c67263139358f4d188ae5faae95c30929281bc6866d00573783c422b7"},
{file = "wrapt-1.14.1-cp35-cp35m-win_amd64.whl", hash = "sha256:b21bb4c09ffabfa0e85e3a6b623e19b80e7acd709b9f91452b8297ace2a8ab00"},
{file = "wrapt-1.14.1-cp36-cp36m-macosx_10_9_x86_64.whl", hash = "sha256:9e0fd32e0148dd5dea6af5fee42beb949098564cc23211a88d799e434255a1f4"},
{file = "wrapt-1.14.1-cp36-cp36m-manylinux_2_17_aarch64.manylinux2014_aarch64.whl", hash = "sha256:9736af4641846491aedb3c3f56b9bc5568d92b0692303b5a305301a95dfd38b1"},
{file = "wrapt-1.14.1-cp36-cp36m-manylinux_2_5_i686.manylinux1_i686.manylinux_2_17_i686.manylinux2014_i686.whl", hash = "sha256:5b02d65b9ccf0ef6c34cba6cf5bf2aab1bb2f49c6090bafeecc9cd81ad4ea1c1"},
{file = "wrapt-1.14.1-cp36-cp36m-manylinux_2_5_x86_64.manylinux1_x86_64.manylinux_2_17_x86_64.manylinux2014_x86_64.whl", hash = "sha256:21ac0156c4b089b330b7666db40feee30a5d52634cc4560e1905d6529a3897ff"},
{file = "wrapt-1.14.1-cp36-cp36m-musllinux_1_1_aarch64.whl", hash = "sha256:9f3e6f9e05148ff90002b884fbc2a86bd303ae847e472f44ecc06c2cd2fcdb2d"},
{file = "wrapt-1.14.1-cp36-cp36m-musllinux_1_1_i686.whl", hash = "sha256:6e743de5e9c3d1b7185870f480587b75b1cb604832e380d64f9504a0535912d1"},
{file = "wrapt-1.14.1-cp36-cp36m-musllinux_1_1_x86_64.whl", hash = "sha256:d79d7d5dc8a32b7093e81e97dad755127ff77bcc899e845f41bf71747af0c569"},
{file = "wrapt-1.14.1-cp36-cp36m-win32.whl", hash = "sha256:81b19725065dcb43df02b37e03278c011a09e49757287dca60c5aecdd5a0b8ed"},
{file = "wrapt-1.14.1-cp36-cp36m-win_amd64.whl", hash = "sha256:b014c23646a467558be7da3d6b9fa409b2c567d2110599b7cf9a0c5992b3b471"},
{file = "wrapt-1.14.1-cp37-cp37m-macosx_10_9_x86_64.whl", hash = "sha256:88bd7b6bd70a5b6803c1abf6bca012f7ed963e58c68d76ee20b9d751c74a3248"},
{file = "wrapt-1.14.1-cp37-cp37m-manylinux_2_17_aarch64.manylinux2014_aarch64.whl", hash = "sha256:b5901a312f4d14c59918c221323068fad0540e34324925c8475263841dbdfe68"},
{file = "wrapt-1.14.1-cp37-cp37m-manylinux_2_5_i686.manylinux1_i686.manylinux_2_17_i686.manylinux2014_i686.whl", hash = "sha256:d77c85fedff92cf788face9bfa3ebaa364448ebb1d765302e9af11bf449ca36d"},
{file = "wrapt-1.14.1-cp37-cp37m-manylinux_2_5_x86_64.manylinux1_x86_64.manylinux_2_17_x86_64.manylinux2014_x86_64.whl", hash = "sha256:8d649d616e5c6a678b26d15ece345354f7c2286acd6db868e65fcc5ff7c24a77"},
{file = "wrapt-1.14.1-cp37-cp37m-musllinux_1_1_aarch64.whl", hash = "sha256:7d2872609603cb35ca513d7404a94d6d608fc13211563571117046c9d2bcc3d7"},
{file = "wrapt-1.14.1-cp37-cp37m-musllinux_1_1_i686.whl", hash = "sha256:ee6acae74a2b91865910eef5e7de37dc6895ad96fa23603d1d27ea69df545015"},
{file = "wrapt-1.14.1-cp37-cp37m-musllinux_1_1_x86_64.whl", hash = "sha256:2b39d38039a1fdad98c87279b48bc5dce2c0ca0d73483b12cb72aa9609278e8a"},
{file = "wrapt-1.14.1-cp37-cp37m-win32.whl", hash = "sha256:60db23fa423575eeb65ea430cee741acb7c26a1365d103f7b0f6ec412b893853"},
{file = "wrapt-1.14.1-cp37-cp37m-win_amd64.whl", hash = "sha256:709fe01086a55cf79d20f741f39325018f4df051ef39fe921b1ebe780a66184c"},
{file = "wrapt-1.14.1-cp38-cp38-macosx_10_9_x86_64.whl", hash = "sha256:8c0ce1e99116d5ab21355d8ebe53d9460366704ea38ae4d9f6933188f327b456"},
{file = "wrapt-1.14.1-cp38-cp38-macosx_11_0_arm64.whl", hash = "sha256:e3fb1677c720409d5f671e39bac6c9e0e422584e5f518bfd50aa4cbbea02433f"},
{file = "wrapt-1.14.1-cp38-cp38-manylinux_2_17_aarch64.manylinux2014_aarch64.whl", hash = "sha256:642c2e7a804fcf18c222e1060df25fc210b9c58db7c91416fb055897fc27e8cc"},
{file = "wrapt-1.14.1-cp38-cp38-manylinux_2_5_i686.manylinux1_i686.manylinux_2_17_i686.manylinux2014_i686.whl", hash = "sha256:7b7c050ae976e286906dd3f26009e117eb000fb2cf3533398c5ad9ccc86867b1"},
{file = "wrapt-1.14.1-cp38-cp38-manylinux_2_5_x86_64.manylinux1_x86_64.manylinux_2_17_x86_64.manylinux2014_x86_64.whl", hash = "sha256:ef3f72c9666bba2bab70d2a8b79f2c6d2c1a42a7f7e2b0ec83bb2f9e383950af"},
{file = "wrapt-1.14.1-cp38-cp38-musllinux_1_1_aarch64.whl", hash = "sha256:01c205616a89d09827986bc4e859bcabd64f5a0662a7fe95e0d359424e0e071b"},
{file = "wrapt-1.14.1-cp38-cp38-musllinux_1_1_i686.whl", hash = "sha256:5a0f54ce2c092aaf439813735584b9537cad479575a09892b8352fea5e988dc0"},
{file = "wrapt-1.14.1-cp38-cp38-musllinux_1_1_x86_64.whl", hash = "sha256:2cf71233a0ed05ccdabe209c606fe0bac7379fdcf687f39b944420d2a09fdb57"},
{file = "wrapt-1.14.1-cp38-cp38-win32.whl", hash = "sha256:aa31fdcc33fef9eb2552cbcbfee7773d5a6792c137b359e82879c101e98584c5"},
{file = "wrapt-1.14.1-cp38-cp38-win_amd64.whl", hash = "sha256:d1967f46ea8f2db647c786e78d8cc7e4313dbd1b0aca360592d8027b8508e24d"},
{file = "wrapt-1.14.1-cp39-cp39-macosx_10_9_x86_64.whl", hash = "sha256:3232822c7d98d23895ccc443bbdf57c7412c5a65996c30442ebe6ed3df335383"},
{file = "wrapt-1.14.1-cp39-cp39-macosx_11_0_arm64.whl", hash = "sha256:988635d122aaf2bdcef9e795435662bcd65b02f4f4c1ae37fbee7401c440b3a7"},
{file = "wrapt-1.14.1-cp39-cp39-manylinux_2_17_aarch64.manylinux2014_aarch64.whl", hash = "sha256:9cca3c2cdadb362116235fdbd411735de4328c61425b0aa9f872fd76d02c4e86"},
{file = "wrapt-1.14.1-cp39-cp39-manylinux_2_5_i686.manylinux1_i686.manylinux_2_17_i686.manylinux2014_i686.whl", hash = "sha256:d52a25136894c63de15a35bc0bdc5adb4b0e173b9c0d07a2be9d3ca64a332735"},
{file = "wrapt-1.14.1-cp39-cp39-manylinux_2_5_x86_64.manylinux1_x86_64.manylinux_2_17_x86_64.manylinux2014_x86_64.whl", hash = "sha256:40e7bc81c9e2b2734ea4bc1aceb8a8f0ceaac7c5299bc5d69e37c44d9081d43b"},
{file = "wrapt-1.14.1-cp39-cp39-musllinux_1_1_aarch64.whl", hash = "sha256:b9b7a708dd92306328117d8c4b62e2194d00c365f18eff11a9b53c6f923b01e3"},
{file = "wrapt-1.14.1-cp39-cp39-musllinux_1_1_i686.whl", hash = "sha256:6a9a25751acb379b466ff6be78a315e2b439d4c94c1e99cb7266d40a537995d3"},
{file = "wrapt-1.14.1-cp39-cp39-musllinux_1_1_x86_64.whl", hash = "sha256:34aa51c45f28ba7f12accd624225e2b1e5a3a45206aa191f6f9aac931d9d56fe"},
{file = "wrapt-1.14.1-cp39-cp39-win32.whl", hash = "sha256:dee0ce50c6a2dd9056c20db781e9c1cfd33e77d2d569f5d1d9321c641bb903d5"},
{file = "wrapt-1.14.1-cp39-cp39-win_amd64.whl", hash = "sha256:dee60e1de1898bde3b238f18340eec6148986da0455d8ba7848d50470a7a32fb"},
{file = "wrapt-1.14.1.tar.gz", hash = "sha256:380a85cf89e0e69b7cfbe2ea9f765f004ff419f34194018a6827ac0e3edfed4d"},
]
xgboost = [
{file = "xgboost-1.7.1-py3-none-macosx_10_15_x86_64.macosx_11_0_x86_64.macosx_12_0_x86_64.whl", hash = "sha256:373d8e95f2f0c0a680ee625a96141b0009f334e132be8493e0f6c69026221bbd"},
{file = "xgboost-1.7.1-py3-none-macosx_12_0_arm64.whl", hash = "sha256:91dfd4af12c01c6e683b0412f48744d2d30d6754e33b297e40845e2d136b3d30"},
{file = "xgboost-1.7.1-py3-none-manylinux2014_aarch64.whl", hash = "sha256:18b9fbad68d2af60737618072e77a43f88eec1113a143f9498698eb5db0d9c41"},
{file = "xgboost-1.7.1-py3-none-manylinux2014_x86_64.whl", hash = "sha256:e96305eb8c8b6061d83ac9fef25437e8ebc8d9c9300e75b8d07f35de1031166b"},
{file = "xgboost-1.7.1-py3-none-win_amd64.whl", hash = "sha256:fbe06896e1b12843c7f428ae56da6ac1c5975545d8785f137f73fd591c54e5f5"},
{file = "xgboost-1.7.1.tar.gz", hash = "sha256:bb302c5c33e14bab94603940987940f29203ecb8767a7a719daf579fbfaace64"},
]
zict = [
{file = "zict-2.2.0-py2.py3-none-any.whl", hash = "sha256:dabcc8c8b6833aa3b6602daad50f03da068322c1a90999ff78aed9eecc8fa92c"},
{file = "zict-2.2.0.tar.gz", hash = "sha256:d7366c2e2293314112dcf2432108428a67b927b00005619feefc310d12d833f3"},
]
zipp = [
{file = "zipp-3.11.0-py3-none-any.whl", hash = "sha256:83a28fcb75844b5c0cdaf5aa4003c2d728c77e05f5aeabe8e95e56727005fbaa"},
{file = "zipp-3.11.0.tar.gz", hash = "sha256:a7a22e05929290a67401440b39690ae6563279bced5f314609d9d03798f56766"},
]
| andresmor-ms | 11c4e0dafd6e824eb81ad14262457d954ae61468 | affe0952f4aba6845247355c171565510c2c1673 | I'm assuming that this file is auto-generated and doesn't need my review, right? | amit-sharma | 261 |
py-why/dowhy | 746 | Functional api/causal estimators | * Introduce `fit()` method to estimators.
* Refactor constructors to avoid using `*args` and `**kwargs` and have more explicit parameters.
* Refactor refuters and other parts of the code to use `fit()` and modify arguments to `estimate_effect()` | null | 2022-11-04 16:15:39+00:00 | 2022-12-03 17:07:53+00:00 | poetry.lock |
[[package]]
name = "absl-py"
version = "1.3.0"
description = "Abseil Python Common Libraries, see https://github.com/abseil/abseil-py."
category = "dev"
optional = false
python-versions = ">=3.6"
[[package]]
name = "alabaster"
version = "0.7.12"
description = "A configurable sidebar-enabled Sphinx theme"
category = "main"
optional = false
python-versions = "*"
[[package]]
name = "anyio"
version = "3.6.2"
description = "High level compatibility layer for multiple asynchronous event loop implementations"
category = "dev"
optional = false
python-versions = ">=3.6.2"
[package.dependencies]
idna = ">=2.8"
sniffio = ">=1.1"
[package.extras]
doc = ["packaging", "sphinx-autodoc-typehints (>=1.2.0)", "sphinx-rtd-theme"]
test = ["contextlib2", "coverage[toml] (>=4.5)", "hypothesis (>=4.0)", "mock (>=4)", "pytest (>=7.0)", "pytest-mock (>=3.6.1)", "trustme", "uvloop (<0.15)", "uvloop (>=0.15)"]
trio = ["trio (>=0.16,<0.22)"]
[[package]]
name = "appnope"
version = "0.1.3"
description = "Disable App Nap on macOS >= 10.9"
category = "dev"
optional = false
python-versions = "*"
[[package]]
name = "argon2-cffi"
version = "21.3.0"
description = "The secure Argon2 password hashing algorithm."
category = "dev"
optional = false
python-versions = ">=3.6"
[package.dependencies]
argon2-cffi-bindings = "*"
[package.extras]
dev = ["cogapp", "coverage[toml] (>=5.0.2)", "furo", "hypothesis", "pre-commit", "pytest", "sphinx", "sphinx-notfound-page", "tomli"]
docs = ["furo", "sphinx", "sphinx-notfound-page"]
tests = ["coverage[toml] (>=5.0.2)", "hypothesis", "pytest"]
[[package]]
name = "argon2-cffi-bindings"
version = "21.2.0"
description = "Low-level CFFI bindings for Argon2"
category = "dev"
optional = false
python-versions = ">=3.6"
[package.dependencies]
cffi = ">=1.0.1"
[package.extras]
dev = ["cogapp", "pre-commit", "pytest", "wheel"]
tests = ["pytest"]
[[package]]
name = "asttokens"
version = "2.1.0"
description = "Annotate AST trees with source code positions"
category = "dev"
optional = false
python-versions = "*"
[package.dependencies]
six = "*"
[package.extras]
test = ["astroid (<=2.5.3)", "pytest"]
[[package]]
name = "astunparse"
version = "1.6.3"
description = "An AST unparser for Python"
category = "dev"
optional = false
python-versions = "*"
[package.dependencies]
six = ">=1.6.1,<2.0"
wheel = ">=0.23.0,<1.0"
[[package]]
name = "attrs"
version = "22.1.0"
description = "Classes Without Boilerplate"
category = "dev"
optional = false
python-versions = ">=3.5"
[package.extras]
dev = ["cloudpickle", "coverage[toml] (>=5.0.2)", "furo", "hypothesis", "mypy (>=0.900,!=0.940)", "pre-commit", "pympler", "pytest (>=4.3.0)", "pytest-mypy-plugins", "sphinx", "sphinx-notfound-page", "zope.interface"]
docs = ["furo", "sphinx", "sphinx-notfound-page", "zope.interface"]
tests = ["cloudpickle", "coverage[toml] (>=5.0.2)", "hypothesis", "mypy (>=0.900,!=0.940)", "pympler", "pytest (>=4.3.0)", "pytest-mypy-plugins", "zope.interface"]
tests-no-zope = ["cloudpickle", "coverage[toml] (>=5.0.2)", "hypothesis", "mypy (>=0.900,!=0.940)", "pympler", "pytest (>=4.3.0)", "pytest-mypy-plugins"]
[[package]]
name = "autogluon-common"
version = "0.6.0"
description = "AutoML for Image, Text, and Tabular Data"
category = "main"
optional = false
python-versions = ">=3.7, <3.10"
[package.dependencies]
boto3 = "*"
numpy = ">=1.21,<1.24"
pandas = ">=1.2.5,<1.4.0 || >1.4.0,<1.6"
setuptools = "*"
[package.extras]
tests = ["pytest", "pytest-mypy", "types-requests", "types-setuptools"]
[[package]]
name = "autogluon-core"
version = "0.6.0"
description = "AutoML for Image, Text, and Tabular Data"
category = "main"
optional = false
python-versions = ">=3.7, <3.10"
[package.dependencies]
"autogluon.common" = "0.6.0"
boto3 = "*"
dask = ">=2021.09.1,<=2021.11.2"
distributed = ">=2021.09.1,<=2021.11.2"
matplotlib = "*"
numpy = ">=1.21,<1.24"
pandas = ">=1.2.5,<1.4.0 || >1.4.0,<1.6"
requests = "*"
scikit-learn = ">=1.0.0,<1.2"
scipy = ">=1.5.4,<1.10.0"
tqdm = ">=4.38.0"
[package.extras]
all = ["hyperopt (>=0.2.7,<0.2.8)", "ray (>=2.0,<2.1)", "ray[tune] (>=2.0,<2.1)"]
ray = ["ray (>=2.0,<2.1)"]
raytune = ["hyperopt (>=0.2.7,<0.2.8)", "ray[tune] (>=2.0,<2.1)"]
tests = ["pytest", "pytest-mypy", "types-requests", "types-setuptools"]
[[package]]
name = "autogluon-features"
version = "0.6.0"
description = "AutoML for Image, Text, and Tabular Data"
category = "main"
optional = false
python-versions = ">=3.7, <3.10"
[package.dependencies]
"autogluon.common" = "0.6.0"
numpy = ">=1.21,<1.24"
pandas = ">=1.2.5,<1.4.0 || >1.4.0,<1.6"
psutil = ">=5.7.3,<6"
scikit-learn = ">=1.0.0,<1.2"
[[package]]
name = "autogluon-tabular"
version = "0.6.0"
description = "AutoML for Image, Text, and Tabular Data"
category = "main"
optional = false
python-versions = ">=3.7, <3.10"
[package.dependencies]
"autogluon.core" = "0.6.0"
"autogluon.features" = "0.6.0"
catboost = {version = ">=1.0,<1.2", optional = true, markers = "extra == \"all\""}
fastai = {version = ">=2.3.1,<2.8", optional = true, markers = "extra == \"all\""}
lightgbm = {version = ">=3.3,<3.4", optional = true, markers = "extra == \"all\""}
networkx = ">=2.3,<3.0"
numpy = ">=1.21,<1.24"
pandas = ">=1.2.5,<1.4.0 || >1.4.0,<1.6"
psutil = ">=5.7.3,<6"
scikit-learn = ">=1.0.0,<1.2"
scipy = ">=1.5.4,<1.10.0"
torch = {version = ">=1.0,<1.13", optional = true, markers = "extra == \"all\""}
xgboost = {version = ">=1.6,<1.8", optional = true, markers = "extra == \"all\""}
[package.extras]
all = ["catboost (>=1.0,<1.2)", "fastai (>=2.3.1,<2.8)", "lightgbm (>=3.3,<3.4)", "torch (>=1.0,<1.13)", "xgboost (>=1.6,<1.8)"]
catboost = ["catboost (>=1.0,<1.2)"]
fastai = ["fastai (>=2.3.1,<2.8)", "torch (>=1.0,<1.13)"]
imodels = ["imodels (>=1.3.0)"]
lightgbm = ["lightgbm (>=3.3,<3.4)"]
skex = ["scikit-learn-intelex (>=2021.5,<2021.6)"]
skl2onnx = ["skl2onnx (>=1.12.0,<1.13.0)"]
tests = ["imodels (>=1.3.0)", "skl2onnx (>=1.12.0,<1.13.0)", "vowpalwabbit (>=8.10,<8.11)"]
vowpalwabbit = ["vowpalwabbit (>=8.10,<8.11)"]
xgboost = ["xgboost (>=1.6,<1.8)"]
[[package]]
name = "babel"
version = "2.11.0"
description = "Internationalization utilities"
category = "main"
optional = false
python-versions = ">=3.6"
[package.dependencies]
pytz = ">=2015.7"
[[package]]
name = "backcall"
version = "0.2.0"
description = "Specifications for callback functions passed in to an API"
category = "dev"
optional = false
python-versions = "*"
[[package]]
name = "backports-zoneinfo"
version = "0.2.1"
description = "Backport of the standard library zoneinfo module"
category = "dev"
optional = false
python-versions = ">=3.6"
[package.extras]
tzdata = ["tzdata"]
[[package]]
name = "beautifulsoup4"
version = "4.11.1"
description = "Screen-scraping library"
category = "dev"
optional = false
python-versions = ">=3.6.0"
[package.dependencies]
soupsieve = ">1.2"
[package.extras]
html5lib = ["html5lib"]
lxml = ["lxml"]
[[package]]
name = "black"
version = "22.10.0"
description = "The uncompromising code formatter."
category = "dev"
optional = false
python-versions = ">=3.7"
[package.dependencies]
click = ">=8.0.0"
ipython = {version = ">=7.8.0", optional = true, markers = "extra == \"jupyter\""}
mypy-extensions = ">=0.4.3"
pathspec = ">=0.9.0"
platformdirs = ">=2"
tokenize-rt = {version = ">=3.2.0", optional = true, markers = "extra == \"jupyter\""}
tomli = {version = ">=1.1.0", markers = "python_full_version < \"3.11.0a7\""}
typing-extensions = {version = ">=3.10.0.0", markers = "python_version < \"3.10\""}
[package.extras]
colorama = ["colorama (>=0.4.3)"]
d = ["aiohttp (>=3.7.4)"]
jupyter = ["ipython (>=7.8.0)", "tokenize-rt (>=3.2.0)"]
uvloop = ["uvloop (>=0.15.2)"]
[[package]]
name = "bleach"
version = "5.0.1"
description = "An easy safelist-based HTML-sanitizing tool."
category = "dev"
optional = false
python-versions = ">=3.7"
[package.dependencies]
six = ">=1.9.0"
webencodings = "*"
[package.extras]
css = ["tinycss2 (>=1.1.0,<1.2)"]
dev = ["Sphinx (==4.3.2)", "black (==22.3.0)", "build (==0.8.0)", "flake8 (==4.0.1)", "hashin (==0.17.0)", "mypy (==0.961)", "pip-tools (==6.6.2)", "pytest (==7.1.2)", "tox (==3.25.0)", "twine (==4.0.1)", "wheel (==0.37.1)"]
[[package]]
name = "blis"
version = "0.7.9"
description = "The Blis BLAS-like linear algebra library, as a self-contained C-extension."
category = "main"
optional = false
python-versions = "*"
[package.dependencies]
numpy = ">=1.15.0"
[[package]]
name = "boto3"
version = "1.26.15"
description = "The AWS SDK for Python"
category = "main"
optional = false
python-versions = ">= 3.7"
[package.dependencies]
botocore = ">=1.29.15,<1.30.0"
jmespath = ">=0.7.1,<2.0.0"
s3transfer = ">=0.6.0,<0.7.0"
[package.extras]
crt = ["botocore[crt] (>=1.21.0,<2.0a0)"]
[[package]]
name = "botocore"
version = "1.29.15"
description = "Low-level, data-driven core of boto 3."
category = "main"
optional = false
python-versions = ">= 3.7"
[package.dependencies]
jmespath = ">=0.7.1,<2.0.0"
python-dateutil = ">=2.1,<3.0.0"
urllib3 = ">=1.25.4,<1.27"
[package.extras]
crt = ["awscrt (==0.14.0)"]
[[package]]
name = "cachetools"
version = "5.2.0"
description = "Extensible memoizing collections and decorators"
category = "dev"
optional = false
python-versions = "~=3.7"
[[package]]
name = "catalogue"
version = "2.0.8"
description = "Super lightweight function registries for your library"
category = "main"
optional = false
python-versions = ">=3.6"
[[package]]
name = "catboost"
version = "1.1.1"
description = "Catboost Python Package"
category = "main"
optional = false
python-versions = "*"
[package.dependencies]
graphviz = "*"
matplotlib = "*"
numpy = ">=1.16.0"
pandas = ">=0.24.0"
plotly = "*"
scipy = "*"
six = "*"
[[package]]
name = "causal-learn"
version = "0.1.3.0"
description = "causal-learn Python Package"
category = "main"
optional = false
python-versions = ">=3.6"
[package.dependencies]
graphviz = "*"
matplotlib = "*"
networkx = "*"
numpy = "*"
pandas = "*"
pydot = "*"
scikit-learn = "*"
scipy = "*"
statsmodels = "*"
tqdm = "*"
[[package]]
name = "causalml"
version = "0.13.0"
description = "Python Package for Uplift Modeling and Causal Inference with Machine Learning Algorithms"
category = "main"
optional = true
python-versions = ">=3.7"
develop = false
[package.dependencies]
Cython = ">=0.28.0"
dill = "*"
forestci = "0.6"
graphviz = "*"
lightgbm = "*"
matplotlib = "*"
numpy = ">=1.18.5"
packaging = "*"
pandas = ">=0.24.1"
pathos = "0.2.9"
pip = ">=10.0"
pydotplus = "*"
pygam = "*"
pyro-ppl = "*"
scikit-learn = "<=1.0.2"
scipy = ">=1.4.1"
seaborn = "*"
setuptools = ">=41.0.0"
shap = "*"
statsmodels = ">=0.9.0"
torch = "*"
tqdm = "*"
xgboost = "*"
[package.extras]
tf = ["tensorflow (>=2.4.0)"]
[package.source]
type = "git"
url = "https://github.com/uber/causalml"
reference = "master"
resolved_reference = "7050c74c257254de3600f69d49bda84a3ac152e2"
[[package]]
name = "certifi"
version = "2022.9.24"
description = "Python package for providing Mozilla's CA Bundle."
category = "main"
optional = false
python-versions = ">=3.6"
[[package]]
name = "cffi"
version = "1.15.1"
description = "Foreign Function Interface for Python calling C code."
category = "dev"
optional = false
python-versions = "*"
[package.dependencies]
pycparser = "*"
[[package]]
name = "charset-normalizer"
version = "2.1.1"
description = "The Real First Universal Charset Detector. Open, modern and actively maintained alternative to Chardet."
category = "main"
optional = false
python-versions = ">=3.6.0"
[package.extras]
unicode-backport = ["unicodedata2"]
[[package]]
name = "click"
version = "8.1.3"
description = "Composable command line interface toolkit"
category = "main"
optional = false
python-versions = ">=3.7"
[package.dependencies]
colorama = {version = "*", markers = "platform_system == \"Windows\""}
[[package]]
name = "cloudpickle"
version = "2.2.0"
description = "Extended pickling support for Python objects"
category = "main"
optional = false
python-versions = ">=3.6"
[[package]]
name = "colorama"
version = "0.4.6"
description = "Cross-platform colored terminal text."
category = "main"
optional = false
python-versions = "!=3.0.*,!=3.1.*,!=3.2.*,!=3.3.*,!=3.4.*,!=3.5.*,!=3.6.*,>=2.7"
[[package]]
name = "confection"
version = "0.0.3"
description = "The sweetest config system for Python"
category = "main"
optional = false
python-versions = ">=3.6"
[package.dependencies]
pydantic = ">=1.7.4,<1.8 || >1.8,<1.8.1 || >1.8.1,<1.11.0"
srsly = ">=2.4.0,<3.0.0"
[[package]]
name = "contourpy"
version = "1.0.6"
description = "Python library for calculating contours of 2D quadrilateral grids"
category = "main"
optional = false
python-versions = ">=3.7"
[package.dependencies]
numpy = ">=1.16"
[package.extras]
bokeh = ["bokeh", "selenium"]
docs = ["docutils (<0.18)", "sphinx (<=5.2.0)", "sphinx-rtd-theme"]
test = ["Pillow", "flake8", "isort", "matplotlib", "pytest"]
test-minimal = ["pytest"]
test-no-codebase = ["Pillow", "matplotlib", "pytest"]
[[package]]
name = "coverage"
version = "6.5.0"
description = "Code coverage measurement for Python"
category = "dev"
optional = false
python-versions = ">=3.7"
[package.dependencies]
tomli = {version = "*", optional = true, markers = "python_full_version <= \"3.11.0a6\" and extra == \"toml\""}
[package.extras]
toml = ["tomli"]
[[package]]
name = "cycler"
version = "0.11.0"
description = "Composable style cycles"
category = "main"
optional = false
python-versions = ">=3.6"
[[package]]
name = "cymem"
version = "2.0.7"
description = "Manage calls to calloc/free through Cython"
category = "main"
optional = false
python-versions = "*"
[[package]]
name = "cython"
version = "0.29.32"
description = "The Cython compiler for writing C extensions for the Python language."
category = "main"
optional = false
python-versions = ">=2.6, !=3.0.*, !=3.1.*, !=3.2.*"
[[package]]
name = "dask"
version = "2021.11.2"
description = "Parallel PyData with Task Scheduling"
category = "main"
optional = false
python-versions = ">=3.7"
[package.dependencies]
cloudpickle = ">=1.1.1"
fsspec = ">=0.6.0"
packaging = ">=20.0"
partd = ">=0.3.10"
pyyaml = "*"
toolz = ">=0.8.2"
[package.extras]
array = ["numpy (>=1.18)"]
complete = ["bokeh (>=1.0.0,!=2.0.0)", "distributed (==2021.11.2)", "jinja2", "numpy (>=1.18)", "pandas (>=1.0)"]
dataframe = ["numpy (>=1.18)", "pandas (>=1.0)"]
diagnostics = ["bokeh (>=1.0.0,!=2.0.0)", "jinja2"]
distributed = ["distributed (==2021.11.2)"]
test = ["pre-commit", "pytest", "pytest-rerunfailures", "pytest-xdist"]
[[package]]
name = "debugpy"
version = "1.6.3"
description = "An implementation of the Debug Adapter Protocol for Python"
category = "dev"
optional = false
python-versions = ">=3.7"
[[package]]
name = "decorator"
version = "5.1.1"
description = "Decorators for Humans"
category = "dev"
optional = false
python-versions = ">=3.5"
[[package]]
name = "defusedxml"
version = "0.7.1"
description = "XML bomb protection for Python stdlib modules"
category = "dev"
optional = false
python-versions = ">=2.7, !=3.0.*, !=3.1.*, !=3.2.*, !=3.3.*, !=3.4.*"
[[package]]
name = "dill"
version = "0.3.6"
description = "serialize all of python"
category = "main"
optional = true
python-versions = ">=3.7"
[package.extras]
graph = ["objgraph (>=1.7.2)"]
[[package]]
name = "distributed"
version = "2021.11.2"
description = "Distributed scheduler for Dask"
category = "main"
optional = false
python-versions = ">=3.7"
[package.dependencies]
click = ">=6.6"
cloudpickle = ">=1.5.0"
dask = "2021.11.2"
jinja2 = "*"
msgpack = ">=0.6.0"
psutil = ">=5.0"
pyyaml = "*"
setuptools = "*"
sortedcontainers = "<2.0.0 || >2.0.0,<2.0.1 || >2.0.1"
tblib = ">=1.6.0"
toolz = ">=0.8.2"
tornado = {version = ">=6.0.3", markers = "python_version >= \"3.8\""}
zict = ">=0.1.3"
[[package]]
name = "docutils"
version = "0.17.1"
description = "Docutils -- Python Documentation Utilities"
category = "main"
optional = false
python-versions = ">=2.7, !=3.0.*, !=3.1.*, !=3.2.*, !=3.3.*, !=3.4.*"
[[package]]
name = "econml"
version = "0.14.0"
description = "This package contains several methods for calculating Conditional Average Treatment Effects"
category = "main"
optional = false
python-versions = "*"
[package.dependencies]
joblib = ">=0.13.0"
lightgbm = "*"
numpy = "*"
pandas = "*"
scikit-learn = ">0.22.0,<1.2"
scipy = ">1.4.0"
shap = ">=0.38.1,<0.41.0"
sparse = "*"
statsmodels = ">=0.10"
[package.extras]
all = ["azure-cli", "dowhy (<0.9)", "keras (<2.4)", "matplotlib (<3.6.0)", "protobuf (<4)", "tensorflow (>1.10,<2.3)"]
automl = ["azure-cli"]
dowhy = ["dowhy (<0.9)"]
plt = ["graphviz", "matplotlib (<3.6.0)"]
tf = ["keras (<2.4)", "protobuf (<4)", "tensorflow (>1.10,<2.3)"]
[[package]]
name = "entrypoints"
version = "0.4"
description = "Discover and load entry points from installed packages."
category = "dev"
optional = false
python-versions = ">=3.6"
[[package]]
name = "exceptiongroup"
version = "1.0.4"
description = "Backport of PEP 654 (exception groups)"
category = "dev"
optional = false
python-versions = ">=3.7"
[package.extras]
test = ["pytest (>=6)"]
[[package]]
name = "executing"
version = "1.2.0"
description = "Get the currently executing AST node of a frame, and other information"
category = "dev"
optional = false
python-versions = "*"
[package.extras]
tests = ["asttokens", "littleutils", "pytest", "rich"]
[[package]]
name = "fastai"
version = "2.7.10"
description = "fastai simplifies training fast and accurate neural nets using modern best practices"
category = "main"
optional = false
python-versions = ">=3.7"
[package.dependencies]
fastcore = ">=1.4.5,<1.6"
fastdownload = ">=0.0.5,<2"
fastprogress = ">=0.2.4"
matplotlib = "*"
packaging = "*"
pandas = "*"
pillow = ">6.0.0"
pip = "*"
pyyaml = "*"
requests = "*"
scikit-learn = "*"
scipy = "*"
spacy = "<4"
torch = ">=1.7,<1.14"
torchvision = ">=0.8.2"
[package.extras]
dev = ["accelerate (>=0.10.0)", "albumentations", "captum (>=0.3)", "catalyst", "comet-ml", "flask", "flask-compress", "ipywidgets", "kornia", "neptune-client", "ninja", "opencv-python", "pyarrow", "pydicom", "pytorch-ignite", "pytorch-lightning", "scikit-image", "sentencepiece", "tensorboard", "timm (>=0.6.2.dev)", "transformers", "wandb"]
[[package]]
name = "fastcore"
version = "1.5.27"
description = "Python supercharged for fastai development"
category = "main"
optional = false
python-versions = ">=3.7"
[package.dependencies]
packaging = "*"
pip = "*"
[package.extras]
dev = ["jupyterlab", "matplotlib", "nbdev (>=0.2.39)", "numpy", "pandas", "pillow", "torch"]
[[package]]
name = "fastdownload"
version = "0.0.7"
description = "A general purpose data downloading library."
category = "main"
optional = false
python-versions = ">=3.6"
[package.dependencies]
fastcore = ">=1.3.26"
fastprogress = "*"
[[package]]
name = "fastjsonschema"
version = "2.16.2"
description = "Fastest Python implementation of JSON schema"
category = "dev"
optional = false
python-versions = "*"
[package.extras]
devel = ["colorama", "json-spec", "jsonschema", "pylint", "pytest", "pytest-benchmark", "pytest-cache", "validictory"]
[[package]]
name = "fastprogress"
version = "1.0.3"
description = "A nested progress with plotting options for fastai"
category = "main"
optional = false
python-versions = ">=3.6"
[[package]]
name = "flake8"
version = "4.0.1"
description = "the modular source code checker: pep8 pyflakes and co"
category = "dev"
optional = false
python-versions = ">=3.6"
[package.dependencies]
mccabe = ">=0.6.0,<0.7.0"
pycodestyle = ">=2.8.0,<2.9.0"
pyflakes = ">=2.4.0,<2.5.0"
[[package]]
name = "flaky"
version = "3.7.0"
description = "Plugin for nose or pytest that automatically reruns flaky tests."
category = "dev"
optional = false
python-versions = ">=2.7, !=3.0.*, !=3.1.*, !=3.2.*, !=3.3.*"
[[package]]
name = "flatbuffers"
version = "22.10.26"
description = "The FlatBuffers serialization format for Python"
category = "dev"
optional = false
python-versions = "*"
[[package]]
name = "fonttools"
version = "4.38.0"
description = "Tools to manipulate font files"
category = "main"
optional = false
python-versions = ">=3.7"
[package.extras]
all = ["brotli (>=1.0.1)", "brotlicffi (>=0.8.0)", "fs (>=2.2.0,<3)", "lxml (>=4.0,<5)", "lz4 (>=1.7.4.2)", "matplotlib", "munkres", "scipy", "skia-pathops (>=0.5.0)", "sympy", "uharfbuzz (>=0.23.0)", "unicodedata2 (>=14.0.0)", "xattr", "zopfli (>=0.1.4)"]
graphite = ["lz4 (>=1.7.4.2)"]
interpolatable = ["munkres", "scipy"]
lxml = ["lxml (>=4.0,<5)"]
pathops = ["skia-pathops (>=0.5.0)"]
plot = ["matplotlib"]
repacker = ["uharfbuzz (>=0.23.0)"]
symfont = ["sympy"]
type1 = ["xattr"]
ufo = ["fs (>=2.2.0,<3)"]
unicode = ["unicodedata2 (>=14.0.0)"]
woff = ["brotli (>=1.0.1)", "brotlicffi (>=0.8.0)", "zopfli (>=0.1.4)"]
[[package]]
name = "forestci"
version = "0.6"
description = "forestci: confidence intervals for scikit-learn forest algorithms"
category = "main"
optional = true
python-versions = "*"
[package.dependencies]
numpy = ">=1.20"
scikit-learn = ">=0.23.1"
[[package]]
name = "fsspec"
version = "2022.11.0"
description = "File-system specification"
category = "main"
optional = false
python-versions = ">=3.7"
[package.extras]
abfs = ["adlfs"]
adl = ["adlfs"]
arrow = ["pyarrow (>=1)"]
dask = ["dask", "distributed"]
dropbox = ["dropbox", "dropboxdrivefs", "requests"]
entrypoints = ["importlib-metadata"]
fuse = ["fusepy"]
gcs = ["gcsfs"]
git = ["pygit2"]
github = ["requests"]
gs = ["gcsfs"]
gui = ["panel"]
hdfs = ["pyarrow (>=1)"]
http = ["aiohttp (!=4.0.0a0,!=4.0.0a1)", "requests"]
libarchive = ["libarchive-c"]
oci = ["ocifs"]
s3 = ["s3fs"]
sftp = ["paramiko"]
smb = ["smbprotocol"]
ssh = ["paramiko"]
tqdm = ["tqdm"]
[[package]]
name = "future"
version = "0.18.2"
description = "Clean single-source support for Python 3 and 2"
category = "main"
optional = true
python-versions = ">=2.6, !=3.0.*, !=3.1.*, !=3.2.*"
[[package]]
name = "gast"
version = "0.4.0"
description = "Python AST that abstracts the underlying Python version"
category = "dev"
optional = false
python-versions = ">=2.7, !=3.0.*, !=3.1.*, !=3.2.*, !=3.3.*"
[[package]]
name = "google-auth"
version = "2.14.1"
description = "Google Authentication Library"
category = "dev"
optional = false
python-versions = ">=2.7,!=3.0.*,!=3.1.*,!=3.2.*,!=3.3.*,!=3.4.*,!=3.5.*"
[package.dependencies]
cachetools = ">=2.0.0,<6.0"
pyasn1-modules = ">=0.2.1"
rsa = {version = ">=3.1.4,<5", markers = "python_version >= \"3.6\""}
six = ">=1.9.0"
[package.extras]
aiohttp = ["aiohttp (>=3.6.2,<4.0.0dev)", "requests (>=2.20.0,<3.0.0dev)"]
enterprise-cert = ["cryptography (==36.0.2)", "pyopenssl (==22.0.0)"]
pyopenssl = ["cryptography (>=38.0.3)", "pyopenssl (>=20.0.0)"]
reauth = ["pyu2f (>=0.1.5)"]
[[package]]
name = "google-auth-oauthlib"
version = "0.4.6"
description = "Google Authentication Library"
category = "dev"
optional = false
python-versions = ">=3.6"
[package.dependencies]
google-auth = ">=1.0.0"
requests-oauthlib = ">=0.7.0"
[package.extras]
tool = ["click (>=6.0.0)"]
[[package]]
name = "google-pasta"
version = "0.2.0"
description = "pasta is an AST-based Python refactoring library"
category = "dev"
optional = false
python-versions = "*"
[package.dependencies]
six = "*"
[[package]]
name = "graphviz"
version = "0.20.1"
description = "Simple Python interface for Graphviz"
category = "main"
optional = false
python-versions = ">=3.7"
[package.extras]
dev = ["flake8", "pep8-naming", "tox (>=3)", "twine", "wheel"]
docs = ["sphinx (>=5)", "sphinx-autodoc-typehints", "sphinx-rtd-theme"]
test = ["coverage", "mock (>=4)", "pytest (>=7)", "pytest-cov", "pytest-mock (>=3)"]
[[package]]
name = "grpcio"
version = "1.50.0"
description = "HTTP/2-based RPC framework"
category = "dev"
optional = false
python-versions = ">=3.7"
[package.dependencies]
six = ">=1.5.2"
[package.extras]
protobuf = ["grpcio-tools (>=1.50.0)"]
[[package]]
name = "h5py"
version = "3.7.0"
description = "Read and write HDF5 files from Python"
category = "dev"
optional = false
python-versions = ">=3.7"
[package.dependencies]
numpy = ">=1.14.5"
[[package]]
name = "heapdict"
version = "1.0.1"
description = "a heap with decrease-key and increase-key operations"
category = "main"
optional = false
python-versions = "*"
[[package]]
name = "idna"
version = "3.4"
description = "Internationalized Domain Names in Applications (IDNA)"
category = "main"
optional = false
python-versions = ">=3.5"
[[package]]
name = "imagesize"
version = "1.4.1"
description = "Getting image size from png/jpeg/jpeg2000/gif file"
category = "main"
optional = false
python-versions = ">=2.7, !=3.0.*, !=3.1.*, !=3.2.*, !=3.3.*"
[[package]]
name = "importlib-metadata"
version = "5.0.0"
description = "Read metadata from Python packages"
category = "main"
optional = false
python-versions = ">=3.7"
[package.dependencies]
zipp = ">=0.5"
[package.extras]
docs = ["furo", "jaraco.packaging (>=9)", "jaraco.tidelift (>=1.4)", "rst.linker (>=1.9)", "sphinx (>=3.5)"]
perf = ["ipython"]
testing = ["flake8 (<5)", "flufl.flake8", "importlib-resources (>=1.3)", "packaging", "pyfakefs", "pytest (>=6)", "pytest-black (>=0.3.7)", "pytest-checkdocs (>=2.4)", "pytest-cov", "pytest-enabler (>=1.3)", "pytest-flake8", "pytest-mypy (>=0.9.1)", "pytest-perf (>=0.9.2)"]
[[package]]
name = "importlib-resources"
version = "5.10.0"
description = "Read resources from Python packages"
category = "dev"
optional = false
python-versions = ">=3.7"
[package.dependencies]
zipp = {version = ">=3.1.0", markers = "python_version < \"3.10\""}
[package.extras]
docs = ["furo", "jaraco.packaging (>=9)", "jaraco.tidelift (>=1.4)", "rst.linker (>=1.9)", "sphinx (>=3.5)"]
testing = ["flake8 (<5)", "pytest (>=6)", "pytest-black (>=0.3.7)", "pytest-checkdocs (>=2.4)", "pytest-cov", "pytest-enabler (>=1.3)", "pytest-flake8", "pytest-mypy (>=0.9.1)"]
[[package]]
name = "iniconfig"
version = "1.1.1"
description = "iniconfig: brain-dead simple config-ini parsing"
category = "dev"
optional = false
python-versions = "*"
[[package]]
name = "ipykernel"
version = "6.17.1"
description = "IPython Kernel for Jupyter"
category = "dev"
optional = false
python-versions = ">=3.8"
[package.dependencies]
appnope = {version = "*", markers = "platform_system == \"Darwin\""}
debugpy = ">=1.0"
ipython = ">=7.23.1"
jupyter-client = ">=6.1.12"
matplotlib-inline = ">=0.1"
nest-asyncio = "*"
packaging = "*"
psutil = "*"
pyzmq = ">=17"
tornado = ">=6.1"
traitlets = ">=5.1.0"
[package.extras]
docs = ["myst-parser", "pydata-sphinx-theme", "sphinx", "sphinxcontrib-github-alt"]
test = ["flaky", "ipyparallel", "pre-commit", "pytest (>=7.0)", "pytest-cov", "pytest-timeout"]
[[package]]
name = "ipython"
version = "8.6.0"
description = "IPython: Productive Interactive Computing"
category = "dev"
optional = false
python-versions = ">=3.8"
[package.dependencies]
appnope = {version = "*", markers = "sys_platform == \"darwin\""}
backcall = "*"
colorama = {version = "*", markers = "sys_platform == \"win32\""}
decorator = "*"
jedi = ">=0.16"
matplotlib-inline = "*"
pexpect = {version = ">4.3", markers = "sys_platform != \"win32\""}
pickleshare = "*"
prompt-toolkit = ">3.0.1,<3.1.0"
pygments = ">=2.4.0"
stack-data = "*"
traitlets = ">=5"
[package.extras]
all = ["black", "curio", "docrepr", "ipykernel", "ipyparallel", "ipywidgets", "matplotlib", "matplotlib (!=3.2.0)", "nbconvert", "nbformat", "notebook", "numpy (>=1.20)", "pandas", "pytest (<7)", "pytest (<7.1)", "pytest-asyncio", "qtconsole", "setuptools (>=18.5)", "sphinx (>=1.3)", "sphinx-rtd-theme", "stack-data", "testpath", "trio", "typing-extensions"]
black = ["black"]
doc = ["docrepr", "ipykernel", "matplotlib", "pytest (<7)", "pytest (<7.1)", "pytest-asyncio", "setuptools (>=18.5)", "sphinx (>=1.3)", "sphinx-rtd-theme", "stack-data", "testpath", "typing-extensions"]
kernel = ["ipykernel"]
nbconvert = ["nbconvert"]
nbformat = ["nbformat"]
notebook = ["ipywidgets", "notebook"]
parallel = ["ipyparallel"]
qtconsole = ["qtconsole"]
test = ["pytest (<7.1)", "pytest-asyncio", "testpath"]
test-extra = ["curio", "matplotlib (!=3.2.0)", "nbformat", "numpy (>=1.20)", "pandas", "pytest (<7.1)", "pytest-asyncio", "testpath", "trio"]
[[package]]
name = "ipython-genutils"
version = "0.2.0"
description = "Vestigial utilities from IPython"
category = "dev"
optional = false
python-versions = "*"
[[package]]
name = "ipywidgets"
version = "8.0.2"
description = "Jupyter interactive widgets"
category = "dev"
optional = false
python-versions = ">=3.7"
[package.dependencies]
ipykernel = ">=4.5.1"
ipython = ">=6.1.0"
jupyterlab-widgets = ">=3.0,<4.0"
traitlets = ">=4.3.1"
widgetsnbextension = ">=4.0,<5.0"
[package.extras]
test = ["jsonschema", "pytest (>=3.6.0)", "pytest-cov", "pytz"]
[[package]]
name = "isort"
version = "5.10.1"
description = "A Python utility / library to sort Python imports."
category = "dev"
optional = false
python-versions = ">=3.6.1,<4.0"
[package.extras]
colors = ["colorama (>=0.4.3,<0.5.0)"]
pipfile-deprecated-finder = ["pipreqs", "requirementslib"]
plugins = ["setuptools"]
requirements-deprecated-finder = ["pip-api", "pipreqs"]
[[package]]
name = "jedi"
version = "0.18.2"
description = "An autocompletion tool for Python that can be used for text editors."
category = "dev"
optional = false
python-versions = ">=3.6"
[package.dependencies]
parso = ">=0.8.0,<0.9.0"
[package.extras]
docs = ["Jinja2 (==2.11.3)", "MarkupSafe (==1.1.1)", "Pygments (==2.8.1)", "alabaster (==0.7.12)", "babel (==2.9.1)", "chardet (==4.0.0)", "commonmark (==0.8.1)", "docutils (==0.17.1)", "future (==0.18.2)", "idna (==2.10)", "imagesize (==1.2.0)", "mock (==1.0.1)", "packaging (==20.9)", "pyparsing (==2.4.7)", "pytz (==2021.1)", "readthedocs-sphinx-ext (==2.1.4)", "recommonmark (==0.5.0)", "requests (==2.25.1)", "six (==1.15.0)", "snowballstemmer (==2.1.0)", "sphinx (==1.8.5)", "sphinx-rtd-theme (==0.4.3)", "sphinxcontrib-serializinghtml (==1.1.4)", "sphinxcontrib-websupport (==1.2.4)", "urllib3 (==1.26.4)"]
qa = ["flake8 (==3.8.3)", "mypy (==0.782)"]
testing = ["Django (<3.1)", "attrs", "colorama", "docopt", "pytest (<7.0.0)"]
[[package]]
name = "jinja2"
version = "3.1.2"
description = "A very fast and expressive template engine."
category = "main"
optional = false
python-versions = ">=3.7"
[package.dependencies]
MarkupSafe = ">=2.0"
[package.extras]
i18n = ["Babel (>=2.7)"]
[[package]]
name = "jmespath"
version = "1.0.1"
description = "JSON Matching Expressions"
category = "main"
optional = false
python-versions = ">=3.7"
[[package]]
name = "joblib"
version = "1.2.0"
description = "Lightweight pipelining with Python functions"
category = "main"
optional = false
python-versions = ">=3.7"
[[package]]
name = "jsonschema"
version = "4.17.1"
description = "An implementation of JSON Schema validation for Python"
category = "dev"
optional = false
python-versions = ">=3.7"
[package.dependencies]
attrs = ">=17.4.0"
importlib-resources = {version = ">=1.4.0", markers = "python_version < \"3.9\""}
pkgutil-resolve-name = {version = ">=1.3.10", markers = "python_version < \"3.9\""}
pyrsistent = ">=0.14.0,<0.17.0 || >0.17.0,<0.17.1 || >0.17.1,<0.17.2 || >0.17.2"
[package.extras]
format = ["fqdn", "idna", "isoduration", "jsonpointer (>1.13)", "rfc3339-validator", "rfc3987", "uri-template", "webcolors (>=1.11)"]
format-nongpl = ["fqdn", "idna", "isoduration", "jsonpointer (>1.13)", "rfc3339-validator", "rfc3986-validator (>0.1.0)", "uri-template", "webcolors (>=1.11)"]
[[package]]
name = "jupyter"
version = "1.0.0"
description = "Jupyter metapackage. Install all the Jupyter components in one go."
category = "dev"
optional = false
python-versions = "*"
[package.dependencies]
ipykernel = "*"
ipywidgets = "*"
jupyter-console = "*"
nbconvert = "*"
notebook = "*"
qtconsole = "*"
[[package]]
name = "jupyter-client"
version = "7.4.7"
description = "Jupyter protocol implementation and client libraries"
category = "dev"
optional = false
python-versions = ">=3.7"
[package.dependencies]
entrypoints = "*"
jupyter-core = ">=4.9.2"
nest-asyncio = ">=1.5.4"
python-dateutil = ">=2.8.2"
pyzmq = ">=23.0"
tornado = ">=6.2"
traitlets = "*"
[package.extras]
doc = ["ipykernel", "myst-parser", "sphinx (>=1.3.6)", "sphinx-rtd-theme", "sphinxcontrib-github-alt"]
test = ["codecov", "coverage", "ipykernel (>=6.12)", "ipython", "mypy", "pre-commit", "pytest", "pytest-asyncio (>=0.18)", "pytest-cov", "pytest-timeout"]
[[package]]
name = "jupyter-console"
version = "6.4.4"
description = "Jupyter terminal console"
category = "dev"
optional = false
python-versions = ">=3.7"
[package.dependencies]
ipykernel = "*"
ipython = "*"
jupyter-client = ">=7.0.0"
prompt-toolkit = ">=2.0.0,<3.0.0 || >3.0.0,<3.0.1 || >3.0.1,<3.1.0"
pygments = "*"
[package.extras]
test = ["pexpect"]
[[package]]
name = "jupyter-core"
version = "5.0.0"
description = "Jupyter core package. A base package on which Jupyter projects rely."
category = "dev"
optional = false
python-versions = ">=3.8"
[package.dependencies]
platformdirs = "*"
pywin32 = {version = ">=1.0", markers = "sys_platform == \"win32\" and platform_python_implementation != \"PyPy\""}
traitlets = "*"
[package.extras]
test = ["ipykernel", "pre-commit", "pytest", "pytest-cov", "pytest-timeout"]
[[package]]
name = "jupyter-server"
version = "1.23.3"
description = "The backend—i.e. core services, APIs, and REST endpoints—to Jupyter web applications."
category = "dev"
optional = false
python-versions = ">=3.7"
[package.dependencies]
anyio = ">=3.1.0,<4"
argon2-cffi = "*"
jinja2 = "*"
jupyter-client = ">=6.1.12"
jupyter-core = ">=4.7.0"
nbconvert = ">=6.4.4"
nbformat = ">=5.2.0"
packaging = "*"
prometheus-client = "*"
pywinpty = {version = "*", markers = "os_name == \"nt\""}
pyzmq = ">=17"
Send2Trash = "*"
terminado = ">=0.8.3"
tornado = ">=6.1.0"
traitlets = ">=5.1"
websocket-client = "*"
[package.extras]
test = ["coverage", "ipykernel", "pre-commit", "pytest (>=7.0)", "pytest-console-scripts", "pytest-cov", "pytest-mock", "pytest-timeout", "pytest-tornasync", "requests"]
[[package]]
name = "jupyterlab-pygments"
version = "0.2.2"
description = "Pygments theme using JupyterLab CSS variables"
category = "dev"
optional = false
python-versions = ">=3.7"
[[package]]
name = "jupyterlab-widgets"
version = "3.0.3"
description = "Jupyter interactive widgets for JupyterLab"
category = "dev"
optional = false
python-versions = ">=3.7"
[[package]]
name = "keras"
version = "2.11.0"
description = "Deep learning for humans."
category = "dev"
optional = false
python-versions = ">=3.7"
[[package]]
name = "kiwisolver"
version = "1.4.4"
description = "A fast implementation of the Cassowary constraint solver"
category = "main"
optional = false
python-versions = ">=3.7"
[[package]]
name = "langcodes"
version = "3.3.0"
description = "Tools for labeling human languages with IETF language tags"
category = "main"
optional = false
python-versions = ">=3.6"
[package.extras]
data = ["language-data (>=1.1,<2.0)"]
[[package]]
name = "libclang"
version = "14.0.6"
description = "Clang Python Bindings, mirrored from the official LLVM repo: https://github.com/llvm/llvm-project/tree/main/clang/bindings/python, to make the installation process easier."
category = "dev"
optional = false
python-versions = "*"
[[package]]
name = "lightgbm"
version = "3.3.3"
description = "LightGBM Python Package"
category = "main"
optional = false
python-versions = "*"
[package.dependencies]
numpy = "*"
scikit-learn = "!=0.22.0"
scipy = "*"
wheel = "*"
[package.extras]
dask = ["dask[array] (>=2.0.0)", "dask[dataframe] (>=2.0.0)", "dask[distributed] (>=2.0.0)", "pandas"]
[[package]]
name = "llvmlite"
version = "0.36.0"
description = "lightweight wrapper around basic LLVM functionality"
category = "main"
optional = false
python-versions = ">=3.6,<3.10"
[[package]]
name = "locket"
version = "1.0.0"
description = "File-based locks for Python on Linux and Windows"
category = "main"
optional = false
python-versions = ">=2.7, !=3.0.*, !=3.1.*, !=3.2.*, !=3.3.*"
[[package]]
name = "markdown"
version = "3.4.1"
description = "Python implementation of Markdown."
category = "dev"
optional = false
python-versions = ">=3.7"
[package.dependencies]
importlib-metadata = {version = ">=4.4", markers = "python_version < \"3.10\""}
[package.extras]
testing = ["coverage", "pyyaml"]
[[package]]
name = "markupsafe"
version = "2.1.1"
description = "Safely add untrusted strings to HTML/XML markup."
category = "main"
optional = false
python-versions = ">=3.7"
[[package]]
name = "matplotlib"
version = "3.6.2"
description = "Python plotting package"
category = "main"
optional = false
python-versions = ">=3.8"
[package.dependencies]
contourpy = ">=1.0.1"
cycler = ">=0.10"
fonttools = ">=4.22.0"
kiwisolver = ">=1.0.1"
numpy = ">=1.19"
packaging = ">=20.0"
pillow = ">=6.2.0"
pyparsing = ">=2.2.1"
python-dateutil = ">=2.7"
setuptools_scm = ">=7"
[[package]]
name = "matplotlib-inline"
version = "0.1.6"
description = "Inline Matplotlib backend for Jupyter"
category = "dev"
optional = false
python-versions = ">=3.5"
[package.dependencies]
traitlets = "*"
[[package]]
name = "mccabe"
version = "0.6.1"
description = "McCabe checker, plugin for flake8"
category = "dev"
optional = false
python-versions = "*"
[[package]]
name = "mistune"
version = "2.0.4"
description = "A sane Markdown parser with useful plugins and renderers"
category = "dev"
optional = false
python-versions = "*"
[[package]]
name = "mpmath"
version = "1.2.1"
description = "Python library for arbitrary-precision floating-point arithmetic"
category = "main"
optional = false
python-versions = "*"
[package.extras]
develop = ["codecov", "pycodestyle", "pytest (>=4.6)", "pytest-cov", "wheel"]
tests = ["pytest (>=4.6)"]
[[package]]
name = "msgpack"
version = "1.0.4"
description = "MessagePack serializer"
category = "main"
optional = false
python-versions = "*"
[[package]]
name = "multiprocess"
version = "0.70.14"
description = "better multiprocessing and multithreading in python"
category = "main"
optional = true
python-versions = ">=3.7"
[package.dependencies]
dill = ">=0.3.6"
[[package]]
name = "murmurhash"
version = "1.0.9"
description = "Cython bindings for MurmurHash"
category = "main"
optional = false
python-versions = ">=3.6"
[[package]]
name = "mypy"
version = "0.971"
description = "Optional static typing for Python"
category = "dev"
optional = false
python-versions = ">=3.6"
[package.dependencies]
mypy-extensions = ">=0.4.3"
tomli = {version = ">=1.1.0", markers = "python_version < \"3.11\""}
typing-extensions = ">=3.10"
[package.extras]
dmypy = ["psutil (>=4.0)"]
python2 = ["typed-ast (>=1.4.0,<2)"]
reports = ["lxml"]
[[package]]
name = "mypy-extensions"
version = "0.4.3"
description = "Experimental type system extensions for programs checked with the mypy typechecker."
category = "dev"
optional = false
python-versions = "*"
[[package]]
name = "nbclassic"
version = "0.4.8"
description = "A web-based notebook environment for interactive computing"
category = "dev"
optional = false
python-versions = ">=3.7"
[package.dependencies]
argon2-cffi = "*"
ipykernel = "*"
ipython-genutils = "*"
jinja2 = "*"
jupyter-client = ">=6.1.1"
jupyter-core = ">=4.6.1"
jupyter-server = ">=1.8"
nbconvert = ">=5"
nbformat = "*"
nest-asyncio = ">=1.5"
notebook-shim = ">=0.1.0"
prometheus-client = "*"
pyzmq = ">=17"
Send2Trash = ">=1.8.0"
terminado = ">=0.8.3"
tornado = ">=6.1"
traitlets = ">=4.2.1"
[package.extras]
docs = ["myst-parser", "nbsphinx", "sphinx", "sphinx-rtd-theme", "sphinxcontrib-github-alt"]
json-logging = ["json-logging"]
test = ["coverage", "nbval", "pytest", "pytest-cov", "pytest-playwright", "pytest-tornasync", "requests", "requests-unixsocket", "testpath"]
[[package]]
name = "nbclient"
version = "0.7.0"
description = "A client library for executing notebooks. Formerly nbconvert's ExecutePreprocessor."
category = "dev"
optional = false
python-versions = ">=3.7.0"
[package.dependencies]
jupyter-client = ">=6.1.5"
nbformat = ">=5.0"
nest-asyncio = "*"
traitlets = ">=5.2.2"
[package.extras]
sphinx = ["Sphinx (>=1.7)", "autodoc-traits", "mock", "moto", "myst-parser", "sphinx-book-theme"]
test = ["black", "check-manifest", "flake8", "ipykernel", "ipython", "ipywidgets", "mypy", "nbconvert", "pip (>=18.1)", "pre-commit", "pytest (>=4.1)", "pytest-asyncio", "pytest-cov (>=2.6.1)", "setuptools (>=60.0)", "testpath", "twine (>=1.11.0)", "xmltodict"]
[[package]]
name = "nbconvert"
version = "7.0.0rc3"
description = "Converting Jupyter Notebooks"
category = "dev"
optional = false
python-versions = ">=3.7"
[package.dependencies]
beautifulsoup4 = "*"
bleach = "*"
defusedxml = "*"
importlib-metadata = {version = ">=3.6", markers = "python_version < \"3.10\""}
jinja2 = ">=3.0"
jupyter-core = ">=4.7"
jupyterlab-pygments = "*"
markupsafe = ">=2.0"
mistune = ">=2.0.2,<3"
nbclient = ">=0.5.0"
nbformat = ">=5.1"
packaging = "*"
pandocfilters = ">=1.4.1"
pygments = ">=2.4.1"
tinycss2 = "*"
traitlets = ">=5.0"
[package.extras]
all = ["ipykernel", "ipython", "ipywidgets (>=7)", "nbsphinx (>=0.2.12)", "pre-commit", "pyppeteer (>=1,<1.1)", "pytest", "pytest-cov", "pytest-dependency", "sphinx (>=1.5.1)", "sphinx-rtd-theme", "tornado (>=6.1)"]
docs = ["ipython", "nbsphinx (>=0.2.12)", "sphinx (>=1.5.1)", "sphinx-rtd-theme"]
serve = ["tornado (>=6.1)"]
test = ["ipykernel", "ipywidgets (>=7)", "pre-commit", "pyppeteer (>=1,<1.1)", "pytest", "pytest-cov", "pytest-dependency"]
webpdf = ["pyppeteer (>=1,<1.1)"]
[[package]]
name = "nbformat"
version = "5.7.0"
description = "The Jupyter Notebook format"
category = "dev"
optional = false
python-versions = ">=3.7"
[package.dependencies]
fastjsonschema = "*"
jsonschema = ">=2.6"
jupyter-core = "*"
traitlets = ">=5.1"
[package.extras]
test = ["check-manifest", "pep440", "pre-commit", "pytest", "testpath"]
[[package]]
name = "nbsphinx"
version = "0.8.10"
description = "Jupyter Notebook Tools for Sphinx"
category = "dev"
optional = false
python-versions = ">=3.6"
[package.dependencies]
docutils = "*"
jinja2 = "*"
nbconvert = "!=5.4"
nbformat = "*"
sphinx = ">=1.8"
traitlets = ">=5"
[[package]]
name = "nest-asyncio"
version = "1.5.6"
description = "Patch asyncio to allow nested event loops"
category = "dev"
optional = false
python-versions = ">=3.5"
[[package]]
name = "networkx"
version = "2.8.8"
description = "Python package for creating and manipulating graphs and networks"
category = "main"
optional = false
python-versions = ">=3.8"
[package.extras]
default = ["matplotlib (>=3.4)", "numpy (>=1.19)", "pandas (>=1.3)", "scipy (>=1.8)"]
developer = ["mypy (>=0.982)", "pre-commit (>=2.20)"]
doc = ["nb2plots (>=0.6)", "numpydoc (>=1.5)", "pillow (>=9.2)", "pydata-sphinx-theme (>=0.11)", "sphinx (>=5.2)", "sphinx-gallery (>=0.11)", "texext (>=0.6.6)"]
extra = ["lxml (>=4.6)", "pydot (>=1.4.2)", "pygraphviz (>=1.9)", "sympy (>=1.10)"]
test = ["codecov (>=2.1)", "pytest (>=7.2)", "pytest-cov (>=4.0)"]
[[package]]
name = "notebook"
version = "6.5.2"
description = "A web-based notebook environment for interactive computing"
category = "dev"
optional = false
python-versions = ">=3.7"
[package.dependencies]
argon2-cffi = "*"
ipykernel = "*"
ipython-genutils = "*"
jinja2 = "*"
jupyter-client = ">=5.3.4"
jupyter-core = ">=4.6.1"
nbclassic = ">=0.4.7"
nbconvert = ">=5"
nbformat = "*"
nest-asyncio = ">=1.5"
prometheus-client = "*"
pyzmq = ">=17"
Send2Trash = ">=1.8.0"
terminado = ">=0.8.3"
tornado = ">=6.1"
traitlets = ">=4.2.1"
[package.extras]
docs = ["myst-parser", "nbsphinx", "sphinx", "sphinx-rtd-theme", "sphinxcontrib-github-alt"]
json-logging = ["json-logging"]
test = ["coverage", "nbval", "pytest", "pytest-cov", "requests", "requests-unixsocket", "selenium (==4.1.5)", "testpath"]
[[package]]
name = "notebook-shim"
version = "0.2.2"
description = "A shim layer for notebook traits and config"
category = "dev"
optional = false
python-versions = ">=3.7"
[package.dependencies]
jupyter-server = ">=1.8,<3"
[package.extras]
test = ["pytest", "pytest-console-scripts", "pytest-tornasync"]
[[package]]
name = "numba"
version = "0.53.1"
description = "compiling Python code using LLVM"
category = "main"
optional = false
python-versions = ">=3.6,<3.10"
[package.dependencies]
llvmlite = ">=0.36.0rc1,<0.37"
numpy = ">=1.15"
setuptools = "*"
[[package]]
name = "numpy"
version = "1.23.5"
description = "NumPy is the fundamental package for array computing with Python."
category = "main"
optional = false
python-versions = ">=3.8"
[[package]]
name = "oauthlib"
version = "3.2.2"
description = "A generic, spec-compliant, thorough implementation of the OAuth request-signing logic"
category = "dev"
optional = false
python-versions = ">=3.6"
[package.extras]
rsa = ["cryptography (>=3.0.0)"]
signals = ["blinker (>=1.4.0)"]
signedtoken = ["cryptography (>=3.0.0)", "pyjwt (>=2.0.0,<3)"]
[[package]]
name = "opt-einsum"
version = "3.3.0"
description = "Optimizing numpys einsum function"
category = "main"
optional = false
python-versions = ">=3.5"
[package.dependencies]
numpy = ">=1.7"
[package.extras]
docs = ["numpydoc", "sphinx (==1.2.3)", "sphinx-rtd-theme", "sphinxcontrib-napoleon"]
tests = ["pytest", "pytest-cov", "pytest-pep8"]
[[package]]
name = "packaging"
version = "21.3"
description = "Core utilities for Python packages"
category = "main"
optional = false
python-versions = ">=3.6"
[package.dependencies]
pyparsing = ">=2.0.2,<3.0.5 || >3.0.5"
[[package]]
name = "pandas"
version = "1.5.2"
description = "Powerful data structures for data analysis, time series, and statistics"
category = "main"
optional = false
python-versions = ">=3.8"
[package.dependencies]
numpy = {version = ">=1.20.3", markers = "python_version < \"3.10\""}
python-dateutil = ">=2.8.1"
pytz = ">=2020.1"
[package.extras]
test = ["hypothesis (>=5.5.3)", "pytest (>=6.0)", "pytest-xdist (>=1.31)"]
[[package]]
name = "pandocfilters"
version = "1.5.0"
description = "Utilities for writing pandoc filters in python"
category = "dev"
optional = false
python-versions = ">=2.7, !=3.0.*, !=3.1.*, !=3.2.*, !=3.3.*"
[[package]]
name = "parso"
version = "0.8.3"
description = "A Python Parser"
category = "dev"
optional = false
python-versions = ">=3.6"
[package.extras]
qa = ["flake8 (==3.8.3)", "mypy (==0.782)"]
testing = ["docopt", "pytest (<6.0.0)"]
[[package]]
name = "partd"
version = "1.3.0"
description = "Appendable key-value storage"
category = "main"
optional = false
python-versions = ">=3.7"
[package.dependencies]
locket = "*"
toolz = "*"
[package.extras]
complete = ["blosc", "numpy (>=1.9.0)", "pandas (>=0.19.0)", "pyzmq"]
[[package]]
name = "pastel"
version = "0.2.1"
description = "Bring colors to your terminal."
category = "dev"
optional = false
python-versions = ">=2.7, !=3.0.*, !=3.1.*, !=3.2.*, !=3.3.*"
[[package]]
name = "pathos"
version = "0.2.9"
description = "parallel graph management and execution in heterogeneous computing"
category = "main"
optional = true
python-versions = ">=2.7, !=3.0.*, !=3.1.*, !=3.2.*, !=3.3.*, !=3.4.*, !=3.5.*, !=3.6.*"
[package.dependencies]
dill = ">=0.3.5.1"
multiprocess = ">=0.70.13"
pox = ">=0.3.1"
ppft = ">=1.7.6.5"
[[package]]
name = "pathspec"
version = "0.10.2"
description = "Utility library for gitignore style pattern matching of file paths."
category = "dev"
optional = false
python-versions = ">=3.7"
[[package]]
name = "pathy"
version = "0.9.0"
description = "pathlib.Path subclasses for local and cloud bucket storage"
category = "main"
optional = false
python-versions = ">= 3.6"
[package.dependencies]
smart-open = ">=5.2.1,<6.0.0"
typer = ">=0.3.0,<1.0.0"
[package.extras]
all = ["azure-storage-blob", "boto3", "google-cloud-storage (>=1.26.0,<2.0.0)", "mock", "pytest", "pytest-coverage", "typer-cli"]
azure = ["azure-storage-blob"]
gcs = ["google-cloud-storage (>=1.26.0,<2.0.0)"]
s3 = ["boto3"]
test = ["mock", "pytest", "pytest-coverage", "typer-cli"]
[[package]]
name = "patsy"
version = "0.5.3"
description = "A Python package for describing statistical models and for building design matrices."
category = "main"
optional = false
python-versions = "*"
[package.dependencies]
numpy = ">=1.4"
six = "*"
[package.extras]
test = ["pytest", "pytest-cov", "scipy"]
[[package]]
name = "pexpect"
version = "4.8.0"
description = "Pexpect allows easy control of interactive console applications."
category = "dev"
optional = false
python-versions = "*"
[package.dependencies]
ptyprocess = ">=0.5"
[[package]]
name = "pickleshare"
version = "0.7.5"
description = "Tiny 'shelve'-like database with concurrency support"
category = "dev"
optional = false
python-versions = "*"
[[package]]
name = "pillow"
version = "9.3.0"
description = "Python Imaging Library (Fork)"
category = "main"
optional = false
python-versions = ">=3.7"
[package.extras]
docs = ["furo", "olefile", "sphinx (>=2.4)", "sphinx-copybutton", "sphinx-issues (>=3.0.1)", "sphinx-removed-in", "sphinxext-opengraph"]
tests = ["check-manifest", "coverage", "defusedxml", "markdown2", "olefile", "packaging", "pyroma", "pytest", "pytest-cov", "pytest-timeout"]
[[package]]
name = "pip"
version = "22.3.1"
description = "The PyPA recommended tool for installing Python packages."
category = "main"
optional = false
python-versions = ">=3.7"
[[package]]
name = "pkgutil-resolve-name"
version = "1.3.10"
description = "Resolve a name to an object."
category = "dev"
optional = false
python-versions = ">=3.6"
[[package]]
name = "platformdirs"
version = "2.5.4"
description = "A small Python package for determining appropriate platform-specific dirs, e.g. a \"user data dir\"."
category = "dev"
optional = false
python-versions = ">=3.7"
[package.extras]
docs = ["furo (>=2022.9.29)", "proselint (>=0.13)", "sphinx (>=5.3)", "sphinx-autodoc-typehints (>=1.19.4)"]
test = ["appdirs (==1.4.4)", "pytest (>=7.2)", "pytest-cov (>=4)", "pytest-mock (>=3.10)"]
[[package]]
name = "plotly"
version = "5.11.0"
description = "An open-source, interactive data visualization library for Python"
category = "main"
optional = false
python-versions = ">=3.6"
[package.dependencies]
tenacity = ">=6.2.0"
[[package]]
name = "pluggy"
version = "1.0.0"
description = "plugin and hook calling mechanisms for python"
category = "dev"
optional = false
python-versions = ">=3.6"
[package.extras]
dev = ["pre-commit", "tox"]
testing = ["pytest", "pytest-benchmark"]
[[package]]
name = "poethepoet"
version = "0.16.4"
description = "A task runner that works well with poetry."
category = "dev"
optional = false
python-versions = ">=3.7"
[package.dependencies]
pastel = ">=0.2.1,<0.3.0"
tomli = ">=1.2.2"
[package.extras]
poetry-plugin = ["poetry (>=1.0,<2.0)"]
[[package]]
name = "pox"
version = "0.3.2"
description = "utilities for filesystem exploration and automated builds"
category = "main"
optional = true
python-versions = ">=3.7"
[[package]]
name = "ppft"
version = "1.7.6.6"
description = "distributed and parallel python"
category = "main"
optional = true
python-versions = ">=3.7"
[package.extras]
dill = ["dill (>=0.3.6)"]
[[package]]
name = "preshed"
version = "3.0.8"
description = "Cython hash table that trusts the keys are pre-hashed"
category = "main"
optional = false
python-versions = ">=3.6"
[package.dependencies]
cymem = ">=2.0.2,<2.1.0"
murmurhash = ">=0.28.0,<1.1.0"
[[package]]
name = "progressbar2"
version = "4.2.0"
description = "A Python Progressbar library to provide visual (yet text based) progress to long running operations."
category = "main"
optional = true
python-versions = ">=3.7.0"
[package.dependencies]
python-utils = ">=3.0.0"
[package.extras]
docs = ["sphinx (>=1.8.5)"]
tests = ["flake8 (>=3.7.7)", "freezegun (>=0.3.11)", "pytest (>=4.6.9)", "pytest-cov (>=2.6.1)", "pytest-mypy", "sphinx (>=1.8.5)"]
[[package]]
name = "prometheus-client"
version = "0.15.0"
description = "Python client for the Prometheus monitoring system."
category = "dev"
optional = false
python-versions = ">=3.6"
[package.extras]
twisted = ["twisted"]
[[package]]
name = "prompt-toolkit"
version = "3.0.33"
description = "Library for building powerful interactive command lines in Python"
category = "dev"
optional = false
python-versions = ">=3.6.2"
[package.dependencies]
wcwidth = "*"
[[package]]
name = "protobuf"
version = "3.19.6"
description = "Protocol Buffers"
category = "dev"
optional = false
python-versions = ">=3.5"
[[package]]
name = "psutil"
version = "5.9.4"
description = "Cross-platform lib for process and system monitoring in Python."
category = "main"
optional = false
python-versions = ">=2.7, !=3.0.*, !=3.1.*, !=3.2.*, !=3.3.*"
[package.extras]
test = ["enum34", "ipaddress", "mock", "pywin32", "wmi"]
[[package]]
name = "ptyprocess"
version = "0.7.0"
description = "Run a subprocess in a pseudo terminal"
category = "dev"
optional = false
python-versions = "*"
[[package]]
name = "pure-eval"
version = "0.2.2"
description = "Safely evaluate AST nodes without side effects"
category = "dev"
optional = false
python-versions = "*"
[package.extras]
tests = ["pytest"]
[[package]]
name = "py"
version = "1.11.0"
description = "library with cross-python path, ini-parsing, io, code, log facilities"
category = "dev"
optional = false
python-versions = ">=2.7, !=3.0.*, !=3.1.*, !=3.2.*, !=3.3.*, !=3.4.*"
[[package]]
name = "pyasn1"
version = "0.4.8"
description = "ASN.1 types and codecs"
category = "dev"
optional = false
python-versions = "*"
[[package]]
name = "pyasn1-modules"
version = "0.2.8"
description = "A collection of ASN.1-based protocols modules."
category = "dev"
optional = false
python-versions = "*"
[package.dependencies]
pyasn1 = ">=0.4.6,<0.5.0"
[[package]]
name = "pycodestyle"
version = "2.8.0"
description = "Python style guide checker"
category = "dev"
optional = false
python-versions = ">=2.7, !=3.0.*, !=3.1.*, !=3.2.*, !=3.3.*, !=3.4.*"
[[package]]
name = "pycparser"
version = "2.21"
description = "C parser in Python"
category = "dev"
optional = false
python-versions = ">=2.7, !=3.0.*, !=3.1.*, !=3.2.*, !=3.3.*"
[[package]]
name = "pydantic"
version = "1.10.2"
description = "Data validation and settings management using python type hints"
category = "main"
optional = false
python-versions = ">=3.7"
[package.dependencies]
typing-extensions = ">=4.1.0"
[package.extras]
dotenv = ["python-dotenv (>=0.10.4)"]
email = ["email-validator (>=1.0.3)"]
[[package]]
name = "pydata-sphinx-theme"
version = "0.9.0"
description = "Bootstrap-based Sphinx theme from the PyData community"
category = "dev"
optional = false
python-versions = ">=3.7"
[package.dependencies]
beautifulsoup4 = "*"
docutils = "!=0.17.0"
packaging = "*"
sphinx = ">=4.0.2"
[package.extras]
coverage = ["codecov", "pydata-sphinx-theme[test]", "pytest-cov"]
dev = ["nox", "pre-commit", "pydata-sphinx-theme[coverage]", "pyyaml"]
doc = ["jupyter_sphinx", "myst-parser", "numpy", "numpydoc", "pandas", "plotly", "pytest", "pytest-regressions", "sphinx-design", "sphinx-sitemap", "sphinxext-rediraffe", "xarray"]
test = ["pydata-sphinx-theme[doc]", "pytest"]
[[package]]
name = "pydot"
version = "1.4.2"
description = "Python interface to Graphviz's Dot"
category = "main"
optional = false
python-versions = ">=2.7, !=3.0.*, !=3.1.*, !=3.2.*, !=3.3.*"
[package.dependencies]
pyparsing = ">=2.1.4"
[[package]]
name = "pydotplus"
version = "2.0.2"
description = "Python interface to Graphviz's Dot language"
category = "main"
optional = true
python-versions = "*"
[package.dependencies]
pyparsing = ">=2.0.1"
[[package]]
name = "pyflakes"
version = "2.4.0"
description = "passive checker of Python programs"
category = "dev"
optional = false
python-versions = ">=2.7, !=3.0.*, !=3.1.*, !=3.2.*, !=3.3.*"
[[package]]
name = "pygam"
version = "0.8.0"
description = "GAM toolkit"
category = "main"
optional = true
python-versions = "*"
[package.dependencies]
future = "*"
numpy = "*"
progressbar2 = "*"
scipy = "*"
[[package]]
name = "pygments"
version = "2.13.0"
description = "Pygments is a syntax highlighting package written in Python."
category = "main"
optional = false
python-versions = ">=3.6"
[package.extras]
plugins = ["importlib-metadata"]
[[package]]
name = "pygraphviz"
version = "1.10"
description = "Python interface to Graphviz"
category = "main"
optional = false
python-versions = ">=3.8"
[[package]]
name = "pyparsing"
version = "3.0.9"
description = "pyparsing module - Classes and methods to define and execute parsing grammars"
category = "main"
optional = false
python-versions = ">=3.6.8"
[package.extras]
diagrams = ["jinja2", "railroad-diagrams"]
[[package]]
name = "pyro-api"
version = "0.1.2"
description = "Generic API for dispatch to Pyro backends."
category = "main"
optional = true
python-versions = "*"
[package.extras]
dev = ["ipython", "sphinx (>=2.0)", "sphinx-rtd-theme"]
test = ["flake8", "pytest (>=5.0)"]
[[package]]
name = "pyro-ppl"
version = "1.8.3"
description = "A Python library for probabilistic modeling and inference"
category = "main"
optional = true
python-versions = ">=3.7"
[package.dependencies]
numpy = ">=1.7"
opt-einsum = ">=2.3.2"
pyro-api = ">=0.1.1"
torch = ">=1.11.0"
tqdm = ">=4.36"
[package.extras]
dev = ["black (>=21.4b0)", "flake8", "graphviz (>=0.8)", "isort (>=5.0)", "jupyter (>=1.0.0)", "lap", "matplotlib (>=1.3)", "mypy (>=0.812)", "nbformat", "nbsphinx (>=0.3.2)", "nbstripout", "nbval", "ninja", "pandas", "pillow (==8.2.0)", "pypandoc", "pytest (>=5.0)", "pytest-xdist", "scikit-learn", "scipy (>=1.1)", "seaborn (>=0.11.0)", "sphinx", "sphinx-rtd-theme", "torchvision (>=0.12.0)", "visdom (>=0.1.4,<0.2.2)", "wget", "yapf"]
extras = ["graphviz (>=0.8)", "jupyter (>=1.0.0)", "lap", "matplotlib (>=1.3)", "pandas", "pillow (==8.2.0)", "scikit-learn", "seaborn (>=0.11.0)", "torchvision (>=0.12.0)", "visdom (>=0.1.4,<0.2.2)", "wget"]
funsor = ["funsor[torch] (==0.4.3)"]
horovod = ["horovod[pytorch] (>=0.19)"]
profile = ["prettytable", "pytest-benchmark", "snakeviz"]
test = ["black (>=21.4b0)", "flake8", "graphviz (>=0.8)", "jupyter (>=1.0.0)", "lap", "matplotlib (>=1.3)", "nbval", "pandas", "pillow (==8.2.0)", "pytest (>=5.0)", "pytest-cov", "scikit-learn", "scipy (>=1.1)", "seaborn (>=0.11.0)", "torchvision (>=0.12.0)", "visdom (>=0.1.4,<0.2.2)", "wget"]
[[package]]
name = "pyrsistent"
version = "0.19.2"
description = "Persistent/Functional/Immutable data structures"
category = "dev"
optional = false
python-versions = ">=3.7"
[[package]]
name = "pytest"
version = "7.2.0"
description = "pytest: simple powerful testing with Python"
category = "dev"
optional = false
python-versions = ">=3.7"
[package.dependencies]
attrs = ">=19.2.0"
colorama = {version = "*", markers = "sys_platform == \"win32\""}
exceptiongroup = {version = ">=1.0.0rc8", markers = "python_version < \"3.11\""}
iniconfig = "*"
packaging = "*"
pluggy = ">=0.12,<2.0"
tomli = {version = ">=1.0.0", markers = "python_version < \"3.11\""}
[package.extras]
testing = ["argcomplete", "hypothesis (>=3.56)", "mock", "nose", "pygments (>=2.7.2)", "requests", "xmlschema"]
[[package]]
name = "pytest-cov"
version = "3.0.0"
description = "Pytest plugin for measuring coverage."
category = "dev"
optional = false
python-versions = ">=3.6"
[package.dependencies]
coverage = {version = ">=5.2.1", extras = ["toml"]}
pytest = ">=4.6"
[package.extras]
testing = ["fields", "hunter", "process-tests", "pytest-xdist", "six", "virtualenv"]
[[package]]
name = "pytest-split"
version = "0.8.0"
description = "Pytest plugin which splits the test suite to equally sized sub suites based on test execution time."
category = "dev"
optional = false
python-versions = ">=3.7.1,<4.0"
[package.dependencies]
pytest = ">=5,<8"
[[package]]
name = "python-dateutil"
version = "2.8.2"
description = "Extensions to the standard Python datetime module"
category = "main"
optional = false
python-versions = "!=3.0.*,!=3.1.*,!=3.2.*,>=2.7"
[package.dependencies]
six = ">=1.5"
[[package]]
name = "python-utils"
version = "3.4.5"
description = "Python Utils is a module with some convenient utilities not included with the standard Python install"
category = "main"
optional = true
python-versions = ">3.6.0"
[package.extras]
docs = ["mock", "python-utils", "sphinx"]
loguru = ["loguru"]
tests = ["flake8", "loguru", "pytest", "pytest-asyncio", "pytest-cov", "pytest-mypy", "sphinx", "types-setuptools"]
[[package]]
name = "pytz"
version = "2022.6"
description = "World timezone definitions, modern and historical"
category = "main"
optional = false
python-versions = "*"
[[package]]
name = "pytz-deprecation-shim"
version = "0.1.0.post0"
description = "Shims to make deprecation of pytz easier"
category = "dev"
optional = false
python-versions = "!=3.0.*,!=3.1.*,!=3.2.*,!=3.3.*,!=3.4.*,!=3.5.*,>=2.7"
[package.dependencies]
"backports.zoneinfo" = {version = "*", markers = "python_version >= \"3.6\" and python_version < \"3.9\""}
tzdata = {version = "*", markers = "python_version >= \"3.6\""}
[[package]]
name = "pywin32"
version = "305"
description = "Python for Window Extensions"
category = "dev"
optional = false
python-versions = "*"
[[package]]
name = "pywinpty"
version = "2.0.9"
description = "Pseudo terminal support for Windows from Python."
category = "dev"
optional = false
python-versions = ">=3.7"
[[package]]
name = "pyyaml"
version = "6.0"
description = "YAML parser and emitter for Python"
category = "main"
optional = false
python-versions = ">=3.6"
[[package]]
name = "pyzmq"
version = "24.0.1"
description = "Python bindings for 0MQ"
category = "dev"
optional = false
python-versions = ">=3.6"
[package.dependencies]
cffi = {version = "*", markers = "implementation_name == \"pypy\""}
py = {version = "*", markers = "implementation_name == \"pypy\""}
[[package]]
name = "qtconsole"
version = "5.4.0"
description = "Jupyter Qt console"
category = "dev"
optional = false
python-versions = ">= 3.7"
[package.dependencies]
ipykernel = ">=4.1"
ipython-genutils = "*"
jupyter-client = ">=4.1"
jupyter-core = "*"
pygments = "*"
pyzmq = ">=17.1"
qtpy = ">=2.0.1"
traitlets = "<5.2.1 || >5.2.1,<5.2.2 || >5.2.2"
[package.extras]
doc = ["Sphinx (>=1.3)"]
test = ["flaky", "pytest", "pytest-qt"]
[[package]]
name = "qtpy"
version = "2.3.0"
description = "Provides an abstraction layer on top of the various Qt bindings (PyQt5/6 and PySide2/6)."
category = "dev"
optional = false
python-versions = ">=3.7"
[package.dependencies]
packaging = "*"
[package.extras]
test = ["pytest (>=6,!=7.0.0,!=7.0.1)", "pytest-cov (>=3.0.0)", "pytest-qt"]
[[package]]
name = "requests"
version = "2.28.1"
description = "Python HTTP for Humans."
category = "main"
optional = false
python-versions = ">=3.7, <4"
[package.dependencies]
certifi = ">=2017.4.17"
charset-normalizer = ">=2,<3"
idna = ">=2.5,<4"
urllib3 = ">=1.21.1,<1.27"
[package.extras]
socks = ["PySocks (>=1.5.6,!=1.5.7)"]
use-chardet-on-py3 = ["chardet (>=3.0.2,<6)"]
[[package]]
name = "requests-oauthlib"
version = "1.3.1"
description = "OAuthlib authentication support for Requests."
category = "dev"
optional = false
python-versions = ">=2.7, !=3.0.*, !=3.1.*, !=3.2.*, !=3.3.*"
[package.dependencies]
oauthlib = ">=3.0.0"
requests = ">=2.0.0"
[package.extras]
rsa = ["oauthlib[signedtoken] (>=3.0.0)"]
[[package]]
name = "rpy2"
version = "3.5.6"
description = "Python interface to the R language (embedded R)"
category = "dev"
optional = false
python-versions = ">=3.7"
[package.dependencies]
cffi = ">=1.10.0"
jinja2 = "*"
packaging = {version = "*", markers = "platform_system == \"Windows\""}
pytz = "*"
tzlocal = "*"
[package.extras]
all = ["ipython", "numpy", "pandas", "pytest"]
numpy = ["pandas"]
pandas = ["numpy", "pandas"]
test = ["ipython", "numpy", "pandas", "pytest"]
[[package]]
name = "rsa"
version = "4.9"
description = "Pure-Python RSA implementation"
category = "dev"
optional = false
python-versions = ">=3.6,<4"
[package.dependencies]
pyasn1 = ">=0.1.3"
[[package]]
name = "s3transfer"
version = "0.6.0"
description = "An Amazon S3 Transfer Manager"
category = "main"
optional = false
python-versions = ">= 3.7"
[package.dependencies]
botocore = ">=1.12.36,<2.0a.0"
[package.extras]
crt = ["botocore[crt] (>=1.20.29,<2.0a.0)"]
[[package]]
name = "scikit-learn"
version = "1.0.2"
description = "A set of python modules for machine learning and data mining"
category = "main"
optional = false
python-versions = ">=3.7"
[package.dependencies]
joblib = ">=0.11"
numpy = ">=1.14.6"
scipy = ">=1.1.0"
threadpoolctl = ">=2.0.0"
[package.extras]
benchmark = ["matplotlib (>=2.2.3)", "memory-profiler (>=0.57.0)", "pandas (>=0.25.0)"]
docs = ["Pillow (>=7.1.2)", "matplotlib (>=2.2.3)", "memory-profiler (>=0.57.0)", "numpydoc (>=1.0.0)", "pandas (>=0.25.0)", "scikit-image (>=0.14.5)", "seaborn (>=0.9.0)", "sphinx (>=4.0.1)", "sphinx-gallery (>=0.7.0)", "sphinx-prompt (>=1.3.0)", "sphinxext-opengraph (>=0.4.2)"]
examples = ["matplotlib (>=2.2.3)", "pandas (>=0.25.0)", "scikit-image (>=0.14.5)", "seaborn (>=0.9.0)"]
tests = ["black (>=21.6b0)", "flake8 (>=3.8.2)", "matplotlib (>=2.2.3)", "mypy (>=0.770)", "pandas (>=0.25.0)", "pyamg (>=4.0.0)", "pytest (>=5.0.1)", "pytest-cov (>=2.9.0)", "scikit-image (>=0.14.5)"]
[[package]]
name = "scipy"
version = "1.8.1"
description = "SciPy: Scientific Library for Python"
category = "main"
optional = false
python-versions = ">=3.8,<3.11"
[package.dependencies]
numpy = ">=1.17.3,<1.25.0"
[[package]]
name = "scipy"
version = "1.9.3"
description = "Fundamental algorithms for scientific computing in Python"
category = "main"
optional = false
python-versions = ">=3.8"
[package.dependencies]
numpy = ">=1.18.5,<1.26.0"
[package.extras]
dev = ["flake8", "mypy", "pycodestyle", "typing_extensions"]
doc = ["matplotlib (>2)", "numpydoc", "pydata-sphinx-theme (==0.9.0)", "sphinx (!=4.1.0)", "sphinx-panels (>=0.5.2)", "sphinx-tabs"]
test = ["asv", "gmpy2", "mpmath", "pytest", "pytest-cov", "pytest-xdist", "scikit-umfpack", "threadpoolctl"]
[[package]]
name = "seaborn"
version = "0.12.1"
description = "Statistical data visualization"
category = "main"
optional = true
python-versions = ">=3.7"
[package.dependencies]
matplotlib = ">=3.1,<3.6.1 || >3.6.1"
numpy = ">=1.17"
pandas = ">=0.25"
[package.extras]
dev = ["flake8", "mypy", "pandas-stubs", "pre-commit", "pytest", "pytest-cov", "pytest-xdist"]
docs = ["ipykernel", "nbconvert", "numpydoc", "pydata_sphinx_theme (==0.10.0rc2)", "pyyaml", "sphinx-copybutton", "sphinx-design", "sphinx-issues"]
stats = ["scipy (>=1.3)", "statsmodels (>=0.10)"]
[[package]]
name = "send2trash"
version = "1.8.0"
description = "Send file to trash natively under Mac OS X, Windows and Linux."
category = "dev"
optional = false
python-versions = "*"
[package.extras]
nativelib = ["pyobjc-framework-Cocoa", "pywin32"]
objc = ["pyobjc-framework-Cocoa"]
win32 = ["pywin32"]
[[package]]
name = "setuptools"
version = "65.6.1"
description = "Easily download, build, install, upgrade, and uninstall Python packages"
category = "main"
optional = false
python-versions = ">=3.7"
[package.extras]
docs = ["furo", "jaraco.packaging (>=9)", "jaraco.tidelift (>=1.4)", "pygments-github-lexers (==0.0.5)", "rst.linker (>=1.9)", "sphinx (>=3.5)", "sphinx-favicon", "sphinx-hoverxref (<2)", "sphinx-inline-tabs", "sphinx-notfound-page (==0.8.3)", "sphinx-reredirects", "sphinxcontrib-towncrier"]
testing = ["build[virtualenv]", "filelock (>=3.4.0)", "flake8 (<5)", "flake8-2020", "ini2toml[lite] (>=0.9)", "jaraco.envs (>=2.2)", "jaraco.path (>=3.2.0)", "pip (>=19.1)", "pip-run (>=8.8)", "pytest (>=6)", "pytest-black (>=0.3.7)", "pytest-checkdocs (>=2.4)", "pytest-cov", "pytest-enabler (>=1.3)", "pytest-flake8", "pytest-mypy (>=0.9.1)", "pytest-perf", "pytest-timeout", "pytest-xdist", "tomli-w (>=1.0.0)", "virtualenv (>=13.0.0)", "wheel"]
testing-integration = ["build[virtualenv]", "filelock (>=3.4.0)", "jaraco.envs (>=2.2)", "jaraco.path (>=3.2.0)", "pytest", "pytest-enabler", "pytest-xdist", "tomli", "virtualenv (>=13.0.0)", "wheel"]
[[package]]
name = "setuptools-scm"
version = "7.0.5"
description = "the blessed package to manage your versions by scm tags"
category = "main"
optional = false
python-versions = ">=3.7"
[package.dependencies]
packaging = ">=20.0"
setuptools = "*"
tomli = ">=1.0.0"
typing-extensions = "*"
[package.extras]
test = ["pytest (>=6.2)", "virtualenv (>20)"]
toml = ["setuptools (>=42)"]
[[package]]
name = "shap"
version = "0.40.0"
description = "A unified approach to explain the output of any machine learning model."
category = "main"
optional = false
python-versions = "*"
[package.dependencies]
cloudpickle = "*"
numba = "*"
numpy = "*"
packaging = ">20.9"
pandas = "*"
scikit-learn = "*"
scipy = "*"
slicer = "0.0.7"
tqdm = ">4.25.0"
[package.extras]
all = ["catboost", "ipython", "lightgbm", "lime", "matplotlib", "nbsphinx", "numpydoc", "opencv-python", "pyod", "pyspark", "pytest", "pytest-cov", "pytest-mpl", "sentencepiece", "sphinx", "sphinx_rtd_theme", "torch", "transformers", "xgboost"]
docs = ["ipython", "matplotlib", "nbsphinx", "numpydoc", "sphinx", "sphinx_rtd_theme"]
others = ["lime"]
plots = ["ipython", "matplotlib"]
test = ["catboost", "lightgbm", "opencv-python", "pyod", "pyspark", "pytest", "pytest-cov", "pytest-mpl", "sentencepiece", "torch", "transformers", "xgboost"]
[[package]]
name = "six"
version = "1.16.0"
description = "Python 2 and 3 compatibility utilities"
category = "main"
optional = false
python-versions = ">=2.7, !=3.0.*, !=3.1.*, !=3.2.*"
[[package]]
name = "slicer"
version = "0.0.7"
description = "A small package for big slicing."
category = "main"
optional = false
python-versions = ">=3.6"
[[package]]
name = "smart-open"
version = "5.2.1"
description = "Utils for streaming large files (S3, HDFS, GCS, Azure Blob Storage, gzip, bz2...)"
category = "main"
optional = false
python-versions = ">=3.6,<4.0"
[package.extras]
all = ["azure-common", "azure-core", "azure-storage-blob", "boto3", "google-cloud-storage", "requests"]
azure = ["azure-common", "azure-core", "azure-storage-blob"]
gcs = ["google-cloud-storage"]
http = ["requests"]
s3 = ["boto3"]
test = ["azure-common", "azure-core", "azure-storage-blob", "boto3", "google-cloud-storage", "moto[server] (==1.3.14)", "parameterizedtestcase", "paramiko", "pathlib2", "pytest", "pytest-rerunfailures", "requests", "responses"]
webhdfs = ["requests"]
[[package]]
name = "sniffio"
version = "1.3.0"
description = "Sniff out which async library your code is running under"
category = "dev"
optional = false
python-versions = ">=3.7"
[[package]]
name = "snowballstemmer"
version = "2.2.0"
description = "This package provides 29 stemmers for 28 languages generated from Snowball algorithms."
category = "main"
optional = false
python-versions = "*"
[[package]]
name = "sortedcontainers"
version = "2.4.0"
description = "Sorted Containers -- Sorted List, Sorted Dict, Sorted Set"
category = "main"
optional = false
python-versions = "*"
[[package]]
name = "soupsieve"
version = "2.3.2.post1"
description = "A modern CSS selector implementation for Beautiful Soup."
category = "dev"
optional = false
python-versions = ">=3.6"
[[package]]
name = "spacy"
version = "3.4.3"
description = "Industrial-strength Natural Language Processing (NLP) in Python"
category = "main"
optional = false
python-versions = ">=3.6"
[package.dependencies]
catalogue = ">=2.0.6,<2.1.0"
cymem = ">=2.0.2,<2.1.0"
jinja2 = "*"
langcodes = ">=3.2.0,<4.0.0"
murmurhash = ">=0.28.0,<1.1.0"
numpy = ">=1.15.0"
packaging = ">=20.0"
pathy = ">=0.3.5"
preshed = ">=3.0.2,<3.1.0"
pydantic = ">=1.7.4,<1.8 || >1.8,<1.8.1 || >1.8.1,<1.11.0"
requests = ">=2.13.0,<3.0.0"
setuptools = "*"
spacy-legacy = ">=3.0.10,<3.1.0"
spacy-loggers = ">=1.0.0,<2.0.0"
srsly = ">=2.4.3,<3.0.0"
thinc = ">=8.1.0,<8.2.0"
tqdm = ">=4.38.0,<5.0.0"
typer = ">=0.3.0,<0.8.0"
wasabi = ">=0.9.1,<1.1.0"
[package.extras]
apple = ["thinc-apple-ops (>=0.1.0.dev0,<1.0.0)"]
cuda = ["cupy (>=5.0.0b4,<12.0.0)"]
cuda-autodetect = ["cupy-wheel (>=11.0.0,<12.0.0)"]
cuda100 = ["cupy-cuda100 (>=5.0.0b4,<12.0.0)"]
cuda101 = ["cupy-cuda101 (>=5.0.0b4,<12.0.0)"]
cuda102 = ["cupy-cuda102 (>=5.0.0b4,<12.0.0)"]
cuda110 = ["cupy-cuda110 (>=5.0.0b4,<12.0.0)"]
cuda111 = ["cupy-cuda111 (>=5.0.0b4,<12.0.0)"]
cuda112 = ["cupy-cuda112 (>=5.0.0b4,<12.0.0)"]
cuda113 = ["cupy-cuda113 (>=5.0.0b4,<12.0.0)"]
cuda114 = ["cupy-cuda114 (>=5.0.0b4,<12.0.0)"]
cuda115 = ["cupy-cuda115 (>=5.0.0b4,<12.0.0)"]
cuda116 = ["cupy-cuda116 (>=5.0.0b4,<12.0.0)"]
cuda117 = ["cupy-cuda117 (>=5.0.0b4,<12.0.0)"]
cuda11x = ["cupy-cuda11x (>=11.0.0,<12.0.0)"]
cuda80 = ["cupy-cuda80 (>=5.0.0b4,<12.0.0)"]
cuda90 = ["cupy-cuda90 (>=5.0.0b4,<12.0.0)"]
cuda91 = ["cupy-cuda91 (>=5.0.0b4,<12.0.0)"]
cuda92 = ["cupy-cuda92 (>=5.0.0b4,<12.0.0)"]
ja = ["sudachidict-core (>=20211220)", "sudachipy (>=0.5.2,!=0.6.1)"]
ko = ["natto-py (>=0.9.0)"]
lookups = ["spacy-lookups-data (>=1.0.3,<1.1.0)"]
ray = ["spacy-ray (>=0.1.0,<1.0.0)"]
th = ["pythainlp (>=2.0)"]
transformers = ["spacy-transformers (>=1.1.2,<1.2.0)"]
[[package]]
name = "spacy-legacy"
version = "3.0.10"
description = "Legacy registered functions for spaCy backwards compatibility"
category = "main"
optional = false
python-versions = ">=3.6"
[[package]]
name = "spacy-loggers"
version = "1.0.3"
description = "Logging utilities for SpaCy"
category = "main"
optional = false
python-versions = ">=3.6"
[package.dependencies]
wasabi = ">=0.8.1,<1.1.0"
[[package]]
name = "sparse"
version = "0.13.0"
description = "Sparse n-dimensional arrays"
category = "main"
optional = false
python-versions = ">=3.6, <4"
[package.dependencies]
numba = ">=0.49"
numpy = ">=1.17"
scipy = ">=0.19"
[package.extras]
all = ["dask[array]", "pytest (>=3.5)", "pytest-black", "pytest-cov", "sphinx", "sphinx-rtd-theme", "tox"]
docs = ["sphinx", "sphinx-rtd-theme"]
tests = ["dask[array]", "pytest (>=3.5)", "pytest-black", "pytest-cov"]
tox = ["dask[array]", "pytest (>=3.5)", "pytest-black", "pytest-cov", "tox"]
[[package]]
name = "sphinx"
version = "5.3.0"
description = "Python documentation generator"
category = "main"
optional = false
python-versions = ">=3.6"
[package.dependencies]
alabaster = ">=0.7,<0.8"
babel = ">=2.9"
colorama = {version = ">=0.4.5", markers = "sys_platform == \"win32\""}
docutils = ">=0.14,<0.20"
imagesize = ">=1.3"
importlib-metadata = {version = ">=4.8", markers = "python_version < \"3.10\""}
Jinja2 = ">=3.0"
packaging = ">=21.0"
Pygments = ">=2.12"
requests = ">=2.5.0"
snowballstemmer = ">=2.0"
sphinxcontrib-applehelp = "*"
sphinxcontrib-devhelp = "*"
sphinxcontrib-htmlhelp = ">=2.0.0"
sphinxcontrib-jsmath = "*"
sphinxcontrib-qthelp = "*"
sphinxcontrib-serializinghtml = ">=1.1.5"
[package.extras]
docs = ["sphinxcontrib-websupport"]
lint = ["docutils-stubs", "flake8 (>=3.5.0)", "flake8-bugbear", "flake8-comprehensions", "flake8-simplify", "isort", "mypy (>=0.981)", "sphinx-lint", "types-requests", "types-typed-ast"]
test = ["cython", "html5lib", "pytest (>=4.6)", "typed_ast"]
[[package]]
name = "sphinx-copybutton"
version = "0.5.0"
description = "Add a copy button to each of your code cells."
category = "dev"
optional = false
python-versions = ">=3.6"
[package.dependencies]
sphinx = ">=1.8"
[package.extras]
code-style = ["pre-commit (==2.12.1)"]
rtd = ["ipython", "myst-nb", "sphinx", "sphinx-book-theme"]
[[package]]
name = "sphinx-design"
version = "0.3.0"
description = "A sphinx extension for designing beautiful, view size responsive web components."
category = "main"
optional = false
python-versions = ">=3.7"
[package.dependencies]
sphinx = ">=4,<6"
[package.extras]
code-style = ["pre-commit (>=2.12,<3.0)"]
rtd = ["myst-parser (>=0.18.0,<0.19.0)"]
testing = ["myst-parser (>=0.18.0,<0.19.0)", "pytest (>=7.1,<8.0)", "pytest-cov", "pytest-regressions"]
theme-furo = ["furo (>=2022.06.04,<2022.07)"]
theme-pydata = ["pydata-sphinx-theme (>=0.9.0,<0.10.0)"]
theme-rtd = ["sphinx-rtd-theme (>=1.0,<2.0)"]
theme-sbt = ["sphinx-book-theme (>=0.3.0,<0.4.0)"]
[[package]]
name = "sphinx-rtd-theme"
version = "1.1.1"
description = "Read the Docs theme for Sphinx"
category = "dev"
optional = false
python-versions = "!=3.0.*,!=3.1.*,!=3.2.*,!=3.3.*,>=2.7"
[package.dependencies]
docutils = "<0.18"
sphinx = ">=1.6,<6"
[package.extras]
dev = ["bump2version", "sphinxcontrib-httpdomain", "transifex-client", "wheel"]
[[package]]
name = "sphinxcontrib-applehelp"
version = "1.0.2"
description = "sphinxcontrib-applehelp is a sphinx extension which outputs Apple help books"
category = "main"
optional = false
python-versions = ">=3.5"
[package.extras]
lint = ["docutils-stubs", "flake8", "mypy"]
test = ["pytest"]
[[package]]
name = "sphinxcontrib-devhelp"
version = "1.0.2"
description = "sphinxcontrib-devhelp is a sphinx extension which outputs Devhelp document."
category = "main"
optional = false
python-versions = ">=3.5"
[package.extras]
lint = ["docutils-stubs", "flake8", "mypy"]
test = ["pytest"]
[[package]]
name = "sphinxcontrib-googleanalytics"
version = "0.2.dev20220919"
description = "Sphinx extension googleanalytics"
category = "dev"
optional = false
python-versions = "*"
develop = false
[package.dependencies]
Sphinx = ">=0.6"
[package.source]
type = "git"
url = "https://github.com/sphinx-contrib/googleanalytics.git"
reference = "master"
resolved_reference = "42b3df99fdc01a136b9c575f3f251ae80cdfbe1d"
[[package]]
name = "sphinxcontrib-htmlhelp"
version = "2.0.0"
description = "sphinxcontrib-htmlhelp is a sphinx extension which renders HTML help files"
category = "main"
optional = false
python-versions = ">=3.6"
[package.extras]
lint = ["docutils-stubs", "flake8", "mypy"]
test = ["html5lib", "pytest"]
[[package]]
name = "sphinxcontrib-jsmath"
version = "1.0.1"
description = "A sphinx extension which renders display math in HTML via JavaScript"
category = "main"
optional = false
python-versions = ">=3.5"
[package.extras]
test = ["flake8", "mypy", "pytest"]
[[package]]
name = "sphinxcontrib-qthelp"
version = "1.0.3"
description = "sphinxcontrib-qthelp is a sphinx extension which outputs QtHelp document."
category = "main"
optional = false
python-versions = ">=3.5"
[package.extras]
lint = ["docutils-stubs", "flake8", "mypy"]
test = ["pytest"]
[[package]]
name = "sphinxcontrib-serializinghtml"
version = "1.1.5"
description = "sphinxcontrib-serializinghtml is a sphinx extension which outputs \"serialized\" HTML files (json and pickle)."
category = "main"
optional = false
python-versions = ">=3.5"
[package.extras]
lint = ["docutils-stubs", "flake8", "mypy"]
test = ["pytest"]
[[package]]
name = "srsly"
version = "2.4.5"
description = "Modern high-performance serialization utilities for Python"
category = "main"
optional = false
python-versions = ">=3.6"
[package.dependencies]
catalogue = ">=2.0.3,<2.1.0"
[[package]]
name = "stack-data"
version = "0.6.1"
description = "Extract data from python stack frames and tracebacks for informative displays"
category = "dev"
optional = false
python-versions = "*"
[package.dependencies]
asttokens = ">=2.1.0"
executing = ">=1.2.0"
pure-eval = "*"
[package.extras]
tests = ["cython", "littleutils", "pygments", "pytest", "typeguard"]
[[package]]
name = "statsmodels"
version = "0.13.5"
description = "Statistical computations and models for Python"
category = "main"
optional = false
python-versions = ">=3.7"
[package.dependencies]
numpy = {version = ">=1.17", markers = "python_version != \"3.10\" or platform_system != \"Windows\" or platform_python_implementation == \"PyPy\""}
packaging = ">=21.3"
pandas = ">=0.25"
patsy = ">=0.5.2"
scipy = [
{version = ">=1.3", markers = "(python_version > \"3.9\" or platform_system != \"Windows\" or platform_machine != \"x86\") and python_version < \"3.12\""},
{version = ">=1.3,<1.9", markers = "python_version == \"3.8\" and platform_system == \"Windows\" and platform_machine == \"x86\" or python_version == \"3.9\" and platform_system == \"Windows\" and platform_machine == \"x86\""},
]
[package.extras]
build = ["cython (>=0.29.32)"]
develop = ["Jinja2", "colorama", "cython (>=0.29.32)", "cython (>=0.29.32,<3.0.0)", "flake8", "isort", "joblib", "matplotlib (>=3)", "oldest-supported-numpy (>=2022.4.18)", "pytest (>=7.0.1,<7.1.0)", "pytest-randomly", "pytest-xdist", "pywinpty", "setuptools-scm[toml] (>=7.0.0,<7.1.0)"]
docs = ["ipykernel", "jupyter-client", "matplotlib", "nbconvert", "nbformat", "numpydoc", "pandas-datareader", "sphinx"]
[[package]]
name = "sympy"
version = "1.11.1"
description = "Computer algebra system (CAS) in Python"
category = "main"
optional = false
python-versions = ">=3.8"
[package.dependencies]
mpmath = ">=0.19"
[[package]]
name = "tblib"
version = "1.7.0"
description = "Traceback serialization library."
category = "main"
optional = false
python-versions = ">=2.7, !=3.0.*, !=3.1.*, !=3.2.*, !=3.3.*, !=3.4.*"
[[package]]
name = "tenacity"
version = "8.1.0"
description = "Retry code until it succeeds"
category = "main"
optional = false
python-versions = ">=3.6"
[package.extras]
doc = ["reno", "sphinx", "tornado (>=4.5)"]
[[package]]
name = "tensorboard"
version = "2.11.0"
description = "TensorBoard lets you watch Tensors Flow"
category = "dev"
optional = false
python-versions = ">=3.7"
[package.dependencies]
absl-py = ">=0.4"
google-auth = ">=1.6.3,<3"
google-auth-oauthlib = ">=0.4.1,<0.5"
grpcio = ">=1.24.3"
markdown = ">=2.6.8"
numpy = ">=1.12.0"
protobuf = ">=3.9.2,<4"
requests = ">=2.21.0,<3"
setuptools = ">=41.0.0"
tensorboard-data-server = ">=0.6.0,<0.7.0"
tensorboard-plugin-wit = ">=1.6.0"
werkzeug = ">=1.0.1"
wheel = ">=0.26"
[[package]]
name = "tensorboard-data-server"
version = "0.6.1"
description = "Fast data loading for TensorBoard"
category = "dev"
optional = false
python-versions = ">=3.6"
[[package]]
name = "tensorboard-plugin-wit"
version = "1.8.1"
description = "What-If Tool TensorBoard plugin."
category = "dev"
optional = false
python-versions = "*"
[[package]]
name = "tensorflow"
version = "2.11.0"
description = "TensorFlow is an open source machine learning framework for everyone."
category = "dev"
optional = false
python-versions = ">=3.7"
[package.dependencies]
absl-py = ">=1.0.0"
astunparse = ">=1.6.0"
flatbuffers = ">=2.0"
gast = ">=0.2.1,<=0.4.0"
google-pasta = ">=0.1.1"
grpcio = ">=1.24.3,<2.0"
h5py = ">=2.9.0"
keras = ">=2.11.0,<2.12"
libclang = ">=13.0.0"
numpy = ">=1.20"
opt-einsum = ">=2.3.2"
packaging = "*"
protobuf = ">=3.9.2,<3.20"
setuptools = "*"
six = ">=1.12.0"
tensorboard = ">=2.11,<2.12"
tensorflow-estimator = ">=2.11.0,<2.12"
tensorflow-io-gcs-filesystem = {version = ">=0.23.1", markers = "platform_machine != \"arm64\" or platform_system != \"Darwin\""}
termcolor = ">=1.1.0"
typing-extensions = ">=3.6.6"
wrapt = ">=1.11.0"
[[package]]
name = "tensorflow-estimator"
version = "2.11.0"
description = "TensorFlow Estimator."
category = "dev"
optional = false
python-versions = ">=3.7"
[[package]]
name = "tensorflow-io-gcs-filesystem"
version = "0.28.0"
description = "TensorFlow IO"
category = "dev"
optional = false
python-versions = ">=3.7, <3.11"
[package.extras]
tensorflow = ["tensorflow (>=2.11.0,<2.12.0)"]
tensorflow-aarch64 = ["tensorflow-aarch64 (>=2.11.0,<2.12.0)"]
tensorflow-cpu = ["tensorflow-cpu (>=2.11.0,<2.12.0)"]
tensorflow-gpu = ["tensorflow-gpu (>=2.11.0,<2.12.0)"]
tensorflow-rocm = ["tensorflow-rocm (>=2.11.0,<2.12.0)"]
[[package]]
name = "termcolor"
version = "2.1.1"
description = "ANSI color formatting for output in terminal"
category = "dev"
optional = false
python-versions = ">=3.7"
[package.extras]
tests = ["pytest", "pytest-cov"]
[[package]]
name = "terminado"
version = "0.17.0"
description = "Tornado websocket backend for the Xterm.js Javascript terminal emulator library."
category = "dev"
optional = false
python-versions = ">=3.7"
[package.dependencies]
ptyprocess = {version = "*", markers = "os_name != \"nt\""}
pywinpty = {version = ">=1.1.0", markers = "os_name == \"nt\""}
tornado = ">=6.1.0"
[package.extras]
docs = ["pydata-sphinx-theme", "sphinx"]
test = ["pre-commit", "pytest (>=7.0)", "pytest-timeout"]
[[package]]
name = "thinc"
version = "8.1.5"
description = "A refreshing functional take on deep learning, compatible with your favorite libraries"
category = "main"
optional = false
python-versions = ">=3.6"
[package.dependencies]
blis = ">=0.7.8,<0.8.0"
catalogue = ">=2.0.4,<2.1.0"
confection = ">=0.0.1,<1.0.0"
cymem = ">=2.0.2,<2.1.0"
murmurhash = ">=1.0.2,<1.1.0"
numpy = ">=1.15.0"
preshed = ">=3.0.2,<3.1.0"
pydantic = ">=1.7.4,<1.8 || >1.8,<1.8.1 || >1.8.1,<1.11.0"
setuptools = "*"
srsly = ">=2.4.0,<3.0.0"
wasabi = ">=0.8.1,<1.1.0"
[package.extras]
cuda = ["cupy (>=5.0.0b4)"]
cuda-autodetect = ["cupy-wheel (>=11.0.0)"]
cuda100 = ["cupy-cuda100 (>=5.0.0b4)"]
cuda101 = ["cupy-cuda101 (>=5.0.0b4)"]
cuda102 = ["cupy-cuda102 (>=5.0.0b4)"]
cuda110 = ["cupy-cuda110 (>=5.0.0b4)"]
cuda111 = ["cupy-cuda111 (>=5.0.0b4)"]
cuda112 = ["cupy-cuda112 (>=5.0.0b4)"]
cuda113 = ["cupy-cuda113 (>=5.0.0b4)"]
cuda114 = ["cupy-cuda114 (>=5.0.0b4)"]
cuda115 = ["cupy-cuda115 (>=5.0.0b4)"]
cuda116 = ["cupy-cuda116 (>=5.0.0b4)"]
cuda117 = ["cupy-cuda117 (>=5.0.0b4)"]
cuda11x = ["cupy-cuda11x (>=11.0.0)"]
cuda80 = ["cupy-cuda80 (>=5.0.0b4)"]
cuda90 = ["cupy-cuda90 (>=5.0.0b4)"]
cuda91 = ["cupy-cuda91 (>=5.0.0b4)"]
cuda92 = ["cupy-cuda92 (>=5.0.0b4)"]
datasets = ["ml-datasets (>=0.2.0,<0.3.0)"]
mxnet = ["mxnet (>=1.5.1,<1.6.0)"]
tensorflow = ["tensorflow (>=2.0.0,<2.6.0)"]
torch = ["torch (>=1.6.0)"]
[[package]]
name = "threadpoolctl"
version = "3.1.0"
description = "threadpoolctl"
category = "main"
optional = false
python-versions = ">=3.6"
[[package]]
name = "tinycss2"
version = "1.2.1"
description = "A tiny CSS parser"
category = "dev"
optional = false
python-versions = ">=3.7"
[package.dependencies]
webencodings = ">=0.4"
[package.extras]
doc = ["sphinx", "sphinx_rtd_theme"]
test = ["flake8", "isort", "pytest"]
[[package]]
name = "tokenize-rt"
version = "5.0.0"
description = "A wrapper around the stdlib `tokenize` which roundtrips."
category = "dev"
optional = false
python-versions = ">=3.7"
[[package]]
name = "tomli"
version = "2.0.1"
description = "A lil' TOML parser"
category = "main"
optional = false
python-versions = ">=3.7"
[[package]]
name = "toolz"
version = "0.12.0"
description = "List processing tools and functional utilities"
category = "main"
optional = false
python-versions = ">=3.5"
[[package]]
name = "torch"
version = "1.12.1"
description = "Tensors and Dynamic neural networks in Python with strong GPU acceleration"
category = "main"
optional = false
python-versions = ">=3.7.0"
[package.dependencies]
typing-extensions = "*"
[[package]]
name = "torchvision"
version = "0.13.1"
description = "image and video datasets and models for torch deep learning"
category = "main"
optional = false
python-versions = ">=3.7"
[package.dependencies]
numpy = "*"
pillow = ">=5.3.0,<8.3.0 || >=8.4.0"
requests = "*"
torch = "1.12.1"
typing-extensions = "*"
[package.extras]
scipy = ["scipy"]
[[package]]
name = "tornado"
version = "6.2"
description = "Tornado is a Python web framework and asynchronous networking library, originally developed at FriendFeed."
category = "main"
optional = false
python-versions = ">= 3.7"
[[package]]
name = "tqdm"
version = "4.64.1"
description = "Fast, Extensible Progress Meter"
category = "main"
optional = false
python-versions = "!=3.0.*,!=3.1.*,!=3.2.*,!=3.3.*,>=2.7"
[package.dependencies]
colorama = {version = "*", markers = "platform_system == \"Windows\""}
[package.extras]
dev = ["py-make (>=0.1.0)", "twine", "wheel"]
notebook = ["ipywidgets (>=6)"]
slack = ["slack-sdk"]
telegram = ["requests"]
[[package]]
name = "traitlets"
version = "5.5.0"
description = ""
category = "dev"
optional = false
python-versions = ">=3.7"
[package.extras]
docs = ["myst-parser", "pydata-sphinx-theme", "sphinx"]
test = ["pre-commit", "pytest"]
[[package]]
name = "typer"
version = "0.7.0"
description = "Typer, build great CLIs. Easy to code. Based on Python type hints."
category = "main"
optional = false
python-versions = ">=3.6"
[package.dependencies]
click = ">=7.1.1,<9.0.0"
[package.extras]
all = ["colorama (>=0.4.3,<0.5.0)", "rich (>=10.11.0,<13.0.0)", "shellingham (>=1.3.0,<2.0.0)"]
dev = ["autoflake (>=1.3.1,<2.0.0)", "flake8 (>=3.8.3,<4.0.0)", "pre-commit (>=2.17.0,<3.0.0)"]
doc = ["cairosvg (>=2.5.2,<3.0.0)", "mdx-include (>=1.4.1,<2.0.0)", "mkdocs (>=1.1.2,<2.0.0)", "mkdocs-material (>=8.1.4,<9.0.0)", "pillow (>=9.3.0,<10.0.0)"]
test = ["black (>=22.3.0,<23.0.0)", "coverage (>=6.2,<7.0)", "isort (>=5.0.6,<6.0.0)", "mypy (==0.910)", "pytest (>=4.4.0,<8.0.0)", "pytest-cov (>=2.10.0,<5.0.0)", "pytest-sugar (>=0.9.4,<0.10.0)", "pytest-xdist (>=1.32.0,<4.0.0)", "rich (>=10.11.0,<13.0.0)", "shellingham (>=1.3.0,<2.0.0)"]
[[package]]
name = "typing-extensions"
version = "4.4.0"
description = "Backported and Experimental Type Hints for Python 3.7+"
category = "main"
optional = false
python-versions = ">=3.7"
[[package]]
name = "tzdata"
version = "2022.6"
description = "Provider of IANA time zone data"
category = "dev"
optional = false
python-versions = ">=2"
[[package]]
name = "tzlocal"
version = "4.2"
description = "tzinfo object for the local timezone"
category = "dev"
optional = false
python-versions = ">=3.6"
[package.dependencies]
"backports.zoneinfo" = {version = "*", markers = "python_version < \"3.9\""}
pytz-deprecation-shim = "*"
tzdata = {version = "*", markers = "platform_system == \"Windows\""}
[package.extras]
devenv = ["black", "pyroma", "pytest-cov", "zest.releaser"]
test = ["pytest (>=4.3)", "pytest-mock (>=3.3)"]
[[package]]
name = "urllib3"
version = "1.26.12"
description = "HTTP library with thread-safe connection pooling, file post, and more."
category = "main"
optional = false
python-versions = ">=2.7, !=3.0.*, !=3.1.*, !=3.2.*, !=3.3.*, !=3.4.*, !=3.5.*, <4"
[package.extras]
brotli = ["brotli (>=1.0.9)", "brotlicffi (>=0.8.0)", "brotlipy (>=0.6.0)"]
secure = ["certifi", "cryptography (>=1.3.4)", "idna (>=2.0.0)", "ipaddress", "pyOpenSSL (>=0.14)", "urllib3-secure-extra"]
socks = ["PySocks (>=1.5.6,!=1.5.7,<2.0)"]
[[package]]
name = "wasabi"
version = "0.10.1"
description = "A lightweight console printing and formatting toolkit"
category = "main"
optional = false
python-versions = "*"
[[package]]
name = "wcwidth"
version = "0.2.5"
description = "Measures the displayed width of unicode strings in a terminal"
category = "dev"
optional = false
python-versions = "*"
[[package]]
name = "webencodings"
version = "0.5.1"
description = "Character encoding aliases for legacy web content"
category = "dev"
optional = false
python-versions = "*"
[[package]]
name = "websocket-client"
version = "1.4.2"
description = "WebSocket client for Python with low level API options"
category = "dev"
optional = false
python-versions = ">=3.7"
[package.extras]
docs = ["Sphinx (>=3.4)", "sphinx-rtd-theme (>=0.5)"]
optional = ["python-socks", "wsaccel"]
test = ["websockets"]
[[package]]
name = "werkzeug"
version = "2.2.2"
description = "The comprehensive WSGI web application library."
category = "dev"
optional = false
python-versions = ">=3.7"
[package.dependencies]
MarkupSafe = ">=2.1.1"
[package.extras]
watchdog = ["watchdog"]
[[package]]
name = "wheel"
version = "0.38.4"
description = "A built-package format for Python"
category = "main"
optional = false
python-versions = ">=3.7"
[package.extras]
test = ["pytest (>=3.0.0)"]
[[package]]
name = "widgetsnbextension"
version = "4.0.3"
description = "Jupyter interactive widgets for Jupyter Notebook"
category = "dev"
optional = false
python-versions = ">=3.7"
[[package]]
name = "wrapt"
version = "1.14.1"
description = "Module for decorators, wrappers and monkey patching."
category = "dev"
optional = false
python-versions = "!=3.0.*,!=3.1.*,!=3.2.*,!=3.3.*,!=3.4.*,>=2.7"
[[package]]
name = "xgboost"
version = "1.7.1"
description = "XGBoost Python Package"
category = "main"
optional = false
python-versions = ">=3.8"
[package.dependencies]
numpy = "*"
scipy = "*"
[package.extras]
dask = ["dask", "distributed", "pandas"]
datatable = ["datatable"]
pandas = ["pandas"]
plotting = ["graphviz", "matplotlib"]
pyspark = ["cloudpickle", "pyspark", "scikit-learn"]
scikit-learn = ["scikit-learn"]
[[package]]
name = "zict"
version = "2.2.0"
description = "Mutable mapping tools"
category = "main"
optional = false
python-versions = ">=3.7"
[package.dependencies]
heapdict = "*"
[[package]]
name = "zipp"
version = "3.10.0"
description = "Backport of pathlib-compatible object wrapper for zip files"
category = "main"
optional = false
python-versions = ">=3.7"
[package.extras]
docs = ["furo", "jaraco.packaging (>=9)", "jaraco.tidelift (>=1.4)", "rst.linker (>=1.9)", "sphinx (>=3.5)"]
testing = ["flake8 (<5)", "func-timeout", "jaraco.functools", "jaraco.itertools", "more-itertools", "pytest (>=6)", "pytest-black (>=0.3.7)", "pytest-checkdocs (>=2.4)", "pytest-cov", "pytest-enabler (>=1.3)", "pytest-flake8", "pytest-mypy (>=0.9.1)"]
[extras]
causalml = ["causalml", "llvmlite", "cython"]
econml = ["econml"]
plotting = ["matplotlib"]
pydot = ["pydot"]
pygraphviz = ["pygraphviz"]
[metadata]
lock-version = "1.1"
python-versions = ">=3.8,<3.10"
content-hash = "12d40b6d9616d209cd632e2315aafc72f78d3e35efdf6e52ca410588465787cc"
[metadata.files]
absl-py = [
{file = "absl-py-1.3.0.tar.gz", hash = "sha256:463c38a08d2e4cef6c498b76ba5bd4858e4c6ef51da1a5a1f27139a022e20248"},
{file = "absl_py-1.3.0-py3-none-any.whl", hash = "sha256:34995df9bd7a09b3b8749e230408f5a2a2dd7a68a0d33c12a3d0cb15a041a507"},
]
alabaster = [
{file = "alabaster-0.7.12-py2.py3-none-any.whl", hash = "sha256:446438bdcca0e05bd45ea2de1668c1d9b032e1a9154c2c259092d77031ddd359"},
{file = "alabaster-0.7.12.tar.gz", hash = "sha256:a661d72d58e6ea8a57f7a86e37d86716863ee5e92788398526d58b26a4e4dc02"},
]
anyio = [
{file = "anyio-3.6.2-py3-none-any.whl", hash = "sha256:fbbe32bd270d2a2ef3ed1c5d45041250284e31fc0a4df4a5a6071842051a51e3"},
{file = "anyio-3.6.2.tar.gz", hash = "sha256:25ea0d673ae30af41a0c442f81cf3b38c7e79fdc7b60335a4c14e05eb0947421"},
]
appnope = [
{file = "appnope-0.1.3-py2.py3-none-any.whl", hash = "sha256:265a455292d0bd8a72453494fa24df5a11eb18373a60c7c0430889f22548605e"},
{file = "appnope-0.1.3.tar.gz", hash = "sha256:02bd91c4de869fbb1e1c50aafc4098827a7a54ab2f39d9dcba6c9547ed920e24"},
]
argon2-cffi = [
{file = "argon2-cffi-21.3.0.tar.gz", hash = "sha256:d384164d944190a7dd7ef22c6aa3ff197da12962bd04b17f64d4e93d934dba5b"},
{file = "argon2_cffi-21.3.0-py3-none-any.whl", hash = "sha256:8c976986f2c5c0e5000919e6de187906cfd81fb1c72bf9d88c01177e77da7f80"},
]
argon2-cffi-bindings = [
{file = "argon2-cffi-bindings-21.2.0.tar.gz", hash = "sha256:bb89ceffa6c791807d1305ceb77dbfacc5aa499891d2c55661c6459651fc39e3"},
{file = "argon2_cffi_bindings-21.2.0-cp36-abi3-macosx_10_9_x86_64.whl", hash = "sha256:ccb949252cb2ab3a08c02024acb77cfb179492d5701c7cbdbfd776124d4d2367"},
{file = "argon2_cffi_bindings-21.2.0-cp36-abi3-manylinux_2_17_aarch64.manylinux2014_aarch64.whl", hash = "sha256:9524464572e12979364b7d600abf96181d3541da11e23ddf565a32e70bd4dc0d"},
{file = "argon2_cffi_bindings-21.2.0-cp36-abi3-manylinux_2_17_x86_64.manylinux2014_x86_64.whl", hash = "sha256:b746dba803a79238e925d9046a63aa26bf86ab2a2fe74ce6b009a1c3f5c8f2ae"},
{file = "argon2_cffi_bindings-21.2.0-cp36-abi3-manylinux_2_5_i686.manylinux1_i686.manylinux_2_17_i686.manylinux2014_i686.whl", hash = "sha256:58ed19212051f49a523abb1dbe954337dc82d947fb6e5a0da60f7c8471a8476c"},
{file = "argon2_cffi_bindings-21.2.0-cp36-abi3-musllinux_1_1_aarch64.whl", hash = "sha256:bd46088725ef7f58b5a1ef7ca06647ebaf0eb4baff7d1d0d177c6cc8744abd86"},
{file = "argon2_cffi_bindings-21.2.0-cp36-abi3-musllinux_1_1_i686.whl", hash = "sha256:8cd69c07dd875537a824deec19f978e0f2078fdda07fd5c42ac29668dda5f40f"},
{file = "argon2_cffi_bindings-21.2.0-cp36-abi3-musllinux_1_1_x86_64.whl", hash = "sha256:f1152ac548bd5b8bcecfb0b0371f082037e47128653df2e8ba6e914d384f3c3e"},
{file = "argon2_cffi_bindings-21.2.0-cp36-abi3-win32.whl", hash = "sha256:603ca0aba86b1349b147cab91ae970c63118a0f30444d4bc80355937c950c082"},
{file = "argon2_cffi_bindings-21.2.0-cp36-abi3-win_amd64.whl", hash = "sha256:b2ef1c30440dbbcba7a5dc3e319408b59676e2e039e2ae11a8775ecf482b192f"},
{file = "argon2_cffi_bindings-21.2.0-cp38-abi3-macosx_10_9_universal2.whl", hash = "sha256:e415e3f62c8d124ee16018e491a009937f8cf7ebf5eb430ffc5de21b900dad93"},
{file = "argon2_cffi_bindings-21.2.0-pp37-pypy37_pp73-macosx_10_9_x86_64.whl", hash = "sha256:3e385d1c39c520c08b53d63300c3ecc28622f076f4c2b0e6d7e796e9f6502194"},
{file = "argon2_cffi_bindings-21.2.0-pp37-pypy37_pp73-manylinux_2_17_aarch64.manylinux2014_aarch64.whl", hash = "sha256:2c3e3cc67fdb7d82c4718f19b4e7a87123caf8a93fde7e23cf66ac0337d3cb3f"},
{file = "argon2_cffi_bindings-21.2.0-pp37-pypy37_pp73-manylinux_2_17_x86_64.manylinux2014_x86_64.whl", hash = "sha256:6a22ad9800121b71099d0fb0a65323810a15f2e292f2ba450810a7316e128ee5"},
{file = "argon2_cffi_bindings-21.2.0-pp37-pypy37_pp73-manylinux_2_5_i686.manylinux1_i686.manylinux_2_17_i686.manylinux2014_i686.whl", hash = "sha256:f9f8b450ed0547e3d473fdc8612083fd08dd2120d6ac8f73828df9b7d45bb351"},
{file = "argon2_cffi_bindings-21.2.0-pp37-pypy37_pp73-win_amd64.whl", hash = "sha256:93f9bf70084f97245ba10ee36575f0c3f1e7d7724d67d8e5b08e61787c320ed7"},
{file = "argon2_cffi_bindings-21.2.0-pp38-pypy38_pp73-macosx_10_9_x86_64.whl", hash = "sha256:3b9ef65804859d335dc6b31582cad2c5166f0c3e7975f324d9ffaa34ee7e6583"},
{file = "argon2_cffi_bindings-21.2.0-pp38-pypy38_pp73-manylinux_2_17_aarch64.manylinux2014_aarch64.whl", hash = "sha256:d4966ef5848d820776f5f562a7d45fdd70c2f330c961d0d745b784034bd9f48d"},
{file = "argon2_cffi_bindings-21.2.0-pp38-pypy38_pp73-manylinux_2_17_x86_64.manylinux2014_x86_64.whl", hash = "sha256:20ef543a89dee4db46a1a6e206cd015360e5a75822f76df533845c3cbaf72670"},
{file = "argon2_cffi_bindings-21.2.0-pp38-pypy38_pp73-manylinux_2_5_i686.manylinux1_i686.manylinux_2_17_i686.manylinux2014_i686.whl", hash = "sha256:ed2937d286e2ad0cc79a7087d3c272832865f779430e0cc2b4f3718d3159b0cb"},
{file = "argon2_cffi_bindings-21.2.0-pp38-pypy38_pp73-win_amd64.whl", hash = "sha256:5e00316dabdaea0b2dd82d141cc66889ced0cdcbfa599e8b471cf22c620c329a"},
]
asttokens = [
{file = "asttokens-2.1.0-py2.py3-none-any.whl", hash = "sha256:1b28ed85e254b724439afc783d4bee767f780b936c3fe8b3275332f42cf5f561"},
{file = "asttokens-2.1.0.tar.gz", hash = "sha256:4aa76401a151c8cc572d906aad7aea2a841780834a19d780f4321c0fe1b54635"},
]
astunparse = [
{file = "astunparse-1.6.3-py2.py3-none-any.whl", hash = "sha256:c2652417f2c8b5bb325c885ae329bdf3f86424075c4fd1a128674bc6fba4b8e8"},
{file = "astunparse-1.6.3.tar.gz", hash = "sha256:5ad93a8456f0d084c3456d059fd9a92cce667963232cbf763eac3bc5b7940872"},
]
attrs = [
{file = "attrs-22.1.0-py2.py3-none-any.whl", hash = "sha256:86efa402f67bf2df34f51a335487cf46b1ec130d02b8d39fd248abfd30da551c"},
{file = "attrs-22.1.0.tar.gz", hash = "sha256:29adc2665447e5191d0e7c568fde78b21f9672d344281d0c6e1ab085429b22b6"},
]
autogluon-common = [
{file = "autogluon.common-0.6.0-py3-none-any.whl", hash = "sha256:8e1a46efaab051069589b875e417df30b38150a908e9aa2ff3ab479747a487ce"},
{file = "autogluon.common-0.6.0.tar.gz", hash = "sha256:d967844c728ad8e9a5c0f9e0deddbe6c4beb0e47cdf829a44a4834b5917798e0"},
]
autogluon-core = [
{file = "autogluon.core-0.6.0-py3-none-any.whl", hash = "sha256:b7efd2dfebfc9a3be0e39d1bf1bd352f45b23cccd503cf32afb9f5f23d58126b"},
{file = "autogluon.core-0.6.0.tar.gz", hash = "sha256:a6b6d57ec38d4193afab6b121cde63a6085446a51f84b9fa358221b7fed71ff4"},
]
autogluon-features = [
{file = "autogluon.features-0.6.0-py3-none-any.whl", hash = "sha256:ecff1a69cc768bc55777b3f7453ee89859352162dd43adda4451faadc9e583bf"},
{file = "autogluon.features-0.6.0.tar.gz", hash = "sha256:dced399ac2652c7c872da5208d0a0383778aeca3706a1b987b9781c9420d80c7"},
]
autogluon-tabular = [
{file = "autogluon.tabular-0.6.0-py3-none-any.whl", hash = "sha256:16404037c475e8746d61a7b1c977d5fd14afd853ebc9777fb0eafc851d37f8ad"},
{file = "autogluon.tabular-0.6.0.tar.gz", hash = "sha256:91892b7c9749942526eabfdd1bbb6d9daae2c24f785570a0552b2c7b9b851ab4"},
]
babel = [
{file = "Babel-2.11.0-py3-none-any.whl", hash = "sha256:1ad3eca1c885218f6dce2ab67291178944f810a10a9b5f3cb8382a5a232b64fe"},
{file = "Babel-2.11.0.tar.gz", hash = "sha256:5ef4b3226b0180dedded4229651c8b0e1a3a6a2837d45a073272f313e4cf97f6"},
]
backcall = [
{file = "backcall-0.2.0-py2.py3-none-any.whl", hash = "sha256:fbbce6a29f263178a1f7915c1940bde0ec2b2a967566fe1c65c1dfb7422bd255"},
{file = "backcall-0.2.0.tar.gz", hash = "sha256:5cbdbf27be5e7cfadb448baf0aa95508f91f2bbc6c6437cd9cd06e2a4c215e1e"},
]
backports-zoneinfo = [
{file = "backports.zoneinfo-0.2.1-cp36-cp36m-macosx_10_14_x86_64.whl", hash = "sha256:da6013fd84a690242c310d77ddb8441a559e9cb3d3d59ebac9aca1a57b2e18bc"},
{file = "backports.zoneinfo-0.2.1-cp36-cp36m-manylinux1_i686.whl", hash = "sha256:89a48c0d158a3cc3f654da4c2de1ceba85263fafb861b98b59040a5086259722"},
{file = "backports.zoneinfo-0.2.1-cp36-cp36m-manylinux1_x86_64.whl", hash = "sha256:1c5742112073a563c81f786e77514969acb58649bcdf6cdf0b4ed31a348d4546"},
{file = "backports.zoneinfo-0.2.1-cp36-cp36m-win32.whl", hash = "sha256:e8236383a20872c0cdf5a62b554b27538db7fa1bbec52429d8d106effbaeca08"},
{file = "backports.zoneinfo-0.2.1-cp36-cp36m-win_amd64.whl", hash = "sha256:8439c030a11780786a2002261569bdf362264f605dfa4d65090b64b05c9f79a7"},
{file = "backports.zoneinfo-0.2.1-cp37-cp37m-macosx_10_14_x86_64.whl", hash = "sha256:f04e857b59d9d1ccc39ce2da1021d196e47234873820cbeaad210724b1ee28ac"},
{file = "backports.zoneinfo-0.2.1-cp37-cp37m-manylinux1_i686.whl", hash = "sha256:17746bd546106fa389c51dbea67c8b7c8f0d14b5526a579ca6ccf5ed72c526cf"},
{file = "backports.zoneinfo-0.2.1-cp37-cp37m-manylinux1_x86_64.whl", hash = "sha256:5c144945a7752ca544b4b78c8c41544cdfaf9786f25fe5ffb10e838e19a27570"},
{file = "backports.zoneinfo-0.2.1-cp37-cp37m-win32.whl", hash = "sha256:e55b384612d93be96506932a786bbcde5a2db7a9e6a4bb4bffe8b733f5b9036b"},
{file = "backports.zoneinfo-0.2.1-cp37-cp37m-win_amd64.whl", hash = "sha256:a76b38c52400b762e48131494ba26be363491ac4f9a04c1b7e92483d169f6582"},
{file = "backports.zoneinfo-0.2.1-cp38-cp38-macosx_10_14_x86_64.whl", hash = "sha256:8961c0f32cd0336fb8e8ead11a1f8cd99ec07145ec2931122faaac1c8f7fd987"},
{file = "backports.zoneinfo-0.2.1-cp38-cp38-manylinux1_i686.whl", hash = "sha256:e81b76cace8eda1fca50e345242ba977f9be6ae3945af8d46326d776b4cf78d1"},
{file = "backports.zoneinfo-0.2.1-cp38-cp38-manylinux1_x86_64.whl", hash = "sha256:7b0a64cda4145548fed9efc10322770f929b944ce5cee6c0dfe0c87bf4c0c8c9"},
{file = "backports.zoneinfo-0.2.1-cp38-cp38-win32.whl", hash = "sha256:1b13e654a55cd45672cb54ed12148cd33628f672548f373963b0bff67b217328"},
{file = "backports.zoneinfo-0.2.1-cp38-cp38-win_amd64.whl", hash = "sha256:4a0f800587060bf8880f954dbef70de6c11bbe59c673c3d818921f042f9954a6"},
{file = "backports.zoneinfo-0.2.1.tar.gz", hash = "sha256:fadbfe37f74051d024037f223b8e001611eac868b5c5b06144ef4d8b799862f2"},
]
beautifulsoup4 = [
{file = "beautifulsoup4-4.11.1-py3-none-any.whl", hash = "sha256:58d5c3d29f5a36ffeb94f02f0d786cd53014cf9b3b3951d42e0080d8a9498d30"},
{file = "beautifulsoup4-4.11.1.tar.gz", hash = "sha256:ad9aa55b65ef2808eb405f46cf74df7fcb7044d5cbc26487f96eb2ef2e436693"},
]
black = [
{file = "black-22.10.0-1fixedarch-cp310-cp310-macosx_11_0_x86_64.whl", hash = "sha256:5cc42ca67989e9c3cf859e84c2bf014f6633db63d1cbdf8fdb666dcd9e77e3fa"},
{file = "black-22.10.0-1fixedarch-cp311-cp311-macosx_11_0_x86_64.whl", hash = "sha256:5d8f74030e67087b219b032aa33a919fae8806d49c867846bfacde57f43972ef"},
{file = "black-22.10.0-1fixedarch-cp37-cp37m-macosx_10_16_x86_64.whl", hash = "sha256:197df8509263b0b8614e1df1756b1dd41be6738eed2ba9e9769f3880c2b9d7b6"},
{file = "black-22.10.0-1fixedarch-cp38-cp38-macosx_10_16_x86_64.whl", hash = "sha256:2644b5d63633702bc2c5f3754b1b475378fbbfb481f62319388235d0cd104c2d"},
{file = "black-22.10.0-1fixedarch-cp39-cp39-macosx_11_0_x86_64.whl", hash = "sha256:e41a86c6c650bcecc6633ee3180d80a025db041a8e2398dcc059b3afa8382cd4"},
{file = "black-22.10.0-cp310-cp310-macosx_11_0_arm64.whl", hash = "sha256:2039230db3c6c639bd84efe3292ec7b06e9214a2992cd9beb293d639c6402edb"},
{file = "black-22.10.0-cp310-cp310-manylinux_2_17_x86_64.manylinux2014_x86_64.whl", hash = "sha256:14ff67aec0a47c424bc99b71005202045dc09270da44a27848d534600ac64fc7"},
{file = "black-22.10.0-cp310-cp310-win_amd64.whl", hash = "sha256:819dc789f4498ecc91438a7de64427c73b45035e2e3680c92e18795a839ebb66"},
{file = "black-22.10.0-cp311-cp311-macosx_11_0_arm64.whl", hash = "sha256:5b9b29da4f564ba8787c119f37d174f2b69cdfdf9015b7d8c5c16121ddc054ae"},
{file = "black-22.10.0-cp311-cp311-manylinux_2_17_x86_64.manylinux2014_x86_64.whl", hash = "sha256:b8b49776299fece66bffaafe357d929ca9451450f5466e997a7285ab0fe28e3b"},
{file = "black-22.10.0-cp311-cp311-win_amd64.whl", hash = "sha256:21199526696b8f09c3997e2b4db8d0b108d801a348414264d2eb8eb2532e540d"},
{file = "black-22.10.0-cp37-cp37m-manylinux_2_17_x86_64.manylinux2014_x86_64.whl", hash = "sha256:1e464456d24e23d11fced2bc8c47ef66d471f845c7b7a42f3bd77bf3d1789650"},
{file = "black-22.10.0-cp37-cp37m-win_amd64.whl", hash = "sha256:9311e99228ae10023300ecac05be5a296f60d2fd10fff31cf5c1fa4ca4b1988d"},
{file = "black-22.10.0-cp38-cp38-macosx_11_0_arm64.whl", hash = "sha256:fba8a281e570adafb79f7755ac8721b6cf1bbf691186a287e990c7929c7692ff"},
{file = "black-22.10.0-cp38-cp38-manylinux_2_17_x86_64.manylinux2014_x86_64.whl", hash = "sha256:915ace4ff03fdfff953962fa672d44be269deb2eaf88499a0f8805221bc68c87"},
{file = "black-22.10.0-cp38-cp38-win_amd64.whl", hash = "sha256:444ebfb4e441254e87bad00c661fe32df9969b2bf224373a448d8aca2132b395"},
{file = "black-22.10.0-cp39-cp39-macosx_11_0_arm64.whl", hash = "sha256:974308c58d057a651d182208a484ce80a26dac0caef2895836a92dd6ebd725e0"},
{file = "black-22.10.0-cp39-cp39-manylinux_2_17_x86_64.manylinux2014_x86_64.whl", hash = "sha256:72ef3925f30e12a184889aac03d77d031056860ccae8a1e519f6cbb742736383"},
{file = "black-22.10.0-cp39-cp39-win_amd64.whl", hash = "sha256:432247333090c8c5366e69627ccb363bc58514ae3e63f7fc75c54b1ea80fa7de"},
{file = "black-22.10.0-py3-none-any.whl", hash = "sha256:c957b2b4ea88587b46cf49d1dc17681c1e672864fd7af32fc1e9664d572b3458"},
{file = "black-22.10.0.tar.gz", hash = "sha256:f513588da599943e0cde4e32cc9879e825d58720d6557062d1098c5ad80080e1"},
]
bleach = [
{file = "bleach-5.0.1-py3-none-any.whl", hash = "sha256:085f7f33c15bd408dd9b17a4ad77c577db66d76203e5984b1bd59baeee948b2a"},
{file = "bleach-5.0.1.tar.gz", hash = "sha256:0d03255c47eb9bd2f26aa9bb7f2107732e7e8fe195ca2f64709fcf3b0a4a085c"},
]
blis = [
{file = "blis-0.7.9-cp310-cp310-macosx_10_9_x86_64.whl", hash = "sha256:b3ea73707a7938304c08363a0b990600e579bfb52dece7c674eafac4bf2df9f7"},
{file = "blis-0.7.9-cp310-cp310-macosx_11_0_arm64.whl", hash = "sha256:e85993364cae82707bfe7e637bee64ec96e232af31301e5c81a351778cb394b9"},
{file = "blis-0.7.9-cp310-cp310-manylinux_2_17_aarch64.manylinux2014_aarch64.whl", hash = "sha256:d205a7e69523e2bacdd67ea906b82b84034067e0de83b33bd83eb96b9e844ae3"},
{file = "blis-0.7.9-cp310-cp310-manylinux_2_17_x86_64.manylinux2014_x86_64.whl", hash = "sha256:b9737035636452fb6d08e7ab79e5a9904be18a0736868a129179cd9f9ab59825"},
{file = "blis-0.7.9-cp310-cp310-win_amd64.whl", hash = "sha256:d3882b4f44a33367812b5e287c0690027092830ffb1cce124b02f64e761819a4"},
{file = "blis-0.7.9-cp311-cp311-macosx_10_9_x86_64.whl", hash = "sha256:3dbb44311029263a6f65ed55a35f970aeb1d20b18bfac4c025de5aadf7889a8c"},
{file = "blis-0.7.9-cp311-cp311-macosx_11_0_arm64.whl", hash = "sha256:6fd5941bd5a21082b19d1dd0f6d62cd35609c25eb769aa3457d9877ef2ce37a9"},
{file = "blis-0.7.9-cp311-cp311-manylinux_2_17_aarch64.manylinux2014_aarch64.whl", hash = "sha256:97ad55e9ef36e4ff06b35802d0cf7bfc56f9697c6bc9427f59c90956bb98377d"},
{file = "blis-0.7.9-cp311-cp311-manylinux_2_17_x86_64.manylinux2014_x86_64.whl", hash = "sha256:f7b6315d7b1ac5546bc0350f5f8d7cc064438d23db19a5c21aaa6ae7d93c1ab5"},
{file = "blis-0.7.9-cp311-cp311-win_amd64.whl", hash = "sha256:5fd46c649acd1920482b4f5556d1c88693cba9bf6a494a020b00f14b42e1132f"},
{file = "blis-0.7.9-cp36-cp36m-manylinux_2_17_aarch64.manylinux2014_aarch64.whl", hash = "sha256:db2959560dcb34e912dad0e0d091f19b05b61363bac15d78307c01334a4e5d9d"},
{file = "blis-0.7.9-cp36-cp36m-manylinux_2_17_x86_64.manylinux2014_x86_64.whl", hash = "sha256:c0521231bc95ab522f280da3bbb096299c910a62cac2376d48d4a1d403c54393"},
{file = "blis-0.7.9-cp36-cp36m-win_amd64.whl", hash = "sha256:d811e88480203d75e6e959f313fdbf3326393b4e2b317067d952347f5c56216e"},
{file = "blis-0.7.9-cp37-cp37m-macosx_10_9_x86_64.whl", hash = "sha256:5cb1db88ab629ccb39eac110b742b98e3511d48ce9caa82ca32609d9169a9c9c"},
{file = "blis-0.7.9-cp37-cp37m-manylinux_2_17_aarch64.manylinux2014_aarch64.whl", hash = "sha256:c399a03de4059bf8e700b921f9ff5d72b2a86673616c40db40cd0592051bdd07"},
{file = "blis-0.7.9-cp37-cp37m-manylinux_2_17_x86_64.manylinux2014_x86_64.whl", hash = "sha256:d4eb70a79562a211bd2e6b6db63f1e2eed32c0ab3e9ef921d86f657ae8375845"},
{file = "blis-0.7.9-cp37-cp37m-win_amd64.whl", hash = "sha256:3e3f95e035c7456a1f5f3b5a3cfe708483a00335a3a8ad2211d57ba4d5f749a5"},
{file = "blis-0.7.9-cp38-cp38-macosx_10_9_x86_64.whl", hash = "sha256:179037cb5e6744c2e93b6b5facc6e4a0073776d514933c3db1e1f064a3253425"},
{file = "blis-0.7.9-cp38-cp38-macosx_11_0_arm64.whl", hash = "sha256:d0e82a6e0337d5231129a4e8b36978fa7b973ad3bb0257fd8e3714a9b35ceffd"},
{file = "blis-0.7.9-cp38-cp38-manylinux_2_17_aarch64.manylinux2014_aarch64.whl", hash = "sha256:6d12475e588a322e66a18346a3faa9eb92523504042e665c193d1b9b0b3f0482"},
{file = "blis-0.7.9-cp38-cp38-manylinux_2_17_x86_64.manylinux2014_x86_64.whl", hash = "sha256:4d5755ef37a573647be62684ca1545698879d07321f1e5b89a4fd669ce355eb0"},
{file = "blis-0.7.9-cp38-cp38-win_amd64.whl", hash = "sha256:b8a1fcd2eb267301ab13e1e4209c165d172cdf9c0c9e08186a9e234bf91daa16"},
{file = "blis-0.7.9-cp39-cp39-macosx_10_9_x86_64.whl", hash = "sha256:8275f6b6eee714b85f00bf882720f508ed6a60974bcde489715d37fd35529da8"},
{file = "blis-0.7.9-cp39-cp39-macosx_11_0_arm64.whl", hash = "sha256:7417667c221e29fe8662c3b2ff9bc201c6a5214bbb5eb6cc290484868802258d"},
{file = "blis-0.7.9-cp39-cp39-manylinux_2_17_aarch64.manylinux2014_aarch64.whl", hash = "sha256:b5f4691bf62013eccc167c38a85c09a0bf0c6e3e80d4c2229cdf2668c1124eb0"},
{file = "blis-0.7.9-cp39-cp39-manylinux_2_17_x86_64.manylinux2014_x86_64.whl", hash = "sha256:f5cec812ee47b29107eb36af9b457be7191163eab65d61775ed63538232c59d5"},
{file = "blis-0.7.9-cp39-cp39-win_amd64.whl", hash = "sha256:d81c3f627d33545fc25c9dcb5fee66c476d89288a27d63ac16ea63453401ffd5"},
{file = "blis-0.7.9.tar.gz", hash = "sha256:29ef4c25007785a90ffc2f0ab3d3bd3b75cd2d7856a9a482b7d0dac8d511a09d"},
]
boto3 = [
{file = "boto3-1.26.15-py3-none-any.whl", hash = "sha256:0e455bc50190cec1af819c9e4a257130661c4f2fad1e211b4dd2cb8f9af89464"},
{file = "boto3-1.26.15.tar.gz", hash = "sha256:e2bfc955fb70053951589d01919c9233c6ef091ae1404bb5249a0f27e05b6b36"},
]
botocore = [
{file = "botocore-1.29.15-py3-none-any.whl", hash = "sha256:02cfa6d060c50853a028b36ada96f4ddb225948bf9e7e0a4dc5b72f9e3878f15"},
{file = "botocore-1.29.15.tar.gz", hash = "sha256:7d4e148870c98bbaab04b0c85b4d3565fc00fec6148cab9da96ab4419dbfb941"},
]
cachetools = [
{file = "cachetools-5.2.0-py3-none-any.whl", hash = "sha256:f9f17d2aec496a9aa6b76f53e3b614c965223c061982d434d160f930c698a9db"},
{file = "cachetools-5.2.0.tar.gz", hash = "sha256:6a94c6402995a99c3970cc7e4884bb60b4a8639938157eeed436098bf9831757"},
]
catalogue = [
{file = "catalogue-2.0.8-py3-none-any.whl", hash = "sha256:2d786e229d8d202b4f8a2a059858e45a2331201d831e39746732daa704b99f69"},
{file = "catalogue-2.0.8.tar.gz", hash = "sha256:b325c77659208bfb6af1b0d93b1a1aa4112e1bb29a4c5ced816758a722f0e388"},
]
catboost = [
{file = "catboost-1.1.1-cp310-none-macosx_10_6_universal2.whl", hash = "sha256:93532f6807228f74db9c8184a0893ab222232d23fc5b3db534e2d8fedbba42cf"},
{file = "catboost-1.1.1-cp310-none-manylinux1_x86_64.whl", hash = "sha256:7c7364d79d5ff9deb56956560ba91a1b62b84204961d540bffd97f7b995e8cba"},
{file = "catboost-1.1.1-cp310-none-win_amd64.whl", hash = "sha256:5ec0c9bd65e53ae6c26d17c06f9c28e4febbd7cbdeb858460eb3d34249a10f30"},
{file = "catboost-1.1.1-cp36-none-macosx_10_6_universal2.whl", hash = "sha256:60acc4448eb45242f4d30aea6ccdf45bfaa8646bbc4ede3200cf25ba0d6bcf3d"},
{file = "catboost-1.1.1-cp36-none-manylinux1_x86_64.whl", hash = "sha256:b7443b40b5ddb141c6d14bff16c13f7cf4852893b57d7eda5dff30fb7517e14d"},
{file = "catboost-1.1.1-cp36-none-win_amd64.whl", hash = "sha256:190828590270e3dea5fb58f0fd13715ee2324f6ee321866592c422a1da141961"},
{file = "catboost-1.1.1-cp37-none-macosx_10_6_universal2.whl", hash = "sha256:a2fe4d08a360c3c3cabfa3a94c586f2261b93a3fff043ae2b43d2d4de121c2ce"},
{file = "catboost-1.1.1-cp37-none-manylinux1_x86_64.whl", hash = "sha256:4e350c40920dbd9644f1c7b88cb74cb8b96f1ecbbd7c12f6223964465d83b968"},
{file = "catboost-1.1.1-cp37-none-win_amd64.whl", hash = "sha256:0033569f2e6314a04a84ec83eecd39f77402426b52571b78991e629d7252c6f7"},
{file = "catboost-1.1.1-cp38-none-macosx_10_6_universal2.whl", hash = "sha256:454aae50922b10172b94971033d4b0607128a2e2ca8a5845cf8879ea28d80942"},
{file = "catboost-1.1.1-cp38-none-manylinux1_x86_64.whl", hash = "sha256:3fd12d9f1f89440292c63b242ccabdab012d313250e2b1e8a779d6618c734b32"},
{file = "catboost-1.1.1-cp38-none-win_amd64.whl", hash = "sha256:840348bf56dd11f6096030208601cbce87f1e6426ef33140fb6cc97bceb5fef3"},
{file = "catboost-1.1.1-cp39-none-macosx_10_6_universal2.whl", hash = "sha256:9e7c47050c8840ccaff4d394907d443bda01280a30778ae9d71939a7528f5ae3"},
{file = "catboost-1.1.1-cp39-none-manylinux1_x86_64.whl", hash = "sha256:a60ae2630f7b3752f262515a51b265521a4993df75dea26fa60777ec6e479395"},
{file = "catboost-1.1.1-cp39-none-win_amd64.whl", hash = "sha256:156264dbe9e841cb0b6333383e928cb8f65df4d00429a9771eb8b06b9bcfa17c"},
]
causal-learn = [
{file = "causal-learn-0.1.3.0.tar.gz", hash = "sha256:8242bced95e11eb4b4ee5f8085c528a25496d20c87bd5f3fcdb17d4678d7de63"},
{file = "causal_learn-0.1.3.0-py3-none-any.whl", hash = "sha256:d7271b0a60e839b725735373c4c5c012446dd216f17cc4b46aed550e08054d72"},
]
causalml = []
certifi = [
{file = "certifi-2022.9.24-py3-none-any.whl", hash = "sha256:90c1a32f1d68f940488354e36370f6cca89f0f106db09518524c88d6ed83f382"},
{file = "certifi-2022.9.24.tar.gz", hash = "sha256:0d9c601124e5a6ba9712dbc60d9c53c21e34f5f641fe83002317394311bdce14"},
]
cffi = [
{file = "cffi-1.15.1-cp27-cp27m-macosx_10_9_x86_64.whl", hash = "sha256:a66d3508133af6e8548451b25058d5812812ec3798c886bf38ed24a98216fab2"},
{file = "cffi-1.15.1-cp27-cp27m-manylinux1_i686.whl", hash = "sha256:470c103ae716238bbe698d67ad020e1db9d9dba34fa5a899b5e21577e6d52ed2"},
{file = "cffi-1.15.1-cp27-cp27m-manylinux1_x86_64.whl", hash = "sha256:9ad5db27f9cabae298d151c85cf2bad1d359a1b9c686a275df03385758e2f914"},
{file = "cffi-1.15.1-cp27-cp27m-win32.whl", hash = "sha256:b3bbeb01c2b273cca1e1e0c5df57f12dce9a4dd331b4fa1635b8bec26350bde3"},
{file = "cffi-1.15.1-cp27-cp27m-win_amd64.whl", hash = "sha256:e00b098126fd45523dd056d2efba6c5a63b71ffe9f2bbe1a4fe1716e1d0c331e"},
{file = "cffi-1.15.1-cp27-cp27mu-manylinux1_i686.whl", hash = "sha256:d61f4695e6c866a23a21acab0509af1cdfd2c013cf256bbf5b6b5e2695827162"},
{file = "cffi-1.15.1-cp27-cp27mu-manylinux1_x86_64.whl", hash = "sha256:ed9cb427ba5504c1dc15ede7d516b84757c3e3d7868ccc85121d9310d27eed0b"},
{file = "cffi-1.15.1-cp310-cp310-macosx_10_9_x86_64.whl", hash = "sha256:39d39875251ca8f612b6f33e6b1195af86d1b3e60086068be9cc053aa4376e21"},
{file = "cffi-1.15.1-cp310-cp310-macosx_11_0_arm64.whl", hash = "sha256:285d29981935eb726a4399badae8f0ffdff4f5050eaa6d0cfc3f64b857b77185"},
{file = "cffi-1.15.1-cp310-cp310-manylinux_2_12_i686.manylinux2010_i686.manylinux_2_17_i686.manylinux2014_i686.whl", hash = "sha256:3eb6971dcff08619f8d91607cfc726518b6fa2a9eba42856be181c6d0d9515fd"},
{file = "cffi-1.15.1-cp310-cp310-manylinux_2_17_aarch64.manylinux2014_aarch64.whl", hash = "sha256:21157295583fe8943475029ed5abdcf71eb3911894724e360acff1d61c1d54bc"},
{file = "cffi-1.15.1-cp310-cp310-manylinux_2_17_ppc64le.manylinux2014_ppc64le.whl", hash = "sha256:5635bd9cb9731e6d4a1132a498dd34f764034a8ce60cef4f5319c0541159392f"},
{file = "cffi-1.15.1-cp310-cp310-manylinux_2_17_s390x.manylinux2014_s390x.whl", hash = "sha256:2012c72d854c2d03e45d06ae57f40d78e5770d252f195b93f581acf3ba44496e"},
{file = "cffi-1.15.1-cp310-cp310-manylinux_2_17_x86_64.manylinux2014_x86_64.whl", hash = "sha256:dd86c085fae2efd48ac91dd7ccffcfc0571387fe1193d33b6394db7ef31fe2a4"},
{file = "cffi-1.15.1-cp310-cp310-musllinux_1_1_i686.whl", hash = "sha256:fa6693661a4c91757f4412306191b6dc88c1703f780c8234035eac011922bc01"},
{file = "cffi-1.15.1-cp310-cp310-musllinux_1_1_x86_64.whl", hash = "sha256:59c0b02d0a6c384d453fece7566d1c7e6b7bae4fc5874ef2ef46d56776d61c9e"},
{file = "cffi-1.15.1-cp310-cp310-win32.whl", hash = "sha256:cba9d6b9a7d64d4bd46167096fc9d2f835e25d7e4c121fb2ddfc6528fb0413b2"},
{file = "cffi-1.15.1-cp310-cp310-win_amd64.whl", hash = "sha256:ce4bcc037df4fc5e3d184794f27bdaab018943698f4ca31630bc7f84a7b69c6d"},
{file = "cffi-1.15.1-cp311-cp311-macosx_10_9_x86_64.whl", hash = "sha256:3d08afd128ddaa624a48cf2b859afef385b720bb4b43df214f85616922e6a5ac"},
{file = "cffi-1.15.1-cp311-cp311-macosx_11_0_arm64.whl", hash = "sha256:3799aecf2e17cf585d977b780ce79ff0dc9b78d799fc694221ce814c2c19db83"},
{file = "cffi-1.15.1-cp311-cp311-manylinux_2_12_i686.manylinux2010_i686.manylinux_2_17_i686.manylinux2014_i686.whl", hash = "sha256:a591fe9e525846e4d154205572a029f653ada1a78b93697f3b5a8f1f2bc055b9"},
{file = "cffi-1.15.1-cp311-cp311-manylinux_2_17_aarch64.manylinux2014_aarch64.whl", hash = "sha256:3548db281cd7d2561c9ad9984681c95f7b0e38881201e157833a2342c30d5e8c"},
{file = "cffi-1.15.1-cp311-cp311-manylinux_2_17_ppc64le.manylinux2014_ppc64le.whl", hash = "sha256:91fc98adde3d7881af9b59ed0294046f3806221863722ba7d8d120c575314325"},
{file = "cffi-1.15.1-cp311-cp311-manylinux_2_17_x86_64.manylinux2014_x86_64.whl", hash = "sha256:94411f22c3985acaec6f83c6df553f2dbe17b698cc7f8ae751ff2237d96b9e3c"},
{file = "cffi-1.15.1-cp311-cp311-musllinux_1_1_i686.whl", hash = "sha256:03425bdae262c76aad70202debd780501fabeaca237cdfddc008987c0e0f59ef"},
{file = "cffi-1.15.1-cp311-cp311-musllinux_1_1_x86_64.whl", hash = "sha256:cc4d65aeeaa04136a12677d3dd0b1c0c94dc43abac5860ab33cceb42b801c1e8"},
{file = "cffi-1.15.1-cp311-cp311-win32.whl", hash = "sha256:a0f100c8912c114ff53e1202d0078b425bee3649ae34d7b070e9697f93c5d52d"},
{file = "cffi-1.15.1-cp311-cp311-win_amd64.whl", hash = "sha256:04ed324bda3cda42b9b695d51bb7d54b680b9719cfab04227cdd1e04e5de3104"},
{file = "cffi-1.15.1-cp36-cp36m-macosx_10_9_x86_64.whl", hash = "sha256:50a74364d85fd319352182ef59c5c790484a336f6db772c1a9231f1c3ed0cbd7"},
{file = "cffi-1.15.1-cp36-cp36m-manylinux_2_17_aarch64.manylinux2014_aarch64.whl", hash = "sha256:e263d77ee3dd201c3a142934a086a4450861778baaeeb45db4591ef65550b0a6"},
{file = "cffi-1.15.1-cp36-cp36m-manylinux_2_17_ppc64le.manylinux2014_ppc64le.whl", hash = "sha256:cec7d9412a9102bdc577382c3929b337320c4c4c4849f2c5cdd14d7368c5562d"},
{file = "cffi-1.15.1-cp36-cp36m-manylinux_2_17_s390x.manylinux2014_s390x.whl", hash = "sha256:4289fc34b2f5316fbb762d75362931e351941fa95fa18789191b33fc4cf9504a"},
{file = "cffi-1.15.1-cp36-cp36m-manylinux_2_5_i686.manylinux1_i686.whl", hash = "sha256:173379135477dc8cac4bc58f45db08ab45d228b3363adb7af79436135d028405"},
{file = "cffi-1.15.1-cp36-cp36m-manylinux_2_5_x86_64.manylinux1_x86_64.whl", hash = "sha256:6975a3fac6bc83c4a65c9f9fcab9e47019a11d3d2cf7f3c0d03431bf145a941e"},
{file = "cffi-1.15.1-cp36-cp36m-win32.whl", hash = "sha256:2470043b93ff09bf8fb1d46d1cb756ce6132c54826661a32d4e4d132e1977adf"},
{file = "cffi-1.15.1-cp36-cp36m-win_amd64.whl", hash = "sha256:30d78fbc8ebf9c92c9b7823ee18eb92f2e6ef79b45ac84db507f52fbe3ec4497"},
{file = "cffi-1.15.1-cp37-cp37m-macosx_10_9_x86_64.whl", hash = "sha256:198caafb44239b60e252492445da556afafc7d1e3ab7a1fb3f0584ef6d742375"},
{file = "cffi-1.15.1-cp37-cp37m-manylinux_2_12_i686.manylinux2010_i686.manylinux_2_17_i686.manylinux2014_i686.whl", hash = "sha256:5ef34d190326c3b1f822a5b7a45f6c4535e2f47ed06fec77d3d799c450b2651e"},
{file = "cffi-1.15.1-cp37-cp37m-manylinux_2_17_aarch64.manylinux2014_aarch64.whl", hash = "sha256:8102eaf27e1e448db915d08afa8b41d6c7ca7a04b7d73af6514df10a3e74bd82"},
{file = "cffi-1.15.1-cp37-cp37m-manylinux_2_17_ppc64le.manylinux2014_ppc64le.whl", hash = "sha256:5df2768244d19ab7f60546d0c7c63ce1581f7af8b5de3eb3004b9b6fc8a9f84b"},
{file = "cffi-1.15.1-cp37-cp37m-manylinux_2_17_s390x.manylinux2014_s390x.whl", hash = "sha256:a8c4917bd7ad33e8eb21e9a5bbba979b49d9a97acb3a803092cbc1133e20343c"},
{file = "cffi-1.15.1-cp37-cp37m-manylinux_2_17_x86_64.manylinux2014_x86_64.whl", hash = "sha256:0e2642fe3142e4cc4af0799748233ad6da94c62a8bec3a6648bf8ee68b1c7426"},
{file = "cffi-1.15.1-cp37-cp37m-win32.whl", hash = "sha256:e229a521186c75c8ad9490854fd8bbdd9a0c9aa3a524326b55be83b54d4e0ad9"},
{file = "cffi-1.15.1-cp37-cp37m-win_amd64.whl", hash = "sha256:a0b71b1b8fbf2b96e41c4d990244165e2c9be83d54962a9a1d118fd8657d2045"},
{file = "cffi-1.15.1-cp38-cp38-macosx_10_9_x86_64.whl", hash = "sha256:320dab6e7cb2eacdf0e658569d2575c4dad258c0fcc794f46215e1e39f90f2c3"},
{file = "cffi-1.15.1-cp38-cp38-manylinux_2_12_i686.manylinux2010_i686.manylinux_2_17_i686.manylinux2014_i686.whl", hash = "sha256:1e74c6b51a9ed6589199c787bf5f9875612ca4a8a0785fb2d4a84429badaf22a"},
{file = "cffi-1.15.1-cp38-cp38-manylinux_2_17_aarch64.manylinux2014_aarch64.whl", hash = "sha256:a5c84c68147988265e60416b57fc83425a78058853509c1b0629c180094904a5"},
{file = "cffi-1.15.1-cp38-cp38-manylinux_2_17_ppc64le.manylinux2014_ppc64le.whl", hash = "sha256:3b926aa83d1edb5aa5b427b4053dc420ec295a08e40911296b9eb1b6170f6cca"},
{file = "cffi-1.15.1-cp38-cp38-manylinux_2_17_s390x.manylinux2014_s390x.whl", hash = "sha256:87c450779d0914f2861b8526e035c5e6da0a3199d8f1add1a665e1cbc6fc6d02"},
{file = "cffi-1.15.1-cp38-cp38-manylinux_2_17_x86_64.manylinux2014_x86_64.whl", hash = "sha256:4f2c9f67e9821cad2e5f480bc8d83b8742896f1242dba247911072d4fa94c192"},
{file = "cffi-1.15.1-cp38-cp38-win32.whl", hash = "sha256:8b7ee99e510d7b66cdb6c593f21c043c248537a32e0bedf02e01e9553a172314"},
{file = "cffi-1.15.1-cp38-cp38-win_amd64.whl", hash = "sha256:00a9ed42e88df81ffae7a8ab6d9356b371399b91dbdf0c3cb1e84c03a13aceb5"},
{file = "cffi-1.15.1-cp39-cp39-macosx_10_9_x86_64.whl", hash = "sha256:54a2db7b78338edd780e7ef7f9f6c442500fb0d41a5a4ea24fff1c929d5af585"},
{file = "cffi-1.15.1-cp39-cp39-macosx_11_0_arm64.whl", hash = "sha256:fcd131dd944808b5bdb38e6f5b53013c5aa4f334c5cad0c72742f6eba4b73db0"},
{file = "cffi-1.15.1-cp39-cp39-manylinux_2_12_i686.manylinux2010_i686.manylinux_2_17_i686.manylinux2014_i686.whl", hash = "sha256:7473e861101c9e72452f9bf8acb984947aa1661a7704553a9f6e4baa5ba64415"},
{file = "cffi-1.15.1-cp39-cp39-manylinux_2_17_aarch64.manylinux2014_aarch64.whl", hash = "sha256:6c9a799e985904922a4d207a94eae35c78ebae90e128f0c4e521ce339396be9d"},
{file = "cffi-1.15.1-cp39-cp39-manylinux_2_17_ppc64le.manylinux2014_ppc64le.whl", hash = "sha256:3bcde07039e586f91b45c88f8583ea7cf7a0770df3a1649627bf598332cb6984"},
{file = "cffi-1.15.1-cp39-cp39-manylinux_2_17_s390x.manylinux2014_s390x.whl", hash = "sha256:33ab79603146aace82c2427da5ca6e58f2b3f2fb5da893ceac0c42218a40be35"},
{file = "cffi-1.15.1-cp39-cp39-manylinux_2_17_x86_64.manylinux2014_x86_64.whl", hash = "sha256:5d598b938678ebf3c67377cdd45e09d431369c3b1a5b331058c338e201f12b27"},
{file = "cffi-1.15.1-cp39-cp39-musllinux_1_1_i686.whl", hash = "sha256:db0fbb9c62743ce59a9ff687eb5f4afbe77e5e8403d6697f7446e5f609976f76"},
{file = "cffi-1.15.1-cp39-cp39-musllinux_1_1_x86_64.whl", hash = "sha256:98d85c6a2bef81588d9227dde12db8a7f47f639f4a17c9ae08e773aa9c697bf3"},
{file = "cffi-1.15.1-cp39-cp39-win32.whl", hash = "sha256:40f4774f5a9d4f5e344f31a32b5096977b5d48560c5592e2f3d2c4374bd543ee"},
{file = "cffi-1.15.1-cp39-cp39-win_amd64.whl", hash = "sha256:70df4e3b545a17496c9b3f41f5115e69a4f2e77e94e1d2a8e1070bc0c38c8a3c"},
{file = "cffi-1.15.1.tar.gz", hash = "sha256:d400bfb9a37b1351253cb402671cea7e89bdecc294e8016a707f6d1d8ac934f9"},
]
charset-normalizer = [
{file = "charset-normalizer-2.1.1.tar.gz", hash = "sha256:5a3d016c7c547f69d6f81fb0db9449ce888b418b5b9952cc5e6e66843e9dd845"},
{file = "charset_normalizer-2.1.1-py3-none-any.whl", hash = "sha256:83e9a75d1911279afd89352c68b45348559d1fc0506b054b346651b5e7fee29f"},
]
click = [
{file = "click-8.1.3-py3-none-any.whl", hash = "sha256:bb4d8133cb15a609f44e8213d9b391b0809795062913b383c62be0ee95b1db48"},
{file = "click-8.1.3.tar.gz", hash = "sha256:7682dc8afb30297001674575ea00d1814d808d6a36af415a82bd481d37ba7b8e"},
]
cloudpickle = [
{file = "cloudpickle-2.2.0-py3-none-any.whl", hash = "sha256:7428798d5926d8fcbfd092d18d01a2a03daf8237d8fcdc8095d256b8490796f0"},
{file = "cloudpickle-2.2.0.tar.gz", hash = "sha256:3f4219469c55453cfe4737e564b67c2a149109dabf7f242478948b895f61106f"},
]
colorama = [
{file = "colorama-0.4.6-py2.py3-none-any.whl", hash = "sha256:4f1d9991f5acc0ca119f9d443620b77f9d6b33703e51011c16baf57afb285fc6"},
{file = "colorama-0.4.6.tar.gz", hash = "sha256:08695f5cb7ed6e0531a20572697297273c47b8cae5a63ffc6d6ed5c201be6e44"},
]
confection = [
{file = "confection-0.0.3-py3-none-any.whl", hash = "sha256:51af839c1240430421da2b248541ebc95f9d0ee385bcafa768b8acdbd2b0111d"},
{file = "confection-0.0.3.tar.gz", hash = "sha256:4fec47190057c43c9acbecb8b1b87a9bf31c469caa0d6888a5b9384432fdba5a"},
]
contourpy = [
{file = "contourpy-1.0.6-cp310-cp310-macosx_10_9_universal2.whl", hash = "sha256:613c665529899b5d9fade7e5d1760111a0b011231277a0d36c49f0d3d6914bd6"},
{file = "contourpy-1.0.6-cp310-cp310-macosx_10_9_x86_64.whl", hash = "sha256:78ced51807ccb2f45d4ea73aca339756d75d021069604c2fccd05390dc3c28eb"},
{file = "contourpy-1.0.6-cp310-cp310-macosx_11_0_arm64.whl", hash = "sha256:b3b1bd7577c530eaf9d2bc52d1a93fef50ac516a8b1062c3d1b9bcec9ebe329b"},
{file = "contourpy-1.0.6-cp310-cp310-manylinux_2_17_aarch64.manylinux2014_aarch64.whl", hash = "sha256:d8834c14b8c3dd849005e06703469db9bf96ba2d66a3f88ecc539c9a8982e0ee"},
{file = "contourpy-1.0.6-cp310-cp310-manylinux_2_17_i686.manylinux2014_i686.whl", hash = "sha256:f4052a8a4926d4468416fc7d4b2a7b2a3e35f25b39f4061a7e2a3a2748c4fc48"},
{file = "contourpy-1.0.6-cp310-cp310-manylinux_2_17_x86_64.manylinux2014_x86_64.whl", hash = "sha256:1c0e1308307a75e07d1f1b5f0f56b5af84538a5e9027109a7bcf6cb47c434e72"},
{file = "contourpy-1.0.6-cp310-cp310-musllinux_1_1_aarch64.whl", hash = "sha256:9fc4e7973ed0e1fe689435842a6e6b330eb7ccc696080dda9a97b1a1b78e41db"},
{file = "contourpy-1.0.6-cp310-cp310-musllinux_1_1_i686.whl", hash = "sha256:08e8d09d96219ace6cb596506fb9b64ea5f270b2fb9121158b976d88871fcfd1"},
{file = "contourpy-1.0.6-cp310-cp310-musllinux_1_1_x86_64.whl", hash = "sha256:f33da6b5d19ad1bb5e7ad38bb8ba5c426d2178928bc2b2c44e8823ea0ecb6ff3"},
{file = "contourpy-1.0.6-cp310-cp310-win32.whl", hash = "sha256:12a7dc8439544ed05c6553bf026d5e8fa7fad48d63958a95d61698df0e00092b"},
{file = "contourpy-1.0.6-cp310-cp310-win_amd64.whl", hash = "sha256:eadad75bf91897f922e0fb3dca1b322a58b1726a953f98c2e5f0606bd8408621"},
{file = "contourpy-1.0.6-cp311-cp311-macosx_10_9_universal2.whl", hash = "sha256:913bac9d064cff033cf3719e855d4f1db9f1c179e0ecf3ba9fdef21c21c6a16a"},
{file = "contourpy-1.0.6-cp311-cp311-macosx_10_9_x86_64.whl", hash = "sha256:46deb310a276cc5c1fd27958e358cce68b1e8a515fa5a574c670a504c3a3fe30"},
{file = "contourpy-1.0.6-cp311-cp311-macosx_11_0_arm64.whl", hash = "sha256:b64f747e92af7da3b85631a55d68c45a2d728b4036b03cdaba4bd94bcc85bd6f"},
{file = "contourpy-1.0.6-cp311-cp311-manylinux_2_17_aarch64.manylinux2014_aarch64.whl", hash = "sha256:50627bf76abb6ba291ad08db583161939c2c5fab38c38181b7833423ab9c7de3"},
{file = "contourpy-1.0.6-cp311-cp311-manylinux_2_17_i686.manylinux2014_i686.whl", hash = "sha256:358f6364e4873f4d73360b35da30066f40387dd3c427a3e5432c6b28dd24a8fa"},
{file = "contourpy-1.0.6-cp311-cp311-manylinux_2_17_x86_64.manylinux2014_x86_64.whl", hash = "sha256:c78bfbc1a7bff053baf7e508449d2765964d67735c909b583204e3240a2aca45"},
{file = "contourpy-1.0.6-cp311-cp311-musllinux_1_1_aarch64.whl", hash = "sha256:e43255a83835a129ef98f75d13d643844d8c646b258bebd11e4a0975203e018f"},
{file = "contourpy-1.0.6-cp311-cp311-musllinux_1_1_i686.whl", hash = "sha256:375d81366afd547b8558c4720337218345148bc2fcffa3a9870cab82b29667f2"},
{file = "contourpy-1.0.6-cp311-cp311-musllinux_1_1_x86_64.whl", hash = "sha256:b98c820608e2dca6442e786817f646d11057c09a23b68d2b3737e6dcb6e4a49b"},
{file = "contourpy-1.0.6-cp311-cp311-win32.whl", hash = "sha256:0e4854cc02006ad6684ce092bdadab6f0912d131f91c2450ce6dbdea78ee3c0b"},
{file = "contourpy-1.0.6-cp311-cp311-win_amd64.whl", hash = "sha256:d2eff2af97ea0b61381828b1ad6cd249bbd41d280e53aea5cccd7b2b31b8225c"},
{file = "contourpy-1.0.6-cp37-cp37m-macosx_10_9_x86_64.whl", hash = "sha256:5b117d29433fc8393b18a696d794961464e37afb34a6eeb8b2c37b5f4128a83e"},
{file = "contourpy-1.0.6-cp37-cp37m-manylinux_2_17_aarch64.manylinux2014_aarch64.whl", hash = "sha256:341330ed19074f956cb20877ad8d2ae50e458884bfa6a6df3ae28487cc76c768"},
{file = "contourpy-1.0.6-cp37-cp37m-manylinux_2_17_i686.manylinux2014_i686.whl", hash = "sha256:371f6570a81dfdddbb837ba432293a63b4babb942a9eb7aaa699997adfb53278"},
{file = "contourpy-1.0.6-cp37-cp37m-manylinux_2_17_x86_64.manylinux2014_x86_64.whl", hash = "sha256:9447c45df407d3ecb717d837af3b70cfef432138530712263730783b3d016512"},
{file = "contourpy-1.0.6-cp37-cp37m-musllinux_1_1_aarch64.whl", hash = "sha256:730c27978a0003b47b359935478b7d63fd8386dbb2dcd36c1e8de88cbfc1e9de"},
{file = "contourpy-1.0.6-cp37-cp37m-musllinux_1_1_i686.whl", hash = "sha256:da1ef35fd79be2926ba80fbb36327463e3656c02526e9b5b4c2b366588b74d9a"},
{file = "contourpy-1.0.6-cp37-cp37m-musllinux_1_1_x86_64.whl", hash = "sha256:cd2bc0c8f2e8de7dd89a7f1c10b8844e291bca17d359373203ef2e6100819edd"},
{file = "contourpy-1.0.6-cp37-cp37m-win32.whl", hash = "sha256:3a1917d3941dd58732c449c810fa7ce46cc305ce9325a11261d740118b85e6f3"},
{file = "contourpy-1.0.6-cp37-cp37m-win_amd64.whl", hash = "sha256:06ca79e1efbbe2df795822df2fa173d1a2b38b6e0f047a0ec7903fbca1d1847e"},
{file = "contourpy-1.0.6-cp38-cp38-macosx_10_9_universal2.whl", hash = "sha256:e626cefff8491bce356221c22af5a3ea528b0b41fbabc719c00ae233819ea0bf"},
{file = "contourpy-1.0.6-cp38-cp38-macosx_10_9_x86_64.whl", hash = "sha256:dbe6fe7a1166b1ddd7b6d887ea6fa8389d3f28b5ed3f73a8f40ece1fc5a3d340"},
{file = "contourpy-1.0.6-cp38-cp38-macosx_11_0_arm64.whl", hash = "sha256:e13b31d1b4b68db60b3b29f8e337908f328c7f05b9add4b1b5c74e0691180109"},
{file = "contourpy-1.0.6-cp38-cp38-manylinux_2_17_aarch64.manylinux2014_aarch64.whl", hash = "sha256:a79d239fc22c3b8d9d3de492aa0c245533f4f4c7608e5749af866949c0f1b1b9"},
{file = "contourpy-1.0.6-cp38-cp38-manylinux_2_17_i686.manylinux2014_i686.whl", hash = "sha256:9e8e686a6db92a46111a1ee0ee6f7fbfae4048f0019de207149f43ac1812cf95"},
{file = "contourpy-1.0.6-cp38-cp38-manylinux_2_17_x86_64.manylinux2014_x86_64.whl", hash = "sha256:acd2bd02f1a7adff3a1f33e431eb96ab6d7987b039d2946a9b39fe6fb16a1036"},
{file = "contourpy-1.0.6-cp38-cp38-musllinux_1_1_aarch64.whl", hash = "sha256:03d1b9c6b44a9e30d554654c72be89af94fab7510b4b9f62356c64c81cec8b7d"},
{file = "contourpy-1.0.6-cp38-cp38-musllinux_1_1_i686.whl", hash = "sha256:b48d94386f1994db7c70c76b5808c12e23ed7a4ee13693c2fc5ab109d60243c0"},
{file = "contourpy-1.0.6-cp38-cp38-musllinux_1_1_x86_64.whl", hash = "sha256:208bc904889c910d95aafcf7be9e677726df9ef71e216780170dbb7e37d118fa"},
{file = "contourpy-1.0.6-cp38-cp38-win32.whl", hash = "sha256:444fb776f58f4906d8d354eb6f6ce59d0a60f7b6a720da6c1ccb839db7c80eb9"},
{file = "contourpy-1.0.6-cp38-cp38-win_amd64.whl", hash = "sha256:9bc407a6af672da20da74823443707e38ece8b93a04009dca25856c2d9adadb1"},
{file = "contourpy-1.0.6-cp39-cp39-macosx_10_9_universal2.whl", hash = "sha256:aa4674cf3fa2bd9c322982644967f01eed0c91bb890f624e0e0daf7a5c3383e9"},
{file = "contourpy-1.0.6-cp39-cp39-macosx_10_9_x86_64.whl", hash = "sha256:6f56515e7c6fae4529b731f6c117752247bef9cdad2b12fc5ddf8ca6a50965a5"},
{file = "contourpy-1.0.6-cp39-cp39-macosx_11_0_arm64.whl", hash = "sha256:344cb3badf6fc7316ad51835f56ac387bdf86c8e1b670904f18f437d70da4183"},
{file = "contourpy-1.0.6-cp39-cp39-manylinux_2_17_aarch64.manylinux2014_aarch64.whl", hash = "sha256:0b1e66346acfb17694d46175a0cea7d9036f12ed0c31dfe86f0f405eedde2bdd"},
{file = "contourpy-1.0.6-cp39-cp39-manylinux_2_17_i686.manylinux2014_i686.whl", hash = "sha256:8468b40528fa1e15181cccec4198623b55dcd58306f8815a793803f51f6c474a"},
{file = "contourpy-1.0.6-cp39-cp39-manylinux_2_17_x86_64.manylinux2014_x86_64.whl", hash = "sha256:1dedf4c64185a216c35eb488e6f433297c660321275734401760dafaeb0ad5c2"},
{file = "contourpy-1.0.6-cp39-cp39-musllinux_1_1_aarch64.whl", hash = "sha256:494efed2c761f0f37262815f9e3c4bb9917c5c69806abdee1d1cb6611a7174a0"},
{file = "contourpy-1.0.6-cp39-cp39-musllinux_1_1_i686.whl", hash = "sha256:75a2e638042118118ab39d337da4c7908c1af74a8464cad59f19fbc5bbafec9b"},
{file = "contourpy-1.0.6-cp39-cp39-musllinux_1_1_x86_64.whl", hash = "sha256:a628bba09ba72e472bf7b31018b6281fd4cc903f0888049a3724afba13b6e0b8"},
{file = "contourpy-1.0.6-cp39-cp39-win32.whl", hash = "sha256:e1739496c2f0108013629aa095cc32a8c6363444361960c07493818d0dea2da4"},
{file = "contourpy-1.0.6-cp39-cp39-win_amd64.whl", hash = "sha256:a457ee72d9032e86730f62c5eeddf402e732fdf5ca8b13b41772aa8ae13a4563"},
{file = "contourpy-1.0.6-pp37-pypy37_pp73-macosx_10_9_x86_64.whl", hash = "sha256:d912f0154a20a80ea449daada904a7eb6941c83281a9fab95de50529bfc3a1da"},
{file = "contourpy-1.0.6-pp37-pypy37_pp73-manylinux_2_17_aarch64.manylinux2014_aarch64.whl", hash = "sha256:4081918147fc4c29fad328d5066cfc751da100a1098398742f9f364be63803fc"},
{file = "contourpy-1.0.6-pp37-pypy37_pp73-manylinux_2_17_i686.manylinux2014_i686.whl", hash = "sha256:0537cc1195245bbe24f2913d1f9211b8f04eb203de9044630abd3664c6cc339c"},
{file = "contourpy-1.0.6-pp37-pypy37_pp73-manylinux_2_17_x86_64.manylinux2014_x86_64.whl", hash = "sha256:dcd556c8fc37a342dd636d7eef150b1399f823a4462f8c968e11e1ebeabee769"},
{file = "contourpy-1.0.6-pp37-pypy37_pp73-win_amd64.whl", hash = "sha256:f6ca38dd8d988eca8f07305125dec6f54ac1c518f1aaddcc14d08c01aebb6efc"},
{file = "contourpy-1.0.6-pp38-pypy38_pp73-macosx_10_9_x86_64.whl", hash = "sha256:c1baa49ab9fedbf19d40d93163b7d3e735d9cd8d5efe4cce9907902a6dad391f"},
{file = "contourpy-1.0.6-pp38-pypy38_pp73-manylinux_2_17_aarch64.manylinux2014_aarch64.whl", hash = "sha256:211dfe2bd43bf5791d23afbe23a7952e8ac8b67591d24be3638cabb648b3a6eb"},
{file = "contourpy-1.0.6-pp38-pypy38_pp73-manylinux_2_17_i686.manylinux2014_i686.whl", hash = "sha256:c38c6536c2d71ca2f7e418acaf5bca30a3af7f2a2fa106083c7d738337848dbe"},
{file = "contourpy-1.0.6-pp38-pypy38_pp73-manylinux_2_17_x86_64.manylinux2014_x86_64.whl", hash = "sha256:1b1ee48a130da4dd0eb8055bbab34abf3f6262957832fd575e0cab4979a15a41"},
{file = "contourpy-1.0.6-pp38-pypy38_pp73-win_amd64.whl", hash = "sha256:5641927cc5ae66155d0c80195dc35726eae060e7defc18b7ab27600f39dd1fe7"},
{file = "contourpy-1.0.6-pp39-pypy39_pp73-macosx_10_9_x86_64.whl", hash = "sha256:7ee394502026d68652c2824348a40bf50f31351a668977b51437131a90d777ea"},
{file = "contourpy-1.0.6-pp39-pypy39_pp73-manylinux_2_17_aarch64.manylinux2014_aarch64.whl", hash = "sha256:0b97454ed5b1368b66ed414c754cba15b9750ce69938fc6153679787402e4cdf"},
{file = "contourpy-1.0.6-pp39-pypy39_pp73-manylinux_2_17_i686.manylinux2014_i686.whl", hash = "sha256:0236875c5a0784215b49d00ebbe80c5b6b5d5244b3655a36dda88105334dea17"},
{file = "contourpy-1.0.6-pp39-pypy39_pp73-manylinux_2_17_x86_64.manylinux2014_x86_64.whl", hash = "sha256:84c593aeff7a0171f639da92cb86d24954bbb61f8a1b530f74eb750a14685832"},
{file = "contourpy-1.0.6-pp39-pypy39_pp73-win_amd64.whl", hash = "sha256:9b0e7fe7f949fb719b206548e5cde2518ffb29936afa4303d8a1c4db43dcb675"},
{file = "contourpy-1.0.6.tar.gz", hash = "sha256:6e459ebb8bb5ee4c22c19cc000174f8059981971a33ce11e17dddf6aca97a142"},
]
coverage = [
{file = "coverage-6.5.0-cp310-cp310-macosx_10_9_x86_64.whl", hash = "sha256:ef8674b0ee8cc11e2d574e3e2998aea5df5ab242e012286824ea3c6970580e53"},
{file = "coverage-6.5.0-cp310-cp310-macosx_11_0_arm64.whl", hash = "sha256:784f53ebc9f3fd0e2a3f6a78b2be1bd1f5575d7863e10c6e12504f240fd06660"},
{file = "coverage-6.5.0-cp310-cp310-manylinux_2_17_aarch64.manylinux2014_aarch64.whl", hash = "sha256:b4a5be1748d538a710f87542f22c2cad22f80545a847ad91ce45e77417293eb4"},
{file = "coverage-6.5.0-cp310-cp310-manylinux_2_5_i686.manylinux1_i686.manylinux_2_17_i686.manylinux2014_i686.whl", hash = "sha256:83516205e254a0cb77d2d7bb3632ee019d93d9f4005de31dca0a8c3667d5bc04"},
{file = "coverage-6.5.0-cp310-cp310-manylinux_2_5_x86_64.manylinux1_x86_64.manylinux_2_17_x86_64.manylinux2014_x86_64.whl", hash = "sha256:af4fffaffc4067232253715065e30c5a7ec6faac36f8fc8d6f64263b15f74db0"},
{file = "coverage-6.5.0-cp310-cp310-musllinux_1_1_aarch64.whl", hash = "sha256:97117225cdd992a9c2a5515db1f66b59db634f59d0679ca1fa3fe8da32749cae"},
{file = "coverage-6.5.0-cp310-cp310-musllinux_1_1_i686.whl", hash = "sha256:a1170fa54185845505fbfa672f1c1ab175446c887cce8212c44149581cf2d466"},
{file = "coverage-6.5.0-cp310-cp310-musllinux_1_1_x86_64.whl", hash = "sha256:11b990d520ea75e7ee8dcab5bc908072aaada194a794db9f6d7d5cfd19661e5a"},
{file = "coverage-6.5.0-cp310-cp310-win32.whl", hash = "sha256:5dbec3b9095749390c09ab7c89d314727f18800060d8d24e87f01fb9cfb40b32"},
{file = "coverage-6.5.0-cp310-cp310-win_amd64.whl", hash = "sha256:59f53f1dc5b656cafb1badd0feb428c1e7bc19b867479ff72f7a9dd9b479f10e"},
{file = "coverage-6.5.0-cp311-cp311-macosx_10_9_x86_64.whl", hash = "sha256:4a5375e28c5191ac38cca59b38edd33ef4cc914732c916f2929029b4bfb50795"},
{file = "coverage-6.5.0-cp311-cp311-manylinux_2_17_aarch64.manylinux2014_aarch64.whl", hash = "sha256:c4ed2820d919351f4167e52425e096af41bfabacb1857186c1ea32ff9983ed75"},
{file = "coverage-6.5.0-cp311-cp311-manylinux_2_5_i686.manylinux1_i686.manylinux_2_17_i686.manylinux2014_i686.whl", hash = "sha256:33a7da4376d5977fbf0a8ed91c4dffaaa8dbf0ddbf4c8eea500a2486d8bc4d7b"},
{file = "coverage-6.5.0-cp311-cp311-manylinux_2_5_x86_64.manylinux1_x86_64.manylinux_2_17_x86_64.manylinux2014_x86_64.whl", hash = "sha256:a8fb6cf131ac4070c9c5a3e21de0f7dc5a0fbe8bc77c9456ced896c12fcdad91"},
{file = "coverage-6.5.0-cp311-cp311-musllinux_1_1_aarch64.whl", hash = "sha256:a6b7d95969b8845250586f269e81e5dfdd8ff828ddeb8567a4a2eaa7313460c4"},
{file = "coverage-6.5.0-cp311-cp311-musllinux_1_1_i686.whl", hash = "sha256:1ef221513e6f68b69ee9e159506d583d31aa3567e0ae84eaad9d6ec1107dddaa"},
{file = "coverage-6.5.0-cp311-cp311-musllinux_1_1_x86_64.whl", hash = "sha256:cca4435eebea7962a52bdb216dec27215d0df64cf27fc1dd538415f5d2b9da6b"},
{file = "coverage-6.5.0-cp311-cp311-win32.whl", hash = "sha256:98e8a10b7a314f454d9eff4216a9a94d143a7ee65018dd12442e898ee2310578"},
{file = "coverage-6.5.0-cp311-cp311-win_amd64.whl", hash = "sha256:bc8ef5e043a2af066fa8cbfc6e708d58017024dc4345a1f9757b329a249f041b"},
{file = "coverage-6.5.0-cp37-cp37m-macosx_10_9_x86_64.whl", hash = "sha256:4433b90fae13f86fafff0b326453dd42fc9a639a0d9e4eec4d366436d1a41b6d"},
{file = "coverage-6.5.0-cp37-cp37m-manylinux_2_17_aarch64.manylinux2014_aarch64.whl", hash = "sha256:f4f05d88d9a80ad3cac6244d36dd89a3c00abc16371769f1340101d3cb899fc3"},
{file = "coverage-6.5.0-cp37-cp37m-manylinux_2_5_i686.manylinux1_i686.manylinux_2_17_i686.manylinux2014_i686.whl", hash = "sha256:94e2565443291bd778421856bc975d351738963071e9b8839ca1fc08b42d4bef"},
{file = "coverage-6.5.0-cp37-cp37m-manylinux_2_5_x86_64.manylinux1_x86_64.manylinux_2_17_x86_64.manylinux2014_x86_64.whl", hash = "sha256:027018943386e7b942fa832372ebc120155fd970837489896099f5cfa2890f79"},
{file = "coverage-6.5.0-cp37-cp37m-musllinux_1_1_aarch64.whl", hash = "sha256:255758a1e3b61db372ec2736c8e2a1fdfaf563977eedbdf131de003ca5779b7d"},
{file = "coverage-6.5.0-cp37-cp37m-musllinux_1_1_i686.whl", hash = "sha256:851cf4ff24062c6aec510a454b2584f6e998cada52d4cb58c5e233d07172e50c"},
{file = "coverage-6.5.0-cp37-cp37m-musllinux_1_1_x86_64.whl", hash = "sha256:12adf310e4aafddc58afdb04d686795f33f4d7a6fa67a7a9d4ce7d6ae24d949f"},
{file = "coverage-6.5.0-cp37-cp37m-win32.whl", hash = "sha256:b5604380f3415ba69de87a289a2b56687faa4fe04dbee0754bfcae433489316b"},
{file = "coverage-6.5.0-cp37-cp37m-win_amd64.whl", hash = "sha256:4a8dbc1f0fbb2ae3de73eb0bdbb914180c7abfbf258e90b311dcd4f585d44bd2"},
{file = "coverage-6.5.0-cp38-cp38-macosx_10_9_x86_64.whl", hash = "sha256:d900bb429fdfd7f511f868cedd03a6bbb142f3f9118c09b99ef8dc9bf9643c3c"},
{file = "coverage-6.5.0-cp38-cp38-macosx_11_0_arm64.whl", hash = "sha256:2198ea6fc548de52adc826f62cb18554caedfb1d26548c1b7c88d8f7faa8f6ba"},
{file = "coverage-6.5.0-cp38-cp38-manylinux_2_17_aarch64.manylinux2014_aarch64.whl", hash = "sha256:6c4459b3de97b75e3bd6b7d4b7f0db13f17f504f3d13e2a7c623786289dd670e"},
{file = "coverage-6.5.0-cp38-cp38-manylinux_2_5_i686.manylinux1_i686.manylinux_2_17_i686.manylinux2014_i686.whl", hash = "sha256:20c8ac5386253717e5ccc827caad43ed66fea0efe255727b1053a8154d952398"},
{file = "coverage-6.5.0-cp38-cp38-manylinux_2_5_x86_64.manylinux1_x86_64.manylinux_2_17_x86_64.manylinux2014_x86_64.whl", hash = "sha256:6b07130585d54fe8dff3d97b93b0e20290de974dc8177c320aeaf23459219c0b"},
{file = "coverage-6.5.0-cp38-cp38-musllinux_1_1_aarch64.whl", hash = "sha256:dbdb91cd8c048c2b09eb17713b0c12a54fbd587d79adcebad543bc0cd9a3410b"},
{file = "coverage-6.5.0-cp38-cp38-musllinux_1_1_i686.whl", hash = "sha256:de3001a203182842a4630e7b8d1a2c7c07ec1b45d3084a83d5d227a3806f530f"},
{file = "coverage-6.5.0-cp38-cp38-musllinux_1_1_x86_64.whl", hash = "sha256:e07f4a4a9b41583d6eabec04f8b68076ab3cd44c20bd29332c6572dda36f372e"},
{file = "coverage-6.5.0-cp38-cp38-win32.whl", hash = "sha256:6d4817234349a80dbf03640cec6109cd90cba068330703fa65ddf56b60223a6d"},
{file = "coverage-6.5.0-cp38-cp38-win_amd64.whl", hash = "sha256:7ccf362abd726b0410bf8911c31fbf97f09f8f1061f8c1cf03dfc4b6372848f6"},
{file = "coverage-6.5.0-cp39-cp39-macosx_10_9_x86_64.whl", hash = "sha256:633713d70ad6bfc49b34ead4060531658dc6dfc9b3eb7d8a716d5873377ab745"},
{file = "coverage-6.5.0-cp39-cp39-macosx_11_0_arm64.whl", hash = "sha256:95203854f974e07af96358c0b261f1048d8e1083f2de9b1c565e1be4a3a48cfc"},
{file = "coverage-6.5.0-cp39-cp39-manylinux_2_17_aarch64.manylinux2014_aarch64.whl", hash = "sha256:b9023e237f4c02ff739581ef35969c3739445fb059b060ca51771e69101efffe"},
{file = "coverage-6.5.0-cp39-cp39-manylinux_2_5_i686.manylinux1_i686.manylinux_2_17_i686.manylinux2014_i686.whl", hash = "sha256:265de0fa6778d07de30bcf4d9dc471c3dc4314a23a3c6603d356a3c9abc2dfcf"},
{file = "coverage-6.5.0-cp39-cp39-manylinux_2_5_x86_64.manylinux1_x86_64.manylinux_2_17_x86_64.manylinux2014_x86_64.whl", hash = "sha256:8f830ed581b45b82451a40faabb89c84e1a998124ee4212d440e9c6cf70083e5"},
{file = "coverage-6.5.0-cp39-cp39-musllinux_1_1_aarch64.whl", hash = "sha256:7b6be138d61e458e18d8e6ddcddd36dd96215edfe5f1168de0b1b32635839b62"},
{file = "coverage-6.5.0-cp39-cp39-musllinux_1_1_i686.whl", hash = "sha256:42eafe6778551cf006a7c43153af1211c3aaab658d4d66fa5fcc021613d02518"},
{file = "coverage-6.5.0-cp39-cp39-musllinux_1_1_x86_64.whl", hash = "sha256:723e8130d4ecc8f56e9a611e73b31219595baa3bb252d539206f7bbbab6ffc1f"},
{file = "coverage-6.5.0-cp39-cp39-win32.whl", hash = "sha256:d9ecf0829c6a62b9b573c7bb6d4dcd6ba8b6f80be9ba4fc7ed50bf4ac9aecd72"},
{file = "coverage-6.5.0-cp39-cp39-win_amd64.whl", hash = "sha256:fc2af30ed0d5ae0b1abdb4ebdce598eafd5b35397d4d75deb341a614d333d987"},
{file = "coverage-6.5.0-pp36.pp37.pp38-none-any.whl", hash = "sha256:1431986dac3923c5945271f169f59c45b8802a114c8f548d611f2015133df77a"},
{file = "coverage-6.5.0.tar.gz", hash = "sha256:f642e90754ee3e06b0e7e51bce3379590e76b7f76b708e1a71ff043f87025c84"},
]
cycler = [
{file = "cycler-0.11.0-py3-none-any.whl", hash = "sha256:3a27e95f763a428a739d2add979fa7494c912a32c17c4c38c4d5f082cad165a3"},
{file = "cycler-0.11.0.tar.gz", hash = "sha256:9c87405839a19696e837b3b818fed3f5f69f16f1eec1a1ad77e043dcea9c772f"},
]
cymem = [
{file = "cymem-2.0.7-cp310-cp310-macosx_10_9_x86_64.whl", hash = "sha256:4981fc9182cc1fe54bfedf5f73bfec3ce0c27582d9be71e130c46e35958beef0"},
{file = "cymem-2.0.7-cp310-cp310-macosx_11_0_arm64.whl", hash = "sha256:42aedfd2e77aa0518a24a2a60a2147308903abc8b13c84504af58539c39e52a3"},
{file = "cymem-2.0.7-cp310-cp310-manylinux_2_17_aarch64.manylinux2014_aarch64.whl", hash = "sha256:c183257dc5ab237b664f64156c743e788f562417c74ea58c5a3939fe2d48d6f6"},
{file = "cymem-2.0.7-cp310-cp310-manylinux_2_17_x86_64.manylinux2014_x86_64.whl", hash = "sha256:d18250f97eeb13af2e8b19d3cefe4bf743b963d93320b0a2e729771410fd8cf4"},
{file = "cymem-2.0.7-cp310-cp310-win_amd64.whl", hash = "sha256:864701e626b65eb2256060564ed8eb034ebb0a8f14ce3fbef337e88352cdee9f"},
{file = "cymem-2.0.7-cp311-cp311-macosx_10_9_x86_64.whl", hash = "sha256:314273be1f143da674388e0a125d409e2721fbf669c380ae27c5cbae4011e26d"},
{file = "cymem-2.0.7-cp311-cp311-macosx_11_0_arm64.whl", hash = "sha256:df543a36e7000808fe0a03d92fd6cd8bf23fa8737c3f7ae791a5386de797bf79"},
{file = "cymem-2.0.7-cp311-cp311-manylinux_2_17_aarch64.manylinux2014_aarch64.whl", hash = "sha256:9e5e1b7de7952d89508d07601b9e95b2244e70d7ef60fbc161b3ad68f22815f8"},
{file = "cymem-2.0.7-cp311-cp311-manylinux_2_17_x86_64.manylinux2014_x86_64.whl", hash = "sha256:2aa33f1dbd7ceda37970e174c38fd1cf106817a261aa58521ba9918156868231"},
{file = "cymem-2.0.7-cp311-cp311-win_amd64.whl", hash = "sha256:10178e402bb512b2686b8c2f41f930111e597237ca8f85cb583ea93822ef798d"},
{file = "cymem-2.0.7-cp36-cp36m-manylinux_2_17_aarch64.manylinux2014_aarch64.whl", hash = "sha256:a2971b7da5aa2e65d8fbbe9f2acfc19ff8e73f1896e3d6e1223cc9bf275a0207"},
{file = "cymem-2.0.7-cp36-cp36m-manylinux_2_17_x86_64.manylinux2014_x86_64.whl", hash = "sha256:85359ab7b490e6c897c04863704481600bd45188a0e2ca7375eb5db193e13cb7"},
{file = "cymem-2.0.7-cp36-cp36m-win_amd64.whl", hash = "sha256:0ac45088abffbae9b7db2c597f098de51b7e3c1023cb314e55c0f7f08440cf66"},
{file = "cymem-2.0.7-cp37-cp37m-macosx_10_9_x86_64.whl", hash = "sha256:26e5d5c6958855d2fe3d5629afe85a6aae5531abaa76f4bc21b9abf9caaccdfe"},
{file = "cymem-2.0.7-cp37-cp37m-manylinux_2_17_aarch64.manylinux2014_aarch64.whl", hash = "sha256:011039e12d3144ac1bf3a6b38f5722b817f0d6487c8184e88c891b360b69f533"},
{file = "cymem-2.0.7-cp37-cp37m-manylinux_2_17_x86_64.manylinux2014_x86_64.whl", hash = "sha256:6f9e63e5ad4ed6ffa21fd8db1c03b05be3fea2f32e32fdace67a840ea2702c3d"},
{file = "cymem-2.0.7-cp37-cp37m-win_amd64.whl", hash = "sha256:5ea6b027fdad0c3e9a4f1b94d28d213be08c466a60c72c633eb9db76cf30e53a"},
{file = "cymem-2.0.7-cp38-cp38-macosx_10_9_x86_64.whl", hash = "sha256:4302df5793a320c4f4a263c7785d2fa7f29928d72cb83ebeb34d64a610f8d819"},
{file = "cymem-2.0.7-cp38-cp38-macosx_11_0_arm64.whl", hash = "sha256:24b779046484674c054af1e779c68cb224dc9694200ac13b22129d7fb7e99e6d"},
{file = "cymem-2.0.7-cp38-cp38-manylinux_2_17_aarch64.manylinux2014_aarch64.whl", hash = "sha256:6c50794c612801ed8b599cd4af1ed810a0d39011711c8224f93e1153c00e08d1"},
{file = "cymem-2.0.7-cp38-cp38-manylinux_2_17_x86_64.manylinux2014_x86_64.whl", hash = "sha256:a9525ad563b36dc1e30889d0087a0daa67dd7bb7d3e1530c4b61cd65cc756a5b"},
{file = "cymem-2.0.7-cp38-cp38-win_amd64.whl", hash = "sha256:48b98da6b906fe976865263e27734ebc64f972a978a999d447ad6c83334e3f90"},
{file = "cymem-2.0.7-cp39-cp39-macosx_10_9_x86_64.whl", hash = "sha256:e156788d32ad8f7141330913c5d5d2aa67182fca8f15ae22645e9f379abe8a4c"},
{file = "cymem-2.0.7-cp39-cp39-macosx_11_0_arm64.whl", hash = "sha256:3da89464021fe669932fce1578343fcaf701e47e3206f50d320f4f21e6683ca5"},
{file = "cymem-2.0.7-cp39-cp39-manylinux_2_17_aarch64.manylinux2014_aarch64.whl", hash = "sha256:4f359cab9f16e25b3098f816c40acbf1697a3b614a8d02c56e6ebcb9c89a06b3"},
{file = "cymem-2.0.7-cp39-cp39-manylinux_2_17_x86_64.manylinux2014_x86_64.whl", hash = "sha256:f165d7bce55d6730930e29d8294569788aa127f1be8d1642d9550ed96223cb37"},
{file = "cymem-2.0.7-cp39-cp39-win_amd64.whl", hash = "sha256:59a09cf0e71b1b88bfa0de544b801585d81d06ea123c1725e7c5da05b7ca0d20"},
{file = "cymem-2.0.7.tar.gz", hash = "sha256:e6034badb5dd4e10344211c81f16505a55553a7164adc314c75bd80cf07e57a8"},
]
cython = [
{file = "Cython-0.29.32-cp27-cp27m-manylinux_2_5_i686.manylinux1_i686.whl", hash = "sha256:39afb4679b8c6bf7ccb15b24025568f4f9b4d7f9bf3cbd981021f542acecd75b"},
{file = "Cython-0.29.32-cp27-cp27m-manylinux_2_5_x86_64.manylinux1_x86_64.whl", hash = "sha256:dbee03b8d42dca924e6aa057b836a064c769ddfd2a4c2919e65da2c8a362d528"},
{file = "Cython-0.29.32-cp27-cp27mu-manylinux_2_5_i686.manylinux1_i686.whl", hash = "sha256:5ba622326f2862f9c1f99ca8d47ade49871241920a352c917e16861e25b0e5c3"},
{file = "Cython-0.29.32-cp27-cp27mu-manylinux_2_5_x86_64.manylinux1_x86_64.whl", hash = "sha256:e6ffa08aa1c111a1ebcbd1cf4afaaec120bc0bbdec3f2545f8bb7d3e8e77a1cd"},
{file = "Cython-0.29.32-cp310-cp310-manylinux_2_17_aarch64.manylinux2014_aarch64.manylinux_2_24_aarch64.whl", hash = "sha256:97335b2cd4acebf30d14e2855d882de83ad838491a09be2011745579ac975833"},
{file = "Cython-0.29.32-cp310-cp310-manylinux_2_17_x86_64.manylinux2014_x86_64.manylinux_2_24_x86_64.whl", hash = "sha256:06be83490c906b6429b4389e13487a26254ccaad2eef6f3d4ee21d8d3a4aaa2b"},
{file = "Cython-0.29.32-cp310-cp310-manylinux_2_5_i686.manylinux1_i686.manylinux_2_24_i686.whl", hash = "sha256:eefd2b9a5f38ded8d859fe96cc28d7d06e098dc3f677e7adbafda4dcdd4a461c"},
{file = "Cython-0.29.32-cp310-cp310-musllinux_1_1_x86_64.whl", hash = "sha256:5514f3b4122cb22317122a48e175a7194e18e1803ca555c4c959d7dfe68eaf98"},
{file = "Cython-0.29.32-cp311-cp311-manylinux_2_17_aarch64.manylinux2014_aarch64.manylinux_2_24_aarch64.whl", hash = "sha256:656dc5ff1d269de4d11ee8542f2ffd15ab466c447c1f10e5b8aba6f561967276"},
{file = "Cython-0.29.32-cp311-cp311-manylinux_2_17_x86_64.manylinux2014_x86_64.manylinux_2_24_x86_64.whl", hash = "sha256:cdf10af3e2e3279dc09fdc5f95deaa624850a53913f30350ceee824dc14fc1a6"},
{file = "Cython-0.29.32-cp311-cp311-manylinux_2_5_i686.manylinux1_i686.manylinux_2_24_i686.whl", hash = "sha256:3875c2b2ea752816a4d7ae59d45bb546e7c4c79093c83e3ba7f4d9051dd02928"},
{file = "Cython-0.29.32-cp311-cp311-musllinux_1_1_x86_64.whl", hash = "sha256:79e3bab19cf1b021b613567c22eb18b76c0c547b9bc3903881a07bfd9e7e64cf"},
{file = "Cython-0.29.32-cp35-cp35m-manylinux_2_5_i686.manylinux1_i686.whl", hash = "sha256:b0595aee62809ba353cebc5c7978e0e443760c3e882e2c7672c73ffe46383673"},
{file = "Cython-0.29.32-cp35-cp35m-manylinux_2_5_x86_64.manylinux1_x86_64.whl", hash = "sha256:0ea8267fc373a2c5064ad77d8ff7bf0ea8b88f7407098ff51829381f8ec1d5d9"},
{file = "Cython-0.29.32-cp36-cp36m-manylinux_2_17_aarch64.manylinux2014_aarch64.manylinux_2_24_aarch64.whl", hash = "sha256:c8e8025f496b5acb6ba95da2fb3e9dacffc97d9a92711aacfdd42f9c5927e094"},
{file = "Cython-0.29.32-cp36-cp36m-manylinux_2_17_x86_64.manylinux2014_x86_64.manylinux_2_24_x86_64.whl", hash = "sha256:afbce249133a830f121b917f8c9404a44f2950e0e4f5d1e68f043da4c2e9f457"},
{file = "Cython-0.29.32-cp36-cp36m-manylinux_2_5_i686.manylinux1_i686.manylinux_2_24_i686.whl", hash = "sha256:513e9707407608ac0d306c8b09d55a28be23ea4152cbd356ceaec0f32ef08d65"},
{file = "Cython-0.29.32-cp36-cp36m-manylinux_2_5_i686.manylinux1_i686.whl", hash = "sha256:e83228e0994497900af954adcac27f64c9a57cd70a9ec768ab0cb2c01fd15cf1"},
{file = "Cython-0.29.32-cp36-cp36m-manylinux_2_5_x86_64.manylinux1_x86_64.whl", hash = "sha256:ea1dcc07bfb37367b639415333cfbfe4a93c3be340edf1db10964bc27d42ed64"},
{file = "Cython-0.29.32-cp36-cp36m-musllinux_1_1_x86_64.whl", hash = "sha256:8669cadeb26d9a58a5e6b8ce34d2c8986cc3b5c0bfa77eda6ceb471596cb2ec3"},
{file = "Cython-0.29.32-cp37-cp37m-manylinux_2_17_aarch64.manylinux2014_aarch64.manylinux_2_24_aarch64.whl", hash = "sha256:ed087eeb88a8cf96c60fb76c5c3b5fb87188adee5e179f89ec9ad9a43c0c54b3"},
{file = "Cython-0.29.32-cp37-cp37m-manylinux_2_17_x86_64.manylinux2014_x86_64.manylinux_2_24_x86_64.whl", hash = "sha256:3f85eb2343d20d91a4ea9cf14e5748092b376a64b7e07fc224e85b2753e9070b"},
{file = "Cython-0.29.32-cp37-cp37m-manylinux_2_5_i686.manylinux1_i686.manylinux_2_24_i686.whl", hash = "sha256:63b79d9e1f7c4d1f498ab1322156a0d7dc1b6004bf981a8abda3f66800e140cd"},
{file = "Cython-0.29.32-cp37-cp37m-manylinux_2_5_i686.manylinux1_i686.whl", hash = "sha256:e1958e0227a4a6a2c06fd6e35b7469de50adf174102454db397cec6e1403cce3"},
{file = "Cython-0.29.32-cp37-cp37m-manylinux_2_5_x86_64.manylinux1_x86_64.whl", hash = "sha256:856d2fec682b3f31583719cb6925c6cdbb9aa30f03122bcc45c65c8b6f515754"},
{file = "Cython-0.29.32-cp37-cp37m-musllinux_1_1_x86_64.whl", hash = "sha256:479690d2892ca56d34812fe6ab8f58e4b2e0129140f3d94518f15993c40553da"},
{file = "Cython-0.29.32-cp38-cp38-manylinux_2_17_aarch64.manylinux2014_aarch64.manylinux_2_24_aarch64.whl", hash = "sha256:67fdd2f652f8d4840042e2d2d91e15636ba2bcdcd92e7e5ffbc68e6ef633a754"},
{file = "Cython-0.29.32-cp38-cp38-manylinux_2_17_x86_64.manylinux2014_x86_64.manylinux_2_24_x86_64.whl", hash = "sha256:4a4b03ab483271f69221c3210f7cde0dcc456749ecf8243b95bc7a701e5677e0"},
{file = "Cython-0.29.32-cp38-cp38-manylinux_2_5_i686.manylinux1_i686.manylinux_2_24_i686.whl", hash = "sha256:40eff7aa26e91cf108fd740ffd4daf49f39b2fdffadabc7292b4b7dc5df879f0"},
{file = "Cython-0.29.32-cp38-cp38-manylinux_2_5_i686.manylinux1_i686.whl", hash = "sha256:0bbc27abdf6aebfa1bce34cd92bd403070356f28b0ecb3198ff8a182791d58b9"},
{file = "Cython-0.29.32-cp38-cp38-manylinux_2_5_x86_64.manylinux1_x86_64.whl", hash = "sha256:cddc47ec746a08603037731f5d10aebf770ced08666100bd2cdcaf06a85d4d1b"},
{file = "Cython-0.29.32-cp38-cp38-musllinux_1_1_x86_64.whl", hash = "sha256:eca3065a1279456e81c615211d025ea11bfe4e19f0c5650b859868ca04b3fcbd"},
{file = "Cython-0.29.32-cp39-cp39-manylinux_2_17_aarch64.manylinux2014_aarch64.manylinux_2_24_aarch64.whl", hash = "sha256:d968ffc403d92addf20b68924d95428d523436adfd25cf505d427ed7ba3bee8b"},
{file = "Cython-0.29.32-cp39-cp39-manylinux_2_17_x86_64.manylinux2014_x86_64.manylinux_2_24_x86_64.whl", hash = "sha256:f3fd44cc362eee8ae569025f070d56208908916794b6ab21e139cea56470a2b3"},
{file = "Cython-0.29.32-cp39-cp39-manylinux_2_5_i686.manylinux1_i686.manylinux_2_24_i686.whl", hash = "sha256:b6da3063c5c476f5311fd76854abae6c315f1513ef7d7904deed2e774623bbb9"},
{file = "Cython-0.29.32-cp39-cp39-manylinux_2_5_i686.manylinux1_i686.whl", hash = "sha256:061e25151c38f2361bc790d3bcf7f9d9828a0b6a4d5afa56fbed3bd33fb2373a"},
{file = "Cython-0.29.32-cp39-cp39-manylinux_2_5_x86_64.manylinux1_x86_64.whl", hash = "sha256:f9944013588a3543fca795fffb0a070a31a243aa4f2d212f118aa95e69485831"},
{file = "Cython-0.29.32-cp39-cp39-musllinux_1_1_x86_64.whl", hash = "sha256:07d173d3289415bb496e72cb0ddd609961be08fe2968c39094d5712ffb78672b"},
{file = "Cython-0.29.32-py2.py3-none-any.whl", hash = "sha256:eeb475eb6f0ccf6c039035eb4f0f928eb53ead88777e0a760eccb140ad90930b"},
{file = "Cython-0.29.32.tar.gz", hash = "sha256:8733cf4758b79304f2a4e39ebfac5e92341bce47bcceb26c1254398b2f8c1af7"},
]
dask = [
{file = "dask-2021.11.2-py3-none-any.whl", hash = "sha256:2b0ad7beba8950add4fdc7c5cb94fa9444915ddb00c711d5743e2c4bb0a95ef5"},
{file = "dask-2021.11.2.tar.gz", hash = "sha256:e12bfe272928d62fa99623d98d0e0b0c045b33a47509ef31a22175aa5fd10917"},
]
debugpy = [
{file = "debugpy-1.6.3-cp310-cp310-macosx_10_15_x86_64.whl", hash = "sha256:c4b2bd5c245eeb49824bf7e539f95fb17f9a756186e51c3e513e32999d8846f3"},
{file = "debugpy-1.6.3-cp310-cp310-manylinux_2_5_x86_64.manylinux1_x86_64.manylinux_2_12_x86_64.manylinux2010_x86_64.whl", hash = "sha256:b8deaeb779699350deeed835322730a3efec170b88927debc9ba07a1a38e2585"},
{file = "debugpy-1.6.3-cp310-cp310-win32.whl", hash = "sha256:fc233a0160f3b117b20216f1169e7211b83235e3cd6749bcdd8dbb72177030c7"},
{file = "debugpy-1.6.3-cp310-cp310-win_amd64.whl", hash = "sha256:dda8652520eae3945833e061cbe2993ad94a0b545aebd62e4e6b80ee616c76b2"},
{file = "debugpy-1.6.3-cp37-cp37m-macosx_10_15_x86_64.whl", hash = "sha256:d5c814596a170a0a58fa6fad74947e30bfd7e192a5d2d7bd6a12156c2899e13a"},
{file = "debugpy-1.6.3-cp37-cp37m-manylinux_2_5_x86_64.manylinux1_x86_64.manylinux_2_12_x86_64.manylinux2010_x86_64.whl", hash = "sha256:c4cd6f37e3c168080d61d698390dfe2cd9e74ebf80b448069822a15dadcda57d"},
{file = "debugpy-1.6.3-cp37-cp37m-win32.whl", hash = "sha256:3c9f985944a30cfc9ae4306ac6a27b9c31dba72ca943214dad4a0ab3840f6161"},
{file = "debugpy-1.6.3-cp37-cp37m-win_amd64.whl", hash = "sha256:5ad571a36cec137ae6ed951d0ff75b5e092e9af6683da084753231150cbc5b25"},
{file = "debugpy-1.6.3-cp38-cp38-macosx_10_15_x86_64.whl", hash = "sha256:adcfea5ea06d55d505375995e150c06445e2b20cd12885bcae566148c076636b"},
{file = "debugpy-1.6.3-cp38-cp38-manylinux_2_5_x86_64.manylinux1_x86_64.manylinux_2_12_x86_64.manylinux2010_x86_64.whl", hash = "sha256:daadab4403427abd090eccb38d8901afd8b393e01fd243048fab3f1d7132abb4"},
{file = "debugpy-1.6.3-cp38-cp38-win32.whl", hash = "sha256:6efc30325b68e451118b795eff6fe8488253ca3958251d5158106d9c87581bc6"},
{file = "debugpy-1.6.3-cp38-cp38-win_amd64.whl", hash = "sha256:86d784b72c5411c833af1cd45b83d80c252b77c3bfdb43db17c441d772f4c734"},
{file = "debugpy-1.6.3-cp39-cp39-macosx_10_15_x86_64.whl", hash = "sha256:4e255982552b0edfe3a6264438dbd62d404baa6556a81a88f9420d3ed79b06ae"},
{file = "debugpy-1.6.3-cp39-cp39-manylinux_2_5_x86_64.manylinux1_x86_64.manylinux_2_12_x86_64.manylinux2010_x86_64.whl", hash = "sha256:cca23cb6161ac89698d629d892520327dd1be9321c0960e610bbcb807232b45d"},
{file = "debugpy-1.6.3-cp39-cp39-win32.whl", hash = "sha256:7c302095a81be0d5c19f6529b600bac971440db3e226dce85347cc27e6a61908"},
{file = "debugpy-1.6.3-cp39-cp39-win_amd64.whl", hash = "sha256:34d2cdd3a7c87302ba5322b86e79c32c2115be396f3f09ca13306d8a04fe0f16"},
{file = "debugpy-1.6.3-py2.py3-none-any.whl", hash = "sha256:84c39940a0cac410bf6aa4db00ba174f973eef521fbe9dd058e26bcabad89c4f"},
{file = "debugpy-1.6.3.zip", hash = "sha256:e8922090514a890eec99cfb991bab872dd2e353ebb793164d5f01c362b9a40bf"},
]
decorator = [
{file = "decorator-5.1.1-py3-none-any.whl", hash = "sha256:b8c3f85900b9dc423225913c5aace94729fe1fa9763b38939a95226f02d37186"},
{file = "decorator-5.1.1.tar.gz", hash = "sha256:637996211036b6385ef91435e4fae22989472f9d571faba8927ba8253acbc330"},
]
defusedxml = [
{file = "defusedxml-0.7.1-py2.py3-none-any.whl", hash = "sha256:a352e7e428770286cc899e2542b6cdaedb2b4953ff269a210103ec58f6198a61"},
{file = "defusedxml-0.7.1.tar.gz", hash = "sha256:1bb3032db185915b62d7c6209c5a8792be6a32ab2fedacc84e01b52c51aa3e69"},
]
dill = [
{file = "dill-0.3.6-py3-none-any.whl", hash = "sha256:a07ffd2351b8c678dfc4a856a3005f8067aea51d6ba6c700796a4d9e280f39f0"},
{file = "dill-0.3.6.tar.gz", hash = "sha256:e5db55f3687856d8fbdab002ed78544e1c4559a130302693d839dfe8f93f2373"},
]
distributed = [
{file = "distributed-2021.11.2-py3-none-any.whl", hash = "sha256:af1f7b98d85d43886fefe2354379c848c7a5aa6ae4d2313a7aca9ab9081a7e56"},
{file = "distributed-2021.11.2.tar.gz", hash = "sha256:f86a01a2e1e678865d2e42300c47552b5012cd81a2d354e47827a1fd074cc302"},
]
docutils = [
{file = "docutils-0.17.1-py2.py3-none-any.whl", hash = "sha256:cf316c8370a737a022b72b56874f6602acf974a37a9fba42ec2876387549fc61"},
{file = "docutils-0.17.1.tar.gz", hash = "sha256:686577d2e4c32380bb50cbb22f575ed742d58168cee37e99117a854bcd88f125"},
]
econml = [
{file = "econml-0.14.0-cp310-cp310-macosx_10_9_x86_64.whl", hash = "sha256:9c2fc1d67d98774d00bfe8e76d76af3de5ebc8d5f7a440da3c667d5ad244f971"},
{file = "econml-0.14.0-cp310-cp310-manylinux_2_17_x86_64.manylinux2014_x86_64.whl", hash = "sha256:9b02aca395eaa905bff080c3efd4f74bf281f168c674d74bdf899fc9467311e1"},
{file = "econml-0.14.0-cp310-cp310-win_amd64.whl", hash = "sha256:d2cca82486826c2b13f47ed0140f3fc85d8016fb43153a1b2de025345b190c6c"},
{file = "econml-0.14.0-cp37-cp37m-macosx_10_9_x86_64.whl", hash = "sha256:ce98668ba93d33856b60750e23312b9a6d503af6890b5588ab708db9de05ff49"},
{file = "econml-0.14.0-cp37-cp37m-manylinux_2_17_x86_64.manylinux2014_x86_64.whl", hash = "sha256:3b6b9938a2f48bf3055ae0ea47ac5a627d1c180f22e62531943961427769b0ef"},
{file = "econml-0.14.0-cp37-cp37m-win_amd64.whl", hash = "sha256:3c780c49a97bd688475f8863a7bdad2cbe19fdb4417708e3874f2bdae102852f"},
{file = "econml-0.14.0-cp38-cp38-macosx_10_9_x86_64.whl", hash = "sha256:7f2930eb311ea576195718b97fde83b4f2d29f3f3dc57ce0834b52fee410bfac"},
{file = "econml-0.14.0-cp38-cp38-manylinux_2_17_x86_64.manylinux2014_x86_64.whl", hash = "sha256:36be15da6ff3b295bc5cf80b95753e19bc123a1103bf53a2a0744daef49273e5"},
{file = "econml-0.14.0-cp38-cp38-win_amd64.whl", hash = "sha256:f71ab406f37b64dead4bee1b4c4869204faf9c55887dc8117bd9396d977edaf3"},
{file = "econml-0.14.0-cp39-cp39-macosx_10_9_x86_64.whl", hash = "sha256:1b0e67419c4eff2acdf8138f208de333a85c3e6fded831a6664bb02d6f4bcbe1"},
{file = "econml-0.14.0-cp39-cp39-manylinux_2_17_x86_64.manylinux2014_x86_64.whl", hash = "sha256:376724e0535ad9cbc585f768110eb23bfd3b3218032a61cac8793a09ee3bce95"},
{file = "econml-0.14.0-cp39-cp39-win_amd64.whl", hash = "sha256:6e1f0554d0f930dc639dbf3d7cb171297aa113dd64b7db322e0abb7d12eaa4dc"},
{file = "econml-0.14.0.tar.gz", hash = "sha256:5637d36c7548fb3ad01956d091cc6a9f788b090bc8b892bd527012e5bdbce041"},
]
entrypoints = [
{file = "entrypoints-0.4-py3-none-any.whl", hash = "sha256:f174b5ff827504fd3cd97cc3f8649f3693f51538c7e4bdf3ef002c8429d42f9f"},
{file = "entrypoints-0.4.tar.gz", hash = "sha256:b706eddaa9218a19ebcd67b56818f05bb27589b1ca9e8d797b74affad4ccacd4"},
]
exceptiongroup = [
{file = "exceptiongroup-1.0.4-py3-none-any.whl", hash = "sha256:542adf9dea4055530d6e1279602fa5cb11dab2395fa650b8674eaec35fc4a828"},
{file = "exceptiongroup-1.0.4.tar.gz", hash = "sha256:bd14967b79cd9bdb54d97323216f8fdf533e278df937aa2a90089e7d6e06e5ec"},
]
executing = [
{file = "executing-1.2.0-py2.py3-none-any.whl", hash = "sha256:0314a69e37426e3608aada02473b4161d4caf5a4b244d1d0c48072b8fee7bacc"},
{file = "executing-1.2.0.tar.gz", hash = "sha256:19da64c18d2d851112f09c287f8d3dbbdf725ab0e569077efb6cdcbd3497c107"},
]
fastai = [
{file = "fastai-2.7.10-py3-none-any.whl", hash = "sha256:db3709d6ff9ede9cd29111420b3669238248fa4f5a29d98daf37d52d122d9424"},
{file = "fastai-2.7.10.tar.gz", hash = "sha256:ccef6a185ae3a637efc9bcd9fea8e48b75f454d0ebad3b6df426f22fae20039d"},
]
fastcore = [
{file = "fastcore-1.5.27-py3-none-any.whl", hash = "sha256:79dffaa3de96066e4d7f2b8793f1a8a9468c82bc97d3d48ec002de34097b2a9f"},
{file = "fastcore-1.5.27.tar.gz", hash = "sha256:c6b66b35569d17251e25999bafc7d9bcdd6446c1e710503c08670c3ff1eef271"},
]
fastdownload = [
{file = "fastdownload-0.0.7-py3-none-any.whl", hash = "sha256:b791fa3406a2da003ba64615f03c60e2ea041c3c555796450b9a9a601bc0bbac"},
{file = "fastdownload-0.0.7.tar.gz", hash = "sha256:20507edb8e89406a1fbd7775e6e2a3d81a4dd633dd506b0e9cf0e1613e831d6a"},
]
fastjsonschema = [
{file = "fastjsonschema-2.16.2-py3-none-any.whl", hash = "sha256:21f918e8d9a1a4ba9c22e09574ba72267a6762d47822db9add95f6454e51cc1c"},
{file = "fastjsonschema-2.16.2.tar.gz", hash = "sha256:01e366f25d9047816fe3d288cbfc3e10541daf0af2044763f3d0ade42476da18"},
]
fastprogress = [
{file = "fastprogress-1.0.3-py3-none-any.whl", hash = "sha256:6dfea88f7a4717b0a8d6ee2048beae5dbed369f932a368c5dd9caff34796f7c5"},
{file = "fastprogress-1.0.3.tar.gz", hash = "sha256:7a17d2b438890f838c048eefce32c4ded47197ecc8ea042cecc33d3deb8022f5"},
]
flake8 = [
{file = "flake8-4.0.1-py2.py3-none-any.whl", hash = "sha256:479b1304f72536a55948cb40a32dce8bb0ffe3501e26eaf292c7e60eb5e0428d"},
{file = "flake8-4.0.1.tar.gz", hash = "sha256:806e034dda44114815e23c16ef92f95c91e4c71100ff52813adf7132a6ad870d"},
]
flaky = [
{file = "flaky-3.7.0-py2.py3-none-any.whl", hash = "sha256:d6eda73cab5ae7364504b7c44670f70abed9e75f77dd116352f662817592ec9c"},
{file = "flaky-3.7.0.tar.gz", hash = "sha256:3ad100780721a1911f57a165809b7ea265a7863305acb66708220820caf8aa0d"},
]
flatbuffers = [
{file = "flatbuffers-22.10.26-py2.py3-none-any.whl", hash = "sha256:e36d5ba7a5e9483ff0ec1d238fdc3011c866aab7f8ce77d5e9d445ac12071d84"},
{file = "flatbuffers-22.10.26.tar.gz", hash = "sha256:8698aaa635ca8cf805c7d8414d4a4a8ecbffadca0325fa60551cb3ca78612356"},
]
fonttools = [
{file = "fonttools-4.38.0-py3-none-any.whl", hash = "sha256:820466f43c8be8c3009aef8b87e785014133508f0de64ec469e4efb643ae54fb"},
{file = "fonttools-4.38.0.zip", hash = "sha256:2bb244009f9bf3fa100fc3ead6aeb99febe5985fa20afbfbaa2f8946c2fbdaf1"},
]
forestci = [
{file = "forestci-0.6-py3-none-any.whl", hash = "sha256:025e76b20e23ddbdfc0a9c9c7f261751ee376b33a7b257b86e72fbad8312d650"},
{file = "forestci-0.6.tar.gz", hash = "sha256:f74f51eba9a7c189fdb673203cea10383f0a34504d2d28dee0fd712d19945b5a"},
]
fsspec = [
{file = "fsspec-2022.11.0-py3-none-any.whl", hash = "sha256:d6e462003e3dcdcb8c7aa84c73a228f8227e72453cd22570e2363e8844edfe7b"},
{file = "fsspec-2022.11.0.tar.gz", hash = "sha256:259d5fd5c8e756ff2ea72f42e7613c32667dc2049a4ac3d84364a7ca034acb8b"},
]
future = [
{file = "future-0.18.2.tar.gz", hash = "sha256:b1bead90b70cf6ec3f0710ae53a525360fa360d306a86583adc6bf83a4db537d"},
]
gast = [
{file = "gast-0.4.0-py3-none-any.whl", hash = "sha256:b7adcdd5adbebf1adf17378da5ba3f543684dbec47b1cda1f3997e573cd542c4"},
{file = "gast-0.4.0.tar.gz", hash = "sha256:40feb7b8b8434785585ab224d1568b857edb18297e5a3047f1ba012bc83b42c1"},
]
google-auth = [
{file = "google-auth-2.14.1.tar.gz", hash = "sha256:ccaa901f31ad5cbb562615eb8b664b3dd0bf5404a67618e642307f00613eda4d"},
{file = "google_auth-2.14.1-py2.py3-none-any.whl", hash = "sha256:f5d8701633bebc12e0deea4df8abd8aff31c28b355360597f7f2ee60f2e4d016"},
]
google-auth-oauthlib = [
{file = "google-auth-oauthlib-0.4.6.tar.gz", hash = "sha256:a90a072f6993f2c327067bf65270046384cda5a8ecb20b94ea9a687f1f233a7a"},
{file = "google_auth_oauthlib-0.4.6-py2.py3-none-any.whl", hash = "sha256:3f2a6e802eebbb6fb736a370fbf3b055edcb6b52878bf2f26330b5e041316c73"},
]
google-pasta = [
{file = "google-pasta-0.2.0.tar.gz", hash = "sha256:c9f2c8dfc8f96d0d5808299920721be30c9eec37f2389f28904f454565c8a16e"},
{file = "google_pasta-0.2.0-py2-none-any.whl", hash = "sha256:4612951da876b1a10fe3960d7226f0c7682cf901e16ac06e473b267a5afa8954"},
{file = "google_pasta-0.2.0-py3-none-any.whl", hash = "sha256:b32482794a366b5366a32c92a9a9201b107821889935a02b3e51f6b432ea84ed"},
]
graphviz = [
{file = "graphviz-0.20.1-py3-none-any.whl", hash = "sha256:587c58a223b51611c0cf461132da386edd896a029524ca61a1462b880bf97977"},
{file = "graphviz-0.20.1.zip", hash = "sha256:8c58f14adaa3b947daf26c19bc1e98c4e0702cdc31cf99153e6f06904d492bf8"},
]
grpcio = [
{file = "grpcio-1.50.0-cp310-cp310-linux_armv7l.whl", hash = "sha256:906f4d1beb83b3496be91684c47a5d870ee628715227d5d7c54b04a8de802974"},
{file = "grpcio-1.50.0-cp310-cp310-macosx_12_0_x86_64.whl", hash = "sha256:2d9fd6e38b16c4d286a01e1776fdf6c7a4123d99ae8d6b3f0b4a03a34bf6ce45"},
{file = "grpcio-1.50.0-cp310-cp310-manylinux_2_17_aarch64.whl", hash = "sha256:4b123fbb7a777a2fedec684ca0b723d85e1d2379b6032a9a9b7851829ed3ca9a"},
{file = "grpcio-1.50.0-cp310-cp310-manylinux_2_17_i686.manylinux2014_i686.whl", hash = "sha256:b2f77a90ba7b85bfb31329f8eab9d9540da2cf8a302128fb1241d7ea239a5469"},
{file = "grpcio-1.50.0-cp310-cp310-manylinux_2_17_x86_64.manylinux2014_x86_64.whl", hash = "sha256:9eea18a878cffc804506d39c6682d71f6b42ec1c151d21865a95fae743fda500"},
{file = "grpcio-1.50.0-cp310-cp310-musllinux_1_1_i686.whl", hash = "sha256:2b71916fa8f9eb2abd93151fafe12e18cebb302686b924bd4ec39266211da525"},
{file = "grpcio-1.50.0-cp310-cp310-musllinux_1_1_x86_64.whl", hash = "sha256:95ce51f7a09491fb3da8cf3935005bff19983b77c4e9437ef77235d787b06842"},
{file = "grpcio-1.50.0-cp310-cp310-win32.whl", hash = "sha256:f7025930039a011ed7d7e7ef95a1cb5f516e23c5a6ecc7947259b67bea8e06ca"},
{file = "grpcio-1.50.0-cp310-cp310-win_amd64.whl", hash = "sha256:05f7c248e440f538aaad13eee78ef35f0541e73498dd6f832fe284542ac4b298"},
{file = "grpcio-1.50.0-cp311-cp311-linux_armv7l.whl", hash = "sha256:ca8a2254ab88482936ce941485c1c20cdeaef0efa71a61dbad171ab6758ec998"},
{file = "grpcio-1.50.0-cp311-cp311-macosx_10_10_x86_64.whl", hash = "sha256:3b611b3de3dfd2c47549ca01abfa9bbb95937eb0ea546ea1d762a335739887be"},
{file = "grpcio-1.50.0-cp311-cp311-manylinux_2_17_i686.manylinux2014_i686.whl", hash = "sha256:1a4cd8cb09d1bc70b3ea37802be484c5ae5a576108bad14728f2516279165dd7"},
{file = "grpcio-1.50.0-cp311-cp311-manylinux_2_17_x86_64.manylinux2014_x86_64.whl", hash = "sha256:156f8009e36780fab48c979c5605eda646065d4695deea4cfcbcfdd06627ddb6"},
{file = "grpcio-1.50.0-cp311-cp311-musllinux_1_1_i686.whl", hash = "sha256:de411d2b030134b642c092e986d21aefb9d26a28bf5a18c47dd08ded411a3bc5"},
{file = "grpcio-1.50.0-cp311-cp311-musllinux_1_1_x86_64.whl", hash = "sha256:d144ad10eeca4c1d1ce930faa105899f86f5d99cecfe0d7224f3c4c76265c15e"},
{file = "grpcio-1.50.0-cp311-cp311-win32.whl", hash = "sha256:92d7635d1059d40d2ec29c8bf5ec58900120b3ce5150ef7414119430a4b2dd5c"},
{file = "grpcio-1.50.0-cp311-cp311-win_amd64.whl", hash = "sha256:ce8513aee0af9c159319692bfbf488b718d1793d764798c3d5cff827a09e25ef"},
{file = "grpcio-1.50.0-cp37-cp37m-linux_armv7l.whl", hash = "sha256:8e8999a097ad89b30d584c034929f7c0be280cd7851ac23e9067111167dcbf55"},
{file = "grpcio-1.50.0-cp37-cp37m-macosx_10_10_x86_64.whl", hash = "sha256:a50a1be449b9e238b9bd43d3857d40edf65df9416dea988929891d92a9f8a778"},
{file = "grpcio-1.50.0-cp37-cp37m-manylinux_2_17_aarch64.whl", hash = "sha256:cf151f97f5f381163912e8952eb5b3afe89dec9ed723d1561d59cabf1e219a35"},
{file = "grpcio-1.50.0-cp37-cp37m-manylinux_2_17_i686.manylinux2014_i686.whl", hash = "sha256:a23d47f2fc7111869f0ff547f771733661ff2818562b04b9ed674fa208e261f4"},
{file = "grpcio-1.50.0-cp37-cp37m-manylinux_2_17_x86_64.manylinux2014_x86_64.whl", hash = "sha256:d84d04dec64cc4ed726d07c5d17b73c343c8ddcd6b59c7199c801d6bbb9d9ed1"},
{file = "grpcio-1.50.0-cp37-cp37m-musllinux_1_1_i686.whl", hash = "sha256:67dd41a31f6fc5c7db097a5c14a3fa588af54736ffc174af4411d34c4f306f68"},
{file = "grpcio-1.50.0-cp37-cp37m-musllinux_1_1_x86_64.whl", hash = "sha256:8d4c8e73bf20fb53fe5a7318e768b9734cf122fe671fcce75654b98ba12dfb75"},
{file = "grpcio-1.50.0-cp37-cp37m-win32.whl", hash = "sha256:7489dbb901f4fdf7aec8d3753eadd40839c9085967737606d2c35b43074eea24"},
{file = "grpcio-1.50.0-cp37-cp37m-win_amd64.whl", hash = "sha256:531f8b46f3d3db91d9ef285191825d108090856b3bc86a75b7c3930f16ce432f"},
{file = "grpcio-1.50.0-cp38-cp38-linux_armv7l.whl", hash = "sha256:d534d169673dd5e6e12fb57cc67664c2641361e1a0885545495e65a7b761b0f4"},
{file = "grpcio-1.50.0-cp38-cp38-macosx_10_10_x86_64.whl", hash = "sha256:1d8d02dbb616c0a9260ce587eb751c9c7dc689bc39efa6a88cc4fa3e9c138a7b"},
{file = "grpcio-1.50.0-cp38-cp38-manylinux_2_17_aarch64.whl", hash = "sha256:baab51dcc4f2aecabf4ed1e2f57bceab240987c8b03533f1cef90890e6502067"},
{file = "grpcio-1.50.0-cp38-cp38-manylinux_2_17_i686.manylinux2014_i686.whl", hash = "sha256:40838061e24f960b853d7bce85086c8e1b81c6342b1f4c47ff0edd44bbae2722"},
{file = "grpcio-1.50.0-cp38-cp38-manylinux_2_17_x86_64.manylinux2014_x86_64.whl", hash = "sha256:931e746d0f75b2a5cff0a1197d21827a3a2f400c06bace036762110f19d3d507"},
{file = "grpcio-1.50.0-cp38-cp38-musllinux_1_1_i686.whl", hash = "sha256:15f9e6d7f564e8f0776770e6ef32dac172c6f9960c478616c366862933fa08b4"},
{file = "grpcio-1.50.0-cp38-cp38-musllinux_1_1_x86_64.whl", hash = "sha256:a4c23e54f58e016761b576976da6a34d876420b993f45f66a2bfb00363ecc1f9"},
{file = "grpcio-1.50.0-cp38-cp38-win32.whl", hash = "sha256:3e4244c09cc1b65c286d709658c061f12c61c814be0b7030a2d9966ff02611e0"},
{file = "grpcio-1.50.0-cp38-cp38-win_amd64.whl", hash = "sha256:8e69aa4e9b7f065f01d3fdcecbe0397895a772d99954bb82eefbb1682d274518"},
{file = "grpcio-1.50.0-cp39-cp39-linux_armv7l.whl", hash = "sha256:af98d49e56605a2912cf330b4627e5286243242706c3a9fa0bcec6e6f68646fc"},
{file = "grpcio-1.50.0-cp39-cp39-macosx_10_10_x86_64.whl", hash = "sha256:080b66253f29e1646ac53ef288c12944b131a2829488ac3bac8f52abb4413c0d"},
{file = "grpcio-1.50.0-cp39-cp39-manylinux_2_17_aarch64.whl", hash = "sha256:ab5d0e3590f0a16cb88de4a3fa78d10eb66a84ca80901eb2c17c1d2c308c230f"},
{file = "grpcio-1.50.0-cp39-cp39-manylinux_2_17_i686.manylinux2014_i686.whl", hash = "sha256:cb11464f480e6103c59d558a3875bd84eed6723f0921290325ebe97262ae1347"},
{file = "grpcio-1.50.0-cp39-cp39-manylinux_2_17_x86_64.manylinux2014_x86_64.whl", hash = "sha256:e07fe0d7ae395897981d16be61f0db9791f482f03fee7d1851fe20ddb4f69c03"},
{file = "grpcio-1.50.0-cp39-cp39-musllinux_1_1_i686.whl", hash = "sha256:d75061367a69808ab2e84c960e9dce54749bcc1e44ad3f85deee3a6c75b4ede9"},
{file = "grpcio-1.50.0-cp39-cp39-musllinux_1_1_x86_64.whl", hash = "sha256:ae23daa7eda93c1c49a9ecc316e027ceb99adbad750fbd3a56fa9e4a2ffd5ae0"},
{file = "grpcio-1.50.0-cp39-cp39-win32.whl", hash = "sha256:177afaa7dba3ab5bfc211a71b90da1b887d441df33732e94e26860b3321434d9"},
{file = "grpcio-1.50.0-cp39-cp39-win_amd64.whl", hash = "sha256:ea8ccf95e4c7e20419b7827aa5b6da6f02720270686ac63bd3493a651830235c"},
{file = "grpcio-1.50.0.tar.gz", hash = "sha256:12b479839a5e753580b5e6053571de14006157f2ef9b71f38c56dc9b23b95ad6"},
]
h5py = [
{file = "h5py-3.7.0-cp310-cp310-macosx_10_9_x86_64.whl", hash = "sha256:d77af42cb751ad6cc44f11bae73075a07429a5cf2094dfde2b1e716e059b3911"},
{file = "h5py-3.7.0-cp310-cp310-macosx_11_0_arm64.whl", hash = "sha256:63beb8b7b47d0896c50de6efb9a1eaa81dbe211f3767e7dd7db159cea51ba37a"},
{file = "h5py-3.7.0-cp310-cp310-manylinux_2_12_x86_64.manylinux2010_x86_64.whl", hash = "sha256:04e2e1e2fc51b8873e972a08d2f89625ef999b1f2d276199011af57bb9fc7851"},
{file = "h5py-3.7.0-cp310-cp310-manylinux_2_17_aarch64.manylinux2014_aarch64.whl", hash = "sha256:f73307c876af49aa869ec5df1818e9bb0bdcfcf8a5ba773cc45a4fba5a286a5c"},
{file = "h5py-3.7.0-cp310-cp310-win_amd64.whl", hash = "sha256:f514b24cacdd983e61f8d371edac8c1b780c279d0acb8485639e97339c866073"},
{file = "h5py-3.7.0-cp37-cp37m-macosx_10_9_x86_64.whl", hash = "sha256:43fed4d13743cf02798a9a03a360a88e589d81285e72b83f47d37bb64ed44881"},
{file = "h5py-3.7.0-cp37-cp37m-manylinux_2_12_x86_64.manylinux2010_x86_64.whl", hash = "sha256:c038399ce09a58ff8d89ec3e62f00aa7cb82d14f34e24735b920e2a811a3a426"},
{file = "h5py-3.7.0-cp37-cp37m-manylinux_2_17_aarch64.manylinux2014_aarch64.whl", hash = "sha256:03d64fb86bb86b978928bad923b64419a23e836499ec6363e305ad28afd9d287"},
{file = "h5py-3.7.0-cp37-cp37m-win_amd64.whl", hash = "sha256:e5b7820b75f9519499d76cc708e27242ccfdd9dfb511d6deb98701961d0445aa"},
{file = "h5py-3.7.0-cp38-cp38-macosx_10_9_x86_64.whl", hash = "sha256:a9351d729ea754db36d175098361b920573fdad334125f86ac1dd3a083355e20"},
{file = "h5py-3.7.0-cp38-cp38-macosx_11_0_arm64.whl", hash = "sha256:6776d896fb90c5938de8acb925e057e2f9f28755f67ec3edcbc8344832616c38"},
{file = "h5py-3.7.0-cp38-cp38-manylinux_2_12_x86_64.manylinux2010_x86_64.whl", hash = "sha256:0a047fddbe6951bce40e9cde63373c838a978c5e05a011a682db9ba6334b8e85"},
{file = "h5py-3.7.0-cp38-cp38-manylinux_2_17_aarch64.manylinux2014_aarch64.whl", hash = "sha256:0798a9c0ff45f17d0192e4d7114d734cac9f8b2b2c76dd1d923c4d0923f27bb6"},
{file = "h5py-3.7.0-cp38-cp38-win_amd64.whl", hash = "sha256:0d8de8cb619fc597da7cf8cdcbf3b7ff8c5f6db836568afc7dc16d21f59b2b49"},
{file = "h5py-3.7.0-cp39-cp39-macosx_10_9_x86_64.whl", hash = "sha256:f084bbe816907dfe59006756f8f2d16d352faff2d107f4ffeb1d8de126fc5dc7"},
{file = "h5py-3.7.0-cp39-cp39-macosx_11_0_arm64.whl", hash = "sha256:1fcb11a2dc8eb7ddcae08afd8fae02ba10467753a857fa07a404d700a93f3d53"},
{file = "h5py-3.7.0-cp39-cp39-manylinux_2_12_x86_64.manylinux2010_x86_64.whl", hash = "sha256:ed43e2cc4f511756fd664fb45d6b66c3cbed4e3bd0f70e29c37809b2ae013c44"},
{file = "h5py-3.7.0-cp39-cp39-manylinux_2_17_aarch64.manylinux2014_aarch64.whl", hash = "sha256:9e7535df5ee3dc3e5d1f408fdfc0b33b46bc9b34db82743c82cd674d8239b9ad"},
{file = "h5py-3.7.0-cp39-cp39-win_amd64.whl", hash = "sha256:9e2ad2aa000f5b1e73b5dfe22f358ca46bf1a2b6ca394d9659874d7fc251731a"},
{file = "h5py-3.7.0.tar.gz", hash = "sha256:3fcf37884383c5da64846ab510190720027dca0768def34dd8dcb659dbe5cbf3"},
]
heapdict = [
{file = "HeapDict-1.0.1-py3-none-any.whl", hash = "sha256:6065f90933ab1bb7e50db403b90cab653c853690c5992e69294c2de2b253fc92"},
{file = "HeapDict-1.0.1.tar.gz", hash = "sha256:8495f57b3e03d8e46d5f1b2cc62ca881aca392fd5cc048dc0aa2e1a6d23ecdb6"},
]
idna = [
{file = "idna-3.4-py3-none-any.whl", hash = "sha256:90b77e79eaa3eba6de819a0c442c0b4ceefc341a7a2ab77d7562bf49f425c5c2"},
{file = "idna-3.4.tar.gz", hash = "sha256:814f528e8dead7d329833b91c5faa87d60bf71824cd12a7530b5526063d02cb4"},
]
imagesize = [
{file = "imagesize-1.4.1-py2.py3-none-any.whl", hash = "sha256:0d8d18d08f840c19d0ee7ca1fd82490fdc3729b7ac93f49870406ddde8ef8d8b"},
{file = "imagesize-1.4.1.tar.gz", hash = "sha256:69150444affb9cb0d5cc5a92b3676f0b2fb7cd9ae39e947a5e11a36b4497cd4a"},
]
importlib-metadata = [
{file = "importlib_metadata-5.0.0-py3-none-any.whl", hash = "sha256:ddb0e35065e8938f867ed4928d0ae5bf2a53b7773871bfe6bcc7e4fcdc7dea43"},
{file = "importlib_metadata-5.0.0.tar.gz", hash = "sha256:da31db32b304314d044d3c12c79bd59e307889b287ad12ff387b3500835fc2ab"},
]
importlib-resources = [
{file = "importlib_resources-5.10.0-py3-none-any.whl", hash = "sha256:ee17ec648f85480d523596ce49eae8ead87d5631ae1551f913c0100b5edd3437"},
{file = "importlib_resources-5.10.0.tar.gz", hash = "sha256:c01b1b94210d9849f286b86bb51bcea7cd56dde0600d8db721d7b81330711668"},
]
iniconfig = [
{file = "iniconfig-1.1.1-py2.py3-none-any.whl", hash = "sha256:011e24c64b7f47f6ebd835bb12a743f2fbe9a26d4cecaa7f53bc4f35ee9da8b3"},
{file = "iniconfig-1.1.1.tar.gz", hash = "sha256:bc3af051d7d14b2ee5ef9969666def0cd1a000e121eaea580d4a313df4b37f32"},
]
ipykernel = [
{file = "ipykernel-6.17.1-py3-none-any.whl", hash = "sha256:3a9a1b2ad6dbbd5879855aabb4557f08e63fa2208bffed897f03070e2bb436f6"},
{file = "ipykernel-6.17.1.tar.gz", hash = "sha256:e178c1788399f93a459c241fe07c3b810771c607b1fb064a99d2c5d40c90c5d4"},
]
ipython = [
{file = "ipython-8.6.0-py3-none-any.whl", hash = "sha256:91ef03016bcf72dd17190f863476e7c799c6126ec7e8be97719d1bc9a78a59a4"},
{file = "ipython-8.6.0.tar.gz", hash = "sha256:7c959e3dedbf7ed81f9b9d8833df252c430610e2a4a6464ec13cd20975ce20a5"},
]
ipython-genutils = [
{file = "ipython_genutils-0.2.0-py2.py3-none-any.whl", hash = "sha256:72dd37233799e619666c9f639a9da83c34013a73e8bbc79a7a6348d93c61fab8"},
{file = "ipython_genutils-0.2.0.tar.gz", hash = "sha256:eb2e116e75ecef9d4d228fdc66af54269afa26ab4463042e33785b887c628ba8"},
]
ipywidgets = [
{file = "ipywidgets-8.0.2-py3-none-any.whl", hash = "sha256:1dc3dd4ee19ded045ea7c86eb273033d238d8e43f9e7872c52d092683f263891"},
{file = "ipywidgets-8.0.2.tar.gz", hash = "sha256:08cb75c6e0a96836147cbfdc55580ae04d13e05d26ffbc377b4e1c68baa28b1f"},
]
isort = [
{file = "isort-5.10.1-py3-none-any.whl", hash = "sha256:6f62d78e2f89b4500b080fe3a81690850cd254227f27f75c3a0c491a1f351ba7"},
{file = "isort-5.10.1.tar.gz", hash = "sha256:e8443a5e7a020e9d7f97f1d7d9cd17c88bcb3bc7e218bf9cf5095fe550be2951"},
]
jedi = [
{file = "jedi-0.18.2-py2.py3-none-any.whl", hash = "sha256:203c1fd9d969ab8f2119ec0a3342e0b49910045abe6af0a3ae83a5764d54639e"},
{file = "jedi-0.18.2.tar.gz", hash = "sha256:bae794c30d07f6d910d32a7048af09b5a39ed740918da923c6b780790ebac612"},
]
jinja2 = [
{file = "Jinja2-3.1.2-py3-none-any.whl", hash = "sha256:6088930bfe239f0e6710546ab9c19c9ef35e29792895fed6e6e31a023a182a61"},
{file = "Jinja2-3.1.2.tar.gz", hash = "sha256:31351a702a408a9e7595a8fc6150fc3f43bb6bf7e319770cbc0db9df9437e852"},
]
jmespath = [
{file = "jmespath-1.0.1-py3-none-any.whl", hash = "sha256:02e2e4cc71b5bcab88332eebf907519190dd9e6e82107fa7f83b1003a6252980"},
{file = "jmespath-1.0.1.tar.gz", hash = "sha256:90261b206d6defd58fdd5e85f478bf633a2901798906be2ad389150c5c60edbe"},
]
joblib = [
{file = "joblib-1.2.0-py3-none-any.whl", hash = "sha256:091138ed78f800342968c523bdde947e7a305b8594b910a0fea2ab83c3c6d385"},
{file = "joblib-1.2.0.tar.gz", hash = "sha256:e1cee4a79e4af22881164f218d4311f60074197fb707e082e803b61f6d137018"},
]
jsonschema = [
{file = "jsonschema-4.17.1-py3-none-any.whl", hash = "sha256:410ef23dcdbca4eaedc08b850079179883c2ed09378bd1f760d4af4aacfa28d7"},
{file = "jsonschema-4.17.1.tar.gz", hash = "sha256:05b2d22c83640cde0b7e0aa329ca7754fbd98ea66ad8ae24aa61328dfe057fa3"},
]
jupyter = [
{file = "jupyter-1.0.0-py2.py3-none-any.whl", hash = "sha256:5b290f93b98ffbc21c0c7e749f054b3267782166d72fa5e3ed1ed4eaf34a2b78"},
{file = "jupyter-1.0.0.tar.gz", hash = "sha256:d9dc4b3318f310e34c82951ea5d6683f67bed7def4b259fafbfe4f1beb1d8e5f"},
{file = "jupyter-1.0.0.zip", hash = "sha256:3e1f86076bbb7c8c207829390305a2b1fe836d471ed54be66a3b8c41e7f46cc7"},
]
jupyter-client = [
{file = "jupyter_client-7.4.7-py3-none-any.whl", hash = "sha256:df56ae23b8e1da1b66f89dee1368e948b24a7f780fa822c5735187589fc4c157"},
{file = "jupyter_client-7.4.7.tar.gz", hash = "sha256:330f6b627e0b4bf2f54a3a0dd9e4a22d2b649c8518168afedce2c96a1ceb2860"},
]
jupyter-console = [
{file = "jupyter_console-6.4.4-py3-none-any.whl", hash = "sha256:756df7f4f60c986e7bc0172e4493d3830a7e6e75c08750bbe59c0a5403ad6dee"},
{file = "jupyter_console-6.4.4.tar.gz", hash = "sha256:172f5335e31d600df61613a97b7f0352f2c8250bbd1092ef2d658f77249f89fb"},
]
jupyter-core = [
{file = "jupyter_core-5.0.0-py3-none-any.whl", hash = "sha256:6da1fae48190da8551e1b5dbbb19d51d00b079d59a073c7030407ecaf96dbb1e"},
{file = "jupyter_core-5.0.0.tar.gz", hash = "sha256:4ed68b7c606197c7e344a24b7195eef57898157075a69655a886074b6beb7043"},
]
jupyter-server = [
{file = "jupyter_server-1.23.3-py3-none-any.whl", hash = "sha256:438496cac509709cc85e60172e5538ca45b4c8a0862bb97cd73e49f2ace419cb"},
{file = "jupyter_server-1.23.3.tar.gz", hash = "sha256:f7f7a2f9d36f4150ad125afef0e20b1c76c8ff83eb5e39fb02d3b9df0f9b79ab"},
]
jupyterlab-pygments = [
{file = "jupyterlab_pygments-0.2.2-py2.py3-none-any.whl", hash = "sha256:2405800db07c9f770863bcf8049a529c3dd4d3e28536638bd7c1c01d2748309f"},
{file = "jupyterlab_pygments-0.2.2.tar.gz", hash = "sha256:7405d7fde60819d905a9fa8ce89e4cd830e318cdad22a0030f7a901da705585d"},
]
jupyterlab-widgets = [
{file = "jupyterlab_widgets-3.0.3-py3-none-any.whl", hash = "sha256:6aa1bc0045470d54d76b9c0b7609a8f8f0087573bae25700a370c11f82cb38c8"},
{file = "jupyterlab_widgets-3.0.3.tar.gz", hash = "sha256:c767181399b4ca8b647befe2d913b1260f51bf9d8ef9b7a14632d4c1a7b536bd"},
]
keras = [
{file = "keras-2.11.0-py2.py3-none-any.whl", hash = "sha256:38c6fff0ea9a8b06a2717736565c92a73c8cd9b1c239e7125ccb188b7848f65e"},
]
kiwisolver = [
{file = "kiwisolver-1.4.4-cp310-cp310-macosx_10_9_universal2.whl", hash = "sha256:2f5e60fabb7343a836360c4f0919b8cd0d6dbf08ad2ca6b9cf90bf0c76a3c4f6"},
{file = "kiwisolver-1.4.4-cp310-cp310-macosx_10_9_x86_64.whl", hash = "sha256:10ee06759482c78bdb864f4109886dff7b8a56529bc1609d4f1112b93fe6423c"},
{file = "kiwisolver-1.4.4-cp310-cp310-macosx_11_0_arm64.whl", hash = "sha256:c79ebe8f3676a4c6630fd3f777f3cfecf9289666c84e775a67d1d358578dc2e3"},
{file = "kiwisolver-1.4.4-cp310-cp310-manylinux_2_12_i686.manylinux2010_i686.whl", hash = "sha256:abbe9fa13da955feb8202e215c4018f4bb57469b1b78c7a4c5c7b93001699938"},
{file = "kiwisolver-1.4.4-cp310-cp310-manylinux_2_12_x86_64.manylinux2010_x86_64.whl", hash = "sha256:7577c1987baa3adc4b3c62c33bd1118c3ef5c8ddef36f0f2c950ae0b199e100d"},
{file = "kiwisolver-1.4.4-cp310-cp310-manylinux_2_17_aarch64.manylinux2014_aarch64.whl", hash = "sha256:f8ad8285b01b0d4695102546b342b493b3ccc6781fc28c8c6a1bb63e95d22f09"},
{file = "kiwisolver-1.4.4-cp310-cp310-manylinux_2_17_ppc64le.manylinux2014_ppc64le.whl", hash = "sha256:8ed58b8acf29798b036d347791141767ccf65eee7f26bde03a71c944449e53de"},
{file = "kiwisolver-1.4.4-cp310-cp310-manylinux_2_17_s390x.manylinux2014_s390x.whl", hash = "sha256:a68b62a02953b9841730db7797422f983935aeefceb1679f0fc85cbfbd311c32"},
{file = "kiwisolver-1.4.4-cp310-cp310-win32.whl", hash = "sha256:e92a513161077b53447160b9bd8f522edfbed4bd9759e4c18ab05d7ef7e49408"},
{file = "kiwisolver-1.4.4-cp310-cp310-win_amd64.whl", hash = "sha256:3fe20f63c9ecee44560d0e7f116b3a747a5d7203376abeea292ab3152334d004"},
{file = "kiwisolver-1.4.4-cp311-cp311-macosx_10_9_universal2.whl", hash = "sha256:e0ea21f66820452a3f5d1655f8704a60d66ba1191359b96541eaf457710a5fc6"},
{file = "kiwisolver-1.4.4-cp311-cp311-macosx_10_9_x86_64.whl", hash = "sha256:bc9db8a3efb3e403e4ecc6cd9489ea2bac94244f80c78e27c31dcc00d2790ac2"},
{file = "kiwisolver-1.4.4-cp311-cp311-macosx_11_0_arm64.whl", hash = "sha256:d5b61785a9ce44e5a4b880272baa7cf6c8f48a5180c3e81c59553ba0cb0821ca"},
{file = "kiwisolver-1.4.4-cp311-cp311-manylinux_2_12_i686.manylinux2010_i686.manylinux_2_17_i686.manylinux2014_i686.whl", hash = "sha256:c2dbb44c3f7e6c4d3487b31037b1bdbf424d97687c1747ce4ff2895795c9bf69"},
{file = "kiwisolver-1.4.4-cp311-cp311-manylinux_2_17_aarch64.manylinux2014_aarch64.whl", hash = "sha256:6295ecd49304dcf3bfbfa45d9a081c96509e95f4b9d0eb7ee4ec0530c4a96514"},
{file = "kiwisolver-1.4.4-cp311-cp311-manylinux_2_17_ppc64le.manylinux2014_ppc64le.whl", hash = "sha256:4bd472dbe5e136f96a4b18f295d159d7f26fd399136f5b17b08c4e5f498cd494"},
{file = "kiwisolver-1.4.4-cp311-cp311-manylinux_2_17_s390x.manylinux2014_s390x.whl", hash = "sha256:bf7d9fce9bcc4752ca4a1b80aabd38f6d19009ea5cbda0e0856983cf6d0023f5"},
{file = "kiwisolver-1.4.4-cp311-cp311-manylinux_2_17_x86_64.manylinux2014_x86_64.whl", hash = "sha256:78d6601aed50c74e0ef02f4204da1816147a6d3fbdc8b3872d263338a9052c51"},
{file = "kiwisolver-1.4.4-cp311-cp311-musllinux_1_1_aarch64.whl", hash = "sha256:877272cf6b4b7e94c9614f9b10140e198d2186363728ed0f701c6eee1baec1da"},
{file = "kiwisolver-1.4.4-cp311-cp311-musllinux_1_1_i686.whl", hash = "sha256:db608a6757adabb32f1cfe6066e39b3706d8c3aa69bbc353a5b61edad36a5cb4"},
{file = "kiwisolver-1.4.4-cp311-cp311-musllinux_1_1_ppc64le.whl", hash = "sha256:5853eb494c71e267912275e5586fe281444eb5e722de4e131cddf9d442615626"},
{file = "kiwisolver-1.4.4-cp311-cp311-musllinux_1_1_s390x.whl", hash = "sha256:f0a1dbdb5ecbef0d34eb77e56fcb3e95bbd7e50835d9782a45df81cc46949750"},
{file = "kiwisolver-1.4.4-cp311-cp311-musllinux_1_1_x86_64.whl", hash = "sha256:283dffbf061a4ec60391d51e6155e372a1f7a4f5b15d59c8505339454f8989e4"},
{file = "kiwisolver-1.4.4-cp311-cp311-win32.whl", hash = "sha256:d06adcfa62a4431d404c31216f0f8ac97397d799cd53800e9d3efc2fbb3cf14e"},
{file = "kiwisolver-1.4.4-cp311-cp311-win_amd64.whl", hash = "sha256:e7da3fec7408813a7cebc9e4ec55afed2d0fd65c4754bc376bf03498d4e92686"},
{file = "kiwisolver-1.4.4-cp37-cp37m-macosx_10_9_x86_64.whl", hash = "sha256:62ac9cc684da4cf1778d07a89bf5f81b35834cb96ca523d3a7fb32509380cbf6"},
{file = "kiwisolver-1.4.4-cp37-cp37m-manylinux_2_17_aarch64.manylinux2014_aarch64.whl", hash = "sha256:41dae968a94b1ef1897cb322b39360a0812661dba7c682aa45098eb8e193dbdf"},
{file = "kiwisolver-1.4.4-cp37-cp37m-manylinux_2_17_ppc64le.manylinux2014_ppc64le.whl", hash = "sha256:02f79693ec433cb4b5f51694e8477ae83b3205768a6fb48ffba60549080e295b"},
{file = "kiwisolver-1.4.4-cp37-cp37m-manylinux_2_17_s390x.manylinux2014_s390x.whl", hash = "sha256:d0611a0a2a518464c05ddd5a3a1a0e856ccc10e67079bb17f265ad19ab3c7597"},
{file = "kiwisolver-1.4.4-cp37-cp37m-manylinux_2_5_i686.manylinux1_i686.whl", hash = "sha256:db5283d90da4174865d520e7366801a93777201e91e79bacbac6e6927cbceede"},
{file = "kiwisolver-1.4.4-cp37-cp37m-manylinux_2_5_x86_64.manylinux1_x86_64.whl", hash = "sha256:1041feb4cda8708ce73bb4dcb9ce1ccf49d553bf87c3954bdfa46f0c3f77252c"},
{file = "kiwisolver-1.4.4-cp37-cp37m-win32.whl", hash = "sha256:a553dadda40fef6bfa1456dc4be49b113aa92c2a9a9e8711e955618cd69622e3"},
{file = "kiwisolver-1.4.4-cp37-cp37m-win_amd64.whl", hash = "sha256:03baab2d6b4a54ddbb43bba1a3a2d1627e82d205c5cf8f4c924dc49284b87166"},
{file = "kiwisolver-1.4.4-cp38-cp38-macosx_10_9_universal2.whl", hash = "sha256:841293b17ad704d70c578f1f0013c890e219952169ce8a24ebc063eecf775454"},
{file = "kiwisolver-1.4.4-cp38-cp38-macosx_10_9_x86_64.whl", hash = "sha256:f4f270de01dd3e129a72efad823da90cc4d6aafb64c410c9033aba70db9f1ff0"},
{file = "kiwisolver-1.4.4-cp38-cp38-macosx_11_0_arm64.whl", hash = "sha256:f9f39e2f049db33a908319cf46624a569b36983c7c78318e9726a4cb8923b26c"},
{file = "kiwisolver-1.4.4-cp38-cp38-manylinux_2_17_aarch64.manylinux2014_aarch64.whl", hash = "sha256:c97528e64cb9ebeff9701e7938653a9951922f2a38bd847787d4a8e498cc83ae"},
{file = "kiwisolver-1.4.4-cp38-cp38-manylinux_2_17_ppc64le.manylinux2014_ppc64le.whl", hash = "sha256:1d1573129aa0fd901076e2bfb4275a35f5b7aa60fbfb984499d661ec950320b0"},
{file = "kiwisolver-1.4.4-cp38-cp38-manylinux_2_17_s390x.manylinux2014_s390x.whl", hash = "sha256:ad881edc7ccb9d65b0224f4e4d05a1e85cf62d73aab798943df6d48ab0cd79a1"},
{file = "kiwisolver-1.4.4-cp38-cp38-manylinux_2_5_i686.manylinux1_i686.whl", hash = "sha256:b428ef021242344340460fa4c9185d0b1f66fbdbfecc6c63eff4b7c29fad429d"},
{file = "kiwisolver-1.4.4-cp38-cp38-manylinux_2_5_x86_64.manylinux1_x86_64.whl", hash = "sha256:2e407cb4bd5a13984a6c2c0fe1845e4e41e96f183e5e5cd4d77a857d9693494c"},
{file = "kiwisolver-1.4.4-cp38-cp38-win32.whl", hash = "sha256:75facbe9606748f43428fc91a43edb46c7ff68889b91fa31f53b58894503a191"},
{file = "kiwisolver-1.4.4-cp38-cp38-win_amd64.whl", hash = "sha256:5bce61af018b0cb2055e0e72e7d65290d822d3feee430b7b8203d8a855e78766"},
{file = "kiwisolver-1.4.4-cp39-cp39-macosx_10_9_universal2.whl", hash = "sha256:8c808594c88a025d4e322d5bb549282c93c8e1ba71b790f539567932722d7bd8"},
{file = "kiwisolver-1.4.4-cp39-cp39-macosx_10_9_x86_64.whl", hash = "sha256:f0a71d85ecdd570ded8ac3d1c0f480842f49a40beb423bb8014539a9f32a5897"},
{file = "kiwisolver-1.4.4-cp39-cp39-macosx_11_0_arm64.whl", hash = "sha256:b533558eae785e33e8c148a8d9921692a9fe5aa516efbdff8606e7d87b9d5824"},
{file = "kiwisolver-1.4.4-cp39-cp39-manylinux_2_12_i686.manylinux2010_i686.whl", hash = "sha256:efda5fc8cc1c61e4f639b8067d118e742b812c930f708e6667a5ce0d13499e29"},
{file = "kiwisolver-1.4.4-cp39-cp39-manylinux_2_12_x86_64.manylinux2010_x86_64.whl", hash = "sha256:7c43e1e1206cd421cd92e6b3280d4385d41d7166b3ed577ac20444b6995a445f"},
{file = "kiwisolver-1.4.4-cp39-cp39-manylinux_2_17_aarch64.manylinux2014_aarch64.whl", hash = "sha256:bc8d3bd6c72b2dd9decf16ce70e20abcb3274ba01b4e1c96031e0c4067d1e7cd"},
{file = "kiwisolver-1.4.4-cp39-cp39-manylinux_2_17_ppc64le.manylinux2014_ppc64le.whl", hash = "sha256:4ea39b0ccc4f5d803e3337dd46bcce60b702be4d86fd0b3d7531ef10fd99a1ac"},
{file = "kiwisolver-1.4.4-cp39-cp39-manylinux_2_17_s390x.manylinux2014_s390x.whl", hash = "sha256:968f44fdbf6dd757d12920d63b566eeb4d5b395fd2d00d29d7ef00a00582aac9"},
{file = "kiwisolver-1.4.4-cp39-cp39-win32.whl", hash = "sha256:da7e547706e69e45d95e116e6939488d62174e033b763ab1496b4c29b76fabea"},
{file = "kiwisolver-1.4.4-cp39-cp39-win_amd64.whl", hash = "sha256:ba59c92039ec0a66103b1d5fe588fa546373587a7d68f5c96f743c3396afc04b"},
{file = "kiwisolver-1.4.4-pp37-pypy37_pp73-manylinux_2_12_i686.manylinux2010_i686.whl", hash = "sha256:91672bacaa030f92fc2f43b620d7b337fd9a5af28b0d6ed3f77afc43c4a64b5a"},
{file = "kiwisolver-1.4.4-pp37-pypy37_pp73-manylinux_2_12_x86_64.manylinux2010_x86_64.whl", hash = "sha256:787518a6789009c159453da4d6b683f468ef7a65bbde796bcea803ccf191058d"},
{file = "kiwisolver-1.4.4-pp37-pypy37_pp73-manylinux_2_17_aarch64.manylinux2014_aarch64.whl", hash = "sha256:da152d8cdcab0e56e4f45eb08b9aea6455845ec83172092f09b0e077ece2cf7a"},
{file = "kiwisolver-1.4.4-pp37-pypy37_pp73-win_amd64.whl", hash = "sha256:ecb1fa0db7bf4cff9dac752abb19505a233c7f16684c5826d1f11ebd9472b871"},
{file = "kiwisolver-1.4.4-pp38-pypy38_pp73-macosx_10_9_x86_64.whl", hash = "sha256:28bc5b299f48150b5f822ce68624e445040595a4ac3d59251703779836eceff9"},
{file = "kiwisolver-1.4.4-pp38-pypy38_pp73-manylinux_2_12_i686.manylinux2010_i686.whl", hash = "sha256:81e38381b782cc7e1e46c4e14cd997ee6040768101aefc8fa3c24a4cc58e98f8"},
{file = "kiwisolver-1.4.4-pp38-pypy38_pp73-manylinux_2_12_x86_64.manylinux2010_x86_64.whl", hash = "sha256:2a66fdfb34e05b705620dd567f5a03f239a088d5a3f321e7b6ac3239d22aa286"},
{file = "kiwisolver-1.4.4-pp38-pypy38_pp73-manylinux_2_17_aarch64.manylinux2014_aarch64.whl", hash = "sha256:872b8ca05c40d309ed13eb2e582cab0c5a05e81e987ab9c521bf05ad1d5cf5cb"},
{file = "kiwisolver-1.4.4-pp38-pypy38_pp73-win_amd64.whl", hash = "sha256:70e7c2e7b750585569564e2e5ca9845acfaa5da56ac46df68414f29fea97be9f"},
{file = "kiwisolver-1.4.4-pp39-pypy39_pp73-macosx_10_9_x86_64.whl", hash = "sha256:9f85003f5dfa867e86d53fac6f7e6f30c045673fa27b603c397753bebadc3008"},
{file = "kiwisolver-1.4.4-pp39-pypy39_pp73-manylinux_2_12_i686.manylinux2010_i686.manylinux_2_17_i686.manylinux2014_i686.whl", hash = "sha256:2e307eb9bd99801f82789b44bb45e9f541961831c7311521b13a6c85afc09767"},
{file = "kiwisolver-1.4.4-pp39-pypy39_pp73-manylinux_2_17_aarch64.manylinux2014_aarch64.whl", hash = "sha256:b1792d939ec70abe76f5054d3f36ed5656021dcad1322d1cc996d4e54165cef9"},
{file = "kiwisolver-1.4.4-pp39-pypy39_pp73-manylinux_2_17_x86_64.manylinux2014_x86_64.whl", hash = "sha256:f6cb459eea32a4e2cf18ba5fcece2dbdf496384413bc1bae15583f19e567f3b2"},
{file = "kiwisolver-1.4.4-pp39-pypy39_pp73-win_amd64.whl", hash = "sha256:36dafec3d6d6088d34e2de6b85f9d8e2324eb734162fba59d2ba9ed7a2043d5b"},
{file = "kiwisolver-1.4.4.tar.gz", hash = "sha256:d41997519fcba4a1e46eb4a2fe31bc12f0ff957b2b81bac28db24744f333e955"},
]
langcodes = [
{file = "langcodes-3.3.0-py3-none-any.whl", hash = "sha256:4d89fc9acb6e9c8fdef70bcdf376113a3db09b67285d9e1d534de6d8818e7e69"},
{file = "langcodes-3.3.0.tar.gz", hash = "sha256:794d07d5a28781231ac335a1561b8442f8648ca07cd518310aeb45d6f0807ef6"},
]
libclang = [
{file = "libclang-14.0.6-py2.py3-none-macosx_10_9_x86_64.whl", hash = "sha256:8791cf3c3b087c373a6d61e9199da7a541da922c9ddcfed1122090586b996d6e"},
{file = "libclang-14.0.6-py2.py3-none-macosx_11_0_arm64.whl", hash = "sha256:7b06fc76bd1e67c8b04b5719bf2ac5d6a323b289b245dfa9e468561d99538188"},
{file = "libclang-14.0.6-py2.py3-none-manylinux1_x86_64.whl", hash = "sha256:e429853939423f276a25140b0b702442d7da9a09e001c05e48df888336947614"},
{file = "libclang-14.0.6-py2.py3-none-manylinux2010_x86_64.whl", hash = "sha256:206d2789e4450a37d054e63b70451a6fc1873466397443fa13de2b3d4adb2796"},
{file = "libclang-14.0.6-py2.py3-none-manylinux2014_aarch64.whl", hash = "sha256:e2add1703129b2abe066fb1890afa880870a89fd6ab4ec5d2a7a8dc8d271677e"},
{file = "libclang-14.0.6-py2.py3-none-manylinux2014_armv7l.whl", hash = "sha256:5dd3c6fca1b007d308a4114afa8e4e9d32f32b2572520701d45fcc626ac5cd6c"},
{file = "libclang-14.0.6-py2.py3-none-musllinux_1_2_x86_64.whl", hash = "sha256:cfb0e892ebb5dff6bd498ab5778adb8581f26a00fd8347b3c76c989fe2fd04f7"},
{file = "libclang-14.0.6-py2.py3-none-win_amd64.whl", hash = "sha256:ea03c12675151837660cdd5dce65bd89320896ac3421efef43a36678f113ce95"},
{file = "libclang-14.0.6-py2.py3-none-win_arm64.whl", hash = "sha256:2e4303e04517fcd11173cb2e51a7070eed71e16ef45d4e26a82c5e881cac3d27"},
{file = "libclang-14.0.6.tar.gz", hash = "sha256:9052a8284d8846984f6fa826b1d7460a66d3b23a486d782633b42b6e3b418789"},
]
lightgbm = [
{file = "lightgbm-3.3.3-py3-none-macosx_10_15_x86_64.macosx_11_6_x86_64.macosx_12_0_x86_64.whl", hash = "sha256:27b0ae82549d6c59ede4fa3245f4b21a6bf71ab5ec5c55601cf5a962a18c6f80"},
{file = "lightgbm-3.3.3-py3-none-manylinux1_x86_64.whl", hash = "sha256:389edda68b7f24a1755a6af4dad06e16236e374e9de64253a105b12982b153e2"},
{file = "lightgbm-3.3.3-py3-none-manylinux2014_aarch64.whl", hash = "sha256:b0af55bd476785726eaacbd3c880f8168d362d4bba098790f55cd10fe928591b"},
{file = "lightgbm-3.3.3-py3-none-win_amd64.whl", hash = "sha256:b334dbcd670e3d87f4ff3cfe31d652ab18eb88ad9092a02010916320549b7d10"},
{file = "lightgbm-3.3.3.tar.gz", hash = "sha256:857e559ae84a22963ce2b62168292969d21add30bc9246a84d4e7eedae67966d"},
]
llvmlite = [
{file = "llvmlite-0.36.0-cp36-cp36m-macosx_10_9_x86_64.whl", hash = "sha256:cc0f9b9644b4ab0e4a5edb17f1531d791630c88858220d3cc688d6edf10da100"},
{file = "llvmlite-0.36.0-cp36-cp36m-manylinux2010_i686.whl", hash = "sha256:f7918dbac02b1ebbfd7302ad8e8307d7877ab57d782d5f04b70ff9696b53c21b"},
{file = "llvmlite-0.36.0-cp36-cp36m-manylinux2010_x86_64.whl", hash = "sha256:7768658646c418b9b3beccb7044277a608bc8c62b82a85e73c7e5c065e4157c2"},
{file = "llvmlite-0.36.0-cp36-cp36m-win32.whl", hash = "sha256:05f807209a360d39526d98141b6f281b9c7c771c77a4d1fc22002440642c8de2"},
{file = "llvmlite-0.36.0-cp36-cp36m-win_amd64.whl", hash = "sha256:d1fdd63c371626c25ad834e1c6297eb76cf2f093a40dbb401a87b6476ab4e34e"},
{file = "llvmlite-0.36.0-cp37-cp37m-macosx_10_9_x86_64.whl", hash = "sha256:7c4e7066447305d5095d0b0a9cae7b835d2f0fde143456b3124110eab0856426"},
{file = "llvmlite-0.36.0-cp37-cp37m-manylinux2010_i686.whl", hash = "sha256:9dad7e4bb042492914292aea3f4172eca84db731f9478250240955aedba95e08"},
{file = "llvmlite-0.36.0-cp37-cp37m-manylinux2010_x86_64.whl", hash = "sha256:1ce5bc0a638d874a08d4222be0a7e48e5df305d094c2ff8dec525ef32b581551"},
{file = "llvmlite-0.36.0-cp37-cp37m-win32.whl", hash = "sha256:dbedff0f6d417b374253a6bab39aa4b5364f1caab30c06ba8726904776fcf1cb"},
{file = "llvmlite-0.36.0-cp37-cp37m-win_amd64.whl", hash = "sha256:3b17fc4b0dd17bd29d7297d054e2915fad535889907c3f65232ee21f483447c5"},
{file = "llvmlite-0.36.0-cp38-cp38-macosx_10_9_x86_64.whl", hash = "sha256:b3a77e46e6053e2a86e607e87b97651dda81e619febb914824a927bff4e88737"},
{file = "llvmlite-0.36.0-cp38-cp38-manylinux2010_i686.whl", hash = "sha256:048a7c117641c9be87b90005684e64a6f33ea0897ebab1df8a01214a10d6e79a"},
{file = "llvmlite-0.36.0-cp38-cp38-manylinux2010_x86_64.whl", hash = "sha256:7db4b0eef93125af1c4092c64a3c73c7dc904101117ef53f8d78a1a499b8d5f4"},
{file = "llvmlite-0.36.0-cp38-cp38-win32.whl", hash = "sha256:50b1828bde514b31431b2bba1aa20b387f5625b81ad6e12fede430a04645e47a"},
{file = "llvmlite-0.36.0-cp38-cp38-win_amd64.whl", hash = "sha256:f608bae781b2d343e15e080c546468c5a6f35f57f0446923ea198dd21f23757e"},
{file = "llvmlite-0.36.0-cp39-cp39-macosx_10_9_x86_64.whl", hash = "sha256:6a3abc8a8889aeb06bf9c4a7e5df5bc7bb1aa0aedd91a599813809abeec80b5a"},
{file = "llvmlite-0.36.0-cp39-cp39-manylinux2010_i686.whl", hash = "sha256:705f0323d931684428bb3451549603299bb5e17dd60fb979d67c3807de0debc1"},
{file = "llvmlite-0.36.0-cp39-cp39-manylinux2010_x86_64.whl", hash = "sha256:5a6548b4899facb182145147185e9166c69826fb424895f227e6b7cf924a8da1"},
{file = "llvmlite-0.36.0-cp39-cp39-win32.whl", hash = "sha256:ff52fb9c2be66b95b0e67d56fce11038397e5be1ea410ee53f5f1175fdbb107a"},
{file = "llvmlite-0.36.0-cp39-cp39-win_amd64.whl", hash = "sha256:1dee416ea49fd338c74ec15c0c013e5273b0961528169af06ff90772614f7f6c"},
{file = "llvmlite-0.36.0.tar.gz", hash = "sha256:765128fdf5f149ed0b889ffbe2b05eb1717f8e20a5c87fa2b4018fbcce0fcfc9"},
]
locket = [
{file = "locket-1.0.0-py2.py3-none-any.whl", hash = "sha256:b6c819a722f7b6bd955b80781788e4a66a55628b858d347536b7e81325a3a5e3"},
{file = "locket-1.0.0.tar.gz", hash = "sha256:5c0d4c052a8bbbf750e056a8e65ccd309086f4f0f18a2eac306a8dfa4112a632"},
]
markdown = [
{file = "Markdown-3.4.1-py3-none-any.whl", hash = "sha256:08fb8465cffd03d10b9dd34a5c3fea908e20391a2a90b88d66362cb05beed186"},
{file = "Markdown-3.4.1.tar.gz", hash = "sha256:3b809086bb6efad416156e00a0da66fe47618a5d6918dd688f53f40c8e4cfeff"},
]
markupsafe = [
{file = "MarkupSafe-2.1.1-cp310-cp310-macosx_10_9_universal2.whl", hash = "sha256:86b1f75c4e7c2ac2ccdaec2b9022845dbb81880ca318bb7a0a01fbf7813e3812"},
{file = "MarkupSafe-2.1.1-cp310-cp310-macosx_10_9_x86_64.whl", hash = "sha256:f121a1420d4e173a5d96e47e9a0c0dcff965afdf1626d28de1460815f7c4ee7a"},
{file = "MarkupSafe-2.1.1-cp310-cp310-manylinux_2_17_aarch64.manylinux2014_aarch64.whl", hash = "sha256:a49907dd8420c5685cfa064a1335b6754b74541bbb3706c259c02ed65b644b3e"},
{file = "MarkupSafe-2.1.1-cp310-cp310-manylinux_2_17_x86_64.manylinux2014_x86_64.whl", hash = "sha256:10c1bfff05d95783da83491be968e8fe789263689c02724e0c691933c52994f5"},
{file = "MarkupSafe-2.1.1-cp310-cp310-manylinux_2_5_i686.manylinux1_i686.manylinux_2_17_i686.manylinux2014_i686.whl", hash = "sha256:b7bd98b796e2b6553da7225aeb61f447f80a1ca64f41d83612e6139ca5213aa4"},
{file = "MarkupSafe-2.1.1-cp310-cp310-musllinux_1_1_aarch64.whl", hash = "sha256:b09bf97215625a311f669476f44b8b318b075847b49316d3e28c08e41a7a573f"},
{file = "MarkupSafe-2.1.1-cp310-cp310-musllinux_1_1_i686.whl", hash = "sha256:694deca8d702d5db21ec83983ce0bb4b26a578e71fbdbd4fdcd387daa90e4d5e"},
{file = "MarkupSafe-2.1.1-cp310-cp310-musllinux_1_1_x86_64.whl", hash = "sha256:efc1913fd2ca4f334418481c7e595c00aad186563bbc1ec76067848c7ca0a933"},
{file = "MarkupSafe-2.1.1-cp310-cp310-win32.whl", hash = "sha256:4a33dea2b688b3190ee12bd7cfa29d39c9ed176bda40bfa11099a3ce5d3a7ac6"},
{file = "MarkupSafe-2.1.1-cp310-cp310-win_amd64.whl", hash = "sha256:dda30ba7e87fbbb7eab1ec9f58678558fd9a6b8b853530e176eabd064da81417"},
{file = "MarkupSafe-2.1.1-cp37-cp37m-macosx_10_9_x86_64.whl", hash = "sha256:671cd1187ed5e62818414afe79ed29da836dde67166a9fac6d435873c44fdd02"},
{file = "MarkupSafe-2.1.1-cp37-cp37m-manylinux_2_17_aarch64.manylinux2014_aarch64.whl", hash = "sha256:3799351e2336dc91ea70b034983ee71cf2f9533cdff7c14c90ea126bfd95d65a"},
{file = "MarkupSafe-2.1.1-cp37-cp37m-manylinux_2_17_x86_64.manylinux2014_x86_64.whl", hash = "sha256:e72591e9ecd94d7feb70c1cbd7be7b3ebea3f548870aa91e2732960fa4d57a37"},
{file = "MarkupSafe-2.1.1-cp37-cp37m-manylinux_2_5_i686.manylinux1_i686.manylinux_2_17_i686.manylinux2014_i686.whl", hash = "sha256:6fbf47b5d3728c6aea2abb0589b5d30459e369baa772e0f37a0320185e87c980"},
{file = "MarkupSafe-2.1.1-cp37-cp37m-musllinux_1_1_aarch64.whl", hash = "sha256:d5ee4f386140395a2c818d149221149c54849dfcfcb9f1debfe07a8b8bd63f9a"},
{file = "MarkupSafe-2.1.1-cp37-cp37m-musllinux_1_1_i686.whl", hash = "sha256:bcb3ed405ed3222f9904899563d6fc492ff75cce56cba05e32eff40e6acbeaa3"},
{file = "MarkupSafe-2.1.1-cp37-cp37m-musllinux_1_1_x86_64.whl", hash = "sha256:e1c0b87e09fa55a220f058d1d49d3fb8df88fbfab58558f1198e08c1e1de842a"},
{file = "MarkupSafe-2.1.1-cp37-cp37m-win32.whl", hash = "sha256:8dc1c72a69aa7e082593c4a203dcf94ddb74bb5c8a731e4e1eb68d031e8498ff"},
{file = "MarkupSafe-2.1.1-cp37-cp37m-win_amd64.whl", hash = "sha256:97a68e6ada378df82bc9f16b800ab77cbf4b2fada0081794318520138c088e4a"},
{file = "MarkupSafe-2.1.1-cp38-cp38-macosx_10_9_universal2.whl", hash = "sha256:e8c843bbcda3a2f1e3c2ab25913c80a3c5376cd00c6e8c4a86a89a28c8dc5452"},
{file = "MarkupSafe-2.1.1-cp38-cp38-macosx_10_9_x86_64.whl", hash = "sha256:0212a68688482dc52b2d45013df70d169f542b7394fc744c02a57374a4207003"},
{file = "MarkupSafe-2.1.1-cp38-cp38-manylinux_2_17_aarch64.manylinux2014_aarch64.whl", hash = "sha256:8e576a51ad59e4bfaac456023a78f6b5e6e7651dcd383bcc3e18d06f9b55d6d1"},
{file = "MarkupSafe-2.1.1-cp38-cp38-manylinux_2_17_x86_64.manylinux2014_x86_64.whl", hash = "sha256:4b9fe39a2ccc108a4accc2676e77da025ce383c108593d65cc909add5c3bd601"},
{file = "MarkupSafe-2.1.1-cp38-cp38-manylinux_2_5_i686.manylinux1_i686.manylinux_2_17_i686.manylinux2014_i686.whl", hash = "sha256:96e37a3dc86e80bf81758c152fe66dbf60ed5eca3d26305edf01892257049925"},
{file = "MarkupSafe-2.1.1-cp38-cp38-musllinux_1_1_aarch64.whl", hash = "sha256:6d0072fea50feec76a4c418096652f2c3238eaa014b2f94aeb1d56a66b41403f"},
{file = "MarkupSafe-2.1.1-cp38-cp38-musllinux_1_1_i686.whl", hash = "sha256:089cf3dbf0cd6c100f02945abeb18484bd1ee57a079aefd52cffd17fba910b88"},
{file = "MarkupSafe-2.1.1-cp38-cp38-musllinux_1_1_x86_64.whl", hash = "sha256:6a074d34ee7a5ce3effbc526b7083ec9731bb3cbf921bbe1d3005d4d2bdb3a63"},
{file = "MarkupSafe-2.1.1-cp38-cp38-win32.whl", hash = "sha256:421be9fbf0ffe9ffd7a378aafebbf6f4602d564d34be190fc19a193232fd12b1"},
{file = "MarkupSafe-2.1.1-cp38-cp38-win_amd64.whl", hash = "sha256:fc7b548b17d238737688817ab67deebb30e8073c95749d55538ed473130ec0c7"},
{file = "MarkupSafe-2.1.1-cp39-cp39-macosx_10_9_universal2.whl", hash = "sha256:e04e26803c9c3851c931eac40c695602c6295b8d432cbe78609649ad9bd2da8a"},
{file = "MarkupSafe-2.1.1-cp39-cp39-macosx_10_9_x86_64.whl", hash = "sha256:b87db4360013327109564f0e591bd2a3b318547bcef31b468a92ee504d07ae4f"},
{file = "MarkupSafe-2.1.1-cp39-cp39-manylinux_2_17_aarch64.manylinux2014_aarch64.whl", hash = "sha256:99a2a507ed3ac881b975a2976d59f38c19386d128e7a9a18b7df6fff1fd4c1d6"},
{file = "MarkupSafe-2.1.1-cp39-cp39-manylinux_2_17_x86_64.manylinux2014_x86_64.whl", hash = "sha256:56442863ed2b06d19c37f94d999035e15ee982988920e12a5b4ba29b62ad1f77"},
{file = "MarkupSafe-2.1.1-cp39-cp39-manylinux_2_5_i686.manylinux1_i686.manylinux_2_17_i686.manylinux2014_i686.whl", hash = "sha256:3ce11ee3f23f79dbd06fb3d63e2f6af7b12db1d46932fe7bd8afa259a5996603"},
{file = "MarkupSafe-2.1.1-cp39-cp39-musllinux_1_1_aarch64.whl", hash = "sha256:33b74d289bd2f5e527beadcaa3f401e0df0a89927c1559c8566c066fa4248ab7"},
{file = "MarkupSafe-2.1.1-cp39-cp39-musllinux_1_1_i686.whl", hash = "sha256:43093fb83d8343aac0b1baa75516da6092f58f41200907ef92448ecab8825135"},
{file = "MarkupSafe-2.1.1-cp39-cp39-musllinux_1_1_x86_64.whl", hash = "sha256:8e3dcf21f367459434c18e71b2a9532d96547aef8a871872a5bd69a715c15f96"},
{file = "MarkupSafe-2.1.1-cp39-cp39-win32.whl", hash = "sha256:d4306c36ca495956b6d568d276ac11fdd9c30a36f1b6eb928070dc5360b22e1c"},
{file = "MarkupSafe-2.1.1-cp39-cp39-win_amd64.whl", hash = "sha256:46d00d6cfecdde84d40e572d63735ef81423ad31184100411e6e3388d405e247"},
{file = "MarkupSafe-2.1.1.tar.gz", hash = "sha256:7f91197cc9e48f989d12e4e6fbc46495c446636dfc81b9ccf50bb0ec74b91d4b"},
]
matplotlib = [
{file = "matplotlib-3.6.2-cp310-cp310-macosx_10_12_universal2.whl", hash = "sha256:8d0068e40837c1d0df6e3abf1cdc9a34a6d2611d90e29610fa1d2455aeb4e2e5"},
{file = "matplotlib-3.6.2-cp310-cp310-macosx_10_12_x86_64.whl", hash = "sha256:252957e208c23db72ca9918cb33e160c7833faebf295aaedb43f5b083832a267"},
{file = "matplotlib-3.6.2-cp310-cp310-macosx_11_0_arm64.whl", hash = "sha256:d50e8c1e571ee39b5dfbc295c11ad65988879f68009dd281a6e1edbc2ff6c18c"},
{file = "matplotlib-3.6.2-cp310-cp310-manylinux_2_17_aarch64.manylinux2014_aarch64.whl", hash = "sha256:d840adcad7354be6f2ec28d0706528b0026e4c3934cc6566b84eac18633eab1b"},
{file = "matplotlib-3.6.2-cp310-cp310-manylinux_2_17_i686.manylinux2014_i686.whl", hash = "sha256:78ec3c3412cf277e6252764ee4acbdbec6920cc87ad65862272aaa0e24381eee"},
{file = "matplotlib-3.6.2-cp310-cp310-manylinux_2_17_x86_64.manylinux2014_x86_64.whl", hash = "sha256:9347cc6822f38db2b1d1ce992f375289670e595a2d1c15961aacbe0977407dfc"},
{file = "matplotlib-3.6.2-cp310-cp310-win32.whl", hash = "sha256:e0bbee6c2a5bf2a0017a9b5e397babb88f230e6f07c3cdff4a4c4bc75ed7c617"},
{file = "matplotlib-3.6.2-cp310-cp310-win_amd64.whl", hash = "sha256:8a0ae37576ed444fe853709bdceb2be4c7df6f7acae17b8378765bd28e61b3ae"},
{file = "matplotlib-3.6.2-cp311-cp311-macosx_10_12_universal2.whl", hash = "sha256:5ecfc6559132116dedfc482d0ad9df8a89dc5909eebffd22f3deb684132d002f"},
{file = "matplotlib-3.6.2-cp311-cp311-macosx_10_12_x86_64.whl", hash = "sha256:9f335e5625feb90e323d7e3868ec337f7b9ad88b5d633f876e3b778813021dab"},
{file = "matplotlib-3.6.2-cp311-cp311-macosx_11_0_arm64.whl", hash = "sha256:b2604c6450f9dd2c42e223b1f5dca9643a23cfecc9fde4a94bb38e0d2693b136"},
{file = "matplotlib-3.6.2-cp311-cp311-manylinux_2_17_aarch64.manylinux2014_aarch64.whl", hash = "sha256:e5afe0a7ea0e3a7a257907060bee6724a6002b7eec55d0db16fd32409795f3e1"},
{file = "matplotlib-3.6.2-cp311-cp311-manylinux_2_17_i686.manylinux2014_i686.whl", hash = "sha256:ca0e7a658fbafcddcaefaa07ba8dae9384be2343468a8e011061791588d839fa"},
{file = "matplotlib-3.6.2-cp311-cp311-manylinux_2_17_x86_64.manylinux2014_x86_64.whl", hash = "sha256:32d29c8c26362169c80c5718ce367e8c64f4dd068a424e7110df1dd2ed7bd428"},
{file = "matplotlib-3.6.2-cp311-cp311-win32.whl", hash = "sha256:5024b8ed83d7f8809982d095d8ab0b179bebc07616a9713f86d30cf4944acb73"},
{file = "matplotlib-3.6.2-cp311-cp311-win_amd64.whl", hash = "sha256:52c2bdd7cd0bf9d5ccdf9c1816568fd4ccd51a4d82419cc5480f548981b47dd0"},
{file = "matplotlib-3.6.2-cp38-cp38-macosx_10_12_universal2.whl", hash = "sha256:8a8dbe2cb7f33ff54b16bb5c500673502a35f18ac1ed48625e997d40c922f9cc"},
{file = "matplotlib-3.6.2-cp38-cp38-macosx_10_12_x86_64.whl", hash = "sha256:380d48c15ec41102a2b70858ab1dedfa33eb77b2c0982cb65a200ae67a48e9cb"},
{file = "matplotlib-3.6.2-cp38-cp38-macosx_11_0_arm64.whl", hash = "sha256:0844523dfaaff566e39dbfa74e6f6dc42e92f7a365ce80929c5030b84caa563a"},
{file = "matplotlib-3.6.2-cp38-cp38-manylinux_2_12_i686.manylinux2010_i686.whl", hash = "sha256:7f716b6af94dc1b6b97c46401774472f0867e44595990fe80a8ba390f7a0a028"},
{file = "matplotlib-3.6.2-cp38-cp38-manylinux_2_12_x86_64.manylinux2010_x86_64.whl", hash = "sha256:74153008bd24366cf099d1f1e83808d179d618c4e32edb0d489d526523a94d9f"},
{file = "matplotlib-3.6.2-cp38-cp38-manylinux_2_17_aarch64.manylinux2014_aarch64.whl", hash = "sha256:f41e57ad63d336fe50d3a67bb8eaa26c09f6dda6a59f76777a99b8ccd8e26aec"},
{file = "matplotlib-3.6.2-cp38-cp38-win32.whl", hash = "sha256:d0e9ac04065a814d4cf2c6791a2ad563f739ae3ae830d716d54245c2b96fead6"},
{file = "matplotlib-3.6.2-cp38-cp38-win_amd64.whl", hash = "sha256:8a9d899953c722b9afd7e88dbefd8fb276c686c3116a43c577cfabf636180558"},
{file = "matplotlib-3.6.2-cp39-cp39-macosx_10_12_universal2.whl", hash = "sha256:f04f97797df35e442ed09f529ad1235d1f1c0f30878e2fe09a2676b71a8801e0"},
{file = "matplotlib-3.6.2-cp39-cp39-macosx_10_12_x86_64.whl", hash = "sha256:3964934731fd7a289a91d315919cf757f293969a4244941ab10513d2351b4e83"},
{file = "matplotlib-3.6.2-cp39-cp39-macosx_11_0_arm64.whl", hash = "sha256:168093410b99f647ba61361b208f7b0d64dde1172b5b1796d765cd243cadb501"},
{file = "matplotlib-3.6.2-cp39-cp39-manylinux_2_17_aarch64.manylinux2014_aarch64.whl", hash = "sha256:5e16dcaecffd55b955aa5e2b8a804379789c15987e8ebd2f32f01398a81e975b"},
{file = "matplotlib-3.6.2-cp39-cp39-manylinux_2_17_i686.manylinux2014_i686.whl", hash = "sha256:83dc89c5fd728fdb03b76f122f43b4dcee8c61f1489e232d9ad0f58020523e1c"},
{file = "matplotlib-3.6.2-cp39-cp39-manylinux_2_17_x86_64.manylinux2014_x86_64.whl", hash = "sha256:795ad83940732b45d39b82571f87af0081c120feff2b12e748d96bb191169e33"},
{file = "matplotlib-3.6.2-cp39-cp39-win32.whl", hash = "sha256:19d61ee6414c44a04addbe33005ab1f87539d9f395e25afcbe9a3c50ce77c65c"},
{file = "matplotlib-3.6.2-cp39-cp39-win_amd64.whl", hash = "sha256:5ba73aa3aca35d2981e0b31230d58abb7b5d7ca104e543ae49709208d8ce706a"},
{file = "matplotlib-3.6.2-pp38-pypy38_pp73-macosx_10_12_x86_64.whl", hash = "sha256:1836f366272b1557a613f8265db220eb8dd883202bbbabe01bad5a4eadfd0c95"},
{file = "matplotlib-3.6.2-pp38-pypy38_pp73-manylinux_2_17_i686.manylinux2014_i686.whl", hash = "sha256:0eda9d1b43f265da91fb9ae10d6922b5a986e2234470a524e6b18f14095b20d2"},
{file = "matplotlib-3.6.2-pp38-pypy38_pp73-manylinux_2_17_x86_64.manylinux2014_x86_64.whl", hash = "sha256:ec9be0f4826cdb3a3a517509dcc5f87f370251b76362051ab59e42b6b765f8c4"},
{file = "matplotlib-3.6.2-pp38-pypy38_pp73-win_amd64.whl", hash = "sha256:3cef89888a466228fc4e4b2954e740ce8e9afde7c4315fdd18caa1b8de58ca17"},
{file = "matplotlib-3.6.2-pp39-pypy39_pp73-macosx_10_12_x86_64.whl", hash = "sha256:54fa9fe27f5466b86126ff38123261188bed568c1019e4716af01f97a12fe812"},
{file = "matplotlib-3.6.2-pp39-pypy39_pp73-manylinux_2_17_i686.manylinux2014_i686.whl", hash = "sha256:e68be81cd8c22b029924b6d0ee814c337c0e706b8d88495a617319e5dd5441c3"},
{file = "matplotlib-3.6.2-pp39-pypy39_pp73-manylinux_2_17_x86_64.manylinux2014_x86_64.whl", hash = "sha256:b0ca2c60d3966dfd6608f5f8c49b8a0fcf76de6654f2eda55fc6ef038d5a6f27"},
{file = "matplotlib-3.6.2-pp39-pypy39_pp73-win_amd64.whl", hash = "sha256:4426c74761790bff46e3d906c14c7aab727543293eed5a924300a952e1a3a3c1"},
{file = "matplotlib-3.6.2.tar.gz", hash = "sha256:b03fd10a1709d0101c054883b550f7c4c5e974f751e2680318759af005964990"},
]
matplotlib-inline = [
{file = "matplotlib-inline-0.1.6.tar.gz", hash = "sha256:f887e5f10ba98e8d2b150ddcf4702c1e5f8b3a20005eb0f74bfdbd360ee6f304"},
{file = "matplotlib_inline-0.1.6-py3-none-any.whl", hash = "sha256:f1f41aab5328aa5aaea9b16d083b128102f8712542f819fe7e6a420ff581b311"},
]
mccabe = [
{file = "mccabe-0.6.1-py2.py3-none-any.whl", hash = "sha256:ab8a6258860da4b6677da4bd2fe5dc2c659cff31b3ee4f7f5d64e79735b80d42"},
{file = "mccabe-0.6.1.tar.gz", hash = "sha256:dd8d182285a0fe56bace7f45b5e7d1a6ebcbf524e8f3bd87eb0f125271b8831f"},
]
mistune = [
{file = "mistune-2.0.4-py2.py3-none-any.whl", hash = "sha256:182cc5ee6f8ed1b807de6b7bb50155df7b66495412836b9a74c8fbdfc75fe36d"},
{file = "mistune-2.0.4.tar.gz", hash = "sha256:9ee0a66053e2267aba772c71e06891fa8f1af6d4b01d5e84e267b4570d4d9808"},
]
mpmath = [
{file = "mpmath-1.2.1-py3-none-any.whl", hash = "sha256:604bc21bd22d2322a177c73bdb573994ef76e62edd595d17e00aff24b0667e5c"},
{file = "mpmath-1.2.1.tar.gz", hash = "sha256:79ffb45cf9f4b101a807595bcb3e72e0396202e0b1d25d689134b48c4216a81a"},
]
msgpack = [
{file = "msgpack-1.0.4-cp310-cp310-macosx_10_9_universal2.whl", hash = "sha256:4ab251d229d10498e9a2f3b1e68ef64cb393394ec477e3370c457f9430ce9250"},
{file = "msgpack-1.0.4-cp310-cp310-macosx_10_9_x86_64.whl", hash = "sha256:112b0f93202d7c0fef0b7810d465fde23c746a2d482e1e2de2aafd2ce1492c88"},
{file = "msgpack-1.0.4-cp310-cp310-macosx_11_0_arm64.whl", hash = "sha256:002b5c72b6cd9b4bafd790f364b8480e859b4712e91f43014fe01e4f957b8467"},
{file = "msgpack-1.0.4-cp310-cp310-manylinux_2_17_aarch64.manylinux2014_aarch64.whl", hash = "sha256:35bc0faa494b0f1d851fd29129b2575b2e26d41d177caacd4206d81502d4c6a6"},
{file = "msgpack-1.0.4-cp310-cp310-manylinux_2_17_x86_64.manylinux2014_x86_64.whl", hash = "sha256:4733359808c56d5d7756628736061c432ded018e7a1dff2d35a02439043321aa"},
{file = "msgpack-1.0.4-cp310-cp310-manylinux_2_5_i686.manylinux1_i686.manylinux_2_17_i686.manylinux2014_i686.whl", hash = "sha256:eb514ad14edf07a1dbe63761fd30f89ae79b42625731e1ccf5e1f1092950eaa6"},
{file = "msgpack-1.0.4-cp310-cp310-musllinux_1_1_aarch64.whl", hash = "sha256:c23080fdeec4716aede32b4e0ef7e213c7b1093eede9ee010949f2a418ced6ba"},
{file = "msgpack-1.0.4-cp310-cp310-musllinux_1_1_i686.whl", hash = "sha256:49565b0e3d7896d9ea71d9095df15b7f75a035c49be733051c34762ca95bbf7e"},
{file = "msgpack-1.0.4-cp310-cp310-musllinux_1_1_x86_64.whl", hash = "sha256:aca0f1644d6b5a73eb3e74d4d64d5d8c6c3d577e753a04c9e9c87d07692c58db"},
{file = "msgpack-1.0.4-cp310-cp310-win32.whl", hash = "sha256:0dfe3947db5fb9ce52aaea6ca28112a170db9eae75adf9339a1aec434dc954ef"},
{file = "msgpack-1.0.4-cp310-cp310-win_amd64.whl", hash = "sha256:4dea20515f660aa6b7e964433b1808d098dcfcabbebeaaad240d11f909298075"},
{file = "msgpack-1.0.4-cp36-cp36m-macosx_10_9_x86_64.whl", hash = "sha256:e83f80a7fec1a62cf4e6c9a660e39c7f878f603737a0cdac8c13131d11d97f52"},
{file = "msgpack-1.0.4-cp36-cp36m-manylinux_2_17_aarch64.manylinux2014_aarch64.whl", hash = "sha256:3c11a48cf5e59026ad7cb0dc29e29a01b5a66a3e333dc11c04f7e991fc5510a9"},
{file = "msgpack-1.0.4-cp36-cp36m-manylinux_2_17_x86_64.manylinux2014_x86_64.whl", hash = "sha256:1276e8f34e139aeff1c77a3cefb295598b504ac5314d32c8c3d54d24fadb94c9"},
{file = "msgpack-1.0.4-cp36-cp36m-manylinux_2_5_i686.manylinux1_i686.manylinux_2_17_i686.manylinux2014_i686.whl", hash = "sha256:6c9566f2c39ccced0a38d37c26cc3570983b97833c365a6044edef3574a00c08"},
{file = "msgpack-1.0.4-cp36-cp36m-musllinux_1_1_aarch64.whl", hash = "sha256:fcb8a47f43acc113e24e910399376f7277cf8508b27e5b88499f053de6b115a8"},
{file = "msgpack-1.0.4-cp36-cp36m-musllinux_1_1_i686.whl", hash = "sha256:76ee788122de3a68a02ed6f3a16bbcd97bc7c2e39bd4d94be2f1821e7c4a64e6"},
{file = "msgpack-1.0.4-cp36-cp36m-musllinux_1_1_x86_64.whl", hash = "sha256:0a68d3ac0104e2d3510de90a1091720157c319ceeb90d74f7b5295a6bee51bae"},
{file = "msgpack-1.0.4-cp36-cp36m-win32.whl", hash = "sha256:85f279d88d8e833ec015650fd15ae5eddce0791e1e8a59165318f371158efec6"},
{file = "msgpack-1.0.4-cp36-cp36m-win_amd64.whl", hash = "sha256:c1683841cd4fa45ac427c18854c3ec3cd9b681694caf5bff04edb9387602d661"},
{file = "msgpack-1.0.4-cp37-cp37m-macosx_10_9_x86_64.whl", hash = "sha256:a75dfb03f8b06f4ab093dafe3ddcc2d633259e6c3f74bb1b01996f5d8aa5868c"},
{file = "msgpack-1.0.4-cp37-cp37m-manylinux_2_17_aarch64.manylinux2014_aarch64.whl", hash = "sha256:9667bdfdf523c40d2511f0e98a6c9d3603be6b371ae9a238b7ef2dc4e7a427b0"},
{file = "msgpack-1.0.4-cp37-cp37m-manylinux_2_17_x86_64.manylinux2014_x86_64.whl", hash = "sha256:11184bc7e56fd74c00ead4f9cc9a3091d62ecb96e97653add7a879a14b003227"},
{file = "msgpack-1.0.4-cp37-cp37m-manylinux_2_5_i686.manylinux1_i686.manylinux_2_17_i686.manylinux2014_i686.whl", hash = "sha256:ac5bd7901487c4a1dd51a8c58f2632b15d838d07ceedaa5e4c080f7190925bff"},
{file = "msgpack-1.0.4-cp37-cp37m-musllinux_1_1_aarch64.whl", hash = "sha256:1e91d641d2bfe91ba4c52039adc5bccf27c335356055825c7f88742c8bb900dd"},
{file = "msgpack-1.0.4-cp37-cp37m-musllinux_1_1_i686.whl", hash = "sha256:2a2df1b55a78eb5f5b7d2a4bb221cd8363913830145fad05374a80bf0877cb1e"},
{file = "msgpack-1.0.4-cp37-cp37m-musllinux_1_1_x86_64.whl", hash = "sha256:545e3cf0cf74f3e48b470f68ed19551ae6f9722814ea969305794645da091236"},
{file = "msgpack-1.0.4-cp37-cp37m-win32.whl", hash = "sha256:2cc5ca2712ac0003bcb625c96368fd08a0f86bbc1a5578802512d87bc592fe44"},
{file = "msgpack-1.0.4-cp37-cp37m-win_amd64.whl", hash = "sha256:eba96145051ccec0ec86611fe9cf693ce55f2a3ce89c06ed307de0e085730ec1"},
{file = "msgpack-1.0.4-cp38-cp38-macosx_10_9_universal2.whl", hash = "sha256:7760f85956c415578c17edb39eed99f9181a48375b0d4a94076d84148cf67b2d"},
{file = "msgpack-1.0.4-cp38-cp38-macosx_10_9_x86_64.whl", hash = "sha256:449e57cc1ff18d3b444eb554e44613cffcccb32805d16726a5494038c3b93dab"},
{file = "msgpack-1.0.4-cp38-cp38-macosx_11_0_arm64.whl", hash = "sha256:d603de2b8d2ea3f3bcb2efe286849aa7a81531abc52d8454da12f46235092bcb"},
{file = "msgpack-1.0.4-cp38-cp38-manylinux_2_17_aarch64.manylinux2014_aarch64.whl", hash = "sha256:48f5d88c99f64c456413d74a975bd605a9b0526293218a3b77220a2c15458ba9"},
{file = "msgpack-1.0.4-cp38-cp38-manylinux_2_17_x86_64.manylinux2014_x86_64.whl", hash = "sha256:6916c78f33602ecf0509cc40379271ba0f9ab572b066bd4bdafd7434dee4bc6e"},
{file = "msgpack-1.0.4-cp38-cp38-manylinux_2_5_i686.manylinux1_i686.manylinux_2_17_i686.manylinux2014_i686.whl", hash = "sha256:81fc7ba725464651190b196f3cd848e8553d4d510114a954681fd0b9c479d7e1"},
{file = "msgpack-1.0.4-cp38-cp38-musllinux_1_1_aarch64.whl", hash = "sha256:d5b5b962221fa2c5d3a7f8133f9abffc114fe218eb4365e40f17732ade576c8e"},
{file = "msgpack-1.0.4-cp38-cp38-musllinux_1_1_i686.whl", hash = "sha256:77ccd2af37f3db0ea59fb280fa2165bf1b096510ba9fe0cc2bf8fa92a22fdb43"},
{file = "msgpack-1.0.4-cp38-cp38-musllinux_1_1_x86_64.whl", hash = "sha256:b17be2478b622939e39b816e0aa8242611cc8d3583d1cd8ec31b249f04623243"},
{file = "msgpack-1.0.4-cp38-cp38-win32.whl", hash = "sha256:2bb8cdf50dd623392fa75525cce44a65a12a00c98e1e37bf0fb08ddce2ff60d2"},
{file = "msgpack-1.0.4-cp38-cp38-win_amd64.whl", hash = "sha256:26b8feaca40a90cbe031b03d82b2898bf560027160d3eae1423f4a67654ec5d6"},
{file = "msgpack-1.0.4-cp39-cp39-macosx_10_9_universal2.whl", hash = "sha256:462497af5fd4e0edbb1559c352ad84f6c577ffbbb708566a0abaaa84acd9f3ae"},
{file = "msgpack-1.0.4-cp39-cp39-macosx_10_9_x86_64.whl", hash = "sha256:2999623886c5c02deefe156e8f869c3b0aaeba14bfc50aa2486a0415178fce55"},
{file = "msgpack-1.0.4-cp39-cp39-macosx_11_0_arm64.whl", hash = "sha256:f0029245c51fd9473dc1aede1160b0a29f4a912e6b1dd353fa6d317085b219da"},
{file = "msgpack-1.0.4-cp39-cp39-manylinux_2_17_aarch64.manylinux2014_aarch64.whl", hash = "sha256:ed6f7b854a823ea44cf94919ba3f727e230da29feb4a99711433f25800cf747f"},
{file = "msgpack-1.0.4-cp39-cp39-manylinux_2_17_x86_64.manylinux2014_x86_64.whl", hash = "sha256:0df96d6eaf45ceca04b3f3b4b111b86b33785683d682c655063ef8057d61fd92"},
{file = "msgpack-1.0.4-cp39-cp39-manylinux_2_5_i686.manylinux1_i686.manylinux_2_17_i686.manylinux2014_i686.whl", hash = "sha256:6a4192b1ab40f8dca3f2877b70e63799d95c62c068c84dc028b40a6cb03ccd0f"},
{file = "msgpack-1.0.4-cp39-cp39-musllinux_1_1_aarch64.whl", hash = "sha256:0e3590f9fb9f7fbc36df366267870e77269c03172d086fa76bb4eba8b2b46624"},
{file = "msgpack-1.0.4-cp39-cp39-musllinux_1_1_i686.whl", hash = "sha256:1576bd97527a93c44fa856770197dec00d223b0b9f36ef03f65bac60197cedf8"},
{file = "msgpack-1.0.4-cp39-cp39-musllinux_1_1_x86_64.whl", hash = "sha256:63e29d6e8c9ca22b21846234913c3466b7e4ee6e422f205a2988083de3b08cae"},
{file = "msgpack-1.0.4-cp39-cp39-win32.whl", hash = "sha256:fb62ea4b62bfcb0b380d5680f9a4b3f9a2d166d9394e9bbd9666c0ee09a3645c"},
{file = "msgpack-1.0.4-cp39-cp39-win_amd64.whl", hash = "sha256:4d5834a2a48965a349da1c5a79760d94a1a0172fbb5ab6b5b33cbf8447e109ce"},
{file = "msgpack-1.0.4.tar.gz", hash = "sha256:f5d869c18f030202eb412f08b28d2afeea553d6613aee89e200d7aca7ef01f5f"},
]
multiprocess = [
{file = "multiprocess-0.70.14-pp37-pypy37_pp73-macosx_10_9_x86_64.whl", hash = "sha256:560a27540daef4ce8b24ed3cc2496a3c670df66c96d02461a4da67473685adf3"},
{file = "multiprocess-0.70.14-pp37-pypy37_pp73-manylinux_2_24_i686.whl", hash = "sha256:bfbbfa36f400b81d1978c940616bc77776424e5e34cb0c94974b178d727cfcd5"},
{file = "multiprocess-0.70.14-pp37-pypy37_pp73-manylinux_2_24_x86_64.whl", hash = "sha256:89fed99553a04ec4f9067031f83a886d7fdec5952005551a896a4b6a59575bb9"},
{file = "multiprocess-0.70.14-pp38-pypy38_pp73-macosx_10_9_x86_64.whl", hash = "sha256:40a5e3685462079e5fdee7c6789e3ef270595e1755199f0d50685e72523e1d2a"},
{file = "multiprocess-0.70.14-pp38-pypy38_pp73-manylinux_2_24_i686.whl", hash = "sha256:44936b2978d3f2648727b3eaeab6d7fa0bedf072dc5207bf35a96d5ee7c004cf"},
{file = "multiprocess-0.70.14-pp38-pypy38_pp73-manylinux_2_24_x86_64.whl", hash = "sha256:e628503187b5d494bf29ffc52d3e1e57bb770ce7ce05d67c4bbdb3a0c7d3b05f"},
{file = "multiprocess-0.70.14-pp39-pypy39_pp73-macosx_10_9_x86_64.whl", hash = "sha256:0d5da0fc84aacb0e4bd69c41b31edbf71b39fe2fb32a54eaedcaea241050855c"},
{file = "multiprocess-0.70.14-pp39-pypy39_pp73-manylinux_2_24_i686.whl", hash = "sha256:6a7b03a5b98e911a7785b9116805bd782815c5e2bd6c91c6a320f26fd3e7b7ad"},
{file = "multiprocess-0.70.14-pp39-pypy39_pp73-manylinux_2_24_x86_64.whl", hash = "sha256:cea5bdedd10aace3c660fedeac8b087136b4366d4ee49a30f1ebf7409bce00ae"},
{file = "multiprocess-0.70.14-py310-none-any.whl", hash = "sha256:7dc1f2f6a1d34894c8a9a013fbc807971e336e7cc3f3ff233e61b9dc679b3b5c"},
{file = "multiprocess-0.70.14-py37-none-any.whl", hash = "sha256:93a8208ca0926d05cdbb5b9250a604c401bed677579e96c14da3090beb798193"},
{file = "multiprocess-0.70.14-py38-none-any.whl", hash = "sha256:6725bc79666bbd29a73ca148a0fb5f4ea22eed4a8f22fce58296492a02d18a7b"},
{file = "multiprocess-0.70.14-py39-none-any.whl", hash = "sha256:63cee628b74a2c0631ef15da5534c8aedbc10c38910b9c8b18dcd327528d1ec7"},
{file = "multiprocess-0.70.14.tar.gz", hash = "sha256:3eddafc12f2260d27ae03fe6069b12570ab4764ab59a75e81624fac453fbf46a"},
]
murmurhash = [
{file = "murmurhash-1.0.9-cp310-cp310-macosx_10_9_x86_64.whl", hash = "sha256:697ed01454d92681c7ae26eb1adcdc654b54062bcc59db38ed03cad71b23d449"},
{file = "murmurhash-1.0.9-cp310-cp310-macosx_11_0_arm64.whl", hash = "sha256:5ef31b5c11be2c064dbbdd0e22ab3effa9ceb5b11ae735295c717c120087dd94"},
{file = "murmurhash-1.0.9-cp310-cp310-manylinux_2_17_aarch64.manylinux2014_aarch64.whl", hash = "sha256:d7a2bd203377a31bbb2d83fe3f968756d6c9bbfa36c64c6ebfc3c6494fc680bc"},
{file = "murmurhash-1.0.9-cp310-cp310-manylinux_2_5_x86_64.manylinux1_x86_64.manylinux_2_17_x86_64.manylinux2014_x86_64.whl", hash = "sha256:0eb0f8e652431ea238c11bcb671fef5c03aff0544bf7e098df81ea4b6d495405"},
{file = "murmurhash-1.0.9-cp310-cp310-win_amd64.whl", hash = "sha256:cf0b3fe54dca598f5b18c9951e70812e070ecb4c0672ad2cc32efde8a33b3df6"},
{file = "murmurhash-1.0.9-cp311-cp311-macosx_10_9_x86_64.whl", hash = "sha256:5dc41be79ba4d09aab7e9110a8a4d4b37b184b63767b1b247411667cdb1057a3"},
{file = "murmurhash-1.0.9-cp311-cp311-macosx_11_0_arm64.whl", hash = "sha256:c0f84ecdf37c06eda0222f2f9e81c0974e1a7659c35b755ab2fdc642ebd366db"},
{file = "murmurhash-1.0.9-cp311-cp311-manylinux_2_17_aarch64.manylinux2014_aarch64.whl", hash = "sha256:241693c1c819148eac29d7882739b1099c891f1f7431127b2652c23f81722cec"},
{file = "murmurhash-1.0.9-cp311-cp311-manylinux_2_5_x86_64.manylinux1_x86_64.manylinux_2_17_x86_64.manylinux2014_x86_64.whl", hash = "sha256:47f5ca56c430230d3b581dfdbc54eb3ad8b0406dcc9afdd978da2e662c71d370"},
{file = "murmurhash-1.0.9-cp311-cp311-win_amd64.whl", hash = "sha256:660ae41fc6609abc05130543011a45b33ca5d8318ae5c70e66bbd351ca936063"},
{file = "murmurhash-1.0.9-cp36-cp36m-manylinux_2_17_aarch64.manylinux2014_aarch64.whl", hash = "sha256:01137d688a6b259bde642513506b062364ea4e1609f886d9bd095c3ae6da0b94"},
{file = "murmurhash-1.0.9-cp36-cp36m-manylinux_2_5_x86_64.manylinux1_x86_64.manylinux_2_17_x86_64.manylinux2014_x86_64.whl", hash = "sha256:1b70bbf55d89713873a35bd4002bc231d38e530e1051d57ca5d15f96c01fd778"},
{file = "murmurhash-1.0.9-cp36-cp36m-win_amd64.whl", hash = "sha256:3e802fa5b0e618ee99e8c114ce99fc91677f14e9de6e18b945d91323a93c84e8"},
{file = "murmurhash-1.0.9-cp37-cp37m-macosx_10_9_x86_64.whl", hash = "sha256:213d0248e586082e1cab6157d9945b846fd2b6be34357ad5ea0d03a1931d82ba"},
{file = "murmurhash-1.0.9-cp37-cp37m-manylinux_2_17_aarch64.manylinux2014_aarch64.whl", hash = "sha256:94b89d02aeab5e6bad5056f9d08df03ac7cfe06e61ff4b6340feb227fda80ce8"},
{file = "murmurhash-1.0.9-cp37-cp37m-manylinux_2_5_x86_64.manylinux1_x86_64.manylinux_2_17_x86_64.manylinux2014_x86_64.whl", hash = "sha256:0c2e2ee2d91a87952fe0f80212e86119aa1fd7681f03e6c99b279e50790dc2b3"},
{file = "murmurhash-1.0.9-cp37-cp37m-win_amd64.whl", hash = "sha256:8c3d69fb649c77c74a55624ebf7a0df3c81629e6ea6e80048134f015da57b2ea"},
{file = "murmurhash-1.0.9-cp38-cp38-macosx_10_9_x86_64.whl", hash = "sha256:ab78675510f83e7a3c6bd0abdc448a9a2b0b385b0d7ee766cbbfc5cc278a3042"},
{file = "murmurhash-1.0.9-cp38-cp38-macosx_11_0_arm64.whl", hash = "sha256:0ac5530c250d2b0073ed058555847c8d88d2d00229e483d45658c13b32398523"},
{file = "murmurhash-1.0.9-cp38-cp38-manylinux_2_17_aarch64.manylinux2014_aarch64.whl", hash = "sha256:69157e8fa6b25c4383645227069f6a1f8738d32ed2a83558961019ca3ebef56a"},
{file = "murmurhash-1.0.9-cp38-cp38-manylinux_2_5_x86_64.manylinux1_x86_64.manylinux_2_17_x86_64.manylinux2014_x86_64.whl", hash = "sha256:2aebe2ae016525a662ff772b72a2c9244a673e3215fcd49897f494258b96f3e7"},
{file = "murmurhash-1.0.9-cp38-cp38-win_amd64.whl", hash = "sha256:a5952f9c18a717fa17579e27f57bfa619299546011a8378a8f73e14eece332f6"},
{file = "murmurhash-1.0.9-cp39-cp39-macosx_10_9_x86_64.whl", hash = "sha256:ef79202feeac68e83971239169a05fa6514ecc2815ce04c8302076d267870f6e"},
{file = "murmurhash-1.0.9-cp39-cp39-macosx_11_0_arm64.whl", hash = "sha256:799fcbca5693ad6a40f565ae6b8e9718e5875a63deddf343825c0f31c32348fa"},
{file = "murmurhash-1.0.9-cp39-cp39-manylinux_2_17_aarch64.manylinux2014_aarch64.whl", hash = "sha256:f9b995bc82eaf9223e045210207b8878fdfe099a788dd8abd708d9ee58459a9d"},
{file = "murmurhash-1.0.9-cp39-cp39-manylinux_2_5_x86_64.manylinux1_x86_64.manylinux_2_17_x86_64.manylinux2014_x86_64.whl", hash = "sha256:b129e1c5ebd772e6ff5ef925bcce695df13169bd885337e6074b923ab6edcfc8"},
{file = "murmurhash-1.0.9-cp39-cp39-win_amd64.whl", hash = "sha256:379bf6b414bd27dd36772dd1570565a7d69918e980457370838bd514df0d91e9"},
{file = "murmurhash-1.0.9.tar.gz", hash = "sha256:fe7a38cb0d3d87c14ec9dddc4932ffe2dbc77d75469ab80fd5014689b0e07b58"},
]
mypy = [
{file = "mypy-0.971-cp310-cp310-macosx_10_9_universal2.whl", hash = "sha256:f2899a3cbd394da157194f913a931edfd4be5f274a88041c9dc2d9cdcb1c315c"},
{file = "mypy-0.971-cp310-cp310-macosx_10_9_x86_64.whl", hash = "sha256:98e02d56ebe93981c41211c05adb630d1d26c14195d04d95e49cd97dbc046dc5"},
{file = "mypy-0.971-cp310-cp310-macosx_11_0_arm64.whl", hash = "sha256:19830b7dba7d5356d3e26e2427a2ec91c994cd92d983142cbd025ebe81d69cf3"},
{file = "mypy-0.971-cp310-cp310-manylinux_2_5_x86_64.manylinux1_x86_64.manylinux_2_12_x86_64.manylinux2010_x86_64.whl", hash = "sha256:02ef476f6dcb86e6f502ae39a16b93285fef97e7f1ff22932b657d1ef1f28655"},
{file = "mypy-0.971-cp310-cp310-win_amd64.whl", hash = "sha256:25c5750ba5609a0c7550b73a33deb314ecfb559c350bb050b655505e8aed4103"},
{file = "mypy-0.971-cp36-cp36m-macosx_10_9_x86_64.whl", hash = "sha256:d3348e7eb2eea2472db611486846742d5d52d1290576de99d59edeb7cd4a42ca"},
{file = "mypy-0.971-cp36-cp36m-manylinux_2_5_x86_64.manylinux1_x86_64.manylinux_2_12_x86_64.manylinux2010_x86_64.whl", hash = "sha256:3fa7a477b9900be9b7dd4bab30a12759e5abe9586574ceb944bc29cddf8f0417"},
{file = "mypy-0.971-cp36-cp36m-win_amd64.whl", hash = "sha256:2ad53cf9c3adc43cf3bea0a7d01a2f2e86db9fe7596dfecb4496a5dda63cbb09"},
{file = "mypy-0.971-cp37-cp37m-macosx_10_9_x86_64.whl", hash = "sha256:855048b6feb6dfe09d3353466004490b1872887150c5bb5caad7838b57328cc8"},
{file = "mypy-0.971-cp37-cp37m-manylinux_2_5_x86_64.manylinux1_x86_64.manylinux_2_12_x86_64.manylinux2010_x86_64.whl", hash = "sha256:23488a14a83bca6e54402c2e6435467a4138785df93ec85aeff64c6170077fb0"},
{file = "mypy-0.971-cp37-cp37m-win_amd64.whl", hash = "sha256:4b21e5b1a70dfb972490035128f305c39bc4bc253f34e96a4adf9127cf943eb2"},
{file = "mypy-0.971-cp38-cp38-macosx_10_9_universal2.whl", hash = "sha256:9796a2ba7b4b538649caa5cecd398d873f4022ed2333ffde58eaf604c4d2cb27"},
{file = "mypy-0.971-cp38-cp38-macosx_10_9_x86_64.whl", hash = "sha256:5a361d92635ad4ada1b1b2d3630fc2f53f2127d51cf2def9db83cba32e47c856"},
{file = "mypy-0.971-cp38-cp38-macosx_11_0_arm64.whl", hash = "sha256:b793b899f7cf563b1e7044a5c97361196b938e92f0a4343a5d27966a53d2ec71"},
{file = "mypy-0.971-cp38-cp38-manylinux_2_5_x86_64.manylinux1_x86_64.manylinux_2_12_x86_64.manylinux2010_x86_64.whl", hash = "sha256:d1ea5d12c8e2d266b5fb8c7a5d2e9c0219fedfeb493b7ed60cd350322384ac27"},
{file = "mypy-0.971-cp38-cp38-win_amd64.whl", hash = "sha256:23c7ff43fff4b0df93a186581885c8512bc50fc4d4910e0f838e35d6bb6b5e58"},
{file = "mypy-0.971-cp39-cp39-macosx_10_9_universal2.whl", hash = "sha256:1f7656b69974a6933e987ee8ffb951d836272d6c0f81d727f1d0e2696074d9e6"},
{file = "mypy-0.971-cp39-cp39-macosx_10_9_x86_64.whl", hash = "sha256:d2022bfadb7a5c2ef410d6a7c9763188afdb7f3533f22a0a32be10d571ee4bbe"},
{file = "mypy-0.971-cp39-cp39-macosx_11_0_arm64.whl", hash = "sha256:ef943c72a786b0f8d90fd76e9b39ce81fb7171172daf84bf43eaf937e9f220a9"},
{file = "mypy-0.971-cp39-cp39-manylinux_2_5_x86_64.manylinux1_x86_64.manylinux_2_12_x86_64.manylinux2010_x86_64.whl", hash = "sha256:d744f72eb39f69312bc6c2abf8ff6656973120e2eb3f3ec4f758ed47e414a4bf"},
{file = "mypy-0.971-cp39-cp39-win_amd64.whl", hash = "sha256:77a514ea15d3007d33a9e2157b0ba9c267496acf12a7f2b9b9f8446337aac5b0"},
{file = "mypy-0.971-py3-none-any.whl", hash = "sha256:0d054ef16b071149917085f51f89555a576e2618d5d9dd70bd6eea6410af3ac9"},
{file = "mypy-0.971.tar.gz", hash = "sha256:40b0f21484238269ae6a57200c807d80debc6459d444c0489a102d7c6a75fa56"},
]
mypy-extensions = [
{file = "mypy_extensions-0.4.3-py2.py3-none-any.whl", hash = "sha256:090fedd75945a69ae91ce1303b5824f428daf5a028d2f6ab8a299250a846f15d"},
{file = "mypy_extensions-0.4.3.tar.gz", hash = "sha256:2d82818f5bb3e369420cb3c4060a7970edba416647068eb4c5343488a6c604a8"},
]
nbclassic = [
{file = "nbclassic-0.4.8-py3-none-any.whl", hash = "sha256:cbf05df5842b420d5cece0143462380ea9d308ff57c2dc0eb4d6e035b18fbfb3"},
{file = "nbclassic-0.4.8.tar.gz", hash = "sha256:c74d8a500f8e058d46b576a41e5bc640711e1032cf7541dde5f73ea49497e283"},
]
nbclient = [
{file = "nbclient-0.7.0-py3-none-any.whl", hash = "sha256:434c91385cf3e53084185334d675a0d33c615108b391e260915d1aa8e86661b8"},
{file = "nbclient-0.7.0.tar.gz", hash = "sha256:a1d844efd6da9bc39d2209bf996dbd8e07bf0f36b796edfabaa8f8a9ab77c3aa"},
]
nbconvert = [
{file = "nbconvert-7.0.0rc3-py3-none-any.whl", hash = "sha256:6774a0bf293d76fa2e886255812d953b750059330c3d7305ad271c02590f1957"},
{file = "nbconvert-7.0.0rc3.tar.gz", hash = "sha256:efb9aae47dad2eae02dd9e7d2cc8add6b7e8f15c6548c0de3363f6d2f8a39146"},
]
nbformat = [
{file = "nbformat-5.7.0-py3-none-any.whl", hash = "sha256:1b05ec2c552c2f1adc745f4eddce1eac8ca9ffd59bb9fd859e827eaa031319f9"},
{file = "nbformat-5.7.0.tar.gz", hash = "sha256:1d4760c15c1a04269ef5caf375be8b98dd2f696e5eb9e603ec2bf091f9b0d3f3"},
]
nbsphinx = [
{file = "nbsphinx-0.8.10-py3-none-any.whl", hash = "sha256:6076fba58020420927899362579f12779a43091eb238f414519ec25b4a8cfc96"},
{file = "nbsphinx-0.8.10.tar.gz", hash = "sha256:a8d68046f8aab916e2940b9b3819bd3ef9ddce868aa38845ea366645cabb6254"},
]
nest-asyncio = [
{file = "nest_asyncio-1.5.6-py3-none-any.whl", hash = "sha256:b9a953fb40dceaa587d109609098db21900182b16440652454a146cffb06e8b8"},
{file = "nest_asyncio-1.5.6.tar.gz", hash = "sha256:d267cc1ff794403f7df692964d1d2a3fa9418ffea2a3f6859a439ff482fef290"},
]
networkx = [
{file = "networkx-2.8.8-py3-none-any.whl", hash = "sha256:e435dfa75b1d7195c7b8378c3859f0445cd88c6b0375c181ed66823a9ceb7524"},
{file = "networkx-2.8.8.tar.gz", hash = "sha256:230d388117af870fce5647a3c52401fcf753e94720e6ea6b4197a5355648885e"},
]
notebook = [
{file = "notebook-6.5.2-py3-none-any.whl", hash = "sha256:e04f9018ceb86e4fa841e92ea8fb214f8d23c1cedfde530cc96f92446924f0e4"},
{file = "notebook-6.5.2.tar.gz", hash = "sha256:c1897e5317e225fc78b45549a6ab4b668e4c996fd03a04e938fe5e7af2bfffd0"},
]
notebook-shim = [
{file = "notebook_shim-0.2.2-py3-none-any.whl", hash = "sha256:9c6c30f74c4fbea6fce55c1be58e7fd0409b1c681b075dcedceb005db5026949"},
{file = "notebook_shim-0.2.2.tar.gz", hash = "sha256:090e0baf9a5582ff59b607af523ca2db68ff216da0c69956b62cab2ef4fc9c3f"},
]
numba = [
{file = "numba-0.53.1-cp36-cp36m-macosx_10_14_x86_64.whl", hash = "sha256:b23de6b6837c132087d06b8b92d343edb54b885873b824a037967fbd5272ebb7"},
{file = "numba-0.53.1-cp36-cp36m-manylinux2014_i686.whl", hash = "sha256:6545b9e9b0c112b81de7f88a3c787469a357eeff8211e90b8f45ee243d521cc2"},
{file = "numba-0.53.1-cp36-cp36m-manylinux2014_x86_64.whl", hash = "sha256:8fa5c963a43855050a868106a87cd614f3c3f459951c8fc468aec263ef80d063"},
{file = "numba-0.53.1-cp36-cp36m-win32.whl", hash = "sha256:aaa6ebf56afb0b6752607b9f3bf39e99b0efe3c1fa6849698373925ee6838fd7"},
{file = "numba-0.53.1-cp36-cp36m-win_amd64.whl", hash = "sha256:b08b3df38aab769df79ed948d70f0a54a3cdda49d58af65369235c204ec5d0f3"},
{file = "numba-0.53.1-cp37-cp37m-macosx_10_14_x86_64.whl", hash = "sha256:bf5c463b62d013e3f709cc8277adf2f4f4d8cc6757293e29c6db121b77e6b760"},
{file = "numba-0.53.1-cp37-cp37m-manylinux2014_i686.whl", hash = "sha256:74df02e73155f669e60dcff07c4eef4a03dbf5b388594db74142ab40914fe4f5"},
{file = "numba-0.53.1-cp37-cp37m-manylinux2014_x86_64.whl", hash = "sha256:5165709bf62f28667e10b9afe6df0ce1037722adab92d620f59cb8bbb8104641"},
{file = "numba-0.53.1-cp37-cp37m-win32.whl", hash = "sha256:2e96958ed2ca7e6d967b2ce29c8da0ca47117e1de28e7c30b2c8c57386506fa5"},
{file = "numba-0.53.1-cp37-cp37m-win_amd64.whl", hash = "sha256:276f9d1674fe08d95872d81b97267c6b39dd830f05eb992608cbede50fcf48a9"},
{file = "numba-0.53.1-cp38-cp38-macosx_10_14_x86_64.whl", hash = "sha256:4c4c8d102512ae472af52c76ad9522da718c392cb59f4cd6785d711fa5051a2a"},
{file = "numba-0.53.1-cp38-cp38-manylinux2014_i686.whl", hash = "sha256:691adbeac17dbdf6ed7c759e9e33a522351f07d2065fe926b264b6b2c15fd89b"},
{file = "numba-0.53.1-cp38-cp38-manylinux2014_x86_64.whl", hash = "sha256:94aab3e0e9e8754116325ce026e1b29ae72443c706a3104cf7f3368dc3012912"},
{file = "numba-0.53.1-cp38-cp38-win32.whl", hash = "sha256:aabeec89bb3e3162136eea492cea7ee8882ddcda2201f05caecdece192c40896"},
{file = "numba-0.53.1-cp38-cp38-win_amd64.whl", hash = "sha256:1895ebd256819ff22256cd6fe24aa8f7470b18acc73e7917e8e93c9ac7f565dc"},
{file = "numba-0.53.1-cp39-cp39-macosx_10_14_x86_64.whl", hash = "sha256:224d197a46a9e602a16780d87636e199e2cdef528caef084a4d8fd8909c2455c"},
{file = "numba-0.53.1-cp39-cp39-manylinux2014_i686.whl", hash = "sha256:aba7acb247a09d7f12bd17a8e28bbb04e8adef9fc20ca29835d03b7894e1b49f"},
{file = "numba-0.53.1-cp39-cp39-manylinux2014_x86_64.whl", hash = "sha256:bd126f1f49da6fc4b3169cf1d96f1c3b3f84a7badd11fe22da344b923a00e744"},
{file = "numba-0.53.1-cp39-cp39-win32.whl", hash = "sha256:0ef9d1f347b251282ae46e5a5033600aa2d0dfa1ee8c16cb8137b8cd6f79e221"},
{file = "numba-0.53.1-cp39-cp39-win_amd64.whl", hash = "sha256:17146885cbe4e89c9d4abd4fcb8886dee06d4591943dc4343500c36ce2fcfa69"},
{file = "numba-0.53.1.tar.gz", hash = "sha256:9cd4e5216acdc66c4e9dab2dfd22ddb5bef151185c070d4a3cd8e78638aff5b0"},
]
numpy = [
{file = "numpy-1.23.5-cp310-cp310-macosx_10_9_x86_64.whl", hash = "sha256:9c88793f78fca17da0145455f0d7826bcb9f37da4764af27ac945488116efe63"},
{file = "numpy-1.23.5-cp310-cp310-macosx_11_0_arm64.whl", hash = "sha256:e9f4c4e51567b616be64e05d517c79a8a22f3606499941d97bb76f2ca59f982d"},
{file = "numpy-1.23.5-cp310-cp310-manylinux_2_17_aarch64.manylinux2014_aarch64.whl", hash = "sha256:7903ba8ab592b82014713c491f6c5d3a1cde5b4a3bf116404e08f5b52f6daf43"},
{file = "numpy-1.23.5-cp310-cp310-manylinux_2_17_x86_64.manylinux2014_x86_64.whl", hash = "sha256:5e05b1c973a9f858c74367553e236f287e749465f773328c8ef31abe18f691e1"},
{file = "numpy-1.23.5-cp310-cp310-win32.whl", hash = "sha256:522e26bbf6377e4d76403826ed689c295b0b238f46c28a7251ab94716da0b280"},
{file = "numpy-1.23.5-cp310-cp310-win_amd64.whl", hash = "sha256:dbee87b469018961d1ad79b1a5d50c0ae850000b639bcb1b694e9981083243b6"},
{file = "numpy-1.23.5-cp311-cp311-macosx_10_9_x86_64.whl", hash = "sha256:ce571367b6dfe60af04e04a1834ca2dc5f46004ac1cc756fb95319f64c095a96"},
{file = "numpy-1.23.5-cp311-cp311-macosx_11_0_arm64.whl", hash = "sha256:56e454c7833e94ec9769fa0f86e6ff8e42ee38ce0ce1fa4cbb747ea7e06d56aa"},
{file = "numpy-1.23.5-cp311-cp311-manylinux_2_17_aarch64.manylinux2014_aarch64.whl", hash = "sha256:5039f55555e1eab31124a5768898c9e22c25a65c1e0037f4d7c495a45778c9f2"},
{file = "numpy-1.23.5-cp311-cp311-manylinux_2_17_x86_64.manylinux2014_x86_64.whl", hash = "sha256:58f545efd1108e647604a1b5aa809591ccd2540f468a880bedb97247e72db387"},
{file = "numpy-1.23.5-cp311-cp311-win32.whl", hash = "sha256:b2a9ab7c279c91974f756c84c365a669a887efa287365a8e2c418f8b3ba73fb0"},
{file = "numpy-1.23.5-cp311-cp311-win_amd64.whl", hash = "sha256:0cbe9848fad08baf71de1a39e12d1b6310f1d5b2d0ea4de051058e6e1076852d"},
{file = "numpy-1.23.5-cp38-cp38-macosx_10_9_x86_64.whl", hash = "sha256:f063b69b090c9d918f9df0a12116029e274daf0181df392839661c4c7ec9018a"},
{file = "numpy-1.23.5-cp38-cp38-macosx_11_0_arm64.whl", hash = "sha256:0aaee12d8883552fadfc41e96b4c82ee7d794949e2a7c3b3a7201e968c7ecab9"},
{file = "numpy-1.23.5-cp38-cp38-manylinux_2_17_aarch64.manylinux2014_aarch64.whl", hash = "sha256:92c8c1e89a1f5028a4c6d9e3ccbe311b6ba53694811269b992c0b224269e2398"},
{file = "numpy-1.23.5-cp38-cp38-manylinux_2_17_x86_64.manylinux2014_x86_64.whl", hash = "sha256:d208a0f8729f3fb790ed18a003f3a57895b989b40ea4dce4717e9cf4af62c6bb"},
{file = "numpy-1.23.5-cp38-cp38-win32.whl", hash = "sha256:06005a2ef6014e9956c09ba07654f9837d9e26696a0470e42beedadb78c11b07"},
{file = "numpy-1.23.5-cp38-cp38-win_amd64.whl", hash = "sha256:ca51fcfcc5f9354c45f400059e88bc09215fb71a48d3768fb80e357f3b457e1e"},
{file = "numpy-1.23.5-cp39-cp39-macosx_10_9_x86_64.whl", hash = "sha256:8969bfd28e85c81f3f94eb4a66bc2cf1dbdc5c18efc320af34bffc54d6b1e38f"},
{file = "numpy-1.23.5-cp39-cp39-macosx_11_0_arm64.whl", hash = "sha256:a7ac231a08bb37f852849bbb387a20a57574a97cfc7b6cabb488a4fc8be176de"},
{file = "numpy-1.23.5-cp39-cp39-manylinux_2_17_aarch64.manylinux2014_aarch64.whl", hash = "sha256:bf837dc63ba5c06dc8797c398db1e223a466c7ece27a1f7b5232ba3466aafe3d"},
{file = "numpy-1.23.5-cp39-cp39-manylinux_2_17_x86_64.manylinux2014_x86_64.whl", hash = "sha256:33161613d2269025873025b33e879825ec7b1d831317e68f4f2f0f84ed14c719"},
{file = "numpy-1.23.5-cp39-cp39-win32.whl", hash = "sha256:af1da88f6bc3d2338ebbf0e22fe487821ea4d8e89053e25fa59d1d79786e7481"},
{file = "numpy-1.23.5-cp39-cp39-win_amd64.whl", hash = "sha256:09b7847f7e83ca37c6e627682f145856de331049013853f344f37b0c9690e3df"},
{file = "numpy-1.23.5-pp38-pypy38_pp73-macosx_10_9_x86_64.whl", hash = "sha256:abdde9f795cf292fb9651ed48185503a2ff29be87770c3b8e2a14b0cd7aa16f8"},
{file = "numpy-1.23.5-pp38-pypy38_pp73-manylinux_2_17_x86_64.manylinux2014_x86_64.whl", hash = "sha256:f9a909a8bae284d46bbfdefbdd4a262ba19d3bc9921b1e76126b1d21c3c34135"},
{file = "numpy-1.23.5-pp38-pypy38_pp73-win_amd64.whl", hash = "sha256:01dd17cbb340bf0fc23981e52e1d18a9d4050792e8fb8363cecbf066a84b827d"},
{file = "numpy-1.23.5.tar.gz", hash = "sha256:1b1766d6f397c18153d40015ddfc79ddb715cabadc04d2d228d4e5a8bc4ded1a"},
]
oauthlib = [
{file = "oauthlib-3.2.2-py3-none-any.whl", hash = "sha256:8139f29aac13e25d502680e9e19963e83f16838d48a0d71c287fe40e7067fbca"},
{file = "oauthlib-3.2.2.tar.gz", hash = "sha256:9859c40929662bec5d64f34d01c99e093149682a3f38915dc0655d5a633dd918"},
]
opt-einsum = [
{file = "opt_einsum-3.3.0-py3-none-any.whl", hash = "sha256:2455e59e3947d3c275477df7f5205b30635e266fe6dc300e3d9f9646bfcea147"},
{file = "opt_einsum-3.3.0.tar.gz", hash = "sha256:59f6475f77bbc37dcf7cd748519c0ec60722e91e63ca114e68821c0c54a46549"},
]
packaging = [
{file = "packaging-21.3-py3-none-any.whl", hash = "sha256:ef103e05f519cdc783ae24ea4e2e0f508a9c99b2d4969652eed6a2e1ea5bd522"},
{file = "packaging-21.3.tar.gz", hash = "sha256:dd47c42927d89ab911e606518907cc2d3a1f38bbd026385970643f9c5b8ecfeb"},
]
pandas = [
{file = "pandas-1.5.2-cp310-cp310-macosx_10_9_universal2.whl", hash = "sha256:e9dbacd22555c2d47f262ef96bb4e30880e5956169741400af8b306bbb24a273"},
{file = "pandas-1.5.2-cp310-cp310-macosx_10_9_x86_64.whl", hash = "sha256:e2b83abd292194f350bb04e188f9379d36b8dfac24dd445d5c87575f3beaf789"},
{file = "pandas-1.5.2-cp310-cp310-macosx_11_0_arm64.whl", hash = "sha256:2552bffc808641c6eb471e55aa6899fa002ac94e4eebfa9ec058649122db5824"},
{file = "pandas-1.5.2-cp310-cp310-manylinux_2_17_aarch64.manylinux2014_aarch64.whl", hash = "sha256:1fc87eac0541a7d24648a001d553406f4256e744d92df1df8ebe41829a915028"},
{file = "pandas-1.5.2-cp310-cp310-manylinux_2_17_x86_64.manylinux2014_x86_64.whl", hash = "sha256:d0d8fd58df5d17ddb8c72a5075d87cd80d71b542571b5f78178fb067fa4e9c72"},
{file = "pandas-1.5.2-cp310-cp310-win_amd64.whl", hash = "sha256:4aed257c7484d01c9a194d9a94758b37d3d751849c05a0050c087a358c41ad1f"},
{file = "pandas-1.5.2-cp311-cp311-macosx_10_9_universal2.whl", hash = "sha256:375262829c8c700c3e7cbb336810b94367b9c4889818bbd910d0ecb4e45dc261"},
{file = "pandas-1.5.2-cp311-cp311-macosx_10_9_x86_64.whl", hash = "sha256:cc3cd122bea268998b79adebbb8343b735a5511ec14efb70a39e7acbc11ccbdc"},
{file = "pandas-1.5.2-cp311-cp311-macosx_11_0_arm64.whl", hash = "sha256:b4f5a82afa4f1ff482ab8ded2ae8a453a2cdfde2001567b3ca24a4c5c5ca0db3"},
{file = "pandas-1.5.2-cp311-cp311-manylinux_2_17_aarch64.manylinux2014_aarch64.whl", hash = "sha256:8092a368d3eb7116e270525329a3e5c15ae796ccdf7ccb17839a73b4f5084a39"},
{file = "pandas-1.5.2-cp311-cp311-manylinux_2_17_x86_64.manylinux2014_x86_64.whl", hash = "sha256:f6257b314fc14958f8122779e5a1557517b0f8e500cfb2bd53fa1f75a8ad0af2"},
{file = "pandas-1.5.2-cp311-cp311-win_amd64.whl", hash = "sha256:82ae615826da838a8e5d4d630eb70c993ab8636f0eff13cb28aafc4291b632b5"},
{file = "pandas-1.5.2-cp38-cp38-macosx_10_9_universal2.whl", hash = "sha256:457d8c3d42314ff47cc2d6c54f8fc0d23954b47977b2caed09cd9635cb75388b"},
{file = "pandas-1.5.2-cp38-cp38-macosx_10_9_x86_64.whl", hash = "sha256:c009a92e81ce836212ce7aa98b219db7961a8b95999b97af566b8dc8c33e9519"},
{file = "pandas-1.5.2-cp38-cp38-macosx_11_0_arm64.whl", hash = "sha256:71f510b0efe1629bf2f7c0eadb1ff0b9cf611e87b73cd017e6b7d6adb40e2b3a"},
{file = "pandas-1.5.2-cp38-cp38-manylinux_2_17_aarch64.manylinux2014_aarch64.whl", hash = "sha256:a40dd1e9f22e01e66ed534d6a965eb99546b41d4d52dbdb66565608fde48203f"},
{file = "pandas-1.5.2-cp38-cp38-manylinux_2_17_x86_64.manylinux2014_x86_64.whl", hash = "sha256:5ae7e989f12628f41e804847a8cc2943d362440132919a69429d4dea1f164da0"},
{file = "pandas-1.5.2-cp38-cp38-win32.whl", hash = "sha256:530948945e7b6c95e6fa7aa4be2be25764af53fba93fe76d912e35d1c9ee46f5"},
{file = "pandas-1.5.2-cp38-cp38-win_amd64.whl", hash = "sha256:73f219fdc1777cf3c45fde7f0708732ec6950dfc598afc50588d0d285fddaefc"},
{file = "pandas-1.5.2-cp39-cp39-macosx_10_9_universal2.whl", hash = "sha256:9608000a5a45f663be6af5c70c3cbe634fa19243e720eb380c0d378666bc7702"},
{file = "pandas-1.5.2-cp39-cp39-macosx_10_9_x86_64.whl", hash = "sha256:315e19a3e5c2ab47a67467fc0362cb36c7c60a93b6457f675d7d9615edad2ebe"},
{file = "pandas-1.5.2-cp39-cp39-macosx_11_0_arm64.whl", hash = "sha256:e18bc3764cbb5e118be139b3b611bc3fbc5d3be42a7e827d1096f46087b395eb"},
{file = "pandas-1.5.2-cp39-cp39-manylinux_2_17_aarch64.manylinux2014_aarch64.whl", hash = "sha256:0183cb04a057cc38fde5244909fca9826d5d57c4a5b7390c0cc3fa7acd9fa883"},
{file = "pandas-1.5.2-cp39-cp39-manylinux_2_17_x86_64.manylinux2014_x86_64.whl", hash = "sha256:344021ed3e639e017b452aa8f5f6bf38a8806f5852e217a7594417fb9bbfa00e"},
{file = "pandas-1.5.2-cp39-cp39-win32.whl", hash = "sha256:e7469271497960b6a781eaa930cba8af400dd59b62ec9ca2f4d31a19f2f91090"},
{file = "pandas-1.5.2-cp39-cp39-win_amd64.whl", hash = "sha256:c218796d59d5abd8780170c937b812c9637e84c32f8271bbf9845970f8c1351f"},
{file = "pandas-1.5.2.tar.gz", hash = "sha256:220b98d15cee0b2cd839a6358bd1f273d0356bf964c1a1aeb32d47db0215488b"},
]
pandocfilters = [
{file = "pandocfilters-1.5.0-py2.py3-none-any.whl", hash = "sha256:33aae3f25fd1a026079f5d27bdd52496f0e0803b3469282162bafdcbdf6ef14f"},
{file = "pandocfilters-1.5.0.tar.gz", hash = "sha256:0b679503337d233b4339a817bfc8c50064e2eff681314376a47cb582305a7a38"},
]
parso = [
{file = "parso-0.8.3-py2.py3-none-any.whl", hash = "sha256:c001d4636cd3aecdaf33cbb40aebb59b094be2a74c556778ef5576c175e19e75"},
{file = "parso-0.8.3.tar.gz", hash = "sha256:8c07be290bb59f03588915921e29e8a50002acaf2cdc5fa0e0114f91709fafa0"},
]
partd = [
{file = "partd-1.3.0-py3-none-any.whl", hash = "sha256:6393a0c898a0ad945728e34e52de0df3ae295c5aff2e2926ba7cc3c60a734a15"},
{file = "partd-1.3.0.tar.gz", hash = "sha256:ce91abcdc6178d668bcaa431791a5a917d902341cb193f543fe445d494660485"},
]
pastel = [
{file = "pastel-0.2.1-py2.py3-none-any.whl", hash = "sha256:4349225fcdf6c2bb34d483e523475de5bb04a5c10ef711263452cb37d7dd4364"},
{file = "pastel-0.2.1.tar.gz", hash = "sha256:e6581ac04e973cac858828c6202c1e1e81fee1dc7de7683f3e1ffe0bfd8a573d"},
]
pathos = [
{file = "pathos-0.2.9-py2-none-any.whl", hash = "sha256:6a6ddb514ce2719f63fb88d5ec4f4490e436b636b54f1102d952c9f7c52f18e2"},
{file = "pathos-0.2.9-py3-none-any.whl", hash = "sha256:1c44373d8692897d5d15a8aa3b3a442ddc0814c5e848f4ff0ded5491f34b1dac"},
{file = "pathos-0.2.9.tar.gz", hash = "sha256:a8dbddcd3d9af32ada7c6dc088d845588c513a29a0ba19ab9f64c5cd83692934"},
]
pathspec = [
{file = "pathspec-0.10.2-py3-none-any.whl", hash = "sha256:88c2606f2c1e818b978540f73ecc908e13999c6c3a383daf3705652ae79807a5"},
{file = "pathspec-0.10.2.tar.gz", hash = "sha256:8f6bf73e5758fd365ef5d58ce09ac7c27d2833a8d7da51712eac6e27e35141b0"},
]
pathy = [
{file = "pathy-0.9.0-py3-none-any.whl", hash = "sha256:7ac1ddae1d3013b83e693a2236f29661983cc8c0bcc52efca683f48d3663adae"},
{file = "pathy-0.9.0.tar.gz", hash = "sha256:5a9bd1d33b6a7980e6616e055814445b4646443151ef08fdd130fcbc7a2579c4"},
]
patsy = [
{file = "patsy-0.5.3-py2.py3-none-any.whl", hash = "sha256:7eb5349754ed6aa982af81f636479b1b8db9d5b1a6e957a6016ec0534b5c86b7"},
{file = "patsy-0.5.3.tar.gz", hash = "sha256:bdc18001875e319bc91c812c1eb6a10be4bb13cb81eb763f466179dca3b67277"},
]
pexpect = [
{file = "pexpect-4.8.0-py2.py3-none-any.whl", hash = "sha256:0b48a55dcb3c05f3329815901ea4fc1537514d6ba867a152b581d69ae3710937"},
{file = "pexpect-4.8.0.tar.gz", hash = "sha256:fc65a43959d153d0114afe13997d439c22823a27cefceb5ff35c2178c6784c0c"},
]
pickleshare = [
{file = "pickleshare-0.7.5-py2.py3-none-any.whl", hash = "sha256:9649af414d74d4df115d5d718f82acb59c9d418196b7b4290ed47a12ce62df56"},
{file = "pickleshare-0.7.5.tar.gz", hash = "sha256:87683d47965c1da65cdacaf31c8441d12b8044cdec9aca500cd78fc2c683afca"},
]
pillow = [
{file = "Pillow-9.3.0-1-cp37-cp37m-win32.whl", hash = "sha256:e6ea6b856a74d560d9326c0f5895ef8050126acfdc7ca08ad703eb0081e82b74"},
{file = "Pillow-9.3.0-1-cp37-cp37m-win_amd64.whl", hash = "sha256:32a44128c4bdca7f31de5be641187367fe2a450ad83b833ef78910397db491aa"},
{file = "Pillow-9.3.0-cp310-cp310-macosx_10_10_x86_64.whl", hash = "sha256:0b7257127d646ff8676ec8a15520013a698d1fdc48bc2a79ba4e53df792526f2"},
{file = "Pillow-9.3.0-cp310-cp310-macosx_11_0_arm64.whl", hash = "sha256:b90f7616ea170e92820775ed47e136208e04c967271c9ef615b6fbd08d9af0e3"},
{file = "Pillow-9.3.0-cp310-cp310-manylinux_2_17_aarch64.manylinux2014_aarch64.whl", hash = "sha256:68943d632f1f9e3dce98908e873b3a090f6cba1cbb1b892a9e8d97c938871fbe"},
{file = "Pillow-9.3.0-cp310-cp310-manylinux_2_17_i686.manylinux2014_i686.whl", hash = "sha256:be55f8457cd1eac957af0c3f5ece7bc3f033f89b114ef30f710882717670b2a8"},
{file = "Pillow-9.3.0-cp310-cp310-manylinux_2_17_x86_64.manylinux2014_x86_64.whl", hash = "sha256:5d77adcd56a42d00cc1be30843d3426aa4e660cab4a61021dc84467123f7a00c"},
{file = "Pillow-9.3.0-cp310-cp310-manylinux_2_28_aarch64.whl", hash = "sha256:829f97c8e258593b9daa80638aee3789b7df9da5cf1336035016d76f03b8860c"},
{file = "Pillow-9.3.0-cp310-cp310-manylinux_2_28_x86_64.whl", hash = "sha256:801ec82e4188e935c7f5e22e006d01611d6b41661bba9fe45b60e7ac1a8f84de"},
{file = "Pillow-9.3.0-cp310-cp310-musllinux_1_1_x86_64.whl", hash = "sha256:871b72c3643e516db4ecf20efe735deb27fe30ca17800e661d769faab45a18d7"},
{file = "Pillow-9.3.0-cp310-cp310-win32.whl", hash = "sha256:655a83b0058ba47c7c52e4e2df5ecf484c1b0b0349805896dd350cbc416bdd91"},
{file = "Pillow-9.3.0-cp310-cp310-win_amd64.whl", hash = "sha256:9f47eabcd2ded7698106b05c2c338672d16a6f2a485e74481f524e2a23c2794b"},
{file = "Pillow-9.3.0-cp311-cp311-macosx_10_10_x86_64.whl", hash = "sha256:57751894f6618fd4308ed8e0c36c333e2f5469744c34729a27532b3db106ee20"},
{file = "Pillow-9.3.0-cp311-cp311-macosx_11_0_arm64.whl", hash = "sha256:7db8b751ad307d7cf238f02101e8e36a128a6cb199326e867d1398067381bff4"},
{file = "Pillow-9.3.0-cp311-cp311-manylinux_2_17_aarch64.manylinux2014_aarch64.whl", hash = "sha256:3033fbe1feb1b59394615a1cafaee85e49d01b51d54de0cbf6aa8e64182518a1"},
{file = "Pillow-9.3.0-cp311-cp311-manylinux_2_17_i686.manylinux2014_i686.whl", hash = "sha256:22b012ea2d065fd163ca096f4e37e47cd8b59cf4b0fd47bfca6abb93df70b34c"},
{file = "Pillow-9.3.0-cp311-cp311-manylinux_2_17_x86_64.manylinux2014_x86_64.whl", hash = "sha256:b9a65733d103311331875c1dca05cb4606997fd33d6acfed695b1232ba1df193"},
{file = "Pillow-9.3.0-cp311-cp311-manylinux_2_28_aarch64.whl", hash = "sha256:502526a2cbfa431d9fc2a079bdd9061a2397b842bb6bc4239bb176da00993812"},
{file = "Pillow-9.3.0-cp311-cp311-manylinux_2_28_x86_64.whl", hash = "sha256:90fb88843d3902fe7c9586d439d1e8c05258f41da473952aa8b328d8b907498c"},
{file = "Pillow-9.3.0-cp311-cp311-musllinux_1_1_x86_64.whl", hash = "sha256:89dca0ce00a2b49024df6325925555d406b14aa3efc2f752dbb5940c52c56b11"},
{file = "Pillow-9.3.0-cp311-cp311-win32.whl", hash = "sha256:3168434d303babf495d4ba58fc22d6604f6e2afb97adc6a423e917dab828939c"},
{file = "Pillow-9.3.0-cp311-cp311-win_amd64.whl", hash = "sha256:18498994b29e1cf86d505edcb7edbe814d133d2232d256db8c7a8ceb34d18cef"},
{file = "Pillow-9.3.0-cp37-cp37m-macosx_10_10_x86_64.whl", hash = "sha256:772a91fc0e03eaf922c63badeca75e91baa80fe2f5f87bdaed4280662aad25c9"},
{file = "Pillow-9.3.0-cp37-cp37m-manylinux_2_17_aarch64.manylinux2014_aarch64.whl", hash = "sha256:afa4107d1b306cdf8953edde0534562607fe8811b6c4d9a486298ad31de733b2"},
{file = "Pillow-9.3.0-cp37-cp37m-manylinux_2_17_i686.manylinux2014_i686.whl", hash = "sha256:b4012d06c846dc2b80651b120e2cdd787b013deb39c09f407727ba90015c684f"},
{file = "Pillow-9.3.0-cp37-cp37m-manylinux_2_17_x86_64.manylinux2014_x86_64.whl", hash = "sha256:77ec3e7be99629898c9a6d24a09de089fa5356ee408cdffffe62d67bb75fdd72"},
{file = "Pillow-9.3.0-cp37-cp37m-manylinux_2_28_aarch64.whl", hash = "sha256:6c738585d7a9961d8c2821a1eb3dcb978d14e238be3d70f0a706f7fa9316946b"},
{file = "Pillow-9.3.0-cp37-cp37m-manylinux_2_28_x86_64.whl", hash = "sha256:828989c45c245518065a110434246c44a56a8b2b2f6347d1409c787e6e4651ee"},
{file = "Pillow-9.3.0-cp37-cp37m-win32.whl", hash = "sha256:82409ffe29d70fd733ff3c1025a602abb3e67405d41b9403b00b01debc4c9a29"},
{file = "Pillow-9.3.0-cp37-cp37m-win_amd64.whl", hash = "sha256:41e0051336807468be450d52b8edd12ac60bebaa97fe10c8b660f116e50b30e4"},
{file = "Pillow-9.3.0-cp38-cp38-macosx_10_10_x86_64.whl", hash = "sha256:b03ae6f1a1878233ac620c98f3459f79fd77c7e3c2b20d460284e1fb370557d4"},
{file = "Pillow-9.3.0-cp38-cp38-macosx_11_0_arm64.whl", hash = "sha256:4390e9ce199fc1951fcfa65795f239a8a4944117b5935a9317fb320e7767b40f"},
{file = "Pillow-9.3.0-cp38-cp38-manylinux_2_17_aarch64.manylinux2014_aarch64.whl", hash = "sha256:40e1ce476a7804b0fb74bcfa80b0a2206ea6a882938eaba917f7a0f004b42502"},
{file = "Pillow-9.3.0-cp38-cp38-manylinux_2_17_i686.manylinux2014_i686.whl", hash = "sha256:a0a06a052c5f37b4ed81c613a455a81f9a3a69429b4fd7bb913c3fa98abefc20"},
{file = "Pillow-9.3.0-cp38-cp38-manylinux_2_17_x86_64.manylinux2014_x86_64.whl", hash = "sha256:03150abd92771742d4a8cd6f2fa6246d847dcd2e332a18d0c15cc75bf6703040"},
{file = "Pillow-9.3.0-cp38-cp38-manylinux_2_28_aarch64.whl", hash = "sha256:15c42fb9dea42465dfd902fb0ecf584b8848ceb28b41ee2b58f866411be33f07"},
{file = "Pillow-9.3.0-cp38-cp38-manylinux_2_28_x86_64.whl", hash = "sha256:51e0e543a33ed92db9f5ef69a0356e0b1a7a6b6a71b80df99f1d181ae5875636"},
{file = "Pillow-9.3.0-cp38-cp38-musllinux_1_1_x86_64.whl", hash = "sha256:3dd6caf940756101205dffc5367babf288a30043d35f80936f9bfb37f8355b32"},
{file = "Pillow-9.3.0-cp38-cp38-win32.whl", hash = "sha256:f1ff2ee69f10f13a9596480335f406dd1f70c3650349e2be67ca3139280cade0"},
{file = "Pillow-9.3.0-cp38-cp38-win_amd64.whl", hash = "sha256:276a5ca930c913f714e372b2591a22c4bd3b81a418c0f6635ba832daec1cbcfc"},
{file = "Pillow-9.3.0-cp39-cp39-macosx_10_10_x86_64.whl", hash = "sha256:73bd195e43f3fadecfc50c682f5055ec32ee2c933243cafbfdec69ab1aa87cad"},
{file = "Pillow-9.3.0-cp39-cp39-macosx_11_0_arm64.whl", hash = "sha256:1c7c8ae3864846fc95f4611c78129301e203aaa2af813b703c55d10cc1628535"},
{file = "Pillow-9.3.0-cp39-cp39-manylinux_2_17_aarch64.manylinux2014_aarch64.whl", hash = "sha256:2e0918e03aa0c72ea56edbb00d4d664294815aa11291a11504a377ea018330d3"},
{file = "Pillow-9.3.0-cp39-cp39-manylinux_2_17_i686.manylinux2014_i686.whl", hash = "sha256:b0915e734b33a474d76c28e07292f196cdf2a590a0d25bcc06e64e545f2d146c"},
{file = "Pillow-9.3.0-cp39-cp39-manylinux_2_17_x86_64.manylinux2014_x86_64.whl", hash = "sha256:af0372acb5d3598f36ec0914deed2a63f6bcdb7b606da04dc19a88d31bf0c05b"},
{file = "Pillow-9.3.0-cp39-cp39-manylinux_2_28_aarch64.whl", hash = "sha256:ad58d27a5b0262c0c19b47d54c5802db9b34d38bbf886665b626aff83c74bacd"},
{file = "Pillow-9.3.0-cp39-cp39-manylinux_2_28_x86_64.whl", hash = "sha256:97aabc5c50312afa5e0a2b07c17d4ac5e865b250986f8afe2b02d772567a380c"},
{file = "Pillow-9.3.0-cp39-cp39-musllinux_1_1_x86_64.whl", hash = "sha256:9aaa107275d8527e9d6e7670b64aabaaa36e5b6bd71a1015ddd21da0d4e06448"},
{file = "Pillow-9.3.0-cp39-cp39-win32.whl", hash = "sha256:bac18ab8d2d1e6b4ce25e3424f709aceef668347db8637c2296bcf41acb7cf48"},
{file = "Pillow-9.3.0-cp39-cp39-win_amd64.whl", hash = "sha256:b472b5ea442148d1c3e2209f20f1e0bb0eb556538690fa70b5e1f79fa0ba8dc2"},
{file = "Pillow-9.3.0-pp37-pypy37_pp73-macosx_10_10_x86_64.whl", hash = "sha256:ab388aaa3f6ce52ac1cb8e122c4bd46657c15905904b3120a6248b5b8b0bc228"},
{file = "Pillow-9.3.0-pp37-pypy37_pp73-manylinux_2_17_i686.manylinux2014_i686.whl", hash = "sha256:dbb8e7f2abee51cef77673be97760abff1674ed32847ce04b4af90f610144c7b"},
{file = "Pillow-9.3.0-pp37-pypy37_pp73-manylinux_2_17_x86_64.manylinux2014_x86_64.whl", hash = "sha256:bca31dd6014cb8b0b2db1e46081b0ca7d936f856da3b39744aef499db5d84d02"},
{file = "Pillow-9.3.0-pp37-pypy37_pp73-manylinux_2_28_x86_64.whl", hash = "sha256:c7025dce65566eb6e89f56c9509d4f628fddcedb131d9465cacd3d8bac337e7e"},
{file = "Pillow-9.3.0-pp37-pypy37_pp73-win_amd64.whl", hash = "sha256:ebf2029c1f464c59b8bdbe5143c79fa2045a581ac53679733d3a91d400ff9efb"},
{file = "Pillow-9.3.0-pp38-pypy38_pp73-macosx_10_10_x86_64.whl", hash = "sha256:b59430236b8e58840a0dfb4099a0e8717ffb779c952426a69ae435ca1f57210c"},
{file = "Pillow-9.3.0-pp38-pypy38_pp73-manylinux_2_17_i686.manylinux2014_i686.whl", hash = "sha256:12ce4932caf2ddf3e41d17fc9c02d67126935a44b86df6a206cf0d7161548627"},
{file = "Pillow-9.3.0-pp38-pypy38_pp73-manylinux_2_17_x86_64.manylinux2014_x86_64.whl", hash = "sha256:ae5331c23ce118c53b172fa64a4c037eb83c9165aba3a7ba9ddd3ec9fa64a699"},
{file = "Pillow-9.3.0-pp38-pypy38_pp73-manylinux_2_28_x86_64.whl", hash = "sha256:0b07fffc13f474264c336298d1b4ce01d9c5a011415b79d4ee5527bb69ae6f65"},
{file = "Pillow-9.3.0-pp38-pypy38_pp73-win_amd64.whl", hash = "sha256:073adb2ae23431d3b9bcbcff3fe698b62ed47211d0716b067385538a1b0f28b8"},
{file = "Pillow-9.3.0.tar.gz", hash = "sha256:c935a22a557a560108d780f9a0fc426dd7459940dc54faa49d83249c8d3e760f"},
]
pip = [
{file = "pip-22.3.1-py3-none-any.whl", hash = "sha256:908c78e6bc29b676ede1c4d57981d490cb892eb45cd8c214ab6298125119e077"},
{file = "pip-22.3.1.tar.gz", hash = "sha256:65fd48317359f3af8e593943e6ae1506b66325085ea64b706a998c6e83eeaf38"},
]
pkgutil-resolve-name = [
{file = "pkgutil_resolve_name-1.3.10-py3-none-any.whl", hash = "sha256:ca27cc078d25c5ad71a9de0a7a330146c4e014c2462d9af19c6b828280649c5e"},
{file = "pkgutil_resolve_name-1.3.10.tar.gz", hash = "sha256:357d6c9e6a755653cfd78893817c0853af365dd51ec97f3d358a819373bbd174"},
]
platformdirs = [
{file = "platformdirs-2.5.4-py3-none-any.whl", hash = "sha256:af0276409f9a02373d540bf8480021a048711d572745aef4b7842dad245eba10"},
{file = "platformdirs-2.5.4.tar.gz", hash = "sha256:1006647646d80f16130f052404c6b901e80ee4ed6bef6792e1f238a8969106f7"},
]
plotly = [
{file = "plotly-5.11.0-py2.py3-none-any.whl", hash = "sha256:52fd74b08aa4fd5a55b9d3034a30dbb746e572d7ed84897422f927fdf687ea5f"},
{file = "plotly-5.11.0.tar.gz", hash = "sha256:4efef479c2ec1d86dcdac8405b6ca70ca65649a77408e39a7e84a1ea2db6c787"},
]
pluggy = [
{file = "pluggy-1.0.0-py2.py3-none-any.whl", hash = "sha256:74134bbf457f031a36d68416e1509f34bd5ccc019f0bcc952c7b909d06b37bd3"},
{file = "pluggy-1.0.0.tar.gz", hash = "sha256:4224373bacce55f955a878bf9cfa763c1e360858e330072059e10bad68531159"},
]
poethepoet = [
{file = "poethepoet-0.16.4-py3-none-any.whl", hash = "sha256:1f05dce92ca6457d018696b614ba2149261380f30ceb21c196daf19c0c2e1fcd"},
{file = "poethepoet-0.16.4.tar.gz", hash = "sha256:a80f6bba64812515c406ffc218aff833951b17854eb111f724b48c44f9759af5"},
]
pox = [
{file = "pox-0.3.2-py3-none-any.whl", hash = "sha256:56fe2f099ecd8a557b8948082504492de90e8598c34733c9b1fdeca8f7b6de61"},
{file = "pox-0.3.2.tar.gz", hash = "sha256:e825225297638d6e3d49415f8cfb65407a5d15e56f2fb7fe9d9b9e3050c65ee1"},
]
ppft = [
{file = "ppft-1.7.6.6-py3-none-any.whl", hash = "sha256:f355d2caeed8bd7c9e4a860c471f31f7e66d1ada2791ab5458ea7dca15a51e41"},
{file = "ppft-1.7.6.6.tar.gz", hash = "sha256:f933f0404f3e808bc860745acb3b79cd4fe31ea19a20889a645f900415be60f1"},
]
preshed = [
{file = "preshed-3.0.8-cp310-cp310-macosx_10_9_x86_64.whl", hash = "sha256:ea4b6df8ef7af38e864235256793bc3056e9699d991afcf6256fa298858582fc"},
{file = "preshed-3.0.8-cp310-cp310-macosx_11_0_arm64.whl", hash = "sha256:8e945fc814bdc29564a2ce137c237b3a9848aa1e76a1160369b6e0d328151fdd"},
{file = "preshed-3.0.8-cp310-cp310-manylinux_2_17_aarch64.manylinux2014_aarch64.whl", hash = "sha256:f9a4833530fe53001c351974e0c8bb660211b8d0358e592af185fec1ae12b2d0"},
{file = "preshed-3.0.8-cp310-cp310-manylinux_2_5_x86_64.manylinux1_x86_64.manylinux_2_17_x86_64.manylinux2014_x86_64.whl", hash = "sha256:e1472ee231f323b4f4368b1b5f8f08481ed43af89697d45450c6ae4af46ac08a"},
{file = "preshed-3.0.8-cp310-cp310-win_amd64.whl", hash = "sha256:c8a2e2931eea7e500fbf8e014b69022f3fab2e35a70da882e2fc753e5e487ae3"},
{file = "preshed-3.0.8-cp311-cp311-macosx_10_9_x86_64.whl", hash = "sha256:0e1bb8701df7861af26a312225bdf7c4822ac06fcf75aeb60fe2b0a20e64c222"},
{file = "preshed-3.0.8-cp311-cp311-macosx_11_0_arm64.whl", hash = "sha256:e9aef2b0b7687aecef48b1c6ff657d407ff24e75462877dcb888fa904c4a9c6d"},
{file = "preshed-3.0.8-cp311-cp311-manylinux_2_17_aarch64.manylinux2014_aarch64.whl", hash = "sha256:854d58a8913ebf3b193b0dc8064155b034e8987de25f26838dfeca09151fda8a"},
{file = "preshed-3.0.8-cp311-cp311-manylinux_2_5_x86_64.manylinux1_x86_64.manylinux_2_17_x86_64.manylinux2014_x86_64.whl", hash = "sha256:135e2ac0db1a3948d6ec295598c7e182b52c394663f2fcfe36a97ae51186be21"},
{file = "preshed-3.0.8-cp311-cp311-win_amd64.whl", hash = "sha256:019d8fa4161035811fb2804d03214143298739e162d0ad24e087bd46c50970f5"},
{file = "preshed-3.0.8-cp36-cp36m-manylinux_2_17_aarch64.manylinux2014_aarch64.whl", hash = "sha256:6a49ce52856fbb3ef4f1cc744c53f5d7e1ca370b1939620ac2509a6d25e02a50"},
{file = "preshed-3.0.8-cp36-cp36m-manylinux_2_5_x86_64.manylinux1_x86_64.manylinux_2_17_x86_64.manylinux2014_x86_64.whl", hash = "sha256:fdbc2957b36115a576c515ffe963919f19d2683f3c76c9304ae88ef59f6b5ca6"},
{file = "preshed-3.0.8-cp36-cp36m-win_amd64.whl", hash = "sha256:09cc9da2ac1b23010ce7d88a5e20f1033595e6dd80be14318e43b9409f4c7697"},
{file = "preshed-3.0.8-cp37-cp37m-macosx_10_9_x86_64.whl", hash = "sha256:e19c8069f1a1450f835f23d47724530cf716d581fcafb398f534d044f806b8c2"},
{file = "preshed-3.0.8-cp37-cp37m-manylinux_2_17_aarch64.manylinux2014_aarch64.whl", hash = "sha256:25b5ef5e387a0e17ff41202a8c1816184ab6fb3c0d0b847bf8add0ed5941eb8d"},
{file = "preshed-3.0.8-cp37-cp37m-manylinux_2_5_x86_64.manylinux1_x86_64.manylinux_2_17_x86_64.manylinux2014_x86_64.whl", hash = "sha256:53d3e2456a085425c66af7baba62d7eaa24aa5e460e1a9e02c401a2ed59abd7b"},
{file = "preshed-3.0.8-cp37-cp37m-win_amd64.whl", hash = "sha256:85e98a618fb36cdcc37501d8b9b8c1246651cc2f2db3a70702832523e0ae12f4"},
{file = "preshed-3.0.8-cp38-cp38-macosx_10_9_x86_64.whl", hash = "sha256:7f8837bf616335464f3713cbf562a3dcaad22c3ca9193f957018964ef871a68b"},
{file = "preshed-3.0.8-cp38-cp38-macosx_11_0_arm64.whl", hash = "sha256:720593baf2c2e295f855192974799e486da5f50d4548db93c44f5726a43cefb9"},
{file = "preshed-3.0.8-cp38-cp38-manylinux_2_17_aarch64.manylinux2014_aarch64.whl", hash = "sha256:e0ad3d860b9ce88a74cf7414bb4b1c6fd833813e7b818e76f49272c4974b19ce"},
{file = "preshed-3.0.8-cp38-cp38-manylinux_2_5_x86_64.manylinux1_x86_64.manylinux_2_17_x86_64.manylinux2014_x86_64.whl", hash = "sha256:fd19d48440b152657966a52e627780c0ddbe9d907b8d7ee4598505e80a3c55c7"},
{file = "preshed-3.0.8-cp38-cp38-win_amd64.whl", hash = "sha256:246e7c6890dc7fe9b10f0e31de3346b906e3862b6ef42fcbede37968f46a73bf"},
{file = "preshed-3.0.8-cp39-cp39-macosx_10_9_x86_64.whl", hash = "sha256:67643e66691770dc3434b01671648f481e3455209ce953727ef2330b16790aaa"},
{file = "preshed-3.0.8-cp39-cp39-macosx_11_0_arm64.whl", hash = "sha256:0ae25a010c9f551aa2247ee621457f679e07c57fc99d3fd44f84cb40b925f12c"},
{file = "preshed-3.0.8-cp39-cp39-manylinux_2_17_aarch64.manylinux2014_aarch64.whl", hash = "sha256:5a6a7fcf7dd2e7711051b3f0432da9ec9c748954c989f49d2cd8eabf8c2d953e"},
{file = "preshed-3.0.8-cp39-cp39-manylinux_2_5_x86_64.manylinux1_x86_64.manylinux_2_17_x86_64.manylinux2014_x86_64.whl", hash = "sha256:5942858170c4f53d9afc6352a86bbc72fc96cc4d8964b6415492114a5920d3ed"},
{file = "preshed-3.0.8-cp39-cp39-win_amd64.whl", hash = "sha256:06793022a56782ef51d74f1399925a2ba958e50c5cfbc6fa5b25c4945e158a07"},
{file = "preshed-3.0.8.tar.gz", hash = "sha256:6c74c70078809bfddda17be96483c41d06d717934b07cab7921011d81758b357"},
]
progressbar2 = [
{file = "progressbar2-4.2.0-py2.py3-none-any.whl", hash = "sha256:1a8e201211f99a85df55f720b3b6da7fb5c8cdef56792c4547205be2de5ea606"},
{file = "progressbar2-4.2.0.tar.gz", hash = "sha256:1393922fcb64598944ad457569fbeb4b3ac189ef50b5adb9cef3284e87e394ce"},
]
prometheus-client = [
{file = "prometheus_client-0.15.0-py3-none-any.whl", hash = "sha256:db7c05cbd13a0f79975592d112320f2605a325969b270a94b71dcabc47b931d2"},
{file = "prometheus_client-0.15.0.tar.gz", hash = "sha256:be26aa452490cfcf6da953f9436e95a9f2b4d578ca80094b4458930e5f584ab1"},
]
prompt-toolkit = [
{file = "prompt_toolkit-3.0.33-py3-none-any.whl", hash = "sha256:ced598b222f6f4029c0800cefaa6a17373fb580cd093223003475ce32805c35b"},
{file = "prompt_toolkit-3.0.33.tar.gz", hash = "sha256:535c29c31216c77302877d5120aef6c94ff573748a5b5ca5b1b1f76f5e700c73"},
]
protobuf = [
{file = "protobuf-3.19.6-cp310-cp310-manylinux2014_aarch64.whl", hash = "sha256:010be24d5a44be7b0613750ab40bc8b8cedc796db468eae6c779b395f50d1fa1"},
{file = "protobuf-3.19.6-cp310-cp310-manylinux_2_17_x86_64.manylinux2014_x86_64.whl", hash = "sha256:11478547958c2dfea921920617eb457bc26867b0d1aa065ab05f35080c5d9eb6"},
{file = "protobuf-3.19.6-cp310-cp310-win32.whl", hash = "sha256:559670e006e3173308c9254d63facb2c03865818f22204037ab76f7a0ff70b5f"},
{file = "protobuf-3.19.6-cp310-cp310-win_amd64.whl", hash = "sha256:347b393d4dd06fb93a77620781e11c058b3b0a5289262f094379ada2920a3730"},
{file = "protobuf-3.19.6-cp36-cp36m-macosx_10_9_x86_64.whl", hash = "sha256:a8ce5ae0de28b51dff886fb922012dad885e66176663950cb2344c0439ecb473"},
{file = "protobuf-3.19.6-cp36-cp36m-manylinux_2_17_x86_64.manylinux2014_x86_64.whl", hash = "sha256:90b0d02163c4e67279ddb6dc25e063db0130fc299aefabb5d481053509fae5c8"},
{file = "protobuf-3.19.6-cp36-cp36m-win32.whl", hash = "sha256:30f5370d50295b246eaa0296533403961f7e64b03ea12265d6dfce3a391d8992"},
{file = "protobuf-3.19.6-cp36-cp36m-win_amd64.whl", hash = "sha256:0c0714b025ec057b5a7600cb66ce7c693815f897cfda6d6efb58201c472e3437"},
{file = "protobuf-3.19.6-cp37-cp37m-macosx_10_9_x86_64.whl", hash = "sha256:5057c64052a1f1dd7d4450e9aac25af6bf36cfbfb3a1cd89d16393a036c49157"},
{file = "protobuf-3.19.6-cp37-cp37m-manylinux2014_aarch64.whl", hash = "sha256:bb6776bd18f01ffe9920e78e03a8676530a5d6c5911934c6a1ac6eb78973ecb6"},
{file = "protobuf-3.19.6-cp37-cp37m-manylinux_2_17_x86_64.manylinux2014_x86_64.whl", hash = "sha256:84a04134866861b11556a82dd91ea6daf1f4925746b992f277b84013a7cc1229"},
{file = "protobuf-3.19.6-cp37-cp37m-win32.whl", hash = "sha256:4bc98de3cdccfb5cd769620d5785b92c662b6bfad03a202b83799b6ed3fa1fa7"},
{file = "protobuf-3.19.6-cp37-cp37m-win_amd64.whl", hash = "sha256:aa3b82ca1f24ab5326dcf4ea00fcbda703e986b22f3d27541654f749564d778b"},
{file = "protobuf-3.19.6-cp38-cp38-macosx_10_9_x86_64.whl", hash = "sha256:2b2d2913bcda0e0ec9a784d194bc490f5dc3d9d71d322d070b11a0ade32ff6ba"},
{file = "protobuf-3.19.6-cp38-cp38-manylinux2014_aarch64.whl", hash = "sha256:d0b635cefebd7a8a0f92020562dead912f81f401af7e71f16bf9506ff3bdbb38"},
{file = "protobuf-3.19.6-cp38-cp38-manylinux_2_17_x86_64.manylinux2014_x86_64.whl", hash = "sha256:7a552af4dc34793803f4e735aabe97ffc45962dfd3a237bdde242bff5a3de684"},
{file = "protobuf-3.19.6-cp38-cp38-win32.whl", hash = "sha256:0469bc66160180165e4e29de7f445e57a34ab68f49357392c5b2f54c656ab25e"},
{file = "protobuf-3.19.6-cp38-cp38-win_amd64.whl", hash = "sha256:91d5f1e139ff92c37e0ff07f391101df77e55ebb97f46bbc1535298d72019462"},
{file = "protobuf-3.19.6-cp39-cp39-macosx_10_9_x86_64.whl", hash = "sha256:c0ccd3f940fe7f3b35a261b1dd1b4fc850c8fde9f74207015431f174be5976b3"},
{file = "protobuf-3.19.6-cp39-cp39-manylinux2014_aarch64.whl", hash = "sha256:30a15015d86b9c3b8d6bf78d5b8c7749f2512c29f168ca259c9d7727604d0e39"},
{file = "protobuf-3.19.6-cp39-cp39-manylinux_2_17_x86_64.manylinux2014_x86_64.whl", hash = "sha256:878b4cd080a21ddda6ac6d1e163403ec6eea2e206cf225982ae04567d39be7b0"},
{file = "protobuf-3.19.6-cp39-cp39-win32.whl", hash = "sha256:5a0d7539a1b1fb7e76bf5faa0b44b30f812758e989e59c40f77a7dab320e79b9"},
{file = "protobuf-3.19.6-cp39-cp39-win_amd64.whl", hash = "sha256:bbf5cea5048272e1c60d235c7bd12ce1b14b8a16e76917f371c718bd3005f045"},
{file = "protobuf-3.19.6-py2.py3-none-any.whl", hash = "sha256:14082457dc02be946f60b15aad35e9f5c69e738f80ebbc0900a19bc83734a5a4"},
{file = "protobuf-3.19.6.tar.gz", hash = "sha256:5f5540d57a43042389e87661c6eaa50f47c19c6176e8cf1c4f287aeefeccb5c4"},
]
psutil = [
{file = "psutil-5.9.4-cp27-cp27m-macosx_10_9_x86_64.whl", hash = "sha256:c1ca331af862803a42677c120aff8a814a804e09832f166f226bfd22b56feee8"},
{file = "psutil-5.9.4-cp27-cp27m-manylinux2010_i686.whl", hash = "sha256:68908971daf802203f3d37e78d3f8831b6d1014864d7a85937941bb35f09aefe"},
{file = "psutil-5.9.4-cp27-cp27m-manylinux2010_x86_64.whl", hash = "sha256:3ff89f9b835100a825b14c2808a106b6fdcc4b15483141482a12c725e7f78549"},
{file = "psutil-5.9.4-cp27-cp27m-win32.whl", hash = "sha256:852dd5d9f8a47169fe62fd4a971aa07859476c2ba22c2254d4a1baa4e10b95ad"},
{file = "psutil-5.9.4-cp27-cp27m-win_amd64.whl", hash = "sha256:9120cd39dca5c5e1c54b59a41d205023d436799b1c8c4d3ff71af18535728e94"},
{file = "psutil-5.9.4-cp27-cp27mu-manylinux2010_i686.whl", hash = "sha256:6b92c532979bafc2df23ddc785ed116fced1f492ad90a6830cf24f4d1ea27d24"},
{file = "psutil-5.9.4-cp27-cp27mu-manylinux2010_x86_64.whl", hash = "sha256:efeae04f9516907be44904cc7ce08defb6b665128992a56957abc9b61dca94b7"},
{file = "psutil-5.9.4-cp36-abi3-macosx_10_9_x86_64.whl", hash = "sha256:54d5b184728298f2ca8567bf83c422b706200bcbbfafdc06718264f9393cfeb7"},
{file = "psutil-5.9.4-cp36-abi3-manylinux_2_12_i686.manylinux2010_i686.manylinux_2_17_i686.manylinux2014_i686.whl", hash = "sha256:16653106f3b59386ffe10e0bad3bb6299e169d5327d3f187614b1cb8f24cf2e1"},
{file = "psutil-5.9.4-cp36-abi3-manylinux_2_12_x86_64.manylinux2010_x86_64.manylinux_2_17_x86_64.manylinux2014_x86_64.whl", hash = "sha256:54c0d3d8e0078b7666984e11b12b88af2db11d11249a8ac8920dd5ef68a66e08"},
{file = "psutil-5.9.4-cp36-abi3-win32.whl", hash = "sha256:149555f59a69b33f056ba1c4eb22bb7bf24332ce631c44a319cec09f876aaeff"},
{file = "psutil-5.9.4-cp36-abi3-win_amd64.whl", hash = "sha256:fd8522436a6ada7b4aad6638662966de0d61d241cb821239b2ae7013d41a43d4"},
{file = "psutil-5.9.4-cp38-abi3-macosx_11_0_arm64.whl", hash = "sha256:6001c809253a29599bc0dfd5179d9f8a5779f9dffea1da0f13c53ee568115e1e"},
{file = "psutil-5.9.4.tar.gz", hash = "sha256:3d7f9739eb435d4b1338944abe23f49584bde5395f27487d2ee25ad9a8774a62"},
]
ptyprocess = [
{file = "ptyprocess-0.7.0-py2.py3-none-any.whl", hash = "sha256:4b41f3967fce3af57cc7e94b888626c18bf37a083e3651ca8feeb66d492fef35"},
{file = "ptyprocess-0.7.0.tar.gz", hash = "sha256:5c5d0a3b48ceee0b48485e0c26037c0acd7d29765ca3fbb5cb3831d347423220"},
]
pure-eval = [
{file = "pure_eval-0.2.2-py3-none-any.whl", hash = "sha256:01eaab343580944bc56080ebe0a674b39ec44a945e6d09ba7db3cb8cec289350"},
{file = "pure_eval-0.2.2.tar.gz", hash = "sha256:2b45320af6dfaa1750f543d714b6d1c520a1688dec6fd24d339063ce0aaa9ac3"},
]
py = [
{file = "py-1.11.0-py2.py3-none-any.whl", hash = "sha256:607c53218732647dff4acdfcd50cb62615cedf612e72d1724fb1a0cc6405b378"},
{file = "py-1.11.0.tar.gz", hash = "sha256:51c75c4126074b472f746a24399ad32f6053d1b34b68d2fa41e558e6f4a98719"},
]
pyasn1 = [
{file = "pyasn1-0.4.8-py2.py3-none-any.whl", hash = "sha256:39c7e2ec30515947ff4e87fb6f456dfc6e84857d34be479c9d4a4ba4bf46aa5d"},
{file = "pyasn1-0.4.8.tar.gz", hash = "sha256:aef77c9fb94a3ac588e87841208bdec464471d9871bd5050a287cc9a475cd0ba"},
]
pyasn1-modules = [
{file = "pyasn1-modules-0.2.8.tar.gz", hash = "sha256:905f84c712230b2c592c19470d3ca8d552de726050d1d1716282a1f6146be65e"},
{file = "pyasn1_modules-0.2.8-py2.py3-none-any.whl", hash = "sha256:a50b808ffeb97cb3601dd25981f6b016cbb3d31fbf57a8b8a87428e6158d0c74"},
]
pycodestyle = [
{file = "pycodestyle-2.8.0-py2.py3-none-any.whl", hash = "sha256:720f8b39dde8b293825e7ff02c475f3077124006db4f440dcbc9a20b76548a20"},
{file = "pycodestyle-2.8.0.tar.gz", hash = "sha256:eddd5847ef438ea1c7870ca7eb78a9d47ce0cdb4851a5523949f2601d0cbbe7f"},
]
pycparser = [
{file = "pycparser-2.21-py2.py3-none-any.whl", hash = "sha256:8ee45429555515e1f6b185e78100aea234072576aa43ab53aefcae078162fca9"},
{file = "pycparser-2.21.tar.gz", hash = "sha256:e644fdec12f7872f86c58ff790da456218b10f863970249516d60a5eaca77206"},
]
pydantic = [
{file = "pydantic-1.10.2-cp310-cp310-macosx_10_9_x86_64.whl", hash = "sha256:bb6ad4489af1bac6955d38ebcb95079a836af31e4c4f74aba1ca05bb9f6027bd"},
{file = "pydantic-1.10.2-cp310-cp310-macosx_11_0_arm64.whl", hash = "sha256:a1f5a63a6dfe19d719b1b6e6106561869d2efaca6167f84f5ab9347887d78b98"},
{file = "pydantic-1.10.2-cp310-cp310-manylinux_2_17_x86_64.manylinux2014_x86_64.whl", hash = "sha256:352aedb1d71b8b0736c6d56ad2bd34c6982720644b0624462059ab29bd6e5912"},
{file = "pydantic-1.10.2-cp310-cp310-manylinux_2_5_i686.manylinux1_i686.manylinux_2_17_i686.manylinux2014_i686.whl", hash = "sha256:19b3b9ccf97af2b7519c42032441a891a5e05c68368f40865a90eb88833c2559"},
{file = "pydantic-1.10.2-cp310-cp310-musllinux_1_1_i686.whl", hash = "sha256:e9069e1b01525a96e6ff49e25876d90d5a563bc31c658289a8772ae186552236"},
{file = "pydantic-1.10.2-cp310-cp310-musllinux_1_1_x86_64.whl", hash = "sha256:355639d9afc76bcb9b0c3000ddcd08472ae75318a6eb67a15866b87e2efa168c"},
{file = "pydantic-1.10.2-cp310-cp310-win_amd64.whl", hash = "sha256:ae544c47bec47a86bc7d350f965d8b15540e27e5aa4f55170ac6a75e5f73b644"},
{file = "pydantic-1.10.2-cp311-cp311-macosx_10_9_x86_64.whl", hash = "sha256:a4c805731c33a8db4b6ace45ce440c4ef5336e712508b4d9e1aafa617dc9907f"},
{file = "pydantic-1.10.2-cp311-cp311-macosx_11_0_arm64.whl", hash = "sha256:d49f3db871575e0426b12e2f32fdb25e579dea16486a26e5a0474af87cb1ab0a"},
{file = "pydantic-1.10.2-cp311-cp311-manylinux_2_17_x86_64.manylinux2014_x86_64.whl", hash = "sha256:37c90345ec7dd2f1bcef82ce49b6235b40f282b94d3eec47e801baf864d15525"},
{file = "pydantic-1.10.2-cp311-cp311-manylinux_2_5_i686.manylinux1_i686.manylinux_2_17_i686.manylinux2014_i686.whl", hash = "sha256:7b5ba54d026c2bd2cb769d3468885f23f43710f651688e91f5fb1edcf0ee9283"},
{file = "pydantic-1.10.2-cp311-cp311-musllinux_1_1_i686.whl", hash = "sha256:05e00dbebbe810b33c7a7362f231893183bcc4251f3f2ff991c31d5c08240c42"},
{file = "pydantic-1.10.2-cp311-cp311-musllinux_1_1_x86_64.whl", hash = "sha256:2d0567e60eb01bccda3a4df01df677adf6b437958d35c12a3ac3e0f078b0ee52"},
{file = "pydantic-1.10.2-cp311-cp311-win_amd64.whl", hash = "sha256:c6f981882aea41e021f72779ce2a4e87267458cc4d39ea990729e21ef18f0f8c"},
{file = "pydantic-1.10.2-cp37-cp37m-macosx_10_9_x86_64.whl", hash = "sha256:c4aac8e7103bf598373208f6299fa9a5cfd1fc571f2d40bf1dd1955a63d6eeb5"},
{file = "pydantic-1.10.2-cp37-cp37m-manylinux_2_17_x86_64.manylinux2014_x86_64.whl", hash = "sha256:81a7b66c3f499108b448f3f004801fcd7d7165fb4200acb03f1c2402da73ce4c"},
{file = "pydantic-1.10.2-cp37-cp37m-manylinux_2_5_i686.manylinux1_i686.manylinux_2_17_i686.manylinux2014_i686.whl", hash = "sha256:bedf309630209e78582ffacda64a21f96f3ed2e51fbf3962d4d488e503420254"},
{file = "pydantic-1.10.2-cp37-cp37m-musllinux_1_1_i686.whl", hash = "sha256:9300fcbebf85f6339a02c6994b2eb3ff1b9c8c14f502058b5bf349d42447dcf5"},
{file = "pydantic-1.10.2-cp37-cp37m-musllinux_1_1_x86_64.whl", hash = "sha256:216f3bcbf19c726b1cc22b099dd409aa371f55c08800bcea4c44c8f74b73478d"},
{file = "pydantic-1.10.2-cp37-cp37m-win_amd64.whl", hash = "sha256:dd3f9a40c16daf323cf913593083698caee97df2804aa36c4b3175d5ac1b92a2"},
{file = "pydantic-1.10.2-cp38-cp38-macosx_10_9_x86_64.whl", hash = "sha256:b97890e56a694486f772d36efd2ba31612739bc6f3caeee50e9e7e3ebd2fdd13"},
{file = "pydantic-1.10.2-cp38-cp38-macosx_11_0_arm64.whl", hash = "sha256:9cabf4a7f05a776e7793e72793cd92cc865ea0e83a819f9ae4ecccb1b8aa6116"},
{file = "pydantic-1.10.2-cp38-cp38-manylinux_2_17_x86_64.manylinux2014_x86_64.whl", hash = "sha256:06094d18dd5e6f2bbf93efa54991c3240964bb663b87729ac340eb5014310624"},
{file = "pydantic-1.10.2-cp38-cp38-manylinux_2_5_i686.manylinux1_i686.manylinux_2_17_i686.manylinux2014_i686.whl", hash = "sha256:cc78cc83110d2f275ec1970e7a831f4e371ee92405332ebfe9860a715f8336e1"},
{file = "pydantic-1.10.2-cp38-cp38-musllinux_1_1_i686.whl", hash = "sha256:1ee433e274268a4b0c8fde7ad9d58ecba12b069a033ecc4645bb6303c062d2e9"},
{file = "pydantic-1.10.2-cp38-cp38-musllinux_1_1_x86_64.whl", hash = "sha256:7c2abc4393dea97a4ccbb4ec7d8658d4e22c4765b7b9b9445588f16c71ad9965"},
{file = "pydantic-1.10.2-cp38-cp38-win_amd64.whl", hash = "sha256:0b959f4d8211fc964772b595ebb25f7652da3f22322c007b6fed26846a40685e"},
{file = "pydantic-1.10.2-cp39-cp39-macosx_10_9_x86_64.whl", hash = "sha256:c33602f93bfb67779f9c507e4d69451664524389546bacfe1bee13cae6dc7488"},
{file = "pydantic-1.10.2-cp39-cp39-macosx_11_0_arm64.whl", hash = "sha256:5760e164b807a48a8f25f8aa1a6d857e6ce62e7ec83ea5d5c5a802eac81bad41"},
{file = "pydantic-1.10.2-cp39-cp39-manylinux_2_17_x86_64.manylinux2014_x86_64.whl", hash = "sha256:6eb843dcc411b6a2237a694f5e1d649fc66c6064d02b204a7e9d194dff81eb4b"},
{file = "pydantic-1.10.2-cp39-cp39-manylinux_2_5_i686.manylinux1_i686.manylinux_2_17_i686.manylinux2014_i686.whl", hash = "sha256:4b8795290deaae348c4eba0cebb196e1c6b98bdbe7f50b2d0d9a4a99716342fe"},
{file = "pydantic-1.10.2-cp39-cp39-musllinux_1_1_i686.whl", hash = "sha256:e0bedafe4bc165ad0a56ac0bd7695df25c50f76961da29c050712596cf092d6d"},
{file = "pydantic-1.10.2-cp39-cp39-musllinux_1_1_x86_64.whl", hash = "sha256:2e05aed07fa02231dbf03d0adb1be1d79cabb09025dd45aa094aa8b4e7b9dcda"},
{file = "pydantic-1.10.2-cp39-cp39-win_amd64.whl", hash = "sha256:c1ba1afb396148bbc70e9eaa8c06c1716fdddabaf86e7027c5988bae2a829ab6"},
{file = "pydantic-1.10.2-py3-none-any.whl", hash = "sha256:1b6ee725bd6e83ec78b1aa32c5b1fa67a3a65badddde3976bca5fe4568f27709"},
{file = "pydantic-1.10.2.tar.gz", hash = "sha256:91b8e218852ef6007c2b98cd861601c6a09f1aa32bbbb74fab5b1c33d4a1e410"},
]
pydata-sphinx-theme = [
{file = "pydata_sphinx_theme-0.9.0-py3-none-any.whl", hash = "sha256:b22b442a6d6437e5eaf0a1f057169ffcb31eaa9f10be7d5481a125e735c71c12"},
{file = "pydata_sphinx_theme-0.9.0.tar.gz", hash = "sha256:03598a86915b596f4bf80bef79a4d33276a83e670bf360def699dbb9f99dc57a"},
]
pydot = [
{file = "pydot-1.4.2-py2.py3-none-any.whl", hash = "sha256:66c98190c65b8d2e2382a441b4c0edfdb4f4c025ef9cb9874de478fb0793a451"},
{file = "pydot-1.4.2.tar.gz", hash = "sha256:248081a39bcb56784deb018977e428605c1c758f10897a339fce1dd728ff007d"},
]
pydotplus = [
{file = "pydotplus-2.0.2.tar.gz", hash = "sha256:91e85e9ee9b85d2391ead7d635e3d9c7f5f44fd60a60e59b13e2403fa66505c4"},
]
pyflakes = [
{file = "pyflakes-2.4.0-py2.py3-none-any.whl", hash = "sha256:3bb3a3f256f4b7968c9c788781e4ff07dce46bdf12339dcda61053375426ee2e"},
{file = "pyflakes-2.4.0.tar.gz", hash = "sha256:05a85c2872edf37a4ed30b0cce2f6093e1d0581f8c19d7393122da7e25b2b24c"},
]
pygam = [
{file = "pygam-0.8.0-py2.py3-none-any.whl", hash = "sha256:198bd478700520b7c399cc4bcbc011e46850969c32fb09ef0b7a4bbb14e842a5"},
{file = "pygam-0.8.0.tar.gz", hash = "sha256:5cae01aea8b2fede72a6da0aba1490213af54b3476745666af26bbe700479166"},
]
pygments = [
{file = "Pygments-2.13.0-py3-none-any.whl", hash = "sha256:f643f331ab57ba3c9d89212ee4a2dabc6e94f117cf4eefde99a0574720d14c42"},
{file = "Pygments-2.13.0.tar.gz", hash = "sha256:56a8508ae95f98e2b9bdf93a6be5ae3f7d8af858b43e02c5a2ff083726be40c1"},
]
pygraphviz = [
{file = "pygraphviz-1.10.zip", hash = "sha256:457e093a888128903251a266a8cc16b4ba93f3f6334b3ebfed92c7471a74d867"},
]
pyparsing = [
{file = "pyparsing-3.0.9-py3-none-any.whl", hash = "sha256:5026bae9a10eeaefb61dab2f09052b9f4307d44aee4eda64b309723d8d206bbc"},
{file = "pyparsing-3.0.9.tar.gz", hash = "sha256:2b020ecf7d21b687f219b71ecad3631f644a47f01403fa1d1036b0c6416d70fb"},
]
pyro-api = [
{file = "pyro-api-0.1.2.tar.gz", hash = "sha256:a1b900d9580aa1c2fab3b123ab7ff33413744da7c5f440bd4aadc4d40d14d920"},
{file = "pyro_api-0.1.2-py3-none-any.whl", hash = "sha256:10e0e42e9e4401ce464dab79c870e50dfb4f413d326fa777f3582928ef9caf8f"},
]
pyro-ppl = [
{file = "pyro-ppl-1.8.3.tar.gz", hash = "sha256:3edd4381b020d12e8ab50ebe0298c7a68d150b8a024f998ad86fdac7a308d50e"},
{file = "pyro_ppl-1.8.3-py3-none-any.whl", hash = "sha256:cf642cb8bd1a54ad9c69960a5910e423b33f5de3480589b5dcc5f11236b403fb"},
]
pyrsistent = [
{file = "pyrsistent-0.19.2-cp310-cp310-macosx_10_9_universal2.whl", hash = "sha256:d6982b5a0237e1b7d876b60265564648a69b14017f3b5f908c5be2de3f9abb7a"},
{file = "pyrsistent-0.19.2-cp310-cp310-manylinux_2_17_aarch64.manylinux2014_aarch64.whl", hash = "sha256:187d5730b0507d9285a96fca9716310d572e5464cadd19f22b63a6976254d77a"},
{file = "pyrsistent-0.19.2-cp310-cp310-manylinux_2_5_i686.manylinux1_i686.manylinux_2_17_i686.manylinux2014_i686.whl", hash = "sha256:055ab45d5911d7cae397dc418808d8802fb95262751872c841c170b0dbf51eed"},
{file = "pyrsistent-0.19.2-cp310-cp310-win32.whl", hash = "sha256:456cb30ca8bff00596519f2c53e42c245c09e1a4543945703acd4312949bfd41"},
{file = "pyrsistent-0.19.2-cp310-cp310-win_amd64.whl", hash = "sha256:b39725209e06759217d1ac5fcdb510e98670af9e37223985f330b611f62e7425"},
{file = "pyrsistent-0.19.2-cp37-cp37m-macosx_10_9_x86_64.whl", hash = "sha256:2aede922a488861de0ad00c7630a6e2d57e8023e4be72d9d7147a9fcd2d30712"},
{file = "pyrsistent-0.19.2-cp37-cp37m-manylinux_2_17_aarch64.manylinux2014_aarch64.whl", hash = "sha256:879b4c2f4d41585c42df4d7654ddffff1239dc4065bc88b745f0341828b83e78"},
{file = "pyrsistent-0.19.2-cp37-cp37m-manylinux_2_5_i686.manylinux1_i686.manylinux_2_17_i686.manylinux2014_i686.whl", hash = "sha256:c43bec251bbd10e3cb58ced80609c5c1eb238da9ca78b964aea410fb820d00d6"},
{file = "pyrsistent-0.19.2-cp37-cp37m-win32.whl", hash = "sha256:d690b18ac4b3e3cab73b0b7aa7dbe65978a172ff94970ff98d82f2031f8971c2"},
{file = "pyrsistent-0.19.2-cp37-cp37m-win_amd64.whl", hash = "sha256:3ba4134a3ff0fc7ad225b6b457d1309f4698108fb6b35532d015dca8f5abed73"},
{file = "pyrsistent-0.19.2-cp38-cp38-macosx_10_9_universal2.whl", hash = "sha256:a178209e2df710e3f142cbd05313ba0c5ebed0a55d78d9945ac7a4e09d923308"},
{file = "pyrsistent-0.19.2-cp38-cp38-manylinux_2_17_aarch64.manylinux2014_aarch64.whl", hash = "sha256:e371b844cec09d8dc424d940e54bba8f67a03ebea20ff7b7b0d56f526c71d584"},
{file = "pyrsistent-0.19.2-cp38-cp38-manylinux_2_5_i686.manylinux1_i686.manylinux_2_17_i686.manylinux2014_i686.whl", hash = "sha256:111156137b2e71f3a9936baf27cb322e8024dac3dc54ec7fb9f0bcf3249e68bb"},
{file = "pyrsistent-0.19.2-cp38-cp38-win32.whl", hash = "sha256:e5d8f84d81e3729c3b506657dddfe46e8ba9c330bf1858ee33108f8bb2adb38a"},
{file = "pyrsistent-0.19.2-cp38-cp38-win_amd64.whl", hash = "sha256:9cd3e9978d12b5d99cbdc727a3022da0430ad007dacf33d0bf554b96427f33ab"},
{file = "pyrsistent-0.19.2-cp39-cp39-macosx_10_9_universal2.whl", hash = "sha256:f1258f4e6c42ad0b20f9cfcc3ada5bd6b83374516cd01c0960e3cb75fdca6770"},
{file = "pyrsistent-0.19.2-cp39-cp39-manylinux_2_17_aarch64.manylinux2014_aarch64.whl", hash = "sha256:21455e2b16000440e896ab99e8304617151981ed40c29e9507ef1c2e4314ee95"},
{file = "pyrsistent-0.19.2-cp39-cp39-manylinux_2_5_i686.manylinux1_i686.manylinux_2_17_i686.manylinux2014_i686.whl", hash = "sha256:bfd880614c6237243ff53a0539f1cb26987a6dc8ac6e66e0c5a40617296a045e"},
{file = "pyrsistent-0.19.2-cp39-cp39-win32.whl", hash = "sha256:71d332b0320642b3261e9fee47ab9e65872c2bd90260e5d225dabeed93cbd42b"},
{file = "pyrsistent-0.19.2-cp39-cp39-win_amd64.whl", hash = "sha256:dec3eac7549869365fe263831f576c8457f6c833937c68542d08fde73457d291"},
{file = "pyrsistent-0.19.2-py3-none-any.whl", hash = "sha256:ea6b79a02a28550c98b6ca9c35b9f492beaa54d7c5c9e9949555893c8a9234d0"},
{file = "pyrsistent-0.19.2.tar.gz", hash = "sha256:bfa0351be89c9fcbcb8c9879b826f4353be10f58f8a677efab0c017bf7137ec2"},
]
pytest = [
{file = "pytest-7.2.0-py3-none-any.whl", hash = "sha256:892f933d339f068883b6fd5a459f03d85bfcb355e4981e146d2c7616c21fef71"},
{file = "pytest-7.2.0.tar.gz", hash = "sha256:c4014eb40e10f11f355ad4e3c2fb2c6c6d1919c73f3b5a433de4708202cade59"},
]
pytest-cov = [
{file = "pytest-cov-3.0.0.tar.gz", hash = "sha256:e7f0f5b1617d2210a2cabc266dfe2f4c75a8d32fb89eafb7ad9d06f6d076d470"},
{file = "pytest_cov-3.0.0-py3-none-any.whl", hash = "sha256:578d5d15ac4a25e5f961c938b85a05b09fdaae9deef3bb6de9a6e766622ca7a6"},
]
pytest-split = [
{file = "pytest-split-0.8.0.tar.gz", hash = "sha256:8571a3f60ca8656c698ed86b0a3212bb9e79586ecb201daef9988c336ff0e6ff"},
{file = "pytest_split-0.8.0-py3-none-any.whl", hash = "sha256:2e06b8b1ab7ceb19d0b001548271abaf91d12415a8687086cf40581c555d309f"},
]
python-dateutil = [
{file = "python-dateutil-2.8.2.tar.gz", hash = "sha256:0123cacc1627ae19ddf3c27a5de5bd67ee4586fbdd6440d9748f8abb483d3e86"},
{file = "python_dateutil-2.8.2-py2.py3-none-any.whl", hash = "sha256:961d03dc3453ebbc59dbdea9e4e11c5651520a876d0f4db161e8674aae935da9"},
]
python-utils = [
{file = "python-utils-3.4.5.tar.gz", hash = "sha256:7e329c427a6d23036cfcc4501638afb31b2ddc8896f25393562833874b8c6e0a"},
{file = "python_utils-3.4.5-py2.py3-none-any.whl", hash = "sha256:22990259324eae88faa3389d302861a825dbdd217ab40e3ec701851b3337d592"},
]
pytz = [
{file = "pytz-2022.6-py2.py3-none-any.whl", hash = "sha256:222439474e9c98fced559f1709d89e6c9cbf8d79c794ff3eb9f8800064291427"},
{file = "pytz-2022.6.tar.gz", hash = "sha256:e89512406b793ca39f5971bc999cc538ce125c0e51c27941bef4568b460095e2"},
]
pytz-deprecation-shim = [
{file = "pytz_deprecation_shim-0.1.0.post0-py2.py3-none-any.whl", hash = "sha256:8314c9692a636c8eb3bda879b9f119e350e93223ae83e70e80c31675a0fdc1a6"},
{file = "pytz_deprecation_shim-0.1.0.post0.tar.gz", hash = "sha256:af097bae1b616dde5c5744441e2ddc69e74dfdcb0c263129610d85b87445a59d"},
]
pywin32 = [
{file = "pywin32-305-cp310-cp310-win32.whl", hash = "sha256:421f6cd86e84bbb696d54563c48014b12a23ef95a14e0bdba526be756d89f116"},
{file = "pywin32-305-cp310-cp310-win_amd64.whl", hash = "sha256:73e819c6bed89f44ff1d690498c0a811948f73777e5f97c494c152b850fad478"},
{file = "pywin32-305-cp310-cp310-win_arm64.whl", hash = "sha256:742eb905ce2187133a29365b428e6c3b9001d79accdc30aa8969afba1d8470f4"},
{file = "pywin32-305-cp311-cp311-win32.whl", hash = "sha256:19ca459cd2e66c0e2cc9a09d589f71d827f26d47fe4a9d09175f6aa0256b51c2"},
{file = "pywin32-305-cp311-cp311-win_amd64.whl", hash = "sha256:326f42ab4cfff56e77e3e595aeaf6c216712bbdd91e464d167c6434b28d65990"},
{file = "pywin32-305-cp311-cp311-win_arm64.whl", hash = "sha256:4ecd404b2c6eceaca52f8b2e3e91b2187850a1ad3f8b746d0796a98b4cea04db"},
{file = "pywin32-305-cp36-cp36m-win32.whl", hash = "sha256:48d8b1659284f3c17b68587af047d110d8c44837736b8932c034091683e05863"},
{file = "pywin32-305-cp36-cp36m-win_amd64.whl", hash = "sha256:13362cc5aa93c2beaf489c9c9017c793722aeb56d3e5166dadd5ef82da021fe1"},
{file = "pywin32-305-cp37-cp37m-win32.whl", hash = "sha256:a55db448124d1c1484df22fa8bbcbc45c64da5e6eae74ab095b9ea62e6d00496"},
{file = "pywin32-305-cp37-cp37m-win_amd64.whl", hash = "sha256:109f98980bfb27e78f4df8a51a8198e10b0f347257d1e265bb1a32993d0c973d"},
{file = "pywin32-305-cp38-cp38-win32.whl", hash = "sha256:9dd98384da775afa009bc04863426cb30596fd78c6f8e4e2e5bbf4edf8029504"},
{file = "pywin32-305-cp38-cp38-win_amd64.whl", hash = "sha256:56d7a9c6e1a6835f521788f53b5af7912090674bb84ef5611663ee1595860fc7"},
{file = "pywin32-305-cp39-cp39-win32.whl", hash = "sha256:9d968c677ac4d5cbdaa62fd3014ab241718e619d8e36ef8e11fb930515a1e918"},
{file = "pywin32-305-cp39-cp39-win_amd64.whl", hash = "sha256:50768c6b7c3f0b38b7fb14dd4104da93ebced5f1a50dc0e834594bff6fbe1271"},
]
pywinpty = [
{file = "pywinpty-2.0.9-cp310-none-win_amd64.whl", hash = "sha256:30a7b371446a694a6ce5ef906d70ac04e569de5308c42a2bdc9c3bc9275ec51f"},
{file = "pywinpty-2.0.9-cp311-none-win_amd64.whl", hash = "sha256:d78ef6f4bd7a6c6f94dc1a39ba8fb028540cc39f5cb593e756506db17843125f"},
{file = "pywinpty-2.0.9-cp37-none-win_amd64.whl", hash = "sha256:5ed36aa087e35a3a183f833631b3e4c1ae92fe2faabfce0fa91b77ed3f0f1382"},
{file = "pywinpty-2.0.9-cp38-none-win_amd64.whl", hash = "sha256:2352f44ee913faaec0a02d3c112595e56b8af7feeb8100efc6dc1a8685044199"},
{file = "pywinpty-2.0.9-cp39-none-win_amd64.whl", hash = "sha256:ba75ec55f46c9e17db961d26485b033deb20758b1731e8e208e1e8a387fcf70c"},
{file = "pywinpty-2.0.9.tar.gz", hash = "sha256:01b6400dd79212f50a2f01af1c65b781290ff39610853db99bf03962eb9a615f"},
]
pyyaml = [
{file = "PyYAML-6.0-cp310-cp310-macosx_10_9_x86_64.whl", hash = "sha256:d4db7c7aef085872ef65a8fd7d6d09a14ae91f691dec3e87ee5ee0539d516f53"},
{file = "PyYAML-6.0-cp310-cp310-macosx_11_0_arm64.whl", hash = "sha256:9df7ed3b3d2e0ecfe09e14741b857df43adb5a3ddadc919a2d94fbdf78fea53c"},
{file = "PyYAML-6.0-cp310-cp310-manylinux_2_17_aarch64.manylinux2014_aarch64.whl", hash = "sha256:77f396e6ef4c73fdc33a9157446466f1cff553d979bd00ecb64385760c6babdc"},
{file = "PyYAML-6.0-cp310-cp310-manylinux_2_17_s390x.manylinux2014_s390x.whl", hash = "sha256:a80a78046a72361de73f8f395f1f1e49f956c6be882eed58505a15f3e430962b"},
{file = "PyYAML-6.0-cp310-cp310-manylinux_2_5_x86_64.manylinux1_x86_64.manylinux_2_12_x86_64.manylinux2010_x86_64.whl", hash = "sha256:f84fbc98b019fef2ee9a1cb3ce93e3187a6df0b2538a651bfb890254ba9f90b5"},
{file = "PyYAML-6.0-cp310-cp310-win32.whl", hash = "sha256:2cd5df3de48857ed0544b34e2d40e9fac445930039f3cfe4bcc592a1f836d513"},
{file = "PyYAML-6.0-cp310-cp310-win_amd64.whl", hash = "sha256:daf496c58a8c52083df09b80c860005194014c3698698d1a57cbcfa182142a3a"},
{file = "PyYAML-6.0-cp311-cp311-macosx_10_9_x86_64.whl", hash = "sha256:d4b0ba9512519522b118090257be113b9468d804b19d63c71dbcf4a48fa32358"},
{file = "PyYAML-6.0-cp311-cp311-macosx_11_0_arm64.whl", hash = "sha256:81957921f441d50af23654aa6c5e5eaf9b06aba7f0a19c18a538dc7ef291c5a1"},
{file = "PyYAML-6.0-cp311-cp311-manylinux_2_17_aarch64.manylinux2014_aarch64.whl", hash = "sha256:afa17f5bc4d1b10afd4466fd3a44dc0e245382deca5b3c353d8b757f9e3ecb8d"},
{file = "PyYAML-6.0-cp311-cp311-manylinux_2_17_s390x.manylinux2014_s390x.whl", hash = "sha256:dbad0e9d368bb989f4515da330b88a057617d16b6a8245084f1b05400f24609f"},
{file = "PyYAML-6.0-cp311-cp311-manylinux_2_17_x86_64.manylinux2014_x86_64.whl", hash = "sha256:432557aa2c09802be39460360ddffd48156e30721f5e8d917f01d31694216782"},
{file = "PyYAML-6.0-cp311-cp311-win32.whl", hash = "sha256:bfaef573a63ba8923503d27530362590ff4f576c626d86a9fed95822a8255fd7"},
{file = "PyYAML-6.0-cp311-cp311-win_amd64.whl", hash = "sha256:01b45c0191e6d66c470b6cf1b9531a771a83c1c4208272ead47a3ae4f2f603bf"},
{file = "PyYAML-6.0-cp36-cp36m-macosx_10_9_x86_64.whl", hash = "sha256:897b80890765f037df3403d22bab41627ca8811ae55e9a722fd0392850ec4d86"},
{file = "PyYAML-6.0-cp36-cp36m-manylinux_2_17_aarch64.manylinux2014_aarch64.whl", hash = "sha256:50602afada6d6cbfad699b0c7bb50d5ccffa7e46a3d738092afddc1f9758427f"},
{file = "PyYAML-6.0-cp36-cp36m-manylinux_2_17_s390x.manylinux2014_s390x.whl", hash = "sha256:48c346915c114f5fdb3ead70312bd042a953a8ce5c7106d5bfb1a5254e47da92"},
{file = "PyYAML-6.0-cp36-cp36m-manylinux_2_5_x86_64.manylinux1_x86_64.manylinux_2_12_x86_64.manylinux2010_x86_64.whl", hash = "sha256:98c4d36e99714e55cfbaaee6dd5badbc9a1ec339ebfc3b1f52e293aee6bb71a4"},
{file = "PyYAML-6.0-cp36-cp36m-win32.whl", hash = "sha256:0283c35a6a9fbf047493e3a0ce8d79ef5030852c51e9d911a27badfde0605293"},
{file = "PyYAML-6.0-cp36-cp36m-win_amd64.whl", hash = "sha256:07751360502caac1c067a8132d150cf3d61339af5691fe9e87803040dbc5db57"},
{file = "PyYAML-6.0-cp37-cp37m-macosx_10_9_x86_64.whl", hash = "sha256:819b3830a1543db06c4d4b865e70ded25be52a2e0631ccd2f6a47a2822f2fd7c"},
{file = "PyYAML-6.0-cp37-cp37m-manylinux_2_17_aarch64.manylinux2014_aarch64.whl", hash = "sha256:473f9edb243cb1935ab5a084eb238d842fb8f404ed2193a915d1784b5a6b5fc0"},
{file = "PyYAML-6.0-cp37-cp37m-manylinux_2_17_s390x.manylinux2014_s390x.whl", hash = "sha256:0ce82d761c532fe4ec3f87fc45688bdd3a4c1dc5e0b4a19814b9009a29baefd4"},
{file = "PyYAML-6.0-cp37-cp37m-manylinux_2_5_x86_64.manylinux1_x86_64.manylinux_2_12_x86_64.manylinux2010_x86_64.whl", hash = "sha256:231710d57adfd809ef5d34183b8ed1eeae3f76459c18fb4a0b373ad56bedcdd9"},
{file = "PyYAML-6.0-cp37-cp37m-win32.whl", hash = "sha256:c5687b8d43cf58545ade1fe3e055f70eac7a5a1a0bf42824308d868289a95737"},
{file = "PyYAML-6.0-cp37-cp37m-win_amd64.whl", hash = "sha256:d15a181d1ecd0d4270dc32edb46f7cb7733c7c508857278d3d378d14d606db2d"},
{file = "PyYAML-6.0-cp38-cp38-macosx_10_9_x86_64.whl", hash = "sha256:0b4624f379dab24d3725ffde76559cff63d9ec94e1736b556dacdfebe5ab6d4b"},
{file = "PyYAML-6.0-cp38-cp38-manylinux_2_17_aarch64.manylinux2014_aarch64.whl", hash = "sha256:213c60cd50106436cc818accf5baa1aba61c0189ff610f64f4a3e8c6726218ba"},
{file = "PyYAML-6.0-cp38-cp38-manylinux_2_17_s390x.manylinux2014_s390x.whl", hash = "sha256:9fa600030013c4de8165339db93d182b9431076eb98eb40ee068700c9c813e34"},
{file = "PyYAML-6.0-cp38-cp38-manylinux_2_5_x86_64.manylinux1_x86_64.manylinux_2_12_x86_64.manylinux2010_x86_64.whl", hash = "sha256:277a0ef2981ca40581a47093e9e2d13b3f1fbbeffae064c1d21bfceba2030287"},
{file = "PyYAML-6.0-cp38-cp38-win32.whl", hash = "sha256:d4eccecf9adf6fbcc6861a38015c2a64f38b9d94838ac1810a9023a0609e1b78"},
{file = "PyYAML-6.0-cp38-cp38-win_amd64.whl", hash = "sha256:1e4747bc279b4f613a09eb64bba2ba602d8a6664c6ce6396a4d0cd413a50ce07"},
{file = "PyYAML-6.0-cp39-cp39-macosx_10_9_x86_64.whl", hash = "sha256:055d937d65826939cb044fc8c9b08889e8c743fdc6a32b33e2390f66013e449b"},
{file = "PyYAML-6.0-cp39-cp39-macosx_11_0_arm64.whl", hash = "sha256:e61ceaab6f49fb8bdfaa0f92c4b57bcfbea54c09277b1b4f7ac376bfb7a7c174"},
{file = "PyYAML-6.0-cp39-cp39-manylinux_2_17_aarch64.manylinux2014_aarch64.whl", hash = "sha256:d67d839ede4ed1b28a4e8909735fc992a923cdb84e618544973d7dfc71540803"},
{file = "PyYAML-6.0-cp39-cp39-manylinux_2_17_s390x.manylinux2014_s390x.whl", hash = "sha256:cba8c411ef271aa037d7357a2bc8f9ee8b58b9965831d9e51baf703280dc73d3"},
{file = "PyYAML-6.0-cp39-cp39-manylinux_2_5_x86_64.manylinux1_x86_64.manylinux_2_12_x86_64.manylinux2010_x86_64.whl", hash = "sha256:40527857252b61eacd1d9af500c3337ba8deb8fc298940291486c465c8b46ec0"},
{file = "PyYAML-6.0-cp39-cp39-win32.whl", hash = "sha256:b5b9eccad747aabaaffbc6064800670f0c297e52c12754eb1d976c57e4f74dcb"},
{file = "PyYAML-6.0-cp39-cp39-win_amd64.whl", hash = "sha256:b3d267842bf12586ba6c734f89d1f5b871df0273157918b0ccefa29deb05c21c"},
{file = "PyYAML-6.0.tar.gz", hash = "sha256:68fb519c14306fec9720a2a5b45bc9f0c8d1b9c72adf45c37baedfcd949c35a2"},
]
pyzmq = [
{file = "pyzmq-24.0.1-cp310-cp310-macosx_10_15_universal2.whl", hash = "sha256:28b119ba97129d3001673a697b7cce47fe6de1f7255d104c2f01108a5179a066"},
{file = "pyzmq-24.0.1-cp310-cp310-macosx_10_9_x86_64.whl", hash = "sha256:bcbebd369493d68162cddb74a9c1fcebd139dfbb7ddb23d8f8e43e6c87bac3a6"},
{file = "pyzmq-24.0.1-cp310-cp310-manylinux_2_17_aarch64.manylinux2014_aarch64.whl", hash = "sha256:ae61446166983c663cee42c852ed63899e43e484abf080089f771df4b9d272ef"},
{file = "pyzmq-24.0.1-cp310-cp310-manylinux_2_17_i686.manylinux2014_i686.whl", hash = "sha256:87f7ac99b15270db8d53f28c3c7b968612993a90a5cf359da354efe96f5372b4"},
{file = "pyzmq-24.0.1-cp310-cp310-manylinux_2_17_x86_64.manylinux2014_x86_64.whl", hash = "sha256:9dca7c3956b03b7663fac4d150f5e6d4f6f38b2462c1e9afd83bcf7019f17913"},
{file = "pyzmq-24.0.1-cp310-cp310-musllinux_1_1_aarch64.whl", hash = "sha256:8c78bfe20d4c890cb5580a3b9290f700c570e167d4cdcc55feec07030297a5e3"},
{file = "pyzmq-24.0.1-cp310-cp310-musllinux_1_1_i686.whl", hash = "sha256:48f721f070726cd2a6e44f3c33f8ee4b24188e4b816e6dd8ba542c8c3bb5b246"},
{file = "pyzmq-24.0.1-cp310-cp310-musllinux_1_1_x86_64.whl", hash = "sha256:afe1f3bc486d0ce40abb0a0c9adb39aed3bbac36ebdc596487b0cceba55c21c1"},
{file = "pyzmq-24.0.1-cp310-cp310-win32.whl", hash = "sha256:3e6192dbcefaaa52ed81be88525a54a445f4b4fe2fffcae7fe40ebb58bd06bfd"},
{file = "pyzmq-24.0.1-cp310-cp310-win_amd64.whl", hash = "sha256:86de64468cad9c6d269f32a6390e210ca5ada568c7a55de8e681ca3b897bb340"},
{file = "pyzmq-24.0.1-cp311-cp311-macosx_10_15_universal2.whl", hash = "sha256:838812c65ed5f7c2bd11f7b098d2e5d01685a3f6d1f82849423b570bae698c00"},
{file = "pyzmq-24.0.1-cp311-cp311-macosx_10_9_x86_64.whl", hash = "sha256:dfb992dbcd88d8254471760879d48fb20836d91baa90f181c957122f9592b3dc"},
{file = "pyzmq-24.0.1-cp311-cp311-manylinux_2_17_aarch64.manylinux2014_aarch64.whl", hash = "sha256:7abddb2bd5489d30ffeb4b93a428130886c171b4d355ccd226e83254fcb6b9ef"},
{file = "pyzmq-24.0.1-cp311-cp311-manylinux_2_17_i686.manylinux2014_i686.whl", hash = "sha256:94010bd61bc168c103a5b3b0f56ed3b616688192db7cd5b1d626e49f28ff51b3"},
{file = "pyzmq-24.0.1-cp311-cp311-manylinux_2_28_x86_64.whl", hash = "sha256:8242543c522d84d033fe79be04cb559b80d7eb98ad81b137ff7e0a9020f00ace"},
{file = "pyzmq-24.0.1-cp311-cp311-musllinux_1_1_aarch64.whl", hash = "sha256:ccb94342d13e3bf3ffa6e62f95b5e3f0bc6bfa94558cb37f4b3d09d6feb536ff"},
{file = "pyzmq-24.0.1-cp311-cp311-musllinux_1_1_i686.whl", hash = "sha256:6640f83df0ae4ae1104d4c62b77e9ef39be85ebe53f636388707d532bee2b7b8"},
{file = "pyzmq-24.0.1-cp311-cp311-musllinux_1_1_x86_64.whl", hash = "sha256:a180dbd5ea5d47c2d3b716d5c19cc3fb162d1c8db93b21a1295d69585bfddac1"},
{file = "pyzmq-24.0.1-cp311-cp311-win32.whl", hash = "sha256:624321120f7e60336be8ec74a172ae7fba5c3ed5bf787cc85f7e9986c9e0ebc2"},
{file = "pyzmq-24.0.1-cp311-cp311-win_amd64.whl", hash = "sha256:1724117bae69e091309ffb8255412c4651d3f6355560d9af312d547f6c5bc8b8"},
{file = "pyzmq-24.0.1-cp36-cp36m-macosx_10_9_x86_64.whl", hash = "sha256:15975747462ec49fdc863af906bab87c43b2491403ab37a6d88410635786b0f4"},
{file = "pyzmq-24.0.1-cp36-cp36m-manylinux_2_17_aarch64.manylinux2014_aarch64.whl", hash = "sha256:b947e264f0e77d30dcbccbb00f49f900b204b922eb0c3a9f0afd61aaa1cedc3d"},
{file = "pyzmq-24.0.1-cp36-cp36m-manylinux_2_5_i686.manylinux1_i686.whl", hash = "sha256:0ec91f1bad66f3ee8c6deb65fa1fe418e8ad803efedd69c35f3b5502f43bd1dc"},
{file = "pyzmq-24.0.1-cp36-cp36m-manylinux_2_5_x86_64.manylinux1_x86_64.whl", hash = "sha256:db03704b3506455d86ec72c3358a779e9b1d07b61220dfb43702b7b668edcd0d"},
{file = "pyzmq-24.0.1-cp36-cp36m-musllinux_1_1_aarch64.whl", hash = "sha256:e7e66b4e403c2836ac74f26c4b65d8ac0ca1eef41dfcac2d013b7482befaad83"},
{file = "pyzmq-24.0.1-cp36-cp36m-musllinux_1_1_i686.whl", hash = "sha256:7a23ccc1083c260fa9685c93e3b170baba45aeed4b524deb3f426b0c40c11639"},
{file = "pyzmq-24.0.1-cp36-cp36m-musllinux_1_1_x86_64.whl", hash = "sha256:fa0ae3275ef706c0309556061185dd0e4c4cd3b7d6f67ae617e4e677c7a41e2e"},
{file = "pyzmq-24.0.1-cp36-cp36m-win32.whl", hash = "sha256:f01de4ec083daebf210531e2cca3bdb1608dbbbe00a9723e261d92087a1f6ebc"},
{file = "pyzmq-24.0.1-cp36-cp36m-win_amd64.whl", hash = "sha256:de4217b9eb8b541cf2b7fde4401ce9d9a411cc0af85d410f9d6f4333f43640be"},
{file = "pyzmq-24.0.1-cp37-cp37m-macosx_10_9_x86_64.whl", hash = "sha256:78068e8678ca023594e4a0ab558905c1033b2d3e806a0ad9e3094e231e115a33"},
{file = "pyzmq-24.0.1-cp37-cp37m-manylinux_2_17_aarch64.manylinux2014_aarch64.whl", hash = "sha256:77c2713faf25a953c69cf0f723d1b7dd83827b0834e6c41e3fb3bbc6765914a1"},
{file = "pyzmq-24.0.1-cp37-cp37m-manylinux_2_5_i686.manylinux1_i686.whl", hash = "sha256:8bb4af15f305056e95ca1bd086239b9ebc6ad55e9f49076d27d80027f72752f6"},
{file = "pyzmq-24.0.1-cp37-cp37m-manylinux_2_5_x86_64.manylinux1_x86_64.whl", hash = "sha256:0f14cffd32e9c4c73da66db97853a6aeceaac34acdc0fae9e5bbc9370281864c"},
{file = "pyzmq-24.0.1-cp37-cp37m-musllinux_1_1_aarch64.whl", hash = "sha256:0108358dab8c6b27ff6b985c2af4b12665c1bc659648284153ee501000f5c107"},
{file = "pyzmq-24.0.1-cp37-cp37m-musllinux_1_1_i686.whl", hash = "sha256:d66689e840e75221b0b290b0befa86f059fb35e1ee6443bce51516d4d61b6b99"},
{file = "pyzmq-24.0.1-cp37-cp37m-musllinux_1_1_x86_64.whl", hash = "sha256:ae08ac90aa8fa14caafc7a6251bd218bf6dac518b7bff09caaa5e781119ba3f2"},
{file = "pyzmq-24.0.1-cp37-cp37m-win32.whl", hash = "sha256:8421aa8c9b45ea608c205db9e1c0c855c7e54d0e9c2c2f337ce024f6843cab3b"},
{file = "pyzmq-24.0.1-cp37-cp37m-win_amd64.whl", hash = "sha256:54d8b9c5e288362ec8595c1d98666d36f2070fd0c2f76e2b3c60fbad9bd76227"},
{file = "pyzmq-24.0.1-cp38-cp38-macosx_10_15_universal2.whl", hash = "sha256:acbd0a6d61cc954b9f535daaa9ec26b0a60a0d4353c5f7c1438ebc88a359a47e"},
{file = "pyzmq-24.0.1-cp38-cp38-macosx_10_9_x86_64.whl", hash = "sha256:47b11a729d61a47df56346283a4a800fa379ae6a85870d5a2e1e4956c828eedc"},
{file = "pyzmq-24.0.1-cp38-cp38-manylinux_2_12_i686.manylinux2010_i686.whl", hash = "sha256:abe6eb10122f0d746a0d510c2039ae8edb27bc9af29f6d1b05a66cc2401353ff"},
{file = "pyzmq-24.0.1-cp38-cp38-manylinux_2_12_x86_64.manylinux2010_x86_64.whl", hash = "sha256:07bec1a1b22dacf718f2c0e71b49600bb6a31a88f06527dfd0b5aababe3fa3f7"},
{file = "pyzmq-24.0.1-cp38-cp38-manylinux_2_17_aarch64.manylinux2014_aarch64.whl", hash = "sha256:f0d945a85b70da97ae86113faf9f1b9294efe66bd4a5d6f82f2676d567338b66"},
{file = "pyzmq-24.0.1-cp38-cp38-musllinux_1_1_aarch64.whl", hash = "sha256:1b7928bb7580736ffac5baf814097be342ba08d3cfdfb48e52773ec959572287"},
{file = "pyzmq-24.0.1-cp38-cp38-musllinux_1_1_i686.whl", hash = "sha256:b946da90dc2799bcafa682692c1d2139b2a96ec3c24fa9fc6f5b0da782675330"},
{file = "pyzmq-24.0.1-cp38-cp38-musllinux_1_1_x86_64.whl", hash = "sha256:c8840f064b1fb377cffd3efeaad2b190c14d4c8da02316dae07571252d20b31f"},
{file = "pyzmq-24.0.1-cp38-cp38-win32.whl", hash = "sha256:4854f9edc5208f63f0841c0c667260ae8d6846cfa233c479e29fdc85d42ebd58"},
{file = "pyzmq-24.0.1-cp38-cp38-win_amd64.whl", hash = "sha256:42d4f97b9795a7aafa152a36fe2ad44549b83a743fd3e77011136def512e6c2a"},
{file = "pyzmq-24.0.1-cp39-cp39-macosx_10_15_universal2.whl", hash = "sha256:52afb0ac962963fff30cf1be775bc51ae083ef4c1e354266ab20e5382057dd62"},
{file = "pyzmq-24.0.1-cp39-cp39-macosx_10_9_x86_64.whl", hash = "sha256:8bad8210ad4df68c44ff3685cca3cda448ee46e20d13edcff8909eba6ec01ca4"},
{file = "pyzmq-24.0.1-cp39-cp39-manylinux_2_12_i686.manylinux2010_i686.whl", hash = "sha256:dabf1a05318d95b1537fd61d9330ef4313ea1216eea128a17615038859da3b3b"},
{file = "pyzmq-24.0.1-cp39-cp39-manylinux_2_12_x86_64.manylinux2010_x86_64.whl", hash = "sha256:5bd3d7dfd9cd058eb68d9a905dec854f86649f64d4ddf21f3ec289341386c44b"},
{file = "pyzmq-24.0.1-cp39-cp39-manylinux_2_17_aarch64.manylinux2014_aarch64.whl", hash = "sha256:e8012bce6836d3f20a6c9599f81dfa945f433dab4dbd0c4917a6fb1f998ab33d"},
{file = "pyzmq-24.0.1-cp39-cp39-musllinux_1_1_aarch64.whl", hash = "sha256:c31805d2c8ade9b11feca4674eee2b9cce1fec3e8ddb7bbdd961a09dc76a80ea"},
{file = "pyzmq-24.0.1-cp39-cp39-musllinux_1_1_i686.whl", hash = "sha256:3104f4b084ad5d9c0cb87445cc8cfd96bba710bef4a66c2674910127044df209"},
{file = "pyzmq-24.0.1-cp39-cp39-musllinux_1_1_x86_64.whl", hash = "sha256:df0841f94928f8af9c7a1f0aaaffba1fb74607af023a152f59379c01c53aee58"},
{file = "pyzmq-24.0.1-cp39-cp39-win32.whl", hash = "sha256:a435ef8a3bd95c8a2d316d6e0ff70d0db524f6037411652803e118871d703333"},
{file = "pyzmq-24.0.1-cp39-cp39-win_amd64.whl", hash = "sha256:2032d9cb994ce3b4cba2b8dfae08c7e25bc14ba484c770d4d3be33c27de8c45b"},
{file = "pyzmq-24.0.1-pp37-pypy37_pp73-macosx_10_9_x86_64.whl", hash = "sha256:bb5635c851eef3a7a54becde6da99485eecf7d068bd885ac8e6d173c4ecd68b0"},
{file = "pyzmq-24.0.1-pp37-pypy37_pp73-manylinux_2_12_i686.manylinux2010_i686.whl", hash = "sha256:83ea1a398f192957cb986d9206ce229efe0ee75e3c6635baff53ddf39bd718d5"},
{file = "pyzmq-24.0.1-pp37-pypy37_pp73-manylinux_2_12_x86_64.manylinux2010_x86_64.whl", hash = "sha256:941fab0073f0a54dc33d1a0460cb04e0d85893cb0c5e1476c785000f8b359409"},
{file = "pyzmq-24.0.1-pp37-pypy37_pp73-manylinux_2_17_aarch64.manylinux2014_aarch64.whl", hash = "sha256:0e8f482c44ccb5884bf3f638f29bea0f8dc68c97e38b2061769c4cb697f6140d"},
{file = "pyzmq-24.0.1-pp37-pypy37_pp73-win_amd64.whl", hash = "sha256:613010b5d17906c4367609e6f52e9a2595e35d5cc27d36ff3f1b6fa6e954d944"},
{file = "pyzmq-24.0.1-pp38-pypy38_pp73-macosx_10_9_x86_64.whl", hash = "sha256:65c94410b5a8355cfcf12fd600a313efee46ce96a09e911ea92cf2acf6708804"},
{file = "pyzmq-24.0.1-pp38-pypy38_pp73-manylinux_2_12_i686.manylinux2010_i686.whl", hash = "sha256:20e7eeb1166087db636c06cae04a1ef59298627f56fb17da10528ab52a14c87f"},
{file = "pyzmq-24.0.1-pp38-pypy38_pp73-manylinux_2_12_x86_64.manylinux2010_x86_64.whl", hash = "sha256:a2712aee7b3834ace51738c15d9ee152cc5a98dc7d57dd93300461b792ab7b43"},
{file = "pyzmq-24.0.1-pp38-pypy38_pp73-manylinux_2_17_aarch64.manylinux2014_aarch64.whl", hash = "sha256:1a7c280185c4da99e0cc06c63bdf91f5b0b71deb70d8717f0ab870a43e376db8"},
{file = "pyzmq-24.0.1-pp38-pypy38_pp73-win_amd64.whl", hash = "sha256:858375573c9225cc8e5b49bfac846a77b696b8d5e815711b8d4ba3141e6e8879"},
{file = "pyzmq-24.0.1-pp39-pypy39_pp73-macosx_10_9_x86_64.whl", hash = "sha256:80093b595921eed1a2cead546a683b9e2ae7f4a4592bb2ab22f70d30174f003a"},
{file = "pyzmq-24.0.1-pp39-pypy39_pp73-manylinux_2_17_aarch64.manylinux2014_aarch64.whl", hash = "sha256:8f3f3154fde2b1ff3aa7b4f9326347ebc89c8ef425ca1db8f665175e6d3bd42f"},
{file = "pyzmq-24.0.1-pp39-pypy39_pp73-manylinux_2_17_i686.manylinux2014_i686.whl", hash = "sha256:abb756147314430bee5d10919b8493c0ccb109ddb7f5dfd2fcd7441266a25b75"},
{file = "pyzmq-24.0.1-pp39-pypy39_pp73-manylinux_2_17_x86_64.manylinux2014_x86_64.whl", hash = "sha256:44e706bac34e9f50779cb8c39f10b53a4d15aebb97235643d3112ac20bd577b4"},
{file = "pyzmq-24.0.1-pp39-pypy39_pp73-win_amd64.whl", hash = "sha256:687700f8371643916a1d2c61f3fdaa630407dd205c38afff936545d7b7466066"},
{file = "pyzmq-24.0.1.tar.gz", hash = "sha256:216f5d7dbb67166759e59b0479bca82b8acf9bed6015b526b8eb10143fb08e77"},
]
qtconsole = [
{file = "qtconsole-5.4.0-py3-none-any.whl", hash = "sha256:be13560c19bdb3b54ed9741a915aa701a68d424519e8341ac479a91209e694b2"},
{file = "qtconsole-5.4.0.tar.gz", hash = "sha256:57748ea2fd26320a0b77adba20131cfbb13818c7c96d83fafcb110ff55f58b35"},
]
qtpy = [
{file = "QtPy-2.3.0-py3-none-any.whl", hash = "sha256:8d6d544fc20facd27360ea189592e6135c614785f0dec0b4f083289de6beb408"},
{file = "QtPy-2.3.0.tar.gz", hash = "sha256:0603c9c83ccc035a4717a12908bf6bc6cb22509827ea2ec0e94c2da7c9ed57c5"},
]
requests = [
{file = "requests-2.28.1-py3-none-any.whl", hash = "sha256:8fefa2a1a1365bf5520aac41836fbee479da67864514bdb821f31ce07ce65349"},
{file = "requests-2.28.1.tar.gz", hash = "sha256:7c5599b102feddaa661c826c56ab4fee28bfd17f5abca1ebbe3e7f19d7c97983"},
]
requests-oauthlib = [
{file = "requests-oauthlib-1.3.1.tar.gz", hash = "sha256:75beac4a47881eeb94d5ea5d6ad31ef88856affe2332b9aafb52c6452ccf0d7a"},
{file = "requests_oauthlib-1.3.1-py2.py3-none-any.whl", hash = "sha256:2577c501a2fb8d05a304c09d090d6e47c306fef15809d102b327cf8364bddab5"},
]
rpy2 = [
{file = "rpy2-3.5.6-cp310-cp310-macosx_10_15_x86_64.whl", hash = "sha256:7f56bb66d95aaa59f52c82bdff3bb268a5745cc3779839ca1ac9aecfc411c17a"},
{file = "rpy2-3.5.6-cp37-cp37m-macosx_10_15_x86_64.whl", hash = "sha256:defff796b43fe230e1e698a1bc353b7a4a25d4d9de856ee1bcffd6831edc825c"},
{file = "rpy2-3.5.6-cp38-cp38-macosx_10_15_x86_64.whl", hash = "sha256:a3f74cd54bd2e21a94274ae5306113e24f8a15c034b15be931188939292b49f7"},
{file = "rpy2-3.5.6-cp39-cp39-macosx_10_15_x86_64.whl", hash = "sha256:6a2e4be001b98c00f084a561cfcf9ca52f938cd8fcd8acfa0fbfc6a8be219339"},
{file = "rpy2-3.5.6.tar.gz", hash = "sha256:3404f1031d2d8ff8a1002656ab8e394b8ac16dd34ca43af68deed102f396e771"},
]
rsa = [
{file = "rsa-4.9-py3-none-any.whl", hash = "sha256:90260d9058e514786967344d0ef75fa8727eed8a7d2e43ce9f4bcf1b536174f7"},
{file = "rsa-4.9.tar.gz", hash = "sha256:e38464a49c6c85d7f1351b0126661487a7e0a14a50f1675ec50eb34d4f20ef21"},
]
s3transfer = [
{file = "s3transfer-0.6.0-py3-none-any.whl", hash = "sha256:06176b74f3a15f61f1b4f25a1fc29a4429040b7647133a463da8fa5bd28d5ecd"},
{file = "s3transfer-0.6.0.tar.gz", hash = "sha256:2ed07d3866f523cc561bf4a00fc5535827981b117dd7876f036b0c1aca42c947"},
]
scikit-learn = [
{file = "scikit-learn-1.0.2.tar.gz", hash = "sha256:b5870959a5484b614f26d31ca4c17524b1b0317522199dc985c3b4256e030767"},
{file = "scikit_learn-1.0.2-cp310-cp310-macosx_10_13_x86_64.whl", hash = "sha256:da3c84694ff693b5b3194d8752ccf935a665b8b5edc33a283122f4273ca3e687"},
{file = "scikit_learn-1.0.2-cp310-cp310-macosx_12_0_arm64.whl", hash = "sha256:75307d9ea39236cad7eea87143155eea24d48f93f3a2f9389c817f7019f00705"},
{file = "scikit_learn-1.0.2-cp310-cp310-manylinux_2_17_aarch64.manylinux2014_aarch64.whl", hash = "sha256:f14517e174bd7332f1cca2c959e704696a5e0ba246eb8763e6c24876d8710049"},
{file = "scikit_learn-1.0.2-cp310-cp310-manylinux_2_17_x86_64.manylinux2014_x86_64.whl", hash = "sha256:d9aac97e57c196206179f674f09bc6bffcd0284e2ba95b7fe0b402ac3f986023"},
{file = "scikit_learn-1.0.2-cp310-cp310-win_amd64.whl", hash = "sha256:d93d4c28370aea8a7cbf6015e8a669cd5d69f856cc2aa44e7a590fb805bb5583"},
{file = "scikit_learn-1.0.2-cp37-cp37m-macosx_10_13_x86_64.whl", hash = "sha256:85260fb430b795d806251dd3bb05e6f48cdc777ac31f2bcf2bc8bbed3270a8f5"},
{file = "scikit_learn-1.0.2-cp37-cp37m-manylinux_2_12_i686.manylinux2010_i686.whl", hash = "sha256:a053a6a527c87c5c4fa7bf1ab2556fa16d8345cf99b6c5a19030a4a7cd8fd2c0"},
{file = "scikit_learn-1.0.2-cp37-cp37m-manylinux_2_12_x86_64.manylinux2010_x86_64.whl", hash = "sha256:245c9b5a67445f6f044411e16a93a554edc1efdcce94d3fc0bc6a4b9ac30b752"},
{file = "scikit_learn-1.0.2-cp37-cp37m-manylinux_2_17_aarch64.manylinux2014_aarch64.whl", hash = "sha256:158faf30684c92a78e12da19c73feff9641a928a8024b4fa5ec11d583f3d8a87"},
{file = "scikit_learn-1.0.2-cp37-cp37m-manylinux_2_17_i686.manylinux2014_i686.whl", hash = "sha256:08ef968f6b72033c16c479c966bf37ccd49b06ea91b765e1cc27afefe723920b"},
{file = "scikit_learn-1.0.2-cp37-cp37m-manylinux_2_17_x86_64.manylinux2014_x86_64.whl", hash = "sha256:16455ace947d8d9e5391435c2977178d0ff03a261571e67f627c8fee0f9d431a"},
{file = "scikit_learn-1.0.2-cp37-cp37m-win32.whl", hash = "sha256:2f3b453e0b149898577e301d27e098dfe1a36943f7bb0ad704d1e548efc3b448"},
{file = "scikit_learn-1.0.2-cp37-cp37m-win_amd64.whl", hash = "sha256:46f431ec59dead665e1370314dbebc99ead05e1c0a9df42f22d6a0e00044820f"},
{file = "scikit_learn-1.0.2-cp38-cp38-macosx_10_13_x86_64.whl", hash = "sha256:ff3fa8ea0e09e38677762afc6e14cad77b5e125b0ea70c9bba1992f02c93b028"},
{file = "scikit_learn-1.0.2-cp38-cp38-macosx_12_0_arm64.whl", hash = "sha256:9369b030e155f8188743eb4893ac17a27f81d28a884af460870c7c072f114243"},
{file = "scikit_learn-1.0.2-cp38-cp38-manylinux_2_12_i686.manylinux2010_i686.whl", hash = "sha256:7d6b2475f1c23a698b48515217eb26b45a6598c7b1840ba23b3c5acece658dbb"},
{file = "scikit_learn-1.0.2-cp38-cp38-manylinux_2_12_x86_64.manylinux2010_x86_64.whl", hash = "sha256:285db0352e635b9e3392b0b426bc48c3b485512d3b4ac3c7a44ec2a2ba061e66"},
{file = "scikit_learn-1.0.2-cp38-cp38-manylinux_2_17_aarch64.manylinux2014_aarch64.whl", hash = "sha256:5cb33fe1dc6f73dc19e67b264dbb5dde2a0539b986435fdd78ed978c14654830"},
{file = "scikit_learn-1.0.2-cp38-cp38-manylinux_2_17_i686.manylinux2014_i686.whl", hash = "sha256:b1391d1a6e2268485a63c3073111fe3ba6ec5145fc957481cfd0652be571226d"},
{file = "scikit_learn-1.0.2-cp38-cp38-manylinux_2_17_x86_64.manylinux2014_x86_64.whl", hash = "sha256:bc3744dabc56b50bec73624aeca02e0def06b03cb287de26836e730659c5d29c"},
{file = "scikit_learn-1.0.2-cp38-cp38-win32.whl", hash = "sha256:a999c9f02ff9570c783069f1074f06fe7386ec65b84c983db5aeb8144356a355"},
{file = "scikit_learn-1.0.2-cp38-cp38-win_amd64.whl", hash = "sha256:7626a34eabbf370a638f32d1a3ad50526844ba58d63e3ab81ba91e2a7c6d037e"},
{file = "scikit_learn-1.0.2-cp39-cp39-macosx_10_13_x86_64.whl", hash = "sha256:a90b60048f9ffdd962d2ad2fb16367a87ac34d76e02550968719eb7b5716fd10"},
{file = "scikit_learn-1.0.2-cp39-cp39-macosx_12_0_arm64.whl", hash = "sha256:7a93c1292799620df90348800d5ac06f3794c1316ca247525fa31169f6d25855"},
{file = "scikit_learn-1.0.2-cp39-cp39-manylinux_2_12_i686.manylinux2010_i686.whl", hash = "sha256:eabceab574f471de0b0eb3f2ecf2eee9f10b3106570481d007ed1c84ebf6d6a1"},
{file = "scikit_learn-1.0.2-cp39-cp39-manylinux_2_12_x86_64.manylinux2010_x86_64.whl", hash = "sha256:55f2f3a8414e14fbee03782f9fe16cca0f141d639d2b1c1a36779fa069e1db57"},
{file = "scikit_learn-1.0.2-cp39-cp39-manylinux_2_17_aarch64.manylinux2014_aarch64.whl", hash = "sha256:80095a1e4b93bd33261ef03b9bc86d6db649f988ea4dbcf7110d0cded8d7213d"},
{file = "scikit_learn-1.0.2-cp39-cp39-manylinux_2_17_i686.manylinux2014_i686.whl", hash = "sha256:fa38a1b9b38ae1fad2863eff5e0d69608567453fdfc850c992e6e47eb764e846"},
{file = "scikit_learn-1.0.2-cp39-cp39-manylinux_2_17_x86_64.manylinux2014_x86_64.whl", hash = "sha256:ff746a69ff2ef25f62b36338c615dd15954ddc3ab8e73530237dd73235e76d62"},
{file = "scikit_learn-1.0.2-cp39-cp39-win32.whl", hash = "sha256:e174242caecb11e4abf169342641778f68e1bfaba80cd18acd6bc84286b9a534"},
{file = "scikit_learn-1.0.2-cp39-cp39-win_amd64.whl", hash = "sha256:b54a62c6e318ddbfa7d22c383466d38d2ee770ebdb5ddb668d56a099f6eaf75f"},
]
scipy = [
{file = "scipy-1.8.1-cp310-cp310-macosx_10_9_x86_64.whl", hash = "sha256:65b77f20202599c51eb2771d11a6b899b97989159b7975e9b5259594f1d35ef4"},
{file = "scipy-1.8.1-cp310-cp310-macosx_12_0_arm64.whl", hash = "sha256:e013aed00ed776d790be4cb32826adb72799c61e318676172495383ba4570aa4"},
{file = "scipy-1.8.1-cp310-cp310-macosx_12_0_universal2.macosx_10_9_x86_64.whl", hash = "sha256:02b567e722d62bddd4ac253dafb01ce7ed8742cf8031aea030a41414b86c1125"},
{file = "scipy-1.8.1-cp310-cp310-manylinux_2_17_aarch64.manylinux2014_aarch64.whl", hash = "sha256:1da52b45ce1a24a4a22db6c157c38b39885a990a566748fc904ec9f03ed8c6ba"},
{file = "scipy-1.8.1-cp310-cp310-manylinux_2_17_x86_64.manylinux2014_x86_64.whl", hash = "sha256:a0aa8220b89b2e3748a2836fbfa116194378910f1a6e78e4675a095bcd2c762d"},
{file = "scipy-1.8.1-cp310-cp310-win_amd64.whl", hash = "sha256:4e53a55f6a4f22de01ffe1d2f016e30adedb67a699a310cdcac312806807ca81"},
{file = "scipy-1.8.1-cp38-cp38-macosx_10_9_x86_64.whl", hash = "sha256:28d2cab0c6ac5aa131cc5071a3a1d8e1366dad82288d9ec2ca44df78fb50e649"},
{file = "scipy-1.8.1-cp38-cp38-macosx_12_0_arm64.whl", hash = "sha256:6311e3ae9cc75f77c33076cb2794fb0606f14c8f1b1c9ff8ce6005ba2c283621"},
{file = "scipy-1.8.1-cp38-cp38-macosx_12_0_universal2.macosx_10_9_x86_64.whl", hash = "sha256:3b69b90c9419884efeffaac2c38376d6ef566e6e730a231e15722b0ab58f0328"},
{file = "scipy-1.8.1-cp38-cp38-manylinux_2_12_i686.manylinux2010_i686.whl", hash = "sha256:6cc6b33139eb63f30725d5f7fa175763dc2df6a8f38ddf8df971f7c345b652dc"},
{file = "scipy-1.8.1-cp38-cp38-manylinux_2_17_aarch64.manylinux2014_aarch64.whl", hash = "sha256:9c4e3ae8a716c8b3151e16c05edb1daf4cb4d866caa385e861556aff41300c14"},
{file = "scipy-1.8.1-cp38-cp38-manylinux_2_17_x86_64.manylinux2014_x86_64.whl", hash = "sha256:23b22fbeef3807966ea42d8163322366dd89da9bebdc075da7034cee3a1441ca"},
{file = "scipy-1.8.1-cp38-cp38-win32.whl", hash = "sha256:4b93ec6f4c3c4d041b26b5f179a6aab8f5045423117ae7a45ba9710301d7e462"},
{file = "scipy-1.8.1-cp38-cp38-win_amd64.whl", hash = "sha256:70ebc84134cf0c504ce6a5f12d6db92cb2a8a53a49437a6bb4edca0bc101f11c"},
{file = "scipy-1.8.1-cp39-cp39-macosx_10_9_x86_64.whl", hash = "sha256:f3e7a8867f307e3359cc0ed2c63b61a1e33a19080f92fe377bc7d49f646f2ec1"},
{file = "scipy-1.8.1-cp39-cp39-macosx_12_0_arm64.whl", hash = "sha256:2ef0fbc8bcf102c1998c1f16f15befe7cffba90895d6e84861cd6c6a33fb54f6"},
{file = "scipy-1.8.1-cp39-cp39-macosx_12_0_universal2.macosx_10_9_x86_64.whl", hash = "sha256:83606129247e7610b58d0e1e93d2c5133959e9cf93555d3c27e536892f1ba1f2"},
{file = "scipy-1.8.1-cp39-cp39-manylinux_2_12_i686.manylinux2010_i686.whl", hash = "sha256:93d07494a8900d55492401917a119948ed330b8c3f1d700e0b904a578f10ead4"},
{file = "scipy-1.8.1-cp39-cp39-manylinux_2_17_aarch64.manylinux2014_aarch64.whl", hash = "sha256:d3b3c8924252caaffc54d4a99f1360aeec001e61267595561089f8b5900821bb"},
{file = "scipy-1.8.1-cp39-cp39-manylinux_2_17_x86_64.manylinux2014_x86_64.whl", hash = "sha256:70de2f11bf64ca9921fda018864c78af7147025e467ce9f4a11bc877266900a6"},
{file = "scipy-1.8.1-cp39-cp39-win32.whl", hash = "sha256:1166514aa3bbf04cb5941027c6e294a000bba0cf00f5cdac6c77f2dad479b434"},
{file = "scipy-1.8.1-cp39-cp39-win_amd64.whl", hash = "sha256:9dd4012ac599a1e7eb63c114d1eee1bcfc6dc75a29b589ff0ad0bb3d9412034f"},
{file = "scipy-1.8.1.tar.gz", hash = "sha256:9e3fb1b0e896f14a85aa9a28d5f755daaeeb54c897b746df7a55ccb02b340f33"},
{file = "scipy-1.9.3-cp310-cp310-macosx_10_9_x86_64.whl", hash = "sha256:1884b66a54887e21addf9c16fb588720a8309a57b2e258ae1c7986d4444d3bc0"},
{file = "scipy-1.9.3-cp310-cp310-macosx_12_0_arm64.whl", hash = "sha256:83b89e9586c62e787f5012e8475fbb12185bafb996a03257e9675cd73d3736dd"},
{file = "scipy-1.9.3-cp310-cp310-manylinux_2_17_aarch64.manylinux2014_aarch64.whl", hash = "sha256:1a72d885fa44247f92743fc20732ae55564ff2a519e8302fb7e18717c5355a8b"},
{file = "scipy-1.9.3-cp310-cp310-manylinux_2_17_x86_64.manylinux2014_x86_64.whl", hash = "sha256:d01e1dd7b15bd2449c8bfc6b7cc67d630700ed655654f0dfcf121600bad205c9"},
{file = "scipy-1.9.3-cp310-cp310-win_amd64.whl", hash = "sha256:68239b6aa6f9c593da8be1509a05cb7f9efe98b80f43a5861cd24c7557e98523"},
{file = "scipy-1.9.3-cp311-cp311-macosx_10_9_x86_64.whl", hash = "sha256:b41bc822679ad1c9a5f023bc93f6d0543129ca0f37c1ce294dd9d386f0a21096"},
{file = "scipy-1.9.3-cp311-cp311-macosx_12_0_arm64.whl", hash = "sha256:90453d2b93ea82a9f434e4e1cba043e779ff67b92f7a0e85d05d286a3625df3c"},
{file = "scipy-1.9.3-cp311-cp311-manylinux_2_17_aarch64.manylinux2014_aarch64.whl", hash = "sha256:83c06e62a390a9167da60bedd4575a14c1f58ca9dfde59830fc42e5197283dab"},
{file = "scipy-1.9.3-cp311-cp311-manylinux_2_17_x86_64.manylinux2014_x86_64.whl", hash = "sha256:abaf921531b5aeaafced90157db505e10345e45038c39e5d9b6c7922d68085cb"},
{file = "scipy-1.9.3-cp311-cp311-win_amd64.whl", hash = "sha256:06d2e1b4c491dc7d8eacea139a1b0b295f74e1a1a0f704c375028f8320d16e31"},
{file = "scipy-1.9.3-cp38-cp38-macosx_10_9_x86_64.whl", hash = "sha256:5a04cd7d0d3eff6ea4719371cbc44df31411862b9646db617c99718ff68d4840"},
{file = "scipy-1.9.3-cp38-cp38-macosx_12_0_arm64.whl", hash = "sha256:545c83ffb518094d8c9d83cce216c0c32f8c04aaf28b92cc8283eda0685162d5"},
{file = "scipy-1.9.3-cp38-cp38-manylinux_2_17_aarch64.manylinux2014_aarch64.whl", hash = "sha256:0d54222d7a3ba6022fdf5773931b5d7c56efe41ede7f7128c7b1637700409108"},
{file = "scipy-1.9.3-cp38-cp38-manylinux_2_17_x86_64.manylinux2014_x86_64.whl", hash = "sha256:cff3a5295234037e39500d35316a4c5794739433528310e117b8a9a0c76d20fc"},
{file = "scipy-1.9.3-cp38-cp38-win_amd64.whl", hash = "sha256:2318bef588acc7a574f5bfdff9c172d0b1bf2c8143d9582e05f878e580a3781e"},
{file = "scipy-1.9.3-cp39-cp39-macosx_10_9_x86_64.whl", hash = "sha256:d644a64e174c16cb4b2e41dfea6af722053e83d066da7343f333a54dae9bc31c"},
{file = "scipy-1.9.3-cp39-cp39-macosx_12_0_arm64.whl", hash = "sha256:da8245491d73ed0a994ed9c2e380fd058ce2fa8a18da204681f2fe1f57f98f95"},
{file = "scipy-1.9.3-cp39-cp39-manylinux_2_17_aarch64.manylinux2014_aarch64.whl", hash = "sha256:4db5b30849606a95dcf519763dd3ab6fe9bd91df49eba517359e450a7d80ce2e"},
{file = "scipy-1.9.3-cp39-cp39-manylinux_2_17_x86_64.manylinux2014_x86_64.whl", hash = "sha256:c68db6b290cbd4049012990d7fe71a2abd9ffbe82c0056ebe0f01df8be5436b0"},
{file = "scipy-1.9.3-cp39-cp39-win_amd64.whl", hash = "sha256:5b88e6d91ad9d59478fafe92a7c757d00c59e3bdc3331be8ada76a4f8d683f58"},
{file = "scipy-1.9.3.tar.gz", hash = "sha256:fbc5c05c85c1a02be77b1ff591087c83bc44579c6d2bd9fb798bb64ea5e1a027"},
]
seaborn = [
{file = "seaborn-0.12.1-py3-none-any.whl", hash = "sha256:a9eb39cba095fcb1e4c89a7fab1c57137d70a715a7f2eefcd41c9913c4d4ed65"},
{file = "seaborn-0.12.1.tar.gz", hash = "sha256:bb1eb1d51d3097368c187c3ef089c0288ec1fe8aa1c69fb324c68aa1d02df4c1"},
]
send2trash = [
{file = "Send2Trash-1.8.0-py3-none-any.whl", hash = "sha256:f20eaadfdb517eaca5ce077640cb261c7d2698385a6a0f072a4a5447fd49fa08"},
{file = "Send2Trash-1.8.0.tar.gz", hash = "sha256:d2c24762fd3759860a0aff155e45871447ea58d2be6bdd39b5c8f966a0c99c2d"},
]
setuptools = [
{file = "setuptools-65.6.1-py3-none-any.whl", hash = "sha256:9b1b1b4129877c74b0f77de72b64a1084a57ccb106e7252f5fb70f192b3d9055"},
{file = "setuptools-65.6.1.tar.gz", hash = "sha256:1da770a0ee69681e4d2a8196d0b30c16f25d1c8b3d3e755baaedc90f8db04963"},
]
setuptools-scm = [
{file = "setuptools_scm-7.0.5-py3-none-any.whl", hash = "sha256:7930f720905e03ccd1e1d821db521bff7ec2ac9cf0ceb6552dd73d24a45d3b02"},
{file = "setuptools_scm-7.0.5.tar.gz", hash = "sha256:031e13af771d6f892b941adb6ea04545bbf91ebc5ce68c78aaf3fff6e1fb4844"},
]
shap = [
{file = "shap-0.40.0-cp36-cp36m-macosx_10_9_x86_64.whl", hash = "sha256:8bb8b4c01bd33592412dae5246286f62efbb24ad774b63e59b8b16969b915b6d"},
{file = "shap-0.40.0-cp36-cp36m-manylinux2010_i686.whl", hash = "sha256:d2844acab55e18bcb3d691237a720301223a38805e6e43752e6717f3a8b2cc28"},
{file = "shap-0.40.0-cp36-cp36m-manylinux2010_x86_64.whl", hash = "sha256:e7dd3040b0ec91bc9f477a354973d231d3a6beebe2fa7a5c6a565a79ba7746e8"},
{file = "shap-0.40.0-cp36-cp36m-win32.whl", hash = "sha256:86ea1466244c7e0d0c5dd91d26a90e0b645f5c9d7066810462a921263463529b"},
{file = "shap-0.40.0-cp36-cp36m-win_amd64.whl", hash = "sha256:bbf0cfa30cd8c51f8830d3f25c3881b9949e062124cd0d0b3d8efdc7e0cf5136"},
{file = "shap-0.40.0-cp37-cp37m-macosx_10_9_x86_64.whl", hash = "sha256:3d3c5ace8bd5222b455fa5650f9043146e19d80d701f95b25c4c5fb81f628547"},
{file = "shap-0.40.0-cp37-cp37m-manylinux2010_i686.whl", hash = "sha256:18b4ca36a43409b784dc76810f76aaa504c467eac17fa89ef5ee330cb460b2b7"},
{file = "shap-0.40.0-cp37-cp37m-manylinux2010_x86_64.whl", hash = "sha256:dbb1ec9b2c05c3939425529437c5f3cfba7a3929fed0e820fb84a42e82358cdd"},
{file = "shap-0.40.0-cp37-cp37m-win32.whl", hash = "sha256:0d12f7d86481afd000d5f144c10cadb31d52fb1f77f68659472d6f6d89f7843b"},
{file = "shap-0.40.0-cp37-cp37m-win_amd64.whl", hash = "sha256:dbd07e48fc7f4d5916f6cdd9dbb8d29b7711a265cc9beac92e7d4a4d9e738bc7"},
{file = "shap-0.40.0-cp38-cp38-macosx_10_9_x86_64.whl", hash = "sha256:399325caecc7306eb7de17ac19aa797abbf2fcda47d2bb4588d9492adb2dce65"},
{file = "shap-0.40.0-cp38-cp38-manylinux2010_i686.whl", hash = "sha256:4ec50bd0aa24efe1add177371b8b62080484efb87c6dbcf321895c5a08cf68d6"},
{file = "shap-0.40.0-cp38-cp38-manylinux2010_x86_64.whl", hash = "sha256:e2b5f2d3cac82de0c49afde6529bebb6d5b20334325640267bf25dce572175a1"},
{file = "shap-0.40.0-cp38-cp38-win32.whl", hash = "sha256:ba06256568747aaab9ad0091306550bfe826c1f195bf2cf57b405ae1de16faed"},
{file = "shap-0.40.0-cp38-cp38-win_amd64.whl", hash = "sha256:fb1b325a55fdf58061d332ed3308d44162084d4cb5f53f2c7774ce943d60b0ad"},
{file = "shap-0.40.0-cp39-cp39-macosx_10_9_x86_64.whl", hash = "sha256:f282fa12ca6fc594bcadca389309d733f73fe071e29ab49cb6e51beaa8b01a1a"},
{file = "shap-0.40.0-cp39-cp39-manylinux2010_i686.whl", hash = "sha256:2e72a47407f010f845b3ed6cb4f5160f0907ec8ab97df2bca164ebcb263b4205"},
{file = "shap-0.40.0-cp39-cp39-manylinux2010_x86_64.whl", hash = "sha256:649c905f9a4629839142e1769235989fb61730eb789a70d27ec7593eb02186a7"},
{file = "shap-0.40.0-cp39-cp39-win32.whl", hash = "sha256:5c220632ba57426d450dcc8ca43c55f657fe18e18f5d223d2a4e2aa02d905047"},
{file = "shap-0.40.0-cp39-cp39-win_amd64.whl", hash = "sha256:46e7084ce021eea450306bf7434adaead53921fd32504f04d1804569839e2979"},
{file = "shap-0.40.0.tar.gz", hash = "sha256:add0a27bb4eb57f0a363c2c4265b1a1328a8c15b01c14c7d432d9cc387dd8579"},
]
six = [
{file = "six-1.16.0-py2.py3-none-any.whl", hash = "sha256:8abb2f1d86890a2dfb989f9a77cfcfd3e47c2a354b01111771326f8aa26e0254"},
{file = "six-1.16.0.tar.gz", hash = "sha256:1e61c37477a1626458e36f7b1d82aa5c9b094fa4802892072e49de9c60c4c926"},
]
slicer = [
{file = "slicer-0.0.7-py3-none-any.whl", hash = "sha256:0b94faa5251c0f23782c03f7b7eedda91d80144059645f452c4bc80fab875976"},
{file = "slicer-0.0.7.tar.gz", hash = "sha256:f5d5f7b45f98d155b9c0ba6554fa9770c6b26d5793a3e77a1030fb56910ebeec"},
]
smart-open = [
{file = "smart_open-5.2.1-py3-none-any.whl", hash = "sha256:71d14489da58b60ce12fc3ecb823facc59a8b23cd1b58edb97175640350d3a62"},
{file = "smart_open-5.2.1.tar.gz", hash = "sha256:75abf758717a92a8f53aa96953f0c245c8cedf8e1e4184903db3659b419d4c17"},
]
sniffio = [
{file = "sniffio-1.3.0-py3-none-any.whl", hash = "sha256:eecefdce1e5bbfb7ad2eeaabf7c1eeb404d7757c379bd1f7e5cce9d8bf425384"},
{file = "sniffio-1.3.0.tar.gz", hash = "sha256:e60305c5e5d314f5389259b7f22aaa33d8f7dee49763119234af3755c55b9101"},
]
snowballstemmer = [
{file = "snowballstemmer-2.2.0-py2.py3-none-any.whl", hash = "sha256:c8e1716e83cc398ae16824e5572ae04e0d9fc2c6b985fb0f900f5f0c96ecba1a"},
{file = "snowballstemmer-2.2.0.tar.gz", hash = "sha256:09b16deb8547d3412ad7b590689584cd0fe25ec8db3be37788be3810cbf19cb1"},
]
sortedcontainers = [
{file = "sortedcontainers-2.4.0-py2.py3-none-any.whl", hash = "sha256:a163dcaede0f1c021485e957a39245190e74249897e2ae4b2aa38595db237ee0"},
{file = "sortedcontainers-2.4.0.tar.gz", hash = "sha256:25caa5a06cc30b6b83d11423433f65d1f9d76c4c6a0c90e3379eaa43b9bfdb88"},
]
soupsieve = [
{file = "soupsieve-2.3.2.post1-py3-none-any.whl", hash = "sha256:3b2503d3c7084a42b1ebd08116e5f81aadfaea95863628c80a3b774a11b7c759"},
{file = "soupsieve-2.3.2.post1.tar.gz", hash = "sha256:fc53893b3da2c33de295667a0e19f078c14bf86544af307354de5fcf12a3f30d"},
]
spacy = [
{file = "spacy-3.4.3-cp310-cp310-macosx_10_9_x86_64.whl", hash = "sha256:e546b314f619502ae03e5eb9a0cfd09ca7a9db265bcdd8a3af83cfb0f1432e55"},
{file = "spacy-3.4.3-cp310-cp310-macosx_11_0_arm64.whl", hash = "sha256:ded11aa8966236aab145b4d2d024b3eb61ac50078362d77d9ed7d8c240ef0f4a"},
{file = "spacy-3.4.3-cp310-cp310-manylinux_2_17_aarch64.manylinux2014_aarch64.whl", hash = "sha256:462e141f514d78cff85685b5b12eb8cadac0bad2f7820149cbe18d03ccb2e59c"},
{file = "spacy-3.4.3-cp310-cp310-manylinux_2_17_x86_64.manylinux2014_x86_64.whl", hash = "sha256:c966d25b3f3e49f5de08546b3638928f49678c365cbbebd0eec28f74e0adb539"},
{file = "spacy-3.4.3-cp310-cp310-win_amd64.whl", hash = "sha256:2ddba486c4c981abe6f1e3fd72648dc8811966e5f0e05808f9c9fab155c388d7"},
{file = "spacy-3.4.3-cp311-cp311-macosx_10_9_x86_64.whl", hash = "sha256:3c87117dd335fba44d1c0d77602f0763c3addf4e7ef9bdbe9a495466c3484c69"},
{file = "spacy-3.4.3-cp311-cp311-macosx_11_0_arm64.whl", hash = "sha256:3ce3938720f48eaeeb360a7f623f15a0d9efd1a688d5d740e3d4cdcd6f6da8a3"},
{file = "spacy-3.4.3-cp311-cp311-manylinux_2_17_aarch64.manylinux2014_aarch64.whl", hash = "sha256:6ad6bf5e4e7f0bc2ef94b7ff6fe59abd766f74c192bca2f17430a3b3cd5bda5a"},
{file = "spacy-3.4.3-cp311-cp311-manylinux_2_17_x86_64.manylinux2014_x86_64.whl", hash = "sha256:f6644c678bd7af567c6ce679f71d64119282e7d6f1a6f787162a91be3ea39333"},
{file = "spacy-3.4.3-cp311-cp311-win_amd64.whl", hash = "sha256:e6b871de8857a6820140358db3943180fdbe03d44ed792155cee6cb95f4ac4ea"},
{file = "spacy-3.4.3-cp36-cp36m-manylinux_2_17_aarch64.manylinux2014_aarch64.whl", hash = "sha256:d211c2b8894354bf8d961af9a9dcab38f764e1dcddd7b80760e438fcd4c9fe43"},
{file = "spacy-3.4.3-cp36-cp36m-manylinux_2_17_x86_64.manylinux2014_x86_64.whl", hash = "sha256:2ea41f9de30435456235c4182d8bc2eb54a0a64719856e66e780350bb4c8cfbe"},
{file = "spacy-3.4.3-cp36-cp36m-win_amd64.whl", hash = "sha256:afaf6e716cbac4a0fbfa9e9bf95decff223936597ddd03ea869118a7576aa1b1"},
{file = "spacy-3.4.3-cp37-cp37m-macosx_10_9_x86_64.whl", hash = "sha256:7115da36369b3c537caf2fe08e0b45528bd091c7f56ba3580af1e6fdfa9b1081"},
{file = "spacy-3.4.3-cp37-cp37m-manylinux_2_17_aarch64.manylinux2014_aarch64.whl", hash = "sha256:3b3e629c889cac9656151286ec1232c6a948ce0d44a39f1ef5e60fed4f183a10"},
{file = "spacy-3.4.3-cp37-cp37m-manylinux_2_17_x86_64.manylinux2014_x86_64.whl", hash = "sha256:9277cd0fcb96ee5dd885f7e96c639f21afd96198d61ca32100446afbff4dfbef"},
{file = "spacy-3.4.3-cp37-cp37m-win_amd64.whl", hash = "sha256:a36bd06a5a147350e5f5f6903c4777296c37b18199251bb41056c3a73aa4494f"},
{file = "spacy-3.4.3-cp38-cp38-macosx_10_9_x86_64.whl", hash = "sha256:bdafcd0823ca804c39d0bed9e677eb7d0235b1259563d0fd4d3a201c71108af8"},
{file = "spacy-3.4.3-cp38-cp38-macosx_11_0_arm64.whl", hash = "sha256:0cdc23a48e6543402b4c56ebf2d36246001175c29fd56d3081efcec684651abc"},
{file = "spacy-3.4.3-cp38-cp38-manylinux_2_17_aarch64.manylinux2014_aarch64.whl", hash = "sha256:455c2fbd1de24b6fe34fa121d87525134d7498f9f458ebc8274d7940b473999e"},
{file = "spacy-3.4.3-cp38-cp38-manylinux_2_17_x86_64.manylinux2014_x86_64.whl", hash = "sha256:d1c85279fbb6b75d7fb8d7c59c2b734502e51271cad90926e8df1d21b67da5aa"},
{file = "spacy-3.4.3-cp38-cp38-win_amd64.whl", hash = "sha256:5c0d65f39184f522b4e67b965a42d121a3b2d799362682fe8847b64b0ce5bc7c"},
{file = "spacy-3.4.3-cp39-cp39-macosx_10_9_x86_64.whl", hash = "sha256:a7b97ec21ed773edb2479ae5d6c7686b8034f418df6bccd9218f5c3c2b7cf888"},
{file = "spacy-3.4.3-cp39-cp39-macosx_11_0_arm64.whl", hash = "sha256:36a9a506029842795099fd97ad95f0da2845c319020fcc7164cbf33650726f83"},
{file = "spacy-3.4.3-cp39-cp39-manylinux_2_17_aarch64.manylinux2014_aarch64.whl", hash = "sha256:5ab293eb1423fa05c7ee71b2fedda57c2b4a4ca8dc054ce678809457287b01dc"},
{file = "spacy-3.4.3-cp39-cp39-manylinux_2_17_x86_64.manylinux2014_x86_64.whl", hash = "sha256:bb6d0f185126decc8392cde7d28eb6e85ba4bca15424713288cccc49c2a3c52b"},
{file = "spacy-3.4.3-cp39-cp39-win_amd64.whl", hash = "sha256:676ab9ab2cf94ba48caa306f185a166e85bd35b388ec24512c8ba7dfcbc7517e"},
{file = "spacy-3.4.3.tar.gz", hash = "sha256:22698cf5175e2b697e82699fcccee3092b42137a57d352df208d71657fd693bb"},
]
spacy-legacy = [
{file = "spacy-legacy-3.0.10.tar.gz", hash = "sha256:16104595d8ab1b7267f817a449ad1f986eb1f2a2edf1050748f08739a479679a"},
{file = "spacy_legacy-3.0.10-py2.py3-none-any.whl", hash = "sha256:8526a54d178dee9b7f218d43e5c21362c59056c5da23380b319b56043e9211f3"},
]
spacy-loggers = [
{file = "spacy-loggers-1.0.3.tar.gz", hash = "sha256:00f6fd554db9fd1fde6501b23e1f0e72f6eef14bb1e7fc15456d11d1d2de92ca"},
{file = "spacy_loggers-1.0.3-py3-none-any.whl", hash = "sha256:f74386b390a023f9615dcb499b7b4ad63338236a8187f0ec4dfe265a9f665ee8"},
]
sparse = [
{file = "sparse-0.13.0-py2.py3-none-any.whl", hash = "sha256:95ed0b649a0663b1488756ad4cf242b0a9bb2c9a25bc752a7c6ca9fbe8258966"},
{file = "sparse-0.13.0.tar.gz", hash = "sha256:685dc994aa770ee1b23f2d5392819c8429f27958771f8dceb2c4fb80210d5915"},
]
sphinx = [
{file = "Sphinx-5.3.0.tar.gz", hash = "sha256:51026de0a9ff9fc13c05d74913ad66047e104f56a129ff73e174eb5c3ee794b5"},
{file = "sphinx-5.3.0-py3-none-any.whl", hash = "sha256:060ca5c9f7ba57a08a1219e547b269fadf125ae25b06b9fa7f66768efb652d6d"},
]
sphinx-copybutton = [
{file = "sphinx-copybutton-0.5.0.tar.gz", hash = "sha256:a0c059daadd03c27ba750da534a92a63e7a36a7736dcf684f26ee346199787f6"},
{file = "sphinx_copybutton-0.5.0-py3-none-any.whl", hash = "sha256:9684dec7434bd73f0eea58dda93f9bb879d24bff2d8b187b1f2ec08dfe7b5f48"},
]
sphinx-design = [
{file = "sphinx_design-0.3.0-py3-none-any.whl", hash = "sha256:823c1dd74f31efb3285ec2f1254caefed29d762a40cd676f58413a1e4ed5cc96"},
{file = "sphinx_design-0.3.0.tar.gz", hash = "sha256:7183fa1fae55b37ef01bda5125a21ee841f5bbcbf59a35382be598180c4cefba"},
]
sphinx-rtd-theme = [
{file = "sphinx_rtd_theme-1.1.1-py2.py3-none-any.whl", hash = "sha256:31faa07d3e97c8955637fc3f1423a5ab2c44b74b8cc558a51498c202ce5cbda7"},
{file = "sphinx_rtd_theme-1.1.1.tar.gz", hash = "sha256:6146c845f1e1947b3c3dd4432c28998a1693ccc742b4f9ad7c63129f0757c103"},
]
sphinxcontrib-applehelp = [
{file = "sphinxcontrib-applehelp-1.0.2.tar.gz", hash = "sha256:a072735ec80e7675e3f432fcae8610ecf509c5f1869d17e2eecff44389cdbc58"},
{file = "sphinxcontrib_applehelp-1.0.2-py2.py3-none-any.whl", hash = "sha256:806111e5e962be97c29ec4c1e7fe277bfd19e9652fb1a4392105b43e01af885a"},
]
sphinxcontrib-devhelp = [
{file = "sphinxcontrib-devhelp-1.0.2.tar.gz", hash = "sha256:ff7f1afa7b9642e7060379360a67e9c41e8f3121f2ce9164266f61b9f4b338e4"},
{file = "sphinxcontrib_devhelp-1.0.2-py2.py3-none-any.whl", hash = "sha256:8165223f9a335cc1af7ffe1ed31d2871f325254c0423bc0c4c7cd1c1e4734a2e"},
]
sphinxcontrib-googleanalytics = []
sphinxcontrib-htmlhelp = [
{file = "sphinxcontrib-htmlhelp-2.0.0.tar.gz", hash = "sha256:f5f8bb2d0d629f398bf47d0d69c07bc13b65f75a81ad9e2f71a63d4b7a2f6db2"},
{file = "sphinxcontrib_htmlhelp-2.0.0-py2.py3-none-any.whl", hash = "sha256:d412243dfb797ae3ec2b59eca0e52dac12e75a241bf0e4eb861e450d06c6ed07"},
]
sphinxcontrib-jsmath = [
{file = "sphinxcontrib-jsmath-1.0.1.tar.gz", hash = "sha256:a9925e4a4587247ed2191a22df5f6970656cb8ca2bd6284309578f2153e0c4b8"},
{file = "sphinxcontrib_jsmath-1.0.1-py2.py3-none-any.whl", hash = "sha256:2ec2eaebfb78f3f2078e73666b1415417a116cc848b72e5172e596c871103178"},
]
sphinxcontrib-qthelp = [
{file = "sphinxcontrib-qthelp-1.0.3.tar.gz", hash = "sha256:4c33767ee058b70dba89a6fc5c1892c0d57a54be67ddd3e7875a18d14cba5a72"},
{file = "sphinxcontrib_qthelp-1.0.3-py2.py3-none-any.whl", hash = "sha256:bd9fc24bcb748a8d51fd4ecaade681350aa63009a347a8c14e637895444dfab6"},
]
sphinxcontrib-serializinghtml = [
{file = "sphinxcontrib-serializinghtml-1.1.5.tar.gz", hash = "sha256:aa5f6de5dfdf809ef505c4895e51ef5c9eac17d0f287933eb49ec495280b6952"},
{file = "sphinxcontrib_serializinghtml-1.1.5-py2.py3-none-any.whl", hash = "sha256:352a9a00ae864471d3a7ead8d7d79f5fc0b57e8b3f95e9867eb9eb28999b92fd"},
]
srsly = [
{file = "srsly-2.4.5-cp310-cp310-macosx_10_9_x86_64.whl", hash = "sha256:8fed31ef8acbb5fead2152824ef39e12d749fcd254968689ba5991dd257b63b4"},
{file = "srsly-2.4.5-cp310-cp310-macosx_11_0_arm64.whl", hash = "sha256:04d0b4cd91e098cdac12d2c28e256b1181ba98bcd00e460b8e42dee3e8542804"},
{file = "srsly-2.4.5-cp310-cp310-manylinux_2_17_aarch64.manylinux2014_aarch64.whl", hash = "sha256:6d83bea1f774b54d9313a374a95f11a776d37bcedcda93c526bf7f1cb5f26428"},
{file = "srsly-2.4.5-cp310-cp310-manylinux_2_17_x86_64.manylinux2014_x86_64.whl", hash = "sha256:cae5d48a0bda55a3728f49976ea0b652f508dbc5ac3e849f41b64a5753ec7f0a"},
{file = "srsly-2.4.5-cp310-cp310-win_amd64.whl", hash = "sha256:f74c64934423bcc2d3508cf3a079c7034e5cde988255dc57c7a09794c78f0610"},
{file = "srsly-2.4.5-cp311-cp311-macosx_10_9_x86_64.whl", hash = "sha256:0f9abb7857f9363f1ac52123db94dfe1c4af8959a39d698eff791d17e45e00b6"},
{file = "srsly-2.4.5-cp311-cp311-macosx_11_0_arm64.whl", hash = "sha256:f48d40c3b3d20e38410e7a95fa5b4050c035f467b0793aaf67188b1edad37fe3"},
{file = "srsly-2.4.5-cp311-cp311-manylinux_2_17_aarch64.manylinux2014_aarch64.whl", hash = "sha256:1434759effec2ee266a24acd9b53793a81cac01fc1e6321c623195eda1b9c7df"},
{file = "srsly-2.4.5-cp311-cp311-manylinux_2_17_x86_64.manylinux2014_x86_64.whl", hash = "sha256:5e7b0cd9853b0d9e00ad23d26199c1e44d8fd74096cbbbabc92447a915bcfd78"},
{file = "srsly-2.4.5-cp311-cp311-win_amd64.whl", hash = "sha256:874010587a807264963de9a1c91668c43cee9ed2f683f5406bdf5a34dfe12cca"},
{file = "srsly-2.4.5-cp36-cp36m-manylinux_2_17_aarch64.manylinux2014_aarch64.whl", hash = "sha256:afa4e1fe143275339d1c4a74e46d4c75168eed8b200f44f2ea023d45ff089a2f"},
{file = "srsly-2.4.5-cp36-cp36m-manylinux_2_17_x86_64.manylinux2014_x86_64.whl", hash = "sha256:1c4291ee125796fb05e778e9ca8f9a829e8c314b757826f2e1d533e424a93531"},
{file = "srsly-2.4.5-cp36-cp36m-win_amd64.whl", hash = "sha256:8f258ee69aefb053258ac2e4f4b9d597e622b79f78874534430e864cef0be199"},
{file = "srsly-2.4.5-cp37-cp37m-macosx_10_9_x86_64.whl", hash = "sha256:ace951c3088204bd66f30326f93ab6e615ce1562a461a8a464759d99fa9c2a02"},
{file = "srsly-2.4.5-cp37-cp37m-manylinux_2_17_aarch64.manylinux2014_aarch64.whl", hash = "sha256:facab907801fbcb0e54b3532e04bc6a0709184d68004ef3a129e8c7e3ca63d82"},
{file = "srsly-2.4.5-cp37-cp37m-manylinux_2_17_x86_64.manylinux2014_x86_64.whl", hash = "sha256:a49c089541a9a0a27ccb841a596350b7ee1d6adfc7ebd28eddedfd34dc9f12c5"},
{file = "srsly-2.4.5-cp37-cp37m-win_amd64.whl", hash = "sha256:db6bc02bd1e3372a3636e47b22098107c9df2cf12d220321b51c586ba17904b3"},
{file = "srsly-2.4.5-cp38-cp38-macosx_10_9_x86_64.whl", hash = "sha256:9a95c682de8c6e6145199f10a7c597647ff7d398fb28874f845ba7d34a86a033"},
{file = "srsly-2.4.5-cp38-cp38-macosx_11_0_arm64.whl", hash = "sha256:8c26c5c0e07ea7bb7b8b8735e1b2261fea308c2c883b99211d11747162c6d897"},
{file = "srsly-2.4.5-cp38-cp38-manylinux_2_17_aarch64.manylinux2014_aarch64.whl", hash = "sha256:e0043eff95be45acb5ce09cebb80ebdb9f2b6856aa3a15979e6fe3cc9a486753"},
{file = "srsly-2.4.5-cp38-cp38-manylinux_2_17_x86_64.manylinux2014_x86_64.whl", hash = "sha256:a2075124d4872e754af966e76f3258cd526eeac84f0995ee8cd561fd4cf1b68e"},
{file = "srsly-2.4.5-cp38-cp38-win_amd64.whl", hash = "sha256:1a41e5b10902c885cabe326ba86d549d7011e38534c45bed158ecb8abd4b44ce"},
{file = "srsly-2.4.5-cp39-cp39-macosx_10_9_x86_64.whl", hash = "sha256:b5a96f0ae15b651fa3fd87421bd93e61c6dc46c0831cbe275c9b790d253126b5"},
{file = "srsly-2.4.5-cp39-cp39-macosx_11_0_arm64.whl", hash = "sha256:764906e9f4c2ac5f748c49d95c8bf79648404ebc548864f9cb1fa0707942d830"},
{file = "srsly-2.4.5-cp39-cp39-manylinux_2_17_aarch64.manylinux2014_aarch64.whl", hash = "sha256:95afe9625badaf5ce326e37b21362423d7e8578a5ec9c85b15c3fca93205a883"},
{file = "srsly-2.4.5-cp39-cp39-manylinux_2_17_x86_64.manylinux2014_x86_64.whl", hash = "sha256:90359cc3c5601afd45ec12c52bde1cf1ccbe0dc7d4244fd1f8d0c9e100c71707"},
{file = "srsly-2.4.5-cp39-cp39-win_amd64.whl", hash = "sha256:2d3b0d32be2267fb489da172d71399ac59f763189b47dbe68eedb0817afaa6dc"},
{file = "srsly-2.4.5.tar.gz", hash = "sha256:c842258967baa527cea9367986e42b8143a1a890e7d4a18d25a36edc3c7a33c7"},
]
stack-data = [
{file = "stack_data-0.6.1-py3-none-any.whl", hash = "sha256:960cb054d6a1b2fdd9cbd529e365b3c163e8dabf1272e02cfe36b58403cff5c6"},
{file = "stack_data-0.6.1.tar.gz", hash = "sha256:6c9a10eb5f342415fe085db551d673955611afb821551f554d91772415464315"},
]
statsmodels = [
{file = "statsmodels-0.13.5-cp310-cp310-macosx_10_9_x86_64.whl", hash = "sha256:c75319fddded9507cc310fc3980e4ae4d64e3ff37b322ad5e203a84f89d85203"},
{file = "statsmodels-0.13.5-cp310-cp310-macosx_11_0_arm64.whl", hash = "sha256:6f148920ef27c7ba69a5735724f65de9422c0c8bcef71b50c846b823ceab8840"},
{file = "statsmodels-0.13.5-cp310-cp310-manylinux_2_17_aarch64.manylinux2014_aarch64.whl", hash = "sha256:5cc4d3e866bfe0c4f804bca362d0e7e29d24b840aaba8d35a754387e16d2a119"},
{file = "statsmodels-0.13.5-cp310-cp310-manylinux_2_17_x86_64.manylinux2014_x86_64.whl", hash = "sha256:072950d6f7820a6b0bd6a27b2d792a6d6f952a1d2f62f0dcf8dd808799475855"},
{file = "statsmodels-0.13.5-cp310-cp310-win_amd64.whl", hash = "sha256:159ae9962c61b31dcffe6356d72ae3d074bc597ad9273ec93ae653fe607b8516"},
{file = "statsmodels-0.13.5-cp311-cp311-macosx_10_9_x86_64.whl", hash = "sha256:9061c0d5ee4f3038b590afedd527a925e5de27195dc342381bac7675b2c5efe4"},
{file = "statsmodels-0.13.5-cp311-cp311-macosx_11_0_arm64.whl", hash = "sha256:e1d89cba5fafc1bf8e75296fdfad0b619de2bfb5e6c132913991d207f3ead675"},
{file = "statsmodels-0.13.5-cp311-cp311-manylinux_2_17_aarch64.manylinux2014_aarch64.whl", hash = "sha256:01bc16e7c66acb30cd3dda6004c43212c758223d1966131226024a5c99ec5a7e"},
{file = "statsmodels-0.13.5-cp311-cp311-manylinux_2_17_x86_64.manylinux2014_x86_64.whl", hash = "sha256:5d5cd9ab5de2c7489b890213cba2aec3d6468eaaec547041c2dfcb1e03411f7e"},
{file = "statsmodels-0.13.5-cp311-cp311-win_amd64.whl", hash = "sha256:857d5c0564a68a7ef77dc2252bb43c994c0699919b4e1f06a9852c2fbb588765"},
{file = "statsmodels-0.13.5-cp37-cp37m-macosx_10_9_x86_64.whl", hash = "sha256:5a5348b2757ab31c5c31b498f25eff2ea3c42086bef3d3b88847c25a30bdab9c"},
{file = "statsmodels-0.13.5-cp37-cp37m-manylinux_2_17_aarch64.manylinux2014_aarch64.whl", hash = "sha256:9b21648e3a8e7514839ba000a48e495cdd8bb55f1b71c608cf314b05541e283b"},
{file = "statsmodels-0.13.5-cp37-cp37m-manylinux_2_17_x86_64.manylinux2014_x86_64.whl", hash = "sha256:1b829eada6cec07990f5e6820a152af4871c601fd458f76a896fb79ae2114985"},
{file = "statsmodels-0.13.5-cp37-cp37m-win_amd64.whl", hash = "sha256:872b3a8186ef20f647c7ab5ace512a8fc050148f3c2f366460ab359eec3d9695"},
{file = "statsmodels-0.13.5-cp38-cp38-macosx_10_9_x86_64.whl", hash = "sha256:bc1abb81d24f56425febd5a22bb852a1b98e53b80c4a67f50938f9512f154141"},
{file = "statsmodels-0.13.5-cp38-cp38-macosx_11_0_arm64.whl", hash = "sha256:a2c46f1b0811a9736db37badeb102c0903f33bec80145ced3aa54df61aee5c2b"},
{file = "statsmodels-0.13.5-cp38-cp38-manylinux_2_17_aarch64.manylinux2014_aarch64.whl", hash = "sha256:947f79ba9662359f1cfa6e943851f17f72b06e55f4a7c7a2928ed3bc57ed6cb8"},
{file = "statsmodels-0.13.5-cp38-cp38-manylinux_2_17_x86_64.manylinux2014_x86_64.whl", hash = "sha256:046251c939c51e7632bcc8c6d6f31b8ca0eaffdf726d2498463f8de3735c9a82"},
{file = "statsmodels-0.13.5-cp38-cp38-win_amd64.whl", hash = "sha256:84f720e8d611ef8f297e6d2ffa7248764e223ef7221a3fc136e47ae089609611"},
{file = "statsmodels-0.13.5-cp39-cp39-macosx_10_9_x86_64.whl", hash = "sha256:b0d1d24e4adf96ec3c64d9a027dcee2c5d5096bb0dad33b4d91034c0a3c40371"},
{file = "statsmodels-0.13.5-cp39-cp39-macosx_11_0_arm64.whl", hash = "sha256:0f0e5c9c58fb6cba41db01504ec8dd018c96a95152266b7d5d67e0de98840474"},
{file = "statsmodels-0.13.5-cp39-cp39-manylinux_2_17_aarch64.manylinux2014_aarch64.whl", hash = "sha256:5b034aa4b9ad4f4d21abc4dd4841be0809a446db14c7aa5c8a65090aea9f1143"},
{file = "statsmodels-0.13.5-cp39-cp39-manylinux_2_17_x86_64.manylinux2014_x86_64.whl", hash = "sha256:73f97565c29241e839ffcef74fa995afdfe781910ccc27c189e5890193085958"},
{file = "statsmodels-0.13.5-cp39-cp39-win_amd64.whl", hash = "sha256:2ff331e508f2d1a53d3a188305477f4cf05cd8c52beb6483885eb3d51c8be3ad"},
{file = "statsmodels-0.13.5.tar.gz", hash = "sha256:593526acae1c0fda0ea6c48439f67c3943094c542fe769f8b90fe9e6c6cc4871"},
]
sympy = [
{file = "sympy-1.11.1-py3-none-any.whl", hash = "sha256:938f984ee2b1e8eae8a07b884c8b7a1146010040fccddc6539c54f401c8f6fcf"},
{file = "sympy-1.11.1.tar.gz", hash = "sha256:e32380dce63cb7c0108ed525570092fd45168bdae2faa17e528221ef72e88658"},
]
tblib = [
{file = "tblib-1.7.0-py2.py3-none-any.whl", hash = "sha256:289fa7359e580950e7d9743eab36b0691f0310fce64dee7d9c31065b8f723e23"},
{file = "tblib-1.7.0.tar.gz", hash = "sha256:059bd77306ea7b419d4f76016aef6d7027cc8a0785579b5aad198803435f882c"},
]
tenacity = [
{file = "tenacity-8.1.0-py3-none-any.whl", hash = "sha256:35525cd47f82830069f0d6b73f7eb83bc5b73ee2fff0437952cedf98b27653ac"},
{file = "tenacity-8.1.0.tar.gz", hash = "sha256:e48c437fdf9340f5666b92cd7990e96bc5fc955e1298baf4a907e3972067a445"},
]
tensorboard = [
{file = "tensorboard-2.11.0-py3-none-any.whl", hash = "sha256:a0e592ee87962e17af3f0dce7faae3fbbd239030159e9e625cce810b7e35c53d"},
]
tensorboard-data-server = [
{file = "tensorboard_data_server-0.6.1-py3-none-any.whl", hash = "sha256:809fe9887682d35c1f7d1f54f0f40f98bb1f771b14265b453ca051e2ce58fca7"},
{file = "tensorboard_data_server-0.6.1-py3-none-macosx_10_9_x86_64.whl", hash = "sha256:fa8cef9be4fcae2f2363c88176638baf2da19c5ec90addb49b1cde05c95c88ee"},
{file = "tensorboard_data_server-0.6.1-py3-none-manylinux2010_x86_64.whl", hash = "sha256:d8237580755e58eff68d1f3abefb5b1e39ae5c8b127cc40920f9c4fb33f4b98a"},
]
tensorboard-plugin-wit = [
{file = "tensorboard_plugin_wit-1.8.1-py3-none-any.whl", hash = "sha256:ff26bdd583d155aa951ee3b152b3d0cffae8005dc697f72b44a8e8c2a77a8cbe"},
]
tensorflow = [
{file = "tensorflow-2.11.0-cp310-cp310-macosx_10_14_x86_64.whl", hash = "sha256:6c049fec6c2040685d6f43a63e17ccc5d6b0abc16b70cc6f5e7d691262b5d2d0"},
{file = "tensorflow-2.11.0-cp310-cp310-manylinux_2_17_aarch64.manylinux2014_aarch64.whl", hash = "sha256:bcc8380820cea8f68f6c90b8aee5432e8537e5bb9ec79ac61a98e6a9a02c7d40"},
{file = "tensorflow-2.11.0-cp310-cp310-manylinux_2_17_x86_64.manylinux2014_x86_64.whl", hash = "sha256:d973458241c8771bf95d4ba68ad5d67b094f72dd181c2d562ffab538c1b0dad7"},
{file = "tensorflow-2.11.0-cp310-cp310-win_amd64.whl", hash = "sha256:d470b772ee3c291a8c7be2331e7c379e0c338223c0bf532f5906d4556f17580d"},
{file = "tensorflow-2.11.0-cp37-cp37m-macosx_10_14_x86_64.whl", hash = "sha256:d29c1179149fa469ad68234c52c83081d037ead243f90e826074e2563a0f938a"},
{file = "tensorflow-2.11.0-cp37-cp37m-manylinux_2_17_aarch64.manylinux2014_aarch64.whl", hash = "sha256:2cdba2fce00d6c924470d4fb65d5e95a4b6571a863860608c0c13f0393f4ca0d"},
{file = "tensorflow-2.11.0-cp37-cp37m-manylinux_2_17_x86_64.manylinux2014_x86_64.whl", hash = "sha256:4f2ab20f93d2b52a44b414ec6dcf82aa12110e90e0920039a27108de28ae2728"},
{file = "tensorflow-2.11.0-cp37-cp37m-win_amd64.whl", hash = "sha256:445510f092f7827e1f60f59b8bfb58e664aaf05d07daaa21c5735a7f76ca2b25"},
{file = "tensorflow-2.11.0-cp38-cp38-macosx_10_14_x86_64.whl", hash = "sha256:056d29f2212342536ce3856aa47910a2515eb97ec0a6cc29ed47fc4be1369ec8"},
{file = "tensorflow-2.11.0-cp38-cp38-manylinux_2_17_aarch64.manylinux2014_aarch64.whl", hash = "sha256:17b29d6d360fad545ab1127db52592efd3f19ac55c1a45e5014da328ae867ab4"},
{file = "tensorflow-2.11.0-cp38-cp38-manylinux_2_17_x86_64.manylinux2014_x86_64.whl", hash = "sha256:335ab5cccd7a1c46e3d89d9d46913f0715e8032df8d7438f9743b3fb97b39f69"},
{file = "tensorflow-2.11.0-cp38-cp38-win_amd64.whl", hash = "sha256:d48da37c8ae711eb38047a56a052ca8bb4ee018a91a479e42b7a8d117628c32e"},
{file = "tensorflow-2.11.0-cp39-cp39-macosx_10_14_x86_64.whl", hash = "sha256:d9cf25bca641f2e5c77caa3bfd8dd6b892a7aec0695c54d2a7c9f52a54a8d487"},
{file = "tensorflow-2.11.0-cp39-cp39-manylinux_2_17_aarch64.manylinux2014_aarch64.whl", hash = "sha256:0d28f9691ebc48c0075e271023b3f147ae2bc29a3d3a7f42d45019c6b4a700d2"},
{file = "tensorflow-2.11.0-cp39-cp39-manylinux_2_17_x86_64.manylinux2014_x86_64.whl", hash = "sha256:276a44210d956701899dc78ad0aa116a0071f22fb0bcc1ea6bb59f7646b08d11"},
{file = "tensorflow-2.11.0-cp39-cp39-win_amd64.whl", hash = "sha256:cc3444fe1d58c65a195a69656bf56015bf19dc2916da607d784b0a1e215ec008"},
]
tensorflow-estimator = [
{file = "tensorflow_estimator-2.11.0-py2.py3-none-any.whl", hash = "sha256:ea3b64acfff3d9a244f06178c9bdedcbdd3f125b67d0888dba8229498d06468b"},
]
tensorflow-io-gcs-filesystem = [
{file = "tensorflow_io_gcs_filesystem-0.28.0-cp310-cp310-macosx_10_14_x86_64.whl", hash = "sha256:22753dc28c949bfaf29b573ee376370762c88d80330fe95cfb291261eb5e927a"},
{file = "tensorflow_io_gcs_filesystem-0.28.0-cp310-cp310-manylinux_2_12_x86_64.manylinux2010_x86_64.whl", hash = "sha256:52988659f405166df79905e9859bc84ae2a71e3ff61522ba32a95e4dce8e66d2"},
{file = "tensorflow_io_gcs_filesystem-0.28.0-cp310-cp310-win_amd64.whl", hash = "sha256:698d7f89e09812b9afeb47c3860797343a22f997c64ab9dab98132c61daa8a7d"},
{file = "tensorflow_io_gcs_filesystem-0.28.0-cp311-cp311-macosx_10_14_x86_64.whl", hash = "sha256:bbf245883aa52ec687b66d0fcbe0f5f0a92d98c0b1c53e6a736039a3548d29a1"},
{file = "tensorflow_io_gcs_filesystem-0.28.0-cp311-cp311-manylinux_2_12_x86_64.manylinux2010_x86_64.whl", hash = "sha256:6d95f306ff225c5053fd06deeab3e3a2716357923cb40c44d566c11be779caa3"},
{file = "tensorflow_io_gcs_filesystem-0.28.0-cp311-cp311-win_amd64.whl", hash = "sha256:5fbef5836e70026245d8d9e692c44dae2c6dbc208c743d01f5b7a2978d6b6bc6"},
{file = "tensorflow_io_gcs_filesystem-0.28.0-cp37-cp37m-macosx_10_14_x86_64.whl", hash = "sha256:00cf6a92f1f9f90b2ba2d728870bcd2a70b116316d0817ab0b91dd390c25b3fd"},
{file = "tensorflow_io_gcs_filesystem-0.28.0-cp37-cp37m-manylinux_2_12_x86_64.manylinux2010_x86_64.whl", hash = "sha256:f76cbe1a784841c223f6861e5f6c7e53aa6232cb626d57e76881a0638c365de6"},
{file = "tensorflow_io_gcs_filesystem-0.28.0-cp37-cp37m-win_amd64.whl", hash = "sha256:c5d99f56c12a349905ff684142e4d2df06ae68ecf50c4aad5449a5f81731d858"},
{file = "tensorflow_io_gcs_filesystem-0.28.0-cp38-cp38-macosx_10_14_x86_64.whl", hash = "sha256:b6e2d275020fb4d1a952cd3fa546483f4e46ad91d64e90d3458e5ca3d12f6477"},
{file = "tensorflow_io_gcs_filesystem-0.28.0-cp38-cp38-manylinux_2_12_x86_64.manylinux2010_x86_64.whl", hash = "sha256:a6670e0da16c884267e896ea5c3334d6fd319bd6ff7cf917043a9f3b2babb1b3"},
{file = "tensorflow_io_gcs_filesystem-0.28.0-cp38-cp38-win_amd64.whl", hash = "sha256:bfed720fc691d3f45802a7bed420716805aef0939c11cebf25798906201f626e"},
{file = "tensorflow_io_gcs_filesystem-0.28.0-cp39-cp39-macosx_10_14_x86_64.whl", hash = "sha256:cc062ce13ec95fb64b1fd426818a6d2b0e5be9692bc0e43a19cce115b6da4336"},
{file = "tensorflow_io_gcs_filesystem-0.28.0-cp39-cp39-manylinux_2_12_x86_64.manylinux2010_x86_64.whl", hash = "sha256:366e1eff8dbd6b64333d7061e2a8efd081ae4742614f717ced08d8cc9379eb50"},
{file = "tensorflow_io_gcs_filesystem-0.28.0-cp39-cp39-win_amd64.whl", hash = "sha256:9484893779324b2d34874b0aacf3b824eb4f22d782e75df029cbccab2e607974"},
]
termcolor = [
{file = "termcolor-2.1.1-py3-none-any.whl", hash = "sha256:fa852e957f97252205e105dd55bbc23b419a70fec0085708fc0515e399f304fd"},
{file = "termcolor-2.1.1.tar.gz", hash = "sha256:67cee2009adc6449c650f6bcf3bdeed00c8ba53a8cda5362733c53e0a39fb70b"},
]
terminado = [
{file = "terminado-0.17.0-py3-none-any.whl", hash = "sha256:bf6fe52accd06d0661d7611cc73202121ec6ee51e46d8185d489ac074ca457c2"},
{file = "terminado-0.17.0.tar.gz", hash = "sha256:520feaa3aeab8ad64a69ca779be54be9234edb2d0d6567e76c93c2c9a4e6e43f"},
]
thinc = [
{file = "thinc-8.1.5-cp310-cp310-macosx_10_9_x86_64.whl", hash = "sha256:5dc6629e4770a13dec34eda3c4d89302f1b5c91ac4663cd53f876a4e761fcc00"},
{file = "thinc-8.1.5-cp310-cp310-macosx_11_0_arm64.whl", hash = "sha256:8af5639de41a08d358fac073ac116faefe75289d9bed5c1fbf6c7a54724529ea"},
{file = "thinc-8.1.5-cp310-cp310-manylinux_2_17_aarch64.manylinux2014_aarch64.whl", hash = "sha256:4d66eeacc29769bf4238a0666f05e38d75dce60ab609eea5089975e6d8b82721"},
{file = "thinc-8.1.5-cp310-cp310-manylinux_2_17_x86_64.manylinux2014_x86_64.whl", hash = "sha256:25fcf9b53317f3addca048f1295d4708a95c526821295fe42398e23520514373"},
{file = "thinc-8.1.5-cp310-cp310-win_amd64.whl", hash = "sha256:a683f5280601f2fa1625e738e2b6ce481d17b07350823164f5863aab6b8b8a5d"},
{file = "thinc-8.1.5-cp311-cp311-macosx_10_9_x86_64.whl", hash = "sha256:404af2a714d6e688d27f7816042bca85766cbc57808aa9afb3309ad786000726"},
{file = "thinc-8.1.5-cp311-cp311-macosx_11_0_arm64.whl", hash = "sha256:ee28aa9773cb69d6c95d0c58b3fa9997c88840ad1eb877576f407a5b3b0f93c0"},
{file = "thinc-8.1.5-cp311-cp311-manylinux_2_17_aarch64.manylinux2014_aarch64.whl", hash = "sha256:7acccd5fb2fcd6caab1f3ad9d3f6acd1c6194a638dceccb5a33bd6f1875221ab"},
{file = "thinc-8.1.5-cp311-cp311-manylinux_2_17_x86_64.manylinux2014_x86_64.whl", hash = "sha256:1dc59ab558c85f901ac8299eb8ff1be14404b4d47e5ed3f94f897e25496e4f80"},
{file = "thinc-8.1.5-cp311-cp311-win_amd64.whl", hash = "sha256:07a4cf13c6f0259f32c9d023e2d32d0f5e0aa12ce0422792dbadd24fa1e0379e"},
{file = "thinc-8.1.5-cp36-cp36m-manylinux_2_17_aarch64.manylinux2014_aarch64.whl", hash = "sha256:3ad722c4b1351a712bf8759307ea1213f236aee4a170b2ff31f7908f31b34261"},
{file = "thinc-8.1.5-cp36-cp36m-manylinux_2_17_x86_64.manylinux2014_x86_64.whl", hash = "sha256:076d68f6c27862b66e15af3622651c58f66b3d3b1c69beadbf1c13da294f05cc"},
{file = "thinc-8.1.5-cp36-cp36m-win_amd64.whl", hash = "sha256:91a8ef8dd565b6aa9b3161b97eece079993109be156f4e8501c8bd36e02b6f3f"},
{file = "thinc-8.1.5-cp37-cp37m-macosx_10_9_x86_64.whl", hash = "sha256:73538c0e596d1f281678354f6508d4af5fad3ae0743b069a96628f2a96085fa5"},
{file = "thinc-8.1.5-cp37-cp37m-manylinux_2_17_aarch64.manylinux2014_aarch64.whl", hash = "sha256:ea5e6502565fe72f9a975f6fe5d1be9d19914d2a3abb3158da08b4adffaa97c6"},
{file = "thinc-8.1.5-cp37-cp37m-manylinux_2_17_x86_64.manylinux2014_x86_64.whl", hash = "sha256:d202e79e3d785a2931d580d3dafaa6ca357c5656c82341121731a3491a1c8887"},
{file = "thinc-8.1.5-cp37-cp37m-win_amd64.whl", hash = "sha256:61dfa235c891c1fa24f9607cd0cad264806adeb70d267162c6e5d91fb9f78640"},
{file = "thinc-8.1.5-cp38-cp38-macosx_10_9_x86_64.whl", hash = "sha256:b62a4247cce4c3a07014b9386b9045dbc15a83aa46102a7fcd5d8eec21fa463a"},
{file = "thinc-8.1.5-cp38-cp38-macosx_11_0_arm64.whl", hash = "sha256:345d15eb45743b305a35dd1dc77d282248e55e45a0a84c38d2dfc9fad6130125"},
{file = "thinc-8.1.5-cp38-cp38-manylinux_2_17_aarch64.manylinux2014_aarch64.whl", hash = "sha256:6793340b5ada30f11d9beaa6001ade6d80cf3a7877d701ec1710552145dabb33"},
{file = "thinc-8.1.5-cp38-cp38-manylinux_2_17_x86_64.manylinux2014_x86_64.whl", hash = "sha256:fa07750e65cc7d3bd922bf2046a10ef28cf22497990da13c3ca154b25449b758"},
{file = "thinc-8.1.5-cp38-cp38-win_amd64.whl", hash = "sha256:b7c1b8417e6bebcebe0bbded816b7b6587a1e239539109897e15cf8463dbed10"},
{file = "thinc-8.1.5-cp39-cp39-macosx_10_9_x86_64.whl", hash = "sha256:ad96acada56e4a0509b834c2e0950a5066727ddfc8d2201b83f7bca8751886aa"},
{file = "thinc-8.1.5-cp39-cp39-macosx_11_0_arm64.whl", hash = "sha256:5d0144cccb3fb08b15bba73a97f83c0f311a388417fb89d5bb4451abe559b0a2"},
{file = "thinc-8.1.5-cp39-cp39-manylinux_2_17_aarch64.manylinux2014_aarch64.whl", hash = "sha256:ced446d2af306a29b0c9ba8940a6631e2e9ef287f9643f4a1d539d69e9fc7266"},
{file = "thinc-8.1.5-cp39-cp39-manylinux_2_17_x86_64.manylinux2014_x86_64.whl", hash = "sha256:6bb376234c44f173445651c9bf397d05622e31c09a98f81cee98f5908d674380"},
{file = "thinc-8.1.5-cp39-cp39-win_amd64.whl", hash = "sha256:16be051c6f71d967fe87c3bda3a760699539cf75fee6b32527ea38feb3002e56"},
{file = "thinc-8.1.5.tar.gz", hash = "sha256:4d3e4de33d2d0eae7c1455c60c680e453b0204c29e3d2d548d7a9e7fe08ccfbd"},
]
threadpoolctl = [
{file = "threadpoolctl-3.1.0-py3-none-any.whl", hash = "sha256:8b99adda265feb6773280df41eece7b2e6561b772d21ffd52e372f999024907b"},
{file = "threadpoolctl-3.1.0.tar.gz", hash = "sha256:a335baacfaa4400ae1f0d8e3a58d6674d2f8828e3716bb2802c44955ad391380"},
]
tinycss2 = [
{file = "tinycss2-1.2.1-py3-none-any.whl", hash = "sha256:2b80a96d41e7c3914b8cda8bc7f705a4d9c49275616e886103dd839dfc847847"},
{file = "tinycss2-1.2.1.tar.gz", hash = "sha256:8cff3a8f066c2ec677c06dbc7b45619804a6938478d9d73c284b29d14ecb0627"},
]
tokenize-rt = [
{file = "tokenize_rt-5.0.0-py2.py3-none-any.whl", hash = "sha256:c67772c662c6b3dc65edf66808577968fb10badfc2042e3027196bed4daf9e5a"},
{file = "tokenize_rt-5.0.0.tar.gz", hash = "sha256:3160bc0c3e8491312d0485171dea861fc160a240f5f5766b72a1165408d10740"},
]
tomli = [
{file = "tomli-2.0.1-py3-none-any.whl", hash = "sha256:939de3e7a6161af0c887ef91b7d41a53e7c5a1ca976325f429cb46ea9bc30ecc"},
{file = "tomli-2.0.1.tar.gz", hash = "sha256:de526c12914f0c550d15924c62d72abc48d6fe7364aa87328337a31007fe8a4f"},
]
toolz = [
{file = "toolz-0.12.0-py3-none-any.whl", hash = "sha256:2059bd4148deb1884bb0eb770a3cde70e7f954cfbbdc2285f1f2de01fd21eb6f"},
{file = "toolz-0.12.0.tar.gz", hash = "sha256:88c570861c440ee3f2f6037c4654613228ff40c93a6c25e0eba70d17282c6194"},
]
torch = [
{file = "torch-1.12.1-cp310-cp310-manylinux1_x86_64.whl", hash = "sha256:9c038662db894a23e49e385df13d47b2a777ffd56d9bcd5b832593fab0a7e286"},
{file = "torch-1.12.1-cp310-cp310-manylinux2014_aarch64.whl", hash = "sha256:4e1b9c14cf13fd2ab8d769529050629a0e68a6fc5cb8e84b4a3cc1dd8c4fe541"},
{file = "torch-1.12.1-cp310-cp310-win_amd64.whl", hash = "sha256:e9c8f4a311ac29fc7e8e955cfb7733deb5dbe1bdaabf5d4af2765695824b7e0d"},
{file = "torch-1.12.1-cp310-none-macosx_10_9_x86_64.whl", hash = "sha256:976c3f997cea38ee91a0dd3c3a42322785414748d1761ef926b789dfa97c6134"},
{file = "torch-1.12.1-cp310-none-macosx_11_0_arm64.whl", hash = "sha256:68104e4715a55c4bb29a85c6a8d57d820e0757da363be1ba680fa8cc5be17b52"},
{file = "torch-1.12.1-cp37-cp37m-manylinux1_x86_64.whl", hash = "sha256:743784ccea0dc8f2a3fe6a536bec8c4763bd82c1352f314937cb4008d4805de1"},
{file = "torch-1.12.1-cp37-cp37m-manylinux2014_aarch64.whl", hash = "sha256:b5dbcca369800ce99ba7ae6dee3466607a66958afca3b740690d88168752abcf"},
{file = "torch-1.12.1-cp37-cp37m-win_amd64.whl", hash = "sha256:f3b52a634e62821e747e872084ab32fbcb01b7fa7dbb7471b6218279f02a178a"},
{file = "torch-1.12.1-cp37-none-macosx_10_9_x86_64.whl", hash = "sha256:8a34a2fbbaa07c921e1b203f59d3d6e00ed379f2b384445773bd14e328a5b6c8"},
{file = "torch-1.12.1-cp37-none-macosx_11_0_arm64.whl", hash = "sha256:42f639501928caabb9d1d55ddd17f07cd694de146686c24489ab8c615c2871f2"},
{file = "torch-1.12.1-cp38-cp38-manylinux1_x86_64.whl", hash = "sha256:0b44601ec56f7dd44ad8afc00846051162ef9c26a8579dda0a02194327f2d55e"},
{file = "torch-1.12.1-cp38-cp38-manylinux2014_aarch64.whl", hash = "sha256:cd26d8c5640c3a28c526d41ccdca14cf1cbca0d0f2e14e8263a7ac17194ab1d2"},
{file = "torch-1.12.1-cp38-cp38-win_amd64.whl", hash = "sha256:42e115dab26f60c29e298559dbec88444175528b729ae994ec4c65d56fe267dd"},
{file = "torch-1.12.1-cp38-none-macosx_10_9_x86_64.whl", hash = "sha256:a8320ba9ad87e80ca5a6a016e46ada4d1ba0c54626e135d99b2129a4541c509d"},
{file = "torch-1.12.1-cp38-none-macosx_11_0_arm64.whl", hash = "sha256:03e31c37711db2cd201e02de5826de875529e45a55631d317aadce2f1ed45aa8"},
{file = "torch-1.12.1-cp39-cp39-manylinux1_x86_64.whl", hash = "sha256:9b356aea223772cd754edb4d9ecf2a025909b8615a7668ac7d5130f86e7ec421"},
{file = "torch-1.12.1-cp39-cp39-manylinux2014_aarch64.whl", hash = "sha256:6cf6f54b43c0c30335428195589bd00e764a6d27f3b9ba637aaa8c11aaf93073"},
{file = "torch-1.12.1-cp39-cp39-win_amd64.whl", hash = "sha256:f00c721f489089dc6364a01fd84906348fe02243d0af737f944fddb36003400d"},
{file = "torch-1.12.1-cp39-none-macosx_10_9_x86_64.whl", hash = "sha256:bfec2843daa654f04fda23ba823af03e7b6f7650a873cdb726752d0e3718dada"},
{file = "torch-1.12.1-cp39-none-macosx_11_0_arm64.whl", hash = "sha256:69fe2cae7c39ccadd65a123793d30e0db881f1c1927945519c5c17323131437e"},
]
torchvision = [
{file = "torchvision-0.13.1-cp310-cp310-macosx_10_9_x86_64.whl", hash = "sha256:19286a733c69dcbd417b86793df807bd227db5786ed787c17297741a9b0d0fc7"},
{file = "torchvision-0.13.1-cp310-cp310-macosx_11_0_arm64.whl", hash = "sha256:08f592ea61836ebeceb5c97f4d7a813b9d7dc651bbf7ce4401563ccfae6a21fc"},
{file = "torchvision-0.13.1-cp310-cp310-manylinux1_x86_64.whl", hash = "sha256:ef5fe3ec1848123cd0ec74c07658192b3147dcd38e507308c790d5943e87b88c"},
{file = "torchvision-0.13.1-cp310-cp310-manylinux2014_aarch64.whl", hash = "sha256:099874088df104d54d8008f2a28539ca0117b512daed8bf3c2bbfa2b7ccb187a"},
{file = "torchvision-0.13.1-cp310-cp310-win_amd64.whl", hash = "sha256:8e4d02e4d8a203e0c09c10dfb478214c224d080d31efc0dbf36d9c4051f7f3c6"},
{file = "torchvision-0.13.1-cp37-cp37m-macosx_10_9_x86_64.whl", hash = "sha256:5e631241bee3661de64f83616656224af2e3512eb2580da7c08e08b8c965a8ac"},
{file = "torchvision-0.13.1-cp37-cp37m-manylinux1_x86_64.whl", hash = "sha256:899eec0b9f3b99b96d6f85b9aa58c002db41c672437677b553015b9135b3be7e"},
{file = "torchvision-0.13.1-cp37-cp37m-manylinux2014_aarch64.whl", hash = "sha256:83e9e2457f23110fd53b0177e1bc621518d6ea2108f570e853b768ce36b7c679"},
{file = "torchvision-0.13.1-cp37-cp37m-win_amd64.whl", hash = "sha256:7552e80fa222252b8b217a951c85e172a710ea4cad0ae0c06fbb67addece7871"},
{file = "torchvision-0.13.1-cp38-cp38-macosx_10_9_x86_64.whl", hash = "sha256:f230a1a40ed70d51e463ce43df243ec520902f8725de2502e485efc5eea9d864"},
{file = "torchvision-0.13.1-cp38-cp38-macosx_11_0_arm64.whl", hash = "sha256:e9a563894f9fa40692e24d1aa58c3ef040450017cfed3598ff9637f404f3fe3b"},
{file = "torchvision-0.13.1-cp38-cp38-manylinux1_x86_64.whl", hash = "sha256:7cb789ceefe6dcd0dc8eeda37bfc45efb7cf34770eac9533861d51ca508eb5b3"},
{file = "torchvision-0.13.1-cp38-cp38-manylinux2014_aarch64.whl", hash = "sha256:87c137f343197769a51333076e66bfcd576301d2cd8614b06657187c71b06c4f"},
{file = "torchvision-0.13.1-cp38-cp38-win_amd64.whl", hash = "sha256:4d8bf321c4380854ef04613935fdd415dce29d1088a7ff99e06e113f0efe9203"},
{file = "torchvision-0.13.1-cp39-cp39-macosx_10_9_x86_64.whl", hash = "sha256:0298bae3b09ac361866088434008d82b99d6458fe8888c8df90720ef4b347d44"},
{file = "torchvision-0.13.1-cp39-cp39-macosx_11_0_arm64.whl", hash = "sha256:c5ed609c8bc88c575226400b2232e0309094477c82af38952e0373edef0003fd"},
{file = "torchvision-0.13.1-cp39-cp39-manylinux1_x86_64.whl", hash = "sha256:3567fb3def829229ec217c1e38f08c5128ff7fb65854cac17ebac358ff7aa309"},
{file = "torchvision-0.13.1-cp39-cp39-manylinux2014_aarch64.whl", hash = "sha256:b167934a5943242da7b1e59318f911d2d253feeca0d13ad5d832b58eed943401"},
{file = "torchvision-0.13.1-cp39-cp39-win_amd64.whl", hash = "sha256:0e77706cc90462653620e336bb90daf03d7bf1b88c3a9a3037df8d111823a56e"},
]
tornado = [
{file = "tornado-6.2-cp37-abi3-macosx_10_9_universal2.whl", hash = "sha256:20f638fd8cc85f3cbae3c732326e96addff0a15e22d80f049e00121651e82e72"},
{file = "tornado-6.2-cp37-abi3-macosx_10_9_x86_64.whl", hash = "sha256:87dcafae3e884462f90c90ecc200defe5e580a7fbbb4365eda7c7c1eb809ebc9"},
{file = "tornado-6.2-cp37-abi3-manylinux_2_17_aarch64.manylinux2014_aarch64.whl", hash = "sha256:ba09ef14ca9893954244fd872798b4ccb2367c165946ce2dd7376aebdde8e3ac"},
{file = "tornado-6.2-cp37-abi3-manylinux_2_5_i686.manylinux1_i686.manylinux_2_17_i686.manylinux2014_i686.whl", hash = "sha256:b8150f721c101abdef99073bf66d3903e292d851bee51910839831caba341a75"},
{file = "tornado-6.2-cp37-abi3-manylinux_2_5_x86_64.manylinux1_x86_64.manylinux_2_17_x86_64.manylinux2014_x86_64.whl", hash = "sha256:d3a2f5999215a3a06a4fc218026cd84c61b8b2b40ac5296a6db1f1451ef04c1e"},
{file = "tornado-6.2-cp37-abi3-musllinux_1_1_aarch64.whl", hash = "sha256:5f8c52d219d4995388119af7ccaa0bcec289535747620116a58d830e7c25d8a8"},
{file = "tornado-6.2-cp37-abi3-musllinux_1_1_i686.whl", hash = "sha256:6fdfabffd8dfcb6cf887428849d30cf19a3ea34c2c248461e1f7d718ad30b66b"},
{file = "tornado-6.2-cp37-abi3-musllinux_1_1_x86_64.whl", hash = "sha256:1d54d13ab8414ed44de07efecb97d4ef7c39f7438cf5e976ccd356bebb1b5fca"},
{file = "tornado-6.2-cp37-abi3-win32.whl", hash = "sha256:5c87076709343557ef8032934ce5f637dbb552efa7b21d08e89ae7619ed0eb23"},
{file = "tornado-6.2-cp37-abi3-win_amd64.whl", hash = "sha256:e5f923aa6a47e133d1cf87d60700889d7eae68988704e20c75fb2d65677a8e4b"},
{file = "tornado-6.2.tar.gz", hash = "sha256:9b630419bde84ec666bfd7ea0a4cb2a8a651c2d5cccdbdd1972a0c859dfc3c13"},
]
tqdm = [
{file = "tqdm-4.64.1-py2.py3-none-any.whl", hash = "sha256:6fee160d6ffcd1b1c68c65f14c829c22832bc401726335ce92c52d395944a6a1"},
{file = "tqdm-4.64.1.tar.gz", hash = "sha256:5f4f682a004951c1b450bc753c710e9280c5746ce6ffedee253ddbcbf54cf1e4"},
]
traitlets = [
{file = "traitlets-5.5.0-py3-none-any.whl", hash = "sha256:1201b2c9f76097195989cdf7f65db9897593b0dfd69e4ac96016661bb6f0d30f"},
{file = "traitlets-5.5.0.tar.gz", hash = "sha256:b122f9ff2f2f6c1709dab289a05555be011c87828e911c0cf4074b85cb780a79"},
]
typer = [
{file = "typer-0.7.0-py3-none-any.whl", hash = "sha256:b5e704f4e48ec263de1c0b3a2387cd405a13767d2f907f44c1a08cbad96f606d"},
{file = "typer-0.7.0.tar.gz", hash = "sha256:ff797846578a9f2a201b53442aedeb543319466870fbe1c701eab66dd7681165"},
]
typing-extensions = [
{file = "typing_extensions-4.4.0-py3-none-any.whl", hash = "sha256:16fa4864408f655d35ec496218b85f79b3437c829e93320c7c9215ccfd92489e"},
{file = "typing_extensions-4.4.0.tar.gz", hash = "sha256:1511434bb92bf8dd198c12b1cc812e800d4181cfcb867674e0f8279cc93087aa"},
]
tzdata = [
{file = "tzdata-2022.6-py2.py3-none-any.whl", hash = "sha256:04a680bdc5b15750c39c12a448885a51134a27ec9af83667663f0b3a1bf3f342"},
{file = "tzdata-2022.6.tar.gz", hash = "sha256:91f11db4503385928c15598c98573e3af07e7229181bee5375bd30f1695ddcae"},
]
tzlocal = [
{file = "tzlocal-4.2-py3-none-any.whl", hash = "sha256:89885494684c929d9191c57aa27502afc87a579be5cdd3225c77c463ea043745"},
{file = "tzlocal-4.2.tar.gz", hash = "sha256:ee5842fa3a795f023514ac2d801c4a81d1743bbe642e3940143326b3a00addd7"},
]
urllib3 = [
{file = "urllib3-1.26.12-py2.py3-none-any.whl", hash = "sha256:b930dd878d5a8afb066a637fbb35144fe7901e3b209d1cd4f524bd0e9deee997"},
{file = "urllib3-1.26.12.tar.gz", hash = "sha256:3fa96cf423e6987997fc326ae8df396db2a8b7c667747d47ddd8ecba91f4a74e"},
]
wasabi = [
{file = "wasabi-0.10.1-py3-none-any.whl", hash = "sha256:fe862cc24034fbc9f04717cd312ab884f71f51a8ecabebc3449b751c2a649d83"},
{file = "wasabi-0.10.1.tar.gz", hash = "sha256:c8e372781be19272942382b14d99314d175518d7822057cb7a97010c4259d249"},
]
wcwidth = [
{file = "wcwidth-0.2.5-py2.py3-none-any.whl", hash = "sha256:beb4802a9cebb9144e99086eff703a642a13d6a0052920003a230f3294bbe784"},
{file = "wcwidth-0.2.5.tar.gz", hash = "sha256:c4d647b99872929fdb7bdcaa4fbe7f01413ed3d98077df798530e5b04f116c83"},
]
webencodings = [
{file = "webencodings-0.5.1-py2.py3-none-any.whl", hash = "sha256:a0af1213f3c2226497a97e2b3aa01a7e4bee4f403f95be16fc9acd2947514a78"},
{file = "webencodings-0.5.1.tar.gz", hash = "sha256:b36a1c245f2d304965eb4e0a82848379241dc04b865afcc4aab16748587e1923"},
]
websocket-client = [
{file = "websocket-client-1.4.2.tar.gz", hash = "sha256:d6e8f90ca8e2dd4e8027c4561adeb9456b54044312dba655e7cae652ceb9ae59"},
{file = "websocket_client-1.4.2-py3-none-any.whl", hash = "sha256:d6b06432f184438d99ac1f456eaf22fe1ade524c3dd16e661142dc54e9cba574"},
]
werkzeug = [
{file = "Werkzeug-2.2.2-py3-none-any.whl", hash = "sha256:f979ab81f58d7318e064e99c4506445d60135ac5cd2e177a2de0089bfd4c9bd5"},
{file = "Werkzeug-2.2.2.tar.gz", hash = "sha256:7ea2d48322cc7c0f8b3a215ed73eabd7b5d75d0b50e31ab006286ccff9e00b8f"},
]
wheel = [
{file = "wheel-0.38.4-py3-none-any.whl", hash = "sha256:b60533f3f5d530e971d6737ca6d58681ee434818fab630c83a734bb10c083ce8"},
{file = "wheel-0.38.4.tar.gz", hash = "sha256:965f5259b566725405b05e7cf774052044b1ed30119b5d586b2703aafe8719ac"},
]
widgetsnbextension = [
{file = "widgetsnbextension-4.0.3-py3-none-any.whl", hash = "sha256:7f3b0de8fda692d31ef03743b598620e31c2668b835edbd3962d080ccecf31eb"},
{file = "widgetsnbextension-4.0.3.tar.gz", hash = "sha256:34824864c062b0b3030ad78210db5ae6a3960dfb61d5b27562d6631774de0286"},
]
wrapt = [
{file = "wrapt-1.14.1-cp27-cp27m-macosx_10_9_x86_64.whl", hash = "sha256:1b376b3f4896e7930f1f772ac4b064ac12598d1c38d04907e696cc4d794b43d3"},
{file = "wrapt-1.14.1-cp27-cp27m-manylinux1_i686.whl", hash = "sha256:903500616422a40a98a5a3c4ff4ed9d0066f3b4c951fa286018ecdf0750194ef"},
{file = "wrapt-1.14.1-cp27-cp27m-manylinux1_x86_64.whl", hash = "sha256:5a9a0d155deafd9448baff28c08e150d9b24ff010e899311ddd63c45c2445e28"},
{file = "wrapt-1.14.1-cp27-cp27m-manylinux2010_i686.whl", hash = "sha256:ddaea91abf8b0d13443f6dac52e89051a5063c7d014710dcb4d4abb2ff811a59"},
{file = "wrapt-1.14.1-cp27-cp27m-manylinux2010_x86_64.whl", hash = "sha256:36f582d0c6bc99d5f39cd3ac2a9062e57f3cf606ade29a0a0d6b323462f4dd87"},
{file = "wrapt-1.14.1-cp27-cp27mu-manylinux1_i686.whl", hash = "sha256:7ef58fb89674095bfc57c4069e95d7a31cfdc0939e2a579882ac7d55aadfd2a1"},
{file = "wrapt-1.14.1-cp27-cp27mu-manylinux1_x86_64.whl", hash = "sha256:e2f83e18fe2f4c9e7db597e988f72712c0c3676d337d8b101f6758107c42425b"},
{file = "wrapt-1.14.1-cp27-cp27mu-manylinux2010_i686.whl", hash = "sha256:ee2b1b1769f6707a8a445162ea16dddf74285c3964f605877a20e38545c3c462"},
{file = "wrapt-1.14.1-cp27-cp27mu-manylinux2010_x86_64.whl", hash = "sha256:833b58d5d0b7e5b9832869f039203389ac7cbf01765639c7309fd50ef619e0b1"},
{file = "wrapt-1.14.1-cp310-cp310-macosx_10_9_x86_64.whl", hash = "sha256:80bb5c256f1415f747011dc3604b59bc1f91c6e7150bd7db03b19170ee06b320"},
{file = "wrapt-1.14.1-cp310-cp310-macosx_11_0_arm64.whl", hash = "sha256:07f7a7d0f388028b2df1d916e94bbb40624c59b48ecc6cbc232546706fac74c2"},
{file = "wrapt-1.14.1-cp310-cp310-manylinux_2_17_aarch64.manylinux2014_aarch64.whl", hash = "sha256:02b41b633c6261feff8ddd8d11c711df6842aba629fdd3da10249a53211a72c4"},
{file = "wrapt-1.14.1-cp310-cp310-manylinux_2_5_i686.manylinux1_i686.manylinux_2_17_i686.manylinux2014_i686.whl", hash = "sha256:2fe803deacd09a233e4762a1adcea5db5d31e6be577a43352936179d14d90069"},
{file = "wrapt-1.14.1-cp310-cp310-manylinux_2_5_x86_64.manylinux1_x86_64.manylinux_2_17_x86_64.manylinux2014_x86_64.whl", hash = "sha256:257fd78c513e0fb5cdbe058c27a0624c9884e735bbd131935fd49e9fe719d310"},
{file = "wrapt-1.14.1-cp310-cp310-musllinux_1_1_aarch64.whl", hash = "sha256:4fcc4649dc762cddacd193e6b55bc02edca674067f5f98166d7713b193932b7f"},
{file = "wrapt-1.14.1-cp310-cp310-musllinux_1_1_i686.whl", hash = "sha256:11871514607b15cfeb87c547a49bca19fde402f32e2b1c24a632506c0a756656"},
{file = "wrapt-1.14.1-cp310-cp310-musllinux_1_1_x86_64.whl", hash = "sha256:8ad85f7f4e20964db4daadcab70b47ab05c7c1cf2a7c1e51087bfaa83831854c"},
{file = "wrapt-1.14.1-cp310-cp310-win32.whl", hash = "sha256:a9a52172be0b5aae932bef82a79ec0a0ce87288c7d132946d645eba03f0ad8a8"},
{file = "wrapt-1.14.1-cp310-cp310-win_amd64.whl", hash = "sha256:6d323e1554b3d22cfc03cd3243b5bb815a51f5249fdcbb86fda4bf62bab9e164"},
{file = "wrapt-1.14.1-cp35-cp35m-manylinux1_i686.whl", hash = "sha256:43ca3bbbe97af00f49efb06e352eae40434ca9d915906f77def219b88e85d907"},
{file = "wrapt-1.14.1-cp35-cp35m-manylinux1_x86_64.whl", hash = "sha256:6b1a564e6cb69922c7fe3a678b9f9a3c54e72b469875aa8018f18b4d1dd1adf3"},
{file = "wrapt-1.14.1-cp35-cp35m-manylinux2010_i686.whl", hash = "sha256:00b6d4ea20a906c0ca56d84f93065b398ab74b927a7a3dbd470f6fc503f95dc3"},
{file = "wrapt-1.14.1-cp35-cp35m-manylinux2010_x86_64.whl", hash = "sha256:a85d2b46be66a71bedde836d9e41859879cc54a2a04fad1191eb50c2066f6e9d"},
{file = "wrapt-1.14.1-cp35-cp35m-win32.whl", hash = "sha256:dbcda74c67263139358f4d188ae5faae95c30929281bc6866d00573783c422b7"},
{file = "wrapt-1.14.1-cp35-cp35m-win_amd64.whl", hash = "sha256:b21bb4c09ffabfa0e85e3a6b623e19b80e7acd709b9f91452b8297ace2a8ab00"},
{file = "wrapt-1.14.1-cp36-cp36m-macosx_10_9_x86_64.whl", hash = "sha256:9e0fd32e0148dd5dea6af5fee42beb949098564cc23211a88d799e434255a1f4"},
{file = "wrapt-1.14.1-cp36-cp36m-manylinux_2_17_aarch64.manylinux2014_aarch64.whl", hash = "sha256:9736af4641846491aedb3c3f56b9bc5568d92b0692303b5a305301a95dfd38b1"},
{file = "wrapt-1.14.1-cp36-cp36m-manylinux_2_5_i686.manylinux1_i686.manylinux_2_17_i686.manylinux2014_i686.whl", hash = "sha256:5b02d65b9ccf0ef6c34cba6cf5bf2aab1bb2f49c6090bafeecc9cd81ad4ea1c1"},
{file = "wrapt-1.14.1-cp36-cp36m-manylinux_2_5_x86_64.manylinux1_x86_64.manylinux_2_17_x86_64.manylinux2014_x86_64.whl", hash = "sha256:21ac0156c4b089b330b7666db40feee30a5d52634cc4560e1905d6529a3897ff"},
{file = "wrapt-1.14.1-cp36-cp36m-musllinux_1_1_aarch64.whl", hash = "sha256:9f3e6f9e05148ff90002b884fbc2a86bd303ae847e472f44ecc06c2cd2fcdb2d"},
{file = "wrapt-1.14.1-cp36-cp36m-musllinux_1_1_i686.whl", hash = "sha256:6e743de5e9c3d1b7185870f480587b75b1cb604832e380d64f9504a0535912d1"},
{file = "wrapt-1.14.1-cp36-cp36m-musllinux_1_1_x86_64.whl", hash = "sha256:d79d7d5dc8a32b7093e81e97dad755127ff77bcc899e845f41bf71747af0c569"},
{file = "wrapt-1.14.1-cp36-cp36m-win32.whl", hash = "sha256:81b19725065dcb43df02b37e03278c011a09e49757287dca60c5aecdd5a0b8ed"},
{file = "wrapt-1.14.1-cp36-cp36m-win_amd64.whl", hash = "sha256:b014c23646a467558be7da3d6b9fa409b2c567d2110599b7cf9a0c5992b3b471"},
{file = "wrapt-1.14.1-cp37-cp37m-macosx_10_9_x86_64.whl", hash = "sha256:88bd7b6bd70a5b6803c1abf6bca012f7ed963e58c68d76ee20b9d751c74a3248"},
{file = "wrapt-1.14.1-cp37-cp37m-manylinux_2_17_aarch64.manylinux2014_aarch64.whl", hash = "sha256:b5901a312f4d14c59918c221323068fad0540e34324925c8475263841dbdfe68"},
{file = "wrapt-1.14.1-cp37-cp37m-manylinux_2_5_i686.manylinux1_i686.manylinux_2_17_i686.manylinux2014_i686.whl", hash = "sha256:d77c85fedff92cf788face9bfa3ebaa364448ebb1d765302e9af11bf449ca36d"},
{file = "wrapt-1.14.1-cp37-cp37m-manylinux_2_5_x86_64.manylinux1_x86_64.manylinux_2_17_x86_64.manylinux2014_x86_64.whl", hash = "sha256:8d649d616e5c6a678b26d15ece345354f7c2286acd6db868e65fcc5ff7c24a77"},
{file = "wrapt-1.14.1-cp37-cp37m-musllinux_1_1_aarch64.whl", hash = "sha256:7d2872609603cb35ca513d7404a94d6d608fc13211563571117046c9d2bcc3d7"},
{file = "wrapt-1.14.1-cp37-cp37m-musllinux_1_1_i686.whl", hash = "sha256:ee6acae74a2b91865910eef5e7de37dc6895ad96fa23603d1d27ea69df545015"},
{file = "wrapt-1.14.1-cp37-cp37m-musllinux_1_1_x86_64.whl", hash = "sha256:2b39d38039a1fdad98c87279b48bc5dce2c0ca0d73483b12cb72aa9609278e8a"},
{file = "wrapt-1.14.1-cp37-cp37m-win32.whl", hash = "sha256:60db23fa423575eeb65ea430cee741acb7c26a1365d103f7b0f6ec412b893853"},
{file = "wrapt-1.14.1-cp37-cp37m-win_amd64.whl", hash = "sha256:709fe01086a55cf79d20f741f39325018f4df051ef39fe921b1ebe780a66184c"},
{file = "wrapt-1.14.1-cp38-cp38-macosx_10_9_x86_64.whl", hash = "sha256:8c0ce1e99116d5ab21355d8ebe53d9460366704ea38ae4d9f6933188f327b456"},
{file = "wrapt-1.14.1-cp38-cp38-macosx_11_0_arm64.whl", hash = "sha256:e3fb1677c720409d5f671e39bac6c9e0e422584e5f518bfd50aa4cbbea02433f"},
{file = "wrapt-1.14.1-cp38-cp38-manylinux_2_17_aarch64.manylinux2014_aarch64.whl", hash = "sha256:642c2e7a804fcf18c222e1060df25fc210b9c58db7c91416fb055897fc27e8cc"},
{file = "wrapt-1.14.1-cp38-cp38-manylinux_2_5_i686.manylinux1_i686.manylinux_2_17_i686.manylinux2014_i686.whl", hash = "sha256:7b7c050ae976e286906dd3f26009e117eb000fb2cf3533398c5ad9ccc86867b1"},
{file = "wrapt-1.14.1-cp38-cp38-manylinux_2_5_x86_64.manylinux1_x86_64.manylinux_2_17_x86_64.manylinux2014_x86_64.whl", hash = "sha256:ef3f72c9666bba2bab70d2a8b79f2c6d2c1a42a7f7e2b0ec83bb2f9e383950af"},
{file = "wrapt-1.14.1-cp38-cp38-musllinux_1_1_aarch64.whl", hash = "sha256:01c205616a89d09827986bc4e859bcabd64f5a0662a7fe95e0d359424e0e071b"},
{file = "wrapt-1.14.1-cp38-cp38-musllinux_1_1_i686.whl", hash = "sha256:5a0f54ce2c092aaf439813735584b9537cad479575a09892b8352fea5e988dc0"},
{file = "wrapt-1.14.1-cp38-cp38-musllinux_1_1_x86_64.whl", hash = "sha256:2cf71233a0ed05ccdabe209c606fe0bac7379fdcf687f39b944420d2a09fdb57"},
{file = "wrapt-1.14.1-cp38-cp38-win32.whl", hash = "sha256:aa31fdcc33fef9eb2552cbcbfee7773d5a6792c137b359e82879c101e98584c5"},
{file = "wrapt-1.14.1-cp38-cp38-win_amd64.whl", hash = "sha256:d1967f46ea8f2db647c786e78d8cc7e4313dbd1b0aca360592d8027b8508e24d"},
{file = "wrapt-1.14.1-cp39-cp39-macosx_10_9_x86_64.whl", hash = "sha256:3232822c7d98d23895ccc443bbdf57c7412c5a65996c30442ebe6ed3df335383"},
{file = "wrapt-1.14.1-cp39-cp39-macosx_11_0_arm64.whl", hash = "sha256:988635d122aaf2bdcef9e795435662bcd65b02f4f4c1ae37fbee7401c440b3a7"},
{file = "wrapt-1.14.1-cp39-cp39-manylinux_2_17_aarch64.manylinux2014_aarch64.whl", hash = "sha256:9cca3c2cdadb362116235fdbd411735de4328c61425b0aa9f872fd76d02c4e86"},
{file = "wrapt-1.14.1-cp39-cp39-manylinux_2_5_i686.manylinux1_i686.manylinux_2_17_i686.manylinux2014_i686.whl", hash = "sha256:d52a25136894c63de15a35bc0bdc5adb4b0e173b9c0d07a2be9d3ca64a332735"},
{file = "wrapt-1.14.1-cp39-cp39-manylinux_2_5_x86_64.manylinux1_x86_64.manylinux_2_17_x86_64.manylinux2014_x86_64.whl", hash = "sha256:40e7bc81c9e2b2734ea4bc1aceb8a8f0ceaac7c5299bc5d69e37c44d9081d43b"},
{file = "wrapt-1.14.1-cp39-cp39-musllinux_1_1_aarch64.whl", hash = "sha256:b9b7a708dd92306328117d8c4b62e2194d00c365f18eff11a9b53c6f923b01e3"},
{file = "wrapt-1.14.1-cp39-cp39-musllinux_1_1_i686.whl", hash = "sha256:6a9a25751acb379b466ff6be78a315e2b439d4c94c1e99cb7266d40a537995d3"},
{file = "wrapt-1.14.1-cp39-cp39-musllinux_1_1_x86_64.whl", hash = "sha256:34aa51c45f28ba7f12accd624225e2b1e5a3a45206aa191f6f9aac931d9d56fe"},
{file = "wrapt-1.14.1-cp39-cp39-win32.whl", hash = "sha256:dee0ce50c6a2dd9056c20db781e9c1cfd33e77d2d569f5d1d9321c641bb903d5"},
{file = "wrapt-1.14.1-cp39-cp39-win_amd64.whl", hash = "sha256:dee60e1de1898bde3b238f18340eec6148986da0455d8ba7848d50470a7a32fb"},
{file = "wrapt-1.14.1.tar.gz", hash = "sha256:380a85cf89e0e69b7cfbe2ea9f765f004ff419f34194018a6827ac0e3edfed4d"},
]
xgboost = [
{file = "xgboost-1.7.1-py3-none-macosx_10_15_x86_64.macosx_11_0_x86_64.macosx_12_0_x86_64.whl", hash = "sha256:373d8e95f2f0c0a680ee625a96141b0009f334e132be8493e0f6c69026221bbd"},
{file = "xgboost-1.7.1-py3-none-macosx_12_0_arm64.whl", hash = "sha256:91dfd4af12c01c6e683b0412f48744d2d30d6754e33b297e40845e2d136b3d30"},
{file = "xgboost-1.7.1-py3-none-manylinux2014_aarch64.whl", hash = "sha256:18b9fbad68d2af60737618072e77a43f88eec1113a143f9498698eb5db0d9c41"},
{file = "xgboost-1.7.1-py3-none-manylinux2014_x86_64.whl", hash = "sha256:e96305eb8c8b6061d83ac9fef25437e8ebc8d9c9300e75b8d07f35de1031166b"},
{file = "xgboost-1.7.1-py3-none-win_amd64.whl", hash = "sha256:fbe06896e1b12843c7f428ae56da6ac1c5975545d8785f137f73fd591c54e5f5"},
{file = "xgboost-1.7.1.tar.gz", hash = "sha256:bb302c5c33e14bab94603940987940f29203ecb8767a7a719daf579fbfaace64"},
]
zict = [
{file = "zict-2.2.0-py2.py3-none-any.whl", hash = "sha256:dabcc8c8b6833aa3b6602daad50f03da068322c1a90999ff78aed9eecc8fa92c"},
{file = "zict-2.2.0.tar.gz", hash = "sha256:d7366c2e2293314112dcf2432108428a67b927b00005619feefc310d12d833f3"},
]
zipp = [
{file = "zipp-3.10.0-py3-none-any.whl", hash = "sha256:4fcb6f278987a6605757302a6e40e896257570d11c51628968ccb2a47e80c6c1"},
{file = "zipp-3.10.0.tar.gz", hash = "sha256:7a7262fd930bd3e36c50b9a64897aec3fafff3dfdeec9623ae22b40e93f99bb8"},
]
[[package]]
name = "absl-py"
version = "1.3.0"
description = "Abseil Python Common Libraries, see https://github.com/abseil/abseil-py."
category = "dev"
optional = false
python-versions = ">=3.6"
[[package]]
name = "alabaster"
version = "0.7.12"
description = "A configurable sidebar-enabled Sphinx theme"
category = "main"
optional = false
python-versions = "*"
[[package]]
name = "anyio"
version = "3.6.2"
description = "High level compatibility layer for multiple asynchronous event loop implementations"
category = "dev"
optional = false
python-versions = ">=3.6.2"
[package.dependencies]
idna = ">=2.8"
sniffio = ">=1.1"
[package.extras]
doc = ["packaging", "sphinx-autodoc-typehints (>=1.2.0)", "sphinx-rtd-theme"]
test = ["contextlib2", "coverage[toml] (>=4.5)", "hypothesis (>=4.0)", "mock (>=4)", "pytest (>=7.0)", "pytest-mock (>=3.6.1)", "trustme", "uvloop (<0.15)", "uvloop (>=0.15)"]
trio = ["trio (>=0.16,<0.22)"]
[[package]]
name = "appnope"
version = "0.1.3"
description = "Disable App Nap on macOS >= 10.9"
category = "dev"
optional = false
python-versions = "*"
[[package]]
name = "argon2-cffi"
version = "21.3.0"
description = "The secure Argon2 password hashing algorithm."
category = "dev"
optional = false
python-versions = ">=3.6"
[package.dependencies]
argon2-cffi-bindings = "*"
[package.extras]
dev = ["cogapp", "coverage[toml] (>=5.0.2)", "furo", "hypothesis", "pre-commit", "pytest", "sphinx", "sphinx-notfound-page", "tomli"]
docs = ["furo", "sphinx", "sphinx-notfound-page"]
tests = ["coverage[toml] (>=5.0.2)", "hypothesis", "pytest"]
[[package]]
name = "argon2-cffi-bindings"
version = "21.2.0"
description = "Low-level CFFI bindings for Argon2"
category = "dev"
optional = false
python-versions = ">=3.6"
[package.dependencies]
cffi = ">=1.0.1"
[package.extras]
dev = ["cogapp", "pre-commit", "pytest", "wheel"]
tests = ["pytest"]
[[package]]
name = "asttokens"
version = "2.1.0"
description = "Annotate AST trees with source code positions"
category = "dev"
optional = false
python-versions = "*"
[package.dependencies]
six = "*"
[package.extras]
test = ["astroid (<=2.5.3)", "pytest"]
[[package]]
name = "astunparse"
version = "1.6.3"
description = "An AST unparser for Python"
category = "dev"
optional = false
python-versions = "*"
[package.dependencies]
six = ">=1.6.1,<2.0"
wheel = ">=0.23.0,<1.0"
[[package]]
name = "attrs"
version = "22.1.0"
description = "Classes Without Boilerplate"
category = "dev"
optional = false
python-versions = ">=3.5"
[package.extras]
dev = ["cloudpickle", "coverage[toml] (>=5.0.2)", "furo", "hypothesis", "mypy (>=0.900,!=0.940)", "pre-commit", "pympler", "pytest (>=4.3.0)", "pytest-mypy-plugins", "sphinx", "sphinx-notfound-page", "zope.interface"]
docs = ["furo", "sphinx", "sphinx-notfound-page", "zope.interface"]
tests = ["cloudpickle", "coverage[toml] (>=5.0.2)", "hypothesis", "mypy (>=0.900,!=0.940)", "pympler", "pytest (>=4.3.0)", "pytest-mypy-plugins", "zope.interface"]
tests_no_zope = ["cloudpickle", "coverage[toml] (>=5.0.2)", "hypothesis", "mypy (>=0.900,!=0.940)", "pympler", "pytest (>=4.3.0)", "pytest-mypy-plugins"]
[[package]]
name = "autogluon.common"
version = "0.6.0"
description = "AutoML for Image, Text, and Tabular Data"
category = "main"
optional = false
python-versions = ">=3.7, <3.10"
[package.dependencies]
boto3 = "*"
numpy = ">=1.21,<1.24"
pandas = ">=1.2.5,<1.4.0 || >1.4.0,<1.6"
setuptools = "*"
[package.extras]
tests = ["pytest", "pytest-mypy", "types-requests", "types-setuptools"]
[[package]]
name = "autogluon.core"
version = "0.6.0"
description = "AutoML for Image, Text, and Tabular Data"
category = "main"
optional = false
python-versions = ">=3.7, <3.10"
[package.dependencies]
"autogluon.common" = "0.6.0"
boto3 = "*"
dask = ">=2021.09.1,<=2021.11.2"
distributed = ">=2021.09.1,<=2021.11.2"
matplotlib = "*"
numpy = ">=1.21,<1.24"
pandas = ">=1.2.5,<1.4.0 || >1.4.0,<1.6"
requests = "*"
scikit-learn = ">=1.0.0,<1.2"
scipy = ">=1.5.4,<1.10.0"
tqdm = ">=4.38.0"
[package.extras]
all = ["hyperopt (>=0.2.7,<0.2.8)", "ray (>=2.0,<2.1)", "ray[tune] (>=2.0,<2.1)"]
ray = ["ray (>=2.0,<2.1)"]
raytune = ["hyperopt (>=0.2.7,<0.2.8)", "ray[tune] (>=2.0,<2.1)"]
tests = ["pytest", "pytest-mypy", "types-requests", "types-setuptools"]
[[package]]
name = "autogluon.features"
version = "0.6.0"
description = "AutoML for Image, Text, and Tabular Data"
category = "main"
optional = false
python-versions = ">=3.7, <3.10"
[package.dependencies]
"autogluon.common" = "0.6.0"
numpy = ">=1.21,<1.24"
pandas = ">=1.2.5,<1.4.0 || >1.4.0,<1.6"
psutil = ">=5.7.3,<6"
scikit-learn = ">=1.0.0,<1.2"
[[package]]
name = "autogluon.tabular"
version = "0.6.0"
description = "AutoML for Image, Text, and Tabular Data"
category = "main"
optional = false
python-versions = ">=3.7, <3.10"
[package.dependencies]
"autogluon.core" = "0.6.0"
"autogluon.features" = "0.6.0"
catboost = {version = ">=1.0,<1.2", optional = true, markers = "extra == \"all\""}
fastai = {version = ">=2.3.1,<2.8", optional = true, markers = "extra == \"all\""}
lightgbm = {version = ">=3.3,<3.4", optional = true, markers = "extra == \"all\""}
networkx = ">=2.3,<3.0"
numpy = ">=1.21,<1.24"
pandas = ">=1.2.5,<1.4.0 || >1.4.0,<1.6"
psutil = ">=5.7.3,<6"
scikit-learn = ">=1.0.0,<1.2"
scipy = ">=1.5.4,<1.10.0"
torch = {version = ">=1.0,<1.13", optional = true, markers = "extra == \"all\""}
xgboost = {version = ">=1.6,<1.8", optional = true, markers = "extra == \"all\""}
[package.extras]
all = ["catboost (>=1.0,<1.2)", "fastai (>=2.3.1,<2.8)", "lightgbm (>=3.3,<3.4)", "torch (>=1.0,<1.13)", "xgboost (>=1.6,<1.8)"]
catboost = ["catboost (>=1.0,<1.2)"]
fastai = ["fastai (>=2.3.1,<2.8)", "torch (>=1.0,<1.13)"]
imodels = ["imodels (>=1.3.0)"]
lightgbm = ["lightgbm (>=3.3,<3.4)"]
skex = ["scikit-learn-intelex (>=2021.5,<2021.6)"]
skl2onnx = ["skl2onnx (>=1.12.0,<1.13.0)"]
tests = ["imodels (>=1.3.0)", "skl2onnx (>=1.12.0,<1.13.0)", "vowpalwabbit (>=8.10,<8.11)"]
vowpalwabbit = ["vowpalwabbit (>=8.10,<8.11)"]
xgboost = ["xgboost (>=1.6,<1.8)"]
[[package]]
name = "Babel"
version = "2.11.0"
description = "Internationalization utilities"
category = "main"
optional = false
python-versions = ">=3.6"
[package.dependencies]
pytz = ">=2015.7"
[[package]]
name = "backcall"
version = "0.2.0"
description = "Specifications for callback functions passed in to an API"
category = "dev"
optional = false
python-versions = "*"
[[package]]
name = "backports.zoneinfo"
version = "0.2.1"
description = "Backport of the standard library zoneinfo module"
category = "dev"
optional = false
python-versions = ">=3.6"
[package.extras]
tzdata = ["tzdata"]
[[package]]
name = "beautifulsoup4"
version = "4.11.1"
description = "Screen-scraping library"
category = "dev"
optional = false
python-versions = ">=3.6.0"
[package.dependencies]
soupsieve = ">1.2"
[package.extras]
html5lib = ["html5lib"]
lxml = ["lxml"]
[[package]]
name = "black"
version = "22.10.0"
description = "The uncompromising code formatter."
category = "dev"
optional = false
python-versions = ">=3.7"
[package.dependencies]
click = ">=8.0.0"
ipython = {version = ">=7.8.0", optional = true, markers = "extra == \"jupyter\""}
mypy-extensions = ">=0.4.3"
pathspec = ">=0.9.0"
platformdirs = ">=2"
tokenize-rt = {version = ">=3.2.0", optional = true, markers = "extra == \"jupyter\""}
tomli = {version = ">=1.1.0", markers = "python_full_version < \"3.11.0a7\""}
typing-extensions = {version = ">=3.10.0.0", markers = "python_version < \"3.10\""}
[package.extras]
colorama = ["colorama (>=0.4.3)"]
d = ["aiohttp (>=3.7.4)"]
jupyter = ["ipython (>=7.8.0)", "tokenize-rt (>=3.2.0)"]
uvloop = ["uvloop (>=0.15.2)"]
[[package]]
name = "bleach"
version = "5.0.1"
description = "An easy safelist-based HTML-sanitizing tool."
category = "dev"
optional = false
python-versions = ">=3.7"
[package.dependencies]
six = ">=1.9.0"
webencodings = "*"
[package.extras]
css = ["tinycss2 (>=1.1.0,<1.2)"]
dev = ["Sphinx (==4.3.2)", "black (==22.3.0)", "build (==0.8.0)", "flake8 (==4.0.1)", "hashin (==0.17.0)", "mypy (==0.961)", "pip-tools (==6.6.2)", "pytest (==7.1.2)", "tox (==3.25.0)", "twine (==4.0.1)", "wheel (==0.37.1)"]
[[package]]
name = "blis"
version = "0.7.9"
description = "The Blis BLAS-like linear algebra library, as a self-contained C-extension."
category = "main"
optional = false
python-versions = "*"
[package.dependencies]
numpy = ">=1.15.0"
[[package]]
name = "boto3"
version = "1.26.17"
description = "The AWS SDK for Python"
category = "main"
optional = false
python-versions = ">= 3.7"
[package.dependencies]
botocore = ">=1.29.17,<1.30.0"
jmespath = ">=0.7.1,<2.0.0"
s3transfer = ">=0.6.0,<0.7.0"
[package.extras]
crt = ["botocore[crt] (>=1.21.0,<2.0a0)"]
[[package]]
name = "botocore"
version = "1.29.17"
description = "Low-level, data-driven core of boto 3."
category = "main"
optional = false
python-versions = ">= 3.7"
[package.dependencies]
jmespath = ">=0.7.1,<2.0.0"
python-dateutil = ">=2.1,<3.0.0"
urllib3 = ">=1.25.4,<1.27"
[package.extras]
crt = ["awscrt (==0.14.0)"]
[[package]]
name = "cachetools"
version = "5.2.0"
description = "Extensible memoizing collections and decorators"
category = "dev"
optional = false
python-versions = "~=3.7"
[[package]]
name = "catalogue"
version = "2.0.8"
description = "Super lightweight function registries for your library"
category = "main"
optional = false
python-versions = ">=3.6"
[[package]]
name = "catboost"
version = "1.1.1"
description = "Catboost Python Package"
category = "main"
optional = false
python-versions = "*"
[package.dependencies]
graphviz = "*"
matplotlib = "*"
numpy = ">=1.16.0"
pandas = ">=0.24.0"
plotly = "*"
scipy = "*"
six = "*"
[[package]]
name = "causal-learn"
version = "0.1.3.0"
description = "causal-learn Python Package"
category = "main"
optional = false
python-versions = ">=3.6"
[package.dependencies]
graphviz = "*"
matplotlib = "*"
networkx = "*"
numpy = "*"
pandas = "*"
pydot = "*"
scikit-learn = "*"
scipy = "*"
statsmodels = "*"
tqdm = "*"
[[package]]
name = "causalml"
version = "0.13.0"
description = "Python Package for Uplift Modeling and Causal Inference with Machine Learning Algorithms"
category = "main"
optional = true
python-versions = ">=3.7"
develop = false
[package.dependencies]
Cython = ">=0.28.0"
dill = "*"
forestci = "0.6"
graphviz = "*"
lightgbm = "*"
matplotlib = "*"
numpy = ">=1.18.5"
packaging = "*"
pandas = ">=0.24.1"
pathos = "0.2.9"
pip = ">=10.0"
pydotplus = "*"
pygam = "*"
pyro-ppl = "*"
scikit-learn = "<=1.0.2"
scipy = ">=1.4.1"
seaborn = "*"
setuptools = ">=41.0.0"
shap = "*"
statsmodels = ">=0.9.0"
torch = "*"
tqdm = "*"
xgboost = "*"
[package.extras]
tf = ["tensorflow (>=2.4.0)"]
[package.source]
type = "git"
url = "https://github.com/uber/causalml"
reference = "master"
resolved_reference = "7050c74c257254de3600f69d49bda84a3ac152e2"
[[package]]
name = "certifi"
version = "2022.9.24"
description = "Python package for providing Mozilla's CA Bundle."
category = "main"
optional = false
python-versions = ">=3.6"
[[package]]
name = "cffi"
version = "1.15.1"
description = "Foreign Function Interface for Python calling C code."
category = "dev"
optional = false
python-versions = "*"
[package.dependencies]
pycparser = "*"
[[package]]
name = "charset-normalizer"
version = "2.1.1"
description = "The Real First Universal Charset Detector. Open, modern and actively maintained alternative to Chardet."
category = "main"
optional = false
python-versions = ">=3.6.0"
[package.extras]
unicode_backport = ["unicodedata2"]
[[package]]
name = "click"
version = "8.1.3"
description = "Composable command line interface toolkit"
category = "main"
optional = false
python-versions = ">=3.7"
[package.dependencies]
colorama = {version = "*", markers = "platform_system == \"Windows\""}
[[package]]
name = "cloudpickle"
version = "2.2.0"
description = "Extended pickling support for Python objects"
category = "main"
optional = false
python-versions = ">=3.6"
[[package]]
name = "colorama"
version = "0.4.6"
description = "Cross-platform colored terminal text."
category = "main"
optional = false
python-versions = "!=3.0.*,!=3.1.*,!=3.2.*,!=3.3.*,!=3.4.*,!=3.5.*,!=3.6.*,>=2.7"
[[package]]
name = "comm"
version = "0.1.1"
description = "Jupyter Python Comm implementation, for usage in ipykernel, xeus-python etc."
category = "dev"
optional = false
python-versions = ">=3.6"
[package.dependencies]
traitlets = ">5.3"
[package.extras]
test = ["pytest"]
[[package]]
name = "confection"
version = "0.0.3"
description = "The sweetest config system for Python"
category = "main"
optional = false
python-versions = ">=3.6"
[package.dependencies]
pydantic = ">=1.7.4,<1.8 || >1.8,<1.8.1 || >1.8.1,<1.11.0"
srsly = ">=2.4.0,<3.0.0"
[[package]]
name = "contourpy"
version = "1.0.6"
description = "Python library for calculating contours of 2D quadrilateral grids"
category = "main"
optional = false
python-versions = ">=3.7"
[package.dependencies]
numpy = ">=1.16"
[package.extras]
bokeh = ["bokeh", "selenium"]
docs = ["docutils (<0.18)", "sphinx (<=5.2.0)", "sphinx-rtd-theme"]
test = ["Pillow", "flake8", "isort", "matplotlib", "pytest"]
test-minimal = ["pytest"]
test-no-codebase = ["Pillow", "matplotlib", "pytest"]
[[package]]
name = "coverage"
version = "6.5.0"
description = "Code coverage measurement for Python"
category = "dev"
optional = false
python-versions = ">=3.7"
[package.dependencies]
tomli = {version = "*", optional = true, markers = "python_full_version <= \"3.11.0a6\" and extra == \"toml\""}
[package.extras]
toml = ["tomli"]
[[package]]
name = "cycler"
version = "0.11.0"
description = "Composable style cycles"
category = "main"
optional = false
python-versions = ">=3.6"
[[package]]
name = "cymem"
version = "2.0.7"
description = "Manage calls to calloc/free through Cython"
category = "main"
optional = false
python-versions = "*"
[[package]]
name = "Cython"
version = "0.29.32"
description = "The Cython compiler for writing C extensions for the Python language."
category = "main"
optional = false
python-versions = ">=2.6, !=3.0.*, !=3.1.*, !=3.2.*"
[[package]]
name = "dask"
version = "2021.11.2"
description = "Parallel PyData with Task Scheduling"
category = "main"
optional = false
python-versions = ">=3.7"
[package.dependencies]
cloudpickle = ">=1.1.1"
fsspec = ">=0.6.0"
packaging = ">=20.0"
partd = ">=0.3.10"
pyyaml = "*"
toolz = ">=0.8.2"
[package.extras]
array = ["numpy (>=1.18)"]
complete = ["bokeh (>=1.0.0,!=2.0.0)", "distributed (==2021.11.2)", "jinja2", "numpy (>=1.18)", "pandas (>=1.0)"]
dataframe = ["numpy (>=1.18)", "pandas (>=1.0)"]
diagnostics = ["bokeh (>=1.0.0,!=2.0.0)", "jinja2"]
distributed = ["distributed (==2021.11.2)"]
test = ["pre-commit", "pytest", "pytest-rerunfailures", "pytest-xdist"]
[[package]]
name = "debugpy"
version = "1.6.3"
description = "An implementation of the Debug Adapter Protocol for Python"
category = "dev"
optional = false
python-versions = ">=3.7"
[[package]]
name = "decorator"
version = "5.1.1"
description = "Decorators for Humans"
category = "dev"
optional = false
python-versions = ">=3.5"
[[package]]
name = "defusedxml"
version = "0.7.1"
description = "XML bomb protection for Python stdlib modules"
category = "dev"
optional = false
python-versions = ">=2.7, !=3.0.*, !=3.1.*, !=3.2.*, !=3.3.*, !=3.4.*"
[[package]]
name = "dill"
version = "0.3.6"
description = "serialize all of python"
category = "main"
optional = true
python-versions = ">=3.7"
[package.extras]
graph = ["objgraph (>=1.7.2)"]
[[package]]
name = "distributed"
version = "2021.11.2"
description = "Distributed scheduler for Dask"
category = "main"
optional = false
python-versions = ">=3.7"
[package.dependencies]
click = ">=6.6"
cloudpickle = ">=1.5.0"
dask = "2021.11.2"
jinja2 = "*"
msgpack = ">=0.6.0"
psutil = ">=5.0"
pyyaml = "*"
setuptools = "*"
sortedcontainers = "<2.0.0 || >2.0.0,<2.0.1 || >2.0.1"
tblib = ">=1.6.0"
toolz = ">=0.8.2"
tornado = {version = ">=6.0.3", markers = "python_version >= \"3.8\""}
zict = ">=0.1.3"
[[package]]
name = "docutils"
version = "0.17.1"
description = "Docutils -- Python Documentation Utilities"
category = "main"
optional = false
python-versions = ">=2.7, !=3.0.*, !=3.1.*, !=3.2.*, !=3.3.*, !=3.4.*"
[[package]]
name = "econml"
version = "0.14.0"
description = "This package contains several methods for calculating Conditional Average Treatment Effects"
category = "main"
optional = false
python-versions = "*"
[package.dependencies]
joblib = ">=0.13.0"
lightgbm = "*"
numpy = "*"
pandas = "*"
scikit-learn = ">0.22.0,<1.2"
scipy = ">1.4.0"
shap = ">=0.38.1,<0.41.0"
sparse = "*"
statsmodels = ">=0.10"
[package.extras]
all = ["azure-cli", "dowhy (<0.9)", "keras (<2.4)", "matplotlib (<3.6.0)", "protobuf (<4)", "tensorflow (>1.10,<2.3)"]
automl = ["azure-cli"]
dowhy = ["dowhy (<0.9)"]
plt = ["graphviz", "matplotlib (<3.6.0)"]
tf = ["keras (<2.4)", "protobuf (<4)", "tensorflow (>1.10,<2.3)"]
[[package]]
name = "entrypoints"
version = "0.4"
description = "Discover and load entry points from installed packages."
category = "dev"
optional = false
python-versions = ">=3.6"
[[package]]
name = "exceptiongroup"
version = "1.0.4"
description = "Backport of PEP 654 (exception groups)"
category = "dev"
optional = false
python-versions = ">=3.7"
[package.extras]
test = ["pytest (>=6)"]
[[package]]
name = "executing"
version = "1.2.0"
description = "Get the currently executing AST node of a frame, and other information"
category = "dev"
optional = false
python-versions = "*"
[package.extras]
tests = ["asttokens", "littleutils", "pytest", "rich"]
[[package]]
name = "fastai"
version = "2.7.10"
description = "fastai simplifies training fast and accurate neural nets using modern best practices"
category = "main"
optional = false
python-versions = ">=3.7"
[package.dependencies]
fastcore = ">=1.4.5,<1.6"
fastdownload = ">=0.0.5,<2"
fastprogress = ">=0.2.4"
matplotlib = "*"
packaging = "*"
pandas = "*"
pillow = ">6.0.0"
pip = "*"
pyyaml = "*"
requests = "*"
scikit-learn = "*"
scipy = "*"
spacy = "<4"
torch = ">=1.7,<1.14"
torchvision = ">=0.8.2"
[package.extras]
dev = ["accelerate (>=0.10.0)", "albumentations", "captum (>=0.3)", "catalyst", "comet-ml", "flask", "flask-compress", "ipywidgets", "kornia", "neptune-client", "ninja", "opencv-python", "pyarrow", "pydicom", "pytorch-ignite", "pytorch-lightning", "scikit-image", "sentencepiece", "tensorboard", "timm (>=0.6.2.dev)", "transformers", "wandb"]
[[package]]
name = "fastcore"
version = "1.5.27"
description = "Python supercharged for fastai development"
category = "main"
optional = false
python-versions = ">=3.7"
[package.dependencies]
packaging = "*"
pip = "*"
[package.extras]
dev = ["jupyterlab", "matplotlib", "nbdev (>=0.2.39)", "numpy", "pandas", "pillow", "torch"]
[[package]]
name = "fastdownload"
version = "0.0.7"
description = "A general purpose data downloading library."
category = "main"
optional = false
python-versions = ">=3.6"
[package.dependencies]
fastcore = ">=1.3.26"
fastprogress = "*"
[[package]]
name = "fastjsonschema"
version = "2.16.2"
description = "Fastest Python implementation of JSON schema"
category = "dev"
optional = false
python-versions = "*"
[package.extras]
devel = ["colorama", "json-spec", "jsonschema", "pylint", "pytest", "pytest-benchmark", "pytest-cache", "validictory"]
[[package]]
name = "fastprogress"
version = "1.0.3"
description = "A nested progress with plotting options for fastai"
category = "main"
optional = false
python-versions = ">=3.6"
[[package]]
name = "flake8"
version = "4.0.1"
description = "the modular source code checker: pep8 pyflakes and co"
category = "dev"
optional = false
python-versions = ">=3.6"
[package.dependencies]
mccabe = ">=0.6.0,<0.7.0"
pycodestyle = ">=2.8.0,<2.9.0"
pyflakes = ">=2.4.0,<2.5.0"
[[package]]
name = "flaky"
version = "3.7.0"
description = "Plugin for nose or pytest that automatically reruns flaky tests."
category = "dev"
optional = false
python-versions = ">=2.7, !=3.0.*, !=3.1.*, !=3.2.*, !=3.3.*"
[[package]]
name = "flatbuffers"
version = "22.11.23"
description = "The FlatBuffers serialization format for Python"
category = "dev"
optional = false
python-versions = "*"
[[package]]
name = "fonttools"
version = "4.38.0"
description = "Tools to manipulate font files"
category = "main"
optional = false
python-versions = ">=3.7"
[package.extras]
all = ["brotli (>=1.0.1)", "brotlicffi (>=0.8.0)", "fs (>=2.2.0,<3)", "lxml (>=4.0,<5)", "lz4 (>=1.7.4.2)", "matplotlib", "munkres", "scipy", "skia-pathops (>=0.5.0)", "sympy", "uharfbuzz (>=0.23.0)", "unicodedata2 (>=14.0.0)", "xattr", "zopfli (>=0.1.4)"]
graphite = ["lz4 (>=1.7.4.2)"]
interpolatable = ["munkres", "scipy"]
lxml = ["lxml (>=4.0,<5)"]
pathops = ["skia-pathops (>=0.5.0)"]
plot = ["matplotlib"]
repacker = ["uharfbuzz (>=0.23.0)"]
symfont = ["sympy"]
type1 = ["xattr"]
ufo = ["fs (>=2.2.0,<3)"]
unicode = ["unicodedata2 (>=14.0.0)"]
woff = ["brotli (>=1.0.1)", "brotlicffi (>=0.8.0)", "zopfli (>=0.1.4)"]
[[package]]
name = "forestci"
version = "0.6"
description = "forestci: confidence intervals for scikit-learn forest algorithms"
category = "main"
optional = true
python-versions = "*"
[package.dependencies]
numpy = ">=1.20"
scikit-learn = ">=0.23.1"
[[package]]
name = "fsspec"
version = "2022.11.0"
description = "File-system specification"
category = "main"
optional = false
python-versions = ">=3.7"
[package.extras]
abfs = ["adlfs"]
adl = ["adlfs"]
arrow = ["pyarrow (>=1)"]
dask = ["dask", "distributed"]
dropbox = ["dropbox", "dropboxdrivefs", "requests"]
entrypoints = ["importlib-metadata"]
fuse = ["fusepy"]
gcs = ["gcsfs"]
git = ["pygit2"]
github = ["requests"]
gs = ["gcsfs"]
gui = ["panel"]
hdfs = ["pyarrow (>=1)"]
http = ["aiohttp (!=4.0.0a0,!=4.0.0a1)", "requests"]
libarchive = ["libarchive-c"]
oci = ["ocifs"]
s3 = ["s3fs"]
sftp = ["paramiko"]
smb = ["smbprotocol"]
ssh = ["paramiko"]
tqdm = ["tqdm"]
[[package]]
name = "future"
version = "0.18.2"
description = "Clean single-source support for Python 3 and 2"
category = "main"
optional = true
python-versions = ">=2.6, !=3.0.*, !=3.1.*, !=3.2.*"
[[package]]
name = "gast"
version = "0.4.0"
description = "Python AST that abstracts the underlying Python version"
category = "dev"
optional = false
python-versions = ">=2.7, !=3.0.*, !=3.1.*, !=3.2.*, !=3.3.*"
[[package]]
name = "google-auth"
version = "2.14.1"
description = "Google Authentication Library"
category = "dev"
optional = false
python-versions = ">=2.7,!=3.0.*,!=3.1.*,!=3.2.*,!=3.3.*,!=3.4.*,!=3.5.*"
[package.dependencies]
cachetools = ">=2.0.0,<6.0"
pyasn1-modules = ">=0.2.1"
rsa = {version = ">=3.1.4,<5", markers = "python_version >= \"3.6\""}
six = ">=1.9.0"
[package.extras]
aiohttp = ["aiohttp (>=3.6.2,<4.0.0dev)", "requests (>=2.20.0,<3.0.0dev)"]
enterprise_cert = ["cryptography (==36.0.2)", "pyopenssl (==22.0.0)"]
pyopenssl = ["cryptography (>=38.0.3)", "pyopenssl (>=20.0.0)"]
reauth = ["pyu2f (>=0.1.5)"]
[[package]]
name = "google-auth-oauthlib"
version = "0.4.6"
description = "Google Authentication Library"
category = "dev"
optional = false
python-versions = ">=3.6"
[package.dependencies]
google-auth = ">=1.0.0"
requests-oauthlib = ">=0.7.0"
[package.extras]
tool = ["click (>=6.0.0)"]
[[package]]
name = "google-pasta"
version = "0.2.0"
description = "pasta is an AST-based Python refactoring library"
category = "dev"
optional = false
python-versions = "*"
[package.dependencies]
six = "*"
[[package]]
name = "graphviz"
version = "0.20.1"
description = "Simple Python interface for Graphviz"
category = "main"
optional = false
python-versions = ">=3.7"
[package.extras]
dev = ["flake8", "pep8-naming", "tox (>=3)", "twine", "wheel"]
docs = ["sphinx (>=5)", "sphinx-autodoc-typehints", "sphinx-rtd-theme"]
test = ["coverage", "mock (>=4)", "pytest (>=7)", "pytest-cov", "pytest-mock (>=3)"]
[[package]]
name = "grpcio"
version = "1.50.0"
description = "HTTP/2-based RPC framework"
category = "dev"
optional = false
python-versions = ">=3.7"
[package.dependencies]
six = ">=1.5.2"
[package.extras]
protobuf = ["grpcio-tools (>=1.50.0)"]
[[package]]
name = "h5py"
version = "3.7.0"
description = "Read and write HDF5 files from Python"
category = "dev"
optional = false
python-versions = ">=3.7"
[package.dependencies]
numpy = ">=1.14.5"
[[package]]
name = "HeapDict"
version = "1.0.1"
description = "a heap with decrease-key and increase-key operations"
category = "main"
optional = false
python-versions = "*"
[[package]]
name = "idna"
version = "3.4"
description = "Internationalized Domain Names in Applications (IDNA)"
category = "main"
optional = false
python-versions = ">=3.5"
[[package]]
name = "imagesize"
version = "1.4.1"
description = "Getting image size from png/jpeg/jpeg2000/gif file"
category = "main"
optional = false
python-versions = ">=2.7, !=3.0.*, !=3.1.*, !=3.2.*, !=3.3.*"
[[package]]
name = "importlib-metadata"
version = "5.1.0"
description = "Read metadata from Python packages"
category = "main"
optional = false
python-versions = ">=3.7"
[package.dependencies]
zipp = ">=0.5"
[package.extras]
docs = ["furo", "jaraco.packaging (>=9)", "jaraco.tidelift (>=1.4)", "rst.linker (>=1.9)", "sphinx (>=3.5)"]
perf = ["ipython"]
testing = ["flake8 (<5)", "flufl.flake8", "importlib-resources (>=1.3)", "packaging", "pyfakefs", "pytest (>=6)", "pytest-black (>=0.3.7)", "pytest-checkdocs (>=2.4)", "pytest-cov", "pytest-enabler (>=1.3)", "pytest-flake8", "pytest-mypy (>=0.9.1)", "pytest-perf (>=0.9.2)"]
[[package]]
name = "importlib-resources"
version = "5.10.0"
description = "Read resources from Python packages"
category = "dev"
optional = false
python-versions = ">=3.7"
[package.dependencies]
zipp = {version = ">=3.1.0", markers = "python_version < \"3.10\""}
[package.extras]
docs = ["furo", "jaraco.packaging (>=9)", "jaraco.tidelift (>=1.4)", "rst.linker (>=1.9)", "sphinx (>=3.5)"]
testing = ["flake8 (<5)", "pytest (>=6)", "pytest-black (>=0.3.7)", "pytest-checkdocs (>=2.4)", "pytest-cov", "pytest-enabler (>=1.3)", "pytest-flake8", "pytest-mypy (>=0.9.1)"]
[[package]]
name = "iniconfig"
version = "1.1.1"
description = "iniconfig: brain-dead simple config-ini parsing"
category = "dev"
optional = false
python-versions = "*"
[[package]]
name = "ipykernel"
version = "6.18.1"
description = "IPython Kernel for Jupyter"
category = "dev"
optional = false
python-versions = ">=3.8"
[package.dependencies]
appnope = {version = "*", markers = "platform_system == \"Darwin\""}
comm = ">=0.1"
debugpy = ">=1.0"
ipython = ">=7.23.1"
jupyter-client = ">=6.1.12"
matplotlib-inline = ">=0.1"
nest-asyncio = "*"
packaging = "*"
psutil = "*"
pyzmq = ">=17"
tornado = ">=6.1"
traitlets = ">=5.1.0"
[package.extras]
cov = ["coverage[toml]", "curio", "matplotlib", "pytest-cov", "trio"]
docs = ["myst-parser", "pydata-sphinx-theme", "sphinx", "sphinxcontrib-github-alt"]
test = ["flaky", "ipyparallel", "pre-commit", "pytest (>=7.0)", "pytest-asyncio", "pytest-cov", "pytest-timeout"]
[[package]]
name = "ipython"
version = "8.7.0"
description = "IPython: Productive Interactive Computing"
category = "dev"
optional = false
python-versions = ">=3.8"
[package.dependencies]
appnope = {version = "*", markers = "sys_platform == \"darwin\""}
backcall = "*"
colorama = {version = "*", markers = "sys_platform == \"win32\""}
decorator = "*"
jedi = ">=0.16"
matplotlib-inline = "*"
pexpect = {version = ">4.3", markers = "sys_platform != \"win32\""}
pickleshare = "*"
prompt-toolkit = ">=3.0.11,<3.1.0"
pygments = ">=2.4.0"
stack-data = "*"
traitlets = ">=5"
[package.extras]
all = ["black", "curio", "docrepr", "ipykernel", "ipyparallel", "ipywidgets", "matplotlib", "matplotlib (!=3.2.0)", "nbconvert", "nbformat", "notebook", "numpy (>=1.20)", "pandas", "pytest (<7)", "pytest (<7.1)", "pytest-asyncio", "qtconsole", "setuptools (>=18.5)", "sphinx (>=1.3)", "sphinx-rtd-theme", "stack-data", "testpath", "trio", "typing-extensions"]
black = ["black"]
doc = ["docrepr", "ipykernel", "matplotlib", "pytest (<7)", "pytest (<7.1)", "pytest-asyncio", "setuptools (>=18.5)", "sphinx (>=1.3)", "sphinx-rtd-theme", "stack-data", "testpath", "typing-extensions"]
kernel = ["ipykernel"]
nbconvert = ["nbconvert"]
nbformat = ["nbformat"]
notebook = ["ipywidgets", "notebook"]
parallel = ["ipyparallel"]
qtconsole = ["qtconsole"]
test = ["pytest (<7.1)", "pytest-asyncio", "testpath"]
test_extra = ["curio", "matplotlib (!=3.2.0)", "nbformat", "numpy (>=1.20)", "pandas", "pytest (<7.1)", "pytest-asyncio", "testpath", "trio"]
[[package]]
name = "ipython_genutils"
version = "0.2.0"
description = "Vestigial utilities from IPython"
category = "dev"
optional = false
python-versions = "*"
[[package]]
name = "ipywidgets"
version = "8.0.2"
description = "Jupyter interactive widgets"
category = "dev"
optional = false
python-versions = ">=3.7"
[package.dependencies]
ipykernel = ">=4.5.1"
ipython = ">=6.1.0"
jupyterlab-widgets = ">=3.0,<4.0"
traitlets = ">=4.3.1"
widgetsnbextension = ">=4.0,<5.0"
[package.extras]
test = ["jsonschema", "pytest (>=3.6.0)", "pytest-cov", "pytz"]
[[package]]
name = "isort"
version = "5.10.1"
description = "A Python utility / library to sort Python imports."
category = "dev"
optional = false
python-versions = ">=3.6.1,<4.0"
[package.extras]
colors = ["colorama (>=0.4.3,<0.5.0)"]
pipfile_deprecated_finder = ["pipreqs", "requirementslib"]
plugins = ["setuptools"]
requirements_deprecated_finder = ["pip-api", "pipreqs"]
[[package]]
name = "jedi"
version = "0.18.2"
description = "An autocompletion tool for Python that can be used for text editors."
category = "dev"
optional = false
python-versions = ">=3.6"
[package.dependencies]
parso = ">=0.8.0,<0.9.0"
[package.extras]
docs = ["Jinja2 (==2.11.3)", "MarkupSafe (==1.1.1)", "Pygments (==2.8.1)", "alabaster (==0.7.12)", "babel (==2.9.1)", "chardet (==4.0.0)", "commonmark (==0.8.1)", "docutils (==0.17.1)", "future (==0.18.2)", "idna (==2.10)", "imagesize (==1.2.0)", "mock (==1.0.1)", "packaging (==20.9)", "pyparsing (==2.4.7)", "pytz (==2021.1)", "readthedocs-sphinx-ext (==2.1.4)", "recommonmark (==0.5.0)", "requests (==2.25.1)", "six (==1.15.0)", "snowballstemmer (==2.1.0)", "sphinx (==1.8.5)", "sphinx-rtd-theme (==0.4.3)", "sphinxcontrib-serializinghtml (==1.1.4)", "sphinxcontrib-websupport (==1.2.4)", "urllib3 (==1.26.4)"]
qa = ["flake8 (==3.8.3)", "mypy (==0.782)"]
testing = ["Django (<3.1)", "attrs", "colorama", "docopt", "pytest (<7.0.0)"]
[[package]]
name = "Jinja2"
version = "3.1.2"
description = "A very fast and expressive template engine."
category = "main"
optional = false
python-versions = ">=3.7"
[package.dependencies]
MarkupSafe = ">=2.0"
[package.extras]
i18n = ["Babel (>=2.7)"]
[[package]]
name = "jmespath"
version = "1.0.1"
description = "JSON Matching Expressions"
category = "main"
optional = false
python-versions = ">=3.7"
[[package]]
name = "joblib"
version = "1.2.0"
description = "Lightweight pipelining with Python functions"
category = "main"
optional = false
python-versions = ">=3.7"
[[package]]
name = "jsonschema"
version = "4.17.1"
description = "An implementation of JSON Schema validation for Python"
category = "dev"
optional = false
python-versions = ">=3.7"
[package.dependencies]
attrs = ">=17.4.0"
importlib-resources = {version = ">=1.4.0", markers = "python_version < \"3.9\""}
pkgutil-resolve-name = {version = ">=1.3.10", markers = "python_version < \"3.9\""}
pyrsistent = ">=0.14.0,<0.17.0 || >0.17.0,<0.17.1 || >0.17.1,<0.17.2 || >0.17.2"
[package.extras]
format = ["fqdn", "idna", "isoduration", "jsonpointer (>1.13)", "rfc3339-validator", "rfc3987", "uri-template", "webcolors (>=1.11)"]
format-nongpl = ["fqdn", "idna", "isoduration", "jsonpointer (>1.13)", "rfc3339-validator", "rfc3986-validator (>0.1.0)", "uri-template", "webcolors (>=1.11)"]
[[package]]
name = "jupyter"
version = "1.0.0"
description = "Jupyter metapackage. Install all the Jupyter components in one go."
category = "dev"
optional = false
python-versions = "*"
[package.dependencies]
ipykernel = "*"
ipywidgets = "*"
jupyter-console = "*"
nbconvert = "*"
notebook = "*"
qtconsole = "*"
[[package]]
name = "jupyter-client"
version = "7.4.7"
description = "Jupyter protocol implementation and client libraries"
category = "dev"
optional = false
python-versions = ">=3.7"
[package.dependencies]
entrypoints = "*"
jupyter-core = ">=4.9.2"
nest-asyncio = ">=1.5.4"
python-dateutil = ">=2.8.2"
pyzmq = ">=23.0"
tornado = ">=6.2"
traitlets = "*"
[package.extras]
doc = ["ipykernel", "myst-parser", "sphinx (>=1.3.6)", "sphinx-rtd-theme", "sphinxcontrib-github-alt"]
test = ["codecov", "coverage", "ipykernel (>=6.12)", "ipython", "mypy", "pre-commit", "pytest", "pytest-asyncio (>=0.18)", "pytest-cov", "pytest-timeout"]
[[package]]
name = "jupyter-console"
version = "6.4.4"
description = "Jupyter terminal console"
category = "dev"
optional = false
python-versions = ">=3.7"
[package.dependencies]
ipykernel = "*"
ipython = "*"
jupyter-client = ">=7.0.0"
prompt-toolkit = ">=2.0.0,<3.0.0 || >3.0.0,<3.0.1 || >3.0.1,<3.1.0"
pygments = "*"
[package.extras]
test = ["pexpect"]
[[package]]
name = "jupyter-core"
version = "5.1.0"
description = "Jupyter core package. A base package on which Jupyter projects rely."
category = "dev"
optional = false
python-versions = ">=3.8"
[package.dependencies]
platformdirs = ">=2.5"
pywin32 = {version = ">=1.0", markers = "sys_platform == \"win32\" and platform_python_implementation != \"PyPy\""}
traitlets = ">=5.3"
[package.extras]
docs = ["myst-parser", "sphinxcontrib-github-alt", "traitlets"]
test = ["ipykernel", "pre-commit", "pytest", "pytest-cov", "pytest-timeout"]
[[package]]
name = "jupyter-server"
version = "1.23.3"
description = "The backend—i.e. core services, APIs, and REST endpoints—to Jupyter web applications."
category = "dev"
optional = false
python-versions = ">=3.7"
[package.dependencies]
anyio = ">=3.1.0,<4"
argon2-cffi = "*"
jinja2 = "*"
jupyter-client = ">=6.1.12"
jupyter-core = ">=4.7.0"
nbconvert = ">=6.4.4"
nbformat = ">=5.2.0"
packaging = "*"
prometheus-client = "*"
pywinpty = {version = "*", markers = "os_name == \"nt\""}
pyzmq = ">=17"
Send2Trash = "*"
terminado = ">=0.8.3"
tornado = ">=6.1.0"
traitlets = ">=5.1"
websocket-client = "*"
[package.extras]
test = ["coverage", "ipykernel", "pre-commit", "pytest (>=7.0)", "pytest-console-scripts", "pytest-cov", "pytest-mock", "pytest-timeout", "pytest-tornasync", "requests"]
[[package]]
name = "jupyterlab-pygments"
version = "0.2.2"
description = "Pygments theme using JupyterLab CSS variables"
category = "dev"
optional = false
python-versions = ">=3.7"
[[package]]
name = "jupyterlab-widgets"
version = "3.0.3"
description = "Jupyter interactive widgets for JupyterLab"
category = "dev"
optional = false
python-versions = ">=3.7"
[[package]]
name = "keras"
version = "2.11.0"
description = "Deep learning for humans."
category = "dev"
optional = false
python-versions = ">=3.7"
[[package]]
name = "kiwisolver"
version = "1.4.4"
description = "A fast implementation of the Cassowary constraint solver"
category = "main"
optional = false
python-versions = ">=3.7"
[[package]]
name = "langcodes"
version = "3.3.0"
description = "Tools for labeling human languages with IETF language tags"
category = "main"
optional = false
python-versions = ">=3.6"
[package.extras]
data = ["language-data (>=1.1,<2.0)"]
[[package]]
name = "libclang"
version = "14.0.6"
description = "Clang Python Bindings, mirrored from the official LLVM repo: https://github.com/llvm/llvm-project/tree/main/clang/bindings/python, to make the installation process easier."
category = "dev"
optional = false
python-versions = "*"
[[package]]
name = "lightgbm"
version = "3.3.3"
description = "LightGBM Python Package"
category = "main"
optional = false
python-versions = "*"
[package.dependencies]
numpy = "*"
scikit-learn = "!=0.22.0"
scipy = "*"
wheel = "*"
[package.extras]
dask = ["dask[array] (>=2.0.0)", "dask[dataframe] (>=2.0.0)", "dask[distributed] (>=2.0.0)", "pandas"]
[[package]]
name = "llvmlite"
version = "0.36.0"
description = "lightweight wrapper around basic LLVM functionality"
category = "main"
optional = false
python-versions = ">=3.6,<3.10"
[[package]]
name = "locket"
version = "1.0.0"
description = "File-based locks for Python on Linux and Windows"
category = "main"
optional = false
python-versions = ">=2.7, !=3.0.*, !=3.1.*, !=3.2.*, !=3.3.*"
[[package]]
name = "Markdown"
version = "3.4.1"
description = "Python implementation of Markdown."
category = "dev"
optional = false
python-versions = ">=3.7"
[package.dependencies]
importlib-metadata = {version = ">=4.4", markers = "python_version < \"3.10\""}
[package.extras]
testing = ["coverage", "pyyaml"]
[[package]]
name = "MarkupSafe"
version = "2.1.1"
description = "Safely add untrusted strings to HTML/XML markup."
category = "main"
optional = false
python-versions = ">=3.7"
[[package]]
name = "matplotlib"
version = "3.6.2"
description = "Python plotting package"
category = "main"
optional = false
python-versions = ">=3.8"
[package.dependencies]
contourpy = ">=1.0.1"
cycler = ">=0.10"
fonttools = ">=4.22.0"
kiwisolver = ">=1.0.1"
numpy = ">=1.19"
packaging = ">=20.0"
pillow = ">=6.2.0"
pyparsing = ">=2.2.1"
python-dateutil = ">=2.7"
setuptools_scm = ">=7"
[[package]]
name = "matplotlib-inline"
version = "0.1.6"
description = "Inline Matplotlib backend for Jupyter"
category = "dev"
optional = false
python-versions = ">=3.5"
[package.dependencies]
traitlets = "*"
[[package]]
name = "mccabe"
version = "0.6.1"
description = "McCabe checker, plugin for flake8"
category = "dev"
optional = false
python-versions = "*"
[[package]]
name = "mistune"
version = "2.0.4"
description = "A sane Markdown parser with useful plugins and renderers"
category = "dev"
optional = false
python-versions = "*"
[[package]]
name = "mpmath"
version = "1.2.1"
description = "Python library for arbitrary-precision floating-point arithmetic"
category = "main"
optional = false
python-versions = "*"
[package.extras]
develop = ["codecov", "pycodestyle", "pytest (>=4.6)", "pytest-cov", "wheel"]
tests = ["pytest (>=4.6)"]
[[package]]
name = "msgpack"
version = "1.0.4"
description = "MessagePack serializer"
category = "main"
optional = false
python-versions = "*"
[[package]]
name = "multiprocess"
version = "0.70.14"
description = "better multiprocessing and multithreading in python"
category = "main"
optional = true
python-versions = ">=3.7"
[package.dependencies]
dill = ">=0.3.6"
[[package]]
name = "murmurhash"
version = "1.0.9"
description = "Cython bindings for MurmurHash"
category = "main"
optional = false
python-versions = ">=3.6"
[[package]]
name = "mypy"
version = "0.971"
description = "Optional static typing for Python"
category = "dev"
optional = false
python-versions = ">=3.6"
[package.dependencies]
mypy-extensions = ">=0.4.3"
tomli = {version = ">=1.1.0", markers = "python_version < \"3.11\""}
typing-extensions = ">=3.10"
[package.extras]
dmypy = ["psutil (>=4.0)"]
python2 = ["typed-ast (>=1.4.0,<2)"]
reports = ["lxml"]
[[package]]
name = "mypy-extensions"
version = "0.4.3"
description = "Experimental type system extensions for programs checked with the mypy typechecker."
category = "dev"
optional = false
python-versions = "*"
[[package]]
name = "nbclassic"
version = "0.4.8"
description = "A web-based notebook environment for interactive computing"
category = "dev"
optional = false
python-versions = ">=3.7"
[package.dependencies]
argon2-cffi = "*"
ipykernel = "*"
ipython-genutils = "*"
jinja2 = "*"
jupyter-client = ">=6.1.1"
jupyter-core = ">=4.6.1"
jupyter-server = ">=1.8"
nbconvert = ">=5"
nbformat = "*"
nest-asyncio = ">=1.5"
notebook-shim = ">=0.1.0"
prometheus-client = "*"
pyzmq = ">=17"
Send2Trash = ">=1.8.0"
terminado = ">=0.8.3"
tornado = ">=6.1"
traitlets = ">=4.2.1"
[package.extras]
docs = ["myst-parser", "nbsphinx", "sphinx", "sphinx-rtd-theme", "sphinxcontrib-github-alt"]
json-logging = ["json-logging"]
test = ["coverage", "nbval", "pytest", "pytest-cov", "pytest-playwright", "pytest-tornasync", "requests", "requests-unixsocket", "testpath"]
[[package]]
name = "nbclient"
version = "0.7.0"
description = "A client library for executing notebooks. Formerly nbconvert's ExecutePreprocessor."
category = "dev"
optional = false
python-versions = ">=3.7.0"
[package.dependencies]
jupyter-client = ">=6.1.5"
nbformat = ">=5.0"
nest-asyncio = "*"
traitlets = ">=5.2.2"
[package.extras]
sphinx = ["Sphinx (>=1.7)", "autodoc-traits", "mock", "moto", "myst-parser", "sphinx-book-theme"]
test = ["black", "check-manifest", "flake8", "ipykernel", "ipython", "ipywidgets", "mypy", "nbconvert", "pip (>=18.1)", "pre-commit", "pytest (>=4.1)", "pytest-asyncio", "pytest-cov (>=2.6.1)", "setuptools (>=60.0)", "testpath", "twine (>=1.11.0)", "xmltodict"]
[[package]]
name = "nbconvert"
version = "7.0.0rc3"
description = "Converting Jupyter Notebooks"
category = "dev"
optional = false
python-versions = ">=3.7"
[package.dependencies]
beautifulsoup4 = "*"
bleach = "*"
defusedxml = "*"
importlib-metadata = {version = ">=3.6", markers = "python_version < \"3.10\""}
jinja2 = ">=3.0"
jupyter-core = ">=4.7"
jupyterlab-pygments = "*"
markupsafe = ">=2.0"
mistune = ">=2.0.2,<3"
nbclient = ">=0.5.0"
nbformat = ">=5.1"
packaging = "*"
pandocfilters = ">=1.4.1"
pygments = ">=2.4.1"
tinycss2 = "*"
traitlets = ">=5.0"
[package.extras]
all = ["ipykernel", "ipython", "ipywidgets (>=7)", "nbsphinx (>=0.2.12)", "pre-commit", "pyppeteer (>=1,<1.1)", "pytest", "pytest-cov", "pytest-dependency", "sphinx (>=1.5.1)", "sphinx-rtd-theme", "tornado (>=6.1)"]
docs = ["ipython", "nbsphinx (>=0.2.12)", "sphinx (>=1.5.1)", "sphinx-rtd-theme"]
serve = ["tornado (>=6.1)"]
test = ["ipykernel", "ipywidgets (>=7)", "pre-commit", "pyppeteer (>=1,<1.1)", "pytest", "pytest-cov", "pytest-dependency"]
webpdf = ["pyppeteer (>=1,<1.1)"]
[[package]]
name = "nbformat"
version = "5.7.0"
description = "The Jupyter Notebook format"
category = "dev"
optional = false
python-versions = ">=3.7"
[package.dependencies]
fastjsonschema = "*"
jsonschema = ">=2.6"
jupyter-core = "*"
traitlets = ">=5.1"
[package.extras]
test = ["check-manifest", "pep440", "pre-commit", "pytest", "testpath"]
[[package]]
name = "nbsphinx"
version = "0.8.10"
description = "Jupyter Notebook Tools for Sphinx"
category = "dev"
optional = false
python-versions = ">=3.6"
[package.dependencies]
docutils = "*"
jinja2 = "*"
nbconvert = "!=5.4"
nbformat = "*"
sphinx = ">=1.8"
traitlets = ">=5"
[[package]]
name = "nest-asyncio"
version = "1.5.6"
description = "Patch asyncio to allow nested event loops"
category = "dev"
optional = false
python-versions = ">=3.5"
[[package]]
name = "networkx"
version = "2.8.8"
description = "Python package for creating and manipulating graphs and networks"
category = "main"
optional = false
python-versions = ">=3.8"
[package.extras]
default = ["matplotlib (>=3.4)", "numpy (>=1.19)", "pandas (>=1.3)", "scipy (>=1.8)"]
developer = ["mypy (>=0.982)", "pre-commit (>=2.20)"]
doc = ["nb2plots (>=0.6)", "numpydoc (>=1.5)", "pillow (>=9.2)", "pydata-sphinx-theme (>=0.11)", "sphinx (>=5.2)", "sphinx-gallery (>=0.11)", "texext (>=0.6.6)"]
extra = ["lxml (>=4.6)", "pydot (>=1.4.2)", "pygraphviz (>=1.9)", "sympy (>=1.10)"]
test = ["codecov (>=2.1)", "pytest (>=7.2)", "pytest-cov (>=4.0)"]
[[package]]
name = "notebook"
version = "6.5.2"
description = "A web-based notebook environment for interactive computing"
category = "dev"
optional = false
python-versions = ">=3.7"
[package.dependencies]
argon2-cffi = "*"
ipykernel = "*"
ipython-genutils = "*"
jinja2 = "*"
jupyter-client = ">=5.3.4"
jupyter-core = ">=4.6.1"
nbclassic = ">=0.4.7"
nbconvert = ">=5"
nbformat = "*"
nest-asyncio = ">=1.5"
prometheus-client = "*"
pyzmq = ">=17"
Send2Trash = ">=1.8.0"
terminado = ">=0.8.3"
tornado = ">=6.1"
traitlets = ">=4.2.1"
[package.extras]
docs = ["myst-parser", "nbsphinx", "sphinx", "sphinx-rtd-theme", "sphinxcontrib-github-alt"]
json-logging = ["json-logging"]
test = ["coverage", "nbval", "pytest", "pytest-cov", "requests", "requests-unixsocket", "selenium (==4.1.5)", "testpath"]
[[package]]
name = "notebook-shim"
version = "0.2.2"
description = "A shim layer for notebook traits and config"
category = "dev"
optional = false
python-versions = ">=3.7"
[package.dependencies]
jupyter-server = ">=1.8,<3"
[package.extras]
test = ["pytest", "pytest-console-scripts", "pytest-tornasync"]
[[package]]
name = "numba"
version = "0.53.1"
description = "compiling Python code using LLVM"
category = "main"
optional = false
python-versions = ">=3.6,<3.10"
[package.dependencies]
llvmlite = ">=0.36.0rc1,<0.37"
numpy = ">=1.15"
setuptools = "*"
[[package]]
name = "numpy"
version = "1.23.5"
description = "NumPy is the fundamental package for array computing with Python."
category = "main"
optional = false
python-versions = ">=3.8"
[[package]]
name = "oauthlib"
version = "3.2.2"
description = "A generic, spec-compliant, thorough implementation of the OAuth request-signing logic"
category = "dev"
optional = false
python-versions = ">=3.6"
[package.extras]
rsa = ["cryptography (>=3.0.0)"]
signals = ["blinker (>=1.4.0)"]
signedtoken = ["cryptography (>=3.0.0)", "pyjwt (>=2.0.0,<3)"]
[[package]]
name = "opt-einsum"
version = "3.3.0"
description = "Optimizing numpys einsum function"
category = "main"
optional = false
python-versions = ">=3.5"
[package.dependencies]
numpy = ">=1.7"
[package.extras]
docs = ["numpydoc", "sphinx (==1.2.3)", "sphinx-rtd-theme", "sphinxcontrib-napoleon"]
tests = ["pytest", "pytest-cov", "pytest-pep8"]
[[package]]
name = "packaging"
version = "21.3"
description = "Core utilities for Python packages"
category = "main"
optional = false
python-versions = ">=3.6"
[package.dependencies]
pyparsing = ">=2.0.2,<3.0.5 || >3.0.5"
[[package]]
name = "pandas"
version = "1.5.2"
description = "Powerful data structures for data analysis, time series, and statistics"
category = "main"
optional = false
python-versions = ">=3.8"
[package.dependencies]
numpy = {version = ">=1.20.3", markers = "python_version < \"3.10\""}
python-dateutil = ">=2.8.1"
pytz = ">=2020.1"
[package.extras]
test = ["hypothesis (>=5.5.3)", "pytest (>=6.0)", "pytest-xdist (>=1.31)"]
[[package]]
name = "pandocfilters"
version = "1.5.0"
description = "Utilities for writing pandoc filters in python"
category = "dev"
optional = false
python-versions = ">=2.7, !=3.0.*, !=3.1.*, !=3.2.*, !=3.3.*"
[[package]]
name = "parso"
version = "0.8.3"
description = "A Python Parser"
category = "dev"
optional = false
python-versions = ">=3.6"
[package.extras]
qa = ["flake8 (==3.8.3)", "mypy (==0.782)"]
testing = ["docopt", "pytest (<6.0.0)"]
[[package]]
name = "partd"
version = "1.3.0"
description = "Appendable key-value storage"
category = "main"
optional = false
python-versions = ">=3.7"
[package.dependencies]
locket = "*"
toolz = "*"
[package.extras]
complete = ["blosc", "numpy (>=1.9.0)", "pandas (>=0.19.0)", "pyzmq"]
[[package]]
name = "pastel"
version = "0.2.1"
description = "Bring colors to your terminal."
category = "dev"
optional = false
python-versions = ">=2.7, !=3.0.*, !=3.1.*, !=3.2.*, !=3.3.*"
[[package]]
name = "pathos"
version = "0.2.9"
description = "parallel graph management and execution in heterogeneous computing"
category = "main"
optional = true
python-versions = ">=2.7, !=3.0.*, !=3.1.*, !=3.2.*, !=3.3.*, !=3.4.*, !=3.5.*, !=3.6.*"
[package.dependencies]
dill = ">=0.3.5.1"
multiprocess = ">=0.70.13"
pox = ">=0.3.1"
ppft = ">=1.7.6.5"
[[package]]
name = "pathspec"
version = "0.10.2"
description = "Utility library for gitignore style pattern matching of file paths."
category = "dev"
optional = false
python-versions = ">=3.7"
[[package]]
name = "pathy"
version = "0.10.0"
description = "pathlib.Path subclasses for local and cloud bucket storage"
category = "main"
optional = false
python-versions = ">= 3.6"
[package.dependencies]
smart-open = ">=5.2.1,<6.0.0"
typer = ">=0.3.0,<1.0.0"
[package.extras]
all = ["azure-storage-blob", "boto3", "google-cloud-storage (>=1.26.0,<2.0.0)", "mock", "pytest", "pytest-coverage", "typer-cli"]
azure = ["azure-storage-blob"]
gcs = ["google-cloud-storage (>=1.26.0,<2.0.0)"]
s3 = ["boto3"]
test = ["mock", "pytest", "pytest-coverage", "typer-cli"]
[[package]]
name = "patsy"
version = "0.5.3"
description = "A Python package for describing statistical models and for building design matrices."
category = "main"
optional = false
python-versions = "*"
[package.dependencies]
numpy = ">=1.4"
six = "*"
[package.extras]
test = ["pytest", "pytest-cov", "scipy"]
[[package]]
name = "pexpect"
version = "4.8.0"
description = "Pexpect allows easy control of interactive console applications."
category = "dev"
optional = false
python-versions = "*"
[package.dependencies]
ptyprocess = ">=0.5"
[[package]]
name = "pickleshare"
version = "0.7.5"
description = "Tiny 'shelve'-like database with concurrency support"
category = "dev"
optional = false
python-versions = "*"
[[package]]
name = "Pillow"
version = "9.3.0"
description = "Python Imaging Library (Fork)"
category = "main"
optional = false
python-versions = ">=3.7"
[package.extras]
docs = ["furo", "olefile", "sphinx (>=2.4)", "sphinx-copybutton", "sphinx-issues (>=3.0.1)", "sphinx-removed-in", "sphinxext-opengraph"]
tests = ["check-manifest", "coverage", "defusedxml", "markdown2", "olefile", "packaging", "pyroma", "pytest", "pytest-cov", "pytest-timeout"]
[[package]]
name = "pip"
version = "22.3.1"
description = "The PyPA recommended tool for installing Python packages."
category = "main"
optional = false
python-versions = ">=3.7"
[[package]]
name = "pkgutil_resolve_name"
version = "1.3.10"
description = "Resolve a name to an object."
category = "dev"
optional = false
python-versions = ">=3.6"
[[package]]
name = "platformdirs"
version = "2.5.4"
description = "A small Python package for determining appropriate platform-specific dirs, e.g. a \"user data dir\"."
category = "dev"
optional = false
python-versions = ">=3.7"
[package.extras]
docs = ["furo (>=2022.9.29)", "proselint (>=0.13)", "sphinx (>=5.3)", "sphinx-autodoc-typehints (>=1.19.4)"]
test = ["appdirs (==1.4.4)", "pytest (>=7.2)", "pytest-cov (>=4)", "pytest-mock (>=3.10)"]
[[package]]
name = "plotly"
version = "5.11.0"
description = "An open-source, interactive data visualization library for Python"
category = "main"
optional = false
python-versions = ">=3.6"
[package.dependencies]
tenacity = ">=6.2.0"
[[package]]
name = "pluggy"
version = "1.0.0"
description = "plugin and hook calling mechanisms for python"
category = "dev"
optional = false
python-versions = ">=3.6"
[package.extras]
dev = ["pre-commit", "tox"]
testing = ["pytest", "pytest-benchmark"]
[[package]]
name = "poethepoet"
version = "0.16.5"
description = "A task runner that works well with poetry."
category = "dev"
optional = false
python-versions = ">=3.7"
[package.dependencies]
pastel = ">=0.2.1,<0.3.0"
tomli = ">=1.2.2"
[package.extras]
poetry-plugin = ["poetry (>=1.0,<2.0)"]
[[package]]
name = "pox"
version = "0.3.2"
description = "utilities for filesystem exploration and automated builds"
category = "main"
optional = true
python-versions = ">=3.7"
[[package]]
name = "ppft"
version = "1.7.6.6"
description = "distributed and parallel python"
category = "main"
optional = true
python-versions = ">=3.7"
[package.extras]
dill = ["dill (>=0.3.6)"]
[[package]]
name = "preshed"
version = "3.0.8"
description = "Cython hash table that trusts the keys are pre-hashed"
category = "main"
optional = false
python-versions = ">=3.6"
[package.dependencies]
cymem = ">=2.0.2,<2.1.0"
murmurhash = ">=0.28.0,<1.1.0"
[[package]]
name = "progressbar2"
version = "4.2.0"
description = "A Python Progressbar library to provide visual (yet text based) progress to long running operations."
category = "main"
optional = true
python-versions = ">=3.7.0"
[package.dependencies]
python-utils = ">=3.0.0"
[package.extras]
docs = ["sphinx (>=1.8.5)"]
tests = ["flake8 (>=3.7.7)", "freezegun (>=0.3.11)", "pytest (>=4.6.9)", "pytest-cov (>=2.6.1)", "pytest-mypy", "sphinx (>=1.8.5)"]
[[package]]
name = "prometheus-client"
version = "0.15.0"
description = "Python client for the Prometheus monitoring system."
category = "dev"
optional = false
python-versions = ">=3.6"
[package.extras]
twisted = ["twisted"]
[[package]]
name = "prompt-toolkit"
version = "3.0.33"
description = "Library for building powerful interactive command lines in Python"
category = "dev"
optional = false
python-versions = ">=3.6.2"
[package.dependencies]
wcwidth = "*"
[[package]]
name = "protobuf"
version = "3.19.6"
description = "Protocol Buffers"
category = "dev"
optional = false
python-versions = ">=3.5"
[[package]]
name = "psutil"
version = "5.9.4"
description = "Cross-platform lib for process and system monitoring in Python."
category = "main"
optional = false
python-versions = ">=2.7, !=3.0.*, !=3.1.*, !=3.2.*, !=3.3.*"
[package.extras]
test = ["enum34", "ipaddress", "mock", "pywin32", "wmi"]
[[package]]
name = "ptyprocess"
version = "0.7.0"
description = "Run a subprocess in a pseudo terminal"
category = "dev"
optional = false
python-versions = "*"
[[package]]
name = "pure-eval"
version = "0.2.2"
description = "Safely evaluate AST nodes without side effects"
category = "dev"
optional = false
python-versions = "*"
[package.extras]
tests = ["pytest"]
[[package]]
name = "py"
version = "1.11.0"
description = "library with cross-python path, ini-parsing, io, code, log facilities"
category = "dev"
optional = false
python-versions = ">=2.7, !=3.0.*, !=3.1.*, !=3.2.*, !=3.3.*, !=3.4.*"
[[package]]
name = "pyasn1"
version = "0.4.8"
description = "ASN.1 types and codecs"
category = "dev"
optional = false
python-versions = "*"
[[package]]
name = "pyasn1-modules"
version = "0.2.8"
description = "A collection of ASN.1-based protocols modules."
category = "dev"
optional = false
python-versions = "*"
[package.dependencies]
pyasn1 = ">=0.4.6,<0.5.0"
[[package]]
name = "pycodestyle"
version = "2.8.0"
description = "Python style guide checker"
category = "dev"
optional = false
python-versions = ">=2.7, !=3.0.*, !=3.1.*, !=3.2.*, !=3.3.*, !=3.4.*"
[[package]]
name = "pycparser"
version = "2.21"
description = "C parser in Python"
category = "dev"
optional = false
python-versions = ">=2.7, !=3.0.*, !=3.1.*, !=3.2.*, !=3.3.*"
[[package]]
name = "pydantic"
version = "1.10.2"
description = "Data validation and settings management using python type hints"
category = "main"
optional = false
python-versions = ">=3.7"
[package.dependencies]
typing-extensions = ">=4.1.0"
[package.extras]
dotenv = ["python-dotenv (>=0.10.4)"]
email = ["email-validator (>=1.0.3)"]
[[package]]
name = "pydata-sphinx-theme"
version = "0.9.0"
description = "Bootstrap-based Sphinx theme from the PyData community"
category = "dev"
optional = false
python-versions = ">=3.7"
[package.dependencies]
beautifulsoup4 = "*"
docutils = "!=0.17.0"
packaging = "*"
sphinx = ">=4.0.2"
[package.extras]
coverage = ["codecov", "pydata-sphinx-theme[test]", "pytest-cov"]
dev = ["nox", "pre-commit", "pydata-sphinx-theme[coverage]", "pyyaml"]
doc = ["jupyter_sphinx", "myst-parser", "numpy", "numpydoc", "pandas", "plotly", "pytest", "pytest-regressions", "sphinx-design", "sphinx-sitemap", "sphinxext-rediraffe", "xarray"]
test = ["pydata-sphinx-theme[doc]", "pytest"]
[[package]]
name = "pydot"
version = "1.4.2"
description = "Python interface to Graphviz's Dot"
category = "main"
optional = false
python-versions = ">=2.7, !=3.0.*, !=3.1.*, !=3.2.*, !=3.3.*"
[package.dependencies]
pyparsing = ">=2.1.4"
[[package]]
name = "pydotplus"
version = "2.0.2"
description = "Python interface to Graphviz's Dot language"
category = "main"
optional = true
python-versions = "*"
[package.dependencies]
pyparsing = ">=2.0.1"
[[package]]
name = "pyflakes"
version = "2.4.0"
description = "passive checker of Python programs"
category = "dev"
optional = false
python-versions = ">=2.7, !=3.0.*, !=3.1.*, !=3.2.*, !=3.3.*"
[[package]]
name = "pygam"
version = "0.8.0"
description = "GAM toolkit"
category = "main"
optional = true
python-versions = "*"
[package.dependencies]
future = "*"
numpy = "*"
progressbar2 = "*"
scipy = "*"
[[package]]
name = "Pygments"
version = "2.13.0"
description = "Pygments is a syntax highlighting package written in Python."
category = "main"
optional = false
python-versions = ">=3.6"
[package.extras]
plugins = ["importlib-metadata"]
[[package]]
name = "pygraphviz"
version = "1.10"
description = "Python interface to Graphviz"
category = "main"
optional = false
python-versions = ">=3.8"
[[package]]
name = "pyparsing"
version = "3.0.9"
description = "pyparsing module - Classes and methods to define and execute parsing grammars"
category = "main"
optional = false
python-versions = ">=3.6.8"
[package.extras]
diagrams = ["jinja2", "railroad-diagrams"]
[[package]]
name = "pyro-api"
version = "0.1.2"
description = "Generic API for dispatch to Pyro backends."
category = "main"
optional = true
python-versions = "*"
[package.extras]
dev = ["ipython", "sphinx (>=2.0)", "sphinx-rtd-theme"]
test = ["flake8", "pytest (>=5.0)"]
[[package]]
name = "pyro-ppl"
version = "1.8.3"
description = "A Python library for probabilistic modeling and inference"
category = "main"
optional = true
python-versions = ">=3.7"
[package.dependencies]
numpy = ">=1.7"
opt-einsum = ">=2.3.2"
pyro-api = ">=0.1.1"
torch = ">=1.11.0"
tqdm = ">=4.36"
[package.extras]
dev = ["black (>=21.4b0)", "flake8", "graphviz (>=0.8)", "isort (>=5.0)", "jupyter (>=1.0.0)", "lap", "matplotlib (>=1.3)", "mypy (>=0.812)", "nbformat", "nbsphinx (>=0.3.2)", "nbstripout", "nbval", "ninja", "pandas", "pillow (==8.2.0)", "pypandoc", "pytest (>=5.0)", "pytest-xdist", "scikit-learn", "scipy (>=1.1)", "seaborn (>=0.11.0)", "sphinx", "sphinx-rtd-theme", "torchvision (>=0.12.0)", "visdom (>=0.1.4,<0.2.2)", "wget", "yapf"]
extras = ["graphviz (>=0.8)", "jupyter (>=1.0.0)", "lap", "matplotlib (>=1.3)", "pandas", "pillow (==8.2.0)", "scikit-learn", "seaborn (>=0.11.0)", "torchvision (>=0.12.0)", "visdom (>=0.1.4,<0.2.2)", "wget"]
funsor = ["funsor[torch] (==0.4.3)"]
horovod = ["horovod[pytorch] (>=0.19)"]
profile = ["prettytable", "pytest-benchmark", "snakeviz"]
test = ["black (>=21.4b0)", "flake8", "graphviz (>=0.8)", "jupyter (>=1.0.0)", "lap", "matplotlib (>=1.3)", "nbval", "pandas", "pillow (==8.2.0)", "pytest (>=5.0)", "pytest-cov", "scikit-learn", "scipy (>=1.1)", "seaborn (>=0.11.0)", "torchvision (>=0.12.0)", "visdom (>=0.1.4,<0.2.2)", "wget"]
[[package]]
name = "pyrsistent"
version = "0.19.2"
description = "Persistent/Functional/Immutable data structures"
category = "dev"
optional = false
python-versions = ">=3.7"
[[package]]
name = "pytest"
version = "7.2.0"
description = "pytest: simple powerful testing with Python"
category = "dev"
optional = false
python-versions = ">=3.7"
[package.dependencies]
attrs = ">=19.2.0"
colorama = {version = "*", markers = "sys_platform == \"win32\""}
exceptiongroup = {version = ">=1.0.0rc8", markers = "python_version < \"3.11\""}
iniconfig = "*"
packaging = "*"
pluggy = ">=0.12,<2.0"
tomli = {version = ">=1.0.0", markers = "python_version < \"3.11\""}
[package.extras]
testing = ["argcomplete", "hypothesis (>=3.56)", "mock", "nose", "pygments (>=2.7.2)", "requests", "xmlschema"]
[[package]]
name = "pytest-cov"
version = "3.0.0"
description = "Pytest plugin for measuring coverage."
category = "dev"
optional = false
python-versions = ">=3.6"
[package.dependencies]
coverage = {version = ">=5.2.1", extras = ["toml"]}
pytest = ">=4.6"
[package.extras]
testing = ["fields", "hunter", "process-tests", "pytest-xdist", "six", "virtualenv"]
[[package]]
name = "pytest-split"
version = "0.8.0"
description = "Pytest plugin which splits the test suite to equally sized sub suites based on test execution time."
category = "dev"
optional = false
python-versions = ">=3.7.1,<4.0"
[package.dependencies]
pytest = ">=5,<8"
[[package]]
name = "python-dateutil"
version = "2.8.2"
description = "Extensions to the standard Python datetime module"
category = "main"
optional = false
python-versions = "!=3.0.*,!=3.1.*,!=3.2.*,>=2.7"
[package.dependencies]
six = ">=1.5"
[[package]]
name = "python-utils"
version = "3.4.5"
description = "Python Utils is a module with some convenient utilities not included with the standard Python install"
category = "main"
optional = true
python-versions = ">3.6.0"
[package.extras]
docs = ["mock", "python-utils", "sphinx"]
loguru = ["loguru"]
tests = ["flake8", "loguru", "pytest", "pytest-asyncio", "pytest-cov", "pytest-mypy", "sphinx", "types-setuptools"]
[[package]]
name = "pytz"
version = "2022.6"
description = "World timezone definitions, modern and historical"
category = "main"
optional = false
python-versions = "*"
[[package]]
name = "pytz-deprecation-shim"
version = "0.1.0.post0"
description = "Shims to make deprecation of pytz easier"
category = "dev"
optional = false
python-versions = "!=3.0.*,!=3.1.*,!=3.2.*,!=3.3.*,!=3.4.*,!=3.5.*,>=2.7"
[package.dependencies]
"backports.zoneinfo" = {version = "*", markers = "python_version >= \"3.6\" and python_version < \"3.9\""}
tzdata = {version = "*", markers = "python_version >= \"3.6\""}
[[package]]
name = "pywin32"
version = "305"
description = "Python for Window Extensions"
category = "dev"
optional = false
python-versions = "*"
[[package]]
name = "pywinpty"
version = "2.0.9"
description = "Pseudo terminal support for Windows from Python."
category = "dev"
optional = false
python-versions = ">=3.7"
[[package]]
name = "PyYAML"
version = "6.0"
description = "YAML parser and emitter for Python"
category = "main"
optional = false
python-versions = ">=3.6"
[[package]]
name = "pyzmq"
version = "24.0.1"
description = "Python bindings for 0MQ"
category = "dev"
optional = false
python-versions = ">=3.6"
[package.dependencies]
cffi = {version = "*", markers = "implementation_name == \"pypy\""}
py = {version = "*", markers = "implementation_name == \"pypy\""}
[[package]]
name = "qtconsole"
version = "5.4.0"
description = "Jupyter Qt console"
category = "dev"
optional = false
python-versions = ">= 3.7"
[package.dependencies]
ipykernel = ">=4.1"
ipython-genutils = "*"
jupyter-client = ">=4.1"
jupyter-core = "*"
pygments = "*"
pyzmq = ">=17.1"
qtpy = ">=2.0.1"
traitlets = "<5.2.1 || >5.2.1,<5.2.2 || >5.2.2"
[package.extras]
doc = ["Sphinx (>=1.3)"]
test = ["flaky", "pytest", "pytest-qt"]
[[package]]
name = "QtPy"
version = "2.3.0"
description = "Provides an abstraction layer on top of the various Qt bindings (PyQt5/6 and PySide2/6)."
category = "dev"
optional = false
python-versions = ">=3.7"
[package.dependencies]
packaging = "*"
[package.extras]
test = ["pytest (>=6,!=7.0.0,!=7.0.1)", "pytest-cov (>=3.0.0)", "pytest-qt"]
[[package]]
name = "requests"
version = "2.28.1"
description = "Python HTTP for Humans."
category = "main"
optional = false
python-versions = ">=3.7, <4"
[package.dependencies]
certifi = ">=2017.4.17"
charset-normalizer = ">=2,<3"
idna = ">=2.5,<4"
urllib3 = ">=1.21.1,<1.27"
[package.extras]
socks = ["PySocks (>=1.5.6,!=1.5.7)"]
use_chardet_on_py3 = ["chardet (>=3.0.2,<6)"]
[[package]]
name = "requests-oauthlib"
version = "1.3.1"
description = "OAuthlib authentication support for Requests."
category = "dev"
optional = false
python-versions = ">=2.7, !=3.0.*, !=3.1.*, !=3.2.*, !=3.3.*"
[package.dependencies]
oauthlib = ">=3.0.0"
requests = ">=2.0.0"
[package.extras]
rsa = ["oauthlib[signedtoken] (>=3.0.0)"]
[[package]]
name = "rpy2"
version = "3.5.6"
description = "Python interface to the R language (embedded R)"
category = "dev"
optional = false
python-versions = ">=3.7"
[package.dependencies]
cffi = ">=1.10.0"
jinja2 = "*"
packaging = {version = "*", markers = "platform_system == \"Windows\""}
pytz = "*"
tzlocal = "*"
[package.extras]
all = ["ipython", "numpy", "pandas", "pytest"]
numpy = ["pandas"]
pandas = ["numpy", "pandas"]
test = ["ipython", "numpy", "pandas", "pytest"]
[[package]]
name = "rsa"
version = "4.9"
description = "Pure-Python RSA implementation"
category = "dev"
optional = false
python-versions = ">=3.6,<4"
[package.dependencies]
pyasn1 = ">=0.1.3"
[[package]]
name = "s3transfer"
version = "0.6.0"
description = "An Amazon S3 Transfer Manager"
category = "main"
optional = false
python-versions = ">= 3.7"
[package.dependencies]
botocore = ">=1.12.36,<2.0a.0"
[package.extras]
crt = ["botocore[crt] (>=1.20.29,<2.0a.0)"]
[[package]]
name = "scikit-learn"
version = "1.0.2"
description = "A set of python modules for machine learning and data mining"
category = "main"
optional = false
python-versions = ">=3.7"
[package.dependencies]
joblib = ">=0.11"
numpy = ">=1.14.6"
scipy = ">=1.1.0"
threadpoolctl = ">=2.0.0"
[package.extras]
benchmark = ["matplotlib (>=2.2.3)", "memory-profiler (>=0.57.0)", "pandas (>=0.25.0)"]
docs = ["Pillow (>=7.1.2)", "matplotlib (>=2.2.3)", "memory-profiler (>=0.57.0)", "numpydoc (>=1.0.0)", "pandas (>=0.25.0)", "scikit-image (>=0.14.5)", "seaborn (>=0.9.0)", "sphinx (>=4.0.1)", "sphinx-gallery (>=0.7.0)", "sphinx-prompt (>=1.3.0)", "sphinxext-opengraph (>=0.4.2)"]
examples = ["matplotlib (>=2.2.3)", "pandas (>=0.25.0)", "scikit-image (>=0.14.5)", "seaborn (>=0.9.0)"]
tests = ["black (>=21.6b0)", "flake8 (>=3.8.2)", "matplotlib (>=2.2.3)", "mypy (>=0.770)", "pandas (>=0.25.0)", "pyamg (>=4.0.0)", "pytest (>=5.0.1)", "pytest-cov (>=2.9.0)", "scikit-image (>=0.14.5)"]
[[package]]
name = "scipy"
version = "1.8.1"
description = "SciPy: Scientific Library for Python"
category = "main"
optional = false
python-versions = ">=3.8,<3.11"
[package.dependencies]
numpy = ">=1.17.3,<1.25.0"
[[package]]
name = "scipy"
version = "1.9.3"
description = "Fundamental algorithms for scientific computing in Python"
category = "main"
optional = false
python-versions = ">=3.8"
[package.dependencies]
numpy = ">=1.18.5,<1.26.0"
[package.extras]
dev = ["flake8", "mypy", "pycodestyle", "typing_extensions"]
doc = ["matplotlib (>2)", "numpydoc", "pydata-sphinx-theme (==0.9.0)", "sphinx (!=4.1.0)", "sphinx-panels (>=0.5.2)", "sphinx-tabs"]
test = ["asv", "gmpy2", "mpmath", "pytest", "pytest-cov", "pytest-xdist", "scikit-umfpack", "threadpoolctl"]
[[package]]
name = "seaborn"
version = "0.12.1"
description = "Statistical data visualization"
category = "main"
optional = true
python-versions = ">=3.7"
[package.dependencies]
matplotlib = ">=3.1,<3.6.1 || >3.6.1"
numpy = ">=1.17"
pandas = ">=0.25"
[package.extras]
dev = ["flake8", "mypy", "pandas-stubs", "pre-commit", "pytest", "pytest-cov", "pytest-xdist"]
docs = ["ipykernel", "nbconvert", "numpydoc", "pydata_sphinx_theme (==0.10.0rc2)", "pyyaml", "sphinx-copybutton", "sphinx-design", "sphinx-issues"]
stats = ["scipy (>=1.3)", "statsmodels (>=0.10)"]
[[package]]
name = "Send2Trash"
version = "1.8.0"
description = "Send file to trash natively under Mac OS X, Windows and Linux."
category = "dev"
optional = false
python-versions = "*"
[package.extras]
nativelib = ["pyobjc-framework-Cocoa", "pywin32"]
objc = ["pyobjc-framework-Cocoa"]
win32 = ["pywin32"]
[[package]]
name = "setuptools"
version = "65.6.3"
description = "Easily download, build, install, upgrade, and uninstall Python packages"
category = "main"
optional = false
python-versions = ">=3.7"
[package.extras]
docs = ["furo", "jaraco.packaging (>=9)", "jaraco.tidelift (>=1.4)", "pygments-github-lexers (==0.0.5)", "rst.linker (>=1.9)", "sphinx (>=3.5)", "sphinx-favicon", "sphinx-hoverxref (<2)", "sphinx-inline-tabs", "sphinx-notfound-page (==0.8.3)", "sphinx-reredirects", "sphinxcontrib-towncrier"]
testing = ["build[virtualenv]", "filelock (>=3.4.0)", "flake8 (<5)", "flake8-2020", "ini2toml[lite] (>=0.9)", "jaraco.envs (>=2.2)", "jaraco.path (>=3.2.0)", "pip (>=19.1)", "pip-run (>=8.8)", "pytest (>=6)", "pytest-black (>=0.3.7)", "pytest-checkdocs (>=2.4)", "pytest-cov", "pytest-enabler (>=1.3)", "pytest-flake8", "pytest-mypy (>=0.9.1)", "pytest-perf", "pytest-timeout", "pytest-xdist", "tomli-w (>=1.0.0)", "virtualenv (>=13.0.0)", "wheel"]
testing-integration = ["build[virtualenv]", "filelock (>=3.4.0)", "jaraco.envs (>=2.2)", "jaraco.path (>=3.2.0)", "pytest", "pytest-enabler", "pytest-xdist", "tomli", "virtualenv (>=13.0.0)", "wheel"]
[[package]]
name = "setuptools-scm"
version = "7.0.5"
description = "the blessed package to manage your versions by scm tags"
category = "main"
optional = false
python-versions = ">=3.7"
[package.dependencies]
packaging = ">=20.0"
setuptools = "*"
tomli = ">=1.0.0"
typing-extensions = "*"
[package.extras]
test = ["pytest (>=6.2)", "virtualenv (>20)"]
toml = ["setuptools (>=42)"]
[[package]]
name = "shap"
version = "0.40.0"
description = "A unified approach to explain the output of any machine learning model."
category = "main"
optional = false
python-versions = "*"
[package.dependencies]
cloudpickle = "*"
numba = "*"
numpy = "*"
packaging = ">20.9"
pandas = "*"
scikit-learn = "*"
scipy = "*"
slicer = "0.0.7"
tqdm = ">4.25.0"
[package.extras]
all = ["catboost", "ipython", "lightgbm", "lime", "matplotlib", "nbsphinx", "numpydoc", "opencv-python", "pyod", "pyspark", "pytest", "pytest-cov", "pytest-mpl", "sentencepiece", "sphinx", "sphinx_rtd_theme", "torch", "transformers", "xgboost"]
docs = ["ipython", "matplotlib", "nbsphinx", "numpydoc", "sphinx", "sphinx_rtd_theme"]
others = ["lime"]
plots = ["ipython", "matplotlib"]
test = ["catboost", "lightgbm", "opencv-python", "pyod", "pyspark", "pytest", "pytest-cov", "pytest-mpl", "sentencepiece", "torch", "transformers", "xgboost"]
[[package]]
name = "six"
version = "1.16.0"
description = "Python 2 and 3 compatibility utilities"
category = "main"
optional = false
python-versions = ">=2.7, !=3.0.*, !=3.1.*, !=3.2.*"
[[package]]
name = "slicer"
version = "0.0.7"
description = "A small package for big slicing."
category = "main"
optional = false
python-versions = ">=3.6"
[[package]]
name = "smart-open"
version = "5.2.1"
description = "Utils for streaming large files (S3, HDFS, GCS, Azure Blob Storage, gzip, bz2...)"
category = "main"
optional = false
python-versions = ">=3.6,<4.0"
[package.extras]
all = ["azure-common", "azure-core", "azure-storage-blob", "boto3", "google-cloud-storage", "requests"]
azure = ["azure-common", "azure-core", "azure-storage-blob"]
gcs = ["google-cloud-storage"]
http = ["requests"]
s3 = ["boto3"]
test = ["azure-common", "azure-core", "azure-storage-blob", "boto3", "google-cloud-storage", "moto[server] (==1.3.14)", "parameterizedtestcase", "paramiko", "pathlib2", "pytest", "pytest-rerunfailures", "requests", "responses"]
webhdfs = ["requests"]
[[package]]
name = "sniffio"
version = "1.3.0"
description = "Sniff out which async library your code is running under"
category = "dev"
optional = false
python-versions = ">=3.7"
[[package]]
name = "snowballstemmer"
version = "2.2.0"
description = "This package provides 29 stemmers for 28 languages generated from Snowball algorithms."
category = "main"
optional = false
python-versions = "*"
[[package]]
name = "sortedcontainers"
version = "2.4.0"
description = "Sorted Containers -- Sorted List, Sorted Dict, Sorted Set"
category = "main"
optional = false
python-versions = "*"
[[package]]
name = "soupsieve"
version = "2.3.2.post1"
description = "A modern CSS selector implementation for Beautiful Soup."
category = "dev"
optional = false
python-versions = ">=3.6"
[[package]]
name = "spacy"
version = "3.4.3"
description = "Industrial-strength Natural Language Processing (NLP) in Python"
category = "main"
optional = false
python-versions = ">=3.6"
[package.dependencies]
catalogue = ">=2.0.6,<2.1.0"
cymem = ">=2.0.2,<2.1.0"
jinja2 = "*"
langcodes = ">=3.2.0,<4.0.0"
murmurhash = ">=0.28.0,<1.1.0"
numpy = ">=1.15.0"
packaging = ">=20.0"
pathy = ">=0.3.5"
preshed = ">=3.0.2,<3.1.0"
pydantic = ">=1.7.4,<1.8 || >1.8,<1.8.1 || >1.8.1,<1.11.0"
requests = ">=2.13.0,<3.0.0"
setuptools = "*"
spacy-legacy = ">=3.0.10,<3.1.0"
spacy-loggers = ">=1.0.0,<2.0.0"
srsly = ">=2.4.3,<3.0.0"
thinc = ">=8.1.0,<8.2.0"
tqdm = ">=4.38.0,<5.0.0"
typer = ">=0.3.0,<0.8.0"
wasabi = ">=0.9.1,<1.1.0"
[package.extras]
apple = ["thinc-apple-ops (>=0.1.0.dev0,<1.0.0)"]
cuda = ["cupy (>=5.0.0b4,<12.0.0)"]
cuda-autodetect = ["cupy-wheel (>=11.0.0,<12.0.0)"]
cuda100 = ["cupy-cuda100 (>=5.0.0b4,<12.0.0)"]
cuda101 = ["cupy-cuda101 (>=5.0.0b4,<12.0.0)"]
cuda102 = ["cupy-cuda102 (>=5.0.0b4,<12.0.0)"]
cuda110 = ["cupy-cuda110 (>=5.0.0b4,<12.0.0)"]
cuda111 = ["cupy-cuda111 (>=5.0.0b4,<12.0.0)"]
cuda112 = ["cupy-cuda112 (>=5.0.0b4,<12.0.0)"]
cuda113 = ["cupy-cuda113 (>=5.0.0b4,<12.0.0)"]
cuda114 = ["cupy-cuda114 (>=5.0.0b4,<12.0.0)"]
cuda115 = ["cupy-cuda115 (>=5.0.0b4,<12.0.0)"]
cuda116 = ["cupy-cuda116 (>=5.0.0b4,<12.0.0)"]
cuda117 = ["cupy-cuda117 (>=5.0.0b4,<12.0.0)"]
cuda11x = ["cupy-cuda11x (>=11.0.0,<12.0.0)"]
cuda80 = ["cupy-cuda80 (>=5.0.0b4,<12.0.0)"]
cuda90 = ["cupy-cuda90 (>=5.0.0b4,<12.0.0)"]
cuda91 = ["cupy-cuda91 (>=5.0.0b4,<12.0.0)"]
cuda92 = ["cupy-cuda92 (>=5.0.0b4,<12.0.0)"]
ja = ["sudachidict-core (>=20211220)", "sudachipy (>=0.5.2,!=0.6.1)"]
ko = ["natto-py (>=0.9.0)"]
lookups = ["spacy-lookups-data (>=1.0.3,<1.1.0)"]
ray = ["spacy-ray (>=0.1.0,<1.0.0)"]
th = ["pythainlp (>=2.0)"]
transformers = ["spacy-transformers (>=1.1.2,<1.2.0)"]
[[package]]
name = "spacy-legacy"
version = "3.0.10"
description = "Legacy registered functions for spaCy backwards compatibility"
category = "main"
optional = false
python-versions = ">=3.6"
[[package]]
name = "spacy-loggers"
version = "1.0.3"
description = "Logging utilities for SpaCy"
category = "main"
optional = false
python-versions = ">=3.6"
[package.dependencies]
wasabi = ">=0.8.1,<1.1.0"
[[package]]
name = "sparse"
version = "0.13.0"
description = "Sparse n-dimensional arrays"
category = "main"
optional = false
python-versions = ">=3.6, <4"
[package.dependencies]
numba = ">=0.49"
numpy = ">=1.17"
scipy = ">=0.19"
[package.extras]
all = ["dask[array]", "pytest (>=3.5)", "pytest-black", "pytest-cov", "sphinx", "sphinx-rtd-theme", "tox"]
docs = ["sphinx", "sphinx-rtd-theme"]
tests = ["dask[array]", "pytest (>=3.5)", "pytest-black", "pytest-cov"]
tox = ["dask[array]", "pytest (>=3.5)", "pytest-black", "pytest-cov", "tox"]
[[package]]
name = "Sphinx"
version = "5.3.0"
description = "Python documentation generator"
category = "main"
optional = false
python-versions = ">=3.6"
[package.dependencies]
alabaster = ">=0.7,<0.8"
babel = ">=2.9"
colorama = {version = ">=0.4.5", markers = "sys_platform == \"win32\""}
docutils = ">=0.14,<0.20"
imagesize = ">=1.3"
importlib-metadata = {version = ">=4.8", markers = "python_version < \"3.10\""}
Jinja2 = ">=3.0"
packaging = ">=21.0"
Pygments = ">=2.12"
requests = ">=2.5.0"
snowballstemmer = ">=2.0"
sphinxcontrib-applehelp = "*"
sphinxcontrib-devhelp = "*"
sphinxcontrib-htmlhelp = ">=2.0.0"
sphinxcontrib-jsmath = "*"
sphinxcontrib-qthelp = "*"
sphinxcontrib-serializinghtml = ">=1.1.5"
[package.extras]
docs = ["sphinxcontrib-websupport"]
lint = ["docutils-stubs", "flake8 (>=3.5.0)", "flake8-bugbear", "flake8-comprehensions", "flake8-simplify", "isort", "mypy (>=0.981)", "sphinx-lint", "types-requests", "types-typed-ast"]
test = ["cython", "html5lib", "pytest (>=4.6)", "typed_ast"]
[[package]]
name = "sphinx-copybutton"
version = "0.5.0"
description = "Add a copy button to each of your code cells."
category = "dev"
optional = false
python-versions = ">=3.6"
[package.dependencies]
sphinx = ">=1.8"
[package.extras]
code_style = ["pre-commit (==2.12.1)"]
rtd = ["ipython", "myst-nb", "sphinx", "sphinx-book-theme"]
[[package]]
name = "sphinx_design"
version = "0.3.0"
description = "A sphinx extension for designing beautiful, view size responsive web components."
category = "main"
optional = false
python-versions = ">=3.7"
[package.dependencies]
sphinx = ">=4,<6"
[package.extras]
code_style = ["pre-commit (>=2.12,<3.0)"]
rtd = ["myst-parser (>=0.18.0,<0.19.0)"]
testing = ["myst-parser (>=0.18.0,<0.19.0)", "pytest (>=7.1,<8.0)", "pytest-cov", "pytest-regressions"]
theme_furo = ["furo (>=2022.06.04,<2022.07)"]
theme_pydata = ["pydata-sphinx-theme (>=0.9.0,<0.10.0)"]
theme_rtd = ["sphinx-rtd-theme (>=1.0,<2.0)"]
theme_sbt = ["sphinx-book-theme (>=0.3.0,<0.4.0)"]
[[package]]
name = "sphinx-rtd-theme"
version = "1.1.1"
description = "Read the Docs theme for Sphinx"
category = "dev"
optional = false
python-versions = "!=3.0.*,!=3.1.*,!=3.2.*,!=3.3.*,>=2.7"
[package.dependencies]
docutils = "<0.18"
sphinx = ">=1.6,<6"
[package.extras]
dev = ["bump2version", "sphinxcontrib-httpdomain", "transifex-client", "wheel"]
[[package]]
name = "sphinxcontrib-applehelp"
version = "1.0.2"
description = "sphinxcontrib-applehelp is a sphinx extension which outputs Apple help books"
category = "main"
optional = false
python-versions = ">=3.5"
[package.extras]
lint = ["docutils-stubs", "flake8", "mypy"]
test = ["pytest"]
[[package]]
name = "sphinxcontrib-devhelp"
version = "1.0.2"
description = "sphinxcontrib-devhelp is a sphinx extension which outputs Devhelp document."
category = "main"
optional = false
python-versions = ">=3.5"
[package.extras]
lint = ["docutils-stubs", "flake8", "mypy"]
test = ["pytest"]
[[package]]
name = "sphinxcontrib-googleanalytics"
version = "0.2"
description = ""
category = "dev"
optional = false
python-versions = "*"
develop = false
[package.dependencies]
Sphinx = ">=0.6"
[package.source]
type = "git"
url = "https://github.com/sphinx-contrib/googleanalytics.git"
reference = "master"
resolved_reference = "42b3df99fdc01a136b9c575f3f251ae80cdfbe1d"
[[package]]
name = "sphinxcontrib-htmlhelp"
version = "2.0.0"
description = "sphinxcontrib-htmlhelp is a sphinx extension which renders HTML help files"
category = "main"
optional = false
python-versions = ">=3.6"
[package.extras]
lint = ["docutils-stubs", "flake8", "mypy"]
test = ["html5lib", "pytest"]
[[package]]
name = "sphinxcontrib-jsmath"
version = "1.0.1"
description = "A sphinx extension which renders display math in HTML via JavaScript"
category = "main"
optional = false
python-versions = ">=3.5"
[package.extras]
test = ["flake8", "mypy", "pytest"]
[[package]]
name = "sphinxcontrib-qthelp"
version = "1.0.3"
description = "sphinxcontrib-qthelp is a sphinx extension which outputs QtHelp document."
category = "main"
optional = false
python-versions = ">=3.5"
[package.extras]
lint = ["docutils-stubs", "flake8", "mypy"]
test = ["pytest"]
[[package]]
name = "sphinxcontrib-serializinghtml"
version = "1.1.5"
description = "sphinxcontrib-serializinghtml is a sphinx extension which outputs \"serialized\" HTML files (json and pickle)."
category = "main"
optional = false
python-versions = ">=3.5"
[package.extras]
lint = ["docutils-stubs", "flake8", "mypy"]
test = ["pytest"]
[[package]]
name = "srsly"
version = "2.4.5"
description = "Modern high-performance serialization utilities for Python"
category = "main"
optional = false
python-versions = ">=3.6"
[package.dependencies]
catalogue = ">=2.0.3,<2.1.0"
[[package]]
name = "stack-data"
version = "0.6.2"
description = "Extract data from python stack frames and tracebacks for informative displays"
category = "dev"
optional = false
python-versions = "*"
[package.dependencies]
asttokens = ">=2.1.0"
executing = ">=1.2.0"
pure-eval = "*"
[package.extras]
tests = ["cython", "littleutils", "pygments", "pytest", "typeguard"]
[[package]]
name = "statsmodels"
version = "0.13.5"
description = "Statistical computations and models for Python"
category = "main"
optional = false
python-versions = ">=3.7"
[package.dependencies]
numpy = {version = ">=1.17", markers = "python_version != \"3.10\" or platform_system != \"Windows\" or platform_python_implementation == \"PyPy\""}
packaging = ">=21.3"
pandas = ">=0.25"
patsy = ">=0.5.2"
scipy = [
{version = ">=1.3", markers = "(python_version > \"3.9\" or platform_system != \"Windows\" or platform_machine != \"x86\") and python_version < \"3.12\""},
{version = ">=1.3,<1.9", markers = "python_version == \"3.8\" and platform_system == \"Windows\" and platform_machine == \"x86\" or python_version == \"3.9\" and platform_system == \"Windows\" and platform_machine == \"x86\""},
]
[package.extras]
build = ["cython (>=0.29.32)"]
develop = ["Jinja2", "colorama", "cython (>=0.29.32)", "cython (>=0.29.32,<3.0.0)", "flake8", "isort", "joblib", "matplotlib (>=3)", "oldest-supported-numpy (>=2022.4.18)", "pytest (>=7.0.1,<7.1.0)", "pytest-randomly", "pytest-xdist", "pywinpty", "setuptools-scm[toml] (>=7.0.0,<7.1.0)"]
docs = ["ipykernel", "jupyter-client", "matplotlib", "nbconvert", "nbformat", "numpydoc", "pandas-datareader", "sphinx"]
[[package]]
name = "sympy"
version = "1.11.1"
description = "Computer algebra system (CAS) in Python"
category = "main"
optional = false
python-versions = ">=3.8"
[package.dependencies]
mpmath = ">=0.19"
[[package]]
name = "tblib"
version = "1.7.0"
description = "Traceback serialization library."
category = "main"
optional = false
python-versions = ">=2.7, !=3.0.*, !=3.1.*, !=3.2.*, !=3.3.*, !=3.4.*"
[[package]]
name = "tenacity"
version = "8.1.0"
description = "Retry code until it succeeds"
category = "main"
optional = false
python-versions = ">=3.6"
[package.extras]
doc = ["reno", "sphinx", "tornado (>=4.5)"]
[[package]]
name = "tensorboard"
version = "2.11.0"
description = "TensorBoard lets you watch Tensors Flow"
category = "dev"
optional = false
python-versions = ">=3.7"
[package.dependencies]
absl-py = ">=0.4"
google-auth = ">=1.6.3,<3"
google-auth-oauthlib = ">=0.4.1,<0.5"
grpcio = ">=1.24.3"
markdown = ">=2.6.8"
numpy = ">=1.12.0"
protobuf = ">=3.9.2,<4"
requests = ">=2.21.0,<3"
setuptools = ">=41.0.0"
tensorboard-data-server = ">=0.6.0,<0.7.0"
tensorboard-plugin-wit = ">=1.6.0"
werkzeug = ">=1.0.1"
wheel = ">=0.26"
[[package]]
name = "tensorboard-data-server"
version = "0.6.1"
description = "Fast data loading for TensorBoard"
category = "dev"
optional = false
python-versions = ">=3.6"
[[package]]
name = "tensorboard-plugin-wit"
version = "1.8.1"
description = "What-If Tool TensorBoard plugin."
category = "dev"
optional = false
python-versions = "*"
[[package]]
name = "tensorflow"
version = "2.11.0"
description = "TensorFlow is an open source machine learning framework for everyone."
category = "dev"
optional = false
python-versions = ">=3.7"
[package.dependencies]
absl-py = ">=1.0.0"
astunparse = ">=1.6.0"
flatbuffers = ">=2.0"
gast = ">=0.2.1,<=0.4.0"
google-pasta = ">=0.1.1"
grpcio = ">=1.24.3,<2.0"
h5py = ">=2.9.0"
keras = ">=2.11.0,<2.12"
libclang = ">=13.0.0"
numpy = ">=1.20"
opt-einsum = ">=2.3.2"
packaging = "*"
protobuf = ">=3.9.2,<3.20"
setuptools = "*"
six = ">=1.12.0"
tensorboard = ">=2.11,<2.12"
tensorflow-estimator = ">=2.11.0,<2.12"
tensorflow-io-gcs-filesystem = {version = ">=0.23.1", markers = "platform_machine != \"arm64\" or platform_system != \"Darwin\""}
termcolor = ">=1.1.0"
typing-extensions = ">=3.6.6"
wrapt = ">=1.11.0"
[[package]]
name = "tensorflow-estimator"
version = "2.11.0"
description = "TensorFlow Estimator."
category = "dev"
optional = false
python-versions = ">=3.7"
[[package]]
name = "tensorflow-io-gcs-filesystem"
version = "0.28.0"
description = "TensorFlow IO"
category = "dev"
optional = false
python-versions = ">=3.7, <3.11"
[package.extras]
tensorflow = ["tensorflow (>=2.11.0,<2.12.0)"]
tensorflow-aarch64 = ["tensorflow-aarch64 (>=2.11.0,<2.12.0)"]
tensorflow-cpu = ["tensorflow-cpu (>=2.11.0,<2.12.0)"]
tensorflow-gpu = ["tensorflow-gpu (>=2.11.0,<2.12.0)"]
tensorflow-rocm = ["tensorflow-rocm (>=2.11.0,<2.12.0)"]
[[package]]
name = "termcolor"
version = "2.1.1"
description = "ANSI color formatting for output in terminal"
category = "dev"
optional = false
python-versions = ">=3.7"
[package.extras]
tests = ["pytest", "pytest-cov"]
[[package]]
name = "terminado"
version = "0.17.0"
description = "Tornado websocket backend for the Xterm.js Javascript terminal emulator library."
category = "dev"
optional = false
python-versions = ">=3.7"
[package.dependencies]
ptyprocess = {version = "*", markers = "os_name != \"nt\""}
pywinpty = {version = ">=1.1.0", markers = "os_name == \"nt\""}
tornado = ">=6.1.0"
[package.extras]
docs = ["pydata-sphinx-theme", "sphinx"]
test = ["pre-commit", "pytest (>=7.0)", "pytest-timeout"]
[[package]]
name = "thinc"
version = "8.1.5"
description = "A refreshing functional take on deep learning, compatible with your favorite libraries"
category = "main"
optional = false
python-versions = ">=3.6"
[package.dependencies]
blis = ">=0.7.8,<0.8.0"
catalogue = ">=2.0.4,<2.1.0"
confection = ">=0.0.1,<1.0.0"
cymem = ">=2.0.2,<2.1.0"
murmurhash = ">=1.0.2,<1.1.0"
numpy = ">=1.15.0"
preshed = ">=3.0.2,<3.1.0"
pydantic = ">=1.7.4,<1.8 || >1.8,<1.8.1 || >1.8.1,<1.11.0"
setuptools = "*"
srsly = ">=2.4.0,<3.0.0"
wasabi = ">=0.8.1,<1.1.0"
[package.extras]
cuda = ["cupy (>=5.0.0b4)"]
cuda-autodetect = ["cupy-wheel (>=11.0.0)"]
cuda100 = ["cupy-cuda100 (>=5.0.0b4)"]
cuda101 = ["cupy-cuda101 (>=5.0.0b4)"]
cuda102 = ["cupy-cuda102 (>=5.0.0b4)"]
cuda110 = ["cupy-cuda110 (>=5.0.0b4)"]
cuda111 = ["cupy-cuda111 (>=5.0.0b4)"]
cuda112 = ["cupy-cuda112 (>=5.0.0b4)"]
cuda113 = ["cupy-cuda113 (>=5.0.0b4)"]
cuda114 = ["cupy-cuda114 (>=5.0.0b4)"]
cuda115 = ["cupy-cuda115 (>=5.0.0b4)"]
cuda116 = ["cupy-cuda116 (>=5.0.0b4)"]
cuda117 = ["cupy-cuda117 (>=5.0.0b4)"]
cuda11x = ["cupy-cuda11x (>=11.0.0)"]
cuda80 = ["cupy-cuda80 (>=5.0.0b4)"]
cuda90 = ["cupy-cuda90 (>=5.0.0b4)"]
cuda91 = ["cupy-cuda91 (>=5.0.0b4)"]
cuda92 = ["cupy-cuda92 (>=5.0.0b4)"]
datasets = ["ml-datasets (>=0.2.0,<0.3.0)"]
mxnet = ["mxnet (>=1.5.1,<1.6.0)"]
tensorflow = ["tensorflow (>=2.0.0,<2.6.0)"]
torch = ["torch (>=1.6.0)"]
[[package]]
name = "threadpoolctl"
version = "3.1.0"
description = "threadpoolctl"
category = "main"
optional = false
python-versions = ">=3.6"
[[package]]
name = "tinycss2"
version = "1.2.1"
description = "A tiny CSS parser"
category = "dev"
optional = false
python-versions = ">=3.7"
[package.dependencies]
webencodings = ">=0.4"
[package.extras]
doc = ["sphinx", "sphinx_rtd_theme"]
test = ["flake8", "isort", "pytest"]
[[package]]
name = "tokenize-rt"
version = "5.0.0"
description = "A wrapper around the stdlib `tokenize` which roundtrips."
category = "dev"
optional = false
python-versions = ">=3.7"
[[package]]
name = "tomli"
version = "2.0.1"
description = "A lil' TOML parser"
category = "main"
optional = false
python-versions = ">=3.7"
[[package]]
name = "toolz"
version = "0.12.0"
description = "List processing tools and functional utilities"
category = "main"
optional = false
python-versions = ">=3.5"
[[package]]
name = "torch"
version = "1.12.1"
description = "Tensors and Dynamic neural networks in Python with strong GPU acceleration"
category = "main"
optional = false
python-versions = ">=3.7.0"
[package.dependencies]
typing-extensions = "*"
[[package]]
name = "torchvision"
version = "0.13.1"
description = "image and video datasets and models for torch deep learning"
category = "main"
optional = false
python-versions = ">=3.7"
[package.dependencies]
numpy = "*"
pillow = ">=5.3.0,<8.3.0 || >=8.4.0"
requests = "*"
torch = "1.12.1"
typing-extensions = "*"
[package.extras]
scipy = ["scipy"]
[[package]]
name = "tornado"
version = "6.2"
description = "Tornado is a Python web framework and asynchronous networking library, originally developed at FriendFeed."
category = "main"
optional = false
python-versions = ">= 3.7"
[[package]]
name = "tqdm"
version = "4.64.1"
description = "Fast, Extensible Progress Meter"
category = "main"
optional = false
python-versions = "!=3.0.*,!=3.1.*,!=3.2.*,!=3.3.*,>=2.7"
[package.dependencies]
colorama = {version = "*", markers = "platform_system == \"Windows\""}
[package.extras]
dev = ["py-make (>=0.1.0)", "twine", "wheel"]
notebook = ["ipywidgets (>=6)"]
slack = ["slack-sdk"]
telegram = ["requests"]
[[package]]
name = "traitlets"
version = "5.5.0"
description = ""
category = "dev"
optional = false
python-versions = ">=3.7"
[package.extras]
docs = ["myst-parser", "pydata-sphinx-theme", "sphinx"]
test = ["pre-commit", "pytest"]
[[package]]
name = "typer"
version = "0.7.0"
description = "Typer, build great CLIs. Easy to code. Based on Python type hints."
category = "main"
optional = false
python-versions = ">=3.6"
[package.dependencies]
click = ">=7.1.1,<9.0.0"
[package.extras]
all = ["colorama (>=0.4.3,<0.5.0)", "rich (>=10.11.0,<13.0.0)", "shellingham (>=1.3.0,<2.0.0)"]
dev = ["autoflake (>=1.3.1,<2.0.0)", "flake8 (>=3.8.3,<4.0.0)", "pre-commit (>=2.17.0,<3.0.0)"]
doc = ["cairosvg (>=2.5.2,<3.0.0)", "mdx-include (>=1.4.1,<2.0.0)", "mkdocs (>=1.1.2,<2.0.0)", "mkdocs-material (>=8.1.4,<9.0.0)", "pillow (>=9.3.0,<10.0.0)"]
test = ["black (>=22.3.0,<23.0.0)", "coverage (>=6.2,<7.0)", "isort (>=5.0.6,<6.0.0)", "mypy (==0.910)", "pytest (>=4.4.0,<8.0.0)", "pytest-cov (>=2.10.0,<5.0.0)", "pytest-sugar (>=0.9.4,<0.10.0)", "pytest-xdist (>=1.32.0,<4.0.0)", "rich (>=10.11.0,<13.0.0)", "shellingham (>=1.3.0,<2.0.0)"]
[[package]]
name = "typing-extensions"
version = "4.4.0"
description = "Backported and Experimental Type Hints for Python 3.7+"
category = "main"
optional = false
python-versions = ">=3.7"
[[package]]
name = "tzdata"
version = "2022.6"
description = "Provider of IANA time zone data"
category = "dev"
optional = false
python-versions = ">=2"
[[package]]
name = "tzlocal"
version = "4.2"
description = "tzinfo object for the local timezone"
category = "dev"
optional = false
python-versions = ">=3.6"
[package.dependencies]
"backports.zoneinfo" = {version = "*", markers = "python_version < \"3.9\""}
pytz-deprecation-shim = "*"
tzdata = {version = "*", markers = "platform_system == \"Windows\""}
[package.extras]
devenv = ["black", "pyroma", "pytest-cov", "zest.releaser"]
test = ["pytest (>=4.3)", "pytest-mock (>=3.3)"]
[[package]]
name = "urllib3"
version = "1.26.13"
description = "HTTP library with thread-safe connection pooling, file post, and more."
category = "main"
optional = false
python-versions = ">=2.7, !=3.0.*, !=3.1.*, !=3.2.*, !=3.3.*, !=3.4.*, !=3.5.*"
[package.extras]
brotli = ["brotli (>=1.0.9)", "brotlicffi (>=0.8.0)", "brotlipy (>=0.6.0)"]
secure = ["certifi", "cryptography (>=1.3.4)", "idna (>=2.0.0)", "ipaddress", "pyOpenSSL (>=0.14)", "urllib3-secure-extra"]
socks = ["PySocks (>=1.5.6,!=1.5.7,<2.0)"]
[[package]]
name = "wasabi"
version = "0.10.1"
description = "A lightweight console printing and formatting toolkit"
category = "main"
optional = false
python-versions = "*"

[[package]]
name = "wcwidth"
version = "0.2.5"
description = "Measures the displayed width of unicode strings in a terminal"
category = "dev"
optional = false
python-versions = "*"

[[package]]
name = "webencodings"
version = "0.5.1"
description = "Character encoding aliases for legacy web content"
category = "dev"
optional = false
python-versions = "*"

[[package]]
name = "websocket-client"
version = "1.4.2"
description = "WebSocket client for Python with low level API options"
category = "dev"
optional = false
python-versions = ">=3.7"

[package.extras]
docs = ["Sphinx (>=3.4)", "sphinx-rtd-theme (>=0.5)"]
optional = ["python-socks", "wsaccel"]
test = ["websockets"]

[[package]]
name = "Werkzeug"
version = "2.2.2"
description = "The comprehensive WSGI web application library."
category = "dev"
optional = false
python-versions = ">=3.7"

[package.dependencies]
MarkupSafe = ">=2.1.1"

[package.extras]
watchdog = ["watchdog"]

[[package]]
name = "wheel"
version = "0.38.4"
description = "A built-package format for Python"
category = "main"
optional = false
python-versions = ">=3.7"

[package.extras]
test = ["pytest (>=3.0.0)"]

[[package]]
name = "widgetsnbextension"
version = "4.0.3"
description = "Jupyter interactive widgets for Jupyter Notebook"
category = "dev"
optional = false
python-versions = ">=3.7"

[[package]]
name = "wrapt"
version = "1.14.1"
description = "Module for decorators, wrappers and monkey patching."
category = "dev"
optional = false
python-versions = "!=3.0.*,!=3.1.*,!=3.2.*,!=3.3.*,!=3.4.*,>=2.7"

[[package]]
name = "xgboost"
version = "1.7.1"
description = "XGBoost Python Package"
category = "main"
optional = false
python-versions = ">=3.8"

[package.dependencies]
numpy = "*"
scipy = "*"

[package.extras]
dask = ["dask", "distributed", "pandas"]
datatable = ["datatable"]
pandas = ["pandas"]
plotting = ["graphviz", "matplotlib"]
pyspark = ["cloudpickle", "pyspark", "scikit-learn"]
scikit-learn = ["scikit-learn"]

[[package]]
name = "zict"
version = "2.2.0"
description = "Mutable mapping tools"
category = "main"
optional = false
python-versions = ">=3.7"

[package.dependencies]
heapdict = "*"

[[package]]
name = "zipp"
version = "3.11.0"
description = "Backport of pathlib-compatible object wrapper for zip files"
category = "main"
optional = false
python-versions = ">=3.7"

[package.extras]
docs = ["furo", "jaraco.packaging (>=9)", "jaraco.tidelift (>=1.4)", "rst.linker (>=1.9)", "sphinx (>=3.5)"]
testing = ["flake8 (<5)", "func-timeout", "jaraco.functools", "jaraco.itertools", "more-itertools", "pytest (>=6)", "pytest-black (>=0.3.7)", "pytest-checkdocs (>=2.4)", "pytest-cov", "pytest-enabler (>=1.3)", "pytest-flake8", "pytest-mypy (>=0.9.1)"]

[extras]
causalml = ["causalml", "llvmlite", "cython"]
econml = ["econml"]
plotting = ["matplotlib"]
pydot = ["pydot"]
pygraphviz = ["pygraphviz"]

[metadata]
lock-version = "1.1"
python-versions = ">=3.8,<3.10"
content-hash = "12d40b6d9616d209cd632e2315aafc72f78d3e35efdf6e52ca410588465787cc"

[metadata.files]
absl-py = [
{file = "absl-py-1.3.0.tar.gz", hash = "sha256:463c38a08d2e4cef6c498b76ba5bd4858e4c6ef51da1a5a1f27139a022e20248"},
{file = "absl_py-1.3.0-py3-none-any.whl", hash = "sha256:34995df9bd7a09b3b8749e230408f5a2a2dd7a68a0d33c12a3d0cb15a041a507"},
]
alabaster = [
{file = "alabaster-0.7.12-py2.py3-none-any.whl", hash = "sha256:446438bdcca0e05bd45ea2de1668c1d9b032e1a9154c2c259092d77031ddd359"},
{file = "alabaster-0.7.12.tar.gz", hash = "sha256:a661d72d58e6ea8a57f7a86e37d86716863ee5e92788398526d58b26a4e4dc02"},
]
anyio = [
{file = "anyio-3.6.2-py3-none-any.whl", hash = "sha256:fbbe32bd270d2a2ef3ed1c5d45041250284e31fc0a4df4a5a6071842051a51e3"},
{file = "anyio-3.6.2.tar.gz", hash = "sha256:25ea0d673ae30af41a0c442f81cf3b38c7e79fdc7b60335a4c14e05eb0947421"},
]
appnope = [
{file = "appnope-0.1.3-py2.py3-none-any.whl", hash = "sha256:265a455292d0bd8a72453494fa24df5a11eb18373a60c7c0430889f22548605e"},
{file = "appnope-0.1.3.tar.gz", hash = "sha256:02bd91c4de869fbb1e1c50aafc4098827a7a54ab2f39d9dcba6c9547ed920e24"},
]
argon2-cffi = [
{file = "argon2-cffi-21.3.0.tar.gz", hash = "sha256:d384164d944190a7dd7ef22c6aa3ff197da12962bd04b17f64d4e93d934dba5b"},
{file = "argon2_cffi-21.3.0-py3-none-any.whl", hash = "sha256:8c976986f2c5c0e5000919e6de187906cfd81fb1c72bf9d88c01177e77da7f80"},
]
argon2-cffi-bindings = [
{file = "argon2-cffi-bindings-21.2.0.tar.gz", hash = "sha256:bb89ceffa6c791807d1305ceb77dbfacc5aa499891d2c55661c6459651fc39e3"},
{file = "argon2_cffi_bindings-21.2.0-cp36-abi3-macosx_10_9_x86_64.whl", hash = "sha256:ccb949252cb2ab3a08c02024acb77cfb179492d5701c7cbdbfd776124d4d2367"},
{file = "argon2_cffi_bindings-21.2.0-cp36-abi3-manylinux_2_17_aarch64.manylinux2014_aarch64.whl", hash = "sha256:9524464572e12979364b7d600abf96181d3541da11e23ddf565a32e70bd4dc0d"},
{file = "argon2_cffi_bindings-21.2.0-cp36-abi3-manylinux_2_17_x86_64.manylinux2014_x86_64.whl", hash = "sha256:b746dba803a79238e925d9046a63aa26bf86ab2a2fe74ce6b009a1c3f5c8f2ae"},
{file = "argon2_cffi_bindings-21.2.0-cp36-abi3-manylinux_2_5_i686.manylinux1_i686.manylinux_2_17_i686.manylinux2014_i686.whl", hash = "sha256:58ed19212051f49a523abb1dbe954337dc82d947fb6e5a0da60f7c8471a8476c"},
{file = "argon2_cffi_bindings-21.2.0-cp36-abi3-musllinux_1_1_aarch64.whl", hash = "sha256:bd46088725ef7f58b5a1ef7ca06647ebaf0eb4baff7d1d0d177c6cc8744abd86"},
{file = "argon2_cffi_bindings-21.2.0-cp36-abi3-musllinux_1_1_i686.whl", hash = "sha256:8cd69c07dd875537a824deec19f978e0f2078fdda07fd5c42ac29668dda5f40f"},
{file = "argon2_cffi_bindings-21.2.0-cp36-abi3-musllinux_1_1_x86_64.whl", hash = "sha256:f1152ac548bd5b8bcecfb0b0371f082037e47128653df2e8ba6e914d384f3c3e"},
{file = "argon2_cffi_bindings-21.2.0-cp36-abi3-win32.whl", hash = "sha256:603ca0aba86b1349b147cab91ae970c63118a0f30444d4bc80355937c950c082"},
{file = "argon2_cffi_bindings-21.2.0-cp36-abi3-win_amd64.whl", hash = "sha256:b2ef1c30440dbbcba7a5dc3e319408b59676e2e039e2ae11a8775ecf482b192f"},
{file = "argon2_cffi_bindings-21.2.0-cp38-abi3-macosx_10_9_universal2.whl", hash = "sha256:e415e3f62c8d124ee16018e491a009937f8cf7ebf5eb430ffc5de21b900dad93"},
{file = "argon2_cffi_bindings-21.2.0-pp37-pypy37_pp73-macosx_10_9_x86_64.whl", hash = "sha256:3e385d1c39c520c08b53d63300c3ecc28622f076f4c2b0e6d7e796e9f6502194"},
{file = "argon2_cffi_bindings-21.2.0-pp37-pypy37_pp73-manylinux_2_17_aarch64.manylinux2014_aarch64.whl", hash = "sha256:2c3e3cc67fdb7d82c4718f19b4e7a87123caf8a93fde7e23cf66ac0337d3cb3f"},
{file = "argon2_cffi_bindings-21.2.0-pp37-pypy37_pp73-manylinux_2_17_x86_64.manylinux2014_x86_64.whl", hash = "sha256:6a22ad9800121b71099d0fb0a65323810a15f2e292f2ba450810a7316e128ee5"},
{file = "argon2_cffi_bindings-21.2.0-pp37-pypy37_pp73-manylinux_2_5_i686.manylinux1_i686.manylinux_2_17_i686.manylinux2014_i686.whl", hash = "sha256:f9f8b450ed0547e3d473fdc8612083fd08dd2120d6ac8f73828df9b7d45bb351"},
{file = "argon2_cffi_bindings-21.2.0-pp37-pypy37_pp73-win_amd64.whl", hash = "sha256:93f9bf70084f97245ba10ee36575f0c3f1e7d7724d67d8e5b08e61787c320ed7"},
{file = "argon2_cffi_bindings-21.2.0-pp38-pypy38_pp73-macosx_10_9_x86_64.whl", hash = "sha256:3b9ef65804859d335dc6b31582cad2c5166f0c3e7975f324d9ffaa34ee7e6583"},
{file = "argon2_cffi_bindings-21.2.0-pp38-pypy38_pp73-manylinux_2_17_aarch64.manylinux2014_aarch64.whl", hash = "sha256:d4966ef5848d820776f5f562a7d45fdd70c2f330c961d0d745b784034bd9f48d"},
{file = "argon2_cffi_bindings-21.2.0-pp38-pypy38_pp73-manylinux_2_17_x86_64.manylinux2014_x86_64.whl", hash = "sha256:20ef543a89dee4db46a1a6e206cd015360e5a75822f76df533845c3cbaf72670"},
{file = "argon2_cffi_bindings-21.2.0-pp38-pypy38_pp73-manylinux_2_5_i686.manylinux1_i686.manylinux_2_17_i686.manylinux2014_i686.whl", hash = "sha256:ed2937d286e2ad0cc79a7087d3c272832865f779430e0cc2b4f3718d3159b0cb"},
{file = "argon2_cffi_bindings-21.2.0-pp38-pypy38_pp73-win_amd64.whl", hash = "sha256:5e00316dabdaea0b2dd82d141cc66889ced0cdcbfa599e8b471cf22c620c329a"},
]
asttokens = [
{file = "asttokens-2.1.0-py2.py3-none-any.whl", hash = "sha256:1b28ed85e254b724439afc783d4bee767f780b936c3fe8b3275332f42cf5f561"},
{file = "asttokens-2.1.0.tar.gz", hash = "sha256:4aa76401a151c8cc572d906aad7aea2a841780834a19d780f4321c0fe1b54635"},
]
astunparse = [
{file = "astunparse-1.6.3-py2.py3-none-any.whl", hash = "sha256:c2652417f2c8b5bb325c885ae329bdf3f86424075c4fd1a128674bc6fba4b8e8"},
{file = "astunparse-1.6.3.tar.gz", hash = "sha256:5ad93a8456f0d084c3456d059fd9a92cce667963232cbf763eac3bc5b7940872"},
]
attrs = [
{file = "attrs-22.1.0-py2.py3-none-any.whl", hash = "sha256:86efa402f67bf2df34f51a335487cf46b1ec130d02b8d39fd248abfd30da551c"},
{file = "attrs-22.1.0.tar.gz", hash = "sha256:29adc2665447e5191d0e7c568fde78b21f9672d344281d0c6e1ab085429b22b6"},
]
"autogluon.common" = [
{file = "autogluon.common-0.6.0-py3-none-any.whl", hash = "sha256:8e1a46efaab051069589b875e417df30b38150a908e9aa2ff3ab479747a487ce"},
{file = "autogluon.common-0.6.0.tar.gz", hash = "sha256:d967844c728ad8e9a5c0f9e0deddbe6c4beb0e47cdf829a44a4834b5917798e0"},
]
"autogluon.core" = [
{file = "autogluon.core-0.6.0-py3-none-any.whl", hash = "sha256:b7efd2dfebfc9a3be0e39d1bf1bd352f45b23cccd503cf32afb9f5f23d58126b"},
{file = "autogluon.core-0.6.0.tar.gz", hash = "sha256:a6b6d57ec38d4193afab6b121cde63a6085446a51f84b9fa358221b7fed71ff4"},
]
"autogluon.features" = [
{file = "autogluon.features-0.6.0-py3-none-any.whl", hash = "sha256:ecff1a69cc768bc55777b3f7453ee89859352162dd43adda4451faadc9e583bf"},
{file = "autogluon.features-0.6.0.tar.gz", hash = "sha256:dced399ac2652c7c872da5208d0a0383778aeca3706a1b987b9781c9420d80c7"},
]
"autogluon.tabular" = [
{file = "autogluon.tabular-0.6.0-py3-none-any.whl", hash = "sha256:16404037c475e8746d61a7b1c977d5fd14afd853ebc9777fb0eafc851d37f8ad"},
{file = "autogluon.tabular-0.6.0.tar.gz", hash = "sha256:91892b7c9749942526eabfdd1bbb6d9daae2c24f785570a0552b2c7b9b851ab4"},
]
Babel = [
{file = "Babel-2.11.0-py3-none-any.whl", hash = "sha256:1ad3eca1c885218f6dce2ab67291178944f810a10a9b5f3cb8382a5a232b64fe"},
{file = "Babel-2.11.0.tar.gz", hash = "sha256:5ef4b3226b0180dedded4229651c8b0e1a3a6a2837d45a073272f313e4cf97f6"},
]
backcall = [
{file = "backcall-0.2.0-py2.py3-none-any.whl", hash = "sha256:fbbce6a29f263178a1f7915c1940bde0ec2b2a967566fe1c65c1dfb7422bd255"},
{file = "backcall-0.2.0.tar.gz", hash = "sha256:5cbdbf27be5e7cfadb448baf0aa95508f91f2bbc6c6437cd9cd06e2a4c215e1e"},
]
"backports.zoneinfo" = [
{file = "backports.zoneinfo-0.2.1-cp36-cp36m-macosx_10_14_x86_64.whl", hash = "sha256:da6013fd84a690242c310d77ddb8441a559e9cb3d3d59ebac9aca1a57b2e18bc"},
{file = "backports.zoneinfo-0.2.1-cp36-cp36m-manylinux1_i686.whl", hash = "sha256:89a48c0d158a3cc3f654da4c2de1ceba85263fafb861b98b59040a5086259722"},
{file = "backports.zoneinfo-0.2.1-cp36-cp36m-manylinux1_x86_64.whl", hash = "sha256:1c5742112073a563c81f786e77514969acb58649bcdf6cdf0b4ed31a348d4546"},
{file = "backports.zoneinfo-0.2.1-cp36-cp36m-win32.whl", hash = "sha256:e8236383a20872c0cdf5a62b554b27538db7fa1bbec52429d8d106effbaeca08"},
{file = "backports.zoneinfo-0.2.1-cp36-cp36m-win_amd64.whl", hash = "sha256:8439c030a11780786a2002261569bdf362264f605dfa4d65090b64b05c9f79a7"},
{file = "backports.zoneinfo-0.2.1-cp37-cp37m-macosx_10_14_x86_64.whl", hash = "sha256:f04e857b59d9d1ccc39ce2da1021d196e47234873820cbeaad210724b1ee28ac"},
{file = "backports.zoneinfo-0.2.1-cp37-cp37m-manylinux1_i686.whl", hash = "sha256:17746bd546106fa389c51dbea67c8b7c8f0d14b5526a579ca6ccf5ed72c526cf"},
{file = "backports.zoneinfo-0.2.1-cp37-cp37m-manylinux1_x86_64.whl", hash = "sha256:5c144945a7752ca544b4b78c8c41544cdfaf9786f25fe5ffb10e838e19a27570"},
{file = "backports.zoneinfo-0.2.1-cp37-cp37m-win32.whl", hash = "sha256:e55b384612d93be96506932a786bbcde5a2db7a9e6a4bb4bffe8b733f5b9036b"},
{file = "backports.zoneinfo-0.2.1-cp37-cp37m-win_amd64.whl", hash = "sha256:a76b38c52400b762e48131494ba26be363491ac4f9a04c1b7e92483d169f6582"},
{file = "backports.zoneinfo-0.2.1-cp38-cp38-macosx_10_14_x86_64.whl", hash = "sha256:8961c0f32cd0336fb8e8ead11a1f8cd99ec07145ec2931122faaac1c8f7fd987"},
{file = "backports.zoneinfo-0.2.1-cp38-cp38-manylinux1_i686.whl", hash = "sha256:e81b76cace8eda1fca50e345242ba977f9be6ae3945af8d46326d776b4cf78d1"},
{file = "backports.zoneinfo-0.2.1-cp38-cp38-manylinux1_x86_64.whl", hash = "sha256:7b0a64cda4145548fed9efc10322770f929b944ce5cee6c0dfe0c87bf4c0c8c9"},
{file = "backports.zoneinfo-0.2.1-cp38-cp38-win32.whl", hash = "sha256:1b13e654a55cd45672cb54ed12148cd33628f672548f373963b0bff67b217328"},
{file = "backports.zoneinfo-0.2.1-cp38-cp38-win_amd64.whl", hash = "sha256:4a0f800587060bf8880f954dbef70de6c11bbe59c673c3d818921f042f9954a6"},
{file = "backports.zoneinfo-0.2.1.tar.gz", hash = "sha256:fadbfe37f74051d024037f223b8e001611eac868b5c5b06144ef4d8b799862f2"},
]
beautifulsoup4 = [
{file = "beautifulsoup4-4.11.1-py3-none-any.whl", hash = "sha256:58d5c3d29f5a36ffeb94f02f0d786cd53014cf9b3b3951d42e0080d8a9498d30"},
{file = "beautifulsoup4-4.11.1.tar.gz", hash = "sha256:ad9aa55b65ef2808eb405f46cf74df7fcb7044d5cbc26487f96eb2ef2e436693"},
]
black = [
{file = "black-22.10.0-1fixedarch-cp310-cp310-macosx_11_0_x86_64.whl", hash = "sha256:5cc42ca67989e9c3cf859e84c2bf014f6633db63d1cbdf8fdb666dcd9e77e3fa"},
{file = "black-22.10.0-1fixedarch-cp311-cp311-macosx_11_0_x86_64.whl", hash = "sha256:5d8f74030e67087b219b032aa33a919fae8806d49c867846bfacde57f43972ef"},
{file = "black-22.10.0-1fixedarch-cp37-cp37m-macosx_10_16_x86_64.whl", hash = "sha256:197df8509263b0b8614e1df1756b1dd41be6738eed2ba9e9769f3880c2b9d7b6"},
{file = "black-22.10.0-1fixedarch-cp38-cp38-macosx_10_16_x86_64.whl", hash = "sha256:2644b5d63633702bc2c5f3754b1b475378fbbfb481f62319388235d0cd104c2d"},
{file = "black-22.10.0-1fixedarch-cp39-cp39-macosx_11_0_x86_64.whl", hash = "sha256:e41a86c6c650bcecc6633ee3180d80a025db041a8e2398dcc059b3afa8382cd4"},
{file = "black-22.10.0-cp310-cp310-macosx_11_0_arm64.whl", hash = "sha256:2039230db3c6c639bd84efe3292ec7b06e9214a2992cd9beb293d639c6402edb"},
{file = "black-22.10.0-cp310-cp310-manylinux_2_17_x86_64.manylinux2014_x86_64.whl", hash = "sha256:14ff67aec0a47c424bc99b71005202045dc09270da44a27848d534600ac64fc7"},
{file = "black-22.10.0-cp310-cp310-win_amd64.whl", hash = "sha256:819dc789f4498ecc91438a7de64427c73b45035e2e3680c92e18795a839ebb66"},
{file = "black-22.10.0-cp311-cp311-macosx_11_0_arm64.whl", hash = "sha256:5b9b29da4f564ba8787c119f37d174f2b69cdfdf9015b7d8c5c16121ddc054ae"},
{file = "black-22.10.0-cp311-cp311-manylinux_2_17_x86_64.manylinux2014_x86_64.whl", hash = "sha256:b8b49776299fece66bffaafe357d929ca9451450f5466e997a7285ab0fe28e3b"},
{file = "black-22.10.0-cp311-cp311-win_amd64.whl", hash = "sha256:21199526696b8f09c3997e2b4db8d0b108d801a348414264d2eb8eb2532e540d"},
{file = "black-22.10.0-cp37-cp37m-manylinux_2_17_x86_64.manylinux2014_x86_64.whl", hash = "sha256:1e464456d24e23d11fced2bc8c47ef66d471f845c7b7a42f3bd77bf3d1789650"},
{file = "black-22.10.0-cp37-cp37m-win_amd64.whl", hash = "sha256:9311e99228ae10023300ecac05be5a296f60d2fd10fff31cf5c1fa4ca4b1988d"},
{file = "black-22.10.0-cp38-cp38-macosx_11_0_arm64.whl", hash = "sha256:fba8a281e570adafb79f7755ac8721b6cf1bbf691186a287e990c7929c7692ff"},
{file = "black-22.10.0-cp38-cp38-manylinux_2_17_x86_64.manylinux2014_x86_64.whl", hash = "sha256:915ace4ff03fdfff953962fa672d44be269deb2eaf88499a0f8805221bc68c87"},
{file = "black-22.10.0-cp38-cp38-win_amd64.whl", hash = "sha256:444ebfb4e441254e87bad00c661fe32df9969b2bf224373a448d8aca2132b395"},
{file = "black-22.10.0-cp39-cp39-macosx_11_0_arm64.whl", hash = "sha256:974308c58d057a651d182208a484ce80a26dac0caef2895836a92dd6ebd725e0"},
{file = "black-22.10.0-cp39-cp39-manylinux_2_17_x86_64.manylinux2014_x86_64.whl", hash = "sha256:72ef3925f30e12a184889aac03d77d031056860ccae8a1e519f6cbb742736383"},
{file = "black-22.10.0-cp39-cp39-win_amd64.whl", hash = "sha256:432247333090c8c5366e69627ccb363bc58514ae3e63f7fc75c54b1ea80fa7de"},
{file = "black-22.10.0-py3-none-any.whl", hash = "sha256:c957b2b4ea88587b46cf49d1dc17681c1e672864fd7af32fc1e9664d572b3458"},
{file = "black-22.10.0.tar.gz", hash = "sha256:f513588da599943e0cde4e32cc9879e825d58720d6557062d1098c5ad80080e1"},
]
bleach = [
{file = "bleach-5.0.1-py3-none-any.whl", hash = "sha256:085f7f33c15bd408dd9b17a4ad77c577db66d76203e5984b1bd59baeee948b2a"},
{file = "bleach-5.0.1.tar.gz", hash = "sha256:0d03255c47eb9bd2f26aa9bb7f2107732e7e8fe195ca2f64709fcf3b0a4a085c"},
]
blis = [
{file = "blis-0.7.9-cp310-cp310-macosx_10_9_x86_64.whl", hash = "sha256:b3ea73707a7938304c08363a0b990600e579bfb52dece7c674eafac4bf2df9f7"},
{file = "blis-0.7.9-cp310-cp310-macosx_11_0_arm64.whl", hash = "sha256:e85993364cae82707bfe7e637bee64ec96e232af31301e5c81a351778cb394b9"},
{file = "blis-0.7.9-cp310-cp310-manylinux_2_17_aarch64.manylinux2014_aarch64.whl", hash = "sha256:d205a7e69523e2bacdd67ea906b82b84034067e0de83b33bd83eb96b9e844ae3"},
{file = "blis-0.7.9-cp310-cp310-manylinux_2_17_x86_64.manylinux2014_x86_64.whl", hash = "sha256:b9737035636452fb6d08e7ab79e5a9904be18a0736868a129179cd9f9ab59825"},
{file = "blis-0.7.9-cp310-cp310-win_amd64.whl", hash = "sha256:d3882b4f44a33367812b5e287c0690027092830ffb1cce124b02f64e761819a4"},
{file = "blis-0.7.9-cp311-cp311-macosx_10_9_x86_64.whl", hash = "sha256:3dbb44311029263a6f65ed55a35f970aeb1d20b18bfac4c025de5aadf7889a8c"},
{file = "blis-0.7.9-cp311-cp311-macosx_11_0_arm64.whl", hash = "sha256:6fd5941bd5a21082b19d1dd0f6d62cd35609c25eb769aa3457d9877ef2ce37a9"},
{file = "blis-0.7.9-cp311-cp311-manylinux_2_17_aarch64.manylinux2014_aarch64.whl", hash = "sha256:97ad55e9ef36e4ff06b35802d0cf7bfc56f9697c6bc9427f59c90956bb98377d"},
{file = "blis-0.7.9-cp311-cp311-manylinux_2_17_x86_64.manylinux2014_x86_64.whl", hash = "sha256:f7b6315d7b1ac5546bc0350f5f8d7cc064438d23db19a5c21aaa6ae7d93c1ab5"},
{file = "blis-0.7.9-cp311-cp311-win_amd64.whl", hash = "sha256:5fd46c649acd1920482b4f5556d1c88693cba9bf6a494a020b00f14b42e1132f"},
{file = "blis-0.7.9-cp36-cp36m-manylinux_2_17_aarch64.manylinux2014_aarch64.whl", hash = "sha256:db2959560dcb34e912dad0e0d091f19b05b61363bac15d78307c01334a4e5d9d"},
{file = "blis-0.7.9-cp36-cp36m-manylinux_2_17_x86_64.manylinux2014_x86_64.whl", hash = "sha256:c0521231bc95ab522f280da3bbb096299c910a62cac2376d48d4a1d403c54393"},
{file = "blis-0.7.9-cp36-cp36m-win_amd64.whl", hash = "sha256:d811e88480203d75e6e959f313fdbf3326393b4e2b317067d952347f5c56216e"},
{file = "blis-0.7.9-cp37-cp37m-macosx_10_9_x86_64.whl", hash = "sha256:5cb1db88ab629ccb39eac110b742b98e3511d48ce9caa82ca32609d9169a9c9c"},
{file = "blis-0.7.9-cp37-cp37m-manylinux_2_17_aarch64.manylinux2014_aarch64.whl", hash = "sha256:c399a03de4059bf8e700b921f9ff5d72b2a86673616c40db40cd0592051bdd07"},
{file = "blis-0.7.9-cp37-cp37m-manylinux_2_17_x86_64.manylinux2014_x86_64.whl", hash = "sha256:d4eb70a79562a211bd2e6b6db63f1e2eed32c0ab3e9ef921d86f657ae8375845"},
{file = "blis-0.7.9-cp37-cp37m-win_amd64.whl", hash = "sha256:3e3f95e035c7456a1f5f3b5a3cfe708483a00335a3a8ad2211d57ba4d5f749a5"},
{file = "blis-0.7.9-cp38-cp38-macosx_10_9_x86_64.whl", hash = "sha256:179037cb5e6744c2e93b6b5facc6e4a0073776d514933c3db1e1f064a3253425"},
{file = "blis-0.7.9-cp38-cp38-macosx_11_0_arm64.whl", hash = "sha256:d0e82a6e0337d5231129a4e8b36978fa7b973ad3bb0257fd8e3714a9b35ceffd"},
{file = "blis-0.7.9-cp38-cp38-manylinux_2_17_aarch64.manylinux2014_aarch64.whl", hash = "sha256:6d12475e588a322e66a18346a3faa9eb92523504042e665c193d1b9b0b3f0482"},
{file = "blis-0.7.9-cp38-cp38-manylinux_2_17_x86_64.manylinux2014_x86_64.whl", hash = "sha256:4d5755ef37a573647be62684ca1545698879d07321f1e5b89a4fd669ce355eb0"},
{file = "blis-0.7.9-cp38-cp38-win_amd64.whl", hash = "sha256:b8a1fcd2eb267301ab13e1e4209c165d172cdf9c0c9e08186a9e234bf91daa16"},
{file = "blis-0.7.9-cp39-cp39-macosx_10_9_x86_64.whl", hash = "sha256:8275f6b6eee714b85f00bf882720f508ed6a60974bcde489715d37fd35529da8"},
{file = "blis-0.7.9-cp39-cp39-macosx_11_0_arm64.whl", hash = "sha256:7417667c221e29fe8662c3b2ff9bc201c6a5214bbb5eb6cc290484868802258d"},
{file = "blis-0.7.9-cp39-cp39-manylinux_2_17_aarch64.manylinux2014_aarch64.whl", hash = "sha256:b5f4691bf62013eccc167c38a85c09a0bf0c6e3e80d4c2229cdf2668c1124eb0"},
{file = "blis-0.7.9-cp39-cp39-manylinux_2_17_x86_64.manylinux2014_x86_64.whl", hash = "sha256:f5cec812ee47b29107eb36af9b457be7191163eab65d61775ed63538232c59d5"},
{file = "blis-0.7.9-cp39-cp39-win_amd64.whl", hash = "sha256:d81c3f627d33545fc25c9dcb5fee66c476d89288a27d63ac16ea63453401ffd5"},
{file = "blis-0.7.9.tar.gz", hash = "sha256:29ef4c25007785a90ffc2f0ab3d3bd3b75cd2d7856a9a482b7d0dac8d511a09d"},
]
boto3 = [
{file = "boto3-1.26.17-py3-none-any.whl", hash = "sha256:c39b7e87b27b00dcf452b2fc80252d311e275036f3d48464af34d0123077f985"},
{file = "boto3-1.26.17.tar.gz", hash = "sha256:bb40a9788dd2234851cdd1110eec0e3f6b3af6b98280924fa44c25199ced5737"},
]
botocore = [
{file = "botocore-1.29.17-py3-none-any.whl", hash = "sha256:d4bab7d42acdb18effa33fee53d137b8b1bdedc2da196428a2d1e04a123778bc"},
{file = "botocore-1.29.17.tar.gz", hash = "sha256:4be7ca8c581dbc6e8584876c4347dcc4f4bc6aa6e6e8131901fc11816fc8151b"},
]
cachetools = [
{file = "cachetools-5.2.0-py3-none-any.whl", hash = "sha256:f9f17d2aec496a9aa6b76f53e3b614c965223c061982d434d160f930c698a9db"},
{file = "cachetools-5.2.0.tar.gz", hash = "sha256:6a94c6402995a99c3970cc7e4884bb60b4a8639938157eeed436098bf9831757"},
]
catalogue = [
{file = "catalogue-2.0.8-py3-none-any.whl", hash = "sha256:2d786e229d8d202b4f8a2a059858e45a2331201d831e39746732daa704b99f69"},
{file = "catalogue-2.0.8.tar.gz", hash = "sha256:b325c77659208bfb6af1b0d93b1a1aa4112e1bb29a4c5ced816758a722f0e388"},
]
catboost = [
{file = "catboost-1.1.1-cp310-none-macosx_10_6_universal2.whl", hash = "sha256:93532f6807228f74db9c8184a0893ab222232d23fc5b3db534e2d8fedbba42cf"},
{file = "catboost-1.1.1-cp310-none-manylinux1_x86_64.whl", hash = "sha256:7c7364d79d5ff9deb56956560ba91a1b62b84204961d540bffd97f7b995e8cba"},
{file = "catboost-1.1.1-cp310-none-win_amd64.whl", hash = "sha256:5ec0c9bd65e53ae6c26d17c06f9c28e4febbd7cbdeb858460eb3d34249a10f30"},
{file = "catboost-1.1.1-cp36-none-macosx_10_6_universal2.whl", hash = "sha256:60acc4448eb45242f4d30aea6ccdf45bfaa8646bbc4ede3200cf25ba0d6bcf3d"},
{file = "catboost-1.1.1-cp36-none-manylinux1_x86_64.whl", hash = "sha256:b7443b40b5ddb141c6d14bff16c13f7cf4852893b57d7eda5dff30fb7517e14d"},
{file = "catboost-1.1.1-cp36-none-win_amd64.whl", hash = "sha256:190828590270e3dea5fb58f0fd13715ee2324f6ee321866592c422a1da141961"},
{file = "catboost-1.1.1-cp37-none-macosx_10_6_universal2.whl", hash = "sha256:a2fe4d08a360c3c3cabfa3a94c586f2261b93a3fff043ae2b43d2d4de121c2ce"},
{file = "catboost-1.1.1-cp37-none-manylinux1_x86_64.whl", hash = "sha256:4e350c40920dbd9644f1c7b88cb74cb8b96f1ecbbd7c12f6223964465d83b968"},
{file = "catboost-1.1.1-cp37-none-win_amd64.whl", hash = "sha256:0033569f2e6314a04a84ec83eecd39f77402426b52571b78991e629d7252c6f7"},
{file = "catboost-1.1.1-cp38-none-macosx_10_6_universal2.whl", hash = "sha256:454aae50922b10172b94971033d4b0607128a2e2ca8a5845cf8879ea28d80942"},
{file = "catboost-1.1.1-cp38-none-manylinux1_x86_64.whl", hash = "sha256:3fd12d9f1f89440292c63b242ccabdab012d313250e2b1e8a779d6618c734b32"},
{file = "catboost-1.1.1-cp38-none-win_amd64.whl", hash = "sha256:840348bf56dd11f6096030208601cbce87f1e6426ef33140fb6cc97bceb5fef3"},
{file = "catboost-1.1.1-cp39-none-macosx_10_6_universal2.whl", hash = "sha256:9e7c47050c8840ccaff4d394907d443bda01280a30778ae9d71939a7528f5ae3"},
{file = "catboost-1.1.1-cp39-none-manylinux1_x86_64.whl", hash = "sha256:a60ae2630f7b3752f262515a51b265521a4993df75dea26fa60777ec6e479395"},
{file = "catboost-1.1.1-cp39-none-win_amd64.whl", hash = "sha256:156264dbe9e841cb0b6333383e928cb8f65df4d00429a9771eb8b06b9bcfa17c"},
]
causal-learn = [
{file = "causal-learn-0.1.3.0.tar.gz", hash = "sha256:8242bced95e11eb4b4ee5f8085c528a25496d20c87bd5f3fcdb17d4678d7de63"},
{file = "causal_learn-0.1.3.0-py3-none-any.whl", hash = "sha256:d7271b0a60e839b725735373c4c5c012446dd216f17cc4b46aed550e08054d72"},
]
causalml = []
certifi = [
{file = "certifi-2022.9.24-py3-none-any.whl", hash = "sha256:90c1a32f1d68f940488354e36370f6cca89f0f106db09518524c88d6ed83f382"},
{file = "certifi-2022.9.24.tar.gz", hash = "sha256:0d9c601124e5a6ba9712dbc60d9c53c21e34f5f641fe83002317394311bdce14"},
]
cffi = [
{file = "cffi-1.15.1-cp27-cp27m-macosx_10_9_x86_64.whl", hash = "sha256:a66d3508133af6e8548451b25058d5812812ec3798c886bf38ed24a98216fab2"},
{file = "cffi-1.15.1-cp27-cp27m-manylinux1_i686.whl", hash = "sha256:470c103ae716238bbe698d67ad020e1db9d9dba34fa5a899b5e21577e6d52ed2"},
{file = "cffi-1.15.1-cp27-cp27m-manylinux1_x86_64.whl", hash = "sha256:9ad5db27f9cabae298d151c85cf2bad1d359a1b9c686a275df03385758e2f914"},
{file = "cffi-1.15.1-cp27-cp27m-win32.whl", hash = "sha256:b3bbeb01c2b273cca1e1e0c5df57f12dce9a4dd331b4fa1635b8bec26350bde3"},
{file = "cffi-1.15.1-cp27-cp27m-win_amd64.whl", hash = "sha256:e00b098126fd45523dd056d2efba6c5a63b71ffe9f2bbe1a4fe1716e1d0c331e"},
{file = "cffi-1.15.1-cp27-cp27mu-manylinux1_i686.whl", hash = "sha256:d61f4695e6c866a23a21acab0509af1cdfd2c013cf256bbf5b6b5e2695827162"},
{file = "cffi-1.15.1-cp27-cp27mu-manylinux1_x86_64.whl", hash = "sha256:ed9cb427ba5504c1dc15ede7d516b84757c3e3d7868ccc85121d9310d27eed0b"},
{file = "cffi-1.15.1-cp310-cp310-macosx_10_9_x86_64.whl", hash = "sha256:39d39875251ca8f612b6f33e6b1195af86d1b3e60086068be9cc053aa4376e21"},
{file = "cffi-1.15.1-cp310-cp310-macosx_11_0_arm64.whl", hash = "sha256:285d29981935eb726a4399badae8f0ffdff4f5050eaa6d0cfc3f64b857b77185"},
{file = "cffi-1.15.1-cp310-cp310-manylinux_2_12_i686.manylinux2010_i686.manylinux_2_17_i686.manylinux2014_i686.whl", hash = "sha256:3eb6971dcff08619f8d91607cfc726518b6fa2a9eba42856be181c6d0d9515fd"},
{file = "cffi-1.15.1-cp310-cp310-manylinux_2_17_aarch64.manylinux2014_aarch64.whl", hash = "sha256:21157295583fe8943475029ed5abdcf71eb3911894724e360acff1d61c1d54bc"},
{file = "cffi-1.15.1-cp310-cp310-manylinux_2_17_ppc64le.manylinux2014_ppc64le.whl", hash = "sha256:5635bd9cb9731e6d4a1132a498dd34f764034a8ce60cef4f5319c0541159392f"},
{file = "cffi-1.15.1-cp310-cp310-manylinux_2_17_s390x.manylinux2014_s390x.whl", hash = "sha256:2012c72d854c2d03e45d06ae57f40d78e5770d252f195b93f581acf3ba44496e"},
{file = "cffi-1.15.1-cp310-cp310-manylinux_2_17_x86_64.manylinux2014_x86_64.whl", hash = "sha256:dd86c085fae2efd48ac91dd7ccffcfc0571387fe1193d33b6394db7ef31fe2a4"},
{file = "cffi-1.15.1-cp310-cp310-musllinux_1_1_i686.whl", hash = "sha256:fa6693661a4c91757f4412306191b6dc88c1703f780c8234035eac011922bc01"},
{file = "cffi-1.15.1-cp310-cp310-musllinux_1_1_x86_64.whl", hash = "sha256:59c0b02d0a6c384d453fece7566d1c7e6b7bae4fc5874ef2ef46d56776d61c9e"},
{file = "cffi-1.15.1-cp310-cp310-win32.whl", hash = "sha256:cba9d6b9a7d64d4bd46167096fc9d2f835e25d7e4c121fb2ddfc6528fb0413b2"},
{file = "cffi-1.15.1-cp310-cp310-win_amd64.whl", hash = "sha256:ce4bcc037df4fc5e3d184794f27bdaab018943698f4ca31630bc7f84a7b69c6d"},
{file = "cffi-1.15.1-cp311-cp311-macosx_10_9_x86_64.whl", hash = "sha256:3d08afd128ddaa624a48cf2b859afef385b720bb4b43df214f85616922e6a5ac"},
{file = "cffi-1.15.1-cp311-cp311-macosx_11_0_arm64.whl", hash = "sha256:3799aecf2e17cf585d977b780ce79ff0dc9b78d799fc694221ce814c2c19db83"},
{file = "cffi-1.15.1-cp311-cp311-manylinux_2_12_i686.manylinux2010_i686.manylinux_2_17_i686.manylinux2014_i686.whl", hash = "sha256:a591fe9e525846e4d154205572a029f653ada1a78b93697f3b5a8f1f2bc055b9"},
{file = "cffi-1.15.1-cp311-cp311-manylinux_2_17_aarch64.manylinux2014_aarch64.whl", hash = "sha256:3548db281cd7d2561c9ad9984681c95f7b0e38881201e157833a2342c30d5e8c"},
{file = "cffi-1.15.1-cp311-cp311-manylinux_2_17_ppc64le.manylinux2014_ppc64le.whl", hash = "sha256:91fc98adde3d7881af9b59ed0294046f3806221863722ba7d8d120c575314325"},
{file = "cffi-1.15.1-cp311-cp311-manylinux_2_17_x86_64.manylinux2014_x86_64.whl", hash = "sha256:94411f22c3985acaec6f83c6df553f2dbe17b698cc7f8ae751ff2237d96b9e3c"},
{file = "cffi-1.15.1-cp311-cp311-musllinux_1_1_i686.whl", hash = "sha256:03425bdae262c76aad70202debd780501fabeaca237cdfddc008987c0e0f59ef"},
{file = "cffi-1.15.1-cp311-cp311-musllinux_1_1_x86_64.whl", hash = "sha256:cc4d65aeeaa04136a12677d3dd0b1c0c94dc43abac5860ab33cceb42b801c1e8"},
{file = "cffi-1.15.1-cp311-cp311-win32.whl", hash = "sha256:a0f100c8912c114ff53e1202d0078b425bee3649ae34d7b070e9697f93c5d52d"},
{file = "cffi-1.15.1-cp311-cp311-win_amd64.whl", hash = "sha256:04ed324bda3cda42b9b695d51bb7d54b680b9719cfab04227cdd1e04e5de3104"},
{file = "cffi-1.15.1-cp36-cp36m-macosx_10_9_x86_64.whl", hash = "sha256:50a74364d85fd319352182ef59c5c790484a336f6db772c1a9231f1c3ed0cbd7"},
{file = "cffi-1.15.1-cp36-cp36m-manylinux_2_17_aarch64.manylinux2014_aarch64.whl", hash = "sha256:e263d77ee3dd201c3a142934a086a4450861778baaeeb45db4591ef65550b0a6"},
{file = "cffi-1.15.1-cp36-cp36m-manylinux_2_17_ppc64le.manylinux2014_ppc64le.whl", hash = "sha256:cec7d9412a9102bdc577382c3929b337320c4c4c4849f2c5cdd14d7368c5562d"},
{file = "cffi-1.15.1-cp36-cp36m-manylinux_2_17_s390x.manylinux2014_s390x.whl", hash = "sha256:4289fc34b2f5316fbb762d75362931e351941fa95fa18789191b33fc4cf9504a"},
{file = "cffi-1.15.1-cp36-cp36m-manylinux_2_5_i686.manylinux1_i686.whl", hash = "sha256:173379135477dc8cac4bc58f45db08ab45d228b3363adb7af79436135d028405"},
{file = "cffi-1.15.1-cp36-cp36m-manylinux_2_5_x86_64.manylinux1_x86_64.whl", hash = "sha256:6975a3fac6bc83c4a65c9f9fcab9e47019a11d3d2cf7f3c0d03431bf145a941e"},
{file = "cffi-1.15.1-cp36-cp36m-win32.whl", hash = "sha256:2470043b93ff09bf8fb1d46d1cb756ce6132c54826661a32d4e4d132e1977adf"},
{file = "cffi-1.15.1-cp36-cp36m-win_amd64.whl", hash = "sha256:30d78fbc8ebf9c92c9b7823ee18eb92f2e6ef79b45ac84db507f52fbe3ec4497"},
{file = "cffi-1.15.1-cp37-cp37m-macosx_10_9_x86_64.whl", hash = "sha256:198caafb44239b60e252492445da556afafc7d1e3ab7a1fb3f0584ef6d742375"},
{file = "cffi-1.15.1-cp37-cp37m-manylinux_2_12_i686.manylinux2010_i686.manylinux_2_17_i686.manylinux2014_i686.whl", hash = "sha256:5ef34d190326c3b1f822a5b7a45f6c4535e2f47ed06fec77d3d799c450b2651e"},
{file = "cffi-1.15.1-cp37-cp37m-manylinux_2_17_aarch64.manylinux2014_aarch64.whl", hash = "sha256:8102eaf27e1e448db915d08afa8b41d6c7ca7a04b7d73af6514df10a3e74bd82"},
{file = "cffi-1.15.1-cp37-cp37m-manylinux_2_17_ppc64le.manylinux2014_ppc64le.whl", hash = "sha256:5df2768244d19ab7f60546d0c7c63ce1581f7af8b5de3eb3004b9b6fc8a9f84b"},
{file = "cffi-1.15.1-cp37-cp37m-manylinux_2_17_s390x.manylinux2014_s390x.whl", hash = "sha256:a8c4917bd7ad33e8eb21e9a5bbba979b49d9a97acb3a803092cbc1133e20343c"},
{file = "cffi-1.15.1-cp37-cp37m-manylinux_2_17_x86_64.manylinux2014_x86_64.whl", hash = "sha256:0e2642fe3142e4cc4af0799748233ad6da94c62a8bec3a6648bf8ee68b1c7426"},
{file = "cffi-1.15.1-cp37-cp37m-win32.whl", hash = "sha256:e229a521186c75c8ad9490854fd8bbdd9a0c9aa3a524326b55be83b54d4e0ad9"},
{file = "cffi-1.15.1-cp37-cp37m-win_amd64.whl", hash = "sha256:a0b71b1b8fbf2b96e41c4d990244165e2c9be83d54962a9a1d118fd8657d2045"},
{file = "cffi-1.15.1-cp38-cp38-macosx_10_9_x86_64.whl", hash = "sha256:320dab6e7cb2eacdf0e658569d2575c4dad258c0fcc794f46215e1e39f90f2c3"},
{file = "cffi-1.15.1-cp38-cp38-manylinux_2_12_i686.manylinux2010_i686.manylinux_2_17_i686.manylinux2014_i686.whl", hash = "sha256:1e74c6b51a9ed6589199c787bf5f9875612ca4a8a0785fb2d4a84429badaf22a"},
{file = "cffi-1.15.1-cp38-cp38-manylinux_2_17_aarch64.manylinux2014_aarch64.whl", hash = "sha256:a5c84c68147988265e60416b57fc83425a78058853509c1b0629c180094904a5"},
{file = "cffi-1.15.1-cp38-cp38-manylinux_2_17_ppc64le.manylinux2014_ppc64le.whl", hash = "sha256:3b926aa83d1edb5aa5b427b4053dc420ec295a08e40911296b9eb1b6170f6cca"},
{file = "cffi-1.15.1-cp38-cp38-manylinux_2_17_s390x.manylinux2014_s390x.whl", hash = "sha256:87c450779d0914f2861b8526e035c5e6da0a3199d8f1add1a665e1cbc6fc6d02"},
{file = "cffi-1.15.1-cp38-cp38-manylinux_2_17_x86_64.manylinux2014_x86_64.whl", hash = "sha256:4f2c9f67e9821cad2e5f480bc8d83b8742896f1242dba247911072d4fa94c192"},
{file = "cffi-1.15.1-cp38-cp38-win32.whl", hash = "sha256:8b7ee99e510d7b66cdb6c593f21c043c248537a32e0bedf02e01e9553a172314"},
{file = "cffi-1.15.1-cp38-cp38-win_amd64.whl", hash = "sha256:00a9ed42e88df81ffae7a8ab6d9356b371399b91dbdf0c3cb1e84c03a13aceb5"},
{file = "cffi-1.15.1-cp39-cp39-macosx_10_9_x86_64.whl", hash = "sha256:54a2db7b78338edd780e7ef7f9f6c442500fb0d41a5a4ea24fff1c929d5af585"},
{file = "cffi-1.15.1-cp39-cp39-macosx_11_0_arm64.whl", hash = "sha256:fcd131dd944808b5bdb38e6f5b53013c5aa4f334c5cad0c72742f6eba4b73db0"},
{file = "cffi-1.15.1-cp39-cp39-manylinux_2_12_i686.manylinux2010_i686.manylinux_2_17_i686.manylinux2014_i686.whl", hash = "sha256:7473e861101c9e72452f9bf8acb984947aa1661a7704553a9f6e4baa5ba64415"},
{file = "cffi-1.15.1-cp39-cp39-manylinux_2_17_aarch64.manylinux2014_aarch64.whl", hash = "sha256:6c9a799e985904922a4d207a94eae35c78ebae90e128f0c4e521ce339396be9d"},
{file = "cffi-1.15.1-cp39-cp39-manylinux_2_17_ppc64le.manylinux2014_ppc64le.whl", hash = "sha256:3bcde07039e586f91b45c88f8583ea7cf7a0770df3a1649627bf598332cb6984"},
{file = "cffi-1.15.1-cp39-cp39-manylinux_2_17_s390x.manylinux2014_s390x.whl", hash = "sha256:33ab79603146aace82c2427da5ca6e58f2b3f2fb5da893ceac0c42218a40be35"},
{file = "cffi-1.15.1-cp39-cp39-manylinux_2_17_x86_64.manylinux2014_x86_64.whl", hash = "sha256:5d598b938678ebf3c67377cdd45e09d431369c3b1a5b331058c338e201f12b27"},
{file = "cffi-1.15.1-cp39-cp39-musllinux_1_1_i686.whl", hash = "sha256:db0fbb9c62743ce59a9ff687eb5f4afbe77e5e8403d6697f7446e5f609976f76"},
{file = "cffi-1.15.1-cp39-cp39-musllinux_1_1_x86_64.whl", hash = "sha256:98d85c6a2bef81588d9227dde12db8a7f47f639f4a17c9ae08e773aa9c697bf3"},
{file = "cffi-1.15.1-cp39-cp39-win32.whl", hash = "sha256:40f4774f5a9d4f5e344f31a32b5096977b5d48560c5592e2f3d2c4374bd543ee"},
{file = "cffi-1.15.1-cp39-cp39-win_amd64.whl", hash = "sha256:70df4e3b545a17496c9b3f41f5115e69a4f2e77e94e1d2a8e1070bc0c38c8a3c"},
{file = "cffi-1.15.1.tar.gz", hash = "sha256:d400bfb9a37b1351253cb402671cea7e89bdecc294e8016a707f6d1d8ac934f9"},
]
charset-normalizer = [
{file = "charset-normalizer-2.1.1.tar.gz", hash = "sha256:5a3d016c7c547f69d6f81fb0db9449ce888b418b5b9952cc5e6e66843e9dd845"},
{file = "charset_normalizer-2.1.1-py3-none-any.whl", hash = "sha256:83e9a75d1911279afd89352c68b45348559d1fc0506b054b346651b5e7fee29f"},
]
click = [
{file = "click-8.1.3-py3-none-any.whl", hash = "sha256:bb4d8133cb15a609f44e8213d9b391b0809795062913b383c62be0ee95b1db48"},
{file = "click-8.1.3.tar.gz", hash = "sha256:7682dc8afb30297001674575ea00d1814d808d6a36af415a82bd481d37ba7b8e"},
]
cloudpickle = [
{file = "cloudpickle-2.2.0-py3-none-any.whl", hash = "sha256:7428798d5926d8fcbfd092d18d01a2a03daf8237d8fcdc8095d256b8490796f0"},
{file = "cloudpickle-2.2.0.tar.gz", hash = "sha256:3f4219469c55453cfe4737e564b67c2a149109dabf7f242478948b895f61106f"},
]
colorama = [
{file = "colorama-0.4.6-py2.py3-none-any.whl", hash = "sha256:4f1d9991f5acc0ca119f9d443620b77f9d6b33703e51011c16baf57afb285fc6"},
{file = "colorama-0.4.6.tar.gz", hash = "sha256:08695f5cb7ed6e0531a20572697297273c47b8cae5a63ffc6d6ed5c201be6e44"},
]
comm = [
{file = "comm-0.1.1-py3-none-any.whl", hash = "sha256:788a4ec961956c1cb2b0ba3c21f2458ff5757bb2f552032b140787af88d670a3"},
{file = "comm-0.1.1.tar.gz", hash = "sha256:f395ea64f4f261f35ffc2fbf80a62ec071375dac48cd3ea56092711e74dd063e"},
]
confection = [
{file = "confection-0.0.3-py3-none-any.whl", hash = "sha256:51af839c1240430421da2b248541ebc95f9d0ee385bcafa768b8acdbd2b0111d"},
{file = "confection-0.0.3.tar.gz", hash = "sha256:4fec47190057c43c9acbecb8b1b87a9bf31c469caa0d6888a5b9384432fdba5a"},
]
contourpy = [
{file = "contourpy-1.0.6-cp310-cp310-macosx_10_9_universal2.whl", hash = "sha256:613c665529899b5d9fade7e5d1760111a0b011231277a0d36c49f0d3d6914bd6"},
{file = "contourpy-1.0.6-cp310-cp310-macosx_10_9_x86_64.whl", hash = "sha256:78ced51807ccb2f45d4ea73aca339756d75d021069604c2fccd05390dc3c28eb"},
{file = "contourpy-1.0.6-cp310-cp310-macosx_11_0_arm64.whl", hash = "sha256:b3b1bd7577c530eaf9d2bc52d1a93fef50ac516a8b1062c3d1b9bcec9ebe329b"},
{file = "contourpy-1.0.6-cp310-cp310-manylinux_2_17_aarch64.manylinux2014_aarch64.whl", hash = "sha256:d8834c14b8c3dd849005e06703469db9bf96ba2d66a3f88ecc539c9a8982e0ee"},
{file = "contourpy-1.0.6-cp310-cp310-manylinux_2_17_i686.manylinux2014_i686.whl", hash = "sha256:f4052a8a4926d4468416fc7d4b2a7b2a3e35f25b39f4061a7e2a3a2748c4fc48"},
{file = "contourpy-1.0.6-cp310-cp310-manylinux_2_17_x86_64.manylinux2014_x86_64.whl", hash = "sha256:1c0e1308307a75e07d1f1b5f0f56b5af84538a5e9027109a7bcf6cb47c434e72"},
{file = "contourpy-1.0.6-cp310-cp310-musllinux_1_1_aarch64.whl", hash = "sha256:9fc4e7973ed0e1fe689435842a6e6b330eb7ccc696080dda9a97b1a1b78e41db"},
{file = "contourpy-1.0.6-cp310-cp310-musllinux_1_1_i686.whl", hash = "sha256:08e8d09d96219ace6cb596506fb9b64ea5f270b2fb9121158b976d88871fcfd1"},
{file = "contourpy-1.0.6-cp310-cp310-musllinux_1_1_x86_64.whl", hash = "sha256:f33da6b5d19ad1bb5e7ad38bb8ba5c426d2178928bc2b2c44e8823ea0ecb6ff3"},
{file = "contourpy-1.0.6-cp310-cp310-win32.whl", hash = "sha256:12a7dc8439544ed05c6553bf026d5e8fa7fad48d63958a95d61698df0e00092b"},
{file = "contourpy-1.0.6-cp310-cp310-win_amd64.whl", hash = "sha256:eadad75bf91897f922e0fb3dca1b322a58b1726a953f98c2e5f0606bd8408621"},
{file = "contourpy-1.0.6-cp311-cp311-macosx_10_9_universal2.whl", hash = "sha256:913bac9d064cff033cf3719e855d4f1db9f1c179e0ecf3ba9fdef21c21c6a16a"},
{file = "contourpy-1.0.6-cp311-cp311-macosx_10_9_x86_64.whl", hash = "sha256:46deb310a276cc5c1fd27958e358cce68b1e8a515fa5a574c670a504c3a3fe30"},
{file = "contourpy-1.0.6-cp311-cp311-macosx_11_0_arm64.whl", hash = "sha256:b64f747e92af7da3b85631a55d68c45a2d728b4036b03cdaba4bd94bcc85bd6f"},
{file = "contourpy-1.0.6-cp311-cp311-manylinux_2_17_aarch64.manylinux2014_aarch64.whl", hash = "sha256:50627bf76abb6ba291ad08db583161939c2c5fab38c38181b7833423ab9c7de3"},
{file = "contourpy-1.0.6-cp311-cp311-manylinux_2_17_i686.manylinux2014_i686.whl", hash = "sha256:358f6364e4873f4d73360b35da30066f40387dd3c427a3e5432c6b28dd24a8fa"},
{file = "contourpy-1.0.6-cp311-cp311-manylinux_2_17_x86_64.manylinux2014_x86_64.whl", hash = "sha256:c78bfbc1a7bff053baf7e508449d2765964d67735c909b583204e3240a2aca45"},
{file = "contourpy-1.0.6-cp311-cp311-musllinux_1_1_aarch64.whl", hash = "sha256:e43255a83835a129ef98f75d13d643844d8c646b258bebd11e4a0975203e018f"},
{file = "contourpy-1.0.6-cp311-cp311-musllinux_1_1_i686.whl", hash = "sha256:375d81366afd547b8558c4720337218345148bc2fcffa3a9870cab82b29667f2"},
{file = "contourpy-1.0.6-cp311-cp311-musllinux_1_1_x86_64.whl", hash = "sha256:b98c820608e2dca6442e786817f646d11057c09a23b68d2b3737e6dcb6e4a49b"},
{file = "contourpy-1.0.6-cp311-cp311-win32.whl", hash = "sha256:0e4854cc02006ad6684ce092bdadab6f0912d131f91c2450ce6dbdea78ee3c0b"},
{file = "contourpy-1.0.6-cp311-cp311-win_amd64.whl", hash = "sha256:d2eff2af97ea0b61381828b1ad6cd249bbd41d280e53aea5cccd7b2b31b8225c"},
{file = "contourpy-1.0.6-cp37-cp37m-macosx_10_9_x86_64.whl", hash = "sha256:5b117d29433fc8393b18a696d794961464e37afb34a6eeb8b2c37b5f4128a83e"},
{file = "contourpy-1.0.6-cp37-cp37m-manylinux_2_17_aarch64.manylinux2014_aarch64.whl", hash = "sha256:341330ed19074f956cb20877ad8d2ae50e458884bfa6a6df3ae28487cc76c768"},
{file = "contourpy-1.0.6-cp37-cp37m-manylinux_2_17_i686.manylinux2014_i686.whl", hash = "sha256:371f6570a81dfdddbb837ba432293a63b4babb942a9eb7aaa699997adfb53278"},
{file = "contourpy-1.0.6-cp37-cp37m-manylinux_2_17_x86_64.manylinux2014_x86_64.whl", hash = "sha256:9447c45df407d3ecb717d837af3b70cfef432138530712263730783b3d016512"},
{file = "contourpy-1.0.6-cp37-cp37m-musllinux_1_1_aarch64.whl", hash = "sha256:730c27978a0003b47b359935478b7d63fd8386dbb2dcd36c1e8de88cbfc1e9de"},
{file = "contourpy-1.0.6-cp37-cp37m-musllinux_1_1_i686.whl", hash = "sha256:da1ef35fd79be2926ba80fbb36327463e3656c02526e9b5b4c2b366588b74d9a"},
{file = "contourpy-1.0.6-cp37-cp37m-musllinux_1_1_x86_64.whl", hash = "sha256:cd2bc0c8f2e8de7dd89a7f1c10b8844e291bca17d359373203ef2e6100819edd"},
{file = "contourpy-1.0.6-cp37-cp37m-win32.whl", hash = "sha256:3a1917d3941dd58732c449c810fa7ce46cc305ce9325a11261d740118b85e6f3"},
{file = "contourpy-1.0.6-cp37-cp37m-win_amd64.whl", hash = "sha256:06ca79e1efbbe2df795822df2fa173d1a2b38b6e0f047a0ec7903fbca1d1847e"},
{file = "contourpy-1.0.6-cp38-cp38-macosx_10_9_universal2.whl", hash = "sha256:e626cefff8491bce356221c22af5a3ea528b0b41fbabc719c00ae233819ea0bf"},
{file = "contourpy-1.0.6-cp38-cp38-macosx_10_9_x86_64.whl", hash = "sha256:dbe6fe7a1166b1ddd7b6d887ea6fa8389d3f28b5ed3f73a8f40ece1fc5a3d340"},
{file = "contourpy-1.0.6-cp38-cp38-macosx_11_0_arm64.whl", hash = "sha256:e13b31d1b4b68db60b3b29f8e337908f328c7f05b9add4b1b5c74e0691180109"},
{file = "contourpy-1.0.6-cp38-cp38-manylinux_2_17_aarch64.manylinux2014_aarch64.whl", hash = "sha256:a79d239fc22c3b8d9d3de492aa0c245533f4f4c7608e5749af866949c0f1b1b9"},
{file = "contourpy-1.0.6-cp38-cp38-manylinux_2_17_i686.manylinux2014_i686.whl", hash = "sha256:9e8e686a6db92a46111a1ee0ee6f7fbfae4048f0019de207149f43ac1812cf95"},
{file = "contourpy-1.0.6-cp38-cp38-manylinux_2_17_x86_64.manylinux2014_x86_64.whl", hash = "sha256:acd2bd02f1a7adff3a1f33e431eb96ab6d7987b039d2946a9b39fe6fb16a1036"},
{file = "contourpy-1.0.6-cp38-cp38-musllinux_1_1_aarch64.whl", hash = "sha256:03d1b9c6b44a9e30d554654c72be89af94fab7510b4b9f62356c64c81cec8b7d"},
{file = "contourpy-1.0.6-cp38-cp38-musllinux_1_1_i686.whl", hash = "sha256:b48d94386f1994db7c70c76b5808c12e23ed7a4ee13693c2fc5ab109d60243c0"},
{file = "contourpy-1.0.6-cp38-cp38-musllinux_1_1_x86_64.whl", hash = "sha256:208bc904889c910d95aafcf7be9e677726df9ef71e216780170dbb7e37d118fa"},
{file = "contourpy-1.0.6-cp38-cp38-win32.whl", hash = "sha256:444fb776f58f4906d8d354eb6f6ce59d0a60f7b6a720da6c1ccb839db7c80eb9"},
{file = "contourpy-1.0.6-cp38-cp38-win_amd64.whl", hash = "sha256:9bc407a6af672da20da74823443707e38ece8b93a04009dca25856c2d9adadb1"},
{file = "contourpy-1.0.6-cp39-cp39-macosx_10_9_universal2.whl", hash = "sha256:aa4674cf3fa2bd9c322982644967f01eed0c91bb890f624e0e0daf7a5c3383e9"},
{file = "contourpy-1.0.6-cp39-cp39-macosx_10_9_x86_64.whl", hash = "sha256:6f56515e7c6fae4529b731f6c117752247bef9cdad2b12fc5ddf8ca6a50965a5"},
{file = "contourpy-1.0.6-cp39-cp39-macosx_11_0_arm64.whl", hash = "sha256:344cb3badf6fc7316ad51835f56ac387bdf86c8e1b670904f18f437d70da4183"},
{file = "contourpy-1.0.6-cp39-cp39-manylinux_2_17_aarch64.manylinux2014_aarch64.whl", hash = "sha256:0b1e66346acfb17694d46175a0cea7d9036f12ed0c31dfe86f0f405eedde2bdd"},
{file = "contourpy-1.0.6-cp39-cp39-manylinux_2_17_i686.manylinux2014_i686.whl", hash = "sha256:8468b40528fa1e15181cccec4198623b55dcd58306f8815a793803f51f6c474a"},
{file = "contourpy-1.0.6-cp39-cp39-manylinux_2_17_x86_64.manylinux2014_x86_64.whl", hash = "sha256:1dedf4c64185a216c35eb488e6f433297c660321275734401760dafaeb0ad5c2"},
{file = "contourpy-1.0.6-cp39-cp39-musllinux_1_1_aarch64.whl", hash = "sha256:494efed2c761f0f37262815f9e3c4bb9917c5c69806abdee1d1cb6611a7174a0"},
{file = "contourpy-1.0.6-cp39-cp39-musllinux_1_1_i686.whl", hash = "sha256:75a2e638042118118ab39d337da4c7908c1af74a8464cad59f19fbc5bbafec9b"},
{file = "contourpy-1.0.6-cp39-cp39-musllinux_1_1_x86_64.whl", hash = "sha256:a628bba09ba72e472bf7b31018b6281fd4cc903f0888049a3724afba13b6e0b8"},
{file = "contourpy-1.0.6-cp39-cp39-win32.whl", hash = "sha256:e1739496c2f0108013629aa095cc32a8c6363444361960c07493818d0dea2da4"},
{file = "contourpy-1.0.6-cp39-cp39-win_amd64.whl", hash = "sha256:a457ee72d9032e86730f62c5eeddf402e732fdf5ca8b13b41772aa8ae13a4563"},
{file = "contourpy-1.0.6-pp37-pypy37_pp73-macosx_10_9_x86_64.whl", hash = "sha256:d912f0154a20a80ea449daada904a7eb6941c83281a9fab95de50529bfc3a1da"},
{file = "contourpy-1.0.6-pp37-pypy37_pp73-manylinux_2_17_aarch64.manylinux2014_aarch64.whl", hash = "sha256:4081918147fc4c29fad328d5066cfc751da100a1098398742f9f364be63803fc"},
{file = "contourpy-1.0.6-pp37-pypy37_pp73-manylinux_2_17_i686.manylinux2014_i686.whl", hash = "sha256:0537cc1195245bbe24f2913d1f9211b8f04eb203de9044630abd3664c6cc339c"},
{file = "contourpy-1.0.6-pp37-pypy37_pp73-manylinux_2_17_x86_64.manylinux2014_x86_64.whl", hash = "sha256:dcd556c8fc37a342dd636d7eef150b1399f823a4462f8c968e11e1ebeabee769"},
{file = "contourpy-1.0.6-pp37-pypy37_pp73-win_amd64.whl", hash = "sha256:f6ca38dd8d988eca8f07305125dec6f54ac1c518f1aaddcc14d08c01aebb6efc"},
{file = "contourpy-1.0.6-pp38-pypy38_pp73-macosx_10_9_x86_64.whl", hash = "sha256:c1baa49ab9fedbf19d40d93163b7d3e735d9cd8d5efe4cce9907902a6dad391f"},
{file = "contourpy-1.0.6-pp38-pypy38_pp73-manylinux_2_17_aarch64.manylinux2014_aarch64.whl", hash = "sha256:211dfe2bd43bf5791d23afbe23a7952e8ac8b67591d24be3638cabb648b3a6eb"},
{file = "contourpy-1.0.6-pp38-pypy38_pp73-manylinux_2_17_i686.manylinux2014_i686.whl", hash = "sha256:c38c6536c2d71ca2f7e418acaf5bca30a3af7f2a2fa106083c7d738337848dbe"},
{file = "contourpy-1.0.6-pp38-pypy38_pp73-manylinux_2_17_x86_64.manylinux2014_x86_64.whl", hash = "sha256:1b1ee48a130da4dd0eb8055bbab34abf3f6262957832fd575e0cab4979a15a41"},
{file = "contourpy-1.0.6-pp38-pypy38_pp73-win_amd64.whl", hash = "sha256:5641927cc5ae66155d0c80195dc35726eae060e7defc18b7ab27600f39dd1fe7"},
{file = "contourpy-1.0.6-pp39-pypy39_pp73-macosx_10_9_x86_64.whl", hash = "sha256:7ee394502026d68652c2824348a40bf50f31351a668977b51437131a90d777ea"},
{file = "contourpy-1.0.6-pp39-pypy39_pp73-manylinux_2_17_aarch64.manylinux2014_aarch64.whl", hash = "sha256:0b97454ed5b1368b66ed414c754cba15b9750ce69938fc6153679787402e4cdf"},
{file = "contourpy-1.0.6-pp39-pypy39_pp73-manylinux_2_17_i686.manylinux2014_i686.whl", hash = "sha256:0236875c5a0784215b49d00ebbe80c5b6b5d5244b3655a36dda88105334dea17"},
{file = "contourpy-1.0.6-pp39-pypy39_pp73-manylinux_2_17_x86_64.manylinux2014_x86_64.whl", hash = "sha256:84c593aeff7a0171f639da92cb86d24954bbb61f8a1b530f74eb750a14685832"},
{file = "contourpy-1.0.6-pp39-pypy39_pp73-win_amd64.whl", hash = "sha256:9b0e7fe7f949fb719b206548e5cde2518ffb29936afa4303d8a1c4db43dcb675"},
{file = "contourpy-1.0.6.tar.gz", hash = "sha256:6e459ebb8bb5ee4c22c19cc000174f8059981971a33ce11e17dddf6aca97a142"},
]
coverage = [
{file = "coverage-6.5.0-cp310-cp310-macosx_10_9_x86_64.whl", hash = "sha256:ef8674b0ee8cc11e2d574e3e2998aea5df5ab242e012286824ea3c6970580e53"},
{file = "coverage-6.5.0-cp310-cp310-macosx_11_0_arm64.whl", hash = "sha256:784f53ebc9f3fd0e2a3f6a78b2be1bd1f5575d7863e10c6e12504f240fd06660"},
{file = "coverage-6.5.0-cp310-cp310-manylinux_2_17_aarch64.manylinux2014_aarch64.whl", hash = "sha256:b4a5be1748d538a710f87542f22c2cad22f80545a847ad91ce45e77417293eb4"},
{file = "coverage-6.5.0-cp310-cp310-manylinux_2_5_i686.manylinux1_i686.manylinux_2_17_i686.manylinux2014_i686.whl", hash = "sha256:83516205e254a0cb77d2d7bb3632ee019d93d9f4005de31dca0a8c3667d5bc04"},
{file = "coverage-6.5.0-cp310-cp310-manylinux_2_5_x86_64.manylinux1_x86_64.manylinux_2_17_x86_64.manylinux2014_x86_64.whl", hash = "sha256:af4fffaffc4067232253715065e30c5a7ec6faac36f8fc8d6f64263b15f74db0"},
{file = "coverage-6.5.0-cp310-cp310-musllinux_1_1_aarch64.whl", hash = "sha256:97117225cdd992a9c2a5515db1f66b59db634f59d0679ca1fa3fe8da32749cae"},
{file = "coverage-6.5.0-cp310-cp310-musllinux_1_1_i686.whl", hash = "sha256:a1170fa54185845505fbfa672f1c1ab175446c887cce8212c44149581cf2d466"},
{file = "coverage-6.5.0-cp310-cp310-musllinux_1_1_x86_64.whl", hash = "sha256:11b990d520ea75e7ee8dcab5bc908072aaada194a794db9f6d7d5cfd19661e5a"},
{file = "coverage-6.5.0-cp310-cp310-win32.whl", hash = "sha256:5dbec3b9095749390c09ab7c89d314727f18800060d8d24e87f01fb9cfb40b32"},
{file = "coverage-6.5.0-cp310-cp310-win_amd64.whl", hash = "sha256:59f53f1dc5b656cafb1badd0feb428c1e7bc19b867479ff72f7a9dd9b479f10e"},
{file = "coverage-6.5.0-cp311-cp311-macosx_10_9_x86_64.whl", hash = "sha256:4a5375e28c5191ac38cca59b38edd33ef4cc914732c916f2929029b4bfb50795"},
{file = "coverage-6.5.0-cp311-cp311-manylinux_2_17_aarch64.manylinux2014_aarch64.whl", hash = "sha256:c4ed2820d919351f4167e52425e096af41bfabacb1857186c1ea32ff9983ed75"},
{file = "coverage-6.5.0-cp311-cp311-manylinux_2_5_i686.manylinux1_i686.manylinux_2_17_i686.manylinux2014_i686.whl", hash = "sha256:33a7da4376d5977fbf0a8ed91c4dffaaa8dbf0ddbf4c8eea500a2486d8bc4d7b"},
{file = "coverage-6.5.0-cp311-cp311-manylinux_2_5_x86_64.manylinux1_x86_64.manylinux_2_17_x86_64.manylinux2014_x86_64.whl", hash = "sha256:a8fb6cf131ac4070c9c5a3e21de0f7dc5a0fbe8bc77c9456ced896c12fcdad91"},
{file = "coverage-6.5.0-cp311-cp311-musllinux_1_1_aarch64.whl", hash = "sha256:a6b7d95969b8845250586f269e81e5dfdd8ff828ddeb8567a4a2eaa7313460c4"},
{file = "coverage-6.5.0-cp311-cp311-musllinux_1_1_i686.whl", hash = "sha256:1ef221513e6f68b69ee9e159506d583d31aa3567e0ae84eaad9d6ec1107dddaa"},
{file = "coverage-6.5.0-cp311-cp311-musllinux_1_1_x86_64.whl", hash = "sha256:cca4435eebea7962a52bdb216dec27215d0df64cf27fc1dd538415f5d2b9da6b"},
{file = "coverage-6.5.0-cp311-cp311-win32.whl", hash = "sha256:98e8a10b7a314f454d9eff4216a9a94d143a7ee65018dd12442e898ee2310578"},
{file = "coverage-6.5.0-cp311-cp311-win_amd64.whl", hash = "sha256:bc8ef5e043a2af066fa8cbfc6e708d58017024dc4345a1f9757b329a249f041b"},
{file = "coverage-6.5.0-cp37-cp37m-macosx_10_9_x86_64.whl", hash = "sha256:4433b90fae13f86fafff0b326453dd42fc9a639a0d9e4eec4d366436d1a41b6d"},
{file = "coverage-6.5.0-cp37-cp37m-manylinux_2_17_aarch64.manylinux2014_aarch64.whl", hash = "sha256:f4f05d88d9a80ad3cac6244d36dd89a3c00abc16371769f1340101d3cb899fc3"},
{file = "coverage-6.5.0-cp37-cp37m-manylinux_2_5_i686.manylinux1_i686.manylinux_2_17_i686.manylinux2014_i686.whl", hash = "sha256:94e2565443291bd778421856bc975d351738963071e9b8839ca1fc08b42d4bef"},
{file = "coverage-6.5.0-cp37-cp37m-manylinux_2_5_x86_64.manylinux1_x86_64.manylinux_2_17_x86_64.manylinux2014_x86_64.whl", hash = "sha256:027018943386e7b942fa832372ebc120155fd970837489896099f5cfa2890f79"},
{file = "coverage-6.5.0-cp37-cp37m-musllinux_1_1_aarch64.whl", hash = "sha256:255758a1e3b61db372ec2736c8e2a1fdfaf563977eedbdf131de003ca5779b7d"},
{file = "coverage-6.5.0-cp37-cp37m-musllinux_1_1_i686.whl", hash = "sha256:851cf4ff24062c6aec510a454b2584f6e998cada52d4cb58c5e233d07172e50c"},
{file = "coverage-6.5.0-cp37-cp37m-musllinux_1_1_x86_64.whl", hash = "sha256:12adf310e4aafddc58afdb04d686795f33f4d7a6fa67a7a9d4ce7d6ae24d949f"},
{file = "coverage-6.5.0-cp37-cp37m-win32.whl", hash = "sha256:b5604380f3415ba69de87a289a2b56687faa4fe04dbee0754bfcae433489316b"},
{file = "coverage-6.5.0-cp37-cp37m-win_amd64.whl", hash = "sha256:4a8dbc1f0fbb2ae3de73eb0bdbb914180c7abfbf258e90b311dcd4f585d44bd2"},
{file = "coverage-6.5.0-cp38-cp38-macosx_10_9_x86_64.whl", hash = "sha256:d900bb429fdfd7f511f868cedd03a6bbb142f3f9118c09b99ef8dc9bf9643c3c"},
{file = "coverage-6.5.0-cp38-cp38-macosx_11_0_arm64.whl", hash = "sha256:2198ea6fc548de52adc826f62cb18554caedfb1d26548c1b7c88d8f7faa8f6ba"},
{file = "coverage-6.5.0-cp38-cp38-manylinux_2_17_aarch64.manylinux2014_aarch64.whl", hash = "sha256:6c4459b3de97b75e3bd6b7d4b7f0db13f17f504f3d13e2a7c623786289dd670e"},
{file = "coverage-6.5.0-cp38-cp38-manylinux_2_5_i686.manylinux1_i686.manylinux_2_17_i686.manylinux2014_i686.whl", hash = "sha256:20c8ac5386253717e5ccc827caad43ed66fea0efe255727b1053a8154d952398"},
{file = "coverage-6.5.0-cp38-cp38-manylinux_2_5_x86_64.manylinux1_x86_64.manylinux_2_17_x86_64.manylinux2014_x86_64.whl", hash = "sha256:6b07130585d54fe8dff3d97b93b0e20290de974dc8177c320aeaf23459219c0b"},
{file = "coverage-6.5.0-cp38-cp38-musllinux_1_1_aarch64.whl", hash = "sha256:dbdb91cd8c048c2b09eb17713b0c12a54fbd587d79adcebad543bc0cd9a3410b"},
{file = "coverage-6.5.0-cp38-cp38-musllinux_1_1_i686.whl", hash = "sha256:de3001a203182842a4630e7b8d1a2c7c07ec1b45d3084a83d5d227a3806f530f"},
{file = "coverage-6.5.0-cp38-cp38-musllinux_1_1_x86_64.whl", hash = "sha256:e07f4a4a9b41583d6eabec04f8b68076ab3cd44c20bd29332c6572dda36f372e"},
{file = "coverage-6.5.0-cp38-cp38-win32.whl", hash = "sha256:6d4817234349a80dbf03640cec6109cd90cba068330703fa65ddf56b60223a6d"},
{file = "coverage-6.5.0-cp38-cp38-win_amd64.whl", hash = "sha256:7ccf362abd726b0410bf8911c31fbf97f09f8f1061f8c1cf03dfc4b6372848f6"},
{file = "coverage-6.5.0-cp39-cp39-macosx_10_9_x86_64.whl", hash = "sha256:633713d70ad6bfc49b34ead4060531658dc6dfc9b3eb7d8a716d5873377ab745"},
{file = "coverage-6.5.0-cp39-cp39-macosx_11_0_arm64.whl", hash = "sha256:95203854f974e07af96358c0b261f1048d8e1083f2de9b1c565e1be4a3a48cfc"},
{file = "coverage-6.5.0-cp39-cp39-manylinux_2_17_aarch64.manylinux2014_aarch64.whl", hash = "sha256:b9023e237f4c02ff739581ef35969c3739445fb059b060ca51771e69101efffe"},
{file = "coverage-6.5.0-cp39-cp39-manylinux_2_5_i686.manylinux1_i686.manylinux_2_17_i686.manylinux2014_i686.whl", hash = "sha256:265de0fa6778d07de30bcf4d9dc471c3dc4314a23a3c6603d356a3c9abc2dfcf"},
{file = "coverage-6.5.0-cp39-cp39-manylinux_2_5_x86_64.manylinux1_x86_64.manylinux_2_17_x86_64.manylinux2014_x86_64.whl", hash = "sha256:8f830ed581b45b82451a40faabb89c84e1a998124ee4212d440e9c6cf70083e5"},
{file = "coverage-6.5.0-cp39-cp39-musllinux_1_1_aarch64.whl", hash = "sha256:7b6be138d61e458e18d8e6ddcddd36dd96215edfe5f1168de0b1b32635839b62"},
{file = "coverage-6.5.0-cp39-cp39-musllinux_1_1_i686.whl", hash = "sha256:42eafe6778551cf006a7c43153af1211c3aaab658d4d66fa5fcc021613d02518"},
{file = "coverage-6.5.0-cp39-cp39-musllinux_1_1_x86_64.whl", hash = "sha256:723e8130d4ecc8f56e9a611e73b31219595baa3bb252d539206f7bbbab6ffc1f"},
{file = "coverage-6.5.0-cp39-cp39-win32.whl", hash = "sha256:d9ecf0829c6a62b9b573c7bb6d4dcd6ba8b6f80be9ba4fc7ed50bf4ac9aecd72"},
{file = "coverage-6.5.0-cp39-cp39-win_amd64.whl", hash = "sha256:fc2af30ed0d5ae0b1abdb4ebdce598eafd5b35397d4d75deb341a614d333d987"},
{file = "coverage-6.5.0-pp36.pp37.pp38-none-any.whl", hash = "sha256:1431986dac3923c5945271f169f59c45b8802a114c8f548d611f2015133df77a"},
{file = "coverage-6.5.0.tar.gz", hash = "sha256:f642e90754ee3e06b0e7e51bce3379590e76b7f76b708e1a71ff043f87025c84"},
]
cycler = [
{file = "cycler-0.11.0-py3-none-any.whl", hash = "sha256:3a27e95f763a428a739d2add979fa7494c912a32c17c4c38c4d5f082cad165a3"},
{file = "cycler-0.11.0.tar.gz", hash = "sha256:9c87405839a19696e837b3b818fed3f5f69f16f1eec1a1ad77e043dcea9c772f"},
]
cymem = [
{file = "cymem-2.0.7-cp310-cp310-macosx_10_9_x86_64.whl", hash = "sha256:4981fc9182cc1fe54bfedf5f73bfec3ce0c27582d9be71e130c46e35958beef0"},
{file = "cymem-2.0.7-cp310-cp310-macosx_11_0_arm64.whl", hash = "sha256:42aedfd2e77aa0518a24a2a60a2147308903abc8b13c84504af58539c39e52a3"},
{file = "cymem-2.0.7-cp310-cp310-manylinux_2_17_aarch64.manylinux2014_aarch64.whl", hash = "sha256:c183257dc5ab237b664f64156c743e788f562417c74ea58c5a3939fe2d48d6f6"},
{file = "cymem-2.0.7-cp310-cp310-manylinux_2_17_x86_64.manylinux2014_x86_64.whl", hash = "sha256:d18250f97eeb13af2e8b19d3cefe4bf743b963d93320b0a2e729771410fd8cf4"},
{file = "cymem-2.0.7-cp310-cp310-win_amd64.whl", hash = "sha256:864701e626b65eb2256060564ed8eb034ebb0a8f14ce3fbef337e88352cdee9f"},
{file = "cymem-2.0.7-cp311-cp311-macosx_10_9_x86_64.whl", hash = "sha256:314273be1f143da674388e0a125d409e2721fbf669c380ae27c5cbae4011e26d"},
{file = "cymem-2.0.7-cp311-cp311-macosx_11_0_arm64.whl", hash = "sha256:df543a36e7000808fe0a03d92fd6cd8bf23fa8737c3f7ae791a5386de797bf79"},
{file = "cymem-2.0.7-cp311-cp311-manylinux_2_17_aarch64.manylinux2014_aarch64.whl", hash = "sha256:9e5e1b7de7952d89508d07601b9e95b2244e70d7ef60fbc161b3ad68f22815f8"},
{file = "cymem-2.0.7-cp311-cp311-manylinux_2_17_x86_64.manylinux2014_x86_64.whl", hash = "sha256:2aa33f1dbd7ceda37970e174c38fd1cf106817a261aa58521ba9918156868231"},
{file = "cymem-2.0.7-cp311-cp311-win_amd64.whl", hash = "sha256:10178e402bb512b2686b8c2f41f930111e597237ca8f85cb583ea93822ef798d"},
{file = "cymem-2.0.7-cp36-cp36m-manylinux_2_17_aarch64.manylinux2014_aarch64.whl", hash = "sha256:a2971b7da5aa2e65d8fbbe9f2acfc19ff8e73f1896e3d6e1223cc9bf275a0207"},
{file = "cymem-2.0.7-cp36-cp36m-manylinux_2_17_x86_64.manylinux2014_x86_64.whl", hash = "sha256:85359ab7b490e6c897c04863704481600bd45188a0e2ca7375eb5db193e13cb7"},
{file = "cymem-2.0.7-cp36-cp36m-win_amd64.whl", hash = "sha256:0ac45088abffbae9b7db2c597f098de51b7e3c1023cb314e55c0f7f08440cf66"},
{file = "cymem-2.0.7-cp37-cp37m-macosx_10_9_x86_64.whl", hash = "sha256:26e5d5c6958855d2fe3d5629afe85a6aae5531abaa76f4bc21b9abf9caaccdfe"},
{file = "cymem-2.0.7-cp37-cp37m-manylinux_2_17_aarch64.manylinux2014_aarch64.whl", hash = "sha256:011039e12d3144ac1bf3a6b38f5722b817f0d6487c8184e88c891b360b69f533"},
{file = "cymem-2.0.7-cp37-cp37m-manylinux_2_17_x86_64.manylinux2014_x86_64.whl", hash = "sha256:6f9e63e5ad4ed6ffa21fd8db1c03b05be3fea2f32e32fdace67a840ea2702c3d"},
{file = "cymem-2.0.7-cp37-cp37m-win_amd64.whl", hash = "sha256:5ea6b027fdad0c3e9a4f1b94d28d213be08c466a60c72c633eb9db76cf30e53a"},
{file = "cymem-2.0.7-cp38-cp38-macosx_10_9_x86_64.whl", hash = "sha256:4302df5793a320c4f4a263c7785d2fa7f29928d72cb83ebeb34d64a610f8d819"},
{file = "cymem-2.0.7-cp38-cp38-macosx_11_0_arm64.whl", hash = "sha256:24b779046484674c054af1e779c68cb224dc9694200ac13b22129d7fb7e99e6d"},
{file = "cymem-2.0.7-cp38-cp38-manylinux_2_17_aarch64.manylinux2014_aarch64.whl", hash = "sha256:6c50794c612801ed8b599cd4af1ed810a0d39011711c8224f93e1153c00e08d1"},
{file = "cymem-2.0.7-cp38-cp38-manylinux_2_17_x86_64.manylinux2014_x86_64.whl", hash = "sha256:a9525ad563b36dc1e30889d0087a0daa67dd7bb7d3e1530c4b61cd65cc756a5b"},
{file = "cymem-2.0.7-cp38-cp38-win_amd64.whl", hash = "sha256:48b98da6b906fe976865263e27734ebc64f972a978a999d447ad6c83334e3f90"},
{file = "cymem-2.0.7-cp39-cp39-macosx_10_9_x86_64.whl", hash = "sha256:e156788d32ad8f7141330913c5d5d2aa67182fca8f15ae22645e9f379abe8a4c"},
{file = "cymem-2.0.7-cp39-cp39-macosx_11_0_arm64.whl", hash = "sha256:3da89464021fe669932fce1578343fcaf701e47e3206f50d320f4f21e6683ca5"},
{file = "cymem-2.0.7-cp39-cp39-manylinux_2_17_aarch64.manylinux2014_aarch64.whl", hash = "sha256:4f359cab9f16e25b3098f816c40acbf1697a3b614a8d02c56e6ebcb9c89a06b3"},
{file = "cymem-2.0.7-cp39-cp39-manylinux_2_17_x86_64.manylinux2014_x86_64.whl", hash = "sha256:f165d7bce55d6730930e29d8294569788aa127f1be8d1642d9550ed96223cb37"},
{file = "cymem-2.0.7-cp39-cp39-win_amd64.whl", hash = "sha256:59a09cf0e71b1b88bfa0de544b801585d81d06ea123c1725e7c5da05b7ca0d20"},
{file = "cymem-2.0.7.tar.gz", hash = "sha256:e6034badb5dd4e10344211c81f16505a55553a7164adc314c75bd80cf07e57a8"},
]
Cython = [
{file = "Cython-0.29.32-cp27-cp27m-manylinux_2_5_i686.manylinux1_i686.whl", hash = "sha256:39afb4679b8c6bf7ccb15b24025568f4f9b4d7f9bf3cbd981021f542acecd75b"},
{file = "Cython-0.29.32-cp27-cp27m-manylinux_2_5_x86_64.manylinux1_x86_64.whl", hash = "sha256:dbee03b8d42dca924e6aa057b836a064c769ddfd2a4c2919e65da2c8a362d528"},
{file = "Cython-0.29.32-cp27-cp27mu-manylinux_2_5_i686.manylinux1_i686.whl", hash = "sha256:5ba622326f2862f9c1f99ca8d47ade49871241920a352c917e16861e25b0e5c3"},
{file = "Cython-0.29.32-cp27-cp27mu-manylinux_2_5_x86_64.manylinux1_x86_64.whl", hash = "sha256:e6ffa08aa1c111a1ebcbd1cf4afaaec120bc0bbdec3f2545f8bb7d3e8e77a1cd"},
{file = "Cython-0.29.32-cp310-cp310-manylinux_2_17_aarch64.manylinux2014_aarch64.manylinux_2_24_aarch64.whl", hash = "sha256:97335b2cd4acebf30d14e2855d882de83ad838491a09be2011745579ac975833"},
{file = "Cython-0.29.32-cp310-cp310-manylinux_2_17_x86_64.manylinux2014_x86_64.manylinux_2_24_x86_64.whl", hash = "sha256:06be83490c906b6429b4389e13487a26254ccaad2eef6f3d4ee21d8d3a4aaa2b"},
{file = "Cython-0.29.32-cp310-cp310-manylinux_2_5_i686.manylinux1_i686.manylinux_2_24_i686.whl", hash = "sha256:eefd2b9a5f38ded8d859fe96cc28d7d06e098dc3f677e7adbafda4dcdd4a461c"},
{file = "Cython-0.29.32-cp310-cp310-musllinux_1_1_x86_64.whl", hash = "sha256:5514f3b4122cb22317122a48e175a7194e18e1803ca555c4c959d7dfe68eaf98"},
{file = "Cython-0.29.32-cp311-cp311-manylinux_2_17_aarch64.manylinux2014_aarch64.manylinux_2_24_aarch64.whl", hash = "sha256:656dc5ff1d269de4d11ee8542f2ffd15ab466c447c1f10e5b8aba6f561967276"},
{file = "Cython-0.29.32-cp311-cp311-manylinux_2_17_x86_64.manylinux2014_x86_64.manylinux_2_24_x86_64.whl", hash = "sha256:cdf10af3e2e3279dc09fdc5f95deaa624850a53913f30350ceee824dc14fc1a6"},
{file = "Cython-0.29.32-cp311-cp311-manylinux_2_5_i686.manylinux1_i686.manylinux_2_24_i686.whl", hash = "sha256:3875c2b2ea752816a4d7ae59d45bb546e7c4c79093c83e3ba7f4d9051dd02928"},
{file = "Cython-0.29.32-cp311-cp311-musllinux_1_1_x86_64.whl", hash = "sha256:79e3bab19cf1b021b613567c22eb18b76c0c547b9bc3903881a07bfd9e7e64cf"},
{file = "Cython-0.29.32-cp35-cp35m-manylinux_2_5_i686.manylinux1_i686.whl", hash = "sha256:b0595aee62809ba353cebc5c7978e0e443760c3e882e2c7672c73ffe46383673"},
{file = "Cython-0.29.32-cp35-cp35m-manylinux_2_5_x86_64.manylinux1_x86_64.whl", hash = "sha256:0ea8267fc373a2c5064ad77d8ff7bf0ea8b88f7407098ff51829381f8ec1d5d9"},
{file = "Cython-0.29.32-cp36-cp36m-manylinux_2_17_aarch64.manylinux2014_aarch64.manylinux_2_24_aarch64.whl", hash = "sha256:c8e8025f496b5acb6ba95da2fb3e9dacffc97d9a92711aacfdd42f9c5927e094"},
{file = "Cython-0.29.32-cp36-cp36m-manylinux_2_17_x86_64.manylinux2014_x86_64.manylinux_2_24_x86_64.whl", hash = "sha256:afbce249133a830f121b917f8c9404a44f2950e0e4f5d1e68f043da4c2e9f457"},
{file = "Cython-0.29.32-cp36-cp36m-manylinux_2_5_i686.manylinux1_i686.manylinux_2_24_i686.whl", hash = "sha256:513e9707407608ac0d306c8b09d55a28be23ea4152cbd356ceaec0f32ef08d65"},
{file = "Cython-0.29.32-cp36-cp36m-manylinux_2_5_i686.manylinux1_i686.whl", hash = "sha256:e83228e0994497900af954adcac27f64c9a57cd70a9ec768ab0cb2c01fd15cf1"},
{file = "Cython-0.29.32-cp36-cp36m-manylinux_2_5_x86_64.manylinux1_x86_64.whl", hash = "sha256:ea1dcc07bfb37367b639415333cfbfe4a93c3be340edf1db10964bc27d42ed64"},
{file = "Cython-0.29.32-cp36-cp36m-musllinux_1_1_x86_64.whl", hash = "sha256:8669cadeb26d9a58a5e6b8ce34d2c8986cc3b5c0bfa77eda6ceb471596cb2ec3"},
{file = "Cython-0.29.32-cp37-cp37m-manylinux_2_17_aarch64.manylinux2014_aarch64.manylinux_2_24_aarch64.whl", hash = "sha256:ed087eeb88a8cf96c60fb76c5c3b5fb87188adee5e179f89ec9ad9a43c0c54b3"},
{file = "Cython-0.29.32-cp37-cp37m-manylinux_2_17_x86_64.manylinux2014_x86_64.manylinux_2_24_x86_64.whl", hash = "sha256:3f85eb2343d20d91a4ea9cf14e5748092b376a64b7e07fc224e85b2753e9070b"},
{file = "Cython-0.29.32-cp37-cp37m-manylinux_2_5_i686.manylinux1_i686.manylinux_2_24_i686.whl", hash = "sha256:63b79d9e1f7c4d1f498ab1322156a0d7dc1b6004bf981a8abda3f66800e140cd"},
{file = "Cython-0.29.32-cp37-cp37m-manylinux_2_5_i686.manylinux1_i686.whl", hash = "sha256:e1958e0227a4a6a2c06fd6e35b7469de50adf174102454db397cec6e1403cce3"},
{file = "Cython-0.29.32-cp37-cp37m-manylinux_2_5_x86_64.manylinux1_x86_64.whl", hash = "sha256:856d2fec682b3f31583719cb6925c6cdbb9aa30f03122bcc45c65c8b6f515754"},
{file = "Cython-0.29.32-cp37-cp37m-musllinux_1_1_x86_64.whl", hash = "sha256:479690d2892ca56d34812fe6ab8f58e4b2e0129140f3d94518f15993c40553da"},
{file = "Cython-0.29.32-cp38-cp38-manylinux_2_17_aarch64.manylinux2014_aarch64.manylinux_2_24_aarch64.whl", hash = "sha256:67fdd2f652f8d4840042e2d2d91e15636ba2bcdcd92e7e5ffbc68e6ef633a754"},
{file = "Cython-0.29.32-cp38-cp38-manylinux_2_17_x86_64.manylinux2014_x86_64.manylinux_2_24_x86_64.whl", hash = "sha256:4a4b03ab483271f69221c3210f7cde0dcc456749ecf8243b95bc7a701e5677e0"},
{file = "Cython-0.29.32-cp38-cp38-manylinux_2_5_i686.manylinux1_i686.manylinux_2_24_i686.whl", hash = "sha256:40eff7aa26e91cf108fd740ffd4daf49f39b2fdffadabc7292b4b7dc5df879f0"},
{file = "Cython-0.29.32-cp38-cp38-manylinux_2_5_i686.manylinux1_i686.whl", hash = "sha256:0bbc27abdf6aebfa1bce34cd92bd403070356f28b0ecb3198ff8a182791d58b9"},
{file = "Cython-0.29.32-cp38-cp38-manylinux_2_5_x86_64.manylinux1_x86_64.whl", hash = "sha256:cddc47ec746a08603037731f5d10aebf770ced08666100bd2cdcaf06a85d4d1b"},
{file = "Cython-0.29.32-cp38-cp38-musllinux_1_1_x86_64.whl", hash = "sha256:eca3065a1279456e81c615211d025ea11bfe4e19f0c5650b859868ca04b3fcbd"},
{file = "Cython-0.29.32-cp39-cp39-manylinux_2_17_aarch64.manylinux2014_aarch64.manylinux_2_24_aarch64.whl", hash = "sha256:d968ffc403d92addf20b68924d95428d523436adfd25cf505d427ed7ba3bee8b"},
{file = "Cython-0.29.32-cp39-cp39-manylinux_2_17_x86_64.manylinux2014_x86_64.manylinux_2_24_x86_64.whl", hash = "sha256:f3fd44cc362eee8ae569025f070d56208908916794b6ab21e139cea56470a2b3"},
{file = "Cython-0.29.32-cp39-cp39-manylinux_2_5_i686.manylinux1_i686.manylinux_2_24_i686.whl", hash = "sha256:b6da3063c5c476f5311fd76854abae6c315f1513ef7d7904deed2e774623bbb9"},
{file = "Cython-0.29.32-cp39-cp39-manylinux_2_5_i686.manylinux1_i686.whl", hash = "sha256:061e25151c38f2361bc790d3bcf7f9d9828a0b6a4d5afa56fbed3bd33fb2373a"},
{file = "Cython-0.29.32-cp39-cp39-manylinux_2_5_x86_64.manylinux1_x86_64.whl", hash = "sha256:f9944013588a3543fca795fffb0a070a31a243aa4f2d212f118aa95e69485831"},
{file = "Cython-0.29.32-cp39-cp39-musllinux_1_1_x86_64.whl", hash = "sha256:07d173d3289415bb496e72cb0ddd609961be08fe2968c39094d5712ffb78672b"},
{file = "Cython-0.29.32-py2.py3-none-any.whl", hash = "sha256:eeb475eb6f0ccf6c039035eb4f0f928eb53ead88777e0a760eccb140ad90930b"},
{file = "Cython-0.29.32.tar.gz", hash = "sha256:8733cf4758b79304f2a4e39ebfac5e92341bce47bcceb26c1254398b2f8c1af7"},
]
dask = [
{file = "dask-2021.11.2-py3-none-any.whl", hash = "sha256:2b0ad7beba8950add4fdc7c5cb94fa9444915ddb00c711d5743e2c4bb0a95ef5"},
{file = "dask-2021.11.2.tar.gz", hash = "sha256:e12bfe272928d62fa99623d98d0e0b0c045b33a47509ef31a22175aa5fd10917"},
]
debugpy = [
{file = "debugpy-1.6.3-cp310-cp310-macosx_10_15_x86_64.whl", hash = "sha256:c4b2bd5c245eeb49824bf7e539f95fb17f9a756186e51c3e513e32999d8846f3"},
{file = "debugpy-1.6.3-cp310-cp310-manylinux_2_5_x86_64.manylinux1_x86_64.manylinux_2_12_x86_64.manylinux2010_x86_64.whl", hash = "sha256:b8deaeb779699350deeed835322730a3efec170b88927debc9ba07a1a38e2585"},
{file = "debugpy-1.6.3-cp310-cp310-win32.whl", hash = "sha256:fc233a0160f3b117b20216f1169e7211b83235e3cd6749bcdd8dbb72177030c7"},
{file = "debugpy-1.6.3-cp310-cp310-win_amd64.whl", hash = "sha256:dda8652520eae3945833e061cbe2993ad94a0b545aebd62e4e6b80ee616c76b2"},
{file = "debugpy-1.6.3-cp37-cp37m-macosx_10_15_x86_64.whl", hash = "sha256:d5c814596a170a0a58fa6fad74947e30bfd7e192a5d2d7bd6a12156c2899e13a"},
{file = "debugpy-1.6.3-cp37-cp37m-manylinux_2_5_x86_64.manylinux1_x86_64.manylinux_2_12_x86_64.manylinux2010_x86_64.whl", hash = "sha256:c4cd6f37e3c168080d61d698390dfe2cd9e74ebf80b448069822a15dadcda57d"},
{file = "debugpy-1.6.3-cp37-cp37m-win32.whl", hash = "sha256:3c9f985944a30cfc9ae4306ac6a27b9c31dba72ca943214dad4a0ab3840f6161"},
{file = "debugpy-1.6.3-cp37-cp37m-win_amd64.whl", hash = "sha256:5ad571a36cec137ae6ed951d0ff75b5e092e9af6683da084753231150cbc5b25"},
{file = "debugpy-1.6.3-cp38-cp38-macosx_10_15_x86_64.whl", hash = "sha256:adcfea5ea06d55d505375995e150c06445e2b20cd12885bcae566148c076636b"},
{file = "debugpy-1.6.3-cp38-cp38-manylinux_2_5_x86_64.manylinux1_x86_64.manylinux_2_12_x86_64.manylinux2010_x86_64.whl", hash = "sha256:daadab4403427abd090eccb38d8901afd8b393e01fd243048fab3f1d7132abb4"},
{file = "debugpy-1.6.3-cp38-cp38-win32.whl", hash = "sha256:6efc30325b68e451118b795eff6fe8488253ca3958251d5158106d9c87581bc6"},
{file = "debugpy-1.6.3-cp38-cp38-win_amd64.whl", hash = "sha256:86d784b72c5411c833af1cd45b83d80c252b77c3bfdb43db17c441d772f4c734"},
{file = "debugpy-1.6.3-cp39-cp39-macosx_10_15_x86_64.whl", hash = "sha256:4e255982552b0edfe3a6264438dbd62d404baa6556a81a88f9420d3ed79b06ae"},
{file = "debugpy-1.6.3-cp39-cp39-manylinux_2_5_x86_64.manylinux1_x86_64.manylinux_2_12_x86_64.manylinux2010_x86_64.whl", hash = "sha256:cca23cb6161ac89698d629d892520327dd1be9321c0960e610bbcb807232b45d"},
{file = "debugpy-1.6.3-cp39-cp39-win32.whl", hash = "sha256:7c302095a81be0d5c19f6529b600bac971440db3e226dce85347cc27e6a61908"},
{file = "debugpy-1.6.3-cp39-cp39-win_amd64.whl", hash = "sha256:34d2cdd3a7c87302ba5322b86e79c32c2115be396f3f09ca13306d8a04fe0f16"},
{file = "debugpy-1.6.3-py2.py3-none-any.whl", hash = "sha256:84c39940a0cac410bf6aa4db00ba174f973eef521fbe9dd058e26bcabad89c4f"},
{file = "debugpy-1.6.3.zip", hash = "sha256:e8922090514a890eec99cfb991bab872dd2e353ebb793164d5f01c362b9a40bf"},
]
decorator = [
{file = "decorator-5.1.1-py3-none-any.whl", hash = "sha256:b8c3f85900b9dc423225913c5aace94729fe1fa9763b38939a95226f02d37186"},
{file = "decorator-5.1.1.tar.gz", hash = "sha256:637996211036b6385ef91435e4fae22989472f9d571faba8927ba8253acbc330"},
]
defusedxml = [
{file = "defusedxml-0.7.1-py2.py3-none-any.whl", hash = "sha256:a352e7e428770286cc899e2542b6cdaedb2b4953ff269a210103ec58f6198a61"},
{file = "defusedxml-0.7.1.tar.gz", hash = "sha256:1bb3032db185915b62d7c6209c5a8792be6a32ab2fedacc84e01b52c51aa3e69"},
]
dill = [
{file = "dill-0.3.6-py3-none-any.whl", hash = "sha256:a07ffd2351b8c678dfc4a856a3005f8067aea51d6ba6c700796a4d9e280f39f0"},
{file = "dill-0.3.6.tar.gz", hash = "sha256:e5db55f3687856d8fbdab002ed78544e1c4559a130302693d839dfe8f93f2373"},
]
distributed = [
{file = "distributed-2021.11.2-py3-none-any.whl", hash = "sha256:af1f7b98d85d43886fefe2354379c848c7a5aa6ae4d2313a7aca9ab9081a7e56"},
{file = "distributed-2021.11.2.tar.gz", hash = "sha256:f86a01a2e1e678865d2e42300c47552b5012cd81a2d354e47827a1fd074cc302"},
]
docutils = [
{file = "docutils-0.17.1-py2.py3-none-any.whl", hash = "sha256:cf316c8370a737a022b72b56874f6602acf974a37a9fba42ec2876387549fc61"},
{file = "docutils-0.17.1.tar.gz", hash = "sha256:686577d2e4c32380bb50cbb22f575ed742d58168cee37e99117a854bcd88f125"},
]
econml = [
{file = "econml-0.14.0-cp310-cp310-macosx_10_9_x86_64.whl", hash = "sha256:9c2fc1d67d98774d00bfe8e76d76af3de5ebc8d5f7a440da3c667d5ad244f971"},
{file = "econml-0.14.0-cp310-cp310-manylinux_2_17_x86_64.manylinux2014_x86_64.whl", hash = "sha256:9b02aca395eaa905bff080c3efd4f74bf281f168c674d74bdf899fc9467311e1"},
{file = "econml-0.14.0-cp310-cp310-win_amd64.whl", hash = "sha256:d2cca82486826c2b13f47ed0140f3fc85d8016fb43153a1b2de025345b190c6c"},
{file = "econml-0.14.0-cp37-cp37m-macosx_10_9_x86_64.whl", hash = "sha256:ce98668ba93d33856b60750e23312b9a6d503af6890b5588ab708db9de05ff49"},
{file = "econml-0.14.0-cp37-cp37m-manylinux_2_17_x86_64.manylinux2014_x86_64.whl", hash = "sha256:3b6b9938a2f48bf3055ae0ea47ac5a627d1c180f22e62531943961427769b0ef"},
{file = "econml-0.14.0-cp37-cp37m-win_amd64.whl", hash = "sha256:3c780c49a97bd688475f8863a7bdad2cbe19fdb4417708e3874f2bdae102852f"},
{file = "econml-0.14.0-cp38-cp38-macosx_10_9_x86_64.whl", hash = "sha256:7f2930eb311ea576195718b97fde83b4f2d29f3f3dc57ce0834b52fee410bfac"},
{file = "econml-0.14.0-cp38-cp38-manylinux_2_17_x86_64.manylinux2014_x86_64.whl", hash = "sha256:36be15da6ff3b295bc5cf80b95753e19bc123a1103bf53a2a0744daef49273e5"},
{file = "econml-0.14.0-cp38-cp38-win_amd64.whl", hash = "sha256:f71ab406f37b64dead4bee1b4c4869204faf9c55887dc8117bd9396d977edaf3"},
{file = "econml-0.14.0-cp39-cp39-macosx_10_9_x86_64.whl", hash = "sha256:1b0e67419c4eff2acdf8138f208de333a85c3e6fded831a6664bb02d6f4bcbe1"},
{file = "econml-0.14.0-cp39-cp39-manylinux_2_17_x86_64.manylinux2014_x86_64.whl", hash = "sha256:376724e0535ad9cbc585f768110eb23bfd3b3218032a61cac8793a09ee3bce95"},
{file = "econml-0.14.0-cp39-cp39-win_amd64.whl", hash = "sha256:6e1f0554d0f930dc639dbf3d7cb171297aa113dd64b7db322e0abb7d12eaa4dc"},
{file = "econml-0.14.0.tar.gz", hash = "sha256:5637d36c7548fb3ad01956d091cc6a9f788b090bc8b892bd527012e5bdbce041"},
]
entrypoints = [
{file = "entrypoints-0.4-py3-none-any.whl", hash = "sha256:f174b5ff827504fd3cd97cc3f8649f3693f51538c7e4bdf3ef002c8429d42f9f"},
{file = "entrypoints-0.4.tar.gz", hash = "sha256:b706eddaa9218a19ebcd67b56818f05bb27589b1ca9e8d797b74affad4ccacd4"},
]
exceptiongroup = [
{file = "exceptiongroup-1.0.4-py3-none-any.whl", hash = "sha256:542adf9dea4055530d6e1279602fa5cb11dab2395fa650b8674eaec35fc4a828"},
{file = "exceptiongroup-1.0.4.tar.gz", hash = "sha256:bd14967b79cd9bdb54d97323216f8fdf533e278df937aa2a90089e7d6e06e5ec"},
]
executing = [
{file = "executing-1.2.0-py2.py3-none-any.whl", hash = "sha256:0314a69e37426e3608aada02473b4161d4caf5a4b244d1d0c48072b8fee7bacc"},
{file = "executing-1.2.0.tar.gz", hash = "sha256:19da64c18d2d851112f09c287f8d3dbbdf725ab0e569077efb6cdcbd3497c107"},
]
fastai = [
{file = "fastai-2.7.10-py3-none-any.whl", hash = "sha256:db3709d6ff9ede9cd29111420b3669238248fa4f5a29d98daf37d52d122d9424"},
{file = "fastai-2.7.10.tar.gz", hash = "sha256:ccef6a185ae3a637efc9bcd9fea8e48b75f454d0ebad3b6df426f22fae20039d"},
]
fastcore = [
{file = "fastcore-1.5.27-py3-none-any.whl", hash = "sha256:79dffaa3de96066e4d7f2b8793f1a8a9468c82bc97d3d48ec002de34097b2a9f"},
{file = "fastcore-1.5.27.tar.gz", hash = "sha256:c6b66b35569d17251e25999bafc7d9bcdd6446c1e710503c08670c3ff1eef271"},
]
fastdownload = [
{file = "fastdownload-0.0.7-py3-none-any.whl", hash = "sha256:b791fa3406a2da003ba64615f03c60e2ea041c3c555796450b9a9a601bc0bbac"},
{file = "fastdownload-0.0.7.tar.gz", hash = "sha256:20507edb8e89406a1fbd7775e6e2a3d81a4dd633dd506b0e9cf0e1613e831d6a"},
]
fastjsonschema = [
{file = "fastjsonschema-2.16.2-py3-none-any.whl", hash = "sha256:21f918e8d9a1a4ba9c22e09574ba72267a6762d47822db9add95f6454e51cc1c"},
{file = "fastjsonschema-2.16.2.tar.gz", hash = "sha256:01e366f25d9047816fe3d288cbfc3e10541daf0af2044763f3d0ade42476da18"},
]
fastprogress = [
{file = "fastprogress-1.0.3-py3-none-any.whl", hash = "sha256:6dfea88f7a4717b0a8d6ee2048beae5dbed369f932a368c5dd9caff34796f7c5"},
{file = "fastprogress-1.0.3.tar.gz", hash = "sha256:7a17d2b438890f838c048eefce32c4ded47197ecc8ea042cecc33d3deb8022f5"},
]
flake8 = [
{file = "flake8-4.0.1-py2.py3-none-any.whl", hash = "sha256:479b1304f72536a55948cb40a32dce8bb0ffe3501e26eaf292c7e60eb5e0428d"},
{file = "flake8-4.0.1.tar.gz", hash = "sha256:806e034dda44114815e23c16ef92f95c91e4c71100ff52813adf7132a6ad870d"},
]
flaky = [
{file = "flaky-3.7.0-py2.py3-none-any.whl", hash = "sha256:d6eda73cab5ae7364504b7c44670f70abed9e75f77dd116352f662817592ec9c"},
{file = "flaky-3.7.0.tar.gz", hash = "sha256:3ad100780721a1911f57a165809b7ea265a7863305acb66708220820caf8aa0d"},
]
flatbuffers = [
{file = "flatbuffers-22.11.23-py2.py3-none-any.whl", hash = "sha256:13043a5deba77e55b73064750195d2c5b494754d52b7d4ad01bc52cad5c3c9f2"},
{file = "flatbuffers-22.11.23.tar.gz", hash = "sha256:2a82b85eea7f6712ab41077086dae1a89382862fe64414c8ebdf976123d1a095"},
]
fonttools = [
{file = "fonttools-4.38.0-py3-none-any.whl", hash = "sha256:820466f43c8be8c3009aef8b87e785014133508f0de64ec469e4efb643ae54fb"},
{file = "fonttools-4.38.0.zip", hash = "sha256:2bb244009f9bf3fa100fc3ead6aeb99febe5985fa20afbfbaa2f8946c2fbdaf1"},
]
forestci = [
{file = "forestci-0.6-py3-none-any.whl", hash = "sha256:025e76b20e23ddbdfc0a9c9c7f261751ee376b33a7b257b86e72fbad8312d650"},
{file = "forestci-0.6.tar.gz", hash = "sha256:f74f51eba9a7c189fdb673203cea10383f0a34504d2d28dee0fd712d19945b5a"},
]
fsspec = [
{file = "fsspec-2022.11.0-py3-none-any.whl", hash = "sha256:d6e462003e3dcdcb8c7aa84c73a228f8227e72453cd22570e2363e8844edfe7b"},
{file = "fsspec-2022.11.0.tar.gz", hash = "sha256:259d5fd5c8e756ff2ea72f42e7613c32667dc2049a4ac3d84364a7ca034acb8b"},
]
future = [
{file = "future-0.18.2.tar.gz", hash = "sha256:b1bead90b70cf6ec3f0710ae53a525360fa360d306a86583adc6bf83a4db537d"},
]
gast = [
{file = "gast-0.4.0-py3-none-any.whl", hash = "sha256:b7adcdd5adbebf1adf17378da5ba3f543684dbec47b1cda1f3997e573cd542c4"},
{file = "gast-0.4.0.tar.gz", hash = "sha256:40feb7b8b8434785585ab224d1568b857edb18297e5a3047f1ba012bc83b42c1"},
]
google-auth = [
{file = "google-auth-2.14.1.tar.gz", hash = "sha256:ccaa901f31ad5cbb562615eb8b664b3dd0bf5404a67618e642307f00613eda4d"},
{file = "google_auth-2.14.1-py2.py3-none-any.whl", hash = "sha256:f5d8701633bebc12e0deea4df8abd8aff31c28b355360597f7f2ee60f2e4d016"},
]
google-auth-oauthlib = [
{file = "google-auth-oauthlib-0.4.6.tar.gz", hash = "sha256:a90a072f6993f2c327067bf65270046384cda5a8ecb20b94ea9a687f1f233a7a"},
{file = "google_auth_oauthlib-0.4.6-py2.py3-none-any.whl", hash = "sha256:3f2a6e802eebbb6fb736a370fbf3b055edcb6b52878bf2f26330b5e041316c73"},
]
google-pasta = [
{file = "google-pasta-0.2.0.tar.gz", hash = "sha256:c9f2c8dfc8f96d0d5808299920721be30c9eec37f2389f28904f454565c8a16e"},
{file = "google_pasta-0.2.0-py2-none-any.whl", hash = "sha256:4612951da876b1a10fe3960d7226f0c7682cf901e16ac06e473b267a5afa8954"},
{file = "google_pasta-0.2.0-py3-none-any.whl", hash = "sha256:b32482794a366b5366a32c92a9a9201b107821889935a02b3e51f6b432ea84ed"},
]
graphviz = [
{file = "graphviz-0.20.1-py3-none-any.whl", hash = "sha256:587c58a223b51611c0cf461132da386edd896a029524ca61a1462b880bf97977"},
{file = "graphviz-0.20.1.zip", hash = "sha256:8c58f14adaa3b947daf26c19bc1e98c4e0702cdc31cf99153e6f06904d492bf8"},
]
grpcio = [
{file = "grpcio-1.50.0-cp310-cp310-linux_armv7l.whl", hash = "sha256:906f4d1beb83b3496be91684c47a5d870ee628715227d5d7c54b04a8de802974"},
{file = "grpcio-1.50.0-cp310-cp310-macosx_12_0_x86_64.whl", hash = "sha256:2d9fd6e38b16c4d286a01e1776fdf6c7a4123d99ae8d6b3f0b4a03a34bf6ce45"},
{file = "grpcio-1.50.0-cp310-cp310-manylinux_2_17_aarch64.whl", hash = "sha256:4b123fbb7a777a2fedec684ca0b723d85e1d2379b6032a9a9b7851829ed3ca9a"},
{file = "grpcio-1.50.0-cp310-cp310-manylinux_2_17_i686.manylinux2014_i686.whl", hash = "sha256:b2f77a90ba7b85bfb31329f8eab9d9540da2cf8a302128fb1241d7ea239a5469"},
{file = "grpcio-1.50.0-cp310-cp310-manylinux_2_17_x86_64.manylinux2014_x86_64.whl", hash = "sha256:9eea18a878cffc804506d39c6682d71f6b42ec1c151d21865a95fae743fda500"},
{file = "grpcio-1.50.0-cp310-cp310-musllinux_1_1_i686.whl", hash = "sha256:2b71916fa8f9eb2abd93151fafe12e18cebb302686b924bd4ec39266211da525"},
{file = "grpcio-1.50.0-cp310-cp310-musllinux_1_1_x86_64.whl", hash = "sha256:95ce51f7a09491fb3da8cf3935005bff19983b77c4e9437ef77235d787b06842"},
{file = "grpcio-1.50.0-cp310-cp310-win32.whl", hash = "sha256:f7025930039a011ed7d7e7ef95a1cb5f516e23c5a6ecc7947259b67bea8e06ca"},
{file = "grpcio-1.50.0-cp310-cp310-win_amd64.whl", hash = "sha256:05f7c248e440f538aaad13eee78ef35f0541e73498dd6f832fe284542ac4b298"},
{file = "grpcio-1.50.0-cp311-cp311-linux_armv7l.whl", hash = "sha256:ca8a2254ab88482936ce941485c1c20cdeaef0efa71a61dbad171ab6758ec998"},
{file = "grpcio-1.50.0-cp311-cp311-macosx_10_10_x86_64.whl", hash = "sha256:3b611b3de3dfd2c47549ca01abfa9bbb95937eb0ea546ea1d762a335739887be"},
{file = "grpcio-1.50.0-cp311-cp311-manylinux_2_17_i686.manylinux2014_i686.whl", hash = "sha256:1a4cd8cb09d1bc70b3ea37802be484c5ae5a576108bad14728f2516279165dd7"},
{file = "grpcio-1.50.0-cp311-cp311-manylinux_2_17_x86_64.manylinux2014_x86_64.whl", hash = "sha256:156f8009e36780fab48c979c5605eda646065d4695deea4cfcbcfdd06627ddb6"},
{file = "grpcio-1.50.0-cp311-cp311-musllinux_1_1_i686.whl", hash = "sha256:de411d2b030134b642c092e986d21aefb9d26a28bf5a18c47dd08ded411a3bc5"},
{file = "grpcio-1.50.0-cp311-cp311-musllinux_1_1_x86_64.whl", hash = "sha256:d144ad10eeca4c1d1ce930faa105899f86f5d99cecfe0d7224f3c4c76265c15e"},
{file = "grpcio-1.50.0-cp311-cp311-win32.whl", hash = "sha256:92d7635d1059d40d2ec29c8bf5ec58900120b3ce5150ef7414119430a4b2dd5c"},
{file = "grpcio-1.50.0-cp311-cp311-win_amd64.whl", hash = "sha256:ce8513aee0af9c159319692bfbf488b718d1793d764798c3d5cff827a09e25ef"},
{file = "grpcio-1.50.0-cp37-cp37m-linux_armv7l.whl", hash = "sha256:8e8999a097ad89b30d584c034929f7c0be280cd7851ac23e9067111167dcbf55"},
{file = "grpcio-1.50.0-cp37-cp37m-macosx_10_10_x86_64.whl", hash = "sha256:a50a1be449b9e238b9bd43d3857d40edf65df9416dea988929891d92a9f8a778"},
{file = "grpcio-1.50.0-cp37-cp37m-manylinux_2_17_aarch64.whl", hash = "sha256:cf151f97f5f381163912e8952eb5b3afe89dec9ed723d1561d59cabf1e219a35"},
{file = "grpcio-1.50.0-cp37-cp37m-manylinux_2_17_i686.manylinux2014_i686.whl", hash = "sha256:a23d47f2fc7111869f0ff547f771733661ff2818562b04b9ed674fa208e261f4"},
{file = "grpcio-1.50.0-cp37-cp37m-manylinux_2_17_x86_64.manylinux2014_x86_64.whl", hash = "sha256:d84d04dec64cc4ed726d07c5d17b73c343c8ddcd6b59c7199c801d6bbb9d9ed1"},
{file = "grpcio-1.50.0-cp37-cp37m-musllinux_1_1_i686.whl", hash = "sha256:67dd41a31f6fc5c7db097a5c14a3fa588af54736ffc174af4411d34c4f306f68"},
{file = "grpcio-1.50.0-cp37-cp37m-musllinux_1_1_x86_64.whl", hash = "sha256:8d4c8e73bf20fb53fe5a7318e768b9734cf122fe671fcce75654b98ba12dfb75"},
{file = "grpcio-1.50.0-cp37-cp37m-win32.whl", hash = "sha256:7489dbb901f4fdf7aec8d3753eadd40839c9085967737606d2c35b43074eea24"},
{file = "grpcio-1.50.0-cp37-cp37m-win_amd64.whl", hash = "sha256:531f8b46f3d3db91d9ef285191825d108090856b3bc86a75b7c3930f16ce432f"},
{file = "grpcio-1.50.0-cp38-cp38-linux_armv7l.whl", hash = "sha256:d534d169673dd5e6e12fb57cc67664c2641361e1a0885545495e65a7b761b0f4"},
{file = "grpcio-1.50.0-cp38-cp38-macosx_10_10_x86_64.whl", hash = "sha256:1d8d02dbb616c0a9260ce587eb751c9c7dc689bc39efa6a88cc4fa3e9c138a7b"},
{file = "grpcio-1.50.0-cp38-cp38-manylinux_2_17_aarch64.whl", hash = "sha256:baab51dcc4f2aecabf4ed1e2f57bceab240987c8b03533f1cef90890e6502067"},
{file = "grpcio-1.50.0-cp38-cp38-manylinux_2_17_i686.manylinux2014_i686.whl", hash = "sha256:40838061e24f960b853d7bce85086c8e1b81c6342b1f4c47ff0edd44bbae2722"},
{file = "grpcio-1.50.0-cp38-cp38-manylinux_2_17_x86_64.manylinux2014_x86_64.whl", hash = "sha256:931e746d0f75b2a5cff0a1197d21827a3a2f400c06bace036762110f19d3d507"},
{file = "grpcio-1.50.0-cp38-cp38-musllinux_1_1_i686.whl", hash = "sha256:15f9e6d7f564e8f0776770e6ef32dac172c6f9960c478616c366862933fa08b4"},
{file = "grpcio-1.50.0-cp38-cp38-musllinux_1_1_x86_64.whl", hash = "sha256:a4c23e54f58e016761b576976da6a34d876420b993f45f66a2bfb00363ecc1f9"},
{file = "grpcio-1.50.0-cp38-cp38-win32.whl", hash = "sha256:3e4244c09cc1b65c286d709658c061f12c61c814be0b7030a2d9966ff02611e0"},
{file = "grpcio-1.50.0-cp38-cp38-win_amd64.whl", hash = "sha256:8e69aa4e9b7f065f01d3fdcecbe0397895a772d99954bb82eefbb1682d274518"},
{file = "grpcio-1.50.0-cp39-cp39-linux_armv7l.whl", hash = "sha256:af98d49e56605a2912cf330b4627e5286243242706c3a9fa0bcec6e6f68646fc"},
{file = "grpcio-1.50.0-cp39-cp39-macosx_10_10_x86_64.whl", hash = "sha256:080b66253f29e1646ac53ef288c12944b131a2829488ac3bac8f52abb4413c0d"},
{file = "grpcio-1.50.0-cp39-cp39-manylinux_2_17_aarch64.whl", hash = "sha256:ab5d0e3590f0a16cb88de4a3fa78d10eb66a84ca80901eb2c17c1d2c308c230f"},
{file = "grpcio-1.50.0-cp39-cp39-manylinux_2_17_i686.manylinux2014_i686.whl", hash = "sha256:cb11464f480e6103c59d558a3875bd84eed6723f0921290325ebe97262ae1347"},
{file = "grpcio-1.50.0-cp39-cp39-manylinux_2_17_x86_64.manylinux2014_x86_64.whl", hash = "sha256:e07fe0d7ae395897981d16be61f0db9791f482f03fee7d1851fe20ddb4f69c03"},
{file = "grpcio-1.50.0-cp39-cp39-musllinux_1_1_i686.whl", hash = "sha256:d75061367a69808ab2e84c960e9dce54749bcc1e44ad3f85deee3a6c75b4ede9"},
{file = "grpcio-1.50.0-cp39-cp39-musllinux_1_1_x86_64.whl", hash = "sha256:ae23daa7eda93c1c49a9ecc316e027ceb99adbad750fbd3a56fa9e4a2ffd5ae0"},
{file = "grpcio-1.50.0-cp39-cp39-win32.whl", hash = "sha256:177afaa7dba3ab5bfc211a71b90da1b887d441df33732e94e26860b3321434d9"},
{file = "grpcio-1.50.0-cp39-cp39-win_amd64.whl", hash = "sha256:ea8ccf95e4c7e20419b7827aa5b6da6f02720270686ac63bd3493a651830235c"},
{file = "grpcio-1.50.0.tar.gz", hash = "sha256:12b479839a5e753580b5e6053571de14006157f2ef9b71f38c56dc9b23b95ad6"},
]
h5py = [
{file = "h5py-3.7.0-cp310-cp310-macosx_10_9_x86_64.whl", hash = "sha256:d77af42cb751ad6cc44f11bae73075a07429a5cf2094dfde2b1e716e059b3911"},
{file = "h5py-3.7.0-cp310-cp310-macosx_11_0_arm64.whl", hash = "sha256:63beb8b7b47d0896c50de6efb9a1eaa81dbe211f3767e7dd7db159cea51ba37a"},
{file = "h5py-3.7.0-cp310-cp310-manylinux_2_12_x86_64.manylinux2010_x86_64.whl", hash = "sha256:04e2e1e2fc51b8873e972a08d2f89625ef999b1f2d276199011af57bb9fc7851"},
{file = "h5py-3.7.0-cp310-cp310-manylinux_2_17_aarch64.manylinux2014_aarch64.whl", hash = "sha256:f73307c876af49aa869ec5df1818e9bb0bdcfcf8a5ba773cc45a4fba5a286a5c"},
{file = "h5py-3.7.0-cp310-cp310-win_amd64.whl", hash = "sha256:f514b24cacdd983e61f8d371edac8c1b780c279d0acb8485639e97339c866073"},
{file = "h5py-3.7.0-cp37-cp37m-macosx_10_9_x86_64.whl", hash = "sha256:43fed4d13743cf02798a9a03a360a88e589d81285e72b83f47d37bb64ed44881"},
{file = "h5py-3.7.0-cp37-cp37m-manylinux_2_12_x86_64.manylinux2010_x86_64.whl", hash = "sha256:c038399ce09a58ff8d89ec3e62f00aa7cb82d14f34e24735b920e2a811a3a426"},
{file = "h5py-3.7.0-cp37-cp37m-manylinux_2_17_aarch64.manylinux2014_aarch64.whl", hash = "sha256:03d64fb86bb86b978928bad923b64419a23e836499ec6363e305ad28afd9d287"},
{file = "h5py-3.7.0-cp37-cp37m-win_amd64.whl", hash = "sha256:e5b7820b75f9519499d76cc708e27242ccfdd9dfb511d6deb98701961d0445aa"},
{file = "h5py-3.7.0-cp38-cp38-macosx_10_9_x86_64.whl", hash = "sha256:a9351d729ea754db36d175098361b920573fdad334125f86ac1dd3a083355e20"},
{file = "h5py-3.7.0-cp38-cp38-macosx_11_0_arm64.whl", hash = "sha256:6776d896fb90c5938de8acb925e057e2f9f28755f67ec3edcbc8344832616c38"},
{file = "h5py-3.7.0-cp38-cp38-manylinux_2_12_x86_64.manylinux2010_x86_64.whl", hash = "sha256:0a047fddbe6951bce40e9cde63373c838a978c5e05a011a682db9ba6334b8e85"},
{file = "h5py-3.7.0-cp38-cp38-manylinux_2_17_aarch64.manylinux2014_aarch64.whl", hash = "sha256:0798a9c0ff45f17d0192e4d7114d734cac9f8b2b2c76dd1d923c4d0923f27bb6"},
{file = "h5py-3.7.0-cp38-cp38-win_amd64.whl", hash = "sha256:0d8de8cb619fc597da7cf8cdcbf3b7ff8c5f6db836568afc7dc16d21f59b2b49"},
{file = "h5py-3.7.0-cp39-cp39-macosx_10_9_x86_64.whl", hash = "sha256:f084bbe816907dfe59006756f8f2d16d352faff2d107f4ffeb1d8de126fc5dc7"},
{file = "h5py-3.7.0-cp39-cp39-macosx_11_0_arm64.whl", hash = "sha256:1fcb11a2dc8eb7ddcae08afd8fae02ba10467753a857fa07a404d700a93f3d53"},
{file = "h5py-3.7.0-cp39-cp39-manylinux_2_12_x86_64.manylinux2010_x86_64.whl", hash = "sha256:ed43e2cc4f511756fd664fb45d6b66c3cbed4e3bd0f70e29c37809b2ae013c44"},
{file = "h5py-3.7.0-cp39-cp39-manylinux_2_17_aarch64.manylinux2014_aarch64.whl", hash = "sha256:9e7535df5ee3dc3e5d1f408fdfc0b33b46bc9b34db82743c82cd674d8239b9ad"},
{file = "h5py-3.7.0-cp39-cp39-win_amd64.whl", hash = "sha256:9e2ad2aa000f5b1e73b5dfe22f358ca46bf1a2b6ca394d9659874d7fc251731a"},
{file = "h5py-3.7.0.tar.gz", hash = "sha256:3fcf37884383c5da64846ab510190720027dca0768def34dd8dcb659dbe5cbf3"},
]
HeapDict = [
{file = "HeapDict-1.0.1-py3-none-any.whl", hash = "sha256:6065f90933ab1bb7e50db403b90cab653c853690c5992e69294c2de2b253fc92"},
{file = "HeapDict-1.0.1.tar.gz", hash = "sha256:8495f57b3e03d8e46d5f1b2cc62ca881aca392fd5cc048dc0aa2e1a6d23ecdb6"},
]
idna = [
{file = "idna-3.4-py3-none-any.whl", hash = "sha256:90b77e79eaa3eba6de819a0c442c0b4ceefc341a7a2ab77d7562bf49f425c5c2"},
{file = "idna-3.4.tar.gz", hash = "sha256:814f528e8dead7d329833b91c5faa87d60bf71824cd12a7530b5526063d02cb4"},
]
imagesize = [
{file = "imagesize-1.4.1-py2.py3-none-any.whl", hash = "sha256:0d8d18d08f840c19d0ee7ca1fd82490fdc3729b7ac93f49870406ddde8ef8d8b"},
{file = "imagesize-1.4.1.tar.gz", hash = "sha256:69150444affb9cb0d5cc5a92b3676f0b2fb7cd9ae39e947a5e11a36b4497cd4a"},
]
importlib-metadata = [
{file = "importlib_metadata-5.1.0-py3-none-any.whl", hash = "sha256:d84d17e21670ec07990e1044a99efe8d615d860fd176fc29ef5c306068fda313"},
{file = "importlib_metadata-5.1.0.tar.gz", hash = "sha256:d5059f9f1e8e41f80e9c56c2ee58811450c31984dfa625329ffd7c0dad88a73b"},
]
importlib-resources = [
{file = "importlib_resources-5.10.0-py3-none-any.whl", hash = "sha256:ee17ec648f85480d523596ce49eae8ead87d5631ae1551f913c0100b5edd3437"},
{file = "importlib_resources-5.10.0.tar.gz", hash = "sha256:c01b1b94210d9849f286b86bb51bcea7cd56dde0600d8db721d7b81330711668"},
]
iniconfig = [
{file = "iniconfig-1.1.1-py2.py3-none-any.whl", hash = "sha256:011e24c64b7f47f6ebd835bb12a743f2fbe9a26d4cecaa7f53bc4f35ee9da8b3"},
{file = "iniconfig-1.1.1.tar.gz", hash = "sha256:bc3af051d7d14b2ee5ef9969666def0cd1a000e121eaea580d4a313df4b37f32"},
]
ipykernel = [
{file = "ipykernel-6.18.1-py3-none-any.whl", hash = "sha256:18c298565218e602939dd03b56206912433ebdb6b5800afd9177bbce8d96318b"},
{file = "ipykernel-6.18.1.tar.gz", hash = "sha256:71f21ce281da5a4e73ec4a7ecdf98802d9e65d58cdb7e22ff824ca994ce5114b"},
]
ipython = [
{file = "ipython-8.7.0-py3-none-any.whl", hash = "sha256:352042ddcb019f7c04e48171b4dd78e4c4bb67bf97030d170e154aac42b656d9"},
{file = "ipython-8.7.0.tar.gz", hash = "sha256:882899fe78d5417a0aa07f995db298fa28b58faeba2112d2e3a4c95fe14bb738"},
]
ipython_genutils = [
{file = "ipython_genutils-0.2.0-py2.py3-none-any.whl", hash = "sha256:72dd37233799e619666c9f639a9da83c34013a73e8bbc79a7a6348d93c61fab8"},
{file = "ipython_genutils-0.2.0.tar.gz", hash = "sha256:eb2e116e75ecef9d4d228fdc66af54269afa26ab4463042e33785b887c628ba8"},
]
ipywidgets = [
{file = "ipywidgets-8.0.2-py3-none-any.whl", hash = "sha256:1dc3dd4ee19ded045ea7c86eb273033d238d8e43f9e7872c52d092683f263891"},
{file = "ipywidgets-8.0.2.tar.gz", hash = "sha256:08cb75c6e0a96836147cbfdc55580ae04d13e05d26ffbc377b4e1c68baa28b1f"},
]
isort = [
{file = "isort-5.10.1-py3-none-any.whl", hash = "sha256:6f62d78e2f89b4500b080fe3a81690850cd254227f27f75c3a0c491a1f351ba7"},
{file = "isort-5.10.1.tar.gz", hash = "sha256:e8443a5e7a020e9d7f97f1d7d9cd17c88bcb3bc7e218bf9cf5095fe550be2951"},
]
jedi = [
{file = "jedi-0.18.2-py2.py3-none-any.whl", hash = "sha256:203c1fd9d969ab8f2119ec0a3342e0b49910045abe6af0a3ae83a5764d54639e"},
{file = "jedi-0.18.2.tar.gz", hash = "sha256:bae794c30d07f6d910d32a7048af09b5a39ed740918da923c6b780790ebac612"},
]
Jinja2 = [
{file = "Jinja2-3.1.2-py3-none-any.whl", hash = "sha256:6088930bfe239f0e6710546ab9c19c9ef35e29792895fed6e6e31a023a182a61"},
{file = "Jinja2-3.1.2.tar.gz", hash = "sha256:31351a702a408a9e7595a8fc6150fc3f43bb6bf7e319770cbc0db9df9437e852"},
]
jmespath = [
{file = "jmespath-1.0.1-py3-none-any.whl", hash = "sha256:02e2e4cc71b5bcab88332eebf907519190dd9e6e82107fa7f83b1003a6252980"},
{file = "jmespath-1.0.1.tar.gz", hash = "sha256:90261b206d6defd58fdd5e85f478bf633a2901798906be2ad389150c5c60edbe"},
]
joblib = [
{file = "joblib-1.2.0-py3-none-any.whl", hash = "sha256:091138ed78f800342968c523bdde947e7a305b8594b910a0fea2ab83c3c6d385"},
{file = "joblib-1.2.0.tar.gz", hash = "sha256:e1cee4a79e4af22881164f218d4311f60074197fb707e082e803b61f6d137018"},
]
jsonschema = [
{file = "jsonschema-4.17.1-py3-none-any.whl", hash = "sha256:410ef23dcdbca4eaedc08b850079179883c2ed09378bd1f760d4af4aacfa28d7"},
{file = "jsonschema-4.17.1.tar.gz", hash = "sha256:05b2d22c83640cde0b7e0aa329ca7754fbd98ea66ad8ae24aa61328dfe057fa3"},
]
jupyter = [
{file = "jupyter-1.0.0-py2.py3-none-any.whl", hash = "sha256:5b290f93b98ffbc21c0c7e749f054b3267782166d72fa5e3ed1ed4eaf34a2b78"},
{file = "jupyter-1.0.0.tar.gz", hash = "sha256:d9dc4b3318f310e34c82951ea5d6683f67bed7def4b259fafbfe4f1beb1d8e5f"},
{file = "jupyter-1.0.0.zip", hash = "sha256:3e1f86076bbb7c8c207829390305a2b1fe836d471ed54be66a3b8c41e7f46cc7"},
]
jupyter-client = [
{file = "jupyter_client-7.4.7-py3-none-any.whl", hash = "sha256:df56ae23b8e1da1b66f89dee1368e948b24a7f780fa822c5735187589fc4c157"},
{file = "jupyter_client-7.4.7.tar.gz", hash = "sha256:330f6b627e0b4bf2f54a3a0dd9e4a22d2b649c8518168afedce2c96a1ceb2860"},
]
jupyter-console = [
{file = "jupyter_console-6.4.4-py3-none-any.whl", hash = "sha256:756df7f4f60c986e7bc0172e4493d3830a7e6e75c08750bbe59c0a5403ad6dee"},
{file = "jupyter_console-6.4.4.tar.gz", hash = "sha256:172f5335e31d600df61613a97b7f0352f2c8250bbd1092ef2d658f77249f89fb"},
]
jupyter-core = [
{file = "jupyter_core-5.1.0-py3-none-any.whl", hash = "sha256:f5740d99606958544396914b08e67b668f45e7eff99ab47a7f4bcead419c02f4"},
{file = "jupyter_core-5.1.0.tar.gz", hash = "sha256:a5ae7c09c55c0b26f692ec69323ba2b62e8d7295354d20f6cd57b749de4a05bf"},
]
jupyter-server = [
{file = "jupyter_server-1.23.3-py3-none-any.whl", hash = "sha256:438496cac509709cc85e60172e5538ca45b4c8a0862bb97cd73e49f2ace419cb"},
{file = "jupyter_server-1.23.3.tar.gz", hash = "sha256:f7f7a2f9d36f4150ad125afef0e20b1c76c8ff83eb5e39fb02d3b9df0f9b79ab"},
]
jupyterlab-pygments = [
{file = "jupyterlab_pygments-0.2.2-py2.py3-none-any.whl", hash = "sha256:2405800db07c9f770863bcf8049a529c3dd4d3e28536638bd7c1c01d2748309f"},
{file = "jupyterlab_pygments-0.2.2.tar.gz", hash = "sha256:7405d7fde60819d905a9fa8ce89e4cd830e318cdad22a0030f7a901da705585d"},
]
jupyterlab-widgets = [
{file = "jupyterlab_widgets-3.0.3-py3-none-any.whl", hash = "sha256:6aa1bc0045470d54d76b9c0b7609a8f8f0087573bae25700a370c11f82cb38c8"},
{file = "jupyterlab_widgets-3.0.3.tar.gz", hash = "sha256:c767181399b4ca8b647befe2d913b1260f51bf9d8ef9b7a14632d4c1a7b536bd"},
]
keras = [
{file = "keras-2.11.0-py2.py3-none-any.whl", hash = "sha256:38c6fff0ea9a8b06a2717736565c92a73c8cd9b1c239e7125ccb188b7848f65e"},
]
kiwisolver = [
{file = "kiwisolver-1.4.4-cp310-cp310-macosx_10_9_universal2.whl", hash = "sha256:2f5e60fabb7343a836360c4f0919b8cd0d6dbf08ad2ca6b9cf90bf0c76a3c4f6"},
{file = "kiwisolver-1.4.4-cp310-cp310-macosx_10_9_x86_64.whl", hash = "sha256:10ee06759482c78bdb864f4109886dff7b8a56529bc1609d4f1112b93fe6423c"},
{file = "kiwisolver-1.4.4-cp310-cp310-macosx_11_0_arm64.whl", hash = "sha256:c79ebe8f3676a4c6630fd3f777f3cfecf9289666c84e775a67d1d358578dc2e3"},
{file = "kiwisolver-1.4.4-cp310-cp310-manylinux_2_12_i686.manylinux2010_i686.whl", hash = "sha256:abbe9fa13da955feb8202e215c4018f4bb57469b1b78c7a4c5c7b93001699938"},
{file = "kiwisolver-1.4.4-cp310-cp310-manylinux_2_12_x86_64.manylinux2010_x86_64.whl", hash = "sha256:7577c1987baa3adc4b3c62c33bd1118c3ef5c8ddef36f0f2c950ae0b199e100d"},
{file = "kiwisolver-1.4.4-cp310-cp310-manylinux_2_17_aarch64.manylinux2014_aarch64.whl", hash = "sha256:f8ad8285b01b0d4695102546b342b493b3ccc6781fc28c8c6a1bb63e95d22f09"},
{file = "kiwisolver-1.4.4-cp310-cp310-manylinux_2_17_ppc64le.manylinux2014_ppc64le.whl", hash = "sha256:8ed58b8acf29798b036d347791141767ccf65eee7f26bde03a71c944449e53de"},
{file = "kiwisolver-1.4.4-cp310-cp310-manylinux_2_17_s390x.manylinux2014_s390x.whl", hash = "sha256:a68b62a02953b9841730db7797422f983935aeefceb1679f0fc85cbfbd311c32"},
{file = "kiwisolver-1.4.4-cp310-cp310-win32.whl", hash = "sha256:e92a513161077b53447160b9bd8f522edfbed4bd9759e4c18ab05d7ef7e49408"},
{file = "kiwisolver-1.4.4-cp310-cp310-win_amd64.whl", hash = "sha256:3fe20f63c9ecee44560d0e7f116b3a747a5d7203376abeea292ab3152334d004"},
{file = "kiwisolver-1.4.4-cp311-cp311-macosx_10_9_universal2.whl", hash = "sha256:e0ea21f66820452a3f5d1655f8704a60d66ba1191359b96541eaf457710a5fc6"},
{file = "kiwisolver-1.4.4-cp311-cp311-macosx_10_9_x86_64.whl", hash = "sha256:bc9db8a3efb3e403e4ecc6cd9489ea2bac94244f80c78e27c31dcc00d2790ac2"},
{file = "kiwisolver-1.4.4-cp311-cp311-macosx_11_0_arm64.whl", hash = "sha256:d5b61785a9ce44e5a4b880272baa7cf6c8f48a5180c3e81c59553ba0cb0821ca"},
{file = "kiwisolver-1.4.4-cp311-cp311-manylinux_2_12_i686.manylinux2010_i686.manylinux_2_17_i686.manylinux2014_i686.whl", hash = "sha256:c2dbb44c3f7e6c4d3487b31037b1bdbf424d97687c1747ce4ff2895795c9bf69"},
{file = "kiwisolver-1.4.4-cp311-cp311-manylinux_2_17_aarch64.manylinux2014_aarch64.whl", hash = "sha256:6295ecd49304dcf3bfbfa45d9a081c96509e95f4b9d0eb7ee4ec0530c4a96514"},
{file = "kiwisolver-1.4.4-cp311-cp311-manylinux_2_17_ppc64le.manylinux2014_ppc64le.whl", hash = "sha256:4bd472dbe5e136f96a4b18f295d159d7f26fd399136f5b17b08c4e5f498cd494"},
{file = "kiwisolver-1.4.4-cp311-cp311-manylinux_2_17_s390x.manylinux2014_s390x.whl", hash = "sha256:bf7d9fce9bcc4752ca4a1b80aabd38f6d19009ea5cbda0e0856983cf6d0023f5"},
{file = "kiwisolver-1.4.4-cp311-cp311-manylinux_2_17_x86_64.manylinux2014_x86_64.whl", hash = "sha256:78d6601aed50c74e0ef02f4204da1816147a6d3fbdc8b3872d263338a9052c51"},
{file = "kiwisolver-1.4.4-cp311-cp311-musllinux_1_1_aarch64.whl", hash = "sha256:877272cf6b4b7e94c9614f9b10140e198d2186363728ed0f701c6eee1baec1da"},
{file = "kiwisolver-1.4.4-cp311-cp311-musllinux_1_1_i686.whl", hash = "sha256:db608a6757adabb32f1cfe6066e39b3706d8c3aa69bbc353a5b61edad36a5cb4"},
{file = "kiwisolver-1.4.4-cp311-cp311-musllinux_1_1_ppc64le.whl", hash = "sha256:5853eb494c71e267912275e5586fe281444eb5e722de4e131cddf9d442615626"},
{file = "kiwisolver-1.4.4-cp311-cp311-musllinux_1_1_s390x.whl", hash = "sha256:f0a1dbdb5ecbef0d34eb77e56fcb3e95bbd7e50835d9782a45df81cc46949750"},
{file = "kiwisolver-1.4.4-cp311-cp311-musllinux_1_1_x86_64.whl", hash = "sha256:283dffbf061a4ec60391d51e6155e372a1f7a4f5b15d59c8505339454f8989e4"},
{file = "kiwisolver-1.4.4-cp311-cp311-win32.whl", hash = "sha256:d06adcfa62a4431d404c31216f0f8ac97397d799cd53800e9d3efc2fbb3cf14e"},
{file = "kiwisolver-1.4.4-cp311-cp311-win_amd64.whl", hash = "sha256:e7da3fec7408813a7cebc9e4ec55afed2d0fd65c4754bc376bf03498d4e92686"},
{file = "kiwisolver-1.4.4-cp37-cp37m-macosx_10_9_x86_64.whl", hash = "sha256:62ac9cc684da4cf1778d07a89bf5f81b35834cb96ca523d3a7fb32509380cbf6"},
{file = "kiwisolver-1.4.4-cp37-cp37m-manylinux_2_17_aarch64.manylinux2014_aarch64.whl", hash = "sha256:41dae968a94b1ef1897cb322b39360a0812661dba7c682aa45098eb8e193dbdf"},
{file = "kiwisolver-1.4.4-cp37-cp37m-manylinux_2_17_ppc64le.manylinux2014_ppc64le.whl", hash = "sha256:02f79693ec433cb4b5f51694e8477ae83b3205768a6fb48ffba60549080e295b"},
{file = "kiwisolver-1.4.4-cp37-cp37m-manylinux_2_17_s390x.manylinux2014_s390x.whl", hash = "sha256:d0611a0a2a518464c05ddd5a3a1a0e856ccc10e67079bb17f265ad19ab3c7597"},
{file = "kiwisolver-1.4.4-cp37-cp37m-manylinux_2_5_i686.manylinux1_i686.whl", hash = "sha256:db5283d90da4174865d520e7366801a93777201e91e79bacbac6e6927cbceede"},
{file = "kiwisolver-1.4.4-cp37-cp37m-manylinux_2_5_x86_64.manylinux1_x86_64.whl", hash = "sha256:1041feb4cda8708ce73bb4dcb9ce1ccf49d553bf87c3954bdfa46f0c3f77252c"},
{file = "kiwisolver-1.4.4-cp37-cp37m-win32.whl", hash = "sha256:a553dadda40fef6bfa1456dc4be49b113aa92c2a9a9e8711e955618cd69622e3"},
{file = "kiwisolver-1.4.4-cp37-cp37m-win_amd64.whl", hash = "sha256:03baab2d6b4a54ddbb43bba1a3a2d1627e82d205c5cf8f4c924dc49284b87166"},
{file = "kiwisolver-1.4.4-cp38-cp38-macosx_10_9_universal2.whl", hash = "sha256:841293b17ad704d70c578f1f0013c890e219952169ce8a24ebc063eecf775454"},
{file = "kiwisolver-1.4.4-cp38-cp38-macosx_10_9_x86_64.whl", hash = "sha256:f4f270de01dd3e129a72efad823da90cc4d6aafb64c410c9033aba70db9f1ff0"},
{file = "kiwisolver-1.4.4-cp38-cp38-macosx_11_0_arm64.whl", hash = "sha256:f9f39e2f049db33a908319cf46624a569b36983c7c78318e9726a4cb8923b26c"},
{file = "kiwisolver-1.4.4-cp38-cp38-manylinux_2_17_aarch64.manylinux2014_aarch64.whl", hash = "sha256:c97528e64cb9ebeff9701e7938653a9951922f2a38bd847787d4a8e498cc83ae"},
{file = "kiwisolver-1.4.4-cp38-cp38-manylinux_2_17_ppc64le.manylinux2014_ppc64le.whl", hash = "sha256:1d1573129aa0fd901076e2bfb4275a35f5b7aa60fbfb984499d661ec950320b0"},
{file = "kiwisolver-1.4.4-cp38-cp38-manylinux_2_17_s390x.manylinux2014_s390x.whl", hash = "sha256:ad881edc7ccb9d65b0224f4e4d05a1e85cf62d73aab798943df6d48ab0cd79a1"},
{file = "kiwisolver-1.4.4-cp38-cp38-manylinux_2_5_i686.manylinux1_i686.whl", hash = "sha256:b428ef021242344340460fa4c9185d0b1f66fbdbfecc6c63eff4b7c29fad429d"},
{file = "kiwisolver-1.4.4-cp38-cp38-manylinux_2_5_x86_64.manylinux1_x86_64.whl", hash = "sha256:2e407cb4bd5a13984a6c2c0fe1845e4e41e96f183e5e5cd4d77a857d9693494c"},
{file = "kiwisolver-1.4.4-cp38-cp38-win32.whl", hash = "sha256:75facbe9606748f43428fc91a43edb46c7ff68889b91fa31f53b58894503a191"},
{file = "kiwisolver-1.4.4-cp38-cp38-win_amd64.whl", hash = "sha256:5bce61af018b0cb2055e0e72e7d65290d822d3feee430b7b8203d8a855e78766"},
{file = "kiwisolver-1.4.4-cp39-cp39-macosx_10_9_universal2.whl", hash = "sha256:8c808594c88a025d4e322d5bb549282c93c8e1ba71b790f539567932722d7bd8"},
{file = "kiwisolver-1.4.4-cp39-cp39-macosx_10_9_x86_64.whl", hash = "sha256:f0a71d85ecdd570ded8ac3d1c0f480842f49a40beb423bb8014539a9f32a5897"},
{file = "kiwisolver-1.4.4-cp39-cp39-macosx_11_0_arm64.whl", hash = "sha256:b533558eae785e33e8c148a8d9921692a9fe5aa516efbdff8606e7d87b9d5824"},
{file = "kiwisolver-1.4.4-cp39-cp39-manylinux_2_12_i686.manylinux2010_i686.whl", hash = "sha256:efda5fc8cc1c61e4f639b8067d118e742b812c930f708e6667a5ce0d13499e29"},
{file = "kiwisolver-1.4.4-cp39-cp39-manylinux_2_12_x86_64.manylinux2010_x86_64.whl", hash = "sha256:7c43e1e1206cd421cd92e6b3280d4385d41d7166b3ed577ac20444b6995a445f"},
{file = "kiwisolver-1.4.4-cp39-cp39-manylinux_2_17_aarch64.manylinux2014_aarch64.whl", hash = "sha256:bc8d3bd6c72b2dd9decf16ce70e20abcb3274ba01b4e1c96031e0c4067d1e7cd"},
{file = "kiwisolver-1.4.4-cp39-cp39-manylinux_2_17_ppc64le.manylinux2014_ppc64le.whl", hash = "sha256:4ea39b0ccc4f5d803e3337dd46bcce60b702be4d86fd0b3d7531ef10fd99a1ac"},
{file = "kiwisolver-1.4.4-cp39-cp39-manylinux_2_17_s390x.manylinux2014_s390x.whl", hash = "sha256:968f44fdbf6dd757d12920d63b566eeb4d5b395fd2d00d29d7ef00a00582aac9"},
{file = "kiwisolver-1.4.4-cp39-cp39-win32.whl", hash = "sha256:da7e547706e69e45d95e116e6939488d62174e033b763ab1496b4c29b76fabea"},
{file = "kiwisolver-1.4.4-cp39-cp39-win_amd64.whl", hash = "sha256:ba59c92039ec0a66103b1d5fe588fa546373587a7d68f5c96f743c3396afc04b"},
{file = "kiwisolver-1.4.4-pp37-pypy37_pp73-manylinux_2_12_i686.manylinux2010_i686.whl", hash = "sha256:91672bacaa030f92fc2f43b620d7b337fd9a5af28b0d6ed3f77afc43c4a64b5a"},
{file = "kiwisolver-1.4.4-pp37-pypy37_pp73-manylinux_2_12_x86_64.manylinux2010_x86_64.whl", hash = "sha256:787518a6789009c159453da4d6b683f468ef7a65bbde796bcea803ccf191058d"},
{file = "kiwisolver-1.4.4-pp37-pypy37_pp73-manylinux_2_17_aarch64.manylinux2014_aarch64.whl", hash = "sha256:da152d8cdcab0e56e4f45eb08b9aea6455845ec83172092f09b0e077ece2cf7a"},
{file = "kiwisolver-1.4.4-pp37-pypy37_pp73-win_amd64.whl", hash = "sha256:ecb1fa0db7bf4cff9dac752abb19505a233c7f16684c5826d1f11ebd9472b871"},
{file = "kiwisolver-1.4.4-pp38-pypy38_pp73-macosx_10_9_x86_64.whl", hash = "sha256:28bc5b299f48150b5f822ce68624e445040595a4ac3d59251703779836eceff9"},
{file = "kiwisolver-1.4.4-pp38-pypy38_pp73-manylinux_2_12_i686.manylinux2010_i686.whl", hash = "sha256:81e38381b782cc7e1e46c4e14cd997ee6040768101aefc8fa3c24a4cc58e98f8"},
{file = "kiwisolver-1.4.4-pp38-pypy38_pp73-manylinux_2_12_x86_64.manylinux2010_x86_64.whl", hash = "sha256:2a66fdfb34e05b705620dd567f5a03f239a088d5a3f321e7b6ac3239d22aa286"},
{file = "kiwisolver-1.4.4-pp38-pypy38_pp73-manylinux_2_17_aarch64.manylinux2014_aarch64.whl", hash = "sha256:872b8ca05c40d309ed13eb2e582cab0c5a05e81e987ab9c521bf05ad1d5cf5cb"},
{file = "kiwisolver-1.4.4-pp38-pypy38_pp73-win_amd64.whl", hash = "sha256:70e7c2e7b750585569564e2e5ca9845acfaa5da56ac46df68414f29fea97be9f"},
{file = "kiwisolver-1.4.4-pp39-pypy39_pp73-macosx_10_9_x86_64.whl", hash = "sha256:9f85003f5dfa867e86d53fac6f7e6f30c045673fa27b603c397753bebadc3008"},
{file = "kiwisolver-1.4.4-pp39-pypy39_pp73-manylinux_2_12_i686.manylinux2010_i686.manylinux_2_17_i686.manylinux2014_i686.whl", hash = "sha256:2e307eb9bd99801f82789b44bb45e9f541961831c7311521b13a6c85afc09767"},
{file = "kiwisolver-1.4.4-pp39-pypy39_pp73-manylinux_2_17_aarch64.manylinux2014_aarch64.whl", hash = "sha256:b1792d939ec70abe76f5054d3f36ed5656021dcad1322d1cc996d4e54165cef9"},
{file = "kiwisolver-1.4.4-pp39-pypy39_pp73-manylinux_2_17_x86_64.manylinux2014_x86_64.whl", hash = "sha256:f6cb459eea32a4e2cf18ba5fcece2dbdf496384413bc1bae15583f19e567f3b2"},
{file = "kiwisolver-1.4.4-pp39-pypy39_pp73-win_amd64.whl", hash = "sha256:36dafec3d6d6088d34e2de6b85f9d8e2324eb734162fba59d2ba9ed7a2043d5b"},
{file = "kiwisolver-1.4.4.tar.gz", hash = "sha256:d41997519fcba4a1e46eb4a2fe31bc12f0ff957b2b81bac28db24744f333e955"},
]
langcodes = [
{file = "langcodes-3.3.0-py3-none-any.whl", hash = "sha256:4d89fc9acb6e9c8fdef70bcdf376113a3db09b67285d9e1d534de6d8818e7e69"},
{file = "langcodes-3.3.0.tar.gz", hash = "sha256:794d07d5a28781231ac335a1561b8442f8648ca07cd518310aeb45d6f0807ef6"},
]
libclang = [
{file = "libclang-14.0.6-py2.py3-none-macosx_10_9_x86_64.whl", hash = "sha256:8791cf3c3b087c373a6d61e9199da7a541da922c9ddcfed1122090586b996d6e"},
{file = "libclang-14.0.6-py2.py3-none-macosx_11_0_arm64.whl", hash = "sha256:7b06fc76bd1e67c8b04b5719bf2ac5d6a323b289b245dfa9e468561d99538188"},
{file = "libclang-14.0.6-py2.py3-none-manylinux1_x86_64.whl", hash = "sha256:e429853939423f276a25140b0b702442d7da9a09e001c05e48df888336947614"},
{file = "libclang-14.0.6-py2.py3-none-manylinux2010_x86_64.whl", hash = "sha256:206d2789e4450a37d054e63b70451a6fc1873466397443fa13de2b3d4adb2796"},
{file = "libclang-14.0.6-py2.py3-none-manylinux2014_aarch64.whl", hash = "sha256:e2add1703129b2abe066fb1890afa880870a89fd6ab4ec5d2a7a8dc8d271677e"},
{file = "libclang-14.0.6-py2.py3-none-manylinux2014_armv7l.whl", hash = "sha256:5dd3c6fca1b007d308a4114afa8e4e9d32f32b2572520701d45fcc626ac5cd6c"},
{file = "libclang-14.0.6-py2.py3-none-musllinux_1_2_x86_64.whl", hash = "sha256:cfb0e892ebb5dff6bd498ab5778adb8581f26a00fd8347b3c76c989fe2fd04f7"},
{file = "libclang-14.0.6-py2.py3-none-win_amd64.whl", hash = "sha256:ea03c12675151837660cdd5dce65bd89320896ac3421efef43a36678f113ce95"},
{file = "libclang-14.0.6-py2.py3-none-win_arm64.whl", hash = "sha256:2e4303e04517fcd11173cb2e51a7070eed71e16ef45d4e26a82c5e881cac3d27"},
{file = "libclang-14.0.6.tar.gz", hash = "sha256:9052a8284d8846984f6fa826b1d7460a66d3b23a486d782633b42b6e3b418789"},
]
lightgbm = [
{file = "lightgbm-3.3.3-py3-none-macosx_10_15_x86_64.macosx_11_6_x86_64.macosx_12_0_x86_64.whl", hash = "sha256:27b0ae82549d6c59ede4fa3245f4b21a6bf71ab5ec5c55601cf5a962a18c6f80"},
{file = "lightgbm-3.3.3-py3-none-manylinux1_x86_64.whl", hash = "sha256:389edda68b7f24a1755a6af4dad06e16236e374e9de64253a105b12982b153e2"},
{file = "lightgbm-3.3.3-py3-none-manylinux2014_aarch64.whl", hash = "sha256:b0af55bd476785726eaacbd3c880f8168d362d4bba098790f55cd10fe928591b"},
{file = "lightgbm-3.3.3-py3-none-win_amd64.whl", hash = "sha256:b334dbcd670e3d87f4ff3cfe31d652ab18eb88ad9092a02010916320549b7d10"},
{file = "lightgbm-3.3.3.tar.gz", hash = "sha256:857e559ae84a22963ce2b62168292969d21add30bc9246a84d4e7eedae67966d"},
]
llvmlite = [
{file = "llvmlite-0.36.0-cp36-cp36m-macosx_10_9_x86_64.whl", hash = "sha256:cc0f9b9644b4ab0e4a5edb17f1531d791630c88858220d3cc688d6edf10da100"},
{file = "llvmlite-0.36.0-cp36-cp36m-manylinux2010_i686.whl", hash = "sha256:f7918dbac02b1ebbfd7302ad8e8307d7877ab57d782d5f04b70ff9696b53c21b"},
{file = "llvmlite-0.36.0-cp36-cp36m-manylinux2010_x86_64.whl", hash = "sha256:7768658646c418b9b3beccb7044277a608bc8c62b82a85e73c7e5c065e4157c2"},
{file = "llvmlite-0.36.0-cp36-cp36m-win32.whl", hash = "sha256:05f807209a360d39526d98141b6f281b9c7c771c77a4d1fc22002440642c8de2"},
{file = "llvmlite-0.36.0-cp36-cp36m-win_amd64.whl", hash = "sha256:d1fdd63c371626c25ad834e1c6297eb76cf2f093a40dbb401a87b6476ab4e34e"},
{file = "llvmlite-0.36.0-cp37-cp37m-macosx_10_9_x86_64.whl", hash = "sha256:7c4e7066447305d5095d0b0a9cae7b835d2f0fde143456b3124110eab0856426"},
{file = "llvmlite-0.36.0-cp37-cp37m-manylinux2010_i686.whl", hash = "sha256:9dad7e4bb042492914292aea3f4172eca84db731f9478250240955aedba95e08"},
{file = "llvmlite-0.36.0-cp37-cp37m-manylinux2010_x86_64.whl", hash = "sha256:1ce5bc0a638d874a08d4222be0a7e48e5df305d094c2ff8dec525ef32b581551"},
{file = "llvmlite-0.36.0-cp37-cp37m-win32.whl", hash = "sha256:dbedff0f6d417b374253a6bab39aa4b5364f1caab30c06ba8726904776fcf1cb"},
{file = "llvmlite-0.36.0-cp37-cp37m-win_amd64.whl", hash = "sha256:3b17fc4b0dd17bd29d7297d054e2915fad535889907c3f65232ee21f483447c5"},
{file = "llvmlite-0.36.0-cp38-cp38-macosx_10_9_x86_64.whl", hash = "sha256:b3a77e46e6053e2a86e607e87b97651dda81e619febb914824a927bff4e88737"},
{file = "llvmlite-0.36.0-cp38-cp38-manylinux2010_i686.whl", hash = "sha256:048a7c117641c9be87b90005684e64a6f33ea0897ebab1df8a01214a10d6e79a"},
{file = "llvmlite-0.36.0-cp38-cp38-manylinux2010_x86_64.whl", hash = "sha256:7db4b0eef93125af1c4092c64a3c73c7dc904101117ef53f8d78a1a499b8d5f4"},
{file = "llvmlite-0.36.0-cp38-cp38-win32.whl", hash = "sha256:50b1828bde514b31431b2bba1aa20b387f5625b81ad6e12fede430a04645e47a"},
{file = "llvmlite-0.36.0-cp38-cp38-win_amd64.whl", hash = "sha256:f608bae781b2d343e15e080c546468c5a6f35f57f0446923ea198dd21f23757e"},
{file = "llvmlite-0.36.0-cp39-cp39-macosx_10_9_x86_64.whl", hash = "sha256:6a3abc8a8889aeb06bf9c4a7e5df5bc7bb1aa0aedd91a599813809abeec80b5a"},
{file = "llvmlite-0.36.0-cp39-cp39-manylinux2010_i686.whl", hash = "sha256:705f0323d931684428bb3451549603299bb5e17dd60fb979d67c3807de0debc1"},
{file = "llvmlite-0.36.0-cp39-cp39-manylinux2010_x86_64.whl", hash = "sha256:5a6548b4899facb182145147185e9166c69826fb424895f227e6b7cf924a8da1"},
{file = "llvmlite-0.36.0-cp39-cp39-win32.whl", hash = "sha256:ff52fb9c2be66b95b0e67d56fce11038397e5be1ea410ee53f5f1175fdbb107a"},
{file = "llvmlite-0.36.0-cp39-cp39-win_amd64.whl", hash = "sha256:1dee416ea49fd338c74ec15c0c013e5273b0961528169af06ff90772614f7f6c"},
{file = "llvmlite-0.36.0.tar.gz", hash = "sha256:765128fdf5f149ed0b889ffbe2b05eb1717f8e20a5c87fa2b4018fbcce0fcfc9"},
]
locket = [
{file = "locket-1.0.0-py2.py3-none-any.whl", hash = "sha256:b6c819a722f7b6bd955b80781788e4a66a55628b858d347536b7e81325a3a5e3"},
{file = "locket-1.0.0.tar.gz", hash = "sha256:5c0d4c052a8bbbf750e056a8e65ccd309086f4f0f18a2eac306a8dfa4112a632"},
]
Markdown = [
{file = "Markdown-3.4.1-py3-none-any.whl", hash = "sha256:08fb8465cffd03d10b9dd34a5c3fea908e20391a2a90b88d66362cb05beed186"},
{file = "Markdown-3.4.1.tar.gz", hash = "sha256:3b809086bb6efad416156e00a0da66fe47618a5d6918dd688f53f40c8e4cfeff"},
]
MarkupSafe = [
{file = "MarkupSafe-2.1.1-cp310-cp310-macosx_10_9_universal2.whl", hash = "sha256:86b1f75c4e7c2ac2ccdaec2b9022845dbb81880ca318bb7a0a01fbf7813e3812"},
{file = "MarkupSafe-2.1.1-cp310-cp310-macosx_10_9_x86_64.whl", hash = "sha256:f121a1420d4e173a5d96e47e9a0c0dcff965afdf1626d28de1460815f7c4ee7a"},
{file = "MarkupSafe-2.1.1-cp310-cp310-manylinux_2_17_aarch64.manylinux2014_aarch64.whl", hash = "sha256:a49907dd8420c5685cfa064a1335b6754b74541bbb3706c259c02ed65b644b3e"},
{file = "MarkupSafe-2.1.1-cp310-cp310-manylinux_2_17_x86_64.manylinux2014_x86_64.whl", hash = "sha256:10c1bfff05d95783da83491be968e8fe789263689c02724e0c691933c52994f5"},
{file = "MarkupSafe-2.1.1-cp310-cp310-manylinux_2_5_i686.manylinux1_i686.manylinux_2_17_i686.manylinux2014_i686.whl", hash = "sha256:b7bd98b796e2b6553da7225aeb61f447f80a1ca64f41d83612e6139ca5213aa4"},
{file = "MarkupSafe-2.1.1-cp310-cp310-musllinux_1_1_aarch64.whl", hash = "sha256:b09bf97215625a311f669476f44b8b318b075847b49316d3e28c08e41a7a573f"},
{file = "MarkupSafe-2.1.1-cp310-cp310-musllinux_1_1_i686.whl", hash = "sha256:694deca8d702d5db21ec83983ce0bb4b26a578e71fbdbd4fdcd387daa90e4d5e"},
{file = "MarkupSafe-2.1.1-cp310-cp310-musllinux_1_1_x86_64.whl", hash = "sha256:efc1913fd2ca4f334418481c7e595c00aad186563bbc1ec76067848c7ca0a933"},
{file = "MarkupSafe-2.1.1-cp310-cp310-win32.whl", hash = "sha256:4a33dea2b688b3190ee12bd7cfa29d39c9ed176bda40bfa11099a3ce5d3a7ac6"},
{file = "MarkupSafe-2.1.1-cp310-cp310-win_amd64.whl", hash = "sha256:dda30ba7e87fbbb7eab1ec9f58678558fd9a6b8b853530e176eabd064da81417"},
{file = "MarkupSafe-2.1.1-cp37-cp37m-macosx_10_9_x86_64.whl", hash = "sha256:671cd1187ed5e62818414afe79ed29da836dde67166a9fac6d435873c44fdd02"},
{file = "MarkupSafe-2.1.1-cp37-cp37m-manylinux_2_17_aarch64.manylinux2014_aarch64.whl", hash = "sha256:3799351e2336dc91ea70b034983ee71cf2f9533cdff7c14c90ea126bfd95d65a"},
{file = "MarkupSafe-2.1.1-cp37-cp37m-manylinux_2_17_x86_64.manylinux2014_x86_64.whl", hash = "sha256:e72591e9ecd94d7feb70c1cbd7be7b3ebea3f548870aa91e2732960fa4d57a37"},
{file = "MarkupSafe-2.1.1-cp37-cp37m-manylinux_2_5_i686.manylinux1_i686.manylinux_2_17_i686.manylinux2014_i686.whl", hash = "sha256:6fbf47b5d3728c6aea2abb0589b5d30459e369baa772e0f37a0320185e87c980"},
{file = "MarkupSafe-2.1.1-cp37-cp37m-musllinux_1_1_aarch64.whl", hash = "sha256:d5ee4f386140395a2c818d149221149c54849dfcfcb9f1debfe07a8b8bd63f9a"},
{file = "MarkupSafe-2.1.1-cp37-cp37m-musllinux_1_1_i686.whl", hash = "sha256:bcb3ed405ed3222f9904899563d6fc492ff75cce56cba05e32eff40e6acbeaa3"},
{file = "MarkupSafe-2.1.1-cp37-cp37m-musllinux_1_1_x86_64.whl", hash = "sha256:e1c0b87e09fa55a220f058d1d49d3fb8df88fbfab58558f1198e08c1e1de842a"},
{file = "MarkupSafe-2.1.1-cp37-cp37m-win32.whl", hash = "sha256:8dc1c72a69aa7e082593c4a203dcf94ddb74bb5c8a731e4e1eb68d031e8498ff"},
{file = "MarkupSafe-2.1.1-cp37-cp37m-win_amd64.whl", hash = "sha256:97a68e6ada378df82bc9f16b800ab77cbf4b2fada0081794318520138c088e4a"},
{file = "MarkupSafe-2.1.1-cp38-cp38-macosx_10_9_universal2.whl", hash = "sha256:e8c843bbcda3a2f1e3c2ab25913c80a3c5376cd00c6e8c4a86a89a28c8dc5452"},
{file = "MarkupSafe-2.1.1-cp38-cp38-macosx_10_9_x86_64.whl", hash = "sha256:0212a68688482dc52b2d45013df70d169f542b7394fc744c02a57374a4207003"},
{file = "MarkupSafe-2.1.1-cp38-cp38-manylinux_2_17_aarch64.manylinux2014_aarch64.whl", hash = "sha256:8e576a51ad59e4bfaac456023a78f6b5e6e7651dcd383bcc3e18d06f9b55d6d1"},
{file = "MarkupSafe-2.1.1-cp38-cp38-manylinux_2_17_x86_64.manylinux2014_x86_64.whl", hash = "sha256:4b9fe39a2ccc108a4accc2676e77da025ce383c108593d65cc909add5c3bd601"},
{file = "MarkupSafe-2.1.1-cp38-cp38-manylinux_2_5_i686.manylinux1_i686.manylinux_2_17_i686.manylinux2014_i686.whl", hash = "sha256:96e37a3dc86e80bf81758c152fe66dbf60ed5eca3d26305edf01892257049925"},
{file = "MarkupSafe-2.1.1-cp38-cp38-musllinux_1_1_aarch64.whl", hash = "sha256:6d0072fea50feec76a4c418096652f2c3238eaa014b2f94aeb1d56a66b41403f"},
{file = "MarkupSafe-2.1.1-cp38-cp38-musllinux_1_1_i686.whl", hash = "sha256:089cf3dbf0cd6c100f02945abeb18484bd1ee57a079aefd52cffd17fba910b88"},
{file = "MarkupSafe-2.1.1-cp38-cp38-musllinux_1_1_x86_64.whl", hash = "sha256:6a074d34ee7a5ce3effbc526b7083ec9731bb3cbf921bbe1d3005d4d2bdb3a63"},
{file = "MarkupSafe-2.1.1-cp38-cp38-win32.whl", hash = "sha256:421be9fbf0ffe9ffd7a378aafebbf6f4602d564d34be190fc19a193232fd12b1"},
{file = "MarkupSafe-2.1.1-cp38-cp38-win_amd64.whl", hash = "sha256:fc7b548b17d238737688817ab67deebb30e8073c95749d55538ed473130ec0c7"},
{file = "MarkupSafe-2.1.1-cp39-cp39-macosx_10_9_universal2.whl", hash = "sha256:e04e26803c9c3851c931eac40c695602c6295b8d432cbe78609649ad9bd2da8a"},
{file = "MarkupSafe-2.1.1-cp39-cp39-macosx_10_9_x86_64.whl", hash = "sha256:b87db4360013327109564f0e591bd2a3b318547bcef31b468a92ee504d07ae4f"},
{file = "MarkupSafe-2.1.1-cp39-cp39-manylinux_2_17_aarch64.manylinux2014_aarch64.whl", hash = "sha256:99a2a507ed3ac881b975a2976d59f38c19386d128e7a9a18b7df6fff1fd4c1d6"},
{file = "MarkupSafe-2.1.1-cp39-cp39-manylinux_2_17_x86_64.manylinux2014_x86_64.whl", hash = "sha256:56442863ed2b06d19c37f94d999035e15ee982988920e12a5b4ba29b62ad1f77"},
{file = "MarkupSafe-2.1.1-cp39-cp39-manylinux_2_5_i686.manylinux1_i686.manylinux_2_17_i686.manylinux2014_i686.whl", hash = "sha256:3ce11ee3f23f79dbd06fb3d63e2f6af7b12db1d46932fe7bd8afa259a5996603"},
{file = "MarkupSafe-2.1.1-cp39-cp39-musllinux_1_1_aarch64.whl", hash = "sha256:33b74d289bd2f5e527beadcaa3f401e0df0a89927c1559c8566c066fa4248ab7"},
{file = "MarkupSafe-2.1.1-cp39-cp39-musllinux_1_1_i686.whl", hash = "sha256:43093fb83d8343aac0b1baa75516da6092f58f41200907ef92448ecab8825135"},
{file = "MarkupSafe-2.1.1-cp39-cp39-musllinux_1_1_x86_64.whl", hash = "sha256:8e3dcf21f367459434c18e71b2a9532d96547aef8a871872a5bd69a715c15f96"},
{file = "MarkupSafe-2.1.1-cp39-cp39-win32.whl", hash = "sha256:d4306c36ca495956b6d568d276ac11fdd9c30a36f1b6eb928070dc5360b22e1c"},
{file = "MarkupSafe-2.1.1-cp39-cp39-win_amd64.whl", hash = "sha256:46d00d6cfecdde84d40e572d63735ef81423ad31184100411e6e3388d405e247"},
{file = "MarkupSafe-2.1.1.tar.gz", hash = "sha256:7f91197cc9e48f989d12e4e6fbc46495c446636dfc81b9ccf50bb0ec74b91d4b"},
]
matplotlib = [
{file = "matplotlib-3.6.2-cp310-cp310-macosx_10_12_universal2.whl", hash = "sha256:8d0068e40837c1d0df6e3abf1cdc9a34a6d2611d90e29610fa1d2455aeb4e2e5"},
{file = "matplotlib-3.6.2-cp310-cp310-macosx_10_12_x86_64.whl", hash = "sha256:252957e208c23db72ca9918cb33e160c7833faebf295aaedb43f5b083832a267"},
{file = "matplotlib-3.6.2-cp310-cp310-macosx_11_0_arm64.whl", hash = "sha256:d50e8c1e571ee39b5dfbc295c11ad65988879f68009dd281a6e1edbc2ff6c18c"},
{file = "matplotlib-3.6.2-cp310-cp310-manylinux_2_17_aarch64.manylinux2014_aarch64.whl", hash = "sha256:d840adcad7354be6f2ec28d0706528b0026e4c3934cc6566b84eac18633eab1b"},
{file = "matplotlib-3.6.2-cp310-cp310-manylinux_2_17_i686.manylinux2014_i686.whl", hash = "sha256:78ec3c3412cf277e6252764ee4acbdbec6920cc87ad65862272aaa0e24381eee"},
{file = "matplotlib-3.6.2-cp310-cp310-manylinux_2_17_x86_64.manylinux2014_x86_64.whl", hash = "sha256:9347cc6822f38db2b1d1ce992f375289670e595a2d1c15961aacbe0977407dfc"},
{file = "matplotlib-3.6.2-cp310-cp310-win32.whl", hash = "sha256:e0bbee6c2a5bf2a0017a9b5e397babb88f230e6f07c3cdff4a4c4bc75ed7c617"},
{file = "matplotlib-3.6.2-cp310-cp310-win_amd64.whl", hash = "sha256:8a0ae37576ed444fe853709bdceb2be4c7df6f7acae17b8378765bd28e61b3ae"},
{file = "matplotlib-3.6.2-cp311-cp311-macosx_10_12_universal2.whl", hash = "sha256:5ecfc6559132116dedfc482d0ad9df8a89dc5909eebffd22f3deb684132d002f"},
{file = "matplotlib-3.6.2-cp311-cp311-macosx_10_12_x86_64.whl", hash = "sha256:9f335e5625feb90e323d7e3868ec337f7b9ad88b5d633f876e3b778813021dab"},
{file = "matplotlib-3.6.2-cp311-cp311-macosx_11_0_arm64.whl", hash = "sha256:b2604c6450f9dd2c42e223b1f5dca9643a23cfecc9fde4a94bb38e0d2693b136"},
{file = "matplotlib-3.6.2-cp311-cp311-manylinux_2_17_aarch64.manylinux2014_aarch64.whl", hash = "sha256:e5afe0a7ea0e3a7a257907060bee6724a6002b7eec55d0db16fd32409795f3e1"},
{file = "matplotlib-3.6.2-cp311-cp311-manylinux_2_17_i686.manylinux2014_i686.whl", hash = "sha256:ca0e7a658fbafcddcaefaa07ba8dae9384be2343468a8e011061791588d839fa"},
{file = "matplotlib-3.6.2-cp311-cp311-manylinux_2_17_x86_64.manylinux2014_x86_64.whl", hash = "sha256:32d29c8c26362169c80c5718ce367e8c64f4dd068a424e7110df1dd2ed7bd428"},
{file = "matplotlib-3.6.2-cp311-cp311-win32.whl", hash = "sha256:5024b8ed83d7f8809982d095d8ab0b179bebc07616a9713f86d30cf4944acb73"},
{file = "matplotlib-3.6.2-cp311-cp311-win_amd64.whl", hash = "sha256:52c2bdd7cd0bf9d5ccdf9c1816568fd4ccd51a4d82419cc5480f548981b47dd0"},
{file = "matplotlib-3.6.2-cp38-cp38-macosx_10_12_universal2.whl", hash = "sha256:8a8dbe2cb7f33ff54b16bb5c500673502a35f18ac1ed48625e997d40c922f9cc"},
{file = "matplotlib-3.6.2-cp38-cp38-macosx_10_12_x86_64.whl", hash = "sha256:380d48c15ec41102a2b70858ab1dedfa33eb77b2c0982cb65a200ae67a48e9cb"},
{file = "matplotlib-3.6.2-cp38-cp38-macosx_11_0_arm64.whl", hash = "sha256:0844523dfaaff566e39dbfa74e6f6dc42e92f7a365ce80929c5030b84caa563a"},
{file = "matplotlib-3.6.2-cp38-cp38-manylinux_2_12_i686.manylinux2010_i686.whl", hash = "sha256:7f716b6af94dc1b6b97c46401774472f0867e44595990fe80a8ba390f7a0a028"},
{file = "matplotlib-3.6.2-cp38-cp38-manylinux_2_12_x86_64.manylinux2010_x86_64.whl", hash = "sha256:74153008bd24366cf099d1f1e83808d179d618c4e32edb0d489d526523a94d9f"},
{file = "matplotlib-3.6.2-cp38-cp38-manylinux_2_17_aarch64.manylinux2014_aarch64.whl", hash = "sha256:f41e57ad63d336fe50d3a67bb8eaa26c09f6dda6a59f76777a99b8ccd8e26aec"},
{file = "matplotlib-3.6.2-cp38-cp38-win32.whl", hash = "sha256:d0e9ac04065a814d4cf2c6791a2ad563f739ae3ae830d716d54245c2b96fead6"},
{file = "matplotlib-3.6.2-cp38-cp38-win_amd64.whl", hash = "sha256:8a9d899953c722b9afd7e88dbefd8fb276c686c3116a43c577cfabf636180558"},
{file = "matplotlib-3.6.2-cp39-cp39-macosx_10_12_universal2.whl", hash = "sha256:f04f97797df35e442ed09f529ad1235d1f1c0f30878e2fe09a2676b71a8801e0"},
{file = "matplotlib-3.6.2-cp39-cp39-macosx_10_12_x86_64.whl", hash = "sha256:3964934731fd7a289a91d315919cf757f293969a4244941ab10513d2351b4e83"},
{file = "matplotlib-3.6.2-cp39-cp39-macosx_11_0_arm64.whl", hash = "sha256:168093410b99f647ba61361b208f7b0d64dde1172b5b1796d765cd243cadb501"},
{file = "matplotlib-3.6.2-cp39-cp39-manylinux_2_17_aarch64.manylinux2014_aarch64.whl", hash = "sha256:5e16dcaecffd55b955aa5e2b8a804379789c15987e8ebd2f32f01398a81e975b"},
{file = "matplotlib-3.6.2-cp39-cp39-manylinux_2_17_i686.manylinux2014_i686.whl", hash = "sha256:83dc89c5fd728fdb03b76f122f43b4dcee8c61f1489e232d9ad0f58020523e1c"},
{file = "matplotlib-3.6.2-cp39-cp39-manylinux_2_17_x86_64.manylinux2014_x86_64.whl", hash = "sha256:795ad83940732b45d39b82571f87af0081c120feff2b12e748d96bb191169e33"},
{file = "matplotlib-3.6.2-cp39-cp39-win32.whl", hash = "sha256:19d61ee6414c44a04addbe33005ab1f87539d9f395e25afcbe9a3c50ce77c65c"},
{file = "matplotlib-3.6.2-cp39-cp39-win_amd64.whl", hash = "sha256:5ba73aa3aca35d2981e0b31230d58abb7b5d7ca104e543ae49709208d8ce706a"},
{file = "matplotlib-3.6.2-pp38-pypy38_pp73-macosx_10_12_x86_64.whl", hash = "sha256:1836f366272b1557a613f8265db220eb8dd883202bbbabe01bad5a4eadfd0c95"},
{file = "matplotlib-3.6.2-pp38-pypy38_pp73-manylinux_2_17_i686.manylinux2014_i686.whl", hash = "sha256:0eda9d1b43f265da91fb9ae10d6922b5a986e2234470a524e6b18f14095b20d2"},
{file = "matplotlib-3.6.2-pp38-pypy38_pp73-manylinux_2_17_x86_64.manylinux2014_x86_64.whl", hash = "sha256:ec9be0f4826cdb3a3a517509dcc5f87f370251b76362051ab59e42b6b765f8c4"},
{file = "matplotlib-3.6.2-pp38-pypy38_pp73-win_amd64.whl", hash = "sha256:3cef89888a466228fc4e4b2954e740ce8e9afde7c4315fdd18caa1b8de58ca17"},
{file = "matplotlib-3.6.2-pp39-pypy39_pp73-macosx_10_12_x86_64.whl", hash = "sha256:54fa9fe27f5466b86126ff38123261188bed568c1019e4716af01f97a12fe812"},
{file = "matplotlib-3.6.2-pp39-pypy39_pp73-manylinux_2_17_i686.manylinux2014_i686.whl", hash = "sha256:e68be81cd8c22b029924b6d0ee814c337c0e706b8d88495a617319e5dd5441c3"},
{file = "matplotlib-3.6.2-pp39-pypy39_pp73-manylinux_2_17_x86_64.manylinux2014_x86_64.whl", hash = "sha256:b0ca2c60d3966dfd6608f5f8c49b8a0fcf76de6654f2eda55fc6ef038d5a6f27"},
{file = "matplotlib-3.6.2-pp39-pypy39_pp73-win_amd64.whl", hash = "sha256:4426c74761790bff46e3d906c14c7aab727543293eed5a924300a952e1a3a3c1"},
{file = "matplotlib-3.6.2.tar.gz", hash = "sha256:b03fd10a1709d0101c054883b550f7c4c5e974f751e2680318759af005964990"},
]
matplotlib-inline = [
{file = "matplotlib-inline-0.1.6.tar.gz", hash = "sha256:f887e5f10ba98e8d2b150ddcf4702c1e5f8b3a20005eb0f74bfdbd360ee6f304"},
{file = "matplotlib_inline-0.1.6-py3-none-any.whl", hash = "sha256:f1f41aab5328aa5aaea9b16d083b128102f8712542f819fe7e6a420ff581b311"},
]
mccabe = [
{file = "mccabe-0.6.1-py2.py3-none-any.whl", hash = "sha256:ab8a6258860da4b6677da4bd2fe5dc2c659cff31b3ee4f7f5d64e79735b80d42"},
{file = "mccabe-0.6.1.tar.gz", hash = "sha256:dd8d182285a0fe56bace7f45b5e7d1a6ebcbf524e8f3bd87eb0f125271b8831f"},
]
mistune = [
{file = "mistune-2.0.4-py2.py3-none-any.whl", hash = "sha256:182cc5ee6f8ed1b807de6b7bb50155df7b66495412836b9a74c8fbdfc75fe36d"},
{file = "mistune-2.0.4.tar.gz", hash = "sha256:9ee0a66053e2267aba772c71e06891fa8f1af6d4b01d5e84e267b4570d4d9808"},
]
mpmath = [
{file = "mpmath-1.2.1-py3-none-any.whl", hash = "sha256:604bc21bd22d2322a177c73bdb573994ef76e62edd595d17e00aff24b0667e5c"},
{file = "mpmath-1.2.1.tar.gz", hash = "sha256:79ffb45cf9f4b101a807595bcb3e72e0396202e0b1d25d689134b48c4216a81a"},
]
msgpack = [
{file = "msgpack-1.0.4-cp310-cp310-macosx_10_9_universal2.whl", hash = "sha256:4ab251d229d10498e9a2f3b1e68ef64cb393394ec477e3370c457f9430ce9250"},
{file = "msgpack-1.0.4-cp310-cp310-macosx_10_9_x86_64.whl", hash = "sha256:112b0f93202d7c0fef0b7810d465fde23c746a2d482e1e2de2aafd2ce1492c88"},
{file = "msgpack-1.0.4-cp310-cp310-macosx_11_0_arm64.whl", hash = "sha256:002b5c72b6cd9b4bafd790f364b8480e859b4712e91f43014fe01e4f957b8467"},
{file = "msgpack-1.0.4-cp310-cp310-manylinux_2_17_aarch64.manylinux2014_aarch64.whl", hash = "sha256:35bc0faa494b0f1d851fd29129b2575b2e26d41d177caacd4206d81502d4c6a6"},
{file = "msgpack-1.0.4-cp310-cp310-manylinux_2_17_x86_64.manylinux2014_x86_64.whl", hash = "sha256:4733359808c56d5d7756628736061c432ded018e7a1dff2d35a02439043321aa"},
{file = "msgpack-1.0.4-cp310-cp310-manylinux_2_5_i686.manylinux1_i686.manylinux_2_17_i686.manylinux2014_i686.whl", hash = "sha256:eb514ad14edf07a1dbe63761fd30f89ae79b42625731e1ccf5e1f1092950eaa6"},
{file = "msgpack-1.0.4-cp310-cp310-musllinux_1_1_aarch64.whl", hash = "sha256:c23080fdeec4716aede32b4e0ef7e213c7b1093eede9ee010949f2a418ced6ba"},
{file = "msgpack-1.0.4-cp310-cp310-musllinux_1_1_i686.whl", hash = "sha256:49565b0e3d7896d9ea71d9095df15b7f75a035c49be733051c34762ca95bbf7e"},
{file = "msgpack-1.0.4-cp310-cp310-musllinux_1_1_x86_64.whl", hash = "sha256:aca0f1644d6b5a73eb3e74d4d64d5d8c6c3d577e753a04c9e9c87d07692c58db"},
{file = "msgpack-1.0.4-cp310-cp310-win32.whl", hash = "sha256:0dfe3947db5fb9ce52aaea6ca28112a170db9eae75adf9339a1aec434dc954ef"},
{file = "msgpack-1.0.4-cp310-cp310-win_amd64.whl", hash = "sha256:4dea20515f660aa6b7e964433b1808d098dcfcabbebeaaad240d11f909298075"},
{file = "msgpack-1.0.4-cp36-cp36m-macosx_10_9_x86_64.whl", hash = "sha256:e83f80a7fec1a62cf4e6c9a660e39c7f878f603737a0cdac8c13131d11d97f52"},
{file = "msgpack-1.0.4-cp36-cp36m-manylinux_2_17_aarch64.manylinux2014_aarch64.whl", hash = "sha256:3c11a48cf5e59026ad7cb0dc29e29a01b5a66a3e333dc11c04f7e991fc5510a9"},
{file = "msgpack-1.0.4-cp36-cp36m-manylinux_2_17_x86_64.manylinux2014_x86_64.whl", hash = "sha256:1276e8f34e139aeff1c77a3cefb295598b504ac5314d32c8c3d54d24fadb94c9"},
{file = "msgpack-1.0.4-cp36-cp36m-manylinux_2_5_i686.manylinux1_i686.manylinux_2_17_i686.manylinux2014_i686.whl", hash = "sha256:6c9566f2c39ccced0a38d37c26cc3570983b97833c365a6044edef3574a00c08"},
{file = "msgpack-1.0.4-cp36-cp36m-musllinux_1_1_aarch64.whl", hash = "sha256:fcb8a47f43acc113e24e910399376f7277cf8508b27e5b88499f053de6b115a8"},
{file = "msgpack-1.0.4-cp36-cp36m-musllinux_1_1_i686.whl", hash = "sha256:76ee788122de3a68a02ed6f3a16bbcd97bc7c2e39bd4d94be2f1821e7c4a64e6"},
{file = "msgpack-1.0.4-cp36-cp36m-musllinux_1_1_x86_64.whl", hash = "sha256:0a68d3ac0104e2d3510de90a1091720157c319ceeb90d74f7b5295a6bee51bae"},
{file = "msgpack-1.0.4-cp36-cp36m-win32.whl", hash = "sha256:85f279d88d8e833ec015650fd15ae5eddce0791e1e8a59165318f371158efec6"},
{file = "msgpack-1.0.4-cp36-cp36m-win_amd64.whl", hash = "sha256:c1683841cd4fa45ac427c18854c3ec3cd9b681694caf5bff04edb9387602d661"},
{file = "msgpack-1.0.4-cp37-cp37m-macosx_10_9_x86_64.whl", hash = "sha256:a75dfb03f8b06f4ab093dafe3ddcc2d633259e6c3f74bb1b01996f5d8aa5868c"},
{file = "msgpack-1.0.4-cp37-cp37m-manylinux_2_17_aarch64.manylinux2014_aarch64.whl", hash = "sha256:9667bdfdf523c40d2511f0e98a6c9d3603be6b371ae9a238b7ef2dc4e7a427b0"},
{file = "msgpack-1.0.4-cp37-cp37m-manylinux_2_17_x86_64.manylinux2014_x86_64.whl", hash = "sha256:11184bc7e56fd74c00ead4f9cc9a3091d62ecb96e97653add7a879a14b003227"},
{file = "msgpack-1.0.4-cp37-cp37m-manylinux_2_5_i686.manylinux1_i686.manylinux_2_17_i686.manylinux2014_i686.whl", hash = "sha256:ac5bd7901487c4a1dd51a8c58f2632b15d838d07ceedaa5e4c080f7190925bff"},
{file = "msgpack-1.0.4-cp37-cp37m-musllinux_1_1_aarch64.whl", hash = "sha256:1e91d641d2bfe91ba4c52039adc5bccf27c335356055825c7f88742c8bb900dd"},
{file = "msgpack-1.0.4-cp37-cp37m-musllinux_1_1_i686.whl", hash = "sha256:2a2df1b55a78eb5f5b7d2a4bb221cd8363913830145fad05374a80bf0877cb1e"},
{file = "msgpack-1.0.4-cp37-cp37m-musllinux_1_1_x86_64.whl", hash = "sha256:545e3cf0cf74f3e48b470f68ed19551ae6f9722814ea969305794645da091236"},
{file = "msgpack-1.0.4-cp37-cp37m-win32.whl", hash = "sha256:2cc5ca2712ac0003bcb625c96368fd08a0f86bbc1a5578802512d87bc592fe44"},
{file = "msgpack-1.0.4-cp37-cp37m-win_amd64.whl", hash = "sha256:eba96145051ccec0ec86611fe9cf693ce55f2a3ce89c06ed307de0e085730ec1"},
{file = "msgpack-1.0.4-cp38-cp38-macosx_10_9_universal2.whl", hash = "sha256:7760f85956c415578c17edb39eed99f9181a48375b0d4a94076d84148cf67b2d"},
{file = "msgpack-1.0.4-cp38-cp38-macosx_10_9_x86_64.whl", hash = "sha256:449e57cc1ff18d3b444eb554e44613cffcccb32805d16726a5494038c3b93dab"},
{file = "msgpack-1.0.4-cp38-cp38-macosx_11_0_arm64.whl", hash = "sha256:d603de2b8d2ea3f3bcb2efe286849aa7a81531abc52d8454da12f46235092bcb"},
{file = "msgpack-1.0.4-cp38-cp38-manylinux_2_17_aarch64.manylinux2014_aarch64.whl", hash = "sha256:48f5d88c99f64c456413d74a975bd605a9b0526293218a3b77220a2c15458ba9"},
{file = "msgpack-1.0.4-cp38-cp38-manylinux_2_17_x86_64.manylinux2014_x86_64.whl", hash = "sha256:6916c78f33602ecf0509cc40379271ba0f9ab572b066bd4bdafd7434dee4bc6e"},
{file = "msgpack-1.0.4-cp38-cp38-manylinux_2_5_i686.manylinux1_i686.manylinux_2_17_i686.manylinux2014_i686.whl", hash = "sha256:81fc7ba725464651190b196f3cd848e8553d4d510114a954681fd0b9c479d7e1"},
{file = "msgpack-1.0.4-cp38-cp38-musllinux_1_1_aarch64.whl", hash = "sha256:d5b5b962221fa2c5d3a7f8133f9abffc114fe218eb4365e40f17732ade576c8e"},
{file = "msgpack-1.0.4-cp38-cp38-musllinux_1_1_i686.whl", hash = "sha256:77ccd2af37f3db0ea59fb280fa2165bf1b096510ba9fe0cc2bf8fa92a22fdb43"},
{file = "msgpack-1.0.4-cp38-cp38-musllinux_1_1_x86_64.whl", hash = "sha256:b17be2478b622939e39b816e0aa8242611cc8d3583d1cd8ec31b249f04623243"},
{file = "msgpack-1.0.4-cp38-cp38-win32.whl", hash = "sha256:2bb8cdf50dd623392fa75525cce44a65a12a00c98e1e37bf0fb08ddce2ff60d2"},
{file = "msgpack-1.0.4-cp38-cp38-win_amd64.whl", hash = "sha256:26b8feaca40a90cbe031b03d82b2898bf560027160d3eae1423f4a67654ec5d6"},
{file = "msgpack-1.0.4-cp39-cp39-macosx_10_9_universal2.whl", hash = "sha256:462497af5fd4e0edbb1559c352ad84f6c577ffbbb708566a0abaaa84acd9f3ae"},
{file = "msgpack-1.0.4-cp39-cp39-macosx_10_9_x86_64.whl", hash = "sha256:2999623886c5c02deefe156e8f869c3b0aaeba14bfc50aa2486a0415178fce55"},
{file = "msgpack-1.0.4-cp39-cp39-macosx_11_0_arm64.whl", hash = "sha256:f0029245c51fd9473dc1aede1160b0a29f4a912e6b1dd353fa6d317085b219da"},
{file = "msgpack-1.0.4-cp39-cp39-manylinux_2_17_aarch64.manylinux2014_aarch64.whl", hash = "sha256:ed6f7b854a823ea44cf94919ba3f727e230da29feb4a99711433f25800cf747f"},
{file = "msgpack-1.0.4-cp39-cp39-manylinux_2_17_x86_64.manylinux2014_x86_64.whl", hash = "sha256:0df96d6eaf45ceca04b3f3b4b111b86b33785683d682c655063ef8057d61fd92"},
{file = "msgpack-1.0.4-cp39-cp39-manylinux_2_5_i686.manylinux1_i686.manylinux_2_17_i686.manylinux2014_i686.whl", hash = "sha256:6a4192b1ab40f8dca3f2877b70e63799d95c62c068c84dc028b40a6cb03ccd0f"},
{file = "msgpack-1.0.4-cp39-cp39-musllinux_1_1_aarch64.whl", hash = "sha256:0e3590f9fb9f7fbc36df366267870e77269c03172d086fa76bb4eba8b2b46624"},
{file = "msgpack-1.0.4-cp39-cp39-musllinux_1_1_i686.whl", hash = "sha256:1576bd97527a93c44fa856770197dec00d223b0b9f36ef03f65bac60197cedf8"},
{file = "msgpack-1.0.4-cp39-cp39-musllinux_1_1_x86_64.whl", hash = "sha256:63e29d6e8c9ca22b21846234913c3466b7e4ee6e422f205a2988083de3b08cae"},
{file = "msgpack-1.0.4-cp39-cp39-win32.whl", hash = "sha256:fb62ea4b62bfcb0b380d5680f9a4b3f9a2d166d9394e9bbd9666c0ee09a3645c"},
{file = "msgpack-1.0.4-cp39-cp39-win_amd64.whl", hash = "sha256:4d5834a2a48965a349da1c5a79760d94a1a0172fbb5ab6b5b33cbf8447e109ce"},
{file = "msgpack-1.0.4.tar.gz", hash = "sha256:f5d869c18f030202eb412f08b28d2afeea553d6613aee89e200d7aca7ef01f5f"},
]
multiprocess = [
{file = "multiprocess-0.70.14-pp37-pypy37_pp73-macosx_10_9_x86_64.whl", hash = "sha256:560a27540daef4ce8b24ed3cc2496a3c670df66c96d02461a4da67473685adf3"},
{file = "multiprocess-0.70.14-pp37-pypy37_pp73-manylinux_2_24_i686.whl", hash = "sha256:bfbbfa36f400b81d1978c940616bc77776424e5e34cb0c94974b178d727cfcd5"},
{file = "multiprocess-0.70.14-pp37-pypy37_pp73-manylinux_2_24_x86_64.whl", hash = "sha256:89fed99553a04ec4f9067031f83a886d7fdec5952005551a896a4b6a59575bb9"},
{file = "multiprocess-0.70.14-pp38-pypy38_pp73-macosx_10_9_x86_64.whl", hash = "sha256:40a5e3685462079e5fdee7c6789e3ef270595e1755199f0d50685e72523e1d2a"},
{file = "multiprocess-0.70.14-pp38-pypy38_pp73-manylinux_2_24_i686.whl", hash = "sha256:44936b2978d3f2648727b3eaeab6d7fa0bedf072dc5207bf35a96d5ee7c004cf"},
{file = "multiprocess-0.70.14-pp38-pypy38_pp73-manylinux_2_24_x86_64.whl", hash = "sha256:e628503187b5d494bf29ffc52d3e1e57bb770ce7ce05d67c4bbdb3a0c7d3b05f"},
{file = "multiprocess-0.70.14-pp39-pypy39_pp73-macosx_10_9_x86_64.whl", hash = "sha256:0d5da0fc84aacb0e4bd69c41b31edbf71b39fe2fb32a54eaedcaea241050855c"},
{file = "multiprocess-0.70.14-pp39-pypy39_pp73-manylinux_2_24_i686.whl", hash = "sha256:6a7b03a5b98e911a7785b9116805bd782815c5e2bd6c91c6a320f26fd3e7b7ad"},
{file = "multiprocess-0.70.14-pp39-pypy39_pp73-manylinux_2_24_x86_64.whl", hash = "sha256:cea5bdedd10aace3c660fedeac8b087136b4366d4ee49a30f1ebf7409bce00ae"},
{file = "multiprocess-0.70.14-py310-none-any.whl", hash = "sha256:7dc1f2f6a1d34894c8a9a013fbc807971e336e7cc3f3ff233e61b9dc679b3b5c"},
{file = "multiprocess-0.70.14-py37-none-any.whl", hash = "sha256:93a8208ca0926d05cdbb5b9250a604c401bed677579e96c14da3090beb798193"},
{file = "multiprocess-0.70.14-py38-none-any.whl", hash = "sha256:6725bc79666bbd29a73ca148a0fb5f4ea22eed4a8f22fce58296492a02d18a7b"},
{file = "multiprocess-0.70.14-py39-none-any.whl", hash = "sha256:63cee628b74a2c0631ef15da5534c8aedbc10c38910b9c8b18dcd327528d1ec7"},
{file = "multiprocess-0.70.14.tar.gz", hash = "sha256:3eddafc12f2260d27ae03fe6069b12570ab4764ab59a75e81624fac453fbf46a"},
]
murmurhash = [
{file = "murmurhash-1.0.9-cp310-cp310-macosx_10_9_x86_64.whl", hash = "sha256:697ed01454d92681c7ae26eb1adcdc654b54062bcc59db38ed03cad71b23d449"},
{file = "murmurhash-1.0.9-cp310-cp310-macosx_11_0_arm64.whl", hash = "sha256:5ef31b5c11be2c064dbbdd0e22ab3effa9ceb5b11ae735295c717c120087dd94"},
{file = "murmurhash-1.0.9-cp310-cp310-manylinux_2_17_aarch64.manylinux2014_aarch64.whl", hash = "sha256:d7a2bd203377a31bbb2d83fe3f968756d6c9bbfa36c64c6ebfc3c6494fc680bc"},
{file = "murmurhash-1.0.9-cp310-cp310-manylinux_2_5_x86_64.manylinux1_x86_64.manylinux_2_17_x86_64.manylinux2014_x86_64.whl", hash = "sha256:0eb0f8e652431ea238c11bcb671fef5c03aff0544bf7e098df81ea4b6d495405"},
{file = "murmurhash-1.0.9-cp310-cp310-win_amd64.whl", hash = "sha256:cf0b3fe54dca598f5b18c9951e70812e070ecb4c0672ad2cc32efde8a33b3df6"},
{file = "murmurhash-1.0.9-cp311-cp311-macosx_10_9_x86_64.whl", hash = "sha256:5dc41be79ba4d09aab7e9110a8a4d4b37b184b63767b1b247411667cdb1057a3"},
{file = "murmurhash-1.0.9-cp311-cp311-macosx_11_0_arm64.whl", hash = "sha256:c0f84ecdf37c06eda0222f2f9e81c0974e1a7659c35b755ab2fdc642ebd366db"},
{file = "murmurhash-1.0.9-cp311-cp311-manylinux_2_17_aarch64.manylinux2014_aarch64.whl", hash = "sha256:241693c1c819148eac29d7882739b1099c891f1f7431127b2652c23f81722cec"},
{file = "murmurhash-1.0.9-cp311-cp311-manylinux_2_5_x86_64.manylinux1_x86_64.manylinux_2_17_x86_64.manylinux2014_x86_64.whl", hash = "sha256:47f5ca56c430230d3b581dfdbc54eb3ad8b0406dcc9afdd978da2e662c71d370"},
{file = "murmurhash-1.0.9-cp311-cp311-win_amd64.whl", hash = "sha256:660ae41fc6609abc05130543011a45b33ca5d8318ae5c70e66bbd351ca936063"},
{file = "murmurhash-1.0.9-cp36-cp36m-manylinux_2_17_aarch64.manylinux2014_aarch64.whl", hash = "sha256:01137d688a6b259bde642513506b062364ea4e1609f886d9bd095c3ae6da0b94"},
{file = "murmurhash-1.0.9-cp36-cp36m-manylinux_2_5_x86_64.manylinux1_x86_64.manylinux_2_17_x86_64.manylinux2014_x86_64.whl", hash = "sha256:1b70bbf55d89713873a35bd4002bc231d38e530e1051d57ca5d15f96c01fd778"},
{file = "murmurhash-1.0.9-cp36-cp36m-win_amd64.whl", hash = "sha256:3e802fa5b0e618ee99e8c114ce99fc91677f14e9de6e18b945d91323a93c84e8"},
{file = "murmurhash-1.0.9-cp37-cp37m-macosx_10_9_x86_64.whl", hash = "sha256:213d0248e586082e1cab6157d9945b846fd2b6be34357ad5ea0d03a1931d82ba"},
{file = "murmurhash-1.0.9-cp37-cp37m-manylinux_2_17_aarch64.manylinux2014_aarch64.whl", hash = "sha256:94b89d02aeab5e6bad5056f9d08df03ac7cfe06e61ff4b6340feb227fda80ce8"},
{file = "murmurhash-1.0.9-cp37-cp37m-manylinux_2_5_x86_64.manylinux1_x86_64.manylinux_2_17_x86_64.manylinux2014_x86_64.whl", hash = "sha256:0c2e2ee2d91a87952fe0f80212e86119aa1fd7681f03e6c99b279e50790dc2b3"},
{file = "murmurhash-1.0.9-cp37-cp37m-win_amd64.whl", hash = "sha256:8c3d69fb649c77c74a55624ebf7a0df3c81629e6ea6e80048134f015da57b2ea"},
{file = "murmurhash-1.0.9-cp38-cp38-macosx_10_9_x86_64.whl", hash = "sha256:ab78675510f83e7a3c6bd0abdc448a9a2b0b385b0d7ee766cbbfc5cc278a3042"},
{file = "murmurhash-1.0.9-cp38-cp38-macosx_11_0_arm64.whl", hash = "sha256:0ac5530c250d2b0073ed058555847c8d88d2d00229e483d45658c13b32398523"},
{file = "murmurhash-1.0.9-cp38-cp38-manylinux_2_17_aarch64.manylinux2014_aarch64.whl", hash = "sha256:69157e8fa6b25c4383645227069f6a1f8738d32ed2a83558961019ca3ebef56a"},
{file = "murmurhash-1.0.9-cp38-cp38-manylinux_2_5_x86_64.manylinux1_x86_64.manylinux_2_17_x86_64.manylinux2014_x86_64.whl", hash = "sha256:2aebe2ae016525a662ff772b72a2c9244a673e3215fcd49897f494258b96f3e7"},
{file = "murmurhash-1.0.9-cp38-cp38-win_amd64.whl", hash = "sha256:a5952f9c18a717fa17579e27f57bfa619299546011a8378a8f73e14eece332f6"},
{file = "murmurhash-1.0.9-cp39-cp39-macosx_10_9_x86_64.whl", hash = "sha256:ef79202feeac68e83971239169a05fa6514ecc2815ce04c8302076d267870f6e"},
{file = "murmurhash-1.0.9-cp39-cp39-macosx_11_0_arm64.whl", hash = "sha256:799fcbca5693ad6a40f565ae6b8e9718e5875a63deddf343825c0f31c32348fa"},
{file = "murmurhash-1.0.9-cp39-cp39-manylinux_2_17_aarch64.manylinux2014_aarch64.whl", hash = "sha256:f9b995bc82eaf9223e045210207b8878fdfe099a788dd8abd708d9ee58459a9d"},
{file = "murmurhash-1.0.9-cp39-cp39-manylinux_2_5_x86_64.manylinux1_x86_64.manylinux_2_17_x86_64.manylinux2014_x86_64.whl", hash = "sha256:b129e1c5ebd772e6ff5ef925bcce695df13169bd885337e6074b923ab6edcfc8"},
{file = "murmurhash-1.0.9-cp39-cp39-win_amd64.whl", hash = "sha256:379bf6b414bd27dd36772dd1570565a7d69918e980457370838bd514df0d91e9"},
{file = "murmurhash-1.0.9.tar.gz", hash = "sha256:fe7a38cb0d3d87c14ec9dddc4932ffe2dbc77d75469ab80fd5014689b0e07b58"},
]
mypy = [
{file = "mypy-0.971-cp310-cp310-macosx_10_9_universal2.whl", hash = "sha256:f2899a3cbd394da157194f913a931edfd4be5f274a88041c9dc2d9cdcb1c315c"},
{file = "mypy-0.971-cp310-cp310-macosx_10_9_x86_64.whl", hash = "sha256:98e02d56ebe93981c41211c05adb630d1d26c14195d04d95e49cd97dbc046dc5"},
{file = "mypy-0.971-cp310-cp310-macosx_11_0_arm64.whl", hash = "sha256:19830b7dba7d5356d3e26e2427a2ec91c994cd92d983142cbd025ebe81d69cf3"},
{file = "mypy-0.971-cp310-cp310-manylinux_2_5_x86_64.manylinux1_x86_64.manylinux_2_12_x86_64.manylinux2010_x86_64.whl", hash = "sha256:02ef476f6dcb86e6f502ae39a16b93285fef97e7f1ff22932b657d1ef1f28655"},
{file = "mypy-0.971-cp310-cp310-win_amd64.whl", hash = "sha256:25c5750ba5609a0c7550b73a33deb314ecfb559c350bb050b655505e8aed4103"},
{file = "mypy-0.971-cp36-cp36m-macosx_10_9_x86_64.whl", hash = "sha256:d3348e7eb2eea2472db611486846742d5d52d1290576de99d59edeb7cd4a42ca"},
{file = "mypy-0.971-cp36-cp36m-manylinux_2_5_x86_64.manylinux1_x86_64.manylinux_2_12_x86_64.manylinux2010_x86_64.whl", hash = "sha256:3fa7a477b9900be9b7dd4bab30a12759e5abe9586574ceb944bc29cddf8f0417"},
{file = "mypy-0.971-cp36-cp36m-win_amd64.whl", hash = "sha256:2ad53cf9c3adc43cf3bea0a7d01a2f2e86db9fe7596dfecb4496a5dda63cbb09"},
{file = "mypy-0.971-cp37-cp37m-macosx_10_9_x86_64.whl", hash = "sha256:855048b6feb6dfe09d3353466004490b1872887150c5bb5caad7838b57328cc8"},
{file = "mypy-0.971-cp37-cp37m-manylinux_2_5_x86_64.manylinux1_x86_64.manylinux_2_12_x86_64.manylinux2010_x86_64.whl", hash = "sha256:23488a14a83bca6e54402c2e6435467a4138785df93ec85aeff64c6170077fb0"},
{file = "mypy-0.971-cp37-cp37m-win_amd64.whl", hash = "sha256:4b21e5b1a70dfb972490035128f305c39bc4bc253f34e96a4adf9127cf943eb2"},
{file = "mypy-0.971-cp38-cp38-macosx_10_9_universal2.whl", hash = "sha256:9796a2ba7b4b538649caa5cecd398d873f4022ed2333ffde58eaf604c4d2cb27"},
{file = "mypy-0.971-cp38-cp38-macosx_10_9_x86_64.whl", hash = "sha256:5a361d92635ad4ada1b1b2d3630fc2f53f2127d51cf2def9db83cba32e47c856"},
{file = "mypy-0.971-cp38-cp38-macosx_11_0_arm64.whl", hash = "sha256:b793b899f7cf563b1e7044a5c97361196b938e92f0a4343a5d27966a53d2ec71"},
{file = "mypy-0.971-cp38-cp38-manylinux_2_5_x86_64.manylinux1_x86_64.manylinux_2_12_x86_64.manylinux2010_x86_64.whl", hash = "sha256:d1ea5d12c8e2d266b5fb8c7a5d2e9c0219fedfeb493b7ed60cd350322384ac27"},
{file = "mypy-0.971-cp38-cp38-win_amd64.whl", hash = "sha256:23c7ff43fff4b0df93a186581885c8512bc50fc4d4910e0f838e35d6bb6b5e58"},
{file = "mypy-0.971-cp39-cp39-macosx_10_9_universal2.whl", hash = "sha256:1f7656b69974a6933e987ee8ffb951d836272d6c0f81d727f1d0e2696074d9e6"},
{file = "mypy-0.971-cp39-cp39-macosx_10_9_x86_64.whl", hash = "sha256:d2022bfadb7a5c2ef410d6a7c9763188afdb7f3533f22a0a32be10d571ee4bbe"},
{file = "mypy-0.971-cp39-cp39-macosx_11_0_arm64.whl", hash = "sha256:ef943c72a786b0f8d90fd76e9b39ce81fb7171172daf84bf43eaf937e9f220a9"},
{file = "mypy-0.971-cp39-cp39-manylinux_2_5_x86_64.manylinux1_x86_64.manylinux_2_12_x86_64.manylinux2010_x86_64.whl", hash = "sha256:d744f72eb39f69312bc6c2abf8ff6656973120e2eb3f3ec4f758ed47e414a4bf"},
{file = "mypy-0.971-cp39-cp39-win_amd64.whl", hash = "sha256:77a514ea15d3007d33a9e2157b0ba9c267496acf12a7f2b9b9f8446337aac5b0"},
{file = "mypy-0.971-py3-none-any.whl", hash = "sha256:0d054ef16b071149917085f51f89555a576e2618d5d9dd70bd6eea6410af3ac9"},
{file = "mypy-0.971.tar.gz", hash = "sha256:40b0f21484238269ae6a57200c807d80debc6459d444c0489a102d7c6a75fa56"},
]
mypy-extensions = [
{file = "mypy_extensions-0.4.3-py2.py3-none-any.whl", hash = "sha256:090fedd75945a69ae91ce1303b5824f428daf5a028d2f6ab8a299250a846f15d"},
{file = "mypy_extensions-0.4.3.tar.gz", hash = "sha256:2d82818f5bb3e369420cb3c4060a7970edba416647068eb4c5343488a6c604a8"},
]
nbclassic = [
{file = "nbclassic-0.4.8-py3-none-any.whl", hash = "sha256:cbf05df5842b420d5cece0143462380ea9d308ff57c2dc0eb4d6e035b18fbfb3"},
{file = "nbclassic-0.4.8.tar.gz", hash = "sha256:c74d8a500f8e058d46b576a41e5bc640711e1032cf7541dde5f73ea49497e283"},
]
nbclient = [
{file = "nbclient-0.7.0-py3-none-any.whl", hash = "sha256:434c91385cf3e53084185334d675a0d33c615108b391e260915d1aa8e86661b8"},
{file = "nbclient-0.7.0.tar.gz", hash = "sha256:a1d844efd6da9bc39d2209bf996dbd8e07bf0f36b796edfabaa8f8a9ab77c3aa"},
]
nbconvert = [
{file = "nbconvert-7.0.0rc3-py3-none-any.whl", hash = "sha256:6774a0bf293d76fa2e886255812d953b750059330c3d7305ad271c02590f1957"},
{file = "nbconvert-7.0.0rc3.tar.gz", hash = "sha256:efb9aae47dad2eae02dd9e7d2cc8add6b7e8f15c6548c0de3363f6d2f8a39146"},
]
nbformat = [
{file = "nbformat-5.7.0-py3-none-any.whl", hash = "sha256:1b05ec2c552c2f1adc745f4eddce1eac8ca9ffd59bb9fd859e827eaa031319f9"},
{file = "nbformat-5.7.0.tar.gz", hash = "sha256:1d4760c15c1a04269ef5caf375be8b98dd2f696e5eb9e603ec2bf091f9b0d3f3"},
]
nbsphinx = [
{file = "nbsphinx-0.8.10-py3-none-any.whl", hash = "sha256:6076fba58020420927899362579f12779a43091eb238f414519ec25b4a8cfc96"},
{file = "nbsphinx-0.8.10.tar.gz", hash = "sha256:a8d68046f8aab916e2940b9b3819bd3ef9ddce868aa38845ea366645cabb6254"},
]
nest-asyncio = [
{file = "nest_asyncio-1.5.6-py3-none-any.whl", hash = "sha256:b9a953fb40dceaa587d109609098db21900182b16440652454a146cffb06e8b8"},
{file = "nest_asyncio-1.5.6.tar.gz", hash = "sha256:d267cc1ff794403f7df692964d1d2a3fa9418ffea2a3f6859a439ff482fef290"},
]
networkx = [
{file = "networkx-2.8.8-py3-none-any.whl", hash = "sha256:e435dfa75b1d7195c7b8378c3859f0445cd88c6b0375c181ed66823a9ceb7524"},
{file = "networkx-2.8.8.tar.gz", hash = "sha256:230d388117af870fce5647a3c52401fcf753e94720e6ea6b4197a5355648885e"},
]
notebook = [
{file = "notebook-6.5.2-py3-none-any.whl", hash = "sha256:e04f9018ceb86e4fa841e92ea8fb214f8d23c1cedfde530cc96f92446924f0e4"},
{file = "notebook-6.5.2.tar.gz", hash = "sha256:c1897e5317e225fc78b45549a6ab4b668e4c996fd03a04e938fe5e7af2bfffd0"},
]
notebook-shim = [
{file = "notebook_shim-0.2.2-py3-none-any.whl", hash = "sha256:9c6c30f74c4fbea6fce55c1be58e7fd0409b1c681b075dcedceb005db5026949"},
{file = "notebook_shim-0.2.2.tar.gz", hash = "sha256:090e0baf9a5582ff59b607af523ca2db68ff216da0c69956b62cab2ef4fc9c3f"},
]
numba = [
{file = "numba-0.53.1-cp36-cp36m-macosx_10_14_x86_64.whl", hash = "sha256:b23de6b6837c132087d06b8b92d343edb54b885873b824a037967fbd5272ebb7"},
{file = "numba-0.53.1-cp36-cp36m-manylinux2014_i686.whl", hash = "sha256:6545b9e9b0c112b81de7f88a3c787469a357eeff8211e90b8f45ee243d521cc2"},
{file = "numba-0.53.1-cp36-cp36m-manylinux2014_x86_64.whl", hash = "sha256:8fa5c963a43855050a868106a87cd614f3c3f459951c8fc468aec263ef80d063"},
{file = "numba-0.53.1-cp36-cp36m-win32.whl", hash = "sha256:aaa6ebf56afb0b6752607b9f3bf39e99b0efe3c1fa6849698373925ee6838fd7"},
{file = "numba-0.53.1-cp36-cp36m-win_amd64.whl", hash = "sha256:b08b3df38aab769df79ed948d70f0a54a3cdda49d58af65369235c204ec5d0f3"},
{file = "numba-0.53.1-cp37-cp37m-macosx_10_14_x86_64.whl", hash = "sha256:bf5c463b62d013e3f709cc8277adf2f4f4d8cc6757293e29c6db121b77e6b760"},
{file = "numba-0.53.1-cp37-cp37m-manylinux2014_i686.whl", hash = "sha256:74df02e73155f669e60dcff07c4eef4a03dbf5b388594db74142ab40914fe4f5"},
{file = "numba-0.53.1-cp37-cp37m-manylinux2014_x86_64.whl", hash = "sha256:5165709bf62f28667e10b9afe6df0ce1037722adab92d620f59cb8bbb8104641"},
{file = "numba-0.53.1-cp37-cp37m-win32.whl", hash = "sha256:2e96958ed2ca7e6d967b2ce29c8da0ca47117e1de28e7c30b2c8c57386506fa5"},
{file = "numba-0.53.1-cp37-cp37m-win_amd64.whl", hash = "sha256:276f9d1674fe08d95872d81b97267c6b39dd830f05eb992608cbede50fcf48a9"},
{file = "numba-0.53.1-cp38-cp38-macosx_10_14_x86_64.whl", hash = "sha256:4c4c8d102512ae472af52c76ad9522da718c392cb59f4cd6785d711fa5051a2a"},
{file = "numba-0.53.1-cp38-cp38-manylinux2014_i686.whl", hash = "sha256:691adbeac17dbdf6ed7c759e9e33a522351f07d2065fe926b264b6b2c15fd89b"},
{file = "numba-0.53.1-cp38-cp38-manylinux2014_x86_64.whl", hash = "sha256:94aab3e0e9e8754116325ce026e1b29ae72443c706a3104cf7f3368dc3012912"},
{file = "numba-0.53.1-cp38-cp38-win32.whl", hash = "sha256:aabeec89bb3e3162136eea492cea7ee8882ddcda2201f05caecdece192c40896"},
{file = "numba-0.53.1-cp38-cp38-win_amd64.whl", hash = "sha256:1895ebd256819ff22256cd6fe24aa8f7470b18acc73e7917e8e93c9ac7f565dc"},
{file = "numba-0.53.1-cp39-cp39-macosx_10_14_x86_64.whl", hash = "sha256:224d197a46a9e602a16780d87636e199e2cdef528caef084a4d8fd8909c2455c"},
{file = "numba-0.53.1-cp39-cp39-manylinux2014_i686.whl", hash = "sha256:aba7acb247a09d7f12bd17a8e28bbb04e8adef9fc20ca29835d03b7894e1b49f"},
{file = "numba-0.53.1-cp39-cp39-manylinux2014_x86_64.whl", hash = "sha256:bd126f1f49da6fc4b3169cf1d96f1c3b3f84a7badd11fe22da344b923a00e744"},
{file = "numba-0.53.1-cp39-cp39-win32.whl", hash = "sha256:0ef9d1f347b251282ae46e5a5033600aa2d0dfa1ee8c16cb8137b8cd6f79e221"},
{file = "numba-0.53.1-cp39-cp39-win_amd64.whl", hash = "sha256:17146885cbe4e89c9d4abd4fcb8886dee06d4591943dc4343500c36ce2fcfa69"},
{file = "numba-0.53.1.tar.gz", hash = "sha256:9cd4e5216acdc66c4e9dab2dfd22ddb5bef151185c070d4a3cd8e78638aff5b0"},
]
numpy = [
{file = "numpy-1.23.5-cp310-cp310-macosx_10_9_x86_64.whl", hash = "sha256:9c88793f78fca17da0145455f0d7826bcb9f37da4764af27ac945488116efe63"},
{file = "numpy-1.23.5-cp310-cp310-macosx_11_0_arm64.whl", hash = "sha256:e9f4c4e51567b616be64e05d517c79a8a22f3606499941d97bb76f2ca59f982d"},
{file = "numpy-1.23.5-cp310-cp310-manylinux_2_17_aarch64.manylinux2014_aarch64.whl", hash = "sha256:7903ba8ab592b82014713c491f6c5d3a1cde5b4a3bf116404e08f5b52f6daf43"},
{file = "numpy-1.23.5-cp310-cp310-manylinux_2_17_x86_64.manylinux2014_x86_64.whl", hash = "sha256:5e05b1c973a9f858c74367553e236f287e749465f773328c8ef31abe18f691e1"},
{file = "numpy-1.23.5-cp310-cp310-win32.whl", hash = "sha256:522e26bbf6377e4d76403826ed689c295b0b238f46c28a7251ab94716da0b280"},
{file = "numpy-1.23.5-cp310-cp310-win_amd64.whl", hash = "sha256:dbee87b469018961d1ad79b1a5d50c0ae850000b639bcb1b694e9981083243b6"},
{file = "numpy-1.23.5-cp311-cp311-macosx_10_9_x86_64.whl", hash = "sha256:ce571367b6dfe60af04e04a1834ca2dc5f46004ac1cc756fb95319f64c095a96"},
{file = "numpy-1.23.5-cp311-cp311-macosx_11_0_arm64.whl", hash = "sha256:56e454c7833e94ec9769fa0f86e6ff8e42ee38ce0ce1fa4cbb747ea7e06d56aa"},
{file = "numpy-1.23.5-cp311-cp311-manylinux_2_17_aarch64.manylinux2014_aarch64.whl", hash = "sha256:5039f55555e1eab31124a5768898c9e22c25a65c1e0037f4d7c495a45778c9f2"},
{file = "numpy-1.23.5-cp311-cp311-manylinux_2_17_x86_64.manylinux2014_x86_64.whl", hash = "sha256:58f545efd1108e647604a1b5aa809591ccd2540f468a880bedb97247e72db387"},
{file = "numpy-1.23.5-cp311-cp311-win32.whl", hash = "sha256:b2a9ab7c279c91974f756c84c365a669a887efa287365a8e2c418f8b3ba73fb0"},
{file = "numpy-1.23.5-cp311-cp311-win_amd64.whl", hash = "sha256:0cbe9848fad08baf71de1a39e12d1b6310f1d5b2d0ea4de051058e6e1076852d"},
{file = "numpy-1.23.5-cp38-cp38-macosx_10_9_x86_64.whl", hash = "sha256:f063b69b090c9d918f9df0a12116029e274daf0181df392839661c4c7ec9018a"},
{file = "numpy-1.23.5-cp38-cp38-macosx_11_0_arm64.whl", hash = "sha256:0aaee12d8883552fadfc41e96b4c82ee7d794949e2a7c3b3a7201e968c7ecab9"},
{file = "numpy-1.23.5-cp38-cp38-manylinux_2_17_aarch64.manylinux2014_aarch64.whl", hash = "sha256:92c8c1e89a1f5028a4c6d9e3ccbe311b6ba53694811269b992c0b224269e2398"},
{file = "numpy-1.23.5-cp38-cp38-manylinux_2_17_x86_64.manylinux2014_x86_64.whl", hash = "sha256:d208a0f8729f3fb790ed18a003f3a57895b989b40ea4dce4717e9cf4af62c6bb"},
{file = "numpy-1.23.5-cp38-cp38-win32.whl", hash = "sha256:06005a2ef6014e9956c09ba07654f9837d9e26696a0470e42beedadb78c11b07"},
{file = "numpy-1.23.5-cp38-cp38-win_amd64.whl", hash = "sha256:ca51fcfcc5f9354c45f400059e88bc09215fb71a48d3768fb80e357f3b457e1e"},
{file = "numpy-1.23.5-cp39-cp39-macosx_10_9_x86_64.whl", hash = "sha256:8969bfd28e85c81f3f94eb4a66bc2cf1dbdc5c18efc320af34bffc54d6b1e38f"},
{file = "numpy-1.23.5-cp39-cp39-macosx_11_0_arm64.whl", hash = "sha256:a7ac231a08bb37f852849bbb387a20a57574a97cfc7b6cabb488a4fc8be176de"},
{file = "numpy-1.23.5-cp39-cp39-manylinux_2_17_aarch64.manylinux2014_aarch64.whl", hash = "sha256:bf837dc63ba5c06dc8797c398db1e223a466c7ece27a1f7b5232ba3466aafe3d"},
{file = "numpy-1.23.5-cp39-cp39-manylinux_2_17_x86_64.manylinux2014_x86_64.whl", hash = "sha256:33161613d2269025873025b33e879825ec7b1d831317e68f4f2f0f84ed14c719"},
{file = "numpy-1.23.5-cp39-cp39-win32.whl", hash = "sha256:af1da88f6bc3d2338ebbf0e22fe487821ea4d8e89053e25fa59d1d79786e7481"},
{file = "numpy-1.23.5-cp39-cp39-win_amd64.whl", hash = "sha256:09b7847f7e83ca37c6e627682f145856de331049013853f344f37b0c9690e3df"},
{file = "numpy-1.23.5-pp38-pypy38_pp73-macosx_10_9_x86_64.whl", hash = "sha256:abdde9f795cf292fb9651ed48185503a2ff29be87770c3b8e2a14b0cd7aa16f8"},
{file = "numpy-1.23.5-pp38-pypy38_pp73-manylinux_2_17_x86_64.manylinux2014_x86_64.whl", hash = "sha256:f9a909a8bae284d46bbfdefbdd4a262ba19d3bc9921b1e76126b1d21c3c34135"},
{file = "numpy-1.23.5-pp38-pypy38_pp73-win_amd64.whl", hash = "sha256:01dd17cbb340bf0fc23981e52e1d18a9d4050792e8fb8363cecbf066a84b827d"},
{file = "numpy-1.23.5.tar.gz", hash = "sha256:1b1766d6f397c18153d40015ddfc79ddb715cabadc04d2d228d4e5a8bc4ded1a"},
]
oauthlib = [
{file = "oauthlib-3.2.2-py3-none-any.whl", hash = "sha256:8139f29aac13e25d502680e9e19963e83f16838d48a0d71c287fe40e7067fbca"},
{file = "oauthlib-3.2.2.tar.gz", hash = "sha256:9859c40929662bec5d64f34d01c99e093149682a3f38915dc0655d5a633dd918"},
]
opt-einsum = [
{file = "opt_einsum-3.3.0-py3-none-any.whl", hash = "sha256:2455e59e3947d3c275477df7f5205b30635e266fe6dc300e3d9f9646bfcea147"},
{file = "opt_einsum-3.3.0.tar.gz", hash = "sha256:59f6475f77bbc37dcf7cd748519c0ec60722e91e63ca114e68821c0c54a46549"},
]
packaging = [
{file = "packaging-21.3-py3-none-any.whl", hash = "sha256:ef103e05f519cdc783ae24ea4e2e0f508a9c99b2d4969652eed6a2e1ea5bd522"},
{file = "packaging-21.3.tar.gz", hash = "sha256:dd47c42927d89ab911e606518907cc2d3a1f38bbd026385970643f9c5b8ecfeb"},
]
pandas = [
{file = "pandas-1.5.2-cp310-cp310-macosx_10_9_universal2.whl", hash = "sha256:e9dbacd22555c2d47f262ef96bb4e30880e5956169741400af8b306bbb24a273"},
{file = "pandas-1.5.2-cp310-cp310-macosx_10_9_x86_64.whl", hash = "sha256:e2b83abd292194f350bb04e188f9379d36b8dfac24dd445d5c87575f3beaf789"},
{file = "pandas-1.5.2-cp310-cp310-macosx_11_0_arm64.whl", hash = "sha256:2552bffc808641c6eb471e55aa6899fa002ac94e4eebfa9ec058649122db5824"},
{file = "pandas-1.5.2-cp310-cp310-manylinux_2_17_aarch64.manylinux2014_aarch64.whl", hash = "sha256:1fc87eac0541a7d24648a001d553406f4256e744d92df1df8ebe41829a915028"},
{file = "pandas-1.5.2-cp310-cp310-manylinux_2_17_x86_64.manylinux2014_x86_64.whl", hash = "sha256:d0d8fd58df5d17ddb8c72a5075d87cd80d71b542571b5f78178fb067fa4e9c72"},
{file = "pandas-1.5.2-cp310-cp310-win_amd64.whl", hash = "sha256:4aed257c7484d01c9a194d9a94758b37d3d751849c05a0050c087a358c41ad1f"},
{file = "pandas-1.5.2-cp311-cp311-macosx_10_9_universal2.whl", hash = "sha256:375262829c8c700c3e7cbb336810b94367b9c4889818bbd910d0ecb4e45dc261"},
{file = "pandas-1.5.2-cp311-cp311-macosx_10_9_x86_64.whl", hash = "sha256:cc3cd122bea268998b79adebbb8343b735a5511ec14efb70a39e7acbc11ccbdc"},
{file = "pandas-1.5.2-cp311-cp311-macosx_11_0_arm64.whl", hash = "sha256:b4f5a82afa4f1ff482ab8ded2ae8a453a2cdfde2001567b3ca24a4c5c5ca0db3"},
{file = "pandas-1.5.2-cp311-cp311-manylinux_2_17_aarch64.manylinux2014_aarch64.whl", hash = "sha256:8092a368d3eb7116e270525329a3e5c15ae796ccdf7ccb17839a73b4f5084a39"},
{file = "pandas-1.5.2-cp311-cp311-manylinux_2_17_x86_64.manylinux2014_x86_64.whl", hash = "sha256:f6257b314fc14958f8122779e5a1557517b0f8e500cfb2bd53fa1f75a8ad0af2"},
{file = "pandas-1.5.2-cp311-cp311-win_amd64.whl", hash = "sha256:82ae615826da838a8e5d4d630eb70c993ab8636f0eff13cb28aafc4291b632b5"},
{file = "pandas-1.5.2-cp38-cp38-macosx_10_9_universal2.whl", hash = "sha256:457d8c3d42314ff47cc2d6c54f8fc0d23954b47977b2caed09cd9635cb75388b"},
{file = "pandas-1.5.2-cp38-cp38-macosx_10_9_x86_64.whl", hash = "sha256:c009a92e81ce836212ce7aa98b219db7961a8b95999b97af566b8dc8c33e9519"},
{file = "pandas-1.5.2-cp38-cp38-macosx_11_0_arm64.whl", hash = "sha256:71f510b0efe1629bf2f7c0eadb1ff0b9cf611e87b73cd017e6b7d6adb40e2b3a"},
{file = "pandas-1.5.2-cp38-cp38-manylinux_2_17_aarch64.manylinux2014_aarch64.whl", hash = "sha256:a40dd1e9f22e01e66ed534d6a965eb99546b41d4d52dbdb66565608fde48203f"},
{file = "pandas-1.5.2-cp38-cp38-manylinux_2_17_x86_64.manylinux2014_x86_64.whl", hash = "sha256:5ae7e989f12628f41e804847a8cc2943d362440132919a69429d4dea1f164da0"},
{file = "pandas-1.5.2-cp38-cp38-win32.whl", hash = "sha256:530948945e7b6c95e6fa7aa4be2be25764af53fba93fe76d912e35d1c9ee46f5"},
{file = "pandas-1.5.2-cp38-cp38-win_amd64.whl", hash = "sha256:73f219fdc1777cf3c45fde7f0708732ec6950dfc598afc50588d0d285fddaefc"},
{file = "pandas-1.5.2-cp39-cp39-macosx_10_9_universal2.whl", hash = "sha256:9608000a5a45f663be6af5c70c3cbe634fa19243e720eb380c0d378666bc7702"},
{file = "pandas-1.5.2-cp39-cp39-macosx_10_9_x86_64.whl", hash = "sha256:315e19a3e5c2ab47a67467fc0362cb36c7c60a93b6457f675d7d9615edad2ebe"},
{file = "pandas-1.5.2-cp39-cp39-macosx_11_0_arm64.whl", hash = "sha256:e18bc3764cbb5e118be139b3b611bc3fbc5d3be42a7e827d1096f46087b395eb"},
{file = "pandas-1.5.2-cp39-cp39-manylinux_2_17_aarch64.manylinux2014_aarch64.whl", hash = "sha256:0183cb04a057cc38fde5244909fca9826d5d57c4a5b7390c0cc3fa7acd9fa883"},
{file = "pandas-1.5.2-cp39-cp39-manylinux_2_17_x86_64.manylinux2014_x86_64.whl", hash = "sha256:344021ed3e639e017b452aa8f5f6bf38a8806f5852e217a7594417fb9bbfa00e"},
{file = "pandas-1.5.2-cp39-cp39-win32.whl", hash = "sha256:e7469271497960b6a781eaa930cba8af400dd59b62ec9ca2f4d31a19f2f91090"},
{file = "pandas-1.5.2-cp39-cp39-win_amd64.whl", hash = "sha256:c218796d59d5abd8780170c937b812c9637e84c32f8271bbf9845970f8c1351f"},
{file = "pandas-1.5.2.tar.gz", hash = "sha256:220b98d15cee0b2cd839a6358bd1f273d0356bf964c1a1aeb32d47db0215488b"},
]
pandocfilters = [
{file = "pandocfilters-1.5.0-py2.py3-none-any.whl", hash = "sha256:33aae3f25fd1a026079f5d27bdd52496f0e0803b3469282162bafdcbdf6ef14f"},
{file = "pandocfilters-1.5.0.tar.gz", hash = "sha256:0b679503337d233b4339a817bfc8c50064e2eff681314376a47cb582305a7a38"},
]
parso = [
{file = "parso-0.8.3-py2.py3-none-any.whl", hash = "sha256:c001d4636cd3aecdaf33cbb40aebb59b094be2a74c556778ef5576c175e19e75"},
{file = "parso-0.8.3.tar.gz", hash = "sha256:8c07be290bb59f03588915921e29e8a50002acaf2cdc5fa0e0114f91709fafa0"},
]
partd = [
{file = "partd-1.3.0-py3-none-any.whl", hash = "sha256:6393a0c898a0ad945728e34e52de0df3ae295c5aff2e2926ba7cc3c60a734a15"},
{file = "partd-1.3.0.tar.gz", hash = "sha256:ce91abcdc6178d668bcaa431791a5a917d902341cb193f543fe445d494660485"},
]
pastel = [
{file = "pastel-0.2.1-py2.py3-none-any.whl", hash = "sha256:4349225fcdf6c2bb34d483e523475de5bb04a5c10ef711263452cb37d7dd4364"},
{file = "pastel-0.2.1.tar.gz", hash = "sha256:e6581ac04e973cac858828c6202c1e1e81fee1dc7de7683f3e1ffe0bfd8a573d"},
]
pathos = [
{file = "pathos-0.2.9-py2-none-any.whl", hash = "sha256:6a6ddb514ce2719f63fb88d5ec4f4490e436b636b54f1102d952c9f7c52f18e2"},
{file = "pathos-0.2.9-py3-none-any.whl", hash = "sha256:1c44373d8692897d5d15a8aa3b3a442ddc0814c5e848f4ff0ded5491f34b1dac"},
{file = "pathos-0.2.9.tar.gz", hash = "sha256:a8dbddcd3d9af32ada7c6dc088d845588c513a29a0ba19ab9f64c5cd83692934"},
]
pathspec = [
{file = "pathspec-0.10.2-py3-none-any.whl", hash = "sha256:88c2606f2c1e818b978540f73ecc908e13999c6c3a383daf3705652ae79807a5"},
{file = "pathspec-0.10.2.tar.gz", hash = "sha256:8f6bf73e5758fd365ef5d58ce09ac7c27d2833a8d7da51712eac6e27e35141b0"},
]
pathy = [
{file = "pathy-0.10.0-py3-none-any.whl", hash = "sha256:205d6da31c47334227d364ad8c13b848eb3254701553eb179f3faaec3abd0204"},
{file = "pathy-0.10.0.tar.gz", hash = "sha256:939822c326913cd0ab48f5928c8c40afcc59c5b093eac328348dd16700ab49e9"},
]
patsy = [
{file = "patsy-0.5.3-py2.py3-none-any.whl", hash = "sha256:7eb5349754ed6aa982af81f636479b1b8db9d5b1a6e957a6016ec0534b5c86b7"},
{file = "patsy-0.5.3.tar.gz", hash = "sha256:bdc18001875e319bc91c812c1eb6a10be4bb13cb81eb763f466179dca3b67277"},
]
pexpect = [
{file = "pexpect-4.8.0-py2.py3-none-any.whl", hash = "sha256:0b48a55dcb3c05f3329815901ea4fc1537514d6ba867a152b581d69ae3710937"},
{file = "pexpect-4.8.0.tar.gz", hash = "sha256:fc65a43959d153d0114afe13997d439c22823a27cefceb5ff35c2178c6784c0c"},
]
pickleshare = [
{file = "pickleshare-0.7.5-py2.py3-none-any.whl", hash = "sha256:9649af414d74d4df115d5d718f82acb59c9d418196b7b4290ed47a12ce62df56"},
{file = "pickleshare-0.7.5.tar.gz", hash = "sha256:87683d47965c1da65cdacaf31c8441d12b8044cdec9aca500cd78fc2c683afca"},
]
Pillow = [
{file = "Pillow-9.3.0-1-cp37-cp37m-win32.whl", hash = "sha256:e6ea6b856a74d560d9326c0f5895ef8050126acfdc7ca08ad703eb0081e82b74"},
{file = "Pillow-9.3.0-1-cp37-cp37m-win_amd64.whl", hash = "sha256:32a44128c4bdca7f31de5be641187367fe2a450ad83b833ef78910397db491aa"},
{file = "Pillow-9.3.0-cp310-cp310-macosx_10_10_x86_64.whl", hash = "sha256:0b7257127d646ff8676ec8a15520013a698d1fdc48bc2a79ba4e53df792526f2"},
{file = "Pillow-9.3.0-cp310-cp310-macosx_11_0_arm64.whl", hash = "sha256:b90f7616ea170e92820775ed47e136208e04c967271c9ef615b6fbd08d9af0e3"},
{file = "Pillow-9.3.0-cp310-cp310-manylinux_2_17_aarch64.manylinux2014_aarch64.whl", hash = "sha256:68943d632f1f9e3dce98908e873b3a090f6cba1cbb1b892a9e8d97c938871fbe"},
{file = "Pillow-9.3.0-cp310-cp310-manylinux_2_17_i686.manylinux2014_i686.whl", hash = "sha256:be55f8457cd1eac957af0c3f5ece7bc3f033f89b114ef30f710882717670b2a8"},
{file = "Pillow-9.3.0-cp310-cp310-manylinux_2_17_x86_64.manylinux2014_x86_64.whl", hash = "sha256:5d77adcd56a42d00cc1be30843d3426aa4e660cab4a61021dc84467123f7a00c"},
{file = "Pillow-9.3.0-cp310-cp310-manylinux_2_28_aarch64.whl", hash = "sha256:829f97c8e258593b9daa80638aee3789b7df9da5cf1336035016d76f03b8860c"},
{file = "Pillow-9.3.0-cp310-cp310-manylinux_2_28_x86_64.whl", hash = "sha256:801ec82e4188e935c7f5e22e006d01611d6b41661bba9fe45b60e7ac1a8f84de"},
{file = "Pillow-9.3.0-cp310-cp310-musllinux_1_1_x86_64.whl", hash = "sha256:871b72c3643e516db4ecf20efe735deb27fe30ca17800e661d769faab45a18d7"},
{file = "Pillow-9.3.0-cp310-cp310-win32.whl", hash = "sha256:655a83b0058ba47c7c52e4e2df5ecf484c1b0b0349805896dd350cbc416bdd91"},
{file = "Pillow-9.3.0-cp310-cp310-win_amd64.whl", hash = "sha256:9f47eabcd2ded7698106b05c2c338672d16a6f2a485e74481f524e2a23c2794b"},
{file = "Pillow-9.3.0-cp311-cp311-macosx_10_10_x86_64.whl", hash = "sha256:57751894f6618fd4308ed8e0c36c333e2f5469744c34729a27532b3db106ee20"},
{file = "Pillow-9.3.0-cp311-cp311-macosx_11_0_arm64.whl", hash = "sha256:7db8b751ad307d7cf238f02101e8e36a128a6cb199326e867d1398067381bff4"},
{file = "Pillow-9.3.0-cp311-cp311-manylinux_2_17_aarch64.manylinux2014_aarch64.whl", hash = "sha256:3033fbe1feb1b59394615a1cafaee85e49d01b51d54de0cbf6aa8e64182518a1"},
{file = "Pillow-9.3.0-cp311-cp311-manylinux_2_17_i686.manylinux2014_i686.whl", hash = "sha256:22b012ea2d065fd163ca096f4e37e47cd8b59cf4b0fd47bfca6abb93df70b34c"},
{file = "Pillow-9.3.0-cp311-cp311-manylinux_2_17_x86_64.manylinux2014_x86_64.whl", hash = "sha256:b9a65733d103311331875c1dca05cb4606997fd33d6acfed695b1232ba1df193"},
{file = "Pillow-9.3.0-cp311-cp311-manylinux_2_28_aarch64.whl", hash = "sha256:502526a2cbfa431d9fc2a079bdd9061a2397b842bb6bc4239bb176da00993812"},
{file = "Pillow-9.3.0-cp311-cp311-manylinux_2_28_x86_64.whl", hash = "sha256:90fb88843d3902fe7c9586d439d1e8c05258f41da473952aa8b328d8b907498c"},
{file = "Pillow-9.3.0-cp311-cp311-musllinux_1_1_x86_64.whl", hash = "sha256:89dca0ce00a2b49024df6325925555d406b14aa3efc2f752dbb5940c52c56b11"},
{file = "Pillow-9.3.0-cp311-cp311-win32.whl", hash = "sha256:3168434d303babf495d4ba58fc22d6604f6e2afb97adc6a423e917dab828939c"},
{file = "Pillow-9.3.0-cp311-cp311-win_amd64.whl", hash = "sha256:18498994b29e1cf86d505edcb7edbe814d133d2232d256db8c7a8ceb34d18cef"},
{file = "Pillow-9.3.0-cp37-cp37m-macosx_10_10_x86_64.whl", hash = "sha256:772a91fc0e03eaf922c63badeca75e91baa80fe2f5f87bdaed4280662aad25c9"},
{file = "Pillow-9.3.0-cp37-cp37m-manylinux_2_17_aarch64.manylinux2014_aarch64.whl", hash = "sha256:afa4107d1b306cdf8953edde0534562607fe8811b6c4d9a486298ad31de733b2"},
{file = "Pillow-9.3.0-cp37-cp37m-manylinux_2_17_i686.manylinux2014_i686.whl", hash = "sha256:b4012d06c846dc2b80651b120e2cdd787b013deb39c09f407727ba90015c684f"},
{file = "Pillow-9.3.0-cp37-cp37m-manylinux_2_17_x86_64.manylinux2014_x86_64.whl", hash = "sha256:77ec3e7be99629898c9a6d24a09de089fa5356ee408cdffffe62d67bb75fdd72"},
{file = "Pillow-9.3.0-cp37-cp37m-manylinux_2_28_aarch64.whl", hash = "sha256:6c738585d7a9961d8c2821a1eb3dcb978d14e238be3d70f0a706f7fa9316946b"},
{file = "Pillow-9.3.0-cp37-cp37m-manylinux_2_28_x86_64.whl", hash = "sha256:828989c45c245518065a110434246c44a56a8b2b2f6347d1409c787e6e4651ee"},
{file = "Pillow-9.3.0-cp37-cp37m-win32.whl", hash = "sha256:82409ffe29d70fd733ff3c1025a602abb3e67405d41b9403b00b01debc4c9a29"},
{file = "Pillow-9.3.0-cp37-cp37m-win_amd64.whl", hash = "sha256:41e0051336807468be450d52b8edd12ac60bebaa97fe10c8b660f116e50b30e4"},
{file = "Pillow-9.3.0-cp38-cp38-macosx_10_10_x86_64.whl", hash = "sha256:b03ae6f1a1878233ac620c98f3459f79fd77c7e3c2b20d460284e1fb370557d4"},
{file = "Pillow-9.3.0-cp38-cp38-macosx_11_0_arm64.whl", hash = "sha256:4390e9ce199fc1951fcfa65795f239a8a4944117b5935a9317fb320e7767b40f"},
{file = "Pillow-9.3.0-cp38-cp38-manylinux_2_17_aarch64.manylinux2014_aarch64.whl", hash = "sha256:40e1ce476a7804b0fb74bcfa80b0a2206ea6a882938eaba917f7a0f004b42502"},
{file = "Pillow-9.3.0-cp38-cp38-manylinux_2_17_i686.manylinux2014_i686.whl", hash = "sha256:a0a06a052c5f37b4ed81c613a455a81f9a3a69429b4fd7bb913c3fa98abefc20"},
{file = "Pillow-9.3.0-cp38-cp38-manylinux_2_17_x86_64.manylinux2014_x86_64.whl", hash = "sha256:03150abd92771742d4a8cd6f2fa6246d847dcd2e332a18d0c15cc75bf6703040"},
{file = "Pillow-9.3.0-cp38-cp38-manylinux_2_28_aarch64.whl", hash = "sha256:15c42fb9dea42465dfd902fb0ecf584b8848ceb28b41ee2b58f866411be33f07"},
{file = "Pillow-9.3.0-cp38-cp38-manylinux_2_28_x86_64.whl", hash = "sha256:51e0e543a33ed92db9f5ef69a0356e0b1a7a6b6a71b80df99f1d181ae5875636"},
{file = "Pillow-9.3.0-cp38-cp38-musllinux_1_1_x86_64.whl", hash = "sha256:3dd6caf940756101205dffc5367babf288a30043d35f80936f9bfb37f8355b32"},
{file = "Pillow-9.3.0-cp38-cp38-win32.whl", hash = "sha256:f1ff2ee69f10f13a9596480335f406dd1f70c3650349e2be67ca3139280cade0"},
{file = "Pillow-9.3.0-cp38-cp38-win_amd64.whl", hash = "sha256:276a5ca930c913f714e372b2591a22c4bd3b81a418c0f6635ba832daec1cbcfc"},
{file = "Pillow-9.3.0-cp39-cp39-macosx_10_10_x86_64.whl", hash = "sha256:73bd195e43f3fadecfc50c682f5055ec32ee2c933243cafbfdec69ab1aa87cad"},
{file = "Pillow-9.3.0-cp39-cp39-macosx_11_0_arm64.whl", hash = "sha256:1c7c8ae3864846fc95f4611c78129301e203aaa2af813b703c55d10cc1628535"},
{file = "Pillow-9.3.0-cp39-cp39-manylinux_2_17_aarch64.manylinux2014_aarch64.whl", hash = "sha256:2e0918e03aa0c72ea56edbb00d4d664294815aa11291a11504a377ea018330d3"},
{file = "Pillow-9.3.0-cp39-cp39-manylinux_2_17_i686.manylinux2014_i686.whl", hash = "sha256:b0915e734b33a474d76c28e07292f196cdf2a590a0d25bcc06e64e545f2d146c"},
{file = "Pillow-9.3.0-cp39-cp39-manylinux_2_17_x86_64.manylinux2014_x86_64.whl", hash = "sha256:af0372acb5d3598f36ec0914deed2a63f6bcdb7b606da04dc19a88d31bf0c05b"},
{file = "Pillow-9.3.0-cp39-cp39-manylinux_2_28_aarch64.whl", hash = "sha256:ad58d27a5b0262c0c19b47d54c5802db9b34d38bbf886665b626aff83c74bacd"},
{file = "Pillow-9.3.0-cp39-cp39-manylinux_2_28_x86_64.whl", hash = "sha256:97aabc5c50312afa5e0a2b07c17d4ac5e865b250986f8afe2b02d772567a380c"},
{file = "Pillow-9.3.0-cp39-cp39-musllinux_1_1_x86_64.whl", hash = "sha256:9aaa107275d8527e9d6e7670b64aabaaa36e5b6bd71a1015ddd21da0d4e06448"},
{file = "Pillow-9.3.0-cp39-cp39-win32.whl", hash = "sha256:bac18ab8d2d1e6b4ce25e3424f709aceef668347db8637c2296bcf41acb7cf48"},
{file = "Pillow-9.3.0-cp39-cp39-win_amd64.whl", hash = "sha256:b472b5ea442148d1c3e2209f20f1e0bb0eb556538690fa70b5e1f79fa0ba8dc2"},
{file = "Pillow-9.3.0-pp37-pypy37_pp73-macosx_10_10_x86_64.whl", hash = "sha256:ab388aaa3f6ce52ac1cb8e122c4bd46657c15905904b3120a6248b5b8b0bc228"},
{file = "Pillow-9.3.0-pp37-pypy37_pp73-manylinux_2_17_i686.manylinux2014_i686.whl", hash = "sha256:dbb8e7f2abee51cef77673be97760abff1674ed32847ce04b4af90f610144c7b"},
{file = "Pillow-9.3.0-pp37-pypy37_pp73-manylinux_2_17_x86_64.manylinux2014_x86_64.whl", hash = "sha256:bca31dd6014cb8b0b2db1e46081b0ca7d936f856da3b39744aef499db5d84d02"},
{file = "Pillow-9.3.0-pp37-pypy37_pp73-manylinux_2_28_x86_64.whl", hash = "sha256:c7025dce65566eb6e89f56c9509d4f628fddcedb131d9465cacd3d8bac337e7e"},
{file = "Pillow-9.3.0-pp37-pypy37_pp73-win_amd64.whl", hash = "sha256:ebf2029c1f464c59b8bdbe5143c79fa2045a581ac53679733d3a91d400ff9efb"},
{file = "Pillow-9.3.0-pp38-pypy38_pp73-macosx_10_10_x86_64.whl", hash = "sha256:b59430236b8e58840a0dfb4099a0e8717ffb779c952426a69ae435ca1f57210c"},
{file = "Pillow-9.3.0-pp38-pypy38_pp73-manylinux_2_17_i686.manylinux2014_i686.whl", hash = "sha256:12ce4932caf2ddf3e41d17fc9c02d67126935a44b86df6a206cf0d7161548627"},
{file = "Pillow-9.3.0-pp38-pypy38_pp73-manylinux_2_17_x86_64.manylinux2014_x86_64.whl", hash = "sha256:ae5331c23ce118c53b172fa64a4c037eb83c9165aba3a7ba9ddd3ec9fa64a699"},
{file = "Pillow-9.3.0-pp38-pypy38_pp73-manylinux_2_28_x86_64.whl", hash = "sha256:0b07fffc13f474264c336298d1b4ce01d9c5a011415b79d4ee5527bb69ae6f65"},
{file = "Pillow-9.3.0-pp38-pypy38_pp73-win_amd64.whl", hash = "sha256:073adb2ae23431d3b9bcbcff3fe698b62ed47211d0716b067385538a1b0f28b8"},
{file = "Pillow-9.3.0.tar.gz", hash = "sha256:c935a22a557a560108d780f9a0fc426dd7459940dc54faa49d83249c8d3e760f"},
]
pip = [
{file = "pip-22.3.1-py3-none-any.whl", hash = "sha256:908c78e6bc29b676ede1c4d57981d490cb892eb45cd8c214ab6298125119e077"},
{file = "pip-22.3.1.tar.gz", hash = "sha256:65fd48317359f3af8e593943e6ae1506b66325085ea64b706a998c6e83eeaf38"},
]
pkgutil_resolve_name = [
{file = "pkgutil_resolve_name-1.3.10-py3-none-any.whl", hash = "sha256:ca27cc078d25c5ad71a9de0a7a330146c4e014c2462d9af19c6b828280649c5e"},
{file = "pkgutil_resolve_name-1.3.10.tar.gz", hash = "sha256:357d6c9e6a755653cfd78893817c0853af365dd51ec97f3d358a819373bbd174"},
]
platformdirs = [
{file = "platformdirs-2.5.4-py3-none-any.whl", hash = "sha256:af0276409f9a02373d540bf8480021a048711d572745aef4b7842dad245eba10"},
{file = "platformdirs-2.5.4.tar.gz", hash = "sha256:1006647646d80f16130f052404c6b901e80ee4ed6bef6792e1f238a8969106f7"},
]
plotly = [
{file = "plotly-5.11.0-py2.py3-none-any.whl", hash = "sha256:52fd74b08aa4fd5a55b9d3034a30dbb746e572d7ed84897422f927fdf687ea5f"},
{file = "plotly-5.11.0.tar.gz", hash = "sha256:4efef479c2ec1d86dcdac8405b6ca70ca65649a77408e39a7e84a1ea2db6c787"},
]
pluggy = [
{file = "pluggy-1.0.0-py2.py3-none-any.whl", hash = "sha256:74134bbf457f031a36d68416e1509f34bd5ccc019f0bcc952c7b909d06b37bd3"},
{file = "pluggy-1.0.0.tar.gz", hash = "sha256:4224373bacce55f955a878bf9cfa763c1e360858e330072059e10bad68531159"},
]
poethepoet = [
{file = "poethepoet-0.16.5-py3-none-any.whl", hash = "sha256:493d5d47b4cb0894dde6a69d14129ba39ef3f124fabda1f83ebb39bbf737a40e"},
{file = "poethepoet-0.16.5.tar.gz", hash = "sha256:3c958792ce488661ba09df67ba832a1b3141aa640236505ee60c23f4b1db4dbc"},
]
pox = [
{file = "pox-0.3.2-py3-none-any.whl", hash = "sha256:56fe2f099ecd8a557b8948082504492de90e8598c34733c9b1fdeca8f7b6de61"},
{file = "pox-0.3.2.tar.gz", hash = "sha256:e825225297638d6e3d49415f8cfb65407a5d15e56f2fb7fe9d9b9e3050c65ee1"},
]
ppft = [
{file = "ppft-1.7.6.6-py3-none-any.whl", hash = "sha256:f355d2caeed8bd7c9e4a860c471f31f7e66d1ada2791ab5458ea7dca15a51e41"},
{file = "ppft-1.7.6.6.tar.gz", hash = "sha256:f933f0404f3e808bc860745acb3b79cd4fe31ea19a20889a645f900415be60f1"},
]
preshed = [
{file = "preshed-3.0.8-cp310-cp310-macosx_10_9_x86_64.whl", hash = "sha256:ea4b6df8ef7af38e864235256793bc3056e9699d991afcf6256fa298858582fc"},
{file = "preshed-3.0.8-cp310-cp310-macosx_11_0_arm64.whl", hash = "sha256:8e945fc814bdc29564a2ce137c237b3a9848aa1e76a1160369b6e0d328151fdd"},
{file = "preshed-3.0.8-cp310-cp310-manylinux_2_17_aarch64.manylinux2014_aarch64.whl", hash = "sha256:f9a4833530fe53001c351974e0c8bb660211b8d0358e592af185fec1ae12b2d0"},
{file = "preshed-3.0.8-cp310-cp310-manylinux_2_5_x86_64.manylinux1_x86_64.manylinux_2_17_x86_64.manylinux2014_x86_64.whl", hash = "sha256:e1472ee231f323b4f4368b1b5f8f08481ed43af89697d45450c6ae4af46ac08a"},
{file = "preshed-3.0.8-cp310-cp310-win_amd64.whl", hash = "sha256:c8a2e2931eea7e500fbf8e014b69022f3fab2e35a70da882e2fc753e5e487ae3"},
{file = "preshed-3.0.8-cp311-cp311-macosx_10_9_x86_64.whl", hash = "sha256:0e1bb8701df7861af26a312225bdf7c4822ac06fcf75aeb60fe2b0a20e64c222"},
{file = "preshed-3.0.8-cp311-cp311-macosx_11_0_arm64.whl", hash = "sha256:e9aef2b0b7687aecef48b1c6ff657d407ff24e75462877dcb888fa904c4a9c6d"},
{file = "preshed-3.0.8-cp311-cp311-manylinux_2_17_aarch64.manylinux2014_aarch64.whl", hash = "sha256:854d58a8913ebf3b193b0dc8064155b034e8987de25f26838dfeca09151fda8a"},
{file = "preshed-3.0.8-cp311-cp311-manylinux_2_5_x86_64.manylinux1_x86_64.manylinux_2_17_x86_64.manylinux2014_x86_64.whl", hash = "sha256:135e2ac0db1a3948d6ec295598c7e182b52c394663f2fcfe36a97ae51186be21"},
{file = "preshed-3.0.8-cp311-cp311-win_amd64.whl", hash = "sha256:019d8fa4161035811fb2804d03214143298739e162d0ad24e087bd46c50970f5"},
{file = "preshed-3.0.8-cp36-cp36m-manylinux_2_17_aarch64.manylinux2014_aarch64.whl", hash = "sha256:6a49ce52856fbb3ef4f1cc744c53f5d7e1ca370b1939620ac2509a6d25e02a50"},
{file = "preshed-3.0.8-cp36-cp36m-manylinux_2_5_x86_64.manylinux1_x86_64.manylinux_2_17_x86_64.manylinux2014_x86_64.whl", hash = "sha256:fdbc2957b36115a576c515ffe963919f19d2683f3c76c9304ae88ef59f6b5ca6"},
{file = "preshed-3.0.8-cp36-cp36m-win_amd64.whl", hash = "sha256:09cc9da2ac1b23010ce7d88a5e20f1033595e6dd80be14318e43b9409f4c7697"},
{file = "preshed-3.0.8-cp37-cp37m-macosx_10_9_x86_64.whl", hash = "sha256:e19c8069f1a1450f835f23d47724530cf716d581fcafb398f534d044f806b8c2"},
{file = "preshed-3.0.8-cp37-cp37m-manylinux_2_17_aarch64.manylinux2014_aarch64.whl", hash = "sha256:25b5ef5e387a0e17ff41202a8c1816184ab6fb3c0d0b847bf8add0ed5941eb8d"},
{file = "preshed-3.0.8-cp37-cp37m-manylinux_2_5_x86_64.manylinux1_x86_64.manylinux_2_17_x86_64.manylinux2014_x86_64.whl", hash = "sha256:53d3e2456a085425c66af7baba62d7eaa24aa5e460e1a9e02c401a2ed59abd7b"},
{file = "preshed-3.0.8-cp37-cp37m-win_amd64.whl", hash = "sha256:85e98a618fb36cdcc37501d8b9b8c1246651cc2f2db3a70702832523e0ae12f4"},
{file = "preshed-3.0.8-cp38-cp38-macosx_10_9_x86_64.whl", hash = "sha256:7f8837bf616335464f3713cbf562a3dcaad22c3ca9193f957018964ef871a68b"},
{file = "preshed-3.0.8-cp38-cp38-macosx_11_0_arm64.whl", hash = "sha256:720593baf2c2e295f855192974799e486da5f50d4548db93c44f5726a43cefb9"},
{file = "preshed-3.0.8-cp38-cp38-manylinux_2_17_aarch64.manylinux2014_aarch64.whl", hash = "sha256:e0ad3d860b9ce88a74cf7414bb4b1c6fd833813e7b818e76f49272c4974b19ce"},
{file = "preshed-3.0.8-cp38-cp38-manylinux_2_5_x86_64.manylinux1_x86_64.manylinux_2_17_x86_64.manylinux2014_x86_64.whl", hash = "sha256:fd19d48440b152657966a52e627780c0ddbe9d907b8d7ee4598505e80a3c55c7"},
{file = "preshed-3.0.8-cp38-cp38-win_amd64.whl", hash = "sha256:246e7c6890dc7fe9b10f0e31de3346b906e3862b6ef42fcbede37968f46a73bf"},
{file = "preshed-3.0.8-cp39-cp39-macosx_10_9_x86_64.whl", hash = "sha256:67643e66691770dc3434b01671648f481e3455209ce953727ef2330b16790aaa"},
{file = "preshed-3.0.8-cp39-cp39-macosx_11_0_arm64.whl", hash = "sha256:0ae25a010c9f551aa2247ee621457f679e07c57fc99d3fd44f84cb40b925f12c"},
{file = "preshed-3.0.8-cp39-cp39-manylinux_2_17_aarch64.manylinux2014_aarch64.whl", hash = "sha256:5a6a7fcf7dd2e7711051b3f0432da9ec9c748954c989f49d2cd8eabf8c2d953e"},
{file = "preshed-3.0.8-cp39-cp39-manylinux_2_5_x86_64.manylinux1_x86_64.manylinux_2_17_x86_64.manylinux2014_x86_64.whl", hash = "sha256:5942858170c4f53d9afc6352a86bbc72fc96cc4d8964b6415492114a5920d3ed"},
{file = "preshed-3.0.8-cp39-cp39-win_amd64.whl", hash = "sha256:06793022a56782ef51d74f1399925a2ba958e50c5cfbc6fa5b25c4945e158a07"},
{file = "preshed-3.0.8.tar.gz", hash = "sha256:6c74c70078809bfddda17be96483c41d06d717934b07cab7921011d81758b357"},
]
progressbar2 = [
{file = "progressbar2-4.2.0-py2.py3-none-any.whl", hash = "sha256:1a8e201211f99a85df55f720b3b6da7fb5c8cdef56792c4547205be2de5ea606"},
{file = "progressbar2-4.2.0.tar.gz", hash = "sha256:1393922fcb64598944ad457569fbeb4b3ac189ef50b5adb9cef3284e87e394ce"},
]
prometheus-client = [
{file = "prometheus_client-0.15.0-py3-none-any.whl", hash = "sha256:db7c05cbd13a0f79975592d112320f2605a325969b270a94b71dcabc47b931d2"},
{file = "prometheus_client-0.15.0.tar.gz", hash = "sha256:be26aa452490cfcf6da953f9436e95a9f2b4d578ca80094b4458930e5f584ab1"},
]
prompt-toolkit = [
{file = "prompt_toolkit-3.0.33-py3-none-any.whl", hash = "sha256:ced598b222f6f4029c0800cefaa6a17373fb580cd093223003475ce32805c35b"},
{file = "prompt_toolkit-3.0.33.tar.gz", hash = "sha256:535c29c31216c77302877d5120aef6c94ff573748a5b5ca5b1b1f76f5e700c73"},
]
protobuf = [
{file = "protobuf-3.19.6-cp310-cp310-manylinux2014_aarch64.whl", hash = "sha256:010be24d5a44be7b0613750ab40bc8b8cedc796db468eae6c779b395f50d1fa1"},
{file = "protobuf-3.19.6-cp310-cp310-manylinux_2_17_x86_64.manylinux2014_x86_64.whl", hash = "sha256:11478547958c2dfea921920617eb457bc26867b0d1aa065ab05f35080c5d9eb6"},
{file = "protobuf-3.19.6-cp310-cp310-win32.whl", hash = "sha256:559670e006e3173308c9254d63facb2c03865818f22204037ab76f7a0ff70b5f"},
{file = "protobuf-3.19.6-cp310-cp310-win_amd64.whl", hash = "sha256:347b393d4dd06fb93a77620781e11c058b3b0a5289262f094379ada2920a3730"},
{file = "protobuf-3.19.6-cp36-cp36m-macosx_10_9_x86_64.whl", hash = "sha256:a8ce5ae0de28b51dff886fb922012dad885e66176663950cb2344c0439ecb473"},
{file = "protobuf-3.19.6-cp36-cp36m-manylinux_2_17_x86_64.manylinux2014_x86_64.whl", hash = "sha256:90b0d02163c4e67279ddb6dc25e063db0130fc299aefabb5d481053509fae5c8"},
{file = "protobuf-3.19.6-cp36-cp36m-win32.whl", hash = "sha256:30f5370d50295b246eaa0296533403961f7e64b03ea12265d6dfce3a391d8992"},
{file = "protobuf-3.19.6-cp36-cp36m-win_amd64.whl", hash = "sha256:0c0714b025ec057b5a7600cb66ce7c693815f897cfda6d6efb58201c472e3437"},
{file = "protobuf-3.19.6-cp37-cp37m-macosx_10_9_x86_64.whl", hash = "sha256:5057c64052a1f1dd7d4450e9aac25af6bf36cfbfb3a1cd89d16393a036c49157"},
{file = "protobuf-3.19.6-cp37-cp37m-manylinux2014_aarch64.whl", hash = "sha256:bb6776bd18f01ffe9920e78e03a8676530a5d6c5911934c6a1ac6eb78973ecb6"},
{file = "protobuf-3.19.6-cp37-cp37m-manylinux_2_17_x86_64.manylinux2014_x86_64.whl", hash = "sha256:84a04134866861b11556a82dd91ea6daf1f4925746b992f277b84013a7cc1229"},
{file = "protobuf-3.19.6-cp37-cp37m-win32.whl", hash = "sha256:4bc98de3cdccfb5cd769620d5785b92c662b6bfad03a202b83799b6ed3fa1fa7"},
{file = "protobuf-3.19.6-cp37-cp37m-win_amd64.whl", hash = "sha256:aa3b82ca1f24ab5326dcf4ea00fcbda703e986b22f3d27541654f749564d778b"},
{file = "protobuf-3.19.6-cp38-cp38-macosx_10_9_x86_64.whl", hash = "sha256:2b2d2913bcda0e0ec9a784d194bc490f5dc3d9d71d322d070b11a0ade32ff6ba"},
{file = "protobuf-3.19.6-cp38-cp38-manylinux2014_aarch64.whl", hash = "sha256:d0b635cefebd7a8a0f92020562dead912f81f401af7e71f16bf9506ff3bdbb38"},
{file = "protobuf-3.19.6-cp38-cp38-manylinux_2_17_x86_64.manylinux2014_x86_64.whl", hash = "sha256:7a552af4dc34793803f4e735aabe97ffc45962dfd3a237bdde242bff5a3de684"},
{file = "protobuf-3.19.6-cp38-cp38-win32.whl", hash = "sha256:0469bc66160180165e4e29de7f445e57a34ab68f49357392c5b2f54c656ab25e"},
{file = "protobuf-3.19.6-cp38-cp38-win_amd64.whl", hash = "sha256:91d5f1e139ff92c37e0ff07f391101df77e55ebb97f46bbc1535298d72019462"},
{file = "protobuf-3.19.6-cp39-cp39-macosx_10_9_x86_64.whl", hash = "sha256:c0ccd3f940fe7f3b35a261b1dd1b4fc850c8fde9f74207015431f174be5976b3"},
{file = "protobuf-3.19.6-cp39-cp39-manylinux2014_aarch64.whl", hash = "sha256:30a15015d86b9c3b8d6bf78d5b8c7749f2512c29f168ca259c9d7727604d0e39"},
{file = "protobuf-3.19.6-cp39-cp39-manylinux_2_17_x86_64.manylinux2014_x86_64.whl", hash = "sha256:878b4cd080a21ddda6ac6d1e163403ec6eea2e206cf225982ae04567d39be7b0"},
{file = "protobuf-3.19.6-cp39-cp39-win32.whl", hash = "sha256:5a0d7539a1b1fb7e76bf5faa0b44b30f812758e989e59c40f77a7dab320e79b9"},
{file = "protobuf-3.19.6-cp39-cp39-win_amd64.whl", hash = "sha256:bbf5cea5048272e1c60d235c7bd12ce1b14b8a16e76917f371c718bd3005f045"},
{file = "protobuf-3.19.6-py2.py3-none-any.whl", hash = "sha256:14082457dc02be946f60b15aad35e9f5c69e738f80ebbc0900a19bc83734a5a4"},
{file = "protobuf-3.19.6.tar.gz", hash = "sha256:5f5540d57a43042389e87661c6eaa50f47c19c6176e8cf1c4f287aeefeccb5c4"},
]
psutil = [
{file = "psutil-5.9.4-cp27-cp27m-macosx_10_9_x86_64.whl", hash = "sha256:c1ca331af862803a42677c120aff8a814a804e09832f166f226bfd22b56feee8"},
{file = "psutil-5.9.4-cp27-cp27m-manylinux2010_i686.whl", hash = "sha256:68908971daf802203f3d37e78d3f8831b6d1014864d7a85937941bb35f09aefe"},
{file = "psutil-5.9.4-cp27-cp27m-manylinux2010_x86_64.whl", hash = "sha256:3ff89f9b835100a825b14c2808a106b6fdcc4b15483141482a12c725e7f78549"},
{file = "psutil-5.9.4-cp27-cp27m-win32.whl", hash = "sha256:852dd5d9f8a47169fe62fd4a971aa07859476c2ba22c2254d4a1baa4e10b95ad"},
{file = "psutil-5.9.4-cp27-cp27m-win_amd64.whl", hash = "sha256:9120cd39dca5c5e1c54b59a41d205023d436799b1c8c4d3ff71af18535728e94"},
{file = "psutil-5.9.4-cp27-cp27mu-manylinux2010_i686.whl", hash = "sha256:6b92c532979bafc2df23ddc785ed116fced1f492ad90a6830cf24f4d1ea27d24"},
{file = "psutil-5.9.4-cp27-cp27mu-manylinux2010_x86_64.whl", hash = "sha256:efeae04f9516907be44904cc7ce08defb6b665128992a56957abc9b61dca94b7"},
{file = "psutil-5.9.4-cp36-abi3-macosx_10_9_x86_64.whl", hash = "sha256:54d5b184728298f2ca8567bf83c422b706200bcbbfafdc06718264f9393cfeb7"},
{file = "psutil-5.9.4-cp36-abi3-manylinux_2_12_i686.manylinux2010_i686.manylinux_2_17_i686.manylinux2014_i686.whl", hash = "sha256:16653106f3b59386ffe10e0bad3bb6299e169d5327d3f187614b1cb8f24cf2e1"},
{file = "psutil-5.9.4-cp36-abi3-manylinux_2_12_x86_64.manylinux2010_x86_64.manylinux_2_17_x86_64.manylinux2014_x86_64.whl", hash = "sha256:54c0d3d8e0078b7666984e11b12b88af2db11d11249a8ac8920dd5ef68a66e08"},
{file = "psutil-5.9.4-cp36-abi3-win32.whl", hash = "sha256:149555f59a69b33f056ba1c4eb22bb7bf24332ce631c44a319cec09f876aaeff"},
{file = "psutil-5.9.4-cp36-abi3-win_amd64.whl", hash = "sha256:fd8522436a6ada7b4aad6638662966de0d61d241cb821239b2ae7013d41a43d4"},
{file = "psutil-5.9.4-cp38-abi3-macosx_11_0_arm64.whl", hash = "sha256:6001c809253a29599bc0dfd5179d9f8a5779f9dffea1da0f13c53ee568115e1e"},
{file = "psutil-5.9.4.tar.gz", hash = "sha256:3d7f9739eb435d4b1338944abe23f49584bde5395f27487d2ee25ad9a8774a62"},
]
ptyprocess = [
{file = "ptyprocess-0.7.0-py2.py3-none-any.whl", hash = "sha256:4b41f3967fce3af57cc7e94b888626c18bf37a083e3651ca8feeb66d492fef35"},
{file = "ptyprocess-0.7.0.tar.gz", hash = "sha256:5c5d0a3b48ceee0b48485e0c26037c0acd7d29765ca3fbb5cb3831d347423220"},
]
pure-eval = [
{file = "pure_eval-0.2.2-py3-none-any.whl", hash = "sha256:01eaab343580944bc56080ebe0a674b39ec44a945e6d09ba7db3cb8cec289350"},
{file = "pure_eval-0.2.2.tar.gz", hash = "sha256:2b45320af6dfaa1750f543d714b6d1c520a1688dec6fd24d339063ce0aaa9ac3"},
]
py = [
{file = "py-1.11.0-py2.py3-none-any.whl", hash = "sha256:607c53218732647dff4acdfcd50cb62615cedf612e72d1724fb1a0cc6405b378"},
{file = "py-1.11.0.tar.gz", hash = "sha256:51c75c4126074b472f746a24399ad32f6053d1b34b68d2fa41e558e6f4a98719"},
]
pyasn1 = [
{file = "pyasn1-0.4.8-py2.py3-none-any.whl", hash = "sha256:39c7e2ec30515947ff4e87fb6f456dfc6e84857d34be479c9d4a4ba4bf46aa5d"},
{file = "pyasn1-0.4.8.tar.gz", hash = "sha256:aef77c9fb94a3ac588e87841208bdec464471d9871bd5050a287cc9a475cd0ba"},
]
pyasn1-modules = [
{file = "pyasn1-modules-0.2.8.tar.gz", hash = "sha256:905f84c712230b2c592c19470d3ca8d552de726050d1d1716282a1f6146be65e"},
{file = "pyasn1_modules-0.2.8-py2.py3-none-any.whl", hash = "sha256:a50b808ffeb97cb3601dd25981f6b016cbb3d31fbf57a8b8a87428e6158d0c74"},
]
pycodestyle = [
{file = "pycodestyle-2.8.0-py2.py3-none-any.whl", hash = "sha256:720f8b39dde8b293825e7ff02c475f3077124006db4f440dcbc9a20b76548a20"},
{file = "pycodestyle-2.8.0.tar.gz", hash = "sha256:eddd5847ef438ea1c7870ca7eb78a9d47ce0cdb4851a5523949f2601d0cbbe7f"},
]
pycparser = [
{file = "pycparser-2.21-py2.py3-none-any.whl", hash = "sha256:8ee45429555515e1f6b185e78100aea234072576aa43ab53aefcae078162fca9"},
{file = "pycparser-2.21.tar.gz", hash = "sha256:e644fdec12f7872f86c58ff790da456218b10f863970249516d60a5eaca77206"},
]
pydantic = [
{file = "pydantic-1.10.2-cp310-cp310-macosx_10_9_x86_64.whl", hash = "sha256:bb6ad4489af1bac6955d38ebcb95079a836af31e4c4f74aba1ca05bb9f6027bd"},
{file = "pydantic-1.10.2-cp310-cp310-macosx_11_0_arm64.whl", hash = "sha256:a1f5a63a6dfe19d719b1b6e6106561869d2efaca6167f84f5ab9347887d78b98"},
{file = "pydantic-1.10.2-cp310-cp310-manylinux_2_17_x86_64.manylinux2014_x86_64.whl", hash = "sha256:352aedb1d71b8b0736c6d56ad2bd34c6982720644b0624462059ab29bd6e5912"},
{file = "pydantic-1.10.2-cp310-cp310-manylinux_2_5_i686.manylinux1_i686.manylinux_2_17_i686.manylinux2014_i686.whl", hash = "sha256:19b3b9ccf97af2b7519c42032441a891a5e05c68368f40865a90eb88833c2559"},
{file = "pydantic-1.10.2-cp310-cp310-musllinux_1_1_i686.whl", hash = "sha256:e9069e1b01525a96e6ff49e25876d90d5a563bc31c658289a8772ae186552236"},
{file = "pydantic-1.10.2-cp310-cp310-musllinux_1_1_x86_64.whl", hash = "sha256:355639d9afc76bcb9b0c3000ddcd08472ae75318a6eb67a15866b87e2efa168c"},
{file = "pydantic-1.10.2-cp310-cp310-win_amd64.whl", hash = "sha256:ae544c47bec47a86bc7d350f965d8b15540e27e5aa4f55170ac6a75e5f73b644"},
{file = "pydantic-1.10.2-cp311-cp311-macosx_10_9_x86_64.whl", hash = "sha256:a4c805731c33a8db4b6ace45ce440c4ef5336e712508b4d9e1aafa617dc9907f"},
{file = "pydantic-1.10.2-cp311-cp311-macosx_11_0_arm64.whl", hash = "sha256:d49f3db871575e0426b12e2f32fdb25e579dea16486a26e5a0474af87cb1ab0a"},
{file = "pydantic-1.10.2-cp311-cp311-manylinux_2_17_x86_64.manylinux2014_x86_64.whl", hash = "sha256:37c90345ec7dd2f1bcef82ce49b6235b40f282b94d3eec47e801baf864d15525"},
{file = "pydantic-1.10.2-cp311-cp311-manylinux_2_5_i686.manylinux1_i686.manylinux_2_17_i686.manylinux2014_i686.whl", hash = "sha256:7b5ba54d026c2bd2cb769d3468885f23f43710f651688e91f5fb1edcf0ee9283"},
{file = "pydantic-1.10.2-cp311-cp311-musllinux_1_1_i686.whl", hash = "sha256:05e00dbebbe810b33c7a7362f231893183bcc4251f3f2ff991c31d5c08240c42"},
{file = "pydantic-1.10.2-cp311-cp311-musllinux_1_1_x86_64.whl", hash = "sha256:2d0567e60eb01bccda3a4df01df677adf6b437958d35c12a3ac3e0f078b0ee52"},
{file = "pydantic-1.10.2-cp311-cp311-win_amd64.whl", hash = "sha256:c6f981882aea41e021f72779ce2a4e87267458cc4d39ea990729e21ef18f0f8c"},
{file = "pydantic-1.10.2-cp37-cp37m-macosx_10_9_x86_64.whl", hash = "sha256:c4aac8e7103bf598373208f6299fa9a5cfd1fc571f2d40bf1dd1955a63d6eeb5"},
{file = "pydantic-1.10.2-cp37-cp37m-manylinux_2_17_x86_64.manylinux2014_x86_64.whl", hash = "sha256:81a7b66c3f499108b448f3f004801fcd7d7165fb4200acb03f1c2402da73ce4c"},
{file = "pydantic-1.10.2-cp37-cp37m-manylinux_2_5_i686.manylinux1_i686.manylinux_2_17_i686.manylinux2014_i686.whl", hash = "sha256:bedf309630209e78582ffacda64a21f96f3ed2e51fbf3962d4d488e503420254"},
{file = "pydantic-1.10.2-cp37-cp37m-musllinux_1_1_i686.whl", hash = "sha256:9300fcbebf85f6339a02c6994b2eb3ff1b9c8c14f502058b5bf349d42447dcf5"},
{file = "pydantic-1.10.2-cp37-cp37m-musllinux_1_1_x86_64.whl", hash = "sha256:216f3bcbf19c726b1cc22b099dd409aa371f55c08800bcea4c44c8f74b73478d"},
{file = "pydantic-1.10.2-cp37-cp37m-win_amd64.whl", hash = "sha256:dd3f9a40c16daf323cf913593083698caee97df2804aa36c4b3175d5ac1b92a2"},
{file = "pydantic-1.10.2-cp38-cp38-macosx_10_9_x86_64.whl", hash = "sha256:b97890e56a694486f772d36efd2ba31612739bc6f3caeee50e9e7e3ebd2fdd13"},
{file = "pydantic-1.10.2-cp38-cp38-macosx_11_0_arm64.whl", hash = "sha256:9cabf4a7f05a776e7793e72793cd92cc865ea0e83a819f9ae4ecccb1b8aa6116"},
{file = "pydantic-1.10.2-cp38-cp38-manylinux_2_17_x86_64.manylinux2014_x86_64.whl", hash = "sha256:06094d18dd5e6f2bbf93efa54991c3240964bb663b87729ac340eb5014310624"},
{file = "pydantic-1.10.2-cp38-cp38-manylinux_2_5_i686.manylinux1_i686.manylinux_2_17_i686.manylinux2014_i686.whl", hash = "sha256:cc78cc83110d2f275ec1970e7a831f4e371ee92405332ebfe9860a715f8336e1"},
{file = "pydantic-1.10.2-cp38-cp38-musllinux_1_1_i686.whl", hash = "sha256:1ee433e274268a4b0c8fde7ad9d58ecba12b069a033ecc4645bb6303c062d2e9"},
{file = "pydantic-1.10.2-cp38-cp38-musllinux_1_1_x86_64.whl", hash = "sha256:7c2abc4393dea97a4ccbb4ec7d8658d4e22c4765b7b9b9445588f16c71ad9965"},
{file = "pydantic-1.10.2-cp38-cp38-win_amd64.whl", hash = "sha256:0b959f4d8211fc964772b595ebb25f7652da3f22322c007b6fed26846a40685e"},
{file = "pydantic-1.10.2-cp39-cp39-macosx_10_9_x86_64.whl", hash = "sha256:c33602f93bfb67779f9c507e4d69451664524389546bacfe1bee13cae6dc7488"},
{file = "pydantic-1.10.2-cp39-cp39-macosx_11_0_arm64.whl", hash = "sha256:5760e164b807a48a8f25f8aa1a6d857e6ce62e7ec83ea5d5c5a802eac81bad41"},
{file = "pydantic-1.10.2-cp39-cp39-manylinux_2_17_x86_64.manylinux2014_x86_64.whl", hash = "sha256:6eb843dcc411b6a2237a694f5e1d649fc66c6064d02b204a7e9d194dff81eb4b"},
{file = "pydantic-1.10.2-cp39-cp39-manylinux_2_5_i686.manylinux1_i686.manylinux_2_17_i686.manylinux2014_i686.whl", hash = "sha256:4b8795290deaae348c4eba0cebb196e1c6b98bdbe7f50b2d0d9a4a99716342fe"},
{file = "pydantic-1.10.2-cp39-cp39-musllinux_1_1_i686.whl", hash = "sha256:e0bedafe4bc165ad0a56ac0bd7695df25c50f76961da29c050712596cf092d6d"},
{file = "pydantic-1.10.2-cp39-cp39-musllinux_1_1_x86_64.whl", hash = "sha256:2e05aed07fa02231dbf03d0adb1be1d79cabb09025dd45aa094aa8b4e7b9dcda"},
{file = "pydantic-1.10.2-cp39-cp39-win_amd64.whl", hash = "sha256:c1ba1afb396148bbc70e9eaa8c06c1716fdddabaf86e7027c5988bae2a829ab6"},
{file = "pydantic-1.10.2-py3-none-any.whl", hash = "sha256:1b6ee725bd6e83ec78b1aa32c5b1fa67a3a65badddde3976bca5fe4568f27709"},
{file = "pydantic-1.10.2.tar.gz", hash = "sha256:91b8e218852ef6007c2b98cd861601c6a09f1aa32bbbb74fab5b1c33d4a1e410"},
]
pydata-sphinx-theme = [
{file = "pydata_sphinx_theme-0.9.0-py3-none-any.whl", hash = "sha256:b22b442a6d6437e5eaf0a1f057169ffcb31eaa9f10be7d5481a125e735c71c12"},
{file = "pydata_sphinx_theme-0.9.0.tar.gz", hash = "sha256:03598a86915b596f4bf80bef79a4d33276a83e670bf360def699dbb9f99dc57a"},
]
pydot = [
{file = "pydot-1.4.2-py2.py3-none-any.whl", hash = "sha256:66c98190c65b8d2e2382a441b4c0edfdb4f4c025ef9cb9874de478fb0793a451"},
{file = "pydot-1.4.2.tar.gz", hash = "sha256:248081a39bcb56784deb018977e428605c1c758f10897a339fce1dd728ff007d"},
]
pydotplus = [
{file = "pydotplus-2.0.2.tar.gz", hash = "sha256:91e85e9ee9b85d2391ead7d635e3d9c7f5f44fd60a60e59b13e2403fa66505c4"},
]
pyflakes = [
{file = "pyflakes-2.4.0-py2.py3-none-any.whl", hash = "sha256:3bb3a3f256f4b7968c9c788781e4ff07dce46bdf12339dcda61053375426ee2e"},
{file = "pyflakes-2.4.0.tar.gz", hash = "sha256:05a85c2872edf37a4ed30b0cce2f6093e1d0581f8c19d7393122da7e25b2b24c"},
]
pygam = [
{file = "pygam-0.8.0-py2.py3-none-any.whl", hash = "sha256:198bd478700520b7c399cc4bcbc011e46850969c32fb09ef0b7a4bbb14e842a5"},
{file = "pygam-0.8.0.tar.gz", hash = "sha256:5cae01aea8b2fede72a6da0aba1490213af54b3476745666af26bbe700479166"},
]
Pygments = [
{file = "Pygments-2.13.0-py3-none-any.whl", hash = "sha256:f643f331ab57ba3c9d89212ee4a2dabc6e94f117cf4eefde99a0574720d14c42"},
{file = "Pygments-2.13.0.tar.gz", hash = "sha256:56a8508ae95f98e2b9bdf93a6be5ae3f7d8af858b43e02c5a2ff083726be40c1"},
]
pygraphviz = [
{file = "pygraphviz-1.10.zip", hash = "sha256:457e093a888128903251a266a8cc16b4ba93f3f6334b3ebfed92c7471a74d867"},
]
pyparsing = [
{file = "pyparsing-3.0.9-py3-none-any.whl", hash = "sha256:5026bae9a10eeaefb61dab2f09052b9f4307d44aee4eda64b309723d8d206bbc"},
{file = "pyparsing-3.0.9.tar.gz", hash = "sha256:2b020ecf7d21b687f219b71ecad3631f644a47f01403fa1d1036b0c6416d70fb"},
]
pyro-api = [
{file = "pyro-api-0.1.2.tar.gz", hash = "sha256:a1b900d9580aa1c2fab3b123ab7ff33413744da7c5f440bd4aadc4d40d14d920"},
{file = "pyro_api-0.1.2-py3-none-any.whl", hash = "sha256:10e0e42e9e4401ce464dab79c870e50dfb4f413d326fa777f3582928ef9caf8f"},
]
pyro-ppl = [
{file = "pyro-ppl-1.8.3.tar.gz", hash = "sha256:3edd4381b020d12e8ab50ebe0298c7a68d150b8a024f998ad86fdac7a308d50e"},
{file = "pyro_ppl-1.8.3-py3-none-any.whl", hash = "sha256:cf642cb8bd1a54ad9c69960a5910e423b33f5de3480589b5dcc5f11236b403fb"},
]
pyrsistent = [
{file = "pyrsistent-0.19.2-cp310-cp310-macosx_10_9_universal2.whl", hash = "sha256:d6982b5a0237e1b7d876b60265564648a69b14017f3b5f908c5be2de3f9abb7a"},
{file = "pyrsistent-0.19.2-cp310-cp310-manylinux_2_17_aarch64.manylinux2014_aarch64.whl", hash = "sha256:187d5730b0507d9285a96fca9716310d572e5464cadd19f22b63a6976254d77a"},
{file = "pyrsistent-0.19.2-cp310-cp310-manylinux_2_5_i686.manylinux1_i686.manylinux_2_17_i686.manylinux2014_i686.whl", hash = "sha256:055ab45d5911d7cae397dc418808d8802fb95262751872c841c170b0dbf51eed"},
{file = "pyrsistent-0.19.2-cp310-cp310-win32.whl", hash = "sha256:456cb30ca8bff00596519f2c53e42c245c09e1a4543945703acd4312949bfd41"},
{file = "pyrsistent-0.19.2-cp310-cp310-win_amd64.whl", hash = "sha256:b39725209e06759217d1ac5fcdb510e98670af9e37223985f330b611f62e7425"},
{file = "pyrsistent-0.19.2-cp37-cp37m-macosx_10_9_x86_64.whl", hash = "sha256:2aede922a488861de0ad00c7630a6e2d57e8023e4be72d9d7147a9fcd2d30712"},
{file = "pyrsistent-0.19.2-cp37-cp37m-manylinux_2_17_aarch64.manylinux2014_aarch64.whl", hash = "sha256:879b4c2f4d41585c42df4d7654ddffff1239dc4065bc88b745f0341828b83e78"},
{file = "pyrsistent-0.19.2-cp37-cp37m-manylinux_2_5_i686.manylinux1_i686.manylinux_2_17_i686.manylinux2014_i686.whl", hash = "sha256:c43bec251bbd10e3cb58ced80609c5c1eb238da9ca78b964aea410fb820d00d6"},
{file = "pyrsistent-0.19.2-cp37-cp37m-win32.whl", hash = "sha256:d690b18ac4b3e3cab73b0b7aa7dbe65978a172ff94970ff98d82f2031f8971c2"},
{file = "pyrsistent-0.19.2-cp37-cp37m-win_amd64.whl", hash = "sha256:3ba4134a3ff0fc7ad225b6b457d1309f4698108fb6b35532d015dca8f5abed73"},
{file = "pyrsistent-0.19.2-cp38-cp38-macosx_10_9_universal2.whl", hash = "sha256:a178209e2df710e3f142cbd05313ba0c5ebed0a55d78d9945ac7a4e09d923308"},
{file = "pyrsistent-0.19.2-cp38-cp38-manylinux_2_17_aarch64.manylinux2014_aarch64.whl", hash = "sha256:e371b844cec09d8dc424d940e54bba8f67a03ebea20ff7b7b0d56f526c71d584"},
{file = "pyrsistent-0.19.2-cp38-cp38-manylinux_2_5_i686.manylinux1_i686.manylinux_2_17_i686.manylinux2014_i686.whl", hash = "sha256:111156137b2e71f3a9936baf27cb322e8024dac3dc54ec7fb9f0bcf3249e68bb"},
{file = "pyrsistent-0.19.2-cp38-cp38-win32.whl", hash = "sha256:e5d8f84d81e3729c3b506657dddfe46e8ba9c330bf1858ee33108f8bb2adb38a"},
{file = "pyrsistent-0.19.2-cp38-cp38-win_amd64.whl", hash = "sha256:9cd3e9978d12b5d99cbdc727a3022da0430ad007dacf33d0bf554b96427f33ab"},
{file = "pyrsistent-0.19.2-cp39-cp39-macosx_10_9_universal2.whl", hash = "sha256:f1258f4e6c42ad0b20f9cfcc3ada5bd6b83374516cd01c0960e3cb75fdca6770"},
{file = "pyrsistent-0.19.2-cp39-cp39-manylinux_2_17_aarch64.manylinux2014_aarch64.whl", hash = "sha256:21455e2b16000440e896ab99e8304617151981ed40c29e9507ef1c2e4314ee95"},
{file = "pyrsistent-0.19.2-cp39-cp39-manylinux_2_5_i686.manylinux1_i686.manylinux_2_17_i686.manylinux2014_i686.whl", hash = "sha256:bfd880614c6237243ff53a0539f1cb26987a6dc8ac6e66e0c5a40617296a045e"},
{file = "pyrsistent-0.19.2-cp39-cp39-win32.whl", hash = "sha256:71d332b0320642b3261e9fee47ab9e65872c2bd90260e5d225dabeed93cbd42b"},
{file = "pyrsistent-0.19.2-cp39-cp39-win_amd64.whl", hash = "sha256:dec3eac7549869365fe263831f576c8457f6c833937c68542d08fde73457d291"},
{file = "pyrsistent-0.19.2-py3-none-any.whl", hash = "sha256:ea6b79a02a28550c98b6ca9c35b9f492beaa54d7c5c9e9949555893c8a9234d0"},
{file = "pyrsistent-0.19.2.tar.gz", hash = "sha256:bfa0351be89c9fcbcb8c9879b826f4353be10f58f8a677efab0c017bf7137ec2"},
]
pytest = [
{file = "pytest-7.2.0-py3-none-any.whl", hash = "sha256:892f933d339f068883b6fd5a459f03d85bfcb355e4981e146d2c7616c21fef71"},
{file = "pytest-7.2.0.tar.gz", hash = "sha256:c4014eb40e10f11f355ad4e3c2fb2c6c6d1919c73f3b5a433de4708202cade59"},
]
pytest-cov = [
{file = "pytest-cov-3.0.0.tar.gz", hash = "sha256:e7f0f5b1617d2210a2cabc266dfe2f4c75a8d32fb89eafb7ad9d06f6d076d470"},
{file = "pytest_cov-3.0.0-py3-none-any.whl", hash = "sha256:578d5d15ac4a25e5f961c938b85a05b09fdaae9deef3bb6de9a6e766622ca7a6"},
]
pytest-split = [
{file = "pytest-split-0.8.0.tar.gz", hash = "sha256:8571a3f60ca8656c698ed86b0a3212bb9e79586ecb201daef9988c336ff0e6ff"},
{file = "pytest_split-0.8.0-py3-none-any.whl", hash = "sha256:2e06b8b1ab7ceb19d0b001548271abaf91d12415a8687086cf40581c555d309f"},
]
python-dateutil = [
{file = "python-dateutil-2.8.2.tar.gz", hash = "sha256:0123cacc1627ae19ddf3c27a5de5bd67ee4586fbdd6440d9748f8abb483d3e86"},
{file = "python_dateutil-2.8.2-py2.py3-none-any.whl", hash = "sha256:961d03dc3453ebbc59dbdea9e4e11c5651520a876d0f4db161e8674aae935da9"},
]
python-utils = [
{file = "python-utils-3.4.5.tar.gz", hash = "sha256:7e329c427a6d23036cfcc4501638afb31b2ddc8896f25393562833874b8c6e0a"},
{file = "python_utils-3.4.5-py2.py3-none-any.whl", hash = "sha256:22990259324eae88faa3389d302861a825dbdd217ab40e3ec701851b3337d592"},
]
pytz = [
{file = "pytz-2022.6-py2.py3-none-any.whl", hash = "sha256:222439474e9c98fced559f1709d89e6c9cbf8d79c794ff3eb9f8800064291427"},
{file = "pytz-2022.6.tar.gz", hash = "sha256:e89512406b793ca39f5971bc999cc538ce125c0e51c27941bef4568b460095e2"},
]
pytz-deprecation-shim = [
{file = "pytz_deprecation_shim-0.1.0.post0-py2.py3-none-any.whl", hash = "sha256:8314c9692a636c8eb3bda879b9f119e350e93223ae83e70e80c31675a0fdc1a6"},
{file = "pytz_deprecation_shim-0.1.0.post0.tar.gz", hash = "sha256:af097bae1b616dde5c5744441e2ddc69e74dfdcb0c263129610d85b87445a59d"},
]
pywin32 = [
{file = "pywin32-305-cp310-cp310-win32.whl", hash = "sha256:421f6cd86e84bbb696d54563c48014b12a23ef95a14e0bdba526be756d89f116"},
{file = "pywin32-305-cp310-cp310-win_amd64.whl", hash = "sha256:73e819c6bed89f44ff1d690498c0a811948f73777e5f97c494c152b850fad478"},
{file = "pywin32-305-cp310-cp310-win_arm64.whl", hash = "sha256:742eb905ce2187133a29365b428e6c3b9001d79accdc30aa8969afba1d8470f4"},
{file = "pywin32-305-cp311-cp311-win32.whl", hash = "sha256:19ca459cd2e66c0e2cc9a09d589f71d827f26d47fe4a9d09175f6aa0256b51c2"},
{file = "pywin32-305-cp311-cp311-win_amd64.whl", hash = "sha256:326f42ab4cfff56e77e3e595aeaf6c216712bbdd91e464d167c6434b28d65990"},
{file = "pywin32-305-cp311-cp311-win_arm64.whl", hash = "sha256:4ecd404b2c6eceaca52f8b2e3e91b2187850a1ad3f8b746d0796a98b4cea04db"},
{file = "pywin32-305-cp36-cp36m-win32.whl", hash = "sha256:48d8b1659284f3c17b68587af047d110d8c44837736b8932c034091683e05863"},
{file = "pywin32-305-cp36-cp36m-win_amd64.whl", hash = "sha256:13362cc5aa93c2beaf489c9c9017c793722aeb56d3e5166dadd5ef82da021fe1"},
{file = "pywin32-305-cp37-cp37m-win32.whl", hash = "sha256:a55db448124d1c1484df22fa8bbcbc45c64da5e6eae74ab095b9ea62e6d00496"},
{file = "pywin32-305-cp37-cp37m-win_amd64.whl", hash = "sha256:109f98980bfb27e78f4df8a51a8198e10b0f347257d1e265bb1a32993d0c973d"},
{file = "pywin32-305-cp38-cp38-win32.whl", hash = "sha256:9dd98384da775afa009bc04863426cb30596fd78c6f8e4e2e5bbf4edf8029504"},
{file = "pywin32-305-cp38-cp38-win_amd64.whl", hash = "sha256:56d7a9c6e1a6835f521788f53b5af7912090674bb84ef5611663ee1595860fc7"},
{file = "pywin32-305-cp39-cp39-win32.whl", hash = "sha256:9d968c677ac4d5cbdaa62fd3014ab241718e619d8e36ef8e11fb930515a1e918"},
{file = "pywin32-305-cp39-cp39-win_amd64.whl", hash = "sha256:50768c6b7c3f0b38b7fb14dd4104da93ebced5f1a50dc0e834594bff6fbe1271"},
]
pywinpty = [
{file = "pywinpty-2.0.9-cp310-none-win_amd64.whl", hash = "sha256:30a7b371446a694a6ce5ef906d70ac04e569de5308c42a2bdc9c3bc9275ec51f"},
{file = "pywinpty-2.0.9-cp311-none-win_amd64.whl", hash = "sha256:d78ef6f4bd7a6c6f94dc1a39ba8fb028540cc39f5cb593e756506db17843125f"},
{file = "pywinpty-2.0.9-cp37-none-win_amd64.whl", hash = "sha256:5ed36aa087e35a3a183f833631b3e4c1ae92fe2faabfce0fa91b77ed3f0f1382"},
{file = "pywinpty-2.0.9-cp38-none-win_amd64.whl", hash = "sha256:2352f44ee913faaec0a02d3c112595e56b8af7feeb8100efc6dc1a8685044199"},
{file = "pywinpty-2.0.9-cp39-none-win_amd64.whl", hash = "sha256:ba75ec55f46c9e17db961d26485b033deb20758b1731e8e208e1e8a387fcf70c"},
{file = "pywinpty-2.0.9.tar.gz", hash = "sha256:01b6400dd79212f50a2f01af1c65b781290ff39610853db99bf03962eb9a615f"},
]
PyYAML = [
{file = "PyYAML-6.0-cp310-cp310-macosx_10_9_x86_64.whl", hash = "sha256:d4db7c7aef085872ef65a8fd7d6d09a14ae91f691dec3e87ee5ee0539d516f53"},
{file = "PyYAML-6.0-cp310-cp310-macosx_11_0_arm64.whl", hash = "sha256:9df7ed3b3d2e0ecfe09e14741b857df43adb5a3ddadc919a2d94fbdf78fea53c"},
{file = "PyYAML-6.0-cp310-cp310-manylinux_2_17_aarch64.manylinux2014_aarch64.whl", hash = "sha256:77f396e6ef4c73fdc33a9157446466f1cff553d979bd00ecb64385760c6babdc"},
{file = "PyYAML-6.0-cp310-cp310-manylinux_2_17_s390x.manylinux2014_s390x.whl", hash = "sha256:a80a78046a72361de73f8f395f1f1e49f956c6be882eed58505a15f3e430962b"},
{file = "PyYAML-6.0-cp310-cp310-manylinux_2_5_x86_64.manylinux1_x86_64.manylinux_2_12_x86_64.manylinux2010_x86_64.whl", hash = "sha256:f84fbc98b019fef2ee9a1cb3ce93e3187a6df0b2538a651bfb890254ba9f90b5"},
{file = "PyYAML-6.0-cp310-cp310-win32.whl", hash = "sha256:2cd5df3de48857ed0544b34e2d40e9fac445930039f3cfe4bcc592a1f836d513"},
{file = "PyYAML-6.0-cp310-cp310-win_amd64.whl", hash = "sha256:daf496c58a8c52083df09b80c860005194014c3698698d1a57cbcfa182142a3a"},
{file = "PyYAML-6.0-cp311-cp311-macosx_10_9_x86_64.whl", hash = "sha256:d4b0ba9512519522b118090257be113b9468d804b19d63c71dbcf4a48fa32358"},
{file = "PyYAML-6.0-cp311-cp311-macosx_11_0_arm64.whl", hash = "sha256:81957921f441d50af23654aa6c5e5eaf9b06aba7f0a19c18a538dc7ef291c5a1"},
{file = "PyYAML-6.0-cp311-cp311-manylinux_2_17_aarch64.manylinux2014_aarch64.whl", hash = "sha256:afa17f5bc4d1b10afd4466fd3a44dc0e245382deca5b3c353d8b757f9e3ecb8d"},
{file = "PyYAML-6.0-cp311-cp311-manylinux_2_17_s390x.manylinux2014_s390x.whl", hash = "sha256:dbad0e9d368bb989f4515da330b88a057617d16b6a8245084f1b05400f24609f"},
{file = "PyYAML-6.0-cp311-cp311-manylinux_2_17_x86_64.manylinux2014_x86_64.whl", hash = "sha256:432557aa2c09802be39460360ddffd48156e30721f5e8d917f01d31694216782"},
{file = "PyYAML-6.0-cp311-cp311-win32.whl", hash = "sha256:bfaef573a63ba8923503d27530362590ff4f576c626d86a9fed95822a8255fd7"},
{file = "PyYAML-6.0-cp311-cp311-win_amd64.whl", hash = "sha256:01b45c0191e6d66c470b6cf1b9531a771a83c1c4208272ead47a3ae4f2f603bf"},
{file = "PyYAML-6.0-cp36-cp36m-macosx_10_9_x86_64.whl", hash = "sha256:897b80890765f037df3403d22bab41627ca8811ae55e9a722fd0392850ec4d86"},
{file = "PyYAML-6.0-cp36-cp36m-manylinux_2_17_aarch64.manylinux2014_aarch64.whl", hash = "sha256:50602afada6d6cbfad699b0c7bb50d5ccffa7e46a3d738092afddc1f9758427f"},
{file = "PyYAML-6.0-cp36-cp36m-manylinux_2_17_s390x.manylinux2014_s390x.whl", hash = "sha256:48c346915c114f5fdb3ead70312bd042a953a8ce5c7106d5bfb1a5254e47da92"},
{file = "PyYAML-6.0-cp36-cp36m-manylinux_2_5_x86_64.manylinux1_x86_64.manylinux_2_12_x86_64.manylinux2010_x86_64.whl", hash = "sha256:98c4d36e99714e55cfbaaee6dd5badbc9a1ec339ebfc3b1f52e293aee6bb71a4"},
{file = "PyYAML-6.0-cp36-cp36m-win32.whl", hash = "sha256:0283c35a6a9fbf047493e3a0ce8d79ef5030852c51e9d911a27badfde0605293"},
{file = "PyYAML-6.0-cp36-cp36m-win_amd64.whl", hash = "sha256:07751360502caac1c067a8132d150cf3d61339af5691fe9e87803040dbc5db57"},
{file = "PyYAML-6.0-cp37-cp37m-macosx_10_9_x86_64.whl", hash = "sha256:819b3830a1543db06c4d4b865e70ded25be52a2e0631ccd2f6a47a2822f2fd7c"},
{file = "PyYAML-6.0-cp37-cp37m-manylinux_2_17_aarch64.manylinux2014_aarch64.whl", hash = "sha256:473f9edb243cb1935ab5a084eb238d842fb8f404ed2193a915d1784b5a6b5fc0"},
{file = "PyYAML-6.0-cp37-cp37m-manylinux_2_17_s390x.manylinux2014_s390x.whl", hash = "sha256:0ce82d761c532fe4ec3f87fc45688bdd3a4c1dc5e0b4a19814b9009a29baefd4"},
{file = "PyYAML-6.0-cp37-cp37m-manylinux_2_5_x86_64.manylinux1_x86_64.manylinux_2_12_x86_64.manylinux2010_x86_64.whl", hash = "sha256:231710d57adfd809ef5d34183b8ed1eeae3f76459c18fb4a0b373ad56bedcdd9"},
{file = "PyYAML-6.0-cp37-cp37m-win32.whl", hash = "sha256:c5687b8d43cf58545ade1fe3e055f70eac7a5a1a0bf42824308d868289a95737"},
{file = "PyYAML-6.0-cp37-cp37m-win_amd64.whl", hash = "sha256:d15a181d1ecd0d4270dc32edb46f7cb7733c7c508857278d3d378d14d606db2d"},
{file = "PyYAML-6.0-cp38-cp38-macosx_10_9_x86_64.whl", hash = "sha256:0b4624f379dab24d3725ffde76559cff63d9ec94e1736b556dacdfebe5ab6d4b"},
{file = "PyYAML-6.0-cp38-cp38-manylinux_2_17_aarch64.manylinux2014_aarch64.whl", hash = "sha256:213c60cd50106436cc818accf5baa1aba61c0189ff610f64f4a3e8c6726218ba"},
{file = "PyYAML-6.0-cp38-cp38-manylinux_2_17_s390x.manylinux2014_s390x.whl", hash = "sha256:9fa600030013c4de8165339db93d182b9431076eb98eb40ee068700c9c813e34"},
{file = "PyYAML-6.0-cp38-cp38-manylinux_2_5_x86_64.manylinux1_x86_64.manylinux_2_12_x86_64.manylinux2010_x86_64.whl", hash = "sha256:277a0ef2981ca40581a47093e9e2d13b3f1fbbeffae064c1d21bfceba2030287"},
{file = "PyYAML-6.0-cp38-cp38-win32.whl", hash = "sha256:d4eccecf9adf6fbcc6861a38015c2a64f38b9d94838ac1810a9023a0609e1b78"},
{file = "PyYAML-6.0-cp38-cp38-win_amd64.whl", hash = "sha256:1e4747bc279b4f613a09eb64bba2ba602d8a6664c6ce6396a4d0cd413a50ce07"},
{file = "PyYAML-6.0-cp39-cp39-macosx_10_9_x86_64.whl", hash = "sha256:055d937d65826939cb044fc8c9b08889e8c743fdc6a32b33e2390f66013e449b"},
{file = "PyYAML-6.0-cp39-cp39-macosx_11_0_arm64.whl", hash = "sha256:e61ceaab6f49fb8bdfaa0f92c4b57bcfbea54c09277b1b4f7ac376bfb7a7c174"},
{file = "PyYAML-6.0-cp39-cp39-manylinux_2_17_aarch64.manylinux2014_aarch64.whl", hash = "sha256:d67d839ede4ed1b28a4e8909735fc992a923cdb84e618544973d7dfc71540803"},
{file = "PyYAML-6.0-cp39-cp39-manylinux_2_17_s390x.manylinux2014_s390x.whl", hash = "sha256:cba8c411ef271aa037d7357a2bc8f9ee8b58b9965831d9e51baf703280dc73d3"},
{file = "PyYAML-6.0-cp39-cp39-manylinux_2_5_x86_64.manylinux1_x86_64.manylinux_2_12_x86_64.manylinux2010_x86_64.whl", hash = "sha256:40527857252b61eacd1d9af500c3337ba8deb8fc298940291486c465c8b46ec0"},
{file = "PyYAML-6.0-cp39-cp39-win32.whl", hash = "sha256:b5b9eccad747aabaaffbc6064800670f0c297e52c12754eb1d976c57e4f74dcb"},
{file = "PyYAML-6.0-cp39-cp39-win_amd64.whl", hash = "sha256:b3d267842bf12586ba6c734f89d1f5b871df0273157918b0ccefa29deb05c21c"},
{file = "PyYAML-6.0.tar.gz", hash = "sha256:68fb519c14306fec9720a2a5b45bc9f0c8d1b9c72adf45c37baedfcd949c35a2"},
]
pyzmq = [
{file = "pyzmq-24.0.1-cp310-cp310-macosx_10_15_universal2.whl", hash = "sha256:28b119ba97129d3001673a697b7cce47fe6de1f7255d104c2f01108a5179a066"},
{file = "pyzmq-24.0.1-cp310-cp310-macosx_10_9_x86_64.whl", hash = "sha256:bcbebd369493d68162cddb74a9c1fcebd139dfbb7ddb23d8f8e43e6c87bac3a6"},
{file = "pyzmq-24.0.1-cp310-cp310-manylinux_2_17_aarch64.manylinux2014_aarch64.whl", hash = "sha256:ae61446166983c663cee42c852ed63899e43e484abf080089f771df4b9d272ef"},
{file = "pyzmq-24.0.1-cp310-cp310-manylinux_2_17_i686.manylinux2014_i686.whl", hash = "sha256:87f7ac99b15270db8d53f28c3c7b968612993a90a5cf359da354efe96f5372b4"},
{file = "pyzmq-24.0.1-cp310-cp310-manylinux_2_17_x86_64.manylinux2014_x86_64.whl", hash = "sha256:9dca7c3956b03b7663fac4d150f5e6d4f6f38b2462c1e9afd83bcf7019f17913"},
{file = "pyzmq-24.0.1-cp310-cp310-musllinux_1_1_aarch64.whl", hash = "sha256:8c78bfe20d4c890cb5580a3b9290f700c570e167d4cdcc55feec07030297a5e3"},
{file = "pyzmq-24.0.1-cp310-cp310-musllinux_1_1_i686.whl", hash = "sha256:48f721f070726cd2a6e44f3c33f8ee4b24188e4b816e6dd8ba542c8c3bb5b246"},
{file = "pyzmq-24.0.1-cp310-cp310-musllinux_1_1_x86_64.whl", hash = "sha256:afe1f3bc486d0ce40abb0a0c9adb39aed3bbac36ebdc596487b0cceba55c21c1"},
{file = "pyzmq-24.0.1-cp310-cp310-win32.whl", hash = "sha256:3e6192dbcefaaa52ed81be88525a54a445f4b4fe2fffcae7fe40ebb58bd06bfd"},
{file = "pyzmq-24.0.1-cp310-cp310-win_amd64.whl", hash = "sha256:86de64468cad9c6d269f32a6390e210ca5ada568c7a55de8e681ca3b897bb340"},
{file = "pyzmq-24.0.1-cp311-cp311-macosx_10_15_universal2.whl", hash = "sha256:838812c65ed5f7c2bd11f7b098d2e5d01685a3f6d1f82849423b570bae698c00"},
{file = "pyzmq-24.0.1-cp311-cp311-macosx_10_9_x86_64.whl", hash = "sha256:dfb992dbcd88d8254471760879d48fb20836d91baa90f181c957122f9592b3dc"},
{file = "pyzmq-24.0.1-cp311-cp311-manylinux_2_17_aarch64.manylinux2014_aarch64.whl", hash = "sha256:7abddb2bd5489d30ffeb4b93a428130886c171b4d355ccd226e83254fcb6b9ef"},
{file = "pyzmq-24.0.1-cp311-cp311-manylinux_2_17_i686.manylinux2014_i686.whl", hash = "sha256:94010bd61bc168c103a5b3b0f56ed3b616688192db7cd5b1d626e49f28ff51b3"},
{file = "pyzmq-24.0.1-cp311-cp311-manylinux_2_28_x86_64.whl", hash = "sha256:8242543c522d84d033fe79be04cb559b80d7eb98ad81b137ff7e0a9020f00ace"},
{file = "pyzmq-24.0.1-cp311-cp311-musllinux_1_1_aarch64.whl", hash = "sha256:ccb94342d13e3bf3ffa6e62f95b5e3f0bc6bfa94558cb37f4b3d09d6feb536ff"},
{file = "pyzmq-24.0.1-cp311-cp311-musllinux_1_1_i686.whl", hash = "sha256:6640f83df0ae4ae1104d4c62b77e9ef39be85ebe53f636388707d532bee2b7b8"},
{file = "pyzmq-24.0.1-cp311-cp311-musllinux_1_1_x86_64.whl", hash = "sha256:a180dbd5ea5d47c2d3b716d5c19cc3fb162d1c8db93b21a1295d69585bfddac1"},
{file = "pyzmq-24.0.1-cp311-cp311-win32.whl", hash = "sha256:624321120f7e60336be8ec74a172ae7fba5c3ed5bf787cc85f7e9986c9e0ebc2"},
{file = "pyzmq-24.0.1-cp311-cp311-win_amd64.whl", hash = "sha256:1724117bae69e091309ffb8255412c4651d3f6355560d9af312d547f6c5bc8b8"},
{file = "pyzmq-24.0.1-cp36-cp36m-macosx_10_9_x86_64.whl", hash = "sha256:15975747462ec49fdc863af906bab87c43b2491403ab37a6d88410635786b0f4"},
{file = "pyzmq-24.0.1-cp36-cp36m-manylinux_2_17_aarch64.manylinux2014_aarch64.whl", hash = "sha256:b947e264f0e77d30dcbccbb00f49f900b204b922eb0c3a9f0afd61aaa1cedc3d"},
{file = "pyzmq-24.0.1-cp36-cp36m-manylinux_2_5_i686.manylinux1_i686.whl", hash = "sha256:0ec91f1bad66f3ee8c6deb65fa1fe418e8ad803efedd69c35f3b5502f43bd1dc"},
{file = "pyzmq-24.0.1-cp36-cp36m-manylinux_2_5_x86_64.manylinux1_x86_64.whl", hash = "sha256:db03704b3506455d86ec72c3358a779e9b1d07b61220dfb43702b7b668edcd0d"},
{file = "pyzmq-24.0.1-cp36-cp36m-musllinux_1_1_aarch64.whl", hash = "sha256:e7e66b4e403c2836ac74f26c4b65d8ac0ca1eef41dfcac2d013b7482befaad83"},
{file = "pyzmq-24.0.1-cp36-cp36m-musllinux_1_1_i686.whl", hash = "sha256:7a23ccc1083c260fa9685c93e3b170baba45aeed4b524deb3f426b0c40c11639"},
{file = "pyzmq-24.0.1-cp36-cp36m-musllinux_1_1_x86_64.whl", hash = "sha256:fa0ae3275ef706c0309556061185dd0e4c4cd3b7d6f67ae617e4e677c7a41e2e"},
{file = "pyzmq-24.0.1-cp36-cp36m-win32.whl", hash = "sha256:f01de4ec083daebf210531e2cca3bdb1608dbbbe00a9723e261d92087a1f6ebc"},
{file = "pyzmq-24.0.1-cp36-cp36m-win_amd64.whl", hash = "sha256:de4217b9eb8b541cf2b7fde4401ce9d9a411cc0af85d410f9d6f4333f43640be"},
{file = "pyzmq-24.0.1-cp37-cp37m-macosx_10_9_x86_64.whl", hash = "sha256:78068e8678ca023594e4a0ab558905c1033b2d3e806a0ad9e3094e231e115a33"},
{file = "pyzmq-24.0.1-cp37-cp37m-manylinux_2_17_aarch64.manylinux2014_aarch64.whl", hash = "sha256:77c2713faf25a953c69cf0f723d1b7dd83827b0834e6c41e3fb3bbc6765914a1"},
{file = "pyzmq-24.0.1-cp37-cp37m-manylinux_2_5_i686.manylinux1_i686.whl", hash = "sha256:8bb4af15f305056e95ca1bd086239b9ebc6ad55e9f49076d27d80027f72752f6"},
{file = "pyzmq-24.0.1-cp37-cp37m-manylinux_2_5_x86_64.manylinux1_x86_64.whl", hash = "sha256:0f14cffd32e9c4c73da66db97853a6aeceaac34acdc0fae9e5bbc9370281864c"},
{file = "pyzmq-24.0.1-cp37-cp37m-musllinux_1_1_aarch64.whl", hash = "sha256:0108358dab8c6b27ff6b985c2af4b12665c1bc659648284153ee501000f5c107"},
{file = "pyzmq-24.0.1-cp37-cp37m-musllinux_1_1_i686.whl", hash = "sha256:d66689e840e75221b0b290b0befa86f059fb35e1ee6443bce51516d4d61b6b99"},
{file = "pyzmq-24.0.1-cp37-cp37m-musllinux_1_1_x86_64.whl", hash = "sha256:ae08ac90aa8fa14caafc7a6251bd218bf6dac518b7bff09caaa5e781119ba3f2"},
{file = "pyzmq-24.0.1-cp37-cp37m-win32.whl", hash = "sha256:8421aa8c9b45ea608c205db9e1c0c855c7e54d0e9c2c2f337ce024f6843cab3b"},
{file = "pyzmq-24.0.1-cp37-cp37m-win_amd64.whl", hash = "sha256:54d8b9c5e288362ec8595c1d98666d36f2070fd0c2f76e2b3c60fbad9bd76227"},
{file = "pyzmq-24.0.1-cp38-cp38-macosx_10_15_universal2.whl", hash = "sha256:acbd0a6d61cc954b9f535daaa9ec26b0a60a0d4353c5f7c1438ebc88a359a47e"},
{file = "pyzmq-24.0.1-cp38-cp38-macosx_10_9_x86_64.whl", hash = "sha256:47b11a729d61a47df56346283a4a800fa379ae6a85870d5a2e1e4956c828eedc"},
{file = "pyzmq-24.0.1-cp38-cp38-manylinux_2_12_i686.manylinux2010_i686.whl", hash = "sha256:abe6eb10122f0d746a0d510c2039ae8edb27bc9af29f6d1b05a66cc2401353ff"},
{file = "pyzmq-24.0.1-cp38-cp38-manylinux_2_12_x86_64.manylinux2010_x86_64.whl", hash = "sha256:07bec1a1b22dacf718f2c0e71b49600bb6a31a88f06527dfd0b5aababe3fa3f7"},
{file = "pyzmq-24.0.1-cp38-cp38-manylinux_2_17_aarch64.manylinux2014_aarch64.whl", hash = "sha256:f0d945a85b70da97ae86113faf9f1b9294efe66bd4a5d6f82f2676d567338b66"},
{file = "pyzmq-24.0.1-cp38-cp38-musllinux_1_1_aarch64.whl", hash = "sha256:1b7928bb7580736ffac5baf814097be342ba08d3cfdfb48e52773ec959572287"},
{file = "pyzmq-24.0.1-cp38-cp38-musllinux_1_1_i686.whl", hash = "sha256:b946da90dc2799bcafa682692c1d2139b2a96ec3c24fa9fc6f5b0da782675330"},
{file = "pyzmq-24.0.1-cp38-cp38-musllinux_1_1_x86_64.whl", hash = "sha256:c8840f064b1fb377cffd3efeaad2b190c14d4c8da02316dae07571252d20b31f"},
{file = "pyzmq-24.0.1-cp38-cp38-win32.whl", hash = "sha256:4854f9edc5208f63f0841c0c667260ae8d6846cfa233c479e29fdc85d42ebd58"},
{file = "pyzmq-24.0.1-cp38-cp38-win_amd64.whl", hash = "sha256:42d4f97b9795a7aafa152a36fe2ad44549b83a743fd3e77011136def512e6c2a"},
{file = "pyzmq-24.0.1-cp39-cp39-macosx_10_15_universal2.whl", hash = "sha256:52afb0ac962963fff30cf1be775bc51ae083ef4c1e354266ab20e5382057dd62"},
{file = "pyzmq-24.0.1-cp39-cp39-macosx_10_9_x86_64.whl", hash = "sha256:8bad8210ad4df68c44ff3685cca3cda448ee46e20d13edcff8909eba6ec01ca4"},
{file = "pyzmq-24.0.1-cp39-cp39-manylinux_2_12_i686.manylinux2010_i686.whl", hash = "sha256:dabf1a05318d95b1537fd61d9330ef4313ea1216eea128a17615038859da3b3b"},
{file = "pyzmq-24.0.1-cp39-cp39-manylinux_2_12_x86_64.manylinux2010_x86_64.whl", hash = "sha256:5bd3d7dfd9cd058eb68d9a905dec854f86649f64d4ddf21f3ec289341386c44b"},
{file = "pyzmq-24.0.1-cp39-cp39-manylinux_2_17_aarch64.manylinux2014_aarch64.whl", hash = "sha256:e8012bce6836d3f20a6c9599f81dfa945f433dab4dbd0c4917a6fb1f998ab33d"},
{file = "pyzmq-24.0.1-cp39-cp39-musllinux_1_1_aarch64.whl", hash = "sha256:c31805d2c8ade9b11feca4674eee2b9cce1fec3e8ddb7bbdd961a09dc76a80ea"},
{file = "pyzmq-24.0.1-cp39-cp39-musllinux_1_1_i686.whl", hash = "sha256:3104f4b084ad5d9c0cb87445cc8cfd96bba710bef4a66c2674910127044df209"},
{file = "pyzmq-24.0.1-cp39-cp39-musllinux_1_1_x86_64.whl", hash = "sha256:df0841f94928f8af9c7a1f0aaaffba1fb74607af023a152f59379c01c53aee58"},
{file = "pyzmq-24.0.1-cp39-cp39-win32.whl", hash = "sha256:a435ef8a3bd95c8a2d316d6e0ff70d0db524f6037411652803e118871d703333"},
{file = "pyzmq-24.0.1-cp39-cp39-win_amd64.whl", hash = "sha256:2032d9cb994ce3b4cba2b8dfae08c7e25bc14ba484c770d4d3be33c27de8c45b"},
{file = "pyzmq-24.0.1-pp37-pypy37_pp73-macosx_10_9_x86_64.whl", hash = "sha256:bb5635c851eef3a7a54becde6da99485eecf7d068bd885ac8e6d173c4ecd68b0"},
{file = "pyzmq-24.0.1-pp37-pypy37_pp73-manylinux_2_12_i686.manylinux2010_i686.whl", hash = "sha256:83ea1a398f192957cb986d9206ce229efe0ee75e3c6635baff53ddf39bd718d5"},
{file = "pyzmq-24.0.1-pp37-pypy37_pp73-manylinux_2_12_x86_64.manylinux2010_x86_64.whl", hash = "sha256:941fab0073f0a54dc33d1a0460cb04e0d85893cb0c5e1476c785000f8b359409"},
{file = "pyzmq-24.0.1-pp37-pypy37_pp73-manylinux_2_17_aarch64.manylinux2014_aarch64.whl", hash = "sha256:0e8f482c44ccb5884bf3f638f29bea0f8dc68c97e38b2061769c4cb697f6140d"},
{file = "pyzmq-24.0.1-pp37-pypy37_pp73-win_amd64.whl", hash = "sha256:613010b5d17906c4367609e6f52e9a2595e35d5cc27d36ff3f1b6fa6e954d944"},
{file = "pyzmq-24.0.1-pp38-pypy38_pp73-macosx_10_9_x86_64.whl", hash = "sha256:65c94410b5a8355cfcf12fd600a313efee46ce96a09e911ea92cf2acf6708804"},
{file = "pyzmq-24.0.1-pp38-pypy38_pp73-manylinux_2_12_i686.manylinux2010_i686.whl", hash = "sha256:20e7eeb1166087db636c06cae04a1ef59298627f56fb17da10528ab52a14c87f"},
{file = "pyzmq-24.0.1-pp38-pypy38_pp73-manylinux_2_12_x86_64.manylinux2010_x86_64.whl", hash = "sha256:a2712aee7b3834ace51738c15d9ee152cc5a98dc7d57dd93300461b792ab7b43"},
{file = "pyzmq-24.0.1-pp38-pypy38_pp73-manylinux_2_17_aarch64.manylinux2014_aarch64.whl", hash = "sha256:1a7c280185c4da99e0cc06c63bdf91f5b0b71deb70d8717f0ab870a43e376db8"},
{file = "pyzmq-24.0.1-pp38-pypy38_pp73-win_amd64.whl", hash = "sha256:858375573c9225cc8e5b49bfac846a77b696b8d5e815711b8d4ba3141e6e8879"},
{file = "pyzmq-24.0.1-pp39-pypy39_pp73-macosx_10_9_x86_64.whl", hash = "sha256:80093b595921eed1a2cead546a683b9e2ae7f4a4592bb2ab22f70d30174f003a"},
{file = "pyzmq-24.0.1-pp39-pypy39_pp73-manylinux_2_17_aarch64.manylinux2014_aarch64.whl", hash = "sha256:8f3f3154fde2b1ff3aa7b4f9326347ebc89c8ef425ca1db8f665175e6d3bd42f"},
{file = "pyzmq-24.0.1-pp39-pypy39_pp73-manylinux_2_17_i686.manylinux2014_i686.whl", hash = "sha256:abb756147314430bee5d10919b8493c0ccb109ddb7f5dfd2fcd7441266a25b75"},
{file = "pyzmq-24.0.1-pp39-pypy39_pp73-manylinux_2_17_x86_64.manylinux2014_x86_64.whl", hash = "sha256:44e706bac34e9f50779cb8c39f10b53a4d15aebb97235643d3112ac20bd577b4"},
{file = "pyzmq-24.0.1-pp39-pypy39_pp73-win_amd64.whl", hash = "sha256:687700f8371643916a1d2c61f3fdaa630407dd205c38afff936545d7b7466066"},
{file = "pyzmq-24.0.1.tar.gz", hash = "sha256:216f5d7dbb67166759e59b0479bca82b8acf9bed6015b526b8eb10143fb08e77"},
]
qtconsole = [
{file = "qtconsole-5.4.0-py3-none-any.whl", hash = "sha256:be13560c19bdb3b54ed9741a915aa701a68d424519e8341ac479a91209e694b2"},
{file = "qtconsole-5.4.0.tar.gz", hash = "sha256:57748ea2fd26320a0b77adba20131cfbb13818c7c96d83fafcb110ff55f58b35"},
]
QtPy = [
{file = "QtPy-2.3.0-py3-none-any.whl", hash = "sha256:8d6d544fc20facd27360ea189592e6135c614785f0dec0b4f083289de6beb408"},
{file = "QtPy-2.3.0.tar.gz", hash = "sha256:0603c9c83ccc035a4717a12908bf6bc6cb22509827ea2ec0e94c2da7c9ed57c5"},
]
requests = [
{file = "requests-2.28.1-py3-none-any.whl", hash = "sha256:8fefa2a1a1365bf5520aac41836fbee479da67864514bdb821f31ce07ce65349"},
{file = "requests-2.28.1.tar.gz", hash = "sha256:7c5599b102feddaa661c826c56ab4fee28bfd17f5abca1ebbe3e7f19d7c97983"},
]
requests-oauthlib = [
{file = "requests-oauthlib-1.3.1.tar.gz", hash = "sha256:75beac4a47881eeb94d5ea5d6ad31ef88856affe2332b9aafb52c6452ccf0d7a"},
{file = "requests_oauthlib-1.3.1-py2.py3-none-any.whl", hash = "sha256:2577c501a2fb8d05a304c09d090d6e47c306fef15809d102b327cf8364bddab5"},
]
rpy2 = [
{file = "rpy2-3.5.6-cp310-cp310-macosx_10_15_x86_64.whl", hash = "sha256:7f56bb66d95aaa59f52c82bdff3bb268a5745cc3779839ca1ac9aecfc411c17a"},
{file = "rpy2-3.5.6-cp37-cp37m-macosx_10_15_x86_64.whl", hash = "sha256:defff796b43fe230e1e698a1bc353b7a4a25d4d9de856ee1bcffd6831edc825c"},
{file = "rpy2-3.5.6-cp38-cp38-macosx_10_15_x86_64.whl", hash = "sha256:a3f74cd54bd2e21a94274ae5306113e24f8a15c034b15be931188939292b49f7"},
{file = "rpy2-3.5.6-cp39-cp39-macosx_10_15_x86_64.whl", hash = "sha256:6a2e4be001b98c00f084a561cfcf9ca52f938cd8fcd8acfa0fbfc6a8be219339"},
{file = "rpy2-3.5.6.tar.gz", hash = "sha256:3404f1031d2d8ff8a1002656ab8e394b8ac16dd34ca43af68deed102f396e771"},
]
rsa = [
{file = "rsa-4.9-py3-none-any.whl", hash = "sha256:90260d9058e514786967344d0ef75fa8727eed8a7d2e43ce9f4bcf1b536174f7"},
{file = "rsa-4.9.tar.gz", hash = "sha256:e38464a49c6c85d7f1351b0126661487a7e0a14a50f1675ec50eb34d4f20ef21"},
]
s3transfer = [
{file = "s3transfer-0.6.0-py3-none-any.whl", hash = "sha256:06176b74f3a15f61f1b4f25a1fc29a4429040b7647133a463da8fa5bd28d5ecd"},
{file = "s3transfer-0.6.0.tar.gz", hash = "sha256:2ed07d3866f523cc561bf4a00fc5535827981b117dd7876f036b0c1aca42c947"},
]
scikit-learn = [
{file = "scikit-learn-1.0.2.tar.gz", hash = "sha256:b5870959a5484b614f26d31ca4c17524b1b0317522199dc985c3b4256e030767"},
{file = "scikit_learn-1.0.2-cp310-cp310-macosx_10_13_x86_64.whl", hash = "sha256:da3c84694ff693b5b3194d8752ccf935a665b8b5edc33a283122f4273ca3e687"},
{file = "scikit_learn-1.0.2-cp310-cp310-macosx_12_0_arm64.whl", hash = "sha256:75307d9ea39236cad7eea87143155eea24d48f93f3a2f9389c817f7019f00705"},
{file = "scikit_learn-1.0.2-cp310-cp310-manylinux_2_17_aarch64.manylinux2014_aarch64.whl", hash = "sha256:f14517e174bd7332f1cca2c959e704696a5e0ba246eb8763e6c24876d8710049"},
{file = "scikit_learn-1.0.2-cp310-cp310-manylinux_2_17_x86_64.manylinux2014_x86_64.whl", hash = "sha256:d9aac97e57c196206179f674f09bc6bffcd0284e2ba95b7fe0b402ac3f986023"},
{file = "scikit_learn-1.0.2-cp310-cp310-win_amd64.whl", hash = "sha256:d93d4c28370aea8a7cbf6015e8a669cd5d69f856cc2aa44e7a590fb805bb5583"},
{file = "scikit_learn-1.0.2-cp37-cp37m-macosx_10_13_x86_64.whl", hash = "sha256:85260fb430b795d806251dd3bb05e6f48cdc777ac31f2bcf2bc8bbed3270a8f5"},
{file = "scikit_learn-1.0.2-cp37-cp37m-manylinux_2_12_i686.manylinux2010_i686.whl", hash = "sha256:a053a6a527c87c5c4fa7bf1ab2556fa16d8345cf99b6c5a19030a4a7cd8fd2c0"},
{file = "scikit_learn-1.0.2-cp37-cp37m-manylinux_2_12_x86_64.manylinux2010_x86_64.whl", hash = "sha256:245c9b5a67445f6f044411e16a93a554edc1efdcce94d3fc0bc6a4b9ac30b752"},
{file = "scikit_learn-1.0.2-cp37-cp37m-manylinux_2_17_aarch64.manylinux2014_aarch64.whl", hash = "sha256:158faf30684c92a78e12da19c73feff9641a928a8024b4fa5ec11d583f3d8a87"},
{file = "scikit_learn-1.0.2-cp37-cp37m-manylinux_2_17_i686.manylinux2014_i686.whl", hash = "sha256:08ef968f6b72033c16c479c966bf37ccd49b06ea91b765e1cc27afefe723920b"},
{file = "scikit_learn-1.0.2-cp37-cp37m-manylinux_2_17_x86_64.manylinux2014_x86_64.whl", hash = "sha256:16455ace947d8d9e5391435c2977178d0ff03a261571e67f627c8fee0f9d431a"},
{file = "scikit_learn-1.0.2-cp37-cp37m-win32.whl", hash = "sha256:2f3b453e0b149898577e301d27e098dfe1a36943f7bb0ad704d1e548efc3b448"},
{file = "scikit_learn-1.0.2-cp37-cp37m-win_amd64.whl", hash = "sha256:46f431ec59dead665e1370314dbebc99ead05e1c0a9df42f22d6a0e00044820f"},
{file = "scikit_learn-1.0.2-cp38-cp38-macosx_10_13_x86_64.whl", hash = "sha256:ff3fa8ea0e09e38677762afc6e14cad77b5e125b0ea70c9bba1992f02c93b028"},
{file = "scikit_learn-1.0.2-cp38-cp38-macosx_12_0_arm64.whl", hash = "sha256:9369b030e155f8188743eb4893ac17a27f81d28a884af460870c7c072f114243"},
{file = "scikit_learn-1.0.2-cp38-cp38-manylinux_2_12_i686.manylinux2010_i686.whl", hash = "sha256:7d6b2475f1c23a698b48515217eb26b45a6598c7b1840ba23b3c5acece658dbb"},
{file = "scikit_learn-1.0.2-cp38-cp38-manylinux_2_12_x86_64.manylinux2010_x86_64.whl", hash = "sha256:285db0352e635b9e3392b0b426bc48c3b485512d3b4ac3c7a44ec2a2ba061e66"},
{file = "scikit_learn-1.0.2-cp38-cp38-manylinux_2_17_aarch64.manylinux2014_aarch64.whl", hash = "sha256:5cb33fe1dc6f73dc19e67b264dbb5dde2a0539b986435fdd78ed978c14654830"},
{file = "scikit_learn-1.0.2-cp38-cp38-manylinux_2_17_i686.manylinux2014_i686.whl", hash = "sha256:b1391d1a6e2268485a63c3073111fe3ba6ec5145fc957481cfd0652be571226d"},
{file = "scikit_learn-1.0.2-cp38-cp38-manylinux_2_17_x86_64.manylinux2014_x86_64.whl", hash = "sha256:bc3744dabc56b50bec73624aeca02e0def06b03cb287de26836e730659c5d29c"},
{file = "scikit_learn-1.0.2-cp38-cp38-win32.whl", hash = "sha256:a999c9f02ff9570c783069f1074f06fe7386ec65b84c983db5aeb8144356a355"},
{file = "scikit_learn-1.0.2-cp38-cp38-win_amd64.whl", hash = "sha256:7626a34eabbf370a638f32d1a3ad50526844ba58d63e3ab81ba91e2a7c6d037e"},
{file = "scikit_learn-1.0.2-cp39-cp39-macosx_10_13_x86_64.whl", hash = "sha256:a90b60048f9ffdd962d2ad2fb16367a87ac34d76e02550968719eb7b5716fd10"},
{file = "scikit_learn-1.0.2-cp39-cp39-macosx_12_0_arm64.whl", hash = "sha256:7a93c1292799620df90348800d5ac06f3794c1316ca247525fa31169f6d25855"},
{file = "scikit_learn-1.0.2-cp39-cp39-manylinux_2_12_i686.manylinux2010_i686.whl", hash = "sha256:eabceab574f471de0b0eb3f2ecf2eee9f10b3106570481d007ed1c84ebf6d6a1"},
{file = "scikit_learn-1.0.2-cp39-cp39-manylinux_2_12_x86_64.manylinux2010_x86_64.whl", hash = "sha256:55f2f3a8414e14fbee03782f9fe16cca0f141d639d2b1c1a36779fa069e1db57"},
{file = "scikit_learn-1.0.2-cp39-cp39-manylinux_2_17_aarch64.manylinux2014_aarch64.whl", hash = "sha256:80095a1e4b93bd33261ef03b9bc86d6db649f988ea4dbcf7110d0cded8d7213d"},
{file = "scikit_learn-1.0.2-cp39-cp39-manylinux_2_17_i686.manylinux2014_i686.whl", hash = "sha256:fa38a1b9b38ae1fad2863eff5e0d69608567453fdfc850c992e6e47eb764e846"},
{file = "scikit_learn-1.0.2-cp39-cp39-manylinux_2_17_x86_64.manylinux2014_x86_64.whl", hash = "sha256:ff746a69ff2ef25f62b36338c615dd15954ddc3ab8e73530237dd73235e76d62"},
{file = "scikit_learn-1.0.2-cp39-cp39-win32.whl", hash = "sha256:e174242caecb11e4abf169342641778f68e1bfaba80cd18acd6bc84286b9a534"},
{file = "scikit_learn-1.0.2-cp39-cp39-win_amd64.whl", hash = "sha256:b54a62c6e318ddbfa7d22c383466d38d2ee770ebdb5ddb668d56a099f6eaf75f"},
]
scipy = [
{file = "scipy-1.8.1-cp310-cp310-macosx_10_9_x86_64.whl", hash = "sha256:65b77f20202599c51eb2771d11a6b899b97989159b7975e9b5259594f1d35ef4"},
{file = "scipy-1.8.1-cp310-cp310-macosx_12_0_arm64.whl", hash = "sha256:e013aed00ed776d790be4cb32826adb72799c61e318676172495383ba4570aa4"},
{file = "scipy-1.8.1-cp310-cp310-macosx_12_0_universal2.macosx_10_9_x86_64.whl", hash = "sha256:02b567e722d62bddd4ac253dafb01ce7ed8742cf8031aea030a41414b86c1125"},
{file = "scipy-1.8.1-cp310-cp310-manylinux_2_17_aarch64.manylinux2014_aarch64.whl", hash = "sha256:1da52b45ce1a24a4a22db6c157c38b39885a990a566748fc904ec9f03ed8c6ba"},
{file = "scipy-1.8.1-cp310-cp310-manylinux_2_17_x86_64.manylinux2014_x86_64.whl", hash = "sha256:a0aa8220b89b2e3748a2836fbfa116194378910f1a6e78e4675a095bcd2c762d"},
{file = "scipy-1.8.1-cp310-cp310-win_amd64.whl", hash = "sha256:4e53a55f6a4f22de01ffe1d2f016e30adedb67a699a310cdcac312806807ca81"},
{file = "scipy-1.8.1-cp38-cp38-macosx_10_9_x86_64.whl", hash = "sha256:28d2cab0c6ac5aa131cc5071a3a1d8e1366dad82288d9ec2ca44df78fb50e649"},
{file = "scipy-1.8.1-cp38-cp38-macosx_12_0_arm64.whl", hash = "sha256:6311e3ae9cc75f77c33076cb2794fb0606f14c8f1b1c9ff8ce6005ba2c283621"},
{file = "scipy-1.8.1-cp38-cp38-macosx_12_0_universal2.macosx_10_9_x86_64.whl", hash = "sha256:3b69b90c9419884efeffaac2c38376d6ef566e6e730a231e15722b0ab58f0328"},
{file = "scipy-1.8.1-cp38-cp38-manylinux_2_12_i686.manylinux2010_i686.whl", hash = "sha256:6cc6b33139eb63f30725d5f7fa175763dc2df6a8f38ddf8df971f7c345b652dc"},
{file = "scipy-1.8.1-cp38-cp38-manylinux_2_17_aarch64.manylinux2014_aarch64.whl", hash = "sha256:9c4e3ae8a716c8b3151e16c05edb1daf4cb4d866caa385e861556aff41300c14"},
{file = "scipy-1.8.1-cp38-cp38-manylinux_2_17_x86_64.manylinux2014_x86_64.whl", hash = "sha256:23b22fbeef3807966ea42d8163322366dd89da9bebdc075da7034cee3a1441ca"},
{file = "scipy-1.8.1-cp38-cp38-win32.whl", hash = "sha256:4b93ec6f4c3c4d041b26b5f179a6aab8f5045423117ae7a45ba9710301d7e462"},
{file = "scipy-1.8.1-cp38-cp38-win_amd64.whl", hash = "sha256:70ebc84134cf0c504ce6a5f12d6db92cb2a8a53a49437a6bb4edca0bc101f11c"},
{file = "scipy-1.8.1-cp39-cp39-macosx_10_9_x86_64.whl", hash = "sha256:f3e7a8867f307e3359cc0ed2c63b61a1e33a19080f92fe377bc7d49f646f2ec1"},
{file = "scipy-1.8.1-cp39-cp39-macosx_12_0_arm64.whl", hash = "sha256:2ef0fbc8bcf102c1998c1f16f15befe7cffba90895d6e84861cd6c6a33fb54f6"},
{file = "scipy-1.8.1-cp39-cp39-macosx_12_0_universal2.macosx_10_9_x86_64.whl", hash = "sha256:83606129247e7610b58d0e1e93d2c5133959e9cf93555d3c27e536892f1ba1f2"},
{file = "scipy-1.8.1-cp39-cp39-manylinux_2_12_i686.manylinux2010_i686.whl", hash = "sha256:93d07494a8900d55492401917a119948ed330b8c3f1d700e0b904a578f10ead4"},
{file = "scipy-1.8.1-cp39-cp39-manylinux_2_17_aarch64.manylinux2014_aarch64.whl", hash = "sha256:d3b3c8924252caaffc54d4a99f1360aeec001e61267595561089f8b5900821bb"},
{file = "scipy-1.8.1-cp39-cp39-manylinux_2_17_x86_64.manylinux2014_x86_64.whl", hash = "sha256:70de2f11bf64ca9921fda018864c78af7147025e467ce9f4a11bc877266900a6"},
{file = "scipy-1.8.1-cp39-cp39-win32.whl", hash = "sha256:1166514aa3bbf04cb5941027c6e294a000bba0cf00f5cdac6c77f2dad479b434"},
{file = "scipy-1.8.1-cp39-cp39-win_amd64.whl", hash = "sha256:9dd4012ac599a1e7eb63c114d1eee1bcfc6dc75a29b589ff0ad0bb3d9412034f"},
{file = "scipy-1.8.1.tar.gz", hash = "sha256:9e3fb1b0e896f14a85aa9a28d5f755daaeeb54c897b746df7a55ccb02b340f33"},
{file = "scipy-1.9.3-cp310-cp310-macosx_10_9_x86_64.whl", hash = "sha256:1884b66a54887e21addf9c16fb588720a8309a57b2e258ae1c7986d4444d3bc0"},
{file = "scipy-1.9.3-cp310-cp310-macosx_12_0_arm64.whl", hash = "sha256:83b89e9586c62e787f5012e8475fbb12185bafb996a03257e9675cd73d3736dd"},
{file = "scipy-1.9.3-cp310-cp310-manylinux_2_17_aarch64.manylinux2014_aarch64.whl", hash = "sha256:1a72d885fa44247f92743fc20732ae55564ff2a519e8302fb7e18717c5355a8b"},
{file = "scipy-1.9.3-cp310-cp310-manylinux_2_17_x86_64.manylinux2014_x86_64.whl", hash = "sha256:d01e1dd7b15bd2449c8bfc6b7cc67d630700ed655654f0dfcf121600bad205c9"},
{file = "scipy-1.9.3-cp310-cp310-win_amd64.whl", hash = "sha256:68239b6aa6f9c593da8be1509a05cb7f9efe98b80f43a5861cd24c7557e98523"},
{file = "scipy-1.9.3-cp311-cp311-macosx_10_9_x86_64.whl", hash = "sha256:b41bc822679ad1c9a5f023bc93f6d0543129ca0f37c1ce294dd9d386f0a21096"},
{file = "scipy-1.9.3-cp311-cp311-macosx_12_0_arm64.whl", hash = "sha256:90453d2b93ea82a9f434e4e1cba043e779ff67b92f7a0e85d05d286a3625df3c"},
{file = "scipy-1.9.3-cp311-cp311-manylinux_2_17_aarch64.manylinux2014_aarch64.whl", hash = "sha256:83c06e62a390a9167da60bedd4575a14c1f58ca9dfde59830fc42e5197283dab"},
{file = "scipy-1.9.3-cp311-cp311-manylinux_2_17_x86_64.manylinux2014_x86_64.whl", hash = "sha256:abaf921531b5aeaafced90157db505e10345e45038c39e5d9b6c7922d68085cb"},
{file = "scipy-1.9.3-cp311-cp311-win_amd64.whl", hash = "sha256:06d2e1b4c491dc7d8eacea139a1b0b295f74e1a1a0f704c375028f8320d16e31"},
{file = "scipy-1.9.3-cp38-cp38-macosx_10_9_x86_64.whl", hash = "sha256:5a04cd7d0d3eff6ea4719371cbc44df31411862b9646db617c99718ff68d4840"},
{file = "scipy-1.9.3-cp38-cp38-macosx_12_0_arm64.whl", hash = "sha256:545c83ffb518094d8c9d83cce216c0c32f8c04aaf28b92cc8283eda0685162d5"},
{file = "scipy-1.9.3-cp38-cp38-manylinux_2_17_aarch64.manylinux2014_aarch64.whl", hash = "sha256:0d54222d7a3ba6022fdf5773931b5d7c56efe41ede7f7128c7b1637700409108"},
{file = "scipy-1.9.3-cp38-cp38-manylinux_2_17_x86_64.manylinux2014_x86_64.whl", hash = "sha256:cff3a5295234037e39500d35316a4c5794739433528310e117b8a9a0c76d20fc"},
{file = "scipy-1.9.3-cp38-cp38-win_amd64.whl", hash = "sha256:2318bef588acc7a574f5bfdff9c172d0b1bf2c8143d9582e05f878e580a3781e"},
{file = "scipy-1.9.3-cp39-cp39-macosx_10_9_x86_64.whl", hash = "sha256:d644a64e174c16cb4b2e41dfea6af722053e83d066da7343f333a54dae9bc31c"},
{file = "scipy-1.9.3-cp39-cp39-macosx_12_0_arm64.whl", hash = "sha256:da8245491d73ed0a994ed9c2e380fd058ce2fa8a18da204681f2fe1f57f98f95"},
{file = "scipy-1.9.3-cp39-cp39-manylinux_2_17_aarch64.manylinux2014_aarch64.whl", hash = "sha256:4db5b30849606a95dcf519763dd3ab6fe9bd91df49eba517359e450a7d80ce2e"},
{file = "scipy-1.9.3-cp39-cp39-manylinux_2_17_x86_64.manylinux2014_x86_64.whl", hash = "sha256:c68db6b290cbd4049012990d7fe71a2abd9ffbe82c0056ebe0f01df8be5436b0"},
{file = "scipy-1.9.3-cp39-cp39-win_amd64.whl", hash = "sha256:5b88e6d91ad9d59478fafe92a7c757d00c59e3bdc3331be8ada76a4f8d683f58"},
{file = "scipy-1.9.3.tar.gz", hash = "sha256:fbc5c05c85c1a02be77b1ff591087c83bc44579c6d2bd9fb798bb64ea5e1a027"},
]
seaborn = [
{file = "seaborn-0.12.1-py3-none-any.whl", hash = "sha256:a9eb39cba095fcb1e4c89a7fab1c57137d70a715a7f2eefcd41c9913c4d4ed65"},
{file = "seaborn-0.12.1.tar.gz", hash = "sha256:bb1eb1d51d3097368c187c3ef089c0288ec1fe8aa1c69fb324c68aa1d02df4c1"},
]
Send2Trash = [
{file = "Send2Trash-1.8.0-py3-none-any.whl", hash = "sha256:f20eaadfdb517eaca5ce077640cb261c7d2698385a6a0f072a4a5447fd49fa08"},
{file = "Send2Trash-1.8.0.tar.gz", hash = "sha256:d2c24762fd3759860a0aff155e45871447ea58d2be6bdd39b5c8f966a0c99c2d"},
]
setuptools = [
{file = "setuptools-65.6.3-py3-none-any.whl", hash = "sha256:57f6f22bde4e042978bcd50176fdb381d7c21a9efa4041202288d3737a0c6a54"},
{file = "setuptools-65.6.3.tar.gz", hash = "sha256:a7620757bf984b58deaf32fc8a4577a9bbc0850cf92c20e1ce41c38c19e5fb75"},
]
setuptools-scm = [
{file = "setuptools_scm-7.0.5-py3-none-any.whl", hash = "sha256:7930f720905e03ccd1e1d821db521bff7ec2ac9cf0ceb6552dd73d24a45d3b02"},
{file = "setuptools_scm-7.0.5.tar.gz", hash = "sha256:031e13af771d6f892b941adb6ea04545bbf91ebc5ce68c78aaf3fff6e1fb4844"},
]
shap = [
{file = "shap-0.40.0-cp36-cp36m-macosx_10_9_x86_64.whl", hash = "sha256:8bb8b4c01bd33592412dae5246286f62efbb24ad774b63e59b8b16969b915b6d"},
{file = "shap-0.40.0-cp36-cp36m-manylinux2010_i686.whl", hash = "sha256:d2844acab55e18bcb3d691237a720301223a38805e6e43752e6717f3a8b2cc28"},
{file = "shap-0.40.0-cp36-cp36m-manylinux2010_x86_64.whl", hash = "sha256:e7dd3040b0ec91bc9f477a354973d231d3a6beebe2fa7a5c6a565a79ba7746e8"},
{file = "shap-0.40.0-cp36-cp36m-win32.whl", hash = "sha256:86ea1466244c7e0d0c5dd91d26a90e0b645f5c9d7066810462a921263463529b"},
{file = "shap-0.40.0-cp36-cp36m-win_amd64.whl", hash = "sha256:bbf0cfa30cd8c51f8830d3f25c3881b9949e062124cd0d0b3d8efdc7e0cf5136"},
{file = "shap-0.40.0-cp37-cp37m-macosx_10_9_x86_64.whl", hash = "sha256:3d3c5ace8bd5222b455fa5650f9043146e19d80d701f95b25c4c5fb81f628547"},
{file = "shap-0.40.0-cp37-cp37m-manylinux2010_i686.whl", hash = "sha256:18b4ca36a43409b784dc76810f76aaa504c467eac17fa89ef5ee330cb460b2b7"},
{file = "shap-0.40.0-cp37-cp37m-manylinux2010_x86_64.whl", hash = "sha256:dbb1ec9b2c05c3939425529437c5f3cfba7a3929fed0e820fb84a42e82358cdd"},
{file = "shap-0.40.0-cp37-cp37m-win32.whl", hash = "sha256:0d12f7d86481afd000d5f144c10cadb31d52fb1f77f68659472d6f6d89f7843b"},
{file = "shap-0.40.0-cp37-cp37m-win_amd64.whl", hash = "sha256:dbd07e48fc7f4d5916f6cdd9dbb8d29b7711a265cc9beac92e7d4a4d9e738bc7"},
{file = "shap-0.40.0-cp38-cp38-macosx_10_9_x86_64.whl", hash = "sha256:399325caecc7306eb7de17ac19aa797abbf2fcda47d2bb4588d9492adb2dce65"},
{file = "shap-0.40.0-cp38-cp38-manylinux2010_i686.whl", hash = "sha256:4ec50bd0aa24efe1add177371b8b62080484efb87c6dbcf321895c5a08cf68d6"},
{file = "shap-0.40.0-cp38-cp38-manylinux2010_x86_64.whl", hash = "sha256:e2b5f2d3cac82de0c49afde6529bebb6d5b20334325640267bf25dce572175a1"},
{file = "shap-0.40.0-cp38-cp38-win32.whl", hash = "sha256:ba06256568747aaab9ad0091306550bfe826c1f195bf2cf57b405ae1de16faed"},
{file = "shap-0.40.0-cp38-cp38-win_amd64.whl", hash = "sha256:fb1b325a55fdf58061d332ed3308d44162084d4cb5f53f2c7774ce943d60b0ad"},
{file = "shap-0.40.0-cp39-cp39-macosx_10_9_x86_64.whl", hash = "sha256:f282fa12ca6fc594bcadca389309d733f73fe071e29ab49cb6e51beaa8b01a1a"},
{file = "shap-0.40.0-cp39-cp39-manylinux2010_i686.whl", hash = "sha256:2e72a47407f010f845b3ed6cb4f5160f0907ec8ab97df2bca164ebcb263b4205"},
{file = "shap-0.40.0-cp39-cp39-manylinux2010_x86_64.whl", hash = "sha256:649c905f9a4629839142e1769235989fb61730eb789a70d27ec7593eb02186a7"},
{file = "shap-0.40.0-cp39-cp39-win32.whl", hash = "sha256:5c220632ba57426d450dcc8ca43c55f657fe18e18f5d223d2a4e2aa02d905047"},
{file = "shap-0.40.0-cp39-cp39-win_amd64.whl", hash = "sha256:46e7084ce021eea450306bf7434adaead53921fd32504f04d1804569839e2979"},
{file = "shap-0.40.0.tar.gz", hash = "sha256:add0a27bb4eb57f0a363c2c4265b1a1328a8c15b01c14c7d432d9cc387dd8579"},
]
six = [
{file = "six-1.16.0-py2.py3-none-any.whl", hash = "sha256:8abb2f1d86890a2dfb989f9a77cfcfd3e47c2a354b01111771326f8aa26e0254"},
{file = "six-1.16.0.tar.gz", hash = "sha256:1e61c37477a1626458e36f7b1d82aa5c9b094fa4802892072e49de9c60c4c926"},
]
slicer = [
{file = "slicer-0.0.7-py3-none-any.whl", hash = "sha256:0b94faa5251c0f23782c03f7b7eedda91d80144059645f452c4bc80fab875976"},
{file = "slicer-0.0.7.tar.gz", hash = "sha256:f5d5f7b45f98d155b9c0ba6554fa9770c6b26d5793a3e77a1030fb56910ebeec"},
]
smart-open = [
{file = "smart_open-5.2.1-py3-none-any.whl", hash = "sha256:71d14489da58b60ce12fc3ecb823facc59a8b23cd1b58edb97175640350d3a62"},
{file = "smart_open-5.2.1.tar.gz", hash = "sha256:75abf758717a92a8f53aa96953f0c245c8cedf8e1e4184903db3659b419d4c17"},
]
sniffio = [
{file = "sniffio-1.3.0-py3-none-any.whl", hash = "sha256:eecefdce1e5bbfb7ad2eeaabf7c1eeb404d7757c379bd1f7e5cce9d8bf425384"},
{file = "sniffio-1.3.0.tar.gz", hash = "sha256:e60305c5e5d314f5389259b7f22aaa33d8f7dee49763119234af3755c55b9101"},
]
snowballstemmer = [
{file = "snowballstemmer-2.2.0-py2.py3-none-any.whl", hash = "sha256:c8e1716e83cc398ae16824e5572ae04e0d9fc2c6b985fb0f900f5f0c96ecba1a"},
{file = "snowballstemmer-2.2.0.tar.gz", hash = "sha256:09b16deb8547d3412ad7b590689584cd0fe25ec8db3be37788be3810cbf19cb1"},
]
sortedcontainers = [
{file = "sortedcontainers-2.4.0-py2.py3-none-any.whl", hash = "sha256:a163dcaede0f1c021485e957a39245190e74249897e2ae4b2aa38595db237ee0"},
{file = "sortedcontainers-2.4.0.tar.gz", hash = "sha256:25caa5a06cc30b6b83d11423433f65d1f9d76c4c6a0c90e3379eaa43b9bfdb88"},
]
soupsieve = [
{file = "soupsieve-2.3.2.post1-py3-none-any.whl", hash = "sha256:3b2503d3c7084a42b1ebd08116e5f81aadfaea95863628c80a3b774a11b7c759"},
{file = "soupsieve-2.3.2.post1.tar.gz", hash = "sha256:fc53893b3da2c33de295667a0e19f078c14bf86544af307354de5fcf12a3f30d"},
]
spacy = [
{file = "spacy-3.4.3-cp310-cp310-macosx_10_9_x86_64.whl", hash = "sha256:e546b314f619502ae03e5eb9a0cfd09ca7a9db265bcdd8a3af83cfb0f1432e55"},
{file = "spacy-3.4.3-cp310-cp310-macosx_11_0_arm64.whl", hash = "sha256:ded11aa8966236aab145b4d2d024b3eb61ac50078362d77d9ed7d8c240ef0f4a"},
{file = "spacy-3.4.3-cp310-cp310-manylinux_2_17_aarch64.manylinux2014_aarch64.whl", hash = "sha256:462e141f514d78cff85685b5b12eb8cadac0bad2f7820149cbe18d03ccb2e59c"},
{file = "spacy-3.4.3-cp310-cp310-manylinux_2_17_x86_64.manylinux2014_x86_64.whl", hash = "sha256:c966d25b3f3e49f5de08546b3638928f49678c365cbbebd0eec28f74e0adb539"},
{file = "spacy-3.4.3-cp310-cp310-win_amd64.whl", hash = "sha256:2ddba486c4c981abe6f1e3fd72648dc8811966e5f0e05808f9c9fab155c388d7"},
{file = "spacy-3.4.3-cp311-cp311-macosx_10_9_x86_64.whl", hash = "sha256:3c87117dd335fba44d1c0d77602f0763c3addf4e7ef9bdbe9a495466c3484c69"},
{file = "spacy-3.4.3-cp311-cp311-macosx_11_0_arm64.whl", hash = "sha256:3ce3938720f48eaeeb360a7f623f15a0d9efd1a688d5d740e3d4cdcd6f6da8a3"},
{file = "spacy-3.4.3-cp311-cp311-manylinux_2_17_aarch64.manylinux2014_aarch64.whl", hash = "sha256:6ad6bf5e4e7f0bc2ef94b7ff6fe59abd766f74c192bca2f17430a3b3cd5bda5a"},
{file = "spacy-3.4.3-cp311-cp311-manylinux_2_17_x86_64.manylinux2014_x86_64.whl", hash = "sha256:f6644c678bd7af567c6ce679f71d64119282e7d6f1a6f787162a91be3ea39333"},
{file = "spacy-3.4.3-cp311-cp311-win_amd64.whl", hash = "sha256:e6b871de8857a6820140358db3943180fdbe03d44ed792155cee6cb95f4ac4ea"},
{file = "spacy-3.4.3-cp36-cp36m-manylinux_2_17_aarch64.manylinux2014_aarch64.whl", hash = "sha256:d211c2b8894354bf8d961af9a9dcab38f764e1dcddd7b80760e438fcd4c9fe43"},
{file = "spacy-3.4.3-cp36-cp36m-manylinux_2_17_x86_64.manylinux2014_x86_64.whl", hash = "sha256:2ea41f9de30435456235c4182d8bc2eb54a0a64719856e66e780350bb4c8cfbe"},
{file = "spacy-3.4.3-cp36-cp36m-win_amd64.whl", hash = "sha256:afaf6e716cbac4a0fbfa9e9bf95decff223936597ddd03ea869118a7576aa1b1"},
{file = "spacy-3.4.3-cp37-cp37m-macosx_10_9_x86_64.whl", hash = "sha256:7115da36369b3c537caf2fe08e0b45528bd091c7f56ba3580af1e6fdfa9b1081"},
{file = "spacy-3.4.3-cp37-cp37m-manylinux_2_17_aarch64.manylinux2014_aarch64.whl", hash = "sha256:3b3e629c889cac9656151286ec1232c6a948ce0d44a39f1ef5e60fed4f183a10"},
{file = "spacy-3.4.3-cp37-cp37m-manylinux_2_17_x86_64.manylinux2014_x86_64.whl", hash = "sha256:9277cd0fcb96ee5dd885f7e96c639f21afd96198d61ca32100446afbff4dfbef"},
{file = "spacy-3.4.3-cp37-cp37m-win_amd64.whl", hash = "sha256:a36bd06a5a147350e5f5f6903c4777296c37b18199251bb41056c3a73aa4494f"},
{file = "spacy-3.4.3-cp38-cp38-macosx_10_9_x86_64.whl", hash = "sha256:bdafcd0823ca804c39d0bed9e677eb7d0235b1259563d0fd4d3a201c71108af8"},
{file = "spacy-3.4.3-cp38-cp38-macosx_11_0_arm64.whl", hash = "sha256:0cdc23a48e6543402b4c56ebf2d36246001175c29fd56d3081efcec684651abc"},
{file = "spacy-3.4.3-cp38-cp38-manylinux_2_17_aarch64.manylinux2014_aarch64.whl", hash = "sha256:455c2fbd1de24b6fe34fa121d87525134d7498f9f458ebc8274d7940b473999e"},
{file = "spacy-3.4.3-cp38-cp38-manylinux_2_17_x86_64.manylinux2014_x86_64.whl", hash = "sha256:d1c85279fbb6b75d7fb8d7c59c2b734502e51271cad90926e8df1d21b67da5aa"},
{file = "spacy-3.4.3-cp38-cp38-win_amd64.whl", hash = "sha256:5c0d65f39184f522b4e67b965a42d121a3b2d799362682fe8847b64b0ce5bc7c"},
{file = "spacy-3.4.3-cp39-cp39-macosx_10_9_x86_64.whl", hash = "sha256:a7b97ec21ed773edb2479ae5d6c7686b8034f418df6bccd9218f5c3c2b7cf888"},
{file = "spacy-3.4.3-cp39-cp39-macosx_11_0_arm64.whl", hash = "sha256:36a9a506029842795099fd97ad95f0da2845c319020fcc7164cbf33650726f83"},
{file = "spacy-3.4.3-cp39-cp39-manylinux_2_17_aarch64.manylinux2014_aarch64.whl", hash = "sha256:5ab293eb1423fa05c7ee71b2fedda57c2b4a4ca8dc054ce678809457287b01dc"},
{file = "spacy-3.4.3-cp39-cp39-manylinux_2_17_x86_64.manylinux2014_x86_64.whl", hash = "sha256:bb6d0f185126decc8392cde7d28eb6e85ba4bca15424713288cccc49c2a3c52b"},
{file = "spacy-3.4.3-cp39-cp39-win_amd64.whl", hash = "sha256:676ab9ab2cf94ba48caa306f185a166e85bd35b388ec24512c8ba7dfcbc7517e"},
{file = "spacy-3.4.3.tar.gz", hash = "sha256:22698cf5175e2b697e82699fcccee3092b42137a57d352df208d71657fd693bb"},
]
spacy-legacy = [
{file = "spacy-legacy-3.0.10.tar.gz", hash = "sha256:16104595d8ab1b7267f817a449ad1f986eb1f2a2edf1050748f08739a479679a"},
{file = "spacy_legacy-3.0.10-py2.py3-none-any.whl", hash = "sha256:8526a54d178dee9b7f218d43e5c21362c59056c5da23380b319b56043e9211f3"},
]
spacy-loggers = [
{file = "spacy-loggers-1.0.3.tar.gz", hash = "sha256:00f6fd554db9fd1fde6501b23e1f0e72f6eef14bb1e7fc15456d11d1d2de92ca"},
{file = "spacy_loggers-1.0.3-py3-none-any.whl", hash = "sha256:f74386b390a023f9615dcb499b7b4ad63338236a8187f0ec4dfe265a9f665ee8"},
]
sparse = [
{file = "sparse-0.13.0-py2.py3-none-any.whl", hash = "sha256:95ed0b649a0663b1488756ad4cf242b0a9bb2c9a25bc752a7c6ca9fbe8258966"},
{file = "sparse-0.13.0.tar.gz", hash = "sha256:685dc994aa770ee1b23f2d5392819c8429f27958771f8dceb2c4fb80210d5915"},
]
Sphinx = [
{file = "Sphinx-5.3.0.tar.gz", hash = "sha256:51026de0a9ff9fc13c05d74913ad66047e104f56a129ff73e174eb5c3ee794b5"},
{file = "sphinx-5.3.0-py3-none-any.whl", hash = "sha256:060ca5c9f7ba57a08a1219e547b269fadf125ae25b06b9fa7f66768efb652d6d"},
]
sphinx-copybutton = [
{file = "sphinx-copybutton-0.5.0.tar.gz", hash = "sha256:a0c059daadd03c27ba750da534a92a63e7a36a7736dcf684f26ee346199787f6"},
{file = "sphinx_copybutton-0.5.0-py3-none-any.whl", hash = "sha256:9684dec7434bd73f0eea58dda93f9bb879d24bff2d8b187b1f2ec08dfe7b5f48"},
]
sphinx_design = [
{file = "sphinx_design-0.3.0-py3-none-any.whl", hash = "sha256:823c1dd74f31efb3285ec2f1254caefed29d762a40cd676f58413a1e4ed5cc96"},
{file = "sphinx_design-0.3.0.tar.gz", hash = "sha256:7183fa1fae55b37ef01bda5125a21ee841f5bbcbf59a35382be598180c4cefba"},
]
sphinx-rtd-theme = [
{file = "sphinx_rtd_theme-1.1.1-py2.py3-none-any.whl", hash = "sha256:31faa07d3e97c8955637fc3f1423a5ab2c44b74b8cc558a51498c202ce5cbda7"},
{file = "sphinx_rtd_theme-1.1.1.tar.gz", hash = "sha256:6146c845f1e1947b3c3dd4432c28998a1693ccc742b4f9ad7c63129f0757c103"},
]
sphinxcontrib-applehelp = [
{file = "sphinxcontrib-applehelp-1.0.2.tar.gz", hash = "sha256:a072735ec80e7675e3f432fcae8610ecf509c5f1869d17e2eecff44389cdbc58"},
{file = "sphinxcontrib_applehelp-1.0.2-py2.py3-none-any.whl", hash = "sha256:806111e5e962be97c29ec4c1e7fe277bfd19e9652fb1a4392105b43e01af885a"},
]
sphinxcontrib-devhelp = [
{file = "sphinxcontrib-devhelp-1.0.2.tar.gz", hash = "sha256:ff7f1afa7b9642e7060379360a67e9c41e8f3121f2ce9164266f61b9f4b338e4"},
{file = "sphinxcontrib_devhelp-1.0.2-py2.py3-none-any.whl", hash = "sha256:8165223f9a335cc1af7ffe1ed31d2871f325254c0423bc0c4c7cd1c1e4734a2e"},
]
sphinxcontrib-googleanalytics = []
sphinxcontrib-htmlhelp = [
{file = "sphinxcontrib-htmlhelp-2.0.0.tar.gz", hash = "sha256:f5f8bb2d0d629f398bf47d0d69c07bc13b65f75a81ad9e2f71a63d4b7a2f6db2"},
{file = "sphinxcontrib_htmlhelp-2.0.0-py2.py3-none-any.whl", hash = "sha256:d412243dfb797ae3ec2b59eca0e52dac12e75a241bf0e4eb861e450d06c6ed07"},
]
sphinxcontrib-jsmath = [
{file = "sphinxcontrib-jsmath-1.0.1.tar.gz", hash = "sha256:a9925e4a4587247ed2191a22df5f6970656cb8ca2bd6284309578f2153e0c4b8"},
{file = "sphinxcontrib_jsmath-1.0.1-py2.py3-none-any.whl", hash = "sha256:2ec2eaebfb78f3f2078e73666b1415417a116cc848b72e5172e596c871103178"},
]
sphinxcontrib-qthelp = [
{file = "sphinxcontrib-qthelp-1.0.3.tar.gz", hash = "sha256:4c33767ee058b70dba89a6fc5c1892c0d57a54be67ddd3e7875a18d14cba5a72"},
{file = "sphinxcontrib_qthelp-1.0.3-py2.py3-none-any.whl", hash = "sha256:bd9fc24bcb748a8d51fd4ecaade681350aa63009a347a8c14e637895444dfab6"},
]
sphinxcontrib-serializinghtml = [
{file = "sphinxcontrib-serializinghtml-1.1.5.tar.gz", hash = "sha256:aa5f6de5dfdf809ef505c4895e51ef5c9eac17d0f287933eb49ec495280b6952"},
{file = "sphinxcontrib_serializinghtml-1.1.5-py2.py3-none-any.whl", hash = "sha256:352a9a00ae864471d3a7ead8d7d79f5fc0b57e8b3f95e9867eb9eb28999b92fd"},
]
srsly = [
{file = "srsly-2.4.5-cp310-cp310-macosx_10_9_x86_64.whl", hash = "sha256:8fed31ef8acbb5fead2152824ef39e12d749fcd254968689ba5991dd257b63b4"},
{file = "srsly-2.4.5-cp310-cp310-macosx_11_0_arm64.whl", hash = "sha256:04d0b4cd91e098cdac12d2c28e256b1181ba98bcd00e460b8e42dee3e8542804"},
{file = "srsly-2.4.5-cp310-cp310-manylinux_2_17_aarch64.manylinux2014_aarch64.whl", hash = "sha256:6d83bea1f774b54d9313a374a95f11a776d37bcedcda93c526bf7f1cb5f26428"},
{file = "srsly-2.4.5-cp310-cp310-manylinux_2_17_x86_64.manylinux2014_x86_64.whl", hash = "sha256:cae5d48a0bda55a3728f49976ea0b652f508dbc5ac3e849f41b64a5753ec7f0a"},
{file = "srsly-2.4.5-cp310-cp310-win_amd64.whl", hash = "sha256:f74c64934423bcc2d3508cf3a079c7034e5cde988255dc57c7a09794c78f0610"},
{file = "srsly-2.4.5-cp311-cp311-macosx_10_9_x86_64.whl", hash = "sha256:0f9abb7857f9363f1ac52123db94dfe1c4af8959a39d698eff791d17e45e00b6"},
{file = "srsly-2.4.5-cp311-cp311-macosx_11_0_arm64.whl", hash = "sha256:f48d40c3b3d20e38410e7a95fa5b4050c035f467b0793aaf67188b1edad37fe3"},
{file = "srsly-2.4.5-cp311-cp311-manylinux_2_17_aarch64.manylinux2014_aarch64.whl", hash = "sha256:1434759effec2ee266a24acd9b53793a81cac01fc1e6321c623195eda1b9c7df"},
{file = "srsly-2.4.5-cp311-cp311-manylinux_2_17_x86_64.manylinux2014_x86_64.whl", hash = "sha256:5e7b0cd9853b0d9e00ad23d26199c1e44d8fd74096cbbbabc92447a915bcfd78"},
{file = "srsly-2.4.5-cp311-cp311-win_amd64.whl", hash = "sha256:874010587a807264963de9a1c91668c43cee9ed2f683f5406bdf5a34dfe12cca"},
{file = "srsly-2.4.5-cp36-cp36m-manylinux_2_17_aarch64.manylinux2014_aarch64.whl", hash = "sha256:afa4e1fe143275339d1c4a74e46d4c75168eed8b200f44f2ea023d45ff089a2f"},
{file = "srsly-2.4.5-cp36-cp36m-manylinux_2_17_x86_64.manylinux2014_x86_64.whl", hash = "sha256:1c4291ee125796fb05e778e9ca8f9a829e8c314b757826f2e1d533e424a93531"},
{file = "srsly-2.4.5-cp36-cp36m-win_amd64.whl", hash = "sha256:8f258ee69aefb053258ac2e4f4b9d597e622b79f78874534430e864cef0be199"},
{file = "srsly-2.4.5-cp37-cp37m-macosx_10_9_x86_64.whl", hash = "sha256:ace951c3088204bd66f30326f93ab6e615ce1562a461a8a464759d99fa9c2a02"},
{file = "srsly-2.4.5-cp37-cp37m-manylinux_2_17_aarch64.manylinux2014_aarch64.whl", hash = "sha256:facab907801fbcb0e54b3532e04bc6a0709184d68004ef3a129e8c7e3ca63d82"},
{file = "srsly-2.4.5-cp37-cp37m-manylinux_2_17_x86_64.manylinux2014_x86_64.whl", hash = "sha256:a49c089541a9a0a27ccb841a596350b7ee1d6adfc7ebd28eddedfd34dc9f12c5"},
{file = "srsly-2.4.5-cp37-cp37m-win_amd64.whl", hash = "sha256:db6bc02bd1e3372a3636e47b22098107c9df2cf12d220321b51c586ba17904b3"},
{file = "srsly-2.4.5-cp38-cp38-macosx_10_9_x86_64.whl", hash = "sha256:9a95c682de8c6e6145199f10a7c597647ff7d398fb28874f845ba7d34a86a033"},
{file = "srsly-2.4.5-cp38-cp38-macosx_11_0_arm64.whl", hash = "sha256:8c26c5c0e07ea7bb7b8b8735e1b2261fea308c2c883b99211d11747162c6d897"},
{file = "srsly-2.4.5-cp38-cp38-manylinux_2_17_aarch64.manylinux2014_aarch64.whl", hash = "sha256:e0043eff95be45acb5ce09cebb80ebdb9f2b6856aa3a15979e6fe3cc9a486753"},
{file = "srsly-2.4.5-cp38-cp38-manylinux_2_17_x86_64.manylinux2014_x86_64.whl", hash = "sha256:a2075124d4872e754af966e76f3258cd526eeac84f0995ee8cd561fd4cf1b68e"},
{file = "srsly-2.4.5-cp38-cp38-win_amd64.whl", hash = "sha256:1a41e5b10902c885cabe326ba86d549d7011e38534c45bed158ecb8abd4b44ce"},
{file = "srsly-2.4.5-cp39-cp39-macosx_10_9_x86_64.whl", hash = "sha256:b5a96f0ae15b651fa3fd87421bd93e61c6dc46c0831cbe275c9b790d253126b5"},
{file = "srsly-2.4.5-cp39-cp39-macosx_11_0_arm64.whl", hash = "sha256:764906e9f4c2ac5f748c49d95c8bf79648404ebc548864f9cb1fa0707942d830"},
{file = "srsly-2.4.5-cp39-cp39-manylinux_2_17_aarch64.manylinux2014_aarch64.whl", hash = "sha256:95afe9625badaf5ce326e37b21362423d7e8578a5ec9c85b15c3fca93205a883"},
{file = "srsly-2.4.5-cp39-cp39-manylinux_2_17_x86_64.manylinux2014_x86_64.whl", hash = "sha256:90359cc3c5601afd45ec12c52bde1cf1ccbe0dc7d4244fd1f8d0c9e100c71707"},
{file = "srsly-2.4.5-cp39-cp39-win_amd64.whl", hash = "sha256:2d3b0d32be2267fb489da172d71399ac59f763189b47dbe68eedb0817afaa6dc"},
{file = "srsly-2.4.5.tar.gz", hash = "sha256:c842258967baa527cea9367986e42b8143a1a890e7d4a18d25a36edc3c7a33c7"},
]
stack-data = [
{file = "stack_data-0.6.2-py3-none-any.whl", hash = "sha256:cbb2a53eb64e5785878201a97ed7c7b94883f48b87bfb0bbe8b623c74679e4a8"},
{file = "stack_data-0.6.2.tar.gz", hash = "sha256:32d2dd0376772d01b6cb9fc996f3c8b57a357089dec328ed4b6553d037eaf815"},
]
statsmodels = [
{file = "statsmodels-0.13.5-cp310-cp310-macosx_10_9_x86_64.whl", hash = "sha256:c75319fddded9507cc310fc3980e4ae4d64e3ff37b322ad5e203a84f89d85203"},
{file = "statsmodels-0.13.5-cp310-cp310-macosx_11_0_arm64.whl", hash = "sha256:6f148920ef27c7ba69a5735724f65de9422c0c8bcef71b50c846b823ceab8840"},
{file = "statsmodels-0.13.5-cp310-cp310-manylinux_2_17_aarch64.manylinux2014_aarch64.whl", hash = "sha256:5cc4d3e866bfe0c4f804bca362d0e7e29d24b840aaba8d35a754387e16d2a119"},
{file = "statsmodels-0.13.5-cp310-cp310-manylinux_2_17_x86_64.manylinux2014_x86_64.whl", hash = "sha256:072950d6f7820a6b0bd6a27b2d792a6d6f952a1d2f62f0dcf8dd808799475855"},
{file = "statsmodels-0.13.5-cp310-cp310-win_amd64.whl", hash = "sha256:159ae9962c61b31dcffe6356d72ae3d074bc597ad9273ec93ae653fe607b8516"},
{file = "statsmodels-0.13.5-cp311-cp311-macosx_10_9_x86_64.whl", hash = "sha256:9061c0d5ee4f3038b590afedd527a925e5de27195dc342381bac7675b2c5efe4"},
{file = "statsmodels-0.13.5-cp311-cp311-macosx_11_0_arm64.whl", hash = "sha256:e1d89cba5fafc1bf8e75296fdfad0b619de2bfb5e6c132913991d207f3ead675"},
{file = "statsmodels-0.13.5-cp311-cp311-manylinux_2_17_aarch64.manylinux2014_aarch64.whl", hash = "sha256:01bc16e7c66acb30cd3dda6004c43212c758223d1966131226024a5c99ec5a7e"},
{file = "statsmodels-0.13.5-cp311-cp311-manylinux_2_17_x86_64.manylinux2014_x86_64.whl", hash = "sha256:5d5cd9ab5de2c7489b890213cba2aec3d6468eaaec547041c2dfcb1e03411f7e"},
{file = "statsmodels-0.13.5-cp311-cp311-win_amd64.whl", hash = "sha256:857d5c0564a68a7ef77dc2252bb43c994c0699919b4e1f06a9852c2fbb588765"},
{file = "statsmodels-0.13.5-cp37-cp37m-macosx_10_9_x86_64.whl", hash = "sha256:5a5348b2757ab31c5c31b498f25eff2ea3c42086bef3d3b88847c25a30bdab9c"},
{file = "statsmodels-0.13.5-cp37-cp37m-manylinux_2_17_aarch64.manylinux2014_aarch64.whl", hash = "sha256:9b21648e3a8e7514839ba000a48e495cdd8bb55f1b71c608cf314b05541e283b"},
{file = "statsmodels-0.13.5-cp37-cp37m-manylinux_2_17_x86_64.manylinux2014_x86_64.whl", hash = "sha256:1b829eada6cec07990f5e6820a152af4871c601fd458f76a896fb79ae2114985"},
{file = "statsmodels-0.13.5-cp37-cp37m-win_amd64.whl", hash = "sha256:872b3a8186ef20f647c7ab5ace512a8fc050148f3c2f366460ab359eec3d9695"},
{file = "statsmodels-0.13.5-cp38-cp38-macosx_10_9_x86_64.whl", hash = "sha256:bc1abb81d24f56425febd5a22bb852a1b98e53b80c4a67f50938f9512f154141"},
{file = "statsmodels-0.13.5-cp38-cp38-macosx_11_0_arm64.whl", hash = "sha256:a2c46f1b0811a9736db37badeb102c0903f33bec80145ced3aa54df61aee5c2b"},
{file = "statsmodels-0.13.5-cp38-cp38-manylinux_2_17_aarch64.manylinux2014_aarch64.whl", hash = "sha256:947f79ba9662359f1cfa6e943851f17f72b06e55f4a7c7a2928ed3bc57ed6cb8"},
{file = "statsmodels-0.13.5-cp38-cp38-manylinux_2_17_x86_64.manylinux2014_x86_64.whl", hash = "sha256:046251c939c51e7632bcc8c6d6f31b8ca0eaffdf726d2498463f8de3735c9a82"},
{file = "statsmodels-0.13.5-cp38-cp38-win_amd64.whl", hash = "sha256:84f720e8d611ef8f297e6d2ffa7248764e223ef7221a3fc136e47ae089609611"},
{file = "statsmodels-0.13.5-cp39-cp39-macosx_10_9_x86_64.whl", hash = "sha256:b0d1d24e4adf96ec3c64d9a027dcee2c5d5096bb0dad33b4d91034c0a3c40371"},
{file = "statsmodels-0.13.5-cp39-cp39-macosx_11_0_arm64.whl", hash = "sha256:0f0e5c9c58fb6cba41db01504ec8dd018c96a95152266b7d5d67e0de98840474"},
{file = "statsmodels-0.13.5-cp39-cp39-manylinux_2_17_aarch64.manylinux2014_aarch64.whl", hash = "sha256:5b034aa4b9ad4f4d21abc4dd4841be0809a446db14c7aa5c8a65090aea9f1143"},
{file = "statsmodels-0.13.5-cp39-cp39-manylinux_2_17_x86_64.manylinux2014_x86_64.whl", hash = "sha256:73f97565c29241e839ffcef74fa995afdfe781910ccc27c189e5890193085958"},
{file = "statsmodels-0.13.5-cp39-cp39-win_amd64.whl", hash = "sha256:2ff331e508f2d1a53d3a188305477f4cf05cd8c52beb6483885eb3d51c8be3ad"},
{file = "statsmodels-0.13.5.tar.gz", hash = "sha256:593526acae1c0fda0ea6c48439f67c3943094c542fe769f8b90fe9e6c6cc4871"},
]
sympy = [
{file = "sympy-1.11.1-py3-none-any.whl", hash = "sha256:938f984ee2b1e8eae8a07b884c8b7a1146010040fccddc6539c54f401c8f6fcf"},
{file = "sympy-1.11.1.tar.gz", hash = "sha256:e32380dce63cb7c0108ed525570092fd45168bdae2faa17e528221ef72e88658"},
]
tblib = [
{file = "tblib-1.7.0-py2.py3-none-any.whl", hash = "sha256:289fa7359e580950e7d9743eab36b0691f0310fce64dee7d9c31065b8f723e23"},
{file = "tblib-1.7.0.tar.gz", hash = "sha256:059bd77306ea7b419d4f76016aef6d7027cc8a0785579b5aad198803435f882c"},
]
tenacity = [
{file = "tenacity-8.1.0-py3-none-any.whl", hash = "sha256:35525cd47f82830069f0d6b73f7eb83bc5b73ee2fff0437952cedf98b27653ac"},
{file = "tenacity-8.1.0.tar.gz", hash = "sha256:e48c437fdf9340f5666b92cd7990e96bc5fc955e1298baf4a907e3972067a445"},
]
tensorboard = [
{file = "tensorboard-2.11.0-py3-none-any.whl", hash = "sha256:a0e592ee87962e17af3f0dce7faae3fbbd239030159e9e625cce810b7e35c53d"},
]
tensorboard-data-server = [
{file = "tensorboard_data_server-0.6.1-py3-none-any.whl", hash = "sha256:809fe9887682d35c1f7d1f54f0f40f98bb1f771b14265b453ca051e2ce58fca7"},
{file = "tensorboard_data_server-0.6.1-py3-none-macosx_10_9_x86_64.whl", hash = "sha256:fa8cef9be4fcae2f2363c88176638baf2da19c5ec90addb49b1cde05c95c88ee"},
{file = "tensorboard_data_server-0.6.1-py3-none-manylinux2010_x86_64.whl", hash = "sha256:d8237580755e58eff68d1f3abefb5b1e39ae5c8b127cc40920f9c4fb33f4b98a"},
]
tensorboard-plugin-wit = [
{file = "tensorboard_plugin_wit-1.8.1-py3-none-any.whl", hash = "sha256:ff26bdd583d155aa951ee3b152b3d0cffae8005dc697f72b44a8e8c2a77a8cbe"},
]
tensorflow = [
{file = "tensorflow-2.11.0-cp310-cp310-macosx_10_14_x86_64.whl", hash = "sha256:6c049fec6c2040685d6f43a63e17ccc5d6b0abc16b70cc6f5e7d691262b5d2d0"},
{file = "tensorflow-2.11.0-cp310-cp310-manylinux_2_17_aarch64.manylinux2014_aarch64.whl", hash = "sha256:bcc8380820cea8f68f6c90b8aee5432e8537e5bb9ec79ac61a98e6a9a02c7d40"},
{file = "tensorflow-2.11.0-cp310-cp310-manylinux_2_17_x86_64.manylinux2014_x86_64.whl", hash = "sha256:d973458241c8771bf95d4ba68ad5d67b094f72dd181c2d562ffab538c1b0dad7"},
{file = "tensorflow-2.11.0-cp310-cp310-win_amd64.whl", hash = "sha256:d470b772ee3c291a8c7be2331e7c379e0c338223c0bf532f5906d4556f17580d"},
{file = "tensorflow-2.11.0-cp37-cp37m-macosx_10_14_x86_64.whl", hash = "sha256:d29c1179149fa469ad68234c52c83081d037ead243f90e826074e2563a0f938a"},
{file = "tensorflow-2.11.0-cp37-cp37m-manylinux_2_17_aarch64.manylinux2014_aarch64.whl", hash = "sha256:2cdba2fce00d6c924470d4fb65d5e95a4b6571a863860608c0c13f0393f4ca0d"},
{file = "tensorflow-2.11.0-cp37-cp37m-manylinux_2_17_x86_64.manylinux2014_x86_64.whl", hash = "sha256:4f2ab20f93d2b52a44b414ec6dcf82aa12110e90e0920039a27108de28ae2728"},
{file = "tensorflow-2.11.0-cp37-cp37m-win_amd64.whl", hash = "sha256:445510f092f7827e1f60f59b8bfb58e664aaf05d07daaa21c5735a7f76ca2b25"},
{file = "tensorflow-2.11.0-cp38-cp38-macosx_10_14_x86_64.whl", hash = "sha256:056d29f2212342536ce3856aa47910a2515eb97ec0a6cc29ed47fc4be1369ec8"},
{file = "tensorflow-2.11.0-cp38-cp38-manylinux_2_17_aarch64.manylinux2014_aarch64.whl", hash = "sha256:17b29d6d360fad545ab1127db52592efd3f19ac55c1a45e5014da328ae867ab4"},
{file = "tensorflow-2.11.0-cp38-cp38-manylinux_2_17_x86_64.manylinux2014_x86_64.whl", hash = "sha256:335ab5cccd7a1c46e3d89d9d46913f0715e8032df8d7438f9743b3fb97b39f69"},
{file = "tensorflow-2.11.0-cp38-cp38-win_amd64.whl", hash = "sha256:d48da37c8ae711eb38047a56a052ca8bb4ee018a91a479e42b7a8d117628c32e"},
{file = "tensorflow-2.11.0-cp39-cp39-macosx_10_14_x86_64.whl", hash = "sha256:d9cf25bca641f2e5c77caa3bfd8dd6b892a7aec0695c54d2a7c9f52a54a8d487"},
{file = "tensorflow-2.11.0-cp39-cp39-manylinux_2_17_aarch64.manylinux2014_aarch64.whl", hash = "sha256:0d28f9691ebc48c0075e271023b3f147ae2bc29a3d3a7f42d45019c6b4a700d2"},
{file = "tensorflow-2.11.0-cp39-cp39-manylinux_2_17_x86_64.manylinux2014_x86_64.whl", hash = "sha256:276a44210d956701899dc78ad0aa116a0071f22fb0bcc1ea6bb59f7646b08d11"},
{file = "tensorflow-2.11.0-cp39-cp39-win_amd64.whl", hash = "sha256:cc3444fe1d58c65a195a69656bf56015bf19dc2916da607d784b0a1e215ec008"},
]
tensorflow-estimator = [
{file = "tensorflow_estimator-2.11.0-py2.py3-none-any.whl", hash = "sha256:ea3b64acfff3d9a244f06178c9bdedcbdd3f125b67d0888dba8229498d06468b"},
]
tensorflow-io-gcs-filesystem = [
{file = "tensorflow_io_gcs_filesystem-0.28.0-cp310-cp310-macosx_10_14_x86_64.whl", hash = "sha256:22753dc28c949bfaf29b573ee376370762c88d80330fe95cfb291261eb5e927a"},
{file = "tensorflow_io_gcs_filesystem-0.28.0-cp310-cp310-manylinux_2_12_x86_64.manylinux2010_x86_64.whl", hash = "sha256:52988659f405166df79905e9859bc84ae2a71e3ff61522ba32a95e4dce8e66d2"},
{file = "tensorflow_io_gcs_filesystem-0.28.0-cp310-cp310-win_amd64.whl", hash = "sha256:698d7f89e09812b9afeb47c3860797343a22f997c64ab9dab98132c61daa8a7d"},
{file = "tensorflow_io_gcs_filesystem-0.28.0-cp311-cp311-macosx_10_14_x86_64.whl", hash = "sha256:bbf245883aa52ec687b66d0fcbe0f5f0a92d98c0b1c53e6a736039a3548d29a1"},
{file = "tensorflow_io_gcs_filesystem-0.28.0-cp311-cp311-manylinux_2_12_x86_64.manylinux2010_x86_64.whl", hash = "sha256:6d95f306ff225c5053fd06deeab3e3a2716357923cb40c44d566c11be779caa3"},
{file = "tensorflow_io_gcs_filesystem-0.28.0-cp311-cp311-win_amd64.whl", hash = "sha256:5fbef5836e70026245d8d9e692c44dae2c6dbc208c743d01f5b7a2978d6b6bc6"},
{file = "tensorflow_io_gcs_filesystem-0.28.0-cp37-cp37m-macosx_10_14_x86_64.whl", hash = "sha256:00cf6a92f1f9f90b2ba2d728870bcd2a70b116316d0817ab0b91dd390c25b3fd"},
{file = "tensorflow_io_gcs_filesystem-0.28.0-cp37-cp37m-manylinux_2_12_x86_64.manylinux2010_x86_64.whl", hash = "sha256:f76cbe1a784841c223f6861e5f6c7e53aa6232cb626d57e76881a0638c365de6"},
{file = "tensorflow_io_gcs_filesystem-0.28.0-cp37-cp37m-win_amd64.whl", hash = "sha256:c5d99f56c12a349905ff684142e4d2df06ae68ecf50c4aad5449a5f81731d858"},
{file = "tensorflow_io_gcs_filesystem-0.28.0-cp38-cp38-macosx_10_14_x86_64.whl", hash = "sha256:b6e2d275020fb4d1a952cd3fa546483f4e46ad91d64e90d3458e5ca3d12f6477"},
{file = "tensorflow_io_gcs_filesystem-0.28.0-cp38-cp38-manylinux_2_12_x86_64.manylinux2010_x86_64.whl", hash = "sha256:a6670e0da16c884267e896ea5c3334d6fd319bd6ff7cf917043a9f3b2babb1b3"},
{file = "tensorflow_io_gcs_filesystem-0.28.0-cp38-cp38-win_amd64.whl", hash = "sha256:bfed720fc691d3f45802a7bed420716805aef0939c11cebf25798906201f626e"},
{file = "tensorflow_io_gcs_filesystem-0.28.0-cp39-cp39-macosx_10_14_x86_64.whl", hash = "sha256:cc062ce13ec95fb64b1fd426818a6d2b0e5be9692bc0e43a19cce115b6da4336"},
{file = "tensorflow_io_gcs_filesystem-0.28.0-cp39-cp39-manylinux_2_12_x86_64.manylinux2010_x86_64.whl", hash = "sha256:366e1eff8dbd6b64333d7061e2a8efd081ae4742614f717ced08d8cc9379eb50"},
{file = "tensorflow_io_gcs_filesystem-0.28.0-cp39-cp39-win_amd64.whl", hash = "sha256:9484893779324b2d34874b0aacf3b824eb4f22d782e75df029cbccab2e607974"},
]
termcolor = [
{file = "termcolor-2.1.1-py3-none-any.whl", hash = "sha256:fa852e957f97252205e105dd55bbc23b419a70fec0085708fc0515e399f304fd"},
{file = "termcolor-2.1.1.tar.gz", hash = "sha256:67cee2009adc6449c650f6bcf3bdeed00c8ba53a8cda5362733c53e0a39fb70b"},
]
terminado = [
{file = "terminado-0.17.0-py3-none-any.whl", hash = "sha256:bf6fe52accd06d0661d7611cc73202121ec6ee51e46d8185d489ac074ca457c2"},
{file = "terminado-0.17.0.tar.gz", hash = "sha256:520feaa3aeab8ad64a69ca779be54be9234edb2d0d6567e76c93c2c9a4e6e43f"},
]
thinc = [
{file = "thinc-8.1.5-cp310-cp310-macosx_10_9_x86_64.whl", hash = "sha256:5dc6629e4770a13dec34eda3c4d89302f1b5c91ac4663cd53f876a4e761fcc00"},
{file = "thinc-8.1.5-cp310-cp310-macosx_11_0_arm64.whl", hash = "sha256:8af5639de41a08d358fac073ac116faefe75289d9bed5c1fbf6c7a54724529ea"},
{file = "thinc-8.1.5-cp310-cp310-manylinux_2_17_aarch64.manylinux2014_aarch64.whl", hash = "sha256:4d66eeacc29769bf4238a0666f05e38d75dce60ab609eea5089975e6d8b82721"},
{file = "thinc-8.1.5-cp310-cp310-manylinux_2_17_x86_64.manylinux2014_x86_64.whl", hash = "sha256:25fcf9b53317f3addca048f1295d4708a95c526821295fe42398e23520514373"},
{file = "thinc-8.1.5-cp310-cp310-win_amd64.whl", hash = "sha256:a683f5280601f2fa1625e738e2b6ce481d17b07350823164f5863aab6b8b8a5d"},
{file = "thinc-8.1.5-cp311-cp311-macosx_10_9_x86_64.whl", hash = "sha256:404af2a714d6e688d27f7816042bca85766cbc57808aa9afb3309ad786000726"},
{file = "thinc-8.1.5-cp311-cp311-macosx_11_0_arm64.whl", hash = "sha256:ee28aa9773cb69d6c95d0c58b3fa9997c88840ad1eb877576f407a5b3b0f93c0"},
{file = "thinc-8.1.5-cp311-cp311-manylinux_2_17_aarch64.manylinux2014_aarch64.whl", hash = "sha256:7acccd5fb2fcd6caab1f3ad9d3f6acd1c6194a638dceccb5a33bd6f1875221ab"},
{file = "thinc-8.1.5-cp311-cp311-manylinux_2_17_x86_64.manylinux2014_x86_64.whl", hash = "sha256:1dc59ab558c85f901ac8299eb8ff1be14404b4d47e5ed3f94f897e25496e4f80"},
{file = "thinc-8.1.5-cp311-cp311-win_amd64.whl", hash = "sha256:07a4cf13c6f0259f32c9d023e2d32d0f5e0aa12ce0422792dbadd24fa1e0379e"},
{file = "thinc-8.1.5-cp36-cp36m-manylinux_2_17_aarch64.manylinux2014_aarch64.whl", hash = "sha256:3ad722c4b1351a712bf8759307ea1213f236aee4a170b2ff31f7908f31b34261"},
{file = "thinc-8.1.5-cp36-cp36m-manylinux_2_17_x86_64.manylinux2014_x86_64.whl", hash = "sha256:076d68f6c27862b66e15af3622651c58f66b3d3b1c69beadbf1c13da294f05cc"},
{file = "thinc-8.1.5-cp36-cp36m-win_amd64.whl", hash = "sha256:91a8ef8dd565b6aa9b3161b97eece079993109be156f4e8501c8bd36e02b6f3f"},
{file = "thinc-8.1.5-cp37-cp37m-macosx_10_9_x86_64.whl", hash = "sha256:73538c0e596d1f281678354f6508d4af5fad3ae0743b069a96628f2a96085fa5"},
{file = "thinc-8.1.5-cp37-cp37m-manylinux_2_17_aarch64.manylinux2014_aarch64.whl", hash = "sha256:ea5e6502565fe72f9a975f6fe5d1be9d19914d2a3abb3158da08b4adffaa97c6"},
{file = "thinc-8.1.5-cp37-cp37m-manylinux_2_17_x86_64.manylinux2014_x86_64.whl", hash = "sha256:d202e79e3d785a2931d580d3dafaa6ca357c5656c82341121731a3491a1c8887"},
{file = "thinc-8.1.5-cp37-cp37m-win_amd64.whl", hash = "sha256:61dfa235c891c1fa24f9607cd0cad264806adeb70d267162c6e5d91fb9f78640"},
{file = "thinc-8.1.5-cp38-cp38-macosx_10_9_x86_64.whl", hash = "sha256:b62a4247cce4c3a07014b9386b9045dbc15a83aa46102a7fcd5d8eec21fa463a"},
{file = "thinc-8.1.5-cp38-cp38-macosx_11_0_arm64.whl", hash = "sha256:345d15eb45743b305a35dd1dc77d282248e55e45a0a84c38d2dfc9fad6130125"},
{file = "thinc-8.1.5-cp38-cp38-manylinux_2_17_aarch64.manylinux2014_aarch64.whl", hash = "sha256:6793340b5ada30f11d9beaa6001ade6d80cf3a7877d701ec1710552145dabb33"},
{file = "thinc-8.1.5-cp38-cp38-manylinux_2_17_x86_64.manylinux2014_x86_64.whl", hash = "sha256:fa07750e65cc7d3bd922bf2046a10ef28cf22497990da13c3ca154b25449b758"},
{file = "thinc-8.1.5-cp38-cp38-win_amd64.whl", hash = "sha256:b7c1b8417e6bebcebe0bbded816b7b6587a1e239539109897e15cf8463dbed10"},
{file = "thinc-8.1.5-cp39-cp39-macosx_10_9_x86_64.whl", hash = "sha256:ad96acada56e4a0509b834c2e0950a5066727ddfc8d2201b83f7bca8751886aa"},
{file = "thinc-8.1.5-cp39-cp39-macosx_11_0_arm64.whl", hash = "sha256:5d0144cccb3fb08b15bba73a97f83c0f311a388417fb89d5bb4451abe559b0a2"},
{file = "thinc-8.1.5-cp39-cp39-manylinux_2_17_aarch64.manylinux2014_aarch64.whl", hash = "sha256:ced446d2af306a29b0c9ba8940a6631e2e9ef287f9643f4a1d539d69e9fc7266"},
{file = "thinc-8.1.5-cp39-cp39-manylinux_2_17_x86_64.manylinux2014_x86_64.whl", hash = "sha256:6bb376234c44f173445651c9bf397d05622e31c09a98f81cee98f5908d674380"},
{file = "thinc-8.1.5-cp39-cp39-win_amd64.whl", hash = "sha256:16be051c6f71d967fe87c3bda3a760699539cf75fee6b32527ea38feb3002e56"},
{file = "thinc-8.1.5.tar.gz", hash = "sha256:4d3e4de33d2d0eae7c1455c60c680e453b0204c29e3d2d548d7a9e7fe08ccfbd"},
]
threadpoolctl = [
{file = "threadpoolctl-3.1.0-py3-none-any.whl", hash = "sha256:8b99adda265feb6773280df41eece7b2e6561b772d21ffd52e372f999024907b"},
{file = "threadpoolctl-3.1.0.tar.gz", hash = "sha256:a335baacfaa4400ae1f0d8e3a58d6674d2f8828e3716bb2802c44955ad391380"},
]
tinycss2 = [
{file = "tinycss2-1.2.1-py3-none-any.whl", hash = "sha256:2b80a96d41e7c3914b8cda8bc7f705a4d9c49275616e886103dd839dfc847847"},
{file = "tinycss2-1.2.1.tar.gz", hash = "sha256:8cff3a8f066c2ec677c06dbc7b45619804a6938478d9d73c284b29d14ecb0627"},
]
tokenize-rt = [
{file = "tokenize_rt-5.0.0-py2.py3-none-any.whl", hash = "sha256:c67772c662c6b3dc65edf66808577968fb10badfc2042e3027196bed4daf9e5a"},
{file = "tokenize_rt-5.0.0.tar.gz", hash = "sha256:3160bc0c3e8491312d0485171dea861fc160a240f5f5766b72a1165408d10740"},
]
tomli = [
{file = "tomli-2.0.1-py3-none-any.whl", hash = "sha256:939de3e7a6161af0c887ef91b7d41a53e7c5a1ca976325f429cb46ea9bc30ecc"},
{file = "tomli-2.0.1.tar.gz", hash = "sha256:de526c12914f0c550d15924c62d72abc48d6fe7364aa87328337a31007fe8a4f"},
]
toolz = [
{file = "toolz-0.12.0-py3-none-any.whl", hash = "sha256:2059bd4148deb1884bb0eb770a3cde70e7f954cfbbdc2285f1f2de01fd21eb6f"},
{file = "toolz-0.12.0.tar.gz", hash = "sha256:88c570861c440ee3f2f6037c4654613228ff40c93a6c25e0eba70d17282c6194"},
]
torch = [
{file = "torch-1.12.1-cp310-cp310-manylinux1_x86_64.whl", hash = "sha256:9c038662db894a23e49e385df13d47b2a777ffd56d9bcd5b832593fab0a7e286"},
{file = "torch-1.12.1-cp310-cp310-manylinux2014_aarch64.whl", hash = "sha256:4e1b9c14cf13fd2ab8d769529050629a0e68a6fc5cb8e84b4a3cc1dd8c4fe541"},
{file = "torch-1.12.1-cp310-cp310-win_amd64.whl", hash = "sha256:e9c8f4a311ac29fc7e8e955cfb7733deb5dbe1bdaabf5d4af2765695824b7e0d"},
{file = "torch-1.12.1-cp310-none-macosx_10_9_x86_64.whl", hash = "sha256:976c3f997cea38ee91a0dd3c3a42322785414748d1761ef926b789dfa97c6134"},
{file = "torch-1.12.1-cp310-none-macosx_11_0_arm64.whl", hash = "sha256:68104e4715a55c4bb29a85c6a8d57d820e0757da363be1ba680fa8cc5be17b52"},
{file = "torch-1.12.1-cp37-cp37m-manylinux1_x86_64.whl", hash = "sha256:743784ccea0dc8f2a3fe6a536bec8c4763bd82c1352f314937cb4008d4805de1"},
{file = "torch-1.12.1-cp37-cp37m-manylinux2014_aarch64.whl", hash = "sha256:b5dbcca369800ce99ba7ae6dee3466607a66958afca3b740690d88168752abcf"},
{file = "torch-1.12.1-cp37-cp37m-win_amd64.whl", hash = "sha256:f3b52a634e62821e747e872084ab32fbcb01b7fa7dbb7471b6218279f02a178a"},
{file = "torch-1.12.1-cp37-none-macosx_10_9_x86_64.whl", hash = "sha256:8a34a2fbbaa07c921e1b203f59d3d6e00ed379f2b384445773bd14e328a5b6c8"},
{file = "torch-1.12.1-cp37-none-macosx_11_0_arm64.whl", hash = "sha256:42f639501928caabb9d1d55ddd17f07cd694de146686c24489ab8c615c2871f2"},
{file = "torch-1.12.1-cp38-cp38-manylinux1_x86_64.whl", hash = "sha256:0b44601ec56f7dd44ad8afc00846051162ef9c26a8579dda0a02194327f2d55e"},
{file = "torch-1.12.1-cp38-cp38-manylinux2014_aarch64.whl", hash = "sha256:cd26d8c5640c3a28c526d41ccdca14cf1cbca0d0f2e14e8263a7ac17194ab1d2"},
{file = "torch-1.12.1-cp38-cp38-win_amd64.whl", hash = "sha256:42e115dab26f60c29e298559dbec88444175528b729ae994ec4c65d56fe267dd"},
{file = "torch-1.12.1-cp38-none-macosx_10_9_x86_64.whl", hash = "sha256:a8320ba9ad87e80ca5a6a016e46ada4d1ba0c54626e135d99b2129a4541c509d"},
{file = "torch-1.12.1-cp38-none-macosx_11_0_arm64.whl", hash = "sha256:03e31c37711db2cd201e02de5826de875529e45a55631d317aadce2f1ed45aa8"},
{file = "torch-1.12.1-cp39-cp39-manylinux1_x86_64.whl", hash = "sha256:9b356aea223772cd754edb4d9ecf2a025909b8615a7668ac7d5130f86e7ec421"},
{file = "torch-1.12.1-cp39-cp39-manylinux2014_aarch64.whl", hash = "sha256:6cf6f54b43c0c30335428195589bd00e764a6d27f3b9ba637aaa8c11aaf93073"},
{file = "torch-1.12.1-cp39-cp39-win_amd64.whl", hash = "sha256:f00c721f489089dc6364a01fd84906348fe02243d0af737f944fddb36003400d"},
{file = "torch-1.12.1-cp39-none-macosx_10_9_x86_64.whl", hash = "sha256:bfec2843daa654f04fda23ba823af03e7b6f7650a873cdb726752d0e3718dada"},
{file = "torch-1.12.1-cp39-none-macosx_11_0_arm64.whl", hash = "sha256:69fe2cae7c39ccadd65a123793d30e0db881f1c1927945519c5c17323131437e"},
]
torchvision = [
{file = "torchvision-0.13.1-cp310-cp310-macosx_10_9_x86_64.whl", hash = "sha256:19286a733c69dcbd417b86793df807bd227db5786ed787c17297741a9b0d0fc7"},
{file = "torchvision-0.13.1-cp310-cp310-macosx_11_0_arm64.whl", hash = "sha256:08f592ea61836ebeceb5c97f4d7a813b9d7dc651bbf7ce4401563ccfae6a21fc"},
{file = "torchvision-0.13.1-cp310-cp310-manylinux1_x86_64.whl", hash = "sha256:ef5fe3ec1848123cd0ec74c07658192b3147dcd38e507308c790d5943e87b88c"},
{file = "torchvision-0.13.1-cp310-cp310-manylinux2014_aarch64.whl", hash = "sha256:099874088df104d54d8008f2a28539ca0117b512daed8bf3c2bbfa2b7ccb187a"},
{file = "torchvision-0.13.1-cp310-cp310-win_amd64.whl", hash = "sha256:8e4d02e4d8a203e0c09c10dfb478214c224d080d31efc0dbf36d9c4051f7f3c6"},
{file = "torchvision-0.13.1-cp37-cp37m-macosx_10_9_x86_64.whl", hash = "sha256:5e631241bee3661de64f83616656224af2e3512eb2580da7c08e08b8c965a8ac"},
{file = "torchvision-0.13.1-cp37-cp37m-manylinux1_x86_64.whl", hash = "sha256:899eec0b9f3b99b96d6f85b9aa58c002db41c672437677b553015b9135b3be7e"},
{file = "torchvision-0.13.1-cp37-cp37m-manylinux2014_aarch64.whl", hash = "sha256:83e9e2457f23110fd53b0177e1bc621518d6ea2108f570e853b768ce36b7c679"},
{file = "torchvision-0.13.1-cp37-cp37m-win_amd64.whl", hash = "sha256:7552e80fa222252b8b217a951c85e172a710ea4cad0ae0c06fbb67addece7871"},
{file = "torchvision-0.13.1-cp38-cp38-macosx_10_9_x86_64.whl", hash = "sha256:f230a1a40ed70d51e463ce43df243ec520902f8725de2502e485efc5eea9d864"},
{file = "torchvision-0.13.1-cp38-cp38-macosx_11_0_arm64.whl", hash = "sha256:e9a563894f9fa40692e24d1aa58c3ef040450017cfed3598ff9637f404f3fe3b"},
{file = "torchvision-0.13.1-cp38-cp38-manylinux1_x86_64.whl", hash = "sha256:7cb789ceefe6dcd0dc8eeda37bfc45efb7cf34770eac9533861d51ca508eb5b3"},
{file = "torchvision-0.13.1-cp38-cp38-manylinux2014_aarch64.whl", hash = "sha256:87c137f343197769a51333076e66bfcd576301d2cd8614b06657187c71b06c4f"},
{file = "torchvision-0.13.1-cp38-cp38-win_amd64.whl", hash = "sha256:4d8bf321c4380854ef04613935fdd415dce29d1088a7ff99e06e113f0efe9203"},
{file = "torchvision-0.13.1-cp39-cp39-macosx_10_9_x86_64.whl", hash = "sha256:0298bae3b09ac361866088434008d82b99d6458fe8888c8df90720ef4b347d44"},
{file = "torchvision-0.13.1-cp39-cp39-macosx_11_0_arm64.whl", hash = "sha256:c5ed609c8bc88c575226400b2232e0309094477c82af38952e0373edef0003fd"},
{file = "torchvision-0.13.1-cp39-cp39-manylinux1_x86_64.whl", hash = "sha256:3567fb3def829229ec217c1e38f08c5128ff7fb65854cac17ebac358ff7aa309"},
{file = "torchvision-0.13.1-cp39-cp39-manylinux2014_aarch64.whl", hash = "sha256:b167934a5943242da7b1e59318f911d2d253feeca0d13ad5d832b58eed943401"},
{file = "torchvision-0.13.1-cp39-cp39-win_amd64.whl", hash = "sha256:0e77706cc90462653620e336bb90daf03d7bf1b88c3a9a3037df8d111823a56e"},
]
tornado = [
{file = "tornado-6.2-cp37-abi3-macosx_10_9_universal2.whl", hash = "sha256:20f638fd8cc85f3cbae3c732326e96addff0a15e22d80f049e00121651e82e72"},
{file = "tornado-6.2-cp37-abi3-macosx_10_9_x86_64.whl", hash = "sha256:87dcafae3e884462f90c90ecc200defe5e580a7fbbb4365eda7c7c1eb809ebc9"},
{file = "tornado-6.2-cp37-abi3-manylinux_2_17_aarch64.manylinux2014_aarch64.whl", hash = "sha256:ba09ef14ca9893954244fd872798b4ccb2367c165946ce2dd7376aebdde8e3ac"},
{file = "tornado-6.2-cp37-abi3-manylinux_2_5_i686.manylinux1_i686.manylinux_2_17_i686.manylinux2014_i686.whl", hash = "sha256:b8150f721c101abdef99073bf66d3903e292d851bee51910839831caba341a75"},
{file = "tornado-6.2-cp37-abi3-manylinux_2_5_x86_64.manylinux1_x86_64.manylinux_2_17_x86_64.manylinux2014_x86_64.whl", hash = "sha256:d3a2f5999215a3a06a4fc218026cd84c61b8b2b40ac5296a6db1f1451ef04c1e"},
{file = "tornado-6.2-cp37-abi3-musllinux_1_1_aarch64.whl", hash = "sha256:5f8c52d219d4995388119af7ccaa0bcec289535747620116a58d830e7c25d8a8"},
{file = "tornado-6.2-cp37-abi3-musllinux_1_1_i686.whl", hash = "sha256:6fdfabffd8dfcb6cf887428849d30cf19a3ea34c2c248461e1f7d718ad30b66b"},
{file = "tornado-6.2-cp37-abi3-musllinux_1_1_x86_64.whl", hash = "sha256:1d54d13ab8414ed44de07efecb97d4ef7c39f7438cf5e976ccd356bebb1b5fca"},
{file = "tornado-6.2-cp37-abi3-win32.whl", hash = "sha256:5c87076709343557ef8032934ce5f637dbb552efa7b21d08e89ae7619ed0eb23"},
{file = "tornado-6.2-cp37-abi3-win_amd64.whl", hash = "sha256:e5f923aa6a47e133d1cf87d60700889d7eae68988704e20c75fb2d65677a8e4b"},
{file = "tornado-6.2.tar.gz", hash = "sha256:9b630419bde84ec666bfd7ea0a4cb2a8a651c2d5cccdbdd1972a0c859dfc3c13"},
]
tqdm = [
{file = "tqdm-4.64.1-py2.py3-none-any.whl", hash = "sha256:6fee160d6ffcd1b1c68c65f14c829c22832bc401726335ce92c52d395944a6a1"},
{file = "tqdm-4.64.1.tar.gz", hash = "sha256:5f4f682a004951c1b450bc753c710e9280c5746ce6ffedee253ddbcbf54cf1e4"},
]
traitlets = [
{file = "traitlets-5.5.0-py3-none-any.whl", hash = "sha256:1201b2c9f76097195989cdf7f65db9897593b0dfd69e4ac96016661bb6f0d30f"},
{file = "traitlets-5.5.0.tar.gz", hash = "sha256:b122f9ff2f2f6c1709dab289a05555be011c87828e911c0cf4074b85cb780a79"},
]
typer = [
{file = "typer-0.7.0-py3-none-any.whl", hash = "sha256:b5e704f4e48ec263de1c0b3a2387cd405a13767d2f907f44c1a08cbad96f606d"},
{file = "typer-0.7.0.tar.gz", hash = "sha256:ff797846578a9f2a201b53442aedeb543319466870fbe1c701eab66dd7681165"},
]
typing-extensions = [
{file = "typing_extensions-4.4.0-py3-none-any.whl", hash = "sha256:16fa4864408f655d35ec496218b85f79b3437c829e93320c7c9215ccfd92489e"},
{file = "typing_extensions-4.4.0.tar.gz", hash = "sha256:1511434bb92bf8dd198c12b1cc812e800d4181cfcb867674e0f8279cc93087aa"},
]
tzdata = [
{file = "tzdata-2022.6-py2.py3-none-any.whl", hash = "sha256:04a680bdc5b15750c39c12a448885a51134a27ec9af83667663f0b3a1bf3f342"},
{file = "tzdata-2022.6.tar.gz", hash = "sha256:91f11db4503385928c15598c98573e3af07e7229181bee5375bd30f1695ddcae"},
]
tzlocal = [
{file = "tzlocal-4.2-py3-none-any.whl", hash = "sha256:89885494684c929d9191c57aa27502afc87a579be5cdd3225c77c463ea043745"},
{file = "tzlocal-4.2.tar.gz", hash = "sha256:ee5842fa3a795f023514ac2d801c4a81d1743bbe642e3940143326b3a00addd7"},
]
urllib3 = [
{file = "urllib3-1.26.13-py2.py3-none-any.whl", hash = "sha256:47cc05d99aaa09c9e72ed5809b60e7ba354e64b59c9c173ac3018642d8bb41fc"},
{file = "urllib3-1.26.13.tar.gz", hash = "sha256:c083dd0dce68dbfbe1129d5271cb90f9447dea7d52097c6e0126120c521ddea8"},
]
wasabi = [
{file = "wasabi-0.10.1-py3-none-any.whl", hash = "sha256:fe862cc24034fbc9f04717cd312ab884f71f51a8ecabebc3449b751c2a649d83"},
{file = "wasabi-0.10.1.tar.gz", hash = "sha256:c8e372781be19272942382b14d99314d175518d7822057cb7a97010c4259d249"},
]
wcwidth = [
{file = "wcwidth-0.2.5-py2.py3-none-any.whl", hash = "sha256:beb4802a9cebb9144e99086eff703a642a13d6a0052920003a230f3294bbe784"},
{file = "wcwidth-0.2.5.tar.gz", hash = "sha256:c4d647b99872929fdb7bdcaa4fbe7f01413ed3d98077df798530e5b04f116c83"},
]
webencodings = [
{file = "webencodings-0.5.1-py2.py3-none-any.whl", hash = "sha256:a0af1213f3c2226497a97e2b3aa01a7e4bee4f403f95be16fc9acd2947514a78"},
{file = "webencodings-0.5.1.tar.gz", hash = "sha256:b36a1c245f2d304965eb4e0a82848379241dc04b865afcc4aab16748587e1923"},
]
websocket-client = [
{file = "websocket-client-1.4.2.tar.gz", hash = "sha256:d6e8f90ca8e2dd4e8027c4561adeb9456b54044312dba655e7cae652ceb9ae59"},
{file = "websocket_client-1.4.2-py3-none-any.whl", hash = "sha256:d6b06432f184438d99ac1f456eaf22fe1ade524c3dd16e661142dc54e9cba574"},
]
Werkzeug = [
{file = "Werkzeug-2.2.2-py3-none-any.whl", hash = "sha256:f979ab81f58d7318e064e99c4506445d60135ac5cd2e177a2de0089bfd4c9bd5"},
{file = "Werkzeug-2.2.2.tar.gz", hash = "sha256:7ea2d48322cc7c0f8b3a215ed73eabd7b5d75d0b50e31ab006286ccff9e00b8f"},
]
wheel = [
{file = "wheel-0.38.4-py3-none-any.whl", hash = "sha256:b60533f3f5d530e971d6737ca6d58681ee434818fab630c83a734bb10c083ce8"},
{file = "wheel-0.38.4.tar.gz", hash = "sha256:965f5259b566725405b05e7cf774052044b1ed30119b5d586b2703aafe8719ac"},
]
widgetsnbextension = [
{file = "widgetsnbextension-4.0.3-py3-none-any.whl", hash = "sha256:7f3b0de8fda692d31ef03743b598620e31c2668b835edbd3962d080ccecf31eb"},
{file = "widgetsnbextension-4.0.3.tar.gz", hash = "sha256:34824864c062b0b3030ad78210db5ae6a3960dfb61d5b27562d6631774de0286"},
]
wrapt = [
{file = "wrapt-1.14.1-cp27-cp27m-macosx_10_9_x86_64.whl", hash = "sha256:1b376b3f4896e7930f1f772ac4b064ac12598d1c38d04907e696cc4d794b43d3"},
{file = "wrapt-1.14.1-cp27-cp27m-manylinux1_i686.whl", hash = "sha256:903500616422a40a98a5a3c4ff4ed9d0066f3b4c951fa286018ecdf0750194ef"},
{file = "wrapt-1.14.1-cp27-cp27m-manylinux1_x86_64.whl", hash = "sha256:5a9a0d155deafd9448baff28c08e150d9b24ff010e899311ddd63c45c2445e28"},
{file = "wrapt-1.14.1-cp27-cp27m-manylinux2010_i686.whl", hash = "sha256:ddaea91abf8b0d13443f6dac52e89051a5063c7d014710dcb4d4abb2ff811a59"},
{file = "wrapt-1.14.1-cp27-cp27m-manylinux2010_x86_64.whl", hash = "sha256:36f582d0c6bc99d5f39cd3ac2a9062e57f3cf606ade29a0a0d6b323462f4dd87"},
{file = "wrapt-1.14.1-cp27-cp27mu-manylinux1_i686.whl", hash = "sha256:7ef58fb89674095bfc57c4069e95d7a31cfdc0939e2a579882ac7d55aadfd2a1"},
{file = "wrapt-1.14.1-cp27-cp27mu-manylinux1_x86_64.whl", hash = "sha256:e2f83e18fe2f4c9e7db597e988f72712c0c3676d337d8b101f6758107c42425b"},
{file = "wrapt-1.14.1-cp27-cp27mu-manylinux2010_i686.whl", hash = "sha256:ee2b1b1769f6707a8a445162ea16dddf74285c3964f605877a20e38545c3c462"},
{file = "wrapt-1.14.1-cp27-cp27mu-manylinux2010_x86_64.whl", hash = "sha256:833b58d5d0b7e5b9832869f039203389ac7cbf01765639c7309fd50ef619e0b1"},
{file = "wrapt-1.14.1-cp310-cp310-macosx_10_9_x86_64.whl", hash = "sha256:80bb5c256f1415f747011dc3604b59bc1f91c6e7150bd7db03b19170ee06b320"},
{file = "wrapt-1.14.1-cp310-cp310-macosx_11_0_arm64.whl", hash = "sha256:07f7a7d0f388028b2df1d916e94bbb40624c59b48ecc6cbc232546706fac74c2"},
{file = "wrapt-1.14.1-cp310-cp310-manylinux_2_17_aarch64.manylinux2014_aarch64.whl", hash = "sha256:02b41b633c6261feff8ddd8d11c711df6842aba629fdd3da10249a53211a72c4"},
{file = "wrapt-1.14.1-cp310-cp310-manylinux_2_5_i686.manylinux1_i686.manylinux_2_17_i686.manylinux2014_i686.whl", hash = "sha256:2fe803deacd09a233e4762a1adcea5db5d31e6be577a43352936179d14d90069"},
{file = "wrapt-1.14.1-cp310-cp310-manylinux_2_5_x86_64.manylinux1_x86_64.manylinux_2_17_x86_64.manylinux2014_x86_64.whl", hash = "sha256:257fd78c513e0fb5cdbe058c27a0624c9884e735bbd131935fd49e9fe719d310"},
{file = "wrapt-1.14.1-cp310-cp310-musllinux_1_1_aarch64.whl", hash = "sha256:4fcc4649dc762cddacd193e6b55bc02edca674067f5f98166d7713b193932b7f"},
{file = "wrapt-1.14.1-cp310-cp310-musllinux_1_1_i686.whl", hash = "sha256:11871514607b15cfeb87c547a49bca19fde402f32e2b1c24a632506c0a756656"},
{file = "wrapt-1.14.1-cp310-cp310-musllinux_1_1_x86_64.whl", hash = "sha256:8ad85f7f4e20964db4daadcab70b47ab05c7c1cf2a7c1e51087bfaa83831854c"},
{file = "wrapt-1.14.1-cp310-cp310-win32.whl", hash = "sha256:a9a52172be0b5aae932bef82a79ec0a0ce87288c7d132946d645eba03f0ad8a8"},
{file = "wrapt-1.14.1-cp310-cp310-win_amd64.whl", hash = "sha256:6d323e1554b3d22cfc03cd3243b5bb815a51f5249fdcbb86fda4bf62bab9e164"},
{file = "wrapt-1.14.1-cp35-cp35m-manylinux1_i686.whl", hash = "sha256:43ca3bbbe97af00f49efb06e352eae40434ca9d915906f77def219b88e85d907"},
{file = "wrapt-1.14.1-cp35-cp35m-manylinux1_x86_64.whl", hash = "sha256:6b1a564e6cb69922c7fe3a678b9f9a3c54e72b469875aa8018f18b4d1dd1adf3"},
{file = "wrapt-1.14.1-cp35-cp35m-manylinux2010_i686.whl", hash = "sha256:00b6d4ea20a906c0ca56d84f93065b398ab74b927a7a3dbd470f6fc503f95dc3"},
{file = "wrapt-1.14.1-cp35-cp35m-manylinux2010_x86_64.whl", hash = "sha256:a85d2b46be66a71bedde836d9e41859879cc54a2a04fad1191eb50c2066f6e9d"},
{file = "wrapt-1.14.1-cp35-cp35m-win32.whl", hash = "sha256:dbcda74c67263139358f4d188ae5faae95c30929281bc6866d00573783c422b7"},
{file = "wrapt-1.14.1-cp35-cp35m-win_amd64.whl", hash = "sha256:b21bb4c09ffabfa0e85e3a6b623e19b80e7acd709b9f91452b8297ace2a8ab00"},
{file = "wrapt-1.14.1-cp36-cp36m-macosx_10_9_x86_64.whl", hash = "sha256:9e0fd32e0148dd5dea6af5fee42beb949098564cc23211a88d799e434255a1f4"},
{file = "wrapt-1.14.1-cp36-cp36m-manylinux_2_17_aarch64.manylinux2014_aarch64.whl", hash = "sha256:9736af4641846491aedb3c3f56b9bc5568d92b0692303b5a305301a95dfd38b1"},
{file = "wrapt-1.14.1-cp36-cp36m-manylinux_2_5_i686.manylinux1_i686.manylinux_2_17_i686.manylinux2014_i686.whl", hash = "sha256:5b02d65b9ccf0ef6c34cba6cf5bf2aab1bb2f49c6090bafeecc9cd81ad4ea1c1"},
{file = "wrapt-1.14.1-cp36-cp36m-manylinux_2_5_x86_64.manylinux1_x86_64.manylinux_2_17_x86_64.manylinux2014_x86_64.whl", hash = "sha256:21ac0156c4b089b330b7666db40feee30a5d52634cc4560e1905d6529a3897ff"},
{file = "wrapt-1.14.1-cp36-cp36m-musllinux_1_1_aarch64.whl", hash = "sha256:9f3e6f9e05148ff90002b884fbc2a86bd303ae847e472f44ecc06c2cd2fcdb2d"},
{file = "wrapt-1.14.1-cp36-cp36m-musllinux_1_1_i686.whl", hash = "sha256:6e743de5e9c3d1b7185870f480587b75b1cb604832e380d64f9504a0535912d1"},
{file = "wrapt-1.14.1-cp36-cp36m-musllinux_1_1_x86_64.whl", hash = "sha256:d79d7d5dc8a32b7093e81e97dad755127ff77bcc899e845f41bf71747af0c569"},
{file = "wrapt-1.14.1-cp36-cp36m-win32.whl", hash = "sha256:81b19725065dcb43df02b37e03278c011a09e49757287dca60c5aecdd5a0b8ed"},
{file = "wrapt-1.14.1-cp36-cp36m-win_amd64.whl", hash = "sha256:b014c23646a467558be7da3d6b9fa409b2c567d2110599b7cf9a0c5992b3b471"},
{file = "wrapt-1.14.1-cp37-cp37m-macosx_10_9_x86_64.whl", hash = "sha256:88bd7b6bd70a5b6803c1abf6bca012f7ed963e58c68d76ee20b9d751c74a3248"},
{file = "wrapt-1.14.1-cp37-cp37m-manylinux_2_17_aarch64.manylinux2014_aarch64.whl", hash = "sha256:b5901a312f4d14c59918c221323068fad0540e34324925c8475263841dbdfe68"},
{file = "wrapt-1.14.1-cp37-cp37m-manylinux_2_5_i686.manylinux1_i686.manylinux_2_17_i686.manylinux2014_i686.whl", hash = "sha256:d77c85fedff92cf788face9bfa3ebaa364448ebb1d765302e9af11bf449ca36d"},
{file = "wrapt-1.14.1-cp37-cp37m-manylinux_2_5_x86_64.manylinux1_x86_64.manylinux_2_17_x86_64.manylinux2014_x86_64.whl", hash = "sha256:8d649d616e5c6a678b26d15ece345354f7c2286acd6db868e65fcc5ff7c24a77"},
{file = "wrapt-1.14.1-cp37-cp37m-musllinux_1_1_aarch64.whl", hash = "sha256:7d2872609603cb35ca513d7404a94d6d608fc13211563571117046c9d2bcc3d7"},
{file = "wrapt-1.14.1-cp37-cp37m-musllinux_1_1_i686.whl", hash = "sha256:ee6acae74a2b91865910eef5e7de37dc6895ad96fa23603d1d27ea69df545015"},
{file = "wrapt-1.14.1-cp37-cp37m-musllinux_1_1_x86_64.whl", hash = "sha256:2b39d38039a1fdad98c87279b48bc5dce2c0ca0d73483b12cb72aa9609278e8a"},
{file = "wrapt-1.14.1-cp37-cp37m-win32.whl", hash = "sha256:60db23fa423575eeb65ea430cee741acb7c26a1365d103f7b0f6ec412b893853"},
{file = "wrapt-1.14.1-cp37-cp37m-win_amd64.whl", hash = "sha256:709fe01086a55cf79d20f741f39325018f4df051ef39fe921b1ebe780a66184c"},
{file = "wrapt-1.14.1-cp38-cp38-macosx_10_9_x86_64.whl", hash = "sha256:8c0ce1e99116d5ab21355d8ebe53d9460366704ea38ae4d9f6933188f327b456"},
{file = "wrapt-1.14.1-cp38-cp38-macosx_11_0_arm64.whl", hash = "sha256:e3fb1677c720409d5f671e39bac6c9e0e422584e5f518bfd50aa4cbbea02433f"},
{file = "wrapt-1.14.1-cp38-cp38-manylinux_2_17_aarch64.manylinux2014_aarch64.whl", hash = "sha256:642c2e7a804fcf18c222e1060df25fc210b9c58db7c91416fb055897fc27e8cc"},
{file = "wrapt-1.14.1-cp38-cp38-manylinux_2_5_i686.manylinux1_i686.manylinux_2_17_i686.manylinux2014_i686.whl", hash = "sha256:7b7c050ae976e286906dd3f26009e117eb000fb2cf3533398c5ad9ccc86867b1"},
{file = "wrapt-1.14.1-cp38-cp38-manylinux_2_5_x86_64.manylinux1_x86_64.manylinux_2_17_x86_64.manylinux2014_x86_64.whl", hash = "sha256:ef3f72c9666bba2bab70d2a8b79f2c6d2c1a42a7f7e2b0ec83bb2f9e383950af"},
{file = "wrapt-1.14.1-cp38-cp38-musllinux_1_1_aarch64.whl", hash = "sha256:01c205616a89d09827986bc4e859bcabd64f5a0662a7fe95e0d359424e0e071b"},
{file = "wrapt-1.14.1-cp38-cp38-musllinux_1_1_i686.whl", hash = "sha256:5a0f54ce2c092aaf439813735584b9537cad479575a09892b8352fea5e988dc0"},
{file = "wrapt-1.14.1-cp38-cp38-musllinux_1_1_x86_64.whl", hash = "sha256:2cf71233a0ed05ccdabe209c606fe0bac7379fdcf687f39b944420d2a09fdb57"},
{file = "wrapt-1.14.1-cp38-cp38-win32.whl", hash = "sha256:aa31fdcc33fef9eb2552cbcbfee7773d5a6792c137b359e82879c101e98584c5"},
{file = "wrapt-1.14.1-cp38-cp38-win_amd64.whl", hash = "sha256:d1967f46ea8f2db647c786e78d8cc7e4313dbd1b0aca360592d8027b8508e24d"},
{file = "wrapt-1.14.1-cp39-cp39-macosx_10_9_x86_64.whl", hash = "sha256:3232822c7d98d23895ccc443bbdf57c7412c5a65996c30442ebe6ed3df335383"},
{file = "wrapt-1.14.1-cp39-cp39-macosx_11_0_arm64.whl", hash = "sha256:988635d122aaf2bdcef9e795435662bcd65b02f4f4c1ae37fbee7401c440b3a7"},
{file = "wrapt-1.14.1-cp39-cp39-manylinux_2_17_aarch64.manylinux2014_aarch64.whl", hash = "sha256:9cca3c2cdadb362116235fdbd411735de4328c61425b0aa9f872fd76d02c4e86"},
{file = "wrapt-1.14.1-cp39-cp39-manylinux_2_5_i686.manylinux1_i686.manylinux_2_17_i686.manylinux2014_i686.whl", hash = "sha256:d52a25136894c63de15a35bc0bdc5adb4b0e173b9c0d07a2be9d3ca64a332735"},
{file = "wrapt-1.14.1-cp39-cp39-manylinux_2_5_x86_64.manylinux1_x86_64.manylinux_2_17_x86_64.manylinux2014_x86_64.whl", hash = "sha256:40e7bc81c9e2b2734ea4bc1aceb8a8f0ceaac7c5299bc5d69e37c44d9081d43b"},
{file = "wrapt-1.14.1-cp39-cp39-musllinux_1_1_aarch64.whl", hash = "sha256:b9b7a708dd92306328117d8c4b62e2194d00c365f18eff11a9b53c6f923b01e3"},
{file = "wrapt-1.14.1-cp39-cp39-musllinux_1_1_i686.whl", hash = "sha256:6a9a25751acb379b466ff6be78a315e2b439d4c94c1e99cb7266d40a537995d3"},
{file = "wrapt-1.14.1-cp39-cp39-musllinux_1_1_x86_64.whl", hash = "sha256:34aa51c45f28ba7f12accd624225e2b1e5a3a45206aa191f6f9aac931d9d56fe"},
{file = "wrapt-1.14.1-cp39-cp39-win32.whl", hash = "sha256:dee0ce50c6a2dd9056c20db781e9c1cfd33e77d2d569f5d1d9321c641bb903d5"},
{file = "wrapt-1.14.1-cp39-cp39-win_amd64.whl", hash = "sha256:dee60e1de1898bde3b238f18340eec6148986da0455d8ba7848d50470a7a32fb"},
{file = "wrapt-1.14.1.tar.gz", hash = "sha256:380a85cf89e0e69b7cfbe2ea9f765f004ff419f34194018a6827ac0e3edfed4d"},
]
xgboost = [
{file = "xgboost-1.7.1-py3-none-macosx_10_15_x86_64.macosx_11_0_x86_64.macosx_12_0_x86_64.whl", hash = "sha256:373d8e95f2f0c0a680ee625a96141b0009f334e132be8493e0f6c69026221bbd"},
{file = "xgboost-1.7.1-py3-none-macosx_12_0_arm64.whl", hash = "sha256:91dfd4af12c01c6e683b0412f48744d2d30d6754e33b297e40845e2d136b3d30"},
{file = "xgboost-1.7.1-py3-none-manylinux2014_aarch64.whl", hash = "sha256:18b9fbad68d2af60737618072e77a43f88eec1113a143f9498698eb5db0d9c41"},
{file = "xgboost-1.7.1-py3-none-manylinux2014_x86_64.whl", hash = "sha256:e96305eb8c8b6061d83ac9fef25437e8ebc8d9c9300e75b8d07f35de1031166b"},
{file = "xgboost-1.7.1-py3-none-win_amd64.whl", hash = "sha256:fbe06896e1b12843c7f428ae56da6ac1c5975545d8785f137f73fd591c54e5f5"},
{file = "xgboost-1.7.1.tar.gz", hash = "sha256:bb302c5c33e14bab94603940987940f29203ecb8767a7a719daf579fbfaace64"},
]
zict = [
{file = "zict-2.2.0-py2.py3-none-any.whl", hash = "sha256:dabcc8c8b6833aa3b6602daad50f03da068322c1a90999ff78aed9eecc8fa92c"},
{file = "zict-2.2.0.tar.gz", hash = "sha256:d7366c2e2293314112dcf2432108428a67b927b00005619feefc310d12d833f3"},
]
zipp = [
{file = "zipp-3.11.0-py3-none-any.whl", hash = "sha256:83a28fcb75844b5c0cdaf5aa4003c2d728c77e05f5aeabe8e95e56727005fbaa"},
{file = "zipp-3.11.0.tar.gz", hash = "sha256:a7a22e05929290a67401440b39690ae6563279bced5f314609d9d03798f56766"},
]
| andresmor-ms | 11c4e0dafd6e824eb81ad14262457d954ae61468 | affe0952f4aba6845247355c171565510c2c1673 | yep it is autogenerated | andresmor-ms | 262 |
py-why/dowhy | 737 | Add polynom regressor and classifier to gcm | This replaces the ProductRegressor.
Signed-off-by: Patrick Bloebaum <[email protected]> | null | 2022-11-01 15:56:18+00:00 | 2022-11-04 17:32:01+00:00 | dowhy/gcm/auto.py | import warnings
from enum import Enum, auto
from functools import partial
from typing import Callable, List, Optional, Union
import numpy as np
import pandas as pd
from joblib import Parallel, delayed
from sklearn import metrics
from sklearn.exceptions import ConvergenceWarning
from sklearn.linear_model import LinearRegression, LogisticRegression
from sklearn.model_selection import KFold, train_test_split
from sklearn.preprocessing import MultiLabelBinarizer
from dowhy.gcm.cms import ProbabilisticCausalModel
from dowhy.gcm.fcms import AdditiveNoiseModel, ClassificationModel, ClassifierFCM, PredictionModel
from dowhy.gcm.graph import CAUSAL_MECHANISM, get_ordered_predecessors, is_root_node, validate_causal_model_assignment
from dowhy.gcm.ml import (
create_elastic_net_regressor,
create_hist_gradient_boost_classifier,
create_hist_gradient_boost_regressor,
create_lasso_regressor,
create_linear_regressor,
create_logistic_regression_classifier,
create_random_forest_regressor,
create_ridge_regressor,
create_support_vector_regressor,
)
from dowhy.gcm.ml.classification import (
create_ada_boost_classifier,
create_extra_trees_classifier,
create_gaussian_nb_classifier,
create_knn_classifier,
create_random_forest_classifier,
create_support_vector_classifier,
)
from dowhy.gcm.ml.regression import (
create_ada_boost_regressor,
create_extra_trees_regressor,
create_knn_regressor,
create_product_regressor,
)
from dowhy.gcm.stochastic_models import EmpiricalDistribution
from dowhy.gcm.util.general import (
apply_one_hot_encoding,
fit_one_hot_encoders,
is_categorical,
set_random_seed,
shape_into_2d,
)
_LIST_OF_POTENTIAL_CLASSIFIERS = [
partial(create_logistic_regression_classifier, max_iter=1000),
create_random_forest_classifier,
create_hist_gradient_boost_classifier,
create_extra_trees_classifier,
create_support_vector_classifier,
create_knn_classifier,
create_gaussian_nb_classifier,
create_ada_boost_classifier,
]
_LIST_OF_POTENTIAL_REGRESSORS = [
create_linear_regressor,
create_ridge_regressor,
partial(create_lasso_regressor, max_iter=5000),
partial(create_elastic_net_regressor, max_iter=5000),
create_random_forest_regressor,
create_hist_gradient_boost_regressor,
create_support_vector_regressor,
create_extra_trees_regressor,
create_knn_regressor,
create_ada_boost_regressor,
create_product_regressor,
]
class AssignmentQuality(Enum):
GOOD = auto()
BETTER = auto()
def assign_causal_mechanisms(
causal_model: ProbabilisticCausalModel,
based_on: pd.DataFrame,
quality: AssignmentQuality = AssignmentQuality.GOOD,
override_models: bool = False,
) -> None:
"""Automatically assigns appropriate causal models. If causal models are already assigned to nodes and
override_models is set to False, this function only validates the assignments with respect to the graph structure.
Here, the validation checks whether root nodes have StochasticModels and non-root nodes have
ConditionalStochasticModels assigned.
:param causal_model: The causal model whose nodes should be assigned causal models.
:param based_on: Jointly sampled data corresponding to the nodes of the given graph.
:param quality: AssignmentQuality for the automatic model selection and model accuracy. This changes the type of
prediction model and time spent on the selection. Options are:
- AssignmentQuality.GOOD: Checks whether the data is linear. If the data is linear, an OLS model is
used, otherwise a gradient boost model.
Model selection speed: Fast
Model training speed: Fast
Model inference speed: Fast
Model accuracy: Medium
- AssignmentQuality.BETTER: Compares multiple model types and uses the one with the best performance
averaged over multiple splits of the training data. By default, the model with the smallest root mean
squared error is selected for regression problems and the model with the highest F1 score is selected for
classification problems. For a list of possible models, see _LIST_OF_POTENTIAL_REGRESSORS and
_LIST_OF_POTENTIAL_CLASSIFIERS, respectively.
Model selection speed: Medium
Model training speed: Fast
Model inference speed: Fast
Model accuracy: Good
:param override_models: If set to True, existing model assignments are replaced with automatically selected
ones. If set to False, the assigned models are only validated with respect to the graph structure.
:return: None
"""
for node in causal_model.graph.nodes:
if not override_models and CAUSAL_MECHANISM in causal_model.graph.nodes[node]:
validate_causal_model_assignment(causal_model.graph, node)
continue
if is_root_node(causal_model.graph, node):
causal_model.set_causal_mechanism(node, EmpiricalDistribution())
else:
prediction_model = select_model(
based_on[get_ordered_predecessors(causal_model.graph, node)].to_numpy(),
based_on[node].to_numpy(),
quality,
)
if isinstance(prediction_model, ClassificationModel):
causal_model.set_causal_mechanism(node, ClassifierFCM(prediction_model))
else:
causal_model.set_causal_mechanism(node, AdditiveNoiseModel(prediction_model))
def select_model(
X: np.ndarray, Y: np.ndarray, model_selection_quality: AssignmentQuality
) -> Union[PredictionModel, ClassificationModel]:
target_is_categorical = is_categorical(Y)
if model_selection_quality == AssignmentQuality.GOOD:
use_linear_prediction_models = has_linear_relationship(X, Y)
if target_is_categorical:
if use_linear_prediction_models:
return create_logistic_regression_classifier(max_iter=1000)
else:
return create_hist_gradient_boost_classifier()
else:
if use_linear_prediction_models:
return find_best_model(
[create_linear_regressor, create_product_regressor], X, Y, model_selection_splits=2
)()
else:
return find_best_model(
[create_hist_gradient_boost_regressor, create_product_regressor], X, Y, model_selection_splits=2
)()
elif model_selection_quality == AssignmentQuality.BETTER:
if target_is_categorical:
return find_best_model(_LIST_OF_POTENTIAL_CLASSIFIERS, X, Y)()
else:
return find_best_model(_LIST_OF_POTENTIAL_REGRESSORS, X, Y)()
def has_linear_relationship(X: np.ndarray, Y: np.ndarray, max_num_samples: int = 3000) -> bool:
X, Y = shape_into_2d(X, Y)
target_is_categorical = is_categorical(Y)
# Making sure there are at least 30% test samples.
num_trainings_samples = min(max_num_samples, round(X.shape[0] * 0.7))
num_test_samples = min(X.shape[0] - num_trainings_samples, max_num_samples)
if target_is_categorical:
all_classes, indices, counts = np.unique(Y, return_counts=True, return_index=True)
for i in range(all_classes.size):
# Making sure that there are at least 2 samples from each class (here, simply duplicate the point).
if counts[i] == 1:
X = np.row_stack([X, X[indices[i], :]])
Y = np.row_stack([Y, Y[indices[i], :]])
x_train, x_test, y_train, y_test = train_test_split(
X, Y, train_size=num_trainings_samples, test_size=num_test_samples, stratify=Y
)
else:
x_train, x_test, y_train, y_test = train_test_split(
X, Y, train_size=num_trainings_samples, test_size=num_test_samples
)
one_hot_encoder = fit_one_hot_encoders(np.row_stack([x_train, x_test]))
x_train = apply_one_hot_encoding(x_train, one_hot_encoder)
x_test = apply_one_hot_encoding(x_test, one_hot_encoder)
if target_is_categorical:
linear_mdl = LogisticRegression(max_iter=1000)
nonlinear_mdl = create_hist_gradient_boost_classifier()
linear_mdl.fit(x_train, y_train.squeeze())
nonlinear_mdl.fit(x_train, y_train.squeeze())
# Compare number of correct classifications.
return np.sum(shape_into_2d(linear_mdl.predict(x_test)) == y_test) >= np.sum(
shape_into_2d(nonlinear_mdl.predict(x_test)) == y_test
)
else:
linear_mdl = LinearRegression()
nonlinear_mdl = create_hist_gradient_boost_regressor()
linear_mdl.fit(x_train, y_train.squeeze())
nonlinear_mdl.fit(x_train, y_train.squeeze())
return np.mean((y_test - shape_into_2d(linear_mdl.predict(x_test))) ** 2) <= np.mean(
(y_test - shape_into_2d(nonlinear_mdl.predict(x_test))) ** 2
)
def find_best_model(
prediction_model_factories: List[Callable[[], PredictionModel]],
X: np.ndarray,
Y: np.ndarray,
metric: Optional[Callable[[np.ndarray, np.ndarray], float]] = None,
max_samples_per_split: int = 10000,
model_selection_splits: int = 5,
n_jobs: int = -1,
) -> Callable[[], PredictionModel]:
X, Y = shape_into_2d(X, Y)
is_classification_problem = isinstance(prediction_model_factories[0](), ClassificationModel)
if metric is None:
if is_classification_problem:
metric = lambda y_true, y_preds: -metrics.f1_score(
y_true, y_preds, average="macro", zero_division=0
) # Higher score is better
else:
metric = metrics.mean_squared_error
labelBinarizer = None
if is_classification_problem:
labelBinarizer = MultiLabelBinarizer()
labelBinarizer.fit(Y)
kfolds = list(KFold(n_splits=model_selection_splits).split(range(X.shape[0])))
def estimate_average_score(prediction_model_factory: Callable[[], PredictionModel], random_seed: int) -> float:
set_random_seed(random_seed)
average_result = 0
with warnings.catch_warnings():
warnings.filterwarnings("ignore", category=ConvergenceWarning)
for train_indices, test_indices in kfolds:
model_instance = prediction_model_factory()
model_instance.fit(X[train_indices[:max_samples_per_split]], Y[train_indices[:max_samples_per_split]])
y_true = Y[test_indices[:max_samples_per_split]]
y_pred = model_instance.predict(X[test_indices[:max_samples_per_split]])
if labelBinarizer is not None:
y_true = labelBinarizer.transform(y_true)
y_pred = labelBinarizer.transform(y_pred)
average_result += metric(y_true, y_pred)
return average_result / model_selection_splits
random_seeds = np.random.randint(np.iinfo(np.int32).max, size=len(prediction_model_factories))
average_metric_scores = Parallel(n_jobs=n_jobs)(
delayed(estimate_average_score)(prediction_model_factory, random_seed)
for prediction_model_factory, random_seed in zip(prediction_model_factories, random_seeds)
)
return sorted(zip(prediction_model_factories, average_metric_scores), key=lambda x: x[1])[0][0]
| import warnings
from enum import Enum, auto
from functools import partial
from typing import Callable, List, Optional, Union
import numpy as np
import pandas as pd
from joblib import Parallel, delayed
from sklearn import metrics
from sklearn.exceptions import ConvergenceWarning
from sklearn.linear_model import LinearRegression, LogisticRegression
from sklearn.model_selection import KFold, train_test_split
from sklearn.preprocessing import MultiLabelBinarizer
from dowhy.gcm.cms import ProbabilisticCausalModel
from dowhy.gcm.fcms import AdditiveNoiseModel, ClassificationModel, ClassifierFCM, PredictionModel
from dowhy.gcm.graph import CAUSAL_MECHANISM, get_ordered_predecessors, is_root_node, validate_causal_model_assignment
from dowhy.gcm.ml import (
create_hist_gradient_boost_classifier,
create_hist_gradient_boost_regressor,
create_lasso_regressor,
create_linear_regressor,
create_logistic_regression_classifier,
create_random_forest_regressor,
create_ridge_regressor,
create_support_vector_regressor,
)
from dowhy.gcm.ml.classification import (
create_ada_boost_classifier,
create_extra_trees_classifier,
create_gaussian_nb_classifier,
create_knn_classifier,
create_polynom_logistic_regression_classifier,
create_random_forest_classifier,
create_support_vector_classifier,
)
from dowhy.gcm.ml.regression import (
create_ada_boost_regressor,
create_elastic_net_regressor,
create_extra_trees_regressor,
create_knn_regressor,
create_polynom_regressor,
)
from dowhy.gcm.stochastic_models import EmpiricalDistribution
from dowhy.gcm.util.general import (
apply_one_hot_encoding,
fit_one_hot_encoders,
is_categorical,
set_random_seed,
shape_into_2d,
)
_LIST_OF_POTENTIAL_CLASSIFIERS = [
partial(create_logistic_regression_classifier, max_iter=1000),
partial(create_polynom_logistic_regression_classifier, max_iter=1000),
create_random_forest_classifier,
create_hist_gradient_boost_classifier,
create_extra_trees_classifier,
create_support_vector_classifier,
create_knn_classifier,
create_gaussian_nb_classifier,
create_ada_boost_classifier,
]
_LIST_OF_POTENTIAL_REGRESSORS = [
create_linear_regressor,
create_ridge_regressor,
create_polynom_regressor,
partial(create_lasso_regressor, max_iter=5000),
partial(create_elastic_net_regressor, max_iter=5000),
create_random_forest_regressor,
create_hist_gradient_boost_regressor,
create_support_vector_regressor,
create_extra_trees_regressor,
create_knn_regressor,
create_ada_boost_regressor,
]
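# Illustrative sketch (an assumption, not the actual dowhy implementation): a
# "polynom" regressor of the kind listed above is typically a pipeline of a
# polynomial feature expansion followed by a plain linear model, roughly:
#
#     from sklearn.linear_model import LinearRegression
#     from sklearn.pipeline import make_pipeline
#     from sklearn.preprocessing import PolynomialFeatures
#
#     def create_polynom_regressor(degree=3):
#         return SklearnRegressionModel(make_pipeline(PolynomialFeatures(degree=degree), LinearRegression()))
#
# SklearnRegressionModel here refers to the wrapper used by the other factories in dowhy.gcm.ml.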
class AssignmentQuality(Enum):
GOOD = auto()
BETTER = auto()
def assign_causal_mechanisms(
causal_model: ProbabilisticCausalModel,
based_on: pd.DataFrame,
quality: AssignmentQuality = AssignmentQuality.GOOD,
override_models: bool = False,
) -> None:
"""Automatically assigns appropriate causal models. If causal models are already assigned to nodes and
override_models is set to False, this function only validates the assignments with respect to the graph structure.
Here, the validation checks whether root nodes have StochasticModels and non-root nodes have
ConditionalStochasticModels assigned.
:param causal_model: The causal model whose nodes should be assigned causal models.
:param based_on: Jointly sampled data corresponding to the nodes of the given graph.
:param quality: AssignmentQuality for the automatic model selection and model accuracy. This changes the type of
prediction model and time spent on the selection. Options are:
- AssignmentQuality.GOOD: Checks whether the data is linear. If the data is linear, an OLS model is
used, otherwise a gradient boost model.
Model selection speed: Fast
Model training speed: Fast
Model inference speed: Fast
Model accuracy: Medium
- AssignmentQuality.BETTER: Compares multiple model types and uses the one with the best performance
averaged over multiple splits of the training data. By default, the model with the smallest root mean
squared error is selected for regression problems and the model with the highest F1 score is selected for
classification problems. For a list of possible models, see _LIST_OF_POTENTIAL_REGRESSORS and
_LIST_OF_POTENTIAL_CLASSIFIERS, respectively.
Model selection speed: Medium
Model training speed: Fast
Model inference speed: Fast
Model accuracy: Good
:param override_models: If set to True, existing model assignments are replaced with automatically selected
ones. If set to False, the assigned models are only validated with respect to the graph structure.
:return: None
"""
for node in causal_model.graph.nodes:
if not override_models and CAUSAL_MECHANISM in causal_model.graph.nodes[node]:
validate_causal_model_assignment(causal_model.graph, node)
continue
if is_root_node(causal_model.graph, node):
causal_model.set_causal_mechanism(node, EmpiricalDistribution())
else:
prediction_model = select_model(
based_on[get_ordered_predecessors(causal_model.graph, node)].to_numpy(),
based_on[node].to_numpy(),
quality,
)
if isinstance(prediction_model, ClassificationModel):
causal_model.set_causal_mechanism(node, ClassifierFCM(prediction_model))
else:
causal_model.set_causal_mechanism(node, AdditiveNoiseModel(prediction_model))
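# Usage sketch (illustrative; the graph, node names, and data below are made up):
#
#     import networkx as nx
#
#     X = np.random.normal(0, 1, 1000)
#     Y = 2 * X + np.random.normal(0, 0.1, 1000)
#     causal_model = ProbabilisticCausalModel(nx.DiGraph([("X", "Y")]))
#     assign_causal_mechanisms(causal_model, pd.DataFrame({"X": X, "Y": Y}), quality=AssignmentQuality.GOOD)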
def select_model(
X: np.ndarray, Y: np.ndarray, model_selection_quality: AssignmentQuality
) -> Union[PredictionModel, ClassificationModel]:
target_is_categorical = is_categorical(Y)
if model_selection_quality == AssignmentQuality.GOOD:
use_linear_prediction_models = has_linear_relationship(X, Y)
if target_is_categorical:
if use_linear_prediction_models:
return create_logistic_regression_classifier(max_iter=1000)
else:
return create_hist_gradient_boost_classifier()
else:
if use_linear_prediction_models:
return find_best_model(
[create_linear_regressor, create_polynom_regressor], X, Y, model_selection_splits=2
)()
else:
return find_best_model(
[create_hist_gradient_boost_regressor, create_polynom_regressor], X, Y, model_selection_splits=2
)()
elif model_selection_quality == AssignmentQuality.BETTER:
if target_is_categorical:
return find_best_model(_LIST_OF_POTENTIAL_CLASSIFIERS, X, Y)()
else:
return find_best_model(_LIST_OF_POTENTIAL_REGRESSORS, X, Y)()
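# Usage sketch (illustrative; the data is made up): select_model returns a fresh,
# unfitted model instance, which still needs to be fitted before prediction.
#
#     X = np.random.normal(0, 1, (1000, 3))
#     Y = X @ np.array([1.0, -2.0, 0.5])
#     model = select_model(X, Y, AssignmentQuality.GOOD)
#     model.fit(X, Y)
#     predictions = model.predict(X)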
def has_linear_relationship(X: np.ndarray, Y: np.ndarray, max_num_samples: int = 3000) -> bool:
X, Y = shape_into_2d(X, Y)
target_is_categorical = is_categorical(Y)
# Making sure there are at least 30% test samples.
num_trainings_samples = min(max_num_samples, round(X.shape[0] * 0.7))
num_test_samples = min(X.shape[0] - num_trainings_samples, max_num_samples)
if target_is_categorical:
all_classes, indices, counts = np.unique(Y, return_counts=True, return_index=True)
for i in range(all_classes.size):
# Making sure that there are at least 2 samples from each class (here, simply duplicate the point).
if counts[i] == 1:
X = np.row_stack([X, X[indices[i], :]])
Y = np.row_stack([Y, Y[indices[i], :]])
x_train, x_test, y_train, y_test = train_test_split(
X, Y, train_size=num_trainings_samples, test_size=num_test_samples, stratify=Y
)
else:
x_train, x_test, y_train, y_test = train_test_split(
X, Y, train_size=num_trainings_samples, test_size=num_test_samples
)
one_hot_encoder = fit_one_hot_encoders(np.row_stack([x_train, x_test]))
x_train = apply_one_hot_encoding(x_train, one_hot_encoder)
x_test = apply_one_hot_encoding(x_test, one_hot_encoder)
if target_is_categorical:
linear_mdl = LogisticRegression(max_iter=1000)
nonlinear_mdl = create_hist_gradient_boost_classifier()
linear_mdl.fit(x_train, y_train.squeeze())
nonlinear_mdl.fit(x_train, y_train.squeeze())
# Compare number of correct classifications.
return np.sum(shape_into_2d(linear_mdl.predict(x_test)) == y_test) >= np.sum(
shape_into_2d(nonlinear_mdl.predict(x_test)) == y_test
)
else:
linear_mdl = LinearRegression()
nonlinear_mdl = create_hist_gradient_boost_regressor()
linear_mdl.fit(x_train, y_train.squeeze())
nonlinear_mdl.fit(x_train, y_train.squeeze())
return np.mean((y_test - shape_into_2d(linear_mdl.predict(x_test))) ** 2) <= np.mean(
(y_test - shape_into_2d(nonlinear_mdl.predict(x_test))) ** 2
)
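# Sanity-check sketch (illustrative): on linearly generated data this heuristic
# should typically return True, and on strongly non-linear data False.
#
#     X = np.random.normal(0, 1, (2000, 2))
#     has_linear_relationship(X, X @ np.array([2.0, -1.0]))        # usually True
#     has_linear_relationship(X, np.sum(np.log(abs(X)), axis=1))   # usually False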
def find_best_model(
prediction_model_factories: List[Callable[[], PredictionModel]],
X: np.ndarray,
Y: np.ndarray,
metric: Optional[Callable[[np.ndarray, np.ndarray], float]] = None,
max_samples_per_split: int = 10000,
model_selection_splits: int = 5,
n_jobs: int = -1,
) -> Callable[[], PredictionModel]:
X, Y = shape_into_2d(X, Y)
is_classification_problem = isinstance(prediction_model_factories[0](), ClassificationModel)
if metric is None:
if is_classification_problem:
metric = lambda y_true, y_preds: -metrics.f1_score(
y_true, y_preds, average="macro", zero_division=0
) # Higher score is better
else:
metric = metrics.mean_squared_error
labelBinarizer = None
if is_classification_problem:
labelBinarizer = MultiLabelBinarizer()
labelBinarizer.fit(Y)
kfolds = list(KFold(n_splits=model_selection_splits).split(range(X.shape[0])))
def estimate_average_score(prediction_model_factory: Callable[[], PredictionModel], random_seed: int) -> float:
set_random_seed(random_seed)
average_result = 0
with warnings.catch_warnings():
warnings.filterwarnings("ignore", category=ConvergenceWarning)
for train_indices, test_indices in kfolds:
model_instance = prediction_model_factory()
model_instance.fit(X[train_indices[:max_samples_per_split]], Y[train_indices[:max_samples_per_split]])
y_true = Y[test_indices[:max_samples_per_split]]
y_pred = model_instance.predict(X[test_indices[:max_samples_per_split]])
if labelBinarizer is not None:
y_true = labelBinarizer.transform(y_true)
y_pred = labelBinarizer.transform(y_pred)
average_result += metric(y_true, y_pred)
return average_result / model_selection_splits
random_seeds = np.random.randint(np.iinfo(np.int32).max, size=len(prediction_model_factories))
average_metric_scores = Parallel(n_jobs=n_jobs)(
delayed(estimate_average_score)(prediction_model_factory, random_seed)
for prediction_model_factory, random_seed in zip(prediction_model_factories, random_seeds)
)
return sorted(zip(prediction_model_factories, average_metric_scores), key=lambda x: x[1])[0][0]
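# Note (illustrative): find_best_model returns the best-scoring *factory*, not a
# fitted model, which is why the call sites above immediately invoke the result:
#
#     best_factory = find_best_model([create_linear_regressor, create_polynom_regressor], X, Y)
#     best_model = best_factory()  # instantiate, then fit as usual
#     best_model.fit(X, Y)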
| bloebp | fb5b4d52606826cd54a0c2436193753ff06c4855 | 2ed7cf4e93e01de4f16ebd2f66af07196aa1065f | Address the typo `creat_` everywhere. | kailashbuki | 263 |
py-why/dowhy | 737 | Add polynom regressor and classifier to gcm | This replaces the ProductRegressor.
Signed-off-by: Patrick Bloebaum <[email protected]> | null | 2022-11-01 15:56:18+00:00 | 2022-11-04 17:32:01+00:00 | tests/gcm/test_auto.py | import networkx as nx
import numpy as np
import pandas as pd
from flaky import flaky
from sklearn.ensemble import HistGradientBoostingClassifier, HistGradientBoostingRegressor
from sklearn.linear_model import ElasticNetCV, LassoCV, LinearRegression, LogisticRegression, RidgeCV
from sklearn.naive_bayes import GaussianNB
from dowhy.gcm import ProbabilisticCausalModel
from dowhy.gcm.auto import AssignmentQuality, assign_causal_mechanisms
def _generate_linear_regression_data():
X = np.random.normal(0, 1, (1000, 5))
Y = np.sum(X * np.random.uniform(-5, 5, X.shape[1]), axis=1)
return X, Y
def _generate_non_linear_regression_data():
X = np.random.normal(0, 1, (1000, 5))
Y = np.sum(X**2, axis=1)
return X, Y
def _generate_linear_classification_data():
X = np.random.normal(0, 1, (1000, 5))
Y = (np.sum(X * np.random.uniform(-5, 5, X.shape[1]), axis=1) > 0).astype(str)
return X, Y
def _generate_non_classification_data():
X = np.random.normal(0, 1, (1000, 5))
Y = (np.sum(np.exp(X), axis=1) > np.median(np.sum(np.exp(X), axis=1))).astype(str)
return X, Y
@flaky(max_runs=3)
def test_given_linear_regression_problem_when_auto_assign_causal_models_with_good_quality_returns_linear_model():
X, Y = _generate_linear_regression_data()
causal_model = ProbabilisticCausalModel(
nx.DiGraph([("X0", "Y"), ("X1", "Y"), ("X2", "Y"), ("X3", "Y"), ("X4", "Y")])
)
data = {"X" + str(i): X[:, i] for i in range(X.shape[1])}
data.update({"Y": Y})
assign_causal_mechanisms(causal_model, pd.DataFrame(data), quality=AssignmentQuality.GOOD)
assert isinstance(causal_model.causal_mechanism("Y").prediction_model.sklearn_model, LinearRegression)
@flaky(max_runs=3)
def test_given_linear_regression_problem_when_auto_assign_causal_models_with_better_quality_returns_linear_model():
X, Y = _generate_linear_regression_data()
causal_model = ProbabilisticCausalModel(
nx.DiGraph([("X0", "Y"), ("X1", "Y"), ("X2", "Y"), ("X3", "Y"), ("X4", "Y")])
)
data = {"X" + str(i): X[:, i] for i in range(X.shape[1])}
data.update({"Y": Y})
assign_causal_mechanisms(causal_model, pd.DataFrame(data), quality=AssignmentQuality.BETTER)
assert isinstance(causal_model.causal_mechanism("Y").prediction_model.sklearn_model, LinearRegression)
@flaky(max_runs=3)
def test_given_non_linear_regression_problem_when_auto_assign_causal_models_with_good_quality_returns_non_linear_model():
X, Y = _generate_non_linear_regression_data()
causal_model = ProbabilisticCausalModel(
nx.DiGraph([("X0", "Y"), ("X1", "Y"), ("X2", "Y"), ("X3", "Y"), ("X4", "Y")])
)
data = {"X" + str(i): X[:, i] for i in range(X.shape[1])}
data.update({"Y": Y})
assign_causal_mechanisms(causal_model, pd.DataFrame(data), quality=AssignmentQuality.GOOD)
assert isinstance(causal_model.causal_mechanism("Y").prediction_model.sklearn_model, HistGradientBoostingRegressor)
@flaky(max_runs=3)
def test_given_non_linear_regression_problem_when_auto_assign_causal_models_with_better_quality_returns_non_linear_model():
X, Y = _generate_non_linear_regression_data()
causal_model = ProbabilisticCausalModel(
nx.DiGraph([("X0", "Y"), ("X1", "Y"), ("X2", "Y"), ("X3", "Y"), ("X4", "Y")])
)
data = {"X" + str(i): X[:, i] for i in range(X.shape[1])}
data.update({"Y": Y})
assign_causal_mechanisms(causal_model, pd.DataFrame(data), quality=AssignmentQuality.BETTER)
assert not isinstance(causal_model.causal_mechanism("Y").prediction_model.sklearn_model, LinearRegression)
assert not isinstance(causal_model.causal_mechanism("Y").prediction_model.sklearn_model, LassoCV)
assert not isinstance(causal_model.causal_mechanism("Y").prediction_model.sklearn_model, ElasticNetCV)
assert not isinstance(causal_model.causal_mechanism("Y").prediction_model.sklearn_model, RidgeCV)
@flaky(max_runs=3)
def test_given_linear_classification_problem_when_auto_assign_causal_models_with_good_quality_returns_linear_model():
X, Y = _generate_linear_classification_data()
causal_model = ProbabilisticCausalModel(
nx.DiGraph([("X0", "Y"), ("X1", "Y"), ("X2", "Y"), ("X3", "Y"), ("X4", "Y")])
)
data = {"X" + str(i): X[:, i] for i in range(X.shape[1])}
data.update({"Y": Y})
assign_causal_mechanisms(causal_model, pd.DataFrame(data), quality=AssignmentQuality.GOOD)
assert isinstance(causal_model.causal_mechanism("Y").classifier_model.sklearn_model, LogisticRegression)
@flaky(max_runs=3)
def test_given_linear_classification_problem_when_auto_assign_causal_models_with_better_quality_returns_linear_model():
X, Y = _generate_linear_classification_data()
causal_model = ProbabilisticCausalModel(
nx.DiGraph([("X0", "Y"), ("X1", "Y"), ("X2", "Y"), ("X3", "Y"), ("X4", "Y")])
)
data = {"X" + str(i): X[:, i] for i in range(X.shape[1])}
data.update({"Y": Y})
assign_causal_mechanisms(causal_model, pd.DataFrame(data), quality=AssignmentQuality.BETTER)
assert isinstance(causal_model.causal_mechanism("Y").classifier_model.sklearn_model, LogisticRegression)
@flaky(max_runs=3)
def test_given_non_linear_classification_problem_when_auto_assign_causal_models_with_good_quality_returns_non_linear_model():
X, Y = _generate_non_classification_data()
causal_model = ProbabilisticCausalModel(
nx.DiGraph([("X0", "Y"), ("X1", "Y"), ("X2", "Y"), ("X3", "Y"), ("X4", "Y")])
)
data = {"X" + str(i): X[:, i] for i in range(X.shape[1])}
data.update({"Y": Y})
assign_causal_mechanisms(causal_model, pd.DataFrame(data), quality=AssignmentQuality.GOOD)
assert isinstance(causal_model.causal_mechanism("Y").classifier_model.sklearn_model, HistGradientBoostingClassifier)
@flaky(max_runs=3)
def test_given_non_linear_classification_problem_when_auto_assign_causal_models_with_better_quality_returns_non_linear_model():
X, Y = _generate_non_classification_data()
causal_model = ProbabilisticCausalModel(
nx.DiGraph([("X0", "Y"), ("X1", "Y"), ("X2", "Y"), ("X3", "Y"), ("X4", "Y")])
)
data = {"X" + str(i): X[:, i] for i in range(X.shape[1])}
data.update({"Y": Y})
assign_causal_mechanisms(causal_model, pd.DataFrame(data), quality=AssignmentQuality.BETTER)
assert not isinstance(causal_model.causal_mechanism("Y").classifier_model.sklearn_model, LogisticRegression)
assert not isinstance(causal_model.causal_mechanism("Y").classifier_model.sklearn_model, GaussianNB)
def test_when_auto_called_from_main_namespace_returns_no_attribute_error():
from dowhy import gcm
_ = gcm.auto.AssignmentQuality.GOOD
| import networkx as nx
import numpy as np
import pandas as pd
from flaky import flaky
from sklearn.ensemble import HistGradientBoostingClassifier, HistGradientBoostingRegressor
from sklearn.linear_model import ElasticNetCV, LassoCV, LinearRegression, LogisticRegression, RidgeCV
from sklearn.naive_bayes import GaussianNB
from dowhy.gcm import ProbabilisticCausalModel
from dowhy.gcm.auto import AssignmentQuality, assign_causal_mechanisms
def _generate_linear_regression_data():
X = np.random.normal(0, 1, (1000, 5))
Y = np.sum(X * np.random.uniform(-5, 5, X.shape[1]), axis=1)
return X, Y
def _generate_non_linear_regression_data():
X = np.random.normal(0, 1, (1000, 5))
Y = np.sum(np.log(abs(X)), axis=1)
return X, Y
def _generate_linear_classification_data():
X = np.random.normal(0, 1, (1000, 5))
Y = (np.sum(X * np.random.uniform(-5, 5, X.shape[1]), axis=1) > 0).astype(str)
return X, Y
def _generate_non_classification_data():
X = np.random.normal(0, 1, (1000, 5))
Y = (np.sum(np.exp(X), axis=1) > np.median(np.sum(np.exp(X), axis=1))).astype(str)
return X, Y
@flaky(max_runs=3)
def test_given_linear_regression_problem_when_auto_assign_causal_models_with_good_quality_returns_linear_model():
X, Y = _generate_linear_regression_data()
causal_model = ProbabilisticCausalModel(
nx.DiGraph([("X0", "Y"), ("X1", "Y"), ("X2", "Y"), ("X3", "Y"), ("X4", "Y")])
)
data = {"X" + str(i): X[:, i] for i in range(X.shape[1])}
data.update({"Y": Y})
assign_causal_mechanisms(causal_model, pd.DataFrame(data), quality=AssignmentQuality.GOOD)
assert isinstance(causal_model.causal_mechanism("Y").prediction_model.sklearn_model, LinearRegression)
@flaky(max_runs=3)
def test_given_linear_regression_problem_when_auto_assign_causal_models_with_better_quality_returns_linear_model():
X, Y = _generate_linear_regression_data()
causal_model = ProbabilisticCausalModel(
nx.DiGraph([("X0", "Y"), ("X1", "Y"), ("X2", "Y"), ("X3", "Y"), ("X4", "Y")])
)
data = {"X" + str(i): X[:, i] for i in range(X.shape[1])}
data.update({"Y": Y})
assign_causal_mechanisms(causal_model, pd.DataFrame(data), quality=AssignmentQuality.BETTER)
assert isinstance(causal_model.causal_mechanism("Y").prediction_model.sklearn_model, LinearRegression)
@flaky(max_runs=3)
def test_given_non_linear_regression_problem_when_auto_assign_causal_models_with_good_quality_returns_non_linear_model():
X, Y = _generate_non_linear_regression_data()
causal_model = ProbabilisticCausalModel(
nx.DiGraph([("X0", "Y"), ("X1", "Y"), ("X2", "Y"), ("X3", "Y"), ("X4", "Y")])
)
data = {"X" + str(i): X[:, i] for i in range(X.shape[1])}
data.update({"Y": Y})
assign_causal_mechanisms(causal_model, pd.DataFrame(data), quality=AssignmentQuality.GOOD)
assert isinstance(causal_model.causal_mechanism("Y").prediction_model.sklearn_model, HistGradientBoostingRegressor)
@flaky(max_runs=3)
def test_given_non_linear_regression_problem_when_auto_assign_causal_models_with_better_quality_returns_non_linear_model():
X, Y = _generate_non_linear_regression_data()
causal_model = ProbabilisticCausalModel(
nx.DiGraph([("X0", "Y"), ("X1", "Y"), ("X2", "Y"), ("X3", "Y"), ("X4", "Y")])
)
data = {"X" + str(i): X[:, i] for i in range(X.shape[1])}
data.update({"Y": Y})
assign_causal_mechanisms(causal_model, pd.DataFrame(data), quality=AssignmentQuality.BETTER)
assert not isinstance(causal_model.causal_mechanism("Y").prediction_model.sklearn_model, LinearRegression)
assert not isinstance(causal_model.causal_mechanism("Y").prediction_model.sklearn_model, LassoCV)
assert not isinstance(causal_model.causal_mechanism("Y").prediction_model.sklearn_model, ElasticNetCV)
assert not isinstance(causal_model.causal_mechanism("Y").prediction_model.sklearn_model, RidgeCV)
@flaky(max_runs=3)
def test_given_linear_classification_problem_when_auto_assign_causal_models_with_good_quality_returns_linear_model():
X, Y = _generate_linear_classification_data()
causal_model = ProbabilisticCausalModel(
nx.DiGraph([("X0", "Y"), ("X1", "Y"), ("X2", "Y"), ("X3", "Y"), ("X4", "Y")])
)
data = {"X" + str(i): X[:, i] for i in range(X.shape[1])}
data.update({"Y": Y})
assign_causal_mechanisms(causal_model, pd.DataFrame(data), quality=AssignmentQuality.GOOD)
assert isinstance(causal_model.causal_mechanism("Y").classifier_model.sklearn_model, LogisticRegression)
@flaky(max_runs=3)
def test_given_linear_classification_problem_when_auto_assign_causal_models_with_better_quality_returns_linear_model():
X, Y = _generate_linear_classification_data()
causal_model = ProbabilisticCausalModel(
nx.DiGraph([("X0", "Y"), ("X1", "Y"), ("X2", "Y"), ("X3", "Y"), ("X4", "Y")])
)
data = {"X" + str(i): X[:, i] for i in range(X.shape[1])}
data.update({"Y": Y})
assign_causal_mechanisms(causal_model, pd.DataFrame(data), quality=AssignmentQuality.BETTER)
assert isinstance(causal_model.causal_mechanism("Y").classifier_model.sklearn_model, LogisticRegression)
@flaky(max_runs=3)
def test_given_non_linear_classification_problem_when_auto_assign_causal_models_with_good_quality_returns_non_linear_model():
X, Y = _generate_non_classification_data()
causal_model = ProbabilisticCausalModel(
nx.DiGraph([("X0", "Y"), ("X1", "Y"), ("X2", "Y"), ("X3", "Y"), ("X4", "Y")])
)
data = {"X" + str(i): X[:, i] for i in range(X.shape[1])}
data.update({"Y": Y})
assign_causal_mechanisms(causal_model, pd.DataFrame(data), quality=AssignmentQuality.GOOD)
assert isinstance(causal_model.causal_mechanism("Y").classifier_model.sklearn_model, HistGradientBoostingClassifier)
@flaky(max_runs=3)
def test_given_non_linear_classification_problem_when_auto_assign_causal_models_with_better_quality_returns_non_linear_model():
X, Y = _generate_non_classification_data()
causal_model = ProbabilisticCausalModel(
nx.DiGraph([("X0", "Y"), ("X1", "Y"), ("X2", "Y"), ("X3", "Y"), ("X4", "Y")])
)
data = {"X" + str(i): X[:, i] for i in range(X.shape[1])}
data.update({"Y": Y})
assign_causal_mechanisms(causal_model, pd.DataFrame(data), quality=AssignmentQuality.BETTER)
assert not isinstance(causal_model.causal_mechanism("Y").classifier_model.sklearn_model, LogisticRegression)
assert not isinstance(causal_model.causal_mechanism("Y").classifier_model.sklearn_model, GaussianNB)
def test_when_auto_called_from_main_namespace_returns_no_attribute_error():
from dowhy import gcm
_ = gcm.auto.AssignmentQuality.GOOD
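# Note (illustrative, not part of the original test module): these tests draw
# fresh random data on every run, which is why they are wrapped in
# @flaky(max_runs=3). A sketch of an alternative that makes the runs
# deterministic would be to seed NumPy inside the data helpers, e.g.:
#
#     np.random.seed(0)  # arbitrary illustrative seed value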
| bloebp | fb5b4d52606826cd54a0c2436193753ff06c4855 | 2ed7cf4e93e01de4f16ebd2f66af07196aa1065f | Is there a reason why you moved to this generative model? | kailashbuki | 264
py-why/dowhy | 737 | Add polynom regressor and classifier to gcm | This replaces the ProductRegressor.
Signed-off-by: Patrick Bloebaum <[email protected]> | null | 2022-11-01 15:56:18+00:00 | 2022-11-04 17:32:01+00:00 | tests/gcm/test_auto.py | import networkx as nx
import numpy as np
import pandas as pd
from flaky import flaky
from sklearn.ensemble import HistGradientBoostingClassifier, HistGradientBoostingRegressor
from sklearn.linear_model import ElasticNetCV, LassoCV, LinearRegression, LogisticRegression, RidgeCV
from sklearn.naive_bayes import GaussianNB
from dowhy.gcm import ProbabilisticCausalModel
from dowhy.gcm.auto import AssignmentQuality, assign_causal_mechanisms
def _generate_linear_regression_data():
X = np.random.normal(0, 1, (1000, 5))
Y = np.sum(X * np.random.uniform(-5, 5, X.shape[1]), axis=1)
return X, Y
def _generate_non_linear_regression_data():
X = np.random.normal(0, 1, (1000, 5))
Y = np.sum(X**2, axis=1)
return X, Y
def _generate_linear_classification_data():
X = np.random.normal(0, 1, (1000, 5))
Y = (np.sum(X * np.random.uniform(-5, 5, X.shape[1]), axis=1) > 0).astype(str)
return X, Y
def _generate_non_classification_data():
X = np.random.normal(0, 1, (1000, 5))
Y = (np.sum(np.exp(X), axis=1) > np.median(np.sum(np.exp(X), axis=1))).astype(str)
return X, Y
@flaky(max_runs=3)
def test_given_linear_regression_problem_when_auto_assign_causal_models_with_good_quality_returns_linear_model():
X, Y = _generate_linear_regression_data()
causal_model = ProbabilisticCausalModel(
nx.DiGraph([("X0", "Y"), ("X1", "Y"), ("X2", "Y"), ("X3", "Y"), ("X4", "Y")])
)
data = {"X" + str(i): X[:, i] for i in range(X.shape[1])}
data.update({"Y": Y})
assign_causal_mechanisms(causal_model, pd.DataFrame(data), quality=AssignmentQuality.GOOD)
assert isinstance(causal_model.causal_mechanism("Y").prediction_model.sklearn_model, LinearRegression)
@flaky(max_runs=3)
def test_given_linear_regression_problem_when_auto_assign_causal_models_with_better_quality_returns_linear_model():
X, Y = _generate_linear_regression_data()
causal_model = ProbabilisticCausalModel(
nx.DiGraph([("X0", "Y"), ("X1", "Y"), ("X2", "Y"), ("X3", "Y"), ("X4", "Y")])
)
data = {"X" + str(i): X[:, i] for i in range(X.shape[1])}
data.update({"Y": Y})
assign_causal_mechanisms(causal_model, pd.DataFrame(data), quality=AssignmentQuality.BETTER)
assert isinstance(causal_model.causal_mechanism("Y").prediction_model.sklearn_model, LinearRegression)
@flaky(max_runs=3)
def test_given_non_linear_regression_problem_when_auto_assign_causal_models_with_good_quality_returns_non_linear_model():
X, Y = _generate_non_linear_regression_data()
causal_model = ProbabilisticCausalModel(
nx.DiGraph([("X0", "Y"), ("X1", "Y"), ("X2", "Y"), ("X3", "Y"), ("X4", "Y")])
)
data = {"X" + str(i): X[:, i] for i in range(X.shape[1])}
data.update({"Y": Y})
assign_causal_mechanisms(causal_model, pd.DataFrame(data), quality=AssignmentQuality.GOOD)
assert isinstance(causal_model.causal_mechanism("Y").prediction_model.sklearn_model, HistGradientBoostingRegressor)
@flaky(max_runs=3)
def test_given_non_linear_regression_problem_when_auto_assign_causal_models_with_better_quality_returns_non_linear_model():
X, Y = _generate_non_linear_regression_data()
causal_model = ProbabilisticCausalModel(
nx.DiGraph([("X0", "Y"), ("X1", "Y"), ("X2", "Y"), ("X3", "Y"), ("X4", "Y")])
)
data = {"X" + str(i): X[:, i] for i in range(X.shape[1])}
data.update({"Y": Y})
assign_causal_mechanisms(causal_model, pd.DataFrame(data), quality=AssignmentQuality.BETTER)
assert not isinstance(causal_model.causal_mechanism("Y").prediction_model.sklearn_model, LinearRegression)
assert not isinstance(causal_model.causal_mechanism("Y").prediction_model.sklearn_model, LassoCV)
assert not isinstance(causal_model.causal_mechanism("Y").prediction_model.sklearn_model, ElasticNetCV)
assert not isinstance(causal_model.causal_mechanism("Y").prediction_model.sklearn_model, RidgeCV)
@flaky(max_runs=3)
def test_given_linear_classification_problem_when_auto_assign_causal_models_with_good_quality_returns_linear_model():
X, Y = _generate_linear_classification_data()
causal_model = ProbabilisticCausalModel(
nx.DiGraph([("X0", "Y"), ("X1", "Y"), ("X2", "Y"), ("X3", "Y"), ("X4", "Y")])
)
data = {"X" + str(i): X[:, i] for i in range(X.shape[1])}
data.update({"Y": Y})
assign_causal_mechanisms(causal_model, pd.DataFrame(data), quality=AssignmentQuality.GOOD)
assert isinstance(causal_model.causal_mechanism("Y").classifier_model.sklearn_model, LogisticRegression)
@flaky(max_runs=3)
def test_given_linear_classification_problem_when_auto_assign_causal_models_with_better_quality_returns_linear_model():
X, Y = _generate_linear_classification_data()
causal_model = ProbabilisticCausalModel(
nx.DiGraph([("X0", "Y"), ("X1", "Y"), ("X2", "Y"), ("X3", "Y"), ("X4", "Y")])
)
data = {"X" + str(i): X[:, i] for i in range(X.shape[1])}
data.update({"Y": Y})
assign_causal_mechanisms(causal_model, pd.DataFrame(data), quality=AssignmentQuality.BETTER)
assert isinstance(causal_model.causal_mechanism("Y").classifier_model.sklearn_model, LogisticRegression)
@flaky(max_runs=3)
def test_given_non_linear_classification_problem_when_auto_assign_causal_models_with_good_quality_returns_non_linear_model():
X, Y = _generate_non_classification_data()
causal_model = ProbabilisticCausalModel(
nx.DiGraph([("X0", "Y"), ("X1", "Y"), ("X2", "Y"), ("X3", "Y"), ("X4", "Y")])
)
data = {"X" + str(i): X[:, i] for i in range(X.shape[1])}
data.update({"Y": Y})
assign_causal_mechanisms(causal_model, pd.DataFrame(data), quality=AssignmentQuality.GOOD)
assert isinstance(causal_model.causal_mechanism("Y").classifier_model.sklearn_model, HistGradientBoostingClassifier)
@flaky(max_runs=3)
def test_given_non_linear_classification_problem_when_auto_assign_causal_models_with_better_quality_returns_non_linear_model():
X, Y = _generate_non_classification_data()
causal_model = ProbabilisticCausalModel(
nx.DiGraph([("X0", "Y"), ("X1", "Y"), ("X2", "Y"), ("X3", "Y"), ("X4", "Y")])
)
data = {"X" + str(i): X[:, i] for i in range(X.shape[1])}
data.update({"Y": Y})
assign_causal_mechanisms(causal_model, pd.DataFrame(data), quality=AssignmentQuality.BETTER)
assert not isinstance(causal_model.causal_mechanism("Y").classifier_model.sklearn_model, LogisticRegression)
assert not isinstance(causal_model.causal_mechanism("Y").classifier_model.sklearn_model, GaussianNB)
def test_when_auto_called_from_main_namespace_returns_no_attribute_error():
from dowhy import gcm
_ = gcm.auto.AssignmentQuality.GOOD
| import networkx as nx
import numpy as np
import pandas as pd
from flaky import flaky
from sklearn.ensemble import HistGradientBoostingClassifier, HistGradientBoostingRegressor
from sklearn.linear_model import ElasticNetCV, LassoCV, LinearRegression, LogisticRegression, RidgeCV
from sklearn.naive_bayes import GaussianNB
from dowhy.gcm import ProbabilisticCausalModel
from dowhy.gcm.auto import AssignmentQuality, assign_causal_mechanisms
def _generate_linear_regression_data():
X = np.random.normal(0, 1, (1000, 5))
Y = np.sum(X * np.random.uniform(-5, 5, X.shape[1]), axis=1)
return X, Y
def _generate_non_linear_regression_data():
X = np.random.normal(0, 1, (1000, 5))
Y = np.sum(np.log(abs(X)), axis=1)
return X, Y
def _generate_linear_classification_data():
X = np.random.normal(0, 1, (1000, 5))
Y = (np.sum(X * np.random.uniform(-5, 5, X.shape[1]), axis=1) > 0).astype(str)
return X, Y
def _generate_non_classification_data():
X = np.random.normal(0, 1, (1000, 5))
Y = (np.sum(np.exp(X), axis=1) > np.median(np.sum(np.exp(X), axis=1))).astype(str)
return X, Y
@flaky(max_runs=3)
def test_given_linear_regression_problem_when_auto_assign_causal_models_with_good_quality_returns_linear_model():
X, Y = _generate_linear_regression_data()
causal_model = ProbabilisticCausalModel(
nx.DiGraph([("X0", "Y"), ("X1", "Y"), ("X2", "Y"), ("X3", "Y"), ("X4", "Y")])
)
data = {"X" + str(i): X[:, i] for i in range(X.shape[1])}
data.update({"Y": Y})
assign_causal_mechanisms(causal_model, pd.DataFrame(data), quality=AssignmentQuality.GOOD)
assert isinstance(causal_model.causal_mechanism("Y").prediction_model.sklearn_model, LinearRegression)
@flaky(max_runs=3)
def test_given_linear_regression_problem_when_auto_assign_causal_models_with_better_quality_returns_linear_model():
X, Y = _generate_linear_regression_data()
causal_model = ProbabilisticCausalModel(
nx.DiGraph([("X0", "Y"), ("X1", "Y"), ("X2", "Y"), ("X3", "Y"), ("X4", "Y")])
)
data = {"X" + str(i): X[:, i] for i in range(X.shape[1])}
data.update({"Y": Y})
assign_causal_mechanisms(causal_model, pd.DataFrame(data), quality=AssignmentQuality.BETTER)
assert isinstance(causal_model.causal_mechanism("Y").prediction_model.sklearn_model, LinearRegression)
@flaky(max_runs=3)
def test_given_non_linear_regression_problem_when_auto_assign_causal_models_with_good_quality_returns_non_linear_model():
X, Y = _generate_non_linear_regression_data()
causal_model = ProbabilisticCausalModel(
nx.DiGraph([("X0", "Y"), ("X1", "Y"), ("X2", "Y"), ("X3", "Y"), ("X4", "Y")])
)
data = {"X" + str(i): X[:, i] for i in range(X.shape[1])}
data.update({"Y": Y})
assign_causal_mechanisms(causal_model, pd.DataFrame(data), quality=AssignmentQuality.GOOD)
assert isinstance(causal_model.causal_mechanism("Y").prediction_model.sklearn_model, HistGradientBoostingRegressor)
@flaky(max_runs=3)
def test_given_non_linear_regression_problem_when_auto_assign_causal_models_with_better_quality_returns_non_linear_model():
X, Y = _generate_non_linear_regression_data()
causal_model = ProbabilisticCausalModel(
nx.DiGraph([("X0", "Y"), ("X1", "Y"), ("X2", "Y"), ("X3", "Y"), ("X4", "Y")])
)
data = {"X" + str(i): X[:, i] for i in range(X.shape[1])}
data.update({"Y": Y})
assign_causal_mechanisms(causal_model, pd.DataFrame(data), quality=AssignmentQuality.BETTER)
assert not isinstance(causal_model.causal_mechanism("Y").prediction_model.sklearn_model, LinearRegression)
assert not isinstance(causal_model.causal_mechanism("Y").prediction_model.sklearn_model, LassoCV)
assert not isinstance(causal_model.causal_mechanism("Y").prediction_model.sklearn_model, ElasticNetCV)
assert not isinstance(causal_model.causal_mechanism("Y").prediction_model.sklearn_model, RidgeCV)
@flaky(max_runs=3)
def test_given_linear_classification_problem_when_auto_assign_causal_models_with_good_quality_returns_linear_model():
X, Y = _generate_linear_classification_data()
causal_model = ProbabilisticCausalModel(
nx.DiGraph([("X0", "Y"), ("X1", "Y"), ("X2", "Y"), ("X3", "Y"), ("X4", "Y")])
)
data = {"X" + str(i): X[:, i] for i in range(X.shape[1])}
data.update({"Y": Y})
assign_causal_mechanisms(causal_model, pd.DataFrame(data), quality=AssignmentQuality.GOOD)
assert isinstance(causal_model.causal_mechanism("Y").classifier_model.sklearn_model, LogisticRegression)
@flaky(max_runs=3)
def test_given_linear_classification_problem_when_auto_assign_causal_models_with_better_quality_returns_linear_model():
X, Y = _generate_linear_classification_data()
causal_model = ProbabilisticCausalModel(
nx.DiGraph([("X0", "Y"), ("X1", "Y"), ("X2", "Y"), ("X3", "Y"), ("X4", "Y")])
)
data = {"X" + str(i): X[:, i] for i in range(X.shape[1])}
data.update({"Y": Y})
assign_causal_mechanisms(causal_model, pd.DataFrame(data), quality=AssignmentQuality.BETTER)
assert isinstance(causal_model.causal_mechanism("Y").classifier_model.sklearn_model, LogisticRegression)
@flaky(max_runs=3)
def test_given_non_linear_classification_problem_when_auto_assign_causal_models_with_good_quality_returns_non_linear_model():
X, Y = _generate_non_classification_data()
causal_model = ProbabilisticCausalModel(
nx.DiGraph([("X0", "Y"), ("X1", "Y"), ("X2", "Y"), ("X3", "Y"), ("X4", "Y")])
)
data = {"X" + str(i): X[:, i] for i in range(X.shape[1])}
data.update({"Y": Y})
assign_causal_mechanisms(causal_model, pd.DataFrame(data), quality=AssignmentQuality.GOOD)
assert isinstance(causal_model.causal_mechanism("Y").classifier_model.sklearn_model, HistGradientBoostingClassifier)
@flaky(max_runs=3)
def test_given_non_linear_classification_problem_when_auto_assign_causal_models_with_better_quality_returns_non_linear_model():
X, Y = _generate_non_classification_data()
causal_model = ProbabilisticCausalModel(
nx.DiGraph([("X0", "Y"), ("X1", "Y"), ("X2", "Y"), ("X3", "Y"), ("X4", "Y")])
)
data = {"X" + str(i): X[:, i] for i in range(X.shape[1])}
data.update({"Y": Y})
assign_causal_mechanisms(causal_model, pd.DataFrame(data), quality=AssignmentQuality.BETTER)
assert not isinstance(causal_model.causal_mechanism("Y").classifier_model.sklearn_model, LogisticRegression)
assert not isinstance(causal_model.causal_mechanism("Y").classifier_model.sklearn_model, GaussianNB)
def test_when_auto_called_from_main_namespace_returns_no_attribute_error():
from dowhy import gcm
_ = gcm.auto.AssignmentQuality.GOOD
| bloebp | fb5b4d52606826cd54a0c2436193753ff06c4855 | 2ed7cf4e93e01de4f16ebd2f66af07196aa1065f | Wanted to have non-linear data that cannot be captured by a model with polynomial features (here, `X**2` would be captured by it with a degree 2). | bloebp | 265
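The reviewer's point above can be sketched as follows (illustrative only, not part of the PR): a degree-2 polynomial regressor represents Y = sum(X**2) exactly, whereas Y = sum(log|X|) is not a finite-degree polynomial in X and therefore stays genuinely non-linear for the model-selection test.

    import numpy as np
    from sklearn.linear_model import LinearRegression
    from sklearn.preprocessing import PolynomialFeatures

    X = np.random.normal(0, 1, (1000, 5))
    phi = PolynomialFeatures(degree=2).fit_transform(X)
    y_square, y_log = np.sum(X**2, axis=1), np.sum(np.log(abs(X)), axis=1)
    print(LinearRegression().fit(phi, y_square).score(phi, y_square))  # ~1.0: captured
    print(LinearRegression().fit(phi, y_log).score(phi, y_log))        # < 1.0: not captured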
py-why/dowhy | 736 | Add independence test based on the General Covariance Measure | Signed-off-by: Patrick Bloebaum <[email protected]> | null | 2022-11-01 01:38:35+00:00 | 2022-11-22 17:51:14+00:00 | dowhy/gcm/independence_test/__init__.py | from .kernel import approx_kernel_based, kernel_based
from .regression import regression_based
def independence_test(X, Y, conditioned_on=None, method="kernel"):
"""Performs a (conditional) independence test.
Three methods for (conditional) independence testing are supported at the moment:
* `kernel`: Kernel-based (conditional) independence test.
* K. Zhang, J. Peters, D. Janzing, B. Schölkopf. *Kernel-based Conditional Independence Test and Application in Causal Discovery*. UAI'11, Pages 804–813, 2011.
* A. Gretton, K. Fukumizu, C.-H. Teo, L. Song, B. Schölkopf, A. Smola. *A Kernel Statistical Test of Independence*. NIPS 21, 2007.
Here, we utilize the implementations of the https://github.com/cmu-phil/causal-learn package.
* `approx_kernel`: Approximate kernel-based (conditional) independence test.
* E. Strobl, K. Zhang, S. Visweswaran. *Approximate kernel-based conditional independence tests for fast non-parametric causal discovery*. Journal of Causal Inference, 2019.
* `regression`: Regression based (conditional) independence test using a f-test. See :func:`~dowhy.gcm.regression_based` for more details.
:param X: Observations of X.
:param Y: Observations of Y.
:param conditioned_on: Observations of conditioning variable if we want to perform a conditional independence test. By default, independence test is carried out.
:param method: Method for conditional independence test. The choices are:
`kernel` (default): :func:`~dowhy.gcm.kernel_based` (conditional) independence test.
`approx_kernel`: :func:`~dowhy.gcm.approx_kernel_based` (conditional) independence test.
`regression`: :func:`~dowhy.gcm.regression_based` (conditional) independence test.
For more information about these methods, see above.
:return: p-value of the (conditional) independence test. (Conditional) Independence is the null hypothesis.
"""
if method == "kernel":
return kernel_based(X, Y, Z=conditioned_on)
elif method == "approx_kernel":
return approx_kernel_based(X, Y, Z=conditioned_on)
elif method == "regression":
return regression_based(X, Y, Z=conditioned_on)
else:
raise ValueError(f'Invalid method "{method}"')
| from .general_cov_measure import general_cov_based
from .kernel import approx_kernel_based, kernel_based
from .regression import regression_based
def independence_test(X, Y, conditioned_on=None, method="kernel"):
"""Performs a (conditional) independence test.
Four methods for (conditional) independence testing are supported at the moment:
* `kernel`: Kernel-based (conditional) independence test.
* K. Zhang, J. Peters, D. Janzing, B. Schölkopf. *Kernel-based Conditional Independence Test and Application in Causal Discovery*. UAI'11, Pages 804–813, 2011.
* A. Gretton, K. Fukumizu, C.-H. Teo, L. Song, B. Schölkopf, A. Smola. *A Kernel Statistical Test of Independence*. NIPS 21, 2007.
Here, we utilize the implementations of the https://github.com/cmu-phil/causal-learn package.
* `approx_kernel`: Approximate kernel-based (conditional) independence test.
* E. Strobl, K. Zhang, S. Visweswaran. *Approximate kernel-based conditional independence tests for fast non-parametric causal discovery*. Journal of Causal Inference, 2019.
* `regression`: Regression based (conditional) independence test using a f-test. See :func:`~dowhy.gcm.regression_based` for more details.
* `gcm`: (Conditional) independence test based on the Generalised Covariance Measure. See :func:`~dowhy.gcm.general_cov_based` for more details.
* R. D. Shah and J Peters. *The hardness of conditional independence testing and the generalised covariance measure*, The Annals of Statistics 48(3), 2018
:param X: Observations of X.
:param Y: Observations of Y.
:param conditioned_on: Observations of conditioning variable if we want to perform a conditional independence test. By default, independence test is carried out.
:param method: Method for conditional independence test. The choices are:
`kernel` (default): :func:`~dowhy.gcm.kernel_based` (conditional) independence test.
`approx_kernel`: :func:`~dowhy.gcm.approx_kernel_based` (conditional) independence test.
`regression`: :func:`~dowhy.gcm.regression_based` (conditional) independence test.
`gcm`: :func:`~dowhy.gcm.general_cov_based` (conditional) independence test.
For more information about these methods, see above.
:return: p-value of the (conditional) independence test. (Conditional) Independence is the null hypothesis.
"""
if method == "kernel":
return kernel_based(X, Y, Z=conditioned_on)
elif method == "approx_kernel":
return approx_kernel_based(X, Y, Z=conditioned_on)
elif method == "regression":
return regression_based(X, Y, Z=conditioned_on)
elif method == "gcm":
return general_cov_based(X, Y, Z=conditioned_on)
else:
raise ValueError(f'Invalid method "{method}"')
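# Usage sketch (illustrative; the observations below are made up): for a chain
# X -> Z -> Y, X and Y are marginally dependent but independent given Z.
#
#     import numpy as np
#     X = np.random.normal(0, 1, 1000)
#     Z = X + np.random.normal(0, 1, 1000)
#     Y = Z + np.random.normal(0, 1, 1000)
#     p_marginal = independence_test(X, Y)                       # expected to be small
#     p_conditional = independence_test(X, Y, conditioned_on=Z)  # expected to be larger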
| bloebp | d9f27afc18cfec14ffd2e0178f7ba143f409c832 | 099b8c474c35cc3d528be001e8c49fc16643eebc | Typo: Generalised Covariance Measure | kailashbuki | 266 |
py-why/dowhy | 732 | Set seed on data generation for deterministic test | * Set seed for deterministic data generation on `dowhy_functional_api.ipynb` notebook.
* Fix wrong parameters on backwards compatibility example.
Fixes #704
Signed-off-by: Andres Morales <[email protected]> | null | 2022-10-31 18:37:31+00:00 | 2022-11-01 15:16:07+00:00 | docs/source/example_notebooks/dowhy_functional_api.ipynb | {
"cells": [
{
"cell_type": "markdown",
"metadata": {},
"source": [
"# Functional API Preview\n",
"\n",
"This notebook is part of a set of notebooks that provides a preview of the proposed functional API for dowhy. For details on the new API for DoWhy, check out https://github.com/py-why/dowhy/wiki/API-proposal-for-v1 It is a work-in-progress and is updated as we add new functionality. We welcome your feedback through Discord or on the Discussions page.\n",
"This functional API is designed with backwards compatibility. So both the old and new API will continue to co-exist and work for the immediate new releases. Gradually the old API using CausalModel will be deprecated in favor of the new API. \n",
"\n",
"The current Functional API covers:\n",
"* Identify Effect:\n",
" * `identify_effect(...)`: Run the identify effect algorithm using defaults just provide the graph, treatment and outcome.\n",
" * `auto_identify_effect(...)`: More configurable version of `identify_effect(...)`.\n",
" * `id_identify_effect(...)`: Identify Effect using the ID-Algorithm.\n",
"* Refute Estimate:\n",
" * `refute_estimate`: Function to run a set of the refuters below with the default parameters.\n",
" * `refute_bootstrap`: Refute an estimate by running it on a random sample of the data containing measurement error in the confounders.\n",
" * `refute_data_subset`: Refute an estimate by rerunning it on a random subset of the original data.\n",
" * `refute_random_common_cause`: Refute an estimate by introducing a randomly generated confounder (that may have been unobserved).\n",
" * `refute_placebo_treatment`: Refute an estimate by replacing treatment with a randomly-generated placebo variable.\n",
" * `sensitivity_simulation`: Add an unobserved confounder for refutation (Simulation of an unobserved confounder).\n",
" * `sensitivity_linear_partial_r2`: Add an unobserved confounder for refutation (Linear partial R2 : Sensitivity Analysis for linear models).\n",
" * `sensitivity_non_parametric_partial_r2`: Add an unobserved confounder for refutation (Non-Parametric partial R2 based : Sensitivity Analyis for non-parametric models).\n",
" * `sensitivity_e_value`: Computes E-value for point estimate and confidence limits. Benchmarks E-values against measured confounders using Observed Covariate E-values. Plots E-values and Observed\n",
" Covariate E-values.\n",
" * `refute_dummy_outcome`: Refute an estimate by introducing a randomly generated confounder (that may have been unobserved)."
]
},
{
"cell_type": "markdown",
"metadata": {},
"source": [
"## Import Dependencies"
]
},
{
"cell_type": "code",
"execution_count": null,
"metadata": {},
"outputs": [],
"source": [
"# Functional API imports\n",
"from dowhy.causal_identifier import (\n",
" BackdoorAdjustment,\n",
" EstimandType,\n",
" identify_effect,\n",
" identify_effect_auto,\n",
" identify_effect_id,\n",
") # import effect identifier\n",
"from dowhy.causal_refuters import (\n",
" refute_bootstrap,\n",
" refute_data_subset,\n",
" refute_random_common_cause,\n",
" refute_placebo_treatment,\n",
" sensitivity_e_value,\n",
" sensitivity_linear_partial_r2,\n",
" sensitivity_non_parametric_partial_r2,\n",
" sensitivity_simulation,\n",
" refute_dummy_outcome,\n",
" refute_estimate,\n",
") # import refuters\n",
"\n",
"from dowhy.causal_estimators.propensity_score_matching_estimator import PropensityScoreMatchingEstimator\n",
"\n",
"from dowhy.utils.api import parse_state\n",
"\n",
"from dowhy.causal_estimator import estimate_effect # Estimate effect function\n",
"\n",
"from dowhy.causal_graph import CausalGraph\n",
"\n",
"# Other imports required\n",
"from dowhy.datasets import linear_dataset\n",
"from dowhy import CausalModel # We still need this as we haven't created the functional API for effect estimation\n",
"import econml\n",
"\n",
"# Config dict to set the logging level\n",
"import logging.config\n",
"\n",
"DEFAULT_LOGGING = {\n",
" \"version\": 1,\n",
" \"disable_existing_loggers\": False,\n",
" \"loggers\": {\n",
" \"\": {\n",
" \"level\": \"WARN\",\n",
" },\n",
" },\n",
"}\n",
"\n",
"logging.config.dictConfig(DEFAULT_LOGGING)\n",
"# Disabling warnings output\n",
"import warnings\n",
"from sklearn.exceptions import DataConversionWarning\n",
"\n",
"warnings.filterwarnings(action=\"ignore\", category=DataConversionWarning)"
]
},
{
"cell_type": "markdown",
"metadata": {},
"source": [
"## Create the Datasets"
]
},
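{
"cell_type": "markdown",
"metadata": {},
"source": [
"The synthetic dataset created below is drawn from NumPy's global random state, so each run of this notebook produces different numbers. A minimal sketch for deterministic runs (the seed value 0 is an arbitrary, illustrative choice) is to fix the seed before generating the data:"
]
},
{
"cell_type": "code",
"execution_count": null,
"metadata": {},
"outputs": [],
"source": [
"import numpy as np\n",
"\n",
"np.random.seed(0)  # illustrative seed; any fixed value makes the data generation reproducible"
]
},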
{
"cell_type": "code",
"execution_count": null,
"metadata": {},
"outputs": [],
"source": [
"# Parameters for creating the Dataset\n",
"TREATMENT_IS_BINARY = True\n",
"BETA = 10\n",
"NUM_SAMPLES = 500\n",
"NUM_CONFOUNDERS = 3\n",
"NUM_INSTRUMENTS = 2\n",
"NUM_EFFECT_MODIFIERS = 2\n",
"\n",
"# Creating a Linear Dataset with the given parameters\n",
"data = linear_dataset(\n",
" beta=BETA,\n",
" num_common_causes=NUM_CONFOUNDERS,\n",
" num_instruments=NUM_INSTRUMENTS,\n",
" num_effect_modifiers=NUM_EFFECT_MODIFIERS,\n",
" num_samples=NUM_SAMPLES,\n",
" treatment_is_binary=True,\n",
")\n",
"\n",
"treatment_name = data[\"treatment_name\"]\n",
"print(treatment_name)\n",
"outcome_name = data[\"outcome_name\"]\n",
"print(outcome_name)\n",
"\n",
"graph = CausalGraph(\n",
" treatment_name=treatment_name,\n",
" outcome_name=outcome_name,\n",
" graph=data[\"gml_graph\"],\n",
" effect_modifier_names=data[\"effect_modifier_names\"],\n",
" common_cause_names=data[\"common_causes_names\"],\n",
" observed_node_names=data[\"df\"].columns.tolist(),\n",
")"
]
},
{
"cell_type": "markdown",
"metadata": {},
"source": [
"## Identify Effect - Functional API (Preview)"
]
},
{
"cell_type": "code",
"execution_count": null,
"metadata": {},
"outputs": [],
"source": [
"# Default identify_effect call example:\n",
"identified_estimand = identify_effect(graph, treatment_name, outcome_name)\n",
"\n",
"# auto_identify_effect example with extra parameters:\n",
"identified_estimand_auto = identify_effect_auto(\n",
" graph,\n",
" treatment_name,\n",
" outcome_name,\n",
" estimand_type=EstimandType.NONPARAMETRIC_ATE,\n",
" backdoor_adjustment=BackdoorAdjustment.BACKDOOR_EFFICIENT,\n",
")\n",
"\n",
"# id_identify_effect example:\n",
"identified_estimand_id = identify_effect_id(\n",
" graph, treatment_name, outcome_name\n",
") # Note that the return type for id_identify_effect is IDExpression and not IdentifiedEstimand\n",
"\n",
"print(identified_estimand)"
]
},
{
"cell_type": "markdown",
"metadata": {},
"source": [
"## Estimate Effect - Functional API (Preview)"
]
},
{
"cell_type": "code",
"execution_count": null,
"metadata": {},
"outputs": [],
"source": [
"# Basic Estimate Effect function\n",
"\n",
"\n",
"propensity_score_estimator = PropensityScoreMatchingEstimator(\n",
" data=data[\"df\"],\n",
" identified_estimand=identified_estimand,\n",
" treatment=treatment_name,\n",
" outcome=outcome_name,\n",
" control_value=0,\n",
" treatment_value=1,\n",
" test_significance=None,\n",
" evaluate_effect_strength=False,\n",
" confidence_intervals=False,\n",
" target_units=\"ate\",\n",
" effect_modifiers=graph.get_effect_modifiers(treatment_name, outcome_name),\n",
")\n",
"\n",
"estimate = estimate_effect(\n",
" treatment=treatment_name,\n",
" outcome=outcome_name,\n",
" identified_estimand=identified_estimand,\n",
" identifier_name=\"backdoor\",\n",
" method=propensity_score_estimator,\n",
")\n",
"\n",
"print(estimate)"
]
},
{
"cell_type": "markdown",
"metadata": {},
"source": [
"## Refute Estimate - Functional API (Preview)"
]
},
{
"cell_type": "code",
"execution_count": null,
"metadata": {},
"outputs": [],
"source": [
"# You can call the refute_estimate function to execute several refuters with their default parameters\n",
"# Currently this function does not support the sensitivity_* functions\n",
"refutation_results = refute_estimate(\n",
"    data[\"df\"],\n",
"    identified_estimand,\n",
"    estimate,\n",
"    treatment_name=treatment_name,\n",
"    outcome_name=outcome_name,\n",
"    refuters=[refute_bootstrap, refute_data_subset],\n",
")\n",
"\n",
"for result in refutation_results:\n",
"    print(result)\n",
"\n",
"# Or you can execute refute methods directly\n",
"# You can swap refute_bootstrap / refute_data_subset for any of the other refuters, adding any parameters they require (a sketch follows below)\n",
"\n",
"bootstrap_refutation = refute_bootstrap(data[\"df\"], identified_estimand, estimate)\n",
"print(bootstrap_refutation)\n",
"\n",
"data_subset_refutation = refute_data_subset(data[\"df\"], identified_estimand, estimate)\n",
"print(data_subset_refutation)"
]
},
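{
"cell_type": "markdown",
"metadata": {},
"source": [
"As a sketch, `refute_random_common_cause` is assumed to follow the same calling pattern as the two refuters above; consult its documentation before relying on this."
]
},
{
"cell_type": "code",
"execution_count": null,
"metadata": {},
"outputs": [],
"source": [
"# Sketch: assumes refute_random_common_cause takes (data, identified_estimand, estimate)\n",
"# in the same way as refute_bootstrap and refute_data_subset above\n",
"random_common_cause_refutation = refute_random_common_cause(data[\"df\"], identified_estimand, estimate)\n",
"print(random_common_cause_refutation)"
]
},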
{
"cell_type": "markdown",
"metadata": {},
"source": [
"# Backwards Compatibility\n",
"\n",
"This section shows replicating the same results using only the CausalModel API"
]
},
{
"cell_type": "code",
"execution_count": null,
"metadata": {},
"outputs": [],
"source": [
"# Create Causal Model\n",
"causal_model = CausalModel(data=data[\"df\"], treatment=treatment_name, outcome=outcome_name, graph=data[\"gml_graph\"])\n"
]
},
{
"cell_type": "markdown",
"metadata": {},
"source": [
"## Identify Effect"
]
},
{
"cell_type": "code",
"execution_count": null,
"metadata": {},
"outputs": [],
"source": [
"identified_estimand_causal_model_api = (\n",
"    causal_model.identify_effect()\n",
") # graph, treatment and outcome come from the causal_model object\n",
"\n",
"print(identified_estimand_causal_model_api)"
]
},
{
"cell_type": "markdown",
"metadata": {},
"source": [
"## Estimate Effect"
]
},
{
"cell_type": "code",
"execution_count": null,
"metadata": {},
"outputs": [],
"source": [
"estimate_causal_model_api = causal_model.estimate_effect(\n",
" identified_estimand, method_name=\"backdoor.propensity_score_matching\"\n",
")\n",
"\n",
"print(estimate)"
]
},
{
"cell_type": "markdown",
"metadata": {},
"source": [
"## Refute Estimate"
]
},
{
"cell_type": "code",
"execution_count": null,
"metadata": {},
"outputs": [],
"source": [
"bootstrap_refutation_causal_model_api = causal_model.refute_estimate(identified_estimand, estimate, \"bootstrap_refuter\")\n",
"print(bootstrap_refutation_causal_model_api)\n",
"\n",
"data_subset_refutation_causal_model_api = causal_model.refute_estimate(\n",
" identified_estimand, estimate, \"data_subset_refuter\"\n",
")\n",
"print(data_subset_refutation_causal_model_api)"
]
},
{
"cell_type": "code",
"execution_count": null,
"metadata": {},
"outputs": [],
"source": []
}
],
"metadata": {
"kernelspec": {
"display_name": "Python 3.8.10 ('dowhy-_zBapv7Q-py3.8')",
"language": "python",
"name": "python3"
},
"language_info": {
"codemirror_mode": {
"name": "ipython",
"version": 3
},
"file_extension": ".py",
"mimetype": "text/x-python",
"name": "python",
"nbconvert_exporter": "python",
"pygments_lexer": "ipython3",
"version": "3.8.10"
},
"toc": {
"base_numbering": 1,
"nav_menu": {},
"number_sections": false,
"sideBar": true,
"skip_h1_title": true,
"title_cell": "Table of Contents",
"title_sidebar": "Contents",
"toc_cell": false,
"toc_position": {},
"toc_section_display": true,
"toc_window_display": false
},
"vscode": {
"interpreter": {
"hash": "dcb481ad5d98e2afacd650b2c07afac80a299b7b701b553e333fc82865502500"
}
}
},
"nbformat": 4,
"nbformat_minor": 4
}
| {
"cells": [
{
"cell_type": "markdown",
"metadata": {},
"source": [
"# Functional API Preview\n",
"\n",
"This notebook is part of a set of notebooks that provides a preview of the proposed functional API for dowhy. For details on the new API for DoWhy, check out https://github.com/py-why/dowhy/wiki/API-proposal-for-v1. It is a work-in-progress and is updated as we add new functionality. We welcome your feedback through Discord or on the Discussions page.\n",
"This functional API is designed with backwards compatibility in mind, so both the old and new APIs will continue to co-exist and work for the immediate new releases. Gradually, the old API using CausalModel will be deprecated in favor of the new API.\n",
"\n",
"The current Functional API covers:\n",
"* Identify Effect:\n",
" * `identify_effect(...)`: Run the identify effect algorithm with default settings; just provide the graph, treatment and outcome.\n",
" * `identify_effect_auto(...)`: More configurable version of `identify_effect(...)`.\n",
" * `identify_effect_id(...)`: Identify Effect using the ID-Algorithm.\n",
"* Refute Estimate:\n",
" * `refute_estimate`: Function to run a set of the refuters below with the default parameters.\n",
" * `refute_bootstrap`: Refute an estimate by running it on a random sample of the data containing measurement error in the confounders.\n",
" * `refute_data_subset`: Refute an estimate by rerunning it on a random subset of the original data.\n",
" * `refute_random_common_cause`: Refute an estimate by introducing a randomly generated confounder (that may have been unobserved).\n",
" * `refute_placebo_treatment`: Refute an estimate by replacing treatment with a randomly-generated placebo variable.\n",
" * `sensitivity_simulation`: Add an unobserved confounder for refutation (Simulation of an unobserved confounder).\n",
" * `sensitivity_linear_partial_r2`: Add an unobserved confounder for refutation (Linear partial R2: Sensitivity Analysis for linear models).\n",
" * `sensitivity_non_parametric_partial_r2`: Add an unobserved confounder for refutation (Non-Parametric partial R2 based: Sensitivity Analysis for non-parametric models).\n",
" * `sensitivity_e_value`: Computes the E-value for the point estimate and confidence limits, benchmarks E-values against measured confounders using Observed Covariate E-values, and plots E-values and Observed Covariate E-values.\n",
" * `refute_dummy_outcome`: Refute an estimate by replacing the outcome with a simulated (dummy) outcome."
]
},
{
"cell_type": "markdown",
"metadata": {},
"source": [
"## Import Dependencies"
]
},
{
"cell_type": "code",
"execution_count": null,
"metadata": {},
"outputs": [],
"source": [
"# Functional API imports\n",
"from dowhy.causal_identifier import (\n",
" BackdoorAdjustment,\n",
" EstimandType,\n",
" identify_effect,\n",
" identify_effect_auto,\n",
" identify_effect_id,\n",
") # import effect identifier\n",
"from dowhy.causal_refuters import (\n",
" refute_bootstrap,\n",
" refute_data_subset,\n",
" refute_random_common_cause,\n",
" refute_placebo_treatment,\n",
" sensitivity_e_value,\n",
" sensitivity_linear_partial_r2,\n",
" sensitivity_non_parametric_partial_r2,\n",
" sensitivity_simulation,\n",
" refute_dummy_outcome,\n",
" refute_estimate,\n",
") # import refuters\n",
"\n",
"from dowhy.causal_estimators.propensity_score_matching_estimator import PropensityScoreMatchingEstimator\n",
"\n",
"from dowhy.utils.api import parse_state\n",
"\n",
"from dowhy.causal_estimator import estimate_effect # Estimate effect function\n",
"\n",
"from dowhy.causal_graph import CausalGraph\n",
"\n",
"# Other imports required\n",
"from dowhy.datasets import linear_dataset\n",
"from dowhy import CausalModel # We still need this as we haven't created the functional API for effect estimation\n",
"import econml\n",
"\n",
"# Config dict to set the logging level\n",
"import logging.config\n",
"\n",
"DEFAULT_LOGGING = {\n",
" \"version\": 1,\n",
" \"disable_existing_loggers\": False,\n",
" \"loggers\": {\n",
" \"\": {\n",
" \"level\": \"WARN\",\n",
" },\n",
" },\n",
"}\n",
"\n",
"\n",
"# set random seed for deterministic dataset generation \n",
"# and avoid problems when running tests\n",
"import numpy as np\n",
"np.random.seed(1)\n",
"\n",
"logging.config.dictConfig(DEFAULT_LOGGING)\n",
"# Disabling warnings output\n",
"import warnings\n",
"from sklearn.exceptions import DataConversionWarning\n",
"\n",
"warnings.filterwarnings(action=\"ignore\", category=DataConversionWarning)"
]
},
{
"cell_type": "markdown",
"metadata": {},
"source": [
"## Create the Datasets"
]
},
{
"cell_type": "code",
"execution_count": null,
"metadata": {},
"outputs": [],
"source": [
"# Parameters for creating the Dataset\n",
"TREATMENT_IS_BINARY = True\n",
"BETA = 10\n",
"NUM_SAMPLES = 500\n",
"NUM_CONFOUNDERS = 3\n",
"NUM_INSTRUMENTS = 2\n",
"NUM_EFFECT_MODIFIERS = 2\n",
"\n",
"# Creating a Linear Dataset with the given parameters\n",
"data = linear_dataset(\n",
" beta=BETA,\n",
" num_common_causes=NUM_CONFOUNDERS,\n",
" num_instruments=NUM_INSTRUMENTS,\n",
" num_effect_modifiers=NUM_EFFECT_MODIFIERS,\n",
" num_samples=NUM_SAMPLES,\n",
" treatment_is_binary=True,\n",
")\n",
"\n",
"treatment_name = data[\"treatment_name\"]\n",
"print(treatment_name)\n",
"outcome_name = data[\"outcome_name\"]\n",
"print(outcome_name)\n",
"\n",
"graph = CausalGraph(\n",
" treatment_name=treatment_name,\n",
" outcome_name=outcome_name,\n",
" graph=data[\"gml_graph\"],\n",
" effect_modifier_names=data[\"effect_modifier_names\"],\n",
" common_cause_names=data[\"common_causes_names\"],\n",
" observed_node_names=data[\"df\"].columns.tolist(),\n",
")"
]
},
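{
"cell_type": "markdown",
"metadata": {},
"source": [
"A quick look at the generated data confirms the column layout before identification; the dictionary keys used here are the same ones used in the cells above."
]
},
{
"cell_type": "code",
"execution_count": null,
"metadata": {},
"outputs": [],
"source": [
"# Peek at the generated dataset and the variable groupings returned by linear_dataset\n",
"print(data[\"df\"].head())\n",
"print(\"common causes:\", data[\"common_causes_names\"])\n",
"print(\"effect modifiers:\", data[\"effect_modifier_names\"])"
]
},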
{
"cell_type": "markdown",
"metadata": {},
"source": [
"## Identify Effect - Functional API (Preview)"
]
},
{
"cell_type": "code",
"execution_count": null,
"metadata": {},
"outputs": [],
"source": [
"# Default identify_effect call example:\n",
"identified_estimand = identify_effect(graph, treatment_name, outcome_name)\n",
"\n",
"# identify_effect_auto example with extra parameters:\n",
"identified_estimand_auto = identify_effect_auto(\n",
"    graph,\n",
"    treatment_name,\n",
"    outcome_name,\n",
"    estimand_type=EstimandType.NONPARAMETRIC_ATE,\n",
"    backdoor_adjustment=BackdoorAdjustment.BACKDOOR_EFFICIENT,\n",
")\n",
"\n",
"# identify_effect_id example:\n",
"identified_estimand_id = identify_effect_id(\n",
"    graph, treatment_name, outcome_name\n",
") # Note that the return type for identify_effect_id is IDExpression and not IdentifiedEstimand\n",
"\n",
"print(identified_estimand)"
]
},
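{
"cell_type": "markdown",
"metadata": {},
"source": [
"The other two identification results computed above can be inspected the same way."
]
},
{
"cell_type": "code",
"execution_count": null,
"metadata": {},
"outputs": [],
"source": [
"# The estimand from the configurable variant\n",
"print(identified_estimand_auto)\n",
"# The ID-algorithm result (an IDExpression, as noted above)\n",
"print(identified_estimand_id)"
]
},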
{
"cell_type": "markdown",
"metadata": {},
"source": [
"## Estimate Effect - Functional API (Preview)"
]
},
{
"cell_type": "code",
"execution_count": null,
"metadata": {},
"outputs": [],
"source": [
"# Basic Estimate Effect function\n",
"\n",
"\n",
"propensity_score_estimator = PropensityScoreMatchingEstimator(\n",
" data=data[\"df\"],\n",
" identified_estimand=identified_estimand,\n",
" treatment=treatment_name,\n",
" outcome=outcome_name,\n",
" control_value=0,\n",
" treatment_value=1,\n",
" test_significance=None,\n",
" evaluate_effect_strength=False,\n",
" confidence_intervals=False,\n",
" target_units=\"ate\",\n",
" effect_modifiers=graph.get_effect_modifiers(treatment_name, outcome_name),\n",
")\n",
"\n",
"estimate = estimate_effect(\n",
" treatment=treatment_name,\n",
" outcome=outcome_name,\n",
" identified_estimand=identified_estimand,\n",
" identifier_name=\"backdoor\",\n",
" method=propensity_score_estimator,\n",
")\n",
"\n",
"print(estimate)"
]
},
{
"cell_type": "markdown",
"metadata": {},
"source": [
"## Refute Estimate - Functional API (Preview)"
]
},
{
"cell_type": "code",
"execution_count": null,
"metadata": {},
"outputs": [],
"source": [
"# You can call the refute_estimate function to execute several refuters with their default parameters\n",
"# Currently this function does not support the sensitivity_* functions\n",
"refutation_results = refute_estimate(\n",
"    data[\"df\"],\n",
"    identified_estimand,\n",
"    estimate,\n",
"    treatment_name=treatment_name,\n",
"    outcome_name=outcome_name,\n",
"    refuters=[refute_bootstrap, refute_data_subset],\n",
")\n",
"\n",
"for result in refutation_results:\n",
"    print(result)\n",
"\n",
"# Or you can execute refute methods directly\n",
"# You can swap refute_bootstrap / refute_data_subset for any of the other refuters, adding any parameters they require (a sketch follows below)\n",
"\n",
"bootstrap_refutation = refute_bootstrap(data[\"df\"], identified_estimand, estimate)\n",
"print(bootstrap_refutation)\n",
"\n",
"data_subset_refutation = refute_data_subset(data[\"df\"], identified_estimand, estimate)\n",
"print(data_subset_refutation)"
]
},
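{
"cell_type": "markdown",
"metadata": {},
"source": [
"A sketch of \"adding the missing parameters\": a hypothetical call to `refute_placebo_treatment`. The `treatment_names` keyword is an assumption about its signature; check the dowhy.causal_refuters documentation before relying on it."
]
},
{
"cell_type": "code",
"execution_count": null,
"metadata": {},
"outputs": [],
"source": [
"# Sketch: refute_placebo_treatment is assumed to additionally need the treatment column names\n",
"# (the keyword name treatment_names is an assumption)\n",
"placebo_refutation = refute_placebo_treatment(\n",
"    data[\"df\"], identified_estimand, estimate, treatment_names=treatment_name\n",
")\n",
"print(placebo_refutation)"
]
},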
{
"cell_type": "markdown",
"metadata": {},
"source": [
"# Backwards Compatibility\n",
"\n",
"This section shows replicating the same results using only the CausalModel API"
]
},
{
"cell_type": "code",
"execution_count": null,
"metadata": {},
"outputs": [],
"source": [
"# Create Causal Model\n",
"causal_model = CausalModel(data=data[\"df\"], treatment=treatment_name, outcome=outcome_name, graph=data[\"gml_graph\"])\n"
]
},
{
"cell_type": "markdown",
"metadata": {},
"source": [
"## Identify Effect"
]
},
{
"cell_type": "code",
"execution_count": null,
"metadata": {},
"outputs": [],
"source": [
"identified_estimand_causal_model_api = (\n",
"    causal_model.identify_effect()\n",
") # graph, treatment and outcome come from the causal_model object\n",
"\n",
"print(identified_estimand_causal_model_api)"
]
},
{
"cell_type": "markdown",
"metadata": {},
"source": [
"## Estimate Effect"
]
},
{
"cell_type": "code",
"execution_count": null,
"metadata": {},
"outputs": [],
"source": [
"estimate_causal_model_api = causal_model.estimate_effect(\n",
" identified_estimand_causal_model_api, method_name=\"backdoor.propensity_score_matching\"\n",
")\n",
"\n",
"print(estimate_causal_model_api)"
]
},
{
"cell_type": "markdown",
"metadata": {},
"source": [
"## Refute Estimate"
]
},
{
"cell_type": "code",
"execution_count": null,
"metadata": {},
"outputs": [],
"source": [
"bootstrap_refutation_causal_model_api = causal_model.refute_estimate(identified_estimand_causal_model_api, estimate_causal_model_api, \"bootstrap_refuter\")\n",
"print(bootstrap_refutation_causal_model_api)\n",
"\n",
"data_subset_refutation_causal_model_api = causal_model.refute_estimate(\n",
" identified_estimand_causal_model_api, estimate_causal_model_api, \"data_subset_refuter\"\n",
")\n",
"\n",
"print(data_subset_refutation_causal_model_api)"
]
},
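{
"cell_type": "markdown",
"metadata": {},
"source": [
"Other refuters are available through the same string-based interface; a sketch with the placebo treatment refuter, assuming its method name follows the pattern of the two names above."
]
},
{
"cell_type": "code",
"execution_count": null,
"metadata": {},
"outputs": [],
"source": [
"# Sketch: the method name \"placebo_treatment_refuter\" is assumed to follow the\n",
"# naming pattern of \"bootstrap_refuter\" and \"data_subset_refuter\" above\n",
"placebo_refutation_causal_model_api = causal_model.refute_estimate(\n",
"    identified_estimand_causal_model_api, estimate_causal_model_api, \"placebo_treatment_refuter\"\n",
")\n",
"print(placebo_refutation_causal_model_api)"
]
},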
{
"cell_type": "code",
"execution_count": null,
"metadata": {},
"outputs": [],
"source": []
}
],
"metadata": {
"kernelspec": {
"display_name": "Python 3.8.10 ('dowhy-_zBapv7Q-py3.8')",
"language": "python",
"name": "python3"
},
"language_info": {
"codemirror_mode": {
"name": "ipython",
"version": 3
},
"file_extension": ".py",
"mimetype": "text/x-python",
"name": "python",
"nbconvert_exporter": "python",
"pygments_lexer": "ipython3",
"version": "3.8.10"
},
"toc": {
"base_numbering": 1,
"nav_menu": {},
"number_sections": false,
"sideBar": true,
"skip_h1_title": true,
"title_cell": "Table of Contents",
"title_sidebar": "Contents",
"toc_cell": false,
"toc_position": {},
"toc_section_display": true,
"toc_window_display": false
},
"vscode": {
"interpreter": {
"hash": "dcb481ad5d98e2afacd650b2c07afac80a299b7b701b553e333fc82865502500"
}
}
},
"nbformat": 4,
"nbformat_minor": 4
}
| andresmor-ms | b9bab69d737f34fc32a73feb426d0ddc2f471df2 | 68f5d2b1bc7c5357b243dcef31510cfdc65ff871 | Why comment this stuff out? Should we delete it instead? | darthtrevino | 267 |
py-why/dowhy | 732 | Set seed on data generation for deterministic test | * Set seed for deterministic data generation on `dowhy_functional_api.ipynb` notebook.
* Fix wrong parameters on backwards compatibility example.
Fixes #704
Signed-off-by: Andres Morales <[email protected]> | null | 2022-10-31 18:37:31+00:00 | 2022-11-01 15:16:07+00:00 | docs/source/example_notebooks/dowhy_functional_api.ipynb | {
"cells": [
{
"cell_type": "markdown",
"metadata": {},
"source": [
"# Functional API Preview\n",
"\n",
"This notebook is part of a set of notebooks that provides a preview of the proposed functional API for dowhy. For details on the new API for DoWhy, check out https://github.com/py-why/dowhy/wiki/API-proposal-for-v1. It is a work-in-progress and is updated as we add new functionality. We welcome your feedback through Discord or on the Discussions page.\n",
"This functional API is designed with backwards compatibility in mind, so both the old and new APIs will continue to co-exist and work for the immediate new releases. Gradually, the old API using CausalModel will be deprecated in favor of the new API.\n",
"\n",
"The current Functional API covers:\n",
"* Identify Effect:\n",
" * `identify_effect(...)`: Run the identify effect algorithm with default settings; just provide the graph, treatment and outcome.\n",
" * `identify_effect_auto(...)`: More configurable version of `identify_effect(...)`.\n",
" * `identify_effect_id(...)`: Identify Effect using the ID-Algorithm.\n",
"* Refute Estimate:\n",
" * `refute_estimate`: Function to run a set of the refuters below with the default parameters.\n",
" * `refute_bootstrap`: Refute an estimate by running it on a random sample of the data containing measurement error in the confounders.\n",
" * `refute_data_subset`: Refute an estimate by rerunning it on a random subset of the original data.\n",
" * `refute_random_common_cause`: Refute an estimate by introducing a randomly generated confounder (that may have been unobserved).\n",
" * `refute_placebo_treatment`: Refute an estimate by replacing treatment with a randomly-generated placebo variable.\n",
" * `sensitivity_simulation`: Add an unobserved confounder for refutation (Simulation of an unobserved confounder).\n",
" * `sensitivity_linear_partial_r2`: Add an unobserved confounder for refutation (Linear partial R2: Sensitivity Analysis for linear models).\n",
" * `sensitivity_non_parametric_partial_r2`: Add an unobserved confounder for refutation (Non-Parametric partial R2 based: Sensitivity Analysis for non-parametric models).\n",
" * `sensitivity_e_value`: Computes the E-value for the point estimate and confidence limits, benchmarks E-values against measured confounders using Observed Covariate E-values, and plots E-values and Observed Covariate E-values.\n",
" * `refute_dummy_outcome`: Refute an estimate by replacing the outcome with a simulated (dummy) outcome."
]
},
{
"cell_type": "markdown",
"metadata": {},
"source": [
"## Import Dependencies"
]
},
{
"cell_type": "code",
"execution_count": null,
"metadata": {},
"outputs": [],
"source": [
"# Functional API imports\n",
"from dowhy.causal_identifier import (\n",
" BackdoorAdjustment,\n",
" EstimandType,\n",
" identify_effect,\n",
" identify_effect_auto,\n",
" identify_effect_id,\n",
") # import effect identifier\n",
"from dowhy.causal_refuters import (\n",
" refute_bootstrap,\n",
" refute_data_subset,\n",
" refute_random_common_cause,\n",
" refute_placebo_treatment,\n",
" sensitivity_e_value,\n",
" sensitivity_linear_partial_r2,\n",
" sensitivity_non_parametric_partial_r2,\n",
" sensitivity_simulation,\n",
" refute_dummy_outcome,\n",
" refute_estimate,\n",
") # import refuters\n",
"\n",
"from dowhy.causal_estimators.propensity_score_matching_estimator import PropensityScoreMatchingEstimator\n",
"\n",
"from dowhy.utils.api import parse_state\n",
"\n",
"from dowhy.causal_estimator import estimate_effect # Estimate effect function\n",
"\n",
"from dowhy.causal_graph import CausalGraph\n",
"\n",
"# Other imports required\n",
"from dowhy.datasets import linear_dataset\n",
"from dowhy import CausalModel # We still need this as we haven't created the functional API for effect estimation\n",
"import econml\n",
"\n",
"# Config dict to set the logging level\n",
"import logging.config\n",
"\n",
"DEFAULT_LOGGING = {\n",
" \"version\": 1,\n",
" \"disable_existing_loggers\": False,\n",
" \"loggers\": {\n",
" \"\": {\n",
" \"level\": \"WARN\",\n",
" },\n",
" },\n",
"}\n",
"\n",
"logging.config.dictConfig(DEFAULT_LOGGING)\n",
"# Disabling warnings output\n",
"import warnings\n",
"from sklearn.exceptions import DataConversionWarning\n",
"\n",
"warnings.filterwarnings(action=\"ignore\", category=DataConversionWarning)"
]
},
{
"cell_type": "markdown",
"metadata": {},
"source": [
"## Create the Datasets"
]
},
{
"cell_type": "code",
"execution_count": null,
"metadata": {},
"outputs": [],
"source": [
"# Parameters for creating the Dataset\n",
"TREATMENT_IS_BINARY = True\n",
"BETA = 10\n",
"NUM_SAMPLES = 500\n",
"NUM_CONFOUNDERS = 3\n",
"NUM_INSTRUMENTS = 2\n",
"NUM_EFFECT_MODIFIERS = 2\n",
"\n",
"# Creating a Linear Dataset with the given parameters\n",
"data = linear_dataset(\n",
" beta=BETA,\n",
" num_common_causes=NUM_CONFOUNDERS,\n",
" num_instruments=NUM_INSTRUMENTS,\n",
" num_effect_modifiers=NUM_EFFECT_MODIFIERS,\n",
" num_samples=NUM_SAMPLES,\n",
" treatment_is_binary=True,\n",
")\n",
"\n",
"treatment_name = data[\"treatment_name\"]\n",
"print(treatment_name)\n",
"outcome_name = data[\"outcome_name\"]\n",
"print(outcome_name)\n",
"\n",
"graph = CausalGraph(\n",
" treatment_name=treatment_name,\n",
" outcome_name=outcome_name,\n",
" graph=data[\"gml_graph\"],\n",
" effect_modifier_names=data[\"effect_modifier_names\"],\n",
" common_cause_names=data[\"common_causes_names\"],\n",
" observed_node_names=data[\"df\"].columns.tolist(),\n",
")"
]
},
{
"cell_type": "markdown",
"metadata": {},
"source": [
"## Identify Effect - Functional API (Preview)"
]
},
{
"cell_type": "code",
"execution_count": null,
"metadata": {},
"outputs": [],
"source": [
"# Default identify_effect call example:\n",
"identified_estimand = identify_effect(graph, treatment_name, outcome_name)\n",
"\n",
"# identify_effect_auto example with extra parameters:\n",
"identified_estimand_auto = identify_effect_auto(\n",
"    graph,\n",
"    treatment_name,\n",
"    outcome_name,\n",
"    estimand_type=EstimandType.NONPARAMETRIC_ATE,\n",
"    backdoor_adjustment=BackdoorAdjustment.BACKDOOR_EFFICIENT,\n",
")\n",
"\n",
"# identify_effect_id example:\n",
"identified_estimand_id = identify_effect_id(\n",
"    graph, treatment_name, outcome_name\n",
") # Note that the return type for identify_effect_id is IDExpression and not IdentifiedEstimand\n",
"\n",
"print(identified_estimand)"
]
},
{
"cell_type": "markdown",
"metadata": {},
"source": [
"## Estimate Effect - Functional API (Preview)"
]
},
{
"cell_type": "code",
"execution_count": null,
"metadata": {},
"outputs": [],
"source": [
"# Basic Estimate Effect function\n",
"\n",
"\n",
"propensity_score_estimator = PropensityScoreMatchingEstimator(\n",
" data=data[\"df\"],\n",
" identified_estimand=identified_estimand,\n",
" treatment=treatment_name,\n",
" outcome=outcome_name,\n",
" control_value=0,\n",
" treatment_value=1,\n",
" test_significance=None,\n",
" evaluate_effect_strength=False,\n",
" confidence_intervals=False,\n",
" target_units=\"ate\",\n",
" effect_modifiers=graph.get_effect_modifiers(treatment_name, outcome_name),\n",
")\n",
"\n",
"estimate = estimate_effect(\n",
" treatment=treatment_name,\n",
" outcome=outcome_name,\n",
" identified_estimand=identified_estimand,\n",
" identifier_name=\"backdoor\",\n",
" method=propensity_score_estimator,\n",
")\n",
"\n",
"print(estimate)"
]
},
{
"cell_type": "markdown",
"metadata": {},
"source": [
"## Refute Estimate - Functional API (Preview)"
]
},
{
"cell_type": "code",
"execution_count": null,
"metadata": {},
"outputs": [],
"source": [
"# You can call the refute_estimate function to execute several refuters with their default parameters\n",
"# Currently this function does not support the sensitivity_* functions\n",
"refutation_results = refute_estimate(\n",
"    data[\"df\"],\n",
"    identified_estimand,\n",
"    estimate,\n",
"    treatment_name=treatment_name,\n",
"    outcome_name=outcome_name,\n",
"    refuters=[refute_bootstrap, refute_data_subset],\n",
")\n",
"\n",
"for result in refutation_results:\n",
"    print(result)\n",
"\n",
"# Or you can execute refute methods directly\n",
"# You can swap refute_bootstrap / refute_data_subset for any of the other refuters, adding any parameters they require (a sketch follows below)\n",
"\n",
"bootstrap_refutation = refute_bootstrap(data[\"df\"], identified_estimand, estimate)\n",
"print(bootstrap_refutation)\n",
"\n",
"data_subset_refutation = refute_data_subset(data[\"df\"], identified_estimand, estimate)\n",
"print(data_subset_refutation)"
]
},
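{
"cell_type": "markdown",
"metadata": {},
"source": [
"The sensitivity_* functions are not run by `refute_estimate`, but they can be called directly. A sketch with `sensitivity_e_value`; the keyword names here are assumptions about its signature."
]
},
{
"cell_type": "code",
"execution_count": null,
"metadata": {},
"outputs": [],
"source": [
"# Sketch: keyword names below are assumptions; consult the dowhy.causal_refuters docs\n",
"evalue_result = sensitivity_e_value(\n",
"    data[\"df\"], identified_estimand, estimate,\n",
"    treatment_name=treatment_name, outcome_name=outcome_name\n",
")\n",
"print(evalue_result)"
]
},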
{
"cell_type": "markdown",
"metadata": {},
"source": [
"# Backwards Compatibility\n",
"\n",
"This section shows replicating the same results using only the CausalModel API"
]
},
{
"cell_type": "code",
"execution_count": null,
"metadata": {},
"outputs": [],
"source": [
"# Create Causal Model\n",
"causal_model = CausalModel(data=data[\"df\"], treatment=treatment_name, outcome=outcome_name, graph=data[\"gml_graph\"])\n"
]
},
{
"cell_type": "markdown",
"metadata": {},
"source": [
"## Identify Effect"
]
},
{
"cell_type": "code",
"execution_count": null,
"metadata": {},
"outputs": [],
"source": [
"identified_estimand_causal_model_api = (\n",
"    causal_model.identify_effect()\n",
") # graph, treatment and outcome come from the causal_model object\n",
"\n",
"print(identified_estimand_causal_model_api)"
]
},
{
"cell_type": "markdown",
"metadata": {},
"source": [
"## Estimate Effect"
]
},
{
"cell_type": "code",
"execution_count": null,
"metadata": {},
"outputs": [],
"source": [
"estimate_causal_model_api = causal_model.estimate_effect(\n",
" identified_estimand, method_name=\"backdoor.propensity_score_matching\"\n",
")\n",
"\n",
"print(estimate)"
]
},
{
"cell_type": "markdown",
"metadata": {},
"source": [
"## Refute Estimate"
]
},
{
"cell_type": "code",
"execution_count": null,
"metadata": {},
"outputs": [],
"source": [
"bootstrap_refutation_causal_model_api = causal_model.refute_estimate(identified_estimand, estimate, \"bootstrap_refuter\")\n",
"print(bootstrap_refutation_causal_model_api)\n",
"\n",
"data_subset_refutation_causal_model_api = causal_model.refute_estimate(\n",
" identified_estimand, estimate, \"data_subset_refuter\"\n",
")\n",
"print(data_subset_refutation_causal_model_api)"
]
},
{
"cell_type": "code",
"execution_count": null,
"metadata": {},
"outputs": [],
"source": []
}
],
"metadata": {
"kernelspec": {
"display_name": "Python 3.8.10 ('dowhy-_zBapv7Q-py3.8')",
"language": "python",
"name": "python3"
},
"language_info": {
"codemirror_mode": {
"name": "ipython",
"version": 3
},
"file_extension": ".py",
"mimetype": "text/x-python",
"name": "python",
"nbconvert_exporter": "python",
"pygments_lexer": "ipython3",
"version": "3.8.10"
},
"toc": {
"base_numbering": 1,
"nav_menu": {},
"number_sections": false,
"sideBar": true,
"skip_h1_title": true,
"title_cell": "Table of Contents",
"title_sidebar": "Contents",
"toc_cell": false,
"toc_position": {},
"toc_section_display": true,
"toc_window_display": false
},
"vscode": {
"interpreter": {
"hash": "dcb481ad5d98e2afacd650b2c07afac80a299b7b701b553e333fc82865502500"
}
}
},
"nbformat": 4,
"nbformat_minor": 4
}
| {
"cells": [
{
"cell_type": "markdown",
"metadata": {},
"source": [
"# Functional API Preview\n",
"\n",
"This notebook is part of a set of notebooks that provides a preview of the proposed functional API for dowhy. For details on the new API for DoWhy, check out https://github.com/py-why/dowhy/wiki/API-proposal-for-v1. It is a work-in-progress and is updated as we add new functionality. We welcome your feedback through Discord or on the Discussions page.\n",
"This functional API is designed with backwards compatibility in mind, so both the old and new APIs will continue to co-exist and work for the immediate new releases. Gradually, the old API using CausalModel will be deprecated in favor of the new API.\n",
"\n",
"The current Functional API covers:\n",
"* Identify Effect:\n",
" * `identify_effect(...)`: Run the identify effect algorithm with default settings; just provide the graph, treatment and outcome.\n",
" * `identify_effect_auto(...)`: More configurable version of `identify_effect(...)`.\n",
" * `identify_effect_id(...)`: Identify Effect using the ID-Algorithm.\n",
"* Refute Estimate:\n",
" * `refute_estimate`: Function to run a set of the refuters below with the default parameters.\n",
" * `refute_bootstrap`: Refute an estimate by running it on a random sample of the data containing measurement error in the confounders.\n",
" * `refute_data_subset`: Refute an estimate by rerunning it on a random subset of the original data.\n",
" * `refute_random_common_cause`: Refute an estimate by introducing a randomly generated confounder (that may have been unobserved).\n",
" * `refute_placebo_treatment`: Refute an estimate by replacing treatment with a randomly-generated placebo variable.\n",
" * `sensitivity_simulation`: Add an unobserved confounder for refutation (Simulation of an unobserved confounder).\n",
" * `sensitivity_linear_partial_r2`: Add an unobserved confounder for refutation (Linear partial R2: Sensitivity Analysis for linear models).\n",
" * `sensitivity_non_parametric_partial_r2`: Add an unobserved confounder for refutation (Non-Parametric partial R2 based: Sensitivity Analysis for non-parametric models).\n",
" * `sensitivity_e_value`: Computes the E-value for the point estimate and confidence limits, benchmarks E-values against measured confounders using Observed Covariate E-values, and plots E-values and Observed Covariate E-values.\n",
" * `refute_dummy_outcome`: Refute an estimate by replacing the outcome with a simulated (dummy) outcome."
]
},
{
"cell_type": "markdown",
"metadata": {},
"source": [
"## Import Dependencies"
]
},
{
"cell_type": "code",
"execution_count": null,
"metadata": {},
"outputs": [],
"source": [
"# Functional API imports\n",
"from dowhy.causal_identifier import (\n",
" BackdoorAdjustment,\n",
" EstimandType,\n",
" identify_effect,\n",
" identify_effect_auto,\n",
" identify_effect_id,\n",
") # import effect identifier\n",
"from dowhy.causal_refuters import (\n",
" refute_bootstrap,\n",
" refute_data_subset,\n",
" refute_random_common_cause,\n",
" refute_placebo_treatment,\n",
" sensitivity_e_value,\n",
" sensitivity_linear_partial_r2,\n",
" sensitivity_non_parametric_partial_r2,\n",
" sensitivity_simulation,\n",
" refute_dummy_outcome,\n",
" refute_estimate,\n",
") # import refuters\n",
"\n",
"from dowhy.causal_estimators.propensity_score_matching_estimator import PropensityScoreMatchingEstimator\n",
"\n",
"from dowhy.utils.api import parse_state\n",
"\n",
"from dowhy.causal_estimator import estimate_effect # Estimate effect function\n",
"\n",
"from dowhy.causal_graph import CausalGraph\n",
"\n",
"# Other imports required\n",
"from dowhy.datasets import linear_dataset\n",
"from dowhy import CausalModel # We still need this as we haven't created the functional API for effect estimation\n",
"import econml\n",
"\n",
"# Config dict to set the logging level\n",
"import logging.config\n",
"\n",
"DEFAULT_LOGGING = {\n",
" \"version\": 1,\n",
" \"disable_existing_loggers\": False,\n",
" \"loggers\": {\n",
" \"\": {\n",
" \"level\": \"WARN\",\n",
" },\n",
" },\n",
"}\n",
"\n",
"\n",
"# set random seed for deterministic dataset generation \n",
"# and avoid problems when running tests\n",
"import numpy as np\n",
"np.random.seed(1)\n",
"\n",
"logging.config.dictConfig(DEFAULT_LOGGING)\n",
"# Disabling warnings output\n",
"import warnings\n",
"from sklearn.exceptions import DataConversionWarning\n",
"\n",
"warnings.filterwarnings(action=\"ignore\", category=DataConversionWarning)"
]
},
{
"cell_type": "markdown",
"metadata": {},
"source": [
"## Create the Datasets"
]
},
{
"cell_type": "code",
"execution_count": null,
"metadata": {},
"outputs": [],
"source": [
"# Parameters for creating the Dataset\n",
"TREATMENT_IS_BINARY = True\n",
"BETA = 10\n",
"NUM_SAMPLES = 500\n",
"NUM_CONFOUNDERS = 3\n",
"NUM_INSTRUMENTS = 2\n",
"NUM_EFFECT_MODIFIERS = 2\n",
"\n",
"# Creating a Linear Dataset with the given parameters\n",
"data = linear_dataset(\n",
" beta=BETA,\n",
" num_common_causes=NUM_CONFOUNDERS,\n",
" num_instruments=NUM_INSTRUMENTS,\n",
" num_effect_modifiers=NUM_EFFECT_MODIFIERS,\n",
" num_samples=NUM_SAMPLES,\n",
" treatment_is_binary=True,\n",
")\n",
"\n",
"treatment_name = data[\"treatment_name\"]\n",
"print(treatment_name)\n",
"outcome_name = data[\"outcome_name\"]\n",
"print(outcome_name)\n",
"\n",
"graph = CausalGraph(\n",
" treatment_name=treatment_name,\n",
" outcome_name=outcome_name,\n",
" graph=data[\"gml_graph\"],\n",
" effect_modifier_names=data[\"effect_modifier_names\"],\n",
" common_cause_names=data[\"common_causes_names\"],\n",
" observed_node_names=data[\"df\"].columns.tolist(),\n",
")"
]
},
{
"cell_type": "markdown",
"metadata": {},
"source": [
"## Identify Effect - Functional API (Preview)"
]
},
{
"cell_type": "code",
"execution_count": null,
"metadata": {},
"outputs": [],
"source": [
"# Default identify_effect call example:\n",
"identified_estimand = identify_effect(graph, treatment_name, outcome_name)\n",
"\n",
"# identify_effect_auto example with extra parameters:\n",
"identified_estimand_auto = identify_effect_auto(\n",
"    graph,\n",
"    treatment_name,\n",
"    outcome_name,\n",
"    estimand_type=EstimandType.NONPARAMETRIC_ATE,\n",
"    backdoor_adjustment=BackdoorAdjustment.BACKDOOR_EFFICIENT,\n",
")\n",
"\n",
"# identify_effect_id example:\n",
"identified_estimand_id = identify_effect_id(\n",
"    graph, treatment_name, outcome_name\n",
") # Note that the return type for identify_effect_id is IDExpression and not IdentifiedEstimand\n",
"\n",
"print(identified_estimand)"
]
},
{
"cell_type": "markdown",
"metadata": {},
"source": [
"## Estimate Effect - Functional API (Preview)"
]
},
{
"cell_type": "code",
"execution_count": null,
"metadata": {},
"outputs": [],
"source": [
"# Basic Estimate Effect function\n",
"\n",
"\n",
"propensity_score_estimator = PropensityScoreMatchingEstimator(\n",
" data=data[\"df\"],\n",
" identified_estimand=identified_estimand,\n",
" treatment=treatment_name,\n",
" outcome=outcome_name,\n",
" control_value=0,\n",
" treatment_value=1,\n",
" test_significance=None,\n",
" evaluate_effect_strength=False,\n",
" confidence_intervals=False,\n",
" target_units=\"ate\",\n",
" effect_modifiers=graph.get_effect_modifiers(treatment_name, outcome_name),\n",
")\n",
"\n",
"estimate = estimate_effect(\n",
" treatment=treatment_name,\n",
" outcome=outcome_name,\n",
" identified_estimand=identified_estimand,\n",
" identifier_name=\"backdoor\",\n",
" method=propensity_score_estimator,\n",
")\n",
"\n",
"print(estimate)"
]
},
{
"cell_type": "markdown",
"metadata": {},
"source": [
"## Refute Estimate - Functional API (Preview)"
]
},
{
"cell_type": "code",
"execution_count": null,
"metadata": {},
"outputs": [],
"source": [
"# You can call the refute_estimate function to execute several refuters with their default parameters\n",
"# Currently this function does not support the sensitivity_* functions\n",
"refutation_results = refute_estimate(\n",
"    data[\"df\"],\n",
"    identified_estimand,\n",
"    estimate,\n",
"    treatment_name=treatment_name,\n",
"    outcome_name=outcome_name,\n",
"    refuters=[refute_bootstrap, refute_data_subset],\n",
")\n",
"\n",
"for result in refutation_results:\n",
"    print(result)\n",
"\n",
"# Or you can execute refute methods directly\n",
"# You can swap refute_bootstrap / refute_data_subset for any of the other refuters, adding any parameters they require (a sketch follows below)\n",
"\n",
"bootstrap_refutation = refute_bootstrap(data[\"df\"], identified_estimand, estimate)\n",
"print(bootstrap_refutation)\n",
"\n",
"data_subset_refutation = refute_data_subset(data[\"df\"], identified_estimand, estimate)\n",
"print(data_subset_refutation)"
]
},
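{
"cell_type": "markdown",
"metadata": {},
"source": [
"`refute_dummy_outcome` can be called in the same direct style; a sketch that assumes it also accepts the outcome column name as a keyword."
]
},
{
"cell_type": "code",
"execution_count": null,
"metadata": {},
"outputs": [],
"source": [
"# Sketch: assumes refute_dummy_outcome takes the data, estimand and estimate,\n",
"# plus the outcome column name (the keyword name is an assumption)\n",
"dummy_outcome_refutation = refute_dummy_outcome(\n",
"    data[\"df\"], identified_estimand, estimate, outcome_name=outcome_name\n",
")\n",
"print(dummy_outcome_refutation)"
]
},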
{
"cell_type": "markdown",
"metadata": {},
"source": [
"# Backwards Compatibility\n",
"\n",
"This section shows replicating the same results using only the CausalModel API"
]
},
{
"cell_type": "code",
"execution_count": null,
"metadata": {},
"outputs": [],
"source": [
"# Create Causal Model\n",
"causal_model = CausalModel(data=data[\"df\"], treatment=treatment_name, outcome=outcome_name, graph=data[\"gml_graph\"])\n"
]
},
{
"cell_type": "markdown",
"metadata": {},
"source": [
"## Identify Effect"
]
},
{
"cell_type": "code",
"execution_count": null,
"metadata": {},
"outputs": [],
"source": [
"identified_estimand_causal_model_api = (\n",
"    causal_model.identify_effect()\n",
") # graph, treatment and outcome come from the causal_model object\n",
"\n",
"print(identified_estimand_causal_model_api)"
]
},
{
"cell_type": "markdown",
"metadata": {},
"source": [
"## Estimate Effect"
]
},
{
"cell_type": "code",
"execution_count": null,
"metadata": {},
"outputs": [],
"source": [
"estimate_causal_model_api = causal_model.estimate_effect(\n",
" identified_estimand_causal_model_api, method_name=\"backdoor.propensity_score_matching\"\n",
")\n",
"\n",
"print(estimate_causal_model_api)"
]
},
{
"cell_type": "markdown",
"metadata": {},
"source": [
"## Refute Estimate"
]
},
{
"cell_type": "code",
"execution_count": null,
"metadata": {},
"outputs": [],
"source": [
"bootstrap_refutation_causal_model_api = causal_model.refute_estimate(identified_estimand_causal_model_api, estimate_causal_model_api, \"bootstrap_refuter\")\n",
"print(bootstrap_refutation_causal_model_api)\n",
"\n",
"data_subset_refutation_causal_model_api = causal_model.refute_estimate(\n",
" identified_estimand_causal_model_api, estimate_causal_model_api, \"data_subset_refuter\"\n",
")\n",
"\n",
"print(data_subset_refutation_causal_model_api)"
]
},
{
"cell_type": "code",
"execution_count": null,
"metadata": {},
"outputs": [],
"source": []
}
],
"metadata": {
"kernelspec": {
"display_name": "Python 3.8.10 ('dowhy-_zBapv7Q-py3.8')",
"language": "python",
"name": "python3"
},
"language_info": {
"codemirror_mode": {
"name": "ipython",
"version": 3
},
"file_extension": ".py",
"mimetype": "text/x-python",
"name": "python",
"nbconvert_exporter": "python",
"pygments_lexer": "ipython3",
"version": "3.8.10"
},
"toc": {
"base_numbering": 1,
"nav_menu": {},
"number_sections": false,
"sideBar": true,
"skip_h1_title": true,
"title_cell": "Table of Contents",
"title_sidebar": "Contents",
"toc_cell": false,
"toc_position": {},
"toc_section_display": true,
"toc_window_display": false
},
"vscode": {
"interpreter": {
"hash": "dcb481ad5d98e2afacd650b2c07afac80a299b7b701b553e333fc82865502500"
}
}
},
"nbformat": 4,
"nbformat_minor": 4
}
| andresmor-ms | b9bab69d737f34fc32a73feb426d0ddc2f471df2 | 68f5d2b1bc7c5357b243dcef31510cfdc65ff871 | Those are other examples of executing the same code above; I commented them out to keep this notebook from taking longer to execute. A user could just copy them to use the API in a slightly different way. But now that I write this, I realize that this also works as a test, making sure that it actually works :D I'll uncomment them again. | andresmor-ms | 268 |
py-why/dowhy | 732 | Set seed on data generation for deterministic test | * Set seed for deterministic data generation on `dowhy_functional_api.ipynb` notebook.
* Fix wrong parameters on backwards compatibility example.
Fixes #704
Signed-off-by: Andres Morales <[email protected]> | null | 2022-10-31 18:37:31+00:00 | 2022-11-01 15:16:07+00:00 | docs/source/example_notebooks/dowhy_functional_api.ipynb | {
"cells": [
{
"cell_type": "markdown",
"metadata": {},
"source": [
"# Functional API Preview\n",
"\n",
"This notebook is part of a set of notebooks that provides a preview of the proposed functional API for dowhy. For details on the new API for DoWhy, check out https://github.com/py-why/dowhy/wiki/API-proposal-for-v1. It is a work-in-progress and is updated as we add new functionality. We welcome your feedback through Discord or on the Discussions page.\n",
"This functional API is designed with backwards compatibility in mind, so both the old and new APIs will continue to co-exist and work for the immediate new releases. Gradually, the old API using CausalModel will be deprecated in favor of the new API.\n",
"\n",
"The current Functional API covers:\n",
"* Identify Effect:\n",
" * `identify_effect(...)`: Run the identify effect algorithm with default settings; just provide the graph, treatment and outcome.\n",
" * `identify_effect_auto(...)`: More configurable version of `identify_effect(...)`.\n",
" * `identify_effect_id(...)`: Identify Effect using the ID-Algorithm.\n",
"* Refute Estimate:\n",
" * `refute_estimate`: Function to run a set of the refuters below with the default parameters.\n",
" * `refute_bootstrap`: Refute an estimate by running it on a random sample of the data containing measurement error in the confounders.\n",
" * `refute_data_subset`: Refute an estimate by rerunning it on a random subset of the original data.\n",
" * `refute_random_common_cause`: Refute an estimate by introducing a randomly generated confounder (that may have been unobserved).\n",
" * `refute_placebo_treatment`: Refute an estimate by replacing treatment with a randomly-generated placebo variable.\n",
" * `sensitivity_simulation`: Add an unobserved confounder for refutation (Simulation of an unobserved confounder).\n",
" * `sensitivity_linear_partial_r2`: Add an unobserved confounder for refutation (Linear partial R2: Sensitivity Analysis for linear models).\n",
" * `sensitivity_non_parametric_partial_r2`: Add an unobserved confounder for refutation (Non-Parametric partial R2 based: Sensitivity Analysis for non-parametric models).\n",
" * `sensitivity_e_value`: Computes the E-value for the point estimate and confidence limits, benchmarks E-values against measured confounders using Observed Covariate E-values, and plots E-values and Observed Covariate E-values.\n",
" * `refute_dummy_outcome`: Refute an estimate by replacing the outcome with a simulated (dummy) outcome."
]
},
{
"cell_type": "markdown",
"metadata": {},
"source": [
"## Import Dependencies"
]
},
{
"cell_type": "code",
"execution_count": null,
"metadata": {},
"outputs": [],
"source": [
"# Functional API imports\n",
"from dowhy.causal_identifier import (\n",
" BackdoorAdjustment,\n",
" EstimandType,\n",
" identify_effect,\n",
" identify_effect_auto,\n",
" identify_effect_id,\n",
") # import effect identifier\n",
"from dowhy.causal_refuters import (\n",
" refute_bootstrap,\n",
" refute_data_subset,\n",
" refute_random_common_cause,\n",
" refute_placebo_treatment,\n",
" sensitivity_e_value,\n",
" sensitivity_linear_partial_r2,\n",
" sensitivity_non_parametric_partial_r2,\n",
" sensitivity_simulation,\n",
" refute_dummy_outcome,\n",
" refute_estimate,\n",
") # import refuters\n",
"\n",
"from dowhy.causal_estimators.propensity_score_matching_estimator import PropensityScoreMatchingEstimator\n",
"\n",
"from dowhy.utils.api import parse_state\n",
"\n",
"from dowhy.causal_estimator import estimate_effect # Estimate effect function\n",
"\n",
"from dowhy.causal_graph import CausalGraph\n",
"\n",
"# Other imports required\n",
"from dowhy.datasets import linear_dataset\n",
"from dowhy import CausalModel # We still need this as we haven't created the functional API for effect estimation\n",
"import econml\n",
"\n",
"# Config dict to set the logging level\n",
"import logging.config\n",
"\n",
"DEFAULT_LOGGING = {\n",
" \"version\": 1,\n",
" \"disable_existing_loggers\": False,\n",
" \"loggers\": {\n",
" \"\": {\n",
" \"level\": \"WARN\",\n",
" },\n",
" },\n",
"}\n",
"\n",
"logging.config.dictConfig(DEFAULT_LOGGING)\n",
"# Disabling warnings output\n",
"import warnings\n",
"from sklearn.exceptions import DataConversionWarning\n",
"\n",
"warnings.filterwarnings(action=\"ignore\", category=DataConversionWarning)"
]
},
{
"cell_type": "markdown",
"metadata": {},
"source": [
"## Create the Datasets"
]
},
{
"cell_type": "code",
"execution_count": null,
"metadata": {},
"outputs": [],
"source": [
"# Parameters for creating the Dataset\n",
"TREATMENT_IS_BINARY = True\n",
"BETA = 10\n",
"NUM_SAMPLES = 500\n",
"NUM_CONFOUNDERS = 3\n",
"NUM_INSTRUMENTS = 2\n",
"NUM_EFFECT_MODIFIERS = 2\n",
"\n",
"# Creating a Linear Dataset with the given parameters\n",
"data = linear_dataset(\n",
" beta=BETA,\n",
" num_common_causes=NUM_CONFOUNDERS,\n",
" num_instruments=NUM_INSTRUMENTS,\n",
" num_effect_modifiers=NUM_EFFECT_MODIFIERS,\n",
" num_samples=NUM_SAMPLES,\n",
" treatment_is_binary=True,\n",
")\n",
"\n",
"treatment_name = data[\"treatment_name\"]\n",
"print(treatment_name)\n",
"outcome_name = data[\"outcome_name\"]\n",
"print(outcome_name)\n",
"\n",
"graph = CausalGraph(\n",
" treatment_name=treatment_name,\n",
" outcome_name=outcome_name,\n",
" graph=data[\"gml_graph\"],\n",
" effect_modifier_names=data[\"effect_modifier_names\"],\n",
" common_cause_names=data[\"common_causes_names\"],\n",
" observed_node_names=data[\"df\"].columns.tolist(),\n",
")"
]
},
{
"cell_type": "markdown",
"metadata": {},
"source": [
"## Identify Effect - Functional API (Preview)"
]
},
{
"cell_type": "code",
"execution_count": null,
"metadata": {},
"outputs": [],
"source": [
"# Default identify_effect call example:\n",
"identified_estimand = identify_effect(graph, treatment_name, outcome_name)\n",
"\n",
"# identify_effect_auto example with extra parameters:\n",
"identified_estimand_auto = identify_effect_auto(\n",
"    graph,\n",
"    treatment_name,\n",
"    outcome_name,\n",
"    estimand_type=EstimandType.NONPARAMETRIC_ATE,\n",
"    backdoor_adjustment=BackdoorAdjustment.BACKDOOR_EFFICIENT,\n",
")\n",
"\n",
"# identify_effect_id example:\n",
"identified_estimand_id = identify_effect_id(\n",
"    graph, treatment_name, outcome_name\n",
") # Note that the return type for identify_effect_id is IDExpression and not IdentifiedEstimand\n",
"\n",
"print(identified_estimand)"
]
},
{
"cell_type": "markdown",
"metadata": {},
"source": [
"## Estimate Effect - Functional API (Preview)"
]
},
{
"cell_type": "code",
"execution_count": null,
"metadata": {},
"outputs": [],
"source": [
"# Basic Estimate Effect function\n",
"\n",
"\n",
"propensity_score_estimator = PropensityScoreMatchingEstimator(\n",
" data=data[\"df\"],\n",
" identified_estimand=identified_estimand,\n",
" treatment=treatment_name,\n",
" outcome=outcome_name,\n",
" control_value=0,\n",
" treatment_value=1,\n",
" test_significance=None,\n",
" evaluate_effect_strength=False,\n",
" confidence_intervals=False,\n",
" target_units=\"ate\",\n",
" effect_modifiers=graph.get_effect_modifiers(treatment_name, outcome_name),\n",
")\n",
"\n",
"estimate = estimate_effect(\n",
" treatment=treatment_name,\n",
" outcome=outcome_name,\n",
" identified_estimand=identified_estimand,\n",
" identifier_name=\"backdoor\",\n",
" method=propensity_score_estimator,\n",
")\n",
"\n",
"print(estimate)"
]
},
{
"cell_type": "markdown",
"metadata": {},
"source": [
"## Refute Estimate - Functional API (Preview)"
]
},
{
"cell_type": "code",
"execution_count": null,
"metadata": {},
"outputs": [],
"source": [
"# You can call the refute_estimate function to execute several refuters with their default parameters\n",
"# Currently this function does not support the sensitivity_* functions\n",
"refutation_results = refute_estimate(\n",
"    data[\"df\"],\n",
"    identified_estimand,\n",
"    estimate,\n",
"    treatment_name=treatment_name,\n",
"    outcome_name=outcome_name,\n",
"    refuters=[refute_bootstrap, refute_data_subset],\n",
")\n",
"\n",
"for result in refutation_results:\n",
"    print(result)\n",
"\n",
"# Or you can execute refute methods directly\n",
"# You can swap refute_bootstrap / refute_data_subset for any of the other refuters, adding any parameters they require (a sketch follows below)\n",
"\n",
"bootstrap_refutation = refute_bootstrap(data[\"df\"], identified_estimand, estimate)\n",
"print(bootstrap_refutation)\n",
"\n",
"data_subset_refutation = refute_data_subset(data[\"df\"], identified_estimand, estimate)\n",
"print(data_subset_refutation)"
]
},
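{
"cell_type": "markdown",
"metadata": {},
"source": [
"`refute_estimate` accepts any subset of the refuters imported at the top of this notebook; a sketch with a longer list, assuming the extra refuters work with these default arguments."
]
},
{
"cell_type": "code",
"execution_count": null,
"metadata": {},
"outputs": [],
"source": [
"# Sketch: running more refuters in one call; the extra entries are assumed to\n",
"# accept the same default arguments as refute_bootstrap and refute_data_subset\n",
"more_refutation_results = refute_estimate(\n",
"    data[\"df\"],\n",
"    identified_estimand,\n",
"    estimate,\n",
"    treatment_name=treatment_name,\n",
"    outcome_name=outcome_name,\n",
"    refuters=[refute_bootstrap, refute_data_subset, refute_random_common_cause, refute_placebo_treatment],\n",
")\n",
"\n",
"for result in more_refutation_results:\n",
"    print(result)"
]
},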
{
"cell_type": "markdown",
"metadata": {},
"source": [
"# Backwards Compatibility\n",
"\n",
"This section shows replicating the same results using only the CausalModel API"
]
},
{
"cell_type": "code",
"execution_count": null,
"metadata": {},
"outputs": [],
"source": [
"# Create Causal Model\n",
"causal_model = CausalModel(data=data[\"df\"], treatment=treatment_name, outcome=outcome_name, graph=data[\"gml_graph\"])\n"
]
},
{
"cell_type": "markdown",
"metadata": {},
"source": [
"## Identify Effect"
]
},
{
"cell_type": "code",
"execution_count": null,
"metadata": {},
"outputs": [],
"source": [
"identified_estimand_causal_model_api = (\n",
"    causal_model.identify_effect()\n",
") # graph, treatment and outcome come from the causal_model object\n",
"\n",
"print(identified_estimand_causal_model_api)"
]
},
{
"cell_type": "markdown",
"metadata": {},
"source": [
"## Estimate Effect"
]
},
{
"cell_type": "code",
"execution_count": null,
"metadata": {},
"outputs": [],
"source": [
"estimate_causal_model_api = causal_model.estimate_effect(\n",
" identified_estimand, method_name=\"backdoor.propensity_score_matching\"\n",
")\n",
"\n",
"print(estimate)"
]
},
{
"cell_type": "markdown",
"metadata": {},
"source": [
"## Refute Estimate"
]
},
{
"cell_type": "code",
"execution_count": null,
"metadata": {},
"outputs": [],
"source": [
"bootstrap_refutation_causal_model_api = causal_model.refute_estimate(identified_estimand, estimate, \"bootstrap_refuter\")\n",
"print(bootstrap_refutation_causal_model_api)\n",
"\n",
"data_subset_refutation_causal_model_api = causal_model.refute_estimate(\n",
" identified_estimand, estimate, \"data_subset_refuter\"\n",
")\n",
"print(data_subset_refutation_causal_model_api)"
]
},
{
"cell_type": "code",
"execution_count": null,
"metadata": {},
"outputs": [],
"source": []
}
],
"metadata": {
"kernelspec": {
"display_name": "Python 3.8.10 ('dowhy-_zBapv7Q-py3.8')",
"language": "python",
"name": "python3"
},
"language_info": {
"codemirror_mode": {
"name": "ipython",
"version": 3
},
"file_extension": ".py",
"mimetype": "text/x-python",
"name": "python",
"nbconvert_exporter": "python",
"pygments_lexer": "ipython3",
"version": "3.8.10"
},
"toc": {
"base_numbering": 1,
"nav_menu": {},
"number_sections": false,
"sideBar": true,
"skip_h1_title": true,
"title_cell": "Table of Contents",
"title_sidebar": "Contents",
"toc_cell": false,
"toc_position": {},
"toc_section_display": true,
"toc_window_display": false
},
"vscode": {
"interpreter": {
"hash": "dcb481ad5d98e2afacd650b2c07afac80a299b7b701b553e333fc82865502500"
}
}
},
"nbformat": 4,
"nbformat_minor": 4
}
| {
"cells": [
{
"cell_type": "markdown",
"metadata": {},
"source": [
"# Functional API Preview\n",
"\n",
"This notebook is part of a set of notebooks that provides a preview of the proposed functional API for dowhy. For details on the new API for DoWhy, check out https://github.com/py-why/dowhy/wiki/API-proposal-for-v1. It is a work-in-progress and is updated as we add new functionality. We welcome your feedback through Discord or on the Discussions page.\n",
"This functional API is designed with backwards compatibility in mind, so both the old and new APIs will continue to co-exist and work for the immediate new releases. Gradually, the old API using CausalModel will be deprecated in favor of the new API.\n",
"\n",
"The current Functional API covers:\n",
"* Identify Effect:\n",
" * `identify_effect(...)`: Run the identify effect algorithm with default settings; just provide the graph, treatment and outcome.\n",
" * `identify_effect_auto(...)`: More configurable version of `identify_effect(...)`.\n",
" * `identify_effect_id(...)`: Identify Effect using the ID-Algorithm.\n",
"* Refute Estimate:\n",
" * `refute_estimate`: Function to run a set of the refuters below with the default parameters.\n",
" * `refute_bootstrap`: Refute an estimate by running it on a random sample of the data containing measurement error in the confounders.\n",
" * `refute_data_subset`: Refute an estimate by rerunning it on a random subset of the original data.\n",
" * `refute_random_common_cause`: Refute an estimate by introducing a randomly generated confounder (that may have been unobserved).\n",
" * `refute_placebo_treatment`: Refute an estimate by replacing treatment with a randomly-generated placebo variable.\n",
" * `sensitivity_simulation`: Add an unobserved confounder for refutation (Simulation of an unobserved confounder).\n",
" * `sensitivity_linear_partial_r2`: Add an unobserved confounder for refutation (Linear partial R2: Sensitivity Analysis for linear models).\n",
" * `sensitivity_non_parametric_partial_r2`: Add an unobserved confounder for refutation (Non-Parametric partial R2 based: Sensitivity Analysis for non-parametric models).\n",
" * `sensitivity_e_value`: Computes the E-value for the point estimate and confidence limits, benchmarks E-values against measured confounders using Observed Covariate E-values, and plots E-values and Observed Covariate E-values.\n",
" * `refute_dummy_outcome`: Refute an estimate by replacing the outcome with a simulated (dummy) outcome."
]
},
{
"cell_type": "markdown",
"metadata": {},
"source": [
"## Import Dependencies"
]
},
{
"cell_type": "code",
"execution_count": null,
"metadata": {},
"outputs": [],
"source": [
"# Functional API imports\n",
"from dowhy.causal_identifier import (\n",
" BackdoorAdjustment,\n",
" EstimandType,\n",
" identify_effect,\n",
" identify_effect_auto,\n",
" identify_effect_id,\n",
") # import effect identifier\n",
"from dowhy.causal_refuters import (\n",
" refute_bootstrap,\n",
" refute_data_subset,\n",
" refute_random_common_cause,\n",
" refute_placebo_treatment,\n",
" sensitivity_e_value,\n",
" sensitivity_linear_partial_r2,\n",
" sensitivity_non_parametric_partial_r2,\n",
" sensitivity_simulation,\n",
" refute_dummy_outcome,\n",
" refute_estimate,\n",
") # import refuters\n",
"\n",
"from dowhy.causal_estimators.propensity_score_matching_estimator import PropensityScoreMatchingEstimator\n",
"\n",
"from dowhy.utils.api import parse_state\n",
"\n",
"from dowhy.causal_estimator import estimate_effect # Estimate effect function\n",
"\n",
"from dowhy.causal_graph import CausalGraph\n",
"\n",
"# Other imports required\n",
"from dowhy.datasets import linear_dataset\n",
"from dowhy import CausalModel # We still need this as we haven't created the functional API for effect estimation\n",
"import econml\n",
"\n",
"# Config dict to set the logging level\n",
"import logging.config\n",
"\n",
"DEFAULT_LOGGING = {\n",
" \"version\": 1,\n",
" \"disable_existing_loggers\": False,\n",
" \"loggers\": {\n",
" \"\": {\n",
" \"level\": \"WARN\",\n",
" },\n",
" },\n",
"}\n",
"\n",
"\n",
"# set random seed for deterministic dataset generation \n",
"# and avoid problems when running tests\n",
"import numpy as np\n",
"np.random.seed(1)\n",
"\n",
"logging.config.dictConfig(DEFAULT_LOGGING)\n",
"# Disabling warnings output\n",
"import warnings\n",
"from sklearn.exceptions import DataConversionWarning\n",
"\n",
"warnings.filterwarnings(action=\"ignore\", category=DataConversionWarning)"
]
},
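{
"cell_type": "markdown",
"metadata": {},
"source": [
"Since the seed above is what makes dataset generation deterministic, here is a quick check (a sketch, assuming `linear_dataset` draws from NumPy's global random state): two identically seeded draws should match."
]
},
{
"cell_type": "code",
"execution_count": null,
"metadata": {},
"outputs": [],
"source": [
"# Sketch: with the same seed, two draws from linear_dataset should be identical,\n",
"# assuming it uses NumPy's global random state\n",
"np.random.seed(1)\n",
"df_first = linear_dataset(beta=10, num_common_causes=3, num_samples=100)[\"df\"]\n",
"np.random.seed(1)\n",
"df_second = linear_dataset(beta=10, num_common_causes=3, num_samples=100)[\"df\"]\n",
"assert df_first.equals(df_second)\n",
"\n",
"np.random.seed(1)  # restore the seed used by the rest of the notebook"
]
},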
{
"cell_type": "markdown",
"metadata": {},
"source": [
"## Create the Datasets"
]
},
{
"cell_type": "code",
"execution_count": null,
"metadata": {},
"outputs": [],
"source": [
"# Parameters for creating the Dataset\n",
"TREATMENT_IS_BINARY = True\n",
"BETA = 10\n",
"NUM_SAMPLES = 500\n",
"NUM_CONFOUNDERS = 3\n",
"NUM_INSTRUMENTS = 2\n",
"NUM_EFFECT_MODIFIERS = 2\n",
"\n",
"# Creating a Linear Dataset with the given parameters\n",
"data = linear_dataset(\n",
" beta=BETA,\n",
" num_common_causes=NUM_CONFOUNDERS,\n",
" num_instruments=NUM_INSTRUMENTS,\n",
" num_effect_modifiers=NUM_EFFECT_MODIFIERS,\n",
" num_samples=NUM_SAMPLES,\n",
" treatment_is_binary=True,\n",
")\n",
"\n",
"treatment_name = data[\"treatment_name\"]\n",
"print(treatment_name)\n",
"outcome_name = data[\"outcome_name\"]\n",
"print(outcome_name)\n",
"\n",
"graph = CausalGraph(\n",
" treatment_name=treatment_name,\n",
" outcome_name=outcome_name,\n",
" graph=data[\"gml_graph\"],\n",
" effect_modifier_names=data[\"effect_modifier_names\"],\n",
" common_cause_names=data[\"common_causes_names\"],\n",
" observed_node_names=data[\"df\"].columns.tolist(),\n",
")"
]
},
{
"cell_type": "markdown",
"metadata": {},
"source": [
"## Identify Effect - Functional API (Preview)"
]
},
{
"cell_type": "code",
"execution_count": null,
"metadata": {},
"outputs": [],
"source": [
"# Default identify_effect call example:\n",
"identified_estimand = identify_effect(graph, treatment_name, outcome_name)\n",
"\n",
"# auto_identify_effect example with extra parameters:\n",
"identified_estimand_auto = identify_effect_auto(\n",
" graph,\n",
" treatment_name,\n",
" outcome_name,\n",
" estimand_type=EstimandType.NONPARAMETRIC_ATE,\n",
" backdoor_adjustment=BackdoorAdjustment.BACKDOOR_EFFICIENT,\n",
")\n",
"\n",
"# id_identify_effect example:\n",
"identified_estimand_id = identify_effect_id(\n",
" graph, treatment_name, outcome_name\n",
") # Note that the return type for id_identify_effect is IDExpression and not IdentifiedEstimand\n",
"\n",
"print(identified_estimand)"
]
},
{
"cell_type": "markdown",
"metadata": {},
"source": [
"## Estimate Effect - Functional API (Preview)"
]
},
{
"cell_type": "code",
"execution_count": null,
"metadata": {},
"outputs": [],
"source": [
"# Basic Estimate Effect function\n",
"\n",
"\n",
"propensity_score_estimator = PropensityScoreMatchingEstimator(\n",
" data=data[\"df\"],\n",
" identified_estimand=identified_estimand,\n",
" treatment=treatment_name,\n",
" outcome=outcome_name,\n",
" control_value=0,\n",
" treatment_value=1,\n",
" test_significance=None,\n",
" evaluate_effect_strength=False,\n",
" confidence_intervals=False,\n",
" target_units=\"ate\",\n",
" effect_modifiers=graph.get_effect_modifiers(treatment_name, outcome_name),\n",
")\n",
"\n",
"estimate = estimate_effect(\n",
" treatment=treatment_name,\n",
" outcome=outcome_name,\n",
" identified_estimand=identified_estimand,\n",
" identifier_name=\"backdoor\",\n",
" method=propensity_score_estimator,\n",
")\n",
"\n",
"print(estimate)"
]
},
{
"cell_type": "markdown",
"metadata": {},
"source": [
"## Refute Estimate - Functional API (Preview)"
]
},
{
"cell_type": "code",
"execution_count": null,
"metadata": {},
"outputs": [],
"source": [
"# You can call the refute_estimate function for executing several refuters using default parameters\n",
"# Currently this function does not support sensitivity_* functions\n",
"refutation_results = refute_estimate(\n",
" data[\"df\"],\n",
" identified_estimand,\n",
" estimate,\n",
" treatment_name=treatment_name,\n",
" outcome_name=outcome_name,\n",
" refuters=[refute_bootstrap, refute_data_subset],\n",
")\n",
"\n",
"for result in refutation_results:\n",
" print(result)\n",
"\n",
"# Or you can execute refute methods directly\n",
"# You can change the refute_bootstrap - refute_data_subset for any of the other refuters and add the missing parameters\n",
"\n",
"bootstrap_refutation = refute_bootstrap(data[\"df\"], identified_estimand, estimate)\n",
"print(bootstrap_refutation)\n",
"\n",
"data_subset_refutation = refute_data_subset(data[\"df\"], identified_estimand, estimate)\n",
"print(data_subset_refutation)"
]
},
{
"cell_type": "markdown",
"metadata": {},
"source": [
"# Backwards Compatibility\n",
"\n",
"This section shows replicating the same results using only the CausalModel API"
]
},
{
"cell_type": "code",
"execution_count": null,
"metadata": {},
"outputs": [],
"source": [
"# Create Causal Model\n",
"causal_model = CausalModel(data=data[\"df\"], treatment=treatment_name, outcome=outcome_name, graph=data[\"gml_graph\"])\n"
]
},
{
"cell_type": "markdown",
"metadata": {},
"source": [
"## Identify Effect"
]
},
{
"cell_type": "code",
"execution_count": null,
"metadata": {},
"outputs": [],
"source": [
"identified_estimand_causal_model_api = (\n",
" causal_model.identify_effect()\n",
") # graph, treatment and outcome comes from the causal_model object\n",
"\n",
"print(identified_estimand_causal_model_api)"
]
},
{
"cell_type": "markdown",
"metadata": {},
"source": [
"## Estimate Effect"
]
},
{
"cell_type": "code",
"execution_count": null,
"metadata": {},
"outputs": [],
"source": [
"estimate_causal_model_api = causal_model.estimate_effect(\n",
" identified_estimand_causal_model_api, method_name=\"backdoor.propensity_score_matching\"\n",
")\n",
"\n",
"print(estimate_causal_model_api)"
]
},
{
"cell_type": "markdown",
"metadata": {},
"source": [
"## Refute Estimate"
]
},
{
"cell_type": "code",
"execution_count": null,
"metadata": {},
"outputs": [],
"source": [
"bootstrap_refutation_causal_model_api = causal_model.refute_estimate(identified_estimand_causal_model_api, estimate_causal_model_api, \"bootstrap_refuter\")\n",
"print(bootstrap_refutation_causal_model_api)\n",
"\n",
"data_subset_refutation_causal_model_api = causal_model.refute_estimate(\n",
" identified_estimand_causal_model_api, estimate_causal_model_api, \"data_subset_refuter\"\n",
")\n",
"\n",
"print(data_subset_refutation_causal_model_api)"
]
},
{
"cell_type": "code",
"execution_count": null,
"metadata": {},
"outputs": [],
"source": []
}
],
"metadata": {
"kernelspec": {
"display_name": "Python 3.8.10 ('dowhy-_zBapv7Q-py3.8')",
"language": "python",
"name": "python3"
},
"language_info": {
"codemirror_mode": {
"name": "ipython",
"version": 3
},
"file_extension": ".py",
"mimetype": "text/x-python",
"name": "python",
"nbconvert_exporter": "python",
"pygments_lexer": "ipython3",
"version": "3.8.10"
},
"toc": {
"base_numbering": 1,
"nav_menu": {},
"number_sections": false,
"sideBar": true,
"skip_h1_title": true,
"title_cell": "Table of Contents",
"title_sidebar": "Contents",
"toc_cell": false,
"toc_position": {},
"toc_section_display": true,
"toc_window_display": false
},
"vscode": {
"interpreter": {
"hash": "dcb481ad5d98e2afacd650b2c07afac80a299b7b701b553e333fc82865502500"
}
}
},
"nbformat": 4,
"nbformat_minor": 4
}
| andresmor-ms | b9bab69d737f34fc32a73feb426d0ddc2f471df2 | 68f5d2b1bc7c5357b243dcef31510cfdc65ff871 | Just want to note that we shouldn't use random seeds in unit tests. However, here it's a notebook, so its fine :) | bloebp | 269 |
py-why/dowhy | 732 | Set seed on data generation for deterministic test | * Set seed for deterministic data generation on `dowhy_function_api.ipynb` notebook.
* Fix wrong parameters on backwards compatibility example.
Fixes #704
Signed-off-by: Andres Morales <[email protected]> | null | 2022-10-31 18:37:31+00:00 | 2022-11-01 15:16:07+00:00 | docs/source/example_notebooks/dowhy_functional_api.ipynb | {
"cells": [
{
"cell_type": "markdown",
"metadata": {},
"source": [
"# Functional API Preview\n",
"\n",
"This notebook is part of a set of notebooks that provides a preview of the proposed functional API for dowhy. For details on the new API for DoWhy, check out https://github.com/py-why/dowhy/wiki/API-proposal-for-v1 It is a work-in-progress and is updated as we add new functionality. We welcome your feedback through Discord or on the Discussions page.\n",
"This functional API is designed with backwards compatibility. So both the old and new API will continue to co-exist and work for the immediate new releases. Gradually the old API using CausalModel will be deprecated in favor of the new API. \n",
"\n",
"The current Functional API covers:\n",
"* Identify Effect:\n",
" * `identify_effect(...)`: Run the identify effect algorithm using defaults just provide the graph, treatment and outcome.\n",
" * `auto_identify_effect(...)`: More configurable version of `identify_effect(...)`.\n",
" * `id_identify_effect(...)`: Identify Effect using the ID-Algorithm.\n",
"* Refute Estimate:\n",
" * `refute_estimate`: Function to run a set of the refuters below with the default parameters.\n",
" * `refute_bootstrap`: Refute an estimate by running it on a random sample of the data containing measurement error in the confounders.\n",
" * `refute_data_subset`: Refute an estimate by rerunning it on a random subset of the original data.\n",
" * `refute_random_common_cause`: Refute an estimate by introducing a randomly generated confounder (that may have been unobserved).\n",
" * `refute_placebo_treatment`: Refute an estimate by replacing treatment with a randomly-generated placebo variable.\n",
" * `sensitivity_simulation`: Add an unobserved confounder for refutation (Simulation of an unobserved confounder).\n",
" * `sensitivity_linear_partial_r2`: Add an unobserved confounder for refutation (Linear partial R2 : Sensitivity Analysis for linear models).\n",
" * `sensitivity_non_parametric_partial_r2`: Add an unobserved confounder for refutation (Non-Parametric partial R2 based : Sensitivity Analyis for non-parametric models).\n",
" * `sensitivity_e_value`: Computes E-value for point estimate and confidence limits. Benchmarks E-values against measured confounders using Observed Covariate E-values. Plots E-values and Observed\n",
" Covariate E-values.\n",
" * `refute_dummy_outcome`: Refute an estimate by introducing a randomly generated confounder (that may have been unobserved)."
]
},
{
"cell_type": "markdown",
"metadata": {},
"source": [
"## Import Dependencies"
]
},
{
"cell_type": "code",
"execution_count": null,
"metadata": {},
"outputs": [],
"source": [
"# Functional API imports\n",
"from dowhy.causal_identifier import (\n",
" BackdoorAdjustment,\n",
" EstimandType,\n",
" identify_effect,\n",
" identify_effect_auto,\n",
" identify_effect_id,\n",
") # import effect identifier\n",
"from dowhy.causal_refuters import (\n",
" refute_bootstrap,\n",
" refute_data_subset,\n",
" refute_random_common_cause,\n",
" refute_placebo_treatment,\n",
" sensitivity_e_value,\n",
" sensitivity_linear_partial_r2,\n",
" sensitivity_non_parametric_partial_r2,\n",
" sensitivity_simulation,\n",
" refute_dummy_outcome,\n",
" refute_estimate,\n",
") # import refuters\n",
"\n",
"from dowhy.causal_estimators.propensity_score_matching_estimator import PropensityScoreMatchingEstimator\n",
"\n",
"from dowhy.utils.api import parse_state\n",
"\n",
"from dowhy.causal_estimator import estimate_effect # Estimate effect function\n",
"\n",
"from dowhy.causal_graph import CausalGraph\n",
"\n",
"# Other imports required\n",
"from dowhy.datasets import linear_dataset\n",
"from dowhy import CausalModel # We still need this as we haven't created the functional API for effect estimation\n",
"import econml\n",
"\n",
"# Config dict to set the logging level\n",
"import logging.config\n",
"\n",
"DEFAULT_LOGGING = {\n",
" \"version\": 1,\n",
" \"disable_existing_loggers\": False,\n",
" \"loggers\": {\n",
" \"\": {\n",
" \"level\": \"WARN\",\n",
" },\n",
" },\n",
"}\n",
"\n",
"logging.config.dictConfig(DEFAULT_LOGGING)\n",
"# Disabling warnings output\n",
"import warnings\n",
"from sklearn.exceptions import DataConversionWarning\n",
"\n",
"warnings.filterwarnings(action=\"ignore\", category=DataConversionWarning)"
]
},
{
"cell_type": "markdown",
"metadata": {},
"source": [
"## Create the Datasets"
]
},
{
"cell_type": "code",
"execution_count": null,
"metadata": {},
"outputs": [],
"source": [
"# Parameters for creating the Dataset\n",
"TREATMENT_IS_BINARY = True\n",
"BETA = 10\n",
"NUM_SAMPLES = 500\n",
"NUM_CONFOUNDERS = 3\n",
"NUM_INSTRUMENTS = 2\n",
"NUM_EFFECT_MODIFIERS = 2\n",
"\n",
"# Creating a Linear Dataset with the given parameters\n",
"data = linear_dataset(\n",
" beta=BETA,\n",
" num_common_causes=NUM_CONFOUNDERS,\n",
" num_instruments=NUM_INSTRUMENTS,\n",
" num_effect_modifiers=NUM_EFFECT_MODIFIERS,\n",
" num_samples=NUM_SAMPLES,\n",
" treatment_is_binary=True,\n",
")\n",
"\n",
"treatment_name = data[\"treatment_name\"]\n",
"print(treatment_name)\n",
"outcome_name = data[\"outcome_name\"]\n",
"print(outcome_name)\n",
"\n",
"graph = CausalGraph(\n",
" treatment_name=treatment_name,\n",
" outcome_name=outcome_name,\n",
" graph=data[\"gml_graph\"],\n",
" effect_modifier_names=data[\"effect_modifier_names\"],\n",
" common_cause_names=data[\"common_causes_names\"],\n",
" observed_node_names=data[\"df\"].columns.tolist(),\n",
")"
]
},
{
"cell_type": "markdown",
"metadata": {},
"source": [
"## Identify Effect - Functional API (Preview)"
]
},
{
"cell_type": "code",
"execution_count": null,
"metadata": {},
"outputs": [],
"source": [
"# Default identify_effect call example:\n",
"identified_estimand = identify_effect(graph, treatment_name, outcome_name)\n",
"\n",
"# auto_identify_effect example with extra parameters:\n",
"identified_estimand_auto = identify_effect_auto(\n",
" graph,\n",
" treatment_name,\n",
" outcome_name,\n",
" estimand_type=EstimandType.NONPARAMETRIC_ATE,\n",
" backdoor_adjustment=BackdoorAdjustment.BACKDOOR_EFFICIENT,\n",
")\n",
"\n",
"# id_identify_effect example:\n",
"identified_estimand_id = identify_effect_id(\n",
" graph, treatment_name, outcome_name\n",
") # Note that the return type for id_identify_effect is IDExpression and not IdentifiedEstimand\n",
"\n",
"print(identified_estimand)"
]
},
{
"cell_type": "markdown",
"metadata": {},
"source": [
"## Estimate Effect - Functional API (Preview)"
]
},
{
"cell_type": "code",
"execution_count": null,
"metadata": {},
"outputs": [],
"source": [
"# Basic Estimate Effect function\n",
"\n",
"\n",
"propensity_score_estimator = PropensityScoreMatchingEstimator(\n",
" data=data[\"df\"],\n",
" identified_estimand=identified_estimand,\n",
" treatment=treatment_name,\n",
" outcome=outcome_name,\n",
" control_value=0,\n",
" treatment_value=1,\n",
" test_significance=None,\n",
" evaluate_effect_strength=False,\n",
" confidence_intervals=False,\n",
" target_units=\"ate\",\n",
" effect_modifiers=graph.get_effect_modifiers(treatment_name, outcome_name),\n",
")\n",
"\n",
"estimate = estimate_effect(\n",
" treatment=treatment_name,\n",
" outcome=outcome_name,\n",
" identified_estimand=identified_estimand,\n",
" identifier_name=\"backdoor\",\n",
" method=propensity_score_estimator,\n",
")\n",
"\n",
"print(estimate)"
]
},
{
"cell_type": "markdown",
"metadata": {},
"source": [
"## Refute Estimate - Functional API (Preview)"
]
},
{
"cell_type": "code",
"execution_count": null,
"metadata": {},
"outputs": [],
"source": [
"# You can call the refute_estimate function for executing several refuters using default parameters\n",
"# Currently this function does not support sensitivity_* functions\n",
"refutation_results = refute_estimate(\n",
" data[\"df\"],\n",
" identified_estimand,\n",
" estimate,\n",
" treatment_name=treatment_name,\n",
" outcome_name=outcome_name,\n",
" refuters=[refute_bootstrap, refute_data_subset],\n",
")\n",
"\n",
"for result in refutation_results:\n",
" print(result)\n",
"\n",
"# Or you can execute refute methods directly\n",
"# You can change the refute_bootstrap - refute_data_subset for any of the other refuters and add the missing parameters\n",
"\n",
"bootstrap_refutation = refute_bootstrap(data[\"df\"], identified_estimand, estimate)\n",
"print(bootstrap_refutation)\n",
"\n",
"data_subset_refutation = refute_data_subset(data[\"df\"], identified_estimand, estimate)\n",
"print(data_subset_refutation)"
]
},
{
"cell_type": "markdown",
"metadata": {},
"source": [
"# Backwards Compatibility\n",
"\n",
"This section shows replicating the same results using only the CausalModel API"
]
},
{
"cell_type": "code",
"execution_count": null,
"metadata": {},
"outputs": [],
"source": [
"# Create Causal Model\n",
"causal_model = CausalModel(data=data[\"df\"], treatment=treatment_name, outcome=outcome_name, graph=data[\"gml_graph\"])\n"
]
},
{
"cell_type": "markdown",
"metadata": {},
"source": [
"## Identify Effect"
]
},
{
"cell_type": "code",
"execution_count": null,
"metadata": {},
"outputs": [],
"source": [
"identified_estimand_causal_model_api = (\n",
" causal_model.identify_effect()\n",
") # graph, treatment and outcome comes from the causal_model object\n",
"\n",
"print(identified_estimand_causal_model_api)"
]
},
{
"cell_type": "markdown",
"metadata": {},
"source": [
"## Estimate Effect"
]
},
{
"cell_type": "code",
"execution_count": null,
"metadata": {},
"outputs": [],
"source": [
"estimate_causal_model_api = causal_model.estimate_effect(\n",
" identified_estimand, method_name=\"backdoor.propensity_score_matching\"\n",
")\n",
"\n",
"print(estimate)"
]
},
{
"cell_type": "markdown",
"metadata": {},
"source": [
"## Refute Estimate"
]
},
{
"cell_type": "code",
"execution_count": null,
"metadata": {},
"outputs": [],
"source": [
"bootstrap_refutation_causal_model_api = causal_model.refute_estimate(identified_estimand, estimate, \"bootstrap_refuter\")\n",
"print(bootstrap_refutation_causal_model_api)\n",
"\n",
"data_subset_refutation_causal_model_api = causal_model.refute_estimate(\n",
" identified_estimand, estimate, \"data_subset_refuter\"\n",
")\n",
"print(data_subset_refutation_causal_model_api)"
]
},
{
"cell_type": "code",
"execution_count": null,
"metadata": {},
"outputs": [],
"source": []
}
],
"metadata": {
"kernelspec": {
"display_name": "Python 3.8.10 ('dowhy-_zBapv7Q-py3.8')",
"language": "python",
"name": "python3"
},
"language_info": {
"codemirror_mode": {
"name": "ipython",
"version": 3
},
"file_extension": ".py",
"mimetype": "text/x-python",
"name": "python",
"nbconvert_exporter": "python",
"pygments_lexer": "ipython3",
"version": "3.8.10"
},
"toc": {
"base_numbering": 1,
"nav_menu": {},
"number_sections": false,
"sideBar": true,
"skip_h1_title": true,
"title_cell": "Table of Contents",
"title_sidebar": "Contents",
"toc_cell": false,
"toc_position": {},
"toc_section_display": true,
"toc_window_display": false
},
"vscode": {
"interpreter": {
"hash": "dcb481ad5d98e2afacd650b2c07afac80a299b7b701b553e333fc82865502500"
}
}
},
"nbformat": 4,
"nbformat_minor": 4
}
| {
"cells": [
{
"cell_type": "markdown",
"metadata": {},
"source": [
"# Functional API Preview\n",
"\n",
"This notebook is part of a set of notebooks that provides a preview of the proposed functional API for dowhy. For details on the new API for DoWhy, check out https://github.com/py-why/dowhy/wiki/API-proposal-for-v1 It is a work-in-progress and is updated as we add new functionality. We welcome your feedback through Discord or on the Discussions page.\n",
"This functional API is designed with backwards compatibility. So both the old and new API will continue to co-exist and work for the immediate new releases. Gradually the old API using CausalModel will be deprecated in favor of the new API. \n",
"\n",
"The current Functional API covers:\n",
"* Identify Effect:\n",
" * `identify_effect(...)`: Run the identify effect algorithm using defaults just provide the graph, treatment and outcome.\n",
" * `auto_identify_effect(...)`: More configurable version of `identify_effect(...)`.\n",
" * `id_identify_effect(...)`: Identify Effect using the ID-Algorithm.\n",
"* Refute Estimate:\n",
" * `refute_estimate`: Function to run a set of the refuters below with the default parameters.\n",
" * `refute_bootstrap`: Refute an estimate by running it on a random sample of the data containing measurement error in the confounders.\n",
" * `refute_data_subset`: Refute an estimate by rerunning it on a random subset of the original data.\n",
" * `refute_random_common_cause`: Refute an estimate by introducing a randomly generated confounder (that may have been unobserved).\n",
" * `refute_placebo_treatment`: Refute an estimate by replacing treatment with a randomly-generated placebo variable.\n",
" * `sensitivity_simulation`: Add an unobserved confounder for refutation (Simulation of an unobserved confounder).\n",
" * `sensitivity_linear_partial_r2`: Add an unobserved confounder for refutation (Linear partial R2 : Sensitivity Analysis for linear models).\n",
" * `sensitivity_non_parametric_partial_r2`: Add an unobserved confounder for refutation (Non-Parametric partial R2 based : Sensitivity Analyis for non-parametric models).\n",
" * `sensitivity_e_value`: Computes E-value for point estimate and confidence limits. Benchmarks E-values against measured confounders using Observed Covariate E-values. Plots E-values and Observed\n",
" Covariate E-values.\n",
" * `refute_dummy_outcome`: Refute an estimate by introducing a randomly generated confounder (that may have been unobserved)."
]
},
{
"cell_type": "markdown",
"metadata": {},
"source": [
"## Import Dependencies"
]
},
{
"cell_type": "code",
"execution_count": null,
"metadata": {},
"outputs": [],
"source": [
"# Functional API imports\n",
"from dowhy.causal_identifier import (\n",
" BackdoorAdjustment,\n",
" EstimandType,\n",
" identify_effect,\n",
" identify_effect_auto,\n",
" identify_effect_id,\n",
") # import effect identifier\n",
"from dowhy.causal_refuters import (\n",
" refute_bootstrap,\n",
" refute_data_subset,\n",
" refute_random_common_cause,\n",
" refute_placebo_treatment,\n",
" sensitivity_e_value,\n",
" sensitivity_linear_partial_r2,\n",
" sensitivity_non_parametric_partial_r2,\n",
" sensitivity_simulation,\n",
" refute_dummy_outcome,\n",
" refute_estimate,\n",
") # import refuters\n",
"\n",
"from dowhy.causal_estimators.propensity_score_matching_estimator import PropensityScoreMatchingEstimator\n",
"\n",
"from dowhy.utils.api import parse_state\n",
"\n",
"from dowhy.causal_estimator import estimate_effect # Estimate effect function\n",
"\n",
"from dowhy.causal_graph import CausalGraph\n",
"\n",
"# Other imports required\n",
"from dowhy.datasets import linear_dataset\n",
"from dowhy import CausalModel # We still need this as we haven't created the functional API for effect estimation\n",
"import econml\n",
"\n",
"# Config dict to set the logging level\n",
"import logging.config\n",
"\n",
"DEFAULT_LOGGING = {\n",
" \"version\": 1,\n",
" \"disable_existing_loggers\": False,\n",
" \"loggers\": {\n",
" \"\": {\n",
" \"level\": \"WARN\",\n",
" },\n",
" },\n",
"}\n",
"\n",
"\n",
"# set random seed for deterministic dataset generation \n",
"# and avoid problems when running tests\n",
"import numpy as np\n",
"np.random.seed(1)\n",
"\n",
"logging.config.dictConfig(DEFAULT_LOGGING)\n",
"# Disabling warnings output\n",
"import warnings\n",
"from sklearn.exceptions import DataConversionWarning\n",
"\n",
"warnings.filterwarnings(action=\"ignore\", category=DataConversionWarning)"
]
},
{
"cell_type": "markdown",
"metadata": {},
"source": [
"## Create the Datasets"
]
},
{
"cell_type": "code",
"execution_count": null,
"metadata": {},
"outputs": [],
"source": [
"# Parameters for creating the Dataset\n",
"TREATMENT_IS_BINARY = True\n",
"BETA = 10\n",
"NUM_SAMPLES = 500\n",
"NUM_CONFOUNDERS = 3\n",
"NUM_INSTRUMENTS = 2\n",
"NUM_EFFECT_MODIFIERS = 2\n",
"\n",
"# Creating a Linear Dataset with the given parameters\n",
"data = linear_dataset(\n",
" beta=BETA,\n",
" num_common_causes=NUM_CONFOUNDERS,\n",
" num_instruments=NUM_INSTRUMENTS,\n",
" num_effect_modifiers=NUM_EFFECT_MODIFIERS,\n",
" num_samples=NUM_SAMPLES,\n",
" treatment_is_binary=True,\n",
")\n",
"\n",
"treatment_name = data[\"treatment_name\"]\n",
"print(treatment_name)\n",
"outcome_name = data[\"outcome_name\"]\n",
"print(outcome_name)\n",
"\n",
"graph = CausalGraph(\n",
" treatment_name=treatment_name,\n",
" outcome_name=outcome_name,\n",
" graph=data[\"gml_graph\"],\n",
" effect_modifier_names=data[\"effect_modifier_names\"],\n",
" common_cause_names=data[\"common_causes_names\"],\n",
" observed_node_names=data[\"df\"].columns.tolist(),\n",
")"
]
},
{
"cell_type": "markdown",
"metadata": {},
"source": [
"## Identify Effect - Functional API (Preview)"
]
},
{
"cell_type": "code",
"execution_count": null,
"metadata": {},
"outputs": [],
"source": [
"# Default identify_effect call example:\n",
"identified_estimand = identify_effect(graph, treatment_name, outcome_name)\n",
"\n",
"# auto_identify_effect example with extra parameters:\n",
"identified_estimand_auto = identify_effect_auto(\n",
" graph,\n",
" treatment_name,\n",
" outcome_name,\n",
" estimand_type=EstimandType.NONPARAMETRIC_ATE,\n",
" backdoor_adjustment=BackdoorAdjustment.BACKDOOR_EFFICIENT,\n",
")\n",
"\n",
"# id_identify_effect example:\n",
"identified_estimand_id = identify_effect_id(\n",
" graph, treatment_name, outcome_name\n",
") # Note that the return type for id_identify_effect is IDExpression and not IdentifiedEstimand\n",
"\n",
"print(identified_estimand)"
]
},
{
"cell_type": "markdown",
"metadata": {},
"source": [
"## Estimate Effect - Functional API (Preview)"
]
},
{
"cell_type": "code",
"execution_count": null,
"metadata": {},
"outputs": [],
"source": [
"# Basic Estimate Effect function\n",
"\n",
"\n",
"propensity_score_estimator = PropensityScoreMatchingEstimator(\n",
" data=data[\"df\"],\n",
" identified_estimand=identified_estimand,\n",
" treatment=treatment_name,\n",
" outcome=outcome_name,\n",
" control_value=0,\n",
" treatment_value=1,\n",
" test_significance=None,\n",
" evaluate_effect_strength=False,\n",
" confidence_intervals=False,\n",
" target_units=\"ate\",\n",
" effect_modifiers=graph.get_effect_modifiers(treatment_name, outcome_name),\n",
")\n",
"\n",
"estimate = estimate_effect(\n",
" treatment=treatment_name,\n",
" outcome=outcome_name,\n",
" identified_estimand=identified_estimand,\n",
" identifier_name=\"backdoor\",\n",
" method=propensity_score_estimator,\n",
")\n",
"\n",
"print(estimate)"
]
},
{
"cell_type": "markdown",
"metadata": {},
"source": [
"## Refute Estimate - Functional API (Preview)"
]
},
{
"cell_type": "code",
"execution_count": null,
"metadata": {},
"outputs": [],
"source": [
"# You can call the refute_estimate function for executing several refuters using default parameters\n",
"# Currently this function does not support sensitivity_* functions\n",
"refutation_results = refute_estimate(\n",
" data[\"df\"],\n",
" identified_estimand,\n",
" estimate,\n",
" treatment_name=treatment_name,\n",
" outcome_name=outcome_name,\n",
" refuters=[refute_bootstrap, refute_data_subset],\n",
")\n",
"\n",
"for result in refutation_results:\n",
" print(result)\n",
"\n",
"# Or you can execute refute methods directly\n",
"# You can change the refute_bootstrap - refute_data_subset for any of the other refuters and add the missing parameters\n",
"\n",
"bootstrap_refutation = refute_bootstrap(data[\"df\"], identified_estimand, estimate)\n",
"print(bootstrap_refutation)\n",
"\n",
"data_subset_refutation = refute_data_subset(data[\"df\"], identified_estimand, estimate)\n",
"print(data_subset_refutation)"
]
},
{
"cell_type": "markdown",
"metadata": {},
"source": [
"# Backwards Compatibility\n",
"\n",
"This section shows replicating the same results using only the CausalModel API"
]
},
{
"cell_type": "code",
"execution_count": null,
"metadata": {},
"outputs": [],
"source": [
"# Create Causal Model\n",
"causal_model = CausalModel(data=data[\"df\"], treatment=treatment_name, outcome=outcome_name, graph=data[\"gml_graph\"])\n"
]
},
{
"cell_type": "markdown",
"metadata": {},
"source": [
"## Identify Effect"
]
},
{
"cell_type": "code",
"execution_count": null,
"metadata": {},
"outputs": [],
"source": [
"identified_estimand_causal_model_api = (\n",
" causal_model.identify_effect()\n",
") # graph, treatment and outcome comes from the causal_model object\n",
"\n",
"print(identified_estimand_causal_model_api)"
]
},
{
"cell_type": "markdown",
"metadata": {},
"source": [
"## Estimate Effect"
]
},
{
"cell_type": "code",
"execution_count": null,
"metadata": {},
"outputs": [],
"source": [
"estimate_causal_model_api = causal_model.estimate_effect(\n",
" identified_estimand_causal_model_api, method_name=\"backdoor.propensity_score_matching\"\n",
")\n",
"\n",
"print(estimate_causal_model_api)"
]
},
{
"cell_type": "markdown",
"metadata": {},
"source": [
"## Refute Estimate"
]
},
{
"cell_type": "code",
"execution_count": null,
"metadata": {},
"outputs": [],
"source": [
"bootstrap_refutation_causal_model_api = causal_model.refute_estimate(identified_estimand_causal_model_api, estimate_causal_model_api, \"bootstrap_refuter\")\n",
"print(bootstrap_refutation_causal_model_api)\n",
"\n",
"data_subset_refutation_causal_model_api = causal_model.refute_estimate(\n",
" identified_estimand_causal_model_api, estimate_causal_model_api, \"data_subset_refuter\"\n",
")\n",
"\n",
"print(data_subset_refutation_causal_model_api)"
]
},
{
"cell_type": "code",
"execution_count": null,
"metadata": {},
"outputs": [],
"source": []
}
],
"metadata": {
"kernelspec": {
"display_name": "Python 3.8.10 ('dowhy-_zBapv7Q-py3.8')",
"language": "python",
"name": "python3"
},
"language_info": {
"codemirror_mode": {
"name": "ipython",
"version": 3
},
"file_extension": ".py",
"mimetype": "text/x-python",
"name": "python",
"nbconvert_exporter": "python",
"pygments_lexer": "ipython3",
"version": "3.8.10"
},
"toc": {
"base_numbering": 1,
"nav_menu": {},
"number_sections": false,
"sideBar": true,
"skip_h1_title": true,
"title_cell": "Table of Contents",
"title_sidebar": "Contents",
"toc_cell": false,
"toc_position": {},
"toc_section_display": true,
"toc_window_display": false
},
"vscode": {
"interpreter": {
"hash": "dcb481ad5d98e2afacd650b2c07afac80a299b7b701b553e333fc82865502500"
}
}
},
"nbformat": 4,
"nbformat_minor": 4
}
| andresmor-ms | b9bab69d737f34fc32a73feb426d0ddc2f471df2 | 68f5d2b1bc7c5357b243dcef31510cfdc65ff871 | We use notebooks as unit tests though, so we should probably disable random seeds in all of them by default. | darthtrevino | 270 |
py-why/dowhy | 727 | Re-introduce include_simulated_confounder as method | Fixes #721
Signed-off-by: Andres Morales <[email protected]> | null | 2022-10-28 22:32:39+00:00 | 2022-10-31 16:28:12+00:00 | dowhy/causal_refuters/add_unobserved_common_cause.py | import copy
import logging
import math
from typing import Dict, List, Optional, Union
import numpy as np
import pandas as pd
import scipy.stats
import statsmodels.api as sm
from sklearn.linear_model import LogisticRegression
from sklearn.preprocessing import StandardScaler
from tqdm.auto import tqdm
from dowhy.causal_estimator import CausalEstimate, CausalEstimator
from dowhy.causal_estimators.linear_regression_estimator import LinearRegressionEstimator
from dowhy.causal_estimators.regression_estimator import RegressionEstimator
from dowhy.causal_identifier.identified_estimand import IdentifiedEstimand
from dowhy.causal_refuter import CausalRefutation, CausalRefuter, choose_variables
from dowhy.causal_refuters.evalue_sensitivity_analyzer import EValueSensitivityAnalyzer
from dowhy.causal_refuters.linear_sensitivity_analyzer import LinearSensitivityAnalyzer
from dowhy.causal_refuters.non_parametric_sensitivity_analyzer import NonParametricSensitivityAnalyzer
from dowhy.causal_refuters.partial_linear_sensitivity_analyzer import PartialLinearSensitivityAnalyzer
logger = logging.getLogger(__name__)
class AddUnobservedCommonCause(CausalRefuter):
"""Add an unobserved confounder for refutation.
AddUnobservedCommonCause class supports three methods:
1) Simulation of an unobserved confounder
2) Linear partial R2 : Sensitivity Analysis for linear models.
    3) Non-Parametric partial R2 based : Sensitivity Analysis for non-parametric models.
Supports additional parameters that can be specified in the refute_estimate() method.
"""
def __init__(self, *args, **kwargs):
"""
Initialize the parameters required for the refuter.
For direct_simulation, if effect_strength_on_treatment or effect_strength_on_outcome is not
given, it is calculated automatically as a range between the
minimum and maximum effect strength of observed confounders on treatment
and outcome respectively.
:param simulation_method: The method to use for simulating effect of unobserved confounder. Possible values are ["direct-simulation", "linear-partial-R2", "non-parametric-partial-R2", "e-value"].
:param confounders_effect_on_treatment: str : The type of effect on the treatment due to the unobserved confounder. Possible values are ['binary_flip', 'linear']
:param confounders_effect_on_outcome: str : The type of effect on the outcome due to the unobserved confounder. Possible values are ['binary_flip', 'linear']
:param effect_strength_on_treatment: float, numpy.ndarray: [Used when simulation_method="direct-simulation"] Strength of the confounder's effect on treatment. When confounders_effect_on_treatment is linear, it is the regression coefficient. When the confounders_effect_on_treatment is binary flip, it is the probability with which effect of unobserved confounder can invert the value of the treatment.
:param effect_strength_on_outcome: float, numpy.ndarray: Strength of the confounder's effect on outcome. Its interpretation depends on confounders_effect_on_outcome and the simulation_method. When simulation_method is direct-simulation, for a linear effect it behaves like the regression coefficient and for a binary flip, it is the probability with which it can invert the value of the outcome.
:param partial_r2_confounder_treatment: float, numpy.ndarray: [Used when simulation_method is linear-partial-R2 or non-parametric-partial-R2] Partial R2 of the unobserved confounder wrt the treatment conditioned on the observed confounders. Only in the case of general non-parametric-partial-R2, it is the fraction of variance in the reisz representer that is explained by the unobserved confounder; specifically (1-r), where r is the ratio of variance of reisz representer, alpha^2, based on observed confounders and that based on all confounders.
:param partial_r2_confounder_outcome: float, numpy.ndarray: [Used when simulation_method is linear-partial-R2 or non-parametric-partial-R2] Partial R2 of the unobserved confounder wrt the outcome conditioned on the treatment and observed confounders.
:param frac_strength_treatment: float: This parameter decides the effect strength of the simulated confounder as a fraction of the effect strength of observed confounders on treatment. Defaults to 1.
:param frac_strength_outcome: float: This parameter decides the effect strength of the simulated confounder as a fraction of the effect strength of observed confounders on outcome. Defaults to 1.
        :param plotmethod: string: Type of plot to be shown. If None, no plot is generated. This parameter is used only when more than one treatment confounder effect value or outcome confounder effect value is provided. Default is "colormesh". Supported values are "contour", "colormesh" when more than one value is provided for both confounder effect value parameters; "line" when provided for only one of them.
        :param percent_change_estimate: It is the percentage of reduction of the treatment estimate that could alter the results (default = 1).
            If percent_change_estimate = 1, the robustness value describes the strength of association of confounders with treatment and outcome in order to reduce the estimate by 100%, i.e., bring it down to 0. (relevant only for Linear Sensitivity Analysis, ignore for rest)
:param confounder_increases_estimate: True implies that confounder increases the absolute value of estimate and vice versa. (Default = False). (relevant only for Linear Sensitivity Analysis, ignore for rest)
:param benchmark_common_causes: names of variables for bounding strength of confounders. (relevant only for partial-r2 based simulation methods)
        :param significance_level: confidence interval for statistical inference (default = 0.05). (relevant only for partial-r2 based simulation methods)
:param null_hypothesis_effect: assumed effect under the null hypothesis. (relevant only for linear-partial-R2, ignore for rest)
:param plot_estimate: Generate contour plot for estimate while performing sensitivity analysis. (default = True).
(relevant only for partial-r2 based simulation methods)
:param num_splits: number of splits for cross validation. (default = 5). (relevant only for non-parametric-partial-R2 simulation method)
:param shuffle_data : shuffle data or not before splitting into folds (default = False). (relevant only for non-parametric-partial-R2 simulation method)
:param shuffle_random_seed: seed for randomly shuffling data. (relevant only for non-parametric-partial-R2 simulation method)
:param alpha_s_estimator_param_list: list of dictionaries with parameters for finding alpha_s. (relevant only for non-parametric-partial-R2 simulation method)
:param g_s_estimator_list: list of estimator objects for finding g_s. These objects should have fit() and predict() functions implemented. (relevant only for non-parametric-partial-R2 simulation method)
:param g_s_estimator_param_list: list of dictionaries with parameters for tuning respective estimators in "g_s_estimator_list". The order of the dictionaries in the list should be consistent with the estimator objects order in "g_s_estimator_list". (relevant only for non-parametric-partial-R2 simulation method)
"""
super().__init__(*args, **kwargs)
self.simulation_method = kwargs["simulation_method"] if "simulation_method" in kwargs else "direct-simulation"
self.effect_on_t = (
kwargs["confounders_effect_on_treatment"] if "confounders_effect_on_treatment" in kwargs else "binary_flip"
)
self.effect_on_y = (
kwargs["confounders_effect_on_outcome"] if "confounders_effect_on_outcome" in kwargs else "linear"
)
if self.simulation_method == "direct-simulation":
self.kappa_t = kwargs["effect_strength_on_treatment"] if "effect_strength_on_treatment" in kwargs else None
self.kappa_y = kwargs["effect_strength_on_outcome"] if "effect_strength_on_outcome" in kwargs else None
elif self.simulation_method in ["linear-partial-R2", "non-parametric-partial-R2"]:
self.kappa_t = (
kwargs["partial_r2_confounder_treatment"] if "partial_r2_confounder_treatment" in kwargs else None
)
self.kappa_y = (
kwargs["partial_r2_confounder_outcome"] if "partial_r2_confounder_outcome" in kwargs else None
)
elif self.simulation_method == "e-value":
pass
else:
raise ValueError(
"simulation method is not supported. Try direct-simulation, linear-partial-R2, non-parametric-partial-R2, or e-value"
)
self.frac_strength_treatment = (
kwargs["effect_fraction_on_treatment"] if "effect_fraction_on_treatment" in kwargs else 1
)
self.frac_strength_outcome = (
kwargs["effect_fraction_on_outcome"] if "effect_fraction_on_outcome" in kwargs else 1
)
self.plotmethod = kwargs["plotmethod"] if "plotmethod" in kwargs else "colormesh"
self.percent_change_estimate = kwargs["percent_change_estimate"] if "percent_change_estimate" in kwargs else 1.0
self.significance_level = kwargs["significance_level"] if "significance_level" in kwargs else 0.05
self.confounder_increases_estimate = (
kwargs["confounder_increases_estimate"] if "confounder_increases_estimate" in kwargs else False
)
self.benchmark_common_causes = (
kwargs["benchmark_common_causes"] if "benchmark_common_causes" in kwargs else None
)
self.null_hypothesis_effect = kwargs["null_hypothesis_effect"] if "null_hypothesis_effect" in kwargs else 0
self.plot_estimate = kwargs["plot_estimate"] if "plot_estimate" in kwargs else True
self.num_splits = kwargs["num_splits"] if "num_splits" in kwargs else 5
self.shuffle_data = kwargs["shuffle_data"] if "shuffle_data" in kwargs else False
self.shuffle_random_seed = kwargs["shuffle_random_seed"] if "shuffle_random_seed" in kwargs else None
self.alpha_s_estimator_param_list = (
kwargs["alpha_s_estimator_param_list"] if "alpha_s_estimator_param_list" in kwargs else None
)
self.alpha_s_estimator_list = kwargs["alpha_s_estimator_list"] if "alpha_s_estimator_list" in kwargs else None
self.g_s_estimator_list = kwargs["g_s_estimator_list"] if "g_s_estimator_list" in kwargs else None
self.g_s_estimator_param_list = (
kwargs["g_s_estimator_param_list"] if "g_s_estimator_param_list" in kwargs else None
)
self.plugin_reisz = kwargs["plugin_reisz"] if "plugin_reisz" in kwargs else False
self.logger = logging.getLogger(__name__)
def refute_estimate(self, show_progress_bar=False):
if self.simulation_method == "linear-partial-R2":
return sensitivity_linear_partial_r2(
self._data,
self._estimate,
self._treatment_name,
self.frac_strength_treatment,
self.frac_strength_outcome,
self.percent_change_estimate,
self.benchmark_common_causes,
self.significance_level,
self.null_hypothesis_effect,
self.plot_estimate,
)
elif self.simulation_method == "non-parametric-partial-R2":
return sensitivity_non_parametric_partial_r2(
self._estimate,
self.kappa_t,
self.kappa_y,
self.frac_strength_treatment,
self.frac_strength_outcome,
self.benchmark_common_causes,
self.plot_estimate,
self.alpha_s_estimator_list,
self.alpha_s_estimator_param_list,
self.g_s_estimator_list,
self.g_s_estimator_param_list,
self.plugin_reisz,
)
elif self.simulation_method == "e-value":
return sensitivity_e_value(
self._data,
self._target_estimand,
self._estimate,
self._treatment_name,
self._outcome_name,
self.plot_estimate,
)
elif self.simulation_method == "direct-simulation":
refute = sensitivity_simulation(
self._data,
self._target_estimand,
self._estimate,
self._treatment_name,
self._outcome_name,
self.kappa_t,
self.kappa_y,
self.effect_on_t,
self.effect_on_y,
self.frac_strength_treatment,
self.frac_strength_outcome,
self.plotmethod,
show_progress_bar,
)
refute.add_refuter(self)
return refute
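

# ---------------------------------------------------------------------------
# Illustrative usage sketch (not part of the library API): how the constructor
# kwargs documented above are typically passed through CausalModel.refute_estimate.
# The dataset shape and the chosen effect strengths are assumptions made purely
# for illustration.
def _example_add_unobserved_common_cause_usage():
    # local imports to avoid a circular dependency at module import time
    from dowhy import CausalModel
    from dowhy.datasets import linear_dataset

    data = linear_dataset(beta=10, num_common_causes=3, num_samples=500, treatment_is_binary=True)
    model = CausalModel(
        data=data["df"], treatment=data["treatment_name"], outcome=data["outcome_name"], graph=data["gml_graph"]
    )
    estimand = model.identify_effect(proceed_when_unidentifiable=True)
    estimate = model.estimate_effect(estimand, method_name="backdoor.propensity_score_matching")
    # "direct-simulation" exercises the sensitivity_simulation branch of refute_estimate above
    refutation = model.refute_estimate(
        estimand,
        estimate,
        method_name="add_unobserved_common_cause",
        simulation_method="direct-simulation",
        confounders_effect_on_treatment="binary_flip",
        confounders_effect_on_outcome="linear",
        effect_strength_on_treatment=0.05,
        effect_strength_on_outcome=0.02,
    )
    print(refutation)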
def _infer_default_kappa_t(
data: pd.DataFrame,
target_estimand: IdentifiedEstimand,
treatment_name: List[str],
effect_on_t: str,
frac_strength_treatment: float,
len_kappa_t: int = 10,
):
"""Infer default effect strength of simulated confounder on treatment."""
observed_common_causes_names = target_estimand.get_backdoor_variables()
if len(observed_common_causes_names) > 0:
observed_common_causes = data[observed_common_causes_names]
observed_common_causes = pd.get_dummies(observed_common_causes, drop_first=True)
else:
        raise ValueError(
            "There needs to be at least one common cause to"
            + " automatically compute the default value of kappa_t."
            + " Provide a value for kappa_t"
        )
t = data[treatment_name]
# Standardizing the data
observed_common_causes = StandardScaler().fit_transform(observed_common_causes)
if effect_on_t == "binary_flip":
# Fit a model containing all confounders and compare predictions
# using all features compared to all features except a given
# confounder.
tmodel = LogisticRegression().fit(observed_common_causes, t)
tpred = tmodel.predict(observed_common_causes).astype(int)
flips = []
for i in range(observed_common_causes.shape[1]):
oldval = np.copy(observed_common_causes[:, i])
observed_common_causes[:, i] = 0
tcap = tmodel.predict(observed_common_causes).astype(int)
observed_common_causes[:, i] = oldval
flips.append(np.sum(abs(tcap - tpred)) / tpred.shape[0])
min_coeff, max_coeff = min(flips), max(flips)
elif effect_on_t == "linear":
# Estimating the regression coefficient from standardized features to t
corrcoef_var_t = np.corrcoef(observed_common_causes, t, rowvar=False)[-1, :-1]
std_dev_t = np.std(t)[0]
max_coeff = max(corrcoef_var_t) * std_dev_t
min_coeff = min(corrcoef_var_t) * std_dev_t
else:
raise NotImplementedError("'" + effect_on_t + "' method not supported for confounders' effect on treatment")
min_coeff, max_coeff = _compute_min_max_coeff(min_coeff, max_coeff, frac_strength_treatment)
# By default, return a plot with 10 points
# consider 10 values of the effect of the unobserved confounder
step = (max_coeff - min_coeff) / len_kappa_t
logger.info("(Min, Max) kappa_t for observed common causes, ({0}, {1})".format(min_coeff, max_coeff))
if np.equal(max_coeff, min_coeff):
return max_coeff
else:
return np.arange(min_coeff, max_coeff, step)
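

# Small self-contained sketch (synthetic data assumed; not library code) of the
# default-strength heuristic in _infer_default_kappa_t above: each observed
# confounder's effect on a binary treatment is proxied by the fraction of
# model predictions that flip when that confounder is zeroed out.
def _example_default_kappa_t_sketch():
    rng = np.random.RandomState(0)
    W = rng.normal(size=(1000, 3))
    t = (W @ np.array([1.5, 0.5, 0.1]) + rng.normal(size=1000) > 0).astype(int)
    W_std = StandardScaler().fit_transform(W)
    tmodel = LogisticRegression().fit(W_std, t)
    tpred = tmodel.predict(W_std)
    for i in range(W_std.shape[1]):
        W_mod = W_std.copy()
        W_mod[:, i] = 0  # knock out one confounder
        flip_fraction = np.mean(tmodel.predict(W_mod) != tpred)
        print("confounder", i, "flip fraction:", flip_fraction)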
def _compute_min_max_coeff(min_coeff: float, max_coeff: float, effect_strength_fraction: np.ndarray):
max_coeff = effect_strength_fraction * max_coeff
min_coeff = effect_strength_fraction * min_coeff
return min_coeff, max_coeff
def _infer_default_kappa_y(
data: pd.DataFrame,
target_estimand: IdentifiedEstimand,
outcome_name: List[str],
effect_on_y: str,
frac_strength_outcome: float,
len_kappa_y: int = 10,
):
"""Infer default effect strength of simulated confounder on treatment."""
observed_common_causes_names = target_estimand.get_backdoor_variables()
if len(observed_common_causes_names) > 0:
observed_common_causes = data[observed_common_causes_names]
observed_common_causes = pd.get_dummies(observed_common_causes, drop_first=True)
else:
        raise ValueError(
            "There needs to be at least one common cause to"
            + " automatically compute the default value of kappa_y."
            + " Provide a value for kappa_y"
        )
y = data[outcome_name]
# Standardizing the data
observed_common_causes = StandardScaler().fit_transform(observed_common_causes)
if effect_on_y == "binary_flip":
# Fit a model containing all confounders and compare predictions
# using all features compared to all features except a given
# confounder.
ymodel = LogisticRegression().fit(observed_common_causes, y)
ypred = ymodel.predict(observed_common_causes).astype(int)
flips = []
for i in range(observed_common_causes.shape[1]):
oldval = np.copy(observed_common_causes[:, i])
observed_common_causes[:, i] = 0
ycap = ymodel.predict(observed_common_causes).astype(int)
observed_common_causes[:, i] = oldval
flips.append(np.sum(abs(ycap - ypred)) / ypred.shape[0])
min_coeff, max_coeff = min(flips), max(flips)
elif effect_on_y == "linear":
corrcoef_var_y = np.corrcoef(observed_common_causes, y, rowvar=False)[-1, :-1]
std_dev_y = np.std(y)[0]
max_coeff = max(corrcoef_var_y) * std_dev_y
min_coeff = min(corrcoef_var_y) * std_dev_y
else:
raise NotImplementedError("'" + effect_on_y + "' method not supported for confounders' effect on outcome")
min_coeff, max_coeff = _compute_min_max_coeff(min_coeff, max_coeff, frac_strength_outcome)
# By default, return a plot with 10 points
# consider 10 values of the effect of the unobserved confounder
step = (max_coeff - min_coeff) / len_kappa_y
logger.info("(Min, Max) kappa_y for observed common causes, ({0}, {1})".format(min_coeff, max_coeff))
if np.equal(max_coeff, min_coeff):
return max_coeff
else:
return np.arange(min_coeff, max_coeff, step)
def _include_confounders_effect(
data: pd.DataFrame,
new_data: pd.DataFrame,
effect_on_t: str,
treatment_name: str,
kappa_t: float,
effect_on_y: str,
outcome_name: str,
kappa_y: float,
):
"""
This function deals with the change in the value of the data due to the effect of the unobserved confounder.
In the case of a binary flip, we flip only if the random number is greater than the threshold set.
In the case of a linear effect, we use the variable as the linear regression constant.
:param new_data: pandas.DataFrame: The data to be changed due to the effects of the unobserved confounder.
:param kappa_t: numpy.float64: The value of the threshold for binary_flip or the value of the regression coefficient for linear effect.
:param kappa_y: numpy.float64: The value of the threshold for binary_flip or the value of the regression coefficient for linear effect.
:return: pandas.DataFrame: The DataFrame that includes the effects of the unobserved confounder.
"""
num_rows = data.shape[0]
stdnorm = scipy.stats.norm()
w_random = stdnorm.rvs(num_rows)
if effect_on_t == "binary_flip":
alpha = 2 * kappa_t - 1 if kappa_t >= 0.5 else 1 - 2 * kappa_t
interval = stdnorm.interval(alpha)
rel_interval = interval[0] if kappa_t >= 0.5 else interval[1]
new_data.loc[rel_interval <= w_random, treatment_name] = (
1 - new_data.loc[rel_interval <= w_random, treatment_name]
)
for tname in treatment_name:
if pd.api.types.is_bool_dtype(data[tname]):
new_data = new_data.astype({tname: "bool"}, copy=False)
elif effect_on_t == "linear":
confounder_t_effect = kappa_t * w_random
# By default, we add the effect of simulated confounder for treatment.
# But subtract it from outcome to create a negative correlation
# assuming that the original confounder's effect was positive on both.
# This is to remove the effect of the original confounder.
new_data[treatment_name] = new_data[treatment_name].values + np.ndarray(
shape=(num_rows, 1), buffer=confounder_t_effect
)
else:
raise NotImplementedError("'" + effect_on_t + "' method not supported for confounders' effect on treatment")
if effect_on_y == "binary_flip":
alpha = 2 * kappa_y - 1 if kappa_y >= 0.5 else 1 - 2 * kappa_y
interval = stdnorm.interval(alpha)
rel_interval = interval[0] if kappa_y >= 0.5 else interval[1]
new_data.loc[rel_interval <= w_random, outcome_name] = 1 - new_data.loc[rel_interval <= w_random, outcome_name]
for yname in outcome_name:
if pd.api.types.is_bool_dtype(data[yname]):
new_data = new_data.astype({yname: "bool"}, copy=False)
elif effect_on_y == "linear":
confounder_y_effect = (-1) * kappa_y * w_random
# By default, we add the effect of simulated confounder for treatment.
# But subtract it from outcome to create a negative correlation
# assuming that the original confounder's effect was positive on both.
# This is to remove the effect of the original confounder.
new_data[outcome_name] = new_data[outcome_name].values + np.ndarray(
shape=(num_rows, 1), buffer=confounder_y_effect
)
else:
raise NotImplementedError("'" + effect_on_y + "' method not supported for confounders' effect on outcome")
return new_data
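

# Quick numeric check (illustrative sketch, not library code) of the "binary_flip"
# thresholding used in _include_confounders_effect above: with kappa = 0.3,
# roughly 30% of rows should be flipped. The sample size is an arbitrary assumption.
def _example_binary_flip_probability(kappa=0.3, num_rows=100000):
    stdnorm = scipy.stats.norm()
    w_random = stdnorm.rvs(num_rows)
    alpha = 2 * kappa - 1 if kappa >= 0.5 else 1 - 2 * kappa
    interval = stdnorm.interval(alpha)
    rel_interval = interval[0] if kappa >= 0.5 else interval[1]
    flipped_fraction = np.mean(rel_interval <= w_random)
    print("requested flip probability:", kappa, "empirical:", flipped_fraction)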
def include_simulated_confounder(
data: pd.DataFrame,
    treatment_name: List[str],
    outcome_name: List[str],
kappa_t: float,
kappa_y: float,
convergence_threshold: float = 0.1,
c_star_max: int = 1000,
):
"""
This function simulates an unobserved confounder based on the data using the following steps:
1. It calculates the "residuals" from the treatment and outcome model
i.) The outcome model has outcome as the dependent variable and all the observed variables including treatment as independent variables
ii.) The treatment model has treatment as the dependent variable and all the observed variables as independent variables.
2. U is an intermediate random variable drawn from the normal distribution with the weighted average of residuals as mean and a unit variance
U ~ N(c1*d_y + c2*d_t, 1)
where
*d_y and d_t are residuals from the treatment and outcome model
*c1 and c2 are coefficients to the residuals
    3. The final U, which is the simulated unobserved confounder, is obtained by debiasing the intermediate variable U by residualising it with X
Choosing the coefficients c1 and c2:
The coefficients are chosen based on these basic assumptions:
1. There is a hyperbolic relationship satisfying c1*c2 = c_star
2. c_star is chosen from a range of possible values based on the correlation of the obtained simulated variable with outcome and treatment.
3. The product of correlations with treatment and outcome should be at a minimum distance to the maximum correlations with treatment and outcome in any of the observed confounders
4. The ratio of the weights should be such that they maintain the ratio of the maximum possible observed coefficients within some confidence interval
    :param c_star_max: The maximum possible value for the hyperbolic curve on which the coefficients to the residuals lie. It defaults to 1000 in the code if not specified by the user.
    :type c_star_max: int
    :param convergence_threshold: The threshold to check the plateauing of the correlation while selecting a c_star. It defaults to 0.1 in the code if not specified by the user.
    :type convergence_threshold: float
    :returns: The simulated values of the unobserved confounder based on the data
    :rtype: pandas.core.series.Series
"""
# Obtaining the list of observed variables
required_variables = True
observed_variables = choose_variables(required_variables)
observed_variables_with_treatment_and_outcome = observed_variables + treatment_name + outcome_name
# Taking a subset of the dataframe that has only observed variables
data = data[observed_variables_with_treatment_and_outcome]
# Residuals from the outcome model obtained by fitting a linear model
y = data[outcome_name[0]]
observed_variables_with_treatment = observed_variables + treatment_name
X = data[observed_variables_with_treatment]
model = sm.OLS(y, X.astype("float"))
results = model.fit()
residuals_y = y - results.fittedvalues
d_y = list(pd.Series(residuals_y))
# Residuals from the treatment model obtained by fitting a linear model
t = data[treatment_name[0]].astype("int64")
X = data[observed_variables]
model = sm.OLS(t, X)
results = model.fit()
residuals_t = t - results.fittedvalues
d_t = list(pd.Series(residuals_t))
    # Initialising product_cor_metric_observed with a very low value, since we are searching for the maximum
product_cor_metric_observed = -10000000000
for i in observed_variables:
current_obs_confounder = data[i]
outcome_values = data[outcome_name[0]]
correlation_y = current_obs_confounder.corr(outcome_values)
treatment_values = t
correlation_t = current_obs_confounder.corr(treatment_values)
product_cor_metric_current = correlation_y * correlation_t
if product_cor_metric_current >= product_cor_metric_observed:
product_cor_metric_observed = product_cor_metric_current
correlation_t_observed = correlation_t
correlation_y_observed = correlation_y
    # The user has an option to give the effect_strength_on_y and effect_strength_on_t, which can then be used instead of the maximum correlation with treatment and outcome in the observed variables, as it specifies the desired effect.
if kappa_t is not None:
correlation_t_observed = kappa_t
if kappa_y is not None:
correlation_y_observed = kappa_y
    # Choosing a c_star based on the data.
    # The correlations stop increasing upon increasing c_star after a certain value, that is, they plateau, and we choose c_star to be the value at which they plateau.
correlation_y_list = []
correlation_t_list = []
product_cor_metric_simulated_list = []
x_list = []
step = int(c_star_max / 10)
for i in range(0, int(c_star_max), step):
c1 = math.sqrt(i)
c2 = c1
final_U = _generate_confounder_from_residuals(c1, c2, d_y, d_t, X)
current_simulated_confounder = final_U
outcome_values = data[outcome_name[0]]
correlation_y = current_simulated_confounder.corr(outcome_values)
correlation_y_list.append(correlation_y)
treatment_values = t
correlation_t = current_simulated_confounder.corr(treatment_values)
correlation_t_list.append(correlation_t)
product_cor_metric_simulated = correlation_y * correlation_t
product_cor_metric_simulated_list.append(product_cor_metric_simulated)
x_list.append(i)
index = 1
while index < len(correlation_y_list):
if (correlation_y_list[index] - correlation_y_list[index - 1]) <= convergence_threshold:
c_star = x_list[index]
break
index = index + 1
    # Choosing c1 and c2 based on the hyperbolic relationship once c_star is chosen, by going over various combinations of c1 and c2 values and choosing the combination
    # which maintains the minimum distance between the product of correlations of the simulated variable and the product of maximum correlations of one of the observed variables,
    # and which additionally checks that the ratio of the weights maintains the ratio of the maximum possible observed coefficients within some confidence interval
# c1_final and c2_final are initialised to the values on the hyperbolic curve such that c1_final = c2_final and c1_final*c2_final = c_star
c1_final = math.sqrt(c_star)
c2_final = math.sqrt(c_star)
# initialising min_distance_between_product_cor_metrics to be a value greater than 1
min_distance_between_product_cor_metrics = 1.5
i = 0.05
threshold = c_star / 0.05
while i <= threshold:
c2 = i
c1 = c_star / c2
final_U = _generate_confounder_from_residuals(c1, c2, d_y, d_t, X)
current_simulated_confounder = final_U
outcome_values = data[outcome_name[0]]
correlation_y = current_simulated_confounder.corr(outcome_values)
treatment_values = t
correlation_t = current_simulated_confounder.corr(treatment_values)
product_cor_metric_simulated = correlation_y * correlation_t
if min_distance_between_product_cor_metrics >= abs(product_cor_metric_simulated - product_cor_metric_observed):
min_distance_between_product_cor_metrics = abs(product_cor_metric_simulated - product_cor_metric_observed)
additional_condition = correlation_y_observed / correlation_t_observed
if ((c1 / c2) <= (additional_condition + 0.3 * additional_condition)) and (
(c1 / c2) >= (additional_condition - 0.3 * additional_condition)
): # choose minimum positive value
c1_final = c1
c2_final = c2
i = i * 1.5
"""#closed form solution
print("c_star_max before closed form", c_star_max)
if max_correlation_with_t == -1000:
c2 = 0
c1 = c_star_max
else:
additional_condition = abs(max_correlation_with_y/max_correlation_with_t)
print("additional_condition", additional_condition)
c2 = math.sqrt(c_star_max/additional_condition)
c1 = c_star_max/c2"""
final_U = _generate_confounder_from_residuals(c1_final, c2_final, d_y, d_t, X)
return final_U
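

# Short numeric sketch (illustrative, not library code) of the hyperbolic
# constraint used in include_simulated_confounder above: for a fixed c_star,
# every candidate pair satisfies c1 * c2 == c_star, and the search walks c2
# geometrically from 0.05 upwards.
def _example_hyperbolic_coefficient_grid(c_star=100.0):
    pairs = []
    c2 = 0.05
    while c2 <= c_star / 0.05:
        pairs.append((c_star / c2, c2))  # c1 = c_star / c2
        c2 = c2 * 1.5
    return pairs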
def _generate_confounder_from_residuals(c1, c2, d_y, d_t, X):
"""
This function takes the residuals from the treatment and outcome model and their coefficients and simulates the intermediate random variable U by taking
the row wise normal distribution corresponding to each residual value and then debiasing the intermediate variable to get the final variable.
:param c1: coefficient to the residual from the outcome model
:type float
:param c2: coefficient to the residual from the treatment model
:type float
:param d_y: residuals from the outcome model
:type list
:param d_t: residuals from the treatment model
:type list
:returns: The simulated values of the unobserved confounder based on the data
:type pandas.core.series.Series
"""
U = []
for j in range(len(d_t)):
simulated_variable_mean = c1 * d_y[j] + c2 * d_t[j]
simulated_variable_stddev = 1
U.append(np.random.normal(simulated_variable_mean, simulated_variable_stddev, 1))
U = np.array(U)
model = sm.OLS(U, X)
results = model.fit()
U = U.reshape(
-1,
)
    final_U = U - results.fittedvalues.values
    # Return the debiased variable (the residual after regressing U on X), as described in the docstring.
    final_U = pd.Series(final_U)
return final_U
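
# Illustrative sketch (not part of the module; all names and values below are
# hypothetical). It shows how the residual-based simulation above behaves on
# toy residuals: larger c1 (c2) increases the simulated confounder's
# correlation with the outcome (treatment) residuals.
#
#     rng = np.random.default_rng(0)
#     d_y = list(rng.normal(0, 1, 100))             # toy outcome residuals
#     d_t = list(rng.normal(0, 1, 100))             # toy treatment residuals
#     X = pd.DataFrame({"w0": rng.normal(0, 1, 100)})
#     u = _generate_confounder_from_residuals(c1=2.0, c2=1.0, d_y=d_y, d_t=d_t, X=X)
#     # u is a pandas Series of length 100, residualized against X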
def sensitivity_linear_partial_r2(
data: pd.DataFrame,
estimate: CausalEstimate,
treatment_name: str,
frac_strength_treatment: float = 1.0,
frac_strength_outcome: float = 1.0,
percent_change_estimate: float = 1.0,
benchmark_common_causes: Optional[List[str]] = None,
significance_level: Optional[float] = None,
null_hypothesis_effect: Optional[float] = None,
plot_estimate: bool = True,
) -> LinearSensitivityAnalyzer:
"""Add an unobserved confounder for refutation using Linear partial R2 methond (Sensitivity Analysis for linear models).
:param data: pd.DataFrame: Data to run the refutation
:param estimate: CausalEstimate: Estimate to run the refutation
:param treatment_name: str: Name of the treatment
:param frac_strength_treatment: float: This parameter decides the effect strength of the simulated confounder as a fraction of the effect strength of observed confounders on treatment. Defaults to 1.
:param frac_strength_outcome: float: This parameter decides the effect strength of the simulated confounder as a fraction of the effect strength of observed confounders on outcome. Defaults to 1.
    :param percent_change_estimate: Percentage of reduction of the treatment estimate that could alter the results (default = 1).
        If percent_change_estimate = 1, the robustness value describes the strength of association of confounders with treatment and outcome needed to reduce the estimate by 100%, i.e., bring it down to 0. (relevant only for Linear Sensitivity Analysis, ignore for rest)
:param benchmark_common_causes: names of variables for bounding strength of confounders. (relevant only for partial-r2 based simulation methods)
    :param significance_level: significance level for statistical inference (default = 0.05). (relevant only for partial-r2 based simulation methods)
:param null_hypothesis_effect: assumed effect under the null hypothesis. (relevant only for linear-partial-R2, ignore for rest)
:param plot_estimate: Generate contour plot for estimate while performing sensitivity analysis. (default = True).
(relevant only for partial-r2 based simulation methods)
"""
if not (isinstance(estimate.estimator, LinearRegressionEstimator)):
raise NotImplementedError("Currently only LinearRegressionEstimator is supported for Sensitivity Analysis")
if len(estimate.estimator._effect_modifier_names) > 0:
raise NotImplementedError("The current implementation does not support effect modifiers")
if frac_strength_outcome == 1:
frac_strength_outcome = frac_strength_treatment
analyzer = LinearSensitivityAnalyzer(
estimator=estimate.estimator,
data=data,
treatment_name=treatment_name,
percent_change_estimate=percent_change_estimate,
significance_level=significance_level,
benchmark_common_causes=benchmark_common_causes,
null_hypothesis_effect=null_hypothesis_effect,
frac_strength_treatment=frac_strength_treatment,
frac_strength_outcome=frac_strength_outcome,
common_causes_order=estimate.estimator._observed_common_causes.columns,
)
analyzer.check_sensitivity(plot=plot_estimate)
return analyzer
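
# Minimal usage sketch (hedged): assumes `df` and `estimate` come from a dowhy
# workflow where the estimate was produced by a LinearRegressionEstimator
# (e.g. method_name="backdoor.linear_regression"); "w1" is a placeholder
# benchmark covariate.
#
#     analyzer = sensitivity_linear_partial_r2(
#         data=df,
#         estimate=estimate,
#         treatment_name="treatment",
#         benchmark_common_causes=["w1"],
#         significance_level=0.05,
#     )
#     # analyzer carries robustness values and partial-R2 bounds once
#     # check_sensitivity() has run inside the call above.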
def sensitivity_non_parametric_partial_r2(
estimate: CausalEstimate,
kappa_t: Optional[Union[float, np.ndarray]] = None,
kappa_y: Optional[Union[float, np.ndarray]] = None,
frac_strength_treatment: float = 1.0,
frac_strength_outcome: float = 1.0,
benchmark_common_causes: Optional[List[str]] = None,
plot_estimate: bool = True,
alpha_s_estimator_list: Optional[List] = None,
alpha_s_estimator_param_list: Optional[List[Dict]] = None,
g_s_estimator_list: Optional[List] = None,
g_s_estimator_param_list: Optional[List[Dict]] = None,
plugin_reisz: bool = False,
) -> Union[PartialLinearSensitivityAnalyzer, NonParametricSensitivityAnalyzer]:
"""Add an unobserved confounder for refutation using Non-parametric partial R2 methond (Sensitivity Analysis for non-parametric models).
:param estimate: CausalEstimate: Estimate to run the refutation
:param kappa_t: float, numpy.ndarray: Partial R2 of the unobserved confounder wrt the treatment conditioned on the observed confounders. Only in the case of general non-parametric-partial-R2, it is the fraction of variance in the reisz representer that is explained by the unobserved confounder; specifically (1-r), where r is the ratio of variance of reisz representer, alpha^2, based on observed confounders and that based on all confounders.
:param kappa_y: float, numpy.ndarray: Partial R2 of the unobserved confounder wrt the outcome conditioned on the treatment and observed confounders.
:param frac_strength_treatment: float: This parameter decides the effect strength of the simulated confounder as a fraction of the effect strength of observed confounders on treatment. Defaults to 1.
:param frac_strength_outcome: float: This parameter decides the effect strength of the simulated confounder as a fraction of the effect strength of observed confounders on outcome. Defaults to 1.
:param benchmark_common_causes: names of variables for bounding strength of confounders. (relevant only for partial-r2 based simulation methods)
:param plot_estimate: Generate contour plot for estimate while performing sensitivity analysis. (default = True).
(relevant only for partial-r2 based simulation methods)
:param alpha_s_estimator_list: list of estimator objects for estimating alpha_s. These objects should have fit() and predict() methods (relevant only for non-parametric-partial-R2 method)
:param alpha_s_estimator_param_list: list of dictionaries with parameters for finding alpha_s. (relevant only for non-parametric-partial-R2 simulation method)
:param g_s_estimator_list: list of estimator objects for finding g_s. These objects should have fit() and predict() functions implemented. (relevant only for non-parametric-partial-R2 simulation method)
:param g_s_estimator_param_list: list of dictionaries with parameters for tuning respective estimators in "g_s_estimator_list". The order of the dictionaries in the list should be consistent with the estimator objects order in "g_s_estimator_list". (relevant only for non-parametric-partial-R2 simulation method)
    :param plugin_reisz: bool: Whether to use the plugin estimator or the nonparametric estimator for the reisz representer function (alpha_s).
"""
import dowhy.causal_estimators.econml
# If the estimator used is LinearDML, partially linear sensitivity analysis will be automatically chosen
if isinstance(estimate.estimator, dowhy.causal_estimators.econml.Econml):
if estimate.estimator._econml_methodname == "econml.dml.LinearDML":
analyzer = PartialLinearSensitivityAnalyzer(
estimator=estimate._estimator_object,
observed_common_causes=estimate.estimator._observed_common_causes,
treatment=estimate.estimator._treatment,
outcome=estimate.estimator._outcome,
alpha_s_estimator_param_list=alpha_s_estimator_param_list,
g_s_estimator_list=g_s_estimator_list,
g_s_estimator_param_list=g_s_estimator_param_list,
effect_strength_treatment=kappa_t,
effect_strength_outcome=kappa_y,
benchmark_common_causes=benchmark_common_causes,
frac_strength_treatment=frac_strength_treatment,
frac_strength_outcome=frac_strength_outcome,
)
analyzer.check_sensitivity(plot=plot_estimate)
return analyzer
analyzer = NonParametricSensitivityAnalyzer(
estimator=estimate.estimator,
observed_common_causes=estimate.estimator._observed_common_causes,
treatment=estimate.estimator._treatment,
outcome=estimate.estimator._outcome,
alpha_s_estimator_list=alpha_s_estimator_list,
alpha_s_estimator_param_list=alpha_s_estimator_param_list,
g_s_estimator_list=g_s_estimator_list,
g_s_estimator_param_list=g_s_estimator_param_list,
effect_strength_treatment=kappa_t,
effect_strength_outcome=kappa_y,
benchmark_common_causes=benchmark_common_causes,
frac_strength_treatment=frac_strength_treatment,
frac_strength_outcome=frac_strength_outcome,
theta_s=estimate.value,
plugin_reisz=plugin_reisz,
)
analyzer.check_sensitivity(plot=plot_estimate)
return analyzer
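
# Minimal usage sketch (hedged; names are placeholders). For a non-parametric
# estimate, learner objects with fit()/predict() can be supplied for the
# outcome function g_s, and a plugin estimator can be used for the reisz
# representer alpha_s:
#
#     from sklearn.ensemble import GradientBoostingRegressor
#
#     analyzer = sensitivity_non_parametric_partial_r2(
#         estimate=estimate,
#         benchmark_common_causes=["w1"],
#         g_s_estimator_list=[GradientBoostingRegressor()],
#         g_s_estimator_param_list=[{"learning_rate": 0.01, "n_estimators": 200}],
#         plugin_reisz=True,
#     )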
def sensitivity_e_value(
data: pd.DataFrame,
target_estimand: IdentifiedEstimand,
estimate: CausalEstimate,
treatment_name: List[str],
outcome_name: List[str],
plot_estimate: bool = True,
) -> EValueSensitivityAnalyzer:
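    """Check the sensitivity of a causal estimate to unobserved confounders using the E-value method.

    The E-value is the minimum strength of association (on the risk-ratio scale) that an unobserved confounder would need to have with both the treatment and the outcome to fully explain away the estimated effect.

    :param data: pd.DataFrame: Data to run the refutation
    :param target_estimand: IdentifiedEstimand: Identified estimand to run the refutation
    :param estimate: CausalEstimate: Estimate to run the refutation
    :param treatment_name: Name of the treatment
    :param outcome_name: Name of the outcome
    :param plot_estimate: Generate a plot while performing sensitivity analysis. (default = True)
    """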
if not isinstance(estimate.estimator, RegressionEstimator):
raise NotImplementedError("E-Value sensitivity analysis is currently only implemented RegressionEstimator.")
if len(estimate.estimator._effect_modifier_names) > 0:
raise NotImplementedError("The current implementation does not support effect modifiers")
analyzer = EValueSensitivityAnalyzer(
estimate=estimate,
estimand=target_estimand,
data=data,
treatment_name=treatment_name[0],
outcome_name=outcome_name[0],
)
analyzer.check_sensitivity(plot=plot_estimate)
return analyzer
def sensitivity_simulation(
data: pd.DataFrame,
target_estimand: IdentifiedEstimand,
estimate: CausalEstimate,
treatment_name: str,
outcome_name: str,
kappa_t: Optional[Union[float, np.ndarray]] = None,
kappa_y: Optional[Union[float, np.ndarray]] = None,
confounders_effect_on_treatment: str = "binary_flip",
confounders_effect_on_outcome: str = "linear",
frac_strength_treatment: float = 1.0,
frac_strength_outcome: float = 1.0,
plotmethod: Optional[str] = None,
show_progress_bar=False,
**_,
) -> CausalRefutation:
"""
    This function attempts to add an unobserved common cause to the outcome and the treatment. At present, the behavior is implemented for one-dimensional continuous
    and binary variables. The function can either take single-valued inputs or a range of inputs; it inspects the data type of the input and decides on the course of
    action.
:param data: pd.DataFrame: Data to run the refutation
:param target_estimand: IdentifiedEstimand: Identified estimand to run the refutation
:param estimate: CausalEstimate: Estimate to run the refutation
:param treatment_name: str: Name of the treatment
:param outcome_name: str: Name of the outcome
:param kappa_t: float, numpy.ndarray: Strength of the confounder's effect on treatment. When confounders_effect_on_treatment is linear, it is the regression coefficient. When the confounders_effect_on_treatment is binary flip, it is the probability with which effect of unobserved confounder can invert the value of the treatment.
:param kappa_y: float, numpy.ndarray: Strength of the confounder's effect on outcome. Its interpretation depends on confounders_effect_on_outcome and the simulation_method. When simulation_method is direct-simulation, for a linear effect it behaves like the regression coefficient and for a binary flip, it is the probability with which it can invert the value of the outcome.
:param confounders_effect_on_treatment: str : The type of effect on the treatment due to the unobserved confounder. Possible values are ['binary_flip', 'linear']
:param confounders_effect_on_outcome: str : The type of effect on the outcome due to the unobserved confounder. Possible values are ['binary_flip', 'linear']
:param frac_strength_treatment: float: This parameter decides the effect strength of the simulated confounder as a fraction of the effect strength of observed confounders on treatment. Defaults to 1.
:param frac_strength_outcome: float: This parameter decides the effect strength of the simulated confounder as a fraction of the effect strength of observed confounders on outcome. Defaults to 1.
    :param plotmethod: string: Type of plot to be shown. If None, no plot is generated. This parameter is used only when more than one treatment confounder effect value or outcome confounder effect value is provided. Default is "colormesh". Supported values are "contour", "colormesh" when more than one value is provided for both confounder effect value parameters; "line" when provided for only one of them.
    :return: CausalRefutation: An object that contains the estimated effect, the new effect, and the name of the refutation used.
"""
if kappa_t is None:
kappa_t = _infer_default_kappa_t(
data, target_estimand, treatment_name, confounders_effect_on_treatment, frac_strength_treatment
)
if kappa_y is None:
kappa_y = _infer_default_kappa_y(
data, target_estimand, outcome_name, confounders_effect_on_outcome, frac_strength_outcome
)
if not isinstance(kappa_t, (list, np.ndarray)) and not isinstance(
kappa_y, (list, np.ndarray)
): # Deal with single value inputs
new_data = copy.deepcopy(data)
new_data = _include_confounders_effect(
data,
new_data,
confounders_effect_on_treatment,
treatment_name,
kappa_t,
confounders_effect_on_outcome,
outcome_name,
kappa_y,
)
new_estimator = CausalEstimator.get_estimator_object(new_data, target_estimand, estimate)
new_effect = new_estimator.estimate_effect()
refute = CausalRefutation(
estimate.value, new_effect.value, refutation_type="Refute: Add an Unobserved Common Cause"
)
refute.new_effect_array = np.array(new_effect.value)
refute.new_effect = new_effect.value
return refute
else: # Deal with multiple value inputs
if isinstance(kappa_t, (list, np.ndarray)) and isinstance(
kappa_y, (list, np.ndarray)
): # Deal with range inputs
# Get a 2D matrix of values
# x,y = np.meshgrid(self.kappa_t, self.kappa_y) # x,y are both MxN
results_matrix = np.random.rand(len(kappa_t), len(kappa_y)) # Matrix to hold all the results of NxM
orig_data = copy.deepcopy(data)
for i in tqdm(
range(len(kappa_t)),
colour=CausalRefuter.PROGRESS_BAR_COLOR,
disable=not show_progress_bar,
desc="Refuting Estimates: ",
):
for j in range(len(kappa_y)):
new_data = _include_confounders_effect(
data,
orig_data,
confounders_effect_on_treatment,
treatment_name,
kappa_t[i],
confounders_effect_on_outcome,
outcome_name,
kappa_y[j],
)
new_estimator = CausalEstimator.get_estimator_object(new_data, target_estimand, estimate)
new_effect = new_estimator.estimate_effect()
refute = CausalRefutation(
estimate.value,
new_effect.value,
refutation_type="Refute: Add an Unobserved Common Cause",
)
results_matrix[i][j] = refute.new_effect # Populate the results
refute.new_effect_array = results_matrix
refute.new_effect = (np.min(results_matrix), np.max(results_matrix))
# Store the values into the refute object
if plotmethod is None:
return refute
import matplotlib
import matplotlib.pyplot as plt
fig = plt.figure(figsize=(6, 5))
left, bottom, width, height = 0.1, 0.1, 0.8, 0.8
ax = fig.add_axes([left, bottom, width, height])
oe = estimate.value
contour_levels = [oe / 4.0, oe / 2.0, (3.0 / 4) * oe, oe]
contour_levels.extend([0, np.min(results_matrix), np.max(results_matrix)])
if plotmethod == "contour":
cp = plt.contourf(kappa_y, kappa_t, results_matrix, levels=sorted(contour_levels))
# Adding a label on the contour line for the original estimate
fmt = {}
trueeffect_index = np.where(cp.levels == oe)[0][0]
fmt[cp.levels[trueeffect_index]] = "Estimated Effect"
# Label every other level using strings
plt.clabel(cp, [cp.levels[trueeffect_index]], inline=True, fmt=fmt)
plt.colorbar(cp)
elif plotmethod == "colormesh":
cp = plt.pcolormesh(kappa_y, kappa_t, results_matrix, shading="nearest")
plt.colorbar(cp, ticks=contour_levels)
ax.yaxis.set_ticks(kappa_t)
ax.xaxis.set_ticks(kappa_y)
plt.xticks(rotation=45)
ax.set_title("Effect of Unobserved Common Cause")
ax.set_ylabel("Value of Linear Constant on Treatment")
ax.set_xlabel("Value of Linear Constant on Outcome")
plt.show()
return refute
elif isinstance(kappa_t, (list, np.ndarray)):
outcomes = np.random.rand(len(kappa_t))
orig_data = copy.deepcopy(data)
for i in tqdm(
range(0, len(kappa_t)),
colour=CausalRefuter.PROGRESS_BAR_COLOR,
disable=not show_progress_bar,
desc="Refuting Estimates: ",
):
new_data = _include_confounders_effect(
data,
orig_data,
confounders_effect_on_treatment,
treatment_name,
kappa_t[i],
confounders_effect_on_outcome,
outcome_name,
kappa_y,
)
new_estimator = CausalEstimator.get_estimator_object(new_data, target_estimand, estimate)
new_effect = new_estimator.estimate_effect()
refute = CausalRefutation(
estimate.value, new_effect.value, refutation_type="Refute: Add an Unobserved Common Cause"
)
logger.debug(refute)
outcomes[i] = refute.new_effect # Populate the results
refute.new_effect_array = outcomes
refute.new_effect = (np.min(outcomes), np.max(outcomes))
if plotmethod is None:
return refute
import matplotlib
import matplotlib.pyplot as plt
fig = plt.figure(figsize=(6, 5))
left, bottom, width, height = 0.1, 0.1, 0.8, 0.8
ax = fig.add_axes([left, bottom, width, height])
plt.plot(kappa_t, outcomes)
plt.axhline(estimate.value, linestyle="--", color="gray")
ax.set_title("Effect of Unobserved Common Cause")
ax.set_xlabel("Value of Linear Constant on Treatment")
ax.set_ylabel("Estimated Effect after adding the common cause")
plt.show()
return refute
elif isinstance(kappa_y, (list, np.ndarray)):
outcomes = np.random.rand(len(kappa_y))
orig_data = copy.deepcopy(data)
for i in tqdm(
range(0, len(kappa_y)),
colour=CausalRefuter.PROGRESS_BAR_COLOR,
disable=not show_progress_bar,
desc="Refuting Estimates: ",
):
new_data = _include_confounders_effect(
data,
orig_data,
confounders_effect_on_treatment,
treatment_name,
kappa_t,
confounders_effect_on_outcome,
outcome_name,
kappa_y[i],
)
new_estimator = CausalEstimator.get_estimator_object(new_data, target_estimand, estimate)
new_effect = new_estimator.estimate_effect()
refute = CausalRefutation(
estimate.value, new_effect.value, refutation_type="Refute: Add an Unobserved Common Cause"
)
logger.debug(refute)
outcomes[i] = refute.new_effect # Populate the results
refute.new_effect_array = outcomes
refute.new_effect = (np.min(outcomes), np.max(outcomes))
if plotmethod is None:
return refute
import matplotlib
import matplotlib.pyplot as plt
fig = plt.figure(figsize=(6, 5))
left, bottom, width, height = 0.1, 0.1, 0.8, 0.8
ax = fig.add_axes([left, bottom, width, height])
plt.plot(kappa_y, outcomes)
plt.axhline(estimate.value, linestyle="--", color="gray")
ax.set_title("Effect of Unobserved Common Cause")
ax.set_xlabel("Value of Linear Constant on Outcome")
ax.set_ylabel("Estimated Effect after adding the common cause")
plt.show()
return refute
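
# Minimal usage sketch (hedged; all names are placeholders). Passing arrays for
# kappa_t/kappa_y sweeps a grid of confounder strengths; refute.new_effect then
# holds the (min, max) of the re-estimated effects over the grid:
#
#     refute = sensitivity_simulation(
#         data=df,
#         target_estimand=identified_estimand,
#         estimate=estimate,
#         treatment_name=["treatment"],
#         outcome_name=["outcome"],
#         kappa_t=np.arange(0.0, 0.05, 0.01),
#         kappa_y=np.arange(0.0, 0.05, 0.01),
#         confounders_effect_on_treatment="binary_flip",
#         confounders_effect_on_outcome="linear",
#         plotmethod="colormesh",
#     )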
|
import copy
import logging
import math
from typing import Dict, List, Optional, Union
import numpy as np
import pandas as pd
import scipy.stats
import statsmodels.api as sm
from sklearn.linear_model import LogisticRegression
from sklearn.preprocessing import StandardScaler
from tqdm.auto import tqdm
from dowhy.causal_estimator import CausalEstimate, CausalEstimator
from dowhy.causal_estimators.linear_regression_estimator import LinearRegressionEstimator
from dowhy.causal_estimators.regression_estimator import RegressionEstimator
from dowhy.causal_identifier.identified_estimand import IdentifiedEstimand
from dowhy.causal_refuter import CausalRefutation, CausalRefuter, choose_variables
from dowhy.causal_refuters.evalue_sensitivity_analyzer import EValueSensitivityAnalyzer
from dowhy.causal_refuters.linear_sensitivity_analyzer import LinearSensitivityAnalyzer
from dowhy.causal_refuters.non_parametric_sensitivity_analyzer import NonParametricSensitivityAnalyzer
from dowhy.causal_refuters.partial_linear_sensitivity_analyzer import PartialLinearSensitivityAnalyzer
logger = logging.getLogger(__name__)
DEFAULT_CONVERGENCE_THRESHOLD = 0.1
DEFAULT_C_STAR_MAX = 1000
class AddUnobservedCommonCause(CausalRefuter):
"""Add an unobserved confounder for refutation.
AddUnobservedCommonCause class supports three methods:
1) Simulation of an unobserved confounder
2) Linear partial R2 : Sensitivity Analysis for linear models.
    3) Non-Parametric partial R2 based : Sensitivity Analysis for non-parametric models.
Supports additional parameters that can be specified in the refute_estimate() method.
"""
def __init__(self, *args, **kwargs):
"""
Initialize the parameters required for the refuter.
        For direct-simulation, if effect_strength_on_treatment or effect_strength_on_outcome is not
given, it is calculated automatically as a range between the
minimum and maximum effect strength of observed confounders on treatment
and outcome respectively.
:param simulation_method: The method to use for simulating effect of unobserved confounder. Possible values are ["direct-simulation", "linear-partial-R2", "non-parametric-partial-R2", "e-value"].
:param confounders_effect_on_treatment: str : The type of effect on the treatment due to the unobserved confounder. Possible values are ['binary_flip', 'linear']
:param confounders_effect_on_outcome: str : The type of effect on the outcome due to the unobserved confounder. Possible values are ['binary_flip', 'linear']
:param effect_strength_on_treatment: float, numpy.ndarray: [Used when simulation_method="direct-simulation"] Strength of the confounder's effect on treatment. When confounders_effect_on_treatment is linear, it is the regression coefficient. When the confounders_effect_on_treatment is binary flip, it is the probability with which effect of unobserved confounder can invert the value of the treatment.
:param effect_strength_on_outcome: float, numpy.ndarray: Strength of the confounder's effect on outcome. Its interpretation depends on confounders_effect_on_outcome and the simulation_method. When simulation_method is direct-simulation, for a linear effect it behaves like the regression coefficient and for a binary flip, it is the probability with which it can invert the value of the outcome.
:param partial_r2_confounder_treatment: float, numpy.ndarray: [Used when simulation_method is linear-partial-R2 or non-parametric-partial-R2] Partial R2 of the unobserved confounder wrt the treatment conditioned on the observed confounders. Only in the case of general non-parametric-partial-R2, it is the fraction of variance in the reisz representer that is explained by the unobserved confounder; specifically (1-r), where r is the ratio of variance of reisz representer, alpha^2, based on observed confounders and that based on all confounders.
:param partial_r2_confounder_outcome: float, numpy.ndarray: [Used when simulation_method is linear-partial-R2 or non-parametric-partial-R2] Partial R2 of the unobserved confounder wrt the outcome conditioned on the treatment and observed confounders.
:param frac_strength_treatment: float: This parameter decides the effect strength of the simulated confounder as a fraction of the effect strength of observed confounders on treatment. Defaults to 1.
:param frac_strength_outcome: float: This parameter decides the effect strength of the simulated confounder as a fraction of the effect strength of observed confounders on outcome. Defaults to 1.
:param plotmethod: string: Type of plot to be shown. If None, no plot is generated. This parameter is used only only when more than one treatment confounder effect values or outcome confounder effect values are provided. Default is "colormesh". Supported values are "contour", "colormesh" when more than one value is provided for both confounder effect value parameters; "line" when provided for only one of them.
        :param percent_change_estimate: Percentage of reduction of the treatment estimate that could alter the results (default = 1).
            If percent_change_estimate = 1, the robustness value describes the strength of association of confounders with treatment and outcome needed to reduce the estimate by 100%, i.e., bring it down to 0. (relevant only for Linear Sensitivity Analysis, ignore for rest)
:param confounder_increases_estimate: True implies that confounder increases the absolute value of estimate and vice versa. (Default = False). (relevant only for Linear Sensitivity Analysis, ignore for rest)
:param benchmark_common_causes: names of variables for bounding strength of confounders. (relevant only for partial-r2 based simulation methods)
        :param significance_level: significance level for statistical inference (default = 0.05). (relevant only for partial-r2 based simulation methods)
:param null_hypothesis_effect: assumed effect under the null hypothesis. (relevant only for linear-partial-R2, ignore for rest)
:param plot_estimate: Generate contour plot for estimate while performing sensitivity analysis. (default = True).
(relevant only for partial-r2 based simulation methods)
:param num_splits: number of splits for cross validation. (default = 5). (relevant only for non-parametric-partial-R2 simulation method)
:param shuffle_data : shuffle data or not before splitting into folds (default = False). (relevant only for non-parametric-partial-R2 simulation method)
:param shuffle_random_seed: seed for randomly shuffling data. (relevant only for non-parametric-partial-R2 simulation method)
:param alpha_s_estimator_param_list: list of dictionaries with parameters for finding alpha_s. (relevant only for non-parametric-partial-R2 simulation method)
:param g_s_estimator_list: list of estimator objects for finding g_s. These objects should have fit() and predict() functions implemented. (relevant only for non-parametric-partial-R2 simulation method)
:param g_s_estimator_param_list: list of dictionaries with parameters for tuning respective estimators in "g_s_estimator_list". The order of the dictionaries in the list should be consistent with the estimator objects order in "g_s_estimator_list". (relevant only for non-parametric-partial-R2 simulation method)
"""
super().__init__(*args, **kwargs)
self.simulation_method = kwargs["simulation_method"] if "simulation_method" in kwargs else "direct-simulation"
self.effect_on_t = (
kwargs["confounders_effect_on_treatment"] if "confounders_effect_on_treatment" in kwargs else "binary_flip"
)
self.effect_on_y = (
kwargs["confounders_effect_on_outcome"] if "confounders_effect_on_outcome" in kwargs else "linear"
)
if self.simulation_method == "direct-simulation":
self.kappa_t = kwargs["effect_strength_on_treatment"] if "effect_strength_on_treatment" in kwargs else None
self.kappa_y = kwargs["effect_strength_on_outcome"] if "effect_strength_on_outcome" in kwargs else None
elif self.simulation_method in ["linear-partial-R2", "non-parametric-partial-R2"]:
self.kappa_t = (
kwargs["partial_r2_confounder_treatment"] if "partial_r2_confounder_treatment" in kwargs else None
)
self.kappa_y = (
kwargs["partial_r2_confounder_outcome"] if "partial_r2_confounder_outcome" in kwargs else None
)
elif self.simulation_method == "e-value":
pass
else:
raise ValueError(
"simulation method is not supported. Try direct-simulation, linear-partial-R2, non-parametric-partial-R2, or e-value"
)
self.frac_strength_treatment = (
kwargs["effect_fraction_on_treatment"] if "effect_fraction_on_treatment" in kwargs else 1
)
self.frac_strength_outcome = (
kwargs["effect_fraction_on_outcome"] if "effect_fraction_on_outcome" in kwargs else 1
)
self.plotmethod = kwargs["plotmethod"] if "plotmethod" in kwargs else "colormesh"
self.percent_change_estimate = kwargs["percent_change_estimate"] if "percent_change_estimate" in kwargs else 1.0
self.significance_level = kwargs["significance_level"] if "significance_level" in kwargs else 0.05
self.confounder_increases_estimate = (
kwargs["confounder_increases_estimate"] if "confounder_increases_estimate" in kwargs else False
)
self.benchmark_common_causes = (
kwargs["benchmark_common_causes"] if "benchmark_common_causes" in kwargs else None
)
self.null_hypothesis_effect = kwargs["null_hypothesis_effect"] if "null_hypothesis_effect" in kwargs else 0
self.plot_estimate = kwargs["plot_estimate"] if "plot_estimate" in kwargs else True
self.num_splits = kwargs["num_splits"] if "num_splits" in kwargs else 5
self.shuffle_data = kwargs["shuffle_data"] if "shuffle_data" in kwargs else False
self.shuffle_random_seed = kwargs["shuffle_random_seed"] if "shuffle_random_seed" in kwargs else None
self.alpha_s_estimator_param_list = (
kwargs["alpha_s_estimator_param_list"] if "alpha_s_estimator_param_list" in kwargs else None
)
self.alpha_s_estimator_list = kwargs["alpha_s_estimator_list"] if "alpha_s_estimator_list" in kwargs else None
self.g_s_estimator_list = kwargs["g_s_estimator_list"] if "g_s_estimator_list" in kwargs else None
self.g_s_estimator_param_list = (
kwargs["g_s_estimator_param_list"] if "g_s_estimator_param_list" in kwargs else None
)
self.plugin_reisz = kwargs["plugin_reisz"] if "plugin_reisz" in kwargs else False
self.logger = logging.getLogger(__name__)
def refute_estimate(self, show_progress_bar=False):
if self.simulation_method == "linear-partial-R2":
return sensitivity_linear_partial_r2(
self._data,
self._estimate,
self._treatment_name,
self.frac_strength_treatment,
self.frac_strength_outcome,
self.percent_change_estimate,
self.benchmark_common_causes,
self.significance_level,
self.null_hypothesis_effect,
self.plot_estimate,
)
elif self.simulation_method == "non-parametric-partial-R2":
return sensitivity_non_parametric_partial_r2(
self._estimate,
self.kappa_t,
self.kappa_y,
self.frac_strength_treatment,
self.frac_strength_outcome,
self.benchmark_common_causes,
self.plot_estimate,
self.alpha_s_estimator_list,
self.alpha_s_estimator_param_list,
self.g_s_estimator_list,
self.g_s_estimator_param_list,
self.plugin_reisz,
)
elif self.simulation_method == "e-value":
return sensitivity_e_value(
self._data,
self._target_estimand,
self._estimate,
self._treatment_name,
self._outcome_name,
self.plot_estimate,
)
elif self.simulation_method == "direct-simulation":
refute = sensitivity_simulation(
self._data,
self._target_estimand,
self._estimate,
self._treatment_name,
self._outcome_name,
self.kappa_t,
self.kappa_y,
self.effect_on_t,
self.effect_on_y,
self.frac_strength_treatment,
self.frac_strength_outcome,
self.plotmethod,
show_progress_bar,
)
refute.add_refuter(self)
return refute
def include_simulated_confounder(
self, convergence_threshold=DEFAULT_CONVERGENCE_THRESHOLD, c_star_max=DEFAULT_C_STAR_MAX
):
return include_simulated_confounder(
self._data,
self._treatment_name,
self._outcome_name,
self.kappa_t,
self.kappa_y,
self._variables_of_interest,
convergence_threshold,
c_star_max,
)
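
# Minimal usage sketch (hedged): this refuter is typically invoked through a
# dowhy CausalModel rather than constructed directly; `model`,
# `identified_estimand` and `estimate` are assumed to come from the usual
# identify/estimate workflow.
#
#     refute = model.refute_estimate(
#         identified_estimand,
#         estimate,
#         method_name="add_unobserved_common_cause",
#         simulation_method="direct-simulation",
#         confounders_effect_on_treatment="binary_flip",
#         confounders_effect_on_outcome="linear",
#         effect_strength_on_treatment=0.01,
#         effect_strength_on_outcome=0.02,
#     )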
def _infer_default_kappa_t(
data: pd.DataFrame,
target_estimand: IdentifiedEstimand,
treatment_name: List[str],
effect_on_t: str,
frac_strength_treatment: float,
len_kappa_t: int = 10,
):
"""Infer default effect strength of simulated confounder on treatment."""
observed_common_causes_names = target_estimand.get_backdoor_variables()
if len(observed_common_causes_names) > 0:
observed_common_causes = data[observed_common_causes_names]
observed_common_causes = pd.get_dummies(observed_common_causes, drop_first=True)
else:
raise ValueError(
"There needs to be at least one common cause to"
+ "automatically compute the default value of kappa_t."
+ " Provide a value for kappa_t"
)
t = data[treatment_name]
# Standardizing the data
observed_common_causes = StandardScaler().fit_transform(observed_common_causes)
if effect_on_t == "binary_flip":
# Fit a model containing all confounders and compare predictions
# using all features compared to all features except a given
# confounder.
tmodel = LogisticRegression().fit(observed_common_causes, t)
tpred = tmodel.predict(observed_common_causes).astype(int)
flips = []
for i in range(observed_common_causes.shape[1]):
oldval = np.copy(observed_common_causes[:, i])
observed_common_causes[:, i] = 0
tcap = tmodel.predict(observed_common_causes).astype(int)
observed_common_causes[:, i] = oldval
flips.append(np.sum(abs(tcap - tpred)) / tpred.shape[0])
min_coeff, max_coeff = min(flips), max(flips)
elif effect_on_t == "linear":
# Estimating the regression coefficient from standardized features to t
corrcoef_var_t = np.corrcoef(observed_common_causes, t, rowvar=False)[-1, :-1]
std_dev_t = np.std(t)[0]
max_coeff = max(corrcoef_var_t) * std_dev_t
min_coeff = min(corrcoef_var_t) * std_dev_t
else:
raise NotImplementedError("'" + effect_on_t + "' method not supported for confounders' effect on treatment")
min_coeff, max_coeff = _compute_min_max_coeff(min_coeff, max_coeff, frac_strength_treatment)
# By default, return a plot with 10 points
# consider 10 values of the effect of the unobserved confounder
step = (max_coeff - min_coeff) / len_kappa_t
logger.info("(Min, Max) kappa_t for observed common causes, ({0}, {1})".format(min_coeff, max_coeff))
if np.equal(max_coeff, min_coeff):
return max_coeff
else:
return np.arange(min_coeff, max_coeff, step)
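
# For intuition (hypothetical numbers): if the observed common causes yield
# min_coeff = 0.02 and max_coeff = 0.12 after scaling by frac_strength_treatment,
# the default grid returned above is np.arange(0.02, 0.12, 0.01), i.e. ten
# candidate kappa_t values to sweep over.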
def _compute_min_max_coeff(min_coeff: float, max_coeff: float, effect_strength_fraction: np.ndarray):
max_coeff = effect_strength_fraction * max_coeff
min_coeff = effect_strength_fraction * min_coeff
return min_coeff, max_coeff
def _infer_default_kappa_y(
data: pd.DataFrame,
target_estimand: IdentifiedEstimand,
outcome_name: List[str],
effect_on_y: str,
frac_strength_outcome: float,
len_kappa_y: int = 10,
):
"""Infer default effect strength of simulated confounder on treatment."""
observed_common_causes_names = target_estimand.get_backdoor_variables()
if len(observed_common_causes_names) > 0:
observed_common_causes = data[observed_common_causes_names]
observed_common_causes = pd.get_dummies(observed_common_causes, drop_first=True)
else:
raise ValueError(
"There needs to be at least one common cause to"
+ "automatically compute the default value of kappa_y."
+ " Provide a value for kappa_y"
)
y = data[outcome_name]
# Standardizing the data
observed_common_causes = StandardScaler().fit_transform(observed_common_causes)
if effect_on_y == "binary_flip":
# Fit a model containing all confounders and compare predictions
# using all features compared to all features except a given
# confounder.
ymodel = LogisticRegression().fit(observed_common_causes, y)
ypred = ymodel.predict(observed_common_causes).astype(int)
flips = []
for i in range(observed_common_causes.shape[1]):
oldval = np.copy(observed_common_causes[:, i])
observed_common_causes[:, i] = 0
ycap = ymodel.predict(observed_common_causes).astype(int)
observed_common_causes[:, i] = oldval
flips.append(np.sum(abs(ycap - ypred)) / ypred.shape[0])
min_coeff, max_coeff = min(flips), max(flips)
elif effect_on_y == "linear":
corrcoef_var_y = np.corrcoef(observed_common_causes, y, rowvar=False)[-1, :-1]
std_dev_y = np.std(y)[0]
max_coeff = max(corrcoef_var_y) * std_dev_y
min_coeff = min(corrcoef_var_y) * std_dev_y
else:
raise NotImplementedError("'" + effect_on_y + "' method not supported for confounders' effect on outcome")
min_coeff, max_coeff = _compute_min_max_coeff(min_coeff, max_coeff, frac_strength_outcome)
# By default, return a plot with 10 points
# consider 10 values of the effect of the unobserved confounder
step = (max_coeff - min_coeff) / len_kappa_y
logger.info("(Min, Max) kappa_y for observed common causes, ({0}, {1})".format(min_coeff, max_coeff))
if np.equal(max_coeff, min_coeff):
return max_coeff
else:
return np.arange(min_coeff, max_coeff, step)
def _include_confounders_effect(
data: pd.DataFrame,
new_data: pd.DataFrame,
effect_on_t: str,
treatment_name: str,
kappa_t: float,
effect_on_y: str,
outcome_name: str,
kappa_y: float,
):
"""
This function deals with the change in the value of the data due to the effect of the unobserved confounder.
In the case of a binary flip, we flip only if the random number is greater than the threshold set.
In the case of a linear effect, we use the variable as the linear regression constant.
:param new_data: pandas.DataFrame: The data to be changed due to the effects of the unobserved confounder.
:param kappa_t: numpy.float64: The value of the threshold for binary_flip or the value of the regression coefficient for linear effect.
:param kappa_y: numpy.float64: The value of the threshold for binary_flip or the value of the regression coefficient for linear effect.
:return: pandas.DataFrame: The DataFrame that includes the effects of the unobserved confounder.
"""
num_rows = data.shape[0]
stdnorm = scipy.stats.norm()
w_random = stdnorm.rvs(num_rows)
if effect_on_t == "binary_flip":
alpha = 2 * kappa_t - 1 if kappa_t >= 0.5 else 1 - 2 * kappa_t
interval = stdnorm.interval(alpha)
rel_interval = interval[0] if kappa_t >= 0.5 else interval[1]
new_data.loc[rel_interval <= w_random, treatment_name] = (
1 - new_data.loc[rel_interval <= w_random, treatment_name]
)
for tname in treatment_name:
if pd.api.types.is_bool_dtype(data[tname]):
new_data = new_data.astype({tname: "bool"}, copy=False)
elif effect_on_t == "linear":
confounder_t_effect = kappa_t * w_random
# By default, we add the effect of simulated confounder for treatment.
# But subtract it from outcome to create a negative correlation
# assuming that the original confounder's effect was positive on both.
# This is to remove the effect of the original confounder.
new_data[treatment_name] = new_data[treatment_name].values + np.ndarray(
shape=(num_rows, 1), buffer=confounder_t_effect
)
else:
raise NotImplementedError("'" + effect_on_t + "' method not supported for confounders' effect on treatment")
if effect_on_y == "binary_flip":
alpha = 2 * kappa_y - 1 if kappa_y >= 0.5 else 1 - 2 * kappa_y
interval = stdnorm.interval(alpha)
rel_interval = interval[0] if kappa_y >= 0.5 else interval[1]
new_data.loc[rel_interval <= w_random, outcome_name] = 1 - new_data.loc[rel_interval <= w_random, outcome_name]
for yname in outcome_name:
if pd.api.types.is_bool_dtype(data[yname]):
new_data = new_data.astype({yname: "bool"}, copy=False)
elif effect_on_y == "linear":
confounder_y_effect = (-1) * kappa_y * w_random
# By default, we add the effect of simulated confounder for treatment.
# But subtract it from outcome to create a negative correlation
# assuming that the original confounder's effect was positive on both.
# This is to remove the effect of the original confounder.
new_data[outcome_name] = new_data[outcome_name].values + np.ndarray(
shape=(num_rows, 1), buffer=confounder_y_effect
)
else:
raise NotImplementedError("'" + effect_on_y + "' method not supported for confounders' effect on outcome")
return new_data
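
# Worked example for the binary_flip branch above (hypothetical numbers): with
# kappa_t = 0.2 we get alpha = 1 - 2 * 0.2 = 0.6, and stdnorm.interval(0.6) is
# roughly (-0.84, 0.84). Since kappa_t < 0.5, rel_interval is the upper end, so
# a row's treatment is flipped whenever its standard-normal draw w_random
# exceeds ~0.84, which happens with probability ~0.2 -- consistent with
# interpreting kappa_t as a flip probability.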
def include_simulated_confounder(
data: pd.DataFrame,
treatment_name: str,
outcome_name: str,
kappa_t: float,
kappa_y: float,
variables_of_interest: List,
convergence_threshold: float = DEFAULT_CONVERGENCE_THRESHOLD,
c_star_max: int = DEFAULT_C_STAR_MAX,
):
"""
This function simulates an unobserved confounder based on the data using the following steps:
1. It calculates the "residuals" from the treatment and outcome model
i.) The outcome model has outcome as the dependent variable and all the observed variables including treatment as independent variables
ii.) The treatment model has treatment as the dependent variable and all the observed variables as independent variables.
2. U is an intermediate random variable drawn from the normal distribution with the weighted average of residuals as mean and a unit variance
U ~ N(c1*d_y + c2*d_t, 1)
where
*d_y and d_t are residuals from the treatment and outcome model
*c1 and c2 are coefficients to the residuals
3. The final U, which is the simulated unobserved confounder is obtained by debiasing the intermediate variable U by residualising it with X
Choosing the coefficients c1 and c2:
The coefficients are chosen based on these basic assumptions:
1. There is a hyperbolic relationship satisfying c1*c2 = c_star
2. c_star is chosen from a range of possible values based on the correlation of the obtained simulated variable with outcome and treatment.
3. The product of correlations with treatment and outcome should be at a minimum distance to the maximum correlations with treatment and outcome in any of the observed confounders
4. The ratio of the weights should be such that they maintain the ratio of the maximum possible observed coefficients within some confidence interval
:param c_star_max: The maximum possible value for the hyperbolic curve on which the coefficients to the residuals lie. It defaults to 1000 in the code if not specified by the user.
:type int
:param convergence_threshold: The threshold to check the plateauing of the correlation while selecting a c_star. It defaults to 0.1 in the code if not specified by the user
:type float
:returns: The simulated values of the unobserved confounder based on the data
:type pandas.core.series.Series
"""
# Obtaining the list of observed variables
required_variables = True
observed_variables = choose_variables(required_variables, variables_of_interest)
observed_variables_with_treatment_and_outcome = observed_variables + treatment_name + outcome_name
# Taking a subset of the dataframe that has only observed variables
data = data[observed_variables_with_treatment_and_outcome]
# Residuals from the outcome model obtained by fitting a linear model
y = data[outcome_name[0]]
observed_variables_with_treatment = observed_variables + treatment_name
X = data[observed_variables_with_treatment]
model = sm.OLS(y, X.astype("float"))
results = model.fit()
residuals_y = y - results.fittedvalues
d_y = list(pd.Series(residuals_y))
# Residuals from the treatment model obtained by fitting a linear model
t = data[treatment_name[0]].astype("int64")
X = data[observed_variables]
model = sm.OLS(t, X)
results = model.fit()
residuals_t = t - results.fittedvalues
d_t = list(pd.Series(residuals_t))
# Initialising product_cor_metric_observed with a really low value as finding maximum
product_cor_metric_observed = -10000000000
for i in observed_variables:
current_obs_confounder = data[i]
outcome_values = data[outcome_name[0]]
correlation_y = current_obs_confounder.corr(outcome_values)
treatment_values = t
correlation_t = current_obs_confounder.corr(treatment_values)
product_cor_metric_current = correlation_y * correlation_t
if product_cor_metric_current >= product_cor_metric_observed:
product_cor_metric_observed = product_cor_metric_current
correlation_t_observed = correlation_t
correlation_y_observed = correlation_y
    # The user has an option to give the effect_strength_on_y and effect_strength_on_t, which can then be used instead of the maximum correlation with treatment and outcome in the observed variables, as it specifies the desired effect.
if kappa_t is not None:
correlation_t_observed = kappa_t
if kappa_y is not None:
correlation_y_observed = kappa_y
    # Choosing a c_star based on the data.
    # The correlations stop increasing upon increasing c_star after a certain value, i.e., they plateau, and we choose c_star to be the value at which the plateau begins.
correlation_y_list = []
correlation_t_list = []
product_cor_metric_simulated_list = []
x_list = []
step = int(c_star_max / 10)
for i in range(0, int(c_star_max), step):
c1 = math.sqrt(i)
c2 = c1
final_U = _generate_confounder_from_residuals(c1, c2, d_y, d_t, X)
current_simulated_confounder = final_U
outcome_values = data[outcome_name[0]]
correlation_y = current_simulated_confounder.corr(outcome_values)
correlation_y_list.append(correlation_y)
treatment_values = t
correlation_t = current_simulated_confounder.corr(treatment_values)
correlation_t_list.append(correlation_t)
product_cor_metric_simulated = correlation_y * correlation_t
product_cor_metric_simulated_list.append(product_cor_metric_simulated)
x_list.append(i)
    # Choose the smallest c_star at which the correlation with the outcome plateaus;
    # fall back to c_star_max so that c_star is always defined even if no plateau is found.
    c_star = c_star_max
    index = 1
    while index < len(correlation_y_list):
        if (correlation_y_list[index] - correlation_y_list[index - 1]) <= convergence_threshold:
            c_star = x_list[index]
            break
        index = index + 1
    # Choosing c1 and c2 based on the hyperbolic relationship once c_star is chosen, by going over various combinations of c1 and c2 values and choosing the combination
    # that maintains the minimum distance between the product of correlations of the simulated variable and the product of maximum correlations of one of the observed variables,
    # and additionally checking that the ratio of the weights maintains the ratio of the maximum possible observed coefficients within some confidence interval
# c1_final and c2_final are initialised to the values on the hyperbolic curve such that c1_final = c2_final and c1_final*c2_final = c_star
c1_final = math.sqrt(c_star)
c2_final = math.sqrt(c_star)
# initialising min_distance_between_product_cor_metrics to be a value greater than 1
min_distance_between_product_cor_metrics = 1.5
i = 0.05
threshold = c_star / 0.05
while i <= threshold:
c2 = i
c1 = c_star / c2
final_U = _generate_confounder_from_residuals(c1, c2, d_y, d_t, X)
current_simulated_confounder = final_U
outcome_values = data[outcome_name[0]]
correlation_y = current_simulated_confounder.corr(outcome_values)
treatment_values = t
correlation_t = current_simulated_confounder.corr(treatment_values)
product_cor_metric_simulated = correlation_y * correlation_t
if min_distance_between_product_cor_metrics >= abs(product_cor_metric_simulated - product_cor_metric_observed):
min_distance_between_product_cor_metrics = abs(product_cor_metric_simulated - product_cor_metric_observed)
additional_condition = correlation_y_observed / correlation_t_observed
if ((c1 / c2) <= (additional_condition + 0.3 * additional_condition)) and (
(c1 / c2) >= (additional_condition - 0.3 * additional_condition)
): # choose minimum positive value
c1_final = c1
c2_final = c2
i = i * 1.5
"""#closed form solution
print("c_star_max before closed form", c_star_max)
if max_correlation_with_t == -1000:
c2 = 0
c1 = c_star_max
else:
additional_condition = abs(max_correlation_with_y/max_correlation_with_t)
print("additional_condition", additional_condition)
c2 = math.sqrt(c_star_max/additional_condition)
c1 = c_star_max/c2"""
final_U = _generate_confounder_from_residuals(c1_final, c2_final, d_y, d_t, X)
return final_U
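
# Minimal usage sketch (hedged; names are placeholders, and
# `variables_of_interest` stands in for the candidate variable list normally
# supplied by the refuter). The simulated confounder can be appended to the
# data and the effect re-estimated:
#
#     u = include_simulated_confounder(
#         data=df,
#         treatment_name=["treatment"],
#         outcome_name=["outcome"],
#         kappa_t=None,
#         kappa_y=None,
#         variables_of_interest=["w0", "w1"],
#     )
#     df["simulated_confounder"] = u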
def _generate_confounder_from_residuals(c1, c2, d_y, d_t, X):
"""
This function takes the residuals from the treatment and outcome model and their coefficients and simulates the intermediate random variable U by taking
the row wise normal distribution corresponding to each residual value and then debiasing the intermediate variable to get the final variable.
:param c1: coefficient to the residual from the outcome model
:type float
:param c2: coefficient to the residual from the treatment model
:type float
:param d_y: residuals from the outcome model
:type list
:param d_t: residuals from the treatment model
:type list
:returns: The simulated values of the unobserved confounder based on the data
:type pandas.core.series.Series
"""
U = []
for j in range(len(d_t)):
simulated_variable_mean = c1 * d_y[j] + c2 * d_t[j]
simulated_variable_stddev = 1
U.append(np.random.normal(simulated_variable_mean, simulated_variable_stddev, 1))
U = np.array(U)
model = sm.OLS(U, X)
results = model.fit()
U = U.reshape(
-1,
)
    final_U = U - results.fittedvalues.values
    # Return the debiased variable (the residual after regressing U on X), as described in the docstring.
    final_U = pd.Series(final_U)
return final_U
def sensitivity_linear_partial_r2(
data: pd.DataFrame,
estimate: CausalEstimate,
treatment_name: str,
frac_strength_treatment: float = 1.0,
frac_strength_outcome: float = 1.0,
percent_change_estimate: float = 1.0,
benchmark_common_causes: Optional[List[str]] = None,
significance_level: Optional[float] = None,
null_hypothesis_effect: Optional[float] = None,
plot_estimate: bool = True,
) -> LinearSensitivityAnalyzer:
"""Add an unobserved confounder for refutation using Linear partial R2 methond (Sensitivity Analysis for linear models).
:param data: pd.DataFrame: Data to run the refutation
:param estimate: CausalEstimate: Estimate to run the refutation
:param treatment_name: str: Name of the treatment
:param frac_strength_treatment: float: This parameter decides the effect strength of the simulated confounder as a fraction of the effect strength of observed confounders on treatment. Defaults to 1.
:param frac_strength_outcome: float: This parameter decides the effect strength of the simulated confounder as a fraction of the effect strength of observed confounders on outcome. Defaults to 1.
    :param percent_change_estimate: Percentage of reduction of the treatment estimate that could alter the results (default = 1).
        If percent_change_estimate = 1, the robustness value describes the strength of association of confounders with treatment and outcome needed to reduce the estimate by 100%, i.e., bring it down to 0. (relevant only for Linear Sensitivity Analysis, ignore for rest)
:param benchmark_common_causes: names of variables for bounding strength of confounders. (relevant only for partial-r2 based simulation methods)
    :param significance_level: significance level for statistical inference (default = 0.05). (relevant only for partial-r2 based simulation methods)
:param null_hypothesis_effect: assumed effect under the null hypothesis. (relevant only for linear-partial-R2, ignore for rest)
:param plot_estimate: Generate contour plot for estimate while performing sensitivity analysis. (default = True).
(relevant only for partial-r2 based simulation methods)
"""
if not (isinstance(estimate.estimator, LinearRegressionEstimator)):
raise NotImplementedError("Currently only LinearRegressionEstimator is supported for Sensitivity Analysis")
if len(estimate.estimator._effect_modifier_names) > 0:
raise NotImplementedError("The current implementation does not support effect modifiers")
if frac_strength_outcome == 1:
frac_strength_outcome = frac_strength_treatment
analyzer = LinearSensitivityAnalyzer(
estimator=estimate.estimator,
data=data,
treatment_name=treatment_name,
percent_change_estimate=percent_change_estimate,
significance_level=significance_level,
benchmark_common_causes=benchmark_common_causes,
null_hypothesis_effect=null_hypothesis_effect,
frac_strength_treatment=frac_strength_treatment,
frac_strength_outcome=frac_strength_outcome,
common_causes_order=estimate.estimator._observed_common_causes.columns,
)
analyzer.check_sensitivity(plot=plot_estimate)
return analyzer
def sensitivity_non_parametric_partial_r2(
estimate: CausalEstimate,
kappa_t: Optional[Union[float, np.ndarray]] = None,
kappa_y: Optional[Union[float, np.ndarray]] = None,
frac_strength_treatment: float = 1.0,
frac_strength_outcome: float = 1.0,
benchmark_common_causes: Optional[List[str]] = None,
plot_estimate: bool = True,
alpha_s_estimator_list: Optional[List] = None,
alpha_s_estimator_param_list: Optional[List[Dict]] = None,
g_s_estimator_list: Optional[List] = None,
g_s_estimator_param_list: Optional[List[Dict]] = None,
plugin_reisz: bool = False,
) -> Union[PartialLinearSensitivityAnalyzer, NonParametricSensitivityAnalyzer]:
"""Add an unobserved confounder for refutation using Non-parametric partial R2 methond (Sensitivity Analysis for non-parametric models).
:param estimate: CausalEstimate: Estimate to run the refutation
:param kappa_t: float, numpy.ndarray: Partial R2 of the unobserved confounder wrt the treatment conditioned on the observed confounders. Only in the case of general non-parametric-partial-R2, it is the fraction of variance in the reisz representer that is explained by the unobserved confounder; specifically (1-r), where r is the ratio of variance of reisz representer, alpha^2, based on observed confounders and that based on all confounders.
:param kappa_y: float, numpy.ndarray: Partial R2 of the unobserved confounder wrt the outcome conditioned on the treatment and observed confounders.
:param frac_strength_treatment: float: This parameter decides the effect strength of the simulated confounder as a fraction of the effect strength of observed confounders on treatment. Defaults to 1.
:param frac_strength_outcome: float: This parameter decides the effect strength of the simulated confounder as a fraction of the effect strength of observed confounders on outcome. Defaults to 1.
:param benchmark_common_causes: names of variables for bounding strength of confounders. (relevant only for partial-r2 based simulation methods)
:param plot_estimate: Generate contour plot for estimate while performing sensitivity analysis. (default = True).
(relevant only for partial-r2 based simulation methods)
:param alpha_s_estimator_list: list of estimator objects for estimating alpha_s. These objects should have fit() and predict() methods (relevant only for non-parametric-partial-R2 method)
:param alpha_s_estimator_param_list: list of dictionaries with parameters for finding alpha_s. (relevant only for non-parametric-partial-R2 simulation method)
:param g_s_estimator_list: list of estimator objects for finding g_s. These objects should have fit() and predict() functions implemented. (relevant only for non-parametric-partial-R2 simulation method)
:param g_s_estimator_param_list: list of dictionaries with parameters for tuning respective estimators in "g_s_estimator_list". The order of the dictionaries in the list should be consistent with the estimator objects order in "g_s_estimator_list". (relevant only for non-parametric-partial-R2 simulation method)
    :param plugin_reisz: bool: Whether to use the plugin estimator or the nonparametric estimator for the reisz representer function (alpha_s).
"""
import dowhy.causal_estimators.econml
# If the estimator used is LinearDML, partially linear sensitivity analysis will be automatically chosen
if isinstance(estimate.estimator, dowhy.causal_estimators.econml.Econml):
if estimate.estimator._econml_methodname == "econml.dml.LinearDML":
analyzer = PartialLinearSensitivityAnalyzer(
estimator=estimate._estimator_object,
observed_common_causes=estimate.estimator._observed_common_causes,
treatment=estimate.estimator._treatment,
outcome=estimate.estimator._outcome,
alpha_s_estimator_param_list=alpha_s_estimator_param_list,
g_s_estimator_list=g_s_estimator_list,
g_s_estimator_param_list=g_s_estimator_param_list,
effect_strength_treatment=kappa_t,
effect_strength_outcome=kappa_y,
benchmark_common_causes=benchmark_common_causes,
frac_strength_treatment=frac_strength_treatment,
frac_strength_outcome=frac_strength_outcome,
)
analyzer.check_sensitivity(plot=plot_estimate)
return analyzer
analyzer = NonParametricSensitivityAnalyzer(
estimator=estimate.estimator,
observed_common_causes=estimate.estimator._observed_common_causes,
treatment=estimate.estimator._treatment,
outcome=estimate.estimator._outcome,
alpha_s_estimator_list=alpha_s_estimator_list,
alpha_s_estimator_param_list=alpha_s_estimator_param_list,
g_s_estimator_list=g_s_estimator_list,
g_s_estimator_param_list=g_s_estimator_param_list,
effect_strength_treatment=kappa_t,
effect_strength_outcome=kappa_y,
benchmark_common_causes=benchmark_common_causes,
frac_strength_treatment=frac_strength_treatment,
frac_strength_outcome=frac_strength_outcome,
theta_s=estimate.value,
plugin_reisz=plugin_reisz,
)
analyzer.check_sensitivity(plot=plot_estimate)
return analyzer
def sensitivity_e_value(
data: pd.DataFrame,
target_estimand: IdentifiedEstimand,
estimate: CausalEstimate,
treatment_name: List[str],
outcome_name: List[str],
plot_estimate: bool = True,
) -> EValueSensitivityAnalyzer:
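    """Check the sensitivity of a causal estimate to unobserved confounders using the E-value method.

    The E-value is the minimum strength of association (on the risk-ratio scale) that an unobserved confounder would need to have with both the treatment and the outcome to fully explain away the estimated effect.

    :param data: pd.DataFrame: Data to run the refutation
    :param target_estimand: IdentifiedEstimand: Identified estimand to run the refutation
    :param estimate: CausalEstimate: Estimate to run the refutation
    :param treatment_name: Name of the treatment
    :param outcome_name: Name of the outcome
    :param plot_estimate: Generate a plot while performing sensitivity analysis. (default = True)
    """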
if not isinstance(estimate.estimator, RegressionEstimator):
raise NotImplementedError("E-Value sensitivity analysis is currently only implemented RegressionEstimator.")
if len(estimate.estimator._effect_modifier_names) > 0:
raise NotImplementedError("The current implementation does not support effect modifiers")
analyzer = EValueSensitivityAnalyzer(
estimate=estimate,
estimand=target_estimand,
data=data,
treatment_name=treatment_name[0],
outcome_name=outcome_name[0],
)
analyzer.check_sensitivity(plot=plot_estimate)
return analyzer
def sensitivity_simulation(
data: pd.DataFrame,
target_estimand: IdentifiedEstimand,
estimate: CausalEstimate,
treatment_name: str,
outcome_name: str,
kappa_t: Optional[Union[float, np.ndarray]] = None,
kappa_y: Optional[Union[float, np.ndarray]] = None,
confounders_effect_on_treatment: str = "binary_flip",
confounders_effect_on_outcome: str = "linear",
frac_strength_treatment: float = 1.0,
frac_strength_outcome: float = 1.0,
plotmethod: Optional[str] = None,
show_progress_bar=False,
**_,
) -> CausalRefutation:
"""
    This function attempts to add an unobserved common cause to the outcome and the treatment. At present, the behavior is implemented for one-dimensional continuous
    and binary variables. The function can either take single-valued inputs or a range of inputs; it inspects the data type of the input and decides on the course of
    action.
:param data: pd.DataFrame: Data to run the refutation
:param target_estimand: IdentifiedEstimand: Identified estimand to run the refutation
:param estimate: CausalEstimate: Estimate to run the refutation
:param treatment_name: str: Name of the treatment
:param outcome_name: str: Name of the outcome
:param kappa_t: float, numpy.ndarray: Strength of the confounder's effect on treatment. When confounders_effect_on_treatment is linear, it is the regression coefficient. When the confounders_effect_on_treatment is binary flip, it is the probability with which effect of unobserved confounder can invert the value of the treatment.
:param kappa_y: float, numpy.ndarray: Strength of the confounder's effect on outcome. Its interpretation depends on confounders_effect_on_outcome and the simulation_method. When simulation_method is direct-simulation, for a linear effect it behaves like the regression coefficient and for a binary flip, it is the probability with which it can invert the value of the outcome.
:param confounders_effect_on_treatment: str : The type of effect on the treatment due to the unobserved confounder. Possible values are ['binary_flip', 'linear']
:param confounders_effect_on_outcome: str : The type of effect on the outcome due to the unobserved confounder. Possible values are ['binary_flip', 'linear']
:param frac_strength_treatment: float: This parameter decides the effect strength of the simulated confounder as a fraction of the effect strength of observed confounders on treatment. Defaults to 1.
:param frac_strength_outcome: float: This parameter decides the effect strength of the simulated confounder as a fraction of the effect strength of observed confounders on outcome. Defaults to 1.
:param plotmethod: string: Type of plot to be shown. If None (the default), no plot is generated. This parameter is used only when more than one treatment confounder effect value or outcome confounder effect value is provided. Supported values are "contour", "colormesh" when more than one value is provided for both confounder effect value parameters; "line" when provided for only one of them.
:return: CausalRefuter: An object that contains the estimated effect and a new effect and the name of the refutation used.
"""
if kappa_t is None:
kappa_t = _infer_default_kappa_t(
data, target_estimand, treatment_name, confounders_effect_on_treatment, frac_strength_treatment
)
if kappa_y is None:
kappa_y = _infer_default_kappa_y(
data, target_estimand, outcome_name, confounders_effect_on_outcome, frac_strength_outcome
)
if not isinstance(kappa_t, (list, np.ndarray)) and not isinstance(
kappa_y, (list, np.ndarray)
): # Deal with single value inputs
new_data = copy.deepcopy(data)
new_data = _include_confounders_effect(
data,
new_data,
confounders_effect_on_treatment,
treatment_name,
kappa_t,
confounders_effect_on_outcome,
outcome_name,
kappa_y,
)
new_estimator = CausalEstimator.get_estimator_object(new_data, target_estimand, estimate)
new_effect = new_estimator.estimate_effect()
refute = CausalRefutation(
estimate.value, new_effect.value, refutation_type="Refute: Add an Unobserved Common Cause"
)
refute.new_effect_array = np.array(new_effect.value)
refute.new_effect = new_effect.value
return refute
else: # Deal with multiple value inputs
if isinstance(kappa_t, (list, np.ndarray)) and isinstance(
kappa_y, (list, np.ndarray)
): # Deal with range inputs
# Build a 2D matrix of results over the grid of kappa_t (rows) and kappa_y (columns)
results_matrix = np.zeros((len(kappa_t), len(kappa_y)))  # Matrix to hold all the results
orig_data = copy.deepcopy(data)
for i in tqdm(
range(len(kappa_t)),
colour=CausalRefuter.PROGRESS_BAR_COLOR,
disable=not show_progress_bar,
desc="Refuting Estimates: ",
):
for j in range(len(kappa_y)):
new_data = _include_confounders_effect(
data,
orig_data,
confounders_effect_on_treatment,
treatment_name,
kappa_t[i],
confounders_effect_on_outcome,
outcome_name,
kappa_y[j],
)
new_estimator = CausalEstimator.get_estimator_object(new_data, target_estimand, estimate)
new_effect = new_estimator.estimate_effect()
refute = CausalRefutation(
estimate.value,
new_effect.value,
refutation_type="Refute: Add an Unobserved Common Cause",
)
results_matrix[i][j] = refute.new_effect # Populate the results
refute.new_effect_array = results_matrix
refute.new_effect = (np.min(results_matrix), np.max(results_matrix))
# Store the values into the refute object
if plotmethod is None:
return refute
import matplotlib
import matplotlib.pyplot as plt
fig = plt.figure(figsize=(6, 5))
left, bottom, width, height = 0.1, 0.1, 0.8, 0.8
ax = fig.add_axes([left, bottom, width, height])
oe = estimate.value
contour_levels = [oe / 4.0, oe / 2.0, (3.0 / 4) * oe, oe]
contour_levels.extend([0, np.min(results_matrix), np.max(results_matrix)])
if plotmethod == "contour":
cp = plt.contourf(kappa_y, kappa_t, results_matrix, levels=sorted(contour_levels))
# Adding a label on the contour line for the original estimate
fmt = {}
trueeffect_index = np.where(cp.levels == oe)[0][0]
fmt[cp.levels[trueeffect_index]] = "Estimated Effect"
# Label every other level using strings
plt.clabel(cp, [cp.levels[trueeffect_index]], inline=True, fmt=fmt)
plt.colorbar(cp)
elif plotmethod == "colormesh":
cp = plt.pcolormesh(kappa_y, kappa_t, results_matrix, shading="nearest")
plt.colorbar(cp, ticks=contour_levels)
ax.yaxis.set_ticks(kappa_t)
ax.xaxis.set_ticks(kappa_y)
plt.xticks(rotation=45)
ax.set_title("Effect of Unobserved Common Cause")
ax.set_ylabel("Value of Linear Constant on Treatment")
ax.set_xlabel("Value of Linear Constant on Outcome")
plt.show()
return refute
elif isinstance(kappa_t, (list, np.ndarray)):
outcomes = np.random.rand(len(kappa_t))
orig_data = copy.deepcopy(data)
for i in tqdm(
range(0, len(kappa_t)),
colour=CausalRefuter.PROGRESS_BAR_COLOR,
disable=not show_progress_bar,
desc="Refuting Estimates: ",
):
new_data = _include_confounders_effect(
data,
orig_data,
confounders_effect_on_treatment,
treatment_name,
kappa_t[i],
confounders_effect_on_outcome,
outcome_name,
kappa_y,
)
new_estimator = CausalEstimator.get_estimator_object(new_data, target_estimand, estimate)
new_effect = new_estimator.estimate_effect()
refute = CausalRefutation(
estimate.value, new_effect.value, refutation_type="Refute: Add an Unobserved Common Cause"
)
logger.debug(refute)
outcomes[i] = refute.new_effect # Populate the results
refute.new_effect_array = outcomes
refute.new_effect = (np.min(outcomes), np.max(outcomes))
if plotmethod is None:
return refute
import matplotlib
import matplotlib.pyplot as plt
fig = plt.figure(figsize=(6, 5))
left, bottom, width, height = 0.1, 0.1, 0.8, 0.8
ax = fig.add_axes([left, bottom, width, height])
plt.plot(kappa_t, outcomes)
plt.axhline(estimate.value, linestyle="--", color="gray")
ax.set_title("Effect of Unobserved Common Cause")
ax.set_xlabel("Value of Linear Constant on Treatment")
ax.set_ylabel("Estimated Effect after adding the common cause")
plt.show()
return refute
elif isinstance(kappa_y, (list, np.ndarray)):
outcomes = np.random.rand(len(kappa_y))
orig_data = copy.deepcopy(data)
for i in tqdm(
range(0, len(kappa_y)),
colour=CausalRefuter.PROGRESS_BAR_COLOR,
disable=not show_progress_bar,
desc="Refuting Estimates: ",
):
new_data = _include_confounders_effect(
data,
orig_data,
confounders_effect_on_treatment,
treatment_name,
kappa_t,
confounders_effect_on_outcome,
outcome_name,
kappa_y[i],
)
new_estimator = CausalEstimator.get_estimator_object(new_data, target_estimand, estimate)
new_effect = new_estimator.estimate_effect()
refute = CausalRefutation(
estimate.value, new_effect.value, refutation_type="Refute: Add an Unobserved Common Cause"
)
logger.debug(refute)
outcomes[i] = refute.new_effect # Populate the results
refute.new_effect_array = outcomes
refute.new_effect = (np.min(outcomes), np.max(outcomes))
if plotmethod is None:
return refute
import matplotlib
import matplotlib.pyplot as plt
fig = plt.figure(figsize=(6, 5))
left, bottom, width, height = 0.1, 0.1, 0.8, 0.8
ax = fig.add_axes([left, bottom, width, height])
plt.plot(kappa_y, outcomes)
plt.axhline(estimate.value, linestyle="--", color="gray")
ax.set_title("Effect of Unobserved Common Cause")
ax.set_xlabel("Value of Linear Constant on Outcome")
ax.set_ylabel("Estimated Effect after adding the common cause")
plt.show()
return refute
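# A minimal usage sketch for the single-value case (illustrative; `df`, `estimand` and
# `estimate` are assumed to come from an earlier CausalModel workflow):
#
#   refute = sensitivity_simulation(
#       data=df,
#       target_estimand=estimand,
#       estimate=estimate,
#       treatment_name=["v0"],
#       outcome_name=["y"],
#       kappa_t=0.2,
#       kappa_y=0.02,
#   )
#   print(refute.new_effect)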
| andresmor-ms | f13ed30f42440552e4a912372abcb7c3023fc9c0 | 18bd1fe5d9941867dbd135e0d2a0af2fb24feea7 | should these constants be named? | darthtrevino | 271 |
py-why/dowhy | 727 | Re-introduce include_simulated_confounder as method | Fixes #721
Signed-off-by: Andres Morales <[email protected]> | null | 2022-10-28 22:32:39+00:00 | 2022-10-31 16:28:12+00:00 | dowhy/causal_refuters/add_unobserved_common_cause.py | import copy
import logging
import math
from typing import Dict, List, Optional, Union
import numpy as np
import pandas as pd
import scipy.stats
import statsmodels.api as sm
from sklearn.linear_model import LogisticRegression
from sklearn.preprocessing import StandardScaler
from tqdm.auto import tqdm
from dowhy.causal_estimator import CausalEstimate, CausalEstimator
from dowhy.causal_estimators.linear_regression_estimator import LinearRegressionEstimator
from dowhy.causal_estimators.regression_estimator import RegressionEstimator
from dowhy.causal_identifier.identified_estimand import IdentifiedEstimand
from dowhy.causal_refuter import CausalRefutation, CausalRefuter, choose_variables
from dowhy.causal_refuters.evalue_sensitivity_analyzer import EValueSensitivityAnalyzer
from dowhy.causal_refuters.linear_sensitivity_analyzer import LinearSensitivityAnalyzer
from dowhy.causal_refuters.non_parametric_sensitivity_analyzer import NonParametricSensitivityAnalyzer
from dowhy.causal_refuters.partial_linear_sensitivity_analyzer import PartialLinearSensitivityAnalyzer
logger = logging.getLogger(__name__)
class AddUnobservedCommonCause(CausalRefuter):
"""Add an unobserved confounder for refutation.
AddUnobservedCommonCause class supports four methods:
1) Simulation of an unobserved confounder
2) Linear partial R2 : Sensitivity Analysis for linear models.
3) Non-Parametric partial R2 based : Sensitivity Analysis for non-parametric models.
4) E-value : Sensitivity Analysis for regression estimators based on the E-value of the estimate.
Supports additional parameters that can be specified in the refute_estimate() method.
"""
def __init__(self, *args, **kwargs):
"""
Initialize the parameters required for the refuter.
For direct-simulation, if effect_strength_on_treatment or effect_strength_on_outcome is not
given, it is calculated automatically as a range between the
minimum and maximum effect strength of observed confounders on treatment
and outcome respectively.
:param simulation_method: The method to use for simulating effect of unobserved confounder. Possible values are ["direct-simulation", "linear-partial-R2", "non-parametric-partial-R2", "e-value"].
:param confounders_effect_on_treatment: str : The type of effect on the treatment due to the unobserved confounder. Possible values are ['binary_flip', 'linear']
:param confounders_effect_on_outcome: str : The type of effect on the outcome due to the unobserved confounder. Possible values are ['binary_flip', 'linear']
:param effect_strength_on_treatment: float, numpy.ndarray: [Used when simulation_method="direct-simulation"] Strength of the confounder's effect on treatment. When confounders_effect_on_treatment is linear, it is the regression coefficient. When the confounders_effect_on_treatment is binary flip, it is the probability with which effect of unobserved confounder can invert the value of the treatment.
:param effect_strength_on_outcome: float, numpy.ndarray: Strength of the confounder's effect on outcome. Its interpretation depends on confounders_effect_on_outcome and the simulation_method. When simulation_method is direct-simulation, for a linear effect it behaves like the regression coefficient and for a binary flip, it is the probability with which it can invert the value of the outcome.
:param partial_r2_confounder_treatment: float, numpy.ndarray: [Used when simulation_method is linear-partial-R2 or non-parametric-partial-R2] Partial R2 of the unobserved confounder wrt the treatment conditioned on the observed confounders. Only in the case of general non-parametric-partial-R2, it is the fraction of variance in the reisz representer that is explained by the unobserved confounder; specifically (1-r), where r is the ratio of variance of reisz representer, alpha^2, based on observed confounders and that based on all confounders.
:param partial_r2_confounder_outcome: float, numpy.ndarray: [Used when simulation_method is linear-partial-R2 or non-parametric-partial-R2] Partial R2 of the unobserved confounder wrt the outcome conditioned on the treatment and observed confounders.
:param frac_strength_treatment: float: This parameter decides the effect strength of the simulated confounder as a fraction of the effect strength of observed confounders on treatment. Defaults to 1.
:param frac_strength_outcome: float: This parameter decides the effect strength of the simulated confounder as a fraction of the effect strength of observed confounders on outcome. Defaults to 1.
:param plotmethod: string: Type of plot to be shown. If None, no plot is generated. This parameter is used only when more than one treatment confounder effect value or outcome confounder effect value is provided. Default is "colormesh". Supported values are "contour", "colormesh" when more than one value is provided for both confounder effect value parameters; "line" when provided for only one of them.
:param percent_change_estimate: It is the percentage of reduction of treatment estimate that could alter the results (default = 1).
if percent_change_estimate = 1, the robustness value describes the strength of association of confounders with treatment and outcome in order to reduce the estimate by 100% i.e bring it down to 0. (relevant only for Linear Sensitivity Analysis, ignore for rest)
:param confounder_increases_estimate: True implies that confounder increases the absolute value of estimate and vice versa. (Default = False). (relevant only for Linear Sensitivity Analysis, ignore for rest)
:param benchmark_common_causes: names of variables for bounding strength of confounders. (relevant only for partial-r2 based simulation methods)
:param significance_level: confidence interval for statistical inference (default = 0.05). (relevant only for partial-r2 based simulation methods)
:param null_hypothesis_effect: assumed effect under the null hypothesis. (relevant only for linear-partial-R2, ignore for rest)
:param plot_estimate: Generate contour plot for estimate while performing sensitivity analysis. (default = True).
(relevant only for partial-r2 based simulation methods)
:param num_splits: number of splits for cross validation. (default = 5). (relevant only for non-parametric-partial-R2 simulation method)
:param shuffle_data : shuffle data or not before splitting into folds (default = False). (relevant only for non-parametric-partial-R2 simulation method)
:param shuffle_random_seed: seed for randomly shuffling data. (relevant only for non-parametric-partial-R2 simulation method)
:param alpha_s_estimator_param_list: list of dictionaries with parameters for finding alpha_s. (relevant only for non-parametric-partial-R2 simulation method)
:param g_s_estimator_list: list of estimator objects for finding g_s. These objects should have fit() and predict() functions implemented. (relevant only for non-parametric-partial-R2 simulation method)
:param g_s_estimator_param_list: list of dictionaries with parameters for tuning respective estimators in "g_s_estimator_list". The order of the dictionaries in the list should be consistent with the estimator objects order in "g_s_estimator_list". (relevant only for non-parametric-partial-R2 simulation method)
"""
super().__init__(*args, **kwargs)
self.simulation_method = kwargs["simulation_method"] if "simulation_method" in kwargs else "direct-simulation"
self.effect_on_t = (
kwargs["confounders_effect_on_treatment"] if "confounders_effect_on_treatment" in kwargs else "binary_flip"
)
self.effect_on_y = (
kwargs["confounders_effect_on_outcome"] if "confounders_effect_on_outcome" in kwargs else "linear"
)
if self.simulation_method == "direct-simulation":
self.kappa_t = kwargs["effect_strength_on_treatment"] if "effect_strength_on_treatment" in kwargs else None
self.kappa_y = kwargs["effect_strength_on_outcome"] if "effect_strength_on_outcome" in kwargs else None
elif self.simulation_method in ["linear-partial-R2", "non-parametric-partial-R2"]:
self.kappa_t = (
kwargs["partial_r2_confounder_treatment"] if "partial_r2_confounder_treatment" in kwargs else None
)
self.kappa_y = (
kwargs["partial_r2_confounder_outcome"] if "partial_r2_confounder_outcome" in kwargs else None
)
elif self.simulation_method == "e-value":
pass
else:
raise ValueError(
"simulation method is not supported. Try direct-simulation, linear-partial-R2, non-parametric-partial-R2, or e-value"
)
self.frac_strength_treatment = (
kwargs["effect_fraction_on_treatment"] if "effect_fraction_on_treatment" in kwargs else 1
)
self.frac_strength_outcome = (
kwargs["effect_fraction_on_outcome"] if "effect_fraction_on_outcome" in kwargs else 1
)
self.plotmethod = kwargs["plotmethod"] if "plotmethod" in kwargs else "colormesh"
self.percent_change_estimate = kwargs["percent_change_estimate"] if "percent_change_estimate" in kwargs else 1.0
self.significance_level = kwargs["significance_level"] if "significance_level" in kwargs else 0.05
self.confounder_increases_estimate = (
kwargs["confounder_increases_estimate"] if "confounder_increases_estimate" in kwargs else False
)
self.benchmark_common_causes = (
kwargs["benchmark_common_causes"] if "benchmark_common_causes" in kwargs else None
)
self.null_hypothesis_effect = kwargs["null_hypothesis_effect"] if "null_hypothesis_effect" in kwargs else 0
self.plot_estimate = kwargs["plot_estimate"] if "plot_estimate" in kwargs else True
self.num_splits = kwargs["num_splits"] if "num_splits" in kwargs else 5
self.shuffle_data = kwargs["shuffle_data"] if "shuffle_data" in kwargs else False
self.shuffle_random_seed = kwargs["shuffle_random_seed"] if "shuffle_random_seed" in kwargs else None
self.alpha_s_estimator_param_list = (
kwargs["alpha_s_estimator_param_list"] if "alpha_s_estimator_param_list" in kwargs else None
)
self.alpha_s_estimator_list = kwargs["alpha_s_estimator_list"] if "alpha_s_estimator_list" in kwargs else None
self.g_s_estimator_list = kwargs["g_s_estimator_list"] if "g_s_estimator_list" in kwargs else None
self.g_s_estimator_param_list = (
kwargs["g_s_estimator_param_list"] if "g_s_estimator_param_list" in kwargs else None
)
self.plugin_reisz = kwargs["plugin_reisz"] if "plugin_reisz" in kwargs else False
self.logger = logging.getLogger(__name__)
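# A minimal usage sketch (illustrative; `model`, `identified_estimand` and `estimate` are
# assumed to come from the standard dowhy workflow):
#
#   res = model.refute_estimate(
#       identified_estimand,
#       estimate,
#       method_name="add_unobserved_common_cause",
#       simulation_method="direct-simulation",
#       confounders_effect_on_treatment="binary_flip",
#       confounders_effect_on_outcome="linear",
#   )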
def refute_estimate(self, show_progress_bar=False):
if self.simulation_method == "linear-partial-R2":
return sensitivity_linear_partial_r2(
self._data,
self._estimate,
self._treatment_name,
self.frac_strength_treatment,
self.frac_strength_outcome,
self.percent_change_estimate,
self.benchmark_common_causes,
self.significance_level,
self.null_hypothesis_effect,
self.plot_estimate,
)
elif self.simulation_method == "non-parametric-partial-R2":
return sensitivity_non_parametric_partial_r2(
self._estimate,
self.kappa_t,
self.kappa_y,
self.frac_strength_treatment,
self.frac_strength_outcome,
self.benchmark_common_causes,
self.plot_estimate,
self.alpha_s_estimator_list,
self.alpha_s_estimator_param_list,
self.g_s_estimator_list,
self.g_s_estimator_param_list,
self.plugin_reisz,
)
elif self.simulation_method == "e-value":
return sensitivity_e_value(
self._data,
self._target_estimand,
self._estimate,
self._treatment_name,
self._outcome_name,
self.plot_estimate,
)
elif self.simulation_method == "direct-simulation":
refute = sensitivity_simulation(
self._data,
self._target_estimand,
self._estimate,
self._treatment_name,
self._outcome_name,
self.kappa_t,
self.kappa_y,
self.effect_on_t,
self.effect_on_y,
self.frac_strength_treatment,
self.frac_strength_outcome,
self.plotmethod,
show_progress_bar,
)
refute.add_refuter(self)
return refute
def _infer_default_kappa_t(
data: pd.DataFrame,
target_estimand: IdentifiedEstimand,
treatment_name: List[str],
effect_on_t: str,
frac_strength_treatment: float,
len_kappa_t: int = 10,
):
"""Infer default effect strength of simulated confounder on treatment."""
observed_common_causes_names = target_estimand.get_backdoor_variables()
if len(observed_common_causes_names) > 0:
observed_common_causes = data[observed_common_causes_names]
observed_common_causes = pd.get_dummies(observed_common_causes, drop_first=True)
else:
raise ValueError(
"There needs to be at least one common cause to"
+ "automatically compute the default value of kappa_t."
+ " Provide a value for kappa_t"
)
t = data[treatment_name]
# Standardizing the data
observed_common_causes = StandardScaler().fit_transform(observed_common_causes)
if effect_on_t == "binary_flip":
# Fit a model containing all confounders and compare predictions
# using all features compared to all features except a given
# confounder.
tmodel = LogisticRegression().fit(observed_common_causes, t)
tpred = tmodel.predict(observed_common_causes).astype(int)
flips = []
for i in range(observed_common_causes.shape[1]):
oldval = np.copy(observed_common_causes[:, i])
observed_common_causes[:, i] = 0
tcap = tmodel.predict(observed_common_causes).astype(int)
observed_common_causes[:, i] = oldval
flips.append(np.sum(abs(tcap - tpred)) / tpred.shape[0])
min_coeff, max_coeff = min(flips), max(flips)
elif effect_on_t == "linear":
# Estimating the regression coefficient from standardized features to t
corrcoef_var_t = np.corrcoef(observed_common_causes, t, rowvar=False)[-1, :-1]
std_dev_t = np.std(t)[0]
max_coeff = max(corrcoef_var_t) * std_dev_t
min_coeff = min(corrcoef_var_t) * std_dev_t
else:
raise NotImplementedError("'" + effect_on_t + "' method not supported for confounders' effect on treatment")
min_coeff, max_coeff = _compute_min_max_coeff(min_coeff, max_coeff, frac_strength_treatment)
# By default, return a plot with 10 points
# consider 10 values of the effect of the unobserved confounder
step = (max_coeff - min_coeff) / len_kappa_t
logger.info("(Min, Max) kappa_t for observed common causes, ({0}, {1})".format(min_coeff, max_coeff))
if np.equal(max_coeff, min_coeff):
return max_coeff
else:
return np.arange(min_coeff, max_coeff, step)
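# For example, with min_coeff = 0.0, max_coeff = 1.0 and len_kappa_t = 10, the grid returned
# is array([0.0, 0.1, ..., 0.9]): ten candidate effect strengths to sweep over.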
def _compute_min_max_coeff(min_coeff: float, max_coeff: float, effect_strength_fraction: np.ndarray):
max_coeff = effect_strength_fraction * max_coeff
min_coeff = effect_strength_fraction * min_coeff
return min_coeff, max_coeff
def _infer_default_kappa_y(
data: pd.DataFrame,
target_estimand: IdentifiedEstimand,
outcome_name: List[str],
effect_on_y: str,
frac_strength_outcome: float,
len_kappa_y: int = 10,
):
"""Infer default effect strength of simulated confounder on treatment."""
observed_common_causes_names = target_estimand.get_backdoor_variables()
if len(observed_common_causes_names) > 0:
observed_common_causes = data[observed_common_causes_names]
observed_common_causes = pd.get_dummies(observed_common_causes, drop_first=True)
else:
raise ValueError(
"There needs to be at least one common cause to"
+ "automatically compute the default value of kappa_y."
+ " Provide a value for kappa_y"
)
y = data[outcome_name]
# Standardizing the data
observed_common_causes = StandardScaler().fit_transform(observed_common_causes)
if effect_on_y == "binary_flip":
# Fit a model containing all confounders and compare predictions
# using all features compared to all features except a given
# confounder.
ymodel = LogisticRegression().fit(observed_common_causes, y)
ypred = ymodel.predict(observed_common_causes).astype(int)
flips = []
for i in range(observed_common_causes.shape[1]):
oldval = np.copy(observed_common_causes[:, i])
observed_common_causes[:, i] = 0
ycap = ymodel.predict(observed_common_causes).astype(int)
observed_common_causes[:, i] = oldval
flips.append(np.sum(abs(ycap - ypred)) / ypred.shape[0])
min_coeff, max_coeff = min(flips), max(flips)
elif effect_on_y == "linear":
corrcoef_var_y = np.corrcoef(observed_common_causes, y, rowvar=False)[-1, :-1]
std_dev_y = np.std(y)[0]
max_coeff = max(corrcoef_var_y) * std_dev_y
min_coeff = min(corrcoef_var_y) * std_dev_y
else:
raise NotImplementedError("'" + effect_on_y + "' method not supported for confounders' effect on outcome")
min_coeff, max_coeff = _compute_min_max_coeff(min_coeff, max_coeff, frac_strength_outcome)
# By default, return a plot with 10 points
# consider 10 values of the effect of the unobserved confounder
step = (max_coeff - min_coeff) / len_kappa_y
logger.info("(Min, Max) kappa_y for observed common causes, ({0}, {1})".format(min_coeff, max_coeff))
if np.equal(max_coeff, min_coeff):
return max_coeff
else:
return np.arange(min_coeff, max_coeff, step)
def _include_confounders_effect(
data: pd.DataFrame,
new_data: pd.DataFrame,
effect_on_t: str,
treatment_name: str,
kappa_t: float,
effect_on_y: str,
outcome_name: str,
kappa_y: float,
):
"""
This function modifies the data to simulate the effect of the unobserved confounder.
In the case of a binary flip, we flip the value only if the random draw falls beyond the threshold set by the effect strength.
In the case of a linear effect, the effect strength is used as the regression coefficient on a simulated standard-normal confounder.
:param new_data: pandas.DataFrame: The data to be changed due to the effects of the unobserved confounder.
:param kappa_t: numpy.float64: The value of the threshold for binary_flip or the value of the regression coefficient for linear effect.
:param kappa_y: numpy.float64: The value of the threshold for binary_flip or the value of the regression coefficient for linear effect.
:return: pandas.DataFrame: The DataFrame that includes the effects of the unobserved confounder.
"""
num_rows = data.shape[0]
stdnorm = scipy.stats.norm()
w_random = stdnorm.rvs(num_rows)
if effect_on_t == "binary_flip":
alpha = 2 * kappa_t - 1 if kappa_t >= 0.5 else 1 - 2 * kappa_t
interval = stdnorm.interval(alpha)
rel_interval = interval[0] if kappa_t >= 0.5 else interval[1]
new_data.loc[rel_interval <= w_random, treatment_name] = (
1 - new_data.loc[rel_interval <= w_random, treatment_name]
)
for tname in treatment_name:
if pd.api.types.is_bool_dtype(data[tname]):
new_data = new_data.astype({tname: "bool"}, copy=False)
elif effect_on_t == "linear":
confounder_t_effect = kappa_t * w_random
# By default, we add the effect of simulated confounder for treatment.
# But subtract it from outcome to create a negative correlation
# assuming that the original confounder's effect was positive on both.
# This is to remove the effect of the original confounder.
new_data[treatment_name] = new_data[treatment_name].values + np.ndarray(
shape=(num_rows, 1), buffer=confounder_t_effect
)
else:
raise NotImplementedError("'" + effect_on_t + "' method not supported for confounders' effect on treatment")
if effect_on_y == "binary_flip":
alpha = 2 * kappa_y - 1 if kappa_y >= 0.5 else 1 - 2 * kappa_y
interval = stdnorm.interval(alpha)
rel_interval = interval[0] if kappa_y >= 0.5 else interval[1]
new_data.loc[rel_interval <= w_random, outcome_name] = 1 - new_data.loc[rel_interval <= w_random, outcome_name]
for yname in outcome_name:
if pd.api.types.is_bool_dtype(data[yname]):
new_data = new_data.astype({yname: "bool"}, copy=False)
elif effect_on_y == "linear":
confounder_y_effect = (-1) * kappa_y * w_random
# By default, we add the effect of simulated confounder for treatment.
# But subtract it from outcome to create a negative correlation
# assuming that the original confounder's effect was positive on both.
# This is to remove the effect of the original confounder.
new_data[outcome_name] = new_data[outcome_name].values + np.ndarray(
shape=(num_rows, 1), buffer=confounder_y_effect
)
else:
raise NotImplementedError("'" + effect_on_y + "' method not supported for confounders' effect on outcome")
return new_data
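# Worked example of the binary_flip logic above (assuming kappa_t = 0.2): alpha = 1 - 2*0.2 = 0.6,
# the central 60% interval of the standard normal is roughly (-0.84, 0.84), so rel_interval is about
# 0.84 and P(w_random >= 0.84) is about 0.2, i.e. roughly 20% of the treatment values get flipped.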
def include_simulated_confounder(
data: pd.DataFrame,
treatment_name: str,
outcome_name: str,
kappa_t: float,
kappa_y: float,
convergence_threshold: float = 0.1,
c_star_max: int = 1000,
):
"""
This function simulates an unobserved confounder based on the data using the following steps:
1. It calculates the "residuals" from the treatment and outcome model
i.) The outcome model has outcome as the dependent variable and all the observed variables including treatment as independent variables
ii.) The treatment model has treatment as the dependent variable and all the observed variables as independent variables.
2. U is an intermediate random variable drawn from the normal distribution with the weighted average of residuals as mean and a unit variance
U ~ N(c1*d_y + c2*d_t, 1)
where
*d_y and d_t are residuals from the treatment and outcome model
*c1 and c2 are coefficients to the residuals
3. The final U, which is the simulated unobserved confounder is obtained by debiasing the intermediate variable U by residualising it with X
Choosing the coefficients c1 and c2:
The coefficients are chosen based on these basic assumptions:
1. There is a hyperbolic relationship satisfying c1*c2 = c_star
2. c_star is chosen from a range of possible values based on the correlation of the obtained simulated variable with outcome and treatment.
3. The product of correlations with treatment and outcome should be at a minimum distance to the maximum correlations with treatment and outcome in any of the observed confounders
4. The ratio of the weights should be such that they maintain the ratio of the maximum possible observed coefficients within some confidence interval
:param c_star_max: The maximum possible value for the hyperbolic curve on which the coefficients to the residuals lie. It defaults to 1000 in the code if not specified by the user.
:type c_star_max: int
:param convergence_threshold: The threshold to check the plateauing of the correlation while selecting a c_star. It defaults to 0.1 in the code if not specified by the user.
:type convergence_threshold: float
:returns: The simulated values of the unobserved confounder based on the data
:rtype: pandas.core.series.Series
"""
# Obtaining the list of observed variables
required_variables = True
observed_variables = choose_variables(required_variables)
observed_variables_with_treatment_and_outcome = observed_variables + treatment_name + outcome_name
# Taking a subset of the dataframe that has only observed variables
data = data[observed_variables_with_treatment_and_outcome]
# Residuals from the outcome model obtained by fitting a linear model
y = data[outcome_name[0]]
observed_variables_with_treatment = observed_variables + treatment_name
X = data[observed_variables_with_treatment]
model = sm.OLS(y, X.astype("float"))
results = model.fit()
residuals_y = y - results.fittedvalues
d_y = list(pd.Series(residuals_y))
# Residuals from the treatment model obtained by fitting a linear model
t = data[treatment_name[0]].astype("int64")
X = data[observed_variables]
model = sm.OLS(t, X)
results = model.fit()
residuals_t = t - results.fittedvalues
d_t = list(pd.Series(residuals_t))
# Initialising product_cor_metric_observed with a very low value, since we are searching for its maximum
product_cor_metric_observed = -10000000000
for i in observed_variables:
current_obs_confounder = data[i]
outcome_values = data[outcome_name[0]]
correlation_y = current_obs_confounder.corr(outcome_values)
treatment_values = t
correlation_t = current_obs_confounder.corr(treatment_values)
product_cor_metric_current = correlation_y * correlation_t
if product_cor_metric_current >= product_cor_metric_observed:
product_cor_metric_observed = product_cor_metric_current
correlation_t_observed = correlation_t
correlation_y_observed = correlation_y
# The user has the option to supply effect_strength_on_y and effect_strength_on_t, which are then used instead of the maximum correlations with treatment and outcome among the observed variables, since they specify the desired effect.
if kappa_t is not None:
correlation_t_observed = kappa_t
if kappa_y is not None:
correlation_y_observed = kappa_y
# Choosing a c_star based on the data.
# The correlations stop increasing beyond a certain value of c_star, i.e. the curve plateaus, and we choose c_star to be the value at which it plateaus.
correlation_y_list = []
correlation_t_list = []
product_cor_metric_simulated_list = []
x_list = []
step = int(c_star_max / 10)
for i in range(0, int(c_star_max), step):
c1 = math.sqrt(i)
c2 = c1
final_U = _generate_confounder_from_residuals(c1, c2, d_y, d_t, X)
current_simulated_confounder = final_U
outcome_values = data[outcome_name[0]]
correlation_y = current_simulated_confounder.corr(outcome_values)
correlation_y_list.append(correlation_y)
treatment_values = t
correlation_t = current_simulated_confounder.corr(treatment_values)
correlation_t_list.append(correlation_t)
product_cor_metric_simulated = correlation_y * correlation_t
product_cor_metric_simulated_list.append(product_cor_metric_simulated)
x_list.append(i)
index = 1
while index < len(correlation_y_list):
if (correlation_y_list[index] - correlation_y_list[index - 1]) <= convergence_threshold:
c_star = x_list[index]
break
index = index + 1
# Choosing c1 and c2 based on the hyperbolic relationship once c_star is chosen, by going over various combinations of c1 and c2 values and choosing the combination
# that maintains the minimum distance between the product of correlations of the simulated variable and the product of maximum correlations of one of the observed variables,
# and additionally checking that the ratio of the weights keeps the ratio of the maximum possible observed coefficients within some confidence interval
# c1_final and c2_final are initialised to the values on the hyperbolic curve such that c1_final = c2_final and c1_final*c2_final = c_star
c1_final = math.sqrt(c_star)
c2_final = math.sqrt(c_star)
# initialising min_distance_between_product_cor_metrics to be a value greater than 1
min_distance_between_product_cor_metrics = 1.5
i = 0.05
threshold = c_star / 0.05
while i <= threshold:
c2 = i
c1 = c_star / c2
final_U = _generate_confounder_from_residuals(c1, c2, d_y, d_t, X)
current_simulated_confounder = final_U
outcome_values = data[outcome_name[0]]
correlation_y = current_simulated_confounder.corr(outcome_values)
treatment_values = t
correlation_t = current_simulated_confounder.corr(treatment_values)
product_cor_metric_simulated = correlation_y * correlation_t
if min_distance_between_product_cor_metrics >= abs(product_cor_metric_simulated - product_cor_metric_observed):
min_distance_between_product_cor_metrics = abs(product_cor_metric_simulated - product_cor_metric_observed)
additional_condition = correlation_y_observed / correlation_t_observed
if ((c1 / c2) <= (additional_condition + 0.3 * additional_condition)) and (
(c1 / c2) >= (additional_condition - 0.3 * additional_condition)
): # choose minimum positive value
c1_final = c1
c2_final = c2
i = i * 1.5
"""#closed form solution
print("c_star_max before closed form", c_star_max)
if max_correlation_with_t == -1000:
c2 = 0
c1 = c_star_max
else:
additional_condition = abs(max_correlation_with_y/max_correlation_with_t)
print("additional_condition", additional_condition)
c2 = math.sqrt(c_star_max/additional_condition)
c1 = c_star_max/c2"""
final_U = _generate_confounder_from_residuals(c1_final, c2_final, d_y, d_t, X)
return final_U
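# A minimal usage sketch (illustrative; `df` holds the observed data, and treatment/outcome
# names are passed as single-element lists, as the indexing above expects):
#
#   u_sim = include_simulated_confounder(
#       data=df,
#       treatment_name=["v0"],
#       outcome_name=["y"],
#       kappa_t=None,
#       kappa_y=None,
#   )
#   df["u_sim"] = u_sim  # the simulated confounder can then be used in downstream analysis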
def _generate_confounder_from_residuals(c1, c2, d_y, d_t, X):
"""
This function takes the residuals from the treatment and outcome model and their coefficients and simulates the intermediate random variable U by taking
the row wise normal distribution corresponding to each residual value and then debiasing the intermediate variable to get the final variable.
:param c1: coefficient to the residual from the outcome model
:type c1: float
:param c2: coefficient to the residual from the treatment model
:type c2: float
:param d_y: residuals from the outcome model
:type d_y: list
:param d_t: residuals from the treatment model
:type d_t: list
:returns: The simulated values of the unobserved confounder based on the data
:rtype: pandas.core.series.Series
"""
U = []
for j in range(len(d_t)):
simulated_variable_mean = c1 * d_y[j] + c2 * d_t[j]
simulated_variable_stddev = 1
U.append(np.random.normal(simulated_variable_mean, simulated_variable_stddev, 1))
U = np.array(U)
model = sm.OLS(U, X)
results = model.fit()
U = U.reshape(
-1,
)
final_U = U - results.fittedvalues.values
final_U = pd.Series(final_U)
return final_U
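# Note: the loop above is equivalent in distribution to a single vectorized draw, e.g.
#   U = np.random.normal(c1 * np.array(d_y) + c2 * np.array(d_t), 1.0)
# (left as a comment here so the module's behavior is unchanged).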
def sensitivity_linear_partial_r2(
data: pd.DataFrame,
estimate: CausalEstimate,
treatment_name: str,
frac_strength_treatment: float = 1.0,
frac_strength_outcome: float = 1.0,
percent_change_estimate: float = 1.0,
benchmark_common_causes: Optional[List[str]] = None,
significance_level: Optional[float] = None,
null_hypothesis_effect: Optional[float] = None,
plot_estimate: bool = True,
) -> LinearSensitivityAnalyzer:
"""Add an unobserved confounder for refutation using Linear partial R2 methond (Sensitivity Analysis for linear models).
:param data: pd.DataFrame: Data to run the refutation
:param estimate: CausalEstimate: Estimate to run the refutation
:param treatment_name: str: Name of the treatment
:param frac_strength_treatment: float: This parameter decides the effect strength of the simulated confounder as a fraction of the effect strength of observed confounders on treatment. Defaults to 1.
:param frac_strength_outcome: float: This parameter decides the effect strength of the simulated confounder as a fraction of the effect strength of observed confounders on outcome. Defaults to 1.
:param percent_change_estimate: It is the percentage of reduction of treatment estimate that could alter the results (default = 1).
if percent_change_estimate = 1, the robustness value describes the strength of association of confounders with treatment and outcome in order to reduce the estimate by 100% i.e bring it down to 0. (relevant only for Linear Sensitivity Analysis, ignore for rest)
:param benchmark_common_causes: names of variables for bounding strength of confounders. (relevant only for partial-r2 based simulation methods)
:param significance_level: confidence interval for statistical inference (default = 0.05). (relevant only for partial-r2 based simulation methods)
:param null_hypothesis_effect: assumed effect under the null hypothesis. (relevant only for linear-partial-R2, ignore for rest)
:param plot_estimate: Generate contour plot for estimate while performing sensitivity analysis. (default = True).
(relevant only for partial-r2 based simulation methods)
"""
if not (isinstance(estimate.estimator, LinearRegressionEstimator)):
raise NotImplementedError("Currently only LinearRegressionEstimator is supported for Sensitivity Analysis")
if len(estimate.estimator._effect_modifier_names) > 0:
raise NotImplementedError("The current implementation does not support effect modifiers")
if frac_strength_outcome == 1:
frac_strength_outcome = frac_strength_treatment
analyzer = LinearSensitivityAnalyzer(
estimator=estimate.estimator,
data=data,
treatment_name=treatment_name,
percent_change_estimate=percent_change_estimate,
significance_level=significance_level,
benchmark_common_causes=benchmark_common_causes,
null_hypothesis_effect=null_hypothesis_effect,
frac_strength_treatment=frac_strength_treatment,
frac_strength_outcome=frac_strength_outcome,
common_causes_order=estimate.estimator._observed_common_causes.columns,
)
analyzer.check_sensitivity(plot=plot_estimate)
return analyzer
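# A minimal usage sketch (illustrative; `df` and `estimate` are assumed to come from an
# earlier CausalModel workflow using a LinearRegressionEstimator):
#
#   analyzer = sensitivity_linear_partial_r2(
#       data=df,
#       estimate=estimate,
#       treatment_name="v0",
#       benchmark_common_causes=["W3"],
#       significance_level=0.05,
#   )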
def sensitivity_non_parametric_partial_r2(
estimate: CausalEstimate,
kappa_t: Optional[Union[float, np.ndarray]] = None,
kappa_y: Optional[Union[float, np.ndarray]] = None,
frac_strength_treatment: float = 1.0,
frac_strength_outcome: float = 1.0,
benchmark_common_causes: Optional[List[str]] = None,
plot_estimate: bool = True,
alpha_s_estimator_list: Optional[List] = None,
alpha_s_estimator_param_list: Optional[List[Dict]] = None,
g_s_estimator_list: Optional[List] = None,
g_s_estimator_param_list: Optional[List[Dict]] = None,
plugin_reisz: bool = False,
) -> Union[PartialLinearSensitivityAnalyzer, NonParametricSensitivityAnalyzer]:
"""Add an unobserved confounder for refutation using Non-parametric partial R2 methond (Sensitivity Analysis for non-parametric models).
:param estimate: CausalEstimate: Estimate to run the refutation
:param kappa_t: float, numpy.ndarray: Partial R2 of the unobserved confounder wrt the treatment conditioned on the observed confounders. Only in the case of general non-parametric-partial-R2, it is the fraction of variance in the reisz representer that is explained by the unobserved confounder; specifically (1-r), where r is the ratio of variance of reisz representer, alpha^2, based on observed confounders and that based on all confounders.
:param kappa_y: float, numpy.ndarray: Partial R2 of the unobserved confounder wrt the outcome conditioned on the treatment and observed confounders.
:param frac_strength_treatment: float: This parameter decides the effect strength of the simulated confounder as a fraction of the effect strength of observed confounders on treatment. Defaults to 1.
:param frac_strength_outcome: float: This parameter decides the effect strength of the simulated confounder as a fraction of the effect strength of observed confounders on outcome. Defaults to 1.
:param benchmark_common_causes: names of variables for bounding strength of confounders. (relevant only for partial-r2 based simulation methods)
:param plot_estimate: Generate contour plot for estimate while performing sensitivity analysis. (default = True).
(relevant only for partial-r2 based simulation methods)
:param alpha_s_estimator_list: list of estimator objects for estimating alpha_s. These objects should have fit() and predict() methods (relevant only for non-parametric-partial-R2 method)
:param alpha_s_estimator_param_list: list of dictionaries with parameters for finding alpha_s. (relevant only for non-parametric-partial-R2 simulation method)
:param g_s_estimator_list: list of estimator objects for finding g_s. These objects should have fit() and predict() functions implemented. (relevant only for non-parametric-partial-R2 simulation method)
:param g_s_estimator_param_list: list of dictionaries with parameters for tuning respective estimators in "g_s_estimator_list". The order of the dictionaries in the list should be consistent with the estimator objects order in "g_s_estimator_list". (relevant only for non-parametric-partial-R2 simulation method)
:param plugin_reisz: bool: Flag on whether to use the plugin estimator or the nonparametric estimator for the reisz representer function (alpha_s).
"""
import dowhy.causal_estimators.econml
# If the estimator used is LinearDML, partially linear sensitivity analysis will be automatically chosen
if isinstance(estimate.estimator, dowhy.causal_estimators.econml.Econml):
if estimate.estimator._econml_methodname == "econml.dml.LinearDML":
analyzer = PartialLinearSensitivityAnalyzer(
estimator=estimate._estimator_object,
observed_common_causes=estimate.estimator._observed_common_causes,
treatment=estimate.estimator._treatment,
outcome=estimate.estimator._outcome,
alpha_s_estimator_param_list=alpha_s_estimator_param_list,
g_s_estimator_list=g_s_estimator_list,
g_s_estimator_param_list=g_s_estimator_param_list,
effect_strength_treatment=kappa_t,
effect_strength_outcome=kappa_y,
benchmark_common_causes=benchmark_common_causes,
frac_strength_treatment=frac_strength_treatment,
frac_strength_outcome=frac_strength_outcome,
)
analyzer.check_sensitivity(plot=plot_estimate)
return analyzer
analyzer = NonParametricSensitivityAnalyzer(
estimator=estimate.estimator,
observed_common_causes=estimate.estimator._observed_common_causes,
treatment=estimate.estimator._treatment,
outcome=estimate.estimator._outcome,
alpha_s_estimator_list=alpha_s_estimator_list,
alpha_s_estimator_param_list=alpha_s_estimator_param_list,
g_s_estimator_list=g_s_estimator_list,
g_s_estimator_param_list=g_s_estimator_param_list,
effect_strength_treatment=kappa_t,
effect_strength_outcome=kappa_y,
benchmark_common_causes=benchmark_common_causes,
frac_strength_treatment=frac_strength_treatment,
frac_strength_outcome=frac_strength_outcome,
theta_s=estimate.value,
plugin_reisz=plugin_reisz,
)
analyzer.check_sensitivity(plot=plot_estimate)
return analyzer
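# A minimal usage sketch (illustrative; the effect-strength values are examples only):
#
#   analyzer = sensitivity_non_parametric_partial_r2(
#       estimate=estimate,
#       kappa_t=0.05,
#       kappa_y=0.05,
#       benchmark_common_causes=["W1"],
#       plugin_reisz=True,
#   )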
def sensitivity_e_value(
data: pd.DataFrame,
target_estimand: IdentifiedEstimand,
estimate: CausalEstimate,
treatment_name: List[str],
outcome_name: List[str],
plot_estimate: bool = True,
) -> EValueSensitivityAnalyzer:
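"""Add an unobserved confounder for refutation using the E-value method (sensitivity analysis for regression estimators).

:param data: pd.DataFrame: Data to run the refutation
:param target_estimand: IdentifiedEstimand: Identified estimand to run the refutation
:param estimate: CausalEstimate: Estimate to run the refutation
:param treatment_name: list: Name(s) of the treatment; only the first element is used
:param outcome_name: list: Name(s) of the outcome; only the first element is used
:param plot_estimate: Generate a plot while performing sensitivity analysis. (default = True)
"""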
if not isinstance(estimate.estimator, RegressionEstimator):
raise NotImplementedError("E-Value sensitivity analysis is currently only implemented RegressionEstimator.")
if len(estimate.estimator._effect_modifier_names) > 0:
raise NotImplementedError("The current implementation does not support effect modifiers")
analyzer = EValueSensitivityAnalyzer(
estimate=estimate,
estimand=target_estimand,
data=data,
treatment_name=treatment_name[0],
outcome_name=outcome_name[0],
)
analyzer.check_sensitivity(plot=plot_estimate)
return analyzer
def sensitivity_simulation(
data: pd.DataFrame,
target_estimand: IdentifiedEstimand,
estimate: CausalEstimate,
treatment_name: str,
outcome_name: str,
kappa_t: Optional[Union[float, np.ndarray]] = None,
kappa_y: Optional[Union[float, np.ndarray]] = None,
confounders_effect_on_treatment: str = "binary_flip",
confounders_effect_on_outcome: str = "linear",
frac_strength_treatment: float = 1.0,
frac_strength_outcome: float = 1.0,
plotmethod: Optional[str] = None,
show_progress_bar=False,
**_,
) -> CausalRefutation:
"""
This function attempts to add an unobserved common cause to the outcome and the treatment. At present, the behavior is implemented for one-dimensional continuous
and binary variables. The function accepts either single values or ranges of values for the confounder effect strengths, and chooses its course of action based on
the type of the inputs.
:param data: pd.DataFrame: Data to run the refutation
:param target_estimand: IdentifiedEstimand: Identified estimand to run the refutation
:param estimate: CausalEstimate: Estimate to run the refutation
:param treatment_name: str: Name of the treatment
:param outcome_name: str: Name of the outcome
:param kappa_t: float, numpy.ndarray: Strength of the confounder's effect on treatment. When confounders_effect_on_treatment is linear, it is the regression coefficient. When the confounders_effect_on_treatment is binary flip, it is the probability with which effect of unobserved confounder can invert the value of the treatment.
:param kappa_y: float, numpy.ndarray: Strength of the confounder's effect on outcome. Its interpretation depends on confounders_effect_on_outcome and the simulation_method. When simulation_method is direct-simulation, for a linear effect it behaves like the regression coefficient and for a binary flip, it is the probability with which it can invert the value of the outcome.
:param confounders_effect_on_treatment: str : The type of effect on the treatment due to the unobserved confounder. Possible values are ['binary_flip', 'linear']
:param confounders_effect_on_outcome: str : The type of effect on the outcome due to the unobserved confounder. Possible values are ['binary_flip', 'linear']
:param frac_strength_treatment: float: This parameter decides the effect strength of the simulated confounder as a fraction of the effect strength of observed confounders on treatment. Defaults to 1.
:param frac_strength_outcome: float: This parameter decides the effect strength of the simulated confounder as a fraction of the effect strength of observed confounders on outcome. Defaults to 1.
:param plotmethod: string: Type of plot to be shown. If None (the default), no plot is generated. This parameter is used only when more than one treatment confounder effect value or outcome confounder effect value is provided. Supported values are "contour", "colormesh" when more than one value is provided for both confounder effect value parameters; "line" when provided for only one of them.
:return: CausalRefuter: An object that contains the estimated effect and a new effect and the name of the refutation used.
"""
if kappa_t is None:
kappa_t = _infer_default_kappa_t(
data, target_estimand, treatment_name, confounders_effect_on_treatment, frac_strength_treatment
)
if kappa_y is None:
kappa_y = _infer_default_kappa_y(
data, target_estimand, outcome_name, confounders_effect_on_outcome, frac_strength_outcome
)
if not isinstance(kappa_t, (list, np.ndarray)) and not isinstance(
kappa_y, (list, np.ndarray)
): # Deal with single value inputs
new_data = copy.deepcopy(data)
new_data = _include_confounders_effect(
data,
new_data,
confounders_effect_on_treatment,
treatment_name,
kappa_t,
confounders_effect_on_outcome,
outcome_name,
kappa_y,
)
new_estimator = CausalEstimator.get_estimator_object(new_data, target_estimand, estimate)
new_effect = new_estimator.estimate_effect()
refute = CausalRefutation(
estimate.value, new_effect.value, refutation_type="Refute: Add an Unobserved Common Cause"
)
refute.new_effect_array = np.array(new_effect.value)
refute.new_effect = new_effect.value
return refute
else: # Deal with multiple value inputs
if isinstance(kappa_t, (list, np.ndarray)) and isinstance(
kappa_y, (list, np.ndarray)
): # Deal with range inputs
# Build a 2D matrix of results over the grid of kappa_t (rows) and kappa_y (columns)
results_matrix = np.zeros((len(kappa_t), len(kappa_y)))  # Matrix to hold all the results
orig_data = copy.deepcopy(data)
for i in tqdm(
range(len(kappa_t)),
colour=CausalRefuter.PROGRESS_BAR_COLOR,
disable=not show_progress_bar,
desc="Refuting Estimates: ",
):
for j in range(len(kappa_y)):
new_data = _include_confounders_effect(
data,
orig_data,
confounders_effect_on_treatment,
treatment_name,
kappa_t[i],
confounders_effect_on_outcome,
outcome_name,
kappa_y[j],
)
new_estimator = CausalEstimator.get_estimator_object(new_data, target_estimand, estimate)
new_effect = new_estimator.estimate_effect()
refute = CausalRefutation(
estimate.value,
new_effect.value,
refutation_type="Refute: Add an Unobserved Common Cause",
)
results_matrix[i][j] = refute.new_effect # Populate the results
refute.new_effect_array = results_matrix
refute.new_effect = (np.min(results_matrix), np.max(results_matrix))
# Store the values into the refute object
if plotmethod is None:
return refute
import matplotlib
import matplotlib.pyplot as plt
fig = plt.figure(figsize=(6, 5))
left, bottom, width, height = 0.1, 0.1, 0.8, 0.8
ax = fig.add_axes([left, bottom, width, height])
oe = estimate.value
contour_levels = [oe / 4.0, oe / 2.0, (3.0 / 4) * oe, oe]
contour_levels.extend([0, np.min(results_matrix), np.max(results_matrix)])
if plotmethod == "contour":
cp = plt.contourf(kappa_y, kappa_t, results_matrix, levels=sorted(contour_levels))
# Adding a label on the contour line for the original estimate
fmt = {}
trueeffect_index = np.where(cp.levels == oe)[0][0]
fmt[cp.levels[trueeffect_index]] = "Estimated Effect"
# Label every other level using strings
plt.clabel(cp, [cp.levels[trueeffect_index]], inline=True, fmt=fmt)
plt.colorbar(cp)
elif plotmethod == "colormesh":
cp = plt.pcolormesh(kappa_y, kappa_t, results_matrix, shading="nearest")
plt.colorbar(cp, ticks=contour_levels)
ax.yaxis.set_ticks(kappa_t)
ax.xaxis.set_ticks(kappa_y)
plt.xticks(rotation=45)
ax.set_title("Effect of Unobserved Common Cause")
ax.set_ylabel("Value of Linear Constant on Treatment")
ax.set_xlabel("Value of Linear Constant on Outcome")
plt.show()
return refute
elif isinstance(kappa_t, (list, np.ndarray)):
outcomes = np.random.rand(len(kappa_t))
orig_data = copy.deepcopy(data)
for i in tqdm(
range(0, len(kappa_t)),
colour=CausalRefuter.PROGRESS_BAR_COLOR,
disable=not show_progress_bar,
desc="Refuting Estimates: ",
):
new_data = _include_confounders_effect(
data,
orig_data,
confounders_effect_on_treatment,
treatment_name,
kappa_t[i],
confounders_effect_on_outcome,
outcome_name,
kappa_y,
)
new_estimator = CausalEstimator.get_estimator_object(new_data, target_estimand, estimate)
new_effect = new_estimator.estimate_effect()
refute = CausalRefutation(
estimate.value, new_effect.value, refutation_type="Refute: Add an Unobserved Common Cause"
)
logger.debug(refute)
outcomes[i] = refute.new_effect # Populate the results
refute.new_effect_array = outcomes
refute.new_effect = (np.min(outcomes), np.max(outcomes))
if plotmethod is None:
return refute
import matplotlib
import matplotlib.pyplot as plt
fig = plt.figure(figsize=(6, 5))
left, bottom, width, height = 0.1, 0.1, 0.8, 0.8
ax = fig.add_axes([left, bottom, width, height])
plt.plot(kappa_t, outcomes)
plt.axhline(estimate.value, linestyle="--", color="gray")
ax.set_title("Effect of Unobserved Common Cause")
ax.set_xlabel("Value of Linear Constant on Treatment")
ax.set_ylabel("Estimated Effect after adding the common cause")
plt.show()
return refute
elif isinstance(kappa_y, (list, np.ndarray)):
outcomes = np.random.rand(len(kappa_y))
orig_data = copy.deepcopy(data)
for i in tqdm(
range(0, len(kappa_y)),
colour=CausalRefuter.PROGRESS_BAR_COLOR,
disable=not show_progress_bar,
desc="Refuting Estimates: ",
):
new_data = _include_confounders_effect(
data,
orig_data,
confounders_effect_on_treatment,
treatment_name,
kappa_t,
confounders_effect_on_outcome,
outcome_name,
kappa_y[i],
)
new_estimator = CausalEstimator.get_estimator_object(new_data, target_estimand, estimate)
new_effect = new_estimator.estimate_effect()
refute = CausalRefutation(
estimate.value, new_effect.value, refutation_type="Refute: Add an Unobserved Common Cause"
)
logger.debug(refute)
outcomes[i] = refute.new_effect # Populate the results
refute.new_effect_array = outcomes
refute.new_effect = (np.min(outcomes), np.max(outcomes))
if plotmethod is None:
return refute
import matplotlib
import matplotlib.pyplot as plt
fig = plt.figure(figsize=(6, 5))
left, bottom, width, height = 0.1, 0.1, 0.8, 0.8
ax = fig.add_axes([left, bottom, width, height])
plt.plot(kappa_y, outcomes)
plt.axhline(estimate.value, linestyle="--", color="gray")
ax.set_title("Effect of Unobserved Common Cause")
ax.set_xlabel("Value of Linear Constant on Outcome")
ax.set_ylabel("Estimated Effect after adding the common cause")
plt.show()
return refute
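# A minimal usage sketch for the range case (illustrative): passing arrays for kappa_t and
# kappa_y sweeps a grid of confounder strengths and, with plotmethod="colormesh", plots it:
#
#   refute = sensitivity_simulation(
#       data=df,
#       target_estimand=estimand,
#       estimate=estimate,
#       treatment_name=["v0"],
#       outcome_name=["y"],
#       kappa_t=np.arange(0.0, 0.5, 0.05),
#       kappa_y=np.arange(0.0, 0.5, 0.05),
#       plotmethod="colormesh",
#   )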
| import copy
import logging
import math
from typing import Dict, List, Optional, Union
import numpy as np
import pandas as pd
import scipy.stats
import statsmodels.api as sm
from sklearn.linear_model import LogisticRegression
from sklearn.preprocessing import StandardScaler
from tqdm.auto import tqdm
from dowhy.causal_estimator import CausalEstimate, CausalEstimator
from dowhy.causal_estimators.linear_regression_estimator import LinearRegressionEstimator
from dowhy.causal_estimators.regression_estimator import RegressionEstimator
from dowhy.causal_identifier.identified_estimand import IdentifiedEstimand
from dowhy.causal_refuter import CausalRefutation, CausalRefuter, choose_variables
from dowhy.causal_refuters.evalue_sensitivity_analyzer import EValueSensitivityAnalyzer
from dowhy.causal_refuters.linear_sensitivity_analyzer import LinearSensitivityAnalyzer
from dowhy.causal_refuters.non_parametric_sensitivity_analyzer import NonParametricSensitivityAnalyzer
from dowhy.causal_refuters.partial_linear_sensitivity_analyzer import PartialLinearSensitivityAnalyzer
logger = logging.getLogger(__name__)
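# Named defaults for include_simulated_confounder(), so callers and tests can reference
# them instead of the magic numbers 0.1 and 1000.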
DEFAULT_CONVERGENCE_THRESHOLD = 0.1
DEFAULT_C_STAR_MAX = 1000
class AddUnobservedCommonCause(CausalRefuter):
"""Add an unobserved confounder for refutation.
AddUnobservedCommonCause class supports four methods:
1) Simulation of an unobserved confounder
2) Linear partial R2 : Sensitivity Analysis for linear models.
3) Non-Parametric partial R2 based : Sensitivity Analysis for non-parametric models.
4) E-value : Sensitivity Analysis for regression estimators based on the E-value of the estimate.
Supports additional parameters that can be specified in the refute_estimate() method.
"""
def __init__(self, *args, **kwargs):
"""
Initialize the parameters required for the refuter.
For direct-simulation, if effect_strength_on_treatment or effect_strength_on_outcome is not
given, it is calculated automatically as a range between the
minimum and maximum effect strength of observed confounders on treatment
and outcome respectively.
:param simulation_method: The method to use for simulating effect of unobserved confounder. Possible values are ["direct-simulation", "linear-partial-R2", "non-parametric-partial-R2", "e-value"].
:param confounders_effect_on_treatment: str : The type of effect on the treatment due to the unobserved confounder. Possible values are ['binary_flip', 'linear']
:param confounders_effect_on_outcome: str : The type of effect on the outcome due to the unobserved confounder. Possible values are ['binary_flip', 'linear']
:param effect_strength_on_treatment: float, numpy.ndarray: [Used when simulation_method="direct-simulation"] Strength of the confounder's effect on treatment. When confounders_effect_on_treatment is linear, it is the regression coefficient. When the confounders_effect_on_treatment is binary flip, it is the probability with which effect of unobserved confounder can invert the value of the treatment.
:param effect_strength_on_outcome: float, numpy.ndarray: Strength of the confounder's effect on outcome. Its interpretation depends on confounders_effect_on_outcome and the simulation_method. When simulation_method is direct-simulation, for a linear effect it behaves like the regression coefficient and for a binary flip, it is the probability with which it can invert the value of the outcome.
:param partial_r2_confounder_treatment: float, numpy.ndarray: [Used when simulation_method is linear-partial-R2 or non-parametric-partial-R2] Partial R2 of the unobserved confounder wrt the treatment conditioned on the observed confounders. Only in the case of general non-parametric-partial-R2, it is the fraction of variance in the reisz representer that is explained by the unobserved confounder; specifically (1-r), where r is the ratio of variance of reisz representer, alpha^2, based on observed confounders and that based on all confounders.
:param partial_r2_confounder_outcome: float, numpy.ndarray: [Used when simulation_method is linear-partial-R2 or non-parametric-partial-R2] Partial R2 of the unobserved confounder wrt the outcome conditioned on the treatment and observed confounders.
:param frac_strength_treatment: float: This parameter decides the effect strength of the simulated confounder as a fraction of the effect strength of observed confounders on treatment. Defaults to 1.
:param frac_strength_outcome: float: This parameter decides the effect strength of the simulated confounder as a fraction of the effect strength of observed confounders on outcome. Defaults to 1.
:param plotmethod: string: Type of plot to be shown. If None, no plot is generated. This parameter is used only when more than one treatment confounder effect value or outcome confounder effect value is provided. Default is "colormesh". Supported values are "contour", "colormesh" when more than one value is provided for both confounder effect value parameters; "line" when provided for only one of them.
:param percent_change_estimate: It is the percentage of reduction of treatment estimate that could alter the results (default = 1).
if percent_change_estimate = 1, the robustness value describes the strength of association of confounders with treatment and outcome in order to reduce the estimate by 100% i.e bring it down to 0. (relevant only for Linear Sensitivity Analysis, ignore for rest)
:param confounder_increases_estimate: True implies that confounder increases the absolute value of estimate and vice versa. (Default = False). (relevant only for Linear Sensitivity Analysis, ignore for rest)
:param benchmark_common_causes: names of variables for bounding strength of confounders. (relevant only for partial-r2 based simulation methods)
:param significance_level: confidence interval for statistical inference(default = 0.05). (relevant only for partial-r2 based simulation methods)
:param null_hypothesis_effect: assumed effect under the null hypothesis. (relevant only for linear-partial-R2, ignore for rest)
:param plot_estimate: Generate contour plot for estimate while performing sensitivity analysis. (default = True).
(relevant only for partial-r2 based simulation methods)
:param num_splits: number of splits for cross validation. (default = 5). (relevant only for non-parametric-partial-R2 simulation method)
:param shuffle_data : shuffle data or not before splitting into folds (default = False). (relevant only for non-parametric-partial-R2 simulation method)
:param shuffle_random_seed: seed for randomly shuffling data. (relevant only for non-parametric-partial-R2 simulation method)
:param alpha_s_estimator_param_list: list of dictionaries with parameters for finding alpha_s. (relevant only for non-parametric-partial-R2 simulation method)
:param g_s_estimator_list: list of estimator objects for finding g_s. These objects should have fit() and predict() functions implemented. (relevant only for non-parametric-partial-R2 simulation method)
:param g_s_estimator_param_list: list of dictionaries with parameters for tuning respective estimators in "g_s_estimator_list". The order of the dictionaries in the list should be consistent with the estimator objects order in "g_s_estimator_list". (relevant only for non-parametric-partial-R2 simulation method)
"""
super().__init__(*args, **kwargs)
self.simulation_method = kwargs["simulation_method"] if "simulation_method" in kwargs else "direct-simulation"
self.effect_on_t = (
kwargs["confounders_effect_on_treatment"] if "confounders_effect_on_treatment" in kwargs else "binary_flip"
)
self.effect_on_y = (
kwargs["confounders_effect_on_outcome"] if "confounders_effect_on_outcome" in kwargs else "linear"
)
if self.simulation_method == "direct-simulation":
self.kappa_t = kwargs["effect_strength_on_treatment"] if "effect_strength_on_treatment" in kwargs else None
self.kappa_y = kwargs["effect_strength_on_outcome"] if "effect_strength_on_outcome" in kwargs else None
elif self.simulation_method in ["linear-partial-R2", "non-parametric-partial-R2"]:
self.kappa_t = (
kwargs["partial_r2_confounder_treatment"] if "partial_r2_confounder_treatment" in kwargs else None
)
self.kappa_y = (
kwargs["partial_r2_confounder_outcome"] if "partial_r2_confounder_outcome" in kwargs else None
)
elif self.simulation_method == "e-value":
pass
else:
raise ValueError(
"simulation method is not supported. Try direct-simulation, linear-partial-R2, non-parametric-partial-R2, or e-value"
)
self.frac_strength_treatment = (
kwargs["effect_fraction_on_treatment"] if "effect_fraction_on_treatment" in kwargs else 1
)
self.frac_strength_outcome = (
kwargs["effect_fraction_on_outcome"] if "effect_fraction_on_outcome" in kwargs else 1
)
self.plotmethod = kwargs["plotmethod"] if "plotmethod" in kwargs else "colormesh"
self.percent_change_estimate = kwargs["percent_change_estimate"] if "percent_change_estimate" in kwargs else 1.0
self.significance_level = kwargs["significance_level"] if "significance_level" in kwargs else 0.05
self.confounder_increases_estimate = (
kwargs["confounder_increases_estimate"] if "confounder_increases_estimate" in kwargs else False
)
self.benchmark_common_causes = (
kwargs["benchmark_common_causes"] if "benchmark_common_causes" in kwargs else None
)
self.null_hypothesis_effect = kwargs["null_hypothesis_effect"] if "null_hypothesis_effect" in kwargs else 0
self.plot_estimate = kwargs["plot_estimate"] if "plot_estimate" in kwargs else True
self.num_splits = kwargs["num_splits"] if "num_splits" in kwargs else 5
self.shuffle_data = kwargs["shuffle_data"] if "shuffle_data" in kwargs else False
self.shuffle_random_seed = kwargs["shuffle_random_seed"] if "shuffle_random_seed" in kwargs else None
self.alpha_s_estimator_param_list = (
kwargs["alpha_s_estimator_param_list"] if "alpha_s_estimator_param_list" in kwargs else None
)
self.alpha_s_estimator_list = kwargs["alpha_s_estimator_list"] if "alpha_s_estimator_list" in kwargs else None
self.g_s_estimator_list = kwargs["g_s_estimator_list"] if "g_s_estimator_list" in kwargs else None
self.g_s_estimator_param_list = (
kwargs["g_s_estimator_param_list"] if "g_s_estimator_param_list" in kwargs else None
)
self.plugin_reisz = kwargs["plugin_reisz"] if "plugin_reisz" in kwargs else False
self.logger = logging.getLogger(__name__)
def refute_estimate(self, show_progress_bar=False):
if self.simulation_method == "linear-partial-R2":
return sensitivity_linear_partial_r2(
self._data,
self._estimate,
self._treatment_name,
self.frac_strength_treatment,
self.frac_strength_outcome,
self.percent_change_estimate,
self.benchmark_common_causes,
self.significance_level,
self.null_hypothesis_effect,
self.plot_estimate,
)
elif self.simulation_method == "non-parametric-partial-R2":
return sensitivity_non_parametric_partial_r2(
self._estimate,
self.kappa_t,
self.kappa_y,
self.frac_strength_treatment,
self.frac_strength_outcome,
self.benchmark_common_causes,
self.plot_estimate,
self.alpha_s_estimator_list,
self.alpha_s_estimator_param_list,
self.g_s_estimator_list,
self.g_s_estimator_param_list,
self.plugin_reisz,
)
elif self.simulation_method == "e-value":
return sensitivity_e_value(
self._data,
self._target_estimand,
self._estimate,
self._treatment_name,
self._outcome_name,
self.plot_estimate,
)
elif self.simulation_method == "direct-simulation":
refute = sensitivity_simulation(
self._data,
self._target_estimand,
self._estimate,
self._treatment_name,
self._outcome_name,
self.kappa_t,
self.kappa_y,
self.effect_on_t,
self.effect_on_y,
self.frac_strength_treatment,
self.frac_strength_outcome,
self.plotmethod,
show_progress_bar,
)
refute.add_refuter(self)
return refute
def include_simulated_confounder(
self, convergence_threshold=DEFAULT_CONVERGENCE_THRESHOLD, c_star_max=DEFAULT_C_STAR_MAX
):
return include_simulated_confounder(
self._data,
self._treatment_name,
self._outcome_name,
self.kappa_t,
self.kappa_y,
self._variables_of_interest,
convergence_threshold,
c_star_max,
)
def _infer_default_kappa_t(
data: pd.DataFrame,
target_estimand: IdentifiedEstimand,
treatment_name: List[str],
effect_on_t: str,
frac_strength_treatment: float,
len_kappa_t: int = 10,
):
"""Infer default effect strength of simulated confounder on treatment."""
observed_common_causes_names = target_estimand.get_backdoor_variables()
if len(observed_common_causes_names) > 0:
observed_common_causes = data[observed_common_causes_names]
observed_common_causes = pd.get_dummies(observed_common_causes, drop_first=True)
else:
raise ValueError(
"There needs to be at least one common cause to"
+ "automatically compute the default value of kappa_t."
+ " Provide a value for kappa_t"
)
t = data[treatment_name]
# Standardizing the data
observed_common_causes = StandardScaler().fit_transform(observed_common_causes)
if effect_on_t == "binary_flip":
# Fit a model containing all confounders and compare predictions
# using all features compared to all features except a given
# confounder.
tmodel = LogisticRegression().fit(observed_common_causes, t)
tpred = tmodel.predict(observed_common_causes).astype(int)
flips = []
for i in range(observed_common_causes.shape[1]):
oldval = np.copy(observed_common_causes[:, i])
observed_common_causes[:, i] = 0
tcap = tmodel.predict(observed_common_causes).astype(int)
observed_common_causes[:, i] = oldval
flips.append(np.sum(abs(tcap - tpred)) / tpred.shape[0])
min_coeff, max_coeff = min(flips), max(flips)
elif effect_on_t == "linear":
# Estimating the regression coefficient from standardized features to t
corrcoef_var_t = np.corrcoef(observed_common_causes, t, rowvar=False)[-1, :-1]
std_dev_t = np.std(t)[0]
max_coeff = max(corrcoef_var_t) * std_dev_t
min_coeff = min(corrcoef_var_t) * std_dev_t
else:
raise NotImplementedError("'" + effect_on_t + "' method not supported for confounders' effect on treatment")
min_coeff, max_coeff = _compute_min_max_coeff(min_coeff, max_coeff, frac_strength_treatment)
# By default, return a range of 10 values
# for the effect of the unobserved confounder, spanning (min_coeff, max_coeff)
step = (max_coeff - min_coeff) / len_kappa_t
logger.info("(Min, Max) kappa_t for observed common causes, ({0}, {1})".format(min_coeff, max_coeff))
if np.equal(max_coeff, min_coeff):
return max_coeff
else:
return np.arange(min_coeff, max_coeff, step)
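# Worked example (hypothetical numbers): if the scaled effects of the observed
# confounders on the treatment span (0.1, 0.6) and frac_strength_treatment=1,
# the default kappa_t becomes np.arange(0.1, 0.6, 0.05), i.e. ten candidate
# strengths for the simulated confounder.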
def _compute_min_max_coeff(min_coeff: float, max_coeff: float, effect_strength_fraction: np.ndarray):
max_coeff = effect_strength_fraction * max_coeff
min_coeff = effect_strength_fraction * min_coeff
return min_coeff, max_coeff
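# Example: with effect_strength_fraction=0.5, (min_coeff, max_coeff) = (0.2, 0.8)
# is scaled to (0.1, 0.4).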
def _infer_default_kappa_y(
data: pd.DataFrame,
target_estimand: IdentifiedEstimand,
outcome_name: List[str],
effect_on_y: str,
frac_strength_outcome: float,
len_kappa_y: int = 10,
):
"""Infer default effect strength of simulated confounder on treatment."""
observed_common_causes_names = target_estimand.get_backdoor_variables()
if len(observed_common_causes_names) > 0:
observed_common_causes = data[observed_common_causes_names]
observed_common_causes = pd.get_dummies(observed_common_causes, drop_first=True)
else:
raise ValueError(
"There needs to be at least one common cause to"
+ "automatically compute the default value of kappa_y."
+ " Provide a value for kappa_y"
)
y = data[outcome_name]
# Standardizing the data
observed_common_causes = StandardScaler().fit_transform(observed_common_causes)
if effect_on_y == "binary_flip":
# Fit a model containing all confounders and compare predictions
# using all features compared to all features except a given
# confounder.
ymodel = LogisticRegression().fit(observed_common_causes, y)
ypred = ymodel.predict(observed_common_causes).astype(int)
flips = []
for i in range(observed_common_causes.shape[1]):
oldval = np.copy(observed_common_causes[:, i])
observed_common_causes[:, i] = 0
ycap = ymodel.predict(observed_common_causes).astype(int)
observed_common_causes[:, i] = oldval
flips.append(np.sum(abs(ycap - ypred)) / ypred.shape[0])
min_coeff, max_coeff = min(flips), max(flips)
elif effect_on_y == "linear":
corrcoef_var_y = np.corrcoef(observed_common_causes, y, rowvar=False)[-1, :-1]
std_dev_y = np.std(y)[0]
max_coeff = max(corrcoef_var_y) * std_dev_y
min_coeff = min(corrcoef_var_y) * std_dev_y
else:
raise NotImplementedError("'" + effect_on_y + "' method not supported for confounders' effect on outcome")
min_coeff, max_coeff = _compute_min_max_coeff(min_coeff, max_coeff, frac_strength_outcome)
# By default, return a range of 10 values
# for the effect of the unobserved confounder, spanning (min_coeff, max_coeff)
step = (max_coeff - min_coeff) / len_kappa_y
logger.info("(Min, Max) kappa_y for observed common causes, ({0}, {1})".format(min_coeff, max_coeff))
if np.equal(max_coeff, min_coeff):
return max_coeff
else:
return np.arange(min_coeff, max_coeff, step)
def _include_confounders_effect(
data: pd.DataFrame,
new_data: pd.DataFrame,
effect_on_t: str,
treatment_name: str,
kappa_t: float,
effect_on_y: str,
outcome_name: str,
kappa_y: float,
):
"""
This function modifies the data to reflect the effect of the unobserved confounder.
In the case of a binary flip, a value is flipped only if the random draw falls beyond the threshold implied by kappa.
In the case of a linear effect, the confounder enters additively, with kappa acting as the linear regression coefficient.
:param new_data: pandas.DataFrame: The data to be changed due to the effects of the unobserved confounder.
:param kappa_t: numpy.float64: The value of the threshold for binary_flip or the value of the regression coefficient for linear effect.
:param kappa_y: numpy.float64: The value of the threshold for binary_flip or the value of the regression coefficient for linear effect.
:return: pandas.DataFrame: The DataFrame that includes the effects of the unobserved confounder.
"""
num_rows = data.shape[0]
stdnorm = scipy.stats.norm()
w_random = stdnorm.rvs(num_rows)
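# For "binary_flip", kappa is interpreted as a flip probability: rows whose
# standard-normal draw w_random falls in a tail of probability mass kappa get
# their treatment/outcome value inverted below.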
if effect_on_t == "binary_flip":
alpha = 2 * kappa_t - 1 if kappa_t >= 0.5 else 1 - 2 * kappa_t
interval = stdnorm.interval(alpha)
rel_interval = interval[0] if kappa_t >= 0.5 else interval[1]
new_data.loc[rel_interval <= w_random, treatment_name] = (
1 - new_data.loc[rel_interval <= w_random, treatment_name]
)
for tname in treatment_name:
if pd.api.types.is_bool_dtype(data[tname]):
new_data = new_data.astype({tname: "bool"}, copy=False)
elif effect_on_t == "linear":
confounder_t_effect = kappa_t * w_random
# By default, we add the effect of the simulated confounder to the treatment,
# but subtract it from the outcome to create a negative correlation,
# assuming that the original confounder's effect was positive on both.
# This is intended to remove the effect of the original confounder.
new_data[treatment_name] = new_data[treatment_name].values + np.ndarray(
shape=(num_rows, 1), buffer=confounder_t_effect
)
else:
raise NotImplementedError("'" + effect_on_t + "' method not supported for confounders' effect on treatment")
if effect_on_y == "binary_flip":
alpha = 2 * kappa_y - 1 if kappa_y >= 0.5 else 1 - 2 * kappa_y
interval = stdnorm.interval(alpha)
rel_interval = interval[0] if kappa_y >= 0.5 else interval[1]
new_data.loc[rel_interval <= w_random, outcome_name] = 1 - new_data.loc[rel_interval <= w_random, outcome_name]
for yname in outcome_name:
if pd.api.types.is_bool_dtype(data[yname]):
new_data = new_data.astype({yname: "bool"}, copy=False)
elif effect_on_y == "linear":
confounder_y_effect = (-1) * kappa_y * w_random
# By default, we add the effect of the simulated confounder to the treatment,
# but subtract it from the outcome to create a negative correlation,
# assuming that the original confounder's effect was positive on both.
# This is intended to remove the effect of the original confounder.
new_data[outcome_name] = new_data[outcome_name].values + np.ndarray(
shape=(num_rows, 1), buffer=confounder_y_effect
)
else:
raise NotImplementedError("'" + effect_on_y + "' method not supported for confounders' effect on outcome")
return new_data
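# Illustrative call (hypothetical column names; `df` holds the original data):
#
#   modified = _include_confounders_effect(
#       df, df.copy(deep=True), "binary_flip", ["v0"], 0.2, "linear", ["y"], 0.5
#   )
#
# This flips roughly 20% of the binary treatment values and shifts the outcome
# by a linear confounder term with coefficient 0.5.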
def include_simulated_confounder(
data: pd.DataFrame,
treatment_name: str,
outcome_name: str,
kappa_t: float,
kappa_y: float,
variables_of_interest: List,
convergence_threshold: float = DEFAULT_CONVERGENCE_THRESHOLD,
c_star_max: int = DEFAULT_C_STAR_MAX,
):
"""
This function simulates an unobserved confounder based on the data using the following steps:
1. It calculates the "residuals" from the treatment and outcome model
i.) The outcome model has outcome as the dependent variable and all the observed variables including treatment as independent variables
ii.) The treatment model has treatment as the dependent variable and all the observed variables as independent variables.
2. U is an intermediate random variable drawn from the normal distribution with the weighted average of residuals as mean and a unit variance
U ~ N(c1*d_y + c2*d_t, 1)
where
*d_y and d_t are residuals from the treatment and outcome model
*c1 and c2 are coefficients to the residuals
3. The final U, which is the simulated unobserved confounder is obtained by debiasing the intermediate variable U by residualising it with X
Choosing the coefficients c1 and c2:
The coefficients are chosen based on these basic assumptions:
1. There is a hyperbolic relationship satisfying c1*c2 = c_star
2. c_star is chosen from a range of possible values based on the correlation of the obtained simulated variable with outcome and treatment.
3. The product of correlations with treatment and outcome should be at a minimum distance to the maximum correlations with treatment and outcome in any of the observed confounders
4. The ratio of the weights should be such that they maintain the ratio of the maximum possible observed coefficients within some confidence interval
:param c_star_max: The maximum possible value for the hyperbolic curve on which the coefficients to the residuals lie. It defaults to 1000 in the code if not specified by the user.
:type int
:param convergence_threshold: The threshold to check the plateauing of the correlation while selecting a c_star. It defaults to 0.1 in the code if not specified by the user
:type float
:returns: The simulated values of the unobserved confounder based on the data
:type pandas.core.series.Series
"""
# Obtaining the list of observed variables
required_variables = True
observed_variables = choose_variables(required_variables, variables_of_interest)
observed_variables_with_treatment_and_outcome = observed_variables + treatment_name + outcome_name
# Taking a subset of the dataframe that has only observed variables
data = data[observed_variables_with_treatment_and_outcome]
# Residuals from the outcome model obtained by fitting a linear model
y = data[outcome_name[0]]
observed_variables_with_treatment = observed_variables + treatment_name
X = data[observed_variables_with_treatment]
model = sm.OLS(y, X.astype("float"))
results = model.fit()
residuals_y = y - results.fittedvalues
d_y = list(pd.Series(residuals_y))
# Residuals from the treatment model obtained by fitting a linear model
t = data[treatment_name[0]].astype("int64")
X = data[observed_variables]
model = sm.OLS(t, X)
results = model.fit()
residuals_t = t - results.fittedvalues
d_t = list(pd.Series(residuals_t))
# Initialising product_cor_metric_observed with a very low value, since we are searching for the maximum
product_cor_metric_observed = -10000000000
for i in observed_variables:
current_obs_confounder = data[i]
outcome_values = data[outcome_name[0]]
correlation_y = current_obs_confounder.corr(outcome_values)
treatment_values = t
correlation_t = current_obs_confounder.corr(treatment_values)
product_cor_metric_current = correlation_y * correlation_t
if product_cor_metric_current >= product_cor_metric_observed:
product_cor_metric_observed = product_cor_metric_current
correlation_t_observed = correlation_t
correlation_y_observed = correlation_y
# The user has an option to give the effect_strength_on_y and effect_strength_on_t, which can then be used instead of the maximum correlation with treatment and outcome in the observed variables, as it specifies the desired effect.
if kappa_t is not None:
correlation_t_observed = kappa_t
if kappa_y is not None:
correlation_y_observed = kappa_y
# Choosing a c_star based on the data.
# The correlations stop increasing beyond a certain value of c_star, i.e. they plateau, and we choose c_star to be the value at which they plateau.
correlation_y_list = []
correlation_t_list = []
product_cor_metric_simulated_list = []
x_list = []
step = int(c_star_max / 10)
for i in range(0, int(c_star_max), step):
c1 = math.sqrt(i)
c2 = c1
final_U = _generate_confounder_from_residuals(c1, c2, d_y, d_t, X)
current_simulated_confounder = final_U
outcome_values = data[outcome_name[0]]
correlation_y = current_simulated_confounder.corr(outcome_values)
correlation_y_list.append(correlation_y)
treatment_values = t
correlation_t = current_simulated_confounder.corr(treatment_values)
correlation_t_list.append(correlation_t)
product_cor_metric_simulated = correlation_y * correlation_t
product_cor_metric_simulated_list.append(product_cor_metric_simulated)
x_list.append(i)
# Default to c_star_max in case the correlation gain never drops below the convergence threshold
c_star = c_star_max
index = 1
while index < len(correlation_y_list):
if (correlation_y_list[index] - correlation_y_list[index - 1]) <= convergence_threshold:
c_star = x_list[index]
break
index = index + 1
# Once c_star is chosen, c1 and c2 are picked along the hyperbolic relationship by iterating over various combinations of c1 and c2 values and choosing the combination
# that maintains the minimum distance between the product of correlations of the simulated variable and the product of maximum correlations of one of the observed variables,
# while additionally checking that the ratio of the weights maintains the ratio of the maximum possible observed coefficients within some confidence interval
# c1_final and c2_final are initialised to the values on the hyperbolic curve such that c1_final = c2_final and c1_final*c2_final = c_star
c1_final = math.sqrt(c_star)
c2_final = math.sqrt(c_star)
# initialising min_distance_between_product_cor_metrics to be a value greater than 1
min_distance_between_product_cor_metrics = 1.5
i = 0.05
threshold = c_star / 0.05
while i <= threshold:
c2 = i
c1 = c_star / c2
final_U = _generate_confounder_from_residuals(c1, c2, d_y, d_t, X)
current_simulated_confounder = final_U
outcome_values = data[outcome_name[0]]
correlation_y = current_simulated_confounder.corr(outcome_values)
treatment_values = t
correlation_t = current_simulated_confounder.corr(treatment_values)
product_cor_metric_simulated = correlation_y * correlation_t
if min_distance_between_product_cor_metrics >= abs(product_cor_metric_simulated - product_cor_metric_observed):
min_distance_between_product_cor_metrics = abs(product_cor_metric_simulated - product_cor_metric_observed)
additional_condition = correlation_y_observed / correlation_t_observed
if ((c1 / c2) <= (additional_condition + 0.3 * additional_condition)) and (
(c1 / c2) >= (additional_condition - 0.3 * additional_condition)
): # choose minimum positive value
c1_final = c1
c2_final = c2
i = i * 1.5
"""#closed form solution
print("c_star_max before closed form", c_star_max)
if max_correlation_with_t == -1000:
c2 = 0
c1 = c_star_max
else:
additional_condition = abs(max_correlation_with_y/max_correlation_with_t)
print("additional_condition", additional_condition)
c2 = math.sqrt(c_star_max/additional_condition)
c1 = c_star_max/c2"""
final_U = _generate_confounder_from_residuals(c1_final, c2_final, d_y, d_t, X)
return final_U
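# Illustrative call (hypothetical names; `df` is a DataFrame containing the
# listed columns):
#
#   u = include_simulated_confounder(
#       df, ["v0"], ["y"], kappa_t=None, kappa_y=None,
#       variables_of_interest=["W0", "W1"],
#   )
#
# The returned pandas Series holds the simulated confounder values and can be
# appended to `df` as a new column.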
def _generate_confounder_from_residuals(c1, c2, d_y, d_t, X):
"""
This function takes the residuals from the treatment and outcome models, together with their coefficients, and simulates the intermediate random variable U by drawing,
for each row, from a normal distribution centred at the weighted sum of the residuals, and then debiasing the intermediate variable to obtain the final variable.
:param c1: coefficient to the residual from the outcome model
:type float
:param c2: coefficient to the residual from the treatment model
:type float
:param d_y: residuals from the outcome model
:type list
:param d_t: residuals from the treatment model
:type list
:returns: The simulated values of the unobserved confounder based on the data
:type pandas.core.series.Series
"""
U = []
for j in range(len(d_t)):
simulated_variable_mean = c1 * d_y[j] + c2 * d_t[j]
simulated_variable_stddev = 1
U.append(np.random.normal(simulated_variable_mean, simulated_variable_stddev, 1))
U = np.array(U)
model = sm.OLS(U, X)
results = model.fit()
U = U.reshape(-1)
# Debias U by residualising it with respect to X
final_U = U - results.fittedvalues.values
final_U = pd.Series(final_U)
return final_U
def sensitivity_linear_partial_r2(
data: pd.DataFrame,
estimate: CausalEstimate,
treatment_name: str,
frac_strength_treatment: float = 1.0,
frac_strength_outcome: float = 1.0,
percent_change_estimate: float = 1.0,
benchmark_common_causes: Optional[List[str]] = None,
significance_level: Optional[float] = None,
null_hypothesis_effect: Optional[float] = None,
plot_estimate: bool = True,
) -> LinearSensitivityAnalyzer:
"""Add an unobserved confounder for refutation using Linear partial R2 methond (Sensitivity Analysis for linear models).
:param data: pd.DataFrame: Data to run the refutation
:param estimate: CausalEstimate: Estimate to run the refutation
:param treatment_name: str: Name of the treatment
:param frac_strength_treatment: float: This parameter decides the effect strength of the simulated confounder as a fraction of the effect strength of observed confounders on treatment. Defaults to 1.
:param frac_strength_outcome: float: This parameter decides the effect strength of the simulated confounder as a fraction of the effect strength of observed confounders on outcome. Defaults to 1.
:param percent_change_estimate: It is the percentage of reduction of treatment estimate that could alter the results (default = 1).
if percent_change_estimate = 1, the robustness value describes the strength of association of confounders with treatment and outcome in order to reduce the estimate by 100% i.e bring it down to 0. (relevant only for Linear Sensitivity Analysis, ignore for rest)
:param benchmark_common_causes: names of variables for bounding strength of confounders. (relevant only for partial-r2 based simulation methods)
:param significance_level: confidence interval for statistical inference(default = 0.05). (relevant only for partial-r2 based simulation methods)
:param null_hypothesis_effect: assumed effect under the null hypothesis. (relevant only for linear-partial-R2, ignore for rest)
:param plot_estimate: Generate contour plot for estimate while performing sensitivity analysis. (default = True).
(relevant only for partial-r2 based simulation methods)
"""
if not (isinstance(estimate.estimator, LinearRegressionEstimator)):
raise NotImplementedError("Currently only LinearRegressionEstimator is supported for Sensitivity Analysis")
if len(estimate.estimator._effect_modifier_names) > 0:
raise NotImplementedError("The current implementation does not support effect modifiers")
if frac_strength_outcome == 1:
frac_strength_outcome = frac_strength_treatment
analyzer = LinearSensitivityAnalyzer(
estimator=estimate.estimator,
data=data,
treatment_name=treatment_name,
percent_change_estimate=percent_change_estimate,
significance_level=significance_level,
benchmark_common_causes=benchmark_common_causes,
null_hypothesis_effect=null_hypothesis_effect,
frac_strength_treatment=frac_strength_treatment,
frac_strength_outcome=frac_strength_outcome,
common_causes_order=estimate.estimator._observed_common_causes.columns,
)
analyzer.check_sensitivity(plot=plot_estimate)
return analyzer
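# Minimal usage sketch (assumes `estimate` was produced by a
# LinearRegressionEstimator via CausalModel.estimate_effect; "W0" is a
# hypothetical observed common cause used for benchmarking):
#
#   analyzer = sensitivity_linear_partial_r2(
#       df, estimate, ["v0"], benchmark_common_causes=["W0"]
#   )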
def sensitivity_non_parametric_partial_r2(
estimate: CausalEstimate,
kappa_t: Optional[Union[float, np.ndarray]] = None,
kappa_y: Optional[Union[float, np.ndarray]] = None,
frac_strength_treatment: float = 1.0,
frac_strength_outcome: float = 1.0,
benchmark_common_causes: Optional[List[str]] = None,
plot_estimate: bool = True,
alpha_s_estimator_list: Optional[List] = None,
alpha_s_estimator_param_list: Optional[List[Dict]] = None,
g_s_estimator_list: Optional[List] = None,
g_s_estimator_param_list: Optional[List[Dict]] = None,
plugin_reisz: bool = False,
) -> Union[PartialLinearSensitivityAnalyzer, NonParametricSensitivityAnalyzer]:
"""Add an unobserved confounder for refutation using Non-parametric partial R2 methond (Sensitivity Analysis for non-parametric models).
:param estimate: CausalEstimate: Estimate to run the refutation
:param kappa_t: float, numpy.ndarray: Partial R2 of the unobserved confounder wrt the treatment conditioned on the observed confounders. Only in the case of general non-parametric-partial-R2, it is the fraction of variance in the reisz representer that is explained by the unobserved confounder; specifically (1-r), where r is the ratio of variance of reisz representer, alpha^2, based on observed confounders and that based on all confounders.
:param kappa_y: float, numpy.ndarray: Partial R2 of the unobserved confounder wrt the outcome conditioned on the treatment and observed confounders.
:param frac_strength_treatment: float: This parameter decides the effect strength of the simulated confounder as a fraction of the effect strength of observed confounders on treatment. Defaults to 1.
:param frac_strength_outcome: float: This parameter decides the effect strength of the simulated confounder as a fraction of the effect strength of observed confounders on outcome. Defaults to 1.
:param benchmark_common_causes: names of variables for bounding strength of confounders. (relevant only for partial-r2 based simulation methods)
:param plot_estimate: Generate contour plot for estimate while performing sensitivity analysis. (default = True).
(relevant only for partial-r2 based simulation methods)
:param alpha_s_estimator_list: list of estimator objects for estimating alpha_s. These objects should have fit() and predict() methods (relevant only for non-parametric-partial-R2 method)
:param alpha_s_estimator_param_list: list of dictionaries with parameters for finding alpha_s. (relevant only for non-parametric-partial-R2 simulation method)
:param g_s_estimator_list: list of estimator objects for finding g_s. These objects should have fit() and predict() functions implemented. (relevant only for non-parametric-partial-R2 simulation method)
:param g_s_estimator_param_list: list of dictionaries with parameters for tuning respective estimators in "g_s_estimator_list". The order of the dictionaries in the list should be consistent with the estimator objects order in "g_s_estimator_list". (relevant only for non-parametric-partial-R2 simulation method)
:param plugin_reisz: bool: Flag on whether to use the plugin estimator or the nonparametric estimator for the reisz representer function (alpha_s).
"""
import dowhy.causal_estimators.econml
# If the estimator used is LinearDML, partially linear sensitivity analysis will be automatically chosen
if isinstance(estimate.estimator, dowhy.causal_estimators.econml.Econml):
if estimate.estimator._econml_methodname == "econml.dml.LinearDML":
analyzer = PartialLinearSensitivityAnalyzer(
estimator=estimate._estimator_object,
observed_common_causes=estimate.estimator._observed_common_causes,
treatment=estimate.estimator._treatment,
outcome=estimate.estimator._outcome,
alpha_s_estimator_param_list=alpha_s_estimator_param_list,
g_s_estimator_list=g_s_estimator_list,
g_s_estimator_param_list=g_s_estimator_param_list,
effect_strength_treatment=kappa_t,
effect_strength_outcome=kappa_y,
benchmark_common_causes=benchmark_common_causes,
frac_strength_treatment=frac_strength_treatment,
frac_strength_outcome=frac_strength_outcome,
)
analyzer.check_sensitivity(plot=plot_estimate)
return analyzer
analyzer = NonParametricSensitivityAnalyzer(
estimator=estimate.estimator,
observed_common_causes=estimate.estimator._observed_common_causes,
treatment=estimate.estimator._treatment,
outcome=estimate.estimator._outcome,
alpha_s_estimator_list=alpha_s_estimator_list,
alpha_s_estimator_param_list=alpha_s_estimator_param_list,
g_s_estimator_list=g_s_estimator_list,
g_s_estimator_param_list=g_s_estimator_param_list,
effect_strength_treatment=kappa_t,
effect_strength_outcome=kappa_y,
benchmark_common_causes=benchmark_common_causes,
frac_strength_treatment=frac_strength_treatment,
frac_strength_outcome=frac_strength_outcome,
theta_s=estimate.value,
plugin_reisz=plugin_reisz,
)
analyzer.check_sensitivity(plot=plot_estimate)
return analyzer
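# Minimal usage sketch (the partial R2 values for the unobserved confounder
# are hypothetical):
#
#   analyzer = sensitivity_non_parametric_partial_r2(
#       estimate, kappa_t=0.05, kappa_y=0.05, benchmark_common_causes=["W0"]
#   )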
def sensitivity_e_value(
data: pd.DataFrame,
target_estimand: IdentifiedEstimand,
estimate: CausalEstimate,
treatment_name: List[str],
outcome_name: List[str],
plot_estimate: bool = True,
) -> EValueSensitivityAnalyzer:
if not isinstance(estimate.estimator, RegressionEstimator):
raise NotImplementedError("E-Value sensitivity analysis is currently only implemented RegressionEstimator.")
if len(estimate.estimator._effect_modifier_names) > 0:
raise NotImplementedError("The current implementation does not support effect modifiers")
analyzer = EValueSensitivityAnalyzer(
estimate=estimate,
estimand=target_estimand,
data=data,
treatment_name=treatment_name[0],
outcome_name=outcome_name[0],
)
analyzer.check_sensitivity(plot=plot_estimate)
return analyzer
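# Minimal usage sketch (assumes `estimate` came from a RegressionEstimator):
#
#   analyzer = sensitivity_e_value(
#       df, identified_estimand, estimate, ["v0"], ["y"]
#   )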
def sensitivity_simulation(
data: pd.DataFrame,
target_estimand: IdentifiedEstimand,
estimate: CausalEstimate,
treatment_name: str,
outcome_name: str,
kappa_t: Optional[Union[float, np.ndarray]] = None,
kappa_y: Optional[Union[float, np.ndarray]] = None,
confounders_effect_on_treatment: str = "binary_flip",
confounders_effect_on_outcome: str = "linear",
frac_strength_treatment: float = 1.0,
frac_strength_outcome: float = 1.0,
plotmethod: Optional[str] = None,
show_progress_bar=False,
**_,
) -> CausalRefutation:
"""
This function attempts to add an unobserved common cause to the outcome and the treatment. At present, the behavior is implemented for one-dimensional continuous
and binary variables. This function can take either single-valued inputs or a range of inputs; it then inspects the data type of the input and decides on the course of
action.
:param data: pd.DataFrame: Data to run the refutation
:param target_estimand: IdentifiedEstimand: Identified estimand to run the refutation
:param estimate: CausalEstimate: Estimate to run the refutation
:param treatment_name: str: Name of the treatment
:param outcome_name: str: Name of the outcome
:param kappa_t: float, numpy.ndarray: Strength of the confounder's effect on treatment. When confounders_effect_on_treatment is linear, it is the regression coefficient. When the confounders_effect_on_treatment is binary flip, it is the probability with which effect of unobserved confounder can invert the value of the treatment.
:param kappa_y: float, numpy.ndarray: Strength of the confounder's effect on outcome. Its interpretation depends on confounders_effect_on_outcome and the simulation_method. When simulation_method is direct-simulation, for a linear effect it behaves like the regression coefficient and for a binary flip, it is the probability with which it can invert the value of the outcome.
:param confounders_effect_on_treatment: str : The type of effect on the treatment due to the unobserved confounder. Possible values are ['binary_flip', 'linear']
:param confounders_effect_on_outcome: str : The type of effect on the outcome due to the unobserved confounder. Possible values are ['binary_flip', 'linear']
:param frac_strength_treatment: float: This parameter decides the effect strength of the simulated confounder as a fraction of the effect strength of observed confounders on treatment. Defaults to 1.
:param frac_strength_outcome: float: This parameter decides the effect strength of the simulated confounder as a fraction of the effect strength of observed confounders on outcome. Defaults to 1.
:param plotmethod: string: Type of plot to be shown. If None, no plot is generated. This parameter is used only when more than one treatment confounder effect value or outcome confounder effect value is provided. Default is "colormesh". Supported values are "contour", "colormesh" when more than one value is provided for both confounder effect value parameters; "line" when provided for only one of them.
:return: CausalRefutation: An object that contains the estimated effect, the new effect, and the name of the refutation used.
"""
if kappa_t is None:
kappa_t = _infer_default_kappa_t(
data, target_estimand, treatment_name, confounders_effect_on_treatment, frac_strength_treatment
)
if kappa_y is None:
kappa_y = _infer_default_kappa_y(
data, target_estimand, outcome_name, confounders_effect_on_outcome, frac_strength_outcome
)
if not isinstance(kappa_t, (list, np.ndarray)) and not isinstance(
kappa_y, (list, np.ndarray)
): # Deal with single value inputs
new_data = copy.deepcopy(data)
new_data = _include_confounders_effect(
data,
new_data,
confounders_effect_on_treatment,
treatment_name,
kappa_t,
confounders_effect_on_outcome,
outcome_name,
kappa_y,
)
new_estimator = CausalEstimator.get_estimator_object(new_data, target_estimand, estimate)
new_effect = new_estimator.estimate_effect()
refute = CausalRefutation(
estimate.value, new_effect.value, refutation_type="Refute: Add an Unobserved Common Cause"
)
refute.new_effect_array = np.array(new_effect.value)
refute.new_effect = new_effect.value
return refute
else: # Deal with multiple value inputs
if isinstance(kappa_t, (list, np.ndarray)) and isinstance(
kappa_y, (list, np.ndarray)
): # Deal with range inputs
# Get a 2D matrix of values
# x,y = np.meshgrid(self.kappa_t, self.kappa_y) # x,y are both MxN
results_matrix = np.random.rand(len(kappa_t), len(kappa_y))  # Matrix to hold all the results (len(kappa_t) x len(kappa_y))
orig_data = copy.deepcopy(data)
for i in tqdm(
range(len(kappa_t)),
colour=CausalRefuter.PROGRESS_BAR_COLOR,
disable=not show_progress_bar,
desc="Refuting Estimates: ",
):
for j in range(len(kappa_y)):
new_data = _include_confounders_effect(
data,
orig_data,
confounders_effect_on_treatment,
treatment_name,
kappa_t[i],
confounders_effect_on_outcome,
outcome_name,
kappa_y[j],
)
new_estimator = CausalEstimator.get_estimator_object(new_data, target_estimand, estimate)
new_effect = new_estimator.estimate_effect()
refute = CausalRefutation(
estimate.value,
new_effect.value,
refutation_type="Refute: Add an Unobserved Common Cause",
)
results_matrix[i][j] = refute.new_effect # Populate the results
refute.new_effect_array = results_matrix
refute.new_effect = (np.min(results_matrix), np.max(results_matrix))
# Store the values into the refute object
if plotmethod is None:
return refute
import matplotlib
import matplotlib.pyplot as plt
fig = plt.figure(figsize=(6, 5))
left, bottom, width, height = 0.1, 0.1, 0.8, 0.8
ax = fig.add_axes([left, bottom, width, height])
oe = estimate.value
contour_levels = [oe / 4.0, oe / 2.0, (3.0 / 4) * oe, oe]
contour_levels.extend([0, np.min(results_matrix), np.max(results_matrix)])
if plotmethod == "contour":
cp = plt.contourf(kappa_y, kappa_t, results_matrix, levels=sorted(contour_levels))
# Adding a label on the contour line for the original estimate
fmt = {}
trueeffect_index = np.where(cp.levels == oe)[0][0]
fmt[cp.levels[trueeffect_index]] = "Estimated Effect"
# Label every other level using strings
plt.clabel(cp, [cp.levels[trueeffect_index]], inline=True, fmt=fmt)
plt.colorbar(cp)
elif plotmethod == "colormesh":
cp = plt.pcolormesh(kappa_y, kappa_t, results_matrix, shading="nearest")
plt.colorbar(cp, ticks=contour_levels)
ax.yaxis.set_ticks(kappa_t)
ax.xaxis.set_ticks(kappa_y)
plt.xticks(rotation=45)
ax.set_title("Effect of Unobserved Common Cause")
ax.set_ylabel("Value of Linear Constant on Treatment")
ax.set_xlabel("Value of Linear Constant on Outcome")
plt.show()
return refute
elif isinstance(kappa_t, (list, np.ndarray)):
outcomes = np.random.rand(len(kappa_t))
orig_data = copy.deepcopy(data)
for i in tqdm(
range(0, len(kappa_t)),
colour=CausalRefuter.PROGRESS_BAR_COLOR,
disable=not show_progress_bar,
desc="Refuting Estimates: ",
):
new_data = _include_confounders_effect(
data,
orig_data,
confounders_effect_on_treatment,
treatment_name,
kappa_t[i],
confounders_effect_on_outcome,
outcome_name,
kappa_y,
)
new_estimator = CausalEstimator.get_estimator_object(new_data, target_estimand, estimate)
new_effect = new_estimator.estimate_effect()
refute = CausalRefutation(
estimate.value, new_effect.value, refutation_type="Refute: Add an Unobserved Common Cause"
)
logger.debug(refute)
outcomes[i] = refute.new_effect # Populate the results
refute.new_effect_array = outcomes
refute.new_effect = (np.min(outcomes), np.max(outcomes))
if plotmethod is None:
return refute
import matplotlib
import matplotlib.pyplot as plt
fig = plt.figure(figsize=(6, 5))
left, bottom, width, height = 0.1, 0.1, 0.8, 0.8
ax = fig.add_axes([left, bottom, width, height])
plt.plot(kappa_t, outcomes)
plt.axhline(estimate.value, linestyle="--", color="gray")
ax.set_title("Effect of Unobserved Common Cause")
ax.set_xlabel("Value of Linear Constant on Treatment")
ax.set_ylabel("Estimated Effect after adding the common cause")
plt.show()
return refute
elif isinstance(kappa_y, (list, np.ndarray)):
outcomes = np.random.rand(len(kappa_y))
orig_data = copy.deepcopy(data)
for i in tqdm(
range(0, len(kappa_y)),
colour=CausalRefuter.PROGRESS_BAR_COLOR,
disable=not show_progress_bar,
desc="Refuting Estimates: ",
):
new_data = _include_confounders_effect(
data,
orig_data,
confounders_effect_on_treatment,
treatment_name,
kappa_t,
confounders_effect_on_outcome,
outcome_name,
kappa_y[i],
)
new_estimator = CausalEstimator.get_estimator_object(new_data, target_estimand, estimate)
new_effect = new_estimator.estimate_effect()
refute = CausalRefutation(
estimate.value, new_effect.value, refutation_type="Refute: Add an Unobserved Common Cause"
)
logger.debug(refute)
outcomes[i] = refute.new_effect # Populate the results
refute.new_effect_array = outcomes
refute.new_effect = (np.min(outcomes), np.max(outcomes))
if plotmethod is None:
return refute
import matplotlib
import matplotlib.pyplot as plt
fig = plt.figure(figsize=(6, 5))
left, bottom, width, height = 0.1, 0.1, 0.8, 0.8
ax = fig.add_axes([left, bottom, width, height])
plt.plot(kappa_y, outcomes)
plt.axhline(estimate.value, linestyle="--", color="gray")
ax.set_title("Effect of Unobserved Common Cause")
ax.set_xlabel("Value of Linear Constant on Outcome")
ax.set_ylabel("Estimated Effect after adding the common cause")
plt.show()
return refute
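# Minimal usage sketch (assumed objects from a prior identify/estimate run;
# the kappa values are hypothetical):
#
#   refutation = sensitivity_simulation(
#       df, identified_estimand, estimate, ["v0"], ["y"],
#       kappa_t=0.1, kappa_y=0.1,
#   )
#   print(refutation)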
| andresmor-ms | f13ed30f42440552e4a912372abcb7c3023fc9c0 | 18bd1fe5d9941867dbd135e0d2a0af2fb24feea7 | Added names for these | andresmor-ms | 272 |
py-why/dowhy | 695 | Change mechanics of Docker image build and usage | - Introduce a GH Action to build Docker image for docs generation every time the Dockerfile is changed
- Trigger docs generation when a new Docker image was built
Addresses:
- https://github.com/py-why/dowhy/issues/690
- https://github.com/py-why/dowhy/issues/691 | null | 2022-10-19 15:03:32+00:00 | 2022-10-26 21:10:00+00:00 | docs/Dockerfile | #
# This dockerfile was used to generate darthtrevino/dowhy-docs, used in the
# build-and-publish-docs.yml workflow
#
FROM pego/dowhy-examples-notebooks-deps
RUN apt update
RUN DEBIAN_FRONTEND=noninteractive apt install --yes --quiet curl
RUN curl -sSL https://install.python-poetry.org | python3 - --version 1.2.1
ENV PATH=$PATH:/root/.local/share/pypoetry/venv/bin | #
# This dockerfile was used to generate darthtrevino/dowhy-docs, used in the
# build-and-publish-docs.yml workflow
#
FROM ghcr.io/py-why/dowhy-example-notebooks-deps
RUN apt update
RUN DEBIAN_FRONTEND=noninteractive apt install --yes --quiet curl
RUN curl -sSL https://install.python-poetry.org | python3 - --version 1.2.1
ENV PATH=$PATH:/root/.local/share/pypoetry/venv/bin
| petergtz | 9b7e6a74c4dbb7228999998c81d43c5330df9d31 | 3c5cf3a67b054b2baebaae254c31f457e1fd3365 | I'm assuming this comes packed with all the R deps? Do we have a plan for deprecating the R notebooks? | darthtrevino | 273 |
py-why/dowhy | 695 | Change mechanics of Docker image build and usage | - Introduce a GH Action to build Docker image for docs generation every time the Dockerfile is changed
- Trigger docs generation when a new Docker image was built
Addresses:
- https://github.com/py-why/dowhy/issues/690
- https://github.com/py-why/dowhy/issues/691 | null | 2022-10-19 15:03:32+00:00 | 2022-10-26 21:10:00+00:00 | docs/Dockerfile | #
# This dockerfile was used to generate darthtrevino/dowhy-docs, used in the
# build-and-publish-docs.yml workflow
#
FROM pego/dowhy-examples-notebooks-deps
RUN apt update
RUN DEBIAN_FRONTEND=noninteractive apt install --yes --quiet curl
RUN curl -sSL https://install.python-poetry.org | python3 - --version 1.2.1
ENV PATH=$PATH:/root/.local/share/pypoetry/venv/bin | #
# This dockerfile was used to generate darthtrevino/dowhy-docs, used in the
# build-and-publish-docs.yml workflow
#
FROM ghcr.io/py-why/dowhy-example-notebooks-deps
RUN apt update
RUN DEBIAN_FRONTEND=noninteractive apt install --yes --quiet curl
RUN curl -sSL https://install.python-poetry.org | python3 - --version 1.2.1
ENV PATH=$PATH:/root/.local/share/pypoetry/venv/bin
| petergtz | 9b7e6a74c4dbb7228999998c81d43c5330df9d31 | 3c5cf3a67b054b2baebaae254c31f457e1fd3365 | > I'm assuming this comes packed with all the R deps?
yes
> Do we have a plan for deprecating the R notebooks?
Yes. I've already removed R dependencies in a couple of notebooks that use the Lalonde dataset and use R only to load it (see recent commit history).
Actually, the plan is not to deprecate the notebooks, but to replace the R code in there with native Python. We'll need to take inventory and see what this will mean concretely. I.e., how many notebooks are we actually talking about. And then what algorithms are used and what can already be replaced.
| petergtz | 274 |
py-why/dowhy | 693 | Functional api/estimate effect function | #### Estimate Effect function
* Refactors the estimate effect into a separate function to keep backwards compatibility
#### TODO (future PRs):
* Add `fit(...)` method to estimators - Move data related parameters from the constructor to the `fit(...)` method
* Refactor code to avoid `**kwargs` in `__init__(...)` constructors
| null | 2022-10-18 15:49:21+00:00 | 2022-10-25 17:02:02+00:00 | docs/source/example_notebooks/dowhy_functional_api.ipynb | {
"cells": [
{
"cell_type": "markdown",
"metadata": {},
"source": [
"# Functional API Preview\n",
"\n",
"This notebook is part of a set of notebooks that provides a preview of the proposed functional API for dowhy. For details on the new API for DoWhy, check out https://github.com/py-why/dowhy/wiki/API-proposal-for-v1 It is a work-in-progress and is updated as we add new functionality. We welcome your feedback through Discord or on the Discussions page.\n",
"This functional API is designed with backwards compatibility. So both the old and new API will continue to co-exist and work for the immediate new releases. Gradually the old API using CausalModel will be deprecated in favor of the new API. \n",
"\n",
"The current Functional API covers:\n",
"* Identify Effect:\n",
" * `identify_effect(...)`: Run the identify effect algorithm using defaults just provide the graph, treatment and outcome.\n",
" * `identify_effect_auto(...)`: More configurable version of `identify_effect(...)`.\n",
" * `identify_effect_id(...)`: Identify Effect using the ID-Algorithm.\n",
"* Refute Estimate:\n",
" * `refute_estimate`: Function to run a set of the refuters below with the default parameters.\n",
" * `refute_bootstrap`: Refute an estimate by running it on a random sample of the data containing measurement error in the confounders.\n",
" * `refute_data_subset`: Refute an estimate by rerunning it on a random subset of the original data.\n",
" * `refute_random_common_cause`: Refute an estimate by introducing a randomly generated confounder (that may have been unobserved).\n",
" * `refute_placebo_treatment`: Refute an estimate by replacing treatment with a randomly-generated placebo variable.\n",
" * `sensitivity_simulation`: Add an unobserved confounder for refutation (Simulation of an unobserved confounder).\n",
" * `sensitivity_linear_partial_r2`: Add an unobserved confounder for refutation (Linear partial R2 : Sensitivity Analysis for linear models).\n",
" * `sensitivity_non_parametric_partial_r2`: Add an unobserved confounder for refutation (Non-Parametric partial R2 based : Sensitivity Analysis for non-parametric models).\n",
" * `sensitivity_e_value`: Computes E-value for point estimate and confidence limits. Benchmarks E-values against measured confounders using Observed Covariate E-values. Plots E-values and Observed\n",
" Covariate E-values.\n",
" * `refute_dummy_outcome`: Refute an estimate by replacing the true outcome with a simulated (dummy) outcome for which the true causal effect is known."
]
},
{
"cell_type": "markdown",
"metadata": {},
"source": [
"## Import Dependencies"
]
},
{
"cell_type": "code",
"execution_count": null,
"metadata": {},
"outputs": [],
"source": [
"# Functional API imports\n",
"from dowhy.causal_identifier import (\n",
" BackdoorAdjustment,\n",
" EstimandType,\n",
" identify_effect,\n",
" identify_effect_auto,\n",
" identify_effect_id,\n",
") # import effect identifier\n",
"from dowhy.causal_refuters import (\n",
" refute_bootstrap,\n",
" refute_data_subset,\n",
" refute_random_common_cause,\n",
" refute_placebo_treatment,\n",
" sensitivity_e_value,\n",
" sensitivity_linear_partial_r2,\n",
" sensitivity_non_parametric_partial_r2,\n",
" sensitivity_simulation,\n",
" refute_dummy_outcome,\n",
" refute_estimate,\n",
") # import refuters\n",
"\n",
"from dowhy.causal_graph import CausalGraph\n",
"\n",
"# Other imports required\n",
"from dowhy.datasets import linear_dataset\n",
"from dowhy import CausalModel # We still need this as we haven't created the functional API for effect estimation\n",
"import econml\n",
"\n",
"# Config dict to set the logging level\n",
"import logging.config\n",
"\n",
"DEFAULT_LOGGING = {\n",
" \"version\": 1,\n",
" \"disable_existing_loggers\": False,\n",
" \"loggers\": {\n",
" \"\": {\n",
" \"level\": \"WARN\",\n",
" },\n",
" },\n",
"}\n",
"\n",
"logging.config.dictConfig(DEFAULT_LOGGING)\n",
"# Disabling warnings output\n",
"import warnings\n",
"from sklearn.exceptions import DataConversionWarning\n",
"\n",
"warnings.filterwarnings(action=\"ignore\", category=DataConversionWarning)"
]
},
{
"cell_type": "markdown",
"metadata": {},
"source": [
"## Create the Datasets"
]
},
{
"cell_type": "code",
"execution_count": null,
"metadata": {},
"outputs": [],
"source": [
"# Parameters for creating the Dataset\n",
"TREATMENT_IS_BINARY = True\n",
"BETA = 10\n",
"NUM_SAMPLES = 500\n",
"NUM_CONFOUNDERS = 3\n",
"NUM_INSTRUMENTS = 2\n",
"NUM_EFFECT_MODIFIERS = 2\n",
"\n",
"# Creating a Linear Dataset with the given parameters\n",
"data = linear_dataset(\n",
" beta=BETA,\n",
" num_common_causes=NUM_CONFOUNDERS,\n",
" num_instruments=NUM_INSTRUMENTS,\n",
" num_effect_modifiers=NUM_EFFECT_MODIFIERS,\n",
" num_samples=NUM_SAMPLES,\n",
" treatment_is_binary=True,\n",
")\n",
"\n",
"treatment_name = data[\"treatment_name\"]\n",
"outcome_name = data[\"outcome_name\"]\n",
"\n",
"graph = CausalGraph(\n",
" treatment_name=treatment_name,\n",
" outcome_name=outcome_name,\n",
" graph=data[\"gml_graph\"],\n",
" effect_modifier_names=data[\"effect_modifier_names\"],\n",
" common_cause_names=data[\"common_causes_names\"],\n",
" observed_node_names=data[\"df\"].columns.tolist(),\n",
")"
]
},
{
"cell_type": "markdown",
"metadata": {},
"source": [
"## Identify Effect - Functional API (Preview)"
]
},
{
"cell_type": "code",
"execution_count": null,
"metadata": {},
"outputs": [],
"source": [
"# Default identify_effect call example:\n",
"identified_estimand = identify_effect(graph, treatment_name, outcome_name)\n",
"\n",
"# auto_identify_effect example with extra parameters:\n",
"identified_estimand_auto = identify_effect_auto(\n",
" graph,\n",
" treatment_name,\n",
" outcome_name,\n",
" estimand_type=EstimandType.NONPARAMETRIC_ATE,\n",
" backdoor_adjustment=BackdoorAdjustment.BACKDOOR_EFFICIENT,\n",
")\n",
"\n",
"# id_identify_effect example:\n",
"identified_estimand_id = identify_effect_id(\n",
" graph, treatment_name, outcome_name\n",
") # Note that the return type for identify_effect_id is IDExpression and not IdentifiedEstimand\n",
"\n",
"print(identified_estimand)"
]
},
{
"cell_type": "markdown",
"metadata": {},
"source": [
"## Estimate Effect\n",
"\n",
"Estimate Effect is performed by using the causal_model api as there is not functional equivalent yet"
]
},
{
"cell_type": "code",
"execution_count": null,
"metadata": {},
"outputs": [],
"source": [
"# We will still need CausalModel as the Functional Effect Estimation is still Work-In-Progress\n",
"causal_model = CausalModel(data=data[\"df\"], treatment=treatment_name, outcome=outcome_name, graph=data[\"gml_graph\"])\n",
"\n",
"estimate = causal_model.estimate_effect(identified_estimand, method_name=\"backdoor.propensity_score_matching\")\n",
"\n",
"print(estimate)"
]
},
{
"cell_type": "markdown",
"metadata": {},
"source": [
"## Refute Estimate - Functional API (Preview)"
]
},
{
"cell_type": "code",
"execution_count": null,
"metadata": {},
"outputs": [],
"source": [
"# You can call the refute_estimate function for executing several refuters using default parameters\n",
"# Currently this function does not support sensitivity_* functions\n",
"refutation_results = refute_estimate(\n",
" data[\"df\"],\n",
" identified_estimand,\n",
" estimate,\n",
" treatment_name=treatment_name,\n",
" outcome_name=outcome_name,\n",
" refuters=[refute_bootstrap, refute_data_subset],\n",
")\n",
"\n",
"for result in refutation_results:\n",
" print(result)\n",
"\n",
"# Or you can execute refute methods directly\n",
"# You can change the refute_bootstrap - refute_data_subset for any of the other refuters and add the missing parameters\n",
"\n",
"bootstrap_refutation = refute_bootstrap(data[\"df\"], identified_estimand, estimate)\n",
"print(bootstrap_refutation)\n",
"\n",
"data_subset_refutation = refute_data_subset(data[\"df\"], identified_estimand, estimate)\n",
"print(data_subset_refutation)"
]
},
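{
"cell_type": "markdown",
"metadata": {},
"source": [
"As a further sketch (assuming `refute_random_common_cause` follows the same calling pattern as `refute_bootstrap` above), a single refuter can also be invoked on its own:"
]
},
{
"cell_type": "code",
"execution_count": null,
"metadata": {},
"outputs": [],
"source": [
"random_common_cause_refutation = refute_random_common_cause(data[\"df\"], identified_estimand, estimate)\n",
"print(random_common_cause_refutation)"
]
},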
{
"cell_type": "markdown",
"metadata": {},
"source": [
"# Backwards Compatibility\n",
"\n",
"This section shows replicating the same results using only the CausalModel API"
]
},
{
"cell_type": "markdown",
"metadata": {},
"source": [
"## Identify Effect"
]
},
{
"cell_type": "code",
"execution_count": null,
"metadata": {},
"outputs": [],
"source": [
"identified_estimand_causal_model_api = (\n",
" causal_model.identify_effect()\n",
") # graph, treatment and outcome comes from the causal_model object\n",
"\n",
"print(identified_estimand_causal_model_api)"
]
},
{
"cell_type": "markdown",
"metadata": {},
"source": [
"## Estimate Effect"
]
},
{
"cell_type": "code",
"execution_count": null,
"metadata": {},
"outputs": [],
"source": [
"estimate_causal_model_api = causal_model.estimate_effect(\n",
" identified_estimand, method_name=\"backdoor.propensity_score_matching\"\n",
")\n",
"\n",
"print(estimate)"
]
},
{
"cell_type": "markdown",
"metadata": {},
"source": [
"## Refute Estimate"
]
},
{
"cell_type": "code",
"execution_count": null,
"metadata": {},
"outputs": [],
"source": [
"bootstrap_refutation_causal_model_api = causal_model.refute_estimate(identified_estimand, estimate, \"bootstrap_refuter\")\n",
"print(bootstrap_refutation_causal_model_api)\n",
"\n",
"data_subset_refutation_causal_model_api = causal_model.refute_estimate(\n",
" identified_estimand, estimate, \"data_subset_refuter\"\n",
")\n",
"print(data_subset_refutation_causal_model_api)"
]
}
],
"metadata": {
"kernelspec": {
"display_name": "Python 3.8.10 ('dowhy-_zBapv7Q-py3.8')",
"language": "python",
"name": "python3"
},
"language_info": {
"codemirror_mode": {
"name": "ipython",
"version": 3
},
"file_extension": ".py",
"mimetype": "text/x-python",
"name": "python",
"nbconvert_exporter": "python",
"pygments_lexer": "ipython3",
"version": "3.8.10"
},
"toc": {
"base_numbering": 1,
"nav_menu": {},
"number_sections": false,
"sideBar": true,
"skip_h1_title": true,
"title_cell": "Table of Contents",
"title_sidebar": "Contents",
"toc_cell": false,
"toc_position": {},
"toc_section_display": true,
"toc_window_display": false
},
"vscode": {
"interpreter": {
"hash": "dcb481ad5d98e2afacd650b2c07afac80a299b7b701b553e333fc82865502500"
}
}
},
"nbformat": 4,
"nbformat_minor": 4
}
| {
"cells": [
{
"cell_type": "markdown",
"metadata": {},
"source": [
"# Functional API Preview\n",
"\n",
"This notebook is part of a set of notebooks that provides a preview of the proposed functional API for dowhy. For details on the new API for DoWhy, check out https://github.com/py-why/dowhy/wiki/API-proposal-for-v1 It is a work-in-progress and is updated as we add new functionality. We welcome your feedback through Discord or on the Discussions page.\n",
"This functional API is designed with backwards compatibility. So both the old and new API will continue to co-exist and work for the immediate new releases. Gradually the old API using CausalModel will be deprecated in favor of the new API. \n",
"\n",
"The current Functional API covers:\n",
"* Identify Effect:\n",
" * `identify_effect(...)`: Run the identify effect algorithm using defaults just provide the graph, treatment and outcome.\n",
" * `auto_identify_effect(...)`: More configurable version of `identify_effect(...)`.\n",
" * `id_identify_effect(...)`: Identify Effect using the ID-Algorithm.\n",
"* Refute Estimate:\n",
" * `refute_estimate`: Function to run a set of the refuters below with the default parameters.\n",
" * `refute_bootstrap`: Refute an estimate by running it on a random sample of the data containing measurement error in the confounders.\n",
" * `refute_data_subset`: Refute an estimate by rerunning it on a random subset of the original data.\n",
" * `refute_random_common_cause`: Refute an estimate by introducing a randomly generated confounder (that may have been unobserved).\n",
" * `refute_placebo_treatment`: Refute an estimate by replacing treatment with a randomly-generated placebo variable.\n",
" * `sensitivity_simulation`: Add an unobserved confounder for refutation (Simulation of an unobserved confounder).\n",
" * `sensitivity_linear_partial_r2`: Add an unobserved confounder for refutation (Linear partial R2 : Sensitivity Analysis for linear models).\n",
" * `sensitivity_non_parametric_partial_r2`: Add an unobserved confounder for refutation (Non-Parametric partial R2 based : Sensitivity Analyis for non-parametric models).\n",
" * `sensitivity_e_value`: Computes E-value for point estimate and confidence limits. Benchmarks E-values against measured confounders using Observed Covariate E-values. Plots E-values and Observed\n",
" Covariate E-values.\n",
" * `refute_dummy_outcome`: Refute an estimate by introducing a randomly generated confounder (that may have been unobserved)."
]
},
{
"cell_type": "markdown",
"metadata": {},
"source": [
"## Import Dependencies"
]
},
{
"cell_type": "code",
"execution_count": null,
"metadata": {},
"outputs": [],
"source": [
"# Functional API imports\n",
"from dowhy.causal_identifier import (\n",
" BackdoorAdjustment,\n",
" EstimandType,\n",
" identify_effect,\n",
" identify_effect_auto,\n",
" identify_effect_id,\n",
") # import effect identifier\n",
"from dowhy.causal_refuters import (\n",
" refute_bootstrap,\n",
" refute_data_subset,\n",
" refute_random_common_cause,\n",
" refute_placebo_treatment,\n",
" sensitivity_e_value,\n",
" sensitivity_linear_partial_r2,\n",
" sensitivity_non_parametric_partial_r2,\n",
" sensitivity_simulation,\n",
" refute_dummy_outcome,\n",
" refute_estimate,\n",
") # import refuters\n",
"\n",
"from dowhy.causal_estimators.propensity_score_matching_estimator import PropensityScoreMatchingEstimator\n",
"\n",
"from dowhy.utils.api import parse_state\n",
"\n",
"from dowhy.causal_estimator import estimate_effect # Estimate effect function\n",
"\n",
"from dowhy.causal_graph import CausalGraph\n",
"\n",
"# Other imports required\n",
"from dowhy.datasets import linear_dataset\n",
"from dowhy import CausalModel # We still need this as we haven't created the functional API for effect estimation\n",
"import econml\n",
"\n",
"# Config dict to set the logging level\n",
"import logging.config\n",
"\n",
"DEFAULT_LOGGING = {\n",
" \"version\": 1,\n",
" \"disable_existing_loggers\": False,\n",
" \"loggers\": {\n",
" \"\": {\n",
" \"level\": \"WARN\",\n",
" },\n",
" },\n",
"}\n",
"\n",
"logging.config.dictConfig(DEFAULT_LOGGING)\n",
"# Disabling warnings output\n",
"import warnings\n",
"from sklearn.exceptions import DataConversionWarning\n",
"\n",
"warnings.filterwarnings(action=\"ignore\", category=DataConversionWarning)"
]
},
{
"cell_type": "markdown",
"metadata": {},
"source": [
"## Create the Datasets"
]
},
{
"cell_type": "code",
"execution_count": null,
"metadata": {},
"outputs": [],
"source": [
"# Parameters for creating the Dataset\n",
"TREATMENT_IS_BINARY = True\n",
"BETA = 10\n",
"NUM_SAMPLES = 500\n",
"NUM_CONFOUNDERS = 3\n",
"NUM_INSTRUMENTS = 2\n",
"NUM_EFFECT_MODIFIERS = 2\n",
"\n",
"# Creating a Linear Dataset with the given parameters\n",
"data = linear_dataset(\n",
" beta=BETA,\n",
" num_common_causes=NUM_CONFOUNDERS,\n",
" num_instruments=NUM_INSTRUMENTS,\n",
" num_effect_modifiers=NUM_EFFECT_MODIFIERS,\n",
" num_samples=NUM_SAMPLES,\n",
" treatment_is_binary=True,\n",
")\n",
"\n",
"treatment_name = data[\"treatment_name\"]\n",
"print(treatment_name)\n",
"outcome_name = data[\"outcome_name\"]\n",
"print(outcome_name)\n",
"\n",
"graph = CausalGraph(\n",
" treatment_name=treatment_name,\n",
" outcome_name=outcome_name,\n",
" graph=data[\"gml_graph\"],\n",
" effect_modifier_names=data[\"effect_modifier_names\"],\n",
" common_cause_names=data[\"common_causes_names\"],\n",
" observed_node_names=data[\"df\"].columns.tolist(),\n",
")"
]
},
{
"cell_type": "markdown",
"metadata": {},
"source": [
"## Identify Effect - Functional API (Preview)"
]
},
{
"cell_type": "code",
"execution_count": null,
"metadata": {},
"outputs": [],
"source": [
"# Default identify_effect call example:\n",
"identified_estimand = identify_effect(graph, treatment_name, outcome_name)\n",
"\n",
"# auto_identify_effect example with extra parameters:\n",
"identified_estimand_auto = identify_effect_auto(\n",
" graph,\n",
" treatment_name,\n",
" outcome_name,\n",
" estimand_type=EstimandType.NONPARAMETRIC_ATE,\n",
" backdoor_adjustment=BackdoorAdjustment.BACKDOOR_EFFICIENT,\n",
")\n",
"\n",
"# id_identify_effect example:\n",
"identified_estimand_id = identify_effect_id(\n",
" graph, treatment_name, outcome_name\n",
") # Note that the return type for id_identify_effect is IDExpression and not IdentifiedEstimand\n",
"\n",
"print(identified_estimand)"
]
},
{
"cell_type": "markdown",
"metadata": {},
"source": [
"## Estimate Effect - Functional API (Preview)"
]
},
{
"cell_type": "code",
"execution_count": null,
"metadata": {},
"outputs": [],
"source": [
"# Basic Estimate Effect function\n",
"\n",
"\n",
"propensity_score_estimator = PropensityScoreMatchingEstimator(\n",
" data=data[\"df\"],\n",
" identified_estimand=identified_estimand,\n",
" treatment=treatment_name,\n",
" outcome=outcome_name,\n",
" control_value=0,\n",
" treatment_value=1,\n",
" test_significance=None,\n",
" evaluate_effect_strength=False,\n",
" confidence_intervals=False,\n",
" target_units=\"ate\",\n",
" effect_modifiers=graph.get_effect_modifiers(treatment_name, outcome_name),\n",
")\n",
"\n",
"estimate = estimate_effect(\n",
" treatment=treatment_name,\n",
" outcome=outcome_name,\n",
" identified_estimand=identified_estimand,\n",
" identifier_name=\"backdoor\",\n",
" method=propensity_score_estimator,\n",
")\n",
"\n",
"print(estimate)"
]
},
{
"cell_type": "markdown",
"metadata": {},
"source": [
"## Refute Estimate - Functional API (Preview)"
]
},
{
"cell_type": "code",
"execution_count": null,
"metadata": {},
"outputs": [],
"source": [
"# You can call the refute_estimate function for executing several refuters using default parameters\n",
"# Currently this function does not support sensitivity_* functions\n",
"refutation_results = refute_estimate(\n",
" data[\"df\"],\n",
" identified_estimand,\n",
" estimate,\n",
" treatment_name=treatment_name,\n",
" outcome_name=outcome_name,\n",
" refuters=[refute_bootstrap, refute_data_subset],\n",
")\n",
"\n",
"for result in refutation_results:\n",
" print(result)\n",
"\n",
"# Or you can execute refute methods directly\n",
"# You can change the refute_bootstrap - refute_data_subset for any of the other refuters and add the missing parameters\n",
"\n",
"bootstrap_refutation = refute_bootstrap(data[\"df\"], identified_estimand, estimate)\n",
"print(bootstrap_refutation)\n",
"\n",
"data_subset_refutation = refute_data_subset(data[\"df\"], identified_estimand, estimate)\n",
"print(data_subset_refutation)"
]
},
{
"cell_type": "markdown",
"metadata": {},
"source": [
"# Backwards Compatibility\n",
"\n",
"This section shows replicating the same results using only the CausalModel API"
]
},
{
"cell_type": "code",
"execution_count": null,
"metadata": {},
"outputs": [],
"source": [
"# Create Causal Model\n",
"causal_model = CausalModel(data=data[\"df\"], treatment=treatment_name, outcome=outcome_name, graph=data[\"gml_graph\"])\n"
]
},
{
"cell_type": "markdown",
"metadata": {},
"source": [
"## Identify Effect"
]
},
{
"cell_type": "code",
"execution_count": null,
"metadata": {},
"outputs": [],
"source": [
"identified_estimand_causal_model_api = (\n",
" causal_model.identify_effect()\n",
") # graph, treatment and outcome comes from the causal_model object\n",
"\n",
"print(identified_estimand_causal_model_api)"
]
},
{
"cell_type": "markdown",
"metadata": {},
"source": [
"## Estimate Effect"
]
},
{
"cell_type": "code",
"execution_count": null,
"metadata": {},
"outputs": [],
"source": [
"estimate_causal_model_api = causal_model.estimate_effect(\n",
" identified_estimand, method_name=\"backdoor.propensity_score_matching\"\n",
")\n",
"\n",
"print(estimate)"
]
},
{
"cell_type": "markdown",
"metadata": {},
"source": [
"## Refute Estimate"
]
},
{
"cell_type": "code",
"execution_count": null,
"metadata": {},
"outputs": [],
"source": [
"bootstrap_refutation_causal_model_api = causal_model.refute_estimate(identified_estimand, estimate, \"bootstrap_refuter\")\n",
"print(bootstrap_refutation_causal_model_api)\n",
"\n",
"data_subset_refutation_causal_model_api = causal_model.refute_estimate(\n",
" identified_estimand, estimate, \"data_subset_refuter\"\n",
")\n",
"print(data_subset_refutation_causal_model_api)"
]
},
{
"cell_type": "code",
"execution_count": null,
"metadata": {},
"outputs": [],
"source": []
}
],
"metadata": {
"kernelspec": {
"display_name": "Python 3.8.10 ('dowhy-_zBapv7Q-py3.8')",
"language": "python",
"name": "python3"
},
"language_info": {
"codemirror_mode": {
"name": "ipython",
"version": 3
},
"file_extension": ".py",
"mimetype": "text/x-python",
"name": "python",
"nbconvert_exporter": "python",
"pygments_lexer": "ipython3",
"version": "3.8.10"
},
"toc": {
"base_numbering": 1,
"nav_menu": {},
"number_sections": false,
"sideBar": true,
"skip_h1_title": true,
"title_cell": "Table of Contents",
"title_sidebar": "Contents",
"toc_cell": false,
"toc_position": {},
"toc_section_display": true,
"toc_window_display": false
},
"vscode": {
"interpreter": {
"hash": "dcb481ad5d98e2afacd650b2c07afac80a299b7b701b553e333fc82865502500"
}
}
},
"nbformat": 4,
"nbformat_minor": 4
}
| andresmor-ms | 2044d216c322a4b32c6eadce5da7d83463f19c2f | 05bfa49dacf0061988c96c6f3e3756219df5422a | @andresmor-ms @amit-sharma Hey guys, I was wondering what you think about the following proposal (which I believe is what we have also discussed in the past at some point). It might be a bit naive, because I don't understand all the details and subtleties of the existing implementation. But just throwing it out there:
```python
estimator = PropensityScoreMatchingEstimator(
identified_estimand=identified_estimand,
... # I think most parameters are only needed during effect estimation, so they should not be necessary here
effect_modifiers=graph.get_effect_modifiers(treatment_name, outcome_name),
)
estimator.fit(data=data["df"])
estimate = estimate_effect(
treatment=treatment_name,
outcome=outcome_name,
    identified_estimand=identified_estimand, # do we even need this here again, given that this is already in the estimator?
identifier_name="backdoor",
method=estimator,
treatment_value=...,
control_value=...
)
```
What I like about this proposal is that it separates the fitting from effect estimation. Still, to make this less verbose:
```python
estimate = estimate_effect(
treatment=treatment_name,
outcome=outcome_name,
identified_estimand=identified_estimand,
identifier_name="backdoor",
method=PropensityScoreMatchingEstimator(
identified_estimand=identified_estimand,
...
effect_modifiers=graph.get_effect_modifiers(treatment_name, outcome_name),
).fit(data=data["df"]),
treatment_value=...,
control_value=...
)
```
This assumes that `fit(...)` returns `self`. This would preserve the property of the current DoWhy API where effect estimation is one step.
Finally, since `estimate_effect` ultimately calls the estimator's `estimate_effect`, do we even need the free function `estimate_effect`? What exactly does it provide?
If we don't need such a function, we could even simplify to:
```python
estimate = PropensityScoreMatchingEstimator(
identified_estimand=identified_estimand,
...
effect_modifiers=graph.get_effect_modifiers(treatment_name, outcome_name),
) \
.fit(data=data["df"]) \
.estimate_effect(treatment=treatment_name,
outcome=outcome_name,
identified_estimand=identified_estimand,
identifier_name="backdoor",
treatment_value=...,
                     control_value=...)
```
Would love to get your thoughts on this :-). | petergtz | 275 |
py-why/dowhy | 693 | Functional api/estimate effect function | #### Estimate Effect function
* Refactors effect estimation into a separate `estimate_effect` function while keeping backwards compatibility (usage sketch below)
#### TODO (future PRs):
* Add a `fit(...)` method to estimators - move data-related parameters from the constructor to the `fit(...)` method
* Refactor code to avoid `**kwargs` in `__init__(...)` constructors
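A minimal usage sketch of the refactored function, adapted from the updated notebook in this PR (the estimator arguments are abbreviated here; the notebook shows the full list):
```python
from dowhy.causal_estimator import estimate_effect  # the new free function
from dowhy.causal_estimators.propensity_score_matching_estimator import PropensityScoreMatchingEstimator

# Build an estimator explicitly, then hand it to the free estimate_effect function
propensity_score_estimator = PropensityScoreMatchingEstimator(
    data=data["df"],
    identified_estimand=identified_estimand,
    treatment=treatment_name,
    outcome=outcome_name,
    target_units="ate",
)

estimate = estimate_effect(
    treatment=treatment_name,
    outcome=outcome_name,
    identified_estimand=identified_estimand,
    identifier_name="backdoor",
    method=propensity_score_estimator,
)
print(estimate)
```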
| null | 2022-10-18 15:49:21+00:00 | 2022-10-25 17:02:02+00:00 | docs/source/example_notebooks/dowhy_functional_api.ipynb | {
"cells": [
{
"cell_type": "markdown",
"metadata": {},
"source": [
"# Functional API Preview\n",
"\n",
"This notebook is part of a set of notebooks that provides a preview of the proposed functional API for dowhy. For details on the new API for DoWhy, check out https://github.com/py-why/dowhy/wiki/API-proposal-for-v1 It is a work-in-progress and is updated as we add new functionality. We welcome your feedback through Discord or on the Discussions page.\n",
"This functional API is designed with backwards compatibility. So both the old and new API will continue to co-exist and work for the immediate new releases. Gradually the old API using CausalModel will be deprecated in favor of the new API. \n",
"\n",
"The current Functional API covers:\n",
"* Identify Effect:\n",
" * `identify_effect(...)`: Run the identify effect algorithm using defaults just provide the graph, treatment and outcome.\n",
" * `auto_identify_effect(...)`: More configurable version of `identify_effect(...)`.\n",
" * `id_identify_effect(...)`: Identify Effect using the ID-Algorithm.\n",
"* Refute Estimate:\n",
" * `refute_estimate`: Function to run a set of the refuters below with the default parameters.\n",
" * `refute_bootstrap`: Refute an estimate by running it on a random sample of the data containing measurement error in the confounders.\n",
" * `refute_data_subset`: Refute an estimate by rerunning it on a random subset of the original data.\n",
" * `refute_random_common_cause`: Refute an estimate by introducing a randomly generated confounder (that may have been unobserved).\n",
" * `refute_placebo_treatment`: Refute an estimate by replacing treatment with a randomly-generated placebo variable.\n",
" * `sensitivity_simulation`: Add an unobserved confounder for refutation (Simulation of an unobserved confounder).\n",
" * `sensitivity_linear_partial_r2`: Add an unobserved confounder for refutation (Linear partial R2 : Sensitivity Analysis for linear models).\n",
" * `sensitivity_non_parametric_partial_r2`: Add an unobserved confounder for refutation (Non-Parametric partial R2 based : Sensitivity Analyis for non-parametric models).\n",
" * `sensitivity_e_value`: Computes E-value for point estimate and confidence limits. Benchmarks E-values against measured confounders using Observed Covariate E-values. Plots E-values and Observed\n",
" Covariate E-values.\n",
" * `refute_dummy_outcome`: Refute an estimate by introducing a randomly generated confounder (that may have been unobserved)."
]
},
{
"cell_type": "markdown",
"metadata": {},
"source": [
"## Import Dependencies"
]
},
{
"cell_type": "code",
"execution_count": null,
"metadata": {},
"outputs": [],
"source": [
"# Functional API imports\n",
"from dowhy.causal_identifier import (\n",
" BackdoorAdjustment,\n",
" EstimandType,\n",
" identify_effect,\n",
" identify_effect_auto,\n",
" identify_effect_id,\n",
") # import effect identifier\n",
"from dowhy.causal_refuters import (\n",
" refute_bootstrap,\n",
" refute_data_subset,\n",
" refute_random_common_cause,\n",
" refute_placebo_treatment,\n",
" sensitivity_e_value,\n",
" sensitivity_linear_partial_r2,\n",
" sensitivity_non_parametric_partial_r2,\n",
" sensitivity_simulation,\n",
" refute_dummy_outcome,\n",
" refute_estimate,\n",
") # import refuters\n",
"\n",
"from dowhy.causal_graph import CausalGraph\n",
"\n",
"# Other imports required\n",
"from dowhy.datasets import linear_dataset\n",
"from dowhy import CausalModel # We still need this as we haven't created the functional API for effect estimation\n",
"import econml\n",
"\n",
"# Config dict to set the logging level\n",
"import logging.config\n",
"\n",
"DEFAULT_LOGGING = {\n",
" \"version\": 1,\n",
" \"disable_existing_loggers\": False,\n",
" \"loggers\": {\n",
" \"\": {\n",
" \"level\": \"WARN\",\n",
" },\n",
" },\n",
"}\n",
"\n",
"logging.config.dictConfig(DEFAULT_LOGGING)\n",
"# Disabling warnings output\n",
"import warnings\n",
"from sklearn.exceptions import DataConversionWarning\n",
"\n",
"warnings.filterwarnings(action=\"ignore\", category=DataConversionWarning)"
]
},
{
"cell_type": "markdown",
"metadata": {},
"source": [
"## Create the Datasets"
]
},
{
"cell_type": "code",
"execution_count": null,
"metadata": {},
"outputs": [],
"source": [
"# Parameters for creating the Dataset\n",
"TREATMENT_IS_BINARY = True\n",
"BETA = 10\n",
"NUM_SAMPLES = 500\n",
"NUM_CONFOUNDERS = 3\n",
"NUM_INSTRUMENTS = 2\n",
"NUM_EFFECT_MODIFIERS = 2\n",
"\n",
"# Creating a Linear Dataset with the given parameters\n",
"data = linear_dataset(\n",
" beta=BETA,\n",
" num_common_causes=NUM_CONFOUNDERS,\n",
" num_instruments=NUM_INSTRUMENTS,\n",
" num_effect_modifiers=NUM_EFFECT_MODIFIERS,\n",
" num_samples=NUM_SAMPLES,\n",
" treatment_is_binary=True,\n",
")\n",
"\n",
"treatment_name = data[\"treatment_name\"]\n",
"outcome_name = data[\"outcome_name\"]\n",
"\n",
"graph = CausalGraph(\n",
" treatment_name=treatment_name,\n",
" outcome_name=outcome_name,\n",
" graph=data[\"gml_graph\"],\n",
" effect_modifier_names=data[\"effect_modifier_names\"],\n",
" common_cause_names=data[\"common_causes_names\"],\n",
" observed_node_names=data[\"df\"].columns.tolist(),\n",
")"
]
},
{
"cell_type": "markdown",
"metadata": {},
"source": [
"## Identify Effect - Functional API (Preview)"
]
},
{
"cell_type": "code",
"execution_count": null,
"metadata": {},
"outputs": [],
"source": [
"# Default identify_effect call example:\n",
"identified_estimand = identify_effect(graph, treatment_name, outcome_name)\n",
"\n",
"# auto_identify_effect example with extra parameters:\n",
"identified_estimand_auto = identify_effect_auto(\n",
" graph,\n",
" treatment_name,\n",
" outcome_name,\n",
" estimand_type=EstimandType.NONPARAMETRIC_ATE,\n",
" backdoor_adjustment=BackdoorAdjustment.BACKDOOR_EFFICIENT,\n",
")\n",
"\n",
"# id_identify_effect example:\n",
"identified_estimand_id = identify_effect_id(\n",
" graph, treatment_name, outcome_name\n",
") # Note that the return type for id_identify_effect is IDExpression and not IdentifiedEstimand\n",
"\n",
"print(identified_estimand)"
]
},
{
"cell_type": "markdown",
"metadata": {},
"source": [
"## Estimate Effect\n",
"\n",
"Estimate Effect is performed by using the causal_model api as there is not functional equivalent yet"
]
},
{
"cell_type": "code",
"execution_count": null,
"metadata": {},
"outputs": [],
"source": [
"# We will still need CausalModel as the Functional Effect Estimation is still Work-In-Progress\n",
"causal_model = CausalModel(data=data[\"df\"], treatment=treatment_name, outcome=outcome_name, graph=data[\"gml_graph\"])\n",
"\n",
"estimate = causal_model.estimate_effect(identified_estimand, method_name=\"backdoor.propensity_score_matching\")\n",
"\n",
"print(estimate)"
]
},
{
"cell_type": "markdown",
"metadata": {},
"source": [
"## Refute Estimate - Functional API (Preview)"
]
},
{
"cell_type": "code",
"execution_count": null,
"metadata": {},
"outputs": [],
"source": [
"# You can call the refute_estimate function for executing several refuters using default parameters\n",
"# Currently this function does not support sensitivity_* functions\n",
"refutation_results = refute_estimate(\n",
" data[\"df\"],\n",
" identified_estimand,\n",
" estimate,\n",
" treatment_name=treatment_name,\n",
" outcome_name=outcome_name,\n",
" refuters=[refute_bootstrap, refute_data_subset],\n",
")\n",
"\n",
"for result in refutation_results:\n",
" print(result)\n",
"\n",
"# Or you can execute refute methods directly\n",
"# You can change the refute_bootstrap - refute_data_subset for any of the other refuters and add the missing parameters\n",
"\n",
"bootstrap_refutation = refute_bootstrap(data[\"df\"], identified_estimand, estimate)\n",
"print(bootstrap_refutation)\n",
"\n",
"data_subset_refutation = refute_data_subset(data[\"df\"], identified_estimand, estimate)\n",
"print(data_subset_refutation)"
]
},
{
"cell_type": "markdown",
"metadata": {},
"source": [
"# Backwards Compatibility\n",
"\n",
"This section shows replicating the same results using only the CausalModel API"
]
},
{
"cell_type": "markdown",
"metadata": {},
"source": [
"## Identify Effect"
]
},
{
"cell_type": "code",
"execution_count": null,
"metadata": {},
"outputs": [],
"source": [
"identified_estimand_causal_model_api = (\n",
" causal_model.identify_effect()\n",
") # graph, treatment and outcome comes from the causal_model object\n",
"\n",
"print(identified_estimand_causal_model_api)"
]
},
{
"cell_type": "markdown",
"metadata": {},
"source": [
"## Estimate Effect"
]
},
{
"cell_type": "code",
"execution_count": null,
"metadata": {},
"outputs": [],
"source": [
"estimate_causal_model_api = causal_model.estimate_effect(\n",
" identified_estimand, method_name=\"backdoor.propensity_score_matching\"\n",
")\n",
"\n",
"print(estimate)"
]
},
{
"cell_type": "markdown",
"metadata": {},
"source": [
"## Refute Estimate"
]
},
{
"cell_type": "code",
"execution_count": null,
"metadata": {},
"outputs": [],
"source": [
"bootstrap_refutation_causal_model_api = causal_model.refute_estimate(identified_estimand, estimate, \"bootstrap_refuter\")\n",
"print(bootstrap_refutation_causal_model_api)\n",
"\n",
"data_subset_refutation_causal_model_api = causal_model.refute_estimate(\n",
" identified_estimand, estimate, \"data_subset_refuter\"\n",
")\n",
"print(data_subset_refutation_causal_model_api)"
]
}
],
"metadata": {
"kernelspec": {
"display_name": "Python 3.8.10 ('dowhy-_zBapv7Q-py3.8')",
"language": "python",
"name": "python3"
},
"language_info": {
"codemirror_mode": {
"name": "ipython",
"version": 3
},
"file_extension": ".py",
"mimetype": "text/x-python",
"name": "python",
"nbconvert_exporter": "python",
"pygments_lexer": "ipython3",
"version": "3.8.10"
},
"toc": {
"base_numbering": 1,
"nav_menu": {},
"number_sections": false,
"sideBar": true,
"skip_h1_title": true,
"title_cell": "Table of Contents",
"title_sidebar": "Contents",
"toc_cell": false,
"toc_position": {},
"toc_section_display": true,
"toc_window_display": false
},
"vscode": {
"interpreter": {
"hash": "dcb481ad5d98e2afacd650b2c07afac80a299b7b701b553e333fc82865502500"
}
}
},
"nbformat": 4,
"nbformat_minor": 4
}
| {
"cells": [
{
"cell_type": "markdown",
"metadata": {},
"source": [
"# Functional API Preview\n",
"\n",
"This notebook is part of a set of notebooks that provides a preview of the proposed functional API for dowhy. For details on the new API for DoWhy, check out https://github.com/py-why/dowhy/wiki/API-proposal-for-v1 It is a work-in-progress and is updated as we add new functionality. We welcome your feedback through Discord or on the Discussions page.\n",
"This functional API is designed with backwards compatibility. So both the old and new API will continue to co-exist and work for the immediate new releases. Gradually the old API using CausalModel will be deprecated in favor of the new API. \n",
"\n",
"The current Functional API covers:\n",
"* Identify Effect:\n",
" * `identify_effect(...)`: Run the identify effect algorithm using defaults just provide the graph, treatment and outcome.\n",
" * `auto_identify_effect(...)`: More configurable version of `identify_effect(...)`.\n",
" * `id_identify_effect(...)`: Identify Effect using the ID-Algorithm.\n",
"* Refute Estimate:\n",
" * `refute_estimate`: Function to run a set of the refuters below with the default parameters.\n",
" * `refute_bootstrap`: Refute an estimate by running it on a random sample of the data containing measurement error in the confounders.\n",
" * `refute_data_subset`: Refute an estimate by rerunning it on a random subset of the original data.\n",
" * `refute_random_common_cause`: Refute an estimate by introducing a randomly generated confounder (that may have been unobserved).\n",
" * `refute_placebo_treatment`: Refute an estimate by replacing treatment with a randomly-generated placebo variable.\n",
" * `sensitivity_simulation`: Add an unobserved confounder for refutation (Simulation of an unobserved confounder).\n",
" * `sensitivity_linear_partial_r2`: Add an unobserved confounder for refutation (Linear partial R2 : Sensitivity Analysis for linear models).\n",
" * `sensitivity_non_parametric_partial_r2`: Add an unobserved confounder for refutation (Non-Parametric partial R2 based : Sensitivity Analyis for non-parametric models).\n",
" * `sensitivity_e_value`: Computes E-value for point estimate and confidence limits. Benchmarks E-values against measured confounders using Observed Covariate E-values. Plots E-values and Observed\n",
" Covariate E-values.\n",
" * `refute_dummy_outcome`: Refute an estimate by introducing a randomly generated confounder (that may have been unobserved)."
]
},
{
"cell_type": "markdown",
"metadata": {},
"source": [
"## Import Dependencies"
]
},
{
"cell_type": "code",
"execution_count": null,
"metadata": {},
"outputs": [],
"source": [
"# Functional API imports\n",
"from dowhy.causal_identifier import (\n",
" BackdoorAdjustment,\n",
" EstimandType,\n",
" identify_effect,\n",
" identify_effect_auto,\n",
" identify_effect_id,\n",
") # import effect identifier\n",
"from dowhy.causal_refuters import (\n",
" refute_bootstrap,\n",
" refute_data_subset,\n",
" refute_random_common_cause,\n",
" refute_placebo_treatment,\n",
" sensitivity_e_value,\n",
" sensitivity_linear_partial_r2,\n",
" sensitivity_non_parametric_partial_r2,\n",
" sensitivity_simulation,\n",
" refute_dummy_outcome,\n",
" refute_estimate,\n",
") # import refuters\n",
"\n",
"from dowhy.causal_estimators.propensity_score_matching_estimator import PropensityScoreMatchingEstimator\n",
"\n",
"from dowhy.utils.api import parse_state\n",
"\n",
"from dowhy.causal_estimator import estimate_effect # Estimate effect function\n",
"\n",
"from dowhy.causal_graph import CausalGraph\n",
"\n",
"# Other imports required\n",
"from dowhy.datasets import linear_dataset\n",
"from dowhy import CausalModel # We still need this as we haven't created the functional API for effect estimation\n",
"import econml\n",
"\n",
"# Config dict to set the logging level\n",
"import logging.config\n",
"\n",
"DEFAULT_LOGGING = {\n",
" \"version\": 1,\n",
" \"disable_existing_loggers\": False,\n",
" \"loggers\": {\n",
" \"\": {\n",
" \"level\": \"WARN\",\n",
" },\n",
" },\n",
"}\n",
"\n",
"logging.config.dictConfig(DEFAULT_LOGGING)\n",
"# Disabling warnings output\n",
"import warnings\n",
"from sklearn.exceptions import DataConversionWarning\n",
"\n",
"warnings.filterwarnings(action=\"ignore\", category=DataConversionWarning)"
]
},
{
"cell_type": "markdown",
"metadata": {},
"source": [
"## Create the Datasets"
]
},
{
"cell_type": "code",
"execution_count": null,
"metadata": {},
"outputs": [],
"source": [
"# Parameters for creating the Dataset\n",
"TREATMENT_IS_BINARY = True\n",
"BETA = 10\n",
"NUM_SAMPLES = 500\n",
"NUM_CONFOUNDERS = 3\n",
"NUM_INSTRUMENTS = 2\n",
"NUM_EFFECT_MODIFIERS = 2\n",
"\n",
"# Creating a Linear Dataset with the given parameters\n",
"data = linear_dataset(\n",
" beta=BETA,\n",
" num_common_causes=NUM_CONFOUNDERS,\n",
" num_instruments=NUM_INSTRUMENTS,\n",
" num_effect_modifiers=NUM_EFFECT_MODIFIERS,\n",
" num_samples=NUM_SAMPLES,\n",
" treatment_is_binary=True,\n",
")\n",
"\n",
"treatment_name = data[\"treatment_name\"]\n",
"print(treatment_name)\n",
"outcome_name = data[\"outcome_name\"]\n",
"print(outcome_name)\n",
"\n",
"graph = CausalGraph(\n",
" treatment_name=treatment_name,\n",
" outcome_name=outcome_name,\n",
" graph=data[\"gml_graph\"],\n",
" effect_modifier_names=data[\"effect_modifier_names\"],\n",
" common_cause_names=data[\"common_causes_names\"],\n",
" observed_node_names=data[\"df\"].columns.tolist(),\n",
")"
]
},
{
"cell_type": "markdown",
"metadata": {},
"source": [
"## Identify Effect - Functional API (Preview)"
]
},
{
"cell_type": "code",
"execution_count": null,
"metadata": {},
"outputs": [],
"source": [
"# Default identify_effect call example:\n",
"identified_estimand = identify_effect(graph, treatment_name, outcome_name)\n",
"\n",
"# auto_identify_effect example with extra parameters:\n",
"identified_estimand_auto = identify_effect_auto(\n",
" graph,\n",
" treatment_name,\n",
" outcome_name,\n",
" estimand_type=EstimandType.NONPARAMETRIC_ATE,\n",
" backdoor_adjustment=BackdoorAdjustment.BACKDOOR_EFFICIENT,\n",
")\n",
"\n",
"# id_identify_effect example:\n",
"identified_estimand_id = identify_effect_id(\n",
" graph, treatment_name, outcome_name\n",
") # Note that the return type for id_identify_effect is IDExpression and not IdentifiedEstimand\n",
"\n",
"print(identified_estimand)"
]
},
{
"cell_type": "markdown",
"metadata": {},
"source": [
"## Estimate Effect - Functional API (Preview)"
]
},
{
"cell_type": "code",
"execution_count": null,
"metadata": {},
"outputs": [],
"source": [
"# Basic Estimate Effect function\n",
"\n",
"\n",
"propensity_score_estimator = PropensityScoreMatchingEstimator(\n",
" data=data[\"df\"],\n",
" identified_estimand=identified_estimand,\n",
" treatment=treatment_name,\n",
" outcome=outcome_name,\n",
" control_value=0,\n",
" treatment_value=1,\n",
" test_significance=None,\n",
" evaluate_effect_strength=False,\n",
" confidence_intervals=False,\n",
" target_units=\"ate\",\n",
" effect_modifiers=graph.get_effect_modifiers(treatment_name, outcome_name),\n",
")\n",
"\n",
"estimate = estimate_effect(\n",
" treatment=treatment_name,\n",
" outcome=outcome_name,\n",
" identified_estimand=identified_estimand,\n",
" identifier_name=\"backdoor\",\n",
" method=propensity_score_estimator,\n",
")\n",
"\n",
"print(estimate)"
]
},
{
"cell_type": "markdown",
"metadata": {},
"source": [
"## Refute Estimate - Functional API (Preview)"
]
},
{
"cell_type": "code",
"execution_count": null,
"metadata": {},
"outputs": [],
"source": [
"# You can call the refute_estimate function for executing several refuters using default parameters\n",
"# Currently this function does not support sensitivity_* functions\n",
"refutation_results = refute_estimate(\n",
" data[\"df\"],\n",
" identified_estimand,\n",
" estimate,\n",
" treatment_name=treatment_name,\n",
" outcome_name=outcome_name,\n",
" refuters=[refute_bootstrap, refute_data_subset],\n",
")\n",
"\n",
"for result in refutation_results:\n",
" print(result)\n",
"\n",
"# Or you can execute refute methods directly\n",
"# You can change the refute_bootstrap - refute_data_subset for any of the other refuters and add the missing parameters\n",
"\n",
"bootstrap_refutation = refute_bootstrap(data[\"df\"], identified_estimand, estimate)\n",
"print(bootstrap_refutation)\n",
"\n",
"data_subset_refutation = refute_data_subset(data[\"df\"], identified_estimand, estimate)\n",
"print(data_subset_refutation)"
]
},
{
"cell_type": "markdown",
"metadata": {},
"source": [
"# Backwards Compatibility\n",
"\n",
"This section shows replicating the same results using only the CausalModel API"
]
},
{
"cell_type": "code",
"execution_count": null,
"metadata": {},
"outputs": [],
"source": [
"# Create Causal Model\n",
"causal_model = CausalModel(data=data[\"df\"], treatment=treatment_name, outcome=outcome_name, graph=data[\"gml_graph\"])\n"
]
},
{
"cell_type": "markdown",
"metadata": {},
"source": [
"## Identify Effect"
]
},
{
"cell_type": "code",
"execution_count": null,
"metadata": {},
"outputs": [],
"source": [
"identified_estimand_causal_model_api = (\n",
" causal_model.identify_effect()\n",
") # graph, treatment and outcome comes from the causal_model object\n",
"\n",
"print(identified_estimand_causal_model_api)"
]
},
{
"cell_type": "markdown",
"metadata": {},
"source": [
"## Estimate Effect"
]
},
{
"cell_type": "code",
"execution_count": null,
"metadata": {},
"outputs": [],
"source": [
"estimate_causal_model_api = causal_model.estimate_effect(\n",
" identified_estimand, method_name=\"backdoor.propensity_score_matching\"\n",
")\n",
"\n",
"print(estimate)"
]
},
{
"cell_type": "markdown",
"metadata": {},
"source": [
"## Refute Estimate"
]
},
{
"cell_type": "code",
"execution_count": null,
"metadata": {},
"outputs": [],
"source": [
"bootstrap_refutation_causal_model_api = causal_model.refute_estimate(identified_estimand, estimate, \"bootstrap_refuter\")\n",
"print(bootstrap_refutation_causal_model_api)\n",
"\n",
"data_subset_refutation_causal_model_api = causal_model.refute_estimate(\n",
" identified_estimand, estimate, \"data_subset_refuter\"\n",
")\n",
"print(data_subset_refutation_causal_model_api)"
]
},
{
"cell_type": "code",
"execution_count": null,
"metadata": {},
"outputs": [],
"source": []
}
],
"metadata": {
"kernelspec": {
"display_name": "Python 3.8.10 ('dowhy-_zBapv7Q-py3.8')",
"language": "python",
"name": "python3"
},
"language_info": {
"codemirror_mode": {
"name": "ipython",
"version": 3
},
"file_extension": ".py",
"mimetype": "text/x-python",
"name": "python",
"nbconvert_exporter": "python",
"pygments_lexer": "ipython3",
"version": "3.8.10"
},
"toc": {
"base_numbering": 1,
"nav_menu": {},
"number_sections": false,
"sideBar": true,
"skip_h1_title": true,
"title_cell": "Table of Contents",
"title_sidebar": "Contents",
"toc_cell": false,
"toc_position": {},
"toc_section_display": true,
"toc_window_display": false
},
"vscode": {
"interpreter": {
"hash": "dcb481ad5d98e2afacd650b2c07afac80a299b7b701b553e333fc82865502500"
}
}
},
"nbformat": 4,
"nbformat_minor": 4
}
| andresmor-ms | 2044d216c322a4b32c6eadce5da7d83463f19c2f | 05bfa49dacf0061988c96c6f3e3756219df5422a | Or maybe I'm misunderstanding this, and the whole point of the _function_ `estimate_effect` is to take care of calling the _methods_ `fit` and `estimate_effect`. Then never mind my comment above. | petergtz | 276 |
py-why/dowhy | 693 | Functional api/estimate effect function | #### Estimate Effect function
* Refactors effect estimation into a separate `estimate_effect` function while keeping backwards compatibility
#### TODO (future PRs):
* Add a `fit(...)` method to estimators - move data-related parameters from the constructor to the `fit(...)` method (rough sketch below)
* Refactor code to avoid `**kwargs` in `__init__(...)` constructors
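A rough, purely illustrative sketch of where the `fit(...)` TODO is heading (hypothetical names and signatures, not the final API):
```python
# Hypothetical future API: data-related arguments move out of __init__ into fit()
estimator = PropensityScoreMatchingEstimator(
    identified_estimand=identified_estimand,
    treatment=treatment_name,
    outcome=outcome_name,
)
estimator.fit(data=data["df"])  # fit() would receive the data here
estimate = estimator.estimate_effect(control_value=0, treatment_value=1)
```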
| null | 2022-10-18 15:49:21+00:00 | 2022-10-25 17:02:02+00:00 | docs/source/example_notebooks/dowhy_functional_api.ipynb | {
"cells": [
{
"cell_type": "markdown",
"metadata": {},
"source": [
"# Functional API Preview\n",
"\n",
"This notebook is part of a set of notebooks that provides a preview of the proposed functional API for dowhy. For details on the new API for DoWhy, check out https://github.com/py-why/dowhy/wiki/API-proposal-for-v1 It is a work-in-progress and is updated as we add new functionality. We welcome your feedback through Discord or on the Discussions page.\n",
"This functional API is designed with backwards compatibility. So both the old and new API will continue to co-exist and work for the immediate new releases. Gradually the old API using CausalModel will be deprecated in favor of the new API. \n",
"\n",
"The current Functional API covers:\n",
"* Identify Effect:\n",
" * `identify_effect(...)`: Run the identify effect algorithm using defaults just provide the graph, treatment and outcome.\n",
" * `auto_identify_effect(...)`: More configurable version of `identify_effect(...)`.\n",
" * `id_identify_effect(...)`: Identify Effect using the ID-Algorithm.\n",
"* Refute Estimate:\n",
" * `refute_estimate`: Function to run a set of the refuters below with the default parameters.\n",
" * `refute_bootstrap`: Refute an estimate by running it on a random sample of the data containing measurement error in the confounders.\n",
" * `refute_data_subset`: Refute an estimate by rerunning it on a random subset of the original data.\n",
" * `refute_random_common_cause`: Refute an estimate by introducing a randomly generated confounder (that may have been unobserved).\n",
" * `refute_placebo_treatment`: Refute an estimate by replacing treatment with a randomly-generated placebo variable.\n",
" * `sensitivity_simulation`: Add an unobserved confounder for refutation (Simulation of an unobserved confounder).\n",
" * `sensitivity_linear_partial_r2`: Add an unobserved confounder for refutation (Linear partial R2 : Sensitivity Analysis for linear models).\n",
" * `sensitivity_non_parametric_partial_r2`: Add an unobserved confounder for refutation (Non-Parametric partial R2 based : Sensitivity Analyis for non-parametric models).\n",
" * `sensitivity_e_value`: Computes E-value for point estimate and confidence limits. Benchmarks E-values against measured confounders using Observed Covariate E-values. Plots E-values and Observed\n",
" Covariate E-values.\n",
" * `refute_dummy_outcome`: Refute an estimate by introducing a randomly generated confounder (that may have been unobserved)."
]
},
{
"cell_type": "markdown",
"metadata": {},
"source": [
"## Import Dependencies"
]
},
{
"cell_type": "code",
"execution_count": null,
"metadata": {},
"outputs": [],
"source": [
"# Functional API imports\n",
"from dowhy.causal_identifier import (\n",
" BackdoorAdjustment,\n",
" EstimandType,\n",
" identify_effect,\n",
" identify_effect_auto,\n",
" identify_effect_id,\n",
") # import effect identifier\n",
"from dowhy.causal_refuters import (\n",
" refute_bootstrap,\n",
" refute_data_subset,\n",
" refute_random_common_cause,\n",
" refute_placebo_treatment,\n",
" sensitivity_e_value,\n",
" sensitivity_linear_partial_r2,\n",
" sensitivity_non_parametric_partial_r2,\n",
" sensitivity_simulation,\n",
" refute_dummy_outcome,\n",
" refute_estimate,\n",
") # import refuters\n",
"\n",
"from dowhy.causal_graph import CausalGraph\n",
"\n",
"# Other imports required\n",
"from dowhy.datasets import linear_dataset\n",
"from dowhy import CausalModel # We still need this as we haven't created the functional API for effect estimation\n",
"import econml\n",
"\n",
"# Config dict to set the logging level\n",
"import logging.config\n",
"\n",
"DEFAULT_LOGGING = {\n",
" \"version\": 1,\n",
" \"disable_existing_loggers\": False,\n",
" \"loggers\": {\n",
" \"\": {\n",
" \"level\": \"WARN\",\n",
" },\n",
" },\n",
"}\n",
"\n",
"logging.config.dictConfig(DEFAULT_LOGGING)\n",
"# Disabling warnings output\n",
"import warnings\n",
"from sklearn.exceptions import DataConversionWarning\n",
"\n",
"warnings.filterwarnings(action=\"ignore\", category=DataConversionWarning)"
]
},
{
"cell_type": "markdown",
"metadata": {},
"source": [
"## Create the Datasets"
]
},
{
"cell_type": "code",
"execution_count": null,
"metadata": {},
"outputs": [],
"source": [
"# Parameters for creating the Dataset\n",
"TREATMENT_IS_BINARY = True\n",
"BETA = 10\n",
"NUM_SAMPLES = 500\n",
"NUM_CONFOUNDERS = 3\n",
"NUM_INSTRUMENTS = 2\n",
"NUM_EFFECT_MODIFIERS = 2\n",
"\n",
"# Creating a Linear Dataset with the given parameters\n",
"data = linear_dataset(\n",
" beta=BETA,\n",
" num_common_causes=NUM_CONFOUNDERS,\n",
" num_instruments=NUM_INSTRUMENTS,\n",
" num_effect_modifiers=NUM_EFFECT_MODIFIERS,\n",
" num_samples=NUM_SAMPLES,\n",
" treatment_is_binary=True,\n",
")\n",
"\n",
"treatment_name = data[\"treatment_name\"]\n",
"outcome_name = data[\"outcome_name\"]\n",
"\n",
"graph = CausalGraph(\n",
" treatment_name=treatment_name,\n",
" outcome_name=outcome_name,\n",
" graph=data[\"gml_graph\"],\n",
" effect_modifier_names=data[\"effect_modifier_names\"],\n",
" common_cause_names=data[\"common_causes_names\"],\n",
" observed_node_names=data[\"df\"].columns.tolist(),\n",
")"
]
},
{
"cell_type": "markdown",
"metadata": {},
"source": [
"## Identify Effect - Functional API (Preview)"
]
},
{
"cell_type": "code",
"execution_count": null,
"metadata": {},
"outputs": [],
"source": [
"# Default identify_effect call example:\n",
"identified_estimand = identify_effect(graph, treatment_name, outcome_name)\n",
"\n",
"# auto_identify_effect example with extra parameters:\n",
"identified_estimand_auto = identify_effect_auto(\n",
" graph,\n",
" treatment_name,\n",
" outcome_name,\n",
" estimand_type=EstimandType.NONPARAMETRIC_ATE,\n",
" backdoor_adjustment=BackdoorAdjustment.BACKDOOR_EFFICIENT,\n",
")\n",
"\n",
"# id_identify_effect example:\n",
"identified_estimand_id = identify_effect_id(\n",
" graph, treatment_name, outcome_name\n",
") # Note that the return type for id_identify_effect is IDExpression and not IdentifiedEstimand\n",
"\n",
"print(identified_estimand)"
]
},
{
"cell_type": "markdown",
"metadata": {},
"source": [
"## Estimate Effect\n",
"\n",
"Estimate Effect is performed by using the causal_model api as there is not functional equivalent yet"
]
},
{
"cell_type": "code",
"execution_count": null,
"metadata": {},
"outputs": [],
"source": [
"# We will still need CausalModel as the Functional Effect Estimation is still Work-In-Progress\n",
"causal_model = CausalModel(data=data[\"df\"], treatment=treatment_name, outcome=outcome_name, graph=data[\"gml_graph\"])\n",
"\n",
"estimate = causal_model.estimate_effect(identified_estimand, method_name=\"backdoor.propensity_score_matching\")\n",
"\n",
"print(estimate)"
]
},
{
"cell_type": "markdown",
"metadata": {},
"source": [
"## Refute Estimate - Functional API (Preview)"
]
},
{
"cell_type": "code",
"execution_count": null,
"metadata": {},
"outputs": [],
"source": [
"# You can call the refute_estimate function for executing several refuters using default parameters\n",
"# Currently this function does not support sensitivity_* functions\n",
"refutation_results = refute_estimate(\n",
" data[\"df\"],\n",
" identified_estimand,\n",
" estimate,\n",
" treatment_name=treatment_name,\n",
" outcome_name=outcome_name,\n",
" refuters=[refute_bootstrap, refute_data_subset],\n",
")\n",
"\n",
"for result in refutation_results:\n",
" print(result)\n",
"\n",
"# Or you can execute refute methods directly\n",
"# You can change the refute_bootstrap - refute_data_subset for any of the other refuters and add the missing parameters\n",
"\n",
"bootstrap_refutation = refute_bootstrap(data[\"df\"], identified_estimand, estimate)\n",
"print(bootstrap_refutation)\n",
"\n",
"data_subset_refutation = refute_data_subset(data[\"df\"], identified_estimand, estimate)\n",
"print(data_subset_refutation)"
]
},
{
"cell_type": "markdown",
"metadata": {},
"source": [
"# Backwards Compatibility\n",
"\n",
"This section shows replicating the same results using only the CausalModel API"
]
},
{
"cell_type": "markdown",
"metadata": {},
"source": [
"## Identify Effect"
]
},
{
"cell_type": "code",
"execution_count": null,
"metadata": {},
"outputs": [],
"source": [
"identified_estimand_causal_model_api = (\n",
" causal_model.identify_effect()\n",
") # graph, treatment and outcome comes from the causal_model object\n",
"\n",
"print(identified_estimand_causal_model_api)"
]
},
{
"cell_type": "markdown",
"metadata": {},
"source": [
"## Estimate Effect"
]
},
{
"cell_type": "code",
"execution_count": null,
"metadata": {},
"outputs": [],
"source": [
"estimate_causal_model_api = causal_model.estimate_effect(\n",
" identified_estimand, method_name=\"backdoor.propensity_score_matching\"\n",
")\n",
"\n",
"print(estimate)"
]
},
{
"cell_type": "markdown",
"metadata": {},
"source": [
"## Refute Estimate"
]
},
{
"cell_type": "code",
"execution_count": null,
"metadata": {},
"outputs": [],
"source": [
"bootstrap_refutation_causal_model_api = causal_model.refute_estimate(identified_estimand, estimate, \"bootstrap_refuter\")\n",
"print(bootstrap_refutation_causal_model_api)\n",
"\n",
"data_subset_refutation_causal_model_api = causal_model.refute_estimate(\n",
" identified_estimand, estimate, \"data_subset_refuter\"\n",
")\n",
"print(data_subset_refutation_causal_model_api)"
]
}
],
"metadata": {
"kernelspec": {
"display_name": "Python 3.8.10 ('dowhy-_zBapv7Q-py3.8')",
"language": "python",
"name": "python3"
},
"language_info": {
"codemirror_mode": {
"name": "ipython",
"version": 3
},
"file_extension": ".py",
"mimetype": "text/x-python",
"name": "python",
"nbconvert_exporter": "python",
"pygments_lexer": "ipython3",
"version": "3.8.10"
},
"toc": {
"base_numbering": 1,
"nav_menu": {},
"number_sections": false,
"sideBar": true,
"skip_h1_title": true,
"title_cell": "Table of Contents",
"title_sidebar": "Contents",
"toc_cell": false,
"toc_position": {},
"toc_section_display": true,
"toc_window_display": false
},
"vscode": {
"interpreter": {
"hash": "dcb481ad5d98e2afacd650b2c07afac80a299b7b701b553e333fc82865502500"
}
}
},
"nbformat": 4,
"nbformat_minor": 4
}
| {
"cells": [
{
"cell_type": "markdown",
"metadata": {},
"source": [
"# Functional API Preview\n",
"\n",
"This notebook is part of a set of notebooks that provides a preview of the proposed functional API for dowhy. For details on the new API for DoWhy, check out https://github.com/py-why/dowhy/wiki/API-proposal-for-v1 It is a work-in-progress and is updated as we add new functionality. We welcome your feedback through Discord or on the Discussions page.\n",
"This functional API is designed with backwards compatibility. So both the old and new API will continue to co-exist and work for the immediate new releases. Gradually the old API using CausalModel will be deprecated in favor of the new API. \n",
"\n",
"The current Functional API covers:\n",
"* Identify Effect:\n",
" * `identify_effect(...)`: Run the identify effect algorithm using defaults just provide the graph, treatment and outcome.\n",
" * `auto_identify_effect(...)`: More configurable version of `identify_effect(...)`.\n",
" * `id_identify_effect(...)`: Identify Effect using the ID-Algorithm.\n",
"* Refute Estimate:\n",
" * `refute_estimate`: Function to run a set of the refuters below with the default parameters.\n",
" * `refute_bootstrap`: Refute an estimate by running it on a random sample of the data containing measurement error in the confounders.\n",
" * `refute_data_subset`: Refute an estimate by rerunning it on a random subset of the original data.\n",
" * `refute_random_common_cause`: Refute an estimate by introducing a randomly generated confounder (that may have been unobserved).\n",
" * `refute_placebo_treatment`: Refute an estimate by replacing treatment with a randomly-generated placebo variable.\n",
" * `sensitivity_simulation`: Add an unobserved confounder for refutation (Simulation of an unobserved confounder).\n",
" * `sensitivity_linear_partial_r2`: Add an unobserved confounder for refutation (Linear partial R2 : Sensitivity Analysis for linear models).\n",
" * `sensitivity_non_parametric_partial_r2`: Add an unobserved confounder for refutation (Non-Parametric partial R2 based : Sensitivity Analyis for non-parametric models).\n",
" * `sensitivity_e_value`: Computes E-value for point estimate and confidence limits. Benchmarks E-values against measured confounders using Observed Covariate E-values. Plots E-values and Observed\n",
" Covariate E-values.\n",
" * `refute_dummy_outcome`: Refute an estimate by introducing a randomly generated confounder (that may have been unobserved)."
]
},
{
"cell_type": "markdown",
"metadata": {},
"source": [
"## Import Dependencies"
]
},
{
"cell_type": "code",
"execution_count": null,
"metadata": {},
"outputs": [],
"source": [
"# Functional API imports\n",
"from dowhy.causal_identifier import (\n",
" BackdoorAdjustment,\n",
" EstimandType,\n",
" identify_effect,\n",
" identify_effect_auto,\n",
" identify_effect_id,\n",
") # import effect identifier\n",
"from dowhy.causal_refuters import (\n",
" refute_bootstrap,\n",
" refute_data_subset,\n",
" refute_random_common_cause,\n",
" refute_placebo_treatment,\n",
" sensitivity_e_value,\n",
" sensitivity_linear_partial_r2,\n",
" sensitivity_non_parametric_partial_r2,\n",
" sensitivity_simulation,\n",
" refute_dummy_outcome,\n",
" refute_estimate,\n",
") # import refuters\n",
"\n",
"from dowhy.causal_estimators.propensity_score_matching_estimator import PropensityScoreMatchingEstimator\n",
"\n",
"from dowhy.utils.api import parse_state\n",
"\n",
"from dowhy.causal_estimator import estimate_effect # Estimate effect function\n",
"\n",
"from dowhy.causal_graph import CausalGraph\n",
"\n",
"# Other imports required\n",
"from dowhy.datasets import linear_dataset\n",
"from dowhy import CausalModel # We still need this as we haven't created the functional API for effect estimation\n",
"import econml\n",
"\n",
"# Config dict to set the logging level\n",
"import logging.config\n",
"\n",
"DEFAULT_LOGGING = {\n",
" \"version\": 1,\n",
" \"disable_existing_loggers\": False,\n",
" \"loggers\": {\n",
" \"\": {\n",
" \"level\": \"WARN\",\n",
" },\n",
" },\n",
"}\n",
"\n",
"logging.config.dictConfig(DEFAULT_LOGGING)\n",
"# Disabling warnings output\n",
"import warnings\n",
"from sklearn.exceptions import DataConversionWarning\n",
"\n",
"warnings.filterwarnings(action=\"ignore\", category=DataConversionWarning)"
]
},
{
"cell_type": "markdown",
"metadata": {},
"source": [
"## Create the Datasets"
]
},
{
"cell_type": "code",
"execution_count": null,
"metadata": {},
"outputs": [],
"source": [
"# Parameters for creating the Dataset\n",
"TREATMENT_IS_BINARY = True\n",
"BETA = 10\n",
"NUM_SAMPLES = 500\n",
"NUM_CONFOUNDERS = 3\n",
"NUM_INSTRUMENTS = 2\n",
"NUM_EFFECT_MODIFIERS = 2\n",
"\n",
"# Creating a Linear Dataset with the given parameters\n",
"data = linear_dataset(\n",
" beta=BETA,\n",
" num_common_causes=NUM_CONFOUNDERS,\n",
" num_instruments=NUM_INSTRUMENTS,\n",
" num_effect_modifiers=NUM_EFFECT_MODIFIERS,\n",
" num_samples=NUM_SAMPLES,\n",
" treatment_is_binary=True,\n",
")\n",
"\n",
"treatment_name = data[\"treatment_name\"]\n",
"print(treatment_name)\n",
"outcome_name = data[\"outcome_name\"]\n",
"print(outcome_name)\n",
"\n",
"graph = CausalGraph(\n",
" treatment_name=treatment_name,\n",
" outcome_name=outcome_name,\n",
" graph=data[\"gml_graph\"],\n",
" effect_modifier_names=data[\"effect_modifier_names\"],\n",
" common_cause_names=data[\"common_causes_names\"],\n",
" observed_node_names=data[\"df\"].columns.tolist(),\n",
")"
]
},
{
"cell_type": "markdown",
"metadata": {},
"source": [
"## Identify Effect - Functional API (Preview)"
]
},
{
"cell_type": "code",
"execution_count": null,
"metadata": {},
"outputs": [],
"source": [
"# Default identify_effect call example:\n",
"identified_estimand = identify_effect(graph, treatment_name, outcome_name)\n",
"\n",
"# auto_identify_effect example with extra parameters:\n",
"identified_estimand_auto = identify_effect_auto(\n",
" graph,\n",
" treatment_name,\n",
" outcome_name,\n",
" estimand_type=EstimandType.NONPARAMETRIC_ATE,\n",
" backdoor_adjustment=BackdoorAdjustment.BACKDOOR_EFFICIENT,\n",
")\n",
"\n",
"# id_identify_effect example:\n",
"identified_estimand_id = identify_effect_id(\n",
" graph, treatment_name, outcome_name\n",
") # Note that the return type for id_identify_effect is IDExpression and not IdentifiedEstimand\n",
"\n",
"print(identified_estimand)"
]
},
{
"cell_type": "markdown",
"metadata": {},
"source": [
"## Estimate Effect - Functional API (Preview)"
]
},
{
"cell_type": "code",
"execution_count": null,
"metadata": {},
"outputs": [],
"source": [
"# Basic Estimate Effect function\n",
"\n",
"\n",
"propensity_score_estimator = PropensityScoreMatchingEstimator(\n",
" data=data[\"df\"],\n",
" identified_estimand=identified_estimand,\n",
" treatment=treatment_name,\n",
" outcome=outcome_name,\n",
" control_value=0,\n",
" treatment_value=1,\n",
" test_significance=None,\n",
" evaluate_effect_strength=False,\n",
" confidence_intervals=False,\n",
" target_units=\"ate\",\n",
" effect_modifiers=graph.get_effect_modifiers(treatment_name, outcome_name),\n",
")\n",
"\n",
"estimate = estimate_effect(\n",
" treatment=treatment_name,\n",
" outcome=outcome_name,\n",
" identified_estimand=identified_estimand,\n",
" identifier_name=\"backdoor\",\n",
" method=propensity_score_estimator,\n",
")\n",
"\n",
"print(estimate)"
]
},
{
"cell_type": "markdown",
"metadata": {},
"source": [
"## Refute Estimate - Functional API (Preview)"
]
},
{
"cell_type": "code",
"execution_count": null,
"metadata": {},
"outputs": [],
"source": [
"# You can call the refute_estimate function for executing several refuters using default parameters\n",
"# Currently this function does not support sensitivity_* functions\n",
"refutation_results = refute_estimate(\n",
" data[\"df\"],\n",
" identified_estimand,\n",
" estimate,\n",
" treatment_name=treatment_name,\n",
" outcome_name=outcome_name,\n",
" refuters=[refute_bootstrap, refute_data_subset],\n",
")\n",
"\n",
"for result in refutation_results:\n",
" print(result)\n",
"\n",
"# Or you can execute refute methods directly\n",
"# You can change the refute_bootstrap - refute_data_subset for any of the other refuters and add the missing parameters\n",
"\n",
"bootstrap_refutation = refute_bootstrap(data[\"df\"], identified_estimand, estimate)\n",
"print(bootstrap_refutation)\n",
"\n",
"data_subset_refutation = refute_data_subset(data[\"df\"], identified_estimand, estimate)\n",
"print(data_subset_refutation)"
]
},
{
"cell_type": "markdown",
"metadata": {},
"source": [
"# Backwards Compatibility\n",
"\n",
"This section shows replicating the same results using only the CausalModel API"
]
},
{
"cell_type": "code",
"execution_count": null,
"metadata": {},
"outputs": [],
"source": [
"# Create Causal Model\n",
"causal_model = CausalModel(data=data[\"df\"], treatment=treatment_name, outcome=outcome_name, graph=data[\"gml_graph\"])\n"
]
},
{
"cell_type": "markdown",
"metadata": {},
"source": [
"## Identify Effect"
]
},
{
"cell_type": "code",
"execution_count": null,
"metadata": {},
"outputs": [],
"source": [
"identified_estimand_causal_model_api = (\n",
" causal_model.identify_effect()\n",
") # graph, treatment and outcome comes from the causal_model object\n",
"\n",
"print(identified_estimand_causal_model_api)"
]
},
{
"cell_type": "markdown",
"metadata": {},
"source": [
"## Estimate Effect"
]
},
{
"cell_type": "code",
"execution_count": null,
"metadata": {},
"outputs": [],
"source": [
"estimate_causal_model_api = causal_model.estimate_effect(\n",
" identified_estimand, method_name=\"backdoor.propensity_score_matching\"\n",
")\n",
"\n",
"print(estimate)"
]
},
{
"cell_type": "markdown",
"metadata": {},
"source": [
"## Refute Estimate"
]
},
{
"cell_type": "code",
"execution_count": null,
"metadata": {},
"outputs": [],
"source": [
"bootstrap_refutation_causal_model_api = causal_model.refute_estimate(identified_estimand, estimate, \"bootstrap_refuter\")\n",
"print(bootstrap_refutation_causal_model_api)\n",
"\n",
"data_subset_refutation_causal_model_api = causal_model.refute_estimate(\n",
" identified_estimand, estimate, \"data_subset_refuter\"\n",
")\n",
"print(data_subset_refutation_causal_model_api)"
]
},
{
"cell_type": "code",
"execution_count": null,
"metadata": {},
"outputs": [],
"source": []
}
],
"metadata": {
"kernelspec": {
"display_name": "Python 3.8.10 ('dowhy-_zBapv7Q-py3.8')",
"language": "python",
"name": "python3"
},
"language_info": {
"codemirror_mode": {
"name": "ipython",
"version": 3
},
"file_extension": ".py",
"mimetype": "text/x-python",
"name": "python",
"nbconvert_exporter": "python",
"pygments_lexer": "ipython3",
"version": "3.8.10"
},
"toc": {
"base_numbering": 1,
"nav_menu": {},
"number_sections": false,
"sideBar": true,
"skip_h1_title": true,
"title_cell": "Table of Contents",
"title_sidebar": "Contents",
"toc_cell": false,
"toc_position": {},
"toc_section_display": true,
"toc_window_display": false
},
"vscode": {
"interpreter": {
"hash": "dcb481ad5d98e2afacd650b2c07afac80a299b7b701b553e333fc82865502500"
}
}
},
"nbformat": 4,
"nbformat_minor": 4
}
| andresmor-ms | 2044d216c322a4b32c6eadce5da7d83463f19c2f | 05bfa49dacf0061988c96c6f3e3756219df5422a | Hey @petergtz, I believe that what you just described is my end goal here. I think that @amit-sharma still wants to keep the `estimate_effect` function as a way to automatically select parameters (for users that maybe don't know which parameters to pick). I want to separate this into several PRs to avoid creating one giant PR and also to keep testing for backwards compatibility. This first PR just extracts the `estimate_effect` function to make it "functional". My next PR will introduce the `fit()` method and move parameters around to the place they are actually used (and remove some of the `**kwargs`). | andresmor-ms | 277
py-why/dowhy | 693 | Functional api/estimate effect function | #### Estimate Effect function
* Refactors the estimate effect into a separate function to keep backwards compatibility
#### TODO (future PRs):
* Add `fit(...)` method to estimators - Move data related parameters from the constructor to the `fit(...)` method
* Refactor code to avoid `**kwargs` in `__init__(...)` constructors
| null | 2022-10-18 15:49:21+00:00 | 2022-10-25 17:02:02+00:00 | docs/source/example_notebooks/dowhy_functional_api.ipynb | {
"cells": [
{
"cell_type": "markdown",
"metadata": {},
"source": [
"# Functional API Preview\n",
"\n",
"This notebook is part of a set of notebooks that provides a preview of the proposed functional API for dowhy. For details on the new API for DoWhy, check out https://github.com/py-why/dowhy/wiki/API-proposal-for-v1 It is a work-in-progress and is updated as we add new functionality. We welcome your feedback through Discord or on the Discussions page.\n",
"This functional API is designed with backwards compatibility. So both the old and new API will continue to co-exist and work for the immediate new releases. Gradually the old API using CausalModel will be deprecated in favor of the new API. \n",
"\n",
"The current Functional API covers:\n",
"* Identify Effect:\n",
" * `identify_effect(...)`: Run the identify effect algorithm using defaults just provide the graph, treatment and outcome.\n",
" * `auto_identify_effect(...)`: More configurable version of `identify_effect(...)`.\n",
" * `id_identify_effect(...)`: Identify Effect using the ID-Algorithm.\n",
"* Refute Estimate:\n",
" * `refute_estimate`: Function to run a set of the refuters below with the default parameters.\n",
" * `refute_bootstrap`: Refute an estimate by running it on a random sample of the data containing measurement error in the confounders.\n",
" * `refute_data_subset`: Refute an estimate by rerunning it on a random subset of the original data.\n",
" * `refute_random_common_cause`: Refute an estimate by introducing a randomly generated confounder (that may have been unobserved).\n",
" * `refute_placebo_treatment`: Refute an estimate by replacing treatment with a randomly-generated placebo variable.\n",
" * `sensitivity_simulation`: Add an unobserved confounder for refutation (Simulation of an unobserved confounder).\n",
" * `sensitivity_linear_partial_r2`: Add an unobserved confounder for refutation (Linear partial R2 : Sensitivity Analysis for linear models).\n",
" * `sensitivity_non_parametric_partial_r2`: Add an unobserved confounder for refutation (Non-Parametric partial R2 based : Sensitivity Analyis for non-parametric models).\n",
" * `sensitivity_e_value`: Computes E-value for point estimate and confidence limits. Benchmarks E-values against measured confounders using Observed Covariate E-values. Plots E-values and Observed\n",
" Covariate E-values.\n",
" * `refute_dummy_outcome`: Refute an estimate by introducing a randomly generated confounder (that may have been unobserved)."
]
},
{
"cell_type": "markdown",
"metadata": {},
"source": [
"## Import Dependencies"
]
},
{
"cell_type": "code",
"execution_count": null,
"metadata": {},
"outputs": [],
"source": [
"# Functional API imports\n",
"from dowhy.causal_identifier import (\n",
" BackdoorAdjustment,\n",
" EstimandType,\n",
" identify_effect,\n",
" identify_effect_auto,\n",
" identify_effect_id,\n",
") # import effect identifier\n",
"from dowhy.causal_refuters import (\n",
" refute_bootstrap,\n",
" refute_data_subset,\n",
" refute_random_common_cause,\n",
" refute_placebo_treatment,\n",
" sensitivity_e_value,\n",
" sensitivity_linear_partial_r2,\n",
" sensitivity_non_parametric_partial_r2,\n",
" sensitivity_simulation,\n",
" refute_dummy_outcome,\n",
" refute_estimate,\n",
") # import refuters\n",
"\n",
"from dowhy.causal_graph import CausalGraph\n",
"\n",
"# Other imports required\n",
"from dowhy.datasets import linear_dataset\n",
"from dowhy import CausalModel # We still need this as we haven't created the functional API for effect estimation\n",
"import econml\n",
"\n",
"# Config dict to set the logging level\n",
"import logging.config\n",
"\n",
"DEFAULT_LOGGING = {\n",
" \"version\": 1,\n",
" \"disable_existing_loggers\": False,\n",
" \"loggers\": {\n",
" \"\": {\n",
" \"level\": \"WARN\",\n",
" },\n",
" },\n",
"}\n",
"\n",
"logging.config.dictConfig(DEFAULT_LOGGING)\n",
"# Disabling warnings output\n",
"import warnings\n",
"from sklearn.exceptions import DataConversionWarning\n",
"\n",
"warnings.filterwarnings(action=\"ignore\", category=DataConversionWarning)"
]
},
{
"cell_type": "markdown",
"metadata": {},
"source": [
"## Create the Datasets"
]
},
{
"cell_type": "code",
"execution_count": null,
"metadata": {},
"outputs": [],
"source": [
"# Parameters for creating the Dataset\n",
"TREATMENT_IS_BINARY = True\n",
"BETA = 10\n",
"NUM_SAMPLES = 500\n",
"NUM_CONFOUNDERS = 3\n",
"NUM_INSTRUMENTS = 2\n",
"NUM_EFFECT_MODIFIERS = 2\n",
"\n",
"# Creating a Linear Dataset with the given parameters\n",
"data = linear_dataset(\n",
" beta=BETA,\n",
" num_common_causes=NUM_CONFOUNDERS,\n",
" num_instruments=NUM_INSTRUMENTS,\n",
" num_effect_modifiers=NUM_EFFECT_MODIFIERS,\n",
" num_samples=NUM_SAMPLES,\n",
" treatment_is_binary=True,\n",
")\n",
"\n",
"treatment_name = data[\"treatment_name\"]\n",
"outcome_name = data[\"outcome_name\"]\n",
"\n",
"graph = CausalGraph(\n",
" treatment_name=treatment_name,\n",
" outcome_name=outcome_name,\n",
" graph=data[\"gml_graph\"],\n",
" effect_modifier_names=data[\"effect_modifier_names\"],\n",
" common_cause_names=data[\"common_causes_names\"],\n",
" observed_node_names=data[\"df\"].columns.tolist(),\n",
")"
]
},
{
"cell_type": "markdown",
"metadata": {},
"source": [
"## Identify Effect - Functional API (Preview)"
]
},
{
"cell_type": "code",
"execution_count": null,
"metadata": {},
"outputs": [],
"source": [
"# Default identify_effect call example:\n",
"identified_estimand = identify_effect(graph, treatment_name, outcome_name)\n",
"\n",
"# auto_identify_effect example with extra parameters:\n",
"identified_estimand_auto = identify_effect_auto(\n",
" graph,\n",
" treatment_name,\n",
" outcome_name,\n",
" estimand_type=EstimandType.NONPARAMETRIC_ATE,\n",
" backdoor_adjustment=BackdoorAdjustment.BACKDOOR_EFFICIENT,\n",
")\n",
"\n",
"# id_identify_effect example:\n",
"identified_estimand_id = identify_effect_id(\n",
" graph, treatment_name, outcome_name\n",
") # Note that the return type for id_identify_effect is IDExpression and not IdentifiedEstimand\n",
"\n",
"print(identified_estimand)"
]
},
{
"cell_type": "markdown",
"metadata": {},
"source": [
"## Estimate Effect\n",
"\n",
"Estimate Effect is performed by using the causal_model api as there is not functional equivalent yet"
]
},
{
"cell_type": "code",
"execution_count": null,
"metadata": {},
"outputs": [],
"source": [
"# We will still need CausalModel as the Functional Effect Estimation is still Work-In-Progress\n",
"causal_model = CausalModel(data=data[\"df\"], treatment=treatment_name, outcome=outcome_name, graph=data[\"gml_graph\"])\n",
"\n",
"estimate = causal_model.estimate_effect(identified_estimand, method_name=\"backdoor.propensity_score_matching\")\n",
"\n",
"print(estimate)"
]
},
{
"cell_type": "markdown",
"metadata": {},
"source": [
"## Refute Estimate - Functional API (Preview)"
]
},
{
"cell_type": "code",
"execution_count": null,
"metadata": {},
"outputs": [],
"source": [
"# You can call the refute_estimate function for executing several refuters using default parameters\n",
"# Currently this function does not support sensitivity_* functions\n",
"refutation_results = refute_estimate(\n",
" data[\"df\"],\n",
" identified_estimand,\n",
" estimate,\n",
" treatment_name=treatment_name,\n",
" outcome_name=outcome_name,\n",
" refuters=[refute_bootstrap, refute_data_subset],\n",
")\n",
"\n",
"for result in refutation_results:\n",
" print(result)\n",
"\n",
"# Or you can execute refute methods directly\n",
"# You can change the refute_bootstrap - refute_data_subset for any of the other refuters and add the missing parameters\n",
"\n",
"bootstrap_refutation = refute_bootstrap(data[\"df\"], identified_estimand, estimate)\n",
"print(bootstrap_refutation)\n",
"\n",
"data_subset_refutation = refute_data_subset(data[\"df\"], identified_estimand, estimate)\n",
"print(data_subset_refutation)"
]
},
{
"cell_type": "markdown",
"metadata": {},
"source": [
"# Backwards Compatibility\n",
"\n",
"This section shows replicating the same results using only the CausalModel API"
]
},
{
"cell_type": "markdown",
"metadata": {},
"source": [
"## Identify Effect"
]
},
{
"cell_type": "code",
"execution_count": null,
"metadata": {},
"outputs": [],
"source": [
"identified_estimand_causal_model_api = (\n",
" causal_model.identify_effect()\n",
") # graph, treatment and outcome comes from the causal_model object\n",
"\n",
"print(identified_estimand_causal_model_api)"
]
},
{
"cell_type": "markdown",
"metadata": {},
"source": [
"## Estimate Effect"
]
},
{
"cell_type": "code",
"execution_count": null,
"metadata": {},
"outputs": [],
"source": [
"estimate_causal_model_api = causal_model.estimate_effect(\n",
" identified_estimand, method_name=\"backdoor.propensity_score_matching\"\n",
")\n",
"\n",
"print(estimate)"
]
},
{
"cell_type": "markdown",
"metadata": {},
"source": [
"## Refute Estimate"
]
},
{
"cell_type": "code",
"execution_count": null,
"metadata": {},
"outputs": [],
"source": [
"bootstrap_refutation_causal_model_api = causal_model.refute_estimate(identified_estimand, estimate, \"bootstrap_refuter\")\n",
"print(bootstrap_refutation_causal_model_api)\n",
"\n",
"data_subset_refutation_causal_model_api = causal_model.refute_estimate(\n",
" identified_estimand, estimate, \"data_subset_refuter\"\n",
")\n",
"print(data_subset_refutation_causal_model_api)"
]
}
],
"metadata": {
"kernelspec": {
"display_name": "Python 3.8.10 ('dowhy-_zBapv7Q-py3.8')",
"language": "python",
"name": "python3"
},
"language_info": {
"codemirror_mode": {
"name": "ipython",
"version": 3
},
"file_extension": ".py",
"mimetype": "text/x-python",
"name": "python",
"nbconvert_exporter": "python",
"pygments_lexer": "ipython3",
"version": "3.8.10"
},
"toc": {
"base_numbering": 1,
"nav_menu": {},
"number_sections": false,
"sideBar": true,
"skip_h1_title": true,
"title_cell": "Table of Contents",
"title_sidebar": "Contents",
"toc_cell": false,
"toc_position": {},
"toc_section_display": true,
"toc_window_display": false
},
"vscode": {
"interpreter": {
"hash": "dcb481ad5d98e2afacd650b2c07afac80a299b7b701b553e333fc82865502500"
}
}
},
"nbformat": 4,
"nbformat_minor": 4
}
| {
"cells": [
{
"cell_type": "markdown",
"metadata": {},
"source": [
"# Functional API Preview\n",
"\n",
"This notebook is part of a set of notebooks that provides a preview of the proposed functional API for dowhy. For details on the new API for DoWhy, check out https://github.com/py-why/dowhy/wiki/API-proposal-for-v1 It is a work-in-progress and is updated as we add new functionality. We welcome your feedback through Discord or on the Discussions page.\n",
"This functional API is designed with backwards compatibility. So both the old and new API will continue to co-exist and work for the immediate new releases. Gradually the old API using CausalModel will be deprecated in favor of the new API. \n",
"\n",
"The current Functional API covers:\n",
"* Identify Effect:\n",
" * `identify_effect(...)`: Run the identify effect algorithm using defaults just provide the graph, treatment and outcome.\n",
" * `auto_identify_effect(...)`: More configurable version of `identify_effect(...)`.\n",
" * `id_identify_effect(...)`: Identify Effect using the ID-Algorithm.\n",
"* Refute Estimate:\n",
" * `refute_estimate`: Function to run a set of the refuters below with the default parameters.\n",
" * `refute_bootstrap`: Refute an estimate by running it on a random sample of the data containing measurement error in the confounders.\n",
" * `refute_data_subset`: Refute an estimate by rerunning it on a random subset of the original data.\n",
" * `refute_random_common_cause`: Refute an estimate by introducing a randomly generated confounder (that may have been unobserved).\n",
" * `refute_placebo_treatment`: Refute an estimate by replacing treatment with a randomly-generated placebo variable.\n",
" * `sensitivity_simulation`: Add an unobserved confounder for refutation (Simulation of an unobserved confounder).\n",
" * `sensitivity_linear_partial_r2`: Add an unobserved confounder for refutation (Linear partial R2 : Sensitivity Analysis for linear models).\n",
" * `sensitivity_non_parametric_partial_r2`: Add an unobserved confounder for refutation (Non-Parametric partial R2 based : Sensitivity Analyis for non-parametric models).\n",
" * `sensitivity_e_value`: Computes E-value for point estimate and confidence limits. Benchmarks E-values against measured confounders using Observed Covariate E-values. Plots E-values and Observed\n",
" Covariate E-values.\n",
" * `refute_dummy_outcome`: Refute an estimate by introducing a randomly generated confounder (that may have been unobserved)."
]
},
{
"cell_type": "markdown",
"metadata": {},
"source": [
"## Import Dependencies"
]
},
{
"cell_type": "code",
"execution_count": null,
"metadata": {},
"outputs": [],
"source": [
"# Functional API imports\n",
"from dowhy.causal_identifier import (\n",
" BackdoorAdjustment,\n",
" EstimandType,\n",
" identify_effect,\n",
" identify_effect_auto,\n",
" identify_effect_id,\n",
") # import effect identifier\n",
"from dowhy.causal_refuters import (\n",
" refute_bootstrap,\n",
" refute_data_subset,\n",
" refute_random_common_cause,\n",
" refute_placebo_treatment,\n",
" sensitivity_e_value,\n",
" sensitivity_linear_partial_r2,\n",
" sensitivity_non_parametric_partial_r2,\n",
" sensitivity_simulation,\n",
" refute_dummy_outcome,\n",
" refute_estimate,\n",
") # import refuters\n",
"\n",
"from dowhy.causal_estimators.propensity_score_matching_estimator import PropensityScoreMatchingEstimator\n",
"\n",
"from dowhy.utils.api import parse_state\n",
"\n",
"from dowhy.causal_estimator import estimate_effect # Estimate effect function\n",
"\n",
"from dowhy.causal_graph import CausalGraph\n",
"\n",
"# Other imports required\n",
"from dowhy.datasets import linear_dataset\n",
"from dowhy import CausalModel # We still need this as we haven't created the functional API for effect estimation\n",
"import econml\n",
"\n",
"# Config dict to set the logging level\n",
"import logging.config\n",
"\n",
"DEFAULT_LOGGING = {\n",
" \"version\": 1,\n",
" \"disable_existing_loggers\": False,\n",
" \"loggers\": {\n",
" \"\": {\n",
" \"level\": \"WARN\",\n",
" },\n",
" },\n",
"}\n",
"\n",
"logging.config.dictConfig(DEFAULT_LOGGING)\n",
"# Disabling warnings output\n",
"import warnings\n",
"from sklearn.exceptions import DataConversionWarning\n",
"\n",
"warnings.filterwarnings(action=\"ignore\", category=DataConversionWarning)"
]
},
{
"cell_type": "markdown",
"metadata": {},
"source": [
"## Create the Datasets"
]
},
{
"cell_type": "code",
"execution_count": null,
"metadata": {},
"outputs": [],
"source": [
"# Parameters for creating the Dataset\n",
"TREATMENT_IS_BINARY = True\n",
"BETA = 10\n",
"NUM_SAMPLES = 500\n",
"NUM_CONFOUNDERS = 3\n",
"NUM_INSTRUMENTS = 2\n",
"NUM_EFFECT_MODIFIERS = 2\n",
"\n",
"# Creating a Linear Dataset with the given parameters\n",
"data = linear_dataset(\n",
" beta=BETA,\n",
" num_common_causes=NUM_CONFOUNDERS,\n",
" num_instruments=NUM_INSTRUMENTS,\n",
" num_effect_modifiers=NUM_EFFECT_MODIFIERS,\n",
" num_samples=NUM_SAMPLES,\n",
" treatment_is_binary=True,\n",
")\n",
"\n",
"treatment_name = data[\"treatment_name\"]\n",
"print(treatment_name)\n",
"outcome_name = data[\"outcome_name\"]\n",
"print(outcome_name)\n",
"\n",
"graph = CausalGraph(\n",
" treatment_name=treatment_name,\n",
" outcome_name=outcome_name,\n",
" graph=data[\"gml_graph\"],\n",
" effect_modifier_names=data[\"effect_modifier_names\"],\n",
" common_cause_names=data[\"common_causes_names\"],\n",
" observed_node_names=data[\"df\"].columns.tolist(),\n",
")"
]
},
{
"cell_type": "markdown",
"metadata": {},
"source": [
"## Identify Effect - Functional API (Preview)"
]
},
{
"cell_type": "code",
"execution_count": null,
"metadata": {},
"outputs": [],
"source": [
"# Default identify_effect call example:\n",
"identified_estimand = identify_effect(graph, treatment_name, outcome_name)\n",
"\n",
"# auto_identify_effect example with extra parameters:\n",
"identified_estimand_auto = identify_effect_auto(\n",
" graph,\n",
" treatment_name,\n",
" outcome_name,\n",
" estimand_type=EstimandType.NONPARAMETRIC_ATE,\n",
" backdoor_adjustment=BackdoorAdjustment.BACKDOOR_EFFICIENT,\n",
")\n",
"\n",
"# id_identify_effect example:\n",
"identified_estimand_id = identify_effect_id(\n",
" graph, treatment_name, outcome_name\n",
") # Note that the return type for id_identify_effect is IDExpression and not IdentifiedEstimand\n",
"\n",
"print(identified_estimand)"
]
},
{
"cell_type": "markdown",
"metadata": {},
"source": [
"## Estimate Effect - Functional API (Preview)"
]
},
{
"cell_type": "code",
"execution_count": null,
"metadata": {},
"outputs": [],
"source": [
"# Basic Estimate Effect function\n",
"\n",
"\n",
"propensity_score_estimator = PropensityScoreMatchingEstimator(\n",
" data=data[\"df\"],\n",
" identified_estimand=identified_estimand,\n",
" treatment=treatment_name,\n",
" outcome=outcome_name,\n",
" control_value=0,\n",
" treatment_value=1,\n",
" test_significance=None,\n",
" evaluate_effect_strength=False,\n",
" confidence_intervals=False,\n",
" target_units=\"ate\",\n",
" effect_modifiers=graph.get_effect_modifiers(treatment_name, outcome_name),\n",
")\n",
"\n",
"estimate = estimate_effect(\n",
" treatment=treatment_name,\n",
" outcome=outcome_name,\n",
" identified_estimand=identified_estimand,\n",
" identifier_name=\"backdoor\",\n",
" method=propensity_score_estimator,\n",
")\n",
"\n",
"print(estimate)"
]
},
{
"cell_type": "markdown",
"metadata": {},
"source": [
"## Refute Estimate - Functional API (Preview)"
]
},
{
"cell_type": "code",
"execution_count": null,
"metadata": {},
"outputs": [],
"source": [
"# You can call the refute_estimate function for executing several refuters using default parameters\n",
"# Currently this function does not support sensitivity_* functions\n",
"refutation_results = refute_estimate(\n",
" data[\"df\"],\n",
" identified_estimand,\n",
" estimate,\n",
" treatment_name=treatment_name,\n",
" outcome_name=outcome_name,\n",
" refuters=[refute_bootstrap, refute_data_subset],\n",
")\n",
"\n",
"for result in refutation_results:\n",
" print(result)\n",
"\n",
"# Or you can execute refute methods directly\n",
"# You can change the refute_bootstrap - refute_data_subset for any of the other refuters and add the missing parameters\n",
"\n",
"bootstrap_refutation = refute_bootstrap(data[\"df\"], identified_estimand, estimate)\n",
"print(bootstrap_refutation)\n",
"\n",
"data_subset_refutation = refute_data_subset(data[\"df\"], identified_estimand, estimate)\n",
"print(data_subset_refutation)"
]
},
{
"cell_type": "markdown",
"metadata": {},
"source": [
"# Backwards Compatibility\n",
"\n",
"This section shows replicating the same results using only the CausalModel API"
]
},
{
"cell_type": "code",
"execution_count": null,
"metadata": {},
"outputs": [],
"source": [
"# Create Causal Model\n",
"causal_model = CausalModel(data=data[\"df\"], treatment=treatment_name, outcome=outcome_name, graph=data[\"gml_graph\"])\n"
]
},
{
"cell_type": "markdown",
"metadata": {},
"source": [
"## Identify Effect"
]
},
{
"cell_type": "code",
"execution_count": null,
"metadata": {},
"outputs": [],
"source": [
"identified_estimand_causal_model_api = (\n",
" causal_model.identify_effect()\n",
") # graph, treatment and outcome comes from the causal_model object\n",
"\n",
"print(identified_estimand_causal_model_api)"
]
},
{
"cell_type": "markdown",
"metadata": {},
"source": [
"## Estimate Effect"
]
},
{
"cell_type": "code",
"execution_count": null,
"metadata": {},
"outputs": [],
"source": [
"estimate_causal_model_api = causal_model.estimate_effect(\n",
" identified_estimand, method_name=\"backdoor.propensity_score_matching\"\n",
")\n",
"\n",
"print(estimate)"
]
},
{
"cell_type": "markdown",
"metadata": {},
"source": [
"## Refute Estimate"
]
},
{
"cell_type": "code",
"execution_count": null,
"metadata": {},
"outputs": [],
"source": [
"bootstrap_refutation_causal_model_api = causal_model.refute_estimate(identified_estimand, estimate, \"bootstrap_refuter\")\n",
"print(bootstrap_refutation_causal_model_api)\n",
"\n",
"data_subset_refutation_causal_model_api = causal_model.refute_estimate(\n",
" identified_estimand, estimate, \"data_subset_refuter\"\n",
")\n",
"print(data_subset_refutation_causal_model_api)"
]
},
{
"cell_type": "code",
"execution_count": null,
"metadata": {},
"outputs": [],
"source": []
}
],
"metadata": {
"kernelspec": {
"display_name": "Python 3.8.10 ('dowhy-_zBapv7Q-py3.8')",
"language": "python",
"name": "python3"
},
"language_info": {
"codemirror_mode": {
"name": "ipython",
"version": 3
},
"file_extension": ".py",
"mimetype": "text/x-python",
"name": "python",
"nbconvert_exporter": "python",
"pygments_lexer": "ipython3",
"version": "3.8.10"
},
"toc": {
"base_numbering": 1,
"nav_menu": {},
"number_sections": false,
"sideBar": true,
"skip_h1_title": true,
"title_cell": "Table of Contents",
"title_sidebar": "Contents",
"toc_cell": false,
"toc_position": {},
"toc_section_display": true,
"toc_window_display": false
},
"vscode": {
"interpreter": {
"hash": "dcb481ad5d98e2afacd650b2c07afac80a299b7b701b553e333fc82865502500"
}
}
},
"nbformat": 4,
"nbformat_minor": 4
}
| andresmor-ms | 2044d216c322a4b32c6eadce5da7d83463f19c2f | 05bfa49dacf0061988c96c6f3e3756219df5422a | That's great! Thanks for providing the context, @andresmor-ms. Resolving. | petergtz | 278 |
py-why/dowhy | 693 | Functional api/estimate effect function | #### Estimate Effect function
* Refactors the estimate effect into a separate function to keep backwards compatibility
#### TODO (future PRs):
* Add `fit(...)` method to estimators - Move data related parameters from the constructor to the `fit(...)` method
* Refactor code to avoid `**kwargs` in `__init__(...)` constructors
| null | 2022-10-18 15:49:21+00:00 | 2022-10-25 17:02:02+00:00 | dowhy/causal_estimator.py | import logging
from collections import namedtuple
import numpy as np
import pandas as pd
import sympy as sp
from sklearn.utils import resample
import dowhy.interpreters as interpreters
from dowhy.utils.api import parse_state
class CausalEstimator:
"""Base class for an estimator of causal effect.
Subclasses implement different estimation methods. All estimation methods are in the package "dowhy.causal_estimators".
"""
# The default number of simulations for statistical testing
DEFAULT_NUMBER_OF_SIMULATIONS_STAT_TEST = 1000
# The default number of simulations to obtain confidence intervals
DEFAULT_NUMBER_OF_SIMULATIONS_CI = 100
# The portion of the total size that should be taken each time to find the confidence intervals
# 1 is the recommended value
# https://ocw.mit.edu/courses/mathematics/18-05-introduction-to-probability-and-statistics-spring-2014/readings/MIT18_05S14_Reading24.pdf
# https://projecteuclid.org/download/pdf_1/euclid.ss/1032280214
DEFAULT_SAMPLE_SIZE_FRACTION = 1
# The default Confidence Level
DEFAULT_CONFIDENCE_LEVEL = 0.95
# Number of quantiles to discretize continuous columns, for applying groupby
NUM_QUANTILES_TO_DISCRETIZE_CONT_COLS = 5
# Prefix to add to temporary categorical variables created after discretization
TEMP_CAT_COLUMN_PREFIX = "__categorical__"
DEFAULT_NOTIMPLEMENTEDERROR_MSG = "not yet implemented for {0}. If you would like this to be implemented in the next version, please raise an issue at https://github.com/microsoft/dowhy/issues"
BootstrapEstimates = namedtuple("BootstrapEstimates", ["estimates", "params"])
DEFAULT_INTERPRET_METHOD = ["textual_effect_interpreter"]
# std args to be removed from locals() before being passed to args_dict
_STD_INIT_ARGS = ("self", "__class__", "args", "kwargs")
def __init__(
self,
data,
identified_estimand,
treatment,
outcome,
control_value=0,
treatment_value=1,
test_significance=False,
evaluate_effect_strength=False,
confidence_intervals=False,
target_units=None,
effect_modifiers=None,
num_null_simulations=DEFAULT_NUMBER_OF_SIMULATIONS_STAT_TEST,
num_simulations=DEFAULT_NUMBER_OF_SIMULATIONS_CI,
sample_size_fraction=DEFAULT_SAMPLE_SIZE_FRACTION,
confidence_level=DEFAULT_CONFIDENCE_LEVEL,
need_conditional_estimates="auto",
num_quantiles_to_discretize_cont_cols=NUM_QUANTILES_TO_DISCRETIZE_CONT_COLS,
**kwargs,
):
"""Initializes an estimator with data and names of relevant variables.
This method is called from the constructors of its child classes.
:param data: data frame containing the data
:param identified_estimand: probability expression
representing the target identified estimand to estimate.
:param treatment: name of the treatment variable
:param outcome: name of the outcome variable
:param control_value: Value of the treatment in the control group, for effect estimation. If treatment is multi-variate, this can be a list.
:param treatment_value: Value of the treatment in the treated group, for effect estimation. If treatment is multi-variate, this can be a list.
:param test_significance: Binary flag or a string indicating whether to test significance and by which method. All estimators support test_significance="bootstrap" that estimates a p-value for the obtained estimate using the bootstrap method. Individual estimators can override this to support custom testing methods. The bootstrap method supports an optional parameter, num_null_simulations. If False, no testing is done. If True, significance of the estimate is tested using the custom method if available, otherwise by bootstrap.
:param evaluate_effect_strength: (Experimental) whether to evaluate the strength of effect
:param confidence_intervals: Binary flag or a string indicating whether the confidence intervals should be computed and which method should be used. All methods support estimation of confidence intervals using the bootstrap method by using the parameter confidence_intervals="bootstrap". The bootstrap method takes in two arguments (num_simulations and sample_size_fraction) that can be optionally specified in the params dictionary. Estimators may also override this to implement their own confidence interval method. If this parameter is False, no confidence intervals are computed. If True, confidence intervals are computed by the estimator's specific method if available, otherwise through bootstrap.
:param target_units: The units for which the treatment effect should be estimated. This can be a string for common specifications of target units (namely, "ate", "att" and "atc"). It can also be a lambda function that can be used as an index for the data (pandas DataFrame). Alternatively, it can be a new DataFrame that contains values of the effect_modifiers and effect will be estimated only for this new data.
:param effect_modifiers: Variables on which to compute separate
effects, or return a heterogeneous effect function. Not all
methods support this currently.
:param num_null_simulations: The number of simulations for testing the
statistical significance of the estimator
:param num_simulations: The number of simulations for finding the
confidence interval (and/or standard error) for an estimate
:param sample_size_fraction: The size of the sample for the bootstrap
estimator
:param confidence_level: The confidence level of the confidence
interval estimate
:param need_conditional_estimates: Boolean flag indicating whether
conditional estimates should be computed. Defaults to True if
there are effect modifiers in the graph
:param num_quantiles_to_discretize_cont_cols: The number of quantiles
into which a numeric effect modifier is split, to enable
estimation of conditional treatment effect over it.
:param kwargs: (optional) Additional estimator-specific parameters
:returns: an instance of the estimator class.
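Example (illustrative sketch; assumes a concrete subclass such as
PropensityScoreMatchingEstimator and an already-identified estimand)::

    estimator = PropensityScoreMatchingEstimator(
        data=df,
        identified_estimand=estimand,
        treatment=["v0"],
        outcome=["y"],
    )
    estimate = estimator.estimate_effect()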
"""
self._data = data
self._target_estimand = identified_estimand
# Currently estimation methods only support univariate treatment and outcome
self._treatment_name = treatment
self._outcome_name = outcome[0] # assuming one-dimensional outcome
self._control_value = control_value
self._treatment_value = treatment_value
self._significance_test = test_significance
self._effect_strength_eval = evaluate_effect_strength
self._target_units = target_units
self._effect_modifier_names = effect_modifiers
self._confidence_intervals = confidence_intervals
self._bootstrap_estimates = None # for confidence intervals and std error
self._bootstrap_null_estimates = None # for significance test
self._effect_modifiers = None
self.method_params = kwargs
# Setting the default interpret method
self.interpret_method = CausalEstimator.DEFAULT_INTERPRET_METHOD
self.logger = logging.getLogger(__name__)
# Setting treatment and outcome values
if self._data is not None:
self._treatment = self._data[self._treatment_name]
self._outcome = self._data[self._outcome_name]
# Now saving the effect modifiers
if self._effect_modifier_names:
# only add the observed nodes
self._effect_modifier_names = [
cname for cname in self._effect_modifier_names if cname in self._data.columns
]
if len(self._effect_modifier_names) > 0:
self._effect_modifiers = self._data[self._effect_modifier_names]
self._effect_modifiers = pd.get_dummies(self._effect_modifiers, drop_first=True)
self.logger.debug("Effect modifiers: " + ",".join(self._effect_modifier_names))
else:
self._effect_modifier_names = None
# Check if some parameters were set, otherwise set to default values
self.num_null_simulations = num_null_simulations
self.num_simulations = num_simulations
self.sample_size_fraction = sample_size_fraction
self.confidence_level = confidence_level
self.num_quantiles_to_discretize_cont_cols = num_quantiles_to_discretize_cont_cols
# Estimate conditional estimates by default
self.need_conditional_estimates = (
need_conditional_estimates if need_conditional_estimates != "auto" else bool(self._effect_modifier_names)
)
@staticmethod
def get_estimator_object(new_data, identified_estimand, estimate):
"""Create a new estimator of the same type as the one passed in the estimate argument.
Creates a new object with new_data and the identified_estimand
:param new_data: np.ndarray, pd.Series, pd.DataFrame
The newly assigned data on which the estimator should run
:param identified_estimand: IdentifiedEstimand
An instance of the identified estimand class that provides the information with
respect to which causal pathways are employed when the treatment effects the outcome
:param estimate: CausalEstimate
It is an already existing estimate whose properties we wish to replicate
:returns: An instance of the same estimator class that had generated the given estimate.
"""
estimator_class = estimate.params["estimator_class"]
new_estimator = estimator_class(
new_data,
identified_estimand,
identified_estimand.treatment_variable,
identified_estimand.outcome_variable,
# names of treatment and outcome
control_value=estimate.control_value,
treatment_value=estimate.treatment_value,
test_significance=False,
evaluate_effect_strength=False,
confidence_intervals=estimate.params["confidence_intervals"],
target_units=estimate.params["target_units"],
effect_modifiers=estimate.params["effect_modifiers"],
**estimate.params["method_params"],
)
return new_estimator
def _estimate_effect(self):
"""This method is to be overriden by the child classes, so that they can run the estimation technique of their choice"""
raise NotImplementedError(
("Main estimation method is " + CausalEstimator.DEFAULT_NOTIMPLEMENTEDERROR_MSG).format(self.__class__)
)
def estimate_effect(self):
"""Base estimation method that calls the estimate_effect method of its calling subclass.
Can optionally also test significance and estimate effect strength for any returned estimate.
:param self: object instance of class Estimator
:returns: A CausalEstimate instance that contains point estimates of average and conditional effects. Based on the parameters provided, it optionally includes confidence intervals, standard errors, statistical significance and other statistical parameters.
"""
est = self._estimate_effect()
est.add_estimator(self)
if self._significance_test:
self.test_significance(est.value, method=self._significance_test)
if self._confidence_intervals:
self.estimate_confidence_intervals(
est.value, confidence_level=self.confidence_level, method=self._confidence_intervals
)
if self._effect_strength_eval:
effect_strength_dict = self.evaluate_effect_strength(est)
est.add_effect_strength(effect_strength_dict)
return est
def estimate_effect_naive(self):
# TODO Only works for binary treatment
df_withtreatment = self._data.loc[self._data[self._treatment_name] == 1]
df_notreatment = self._data.loc[self._data[self._treatment_name] == 0]
est = np.mean(df_withtreatment[self._outcome_name]) - np.mean(df_notreatment[self._outcome_name])
return CausalEstimate(est, None, None, control_value=0, treatment_value=1)
def _estimate_effect_fn(self, data_df):
"""Function used in conditional effect estimation. This function is to be overridden by each child estimator.
The overridden function should take in a dataframe as input and return the estimate for that data.
"""
raise NotImplementedError(
("Conditional treatment effects are " + CausalEstimator.DEFAULT_NOTIMPLEMENTEDERROR_MSG).format(
self.__class__
)
)
def _estimate_conditional_effects(self, estimate_effect_fn, effect_modifier_names=None, num_quantiles=None):
"""Estimate conditional treatment effects. Common method for all estimators that utilizes a specific estimate_effect_fn implemented by each child estimator.
If a numeric effect modifier is provided, it is discretized into quantile bins. If you would like a custom discretization, you can do so yourself: create a new column containing the discretized effect modifier and then include that column's name in the effect_modifier_names argument.
:param estimate_effect_fn: Function that has a single parameter (a data frame) and returns the treatment effect estimate on that data.
:param effect_modifier_names: Names of effect modifier variables over which the conditional effects will be estimated. If not provided, defaults to the effect modifiers specified during creation of the CausalEstimator object.
:param num_quantiles: The number of quantiles into which a numeric effect modifier variable is discretized. Does not affect any categorical effect modifiers.
:returns: A (multi-index) dataframe that provides separate effects for each value of the (discretized) effect modifiers.
"""
# Defaulting to class default values if parameters are not provided
if effect_modifier_names is None:
effect_modifier_names = self._effect_modifier_names
if num_quantiles is None:
num_quantiles = self.num_quantiles_to_discretize_cont_cols
# Checking that there is at least one effect modifier
if not effect_modifier_names:
raise ValueError("At least one effect modifier should be specified to compute conditional effects.")
# Making sure that effect_modifier_names is a list
effect_modifier_names = parse_state(effect_modifier_names)
if not all(em in self._effect_modifier_names for em in effect_modifier_names):
self.logger.warning(
"At least one of the provided effect modifiers was not included while fitting the estimator. You may get incorrect results. To resolve, fit the estimator again by providing the updated effect modifiers in estimate_effect()."
)
# Making a copy since we are going to be changing effect modifier names
effect_modifier_names = effect_modifier_names.copy()
prefix = CausalEstimator.TEMP_CAT_COLUMN_PREFIX
# For every numeric effect modifier, adding a temp categorical column
for i in range(len(effect_modifier_names)):
em = effect_modifier_names[i]
if pd.api.types.is_numeric_dtype(self._data[em].dtypes):
self._data[prefix + str(em)] = pd.qcut(self._data[em], num_quantiles, duplicates="drop")
effect_modifier_names[i] = prefix + str(em)
# Grouping by effect modifiers and computing effect separately
by_effect_mods = self._data.groupby(effect_modifier_names)
cond_est_fn = lambda x: self._do(self._treatment_value, x) - self._do(self._control_value, x)
conditional_estimates = by_effect_mods.apply(estimate_effect_fn)
# Deleting the temporary categorical columns
for em in effect_modifier_names:
if em.startswith(prefix):
self._data.pop(em)
return conditional_estimates
def _do(self, x, data_df=None):
raise NotImplementedError(
("Do-operator is " + CausalEstimator.DEFAULT_NOTIMPLEMENTEDERROR_MSG).format(self.__class__)
)
def do(self, x, data_df=None):
"""Method that implements the do-operator.
Given a value x for the treatment, returns the expected value of the outcome when the treatment is intervened to a value x.
:param x: Value of the treatment
:param data_df: Data on which the do-operator is to be applied.
:returns: Value of the outcome when treatment is intervened/set to x.
"""
est = self._do(x, data_df)
return est
def construct_symbolic_estimator(self, estimand):
raise NotImplementedError(("Symbolic estimator string is ").format(self.__class__))
def _generate_bootstrap_estimates(self, num_bootstrap_simulations, sample_size_fraction):
"""Helper function to generate causal estimates over bootstrapped samples.
:param num_bootstrap_simulations: Number of simulations for the bootstrap method.
:param sample_size_fraction: Fraction of the dataset to be resampled.
:returns: A collections.namedtuple containing a list of bootstrapped estimates and a dictionary containing parameters used for the bootstrap.
"""
# The array that stores the results of all estimations
simulation_results = np.zeros(num_bootstrap_simulations)
# Find the sample size the proportion with the population size
sample_size = int(sample_size_fraction * len(self._data))
if sample_size > len(self._data):
self.logger.warning("WARN: The sample size is greater than the data being sampled")
self.logger.info("INFO: The sample size: {}".format(sample_size))
self.logger.info("INFO: The number of simulations: {}".format(num_bootstrap_simulations))
# Perform the set number of simulations
for index in range(num_bootstrap_simulations):
new_data = resample(self._data, n_samples=sample_size)
new_estimator = type(self)(
new_data,
self._target_estimand,
self._target_estimand.treatment_variable,
self._target_estimand.outcome_variable,
# names of treatment and outcome
treatment_value=self._treatment_value,
control_value=self._control_value,
test_significance=False,
evaluate_effect_strength=False,
confidence_intervals=False,
target_units=self._target_units,
effect_modifiers=self._effect_modifier_names,
**self.method_params,
)
new_effect = new_estimator.estimate_effect()
simulation_results[index] = new_effect.value
estimates = CausalEstimator.BootstrapEstimates(
simulation_results,
{"num_simulations": num_bootstrap_simulations, "sample_size_fraction": sample_size_fraction},
)
return estimates
def _estimate_confidence_intervals_with_bootstrap(
self, estimate_value, confidence_level=None, num_simulations=None, sample_size_fraction=None
):
"""
Method to compute confidence interval using bootstrapped sampling.
:param estimate_value: obtained estimate's value
:param confidence_level: The level for which to compute CI (e.g., 95% confidence level translates to confidence_level=0.95)
:param num_simulations: The number of simulations to be performed to get the bootstrap confidence intervals.
:param sample_size_fraction: The fraction of the dataset to be resampled.
:returns: confidence interval at the specified level.
For more details on bootstrap or resampling statistics, refer to the following links:
https://ocw.mit.edu/courses/mathematics/18-05-introduction-to-probability-and-statistics-spring-2014/readings/MIT18_05S14_Reading24.pdf
https://projecteuclid.org/download/pdf_1/euclid.ss/1032280214
"""
# Using class default parameters if not specified
if num_simulations is None:
num_simulations = self.num_simulations
if sample_size_fraction is None:
sample_size_fraction = self.sample_size_fraction
# Checking if bootstrap_estimates are already computed
if self._bootstrap_estimates is None:
self._bootstrap_estimates = self._generate_bootstrap_estimates(num_simulations, sample_size_fraction)
elif CausalEstimator.is_bootstrap_parameter_changed(self._bootstrap_estimates.params, locals()):
# Checked if any parameter is changed from the previous std error estimate
self._bootstrap_estimates = self._generate_bootstrap_estimates(num_simulations, sample_size_fraction)
# Now use the data obtained from the simulations to get the value of the confidence estimates
bootstrap_estimates = self._bootstrap_estimates.estimates
# Get the variations of each bootstrap estimate and sort
bootstrap_variations = [bootstrap_estimate - estimate_value for bootstrap_estimate in bootstrap_estimates]
sorted_bootstrap_variations = np.sort(bootstrap_variations)
# Now we take the (1- p)th and the (p)th variations, where p is the chosen confidence level
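# (This is the "basic"/reverse-percentile bootstrap interval: the empirical
# distribution of bootstrap_estimate - estimate_value acts as a stand-in for
# the sampling error of the original estimate.)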
upper_bound_index = int((1 - confidence_level) * len(sorted_bootstrap_variations))
lower_bound_index = int(confidence_level * len(sorted_bootstrap_variations))
# Get the lower and upper bounds by subtracting the variations from the estimate
lower_bound = estimate_value - sorted_bootstrap_variations[lower_bound_index]
upper_bound = estimate_value - sorted_bootstrap_variations[upper_bound_index]
return lower_bound, upper_bound
def _estimate_confidence_intervals(self, confidence_level=None, method=None, **kwargs):
"""
This method is to be overridden by the child classes, so that they
can run a confidence interval estimation method suited to the specific
causal estimator.
"""
raise NotImplementedError(
(
"This method for estimating confidence intervals is "
+ CausalEstimator.DEFAULT_NOTIMPLEMENTEDERROR_MSG
+ " Meanwhile, you can try the bootstrap method (method='bootstrap') to estimate confidence intervals."
).format(self.__class__)
)
def estimate_confidence_intervals(self, estimate_value, confidence_level=None, method=None, **kwargs):
"""Find the confidence intervals corresponding to any estimator
By default, this is done with the help of bootstrapped confidence intervals
but can be overridden if the specific estimator implements other methods of estimating confidence intervals.
If the method provided is not bootstrap, this function calls the implementation of the specific estimator.
:param estimate_value: obtained estimate's value
:param method: Method for estimating confidence intervals.
:param confidence_level: The confidence level of the confidence intervals of the estimate.
:param kwargs: Other optional args to be passed to the CI method.
:returns: The obtained confidence interval.
"""
if method is None:
if self._confidence_intervals:
method = self._confidence_intervals # this is either True or methodname
else:
method = "default"
confidence_intervals = None
if confidence_level is None:
confidence_level = self.confidence_level
if method == "default" or method is True: # user has not provided any method
try:
confidence_intervals = self._estimate_confidence_intervals(confidence_level, method=method, **kwargs)
except NotImplementedError:
confidence_intervals = self._estimate_confidence_intervals_with_bootstrap(
estimate_value, confidence_level, **kwargs
)
else:
if method == "bootstrap":
confidence_intervals = self._estimate_confidence_intervals_with_bootstrap(
estimate_value, confidence_level, **kwargs
)
else:
confidence_intervals = self._estimate_confidence_intervals(confidence_level, method=method, **kwargs)
return confidence_intervals
def _estimate_std_error_with_bootstrap(self, num_simulations=None, sample_size_fraction=None):
"""Compute standard error using the bootstrap method. Standard error
and confidence intervals use the same parameter num_simulations for
the number of bootstrap simulations.
:param num_simulations: Number of bootstrapped samples.
:param sample_size_fraction: Fraction of data to be resampled.
:returns: Standard error of the obtained estimate.
"""
# Use existing params, if new user defined params are not present
if num_simulations is None:
num_simulations = self.num_simulations
if sample_size_fraction is None:
sample_size_fraction = self.sample_size_fraction
# Checking if bootstrap_estimates are already computed
if self._bootstrap_estimates is None:
self._bootstrap_estimates = self._generate_bootstrap_estimates(num_simulations, sample_size_fraction)
elif CausalEstimator.is_bootstrap_parameter_changed(self._bootstrap_estimates.params, locals()):
# Check if any parameter is changed from the previous std error estimate
self._bootstrap_estimates = self._generate_bootstrap_estimates(num_simulations, sample_size_fraction)
std_error = np.std(self._bootstrap_estimates.estimates)
return std_error
def _estimate_std_error(self, method=None, **kwargs):
"""
This method is to be overridden by the child classes, so that they
can run a standard error estimation method suited to the specific
causal estimator.
"""
raise NotImplementedError(
(
"This method for estimating standard errors is "
+ CausalEstimator.DEFAULT_NOTIMPLEMENTEDERROR_MSG
+ " Meanwhile, you can try the bootstrap method (method='bootstrap') to estimate standard errors."
).format(self.__class__)
)
def estimate_std_error(self, method=None, **kwargs):
"""Compute standard error of an obtained causal estimate.
:param method: Method for computing the standard error.
:param kwargs: Other optional parameters to be passed to the estimating method.
:returns: Standard error of the causal estimate.
"""
if method is None:
if self._confidence_intervals:
method = self._confidence_intervals
else:
method = "default"
std_error = None
if method == "default" or method is True: # user has not provided any method
try:
std_error = self._estimate_std_error(method, **kwargs)
except NotImplementedError:
std_error = self._estimate_std_error_with_bootstrap(**kwargs)
else:
if method == "bootstrap":
std_error = self._estimate_std_error_with_bootstrap(**kwargs)
else:
std_error = self._estimate_std_error(method, **kwargs)
return std_error
def _test_significance_with_bootstrap(self, estimate_value, num_null_simulations=None):
"""Test statistical significance of an estimate using the bootstrap method.
:param estimate_value: Obtained estimate's value
:param num_null_simulations: Number of simulations for the null hypothesis
:returns: p-value of the statistical significance test.
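The null distribution is simulated by randomly permuting the outcome column
(destroying any true treatment-outcome association) and re-estimating the
effect on each permuted dataset.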
"""
# Use existing params, if new user defined params are not present
if num_null_simulations is None:
num_null_simulations = self.num_null_simulations
do_retest = self._bootstrap_null_estimates is None or CausalEstimator.is_bootstrap_parameter_changed(
self._bootstrap_null_estimates.params, locals()
)
if do_retest:
null_estimates = np.zeros(num_null_simulations)
for i in range(num_null_simulations):
new_outcome = np.random.permutation(self._outcome)
new_data = self._data.assign(dummy_outcome=new_outcome)
# self._outcome = self._data["dummy_outcome"]
new_estimator = type(self)(
new_data,
self._target_estimand,
self._target_estimand.treatment_variable,
("dummy_outcome",),
test_significance=False,
evaluate_effect_strength=False,
confidence_intervals=False,
target_units=self._target_units,
effect_modifiers=self._effect_modifier_names,
**self.method_params,
)
new_effect = new_estimator.estimate_effect()
null_estimates[i] = new_effect.value
self._bootstrap_null_estimates = CausalEstimator.BootstrapEstimates(
null_estimates, {"num_null_simulations": num_null_simulations, "sample_size_fraction": 1}
)
# Processing the null hypothesis estimates
sorted_null_estimates = np.sort(self._bootstrap_null_estimates.estimates)
self.logger.debug("Null estimates: {0}".format(sorted_null_estimates))
median_estimate = sorted_null_estimates[int(num_null_simulations / 2)]
# Doing a two-sided test
if estimate_value > median_estimate:
# Being conservative with the p-value reported
estimate_index = np.searchsorted(sorted_null_estimates, estimate_value, side="left")
p_value = 1 - (estimate_index / num_null_simulations)
elif estimate_value <= median_estimate:
# Being conservative with the p-value reported
estimate_index = np.searchsorted(sorted_null_estimates, estimate_value, side="right")
p_value = estimate_index / num_null_simulations
# If the estimate_index is 0, it depends on the number of simulations
if p_value == 0:
p_value = (0, 1 / len(sorted_null_estimates)) # a tuple determining the range.
elif p_value == 1:
p_value = (1 - 1 / len(sorted_null_estimates), 1)
signif_dict = {"p_value": p_value}
return signif_dict
def _test_significance(self, estimate_value, method=None, **kwargs):
"""
This method is to be overridden by the child classes, so that they
can run a significance test suited to the specific
causal estimator.
"""
raise NotImplementedError(
(
"This method for testing statistical significance is "
+ CausalEstimator.DEFAULT_NOTIMPLEMENTEDERROR_MSG
+ " Meanwhile, you can try the bootstrap method (method='bootstrap') to test statistical significance."
).format(self.__class__)
)
def test_significance(self, estimate_value, method=None, **kwargs):
"""Test statistical significance of obtained estimate.
By default, uses resampling to create a non-parametric significance test.
This is a general procedure; individual child estimators can implement different methods.
If the method name is different from "bootstrap", this function calls the
implementation of the child estimator.
:param self: object instance of class Estimator
:param estimate_value: obtained estimate's value
:param method: Method for checking statistical significance
:returns: p-value from the significance test
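Example (illustrative)::

    signif = estimator.test_significance(estimate.value, method="bootstrap")
    p_value = signif["p_value"]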
"""
if method is None:
if self._significance_test:
method = self._significance_test # this is either True or methodname
else:
method = "default"
signif_dict = None
if method == "default" or method is True: # user has not provided any method
try:
signif_dict = self._test_significance(estimate_value, method, **kwargs)
except NotImplementedError:
signif_dict = self._test_significance_with_bootstrap(estimate_value, **kwargs)
else:
if method == "bootstrap":
signif_dict = self._test_significance_with_bootstrap(estimate_value, **kwargs)
else:
signif_dict = self._test_significance(estimate_value, method, **kwargs)
return signif_dict
def evaluate_effect_strength(self, estimate):
fraction_effect_explained = self._evaluate_effect_strength(estimate, method="fraction-effect")
# Need to test r-squared before supporting
# effect_r_squared = self._evaluate_effect_strength(estimate, method="r-squared")
strength_dict = {
"fraction-effect": fraction_effect_explained
# 'r-squared': effect_r_squared
}
return strength_dict
def _evaluate_effect_strength(self, estimate, method="fraction-effect"):
supported_methods = ["fraction-effect"]
if method not in supported_methods:
raise NotImplementedError("This method is not supported for evaluating effect strength")
if method == "fraction-effect":
naive_obs_estimate = self.estimate_effect_naive()
self.logger.debug("Estimate: %s, naive observed estimate: %s", estimate.value, naive_obs_estimate.value)
fraction_effect_explained = estimate.value / naive_obs_estimate.value
return fraction_effect_explained
# elif method == "r-squared":
# outcome_mean = np.mean(self._outcome)
# total_variance = np.sum(np.square(self._outcome - outcome_mean))
# Assuming a linear model with one variable: the treatment
# Currently only works for continuous y
# causal_model = outcome_mean + estimate.value*self._treatment
# squared_residual = np.sum(np.square(self._outcome - causal_model))
# r_squared = 1 - (squared_residual/total_variance)
# return r_squared
else:
return None
def update_input(self, treatment_value, control_value, target_units):
self._control_value = control_value
self._treatment_value = treatment_value
self._target_units = target_units
@staticmethod
def is_bootstrap_parameter_changed(bootstrap_estimates_params, given_params):
"""Check whether parameters of the bootstrap have changed.
This is an efficiency method that checks if fresh resampling of the bootstrap samples is required.
Returns True if parameters have changed and resampling should be done again.
:param bootstrap_estimates_params: A dictionary of parameters for the current bootstrap samples
:param given_params: A dictionary of parameters passed by the user
:returns: A binary flag denoting whether the parameters are different.
"""
is_any_parameter_changed = False
for prm, val in bootstrap_estimates_params.items():
given_val = given_params.get(prm, None)
if given_val is not None and given_val != val:
is_any_parameter_changed = True
break
return is_any_parameter_changed
def target_units_tostr(self):
s = ""
if type(self._target_units) is str:
s += self._target_units
elif callable(self._target_units):
s += "Data subset defined by a function"
elif isinstance(self._target_units, pd.DataFrame):
s += "Data subset provided as a data frame"
return s
def signif_results_tostr(self, signif_results):
s = ""
pval = signif_results["p_value"]
if type(pval) is tuple:
s += "[{0}, {1}]".format(pval[0], pval[1])
else:
s += "{0}".format(pval)
return s
class CausalEstimate:
"""Class for the estimate object that every causal estimator returns"""
def __init__(
self,
estimate,
target_estimand,
realized_estimand_expr,
control_value,
treatment_value,
conditional_estimates=None,
**kwargs,
):
self.value = estimate
self.target_estimand = target_estimand
self.realized_estimand_expr = realized_estimand_expr
self.control_value = control_value
self.treatment_value = treatment_value
self.conditional_estimates = conditional_estimates
self.params = kwargs
if self.params is not None:
for key, value in self.params.items():
setattr(self, key, value)
self.effect_strength = None
def add_estimator(self, estimator_instance):
self.estimator = estimator_instance
def add_effect_strength(self, strength_dict):
self.effect_strength = strength_dict
def add_params(self, **kwargs):
self.params.update(kwargs)
def get_confidence_intervals(self, confidence_level=None, method=None, **kwargs):
"""Get confidence intervals of the obtained estimate.
By default, this is done with the help of bootstrapped confidence intervals
but can be overridden if the specific estimator implements other methods of estimating confidence intervals.
If the method provided is not bootstrap, this function calls the implementation of the specific estimator.
:param method: Method for estimating confidence intervals.
:param confidence_level: The confidence level of the confidence intervals of the estimate.
:param kwargs: Other optional args to be passed to the CI method.
:returns: The obtained confidence interval.
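Example (illustrative; with the bootstrap method this returns a
(lower, upper) pair)::

    lower, upper = estimate.get_confidence_intervals(confidence_level=0.95, method="bootstrap")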
"""
confidence_intervals = self.estimator.estimate_confidence_intervals(
estimate_value=self.value, confidence_level=confidence_level, method=method, **kwargs
)
return confidence_intervals
def get_standard_error(self, method=None, **kwargs):
"""Get standard error of the obtained estimate.
By default, this is done with the help of bootstrapped standard errors
but can be overridden if the specific estimator implements other methods of estimating standard error.
If the method provided is not bootstrap, this function calls the implementation of the specific estimator.
:param method: Method for computing the standard error.
:param kwargs: Other optional parameters to be passed to the estimating method.
:returns: Standard error of the causal estimate.
"""
std_error = self.estimator.estimate_std_error(method=method, **kwargs)
return std_error
def test_stat_significance(self, method=None, **kwargs):
"""Test statistical significance of the estimate obtained.
By default, uses resampling to create a non-parametric significance test.
Individual child estimators can implement different methods.
If the method name is different from "bootstrap", this function calls the
implementation of the child estimator.
:param method: Method for checking statistical significance
:param kwargs: Other optional parameters to be passed to the estimating method.
:returns: p-value from the significance test
"""
signif_results = self.estimator.test_significance(self.value, method=method, **kwargs)
return {"p_value": signif_results["p_value"]}
def estimate_conditional_effects(
self, effect_modifiers=None, num_quantiles=CausalEstimator.NUM_QUANTILES_TO_DISCRETIZE_CONT_COLS
):
"""Estimate treatment effect conditioned on given variables.
If a numeric effect modifier is provided, it is discretized into quantile bins. If you would like a custom discretization, you can do so yourself: create a new column containing the discretized effect modifier and then include that column's name in the effect_modifier_names argument.
:param effect_modifiers: Names of effect modifier variables over which the conditional effects will be estimated. If not provided, defaults to the effect modifiers specified during creation of the CausalEstimator object.
:param num_quantiles: The number of quantiles into which a numeric effect modifier variable is discretized. Does not affect any categorical effect modifiers.
:returns: A (multi-index) dataframe that provides separate effects for each value of the (discretized) effect modifiers.
"""
return self.estimator._estimate_conditional_effects(
self.estimator._estimate_effect_fn, effect_modifiers, num_quantiles
)
def interpret(self, method_name=None, **kwargs):
"""Interpret the causal estimate.
:param method_name: Method used (string) or a list of methods. If None, then the default for the specific estimator is used.
        :param kwargs: Optional parameters that are directly passed to the interpreter method.
:returns: None
"""
if method_name is None:
method_name = self.estimator.interpret_method
method_name_arr = parse_state(method_name)
for method in method_name_arr:
interpreter = interpreters.get_class_object(method)
interpreter(self, **kwargs).interpret()
def __str__(self):
s = "*** Causal Estimate ***\n"
# No estimand was identified (identification failed)
if self.target_estimand is None:
return "Estimation failed! No relevant identified estimand available for this estimation method."
s += "\n## Identified estimand\n{0}".format(self.target_estimand.__str__(only_target_estimand=True))
s += "\n## Realized estimand\n{0}".format(self.realized_estimand_expr)
if hasattr(self, "estimator"):
s += "\nTarget units: {0}\n".format(self.estimator.target_units_tostr())
s += "\n## Estimate\n"
s += "Mean value: {0}\n".format(self.value)
s += ""
if hasattr(self, "cate_estimates"):
s += "Effect estimates: {0}\n".format(self.cate_estimates)
if hasattr(self, "estimator"):
if self.estimator._significance_test:
s += "p-value: {0}\n".format(self.estimator.signif_results_tostr(self.test_stat_significance()))
if self.estimator._confidence_intervals:
s += "{0}% confidence interval: {1}\n".format(
100 * self.estimator.confidence_level, self.get_confidence_intervals()
)
if self.conditional_estimates is not None:
s += "### Conditional Estimates\n"
s += str(self.conditional_estimates)
if self.effect_strength is not None:
s += "\n## Effect Strength\n"
s += "Change in outcome attributable to treatment: {}\n".format(self.effect_strength["fraction-effect"])
# s += "Variance in outcome explained by treatment: {}\n".format(self.effect_strength["r-squared"])
return s
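# --- Hedged usage sketch (not part of the original module) -----------------
# A CausalEstimate like the one above is normally produced through the
# CausalModel workflow. The sketch below assumes dowhy's synthetic dataset
# helpers and the backdoor linear-regression estimator; all names are
# illustrative.
if __name__ == "__main__":
    import dowhy.datasets
    from dowhy import CausalModel

    sim = dowhy.datasets.linear_dataset(beta=10, num_common_causes=3, num_samples=1000)
    model = CausalModel(
        data=sim["df"],
        treatment=sim["treatment_name"],
        outcome=sim["outcome_name"],
        graph=sim["gml_graph"],
    )
    estimand = model.identify_effect(proceed_when_unidentifiable=True)
    estimate = model.estimate_effect(
        estimand, method_name="backdoor.linear_regression", test_significance=True
    )
    print(estimate)  # rendered via CausalEstimate.__str__ defined above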
class RealizedEstimand(object):
def __init__(self, identified_estimand, estimator_name):
self.treatment_variable = identified_estimand.treatment_variable
self.outcome_variable = identified_estimand.outcome_variable
self.backdoor_variables = identified_estimand.get_backdoor_variables()
self.instrumental_variables = identified_estimand.instrumental_variables
self.estimand_type = identified_estimand.estimand_type
self.estimand_expression = None
self.assumptions = None
self.estimator_name = estimator_name
def update_assumptions(self, estimator_assumptions):
self.assumptions = estimator_assumptions
def update_estimand_expression(self, estimand_expression):
self.estimand_expression = estimand_expression
def __str__(self):
s = "Realized estimand: {0}\n".format(self.estimator_name)
s += "Realized estimand type: {0}\n".format(self.estimand_type)
s += "Estimand expression:\n{0}\n".format(sp.pretty(self.estimand_expression))
j = 1
for ass_name, ass_str in self.assumptions.items():
s += "Estimand assumption {0}, {1}: {2}\n".format(j, ass_name, ass_str)
j += 1
return s
import logging
from collections import namedtuple
from typing import Dict, List, Optional, Union
import numpy as np
import pandas as pd
import sympy as sp
from sklearn.utils import resample
import dowhy.interpreters as interpreters
from dowhy import causal_estimators
from dowhy.causal_graph import CausalGraph
from dowhy.causal_identifier.identified_estimand import IdentifiedEstimand
from dowhy.utils.api import parse_state
logger = logging.getLogger(__name__)
class CausalEstimator:
"""Base class for an estimator of causal effect.
Subclasses implement different estimation methods. All estimation methods are in the package "dowhy.causal_estimators"
"""
# The default number of simulations for statistical testing
DEFAULT_NUMBER_OF_SIMULATIONS_STAT_TEST = 1000
# The default number of simulations to obtain confidence intervals
DEFAULT_NUMBER_OF_SIMULATIONS_CI = 100
# The portion of the total size that should be taken each time to find the confidence intervals
# 1 is the recommended value
# https://ocw.mit.edu/courses/mathematics/18-05-introduction-to-probability-and-statistics-spring-2014/readings/MIT18_05S14_Reading24.pdf
# https://projecteuclid.org/download/pdf_1/euclid.ss/1032280214
DEFAULT_SAMPLE_SIZE_FRACTION = 1
# The default Confidence Level
DEFAULT_CONFIDENCE_LEVEL = 0.95
# Number of quantiles to discretize continuous columns, for applying groupby
NUM_QUANTILES_TO_DISCRETIZE_CONT_COLS = 5
# Prefix to add to temporary categorical variables created after discretization
TEMP_CAT_COLUMN_PREFIX = "__categorical__"
    DEFAULT_NOTIMPLEMENTEDERROR_MSG = "not yet implemented for {0}. If you would like this to be implemented in the next version, please raise an issue at https://github.com/microsoft/dowhy/issues"
BootstrapEstimates = namedtuple("BootstrapEstimates", ["estimates", "params"])
DEFAULT_INTERPRET_METHOD = ["textual_effect_interpreter"]
# std args to be removed from locals() before being passed to args_dict
_STD_INIT_ARGS = ("self", "__class__", "args", "kwargs")
def __init__(
self,
data,
identified_estimand,
treatment,
outcome,
control_value=0,
treatment_value=1,
test_significance=False,
evaluate_effect_strength=False,
confidence_intervals=False,
target_units=None,
effect_modifiers=None,
num_null_simulations=DEFAULT_NUMBER_OF_SIMULATIONS_STAT_TEST,
num_simulations=DEFAULT_NUMBER_OF_SIMULATIONS_CI,
sample_size_fraction=DEFAULT_SAMPLE_SIZE_FRACTION,
confidence_level=DEFAULT_CONFIDENCE_LEVEL,
need_conditional_estimates="auto",
num_quantiles_to_discretize_cont_cols=NUM_QUANTILES_TO_DISCRETIZE_CONT_COLS,
**kwargs,
):
"""Initializes an estimator with data and names of relevant variables.
This method is called from the constructors of its child classes.
:param data: data frame containing the data
:param identified_estimand: probability expression
representing the target identified estimand to estimate.
:param treatment: name of the treatment variable
:param outcome: name of the outcome variable
:param control_value: Value of the treatment in the control group, for effect estimation. If treatment is multi-variate, this can be a list.
:param treatment_value: Value of the treatment in the treated group, for effect estimation. If treatment is multi-variate, this can be a list.
:param test_significance: Binary flag or a string indicating whether to test significance and by which method. All estimators support test_significance="bootstrap" that estimates a p-value for the obtained estimate using the bootstrap method. Individual estimators can override this to support custom testing methods. The bootstrap method supports an optional parameter, num_null_simulations. If False, no testing is done. If True, significance of the estimate is tested using the custom method if available, otherwise by bootstrap.
:param evaluate_effect_strength: (Experimental) whether to evaluate the strength of effect
:param confidence_intervals: Binary flag or a string indicating whether the confidence intervals should be computed and which method should be used. All methods support estimation of confidence intervals using the bootstrap method by using the parameter confidence_intervals="bootstrap". The bootstrap method takes in two arguments (num_simulations and sample_size_fraction) that can be optionally specified in the params dictionary. Estimators may also override this to implement their own confidence interval method. If this parameter is False, no confidence intervals are computed. If True, confidence intervals are computed by the estimator's specific method if available, otherwise through bootstrap.
:param target_units: The units for which the treatment effect should be estimated. This can be a string for common specifications of target units (namely, "ate", "att" and "atc"). It can also be a lambda function that can be used as an index for the data (pandas DataFrame). Alternatively, it can be a new DataFrame that contains values of the effect_modifiers and effect will be estimated only for this new data.
:param effect_modifiers: Variables on which to compute separate
effects, or return a heterogeneous effect function. Not all
methods support this currently.
:param num_null_simulations: The number of simulations for testing the
statistical significance of the estimator
:param num_simulations: The number of simulations for finding the
            confidence interval (and/or standard error) for an estimate
:param sample_size_fraction: The size of the sample for the bootstrap
estimator
:param confidence_level: The confidence level of the confidence
interval estimate
:param need_conditional_estimates: Boolean flag indicating whether
conditional estimates should be computed. Defaults to True if
there are effect modifiers in the graph
:param num_quantiles_to_discretize_cont_cols: The number of quantiles
into which a numeric effect modifier is split, to enable
estimation of conditional treatment effect over it.
:param kwargs: (optional) Additional estimator-specific parameters
:returns: an instance of the estimator class.
"""
self._data = data
self._target_estimand = identified_estimand
# Currently estimation methods only support univariate treatment and outcome
self._treatment_name = treatment
self._outcome_name = outcome[0] # assuming one-dimensional outcome
self._control_value = control_value
self._treatment_value = treatment_value
self._significance_test = test_significance
self._effect_strength_eval = evaluate_effect_strength
self._target_units = target_units
self._effect_modifier_names = effect_modifiers
self._confidence_intervals = confidence_intervals
self._bootstrap_estimates = None # for confidence intervals and std error
self._bootstrap_null_estimates = None # for significance test
self._effect_modifiers = None
self.method_params = kwargs
# Setting the default interpret method
self.interpret_method = CausalEstimator.DEFAULT_INTERPRET_METHOD
self.logger = logging.getLogger(__name__)
# Setting treatment and outcome values
if self._data is not None:
self._treatment = self._data[self._treatment_name]
self._outcome = self._data[self._outcome_name]
if self._effect_modifier_names:
# only add the observed nodes
self._effect_modifier_names = [
cname for cname in self._effect_modifier_names if cname in self._data.columns
]
if len(self._effect_modifier_names) > 0:
self._effect_modifiers = self._data[self._effect_modifier_names]
self._effect_modifiers = pd.get_dummies(self._effect_modifiers, drop_first=True)
self.logger.debug("Effect modifiers: " + ",".join(self._effect_modifier_names))
else:
self._effect_modifier_names = None
# Check if some parameters were set, otherwise set to default values
self.num_null_simulations = num_null_simulations
self.num_simulations = num_simulations
self.sample_size_fraction = sample_size_fraction
self.confidence_level = confidence_level
self.num_quantiles_to_discretize_cont_cols = num_quantiles_to_discretize_cont_cols
# Estimate conditional estimates by default
self.need_conditional_estimates = (
need_conditional_estimates if need_conditional_estimates != "auto" else bool(self._effect_modifier_names)
)
@staticmethod
def get_estimator_object(new_data, identified_estimand, estimate):
"""Create a new estimator of the same type as the one passed in the estimate argument.
Creates a new object with new_data and the identified_estimand
:param new_data: np.ndarray, pd.Series, pd.DataFrame
The newly assigned data on which the estimator should run
:param identified_estimand: IdentifiedEstimand
An instance of the identified estimand class that provides the information with
            respect to which causal pathways are employed when the treatment affects the outcome
:param estimate: CausalEstimate
It is an already existing estimate whose properties we wish to replicate
:returns: An instance of the same estimator class that had generated the given estimate.
"""
estimator_class = estimate.params["estimator_class"]
new_estimator = estimator_class(
new_data,
identified_estimand,
identified_estimand.treatment_variable,
identified_estimand.outcome_variable,
# names of treatment and outcome
control_value=estimate.control_value,
treatment_value=estimate.treatment_value,
test_significance=False,
evaluate_effect_strength=False,
confidence_intervals=estimate.params["confidence_intervals"],
target_units=estimate.params["target_units"],
effect_modifiers=estimate.params["effect_modifiers"],
**estimate.params["method_params"] if estimate.params["method_params"] is not None else {},
)
return new_estimator
def _estimate_effect(self):
"""This method is to be overriden by the child classes, so that they can run the estimation technique of their choice"""
raise NotImplementedError(
("Main estimation method is " + CausalEstimator.DEFAULT_NOTIMPLEMENTEDERROR_MSG).format(self.__class__)
)
def estimate_effect(self):
"""Base estimation method that calls the estimate_effect method of its calling subclass.
Can optionally also test significance and estimate effect strength for any returned estimate.
:param self: object instance of class Estimator
        :returns: A CausalEstimate instance that contains point estimates of average and conditional effects. Based on the parameters provided, it optionally includes confidence intervals, standard errors, statistical significance and other statistical parameters.
"""
est = self._estimate_effect()
est.add_estimator(self)
if self._significance_test:
self.test_significance(est.value, method=self._significance_test)
if self._confidence_intervals:
self.estimate_confidence_intervals(
est.value, confidence_level=self.confidence_level, method=self._confidence_intervals
)
if self._effect_strength_eval:
effect_strength_dict = self.evaluate_effect_strength(est)
est.add_effect_strength(effect_strength_dict)
return est
def estimate_effect_naive(self):
# TODO Only works for binary treatment
df_withtreatment = self._data.loc[self._data[self._treatment_name] == 1]
df_notreatment = self._data.loc[self._data[self._treatment_name] == 0]
est = np.mean(df_withtreatment[self._outcome_name]) - np.mean(df_notreatment[self._outcome_name])
return CausalEstimate(est, None, None, control_value=0, treatment_value=1)
def _estimate_effect_fn(self, data_df):
"""Function used in conditional effect estimation. This function is to be overridden by each child estimator.
The overridden function should take in a dataframe as input and return the estimate for that data.
"""
raise NotImplementedError(
("Conditional treatment effects are " + CausalEstimator.DEFAULT_NOTIMPLEMENTEDERROR_MSG).format(
self.__class__
)
)
def _estimate_conditional_effects(self, estimate_effect_fn, effect_modifier_names=None, num_quantiles=None):
"""Estimate conditional treatment effects. Common method for all estimators that utilizes a specific estimate_effect_fn implemented by each child estimator.
If a numeric effect modifier is provided, it is discretized into quantile bins. If you would like a custom discretization, you can do so yourself: create a new column containing the discretized effect modifier and then include that column's name in the effect_modifier_names argument.
:param estimate_effect_fn: Function that has a single parameter (a data frame) and returns the treatment effect estimate on that data.
:param effect_modifier_names: Names of effect modifier variables over which the conditional effects will be estimated. If not provided, defaults to the effect modifiers specified during creation of the CausalEstimator object.
:param num_quantiles: The number of quantiles into which a numeric effect modifier variable is discretized. Does not affect any categorical effect modifiers.
:returns: A (multi-index) dataframe that provides separate effects for each value of the (discretized) effect modifiers.
"""
# Defaulting to class default values if parameters are not provided
if effect_modifier_names is None:
effect_modifier_names = self._effect_modifier_names
if num_quantiles is None:
num_quantiles = self.num_quantiles_to_discretize_cont_cols
# Checking that there is at least one effect modifier
if not effect_modifier_names:
raise ValueError("At least one effect modifier should be specified to compute conditional effects.")
# Making sure that effect_modifier_names is a list
effect_modifier_names = parse_state(effect_modifier_names)
if not all(em in self._effect_modifier_names for em in effect_modifier_names):
            self.logger.warning(
"At least one of the provided effect modifiers was not included while fitting the estimator. You may get incorrect results. To resolve, fit the estimator again by providing the updated effect modifiers in estimate_effect()."
)
# Making a copy since we are going to be changing effect modifier names
effect_modifier_names = effect_modifier_names.copy()
prefix = CausalEstimator.TEMP_CAT_COLUMN_PREFIX
# For every numeric effect modifier, adding a temp categorical column
for i in range(len(effect_modifier_names)):
em = effect_modifier_names[i]
if pd.api.types.is_numeric_dtype(self._data[em].dtypes):
self._data[prefix + str(em)] = pd.qcut(self._data[em], num_quantiles, duplicates="drop")
effect_modifier_names[i] = prefix + str(em)
# Grouping by effect modifiers and computing effect separately
by_effect_mods = self._data.groupby(effect_modifier_names)
cond_est_fn = lambda x: self._do(self._treatment_value, x) - self._do(self._control_value, x)
conditional_estimates = by_effect_mods.apply(estimate_effect_fn)
# Deleting the temporary categorical columns
for em in effect_modifier_names:
if em.startswith(prefix):
self._data.pop(em)
return conditional_estimates
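    # Worked micro-example of the discretization above (illustrative): for a
    # numeric modifier "age" with num_quantiles=5, pd.qcut adds a temporary
    # "__categorical__age" column of quintile bins, groupby/apply computes one
    # effect per bin, and the temporary column is dropped before returning.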
def _do(self, x, data_df=None):
raise NotImplementedError(
("Do-operator is " + CausalEstimator.DEFAULT_NOTIMPLEMENTEDERROR_MSG).format(self.__class__)
)
def do(self, x, data_df=None):
"""Method that implements the do-operator.
Given a value x for the treatment, returns the expected value of the outcome when the treatment is intervened to a value x.
:param x: Value of the treatment
:param data_df: Data on which the do-operator is to be applied.
:returns: Value of the outcome when treatment is intervened/set to x.
"""
est = self._do(x, data_df)
return est
def construct_symbolic_estimator(self, estimand):
raise NotImplementedError(("Symbolic estimator string is ").format(self.__class__))
def _generate_bootstrap_estimates(self, num_bootstrap_simulations, sample_size_fraction):
"""Helper function to generate causal estimates over bootstrapped samples.
:param num_bootstrap_simulations: Number of simulations for the bootstrap method.
:param sample_size_fraction: Fraction of the dataset to be resampled.
:returns: A collections.namedtuple containing a list of bootstrapped estimates and a dictionary containing parameters used for the bootstrap.
"""
# The array that stores the results of all estimations
simulation_results = np.zeros(num_bootstrap_simulations)
        # Compute the sample size as the given fraction of the full dataset size
sample_size = int(sample_size_fraction * len(self._data))
if sample_size > len(self._data):
self.logger.warning("WARN: The sample size is greater than the data being sampled")
self.logger.info("INFO: The sample size: {}".format(sample_size))
self.logger.info("INFO: The number of simulations: {}".format(num_bootstrap_simulations))
# Perform the set number of simulations
for index in range(num_bootstrap_simulations):
new_data = resample(self._data, n_samples=sample_size)
new_estimator = type(self)(
new_data,
self._target_estimand,
self._target_estimand.treatment_variable,
self._target_estimand.outcome_variable,
# names of treatment and outcome
treatment_value=self._treatment_value,
control_value=self._control_value,
test_significance=False,
evaluate_effect_strength=False,
confidence_intervals=False,
target_units=self._target_units,
effect_modifiers=self._effect_modifier_names,
**self.method_params,
)
new_effect = new_estimator.estimate_effect()
simulation_results[index] = new_effect.value
estimates = CausalEstimator.BootstrapEstimates(
simulation_results,
{"num_simulations": num_bootstrap_simulations, "sample_size_fraction": sample_size_fraction},
)
return estimates
def _estimate_confidence_intervals_with_bootstrap(
self, estimate_value, confidence_level=None, num_simulations=None, sample_size_fraction=None
):
"""
Method to compute confidence interval using bootstrapped sampling.
:param estimate_value: obtained estimate's value
:param confidence_level: The level for which to compute CI (e.g., 95% confidence level translates to confidence_level=0.95)
:param num_simulations: The number of simulations to be performed to get the bootstrap confidence intervals.
:param sample_size_fraction: The fraction of the dataset to be resampled.
:returns: confidence interval at the specified level.
For more details on bootstrap or resampling statistics, refer to the following links:
https://ocw.mit.edu/courses/mathematics/18-05-introduction-to-probability-and-statistics-spring-2014/readings/MIT18_05S14_Reading24.pdf
https://projecteuclid.org/download/pdf_1/euclid.ss/1032280214
"""
# Using class default parameters if not specified
if num_simulations is None:
num_simulations = self.num_simulations
if sample_size_fraction is None:
sample_size_fraction = self.sample_size_fraction
# Checking if bootstrap_estimates are already computed
if self._bootstrap_estimates is None:
self._bootstrap_estimates = self._generate_bootstrap_estimates(num_simulations, sample_size_fraction)
elif CausalEstimator.is_bootstrap_parameter_changed(self._bootstrap_estimates.params, locals()):
# Checked if any parameter is changed from the previous std error estimate
self._bootstrap_estimates = self._generate_bootstrap_estimates(num_simulations, sample_size_fraction)
# Now use the data obtained from the simulations to get the value of the confidence estimates
bootstrap_estimates = self._bootstrap_estimates.estimates
# Get the variations of each bootstrap estimate and sort
bootstrap_variations = [bootstrap_estimate - estimate_value for bootstrap_estimate in bootstrap_estimates]
sorted_bootstrap_variations = np.sort(bootstrap_variations)
# Now we take the (1- p)th and the (p)th variations, where p is the chosen confidence level
upper_bound_index = int((1 - confidence_level) * len(sorted_bootstrap_variations))
lower_bound_index = int(confidence_level * len(sorted_bootstrap_variations))
# Get the lower and upper bounds by subtracting the variations from the estimate
lower_bound = estimate_value - sorted_bootstrap_variations[lower_bound_index]
upper_bound = estimate_value - sorted_bootstrap_variations[upper_bound_index]
return lower_bound, upper_bound
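    # Worked micro-example of the percentile arithmetic above (illustrative):
    # with 100 sorted bootstrap variations v and confidence_level=0.95,
    # upper_bound_index = int(0.05 * 100) = 5 and
    # lower_bound_index = int(0.95 * 100) = 95,
    # so the reported interval is [estimate - v[95], estimate - v[5]].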
def _estimate_confidence_intervals(self, confidence_level=None, method=None, **kwargs):
"""
        This method is to be overridden by the child classes, so that they
can run a confidence interval estimation method suited to the specific
causal estimator.
"""
raise NotImplementedError(
(
"This method for estimating confidence intervals is "
+ CausalEstimator.DEFAULT_NOTIMPLEMENTEDERROR_MSG
+ " Meanwhile, you can try the bootstrap method (method='bootstrap') to estimate confidence intervals."
).format(self.__class__)
)
def estimate_confidence_intervals(self, estimate_value, confidence_level=None, method=None, **kwargs):
"""Find the confidence intervals corresponding to any estimator
By default, this is done with the help of bootstrapped confidence intervals
but can be overridden if the specific estimator implements other methods of estimating confidence intervals.
If the method provided is not bootstrap, this function calls the implementation of the specific estimator.
:param estimate_value: obtained estimate's value
:param method: Method for estimating confidence intervals.
:param confidence_level: The confidence level of the confidence intervals of the estimate.
:param kwargs: Other optional args to be passed to the CI method.
:returns: The obtained confidence interval.
"""
if method is None:
if self._confidence_intervals:
method = self._confidence_intervals # this is either True or methodname
else:
method = "default"
confidence_intervals = None
if confidence_level is None:
confidence_level = self.confidence_level
if method == "default" or method is True: # user has not provided any method
try:
confidence_intervals = self._estimate_confidence_intervals(confidence_level, method=method, **kwargs)
except NotImplementedError:
confidence_intervals = self._estimate_confidence_intervals_with_bootstrap(
estimate_value, confidence_level, **kwargs
)
else:
if method == "bootstrap":
confidence_intervals = self._estimate_confidence_intervals_with_bootstrap(
estimate_value, confidence_level, **kwargs
)
else:
confidence_intervals = self._estimate_confidence_intervals(confidence_level, method=method, **kwargs)
return confidence_intervals
def _estimate_std_error_with_bootstrap(self, num_simulations=None, sample_size_fraction=None):
"""Compute standard error using the bootstrap method. Standard error
and confidence intervals use the same parameter num_simulations for
the number of bootstrap simulations.
:param num_simulations: Number of bootstrapped samples.
:param sample_size_fraction: Fraction of data to be resampled.
:returns: Standard error of the obtained estimate.
"""
# Use existing params, if new user defined params are not present
if num_simulations is None:
num_simulations = self.num_simulations
if sample_size_fraction is None:
sample_size_fraction = self.sample_size_fraction
# Checking if bootstrap_estimates are already computed
if self._bootstrap_estimates is None:
self._bootstrap_estimates = self._generate_bootstrap_estimates(num_simulations, sample_size_fraction)
elif CausalEstimator.is_bootstrap_parameter_changed(self._bootstrap_estimates.params, locals()):
# Check if any parameter is changed from the previous std error estimate
self._bootstrap_estimates = self._generate_bootstrap_estimates(num_simulations, sample_size_fraction)
std_error = np.std(self._bootstrap_estimates.estimates)
return std_error
def _estimate_std_error(self, method=None, **kwargs):
"""
        This method is to be overridden by the child classes, so that they
can run a standard error estimation method suited to the specific
causal estimator.
"""
raise NotImplementedError(
(
"This method for estimating standard errors is "
+ CausalEstimator.DEFAULT_NOTIMPLEMENTEDERROR_MSG
+ " Meanwhile, you can try the bootstrap method (method='bootstrap') to estimate standard errors."
).format(self.__class__)
)
def estimate_std_error(self, method=None, **kwargs):
"""Compute standard error of an obtained causal estimate.
:param method: Method for computing the standard error.
:param kwargs: Other optional parameters to be passed to the estimating method.
:returns: Standard error of the causal estimate.
"""
if method is None:
if self._confidence_intervals:
method = self._confidence_intervals
else:
method = "default"
std_error = None
if method == "default" or method is True: # user has not provided any method
try:
std_error = self._estimate_std_error(method, **kwargs)
except NotImplementedError:
std_error = self._estimate_std_error_with_bootstrap(**kwargs)
else:
if method == "bootstrap":
std_error = self._estimate_std_error_with_bootstrap(**kwargs)
else:
std_error = self._estimate_std_error(method, **kwargs)
return std_error
def _test_significance_with_bootstrap(self, estimate_value, num_null_simulations=None):
"""Test statistical significance of an estimate using the bootstrap method.
:param estimate_value: Obtained estimate's value
:param num_null_simulations: Number of simulations for the null hypothesis
:returns: p-value of the statistical significance test.
"""
# Use existing params, if new user defined params are not present
if num_null_simulations is None:
num_null_simulations = self.num_null_simulations
do_retest = self._bootstrap_null_estimates is None or CausalEstimator.is_bootstrap_parameter_changed(
self._bootstrap_null_estimates.params, locals()
)
if do_retest:
null_estimates = np.zeros(num_null_simulations)
for i in range(num_null_simulations):
new_outcome = np.random.permutation(self._outcome)
new_data = self._data.assign(dummy_outcome=new_outcome)
# self._outcome = self._data["dummy_outcome"]
new_estimator = type(self)(
new_data,
self._target_estimand,
self._target_estimand.treatment_variable,
("dummy_outcome",),
test_significance=False,
evaluate_effect_strength=False,
confidence_intervals=False,
target_units=self._target_units,
effect_modifiers=self._effect_modifier_names,
**self.method_params,
)
new_effect = new_estimator.estimate_effect()
null_estimates[i] = new_effect.value
self._bootstrap_null_estimates = CausalEstimator.BootstrapEstimates(
null_estimates, {"num_null_simulations": num_null_simulations, "sample_size_fraction": 1}
)
# Processing the null hypothesis estimates
sorted_null_estimates = np.sort(self._bootstrap_null_estimates.estimates)
self.logger.debug("Null estimates: {0}".format(sorted_null_estimates))
median_estimate = sorted_null_estimates[int(num_null_simulations / 2)]
# Doing a two-sided test
if estimate_value > median_estimate:
# Being conservative with the p-value reported
estimate_index = np.searchsorted(sorted_null_estimates, estimate_value, side="left")
p_value = 1 - (estimate_index / num_null_simulations)
if estimate_value <= median_estimate:
# Being conservative with the p-value reported
estimate_index = np.searchsorted(sorted_null_estimates, estimate_value, side="right")
p_value = estimate_index / num_null_simulations
        # A p-value of exactly 0 or 1 is only resolvable up to 1/num_simulations, so report a range instead
if p_value == 0:
p_value = (0, 1 / len(sorted_null_estimates)) # a tuple determining the range.
elif p_value == 1:
p_value = (1 - 1 / len(sorted_null_estimates), 1)
signif_dict = {"p_value": p_value}
return signif_dict
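    # Worked micro-example of the permutation p-value above (illustrative):
    # with 1000 permuted-outcome estimates, an observed estimate above the
    # null median that exceeds 990 of the sorted null estimates gets
    # p_value = 1 - 990/1000 = 0.01; an estimate beyond every null sample is
    # reported as a range such as (0, 0.001) rather than an exact zero.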
def _test_significance(self, estimate_value, method=None, **kwargs):
"""
        This method is to be overridden by the child classes, so that they
can run a significance test suited to the specific
causal estimator.
"""
raise NotImplementedError(
(
"This method for testing statistical significance is "
+ CausalEstimator.DEFAULT_NOTIMPLEMENTEDERROR_MSG
+ " Meanwhile, you can try the bootstrap method (method='bootstrap') to test statistical significance."
).format(self.__class__)
)
def test_significance(self, estimate_value, method=None, **kwargs):
"""Test statistical significance of obtained estimate.
By default, uses resampling to create a non-parametric significance test.
A general procedure. Individual child estimators can implement different methods.
If the method name is different from "bootstrap", this function calls the
implementation of the child estimator.
:param self: object instance of class Estimator
:param estimate_value: obtained estimate's value
:param method: Method for checking statistical significance
:returns: p-value from the significance test
"""
if method is None:
if self._significance_test:
method = self._significance_test # this is either True or methodname
else:
method = "default"
signif_dict = None
if method == "default" or method is True: # user has not provided any method
try:
signif_dict = self._test_significance(estimate_value, method, **kwargs)
except NotImplementedError:
signif_dict = self._test_significance_with_bootstrap(estimate_value, **kwargs)
else:
if method == "bootstrap":
signif_dict = self._test_significance_with_bootstrap(estimate_value, **kwargs)
else:
signif_dict = self._test_significance(estimate_value, method, **kwargs)
return signif_dict
def evaluate_effect_strength(self, estimate):
fraction_effect_explained = self._evaluate_effect_strength(estimate, method="fraction-effect")
# Need to test r-squared before supporting
# effect_r_squared = self._evaluate_effect_strength(estimate, method="r-squared")
strength_dict = {
"fraction-effect": fraction_effect_explained
# 'r-squared': effect_r_squared
}
return strength_dict
def _evaluate_effect_strength(self, estimate, method="fraction-effect"):
supported_methods = ["fraction-effect"]
if method not in supported_methods:
raise NotImplementedError("This method is not supported for evaluating effect strength")
if method == "fraction-effect":
naive_obs_estimate = self.estimate_effect_naive()
            self.logger.debug("Estimate: %s, Naive observed estimate: %s", estimate.value, naive_obs_estimate.value)
fraction_effect_explained = estimate.value / naive_obs_estimate.value
return fraction_effect_explained
# elif method == "r-squared":
# outcome_mean = np.mean(self._outcome)
# total_variance = np.sum(np.square(self._outcome - outcome_mean))
# Assuming a linear model with one variable: the treatment
# Currently only works for continuous y
# causal_model = outcome_mean + estimate.value*self._treatment
# squared_residual = np.sum(np.square(self._outcome - causal_model))
# r_squared = 1 - (squared_residual/total_variance)
# return r_squared
else:
return None
def update_input(self, treatment_value, control_value, target_units):
self._control_value = control_value
self._treatment_value = treatment_value
self._target_units = target_units
@staticmethod
def is_bootstrap_parameter_changed(bootstrap_estimates_params, given_params):
"""Check whether parameters of the bootstrap have changed.
This is an efficiency method that checks if fresh resampling of the bootstrap samples is required.
Returns True if parameters have changed and resampling should be done again.
:param bootstrap_estimates_params: A dictionary of parameters for the current bootstrap samples
:param given_params: A dictionary of parameters passed by the user
:returns: A binary flag denoting whether the parameters are different.
"""
is_any_parameter_changed = False
for prm, val in bootstrap_estimates_params.items():
given_val = given_params.get(prm, None)
if given_val is not None and given_val != val:
is_any_parameter_changed = True
break
return is_any_parameter_changed
def target_units_tostr(self):
s = ""
if type(self._target_units) is str:
s += self._target_units
elif callable(self._target_units):
s += "Data subset defined by a function"
elif isinstance(self._target_units, pd.DataFrame):
s += "Data subset provided as a data frame"
return s
def signif_results_tostr(self, signif_results):
s = ""
pval = signif_results["p_value"]
if type(pval) is tuple:
s += "[{0}, {1}]".format(pval[0], pval[1])
else:
s += "{0}".format(pval)
return s
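# --- Hedged extension sketch (not part of the original module) -------------
# A subclass only needs to implement _estimate_effect() and return a
# CausalEstimate. This toy difference-in-means estimator illustrates the
# contract; it assumes a single treatment column whose values match
# control_value/treatment_value and is for illustration only.
class _NaiveDifferenceEstimator(CausalEstimator):
    def _estimate_effect(self):
        treatment_col = parse_state(self._treatment_name)[0]
        treated = self._data[self._data[treatment_col] == self._treatment_value]
        control = self._data[self._data[treatment_col] == self._control_value]
        effect = treated[self._outcome_name].mean() - control[self._outcome_name].mean()
        return CausalEstimate(
            effect,
            self._target_estimand,
            None,  # no symbolic realized-estimand expression in this sketch
            control_value=self._control_value,
            treatment_value=self._treatment_value,
        )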
def estimate_effect(
treatment: Union[str, List[str]],
outcome: Union[str, List[str]],
identified_estimand: IdentifiedEstimand,
identifier_name: str,
method: CausalEstimator,
control_value: int = 0,
treatment_value: int = 1,
test_significance: Optional[bool] = None,
evaluate_effect_strength: bool = False,
confidence_intervals: bool = False,
target_units: str = "ate",
effect_modifiers: List[str] = [],
fit_estimator: bool = True,
method_params: Optional[Dict] = None,
):
"""Estimate the identified causal effect.
    Currently requires an explicit estimation method to be specified via the method argument. Method names follow the convention of identification method followed by the specific estimation method: "[backdoor/iv].estimation_method_name". The following methods are supported.
* Propensity Score Matching: "backdoor.propensity_score_matching"
* Propensity Score Stratification: "backdoor.propensity_score_stratification"
* Propensity Score-based Inverse Weighting: "backdoor.propensity_score_weighting"
* Linear Regression: "backdoor.linear_regression"
* Generalized Linear Models (e.g., logistic regression): "backdoor.generalized_linear_model"
* Instrumental Variables: "iv.instrumental_variable"
* Regression Discontinuity: "iv.regression_discontinuity"
In addition, you can directly call any of the EconML estimation methods. The convention is "backdoor.econml.path-to-estimator-class". For example, for the double machine learning estimator ("DML" class) that is located inside "dml" module of EconML, you can use the method name, "backdoor.econml.dml.DML". CausalML estimators can also be called. See `this demo notebook <https://py-why.github.io/dowhy/example_notebooks/dowhy-conditional-treatment-effects.html>`_.
:param treatment: Name of the treatment
:param outcome: Name of the outcome
:param identified_estimand: a probability expression
that represents the effect to be estimated. Output of
CausalModel.identify_effect method
    :param identifier_name: name of the identification method (e.g., "backdoor") whose estimand should be used.
    :param method: an instance of a CausalEstimator subclass implementing the estimation method to be used.
:param control_value: Value of the treatment in the control group, for effect estimation. If treatment is multi-variate, this can be a list.
:param treatment_value: Value of the treatment in the treated group, for effect estimation. If treatment is multi-variate, this can be a list.
    :param test_significance: Binary flag on whether to additionally do a statistical significance test for the estimate.
:param evaluate_effect_strength: (Experimental) Binary flag on whether to estimate the relative strength of the treatment's effect. This measure can be used to compare different treatments for the same outcome (by running this method with different treatments sequentially).
:param confidence_intervals: (Experimental) Binary flag indicating whether confidence intervals should be computed.
:param target_units: (Experimental) The units for which the treatment effect should be estimated. This can be of three types. (1) a string for common specifications of target units (namely, "ate", "att" and "atc"), (2) a lambda function that can be used as an index for the data (pandas DataFrame), or (3) a new DataFrame that contains values of the effect_modifiers and effect will be estimated only for this new data.
:param effect_modifiers: Names of effect modifier variables can be (optionally) specified here too, since they do not affect identification. If None, the effect_modifiers from the CausalModel are used.
:param fit_estimator: Boolean flag on whether to fit the estimator.
Setting it to False is useful to estimate the effect on new data using a previously fitted estimator.
:param method_params: Dictionary containing any method-specific parameters. These are passed directly to the estimating method. See the docs for each estimation method for allowed method-specific params.
:returns: An instance of the CausalEstimate class, containing the causal effect estimate
and other method-dependent information
"""
treatment = parse_state(treatment)
outcome = parse_state(outcome)
causal_estimator_class = method.__class__
identified_estimand.set_identifier_method(identifier_name)
if identified_estimand.no_directed_path:
logger.warning("No directed path from {0} to {1}.".format(treatment, outcome))
return CausalEstimate(
0, identified_estimand, None, control_value=control_value, treatment_value=treatment_value
)
# Check if estimator's target estimand is identified
elif identified_estimand.estimands[identifier_name] is None:
logger.error("No valid identified estimand available.")
return CausalEstimate(None, None, None, control_value=control_value, treatment_value=treatment_value)
method.update_input(treatment_value, control_value, target_units)
estimate = method.estimate_effect()
# Store parameters inside estimate object for refutation methods
# TODO: This add_params needs to move to the estimator class
# inside estimate_effect and estimate_conditional_effect
estimate.add_params(
estimand_type=identified_estimand.estimand_type,
estimator_class=causal_estimator_class,
test_significance=test_significance,
evaluate_effect_strength=evaluate_effect_strength,
confidence_intervals=confidence_intervals,
target_units=target_units,
effect_modifiers=effect_modifiers,
method_params=method_params,
)
return estimate
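# --- Hedged usage sketch for the functional API above ----------------------
# Identification is done via CausalModel for brevity; estimator construction
# follows CausalEstimator.__init__ as defined in this module. Dataset helper
# names are illustrative and may differ across dowhy versions.
if __name__ == "__main__":
    import dowhy.datasets
    from dowhy import CausalModel
    from dowhy.causal_estimators.linear_regression_estimator import LinearRegressionEstimator

    sim = dowhy.datasets.linear_dataset(beta=5, num_common_causes=2, num_samples=500)
    model = CausalModel(
        data=sim["df"],
        treatment=sim["treatment_name"],
        outcome=sim["outcome_name"],
        graph=sim["gml_graph"],
    )
    estimand = model.identify_effect(proceed_when_unidentifiable=True)
    treatment_names = parse_state(sim["treatment_name"])
    outcome_names = parse_state(sim["outcome_name"])
    estimator = LinearRegressionEstimator(sim["df"], estimand, treatment_names, outcome_names)
    est = estimate_effect(
        treatment=treatment_names,
        outcome=outcome_names,
        identified_estimand=estimand,
        identifier_name="backdoor",
        method=estimator,
    )
    print(est.value)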
class CausalEstimate:
"""Class for the estimate object that every causal estimator returns"""
def __init__(
self,
estimate,
target_estimand,
realized_estimand_expr,
control_value,
treatment_value,
conditional_estimates=None,
**kwargs,
):
self.value = estimate
self.target_estimand = target_estimand
self.realized_estimand_expr = realized_estimand_expr
self.control_value = control_value
self.treatment_value = treatment_value
self.conditional_estimates = conditional_estimates
self.params = kwargs
if self.params is not None:
for key, value in self.params.items():
setattr(self, key, value)
self.effect_strength = None
def add_estimator(self, estimator_instance):
self.estimator = estimator_instance
def add_effect_strength(self, strength_dict):
self.effect_strength = strength_dict
def add_params(self, **kwargs):
self.params.update(kwargs)
def get_confidence_intervals(self, confidence_level=None, method=None, **kwargs):
"""Get confidence intervals of the obtained estimate.
By default, this is done with the help of bootstrapped confidence intervals
but can be overridden if the specific estimator implements other methods of estimating confidence intervals.
If the method provided is not bootstrap, this function calls the implementation of the specific estimator.
:param method: Method for estimating confidence intervals.
:param confidence_level: The confidence level of the confidence intervals of the estimate.
:param kwargs: Other optional args to be passed to the CI method.
:returns: The obtained confidence interval.
"""
confidence_intervals = self.estimator.estimate_confidence_intervals(
estimate_value=self.value, confidence_level=confidence_level, method=method, **kwargs
)
return confidence_intervals
def get_standard_error(self, method=None, **kwargs):
"""Get standard error of the obtained estimate.
By default, this is done with the help of bootstrapped standard errors
but can be overridden if the specific estimator implements other methods of estimating standard error.
If the method provided is not bootstrap, this function calls the implementation of the specific estimator.
:param method: Method for computing the standard error.
:param kwargs: Other optional parameters to be passed to the estimating method.
:returns: Standard error of the causal estimate.
"""
std_error = self.estimator.estimate_std_error(method=method, **kwargs)
return std_error
def test_stat_significance(self, method=None, **kwargs):
"""Test statistical significance of the estimate obtained.
By default, uses resampling to create a non-parametric significance test.
Individual child estimators can implement different methods.
If the method name is different from "bootstrap", this function calls the
implementation of the child estimator.
:param method: Method for checking statistical significance
:param kwargs: Other optional parameters to be passed to the estimating method.
:returns: p-value from the significance test
"""
signif_results = self.estimator.test_significance(self.value, method=method, **kwargs)
return {"p_value": signif_results["p_value"]}
def estimate_conditional_effects(
self, effect_modifiers=None, num_quantiles=CausalEstimator.NUM_QUANTILES_TO_DISCRETIZE_CONT_COLS
):
"""Estimate treatment effect conditioned on given variables.
If a numeric effect modifier is provided, it is discretized into quantile bins. If you would like a custom discretization, you can do so yourself: create a new column containing the discretized effect modifier and then include that column's name in the effect_modifier_names argument.
:param effect_modifiers: Names of effect modifier variables over which the conditional effects will be estimated. If not provided, defaults to the effect modifiers specified during creation of the CausalEstimator object.
:param num_quantiles: The number of quantiles into which a numeric effect modifier variable is discretized. Does not affect any categorical effect modifiers.
:returns: A (multi-index) dataframe that provides separate effects for each value of the (discretized) effect modifiers.
"""
return self.estimator._estimate_conditional_effects(
self.estimator._estimate_effect_fn, effect_modifiers, num_quantiles
)
def interpret(self, method_name=None, **kwargs):
"""Interpret the causal estimate.
:param method_name: Method used (string) or a list of methods. If None, then the default for the specific estimator is used.
        :param kwargs: Optional parameters that are directly passed to the interpreter method.
:returns: None
"""
if method_name is None:
method_name = self.estimator.interpret_method
method_name_arr = parse_state(method_name)
for method in method_name_arr:
interpreter = interpreters.get_class_object(method)
interpreter(self, **kwargs).interpret()
def __str__(self):
s = "*** Causal Estimate ***\n"
# No estimand was identified (identification failed)
if self.target_estimand is None:
return "Estimation failed! No relevant identified estimand available for this estimation method."
s += "\n## Identified estimand\n{0}".format(self.target_estimand.__str__(only_target_estimand=True))
s += "\n## Realized estimand\n{0}".format(self.realized_estimand_expr)
if hasattr(self, "estimator"):
s += "\nTarget units: {0}\n".format(self.estimator.target_units_tostr())
s += "\n## Estimate\n"
s += "Mean value: {0}\n".format(self.value)
s += ""
if hasattr(self, "cate_estimates"):
s += "Effect estimates: {0}\n".format(self.cate_estimates)
if hasattr(self, "estimator"):
if self.estimator._significance_test:
s += "p-value: {0}\n".format(self.estimator.signif_results_tostr(self.test_stat_significance()))
if self.estimator._confidence_intervals:
s += "{0}% confidence interval: {1}\n".format(
100 * self.estimator.confidence_level, self.get_confidence_intervals()
)
if self.conditional_estimates is not None:
s += "### Conditional Estimates\n"
s += str(self.conditional_estimates)
if self.effect_strength is not None:
s += "\n## Effect Strength\n"
s += "Change in outcome attributable to treatment: {}\n".format(self.effect_strength["fraction-effect"])
# s += "Variance in outcome explained by treatment: {}\n".format(self.effect_strength["r-squared"])
return s
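# --- Hedged construction sketch (not part of the original module) ----------
# CausalEstimate is a plain result container, so it can also be built
# directly; the values below are made up purely for illustration.
def _example_causal_estimate():
    est = CausalEstimate(
        estimate=1.57,
        target_estimand=None,  # would normally be an IdentifiedEstimand
        realized_estimand_expr="b: y~v0+w0",
        control_value=0,
        treatment_value=1,
    )
    est.add_params(estimand_type="nonparametric-ate")
    return est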
class RealizedEstimand(object):
def __init__(self, identified_estimand, estimator_name):
self.treatment_variable = identified_estimand.treatment_variable
self.outcome_variable = identified_estimand.outcome_variable
self.backdoor_variables = identified_estimand.get_backdoor_variables()
self.instrumental_variables = identified_estimand.instrumental_variables
self.estimand_type = identified_estimand.estimand_type
self.estimand_expression = None
self.assumptions = None
self.estimator_name = estimator_name
def update_assumptions(self, estimator_assumptions):
self.assumptions = estimator_assumptions
def update_estimand_expression(self, estimand_expression):
self.estimand_expression = estimand_expression
def __str__(self):
s = "Realized estimand: {0}\n".format(self.estimator_name)
s += "Realized estimand type: {0}\n".format(self.estimand_type)
s += "Estimand expression:\n{0}\n".format(sp.pretty(self.estimand_expression))
j = 1
for ass_name, ass_str in self.assumptions.items():
s += "Estimand assumption {0}, {1}: {2}\n".format(j, ass_name, ass_str)
j += 1
return s
| andresmor-ms | 2044d216c322a4b32c6eadce5da7d83463f19c2f | 05bfa49dacf0061988c96c6f3e3756219df5422a | estimate_effect does not need the graph as a parameter | amit-sharma | 279 |
py-why/dowhy | 693 | Functional api/estimate effect function | #### Estimate Effect function
* Refactors the estimate effect into a separate function to keep backwards compatibility (see the sketch below)
#### TODO (future PRs):
* Add `fit(...)` method to estimators - Move data related parameters from the constructor to the `fit(...)` method
* Refactor code to avoid `**kwargs` in `__init__(...)` constructors
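A minimal sketch of the refactored call path (hypothetical names; the existing `CausalModel.estimate_effect` method keeps working):

```python
from dowhy.causal_estimator import estimate_effect

# `estimator` is a constructed CausalEstimator instance and `estimand`
# an IdentifiedEstimand produced by the identification step
estimate = estimate_effect(
    treatment="v0",
    outcome="y",
    identified_estimand=estimand,
    identifier_name="backdoor",
    method=estimator,
)
```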
| null | 2022-10-18 15:49:21+00:00 | 2022-10-25 17:02:02+00:00 | dowhy/causal_estimator.py
import logging
from collections import namedtuple
import numpy as np
import pandas as pd
import sympy as sp
from sklearn.utils import resample
import dowhy.interpreters as interpreters
from dowhy.utils.api import parse_state
class CausalEstimator:
"""Base class for an estimator of causal effect.
Subclasses implement different estimation methods. All estimation methods are in the package "dowhy.causal_estimators"
"""
# The default number of simulations for statistical testing
DEFAULT_NUMBER_OF_SIMULATIONS_STAT_TEST = 1000
# The default number of simulations to obtain confidence intervals
DEFAULT_NUMBER_OF_SIMULATIONS_CI = 100
# The portion of the total size that should be taken each time to find the confidence intervals
# 1 is the recommended value
# https://ocw.mit.edu/courses/mathematics/18-05-introduction-to-probability-and-statistics-spring-2014/readings/MIT18_05S14_Reading24.pdf
# https://projecteuclid.org/download/pdf_1/euclid.ss/1032280214
DEFAULT_SAMPLE_SIZE_FRACTION = 1
# The default Confidence Level
DEFAULT_CONFIDENCE_LEVEL = 0.95
# Number of quantiles to discretize continuous columns, for applying groupby
NUM_QUANTILES_TO_DISCRETIZE_CONT_COLS = 5
# Prefix to add to temporary categorical variables created after discretization
TEMP_CAT_COLUMN_PREFIX = "__categorical__"
    DEFAULT_NOTIMPLEMENTEDERROR_MSG = "not yet implemented for {0}. If you would like this to be implemented in the next version, please raise an issue at https://github.com/microsoft/dowhy/issues"
BootstrapEstimates = namedtuple("BootstrapEstimates", ["estimates", "params"])
DEFAULT_INTERPRET_METHOD = ["textual_effect_interpreter"]
# std args to be removed from locals() before being passed to args_dict
_STD_INIT_ARGS = ("self", "__class__", "args", "kwargs")
def __init__(
self,
data,
identified_estimand,
treatment,
outcome,
control_value=0,
treatment_value=1,
test_significance=False,
evaluate_effect_strength=False,
confidence_intervals=False,
target_units=None,
effect_modifiers=None,
num_null_simulations=DEFAULT_NUMBER_OF_SIMULATIONS_STAT_TEST,
num_simulations=DEFAULT_NUMBER_OF_SIMULATIONS_CI,
sample_size_fraction=DEFAULT_SAMPLE_SIZE_FRACTION,
confidence_level=DEFAULT_CONFIDENCE_LEVEL,
need_conditional_estimates="auto",
num_quantiles_to_discretize_cont_cols=NUM_QUANTILES_TO_DISCRETIZE_CONT_COLS,
**kwargs,
):
"""Initializes an estimator with data and names of relevant variables.
This method is called from the constructors of its child classes.
:param data: data frame containing the data
:param identified_estimand: probability expression
representing the target identified estimand to estimate.
:param treatment: name of the treatment variable
:param outcome: name of the outcome variable
:param control_value: Value of the treatment in the control group, for effect estimation. If treatment is multi-variate, this can be a list.
:param treatment_value: Value of the treatment in the treated group, for effect estimation. If treatment is multi-variate, this can be a list.
:param test_significance: Binary flag or a string indicating whether to test significance and by which method. All estimators support test_significance="bootstrap" that estimates a p-value for the obtained estimate using the bootstrap method. Individual estimators can override this to support custom testing methods. The bootstrap method supports an optional parameter, num_null_simulations. If False, no testing is done. If True, significance of the estimate is tested using the custom method if available, otherwise by bootstrap.
:param evaluate_effect_strength: (Experimental) whether to evaluate the strength of effect
:param confidence_intervals: Binary flag or a string indicating whether the confidence intervals should be computed and which method should be used. All methods support estimation of confidence intervals using the bootstrap method by using the parameter confidence_intervals="bootstrap". The bootstrap method takes in two arguments (num_simulations and sample_size_fraction) that can be optionally specified in the params dictionary. Estimators may also override this to implement their own confidence interval method. If this parameter is False, no confidence intervals are computed. If True, confidence intervals are computed by the estimator's specific method if available, otherwise through bootstrap.
:param target_units: The units for which the treatment effect should be estimated. This can be a string for common specifications of target units (namely, "ate", "att" and "atc"). It can also be a lambda function that can be used as an index for the data (pandas DataFrame). Alternatively, it can be a new DataFrame that contains values of the effect_modifiers and effect will be estimated only for this new data.
:param effect_modifiers: Variables on which to compute separate
effects, or return a heterogeneous effect function. Not all
methods support this currently.
:param num_null_simulations: The number of simulations for testing the
statistical significance of the estimator
:param num_simulations: The number of simulations for finding the
            confidence interval (and/or standard error) for an estimate
:param sample_size_fraction: The size of the sample for the bootstrap
estimator
:param confidence_level: The confidence level of the confidence
interval estimate
:param need_conditional_estimates: Boolean flag indicating whether
conditional estimates should be computed. Defaults to True if
there are effect modifiers in the graph
:param num_quantiles_to_discretize_cont_cols: The number of quantiles
into which a numeric effect modifier is split, to enable
estimation of conditional treatment effect over it.
:param kwargs: (optional) Additional estimator-specific parameters
:returns: an instance of the estimator class.
"""
self._data = data
self._target_estimand = identified_estimand
# Currently estimation methods only support univariate treatment and outcome
self._treatment_name = treatment
self._outcome_name = outcome[0] # assuming one-dimensional outcome
self._control_value = control_value
self._treatment_value = treatment_value
self._significance_test = test_significance
self._effect_strength_eval = evaluate_effect_strength
self._target_units = target_units
self._effect_modifier_names = effect_modifiers
self._confidence_intervals = confidence_intervals
self._bootstrap_estimates = None # for confidence intervals and std error
self._bootstrap_null_estimates = None # for significance test
self._effect_modifiers = None
self.method_params = kwargs
# Setting the default interpret method
self.interpret_method = CausalEstimator.DEFAULT_INTERPRET_METHOD
self.logger = logging.getLogger(__name__)
# Setting treatment and outcome values
if self._data is not None:
self._treatment = self._data[self._treatment_name]
self._outcome = self._data[self._outcome_name]
# Now saving the effect modifiers
if self._effect_modifier_names:
# only add the observed nodes
self._effect_modifier_names = [
cname for cname in self._effect_modifier_names if cname in self._data.columns
]
if len(self._effect_modifier_names) > 0:
self._effect_modifiers = self._data[self._effect_modifier_names]
self._effect_modifiers = pd.get_dummies(self._effect_modifiers, drop_first=True)
self.logger.debug("Effect modifiers: " + ",".join(self._effect_modifier_names))
else:
self._effect_modifier_names = None
# Check if some parameters were set, otherwise set to default values
self.num_null_simulations = num_null_simulations
self.num_simulations = num_simulations
self.sample_size_fraction = sample_size_fraction
self.confidence_level = confidence_level
self.num_quantiles_to_discretize_cont_cols = num_quantiles_to_discretize_cont_cols
# Estimate conditional estimates by default
self.need_conditional_estimates = (
need_conditional_estimates if need_conditional_estimates != "auto" else bool(self._effect_modifier_names)
)
@staticmethod
def get_estimator_object(new_data, identified_estimand, estimate):
"""Create a new estimator of the same type as the one passed in the estimate argument.
Creates a new object with new_data and the identified_estimand
:param new_data: np.ndarray, pd.Series, pd.DataFrame
The newly assigned data on which the estimator should run
:param identified_estimand: IdentifiedEstimand
An instance of the identified estimand class that provides the information with
            respect to which causal pathways are employed when the treatment affects the outcome
:param estimate: CausalEstimate
It is an already existing estimate whose properties we wish to replicate
:returns: An instance of the same estimator class that had generated the given estimate.
"""
estimator_class = estimate.params["estimator_class"]
new_estimator = estimator_class(
new_data,
identified_estimand,
identified_estimand.treatment_variable,
identified_estimand.outcome_variable,
# names of treatment and outcome
control_value=estimate.control_value,
treatment_value=estimate.treatment_value,
test_significance=False,
evaluate_effect_strength=False,
confidence_intervals=estimate.params["confidence_intervals"],
target_units=estimate.params["target_units"],
effect_modifiers=estimate.params["effect_modifiers"],
**estimate.params["method_params"],
)
return new_estimator
def _estimate_effect(self):
"""This method is to be overriden by the child classes, so that they can run the estimation technique of their choice"""
raise NotImplementedError(
("Main estimation method is " + CausalEstimator.DEFAULT_NOTIMPLEMENTEDERROR_MSG).format(self.__class__)
)
def estimate_effect(self):
"""Base estimation method that calls the estimate_effect method of its calling subclass.
Can optionally also test significance and estimate effect strength for any returned estimate.
:param self: object instance of class Estimator
        :returns: A CausalEstimate instance that contains point estimates of average and conditional effects. Based on the parameters provided, it optionally includes confidence intervals, standard errors, statistical significance and other statistical parameters.
"""
est = self._estimate_effect()
est.add_estimator(self)
if self._significance_test:
self.test_significance(est.value, method=self._significance_test)
if self._confidence_intervals:
self.estimate_confidence_intervals(
est.value, confidence_level=self.confidence_level, method=self._confidence_intervals
)
if self._effect_strength_eval:
effect_strength_dict = self.evaluate_effect_strength(est)
est.add_effect_strength(effect_strength_dict)
return est
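    # Illustrative end-to-end sketch (hedged): `LinearRegressionEstimator` is one
    # concrete subclass shipped in dowhy.causal_estimators; `df` and `estimand`
    # are assumed to exist and to follow the constructor signature above.
    #     estimator = LinearRegressionEstimator(
    #         df, estimand, ["treatment"], ["outcome"], test_significance=True
    #     )
    #     estimate = estimator.estimate_effect()  # returns a CausalEstimate
    #     print(estimate.value)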
def estimate_effect_naive(self):
# TODO Only works for binary treatment
df_withtreatment = self._data.loc[self._data[self._treatment_name] == 1]
df_notreatment = self._data.loc[self._data[self._treatment_name] == 0]
est = np.mean(df_withtreatment[self._outcome_name]) - np.mean(df_notreatment[self._outcome_name])
return CausalEstimate(est, None, None, control_value=0, treatment_value=1)
def _estimate_effect_fn(self, data_df):
"""Function used in conditional effect estimation. This function is to be overridden by each child estimator.
The overridden function should take in a dataframe as input and return the estimate for that data.
"""
raise NotImplementedError(
("Conditional treatment effects are " + CausalEstimator.DEFAULT_NOTIMPLEMENTEDERROR_MSG).format(
self.__class__
)
)
def _estimate_conditional_effects(self, estimate_effect_fn, effect_modifier_names=None, num_quantiles=None):
"""Estimate conditional treatment effects. Common method for all estimators that utilizes a specific estimate_effect_fn implemented by each child estimator.
If a numeric effect modifier is provided, it is discretized into quantile bins. If you would like a custom discretization, you can do so yourself: create a new column containing the discretized effect modifier and then include that column's name in the effect_modifier_names argument.
:param estimate_effect_fn: Function that has a single parameter (a data frame) and returns the treatment effect estimate on that data.
:param effect_modifier_names: Names of effect modifier variables over which the conditional effects will be estimated. If not provided, defaults to the effect modifiers specified during creation of the CausalEstimator object.
:param num_quantiles: The number of quantiles into which a numeric effect modifier variable is discretized. Does not affect any categorical effect modifiers.
:returns: A (multi-index) dataframe that provides separate effects for each value of the (discretized) effect modifiers.
"""
# Defaulting to class default values if parameters are not provided
if effect_modifier_names is None:
effect_modifier_names = self._effect_modifier_names
if num_quantiles is None:
num_quantiles = self.num_quantiles_to_discretize_cont_cols
# Checking that there is at least one effect modifier
if not effect_modifier_names:
raise ValueError("At least one effect modifier should be specified to compute conditional effects.")
# Making sure that effect_modifier_names is a list
effect_modifier_names = parse_state(effect_modifier_names)
if not all(em in self._effect_modifier_names for em in effect_modifier_names):
            self.logger.warning(
"At least one of the provided effect modifiers was not included while fitting the estimator. You may get incorrect results. To resolve, fit the estimator again by providing the updated effect modifiers in estimate_effect()."
)
# Making a copy since we are going to be changing effect modifier names
effect_modifier_names = effect_modifier_names.copy()
prefix = CausalEstimator.TEMP_CAT_COLUMN_PREFIX
# For every numeric effect modifier, adding a temp categorical column
for i in range(len(effect_modifier_names)):
em = effect_modifier_names[i]
if pd.api.types.is_numeric_dtype(self._data[em].dtypes):
self._data[prefix + str(em)] = pd.qcut(self._data[em], num_quantiles, duplicates="drop")
effect_modifier_names[i] = prefix + str(em)
# Grouping by effect modifiers and computing effect separately
by_effect_mods = self._data.groupby(effect_modifier_names)
cond_est_fn = lambda x: self._do(self._treatment_value, x) - self._do(self._control_value, x)
conditional_estimates = by_effect_mods.apply(estimate_effect_fn)
# Deleting the temporary categorical columns
for em in effect_modifier_names:
if em.startswith(prefix):
self._data.pop(em)
return conditional_estimates
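    # Worked sketch of the discretization step above (hedged; toy column `age`
    # is an assumption): with num_quantiles=4,
    #     pd.qcut(self._data["age"], 4, duplicates="drop")
    # yields quartile bins stored under a temporary "__categorical__age" column,
    # and per-group effects come from groupby(...).apply(estimate_effect_fn).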
def _do(self, x, data_df=None):
raise NotImplementedError(
("Do-operator is " + CausalEstimator.DEFAULT_NOTIMPLEMENTEDERROR_MSG).format(self.__class__)
)
def do(self, x, data_df=None):
"""Method that implements the do-operator.
Given a value x for the treatment, returns the expected value of the outcome when the treatment is intervened to a value x.
:param x: Value of the treatment
:param data_df: Data on which the do-operator is to be applied.
:returns: Value of the outcome when treatment is intervened/set to x.
"""
est = self._do(x, data_df)
return est
def construct_symbolic_estimator(self, estimand):
raise NotImplementedError(("Symbolic estimator string is ").format(self.__class__))
def _generate_bootstrap_estimates(self, num_bootstrap_simulations, sample_size_fraction):
"""Helper function to generate causal estimates over bootstrapped samples.
:param num_bootstrap_simulations: Number of simulations for the bootstrap method.
:param sample_size_fraction: Fraction of the dataset to be resampled.
:returns: A collections.namedtuple containing a list of bootstrapped estimates and a dictionary containing parameters used for the bootstrap.
"""
# The array that stores the results of all estimations
simulation_results = np.zeros(num_bootstrap_simulations)
        # Find the sample size as a proportion of the population size
sample_size = int(sample_size_fraction * len(self._data))
if sample_size > len(self._data):
self.logger.warning("WARN: The sample size is greater than the data being sampled")
self.logger.info("INFO: The sample size: {}".format(sample_size))
self.logger.info("INFO: The number of simulations: {}".format(num_bootstrap_simulations))
# Perform the set number of simulations
for index in range(num_bootstrap_simulations):
new_data = resample(self._data, n_samples=sample_size)
new_estimator = type(self)(
new_data,
self._target_estimand,
self._target_estimand.treatment_variable,
self._target_estimand.outcome_variable,
# names of treatment and outcome
treatment_value=self._treatment_value,
control_value=self._control_value,
test_significance=False,
evaluate_effect_strength=False,
confidence_intervals=False,
target_units=self._target_units,
effect_modifiers=self._effect_modifier_names,
**self.method_params,
)
new_effect = new_estimator.estimate_effect()
simulation_results[index] = new_effect.value
estimates = CausalEstimator.BootstrapEstimates(
simulation_results,
{"num_simulations": num_bootstrap_simulations, "sample_size_fraction": sample_size_fraction},
)
return estimates
def _estimate_confidence_intervals_with_bootstrap(
self, estimate_value, confidence_level=None, num_simulations=None, sample_size_fraction=None
):
"""
Method to compute confidence interval using bootstrapped sampling.
:param estimate_value: obtained estimate's value
:param confidence_level: The level for which to compute CI (e.g., 95% confidence level translates to confidence_level=0.95)
:param num_simulations: The number of simulations to be performed to get the bootstrap confidence intervals.
:param sample_size_fraction: The fraction of the dataset to be resampled.
:returns: confidence interval at the specified level.
For more details on bootstrap or resampling statistics, refer to the following links:
https://ocw.mit.edu/courses/mathematics/18-05-introduction-to-probability-and-statistics-spring-2014/readings/MIT18_05S14_Reading24.pdf
https://projecteuclid.org/download/pdf_1/euclid.ss/1032280214
"""
# Using class default parameters if not specified
if num_simulations is None:
num_simulations = self.num_simulations
if sample_size_fraction is None:
sample_size_fraction = self.sample_size_fraction
# Checking if bootstrap_estimates are already computed
if self._bootstrap_estimates is None:
self._bootstrap_estimates = self._generate_bootstrap_estimates(num_simulations, sample_size_fraction)
elif CausalEstimator.is_bootstrap_parameter_changed(self._bootstrap_estimates.params, locals()):
            # Check if any parameter has changed since the previous bootstrap run
self._bootstrap_estimates = self._generate_bootstrap_estimates(num_simulations, sample_size_fraction)
# Now use the data obtained from the simulations to get the value of the confidence estimates
bootstrap_estimates = self._bootstrap_estimates.estimates
# Get the variations of each bootstrap estimate and sort
bootstrap_variations = [bootstrap_estimate - estimate_value for bootstrap_estimate in bootstrap_estimates]
sorted_bootstrap_variations = np.sort(bootstrap_variations)
        # Now we take the (1 - p)th and the (p)th variations, where p is the chosen confidence level
upper_bound_index = int((1 - confidence_level) * len(sorted_bootstrap_variations))
lower_bound_index = int(confidence_level * len(sorted_bootstrap_variations))
# Get the lower and upper bounds by subtracting the variations from the estimate
lower_bound = estimate_value - sorted_bootstrap_variations[lower_bound_index]
upper_bound = estimate_value - sorted_bootstrap_variations[upper_bound_index]
return lower_bound, upper_bound
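    # Worked example of the index arithmetic above (assumed values): with
    # confidence_level=0.95 and 100 sorted bootstrap variations,
    #     upper_bound_index = int((1 - 0.95) * 100) = 5
    #     lower_bound_index = int(0.95 * 100) = 95
    # so the returned interval is
    #     (estimate - variations[95], estimate - variations[5]),
    # i.e., a basic (reverse-percentile) bootstrap interval.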
def _estimate_confidence_intervals(self, confidence_level=None, method=None, **kwargs):
"""
        This method is to be overridden by the child classes, so that they
can run a confidence interval estimation method suited to the specific
causal estimator.
"""
raise NotImplementedError(
(
"This method for estimating confidence intervals is "
+ CausalEstimator.DEFAULT_NOTIMPLEMENTEDERROR_MSG
+ " Meanwhile, you can try the bootstrap method (method='bootstrap') to estimate confidence intervals."
).format(self.__class__)
)
def estimate_confidence_intervals(self, estimate_value, confidence_level=None, method=None, **kwargs):
"""Find the confidence intervals corresponding to any estimator
By default, this is done with the help of bootstrapped confidence intervals
but can be overridden if the specific estimator implements other methods of estimating confidence intervals.
If the method provided is not bootstrap, this function calls the implementation of the specific estimator.
:param estimate_value: obtained estimate's value
:param method: Method for estimating confidence intervals.
:param confidence_level: The confidence level of the confidence intervals of the estimate.
:param kwargs: Other optional args to be passed to the CI method.
:returns: The obtained confidence interval.
"""
if method is None:
if self._confidence_intervals:
method = self._confidence_intervals # this is either True or methodname
else:
method = "default"
confidence_intervals = None
if confidence_level is None:
confidence_level = self.confidence_level
if method == "default" or method is True: # user has not provided any method
try:
confidence_intervals = self._estimate_confidence_intervals(confidence_level, method=method, **kwargs)
except NotImplementedError:
confidence_intervals = self._estimate_confidence_intervals_with_bootstrap(
estimate_value, confidence_level, **kwargs
)
else:
if method == "bootstrap":
confidence_intervals = self._estimate_confidence_intervals_with_bootstrap(
estimate_value, confidence_level, **kwargs
)
else:
confidence_intervals = self._estimate_confidence_intervals(confidence_level, method=method, **kwargs)
return confidence_intervals
def _estimate_std_error_with_bootstrap(self, num_simulations=None, sample_size_fraction=None):
"""Compute standard error using the bootstrap method. Standard error
and confidence intervals use the same parameter num_simulations for
the number of bootstrap simulations.
:param num_simulations: Number of bootstrapped samples.
:param sample_size_fraction: Fraction of data to be resampled.
:returns: Standard error of the obtained estimate.
"""
# Use existing params, if new user defined params are not present
if num_simulations is None:
num_simulations = self.num_simulations
if sample_size_fraction is None:
sample_size_fraction = self.sample_size_fraction
# Checking if bootstrap_estimates are already computed
if self._bootstrap_estimates is None:
self._bootstrap_estimates = self._generate_bootstrap_estimates(num_simulations, sample_size_fraction)
elif CausalEstimator.is_bootstrap_parameter_changed(self._bootstrap_estimates.params, locals()):
# Check if any parameter is changed from the previous std error estimate
self._bootstrap_estimates = self._generate_bootstrap_estimates(num_simulations, sample_size_fraction)
std_error = np.std(self._bootstrap_estimates.estimates)
return std_error
def _estimate_std_error(self, method=None, **kwargs):
"""
        This method is to be overridden by the child classes, so that they
can run a standard error estimation method suited to the specific
causal estimator.
"""
raise NotImplementedError(
(
"This method for estimating standard errors is "
+ CausalEstimator.DEFAULT_NOTIMPLEMENTEDERROR_MSG
+ " Meanwhile, you can try the bootstrap method (method='bootstrap') to estimate standard errors."
).format(self.__class__)
)
def estimate_std_error(self, method=None, **kwargs):
"""Compute standard error of an obtained causal estimate.
:param method: Method for computing the standard error.
:param kwargs: Other optional parameters to be passed to the estimating method.
:returns: Standard error of the causal estimate.
"""
if method is None:
if self._confidence_intervals:
method = self._confidence_intervals
else:
method = "default"
std_error = None
if method == "default" or method is True: # user has not provided any method
try:
std_error = self._estimate_std_error(method, **kwargs)
except NotImplementedError:
std_error = self._estimate_std_error_with_bootstrap(**kwargs)
else:
if method == "bootstrap":
std_error = self._estimate_std_error_with_bootstrap(**kwargs)
else:
std_error = self._estimate_std_error(method, **kwargs)
return std_error
def _test_significance_with_bootstrap(self, estimate_value, num_null_simulations=None):
"""Test statistical significance of an estimate using the bootstrap method.
:param estimate_value: Obtained estimate's value
:param num_null_simulations: Number of simulations for the null hypothesis
:returns: p-value of the statistical significance test.
"""
# Use existing params, if new user defined params are not present
if num_null_simulations is None:
num_null_simulations = self.num_null_simulations
do_retest = self._bootstrap_null_estimates is None or CausalEstimator.is_bootstrap_parameter_changed(
self._bootstrap_null_estimates.params, locals()
)
if do_retest:
null_estimates = np.zeros(num_null_simulations)
for i in range(num_null_simulations):
new_outcome = np.random.permutation(self._outcome)
new_data = self._data.assign(dummy_outcome=new_outcome)
# self._outcome = self._data["dummy_outcome"]
new_estimator = type(self)(
new_data,
self._target_estimand,
self._target_estimand.treatment_variable,
("dummy_outcome",),
test_significance=False,
evaluate_effect_strength=False,
confidence_intervals=False,
target_units=self._target_units,
effect_modifiers=self._effect_modifier_names,
**self.method_params,
)
new_effect = new_estimator.estimate_effect()
null_estimates[i] = new_effect.value
self._bootstrap_null_estimates = CausalEstimator.BootstrapEstimates(
null_estimates, {"num_null_simulations": num_null_simulations, "sample_size_fraction": 1}
)
# Processing the null hypothesis estimates
sorted_null_estimates = np.sort(self._bootstrap_null_estimates.estimates)
self.logger.debug("Null estimates: {0}".format(sorted_null_estimates))
median_estimate = sorted_null_estimates[int(num_null_simulations / 2)]
# Doing a two-sided test
if estimate_value > median_estimate:
# Being conservative with the p-value reported
estimate_index = np.searchsorted(sorted_null_estimates, estimate_value, side="left")
p_value = 1 - (estimate_index / num_null_simulations)
        else:
# Being conservative with the p-value reported
estimate_index = np.searchsorted(sorted_null_estimates, estimate_value, side="right")
p_value = estimate_index / num_null_simulations
# If the estimate_index is 0, it depends on the number of simulations
if p_value == 0:
p_value = (0, 1 / len(sorted_null_estimates)) # a tuple determining the range.
elif p_value == 1:
p_value = (1 - 1 / len(sorted_null_estimates), 1)
signif_dict = {"p_value": p_value}
return signif_dict
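    # Worked example of the p-value logic above (assumed values): with 1000 null
    # simulations and an estimate above the null median that exceeds 970 of the
    # sorted null estimates, searchsorted returns 970 and
    #     p_value = 1 - (970 / 1000) = 0.03
    # If the estimate lies beyond every null estimate, the p-value is reported
    # as the range (0, 1/1000) instead of exactly zero.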
def _test_significance(self, estimate_value, method=None, **kwargs):
"""
        This method is to be overridden by the child classes, so that they
can run a significance test suited to the specific
causal estimator.
"""
raise NotImplementedError(
(
"This method for testing statistical significance is "
+ CausalEstimator.DEFAULT_NOTIMPLEMENTEDERROR_MSG
+ " Meanwhile, you can try the bootstrap method (method='bootstrap') to test statistical significance."
).format(self.__class__)
)
def test_significance(self, estimate_value, method=None, **kwargs):
"""Test statistical significance of obtained estimate.
By default, uses resampling to create a non-parametric significance test.
A general procedure. Individual child estimators can implement different methods.
If the method name is different from "bootstrap", this function calls the
implementation of the child estimator.
:param self: object instance of class Estimator
:param estimate_value: obtained estimate's value
:param method: Method for checking statistical significance
:returns: p-value from the significance test
"""
if method is None:
if self._significance_test:
method = self._significance_test # this is either True or methodname
else:
method = "default"
signif_dict = None
if method == "default" or method is True: # user has not provided any method
try:
signif_dict = self._test_significance(estimate_value, method, **kwargs)
except NotImplementedError:
signif_dict = self._test_significance_with_bootstrap(estimate_value, **kwargs)
else:
if method == "bootstrap":
signif_dict = self._test_significance_with_bootstrap(estimate_value, **kwargs)
else:
signif_dict = self._test_significance(estimate_value, method, **kwargs)
return signif_dict
def evaluate_effect_strength(self, estimate):
fraction_effect_explained = self._evaluate_effect_strength(estimate, method="fraction-effect")
# Need to test r-squared before supporting
# effect_r_squared = self._evaluate_effect_strength(estimate, method="r-squared")
strength_dict = {
"fraction-effect": fraction_effect_explained
# 'r-squared': effect_r_squared
}
return strength_dict
def _evaluate_effect_strength(self, estimate, method="fraction-effect"):
supported_methods = ["fraction-effect"]
if method not in supported_methods:
raise NotImplementedError("This method is not supported for evaluating effect strength")
if method == "fraction-effect":
naive_obs_estimate = self.estimate_effect_naive()
            self.logger.debug("Estimate: {0}, naive estimate: {1}".format(estimate.value, naive_obs_estimate.value))
fraction_effect_explained = estimate.value / naive_obs_estimate.value
return fraction_effect_explained
# elif method == "r-squared":
# outcome_mean = np.mean(self._outcome)
# total_variance = np.sum(np.square(self._outcome - outcome_mean))
# Assuming a linear model with one variable: the treatment
# Currently only works for continuous y
# causal_model = outcome_mean + estimate.value*self._treatment
# squared_residual = np.sum(np.square(self._outcome - causal_model))
# r_squared = 1 - (squared_residual/total_variance)
# return r_squared
else:
return None
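    # Worked example of the "fraction-effect" measure above (assumed values): if
    # the causal estimate is 0.6 and the naive difference in means is 1.2, then
    #     fraction_effect_explained = 0.6 / 1.2 = 0.5
    # i.e., half of the naive association is attributed to the treatment.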
def update_input(self, treatment_value, control_value, target_units):
self._control_value = control_value
self._treatment_value = treatment_value
self._target_units = target_units
@staticmethod
def is_bootstrap_parameter_changed(bootstrap_estimates_params, given_params):
"""Check whether parameters of the bootstrap have changed.
This is an efficiency method that checks if fresh resampling of the bootstrap samples is required.
Returns True if parameters have changed and resampling should be done again.
:param bootstrap_estimates_params: A dictionary of parameters for the current bootstrap samples
:param given_params: A dictionary of parameters passed by the user
:returns: A binary flag denoting whether the parameters are different.
"""
is_any_parameter_changed = False
for prm, val in bootstrap_estimates_params.items():
given_val = given_params.get(prm, None)
if given_val is not None and given_val != val:
is_any_parameter_changed = True
break
return is_any_parameter_changed
def target_units_tostr(self):
s = ""
if type(self._target_units) is str:
s += self._target_units
elif callable(self._target_units):
s += "Data subset defined by a function"
elif isinstance(self._target_units, pd.DataFrame):
s += "Data subset provided as a data frame"
return s
def signif_results_tostr(self, signif_results):
s = ""
pval = signif_results["p_value"]
if type(pval) is tuple:
s += "[{0}, {1}]".format(pval[0], pval[1])
else:
s += "{0}".format(pval)
return s
class CausalEstimate:
"""Class for the estimate object that every causal estimator returns"""
def __init__(
self,
estimate,
target_estimand,
realized_estimand_expr,
control_value,
treatment_value,
conditional_estimates=None,
**kwargs,
):
self.value = estimate
self.target_estimand = target_estimand
self.realized_estimand_expr = realized_estimand_expr
self.control_value = control_value
self.treatment_value = treatment_value
self.conditional_estimates = conditional_estimates
self.params = kwargs
if self.params is not None:
for key, value in self.params.items():
setattr(self, key, value)
self.effect_strength = None
def add_estimator(self, estimator_instance):
self.estimator = estimator_instance
def add_effect_strength(self, strength_dict):
self.effect_strength = strength_dict
def add_params(self, **kwargs):
self.params.update(kwargs)
def get_confidence_intervals(self, confidence_level=None, method=None, **kwargs):
"""Get confidence intervals of the obtained estimate.
By default, this is done with the help of bootstrapped confidence intervals
but can be overridden if the specific estimator implements other methods of estimating confidence intervals.
If the method provided is not bootstrap, this function calls the implementation of the specific estimator.
:param method: Method for estimating confidence intervals.
:param confidence_level: The confidence level of the confidence intervals of the estimate.
:param kwargs: Other optional args to be passed to the CI method.
:returns: The obtained confidence interval.
"""
confidence_intervals = self.estimator.estimate_confidence_intervals(
estimate_value=self.value, confidence_level=confidence_level, method=method, **kwargs
)
return confidence_intervals
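    # Illustrative usage sketch (hedged; `estimate` is an assumed CausalEstimate
    # returned by an estimator created with confidence_intervals=True):
    #     lower, upper = estimate.get_confidence_intervals(
    #         confidence_level=0.95, method="bootstrap"
    #     )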
def get_standard_error(self, method=None, **kwargs):
"""Get standard error of the obtained estimate.
By default, this is done with the help of bootstrapped standard errors
but can be overridden if the specific estimator implements other methods of estimating standard error.
If the method provided is not bootstrap, this function calls the implementation of the specific estimator.
:param method: Method for computing the standard error.
:param kwargs: Other optional parameters to be passed to the estimating method.
:returns: Standard error of the causal estimate.
"""
std_error = self.estimator.estimate_std_error(method=method, **kwargs)
return std_error
def test_stat_significance(self, method=None, **kwargs):
"""Test statistical significance of the estimate obtained.
By default, uses resampling to create a non-parametric significance test.
Individual child estimators can implement different methods.
If the method name is different from "bootstrap", this function calls the
implementation of the child estimator.
:param method: Method for checking statistical significance
:param kwargs: Other optional parameters to be passed to the estimating method.
:returns: p-value from the significance test
"""
signif_results = self.estimator.test_significance(self.value, method=method, **kwargs)
return {"p_value": signif_results["p_value"]}
def estimate_conditional_effects(
self, effect_modifiers=None, num_quantiles=CausalEstimator.NUM_QUANTILES_TO_DISCRETIZE_CONT_COLS
):
"""Estimate treatment effect conditioned on given variables.
If a numeric effect modifier is provided, it is discretized into quantile bins. If you would like a custom discretization, you can do so yourself: create a new column containing the discretized effect modifier and then include that column's name in the effect_modifier_names argument.
:param effect_modifiers: Names of effect modifier variables over which the conditional effects will be estimated. If not provided, defaults to the effect modifiers specified during creation of the CausalEstimator object.
:param num_quantiles: The number of quantiles into which a numeric effect modifier variable is discretized. Does not affect any categorical effect modifiers.
:returns: A (multi-index) dataframe that provides separate effects for each value of the (discretized) effect modifiers.
"""
return self.estimator._estimate_conditional_effects(
self.estimator._estimate_effect_fn, effect_modifiers, num_quantiles
)
def interpret(self, method_name=None, **kwargs):
"""Interpret the causal estimate.
:param method_name: Method used (string) or a list of methods. If None, then the default for the specific estimator is used.
        :param kwargs: Optional parameters that are directly passed to the interpreter method.
:returns: None
"""
if method_name is None:
method_name = self.estimator.interpret_method
method_name_arr = parse_state(method_name)
for method in method_name_arr:
interpreter = interpreters.get_class_object(method)
interpreter(self, **kwargs).interpret()
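    # Illustrative usage sketch (hedged): with the default interpreter list
    # ["textual_effect_interpreter"], the two calls below are assumed equivalent:
    #     estimate.interpret()
    #     estimate.interpret(method_name="textual_effect_interpreter")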
def __str__(self):
s = "*** Causal Estimate ***\n"
# No estimand was identified (identification failed)
if self.target_estimand is None:
return "Estimation failed! No relevant identified estimand available for this estimation method."
s += "\n## Identified estimand\n{0}".format(self.target_estimand.__str__(only_target_estimand=True))
s += "\n## Realized estimand\n{0}".format(self.realized_estimand_expr)
if hasattr(self, "estimator"):
s += "\nTarget units: {0}\n".format(self.estimator.target_units_tostr())
s += "\n## Estimate\n"
s += "Mean value: {0}\n".format(self.value)
s += ""
if hasattr(self, "cate_estimates"):
s += "Effect estimates: {0}\n".format(self.cate_estimates)
if hasattr(self, "estimator"):
if self.estimator._significance_test:
s += "p-value: {0}\n".format(self.estimator.signif_results_tostr(self.test_stat_significance()))
if self.estimator._confidence_intervals:
s += "{0}% confidence interval: {1}\n".format(
100 * self.estimator.confidence_level, self.get_confidence_intervals()
)
if self.conditional_estimates is not None:
s += "### Conditional Estimates\n"
s += str(self.conditional_estimates)
if self.effect_strength is not None:
s += "\n## Effect Strength\n"
s += "Change in outcome attributable to treatment: {}\n".format(self.effect_strength["fraction-effect"])
# s += "Variance in outcome explained by treatment: {}\n".format(self.effect_strength["r-squared"])
return s
class RealizedEstimand(object):
def __init__(self, identified_estimand, estimator_name):
self.treatment_variable = identified_estimand.treatment_variable
self.outcome_variable = identified_estimand.outcome_variable
self.backdoor_variables = identified_estimand.get_backdoor_variables()
self.instrumental_variables = identified_estimand.instrumental_variables
self.estimand_type = identified_estimand.estimand_type
self.estimand_expression = None
self.assumptions = None
self.estimator_name = estimator_name
def update_assumptions(self, estimator_assumptions):
self.assumptions = estimator_assumptions
def update_estimand_expression(self, estimand_expression):
self.estimand_expression = estimand_expression
def __str__(self):
s = "Realized estimand: {0}\n".format(self.estimator_name)
s += "Realized estimand type: {0}\n".format(self.estimand_type)
s += "Estimand expression:\n{0}\n".format(sp.pretty(self.estimand_expression))
j = 1
for ass_name, ass_str in self.assumptions.items():
s += "Estimand assumption {0}, {1}: {2}\n".format(j, ass_name, ass_str)
j += 1
return s
import logging
from collections import namedtuple
from typing import Dict, List, Optional, Union
import numpy as np
import pandas as pd
import sympy as sp
from sklearn.utils import resample
import dowhy.interpreters as interpreters
from dowhy import causal_estimators
from dowhy.causal_graph import CausalGraph
from dowhy.causal_identifier.identified_estimand import IdentifiedEstimand
from dowhy.utils.api import parse_state
logger = logging.getLogger(__name__)
class CausalEstimator:
"""Base class for an estimator of causal effect.
Subclasses implement different estimation methods. All estimation methods are in the package "dowhy.causal_estimators"
"""
# The default number of simulations for statistical testing
DEFAULT_NUMBER_OF_SIMULATIONS_STAT_TEST = 1000
# The default number of simulations to obtain confidence intervals
DEFAULT_NUMBER_OF_SIMULATIONS_CI = 100
# The portion of the total size that should be taken each time to find the confidence intervals
# 1 is the recommended value
# https://ocw.mit.edu/courses/mathematics/18-05-introduction-to-probability-and-statistics-spring-2014/readings/MIT18_05S14_Reading24.pdf
# https://projecteuclid.org/download/pdf_1/euclid.ss/1032280214
DEFAULT_SAMPLE_SIZE_FRACTION = 1
# The default Confidence Level
DEFAULT_CONFIDENCE_LEVEL = 0.95
# Number of quantiles to discretize continuous columns, for applying groupby
NUM_QUANTILES_TO_DISCRETIZE_CONT_COLS = 5
# Prefix to add to temporary categorical variables created after discretization
TEMP_CAT_COLUMN_PREFIX = "__categorical__"
    DEFAULT_NOTIMPLEMENTEDERROR_MSG = "not yet implemented for {0}. If you would like this to be implemented in the next version, please raise an issue at https://github.com/microsoft/dowhy/issues"
BootstrapEstimates = namedtuple("BootstrapEstimates", ["estimates", "params"])
DEFAULT_INTERPRET_METHOD = ["textual_effect_interpreter"]
# std args to be removed from locals() before being passed to args_dict
_STD_INIT_ARGS = ("self", "__class__", "args", "kwargs")
def __init__(
self,
data,
identified_estimand,
treatment,
outcome,
control_value=0,
treatment_value=1,
test_significance=False,
evaluate_effect_strength=False,
confidence_intervals=False,
target_units=None,
effect_modifiers=None,
num_null_simulations=DEFAULT_NUMBER_OF_SIMULATIONS_STAT_TEST,
num_simulations=DEFAULT_NUMBER_OF_SIMULATIONS_CI,
sample_size_fraction=DEFAULT_SAMPLE_SIZE_FRACTION,
confidence_level=DEFAULT_CONFIDENCE_LEVEL,
need_conditional_estimates="auto",
num_quantiles_to_discretize_cont_cols=NUM_QUANTILES_TO_DISCRETIZE_CONT_COLS,
**kwargs,
):
"""Initializes an estimator with data and names of relevant variables.
This method is called from the constructors of its child classes.
:param data: data frame containing the data
:param identified_estimand: probability expression
representing the target identified estimand to estimate.
:param treatment: name of the treatment variable
:param outcome: name of the outcome variable
:param control_value: Value of the treatment in the control group, for effect estimation. If treatment is multi-variate, this can be a list.
:param treatment_value: Value of the treatment in the treated group, for effect estimation. If treatment is multi-variate, this can be a list.
:param test_significance: Binary flag or a string indicating whether to test significance and by which method. All estimators support test_significance="bootstrap" that estimates a p-value for the obtained estimate using the bootstrap method. Individual estimators can override this to support custom testing methods. The bootstrap method supports an optional parameter, num_null_simulations. If False, no testing is done. If True, significance of the estimate is tested using the custom method if available, otherwise by bootstrap.
:param evaluate_effect_strength: (Experimental) whether to evaluate the strength of effect
:param confidence_intervals: Binary flag or a string indicating whether the confidence intervals should be computed and which method should be used. All methods support estimation of confidence intervals using the bootstrap method by using the parameter confidence_intervals="bootstrap". The bootstrap method takes in two arguments (num_simulations and sample_size_fraction) that can be optionally specified in the params dictionary. Estimators may also override this to implement their own confidence interval method. If this parameter is False, no confidence intervals are computed. If True, confidence intervals are computed by the estimator's specific method if available, otherwise through bootstrap.
:param target_units: The units for which the treatment effect should be estimated. This can be a string for common specifications of target units (namely, "ate", "att" and "atc"). It can also be a lambda function that can be used as an index for the data (pandas DataFrame). Alternatively, it can be a new DataFrame that contains values of the effect_modifiers and effect will be estimated only for this new data.
:param effect_modifiers: Variables on which to compute separate
effects, or return a heterogeneous effect function. Not all
methods support this currently.
:param num_null_simulations: The number of simulations for testing the
statistical significance of the estimator
:param num_simulations: The number of simulations for finding the
            confidence interval (and/or standard error) for an estimate
:param sample_size_fraction: The size of the sample for the bootstrap
estimator
:param confidence_level: The confidence level of the confidence
interval estimate
:param need_conditional_estimates: Boolean flag indicating whether
conditional estimates should be computed. Defaults to True if
there are effect modifiers in the graph
:param num_quantiles_to_discretize_cont_cols: The number of quantiles
into which a numeric effect modifier is split, to enable
estimation of conditional treatment effect over it.
:param kwargs: (optional) Additional estimator-specific parameters
:returns: an instance of the estimator class.
"""
self._data = data
self._target_estimand = identified_estimand
# Currently estimation methods only support univariate treatment and outcome
self._treatment_name = treatment
self._outcome_name = outcome[0] # assuming one-dimensional outcome
self._control_value = control_value
self._treatment_value = treatment_value
self._significance_test = test_significance
self._effect_strength_eval = evaluate_effect_strength
self._target_units = target_units
self._effect_modifier_names = effect_modifiers
self._confidence_intervals = confidence_intervals
self._bootstrap_estimates = None # for confidence intervals and std error
self._bootstrap_null_estimates = None # for significance test
self._effect_modifiers = None
self.method_params = kwargs
# Setting the default interpret method
self.interpret_method = CausalEstimator.DEFAULT_INTERPRET_METHOD
self.logger = logging.getLogger(__name__)
# Setting treatment and outcome values
if self._data is not None:
self._treatment = self._data[self._treatment_name]
self._outcome = self._data[self._outcome_name]
if self._effect_modifier_names:
# only add the observed nodes
self._effect_modifier_names = [
cname for cname in self._effect_modifier_names if cname in self._data.columns
]
if len(self._effect_modifier_names) > 0:
self._effect_modifiers = self._data[self._effect_modifier_names]
self._effect_modifiers = pd.get_dummies(self._effect_modifiers, drop_first=True)
self.logger.debug("Effect modifiers: " + ",".join(self._effect_modifier_names))
else:
self._effect_modifier_names = None
# Check if some parameters were set, otherwise set to default values
self.num_null_simulations = num_null_simulations
self.num_simulations = num_simulations
self.sample_size_fraction = sample_size_fraction
self.confidence_level = confidence_level
self.num_quantiles_to_discretize_cont_cols = num_quantiles_to_discretize_cont_cols
        # Compute conditional estimates by default when effect modifiers are present
self.need_conditional_estimates = (
need_conditional_estimates if need_conditional_estimates != "auto" else bool(self._effect_modifier_names)
)
@staticmethod
def get_estimator_object(new_data, identified_estimand, estimate):
"""Create a new estimator of the same type as the one passed in the estimate argument.
Creates a new object with new_data and the identified_estimand
:param new_data: np.ndarray, pd.Series, pd.DataFrame
The newly assigned data on which the estimator should run
:param identified_estimand: IdentifiedEstimand
An instance of the identified estimand class that provides the information with
            respect to which causal pathways are employed when the treatment affects the outcome
:param estimate: CausalEstimate
It is an already existing estimate whose properties we wish to replicate
:returns: An instance of the same estimator class that had generated the given estimate.
"""
estimator_class = estimate.params["estimator_class"]
new_estimator = estimator_class(
new_data,
identified_estimand,
identified_estimand.treatment_variable,
identified_estimand.outcome_variable,
# names of treatment and outcome
control_value=estimate.control_value,
treatment_value=estimate.treatment_value,
test_significance=False,
evaluate_effect_strength=False,
confidence_intervals=estimate.params["confidence_intervals"],
target_units=estimate.params["target_units"],
effect_modifiers=estimate.params["effect_modifiers"],
**estimate.params["method_params"] if estimate.params["method_params"] is not None else {},
)
return new_estimator
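    # Illustrative usage sketch (hedged; `refuter_data` and `estimand` are
    # assumptions): refuters can use this hook to re-run the same estimator
    # class on modified data.
    #     new_estimator = CausalEstimator.get_estimator_object(refuter_data, estimand, estimate)
    #     new_effect = new_estimator.estimate_effect()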
def _estimate_effect(self):
"""This method is to be overriden by the child classes, so that they can run the estimation technique of their choice"""
raise NotImplementedError(
("Main estimation method is " + CausalEstimator.DEFAULT_NOTIMPLEMENTEDERROR_MSG).format(self.__class__)
)
def estimate_effect(self):
"""Base estimation method that calls the estimate_effect method of its calling subclass.
Can optionally also test significance and estimate effect strength for any returned estimate.
:param self: object instance of class Estimator
        :returns: A CausalEstimate instance that contains point estimates of average and conditional effects. Based on the parameters provided, it optionally includes confidence intervals, standard errors, statistical significance and other statistical parameters.
"""
est = self._estimate_effect()
est.add_estimator(self)
if self._significance_test:
self.test_significance(est.value, method=self._significance_test)
if self._confidence_intervals:
self.estimate_confidence_intervals(
est.value, confidence_level=self.confidence_level, method=self._confidence_intervals
)
if self._effect_strength_eval:
effect_strength_dict = self.evaluate_effect_strength(est)
est.add_effect_strength(effect_strength_dict)
return est
def estimate_effect_naive(self):
# TODO Only works for binary treatment
df_withtreatment = self._data.loc[self._data[self._treatment_name] == 1]
df_notreatment = self._data.loc[self._data[self._treatment_name] == 0]
est = np.mean(df_withtreatment[self._outcome_name]) - np.mean(df_notreatment[self._outcome_name])
return CausalEstimate(est, None, None, control_value=0, treatment_value=1)
def _estimate_effect_fn(self, data_df):
"""Function used in conditional effect estimation. This function is to be overridden by each child estimator.
The overridden function should take in a dataframe as input and return the estimate for that data.
"""
raise NotImplementedError(
("Conditional treatment effects are " + CausalEstimator.DEFAULT_NOTIMPLEMENTEDERROR_MSG).format(
self.__class__
)
)
def _estimate_conditional_effects(self, estimate_effect_fn, effect_modifier_names=None, num_quantiles=None):
"""Estimate conditional treatment effects. Common method for all estimators that utilizes a specific estimate_effect_fn implemented by each child estimator.
If a numeric effect modifier is provided, it is discretized into quantile bins. If you would like a custom discretization, you can do so yourself: create a new column containing the discretized effect modifier and then include that column's name in the effect_modifier_names argument.
:param estimate_effect_fn: Function that has a single parameter (a data frame) and returns the treatment effect estimate on that data.
:param effect_modifier_names: Names of effect modifier variables over which the conditional effects will be estimated. If not provided, defaults to the effect modifiers specified during creation of the CausalEstimator object.
:param num_quantiles: The number of quantiles into which a numeric effect modifier variable is discretized. Does not affect any categorical effect modifiers.
:returns: A (multi-index) dataframe that provides separate effects for each value of the (discretized) effect modifiers.
"""
# Defaulting to class default values if parameters are not provided
if effect_modifier_names is None:
effect_modifier_names = self._effect_modifier_names
if num_quantiles is None:
num_quantiles = self.num_quantiles_to_discretize_cont_cols
# Checking that there is at least one effect modifier
if not effect_modifier_names:
raise ValueError("At least one effect modifier should be specified to compute conditional effects.")
# Making sure that effect_modifier_names is a list
effect_modifier_names = parse_state(effect_modifier_names)
if not all(em in self._effect_modifier_names for em in effect_modifier_names):
            self.logger.warning(
"At least one of the provided effect modifiers was not included while fitting the estimator. You may get incorrect results. To resolve, fit the estimator again by providing the updated effect modifiers in estimate_effect()."
)
# Making a copy since we are going to be changing effect modifier names
effect_modifier_names = effect_modifier_names.copy()
prefix = CausalEstimator.TEMP_CAT_COLUMN_PREFIX
# For every numeric effect modifier, adding a temp categorical column
for i in range(len(effect_modifier_names)):
em = effect_modifier_names[i]
if pd.api.types.is_numeric_dtype(self._data[em].dtypes):
self._data[prefix + str(em)] = pd.qcut(self._data[em], num_quantiles, duplicates="drop")
effect_modifier_names[i] = prefix + str(em)
# Grouping by effect modifiers and computing effect separately
by_effect_mods = self._data.groupby(effect_modifier_names)
cond_est_fn = lambda x: self._do(self._treatment_value, x) - self._do(self._control_value, x)
conditional_estimates = by_effect_mods.apply(estimate_effect_fn)
# Deleting the temporary categorical columns
for em in effect_modifier_names:
if em.startswith(prefix):
self._data.pop(em)
return conditional_estimates
def _do(self, x, data_df=None):
raise NotImplementedError(
("Do-operator is " + CausalEstimator.DEFAULT_NOTIMPLEMENTEDERROR_MSG).format(self.__class__)
)
def do(self, x, data_df=None):
"""Method that implements the do-operator.
Given a value x for the treatment, returns the expected value of the outcome when the treatment is intervened to a value x.
:param x: Value of the treatment
:param data_df: Data on which the do-operator is to be applied.
:returns: Value of the outcome when treatment is intervened/set to x.
"""
est = self._do(x, data_df)
return est
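    # Illustrative usage sketch (hedged; requires a subclass that implements
    # _do, and `estimator` is assumed to be constructed on data):
    #     y_treated = estimator.do(1)  # E[Y | do(T=1)]
    #     y_control = estimator.do(0)  # E[Y | do(T=0)]
    #     effect = y_treated - y_control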
def construct_symbolic_estimator(self, estimand):
raise NotImplementedError(("Symbolic estimator string is ").format(self.__class__))
def _generate_bootstrap_estimates(self, num_bootstrap_simulations, sample_size_fraction):
"""Helper function to generate causal estimates over bootstrapped samples.
:param num_bootstrap_simulations: Number of simulations for the bootstrap method.
:param sample_size_fraction: Fraction of the dataset to be resampled.
:returns: A collections.namedtuple containing a list of bootstrapped estimates and a dictionary containing parameters used for the bootstrap.
"""
# The array that stores the results of all estimations
simulation_results = np.zeros(num_bootstrap_simulations)
        # Find the sample size as a proportion of the population size
sample_size = int(sample_size_fraction * len(self._data))
if sample_size > len(self._data):
self.logger.warning("WARN: The sample size is greater than the data being sampled")
self.logger.info("INFO: The sample size: {}".format(sample_size))
self.logger.info("INFO: The number of simulations: {}".format(num_bootstrap_simulations))
# Perform the set number of simulations
for index in range(num_bootstrap_simulations):
new_data = resample(self._data, n_samples=sample_size)
new_estimator = type(self)(
new_data,
self._target_estimand,
self._target_estimand.treatment_variable,
self._target_estimand.outcome_variable,
# names of treatment and outcome
treatment_value=self._treatment_value,
control_value=self._control_value,
test_significance=False,
evaluate_effect_strength=False,
confidence_intervals=False,
target_units=self._target_units,
effect_modifiers=self._effect_modifier_names,
**self.method_params,
)
new_effect = new_estimator.estimate_effect()
simulation_results[index] = new_effect.value
estimates = CausalEstimator.BootstrapEstimates(
simulation_results,
{"num_simulations": num_bootstrap_simulations, "sample_size_fraction": sample_size_fraction},
)
return estimates
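    # Illustrative sketch of the returned namedtuple (hedged; field layout
    # follows BootstrapEstimates defined on this class):
    #     boot = estimator._generate_bootstrap_estimates(100, 1.0)
    #     boot.estimates                       # np.ndarray of 100 effect values
    #     boot.params["num_simulations"]       # 100
    #     boot.params["sample_size_fraction"]  # 1.0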
def _estimate_confidence_intervals_with_bootstrap(
self, estimate_value, confidence_level=None, num_simulations=None, sample_size_fraction=None
):
"""
Method to compute confidence interval using bootstrapped sampling.
:param estimate_value: obtained estimate's value
:param confidence_level: The level for which to compute CI (e.g., 95% confidence level translates to confidence_level=0.95)
:param num_simulations: The number of simulations to be performed to get the bootstrap confidence intervals.
:param sample_size_fraction: The fraction of the dataset to be resampled.
:returns: confidence interval at the specified level.
For more details on bootstrap or resampling statistics, refer to the following links:
https://ocw.mit.edu/courses/mathematics/18-05-introduction-to-probability-and-statistics-spring-2014/readings/MIT18_05S14_Reading24.pdf
https://projecteuclid.org/download/pdf_1/euclid.ss/1032280214
"""
# Using class default parameters if not specified
if num_simulations is None:
num_simulations = self.num_simulations
if sample_size_fraction is None:
sample_size_fraction = self.sample_size_fraction
# Checking if bootstrap_estimates are already computed
if self._bootstrap_estimates is None:
self._bootstrap_estimates = self._generate_bootstrap_estimates(num_simulations, sample_size_fraction)
elif CausalEstimator.is_bootstrap_parameter_changed(self._bootstrap_estimates.params, locals()):
            # Check if any parameter has changed since the previous bootstrap run
self._bootstrap_estimates = self._generate_bootstrap_estimates(num_simulations, sample_size_fraction)
# Now use the data obtained from the simulations to get the value of the confidence estimates
bootstrap_estimates = self._bootstrap_estimates.estimates
# Get the variations of each bootstrap estimate and sort
bootstrap_variations = [bootstrap_estimate - estimate_value for bootstrap_estimate in bootstrap_estimates]
sorted_bootstrap_variations = np.sort(bootstrap_variations)
        # Now we take the (1 - p)th and the (p)th variations, where p is the chosen confidence level
upper_bound_index = int((1 - confidence_level) * len(sorted_bootstrap_variations))
lower_bound_index = int(confidence_level * len(sorted_bootstrap_variations))
# Get the lower and upper bounds by subtracting the variations from the estimate
lower_bound = estimate_value - sorted_bootstrap_variations[lower_bound_index]
upper_bound = estimate_value - sorted_bootstrap_variations[upper_bound_index]
return lower_bound, upper_bound
def _estimate_confidence_intervals(self, confidence_level=None, method=None, **kwargs):
"""
        This method is to be overridden by the child classes, so that they
can run a confidence interval estimation method suited to the specific
causal estimator.
"""
raise NotImplementedError(
(
"This method for estimating confidence intervals is "
+ CausalEstimator.DEFAULT_NOTIMPLEMENTEDERROR_MSG
+ " Meanwhile, you can try the bootstrap method (method='bootstrap') to estimate confidence intervals."
).format(self.__class__)
)
def estimate_confidence_intervals(self, estimate_value, confidence_level=None, method=None, **kwargs):
"""Find the confidence intervals corresponding to any estimator
By default, this is done with the help of bootstrapped confidence intervals
but can be overridden if the specific estimator implements other methods of estimating confidence intervals.
If the method provided is not bootstrap, this function calls the implementation of the specific estimator.
:param estimate_value: obtained estimate's value
:param method: Method for estimating confidence intervals.
:param confidence_level: The confidence level of the confidence intervals of the estimate.
:param kwargs: Other optional args to be passed to the CI method.
:returns: The obtained confidence interval.
"""
if method is None:
if self._confidence_intervals:
method = self._confidence_intervals # this is either True or methodname
else:
method = "default"
confidence_intervals = None
if confidence_level is None:
confidence_level = self.confidence_level
if method == "default" or method is True: # user has not provided any method
try:
confidence_intervals = self._estimate_confidence_intervals(confidence_level, method=method, **kwargs)
except NotImplementedError:
confidence_intervals = self._estimate_confidence_intervals_with_bootstrap(
estimate_value, confidence_level, **kwargs
)
else:
if method == "bootstrap":
confidence_intervals = self._estimate_confidence_intervals_with_bootstrap(
estimate_value, confidence_level, **kwargs
)
else:
confidence_intervals = self._estimate_confidence_intervals(confidence_level, method=method, **kwargs)
return confidence_intervals
def _estimate_std_error_with_bootstrap(self, num_simulations=None, sample_size_fraction=None):
"""Compute standard error using the bootstrap method. Standard error
and confidence intervals use the same parameter num_simulations for
the number of bootstrap simulations.
:param num_simulations: Number of bootstrapped samples.
:param sample_size_fraction: Fraction of data to be resampled.
:returns: Standard error of the obtained estimate.
"""
# Use existing params, if new user defined params are not present
if num_simulations is None:
num_simulations = self.num_simulations
if sample_size_fraction is None:
sample_size_fraction = self.sample_size_fraction
# Checking if bootstrap_estimates are already computed
if self._bootstrap_estimates is None:
self._bootstrap_estimates = self._generate_bootstrap_estimates(num_simulations, sample_size_fraction)
elif CausalEstimator.is_bootstrap_parameter_changed(self._bootstrap_estimates.params, locals()):
# Check if any parameter is changed from the previous std error estimate
self._bootstrap_estimates = self._generate_bootstrap_estimates(num_simulations, sample_size_fraction)
std_error = np.std(self._bootstrap_estimates.estimates)
return std_error
def _estimate_std_error(self, method=None, **kwargs):
"""
        This method is to be overridden by the child classes, so that they
can run a standard error estimation method suited to the specific
causal estimator.
"""
raise NotImplementedError(
(
"This method for estimating standard errors is "
+ CausalEstimator.DEFAULT_NOTIMPLEMENTEDERROR_MSG
+ " Meanwhile, you can try the bootstrap method (method='bootstrap') to estimate standard errors."
).format(self.__class__)
)
def estimate_std_error(self, method=None, **kwargs):
"""Compute standard error of an obtained causal estimate.
:param method: Method for computing the standard error.
:param kwargs: Other optional parameters to be passed to the estimating method.
:returns: Standard error of the causal estimate.
"""
if method is None:
if self._confidence_intervals:
method = self._confidence_intervals
else:
method = "default"
std_error = None
if method == "default" or method is True: # user has not provided any method
try:
std_error = self._estimate_std_error(method, **kwargs)
except NotImplementedError:
std_error = self._estimate_std_error_with_bootstrap(**kwargs)
else:
if method == "bootstrap":
std_error = self._estimate_std_error_with_bootstrap(**kwargs)
else:
std_error = self._estimate_std_error(method, **kwargs)
return std_error
def _test_significance_with_bootstrap(self, estimate_value, num_null_simulations=None):
"""Test statistical significance of an estimate using the bootstrap method.
:param estimate_value: Obtained estimate's value
:param num_null_simulations: Number of simulations for the null hypothesis
:returns: p-value of the statistical significance test.
"""
# Use existing params, if new user defined params are not present
if num_null_simulations is None:
num_null_simulations = self.num_null_simulations
do_retest = self._bootstrap_null_estimates is None or CausalEstimator.is_bootstrap_parameter_changed(
self._bootstrap_null_estimates.params, locals()
)
if do_retest:
null_estimates = np.zeros(num_null_simulations)
for i in range(num_null_simulations):
new_outcome = np.random.permutation(self._outcome)
new_data = self._data.assign(dummy_outcome=new_outcome)
# self._outcome = self._data["dummy_outcome"]
new_estimator = type(self)(
new_data,
self._target_estimand,
self._target_estimand.treatment_variable,
("dummy_outcome",),
test_significance=False,
evaluate_effect_strength=False,
confidence_intervals=False,
target_units=self._target_units,
effect_modifiers=self._effect_modifier_names,
**self.method_params,
)
new_effect = new_estimator.estimate_effect()
null_estimates[i] = new_effect.value
self._bootstrap_null_estimates = CausalEstimator.BootstrapEstimates(
null_estimates, {"num_null_simulations": num_null_simulations, "sample_size_fraction": 1}
)
# Processing the null hypothesis estimates
sorted_null_estimates = np.sort(self._bootstrap_null_estimates.estimates)
self.logger.debug("Null estimates: {0}".format(sorted_null_estimates))
median_estimate = sorted_null_estimates[int(num_null_simulations / 2)]
# Doing a two-sided test
if estimate_value > median_estimate:
# Being conservative with the p-value reported
estimate_index = np.searchsorted(sorted_null_estimates, estimate_value, side="left")
p_value = 1 - (estimate_index / num_null_simulations)
        else:
# Being conservative with the p-value reported
estimate_index = np.searchsorted(sorted_null_estimates, estimate_value, side="right")
p_value = estimate_index / num_null_simulations
# If the estimate_index is 0, it depends on the number of simulations
if p_value == 0:
p_value = (0, 1 / len(sorted_null_estimates)) # a tuple determining the range.
elif p_value == 1:
p_value = (1 - 1 / len(sorted_null_estimates), 1)
signif_dict = {"p_value": p_value}
return signif_dict
def _test_significance(self, estimate_value, method=None, **kwargs):
"""
        This method is to be overridden by the child classes, so that they
can run a significance test suited to the specific
causal estimator.
"""
raise NotImplementedError(
(
"This method for testing statistical significance is "
+ CausalEstimator.DEFAULT_NOTIMPLEMENTEDERROR_MSG
+ " Meanwhile, you can try the bootstrap method (method='bootstrap') to test statistical significance."
).format(self.__class__)
)
def test_significance(self, estimate_value, method=None, **kwargs):
"""Test statistical significance of obtained estimate.
By default, uses resampling to create a non-parametric significance test.
A general procedure. Individual child estimators can implement different methods.
If the method name is different from "bootstrap", this function calls the
implementation of the child estimator.
:param self: object instance of class Estimator
:param estimate_value: obtained estimate's value
:param method: Method for checking statistical significance
:returns: p-value from the significance test
"""
if method is None:
if self._significance_test:
method = self._significance_test # this is either True or methodname
else:
method = "default"
signif_dict = None
if method == "default" or method is True: # user has not provided any method
try:
signif_dict = self._test_significance(estimate_value, method, **kwargs)
except NotImplementedError:
signif_dict = self._test_significance_with_bootstrap(estimate_value, **kwargs)
else:
if method == "bootstrap":
signif_dict = self._test_significance_with_bootstrap(estimate_value, **kwargs)
else:
signif_dict = self._test_significance(estimate_value, method, **kwargs)
return signif_dict
def evaluate_effect_strength(self, estimate):
fraction_effect_explained = self._evaluate_effect_strength(estimate, method="fraction-effect")
# Need to test r-squared before supporting
# effect_r_squared = self._evaluate_effect_strength(estimate, method="r-squared")
strength_dict = {
"fraction-effect": fraction_effect_explained
# 'r-squared': effect_r_squared
}
return strength_dict
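    # Illustrative note: "fraction-effect" divides the causal estimate by the
    # naive (unadjusted) difference in outcome means. E.g., a causal estimate
    # of 2.0 against a naive estimate of 4.0 yields 0.5, i.e., roughly half of
    # the observed association is attributable to the treatment.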
def _evaluate_effect_strength(self, estimate, method="fraction-effect"):
supported_methods = ["fraction-effect"]
if method not in supported_methods:
raise NotImplementedError("This method is not supported for evaluating effect strength")
if method == "fraction-effect":
naive_obs_estimate = self.estimate_effect_naive()
            self.logger.debug("%s, %s", estimate.value, naive_obs_estimate.value)
fraction_effect_explained = estimate.value / naive_obs_estimate.value
return fraction_effect_explained
# elif method == "r-squared":
# outcome_mean = np.mean(self._outcome)
# total_variance = np.sum(np.square(self._outcome - outcome_mean))
# Assuming a linear model with one variable: the treatment
# Currently only works for continuous y
# causal_model = outcome_mean + estimate.value*self._treatment
# squared_residual = np.sum(np.square(self._outcome - causal_model))
# r_squared = 1 - (squared_residual/total_variance)
# return r_squared
else:
return None
def update_input(self, treatment_value, control_value, target_units):
self._control_value = control_value
self._treatment_value = treatment_value
self._target_units = target_units
@staticmethod
def is_bootstrap_parameter_changed(bootstrap_estimates_params, given_params):
"""Check whether parameters of the bootstrap have changed.
This is an efficiency method that checks if fresh resampling of the bootstrap samples is required.
Returns True if parameters have changed and resampling should be done again.
:param bootstrap_estimates_params: A dictionary of parameters for the current bootstrap samples
:param given_params: A dictionary of parameters passed by the user
:returns: A binary flag denoting whether the parameters are different.
"""
is_any_parameter_changed = False
for prm, val in bootstrap_estimates_params.items():
given_val = given_params.get(prm, None)
if given_val is not None and given_val != val:
is_any_parameter_changed = True
break
return is_any_parameter_changed
def target_units_tostr(self):
s = ""
if type(self._target_units) is str:
s += self._target_units
elif callable(self._target_units):
s += "Data subset defined by a function"
elif isinstance(self._target_units, pd.DataFrame):
s += "Data subset provided as a data frame"
return s
def signif_results_tostr(self, signif_results):
s = ""
pval = signif_results["p_value"]
if type(pval) is tuple:
s += "[{0}, {1}]".format(pval[0], pval[1])
else:
s += "{0}".format(pval)
return s
def estimate_effect(
treatment: Union[str, List[str]],
outcome: Union[str, List[str]],
identified_estimand: IdentifiedEstimand,
identifier_name: str,
method: CausalEstimator,
control_value: int = 0,
treatment_value: int = 1,
test_significance: Optional[bool] = None,
evaluate_effect_strength: bool = False,
confidence_intervals: bool = False,
target_units: str = "ate",
effect_modifiers: List[str] = [],
fit_estimator: bool = True,
method_params: Optional[Dict] = None,
):
"""Estimate the identified causal effect.
    Currently requires an instantiated estimator object to be specified. Estimation methods follow the convention of identification method followed by the specific estimation method: "[backdoor/iv].estimation_method_name". The following methods are supported.
* Propensity Score Matching: "backdoor.propensity_score_matching"
* Propensity Score Stratification: "backdoor.propensity_score_stratification"
* Propensity Score-based Inverse Weighting: "backdoor.propensity_score_weighting"
* Linear Regression: "backdoor.linear_regression"
* Generalized Linear Models (e.g., logistic regression): "backdoor.generalized_linear_model"
* Instrumental Variables: "iv.instrumental_variable"
* Regression Discontinuity: "iv.regression_discontinuity"
In addition, you can directly call any of the EconML estimation methods. The convention is "backdoor.econml.path-to-estimator-class". For example, for the double machine learning estimator ("DML" class) that is located inside "dml" module of EconML, you can use the method name, "backdoor.econml.dml.DML". CausalML estimators can also be called. See `this demo notebook <https://py-why.github.io/dowhy/example_notebooks/dowhy-conditional-treatment-effects.html>`_.
:param treatment: Name of the treatment
:param outcome: Name of the outcome
:param identified_estimand: a probability expression
that represents the effect to be estimated. Output of
CausalModel.identify_effect method
    :param identifier_name: name of the identification strategy (e.g., "backdoor" or "iv") whose estimand should be used for estimation.
    :param method: an instance of the CausalEstimator subclass implementing the estimation method to be used.
:param control_value: Value of the treatment in the control group, for effect estimation. If treatment is multi-variate, this can be a list.
:param treatment_value: Value of the treatment in the treated group, for effect estimation. If treatment is multi-variate, this can be a list.
    :param test_significance: Binary flag on whether to additionally do a statistical significance test for the estimate.
:param evaluate_effect_strength: (Experimental) Binary flag on whether to estimate the relative strength of the treatment's effect. This measure can be used to compare different treatments for the same outcome (by running this method with different treatments sequentially).
:param confidence_intervals: (Experimental) Binary flag indicating whether confidence intervals should be computed.
:param target_units: (Experimental) The units for which the treatment effect should be estimated. This can be of three types. (1) a string for common specifications of target units (namely, "ate", "att" and "atc"), (2) a lambda function that can be used as an index for the data (pandas DataFrame), or (3) a new DataFrame that contains values of the effect_modifiers and effect will be estimated only for this new data.
:param effect_modifiers: Names of effect modifier variables can be (optionally) specified here too, since they do not affect identification. If None, the effect_modifiers from the CausalModel are used.
:param fit_estimator: Boolean flag on whether to fit the estimator.
Setting it to False is useful to estimate the effect on new data using a previously fitted estimator.
:param method_params: Dictionary containing any method-specific parameters. These are passed directly to the estimating method. See the docs for each estimation method for allowed method-specific params.
:returns: An instance of the CausalEstimate class, containing the causal effect estimate
and other method-dependent information
"""
treatment = parse_state(treatment)
outcome = parse_state(outcome)
causal_estimator_class = method.__class__
identified_estimand.set_identifier_method(identifier_name)
if identified_estimand.no_directed_path:
logger.warning("No directed path from {0} to {1}.".format(treatment, outcome))
return CausalEstimate(
0, identified_estimand, None, control_value=control_value, treatment_value=treatment_value
)
# Check if estimator's target estimand is identified
elif identified_estimand.estimands[identifier_name] is None:
logger.error("No valid identified estimand available.")
return CausalEstimate(None, None, None, control_value=control_value, treatment_value=treatment_value)
method.update_input(treatment_value, control_value, target_units)
estimate = method.estimate_effect()
# Store parameters inside estimate object for refutation methods
# TODO: This add_params needs to move to the estimator class
# inside estimate_effect and estimate_conditional_effect
estimate.add_params(
estimand_type=identified_estimand.estimand_type,
estimator_class=causal_estimator_class,
test_significance=test_significance,
evaluate_effect_strength=evaluate_effect_strength,
confidence_intervals=confidence_intervals,
target_units=target_units,
effect_modifiers=effect_modifiers,
method_params=method_params,
)
return estimate
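# Illustrative usage sketch of the functional API above (hypothetical variable
# names; assumes a pandas DataFrame `df` and an IdentifiedEstimand `estimand`
# obtained from an identification step):
#
#     from dowhy.causal_estimators.propensity_score_matching_estimator import (
#         PropensityScoreMatchingEstimator,
#     )
#
#     estimator = PropensityScoreMatchingEstimator(
#         df, estimand, estimand.treatment_variable, estimand.outcome_variable
#     )
#     estimate = estimate_effect(
#         treatment="v0",
#         outcome="y",
#         identified_estimand=estimand,
#         identifier_name="backdoor",
#         method=estimator,
#     )
#     print(estimate.value)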
class CausalEstimate:
"""Class for the estimate object that every causal estimator returns"""
def __init__(
self,
estimate,
target_estimand,
realized_estimand_expr,
control_value,
treatment_value,
conditional_estimates=None,
**kwargs,
):
self.value = estimate
self.target_estimand = target_estimand
self.realized_estimand_expr = realized_estimand_expr
self.control_value = control_value
self.treatment_value = treatment_value
self.conditional_estimates = conditional_estimates
self.params = kwargs
if self.params is not None:
for key, value in self.params.items():
setattr(self, key, value)
self.effect_strength = None
def add_estimator(self, estimator_instance):
self.estimator = estimator_instance
def add_effect_strength(self, strength_dict):
self.effect_strength = strength_dict
def add_params(self, **kwargs):
self.params.update(kwargs)
def get_confidence_intervals(self, confidence_level=None, method=None, **kwargs):
"""Get confidence intervals of the obtained estimate.
By default, this is done with the help of bootstrapped confidence intervals
but can be overridden if the specific estimator implements other methods of estimating confidence intervals.
If the method provided is not bootstrap, this function calls the implementation of the specific estimator.
:param method: Method for estimating confidence intervals.
:param confidence_level: The confidence level of the confidence intervals of the estimate.
:param kwargs: Other optional args to be passed to the CI method.
:returns: The obtained confidence interval.
"""
confidence_intervals = self.estimator.estimate_confidence_intervals(
estimate_value=self.value, confidence_level=confidence_level, method=method, **kwargs
)
return confidence_intervals
def get_standard_error(self, method=None, **kwargs):
"""Get standard error of the obtained estimate.
By default, this is done with the help of bootstrapped standard errors
but can be overridden if the specific estimator implements other methods of estimating standard error.
If the method provided is not bootstrap, this function calls the implementation of the specific estimator.
:param method: Method for computing the standard error.
:param kwargs: Other optional parameters to be passed to the estimating method.
:returns: Standard error of the causal estimate.
"""
std_error = self.estimator.estimate_std_error(method=method, **kwargs)
return std_error
def test_stat_significance(self, method=None, **kwargs):
"""Test statistical significance of the estimate obtained.
By default, uses resampling to create a non-parametric significance test.
Individual child estimators can implement different methods.
If the method name is different from "bootstrap", this function calls the
implementation of the child estimator.
:param method: Method for checking statistical significance
:param kwargs: Other optional parameters to be passed to the estimating method.
:returns: p-value from the significance test
"""
signif_results = self.estimator.test_significance(self.value, method=method, **kwargs)
return {"p_value": signif_results["p_value"]}
def estimate_conditional_effects(
self, effect_modifiers=None, num_quantiles=CausalEstimator.NUM_QUANTILES_TO_DISCRETIZE_CONT_COLS
):
"""Estimate treatment effect conditioned on given variables.
If a numeric effect modifier is provided, it is discretized into quantile bins. If you would like a custom discretization, you can do so yourself: create a new column containing the discretized effect modifier and then include that column's name in the effect_modifier_names argument.
:param effect_modifiers: Names of effect modifier variables over which the conditional effects will be estimated. If not provided, defaults to the effect modifiers specified during creation of the CausalEstimator object.
:param num_quantiles: The number of quantiles into which a numeric effect modifier variable is discretized. Does not affect any categorical effect modifiers.
:returns: A (multi-index) dataframe that provides separate effects for each value of the (discretized) effect modifiers.
"""
return self.estimator._estimate_conditional_effects(
self.estimator._estimate_effect_fn, effect_modifiers, num_quantiles
)
def interpret(self, method_name=None, **kwargs):
"""Interpret the causal estimate.
:param method_name: Method used (string) or a list of methods. If None, then the default for the specific estimator is used.
        :param kwargs: Optional parameters that are directly passed to the interpreter method.
:returns: None
"""
if method_name is None:
method_name = self.estimator.interpret_method
method_name_arr = parse_state(method_name)
for method in method_name_arr:
interpreter = interpreters.get_class_object(method)
interpreter(self, **kwargs).interpret()
def __str__(self):
s = "*** Causal Estimate ***\n"
# No estimand was identified (identification failed)
if self.target_estimand is None:
return "Estimation failed! No relevant identified estimand available for this estimation method."
s += "\n## Identified estimand\n{0}".format(self.target_estimand.__str__(only_target_estimand=True))
s += "\n## Realized estimand\n{0}".format(self.realized_estimand_expr)
if hasattr(self, "estimator"):
s += "\nTarget units: {0}\n".format(self.estimator.target_units_tostr())
s += "\n## Estimate\n"
s += "Mean value: {0}\n".format(self.value)
s += ""
if hasattr(self, "cate_estimates"):
s += "Effect estimates: {0}\n".format(self.cate_estimates)
if hasattr(self, "estimator"):
if self.estimator._significance_test:
s += "p-value: {0}\n".format(self.estimator.signif_results_tostr(self.test_stat_significance()))
if self.estimator._confidence_intervals:
s += "{0}% confidence interval: {1}\n".format(
100 * self.estimator.confidence_level, self.get_confidence_intervals()
)
if self.conditional_estimates is not None:
s += "### Conditional Estimates\n"
s += str(self.conditional_estimates)
if self.effect_strength is not None:
s += "\n## Effect Strength\n"
s += "Change in outcome attributable to treatment: {}\n".format(self.effect_strength["fraction-effect"])
# s += "Variance in outcome explained by treatment: {}\n".format(self.effect_strength["r-squared"])
return s
class RealizedEstimand(object):
def __init__(self, identified_estimand, estimator_name):
self.treatment_variable = identified_estimand.treatment_variable
self.outcome_variable = identified_estimand.outcome_variable
self.backdoor_variables = identified_estimand.get_backdoor_variables()
self.instrumental_variables = identified_estimand.instrumental_variables
self.estimand_type = identified_estimand.estimand_type
self.estimand_expression = None
self.assumptions = None
self.estimator_name = estimator_name
def update_assumptions(self, estimator_assumptions):
self.assumptions = estimator_assumptions
def update_estimand_expression(self, estimand_expression):
self.estimand_expression = estimand_expression
def __str__(self):
s = "Realized estimand: {0}\n".format(self.estimator_name)
s += "Realized estimand type: {0}\n".format(self.estimand_type)
s += "Estimand expression:\n{0}\n".format(sp.pretty(self.estimand_expression))
j = 1
for ass_name, ass_str in self.assumptions.items():
s += "Estimand assumption {0}, {1}: {2}\n".format(j, ass_name, ass_str)
j += 1
return s
| andresmor-ms | 2044d216c322a4b32c6eadce5da7d83463f19c2f | 05bfa49dacf0061988c96c6f3e3756219df5422a | shall we provide the method object here? And get rid of method_kwargs, as done in identification? | amit-sharma | 280 |
py-why/dowhy | 693 | Functional api/estimate effect function | #### Estimate Effect function
* Refactors effect estimation into a separate function while keeping backwards compatibility
#### TODO (future PRs):
* Add `fit(...)` method to estimators - Move data-related parameters from the constructor to the `fit(...)` method
* Refactor code to avoid `**kwargs` in `__init__(...)` constructors
| null | 2022-10-18 15:49:21+00:00 | 2022-10-25 17:02:02+00:00 | dowhy/causal_estimator.py | import logging
from collections import namedtuple
import numpy as np
import pandas as pd
import sympy as sp
from sklearn.utils import resample
import dowhy.interpreters as interpreters
from dowhy.utils.api import parse_state
class CausalEstimator:
"""Base class for an estimator of causal effect.
Subclasses implement different estimation methods. All estimation methods are in the package "dowhy.causal_estimators"
"""
# The default number of simulations for statistical testing
DEFAULT_NUMBER_OF_SIMULATIONS_STAT_TEST = 1000
# The default number of simulations to obtain confidence intervals
DEFAULT_NUMBER_OF_SIMULATIONS_CI = 100
# The portion of the total size that should be taken each time to find the confidence intervals
# 1 is the recommended value
# https://ocw.mit.edu/courses/mathematics/18-05-introduction-to-probability-and-statistics-spring-2014/readings/MIT18_05S14_Reading24.pdf
# https://projecteuclid.org/download/pdf_1/euclid.ss/1032280214
DEFAULT_SAMPLE_SIZE_FRACTION = 1
# The default Confidence Level
DEFAULT_CONFIDENCE_LEVEL = 0.95
# Number of quantiles to discretize continuous columns, for applying groupby
NUM_QUANTILES_TO_DISCRETIZE_CONT_COLS = 5
# Prefix to add to temporary categorical variables created after discretization
TEMP_CAT_COLUMN_PREFIX = "__categorical__"
    DEFAULT_NOTIMPLEMENTEDERROR_MSG = "not yet implemented for {0}. If you would like this to be implemented in the next version, please raise an issue at https://github.com/microsoft/dowhy/issues"
BootstrapEstimates = namedtuple("BootstrapEstimates", ["estimates", "params"])
DEFAULT_INTERPRET_METHOD = ["textual_effect_interpreter"]
# std args to be removed from locals() before being passed to args_dict
_STD_INIT_ARGS = ("self", "__class__", "args", "kwargs")
def __init__(
self,
data,
identified_estimand,
treatment,
outcome,
control_value=0,
treatment_value=1,
test_significance=False,
evaluate_effect_strength=False,
confidence_intervals=False,
target_units=None,
effect_modifiers=None,
num_null_simulations=DEFAULT_NUMBER_OF_SIMULATIONS_STAT_TEST,
num_simulations=DEFAULT_NUMBER_OF_SIMULATIONS_CI,
sample_size_fraction=DEFAULT_SAMPLE_SIZE_FRACTION,
confidence_level=DEFAULT_CONFIDENCE_LEVEL,
need_conditional_estimates="auto",
num_quantiles_to_discretize_cont_cols=NUM_QUANTILES_TO_DISCRETIZE_CONT_COLS,
**kwargs,
):
"""Initializes an estimator with data and names of relevant variables.
This method is called from the constructors of its child classes.
:param data: data frame containing the data
:param identified_estimand: probability expression
representing the target identified estimand to estimate.
:param treatment: name of the treatment variable
:param outcome: name of the outcome variable
:param control_value: Value of the treatment in the control group, for effect estimation. If treatment is multi-variate, this can be a list.
:param treatment_value: Value of the treatment in the treated group, for effect estimation. If treatment is multi-variate, this can be a list.
:param test_significance: Binary flag or a string indicating whether to test significance and by which method. All estimators support test_significance="bootstrap" that estimates a p-value for the obtained estimate using the bootstrap method. Individual estimators can override this to support custom testing methods. The bootstrap method supports an optional parameter, num_null_simulations. If False, no testing is done. If True, significance of the estimate is tested using the custom method if available, otherwise by bootstrap.
:param evaluate_effect_strength: (Experimental) whether to evaluate the strength of effect
:param confidence_intervals: Binary flag or a string indicating whether the confidence intervals should be computed and which method should be used. All methods support estimation of confidence intervals using the bootstrap method by using the parameter confidence_intervals="bootstrap". The bootstrap method takes in two arguments (num_simulations and sample_size_fraction) that can be optionally specified in the params dictionary. Estimators may also override this to implement their own confidence interval method. If this parameter is False, no confidence intervals are computed. If True, confidence intervals are computed by the estimator's specific method if available, otherwise through bootstrap.
:param target_units: The units for which the treatment effect should be estimated. This can be a string for common specifications of target units (namely, "ate", "att" and "atc"). It can also be a lambda function that can be used as an index for the data (pandas DataFrame). Alternatively, it can be a new DataFrame that contains values of the effect_modifiers and effect will be estimated only for this new data.
:param effect_modifiers: Variables on which to compute separate
effects, or return a heterogeneous effect function. Not all
methods support this currently.
:param num_null_simulations: The number of simulations for testing the
statistical significance of the estimator
:param num_simulations: The number of simulations for finding the
            confidence interval (and/or standard error) for an estimate
:param sample_size_fraction: The size of the sample for the bootstrap
estimator
:param confidence_level: The confidence level of the confidence
interval estimate
:param need_conditional_estimates: Boolean flag indicating whether
conditional estimates should be computed. Defaults to True if
there are effect modifiers in the graph
:param num_quantiles_to_discretize_cont_cols: The number of quantiles
into which a numeric effect modifier is split, to enable
estimation of conditional treatment effect over it.
:param kwargs: (optional) Additional estimator-specific parameters
:returns: an instance of the estimator class.
"""
self._data = data
self._target_estimand = identified_estimand
# Currently estimation methods only support univariate treatment and outcome
self._treatment_name = treatment
self._outcome_name = outcome[0] # assuming one-dimensional outcome
self._control_value = control_value
self._treatment_value = treatment_value
self._significance_test = test_significance
self._effect_strength_eval = evaluate_effect_strength
self._target_units = target_units
self._effect_modifier_names = effect_modifiers
self._confidence_intervals = confidence_intervals
self._bootstrap_estimates = None # for confidence intervals and std error
self._bootstrap_null_estimates = None # for significance test
self._effect_modifiers = None
self.method_params = kwargs
# Setting the default interpret method
self.interpret_method = CausalEstimator.DEFAULT_INTERPRET_METHOD
self.logger = logging.getLogger(__name__)
# Setting treatment and outcome values
if self._data is not None:
self._treatment = self._data[self._treatment_name]
self._outcome = self._data[self._outcome_name]
# Now saving the effect modifiers
if self._effect_modifier_names:
# only add the observed nodes
self._effect_modifier_names = [
cname for cname in self._effect_modifier_names if cname in self._data.columns
]
if len(self._effect_modifier_names) > 0:
self._effect_modifiers = self._data[self._effect_modifier_names]
self._effect_modifiers = pd.get_dummies(self._effect_modifiers, drop_first=True)
self.logger.debug("Effect modifiers: " + ",".join(self._effect_modifier_names))
else:
self._effect_modifier_names = None
# Check if some parameters were set, otherwise set to default values
self.num_null_simulations = num_null_simulations
self.num_simulations = num_simulations
self.sample_size_fraction = sample_size_fraction
self.confidence_level = confidence_level
self.num_quantiles_to_discretize_cont_cols = num_quantiles_to_discretize_cont_cols
# Estimate conditional estimates by default
self.need_conditional_estimates = (
need_conditional_estimates if need_conditional_estimates != "auto" else bool(self._effect_modifier_names)
)
@staticmethod
def get_estimator_object(new_data, identified_estimand, estimate):
"""Create a new estimator of the same type as the one passed in the estimate argument.
Creates a new object with new_data and the identified_estimand
:param new_data: np.ndarray, pd.Series, pd.DataFrame
The newly assigned data on which the estimator should run
:param identified_estimand: IdentifiedEstimand
An instance of the identified estimand class that provides the information with
            respect to which causal pathways are employed when the treatment affects the outcome
:param estimate: CausalEstimate
It is an already existing estimate whose properties we wish to replicate
:returns: An instance of the same estimator class that had generated the given estimate.
"""
estimator_class = estimate.params["estimator_class"]
new_estimator = estimator_class(
new_data,
identified_estimand,
identified_estimand.treatment_variable,
identified_estimand.outcome_variable,
# names of treatment and outcome
control_value=estimate.control_value,
treatment_value=estimate.treatment_value,
test_significance=False,
evaluate_effect_strength=False,
confidence_intervals=estimate.params["confidence_intervals"],
target_units=estimate.params["target_units"],
effect_modifiers=estimate.params["effect_modifiers"],
**estimate.params["method_params"],
)
return new_estimator
def _estimate_effect(self):
"""This method is to be overriden by the child classes, so that they can run the estimation technique of their choice"""
raise NotImplementedError(
("Main estimation method is " + CausalEstimator.DEFAULT_NOTIMPLEMENTEDERROR_MSG).format(self.__class__)
)
def estimate_effect(self):
"""Base estimation method that calls the estimate_effect method of its calling subclass.
Can optionally also test significance and estimate effect strength for any returned estimate.
:param self: object instance of class Estimator
:returns: A CausalEstimate instance that contains point estimates of average and conditional effects. Based on the parameters provided, it optionally includes confidence intervals, standard errors,statistical significance and other statistical parameters.
"""
est = self._estimate_effect()
est.add_estimator(self)
if self._significance_test:
self.test_significance(est.value, method=self._significance_test)
if self._confidence_intervals:
self.estimate_confidence_intervals(
est.value, confidence_level=self.confidence_level, method=self._confidence_intervals
)
if self._effect_strength_eval:
effect_strength_dict = self.evaluate_effect_strength(est)
est.add_effect_strength(effect_strength_dict)
return est
def estimate_effect_naive(self):
# TODO Only works for binary treatment
df_withtreatment = self._data.loc[self._data[self._treatment_name] == 1]
df_notreatment = self._data.loc[self._data[self._treatment_name] == 0]
est = np.mean(df_withtreatment[self._outcome_name]) - np.mean(df_notreatment[self._outcome_name])
return CausalEstimate(est, None, None, control_value=0, treatment_value=1)
def _estimate_effect_fn(self, data_df):
"""Function used in conditional effect estimation. This function is to be overridden by each child estimator.
The overridden function should take in a dataframe as input and return the estimate for that data.
"""
raise NotImplementedError(
("Conditional treatment effects are " + CausalEstimator.DEFAULT_NOTIMPLEMENTEDERROR_MSG).format(
self.__class__
)
)
def _estimate_conditional_effects(self, estimate_effect_fn, effect_modifier_names=None, num_quantiles=None):
"""Estimate conditional treatment effects. Common method for all estimators that utilizes a specific estimate_effect_fn implemented by each child estimator.
If a numeric effect modifier is provided, it is discretized into quantile bins. If you would like a custom discretization, you can do so yourself: create a new column containing the discretized effect modifier and then include that column's name in the effect_modifier_names argument.
:param estimate_effect_fn: Function that has a single parameter (a data frame) and returns the treatment effect estimate on that data.
:param effect_modifier_names: Names of effect modifier variables over which the conditional effects will be estimated. If not provided, defaults to the effect modifiers specified during creation of the CausalEstimator object.
:param num_quantiles: The number of quantiles into which a numeric effect modifier variable is discretized. Does not affect any categorical effect modifiers.
:returns: A (multi-index) dataframe that provides separate effects for each value of the (discretized) effect modifiers.
"""
# Defaulting to class default values if parameters are not provided
if effect_modifier_names is None:
effect_modifier_names = self._effect_modifier_names
if num_quantiles is None:
num_quantiles = self.num_quantiles_to_discretize_cont_cols
# Checking that there is at least one effect modifier
if not effect_modifier_names:
raise ValueError("At least one effect modifier should be specified to compute conditional effects.")
# Making sure that effect_modifier_names is a list
effect_modifier_names = parse_state(effect_modifier_names)
if not all(em in self._effect_modifier_names for em in effect_modifier_names):
            self.logger.warning(
"At least one of the provided effect modifiers was not included while fitting the estimator. You may get incorrect results. To resolve, fit the estimator again by providing the updated effect modifiers in estimate_effect()."
)
# Making a copy since we are going to be changing effect modifier names
effect_modifier_names = effect_modifier_names.copy()
prefix = CausalEstimator.TEMP_CAT_COLUMN_PREFIX
# For every numeric effect modifier, adding a temp categorical column
for i in range(len(effect_modifier_names)):
em = effect_modifier_names[i]
if pd.api.types.is_numeric_dtype(self._data[em].dtypes):
self._data[prefix + str(em)] = pd.qcut(self._data[em], num_quantiles, duplicates="drop")
effect_modifier_names[i] = prefix + str(em)
# Grouping by effect modifiers and computing effect separately
by_effect_mods = self._data.groupby(effect_modifier_names)
cond_est_fn = lambda x: self._do(self._treatment_value, x) - self._do(self._control_value, x)
conditional_estimates = by_effect_mods.apply(estimate_effect_fn)
# Deleting the temporary categorical columns
for em in effect_modifier_names:
if em.startswith(prefix):
self._data.pop(em)
return conditional_estimates
def _do(self, x, data_df=None):
raise NotImplementedError(
("Do-operator is " + CausalEstimator.DEFAULT_NOTIMPLEMENTEDERROR_MSG).format(self.__class__)
)
def do(self, x, data_df=None):
"""Method that implements the do-operator.
Given a value x for the treatment, returns the expected value of the outcome when the treatment is intervened to a value x.
:param x: Value of the treatment
:param data_df: Data on which the do-operator is to be applied.
:returns: Value of the outcome when treatment is intervened/set to x.
"""
est = self._do(x, data_df)
return est
def construct_symbolic_estimator(self, estimand):
raise NotImplementedError(("Symbolic estimator string is ").format(self.__class__))
def _generate_bootstrap_estimates(self, num_bootstrap_simulations, sample_size_fraction):
"""Helper function to generate causal estimates over bootstrapped samples.
:param num_bootstrap_simulations: Number of simulations for the bootstrap method.
:param sample_size_fraction: Fraction of the dataset to be resampled.
:returns: A collections.namedtuple containing a list of bootstrapped estimates and a dictionary containing parameters used for the bootstrap.
"""
# The array that stores the results of all estimations
simulation_results = np.zeros(num_bootstrap_simulations)
        # Find the sample size as a proportion of the population size
sample_size = int(sample_size_fraction * len(self._data))
if sample_size > len(self._data):
self.logger.warning("WARN: The sample size is greater than the data being sampled")
self.logger.info("INFO: The sample size: {}".format(sample_size))
self.logger.info("INFO: The number of simulations: {}".format(num_bootstrap_simulations))
# Perform the set number of simulations
for index in range(num_bootstrap_simulations):
new_data = resample(self._data, n_samples=sample_size)
new_estimator = type(self)(
new_data,
self._target_estimand,
self._target_estimand.treatment_variable,
self._target_estimand.outcome_variable,
# names of treatment and outcome
treatment_value=self._treatment_value,
control_value=self._control_value,
test_significance=False,
evaluate_effect_strength=False,
confidence_intervals=False,
target_units=self._target_units,
effect_modifiers=self._effect_modifier_names,
**self.method_params,
)
new_effect = new_estimator.estimate_effect()
simulation_results[index] = new_effect.value
estimates = CausalEstimator.BootstrapEstimates(
simulation_results,
{"num_simulations": num_bootstrap_simulations, "sample_size_fraction": sample_size_fraction},
)
return estimates
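    # Minimal self-contained sketch of the resampling pattern used above
    # (illustrative only, not dowhy API):
    #
    #     import numpy as np
    #     import pandas as pd
    #     from sklearn.utils import resample
    #
    #     df = pd.DataFrame({"y": np.random.randn(100)})
    #     boot_means = np.array(
    #         [resample(df, n_samples=len(df))["y"].mean() for _ in range(50)]
    #     )
    #     print(boot_means.std())  # bootstrap standard error of the mean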
def _estimate_confidence_intervals_with_bootstrap(
self, estimate_value, confidence_level=None, num_simulations=None, sample_size_fraction=None
):
"""
Method to compute confidence interval using bootstrapped sampling.
:param estimate_value: obtained estimate's value
:param confidence_level: The level for which to compute CI (e.g., 95% confidence level translates to confidence_level=0.95)
:param num_simulations: The number of simulations to be performed to get the bootstrap confidence intervals.
:param sample_size_fraction: The fraction of the dataset to be resampled.
:returns: confidence interval at the specified level.
For more details on bootstrap or resampling statistics, refer to the following links:
https://ocw.mit.edu/courses/mathematics/18-05-introduction-to-probability-and-statistics-spring-2014/readings/MIT18_05S14_Reading24.pdf
https://projecteuclid.org/download/pdf_1/euclid.ss/1032280214
"""
# Using class default parameters if not specified
if num_simulations is None:
num_simulations = self.num_simulations
if sample_size_fraction is None:
sample_size_fraction = self.sample_size_fraction
# Checking if bootstrap_estimates are already computed
if self._bootstrap_estimates is None:
self._bootstrap_estimates = self._generate_bootstrap_estimates(num_simulations, sample_size_fraction)
elif CausalEstimator.is_bootstrap_parameter_changed(self._bootstrap_estimates.params, locals()):
# Checked if any parameter is changed from the previous std error estimate
self._bootstrap_estimates = self._generate_bootstrap_estimates(num_simulations, sample_size_fraction)
# Now use the data obtained from the simulations to get the value of the confidence estimates
bootstrap_estimates = self._bootstrap_estimates.estimates
# Get the variations of each bootstrap estimate and sort
bootstrap_variations = [bootstrap_estimate - estimate_value for bootstrap_estimate in bootstrap_estimates]
sorted_bootstrap_variations = np.sort(bootstrap_variations)
# Now we take the (1- p)th and the (p)th variations, where p is the chosen confidence level
upper_bound_index = int((1 - confidence_level) * len(sorted_bootstrap_variations))
lower_bound_index = int(confidence_level * len(sorted_bootstrap_variations))
# Get the lower and upper bounds by subtracting the variations from the estimate
lower_bound = estimate_value - sorted_bootstrap_variations[lower_bound_index]
upper_bound = estimate_value - sorted_bootstrap_variations[upper_bound_index]
return lower_bound, upper_bound
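    # Worked sketch of the interval arithmetic above (toy numbers): with
    # estimate_value = 10, sorted bootstrap variations [-2, -1, 0, 1, 2] and
    # confidence_level = 0.95, upper_bound_index = int(0.05 * 5) = 0 and
    # lower_bound_index = int(0.95 * 5) = 4, so the interval is
    # (10 - 2, 10 - (-2)) = (8, 12).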
def _estimate_confidence_intervals(self, confidence_level=None, method=None, **kwargs):
"""
        This method is to be overridden by the child classes, so that they
can run a confidence interval estimation method suited to the specific
causal estimator.
"""
raise NotImplementedError(
(
"This method for estimating confidence intervals is "
+ CausalEstimator.DEFAULT_NOTIMPLEMENTEDERROR_MSG
+ " Meanwhile, you can try the bootstrap method (method='bootstrap') to estimate confidence intervals."
).format(self.__class__)
)
def estimate_confidence_intervals(self, estimate_value, confidence_level=None, method=None, **kwargs):
"""Find the confidence intervals corresponding to any estimator
By default, this is done with the help of bootstrapped confidence intervals
but can be overridden if the specific estimator implements other methods of estimating confidence intervals.
If the method provided is not bootstrap, this function calls the implementation of the specific estimator.
:param estimate_value: obtained estimate's value
:param method: Method for estimating confidence intervals.
:param confidence_level: The confidence level of the confidence intervals of the estimate.
:param kwargs: Other optional args to be passed to the CI method.
:returns: The obtained confidence interval.
"""
if method is None:
if self._confidence_intervals:
method = self._confidence_intervals # this is either True or methodname
else:
method = "default"
confidence_intervals = None
if confidence_level is None:
confidence_level = self.confidence_level
if method == "default" or method is True: # user has not provided any method
try:
confidence_intervals = self._estimate_confidence_intervals(confidence_level, method=method, **kwargs)
except NotImplementedError:
confidence_intervals = self._estimate_confidence_intervals_with_bootstrap(
estimate_value, confidence_level, **kwargs
)
else:
if method == "bootstrap":
confidence_intervals = self._estimate_confidence_intervals_with_bootstrap(
estimate_value, confidence_level, **kwargs
)
else:
confidence_intervals = self._estimate_confidence_intervals(confidence_level, method=method, **kwargs)
return confidence_intervals
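    # e.g. (illustrative call, assuming a fitted estimator and an estimate):
    #
    #     ci = estimator.estimate_confidence_intervals(
    #         estimate_value=estimate.value,
    #         confidence_level=0.90,
    #         method="bootstrap",
    #         num_simulations=200,
    #     )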
def _estimate_std_error_with_bootstrap(self, num_simulations=None, sample_size_fraction=None):
"""Compute standard error using the bootstrap method. Standard error
and confidence intervals use the same parameter num_simulations for
the number of bootstrap simulations.
:param num_simulations: Number of bootstrapped samples.
:param sample_size_fraction: Fraction of data to be resampled.
:returns: Standard error of the obtained estimate.
"""
# Use existing params, if new user defined params are not present
if num_simulations is None:
num_simulations = self.num_simulations
if sample_size_fraction is None:
sample_size_fraction = self.sample_size_fraction
# Checking if bootstrap_estimates are already computed
if self._bootstrap_estimates is None:
self._bootstrap_estimates = self._generate_bootstrap_estimates(num_simulations, sample_size_fraction)
elif CausalEstimator.is_bootstrap_parameter_changed(self._bootstrap_estimates.params, locals()):
# Check if any parameter is changed from the previous std error estimate
self._bootstrap_estimates = self._generate_bootstrap_estimates(num_simulations, sample_size_fraction)
std_error = np.std(self._bootstrap_estimates.estimates)
return std_error
def _estimate_std_error(self, method=None, **kwargs):
"""
        This method is to be overridden by the child classes, so that they
can run a standard error estimation method suited to the specific
causal estimator.
"""
raise NotImplementedError(
(
"This method for estimating standard errors is "
+ CausalEstimator.DEFAULT_NOTIMPLEMENTEDERROR_MSG
+ " Meanwhile, you can try the bootstrap method (method='bootstrap') to estimate standard errors."
).format(self.__class__)
)
def estimate_std_error(self, method=None, **kwargs):
"""Compute standard error of an obtained causal estimate.
:param method: Method for computing the standard error.
:param kwargs: Other optional parameters to be passed to the estimating method.
:returns: Standard error of the causal estimate.
"""
if method is None:
if self._confidence_intervals:
method = self._confidence_intervals
else:
method = "default"
std_error = None
if method == "default" or method is True: # user has not provided any method
try:
std_error = self._estimate_std_error(method, **kwargs)
except NotImplementedError:
std_error = self._estimate_std_error_with_bootstrap(**kwargs)
else:
if method == "bootstrap":
std_error = self._estimate_std_error_with_bootstrap(**kwargs)
else:
std_error = self._estimate_std_error(method, **kwargs)
return std_error
def _test_significance_with_bootstrap(self, estimate_value, num_null_simulations=None):
"""Test statistical significance of an estimate using the bootstrap method.
:param estimate_value: Obtained estimate's value
:param num_null_simulations: Number of simulations for the null hypothesis
:returns: p-value of the statistical significance test.
"""
# Use existing params, if new user defined params are not present
if num_null_simulations is None:
num_null_simulations = self.num_null_simulations
do_retest = self._bootstrap_null_estimates is None or CausalEstimator.is_bootstrap_parameter_changed(
self._bootstrap_null_estimates.params, locals()
)
if do_retest:
null_estimates = np.zeros(num_null_simulations)
for i in range(num_null_simulations):
new_outcome = np.random.permutation(self._outcome)
new_data = self._data.assign(dummy_outcome=new_outcome)
# self._outcome = self._data["dummy_outcome"]
new_estimator = type(self)(
new_data,
self._target_estimand,
self._target_estimand.treatment_variable,
("dummy_outcome",),
test_significance=False,
evaluate_effect_strength=False,
confidence_intervals=False,
target_units=self._target_units,
effect_modifiers=self._effect_modifier_names,
**self.method_params,
)
new_effect = new_estimator.estimate_effect()
null_estimates[i] = new_effect.value
self._bootstrap_null_estimates = CausalEstimator.BootstrapEstimates(
null_estimates, {"num_null_simulations": num_null_simulations, "sample_size_fraction": 1}
)
# Processing the null hypothesis estimates
sorted_null_estimates = np.sort(self._bootstrap_null_estimates.estimates)
self.logger.debug("Null estimates: {0}".format(sorted_null_estimates))
median_estimate = sorted_null_estimates[int(num_null_simulations / 2)]
# Doing a two-sided test
if estimate_value > median_estimate:
# Being conservative with the p-value reported
estimate_index = np.searchsorted(sorted_null_estimates, estimate_value, side="left")
p_value = 1 - (estimate_index / num_null_simulations)
        else:
# Being conservative with the p-value reported
estimate_index = np.searchsorted(sorted_null_estimates, estimate_value, side="right")
p_value = estimate_index / num_null_simulations
        # If the p-value is exactly 0 or 1, report a range bounded by the simulation resolution
if p_value == 0:
p_value = (0, 1 / len(sorted_null_estimates)) # a tuple determining the range.
elif p_value == 1:
p_value = (1 - 1 / len(sorted_null_estimates), 1)
signif_dict = {"p_value": p_value}
return signif_dict
def _test_significance(self, estimate_value, method=None, **kwargs):
"""
        This method is to be overridden by the child classes, so that they
can run a significance test suited to the specific
causal estimator.
"""
raise NotImplementedError(
(
"This method for testing statistical significance is "
+ CausalEstimator.DEFAULT_NOTIMPLEMENTEDERROR_MSG
+ " Meanwhile, you can try the bootstrap method (method='bootstrap') to test statistical significance."
).format(self.__class__)
)
def test_significance(self, estimate_value, method=None, **kwargs):
"""Test statistical significance of obtained estimate.
By default, uses resampling to create a non-parametric significance test.
A general procedure. Individual child estimators can implement different methods.
If the method name is different from "bootstrap", this function calls the
implementation of the child estimator.
:param self: object instance of class Estimator
:param estimate_value: obtained estimate's value
:param method: Method for checking statistical significance
:returns: p-value from the significance test
"""
if method is None:
if self._significance_test:
method = self._significance_test # this is either True or methodname
else:
method = "default"
signif_dict = None
if method == "default" or method is True: # user has not provided any method
try:
signif_dict = self._test_significance(estimate_value, method, **kwargs)
except NotImplementedError:
signif_dict = self._test_significance_with_bootstrap(estimate_value, **kwargs)
else:
if method == "bootstrap":
signif_dict = self._test_significance_with_bootstrap(estimate_value, **kwargs)
else:
signif_dict = self._test_significance(estimate_value, method, **kwargs)
return signif_dict
def evaluate_effect_strength(self, estimate):
fraction_effect_explained = self._evaluate_effect_strength(estimate, method="fraction-effect")
# Need to test r-squared before supporting
# effect_r_squared = self._evaluate_effect_strength(estimate, method="r-squared")
strength_dict = {
"fraction-effect": fraction_effect_explained
# 'r-squared': effect_r_squared
}
return strength_dict
def _evaluate_effect_strength(self, estimate, method="fraction-effect"):
supported_methods = ["fraction-effect"]
if method not in supported_methods:
raise NotImplementedError("This method is not supported for evaluating effect strength")
if method == "fraction-effect":
naive_obs_estimate = self.estimate_effect_naive()
            self.logger.debug("%s, %s", estimate.value, naive_obs_estimate.value)
fraction_effect_explained = estimate.value / naive_obs_estimate.value
return fraction_effect_explained
# elif method == "r-squared":
# outcome_mean = np.mean(self._outcome)
# total_variance = np.sum(np.square(self._outcome - outcome_mean))
# Assuming a linear model with one variable: the treatment
# Currently only works for continuous y
# causal_model = outcome_mean + estimate.value*self._treatment
# squared_residual = np.sum(np.square(self._outcome - causal_model))
# r_squared = 1 - (squared_residual/total_variance)
# return r_squared
else:
return None
def update_input(self, treatment_value, control_value, target_units):
self._control_value = control_value
self._treatment_value = treatment_value
self._target_units = target_units
@staticmethod
def is_bootstrap_parameter_changed(bootstrap_estimates_params, given_params):
"""Check whether parameters of the bootstrap have changed.
This is an efficiency method that checks if fresh resampling of the bootstrap samples is required.
Returns True if parameters have changed and resampling should be done again.
:param bootstrap_estimates_params: A dictionary of parameters for the current bootstrap samples
:param given_params: A dictionary of parameters passed by the user
:returns: A binary flag denoting whether the parameters are different.
"""
is_any_parameter_changed = False
for prm, val in bootstrap_estimates_params.items():
given_val = given_params.get(prm, None)
if given_val is not None and given_val != val:
is_any_parameter_changed = True
break
return is_any_parameter_changed
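    # e.g. (illustrative): with stored params {"num_simulations": 100,
    # "sample_size_fraction": 1} and user-given {"num_simulations": 200},
    # this returns True and triggers fresh resampling; parameters left as
    # None by the caller are treated as unchanged.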
def target_units_tostr(self):
s = ""
if type(self._target_units) is str:
s += self._target_units
elif callable(self._target_units):
s += "Data subset defined by a function"
elif isinstance(self._target_units, pd.DataFrame):
s += "Data subset provided as a data frame"
return s
def signif_results_tostr(self, signif_results):
s = ""
pval = signif_results["p_value"]
if type(pval) is tuple:
s += "[{0}, {1}]".format(pval[0], pval[1])
else:
s += "{0}".format(pval)
return s
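# Illustrative sketch of the extension contract (simplified, hypothetical
# estimator; not part of dowhy): a subclass implements _estimate_effect() and
# returns a CausalEstimate, inheriting bootstrap confidence intervals and
# significance testing from CausalEstimator.
#
#     class NaiveDifferenceEstimator(CausalEstimator):
#         """Assumes a single binary treatment column."""
#
#         def _estimate_effect(self):
#             t = self._treatment_name[0]
#             treated = self._data[self._data[t] == 1]
#             control = self._data[self._data[t] == 0]
#             est = treated[self._outcome_name].mean() - control[self._outcome_name].mean()
#             return CausalEstimate(
#                 est, self._target_estimand, None,
#                 control_value=self._control_value,
#                 treatment_value=self._treatment_value,
#             )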
class CausalEstimate:
"""Class for the estimate object that every causal estimator returns"""
def __init__(
self,
estimate,
target_estimand,
realized_estimand_expr,
control_value,
treatment_value,
conditional_estimates=None,
**kwargs,
):
self.value = estimate
self.target_estimand = target_estimand
self.realized_estimand_expr = realized_estimand_expr
self.control_value = control_value
self.treatment_value = treatment_value
self.conditional_estimates = conditional_estimates
self.params = kwargs
if self.params is not None:
for key, value in self.params.items():
setattr(self, key, value)
self.effect_strength = None
def add_estimator(self, estimator_instance):
self.estimator = estimator_instance
def add_effect_strength(self, strength_dict):
self.effect_strength = strength_dict
def add_params(self, **kwargs):
self.params.update(kwargs)
def get_confidence_intervals(self, confidence_level=None, method=None, **kwargs):
"""Get confidence intervals of the obtained estimate.
By default, this is done with the help of bootstrapped confidence intervals
but can be overridden if the specific estimator implements other methods of estimating confidence intervals.
If the method provided is not bootstrap, this function calls the implementation of the specific estimator.
:param method: Method for estimating confidence intervals.
:param confidence_level: The confidence level of the confidence intervals of the estimate.
:param kwargs: Other optional args to be passed to the CI method.
:returns: The obtained confidence interval.
"""
confidence_intervals = self.estimator.estimate_confidence_intervals(
estimate_value=self.value, confidence_level=confidence_level, method=method, **kwargs
)
return confidence_intervals
def get_standard_error(self, method=None, **kwargs):
"""Get standard error of the obtained estimate.
By default, this is done with the help of bootstrapped standard errors
but can be overridden if the specific estimator implements other methods of estimating standard error.
If the method provided is not bootstrap, this function calls the implementation of the specific estimator.
:param method: Method for computing the standard error.
:param kwargs: Other optional parameters to be passed to the estimating method.
:returns: Standard error of the causal estimate.
"""
std_error = self.estimator.estimate_std_error(method=method, **kwargs)
return std_error
def test_stat_significance(self, method=None, **kwargs):
"""Test statistical significance of the estimate obtained.
By default, uses resampling to create a non-parametric significance test.
Individual child estimators can implement different methods.
If the method name is different from "bootstrap", this function calls the
implementation of the child estimator.
:param method: Method for checking statistical significance
:param kwargs: Other optional parameters to be passed to the estimating method.
:returns: p-value from the significance test
"""
signif_results = self.estimator.test_significance(self.value, method=method, **kwargs)
return {"p_value": signif_results["p_value"]}
def estimate_conditional_effects(
self, effect_modifiers=None, num_quantiles=CausalEstimator.NUM_QUANTILES_TO_DISCRETIZE_CONT_COLS
):
"""Estimate treatment effect conditioned on given variables.
If a numeric effect modifier is provided, it is discretized into quantile bins. If you would like a custom discretization, you can do so yourself: create a new column containing the discretized effect modifier and then include that column's name in the effect_modifier_names argument.
:param effect_modifiers: Names of effect modifier variables over which the conditional effects will be estimated. If not provided, defaults to the effect modifiers specified during creation of the CausalEstimator object.
:param num_quantiles: The number of quantiles into which a numeric effect modifier variable is discretized. Does not affect any categorical effect modifiers.
:returns: A (multi-index) dataframe that provides separate effects for each value of the (discretized) effect modifiers.
"""
return self.estimator._estimate_conditional_effects(
self.estimator._estimate_effect_fn, effect_modifiers, num_quantiles
)
def interpret(self, method_name=None, **kwargs):
"""Interpret the causal estimate.
:param method_name: Method used (string) or a list of methods. If None, then the default for the specific estimator is used.
        :param kwargs: Optional parameters that are directly passed to the interpreter method.
:returns: None
"""
if method_name is None:
method_name = self.estimator.interpret_method
method_name_arr = parse_state(method_name)
for method in method_name_arr:
interpreter = interpreters.get_class_object(method)
interpreter(self, **kwargs).interpret()
def __str__(self):
s = "*** Causal Estimate ***\n"
# No estimand was identified (identification failed)
if self.target_estimand is None:
return "Estimation failed! No relevant identified estimand available for this estimation method."
s += "\n## Identified estimand\n{0}".format(self.target_estimand.__str__(only_target_estimand=True))
s += "\n## Realized estimand\n{0}".format(self.realized_estimand_expr)
if hasattr(self, "estimator"):
s += "\nTarget units: {0}\n".format(self.estimator.target_units_tostr())
s += "\n## Estimate\n"
s += "Mean value: {0}\n".format(self.value)
s += ""
if hasattr(self, "cate_estimates"):
s += "Effect estimates: {0}\n".format(self.cate_estimates)
if hasattr(self, "estimator"):
if self.estimator._significance_test:
s += "p-value: {0}\n".format(self.estimator.signif_results_tostr(self.test_stat_significance()))
if self.estimator._confidence_intervals:
s += "{0}% confidence interval: {1}\n".format(
100 * self.estimator.confidence_level, self.get_confidence_intervals()
)
if self.conditional_estimates is not None:
s += "### Conditional Estimates\n"
s += str(self.conditional_estimates)
if self.effect_strength is not None:
s += "\n## Effect Strength\n"
s += "Change in outcome attributable to treatment: {}\n".format(self.effect_strength["fraction-effect"])
# s += "Variance in outcome explained by treatment: {}\n".format(self.effect_strength["r-squared"])
return s
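# Illustrative sketch of consuming a CausalEstimate (hedged; `model` and
# `identified_estimand` are assumed to come from the usual CausalModel flow):
#
#     estimate = model.estimate_effect(
#         identified_estimand,
#         method_name="backdoor.linear_regression",
#         test_significance=True,
#     )
#     print(estimate.value)
#     print(estimate.test_stat_significance())
#     print(estimate.get_confidence_intervals(confidence_level=0.90))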
class RealizedEstimand(object):
def __init__(self, identified_estimand, estimator_name):
self.treatment_variable = identified_estimand.treatment_variable
self.outcome_variable = identified_estimand.outcome_variable
self.backdoor_variables = identified_estimand.get_backdoor_variables()
self.instrumental_variables = identified_estimand.instrumental_variables
self.estimand_type = identified_estimand.estimand_type
self.estimand_expression = None
self.assumptions = None
self.estimator_name = estimator_name
def update_assumptions(self, estimator_assumptions):
self.assumptions = estimator_assumptions
def update_estimand_expression(self, estimand_expression):
self.estimand_expression = estimand_expression
def __str__(self):
s = "Realized estimand: {0}\n".format(self.estimator_name)
s += "Realized estimand type: {0}\n".format(self.estimand_type)
s += "Estimand expression:\n{0}\n".format(sp.pretty(self.estimand_expression))
j = 1
for ass_name, ass_str in self.assumptions.items():
s += "Estimand assumption {0}, {1}: {2}\n".format(j, ass_name, ass_str)
j += 1
return s
| import logging
from collections import namedtuple
from typing import Dict, List, Optional, Union
import numpy as np
import pandas as pd
import sympy as sp
from sklearn.utils import resample
import dowhy.interpreters as interpreters
from dowhy import causal_estimators
from dowhy.causal_graph import CausalGraph
from dowhy.causal_identifier.identified_estimand import IdentifiedEstimand
from dowhy.utils.api import parse_state
logger = logging.getLogger(__name__)
class CausalEstimator:
"""Base class for an estimator of causal effect.
Subclasses implement different estimation methods. All estimation methods are in the package "dowhy.causal_estimators"
"""
# The default number of simulations for statistical testing
DEFAULT_NUMBER_OF_SIMULATIONS_STAT_TEST = 1000
# The default number of simulations to obtain confidence intervals
DEFAULT_NUMBER_OF_SIMULATIONS_CI = 100
# The portion of the total size that should be taken each time to find the confidence intervals
# 1 is the recommended value
# https://ocw.mit.edu/courses/mathematics/18-05-introduction-to-probability-and-statistics-spring-2014/readings/MIT18_05S14_Reading24.pdf
# https://projecteuclid.org/download/pdf_1/euclid.ss/1032280214
DEFAULT_SAMPLE_SIZE_FRACTION = 1
# The default Confidence Level
DEFAULT_CONFIDENCE_LEVEL = 0.95
# Number of quantiles to discretize continuous columns, for applying groupby
NUM_QUANTILES_TO_DISCRETIZE_CONT_COLS = 5
# Prefix to add to temporary categorical variables created after discretization
TEMP_CAT_COLUMN_PREFIX = "__categorical__"
    DEFAULT_NOTIMPLEMENTEDERROR_MSG = "not yet implemented for {0}. If you would like this to be implemented in the next version, please raise an issue at https://github.com/microsoft/dowhy/issues"
BootstrapEstimates = namedtuple("BootstrapEstimates", ["estimates", "params"])
DEFAULT_INTERPRET_METHOD = ["textual_effect_interpreter"]
# std args to be removed from locals() before being passed to args_dict
_STD_INIT_ARGS = ("self", "__class__", "args", "kwargs")
def __init__(
self,
data,
identified_estimand,
treatment,
outcome,
control_value=0,
treatment_value=1,
test_significance=False,
evaluate_effect_strength=False,
confidence_intervals=False,
target_units=None,
effect_modifiers=None,
num_null_simulations=DEFAULT_NUMBER_OF_SIMULATIONS_STAT_TEST,
num_simulations=DEFAULT_NUMBER_OF_SIMULATIONS_CI,
sample_size_fraction=DEFAULT_SAMPLE_SIZE_FRACTION,
confidence_level=DEFAULT_CONFIDENCE_LEVEL,
need_conditional_estimates="auto",
num_quantiles_to_discretize_cont_cols=NUM_QUANTILES_TO_DISCRETIZE_CONT_COLS,
**kwargs,
):
"""Initializes an estimator with data and names of relevant variables.
This method is called from the constructors of its child classes.
:param data: data frame containing the data
:param identified_estimand: probability expression
representing the target identified estimand to estimate.
:param treatment: name of the treatment variable
:param outcome: name of the outcome variable
:param control_value: Value of the treatment in the control group, for effect estimation. If treatment is multi-variate, this can be a list.
:param treatment_value: Value of the treatment in the treated group, for effect estimation. If treatment is multi-variate, this can be a list.
:param test_significance: Binary flag or a string indicating whether to test significance and by which method. All estimators support test_significance="bootstrap" that estimates a p-value for the obtained estimate using the bootstrap method. Individual estimators can override this to support custom testing methods. The bootstrap method supports an optional parameter, num_null_simulations. If False, no testing is done. If True, significance of the estimate is tested using the custom method if available, otherwise by bootstrap.
:param evaluate_effect_strength: (Experimental) whether to evaluate the strength of effect
:param confidence_intervals: Binary flag or a string indicating whether the confidence intervals should be computed and which method should be used. All methods support estimation of confidence intervals using the bootstrap method by using the parameter confidence_intervals="bootstrap". The bootstrap method takes in two arguments (num_simulations and sample_size_fraction) that can be optionally specified in the params dictionary. Estimators may also override this to implement their own confidence interval method. If this parameter is False, no confidence intervals are computed. If True, confidence intervals are computed by the estimator's specific method if available, otherwise through bootstrap.
:param target_units: The units for which the treatment effect should be estimated. This can be a string for common specifications of target units (namely, "ate", "att" and "atc"). It can also be a lambda function that can be used as an index for the data (pandas DataFrame). Alternatively, it can be a new DataFrame that contains values of the effect_modifiers and effect will be estimated only for this new data.
:param effect_modifiers: Variables on which to compute separate
effects, or return a heterogeneous effect function. Not all
methods support this currently.
:param num_null_simulations: The number of simulations for testing the
statistical significance of the estimator
:param num_simulations: The number of simulations for finding the
            confidence interval (and/or standard error) for an estimate
        :param sample_size_fraction: The fraction of the data to be resampled by
            the bootstrap estimator
:param confidence_level: The confidence level of the confidence
interval estimate
:param need_conditional_estimates: Boolean flag indicating whether
conditional estimates should be computed. Defaults to True if
there are effect modifiers in the graph
:param num_quantiles_to_discretize_cont_cols: The number of quantiles
into which a numeric effect modifier is split, to enable
estimation of conditional treatment effect over it.
:param kwargs: (optional) Additional estimator-specific parameters
:returns: an instance of the estimator class.
"""
self._data = data
self._target_estimand = identified_estimand
# Currently estimation methods only support univariate treatment and outcome
self._treatment_name = treatment
self._outcome_name = outcome[0] # assuming one-dimensional outcome
self._control_value = control_value
self._treatment_value = treatment_value
self._significance_test = test_significance
self._effect_strength_eval = evaluate_effect_strength
self._target_units = target_units
self._effect_modifier_names = effect_modifiers
self._confidence_intervals = confidence_intervals
self._bootstrap_estimates = None # for confidence intervals and std error
self._bootstrap_null_estimates = None # for significance test
self._effect_modifiers = None
self.method_params = kwargs
# Setting the default interpret method
self.interpret_method = CausalEstimator.DEFAULT_INTERPRET_METHOD
self.logger = logging.getLogger(__name__)
# Setting treatment and outcome values
if self._data is not None:
self._treatment = self._data[self._treatment_name]
self._outcome = self._data[self._outcome_name]
if self._effect_modifier_names:
# only add the observed nodes
self._effect_modifier_names = [
cname for cname in self._effect_modifier_names if cname in self._data.columns
]
if len(self._effect_modifier_names) > 0:
self._effect_modifiers = self._data[self._effect_modifier_names]
self._effect_modifiers = pd.get_dummies(self._effect_modifiers, drop_first=True)
self.logger.debug("Effect modifiers: " + ",".join(self._effect_modifier_names))
else:
self._effect_modifier_names = None
# Check if some parameters were set, otherwise set to default values
self.num_null_simulations = num_null_simulations
self.num_simulations = num_simulations
self.sample_size_fraction = sample_size_fraction
self.confidence_level = confidence_level
self.num_quantiles_to_discretize_cont_cols = num_quantiles_to_discretize_cont_cols
# Estimate conditional estimates by default
self.need_conditional_estimates = (
need_conditional_estimates if need_conditional_estimates != "auto" else bool(self._effect_modifier_names)
)
@staticmethod
def get_estimator_object(new_data, identified_estimand, estimate):
"""Create a new estimator of the same type as the one passed in the estimate argument.
Creates a new object with new_data and the identified_estimand
:param new_data: np.ndarray, pd.Series, pd.DataFrame
The newly assigned data on which the estimator should run
:param identified_estimand: IdentifiedEstimand
An instance of the identified estimand class that provides the information with
            respect to which causal pathways are employed when the treatment affects the outcome
:param estimate: CausalEstimate
It is an already existing estimate whose properties we wish to replicate
:returns: An instance of the same estimator class that had generated the given estimate.
"""
estimator_class = estimate.params["estimator_class"]
new_estimator = estimator_class(
new_data,
identified_estimand,
identified_estimand.treatment_variable,
identified_estimand.outcome_variable,
# names of treatment and outcome
control_value=estimate.control_value,
treatment_value=estimate.treatment_value,
test_significance=False,
evaluate_effect_strength=False,
confidence_intervals=estimate.params["confidence_intervals"],
target_units=estimate.params["target_units"],
effect_modifiers=estimate.params["effect_modifiers"],
**estimate.params["method_params"] if estimate.params["method_params"] is not None else {},
)
return new_estimator
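    # Illustrative usage sketch (kept as comments so the surrounding module
    # still parses; `df`, `identified_estimand` and `estimate` are assumed to
    # come from an earlier analysis). A refuter can rebuild an estimator of the
    # same type on new data like so:
    #
    #     new_estimator = CausalEstimator.get_estimator_object(
    #         df, identified_estimand, estimate
    #     )
    #     new_estimate = new_estimator.estimate_effect()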
    def _estimate_effect(self):
        """This method is to be overridden by the child classes, so that they can run the estimation technique of their choice"""
raise NotImplementedError(
("Main estimation method is " + CausalEstimator.DEFAULT_NOTIMPLEMENTEDERROR_MSG).format(self.__class__)
)
def estimate_effect(self):
"""Base estimation method that calls the estimate_effect method of its calling subclass.
Can optionally also test significance and estimate effect strength for any returned estimate.
:param self: object instance of class Estimator
        :returns: A CausalEstimate instance that contains point estimates of average and conditional effects. Based on the parameters provided, it optionally includes confidence intervals, standard errors, statistical significance and other statistical parameters.
"""
est = self._estimate_effect()
est.add_estimator(self)
if self._significance_test:
self.test_significance(est.value, method=self._significance_test)
if self._confidence_intervals:
self.estimate_confidence_intervals(
est.value, confidence_level=self.confidence_level, method=self._confidence_intervals
)
if self._effect_strength_eval:
effect_strength_dict = self.evaluate_effect_strength(est)
est.add_effect_strength(effect_strength_dict)
return est
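    # Illustrative sketch (as comments; not part of the class). In the classic
    # DoWhy API this method is typically reached through CausalModel, roughly:
    #
    #     from dowhy import CausalModel
    #     model = CausalModel(data=df, treatment="t", outcome="y", graph=graph_str)
    #     identified_estimand = model.identify_effect()
    #     estimate = model.estimate_effect(
    #         identified_estimand, method_name="backdoor.linear_regression"
    #     )
    #     print(estimate.value)
    #
    # Here `df` and `graph_str` are assumed to be a pandas DataFrame and a
    # DOT/GML graph string, respectively.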
def estimate_effect_naive(self):
# TODO Only works for binary treatment
df_withtreatment = self._data.loc[self._data[self._treatment_name] == 1]
df_notreatment = self._data.loc[self._data[self._treatment_name] == 0]
est = np.mean(df_withtreatment[self._outcome_name]) - np.mean(df_notreatment[self._outcome_name])
return CausalEstimate(est, None, None, control_value=0, treatment_value=1)
def _estimate_effect_fn(self, data_df):
"""Function used in conditional effect estimation. This function is to be overridden by each child estimator.
The overridden function should take in a dataframe as input and return the estimate for that data.
"""
raise NotImplementedError(
("Conditional treatment effects are " + CausalEstimator.DEFAULT_NOTIMPLEMENTEDERROR_MSG).format(
self.__class__
)
)
def _estimate_conditional_effects(self, estimate_effect_fn, effect_modifier_names=None, num_quantiles=None):
"""Estimate conditional treatment effects. Common method for all estimators that utilizes a specific estimate_effect_fn implemented by each child estimator.
If a numeric effect modifier is provided, it is discretized into quantile bins. If you would like a custom discretization, you can do so yourself: create a new column containing the discretized effect modifier and then include that column's name in the effect_modifier_names argument.
:param estimate_effect_fn: Function that has a single parameter (a data frame) and returns the treatment effect estimate on that data.
:param effect_modifier_names: Names of effect modifier variables over which the conditional effects will be estimated. If not provided, defaults to the effect modifiers specified during creation of the CausalEstimator object.
:param num_quantiles: The number of quantiles into which a numeric effect modifier variable is discretized. Does not affect any categorical effect modifiers.
:returns: A (multi-index) dataframe that provides separate effects for each value of the (discretized) effect modifiers.
"""
# Defaulting to class default values if parameters are not provided
if effect_modifier_names is None:
effect_modifier_names = self._effect_modifier_names
if num_quantiles is None:
num_quantiles = self.num_quantiles_to_discretize_cont_cols
# Checking that there is at least one effect modifier
if not effect_modifier_names:
raise ValueError("At least one effect modifier should be specified to compute conditional effects.")
# Making sure that effect_modifier_names is a list
effect_modifier_names = parse_state(effect_modifier_names)
if not all(em in self._effect_modifier_names for em in effect_modifier_names):
            self.logger.warning(
"At least one of the provided effect modifiers was not included while fitting the estimator. You may get incorrect results. To resolve, fit the estimator again by providing the updated effect modifiers in estimate_effect()."
)
# Making a copy since we are going to be changing effect modifier names
effect_modifier_names = effect_modifier_names.copy()
prefix = CausalEstimator.TEMP_CAT_COLUMN_PREFIX
# For every numeric effect modifier, adding a temp categorical column
for i in range(len(effect_modifier_names)):
em = effect_modifier_names[i]
if pd.api.types.is_numeric_dtype(self._data[em].dtypes):
self._data[prefix + str(em)] = pd.qcut(self._data[em], num_quantiles, duplicates="drop")
effect_modifier_names[i] = prefix + str(em)
# Grouping by effect modifiers and computing effect separately
by_effect_mods = self._data.groupby(effect_modifier_names)
cond_est_fn = lambda x: self._do(self._treatment_value, x) - self._do(self._control_value, x)
conditional_estimates = by_effect_mods.apply(estimate_effect_fn)
# Deleting the temporary categorical columns
for em in effect_modifier_names:
if em.startswith(prefix):
self._data.pop(em)
return conditional_estimates
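    # Illustrative sketch of the custom discretization mentioned in the
    # docstring above (as comments; column names are hypothetical). Pre-bin the
    # numeric modifier yourself and pass the new column's name; ideally the new
    # column is also supplied as an effect modifier when fitting the estimator:
    #
    #     df["age_bucket"] = pd.qcut(df["age"], q=4, duplicates="drop")
    #     conditional_effects = estimate.estimate_conditional_effects(
    #         effect_modifiers=["age_bucket"]
    #     )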
def _do(self, x, data_df=None):
raise NotImplementedError(
("Do-operator is " + CausalEstimator.DEFAULT_NOTIMPLEMENTEDERROR_MSG).format(self.__class__)
)
def do(self, x, data_df=None):
"""Method that implements the do-operator.
Given a value x for the treatment, returns the expected value of the outcome when the treatment is intervened to a value x.
:param x: Value of the treatment
:param data_df: Data on which the do-operator is to be applied.
:returns: Value of the outcome when treatment is intervened/set to x.
"""
est = self._do(x, data_df)
return est
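    # Illustrative sketch (as comments): for estimators that implement `_do`,
    # an average treatment effect can be read off as a difference of two
    # interventional means, where `estimator` is assumed to be an already
    # constructed estimator that supports the do-operator:
    #
    #     ate = estimator.do(1) - estimator.do(0)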
def construct_symbolic_estimator(self, estimand):
        raise NotImplementedError(
            ("Symbolic estimator string is " + CausalEstimator.DEFAULT_NOTIMPLEMENTEDERROR_MSG).format(self.__class__)
        )
def _generate_bootstrap_estimates(self, num_bootstrap_simulations, sample_size_fraction):
"""Helper function to generate causal estimates over bootstrapped samples.
:param num_bootstrap_simulations: Number of simulations for the bootstrap method.
:param sample_size_fraction: Fraction of the dataset to be resampled.
:returns: A collections.namedtuple containing a list of bootstrapped estimates and a dictionary containing parameters used for the bootstrap.
"""
# The array that stores the results of all estimations
simulation_results = np.zeros(num_bootstrap_simulations)
        # Compute the sample size as a proportion of the population size
sample_size = int(sample_size_fraction * len(self._data))
if sample_size > len(self._data):
self.logger.warning("WARN: The sample size is greater than the data being sampled")
self.logger.info("INFO: The sample size: {}".format(sample_size))
self.logger.info("INFO: The number of simulations: {}".format(num_bootstrap_simulations))
# Perform the set number of simulations
for index in range(num_bootstrap_simulations):
new_data = resample(self._data, n_samples=sample_size)
new_estimator = type(self)(
new_data,
self._target_estimand,
self._target_estimand.treatment_variable,
self._target_estimand.outcome_variable,
# names of treatment and outcome
treatment_value=self._treatment_value,
control_value=self._control_value,
test_significance=False,
evaluate_effect_strength=False,
confidence_intervals=False,
target_units=self._target_units,
effect_modifiers=self._effect_modifier_names,
**self.method_params,
)
new_effect = new_estimator.estimate_effect()
simulation_results[index] = new_effect.value
estimates = CausalEstimator.BootstrapEstimates(
simulation_results,
{"num_simulations": num_bootstrap_simulations, "sample_size_fraction": sample_size_fraction},
)
return estimates
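    # Standalone sketch of the resampling idea used above (as comments, with
    # made-up data; `np`, `pd` and `resample` are already imported in this
    # module):
    #
    #     df = pd.DataFrame({"t": np.random.binomial(1, 0.5, 1000)})
    #     df["y"] = 2 * df["t"] + np.random.normal(size=1000)
    #     boot = np.zeros(100)
    #     for i in range(100):
    #         s = resample(df, n_samples=len(df))
    #         boot[i] = s.loc[s.t == 1, "y"].mean() - s.loc[s.t == 0, "y"].mean()
    #     # `boot` now plays the role of BootstrapEstimates.estimates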
def _estimate_confidence_intervals_with_bootstrap(
self, estimate_value, confidence_level=None, num_simulations=None, sample_size_fraction=None
):
"""
Method to compute confidence interval using bootstrapped sampling.
:param estimate_value: obtained estimate's value
:param confidence_level: The level for which to compute CI (e.g., 95% confidence level translates to confidence_level=0.95)
:param num_simulations: The number of simulations to be performed to get the bootstrap confidence intervals.
:param sample_size_fraction: The fraction of the dataset to be resampled.
:returns: confidence interval at the specified level.
For more details on bootstrap or resampling statistics, refer to the following links:
https://ocw.mit.edu/courses/mathematics/18-05-introduction-to-probability-and-statistics-spring-2014/readings/MIT18_05S14_Reading24.pdf
https://projecteuclid.org/download/pdf_1/euclid.ss/1032280214
"""
# Using class default parameters if not specified
if num_simulations is None:
num_simulations = self.num_simulations
if sample_size_fraction is None:
sample_size_fraction = self.sample_size_fraction
# Checking if bootstrap_estimates are already computed
if self._bootstrap_estimates is None:
self._bootstrap_estimates = self._generate_bootstrap_estimates(num_simulations, sample_size_fraction)
elif CausalEstimator.is_bootstrap_parameter_changed(self._bootstrap_estimates.params, locals()):
# Checked if any parameter is changed from the previous std error estimate
self._bootstrap_estimates = self._generate_bootstrap_estimates(num_simulations, sample_size_fraction)
# Now use the data obtained from the simulations to get the value of the confidence estimates
bootstrap_estimates = self._bootstrap_estimates.estimates
# Get the variations of each bootstrap estimate and sort
bootstrap_variations = [bootstrap_estimate - estimate_value for bootstrap_estimate in bootstrap_estimates]
sorted_bootstrap_variations = np.sort(bootstrap_variations)
        # Now we take the (1 - p)th and the (p)th variations, where p is the chosen confidence level
upper_bound_index = int((1 - confidence_level) * len(sorted_bootstrap_variations))
lower_bound_index = int(confidence_level * len(sorted_bootstrap_variations))
# Get the lower and upper bounds by subtracting the variations from the estimate
lower_bound = estimate_value - sorted_bootstrap_variations[lower_bound_index]
upper_bound = estimate_value - sorted_bootstrap_variations[upper_bound_index]
return lower_bound, upper_bound
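    # Worked sketch of the interval computation above (as comments). With
    # estimate_value = 2.0, confidence_level = 0.95 and 100 sorted bootstrap
    # variations, the indices are int(0.05 * 100) = 5 and int(0.95 * 100) = 95,
    # so the bounds become
    #
    #     lower_bound = 2.0 - sorted_bootstrap_variations[95]
    #     upper_bound = 2.0 - sorted_bootstrap_variations[5]
    #
    # which is the "basic" (pivot) bootstrap interval.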
def _estimate_confidence_intervals(self, confidence_level=None, method=None, **kwargs):
"""
        This method is to be overridden by the child classes, so that they
can run a confidence interval estimation method suited to the specific
causal estimator.
"""
raise NotImplementedError(
(
"This method for estimating confidence intervals is "
+ CausalEstimator.DEFAULT_NOTIMPLEMENTEDERROR_MSG
+ " Meanwhile, you can try the bootstrap method (method='bootstrap') to estimate confidence intervals."
).format(self.__class__)
)
def estimate_confidence_intervals(self, estimate_value, confidence_level=None, method=None, **kwargs):
"""Find the confidence intervals corresponding to any estimator
By default, this is done with the help of bootstrapped confidence intervals
but can be overridden if the specific estimator implements other methods of estimating confidence intervals.
If the method provided is not bootstrap, this function calls the implementation of the specific estimator.
:param estimate_value: obtained estimate's value
:param method: Method for estimating confidence intervals.
:param confidence_level: The confidence level of the confidence intervals of the estimate.
:param kwargs: Other optional args to be passed to the CI method.
:returns: The obtained confidence interval.
"""
if method is None:
if self._confidence_intervals:
method = self._confidence_intervals # this is either True or methodname
else:
method = "default"
confidence_intervals = None
if confidence_level is None:
confidence_level = self.confidence_level
if method == "default" or method is True: # user has not provided any method
try:
confidence_intervals = self._estimate_confidence_intervals(confidence_level, method=method, **kwargs)
except NotImplementedError:
confidence_intervals = self._estimate_confidence_intervals_with_bootstrap(
estimate_value, confidence_level, **kwargs
)
else:
if method == "bootstrap":
confidence_intervals = self._estimate_confidence_intervals_with_bootstrap(
estimate_value, confidence_level, **kwargs
)
else:
confidence_intervals = self._estimate_confidence_intervals(confidence_level, method=method, **kwargs)
return confidence_intervals
def _estimate_std_error_with_bootstrap(self, num_simulations=None, sample_size_fraction=None):
"""Compute standard error using the bootstrap method. Standard error
and confidence intervals use the same parameter num_simulations for
the number of bootstrap simulations.
:param num_simulations: Number of bootstrapped samples.
:param sample_size_fraction: Fraction of data to be resampled.
:returns: Standard error of the obtained estimate.
"""
# Use existing params, if new user defined params are not present
if num_simulations is None:
num_simulations = self.num_simulations
if sample_size_fraction is None:
sample_size_fraction = self.sample_size_fraction
# Checking if bootstrap_estimates are already computed
if self._bootstrap_estimates is None:
self._bootstrap_estimates = self._generate_bootstrap_estimates(num_simulations, sample_size_fraction)
elif CausalEstimator.is_bootstrap_parameter_changed(self._bootstrap_estimates.params, locals()):
# Check if any parameter is changed from the previous std error estimate
self._bootstrap_estimates = self._generate_bootstrap_estimates(num_simulations, sample_size_fraction)
std_error = np.std(self._bootstrap_estimates.estimates)
return std_error
def _estimate_std_error(self, method=None, **kwargs):
"""
        This method is to be overridden by the child classes, so that they
can run a standard error estimation method suited to the specific
causal estimator.
"""
raise NotImplementedError(
(
"This method for estimating standard errors is "
+ CausalEstimator.DEFAULT_NOTIMPLEMENTEDERROR_MSG
+ " Meanwhile, you can try the bootstrap method (method='bootstrap') to estimate standard errors."
).format(self.__class__)
)
def estimate_std_error(self, method=None, **kwargs):
"""Compute standard error of an obtained causal estimate.
:param method: Method for computing the standard error.
:param kwargs: Other optional parameters to be passed to the estimating method.
:returns: Standard error of the causal estimate.
"""
if method is None:
if self._confidence_intervals:
method = self._confidence_intervals
else:
method = "default"
std_error = None
if method == "default" or method is True: # user has not provided any method
try:
std_error = self._estimate_std_error(method, **kwargs)
except NotImplementedError:
std_error = self._estimate_std_error_with_bootstrap(**kwargs)
else:
if method == "bootstrap":
std_error = self._estimate_std_error_with_bootstrap(**kwargs)
else:
std_error = self._estimate_std_error(method, **kwargs)
return std_error
def _test_significance_with_bootstrap(self, estimate_value, num_null_simulations=None):
"""Test statistical significance of an estimate using the bootstrap method.
:param estimate_value: Obtained estimate's value
:param num_null_simulations: Number of simulations for the null hypothesis
:returns: p-value of the statistical significance test.
"""
# Use existing params, if new user defined params are not present
if num_null_simulations is None:
num_null_simulations = self.num_null_simulations
do_retest = self._bootstrap_null_estimates is None or CausalEstimator.is_bootstrap_parameter_changed(
self._bootstrap_null_estimates.params, locals()
)
if do_retest:
null_estimates = np.zeros(num_null_simulations)
for i in range(num_null_simulations):
new_outcome = np.random.permutation(self._outcome)
new_data = self._data.assign(dummy_outcome=new_outcome)
# self._outcome = self._data["dummy_outcome"]
new_estimator = type(self)(
new_data,
self._target_estimand,
self._target_estimand.treatment_variable,
("dummy_outcome",),
test_significance=False,
evaluate_effect_strength=False,
confidence_intervals=False,
target_units=self._target_units,
effect_modifiers=self._effect_modifier_names,
**self.method_params,
)
new_effect = new_estimator.estimate_effect()
null_estimates[i] = new_effect.value
self._bootstrap_null_estimates = CausalEstimator.BootstrapEstimates(
null_estimates, {"num_null_simulations": num_null_simulations, "sample_size_fraction": 1}
)
# Processing the null hypothesis estimates
sorted_null_estimates = np.sort(self._bootstrap_null_estimates.estimates)
self.logger.debug("Null estimates: {0}".format(sorted_null_estimates))
median_estimate = sorted_null_estimates[int(num_null_simulations / 2)]
# Doing a two-sided test
if estimate_value > median_estimate:
# Being conservative with the p-value reported
estimate_index = np.searchsorted(sorted_null_estimates, estimate_value, side="left")
p_value = 1 - (estimate_index / num_null_simulations)
        else:  # estimate_value <= median_estimate
# Being conservative with the p-value reported
estimate_index = np.searchsorted(sorted_null_estimates, estimate_value, side="right")
p_value = estimate_index / num_null_simulations
# If the estimate_index is 0, it depends on the number of simulations
if p_value == 0:
p_value = (0, 1 / len(sorted_null_estimates)) # a tuple determining the range.
elif p_value == 1:
p_value = (1 - 1 / len(sorted_null_estimates), 1)
signif_dict = {"p_value": p_value}
return signif_dict
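    # Standalone sketch of the permutation-style p-value above (as comments,
    # with made-up numbers; `np` is already imported):
    #
    #     null = np.sort(np.random.normal(size=1000))  # stand-in null estimates
    #     est = 2.5
    #     if est > null[500]:  # above the null median: right tail
    #         p = 1 - np.searchsorted(null, est, side="left") / 1000
    #     else:                # at or below the null median: left tail
    #         p = np.searchsorted(null, est, side="right") / 1000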
def _test_significance(self, estimate_value, method=None, **kwargs):
"""
        This method is to be overridden by the child classes, so that they
can run a significance test suited to the specific
causal estimator.
"""
raise NotImplementedError(
(
"This method for testing statistical significance is "
+ CausalEstimator.DEFAULT_NOTIMPLEMENTEDERROR_MSG
+ " Meanwhile, you can try the bootstrap method (method='bootstrap') to test statistical significance."
).format(self.__class__)
)
def test_significance(self, estimate_value, method=None, **kwargs):
"""Test statistical significance of obtained estimate.
By default, uses resampling to create a non-parametric significance test.
A general procedure. Individual child estimators can implement different methods.
If the method name is different from "bootstrap", this function calls the
implementation of the child estimator.
:param self: object instance of class Estimator
:param estimate_value: obtained estimate's value
:param method: Method for checking statistical significance
:returns: p-value from the significance test
"""
if method is None:
if self._significance_test:
method = self._significance_test # this is either True or methodname
else:
method = "default"
signif_dict = None
if method == "default" or method is True: # user has not provided any method
try:
signif_dict = self._test_significance(estimate_value, method, **kwargs)
except NotImplementedError:
signif_dict = self._test_significance_with_bootstrap(estimate_value, **kwargs)
else:
if method == "bootstrap":
signif_dict = self._test_significance_with_bootstrap(estimate_value, **kwargs)
else:
signif_dict = self._test_significance(estimate_value, method, **kwargs)
return signif_dict
def evaluate_effect_strength(self, estimate):
fraction_effect_explained = self._evaluate_effect_strength(estimate, method="fraction-effect")
# Need to test r-squared before supporting
# effect_r_squared = self._evaluate_effect_strength(estimate, method="r-squared")
strength_dict = {
"fraction-effect": fraction_effect_explained
# 'r-squared': effect_r_squared
}
return strength_dict
def _evaluate_effect_strength(self, estimate, method="fraction-effect"):
supported_methods = ["fraction-effect"]
if method not in supported_methods:
raise NotImplementedError("This method is not supported for evaluating effect strength")
if method == "fraction-effect":
naive_obs_estimate = self.estimate_effect_naive()
            self.logger.debug("{0} {1}".format(estimate.value, naive_obs_estimate.value))
fraction_effect_explained = estimate.value / naive_obs_estimate.value
return fraction_effect_explained
# elif method == "r-squared":
# outcome_mean = np.mean(self._outcome)
# total_variance = np.sum(np.square(self._outcome - outcome_mean))
# Assuming a linear model with one variable: the treatment
# Currently only works for continuous y
# causal_model = outcome_mean + estimate.value*self._treatment
# squared_residual = np.sum(np.square(self._outcome - causal_model))
# r_squared = 1 - (squared_residual/total_variance)
# return r_squared
else:
return None
def update_input(self, treatment_value, control_value, target_units):
self._control_value = control_value
self._treatment_value = treatment_value
self._target_units = target_units
@staticmethod
def is_bootstrap_parameter_changed(bootstrap_estimates_params, given_params):
"""Check whether parameters of the bootstrap have changed.
This is an efficiency method that checks if fresh resampling of the bootstrap samples is required.
Returns True if parameters have changed and resampling should be done again.
:param bootstrap_estimates_params: A dictionary of parameters for the current bootstrap samples
:param given_params: A dictionary of parameters passed by the user
:returns: A binary flag denoting whether the parameters are different.
"""
is_any_parameter_changed = False
for prm, val in bootstrap_estimates_params.items():
given_val = given_params.get(prm, None)
if given_val is not None and given_val != val:
is_any_parameter_changed = True
break
return is_any_parameter_changed
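    # Illustrative sketch (as comments): a cached bootstrap is reused only when
    # no user-supplied parameter differs from the cached ones, e.g.
    #
    #     cached = {"num_simulations": 100, "sample_size_fraction": 1}
    #     CausalEstimator.is_bootstrap_parameter_changed(
    #         cached, {"num_simulations": 200}
    #     )  # returns True, so the bootstrap samples are regenerated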
def target_units_tostr(self):
s = ""
if type(self._target_units) is str:
s += self._target_units
elif callable(self._target_units):
s += "Data subset defined by a function"
elif isinstance(self._target_units, pd.DataFrame):
s += "Data subset provided as a data frame"
return s
def signif_results_tostr(self, signif_results):
s = ""
pval = signif_results["p_value"]
if type(pval) is tuple:
s += "[{0}, {1}]".format(pval[0], pval[1])
else:
s += "{0}".format(pval)
return s
def estimate_effect(
treatment: Union[str, List[str]],
outcome: Union[str, List[str]],
identified_estimand: IdentifiedEstimand,
identifier_name: str,
method: CausalEstimator,
control_value: int = 0,
treatment_value: int = 1,
test_significance: Optional[bool] = None,
evaluate_effect_strength: bool = False,
confidence_intervals: bool = False,
target_units: str = "ate",
effect_modifiers: List[str] = [],
fit_estimator: bool = True,
method_params: Optional[Dict] = None,
):
"""Estimate the identified causal effect.
    Requires an estimator object (an instance of a CausalEstimator subclass) to be passed as the method argument. In the string-based CausalModel API, method names follow the convention of identification method followed by the specific estimation method: "[backdoor/iv].estimation_method_name". The following methods are supported.
* Propensity Score Matching: "backdoor.propensity_score_matching"
* Propensity Score Stratification: "backdoor.propensity_score_stratification"
* Propensity Score-based Inverse Weighting: "backdoor.propensity_score_weighting"
* Linear Regression: "backdoor.linear_regression"
* Generalized Linear Models (e.g., logistic regression): "backdoor.generalized_linear_model"
* Instrumental Variables: "iv.instrumental_variable"
* Regression Discontinuity: "iv.regression_discontinuity"
In addition, you can directly call any of the EconML estimation methods. The convention is "backdoor.econml.path-to-estimator-class". For example, for the double machine learning estimator ("DML" class) that is located inside "dml" module of EconML, you can use the method name, "backdoor.econml.dml.DML". CausalML estimators can also be called. See `this demo notebook <https://py-why.github.io/dowhy/example_notebooks/dowhy-conditional-treatment-effects.html>`_.
:param treatment: Name of the treatment
:param outcome: Name of the outcome
:param identified_estimand: a probability expression
that represents the effect to be estimated. Output of
CausalModel.identify_effect method
    :param identifier_name: name of the identification method used (e.g., "backdoor" or "iv").
    :param method: an instance of a CausalEstimator subclass implementing the estimation method to be used.
:param control_value: Value of the treatment in the control group, for effect estimation. If treatment is multi-variate, this can be a list.
:param treatment_value: Value of the treatment in the treated group, for effect estimation. If treatment is multi-variate, this can be a list.
    :param test_significance: Binary flag on whether to additionally do a statistical significance test for the estimate.
:param evaluate_effect_strength: (Experimental) Binary flag on whether to estimate the relative strength of the treatment's effect. This measure can be used to compare different treatments for the same outcome (by running this method with different treatments sequentially).
:param confidence_intervals: (Experimental) Binary flag indicating whether confidence intervals should be computed.
:param target_units: (Experimental) The units for which the treatment effect should be estimated. This can be of three types. (1) a string for common specifications of target units (namely, "ate", "att" and "atc"), (2) a lambda function that can be used as an index for the data (pandas DataFrame), or (3) a new DataFrame that contains values of the effect_modifiers and effect will be estimated only for this new data.
:param effect_modifiers: Names of effect modifier variables can be (optionally) specified here too, since they do not affect identification. If None, the effect_modifiers from the CausalModel are used.
:param fit_estimator: Boolean flag on whether to fit the estimator.
Setting it to False is useful to estimate the effect on new data using a previously fitted estimator.
:param method_params: Dictionary containing any method-specific parameters. These are passed directly to the estimating method. See the docs for each estimation method for allowed method-specific params.
:returns: An instance of the CausalEstimate class, containing the causal effect estimate
and other method-dependent information
"""
treatment = parse_state(treatment)
outcome = parse_state(outcome)
causal_estimator_class = method.__class__
identified_estimand.set_identifier_method(identifier_name)
if identified_estimand.no_directed_path:
logger.warning("No directed path from {0} to {1}.".format(treatment, outcome))
return CausalEstimate(
0, identified_estimand, None, control_value=control_value, treatment_value=treatment_value
)
# Check if estimator's target estimand is identified
elif identified_estimand.estimands[identifier_name] is None:
logger.error("No valid identified estimand available.")
return CausalEstimate(None, None, None, control_value=control_value, treatment_value=treatment_value)
method.update_input(treatment_value, control_value, target_units)
estimate = method.estimate_effect()
# Store parameters inside estimate object for refutation methods
# TODO: This add_params needs to move to the estimator class
# inside estimate_effect and estimate_conditional_effect
estimate.add_params(
estimand_type=identified_estimand.estimand_type,
estimator_class=causal_estimator_class,
test_significance=test_significance,
evaluate_effect_strength=evaluate_effect_strength,
confidence_intervals=confidence_intervals,
target_units=target_units,
effect_modifiers=effect_modifiers,
method_params=method_params,
)
return estimate
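# Illustrative sketch of the functional API above (as comments; the estimator
# construction follows the constructor signature in this module, and the
# concrete class is one of the estimators in dowhy.causal_estimators):
#
#     from dowhy.causal_estimators.linear_regression_estimator import (
#         LinearRegressionEstimator,
#     )
#     estimator = LinearRegressionEstimator(
#         df, identified_estimand, treatment=["t"], outcome=["y"]
#     )
#     estimate = estimate_effect(
#         treatment="t",
#         outcome="y",
#         identified_estimand=identified_estimand,
#         identifier_name="backdoor",
#         method=estimator,
#     )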
class CausalEstimate:
"""Class for the estimate object that every causal estimator returns"""
def __init__(
self,
estimate,
target_estimand,
realized_estimand_expr,
control_value,
treatment_value,
conditional_estimates=None,
**kwargs,
):
self.value = estimate
self.target_estimand = target_estimand
self.realized_estimand_expr = realized_estimand_expr
self.control_value = control_value
self.treatment_value = treatment_value
self.conditional_estimates = conditional_estimates
self.params = kwargs
if self.params is not None:
for key, value in self.params.items():
setattr(self, key, value)
self.effect_strength = None
def add_estimator(self, estimator_instance):
self.estimator = estimator_instance
def add_effect_strength(self, strength_dict):
self.effect_strength = strength_dict
def add_params(self, **kwargs):
self.params.update(kwargs)
def get_confidence_intervals(self, confidence_level=None, method=None, **kwargs):
"""Get confidence intervals of the obtained estimate.
By default, this is done with the help of bootstrapped confidence intervals
but can be overridden if the specific estimator implements other methods of estimating confidence intervals.
If the method provided is not bootstrap, this function calls the implementation of the specific estimator.
:param method: Method for estimating confidence intervals.
:param confidence_level: The confidence level of the confidence intervals of the estimate.
:param kwargs: Other optional args to be passed to the CI method.
:returns: The obtained confidence interval.
"""
confidence_intervals = self.estimator.estimate_confidence_intervals(
estimate_value=self.value, confidence_level=confidence_level, method=method, **kwargs
)
return confidence_intervals
def get_standard_error(self, method=None, **kwargs):
"""Get standard error of the obtained estimate.
By default, this is done with the help of bootstrapped standard errors
but can be overridden if the specific estimator implements other methods of estimating standard error.
If the method provided is not bootstrap, this function calls the implementation of the specific estimator.
:param method: Method for computing the standard error.
:param kwargs: Other optional parameters to be passed to the estimating method.
:returns: Standard error of the causal estimate.
"""
std_error = self.estimator.estimate_std_error(method=method, **kwargs)
return std_error
def test_stat_significance(self, method=None, **kwargs):
"""Test statistical significance of the estimate obtained.
By default, uses resampling to create a non-parametric significance test.
Individual child estimators can implement different methods.
If the method name is different from "bootstrap", this function calls the
implementation of the child estimator.
:param method: Method for checking statistical significance
:param kwargs: Other optional parameters to be passed to the estimating method.
:returns: p-value from the significance test
"""
signif_results = self.estimator.test_significance(self.value, method=method, **kwargs)
return {"p_value": signif_results["p_value"]}
def estimate_conditional_effects(
self, effect_modifiers=None, num_quantiles=CausalEstimator.NUM_QUANTILES_TO_DISCRETIZE_CONT_COLS
):
"""Estimate treatment effect conditioned on given variables.
If a numeric effect modifier is provided, it is discretized into quantile bins. If you would like a custom discretization, you can do so yourself: create a new column containing the discretized effect modifier and then include that column's name in the effect_modifier_names argument.
:param effect_modifiers: Names of effect modifier variables over which the conditional effects will be estimated. If not provided, defaults to the effect modifiers specified during creation of the CausalEstimator object.
:param num_quantiles: The number of quantiles into which a numeric effect modifier variable is discretized. Does not affect any categorical effect modifiers.
:returns: A (multi-index) dataframe that provides separate effects for each value of the (discretized) effect modifiers.
"""
return self.estimator._estimate_conditional_effects(
self.estimator._estimate_effect_fn, effect_modifiers, num_quantiles
)
def interpret(self, method_name=None, **kwargs):
"""Interpret the causal estimate.
:param method_name: Method used (string) or a list of methods. If None, then the default for the specific estimator is used.
        :param kwargs: Optional parameters that are directly passed to the interpreter method.
:returns: None
"""
if method_name is None:
method_name = self.estimator.interpret_method
method_name_arr = parse_state(method_name)
for method in method_name_arr:
interpreter = interpreters.get_class_object(method)
interpreter(self, **kwargs).interpret()
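    # Illustrative sketch (as comments): the default interpreter is the textual
    # one listed in CausalEstimator.DEFAULT_INTERPRET_METHOD, so the two calls
    # below are expected to be equivalent:
    #
    #     estimate.interpret()
    #     estimate.interpret(method_name="textual_effect_interpreter")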
def __str__(self):
s = "*** Causal Estimate ***\n"
# No estimand was identified (identification failed)
if self.target_estimand is None:
return "Estimation failed! No relevant identified estimand available for this estimation method."
s += "\n## Identified estimand\n{0}".format(self.target_estimand.__str__(only_target_estimand=True))
s += "\n## Realized estimand\n{0}".format(self.realized_estimand_expr)
if hasattr(self, "estimator"):
s += "\nTarget units: {0}\n".format(self.estimator.target_units_tostr())
s += "\n## Estimate\n"
s += "Mean value: {0}\n".format(self.value)
s += ""
if hasattr(self, "cate_estimates"):
s += "Effect estimates: {0}\n".format(self.cate_estimates)
if hasattr(self, "estimator"):
if self.estimator._significance_test:
s += "p-value: {0}\n".format(self.estimator.signif_results_tostr(self.test_stat_significance()))
if self.estimator._confidence_intervals:
s += "{0}% confidence interval: {1}\n".format(
100 * self.estimator.confidence_level, self.get_confidence_intervals()
)
if self.conditional_estimates is not None:
s += "### Conditional Estimates\n"
s += str(self.conditional_estimates)
if self.effect_strength is not None:
s += "\n## Effect Strength\n"
s += "Change in outcome attributable to treatment: {}\n".format(self.effect_strength["fraction-effect"])
# s += "Variance in outcome explained by treatment: {}\n".format(self.effect_strength["r-squared"])
return s
class RealizedEstimand(object):
def __init__(self, identified_estimand, estimator_name):
self.treatment_variable = identified_estimand.treatment_variable
self.outcome_variable = identified_estimand.outcome_variable
self.backdoor_variables = identified_estimand.get_backdoor_variables()
self.instrumental_variables = identified_estimand.instrumental_variables
self.estimand_type = identified_estimand.estimand_type
self.estimand_expression = None
self.assumptions = None
self.estimator_name = estimator_name
def update_assumptions(self, estimator_assumptions):
self.assumptions = estimator_assumptions
def update_estimand_expression(self, estimand_expression):
self.estimand_expression = estimand_expression
def __str__(self):
s = "Realized estimand: {0}\n".format(self.estimator_name)
s += "Realized estimand type: {0}\n".format(self.estimand_type)
s += "Estimand expression:\n{0}\n".format(sp.pretty(self.estimand_expression))
j = 1
for ass_name, ass_str in self.assumptions.items():
s += "Estimand assumption {0}, {1}: {2}\n".format(j, ass_name, ass_str)
j += 1
return s
| andresmor-ms | 2044d216c322a4b32c6eadce5da7d83463f19c2f | 05bfa49dacf0061988c96c6f3e3756219df5422a | This needs to be modified since causal_estimator is not defined at this point. Assuming that the user provides a method object, we can just use that method object and assign it to `causal_estimator`. | amit-sharma | 281
py-why/dowhy | 693 | Functional api/estimate effect function | #### Estimate Effect function
* Refactors the effect estimation logic into a separate `estimate_effect` function to keep backwards compatibility
#### TODO (future PRs):
* Add `fit(...)` method to estimators - move data-related parameters from the constructor to the `fit(...)` method (a rough sketch of the intended API is shown below)
* Refactor code to avoid `**kwargs` in `__init__(...)` constructors
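
A rough sketch of the intended split (illustrative only; the class name, parameter names, and exact signatures are assumptions until the follow-up PRs land):

```python
# Hypothetical post-refactor API: the constructor takes configuration only,
# fit() receives the data, and estimate_effect() runs on the fitted estimator.
estimator = PropensityScoreMatchingEstimator(identified_estimand)
estimator.fit(data=df, effect_modifiers=["x1"])
estimate = estimator.estimate_effect(control_value=0, treatment_value=1)
```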
| null | 2022-10-18 15:49:21+00:00 | 2022-10-25 17:02:02+00:00 | dowhy/causal_estimator.py | import logging
from collections import namedtuple
import numpy as np
import pandas as pd
import sympy as sp
from sklearn.utils import resample
import dowhy.interpreters as interpreters
from dowhy.utils.api import parse_state
class CausalEstimator:
"""Base class for an estimator of causal effect.
Subclasses implement different estimation methods. All estimation methods are in the package "dowhy.causal_estimators"
"""
# The default number of simulations for statistical testing
DEFAULT_NUMBER_OF_SIMULATIONS_STAT_TEST = 1000
# The default number of simulations to obtain confidence intervals
DEFAULT_NUMBER_OF_SIMULATIONS_CI = 100
# The portion of the total size that should be taken each time to find the confidence intervals
# 1 is the recommended value
# https://ocw.mit.edu/courses/mathematics/18-05-introduction-to-probability-and-statistics-spring-2014/readings/MIT18_05S14_Reading24.pdf
# https://projecteuclid.org/download/pdf_1/euclid.ss/1032280214
DEFAULT_SAMPLE_SIZE_FRACTION = 1
# The default Confidence Level
DEFAULT_CONFIDENCE_LEVEL = 0.95
# Number of quantiles to discretize continuous columns, for applying groupby
NUM_QUANTILES_TO_DISCRETIZE_CONT_COLS = 5
# Prefix to add to temporary categorical variables created after discretization
TEMP_CAT_COLUMN_PREFIX = "__categorical__"
    DEFAULT_NOTIMPLEMENTEDERROR_MSG = "not yet implemented for {0}. If you would like this to be implemented in the next version, please raise an issue at https://github.com/microsoft/dowhy/issues"
BootstrapEstimates = namedtuple("BootstrapEstimates", ["estimates", "params"])
DEFAULT_INTERPRET_METHOD = ["textual_effect_interpreter"]
# std args to be removed from locals() before being passed to args_dict
_STD_INIT_ARGS = ("self", "__class__", "args", "kwargs")
def __init__(
self,
data,
identified_estimand,
treatment,
outcome,
control_value=0,
treatment_value=1,
test_significance=False,
evaluate_effect_strength=False,
confidence_intervals=False,
target_units=None,
effect_modifiers=None,
num_null_simulations=DEFAULT_NUMBER_OF_SIMULATIONS_STAT_TEST,
num_simulations=DEFAULT_NUMBER_OF_SIMULATIONS_CI,
sample_size_fraction=DEFAULT_SAMPLE_SIZE_FRACTION,
confidence_level=DEFAULT_CONFIDENCE_LEVEL,
need_conditional_estimates="auto",
num_quantiles_to_discretize_cont_cols=NUM_QUANTILES_TO_DISCRETIZE_CONT_COLS,
**kwargs,
):
"""Initializes an estimator with data and names of relevant variables.
This method is called from the constructors of its child classes.
:param data: data frame containing the data
:param identified_estimand: probability expression
representing the target identified estimand to estimate.
:param treatment: name of the treatment variable
:param outcome: name of the outcome variable
:param control_value: Value of the treatment in the control group, for effect estimation. If treatment is multi-variate, this can be a list.
:param treatment_value: Value of the treatment in the treated group, for effect estimation. If treatment is multi-variate, this can be a list.
:param test_significance: Binary flag or a string indicating whether to test significance and by which method. All estimators support test_significance="bootstrap" that estimates a p-value for the obtained estimate using the bootstrap method. Individual estimators can override this to support custom testing methods. The bootstrap method supports an optional parameter, num_null_simulations. If False, no testing is done. If True, significance of the estimate is tested using the custom method if available, otherwise by bootstrap.
:param evaluate_effect_strength: (Experimental) whether to evaluate the strength of effect
:param confidence_intervals: Binary flag or a string indicating whether the confidence intervals should be computed and which method should be used. All methods support estimation of confidence intervals using the bootstrap method by using the parameter confidence_intervals="bootstrap". The bootstrap method takes in two arguments (num_simulations and sample_size_fraction) that can be optionally specified in the params dictionary. Estimators may also override this to implement their own confidence interval method. If this parameter is False, no confidence intervals are computed. If True, confidence intervals are computed by the estimator's specific method if available, otherwise through bootstrap.
:param target_units: The units for which the treatment effect should be estimated. This can be a string for common specifications of target units (namely, "ate", "att" and "atc"). It can also be a lambda function that can be used as an index for the data (pandas DataFrame). Alternatively, it can be a new DataFrame that contains values of the effect_modifiers and effect will be estimated only for this new data.
:param effect_modifiers: Variables on which to compute separate
effects, or return a heterogeneous effect function. Not all
methods support this currently.
:param num_null_simulations: The number of simulations for testing the
statistical significance of the estimator
:param num_simulations: The number of simulations for finding the
            confidence interval (and/or standard error) for an estimate
        :param sample_size_fraction: The fraction of the data to be resampled by
            the bootstrap estimator
:param confidence_level: The confidence level of the confidence
interval estimate
:param need_conditional_estimates: Boolean flag indicating whether
conditional estimates should be computed. Defaults to True if
there are effect modifiers in the graph
:param num_quantiles_to_discretize_cont_cols: The number of quantiles
into which a numeric effect modifier is split, to enable
estimation of conditional treatment effect over it.
:param kwargs: (optional) Additional estimator-specific parameters
:returns: an instance of the estimator class.
"""
self._data = data
self._target_estimand = identified_estimand
# Currently estimation methods only support univariate treatment and outcome
self._treatment_name = treatment
self._outcome_name = outcome[0] # assuming one-dimensional outcome
self._control_value = control_value
self._treatment_value = treatment_value
self._significance_test = test_significance
self._effect_strength_eval = evaluate_effect_strength
self._target_units = target_units
self._effect_modifier_names = effect_modifiers
self._confidence_intervals = confidence_intervals
self._bootstrap_estimates = None # for confidence intervals and std error
self._bootstrap_null_estimates = None # for significance test
self._effect_modifiers = None
self.method_params = kwargs
# Setting the default interpret method
self.interpret_method = CausalEstimator.DEFAULT_INTERPRET_METHOD
self.logger = logging.getLogger(__name__)
# Setting treatment and outcome values
if self._data is not None:
self._treatment = self._data[self._treatment_name]
self._outcome = self._data[self._outcome_name]
# Now saving the effect modifiers
if self._effect_modifier_names:
# only add the observed nodes
self._effect_modifier_names = [
cname for cname in self._effect_modifier_names if cname in self._data.columns
]
if len(self._effect_modifier_names) > 0:
self._effect_modifiers = self._data[self._effect_modifier_names]
self._effect_modifiers = pd.get_dummies(self._effect_modifiers, drop_first=True)
self.logger.debug("Effect modifiers: " + ",".join(self._effect_modifier_names))
else:
self._effect_modifier_names = None
# Check if some parameters were set, otherwise set to default values
self.num_null_simulations = num_null_simulations
self.num_simulations = num_simulations
self.sample_size_fraction = sample_size_fraction
self.confidence_level = confidence_level
self.num_quantiles_to_discretize_cont_cols = num_quantiles_to_discretize_cont_cols
# Estimate conditional estimates by default
self.need_conditional_estimates = (
need_conditional_estimates if need_conditional_estimates != "auto" else bool(self._effect_modifier_names)
)
@staticmethod
def get_estimator_object(new_data, identified_estimand, estimate):
"""Create a new estimator of the same type as the one passed in the estimate argument.
Creates a new object with new_data and the identified_estimand
:param new_data: np.ndarray, pd.Series, pd.DataFrame
The newly assigned data on which the estimator should run
:param identified_estimand: IdentifiedEstimand
An instance of the identified estimand class that provides the information with
            respect to which causal pathways are employed when the treatment affects the outcome
:param estimate: CausalEstimate
It is an already existing estimate whose properties we wish to replicate
:returns: An instance of the same estimator class that had generated the given estimate.
"""
estimator_class = estimate.params["estimator_class"]
new_estimator = estimator_class(
new_data,
identified_estimand,
identified_estimand.treatment_variable,
identified_estimand.outcome_variable,
# names of treatment and outcome
control_value=estimate.control_value,
treatment_value=estimate.treatment_value,
test_significance=False,
evaluate_effect_strength=False,
confidence_intervals=estimate.params["confidence_intervals"],
target_units=estimate.params["target_units"],
effect_modifiers=estimate.params["effect_modifiers"],
**estimate.params["method_params"],
)
return new_estimator
    def _estimate_effect(self):
        """This method is to be overridden by the child classes, so that they can run the estimation technique of their choice"""
raise NotImplementedError(
("Main estimation method is " + CausalEstimator.DEFAULT_NOTIMPLEMENTEDERROR_MSG).format(self.__class__)
)
def estimate_effect(self):
"""Base estimation method that calls the estimate_effect method of its calling subclass.
Can optionally also test significance and estimate effect strength for any returned estimate.
:param self: object instance of class Estimator
        :returns: A CausalEstimate instance that contains point estimates of average and conditional effects. Based on the parameters provided, it optionally includes confidence intervals, standard errors, statistical significance and other statistical parameters.
"""
est = self._estimate_effect()
est.add_estimator(self)
if self._significance_test:
self.test_significance(est.value, method=self._significance_test)
if self._confidence_intervals:
self.estimate_confidence_intervals(
est.value, confidence_level=self.confidence_level, method=self._confidence_intervals
)
if self._effect_strength_eval:
effect_strength_dict = self.evaluate_effect_strength(est)
est.add_effect_strength(effect_strength_dict)
return est
def estimate_effect_naive(self):
# TODO Only works for binary treatment
df_withtreatment = self._data.loc[self._data[self._treatment_name] == 1]
df_notreatment = self._data.loc[self._data[self._treatment_name] == 0]
est = np.mean(df_withtreatment[self._outcome_name]) - np.mean(df_notreatment[self._outcome_name])
return CausalEstimate(est, None, None, control_value=0, treatment_value=1)
def _estimate_effect_fn(self, data_df):
"""Function used in conditional effect estimation. This function is to be overridden by each child estimator.
The overridden function should take in a dataframe as input and return the estimate for that data.
"""
raise NotImplementedError(
("Conditional treatment effects are " + CausalEstimator.DEFAULT_NOTIMPLEMENTEDERROR_MSG).format(
self.__class__
)
)
def _estimate_conditional_effects(self, estimate_effect_fn, effect_modifier_names=None, num_quantiles=None):
"""Estimate conditional treatment effects. Common method for all estimators that utilizes a specific estimate_effect_fn implemented by each child estimator.
If a numeric effect modifier is provided, it is discretized into quantile bins. If you would like a custom discretization, you can do so yourself: create a new column containing the discretized effect modifier and then include that column's name in the effect_modifier_names argument.
:param estimate_effect_fn: Function that has a single parameter (a data frame) and returns the treatment effect estimate on that data.
:param effect_modifier_names: Names of effect modifier variables over which the conditional effects will be estimated. If not provided, defaults to the effect modifiers specified during creation of the CausalEstimator object.
:param num_quantiles: The number of quantiles into which a numeric effect modifier variable is discretized. Does not affect any categorical effect modifiers.
:returns: A (multi-index) dataframe that provides separate effects for each value of the (discretized) effect modifiers.
"""
# Defaulting to class default values if parameters are not provided
if effect_modifier_names is None:
effect_modifier_names = self._effect_modifier_names
if num_quantiles is None:
num_quantiles = self.num_quantiles_to_discretize_cont_cols
# Checking that there is at least one effect modifier
if not effect_modifier_names:
raise ValueError("At least one effect modifier should be specified to compute conditional effects.")
# Making sure that effect_modifier_names is a list
effect_modifier_names = parse_state(effect_modifier_names)
if not all(em in self._effect_modifier_names for em in effect_modifier_names):
            self.logger.warning(
"At least one of the provided effect modifiers was not included while fitting the estimator. You may get incorrect results. To resolve, fit the estimator again by providing the updated effect modifiers in estimate_effect()."
)
# Making a copy since we are going to be changing effect modifier names
effect_modifier_names = effect_modifier_names.copy()
prefix = CausalEstimator.TEMP_CAT_COLUMN_PREFIX
# For every numeric effect modifier, adding a temp categorical column
for i in range(len(effect_modifier_names)):
em = effect_modifier_names[i]
if pd.api.types.is_numeric_dtype(self._data[em].dtypes):
self._data[prefix + str(em)] = pd.qcut(self._data[em], num_quantiles, duplicates="drop")
effect_modifier_names[i] = prefix + str(em)
# Grouping by effect modifiers and computing effect separately
by_effect_mods = self._data.groupby(effect_modifier_names)
cond_est_fn = lambda x: self._do(self._treatment_value, x) - self._do(self._control_value, x)
conditional_estimates = by_effect_mods.apply(estimate_effect_fn)
# Deleting the temporary categorical columns
for em in effect_modifier_names:
if em.startswith(prefix):
self._data.pop(em)
return conditional_estimates
def _do(self, x, data_df=None):
raise NotImplementedError(
("Do-operator is " + CausalEstimator.DEFAULT_NOTIMPLEMENTEDERROR_MSG).format(self.__class__)
)
def do(self, x, data_df=None):
"""Method that implements the do-operator.
Given a value x for the treatment, returns the expected value of the outcome when the treatment is intervened to a value x.
:param x: Value of the treatment
:param data_df: Data on which the do-operator is to be applied.
:returns: Value of the outcome when treatment is intervened/set to x.
"""
est = self._do(x, data_df)
return est
def construct_symbolic_estimator(self, estimand):
        raise NotImplementedError(
            ("Symbolic estimator string is " + CausalEstimator.DEFAULT_NOTIMPLEMENTEDERROR_MSG).format(self.__class__)
        )
def _generate_bootstrap_estimates(self, num_bootstrap_simulations, sample_size_fraction):
"""Helper function to generate causal estimates over bootstrapped samples.
:param num_bootstrap_simulations: Number of simulations for the bootstrap method.
:param sample_size_fraction: Fraction of the dataset to be resampled.
:returns: A collections.namedtuple containing a list of bootstrapped estimates and a dictionary containing parameters used for the bootstrap.
"""
# The array that stores the results of all estimations
simulation_results = np.zeros(num_bootstrap_simulations)
        # Compute the sample size as a proportion of the population size
sample_size = int(sample_size_fraction * len(self._data))
if sample_size > len(self._data):
self.logger.warning("WARN: The sample size is greater than the data being sampled")
self.logger.info("INFO: The sample size: {}".format(sample_size))
self.logger.info("INFO: The number of simulations: {}".format(num_bootstrap_simulations))
# Perform the set number of simulations
for index in range(num_bootstrap_simulations):
new_data = resample(self._data, n_samples=sample_size)
new_estimator = type(self)(
new_data,
self._target_estimand,
self._target_estimand.treatment_variable,
self._target_estimand.outcome_variable,
# names of treatment and outcome
treatment_value=self._treatment_value,
control_value=self._control_value,
test_significance=False,
evaluate_effect_strength=False,
confidence_intervals=False,
target_units=self._target_units,
effect_modifiers=self._effect_modifier_names,
**self.method_params,
)
new_effect = new_estimator.estimate_effect()
simulation_results[index] = new_effect.value
estimates = CausalEstimator.BootstrapEstimates(
simulation_results,
{"num_simulations": num_bootstrap_simulations, "sample_size_fraction": sample_size_fraction},
)
return estimates
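# Consuming the result (illustrative sketch; `estimator` is hypothetical):
#
#   boot = estimator._generate_bootstrap_estimates(100, 1.0)
#   boot.estimates          # np.ndarray of 100 bootstrapped effect values
#   boot.params             # {"num_simulations": 100, "sample_size_fraction": 1.0}
#   np.std(boot.estimates)  # bootstrap standard error, as computed below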
def _estimate_confidence_intervals_with_bootstrap(
self, estimate_value, confidence_level=None, num_simulations=None, sample_size_fraction=None
):
"""
Method to compute confidence interval using bootstrapped sampling.
:param estimate_value: obtained estimate's value
:param confidence_level: The level for which to compute CI (e.g., 95% confidence level translates to confidence_level=0.95)
:param num_simulations: The number of simulations to be performed to get the bootstrap confidence intervals.
:param sample_size_fraction: The fraction of the dataset to be resampled.
:returns: confidence interval at the specified level.
For more details on bootstrap or resampling statistics, refer to the following links:
https://ocw.mit.edu/courses/mathematics/18-05-introduction-to-probability-and-statistics-spring-2014/readings/MIT18_05S14_Reading24.pdf
https://projecteuclid.org/download/pdf_1/euclid.ss/1032280214
"""
# Using class default parameters if not specified
if num_simulations is None:
num_simulations = self.num_simulations
if sample_size_fraction is None:
sample_size_fraction = self.sample_size_fraction
# Checking if bootstrap_estimates are already computed
if self._bootstrap_estimates is None:
self._bootstrap_estimates = self._generate_bootstrap_estimates(num_simulations, sample_size_fraction)
elif CausalEstimator.is_bootstrap_parameter_changed(self._bootstrap_estimates.params, locals()):
# Regenerate if any bootstrap parameter changed since the previous run
self._bootstrap_estimates = self._generate_bootstrap_estimates(num_simulations, sample_size_fraction)
# Now use the data obtained from the simulations to get the value of the confidence estimates
bootstrap_estimates = self._bootstrap_estimates.estimates
# Get the variations of each bootstrap estimate and sort
bootstrap_variations = [bootstrap_estimate - estimate_value for bootstrap_estimate in bootstrap_estimates]
sorted_bootstrap_variations = np.sort(bootstrap_variations)
# Now we take the (1 - p)th and the (p)th variations, where p is the chosen confidence level
upper_bound_index = int((1 - confidence_level) * len(sorted_bootstrap_variations))
lower_bound_index = int(confidence_level * len(sorted_bootstrap_variations))
# Get the lower and upper bounds by subtracting the variations from the estimate
lower_bound = estimate_value - sorted_bootstrap_variations[lower_bound_index]
upper_bound = estimate_value - sorted_bootstrap_variations[upper_bound_index]
return lower_bound, upper_bound
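# Worked toy example of the index arithmetic above (illustrative only):
# with estimate_value = 2.0, confidence_level = 0.95 and 100 sorted
# variations, lower_bound_index = int(0.95 * 100) = 95 and
# upper_bound_index = int(0.05 * 100) = 5, so the interval is
# (2.0 - sorted_bootstrap_variations[95], 2.0 - sorted_bootstrap_variations[5]);
# subtracting the extreme variations swaps the tails, which is what makes
# the interval two-sided.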
def _estimate_confidence_intervals(self, confidence_level=None, method=None, **kwargs):
"""
This method is to be overridden by the child classes, so that they
can run a confidence interval estimation method suited to the specific
causal estimator.
"""
raise NotImplementedError(
(
"This method for estimating confidence intervals is "
+ CausalEstimator.DEFAULT_NOTIMPLEMENTEDERROR_MSG
+ " Meanwhile, you can try the bootstrap method (method='bootstrap') to estimate confidence intervals."
).format(self.__class__)
)
def estimate_confidence_intervals(self, estimate_value, confidence_level=None, method=None, **kwargs):
"""Find the confidence intervals corresponding to any estimator
By default, this is done with the help of bootstrapped confidence intervals
but can be overridden if the specific estimator implements other methods of estimating confidence intervals.
If the method provided is not bootstrap, this function calls the implementation of the specific estimator.
:param estimate_value: obtained estimate's value
:param method: Method for estimating confidence intervals.
:param confidence_level: The confidence level of the confidence intervals of the estimate.
:param kwargs: Other optional args to be passed to the CI method.
:returns: The obtained confidence interval.
"""
if method is None:
if self._confidence_intervals:
method = self._confidence_intervals # this is either True or methodname
else:
method = "default"
confidence_intervals = None
if confidence_level is None:
confidence_level = self.confidence_level
if method == "default" or method is True: # user has not provided any method
try:
confidence_intervals = self._estimate_confidence_intervals(confidence_level, method=method, **kwargs)
except NotImplementedError:
confidence_intervals = self._estimate_confidence_intervals_with_bootstrap(
estimate_value, confidence_level, **kwargs
)
else:
if method == "bootstrap":
confidence_intervals = self._estimate_confidence_intervals_with_bootstrap(
estimate_value, confidence_level, **kwargs
)
else:
confidence_intervals = self._estimate_confidence_intervals(confidence_level, method=method, **kwargs)
return confidence_intervals
def _estimate_std_error_with_bootstrap(self, num_simulations=None, sample_size_fraction=None):
"""Compute standard error using the bootstrap method. Standard error
and confidence intervals use the same parameter num_simulations for
the number of bootstrap simulations.
:param num_simulations: Number of bootstrapped samples.
:param sample_size_fraction: Fraction of data to be resampled.
:returns: Standard error of the obtained estimate.
"""
# Use existing params, if new user defined params are not present
if num_simulations is None:
num_simulations = self.num_simulations
if sample_size_fraction is None:
sample_size_fraction = self.sample_size_fraction
# Checking if bootstrap_estimates are already computed
if self._bootstrap_estimates is None:
self._bootstrap_estimates = self._generate_bootstrap_estimates(num_simulations, sample_size_fraction)
elif CausalEstimator.is_bootstrap_parameter_changed(self._bootstrap_estimates.params, locals()):
# Check if any parameter is changed from the previous std error estimate
self._bootstrap_estimates = self._generate_bootstrap_estimates(num_simulations, sample_size_fraction)
std_error = np.std(self._bootstrap_estimates.estimates)
return std_error
def _estimate_std_error(self, method=None, **kwargs):
"""
This method is to be overridden by the child classes, so that they
can run a standard error estimation method suited to the specific
causal estimator.
"""
raise NotImplementedError(
(
"This method for estimating standard errors is "
+ CausalEstimator.DEFAULT_NOTIMPLEMENTEDERROR_MSG
+ " Meanwhile, you can try the bootstrap method (method='bootstrap') to estimate standard errors."
).format(self.__class__)
)
def estimate_std_error(self, method=None, **kwargs):
"""Compute standard error of an obtained causal estimate.
:param method: Method for computing the standard error.
:param kwargs: Other optional parameters to be passed to the estimating method.
:returns: Standard error of the causal estimate.
"""
if method is None:
if self._confidence_intervals:
method = self._confidence_intervals
else:
method = "default"
std_error = None
if method == "default" or method is True: # user has not provided any method
try:
std_error = self._estimate_std_error(method, **kwargs)
except NotImplementedError:
std_error = self._estimate_std_error_with_bootstrap(**kwargs)
else:
if method == "bootstrap":
std_error = self._estimate_std_error_with_bootstrap(**kwargs)
else:
std_error = self._estimate_std_error(method, **kwargs)
return std_error
def _test_significance_with_bootstrap(self, estimate_value, num_null_simulations=None):
"""Test statistical significance of an estimate using the bootstrap method.
:param estimate_value: Obtained estimate's value
:param num_null_simulations: Number of simulations for the null hypothesis
:returns: p-value of the statistical significance test.
"""
# Use existing params, if new user defined params are not present
if num_null_simulations is None:
num_null_simulations = self.num_null_simulations
do_retest = self._bootstrap_null_estimates is None or CausalEstimator.is_bootstrap_parameter_changed(
self._bootstrap_null_estimates.params, locals()
)
if do_retest:
null_estimates = np.zeros(num_null_simulations)
for i in range(num_null_simulations):
new_outcome = np.random.permutation(self._outcome)
new_data = self._data.assign(dummy_outcome=new_outcome)
# self._outcome = self._data["dummy_outcome"]
new_estimator = type(self)(
new_data,
self._target_estimand,
self._target_estimand.treatment_variable,
("dummy_outcome",),
test_significance=False,
evaluate_effect_strength=False,
confidence_intervals=False,
target_units=self._target_units,
effect_modifiers=self._effect_modifier_names,
**self.method_params,
)
new_effect = new_estimator.estimate_effect()
null_estimates[i] = new_effect.value
self._bootstrap_null_estimates = CausalEstimator.BootstrapEstimates(
null_estimates, {"num_null_simulations": num_null_simulations, "sample_size_fraction": 1}
)
# Processing the null hypothesis estimates
sorted_null_estimates = np.sort(self._bootstrap_null_estimates.estimates)
self.logger.debug("Null estimates: {0}".format(sorted_null_estimates))
median_estimate = sorted_null_estimates[int(num_null_simulations / 2)]
# Doing a two-sided test
if estimate_value > median_estimate:
# Being conservative with the p-value reported
estimate_index = np.searchsorted(sorted_null_estimates, estimate_value, side="left")
p_value = 1 - (estimate_index / num_null_simulations)
else:
# Being conservative with the p-value reported
estimate_index = np.searchsorted(sorted_null_estimates, estimate_value, side="right")
p_value = estimate_index / num_null_simulations
# If the p-value is exactly 0 or 1, report a range bounded by the simulation resolution
if p_value == 0:
p_value = (0, 1 / len(sorted_null_estimates)) # a tuple determining the range.
elif p_value == 1:
p_value = (1 - 1 / len(sorted_null_estimates), 1)
signif_dict = {"p_value": p_value}
return signif_dict
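# Interpretation note (illustrative): because the null distribution is
# simulated, a p-value of exactly 0 or 1 is reported as a range bounded by
# the simulation resolution. For example, with 1000 null simulations,
# signif_dict["p_value"] == (0, 0.001) should be read as "p < 0.001",
# not as a point value.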
def _test_significance(self, estimate_value, method=None, **kwargs):
"""
This method is to be overridden by the child classes, so that they
can run a significance test suited to the specific
causal estimator.
"""
raise NotImplementedError(
(
"This method for testing statistical significance is "
+ CausalEstimator.DEFAULT_NOTIMPLEMENTEDERROR_MSG
+ " Meanwhile, you can try the bootstrap method (method='bootstrap') to test statistical significance."
).format(self.__class__)
)
def test_significance(self, estimate_value, method=None, **kwargs):
"""Test statistical significance of obtained estimate.
By default, uses resampling to create a non-parametric significance test.
A general procedure. Individual child estimators can implement different methods.
If the method name is different from "bootstrap", this function calls the
implementation of the child estimator.
:param self: object instance of class Estimator
:param estimate_value: obtained estimate's value
:param method: Method for checking statistical significance
:returns: p-value from the significance test
"""
if method is None:
if self._significance_test:
method = self._significance_test # this is either True or methodname
else:
method = "default"
signif_dict = None
if method == "default" or method is True: # user has not provided any method
try:
signif_dict = self._test_significance(estimate_value, method, **kwargs)
except NotImplementedError:
signif_dict = self._test_significance_with_bootstrap(estimate_value, **kwargs)
else:
if method == "bootstrap":
signif_dict = self._test_significance_with_bootstrap(estimate_value, **kwargs)
else:
signif_dict = self._test_significance(estimate_value, method, **kwargs)
return signif_dict
def evaluate_effect_strength(self, estimate):
fraction_effect_explained = self._evaluate_effect_strength(estimate, method="fraction-effect")
# Need to test r-squared before supporting
# effect_r_squared = self._evaluate_effect_strength(estimate, method="r-squared")
strength_dict = {
"fraction-effect": fraction_effect_explained
# 'r-squared': effect_r_squared
}
return strength_dict
def _evaluate_effect_strength(self, estimate, method="fraction-effect"):
supported_methods = ["fraction-effect"]
if method not in supported_methods:
raise NotImplementedError("This method is not supported for evaluating effect strength")
if method == "fraction-effect":
naive_obs_estimate = self.estimate_effect_naive()
self.logger.debug("Estimate: %s, Naive observed estimate: %s", estimate.value, naive_obs_estimate.value)
fraction_effect_explained = estimate.value / naive_obs_estimate.value
return fraction_effect_explained
# elif method == "r-squared":
# outcome_mean = np.mean(self._outcome)
# total_variance = np.sum(np.square(self._outcome - outcome_mean))
# Assuming a linear model with one variable: the treatment
# Currently only works for continuous y
# causal_model = outcome_mean + estimate.value*self._treatment
# squared_residual = np.sum(np.square(self._outcome - causal_model))
# r_squared = 1 - (squared_residual/total_variance)
# return r_squared
else:
return None
def update_input(self, treatment_value, control_value, target_units):
self._control_value = control_value
self._treatment_value = treatment_value
self._target_units = target_units
@staticmethod
def is_bootstrap_parameter_changed(bootstrap_estimates_params, given_params):
"""Check whether parameters of the bootstrap have changed.
This is an efficiency method that checks if fresh resampling of the bootstrap samples is required.
Returns True if parameters have changed and resampling should be done again.
:param bootstrap_estimates_params: A dictionary of parameters for the current bootstrap samples
:param given_params: A dictionary of parameters passed by the user
:returns: A binary flag denoting whether the parameters are different.
"""
is_any_parameter_changed = False
for prm, val in bootstrap_estimates_params.items():
given_val = given_params.get(prm, None)
if given_val is not None and given_val != val:
is_any_parameter_changed = True
break
return is_any_parameter_changed
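# Hedged sketch of the caching check used by the CI/std-error methods above:
#
#   params = {"num_simulations": 100, "sample_size_fraction": 1.0}
#   CausalEstimator.is_bootstrap_parameter_changed(params, {"num_simulations": 200})
#   # -> True: bootstrap samples would be regenerated
#   CausalEstimator.is_bootstrap_parameter_changed(params, {"num_simulations": None})
#   # -> False: None means "reuse the existing value"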
def target_units_tostr(self):
s = ""
if type(self._target_units) is str:
s += self._target_units
elif callable(self._target_units):
s += "Data subset defined by a function"
elif isinstance(self._target_units, pd.DataFrame):
s += "Data subset provided as a data frame"
return s
def signif_results_tostr(self, signif_results):
s = ""
pval = signif_results["p_value"]
if type(pval) is tuple:
s += "[{0}, {1}]".format(pval[0], pval[1])
else:
s += "{0}".format(pval)
return s
class CausalEstimate:
"""Class for the estimate object that every causal estimator returns"""
def __init__(
self,
estimate,
target_estimand,
realized_estimand_expr,
control_value,
treatment_value,
conditional_estimates=None,
**kwargs,
):
self.value = estimate
self.target_estimand = target_estimand
self.realized_estimand_expr = realized_estimand_expr
self.control_value = control_value
self.treatment_value = treatment_value
self.conditional_estimates = conditional_estimates
self.params = kwargs
if self.params is not None:
for key, value in self.params.items():
setattr(self, key, value)
self.effect_strength = None
def add_estimator(self, estimator_instance):
self.estimator = estimator_instance
def add_effect_strength(self, strength_dict):
self.effect_strength = strength_dict
def add_params(self, **kwargs):
self.params.update(kwargs)
def get_confidence_intervals(self, confidence_level=None, method=None, **kwargs):
"""Get confidence intervals of the obtained estimate.
By default, this is done with the help of bootstrapped confidence intervals
but can be overridden if the specific estimator implements other methods of estimating confidence intervals.
If the method provided is not bootstrap, this function calls the implementation of the specific estimator.
:param method: Method for estimating confidence intervals.
:param confidence_level: The confidence level of the confidence intervals of the estimate.
:param kwargs: Other optional args to be passed to the CI method.
:returns: The obtained confidence interval.
"""
confidence_intervals = self.estimator.estimate_confidence_intervals(
estimate_value=self.value, confidence_level=confidence_level, method=method, **kwargs
)
return confidence_intervals
def get_standard_error(self, method=None, **kwargs):
"""Get standard error of the obtained estimate.
By default, this is done with the help of bootstrapped standard errors
but can be overridden if the specific estimator implements other methods of estimating standard error.
If the method provided is not bootstrap, this function calls the implementation of the specific estimator.
:param method: Method for computing the standard error.
:param kwargs: Other optional parameters to be passed to the estimating method.
:returns: Standard error of the causal estimate.
"""
std_error = self.estimator.estimate_std_error(method=method, **kwargs)
return std_error
def test_stat_significance(self, method=None, **kwargs):
"""Test statistical significance of the estimate obtained.
By default, uses resampling to create a non-parametric significance test.
Individual child estimators can implement different methods.
If the method name is different from "bootstrap", this function calls the
implementation of the child estimator.
:param method: Method for checking statistical significance
:param kwargs: Other optional parameters to be passed to the estimating method.
:returns: p-value from the significance test
"""
signif_results = self.estimator.test_significance(self.value, method=method, **kwargs)
return {"p_value": signif_results["p_value"]}
def estimate_conditional_effects(
self, effect_modifiers=None, num_quantiles=CausalEstimator.NUM_QUANTILES_TO_DISCRETIZE_CONT_COLS
):
"""Estimate treatment effect conditioned on given variables.
If a numeric effect modifier is provided, it is discretized into quantile bins. If you would like a custom discretization, you can do so yourself: create a new column containing the discretized effect modifier and then include that column's name in the effect_modifier_names argument.
:param effect_modifiers: Names of effect modifier variables over which the conditional effects will be estimated. If not provided, defaults to the effect modifiers specified during creation of the CausalEstimator object.
:param num_quantiles: The number of quantiles into which a numeric effect modifier variable is discretized. Does not affect any categorical effect modifiers.
:returns: A (multi-index) dataframe that provides separate effects for each value of the (discretized) effect modifiers.
"""
return self.estimator._estimate_conditional_effects(
self.estimator._estimate_effect_fn, effect_modifiers, num_quantiles
)
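# Illustrative call (a sketch; `estimate` and the column name "age" are
# hypothetical):
#
#   cond_effects = estimate.estimate_conditional_effects(
#       effect_modifiers=["age"], num_quantiles=4
#   )
#   # returns a dataframe indexed by the 4 quantile bins of "age", with one
#   # effect estimate per bin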
def interpret(self, method_name=None, **kwargs):
"""Interpret the causal estimate.
:param method_name: Method used (string) or a list of methods. If None, then the default for the specific estimator is used.
:param kwargs:: Optional parameters that are directly passed to the interpreter method.
:returns: None
"""
if method_name is None:
method_name = self.estimator.interpret_method
method_name_arr = parse_state(method_name)
for method in method_name_arr:
interpreter = interpreters.get_class_object(method)
interpreter(self, **kwargs).interpret()
def __str__(self):
s = "*** Causal Estimate ***\n"
# No estimand was identified (identification failed)
if self.target_estimand is None:
return "Estimation failed! No relevant identified estimand available for this estimation method."
s += "\n## Identified estimand\n{0}".format(self.target_estimand.__str__(only_target_estimand=True))
s += "\n## Realized estimand\n{0}".format(self.realized_estimand_expr)
if hasattr(self, "estimator"):
s += "\nTarget units: {0}\n".format(self.estimator.target_units_tostr())
s += "\n## Estimate\n"
s += "Mean value: {0}\n".format(self.value)
s += ""
if hasattr(self, "cate_estimates"):
s += "Effect estimates: {0}\n".format(self.cate_estimates)
if hasattr(self, "estimator"):
if self.estimator._significance_test:
s += "p-value: {0}\n".format(self.estimator.signif_results_tostr(self.test_stat_significance()))
if self.estimator._confidence_intervals:
s += "{0}% confidence interval: {1}\n".format(
100 * self.estimator.confidence_level, self.get_confidence_intervals()
)
if self.conditional_estimates is not None:
s += "### Conditional Estimates\n"
s += str(self.conditional_estimates)
if self.effect_strength is not None:
s += "\n## Effect Strength\n"
s += "Change in outcome attributable to treatment: {}\n".format(self.effect_strength["fraction-effect"])
# s += "Variance in outcome explained by treatment: {}\n".format(self.effect_strength["r-squared"])
return s
class RealizedEstimand(object):
def __init__(self, identified_estimand, estimator_name):
self.treatment_variable = identified_estimand.treatment_variable
self.outcome_variable = identified_estimand.outcome_variable
self.backdoor_variables = identified_estimand.get_backdoor_variables()
self.instrumental_variables = identified_estimand.instrumental_variables
self.estimand_type = identified_estimand.estimand_type
self.estimand_expression = None
self.assumptions = None
self.estimator_name = estimator_name
def update_assumptions(self, estimator_assumptions):
self.assumptions = estimator_assumptions
def update_estimand_expression(self, estimand_expression):
self.estimand_expression = estimand_expression
def __str__(self):
s = "Realized estimand: {0}\n".format(self.estimator_name)
s += "Realized estimand type: {0}\n".format(self.estimand_type)
s += "Estimand expression:\n{0}\n".format(sp.pretty(self.estimand_expression))
j = 1
for ass_name, ass_str in self.assumptions.items():
s += "Estimand assumption {0}, {1}: {2}\n".format(j, ass_name, ass_str)
j += 1
return s
import logging
from collections import namedtuple
from typing import Dict, List, Optional, Union
import numpy as np
import pandas as pd
import sympy as sp
from sklearn.utils import resample
import dowhy.interpreters as interpreters
from dowhy import causal_estimators
from dowhy.causal_graph import CausalGraph
from dowhy.causal_identifier.identified_estimand import IdentifiedEstimand
from dowhy.utils.api import parse_state
logger = logging.getLogger(__name__)
class CausalEstimator:
"""Base class for an estimator of causal effect.
Subclasses implement different estimation methods. All estimation methods are in the package "dowhy.causal_estimators"
"""
# The default number of simulations for statistical testing
DEFAULT_NUMBER_OF_SIMULATIONS_STAT_TEST = 1000
# The default number of simulations to obtain confidence intervals
DEFAULT_NUMBER_OF_SIMULATIONS_CI = 100
# The portion of the total size that should be taken each time to find the confidence intervals
# 1 is the recommended value
# https://ocw.mit.edu/courses/mathematics/18-05-introduction-to-probability-and-statistics-spring-2014/readings/MIT18_05S14_Reading24.pdf
# https://projecteuclid.org/download/pdf_1/euclid.ss/1032280214
DEFAULT_SAMPLE_SIZE_FRACTION = 1
# The default Confidence Level
DEFAULT_CONFIDENCE_LEVEL = 0.95
# Number of quantiles to discretize continuous columns, for applying groupby
NUM_QUANTILES_TO_DISCRETIZE_CONT_COLS = 5
# Prefix to add to temporary categorical variables created after discretization
TEMP_CAT_COLUMN_PREFIX = "__categorical__"
DEFAULT_NOTIMPLEMENTEDERROR_MSG = "not yet implemented for {0}. If you would like this to be implemented in the next version, please raise an issue at https://github.com/microsoft/dowhy/issues"
BootstrapEstimates = namedtuple("BootstrapEstimates", ["estimates", "params"])
DEFAULT_INTERPRET_METHOD = ["textual_effect_interpreter"]
# std args to be removed from locals() before being passed to args_dict
_STD_INIT_ARGS = ("self", "__class__", "args", "kwargs")
def __init__(
self,
data,
identified_estimand,
treatment,
outcome,
control_value=0,
treatment_value=1,
test_significance=False,
evaluate_effect_strength=False,
confidence_intervals=False,
target_units=None,
effect_modifiers=None,
num_null_simulations=DEFAULT_NUMBER_OF_SIMULATIONS_STAT_TEST,
num_simulations=DEFAULT_NUMBER_OF_SIMULATIONS_CI,
sample_size_fraction=DEFAULT_SAMPLE_SIZE_FRACTION,
confidence_level=DEFAULT_CONFIDENCE_LEVEL,
need_conditional_estimates="auto",
num_quantiles_to_discretize_cont_cols=NUM_QUANTILES_TO_DISCRETIZE_CONT_COLS,
**kwargs,
):
"""Initializes an estimator with data and names of relevant variables.
This method is called from the constructors of its child classes.
:param data: data frame containing the data
:param identified_estimand: probability expression
representing the target identified estimand to estimate.
:param treatment: name of the treatment variable
:param outcome: name of the outcome variable
:param control_value: Value of the treatment in the control group, for effect estimation. If treatment is multi-variate, this can be a list.
:param treatment_value: Value of the treatment in the treated group, for effect estimation. If treatment is multi-variate, this can be a list.
:param test_significance: Binary flag or a string indicating whether to test significance and by which method. All estimators support test_significance="bootstrap" that estimates a p-value for the obtained estimate using the bootstrap method. Individual estimators can override this to support custom testing methods. The bootstrap method supports an optional parameter, num_null_simulations. If False, no testing is done. If True, significance of the estimate is tested using the custom method if available, otherwise by bootstrap.
:param evaluate_effect_strength: (Experimental) whether to evaluate the strength of effect
:param confidence_intervals: Binary flag or a string indicating whether the confidence intervals should be computed and which method should be used. All methods support estimation of confidence intervals using the bootstrap method by using the parameter confidence_intervals="bootstrap". The bootstrap method takes in two arguments (num_simulations and sample_size_fraction) that can be optionally specified in the params dictionary. Estimators may also override this to implement their own confidence interval method. If this parameter is False, no confidence intervals are computed. If True, confidence intervals are computed by the estimator's specific method if available, otherwise through bootstrap.
:param target_units: The units for which the treatment effect should be estimated. This can be a string for common specifications of target units (namely, "ate", "att" and "atc"). It can also be a lambda function that can be used as an index for the data (pandas DataFrame). Alternatively, it can be a new DataFrame that contains values of the effect_modifiers and effect will be estimated only for this new data.
:param effect_modifiers: Variables on which to compute separate
effects, or return a heterogeneous effect function. Not all
methods support this currently.
:param num_null_simulations: The number of simulations for testing the
statistical significance of the estimator
:param num_simulations: The number of simulations for finding the
confidence interval (and/or standard error) for an estimate
:param sample_size_fraction: The size of the sample for the bootstrap
estimator
:param confidence_level: The confidence level of the confidence
interval estimate
:param need_conditional_estimates: Boolean flag indicating whether
conditional estimates should be computed. Defaults to True if
there are effect modifiers in the graph
:param num_quantiles_to_discretize_cont_cols: The number of quantiles
into which a numeric effect modifier is split, to enable
estimation of conditional treatment effect over it.
:param kwargs: (optional) Additional estimator-specific parameters
:returns: an instance of the estimator class.
"""
self._data = data
self._target_estimand = identified_estimand
# Currently estimation methods only support univariate treatment and outcome
self._treatment_name = treatment
self._outcome_name = outcome[0] # assuming one-dimensional outcome
self._control_value = control_value
self._treatment_value = treatment_value
self._significance_test = test_significance
self._effect_strength_eval = evaluate_effect_strength
self._target_units = target_units
self._effect_modifier_names = effect_modifiers
self._confidence_intervals = confidence_intervals
self._bootstrap_estimates = None # for confidence intervals and std error
self._bootstrap_null_estimates = None # for significance test
self._effect_modifiers = None
self.method_params = kwargs
# Setting the default interpret method
self.interpret_method = CausalEstimator.DEFAULT_INTERPRET_METHOD
self.logger = logging.getLogger(__name__)
# Setting treatment and outcome values
if self._data is not None:
self._treatment = self._data[self._treatment_name]
self._outcome = self._data[self._outcome_name]
if self._effect_modifier_names:
# only add the observed nodes
self._effect_modifier_names = [
cname for cname in self._effect_modifier_names if cname in self._data.columns
]
if len(self._effect_modifier_names) > 0:
self._effect_modifiers = self._data[self._effect_modifier_names]
self._effect_modifiers = pd.get_dummies(self._effect_modifiers, drop_first=True)
self.logger.debug("Effect modifiers: " + ",".join(self._effect_modifier_names))
else:
self._effect_modifier_names = None
# Check if some parameters were set, otherwise set to default values
self.num_null_simulations = num_null_simulations
self.num_simulations = num_simulations
self.sample_size_fraction = sample_size_fraction
self.confidence_level = confidence_level
self.num_quantiles_to_discretize_cont_cols = num_quantiles_to_discretize_cont_cols
# Estimate conditional estimates by default
self.need_conditional_estimates = (
need_conditional_estimates if need_conditional_estimates != "auto" else bool(self._effect_modifier_names)
)
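# How the "auto" default above resolves (illustrative): with
# effect_modifiers=["X1"] (and "X1" present in the data),
# need_conditional_estimates becomes True; with effect_modifiers=None or [],
# it becomes False. An explicit True/False always overrides this inference.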
@staticmethod
def get_estimator_object(new_data, identified_estimand, estimate):
"""Create a new estimator of the same type as the one passed in the estimate argument.
Creates a new object with new_data and the identified_estimand
:param new_data: np.ndarray, pd.Series, pd.DataFrame
The newly assigned data on which the estimator should run
:param identified_estimand: IdentifiedEstimand
An instance of the identified estimand class that provides the information with
respect to which causal pathways are employed when the treatment affects the outcome
:param estimate: CausalEstimate
It is an already existing estimate whose properties we wish to replicate
:returns: An instance of the same estimator class that had generated the given estimate.
"""
estimator_class = estimate.params["estimator_class"]
new_estimator = estimator_class(
new_data,
identified_estimand,
identified_estimand.treatment_variable,
identified_estimand.outcome_variable,
# names of treatment and outcome
control_value=estimate.control_value,
treatment_value=estimate.treatment_value,
test_significance=False,
evaluate_effect_strength=False,
confidence_intervals=estimate.params["confidence_intervals"],
target_units=estimate.params["target_units"],
effect_modifiers=estimate.params["effect_modifiers"],
**estimate.params["method_params"] if estimate.params["method_params"] is not None else {},
)
return new_estimator
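# Hypothetical usage, e.g. from a refuter that re-runs estimation on
# modified data (`resampled_df` and `previous_estimate` are placeholders):
#
#   new_est = CausalEstimator.get_estimator_object(
#       resampled_df, identified_estimand, previous_estimate
#   )
#   refuted_value = new_est.estimate_effect().value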
def _estimate_effect(self):
"""This method is to be overriden by the child classes, so that they can run the estimation technique of their choice"""
raise NotImplementedError(
("Main estimation method is " + CausalEstimator.DEFAULT_NOTIMPLEMENTEDERROR_MSG).format(self.__class__)
)
def estimate_effect(self):
"""Base estimation method that calls the estimate_effect method of its calling subclass.
Can optionally also test significance and estimate effect strength for any returned estimate.
:param self: object instance of class Estimator
:returns: A CausalEstimate instance that contains point estimates of average and conditional effects. Based on the parameters provided, it optionally includes confidence intervals, standard errors, statistical significance and other statistical parameters.
"""
est = self._estimate_effect()
est.add_estimator(self)
if self._significance_test:
self.test_significance(est.value, method=self._significance_test)
if self._confidence_intervals:
self.estimate_confidence_intervals(
est.value, confidence_level=self.confidence_level, method=self._confidence_intervals
)
if self._effect_strength_eval:
effect_strength_dict = self.evaluate_effect_strength(est)
est.add_effect_strength(effect_strength_dict)
return est
def estimate_effect_naive(self):
# TODO Only works for binary treatment
df_withtreatment = self._data.loc[self._data[self._treatment_name] == 1]
df_notreatment = self._data.loc[self._data[self._treatment_name] == 0]
est = np.mean(df_withtreatment[self._outcome_name]) - np.mean(df_notreatment[self._outcome_name])
return CausalEstimate(est, None, None, control_value=0, treatment_value=1)
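# The naive estimate above is the unadjusted difference in means,
# E[Y | T=1] - E[Y | T=0]. For example, with outcome means 5.0 among the
# treated and 3.0 among controls it returns 2.0, regardless of confounding;
# it is used only as the denominator of the "fraction-effect" strength metric.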
def _estimate_effect_fn(self, data_df):
"""Function used in conditional effect estimation. This function is to be overridden by each child estimator.
The overridden function should take in a dataframe as input and return the estimate for that data.
"""
raise NotImplementedError(
("Conditional treatment effects are " + CausalEstimator.DEFAULT_NOTIMPLEMENTEDERROR_MSG).format(
self.__class__
)
)
def _estimate_conditional_effects(self, estimate_effect_fn, effect_modifier_names=None, num_quantiles=None):
"""Estimate conditional treatment effects. Common method for all estimators that utilizes a specific estimate_effect_fn implemented by each child estimator.
If a numeric effect modifier is provided, it is discretized into quantile bins. If you would like a custom discretization, you can do so yourself: create a new column containing the discretized effect modifier and then include that column's name in the effect_modifier_names argument.
:param estimate_effect_fn: Function that has a single parameter (a data frame) and returns the treatment effect estimate on that data.
:param effect_modifier_names: Names of effect modifier variables over which the conditional effects will be estimated. If not provided, defaults to the effect modifiers specified during creation of the CausalEstimator object.
:param num_quantiles: The number of quantiles into which a numeric effect modifier variable is discretized. Does not affect any categorical effect modifiers.
:returns: A (multi-index) dataframe that provides separate effects for each value of the (discretized) effect modifiers.
"""
# Defaulting to class default values if parameters are not provided
if effect_modifier_names is None:
effect_modifier_names = self._effect_modifier_names
if num_quantiles is None:
num_quantiles = self.num_quantiles_to_discretize_cont_cols
# Checking that there is at least one effect modifier
if not effect_modifier_names:
raise ValueError("At least one effect modifier should be specified to compute conditional effects.")
# Making sure that effect_modifier_names is a list
effect_modifier_names = parse_state(effect_modifier_names)
if not all(em in self._effect_modifier_names for em in effect_modifier_names):
self.logger.warning(
"At least one of the provided effect modifiers was not included while fitting the estimator. You may get incorrect results. To resolve, fit the estimator again by providing the updated effect modifiers in estimate_effect()."
)
# Making a copy since we are going to be changing effect modifier names
effect_modifier_names = effect_modifier_names.copy()
prefix = CausalEstimator.TEMP_CAT_COLUMN_PREFIX
# For every numeric effect modifier, adding a temp categorical column
for i in range(len(effect_modifier_names)):
em = effect_modifier_names[i]
if pd.api.types.is_numeric_dtype(self._data[em].dtypes):
self._data[prefix + str(em)] = pd.qcut(self._data[em], num_quantiles, duplicates="drop")
effect_modifier_names[i] = prefix + str(em)
# Grouping by effect modifiers and computing effect separately
by_effect_mods = self._data.groupby(effect_modifier_names)
conditional_estimates = by_effect_mods.apply(estimate_effect_fn)
# Deleting the temporary categorical columns
for em in effect_modifier_names:
if em.startswith(prefix):
self._data.pop(em)
return conditional_estimates
def _do(self, x, data_df=None):
raise NotImplementedError(
("Do-operator is " + CausalEstimator.DEFAULT_NOTIMPLEMENTEDERROR_MSG).format(self.__class__)
)
def do(self, x, data_df=None):
"""Method that implements the do-operator.
Given a value x for the treatment, returns the expected value of the outcome when the treatment is set to x via an intervention.
:param x: Value of the treatment
:param data_df: Data on which the do-operator is to be applied.
:returns: Value of the outcome when treatment is intervened/set to x.
"""
est = self._do(x, data_df)
return est
def construct_symbolic_estimator(self, estimand):
raise NotImplementedError(("Symbolic estimator string is ").format(self.__class__))
def _generate_bootstrap_estimates(self, num_bootstrap_simulations, sample_size_fraction):
"""Helper function to generate causal estimates over bootstrapped samples.
:param num_bootstrap_simulations: Number of simulations for the bootstrap method.
:param sample_size_fraction: Fraction of the dataset to be resampled.
:returns: A collections.namedtuple containing a list of bootstrapped estimates and a dictionary containing parameters used for the bootstrap.
"""
# The array that stores the results of all estimations
simulation_results = np.zeros(num_bootstrap_simulations)
# Compute the sample size as a fraction of the population (full dataset) size
sample_size = int(sample_size_fraction * len(self._data))
if sample_size > len(self._data):
self.logger.warning("WARN: The sample size is greater than the data being sampled")
self.logger.info("INFO: The sample size: {}".format(sample_size))
self.logger.info("INFO: The number of simulations: {}".format(num_bootstrap_simulations))
# Perform the set number of simulations
for index in range(num_bootstrap_simulations):
new_data = resample(self._data, n_samples=sample_size)
new_estimator = type(self)(
new_data,
self._target_estimand,
self._target_estimand.treatment_variable,
self._target_estimand.outcome_variable,
# names of treatment and outcome
treatment_value=self._treatment_value,
control_value=self._control_value,
test_significance=False,
evaluate_effect_strength=False,
confidence_intervals=False,
target_units=self._target_units,
effect_modifiers=self._effect_modifier_names,
**self.method_params,
)
new_effect = new_estimator.estimate_effect()
simulation_results[index] = new_effect.value
estimates = CausalEstimator.BootstrapEstimates(
simulation_results,
{"num_simulations": num_bootstrap_simulations, "sample_size_fraction": sample_size_fraction},
)
return estimates
def _estimate_confidence_intervals_with_bootstrap(
self, estimate_value, confidence_level=None, num_simulations=None, sample_size_fraction=None
):
"""
Method to compute confidence interval using bootstrapped sampling.
:param estimate_value: obtained estimate's value
:param confidence_level: The level for which to compute CI (e.g., 95% confidence level translates to confidence_level=0.95)
:param num_simulations: The number of simulations to be performed to get the bootstrap confidence intervals.
:param sample_size_fraction: The fraction of the dataset to be resampled.
:returns: confidence interval at the specified level.
For more details on bootstrap or resampling statistics, refer to the following links:
https://ocw.mit.edu/courses/mathematics/18-05-introduction-to-probability-and-statistics-spring-2014/readings/MIT18_05S14_Reading24.pdf
https://projecteuclid.org/download/pdf_1/euclid.ss/1032280214
"""
# Using class default parameters if not specified
if num_simulations is None:
num_simulations = self.num_simulations
if sample_size_fraction is None:
sample_size_fraction = self.sample_size_fraction
# Checking if bootstrap_estimates are already computed
if self._bootstrap_estimates is None:
self._bootstrap_estimates = self._generate_bootstrap_estimates(num_simulations, sample_size_fraction)
elif CausalEstimator.is_bootstrap_parameter_changed(self._bootstrap_estimates.params, locals()):
# Regenerate if any bootstrap parameter changed since the previous run
self._bootstrap_estimates = self._generate_bootstrap_estimates(num_simulations, sample_size_fraction)
# Now use the data obtained from the simulations to get the value of the confidence estimates
bootstrap_estimates = self._bootstrap_estimates.estimates
# Get the variations of each bootstrap estimate and sort
bootstrap_variations = [bootstrap_estimate - estimate_value for bootstrap_estimate in bootstrap_estimates]
sorted_bootstrap_variations = np.sort(bootstrap_variations)
# Now we take the (1 - p)th and the (p)th variations, where p is the chosen confidence level
upper_bound_index = int((1 - confidence_level) * len(sorted_bootstrap_variations))
lower_bound_index = int(confidence_level * len(sorted_bootstrap_variations))
# Get the lower and upper bounds by subtracting the variations from the estimate
lower_bound = estimate_value - sorted_bootstrap_variations[lower_bound_index]
upper_bound = estimate_value - sorted_bootstrap_variations[upper_bound_index]
return lower_bound, upper_bound
def _estimate_confidence_intervals(self, confidence_level=None, method=None, **kwargs):
"""
This method is to be overridden by the child classes, so that they
can run a confidence interval estimation method suited to the specific
causal estimator.
"""
raise NotImplementedError(
(
"This method for estimating confidence intervals is "
+ CausalEstimator.DEFAULT_NOTIMPLEMENTEDERROR_MSG
+ " Meanwhile, you can try the bootstrap method (method='bootstrap') to estimate confidence intervals."
).format(self.__class__)
)
def estimate_confidence_intervals(self, estimate_value, confidence_level=None, method=None, **kwargs):
"""Find the confidence intervals corresponding to any estimator
By default, this is done with the help of bootstrapped confidence intervals
but can be overridden if the specific estimator implements other methods of estimating confidence intervals.
If the method provided is not bootstrap, this function calls the implementation of the specific estimator.
:param estimate_value: obtained estimate's value
:param method: Method for estimating confidence intervals.
:param confidence_level: The confidence level of the confidence intervals of the estimate.
:param kwargs: Other optional args to be passed to the CI method.
:returns: The obtained confidence interval.
"""
if method is None:
if self._confidence_intervals:
method = self._confidence_intervals # this is either True or methodname
else:
method = "default"
confidence_intervals = None
if confidence_level is None:
confidence_level = self.confidence_level
if method == "default" or method is True: # user has not provided any method
try:
confidence_intervals = self._estimate_confidence_intervals(confidence_level, method=method, **kwargs)
except NotImplementedError:
confidence_intervals = self._estimate_confidence_intervals_with_bootstrap(
estimate_value, confidence_level, **kwargs
)
else:
if method == "bootstrap":
confidence_intervals = self._estimate_confidence_intervals_with_bootstrap(
estimate_value, confidence_level, **kwargs
)
else:
confidence_intervals = self._estimate_confidence_intervals(confidence_level, method=method, **kwargs)
return confidence_intervals
def _estimate_std_error_with_bootstrap(self, num_simulations=None, sample_size_fraction=None):
"""Compute standard error using the bootstrap method. Standard error
and confidence intervals use the same parameter num_simulations for
the number of bootstrap simulations.
:param num_simulations: Number of bootstrapped samples.
:param sample_size_fraction: Fraction of data to be resampled.
:returns: Standard error of the obtained estimate.
"""
# Use existing params, if new user defined params are not present
if num_simulations is None:
num_simulations = self.num_simulations
if sample_size_fraction is None:
sample_size_fraction = self.sample_size_fraction
# Checking if bootstrap_estimates are already computed
if self._bootstrap_estimates is None:
self._bootstrap_estimates = self._generate_bootstrap_estimates(num_simulations, sample_size_fraction)
elif CausalEstimator.is_bootstrap_parameter_changed(self._bootstrap_estimates.params, locals()):
# Check if any parameter is changed from the previous std error estimate
self._bootstrap_estimates = self._generate_bootstrap_estimates(num_simulations, sample_size_fraction)
std_error = np.std(self._bootstrap_estimates.estimates)
return std_error
def _estimate_std_error(self, method=None, **kwargs):
"""
This method is to be overridden by the child classes, so that they
can run a standard error estimation method suited to the specific
causal estimator.
"""
raise NotImplementedError(
(
"This method for estimating standard errors is "
+ CausalEstimator.DEFAULT_NOTIMPLEMENTEDERROR_MSG
+ " Meanwhile, you can try the bootstrap method (method='bootstrap') to estimate standard errors."
).format(self.__class__)
)
def estimate_std_error(self, method=None, **kwargs):
"""Compute standard error of an obtained causal estimate.
:param method: Method for computing the standard error.
:param kwargs: Other optional parameters to be passed to the estimating method.
:returns: Standard error of the causal estimate.
"""
if method is None:
if self._confidence_intervals:
method = self._confidence_intervals
else:
method = "default"
std_error = None
if method == "default" or method is True: # user has not provided any method
try:
std_error = self._estimate_std_error(method, **kwargs)
except NotImplementedError:
std_error = self._estimate_std_error_with_bootstrap(**kwargs)
else:
if method == "bootstrap":
std_error = self._estimate_std_error_with_bootstrap(**kwargs)
else:
std_error = self._estimate_std_error(method, **kwargs)
return std_error
def _test_significance_with_bootstrap(self, estimate_value, num_null_simulations=None):
"""Test statistical significance of an estimate using the bootstrap method.
:param estimate_value: Obtained estimate's value
:param num_null_simulations: Number of simulations for the null hypothesis
:returns: p-value of the statistical significance test.
"""
# Use existing params, if new user defined params are not present
if num_null_simulations is None:
num_null_simulations = self.num_null_simulations
do_retest = self._bootstrap_null_estimates is None or CausalEstimator.is_bootstrap_parameter_changed(
self._bootstrap_null_estimates.params, locals()
)
if do_retest:
null_estimates = np.zeros(num_null_simulations)
for i in range(num_null_simulations):
new_outcome = np.random.permutation(self._outcome)
new_data = self._data.assign(dummy_outcome=new_outcome)
# self._outcome = self._data["dummy_outcome"]
new_estimator = type(self)(
new_data,
self._target_estimand,
self._target_estimand.treatment_variable,
("dummy_outcome",),
test_significance=False,
evaluate_effect_strength=False,
confidence_intervals=False,
target_units=self._target_units,
effect_modifiers=self._effect_modifier_names,
**self.method_params,
)
new_effect = new_estimator.estimate_effect()
null_estimates[i] = new_effect.value
self._bootstrap_null_estimates = CausalEstimator.BootstrapEstimates(
null_estimates, {"num_null_simulations": num_null_simulations, "sample_size_fraction": 1}
)
# Processing the null hypothesis estimates
sorted_null_estimates = np.sort(self._bootstrap_null_estimates.estimates)
self.logger.debug("Null estimates: {0}".format(sorted_null_estimates))
median_estimate = sorted_null_estimates[int(num_null_simulations / 2)]
# Doing a two-sided test
if estimate_value > median_estimate:
# Being conservative with the p-value reported
estimate_index = np.searchsorted(sorted_null_estimates, estimate_value, side="left")
p_value = 1 - (estimate_index / num_null_simulations)
else:
# Being conservative with the p-value reported
estimate_index = np.searchsorted(sorted_null_estimates, estimate_value, side="right")
p_value = estimate_index / num_null_simulations
# If the p-value is exactly 0 or 1, report a range bounded by the simulation resolution
if p_value == 0:
p_value = (0, 1 / len(sorted_null_estimates)) # a tuple determining the range.
elif p_value == 1:
p_value = (1 - 1 / len(sorted_null_estimates), 1)
signif_dict = {"p_value": p_value}
return signif_dict
def _test_significance(self, estimate_value, method=None, **kwargs):
"""
This method is to be overridden by the child classes, so that they
can run a significance test suited to the specific
causal estimator.
"""
raise NotImplementedError(
(
"This method for testing statistical significance is "
+ CausalEstimator.DEFAULT_NOTIMPLEMENTEDERROR_MSG
+ " Meanwhile, you can try the bootstrap method (method='bootstrap') to test statistical significance."
).format(self.__class__)
)
def test_significance(self, estimate_value, method=None, **kwargs):
"""Test statistical significance of obtained estimate.
By default, uses resampling to create a non-parametric significance test.
A general procedure. Individual child estimators can implement different methods.
If the method name is different from "bootstrap", this function calls the
implementation of the child estimator.
:param self: object instance of class Estimator
:param estimate_value: obtained estimate's value
:param method: Method for checking statistical significance
:returns: p-value from the significance test
"""
if method is None:
if self._significance_test:
method = self._significance_test # this is either True or methodname
else:
method = "default"
signif_dict = None
if method == "default" or method is True: # user has not provided any method
try:
signif_dict = self._test_significance(estimate_value, method, **kwargs)
except NotImplementedError:
signif_dict = self._test_significance_with_bootstrap(estimate_value, **kwargs)
else:
if method == "bootstrap":
signif_dict = self._test_significance_with_bootstrap(estimate_value, **kwargs)
else:
signif_dict = self._test_significance(estimate_value, method, **kwargs)
return signif_dict
def evaluate_effect_strength(self, estimate):
fraction_effect_explained = self._evaluate_effect_strength(estimate, method="fraction-effect")
# Need to test r-squared before supporting
# effect_r_squared = self._evaluate_effect_strength(estimate, method="r-squared")
strength_dict = {
"fraction-effect": fraction_effect_explained
# 'r-squared': effect_r_squared
}
return strength_dict
def _evaluate_effect_strength(self, estimate, method="fraction-effect"):
supported_methods = ["fraction-effect"]
if method not in supported_methods:
raise NotImplementedError("This method is not supported for evaluating effect strength")
if method == "fraction-effect":
naive_obs_estimate = self.estimate_effect_naive()
self.logger.debug("Estimate: %s, Naive observed estimate: %s", estimate.value, naive_obs_estimate.value)
fraction_effect_explained = estimate.value / naive_obs_estimate.value
return fraction_effect_explained
# elif method == "r-squared":
# outcome_mean = np.mean(self._outcome)
# total_variance = np.sum(np.square(self._outcome - outcome_mean))
# Assuming a linear model with one variable: the treatment
# Currently only works for continuous y
# causal_model = outcome_mean + estimate.value*self._treatment
# squared_residual = np.sum(np.square(self._outcome - causal_model))
# r_squared = 1 - (squared_residual/total_variance)
# return r_squared
else:
return None
def update_input(self, treatment_value, control_value, target_units):
self._control_value = control_value
self._treatment_value = treatment_value
self._target_units = target_units
@staticmethod
def is_bootstrap_parameter_changed(bootstrap_estimates_params, given_params):
"""Check whether parameters of the bootstrap have changed.
This is an efficiency method that checks if fresh resampling of the bootstrap samples is required.
Returns True if parameters have changed and resampling should be done again.
:param bootstrap_estimates_params: A dictionary of parameters for the current bootstrap samples
:param given_params: A dictionary of parameters passed by the user
:returns: A binary flag denoting whether the parameters are different.
"""
is_any_parameter_changed = False
for prm, val in bootstrap_estimates_params.items():
given_val = given_params.get(prm, None)
if given_val is not None and given_val != val:
is_any_parameter_changed = True
break
return is_any_parameter_changed
def target_units_tostr(self):
s = ""
if type(self._target_units) is str:
s += self._target_units
elif callable(self._target_units):
s += "Data subset defined by a function"
elif isinstance(self._target_units, pd.DataFrame):
s += "Data subset provided as a data frame"
return s
def signif_results_tostr(self, signif_results):
s = ""
pval = signif_results["p_value"]
if type(pval) is tuple:
s += "[{0}, {1}]".format(pval[0], pval[1])
else:
s += "{0}".format(pval)
return s
def estimate_effect(
treatment: Union[str, List[str]],
outcome: Union[str, List[str]],
identified_estimand: IdentifiedEstimand,
identifier_name: str,
method: CausalEstimator,
control_value: int = 0,
treatment_value: int = 1,
test_significance: Optional[bool] = None,
evaluate_effect_strength: bool = False,
confidence_intervals: bool = False,
target_units: str = "ate",
effect_modifiers: List[str] = [],
fit_estimator: bool = True,
method_params: Optional[Dict] = None,
):
"""Estimate the identified causal effect.
Currently requires an explicit estimation method object to be specified. For reference, estimation method names follow the convention of identification method followed by the specific estimation method: "[backdoor/iv].estimation_method_name". The following methods are supported.
* Propensity Score Matching: "backdoor.propensity_score_matching"
* Propensity Score Stratification: "backdoor.propensity_score_stratification"
* Propensity Score-based Inverse Weighting: "backdoor.propensity_score_weighting"
* Linear Regression: "backdoor.linear_regression"
* Generalized Linear Models (e.g., logistic regression): "backdoor.generalized_linear_model"
* Instrumental Variables: "iv.instrumental_variable"
* Regression Discontinuity: "iv.regression_discontinuity"
In addition, you can directly call any of the EconML estimation methods. The convention is "backdoor.econml.path-to-estimator-class". For example, for the double machine learning estimator ("DML" class) that is located inside "dml" module of EconML, you can use the method name, "backdoor.econml.dml.DML". CausalML estimators can also be called. See `this demo notebook <https://py-why.github.io/dowhy/example_notebooks/dowhy-conditional-treatment-effects.html>`_.
:param treatment: Name of the treatment
:param outcome: Name of the outcome
:param identified_estimand: a probability expression
that represents the effect to be estimated. Output of
CausalModel.identify_effect method
:param identifier_name: name of the identification method used (e.g., "backdoor" or "iv")
:param method: an instance of a CausalEstimator subclass implementing the chosen estimation method.
:param control_value: Value of the treatment in the control group, for effect estimation. If treatment is multi-variate, this can be a list.
:param treatment_value: Value of the treatment in the treated group, for effect estimation. If treatment is multi-variate, this can be a list.
:param test_significance: Binary flag on whether to additionally do a statistical significance test for the estimate.
:param evaluate_effect_strength: (Experimental) Binary flag on whether to estimate the relative strength of the treatment's effect. This measure can be used to compare different treatments for the same outcome (by running this method with different treatments sequentially).
:param confidence_intervals: (Experimental) Binary flag indicating whether confidence intervals should be computed.
:param target_units: (Experimental) The units for which the treatment effect should be estimated. This can be of three types. (1) a string for common specifications of target units (namely, "ate", "att" and "atc"), (2) a lambda function that can be used as an index for the data (pandas DataFrame), or (3) a new DataFrame that contains values of the effect_modifiers and effect will be estimated only for this new data.
:param effect_modifiers: Names of effect modifier variables can be (optionally) specified here too, since they do not affect identification. If None, the effect_modifiers from the CausalModel are used.
:param fit_estimator: Boolean flag on whether to fit the estimator.
Setting it to False is useful to estimate the effect on new data using a previously fitted estimator.
:param method_params: Dictionary containing any method-specific parameters. These are passed directly to the estimating method. See the docs for each estimation method for allowed method-specific params.
:returns: An instance of the CausalEstimate class, containing the causal effect estimate
and other method-dependent information
"""
treatment = parse_state(treatment)
outcome = parse_state(outcome)
causal_estimator_class = method.__class__
identified_estimand.set_identifier_method(identifier_name)
if identified_estimand.no_directed_path:
logger.warning("No directed path from {0} to {1}.".format(treatment, outcome))
return CausalEstimate(
0, identified_estimand, None, control_value=control_value, treatment_value=treatment_value
)
# Check if estimator's target estimand is identified
elif identified_estimand.estimands[identifier_name] is None:
logger.error("No valid identified estimand available.")
return CausalEstimate(None, None, None, control_value=control_value, treatment_value=treatment_value)
method.update_input(treatment_value, control_value, target_units)
estimate = method.estimate_effect()
# Store parameters inside estimate object for refutation methods
# TODO: This add_params needs to move to the estimator class
# inside estimate_effect and estimate_conditional_effect
estimate.add_params(
estimand_type=identified_estimand.estimand_type,
estimator_class=causal_estimator_class,
test_significance=test_significance,
evaluate_effect_strength=evaluate_effect_strength,
confidence_intervals=confidence_intervals,
target_units=target_units,
effect_modifiers=effect_modifiers,
method_params=method_params,
)
return estimate
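# A minimal usage sketch of the function above. The variable names and exact
# keyword set are illustrative assumptions (the full signature is defined
# elsewhere): it presumes an `identified_estimand` from identification and an
# already constructed estimator instance `method`.
#
#     estimate = estimate_effect(
#         treatment="v0",
#         outcome="y",
#         identified_estimand=identified_estimand,
#         identifier_name="backdoor",
#         method=method,
#         control_value=0,
#         treatment_value=1,
#         target_units="ate",
#     )
#     print(estimate.value)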
class CausalEstimate:
"""Class for the estimate object that every causal estimator returns"""
def __init__(
self,
estimate,
target_estimand,
realized_estimand_expr,
control_value,
treatment_value,
conditional_estimates=None,
**kwargs,
):
self.value = estimate
self.target_estimand = target_estimand
self.realized_estimand_expr = realized_estimand_expr
self.control_value = control_value
self.treatment_value = treatment_value
self.conditional_estimates = conditional_estimates
self.params = kwargs
if self.params is not None:
for key, value in self.params.items():
setattr(self, key, value)
self.effect_strength = None
def add_estimator(self, estimator_instance):
self.estimator = estimator_instance
def add_effect_strength(self, strength_dict):
self.effect_strength = strength_dict
def add_params(self, **kwargs):
self.params.update(kwargs)
def get_confidence_intervals(self, confidence_level=None, method=None, **kwargs):
"""Get confidence intervals of the obtained estimate.
By default, this is done with the help of bootstrapped confidence intervals
but can be overridden if the specific estimator implements other methods of estimating confidence intervals.
If the method provided is not bootstrap, this function calls the implementation of the specific estimator.
:param method: Method for estimating confidence intervals.
:param confidence_level: The confidence level of the confidence intervals of the estimate.
:param kwargs: Other optional args to be passed to the CI method.
:returns: The obtained confidence interval.
"""
confidence_intervals = self.estimator.estimate_confidence_intervals(
estimate_value=self.value, confidence_level=confidence_level, method=method, **kwargs
)
return confidence_intervals
def get_standard_error(self, method=None, **kwargs):
"""Get standard error of the obtained estimate.
By default, this is done with the help of bootstrapped standard errors
but can be overridden if the specific estimator implements other methods of estimating standard error.
If the method provided is not bootstrap, this function calls the implementation of the specific estimator.
:param method: Method for computing the standard error.
:param kwargs: Other optional parameters to be passed to the estimating method.
:returns: Standard error of the causal estimate.
"""
std_error = self.estimator.estimate_std_error(method=method, **kwargs)
return std_error
def test_stat_significance(self, method=None, **kwargs):
"""Test statistical significance of the estimate obtained.
By default, uses resampling to create a non-parametric significance test.
Individual child estimators can implement different methods.
If the method name is different from "bootstrap", this function calls the
implementation of the child estimator.
:param method: Method for checking statistical significance
:param kwargs: Other optional parameters to be passed to the estimating method.
:returns: p-value from the significance test
"""
signif_results = self.estimator.test_significance(self.value, method=method, **kwargs)
return {"p_value": signif_results["p_value"]}
def estimate_conditional_effects(
self, effect_modifiers=None, num_quantiles=CausalEstimator.NUM_QUANTILES_TO_DISCRETIZE_CONT_COLS
):
"""Estimate treatment effect conditioned on given variables.
If a numeric effect modifier is provided, it is discretized into quantile bins. If you would like a custom discretization, you can do so yourself: create a new column containing the discretized effect modifier and then include that column's name in the effect_modifier_names argument.
:param effect_modifiers: Names of effect modifier variables over which the conditional effects will be estimated. If not provided, defaults to the effect modifiers specified during creation of the CausalEstimator object.
:param num_quantiles: The number of quantiles into which a numeric effect modifier variable is discretized. Does not affect any categorical effect modifiers.
:returns: A (multi-index) dataframe that provides separate effects for each value of the (discretized) effect modifiers.
"""
return self.estimator._estimate_conditional_effects(
self.estimator._estimate_effect_fn, effect_modifiers, num_quantiles
)
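# Sketch of the custom discretization route mentioned in the docstring above:
# instead of relying on automatic quantile binning, add your own bucket column
# first. `df` and the column names are illustrative assumptions.
#
#     df["age_bucket"] = pd.qcut(df["age"], q=4, labels=False, duplicates="drop")
#     estimate.estimate_conditional_effects(effect_modifiers=["age_bucket"])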
def interpret(self, method_name=None, **kwargs):
"""Interpret the causal estimate.
:param method_name: Method used (string) or a list of methods. If None, then the default for the specific estimator is used.
:param kwargs: Optional parameters that are directly passed to the interpreter method.
:returns: None
"""
if method_name is None:
method_name = self.estimator.interpret_method
method_name_arr = parse_state(method_name)
for method in method_name_arr:
interpreter = interpreters.get_class_object(method)
interpreter(self, **kwargs).interpret()
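# Usage sketch: with no arguments, interpret() falls back to the estimator's
# default interpreter (DEFAULT_INTERPRET_METHOD, i.e. the textual one).
#
#     estimate.interpret()
#     estimate.interpret(method_name="textual_effect_interpreter")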
def __str__(self):
s = "*** Causal Estimate ***\n"
# No estimand was identified (identification failed)
if self.target_estimand is None:
return "Estimation failed! No relevant identified estimand available for this estimation method."
s += "\n## Identified estimand\n{0}".format(self.target_estimand.__str__(only_target_estimand=True))
s += "\n## Realized estimand\n{0}".format(self.realized_estimand_expr)
if hasattr(self, "estimator"):
s += "\nTarget units: {0}\n".format(self.estimator.target_units_tostr())
s += "\n## Estimate\n"
s += "Mean value: {0}\n".format(self.value)
s += ""
if hasattr(self, "cate_estimates"):
s += "Effect estimates: {0}\n".format(self.cate_estimates)
if hasattr(self, "estimator"):
if self.estimator._significance_test:
s += "p-value: {0}\n".format(self.estimator.signif_results_tostr(self.test_stat_significance()))
if self.estimator._confidence_intervals:
s += "{0}% confidence interval: {1}\n".format(
100 * self.estimator.confidence_level, self.get_confidence_intervals()
)
if self.conditional_estimates is not None:
s += "### Conditional Estimates\n"
s += str(self.conditional_estimates)
if self.effect_strength is not None:
s += "\n## Effect Strength\n"
s += "Change in outcome attributable to treatment: {}\n".format(self.effect_strength["fraction-effect"])
# s += "Variance in outcome explained by treatment: {}\n".format(self.effect_strength["r-squared"])
return s
class RealizedEstimand(object):
def __init__(self, identified_estimand, estimator_name):
self.treatment_variable = identified_estimand.treatment_variable
self.outcome_variable = identified_estimand.outcome_variable
self.backdoor_variables = identified_estimand.get_backdoor_variables()
self.instrumental_variables = identified_estimand.instrumental_variables
self.estimand_type = identified_estimand.estimand_type
self.estimand_expression = None
self.assumptions = None
self.estimator_name = estimator_name
def update_assumptions(self, estimator_assumptions):
self.assumptions = estimator_assumptions
def update_estimand_expression(self, estimand_expression):
self.estimand_expression = estimand_expression
def __str__(self):
s = "Realized estimand: {0}\n".format(self.estimator_name)
s += "Realized estimand type: {0}\n".format(self.estimand_type)
s += "Estimand expression:\n{0}\n".format(sp.pretty(self.estimand_expression))
j = 1
for ass_name, ass_str in self.assumptions.items():
s += "Estimand assumption {0}, {1}: {2}\n".format(j, ass_name, ass_str)
j += 1
return s
| andresmor-ms | 2044d216c322a4b32c6eadce5da7d83463f19c2f | 05bfa49dacf0061988c96c6f3e3756219df5422a | In line 741 we use the graph to get the effect modifiers, unless you want them to be provided as a parameter, without the option to get them from the CausalGraph object? | andresmor-ms | 282
py-why/dowhy | 693 | Functional api/estimate effect function | #### Estimate Effect function
* Refactors the estimate effect into a separate function to keep backwards compatibility
#### TODO (future PRs):
* Add `fit(...)` method to estimators - Move data-related parameters from the constructor to the `fit(...)` method
* Refactor code to avoid `**kwargs` in `__init__(...)` constructors
| null | 2022-10-18 15:49:21+00:00 | 2022-10-25 17:02:02+00:00 | dowhy/causal_estimator.py | import logging
from collections import namedtuple
import numpy as np
import pandas as pd
import sympy as sp
from sklearn.utils import resample
import dowhy.interpreters as interpreters
from dowhy.utils.api import parse_state
class CausalEstimator:
"""Base class for an estimator of causal effect.
Subclasses implement different estimation methods. All estimation methods are in the package "dowhy.causal_estimators"
"""
# The default number of simulations for statistical testing
DEFAULT_NUMBER_OF_SIMULATIONS_STAT_TEST = 1000
# The default number of simulations to obtain confidence intervals
DEFAULT_NUMBER_OF_SIMULATIONS_CI = 100
# The portion of the total size that should be taken each time to find the confidence intervals
# 1 is the recommended value
# https://ocw.mit.edu/courses/mathematics/18-05-introduction-to-probability-and-statistics-spring-2014/readings/MIT18_05S14_Reading24.pdf
# https://projecteuclid.org/download/pdf_1/euclid.ss/1032280214
DEFAULT_SAMPLE_SIZE_FRACTION = 1
# The default Confidence Level
DEFAULT_CONFIDENCE_LEVEL = 0.95
# Number of quantiles to discretize continuous columns, for applying groupby
NUM_QUANTILES_TO_DISCRETIZE_CONT_COLS = 5
# Prefix to add to temporary categorical variables created after discretization
TEMP_CAT_COLUMN_PREFIX = "__categorical__"
DEFAULT_NOTIMPLEMENTEDERROR_MSG = "not yet implemented for {0}. If you would like this to be implemented in the next version, please raise an issue at https://github.com/microsoft/dowhy/issues"
BootstrapEstimates = namedtuple("BootstrapEstimates", ["estimates", "params"])
DEFAULT_INTERPRET_METHOD = ["textual_effect_interpreter"]
# std args to be removed from locals() before being passed to args_dict
_STD_INIT_ARGS = ("self", "__class__", "args", "kwargs")
def __init__(
self,
data,
identified_estimand,
treatment,
outcome,
control_value=0,
treatment_value=1,
test_significance=False,
evaluate_effect_strength=False,
confidence_intervals=False,
target_units=None,
effect_modifiers=None,
num_null_simulations=DEFAULT_NUMBER_OF_SIMULATIONS_STAT_TEST,
num_simulations=DEFAULT_NUMBER_OF_SIMULATIONS_CI,
sample_size_fraction=DEFAULT_SAMPLE_SIZE_FRACTION,
confidence_level=DEFAULT_CONFIDENCE_LEVEL,
need_conditional_estimates="auto",
num_quantiles_to_discretize_cont_cols=NUM_QUANTILES_TO_DISCRETIZE_CONT_COLS,
**kwargs,
):
"""Initializes an estimator with data and names of relevant variables.
This method is called from the constructors of its child classes.
:param data: data frame containing the data
:param identified_estimand: probability expression
representing the target identified estimand to estimate.
:param treatment: name of the treatment variable
:param outcome: name of the outcome variable
:param control_value: Value of the treatment in the control group, for effect estimation. If treatment is multi-variate, this can be a list.
:param treatment_value: Value of the treatment in the treated group, for effect estimation. If treatment is multi-variate, this can be a list.
:param test_significance: Binary flag or a string indicating whether to test significance and by which method. All estimators support test_significance="bootstrap" that estimates a p-value for the obtained estimate using the bootstrap method. Individual estimators can override this to support custom testing methods. The bootstrap method supports an optional parameter, num_null_simulations. If False, no testing is done. If True, significance of the estimate is tested using the custom method if available, otherwise by bootstrap.
:param evaluate_effect_strength: (Experimental) whether to evaluate the strength of effect
:param confidence_intervals: Binary flag or a string indicating whether the confidence intervals should be computed and which method should be used. All methods support estimation of confidence intervals using the bootstrap method by using the parameter confidence_intervals="bootstrap". The bootstrap method takes in two arguments (num_simulations and sample_size_fraction) that can be optionally specified in the params dictionary. Estimators may also override this to implement their own confidence interval method. If this parameter is False, no confidence intervals are computed. If True, confidence intervals are computed by the estimator's specific method if available, otherwise through bootstrap.
:param target_units: The units for which the treatment effect should be estimated. This can be a string for common specifications of target units (namely, "ate", "att" and "atc"). It can also be a lambda function that can be used as an index for the data (pandas DataFrame). Alternatively, it can be a new DataFrame that contains values of the effect_modifiers and effect will be estimated only for this new data.
:param effect_modifiers: Variables on which to compute separate
effects, or return a heterogeneous effect function. Not all
methods support this currently.
:param num_null_simulations: The number of simulations for testing the
statistical significance of the estimator
:param num_simulations: The number of simulations for finding the
confidence interval (and/or standard error) for an estimate
:param sample_size_fraction: The size of the sample for the bootstrap
estimator
:param confidence_level: The confidence level of the confidence
interval estimate
:param need_conditional_estimates: Boolean flag indicating whether
conditional estimates should be computed. Defaults to True if
there are effect modifiers in the graph
:param num_quantiles_to_discretize_cont_cols: The number of quantiles
into which a numeric effect modifier is split, to enable
estimation of conditional treatment effect over it.
:param kwargs: (optional) Additional estimator-specific parameters
:returns: an instance of the estimator class.
"""
self._data = data
self._target_estimand = identified_estimand
# Currently estimation methods only support univariate treatment and outcome
self._treatment_name = treatment
self._outcome_name = outcome[0] # assuming one-dimensional outcome
self._control_value = control_value
self._treatment_value = treatment_value
self._significance_test = test_significance
self._effect_strength_eval = evaluate_effect_strength
self._target_units = target_units
self._effect_modifier_names = effect_modifiers
self._confidence_intervals = confidence_intervals
self._bootstrap_estimates = None # for confidence intervals and std error
self._bootstrap_null_estimates = None # for significance test
self._effect_modifiers = None
self.method_params = kwargs
# Setting the default interpret method
self.interpret_method = CausalEstimator.DEFAULT_INTERPRET_METHOD
self.logger = logging.getLogger(__name__)
# Setting treatment and outcome values
if self._data is not None:
self._treatment = self._data[self._treatment_name]
self._outcome = self._data[self._outcome_name]
# Now saving the effect modifiers
if self._effect_modifier_names:
# only add the observed nodes
self._effect_modifier_names = [
cname for cname in self._effect_modifier_names if cname in self._data.columns
]
if len(self._effect_modifier_names) > 0:
self._effect_modifiers = self._data[self._effect_modifier_names]
self._effect_modifiers = pd.get_dummies(self._effect_modifiers, drop_first=True)
self.logger.debug("Effect modifiers: " + ",".join(self._effect_modifier_names))
else:
self._effect_modifier_names = None
# Check if some parameters were set, otherwise set to default values
self.num_null_simulations = num_null_simulations
self.num_simulations = num_simulations
self.sample_size_fraction = sample_size_fraction
self.confidence_level = confidence_level
self.num_quantiles_to_discretize_cont_cols = num_quantiles_to_discretize_cont_cols
# Estimate conditional estimates by default
self.need_conditional_estimates = (
need_conditional_estimates if need_conditional_estimates != "auto" else bool(self._effect_modifier_names)
)
@staticmethod
def get_estimator_object(new_data, identified_estimand, estimate):
"""Create a new estimator of the same type as the one passed in the estimate argument.
Creates a new object with new_data and the identified_estimand
:param new_data: np.ndarray, pd.Series, pd.DataFrame
The newly assigned data on which the estimator should run
:param identified_estimand: IdentifiedEstimand
An instance of the identified estimand class that provides the information with
respect to which causal pathways are employed when the treatment affects the outcome
:param estimate: CausalEstimate
It is an already existing estimate whose properties we wish to replicate
:returns: An instance of the same estimator class that had generated the given estimate.
"""
estimator_class = estimate.params["estimator_class"]
new_estimator = estimator_class(
new_data,
identified_estimand,
identified_estimand.treatment_variable,
identified_estimand.outcome_variable,
# names of treatment and outcome
control_value=estimate.control_value,
treatment_value=estimate.treatment_value,
test_significance=False,
evaluate_effect_strength=False,
confidence_intervals=estimate.params["confidence_intervals"],
target_units=estimate.params["target_units"],
effect_modifiers=estimate.params["effect_modifiers"],
**estimate.params["method_params"],
)
return new_estimator
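# Sketch of how a refuter can reuse this helper to re-run an estimator of the
# same type on modified data. `new_df` and `original_estimate` are
# illustrative names.
#
#     new_estimator = CausalEstimator.get_estimator_object(
#         new_df, identified_estimand, original_estimate
#     )
#     refutation_value = new_estimator.estimate_effect().value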
def _estimate_effect(self):
"""This method is to be overriden by the child classes, so that they can run the estimation technique of their choice"""
raise NotImplementedError(
("Main estimation method is " + CausalEstimator.DEFAULT_NOTIMPLEMENTEDERROR_MSG).format(self.__class__)
)
def estimate_effect(self):
"""Base estimation method that calls the estimate_effect method of its calling subclass.
Can optionally also test significance and estimate effect strength for any returned estimate.
:param self: object instance of class Estimator
:returns: A CausalEstimate instance that contains point estimates of average and conditional effects. Based on the parameters provided, it optionally includes confidence intervals, standard errors, statistical significance and other statistical parameters.
"""
est = self._estimate_effect()
est.add_estimator(self)
if self._significance_test:
self.test_significance(est.value, method=self._significance_test)
if self._confidence_intervals:
self.estimate_confidence_intervals(
est.value, confidence_level=self.confidence_level, method=self._confidence_intervals
)
if self._effect_strength_eval:
effect_strength_dict = self.evaluate_effect_strength(est)
est.add_effect_strength(effect_strength_dict)
return est
def estimate_effect_naive(self):
# TODO Only works for binary treatment
df_withtreatment = self._data.loc[self._data[self._treatment_name] == 1]
df_notreatment = self._data.loc[self._data[self._treatment_name] == 0]
est = np.mean(df_withtreatment[self._outcome_name]) - np.mean(df_notreatment[self._outcome_name])
return CausalEstimate(est, None, None, control_value=0, treatment_value=1)
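# The naive estimate above is just a difference in group means. An equivalent
# standalone pandas computation (illustrative column names "t" and "y"):
#
#     naive = df.loc[df["t"] == 1, "y"].mean() - df.loc[df["t"] == 0, "y"].mean()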
def _estimate_effect_fn(self, data_df):
"""Function used in conditional effect estimation. This function is to be overridden by each child estimator.
The overridden function should take in a dataframe as input and return the estimate for that data.
"""
raise NotImplementedError(
("Conditional treatment effects are " + CausalEstimator.DEFAULT_NOTIMPLEMENTEDERROR_MSG).format(
self.__class__
)
)
def _estimate_conditional_effects(self, estimate_effect_fn, effect_modifier_names=None, num_quantiles=None):
"""Estimate conditional treatment effects. Common method for all estimators that utilizes a specific estimate_effect_fn implemented by each child estimator.
If a numeric effect modifier is provided, it is discretized into quantile bins. If you would like a custom discretization, you can do so yourself: create a new column containing the discretized effect modifier and then include that column's name in the effect_modifier_names argument.
:param estimate_effect_fn: Function that has a single parameter (a data frame) and returns the treatment effect estimate on that data.
:param effect_modifier_names: Names of effect modifier variables over which the conditional effects will be estimated. If not provided, defaults to the effect modifiers specified during creation of the CausalEstimator object.
:param num_quantiles: The number of quantiles into which a numeric effect modifier variable is discretized. Does not affect any categorical effect modifiers.
:returns: A (multi-index) dataframe that provides separate effects for each value of the (discretized) effect modifiers.
"""
# Defaulting to class default values if parameters are not provided
if effect_modifier_names is None:
effect_modifier_names = self._effect_modifier_names
if num_quantiles is None:
num_quantiles = self.num_quantiles_to_discretize_cont_cols
# Checking that there is at least one effect modifier
if not effect_modifier_names:
raise ValueError("At least one effect modifier should be specified to compute conditional effects.")
# Making sure that effect_modifier_names is a list
effect_modifier_names = parse_state(effect_modifier_names)
if not all(em in self._effect_modifier_names for em in effect_modifier_names):
self.logger.warning(
"At least one of the provided effect modifiers was not included while fitting the estimator. You may get incorrect results. To resolve, fit the estimator again by providing the updated effect modifiers in estimate_effect()."
)
# Making a copy since we are going to be changing effect modifier names
effect_modifier_names = effect_modifier_names.copy()
prefix = CausalEstimator.TEMP_CAT_COLUMN_PREFIX
# For every numeric effect modifier, adding a temp categorical column
for i in range(len(effect_modifier_names)):
em = effect_modifier_names[i]
if pd.api.types.is_numeric_dtype(self._data[em].dtypes):
self._data[prefix + str(em)] = pd.qcut(self._data[em], num_quantiles, duplicates="drop")
effect_modifier_names[i] = prefix + str(em)
# Grouping by effect modifiers and computing effect separately
by_effect_mods = self._data.groupby(effect_modifier_names)
cond_est_fn = lambda x: self._do(self._treatment_value, x) - self._do(self._control_value, x)
conditional_estimates = by_effect_mods.apply(estimate_effect_fn)
# Deleting the temporary categorical columns
for em in effect_modifier_names:
if em.startswith(prefix):
self._data.pop(em)
return conditional_estimates
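# Standalone illustration of the quantile discretization used above: pd.qcut
# splits a numeric column into (roughly) equal-sized bins, dropping duplicate
# bin edges when the data is heavily tied.
#
#     import pandas as pd
#     s = pd.Series([1.0, 2.5, 3.7, 8.1, 9.9, 12.0])
#     pd.qcut(s, 3, duplicates="drop")   # three quantile bins as Interval labels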
def _do(self, x, data_df=None):
raise NotImplementedError(
("Do-operator is " + CausalEstimator.DEFAULT_NOTIMPLEMENTEDERROR_MSG).format(self.__class__)
)
def do(self, x, data_df=None):
"""Method that implements the do-operator.
Given a value x for the treatment, returns the expected value of the outcome when the treatment is intervened to a value x.
:param x: Value of the treatment
:param data_df: Data on which the do-operator is to be applied.
:returns: Value of the outcome when treatment is intervened/set to x.
"""
est = self._do(x, data_df)
return est
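# Usage sketch for the do-operator (assumes `estimator` is a child estimator
# that implements _do, e.g. a regression-based one):
#
#     y_at_1 = estimator.do(1)   # expected outcome under do(T=1)
#     y_at_0 = estimator.do(0)   # expected outcome under do(T=0)
#     effect = y_at_1 - y_at_0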
def construct_symbolic_estimator(self, estimand):
raise NotImplementedError(("Symbolic estimator string is ").format(self.__class__))
def _generate_bootstrap_estimates(self, num_bootstrap_simulations, sample_size_fraction):
"""Helper function to generate causal estimates over bootstrapped samples.
:param num_bootstrap_simulations: Number of simulations for the bootstrap method.
:param sample_size_fraction: Fraction of the dataset to be resampled.
:returns: A collections.namedtuple containing a list of bootstrapped estimates and a dictionary containing parameters used for the bootstrap.
"""
# The array that stores the results of all estimations
simulation_results = np.zeros(num_bootstrap_simulations)
# Find the sample size as a proportion of the population size
sample_size = int(sample_size_fraction * len(self._data))
if sample_size > len(self._data):
self.logger.warning("WARN: The sample size is greater than the data being sampled")
self.logger.info("INFO: The sample size: {}".format(sample_size))
self.logger.info("INFO: The number of simulations: {}".format(num_bootstrap_simulations))
# Perform the set number of simulations
for index in range(num_bootstrap_simulations):
new_data = resample(self._data, n_samples=sample_size)
new_estimator = type(self)(
new_data,
self._target_estimand,
self._target_estimand.treatment_variable,
self._target_estimand.outcome_variable,
# names of treatment and outcome
treatment_value=self._treatment_value,
control_value=self._control_value,
test_significance=False,
evaluate_effect_strength=False,
confidence_intervals=False,
target_units=self._target_units,
effect_modifiers=self._effect_modifier_names,
**self.method_params,
)
new_effect = new_estimator.estimate_effect()
simulation_results[index] = new_effect.value
estimates = CausalEstimator.BootstrapEstimates(
simulation_results,
{"num_simulations": num_bootstrap_simulations, "sample_size_fraction": sample_size_fraction},
)
return estimates
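# The core resampling step above relies on sklearn.utils.resample, which draws
# rows with replacement. Standalone illustration (assumes a DataFrame `df`):
#
#     from sklearn.utils import resample
#     boot_df = resample(df, n_samples=len(df))   # one bootstrap replicate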
def _estimate_confidence_intervals_with_bootstrap(
self, estimate_value, confidence_level=None, num_simulations=None, sample_size_fraction=None
):
"""
Method to compute confidence interval using bootstrapped sampling.
:param estimate_value: obtained estimate's value
:param confidence_level: The level for which to compute CI (e.g., 95% confidence level translates to confidence_level=0.95)
:param num_simulations: The number of simulations to be performed to get the bootstrap confidence intervals.
:param sample_size_fraction: The fraction of the dataset to be resampled.
:returns: confidence interval at the specified level.
For more details on bootstrap or resampling statistics, refer to the following links:
https://ocw.mit.edu/courses/mathematics/18-05-introduction-to-probability-and-statistics-spring-2014/readings/MIT18_05S14_Reading24.pdf
https://projecteuclid.org/download/pdf_1/euclid.ss/1032280214
"""
# Using class default parameters if not specified
if num_simulations is None:
num_simulations = self.num_simulations
if sample_size_fraction is None:
sample_size_fraction = self.sample_size_fraction
# Checking if bootstrap_estimates are already computed
if self._bootstrap_estimates is None:
self._bootstrap_estimates = self._generate_bootstrap_estimates(num_simulations, sample_size_fraction)
elif CausalEstimator.is_bootstrap_parameter_changed(self._bootstrap_estimates.params, locals()):
# Checked if any parameter is changed from the previous std error estimate
self._bootstrap_estimates = self._generate_bootstrap_estimates(num_simulations, sample_size_fraction)
# Now use the data obtained from the simulations to get the value of the confidence estimates
bootstrap_estimates = self._bootstrap_estimates.estimates
# Get the variations of each bootstrap estimate and sort
bootstrap_variations = [bootstrap_estimate - estimate_value for bootstrap_estimate in bootstrap_estimates]
sorted_bootstrap_variations = np.sort(bootstrap_variations)
# Now we take the (1- p)th and the (p)th variations, where p is the chosen confidence level
upper_bound_index = int((1 - confidence_level) * len(sorted_bootstrap_variations))
lower_bound_index = int(confidence_level * len(sorted_bootstrap_variations))
# Get the lower and upper bounds by subtracting the variations from the estimate
lower_bound = estimate_value - sorted_bootstrap_variations[lower_bound_index]
upper_bound = estimate_value - sorted_bootstrap_variations[upper_bound_index]
return lower_bound, upper_bound
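# Self-contained numeric illustration of the interval logic above (the
# "basic"/pivot bootstrap: subtract sorted deviations from the point
# estimate). Numbers are synthetic.
#
#     import numpy as np
#     rng = np.random.default_rng(0)
#     est = 2.0
#     boot = rng.normal(loc=est, scale=0.3, size=1000)  # bootstrapped estimates
#     variations = np.sort(boot - est)
#     lower = est - variations[int(0.95 * len(variations))]
#     upper = est - variations[int(0.05 * len(variations))]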
def _estimate_confidence_intervals(self, confidence_level=None, method=None, **kwargs):
"""
This method is to be overridden by the child classes, so that they
can run a confidence interval estimation method suited to the specific
causal estimator.
"""
raise NotImplementedError(
(
"This method for estimating confidence intervals is "
+ CausalEstimator.DEFAULT_NOTIMPLEMENTEDERROR_MSG
+ " Meanwhile, you can try the bootstrap method (method='bootstrap') to estimate confidence intervals."
).format(self.__class__)
)
def estimate_confidence_intervals(self, estimate_value, confidence_level=None, method=None, **kwargs):
"""Find the confidence intervals corresponding to any estimator
By default, this is done with the help of bootstrapped confidence intervals
but can be overridden if the specific estimator implements other methods of estimating confidence intervals.
If the method provided is not bootstrap, this function calls the implementation of the specific estimator.
:param estimate_value: obtained estimate's value
:param method: Method for estimating confidence intervals.
:param confidence_level: The confidence level of the confidence intervals of the estimate.
:param kwargs: Other optional args to be passed to the CI method.
:returns: The obtained confidence interval.
"""
if method is None:
if self._confidence_intervals:
method = self._confidence_intervals # this is either True or methodname
else:
method = "default"
confidence_intervals = None
if confidence_level is None:
confidence_level = self.confidence_level
if method == "default" or method is True: # user has not provided any method
try:
confidence_intervals = self._estimate_confidence_intervals(confidence_level, method=method, **kwargs)
except NotImplementedError:
confidence_intervals = self._estimate_confidence_intervals_with_bootstrap(
estimate_value, confidence_level, **kwargs
)
else:
if method == "bootstrap":
confidence_intervals = self._estimate_confidence_intervals_with_bootstrap(
estimate_value, confidence_level, **kwargs
)
else:
confidence_intervals = self._estimate_confidence_intervals(confidence_level, method=method, **kwargs)
return confidence_intervals
def _estimate_std_error_with_bootstrap(self, num_simulations=None, sample_size_fraction=None):
"""Compute standard error using the bootstrap method. Standard error
and confidence intervals use the same parameter num_simulations for
the number of bootstrap simulations.
:param num_simulations: Number of bootstrapped samples.
:param sample_size_fraction: Fraction of data to be resampled.
:returns: Standard error of the obtained estimate.
"""
# Use existing params, if new user defined params are not present
if num_simulations is None:
num_simulations = self.num_simulations
if sample_size_fraction is None:
sample_size_fraction = self.sample_size_fraction
# Checking if bootstrap_estimates are already computed
if self._bootstrap_estimates is None:
self._bootstrap_estimates = self._generate_bootstrap_estimates(num_simulations, sample_size_fraction)
elif CausalEstimator.is_bootstrap_parameter_changed(self._bootstrap_estimates.params, locals()):
# Check if any parameter is changed from the previous std error estimate
self._bootstrap_estimates = self._generate_bootstrap_estimates(num_simulations, sample_size_fraction)
std_error = np.std(self._bootstrap_estimates.estimates)
return std_error
def _estimate_std_error(self, method=None, **kwargs):
"""
This method is to be overridden by the child classes, so that they
can run a standard error estimation method suited to the specific
causal estimator.
"""
raise NotImplementedError(
(
"This method for estimating standard errors is "
+ CausalEstimator.DEFAULT_NOTIMPLEMENTEDERROR_MSG
+ " Meanwhile, you can try the bootstrap method (method='bootstrap') to estimate standard errors."
).format(self.__class__)
)
def estimate_std_error(self, method=None, **kwargs):
"""Compute standard error of an obtained causal estimate.
:param method: Method for computing the standard error.
:param kwargs: Other optional parameters to be passed to the estimating method.
:returns: Standard error of the causal estimate.
"""
if method is None:
if self._confidence_intervals:
method = self._confidence_intervals
else:
method = "default"
std_error = None
if method == "default" or method is True: # user has not provided any method
try:
std_error = self._estimate_std_error(method, **kwargs)
except NotImplementedError:
std_error = self._estimate_std_error_with_bootstrap(**kwargs)
else:
if method == "bootstrap":
std_error = self._estimate_std_error_with_bootstrap(**kwargs)
else:
std_error = self._estimate_std_error(method, **kwargs)
return std_error
def _test_significance_with_bootstrap(self, estimate_value, num_null_simulations=None):
"""Test statistical significance of an estimate using the bootstrap method.
:param estimate_value: Obtained estimate's value
:param num_null_simulations: Number of simulations for the null hypothesis
:returns: p-value of the statistical significance test.
"""
# Use existing params, if new user defined params are not present
if num_null_simulations is None:
num_null_simulations = self.num_null_simulations
do_retest = self._bootstrap_null_estimates is None or CausalEstimator.is_bootstrap_parameter_changed(
self._bootstrap_null_estimates.params, locals()
)
if do_retest:
null_estimates = np.zeros(num_null_simulations)
for i in range(num_null_simulations):
new_outcome = np.random.permutation(self._outcome)
new_data = self._data.assign(dummy_outcome=new_outcome)
# self._outcome = self._data["dummy_outcome"]
new_estimator = type(self)(
new_data,
self._target_estimand,
self._target_estimand.treatment_variable,
("dummy_outcome",),
test_significance=False,
evaluate_effect_strength=False,
confidence_intervals=False,
target_units=self._target_units,
effect_modifiers=self._effect_modifier_names,
**self.method_params,
)
new_effect = new_estimator.estimate_effect()
null_estimates[i] = new_effect.value
self._bootstrap_null_estimates = CausalEstimator.BootstrapEstimates(
null_estimates, {"num_null_simulations": num_null_simulations, "sample_size_fraction": 1}
)
# Processing the null hypothesis estimates
sorted_null_estimates = np.sort(self._bootstrap_null_estimates.estimates)
self.logger.debug("Null estimates: {0}".format(sorted_null_estimates))
median_estimate = sorted_null_estimates[int(num_null_simulations / 2)]
# Doing a two-sided test
if estimate_value > median_estimate:
# Being conservative with the p-value reported
estimate_index = np.searchsorted(sorted_null_estimates, estimate_value, side="left")
p_value = 1 - (estimate_index / num_null_simulations)
if estimate_value <= median_estimate:
# Being conservative with the p-value reported
estimate_index = np.searchsorted(sorted_null_estimates, estimate_value, side="right")
p_value = estimate_index / num_null_simulations
# If the estimate_index is 0, it depends on the number of simulations
if p_value == 0:
p_value = (0, 1 / len(sorted_null_estimates)) # a tuple determining the range.
elif p_value == 1:
p_value = (1 - 1 / len(sorted_null_estimates), 1)
signif_dict = {"p_value": p_value}
return signif_dict
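# The test above is a permutation test: shuffling the outcome breaks any
# treatment-outcome association, giving draws from the null distribution. A
# minimal standalone analogue of the p-value step (synthetic numbers):
#
#     import numpy as np
#     sorted_null = np.sort(np.random.default_rng(0).normal(size=1000))
#     estimate_value = 2.1   # larger than the null median here
#     idx = np.searchsorted(sorted_null, estimate_value, side="left")
#     p_value = 1 - idx / len(sorted_null)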
def _test_significance(self, estimate_value, method=None, **kwargs):
"""
This method is to be overridden by the child classes, so that they
can run a significance test suited to the specific
causal estimator.
"""
raise NotImplementedError(
(
"This method for testing statistical significance is "
+ CausalEstimator.DEFAULT_NOTIMPLEMENTEDERROR_MSG
+ " Meanwhile, you can try the bootstrap method (method='bootstrap') to test statistical significance."
).format(self.__class__)
)
def test_significance(self, estimate_value, method=None, **kwargs):
"""Test statistical significance of obtained estimate.
By default, uses resampling to create a non-parametric significance test.
A general procedure. Individual child estimators can implement different methods.
If the method name is different from "bootstrap", this function calls the
implementation of the child estimator.
:param self: object instance of class Estimator
:param estimate_value: obtained estimate's value
:param method: Method for checking statistical significance
:returns: p-value from the significance test
"""
if method is None:
if self._significance_test:
method = self._significance_test # this is either True or methodname
else:
method = "default"
signif_dict = None
if method == "default" or method is True: # user has not provided any method
try:
signif_dict = self._test_significance(estimate_value, method, **kwargs)
except NotImplementedError:
signif_dict = self._test_significance_with_bootstrap(estimate_value, **kwargs)
else:
if method == "bootstrap":
signif_dict = self._test_significance_with_bootstrap(estimate_value, **kwargs)
else:
signif_dict = self._test_significance(estimate_value, method, **kwargs)
return signif_dict
def evaluate_effect_strength(self, estimate):
fraction_effect_explained = self._evaluate_effect_strength(estimate, method="fraction-effect")
# Need to test r-squared before supporting
# effect_r_squared = self._evaluate_effect_strength(estimate, method="r-squared")
strength_dict = {
"fraction-effect": fraction_effect_explained
# 'r-squared': effect_r_squared
}
return strength_dict
def _evaluate_effect_strength(self, estimate, method="fraction-effect"):
supported_methods = ["fraction-effect"]
if method not in supported_methods:
raise NotImplementedError("This method is not supported for evaluating effect strength")
if method == "fraction-effect":
naive_obs_estimate = self.estimate_effect_naive()
self.logger.debug(estimate.value, naive_obs_estimate.value)
fraction_effect_explained = estimate.value / naive_obs_estimate.value
return fraction_effect_explained
# elif method == "r-squared":
# outcome_mean = np.mean(self._outcome)
# total_variance = np.sum(np.square(self._outcome - outcome_mean))
# Assuming a linear model with one variable: the treatment
# Currently only works for continuous y
# causal_model = outcome_mean + estimate.value*self._treatment
# squared_residual = np.sum(np.square(self._outcome - causal_model))
# r_squared = 1 - (squared_residual/total_variance)
# return r_squared
else:
return None
def update_input(self, treatment_value, control_value, target_units):
self._control_value = control_value
self._treatment_value = treatment_value
self._target_units = target_units
@staticmethod
def is_bootstrap_parameter_changed(bootstrap_estimates_params, given_params):
"""Check whether parameters of the bootstrap have changed.
This is an efficiency method that checks if fresh resampling of the bootstrap samples is required.
Returns True if parameters have changed and resampling should be done again.
:param bootstrap_estimates_params: A dictionary of parameters for the current bootstrap samples
:param given_params: A dictionary of parameters passed by the user
:returns: A binary flag denoting whether the parameters are different.
"""
is_any_parameter_changed = False
for prm, val in bootstrap_estimates_params.items():
given_val = given_params.get(prm, None)
if given_val is not None and given_val != val:
is_any_parameter_changed = True
break
return is_any_parameter_changed
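# Example of the staleness check above (illustrative values): a changed
# num_simulations forces the bootstrap samples to be regenerated.
#
#     cached = {"num_simulations": 100, "sample_size_fraction": 1}
#     CausalEstimator.is_bootstrap_parameter_changed(cached, {"num_simulations": 200})
#     # -> True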
def target_units_tostr(self):
s = ""
if type(self._target_units) is str:
s += self._target_units
elif callable(self._target_units):
s += "Data subset defined by a function"
elif isinstance(self._target_units, pd.DataFrame):
s += "Data subset provided as a data frame"
return s
def signif_results_tostr(self, signif_results):
s = ""
pval = signif_results["p_value"]
if type(pval) is tuple:
s += "[{0}, {1}]".format(pval[0], pval[1])
else:
s += "{0}".format(pval)
return s
class CausalEstimate:
"""Class for the estimate object that every causal estimator returns"""
def __init__(
self,
estimate,
target_estimand,
realized_estimand_expr,
control_value,
treatment_value,
conditional_estimates=None,
**kwargs,
):
self.value = estimate
self.target_estimand = target_estimand
self.realized_estimand_expr = realized_estimand_expr
self.control_value = control_value
self.treatment_value = treatment_value
self.conditional_estimates = conditional_estimates
self.params = kwargs
if self.params is not None:
for key, value in self.params.items():
setattr(self, key, value)
self.effect_strength = None
def add_estimator(self, estimator_instance):
self.estimator = estimator_instance
def add_effect_strength(self, strength_dict):
self.effect_strength = strength_dict
def add_params(self, **kwargs):
self.params.update(kwargs)
def get_confidence_intervals(self, confidence_level=None, method=None, **kwargs):
"""Get confidence intervals of the obtained estimate.
By default, this is done with the help of bootstrapped confidence intervals
but can be overridden if the specific estimator implements other methods of estimating confidence intervals.
If the method provided is not bootstrap, this function calls the implementation of the specific estimator.
:param method: Method for estimating confidence intervals.
:param confidence_level: The confidence level of the confidence intervals of the estimate.
:param kwargs: Other optional args to be passed to the CI method.
:returns: The obtained confidence interval.
"""
confidence_intervals = self.estimator.estimate_confidence_intervals(
estimate_value=self.value, confidence_level=confidence_level, method=method, **kwargs
)
return confidence_intervals
def get_standard_error(self, method=None, **kwargs):
"""Get standard error of the obtained estimate.
By default, this is done with the help of bootstrapped standard errors
but can be overridden if the specific estimator implements other methods of estimating standard error.
If the method provided is not bootstrap, this function calls the implementation of the specific estimator.
:param method: Method for computing the standard error.
:param kwargs: Other optional parameters to be passed to the estimating method.
:returns: Standard error of the causal estimate.
"""
std_error = self.estimator.estimate_std_error(method=method, **kwargs)
return std_error
def test_stat_significance(self, method=None, **kwargs):
"""Test statistical significance of the estimate obtained.
By default, uses resampling to create a non-parametric significance test.
Individual child estimators can implement different methods.
If the method name is different from "bootstrap", this function calls the
implementation of the child estimator.
:param method: Method for checking statistical significance
:param kwargs: Other optional parameters to be passed to the estimating method.
:returns: p-value from the significance test
"""
signif_results = self.estimator.test_significance(self.value, method=method, **kwargs)
return {"p_value": signif_results["p_value"]}
def estimate_conditional_effects(
self, effect_modifiers=None, num_quantiles=CausalEstimator.NUM_QUANTILES_TO_DISCRETIZE_CONT_COLS
):
"""Estimate treatment effect conditioned on given variables.
If a numeric effect modifier is provided, it is discretized into quantile bins. If you would like a custom discretization, you can do so yourself: create a new column containing the discretized effect modifier and then include that column's name in the effect_modifier_names argument.
:param effect_modifiers: Names of effect modifier variables over which the conditional effects will be estimated. If not provided, defaults to the effect modifiers specified during creation of the CausalEstimator object.
:param num_quantiles: The number of quantiles into which a numeric effect modifier variable is discretized. Does not affect any categorical effect modifiers.
:returns: A (multi-index) dataframe that provides separate effects for each value of the (discretized) effect modifiers.
"""
return self.estimator._estimate_conditional_effects(
self.estimator._estimate_effect_fn, effect_modifiers, num_quantiles
)
def interpret(self, method_name=None, **kwargs):
"""Interpret the causal estimate.
:param method_name: Method used (string) or a list of methods. If None, then the default for the specific estimator is used.
:param kwargs: Optional parameters that are directly passed to the interpreter method.
:returns: None
"""
if method_name is None:
method_name = self.estimator.interpret_method
method_name_arr = parse_state(method_name)
for method in method_name_arr:
interpreter = interpreters.get_class_object(method)
interpreter(self, **kwargs).interpret()
def __str__(self):
s = "*** Causal Estimate ***\n"
# No estimand was identified (identification failed)
if self.target_estimand is None:
return "Estimation failed! No relevant identified estimand available for this estimation method."
s += "\n## Identified estimand\n{0}".format(self.target_estimand.__str__(only_target_estimand=True))
s += "\n## Realized estimand\n{0}".format(self.realized_estimand_expr)
if hasattr(self, "estimator"):
s += "\nTarget units: {0}\n".format(self.estimator.target_units_tostr())
s += "\n## Estimate\n"
s += "Mean value: {0}\n".format(self.value)
s += ""
if hasattr(self, "cate_estimates"):
s += "Effect estimates: {0}\n".format(self.cate_estimates)
if hasattr(self, "estimator"):
if self.estimator._significance_test:
s += "p-value: {0}\n".format(self.estimator.signif_results_tostr(self.test_stat_significance()))
if self.estimator._confidence_intervals:
s += "{0}% confidence interval: {1}\n".format(
100 * self.estimator.confidence_level, self.get_confidence_intervals()
)
if self.conditional_estimates is not None:
s += "### Conditional Estimates\n"
s += str(self.conditional_estimates)
if self.effect_strength is not None:
s += "\n## Effect Strength\n"
s += "Change in outcome attributable to treatment: {}\n".format(self.effect_strength["fraction-effect"])
# s += "Variance in outcome explained by treatment: {}\n".format(self.effect_strength["r-squared"])
return s
class RealizedEstimand(object):
def __init__(self, identified_estimand, estimator_name):
self.treatment_variable = identified_estimand.treatment_variable
self.outcome_variable = identified_estimand.outcome_variable
self.backdoor_variables = identified_estimand.get_backdoor_variables()
self.instrumental_variables = identified_estimand.instrumental_variables
self.estimand_type = identified_estimand.estimand_type
self.estimand_expression = None
self.assumptions = None
self.estimator_name = estimator_name
def update_assumptions(self, estimator_assumptions):
self.assumptions = estimator_assumptions
def update_estimand_expression(self, estimand_expression):
self.estimand_expression = estimand_expression
def __str__(self):
s = "Realized estimand: {0}\n".format(self.estimator_name)
s += "Realized estimand type: {0}\n".format(self.estimand_type)
s += "Estimand expression:\n{0}\n".format(sp.pretty(self.estimand_expression))
j = 1
for ass_name, ass_str in self.assumptions.items():
s += "Estimand assumption {0}, {1}: {2}\n".format(j, ass_name, ass_str)
j += 1
return s
| import logging
from collections import namedtuple
from typing import Dict, List, Optional, Union
import numpy as np
import pandas as pd
import sympy as sp
from sklearn.utils import resample
import dowhy.interpreters as interpreters
from dowhy import causal_estimators
from dowhy.causal_graph import CausalGraph
from dowhy.causal_identifier.identified_estimand import IdentifiedEstimand
from dowhy.utils.api import parse_state
logger = logging.getLogger(__name__)
class CausalEstimator:
"""Base class for an estimator of causal effect.
Subclasses implement different estimation methods. All estimation methods are in the package "dowhy.causal_estimators"
"""
# The default number of simulations for statistical testing
DEFAULT_NUMBER_OF_SIMULATIONS_STAT_TEST = 1000
# The default number of simulations to obtain confidence intervals
DEFAULT_NUMBER_OF_SIMULATIONS_CI = 100
# The portion of the total size that should be taken each time to find the confidence intervals
# 1 is the recommended value
# https://ocw.mit.edu/courses/mathematics/18-05-introduction-to-probability-and-statistics-spring-2014/readings/MIT18_05S14_Reading24.pdf
# https://projecteuclid.org/download/pdf_1/euclid.ss/1032280214
DEFAULT_SAMPLE_SIZE_FRACTION = 1
# The default Confidence Level
DEFAULT_CONFIDENCE_LEVEL = 0.95
# Number of quantiles to discretize continuous columns, for applying groupby
NUM_QUANTILES_TO_DISCRETIZE_CONT_COLS = 5
# Prefix to add to temporary categorical variables created after discretization
TEMP_CAT_COLUMN_PREFIX = "__categorical__"
DEFAULT_NOTIMPLEMENTEDERROR_MSG = "not yet implemented for {0}. If you would like this to be implemented in the next version, please raise an issue at https://github.com/microsoft/dowhy/issues"
BootstrapEstimates = namedtuple("BootstrapEstimates", ["estimates", "params"])
DEFAULT_INTERPRET_METHOD = ["textual_effect_interpreter"]
# std args to be removed from locals() before being passed to args_dict
_STD_INIT_ARGS = ("self", "__class__", "args", "kwargs")
def __init__(
self,
data,
identified_estimand,
treatment,
outcome,
control_value=0,
treatment_value=1,
test_significance=False,
evaluate_effect_strength=False,
confidence_intervals=False,
target_units=None,
effect_modifiers=None,
num_null_simulations=DEFAULT_NUMBER_OF_SIMULATIONS_STAT_TEST,
num_simulations=DEFAULT_NUMBER_OF_SIMULATIONS_CI,
sample_size_fraction=DEFAULT_SAMPLE_SIZE_FRACTION,
confidence_level=DEFAULT_CONFIDENCE_LEVEL,
need_conditional_estimates="auto",
num_quantiles_to_discretize_cont_cols=NUM_QUANTILES_TO_DISCRETIZE_CONT_COLS,
**kwargs,
):
"""Initializes an estimator with data and names of relevant variables.
This method is called from the constructors of its child classes.
:param data: data frame containing the data
:param identified_estimand: probability expression
representing the target identified estimand to estimate.
:param treatment: name of the treatment variable
:param outcome: name of the outcome variable
:param control_value: Value of the treatment in the control group, for effect estimation. If treatment is multi-variate, this can be a list.
:param treatment_value: Value of the treatment in the treated group, for effect estimation. If treatment is multi-variate, this can be a list.
:param test_significance: Binary flag or a string indicating whether to test significance and by which method. All estimators support test_significance="bootstrap" that estimates a p-value for the obtained estimate using the bootstrap method. Individual estimators can override this to support custom testing methods. The bootstrap method supports an optional parameter, num_null_simulations. If False, no testing is done. If True, significance of the estimate is tested using the custom method if available, otherwise by bootstrap.
:param evaluate_effect_strength: (Experimental) whether to evaluate the strength of effect
:param confidence_intervals: Binary flag or a string indicating whether the confidence intervals should be computed and which method should be used. All methods support estimation of confidence intervals using the bootstrap method by using the parameter confidence_intervals="bootstrap". The bootstrap method takes in two arguments (num_simulations and sample_size_fraction) that can be optionally specified in the params dictionary. Estimators may also override this to implement their own confidence interval method. If this parameter is False, no confidence intervals are computed. If True, confidence intervals are computed by the estimator's specific method if available, otherwise through bootstrap.
:param target_units: The units for which the treatment effect should be estimated. This can be a string for common specifications of target units (namely, "ate", "att" and "atc"). It can also be a lambda function that can be used as an index for the data (pandas DataFrame). Alternatively, it can be a new DataFrame that contains values of the effect_modifiers and effect will be estimated only for this new data.
:param effect_modifiers: Variables on which to compute separate
effects, or return a heterogeneous effect function. Not all
methods support this currently.
:param num_null_simulations: The number of simulations for testing the
statistical significance of the estimator
:param num_simulations: The number of simulations for finding the
confidence interval (and/or standard error) for an estimate
:param sample_size_fraction: The size of the sample for the bootstrap
estimator
:param confidence_level: The confidence level of the confidence
interval estimate
:param need_conditional_estimates: Boolean flag indicating whether
conditional estimates should be computed. Defaults to True if
there are effect modifiers in the graph
:param num_quantiles_to_discretize_cont_cols: The number of quantiles
into which a numeric effect modifier is split, to enable
estimation of conditional treatment effect over it.
:param kwargs: (optional) Additional estimator-specific parameters
:returns: an instance of the estimator class.
"""
self._data = data
self._target_estimand = identified_estimand
# Currently estimation methods only support univariate treatment and outcome
self._treatment_name = treatment
self._outcome_name = outcome[0] # assuming one-dimensional outcome
self._control_value = control_value
self._treatment_value = treatment_value
self._significance_test = test_significance
self._effect_strength_eval = evaluate_effect_strength
self._target_units = target_units
self._effect_modifier_names = effect_modifiers
self._confidence_intervals = confidence_intervals
self._bootstrap_estimates = None # for confidence intervals and std error
self._bootstrap_null_estimates = None # for significance test
self._effect_modifiers = None
self.method_params = kwargs
# Setting the default interpret method
self.interpret_method = CausalEstimator.DEFAULT_INTERPRET_METHOD
self.logger = logging.getLogger(__name__)
# Setting treatment and outcome values
if self._data is not None:
self._treatment = self._data[self._treatment_name]
self._outcome = self._data[self._outcome_name]
if self._effect_modifier_names:
# only add the observed nodes
self._effect_modifier_names = [
cname for cname in self._effect_modifier_names if cname in self._data.columns
]
if len(self._effect_modifier_names) > 0:
self._effect_modifiers = self._data[self._effect_modifier_names]
self._effect_modifiers = pd.get_dummies(self._effect_modifiers, drop_first=True)
self.logger.debug("Effect modifiers: " + ",".join(self._effect_modifier_names))
else:
self._effect_modifier_names = None
# Check if some parameters were set, otherwise set to default values
self.num_null_simulations = num_null_simulations
self.num_simulations = num_simulations
self.sample_size_fraction = sample_size_fraction
self.confidence_level = confidence_level
self.num_quantiles_to_discretize_cont_cols = num_quantiles_to_discretize_cont_cols
# Estimate conditional estimates by default
self.need_conditional_estimates = (
need_conditional_estimates if need_conditional_estimates != "auto" else bool(self._effect_modifier_names)
)
@staticmethod
def get_estimator_object(new_data, identified_estimand, estimate):
"""Create a new estimator of the same type as the one passed in the estimate argument.
Creates a new object with new_data and the identified_estimand
:param new_data: np.ndarray, pd.Series, pd.DataFrame
The newly assigned data on which the estimator should run
:param identified_estimand: IdentifiedEstimand
An instance of the identified estimand class that provides the information with
respect to which causal pathways are employed when the treatment affects the outcome
:param estimate: CausalEstimate
It is an already existing estimate whose properties we wish to replicate
:returns: An instance of the same estimator class that had generated the given estimate.
"""
estimator_class = estimate.params["estimator_class"]
new_estimator = estimator_class(
new_data,
identified_estimand,
identified_estimand.treatment_variable,
identified_estimand.outcome_variable,
# names of treatment and outcome
control_value=estimate.control_value,
treatment_value=estimate.treatment_value,
test_significance=False,
evaluate_effect_strength=False,
confidence_intervals=estimate.params["confidence_intervals"],
target_units=estimate.params["target_units"],
effect_modifiers=estimate.params["effect_modifiers"],
**estimate.params["method_params"] if estimate.params["method_params"] is not None else {},
)
return new_estimator
def _estimate_effect(self):
"""This method is to be overriden by the child classes, so that they can run the estimation technique of their choice"""
raise NotImplementedError(
("Main estimation method is " + CausalEstimator.DEFAULT_NOTIMPLEMENTEDERROR_MSG).format(self.__class__)
)
def estimate_effect(self):
"""Base estimation method that calls the estimate_effect method of its calling subclass.
Can optionally also test significance and estimate effect strength for any returned estimate.
:param self: object instance of class Estimator
:returns: A CausalEstimate instance that contains point estimates of average and conditional effects. Based on the parameters provided, it optionally includes confidence intervals, standard errors, statistical significance and other statistical parameters.
"""
est = self._estimate_effect()
est.add_estimator(self)
if self._significance_test:
self.test_significance(est.value, method=self._significance_test)
if self._confidence_intervals:
self.estimate_confidence_intervals(
est.value, confidence_level=self.confidence_level, method=self._confidence_intervals
)
if self._effect_strength_eval:
effect_strength_dict = self.evaluate_effect_strength(est)
est.add_effect_strength(effect_strength_dict)
return est
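# Illustrative usage (assumes `estimator` is a constructed subclass instance,
# e.g., a propensity-score or regression estimator):
#
#   est = estimator.estimate_effect()
#   est.value                       # point estimate
#   est.get_confidence_intervals()  # available if confidence_intervals was requested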
def estimate_effect_naive(self):
# TODO Only works for binary treatment
df_withtreatment = self._data.loc[self._data[self._treatment_name] == 1]
df_notreatment = self._data.loc[self._data[self._treatment_name] == 0]
est = np.mean(df_withtreatment[self._outcome_name]) - np.mean(df_notreatment[self._outcome_name])
return CausalEstimate(est, None, None, control_value=0, treatment_value=1)
def _estimate_effect_fn(self, data_df):
"""Function used in conditional effect estimation. This function is to be overridden by each child estimator.
The overridden function should take in a dataframe as input and return the estimate for that data.
"""
raise NotImplementedError(
("Conditional treatment effects are " + CausalEstimator.DEFAULT_NOTIMPLEMENTEDERROR_MSG).format(
self.__class__
)
)
def _estimate_conditional_effects(self, estimate_effect_fn, effect_modifier_names=None, num_quantiles=None):
"""Estimate conditional treatment effects. Common method for all estimators that utilizes a specific estimate_effect_fn implemented by each child estimator.
If a numeric effect modifier is provided, it is discretized into quantile bins. If you would like a custom discretization, you can do so yourself: create a new column containing the discretized effect modifier and then include that column's name in the effect_modifier_names argument.
:param estimate_effect_fn: Function that has a single parameter (a data frame) and returns the treatment effect estimate on that data.
:param effect_modifier_names: Names of effect modifier variables over which the conditional effects will be estimated. If not provided, defaults to the effect modifiers specified during creation of the CausalEstimator object.
:param num_quantiles: The number of quantiles into which a numeric effect modifier variable is discretized. Does not affect any categorical effect modifiers.
:returns: A (multi-index) dataframe that provides separate effects for each value of the (discretized) effect modifiers.
"""
# Defaulting to class default values if parameters are not provided
if effect_modifier_names is None:
effect_modifier_names = self._effect_modifier_names
if num_quantiles is None:
num_quantiles = self.num_quantiles_to_discretize_cont_cols
# Checking that there is at least one effect modifier
if not effect_modifier_names:
raise ValueError("At least one effect modifier should be specified to compute conditional effects.")
# Making sure that effect_modifier_names is a list
effect_modifier_names = parse_state(effect_modifier_names)
if not all(em in self._effect_modifier_names for em in effect_modifier_names):
self.logger.warning(
"At least one of the provided effect modifiers was not included while fitting the estimator. You may get incorrect results. To resolve, fit the estimator again by providing the updated effect modifiers in estimate_effect()."
)
# Making a copy since we are going to be changing effect modifier names
effect_modifier_names = effect_modifier_names.copy()
prefix = CausalEstimator.TEMP_CAT_COLUMN_PREFIX
# For every numeric effect modifier, adding a temp categorical column
for i in range(len(effect_modifier_names)):
em = effect_modifier_names[i]
if pd.api.types.is_numeric_dtype(self._data[em].dtypes):
self._data[prefix + str(em)] = pd.qcut(self._data[em], num_quantiles, duplicates="drop")
effect_modifier_names[i] = prefix + str(em)
# Grouping by effect modifiers and computing effect separately
by_effect_mods = self._data.groupby(effect_modifier_names)
conditional_estimates = by_effect_mods.apply(estimate_effect_fn)
# Deleting the temporary categorical columns
for em in effect_modifier_names:
if em.startswith(prefix):
self._data.pop(em)
return conditional_estimates
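# Illustrative sketch: with a hypothetical numeric effect modifier "age" and
# num_quantiles=4, "age" is first binned into quartiles, so the returned
# dataframe is indexed by intervals such as (18.0, 35.0] rather than raw values.
#
#   cond_effects = estimator._estimate_conditional_effects(
#       estimator._estimate_effect_fn, effect_modifier_names=["age"], num_quantiles=4
#   )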
def _do(self, x, data_df=None):
raise NotImplementedError(
("Do-operator is " + CausalEstimator.DEFAULT_NOTIMPLEMENTEDERROR_MSG).format(self.__class__)
)
def do(self, x, data_df=None):
"""Method that implements the do-operator.
Given a value x for the treatment, returns the expected value of the outcome when the treatment is intervened to a value x.
:param x: Value of the treatment
:param data_df: Data on which the do-operator is to be applied.
:returns: Value of the outcome when treatment is intervened/set to x.
"""
est = self._do(x, data_df)
return est
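# Illustrative usage (assumes the subclass implements _do and the treatment is
# binary):
#
#   y1 = estimator.do(1)  # E[Y | do(T=1)]
#   y0 = estimator.do(0)  # E[Y | do(T=0)]
#   ate = y1 - y0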
def construct_symbolic_estimator(self, estimand):
raise NotImplementedError(("Symbolic estimator string is ").format(self.__class__))
def _generate_bootstrap_estimates(self, num_bootstrap_simulations, sample_size_fraction):
"""Helper function to generate causal estimates over bootstrapped samples.
:param num_bootstrap_simulations: Number of simulations for the bootstrap method.
:param sample_size_fraction: Fraction of the dataset to be resampled.
:returns: A collections.namedtuple containing a list of bootstrapped estimates and a dictionary containing parameters used for the bootstrap.
"""
# The array that stores the results of all estimations
simulation_results = np.zeros(num_bootstrap_simulations)
# Find the sample size as a proportion of the population size
sample_size = int(sample_size_fraction * len(self._data))
if sample_size > len(self._data):
self.logger.warning("WARN: The sample size is greater than the data being sampled")
self.logger.info("INFO: The sample size: {}".format(sample_size))
self.logger.info("INFO: The number of simulations: {}".format(num_bootstrap_simulations))
# Perform the set number of simulations
for index in range(num_bootstrap_simulations):
new_data = resample(self._data, n_samples=sample_size)
new_estimator = type(self)(
new_data,
self._target_estimand,
self._target_estimand.treatment_variable,
self._target_estimand.outcome_variable,
# names of treatment and outcome
treatment_value=self._treatment_value,
control_value=self._control_value,
test_significance=False,
evaluate_effect_strength=False,
confidence_intervals=False,
target_units=self._target_units,
effect_modifiers=self._effect_modifier_names,
**self.method_params,
)
new_effect = new_estimator.estimate_effect()
simulation_results[index] = new_effect.value
estimates = CausalEstimator.BootstrapEstimates(
simulation_results,
{"num_simulations": num_bootstrap_simulations, "sample_size_fraction": sample_size_fraction},
)
return estimates
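# Illustrative sketch of consuming the bootstrap output:
#
#   boot = estimator._generate_bootstrap_estimates(
#       num_bootstrap_simulations=100, sample_size_fraction=1.0
#   )
#   boot.estimates  # np.ndarray of 100 bootstrapped effect values
#   boot.params     # {"num_simulations": 100, "sample_size_fraction": 1.0}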
def _estimate_confidence_intervals_with_bootstrap(
self, estimate_value, confidence_level=None, num_simulations=None, sample_size_fraction=None
):
"""
Method to compute confidence interval using bootstrapped sampling.
:param estimate_value: obtained estimate's value
:param confidence_level: The level for which to compute CI (e.g., 95% confidence level translates to confidence_level=0.95)
:param num_simulations: The number of simulations to be performed to get the bootstrap confidence intervals.
:param sample_size_fraction: The fraction of the dataset to be resampled.
:returns: confidence interval at the specified level.
For more details on bootstrap or resampling statistics, refer to the following links:
https://ocw.mit.edu/courses/mathematics/18-05-introduction-to-probability-and-statistics-spring-2014/readings/MIT18_05S14_Reading24.pdf
https://projecteuclid.org/download/pdf_1/euclid.ss/1032280214
"""
# Using class default parameters if not specified
if num_simulations is None:
num_simulations = self.num_simulations
if sample_size_fraction is None:
sample_size_fraction = self.sample_size_fraction
# Checking if bootstrap_estimates are already computed
if self._bootstrap_estimates is None:
self._bootstrap_estimates = self._generate_bootstrap_estimates(num_simulations, sample_size_fraction)
elif CausalEstimator.is_bootstrap_parameter_changed(self._bootstrap_estimates.params, locals()):
# Re-generate if any parameter has changed since the previous bootstrap estimates were computed
self._bootstrap_estimates = self._generate_bootstrap_estimates(num_simulations, sample_size_fraction)
# Now use the data obtained from the simulations to get the value of the confidence estimates
bootstrap_estimates = self._bootstrap_estimates.estimates
# Get the variations of each bootstrap estimate and sort
bootstrap_variations = [bootstrap_estimate - estimate_value for bootstrap_estimate in bootstrap_estimates]
sorted_bootstrap_variations = np.sort(bootstrap_variations)
# Now we take the (1 - p)-th and the p-th quantiles of the variations, where p is the chosen confidence level
upper_bound_index = int((1 - confidence_level) * len(sorted_bootstrap_variations))
lower_bound_index = int(confidence_level * len(sorted_bootstrap_variations))
# Get the lower and upper bounds by subtracting the variations from the estimate
lower_bound = estimate_value - sorted_bootstrap_variations[lower_bound_index]
upper_bound = estimate_value - sorted_bootstrap_variations[upper_bound_index]
return lower_bound, upper_bound
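# Worked sketch of the computation above (assumed numbers): with
# confidence_level=0.95 and 100 sorted bootstrap variations,
# upper_bound_index = int(0.05 * 100) = 5 and lower_bound_index =
# int(0.95 * 100) = 95, so the interval is
# [estimate - variations[95], estimate - variations[5]].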
def _estimate_confidence_intervals(self, confidence_level=None, method=None, **kwargs):
"""
This method is to be overridden by the child classes, so that they
can run a confidence interval estimation method suited to the specific
causal estimator.
"""
raise NotImplementedError(
(
"This method for estimating confidence intervals is "
+ CausalEstimator.DEFAULT_NOTIMPLEMENTEDERROR_MSG
+ " Meanwhile, you can try the bootstrap method (method='bootstrap') to estimate confidence intervals."
).format(self.__class__)
)
def estimate_confidence_intervals(self, estimate_value, confidence_level=None, method=None, **kwargs):
"""Find the confidence intervals corresponding to any estimator
By default, this is done with the help of bootstrapped confidence intervals
but can be overridden if the specific estimator implements other methods of estimating confidence intervals.
If the method provided is not bootstrap, this function calls the implementation of the specific estimator.
:param estimate_value: obtained estimate's value
:param method: Method for estimating confidence intervals.
:param confidence_level: The confidence level of the confidence intervals of the estimate.
:param kwargs: Other optional args to be passed to the CI method.
:returns: The obtained confidence interval.
"""
if method is None:
if self._confidence_intervals:
method = self._confidence_intervals # this is either True or methodname
else:
method = "default"
confidence_intervals = None
if confidence_level is None:
confidence_level = self.confidence_level
if method == "default" or method is True: # user has not provided any method
try:
confidence_intervals = self._estimate_confidence_intervals(confidence_level, method=method, **kwargs)
except NotImplementedError:
confidence_intervals = self._estimate_confidence_intervals_with_bootstrap(
estimate_value, confidence_level, **kwargs
)
else:
if method == "bootstrap":
confidence_intervals = self._estimate_confidence_intervals_with_bootstrap(
estimate_value, confidence_level, **kwargs
)
else:
confidence_intervals = self._estimate_confidence_intervals(confidence_level, method=method, **kwargs)
return confidence_intervals
def _estimate_std_error_with_bootstrap(self, num_simulations=None, sample_size_fraction=None):
"""Compute standard error using the bootstrap method. Standard error
and confidence intervals use the same parameter num_simulations for
the number of bootstrap simulations.
:param num_simulations: Number of bootstrapped samples.
:param sample_size_fraction: Fraction of data to be resampled.
:returns: Standard error of the obtained estimate.
"""
# Use existing params, if new user defined params are not present
if num_simulations is None:
num_simulations = self.num_simulations
if sample_size_fraction is None:
sample_size_fraction = self.sample_size_fraction
# Checking if bootstrap_estimates are already computed
if self._bootstrap_estimates is None:
self._bootstrap_estimates = self._generate_bootstrap_estimates(num_simulations, sample_size_fraction)
elif CausalEstimator.is_bootstrap_parameter_changed(self._bootstrap_estimates.params, locals()):
# Check if any parameter has changed since the previous std error estimate
self._bootstrap_estimates = self._generate_bootstrap_estimates(num_simulations, sample_size_fraction)
std_error = np.std(self._bootstrap_estimates.estimates)
return std_error
def _estimate_std_error(self, method=None, **kwargs):
"""
This method is to be overridden by the child classes, so that they
can run a standard error estimation method suited to the specific
causal estimator.
"""
raise NotImplementedError(
(
"This method for estimating standard errors is "
+ CausalEstimator.DEFAULT_NOTIMPLEMENTEDERROR_MSG
+ " Meanwhile, you can try the bootstrap method (method='bootstrap') to estimate standard errors."
).format(self.__class__)
)
def estimate_std_error(self, method=None, **kwargs):
"""Compute standard error of an obtained causal estimate.
:param method: Method for computing the standard error.
:param kwargs: Other optional parameters to be passed to the estimating method.
:returns: Standard error of the causal estimate.
"""
if method is None:
if self._confidence_intervals:
method = self._confidence_intervals
else:
method = "default"
std_error = None
if method == "default" or method is True: # user has not provided any method
try:
std_error = self._estimate_std_error(method, **kwargs)
except NotImplementedError:
std_error = self._estimate_std_error_with_bootstrap(**kwargs)
else:
if method == "bootstrap":
std_error = self._estimate_std_error_with_bootstrap(**kwargs)
else:
std_error = self._estimate_std_error(method, **kwargs)
return std_error
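# Illustrative usage:
#
#   se = estimator.estimate_std_error()  # falls back to bootstrap by default
#   se = estimator.estimate_std_error(method="bootstrap", num_simulations=200)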
def _test_significance_with_bootstrap(self, estimate_value, num_null_simulations=None):
"""Test statistical significance of an estimate using the bootstrap method.
:param estimate_value: Obtained estimate's value
:param num_null_simulations: Number of simulations for the null hypothesis
:returns: p-value of the statistical significance test.
"""
# Use existing params, if new user defined params are not present
if num_null_simulations is None:
num_null_simulations = self.num_null_simulations
do_retest = self._bootstrap_null_estimates is None or CausalEstimator.is_bootstrap_parameter_changed(
self._bootstrap_null_estimates.params, locals()
)
if do_retest:
null_estimates = np.zeros(num_null_simulations)
for i in range(num_null_simulations):
new_outcome = np.random.permutation(self._outcome)
new_data = self._data.assign(dummy_outcome=new_outcome)
new_estimator = type(self)(
new_data,
self._target_estimand,
self._target_estimand.treatment_variable,
("dummy_outcome",),
test_significance=False,
evaluate_effect_strength=False,
confidence_intervals=False,
target_units=self._target_units,
effect_modifiers=self._effect_modifier_names,
**self.method_params,
)
new_effect = new_estimator.estimate_effect()
null_estimates[i] = new_effect.value
self._bootstrap_null_estimates = CausalEstimator.BootstrapEstimates(
null_estimates, {"num_null_simulations": num_null_simulations, "sample_size_fraction": 1}
)
# Processing the null hypothesis estimates
sorted_null_estimates = np.sort(self._bootstrap_null_estimates.estimates)
self.logger.debug("Null estimates: {0}".format(sorted_null_estimates))
median_estimate = sorted_null_estimates[int(num_null_simulations / 2)]
# Doing a two-sided test
if estimate_value > median_estimate:
    # Being conservative with the p-value reported
    estimate_index = np.searchsorted(sorted_null_estimates, estimate_value, side="left")
    p_value = 1 - (estimate_index / num_null_simulations)
else:
    # Being conservative with the p-value reported
    estimate_index = np.searchsorted(sorted_null_estimates, estimate_value, side="right")
    p_value = estimate_index / num_null_simulations
# If the p-value is exactly 0 or 1, its resolution is limited by the number of simulations; report a range instead
if p_value == 0:
p_value = (0, 1 / len(sorted_null_estimates)) # a tuple determining the range.
elif p_value == 1:
p_value = (1 - 1 / len(sorted_null_estimates), 1)
signif_dict = {"p_value": p_value}
return signif_dict
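# Worked sketch (assumed numbers): with 100 permutation estimates and an
# observed estimate above the null median, if 97 null estimates fall strictly
# below it, searchsorted returns index 97 and the reported p-value is
# 1 - 97/100 = 0.03. A p-value of exactly 0 or 1 is reported as a range,
# e.g., (0, 1/100), reflecting the resolution of the simulation count.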
def _test_significance(self, estimate_value, method=None, **kwargs):
"""
This method is to be overridden by the child classes, so that they
can run a significance test suited to the specific
causal estimator.
"""
raise NotImplementedError(
(
"This method for testing statistical significance is "
+ CausalEstimator.DEFAULT_NOTIMPLEMENTEDERROR_MSG
+ " Meanwhile, you can try the bootstrap method (method='bootstrap') to test statistical significance."
).format(self.__class__)
)
def test_significance(self, estimate_value, method=None, **kwargs):
"""Test statistical significance of obtained estimate.
By default, uses resampling to create a non-parametric significance test.
This is a general procedure; individual child estimators can implement different methods.
If the method name is different from "bootstrap", this function calls the
implementation of the child estimator.
:param self: object instance of class Estimator
:param estimate_value: obtained estimate's value
:param method: Method for checking statistical significance
:returns: p-value from the significance test
"""
if method is None:
if self._significance_test:
method = self._significance_test # this is either True or methodname
else:
method = "default"
signif_dict = None
if method == "default" or method is True: # user has not provided any method
try:
signif_dict = self._test_significance(estimate_value, method, **kwargs)
except NotImplementedError:
signif_dict = self._test_significance_with_bootstrap(estimate_value, **kwargs)
else:
if method == "bootstrap":
signif_dict = self._test_significance_with_bootstrap(estimate_value, **kwargs)
else:
signif_dict = self._test_significance(estimate_value, method, **kwargs)
return signif_dict
def evaluate_effect_strength(self, estimate):
fraction_effect_explained = self._evaluate_effect_strength(estimate, method="fraction-effect")
# Need to test r-squared before supporting
# effect_r_squared = self._evaluate_effect_strength(estimate, method="r-squared")
strength_dict = {
"fraction-effect": fraction_effect_explained
# 'r-squared': effect_r_squared
}
return strength_dict
def _evaluate_effect_strength(self, estimate, method="fraction-effect"):
supported_methods = ["fraction-effect"]
if method not in supported_methods:
raise NotImplementedError("This method is not supported for evaluating effect strength")
if method == "fraction-effect":
naive_obs_estimate = self.estimate_effect_naive()
self.logger.debug("Estimate: %s, Naive observed estimate: %s", estimate.value, naive_obs_estimate.value)
fraction_effect_explained = estimate.value / naive_obs_estimate.value
return fraction_effect_explained
# elif method == "r-squared":
# outcome_mean = np.mean(self._outcome)
# total_variance = np.sum(np.square(self._outcome - outcome_mean))
# Assuming a linear model with one variable: the treatment
# Currently only works for continuous y
# causal_model = outcome_mean + estimate.value*self._treatment
# squared_residual = np.sum(np.square(self._outcome - causal_model))
# r_squared = 1 - (squared_residual/total_variance)
# return r_squared
else:
return None
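# Worked sketch (assumed numbers): if the causal estimate is 2.0 and the naive
# observed difference in means is 5.0, fraction-effect = 2.0 / 5.0 = 0.4,
# i.e., 40% of the naive association is attributable to the treatment under
# the estimator's assumptions.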
def update_input(self, treatment_value, control_value, target_units):
self._control_value = control_value
self._treatment_value = treatment_value
self._target_units = target_units
@staticmethod
def is_bootstrap_parameter_changed(bootstrap_estimates_params, given_params):
"""Check whether parameters of the bootstrap have changed.
This is an efficiency method that checks if fresh resampling of the bootstrap samples is required.
Returns True if parameters have changed and resampling should be done again.
:param bootstrap_estimates_params: A dictionary of parameters for the current bootstrap samples
:param given_params: A dictionary of parameters passed by the user
:returns: A binary flag denoting whether the parameters are different.
"""
is_any_parameter_changed = False
for prm, val in bootstrap_estimates_params.items():
given_val = given_params.get(prm, None)
if given_val is not None and given_val != val:
is_any_parameter_changed = True
break
return is_any_parameter_changed
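# Illustrative sketch:
#
#   CausalEstimator.is_bootstrap_parameter_changed(
#       {"num_simulations": 100, "sample_size_fraction": 1.0},
#       {"num_simulations": 200},
#   )  # -> True, since num_simulations differs; parameters set to None are ignored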
def target_units_tostr(self):
s = ""
if type(self._target_units) is str:
s += self._target_units
elif callable(self._target_units):
s += "Data subset defined by a function"
elif isinstance(self._target_units, pd.DataFrame):
s += "Data subset provided as a data frame"
return s
def signif_results_tostr(self, signif_results):
s = ""
pval = signif_results["p_value"]
if type(pval) is tuple:
s += "[{0}, {1}]".format(pval[0], pval[1])
else:
s += "{0}".format(pval)
return s
def estimate_effect(
treatment: Union[str, List[str]],
outcome: Union[str, List[str]],
identified_estimand: IdentifiedEstimand,
identifier_name: str,
method: CausalEstimator,
control_value: int = 0,
treatment_value: int = 1,
test_significance: Optional[bool] = None,
evaluate_effect_strength: bool = False,
confidence_intervals: bool = False,
target_units: str = "ate",
effect_modifiers: List[str] = [],
fit_estimator: bool = True,
method_params: Optional[Dict] = None,
):
"""Estimate the identified causal effect.
Currently requires an explicit method name to be specified. Method names follow the convention of identification method followed by the specific estimation method: "[backdoor/iv].estimation_method_name". Following methods are supported.
* Propensity Score Matching: "backdoor.propensity_score_matching"
* Propensity Score Stratification: "backdoor.propensity_score_stratification"
* Propensity Score-based Inverse Weighting: "backdoor.propensity_score_weighting"
* Linear Regression: "backdoor.linear_regression"
* Generalized Linear Models (e.g., logistic regression): "backdoor.generalized_linear_model"
* Instrumental Variables: "iv.instrumental_variable"
* Regression Discontinuity: "iv.regression_discontinuity"
In addition, you can directly call any of the EconML estimation methods. The convention is "backdoor.econml.path-to-estimator-class". For example, for the double machine learning estimator ("DML" class) that is located inside "dml" module of EconML, you can use the method name, "backdoor.econml.dml.DML". CausalML estimators can also be called. See `this demo notebook <https://py-why.github.io/dowhy/example_notebooks/dowhy-conditional-treatment-effects.html>`_.
:param treatment: Name of the treatment
:param outcome: Name of the outcome
:param identified_estimand: a probability expression
that represents the effect to be estimated. Output of
CausalModel.identify_effect method
:param identifier_name: name of the identification strategy (e.g., "backdoor" or "iv") whose estimand should be used.
:param method: an instance of a CausalEstimator subclass implementing the desired estimation method.
:param control_value: Value of the treatment in the control group, for effect estimation. If treatment is multi-variate, this can be a list.
:param treatment_value: Value of the treatment in the treated group, for effect estimation. If treatment is multi-variate, this can be a list.
:param test_significance: Binary flag on whether to additionally do a statistical significance test for the estimate.
:param evaluate_effect_strength: (Experimental) Binary flag on whether to estimate the relative strength of the treatment's effect. This measure can be used to compare different treatments for the same outcome (by running this method with different treatments sequentially).
:param confidence_intervals: (Experimental) Binary flag indicating whether confidence intervals should be computed.
:param target_units: (Experimental) The units for which the treatment effect should be estimated. This can be of three types. (1) a string for common specifications of target units (namely, "ate", "att" and "atc"), (2) a lambda function that can be used as an index for the data (pandas DataFrame), or (3) a new DataFrame that contains values of the effect_modifiers and effect will be estimated only for this new data.
:param effect_modifiers: Names of effect modifier variables can be (optionally) specified here too, since they do not affect identification. If None, the effect_modifiers from the CausalModel are used.
:param fit_estimator: Boolean flag on whether to fit the estimator.
Setting it to False is useful to estimate the effect on new data using a previously fitted estimator.
:param method_params: Dictionary containing any method-specific parameters. These are passed directly to the estimating method. See the docs for each estimation method for allowed method-specific params.
:returns: An instance of the CausalEstimate class, containing the causal effect estimate
and other method-dependent information
"""
treatment = parse_state(treatment)
outcome = parse_state(outcome)
causal_estimator_class = method.__class__
identified_estimand.set_identifier_method(identifier_name)
if identified_estimand.no_directed_path:
logger.warning("No directed path from {0} to {1}.".format(treatment, outcome))
return CausalEstimate(
0, identified_estimand, None, control_value=control_value, treatment_value=treatment_value
)
# Check if estimator's target estimand is identified
elif identified_estimand.estimands[identifier_name] is None:
logger.error("No valid identified estimand available.")
return CausalEstimate(None, None, None, control_value=control_value, treatment_value=treatment_value)
method.update_input(treatment_value, control_value, target_units)
estimate = method.estimate_effect()
# Store parameters inside estimate object for refutation methods
# TODO: This add_params needs to move to the estimator class
# inside estimate_effect and estimate_conditional_effect
estimate.add_params(
estimand_type=identified_estimand.estimand_type,
estimator_class=causal_estimator_class,
test_significance=test_significance,
evaluate_effect_strength=evaluate_effect_strength,
confidence_intervals=confidence_intervals,
target_units=target_units,
effect_modifiers=effect_modifiers,
method_params=method_params,
)
return estimate
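# Illustrative usage of this functional API (assumes `estimand` is an
# IdentifiedEstimand produced by an identification step and `estimator` is an
# already-constructed CausalEstimator subclass instance):
#
#   estimate = estimate_effect(
#       treatment="T", outcome="Y", identified_estimand=estimand,
#       identifier_name="backdoor", method=estimator,
#   )
#   estimate.value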
class CausalEstimate:
"""Class for the estimate object that every causal estimator returns"""
def __init__(
self,
estimate,
target_estimand,
realized_estimand_expr,
control_value,
treatment_value,
conditional_estimates=None,
**kwargs,
):
self.value = estimate
self.target_estimand = target_estimand
self.realized_estimand_expr = realized_estimand_expr
self.control_value = control_value
self.treatment_value = treatment_value
self.conditional_estimates = conditional_estimates
self.params = kwargs
if self.params is not None:
for key, value in self.params.items():
setattr(self, key, value)
self.effect_strength = None
def add_estimator(self, estimator_instance):
self.estimator = estimator_instance
def add_effect_strength(self, strength_dict):
self.effect_strength = strength_dict
def add_params(self, **kwargs):
self.params.update(kwargs)
def get_confidence_intervals(self, confidence_level=None, method=None, **kwargs):
"""Get confidence intervals of the obtained estimate.
By default, this is done with the help of bootstrapped confidence intervals
but can be overridden if the specific estimator implements other methods of estimating confidence intervals.
If the method provided is not bootstrap, this function calls the implementation of the specific estimator.
:param method: Method for estimating confidence intervals.
:param confidence_level: The confidence level of the confidence intervals of the estimate.
:param kwargs: Other optional args to be passed to the CI method.
:returns: The obtained confidence interval.
"""
confidence_intervals = self.estimator.estimate_confidence_intervals(
estimate_value=self.value, confidence_level=confidence_level, method=method, **kwargs
)
return confidence_intervals
def get_standard_error(self, method=None, **kwargs):
"""Get standard error of the obtained estimate.
By default, this is done with the help of bootstrapped standard errors
but can be overridden if the specific estimator implements other methods of estimating standard error.
If the method provided is not bootstrap, this function calls the implementation of the specific estimator.
:param method: Method for computing the standard error.
:param kwargs: Other optional parameters to be passed to the estimating method.
:returns: Standard error of the causal estimate.
"""
std_error = self.estimator.estimate_std_error(method=method, **kwargs)
return std_error
def test_stat_significance(self, method=None, **kwargs):
"""Test statistical significance of the estimate obtained.
By default, uses resampling to create a non-parametric significance test.
Individual child estimators can implement different methods.
If the method name is different from "bootstrap", this function calls the
implementation of the child estimator.
:param method: Method for checking statistical significance
:param kwargs: Other optional parameters to be passed to the estimating method.
:returns: p-value from the significance test
"""
signif_results = self.estimator.test_significance(self.value, method=method, **kwargs)
return {"p_value": signif_results["p_value"]}
def estimate_conditional_effects(
self, effect_modifiers=None, num_quantiles=CausalEstimator.NUM_QUANTILES_TO_DISCRETIZE_CONT_COLS
):
"""Estimate treatment effect conditioned on given variables.
If a numeric effect modifier is provided, it is discretized into quantile bins. If you would like a custom discretization, you can do so yourself: create a new column containing the discretized effect modifier and then include that column's name in the effect_modifier_names argument.
:param effect_modifiers: Names of effect modifier variables over which the conditional effects will be estimated. If not provided, defaults to the effect modifiers specified during creation of the CausalEstimator object.
:param num_quantiles: The number of quantiles into which a numeric effect modifier variable is discretized. Does not affect any categorical effect modifiers.
:returns: A (multi-index) dataframe that provides separate effects for each value of the (discretized) effect modifiers.
"""
return self.estimator._estimate_conditional_effects(
self.estimator._estimate_effect_fn, effect_modifiers, num_quantiles
)
def interpret(self, method_name=None, **kwargs):
"""Interpret the causal estimate.
:param method_name: Method used (string) or a list of methods. If None, then the default for the specific estimator is used.
:param kwargs: Optional parameters that are directly passed to the interpreter method.
:returns: None
"""
if method_name is None:
method_name = self.estimator.interpret_method
method_name_arr = parse_state(method_name)
for method in method_name_arr:
interpreter = interpreters.get_class_object(method)
interpreter(self, **kwargs).interpret()
def __str__(self):
s = "*** Causal Estimate ***\n"
# No estimand was identified (identification failed)
if self.target_estimand is None:
return "Estimation failed! No relevant identified estimand available for this estimation method."
s += "\n## Identified estimand\n{0}".format(self.target_estimand.__str__(only_target_estimand=True))
s += "\n## Realized estimand\n{0}".format(self.realized_estimand_expr)
if hasattr(self, "estimator"):
s += "\nTarget units: {0}\n".format(self.estimator.target_units_tostr())
s += "\n## Estimate\n"
s += "Mean value: {0}\n".format(self.value)
s += ""
if hasattr(self, "cate_estimates"):
s += "Effect estimates: {0}\n".format(self.cate_estimates)
if hasattr(self, "estimator"):
if self.estimator._significance_test:
s += "p-value: {0}\n".format(self.estimator.signif_results_tostr(self.test_stat_significance()))
if self.estimator._confidence_intervals:
s += "{0}% confidence interval: {1}\n".format(
100 * self.estimator.confidence_level, self.get_confidence_intervals()
)
if self.conditional_estimates is not None:
s += "### Conditional Estimates\n"
s += str(self.conditional_estimates)
if self.effect_strength is not None:
s += "\n## Effect Strength\n"
s += "Change in outcome attributable to treatment: {}\n".format(self.effect_strength["fraction-effect"])
# s += "Variance in outcome explained by treatment: {}\n".format(self.effect_strength["r-squared"])
return s
class RealizedEstimand(object):
def __init__(self, identified_estimand, estimator_name):
self.treatment_variable = identified_estimand.treatment_variable
self.outcome_variable = identified_estimand.outcome_variable
self.backdoor_variables = identified_estimand.get_backdoor_variables()
self.instrumental_variables = identified_estimand.instrumental_variables
self.estimand_type = identified_estimand.estimand_type
self.estimand_expression = None
self.assumptions = None
self.estimator_name = estimator_name
def update_assumptions(self, estimator_assumptions):
self.assumptions = estimator_assumptions
def update_estimand_expression(self, estimand_expression):
self.estimand_expression = estimand_expression
def __str__(self):
s = "Realized estimand: {0}\n".format(self.estimator_name)
s += "Realized estimand type: {0}\n".format(self.estimand_type)
s += "Estimand expression:\n{0}\n".format(sp.pretty(self.estimand_expression))
j = 1
for ass_name, ass_str in self.assumptions.items():
s += "Estimand assumption {0}, {1}: {2}\n".format(j, ass_name, ass_str)
j += 1
return s
| andresmor-ms | 2044d216c322a4b32c6eadce5da7d83463f19c2f | 05bfa49dacf0061988c96c6f3e3756219df5422a | I'd prefer if we leave it as this and change it when I refactor the actual estimator objects in the next PR | andresmor-ms | 283 |