Dataset columns:

    column     type            values
    code       stringlengths   26 - 870k
    docstring  stringlengths   1 - 65.6k
    func_name  stringlengths   1 - 194
    language   stringclasses   1 value
    repo       stringlengths   8 - 68
    path       stringlengths   5 - 194
    url        stringlengths   46 - 254
    license    stringclasses   4 values
def quantile(self, probs, return_pandas=True):
    """
    Compute quantiles for a weighted sample.

    Parameters
    ----------
    probs : array_like
        A vector of probability points at which to calculate the
        quantiles.  Each element of `probs` should fall in [0, 1].
    return_pandas : bool
        If True, return value is a Pandas DataFrame or Series.
        Otherwise returns a ndarray.

    Returns
    -------
    quantiles : Series, DataFrame, or ndarray
        If `return_pandas` = True, returns one of the following:
          * data are 1d, `return_pandas` = True: a Series indexed by
            the probability points.
          * data are 2d, `return_pandas` = True: a DataFrame with the
            probability points as row index and the variables as
            column index.

        If `return_pandas` = False, returns an ndarray containing the
        same values as the Series/DataFrame.

    Notes
    -----
    To compute the quantiles, first, the weights are summed over exact
    ties yielding distinct data values y_1 < y_2 < ..., and
    corresponding weights w_1, w_2, ....  Let s_j denote the sum of the
    first j weights, and let W denote the sum of all the weights.  For
    a probability point p, if pW falls strictly between s_j and s_{j+1}
    then the estimated quantile is y_{j+1}.  If pW = s_j then the
    estimated quantile is (y_j + y_{j+1})/2.  If pW < s_1 then the
    estimated quantile is y_1.

    References
    ----------
    SAS documentation for weighted quantiles:
    https://support.sas.com/documentation/cdl/en/procstat/63104/HTML/default/viewer.htm#procstat_univariate_sect028.htm
    """
    import pandas as pd

    probs = np.asarray(probs)
    probs = np.atleast_1d(probs)

    if self.data.ndim == 1:
        rslt = self._quantile(self.data, probs)
        if return_pandas:
            rslt = pd.Series(rslt, index=probs)
    else:
        rslt = []
        for vec in self.data.T:
            rslt.append(self._quantile(vec, probs))
        rslt = np.column_stack(rslt)
        if return_pandas:
            columns = ["col%d" % (j + 1) for j in range(rslt.shape[1])]
            rslt = pd.DataFrame(data=rslt, columns=columns, index=probs)

    if return_pandas:
        rslt.index.name = "p"

    return rslt
quantile
python
statsmodels/statsmodels
statsmodels/stats/weightstats.py
https://github.com/statsmodels/statsmodels/blob/master/statsmodels/stats/weightstats.py
BSD-3-Clause
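A minimal usage sketch for the quantile method above, called through the public DescrStatsW class; the data and weights are illustrative assumptions, not values from the source.

import numpy as np
from statsmodels.stats.weightstats import DescrStatsW

x = np.array([1.0, 2.0, 3.0, 4.0])
w = np.array([1, 1, 2, 1])  # the value 3.0 effectively counts twice
d = DescrStatsW(x, weights=w)
# returns a pandas Series indexed by the probability points "p"
print(d.quantile([0.25, 0.5, 0.75]))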
def tconfint_mean(self, alpha=0.05, alternative="two-sided"):
    """two-sided confidence interval for weighted mean of data

    If the data is 2d, then these are separate confidence intervals
    for each column.

    Parameters
    ----------
    alpha : float
        significance level for the confidence interval, coverage is
        ``1-alpha``
    alternative : str
        This specifies the alternative hypothesis for the test that
        corresponds to the confidence interval.
        The alternative hypothesis, H1, has to be one of the following

          'two-sided': H1: mean not equal to value (default)
          'larger' :   H1: mean larger than value
          'smaller' :  H1: mean smaller than value

    Returns
    -------
    lower, upper : floats or ndarrays
        lower and upper bound of confidence interval

    Notes
    -----
    In a previous version, statsmodels 0.4, alpha was the confidence
    level, e.g. 0.95
    """
    # TODO: add asymmetric
    dof = self.sum_weights - 1
    ci = _tconfint_generic(
        self.mean, self.std_mean, dof, alpha, alternative
    )
    return ci
tconfint_mean
python
statsmodels/statsmodels
statsmodels/stats/weightstats.py
https://github.com/statsmodels/statsmodels/blob/master/statsmodels/stats/weightstats.py
BSD-3-Clause
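A short sketch of tconfint_mean with made-up data; note that for a one-sided alternative one of the limits is infinite.

import numpy as np
from statsmodels.stats.weightstats import DescrStatsW

d = DescrStatsW(np.array([2.1, 2.5, 2.8, 3.2, 2.9]),
                weights=np.array([1, 2, 1, 1, 2]))
lower, upper = d.tconfint_mean(alpha=0.05)  # two-sided 95% interval
# one-sided "larger": finite lower bound, upper limit is +inf
lower_1s, upper_inf = d.tconfint_mean(alpha=0.05, alternative="larger")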
def zconfint_mean(self, alpha=0.05, alternative="two-sided"):
    """two-sided confidence interval for weighted mean of data

    Confidence interval is based on normal distribution.
    If the data is 2d, then these are separate confidence intervals
    for each column.

    Parameters
    ----------
    alpha : float
        significance level for the confidence interval, coverage is
        ``1-alpha``
    alternative : str
        This specifies the alternative hypothesis for the test that
        corresponds to the confidence interval.
        The alternative hypothesis, H1, has to be one of the following

          'two-sided': H1: mean not equal to value (default)
          'larger' :   H1: mean larger than value
          'smaller' :  H1: mean smaller than value

    Returns
    -------
    lower, upper : floats or ndarrays
        lower and upper bound of confidence interval

    Notes
    -----
    In a previous version, statsmodels 0.4, alpha was the confidence
    level, e.g. 0.95
    """
    return _zconfint_generic(self.mean, self.std_mean, alpha, alternative)
zconfint_mean
python
statsmodels/statsmodels
statsmodels/stats/weightstats.py
https://github.com/statsmodels/statsmodels/blob/master/statsmodels/stats/weightstats.py
BSD-3-Clause
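A sketch contrasting the normal-based interval with the t-based one on the same made-up data; the z interval is narrower for small samples because the normal critical value is smaller than the t critical value.

import numpy as np
from statsmodels.stats.weightstats import DescrStatsW

d = DescrStatsW(np.array([9.8, 10.4, 10.1, 9.6, 10.3, 10.0]))
lo_t, up_t = d.tconfint_mean()
lo_z, up_z = d.zconfint_mean()
assert (up_z - lo_z) < (up_t - lo_t)  # z interval is the narrower one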
def ttest_mean(self, value=0, alternative="two-sided"):
    """ttest of Null hypothesis that mean is equal to value.

    The alternative hypothesis H1 is defined by the following

    - 'two-sided': H1: mean not equal to value
    - 'larger' :   H1: mean larger than value
    - 'smaller' :  H1: mean smaller than value

    Parameters
    ----------
    value : float or array
        the hypothesized value for the mean
    alternative : str
        The alternative hypothesis, H1, has to be one of the following:

        - 'two-sided': H1: mean not equal to value (default)
        - 'larger' :   H1: mean larger than value
        - 'smaller' :  H1: mean smaller than value

    Returns
    -------
    tstat : float
        test statistic
    pvalue : float
        pvalue of the t-test
    df : int or float
        degrees of freedom used in the t-test
    """
    # TODO: check direction with R, smaller=less, larger=greater
    tstat = (self.mean - value) / self.std_mean
    dof = self.sum_weights - 1
    # TODO: use outsourced
    if alternative == "two-sided":
        pvalue = stats.t.sf(np.abs(tstat), dof) * 2
    elif alternative == "larger":
        pvalue = stats.t.sf(tstat, dof)
    elif alternative == "smaller":
        pvalue = stats.t.cdf(tstat, dof)
    else:
        raise ValueError("alternative not recognized")

    return tstat, pvalue, dof
ttest_mean
python
statsmodels/statsmodels
statsmodels/stats/weightstats.py
https://github.com/statsmodels/statsmodels/blob/master/statsmodels/stats/weightstats.py
BSD-3-Clause
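A minimal sketch of ttest_mean on made-up data; with unit weights the returned degrees of freedom equal nobs - 1.

import numpy as np
from statsmodels.stats.weightstats import DescrStatsW

d = DescrStatsW(np.array([5.1, 4.8, 5.6, 5.3, 4.9, 5.4]))
tstat, pvalue, dof = d.ttest_mean(value=5.0)  # H0: mean == 5.0, two-sided
tstat_l, pvalue_l, _ = d.ttest_mean(value=5.0, alternative="larger")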
def ttost_mean(self, low, upp):
    """test of (non-)equivalence of one sample

    TOST: two one-sided t tests

    null hypothesis:  m < low or m > upp
    alternative hypothesis:  low < m < upp

    where m is the expected value of the sample (mean of the
    population).

    If the pvalue is smaller than a threshold, say 0.05, then we
    reject the hypothesis that the expected value of the sample (mean
    of the population) is outside of the interval given by thresholds
    low and upp.

    Parameters
    ----------
    low, upp : float
        equivalence interval low < mean < upp

    Returns
    -------
    pvalue : float
        pvalue of the non-equivalence test
    t1, pv1, df1 : tuple
        test statistic, pvalue and degrees of freedom for lower
        threshold test
    t2, pv2, df2 : tuple
        test statistic, pvalue and degrees of freedom for upper
        threshold test
    """
    t1, pv1, df1 = self.ttest_mean(low, alternative="larger")
    t2, pv2, df2 = self.ttest_mean(upp, alternative="smaller")
    return np.maximum(pv1, pv2), (t1, pv1, df1), (t2, pv2, df2)
ttost_mean
python
statsmodels/statsmodels
statsmodels/stats/weightstats.py
https://github.com/statsmodels/statsmodels/blob/master/statsmodels/stats/weightstats.py
BSD-3-Clause
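A sketch of the one-sample TOST above with illustrative numbers: the null is that the mean lies outside (-0.1, 0.1), so a small pvalue supports equivalence to zero within that margin.

import numpy as np
from statsmodels.stats.weightstats import DescrStatsW

d = DescrStatsW(np.array([0.02, -0.05, 0.01, 0.04, -0.03, 0.00]))
pvalue, res_low, res_upp = d.ttost_mean(-0.1, 0.1)
# res_low and res_upp each hold (tstat, pvalue, dof) of one one-sided test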
def ztest_mean(self, value=0, alternative="two-sided"):
    """z-test of Null hypothesis that mean is equal to value.

    The alternative hypothesis H1 is defined by the following

      'two-sided': H1: mean not equal to value
      'larger' :   H1: mean larger than value
      'smaller' :  H1: mean smaller than value

    Parameters
    ----------
    value : float or array
        the hypothesized value for the mean
    alternative : str
        The alternative hypothesis, H1, has to be one of the following

          'two-sided': H1: mean not equal to value (default)
          'larger' :   H1: mean larger than value
          'smaller' :  H1: mean smaller than value

    Returns
    -------
    tstat : float
        test statistic
    pvalue : float
        pvalue of the z-test

    Notes
    -----
    This uses the same degrees of freedom correction as the t-test in
    the calculation of the standard error of the mean, i.e. it uses
    ``(sum_weights - 1)`` instead of ``sum_weights`` in the
    denominator.  See Examples below for the difference.

    Examples
    --------
    z-test on a proportion, with 20 observations, 15 of those are our
    event

    >>> import statsmodels.api as sm
    >>> x1 = [0, 1]
    >>> w1 = [5, 15]
    >>> d1 = sm.stats.DescrStatsW(x1, w1)
    >>> d1.ztest_mean(0.5)
    (2.5166114784235836, 0.011848940928347452)

    This differs from the proportions_ztest because of the degrees of
    freedom correction:

    >>> sm.stats.proportions_ztest(15, 20.0, value=0.5)
    (2.5819888974716112, 0.009823274507519247)

    We can replicate the results from ``proportions_ztest`` if we
    increase the weights to have artificially one more observation:

    >>> sm.stats.DescrStatsW(x1, np.array(w1)*21./20).ztest_mean(0.5)
    (2.5819888974716116, 0.0098232745075192366)
    """
    tstat = (self.mean - value) / self.std_mean
    # TODO: use outsourced
    if alternative == "two-sided":
        pvalue = stats.norm.sf(np.abs(tstat)) * 2
    elif alternative == "larger":
        pvalue = stats.norm.sf(tstat)
    elif alternative == "smaller":
        pvalue = stats.norm.cdf(tstat)
    else:
        raise ValueError("alternative not recognized")

    return tstat, pvalue
ztest_mean
python
statsmodels/statsmodels
statsmodels/stats/weightstats.py
https://github.com/statsmodels/statsmodels/blob/master/statsmodels/stats/weightstats.py
BSD-3-Clause
def ztost_mean(self, low, upp):
    """test of (non-)equivalence of one sample, based on z-test

    TOST: two one-sided z-tests

    null hypothesis:  m < low or m > upp
    alternative hypothesis:  low < m < upp

    where m is the expected value of the sample (mean of the
    population).

    If the pvalue is smaller than a threshold, say 0.05, then we
    reject the hypothesis that the expected value of the sample (mean
    of the population) is outside of the interval given by thresholds
    low and upp.

    Parameters
    ----------
    low, upp : float
        equivalence interval low < mean < upp

    Returns
    -------
    pvalue : float
        pvalue of the non-equivalence test
    t1, pv1 : tuple
        test statistic and p-value for lower threshold test
    t2, pv2 : tuple
        test statistic and p-value for upper threshold test
    """
    t1, pv1 = self.ztest_mean(low, alternative="larger")
    t2, pv2 = self.ztest_mean(upp, alternative="smaller")
    return np.maximum(pv1, pv2), (t1, pv1), (t2, pv2)
ztost_mean
python
statsmodels/statsmodels
statsmodels/stats/weightstats.py
https://github.com/statsmodels/statsmodels/blob/master/statsmodels/stats/weightstats.py
BSD-3-Clause
def get_compare(self, other, weights=None):
    """return an instance of CompareMeans with self and other

    Parameters
    ----------
    other : array_like or instance of DescrStatsW
        If array_like then this creates an instance of DescrStatsW
        with the given weights.
    weights : None or array
        weights are only used if other is not an instance of
        DescrStatsW

    Returns
    -------
    cm : instance of CompareMeans
        the instance has self attached as d1 and other as d2.

    See Also
    --------
    CompareMeans
    """
    if not isinstance(other, self.__class__):
        d2 = DescrStatsW(other, weights)
    else:
        d2 = other
    return CompareMeans(self, d2)
get_compare
python
statsmodels/statsmodels
statsmodels/stats/weightstats.py
https://github.com/statsmodels/statsmodels/blob/master/statsmodels/stats/weightstats.py
BSD-3-Clause
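A sketch of get_compare with made-up samples; passing a plain array lets the method build the second DescrStatsW instance itself.

import numpy as np
from statsmodels.stats.weightstats import DescrStatsW

d1 = DescrStatsW(np.array([3.4, 3.1, 3.8, 3.6]))
cm = d1.get_compare(np.array([2.9, 3.0, 2.7, 3.2, 3.1]))
tstat, pvalue, dof = cm.ttest_ind()  # CompareMeans API, shown further below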
def asrepeats(self):
    """get array that has repeats given by floor(weights)

    observations with weight=0 are dropped
    """
    w_int = np.floor(self.weights).astype(int)
    return np.repeat(self.data, w_int, axis=0)
asrepeats
python
statsmodels/statsmodels
statsmodels/stats/weightstats.py
https://github.com/statsmodels/statsmodels/blob/master/statsmodels/stats/weightstats.py
BSD-3-Clause
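A sketch of asrepeats showing how integer-valued weights expand back into repeated observations; the inputs are illustrative.

import numpy as np
from statsmodels.stats.weightstats import DescrStatsW

d = DescrStatsW(np.array([1.0, 2.0, 3.0]), weights=np.array([2, 0, 3]))
print(d.asrepeats())  # [1. 1. 3. 3. 3.]; the weight-0 observation is dropped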
def _tstat_generic(value1, value2, std_diff, dof, alternative, diff=0):
    """generic ttest based on summary statistic

    The test statistic is :
        tstat = (value1 - value2 - diff) / std_diff

    and is assumed to be t-distributed with ``dof`` degrees of freedom.

    Parameters
    ----------
    value1 : float or ndarray
        Value, for example mean, of the first sample.
    value2 : float or ndarray
        Value, for example mean, of the second sample.
    std_diff : float or ndarray
        Standard error of the difference value1 - value2
    dof : int or float
        Degrees of freedom
    alternative : str
        The alternative hypothesis, H1, has to be one of the following

        * 'two-sided' : H1: ``value1 - value2 - diff`` not equal to 0.
        * 'larger' :   H1: ``value1 - value2 - diff > 0``
        * 'smaller' :  H1: ``value1 - value2 - diff < 0``

    diff : float
        value of difference ``value1 - value2`` under the null hypothesis

    Returns
    -------
    tstat : float or ndarray
        Test statistic.
    pvalue : float or ndarray
        P-value of the hypothesis test assuming that the test
        statistic is t-distributed with ``dof`` degrees of freedom.
    """
    tstat = (value1 - value2 - diff) / std_diff
    if alternative in ["two-sided", "2-sided", "2s"]:
        pvalue = stats.t.sf(np.abs(tstat), dof) * 2
    elif alternative in ["larger", "l"]:
        pvalue = stats.t.sf(tstat, dof)
    elif alternative in ["smaller", "s"]:
        pvalue = stats.t.cdf(tstat, dof)
    else:
        raise ValueError("invalid alternative")
    return tstat, pvalue
_tstat_generic
python
statsmodels/statsmodels
statsmodels/stats/weightstats.py
https://github.com/statsmodels/statsmodels/blob/master/statsmodels/stats/weightstats.py
BSD-3-Clause
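A sketch of calling the helper directly with summary statistics only; since _tstat_generic is a private function, importing it is an assumption that may break across statsmodels versions, and the numbers are made up.

from statsmodels.stats.weightstats import _tstat_generic

# means 5.2 vs 4.8, standard error of the difference 0.15, 58 degrees of freedom
tstat, pvalue = _tstat_generic(5.2, 4.8, 0.15, 58, "two-sided", diff=0)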
def _tconfint_generic(mean, std_mean, dof, alpha, alternative):
    """generic t-confint based on summary statistic

    Parameters
    ----------
    mean : float or ndarray
        Value of the sample statistic, for example a mean or a
        difference in means.
    std_mean : float or ndarray
        Standard error of the sample statistic.
    dof : int or float
        Degrees of freedom
    alpha : float
        Significance level for the confidence interval, coverage is
        ``1-alpha``.
    alternative : str
        The alternative hypothesis, H1, has to be one of the following

        * 'two-sided' : H1: statistic not equal to value.
        * 'larger' :    H1: statistic larger than value
        * 'smaller' :   H1: statistic smaller than value

    Returns
    -------
    lower : float or ndarray
        Lower confidence limit. This is -inf for the one-sided
        alternative "smaller".
    upper : float or ndarray
        Upper confidence limit. This is inf for the one-sided
        alternative "larger".
    """
    if alternative in ["two-sided", "2-sided", "2s"]:
        tcrit = stats.t.ppf(1 - alpha / 2.0, dof)
        lower = mean - tcrit * std_mean
        upper = mean + tcrit * std_mean
    elif alternative in ["larger", "l"]:
        tcrit = stats.t.ppf(alpha, dof)
        lower = mean + tcrit * std_mean
        upper = np.inf
    elif alternative in ["smaller", "s"]:
        tcrit = stats.t.ppf(1 - alpha, dof)
        lower = -np.inf
        upper = mean + tcrit * std_mean
    else:
        raise ValueError("invalid alternative")

    return lower, upper
_tconfint_generic
python
statsmodels/statsmodels
statsmodels/stats/weightstats.py
https://github.com/statsmodels/statsmodels/blob/master/statsmodels/stats/weightstats.py
BSD-3-Clause
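The confidence-interval helper works the same way from summary statistics; again, importing a private function is an assumption and the numbers are illustrative.

from statsmodels.stats.weightstats import _tconfint_generic

# 95% two-sided interval for a statistic of 10.0, standard error 0.5, 24 dof
lower, upper = _tconfint_generic(10.0, 0.5, 24, 0.05, "two-sided")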
def _zstat_generic(value1, value2, std_diff, alternative, diff=0):
    """generic (normal) z-test based on summary statistic

    The test statistic is :
        zstat = (value1 - value2 - diff) / std_diff

    and is assumed to be normally distributed.

    Parameters
    ----------
    value1 : float or ndarray
        Value, for example mean, of the first sample.
    value2 : float or ndarray
        Value, for example mean, of the second sample.
    std_diff : float or ndarray
        Standard error of the difference value1 - value2
    alternative : str
        The alternative hypothesis, H1, has to be one of the following

        * 'two-sided' : H1: ``value1 - value2 - diff`` not equal to 0.
        * 'larger' :   H1: ``value1 - value2 - diff > 0``
        * 'smaller' :  H1: ``value1 - value2 - diff < 0``

    diff : float
        value of difference ``value1 - value2`` under the null hypothesis

    Returns
    -------
    zstat : float or ndarray
        Test statistic.
    pvalue : float or ndarray
        P-value of the hypothesis test assuming that the test
        statistic is normally distributed.
    """
    zstat = (value1 - value2 - diff) / std_diff
    if alternative in ["two-sided", "2-sided", "2s"]:
        pvalue = stats.norm.sf(np.abs(zstat)) * 2
    elif alternative in ["larger", "l"]:
        pvalue = stats.norm.sf(zstat)
    elif alternative in ["smaller", "s"]:
        pvalue = stats.norm.cdf(zstat)
    else:
        raise ValueError("invalid alternative")
    return zstat, pvalue
_zstat_generic
python
statsmodels/statsmodels
statsmodels/stats/weightstats.py
https://github.com/statsmodels/statsmodels/blob/master/statsmodels/stats/weightstats.py
BSD-3-Clause
def _zstat_generic2(value, std, alternative):
    """generic (normal) z-test based on summary statistic

    The test statistic is :
        zstat = value / std

    where ``value`` is assumed to be normally distributed with
    standard deviation ``std``, so that ``zstat`` is standard normal
    under the null hypothesis.

    Parameters
    ----------
    value : float or ndarray
        Value of a sample statistic, for example mean.
    std : float or ndarray
        Standard error of the sample statistic value.
    alternative : str
        The alternative hypothesis, H1, has to be one of the following

        * 'two-sided' : H1: ``value`` not equal to 0.
        * 'larger' :   H1: ``value > 0``
        * 'smaller' :  H1: ``value < 0``

    Returns
    -------
    zstat : float or ndarray
        Test statistic.
    pvalue : float or ndarray
        P-value of the hypothesis test assuming that the test
        statistic is normally distributed.
    """
    zstat = value / std
    if alternative in ["two-sided", "2-sided", "2s"]:
        pvalue = stats.norm.sf(np.abs(zstat)) * 2
    elif alternative in ["larger", "l"]:
        pvalue = stats.norm.sf(zstat)
    elif alternative in ["smaller", "s"]:
        pvalue = stats.norm.cdf(zstat)
    else:
        raise ValueError("invalid alternative")
    return zstat, pvalue
_zstat_generic2
python
statsmodels/statsmodels
statsmodels/stats/weightstats.py
https://github.com/statsmodels/statsmodels/blob/master/statsmodels/stats/weightstats.py
BSD-3-Clause
def _zconfint_generic(mean, std_mean, alpha, alternative):
    """generic normal-confint based on summary statistic

    Parameters
    ----------
    mean : float or ndarray
        Value of the sample statistic, for example a mean or a
        difference in means.
    std_mean : float or ndarray
        Standard error of the sample statistic.
    alpha : float
        Significance level for the confidence interval, coverage is
        ``1-alpha``
    alternative : str
        The alternative hypothesis, H1, has to be one of the following

        * 'two-sided' : H1: statistic not equal to value.
        * 'larger' :    H1: statistic larger than value
        * 'smaller' :   H1: statistic smaller than value

    Returns
    -------
    lower : float or ndarray
        Lower confidence limit. This is -inf for the one-sided
        alternative "smaller".
    upper : float or ndarray
        Upper confidence limit. This is inf for the one-sided
        alternative "larger".
    """
    if alternative in ["two-sided", "2-sided", "2s"]:
        zcrit = stats.norm.ppf(1 - alpha / 2.0)
        lower = mean - zcrit * std_mean
        upper = mean + zcrit * std_mean
    elif alternative in ["larger", "l"]:
        zcrit = stats.norm.ppf(alpha)
        lower = mean + zcrit * std_mean
        upper = np.inf
    elif alternative in ["smaller", "s"]:
        zcrit = stats.norm.ppf(1 - alpha)
        lower = -np.inf
        upper = mean + zcrit * std_mean
    else:
        raise ValueError("invalid alternative")

    return lower, upper
_zconfint_generic
python
statsmodels/statsmodels
statsmodels/stats/weightstats.py
https://github.com/statsmodels/statsmodels/blob/master/statsmodels/stats/weightstats.py
BSD-3-Clause
def __init__(self, d1, d2):
    """assume d1, d2 hold the relevant attributes"""
    self.d1 = d1
    self.d2 = d2
__init__
python
statsmodels/statsmodels
statsmodels/stats/weightstats.py
https://github.com/statsmodels/statsmodels/blob/master/statsmodels/stats/weightstats.py
BSD-3-Clause
def from_data(
    cls, data1, data2, weights1=None, weights2=None, ddof1=0, ddof2=0
):
    """construct a CompareMeans object from data

    Parameters
    ----------
    data1, data2 : array_like, 1-D or 2-D
        compared datasets
    weights1, weights2 : None or 1-D ndarray
        weights for each observation of data1 and data2 respectively,
        with same length as zero axis of corresponding dataset.
    ddof1, ddof2 : int
        default ddof1=0, ddof2=0, degrees of freedom for data1,
        data2 respectively.

    Returns
    -------
    A CompareMeans instance.
    """
    return cls(
        DescrStatsW(data1, weights=weights1, ddof=ddof1),
        DescrStatsW(data2, weights=weights2, ddof=ddof2),
    )
from_data
python
statsmodels/statsmodels
statsmodels/stats/weightstats.py
https://github.com/statsmodels/statsmodels/blob/master/statsmodels/stats/weightstats.py
BSD-3-Clause
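A sketch of the from_data constructor on simulated data; the distributions and seed are arbitrary choices for illustration.

import numpy as np
from statsmodels.stats.weightstats import CompareMeans

rng = np.random.default_rng(0)
x1 = rng.normal(1.0, 1.0, size=40)
x2 = rng.normal(0.7, 1.2, size=50)
cm = CompareMeans.from_data(x1, x2)
tstat, pvalue, dof = cm.ttest_ind(usevar="unequal")  # Welch test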
def summary(self, use_t=True, alpha=0.05, usevar="pooled", value=0):
    """summarize the results of the hypothesis test

    Parameters
    ----------
    use_t : bool, optional
        If use_t is True, then t test results are returned;
        if use_t is False, then z test results are returned.
    alpha : float
        significance level for the confidence interval, coverage is
        ``1-alpha``
    usevar : str, 'pooled' or 'unequal'
        If ``pooled``, then the standard deviation of the samples is
        assumed to be the same. If ``unequal``, then the variance of
        the Welch ttest will be used, and the degrees of freedom are
        those of Satterthwaite if ``use_t`` is True.
    value : float
        difference between the means under the Null hypothesis.

    Returns
    -------
    smry : SimpleTable
    """
    d1 = self.d1
    d2 = self.d2

    if use_t:
        tstat, pvalue, _ = self.ttest_ind(usevar=usevar, value=value)
        lower, upper = self.tconfint_diff(alpha=alpha, usevar=usevar)
    else:
        tstat, pvalue = self.ztest_ind(usevar=usevar, value=value)
        lower, upper = self.zconfint_diff(alpha=alpha, usevar=usevar)

    if usevar == "pooled":
        std_err = self.std_meandiff_pooledvar
    else:
        std_err = self.std_meandiff_separatevar

    std_err = np.atleast_1d(std_err)
    tstat = np.atleast_1d(tstat)
    pvalue = np.atleast_1d(pvalue)
    lower = np.atleast_1d(lower)
    upper = np.atleast_1d(upper)
    conf_int = np.column_stack((lower, upper))
    params = np.atleast_1d(d1.mean - d2.mean - value)

    title = "Test for equality of means"
    yname = "y"  # not used in params_frame
    xname = ["subset #%d" % (ii + 1) for ii in range(tstat.shape[0])]

    from statsmodels.iolib.summary import summary_params

    return summary_params(
        (None, params, std_err, tstat, pvalue, conf_int),
        alpha=alpha,
        use_t=use_t,
        yname=yname,
        xname=xname,
        title=title,
    )
summary
python
statsmodels/statsmodels
statsmodels/stats/weightstats.py
https://github.com/statsmodels/statsmodels/blob/master/statsmodels/stats/weightstats.py
BSD-3-Clause
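A sketch of summary on simulated data; it returns a SimpleTable with the mean difference, its standard error, the test statistic, p-value, and confidence interval.

import numpy as np
from statsmodels.stats.weightstats import CompareMeans

rng = np.random.default_rng(1)
cm = CompareMeans.from_data(rng.normal(5, 1, 30), rng.normal(4.5, 1, 30))
print(cm.summary(use_t=True, alpha=0.05, usevar="pooled"))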
def std_meandiff_pooledvar(self):
    """standard deviation of the difference in means, assuming equal
    variance in both data sets
    """
    # this uses ``_var`` to use ddof=0 for formula
    d1 = self.d1
    d2 = self.d2
    # could make var_pooled into attribute
    var_pooled = (d1.sumsquares + d2.sumsquares) / (
        # (d1.nobs - d1.ddof + d2.nobs - d2.ddof)
        d1.nobs - 1 + d2.nobs - 1
    )
    return np.sqrt(var_pooled * (1.0 / d1.nobs + 1.0 / d2.nobs))
std_meandiff_pooledvar
python
statsmodels/statsmodels
statsmodels/stats/weightstats.py
https://github.com/statsmodels/statsmodels/blob/master/statsmodels/stats/weightstats.py
BSD-3-Clause
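With unit weights this should agree with the textbook pooled standard error sqrt(s_p^2 * (1/n1 + 1/n2)); a quick check on made-up data. In the library this is a cached attribute (the decorator is not visible in the extract), so it is accessed without parentheses, as the summary method above also does.

import numpy as np
from statsmodels.stats.weightstats import CompareMeans

x1 = np.array([4.2, 5.1, 4.8, 5.5])
x2 = np.array([3.9, 4.4, 4.1, 4.6, 4.3])
cm = CompareMeans.from_data(x1, x2)

n1, n2 = len(x1), len(x2)
s2p = ((n1 - 1) * x1.var(ddof=1) + (n2 - 1) * x2.var(ddof=1)) / (n1 + n2 - 2)
manual = np.sqrt(s2p * (1.0 / n1 + 1.0 / n2))
assert np.allclose(cm.std_meandiff_pooledvar, manual)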
def dof_satt(self):
    """degrees of freedom of Satterthwaite for unequal variance"""
    d1 = self.d1
    d2 = self.d2
    # this follows blindly the SPSS manual
    # except I use ``_var`` which has ddof=0
    sem1 = d1._var / (d1.nobs - 1)
    sem2 = d2._var / (d2.nobs - 1)
    semsum = sem1 + sem2
    z1 = (sem1 / semsum) ** 2 / (d1.nobs - 1)
    z2 = (sem2 / semsum) ** 2 / (d2.nobs - 1)
    dof = 1.0 / (z1 + z2)
    return dof
dof_satt
python
statsmodels/statsmodels
statsmodels/stats/weightstats.py
https://github.com/statsmodels/statsmodels/blob/master/statsmodels/stats/weightstats.py
BSD-3-Clause
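The method should reproduce the Welch-Satterthwaite formula (v1 + v2)**2 / (v1**2/(n1-1) + v2**2/(n2-1)) with vi = si**2/ni; a quick check on made-up data:

import numpy as np
from statsmodels.stats.weightstats import CompareMeans

x1 = np.array([1.1, 2.3, 1.9, 2.8, 2.0])
x2 = np.array([3.2, 2.9, 4.1])
cm = CompareMeans.from_data(x1, x2)

n1, n2 = len(x1), len(x2)
v1 = x1.var(ddof=1) / n1
v2 = x2.var(ddof=1) / n2
manual = (v1 + v2) ** 2 / (v1 ** 2 / (n1 - 1) + v2 ** 2 / (n2 - 1))
assert np.allclose(cm.dof_satt(), manual)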
def ttest_ind(self, alternative="two-sided", usevar="pooled", value=0):
    """ttest for the null hypothesis of identical means

    this should also be the same as onewaygls, except for ddof
    differences

    Parameters
    ----------
    alternative : str
        The alternative hypothesis, H1, has to be one of the following

          'two-sided': H1: difference in means not equal to value
              (default)
          'larger' :   H1: difference in means larger than value
          'smaller' :  H1: difference in means smaller than value

    usevar : str, 'pooled' or 'unequal'
        If ``pooled``, then the standard deviation of the samples is
        assumed to be the same. If ``unequal``, then Welch ttest with
        Satterthwaite degrees of freedom is used
    value : float
        difference between the means under the Null hypothesis.

    Returns
    -------
    tstat : float
        test statistic
    pvalue : float
        pvalue of the t-test
    df : int or float
        degrees of freedom used in the t-test

    Notes
    -----
    The data, 1-D or 2-D, are held by the ``d1`` and ``d2`` instances
    attached to this CompareMeans object; in the 2-D case the test is
    carried out column by column.

    The result is independent of the user specified ddof.
    """
    d1 = self.d1
    d2 = self.d2

    if usevar == "pooled":
        stdm = self.std_meandiff_pooledvar
        dof = d1.nobs - 1 + d2.nobs - 1
    elif usevar == "unequal":
        stdm = self.std_meandiff_separatevar
        dof = self.dof_satt()
    else:
        raise ValueError('usevar can only be "pooled" or "unequal"')

    tstat, pval = _tstat_generic(
        d1.mean, d2.mean, stdm, dof, alternative, diff=value
    )

    return tstat, pval, dof
ttest_ind
python
statsmodels/statsmodels
statsmodels/stats/weightstats.py
https://github.com/statsmodels/statsmodels/blob/master/statsmodels/stats/weightstats.py
BSD-3-Clause
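A sketch combining the test with the matching confidence interval; the data are illustrative.

import numpy as np
from statsmodels.stats.weightstats import DescrStatsW, CompareMeans

d1 = DescrStatsW(np.array([23.0, 25.5, 21.8, 24.2, 26.1]))
d2 = DescrStatsW(np.array([20.1, 22.4, 19.8, 21.5]))
cm = CompareMeans(d1, d2)
tstat, pvalue, dof = cm.ttest_ind(usevar="unequal")  # Welch test
lower, upper = cm.tconfint_diff(alpha=0.05, usevar="unequal")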
def ztest_ind(self, alternative="two-sided", usevar="pooled", value=0):
    """z-test for the null hypothesis of identical means

    Parameters
    ----------
    alternative : str
        The alternative hypothesis, H1, has to be one of the following

          'two-sided': H1: difference in means not equal to value
              (default)
          'larger' :   H1: difference in means larger than value
          'smaller' :  H1: difference in means smaller than value

    usevar : str, 'pooled' or 'unequal'
        If ``pooled``, then the standard deviation of the samples is
        assumed to be the same. If ``unequal``, then the standard
        deviations of the samples may be different.
    value : float
        difference between the means under the Null hypothesis.

    Returns
    -------
    tstat : float
        test statistic
    pvalue : float
        pvalue of the z-test
    """
    d1 = self.d1
    d2 = self.d2

    if usevar == "pooled":
        stdm = self.std_meandiff_pooledvar
    elif usevar == "unequal":
        stdm = self.std_meandiff_separatevar
    else:
        raise ValueError('usevar can only be "pooled" or "unequal"')

    tstat, pval = _zstat_generic(
        d1.mean, d2.mean, stdm, alternative, diff=value
    )

    return tstat, pval
ztest_ind
python
statsmodels/statsmodels
statsmodels/stats/weightstats.py
https://github.com/statsmodels/statsmodels/blob/master/statsmodels/stats/weightstats.py
BSD-3-Clause
def tconfint_diff(
    self, alpha=0.05, alternative="two-sided", usevar="pooled"
):
    """confidence interval for the difference in means

    Parameters
    ----------
    alpha : float
        significance level for the confidence interval, coverage is
        ``1-alpha``
    alternative : str
        This specifies the alternative hypothesis for the test that
        corresponds to the confidence interval.
        The alternative hypothesis, H1, has to be one of the following :

          'two-sided': H1: difference in means not equal to value
              (default)
          'larger' :   H1: difference in means larger than value
          'smaller' :  H1: difference in means smaller than value

    usevar : str, 'pooled' or 'unequal'
        If ``pooled``, then the standard deviation of the samples is
        assumed to be the same. If ``unequal``, then Welch ttest with
        Satterthwaite degrees of freedom is used

    Returns
    -------
    lower, upper : floats
        lower and upper limits of the confidence interval

    Notes
    -----
    The result is independent of the user specified ddof.
    """
    d1 = self.d1
    d2 = self.d2
    diff = d1.mean - d2.mean
    if usevar == "pooled":
        std_diff = self.std_meandiff_pooledvar
        dof = d1.nobs - 1 + d2.nobs - 1
    elif usevar == "unequal":
        std_diff = self.std_meandiff_separatevar
        dof = self.dof_satt()
    else:
        raise ValueError('usevar can only be "pooled" or "unequal"')

    res = _tconfint_generic(
        diff, std_diff, dof, alpha=alpha, alternative=alternative
    )
    return res
tconfint_diff
python
statsmodels/statsmodels
statsmodels/stats/weightstats.py
https://github.com/statsmodels/statsmodels/blob/master/statsmodels/stats/weightstats.py
BSD-3-Clause
def zconfint_diff(
    self, alpha=0.05, alternative="two-sided", usevar="pooled"
):
    """confidence interval for the difference in means

    Parameters
    ----------
    alpha : float
        significance level for the confidence interval, coverage is
        ``1-alpha``
    alternative : str
        This specifies the alternative hypothesis for the test that
        corresponds to the confidence interval.
        The alternative hypothesis, H1, has to be one of the following :

          'two-sided': H1: difference in means not equal to value
              (default)
          'larger' :   H1: difference in means larger than value
          'smaller' :  H1: difference in means smaller than value

    usevar : str, 'pooled' or 'unequal'
        If ``pooled``, then the standard deviation of the samples is
        assumed to be the same. If ``unequal``, then the standard
        deviations of the samples may be different.

    Returns
    -------
    lower, upper : floats
        lower and upper limits of the confidence interval

    Notes
    -----
    The result is independent of the user specified ddof.
    """
    d1 = self.d1
    d2 = self.d2
    diff = d1.mean - d2.mean
    if usevar == "pooled":
        std_diff = self.std_meandiff_pooledvar
    elif usevar == "unequal":
        std_diff = self.std_meandiff_separatevar
    else:
        raise ValueError('usevar can only be "pooled" or "unequal"')

    res = _zconfint_generic(
        diff, std_diff, alpha=alpha, alternative=alternative
    )
    return res
zconfint_diff
python
statsmodels/statsmodels
statsmodels/stats/weightstats.py
https://github.com/statsmodels/statsmodels/blob/master/statsmodels/stats/weightstats.py
BSD-3-Clause
def ttost_ind(self, low, upp, usevar="pooled"):
    """
    test of equivalence for two independent samples, based on t-test

    Parameters
    ----------
    low, upp : float
        equivalence interval low < m1 - m2 < upp
    usevar : str, 'pooled' or 'unequal'
        If ``pooled``, then the standard deviation of the samples is
        assumed to be the same. If ``unequal``, then Welch ttest with
        Satterthwaite degrees of freedom is used

    Returns
    -------
    pvalue : float
        pvalue of the non-equivalence test
    t1, pv1 : tuple of floats
        test statistic and pvalue for lower threshold test
    t2, pv2 : tuple of floats
        test statistic and pvalue for upper threshold test
    """
    tt1 = self.ttest_ind(alternative="larger", usevar=usevar, value=low)
    tt2 = self.ttest_ind(alternative="smaller", usevar=usevar, value=upp)
    # TODO: remove tuple return, use same as for function tost_ind
    return np.maximum(tt1[1], tt2[1]), (tt1, tt2)
ttost_ind
python
statsmodels/statsmodels
statsmodels/stats/weightstats.py
https://github.com/statsmodels/statsmodels/blob/master/statsmodels/stats/weightstats.py
BSD-3-Clause
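A sketch of the equivalence test on simulated data with a +/- 0.5 margin; note the method returns the TOST pvalue plus a nested tuple of the two one-sided results.

import numpy as np
from statsmodels.stats.weightstats import CompareMeans

rng = np.random.default_rng(2)
cm = CompareMeans.from_data(rng.normal(10, 1, 50), rng.normal(10.1, 1, 50))
pvalue, (res_low, res_upp) = cm.ttost_ind(-0.5, 0.5, usevar="pooled")
# res_low and res_upp are each (tstat, pvalue, dof) of one one-sided test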
def ztost_ind(self, low, upp, usevar="pooled"):
    """
    test of equivalence for two independent samples, based on z-test

    Parameters
    ----------
    low, upp : float
        equivalence interval low < m1 - m2 < upp
    usevar : str, 'pooled' or 'unequal'
        If ``pooled``, then the standard deviation of the samples is
        assumed to be the same. If ``unequal``, then the standard
        deviations of the samples may be different.

    Returns
    -------
    pvalue : float
        pvalue of the non-equivalence test
    t1, pv1 : tuple of floats
        test statistic and pvalue for lower threshold test
    t2, pv2 : tuple of floats
        test statistic and pvalue for upper threshold test
    """
    tt1 = self.ztest_ind(alternative="larger", usevar=usevar, value=low)
    tt2 = self.ztest_ind(alternative="smaller", usevar=usevar, value=upp)
    # TODO: remove tuple return, use same as for function tost_ind
    return np.maximum(tt1[1], tt2[1]), tt1, tt2
ztost_ind
python
statsmodels/statsmodels
statsmodels/stats/weightstats.py
https://github.com/statsmodels/statsmodels/blob/master/statsmodels/stats/weightstats.py
BSD-3-Clause
def ttest_ind(
    x1,
    x2,
    alternative="two-sided",
    usevar="pooled",
    weights=(None, None),
    value=0,
):
    """ttest independent sample

    Convenience function that uses the classes and throws away the
    intermediate results, compared to scipy stats: drops axis option,
    adds alternative, usevar, and weights option.

    Parameters
    ----------
    x1 : array_like, 1-D or 2-D
        first of the two independent samples, see notes for 2-D case
    x2 : array_like, 1-D or 2-D
        second of the two independent samples, see notes for 2-D case
    alternative : str
        The alternative hypothesis, H1, has to be one of the following

        * 'two-sided' (default): H1: difference in means not equal to value
        * 'larger' :   H1: difference in means larger than value
        * 'smaller' :  H1: difference in means smaller than value

    usevar : str, 'pooled' or 'unequal'
        If ``pooled``, then the standard deviation of the samples is
        assumed to be the same. If ``unequal``, then Welch ttest with
        Satterthwaite degrees of freedom is used
    weights : tuple of None or ndarrays
        Case weights for the two samples. For details on weights see
        ``DescrStatsW``
    value : float
        difference between the means under the Null hypothesis.

    Returns
    -------
    tstat : float
        test statistic
    pvalue : float
        pvalue of the t-test
    df : int or float
        degrees of freedom used in the t-test
    """
    cm = CompareMeans(
        DescrStatsW(x1, weights=weights[0], ddof=0),
        DescrStatsW(x2, weights=weights[1], ddof=0),
    )
    tstat, pval, dof = cm.ttest_ind(
        alternative=alternative, usevar=usevar, value=value
    )
    return tstat, pval, dof
ttest_ind
python
statsmodels/statsmodels
statsmodels/stats/weightstats.py
https://github.com/statsmodels/statsmodels/blob/master/statsmodels/stats/weightstats.py
BSD-3-Clause
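The convenience function mirrors the method but takes raw data, including optional case weights; the data and weights below are illustrative assumptions.

import numpy as np
from statsmodels.stats.weightstats import ttest_ind

x1 = np.array([5.1, 4.9, 5.4, 5.0, 5.2])
x2 = np.array([4.6, 4.8, 4.5, 4.9])
w2 = np.array([1.0, 1.0, 1.0, 2.0])  # give the last x2 observation double weight
tstat, pvalue, dof = ttest_ind(x1, x2, usevar="unequal", weights=(None, w2))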
def ttost_ind(
    x1, x2, low, upp, usevar="pooled", weights=(None, None), transform=None
):
    """test of (non-)equivalence for two independent samples

    TOST: two one-sided t tests

    null hypothesis:  m1 - m2 < low or m1 - m2 > upp
    alternative hypothesis:  low < m1 - m2 < upp

    where m1, m2 are the means, expected values of the two samples.

    If the pvalue is smaller than a threshold, say 0.05, then we
    reject the hypothesis that the difference between the two samples
    is larger than the thresholds given by low and upp.

    Parameters
    ----------
    x1 : array_like, 1-D or 2-D
        first of the two independent samples, see notes for 2-D case
    x2 : array_like, 1-D or 2-D
        second of the two independent samples, see notes for 2-D case
    low, upp : float
        equivalence interval low < m1 - m2 < upp
    usevar : str, 'pooled' or 'unequal'
        If ``pooled``, then the standard deviation of the samples is
        assumed to be the same. If ``unequal``, then Welch ttest with
        Satterthwaite degrees of freedom is used
    weights : tuple of None or ndarrays
        Case weights for the two samples. For details on weights see
        ``DescrStatsW``
    transform : None or function
        If None (default), then the data is not transformed. Given a
        function, sample data and thresholds are transformed. If
        transform is log, then the equivalence interval is in ratio:
        low < m1 / m2 < upp

    Returns
    -------
    pvalue : float
        pvalue of the non-equivalence test
    t1, pv1 : tuple of floats
        test statistic and pvalue for lower threshold test
    t2, pv2 : tuple of floats
        test statistic and pvalue for upper threshold test

    Notes
    -----
    The test rejects if the 2*alpha confidence interval for the
    difference is contained in the ``(low, upp)`` interval.

    This test works also for multi-endpoint comparisons: If d1 and d2
    have the same number of columns, then each column of the data in
    d1 is compared with the corresponding column in d2. This is the
    same as comparing each of the corresponding columns separately.
    Currently no multi-comparison correction is used. The raw p-values
    reported here can be corrected with the functions in
    ``multitest``.
    """
    if transform:
        if transform is np.log:
            # avoid hstack in special case
            x1 = transform(x1)
            x2 = transform(x2)
        else:
            # for transforms like rankdata that will need both datasets
            # concatenate works for stacking 1d and 2d arrays
            xx = transform(np.concatenate((x1, x2), 0))
            x1 = xx[: len(x1)]
            x2 = xx[len(x1) :]
        low = transform(low)
        upp = transform(upp)

    cm = CompareMeans(
        DescrStatsW(x1, weights=weights[0], ddof=0),
        DescrStatsW(x2, weights=weights[1], ddof=0),
    )
    pval, res = cm.ttost_ind(low, upp, usevar=usevar)
    return pval, res[0], res[1]
ttost_ind
python
statsmodels/statsmodels
statsmodels/stats/weightstats.py
https://github.com/statsmodels/statsmodels/blob/master/statsmodels/stats/weightstats.py
BSD-3-Clause
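A sketch of the transform option: with transform=np.log the equivalence margins are ratios, here the conventional 0.8 to 1.25 bioequivalence bounds, tested on the log scale. The lognormal data are simulated for illustration.

import numpy as np
from statsmodels.stats.weightstats import ttost_ind

rng = np.random.default_rng(3)
x1 = rng.lognormal(0.0, 0.2, size=60)
x2 = rng.lognormal(0.02, 0.2, size=60)
pvalue, res_low, res_upp = ttost_ind(x1, x2, 0.8, 1.25, transform=np.log)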
def ttost_paired(x1, x2, low, upp, transform=None, weights=None):
    """test of (non-)equivalence for two dependent, paired samples

    TOST: two one-sided t tests

    null hypothesis:  md < low or md > upp
    alternative hypothesis:  low < md < upp

    where md is the mean, expected value of the difference x1 - x2

    If the pvalue is smaller than a threshold, say 0.05, then we
    reject the hypothesis that the difference between the two samples
    is larger than the thresholds given by low and upp.

    Parameters
    ----------
    x1 : array_like
        first of the two dependent samples
    x2 : array_like
        second of the two dependent samples
    low, upp : float
        equivalence interval low < mean of difference < upp
    weights : None or ndarray
        case weights for the two samples. For details on weights see
        ``DescrStatsW``
    transform : None or function
        If None (default), then the data is not transformed. Given a
        function, sample data and thresholds are transformed. If
        transform is log, then the equivalence interval is in ratio:
        low < x1 / x2 < upp

    Returns
    -------
    pvalue : float
        pvalue of the non-equivalence test
    t1, pv1, df1 : tuple
        test statistic, pvalue and degrees of freedom for lower
        threshold test
    t2, pv2, df2 : tuple
        test statistic, pvalue and degrees of freedom for upper
        threshold test
    """
    if transform:
        if transform is np.log:
            # avoid hstack in special case
            x1 = transform(x1)
            x2 = transform(x2)
        else:
            # for transforms like rankdata that will need both datasets
            # concatenate works for stacking 1d and 2d arrays
            xx = transform(np.concatenate((x1, x2), 0))
            x1 = xx[: len(x1)]
            x2 = xx[len(x1) :]
        low = transform(low)
        upp = transform(upp)

    dd = DescrStatsW(x1 - x2, weights=weights, ddof=0)
    t1, pv1, df1 = dd.ttest_mean(low, alternative="larger")
    t2, pv2, df2 = dd.ttest_mean(upp, alternative="smaller")
    return np.maximum(pv1, pv2), (t1, pv1, df1), (t2, pv2, df2)
test of (non-)equivalence for two dependent, paired samples TOST: two one-sided t tests null hypothesis: md < low or md > upp alternative hypothesis: low < md < upp where md is the mean, expected value of the difference x1 - x2. If the pvalue is smaller than a threshold, say 0.05, then we reject the hypothesis that the difference between the two samples is larger than the thresholds given by low and upp. Parameters ---------- x1 : array_like first of the two paired samples x2 : array_like second of the two paired samples low, upp : float equivalence interval low < mean of difference < upp weights : None or ndarray case weights for the two samples. For details on weights see ``DescrStatsW`` transform : None or function If None (default), then the data is not transformed. Given a function, sample data and thresholds are transformed. If transform is log, then the equivalence interval is in ratio: low < x1 / x2 < upp Returns ------- pvalue : float pvalue of the non-equivalence test t1, pv1, df1 : tuple test statistic, pvalue and degrees of freedom for lower threshold test t2, pv2, df2 : tuple test statistic, pvalue and degrees of freedom for upper threshold test
ttost_paired
python
statsmodels/statsmodels
statsmodels/stats/weightstats.py
https://github.com/statsmodels/statsmodels/blob/master/statsmodels/stats/weightstats.py
BSD-3-Clause
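A minimal sketch for ttost_paired, not from the source file: the before/after pairs and the (-0.3, 0.3) margin are illustrative.

import numpy as np
from statsmodels.stats.weightstats import ttost_paired

rng = np.random.default_rng(0)
before = rng.normal(10.0, 1.0, size=50)
after = before + rng.normal(0.05, 0.2, size=50)  # nearly unchanged pairs

pvalue, test_low, test_upp = ttost_paired(after, before, -0.3, 0.3)
print(pvalue)  # small pvalue -> mean difference lies inside (-0.3, 0.3)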
def ztest( x1, x2=None, value=0, alternative="two-sided", usevar="pooled", ddof=1.0 ): """test for mean based on normal distribution, one or two samples In the case of two samples, the samples are assumed to be independent. Parameters ---------- x1 : array_like, 1-D or 2-D first of the two independent samples x2 : array_like, 1-D or 2-D second of the two independent samples value : float In the one sample case, value is the mean of x1 under the Null hypothesis. In the two sample case, value is the difference between mean of x1 and mean of x2 under the Null hypothesis. The test statistic is `x1_mean - x2_mean - value`. alternative : str The alternative hypothesis, H1, has to be one of the following 'two-sided': H1: difference in means not equal to value (default) 'larger' : H1: difference in means larger than value 'smaller' : H1: difference in means smaller than value usevar : str, 'pooled' or 'unequal' If ``pooled``, then the standard deviation of the samples is assumed to be the same. If ``unequal``, then the standard deviations of the samples are assumed to be different. ddof : int Degrees of freedom used in the calculation of the variance of the mean estimate. In the case of comparing means this is one, however it can be adjusted for testing other statistics (proportion, correlation) Returns ------- tstat : float test statistic pvalue : float pvalue of the z-test Notes ----- usevar can be pooled or unequal in two sample case """ # TODO: this should delegate to CompareMeans like ttest_ind # However that does not implement ddof # usevar can be pooled or unequal if usevar not in {"pooled", "unequal"}: raise NotImplementedError('usevar can only be "pooled" or "unequal"') x1 = np.asarray(x1) nobs1 = x1.shape[0] x1_mean = x1.mean(0) x1_var = x1.var(0) if x2 is not None: x2 = np.asarray(x2) nobs2 = x2.shape[0] x2_mean = x2.mean(0) x2_var = x2.var(0) if usevar == "pooled": var = nobs1 * x1_var + nobs2 * x2_var var /= nobs1 + nobs2 - 2 * ddof var *= 1.0 / nobs1 + 1.0 / nobs2 elif usevar == "unequal": var = x1_var / (nobs1 - ddof) + x2_var / (nobs2 - ddof) else: var = x1_var / (nobs1 - ddof) x2_mean = 0 std_diff = np.sqrt(var) # stat = x1_mean - x2_mean - value return _zstat_generic(x1_mean, x2_mean, std_diff, alternative, diff=value)
test for mean based on normal distribution, one or two samples In the case of two samples, the samples are assumed to be independent. Parameters ---------- x1 : array_like, 1-D or 2-D first of the two independent samples x2 : array_like, 1-D or 2-D second of the two independent samples value : float In the one sample case, value is the mean of x1 under the Null hypothesis. In the two sample case, value is the difference between mean of x1 and mean of x2 under the Null hypothesis. The test statistic is `x1_mean - x2_mean - value`. alternative : str The alternative hypothesis, H1, has to be one of the following 'two-sided': H1: difference in means not equal to value (default) 'larger' : H1: difference in means larger than value 'smaller' : H1: difference in means smaller than value usevar : str, 'pooled' or 'unequal' If ``pooled``, then the standard deviation of the samples is assumed to be the same. If ``unequal``, then the standard deviations of the samples are assumed to be different. ddof : int Degrees of freedom used in the calculation of the variance of the mean estimate. In the case of comparing means this is one, however it can be adjusted for testing other statistics (proportion, correlation) Returns ------- tstat : float test statistic pvalue : float pvalue of the z-test Notes ----- usevar can be pooled or unequal in two sample case
ztest
python
statsmodels/statsmodels
statsmodels/stats/weightstats.py
https://github.com/statsmodels/statsmodels/blob/master/statsmodels/stats/weightstats.py
BSD-3-Clause
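A minimal sketch for ztest, not from the source file: simulated data, showing the one-sample and two-sample forms described in the docstring.

import numpy as np
from statsmodels.stats.weightstats import ztest

rng = np.random.default_rng(42)
x1 = rng.normal(1.2, 1.0, size=500)
x2 = rng.normal(1.0, 1.0, size=500)

tstat1, pvalue1 = ztest(x1, value=1.0)    # one sample: H0 mean(x1) == 1.0
tstat2, pvalue2 = ztest(x1, x2, value=0)  # two samples: H0 mean difference == 0
print(pvalue1, pvalue2)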
def zconfint( x1, x2=None, value=0, alpha=0.05, alternative="two-sided", usevar="pooled", ddof=1.0, ): """confidence interval based on normal distribution z-test Parameters ---------- x1 : array_like, 1-D or 2-D first of the two independent samples, see notes for 2-D case x2 : array_like, 1-D or 2-D second of the two independent samples, see notes for 2-D case value : float In the one sample case, value is the mean of x1 under the Null hypothesis. In the two sample case, value is the difference between mean of x1 and mean of x2 under the Null hypothesis. The test statistic is `x1_mean - x2_mean - value`. usevar : str, 'pooled' Currently, only 'pooled' is implemented. If ``pooled``, then the standard deviation of the samples is assumed to be the same. see CompareMeans.ztest_ind for different options. ddof : int Degrees of freedom used in the calculation of the variance of the mean estimate. In the case of comparing means this is one, however it can be adjusted for testing other statistics (proportion, correlation) Notes ----- checked only for the 1 sample case; usevar is not implemented and is always pooled in the two sample case ``value`` shifts the confidence interval so it is centered at `x1_mean - x2_mean - value` See Also -------- ztest CompareMeans """ # usevar is not used, always pooled # mostly duplicate code from ztest if usevar != "pooled": raise NotImplementedError('only usevar="pooled" is implemented') x1 = np.asarray(x1) nobs1 = x1.shape[0] x1_mean = x1.mean(0) x1_var = x1.var(0) if x2 is not None: x2 = np.asarray(x2) nobs2 = x2.shape[0] x2_mean = x2.mean(0) x2_var = x2.var(0) var_pooled = nobs1 * x1_var + nobs2 * x2_var var_pooled /= nobs1 + nobs2 - 2 * ddof var_pooled *= 1.0 / nobs1 + 1.0 / nobs2 else: var_pooled = x1_var / (nobs1 - ddof) x2_mean = 0 std_diff = np.sqrt(var_pooled) ci = _zconfint_generic( x1_mean - x2_mean - value, std_diff, alpha, alternative ) return ci
confidence interval based on normal distribution z-test Parameters ---------- x1 : array_like, 1-D or 2-D first of the two independent samples, see notes for 2-D case x2 : array_like, 1-D or 2-D second of the two independent samples, see notes for 2-D case value : float In the one sample case, value is the mean of x1 under the Null hypothesis. In the two sample case, value is the difference between mean of x1 and mean of x2 under the Null hypothesis. The test statistic is `x1_mean - x2_mean - value`. usevar : str, 'pooled' Currently, only 'pooled' is implemented. If ``pooled``, then the standard deviation of the samples is assumed to be the same. see CompareMeans.ztest_ind for different options. ddof : int Degrees of freedom used in the calculation of the variance of the mean estimate. In the case of comparing means this is one, however it can be adjusted for testing other statistics (proportion, correlation) Notes ----- checked only for the 1 sample case; usevar is not implemented and is always pooled in the two sample case ``value`` shifts the confidence interval so it is centered at `x1_mean - x2_mean - value` See Also -------- ztest CompareMeans
zconfint
python
statsmodels/statsmodels
statsmodels/stats/weightstats.py
https://github.com/statsmodels/statsmodels/blob/master/statsmodels/stats/weightstats.py
BSD-3-Clause
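A minimal sketch for zconfint, not from the source file: the simulated samples are illustrative.

import numpy as np
from statsmodels.stats.weightstats import zconfint

rng = np.random.default_rng(7)
x1 = rng.normal(3.0, 1.0, size=400)
x2 = rng.normal(2.8, 1.0, size=400)

ci_one = zconfint(x1)       # 95% CI for the mean of x1
ci_diff = zconfint(x1, x2)  # 95% CI for mean(x1) - mean(x2)
print(ci_one, ci_diff)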
def ztost(x1, low, upp, x2=None, usevar="pooled", ddof=1.0): """Equivalence test based on normal distribution Parameters ---------- x1 : array_like one sample or first sample for 2 independent samples low, upp : float equivalence interval low < m1 - m2 < upp x2 : array_like or None second sample for 2 independent samples test. If None, then a one-sample test is performed. usevar : str, 'pooled' If `pooled`, then the standard deviation of the samples is assumed to be the same. Only `pooled` is currently implemented. Returns ------- pvalue : float pvalue of the non-equivalence test t1, pv1 : tuple of floats test statistic and pvalue for lower threshold test t2, pv2 : tuple of floats test statistic and pvalue for upper threshold test Notes ----- checked only for 1 sample case """ tt1 = ztest( x1, x2, alternative="larger", usevar=usevar, value=low, ddof=ddof ) tt2 = ztest( x1, x2, alternative="smaller", usevar=usevar, value=upp, ddof=ddof ) return ( np.maximum(tt1[1], tt2[1]), tt1, tt2, )
Equivalence test based on normal distribution Parameters ---------- x1 : array_like one sample or first sample for 2 independent samples low, upp : float equivalence interval low < m1 - m2 < upp x2 : array_like or None second sample for 2 independent samples test. If None, then a one-sample test is performed. usevar : str, 'pooled' If `pooled`, then the standard deviation of the samples is assumed to be the same. Only `pooled` is currently implemented. Returns ------- pvalue : float pvalue of the non-equivalence test t1, pv1 : tuple of floats test statistic and pvalue for lower threshold test t2, pv2 : tuple of floats test statistic and pvalue for upper threshold test Notes ----- checked only for 1 sample case
ztost
python
statsmodels/statsmodels
statsmodels/stats/weightstats.py
https://github.com/statsmodels/statsmodels/blob/master/statsmodels/stats/weightstats.py
BSD-3-Clause
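A minimal sketch for ztost, not from the source file: a one-sample TOST with an illustrative (-0.1, 0.1) margin.

import numpy as np
from statsmodels.stats.weightstats import ztost

rng = np.random.default_rng(3)
x1 = rng.normal(0.02, 1.0, size=1000)

pvalue, test_low, test_upp = ztost(x1, -0.1, 0.1)  # one-sample TOST
print(pvalue)  # small pvalue -> mean lies inside (-0.1, 0.1)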
def outlier_test(model_results, method='bonf', alpha=.05, labels=None, order=False, cutoff=None): """ Outlier Tests for RegressionResults instances. Parameters ---------- model_results : RegressionResults Linear model results method : str - `bonferroni` : one-step correction - `sidak` : one-step correction - `holm-sidak` : - `holm` : - `simes-hochberg` : - `hommel` : - `fdr_bh` : Benjamini/Hochberg - `fdr_by` : Benjamini/Yekutieli See `statsmodels.stats.multitest.multipletests` for details. alpha : float familywise error rate labels : None or array_like If `labels` is not None, then it will be used as index to the returned pandas DataFrame. See also Returns below order : bool Whether or not to order the results by the absolute value of the studentized residuals. If labels are provided they will also be sorted. cutoff : None or float in [0, 1] If cutoff is not None, then the return only includes observations with multiple testing corrected p-values strictly below the cutoff. The returned array or dataframe can be empty if there are no outlier candidates at the specified cutoff. Returns ------- table : ndarray or DataFrame Returns either an ndarray or a DataFrame if labels is not None. Will attempt to get labels from model_results if available. The columns are the Studentized residuals, the unadjusted p-value, and the corrected p-value according to method. Notes ----- The unadjusted p-value is stats.t.sf(abs(resid), df) where df = df_resid - 1. """ from scipy import stats # lazy import if labels is None: labels = getattr(model_results.model.data, 'row_labels', None) infl = getattr(model_results, 'get_influence', None) if infl is None: results = maybe_unwrap_results(model_results) raise AttributeError("model_results object %s does not have a " "get_influence " "method." % results.__class__.__name__) resid = infl().resid_studentized_external if order: idx = np.abs(resid).argsort()[::-1] resid = resid[idx] if labels is not None: labels = np.asarray(labels)[idx] df = model_results.df_resid - 1 unadj_p = stats.t.sf(np.abs(resid), df) * 2 adj_p = multipletests(unadj_p, alpha=alpha, method=method) data = np.c_[resid, unadj_p, adj_p[1]] if cutoff is not None: mask = data[:, -1] < cutoff data = data[mask] else: mask = slice(None) if labels is not None: from pandas import DataFrame return DataFrame(data, columns=['student_resid', 'unadj_p', method + "(p)"], index=np.asarray(labels)[mask]) return data
Outlier Tests for RegressionResults instances. Parameters ---------- model_results : RegressionResults Linear model results method : str - `bonferroni` : one-step correction - `sidak` : one-step correction - `holm-sidak` : - `holm` : - `simes-hochberg` : - `hommel` : - `fdr_bh` : Benjamini/Hochberg - `fdr_by` : Benjamini/Yekutieli See `statsmodels.stats.multitest.multipletests` for details. alpha : float familywise error rate labels : None or array_like If `labels` is not None, then it will be used as index to the returned pandas DataFrame. See also Returns below order : bool Whether or not to order the results by the absolute value of the studentized residuals. If labels are provided they will also be sorted. cutoff : None or float in [0, 1] If cutoff is not None, then the return only includes observations with multiple testing corrected p-values strictly below the cutoff. The returned array or dataframe can be empty if there are no outlier candidates at the specified cutoff. Returns ------- table : ndarray or DataFrame Returns either an ndarray or a DataFrame if labels is not None. Will attempt to get labels from model_results if available. The columns are the Studentized residuals, the unadjusted p-value, and the corrected p-value according to method. Notes ----- The unadjusted p-value is stats.t.sf(abs(resid), df) where df = df_resid - 1.
outlier_test
python
statsmodels/statsmodels
statsmodels/stats/outliers_influence.py
https://github.com/statsmodels/statsmodels/blob/master/statsmodels/stats/outliers_influence.py
BSD-3-Clause
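A minimal sketch for outlier_test, not from the source file: one observation is deliberately contaminated so the Bonferroni-corrected p-value flags it.

import numpy as np
import statsmodels.api as sm
from statsmodels.stats.outliers_influence import outlier_test

rng = np.random.default_rng(1)
exog = sm.add_constant(rng.normal(size=(100, 2)))
endog = exog @ np.array([1.0, 2.0, -1.0]) + rng.normal(scale=0.5, size=100)
endog[10] += 8.0  # plant one gross outlier

res = sm.OLS(endog, exog).fit()
table = outlier_test(res, method='bonf')  # ndarray here, since no labels
print(table[10])  # studentized resid, unadjusted p, corrected p for obs 10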
def reset_ramsey(res, degree=5): """Ramsey's RESET specification test for linear models This is a general specification test for additional non-linear effects in a model. Parameters ---------- res : RegressionResults Results instance of a fitted linear regression. degree : int Maximum power to include in the RESET test. Powers 0 and 1 are excluded, so that degree tests powers 2, ..., degree of the fitted values. Returns ------- ContrastResults F-test results for the joint significance of the added powers of the fitted values. Notes ----- The test fits an auxiliary OLS regression where the design matrix, exog, is augmented by powers 2 to degree of the fitted values. Then it performs an F-test of whether these additional terms are significant. If the p-value of the F-test is below a threshold, e.g. 0.1, then this indicates that there might be additional non-linear effects in the model and that the linear model is mis-specified. References ---------- https://en.wikipedia.org/wiki/Ramsey_RESET_test """ order = degree + 1 k_vars = res.model.exog.shape[1] # vander without constant and x, and drop constant norm_values = np.asarray(res.fittedvalues) norm_values = norm_values / np.sqrt((norm_values ** 2).mean()) y_fitted_vander = np.vander(norm_values, order)[:, :-2] exog = np.column_stack((res.model.exog, y_fitted_vander)) exog /= np.sqrt((exog ** 2).mean(0)) endog = res.model.endog / (res.model.endog ** 2).mean() res_aux = OLS(endog, exog).fit() # r_matrix = np.eye(degree, exog.shape[1], k_vars) r_matrix = np.eye(degree - 1, exog.shape[1], k_vars) # df1 = degree - 1 # df2 = exog.shape[0] - degree - res.df_model (without constant) return res_aux.f_test(r_matrix) # , r_matrix, res_aux
Ramsey's RESET specification test for linear models This is a general specification test for additional non-linear effects in a model. Parameters ---------- res : RegressionResults Results instance of a fitted linear regression. degree : int Maximum power to include in the RESET test. Powers 0 and 1 are excluded, so that degree tests powers 2, ..., degree of the fitted values. Returns ------- ContrastResults F-test results for the joint significance of the added powers of the fitted values. Notes ----- The test fits an auxiliary OLS regression where the design matrix, exog, is augmented by powers 2 to degree of the fitted values. Then it performs an F-test of whether these additional terms are significant. If the p-value of the F-test is below a threshold, e.g. 0.1, then this indicates that there might be additional non-linear effects in the model and that the linear model is mis-specified. References ---------- https://en.wikipedia.org/wiki/Ramsey_RESET_test
reset_ramsey
python
statsmodels/statsmodels
statsmodels/stats/outliers_influence.py
https://github.com/statsmodels/statsmodels/blob/master/statsmodels/stats/outliers_influence.py
BSD-3-Clause
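A minimal sketch for reset_ramsey, not from the source file: the fitted model deliberately omits the quadratic term in the true relation, so the RESET F-test should reject.

import numpy as np
import statsmodels.api as sm
from statsmodels.stats.outliers_influence import reset_ramsey

rng = np.random.default_rng(5)
x = rng.uniform(0, 2, size=200)
y = 1.0 + 2.0 * x + 1.5 * x ** 2 + rng.normal(scale=0.3, size=200)

res = sm.OLS(y, sm.add_constant(x)).fit()  # fits the linear part only
print(reset_ramsey(res, degree=3))  # small p-value flags the missing x**2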
def variance_inflation_factor(exog, exog_idx): """ Variance inflation factor, VIF, for one exogenous variable The variance inflation factor is a measure for the increase of the variance of the parameter estimates if an additional variable, given by exog_idx is added to the linear regression. It is a measure for multicollinearity of the design matrix, exog. One recommendation is that if VIF is greater than 5, then the explanatory variable given by exog_idx is highly collinear with the other explanatory variables, and the parameter estimates will have large standard errors because of this. Parameters ---------- exog : {ndarray, DataFrame} design matrix with all explanatory variables, as for example used in regression exog_idx : int index of the exogenous variable in the columns of exog Returns ------- float variance inflation factor Notes ----- This function does not save the auxiliary regression. See Also -------- xxx : class for regression diagnostics TODO: does not exist yet References ---------- https://en.wikipedia.org/wiki/Variance_inflation_factor """ k_vars = exog.shape[1] exog = np.asarray(exog) x_i = exog[:, exog_idx] mask = np.arange(k_vars) != exog_idx x_noti = exog[:, mask] r_squared_i = OLS(x_i, x_noti).fit().rsquared vif = 1. / (1. - r_squared_i) return vif
Variance inflation factor, VIF, for one exogenous variable The variance inflation factor is a measure for the increase of the variance of the parameter estimates if an additional variable, given by exog_idx is added to the linear regression. It is a measure for multicollinearity of the design matrix, exog. One recommendation is that if VIF is greater than 5, then the explanatory variable given by exog_idx is highly collinear with the other explanatory variables, and the parameter estimates will have large standard errors because of this. Parameters ---------- exog : {ndarray, DataFrame} design matrix with all explanatory variables, as for example used in regression exog_idx : int index of the exogenous variable in the columns of exog Returns ------- float variance inflation factor Notes ----- This function does not save the auxiliary regression. See Also -------- xxx : class for regression diagnostics TODO: does not exist yet References ---------- https://en.wikipedia.org/wiki/Variance_inflation_factor
variance_inflation_factor
python
statsmodels/statsmodels
statsmodels/stats/outliers_influence.py
https://github.com/statsmodels/statsmodels/blob/master/statsmodels/stats/outliers_influence.py
BSD-3-Clause
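A minimal sketch for variance_inflation_factor, not from the source file: two regressors are made nearly collinear so their VIFs blow up.

import numpy as np
import statsmodels.api as sm
from statsmodels.stats.outliers_influence import variance_inflation_factor

rng = np.random.default_rng(9)
x1 = rng.normal(size=300)
x2 = x1 + rng.normal(scale=0.1, size=300)  # nearly collinear with x1
x3 = rng.normal(size=300)
exog = sm.add_constant(np.column_stack([x1, x2, x3]))

vifs = [variance_inflation_factor(exog, i) for i in range(exog.shape[1])]
print(vifs)  # the two collinear columns show VIFs far above 5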
def plot_index(self, y_var='cooks', threshold=None, title=None, ax=None, idx=None, **kwds): """index plot for influence attributes Parameters ---------- y_var : str Name of attribute or shortcut for predefined attributes that will be plotted on the y-axis. threshold : None or float Threshold for adding annotation with observation labels. Observations for which the absolute value of the y_var is larger than the threshold will be annotated. Set to a negative number to label all observations or to a large number to have no annotation. title : str If provided, the title will replace the default "Index Plot" title. ax : matplotlib axis instance The plot will be added to the `ax` if provided, otherwise a new figure is created. idx : {None, int} Some attributes require an additional index to select the y-var. In dfbetas this refers to the column index. kwds : optional keywords Keywords will be used in the call to matplotlib scatter function. """ criterion = y_var # alias if threshold is None: # TODO: criterion specific defaults threshold = 'all' if criterion == 'dfbeta': y = self.dfbetas[:, idx] ylabel = 'DFBETA for ' + self.results.model.exog_names[idx] elif criterion.startswith('cook'): y = self.cooks_distance[0] ylabel = "Cook's distance" elif criterion.startswith('hat') or criterion.startswith('lever'): y = self.hat_matrix_diag ylabel = "Leverage (diagonal of hat matrix)" elif criterion.startswith('resid_stu'): y = self.resid_studentized ylabel = "Internally Studentized Residuals" else: # assume we have the name of an attribute y = getattr(self, y_var) if idx is not None: y = y[idx] ylabel = y_var fig = self._plot_index(y, ylabel, threshold=threshold, title=title, ax=ax, **kwds) return fig
index plot for influence attributes Parameters ---------- y_var : str Name of attribute or shortcut for predefined attributes that will be plotted on the y-axis. threshold : None or float Threshold for adding annotation with observation labels. Observations for which the absolute value of the y_var is larger than the threshold will be annotated. Set to a negative number to label all observations or to a large number to have no annotation. title : str If provided, the title will replace the default "Index Plot" title. ax : matplotlib axis instance The plot will be added to the `ax` if provided, otherwise a new figure is created. idx : {None, int} Some attributes require an additional index to select the y-var. In dfbetas this refers to the column index. kwds : optional keywords Keywords will be used in the call to matplotlib scatter function.
plot_index
python
statsmodels/statsmodels
statsmodels/stats/outliers_influence.py
https://github.com/statsmodels/statsmodels/blob/master/statsmodels/stats/outliers_influence.py
BSD-3-Clause
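A minimal sketch for plot_index, not from the source file: it assumes matplotlib is installed and uses the influence object returned by results.get_influence(); the threshold of 0.1 is illustrative.

import numpy as np
import statsmodels.api as sm

rng = np.random.default_rng(2)
exog = sm.add_constant(rng.normal(size=(80, 1)))
endog = exog @ np.array([1.0, 0.5]) + rng.normal(scale=0.4, size=80)
endog[5] += 5.0  # make one observation influential

infl = sm.OLS(endog, exog).fit().get_influence()
fig = infl.plot_index(y_var='cooks', threshold=0.1)  # annotate large values
fig.savefig('cooks_index.png')  # or plt.show() in an interactive session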
def hat_matrix_diag(self): """Diagonal of the generalized leverage This is the analogue of the hat matrix diagonal for general MLE. """ if hasattr(self, '_hat_matrix_diag'): return self._hat_matrix_diag try: dsdy = self.results.model._deriv_score_obs_dendog( self.results.params) except NotImplementedError: dsdy = None if dsdy is None: warnings.warn("hat matrix is not available, missing derivatives", UserWarning) return None dmu_dp = self.results.model._deriv_mean_dparams(self.results.params) # dmu_dp = 1 / # self.results.model.family.link.deriv(self.results.fittedvalues) h = (dmu_dp * np.linalg.solve(-self.hessian, dsdy.T).T).sum(1) return h
Diagonal of the generalized leverage This is the analogue of the hat matrix diagonal for general MLE.
hat_matrix_diag
python
statsmodels/statsmodels
statsmodels/stats/outliers_influence.py
https://github.com/statsmodels/statsmodels/blob/master/statsmodels/stats/outliers_influence.py
BSD-3-Clause
def hat_matrix_exog_diag(self): """Diagonal of the hat_matrix using only exog as in OLS """ get_exogs = getattr(self.results.model, "_get_exogs", None) if get_exogs is not None: exog = np.column_stack(get_exogs()) else: exog = self.exog return (exog * np.linalg.pinv(exog).T).sum(1)
Diagonal of the hat_matrix using only exog as in OLS
hat_matrix_exog_diag
python
statsmodels/statsmodels
statsmodels/stats/outliers_influence.py
https://github.com/statsmodels/statsmodels/blob/master/statsmodels/stats/outliers_influence.py
BSD-3-Clause
def d_params(self): """Approximate change in parameter estimates when dropping observation. This uses a one-step approximation of the parameter change due to deleting one observation. """ so_noti = self.score_obs.sum(0) - self.score_obs beta_i = np.linalg.solve(self.hessian, so_noti.T).T if self.hat_matrix_diag is not None: beta_i /= (1 - self.hat_matrix_diag)[:, None] return beta_i
Approximate change in parameter estimates when dropping observation. This uses a one-step approximation of the parameter change due to deleting one observation.
d_params
python
statsmodels/statsmodels
statsmodels/stats/outliers_influence.py
https://github.com/statsmodels/statsmodels/blob/master/statsmodels/stats/outliers_influence.py
BSD-3-Clause
def dfbetas(self): """Scaled change in parameter estimates. The one-step change of parameters in d_params is rescaled by dividing by the standard error of the parameter estimate given by results.bse. """ beta_i = self.d_params / self.results.bse return beta_i
Scaled change in parameter estimates. The one-step change of parameters in d_params is rescaled by dividing by the standard error of the parameter estimate given by results.bse.
dfbetas
python
statsmodels/statsmodels
statsmodels/stats/outliers_influence.py
https://github.com/statsmodels/statsmodels/blob/master/statsmodels/stats/outliers_influence.py
BSD-3-Clause
def params_one(self): """Parameter estimate based on one-step approximation. This is the one-step parameter estimate, computed as ``params`` from the full sample minus ``d_params``. """ return self.results.params - self.d_params
Parameter estimate based on one-step approximation. This is the one-step parameter estimate, computed as ``params`` from the full sample minus ``d_params``.
params_one
python
statsmodels/statsmodels
statsmodels/stats/outliers_influence.py
https://github.com/statsmodels/statsmodels/blob/master/statsmodels/stats/outliers_influence.py
BSD-3-Clause
def cooks_distance(self): """Cook's distance and p-values. Based on the one-step approximation d_params and on results.cov_params. Cook's distance divides by the number of explanatory variables. p-values are based on the F-distribution and are only approximate outside of linear Gaussian models. Warning: The definition of p-values might change if we switch to using chi-square distribution instead of F-distribution, or if we make it dependent on the fit keyword use_t. """ cooks_d2 = (self.d_params * np.linalg.solve(self.cov_params, self.d_params.T).T).sum(1) cooks_d2 /= self.k_params from scipy import stats # alpha = 0.1 # print stats.f.isf(1-alpha, n_params, res.df_modelwc) # TODO use chi2 # use_f option pvals = stats.f.sf(cooks_d2, self.k_params, self.results.df_resid) return cooks_d2, pvals
Cook's distance and p-values. Based on the one-step approximation d_params and on results.cov_params. Cook's distance divides by the number of explanatory variables. p-values are based on the F-distribution and are only approximate outside of linear Gaussian models. Warning: The definition of p-values might change if we switch to using chi-square distribution instead of F-distribution, or if we make it dependent on the fit keyword use_t.
cooks_distance
python
statsmodels/statsmodels
statsmodels/stats/outliers_influence.py
https://github.com/statsmodels/statsmodels/blob/master/statsmodels/stats/outliers_influence.py
BSD-3-Clause
def resid_studentized(self): """studentized default residuals. This uses the residual in the `resid` attribute, which is by default resid_pearson, and studentizes it using the generalized leverage. self.resid / np.sqrt(1 - self.hat_matrix_diag) Studentized residuals are not available if hat_matrix_diag is None. """ return self.resid / np.sqrt(1 - self.hat_matrix_diag)
studentized default residuals. This uses the residual in the `resid` attribute, which is by default resid_pearson, and studentizes it using the generalized leverage. self.resid / np.sqrt(1 - self.hat_matrix_diag) Studentized residuals are not available if hat_matrix_diag is None.
resid_studentized
python
statsmodels/statsmodels
statsmodels/stats/outliers_influence.py
https://github.com/statsmodels/statsmodels/blob/master/statsmodels/stats/outliers_influence.py
BSD-3-Clause
def resid_score_factor(self): """Score residual divided by sqrt of hessian factor. experimental, agrees with GLMInfluence for Binomial and Gaussian. This corresponds to considering the linear predictors as parameters of the model. Note: This might have nan values if the second derivative, hessian_factor, is positive, i.e. the loglikelihood is not globally concave w.r.t. the linear predictor. (This occurred in an example for GeneralizedPoisson) """ from statsmodels.genmod.generalized_linear_model import GLM sf = self.results.model.score_factor(self.results.params) hf = self.results.model.hessian_factor(self.results.params) if isinstance(sf, tuple): sf = sf[0] if isinstance(hf, tuple): hf = hf[0] if not isinstance(self.results.model, GLM): # hessian_factor in GLM has wrong sign, is already positive hf = -hf return sf / np.sqrt(hf) / np.sqrt(1 - self.hat_matrix_diag)
Score residual divided by sqrt of hessian factor. experimental, agrees with GLMInfluence for Binomial and Gaussian. This corresponds to considering the linear predictors as parameters of the model. Note: This might have nan values if the second derivative, hessian_factor, is positive, i.e. the loglikelihood is not globally concave w.r.t. the linear predictor. (This occurred in an example for GeneralizedPoisson)
resid_score_factor
python
statsmodels/statsmodels
statsmodels/stats/outliers_influence.py
https://github.com/statsmodels/statsmodels/blob/master/statsmodels/stats/outliers_influence.py
BSD-3-Clause
def resid_score(self, joint=True, index=None, studentize=False): """Score observations scaled by inverse hessian. Score residuals in resid_score are defined in analogy to a score test statistic for each observation. Parameters ---------- joint : bool If joint is true, then a quadratic form similar to score_test is returned for each observation. If joint is false, then standardized score_obs are returned. The returned array is two-dimensional. index : ndarray (optional) Optional index to select a subset of score_obs columns. By default, all columns of score_obs will be used. studentize : bool If studentize is true, then the scaled residuals are also studentized using the generalized leverage. Returns ------- array : 1-D or 2-D residuals Notes ----- Status: experimental Because of the one-step approximation of d_params, score residuals are identical to cooks_distance, except for - cooks_distance is normalized by the number of parameters - cooks_distance uses cov_params, resid_score is based on the Hessian. This will make them differ in the case of robust cov_params. """ # currently no caching score_obs = self.results.model.score_obs(self.results.params) hess = self.results.model.hessian(self.results.params) if index is not None: score_obs = score_obs[:, index] hess = hess[index[:, None], index] if joint: resid = (score_obs.T * np.linalg.solve(-hess, score_obs.T)).sum(0) else: resid = score_obs / np.sqrt(np.diag(-hess)) if studentize: if joint: resid /= np.sqrt(1 - self.hat_matrix_diag) else: # 2-dim resid resid /= np.sqrt(1 - self.hat_matrix_diag[:, None]) return resid
Score observations scaled by inverse hessian. Score residuals in resid_score are defined in analogy to a score test statistic for each observation. Parameters ---------- joint : bool If joint is true, then a quadratic form similar to score_test is returned for each observation. If joint is false, then standardized score_obs are returned. The returned array is two-dimensional. index : ndarray (optional) Optional index to select a subset of score_obs columns. By default, all columns of score_obs will be used. studentize : bool If studentize is true, then the scaled residuals are also studentized using the generalized leverage. Returns ------- array : 1-D or 2-D residuals Notes ----- Status: experimental Because of the one-step approximation of d_params, score residuals are identical to cooks_distance, except for - cooks_distance is normalized by the number of parameters - cooks_distance uses cov_params, resid_score is based on the Hessian. This will make them differ in the case of robust cov_params.
resid_score
python
statsmodels/statsmodels
statsmodels/stats/outliers_influence.py
https://github.com/statsmodels/statsmodels/blob/master/statsmodels/stats/outliers_influence.py
BSD-3-Clause
def d_fittedvalues(self): """Change in expected response, fittedvalues. Local change of expected mean given the change in the parameters as computed in d_params. Notes ----- This uses the one-step approximation of the parameter change due to deleting one observation, ``d_params``. """ # results.params might be a pandas.Series params = np.asarray(self.results.params) deriv = self.results.model._deriv_mean_dparams(params) return (deriv * self.d_params).sum(1)
Change in expected response, fittedvalues. Local change of expected mean given the change in the parameters as computed in d_params. Notes ----- This uses the one-step approximation of the parameter change due to deleting one observation, ``d_params``.
d_fittedvalues
python
statsmodels/statsmodels
statsmodels/stats/outliers_influence.py
https://github.com/statsmodels/statsmodels/blob/master/statsmodels/stats/outliers_influence.py
BSD-3-Clause
def d_fittedvalues_scaled(self): """ Change in fittedvalues scaled by standard errors. This uses the one-step approximation of the parameter change due to deleting one observation, ``d_params``, and divides by the standard errors for the predicted mean provided by results.get_prediction. """ # Note: this and the previous methods are for the response # and not for a weighted response, i.e. not the self.exog, self.endog # this will be relevant for WLS comparing fitted endog versus wendog return self.d_fittedvalues / self._get_prediction.se
Change in fittedvalues scaled by standard errors. This uses the one-step approximation of the parameter change due to deleting one observation, ``d_params``, and divides by the standard errors for the predicted mean provided by results.get_prediction.
d_fittedvalues_scaled
python
statsmodels/statsmodels
statsmodels/stats/outliers_influence.py
https://github.com/statsmodels/statsmodels/blob/master/statsmodels/stats/outliers_influence.py
BSD-3-Clause
def summary_frame(self): """ Creates a DataFrame with influence results. Returns ------- frame : pandas DataFrame A DataFrame with selected results for each observation. The index will be the same as provided to the model. Notes ----- The resultant DataFrame contains four variables in addition to the ``dfbetas``. These are: * cooks_d : Cook's Distance defined in ``cooks_distance`` * standard_resid : Standardized residuals defined in `resid_studentized` * hat_diag : The diagonal of the projection, or hat, matrix defined in `hat_matrix_diag`. Not included if None. * dffits_internal : DFFITS statistics using internally Studentized residuals defined in `d_fittedvalues_scaled` """ from pandas import DataFrame # row and column labels data = self.results.model.data row_labels = data.row_labels beta_labels = ['dfb_' + i for i in data.xnames] # grab the results if self.hat_matrix_diag is not None: summary_data = DataFrame(dict( cooks_d=self.cooks_distance[0], standard_resid=self.resid_studentized, hat_diag=self.hat_matrix_diag, dffits_internal=self.d_fittedvalues_scaled), index=row_labels) else: summary_data = DataFrame(dict( cooks_d=self.cooks_distance[0], # standard_resid=self.resid_studentized, # hat_diag=self.hat_matrix_diag, dffits_internal=self.d_fittedvalues_scaled), index=row_labels) # NOTE: if we do not give columns, order of above will be arbitrary dfbeta = DataFrame(self.dfbetas, columns=beta_labels, index=row_labels) return dfbeta.join(summary_data)
Creates a DataFrame with influence results. Returns ------- frame : pandas DataFrame A DataFrame with selected results for each observation. The index will be the same as provided to the model. Notes ----- The resultant DataFrame contains four variables in addition to the ``dfbetas``. These are: * cooks_d : Cook's Distance defined in ``cooks_distance`` * standard_resid : Standardized residuals defined in `resid_studentized` * hat_diag : The diagonal of the projection, or hat, matrix defined in `hat_matrix_diag`. Not included if None. * dffits_internal : DFFITS statistics using internally Studentized residuals defined in `d_fittedvalues_scaled`
summary_frame
python
statsmodels/statsmodels
statsmodels/stats/outliers_influence.py
https://github.com/statsmodels/statsmodels/blob/master/statsmodels/stats/outliers_influence.py
BSD-3-Clause
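A minimal sketch of this summary_frame for an MLE-type model, not from the source file: a GLM Binomial fit whose get_influence() result exposes the method above; the simulated design is illustrative.

import numpy as np
import statsmodels.api as sm

rng = np.random.default_rng(11)
exog = sm.add_constant(rng.normal(size=(200, 2)))
p = 1.0 / (1.0 + np.exp(-(exog @ np.array([0.2, 1.0, -0.8]))))
endog = rng.binomial(1, p)

res = sm.GLM(endog, exog, family=sm.families.Binomial()).fit()
frame = res.get_influence().summary_frame()
print(frame.head())  # dfb_* columns plus cooks_d, standard_resid, hat_diag, dffits_internal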
def hat_matrix_diag(self): """Diagonal of the hat_matrix for OLS Notes ----- temporarily calculated here, this should go to model class """ return (self.exog * self.results.model.pinv_wexog.T).sum(1)
Diagonal of the hat_matrix for OLS Notes ----- temporarily calculated here, this should go to model class
hat_matrix_diag
python
statsmodels/statsmodels
statsmodels/stats/outliers_influence.py
https://github.com/statsmodels/statsmodels/blob/master/statsmodels/stats/outliers_influence.py
BSD-3-Clause
def resid_press(self): """PRESS residuals """ hii = self.hat_matrix_diag return self.resid / (1 - hii)
PRESS residuals
resid_press
python
statsmodels/statsmodels
statsmodels/stats/outliers_influence.py
https://github.com/statsmodels/statsmodels/blob/master/statsmodels/stats/outliers_influence.py
BSD-3-Clause
def influence(self): """Influence measure matches the influence measure that gretl reports u * h / (1 - h) where u are the residuals and h is the diagonal of the hat_matrix """ hii = self.hat_matrix_diag return self.resid * hii / (1 - hii)
Influence measure matches the influence measure that gretl reports u * h / (1 - h) where u are the residuals and h is the diagonal of the hat_matrix
influence
python
statsmodels/statsmodels
statsmodels/stats/outliers_influence.py
https://github.com/statsmodels/statsmodels/blob/master/statsmodels/stats/outliers_influence.py
BSD-3-Clause
def hat_diag_factor(self): """Factor of diagonal of hat_matrix used in influence this might be useful for internal reuse h / (1 - h) """ hii = self.hat_matrix_diag return hii / (1 - hii)
Factor of diagonal of hat_matrix used in influence this might be useful for internal reuse h / (1 - h)
hat_diag_factor
python
statsmodels/statsmodels
statsmodels/stats/outliers_influence.py
https://github.com/statsmodels/statsmodels/blob/master/statsmodels/stats/outliers_influence.py
BSD-3-Clause
def ess_press(self): """Error sum of squares of PRESS residuals """ return np.dot(self.resid_press, self.resid_press)
Error sum of squares of PRESS residuals
ess_press
python
statsmodels/statsmodels
statsmodels/stats/outliers_influence.py
https://github.com/statsmodels/statsmodels/blob/master/statsmodels/stats/outliers_influence.py
BSD-3-Clause
def resid_studentized(self): """Studentized residuals using variance from OLS alias for resid_studentized_internal for compatibility with MLEInfluence this uses sigma from original estimate and does not require leave one out loop """ return self.resid_studentized_internal
Studentized residuals using variance from OLS alias for resid_studentized_internal for compatibility with MLEInfluence this uses sigma from original estimate and does not require leave one out loop
resid_studentized
python
statsmodels/statsmodels
statsmodels/stats/outliers_influence.py
https://github.com/statsmodels/statsmodels/blob/master/statsmodels/stats/outliers_influence.py
BSD-3-Clause
def resid_studentized_internal(self): """Studentized residuals using variance from OLS this uses sigma from original estimate does not require leave one out loop """ return self.get_resid_studentized_external(sigma=None)
Studentized residuals using variance from OLS this uses sigma from original estimate does not require leave one out loop
resid_studentized_internal
python
statsmodels/statsmodels
statsmodels/stats/outliers_influence.py
https://github.com/statsmodels/statsmodels/blob/master/statsmodels/stats/outliers_influence.py
BSD-3-Clause
def resid_studentized_external(self): """Studentized residuals using LOOO variance this uses sigma from leave-one-out estimates requires leave one out loop for observations """ sigma_looo = np.sqrt(self.sigma2_not_obsi) return self.get_resid_studentized_external(sigma=sigma_looo)
Studentized residuals using LOOO variance this uses sigma from leave-one-out estimates requires leave one out loop for observations
resid_studentized_external
python
statsmodels/statsmodels
statsmodels/stats/outliers_influence.py
https://github.com/statsmodels/statsmodels/blob/master/statsmodels/stats/outliers_influence.py
BSD-3-Clause
def get_resid_studentized_external(self, sigma=None): """calculate studentized residuals Parameters ---------- sigma : None or float estimate of the standard deviation of the residuals. If None, then the estimate from the regression results is used. Returns ------- stzd_resid : ndarray studentized residuals Notes ----- studentized residuals are defined as :: resid / sigma / np.sqrt(1 - hii) where resid are the residuals from the regression, sigma is an estimate of the standard deviation of the residuals, and hii is the diagonal of the hat_matrix. """ hii = self.hat_matrix_diag if sigma is None: sigma2_est = self.scale # can be replace by different estimators of sigma sigma = np.sqrt(sigma2_est) return self.resid / sigma / np.sqrt(1 - hii)
calculate studentized residuals Parameters ---------- sigma : None or float estimate of the standard deviation of the residuals. If None, then the estimate from the regression results is used. Returns ------- stzd_resid : ndarray studentized residuals Notes ----- studentized residuals are defined as :: resid / sigma / np.sqrt(1 - hii) where resid are the residuals from the regression, sigma is an estimate of the standard deviation of the residuals, and hii is the diagonal of the hat_matrix.
get_resid_studentized_external
python
statsmodels/statsmodels
statsmodels/stats/outliers_influence.py
https://github.com/statsmodels/statsmodels/blob/master/statsmodels/stats/outliers_influence.py
BSD-3-Clause
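A minimal sketch contrasting internally and externally studentized residuals, not from the source file: the contaminated observation is illustrative; the external version triggers the leave-one-out loop.

import numpy as np
import statsmodels.api as sm

rng = np.random.default_rng(8)
exog = sm.add_constant(rng.normal(size=(40, 1)))
endog = exog @ np.array([0.0, 1.0]) + rng.normal(scale=0.2, size=40)
endog[3] += 2.0

infl = sm.OLS(endog, exog).fit().get_influence()
print(infl.resid_studentized_internal[3])  # uses full-sample sigma
print(infl.resid_studentized_external[3])  # uses leave-one-out sigma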
def cooks_distance(self): """ Cook's distance Uses original results, no nobs loop References ---------- .. [*] Eubank, R. L. (1999). Nonparametric regression and spline smoothing. CRC press. .. [*] Cook's distance. (n.d.). In Wikipedia. July 2019, from https://en.wikipedia.org/wiki/Cook%27s_distance """ hii = self.hat_matrix_diag # Eubank p.93, 94 cooks_d2 = self.resid_studentized ** 2 / self.k_vars cooks_d2 *= hii / (1 - hii) from scipy import stats # alpha = 0.1 # print stats.f.isf(1-alpha, n_params, res.df_modelwc) pvals = stats.f.sf(cooks_d2, self.k_vars, self.results.df_resid) return cooks_d2, pvals
Cook's distance Uses original results, no nobs loop References ---------- .. [*] Eubank, R. L. (1999). Nonparametric regression and spline smoothing. CRC press. .. [*] Cook's distance. (n.d.). In Wikipedia. July 2019, from https://en.wikipedia.org/wiki/Cook%27s_distance
cooks_distance
python
statsmodels/statsmodels
statsmodels/stats/outliers_influence.py
https://github.com/statsmodels/statsmodels/blob/master/statsmodels/stats/outliers_influence.py
BSD-3-Clause
def dffits_internal(self): """dffits measure for influence of an observation based on resid_studentized_internal uses original results, no nobs loop """ # TODO: do I want to use different sigma estimate in # resid_studentized_external # -> move definition of sigma_error to the __init__ hii = self.hat_matrix_diag dffits_ = self.resid_studentized_internal * np.sqrt(hii / (1 - hii)) dffits_threshold = 2 * np.sqrt(self.k_vars * 1. / self.nobs) return dffits_, dffits_threshold
dffits measure for influence of an observation based on resid_studentized_internal uses original results, no nobs loop
dffits_internal
python
statsmodels/statsmodels
statsmodels/stats/outliers_influence.py
https://github.com/statsmodels/statsmodels/blob/master/statsmodels/stats/outliers_influence.py
BSD-3-Clause
def dffits(self): """ dffits measure for influence of an observation based on resid_studentized_external, uses results from leave-one-observation-out loop It is recommended that observations with dffits larger than a threshold of 2 sqrt{k / n}, where k is the number of parameters, should be investigated. Returns ------- dffits : float dffits_threshold : float References ---------- `Wikipedia <https://en.wikipedia.org/wiki/DFFITS>`_ """ # TODO: do I want to use different sigma estimate in # resid_studentized_external # -> move definition of sigma_error to the __init__ hii = self.hat_matrix_diag dffits_ = self.resid_studentized_external * np.sqrt(hii / (1 - hii)) dffits_threshold = 2 * np.sqrt(self.k_vars * 1. / self.nobs) return dffits_, dffits_threshold
dffits measure for influence of an observation based on resid_studentized_external, uses results from leave-one-observation-out loop It is recommended that observations with dffits larger than a threshold of 2 sqrt{k / n}, where k is the number of parameters, should be investigated. Returns ------- dffits : float dffits_threshold : float References ---------- `Wikipedia <https://en.wikipedia.org/wiki/DFFITS>`_
dffits
python
statsmodels/statsmodels
statsmodels/stats/outliers_influence.py
https://github.com/statsmodels/statsmodels/blob/master/statsmodels/stats/outliers_influence.py
BSD-3-Clause
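A minimal sketch showing Cook's distance and DFFITS together, not from the source file: the planted influential observation should exceed the 2*sqrt(k/n) DFFITS threshold returned by the property.

import numpy as np
import statsmodels.api as sm

rng = np.random.default_rng(21)
exog = sm.add_constant(rng.normal(size=(60, 2)))
endog = exog @ np.array([0.5, 1.0, -1.0]) + rng.normal(scale=0.3, size=60)
endog[0] += 4.0

infl = sm.OLS(endog, exog).fit().get_influence()
cooks_d, cooks_pvals = infl.cooks_distance
dffits_vals, dffits_thresh = infl.dffits
print(cooks_d[0], dffits_vals[0], dffits_thresh)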
def dfbetas(self): """dfbetas uses results from leave-one-observation-out loop """ dfbetas = self.results.params - self.params_not_obsi # [None,:] dfbetas /= np.sqrt(self.sigma2_not_obsi[:, None]) dfbetas /= np.sqrt(np.diag(self.results.normalized_cov_params)) return dfbetas
dfbetas uses results from leave-one-observation-out loop
dfbetas
python
statsmodels/statsmodels
statsmodels/stats/outliers_influence.py
https://github.com/statsmodels/statsmodels/blob/master/statsmodels/stats/outliers_influence.py
BSD-3-Clause
def dfbeta(self): """dfbeta, the unscaled change in parameter estimates uses results from leave-one-observation-out loop """ dfbeta = self.results.params - self.params_not_obsi return dfbeta
dfbeta, the unscaled change in parameter estimates uses results from leave-one-observation-out loop
dfbeta
python
statsmodels/statsmodels
statsmodels/stats/outliers_influence.py
https://github.com/statsmodels/statsmodels/blob/master/statsmodels/stats/outliers_influence.py
BSD-3-Clause
def sigma2_not_obsi(self): """error variance for all LOOO regressions This is 'mse_resid' from each auxiliary regression. uses results from leave-one-observation-out loop """ return np.asarray(self._res_looo['mse_resid'])
error variance for all LOOO regressions This is 'mse_resid' from each auxiliary regression. uses results from leave-one-observation-out loop
sigma2_not_obsi
python
statsmodels/statsmodels
statsmodels/stats/outliers_influence.py
https://github.com/statsmodels/statsmodels/blob/master/statsmodels/stats/outliers_influence.py
BSD-3-Clause
def params_not_obsi(self): """parameter estimates for all LOOO regressions uses results from leave-one-observation-out loop """ return np.asarray(self._res_looo['params'])
parameter estimates for all LOOO regressions uses results from leave-one-observation-out loop
params_not_obsi
python
statsmodels/statsmodels
statsmodels/stats/outliers_influence.py
https://github.com/statsmodels/statsmodels/blob/master/statsmodels/stats/outliers_influence.py
BSD-3-Clause
def det_cov_params_not_obsi(self): """determinant of cov_params of all LOOO regressions uses results from leave-one-observation-out loop """ return np.asarray(self._res_looo['det_cov_params'])
determinant of cov_params of all LOOO regressions uses results from leave-one-observation-out loop
det_cov_params_not_obsi
python
statsmodels/statsmodels
statsmodels/stats/outliers_influence.py
https://github.com/statsmodels/statsmodels/blob/master/statsmodels/stats/outliers_influence.py
BSD-3-Clause
def cov_ratio(self): """covariance ratio between LOOO and original This uses determinant of the estimate of the parameter covariance from leave-one-out estimates. requires leave one out loop for observations """ # do not use inplace division / because then we change original cov_ratio = (self.det_cov_params_not_obsi / np.linalg.det(self.results.cov_params())) return cov_ratio
covariance ratio between LOOO and original This uses determinant of the estimate of the parameter covariance from leave-one-out estimates. requires leave one out loop for observations
cov_ratio
python
statsmodels/statsmodels
statsmodels/stats/outliers_influence.py
https://github.com/statsmodels/statsmodels/blob/master/statsmodels/stats/outliers_influence.py
BSD-3-Clause
def resid_var(self): """estimate of variance of the residuals :: sigma2 = sigma2_OLS * (1 - hii) where hii is the diagonal of the hat matrix """ # TODO:check if correct outside of ols return self.scale * (1 - self.hat_matrix_diag)
estimate of variance of the residuals :: sigma2 = sigma2_OLS * (1 - hii) where hii is the diagonal of the hat matrix
resid_var
python
statsmodels/statsmodels
statsmodels/stats/outliers_influence.py
https://github.com/statsmodels/statsmodels/blob/master/statsmodels/stats/outliers_influence.py
BSD-3-Clause
def resid_std(self): """estimate of standard deviation of the residuals See Also -------- resid_var """ return np.sqrt(self.resid_var)
estimate of standard deviation of the residuals See Also -------- resid_var
resid_std
python
statsmodels/statsmodels
statsmodels/stats/outliers_influence.py
https://github.com/statsmodels/statsmodels/blob/master/statsmodels/stats/outliers_influence.py
BSD-3-Clause
def _ols_xnoti(self, drop_idx, endog_idx='endog', store=True): """regression results from LOVO auxiliary regression with cache The result instances are stored, which could use a large amount of memory if the datasets are large. There are too many combinations to store them all, except for small problems. Parameters ---------- drop_idx : int index of exog that is dropped from the regression endog_idx : 'endog' or int If 'endog', then the endogenous variable of the result instance is regressed on the exogenous variables, excluding the one at drop_idx. If endog_idx is an integer, then the exog with that index is regressed with OLS on all other exogenous variables. (The latter is the auxiliary regression for the variance inflation factor.) this needs more thought, memory versus speed not yet used in any other parts, not sufficiently tested """ # check the cache first; compute and optionally store on a miss if endog_idx == 'endog': stored = self.aux_regression_endog if drop_idx in stored: return stored[drop_idx] x_i = self.results.model.endog else: # nested dictionary, keyed by endog_idx and then drop_idx stored = self.aux_regression_exog.get(endog_idx, {}) if drop_idx in stored: return stored[drop_idx] if store: # only create the nested entry when storing is requested self.aux_regression_exog[endog_idx] = stored x_i = self.exog[:, endog_idx] k_vars = self.exog.shape[1] mask = np.arange(k_vars) != drop_idx x_noti = self.exog[:, mask] res = OLS(x_i, x_noti).fit() if store: stored[drop_idx] = res return res
regression results from LOVO auxiliary regression with cache The result instances are stored, which could use a large amount of memory if the datasets are large. There are too many combinations to store them all, except for small problems. Parameters ---------- drop_idx : int index of exog that is dropped from the regression endog_idx : 'endog' or int If 'endog', then the endogenous variable of the result instance is regressed on the exogenous variables, excluding the one at drop_idx. If endog_idx is an integer, then the exog with that index is regressed with OLS on all other exogenous variables. (The latter is the auxiliary regression for the variance inflation factor.) this needs more thought, memory versus speed not yet used in any other parts, not sufficiently tested
_ols_xnoti
python
statsmodels/statsmodels
statsmodels/stats/outliers_influence.py
https://github.com/statsmodels/statsmodels/blob/master/statsmodels/stats/outliers_influence.py
BSD-3-Clause
def _get_drop_vari(self, attributes): """ regress endog on exog without one of the variables This uses a k_vars loop, only attributes of the OLS instance are stored. Parameters ---------- attributes : list[str] These are the names of the attributes of the auxiliary OLS results instance that are stored and returned. not yet used """ from statsmodels.sandbox.tools.cross_val import LeaveOneOut endog = self.results.model.endog exog = self.exog cv_iter = LeaveOneOut(self.k_vars) res_loo = defaultdict(list) for inidx, outidx in cv_iter: res_i = self.model_class(endog, exog[:, inidx]).fit() # fit once per dropped variable for att in attributes: res_loo[att].append(getattr(res_i, att)) return res_loo
regress endog on exog without one of the variables This uses a k_vars loop, only attributes of the OLS instance are stored. Parameters ---------- attributes : list[str] These are the names of the attributes of the auxiliary OLS results instance that are stored and returned. not yet used
_get_drop_vari
python
statsmodels/statsmodels
statsmodels/stats/outliers_influence.py
https://github.com/statsmodels/statsmodels/blob/master/statsmodels/stats/outliers_influence.py
BSD-3-Clause
def _res_looo(self): """collect required results from the LOOO loop all results will be attached. currently only 'params', 'mse_resid', 'det_cov_params' are stored regresses endog on exog dropping one observation at a time this uses a nobs loop, only attributes of the OLS instance are stored. """ from statsmodels.sandbox.tools.cross_val import LeaveOneOut def get_det_cov_params(res): return np.linalg.det(res.cov_params()) endog = self.results.model.endog exog = self.results.model.exog params = np.zeros(exog.shape, dtype=float) mse_resid = np.zeros(endog.shape, dtype=float) det_cov_params = np.zeros(endog.shape, dtype=float) cv_iter = LeaveOneOut(self.nobs) for inidx, outidx in cv_iter: res_i = self.model_class(endog[inidx], exog[inidx]).fit() params[outidx] = res_i.params mse_resid[outidx] = res_i.mse_resid det_cov_params[outidx] = get_det_cov_params(res_i) return dict(params=params, mse_resid=mse_resid, det_cov_params=det_cov_params)
collect required results from the LOOO loop all results will be attached. currently only 'params', 'mse_resid', 'det_cov_params' are stored regresses endog on exog dropping one observation at a time this uses a nobs loop, only attributes of the OLS instance are stored.
_res_looo
python
statsmodels/statsmodels
statsmodels/stats/outliers_influence.py
https://github.com/statsmodels/statsmodels/blob/master/statsmodels/stats/outliers_influence.py
BSD-3-Clause
def summary_frame(self): """ Creates a DataFrame with all available influence results. Returns ------- frame : DataFrame A DataFrame with all results. Notes ----- The resultant DataFrame contains six variables in addition to the DFBETAS. These are: * cooks_d : Cook's Distance defined in `Influence.cooks_distance` * standard_resid : Standardized residuals defined in `Influence.resid_studentized_internal` * hat_diag : The diagonal of the projection, or hat, matrix defined in `Influence.hat_matrix_diag` * dffits_internal : DFFITS statistics using internally Studentized residuals defined in `Influence.dffits_internal` * dffits : DFFITS statistics using externally Studentized residuals defined in `Influence.dffits` * student_resid : Externally Studentized residuals defined in `Influence.resid_studentized_external` """ from pandas import DataFrame # row and column labels data = self.results.model.data row_labels = data.row_labels beta_labels = ['dfb_' + i for i in data.xnames] # grab the results summary_data = DataFrame(dict( cooks_d=self.cooks_distance[0], standard_resid=self.resid_studentized_internal, hat_diag=self.hat_matrix_diag, dffits_internal=self.dffits_internal[0], student_resid=self.resid_studentized_external, dffits=self.dffits[0], ), index=row_labels) # NOTE: if we do not give columns, order of above will be arbitrary dfbeta = DataFrame(self.dfbetas, columns=beta_labels, index=row_labels) return dfbeta.join(summary_data)
Creates a DataFrame with all available influence results. Returns ------- frame : DataFrame A DataFrame with all results. Notes ----- The resultant DataFrame contains six variables in addition to the DFBETAS. These are: * cooks_d : Cook's Distance defined in `Influence.cooks_distance` * standard_resid : Standardized residuals defined in `Influence.resid_studentized_internal` * hat_diag : The diagonal of the projection, or hat, matrix defined in `Influence.hat_matrix_diag` * dffits_internal : DFFITS statistics using internally Studentized residuals defined in `Influence.dffits_internal` * dffits : DFFITS statistics using externally Studentized residuals defined in `Influence.dffits` * student_resid : Externally Studentized residuals defined in `Influence.resid_studentized_external`
summary_frame
python
statsmodels/statsmodels
statsmodels/stats/outliers_influence.py
https://github.com/statsmodels/statsmodels/blob/master/statsmodels/stats/outliers_influence.py
BSD-3-Clause
def summary_table(self, float_fmt="%6.3f"): """create a summary table with all influence and outlier measures This does currently not distinguish between statistics that can be calculated from the original regression results and for which a leave-one-observation-out loop is needed Returns ------- res : SimpleTable SimpleTable instance with the results, can be printed Notes ----- This also attaches table_data to the instance. """ # print self.dfbetas # table_raw = [ np.arange(self.nobs), # self.endog, # self.fittedvalues, # self.cooks_distance(), # self.resid_studentized_internal, # self.hat_matrix_diag, # self.dffits_internal, # self.resid_studentized_external, # self.dffits, # self.dfbetas # ] table_raw = [('obs', np.arange(self.nobs)), ('endog', self.endog), ('fitted\nvalue', self.results.fittedvalues), ("Cook's\nd", self.cooks_distance[0]), ("student.\nresidual", self.resid_studentized_internal), ('hat diag', self.hat_matrix_diag), ('dffits \ninternal', self.dffits_internal[0]), ("ext.stud.\nresidual", self.resid_studentized_external), ('dffits', self.dffits[0]) ] colnames, data = lzip(*table_raw) # unzip data = np.column_stack(data) self.table_data = data from copy import deepcopy from statsmodels.iolib.table import SimpleTable, default_html_fmt from statsmodels.iolib.tableformatting import fmt_base fmt = deepcopy(fmt_base) fmt_html = deepcopy(default_html_fmt) fmt['data_fmts'] = ["%4d"] + [float_fmt] * (data.shape[1] - 1) # fmt_html['data_fmts'] = fmt['data_fmts'] return SimpleTable(data, headers=colnames, txt_fmt=fmt, html_fmt=fmt_html)
Create a summary table with all influence and outlier measures. This currently does not distinguish between statistics that can be calculated from the original regression results and those for which a leave-one-observation-out loop is needed. Returns ------- res : SimpleTable SimpleTable instance with the results, can be printed Notes ----- This also attaches table_data to the instance.
summary_table
python
statsmodels/statsmodels
statsmodels/stats/outliers_influence.py
https://github.com/statsmodels/statsmodels/blob/master/statsmodels/stats/outliers_influence.py
BSD-3-Clause
def summary_table(res, alpha=0.05):
    """
    Generate summary table of outlier and influence similar to SAS

    Parameters
    ----------
    res : results instance
        Results of an OLS regression model.
    alpha : float
        significance level for confidence interval

    Returns
    -------
    st : SimpleTable
        table with results that can be printed
    data : ndarray
        calculated measures and statistics for the table
    ss2 : list[str]
        column_names for table (Note: rows of table are observations)
    """
    from scipy import stats

    from statsmodels.sandbox.regression.predstd import wls_prediction_std

    infl = OLSInfluence(res)

    # standard error for predicted mean
    # Note: using hat_matrix only works for fitted values
    predict_mean_se = np.sqrt(infl.hat_matrix_diag * res.mse_resid)

    tppf = stats.t.isf(alpha / 2., res.df_resid)
    predict_mean_ci = np.column_stack([
        res.fittedvalues - tppf * predict_mean_se,
        res.fittedvalues + tppf * predict_mean_se])

    # standard error for predicted observation
    tmp = wls_prediction_std(res, alpha=alpha)
    predict_se, predict_ci_low, predict_ci_upp = tmp

    predict_ci = np.column_stack((predict_ci_low, predict_ci_upp))

    # standard deviation of residual
    resid_se = np.sqrt(res.mse_resid * (1 - infl.hat_matrix_diag))

    table_sm = np.column_stack([
        np.arange(res.nobs) + 1,
        res.model.endog,
        res.fittedvalues,
        predict_mean_se,
        predict_mean_ci[:, 0],
        predict_mean_ci[:, 1],
        predict_ci[:, 0],
        predict_ci[:, 1],
        res.resid,
        resid_se,
        infl.resid_studentized_internal,
        infl.cooks_distance[0]
    ])

    data = table_sm
    ss2 = ['Obs', 'Dep Var\nPopulation', 'Predicted\nValue',
           'Std Error\nMean Predict', 'Mean ci\n95% low', 'Mean ci\n95% upp',
           'Predict ci\n95% low', 'Predict ci\n95% upp', 'Residual',
           'Std Error\nResidual', 'Student\nResidual', "Cook's\nD"]
    colnames = ss2

    from copy import deepcopy

    from statsmodels.iolib.table import SimpleTable, default_html_fmt
    from statsmodels.iolib.tableformatting import fmt_base
    fmt = deepcopy(fmt_base)
    fmt_html = deepcopy(default_html_fmt)

    fmt['data_fmts'] = ["%4d"] + ["%6.3f"] * (data.shape[1] - 1)

    st = SimpleTable(data, headers=colnames, txt_fmt=fmt,
                     html_fmt=fmt_html)

    return st, data, ss2
Generate summary table of outlier and influence similar to SAS Parameters ---------- res : results instance Results of an OLS regression model. alpha : float significance level for confidence interval Returns ------- st : SimpleTable table with results that can be printed data : ndarray calculated measures and statistics for the table ss2 : list[str] column_names for table (Note: rows of table are observations)
summary_table
python
statsmodels/statsmodels
statsmodels/stats/outliers_influence.py
https://github.com/statsmodels/statsmodels/blob/master/statsmodels/stats/outliers_influence.py
BSD-3-Clause
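A companion sketch for the module-level `summary_table` above (again with synthetic data; `alpha` is the confidence-interval significance level documented in the Parameters section):

import numpy as np
import statsmodels.api as sm
from statsmodels.stats.outliers_influence import summary_table

rng = np.random.default_rng(1)
x = rng.normal(size=30)
y = 0.5 + x + rng.normal(size=30)
res = sm.OLS(y, sm.add_constant(x)).fit()

# st prints like SAS output; data and ss2 are the raw values and headers.
st, data, ss2 = summary_table(res, alpha=0.05)
print(st)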
def hat_matrix_diag(self): """ Diagonal of the hat_matrix for GLM Notes ----- This returns the diagonal of the hat matrix that was provided as argument to GLMInfluence or computes it using the results method `get_hat_matrix`. """ if hasattr(self, '_hat_matrix_diag'): return self._hat_matrix_diag else: return self.results.get_hat_matrix()
Diagonal of the hat_matrix for GLM Notes ----- This returns the diagonal of the hat matrix that was provided as argument to GLMInfluence or computes it using the results method `get_hat_matrix`.
hat_matrix_diag
python
statsmodels/statsmodels
statsmodels/stats/outliers_influence.py
https://github.com/statsmodels/statsmodels/blob/master/statsmodels/stats/outliers_influence.py
BSD-3-Clause
def d_params(self):
    """Change in parameter estimates

    Notes
    -----
    This uses a one-step approximation of the parameter change due to
    deleting one observation.
    """
    beta_i = np.linalg.pinv(self.exog) * self.resid_studentized
    beta_i /= np.sqrt(1 - self.hat_matrix_diag)
    return beta_i.T
Change in parameter estimates Notes ----- This uses a one-step approximation of the parameter change due to deleting one observation.
d_params
python
statsmodels/statsmodels
statsmodels/stats/outliers_influence.py
https://github.com/statsmodels/statsmodels/blob/master/statsmodels/stats/outliers_influence.py
BSD-3-Clause
def resid_studentized(self):
    """
    Internally studentized Pearson residuals

    Notes
    -----
    residuals / sqrt( scale * (1 - hii))

    where residuals are those provided to GLMInfluence, which are
    Pearson residuals by default, and hii is the diagonal of the
    hat matrix.
    """
    # redundant with scaled resid_pearson, keep for docstring for now
    return super().resid_studentized
Internally studentized Pearson residuals Notes ----- residuals / sqrt( scale * (1 - hii)) where residuals are those provided to GLMInfluence, which are Pearson residuals by default, and hii is the diagonal of the hat matrix.
resid_studentized
python
statsmodels/statsmodels
statsmodels/stats/outliers_influence.py
https://github.com/statsmodels/statsmodels/blob/master/statsmodels/stats/outliers_influence.py
BSD-3-Clause
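The studentization formula in the notes is easy to verify by hand; a minimal sketch with toy numbers (the array values are arbitrary, not from any fitted model):

import numpy as np

# Toy ingredients: Pearson residuals, scale, and hat-matrix diagonal.
resid_pearson = np.array([0.8, -1.2, 0.3])
scale = 1.5
hii = np.array([0.10, 0.25, 0.05])

# residuals / sqrt(scale * (1 - hii)), exactly as in the docstring
resid_studentized = resid_pearson / np.sqrt(scale * (1.0 - hii))
print(resid_studentized)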
def cooks_distance(self):
    """Cook's distance

    Notes
    -----
    Based on a one-step approximation using resid_studentized and
    hat_matrix_diag for the computation.

    Cook's distance divides by the number of explanatory variables.

    Computed using formulas for GLM and does not use results.cov_params.
    It includes p-values based on the F-distribution which are only
    approximate outside of linear Gaussian models.
    """
    hii = self.hat_matrix_diag
    # Eubank p.93, 94
    cooks_d2 = self.resid_studentized ** 2 / self.k_vars
    cooks_d2 *= hii / (1 - hii)

    from scipy import stats

    pvals = stats.f.sf(cooks_d2, self.k_vars, self.results.df_resid)

    return cooks_d2, pvals
Cook's distance Notes ----- Based on a one-step approximation using resid_studentized and hat_matrix_diag for the computation. Cook's distance divides by the number of explanatory variables. Computed using formulas for GLM and does not use results.cov_params. It includes p-values based on the F-distribution which are only approximate outside of linear Gaussian models.
cooks_distance
python
statsmodels/statsmodels
statsmodels/stats/outliers_influence.py
https://github.com/statsmodels/statsmodels/blob/master/statsmodels/stats/outliers_influence.py
BSD-3-Clause
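The same one-step formula can be reproduced outside the class; a sketch with stand-in values for `resid_studentized`, `hii`, `k_vars` and `df_resid`:

import numpy as np
from scipy import stats

resid_studentized = np.array([0.9, -2.1, 0.4])
hii = np.array([0.10, 0.30, 0.05])
k_vars, df_resid = 2, 17

# One-step Cook's distance, divided by the number of explanatory variables
cooks_d2 = resid_studentized ** 2 / k_vars * hii / (1 - hii)
pvals = stats.f.sf(cooks_d2, k_vars, df_resid)
print(cooks_d2, pvals)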
def d_linpred(self):
    """
    Change in linear prediction

    This uses a one-step approximation of the parameter change due to
    deleting one observation, ``d_params``.
    """
    # TODO: This will need adjustment for extra params in Poisson
    # use original model exog not transformed influence exog
    exog = self.results.model.exog
    return (exog * self.d_params).sum(1)
Change in linear prediction This uses a one-step approximation of the parameter change due to deleting one observation, ``d_params``.
d_linpred
python
statsmodels/statsmodels
statsmodels/stats/outliers_influence.py
https://github.com/statsmodels/statsmodels/blob/master/statsmodels/stats/outliers_influence.py
BSD-3-Clause
def d_linpred_scaled(self):
    """
    Change in linpred scaled by standard errors

    This uses a one-step approximation of the parameter change due to
    deleting one observation, ``d_params``, and divides by the standard
    errors for linpred provided by results.get_prediction.
    """
    # Note: this and the previous methods are for the response
    # and not for a weighted response, i.e. not the self.exog, self.endog
    # this will be relevant for WLS comparing fitted endog versus wendog
    return self.d_linpred / self._get_prediction.linpred.se
Change in linpred scaled by standard errors This uses a one-step approximation of the parameter change due to deleting one observation, ``d_params``, and divides by the standard errors for linpred provided by results.get_prediction.
d_linpred_scaled
python
statsmodels/statsmodels
statsmodels/stats/outliers_influence.py
https://github.com/statsmodels/statsmodels/blob/master/statsmodels/stats/outliers_influence.py
BSD-3-Clause
def _fittedvalues_one(self): """experimental code """ warnings.warn('this ignores offset and exposure', UserWarning) # TODO: we need to handle offset, exposure and weights # use original model exog not transformed influence exog exog = self.results.model.exog fitted = np.array([self.results.model.predict(pi, exog[i]) for i, pi in enumerate(self.params_one)]) return fitted.squeeze()
experimental code
_fittedvalues_one
python
statsmodels/statsmodels
statsmodels/stats/outliers_influence.py
https://github.com/statsmodels/statsmodels/blob/master/statsmodels/stats/outliers_influence.py
BSD-3-Clause
def _diff_fittedvalues_one(self): """experimental code """ # in discrete we cannot reuse results.fittedvalues return self.results.predict() - self._fittedvalues_one
experimental code
_diff_fittedvalues_one
python
statsmodels/statsmodels
statsmodels/stats/outliers_influence.py
https://github.com/statsmodels/statsmodels/blob/master/statsmodels/stats/outliers_influence.py
BSD-3-Clause
def _res_looo(self):
    """Collect required results from the LOOO loop.

    All results will be attached; currently only 'params', 'mse_resid'
    and 'det_cov_params' are stored.

    Reestimates the model, dropping one observation of endog and exog
    at a time.

    This uses a nobs loop; only attributes of the results instance are
    stored.

    Warning: This will need refactoring and API changes to be able to
    add options.
    """
    from statsmodels.sandbox.tools.cross_val import LeaveOneOut

    def get_det_cov_params(res):
        return np.linalg.det(res.cov_params())

    endog = self.results.model.endog
    exog = self.results.model.exog

    init_kwds = self.results.model._get_init_kwds()
    # We need to drop obs also from extra arrays
    freq_weights = init_kwds.pop('freq_weights')
    var_weights = init_kwds.pop('var_weights')
    offset = offset_ = init_kwds.pop('offset')
    exposure = exposure_ = init_kwds.pop('exposure')
    n_trials = init_kwds.pop('n_trials', None)
    # family Binomial creates `n` i.e. `n_trials`
    # we need to reset it
    # TODO: figure out how to do this properly
    if hasattr(init_kwds['family'], 'initialize'):
        # assume we have Binomial
        is_binomial = True
    else:
        is_binomial = False

    params = np.zeros(exog.shape, dtype=float)
    scale = np.zeros(endog.shape, dtype=float)
    det_cov_params = np.zeros(endog.shape, dtype=float)

    cv_iter = LeaveOneOut(self.nobs)
    for inidx, outidx in cv_iter:
        if offset is not None:
            offset_ = offset[inidx]
        if exposure is not None:
            exposure_ = exposure[inidx]
        if n_trials is not None:
            init_kwds['n_trials'] = n_trials[inidx]

        mod_i = self.model_class(endog[inidx], exog[inidx],
                                 offset=offset_,
                                 exposure=exposure_,
                                 freq_weights=freq_weights[inidx],
                                 var_weights=var_weights[inidx],
                                 **init_kwds)
        if is_binomial:
            mod_i.family.n = init_kwds['n_trials']
        res_i = mod_i.fit(start_params=self.results.params,
                          method='newton')
        params[outidx] = res_i.params.copy()
        scale[outidx] = res_i.scale
        det_cov_params[outidx] = get_det_cov_params(res_i)

    return dict(params=params, scale=scale, mse_resid=scale,
                # alias for now
                det_cov_params=det_cov_params)
Collect required results from the LOOO loop. All results will be attached; currently only 'params', 'mse_resid' and 'det_cov_params' are stored. Reestimates the model, dropping one observation of endog and exog at a time. This uses a nobs loop; only attributes of the results instance are stored. Warning: This will need refactoring and API changes to be able to add options.
_res_looo
python
statsmodels/statsmodels
statsmodels/stats/outliers_influence.py
https://github.com/statsmodels/statsmodels/blob/master/statsmodels/stats/outliers_influence.py
BSD-3-Clause
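The LOOO loop in miniature, for plain OLS rather than GLM (a simplified sketch, not the statsmodels implementation; it ignores offset, exposure, weights, and the Binomial special case handled above):

import numpy as np
import statsmodels.api as sm

rng = np.random.default_rng(2)
exog = sm.add_constant(rng.normal(size=20))
endog = exog @ np.array([1.0, 2.0]) + rng.normal(size=20)

nobs = len(endog)
params = np.empty((nobs, exog.shape[1]))
for i in range(nobs):
    keep = np.arange(nobs) != i  # drop observation i
    params[i] = sm.OLS(endog[keep], exog[keep]).fit().params
print(params[:3])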
def _simulate_params(self, result): """ Simulate model parameters from fitted sampling distribution. """ mn = result.params cov = result.cov_params() return np.random.multivariate_normal(mn, cov)
Simulate model parameters from fitted sampling distribution.
_simulate_params
python
statsmodels/statsmodels
statsmodels/stats/mediation.py
https://github.com/statsmodels/statsmodels/blob/master/statsmodels/stats/mediation.py
BSD-3-Clause
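The parametric draw above is just a multivariate normal over the estimated coefficients; a toy check (the mean vector and covariance are arbitrary stand-ins for `result.params` and `result.cov_params()`):

import numpy as np

mn = np.array([0.5, 1.2])                # stand-in for result.params
cov = np.array([[0.04, 0.01],            # stand-in for result.cov_params()
                [0.01, 0.09]])
print(np.random.multivariate_normal(mn, cov))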
def _get_mediator_exog(self, exposure): """ Return the mediator exog matrix with exposure set to the given value. Set values of moderated variables as needed. """ mediator_exog = self._mediator_exog if not hasattr(self.mediator_model, 'formula'): mediator_exog[:, self._exp_pos_mediator] = exposure for ix in self.moderators: v = self.moderators[ix] mediator_exog[:, ix[1]] = v else: # Need to regenerate the model exog df = self.mediator_model.data.frame.copy() df[self.exposure] = exposure for vname in self.moderators: v = self.moderators[vname] df.loc[:, vname] = v klass = self.mediator_model.__class__ init_kwargs = self.mediator_model._get_init_kwds() model = klass.from_formula(data=df, **init_kwargs) mediator_exog = model.exog return mediator_exog
Return the mediator exog matrix with exposure set to the given value. Set values of moderated variables as needed.
_get_mediator_exog
python
statsmodels/statsmodels
statsmodels/stats/mediation.py
https://github.com/statsmodels/statsmodels/blob/master/statsmodels/stats/mediation.py
BSD-3-Clause
def _get_outcome_exog(self, exposure, mediator):
    """
    Return the exog design matrix with mediator and exposure set to
    the given values.  Set values of moderated variables as needed.
    """
    outcome_exog = self._outcome_exog
    if not hasattr(self.outcome_model, 'formula'):
        outcome_exog[:, self._med_pos_outcome] = mediator
        outcome_exog[:, self._exp_pos_outcome] = exposure
        for ix in self.moderators:
            v = self.moderators[ix]
            outcome_exog[:, ix[0]] = v
    else:
        # Need to regenerate the model exog
        df = self.outcome_model.data.frame.copy()
        df[self.exposure] = exposure
        df[self.mediator] = mediator
        for vname in self.moderators:
            v = self.moderators[vname]
            df[vname] = v
        klass = self.outcome_model.__class__
        init_kwargs = self.outcome_model._get_init_kwds()
        model = klass.from_formula(data=df, **init_kwargs)
        outcome_exog = model.exog

    return outcome_exog
Return the exog design matrix with mediator and exposure set to the given values. Set values of moderated variables as needed.
_get_outcome_exog
python
statsmodels/statsmodels
statsmodels/stats/mediation.py
https://github.com/statsmodels/statsmodels/blob/master/statsmodels/stats/mediation.py
BSD-3-Clause
def fit(self, method="parametric", n_rep=1000):
    """
    Fit a regression model to assess mediation.

    Parameters
    ----------
    method : str
        Either 'parametric' or 'bootstrap'.
    n_rep : int
        The number of simulation replications.

    Returns
    -------
    MediationResults
        A MediationResults object.
    """
    if method.startswith("para"):
        # Initial fit to unperturbed data.
        outcome_result = self._fit_model(self.outcome_model,
                                         self._outcome_fit_kwargs)
        mediator_result = self._fit_model(self.mediator_model,
                                          self._mediator_fit_kwargs)
    elif not method.startswith("boot"):
        raise ValueError(
            "method must be either 'parametric' or 'bootstrap'"
        )

    indirect_effects = [[], []]
    direct_effects = [[], []]

    for _ in range(n_rep):
        if method == "parametric":
            # Realization of outcome model parameters from sampling
            # distribution
            outcome_params = self._simulate_params(outcome_result)

            # Realization of mediation model parameters from sampling
            # distribution
            mediation_params = self._simulate_params(mediator_result)
        else:
            outcome_result = self._fit_model(
                self.outcome_model, self._outcome_fit_kwargs, boot=True)
            outcome_params = outcome_result.params
            mediator_result = self._fit_model(
                self.mediator_model, self._mediator_fit_kwargs, boot=True)
            mediation_params = mediator_result.params

        # predicted_outcomes[tm][te] is the outcome when the mediator
        # is set to tm and the exposure is set to te.
        predicted_outcomes = [[None, None], [None, None]]
        for tm in 0, 1:
            mex = self._get_mediator_exog(tm)
            kwargs = {"exog": mex}
            if hasattr(mediator_result, "scale"):
                kwargs["scale"] = mediator_result.scale
            gen = self.mediator_model.get_distribution(mediation_params,
                                                       **kwargs)
            potential_mediator = gen.rvs(mex.shape[0])

            for te in 0, 1:
                oex = self._get_outcome_exog(te, potential_mediator)
                po = self.outcome_model.predict(
                    outcome_params, oex, **self._outcome_predict_kwargs)
                predicted_outcomes[tm][te] = po

        for t in 0, 1:
            indirect_effects[t].append(
                predicted_outcomes[1][t] - predicted_outcomes[0][t])
            direct_effects[t].append(
                predicted_outcomes[t][1] - predicted_outcomes[t][0])

    for t in 0, 1:
        indirect_effects[t] = np.asarray(indirect_effects[t]).T
        direct_effects[t] = np.asarray(direct_effects[t]).T

    self.indirect_effects = indirect_effects
    self.direct_effects = direct_effects

    rslt = MediationResults(self.indirect_effects, self.direct_effects)
    rslt.method = method
    return rslt
Fit a regression model to assess mediation. Parameters ---------- method : str Either 'parametric' or 'bootstrap'. n_rep : int The number of simulation replications. Returns ------- MediationResults A MediationResults object.
fit
python
statsmodels/statsmodels
statsmodels/stats/mediation.py
https://github.com/statsmodels/statsmodels/blob/master/statsmodels/stats/mediation.py
BSD-3-Clause
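An end-to-end sketch of the `fit` workflow (synthetic data; the column names `tx`, `med`, `y` are illustrative, while the `Mediation` constructor and `fit`/`summary` calls match `statsmodels.stats.mediation`):

import numpy as np
import pandas as pd
import statsmodels.api as sm
from statsmodels.stats.mediation import Mediation

rng = np.random.default_rng(3)
n = 200
df = pd.DataFrame({"tx": rng.integers(0, 2, n).astype(float)})
df["med"] = 0.5 * df["tx"] + rng.normal(size=n)
df["y"] = 0.7 * df["med"] + 0.3 * df["tx"] + rng.normal(size=n)

outcome_model = sm.OLS.from_formula("y ~ tx + med", data=df)
mediator_model = sm.OLS.from_formula("med ~ tx", data=df)

med = Mediation(outcome_model, mediator_model, "tx", "med")
print(med.fit(method="parametric", n_rep=100).summary())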
def summary(self, alpha=0.05): """ Provide a summary of a mediation analysis. """ columns = ["Estimate", "Lower CI bound", "Upper CI bound", "P-value"] index = ["ACME (control)", "ACME (treated)", "ADE (control)", "ADE (treated)", "Total effect", "Prop. mediated (control)", "Prop. mediated (treated)", "ACME (average)", "ADE (average)", "Prop. mediated (average)"] smry = pd.DataFrame(columns=columns, index=index) for i, vec in enumerate([self.ACME_ctrl, self.ACME_tx, self.ADE_ctrl, self.ADE_tx, self.total_effect, self.prop_med_ctrl, self.prop_med_tx, self.ACME_avg, self.ADE_avg, self.prop_med_avg]): if ((vec is self.prop_med_ctrl) or (vec is self.prop_med_tx) or (vec is self.prop_med_avg)): smry.iloc[i, 0] = np.median(vec) else: smry.iloc[i, 0] = vec.mean() smry.iloc[i, 1] = np.percentile(vec, 100 * alpha / 2) smry.iloc[i, 2] = np.percentile(vec, 100 * (1 - alpha / 2)) smry.iloc[i, 3] = _pvalue(vec) smry = smry.apply(pd.to_numeric, errors='coerce') return smry
Provide a summary of a mediation analysis.
summary
python
statsmodels/statsmodels
statsmodels/stats/mediation.py
https://github.com/statsmodels/statsmodels/blob/master/statsmodels/stats/mediation.py
BSD-3-Clause
def test_single_factor_repeated_measures_anova():
    """
    Test a single-factor repeated measures ANOVA.

    Results reproduce R's `ezANOVA` function from the `ez` library.
    """
    df = AnovaRM(data.iloc[:16, :], 'DV', 'id', within=['B']).fit()
    a = [[1, 7, 22.4, 0.002125452]]
    assert_array_almost_equal(df.anova_table.iloc[:, [1, 2, 0, 3]].values,
                              a, decimal=5)
Test a single-factor repeated measures ANOVA. Results reproduce R's `ezANOVA` function from the `ez` library.
test_single_factor_repeated_measures_anova
python
statsmodels/statsmodels
statsmodels/stats/tests/test_anova_rm.py
https://github.com/statsmodels/statsmodels/blob/master/statsmodels/stats/tests/test_anova_rm.py
BSD-3-Clause
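These tests exercise `AnovaRM`; a minimal self-contained usage sketch with synthetic long-format data (column names mirror the tests, and the factor is called "B" since 'C' conflicts with patsy, as the test below notes):

import numpy as np
import pandas as pd
from statsmodels.stats.anova import AnovaRM

rng = np.random.default_rng(5)
subjects = np.repeat(np.arange(8), 2)            # 8 subjects, 2 rows each
cond = np.tile(["low", "high"], 8)               # one within-subject factor
dv = rng.normal(size=16) + (cond == "high")      # add a condition effect

df = pd.DataFrame({"id": subjects, "B": cond, "DV": dv})
res = AnovaRM(df, "DV", "id", within=["B"]).fit()
print(res.anova_table)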
def test_two_factors_repeated_measures_anova():
    """
    Test a two-factor repeated measures ANOVA.

    Results reproduce R's `ezANOVA` function from the `ez` library.
    """
    df = AnovaRM(data.iloc[:48, :], 'DV', 'id', within=['A', 'B']).fit()

    a = [[1, 7, 40.14159, 3.905263e-04],
         [2, 14, 29.21739, 1.007549e-05],
         [2, 14, 17.10545, 1.741322e-04]]
    assert_array_almost_equal(df.anova_table.iloc[:, [1, 2, 0, 3]].values,
                              a, decimal=5)
Test a two-factor repeated measures ANOVA. Results reproduce R's `ezANOVA` function from the `ez` library.
test_two_factors_repeated_measures_anova
python
statsmodels/statsmodels
statsmodels/stats/tests/test_anova_rm.py
https://github.com/statsmodels/statsmodels/blob/master/statsmodels/stats/tests/test_anova_rm.py
BSD-3-Clause
def test_three_factors_repeated_measures_anova():
    """
    Test a three-factor repeated measures ANOVA.

    Results reproduce R's `ezANOVA` function from the `ez` library.
    """
    df = AnovaRM(data, 'DV', 'id', within=['A', 'B', 'D']).fit()
    a = [[1, 7, 8.7650709, 0.021087505],
         [2, 14, 8.4985785, 0.003833921],
         [1, 7, 20.5076546, 0.002704428],
         [2, 14, 0.8457797, 0.450021759],
         [1, 7, 21.7593382, 0.002301792],
         [2, 14, 6.2416695, 0.011536846],
         [2, 14, 5.4253359, 0.018010647]]
    assert_array_almost_equal(df.anova_table.iloc[:, [1, 2, 0, 3]].values,
                              a, decimal=5)
Test a three-factor repeated measures ANOVA. Results reproduce R's `ezANOVA` function from the `ez` library.
test_three_factors_repeated_measures_anova
python
statsmodels/statsmodels
statsmodels/stats/tests/test_anova_rm.py
https://github.com/statsmodels/statsmodels/blob/master/statsmodels/stats/tests/test_anova_rm.py
BSD-3-Clause
def test_repeated_measures_invalid_factor_name(): """ Test with a factor name of 'C', which conflicts with patsy. """ assert_raises(ValueError, AnovaRM, data.iloc[:16, :], 'DV', 'id', within=['C'])
Test with a factor name of 'C', which conflicts with patsy.
test_repeated_measures_invalid_factor_name
python
statsmodels/statsmodels
statsmodels/stats/tests/test_anova_rm.py
https://github.com/statsmodels/statsmodels/blob/master/statsmodels/stats/tests/test_anova_rm.py
BSD-3-Clause
def _expand_table(table):
    '''Expand a 2 by 2 contingency table to observations.
    '''
    return np.repeat([[1, 1], [1, 0], [0, 1], [0, 0]], table.ravel(), axis=0)
Expand a 2 by 2 contingency table to observations.
_expand_table
python
statsmodels/statsmodels
statsmodels/stats/tests/test_nonparametric.py
https://github.com/statsmodels/statsmodels/blob/master/statsmodels/stats/tests/test_nonparametric.py
BSD-3-Clause
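What the helper does is easiest to see on a concrete table (the counts are arbitrary):

import numpy as np

# 2x2 table of counts; rows/columns are binary outcomes.
table = np.array([[2, 1],
                  [1, 3]])

# Each cell count becomes that many (row, col) indicator pairs.
obs = np.repeat([[1, 1], [1, 0], [0, 1], [0, 0]], table.ravel(), axis=0)
print(obs)  # 7 rows: 2x(1,1), 1x(1,0), 1x(0,1), 3x(0,0)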
def test_compute_rank_placements(test_cases): """ Test the `_compute_rank_placements` helper for computing ranks and placements based on two input samples. Data validation logic is assumed to be handled by the caller and is not necessary to test here. """ x1, x2, expected_holder = test_cases res = _compute_rank_placements(x1, x2) assert_allclose(res.n_1, expected_holder.n_1) assert_allclose(res.n_2, expected_holder.n_2) assert_allclose( res.overall_ranks_pooled, expected_holder.overall_ranks_pooled ) assert_allclose( res.overall_ranks_1, expected_holder.overall_ranks_1 ) assert_allclose( res.overall_ranks_2, expected_holder.overall_ranks_2 ) assert_allclose( res.within_group_ranks_1, expected_holder.within_group_ranks_1 ) assert_allclose( res.within_group_ranks_2, expected_holder.within_group_ranks_2 ) assert_allclose(res.placements_1, expected_holder.placements_1) assert_allclose(res.placements_2, expected_holder.placements_2)
Test the `_compute_rank_placements` helper for computing ranks and placements based on two input samples. Data validation logic is assumed to be handled by the caller and is not necessary to test here.
test_compute_rank_placements
python
statsmodels/statsmodels
statsmodels/stats/tests/test_nonparametric.py
https://github.com/statsmodels/statsmodels/blob/master/statsmodels/stats/tests/test_nonparametric.py
BSD-3-Clause
def reference_implementation_results(): """ Results from R's rankFD::WMWSSP function. """ parent_dir = Path(__file__).resolve().parent results = pd.read_csv( parent_dir / "results/results_samplesize_rank_compare_onetail.csv" ) return results
Results from R's rankFD::WMWSSP function.
reference_implementation_results
python
statsmodels/statsmodels
statsmodels/stats/tests/test_nonparametric.py
https://github.com/statsmodels/statsmodels/blob/master/statsmodels/stats/tests/test_nonparametric.py
BSD-3-Clause
def test_samplesize_rank_compare_onetail(reference_implementation_results): """ Test the `samplesize_rank_compare_onetail` function against the reference implementation from R's rankFD package. Examples are taken from the reference paper directly. The reference implementation results are generated using the `generate_results_samplesize_rank_compare_onetail.R` script. """ for _, r_result in reference_implementation_results.iterrows(): synthetic_sample = np.array( r_result["synthetic_sample"].split(","), dtype=np.float64 ) reference_sample = np.array( r_result["reference_sample"].split(","), dtype=np.float64 ) # Convert `prop_reference` to `nobs_ratio` nobs_ratio = r_result["prop_reference"] / (1 - r_result["prop_reference"]) py_result = samplesize_rank_compare_onetail( synthetic_sample=synthetic_sample, reference_sample=reference_sample, alpha=r_result["alpha"], power=r_result["power"], nobs_ratio=nobs_ratio, alternative=r_result["alternative"], ) # Integers can be compared directly assert_allclose( py_result.nobs_total, r_result["nobs_total"], rtol=1e-9, atol=0, ) assert_allclose( py_result.nobs_ref, r_result["nobs_ref"], rtol=1e-9, ) assert_allclose( py_result.nobs_treat, r_result["nobs_treat"], rtol=1e-9, ) assert_allclose( py_result.relative_effect, r_result["relative_effect"], rtol=1e-9, atol=0, )
Test the `samplesize_rank_compare_onetail` function against the reference implementation from R's rankFD package. Examples are taken from the reference paper directly. The reference implementation results are generated using the `generate_results_samplesize_rank_compare_onetail.R` script.
test_samplesize_rank_compare_onetail
python
statsmodels/statsmodels
statsmodels/stats/tests/test_nonparametric.py
https://github.com/statsmodels/statsmodels/blob/master/statsmodels/stats/tests/test_nonparametric.py
BSD-3-Clause
def test_samplesize_rank_compare_onetail_invalid( synthetic_sample, reference_sample, alpha, power, nobs_ratio, alternative, expected_exception, exception_msg, ): """ Test the samplesize_rank_compare_onetail function with various invalid inputs. """ with pytest.raises(expected_exception, match=exception_msg): samplesize_rank_compare_onetail( synthetic_sample=synthetic_sample, reference_sample=reference_sample, alpha=alpha, power=power, nobs_ratio=nobs_ratio, alternative=alternative, )
Test the samplesize_rank_compare_onetail function with various invalid inputs.
test_samplesize_rank_compare_onetail_invalid
python
statsmodels/statsmodels
statsmodels/stats/tests/test_nonparametric.py
https://github.com/statsmodels/statsmodels/blob/master/statsmodels/stats/tests/test_nonparametric.py
BSD-3-Clause
def norm_f(x, y):
    '''Frobenius norm (square root of the sum of squared differences)
    of the difference between two arrays.
    '''
    d = ((x - y)**2).sum()
    return np.sqrt(d)
Frobenius norm (square root of the sum of squared differences) of the difference between two arrays.
norm_f
python
statsmodels/statsmodels
statsmodels/stats/tests/test_corrpsd.py
https://github.com/statsmodels/statsmodels/blob/master/statsmodels/stats/tests/test_corrpsd.py
BSD-3-Clause
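For reference, the helper agrees with NumPy's built-in Frobenius norm (a quick sanity check, not part of the test suite):

import numpy as np

x = np.array([[1.0, 2.0], [3.0, 4.0]])
y = np.array([[1.5, 2.0], [2.0, 4.0]])

d = np.sqrt(((x - y) ** 2).sum())
assert np.isclose(d, np.linalg.norm(x - y, ord="fro"))
print(d)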
def setup_class(cls): """ Values were obtained via the R `energy` package. R code: ------ > dcov.test(x, y, R=200) dCov independence test (permutation test) data: index 1, replicates 200 nV^2 = 45829, p-value = 0.004975 sample estimates: dCov 47.86925 > DCOR(x, y) $dCov [1] 47.86925 $dCor [1] 0.9999704 $dVarX [1] 47.28702 $dVarY [1] 48.46151 """ np.random.seed(3) cls.x = np.array(range(1, 101)).reshape((20, 5)) cls.y = cls.x + np.log(cls.x) cls.dcor_exp = 0.9999704 cls.dcov_exp = 47.86925 cls.dvar_x_exp = 47.28702 cls.dvar_y_exp = 48.46151 cls.pval_emp_exp = 0.004975 cls.test_stat_emp_exp = 45829 # The values above are functions of the following values, and # therefore when the above group of variables is computed correctly # it means this group of variables was also correctly calculated. cls.S_exp = 5686.03162 cls.test_stat_asym_exp = 2.8390102 cls.pval_asym_exp = 0.00452
Values were obtained via the R `energy` package. R code: ------ > dcov.test(x, y, R=200) dCov independence test (permutation test) data: index 1, replicates 200 nV^2 = 45829, p-value = 0.004975 sample estimates: dCov 47.86925 > DCOR(x, y) $dCov [1] 47.86925 $dCor [1] 0.9999704 $dVarX [1] 47.28702 $dVarY [1] 48.46151
setup_class
python
statsmodels/statsmodels
statsmodels/stats/tests/test_dist_dependant_measures.py
https://github.com/statsmodels/statsmodels/blob/master/statsmodels/stats/tests/test_dist_dependant_measures.py
BSD-3-Clause
def test_results_on_the_iris_dataset(self): """ R code example from the `energy` package documentation for `energy::distance_covariance.test`: > x <- iris[1:50, 1:4] > y <- iris[51:100, 1:4] > set.seed(1) > dcov.test(x, y, R=200) dCov independence test (permutation test) data: index 1, replicates 200 nV^2 = 0.5254, p-value = 0.9552 sample estimates: dCov 0.1025087 """ try: iris = get_rdataset("iris").data.values[:, :4] except IGNORED_EXCEPTIONS: pytest.skip('Failed with HTTPError or URLError, these are random') x = np.asarray(iris[:50], dtype=float) y = np.asarray(iris[50:100], dtype=float) stats = ddm.distance_statistics(x, y) assert_almost_equal(stats.test_statistic, 0.5254, 4) assert_almost_equal(stats.distance_correlation, 0.3060479, 4) assert_almost_equal(stats.distance_covariance, 0.1025087, 4) assert_almost_equal(stats.dvar_x, 0.2712927, 4) assert_almost_equal(stats.dvar_y, 0.4135274, 4) assert_almost_equal(stats.S, 0.667456, 4) test_statistic, _, method = ddm.distance_covariance_test(x, y, B=199) assert_almost_equal(test_statistic, 0.5254, 4) assert method == "emp"
R code example from the `energy` package documentation for `energy::distance_covariance.test`: > x <- iris[1:50, 1:4] > y <- iris[51:100, 1:4] > set.seed(1) > dcov.test(x, y, R=200) dCov independence test (permutation test) data: index 1, replicates 200 nV^2 = 0.5254, p-value = 0.9552 sample estimates: dCov 0.1025087
test_results_on_the_iris_dataset
python
statsmodels/statsmodels
statsmodels/stats/tests/test_dist_dependant_measures.py
https://github.com/statsmodels/statsmodels/blob/master/statsmodels/stats/tests/test_dist_dependant_measures.py
BSD-3-Clause
def test_results_on_the_quakes_dataset(self): """ R code: ------ > data("quakes") > x = quakes[1:50, 1:3] > y = quakes[51:100, 1:3] > dcov.test(x, y, R=200) dCov independence test (permutation test) data: index 1, replicates 200 nV^2 = 45046, p-value = 0.4577 sample estimates: dCov 30.01526 """ try: quakes = get_rdataset("quakes").data.values[:, :3] except IGNORED_EXCEPTIONS: pytest.skip('Failed with HTTPError or URLError, these are random') x = np.asarray(quakes[:50], dtype=float) y = np.asarray(quakes[50:100], dtype=float) stats = ddm.distance_statistics(x, y) assert_almost_equal(np.round(stats.test_statistic), 45046, 0) assert_almost_equal(stats.distance_correlation, 0.1894193, 4) assert_almost_equal(stats.distance_covariance, 30.01526, 4) assert_almost_equal(stats.dvar_x, 170.1702, 4) assert_almost_equal(stats.dvar_y, 147.5545, 4) assert_almost_equal(stats.S, 52265, 0) test_statistic, _, method = ddm.distance_covariance_test(x, y, B=199) assert_almost_equal(np.round(test_statistic), 45046, 0) assert method == "emp"
R code: ------ > data("quakes") > x = quakes[1:50, 1:3] > y = quakes[51:100, 1:3] > dcov.test(x, y, R=200) dCov independence test (permutation test) data: index 1, replicates 200 nV^2 = 45046, p-value = 0.4577 sample estimates: dCov 30.01526
test_results_on_the_quakes_dataset
python
statsmodels/statsmodels
statsmodels/stats/tests/test_dist_dependant_measures.py
https://github.com/statsmodels/statsmodels/blob/master/statsmodels/stats/tests/test_dist_dependant_measures.py
BSD-3-Clause
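A minimal sketch of the API these tests exercise, assuming `ddm` is the `statsmodels.stats.dist_dependence_measures` module as imported in the test file (synthetic data; `B` is the number of permutation replications for the empirical test):

import numpy as np
import statsmodels.stats.dist_dependence_measures as ddm

rng = np.random.default_rng(4)
x = rng.normal(size=(50, 3))
y = x + np.log(np.abs(x) + 1.0)  # y is a deterministic function of x

stats_ = ddm.distance_statistics(x, y)
print(stats_.distance_correlation, stats_.distance_covariance)

test_statistic, pval, method = ddm.distance_covariance_test(x, y, B=199)
print(test_statistic, pval, method)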