Dataset schema (columns and value statistics from the viewer header):

- code: string, lengths 26 to 870k
- docstring: string, lengths 1 to 65.6k
- func_name: string, lengths 1 to 194
- language: string, 1 distinct value
- repo: string, lengths 8 to 68
- path: string, lengths 5 to 182
- url: string, lengths 46 to 251
- license: string, 4 distinct values
def solve_power(self, effect_size=None, nobs1=None, alpha=None, power=None,
                ratio=1., alternative='two-sided'):
    '''solve for any one parameter of the power of a two sample t-test

    for t-test the keywords are:
        effect_size, nobs1, alpha, power, ratio

    exactly one needs to be ``None``, all others need numeric values

    Parameters
    ----------
    effect_size : float
        standardized effect size, difference between the two means divided
        by the standard deviation. `effect_size` has to be positive.
    nobs1 : int or float
        number of observations of sample 1. The number of observations of
        sample two is ratio times the size of sample 1, i.e.
        ``nobs2 = nobs1 * ratio``
    alpha : float in interval (0,1)
        significance level, e.g. 0.05, is the probability of a type I
        error, that is, wrong rejections if the Null Hypothesis is true.
    power : float in interval (0,1)
        power of the test, e.g. 0.8, is one minus the probability of a
        type II error. Power is the probability that the test correctly
        rejects the Null Hypothesis if the Alternative Hypothesis is true.
    ratio : float
        ratio of the number of observations in sample 2 relative to
        sample 1. see description of nobs1.
        The default for ratio is 1; to solve for ratio given the other
        arguments it has to be explicitly set to None.
    alternative : str, 'two-sided' (default), 'larger', 'smaller'
        extra argument to choose whether the power is calculated for a
        two-sided (default) or one-sided test. The one-sided test can be
        either 'larger' or 'smaller'.

    Returns
    -------
    value : float
        The value of the parameter that was set to None in the call. The
        value solves the power equation given the remaining parameters.

    Notes
    -----
    The function uses scipy.optimize for finding the value that satisfies
    the power equation. It first uses ``brentq`` with a prior search for
    bounds. If this fails to find a root, ``fsolve`` is used. If ``fsolve``
    also fails, then, for ``alpha``, ``power`` and ``effect_size``,
    ``brentq`` with fixed bounds is used. However, there can still be
    cases where this fails.
    '''
    return super().solve_power(effect_size=effect_size,
                               nobs1=nobs1,
                               alpha=alpha,
                               power=power,
                               ratio=ratio,
                               alternative=alternative)
func_name: solve_power
language: python
repo: statsmodels/statsmodels
path: statsmodels/stats/power.py
url: https://github.com/statsmodels/statsmodels/blob/master/statsmodels/stats/power.py
license: BSD-3-Clause
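Usage note: a minimal sketch of calling this solver through statsmodels'
TTestIndPower class (the class name is inferred from the repo path above;
the numeric values are illustrative):

    # Solve for the per-group sample size of a two-sample t-test.
    from statsmodels.stats.power import TTestIndPower

    analysis = TTestIndPower()
    # nobs1 is left as None, so it is the parameter being solved for.
    nobs1 = analysis.solve_power(effect_size=0.5, alpha=0.05, power=0.8,
                                 ratio=1.0, alternative='two-sided')
    print(nobs1)  # roughly 64 observations per group for these inputs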
def power(self, effect_size, nobs1, alpha, ratio=1, alternative='two-sided'):
    '''Calculate the power of a z-test for two independent samples

    Parameters
    ----------
    effect_size : float
        standardized effect size, difference between the two means divided
        by the standard deviation. effect size has to be positive.
    nobs1 : int or float
        number of observations of sample 1. The number of observations of
        sample two is ratio times the size of sample 1, i.e.
        ``nobs2 = nobs1 * ratio``. ``ratio`` can be set to zero in order
        to get the power for a one sample test.
    alpha : float in interval (0,1)
        significance level, e.g. 0.05, is the probability of a type I
        error, that is, wrong rejections if the Null Hypothesis is true.
    ratio : float
        ratio of the number of observations in sample 2 relative to
        sample 1. see description of nobs1.
    alternative : str, 'two-sided' (default), 'larger', 'smaller'
        extra argument to choose whether the power is calculated for a
        two-sided (default) or one-sided test. The one-sided test can be
        either 'larger' or 'smaller'.

    Returns
    -------
    power : float
        Power of the test, e.g. 0.8, is one minus the probability of a
        type II error. Power is the probability that the test correctly
        rejects the Null Hypothesis if the Alternative Hypothesis is true.
    '''
    ddof = self.ddof  # for correlation, ddof=3

    # get effective nobs, factor for std of test statistic
    if ratio > 0:
        nobs2 = nobs1 * ratio
        # equivalent to nobs = n1 * n2 / (n1 + n2) = n1 * ratio / (1 + ratio)
        nobs = 1. / (1. / (nobs1 - ddof) + 1. / (nobs2 - ddof))
    else:
        nobs = nobs1 - ddof
    return normal_power(effect_size, nobs, alpha, alternative=alternative)
func_name: power
language: python
repo: statsmodels/statsmodels
path: statsmodels/stats/power.py
url: https://github.com/statsmodels/statsmodels/blob/master/statsmodels/stats/power.py
license: BSD-3-Clause
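Usage note: a minimal sketch of the two-sample z-test power through
statsmodels' NormalIndPower class (class name from the repo path; values
illustrative):

    from statsmodels.stats.power import NormalIndPower

    zpower = NormalIndPower()
    p = zpower.power(effect_size=0.3, nobs1=100, alpha=0.05, ratio=1,
                     alternative='two-sided')
    print(p)  # roughly 0.56 for these inputs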
def solve_power(self, effect_size=None, nobs1=None, alpha=None, power=None,
                ratio=1., alternative='two-sided'):
    '''solve for any one parameter of the power of a two sample z-test

    for z-test the keywords are:
        effect_size, nobs1, alpha, power, ratio

    exactly one needs to be ``None``, all others need numeric values

    Parameters
    ----------
    effect_size : float
        standardized effect size, difference between the two means divided
        by the standard deviation. If ratio=0, then this is the
        standardized mean in the one sample test.
    nobs1 : int or float
        number of observations of sample 1. The number of observations of
        sample two is ratio times the size of sample 1, i.e.
        ``nobs2 = nobs1 * ratio``. ``ratio`` can be set to zero in order
        to get the power for a one sample test.
    alpha : float in interval (0,1)
        significance level, e.g. 0.05, is the probability of a type I
        error, that is, wrong rejections if the Null Hypothesis is true.
    power : float in interval (0,1)
        power of the test, e.g. 0.8, is one minus the probability of a
        type II error. Power is the probability that the test correctly
        rejects the Null Hypothesis if the Alternative Hypothesis is true.
    ratio : float
        ratio of the number of observations in sample 2 relative to
        sample 1. see description of nobs1.
        The default for ratio is 1; to solve for ratio given the other
        arguments it has to be explicitly set to None.
    alternative : str, 'two-sided' (default), 'larger', 'smaller'
        extra argument to choose whether the power is calculated for a
        two-sided (default) or one-sided test. The one-sided test can be
        either 'larger' or 'smaller'.

    Returns
    -------
    value : float
        The value of the parameter that was set to None in the call. The
        value solves the power equation given the remaining parameters.

    Notes
    -----
    The function uses scipy.optimize for finding the value that satisfies
    the power equation. It first uses ``brentq`` with a prior search for
    bounds. If this fails to find a root, ``fsolve`` is used. If ``fsolve``
    also fails, then, for ``alpha``, ``power`` and ``effect_size``,
    ``brentq`` with fixed bounds is used. However, there can still be
    cases where this fails.
    '''
    return super().solve_power(effect_size=effect_size,
                               nobs1=nobs1,
                               alpha=alpha,
                               power=power,
                               ratio=ratio,
                               alternative=alternative)
func_name: solve_power
language: python
repo: statsmodels/statsmodels
path: statsmodels/stats/power.py
url: https://github.com/statsmodels/statsmodels/blob/master/statsmodels/stats/power.py
license: BSD-3-Clause
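A hedged sketch of the one-sample use described above (ratio=0), assuming
NormalIndPower as before and that the solver accepts ratio=0 while solving
for nobs1; values are illustrative:

    from statsmodels.stats.power import NormalIndPower

    n = NormalIndPower().solve_power(effect_size=0.2, alpha=0.05, power=0.9,
                                     ratio=0, alternative='larger')
    print(n)  # roughly 214 for a one-sided one-sample z-test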
def power(self, effect_size, df_num, df_denom, alpha, ncc=1):
    '''Calculate the power of an F-test.

    The effect size is Cohen's ``f``, square root of ``f2``.

    The sample size is given by ``nobs = df_denom + df_num + ncc``.

    Warning: The meaning of df_num and df_denom is reversed.

    Parameters
    ----------
    effect_size : float
        Standardized effect size. The effect size is here Cohen's ``f``,
        square root of ``f2``.
    df_num : int or float
        Warning: incorrect name.
        Denominator degrees of freedom.
        This corresponds to the number of constraints in Wald tests.
    df_denom : int or float
        Warning: incorrect name.
        Numerator degrees of freedom.
        This corresponds to the df_resid in Wald tests.
    alpha : float in interval (0,1)
        significance level, e.g. 0.05, is the probability of a type I
        error, that is, wrong rejections if the Null Hypothesis is true.
    ncc : int
        degrees of freedom correction for non-centrality parameter.
        see Notes

    Returns
    -------
    power : float
        Power of the test, e.g. 0.8, is one minus the probability of a
        type II error. Power is the probability that the test correctly
        rejects the Null Hypothesis if the Alternative Hypothesis is true.

    Notes
    -----
    sample size is given implicitly by df_num

    set ncc=0 to match t-test, or f-test in LikelihoodModelResults.
    ncc=1 matches the non-centrality parameter in R::pwr::pwr.f2.test

    ftest_power with ncc=0 should also be correct for f_test in
    regression models, with df_num and df_denom as defined there.
    (not verified yet)
    '''
    pow_ = ftest_power(effect_size, df_num, df_denom, alpha, ncc=ncc)
    # print(effect_size, df_num, df_denom, alpha, pow_)
    return pow_
func_name: power
language: python
repo: statsmodels/statsmodels
path: statsmodels/stats/power.py
url: https://github.com/statsmodels/statsmodels/blob/master/statsmodels/stats/power.py
license: BSD-3-Clause
def solve_power(self, effect_size=None, df_num=None, df_denom=None,
                alpha=None, power=None, ncc=1, **kwargs):
    '''solve for any one parameter of the power of an F-test

    for the one sample F-test the keywords are:
        effect_size, df_num, df_denom, alpha, power

    Exactly one needs to be ``None``, all others need numeric values.

    The effect size is Cohen's ``f``, square root of ``f2``.

    The sample size is given by ``nobs = df_denom + df_num + ncc``.

    Warning: The meaning of df_num and df_denom is reversed.

    Parameters
    ----------
    effect_size : float
        Standardized effect size. The effect size is here Cohen's ``f``,
        square root of ``f2``.
    df_num : int or float
        Warning: incorrect name.
        Denominator degrees of freedom.
        This corresponds to the number of constraints in Wald tests.
        Sample size is given by ``nobs = df_denom + df_num + ncc``.
    df_denom : int or float
        Warning: incorrect name.
        Numerator degrees of freedom.
        This corresponds to the df_resid in Wald tests.
    alpha : float in interval (0,1)
        significance level, e.g. 0.05, is the probability of a type I
        error, that is, wrong rejections if the Null Hypothesis is true.
    power : float in interval (0,1)
        power of the test, e.g. 0.8, is one minus the probability of a
        type II error. Power is the probability that the test correctly
        rejects the Null Hypothesis if the Alternative Hypothesis is true.
    ncc : int
        degrees of freedom correction for non-centrality parameter.
        see Notes
    kwargs : empty
        ``kwargs`` are not used and included for backwards compatibility.
        If ``nobs`` is used as keyword, then a warning is issued. All
        other keywords in ``kwargs`` raise a ValueError.

    Returns
    -------
    value : float
        The value of the parameter that was set to None in the call. The
        value solves the power equation given the remaining parameters.

    Notes
    -----
    The method uses scipy.optimize for finding the value that satisfies
    the power equation. It first uses ``brentq`` with a prior search for
    bounds. If this fails to find a root, ``fsolve`` is used. If ``fsolve``
    also fails, then, for ``alpha``, ``power`` and ``effect_size``,
    ``brentq`` with fixed bounds is used. However, there can still be
    cases where this fails.
    '''
    if kwargs:
        if "nobs" in kwargs:
            warnings.warn("nobs is not used")
        else:
            raise ValueError(f"incorrect keyword(s) {kwargs}")
    return super().solve_power(effect_size=effect_size,
                               df_num=df_num,
                               df_denom=df_denom,
                               alpha=alpha,
                               power=power,
                               ncc=ncc)
func_name: solve_power
language: python
repo: statsmodels/statsmodels
path: statsmodels/stats/power.py
url: https://github.com/statsmodels/statsmodels/blob/master/statsmodels/stats/power.py
license: BSD-3-Clause
def power(self, effect_size, df_num, df_denom, alpha, ncc=1):
    '''Calculate the power of an F-test.

    The effect size is Cohen's ``f^2``.

    The sample size is given by ``nobs = df_denom + df_num + ncc``.

    Parameters
    ----------
    effect_size : float
        The effect size is here Cohen's ``f2``. This is equal to the
        noncentrality of an F-test divided by nobs.
    df_num : int or float
        Numerator degrees of freedom. This corresponds to the number of
        constraints in Wald tests.
    df_denom : int or float
        Denominator degrees of freedom. This corresponds to the df_resid
        in Wald tests.
    alpha : float in interval (0,1)
        Significance level, e.g. 0.05, is the probability of a type I
        error, that is, wrong rejections if the Null Hypothesis is true.
    ncc : int
        Degrees of freedom correction for non-centrality parameter.
        see Notes

    Returns
    -------
    power : float
        Power of the test, e.g. 0.8, is one minus the probability of a
        type II error. Power is the probability that the test correctly
        rejects the Null Hypothesis if the Alternative Hypothesis is true.

    Notes
    -----
    The sample size is given implicitly by df_denom.

    set ncc=0 to match t-test, or f-test in LikelihoodModelResults.
    ncc=1 matches the non-centrality parameter in R::pwr::pwr.f2.test

    ftest_power with ncc=0 should also be correct for f_test in
    regression models, with df_num and df_denom as defined there.
    (not verified yet)
    '''
    pow_ = ftest_power_f2(effect_size, df_num, df_denom, alpha, ncc=ncc)
    return pow_
func_name: power
language: python
repo: statsmodels/statsmodels
path: statsmodels/stats/power.py
url: https://github.com/statsmodels/statsmodels/blob/master/statsmodels/stats/power.py
license: BSD-3-Clause
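A hedged sketch of the f2-based power; the class name FTestPowerF2 is
assumed from the repo path and recent statsmodels versions, and the values
are illustrative (a Wald test with 3 constraints and 100 residual degrees
of freedom):

    from statsmodels.stats.power import FTestPowerF2

    p = FTestPowerF2().power(effect_size=0.15,  # Cohen's f2
                             df_num=3,          # number of constraints
                             df_denom=100,      # df_resid
                             alpha=0.05, ncc=1)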
def solve_power(self, effect_size=None, df_num=None, df_denom=None,
                alpha=None, power=None, ncc=1):
    '''Solve for any one parameter of the power of an F-test

    for the one sample F-test the keywords are:
        effect_size, df_num, df_denom, alpha, power

    Exactly one needs to be ``None``, all others need numeric values.

    The effect size is Cohen's ``f2``.

    The sample size is given by ``nobs = df_denom + df_num + ncc``, and
    can be found by solving for df_denom.

    Parameters
    ----------
    effect_size : float
        The effect size is here Cohen's ``f2``. This is equal to the
        noncentrality of an F-test divided by nobs.
    df_num : int or float
        Numerator degrees of freedom. This corresponds to the number of
        constraints in Wald tests.
    df_denom : int or float
        Denominator degrees of freedom. This corresponds to the df_resid
        in Wald tests.
    alpha : float in interval (0,1)
        significance level, e.g. 0.05, is the probability of a type I
        error, that is, wrong rejections if the Null Hypothesis is true.
    power : float in interval (0,1)
        power of the test, e.g. 0.8, is one minus the probability of a
        type II error. Power is the probability that the test correctly
        rejects the Null Hypothesis if the Alternative Hypothesis is true.
    ncc : int
        degrees of freedom correction for non-centrality parameter.
        see Notes

    Returns
    -------
    value : float
        The value of the parameter that was set to None in the call. The
        value solves the power equation given the remaining parameters.

    Notes
    -----
    The function uses scipy.optimize for finding the value that satisfies
    the power equation. It first uses ``brentq`` with a prior search for
    bounds. If this fails to find a root, ``fsolve`` is used. If ``fsolve``
    also fails, then, for ``alpha``, ``power`` and ``effect_size``,
    ``brentq`` with fixed bounds is used. However, there can still be
    cases where this fails.
    '''
    return super().solve_power(effect_size=effect_size,
                               df_num=df_num,
                               df_denom=df_denom,
                               alpha=alpha,
                               power=power,
                               ncc=ncc)
func_name: solve_power
language: python
repo: statsmodels/statsmodels
path: statsmodels/stats/power.py
url: https://github.com/statsmodels/statsmodels/blob/master/statsmodels/stats/power.py
license: BSD-3-Clause
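Since ``nobs = df_denom + df_num + ncc``, a required sample size is
obtained by solving for df_denom; a hedged sketch under the same
FTestPowerF2 assumption, with illustrative values:

    from statsmodels.stats.power import FTestPowerF2

    df_denom = FTestPowerF2().solve_power(effect_size=0.15, df_num=3,
                                          alpha=0.05, power=0.8, ncc=1)
    nobs = df_denom + 3 + 1  # invert nobs = df_denom + df_num + ncc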
def power(self, effect_size, nobs, alpha, k_groups=2):
    '''Calculate the power of an F-test for one factor ANOVA.

    Parameters
    ----------
    effect_size : float
        standardized effect size. The effect size is here Cohen's f,
        square root of "f2".
    nobs : int or float
        sample size, number of observations.
    alpha : float in interval (0,1)
        significance level, e.g. 0.05, is the probability of a type I
        error, that is, wrong rejections if the Null Hypothesis is true.
    k_groups : int or float
        number of groups in the ANOVA or k-sample comparison.
        Default is 2.

    Returns
    -------
    power : float
        Power of the test, e.g. 0.8, is one minus the probability of a
        type II error. Power is the probability that the test correctly
        rejects the Null Hypothesis if the Alternative Hypothesis is true.
    '''
    return ftest_anova_power(effect_size, nobs, alpha, k_groups=k_groups)
func_name: power
language: python
repo: statsmodels/statsmodels
path: statsmodels/stats/power.py
url: https://github.com/statsmodels/statsmodels/blob/master/statsmodels/stats/power.py
license: BSD-3-Clause
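A minimal usage sketch through statsmodels' FTestAnovaPower class (name
from the repo path; values illustrative):

    from statsmodels.stats.power import FTestAnovaPower

    p = FTestAnovaPower().power(effect_size=0.25, nobs=120, alpha=0.05,
                                k_groups=3)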
def solve_power(self, effect_size=None, nobs=None, alpha=None, power=None,
                k_groups=2):
    '''solve for any one parameter of the power of an F-test

    for the one sample F-test the keywords are:
        effect_size, nobs, alpha, power

    Exactly one needs to be ``None``, all others need numeric values.

    Parameters
    ----------
    effect_size : float
        standardized effect size, mean divided by the standard deviation.
        effect size has to be positive.
    nobs : int or float
        sample size, number of observations.
    alpha : float in interval (0,1)
        significance level, e.g. 0.05, is the probability of a type I
        error, that is, wrong rejections if the Null Hypothesis is true.
    power : float in interval (0,1)
        power of the test, e.g. 0.8, is one minus the probability of a
        type II error. Power is the probability that the test correctly
        rejects the Null Hypothesis if the Alternative Hypothesis is true.
    k_groups : int or float
        number of groups in the ANOVA or k-sample comparison.
        Default is 2.

    Returns
    -------
    value : float
        The value of the parameter that was set to None in the call. The
        value solves the power equation given the remaining parameters.

    Notes
    -----
    The function uses scipy.optimize for finding the value that satisfies
    the power equation. It first uses ``brentq`` with a prior search for
    bounds. If this fails to find a root, ``fsolve`` is used. If ``fsolve``
    also fails, then, for ``alpha``, ``power`` and ``effect_size``,
    ``brentq`` with fixed bounds is used. However, there can still be
    cases where this fails.
    '''
    # update start values for root finding
    if k_groups is not None:
        self.start_ttp['nobs'] = k_groups * 10
        self.start_bqexp['nobs'] = dict(low=k_groups * 2,
                                        start_upp=k_groups * 10)
    # first attempt at special casing
    if effect_size is None:
        return self._solve_effect_size(effect_size=effect_size,
                                       nobs=nobs,
                                       alpha=alpha,
                                       k_groups=k_groups,
                                       power=power)
    return super().solve_power(effect_size=effect_size,
                               nobs=nobs,
                               alpha=alpha,
                               k_groups=k_groups,
                               power=power)
func_name: solve_power
language: python
repo: statsmodels/statsmodels
path: statsmodels/stats/power.py
url: https://github.com/statsmodels/statsmodels/blob/master/statsmodels/stats/power.py
license: BSD-3-Clause
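A minimal sketch solving for the total ANOVA sample size (FTestAnovaPower
as above; values illustrative):

    from statsmodels.stats.power import FTestAnovaPower

    nobs = FTestAnovaPower().solve_power(effect_size=0.25, alpha=0.05,
                                         power=0.8, k_groups=3)
    print(nobs)  # roughly 158 total observations for Cohen's f = 0.25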
def _solve_effect_size(self, effect_size=None, nobs=None, alpha=None,
                       power=None, k_groups=2):
    '''experimental, test failure in solve_power for effect_size
    '''

    def func(x):
        effect_size = x
        return self._power_identity(effect_size=effect_size,
                                    nobs=nobs,
                                    alpha=alpha,
                                    k_groups=k_groups,
                                    power=power)

    val, r = optimize.brentq(func, 1e-8, 1 - 1e-8, full_output=True)
    if not r.converged:
        print(r)
    return val
func_name: _solve_effect_size
language: python
repo: statsmodels/statsmodels
path: statsmodels/stats/power.py
url: https://github.com/statsmodels/statsmodels/blob/master/statsmodels/stats/power.py
license: BSD-3-Clause
def power(self, effect_size, nobs, alpha, n_bins, ddof=0):
    # alternative='two-sided'
    '''Calculate the power of a chisquare test for one sample

    Only two-sided alternative is implemented

    Parameters
    ----------
    effect_size : float
        standardized effect size, according to Cohen's definition.
        see :func:`statsmodels.stats.gof.chisquare_effectsize`
    nobs : int or float
        sample size, number of observations.
    alpha : float in interval (0,1)
        significance level, e.g. 0.05, is the probability of a type I
        error, that is, wrong rejections if the Null Hypothesis is true.
    n_bins : int
        number of bins or cells in the distribution.

    Returns
    -------
    power : float
        Power of the test, e.g. 0.8, is one minus the probability of a
        type II error. Power is the probability that the test correctly
        rejects the Null Hypothesis if the Alternative Hypothesis is true.
    '''
    from statsmodels.stats.gof import chisquare_power
    # pass ddof through instead of hard-coding it to zero
    return chisquare_power(effect_size, nobs, n_bins, alpha, ddof=ddof)
func_name: power
language: python
repo: statsmodels/statsmodels
path: statsmodels/stats/power.py
url: https://github.com/statsmodels/statsmodels/blob/master/statsmodels/stats/power.py
license: BSD-3-Clause
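A minimal sketch pairing this with chisquare_effectsize, as the docstring
suggests (class name GofChisquarePower from the repo path; the two
probability vectors are illustrative):

    from statsmodels.stats.gof import chisquare_effectsize
    from statsmodels.stats.power import GofChisquarePower

    es = chisquare_effectsize([0.25, 0.25, 0.25, 0.25],
                              [0.4, 0.3, 0.2, 0.1])
    p = GofChisquarePower().power(es, nobs=200, alpha=0.05, n_bins=4)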
def solve_power(self, effect_size=None, nobs=None, alpha=None, power=None,
                n_bins=2):
    '''solve for any one parameter of the power of a one sample
    chisquare-test

    for the one sample chisquare-test the keywords are:
        effect_size, nobs, alpha, power

    Exactly one needs to be ``None``, all others need numeric values.

    n_bins needs to be defined, a default=2 is used.

    Parameters
    ----------
    effect_size : float
        standardized effect size, according to Cohen's definition.
        see :func:`statsmodels.stats.gof.chisquare_effectsize`
    nobs : int or float
        sample size, number of observations.
    alpha : float in interval (0,1)
        significance level, e.g. 0.05, is the probability of a type I
        error, that is, wrong rejections if the Null Hypothesis is true.
    power : float in interval (0,1)
        power of the test, e.g. 0.8, is one minus the probability of a
        type II error. Power is the probability that the test correctly
        rejects the Null Hypothesis if the Alternative Hypothesis is true.
    n_bins : int
        number of bins or cells in the distribution

    Returns
    -------
    value : float
        The value of the parameter that was set to None in the call. The
        value solves the power equation given the remaining parameters.

    Notes
    -----
    The function uses scipy.optimize for finding the value that satisfies
    the power equation. It first uses ``brentq`` with a prior search for
    bounds. If this fails to find a root, ``fsolve`` is used. If ``fsolve``
    also fails, then, for ``alpha``, ``power`` and ``effect_size``,
    ``brentq`` with fixed bounds is used. However, there can still be
    cases where this fails.
    '''
    return super().solve_power(effect_size=effect_size,
                               nobs=nobs,
                               n_bins=n_bins,
                               alpha=alpha,
                               power=power)
func_name: solve_power
language: python
repo: statsmodels/statsmodels
path: statsmodels/stats/power.py
url: https://github.com/statsmodels/statsmodels/blob/master/statsmodels/stats/power.py
license: BSD-3-Clause
def power(self, effect_size, nobs1, alpha, ratio=1, alternative='two-sided'):
    '''Calculate the power of a chisquare test for two independent samples

    Parameters
    ----------
    effect_size : float
        standardized effect size, difference between the two means divided
        by the standard deviation. effect size has to be positive.
    nobs1 : int or float
        number of observations of sample 1. The number of observations of
        sample two is ratio times the size of sample 1, i.e.
        ``nobs2 = nobs1 * ratio``
    alpha : float in interval (0,1)
        significance level, e.g. 0.05, is the probability of a type I
        error, that is, wrong rejections if the Null Hypothesis is true.
    ratio : float
        ratio of the number of observations in sample 2 relative to
        sample 1. see description of nobs1.
        The default for ratio is 1; to solve for ratio given the other
        arguments it has to be explicitly set to None.
    alternative : str, 'two-sided' (default) or 'one-sided'
        extra argument to choose whether the power is calculated for a
        two-sided (default) or one-sided test. 'one-sided' assumes we are
        in the relevant tail.

    Returns
    -------
    power : float
        Power of the test, e.g. 0.8, is one minus the probability of a
        type II error. Power is the probability that the test correctly
        rejects the Null Hypothesis if the Alternative Hypothesis is true.
    '''
    from statsmodels.stats.gof import chisquare_power
    nobs2 = nobs1 * ratio
    # equivalent to nobs = n1 * n2 / (n1 + n2) = n1 * ratio / (1 + ratio)
    nobs = 1. / (1. / nobs1 + 1. / nobs2)
    return chisquare_power(effect_size, nobs, alpha)
func_name: power
language: python
repo: statsmodels/statsmodels
path: statsmodels/stats/power.py
url: https://github.com/statsmodels/statsmodels/blob/master/statsmodels/stats/power.py
license: BSD-3-Clause
def solve_power(self, effect_size=None, nobs1=None, alpha=None, power=None,
                ratio=1., alternative='two-sided'):
    '''solve for any one parameter of the power of a two sample z-test

    for z-test the keywords are:
        effect_size, nobs1, alpha, power, ratio

    exactly one needs to be ``None``, all others need numeric values

    Parameters
    ----------
    effect_size : float
        standardized effect size, difference between the two means divided
        by the standard deviation.
    nobs1 : int or float
        number of observations of sample 1. The number of observations of
        sample two is ratio times the size of sample 1, i.e.
        ``nobs2 = nobs1 * ratio``
    alpha : float in interval (0,1)
        significance level, e.g. 0.05, is the probability of a type I
        error, that is, wrong rejections if the Null Hypothesis is true.
    power : float in interval (0,1)
        power of the test, e.g. 0.8, is one minus the probability of a
        type II error. Power is the probability that the test correctly
        rejects the Null Hypothesis if the Alternative Hypothesis is true.
    ratio : float
        ratio of the number of observations in sample 2 relative to
        sample 1. see description of nobs1.
        The default for ratio is 1; to solve for ratio given the other
        arguments it has to be explicitly set to None.
    alternative : str, 'two-sided' (default) or 'one-sided'
        extra argument to choose whether the power is calculated for a
        two-sided (default) or one-sided test. 'one-sided' assumes we are
        in the relevant tail.

    Returns
    -------
    value : float
        The value of the parameter that was set to None in the call. The
        value solves the power equation given the remaining parameters.

    Notes
    -----
    The function uses scipy.optimize for finding the value that satisfies
    the power equation. It first uses ``brentq`` with a prior search for
    bounds. If this fails to find a root, ``fsolve`` is used. If ``fsolve``
    also fails, then, for ``alpha``, ``power`` and ``effect_size``,
    ``brentq`` with fixed bounds is used. However, there can still be
    cases where this fails.
    '''
    return super().solve_power(effect_size=effect_size,
                               nobs1=nobs1,
                               alpha=alpha,
                               power=power,
                               ratio=ratio,
                               alternative=alternative)
func_name: solve_power
language: python
repo: statsmodels/statsmodels
path: statsmodels/stats/power.py
url: https://github.com/statsmodels/statsmodels/blob/master/statsmodels/stats/power.py
license: BSD-3-Clause
def _bound_proportion_confint(
    func: Callable[[float], float], qi: float, lower: bool = True
) -> float:
    """
    Try hard to find a bound different from eps/1 - eps in
    proportion_confint

    Parameters
    ----------
    func : callable
        Callable function to use as the objective of the search
    qi : float
        The empirical success rate
    lower : bool
        Whether to find a lower bound for the left side of the CI

    Returns
    -------
    float
        The coarse bound
    """
    default = FLOAT_INFO.eps if lower else 1.0 - FLOAT_INFO.eps

    def step(v):
        return v / 8 if lower else v + (1.0 - v) / 8

    x = step(qi)
    w = func(x)
    cnt = 1
    while w > 0 and cnt < 10:
        x = step(x)
        w = func(x)
        cnt += 1
    return x if cnt < 10 else default
func_name: _bound_proportion_confint
language: python
repo: statsmodels/statsmodels
path: statsmodels/stats/proportion.py
url: https://github.com/statsmodels/statsmodels/blob/master/statsmodels/stats/proportion.py
license: BSD-3-Clause
def _bisection_search_conservative(
    func: Callable[[float], float], lb: float, ub: float, steps: int = 27
) -> tuple[float, float]:
    """
    Private function used as a fallback by proportion_confint

    Used when brentq returns a non-conservative bound for the CI

    Parameters
    ----------
    func : callable
        Callable function to use as the objective of the search
    lb : float
        Lower bound
    ub : float
        Upper bound
    steps : int
        Number of steps to use in the bisection

    Returns
    -------
    est : float
        The estimated value. Will always produce a negative value of func
    func_val : float
        The value of the function at the estimate
    """
    upper = func(ub)
    lower = func(lb)
    best = upper if upper < 0 else lower
    best_pt = ub if upper < 0 else lb
    if np.sign(lower) == np.sign(upper):
        raise ValueError("problem with signs")
    mp = (ub + lb) / 2
    mid = func(mp)
    if (mid < 0) and (mid > best):
        best = mid
        best_pt = mp
    for _ in range(steps):
        if np.sign(mid) == np.sign(upper):
            ub = mp
            upper = mid
        else:
            lb = mp
        mp = (ub + lb) / 2
        mid = func(mp)
        if (mid < 0) and (mid > best):
            best = mid
            best_pt = mp
    return best_pt, best
func_name: _bisection_search_conservative
language: python
repo: statsmodels/statsmodels
path: statsmodels/stats/proportion.py
url: https://github.com/statsmodels/statsmodels/blob/master/statsmodels/stats/proportion.py
license: BSD-3-Clause
def proportion_confint(count, nobs, alpha: float = 0.05, method="normal",
                       alternative: str = 'two-sided'):
    """
    Confidence interval for a binomial proportion

    Parameters
    ----------
    count : {int or float, array_like}
        number of successes, can be pandas Series or DataFrame. Arrays
        must contain integer values if method is "binom_test".
    nobs : {int or float, array_like}
        total number of trials. Arrays must contain integer values if
        method is "binom_test".
    alpha : float
        Significance level, default 0.05. Must be in (0, 1)
    method : {"normal", "agresti_coull", "beta", "wilson", "binom_test"}
        default: "normal"
        method to use for confidence interval. Supported methods:

        - `normal` : asymptotic normal approximation
        - `agresti_coull` : Agresti-Coull interval
        - `beta` : Clopper-Pearson interval based on Beta distribution
        - `wilson` : Wilson Score interval
        - `jeffreys` : Jeffreys Bayesian Interval
        - `binom_test` : Numerical inversion of binom_test
    alternative : {"two-sided", "larger", "smaller"}
        default: "two-sided"
        specifies whether to calculate a two-sided or one-sided
        confidence interval.

    Returns
    -------
    ci_low, ci_upp : {float, ndarray, Series, DataFrame}
        larger and smaller confidence level with coverage (approximately)
        1-alpha. When a pandas object is returned, then the index is
        taken from `count`. When side is not "two-sided", the lower or
        upper bound is set to 0 or 1 respectively.

    Notes
    -----
    Beta, the Clopper-Pearson exact interval, has coverage at least
    1-alpha, but is in general conservative. Most of the other methods
    have average coverage equal to 1-alpha, but will have smaller
    coverage in some cases.

    The "beta" and "jeffreys" intervals are central, they use alpha/2 in
    each tail, and alpha is not adjusted at the boundaries. In the
    extreme case when `count` is zero or equal to `nobs`, then the
    coverage will be only 1 - alpha/2 in the case of "beta".

    The confidence intervals are clipped to be in the [0, 1] interval in
    the case of "normal" and "agresti_coull".

    Method "binom_test" directly inverts the binomial test in
    scipy.stats, which has discrete steps.

    TODO: binom_test intervals raise an exception in small samples if one
    interval bound is close to zero or one.

    References
    ----------
    .. [*] https://en.wikipedia.org/wiki/Binomial_proportion_confidence_interval
    .. [*] Brown, Lawrence D.; Cai, T. Tony; DasGupta, Anirban (2001).
           "Interval Estimation for a Binomial Proportion", Statistical
           Science 16 (2): 101–133. doi:10.1214/ss/1009213286.
    """
    is_scalar = np.isscalar(count) and np.isscalar(nobs)
    is_pandas = isinstance(count, (pd.Series, pd.DataFrame))
    count_a = array_like(count, "count", optional=False, ndim=None)
    nobs_a = array_like(nobs, "nobs", optional=False, ndim=None)

    def _check(x: np.ndarray, name: str) -> np.ndarray:
        if np.issubdtype(x.dtype, np.integer):
            return x
        y = x.astype(np.int64, casting="unsafe")
        if np.any(y != x):
            raise ValueError(
                f"{name} must have an integral dtype. Found data with "
                f"dtype {x.dtype}"
            )
        return y

    if method == "binom_test":
        count_a = _check(np.asarray(count_a), "count")
        nobs_a = _check(np.asarray(nobs_a), "nobs")

    q_ = count_a / nobs_a

    if alternative == 'two-sided':
        if method != "binom_test":
            alpha = alpha / 2.0
    elif alternative not in ['larger', 'smaller']:
        raise NotImplementedError(f"alternative {alternative} is not available")

    if method == "normal":
        std_ = np.sqrt(q_ * (1 - q_) / nobs_a)
        dist = stats.norm.isf(alpha) * std_
        ci_low = q_ - dist
        ci_upp = q_ + dist
    elif method == "binom_test" and alternative == 'two-sided':

        def func_factory(count: int, nobs: int) -> Callable[[float], float]:
            if hasattr(stats, "binomtest"):

                def func(qi):
                    return stats.binomtest(count, nobs, p=qi).pvalue - alpha

            else:
                # Remove after min SciPy >= 1.7
                def func(qi):
                    return stats.binom_test(count, nobs, p=qi) - alpha

            return func

        bcast = np.broadcast(count_a, nobs_a)
        ci_low = np.zeros(bcast.shape)
        ci_upp = np.zeros(bcast.shape)
        index = bcast.index
        for c, n in bcast:
            # Enforce symmetry
            reverse = False
            _q = q_.flat[index]
            if c > n // 2:
                c = n - c
                reverse = True
                _q = 1 - _q
            func = func_factory(c, n)
            if c == 0:
                ci_low.flat[index] = 0.0
            else:
                lower_bnd = _bound_proportion_confint(func, _q, lower=True)
                val, _z = optimize.brentq(func, lower_bnd, _q,
                                          full_output=True)
                if func(val) > 0:
                    power = 10
                    new_lb = val - (val - lower_bnd) / 2**power
                    while func(new_lb) > 0 and power >= 0:
                        power -= 1
                        new_lb = val - (val - lower_bnd) / 2**power
                    val, _ = _bisection_search_conservative(func, new_lb, _q)
                ci_low.flat[index] = val
            if c == n:
                ci_upp.flat[index] = 1.0
            else:
                upper_bnd = _bound_proportion_confint(func, _q, lower=False)
                val, _z = optimize.brentq(func, _q, upper_bnd,
                                          full_output=True)
                if func(val) > 0:
                    power = 10
                    new_ub = val + (upper_bnd - val) / 2**power
                    while func(new_ub) > 0 and power >= 0:
                        power -= 1
                        new_ub = val + (upper_bnd - val) / 2**power
                    val, _ = _bisection_search_conservative(func, _q, new_ub)
                ci_upp.flat[index] = val
            if reverse:
                temp = ci_upp.flat[index]
                ci_upp.flat[index] = 1 - ci_low.flat[index]
                ci_low.flat[index] = 1 - temp
            index = bcast.index
    elif method == "beta" or (method == "binom_test"
                              and alternative != 'two-sided'):
        ci_low = stats.beta.ppf(alpha, count_a, nobs_a - count_a + 1)
        ci_upp = stats.beta.isf(alpha, count_a + 1, nobs_a - count_a)
        if np.ndim(ci_low) > 0:
            ci_low.flat[q_.flat == 0] = 0
            ci_upp.flat[q_.flat == 1] = 1
        else:
            ci_low = 0 if q_ == 0 else ci_low
            ci_upp = 1 if q_ == 1 else ci_upp
    elif method == "agresti_coull":
        crit = stats.norm.isf(alpha)
        nobs_c = nobs_a + crit**2
        q_c = (count_a + crit**2 / 2.0) / nobs_c
        std_c = np.sqrt(q_c * (1.0 - q_c) / nobs_c)
        dist = crit * std_c
        ci_low = q_c - dist
        ci_upp = q_c + dist
    elif method == "wilson":
        crit = stats.norm.isf(alpha)
        crit2 = crit**2
        denom = 1 + crit2 / nobs_a
        center = (q_ + crit2 / (2 * nobs_a)) / denom
        dist = crit * np.sqrt(
            q_ * (1.0 - q_) / nobs_a + crit2 / (4.0 * nobs_a**2)
        )
        dist /= denom
        ci_low = center - dist
        ci_upp = center + dist
    # method adjusted to be more forgiving of misspellings or incorrect
    # option name
    elif method[:4] == "jeff":
        ci_low = stats.beta.ppf(alpha, count_a + 0.5, nobs_a - count_a + 0.5)
        ci_upp = stats.beta.isf(alpha, count_a + 0.5, nobs_a - count_a + 0.5)
    else:
        raise NotImplementedError(f"method {method} is not available")

    if method in ["normal", "agresti_coull"]:
        ci_low = np.clip(ci_low, 0, 1)
        ci_upp = np.clip(ci_upp, 0, 1)
    if is_pandas:
        container = pd.Series if isinstance(count, pd.Series) else pd.DataFrame
        ci_low = container(ci_low, index=count.index)
        ci_upp = container(ci_upp, index=count.index)
    if alternative == 'larger':
        ci_low = 0
    elif alternative == 'smaller':
        ci_upp = 1
    if is_scalar:
        return float(ci_low), float(ci_upp)
    return ci_low, ci_upp
func_name: proportion_confint
language: python
repo: statsmodels/statsmodels
path: statsmodels/stats/proportion.py
url: https://github.com/statsmodels/statsmodels/blob/master/statsmodels/stats/proportion.py
license: BSD-3-Clause
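A minimal usage sketch of proportion_confint (values illustrative), e.g. a
two-sided Wilson score interval for 18 successes in 50 trials:

    from statsmodels.stats.proportion import proportion_confint

    low, upp = proportion_confint(18, 50, alpha=0.05, method="wilson")
    print(low, upp)  # roughly (0.24, 0.50)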
def poisson_interval(interval, p):
    """
    Compute P(b <= Z <= a) where Z ~ Poisson(p) and `interval = (b, a)`.
    """
    b, a = interval
    prob = stats.poisson.cdf(a, p) - stats.poisson.cdf(b - 1, p)
    return prob
func_name: multinomial_proportions_confint.poisson_interval
language: python
repo: statsmodels/statsmodels
path: statsmodels/stats/proportion.py
url: https://github.com/statsmodels/statsmodels/blob/master/statsmodels/stats/proportion.py
license: BSD-3-Clause
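A standalone check of the identity used above, written directly against
scipy (the values are illustrative):

    from scipy import stats

    b, a, p = 2, 5, 3.0
    prob = stats.poisson.cdf(a, p) - stats.poisson.cdf(b - 1, p)
    # equals the pmf summed over the integers {2, 3, 4, 5}
    expected = sum(stats.poisson.pmf(k, p) for k in range(b, a + 1))
    assert abs(prob - expected) < 1e-12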
def truncated_poisson_factorial_moment(interval, r, p):
    """
    Compute mu_r, the r-th factorial moment of a poisson random variable
    of parameter `p` truncated to `interval = (b, a)`.
    """
    b, a = interval
    return p ** r * (1 - ((poisson_interval((a - r + 1, a), p) -
                           poisson_interval((b - r, b - 1), p)) /
                          poisson_interval((b, a), p)))
Compute mu_r, the r-th factorial moment of a poisson random variable of parameter `p` truncated to `interval = (b, a)`.
multinomial_proportions_confint.truncated_poisson_factorial_moment
python
statsmodels/statsmodels
statsmodels/stats/proportion.py
https://github.com/statsmodels/statsmodels/blob/master/statsmodels/stats/proportion.py
BSD-3-Clause
def edgeworth(intervals):
    """
    Compute the Edgeworth expansion term of Sison & Glaz's formula
    (1) (approximated probability for multinomial proportions in a
    given box).
    """
    # Compute means and central moments of the truncated poisson
    # variables.
    mu_r1, mu_r2, mu_r3, mu_r4 = (
        np.array([truncated_poisson_factorial_moment(interval, r, p)
                  for (interval, p) in zip(intervals, counts)])
        for r in range(1, 5)
    )
    mu = mu_r1
    mu2 = mu_r2 + mu - mu ** 2
    mu3 = mu_r3 + mu_r2 * (3 - 3 * mu) + mu - 3 * mu ** 2 + 2 * mu ** 3
    mu4 = (mu_r4 + mu_r3 * (6 - 4 * mu) +
           mu_r2 * (7 - 12 * mu + 6 * mu ** 2) +
           mu - 4 * mu ** 2 + 6 * mu ** 3 - 3 * mu ** 4)
    # Compute expansion factors, gamma_1 and gamma_2.
    g1 = mu3.sum() / mu2.sum() ** 1.5
    g2 = (mu4.sum() - 3 * (mu2 ** 2).sum()) / mu2.sum() ** 2
    # Compute the expansion itself.
    x = (n - mu.sum()) / np.sqrt(mu2.sum())
    phi = np.exp(- x ** 2 / 2) / np.sqrt(2 * np.pi)
    H3 = x ** 3 - 3 * x
    H4 = x ** 4 - 6 * x ** 2 + 3
    H6 = x ** 6 - 15 * x ** 4 + 45 * x ** 2 - 15
    f = phi * (1 + g1 * H3 / 6 + g2 * H4 / 24 + g1 ** 2 * H6 / 72)
    return f / np.sqrt(mu2.sum())
Compute the Edgeworth expansion term of Sison & Glaz's formula (1) (approximated probability for multinomial proportions in a given box).
multinomial_proportions_confint.edgeworth
python
statsmodels/statsmodels
statsmodels/stats/proportion.py
https://github.com/statsmodels/statsmodels/blob/master/statsmodels/stats/proportion.py
BSD-3-Clause
def approximated_multinomial_interval(intervals):
    """
    Compute approximated probability for Multinomial(n, proportions)
    to be in `intervals` (Sison & Glaz's formula (1)).
    """
    return np.exp(
        np.sum(np.log([poisson_interval(interval, p)
                       for (interval, p) in zip(intervals, counts)])) +
        np.log(edgeworth(intervals)) -
        np.log(stats.poisson._pmf(n, n))
    )
Compute approximated probability for Multinomial(n, proportions) to be in `intervals` (Sison & Glaz's formula (1)).
multinomial_proportions_confint.approximated_multinomial_interval
python
statsmodels/statsmodels
statsmodels/stats/proportion.py
https://github.com/statsmodels/statsmodels/blob/master/statsmodels/stats/proportion.py
BSD-3-Clause
def nu(c):
    """
    Compute interval coverage for a given `c` (Sison & Glaz's
    formula (7)).
    """
    return approximated_multinomial_interval(
        [(np.maximum(count - c, 0), np.minimum(count + c, n))
         for count in counts])
Compute interval coverage for a given `c` (Sison & Glaz's formula (7)).
multinomial_proportions_confint.nu
python
statsmodels/statsmodels
statsmodels/stats/proportion.py
https://github.com/statsmodels/statsmodels/blob/master/statsmodels/stats/proportion.py
BSD-3-Clause
def multinomial_proportions_confint(counts, alpha=0.05, method='goodman'):
    """
    Confidence intervals for multinomial proportions.

    Parameters
    ----------
    counts : array_like of int, 1-D
        Number of observations in each category.
    alpha : float in (0, 1), optional
        Significance level, defaults to 0.05.
    method : {'goodman', 'sison-glaz'}, optional
        Method to use to compute the confidence intervals; available methods
        are:

         - `goodman`: based on a chi-squared approximation, valid if all
           values in `counts` are greater than or equal to 5 [2]_
         - `sison-glaz`: less conservative than `goodman`, but only valid if
           `counts` has 7 or more categories (``len(counts) >= 7``) [3]_

    Returns
    -------
    confint : ndarray, 2-D
        Array of [lower, upper] confidence levels for each category, such
        that overall coverage is (approximately) `1-alpha`.

    Raises
    ------
    ValueError
        If `alpha` is not in `(0, 1)` (bounds excluded), or if the values in
        `counts` are not all non-negative.
    NotImplementedError
        If `method` is not known.
    Exception
        When ``method == 'sison-glaz'``, if for some reason `c` cannot be
        computed; this signals a bug and should be reported.

    Notes
    -----
    The `goodman` method [2]_ is based on approximating a statistic based
    on the multinomial as a chi-squared random variable. The usual
    recommendation is that this is valid if all the values in `counts` are
    greater than or equal to 5. There is no condition on the number of
    categories for this method.

    The `sison-glaz` method [3]_ approximates the multinomial probabilities,
    and evaluates that with a maximum-likelihood estimator. The first
    approximation is an Edgeworth expansion that converges when the number
    of categories goes to infinity, and the maximum-likelihood estimator
    converges when the number of observations (``sum(counts)``) goes to
    infinity.

    In their paper, Sison & Glaz demonstrate their method with at least 7
    categories, so ``len(counts) >= 7`` with all values in `counts` at or
    above 5 can be used as a rule of thumb for the validity of this method.
    This method is less conservative than the `goodman` method (i.e. it will
    yield confidence intervals closer to the desired significance level),
    but produces confidence intervals of uniform width over all categories
    (except when the intervals reach 0 or 1, in which case they are
    truncated), which makes it most useful when proportions are of similar
    magnitude.

    Aside from the original sources ([1]_, [2]_, and [3]_), the
    implementation uses the formulas (though not the code) presented in [4]_
    and [5]_.

    References
    ----------
    .. [1] Levin, Bruce, "A representation for multinomial cumulative
           distribution functions," The Annals of Statistics, Vol. 9, No. 5,
           1981, pp. 1123-1126.

    .. [2] Goodman, L.A., "On simultaneous confidence intervals for
           multinomial proportions," Technometrics, Vol. 7, No. 2, 1965,
           pp. 247-254.

    .. [3] Sison, Cristina P., and Joseph Glaz, "Simultaneous Confidence
           Intervals and Sample Size Determination for Multinomial
           Proportions," Journal of the American Statistical Association,
           Vol. 90, No. 429, 1995, pp. 366-369.

    .. [4] May, Warren L., and William D. Johnson, "A SAS® macro for
           constructing simultaneous confidence intervals for multinomial
           proportions," Computer methods and programs in Biomedicine,
           Vol. 53, No. 3, 1997, pp. 153-162.

    .. [5] May, Warren L., and William D. Johnson, "Constructing two-sided
           simultaneous confidence intervals for multinomial proportions for
           small counts in a large number of cells," Journal of Statistical
           Software, Vol. 5, No. 6, 2000, pp. 1-24.
    """
    if alpha <= 0 or alpha >= 1:
        raise ValueError('alpha must be in (0, 1), bounds excluded')
    counts = np.array(counts, dtype=float)
    if (counts < 0).any():
        raise ValueError('counts must be >= 0')

    n = counts.sum()
    k = len(counts)
    proportions = counts / n
    if method == 'goodman':
        chi2 = stats.chi2.ppf(1 - alpha / k, 1)
        delta = chi2 ** 2 + (4 * n * proportions * chi2 * (1 - proportions))
        region = ((2 * n * proportions + chi2 +
                   np.array([- np.sqrt(delta), np.sqrt(delta)])) /
                  (2 * (chi2 + n))).T
    elif method[:5] == 'sison':  # We accept any name starting with 'sison'
        # Define a few functions we'll use a lot.
        def poisson_interval(interval, p):
            """
            Compute P(b <= Z <= a) where Z ~ Poisson(p) and
            `interval = (b, a)`.
            """
            b, a = interval
            prob = stats.poisson.cdf(a, p) - stats.poisson.cdf(b - 1, p)
            return prob

        def truncated_poisson_factorial_moment(interval, r, p):
            """
            Compute mu_r, the r-th factorial moment of a poisson random
            variable of parameter `p` truncated to `interval = (b, a)`.
            """
            b, a = interval
            return p ** r * (1 - ((poisson_interval((a - r + 1, a), p) -
                                   poisson_interval((b - r, b - 1), p)) /
                                  poisson_interval((b, a), p)))

        def edgeworth(intervals):
            """
            Compute the Edgeworth expansion term of Sison & Glaz's formula
            (1) (approximated probability for multinomial proportions in a
            given box).
            """
            # Compute means and central moments of the truncated poisson
            # variables.
            mu_r1, mu_r2, mu_r3, mu_r4 = (
                np.array([truncated_poisson_factorial_moment(interval, r, p)
                          for (interval, p) in zip(intervals, counts)])
                for r in range(1, 5)
            )
            mu = mu_r1
            mu2 = mu_r2 + mu - mu ** 2
            mu3 = (mu_r3 + mu_r2 * (3 - 3 * mu) +
                   mu - 3 * mu ** 2 + 2 * mu ** 3)
            mu4 = (mu_r4 + mu_r3 * (6 - 4 * mu) +
                   mu_r2 * (7 - 12 * mu + 6 * mu ** 2) +
                   mu - 4 * mu ** 2 + 6 * mu ** 3 - 3 * mu ** 4)
            # Compute expansion factors, gamma_1 and gamma_2.
            g1 = mu3.sum() / mu2.sum() ** 1.5
            g2 = (mu4.sum() - 3 * (mu2 ** 2).sum()) / mu2.sum() ** 2
            # Compute the expansion itself.
            x = (n - mu.sum()) / np.sqrt(mu2.sum())
            phi = np.exp(- x ** 2 / 2) / np.sqrt(2 * np.pi)
            H3 = x ** 3 - 3 * x
            H4 = x ** 4 - 6 * x ** 2 + 3
            H6 = x ** 6 - 15 * x ** 4 + 45 * x ** 2 - 15
            f = phi * (1 + g1 * H3 / 6 + g2 * H4 / 24 + g1 ** 2 * H6 / 72)
            return f / np.sqrt(mu2.sum())

        def approximated_multinomial_interval(intervals):
            """
            Compute approximated probability for Multinomial(n, proportions)
            to be in `intervals` (Sison & Glaz's formula (1)).
            """
            return np.exp(
                np.sum(np.log([poisson_interval(interval, p)
                               for (interval, p)
                               in zip(intervals, counts)])) +
                np.log(edgeworth(intervals)) -
                np.log(stats.poisson._pmf(n, n))
            )

        def nu(c):
            """
            Compute interval coverage for a given `c` (Sison & Glaz's
            formula (7)).
            """
            return approximated_multinomial_interval(
                [(np.maximum(count - c, 0), np.minimum(count + c, n))
                 for count in counts])

        # Find the value of `c` that will give us the confidence intervals
        # (solving nu(c) <= 1 - alpha < nu(c + 1)).
        c = 1.0
        nuc = nu(c)
        nucp1 = nu(c + 1)
        while not (nuc <= (1 - alpha) < nucp1):
            if c > n:
                raise Exception("Couldn't find a value for `c` that "
                                "solves nu(c) <= 1 - alpha < nu(c + 1)")
            c += 1
            nuc = nucp1
            nucp1 = nu(c + 1)

        # Compute gamma and the corresponding confidence intervals.
        g = (1 - alpha - nuc) / (nucp1 - nuc)
        ci_lower = np.maximum(proportions - c / n, 0)
        ci_upper = np.minimum(proportions + (c + 2 * g) / n, 1)
        region = np.array([ci_lower, ci_upper]).T
    else:
        raise NotImplementedError('method "%s" is not available' % method)
    return region
Confidence intervals for multinomial proportions.

Parameters
----------
counts : array_like of int, 1-D
    Number of observations in each category.
alpha : float in (0, 1), optional
    Significance level, defaults to 0.05.
method : {'goodman', 'sison-glaz'}, optional
    Method to use to compute the confidence intervals; available methods
    are:

     - `goodman`: based on a chi-squared approximation, valid if all
       values in `counts` are greater than or equal to 5 [2]_
     - `sison-glaz`: less conservative than `goodman`, but only valid if
       `counts` has 7 or more categories (``len(counts) >= 7``) [3]_

Returns
-------
confint : ndarray, 2-D
    Array of [lower, upper] confidence levels for each category, such that
    overall coverage is (approximately) `1-alpha`.

Raises
------
ValueError
    If `alpha` is not in `(0, 1)` (bounds excluded), or if the values in
    `counts` are not all non-negative.
NotImplementedError
    If `method` is not known.
Exception
    When ``method == 'sison-glaz'``, if for some reason `c` cannot be
    computed; this signals a bug and should be reported.

Notes
-----
The `goodman` method [2]_ is based on approximating a statistic based on
the multinomial as a chi-squared random variable. The usual recommendation
is that this is valid if all the values in `counts` are greater than or
equal to 5. There is no condition on the number of categories for this
method.

The `sison-glaz` method [3]_ approximates the multinomial probabilities,
and evaluates that with a maximum-likelihood estimator. The first
approximation is an Edgeworth expansion that converges when the number of
categories goes to infinity, and the maximum-likelihood estimator converges
when the number of observations (``sum(counts)``) goes to infinity.

In their paper, Sison & Glaz demonstrate their method with at least 7
categories, so ``len(counts) >= 7`` with all values in `counts` at or above
5 can be used as a rule of thumb for the validity of this method. This
method is less conservative than the `goodman` method (i.e. it will yield
confidence intervals closer to the desired significance level), but
produces confidence intervals of uniform width over all categories (except
when the intervals reach 0 or 1, in which case they are truncated), which
makes it most useful when proportions are of similar magnitude.

Aside from the original sources ([1]_, [2]_, and [3]_), the implementation
uses the formulas (though not the code) presented in [4]_ and [5]_.

References
----------
.. [1] Levin, Bruce, "A representation for multinomial cumulative
       distribution functions," The Annals of Statistics, Vol. 9, No. 5,
       1981, pp. 1123-1126.

.. [2] Goodman, L.A., "On simultaneous confidence intervals for multinomial
       proportions," Technometrics, Vol. 7, No. 2, 1965, pp. 247-254.

.. [3] Sison, Cristina P., and Joseph Glaz, "Simultaneous Confidence
       Intervals and Sample Size Determination for Multinomial Proportions,"
       Journal of the American Statistical Association, Vol. 90, No. 429,
       1995, pp. 366-369.

.. [4] May, Warren L., and William D. Johnson, "A SAS® macro for
       constructing simultaneous confidence intervals for multinomial
       proportions," Computer methods and programs in Biomedicine, Vol. 53,
       No. 3, 1997, pp. 153-162.

.. [5] May, Warren L., and William D. Johnson, "Constructing two-sided
       simultaneous confidence intervals for multinomial proportions for
       small counts in a large number of cells," Journal of Statistical
       Software, Vol. 5, No. 6, 2000, pp. 1-24.
multinomial_proportions_confint
python
statsmodels/statsmodels
statsmodels/stats/proportion.py
https://github.com/statsmodels/statsmodels/blob/master/statsmodels/stats/proportion.py
BSD-3-Clause
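A usage sketch for ``multinomial_proportions_confint`` under the rule of thumb from the Notes above (at least 7 categories, all counts at or above 5); the counts are invented.

import numpy as np
from statsmodels.stats.proportion import multinomial_proportions_confint

counts = np.array([20, 34, 45, 19, 28, 33, 21])  # 7 categories, all >= 5
ci = multinomial_proportions_confint(counts, alpha=0.05,
                                     method="sison-glaz")
# ci is a (7, 2) array of [lower, upper] bounds, one row per category;
# method="goodman" would instead give per-category widths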
def samplesize_confint_proportion(proportion, half_length, alpha=0.05,
                                  method='normal'):
    """
    Find sample size to get desired confidence interval length

    Parameters
    ----------
    proportion : float in (0, 1)
        proportion or quantile
    half_length : float in (0, 1)
        desired half length of the confidence interval
    alpha : float in (0, 1)
        significance level, default 0.05,
        coverage of the two-sided interval is (approximately) ``1 - alpha``
    method : str in ['normal']
        method to use for confidence interval,
        currently only normal approximation

    Returns
    -------
    n : float
        sample size to get the desired half length of the confidence
        interval

    Notes
    -----
    this is mainly to store the formula.
    possible application: number of replications in bootstrap samples

    """
    q_ = proportion
    if method == 'normal':
        n = q_ * (1 - q_) / (half_length / stats.norm.isf(alpha / 2.))**2
    else:
        raise NotImplementedError('only "normal" is available')

    return n
Find sample size to get desired confidence interval length Parameters ---------- proportion : float in (0, 1) proportion or quantile half_length : float in (0, 1) desired half length of the confidence interval alpha : float in (0, 1) significance level, default 0.05, coverage of the two-sided interval is (approximately) ``1 - alpha`` method : str in ['normal'] method to use for confidence interval, currently only normal approximation Returns ------- n : float sample size to get the desired half length of the confidence interval Notes ----- this is mainly to store the formula. possible application: number of replications in bootstrap samples
samplesize_confint_proportion
python
statsmodels/statsmodels
statsmodels/stats/proportion.py
https://github.com/statsmodels/statsmodels/blob/master/statsmodels/stats/proportion.py
BSD-3-Clause
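The stored formula is n = p(1 - p) / (h / z)^2 with z = norm.isf(alpha / 2); a quick sketch with an invented planning value for the proportion:

import math
from statsmodels.stats.proportion import samplesize_confint_proportion

# sample size for a +/- 0.02 half length around an anticipated p of 0.3
n = samplesize_confint_proportion(0.3, 0.02)
n_obs = math.ceil(n)  # n is returned as a float, so round up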
def proportion_effectsize(prop1, prop2, method='normal'):
    """
    Effect size for a test comparing two proportions

    for use in power function

    Parameters
    ----------
    prop1, prop2 : float or array_like
        The proportion value(s).

    Returns
    -------
    es : float or ndarray
        effect size for (transformed) prop1 - prop2

    Notes
    -----
    only method='normal' is implemented to match pwr.p2.test
    see http://www.statmethods.net/stats/power.html

    Effect size for `normal` is defined as ::

        2 * (arcsin(sqrt(prop1)) - arcsin(sqrt(prop2)))

    I think other conversions to normality can be used, but I need to check.

    Examples
    --------
    >>> import statsmodels.api as sm
    >>> sm.stats.proportion_effectsize(0.5, 0.4)
    0.20135792079033088
    >>> sm.stats.proportion_effectsize([0.3, 0.4, 0.5], 0.4)
    array([-0.21015893,  0.        ,  0.20135792])
    """
    if method != 'normal':
        raise ValueError('only "normal" is implemented')

    es = 2 * (np.arcsin(np.sqrt(prop1)) - np.arcsin(np.sqrt(prop2)))
    return es
Effect size for a test comparing two proportions for use in power function Parameters ---------- prop1, prop2 : float or array_like The proportion value(s). Returns ------- es : float or ndarray effect size for (transformed) prop1 - prop2 Notes ----- only method='normal' is implemented to match pwr.p2.test see http://www.statmethods.net/stats/power.html Effect size for `normal` is defined as :: 2 * (arcsin(sqrt(prop1)) - arcsin(sqrt(prop2))) I think other conversions to normality can be used, but I need to check. Examples -------- >>> import statsmodels.api as sm >>> sm.stats.proportion_effectsize(0.5, 0.4) 0.20135792079033088 >>> sm.stats.proportion_effectsize([0.3, 0.4, 0.5], 0.4) array([-0.21015893, 0. , 0.20135792])
proportion_effectsize
python
statsmodels/statsmodels
statsmodels/stats/proportion.py
https://github.com/statsmodels/statsmodels/blob/master/statsmodels/stats/proportion.py
BSD-3-Clause
def std_prop(prop, nobs):
    """
    Standard error for the estimate of a proportion

    This is just ``np.sqrt(p * (1. - p) / nobs)``

    Parameters
    ----------
    prop : array_like
        proportion
    nobs : int, array_like
        number of observations

    Returns
    -------
    std : array_like
        standard error for a proportion of nobs independent observations
    """
    return np.sqrt(prop * (1. - prop) / nobs)
Standard error for the estimate of a proportion This is just ``np.sqrt(p * (1. - p) / nobs)`` Parameters ---------- prop : array_like proportion nobs : int, array_like number of observations Returns ------- std : array_like standard error for a proportion of nobs independent observations
std_prop
python
statsmodels/statsmodels
statsmodels/stats/proportion.py
https://github.com/statsmodels/statsmodels/blob/master/statsmodels/stats/proportion.py
BSD-3-Clause
def _power_ztost(mean_low, var_low, mean_upp, var_upp, mean_alt, var_alt,
                 alpha=0.05, discrete=True, dist='norm', nobs=None,
                 continuity=0, critval_continuity=0):
    """
    Generic statistical power function for normal based equivalence test

    This includes options to adjust the normal approximation and can use
    the binomial to evaluate the probability of the rejection region

    see power_ztost_prob for a description of the options
    """
    # TODO: refactor structure, separate norm and binom better
    if not isinstance(continuity, tuple):
        continuity = (continuity, continuity)
    crit = stats.norm.isf(alpha)
    k_low = mean_low + np.sqrt(var_low) * crit
    k_upp = mean_upp - np.sqrt(var_upp) * crit
    if discrete or dist == 'binom':
        k_low = np.ceil(k_low * nobs + 0.5 * critval_continuity)
        k_upp = np.trunc(k_upp * nobs - 0.5 * critval_continuity)
        if dist == 'norm':
            #need proportion
            k_low = (k_low) * 1. / nobs   #-1 to match PASS
            k_upp = k_upp * 1. / nobs
#    else:
#        if dist == 'binom':
#            #need counts
#            k_low *= nobs
#            k_upp *= nobs
    #print mean_low, np.sqrt(var_low), crit, var_low
    #print mean_upp, np.sqrt(var_upp), crit, var_upp
    if np.any(k_low > k_upp):   #vectorize
        import warnings
        warnings.warn("no overlap, power is zero", HypothesisTestWarning)
    std_alt = np.sqrt(var_alt)
    z_low = (k_low - mean_alt - continuity[0] * 0.5 / nobs) / std_alt
    z_upp = (k_upp - mean_alt + continuity[1] * 0.5 / nobs) / std_alt
    if dist == 'norm':
        power = stats.norm.cdf(z_upp) - stats.norm.cdf(z_low)
    elif dist == 'binom':
        power = (stats.binom.cdf(k_upp, nobs, mean_alt) -
                 stats.binom.cdf(k_low-1, nobs, mean_alt))
    return power, (k_low, k_upp, z_low, z_upp)
Generic statistical power function for normal based equivalence test This includes options to adjust the normal approximation and can use the binomial to evaluate the probability of the rejection region see power_ztost_prob for a description of the options
_power_ztost
python
statsmodels/statsmodels
statsmodels/stats/proportion.py
https://github.com/statsmodels/statsmodels/blob/master/statsmodels/stats/proportion.py
BSD-3-Clause
def binom_tost(count, nobs, low, upp):
    """
    Exact TOST test for one proportion using binomial distribution

    Parameters
    ----------
    count : {int, array_like}
        the number of successes in nobs trials.
    nobs : int
        the number of trials or observations.
    low, upp : floats
        lower and upper limit of equivalence region

    Returns
    -------
    pvalue : float
        p-value of equivalence test
    pval_low, pval_upp : floats
        p-values of lower and upper one-sided tests

    """
    # binom_test_stat only returns pval
    tt1 = binom_test(count, nobs, alternative='larger', prop=low)
    tt2 = binom_test(count, nobs, alternative='smaller', prop=upp)
    return np.maximum(tt1, tt2), tt1, tt2,
Exact TOST test for one proportion using binomial distribution Parameters ---------- count : {int, array_like} the number of successes in nobs trials. nobs : int the number of trials or observations. low, upp : floats lower and upper limit of equivalence region Returns ------- pvalue : float p-value of equivalence test pval_low, pval_upp : floats p-values of lower and upper one-sided tests
binom_tost
python
statsmodels/statsmodels
statsmodels/stats/proportion.py
https://github.com/statsmodels/statsmodels/blob/master/statsmodels/stats/proportion.py
BSD-3-Clause
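A minimal sketch of the TOST logic above; the equivalence region and counts are invented.

from statsmodels.stats.proportion import binom_tost

# equivalence region (0.4, 0.6); 52 successes in 100 trials
pval, pv_low, pv_upp = binom_tost(52, 100, 0.4, 0.6)
# conclude equivalence when pval < alpha; pval is max(pv_low, pv_upp)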
def binom_tost_reject_interval(low, upp, nobs, alpha=0.05):
    """
    Rejection region for binomial TOST

    The interval includes the end points,
    `reject` if and only if `r_low <= x <= r_upp`.

    The interval might be empty with `r_upp < r_low`.

    Parameters
    ----------
    low, upp : floats
        lower and upper limit of equivalence region
    nobs : int
        the number of trials or observations.

    Returns
    -------
    x_low, x_upp : float
        lower and upper bound of rejection region
    """
    x_low = stats.binom.isf(alpha, nobs, low) + 1
    x_upp = stats.binom.ppf(alpha, nobs, upp) - 1
    return x_low, x_upp
Rejection region for binomial TOST The interval includes the end points, `reject` if and only if `r_low <= x <= r_upp`. The interval might be empty with `r_upp < r_low`. Parameters ---------- low, upp : floats lower and upper limit of equivalence region nobs : int the number of trials or observations. Returns ------- x_low, x_upp : float lower and upper bound of rejection region
binom_tost_reject_interval
python
statsmodels/statsmodels
statsmodels/stats/proportion.py
https://github.com/statsmodels/statsmodels/blob/master/statsmodels/stats/proportion.py
BSD-3-Clause
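A short usage sketch, with invented equivalence limits:

from statsmodels.stats.proportion import binom_tost_reject_interval

x_low, x_upp = binom_tost_reject_interval(0.4, 0.6, nobs=100, alpha=0.05)
# reject non-equivalence iff x_low <= count <= x_upp;
# for small nobs the region can be empty (x_upp < x_low)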
def binom_test_reject_interval(value, nobs, alpha=0.05,
                               alternative='two-sided'):
    """
    Rejection region for binomial test for one sample proportion

    The interval includes the end points of the rejection region.

    Parameters
    ----------
    value : float
        proportion under the Null hypothesis
    nobs : int
        the number of trials or observations.

    Returns
    -------
    x_low, x_upp : int
        lower and upper bound of rejection region
    """
    if alternative in ['2s', 'two-sided']:
        alternative = '2s'  # normalize alternative name
        alpha = alpha / 2

    if alternative in ['2s', 'smaller']:
        x_low = stats.binom.ppf(alpha, nobs, value) - 1
    else:
        x_low = 0
    if alternative in ['2s', 'larger']:
        x_upp = stats.binom.isf(alpha, nobs, value) + 1
    else:
        x_upp = nobs

    return int(x_low), int(x_upp)
Rejection region for binomial test for one sample proportion The interval includes the end points of the rejection region. Parameters ---------- value : float proportion under the Null hypothesis nobs : int the number of trials or observations. Returns ------- x_low, x_upp : int lower and upper bound of rejection region
binom_test_reject_interval
python
statsmodels/statsmodels
statsmodels/stats/proportion.py
https://github.com/statsmodels/statsmodels/blob/master/statsmodels/stats/proportion.py
BSD-3-Clause
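A sketch reading the bounds off the code above (x_low is the largest count in the lower tail, x_upp the smallest in the upper tail); values are invented.

from statsmodels.stats.proportion import binom_test_reject_interval

x_low, x_upp = binom_test_reject_interval(0.5, nobs=30, alpha=0.05)
# two-sided test of p = 0.5: reject when count <= x_low or count >= x_upp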
def binom_test(count, nobs, prop=0.5, alternative='two-sided'):
    """
    Perform a test that the probability of success is p.

    This is an exact, two-sided test of the null hypothesis
    that the probability of success in a Bernoulli experiment
    is `p`.

    Parameters
    ----------
    count : {int, array_like}
        the number of successes in nobs trials.
    nobs : int
        the number of trials or observations.
    prop : float, optional
        The probability of success under the null hypothesis,
        `0 <= prop <= 1`. The default value is `prop = 0.5`
    alternative : str in ['two-sided', 'smaller', 'larger']
        alternative hypothesis, which can be two-sided or either one of the
        one-sided tests.

    Returns
    -------
    p-value : float
        The p-value of the hypothesis test

    Notes
    -----
    This uses scipy.stats.binom_test for the two-sided alternative.
    """
    if np.any(prop > 1.0) or np.any(prop < 0.0):
        raise ValueError("p must be in range [0,1]")
    if alternative in ['2s', 'two-sided']:
        try:
            pval = stats.binomtest(count, n=nobs, p=prop).pvalue
        except AttributeError:
            # Remove after min SciPy >= 1.7
            pval = stats.binom_test(count, n=nobs, p=prop)
    elif alternative in ['l', 'larger']:
        pval = stats.binom.sf(count-1, nobs, prop)
    elif alternative in ['s', 'smaller']:
        pval = stats.binom.cdf(count, nobs, prop)
    else:
        raise ValueError('alternative not recognized\n'
                         'should be two-sided, larger or smaller')
    return pval
Perform a test that the probability of success is p. This is an exact, two-sided test of the null hypothesis that the probability of success in a Bernoulli experiment is `p`. Parameters ---------- count : {int, array_like} the number of successes in nobs trials. nobs : int the number of trials or observations. prop : float, optional The probability of success under the null hypothesis, `0 <= prop <= 1`. The default value is `prop = 0.5` alternative : str in ['two-sided', 'smaller', 'larger'] alternative hypothesis, which can be two-sided or either one of the one-sided tests. Returns ------- p-value : float The p-value of the hypothesis test Notes ----- This uses scipy.stats.binom_test for the two-sided alternative.
binom_test
python
statsmodels/statsmodels
statsmodels/stats/proportion.py
https://github.com/statsmodels/statsmodels/blob/master/statsmodels/stats/proportion.py
BSD-3-Clause
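A minimal usage sketch; the counts are invented.

from statsmodels.stats.proportion import binom_test

# one-sided alternative p > 0.5; 38 successes in 60 trials
pval = binom_test(38, 60, prop=0.5, alternative="larger")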
def power_ztost_prop(low, upp, nobs, p_alt, alpha=0.05, dist='norm',
                     variance_prop=None, discrete=True, continuity=0,
                     critval_continuity=0):
    """
    Power of proportions equivalence test based on normal distribution

    Parameters
    ----------
    low, upp : floats
        lower and upper limit of equivalence region
    nobs : int
        number of observations
    p_alt : float in (0,1)
        proportion under the alternative
    alpha : float in (0,1)
        significance level of the test
    dist : str in ['norm', 'binom']
        This defines the distribution to evaluate the power of the test. The
        critical values of the TOST test are always based on the normal
        approximation, but the distribution for the power can be either the
        normal (default) or the binomial (exact) distribution.
    variance_prop : None or float in (0,1)
        If this is None, then the variances for the two one sided tests are
        based on the proportions equal to the equivalence limits.
        If variance_prop is given, then it is used to calculate the variance
        for the TOST statistics. If this is based on a sample, then the
        estimated proportion can be used.
    discrete : bool
        If true, then the critical values of the rejection region are
        converted to integers. If dist is "binom", this is automatically
        assumed. If discrete is false, then the TOST critical values are
        used as floating point numbers, and the power is calculated based
        on the rejection region that is not discretized.
    continuity : bool or float
        adjust the rejection region for the normal power probability. This
        has an effect only if ``dist='norm'``
    critval_continuity : bool or float
        If this is non-zero, then the critical values of the tost rejection
        region are adjusted before converting to integers. This affects
        both distributions, ``dist='norm'`` and ``dist='binom'``.

    Returns
    -------
    power : float
        statistical power of the equivalence test.
    (k_low, k_upp, z_low, z_upp) : tuple of floats
        critical limits in intermediate steps
        temporary return, will be changed

    Notes
    -----
    In small samples the power for the ``discrete`` version has a sawtooth
    pattern as a function of the number of observations. As a consequence,
    small changes in the number of observations or in the normal
    approximation can have a large effect on the power.

    ``continuity`` and ``critval_continuity`` are added to match some
    results of PASS, and are mainly to investigate the sensitivity of the
    ztost power to small changes in the rejection region. From my
    interpretation of the equations in the SAS manual, both are zero in SAS.

    works vectorized

    **verification:**

    The ``dist='binom'`` results match PASS,
    The ``dist='norm'`` results look reasonable, but no benchmark is
    available.

    References
    ----------
    SAS Manual: Chapter 68: The Power Procedure, Computational Resources
    PASS Chapter 110: Equivalence Tests for One Proportion.
    """
    mean_low = low
    var_low = std_prop(low, nobs)**2
    mean_upp = upp
    var_upp = std_prop(upp, nobs)**2
    mean_alt = p_alt
    var_alt = std_prop(p_alt, nobs)**2
    if variance_prop is not None:
        var_low = var_upp = std_prop(variance_prop, nobs)**2
    power = _power_ztost(mean_low, var_low, mean_upp, var_upp, mean_alt,
                         var_alt, alpha=alpha, discrete=discrete, dist=dist,
                         nobs=nobs, continuity=continuity,
                         critval_continuity=critval_continuity)
    return np.maximum(power[0], 0), power[1:]
Power of proportions equivalence test based on normal distribution

Parameters
----------
low, upp : floats
    lower and upper limit of equivalence region
nobs : int
    number of observations
p_alt : float in (0,1)
    proportion under the alternative
alpha : float in (0,1)
    significance level of the test
dist : str in ['norm', 'binom']
    This defines the distribution to evaluate the power of the test. The
    critical values of the TOST test are always based on the normal
    approximation, but the distribution for the power can be either the
    normal (default) or the binomial (exact) distribution.
variance_prop : None or float in (0,1)
    If this is None, then the variances for the two one sided tests are
    based on the proportions equal to the equivalence limits. If
    variance_prop is given, then it is used to calculate the variance for
    the TOST statistics. If this is based on a sample, then the estimated
    proportion can be used.
discrete : bool
    If true, then the critical values of the rejection region are converted
    to integers. If dist is "binom", this is automatically assumed. If
    discrete is false, then the TOST critical values are used as floating
    point numbers, and the power is calculated based on the rejection
    region that is not discretized.
continuity : bool or float
    adjust the rejection region for the normal power probability. This has
    an effect only if ``dist='norm'``
critval_continuity : bool or float
    If this is non-zero, then the critical values of the tost rejection
    region are adjusted before converting to integers. This affects both
    distributions, ``dist='norm'`` and ``dist='binom'``.

Returns
-------
power : float
    statistical power of the equivalence test.
(k_low, k_upp, z_low, z_upp) : tuple of floats
    critical limits in intermediate steps
    temporary return, will be changed

Notes
-----
In small samples the power for the ``discrete`` version has a sawtooth
pattern as a function of the number of observations. As a consequence,
small changes in the number of observations or in the normal approximation
can have a large effect on the power.

``continuity`` and ``critval_continuity`` are added to match some results
of PASS, and are mainly to investigate the sensitivity of the ztost power
to small changes in the rejection region. From my interpretation of the
equations in the SAS manual, both are zero in SAS.

works vectorized

**verification:**

The ``dist='binom'`` results match PASS,
The ``dist='norm'`` results look reasonable, but no benchmark is available.

References
----------
SAS Manual: Chapter 68: The Power Procedure, Computational Resources
PASS Chapter 110: Equivalence Tests for One Proportion.
power_ztost_prop
python
statsmodels/statsmodels
statsmodels/stats/proportion.py
https://github.com/statsmodels/statsmodels/blob/master/statsmodels/stats/proportion.py
BSD-3-Clause
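A usage sketch evaluating the power with the exact binomial distribution; the equivalence region, nobs and alternative proportion are invented.

from statsmodels.stats.proportion import power_ztost_prop

# power of the TOST with equivalence region (0.4, 0.6) when the true
# proportion is 0.5; crit holds the intermediate critical limits
power, crit = power_ztost_prop(0.4, 0.6, nobs=200, p_alt=0.5, dist="binom")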
def _table_proportion(count, nobs):
    """
    Create a k by 2 contingency table for proportion

    helper function for proportions_chisquare

    Parameters
    ----------
    count : {int, array_like}
        the number of successes in nobs trials.
    nobs : int
        the number of trials or observations.

    Returns
    -------
    table : ndarray
        (k, 2) contingency table

    Notes
    -----
    recent scipy has more elaborate contingency table functions

    """
    count = np.asarray(count)
    dt = np.promote_types(count.dtype, np.float64)
    count = np.asarray(count, dtype=dt)
    table = np.column_stack((count, nobs - count))
    expected = table.sum(0) * table.sum(1)[:, None] * 1. / table.sum()
    n_rows = table.shape[0]
    return table, expected, n_rows
Create a k by 2 contingency table for proportion helper function for proportions_chisquare Parameters ---------- count : {int, array_like} the number of successes in nobs trials. nobs : int the number of trials or observations. Returns ------- table : ndarray (k, 2) contingency table Notes ----- recent scipy has more elaborate contingency table functions
_table_proportion
python
statsmodels/statsmodels
statsmodels/stats/proportion.py
https://github.com/statsmodels/statsmodels/blob/master/statsmodels/stats/proportion.py
BSD-3-Clause
def proportions_ztest(count, nobs, value=None, alternative='two-sided',
                      prop_var=False):
    """
    Test for proportions based on normal (z) test

    Parameters
    ----------
    count : {int, array_like}
        the number of successes in nobs trials. If this is array_like, then
        the assumption is that this represents the number of successes for
        each independent sample
    nobs : {int, array_like}
        the number of trials or observations, with the same length as
        count.
    value : float, array_like or None, optional
        This is the value of the null hypothesis equal to the proportion in
        the case of a one sample test. In the case of a two-sample test, the
        null hypothesis is that prop[0] - prop[1] = value, where prop is the
        proportion in the two samples. If not provided value = 0 and the
        null is prop[0] = prop[1]
    alternative : str in ['two-sided', 'smaller', 'larger']
        The alternative hypothesis can be either two-sided or one of the
        one-sided tests, smaller means that the alternative hypothesis is
        ``prop < value`` and larger means ``prop > value``. In the two
        sample test, smaller means that the alternative hypothesis is
        ``p1 < p2`` and larger means ``p1 > p2`` where ``p1`` is the
        proportion of the first sample and ``p2`` of the second one.
    prop_var : False or float in (0, 1)
        If prop_var is false, then the variance of the proportion estimate
        is calculated based on the sample proportion. Alternatively, a
        proportion can be specified to calculate this variance. Common use
        case is to use the proportion under the Null hypothesis to specify
        the variance of the proportion estimate.

    Returns
    -------
    zstat : float
        test statistic for the z-test
    p-value : float
        p-value for the z-test

    Examples
    --------
    >>> count = 5
    >>> nobs = 83
    >>> value = .05
    >>> stat, pval = proportions_ztest(count, nobs, value)
    >>> print('{0:0.3f}'.format(pval))
    0.695

    >>> import numpy as np
    >>> from statsmodels.stats.proportion import proportions_ztest
    >>> count = np.array([5, 12])
    >>> nobs = np.array([83, 99])
    >>> stat, pval = proportions_ztest(count, nobs)
    >>> print('{0:0.3f}'.format(pval))
    0.159

    Notes
    -----
    This uses a simple normal test for proportions. It should be the same
    as running the mean z-test on the data encoded 1 for event and 0 for no
    event so that the sum corresponds to the count.

    In the one and two sample cases with two-sided alternative, this test
    produces the same p-value as ``proportions_chisquare``, since the
    chisquare is the distribution of the square of a standard normal
    distribution.
    """
    # TODO: verify that this really holds
    # TODO: add continuity correction or other improvements for small
    #       samples
    # TODO: change options similar to proportion_ztost ?
    count = np.asarray(count)
    nobs = np.asarray(nobs)

    if nobs.size == 1:
        nobs = nobs * np.ones_like(count)

    prop = count * 1. / nobs
    k_sample = np.size(prop)
    if value is None:
        if k_sample == 1:
            raise ValueError('value must be provided for a 1-sample test')
        value = 0
    if k_sample == 1:
        diff = prop - value
    elif k_sample == 2:
        diff = prop[0] - prop[1] - value
    else:
        msg = 'more than two samples are not implemented yet'
        raise NotImplementedError(msg)

    p_pooled = np.sum(count) * 1. / np.sum(nobs)

    nobs_fact = np.sum(1. / nobs)
    if prop_var:
        p_pooled = prop_var
    var_ = p_pooled * (1 - p_pooled) * nobs_fact
    std_diff = np.sqrt(var_)
    from statsmodels.stats.weightstats import _zstat_generic2
    return _zstat_generic2(diff, std_diff, alternative)
Test for proportions based on normal (z) test

Parameters
----------
count : {int, array_like}
    the number of successes in nobs trials. If this is array_like, then
    the assumption is that this represents the number of successes for
    each independent sample
nobs : {int, array_like}
    the number of trials or observations, with the same length as count.
value : float, array_like or None, optional
    This is the value of the null hypothesis equal to the proportion in the
    case of a one sample test. In the case of a two-sample test, the null
    hypothesis is that prop[0] - prop[1] = value, where prop is the
    proportion in the two samples. If not provided value = 0 and the null
    is prop[0] = prop[1]
alternative : str in ['two-sided', 'smaller', 'larger']
    The alternative hypothesis can be either two-sided or one of the
    one-sided tests, smaller means that the alternative hypothesis is
    ``prop < value`` and larger means ``prop > value``. In the two sample
    test, smaller means that the alternative hypothesis is ``p1 < p2`` and
    larger means ``p1 > p2`` where ``p1`` is the proportion of the first
    sample and ``p2`` of the second one.
prop_var : False or float in (0, 1)
    If prop_var is false, then the variance of the proportion estimate is
    calculated based on the sample proportion. Alternatively, a proportion
    can be specified to calculate this variance. Common use case is to use
    the proportion under the Null hypothesis to specify the variance of the
    proportion estimate.

Returns
-------
zstat : float
    test statistic for the z-test
p-value : float
    p-value for the z-test

Examples
--------
>>> count = 5
>>> nobs = 83
>>> value = .05
>>> stat, pval = proportions_ztest(count, nobs, value)
>>> print('{0:0.3f}'.format(pval))
0.695

>>> import numpy as np
>>> from statsmodels.stats.proportion import proportions_ztest
>>> count = np.array([5, 12])
>>> nobs = np.array([83, 99])
>>> stat, pval = proportions_ztest(count, nobs)
>>> print('{0:0.3f}'.format(pval))
0.159

Notes
-----
This uses a simple normal test for proportions. It should be the same as
running the mean z-test on the data encoded 1 for event and 0 for no event
so that the sum corresponds to the count.

In the one and two sample cases with two-sided alternative, this test
produces the same p-value as ``proportions_chisquare``, since the chisquare
is the distribution of the square of a standard normal distribution.
proportions_ztest
python
statsmodels/statsmodels
statsmodels/stats/proportion.py
https://github.com/statsmodels/statsmodels/blob/master/statsmodels/stats/proportion.py
BSD-3-Clause
def proportions_ztost(count, nobs, low, upp, prop_var='sample'):
    """
    Equivalence test based on normal distribution

    Parameters
    ----------
    count : {int, array_like}
        the number of successes in nobs trials. If this is array_like, then
        the assumption is that this represents the number of successes for
        each independent sample
    nobs : int
        the number of trials or observations, with the same length as
        count.
    low, upp : float
        equivalence interval low < prop1 - prop2 < upp
    prop_var : str or float in (0, 1)
        prop_var determines which proportion is used for the calculation
        of the standard deviation of the proportion estimate
        The available options for string are 'sample' (default), 'null'
        and 'limits'. If prop_var is a float, then it is used directly.

    Returns
    -------
    pvalue : float
        pvalue of the non-equivalence test
    t1, pv1 : tuple of floats
        test statistic and pvalue for lower threshold test
    t2, pv2 : tuple of floats
        test statistic and pvalue for upper threshold test

    Notes
    -----
    checked only for 1 sample case
    """
    if prop_var == 'limits':
        prop_var_low = low
        prop_var_upp = upp
    elif prop_var == 'sample':
        prop_var_low = prop_var_upp = False  #ztest uses sample
    elif prop_var == 'null':
        prop_var_low = prop_var_upp = 0.5 * (low + upp)
    elif np.isreal(prop_var):
        prop_var_low = prop_var_upp = prop_var

    tt1 = proportions_ztest(count, nobs, alternative='larger',
                            prop_var=prop_var_low, value=low)
    tt2 = proportions_ztest(count, nobs, alternative='smaller',
                            prop_var=prop_var_upp, value=upp)
    return np.maximum(tt1[1], tt2[1]), tt1, tt2,
Equivalence test based on normal distribution Parameters ---------- count : {int, array_like} the number of successes in nobs trials. If this is array_like, then the assumption is that this represents the number of successes for each independent sample nobs : int the number of trials or observations, with the same length as count. low, upp : float equivalence interval low < prop1 - prop2 < upp prop_var : str or float in (0, 1) prop_var determines which proportion is used for the calculation of the standard deviation of the proportion estimate The available options for string are 'sample' (default), 'null' and 'limits'. If prop_var is a float, then it is used directly. Returns ------- pvalue : float pvalue of the non-equivalence test t1, pv1 : tuple of floats test statistic and pvalue for lower threshold test t2, pv2 : tuple of floats test statistic and pvalue for upper threshold test Notes ----- checked only for 1 sample case
proportions_ztost
python
statsmodels/statsmodels
statsmodels/stats/proportion.py
https://github.com/statsmodels/statsmodels/blob/master/statsmodels/stats/proportion.py
BSD-3-Clause
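A one-sample usage sketch (the only case the docstring says is checked); the counts and limits are invented.

from statsmodels.stats.proportion import proportions_ztost

pval, t1, t2 = proportions_ztost(52, 100, 0.4, 0.6)
# pval is the maximum of the two one-sided p-values; t1 and t2 hold the
# (statistic, pvalue) pairs of the lower and upper threshold tests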
def proportions_chisquare(count, nobs, value=None):
    """
    Test for proportions based on chisquare test

    Parameters
    ----------
    count : {int, array_like}
        the number of successes in nobs trials. If this is array_like, then
        the assumption is that this represents the number of successes for
        each independent sample
    nobs : int
        the number of trials or observations, with the same length as
        count.
    value : None or float or array_like

    Returns
    -------
    chi2stat : float
        test statistic for the chisquare test
    p-value : float
        p-value for the chisquare test
    (table, expected)
        table is a (k, 2) contingency table, ``expected`` is the
        corresponding table of counts that are expected under independence
        with given margins

    Notes
    -----
    Recent version of scipy.stats have a chisquare test for independence in
    contingency tables.

    This function provides a similar interface to chisquare tests as
    ``prop.test`` in R, however without the option for Yates continuity
    correction.

    count can be the count for the number of events for a single proportion,
    or the counts for several independent proportions. If value is given,
    then all proportions are jointly tested against this value. If value is
    not given and count and nobs are not scalar, then the null hypothesis is
    that all samples have the same proportion.
    """
    nobs = np.atleast_1d(nobs)
    table, expected, n_rows = _table_proportion(count, nobs)
    if value is not None:
        expected = np.column_stack((nobs * value, nobs * (1 - value)))
        ddof = n_rows - 1
    else:
        ddof = n_rows

    #print table, expected
    chi2stat, pval = stats.chisquare(table.ravel(), expected.ravel(),
                                     ddof=ddof)
    return chi2stat, pval, (table, expected)
Test for proportions based on chisquare test Parameters ---------- count : {int, array_like} the number of successes in nobs trials. If this is array_like, then the assumption is that this represents the number of successes for each independent sample nobs : int the number of trials or observations, with the same length as count. value : None or float or array_like Returns ------- chi2stat : float test statistic for the chisquare test p-value : float p-value for the chisquare test (table, expected) table is a (k, 2) contingency table, ``expected`` is the corresponding table of counts that are expected under independence with given margins Notes ----- Recent version of scipy.stats have a chisquare test for independence in contingency tables. This function provides a similar interface to chisquare tests as ``prop.test`` in R, however without the option for Yates continuity correction. count can be the count for the number of events for a single proportion, or the counts for several independent proportions. If value is given, then all proportions are jointly tested against this value. If value is not given and count and nobs are not scalar, then the null hypothesis is that all samples have the same proportion.
proportions_chisquare
python
statsmodels/statsmodels
statsmodels/stats/proportion.py
https://github.com/statsmodels/statsmodels/blob/master/statsmodels/stats/proportion.py
BSD-3-Clause
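A two-sample usage sketch; the counts are invented.

import numpy as np
from statsmodels.stats.proportion import proportions_chisquare

count = np.array([30, 45])
nobs = np.array([100, 120])
chi2, pval, (table, expected) = proportions_chisquare(count, nobs)
# null hypothesis: both samples share the same proportion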
def proportions_chisquare_allpairs(count, nobs, multitest_method='hs'):
    """
    Chisquare test of proportions for all pairs of k samples

    Performs a chisquare test for proportions for all pairwise comparisons.
    The alternative is two-sided

    Parameters
    ----------
    count : {int, array_like}
        the number of successes in nobs trials.
    nobs : int
        the number of trials or observations.
    multitest_method : str
        This chooses the method for the multiple testing p-value correction,
        that is used as default in the results. It can be any method that is
        available in  ``multipletesting``. The default is Holm-Sidak 'hs'.

    Returns
    -------
    result : AllPairsResults instance
        The returned results instance has several statistics, such as
        p-values, attached, and additional methods for using a non-default
        ``multitest_method``.

    Notes
    -----
    Yates continuity correction is not available.
    """
    #all_pairs = lmap(list, lzip(*np.triu_indices(4, 1)))
    all_pairs = lzip(*np.triu_indices(len(count), 1))
    pvals = [proportions_chisquare(count[list(pair)], nobs[list(pair)])[1]
             for pair in all_pairs]
    return AllPairsResults(pvals, all_pairs,
                           multitest_method=multitest_method)
Chisquare test of proportions for all pairs of k samples Performs a chisquare test for proportions for all pairwise comparisons. The alternative is two-sided Parameters ---------- count : {int, array_like} the number of successes in nobs trials. nobs : int the number of trials or observations. multitest_method : str This chooses the method for the multiple testing p-value correction, that is used as default in the results. It can be any method that is available in ``multipletesting``. The default is Holm-Sidak 'hs'. Returns ------- result : AllPairsResults instance The returned results instance has several statistics, such as p-values, attached, and additional methods for using a non-default ``multitest_method``. Notes ----- Yates continuity correction is not available.
proportions_chisquare_allpairs
python
statsmodels/statsmodels
statsmodels/stats/proportion.py
https://github.com/statsmodels/statsmodels/blob/master/statsmodels/stats/proportion.py
BSD-3-Clause
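A usage sketch with three invented samples; the ``pval_corrected`` accessor on the returned AllPairsResults instance is taken from current statsmodels.

import numpy as np
from statsmodels.stats.proportion import proportions_chisquare_allpairs

count = np.array([30, 45, 22])
nobs = np.array([100, 120, 110])
res = proportions_chisquare_allpairs(count, nobs, multitest_method="hs")
adj_pvals = res.pval_corrected()  # multiplicity-adjusted p-values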
def proportions_chisquare_pairscontrol(count, nobs, value=None,
                                       multitest_method='hs',
                                       alternative='two-sided'):
    """
    Chisquare test of proportions for pairs of k samples compared to control

    Performs a chisquare test for proportions for pairwise comparisons with
    a control (Dunnett's test). The control is assumed to be the first
    element of ``count`` and ``nobs``. The alternative is two-sided, larger
    or smaller.

    Parameters
    ----------
    count : {int, array_like}
        the number of successes in nobs trials.
    nobs : int
        the number of trials or observations.
    multitest_method : str
        This chooses the method for the multiple testing p-value correction,
        that is used as default in the results. It can be any method that is
        available in  ``multipletesting``. The default is Holm-Sidak 'hs'.
    alternative : str in ['two-sided', 'smaller', 'larger']
        alternative hypothesis, which can be two-sided or either one of the
        one-sided tests.

    Returns
    -------
    result : AllPairsResults instance
        The returned results instance has several statistics, such as
        p-values, attached, and additional methods for using a non-default
        ``multitest_method``.

    Notes
    -----
    Yates continuity correction is not available.

    ``value`` and ``alternative`` options are not yet implemented.
    """
    if (value is not None) or (alternative not in ['two-sided', '2s']):
        raise NotImplementedError

    #all_pairs = lmap(list, lzip(*np.triu_indices(4, 1)))
    all_pairs = [(0, k) for k in range(1, len(count))]
    pvals = [proportions_chisquare(count[list(pair)], nobs[list(pair)],
                                   #alternative=alternative)[1]
                                   )[1]
             for pair in all_pairs]
    return AllPairsResults(pvals, all_pairs,
                           multitest_method=multitest_method)
Chisquare test of proportions for pairs of k samples compared to control

Performs a chisquare test for proportions for pairwise comparisons with a
control (Dunnett's test). The control is assumed to be the first element
of ``count`` and ``nobs``. The alternative is two-sided, larger or smaller.

Parameters
----------
count : {int, array_like}
    the number of successes in nobs trials.
nobs : int
    the number of trials or observations.
multitest_method : str
    This chooses the method for the multiple testing p-value correction,
    that is used as default in the results. It can be any method that is
    available in ``multipletesting``. The default is Holm-Sidak 'hs'.
alternative : str in ['two-sided', 'smaller', 'larger']
    alternative hypothesis, which can be two-sided or either one of the
    one-sided tests.

Returns
-------
result : AllPairsResults instance
    The returned results instance has several statistics, such as p-values,
    attached, and additional methods for using a non-default
    ``multitest_method``.

Notes
-----
Yates continuity correction is not available.

``value`` and ``alternative`` options are not yet implemented.
proportions_chisquare_pairscontrol
python
statsmodels/statsmodels
statsmodels/stats/proportion.py
https://github.com/statsmodels/statsmodels/blob/master/statsmodels/stats/proportion.py
BSD-3-Clause
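A usage sketch mirroring the all-pairs example, with the first invented sample acting as the control group.

import numpy as np
from statsmodels.stats.proportion import proportions_chisquare_pairscontrol

count = np.array([30, 45, 22])   # the first entry is the control group
nobs = np.array([100, 120, 110])
res = proportions_chisquare_pairscontrol(count, nobs)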
def confint_proportions_2indep(count1, nobs1, count2, nobs2, method=None,
                               compare='diff', alpha=0.05, correction=True):
    """
    Confidence intervals for comparing two independent proportions.

    This assumes that we have two independent binomial samples.

    Parameters
    ----------
    count1, nobs1 : float
        Count and sample size for first sample.
    count2, nobs2 : float
        Count and sample size for the second sample.
    method : str
        Method for computing confidence interval. If method is None, then a
        default method is used. The default might change as more methods
        are added.

        diff:
         - 'wald',
         - 'agresti-caffo'
         - 'newcomb' (default)
         - 'score'

        ratio:
         - 'log'
         - 'log-adjusted' (default)
         - 'score'

        odds-ratio:
         - 'logit'
         - 'logit-adjusted' (default)
         - 'score'

    compare : string in ['diff', 'ratio', 'odds-ratio']
        If compare is diff, then the confidence interval is for
        diff = p1 - p2. If compare is ratio, then the confidence interval
        is for the risk ratio defined by ratio = p1 / p2. If compare is
        odds-ratio, then the confidence interval is for the odds-ratio
        defined by or = p1 / (1 - p1) / (p2 / (1 - p2)).
    alpha : float
        Significance level for the confidence interval, default is 0.05.
        The nominal coverage probability is 1 - alpha.

    Returns
    -------
    low, upp

    See Also
    --------
    test_proportions_2indep
    tost_proportions_2indep

    Notes
    -----
    Status: experimental, API and defaults might still change.
        More ``methods`` will be added.

    References
    ----------
    .. [1] Fagerland, Morten W., Stian Lydersen, and Petter Laake. 2015.
       "Recommended Confidence Intervals for Two Independent Binomial
       Proportions." Statistical Methods in Medical Research 24 (2):
       224–54. https://doi.org/10.1177/0962280211415469.

    .. [2] Koopman, P. A. R. 1984. "Confidence Intervals for the Ratio of
       Two Binomial Proportions." Biometrics 40 (2): 513–17.
       https://doi.org/10.2307/2531405.

    .. [3] Miettinen, Olli, and Markku Nurminen. "Comparative analysis of
       two rates." Statistics in medicine 4, no. 2 (1985): 213-226.

    .. [4] Newcombe, Robert G. 1998. "Interval Estimation for the Difference
       between Independent Proportions: Comparison of Eleven Methods."
       Statistics in Medicine 17 (8): 873–90.
       https://doi.org/10.1002/(SICI)1097-0258(19980430)17:8<873::AID-
       SIM779>3.0.CO;2-I.

    .. [5] Newcombe, Robert G., and Markku M. Nurminen. 2011. "In Defence
       of Score Intervals for Proportions and Their Differences."
       Communications in Statistics - Theory and Methods 40 (7): 1271–82.
       https://doi.org/10.1080/03610920903576580.
    """
    method_default = {'diff': 'newcomb',
                      'ratio': 'log-adjusted',
                      'odds-ratio': 'logit-adjusted'}
    # normalize compare name
    if compare.lower() == 'or':
        compare = 'odds-ratio'
    if method is None:
        method = method_default[compare]

    method = method.lower()
    if method.startswith('agr'):
        method = 'agresti-caffo'

    p1 = count1 / nobs1
    p2 = count2 / nobs2
    diff = p1 - p2
    addone = 1 if method == 'agresti-caffo' else 0

    if compare == 'diff':
        if method in ['wald', 'agresti-caffo']:
            count1_, nobs1_ = count1 + addone, nobs1 + 2 * addone
            count2_, nobs2_ = count2 + addone, nobs2 + 2 * addone
            p1_ = count1_ / nobs1_
            p2_ = count2_ / nobs2_
            diff_ = p1_ - p2_
            var = p1_ * (1 - p1_) / nobs1_ + p2_ * (1 - p2_) / nobs2_
            z = stats.norm.isf(alpha / 2)
            d_wald = z * np.sqrt(var)
            low = diff_ - d_wald
            upp = diff_ + d_wald

        elif method.startswith('newcomb'):
            low1, upp1 = proportion_confint(count1, nobs1,
                                            method='wilson', alpha=alpha)
            low2, upp2 = proportion_confint(count2, nobs2,
                                            method='wilson', alpha=alpha)
            d_low = np.sqrt((p1 - low1)**2 + (upp2 - p2)**2)
            d_upp = np.sqrt((p2 - low2)**2 + (upp1 - p1)**2)
            low = diff - d_low
            upp = diff + d_upp

        elif method == "score":
            low, upp = _score_confint_inversion(count1, nobs1,
                                                count2, nobs2,
                                                compare=compare, alpha=alpha,
                                                correction=correction)
        else:
            raise ValueError('method not recognized')

    elif compare == 'ratio':
        # ratio = p1 / p2
        if method in ['log', 'log-adjusted']:
            addhalf = 0.5 if method == 'log-adjusted' else 0
            count1_, nobs1_ = count1 + addhalf, nobs1 + addhalf
            count2_, nobs2_ = count2 + addhalf, nobs2 + addhalf
            p1_ = count1_ / nobs1_
            p2_ = count2_ / nobs2_
            ratio_ = p1_ / p2_
            var = 1 / count1_ - 1 / nobs1_ + 1 / count2_ - 1 / nobs2_
            z = stats.norm.isf(alpha / 2)
            d_log = z * np.sqrt(var)
            low = np.exp(np.log(ratio_) - d_log)
            upp = np.exp(np.log(ratio_) + d_log)

        elif method == 'score':
            res = _confint_riskratio_koopman(count1, nobs1, count2, nobs2,
                                             alpha=alpha,
                                             correction=correction)
            low, upp = res.confint

        else:
            raise ValueError('method not recognized')

    elif compare == 'odds-ratio':
        # odds_ratio = p1 / (1 - p1) / p2 * (1 - p2)
        if method in ['logit', 'logit-adjusted', 'logit-smoothed']:
            if method in ['logit-smoothed']:
                adjusted = _shrink_prob(count1, nobs1, count2, nobs2,
                                        shrink_factor=2,
                                        return_corr=False)[0]
                count1_, nobs1_, count2_, nobs2_ = adjusted

            else:
                addhalf = 0.5 if method == 'logit-adjusted' else 0
                count1_, nobs1_ = count1 + addhalf, nobs1 + 2 * addhalf
                count2_, nobs2_ = count2 + addhalf, nobs2 + 2 * addhalf
            p1_ = count1_ / nobs1_
            p2_ = count2_ / nobs2_
            odds_ratio_ = p1_ / (1 - p1_) / p2_ * (1 - p2_)
            var = (1 / count1_ + 1 / (nobs1_ - count1_) +
                   1 / count2_ + 1 / (nobs2_ - count2_))
            z = stats.norm.isf(alpha / 2)
            d_log = z * np.sqrt(var)
            low = np.exp(np.log(odds_ratio_) - d_log)
            upp = np.exp(np.log(odds_ratio_) + d_log)

        elif method == "score":
            low, upp = _score_confint_inversion(count1, nobs1,
                                                count2, nobs2,
                                                compare=compare, alpha=alpha,
                                                correction=correction)

        else:
            raise ValueError('method not recognized')

    else:
        raise ValueError('compare not recognized')

    return low, upp
Confidence intervals for comparing two independent proportions.

This assumes that we have two independent binomial samples.

Parameters
----------
count1, nobs1 : float
    Count and sample size for first sample.
count2, nobs2 : float
    Count and sample size for the second sample.
method : str
    Method for computing confidence interval. If method is None, then a
    default method is used. The default might change as more methods are
    added.

    diff:
     - 'wald',
     - 'agresti-caffo'
     - 'newcomb' (default)
     - 'score'

    ratio:
     - 'log'
     - 'log-adjusted' (default)
     - 'score'

    odds-ratio:
     - 'logit'
     - 'logit-adjusted' (default)
     - 'score'

compare : string in ['diff', 'ratio', 'odds-ratio']
    If compare is diff, then the confidence interval is for
    diff = p1 - p2. If compare is ratio, then the confidence interval is
    for the risk ratio defined by ratio = p1 / p2. If compare is
    odds-ratio, then the confidence interval is for the odds-ratio defined
    by or = p1 / (1 - p1) / (p2 / (1 - p2)).
alpha : float
    Significance level for the confidence interval, default is 0.05.
    The nominal coverage probability is 1 - alpha.

Returns
-------
low, upp

See Also
--------
test_proportions_2indep
tost_proportions_2indep

Notes
-----
Status: experimental, API and defaults might still change.
    More ``methods`` will be added.

References
----------
.. [1] Fagerland, Morten W., Stian Lydersen, and Petter Laake. 2015.
   "Recommended Confidence Intervals for Two Independent Binomial
   Proportions." Statistical Methods in Medical Research 24 (2): 224–54.
   https://doi.org/10.1177/0962280211415469.

.. [2] Koopman, P. A. R. 1984. "Confidence Intervals for the Ratio of Two
   Binomial Proportions." Biometrics 40 (2): 513–17.
   https://doi.org/10.2307/2531405.

.. [3] Miettinen, Olli, and Markku Nurminen. "Comparative analysis of two
   rates." Statistics in medicine 4, no. 2 (1985): 213-226.

.. [4] Newcombe, Robert G. 1998. "Interval Estimation for the Difference
   between Independent Proportions: Comparison of Eleven Methods."
   Statistics in Medicine 17 (8): 873–90.
   https://doi.org/10.1002/(SICI)1097-0258(19980430)17:8<873::AID-
   SIM779>3.0.CO;2-I.

.. [5] Newcombe, Robert G., and Markku M. Nurminen. 2011. "In Defence of
   Score Intervals for Proportions and Their Differences." Communications
   in Statistics - Theory and Methods 40 (7): 1271–82.
   https://doi.org/10.1080/03610920903576580.
confint_proportions_2indep
python
statsmodels/statsmodels
statsmodels/stats/proportion.py
https://github.com/statsmodels/statsmodels/blob/master/statsmodels/stats/proportion.py
BSD-3-Clause
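A minimal usage sketch for confint_proportions_2indep; the counts and sample sizes below are made-up illustration values, not from the source.

from statsmodels.stats.proportion import confint_proportions_2indep

# Newcombe interval for the risk difference p1 - p2 (the default method)
low, upp = confint_proportions_2indep(7, 34, 1, 34, compare='diff')

# log-adjusted interval for the risk ratio p1 / p2
low, upp = confint_proportions_2indep(7, 34, 1, 34, compare='ratio')

# score interval obtained by inverting the score test
low, upp = confint_proportions_2indep(7, 34, 1, 34, compare='diff',
                                      method='score')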
def _shrink_prob(count1, nobs1, count2, nobs2, shrink_factor=2, return_corr=True): """ Shrink observed counts towards independence Helper function for 'logit-smoothed' inference for the odds-ratio of two independent proportions. Parameters ---------- count1, nobs1 : float or int count and sample size for first sample count2, nobs2 : float or int count and sample size for the second sample shrink_factor : float This corresponds to the number of observations that are added in total proportional to the probabilities under independence. return_corr : bool If true, then only the correction term is returned If false, then the corrected counts, i.e. original counts plus correction term, are returned. Returns ------- count1_corr, nobs1_corr, count2_corr, nobs2_corr : float correction or corrected counts prob_indep : TODO/Warning : this will change most likely probabilities under independence, only returned if return_corr is false. """ vectorized = any(np.size(i) > 1 for i in [count1, nobs1, count2, nobs2]) if vectorized: raise ValueError("function is not vectorized") nobs_col = np.array([count1 + count2, nobs1 - count1 + nobs2 - count2]) nobs_row = np.array([nobs1, nobs2]) nobs = nobs1 + nobs2 prob_indep = (nobs_col * nobs_row[:, None]) / nobs**2 corr = shrink_factor * prob_indep if return_corr: return (corr[0, 0], corr[0].sum(), corr[1, 0], corr[1].sum()) else: return (count1 + corr[0, 0], nobs1 + corr[0].sum(), count2 + corr[1, 0], nobs2 + corr[1].sum()), prob_indep
Shrink observed counts towards independence

Helper function for 'logit-smoothed' inference for the odds-ratio of two
independent proportions.

Parameters
----------
count1, nobs1 : float or int
    count and sample size for first sample
count2, nobs2 : float or int
    count and sample size for the second sample
shrink_factor : float
    This corresponds to the number of observations that are added in
    total, proportional to the probabilities under independence.
return_corr : bool
    If true, then only the correction terms are returned.
    If false, then the corrected counts, i.e. original counts plus
    correction terms, are returned.

Returns
-------
count1_corr, nobs1_corr, count2_corr, nobs2_corr : float
    correction terms or corrected counts
prob_indep :
    probabilities under independence, only returned if return_corr is
    false. (TODO/Warning: this return value will most likely change.)
_shrink_prob
python
statsmodels/statsmodels
statsmodels/stats/proportion.py
https://github.com/statsmodels/statsmodels/blob/master/statsmodels/stats/proportion.py
BSD-3-Clause
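A quick sketch of the private helper above; the import path is the module shown in the record, and the counts are illustrative.

from statsmodels.stats.proportion import _shrink_prob

# correction terms only: two observations spread according to the
# cell probabilities under independence (shrink_factor=2)
corr = _shrink_prob(10, 50, 20, 60, shrink_factor=2, return_corr=True)

# corrected counts plus the independence probabilities
(count1_, nobs1_, count2_, nobs2_), prob_indep = _shrink_prob(
    10, 50, 20, 60, shrink_factor=2, return_corr=False)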
def score_test_proportions_2indep(count1, nobs1, count2, nobs2, value=None,
                                  compare='diff', alternative='two-sided',
                                  correction=True, return_results=True):
    """
    Score test for two independent proportions

    This uses the constrained estimate of the proportions to compute
    the variance under the Null hypothesis.

    Parameters
    ----------
    count1, nobs1 :
        count and sample size for first sample
    count2, nobs2 :
        count and sample size for the second sample
    value : float
        diff, ratio or odds-ratio under the null hypothesis. If value is
        None, then equality of proportions under the Null is assumed,
        i.e. value=0 for 'diff' or value=1 for either rate or odds-ratio.
    compare : string in ['diff', 'ratio', 'odds-ratio']
        If compare is diff, then the confidence interval is for
        diff = p1 - p2. If compare is ratio, then the confidence interval
        is for the risk ratio defined by ratio = p1 / p2. If compare is
        odds-ratio, then the confidence interval is for the odds-ratio
        defined by or = p1 / (1 - p1) / (p2 / (1 - p2)).
    alternative : string in ['two-sided', 'smaller', 'larger']
        alternative hypothesis, which can be two-sided or either one of
        the one-sided tests.
    correction : bool
        If correction is True (default), then the Miettinen and Nurminen
        small sample correction to the variance nobs / (nobs - 1) is used.
    return_results : bool
        If true, then a results instance with extra information is
        returned, otherwise a tuple with statistic and pvalue is returned.

    Returns
    -------
    results : results instance or tuple
        If return_results is True, then a results instance with the
        information in attributes is returned.
        If return_results is False, then only ``statistic`` and
        ``pvalue`` are returned.

        statistic : float
            test statistic asymptotically normal distributed N(0, 1)
        pvalue : float
            p-value based on normal distribution
        other attributes :
            additional information about the hypothesis test

    Notes
    -----
    Status: experimental, the type or extra information in the return
    might change.
    """
    value_default = 0 if compare == 'diff' else 1
    if value is None:
        # TODO: odds ratio does not work if value=1
        value = value_default

    nobs = nobs1 + nobs2
    count = count1 + count2
    p1 = count1 / nobs1
    p2 = count2 / nobs2
    if value == value_default:
        # use pooled estimator if equality test
        # shortcut, but required for odds ratio
        prop0 = prop1 = count / nobs
    # this uses index 0 from Miettinen Nurminen 1985
    count0, nobs0 = count2, nobs2
    p0 = p2

    if compare == 'diff':
        diff = value  # hypothesis value
        if diff != 0:
            tmp3 = nobs
            tmp2 = (nobs1 + 2 * nobs0) * diff - nobs - count
            tmp1 = (count0 * diff - nobs - 2 * count0) * diff + count
            tmp0 = count0 * diff * (1 - diff)
            q = ((tmp2 / (3 * tmp3))**3 - tmp1 * tmp2 / (6 * tmp3**2) +
                 tmp0 / (2 * tmp3))
            p = np.sign(q) * np.sqrt((tmp2 / (3 * tmp3))**2 -
                                     tmp1 / (3 * tmp3))
            a = (np.pi + np.arccos(q / p**3)) / 3
            prop0 = 2 * p * np.cos(a) - tmp2 / (3 * tmp3)
            prop1 = prop0 + diff

        var = prop1 * (1 - prop1) / nobs1 + prop0 * (1 - prop0) / nobs0
        if correction:
            var *= nobs / (nobs - 1)
        diff_stat = (p1 - p0 - diff)

    elif compare == 'ratio':
        # risk ratio
        ratio = value
        if ratio != 1:
            a = nobs * ratio
            b = -(nobs1 * ratio + count1 + nobs2 + count0 * ratio)
            c = count
            prop0 = (-b - np.sqrt(b**2 - 4 * a * c)) / (2 * a)
            prop1 = prop0 * ratio

        var = (prop1 * (1 - prop1) / nobs1 +
               ratio**2 * prop0 * (1 - prop0) / nobs0)
        if correction:
            var *= nobs / (nobs - 1)
        # NCSS looks incorrect for var, but it is what should be reported
        # diff_stat = (p1 / p0 - ratio)   # NCSS/PASS
        diff_stat = (p1 - ratio * p0)  # Miettinen Nurminen

    elif compare in ['or', 'odds-ratio']:
        # odds ratio
        oratio = value
        if oratio != 1:
            # Note the constraint estimator does not handle odds-ratio = 1
            a = nobs0 * (oratio - 1)
            b = nobs1 * oratio + nobs0 - count * (oratio - 1)
            c = -count
            prop0 = (-b + np.sqrt(b**2 - 4 * a * c)) / (2 * a)
            prop1 = prop0 * oratio / (1 + prop0 * (oratio - 1))

        # try to avoid 0 and 1 proportions,
        # those raise Zero Division Runtime Warnings
        eps = 1e-10
        prop0 = np.clip(prop0, eps, 1 - eps)
        prop1 = np.clip(prop1, eps, 1 - eps)

        var = (1 / (prop1 * (1 - prop1) * nobs1) +
               1 / (prop0 * (1 - prop0) * nobs0))
        if correction:
            var *= nobs / (nobs - 1)

        diff_stat = ((p1 - prop1) / (prop1 * (1 - prop1)) -
                     (p0 - prop0) / (prop0 * (1 - prop0)))

    statistic, pvalue = _zstat_generic2(diff_stat, np.sqrt(var),
                                        alternative=alternative)

    if return_results:
        res = HolderTuple(statistic=statistic,
                          pvalue=pvalue,
                          compare=compare,
                          method='score',
                          variance=var,
                          alternative=alternative,
                          prop1_null=prop1,
                          prop2_null=prop0,
                          )
        return res
    else:
        return statistic, pvalue
Score test for two independent proportions

This uses the constrained estimate of the proportions to compute
the variance under the Null hypothesis.

Parameters
----------
count1, nobs1 :
    count and sample size for first sample
count2, nobs2 :
    count and sample size for the second sample
value : float
    diff, ratio or odds-ratio under the null hypothesis. If value is
    None, then equality of proportions under the Null is assumed,
    i.e. value=0 for 'diff' or value=1 for either rate or odds-ratio.
compare : string in ['diff', 'ratio', 'odds-ratio']
    If compare is diff, then the confidence interval is for
    diff = p1 - p2. If compare is ratio, then the confidence interval is
    for the risk ratio defined by ratio = p1 / p2. If compare is
    odds-ratio, then the confidence interval is for the odds-ratio
    defined by or = p1 / (1 - p1) / (p2 / (1 - p2)).
alternative : string in ['two-sided', 'smaller', 'larger']
    alternative hypothesis, which can be two-sided or either one of the
    one-sided tests.
correction : bool
    If correction is True (default), then the Miettinen and Nurminen
    small sample correction to the variance nobs / (nobs - 1) is used.
return_results : bool
    If true, then a results instance with extra information is returned,
    otherwise a tuple with statistic and pvalue is returned.

Returns
-------
results : results instance or tuple
    If return_results is True, then a results instance with the
    information in attributes is returned.
    If return_results is False, then only ``statistic`` and ``pvalue``
    are returned.

    statistic : float
        test statistic asymptotically normal distributed N(0, 1)
    pvalue : float
        p-value based on normal distribution
    other attributes :
        additional information about the hypothesis test

Notes
-----
Status: experimental, the type or extra information in the return
might change.
score_test_proportions_2indep
python
statsmodels/statsmodels
statsmodels/stats/proportion.py
https://github.com/statsmodels/statsmodels/blob/master/statsmodels/stats/proportion.py
BSD-3-Clause
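A short usage sketch (illustrative counts):

from statsmodels.stats.proportion import score_test_proportions_2indep

# score test of equal proportions (H0: p1 - p2 = 0)
res = score_test_proportions_2indep(7, 34, 1, 34, compare='diff')
print(res.statistic, res.pvalue)

# one-sided score test of a non-zero null, H0: p1 - p2 = 0.1
res = score_test_proportions_2indep(7, 34, 1, 34, value=0.1,
                                    alternative='larger')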
def test_proportions_2indep(count1, nobs1, count2, nobs2, value=None, method=None, compare='diff', alternative='two-sided', correction=True, return_results=True): """ Hypothesis test for comparing two independent proportions This assumes that we have two independent binomial samples. The Null and alternative hypothesis are for compare = 'diff' - H0: prop1 - prop2 - value = 0 - H1: prop1 - prop2 - value != 0 if alternative = 'two-sided' - H1: prop1 - prop2 - value > 0 if alternative = 'larger' - H1: prop1 - prop2 - value < 0 if alternative = 'smaller' for compare = 'ratio' - H0: prop1 / prop2 - value = 0 - H1: prop1 / prop2 - value != 0 if alternative = 'two-sided' - H1: prop1 / prop2 - value > 0 if alternative = 'larger' - H1: prop1 / prop2 - value < 0 if alternative = 'smaller' for compare = 'odds-ratio' - H0: or - value = 0 - H1: or - value != 0 if alternative = 'two-sided' - H1: or - value > 0 if alternative = 'larger' - H1: or - value < 0 if alternative = 'smaller' where odds-ratio or = prop1 / (1 - prop1) / (prop2 / (1 - prop2)) Parameters ---------- count1 : int Count for first sample. nobs1 : int Sample size for first sample. count2 : int Count for the second sample. nobs2 : int Sample size for the second sample. value : float Value of the difference, risk ratio or odds ratio of 2 independent proportions under the null hypothesis. Default is equal proportions, 0 for diff and 1 for risk-ratio and for odds-ratio. method : string Method for computing the hypothesis test. If method is None, then a default method is used. The default might change as more methods are added. diff: - 'wald', - 'agresti-caffo' - 'score' if correction is True, then this uses the degrees of freedom correction ``nobs / (nobs - 1)`` as in Miettinen Nurminen 1985 ratio: - 'log': wald test using log transformation - 'log-adjusted': wald test using log transformation, adds 0.5 to counts - 'score': if correction is True, then this uses the degrees of freedom correction ``nobs / (nobs - 1)`` as in Miettinen Nurminen 1985 odds-ratio: - 'logit': wald test using logit transformation - 'logit-adjusted': wald test using logit transformation, adds 0.5 to counts - 'logit-smoothed': wald test using logit transformation, biases cell counts towards independence by adding two observations in total. - 'score' if correction is True, then this uses the degrees of freedom correction ``nobs / (nobs - 1)`` as in Miettinen Nurminen 1985 compare : {'diff', 'ratio' 'odds-ratio'} If compare is `diff`, then the hypothesis test is for the risk difference diff = p1 - p2. If compare is `ratio`, then the hypothesis test is for the risk ratio defined by ratio = p1 / p2. If compare is `odds-ratio`, then the hypothesis test is for the odds-ratio defined by or = p1 / (1 - p1) / (p2 / (1 - p2) alternative : {'two-sided', 'smaller', 'larger'} alternative hypothesis, which can be two-sided or either one of the one-sided tests. correction : bool If correction is True (default), then the Miettinen and Nurminen small sample correction to the variance nobs / (nobs - 1) is used. Applies only if method='score'. return_results : bool If true, then a results instance with extra information is returned, otherwise a tuple with statistic and pvalue is returned. Returns ------- results : results instance or tuple If return_results is True, then a results instance with the information in attributes is returned. If return_results is False, then only ``statistic`` and ``pvalue`` are returned. 
    statistic : float
        test statistic asymptotically normal distributed N(0, 1)
    pvalue : float
        p-value based on normal distribution
    other attributes :
        additional information about the hypothesis test

See Also
--------
tost_proportions_2indep
confint_proportions_2indep

Notes
-----
Status: experimental, API and defaults might still change.
More ``methods`` will be added.

The current default methods are

- 'diff': 'agresti-caffo',
- 'ratio': 'log-adjusted',
- 'odds-ratio': 'logit-adjusted'
    """
    method_default = {'diff': 'agresti-caffo',
                      'ratio': 'log-adjusted',
                      'odds-ratio': 'logit-adjusted'}
    # normalize compare name
    if compare.lower() == 'or':
        compare = 'odds-ratio'
    if method is None:
        method = method_default[compare]
    method = method.lower()
    if method.startswith('agr'):
        method = 'agresti-caffo'

    if value is None:
        # TODO: odds ratio does not work if value=1 for score test
        value = 0 if compare == 'diff' else 1

    count1, nobs1, count2, nobs2 = map(np.asarray,
                                       [count1, nobs1, count2, nobs2])

    p1 = count1 / nobs1
    p2 = count2 / nobs2
    diff = p1 - p2
    ratio = p1 / p2
    odds_ratio = p1 / (1 - p1) / p2 * (1 - p2)
    res = None

    if compare == 'diff':
        if method in ['wald', 'agresti-caffo']:
            addone = 1 if method == 'agresti-caffo' else 0
            count1_, nobs1_ = count1 + addone, nobs1 + 2 * addone
            count2_, nobs2_ = count2 + addone, nobs2 + 2 * addone
            p1_ = count1_ / nobs1_
            p2_ = count2_ / nobs2_
            diff_stat = p1_ - p2_ - value
            var = p1_ * (1 - p1_) / nobs1_ + p2_ * (1 - p2_) / nobs2_
            statistic = diff_stat / np.sqrt(var)
            distr = 'normal'
        elif method.startswith('newcomb'):
            msg = 'newcomb not available for hypothesis test'
            raise NotImplementedError(msg)
        elif method == 'score':
            # Note: the score part is the same call for all compare
            res = score_test_proportions_2indep(
                count1, nobs1, count2, nobs2, value=value, compare=compare,
                alternative=alternative, correction=correction,
                return_results=return_results)
            if return_results is False:
                statistic, pvalue = res[:2]
            distr = 'normal'
            # TODO/Note: score_test_proportions_2indep returns the
            # statistic and not diff_stat
            diff_stat = None
        else:
            raise ValueError('method not recognized')

    elif compare == 'ratio':
        if method in ['log', 'log-adjusted']:
            addhalf = 0.5 if method == 'log-adjusted' else 0
            count1_, nobs1_ = count1 + addhalf, nobs1 + addhalf
            count2_, nobs2_ = count2 + addhalf, nobs2 + addhalf
            p1_ = count1_ / nobs1_
            p2_ = count2_ / nobs2_
            ratio_ = p1_ / p2_
            var = (1 / count1_) - 1 / nobs1_ + 1 / count2_ - 1 / nobs2_
            diff_stat = np.log(ratio_) - np.log(value)
            statistic = diff_stat / np.sqrt(var)
            distr = 'normal'
        elif method == 'score':
            res = score_test_proportions_2indep(
                count1, nobs1, count2, nobs2, value=value, compare=compare,
                alternative=alternative, correction=correction,
                return_results=return_results)
            if return_results is False:
                statistic, pvalue = res[:2]
            distr = 'normal'
            diff_stat = None
        else:
            raise ValueError('method not recognized')

    elif compare == "odds-ratio":
        if method in ['logit', 'logit-adjusted', 'logit-smoothed']:
            if method in ['logit-smoothed']:
                adjusted = _shrink_prob(count1, nobs1, count2, nobs2,
                                        shrink_factor=2,
                                        return_corr=False)[0]
                count1_, nobs1_, count2_, nobs2_ = adjusted
            else:
                addhalf = 0.5 if method == 'logit-adjusted' else 0
                count1_, nobs1_ = count1 + addhalf, nobs1 + 2 * addhalf
                count2_, nobs2_ = count2 + addhalf, nobs2 + 2 * addhalf
            p1_ = count1_ / nobs1_
            p2_ = count2_ / nobs2_
            odds_ratio_ = p1_ / (1 - p1_) / p2_ * (1 - p2_)
            var = (1 / count1_ + 1 / (nobs1_ - count1_) +
                   1 / count2_ + 1 / (nobs2_ - count2_))
            diff_stat = np.log(odds_ratio_) - np.log(value)
            statistic = diff_stat / np.sqrt(var)
            distr = 'normal'
        elif method == 'score':
            res = score_test_proportions_2indep(
                count1, nobs1, count2, nobs2, value=value, compare=compare,
                alternative=alternative, correction=correction,
                return_results=return_results)
            if return_results is False:
                statistic, pvalue = res[:2]
            distr = 'normal'
            diff_stat = None
        else:
            raise ValueError('method "%s" not recognized' % method)

    else:
        raise ValueError('compare "%s" not recognized' % compare)

    if distr == 'normal' and diff_stat is not None:
        statistic, pvalue = _zstat_generic2(diff_stat, np.sqrt(var),
                                            alternative=alternative)

    if return_results:
        if res is None:
            res = HolderTuple(statistic=statistic,
                              pvalue=pvalue,
                              compare=compare,
                              method=method,
                              diff=diff,
                              ratio=ratio,
                              odds_ratio=odds_ratio,
                              variance=var,
                              alternative=alternative,
                              value=value,
                              )
        else:
            # we already have a return result from the score test;
            # add the missing attributes
            res.diff = diff
            res.ratio = ratio
            res.odds_ratio = odds_ratio
            res.value = value
        return res
    else:
        return statistic, pvalue
Hypothesis test for comparing two independent proportions This assumes that we have two independent binomial samples. The Null and alternative hypothesis are for compare = 'diff' - H0: prop1 - prop2 - value = 0 - H1: prop1 - prop2 - value != 0 if alternative = 'two-sided' - H1: prop1 - prop2 - value > 0 if alternative = 'larger' - H1: prop1 - prop2 - value < 0 if alternative = 'smaller' for compare = 'ratio' - H0: prop1 / prop2 - value = 0 - H1: prop1 / prop2 - value != 0 if alternative = 'two-sided' - H1: prop1 / prop2 - value > 0 if alternative = 'larger' - H1: prop1 / prop2 - value < 0 if alternative = 'smaller' for compare = 'odds-ratio' - H0: or - value = 0 - H1: or - value != 0 if alternative = 'two-sided' - H1: or - value > 0 if alternative = 'larger' - H1: or - value < 0 if alternative = 'smaller' where odds-ratio or = prop1 / (1 - prop1) / (prop2 / (1 - prop2)) Parameters ---------- count1 : int Count for first sample. nobs1 : int Sample size for first sample. count2 : int Count for the second sample. nobs2 : int Sample size for the second sample. value : float Value of the difference, risk ratio or odds ratio of 2 independent proportions under the null hypothesis. Default is equal proportions, 0 for diff and 1 for risk-ratio and for odds-ratio. method : string Method for computing the hypothesis test. If method is None, then a default method is used. The default might change as more methods are added. diff: - 'wald', - 'agresti-caffo' - 'score' if correction is True, then this uses the degrees of freedom correction ``nobs / (nobs - 1)`` as in Miettinen Nurminen 1985 ratio: - 'log': wald test using log transformation - 'log-adjusted': wald test using log transformation, adds 0.5 to counts - 'score': if correction is True, then this uses the degrees of freedom correction ``nobs / (nobs - 1)`` as in Miettinen Nurminen 1985 odds-ratio: - 'logit': wald test using logit transformation - 'logit-adjusted': wald test using logit transformation, adds 0.5 to counts - 'logit-smoothed': wald test using logit transformation, biases cell counts towards independence by adding two observations in total. - 'score' if correction is True, then this uses the degrees of freedom correction ``nobs / (nobs - 1)`` as in Miettinen Nurminen 1985 compare : {'diff', 'ratio' 'odds-ratio'} If compare is `diff`, then the hypothesis test is for the risk difference diff = p1 - p2. If compare is `ratio`, then the hypothesis test is for the risk ratio defined by ratio = p1 / p2. If compare is `odds-ratio`, then the hypothesis test is for the odds-ratio defined by or = p1 / (1 - p1) / (p2 / (1 - p2) alternative : {'two-sided', 'smaller', 'larger'} alternative hypothesis, which can be two-sided or either one of the one-sided tests. correction : bool If correction is True (default), then the Miettinen and Nurminen small sample correction to the variance nobs / (nobs - 1) is used. Applies only if method='score'. return_results : bool If true, then a results instance with extra information is returned, otherwise a tuple with statistic and pvalue is returned. Returns ------- results : results instance or tuple If return_results is True, then a results instance with the information in attributes is returned. If return_results is False, then only ``statistic`` and ``pvalue`` are returned. 
statistic : float test statistic asymptotically normal distributed N(0, 1) pvalue : float p-value based on normal distribution other attributes : additional information about the hypothesis test See Also -------- tost_proportions_2indep confint_proportions_2indep Notes ----- Status: experimental, API and defaults might still change. More ``methods`` will be added. The current default methods are - 'diff': 'agresti-caffo', - 'ratio': 'log-adjusted', - 'odds-ratio': 'logit-adjusted'
test_proportions_2indep
python
statsmodels/statsmodels
statsmodels/stats/proportion.py
https://github.com/statsmodels/statsmodels/blob/master/statsmodels/stats/proportion.py
BSD-3-Clause
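A usage sketch for the public test function (illustrative counts):

from statsmodels.stats.proportion import test_proportions_2indep

# Agresti-Caffo Wald test for equality of the two proportions
res = test_proportions_2indep(7, 34, 1, 34, compare='diff')
print(res.statistic, res.pvalue)

# Miettinen-Nurminen score test for H0: risk ratio = 1
res = test_proportions_2indep(7, 34, 1, 34, compare='ratio',
                              method='score')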
def tost_proportions_2indep(count1, nobs1, count2, nobs2, low, upp, method=None, compare='diff', correction=True): """ Equivalence test based on two one-sided `test_proportions_2indep` This assumes that we have two independent binomial samples. The Null and alternative hypothesis for equivalence testing are for compare = 'diff' - H0: prop1 - prop2 <= low or upp <= prop1 - prop2 - H1: low < prop1 - prop2 < upp for compare = 'ratio' - H0: prop1 / prop2 <= low or upp <= prop1 / prop2 - H1: low < prop1 / prop2 < upp for compare = 'odds-ratio' - H0: or <= low or upp <= or - H1: low < or < upp where odds-ratio or = prop1 / (1 - prop1) / (prop2 / (1 - prop2)) Parameters ---------- count1, nobs1 : count and sample size for first sample count2, nobs2 : count and sample size for the second sample low, upp : equivalence margin for diff, risk ratio or odds ratio method : string method for computing the hypothesis test. If method is None, then a default method is used. The default might change as more methods are added. diff: - 'wald', - 'agresti-caffo' - 'score' if correction is True, then this uses the degrees of freedom correction ``nobs / (nobs - 1)`` as in Miettinen Nurminen 1985. ratio: - 'log': wald test using log transformation - 'log-adjusted': wald test using log transformation, adds 0.5 to counts - 'score' if correction is True, then this uses the degrees of freedom correction ``nobs / (nobs - 1)`` as in Miettinen Nurminen 1985. odds-ratio: - 'logit': wald test using logit transformation - 'logit-adjusted': : wald test using logit transformation, adds 0.5 to counts - 'logit-smoothed': : wald test using logit transformation, biases cell counts towards independence by adding two observations in total. - 'score' if correction is True, then this uses the degrees of freedom correction ``nobs / (nobs - 1)`` as in Miettinen Nurminen 1985 compare : string in ['diff', 'ratio' 'odds-ratio'] If compare is `diff`, then the hypothesis test is for diff = p1 - p2. If compare is `ratio`, then the hypothesis test is for the risk ratio defined by ratio = p1 / p2. If compare is `odds-ratio`, then the hypothesis test is for the odds-ratio defined by or = p1 / (1 - p1) / (p2 / (1 - p2). correction : bool If correction is True (default), then the Miettinen and Nurminen small sample correction to the variance nobs / (nobs - 1) is used. Applies only if method='score'. Returns ------- pvalue : float p-value is the max of the pvalues of the two one-sided tests t1 : test results results instance for one-sided hypothesis at the lower margin t1 : test results results instance for one-sided hypothesis at the upper margin See Also -------- test_proportions_2indep confint_proportions_2indep Notes ----- Status: experimental, API and defaults might still change. The TOST equivalence test delegates to `test_proportions_2indep` and has the same method and comparison options. 
""" tt1 = test_proportions_2indep(count1, nobs1, count2, nobs2, value=low, method=method, compare=compare, alternative='larger', correction=correction, return_results=True) tt2 = test_proportions_2indep(count1, nobs1, count2, nobs2, value=upp, method=method, compare=compare, alternative='smaller', correction=correction, return_results=True) # idx_max = 1 if t1.pvalue < t2.pvalue else 0 idx_max = np.asarray(tt1.pvalue < tt2.pvalue, int) statistic = np.choose(idx_max, [tt1.statistic, tt2.statistic]) pvalue = np.choose(idx_max, [tt1.pvalue, tt2.pvalue]) res = HolderTuple(statistic=statistic, pvalue=pvalue, compare=compare, method=method, results_larger=tt1, results_smaller=tt2, title="Equivalence test for 2 independent proportions" ) return res
Equivalence test based on two one-sided `test_proportions_2indep`

This assumes that we have two independent binomial samples.

The Null and alternative hypothesis for equivalence testing are

for compare = 'diff'

- H0: prop1 - prop2 <= low or upp <= prop1 - prop2
- H1: low < prop1 - prop2 < upp

for compare = 'ratio'

- H0: prop1 / prop2 <= low or upp <= prop1 / prop2
- H1: low < prop1 / prop2 < upp

for compare = 'odds-ratio'

- H0: or <= low or upp <= or
- H1: low < or < upp

where odds-ratio or = prop1 / (1 - prop1) / (prop2 / (1 - prop2))

Parameters
----------
count1, nobs1 :
    count and sample size for first sample
count2, nobs2 :
    count and sample size for the second sample
low, upp :
    equivalence margin for diff, risk ratio or odds ratio
method : string
    method for computing the hypothesis test. If method is None, then a
    default method is used. The default might change as more methods are
    added.

    diff:

    - 'wald',
    - 'agresti-caffo'
    - 'score', if correction is True, then this uses the degrees of
      freedom correction ``nobs / (nobs - 1)`` as in Miettinen Nurminen
      1985.

    ratio:

    - 'log': wald test using log transformation
    - 'log-adjusted': wald test using log transformation, adds 0.5 to
      counts
    - 'score', if correction is True, then this uses the degrees of
      freedom correction ``nobs / (nobs - 1)`` as in Miettinen Nurminen
      1985.

    odds-ratio:

    - 'logit': wald test using logit transformation
    - 'logit-adjusted': wald test using logit transformation, adds 0.5
      to counts
    - 'logit-smoothed': wald test using logit transformation, biases
      cell counts towards independence by adding two observations in
      total.
    - 'score', if correction is True, then this uses the degrees of
      freedom correction ``nobs / (nobs - 1)`` as in Miettinen Nurminen
      1985.

compare : string in ['diff', 'ratio', 'odds-ratio']
    If compare is `diff`, then the hypothesis test is for
    diff = p1 - p2.
    If compare is `ratio`, then the hypothesis test is for the
    risk ratio defined by ratio = p1 / p2.
    If compare is `odds-ratio`, then the hypothesis test is for the
    odds-ratio defined by or = p1 / (1 - p1) / (p2 / (1 - p2)).
correction : bool
    If correction is True (default), then the Miettinen and Nurminen
    small sample correction to the variance nobs / (nobs - 1) is used.
    Applies only if method='score'.

Returns
-------
res : HolderTuple
    Results instance with ``statistic`` and ``pvalue`` as main
    attributes, where pvalue is the maximum of the p-values of the two
    one-sided tests. The underlying one-sided test results are attached
    as ``results_larger`` (test at the lower margin) and
    ``results_smaller`` (test at the upper margin).

See Also
--------
test_proportions_2indep
confint_proportions_2indep

Notes
-----
Status: experimental, API and defaults might still change.

The TOST equivalence test delegates to `test_proportions_2indep` and
has the same method and comparison options.
tost_proportions_2indep
python
statsmodels/statsmodels
statsmodels/stats/proportion.py
https://github.com/statsmodels/statsmodels/blob/master/statsmodels/stats/proportion.py
BSD-3-Clause
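A usage sketch for equivalence testing; the margins and counts below are illustrative only.

from statsmodels.stats.proportion import tost_proportions_2indep

# is the risk difference within the equivalence margin (-0.1, 0.1)?
res = tost_proportions_2indep(24, 80, 25, 80, low=-0.1, upp=0.1,
                              method='score', compare='diff')
print(res.pvalue)  # max of the two one-sided p-values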
def _std_2prop_power(diff, p2, ratio=1, alpha=0.05, value=0): """ Compute standard error under null and alternative for 2 proportions helper function for power and sample size computation """ if value != 0: msg = 'non-zero diff under null, value, is not yet implemented' raise NotImplementedError(msg) nobs_ratio = ratio p1 = p2 + diff # The following contains currently redundant variables that will # be useful for different options for the null variance p_pooled = (p1 + p2 * ratio) / (1 + ratio) # probabilities for the variance for the null statistic p1_vnull, p2_vnull = p_pooled, p_pooled p2_alt = p2 p1_alt = p2_alt + diff std_null = _std_diff_prop(p1_vnull, p2_vnull, ratio=nobs_ratio) std_alt = _std_diff_prop(p1_alt, p2_alt, ratio=nobs_ratio) return p_pooled, std_null, std_alt
Compute standard errors under the null and under the alternative for
two proportions

Helper function for power and sample size computations.
_std_2prop_power
python
statsmodels/statsmodels
statsmodels/stats/proportion.py
https://github.com/statsmodels/statsmodels/blob/master/statsmodels/stats/proportion.py
BSD-3-Clause
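A small sketch of what the private helper returns (values are illustrative):

from statsmodels.stats.proportion import _std_2prop_power

# standard errors per observation of sample 1, for p1 = 0.4 vs p2 = 0.3
p_pooled, std_null, std_alt = _std_2prop_power(diff=0.1, p2=0.3, ratio=1)
# std_null uses the pooled proportion; std_alt uses the two separate
# proportions under the alternative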
def power_proportions_2indep(diff, prop2, nobs1, ratio=1, alpha=0.05, value=0, alternative='two-sided', return_results=True): """ Power for ztest that two independent proportions are equal This assumes that the variance is based on the pooled proportion under the null and the non-pooled variance under the alternative Parameters ---------- diff : float difference between proportion 1 and 2 under the alternative prop2 : float proportion for the reference case, prop2, proportions for the first case will be computed using p2 and diff p1 = p2 + diff nobs1 : float or int number of observations in sample 1 ratio : float sample size ratio, nobs2 = ratio * nobs1 alpha : float in interval (0,1) Significance level, e.g. 0.05, is the probability of a type I error, that is wrong rejections if the Null Hypothesis is true. value : float currently only `value=0`, i.e. equality testing, is supported alternative : string, 'two-sided' (default), 'larger', 'smaller' Alternative hypothesis whether the power is calculated for a two-sided (default) or one sided test. The one-sided test can be either 'larger', 'smaller'. return_results : bool If true, then a results instance with extra information is returned, otherwise only the computed power is returned. Returns ------- results : results instance or float If return_results is True, then a results instance with the information in attributes is returned. If return_results is False, then only the power is returned. power : float Power of the test, e.g. 0.8, is one minus the probability of a type II error. Power is the probability that the test correctly rejects the Null Hypothesis if the Alternative Hypothesis is true. Other attributes in results instance include : p_pooled pooled proportion, used for std_null std_null standard error of difference under the null hypothesis (without sqrt(nobs1)) std_alt standard error of difference under the alternative hypothesis (without sqrt(nobs1)) """ # TODO: avoid possible circular import, check if needed from statsmodels.stats.power import normal_power_het p_pooled, std_null, std_alt = _std_2prop_power(diff, prop2, ratio=ratio, alpha=alpha, value=value) pow_ = normal_power_het(diff, nobs1, alpha, std_null=std_null, std_alternative=std_alt, alternative=alternative) if return_results: res = Holder(power=pow_, p_pooled=p_pooled, std_null=std_null, std_alt=std_alt, nobs1=nobs1, nobs2=ratio * nobs1, nobs_ratio=ratio, alpha=alpha, ) return res else: return pow_
Power for ztest that two independent proportions are equal This assumes that the variance is based on the pooled proportion under the null and the non-pooled variance under the alternative Parameters ---------- diff : float difference between proportion 1 and 2 under the alternative prop2 : float proportion for the reference case, prop2, proportions for the first case will be computed using p2 and diff p1 = p2 + diff nobs1 : float or int number of observations in sample 1 ratio : float sample size ratio, nobs2 = ratio * nobs1 alpha : float in interval (0,1) Significance level, e.g. 0.05, is the probability of a type I error, that is wrong rejections if the Null Hypothesis is true. value : float currently only `value=0`, i.e. equality testing, is supported alternative : string, 'two-sided' (default), 'larger', 'smaller' Alternative hypothesis whether the power is calculated for a two-sided (default) or one sided test. The one-sided test can be either 'larger', 'smaller'. return_results : bool If true, then a results instance with extra information is returned, otherwise only the computed power is returned. Returns ------- results : results instance or float If return_results is True, then a results instance with the information in attributes is returned. If return_results is False, then only the power is returned. power : float Power of the test, e.g. 0.8, is one minus the probability of a type II error. Power is the probability that the test correctly rejects the Null Hypothesis if the Alternative Hypothesis is true. Other attributes in results instance include : p_pooled pooled proportion, used for std_null std_null standard error of difference under the null hypothesis (without sqrt(nobs1)) std_alt standard error of difference under the alternative hypothesis (without sqrt(nobs1))
power_proportions_2indep
python
statsmodels/statsmodels
statsmodels/stats/proportion.py
https://github.com/statsmodels/statsmodels/blob/master/statsmodels/stats/proportion.py
BSD-3-Clause
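A usage sketch (effect size and sample size are illustrative):

from statsmodels.stats.proportion import power_proportions_2indep

# power to detect p1 = 0.4 versus p2 = 0.3 with 100 observations per group
res = power_proportions_2indep(diff=0.1, prop2=0.3, nobs1=100)
print(res.power)

# return only the power as a float
pow_ = power_proportions_2indep(0.1, 0.3, 100, return_results=False)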
def samplesize_proportions_2indep_onetail(diff, prop2, power, ratio=1, alpha=0.05, value=0, alternative='two-sided'): """ Required sample size assuming normal distribution based on one tail This uses an explicit computation for the sample size that is required to achieve a given power corresponding to the appropriate tails of the normal distribution. This ignores the far tail in a two-sided test which is negligible in the common case when alternative and null are far apart. Parameters ---------- diff : float Difference between proportion 1 and 2 under the alternative prop2 : float proportion for the reference case, prop2, proportions for the first case will be computing using p2 and diff p1 = p2 + diff power : float Power for which sample size is computed. ratio : float Sample size ratio, nobs2 = ratio * nobs1 alpha : float in interval (0,1) Significance level, e.g. 0.05, is the probability of a type I error, that is wrong rejections if the Null Hypothesis is true. value : float Currently only `value=0`, i.e. equality testing, is supported alternative : string, 'two-sided' (default), 'larger', 'smaller' Alternative hypothesis whether the power is calculated for a two-sided (default) or one sided test. In the case of a one-sided alternative, it is assumed that the test is in the appropriate tail. Returns ------- nobs1 : float Number of observations in sample 1. """ # TODO: avoid possible circular import, check if needed from statsmodels.stats.power import normal_sample_size_one_tail if alternative in ['two-sided', '2s']: alpha = alpha / 2 _, std_null, std_alt = _std_2prop_power(diff, prop2, ratio=ratio, alpha=alpha, value=value) nobs = normal_sample_size_one_tail(diff, power, alpha, std_null=std_null, std_alternative=std_alt) return nobs
Required sample size assuming normal distribution based on one tail

This uses an explicit computation for the sample size that is required
to achieve a given power corresponding to the appropriate tails of the
normal distribution. This ignores the far tail in a two-sided test
which is negligible in the common case when alternative and null are
far apart.

Parameters
----------
diff : float
    Difference between proportion 1 and 2 under the alternative
prop2 : float
    proportion for the reference case, prop2, proportions for the first
    case will be computed using p2 and diff: p1 = p2 + diff
power : float
    Power for which sample size is computed.
ratio : float
    Sample size ratio, nobs2 = ratio * nobs1
alpha : float in interval (0,1)
    Significance level, e.g. 0.05, is the probability of a type I
    error, that is wrong rejections if the Null Hypothesis is true.
value : float
    Currently only `value=0`, i.e. equality testing, is supported
alternative : string, 'two-sided' (default), 'larger', 'smaller'
    Alternative hypothesis whether the power is calculated for a
    two-sided (default) or one sided test. In the case of a one-sided
    alternative, it is assumed that the test is in the appropriate tail.

Returns
-------
nobs1 : float
    Number of observations in sample 1.
samplesize_proportions_2indep_onetail
python
statsmodels/statsmodels
statsmodels/stats/proportion.py
https://github.com/statsmodels/statsmodels/blob/master/statsmodels/stats/proportion.py
BSD-3-Clause
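A usage sketch, including a round-trip check against the power function (values illustrative):

from statsmodels.stats.proportion import (
    power_proportions_2indep, samplesize_proportions_2indep_onetail)

# sample size per group for 80% power to detect p1 = 0.4 vs p2 = 0.3
nobs1 = samplesize_proportions_2indep_onetail(diff=0.1, prop2=0.3,
                                              power=0.8)

# the power at the computed sample size should be close to 0.8
# (up to the far tail that the one-tail computation ignores)
print(power_proportions_2indep(0.1, 0.3, nobs1).power)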
def _score_confint_inversion(count1, nobs1, count2, nobs2, compare='diff', alpha=0.05, correction=True): """ Compute score confidence interval by inverting score test Parameters ---------- count1, nobs1 : Count and sample size for first sample. count2, nobs2 : Count and sample size for the second sample. compare : string in ['diff', 'ratio' 'odds-ratio'] If compare is `diff`, then the confidence interval is for diff = p1 - p2. If compare is `ratio`, then the confidence interval is for the risk ratio defined by ratio = p1 / p2. If compare is `odds-ratio`, then the confidence interval is for the odds-ratio defined by or = p1 / (1 - p1) / (p2 / (1 - p2). alpha : float in interval (0,1) Significance level, e.g. 0.05, is the probability of a type I error, that is wrong rejections if the Null Hypothesis is true. correction : bool If correction is True (default), then the Miettinen and Nurminen small sample correction to the variance nobs / (nobs - 1) is used. Applies only if method='score'. Returns ------- low : float Lower confidence bound. upp : float Upper confidence bound. """ def func(v): r = test_proportions_2indep(count1, nobs1, count2, nobs2, value=v, compare=compare, method='score', correction=correction, alternative="two-sided") return r.pvalue - alpha rt0 = test_proportions_2indep(count1, nobs1, count2, nobs2, value=0, compare=compare, method='score', correction=correction, alternative="two-sided") # use default method to get starting values # this will not work if score confint becomes default # maybe use "wald" as alias that works for all compare statistics use_method = {"diff": "wald", "ratio": "log", "odds-ratio": "logit"} rci0 = confint_proportions_2indep(count1, nobs1, count2, nobs2, method=use_method[compare], compare=compare, alpha=alpha) # Note diff might be negative ub = rci0[1] + np.abs(rci0[1]) * 0.5 lb = rci0[0] - np.abs(rci0[0]) * 0.25 if compare == 'diff': param = rt0.diff # 1 might not be the correct upper bound because # rootfinding is for the `diff` and not for a probability. ub = min(ub, 0.99999) elif compare == 'ratio': param = rt0.ratio ub *= 2 # add more buffer if compare == 'odds-ratio': param = rt0.odds_ratio # root finding for confint bounds upp = optimize.brentq(func, param, ub) low = optimize.brentq(func, lb, param) return low, upp
Compute score confidence interval by inverting score test Parameters ---------- count1, nobs1 : Count and sample size for first sample. count2, nobs2 : Count and sample size for the second sample. compare : string in ['diff', 'ratio' 'odds-ratio'] If compare is `diff`, then the confidence interval is for diff = p1 - p2. If compare is `ratio`, then the confidence interval is for the risk ratio defined by ratio = p1 / p2. If compare is `odds-ratio`, then the confidence interval is for the odds-ratio defined by or = p1 / (1 - p1) / (p2 / (1 - p2). alpha : float in interval (0,1) Significance level, e.g. 0.05, is the probability of a type I error, that is wrong rejections if the Null Hypothesis is true. correction : bool If correction is True (default), then the Miettinen and Nurminen small sample correction to the variance nobs / (nobs - 1) is used. Applies only if method='score'. Returns ------- low : float Lower confidence bound. upp : float Upper confidence bound.
_score_confint_inversion
python
statsmodels/statsmodels
statsmodels/stats/proportion.py
https://github.com/statsmodels/statsmodels/blob/master/statsmodels/stats/proportion.py
BSD-3-Clause
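The inversion above is reached through the public interface by choosing the score method, for example (illustrative counts):

from statsmodels.stats.proportion import confint_proportions_2indep

low, upp = confint_proportions_2indep(7, 34, 1, 34, compare='diff',
                                      method='score')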
def _confint_riskratio_koopman(count1, nobs1, count2, nobs2, alpha=0.05,
                               correction=True):
    """
    Score confidence interval for the ratio of two proportions,
    Koopman/Nam

    signature not consistent with other functions

    When correction is True, then the small sample correction
    nobs / (nobs - 1) by Miettinen/Nurminen is used.
    """
    # The names below follow Nam
    x0, x1, n0, n1 = count2, count1, nobs2, nobs1
    x = x0 + x1
    n = n0 + n1
    z = stats.norm.isf(alpha / 2)**2
    if correction:
        # Miettinen/Nurminen small sample correction
        z *= n / (n - 1)
    # z = stats.chi2.isf(alpha, 1)
    # equ 6 in Nam 1995
    a1 = n0 * (n0 * n * x1 + n1 * (n0 + x1) * z)
    a2 = -n0 * (n0 * n1 * x + 2 * n * x0 * x1 + n1 * (n0 + x0 + 2 * x1) * z)
    a3 = 2 * n0 * n1 * x0 * x + n * x0 * x0 * x1 + n0 * n1 * x * z
    a4 = -n1 * x0 * x0 * x

    p_roots_ = np.sort(np.roots([a1, a2, a3, a4]))
    p_roots = p_roots_[:2][::-1]

    # equ 5
    ci = (1 - (n1 - x1) * (1 - p_roots) / (x0 + n1 - n * p_roots)) / p_roots

    res = Holder()
    res.confint = ci
    res._p_roots = p_roots_  # for unit tests, can be dropped
    return res
Score confidence interval for the ratio of two proportions, Koopman/Nam

signature not consistent with other functions

When correction is True, then the small sample correction
nobs / (nobs - 1) by Miettinen/Nurminen is used.
_confint_riskratio_koopman
python
statsmodels/statsmodels
statsmodels/stats/proportion.py
https://github.com/statsmodels/statsmodels/blob/master/statsmodels/stats/proportion.py
BSD-3-Clause
def _confint_riskratio_paired_nam(table, alpha=0.05): """ Confidence interval for marginal risk ratio for matched pairs need full table success fail marginal success x11 x10 x1. fail x01 x00 x0. marginal x.1 x.0 n The confidence interval is for the ratio p1 / p0 where p1 = x1. / n and p0 - x.1 / n Todo: rename p1 to pa and p2 to pb, so we have a, b for treatment and 0, 1 for success/failure current namings follow Nam 2009 status testing: compared to example in Nam 2009 internal polynomial coefficients in calculation correspond at around 4 decimals confidence interval agrees only at 2 decimals """ x11, x10, x01, x00 = np.ravel(table) n = np.sum(table) # nobs p10, p01 = x10 / n, x01 / n p1 = (x11 + x10) / n p0 = (x11 + x01) / n q00 = 1 - x00 / n z2 = stats.norm.isf(alpha / 2)**2 # z = stats.chi2.isf(alpha, 1) # before equ 3 in Nam 2009 g1 = (n * p0 + z2 / 2) * p0 g2 = - (2 * n * p1 * p0 + z2 * q00) g3 = (n * p1 + z2 / 2) * p1 a0 = g1**2 - (z2 * p0 / 2)**2 a1 = 2 * g1 * g2 a2 = g2**2 + 2 * g1 * g3 + z2**2 * (p1 * p0 - 2 * p10 * p01) / 2 a3 = 2 * g2 * g3 a4 = g3**2 - (z2 * p1 / 2)**2 p_roots = np.sort(np.roots([a0, a1, a2, a3, a4])) # p_roots = np.sort(np.roots([1, a1 / a0, a2 / a0, a3 / a0, a4 / a0])) ci = [p_roots.min(), p_roots.max()] res = Holder() res.confint = ci res.p = p1, p0 res._p_roots = p_roots # for unit tests, can be dropped return res
Confidence interval for marginal risk ratio for matched pairs

Needs the full table:

             success  fail  marginal
    success    x11     x10    x1.
    fail       x01     x00    x0.
    marginal   x.1     x.0     n

The confidence interval is for the ratio p1 / p0 where
p1 = x1. / n and p0 = x.1 / n.

Todo: rename p1 to pa and p0 to pb, so that we have a, b for treatment
and 0, 1 for success/failure; the current naming follows Nam 2009.

status testing:
Compared to the example in Nam 2009, the internal polynomial
coefficients in the calculation correspond at around 4 decimals; the
confidence interval agrees only at 2 decimals.
_confint_riskratio_paired_nam
python
statsmodels/statsmodels
statsmodels/stats/proportion.py
https://github.com/statsmodels/statsmodels/blob/master/statsmodels/stats/proportion.py
BSD-3-Clause
def _make_asymptotic_function(params): """ Generates an asymptotic distribution callable from a param matrix Polynomial is a[0] * x**(-1/2) + a[1] * x**(-1) + a[2] * x**(-3/2) Parameters ---------- params : ndarray Array with shape (nalpha, 3) where nalpha is the number of significance levels """ def f(n): poly = np.array([1, np.log(n), np.log(n) ** 2]) return np.exp(poly.dot(params.T)) return f
Generates an asymptotic distribution callable from a param matrix

The approximation is exp(a[0] + a[1] * log(n) + a[2] * log(n)**2),
evaluated for each row ``a`` of params.

Parameters
----------
params : ndarray
    Array with shape (nalpha, 3) where nalpha is the number of
    significance levels
_make_asymptotic_function
python
statsmodels/statsmodels
statsmodels/stats/_lilliefors.py
https://github.com/statsmodels/statsmodels/blob/master/statsmodels/stats/_lilliefors.py
BSD-3-Clause
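A toy illustration of the helper; the coefficient values below are made up, not real table parameters.

import numpy as np
from statsmodels.stats._lilliefors import _make_asymptotic_function

# one row of params per significance level:
# log(cv) = a[0] + a[1] * log(n) + a[2] * log(n)**2
params = np.array([[-1.0, -0.30, 0.01],
                   [-1.2, -0.30, 0.01]])
f = _make_asymptotic_function(params)
print(f(50))  # approximate critical values at n = 50, one per level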
def ksstat(x, cdf, alternative='two_sided', args=()): """ Calculate statistic for the Kolmogorov-Smirnov test for goodness of fit This calculates the test statistic for a test of the distribution G(x) of an observed variable against a given distribution F(x). Under the null hypothesis the two distributions are identical, G(x)=F(x). The alternative hypothesis can be either 'two_sided' (default), 'less' or 'greater'. The KS test is only valid for continuous distributions. Parameters ---------- x : array_like, 1d array of observations cdf : str or callable string: name of a distribution in scipy.stats callable: function to evaluate cdf alternative : 'two_sided' (default), 'less' or 'greater' defines the alternative hypothesis (see explanation) args : tuple, sequence distribution parameters for call to cdf Returns ------- D : float KS test statistic, either D, D+ or D- See Also -------- scipy.stats.kstest Notes ----- In the one-sided test, the alternative is that the empirical cumulative distribution function of the random variable is "less" or "greater" than the cumulative distribution function F(x) of the hypothesis, G(x)<=F(x), resp. G(x)>=F(x). In contrast to scipy.stats.kstest, this function only calculates the statistic which can be used either as distance measure or to implement case specific p-values. """ nobs = float(len(x)) if isinstance(cdf, str): cdf = getattr(stats.distributions, cdf).cdf elif hasattr(cdf, 'cdf'): cdf = getattr(cdf, 'cdf') x = np.sort(x) cdfvals = cdf(x, *args) d_plus = (np.arange(1.0, nobs + 1) / nobs - cdfvals).max() d_min = (cdfvals - np.arange(0.0, nobs) / nobs).max() if alternative == 'greater': return d_plus elif alternative == 'less': return d_min return np.max([d_plus, d_min])
Calculate statistic for the Kolmogorov-Smirnov test for goodness of fit This calculates the test statistic for a test of the distribution G(x) of an observed variable against a given distribution F(x). Under the null hypothesis the two distributions are identical, G(x)=F(x). The alternative hypothesis can be either 'two_sided' (default), 'less' or 'greater'. The KS test is only valid for continuous distributions. Parameters ---------- x : array_like, 1d array of observations cdf : str or callable string: name of a distribution in scipy.stats callable: function to evaluate cdf alternative : 'two_sided' (default), 'less' or 'greater' defines the alternative hypothesis (see explanation) args : tuple, sequence distribution parameters for call to cdf Returns ------- D : float KS test statistic, either D, D+ or D- See Also -------- scipy.stats.kstest Notes ----- In the one-sided test, the alternative is that the empirical cumulative distribution function of the random variable is "less" or "greater" than the cumulative distribution function F(x) of the hypothesis, G(x)<=F(x), resp. G(x)>=F(x). In contrast to scipy.stats.kstest, this function only calculates the statistic which can be used either as distance measure or to implement case specific p-values.
ksstat
python
statsmodels/statsmodels
statsmodels/stats/_lilliefors.py
https://github.com/statsmodels/statsmodels/blob/master/statsmodels/stats/_lilliefors.py
BSD-3-Clause
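A usage sketch with simulated data; the private module path is the one shown in the record above.

import numpy as np
from scipy import stats
from statsmodels.stats._lilliefors import ksstat

x = stats.norm.rvs(size=100, random_state=1234)
d = ksstat(x, 'norm')                          # distribution given by name
d2 = ksstat(x, stats.norm.cdf, args=(0, 1))    # or as a callable cdf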
def get_lilliefors_table(dist='norm'): """ Generates tables for significance levels of Lilliefors test statistics Tables for available normal and exponential distribution testing, as specified in Lilliefors references above Parameters ---------- dist : str distribution being tested in set {'norm', 'exp'}. Returns ------- lf : TableDist object. table of critical values """ # function just to keep things together # for this test alpha is sf probability, i.e. right tail probability alpha = 1 - np.array(PERCENTILES) / 100.0 alpha = alpha[::-1] dist = 'normal' if dist == 'norm' else dist if dist not in critical_values: raise ValueError("Invalid dist parameter. Must be 'norm' or 'exp'") cv_data = critical_values[dist] acv_data = asymp_critical_values[dist] size = np.array(sorted(cv_data), dtype=float) crit_lf = np.array([cv_data[key] for key in sorted(cv_data)]) crit_lf = crit_lf[:, ::-1] asym_params = np.array([acv_data[key] for key in sorted(acv_data)]) asymp_fn = _make_asymptotic_function(asym_params[::-1]) lf = TableDist(alpha, size, crit_lf, asymptotic=asymp_fn) return lf
Generates tables for significance levels of Lilliefors test statistics Tables for available normal and exponential distribution testing, as specified in Lilliefors references above Parameters ---------- dist : str distribution being tested in set {'norm', 'exp'}. Returns ------- lf : TableDist object. table of critical values
get_lilliefors_table
python
statsmodels/statsmodels
statsmodels/stats/_lilliefors.py
https://github.com/statsmodels/statsmodels/blob/master/statsmodels/stats/_lilliefors.py
BSD-3-Clause
def pval_lf(d_max, n): """ Approximate pvalues for Lilliefors test This is only valid for pvalues smaller than 0.1 which is not checked in this function. Parameters ---------- d_max : array_like two-sided Kolmogorov-Smirnov test statistic n : int or float sample size Returns ------- p-value : float or ndarray pvalue according to approximation formula of Dallal and Wilkinson. Notes ----- This is mainly a helper function where the calling code should dispatch on bound violations. Therefore it does not check whether the pvalue is in the valid range. Precision for the pvalues is around 2 to 3 decimals. This approximation is also used by other statistical packages (e.g. R:fBasics) but might not be the most precise available. References ---------- DallalWilkinson1986 """ # todo: check boundaries, valid range for n and Dmax if n > 100: d_max *= (n / 100.) ** 0.49 n = 100 pval = np.exp(-7.01256 * d_max ** 2 * (n + 2.78019) + 2.99587 * d_max * np.sqrt(n + 2.78019) - 0.122119 + 0.974598 / np.sqrt(n) + 1.67997 / n) return pval
Approximate pvalues for Lilliefors test

This is only valid for pvalues smaller than 0.1 which is not checked in
this function.

Parameters
----------
d_max : array_like
    two-sided Kolmogorov-Smirnov test statistic
n : int or float
    sample size

Returns
-------
p-value : float or ndarray
    pvalue according to approximation formula of Dallal and Wilkinson.

Notes
-----
This is mainly a helper function where the calling code should dispatch
on bound violations. Therefore it does not check whether the pvalue is
in the valid range.

Precision for the pvalues is around 2 to 3 decimals. This approximation
is also used by other statistical packages (e.g. R:fBasics) but might
not be the most precise available.

References
----------
.. [1] Dallal, G. E., and L. Wilkinson (1986). "An Analytic
   Approximation to the Distribution of Lilliefors's Test Statistic for
   Normality." The American Statistician 40.
pval_lf
python
statsmodels/statsmodels
statsmodels/stats/_lilliefors.py
https://github.com/statsmodels/statsmodels/blob/master/statsmodels/stats/_lilliefors.py
BSD-3-Clause
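A quick sketch; remember the approximation is only intended for p-values below roughly 0.1.

from statsmodels.stats._lilliefors import pval_lf

p = pval_lf(0.12, 50)   # approximate p-value for D = 0.12, n = 50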
def kstest_fit(x, dist='norm', pvalmethod="table"): """ Test assumed normal or exponential distribution using Lilliefors' test. Lilliefors' test is a Kolmogorov-Smirnov test with estimated parameters. Parameters ---------- x : array_like, 1d Data to test. dist : {'norm', 'exp'}, optional The assumed distribution. pvalmethod : {'approx', 'table'}, optional The method used to compute the p-value of the test statistic. In general, 'table' is preferred and makes use of a very large simulation. 'approx' is only valid for normality. if `dist = 'exp'` `table` is always used. 'approx' uses the approximation formula of Dalal and Wilkinson, valid for pvalues < 0.1. If the pvalue is larger than 0.1, then the result of `table` is returned. Returns ------- ksstat : float Kolmogorov-Smirnov test statistic with estimated mean and variance. pvalue : float If the pvalue is lower than some threshold, e.g. 0.05, then we can reject the Null hypothesis that the sample comes from a normal distribution. Notes ----- 'table' uses an improved table based on 10,000,000 simulations. The critical values are approximated using log(cv_alpha) = b_alpha + c[0] log(n) + c[1] log(n)**2 where cv_alpha is the critical value for a test with size alpha, b_alpha is an alpha-specific intercept term and c[1] and c[2] are coefficients that are shared all alphas. Values in the table are linearly interpolated. Values outside the range are be returned as bounds, 0.990 for large and 0.001 for small pvalues. For implementation details, see lilliefors_critical_value_simulation.py in the test directory. """ pvalmethod = string_like(pvalmethod, "pvalmethod", options=("approx", "table")) x = np.asarray(x) if x.ndim == 2 and x.shape[1] == 1: x = x[:, 0] elif x.ndim != 1: raise ValueError("Invalid parameter `x`: must be a one-dimensional" " array-like or a single-column DataFrame") nobs = len(x) if dist == 'norm': z = (x - x.mean()) / x.std(ddof=1) test_d = stats.norm.cdf lilliefors_table = lilliefors_table_norm elif dist == 'exp': z = x / x.mean() test_d = stats.expon.cdf lilliefors_table = lilliefors_table_expon pvalmethod = 'table' else: raise ValueError("Invalid dist parameter, must be 'norm' or 'exp'") min_nobs = 4 if dist == 'norm' else 3 if nobs < min_nobs: raise ValueError('Test for distribution {} requires at least {} ' 'observations'.format(dist, min_nobs)) d_ks = ksstat(z, test_d, alternative='two_sided') if pvalmethod == 'approx': pval = pval_lf(d_ks, nobs) # check pval is in desired range if pval > 0.1: pval = lilliefors_table.prob(d_ks, nobs) else: # pvalmethod == 'table' pval = lilliefors_table.prob(d_ks, nobs) return d_ks, pval
Test assumed normal or exponential distribution using Lilliefors' test.

Lilliefors' test is a Kolmogorov-Smirnov test with estimated parameters.

Parameters
----------
x : array_like, 1d
    Data to test.
dist : {'norm', 'exp'}, optional
    The assumed distribution.
pvalmethod : {'approx', 'table'}, optional
    The method used to compute the p-value of the test statistic. In
    general, 'table' is preferred and makes use of a very large
    simulation. 'approx' is only valid for normality. If `dist = 'exp'`,
    `table` is always used. 'approx' uses the approximation formula of
    Dallal and Wilkinson, valid for pvalues < 0.1. If the pvalue is
    larger than 0.1, then the result of `table` is returned.

Returns
-------
ksstat : float
    Kolmogorov-Smirnov test statistic with estimated mean and variance.
pvalue : float
    If the pvalue is lower than some threshold, e.g. 0.05, then we can
    reject the Null hypothesis that the sample comes from a normal
    distribution.

Notes
-----
'table' uses an improved table based on 10,000,000 simulations. The
critical values are approximated using

log(cv_alpha) = b_alpha + c[0] log(n) + c[1] log(n)**2

where cv_alpha is the critical value for a test with size alpha,
b_alpha is an alpha-specific intercept term and c[0] and c[1] are
coefficients that are shared across all alphas.

Values in the table are linearly interpolated. Values outside the
range are returned as bounds, 0.990 for large and 0.001 for small
pvalues.

For implementation details, see lilliefors_critical_value_simulation.py
in the test directory.
kstest_fit
python
statsmodels/statsmodels
statsmodels/stats/_lilliefors.py
https://github.com/statsmodels/statsmodels/blob/master/statsmodels/stats/_lilliefors.py
BSD-3-Clause
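A usage sketch with simulated data; this function is also exposed publicly, e.g. as ``lilliefors`` in statsmodels.stats.diagnostic.

import numpy as np
from statsmodels.stats._lilliefors import kstest_fit

x = np.random.default_rng(12345).exponential(size=200)
d, pval = kstest_fit(x, dist='exp')    # table-based p-value
d, pval = kstest_fit(x, dist='norm', pvalmethod='approx')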
def threshold(self, tfdr): """ Returns the threshold statistic for a given target FDR. """ if np.min(self._ufdr) <= tfdr: return self._unq[self._ufdr <= tfdr][0] else: return np.inf
Returns the threshold statistic for a given target FDR.
threshold
python
statsmodels/statsmodels
statsmodels/stats/_knockoff.py
https://github.com/statsmodels/statsmodels/blob/master/statsmodels/stats/_knockoff.py
BSD-3-Clause
def _design_knockoff_sdp(exog): """ Use semidefinite programming to construct a knockoff design matrix. Requires cvxopt to be installed. """ try: from cvxopt import solvers, matrix except ImportError: raise ValueError("SDP knockoff designs require installation of cvxopt") nobs, nvar = exog.shape # Standardize exog xnm = np.sum(exog**2, 0) xnm = np.sqrt(xnm) exog = exog / xnm Sigma = np.dot(exog.T, exog) c = matrix(-np.ones(nvar)) h0 = np.concatenate((np.zeros(nvar), np.ones(nvar))) h0 = matrix(h0) G0 = np.concatenate((-np.eye(nvar), np.eye(nvar)), axis=0) G0 = matrix(G0) h1 = 2 * Sigma h1 = matrix(h1) i, j = np.diag_indices(nvar) G1 = np.zeros((nvar*nvar, nvar)) G1[i*nvar + j, i] = 1 G1 = matrix(G1) solvers.options['show_progress'] = False sol = solvers.sdp(c, G0, h0, [G1], [h1]) sl = np.asarray(sol['x']).ravel() xcov = np.dot(exog.T, exog) exogn = _get_knmat(exog, xcov, sl) return exog, exogn, sl
Use semidefinite programming to construct a knockoff design matrix. Requires cvxopt to be installed.
_design_knockoff_sdp
python
statsmodels/statsmodels
statsmodels/stats/_knockoff.py
https://github.com/statsmodels/statsmodels/blob/master/statsmodels/stats/_knockoff.py
BSD-3-Clause
def _design_knockoff_equi(exog):
    """
    Construct an equivariant design matrix for knockoff analysis.

    Follows the 'equi-correlated' knockoff approach of equation 2.4 in
    Barber and Candes.

    Constructs a pair of design matrices exogs, exogn such that exogs is
    a scaled/centered version of the input matrix exog, exogn is another
    matrix of the same shape with cov(exogn) = cov(exogs), and the
    covariances between corresponding columns of exogn and exogs are as
    small as possible.
    """

    nobs, nvar = exog.shape

    if nobs < 2 * nvar:
        msg = "The equivariant knockoff can only be used when n >= 2*p"
        raise ValueError(msg)

    # Standardize exog
    xnm = np.sum(exog**2, 0)
    xnm = np.sqrt(xnm)
    exog = exog / xnm

    xcov = np.dot(exog.T, exog)
    ev, _ = np.linalg.eig(xcov)
    evmin = np.min(ev)

    sl = min(2 * evmin, 1)
    sl = sl * np.ones(nvar)

    exogn = _get_knmat(exog, xcov, sl)

    return exog, exogn, sl
Construct an equivariant design matrix for knockoff analysis. Follows the 'equi-correlated' knockoff approach of equation 2.4 in Barber and Candes. Constructs a pair of design matrices exogs, exogn such that exogs is a scaled/centered version of the input matrix exog, exogn is another matrix of the same shape with cov(exogn) = cov(exogs), and the covariances between corresponding columns of exogn and exogs are as small as possible.
_design_knockoff_equi
python
statsmodels/statsmodels
statsmodels/stats/_knockoff.py
https://github.com/statsmodels/statsmodels/blob/master/statsmodels/stats/_knockoff.py
BSD-3-Clause
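A short numpy-only sketch of the equi-correlated scaling computed in `_design_knockoff_equi`; `_get_knmat` is internal and not reproduced, and the design matrix here is simulated.

import numpy as np

rng = np.random.default_rng(0)
exog = rng.normal(size=(200, 5))
exog = exog / np.sqrt(np.sum(exog**2, 0))   # unit-norm columns

xcov = exog.T @ exog
evmin = np.linalg.eigvalsh(xcov).min()      # smallest eigenvalue of X'X
sl = min(2 * evmin, 1) * np.ones(exog.shape[1])
print(sl)   # common knockoff scaling s_j = min(2*lambda_min, 1)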
def conf_int(self, alpha=0.05): """ Returns the confidence interval of the value, `effect` of the constraint. This is currently only available for t and z tests. Parameters ---------- alpha : float, optional The significance level for the confidence interval. i.e., the default `alpha` = .05 returns a 95% confidence interval. Returns ------- ci : ndarray, (k_constraints, 2) The array has the lower and the upper limit of the confidence interval in the columns. """ if self.effect is not None: # confidence intervals q = self.dist.ppf(1 - alpha / 2., *self.dist_args) lower = self.effect - q * self.sd upper = self.effect + q * self.sd return np.column_stack((lower, upper)) else: raise NotImplementedError('Confidence Interval not available')
Returns the confidence interval of the value, `effect` of the constraint. This is currently only available for t and z tests. Parameters ---------- alpha : float, optional The significance level for the confidence interval. i.e., the default `alpha` = .05 returns a 95% confidence interval. Returns ------- ci : ndarray, (k_constraints, 2) The array has the lower and the upper limit of the confidence interval in the columns.
conf_int
python
statsmodels/statsmodels
statsmodels/stats/contrast.py
https://github.com/statsmodels/statsmodels/blob/master/statsmodels/stats/contrast.py
BSD-3-Clause
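A hedged usage sketch for `conf_int` on the ContrastResults instance that `t_test` returns; the model, data, and coefficient values below are invented for illustration.

import numpy as np
import statsmodels.api as sm

rng = np.random.default_rng(42)
X = sm.add_constant(rng.normal(size=(100, 2)))
y = X @ np.array([1.0, 0.5, -0.2]) + rng.normal(size=100)

res = sm.OLS(y, X).fit()
tt = res.t_test(np.eye(3))       # test each coefficient against zero
print(tt.conf_int(alpha=0.05))   # (k_constraints, 2) lower/upper limits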
def summary(self, xname=None, alpha=0.05, title=None): """Summarize the Results of the hypothesis test Parameters ---------- xname : list[str], optional Default is `c_##` for ## in the number of regressors alpha : float significance level for the confidence intervals. Default is alpha = 0.05 which implies a confidence level of 95%. title : str, optional Title for the params table. If not None, then this replaces the default title Returns ------- smry : str or Summary instance This contains a parameter results table in the case of t or z test in the same form as the parameter results table in the model results summary. For F or Wald test, the return is a string. """ if self.effect is not None: # TODO: should also add some extra information, e.g. robust cov ? # TODO: can we infer names for constraints, xname in __init__ ? if title is None: title = 'Test for Constraints' elif title == '': # do not add any title, # I think SimpleTable skips on None - check title = None # we have everything for a params table use_t = (self.distribution == 't') yname='constraints' # Not used in params_frame if xname is None: xname = self.c_names from statsmodels.iolib.summary import summary_params pvalues = np.atleast_1d(self.pvalue) summ = summary_params((self, self.effect, self.sd, self.statistic, pvalues, self.conf_int(alpha)), yname=yname, xname=xname, use_t=use_t, title=title, alpha=alpha) return summ elif hasattr(self, 'fvalue'): # TODO: create something nicer for these cases return ('<F test: F=%s, p=%s, df_denom=%.3g, df_num=%.3g>' % (repr(self.fvalue), self.pvalue, self.df_denom, self.df_num)) elif self.distribution == 'chi2': return ('<Wald test (%s): statistic=%s, p-value=%s, df_denom=%.3g>' % (self.distribution, self.statistic, self.pvalue, self.df_denom)) else: # generic return ('<Wald test: statistic=%s, p-value=%s>' % (self.statistic, self.pvalue))
Summarize the Results of the hypothesis test Parameters ---------- xname : list[str], optional Default is `c_##` for ## in the number of regressors alpha : float significance level for the confidence intervals. Default is alpha = 0.05 which implies a confidence level of 95%. title : str, optional Title for the params table. If not None, then this replaces the default title Returns ------- smry : str or Summary instance This contains a parameter results table in the case of t or z test in the same form as the parameter results table in the model results summary. For F or Wald test, the return is a string.
summary
python
statsmodels/statsmodels
statsmodels/stats/contrast.py
https://github.com/statsmodels/statsmodels/blob/master/statsmodels/stats/contrast.py
BSD-3-Clause
def summary_frame(self, xname=None, alpha=0.05): """Return the parameter table as a pandas DataFrame This is only available for t and normal tests """ if self.effect is not None: # we have everything for a params table use_t = (self.distribution == 't') yname='constraints' # Not used in params_frame if xname is None: xname = self.c_names from statsmodels.iolib.summary import summary_params_frame summ = summary_params_frame((self, self.effect, self.sd, self.statistic,self.pvalue, self.conf_int(alpha)), yname=yname, xname=xname, use_t=use_t, alpha=alpha) return summ else: # TODO: create something nicer raise NotImplementedError('only available for t and z')
Return the parameter table as a pandas DataFrame This is only available for t and normal tests
summary_frame
python
statsmodels/statsmodels
statsmodels/stats/contrast.py
https://github.com/statsmodels/statsmodels/blob/master/statsmodels/stats/contrast.py
BSD-3-Clause
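A self-contained sketch showing `summary` and `summary_frame` on the same kind of t_test result; the data and the `xname` labels are invented.

import numpy as np
import statsmodels.api as sm

rng = np.random.default_rng(8)
X = sm.add_constant(rng.normal(size=(120, 2)))
y = X @ np.array([1.0, 0.3, 0.0]) + rng.normal(size=120)

tt = sm.OLS(y, X).fit().t_test(np.eye(3))
print(tt.summary(xname=['const', 'x1', 'x2']))        # text table
print(tt.summary_frame(xname=['const', 'x1', 'x2']))  # same as DataFrame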
def _get_matrix(self): """ Gets the contrast_matrix property """ if not hasattr(self, "_contrast_matrix"): self.compute_matrix() return self._contrast_matrix
Gets the contrast_matrix property
_get_matrix
python
statsmodels/statsmodels
statsmodels/stats/contrast.py
https://github.com/statsmodels/statsmodels/blob/master/statsmodels/stats/contrast.py
BSD-3-Clause
def compute_matrix(self): """ Construct a contrast matrix C so that colspan(dot(D, C)) = colspan(dot(D, dot(pinv(D), T))) where pinv(D) is the generalized inverse of D=design. """ T = self.term if T.ndim == 1: T = T[:,None] self.T = clean0(T) self.D = self.design self._contrast_matrix = contrastfromcols(self.T, self.D) try: self.rank = self.matrix.shape[1] except (AttributeError, IndexError): self.rank = 1
Construct a contrast matrix C so that colspan(dot(D, C)) = colspan(dot(D, dot(pinv(D), T))) where pinv(D) is the generalized inverse of D=design.
compute_matrix
python
statsmodels/statsmodels
statsmodels/stats/contrast.py
https://github.com/statsmodels/statsmodels/blob/master/statsmodels/stats/contrast.py
BSD-3-Clause
def contrastfromcols(L, D, pseudo=None): """ From an n x p design matrix D and a matrix L, tries to determine a p x q contrast matrix C which determines a contrast of full rank, i.e. the q x n matrix dot(transpose(C), pinv(D)) has full rank. L must satisfy either L.shape[0] == n or L.shape[1] == p. If L.shape[0] == n, then L is thought of as representing columns in the column space of D. If L.shape[1] == p, then L is thought of as what is known as a contrast matrix. In this case, this function returns an estimable contrast corresponding to the dot(D, L.T) Note that this always produces a meaningful contrast, not always with the intended properties because q is always non-zero unless L is identically 0. That is, it produces a contrast that spans the column space of L (after projection onto the column space of D). Parameters ---------- L : array_like D : array_like """ L = np.asarray(L) D = np.asarray(D) n, p = D.shape if L.shape[0] != n and L.shape[1] != p: raise ValueError("shape of L and D mismatched") if pseudo is None: pseudo = np.linalg.pinv(D) # D^+ = dot(inv(dot(D.T, D)), D.T) when D has full column rank if L.shape[0] == n: C = np.dot(pseudo, L).T else: C = L C = np.dot(pseudo, np.dot(D, C.T)).T Lp = np.dot(D, C.T) if len(Lp.shape) == 1: Lp.shape = (n, 1) if np.linalg.matrix_rank(Lp) != Lp.shape[1]: Lp = fullrank(Lp) C = np.dot(pseudo, Lp).T return np.squeeze(C)
From an n x p design matrix D and a matrix L, tries to determine a p x q contrast matrix C which determines a contrast of full rank, i.e. the q x n matrix dot(transpose(C), pinv(D)) has full rank. L must satisfy either L.shape[0] == n or L.shape[1] == p. If L.shape[0] == n, then L is thought of as representing columns in the column space of D. If L.shape[1] == p, then L is thought of as what is known as a contrast matrix. In this case, this function returns an estimable contrast corresponding to the dot(D, L.T) Note that this always produces a meaningful contrast, not always with the intended properties because q is always non-zero unless L is identically 0. That is, it produces a contrast that spans the column space of L (after projection onto the column space of D). Parameters ---------- L : array_like D : array_like
contrastfromcols
python
statsmodels/statsmodels
statsmodels/stats/contrast.py
https://github.com/statsmodels/statsmodels/blob/master/statsmodels/stats/contrast.py
BSD-3-Clause
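A small sketch of `contrastfromcols` with a treatment-coded design: D has an intercept plus two dummy columns, and L (with L.shape[1] == p) picks out the dummy block. Data and names are invented.

import numpy as np
from statsmodels.stats.contrast import contrastfromcols

g = np.repeat([0, 1, 2], 10)
D = np.column_stack([np.ones(30), g == 1, g == 2]).astype(float)
L = np.array([[0., 1., 0.],
              [0., 0., 1.]])

C = contrastfromcols(L, D)
print(C)   # estimable contrast spanning the dummy effects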
def col_names(self): """column names for summary table """ pr_test = "P>%s" % self.distribution col_names = [self.distribution, pr_test, 'df constraint'] if self.distribution == 'F': col_names.append('df denom') return col_names
column names for summary table
col_names
python
statsmodels/statsmodels
statsmodels/stats/contrast.py
https://github.com/statsmodels/statsmodels/blob/master/statsmodels/stats/contrast.py
BSD-3-Clause
def _get_pairs_labels(k_level, level_names): """helper function for labels for pairwise comparisons """ idx_pairs_all = np.triu_indices(k_level, 1) labels = [f'{level_names[name[1]]}-{level_names[name[0]]}' for name in zip(*idx_pairs_all)] return labels
helper function for labels for pairwise comparisons
_get_pairs_labels
python
statsmodels/statsmodels
statsmodels/stats/contrast.py
https://github.com/statsmodels/statsmodels/blob/master/statsmodels/stats/contrast.py
BSD-3-Clause
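The helper above just enumerates upper-triangular index pairs; a quick sketch of the label order it produces for three invented level names.

import numpy as np

level_names = ['a', 'b', 'c']
i, j = np.triu_indices(len(level_names), 1)
labels = [f'{level_names[jj]}-{level_names[ii]}' for ii, jj in zip(i, j)]
print(labels)   # ['b-a', 'c-a', 'c-b']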
def _contrast_pairs(k_params, k_level, idx_start): """create pairwise contrast for reference coding currently not used, using encoding contrast matrix is more general, but requires factor information from a formula's model_spec. Parameters ---------- k_params : int number of parameters k_level : int number of levels or categories (including reference case) idx_start : int Index of the first parameter of this factor. The restrictions on the factor are inserted as a block in the full restriction matrix starting at column with index `idx_start`. Returns ------- contrasts : ndarray restriction matrix with k_params columns and number of rows equal to the number of restrictions. """ k_level_m1 = k_level - 1 idx_pairs = np.triu_indices(k_level_m1, 1) k = len(idx_pairs[0]) c_pairs = np.zeros((k, k_level_m1)) c_pairs[np.arange(k), idx_pairs[0]] = -1 c_pairs[np.arange(k), idx_pairs[1]] = 1 c_reference = np.eye(k_level_m1) c = np.concatenate((c_reference, c_pairs), axis=0) k_all = c.shape[0] contrasts = np.zeros((k_all, k_params)) contrasts[:, idx_start : idx_start + k_level_m1] = c return contrasts
create pairwise contrast for reference coding currently not used, using encoding contrast matrix is more general, but requires factor information from a formula's model_spec. Parameters ---------- k_params : int number of parameters k_level : int number of levels or categories (including reference case) idx_start : int Index of the first parameter of this factor. The restrictions on the factor are inserted as a block in the full restriction matrix starting at column with index `idx_start`. Returns ------- contrasts : ndarray restriction matrix with k_params columns and number of rows equal to the number of restrictions.
_contrast_pairs
python
statsmodels/statsmodels
statsmodels/stats/contrast.py
https://github.com/statsmodels/statsmodels/blob/master/statsmodels/stats/contrast.py
BSD-3-Clause
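A sketch of the restriction block that `_contrast_pairs` builds for a 3-level factor under reference coding; since the function is module-private, its core logic is replicated here for an assumed 4-parameter model (intercept plus two dummies).

import numpy as np

k_level = 3                                  # levels including reference
k_m1 = k_level - 1
idx = np.triu_indices(k_m1, 1)
k = len(idx[0])
c_pairs = np.zeros((k, k_m1))
c_pairs[np.arange(k), idx[0]] = -1
c_pairs[np.arange(k), idx[1]] = 1
c = np.concatenate((np.eye(k_m1), c_pairs), axis=0)

contrasts = np.zeros((c.shape[0], 4))        # embed after the intercept
contrasts[:, 1:3] = c
print(contrasts)   # reference contrasts, then pairwise differences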
def t_test_multi(result, contrasts, method='hs', alpha=0.05, ci_method=None, contrast_names=None): """perform t_test and add multiplicity correction to results dataframe Parameters ---------- result : results instance results of an estimated model contrasts : ndarray restriction matrix for t_test method : str or list of strings method for multiple testing p-value correction, default is 'hs'. alpha : float significance level for multiple testing reject decision. ci_method : None not used yet, will be for multiplicity corrected confidence intervals contrast_names : {list[str], None} If contrast_names are provided, then they are used in the index of the returned dataframe, otherwise some generic default names are created. Returns ------- res_df : pandas DataFrame The dataframe contains the results of the t_test and additional columns for multiplicity corrected p-values and boolean indicator for whether the Null hypothesis is rejected. """ tt = result.t_test(contrasts) res_df = tt.summary_frame(xname=contrast_names) if type(method) is not list: method = [method] for meth in method: mt = multipletests(tt.pvalue, method=meth, alpha=alpha) res_df['pvalue-%s' % meth] = mt[1] res_df['reject-%s' % meth] = mt[0] return res_df
perform t_test and add multiplicity correction to results dataframe Parameters ---------- result : results instance results of an estimated model contrasts : ndarray restriction matrix for t_test method : str or list of strings method for multiple testing p-value correction, default is 'hs'. alpha : float significance level for multiple testing reject decision. ci_method : None not used yet, will be for multiplicity corrected confidence intervals contrast_names : {list[str], None} If contrast_names are provided, then they are used in the index of the returned dataframe, otherwise some generic default names are created. Returns ------- res_df : pandas DataFrame The dataframe contains the results of the t_test and additional columns for multiplicity corrected p-values and boolean indicator for whether the Null hypothesis is rejected.
t_test_multi
python
statsmodels/statsmodels
statsmodels/stats/contrast.py
https://github.com/statsmodels/statsmodels/blob/master/statsmodels/stats/contrast.py
BSD-3-Clause
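A hedged usage sketch for `t_test_multi` on a simulated OLS fit; the contrasts test each slope against zero, and the contrast names are invented.

import numpy as np
import statsmodels.api as sm
from statsmodels.stats.contrast import t_test_multi

rng = np.random.default_rng(7)
X = sm.add_constant(rng.normal(size=(200, 3)))
y = X @ np.array([1.0, 0.3, 0.0, -0.4]) + rng.normal(size=200)
res = sm.OLS(y, X).fit()

contrasts = np.eye(4)[1:]    # one row per slope, intercept dropped
frame = t_test_multi(res, contrasts, method=['hs', 'bonferroni'],
                     contrast_names=['x1', 'x2', 'x3'])
print(frame)   # adds pvalue-hs / reject-hs style columns per method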
def _embed_constraints(contrasts, k_params, idx_start, index=None): """helper function to expand constraints to a full restriction matrix Parameters ---------- contrasts : ndarray restriction matrix for t_test k_params : int number of parameters idx_start : int Index of the first parameter of this factor. The restrictions on the factor are inserted as a block in the full restriction matrix starting at column with index `idx_start`. index : slice or ndarray Column index if constraints do not form a block in the full restriction matrix, i.e. if parameters that are subject to restrictions are not consecutive in the list of parameters. If index is not None, then idx_start is ignored. Returns ------- contrasts : ndarray restriction matrix with k_params columns and number of rows equal to the number of restrictions. """ k_c, k_p = contrasts.shape c = np.zeros((k_c, k_params)) if index is None: c[:, idx_start : idx_start + k_p] = contrasts else: c[:, index] = contrasts return c
helper function to expand constraints to a full restriction matrix Parameters ---------- contrasts : ndarray restriction matrix for t_test k_params : int number of parameters idx_start : int Index of the first parameter of this factor. The restrictions on the factor are inserted as a block in the full restriction matrix starting at column with index `idx_start`. index : slice or ndarray Column index if constraints do not form a block in the full restriction matrix, i.e. if parameters that are subject to restrictions are not consecutive in the list of parameters. If index is not None, then idx_start is ignored. Returns ------- contrasts : ndarray restriction matrix with k_params columns and number of rows equal to the number of restrictions.
_embed_constraints
python
statsmodels/statsmodels
statsmodels/stats/contrast.py
https://github.com/statsmodels/statsmodels/blob/master/statsmodels/stats/contrast.py
BSD-3-Clause
def _constraints_factor(encoding_matrix, comparison='pairwise', k_params=None, idx_start=None): """helper function to create constraints based on encoding matrix Parameters ---------- encoding_matrix : ndarray contrast matrix for the encoding of a factor as defined by patsy. The number of rows should be equal to the number of levels or categories of the factor, the number of columns should be equal to the number of parameters for this factor. comparison : str Currently only 'pairwise' is implemented. The restriction matrix can be used for testing the hypothesis that all pairwise differences are zero. k_params : int number of parameters idx_start : int Index of the first parameter of this factor. The restrictions on the factor are inserted as a block in the full restriction matrix starting at column with index `idx_start`. Returns ------- contrast : ndarray Contrast or restriction matrix that can be used in hypothesis test of model results. The number of columns is k_params. """ cm = encoding_matrix k_level, k_p = cm.shape import statsmodels.sandbox.stats.multicomp as mc if comparison in ['pairwise', 'pw', 'pairs']: c_all = -mc.contrast_allpairs(k_level) else: raise NotImplementedError('currently only pairwise comparison') contrasts = c_all.dot(cm) if k_params is not None: if idx_start is None: raise ValueError("if k_params is not None, then idx_start is " "required") contrasts = _embed_constraints(contrasts, k_params, idx_start) return contrasts
helper function to create constraints based on encoding matrix Parameters ---------- encoding_matrix : ndarray contrast matrix for the encoding of a factor as defined by patsy. The number of rows should be equal to the number of levels or categories of the factor, the number of columns should be equal to the number of parameters for this factor. comparison : str Currently only 'pairwise' is implemented. The restriction matrix can be used for testing the hypothesis that all pairwise differences are zero. k_params : int number of parameters idx_start : int Index of the first parameter of this factor. The restrictions on the factor are inserted as a block in the full restriction matrix starting at column with index `idx_start`. Returns ------- contrast : ndarray Contrast or restriction matrix that can be used in hypothesis test of model results. The number of columns is k_params.
_constraints_factor
python
statsmodels/statsmodels
statsmodels/stats/contrast.py
https://github.com/statsmodels/statsmodels/blob/master/statsmodels/stats/contrast.py
BSD-3-Clause
def t_test_pairwise(result, term_name, method='hs', alpha=0.05, factor_labels=None, ignore=False): """ Perform pairwise t_test with multiple testing corrected p-values. This uses the formula's model_spec encoding contrast matrix and should work for all encodings of a main effect. Parameters ---------- result : result instance The results of an estimated model with a categorical main effect. term_name : str name of the term for which pairwise comparisons are computed. Term names for categorical effects are created by patsy and correspond to the main part of the exog names. method : {str, list[str]} multiple testing p-value correction, default is 'hs', see stats.multipletesting alpha : float significance level for multiple testing reject decision. factor_labels : {list[str], None} Labels for the factor levels used for pairwise labels. If not provided, then the labels from the formula's model_spec are used. ignore : bool Turn off some of the exceptions raised by input checks. Returns ------- MultiCompResult The results are stored as attributes, the main attributes are the following two. Other attributes are added for debugging purposes or as background information. - result_frame : pandas DataFrame with t_test results and multiple testing corrected p-values. - contrasts : matrix of constraints of the null hypothesis in the t_test. Notes ----- Status: experimental. Currently only checked for treatment coding with and without specified reference level. Currently there are no multiple testing corrected confidence intervals available. """ mgr = FormulaManager() model_spec = result.model.data.model_spec term_idx = mgr.get_term_names(model_spec).index(term_name) term = model_spec.terms[term_idx] idx_start = model_spec.term_slices[term].start if not ignore and len(term.factors) > 1: raise ValueError('interaction effects not yet supported') factor = term.factors[0] cat = mgr.get_factor_categories(factor, model_spec) # cat = model_spec.encoder_state[factor][1]["categories"] # model_spec.factor_infos[factor].categories if factor_labels is not None: if len(factor_labels) == len(cat): cat = factor_labels else: raise ValueError("factor_labels has the wrong length, should be %d" % len(cat)) k_level = len(cat) cm = mgr.get_contrast_matrix(term, factor, model_spec) k_params = len(result.params) labels = _get_pairs_labels(k_level, cat) import statsmodels.sandbox.stats.multicomp as mc c_all_pairs = -mc.contrast_allpairs(k_level) contrasts_sub = c_all_pairs.dot(cm) contrasts = _embed_constraints(contrasts_sub, k_params, idx_start) res_df = t_test_multi(result, contrasts, method=method, ci_method=None, alpha=alpha, contrast_names=labels) res = MultiCompResult(result_frame=res_df, contrasts=contrasts, term=term, contrast_labels=labels, term_encoding_matrix=cm) return res
Perform pairwise t_test with multiple testing corrected p-values. This uses the formula's model_spec encoding contrast matrix and should work for all encodings of a main effect. Parameters ---------- result : result instance The results of an estimated model with a categorical main effect. term_name : str name of the term for which pairwise comparisons are computed. Term names for categorical effects are created by patsy and correspond to the main part of the exog names. method : {str, list[str]} multiple testing p-value correction, default is 'hs', see stats.multipletesting alpha : float significance level for multiple testing reject decision. factor_labels : {list[str], None} Labels for the factor levels used for pairwise labels. If not provided, then the labels from the formula's model_spec are used. ignore : bool Turn off some of the exceptions raised by input checks. Returns ------- MultiCompResult The results are stored as attributes, the main attributes are the following two. Other attributes are added for debugging purposes or as background information. - result_frame : pandas DataFrame with t_test results and multiple testing corrected p-values. - contrasts : matrix of constraints of the null hypothesis in the t_test. Notes ----- Status: experimental. Currently only checked for treatment coding with and without specified reference level. Currently there are no multiple testing corrected confidence intervals available.
t_test_pairwise
python
statsmodels/statsmodels
statsmodels/stats/contrast.py
https://github.com/statsmodels/statsmodels/blob/master/statsmodels/stats/contrast.py
BSD-3-Clause
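A usage sketch for pairwise comparisons on a categorical main effect; data, group means, and names are invented, and the results-method spelling `res.t_test_pairwise` wraps the function above.

import numpy as np
import pandas as pd
import statsmodels.formula.api as smf

rng = np.random.default_rng(3)
df = pd.DataFrame({
    'g': np.repeat(['a', 'b', 'c'], 40),
    'y': np.concatenate([rng.normal(m, 1, 40) for m in (0.0, 0.5, 1.0)]),
})
res = smf.ols('y ~ C(g)', data=df).fit()

mc = res.t_test_pairwise('C(g)', method='hs')
print(mc.result_frame)   # pairwise differences with corrected p-values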
def _offset_constraint(r_matrix, params_est, params_alt): """offset to the value of a linear constraint for new params usage: (cm, v) is original constraint vo = offset_constraint(cm, res2.params, params_alt) fs = res2.wald_test((cm, v + vo)) nc = fs.statistic * fs.df_num """ diff_est = r_matrix @ params_est diff_alt = r_matrix @ params_alt return diff_est - diff_alt
offset to the value of a linear constraint for new params usage: (cm, v) is original constraint vo = offset_constraint(cm, res2.params, params_alt) fs = res2.wald_test((cm, v + vo)) nc = fs.statistic * fs.df_num
_offset_constraint
python
statsmodels/statsmodels
statsmodels/stats/contrast.py
https://github.com/statsmodels/statsmodels/blob/master/statsmodels/stats/contrast.py
BSD-3-Clause
def wald_test_noncent(params, r_matrix, value, results, diff=None, joint=True): """Noncentrality parameter for a Wald test in model results The null hypothesis is ``diff = r_matrix @ params - value = 0`` Parameters ---------- params : ndarray parameters of the model at which to evaluate noncentrality. This can be estimated parameters or parameters under an alternative. r_matrix : ndarray Restriction matrix or contrasts for the Null hypothesis value : None or ndarray Value of the linear combination of parameters under the null hypothesis. If value is None, then it will be replaced by zero. results : Results instance of a model The results instance is used to compute the covariance matrix of the linear constraints using `cov_params`. diff : None or ndarray If diff is not None, then it will be used instead of ``diff = r_matrix @ params - value`` joint : bool If joint is True, then the noncentrality parameter for the joint hypothesis will be returned. If joint is False, then an array of noncentrality parameters will be returned, where elements correspond to rows of the restriction matrix. This corresponds to the `t_test` in models and is not a quadratic form. Returns ------- nc : float or ndarray Noncentrality parameter for Wald tests, corresponding to `wald_test` or `t_test` depending on whether `joint` is true or not. It needs to be divided by nobs to obtain effect size. Notes ----- Status : experimental, API will likely change """ if diff is None: diff = r_matrix @ params - value # at parameter under alternative cov_c = results.cov_params(r_matrix=r_matrix) if joint: nc = diff @ np.linalg.solve(cov_c, diff) else: nc = diff / np.sqrt(np.diag(cov_c)) return nc
Noncentrality parameter for a Wald test in model results The null hypothesis is ``diff = r_matrix @ params - value = 0`` Parameters ---------- params : ndarray parameters of the model at which to evaluate noncentrality. This can be estimated parameters or parameters under an alternative. r_matrix : ndarray Restriction matrix or contrasts for the Null hypothesis value : None or ndarray Value of the linear combination of parameters under the null hypothesis. If value is None, then it will be replaced by zero. results : Results instance of a model The results instance is used to compute the covariance matrix of the linear constraints using `cov_params`. diff : None or ndarray If diff is not None, then it will be used instead of ``diff = r_matrix @ params - value`` joint : bool If joint is True, then the noncentrality parameter for the joint hypothesis will be returned. If joint is False, then an array of noncentrality parameters will be returned, where elements correspond to rows of the restriction matrix. This corresponds to the `t_test` in models and is not a quadratic form. Returns ------- nc : float or ndarray Noncentrality parameter for Wald tests, corresponding to `wald_test` or `t_test` depending on whether `joint` is true or not. It needs to be divided by nobs to obtain effect size. Notes ----- Status : experimental, API will likely change
wald_test_noncent
python
statsmodels/statsmodels
statsmodels/stats/contrast.py
https://github.com/statsmodels/statsmodels/blob/master/statsmodels/stats/contrast.py
BSD-3-Clause
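A hedged sketch of the noncentrality at an alternative parameter vector; the fit and the alternative below are invented, and dividing by nobs gives the effect size mentioned in the docstring.

import numpy as np
import statsmodels.api as sm
from statsmodels.stats.contrast import wald_test_noncent

rng = np.random.default_rng(2)
X = sm.add_constant(rng.normal(size=(500, 2)))
y = X @ np.array([1.0, 0.2, 0.0]) + rng.normal(size=500)
res = sm.OLS(y, X).fit()

r_matrix = np.eye(3)[1:]                  # restrict both slopes
params_alt = np.array([1.0, 0.2, 0.1])    # alternative of interest
nc = wald_test_noncent(params_alt, r_matrix, np.zeros(2), res, joint=True)
print(nc, nc / 500)   # noncentrality and effect size nc / nobs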
def wald_test_noncent_generic(params, r_matrix, value, cov_params, diff=None, joint=True): """noncentrality parameter for a Wald test The null hypothesis is ``diff = r_matrix @ params - value = 0`` Parameters ---------- params : ndarray parameters of the model at which to evaluate noncentrality. This can be estimated parameters or parameters under an alternative. r_matrix : ndarray Restriction matrix or contrasts for the Null hypothesis value : None or ndarray Value of the linear combination of parameters under the null hypothesis. If value is None, then it will be replaced by zero. cov_params : ndarray covariance matrix of the parameter estimates diff : None or ndarray If diff is not None, then it will be used instead of ``diff = r_matrix @ params - value`` joint : bool If joint is True, then the noncentrality parameter for the joint hypothesis will be returned. If joint is False, then an array of noncentrality parameters will be returned, where elements correspond to rows of the restriction matrix. This corresponds to the `t_test` in models and is not a quadratic form. Returns ------- nc : float or ndarray Noncentrality parameter for Wald tests, corresponding to `wald_test` or `t_test` depending on whether `joint` is true or not. It needs to be divided by nobs to obtain effect size. Notes ----- Status : experimental, API will likely change """ if value is None: value = 0 if diff is None: # at parameter under alternative diff = r_matrix @ params - value c = r_matrix cov_c = c.dot(cov_params).dot(c.T) if joint: nc = diff @ np.linalg.solve(cov_c, diff) else: nc = diff / np.sqrt(np.diag(cov_c)) return nc
noncentrality parameter for a Wald test The null hypothesis is ``diff = r_matrix @ params - value = 0`` Parameters ---------- params : ndarray parameters of the model at which to evaluate noncentrality. This can be estimated parameters or parameters under an alternative. r_matrix : ndarray Restriction matrix or contrasts for the Null hypothesis value : None or ndarray Value of the linear combination of parameters under the null hypothesis. If value is None, then it will be replaced by zero. cov_params : ndarray covariance matrix of the parameter estimates diff : None or ndarray If diff is not None, then it will be used instead of ``diff = r_matrix @ params - value`` joint : bool If joint is True, then the noncentrality parameter for the joint hypothesis will be returned. If joint is False, then an array of noncentrality parameters will be returned, where elements correspond to rows of the restriction matrix. This corresponds to the `t_test` in models and is not a quadratic form. Returns ------- nc : float or ndarray Noncentrality parameter for Wald tests, corresponding to `wald_test` or `t_test` depending on whether `joint` is true or not. It needs to be divided by nobs to obtain effect size. Notes ----- Status : experimental, API will likely change
wald_test_noncent_generic
python
statsmodels/statsmodels
statsmodels/stats/contrast.py
https://github.com/statsmodels/statsmodels/blob/master/statsmodels/stats/contrast.py
BSD-3-Clause
def _HCCM(results, scale): ''' sandwich with pinv(x) * diag(scale) * pinv(x).T where pinv(x) = (X'X)^(-1) X' and scale is (nobs,) ''' H = np.dot(results.model.pinv_wexog, scale[:,None]*results.model.pinv_wexog.T) return H
sandwich with pinv(x) * diag(scale) * pinv(x).T where pinv(x) = (X'X)^(-1) X' and scale is (nobs,)
_HCCM
python
statsmodels/statsmodels
statsmodels/stats/sandwich_covariance.py
https://github.com/statsmodels/statsmodels/blob/master/statsmodels/stats/sandwich_covariance.py
BSD-3-Clause
def cov_hc0(results): """ See statsmodels.RegressionResults """ het_scale = results.resid**2 # or whitened residuals? only OLS? cov_hc0 = _HCCM(results, het_scale) return cov_hc0
See statsmodels.RegressionResults
cov_hc0
python
statsmodels/statsmodels
statsmodels/stats/sandwich_covariance.py
https://github.com/statsmodels/statsmodels/blob/master/statsmodels/stats/sandwich_covariance.py
BSD-3-Clause
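A quick numerical check, on simulated heteroskedastic data, that the HC0 sandwich above matches the `cov_HC0` attribute exposed on OLS results.

import numpy as np
import statsmodels.api as sm
from statsmodels.stats.sandwich_covariance import cov_hc0

rng = np.random.default_rng(5)
X = sm.add_constant(rng.normal(size=(150, 2)))
y = X @ np.array([1.0, 0.5, -0.3]) + rng.normal(size=150) * (1 + np.abs(X[:, 1]))
res = sm.OLS(y, X).fit()

print(np.allclose(cov_hc0(res), res.cov_HC0))   # True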
def _get_sandwich_arrays(results, cov_type=''): """Helper function to get scores and inverse Hessian from a results instance or a (jac, hessian_inv) tuple """ if isinstance(results, tuple): # assume we have jac and hessian_inv jac, hessian_inv = results xu = jac = np.asarray(jac) hessian_inv = np.asarray(hessian_inv) elif hasattr(results, 'model'): if hasattr(results, '_results'): # remove wrapper results = results._results # assume we have a results instance if hasattr(results.model, 'jac'): xu = results.model.jac(results.params) hessian_inv = np.linalg.inv(results.model.hessian(results.params)) elif hasattr(results.model, 'score_obs'): xu = results.model.score_obs(results.params) hessian_inv = np.linalg.inv(results.model.hessian(results.params)) else: xu = results.model.wexog * results.wresid[:, None] hessian_inv = np.asarray(results.normalized_cov_params) # experimental support for freq_weights if hasattr(results.model, 'freq_weights') and not cov_type == 'clu': # we do not want to square the weights in the covariance calculations # assumes that freq_weights are incorporated in score_obs or equivalent # assumes xu/score_obs is 2D # temporary asarray xu /= np.sqrt(np.asarray(results.model.freq_weights)[:, None]) else: raise ValueError('need either tuple of (jac, hessian_inv) or results ' + 'instance') return xu, hessian_inv
Helper function to get scores and inverse Hessian from a results instance or a (jac, hessian_inv) tuple
_get_sandwich_arrays
python
statsmodels/statsmodels
statsmodels/stats/sandwich_covariance.py
https://github.com/statsmodels/statsmodels/blob/master/statsmodels/stats/sandwich_covariance.py
BSD-3-Clause
def _HCCM1(results, scale): ''' sandwich with pinv(x) * scale * pinv(x).T where pinv(x) = (X'X)^(-1) X' and scale is (nobs, nobs), or (nobs,) with diagonal matrix diag(scale) Parameters ---------- results : result instance need to contain regression results, uses results.model.pinv_wexog scale : ndarray (nobs,) or (nobs, nobs) scale matrix, treated as diagonal matrix if scale is one-dimensional Returns ------- H : ndarray (k_vars, k_vars) robust covariance matrix for the parameter estimates ''' if scale.ndim == 1: H = np.dot(results.model.pinv_wexog, scale[:,None]*results.model.pinv_wexog.T) else: H = np.dot(results.model.pinv_wexog, np.dot(scale, results.model.pinv_wexog.T)) return H
sandwich with pinv(x) * scale * pinv(x).T where pinv(x) = (X'X)^(-1) X' and scale is (nobs, nobs), or (nobs,) with diagonal matrix diag(scale) Parameters ---------- results : result instance need to contain regression results, uses results.model.pinv_wexog scale : ndarray (nobs,) or (nobs, nobs) scale matrix, treated as diagonal matrix if scale is one-dimensional Returns ------- H : ndarray (k_vars, k_vars) robust covariance matrix for the parameter estimates
_HCCM1
python
statsmodels/statsmodels
statsmodels/stats/sandwich_covariance.py
https://github.com/statsmodels/statsmodels/blob/master/statsmodels/stats/sandwich_covariance.py
BSD-3-Clause
def _HCCM2(hessian_inv, scale): ''' sandwich with (X'X)^(-1) * scale * (X'X)^(-1) scale is (kvars, kvars) this uses results.normalized_cov_params for (X'X)^(-1) Parameters ---------- hessian_inv : ndarray (k_vars, k_vars) bread of the sandwich, e.g. results.normalized_cov_params, i.e. (X'X)^(-1) scale : ndarray (k_vars, k_vars) scale matrix Returns ------- H : ndarray (k_vars, k_vars) robust covariance matrix for the parameter estimates ''' if scale.ndim == 1: scale = scale[:,None] xxi = hessian_inv H = np.dot(np.dot(xxi, scale), xxi.T) return H
sandwich with (X'X)^(-1) * scale * (X'X)^(-1) scale is (kvars, kvars) this uses results.normalized_cov_params for (X'X)^(-1) Parameters ---------- hessian_inv : ndarray (k_vars, k_vars) bread of the sandwich, e.g. results.normalized_cov_params, i.e. (X'X)^(-1) scale : ndarray (k_vars, k_vars) scale matrix Returns ------- H : ndarray (k_vars, k_vars) robust covariance matrix for the parameter estimates
_HCCM2
python
statsmodels/statsmodels
statsmodels/stats/sandwich_covariance.py
https://github.com/statsmodels/statsmodels/blob/master/statsmodels/stats/sandwich_covariance.py
BSD-3-Clause
def weights_bartlett(nlags): '''Bartlett weights for HAC this will be moved to another module Parameters ---------- nlags : int highest lag in the kernel window, this does not include the zero lag Returns ------- kernel : ndarray, (nlags+1,) weights for Bartlett kernel ''' #with lag zero return 1 - np.arange(nlags+1)/(nlags+1.)
Bartlett weights for HAC this will be moved to another module Parameters ---------- nlags : int highest lag in the kernel window, this does not include the zero lag Returns ------- kernel : ndarray, (nlags+1,) weights for Bartlett kernel
weights_bartlett
python
statsmodels/statsmodels
statsmodels/stats/sandwich_covariance.py
https://github.com/statsmodels/statsmodels/blob/master/statsmodels/stats/sandwich_covariance.py
BSD-3-Clause
def weights_uniform(nlags): '''uniform weights for HAC this will be moved to another module Parameters ---------- nlags : int highest lag in the kernel window, this does not include the zero lag Returns ------- kernel : ndarray, (nlags+1,) weights for uniform kernel ''' #with lag zero return np.ones(nlags+1)
uniform weights for HAC this will be moved to another module Parameters ---------- nlags : int highest lag in the kernel window, this does not include the zero lag Returns ------- kernel : ndarray, (nlags+1,) weights for uniform kernel
weights_uniform
python
statsmodels/statsmodels
statsmodels/stats/sandwich_covariance.py
https://github.com/statsmodels/statsmodels/blob/master/statsmodels/stats/sandwich_covariance.py
BSD-3-Clause
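The Bartlett weights decline linearly and reach zero one step past nlags, while the uniform kernel weights all lags equally; a quick look at both for nlags=4.

from statsmodels.stats.sandwich_covariance import (
    weights_bartlett, weights_uniform)

print(weights_bartlett(4))   # [1.  0.8 0.6 0.4 0.2]
print(weights_uniform(4))    # [1. 1. 1. 1. 1.]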
def S_hac_simple(x, nlags=None, weights_func=weights_bartlett): '''inner covariance matrix for HAC (Newey, West) sandwich assumes we have a single time series with zero axis consecutive, equal spaced time periods Parameters ---------- x : ndarray (nobs,) or (nobs, k_var) data, for HAC this is array of x_i * u_i nlags : int or None highest lag to include in kernel window. If None, then nlags = floor(4(T/100)^(2/9)) is used. weights_func : callable weights_func is called with nlags as argument to get the kernel weights. default are Bartlett weights Returns ------- S : ndarray, (k_vars, k_vars) inner covariance matrix for sandwich Notes ----- used by cov_hac_simple options might change when other kernels besides Bartlett are available. ''' if x.ndim == 1: x = x[:,None] n_periods = x.shape[0] if nlags is None: nlags = int(np.floor(4 * (n_periods / 100.)**(2./9.))) weights = weights_func(nlags) S = weights[0] * np.dot(x.T, x) #weights[0] just for completeness, is 1 for lag in range(1, nlags+1): s = np.dot(x[lag:].T, x[:-lag]) S += weights[lag] * (s + s.T) return S
inner covariance matrix for HAC (Newey, West) sandwich assumes we have a single time series with zero axis consecutive, equal spaced time periods Parameters ---------- x : ndarray (nobs,) or (nobs, k_var) data, for HAC this is array of x_i * u_i nlags : int or None highest lag to include in kernel window. If None, then nlags = floor(4(T/100)^(2/9)) is used. weights_func : callable weights_func is called with nlags as argument to get the kernel weights. default are Bartlett weights Returns ------- S : ndarray, (k_vars, k_vars) inner covariance matrix for sandwich Notes ----- used by cov_hac_simple options might change when other kernels besides Bartlett are available.
S_hac_simple
python
statsmodels/statsmodels
statsmodels/stats/sandwich_covariance.py
https://github.com/statsmodels/statsmodels/blob/master/statsmodels/stats/sandwich_covariance.py
BSD-3-Clause
def S_white_simple(x): '''inner covariance matrix for White heteroscedasticity sandwich Parameters ---------- x : ndarray (nobs,) or (nobs, k_var) data, for HAC this is array of x_i * u_i Returns ------- S : ndarray, (k_vars, k_vars) inner covariance matrix for sandwich Notes ----- this is just dot(X.T, X) ''' if x.ndim == 1: x = x[:,None] return np.dot(x.T, x)
inner covariance matrix for White heteroscedasticity sandwich Parameters ---------- x : ndarray (nobs,) or (nobs, k_var) data, for HAC this is array of x_i * u_i Returns ------- S : ndarray, (k_vars, k_vars) inner covariance matrix for sandwich Notes ----- this is just dot(X.T, X)
S_white_simple
python
statsmodels/statsmodels
statsmodels/stats/sandwich_covariance.py
https://github.com/statsmodels/statsmodels/blob/master/statsmodels/stats/sandwich_covariance.py
BSD-3-Clause
def S_hac_groupsum(x, time, nlags=None, weights_func=weights_bartlett): '''inner covariance matrix for HAC over group sums sandwich This assumes we have complete equal spaced time periods. The number of time periods per group need not be the same, but we need at least one observation for each time period For a single categorical group only, or for everything except the time dimension. This first aggregates x over groups for each time period, then applies HAC on the sum per period. Parameters ---------- x : ndarray (nobs,) or (nobs, k_var) data, for HAC this is array of x_i * u_i time : ndarray, (nobs,) time indices, assumed to be integers in range(n_periods) nlags : int or None highest lag to include in kernel window. If None, then nlags = floor[4(T/100)^(2/9)] is used. weights_func : callable weights_func is called with nlags as argument to get the kernel weights. default are Bartlett weights Returns ------- S : ndarray, (k_vars, k_vars) inner covariance matrix for sandwich References ---------- Daniel Hoechle, xtscc paper Driscoll and Kraay ''' #needs groupsums x_group_sums = group_sums(x, time).T #TODO: transpose return in group_sums return S_hac_simple(x_group_sums, nlags=nlags, weights_func=weights_func)
inner covariance matrix for HAC over group sums sandwich This assumes we have complete equal spaced time periods. The number of time periods per group need not be the same, but we need at least one observation for each time period For a single categorical group only, or for everything except the time dimension. This first aggregates x over groups for each time period, then applies HAC on the sum per period. Parameters ---------- x : ndarray (nobs,) or (nobs, k_var) data, for HAC this is array of x_i * u_i time : ndarray, (nobs,) time indices, assumed to be integers in range(n_periods) nlags : int or None highest lag to include in kernel window. If None, then nlags = floor[4(T/100)^(2/9)] is used. weights_func : callable weights_func is called with nlags as argument to get the kernel weights. default are Bartlett weights Returns ------- S : ndarray, (k_vars, k_vars) inner covariance matrix for sandwich References ---------- Daniel Hoechle, xtscc paper Driscoll and Kraay
S_hac_groupsum
python
statsmodels/statsmodels
statsmodels/stats/sandwich_covariance.py
https://github.com/statsmodels/statsmodels/blob/master/statsmodels/stats/sandwich_covariance.py
BSD-3-Clause
def S_crosssection(x, group): '''inner covariance matrix for White on group sums sandwich For a single categorical group only; the group can also be the product/intersection of several groups. This is used by cov_cluster and indirectly verified ''' x_group_sums = group_sums(x, group).T #TODO: why transposed return S_white_simple(x_group_sums)
inner covariance matrix for White on group sums sandwich For a single categorical group only; the group can also be the product/intersection of several groups. This is used by cov_cluster and indirectly verified
S_crosssection
python
statsmodels/statsmodels
statsmodels/stats/sandwich_covariance.py
https://github.com/statsmodels/statsmodels/blob/master/statsmodels/stats/sandwich_covariance.py
BSD-3-Clause
def cov_crosssection_0(results, group): '''this one is still wrong, use cov_cluster instead''' #TODO: currently used version of groupsums requires 2d resid scale = S_crosssection(results.resid[:,None], group) scale = np.squeeze(scale) cov = _HCCM1(results, scale) return cov
this one is still wrong, use cov_cluster instead
cov_crosssection_0
python
statsmodels/statsmodels
statsmodels/stats/sandwich_covariance.py
https://github.com/statsmodels/statsmodels/blob/master/statsmodels/stats/sandwich_covariance.py
BSD-3-Clause
def cov_cluster(results, group, use_correction=True): '''cluster robust covariance matrix Calculates sandwich covariance matrix for a single cluster, i.e. grouped variables. Parameters ---------- results : result instance result of a regression, uses results.model.exog and results.resid TODO: this should use wexog instead use_correction : bool If true (default), then the small sample correction factor is used. Returns ------- cov : ndarray, (k_vars, k_vars) cluster robust covariance matrix for parameter estimates Notes ----- same result as Stata in UCLA example and same as Peterson ''' #TODO: currently used version of groupsums requires 2d resid xu, hessian_inv = _get_sandwich_arrays(results, cov_type='clu') if not hasattr(group, 'dtype') or group.dtype != np.dtype('int'): clusters, group = np.unique(group, return_inverse=True) else: clusters = np.unique(group) scale = S_crosssection(xu, group) nobs, k_params = xu.shape n_groups = len(clusters) #replace with stored group attributes if available cov_c = _HCCM2(hessian_inv, scale) if use_correction: cov_c *= (n_groups / (n_groups - 1.) * ((nobs-1.) / float(nobs - k_params))) return cov_c
cluster robust covariance matrix Calculates sandwich covariance matrix for a single cluster, i.e. grouped variables. Parameters ---------- results : result instance result of a regression, uses results.model.exog and results.resid TODO: this should use wexog instead use_correction : bool If true (default), then the small sample correction factor is used. Returns ------- cov : ndarray, (k_vars, k_vars) cluster robust covariance matrix for parameter estimates Notes ----- same result as Stata in UCLA example and same as Peterson
cov_cluster
python
statsmodels/statsmodels
statsmodels/stats/sandwich_covariance.py
https://github.com/statsmodels/statsmodels/blob/master/statsmodels/stats/sandwich_covariance.py
BSD-3-Clause
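A hedged usage sketch for cluster-robust standard errors on a simulated grouped sample; the cluster-robust bse are the square roots of the diagonal.

import numpy as np
import statsmodels.api as sm
import statsmodels.stats.sandwich_covariance as sw

rng = np.random.default_rng(9)
n_groups, per_group = 30, 10
group = np.repeat(np.arange(n_groups), per_group)
u = rng.normal(size=n_groups)[group] + rng.normal(size=n_groups * per_group)
X = sm.add_constant(rng.normal(size=(n_groups * per_group, 2)))
y = X @ np.array([1.0, 0.5, -0.2]) + u
res = sm.OLS(y, X).fit()

cov = sw.cov_cluster(res, group)
print(np.sqrt(np.diag(cov)))   # cluster-robust standard errors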
def cov_cluster_2groups(results, group, group2=None, use_correction=True): '''cluster robust covariance matrix for two groups/clusters Parameters ---------- results : result instance result of a regression, uses results.model.exog and results.resid TODO: this should use wexog instead use_correction : bool If true (default), then the small sample correction factor is used. Returns ------- cov_both : ndarray, (k_vars, k_vars) cluster robust covariance matrix for parameter estimates, for both clusters cov_0 : ndarray, (k_vars, k_vars) cluster robust covariance matrix for parameter estimates for first cluster cov_1 : ndarray, (k_vars, k_vars) cluster robust covariance matrix for parameter estimates for second cluster Notes ----- verified against Peterson's table, (4 decimal print precision) ''' if group2 is None: if group.ndim != 2 or group.shape[1] != 2: raise ValueError('if group2 is not given, then groups needs to be ' + 'an array with two columns') group0 = group[:, 0] group1 = group[:, 1] else: group0 = group group1 = group2 group = (group0, group1) cov0 = cov_cluster(results, group0, use_correction=use_correction) # cov_cluster returns only the covariance matrix cov1 = cov_cluster(results, group1, use_correction=use_correction) # cov of cluster formed by intersection of two groups cov01 = cov_cluster(results, combine_indices(group)[0], use_correction=use_correction) #robust cov matrix for union of groups cov_both = cov0 + cov1 - cov01 #return all three (for now?) return cov_both, cov0, cov1
cluster robust covariance matrix for two groups/clusters Parameters ---------- results : result instance result of a regression, uses results.model.exog and results.resid TODO: this should use wexog instead use_correction : bool If true (default), then the small sample correction factor is used. Returns ------- cov_both : ndarray, (k_vars, k_vars) cluster robust covariance matrix for parameter estimates, for both clusters cov_0 : ndarray, (k_vars, k_vars) cluster robust covariance matrix for parameter estimates for first cluster cov_1 : ndarray, (k_vars, k_vars) cluster robust covariance matrix for parameter estimates for second cluster Notes ----- verified against Peterson's table, (4 decimal print precision)
cov_cluster_2groups
python
statsmodels/statsmodels
statsmodels/stats/sandwich_covariance.py
https://github.com/statsmodels/statsmodels/blob/master/statsmodels/stats/sandwich_covariance.py
BSD-3-Clause
def cov_white_simple(results, use_correction=True): ''' heteroscedasticity robust covariance matrix (White) Parameters ---------- results : result instance result of a regression, uses results.model.exog and results.resid TODO: this should use wexog instead Returns ------- cov : ndarray, (k_vars, k_vars) heteroscedasticity robust covariance matrix for parameter estimates Notes ----- This produces the same result as cov_hc0, and does not include any small sample correction. verified (against LinearRegressionResults and Peterson) See Also -------- cov_hc1, cov_hc2, cov_hc3 : heteroscedasticity robust covariance matrices with small sample corrections ''' xu, hessian_inv = _get_sandwich_arrays(results) sigma = S_white_simple(xu) cov_w = _HCCM2(hessian_inv, sigma) #add bread to sandwich if use_correction: nobs, k_params = xu.shape cov_w *= nobs / float(nobs - k_params) return cov_w
heteroscedasticity robust covariance matrix (White) Parameters ---------- results : result instance result of a regression, uses results.model.exog and results.resid TODO: this should use wexog instead Returns ------- cov : ndarray, (k_vars, k_vars) heteroscedasticity robust covariance matrix for parameter estimates Notes ----- This produces the same result as cov_hc0, and does not include any small sample correction. verified (against LinearRegressionResults and Peterson) See Also -------- cov_hc1, cov_hc2, cov_hc3 : heteroscedasticity robust covariance matrices with small sample corrections
cov_white_simple
python
statsmodels/statsmodels
statsmodels/stats/sandwich_covariance.py
https://github.com/statsmodels/statsmodels/blob/master/statsmodels/stats/sandwich_covariance.py
BSD-3-Clause
def cov_hac_simple(results, nlags=None, weights_func=weights_bartlett, use_correction=True): ''' heteroscedasticity and autocorrelation robust covariance matrix (Newey-West) Assumes we have a single time series with zero axis consecutive, equal spaced time periods Parameters ---------- results : result instance result of a regression, uses results.model.exog and results.resid TODO: this should use wexog instead nlags : int or None highest lag to include in kernel window. If None, then nlags = floor[4(T/100)^(2/9)] is used. weights_func : callable weights_func is called with nlags as argument to get the kernel weights. default are Bartlett weights Returns ------- cov : ndarray, (k_vars, k_vars) HAC robust covariance matrix for parameter estimates Notes ----- verified only for nlags=0, which is just White just guessing on correction factor, need reference options might change when other kernels besides Bartlett are available. ''' xu, hessian_inv = _get_sandwich_arrays(results) sigma = S_hac_simple(xu, nlags=nlags, weights_func=weights_func) cov_hac = _HCCM2(hessian_inv, sigma) if use_correction: nobs, k_params = xu.shape cov_hac *= nobs / float(nobs - k_params) return cov_hac
heteroscedasticity and autocorrelation robust covariance matrix (Newey-West) Assumes we have a single time series with zero axis consecutive, equal spaced time periods Parameters ---------- results : result instance result of a regression, uses results.model.exog and results.resid TODO: this should use wexog instead nlags : int or None highest lag to include in kernel window. If None, then nlags = floor[4(T/100)^(2/9)] is used. weights_func : callable weights_func is called with nlags as argument to get the kernel weights. default are Bartlett weights Returns ------- cov : ndarray, (k_vars, k_vars) HAC robust covariance matrix for parameter estimates Notes ----- verified only for nlags=0, which is just White just guessing on correction factor, need reference options might change when other kernels besides Bartlett are available.
cov_hac_simple
python
statsmodels/statsmodels
statsmodels/stats/sandwich_covariance.py
https://github.com/statsmodels/statsmodels/blob/master/statsmodels/stats/sandwich_covariance.py
BSD-3-Clause
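A usage sketch for the Newey-West covariance on simulated AR(1) errors; nlags=4 is an arbitrary choice here, and leaving nlags=None would apply the floor[4(T/100)^(2/9)] rule from the docstring.

import numpy as np
import statsmodels.api as sm
import statsmodels.stats.sandwich_covariance as sw

rng = np.random.default_rng(11)
T = 200
e = np.zeros(T)
for t in range(1, T):              # AR(1) errors with rho = 0.6
    e[t] = 0.6 * e[t - 1] + rng.normal()
X = sm.add_constant(np.arange(T) / T)
y = X @ np.array([1.0, 0.5]) + e
res = sm.OLS(y, X).fit()

cov = sw.cov_hac_simple(res, nlags=4)
print(np.sqrt(np.diag(cov)))       # HAC standard errors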
def lagged_groups(x, lag, groupidx): ''' assumes sorted by time, groupidx is tuple of start and end values not optimized, just to get a working version, loop over groups ''' out0 = [] out_lagged = [] for lo, up in groupidx: if lo+lag < up: #group is longer than lag out0.append(x[lo+lag:up]) out_lagged.append(x[lo:up-lag]) if out0 == []: raise ValueError('all groups are empty taking lags') return np.vstack(out0), np.vstack(out_lagged)
assumes sorted by time, groupidx is tuple of start and end values not optimized, just to get a working version, loop over groups
lagged_groups
python
statsmodels/statsmodels
statsmodels/stats/sandwich_covariance.py
https://github.com/statsmodels/statsmodels/blob/master/statsmodels/stats/sandwich_covariance.py
BSD-3-Clause
def S_nw_panel(xw, weights, groupidx): '''inner covariance matrix for HAC for panel data no denominator nobs used no reference for this, just accounting for time indices ''' nlags = len(weights)-1 S = weights[0] * np.dot(xw.T, xw) #weights just for completeness for lag in range(1, nlags+1): xw0, xwlag = lagged_groups(xw, lag, groupidx) s = np.dot(xw0.T, xwlag) S += weights[lag] * (s + s.T) return S
inner covariance matrix for HAC for panel data no denominator nobs used no reference for this, just accounting for time indices
S_nw_panel
python
statsmodels/statsmodels
statsmodels/stats/sandwich_covariance.py
https://github.com/statsmodels/statsmodels/blob/master/statsmodels/stats/sandwich_covariance.py
BSD-3-Clause
def cov_nw_panel(results, nlags, groupidx, weights_func=weights_bartlett, use_correction='hac'): '''Panel HAC robust covariance matrix Assumes we have a panel of time series with consecutive, equal spaced time periods. Data is assumed to be in long format with time series of each individual stacked into one array. Panel can be unbalanced. Parameters ---------- results : result instance result of a regression, uses results.model.exog and results.resid TODO: this should use wexog instead nlags : int or None Highest lag to include in kernel window. Currently, no default because the optimal length will depend on the number of observations per cross-sectional unit. groupidx : list of tuple each tuple should contain the start and end index for an individual. (groupidx might change in future). weights_func : callable weights_func is called with nlags as argument to get the kernel weights. default are Bartlett weights use_correction : 'cluster' or 'hac' or False If False, then no small sample correction is used. If 'hac' (default), then the same correction as in single time series, cov_hac is used. If 'cluster', then the same correction as in cov_cluster is used. Returns ------- cov : ndarray, (k_vars, k_vars) HAC robust covariance matrix for parameter estimates Notes ----- For nlags=0, this is just White covariance, cov_white. If kernel is uniform, `weights_uniform`, with nlags equal to the number of observations per unit in a balanced panel, then cov_cluster and cov_hac_panel are identical. Tested against STATA `newey` command with same defaults. Options might change when other kernels besides Bartlett and uniform are available. ''' if nlags == 0: #so we can reproduce HC0 White weights = [1, 0] #to avoid the scalar check in hac_nw else: weights = weights_func(nlags) xu, hessian_inv = _get_sandwich_arrays(results) S_hac = S_nw_panel(xu, weights, groupidx) cov_hac = _HCCM2(hessian_inv, S_hac) if use_correction: nobs, k_params = xu.shape if use_correction == 'hac': cov_hac *= nobs / float(nobs - k_params) elif use_correction in ['c', 'clu', 'cluster']: n_groups = len(groupidx) cov_hac *= n_groups / (n_groups - 1.) cov_hac *= ((nobs-1.) / float(nobs - k_params)) return cov_hac
Panel HAC robust covariance matrix Assumes we have a panel of time series with consecutive, equal spaced time periods. Data is assumed to be in long format with time series of each individual stacked into one array. Panel can be unbalanced. Parameters ---------- results : result instance result of a regression, uses results.model.exog and results.resid TODO: this should use wexog instead nlags : int or None Highest lag to include in kernel window. Currently, no default because the optimal length will depend on the number of observations per cross-sectional unit. groupidx : list of tuple each tuple should contain the start and end index for an individual. (groupidx might change in future). weights_func : callable weights_func is called with nlags as argument to get the kernel weights. default are Bartlett weights use_correction : 'cluster' or 'hac' or False If False, then no small sample correction is used. If 'hac' (default), then the same correction as in single time series, cov_hac is used. If 'cluster', then the same correction as in cov_cluster is used. Returns ------- cov : ndarray, (k_vars, k_vars) HAC robust covariance matrix for parameter estimates Notes ----- For nlags=0, this is just White covariance, cov_white. If kernel is uniform, `weights_uniform`, with nlags equal to the number of observations per unit in a balanced panel, then cov_cluster and cov_hac_panel are identical. Tested against STATA `newey` command with same defaults. Options might change when other kernels besides Bartlett and uniform are available.
cov_nw_panel
python
statsmodels/statsmodels
statsmodels/stats/sandwich_covariance.py
https://github.com/statsmodels/statsmodels/blob/master/statsmodels/stats/sandwich_covariance.py
BSD-3-Clause
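A sketch of the panel HAC estimator on an invented balanced panel; groupidx is the list of (start, stop) tuples delimiting each individual's block in the stacked data.

import numpy as np
import statsmodels.api as sm
import statsmodels.stats.sandwich_covariance as sw

rng = np.random.default_rng(13)
n_ind, n_t = 20, 12
nobs = n_ind * n_t
groupidx = [(i * n_t, (i + 1) * n_t) for i in range(n_ind)]

X = sm.add_constant(rng.normal(size=(nobs, 2)))
y = X @ np.array([1.0, 0.5, -0.2]) + rng.normal(size=nobs)
res = sm.OLS(y, X).fit()

cov = sw.cov_nw_panel(res, 2, groupidx, use_correction='hac')
print(np.sqrt(np.diag(cov)))   # panel HAC standard errors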
def cov_nw_groupsum(results, nlags, time, weights_func=weights_bartlett, use_correction=0): '''Driscoll and Kraay Panel robust covariance matrix Robust covariance matrix for panel data of Driscoll and Kraay. Assumes we have a panel of time series where the time index is available. The time index is assumed to represent equal spaced periods. At least one observation per period is required. Parameters ---------- results : result instance result of a regression, uses results.model.exog and results.resid TODO: this should use wexog instead nlags : int or None Highest lag to include in kernel window. Currently, no default because the optimal length will depend on the number of observations per cross-sectional unit. time : ndarray of int this should contain the coding for the time period of each observation. time periods should be integers in range(maxT) where maxT is the number of time periods. weights_func : callable weights_func is called with nlags as argument to get the kernel weights. default are Bartlett weights use_correction : 'cluster' or 'hac' or False If False (default), then no small sample correction is used. If 'hac', then the same correction as in single time series, cov_hac is used. If 'cluster', then the same correction as in cov_cluster is used. Returns ------- cov : ndarray, (k_vars, k_vars) HAC robust covariance matrix for parameter estimates Notes ----- Tested against STATA xtscc package, which uses no small sample correction This first averages relevant variables for each time period over all individuals/groups, and then applies the same kernel weighted averaging over time as in HAC. Warning: In the example with a short panel (few time periods and many individuals) with mainly across individual variation this estimator did not produce reasonable results. Options might change when other kernels besides Bartlett and uniform are available. References ---------- Daniel Hoechle, xtscc paper Driscoll and Kraay ''' xu, hessian_inv = _get_sandwich_arrays(results) #S_hac = S_nw_panel(xw, weights, groupidx) S_hac = S_hac_groupsum(xu, time, nlags=nlags, weights_func=weights_func) cov_hac = _HCCM2(hessian_inv, S_hac) if use_correction: nobs, k_params = xu.shape if use_correction == 'hac': cov_hac *= nobs / float(nobs - k_params) elif use_correction in ['c', 'cluster']: n_groups = len(np.unique(time)) cov_hac *= n_groups / (n_groups - 1.) cov_hac *= ((nobs-1.) / float(nobs - k_params)) return cov_hac
Driscoll and Kraay Panel robust covariance matrix

Robust covariance matrix for panel data of Driscoll and Kraay.

Assumes we have a panel of time series where the time index is available.
The time index is assumed to represent equally spaced periods. At least
one observation per period is required.

Parameters
----------
results : result instance
    result of a regression, uses results.model.exog and results.resid
    TODO: this should use wexog instead
nlags : int or None
    Highest lag to include in kernel window. Currently, no default
    because the optimal length will depend on the number of observations
    per cross-sectional unit.
time : ndarray of int
    this should contain the coding for the time period of each
    observation. time periods should be integers in range(maxT) where
    maxT is the number of time periods.
weights_func : callable
    weights_func is called with nlags as argument to get the kernel
    weights. default is Bartlett weights
use_correction : 'cluster' or 'hac' or False
    If False (default), then no small sample correction is used.
    If 'hac', then the same correction as in single time series, cov_hac,
    is used.
    If 'cluster', then the same correction as in cov_cluster is used.

Returns
-------
cov : ndarray, (k_vars, k_vars)
    HAC robust covariance matrix for parameter estimates

Notes
-----
Tested against Stata's xtscc package, which uses no small sample
correction.

This first averages relevant variables for each time period over all
individuals/groups, and then applies the same kernel weighted averaging
over time as in HAC.

Warning:
In an example with a short panel (few time periods and many individuals)
with mainly across-individual variation, this estimator did not produce
reasonable results.

Options might change when other kernels besides Bartlett and uniform are
available.

References
----------
Daniel Hoechle, xtscc paper
Driscoll and Kraay
cov_nw_groupsum
python
statsmodels/statsmodels
statsmodels/stats/sandwich_covariance.py
https://github.com/statsmodels/statsmodels/blob/master/statsmodels/stats/sandwich_covariance.py
BSD-3-Clause
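A matching usage sketch for ``cov_nw_groupsum`` (again hypothetical: the simulated panel, the integer time coding, and ``nlags=2`` are illustrative assumptions, and the explicit ``use_correction='hac'`` opts into the correction since the signature defaults to no correction).

import numpy as np
import statsmodels.api as sm
from statsmodels.stats.sandwich_covariance import cov_nw_groupsum

rng = np.random.default_rng(0)
n_units, n_periods = 50, 8
x = rng.standard_normal((n_units * n_periods, 2))
y = x @ np.array([1.0, -0.5]) + rng.standard_normal(n_units * n_periods)
res = sm.OLS(y, sm.add_constant(x)).fit()

# time: integer period coding in range(n_periods) for every observation
time = np.tile(np.arange(n_periods), n_units)
cov = cov_nw_groupsum(res, nlags=2, time=time, use_correction='hac')
print(np.sqrt(np.diag(cov)))      # Driscoll-Kraay standard errors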
def effectsize_oneway(means, vars_, nobs, use_var="unequal", ddof_between=0):
    """
    Effect size corresponding to Cohen's f = nc / nobs for oneway anova

    This contains adjustment for Welch and Brown-Forsythe Anova so that
    effect size can be used with FTestAnovaPower.

    Parameters
    ----------
    means : array_like
        Mean of samples to be compared
    vars_ : float or array_like
        Residual (within) variance of each sample or pooled.
        If ``vars_`` is scalar, then it is interpreted as pooled variance
        that is the same for all samples, and ``use_var`` will be ignored.
        Otherwise, the variances are used depending on the ``use_var``
        keyword.
    nobs : int or array_like
        Number of observations for the samples.
        If nobs is scalar, then it is assumed that all samples have the
        same number ``nobs`` of observations, i.e. a balanced sample case.
        Otherwise, statistics will be weighted corresponding to nobs.
        Only relative sizes are relevant, any proportional change to nobs
        does not change the effect size.
    use_var : {"unequal", "equal", "bf"}
        If ``use_var`` is "unequal", then the variances can differ across
        samples and the effect size for Welch anova will be computed.
        If "equal", a pooled within variance is used, corresponding to
        standard anova. If "bf", the adjustment for Brown-Forsythe anova
        is used.
    ddof_between : int
        Degrees of freedom correction for the weighted between sum of
        squares. The denominator is ``nobs_total - ddof_between``.
        This can be used to match differences across reference literature.

    Returns
    -------
    f2 : float
        Effect size corresponding to squared Cohen's f, which is also
        equal to the noncentrality divided by total number of observations.

    Notes
    -----
    This currently handles the following cases for oneway anova

    - balanced sample with homoscedastic variances
    - samples with different number of observations and with homoscedastic
      variances
    - samples with different number of observations and with
      heteroskedastic variances. This corresponds to Welch anova

    In the case of "unequal" and "bf" methods for unequal variances, the
    effect sizes do not directly correspond to the test statistic in Anova.
    Both have correction terms dropped or added, so the effect sizes match
    up with using FTestAnovaPower.
    If all variances are equal, then all three methods result in the same
    effect size. If variances are unequal, then the three methods produce
    small differences in effect size.

    Note, the effect size and power computation for BF Anova was not found
    in the literature. The correction terms were added so that
    FTestAnovaPower provides a good approximation to the power.

    Status: experimental
    We might add additional returns, if those are needed to support power
    and sample size applications.

    Examples
    --------
    The following shows how to compute effect size and power for each of
    the three anova methods. The null hypothesis is that the means are
    equal, which corresponds to a zero effect size. Under the alternative,
    means differ with two sample means at a distance delta from the mean.
    We assume the variance is the same under the null and alternative
    hypothesis.

    ``nobs`` for the samples defines the fraction of observations in the
    samples. ``nobs`` in the power method defines the total sample size.

    In simulations, the computed power for standard anova, i.e.
    ``use_var="equal"``, overestimates the simulated power by a few
    percent. The equal variance assumption does not hold in this example.

    >>> from statsmodels.stats.oneway import effectsize_oneway
    >>> from statsmodels.stats.power import FTestAnovaPower
    >>>
    >>> nobs = np.array([10, 12, 13, 15])
    >>> delta = 0.5
    >>> means_alt = np.array([-1, 0, 0, 1]) * delta
    >>> vars_ = np.arange(1, len(means_alt) + 1)
    >>>
    >>> f2_alt = effectsize_oneway(means_alt, vars_, nobs, use_var="equal")
    >>> f2_alt
    0.04581300813008131
    >>>
    >>> kwds = {'effect_size': np.sqrt(f2_alt), 'nobs': 100, 'alpha': 0.05,
    ...         'k_groups': 4}
    >>> power = FTestAnovaPower().power(**kwds)
    >>> power
    0.39165892158983273
    >>>
    >>> f2_alt = effectsize_oneway(means_alt, vars_, nobs, use_var="unequal")
    >>> f2_alt
    0.060640138408304504
    >>>
    >>> kwds['effect_size'] = np.sqrt(f2_alt)
    >>> power = FTestAnovaPower().power(**kwds)
    >>> power
    0.5047366512800622
    >>>
    >>> f2_alt = effectsize_oneway(means_alt, vars_, nobs, use_var="bf")
    >>> f2_alt
    0.04391324307956788
    >>>
    >>> kwds['effect_size'] = np.sqrt(f2_alt)
    >>> power = FTestAnovaPower().power(**kwds)
    >>> power
    0.3765792117047725
    """
    # the code here is largely a copy of oneway_generic with adjustments

    means = np.asarray(means)
    n_groups = means.shape[0]

    if np.size(nobs) == 1:
        nobs = np.ones(n_groups) * nobs

    nobs_t = nobs.sum()

    if use_var == "equal":
        if np.size(vars_) == 1:
            var_resid = vars_
        else:
            vars_ = np.asarray(vars_)
            var_resid = ((nobs - 1) * vars_).sum() / (nobs_t - n_groups)

        vars_ = var_resid  # scalar, if broadcasting works

    weights = nobs / vars_

    w_total = weights.sum()
    w_rel = weights / w_total
    # meanw_t = (weights * means).sum() / w_total
    meanw_t = w_rel @ means

    f2 = np.dot(weights, (means - meanw_t)**2) / (nobs_t - ddof_between)

    if use_var.lower() == "bf":
        weights = nobs
        w_total = weights.sum()
        w_rel = weights / w_total
        meanw_t = w_rel @ means
        # TODO: reuse general case with weights
        tmp = ((1. - nobs / nobs_t) * vars_).sum()
        statistic = 1. * (nobs * (means - meanw_t)**2).sum()
        statistic /= tmp
        f2 = statistic * (1. - nobs / nobs_t).sum() / nobs_t
        # correction factor for df_num in BFM
        df_num2 = n_groups - 1
        df_num = tmp**2 / ((vars_**2).sum() +
                           (nobs / nobs_t * vars_).sum()**2 -
                           2 * (nobs / nobs_t * vars_**2).sum())
        f2 *= df_num / df_num2

    return f2
Effect size corresponding to Cohen's f = nc / nobs for oneway anova

This contains adjustment for Welch and Brown-Forsythe Anova so that effect
size can be used with FTestAnovaPower.

Parameters
----------
means : array_like
    Mean of samples to be compared
vars_ : float or array_like
    Residual (within) variance of each sample or pooled.
    If ``vars_`` is scalar, then it is interpreted as pooled variance that
    is the same for all samples, and ``use_var`` will be ignored.
    Otherwise, the variances are used depending on the ``use_var`` keyword.
nobs : int or array_like
    Number of observations for the samples.
    If nobs is scalar, then it is assumed that all samples have the same
    number ``nobs`` of observations, i.e. a balanced sample case.
    Otherwise, statistics will be weighted corresponding to nobs.
    Only relative sizes are relevant, any proportional change to nobs does
    not change the effect size.
use_var : {"unequal", "equal", "bf"}
    If ``use_var`` is "unequal", then the variances can differ across
    samples and the effect size for Welch anova will be computed.
    If "equal", a pooled within variance is used, corresponding to
    standard anova. If "bf", the adjustment for Brown-Forsythe anova is
    used.
ddof_between : int
    Degrees of freedom correction for the weighted between sum of squares.
    The denominator is ``nobs_total - ddof_between``.
    This can be used to match differences across reference literature.

Returns
-------
f2 : float
    Effect size corresponding to squared Cohen's f, which is also equal to
    the noncentrality divided by total number of observations.

Notes
-----
This currently handles the following cases for oneway anova

- balanced sample with homoscedastic variances
- samples with different number of observations and with homoscedastic
  variances
- samples with different number of observations and with heteroskedastic
  variances. This corresponds to Welch anova

In the case of "unequal" and "bf" methods for unequal variances, the
effect sizes do not directly correspond to the test statistic in Anova.
Both have correction terms dropped or added, so the effect sizes match up
with using FTestAnovaPower.
If all variances are equal, then all three methods result in the same
effect size. If variances are unequal, then the three methods produce
small differences in effect size.

Note, the effect size and power computation for BF Anova was not found in
the literature. The correction terms were added so that FTestAnovaPower
provides a good approximation to the power.

Status: experimental
We might add additional returns, if those are needed to support power and
sample size applications.

Examples
--------
The following shows how to compute effect size and power for each of the
three anova methods. The null hypothesis is that the means are equal,
which corresponds to a zero effect size. Under the alternative, means
differ with two sample means at a distance delta from the mean. We assume
the variance is the same under the null and alternative hypothesis.

``nobs`` for the samples defines the fraction of observations in the
samples. ``nobs`` in the power method defines the total sample size.

In simulations, the computed power for standard anova, i.e.
``use_var="equal"``, overestimates the simulated power by a few percent.
The equal variance assumption does not hold in this example.

>>> from statsmodels.stats.oneway import effectsize_oneway
>>> from statsmodels.stats.power import FTestAnovaPower
>>>
>>> nobs = np.array([10, 12, 13, 15])
>>> delta = 0.5
>>> means_alt = np.array([-1, 0, 0, 1]) * delta
>>> vars_ = np.arange(1, len(means_alt) + 1)
>>>
>>> f2_alt = effectsize_oneway(means_alt, vars_, nobs, use_var="equal")
>>> f2_alt
0.04581300813008131
>>>
>>> kwds = {'effect_size': np.sqrt(f2_alt), 'nobs': 100, 'alpha': 0.05,
...         'k_groups': 4}
>>> power = FTestAnovaPower().power(**kwds)
>>> power
0.39165892158983273
>>>
>>> f2_alt = effectsize_oneway(means_alt, vars_, nobs, use_var="unequal")
>>> f2_alt
0.060640138408304504
>>>
>>> kwds['effect_size'] = np.sqrt(f2_alt)
>>> power = FTestAnovaPower().power(**kwds)
>>> power
0.5047366512800622
>>>
>>> f2_alt = effectsize_oneway(means_alt, vars_, nobs, use_var="bf")
>>> f2_alt
0.04391324307956788
>>>
>>> kwds['effect_size'] = np.sqrt(f2_alt)
>>> power = FTestAnovaPower().power(**kwds)
>>> power
0.3765792117047725
effectsize_oneway
python
statsmodels/statsmodels
statsmodels/stats/oneway.py
https://github.com/statsmodels/statsmodels/blob/master/statsmodels/stats/oneway.py
BSD-3-Clause
def convert_effectsize_fsqu(f2=None, eta2=None):
    """Convert squared effect sizes in f family

    f2 is signal to noise ratio, var_explained / var_residual
    eta2 is proportion of explained variance, var_explained / var_total

    uses the relationship:
    f2 = eta2 / (1 - eta2)

    Parameters
    ----------
    f2 : None or float
        Squared Cohen's F effect size. If f2 is not None, then eta2 will be
        computed.
    eta2 : None or float
        Squared eta effect size. If f2 is None and eta2 is not None, then
        f2 is computed.

    Returns
    -------
    res : Holder instance
        An instance of the Holder class with f2 and eta2 as attributes.
    """
    if f2 is not None:
        eta2 = 1 / (1 + 1 / f2)

    elif eta2 is not None:
        f2 = eta2 / (1 - eta2)

    res = Holder(f2=f2, eta2=eta2)
    return res
Convert squared effect sizes in f family

f2 is signal to noise ratio, var_explained / var_residual
eta2 is proportion of explained variance, var_explained / var_total

uses the relationship:
f2 = eta2 / (1 - eta2)

Parameters
----------
f2 : None or float
    Squared Cohen's F effect size. If f2 is not None, then eta2 will be
    computed.
eta2 : None or float
    Squared eta effect size. If f2 is None and eta2 is not None, then f2
    is computed.

Returns
-------
res : Holder instance
    An instance of the Holder class with f2 and eta2 as attributes.
convert_effectsize_fsqu
python
statsmodels/statsmodels
statsmodels/stats/oneway.py
https://github.com/statsmodels/statsmodels/blob/master/statsmodels/stats/oneway.py
BSD-3-Clause
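A quick numeric check of the ``f2 = eta2 / (1 - eta2)`` relationship; the starting value 0.25 is an arbitrary illustration.

from statsmodels.stats.oneway import convert_effectsize_fsqu

res = convert_effectsize_fsqu(f2=0.25)           # eta2 = 1 / (1 + 1/0.25) = 0.2
back = convert_effectsize_fsqu(eta2=res.eta2)    # round trip: 0.2 / 0.8 = 0.25
print(res.eta2, back.f2)                         # 0.2 0.25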
def _fstat2effectsize(f_stat, df):
    """Compute anova effect size from F-statistic

    This might be combined with convert_effectsize_fsqu

    Parameters
    ----------
    f_stat : array_like
        Test statistic of an F-test
    df : tuple
        degrees of freedom ``df = (df1, df2)`` where

        - df1 : numerator degrees of freedom, number of constraints
        - df2 : denominator degrees of freedom, df_resid

    Returns
    -------
    res : Holder instance
        This instance contains effect size measures f2, eta2, omega2 and
        eps2 as attributes.

    Notes
    -----
    This uses the following definitions:

    - f2 = f_stat * df1 / df2
    - eta2 = f2 / (f2 + 1)
    - omega2 = (f2 - df1 / df2) / (f2 + 1 + 1 / df2)
    - eps2 = (f2 - df1 / df2) / (f2 + 1)

    This differs from effect size measures in other functions which define
    ``f2 = f_stat * df1 / nobs`` or an equivalent expression for power
    computation. The noncentrality index for the hypothesis test is in
    those cases given by ``nc = f_stat * df1``.

    Currently omega2 and eps2 are computed in two different ways. Those
    values agree for regular cases but can show different behavior in
    corner cases (e.g. zero division).
    """
    df1, df2 = df
    f2 = f_stat * df1 / df2
    eta2 = f2 / (f2 + 1)
    omega2_ = (f_stat - 1) / (f_stat + (df2 + 1) / df1)
    omega2 = (f2 - df1 / df2) / (f2 + 1 + 1 / df2)  # rewrite
    eps2_ = (f_stat - 1) / (f_stat + df2 / df1)
    eps2 = (f2 - df1 / df2) / (f2 + 1)  # rewrite
    return Holder(f2=f2, eta2=eta2, omega2=omega2, eps2=eps2,
                  eps2_=eps2_, omega2_=omega2_)
Compute anova effect size from F-statistic

This might be combined with convert_effectsize_fsqu

Parameters
----------
f_stat : array_like
    Test statistic of an F-test
df : tuple
    degrees of freedom ``df = (df1, df2)`` where

    - df1 : numerator degrees of freedom, number of constraints
    - df2 : denominator degrees of freedom, df_resid

Returns
-------
res : Holder instance
    This instance contains effect size measures f2, eta2, omega2 and eps2
    as attributes.

Notes
-----
This uses the following definitions:

- f2 = f_stat * df1 / df2
- eta2 = f2 / (f2 + 1)
- omega2 = (f2 - df1 / df2) / (f2 + 1 + 1 / df2)
- eps2 = (f2 - df1 / df2) / (f2 + 1)

This differs from effect size measures in other functions which define
``f2 = f_stat * df1 / nobs`` or an equivalent expression for power
computation. The noncentrality index for the hypothesis test is in those
cases given by ``nc = f_stat * df1``.

Currently omega2 and eps2 are computed in two different ways. Those values
agree for regular cases but can show different behavior in corner cases
(e.g. zero division).
_fstat2effectsize
python
statsmodels/statsmodels
statsmodels/stats/oneway.py
https://github.com/statsmodels/statsmodels/blob/master/statsmodels/stats/oneway.py
BSD-3-Clause
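A small illustration of the helper; note the leading underscore marks it as internal API, so importing it is an assumption that may break across statsmodels versions, and the statistic and degrees of freedom below are made-up numbers.

from statsmodels.stats.oneway import _fstat2effectsize  # private helper

es = _fstat2effectsize(f_stat=4.0, df=(3, 96))
print(es.f2)      # 4.0 * 3 / 96 = 0.125
print(es.eta2)    # 0.125 / 1.125 ≈ 0.111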
def wellek_to_f2(eps, n_groups):
    """Convert Wellek's effect size (sqrt) to Cohen's f-squared

    This computes the following effect size:

        f2 = 1 / n_groups * eps**2

    Parameters
    ----------
    eps : float or ndarray
        Wellek's effect size used in anova equivalence test
    n_groups : int
        Number of groups in oneway comparison

    Returns
    -------
    f2 : float or ndarray
        Effect size Cohen's f-squared
    """
    f2 = 1 / n_groups * eps**2
    return f2
Convert Wellek's effect size (sqrt) to Cohen's f-squared

This computes the following effect size:

    f2 = 1 / n_groups * eps**2

Parameters
----------
eps : float or ndarray
    Wellek's effect size used in anova equivalence test
n_groups : int
    Number of groups in oneway comparison

Returns
-------
f2 : float or ndarray
    Effect size Cohen's f-squared
wellek_to_f2
python
statsmodels/statsmodels
statsmodels/stats/oneway.py
https://github.com/statsmodels/statsmodels/blob/master/statsmodels/stats/oneway.py
BSD-3-Clause
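A one-line check of the conversion, with an arbitrary eps.

from statsmodels.stats.oneway import wellek_to_f2

f2 = wellek_to_f2(eps=0.5, n_groups=4)   # 0.5**2 / 4 = 0.0625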
def f2_to_wellek(f2, n_groups):
    """Convert Cohen's f-squared to Wellek's effect size (sqrt)

    This computes the following effect size:

        eps = sqrt(n_groups * f2)

    Parameters
    ----------
    f2 : float or ndarray
        Effect size Cohen's f-squared
    n_groups : int
        Number of groups in oneway comparison

    Returns
    -------
    eps : float or ndarray
        Wellek's effect size used in anova equivalence test
    """
    eps = np.sqrt(n_groups * f2)
    return eps
Convert Cohen's f-squared to Wellek's effect size (sqrt)

This computes the following effect size:

    eps = sqrt(n_groups * f2)

Parameters
----------
f2 : float or ndarray
    Effect size Cohen's f-squared
n_groups : int
    Number of groups in oneway comparison

Returns
-------
eps : float or ndarray
    Wellek's effect size used in anova equivalence test
f2_to_wellek
python
statsmodels/statsmodels
statsmodels/stats/oneway.py
https://github.com/statsmodels/statsmodels/blob/master/statsmodels/stats/oneway.py
BSD-3-Clause
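A round-trip sketch showing that ``f2_to_wellek`` inverts ``wellek_to_f2``; the numbers are arbitrary.

import numpy as np
from statsmodels.stats.oneway import f2_to_wellek, wellek_to_f2

eps = f2_to_wellek(f2=0.0625, n_groups=4)        # sqrt(4 * 0.0625) = 0.5
assert np.isclose(wellek_to_f2(eps, n_groups=4), 0.0625)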
def fstat_to_wellek(f_stat, n_groups, nobs_mean):
    """Convert F statistic to Wellek's effect size eps squared

    This computes the following effect size:

        es = f_stat * (n_groups - 1) / nobs_mean

    Parameters
    ----------
    f_stat : float or ndarray
        Test statistic of an F-test.
    n_groups : int
        Number of groups in oneway comparison
    nobs_mean : float or ndarray
        Average number of observations across groups.

    Returns
    -------
    eps : float or ndarray
        Wellek's effect size used in anova equivalence test
    """
    es = f_stat * (n_groups - 1) / nobs_mean
    return es
Convert F statistic to Wellek's effect size eps squared

This computes the following effect size:

    es = f_stat * (n_groups - 1) / nobs_mean

Parameters
----------
f_stat : float or ndarray
    Test statistic of an F-test.
n_groups : int
    Number of groups in oneway comparison
nobs_mean : float or ndarray
    Average number of observations across groups.

Returns
-------
eps : float or ndarray
    Wellek's effect size used in anova equivalence test
fstat_to_wellek
python
statsmodels/statsmodels
statsmodels/stats/oneway.py
https://github.com/statsmodels/statsmodels/blob/master/statsmodels/stats/oneway.py
BSD-3-Clause
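A worked one-liner with made-up inputs.

from statsmodels.stats.oneway import fstat_to_wellek

# es = f_stat * (n_groups - 1) / nobs_mean = 4.0 * 3 / 25 = 0.48
es = fstat_to_wellek(f_stat=4.0, n_groups=4, nobs_mean=25)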
def confint_noncentrality(f_stat, df, alpha=0.05, alternative="two-sided"):
    """
    Confidence interval for noncentrality parameter in F-test

    This does not yet handle non-negativity constraint on nc.
    Currently only two-sided alternative is supported.

    Parameters
    ----------
    f_stat : float
    df : tuple
        degrees of freedom ``df = (df1, df2)`` where

        - df1 : numerator degrees of freedom, number of constraints
        - df2 : denominator degrees of freedom, df_resid

    alpha : float, default 0.05
    alternative : {"two-sided"}
        Other alternatives have not been implemented.

    Returns
    -------
    ci : ndarray
        Lower and upper bound of the confidence interval for the
        noncentrality parameter.

    Notes
    -----
    The algorithm inverts the cdf of the noncentral F distribution with
    respect to the noncentrality parameters.
    See Steiger 2004 and references cited in it.

    References
    ----------
    .. [1] Steiger, James H. 2004. “Beyond the F Test: Effect Size
       Confidence Intervals and Tests of Close Fit in the Analysis of
       Variance and Contrast Analysis.” Psychological Methods 9 (2):
       164–82. https://doi.org/10.1037/1082-989X.9.2.164.

    See Also
    --------
    confint_effectsize_oneway
    """
    df1, df2 = df
    if alternative in ["two-sided", "2s", "ts"]:
        alpha1s = alpha / 2
        ci = ncfdtrinc(df1, df2, [1 - alpha1s, alpha1s], f_stat)
    else:
        raise NotImplementedError

    return ci
Confidence interval for noncentrality parameter in F-test

This does not yet handle non-negativity constraint on nc.
Currently only two-sided alternative is supported.

Parameters
----------
f_stat : float
df : tuple
    degrees of freedom ``df = (df1, df2)`` where

    - df1 : numerator degrees of freedom, number of constraints
    - df2 : denominator degrees of freedom, df_resid

alpha : float, default 0.05
alternative : {"two-sided"}
    Other alternatives have not been implemented.

Returns
-------
ci : ndarray
    Lower and upper bound of the confidence interval for the noncentrality
    parameter.

Notes
-----
The algorithm inverts the cdf of the noncentral F distribution with
respect to the noncentrality parameters.
See Steiger 2004 and references cited in it.

References
----------
.. [1] Steiger, James H. 2004. “Beyond the F Test: Effect Size Confidence
   Intervals and Tests of Close Fit in the Analysis of Variance and
   Contrast Analysis.” Psychological Methods 9 (2): 164–82.
   https://doi.org/10.1037/1082-989X.9.2.164.

See Also
--------
confint_effectsize_oneway
confint_noncentrality
python
statsmodels/statsmodels
statsmodels/stats/oneway.py
https://github.com/statsmodels/statsmodels/blob/master/statsmodels/stats/oneway.py
BSD-3-Clause
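A hypothetical example for ``confint_noncentrality``: an anova with 4 groups and 100 observations in total gives ``df = (3, 96)``, and the F statistic of 4.0 is a made-up value. Dividing the interval endpoints by the total number of observations moves them to the Cohen's f-squared scale, per the ``f2 = nc / nobs`` relation used by ``effectsize_oneway``.

import numpy as np
from statsmodels.stats.oneway import confint_noncentrality

ci_nc = confint_noncentrality(f_stat=4.0, df=(3, 96), alpha=0.05)
print(ci_nc)                      # lower and upper bound for the noncentrality
print(np.asarray(ci_nc) / 100)    # same interval on the f-squared scale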