code (string, 26–870k chars) | docstring (string, 1–65.6k chars) | func_name (string, 1–194 chars) | language (1 class) | repo (string, 8–68 chars) | path (string, 5–182 chars) | url (string, 46–251 chars) | license (4 classes) |
---|---|---|---|---|---|---|---|
def power_equivalence_poisson_2indep(rate1, rate2, nobs1,
low, upp, nobs_ratio=1,
exposure=1, alpha=0.05, dispersion=1,
method_var="alt",
return_results=False):
"""Power of equivalence test of ratio of 2 independent poisson rates.
Parameters
----------
rate1 : float
Poisson rate for the first sample, treatment group, under the
alternative hypothesis.
rate2 : float
Poisson rate for the second sample, reference group, under the
alternative hypothesis.
nobs1 : float or int
Number of observations in sample 1.
low : float
Lower equivalence margin for the rate ratio, rate1 / rate2.
upp : float
Upper equivalence margin for the rate ratio, rate1 / rate2.
nobs_ratio : float
Sample size ratio, nobs2 = nobs_ratio * nobs1.
exposure : float
Exposure for each observation. Total exposure is nobs1 * exposure
and nobs2 * exposure.
alpha : float in interval (0,1)
Significance level, e.g. 0.05, is the probability of a type I
error, that is wrong rejections if the Null Hypothesis is true.
value : float
Difference between rates 1 and 2 under the null hypothesis.
method_var : {"score", "alt"}
The variance of the test statistic used for the null hypothesis given the
rates under the alternative. It can either use the rates under the
alternative, ``method_var="alt"``, or be estimated under the constraint
of the null hypothesis, ``method_var="score"``.
alternative : string, 'two-sided' (default), 'larger', 'smaller'
Alternative hypothesis whether the power is calculated for a
two-sided (default) or one sided test. The one-sided test can be
either 'larger', 'smaller'.
return_results : bool
If true, then a results instance with extra information is returned,
otherwise only the computed power is returned.
Returns
-------
results : results instance or float
If return_results is False, then only the power is returned.
If return_results is True, then a results instance with the
information in attributes is returned.
power : float
Power of the test, e.g. 0.8, is one minus the probability of a
type II error. Power is the probability that the test correctly
rejects the Null Hypothesis if the Alternative Hypothesis is true.
Other attributes in results instance include :
std_null
standard error of difference under the null hypothesis (without
sqrt(nobs1))
std_alt
standard error of difference under the alternative hypothesis
(without sqrt(nobs1))
References
----------
.. [1] Zhu, Haiyuan. 2017. “Sample Size Calculation for Comparing Two
Poisson or Negative Binomial Rates in Noninferiority or Equivalence
Trials.” Statistics in Biopharmaceutical Research, March.
https://doi.org/10.1080/19466315.2016.1225594
.. [2] Zhu, Haiyuan, and Hassan Lakkis. 2014. “Sample Size Calculation for
Comparing Two Negative Binomial Rates.” Statistics in Medicine 33 (3):
376–87. https://doi.org/10.1002/sim.5947.
.. [3] PASS documentation
"""
rate1, rate2, nobs1 = map(np.asarray, [rate1, rate2, nobs1])
nobs2 = nobs_ratio * nobs1
v1 = dispersion / exposure * (1 / rate1 + 1 / (nobs_ratio * rate2))
if method_var == "alt":
v0_low = v0_upp = v1
elif method_var == "score":
v0_low = dispersion / exposure * (1 + low * nobs_ratio)**2
v0_low /= low * nobs_ratio * (rate1 + (nobs_ratio * rate2))
v0_upp = dispersion / exposure * (1 + upp * nobs_ratio)**2
v0_upp /= upp * nobs_ratio * (rate1 + (nobs_ratio * rate2))
else:
raise NotImplementedError(f"method_var {method_var} not recognized")
es_low = np.log(rate1 / rate2) - np.log(low)
es_upp = np.log(rate1 / rate2) - np.log(upp)
std_null_low = np.sqrt(v0_low)
std_null_upp = np.sqrt(v0_upp)
std_alternative = np.sqrt(v1)
pow_ = _power_equivalence_het(es_low, es_upp, nobs2, alpha=alpha,
std_null_low=std_null_low,
std_null_upp=std_null_upp,
std_alternative=std_alternative)
if return_results:
res = HolderTuple(
power=pow_[0],
power_margins=pow_[1:],
std_null_low=std_null_low,
std_null_upp=std_null_upp,
std_alt=std_alternative,
nobs1=nobs1,
nobs2=nobs2,
nobs_ratio=nobs_ratio,
alpha=alpha,
tuple_=("power",), # override default
)
return res
else:
return pow_[0] | Power of equivalence test of ratio of 2 independent poisson rates.
Parameters
----------
rate1 : float
Poisson rate for the first sample, treatment group, under the
alternative hypothesis.
rate2 : float
Poisson rate for the second sample, reference group, under the
alternative hypothesis.
nobs1 : float or int
Number of observations in sample 1.
low : float
Lower equivalence margin for the rate ratio, rate1 / rate2.
upp : float
Upper equivalence margin for the rate ratio, rate1 / rate2.
nobs_ratio : float
Sample size ratio, nobs2 = nobs_ratio * nobs1.
exposure : float
Exposure for each observation. Total exposure is nobs1 * exposure
and nobs2 * exposure.
alpha : float in interval (0,1)
Significance level, e.g. 0.05, is the probability of a type I
error, that is wrong rejections if the Null Hypothesis is true.
value : float
Difference between rates 1 and 2 under the null hypothesis.
method_var : {"score", "alt"}
The variance of the test statistic used for the null hypothesis given the
rates under the alternative. It can either use the rates under the
alternative, ``method_var="alt"``, or be estimated under the constraint
of the null hypothesis, ``method_var="score"``.
alternative : string, 'two-sided' (default), 'larger', 'smaller'
Alternative hypothesis whether the power is calculated for a
two-sided (default) or one sided test. The one-sided test can be
either 'larger', 'smaller'.
return_results : bool
If true, then a results instance with extra information is returned,
otherwise only the computed power is returned.
Returns
-------
results : results instance or float
If return_results is False, then only the power is returned.
If return_results is True, then a results instance with the
information in attributes is returned.
power : float
Power of the test, e.g. 0.8, is one minus the probability of a
type II error. Power is the probability that the test correctly
rejects the Null Hypothesis if the Alternative Hypothesis is true.
Other attributes in results instance include :
std_null
standard error of difference under the null hypothesis (without
sqrt(nobs1))
std_alt
standard error of difference under the alternative hypothesis
(without sqrt(nobs1))
References
----------
.. [1] Zhu, Haiyuan. 2017. “Sample Size Calculation for Comparing Two
Poisson or Negative Binomial Rates in Noninferiority or Equivalence
Trials.” Statistics in Biopharmaceutical Research, March.
https://doi.org/10.1080/19466315.2016.1225594
.. [2] Zhu, Haiyuan, and Hassan Lakkis. 2014. “Sample Size Calculation for
Comparing Two Negative Binomial Rates.” Statistics in Medicine 33 (3):
376–87. https://doi.org/10.1002/sim.5947.
.. [3] PASS documentation | power_equivalence_poisson_2indep | python | statsmodels/statsmodels | statsmodels/stats/rates.py | https://github.com/statsmodels/statsmodels/blob/master/statsmodels/stats/rates.py | BSD-3-Clause |
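A minimal usage sketch for the function above, assuming it is importable from ``statsmodels.stats.rates`` as the path column indicates; the argument values are illustrative:

# Hedged sketch: power of the equivalence (TOST) test for a rate ratio of 1
# with margins 0.8 and 1.25; the import path may differ between versions.
from statsmodels.stats.rates import power_equivalence_poisson_2indep

pow_ = power_equivalence_poisson_2indep(
    rate1=1.0, rate2=1.0, nobs1=200,   # equal rates under the alternative
    low=0.8, upp=1.25,                 # equivalence margins for rate1 / rate2
    nobs_ratio=1, exposure=1, alpha=0.05,
    dispersion=1, method_var="alt",
)
print(pow_)  # scalar power; pass return_results=True for the HolderTuple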
def _power_equivalence_het_v0(es_low, es_upp, nobs, alpha=0.05,
std_null_low=None,
std_null_upp=None,
std_alternative=None):
"""power for equivalence test
"""
s0_low = std_null_low
s0_upp = std_null_upp
s1 = std_alternative
crit = norm.isf(alpha)
pow_ = (
norm.cdf((np.sqrt(nobs) * es_low - crit * s0_low) / s1) +
norm.cdf((np.sqrt(nobs) * es_upp - crit * s0_upp) / s1) - 1
)
return pow_ | power for equivalence test | _power_equivalence_het_v0 | python | statsmodels/statsmodels | statsmodels/stats/rates.py | https://github.com/statsmodels/statsmodels/blob/master/statsmodels/stats/rates.py | BSD-3-Clause |
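For reference, the helper above computes the two-one-sided-tests power; in the notation of the code, with d_L = es_low, d_U = es_upp, null standard errors sigma_0L, sigma_0U, alternative standard error sigma_1 and critical value z_{1-\alpha} = norm.isf(alpha):

\[
\text{power} = \Phi\!\left(\frac{\sqrt{n}\, d_L - z_{1-\alpha}\, \sigma_{0L}}{\sigma_1}\right)
             + \Phi\!\left(\frac{\sqrt{n}\, d_U - z_{1-\alpha}\, \sigma_{0U}}{\sigma_1}\right) - 1
\]

where \Phi is the standard normal cdf.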
def power_poisson_diff_2indep(rate1, rate2, nobs1, nobs_ratio=1, alpha=0.05,
value=0,
method_var="score",
alternative='two-sided',
return_results=True):
"""Power of ztest for the difference between two independent poisson rates.
Parameters
----------
rate1 : float
Poisson rate for the first sample, treatment group, under the
alternative hypothesis.
rate2 : float
Poisson rate for the second sample, reference group, under the
alternative hypothesis.
nobs1 : float or int
Number of observations in sample 1.
nobs_ratio : float
Sample size ratio, nobs2 = nobs_ratio * nobs1.
alpha : float in interval (0,1)
Significance level, e.g. 0.05, is the probability of a type I
error, that is wrong rejections if the Null Hypothesis is true.
value : float
Difference between rates 1 and 2 under the null hypothesis.
method_var : {"score", "alt"}
The variance of the test statistic used for the null hypothesis given the
rates under the alternative. It can either use the rates under the
alternative, ``method_var="alt"``, or be estimated under the constraint
of the null hypothesis, ``method_var="score"``.
alternative : string, 'two-sided' (default), 'larger', 'smaller'
Alternative hypothesis whether the power is calculated for a
two-sided (default) or one sided test. The one-sided test can be
either 'larger', 'smaller'.
return_results : bool
If true, then a results instance with extra information is returned,
otherwise only the computed power is returned.
Returns
-------
results : results instance or float
If return_results is False, then only the power is returned.
If return_results is True, then a results instance with the
information in attributes is returned.
power : float
Power of the test, e.g. 0.8, is one minus the probability of a
type II error. Power is the probability that the test correctly
rejects the Null Hypothesis if the Alternative Hypothesis is true.
Other attributes in results instance include :
std_null
standard error of difference under the null hypothesis (without
sqrt(nobs1))
std_alt
standard error of difference under the alternative hypothesis
(without sqrt(nobs1))
References
----------
.. [1] Stucke, Kathrin, and Meinhard Kieser. 2013. “Sample Size
Calculations for Noninferiority Trials with Poisson Distributed Count
Data.” Biometrical Journal 55 (2): 203–16.
https://doi.org/10.1002/bimj.201200142.
.. [2] PASS manual chapter 436
"""
# TODO: avoid possible circular import, check if needed
from statsmodels.stats.power import normal_power_het
rate1, rate2, nobs1 = map(np.asarray, [rate1, rate2, nobs1])
diff = rate1 - rate2
_, std_null, std_alt = _std_2poisson_power(
rate1,
rate2,
nobs_ratio=nobs_ratio,
alpha=alpha,
value=value,
method_var=method_var,
)
pow_ = normal_power_het(diff - value, nobs1, alpha, std_null=std_null,
std_alternative=std_alt,
alternative=alternative)
if return_results:
res = HolderTuple(
power=pow_,
rates_alt=(rate2 + diff, rate2),
std_null=std_null,
std_alt=std_alt,
nobs1=nobs1,
nobs2=nobs_ratio * nobs1,
nobs_ratio=nobs_ratio,
alpha=alpha,
tuple_=("power",), # override default
)
return res
else:
return pow_ | Power of ztest for the difference between two independent poisson rates.
Parameters
----------
rate1 : float
Poisson rate for the first sample, treatment group, under the
alternative hypothesis.
rate2 : float
Poisson rate for the second sample, reference group, under the
alternative hypothesis.
nobs1 : float or int
Number of observations in sample 1.
nobs_ratio : float
Sample size ratio, nobs2 = nobs_ratio * nobs1.
alpha : float in interval (0,1)
Significance level, e.g. 0.05, is the probability of a type I
error, that is wrong rejections if the Null Hypothesis is true.
value : float
Difference between rates 1 and 2 under the null hypothesis.
method_var : {"score", "alt"}
The variance of the test statistic used for the null hypothesis given the
rates under the alternative. It can either use the rates under the
alternative, ``method_var="alt"``, or be estimated under the constraint
of the null hypothesis, ``method_var="score"``.
alternative : string, 'two-sided' (default), 'larger', 'smaller'
Alternative hypothesis whether the power is calculated for a
two-sided (default) or one sided test. The one-sided test can be
either 'larger', 'smaller'.
return_results : bool
If true, then a results instance with extra information is returned,
otherwise only the computed power is returned.
Returns
-------
results : results instance or float
If return_results is False, then only the power is returned.
If return_results is True, then a results instance with the
information in attributes is returned.
power : float
Power of the test, e.g. 0.8, is one minus the probability of a
type II error. Power is the probability that the test correctly
rejects the Null Hypothesis if the Alternative Hypothesis is true.
Other attributes in results instance include :
std_null
standard error of difference under the null hypothesis (without
sqrt(nobs1))
std_alt
standard error of difference under the alternative hypothesis
(without sqrt(nobs1))
References
----------
.. [1] Stucke, Kathrin, and Meinhard Kieser. 2013. “Sample Size
Calculations for Noninferiority Trials with Poisson Distributed Count
Data.” Biometrical Journal 55 (2): 203–16.
https://doi.org/10.1002/bimj.201200142.
.. [2] PASS manual chapter 436 | power_poisson_diff_2indep | python | statsmodels/statsmodels | statsmodels/stats/rates.py | https://github.com/statsmodels/statsmodels/blob/master/statsmodels/stats/rates.py | BSD-3-Clause |
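A minimal usage sketch, assuming the same ``statsmodels.stats.rates`` import path; it computes the power of the score z-test for detecting a rate difference of 0.3 with 60 observations per group:

# Hedged sketch with illustrative values.
from statsmodels.stats.rates import power_poisson_diff_2indep

res = power_poisson_diff_2indep(rate1=1.3, rate2=1.0, nobs1=60,
                                nobs_ratio=1, alpha=0.05, value=0,
                                method_var="score",
                                alternative="two-sided",
                                return_results=True)
print(res.power, res.std_null, res.std_alt)  # power plus the two standard errors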
def _var_cmle_negbin(rate1, rate2, nobs_ratio, exposure=1, value=1,
dispersion=0):
"""
variance based on constrained cmle, for score test version
for ratio comparison of two negative binomial samples
value = rate1 / rate2 under the null
"""
# definitions in Zhu
# nobs_ratio = n1 / n0
# value = ratio = r1 / r0
rate0 = rate2 # control
nobs_ratio = 1 / nobs_ratio
a = - dispersion * exposure * value * (1 + nobs_ratio)
b = (dispersion * exposure * (rate0 * value + nobs_ratio * rate1) -
(1 + nobs_ratio * value))
c = rate0 + nobs_ratio * rate1
if dispersion == 0:
r0 = -c / b
else:
r0 = (-b - np.sqrt(b**2 - 4 * a * c)) / (2 * a)
r1 = r0 * value
v = (1 / exposure / r0 * (1 + 1 / value / nobs_ratio) +
(1 + nobs_ratio) / nobs_ratio * dispersion)
r2 = r0
return v * nobs_ratio, r1, r2 | variance based on constrained cmle, for score test version
for ratio comparison of two negative binomial samples
value = rate1 / rate2 under the null | _var_cmle_negbin | python | statsmodels/statsmodels | statsmodels/stats/rates.py | https://github.com/statsmodels/statsmodels/blob/master/statsmodels/stats/rates.py | BSD-3-Clause |
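For reference, writing q = 1 / nobs_ratio (the reversal done at the top of the function), k = dispersion, e = exposure, v = value, and \lambda_1, \lambda_2 for rate1 and rate2, the constrained control rate r_0 computed above is the root

\[
a r_0^2 + b r_0 + c = 0, \qquad
a = -k e v (1 + q), \quad
b = k e (\lambda_2 v + q \lambda_1) - (1 + q v), \quad
c = \lambda_2 + q \lambda_1 ,
\]

which reduces to r_0 = -c / b in the Poisson limit k = 0.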
def power_negbin_ratio_2indep(
rate1, rate2, nobs1,
nobs_ratio=1,
exposure=1,
value=1,
alpha=0.05,
dispersion=0.01,
alternative="two-sided",
method_var="alt",
return_results=True):
"""
Power of test of ratio of 2 independent negative binomial rates.
Parameters
----------
rate1 : float
Poisson rate for the first sample, treatment group, under the
alternative hypothesis.
rate2 : float
Poisson rate for the second sample, reference group, under the
alternative hypothesis.
nobs1 : float or int
Number of observations in sample 1.
low : float
Lower equivalence margin for the rate ratio, rate1 / rate2.
upp : float
Upper equivalence margin for the rate ratio, rate1 / rate2.
nobs_ratio : float
Sample size ratio, nobs2 = nobs_ratio * nobs1.
exposure : float
Exposure for each observation. Total exposure is nobs1 * exposure
and nobs2 * exposure.
value : float
Rate ratio, rate1 / rate2, under the null hypothesis.
alpha : float in interval (0,1)
Significance level, e.g. 0.05, is the probability of a type I
error, that is wrong rejections if the Null Hypothesis is true.
dispersion : float >= 0.
Dispersion parameter for Negative Binomial distribution.
The Poisson limiting case corresponds to ``dispersion=0``.
method_var : {"score", "alt", "ftotal"}
The variance of the test statistic used for the null hypothesis given the
rates under the alternative. It can either use the rates under the
alternative, ``method_var="alt"``, be estimated under the constraint of
the null hypothesis, ``method_var="score"``, or be based on a moment
constrained estimate, ``method_var="ftotal"``. See the references.
alternative : string, 'two-sided' (default), 'larger', 'smaller'
Alternative hypothesis whether the power is calculated for a
two-sided (default) or one sided test. The one-sided test can be
either 'larger', 'smaller'.
return_results : bool
If true, then a results instance with extra information is returned,
otherwise only the computed power is returned.
Returns
-------
results : results instance or float
If return_results is False, then only the power is returned.
If return_results is True, then a results instance with the
information in attributes is returned.
power : float
Power of the test, e.g. 0.8, is one minus the probability of a
type II error. Power is the probability that the test correctly
rejects the Null Hypothesis if the Alternative Hypothesis is true.
Other attributes in results instance include :
std_null
standard error of difference under the null hypothesis (without
sqrt(nobs1))
std_alt
standard error of difference under the alternative hypothesis
(without sqrt(nobs1))
References
----------
.. [1] Zhu, Haiyuan. 2017. “Sample Size Calculation for Comparing Two
Poisson or Negative Binomial Rates in Noninferiority or Equivalence
Trials.” Statistics in Biopharmaceutical Research, March.
https://doi.org/10.1080/19466315.2016.1225594
.. [2] Zhu, Haiyuan, and Hassan Lakkis. 2014. “Sample Size Calculation for
Comparing Two Negative Binomial Rates.” Statistics in Medicine 33 (3):
376–87. https://doi.org/10.1002/sim.5947.
.. [3] PASS documentation
"""
# TODO: avoid possible circular import, check if needed
from statsmodels.stats.power import normal_power_het
rate1, rate2, nobs1 = map(np.asarray, [rate1, rate2, nobs1])
nobs2 = nobs_ratio * nobs1
v1 = ((1 / rate1 + 1 / (nobs_ratio * rate2)) / exposure +
(1 + nobs_ratio) / nobs_ratio * dispersion)
if method_var == "alt":
v0 = v1
elif method_var == "ftotal":
v0 = (1 + value * nobs_ratio)**2 / (
exposure * nobs_ratio * value * (rate1 + nobs_ratio * rate2))
v0 += (1 + nobs_ratio) / nobs_ratio * dispersion
elif method_var == "score":
v0 = _var_cmle_negbin(rate1, rate2, nobs_ratio,
exposure=exposure, value=value,
dispersion=dispersion)[0]
else:
raise NotImplementedError(f"method_var {method_var} not recognized")
std_null = np.sqrt(v0)
std_alt = np.sqrt(v1)
es = np.log(rate1 / rate2) - np.log(value)
pow_ = normal_power_het(es, nobs1, alpha, std_null=std_null,
std_alternative=std_alt,
alternative=alternative)
if return_results:
res = HolderTuple(
power=pow_,
std_null=std_null,
std_alt=std_alt,
nobs1=nobs1,
nobs2=nobs2,
nobs_ratio=nobs_ratio,
alpha=alpha,
tuple_=("power",), # override default
)
return res
return pow_ | Power of test of ratio of 2 independent negative binomial rates.
Parameters
----------
rate1 : float
Poisson rate for the first sample, treatment group, under the
alternative hypothesis.
rate2 : float
Poisson rate for the second sample, reference group, under the
alternative hypothesis.
nobs1 : float or int
Number of observations in sample 1.
low : float
Lower equivalence margin for the rate ratio, rate1 / rate2.
upp : float
Upper equivalence margin for the rate ratio, rate1 / rate2.
nobs_ratio : float
Sample size ratio, nobs2 = nobs_ratio * nobs1.
exposure : float
Exposure for each observation. Total exposure is nobs1 * exposure
and nobs2 * exposure.
value : float
Rate ratio, rate1 / rate2, under the null hypothesis.
alpha : float in interval (0,1)
Significance level, e.g. 0.05, is the probability of a type I
error, that is wrong rejections if the Null Hypothesis is true.
dispersion : float >= 0.
Dispersion parameter for Negative Binomial distribution.
The Poisson limiting case corresponds to ``dispersion=0``.
method_var : {"score", "alt", "ftotal"}
The variance of the test statistic used for the null hypothesis given the
rates under the alternative. It can either use the rates under the
alternative, ``method_var="alt"``, be estimated under the constraint of
the null hypothesis, ``method_var="score"``, or be based on a moment
constrained estimate, ``method_var="ftotal"``. See the references.
alternative : string, 'two-sided' (default), 'larger', 'smaller'
Alternative hypothesis whether the power is calculated for a
two-sided (default) or one sided test. The one-sided test can be
either 'larger', 'smaller'.
return_results : bool
If true, then a results instance with extra information is returned,
otherwise only the computed power is returned.
Returns
-------
results : results instance or float
If return_results is False, then only the power is returned.
If return_results is True, then a results instance with the
information in attributes is returned.
power : float
Power of the test, e.g. 0.8, is one minus the probability of a
type II error. Power is the probability that the test correctly
rejects the Null Hypothesis if the Alternative Hypothesis is true.
Other attributes in results instance include :
std_null
standard error of difference under the null hypothesis (without
sqrt(nobs1))
std_alt
standard error of difference under the alternative hypothesis
(without sqrt(nobs1))
References
----------
.. [1] Zhu, Haiyuan. 2017. “Sample Size Calculation for Comparing Two
Poisson or Negative Binomial Rates in Noninferiority or Equivalence
Trials.” Statistics in Biopharmaceutical Research, March.
https://doi.org/10.1080/19466315.2016.1225594
.. [2] Zhu, Haiyuan, and Hassan Lakkis. 2014. “Sample Size Calculation for
Comparing Two Negative Binomial Rates.” Statistics in Medicine 33 (3):
376–87. https://doi.org/10.1002/sim.5947.
.. [3] PASS documentation | power_negbin_ratio_2indep | python | statsmodels/statsmodels | statsmodels/stats/rates.py | https://github.com/statsmodels/statsmodels/blob/master/statsmodels/stats/rates.py | BSD-3-Clause |
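A minimal usage sketch, assuming the ``statsmodels.stats.rates`` import path; it computes the power of a one-sided noninferiority test of the rate ratio against the margin ``value=0.8``:

# Hedged sketch with illustrative values.
from statsmodels.stats.rates import power_negbin_ratio_2indep

res = power_negbin_ratio_2indep(rate1=1.0, rate2=1.0, nobs1=100,
                                nobs_ratio=1, exposure=1,
                                value=0.8, alpha=0.025,
                                dispersion=0.5,
                                alternative="larger",
                                method_var="score",
                                return_results=True)
print(res.power)  # res also carries std_null, std_alt, nobs2, ...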
def power_equivalence_neginb_2indep(rate1, rate2, nobs1,
low, upp, nobs_ratio=1,
exposure=1, alpha=0.05, dispersion=0,
method_var="alt",
return_results=False):
"""
Power of equivalence test of ratio of 2 indep. negative binomial rates.
Parameters
----------
rate1 : float
Poisson rate for the first sample, treatment group, under the
alternative hypothesis.
rate2 : float
Poisson rate for the second sample, reference group, under the
alternative hypothesis.
nobs1 : float or int
Number of observations in sample 1.
low : float
Lower equivalence margin for the rate ratio, rate1 / rate2.
upp : float
Upper equivalence margin for the rate ratio, rate1 / rate2.
nobs_ratio : float
Sample size ratio, nobs2 = nobs_ratio * nobs1.
alpha : float in interval (0,1)
Significance level, e.g. 0.05, is the probability of a type I
error, that is wrong rejections if the Null Hypothesis is true.
dispersion : float >= 0.
Dispersion parameter for Negative Binomial distribution.
The Poisson limiting case corresponds to ``dispersion=0``.
method_var : {"score", "alt", "ftotal"}
The variance of the test statistic used for the null hypothesis given the
rates under the alternative. It can either use the rates under the
alternative, ``method_var="alt"``, be estimated under the constraint of
the null hypothesis, ``method_var="score"``, or be based on a moment
constrained estimate, ``method_var="ftotal"``. See the references.
alternative : string, 'two-sided' (default), 'larger', 'smaller'
Alternative hypothesis whether the power is calculated for a
two-sided (default) or one sided test. The one-sided test can be
either 'larger', 'smaller'.
return_results : bool
If true, then a results instance with extra information is returned,
otherwise only the computed power is returned.
Returns
-------
results : results instance or float
If return_results is False, then only the power is returned.
If return_results is True, then a results instance with the
information in attributes is returned.
power : float
Power of the test, e.g. 0.8, is one minus the probability of a
type II error. Power is the probability that the test correctly
rejects the Null Hypothesis if the Alternative Hypothesis is true.
Other attributes in results instance include :
std_null
standard error of difference under the null hypothesis (without
sqrt(nobs1))
std_alt
standard error of difference under the alternative hypothesis
(without sqrt(nobs1))
References
----------
.. [1] Zhu, Haiyuan. 2017. “Sample Size Calculation for Comparing Two
Poisson or Negative Binomial Rates in Noninferiority or Equivalence
Trials.” Statistics in Biopharmaceutical Research, March.
https://doi.org/10.1080/19466315.2016.1225594
.. [2] Zhu, Haiyuan, and Hassan Lakkis. 2014. “Sample Size Calculation for
Comparing Two Negative Binomial Rates.” Statistics in Medicine 33 (3):
376–87. https://doi.org/10.1002/sim.5947.
.. [3] PASS documentation
"""
rate1, rate2, nobs1 = map(np.asarray, [rate1, rate2, nobs1])
nobs2 = nobs_ratio * nobs1
v1 = ((1 / rate2 + 1 / (nobs_ratio * rate1)) / exposure +
(1 + nobs_ratio) / nobs_ratio * dispersion)
if method_var == "alt":
v0_low = v0_upp = v1
elif method_var == "ftotal":
v0_low = (1 + low * nobs_ratio)**2 / (
exposure * nobs_ratio * low * (rate1 + nobs_ratio * rate2))
v0_low += (1 + nobs_ratio) / nobs_ratio * dispersion
v0_upp = (1 + upp * nobs_ratio)**2 / (
exposure * nobs_ratio * upp * (rate1 + nobs_ratio * rate2))
v0_upp += (1 + nobs_ratio) / nobs_ratio * dispersion
elif method_var == "score":
v0_low = _var_cmle_negbin(rate1, rate2, nobs_ratio,
exposure=exposure, value=low,
dispersion=dispersion)[0]
v0_upp = _var_cmle_negbin(rate1, rate2, nobs_ratio,
exposure=exposure, value=upp,
dispersion=dispersion)[0]
else:
raise NotImplementedError(f"method_var {method_var} not recognized")
es_low = np.log(rate1 / rate2) - np.log(low)
es_upp = np.log(rate1 / rate2) - np.log(upp)
std_null_low = np.sqrt(v0_low)
std_null_upp = np.sqrt(v0_upp)
std_alternative = np.sqrt(v1)
pow_ = _power_equivalence_het(es_low, es_upp, nobs1, alpha=alpha,
std_null_low=std_null_low,
std_null_upp=std_null_upp,
std_alternative=std_alternative)
if return_results:
res = HolderTuple(
power=pow_[0],
power_margins=pow_[1:],
std_null_low=std_null_low,
std_null_upp=std_null_upp,
std_alt=std_alternative,
nobs1=nobs1,
nobs2=nobs2,
nobs_ratio=nobs_ratio,
alpha=alpha,
tuple_=("power",), # override default
)
return res
else:
return pow_[0] | Power of equivalence test of ratio of 2 indep. negative binomial rates.
Parameters
----------
rate1 : float
Poisson rate for the first sample, treatment group, under the
alternative hypothesis.
rate2 : float
Poisson rate for the second sample, reference group, under the
alternative hypothesis.
nobs1 : float or int
Number of observations in sample 1.
low : float
Lower equivalence margin for the rate ratio, rate1 / rate2.
upp : float
Upper equivalence margin for the rate ratio, rate1 / rate2.
nobs_ratio : float
Sample size ratio, nobs2 = nobs_ratio * nobs1.
alpha : float in interval (0,1)
Significance level, e.g. 0.05, is the probability of a type I
error, that is wrong rejections if the Null Hypothesis is true.
dispersion : float >= 0.
Dispersion parameter for Negative Binomial distribution.
The Poisson limiting case corresponds to ``dispersion=0``.
method_var : {"score", "alt", "ftotal"}
The variance of the test statistic used for the null hypothesis given the
rates under the alternative. It can either use the rates under the
alternative, ``method_var="alt"``, be estimated under the constraint of
the null hypothesis, ``method_var="score"``, or be based on a moment
constrained estimate, ``method_var="ftotal"``. See the references.
alternative : string, 'two-sided' (default), 'larger', 'smaller'
Alternative hypothesis whether the power is calculated for a
two-sided (default) or one sided test. The one-sided test can be
either 'larger', 'smaller'.
return_results : bool
If true, then a results instance with extra information is returned,
otherwise only the computed power is returned.
Returns
-------
results : results instance or float
If return_results is False, then only the power is returned.
If return_results is True, then a results instance with the
information in attributes is returned.
power : float
Power of the test, e.g. 0.8, is one minus the probability of a
type II error. Power is the probability that the test correctly
rejects the Null Hypothesis if the Alternative Hypothesis is true.
Other attributes in results instance include :
std_null
standard error of difference under the null hypothesis (without
sqrt(nobs1))
std_alt
standard error of difference under the alternative hypothesis
(without sqrt(nobs1))
References
----------
.. [1] Zhu, Haiyuan. 2017. “Sample Size Calculation for Comparing Two
Poisson or Negative Binomial Rates in Noninferiority or Equivalence
Trials.” Statistics in Biopharmaceutical Research, March.
https://doi.org/10.1080/19466315.2016.1225594
.. [2] Zhu, Haiyuan, and Hassan Lakkis. 2014. “Sample Size Calculation for
Comparing Two Negative Binomial Rates.” Statistics in Medicine 33 (3):
376–87. https://doi.org/10.1002/sim.5947.
.. [3] PASS documentation | power_equivalence_neginb_2indep | python | statsmodels/statsmodels | statsmodels/stats/rates.py | https://github.com/statsmodels/statsmodels/blob/master/statsmodels/stats/rates.py | BSD-3-Clause |
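A minimal usage sketch for the negative binomial equivalence power above, under the same ``statsmodels.stats.rates`` import-path assumption:

# Hedged sketch with illustrative values.
from statsmodels.stats.rates import power_equivalence_neginb_2indep

pow_ = power_equivalence_neginb_2indep(rate1=1.0, rate2=1.0, nobs1=200,
                                       low=0.8, upp=1.25, nobs_ratio=1,
                                       exposure=1, alpha=0.05,
                                       dispersion=0.3, method_var="ftotal")
print(pow_)  # scalar power of the TOST equivalence test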
def test_chisquare_binning(counts, expected, sort_var=None, bins=10,
df=None, ordered=False, sort_method="quicksort",
alpha_nc=0.05):
"""chisquare gof test with binning of data, Hosmer-Lemeshow type
``observed`` and ``expected`` are observation specific and should have
observations in rows and choices in columns
Parameters
----------
counts : array_like
Observed frequency, i.e. counts for all choices
expected : array_like
Expected counts or probability. If expected are counts, then they
need to sum to the same total count as the sum of observed.
If those sums are unequal and all expected values are smaller or equal
to 1, then they are interpreted as probabilities and will be rescaled
to match counts.
sort_var : array_like
1-dimensional array for binning. Groups will be formed according to
quantiles of the sorted array ``sort_var``, so that group sizes have
equal or approximately equal sizes.
Returns
-------
HolderTuple instance
This instance contains the results of the chisquare test and some
information about the data
- statistic : chisquare statistic of the goodness-of-fit test
- pvalue : pvalue of the chisquare test
- df : degrees of freedom of the test
Notes
-----
Degrees of freedom for Hosmer-Lemeshow tests are given by
g groups, c choices
- binary: `df = (g - 2)` for insample,
Stata uses `df = g` for outsample
- multinomial: `df = (g - 2) * (c - 1)`, reduces to (g - 2) for binary c=2,
(Fagerland, Hosmer, Bofin SIM 2008)
- ordinal: `df = (g - 2) * (c - 1) + (c - 2)`, reduces to (g-2) for c=2,
(Hosmer, ... ?)
Note: If there are ties in the ``sort_var`` array, then the split of
observations into groups will depend on the sort algorithm.
"""
observed = np.asarray(counts)
expected = np.asarray(expected)
n_observed = observed.sum()
n_expected = expected.sum()
if not np.allclose(n_observed, n_expected, atol=1e-13):
if np.max(expected) < 1 + 1e-13:
# expected seems to be probability, warn and rescale
import warnings
warnings.warn("sum of expected and of observed differ, "
"rescaling ``expected``")
expected = expected / n_expected * n_observed
else:
# expected doesn't look like fractions or probabilities
raise ValueError("total counts of expected and observed differ")
# k = 1 if observed.ndim == 1 else observed.shape[1]
if sort_var is not None:
argsort = np.argsort(sort_var, kind=sort_method)
else:
argsort = np.arange(observed.shape[0])
# indices = [arr for arr in np.array_split(argsort, bins, axis=0)]
indices = np.array_split(argsort, bins, axis=0)
# in one loop, observed expected in last dimension, too messy,
# freqs_probs = np.array([np.vstack([observed[idx].mean(0),
# expected[idx].mean(0)]).T
# for idx in indices])
freqs = np.array([observed[idx].sum(0) for idx in indices])
probs = np.array([expected[idx].sum(0) for idx in indices])
# chisquare test
resid_pearson = (freqs - probs) / np.sqrt(probs)
chi2_stat_groups = ((freqs - probs)**2 / probs).sum(1)
chi2_stat = chi2_stat_groups.sum()
if df is None:
g, c = freqs.shape
if ordered is True:
df = (g - 2) * (c - 1) + (c - 2)
else:
df = (g - 2) * (c - 1)
pvalue = stats.chi2.sf(chi2_stat, df)
noncentrality = _noncentrality_chisquare(chi2_stat, df, alpha=alpha_nc)
res = HolderTuple(statistic=chi2_stat,
pvalue=pvalue,
df=df,
freqs=freqs,
probs=probs,
noncentrality=noncentrality,
resid_pearson=resid_pearson,
chi2_stat_groups=chi2_stat_groups,
indices=indices
)
return res | chisquare gof test with binning of data, Hosmer-Lemeshow type
``observed`` and ``expected`` are observation specific and should have
observations in rows and choices in columns
Parameters
----------
counts : array_like
Observed frequency, i.e. counts for all choices
expected : array_like
Expected counts or probability. If expected are counts, then they
need to sum to the same total count as the sum of observed.
If those sums are unequal and all expected values are smaller or equal
to 1, then they are interpreted as probabilities and will be rescaled
to match counts.
sort_var : array_like
1-dimensional array for binning. Groups will be formed according to
quantiles of the sorted array ``sort_var``, so that group sizes have
equal or approximately equal sizes.
Returns
-------
HolderTuple instance
This instance contains the results of the chisquare test and some
information about the data
- statistic : chisquare statistic of the goodness-of-fit test
- pvalue : pvalue of the chisquare test
- df : degrees of freedom of the test
Notes
-----
Degrees of freedom for Hosmer-Lemeshow tests are given by
g groups, c choices
- binary: `df = (g - 2)` for insample,
Stata uses `df = g` for outsample
- multinomial: `df = (g - 2) * (c - 1)`, reduces to (g - 2) for binary c=2,
(Fagerland, Hosmer, Bofin SIM 2008)
- ordinal: `df = (g - 2) * (c - 1) + (c - 2)`, reduces to (g-2) for c=2,
(Hosmer, ... ?)
Note: If there are ties in the ``sort_var`` array, then the split of
observations into groups will depend on the sort algorithm. | test_chisquare_binning | python | statsmodels/statsmodels | statsmodels/stats/diagnostic_gen.py | https://github.com/statsmodels/statsmodels/blob/master/statsmodels/stats/diagnostic_gen.py | BSD-3-Clause |
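A minimal usage sketch for the Hosmer-Lemeshow type test above, assuming it is importable from ``statsmodels.stats.diagnostic_gen`` as the path column indicates, with simulated binary outcomes and predicted probabilities:

# Hedged sketch: data are simulated, not from the source.
import numpy as np
from statsmodels.stats.diagnostic_gen import test_chisquare_binning

rng = np.random.default_rng(0)
p = rng.uniform(0.2, 0.8, size=500)        # predicted P(y=1) per observation
y = rng.binomial(1, p)                     # simulated outcomes
counts = np.column_stack([1 - y, y])       # observed counts, one column per choice
expected = np.column_stack([1 - p, p])     # expected probabilities per choice
res = test_chisquare_binning(counts, expected, sort_var=p, bins=10)
print(res.statistic, res.pvalue, res.df)   # df = (10 - 2) * (2 - 1) = 8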
def prob_larger_ordinal_choice(prob):
"""probability that observed category is larger than distribution prob
This is a helper function for Ordinal models, where endog is a 1-dim
categorical variable and predicted probabilities are 2-dimensional with
observations in rows and choices in columns.
Parameters
----------
prob : array_like
Expected probabilities for ordinal choices, e.g. from prediction of
an ordinal model with observations in rows and choices in columns.
Returns
-------
cdf_mid : ndarray
mid cdf, i.e. ``P(x < y) + 0.5 P(x = y)``
r : ndarray
Probability residual ``P(x > y) - P(x < y)`` for all possible choices.
Computed as ``r = cdf_mid * 2 - 1``
References
----------
.. [2] Li, Chun, and Bryan E. Shepherd. 2012. “A New Residual for Ordinal
Outcomes.” Biometrika 99 (2): 473–80.
See Also
--------
`statsmodels.stats.nonparametric.rank_compare_2ordinal`
"""
# similar to `nonparametric rank_compare_2ordinal`
prob = np.asarray(prob)
cdf = prob.cumsum(-1)
if cdf.ndim == 1:
cdf_ = np.concatenate(([0], cdf))
elif cdf.ndim == 2:
cdf_ = np.concatenate((np.zeros((len(cdf), 1)), cdf), axis=1)
# r_1 = cdf_[..., 1:] + cdf_[..., :-1] - 1
cdf_mid = (cdf_[..., 1:] + cdf_[..., :-1]) / 2
r = cdf_mid * 2 - 1
return cdf_mid, r | probability that observed category is larger than distribution prob
This is a helper function for Ordinal models, where endog is a 1-dim
categorical variable and predicted probabilities are 2-dimensional with
observations in rows and choices in columns.
Parameters
----------
prob : array_like
Expected probabilities for ordinal choices, e.g. from prediction of
an ordinal model with observations in rows and choices in columns.
Returns
-------
cdf_mid : ndarray
mid cdf, i.e. ``P(x < y) + 0.5 P(x = y)``
r : ndarray
Probability residual ``P(x > y) - P(x < y)`` for all possible choices.
Computed as ``r = cdf_mid * 2 - 1``
References
----------
.. [2] Li, Chun, and Bryan E. Shepherd. 2012. “A New Residual for Ordinal
Outcomes.” Biometrika 99 (2): 473–80.
See Also
--------
`statsmodels.stats.nonparametric.rank_compare_2ordinal` | prob_larger_ordinal_choice | python | statsmodels/statsmodels | statsmodels/stats/diagnostic_gen.py | https://github.com/statsmodels/statsmodels/blob/master/statsmodels/stats/diagnostic_gen.py | BSD-3-Clause |
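A small sketch for the helper above, under the same ``statsmodels.stats.diagnostic_gen`` import assumption: mid cdf and probability residuals for one predicted distribution over four ordered choices.

import numpy as np
from statsmodels.stats.diagnostic_gen import prob_larger_ordinal_choice

prob = np.array([0.1, 0.2, 0.3, 0.4])
cdf_mid, resid = prob_larger_ordinal_choice(prob)
print(cdf_mid)  # [0.05 0.2 0.45 0.8]: P(x < y) + 0.5 P(x = y) for each category y
print(resid)    # cdf_mid * 2 - 1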
def prob_larger_2ordinal(probs1, probs2):
"""Stochastically large probability for two ordinal distributions
Computes Pr(x1 > x2) + 0.5 * Pr(x1 = x2) for two ordered multinomial
(ordinal) distributed random variables x1 and x2.
This is vectorized with choices along last axis.
Broadcasting if freq2 is 1-dim also seems to work correctly.
Returns
-------
prob1 : float
Probability that random draw from distribution 1 is larger than a
random draw from distribution 2. Pr(x1 > x2) + 0.5 * Pr(x1 = x2)
prob2 : float
prob2 = 1 - prob1 = Pr(x1 < x2) + 0.5 * Pr(x1 = x2)
"""
# count1 = np.asarray(count1)
# count2 = np.asarray(count2)
# nobs1, nobs2 = count1.sum(), count2.sum()
# freq1 = count1 / nobs1
# freq2 = count2 / nobs2
# if freq1.ndim == 1:
# freq1_ = np.concatenate(([0], freq1))
# elif freq1.ndim == 2:
# freq1_ = np.concatenate((np.zeros((len(freq1), 1)), freq1), axis=1)
# if freq2.ndim == 1:
# freq2_ = np.concatenate(([0], freq2))
# elif freq2.ndim == 2:
# freq2_ = np.concatenate((np.zeros((len(freq2), 1)), freq2), axis=1)
freq1 = np.asarray(probs1)
freq2 = np.asarray(probs2)
# add zero at beginning of choices for cdf computation
freq1_ = np.concatenate((np.zeros(freq1.shape[:-1] + (1,)), freq1),
axis=-1)
freq2_ = np.concatenate((np.zeros(freq2.shape[:-1] + (1,)), freq2),
axis=-1)
cdf1 = freq1_.cumsum(axis=-1)
cdf2 = freq2_.cumsum(axis=-1)
# mid rank cdf
cdfm1 = (cdf1[..., 1:] + cdf1[..., :-1]) / 2
cdfm2 = (cdf2[..., 1:] + cdf2[..., :-1]) / 2
prob1 = (cdfm2 * freq1).sum(-1)
prob2 = (cdfm1 * freq2).sum(-1)
return prob1, prob2 | Stochastically large probability for two ordinal distributions
Computes Pr(x1 > x2) + 0.5 * Pr(x1 = x2) for two ordered multinomial
(ordinal) distributed random variables x1 and x2.
This is vectorized with choices along last axis.
Broadcasting if freq2 is 1-dim also seems to work correctly.
Returns
-------
prob1 : float
Probability that random draw from distribution 1 is larger than a
random draw from distribution 2. Pr(x1 > x2) + 0.5 * Pr(x1 = x2)
prob2 : float
prob2 = 1 - prob1 = Pr(x1 < x2) + 0.5 * Pr(x1 = x2) | prob_larger_2ordinal | python | statsmodels/statsmodels | statsmodels/stats/diagnostic_gen.py | https://github.com/statsmodels/statsmodels/blob/master/statsmodels/stats/diagnostic_gen.py | BSD-3-Clause |
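A small sketch for the function above, same import assumption: comparing a distribution shifted toward high categories against its mirror image.

import numpy as np
from statsmodels.stats.diagnostic_gen import prob_larger_2ordinal

probs1 = np.array([0.1, 0.2, 0.3, 0.4])
probs2 = np.array([0.4, 0.3, 0.2, 0.1])
p1, p2 = prob_larger_2ordinal(probs1, probs2)
print(p1, p2)  # p1 = 0.75 here; p1 + p2 == 1 by construction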
def cov_multinomial(probs):
"""covariance matrix of multinomial distribution
This is vectorized with choices along last axis.
cov = diag(probs) - outer(probs, probs)
"""
k = probs.shape[-1]
di = np.diag_indices(k, 2)
cov = probs[..., None] * probs[..., None, :]
cov *= - 1
cov[..., di[0], di[1]] += probs
return cov | covariance matrix of multinomial distribution
This is vectorized with choices along last axis.
cov = diag(probs) - outer(probs, probs) | cov_multinomial | python | statsmodels/statsmodels | statsmodels/stats/diagnostic_gen.py | https://github.com/statsmodels/statsmodels/blob/master/statsmodels/stats/diagnostic_gen.py | BSD-3-Clause |
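A one-line check of the identity in the docstring above, same import assumption:

import numpy as np
from statsmodels.stats.diagnostic_gen import cov_multinomial

p = np.array([0.2, 0.3, 0.5])
print(np.allclose(cov_multinomial(p), np.diag(p) - np.outer(p, p)))  # True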
def var_multinomial(probs):
"""variance of multinomial distribution
var = probs * (1 - probs)
"""
var = probs * (1 - probs)
return var | variance of multinomial distribution
var = probs * (1 - probs) | var_multinomial | python | statsmodels/statsmodels | statsmodels/stats/diagnostic_gen.py | https://github.com/statsmodels/statsmodels/blob/master/statsmodels/stats/diagnostic_gen.py | BSD-3-Clause |
def _critvals(self, n):
"""
Rows of the table, linearly interpolated for given sample size
Parameters
----------
n : float
sample size, second parameter of the table
Returns
-------
critv : ndarray, 1d
critical values (ppf) corresponding to a row of the table
Notes
-----
This is used in two step interpolation, or if we want to know the
critical values for all alphas for any sample size that we can obtain
through interpolation
"""
if n > self.max_size:
if self.asymptotic is not None:
cv = self.asymptotic(n)
else:
raise ValueError('n is above max(size) and no asymptotic '
'distribution is provided')
else:
cv = ([p(n) for p in self.polyn])
if n > self.min_nobs:
w = (n - self.min_nobs) / (self.max_nobs - self.min_nobs)
w = min(1.0, w)
a_cv = self.asymptotic(n)
cv = w * a_cv + (1 - w) * cv
return cv | Rows of the table, linearly interpolated for given sample size
Parameters
----------
n : float
sample size, second parameter of the table
Returns
-------
critv : ndarray, 1d
critical values (ppf) corresponding to a row of the table
Notes
-----
This is used in two step interpolation, or if we want to know the
critical values for all alphas for any sample size that we can obtain
through interpolation | _critvals | python | statsmodels/statsmodels | statsmodels/stats/tabledist.py | https://github.com/statsmodels/statsmodels/blob/master/statsmodels/stats/tabledist.py | BSD-3-Clause |
def prob(self, x, n):
"""
Find p-values by interpolation, i.e. cdf(x) evaluated from the table.
Returns extreme probabilities, 0.001 and 0.2, for out of range
Parameters
----------
x : array_like
observed value, assumed to follow the distribution in the table
n : float
sample size, second parameter of the table
Returns
-------
prob : array_like
This is the probability for each value of x, the p-value in
underlying distribution is for a statistical test.
"""
critv = self._critvals(n)
alpha = self.alpha
if self.signcrit < 1:
# reverse if critv is decreasing
critv, alpha = critv[::-1], alpha[::-1]
# now critv is increasing
if np.size(x) == 1:
if x < critv[0]:
return alpha[0]
elif x > critv[-1]:
return alpha[-1]
return interp1d(critv, alpha)(x)[()]
else:
# vectorized
cond_low = (x < critv[0])
cond_high = (x > critv[-1])
cond_interior = ~np.logical_or(cond_low, cond_high)
probs = np.nan * np.ones(x.shape) # mistake if nan left
probs[cond_low] = alpha[0]
probs[cond_high] = alpha[-1]
probs[cond_interior] = interp1d(critv, alpha)(x[cond_interior])
return probs | Find p-values by interpolation, i.e. cdf(x) evaluated from the table.
Returns extreme probabilities, 0.001 and 0.2, for out of range
Parameters
----------
x : array_like
observed value, assumed to follow the distribution in the table
n : float
sample size, second parameter of the table
Returns
-------
prob : array_like
This is the probability for each value of x, the p-value in
underlying distribution is for a statistical test. | prob | python | statsmodels/statsmodels | statsmodels/stats/tabledist.py | https://github.com/statsmodels/statsmodels/blob/master/statsmodels/stats/tabledist.py | BSD-3-Clause |
def crit(self, prob, n):
"""
Returns interpolated quantiles, similar to ppf or isf
uses two sequential 1d interpolations, first by n then by prob
Parameters
----------
prob : array_like
probabilities corresponding to the definition of table columns
n : int or float
sample size, second parameter of the table
Returns
-------
ppf : array_like
critical values with same shape as prob
"""
prob = np.asarray(prob)
alpha = self.alpha
critv = self._critvals(n)
# vectorized
cond_ilow = (prob > alpha[0])
cond_ihigh = (prob < alpha[-1])
cond_interior = np.logical_or(cond_ilow, cond_ihigh)
# scalar
if prob.size == 1:
if cond_interior:
return interp1d(alpha, critv)(prob)
else:
return np.nan
# vectorized
quantile = np.nan * np.ones(prob.shape) # nans for outside
quantile[cond_interior] = interp1d(alpha, critv)(prob[cond_interior])
return quantile | Returns interpolated quantiles, similar to ppf or isf
uses two sequential 1d interpolations, first by n then by prob
Parameters
----------
prob : array_like
probabilities corresponding to the definition of table columns
n : int or float
sample size, second parameter of the table
Returns
-------
ppf : array_like
critical values with same shape as prob | crit | python | statsmodels/statsmodels | statsmodels/stats/tabledist.py | https://github.com/statsmodels/statsmodels/blob/master/statsmodels/stats/tabledist.py | BSD-3-Clause |
def crit3(self, prob, n):
"""
Returns interpolated quantiles, similar to ppf or isf
uses Rbf to interpolate critical values as a function of `prob` and `n`
Parameters
----------
prob : array_like
probabilities corresponding to the definition of table columns
n : int or float
sample size, second parameter of the table
Returns
-------
ppf : array_like
critical values with same shape as prob, returns nan for arguments
that are outside of the table bounds
"""
prob = np.asarray(prob)
alpha = self.alpha
# vectorized
cond_ilow = (prob > alpha[0])
cond_ihigh = (prob < alpha[-1])
cond_interior = np.logical_or(cond_ilow, cond_ihigh)
# scalar
if prob.size == 1:
if cond_interior:
return self.polyrbf(n, prob)
else:
return np.nan
# vectorized
quantile = np.nan * np.ones(prob.shape) # nans for outside
quantile[cond_interior] = self.polyrbf(n, prob[cond_interior])
return quantile | Returns interpolated quantiles, similar to ppf or isf
uses Rbf to interpolate critical values as a function of `prob` and `n`
Parameters
----------
prob : array_like
probabilities corresponding to the definition of table columns
n : int or float
sample size, second parameter of the table
Returns
-------
ppf : array_like
critical values with same shape as prob, returns nan for arguments
that are outside of the table bounds | crit3 | python | statsmodels/statsmodels | statsmodels/stats/tabledist.py | https://github.com/statsmodels/statsmodels/blob/master/statsmodels/stats/tabledist.py | BSD-3-Clause |
def rankdata_2samp(x1, x2):
"""Compute midranks for two samples
Parameters
----------
x1, x2 : array_like
Original data for two samples that will be converted to midranks.
Returns
-------
rank1 : ndarray
Midranks of the first sample in the pooled sample.
rank2 : ndarray
Midranks of the second sample in the pooled sample.
ranki1 : ndarray
Internal midranks of the first sample.
ranki2 : ndarray
Internal midranks of the second sample.
"""
x1 = np.asarray(x1)
x2 = np.asarray(x2)
nobs1 = len(x1)
nobs2 = len(x2)
if nobs1 == 0 or nobs2 == 0:
raise ValueError("one sample has zero length")
x_combined = np.concatenate((x1, x2))
if x_combined.ndim > 1:
rank = np.apply_along_axis(rankdata, 0, x_combined)
else:
rank = rankdata(x_combined) # no axis in older scipy
rank1 = rank[:nobs1]
rank2 = rank[nobs1:]
if x_combined.ndim > 1:
ranki1 = np.apply_along_axis(rankdata, 0, x1)
ranki2 = np.apply_along_axis(rankdata, 0, x2)
else:
ranki1 = rankdata(x1)
ranki2 = rankdata(x2)
return rank1, rank2, ranki1, ranki2 | Compute midranks for two samples
Parameters
----------
x1, x2 : array_like
Original data for two samples that will be converted to midranks.
Returns
-------
rank1 : ndarray
Midranks of the first sample in the pooled sample.
rank2 : ndarray
Midranks of the second sample in the pooled sample.
ranki1 : ndarray
Internal midranks of the first sample.
ranki2 : ndarray
Internal midranks of the second sample. | rankdata_2samp | python | statsmodels/statsmodels | statsmodels/stats/nonparametric.py | https://github.com/statsmodels/statsmodels/blob/master/statsmodels/stats/nonparametric.py | BSD-3-Clause |
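A small sketch for the function above, assuming it is importable from ``statsmodels.stats.nonparametric`` per the path column, with a tie across the two samples:

from statsmodels.stats.nonparametric import rankdata_2samp

x1 = [1, 3, 3, 5]
x2 = [2, 3, 4]
rank1, rank2, ranki1, ranki2 = rankdata_2samp(x1, x2)
print(rank1)   # [1. 4. 4. 7.]  midranks of x1 in the pooled sample
print(rank2)   # [2. 4. 6.]     midranks of x2 in the pooled sample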
def conf_int(self, value=None, alpha=0.05, alternative="two-sided"):
"""
Confidence interval for probability that sample 1 has larger values
Confidence interval is for the shifted probability
P(x1 > x2) + 0.5 * P(x1 = x2) - value
Parameters
----------
value : float
Value, default 0, shifts the confidence interval,
e.g. ``value=0.5`` centers the confidence interval at zero.
alpha : float
Significance level for the confidence interval, coverage is
``1-alpha``
alternative : str
The alternative hypothesis, H1, has to be one of the following
* 'two-sided' : H1: ``prob - value`` not equal to 0.
* 'larger' : H1: ``prob - value > 0``
* 'smaller' : H1: ``prob - value < 0``
Returns
-------
lower : float or ndarray
Lower confidence limit. This is -inf for the one-sided alternative
"smaller".
upper : float or ndarray
Upper confidence limit. This is inf for the one-sided alternative
"larger".
"""
p0 = value
if p0 is None:
p0 = 0
diff = self.prob1 - p0
std_diff = np.sqrt(self.var / self.nobs)
if self.use_t is False:
return _zconfint_generic(diff, std_diff, alpha, alternative)
else:
return _tconfint_generic(diff, std_diff, self.df, alpha,
alternative) | Confidence interval for probability that sample 1 has larger values
Confidence interval is for the shifted probability
P(x1 > x2) + 0.5 * P(x1 = x2) - value
Parameters
----------
value : float
Value, default 0, shifts the confidence interval,
e.g. ``value=0.5`` centers the confidence interval at zero.
alpha : float
Significance level for the confidence interval, coverage is
``1-alpha``
alternative : str
The alternative hypothesis, H1, has to be one of the following
* 'two-sided' : H1: ``prob - value`` not equal to 0.
* 'larger' : H1: ``prob - value > 0``
* 'smaller' : H1: ``prob - value < 0``
Returns
-------
lower : float or ndarray
Lower confidence limit. This is -inf for the one-sided alternative
"smaller".
upper : float or ndarray
Upper confidence limit. This is inf for the one-sided alternative
"larger". | conf_int | python | statsmodels/statsmodels | statsmodels/stats/nonparametric.py | https://github.com/statsmodels/statsmodels/blob/master/statsmodels/stats/nonparametric.py | BSD-3-Clause |
def test_prob_superior(self, value=0.5, alternative="two-sided"):
"""test for superiority probability
H0: P(x1 > x2) + 0.5 * P(x1 = x2) = value
The alternative is that the probability is either not equal, larger
or smaller than the null-value depending on the chosen alternative.
Parameters
----------
value : float
Value of the probability under the Null hypothesis.
alternative : str
The alternative hypothesis, H1, has to be one of the following
* 'two-sided' : H1: ``prob - value`` not equal to 0.
* 'larger' : H1: ``prob - value > 0``
* 'smaller' : H1: ``prob - value < 0``
Returns
-------
res : HolderTuple
HolderTuple instance with the following main attributes
statistic : float
Test statistic for z- or t-test
pvalue : float
Pvalue of the test based on either normal or t distribution.
"""
p0 = value # alias
# diff = self.prob1 - p0 # for reporting, not used in computation
# TODO: use var_prob
std_diff = np.sqrt(self.var / self.nobs)
# corresponds to a one-sample test and either p0 or diff could be used
if not self.use_t:
stat, pv = _zstat_generic(self.prob1, p0, std_diff, alternative,
diff=0)
distr = "normal"
else:
stat, pv = _tstat_generic(self.prob1, p0, std_diff, self.df,
alternative, diff=0)
distr = "t"
res = HolderTuple(statistic=stat,
pvalue=pv,
df=self.df,
distribution=distr
)
return res | test for superiority probability
H0: P(x1 > x2) + 0.5 * P(x1 = x2) = value
The alternative is that the probability is either not equal, larger
or smaller than the null-value depending on the chosen alternative.
Parameters
----------
value : float
Value of the probability under the Null hypothesis.
alternative : str
The alternative hypothesis, H1, has to be one of the following
* 'two-sided' : H1: ``prob - value`` not equal to 0.
* 'larger' : H1: ``prob - value > 0``
* 'smaller' : H1: ``prob - value < 0``
Returns
-------
res : HolderTuple
HolderTuple instance with the following main attributes
statistic : float
Test statistic for z- or t-test
pvalue : float
Pvalue of the test based on either normal or t distribution. | test_prob_superior | python | statsmodels/statsmodels | statsmodels/stats/nonparametric.py | https://github.com/statsmodels/statsmodels/blob/master/statsmodels/stats/nonparametric.py | BSD-3-Clause |
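The methods in the last two rows, ``conf_int`` and ``test_prob_superior``, are assumed here to live on the results object returned by ``rank_compare_2indep`` (defined later in this file). A minimal usage sketch under that assumption:

from statsmodels.stats.nonparametric import rank_compare_2indep

x1 = [24, 30, 31, 35, 38, 40, 41, 45]
x2 = [20, 22, 25, 26, 28, 29, 33, 34]
res = rank_compare_2indep(x1, x2, use_t=True)
print(res.prob1)                           # P(x1 > x2) + 0.5 P(x1 = x2)
print(res.conf_int(alpha=0.05))            # confidence interval for that probability
print(res.test_prob_superior(value=0.5))   # HolderTuple with statistic and pvalue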
def tost_prob_superior(self, low, upp):
'''test of stochastic (non-)equivalence of p = P(x1 > x2)
Null hypothesis: p < low or p > upp
Alternative hypothesis: low < p < upp
where p is the probability that a random draw from the population of
the first sample has a larger value than a random draw from the
population of the second sample, specifically
p = P(x1 > x2) + 0.5 * P(x1 = x2)
If the pvalue is smaller than a threshold, say 0.05, then we reject the
hypothesis that the probability p that distribution 1 is stochastically
superior to distribution 2 is outside of the interval given by
thresholds low and upp.
Parameters
----------
low, upp : float
equivalence interval low < mean < upp
Returns
-------
res : HolderTuple
HolderTuple instance with the following main attributes
pvalue : float
Pvalue of the equivalence test given by the larger pvalue of
the two one-sided tests.
statistic : float
Test statistic of the one-sided test that has the larger
pvalue.
results_larger : HolderTuple
Results instance with test statistic, pvalue and degrees of
freedom for lower threshold test.
results_smaller : HolderTuple
Results instance with test statistic, pvalue and degrees of
freedom for upper threshold test.
'''
t1 = self.test_prob_superior(low, alternative='larger')
t2 = self.test_prob_superior(upp, alternative='smaller')
# idx_max = 1 if t1.pvalue < t2.pvalue else 0
idx_max = np.asarray(t1.pvalue < t2.pvalue, int)
title = "Equivalence test for Prob(x1 > x2) + 0.5 Prob(x1 = x2) "
res = HolderTuple(statistic=np.choose(idx_max,
[t1.statistic, t2.statistic]),
# pvalue=[t1.pvalue, t2.pvalue][idx_max], # python
# use np.choose for vectorized selection
pvalue=np.choose(idx_max, [t1.pvalue, t2.pvalue]),
results_larger=t1,
results_smaller=t2,
title=title
)
return res | test of stochastic (non-)equivalence of p = P(x1 > x2)
Null hypothesis: p < low or p > upp
Alternative hypothesis: low < p < upp
where p is the probability that a random draw from the population of
the first sample has a larger value than a random draw from the
population of the second sample, specifically
p = P(x1 > x2) + 0.5 * P(x1 = x2)
If the pvalue is smaller than a threshold, say 0.05, then we reject the
hypothesis that the probability p that distribution 1 is stochastically
superior to distribution 2 is outside of the interval given by
thresholds low and upp.
Parameters
----------
low, upp : float
equivalence interval low < mean < upp
Returns
-------
res : HolderTuple
HolderTuple instance with the following main attributes
pvalue : float
Pvalue of the equivalence test given by the larger pvalue of
the two one-sided tests.
statistic : float
Test statistic of the one-sided test that has the larger
pvalue.
results_larger : HolderTuple
Results instance with test statistic, pvalue and degrees of
freedom for lower threshold test.
results_smaller : HolderTuple
Results instance with test statistic, pvalue and degrees of
freedom for upper threshold test. | tost_prob_superior | python | statsmodels/statsmodels | statsmodels/stats/nonparametric.py | https://github.com/statsmodels/statsmodels/blob/master/statsmodels/stats/nonparametric.py | BSD-3-Clause |
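An equivalence (TOST) sketch for the method above, under the same assumptions as the previous example: test whether P(x1 > x2) + 0.5 P(x1 = x2) lies inside the margins (0.4, 0.6).

from statsmodels.stats.nonparametric import rank_compare_2indep

res = rank_compare_2indep([24, 30, 31, 35, 38], [20, 22, 25, 26, 28])
tost = res.tost_prob_superior(low=0.4, upp=0.6)
print(tost.pvalue, tost.results_larger.pvalue, tost.results_smaller.pvalue)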
def confint_lintransf(self, const=-1, slope=2, alpha=0.05,
alternative="two-sided"):
"""confidence interval of a linear transformation of prob1
This computes the confidence interval for
d = const + slope * prob1
Default values correspond to Somers' d.
Parameters
----------
const, slope : float
Constant and slope for linear (affine) transformation.
alpha : float
Significance level for the confidence interval, coverage is
``1-alpha``
alternative : str
The alternative hypothesis, H1, has to be one of the following
* 'two-sided' : H1: ``prob - value`` not equal to 0.
* 'larger' : H1: ``prob - value > 0``
* 'smaller' : H1: ``prob - value < 0``
Returns
-------
lower : float or ndarray
Lower confidence limit. This is -inf for the one-sided alternative
"smaller".
upper : float or ndarray
Upper confidence limit. This is inf for the one-sided alternative
"larger".
"""
low_p, upp_p = self.conf_int(alpha=alpha, alternative=alternative)
low = const + slope * low_p
upp = const + slope * upp_p
if slope < 0:
low, upp = upp, low
return low, upp | confidence interval of a linear transformation of prob1
This computes the confidence interval for
d = const + slope * prob1
Default values correspond to Somers' d.
Parameters
----------
const, slope : float
Constant and slope for linear (affine) transformation.
alpha : float
Significance level for the confidence interval, coverage is
``1-alpha``
alternative : str
The alternative hypothesis, H1, has to be one of the following
* 'two-sided' : H1: ``prob - value`` not equal to 0.
* 'larger' : H1: ``prob - value > 0``
* 'smaller' : H1: ``prob - value < 0``
Returns
-------
lower : float or ndarray
Lower confidence limit. This is -inf for the one-sided alternative
"smaller".
upper : float or ndarray
Upper confidence limit. This is inf for the one-sided alternative
"larger". | confint_lintransf | python | statsmodels/statsmodels | statsmodels/stats/nonparametric.py | https://github.com/statsmodels/statsmodels/blob/master/statsmodels/stats/nonparametric.py | BSD-3-Clause |
def effectsize_normal(self, prob=None):
"""
Cohen's d, standardized mean difference under normality assumption.
This computes the standardized mean difference, Cohen's d, effect size
that is equivalent to the rank based probability ``p`` of being
stochastically larger if we assume that the data is normally
distributed, given by
:math:`d = F^{-1}(p) * \\sqrt{2}`
where :math:`F^{-1}` is the inverse of the cdf of the normal
distribution.
Parameters
----------
prob : float in (0, 1)
Probability to be converted to Cohen's d effect size.
If prob is None, then the ``prob1`` attribute is used.
Returns
-------
equivalent Cohen's d effect size under normality assumption.
"""
if prob is None:
prob = self.prob1
return stats.norm.ppf(prob) * np.sqrt(2) | Cohen's d, standardized mean difference under normality assumption.
This computes the standardized mean difference, Cohen's d, effect size
that is equivalent to the rank based probability ``p`` of being
stochastically larger if we assume that the data is normally
distributed, given by
:math:`d = F^{-1}(p) * \\sqrt{2}`
where :math:`F^{-1}` is the inverse of the cdf of the normal
distribution.
Parameters
----------
prob : float in (0, 1)
Probability to be converted to Cohen's d effect size.
If prob is None, then the ``prob1`` attribute is used.
Returns
-------
equivalent Cohen's d effect size under normality assumption. | effectsize_normal | python | statsmodels/statsmodels | statsmodels/stats/nonparametric.py | https://github.com/statsmodels/statsmodels/blob/master/statsmodels/stats/nonparametric.py | BSD-3-Clause |
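A short worked check of the conversion used by effectsize_normal, with a hypothetical probability.
from scipy import stats
import numpy as np
p = 0.64                              # hypothetical prob(x1 > x2)
d = stats.norm.ppf(p) * np.sqrt(2)    # inverse-normal transform times sqrt(2)
print(round(d, 3))                    # roughly 0.507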
def summary(self, alpha=0.05, xname=None):
"""summary table for probability that random draw x1 is larger than x2
Parameters
----------
alpha : float
Significance level for confidence intervals. Coverage is 1 - alpha
xname : None or list of str
If None, then each row has a name column with generic names.
If xname is a list of strings, then it will be included as part
of those names.
Returns
-------
SimpleTable instance with methods to convert to different output
formats.
"""
yname = "None"
effect = np.atleast_1d(self.prob1)
if self.pvalue is None:
statistic, pvalue = self.test_prob_superior()
else:
pvalue = self.pvalue
statistic = self.statistic
pvalues = np.atleast_1d(pvalue)
ci = np.atleast_2d(self.conf_int(alpha=alpha))
if ci.shape[0] > 1:
ci = ci.T
use_t = self.use_t
sd = np.atleast_1d(np.sqrt(self.var_prob))
statistic = np.atleast_1d(statistic)
if xname is None:
xname = ['c%d' % ii for ii in range(len(effect))]
xname2 = ['prob(x1>x2) %s' % ii for ii in xname]
title = "Probability sample 1 is stochastically larger"
from statsmodels.iolib.summary import summary_params
summ = summary_params((self, effect, sd, statistic,
pvalues, ci),
yname=yname, xname=xname2, use_t=use_t,
title=title, alpha=alpha)
return summ | summary table for probability that random draw x1 is larger than x2
Parameters
----------
alpha : float
Significance level for confidence intervals. Coverage is 1 - alpha
xname : None or list of str
If None, then each row has a name column with generic names.
If xname is a list of strings, then it will be included as part
of those names.
Returns
-------
SimpleTable instance with methods to convert to different output
formats. | summary | python | statsmodels/statsmodels | statsmodels/stats/nonparametric.py | https://github.com/statsmodels/statsmodels/blob/master/statsmodels/stats/nonparametric.py | BSD-3-Clause |
def rank_compare_2indep(x1, x2, use_t=True):
"""
Statistics and tests for the probability that x1 has larger values than x2.
p is the probability that a random draw from the population of
the first sample has a larger value than a random draw from the
population of the second sample, specifically
p = P(x1 > x2) + 0.5 * P(x1 = x2)
This is a measure underlying Wilcoxon-Mann-Whitney's U test, the
Fligner-Policello test and the Brunner-Munzel test.
Inference is based on the asymptotic distribution of the Brunner-Munzel
test. The half probability for ties corresponds to the use of midranks
and makes it valid for discrete variables.
The Null hypothesis for stochastic equality is p = 0.5, which corresponds
to the Brunner-Munzel test.
Parameters
----------
x1, x2 : array_like
Array of samples, should be one-dimensional.
use_t : boolean
If use_t is true, the t distribution with Welch-Satterthwaite type
degrees of freedom is used for p-value and confidence interval.
If use_t is false, then the normal distribution is used.
Returns
-------
res : RankCompareResult
The results instance contains the results for the Brunner-Munzel test
and has methods for hypothesis tests, confidence intervals and summary.
statistic : float
The Brunner-Munzel W statistic.
pvalue : float
p-value assuming a t distribution. One-sided or
two-sided, depending on the choice of `alternative` and `use_t`.
See Also
--------
RankCompareResult
scipy.stats.brunnermunzel : Brunner-Munzel test for stochastic equality
scipy.stats.mannwhitneyu : Mann-Whitney rank test on two samples.
Notes
-----
Wilcoxon-Mann-Whitney assumes equal variance or equal distribution under
the Null hypothesis. Fligner-Policello test allows for unequal variances
but assumes continuous distribution, i.e. no ties.
Brunner-Munzel extends the test to allow for unequal variance and discrete
or ordered categorical random variables.
Brunner and Munzel recommended estimating the p-value by the t-distribution
when the sample size is 50 or less. If the size is lower than 10, it would
be better to use the permuted Brunner-Munzel test (see [2]_) for the test
of stochastic equality.
This measure has been introduced in the literature under many different
names relying on a variety of assumptions.
In psychology, McGraw and Wong (1992) introduced it as Common Language
effect size for the continuous, normal distribution case,
Vargha and Delaney (2000) [3]_ extended it to the nonparametric
continuous distribution case as in Fligner-Policello.
WMW and related tests can be interpreted as tests of medians or tests
of central location only under very restrictive additional assumptions,
such as both distributions being identical under the equality null hypothesis
(assumed by Mann-Whitney) or both distributions being symmetric (shown by
Fligner-Policello). If the distributions of the two samples can differ in
an arbitrary way, then the equality null hypothesis corresponds to p = 0.5
against an alternative p != 0.5. See for example Conroy (2012) [4]_ and
Divine et al (2018) [5]_.
Note: Brunner-Munzel and the related literature define the probability that x1
is stochastically smaller than x2, while here we use stochastically larger.
This is equivalent to switching x1 and x2 in the two-sample case.
References
----------
.. [1] Brunner, E. and Munzel, U. "The nonparametric Behrens-Fisher
problem: Asymptotic theory and a small-sample approximation".
Biometrical Journal. Vol. 42(2000): 17-25.
.. [2] Neubert, K. and Brunner, E. "A studentized permutation test for the
non-parametric Behrens-Fisher problem". Computational Statistics and
Data Analysis. Vol. 51(2007): 5192-5204.
.. [3] Vargha, András, and Harold D. Delaney. 2000. “A Critique and
Improvement of the CL Common Language Effect Size Statistics of
McGraw and Wong.” Journal of Educational and Behavioral Statistics
25 (2): 101–32. https://doi.org/10.3102/10769986025002101.
.. [4] Conroy, Ronán M. 2012. “What Hypotheses Do ‘Nonparametric’ Two-Group
Tests Actually Test?” The Stata Journal: Promoting Communications on
Statistics and Stata 12 (2): 182–90.
https://doi.org/10.1177/1536867X1201200202.
.. [5] Divine, George W., H. James Norton, Anna E. Barón, and Elizabeth
Juarez-Colunga. 2018. “The Wilcoxon–Mann–Whitney Procedure Fails as
a Test of Medians.” The American Statistician 72 (3): 278–86.
https://doi.org/10.1080/00031305.2017.1305291.
"""
x1 = np.asarray(x1)
x2 = np.asarray(x2)
nobs1 = len(x1)
nobs2 = len(x2)
nobs = nobs1 + nobs2
if nobs1 == 0 or nobs2 == 0:
raise ValueError("one sample has zero length")
rank1, rank2, ranki1, ranki2 = rankdata_2samp(x1, x2)
meanr1 = np.mean(rank1, axis=0)
meanr2 = np.mean(rank2, axis=0)
meanri1 = np.mean(ranki1, axis=0)
meanri2 = np.mean(ranki2, axis=0)
S1 = np.sum(np.power(rank1 - ranki1 - meanr1 + meanri1, 2.0), axis=0)
S1 /= nobs1 - 1
S2 = np.sum(np.power(rank2 - ranki2 - meanr2 + meanri2, 2.0), axis=0)
S2 /= nobs2 - 1
wbfn = nobs1 * nobs2 * (meanr1 - meanr2)
wbfn /= (nobs1 + nobs2) * np.sqrt(nobs1 * S1 + nobs2 * S2)
# Here we only use alternative == "two-sided"
if use_t:
df_numer = np.power(nobs1 * S1 + nobs2 * S2, 2.0)
df_denom = np.power(nobs1 * S1, 2.0) / (nobs1 - 1)
df_denom += np.power(nobs2 * S2, 2.0) / (nobs2 - 1)
df = df_numer / df_denom
pvalue = 2 * stats.t.sf(np.abs(wbfn), df)
else:
pvalue = 2 * stats.norm.sf(np.abs(wbfn))
df = None
# other info
var1 = S1 / (nobs - nobs1)**2
var2 = S2 / (nobs - nobs2)**2
var_prob = (var1 / nobs1 + var2 / nobs2)
var = nobs * (var1 / nobs1 + var2 / nobs2)
prob1 = (meanr1 - (nobs1 + 1) / 2) / nobs2
prob2 = (meanr2 - (nobs2 + 1) / 2) / nobs1
return RankCompareResult(statistic=wbfn, pvalue=pvalue, s1=S1, s2=S2,
var1=var1, var2=var2, var=var,
var_prob=var_prob,
nobs1=nobs1, nobs2=nobs2, nobs=nobs,
mean1=meanr1, mean2=meanr2,
prob1=prob1, prob2=prob2,
somersd1=prob1 * 2 - 1, somersd2=prob2 * 2 - 1,
df=df, use_t=use_t
) | Statistics and tests for the probability that x1 has larger values than x2.
p is the probability that a random draw from the population of
the first sample has a larger value than a random draw from the
population of the second sample, specifically
p = P(x1 > x2) + 0.5 * P(x1 = x2)
This is a measure underlying Wilcoxon-Mann-Whitney's U test, the
Fligner-Policello test and the Brunner-Munzel test.
Inference is based on the asymptotic distribution of the Brunner-Munzel
test. The half probability for ties corresponds to the use of midranks
and makes it valid for discrete variables.
The Null hypothesis for stochastic equality is p = 0.5, which corresponds
to the Brunner-Munzel test.
Parameters
----------
x1, x2 : array_like
Array of samples, should be one-dimensional.
use_t : boolean
If use_t is true, the t distribution with Welch-Satterthwaite type
degrees of freedom is used for p-value and confidence interval.
If use_t is false, then the normal distribution is used.
Returns
-------
res : RankCompareResult
The results instance contains the results for the Brunner-Munzel test
and has methods for hypothesis tests, confidence intervals and summary.
statistic : float
The Brunner-Munzel W statistic.
pvalue : float
p-value assuming a t distribution. One-sided or
two-sided, depending on the choice of `alternative` and `use_t`.
See Also
--------
RankCompareResult
scipy.stats.brunnermunzel : Brunner-Munzel test for stochastic equality
scipy.stats.mannwhitneyu : Mann-Whitney rank test on two samples.
Notes
-----
Wilcoxon-Mann-Whitney assumes equal variance or equal distribution under
the Null hypothesis. Fligner-Policello test allows for unequal variances
but assumes continuous distribution, i.e. no ties.
Brunner-Munzel extends the test to allow for unequal variance and discrete
or ordered categorical random variables.
Brunner and Munzel recommended estimating the p-value by the t-distribution
when the sample size is 50 or less. If the size is lower than 10, it would
be better to use the permuted Brunner-Munzel test (see [2]_) for the test
of stochastic equality.
This measure has been introduced in the literature under many different
names relying on a variety of assumptions.
In psychology, McGraw and Wong (1992) introduced it as Common Language
effect size for the continuous, normal distribution case,
Vargha and Delaney (2000) [3]_ extended it to the nonparametric
continuous distribution case as in Fligner-Policello.
WMW and related tests can be interpreted as tests of medians or tests
of central location only under very restrictive additional assumptions,
such as both distributions being identical under the equality null hypothesis
(assumed by Mann-Whitney) or both distributions being symmetric (shown by
Fligner-Policello). If the distributions of the two samples can differ in
an arbitrary way, then the equality null hypothesis corresponds to p = 0.5
against an alternative p != 0.5. See for example Conroy (2012) [4]_ and
Divine et al (2018) [5]_.
Note: Brunner-Munzel and the related literature define the probability that x1
is stochastically smaller than x2, while here we use stochastically larger.
This is equivalent to switching x1 and x2 in the two-sample case.
References
----------
.. [1] Brunner, E. and Munzel, U. "The nonparametric Behrens-Fisher
problem: Asymptotic theory and a small-sample approximation".
Biometrical Journal. Vol. 42(2000): 17-25.
.. [2] Neubert, K. and Brunner, E. "A studentized permutation test for the
non-parametric Behrens-Fisher problem". Computational Statistics and
Data Analysis. Vol. 51(2007): 5192-5204.
.. [3] Vargha, András, and Harold D. Delaney. 2000. “A Critique and
Improvement of the CL Common Language Effect Size Statistics of
McGraw and Wong.” Journal of Educational and Behavioral Statistics
25 (2): 101–32. https://doi.org/10.3102/10769986025002101.
.. [4] Conroy, Ronán M. 2012. “What Hypotheses Do ‘Nonparametric’ Two-Group
Tests Actually Test?” The Stata Journal: Promoting Communications on
Statistics and Stata 12 (2): 182–90.
https://doi.org/10.1177/1536867X1201200202.
.. [5] Divine, George W., H. James Norton, Anna E. Barón, and Elizabeth
Juarez-Colunga. 2018. “The Wilcoxon–Mann–Whitney Procedure Fails as
a Test of Medians.” The American Statistician 72 (3): 278–86.
https://doi.org/10.1080/00031305.2017.1305291. | rank_compare_2indep | python | statsmodels/statsmodels | statsmodels/stats/nonparametric.py | https://github.com/statsmodels/statsmodels/blob/master/statsmodels/stats/nonparametric.py | BSD-3-Clause |
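A minimal usage sketch for rank_compare_2indep, assuming the import path in the path column; the data values are made up and the attribute names follow the RankCompareResult fields set above.
import numpy as np
from statsmodels.stats.nonparametric import rank_compare_2indep
x1 = np.array([1, 2, 2, 3, 5, 8, 8, 9])
x2 = np.array([0, 1, 1, 2, 3, 4, 5, 6])
res = rank_compare_2indep(x1, x2, use_t=True)
print(res.prob1)                  # P(x1 > x2) + 0.5 * P(x1 = x2)
print(res.statistic, res.pvalue)  # Brunner-Munzel statistic and two-sided p-value
print(res.conf_int(alpha=0.05))   # confidence interval for prob1 (results method)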
def rank_compare_2ordinal(count1, count2, ddof=1, use_t=True):
"""
Stochastically larger probability for 2 independent ordinal samples.
This is a special case of `rank_compare_2indep` when the data are given as
counts of two independent ordinal, i.e. ordered multinomial, samples.
The statistic of interest is the probability that a random draw from the
population of the first sample has a larger value than a random draw from
the population of the second sample, specifically
p = P(x1 > x2) + 0.5 * P(x1 = x2)
Parameters
----------
count1 : array_like
Counts of the first sample, categories are assumed to be ordered.
count2 : array_like
Counts of the second sample, number of categories and ordering need
to be the same as for sample 1.
ddof : scalar
Degrees of freedom correction for variance estimation. The default
ddof=1 corresponds to `rank_compare_2indep`.
use_t : bool
If use_t is true, the t distribution with Welch-Satterthwaite type
degrees of freedom is used for p-value and confidence interval.
If use_t is false, then the normal distribution is used.
Returns
-------
res : RankCompareResult
This includes methods for hypothesis tests and confidence intervals
for the probability that sample 1 is stochastically larger than
sample 2.
See Also
--------
rank_compare_2indep
RankCompareResult
Notes
-----
The implementation is based on the appendix of Munzel and Hauschke (2003)
with the addition of ``ddof`` so that the results match the general
function `rank_compare_2indep`.
"""
count1 = np.asarray(count1)
count2 = np.asarray(count2)
nobs1, nobs2 = count1.sum(), count2.sum()
freq1 = count1 / nobs1
freq2 = count2 / nobs2
cdf1 = np.concatenate(([0], freq1)).cumsum(axis=0)
cdf2 = np.concatenate(([0], freq2)).cumsum(axis=0)
# mid rank cdf
cdfm1 = (cdf1[1:] + cdf1[:-1]) / 2
cdfm2 = (cdf2[1:] + cdf2[:-1]) / 2
prob1 = (cdfm2 * freq1).sum()
prob2 = (cdfm1 * freq2).sum()
var1 = (cdfm2**2 * freq1).sum() - prob1**2
var2 = (cdfm1**2 * freq2).sum() - prob2**2
var_prob = (var1 / (nobs1 - ddof) + var2 / (nobs2 - ddof))
nobs = nobs1 + nobs2
var = nobs * var_prob
vn1 = var1 * nobs2 * nobs1 / (nobs1 - ddof)
vn2 = var2 * nobs1 * nobs2 / (nobs2 - ddof)
df = (vn1 + vn2)**2 / (vn1**2 / (nobs1 - 1) + vn2**2 / (nobs2 - 1))
res = RankCompareResult(statistic=None, pvalue=None, s1=None, s2=None,
var1=var1, var2=var2, var=var,
var_prob=var_prob,
nobs1=nobs1, nobs2=nobs2, nobs=nobs,
mean1=None, mean2=None,
prob1=prob1, prob2=prob2,
somersd1=prob1 * 2 - 1, somersd2=prob2 * 2 - 1,
df=df, use_t=use_t
)
return res | Stochastically larger probability for 2 independent ordinal samples.
This is a special case of `rank_compare_2indep` when the data are given as
counts of two independent ordinal, i.e. ordered multinomial, samples.
The statistic of interest is the probability that a random draw from the
population of the first sample has a larger value than a random draw from
the population of the second sample, specifically
p = P(x1 > x2) + 0.5 * P(x1 = x2)
Parameters
----------
count1 : array_like
Counts of the first sample, categories are assumed to be ordered.
count2 : array_like
Counts of the second sample, number of categories and ordering need
to be the same as for sample 1.
ddof : scalar
Degrees of freedom correction for variance estimation. The default
ddof=1 corresponds to `rank_compare_2indep`.
use_t : bool
If use_t is true, the t distribution with Welch-Satterthwaite type
degrees of freedom is used for p-value and confidence interval.
If use_t is false, then the normal distribution is used.
Returns
-------
res : RankCompareResult
This includes methods for hypothesis tests and confidence intervals
for the probability that sample 1 is stochastically larger than
sample 2.
See Also
--------
rank_compare_2indep
RankCompareResult
Notes
-----
The implementation is based on the appendix of Munzel and Hauschke (2003)
with the addition of ``ddof`` so that the results match the general
function `rank_compare_2indep`. | rank_compare_2ordinal | python | statsmodels/statsmodels | statsmodels/stats/nonparametric.py | https://github.com/statsmodels/statsmodels/blob/master/statsmodels/stats/nonparametric.py | BSD-3-Clause |
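A minimal usage sketch for the ordinal-count version, with made-up counts; categories must be listed in the same order for both samples.
import numpy as np
from statsmodels.stats.nonparametric import rank_compare_2ordinal
count1 = np.array([10, 20, 30, 40])   # hypothetical treatment counts per category
count2 = np.array([25, 25, 25, 25])   # hypothetical reference counts per category
res = rank_compare_2ordinal(count1, count2, ddof=1, use_t=True)
print(res.prob1)      # probability that sample 1 is stochastically larger
print(res.somersd1)   # Somers' d, i.e. 2 * prob1 - 1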
def prob_larger_continuous(distr1, distr2):
"""
Probability indicating that distr1 is stochastically larger than distr2.
This computes
p = P(x1 > x2)
for two continuous distributions, where `distr1` and `distr2` are the
distributions of random variables x1 and x2 respectively.
Parameters
----------
distr1, distr2 : distributions
Two instances of scipy.stats.distributions. The required methods are
cdf of the second distribution and expect of the first distribution.
Returns
-------
p : probability x1 is larger than x2
Notes
-----
This is a one-liner that is added mainly as reference.
Examples
--------
>>> from scipy import stats
>>> prob_larger_continuous(stats.norm, stats.t(5))
0.4999999999999999
# which is the same as
>>> stats.norm.expect(stats.t(5).cdf)
0.4999999999999999
# distribution 1 with smaller mean (loc) than distribution 2
>>> prob_larger_continuous(stats.norm, stats.norm(loc=1))
0.23975006109347669
"""
return distr1.expect(distr2.cdf) | Probability indicating that distr1 is stochastically larger than distr2.
This computes
p = P(x1 > x2)
for two continuous distributions, where `distr1` and `distr2` are the
distributions of random variables x1 and x2 respectively.
Parameters
----------
distr1, distr2 : distributions
Two instances of scipy.stats.distributions. The required methods are
cdf of the second distribution and expect of the first distribution.
Returns
-------
p : probability x1 is larger than x2
Notes
-----
This is a one-liner that is added mainly as reference.
Examples
--------
>>> from scipy import stats
>>> prob_larger_continuous(stats.norm, stats.t(5))
0.4999999999999999
# which is the same as
>>> stats.norm.expect(stats.t(5).cdf)
0.4999999999999999
# distribution 1 with smaller mean (loc) than distribution 2
>>> prob_larger_continuous(stats.norm, stats.norm(loc=1))
0.23975006109347669 | prob_larger_continuous | python | statsmodels/statsmodels | statsmodels/stats/nonparametric.py | https://github.com/statsmodels/statsmodels/blob/master/statsmodels/stats/nonparametric.py | BSD-3-Clause |
def cohensd2problarger(d):
"""
Convert Cohen's d effect size to stochastically-larger-probability.
This assumes observations are normally distributed.
Computed as
p = Prob(x1 > x2) = F(d / sqrt(2))
where `F` is cdf of normal distribution. Cohen's d is defined as
d = (mean1 - mean2) / std
where ``std`` is the pooled within standard deviation.
Parameters
----------
d : float or array_like
Cohen's d effect size for difference mean1 - mean2.
Returns
-------
prob : float or ndarray
Prob(x1 > x2)
"""
return stats.norm.cdf(d / np.sqrt(2)) | Convert Cohen's d effect size to stochastically-larger-probability.
This assumes observations are normally distributed.
Computed as
p = Prob(x1 > x2) = F(d / sqrt(2))
where `F` is cdf of normal distribution. Cohen's d is defined as
d = (mean1 - mean2) / std
where ``std`` is the pooled within standard deviation.
Parameters
----------
d : float or array_like
Cohen's d effect size for difference mean1 - mean2.
Returns
-------
prob : float or ndarray
Prob(x1 > x2) | cohensd2problarger | python | statsmodels/statsmodels | statsmodels/stats/nonparametric.py | https://github.com/statsmodels/statsmodels/blob/master/statsmodels/stats/nonparametric.py | BSD-3-Clause |
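A quick numerical check of the mapping p = Phi(d / sqrt(2)) for the conventional small, medium and large effect sizes.
import numpy as np
from scipy import stats
for d in (0.2, 0.5, 0.8):
    print(d, stats.norm.cdf(d / np.sqrt(2)))   # roughly 0.556, 0.638, 0.714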
def _compute_rank_placements(x1, x2) -> Holder:
"""
Compute ranks and placements for two samples.
This helper is used by `samplesize_rank_compare_onetail`
to calculate rank-based statistics for two input samples.
It assumes that the input data has been validated beforehand.
Parameters
----------
x1, x2 : array_like
Data samples used to compute ranks and placements.
Returns
-------
res : Holder
An instance of Holder containing the following attributes:
n_1 : int
Number of observations in the first sample.
n_2 : int
Number of observations in the second sample.
overall_ranks_pooled : ndarray
Ranks of the pooled sample.
overall_ranks_1 : ndarray
Ranks of the first sample in the pooled sample.
overall_ranks_2 : ndarray
Ranks of the second sample in the pooled sample.
within_group_ranks_1 : ndarray
Internal ranks of the first sample.
within_group_ranks_2 : ndarray
Internal ranks of the second sample.
placements_1 : ndarray
Placements of the first sample in the pooled sample.
placements_2 : ndarray
Placements of the second sample in the pooled sample.
Notes
-----
* The overall rank for each observation is determined
by ranking all data points from both samples combined
(`x1` and `x2`) in ascending order, with ties averaged.
* The within-group rank for each observation is determined
by ranking the data points within each sample separately.
* The placement of each observation is calculated by
taking the difference between the overall rank and the
within-group rank of the observation. Placements can be
thought of as measures of the degree of overlap or
separation between two samples.
"""
n_1 = len(x1)
n_2 = len(x2)
# Overall ranks for each obs among combined sample
overall_ranks_pooled = rankdata(
np.r_[x1, x2], method="average"
)
overall_ranks_1 = overall_ranks_pooled[:n_1]
overall_ranks_2 = overall_ranks_pooled[n_1:]
# Within group ranks for each obs
within_group_ranks_1 = rankdata(x1, method="average")
within_group_ranks_2 = rankdata(x2, method="average")
placements_1 = overall_ranks_1 - within_group_ranks_1
placements_2 = overall_ranks_2 - within_group_ranks_2
return Holder(
n_1=n_1,
n_2=n_2,
overall_ranks_pooled=overall_ranks_pooled,
overall_ranks_1=overall_ranks_1,
overall_ranks_2=overall_ranks_2,
within_group_ranks_1=within_group_ranks_1,
within_group_ranks_2=within_group_ranks_2,
placements_1=placements_1,
placements_2=placements_2,
) | Compute ranks and placements for two samples.
This helper is used by `samplesize_rank_compare_onetail`
to calculate rank-based statistics for two input samples.
It assumes that the input data has been validated beforehand.
Parameters
----------
x1, x2 : array_like
Data samples used to compute ranks and placements.
Returns
-------
res : Holder
An instance of Holder containing the following attributes:
n_1 : int
Number of observations in the first sample.
n_2 : int
Number of observations in the second sample.
overall_ranks_pooled : ndarray
Ranks of the pooled sample.
overall_ranks_1 : ndarray
Ranks of the first sample in the pooled sample.
overall_ranks_2 : ndarray
Ranks of the second sample in the pooled sample.
within_group_ranks_1 : ndarray
Internal ranks of the first sample.
within_group_ranks_2 : ndarray
Internal ranks of the second sample.
placements_1 : ndarray
Placements of the first sample in the pooled sample.
placements_2 : ndarray
Placements of the second sample in the pooled sample.
Notes
-----
* The overall rank for each observation is determined
by ranking all data points from both samples combined
(`x1` and `x2`) in ascending order, with ties averaged.
* The within-group rank for each observation is determined
by ranking the data points within each sample separately.
* The placement of each observation is calculated by
taking the difference between the overall rank and the
within-group rank of the observation. Placements can be
thought of as measures of the degree of overlap or
separation between two samples. | _compute_rank_placements | python | statsmodels/statsmodels | statsmodels/stats/nonparametric.py | https://github.com/statsmodels/statsmodels/blob/master/statsmodels/stats/nonparametric.py | BSD-3-Clause |
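A self-contained sketch of the placement computation described above, on toy data; it mirrors the rankdata-based steps in the helper.
import numpy as np
from scipy.stats import rankdata
x1 = np.array([3.0, 5.0, 7.0])
x2 = np.array([4.0, 6.0])
pooled = rankdata(np.r_[x1, x2], method="average")     # overall ranks
placements_1 = pooled[:len(x1)] - rankdata(x1, method="average")
placements_2 = pooled[len(x1):] - rankdata(x2, method="average")
print(placements_1, placements_2)                      # [0. 1. 2.] [1. 2.]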
def samplesize_rank_compare_onetail(
synthetic_sample,
reference_sample,
alpha,
power,
nobs_ratio=1,
alternative="two-sided",
) -> Holder:
"""
Compute sample size for the non-parametric Mann-Whitney U test.
This function implements the method of Happ et al (2019).
Parameters
----------
synthetic_sample : array_like
Generated `synthetic` data representing the treatment
group under the research hypothesis.
reference_sample : array_like
Advance information for the reference group.
alpha : float
The type I error rate for the test (two-sided).
power : float
The desired power of the test.
nobs_ratio : float, optional
Sample size ratio, `nobs_ref` = `nobs_ratio` *
`nobs_treat`. This is the ratio of the reference
group sample size to the treatment group sample
size, by default 1 (balanced design). See Notes.
alternative : str, ‘two-sided’ (default), ‘larger’, or ‘smaller’
Extra argument to choose whether the sample size is
calculated for a two-sided (default) or one-sided test.
See Notes.
Returns
-------
res : Holder
An instance of Holder containing the following attributes:
nobs_total : float
The total sample size required for the experiment.
nobs_treat : float
Sample size for the treatment group.
nobs_ref : float
Sample size for the reference group.
relative_effect : float
The estimated relative effect size.
power : float
The desired power for the test.
alpha : float
The type I error rate for the test.
Notes
-----
In the context of the two-sample Wilcoxon Mann-Whitney
U test, the `reference_sample` typically represents data
from the control group or previous studies. The
`synthetic_sample` is generated based on this reference
data and a prespecified relative effect size that is
meaningful for the research question. This effect size
is often determined in collaboration with subject matter
experts to reflect a significant difference worth detecting.
By comparing the reference and synthetic samples, this
function estimates the sample size needed to achieve the
desired power at the specified Type-I error rate.
Choosing between `one-sided` and `two-sided` tests has
important implications for sample size planning. A
`two-sided` test is more conservative and requires a
larger sample size but covers effects in both directions.
In contrast, a `larger` (`relative_effect > 0.5`) or `smaller`
(`relative_effect < 0.5`) one-sided test assumes the effect
occurs only in one direction, leading to a smaller required
sample size. However, if the true effect is in the opposite
direction, the `one-sided` test has virtually no power to
detect it. Additionally, if a two-sided test ends up being
used instead of the planned one-sided test, the original
sample size may be insufficient, resulting in an underpowered
study. It is important to carefully consider these trade-offs
when planning a study.
For `nobs_ratio > 1`, `nobs_ratio = 1`, or `nobs_ratio < 1`,
the reference group sample size is larger, equal to, or smaller
than the treatment group sample size, respectively.
Example
-------
The data for the placebo group of a clinical trial published in
Thall and Vail [2] is shown below. A relevant effect for the treatment
under investigation is considered to be a 50% reduction in the number
of seizures. To compute the required sample size with a power of 0.8
and holding the type I error rate at 0.05, we generate synthetic data
for the treatment group under the alternative assuming this reduction.
>>> from statsmodels.stats.nonparametric import samplesize_rank_compare_onetail
>>> import numpy as np
>>> reference_sample = np.array([3, 3, 5, 4, 21, 7, 2, 12, 5, 0, 22, 4, 2, 12,
... 9, 5, 3, 29, 5, 7, 4, 4, 5, 8, 25, 1, 2, 12])
>>> # Apply 50% reduction in seizure counts and floor operation
>>> synthetic_sample = np.floor(reference_sample / 2)
>>> result = samplesize_rank_compare_onetail(
... synthetic_sample=synthetic_sample,
... reference_sample=reference_sample,
... alpha=0.05, power=0.8
... )
>>> print(f"Total sample size: {result.nobs_total}, "
... f"Treatment group: {result.nobs_treat}, "
... f"Reference group: {result.nobs_ref}")
References
----------
.. [1] Happ, M., Bathke, A. C., and Brunner, E. "Optimal sample size
planning for the Wilcoxon-Mann-Whitney test". Statistics in Medicine.
Vol. 38(2019): 363-375. https://doi.org/10.1002/sim.7983.
.. [2] Thall, P. F., and Vail, S. C. "Some covariance models for longitudinal
count data with overdispersion". Biometrics, pp. 657-671, 1990.
"""
synthetic_sample = np.asarray(synthetic_sample)
reference_sample = np.asarray(reference_sample)
if not (len(synthetic_sample) > 0 and len(reference_sample) > 0):
raise ValueError(
"Both `synthetic_sample` and `reference_sample`"
" must have at least one element."
)
if not (
np.all(np.isfinite(reference_sample))
and np.all(np.isfinite(synthetic_sample))
):
raise ValueError(
"All elements of `synthetic_sample` and `reference_sample`"
" must be finite; check for missing values."
)
if not (0 < alpha < 1):
raise ValueError("Alpha must be between 0 and 1 non-inclusive.")
if not (0 < power < 1):
raise ValueError("Power must be between 0 and 1 non-inclusive.")
if not (0 < nobs_ratio):
raise ValueError(
"Ratio of reference group to treatment group must be"
" strictly positive."
)
if alternative not in ("two-sided", "larger", "smaller"):
raise ValueError(
"Alternative must be one of `two-sided`, `larger`, or `smaller`."
)
# Group 1 is the treatment group, Group 2 is the reference group
rank_place = _compute_rank_placements(
synthetic_sample,
reference_sample,
)
# Extra few bytes of name binding for explicitness & readability
n_syn = rank_place.n_1
n_ref = rank_place.n_2
overall_ranks_pooled = rank_place.overall_ranks_pooled
placements_syn = rank_place.placements_1
placements_ref = rank_place.placements_2
relative_effect = (
np.mean(placements_syn) - np.mean(placements_ref)
) / (n_syn + n_ref) + 0.5
# Values [0.499, 0.501] considered 'practically' = 0.5 (0.1% atol)
if np.isclose(relative_effect, 0.5, atol=1e-3):
raise ValueError(
"Estimated relative effect is effectively 0.5, i.e."
" stochastic equality between `synthetic_sample` and"
" `reference_sample`. Given null hypothesis is true,"
" sample size cannot be calculated. Please review data"
" samples to ensure they reflect appropriate relative"
" effect size assumptions."
)
if relative_effect < 0.5 and alternative == "larger":
raise ValueError(
"Estimated relative effect is smaller than 0.5;"
" `synthetic_sample` is stochastically smaller than"
" `reference_sample`. No sample size can be calculated"
" for `alternative == 'larger'`. Please review data"
" samples to ensure they reflect appropriate relative"
" effect size assumptions."
)
if relative_effect > 0.5 and alternative == "smaller":
raise ValueError(
"Estimated relative effect is larger than 0.5;"
" `synthetic_sample` is stochastically larger than"
" `reference_sample`. No sample size can be calculated"
" for `alternative == 'smaller'`. Please review data"
" samples to ensure they reflect appropriate relative"
" effect size assumptions."
)
sd_overall = np.sqrt(
np.sum(
(overall_ranks_pooled - (n_syn + n_ref + 1) / 2) ** 2
)
/ (n_syn + n_ref) ** 3
)
var_ref = (
np.sum(
(placements_ref - np.mean(placements_ref)) ** 2
) / (n_ref * (n_syn ** 2))
)
var_syn = (
np.sum(
(placements_syn - np.mean(placements_syn)) ** 2
) / ((n_ref ** 2) * n_syn)
)
quantile_prob = (1 - alpha / 2) if alternative == "two-sided" else (1 - alpha)
quantile_alpha = stats.norm.ppf(quantile_prob, loc=0, scale=1)
quantile_power = stats.norm.ppf(power, loc=0, scale=1)
# Convert `nobs_ratio` to proportion of total allocated to reference group
prop_treatment = 1 / (1 + nobs_ratio)
prop_reference = 1 - prop_treatment
var_terms = np.sqrt(
prop_reference * var_syn + (1 - prop_reference) * var_ref
)
quantiles_terms = sd_overall * quantile_alpha + quantile_power * var_terms
# Add a small epsilon to avoid division by zero when there is no
# treatment effect, i.e. p_hat = 0.5
nobs_total = (quantiles_terms**2) / (
prop_reference
* (1 - prop_reference)
* (relative_effect - 0.5 + 1e-12) ** 2
)
nobs_treat = nobs_total * (1 - prop_reference)
nobs_ref = nobs_total * prop_reference
return Holder(
nobs_total=nobs_total.item(),
nobs_treat=nobs_treat.item(),
nobs_ref=nobs_ref.item(),
relative_effect=relative_effect.item(),
power=power,
alpha=alpha,
) | Compute sample size for the non-parametric Mann-Whitney U test.
This function implements the method of Happ et al (2019).
Parameters
----------
synthetic_sample : array_like
Generated `synthetic` data representing the treatment
group under the research hypothesis.
reference_sample : array_like
Advance information for the reference group.
alpha : float
The type I error rate for the test (two-sided).
power : float
The desired power of the test.
nobs_ratio : float, optional
Sample size ratio, `nobs_ref` = `nobs_ratio` *
`nobs_treat`. This is the ratio of the reference
group sample size to the treatment group sample
size, by default 1 (balanced design). See Notes.
alternative : str, ‘two-sided’ (default), ‘larger’, or ‘smaller’
Extra argument to choose whether the sample size is
calculated for a two-sided (default) or one-sided test.
See Notes.
Returns
-------
res : Holder
An instance of Holder containing the following attributes:
nobs_total : float
The total sample size required for the experiment.
nobs_treat : float
Sample size for the treatment group.
nobs_ref : float
Sample size for the reference group.
relative_effect : float
The estimated relative effect size.
power : float
The desired power for the test.
alpha : float
The type I error rate for the test.
Notes
-----
In the context of the two-sample Wilcoxon Mann-Whitney
U test, the `reference_sample` typically represents data
from the control group or previous studies. The
`synthetic_sample` is generated based on this reference
data and a prespecified relative effect size that is
meaningful for the research question. This effect size
is often determined in collaboration with subject matter
experts to reflect a significant difference worth detecting.
By comparing the reference and synthetic samples, this
function estimates the sample size needed to achieve the
desired power at the specified Type-I error rate.
Choosing between `one-sided` and `two-sided` tests has
important implications for sample size planning. A
`two-sided` test is more conservative and requires a
larger sample size but covers effects in both directions.
In contrast, a `larger` (`relative_effect > 0.5`) or `smaller`
(`relative_effect < 0.5`) one-sided test assumes the effect
occurs only in one direction, leading to a smaller required
sample size. However, if the true effect is in the opposite
direction, the `one-sided` test has virtually no power to
detect it. Additionally, if a two-sided test ends up being
used instead of the planned one-sided test, the original
sample size may be insufficient, resulting in an underpowered
study. It is important to carefully consider these trade-offs
when planning a study.
For `nobs_ratio > 1`, `nobs_ratio = 1`, or `nobs_ratio < 1`,
the reference group sample size is larger, equal to, or smaller
than the treatment group sample size, respectively.
Example
-------
The data for the placebo group of a clinical trial published in
Thall and Vail [2] is shown below. A relevant effect for the treatment
under investigation is considered to be a 50% reduction in the number
of seizures. To compute the required sample size with a power of 0.8
and holding the type I error rate at 0.05, we generate synthetic data
for the treatment group under the alternative assuming this reduction.
>>> from statsmodels.stats.nonparametric import samplesize_rank_compare_onetail
>>> import numpy as np
>>> reference_sample = np.array([3, 3, 5, 4, 21, 7, 2, 12, 5, 0, 22, 4, 2, 12,
... 9, 5, 3, 29, 5, 7, 4, 4, 5, 8, 25, 1, 2, 12])
>>> # Apply 50% reduction in seizure counts and floor operation
>>> synthetic_sample = np.floor(reference_sample / 2)
>>> result = samplesize_rank_compare_onetail(
... synthetic_sample=synthetic_sample,
... reference_sample=reference_sample,
... alpha=0.05, power=0.8
... )
>>> print(f"Total sample size: {result.nobs_total}, "
... f"Treatment group: {result.nobs_treat}, "
... f"Reference group: {result.nobs_ref}")
References
----------
.. [1] Happ, M., Bathke, A. C., and Brunner, E. "Optimal sample size
planning for the Wilcoxon-Mann-Whitney test". Statistics in Medicine.
Vol. 38(2019): 363-375. https://doi.org/10.1002/sim.7983.
.. [2] Thall, P. F., and Vail, S. C. "Some covariance models for longitudinal
count data with overdispersion". Biometrics, pp. 657-671, 1990. | samplesize_rank_compare_onetail | python | statsmodels/statsmodels | statsmodels/stats/nonparametric.py | https://github.com/statsmodels/statsmodels/blob/master/statsmodels/stats/nonparametric.py | BSD-3-Clause |
def grad(self, params=None, **kwds):
"""First derivative, jacobian of func evaluated at params.
Parameters
----------
params : None or ndarray
Values at which gradient is evaluated. If params is None, then
the attached params are used.
TODO: should we drop this
kwds : keyword arguments
These keyword arguments are used without changes in the calculation
of numerical derivatives. They are only used if a `deriv` function
was not provided.
Returns
-------
grad : ndarray
gradient or jacobian of the function
"""
if params is None:
params = self.params
if self._grad is not None:
return self._grad(params)
else:
# copied from discrete_margins
try:
from statsmodels.tools.numdiff import approx_fprime_cs
jac = approx_fprime_cs(params, self.fun, **kwds)
except TypeError: # norm.cdf doesn't take complex values
from statsmodels.tools.numdiff import approx_fprime
jac = approx_fprime(params, self.fun, **kwds)
return jac | First derivative, jacobian of func evaluated at params.
Parameters
----------
params : None or ndarray
Values at which gradient is evaluated. If params is None, then
the attached params are used.
TODO: should we drop this
kwds : keyword arguments
These keyword arguments are used without changes in the calculation
of numerical derivatives. They are only used if a `deriv` function
was not provided.
Returns
-------
grad : ndarray
gradient or jacobian of the function | grad | python | statsmodels/statsmodels | statsmodels/stats/_delta_method.py | https://github.com/statsmodels/statsmodels/blob/master/statsmodels/stats/_delta_method.py | BSD-3-Clause |
def cov(self):
"""Covariance matrix of the transformed random variable.
"""
g = self.grad()
covar = np.dot(np.dot(g, self.cov_params), g.T)
return covar | Covariance matrix of the transformed random variable. | cov | python | statsmodels/statsmodels | statsmodels/stats/_delta_method.py | https://github.com/statsmodels/statsmodels/blob/master/statsmodels/stats/_delta_method.py | BSD-3-Clause |
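A self-contained numpy sketch of the delta-method covariance computed by `cov`: Var(f(b)) is approximately g' V g, with g the gradient of f at the estimate. The transform f(b) = b[0] / b[1] and all numbers are hypothetical.
import numpy as np
params = np.array([2.0, 4.0])
cov_params = np.array([[0.04, 0.01], [0.01, 0.09]])
g = np.array([1 / params[1], -params[0] / params[1] ** 2])  # analytic gradient
var_f = g @ cov_params @ g
print(var_f)   # approximate variance of params[0] / params[1]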
def predicted(self):
"""Value of the function evaluated at the attached params.
Note: This is not equal to the expected value if the transformation is
nonlinear. If params is the maximum likelihood estimate, then
`predicted` is the maximum likelihood estimate of the value of the
nonlinear function.
"""
predicted = self.fun(self.params)
# TODO: why do I need to squeeze in poisson example
if predicted.ndim > 1:
predicted = predicted.squeeze()
return predicted | Value of the function evaluated at the attached params.
Note: This is not equal to the expected value if the transformation is
nonlinear. If params is the maximum likelihood estimate, then
`predicted` is the maximum likelihood estimate of the value of the
nonlinear function. | predicted | python | statsmodels/statsmodels | statsmodels/stats/_delta_method.py | https://github.com/statsmodels/statsmodels/blob/master/statsmodels/stats/_delta_method.py | BSD-3-Clause |
def wald_test(self, value):
"""Joint hypothesis tests that H0: f(params) = value.
The alternative hypothesis is two-sided H1: f(params) != value.
Warning: this might be replaced with more general version that returns
ContrastResults.
currently uses chisquare distribution, use_f option not yet implemented
Parameters
----------
value : float or ndarray
value of f(params) under the Null Hypothesis
Returns
-------
statistic : float
Value of the test statistic.
pvalue : float
The p-value for the hypothesis test, based on the chisquare
distribution; it implies a two-sided hypothesis test.
"""
# TODO: add use_t option or not?
m = self.predicted()
v = self.cov()
df_constraints = np.size(m)
diff = m - value
lmstat = np.dot(np.dot(diff.T, np.linalg.inv(v)), diff)
return lmstat, stats.chi2.sf(lmstat, df_constraints) | Joint hypothesis tests that H0: f(params) = value.
The alternative hypothesis is two-sided H1: f(params) != value.
Warning: this might be replaced with more general version that returns
ContrastResults.
currently uses chisquare distribution, use_f option not yet implemented
Parameters
----------
value : float or ndarray
value of f(params) under the Null Hypothesis
Returns
-------
statistic : float
Value of the test statistic.
pvalue : float
The p-value for the hypothesis test, based on the chisquare
distribution; it implies a two-sided hypothesis test. | wald_test | python | statsmodels/statsmodels | statsmodels/stats/_delta_method.py | https://github.com/statsmodels/statsmodels/blob/master/statsmodels/stats/_delta_method.py | BSD-3-Clause |
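A self-contained sketch of the Wald chi-square statistic computed by `wald_test`, with hypothetical predicted values, covariance and null value.
import numpy as np
from scipy import stats
m = np.array([0.8, 1.3])                        # hypothetical f(params)
v = np.array([[0.05, 0.01], [0.01, 0.04]])      # hypothetical delta-method cov
value = np.array([1.0, 1.0])                    # null value
diff = m - value
lmstat = diff @ np.linalg.inv(v) @ diff
pvalue = stats.chi2.sf(lmstat, df=diff.size)
print(lmstat, pvalue)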
def var(self):
"""standard error for each equation (row) treated separately
"""
g = self.grad()
var = (np.dot(g, self.cov_params) * g).sum(-1)
if var.ndim == 2:
var = var.T
return var | variance for each equation (row) treated separately | var | python | statsmodels/statsmodels | statsmodels/stats/_delta_method.py | https://github.com/statsmodels/statsmodels/blob/master/statsmodels/stats/_delta_method.py | BSD-3-Clause |
def conf_int(self, alpha=0.05, use_t=False, df=None, var_extra=None,
predicted=None, se=None):
"""
Confidence interval for predicted based on delta method.
Parameters
----------
alpha : float, optional
The significance level for the confidence interval.
i.e., the default `alpha` = .05 returns a 95% confidence interval.
use_t : boolean
If use_t is False (default), then the normal distribution is used
for the confidence interval, otherwise the t distribution with
`df` degrees of freedom is used.
df : int or float
degrees of freedom for t distribution. Only used and required if
use_t is True.
var_extra : None or array_like float
Additional variance that is added to the variance based on the
delta method. This can be used to obtain confidence intervals for
new observations (prediction interval).
predicted : ndarray (float)
Predicted value, can be used to avoid repeated calculations if it
is already available.
se : ndarray (float)
Standard error, can be used to avoid repeated calculations if it
is already available.
Returns
-------
conf_int : array
Each row contains [lower, upper] limits of the confidence interval
for the corresponding parameter. The first column contains all
lower, the second column contains all upper limits.
"""
# TODO: predicted and se as arguments to avoid duplicate calculations
# or leave unchanged?
if not use_t:
dist = stats.norm
dist_args = ()
else:
if df is None:
raise ValueError('t distribution requires df')
dist = stats.t
dist_args = (df,)
if predicted is None:
predicted = self.predicted()
if se is None:
se = self.se_vectorized()
if var_extra is not None:
se = np.sqrt(se**2 + var_extra)
q = dist.ppf(1 - alpha / 2., *dist_args)
lower = predicted - q * se
upper = predicted + q * se
ci = np.column_stack((lower, upper))
if ci.shape[1] != 2:
raise RuntimeError('something wrong: ci not 2 columns')
return ci | Confidence interval for predicted based on delta method.
Parameters
----------
alpha : float, optional
The significance level for the confidence interval.
i.e., the default `alpha` = .05 returns a 95% confidence interval.
use_t : boolean
If use_t is False (default), then the normal distribution is used
for the confidence interval, otherwise the t distribution with
`df` degrees of freedom is used.
df : int or float
degrees of freedom for t distribution. Only used and required if
use_t is True.
var_extra : None or array_like float
Additional variance that is added to the variance based on the
delta method. This can be used to obtain confidence intervals for
new observations (prediction interval).
predicted : ndarray (float)
Predicted value, can be used to avoid repeated calculations if it
is already available.
se : ndarray (float)
Standard error, can be used to avoid repeated calculations if it
is already available.
Returns
-------
conf_int : array
Each row contains [lower, upper] limits of the confidence interval
for the corresponding parameter. The first column contains all
lower, the second column contains all upper limits. | conf_int | python | statsmodels/statsmodels | statsmodels/stats/_delta_method.py | https://github.com/statsmodels/statsmodels/blob/master/statsmodels/stats/_delta_method.py | BSD-3-Clause |
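A short sketch of the normal-approximation interval assembled above: predicted +/- z_{1-alpha/2} * se, optionally widened by var_extra; the numbers are made up.
import numpy as np
from scipy import stats
predicted, se, alpha = np.array([0.8]), np.array([0.1]), 0.05
q = stats.norm.ppf(1 - alpha / 2)
print(np.column_stack((predicted - q * se, predicted + q * se)))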
def summary(self, xname=None, alpha=0.05, title=None, use_t=False,
df=None):
"""Summarize the Results of the nonlinear transformation.
This provides a parameter table equivalent to `t_test` and reuses
`ContrastResults`.
Parameters
----------
xname : list of strings, optional
Default is `c_##` for ## in p the number of regressors
alpha : float
Significance level for the confidence intervals. Default is
alpha = 0.05 which implies a confidence level of 95%.
title : string, optional
Title for the params table. If not None, then this replaces the
default title
use_t : boolean
If use_t is False (default), then the normal distribution is used
for the confidence interval, otherwise the t distribution with
`df` degrees of freedom is used.
df : int or float
degrees of freedom for t distribution. Only used and required if
use_t is True.
Returns
-------
smry : string or Summary instance
This contains a parameter results table in the case of t or z test
in the same form as the parameter results table in the model
results summary.
For F or Wald test, the return is a string.
"""
# this is an experimental reuse of ContrastResults
from statsmodels.stats.contrast import ContrastResults
predicted = self.predicted()
se = self.se_vectorized()
# TODO check shape for scalar case, ContrastResults requires iterable
predicted = np.atleast_1d(predicted)
if predicted.ndim > 1:
predicted = predicted.squeeze()
se = np.atleast_1d(se)
statistic = predicted / se
if use_t:
df_resid = df
cr = ContrastResults(effect=predicted, t=statistic, sd=se,
df_denom=df_resid)
else:
cr = ContrastResults(effect=predicted, statistic=statistic, sd=se,
df_denom=None, distribution='norm')
return cr.summary(xname=xname, alpha=alpha, title=title) | Summarize the Results of the nonlinear transformation.
This provides a parameter table equivalent to `t_test` and reuses
`ContrastResults`.
Parameters
----------
xname : list of strings, optional
Default is `c_##` for ## in p the number of regressors
alpha : float
Significance level for the confidence intervals. Default is
alpha = 0.05 which implies a confidence level of 95%.
title : string, optional
Title for the params table. If not None, then this replaces the
default title
use_t : boolean
If use_t is False (default), then the normal distribution is used
for the confidence interval, otherwise the t distribution with
`df` degrees of freedom is used.
df : int or float
degrees of freedom for t distribution. Only used and required if
use_t is True.
Returns
-------
smry : string or Summary instance
This contains a parameter results table in the case of t or z test
in the same form as the parameter results table in the model
results summary.
For F or Wald test, the return is a string. | summary | python | statsmodels/statsmodels | statsmodels/stats/_delta_method.py | https://github.com/statsmodels/statsmodels/blob/master/statsmodels/stats/_delta_method.py | BSD-3-Clause |
def _make_df_square(table):
"""
Reindex a pandas DataFrame so that it becomes square, meaning that
the row and column indices contain the same values, in the same
order. The row and column index are extended to achieve this.
"""
if not isinstance(table, pd.DataFrame):
return table
# If the table is not square, make it square
if not table.index.equals(table.columns):
ix = list(set(table.index) | set(table.columns))
ix.sort()
table = table.reindex(index=ix, columns=ix, fill_value=0)
# Ensures that the rows and columns are in the same order.
table = table.reindex(table.columns)
return table | Reindex a pandas DataFrame so that it becomes square, meaning that
the row and column indices contain the same values, in the same
order. The row and column index are extended to achieve this. | _make_df_square | python | statsmodels/statsmodels | statsmodels/stats/contingency_tables.py | https://github.com/statsmodels/statsmodels/blob/master/statsmodels/stats/contingency_tables.py | BSD-3-Clause |
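A small pandas sketch of the squaring step: extend the row and column indices to their sorted union so both axes carry the same categories; the toy table is made up.
import pandas as pd
tab = pd.DataFrame([[2, 0], [1, 3]], index=["a", "b"], columns=["b", "c"])
ix = sorted(set(tab.index) | set(tab.columns))
square = tab.reindex(index=ix, columns=ix, fill_value=0)
print(square)   # 3x3 table over {"a", "b", "c"} with zeros filled in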
def from_data(cls, data, shift_zeros=True):
"""
Construct a Table object from data.
Parameters
----------
data : array_like
The raw data, from which a contingency table is constructed
using the first two columns.
shift_zeros : bool
If True and any cell count is zero, add 0.5 to all values
in the table.
Returns
-------
A Table instance.
"""
if isinstance(data, pd.DataFrame):
table = pd.crosstab(data.iloc[:, 0], data.iloc[:, 1])
else:
table = pd.crosstab(data[:, 0], data[:, 1])
return cls(table, shift_zeros) | Construct a Table object from data.
Parameters
----------
data : array_like
The raw data, from which a contingency table is constructed
using the first two columns.
shift_zeros : bool
If True and any cell count is zero, add 0.5 to all values
in the table.
Returns
-------
A Table instance. | from_data | python | statsmodels/statsmodels | statsmodels/stats/contingency_tables.py | https://github.com/statsmodels/statsmodels/blob/master/statsmodels/stats/contingency_tables.py | BSD-3-Clause |
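A minimal usage sketch for Table.from_data, assuming the import path in the path column; the two categorical columns are made up.
import pandas as pd
from statsmodels.stats.contingency_tables import Table
df = pd.DataFrame({"smoker": ["yes", "no", "yes", "no", "yes"],
                   "disease": ["yes", "yes", "no", "no", "no"]})
tab = Table.from_data(df)
print(tab.table_orig)   # the pandas crosstab of the first two columns
print(tab.table)        # working counts (shifted by 0.5 if any cell was zero)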
def test_nominal_association(self):
"""
Assess independence for nominal factors.
Assessment of independence between rows and columns using
chi^2 testing. The rows and columns are treated as nominal
(unordered) categorical variables.
Returns
-------
A bunch containing the following attributes:
statistic : float
The chi^2 test statistic.
df : int
The degrees of freedom of the reference distribution
pvalue : float
The p-value for the test.
"""
statistic = np.asarray(self.chi2_contribs).sum()
df = np.prod(np.asarray(self.table.shape) - 1)
pvalue = 1 - stats.chi2.cdf(statistic, df)
b = _Bunch()
b.statistic = statistic
b.df = df
b.pvalue = pvalue
return b | Assess independence for nominal factors.
Assessment of independence between rows and columns using
chi^2 testing. The rows and columns are treated as nominal
(unordered) categorical variables.
Returns
-------
A bunch containing the following attributes:
statistic : float
The chi^2 test statistic.
df : int
The degrees of freedom of the reference distribution
pvalue : float
The p-value for the test. | test_nominal_association | python | statsmodels/statsmodels | statsmodels/stats/contingency_tables.py | https://github.com/statsmodels/statsmodels/blob/master/statsmodels/stats/contingency_tables.py | BSD-3-Clause |
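A minimal usage sketch of the nominal association test on a hypothetical 3x3 table.
import numpy as np
from statsmodels.stats.contingency_tables import Table
counts = np.array([[20, 15, 10],
                   [10, 20, 15],
                   [5, 10, 25]])
res = Table(counts).test_nominal_association()
print(res.statistic, res.df, res.pvalue)   # chi^2, (r-1)(c-1) df, p-value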
def test_ordinal_association(self, row_scores=None, col_scores=None):
"""
Assess independence between two ordinal variables.
This is the 'linear by linear' association test, which uses
weights or scores to target the test to have more power
against ordered alternatives.
Parameters
----------
row_scores : array_like
An array of numeric row scores
col_scores : array_like
An array of numeric column scores
Returns
-------
A bunch with the following attributes:
statistic : float
The test statistic.
null_mean : float
The expected value of the test statistic under the null
hypothesis.
null_sd : float
The standard deviation of the test statistic under the
null hypothesis.
zscore : float
The Z-score for the test statistic.
pvalue : float
The p-value for the test.
Notes
-----
The scores define the trend to which the test is most sensitive.
Using the default row and column scores gives the
Cochran-Armitage trend test.
"""
if row_scores is None:
row_scores = np.arange(self.table.shape[0])
if col_scores is None:
col_scores = np.arange(self.table.shape[1])
if len(row_scores) != self.table.shape[0]:
msg = ("The length of `row_scores` must match the first " +
"dimension of `table`.")
raise ValueError(msg)
if len(col_scores) != self.table.shape[1]:
msg = ("The length of `col_scores` must match the second " +
"dimension of `table`.")
raise ValueError(msg)
# The test statistic
statistic = np.dot(row_scores, np.dot(self.table, col_scores))
# Some needed quantities
n_obs = self.table.sum()
rtot = self.table.sum(1)
um = np.dot(row_scores, rtot)
u2m = np.dot(row_scores**2, rtot)
ctot = self.table.sum(0)
vn = np.dot(col_scores, ctot)
v2n = np.dot(col_scores**2, ctot)
# The null mean and variance of the test statistic
e_stat = um * vn / n_obs
v_stat = (u2m - um**2 / n_obs) * (v2n - vn**2 / n_obs) / (n_obs - 1)
sd_stat = np.sqrt(v_stat)
zscore = (statistic - e_stat) / sd_stat
pvalue = 2 * stats.norm.cdf(-np.abs(zscore))
b = _Bunch()
b.statistic = statistic
b.null_mean = e_stat
b.null_sd = sd_stat
b.zscore = zscore
b.pvalue = pvalue
return b | Assess independence between two ordinal variables.
This is the 'linear by linear' association test, which uses
weights or scores to target the test to have more power
against ordered alternatives.
Parameters
----------
row_scores : array_like
An array of numeric row scores
col_scores : array_like
An array of numeric column scores
Returns
-------
A bunch with the following attributes:
statistic : float
The test statistic.
null_mean : float
The expected value of the test statistic under the null
hypothesis.
null_sd : float
The standard deviation of the test statistic under the
null hypothesis.
zscore : float
The Z-score for the test statistic.
pvalue : float
The p-value for the test.
Notes
-----
The scores define the trend to which the test is most sensitive.
Using the default row and column scores gives the
Cochran-Armitage trend test. | test_ordinal_association | python | statsmodels/statsmodels | statsmodels/stats/contingency_tables.py | https://github.com/statsmodels/statsmodels/blob/master/statsmodels/stats/contingency_tables.py | BSD-3-Clause |
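A minimal usage sketch of the linear-by-linear association test with explicit scores; the counts and scores are made up.
import numpy as np
from statsmodels.stats.contingency_tables import Table
counts = np.array([[25, 15, 10],
                   [15, 20, 15],
                   [10, 15, 25]])
res = Table(counts).test_ordinal_association(row_scores=np.array([0, 1, 2]),
                                             col_scores=np.array([0, 1, 2.5]))
print(res.zscore, res.pvalue)   # Z-score and two-sided p-value for the trend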
def marginal_probabilities(self):
"""
Estimate marginal probability distributions for the rows and columns.
Returns
-------
row : ndarray
Marginal row probabilities
col : ndarray
Marginal column probabilities
"""
n = self.table.sum()
row = self.table.sum(1) / n
col = self.table.sum(0) / n
if isinstance(self.table_orig, pd.DataFrame):
row = pd.Series(row, self.table_orig.index)
col = pd.Series(col, self.table_orig.columns)
return row, col | Estimate marginal probability distributions for the rows and columns.
Returns
-------
row : ndarray
Marginal row probabilities
col : ndarray
Marginal column probabilities | marginal_probabilities | python | statsmodels/statsmodels | statsmodels/stats/contingency_tables.py | https://github.com/statsmodels/statsmodels/blob/master/statsmodels/stats/contingency_tables.py | BSD-3-Clause |
def independence_probabilities(self):
"""
Returns fitted joint probabilities under independence.
The returned table is outer(row, column), where row and
column are the estimated marginal distributions
of the rows and columns.
"""
row, col = self.marginal_probabilities
itab = np.outer(row, col)
if isinstance(self.table_orig, pd.DataFrame):
itab = pd.DataFrame(itab, self.table_orig.index,
self.table_orig.columns)
return itab | Returns fitted joint probabilities under independence.
The returned table is outer(row, column), where row and
column are the estimated marginal distributions
of the rows and columns. | independence_probabilities | python | statsmodels/statsmodels | statsmodels/stats/contingency_tables.py | https://github.com/statsmodels/statsmodels/blob/master/statsmodels/stats/contingency_tables.py | BSD-3-Clause |
def fittedvalues(self):
"""
Returns fitted cell counts under independence.
The returned cell counts are estimates under a model
where the rows and columns of the table are independent.
"""
probs = self.independence_probabilities
fit = self.table.sum() * probs
return fit | Returns fitted cell counts under independence.
The returned cell counts are estimates under a model
where the rows and columns of the table are independent. | fittedvalues | python | statsmodels/statsmodels | statsmodels/stats/contingency_tables.py | https://github.com/statsmodels/statsmodels/blob/master/statsmodels/stats/contingency_tables.py | BSD-3-Clause |
def resid_pearson(self):
"""
Returns Pearson residuals.
The Pearson residuals are calculated under a model where
the rows and columns of the table are independent.
"""
fit = self.fittedvalues
resids = (self.table - fit) / np.sqrt(fit)
return resids | Returns Pearson residuals.
The Pearson residuals are calculated under a model where
the rows and columns of the table are independent. | resid_pearson | python | statsmodels/statsmodels | statsmodels/stats/contingency_tables.py | https://github.com/statsmodels/statsmodels/blob/master/statsmodels/stats/contingency_tables.py | BSD-3-Clause |
def standardized_resids(self):
"""
Returns standardized residuals under independence.
"""
row, col = self.marginal_probabilities
sresids = self.resid_pearson / np.sqrt(np.outer(1 - row, 1 - col))
return sresids | Returns standardized residuals under independence. | standardized_resids | python | statsmodels/statsmodels | statsmodels/stats/contingency_tables.py | https://github.com/statsmodels/statsmodels/blob/master/statsmodels/stats/contingency_tables.py | BSD-3-Clause |
def chi2_contribs(self):
"""
Returns the contributions to the chi^2 statistic for independence.
The returned table contains the contribution of each cell to the chi^2
test statistic for the null hypothesis that the rows and columns
are independent.
"""
return self.resid_pearson**2 | Returns the contributions to the chi^2 statistic for independence.
The returned table contains the contribution of each cell to the chi^2
test statistic for the null hypothesis that the rows and columns
are independent. | chi2_contribs | python | statsmodels/statsmodels | statsmodels/stats/contingency_tables.py | https://github.com/statsmodels/statsmodels/blob/master/statsmodels/stats/contingency_tables.py | BSD-3-Clause |
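A short sketch tying the independence-fit helpers together (fitted values, Pearson residuals, chi^2 contributions); it assumes these are properties of statsmodels' `Table` class and uses made-up counts.

import numpy as np
from statsmodels.stats.contingency_tables import Table

counts = np.array([[25, 15], [10, 30]])   # hypothetical 2x2 counts
tab = Table(counts, shift_zeros=False)
print(tab.fittedvalues)          # expected counts under independence
print(tab.resid_pearson)         # (observed - expected) / sqrt(expected)
print(tab.chi2_contribs)         # per-cell contributions to Pearson's chi^2
print(tab.chi2_contribs.sum())   # the Pearson chi^2 statistic itself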
def local_log_oddsratios(self):
"""
Returns local log odds ratios.
The local log odds ratios are the log odds ratios
calculated for contiguous 2x2 sub-tables.
"""
ta = self.table.copy()
a = ta[0:-1, 0:-1]
b = ta[0:-1, 1:]
c = ta[1:, 0:-1]
d = ta[1:, 1:]
tab = np.log(a) + np.log(d) - np.log(b) - np.log(c)
rslt = np.empty(self.table.shape, np.float64)
rslt *= np.nan
rslt[0:-1, 0:-1] = tab
if isinstance(self.table_orig, pd.DataFrame):
rslt = pd.DataFrame(rslt, index=self.table_orig.index,
columns=self.table_orig.columns)
return rslt | Returns local log odds ratios.
The local log odds ratios are the log odds ratios
calculated for contiguous 2x2 sub-tables. | local_log_oddsratios | python | statsmodels/statsmodels | statsmodels/stats/contingency_tables.py | https://github.com/statsmodels/statsmodels/blob/master/statsmodels/stats/contingency_tables.py | BSD-3-Clause |
def local_oddsratios(self):
"""
Returns local odds ratios.
See documentation for local_log_oddsratios.
"""
return np.exp(self.local_log_oddsratios) | Returns local odds ratios.
See documentation for local_log_oddsratios. | local_oddsratios | python | statsmodels/statsmodels | statsmodels/stats/contingency_tables.py | https://github.com/statsmodels/statsmodels/blob/master/statsmodels/stats/contingency_tables.py | BSD-3-Clause |
def cumulative_log_oddsratios(self):
"""
Returns cumulative log odds ratios.
The cumulative log odds ratios for a contingency table
with ordered rows and columns are calculated by collapsing
all cells to the left/right and above/below a given point,
to obtain a 2x2 table from which a log odds ratio can be
calculated.
"""
ta = self.table.cumsum(0).cumsum(1)
a = ta[0:-1, 0:-1]
b = ta[0:-1, -1:] - a
c = ta[-1:, 0:-1] - a
d = ta[-1, -1] - (a + b + c)
tab = np.log(a) + np.log(d) - np.log(b) - np.log(c)
rslt = np.empty(self.table.shape, np.float64)
rslt *= np.nan
rslt[0:-1, 0:-1] = tab
if isinstance(self.table_orig, pd.DataFrame):
rslt = pd.DataFrame(rslt, index=self.table_orig.index,
columns=self.table_orig.columns)
return rslt | Returns cumulative log odds ratios.
The cumulative log odds ratios for a contingency table
with ordered rows and columns are calculated by collapsing
all cells to the left/right and above/below a given point,
to obtain a 2x2 table from which a log odds ratio can be
calculated. | cumulative_log_oddsratios | python | statsmodels/statsmodels | statsmodels/stats/contingency_tables.py | https://github.com/statsmodels/statsmodels/blob/master/statsmodels/stats/contingency_tables.py | BSD-3-Clause |
def cumulative_oddsratios(self):
"""
Returns the cumulative odds ratios for a contingency table.
See documentation for cumulative_log_oddsratio.
"""
return np.exp(self.cumulative_log_oddsratios) | Returns the cumulative odds ratios for a contingency table.
See documentation for cumulative_log_oddsratio. | cumulative_oddsratios | python | statsmodels/statsmodels | statsmodels/stats/contingency_tables.py | https://github.com/statsmodels/statsmodels/blob/master/statsmodels/stats/contingency_tables.py | BSD-3-Clause |
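For ordinal tables, the local and cumulative odds ratios described above can be inspected directly; a hedged sketch assuming they are `Table` properties in statsmodels, with invented counts.

import numpy as np
from statsmodels.stats.contingency_tables import Table

counts = np.array([[20, 10, 5],
                   [10, 15, 10],
                   [5, 10, 20]])
tab = Table(counts, shift_zeros=False)
print(tab.local_oddsratios)       # odds ratios of contiguous 2x2 sub-tables
print(tab.cumulative_oddsratios)  # odds ratios after collapsing around cut points
# Both arrays carry NaN in the last row/column, which have no sub-table.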
def symmetry(self, method="bowker"):
"""
Test for symmetry of a joint distribution.
This procedure tests the null hypothesis that the joint
distribution is symmetric around the main diagonal, that is
.. math::
p_{i, j} = p_{j, i} \quad \text{for all } i, j
Returns
-------
Bunch
A bunch with attributes
* statistic : float
chisquare test statistic
* p-value : float
p-value of the test statistic based on chisquare distribution
* df : int
degrees of freedom of the chisquare distribution
Notes
-----
The implementation is based on the SAS documentation. R includes
it in `mcnemar.test` if the table is not 2 by 2. However a more
direct generalization of the McNemar test to larger tables is
provided by the homogeneity test (TableSymmetry.homogeneity).
The p-value is based on the chi-square distribution, which is a good
approximation of the true distribution only when the sample size is
not very small. For 2x2 contingency tables the exact distribution can
be obtained with `mcnemar`.
See Also
--------
mcnemar
homogeneity
"""
if method.lower() != "bowker":
raise ValueError("method for symmetry testing must be 'bowker'")
k = self.table.shape[0]
upp_idx = np.triu_indices(k, 1)
tril = self.table.T[upp_idx] # lower triangle in column order
triu = self.table[upp_idx] # upper triangle in row order
statistic = ((tril - triu)**2 / (tril + triu + 1e-20)).sum()
df = k * (k-1) / 2.
pvalue = stats.chi2.sf(statistic, df)
b = _Bunch()
b.statistic = statistic
b.pvalue = pvalue
b.df = df
return b | Test for symmetry of a joint distribution.
This procedure tests the null hypothesis that the joint
distribution is symmetric around the main diagonal, that is
.. math::
p_{i, j} = p_{j, i} \quad \text{for all } i, j
Returns
-------
Bunch
A bunch with attributes
* statistic : float
chisquare test statistic
* p-value : float
p-value of the test statistic based on chisquare distribution
* df : int
degrees of freedom of the chisquare distribution
Notes
-----
The implementation is based on the SAS documentation. R includes
it in `mcnemar.test` if the table is not 2 by 2. However a more
direct generalization of the McNemar test to larger tables is
provided by the homogeneity test (TableSymmetry.homogeneity).
The p-value is based on the chi-square distribution, which is a good
approximation of the true distribution only when the sample size is
not very small. For 2x2 contingency tables the exact distribution can
be obtained with `mcnemar`.
See Also
--------
mcnemar
homogeneity | symmetry | python | statsmodels/statsmodels | statsmodels/stats/contingency_tables.py | https://github.com/statsmodels/statsmodels/blob/master/statsmodels/stats/contingency_tables.py | BSD-3-Clause |
def homogeneity(self, method="stuart_maxwell"):
"""
Compare row and column marginal distributions.
Parameters
----------
method : str
Either 'stuart_maxwell' or 'bhapkar', leading to two different
estimates of the covariance matrix for the estimated
difference between the row margins and the column margins.
Returns
-------
Bunch
A bunch with attributes:
* statistic : float
The chi^2 test statistic
* pvalue : float
The p-value of the test statistic
* df : int
The degrees of freedom of the reference distribution
Notes
-----
For a 2x2 table this is equivalent to McNemar's test. More
generally the procedure tests the null hypothesis that the
marginal distribution of the row factor is equal to the
marginal distribution of the column factor. For this to be
meaningful, the two factors must have the same sample space
(i.e. the same categories).
"""
if self.table.shape[0] < 1:
raise ValueError('table is empty')
elif self.table.shape[0] == 1:
b = _Bunch()
b.statistic = 0
b.pvalue = 1
b.df = 0
return b
method = method.lower()
if method not in ["bhapkar", "stuart_maxwell"]:
raise ValueError("method '%s' for homogeneity not known" % method)
n_obs = self.table.sum()
pr = self.table.astype(np.float64) / n_obs
# Compute margins, eliminate last row/column so there is no
# degeneracy
row = pr.sum(1)[0:-1]
col = pr.sum(0)[0:-1]
pr = pr[0:-1, 0:-1]
# The estimated difference between row and column margins.
d = col - row
# The degrees of freedom of the chi^2 reference distribution.
df = pr.shape[0]
if method == "bhapkar":
vmat = -(pr + pr.T) - np.outer(d, d)
dv = col + row - 2*np.diag(pr) - d**2
np.fill_diagonal(vmat, dv)
elif method == "stuart_maxwell":
vmat = -(pr + pr.T)
dv = row + col - 2*np.diag(pr)
np.fill_diagonal(vmat, dv)
try:
statistic = n_obs * np.dot(d, np.linalg.solve(vmat, d))
except np.linalg.LinAlgError:
warnings.warn("Unable to invert covariance matrix",
sm_exceptions.SingularMatrixWarning)
b = _Bunch()
b.statistic = np.nan
b.pvalue = np.nan
b.df = df
return b
pvalue = 1 - stats.chi2.cdf(statistic, df)
b = _Bunch()
b.statistic = statistic
b.pvalue = pvalue
b.df = df
return b | Compare row and column marginal distributions.
Parameters
----------
method : str
Either 'stuart_maxwell' or 'bhapkar', leading to two different
estimates of the covariance matrix for the estimated
difference between the row margins and the column margins.
Returns
-------
Bunch
A bunch with attributes:
* statistic : float
The chi^2 test statistic
* pvalue : float
The p-value of the test statistic
* df : int
The degrees of freedom of the reference distribution
Notes
-----
For a 2x2 table this is equivalent to McNemar's test. More
generally the procedure tests the null hypothesis that the
marginal distribution of the row factor is equal to the
marginal distribution of the column factor. For this to be
meaningful, the two factors must have the same sample space
(i.e. the same categories). | homogeneity | python | statsmodels/statsmodels | statsmodels/stats/contingency_tables.py | https://github.com/statsmodels/statsmodels/blob/master/statsmodels/stats/contingency_tables.py | BSD-3-Clause |
def summary(self, alpha=0.05, float_format="%.3f"):
"""
Produce a summary of the analysis.
Parameters
----------
alpha : float
`1 - alpha` is the nominal coverage probability of the interval.
float_format : str
Used to format numeric values in the table.
method : str
The method for producing the confidence interval. Currently
must be 'normal' which uses the normal approximation.
"""
fmt = float_format
headers = ["Statistic", "P-value", "DF"]
stubs = ["Symmetry", "Homogeneity"]
sy = self.symmetry()
hm = self.homogeneity()
data = [[fmt % sy.statistic, fmt % sy.pvalue, '%d' % sy.df],
[fmt % hm.statistic, fmt % hm.pvalue, '%d' % hm.df]]
tab = iolib.SimpleTable(data, headers, stubs, data_aligns="r",
table_dec_above='')
return tab | Produce a summary of the analysis.
Parameters
----------
alpha : float
`1 - alpha` is the nominal coverage probability of the interval.
float_format : str
Used to format numeric values in the table.
method : str
The method for producing the confidence interval. Currently
must be 'normal' which uses the normal approximation. | summary | python | statsmodels/statsmodels | statsmodels/stats/contingency_tables.py | https://github.com/statsmodels/statsmodels/blob/master/statsmodels/stats/contingency_tables.py | BSD-3-Clause |
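A combined sketch of the symmetry, homogeneity, and summary methods above, assuming they belong to statsmodels' `SquareTable` class (tables with identical row and column categories); the paired-rating counts are hypothetical.

import numpy as np
from statsmodels.stats.contingency_tables import SquareTable

counts = np.array([[30, 5, 2],
                   [8, 25, 4],
                   [3, 6, 20]])
sq = SquareTable(counts, shift_zeros=False)
print(sq.symmetry().pvalue)                             # Bowker test of symmetry
print(sq.homogeneity(method="stuart_maxwell").pvalue)   # marginal homogeneity
print(sq.summary(alpha=0.05))                           # both tests in one table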
def from_data(cls, data, shift_zeros=True):
"""
Construct a Table object from data.
Parameters
----------
data : array_like
The raw data, the first column defines the rows and the
second column defines the columns.
shift_zeros : bool
If True, and if there are any zeros in the contingency
table, add 0.5 to all four cells of the table.
"""
if isinstance(data, pd.DataFrame):
table = pd.crosstab(data.iloc[:, 0], data.iloc[:, 1])
else:
table = pd.crosstab(data[:, 0], data[:, 1])
return cls(table, shift_zeros) | Construct a Table object from data.
Parameters
----------
data : array_like
The raw data, the first column defines the rows and the
second column defines the columns.
shift_zeros : bool
If True, and if there are any zeros in the contingency
table, add 0.5 to all four cells of the table. | from_data | python | statsmodels/statsmodels | statsmodels/stats/contingency_tables.py | https://github.com/statsmodels/statsmodels/blob/master/statsmodels/stats/contingency_tables.py | BSD-3-Clause |
def log_oddsratio(self):
"""
Returns the log odds ratio for a 2x2 table.
"""
f = self.table.flatten()
return np.dot(np.log(f), np.r_[1, -1, -1, 1]) | Returns the log odds ratio for a 2x2 table. | log_oddsratio | python | statsmodels/statsmodels | statsmodels/stats/contingency_tables.py | https://github.com/statsmodels/statsmodels/blob/master/statsmodels/stats/contingency_tables.py | BSD-3-Clause |
def oddsratio(self):
"""
Returns the odds ratio for a 2x2 table.
"""
return (self.table[0, 0] * self.table[1, 1] /
(self.table[0, 1] * self.table[1, 0])) | Returns the odds ratio for a 2x2 table. | oddsratio | python | statsmodels/statsmodels | statsmodels/stats/contingency_tables.py | https://github.com/statsmodels/statsmodels/blob/master/statsmodels/stats/contingency_tables.py | BSD-3-Clause |
def log_oddsratio_se(self):
"""
Returns the standard error for the log odds ratio.
"""
return np.sqrt(np.sum(1 / self.table)) | Returns the standard error for the log odds ratio. | log_oddsratio_se | python | statsmodels/statsmodels | statsmodels/stats/contingency_tables.py | https://github.com/statsmodels/statsmodels/blob/master/statsmodels/stats/contingency_tables.py | BSD-3-Clause |
def oddsratio_pvalue(self, null=1):
"""
P-value for a hypothesis test about the odds ratio.
Parameters
----------
null : float
The null value of the odds ratio.
"""
return self.log_oddsratio_pvalue(np.log(null)) | P-value for a hypothesis test about the odds ratio.
Parameters
----------
null : float
The null value of the odds ratio. | oddsratio_pvalue | python | statsmodels/statsmodels | statsmodels/stats/contingency_tables.py | https://github.com/statsmodels/statsmodels/blob/master/statsmodels/stats/contingency_tables.py | BSD-3-Clause |
def log_oddsratio_pvalue(self, null=0):
"""
P-value for a hypothesis test about the log odds ratio.
Parameters
----------
null : float
The null value of the log odds ratio.
"""
zscore = (self.log_oddsratio - null) / self.log_oddsratio_se
pvalue = 2 * stats.norm.cdf(-np.abs(zscore))
return pvalue | P-value for a hypothesis test about the log odds ratio.
Parameters
----------
null : float
The null value of the log odds ratio. | log_oddsratio_pvalue | python | statsmodels/statsmodels | statsmodels/stats/contingency_tables.py | https://github.com/statsmodels/statsmodels/blob/master/statsmodels/stats/contingency_tables.py | BSD-3-Clause |
def log_oddsratio_confint(self, alpha=0.05, method="normal"):
"""
A confidence level for the log odds ratio.
Parameters
----------
alpha : float
`1 - alpha` is the nominal coverage probability of the
confidence interval.
method : str
The method for producing the confidence interval. Currently
must be 'normal' which uses the normal approximation.
"""
f = -stats.norm.ppf(alpha / 2)
lor = self.log_oddsratio
se = self.log_oddsratio_se
lcb = lor - f * se
ucb = lor + f * se
return lcb, ucb | A confidence level for the log odds ratio.
Parameters
----------
alpha : float
`1 - alpha` is the nominal coverage probability of the
confidence interval.
method : str
The method for producing the confidence interval. Currently
must be 'normal' which uses the normal approximation. | log_oddsratio_confint | python | statsmodels/statsmodels | statsmodels/stats/contingency_tables.py | https://github.com/statsmodels/statsmodels/blob/master/statsmodels/stats/contingency_tables.py | BSD-3-Clause |
def oddsratio_confint(self, alpha=0.05, method="normal"):
"""
A confidence interval for the odds ratio.
Parameters
----------
alpha : float
`1 - alpha` is the nominal coverage probability of the
confidence interval.
method : str
The method for producing the confidence interval. Currently
must be 'normal' which uses the normal approximation.
"""
lcb, ucb = self.log_oddsratio_confint(alpha, method=method)
return np.exp(lcb), np.exp(ucb) | A confidence interval for the odds ratio.
Parameters
----------
alpha : float
`1 - alpha` is the nominal coverage probability of the
confidence interval.
method : str
The method for producing the confidence interval. Currently
must be 'normal' which uses the normal approximation. | oddsratio_confint | python | statsmodels/statsmodels | statsmodels/stats/contingency_tables.py | https://github.com/statsmodels/statsmodels/blob/master/statsmodels/stats/contingency_tables.py | BSD-3-Clause |
def riskratio(self):
"""
Returns the risk ratio for a 2x2 table.
The risk ratio is calculated with respect to the rows.
"""
p = self.table[:, 0] / self.table.sum(1)
return p[0] / p[1] | Returns the risk ratio for a 2x2 table.
The risk ratio is calculated with respect to the rows. | riskratio | python | statsmodels/statsmodels | statsmodels/stats/contingency_tables.py | https://github.com/statsmodels/statsmodels/blob/master/statsmodels/stats/contingency_tables.py | BSD-3-Clause |
def log_riskratio(self):
"""
Returns the log of the risk ratio.
"""
return np.log(self.riskratio) | Returns the log of the risk ratio. | log_riskratio | python | statsmodels/statsmodels | statsmodels/stats/contingency_tables.py | https://github.com/statsmodels/statsmodels/blob/master/statsmodels/stats/contingency_tables.py | BSD-3-Clause |
def log_riskratio_se(self):
"""
Returns the standard error of the log of the risk ratio.
"""
n = self.table.sum(1)
p = self.table[:, 0] / n
va = np.sum((1 - p) / (n*p))
return np.sqrt(va) | Returns the standard error of the log of the risk ratio. | log_riskratio_se | python | statsmodels/statsmodels | statsmodels/stats/contingency_tables.py | https://github.com/statsmodels/statsmodels/blob/master/statsmodels/stats/contingency_tables.py | BSD-3-Clause |
def riskratio_pvalue(self, null=1):
"""
p-value for a hypothesis test about the risk ratio.
Parameters
----------
null : float
The null value of the risk ratio.
"""
return self.log_riskratio_pvalue(np.log(null)) | p-value for a hypothesis test about the risk ratio.
Parameters
----------
null : float
The null value of the risk ratio. | riskratio_pvalue | python | statsmodels/statsmodels | statsmodels/stats/contingency_tables.py | https://github.com/statsmodels/statsmodels/blob/master/statsmodels/stats/contingency_tables.py | BSD-3-Clause |
def log_riskratio_pvalue(self, null=0):
"""
p-value for a hypothesis test about the log risk ratio.
Parameters
----------
null : float
The null value of the log risk ratio.
"""
zscore = (self.log_riskratio - null) / self.log_riskratio_se
pvalue = 2 * stats.norm.cdf(-np.abs(zscore))
return pvalue | p-value for a hypothesis test about the log risk ratio.
Parameters
----------
null : float
The null value of the log risk ratio. | log_riskratio_pvalue | python | statsmodels/statsmodels | statsmodels/stats/contingency_tables.py | https://github.com/statsmodels/statsmodels/blob/master/statsmodels/stats/contingency_tables.py | BSD-3-Clause |
def log_riskratio_confint(self, alpha=0.05, method="normal"):
"""
A confidence interval for the log risk ratio.
Parameters
----------
alpha : float
`1 - alpha` is the nominal coverage probability of the
confidence interval.
method : str
The method for producing the confidence interval. Currently
must be 'normal' which uses the normal approximation.
"""
f = -stats.norm.ppf(alpha / 2)
lrr = self.log_riskratio
se = self.log_riskratio_se
lcb = lrr - f * se
ucb = lrr + f * se
return lcb, ucb | A confidence interval for the log risk ratio.
Parameters
----------
alpha : float
`1 - alpha` is the nominal coverage probability of the
confidence interval.
method : str
The method for producing the confidence interval. Currently
must be 'normal' which uses the normal approximation. | log_riskratio_confint | python | statsmodels/statsmodels | statsmodels/stats/contingency_tables.py | https://github.com/statsmodels/statsmodels/blob/master/statsmodels/stats/contingency_tables.py | BSD-3-Clause |
def riskratio_confint(self, alpha=0.05, method="normal"):
"""
A confidence interval for the risk ratio.
Parameters
----------
alpha : float
`1 - alpha` is the nominal coverage probability of the
confidence interval.
method : str
The method for producing the confidence interval. Currently
must be 'normal' which uses the normal approximation.
"""
lcb, ucb = self.log_riskratio_confint(alpha, method=method)
return np.exp(lcb), np.exp(ucb) | A confidence interval for the risk ratio.
Parameters
----------
alpha : float
`1 - alpha` is the nominal coverage probability of the
confidence interval.
method : str
The method for producing the confidence interval. Currently
must be 'normal' which uses the normal approximation. | riskratio_confint | python | statsmodels/statsmodels | statsmodels/stats/contingency_tables.py | https://github.com/statsmodels/statsmodels/blob/master/statsmodels/stats/contingency_tables.py | BSD-3-Clause |
def summary(self, alpha=0.05, float_format="%.3f", method="normal"):
"""
Summarizes results for a 2x2 table analysis.
Parameters
----------
alpha : float
`1 - alpha` is the nominal coverage probability of the confidence
intervals.
float_format : str
Used to format the numeric values in the table.
method : str
The method for producing the confidence interval. Currently
must be 'normal' which uses the normal approximation.
"""
def fmt(x):
if isinstance(x, str):
return x
return float_format % x
headers = ["Estimate", "SE", "LCB", "UCB", "p-value"]
stubs = ["Odds ratio", "Log odds ratio", "Risk ratio",
"Log risk ratio"]
lcb1, ucb1 = self.oddsratio_confint(alpha, method)
lcb2, ucb2 = self.log_oddsratio_confint(alpha, method)
lcb3, ucb3 = self.riskratio_confint(alpha, method)
lcb4, ucb4 = self.log_riskratio_confint(alpha, method)
data = [[fmt(x) for x in [self.oddsratio, "", lcb1, ucb1,
self.oddsratio_pvalue()]],
[fmt(x) for x in [self.log_oddsratio, self.log_oddsratio_se,
lcb2, ucb2, self.oddsratio_pvalue()]],
[fmt(x) for x in [self.riskratio, "", lcb3, ucb3,
self.riskratio_pvalue()]],
[fmt(x) for x in [self.log_riskratio, self.log_riskratio_se,
lcb4, ucb4, self.riskratio_pvalue()]]]
tab = iolib.SimpleTable(data, headers, stubs, data_aligns="r",
table_dec_above='')
return tab | Summarizes results for a 2x2 table analysis.
Parameters
----------
alpha : float
`1 - alpha` is the nominal coverage probability of the confidence
intervals.
float_format : str
Used to format the numeric values in the table.
method : str
The method for producing the confidence interval. Currently
must be 'normal' which uses the normal approximation. | summary | python | statsmodels/statsmodels | statsmodels/stats/contingency_tables.py | https://github.com/statsmodels/statsmodels/blob/master/statsmodels/stats/contingency_tables.py | BSD-3-Clause |
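A hedged end-to-end sketch for the 2x2 helpers above (odds ratio, risk ratio, confidence intervals, and the summary table), assuming they live on statsmodels' `Table2x2` class; the counts are invented.

import numpy as np
from statsmodels.stats.contingency_tables import Table2x2

# Rows: exposed / unexposed; columns: event / no event (hypothetical counts).
t22 = Table2x2(np.array([[7, 27],
                         [10, 44]]))
print(t22.oddsratio, t22.oddsratio_confint(alpha=0.05))
print(t22.riskratio, t22.riskratio_confint(alpha=0.05))
print(t22.oddsratio_pvalue())                 # test of OR = 1
print(t22.summary(alpha=0.05, float_format="%.3f"))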
def from_data(cls, var1, var2, strata, data):
"""
Construct a StratifiedTable object from data.
Parameters
----------
var1 : int or string
The column index or name of `data` specifying the variable
defining the rows of the contingency table. The variable
must have only two distinct values.
var2 : int or string
The column index or name of `data` specifying the variable
defining the columns of the contingency table. The variable
must have only two distinct values.
strata : int or string
The column index or name of `data` specifying the variable
defining the strata.
data : array_like
The raw data. A cross-table for analysis is constructed
from the first two columns.
Returns
-------
StratifiedTable
"""
if not isinstance(data, pd.DataFrame):
data1 = pd.DataFrame(index=np.arange(data.shape[0]),
columns=[var1, var2, strata])
data1[data1.columns[var1]] = data[:, var1]
data1[data1.columns[var2]] = data[:, var2]
data1[data1.columns[strata]] = data[:, strata]
else:
data1 = data[[var1, var2, strata]]
gb = data1.groupby(strata).groups
tables = []
for g in gb:
ii = gb[g]
tab = pd.crosstab(data1.loc[ii, var1], data1.loc[ii, var2])
if (tab.shape != np.r_[2, 2]).any():
msg = "Invalid table dimensions"
raise ValueError(msg)
tables.append(np.asarray(tab))
return cls(tables) | Construct a StratifiedTable object from data.
Parameters
----------
var1 : int or string
The column index or name of `data` specifying the variable
defining the rows of the contingency table. The variable
must have only two distinct values.
var2 : int or string
The column index or name of `data` specifying the variable
defining the columns of the contingency table. The variable
must have only two distinct values.
strata : int or string
The column index or name of `data` specifying the variable
defining the strata.
data : array_like
The raw data. A cross-table for analysis is constructed
from the first two columns.
Returns
-------
StratifiedTable | from_data | python | statsmodels/statsmodels | statsmodels/stats/contingency_tables.py | https://github.com/statsmodels/statsmodels/blob/master/statsmodels/stats/contingency_tables.py | BSD-3-Clause |
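A sketch of building a stratified analysis from long-format data with the classmethod above; the DataFrame and column names are hypothetical, and each stratum must cross-tabulate to a 2x2 table.

import numpy as np
import pandas as pd
from statsmodels.stats.contingency_tables import StratifiedTable

rng = np.random.default_rng(0)
df = pd.DataFrame({
    "exposure": rng.integers(0, 2, 300),   # binary exposure
    "outcome": rng.integers(0, 2, 300),    # binary outcome
    "site": rng.integers(0, 3, 300),       # stratifying variable
})
st = StratifiedTable.from_data("exposure", "outcome", "site", df)
print(st.oddsratio_pooled)   # Mantel-Haenszel pooled odds ratio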
def test_null_odds(self, correction=False):
"""
Test that all tables have odds ratio equal to 1.
This is the 'Mantel-Haenszel' test.
Parameters
----------
correction : bool
If True, use the continuity correction when calculating the
test statistic.
Returns
-------
Bunch
A bunch containing the chi^2 test statistic and p-value.
"""
statistic = np.sum(self.table[0, 0, :] -
self._apb * self._apc / self._n)
statistic = np.abs(statistic)
if correction:
statistic -= 0.5
statistic = statistic**2
denom = self._apb * self._apc * self._bpd * self._cpd
denom /= (self._n**2 * (self._n - 1))
denom = np.sum(denom)
statistic /= denom
# df is always 1
pvalue = 1 - stats.chi2.cdf(statistic, 1)
b = _Bunch()
b.statistic = statistic
b.pvalue = pvalue
return b | Test that all tables have odds ratio equal to 1.
This is the 'Mantel-Haenszel' test.
Parameters
----------
correction : bool
If True, use the continuity correction when calculating the
test statistic.
Returns
-------
Bunch
A bunch containing the chi^2 test statistic and p-value. | test_null_odds | python | statsmodels/statsmodels | statsmodels/stats/contingency_tables.py | https://github.com/statsmodels/statsmodels/blob/master/statsmodels/stats/contingency_tables.py | BSD-3-Clause |
def oddsratio_pooled(self):
"""
The pooled odds ratio.
The value is an estimate of a common odds ratio across all of the
stratified tables.
"""
odds_ratio = np.sum(self._ad / self._n) / np.sum(self._bc / self._n)
return odds_ratio | The pooled odds ratio.
The value is an estimate of a common odds ratio across all of the
stratified tables. | oddsratio_pooled | python | statsmodels/statsmodels | statsmodels/stats/contingency_tables.py | https://github.com/statsmodels/statsmodels/blob/master/statsmodels/stats/contingency_tables.py | BSD-3-Clause |
def logodds_pooled(self):
"""
Returns the logarithm of the pooled odds ratio.
See oddsratio_pooled for more information.
"""
return np.log(self.oddsratio_pooled) | Returns the logarithm of the pooled odds ratio.
See oddsratio_pooled for more information. | logodds_pooled | python | statsmodels/statsmodels | statsmodels/stats/contingency_tables.py | https://github.com/statsmodels/statsmodels/blob/master/statsmodels/stats/contingency_tables.py | BSD-3-Clause |
def riskratio_pooled(self):
"""
Estimate of the pooled risk ratio.
"""
acd = self.table[0, 0, :] * self._cpd
cab = self.table[1, 0, :] * self._apb
rr = np.sum(acd / self._n) / np.sum(cab / self._n)
return rr | Estimate of the pooled risk ratio. | riskratio_pooled | python | statsmodels/statsmodels | statsmodels/stats/contingency_tables.py | https://github.com/statsmodels/statsmodels/blob/master/statsmodels/stats/contingency_tables.py | BSD-3-Clause |
def logodds_pooled_se(self):
"""
Estimated standard error of the pooled log odds ratio
References
----------
J. Robins, N. Breslow, S. Greenland. "Estimators of the
Mantel-Haenszel Variance Consistent in Both Sparse Data and
Large-Strata Limiting Models." Biometrics 42, no. 2 (1986): 311-23.
"""
adns = np.sum(self._ad / self._n)
bcns = np.sum(self._bc / self._n)
lor_va = np.sum(self._apd * self._ad / self._n**2) / adns**2
mid = self._apd * self._bc / self._n**2
mid += (1 - self._apd / self._n) * self._ad / self._n
mid = np.sum(mid)
mid /= (adns * bcns)
lor_va += mid
lor_va += np.sum((1 - self._apd / self._n) *
self._bc / self._n) / bcns**2
lor_va /= 2
lor_se = np.sqrt(lor_va)
return lor_se | Estimated standard error of the pooled log odds ratio
References
----------
J. Robins, N. Breslow, S. Greenland. "Estimators of the
Mantel-Haenszel Variance Consistent in Both Sparse Data and
Large-Strata Limiting Models." Biometrics 42, no. 2 (1986): 311-23. | logodds_pooled_se | python | statsmodels/statsmodels | statsmodels/stats/contingency_tables.py | https://github.com/statsmodels/statsmodels/blob/master/statsmodels/stats/contingency_tables.py | BSD-3-Clause |
def logodds_pooled_confint(self, alpha=0.05, method="normal"):
"""
A confidence interval for the pooled log odds ratio.
Parameters
----------
alpha : float
`1 - alpha` is the nominal coverage probability of the
interval.
method : str
The method for producing the confidence interval. Currently
must be 'normal' which uses the normal approximation.
Returns
-------
lcb : float
The lower confidence limit.
ucb : float
The upper confidence limit.
"""
lor = np.log(self.oddsratio_pooled)
lor_se = self.logodds_pooled_se
f = -stats.norm.ppf(alpha / 2)
lcb = lor - f * lor_se
ucb = lor + f * lor_se
return lcb, ucb | A confidence interval for the pooled log odds ratio.
Parameters
----------
alpha : float
`1 - alpha` is the nominal coverage probability of the
interval.
method : str
The method for producing the confidence interval. Currently
must be 'normal' which uses the normal approximation.
Returns
-------
lcb : float
The lower confidence limit.
ucb : float
The upper confidence limit. | logodds_pooled_confint | python | statsmodels/statsmodels | statsmodels/stats/contingency_tables.py | https://github.com/statsmodels/statsmodels/blob/master/statsmodels/stats/contingency_tables.py | BSD-3-Clause |
def oddsratio_pooled_confint(self, alpha=0.05, method="normal"):
"""
A confidence interval for the pooled odds ratio.
Parameters
----------
alpha : float
`1 - alpha` is the nominal coverage probability of the
interval.
method : str
The method for producing the confidence interval. Currently
must be 'normal' which uses the normal approximation.
Returns
-------
lcb : float
The lower confidence limit.
ucb : float
The upper confidence limit.
"""
lcb, ucb = self.logodds_pooled_confint(alpha, method=method)
lcb = np.exp(lcb)
ucb = np.exp(ucb)
return lcb, ucb | A confidence interval for the pooled odds ratio.
Parameters
----------
alpha : float
`1 - alpha` is the nominal coverage probability of the
interval.
method : str
The method for producing the confidence interval. Currently
must be 'normal' which uses the normal approximation.
Returns
-------
lcb : float
The lower confidence limit.
ucb : float
The upper confidence limit. | oddsratio_pooled_confint | python | statsmodels/statsmodels | statsmodels/stats/contingency_tables.py | https://github.com/statsmodels/statsmodels/blob/master/statsmodels/stats/contingency_tables.py | BSD-3-Clause |
def test_equal_odds(self, adjust=False):
"""
Test that all odds ratios are identical.
This is the 'Breslow-Day' testing procedure.
Parameters
----------
adjust : bool
Use the 'Tarone' adjustment to achieve the chi^2
asymptotic distribution.
Returns
-------
A bunch containing the following attributes:
statistic : float
The chi^2 test statistic.
p-value : float
The p-value for the test.
"""
table = self.table
r = self.oddsratio_pooled
a = 1 - r
b = r * (self._apb + self._apc) + self._dma
c = -r * self._apb * self._apc
# Expected value of first cell
dr = np.sqrt(b**2 - 4*a*c)
e11 = (-b + dr) / (2*a)
# Variance of the first cell
v11 = (1 / e11 + 1 / (self._apc - e11) + 1 / (self._apb - e11) +
1 / (self._dma + e11))
v11 = 1 / v11
statistic = np.sum((table[0, 0, :] - e11)**2 / v11)
if adjust:
adj = table[0, 0, :].sum() - e11.sum()
adj = adj**2
adj /= np.sum(v11)
statistic -= adj
pvalue = 1 - stats.chi2.cdf(statistic, table.shape[2] - 1)
b = _Bunch()
b.statistic = statistic
b.pvalue = pvalue
return b | Test that all odds ratios are identical.
This is the 'Breslow-Day' testing procedure.
Parameters
----------
adjust : bool
Use the 'Tarone' adjustment to achieve the chi^2
asymptotic distribution.
Returns
-------
A bunch containing the following attributes:
statistic : float
The chi^2 test statistic.
p-value : float
The p-value for the test. | test_equal_odds | python | statsmodels/statsmodels | statsmodels/stats/contingency_tables.py | https://github.com/statsmodels/statsmodels/blob/master/statsmodels/stats/contingency_tables.py | BSD-3-Clause |
def summary(self, alpha=0.05, float_format="%.3f", method="normal"):
"""
A summary of all the main results.
Parameters
----------
alpha : float
`1 - alpha` is the nominal coverage probability of the
confidence intervals.
float_format : str
Used for formatting numeric values in the summary.
method : str
The method for producing the confidence interval. Currently
must be 'normal' which uses the normal approximation.
"""
def fmt(x):
if isinstance(x, str):
return x
return float_format % x
co_lcb, co_ucb = self.oddsratio_pooled_confint(
alpha=alpha, method=method)
clo_lcb, clo_ucb = self.logodds_pooled_confint(
alpha=alpha, method=method)
headers = ["Estimate", "LCB", "UCB"]
stubs = ["Pooled odds", "Pooled log odds", "Pooled risk ratio", ""]
data = [[fmt(x) for x in [self.oddsratio_pooled, co_lcb, co_ucb]],
[fmt(x) for x in [self.logodds_pooled, clo_lcb, clo_ucb]],
[fmt(x) for x in [self.riskratio_pooled, "", ""]],
['', '', '']]
tab1 = iolib.SimpleTable(data, headers, stubs, data_aligns="r",
table_dec_above='')
headers = ["Statistic", "P-value", ""]
stubs = ["Test of OR=1", "Test constant OR"]
rslt1 = self.test_null_odds()
rslt2 = self.test_equal_odds()
data = [[fmt(x) for x in [rslt1.statistic, rslt1.pvalue, ""]],
[fmt(x) for x in [rslt2.statistic, rslt2.pvalue, ""]]]
tab2 = iolib.SimpleTable(data, headers, stubs, data_aligns="r")
tab1.extend(tab2)
headers = ["", "", ""]
stubs = ["Number of tables", "Min n", "Max n", "Avg n", "Total n"]
ss = self.table.sum(0).sum(0)
data = [["%d" % self.table.shape[2], '', ''],
["%d" % min(ss), '', ''],
["%d" % max(ss), '', ''],
["%.0f" % np.mean(ss), '', ''],
["%d" % sum(ss), '', '', '']]
tab3 = iolib.SimpleTable(data, headers, stubs, data_aligns="r")
tab1.extend(tab3)
return tab1 | A summary of all the main results.
Parameters
----------
alpha : float
`1 - alpha` is the nominal coverage probability of the
confidence intervals.
float_format : str
Used for formatting numeric values in the summary.
method : str
The method for producing the confidence interval. Currently
must be 'normal' which uses the normal approximation. | summary | python | statsmodels/statsmodels | statsmodels/stats/contingency_tables.py | https://github.com/statsmodels/statsmodels/blob/master/statsmodels/stats/contingency_tables.py | BSD-3-Clause |
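Putting the stratified-table pieces together (Mantel-Haenszel test, Breslow-Day test, pooled estimates, summary); a sketch assuming statsmodels' `StratifiedTable`, with two invented 2x2 strata.

import numpy as np
from statsmodels.stats.contingency_tables import StratifiedTable

tables = [np.array([[20, 14], [10, 24]]),
          np.array([[15, 12], [9, 20]])]
st = StratifiedTable(tables)
print(st.oddsratio_pooled, st.logodds_pooled_confint(alpha=0.05))
print(st.test_null_odds(correction=True).pvalue)   # Mantel-Haenszel: common OR = 1?
print(st.test_equal_odds(adjust=True).pvalue)      # Breslow-Day with Tarone adjustment
print(st.summary(alpha=0.05))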
def mcnemar(table, exact=True, correction=True):
"""
McNemar test of homogeneity.
Parameters
----------
table : array_like
A square contingency table.
exact : bool
If exact is true, then the binomial distribution will be used.
If exact is false, then the chisquare distribution will be
used, which is the approximation to the distribution of the
test statistic for large sample sizes.
correction : bool
If true, then a continuity correction is used for the chisquare
distribution (if exact is false.)
Returns
-------
A bunch with attributes:
statistic : float or int, array
The test statistic is the chisquare statistic if exact is
false. If the exact binomial distribution is used, then this
contains the min(n1, n2), where n1, n2 are cases that are zero
in one sample but one in the other sample.
pvalue : float or array
p-value of the null hypothesis of equal marginal distributions.
Notes
-----
This is a special case of Cochran's Q test, and of the homogeneity
test. The results when the chisquare distribution is used are
identical, except for continuity correction.
"""
table = _make_df_square(table)
table = np.asarray(table, dtype=np.float64)
n1, n2 = table[0, 1], table[1, 0]
if exact:
statistic = np.minimum(n1, n2)
# binom is symmetric with p=0.5
# SciPy 1.7+ requires int arguments
int_sum = int(n1 + n2)
if int_sum != (n1 + n2):
raise ValueError(
"exact can only be used with tables containing integers."
)
pvalue = stats.binom.cdf(statistic, int_sum, 0.5) * 2
pvalue = np.minimum(pvalue, 1) # limit to 1 if n1==n2
else:
corr = int(correction) # convert bool to 0 or 1
statistic = (np.abs(n1 - n2) - corr)**2 / (1. * (n1 + n2))
df = 1
pvalue = stats.chi2.sf(statistic, df)
b = _Bunch()
b.statistic = statistic
b.pvalue = pvalue
return b | McNemar test of homogeneity.
Parameters
----------
table : array_like
A square contingency table.
exact : bool
If exact is true, then the binomial distribution will be used.
If exact is false, then the chisquare distribution will be
used, which is the approximation to the distribution of the
test statistic for large sample sizes.
correction : bool
If true, then a continuity correction is used for the chisquare
distribution (if exact is false.)
Returns
-------
A bunch with attributes:
statistic : float or int, array
The test statistic is the chisquare statistic if exact is
false. If the exact binomial distribution is used, then this
contains the min(n1, n2), where n1, n2 are cases that are zero
in one sample but one in the other sample.
pvalue : float or array
p-value of the null hypothesis of equal marginal distributions.
Notes
-----
This is a special case of Cochran's Q test, and of the homogeneity
test. The results when the chisquare distribution is used are
identical, except for continuity correction. | mcnemar | python | statsmodels/statsmodels | statsmodels/stats/contingency_tables.py | https://github.com/statsmodels/statsmodels/blob/master/statsmodels/stats/contingency_tables.py | BSD-3-Clause |
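A usage sketch for McNemar's test as defined above, with a hypothetical paired 2x2 table (the discordant cells drive the test).

import numpy as np
from statsmodels.stats.contingency_tables import mcnemar

table = np.array([[59, 6],
                  [16, 80]])
res = mcnemar(table, exact=True)                      # exact binomial reference
print(res.statistic, res.pvalue)
res = mcnemar(table, exact=False, correction=True)    # chi-square approximation
print(res.statistic, res.pvalue)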
def cochrans_q(x, return_object=True):
"""
Cochran's Q test for identical binomial proportions.
Parameters
----------
x : array_like, 2d (N, k)
data with N cases and k variables
return_object : bool
Return values as bunch instead of as individual values.
Returns
-------
Returns a bunch containing the following attributes, or the
individual values according to the value of `return_object`.
statistic : float
test statistic
pvalue : float
pvalue from the chisquare distribution
Notes
-----
Cochran's Q is a k-sample extension of the McNemar test. If there
are only two groups, then Cochran's Q test and the McNemar test
are equivalent.
The procedure tests that the probability of success is the same
for every group. The alternative hypothesis is that at least two
groups have a different probability of success.
In Wikipedia terminology, rows are blocks and columns are
treatments. The number of rows, N, should be large for the
chisquare distribution to be a good approximation.
The Null hypothesis of the test is that all treatments have the
same effect.
References
----------
https://en.wikipedia.org/wiki/Cochran_test
SAS Manual for NPAR TESTS
"""
x = np.asarray(x, dtype=np.float64)
gruni = np.unique(x)
N, k = x.shape
count_row_success = (x == gruni[-1]).sum(1, float)
count_col_success = (x == gruni[-1]).sum(0, float)
count_row_ss = count_row_success.sum()
count_col_ss = count_col_success.sum()
assert count_row_ss == count_col_ss # just a calculation check
# From the SAS manual
q_stat = ((k-1) * (k * np.sum(count_col_success**2) - count_col_ss**2)
/ (k * count_row_ss - np.sum(count_row_success**2)))
# Note: the denominator looks just like k times the variance of
# the columns
# Wikipedia uses a different, but equivalent expression
# q_stat = (k-1) * (k * np.sum(count_row_success**2) - count_row_ss**2)
# / (k * count_col_ss - np.sum(count_col_success**2))
df = k - 1
pvalue = stats.chi2.sf(q_stat, df)
if return_object:
b = _Bunch()
b.statistic = q_stat
b.df = df
b.pvalue = pvalue
return b
return q_stat, pvalue, df | Cochran's Q test for identical binomial proportions.
Parameters
----------
x : array_like, 2d (N, k)
data with N cases and k variables
return_object : bool
Return values as bunch instead of as individual values.
Returns
-------
Returns a bunch containing the following attributes, or the
individual values according to the value of `return_object`.
statistic : float
test statistic
pvalue : float
pvalue from the chisquare distribution
Notes
-----
Cochran's Q is a k-sample extension of the McNemar test. If there
are only two groups, then Cochran's Q test and the McNemar test
are equivalent.
The procedure tests that the probability of success is the same
for every group. The alternative hypothesis is that at least two
groups have a different probability of success.
In Wikipedia terminology, rows are blocks and columns are
treatments. The number of rows, N, should be large for the
chisquare distribution to be a good approximation.
The Null hypothesis of the test is that all treatments have the
same effect.
References
----------
https://en.wikipedia.org/wiki/Cochran_test
SAS Manual for NPAR TESTS | cochrans_q | python | statsmodels/statsmodels | statsmodels/stats/contingency_tables.py | https://github.com/statsmodels/statsmodels/blob/master/statsmodels/stats/contingency_tables.py | BSD-3-Clause |
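A sketch of Cochran's Q on made-up binary responses: rows are subjects (blocks), columns are treatments.

import numpy as np
from statsmodels.stats.contingency_tables import cochrans_q

x = np.array([[1, 1, 0],
              [1, 1, 0],
              [1, 0, 0],
              [0, 1, 0],
              [1, 1, 1],
              [1, 1, 0],
              [1, 0, 0],
              [1, 1, 0],
              [0, 0, 0],
              [1, 0, 0],
              [1, 1, 1],
              [1, 1, 0]])
res = cochrans_q(x)                        # return_object=True by default
print(res.statistic, res.df, res.pvalue)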
def _ecdf(x):
'''no frills empirical cdf used in fdrcorrection
'''
nobs = len(x)
return np.arange(1,nobs+1)/float(nobs) | no frills empirical cdf used in fdrcorrection | _ecdf | python | statsmodels/statsmodels | statsmodels/stats/multitest.py | https://github.com/statsmodels/statsmodels/blob/master/statsmodels/stats/multitest.py | BSD-3-Clause |
def multipletests(pvals, alpha=0.05, method='hs',
maxiter=1,
is_sorted=False,
returnsorted=False):
"""
Test results and p-value correction for multiple tests
Parameters
----------
pvals : array_like, 1-d
uncorrected p-values. Must be 1-dimensional.
alpha : float
FWER, family-wise error rate, e.g. 0.1
method : str
Method used for testing and adjustment of pvalues. Can be either the
full name or initial letters. Available methods are:
- `bonferroni` : one-step correction
- `sidak` : one-step correction
- `holm-sidak` : step down method using Sidak adjustments
- `holm` : step-down method using Bonferroni adjustments
- `simes-hochberg` : step-up method (independent)
- `hommel` : closed method based on Simes tests (non-negative)
- `fdr_bh` : Benjamini/Hochberg (non-negative)
- `fdr_by` : Benjamini/Yekutieli (negative)
- `fdr_tsbh` : two stage fdr correction (non-negative)
- `fdr_tsbky` : two stage fdr correction (non-negative)
maxiter : int or bool
Maximum number of iterations for two-stage fdr, `fdr_tsbh` and
`fdr_tsbky`. It is ignored by all other methods.
maxiter=1 (default) corresponds to the two stage method.
maxiter=-1 corresponds to full iterations which is maxiter=len(pvals).
maxiter=0 uses only a single stage fdr correction using a 'bh' or 'bky'
prior fraction of assumed true hypotheses.
is_sorted : bool
If False (default), the p_values will be sorted, but the corrected
pvalues are in the original order. If True, then it assumed that the
pvalues are already sorted in ascending order.
returnsorted : bool
not tested, return sorted p-values instead of original sequence
Returns
-------
reject : ndarray, boolean
true for hypothesis that can be rejected for given alpha
pvals_corrected : ndarray
p-values corrected for multiple tests
alphacSidak : float
corrected alpha for Sidak method
alphacBonf : float
corrected alpha for Bonferroni method
Notes
-----
There may be API changes for this function in the future.
Except for 'fdr_twostage', the p-value correction is independent of the
alpha specified as argument. In these cases the corrected p-values
can also be compared with a different alpha. In the case of 'fdr_twostage',
the corrected p-values are specific to the given alpha, see
``fdrcorrection_twostage``.
The 'fdr_gbs' procedure is not verified against another package, p-values
are derived from scratch and are not derived in the reference. In Monte
Carlo experiments the method worked correctly and maintained the false
discovery rate.
All procedures that are included control FWER or FDR in the independent
case, and most are robust in the positively correlated case.
`fdr_gbs`: high power, fdr control for independent case and only small
violation in positively correlated case
**Timing**:
Most of the time with large arrays is spent in `argsort`. When
we want to calculate the p-value for several methods, then it is more
efficient to presort the pvalues, and put the results back into the
original order outside of the function.
Method='hommel' is very slow for large arrays, since it requires the
evaluation of n partitions, where n is the number of p-values.
"""
import gc
pvals = np.asarray(pvals)
alphaf = alpha # Notation ?
if not is_sorted:
sortind = np.argsort(pvals)
pvals = np.take(pvals, sortind)
ntests = len(pvals)
alphacSidak = 1 - np.power((1. - alphaf), 1./ntests)
alphacBonf = alphaf / float(ntests)
if method.lower() in ['b', 'bonf', 'bonferroni']:
reject = pvals <= alphacBonf
pvals_corrected = pvals * float(ntests)
elif method.lower() in ['s', 'sidak']:
reject = pvals <= alphacSidak
pvals_corrected = -np.expm1(ntests * np.log1p(-pvals))
elif method.lower() in ['hs', 'holm-sidak']:
alphacSidak_all = 1 - np.power((1. - alphaf),
1./np.arange(ntests, 0, -1))
notreject = pvals > alphacSidak_all
del alphacSidak_all
nr_index = np.nonzero(notreject)[0]
if nr_index.size == 0:
# nonreject is empty, all rejected
notrejectmin = len(pvals)
else:
notrejectmin = np.min(nr_index)
notreject[notrejectmin:] = True
reject = ~notreject
del notreject
# Equivalent to 1 - np.power((1. - pvals), np.arange(ntests, 0, -1)),
# but avoids floating point precision issues
pvals_corrected_raw = -np.expm1(np.arange(ntests, 0, -1) *
np.log1p(-pvals))
pvals_corrected = np.maximum.accumulate(pvals_corrected_raw)
del pvals_corrected_raw
elif method.lower() in ['h', 'holm']:
notreject = pvals > alphaf / np.arange(ntests, 0, -1)
nr_index = np.nonzero(notreject)[0]
if nr_index.size == 0:
# nonreject is empty, all rejected
notrejectmin = len(pvals)
else:
notrejectmin = np.min(nr_index)
notreject[notrejectmin:] = True
reject = ~notreject
pvals_corrected_raw = pvals * np.arange(ntests, 0, -1)
pvals_corrected = np.maximum.accumulate(pvals_corrected_raw)
del pvals_corrected_raw
gc.collect()
elif method.lower() in ['sh', 'simes-hochberg']:
alphash = alphaf / np.arange(ntests, 0, -1)
reject = pvals <= alphash
rejind = np.nonzero(reject)
if rejind[0].size > 0:
rejectmax = np.max(np.nonzero(reject))
reject[:rejectmax] = True
pvals_corrected_raw = np.arange(ntests, 0, -1) * pvals
pvals_corrected = np.minimum.accumulate(pvals_corrected_raw[::-1])[::-1]
del pvals_corrected_raw
elif method.lower() in ['ho', 'hommel']:
# we need a copy because we overwrite it in a loop
a = pvals.copy()
for m in range(ntests, 1, -1):
cim = np.min(m * pvals[-m:] / np.arange(1,m+1.))
a[-m:] = np.maximum(a[-m:], cim)
a[:-m] = np.maximum(a[:-m], np.minimum(m * pvals[:-m], cim))
pvals_corrected = a
reject = a <= alphaf
elif method.lower() in ['fdr_bh', 'fdr_i', 'fdr_p', 'fdri', 'fdrp']:
# delegate, call with sorted pvals
reject, pvals_corrected = fdrcorrection(pvals, alpha=alpha,
method='indep',
is_sorted=True)
elif method.lower() in ['fdr_by', 'fdr_n', 'fdr_c', 'fdrn', 'fdrcorr']:
# delegate, call with sorted pvals
reject, pvals_corrected = fdrcorrection(pvals, alpha=alpha,
method='n',
is_sorted=True)
elif method.lower() in ['fdr_tsbky', 'fdr_2sbky', 'fdr_twostage']:
# delegate, call with sorted pvals
reject, pvals_corrected = fdrcorrection_twostage(pvals, alpha=alpha,
method='bky',
maxiter=maxiter,
is_sorted=True)[:2]
elif method.lower() in ['fdr_tsbh', 'fdr_2sbh']:
# delegate, call with sorted pvals
reject, pvals_corrected = fdrcorrection_twostage(pvals, alpha=alpha,
method='bh',
maxiter=maxiter,
is_sorted=True)[:2]
elif method.lower() in ['fdr_gbs']:
#adaptive stepdown in Gavrilov, Benjamini, Sarkar, Annals of Statistics 2009
## notreject = pvals > alphaf / np.arange(ntests, 0, -1) #alphacSidak
## notrejectmin = np.min(np.nonzero(notreject))
## notreject[notrejectmin:] = True
## reject = ~notreject
ii = np.arange(1, ntests + 1)
q = (ntests + 1. - ii)/ii * pvals / (1. - pvals)
pvals_corrected_raw = np.maximum.accumulate(q)  # enforce non-decreasing (step-up) requirement
pvals_corrected = np.minimum.accumulate(pvals_corrected_raw[::-1])[::-1]
del pvals_corrected_raw
reject = pvals_corrected <= alpha
else:
raise ValueError('method not recognized')
if pvals_corrected is not None: #not necessary anymore
pvals_corrected[pvals_corrected>1] = 1
if is_sorted or returnsorted:
return reject, pvals_corrected, alphacSidak, alphacBonf
else:
pvals_corrected_ = np.empty_like(pvals_corrected)
pvals_corrected_[sortind] = pvals_corrected
del pvals_corrected
reject_ = np.empty_like(reject)
reject_[sortind] = reject
return reject_, pvals_corrected_, alphacSidak, alphacBonf | Test results and p-value correction for multiple tests
Parameters
----------
pvals : array_like, 1-d
uncorrected p-values. Must be 1-dimensional.
alpha : float
FWER, family-wise error rate, e.g. 0.1
method : str
Method used for testing and adjustment of pvalues. Can be either the
full name or initial letters. Available methods are:
- `bonferroni` : one-step correction
- `sidak` : one-step correction
- `holm-sidak` : step down method using Sidak adjustments
- `holm` : step-down method using Bonferroni adjustments
- `simes-hochberg` : step-up method (independent)
- `hommel` : closed method based on Simes tests (non-negative)
- `fdr_bh` : Benjamini/Hochberg (non-negative)
- `fdr_by` : Benjamini/Yekutieli (negative)
- `fdr_tsbh` : two stage fdr correction (non-negative)
- `fdr_tsbky` : two stage fdr correction (non-negative)
maxiter : int or bool
Maximum number of iterations for two-stage fdr, `fdr_tsbh` and
`fdr_tsbky`. It is ignored by all other methods.
maxiter=1 (default) corresponds to the two stage method.
maxiter=-1 corresponds to full iterations which is maxiter=len(pvals).
maxiter=0 uses only a single stage fdr correction using a 'bh' or 'bky'
prior fraction of assumed true hypotheses.
is_sorted : bool
If False (default), the p_values will be sorted, but the corrected
pvalues are in the original order. If True, then it assumed that the
pvalues are already sorted in ascending order.
returnsorted : bool
not tested, return sorted p-values instead of original sequence
Returns
-------
reject : ndarray, boolean
true for hypothesis that can be rejected for given alpha
pvals_corrected : ndarray
p-values corrected for multiple tests
alphacSidak : float
corrected alpha for Sidak method
alphacBonf : float
corrected alpha for Bonferroni method
Notes
-----
There may be API changes for this function in the future.
Except for 'fdr_twostage', the p-value correction is independent of the
alpha specified as argument. In these cases the corrected p-values
can also be compared with a different alpha. In the case of 'fdr_twostage',
the corrected p-values are specific to the given alpha, see
``fdrcorrection_twostage``.
The 'fdr_gbs' procedure is not verified against another package, p-values
are derived from scratch and are not derived in the reference. In Monte
Carlo experiments the method worked correctly and maintained the false
discovery rate.
All procedures that are included control FWER or FDR in the independent
case, and most are robust in the positively correlated case.
`fdr_gbs`: high power, fdr control for independent case and only small
violation in positively correlated case
**Timing**:
Most of the time with large arrays is spent in `argsort`. When
we want to calculate the p-value for several methods, then it is more
efficient to presort the pvalues, and put the results back into the
original order outside of the function.
Method='hommel' is very slow for large arrays, since it requires the
evaluation of n partitions, where n is the number of p-values. | multipletests | python | statsmodels/statsmodels | statsmodels/stats/multitest.py | https://github.com/statsmodels/statsmodels/blob/master/statsmodels/stats/multitest.py | BSD-3-Clause |
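A minimal sketch of the correction front-end above, using a hypothetical vector of p-values; any of the listed method names can be substituted.

import numpy as np
from statsmodels.stats.multitest import multipletests

pvals = np.array([0.001, 0.008, 0.039, 0.041, 0.042, 0.060, 0.074, 0.205])
reject, pvals_corr, alpha_sidak, alpha_bonf = multipletests(
    pvals, alpha=0.05, method="holm")
print(reject)        # boolean rejection decisions at FWER 0.05
print(pvals_corr)    # Holm-adjusted p-values, comparable to any alpha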
def fdrcorrection(pvals, alpha=0.05, method='indep', is_sorted=False):
'''
pvalue correction for false discovery rate.
This covers Benjamini/Hochberg for independent or positively correlated and
Benjamini/Yekutieli for general or negatively correlated tests.
Parameters
----------
pvals : array_like, 1d
Set of p-values of the individual tests.
alpha : float, optional
Family-wise error rate. Defaults to ``0.05``.
method : {'i', 'indep', 'p', 'poscorr', 'n', 'negcorr'}, optional
Which method to use for FDR correction.
``{'i', 'indep', 'p', 'poscorr'}`` all refer to ``fdr_bh``
(Benjamini/Hochberg for independent or positively
correlated tests). ``{'n', 'negcorr'}`` both refer to ``fdr_by``
(Benjamini/Yekutieli for general or negatively correlated tests).
Defaults to ``'indep'``.
is_sorted : bool, optional
If False (default), the p_values will be sorted, but the corrected
pvalues are in the original order. If True, then it assumed that the
pvalues are already sorted in ascending order.
Returns
-------
rejected : ndarray, bool
True if a hypothesis is rejected, False if not
pvalue-corrected : ndarray
pvalues adjusted for multiple hypothesis testing to limit FDR
Notes
-----
If there is prior information on the fraction of true hypotheses, then alpha
should be set to ``alpha * m/m_0`` where m is the number of tests,
given by the p-values, and m_0 is an estimate of the number of true hypotheses
(see Benjamini, Krieger and Yekutieli).
The two-stage method of Benjamini, Krieger and Yekutieli, which estimates the
number of false hypotheses, is implemented in ``fdrcorrection_twostage``.
Both methods exposed via this function (Benjamini/Hochberg, Benjamini/Yekutieli)
are also available in the function ``multipletests``, as ``method="fdr_bh"`` and
``method="fdr_by"``, respectively.
See also
--------
multipletests
'''
pvals = np.asarray(pvals)
assert pvals.ndim == 1, "pvals must be 1-dimensional, that is of shape (n,)"
if not is_sorted:
pvals_sortind = np.argsort(pvals)
pvals_sorted = np.take(pvals, pvals_sortind)
else:
pvals_sorted = pvals # alias
if method in ['i', 'indep', 'p', 'poscorr']:
ecdffactor = _ecdf(pvals_sorted)
elif method in ['n', 'negcorr']:
cm = np.sum(1./np.arange(1, len(pvals_sorted)+1)) #corrected this
ecdffactor = _ecdf(pvals_sorted) / cm
## elif method in ['n', 'negcorr']:
## cm = np.sum(np.arange(len(pvals)))
## ecdffactor = ecdf(pvals_sorted)/cm
else:
raise ValueError('only indep and negcorr implemented')
reject = pvals_sorted <= ecdffactor*alpha
if reject.any():
rejectmax = max(np.nonzero(reject)[0])
reject[:rejectmax] = True
pvals_corrected_raw = pvals_sorted / ecdffactor
pvals_corrected = np.minimum.accumulate(pvals_corrected_raw[::-1])[::-1]
del pvals_corrected_raw
pvals_corrected[pvals_corrected>1] = 1
if not is_sorted:
pvals_corrected_ = np.empty_like(pvals_corrected)
pvals_corrected_[pvals_sortind] = pvals_corrected
del pvals_corrected
reject_ = np.empty_like(reject)
reject_[pvals_sortind] = reject
return reject_, pvals_corrected_
else:
return reject, pvals_corrected | pvalue correction for false discovery rate.
This covers Benjamini/Hochberg for independent or positively correlated and
Benjamini/Yekutieli for general or negatively correlated tests.
Parameters
----------
pvals : array_like, 1d
Set of p-values of the individual tests.
alpha : float, optional
Family-wise error rate. Defaults to ``0.05``.
method : {'i', 'indep', 'p', 'poscorr', 'n', 'negcorr'}, optional
Which method to use for FDR correction.
``{'i', 'indep', 'p', 'poscorr'}`` all refer to ``fdr_bh``
(Benjamini/Hochberg for independent or positively
correlated tests). ``{'n', 'negcorr'}`` both refer to ``fdr_by``
(Benjamini/Yekutieli for general or negatively correlated tests).
Defaults to ``'indep'``.
is_sorted : bool, optional
        If False (default), the p_values will be sorted, but the corrected
        pvalues are in the original order. If True, then it is assumed that
        the pvalues are already sorted in ascending order.
Returns
-------
rejected : ndarray, bool
True if a hypothesis is rejected, False if not
pvalue-corrected : ndarray
pvalues adjusted for multiple hypothesis testing to limit FDR
Notes
-----
    If there is prior information on the fraction of true hypotheses, then alpha
    should be set to ``alpha * m/m_0`` where m is the number of tests,
    given by the p-values, and m_0 is an estimate of the number of true hypotheses.
    (see Benjamini, Krieger and Yekutieli)
    The two-step method of Benjamini, Krieger and Yekutieli that estimates the number
    of false hypotheses will be available (soon).
Both methods exposed via this function (Benjamini/Hochberg, Benjamini/Yekutieli)
are also available in the function ``multipletests``, as ``method="fdr_bh"`` and
``method="fdr_by"``, respectively.
See also
--------
multipletests | fdrcorrection | python | statsmodels/statsmodels | statsmodels/stats/multitest.py | https://github.com/statsmodels/statsmodels/blob/master/statsmodels/stats/multitest.py | BSD-3-Clause |
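A minimal usage sketch for fdrcorrection, using the import path given in this row; the p-values below are illustrative only:

import numpy as np
from statsmodels.stats.multitest import fdrcorrection

pvals = np.array([0.001, 0.008, 0.039, 0.041, 0.042, 0.060, 0.074, 0.205])

# Benjamini/Hochberg: assumes independent or positively correlated tests
reject_bh, pvals_bh = fdrcorrection(pvals, alpha=0.05, method='indep')

# Benjamini/Yekutieli: valid under general or negative dependence (more conservative)
reject_by, pvals_by = fdrcorrection(pvals, alpha=0.05, method='negcorr')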
def fdrcorrection_twostage(pvals, alpha=0.05, method='bky',
maxiter=1,
iter=None,
is_sorted=False):
'''(iterated) two stage linear step-up procedure with estimation of number of true
hypotheses
    Benjamini, Krieger and Yekutieli, procedure in Definition 6
Parameters
----------
pvals : array_like
set of p-values of the individual tests.
alpha : float
error rate
    method : {'bky', 'bh'}
        see Notes for details
        * 'bky' - implements the procedure in Definition 6 of Benjamini, Krieger
          and Yekutieli 2006
* 'bh' - the two stage method of Benjamini and Hochberg
maxiter : int or bool
Maximum number of iterations.
maxiter=1 (default) corresponds to the two stage method.
maxiter=-1 corresponds to full iterations which is maxiter=len(pvals).
maxiter=0 uses only a single stage fdr correction using a 'bh' or 'bky'
prior fraction of assumed true hypotheses.
Boolean maxiter is allowed for backwards compatibility with the
deprecated ``iter`` keyword.
maxiter=False is two-stage fdr (maxiter=1)
maxiter=True is full iteration (maxiter=-1 or maxiter=len(pvals))
.. versionadded:: 0.14
Replacement for ``iter`` with additional features.
iter : bool
``iter`` is deprecated use ``maxiter`` instead.
If iter is True, then only one iteration step is used, this is the
two-step method.
If iter is False, then iterations are stopped at convergence which
occurs in a finite number of steps (at most len(pvals) steps).
.. deprecated:: 0.14
Use ``maxiter`` instead of ``iter``.
Returns
-------
rejected : ndarray, bool
True if a hypothesis is rejected, False if not
pvalue-corrected : ndarray
pvalues adjusted for multiple hypotheses testing to limit FDR
m0 : int
ntest - rej, estimated number of true (not rejected) hypotheses
alpha_stages : list of floats
A list of alphas that have been used at each stage
Notes
-----
The returned corrected p-values are specific to the given alpha, they
cannot be used for a different alpha.
The returned corrected p-values are from the last stage of the fdr_bh
linear step-up procedure (fdrcorrection0 with method='indep') corrected
for the estimated fraction of true hypotheses.
This means that the rejection decision can be obtained with
``pval_corrected <= alpha``, where ``alpha`` is the original significance
level.
(Note: This has changed from earlier versions (<0.5.0) of statsmodels.)
BKY described several other multi-stage methods, which would be easy to implement.
However, in their simulation the simple two-stage method (with iter=False) was the
    most robust to the presence of positive correlation.
TODO: What should be returned?
'''
pvals = np.asarray(pvals)
if iter is not None:
import warnings
msg = "iter keyword is deprecated, use maxiter keyword instead."
warnings.warn(msg, FutureWarning)
if iter is False:
maxiter = 1
elif iter is True or maxiter in [-1, None] :
maxiter = len(pvals)
# otherwise we use maxiter
if not is_sorted:
pvals_sortind = np.argsort(pvals)
pvals = np.take(pvals, pvals_sortind)
ntests = len(pvals)
if method == 'bky':
fact = (1.+alpha)
alpha_prime = alpha / fact
elif method == 'bh':
fact = 1.
alpha_prime = alpha
else:
raise ValueError("only 'bky' and 'bh' are available as method")
alpha_stages = [alpha_prime]
rej, pvalscorr = fdrcorrection(pvals, alpha=alpha_prime, method='indep',
is_sorted=True)
r1 = rej.sum()
if (r1 == 0) or (r1 == ntests):
# return rej, pvalscorr * fact, ntests - r1, alpha_stages
reject = rej
pvalscorr *= fact
ri = r1
else:
ri_old = ri = r1
ntests0 = ntests # needed if maxiter=0
# while True:
for it in range(maxiter):
ntests0 = 1.0 * ntests - ri_old
alpha_star = alpha_prime * ntests / ntests0
alpha_stages.append(alpha_star)
#print ntests0, alpha_star
rej, pvalscorr = fdrcorrection(pvals, alpha=alpha_star, method='indep',
is_sorted=True)
ri = rej.sum()
if (it >= maxiter - 1) or ri == ri_old:
break
elif ri < ri_old:
# prevent cycles and endless loops
raise RuntimeError(" oops - should not be here")
ri_old = ri
# make adjustment to pvalscorr to reflect estimated number of Non-Null cases
# decision is then pvalscorr < alpha (or <=)
pvalscorr *= ntests0 * 1.0 / ntests
if method == 'bky':
pvalscorr *= (1. + alpha)
pvalscorr[pvalscorr>1] = 1
if not is_sorted:
pvalscorr_ = np.empty_like(pvalscorr)
pvalscorr_[pvals_sortind] = pvalscorr
del pvalscorr
reject = np.empty_like(rej)
reject[pvals_sortind] = rej
return reject, pvalscorr_, ntests - ri, alpha_stages
else:
return rej, pvalscorr, ntests - ri, alpha_stages | (iterated) two stage linear step-up procedure with estimation of number of true
hypotheses
    Benjamini, Krieger and Yekutieli, procedure in Definition 6
Parameters
----------
pvals : array_like
set of p-values of the individual tests.
alpha : float
error rate
    method : {'bky', 'bh'}
        see Notes for details
        * 'bky' - implements the procedure in Definition 6 of Benjamini, Krieger
          and Yekutieli 2006
* 'bh' - the two stage method of Benjamini and Hochberg
maxiter : int or bool
Maximum number of iterations.
maxiter=1 (default) corresponds to the two stage method.
maxiter=-1 corresponds to full iterations which is maxiter=len(pvals).
maxiter=0 uses only a single stage fdr correction using a 'bh' or 'bky'
prior fraction of assumed true hypotheses.
Boolean maxiter is allowed for backwards compatibility with the
deprecated ``iter`` keyword.
maxiter=False is two-stage fdr (maxiter=1)
maxiter=True is full iteration (maxiter=-1 or maxiter=len(pvals))
.. versionadded:: 0.14
Replacement for ``iter`` with additional features.
iter : bool
``iter`` is deprecated use ``maxiter`` instead.
If iter is True, then only one iteration step is used, this is the
two-step method.
If iter is False, then iterations are stopped at convergence which
occurs in a finite number of steps (at most len(pvals) steps).
.. deprecated:: 0.14
Use ``maxiter`` instead of ``iter``.
Returns
-------
rejected : ndarray, bool
True if a hypothesis is rejected, False if not
pvalue-corrected : ndarray
pvalues adjusted for multiple hypotheses testing to limit FDR
m0 : int
ntest - rej, estimated number of true (not rejected) hypotheses
alpha_stages : list of floats
A list of alphas that have been used at each stage
Notes
-----
The returned corrected p-values are specific to the given alpha, they
cannot be used for a different alpha.
The returned corrected p-values are from the last stage of the fdr_bh
linear step-up procedure (fdrcorrection0 with method='indep') corrected
for the estimated fraction of true hypotheses.
This means that the rejection decision can be obtained with
``pval_corrected <= alpha``, where ``alpha`` is the original significance
level.
(Note: This has changed from earlier versions (<0.5.0) of statsmodels.)
BKY described several other multi-stage methods, which would be easy to implement.
However, in their simulation the simple two-stage method (with iter=False) was the
    most robust to the presence of positive correlation.
TODO: What should be returned? | fdrcorrection_twostage | python | statsmodels/statsmodels | statsmodels/stats/multitest.py | https://github.com/statsmodels/statsmodels/blob/master/statsmodels/stats/multitest.py | BSD-3-Clause |
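A small sketch of the two-stage procedure above, with illustrative p-values; the four return values follow the Returns section of the docstring:

import numpy as np
from statsmodels.stats.multitest import fdrcorrection_twostage

pvals = np.array([0.001, 0.008, 0.039, 0.041, 0.042, 0.060, 0.074, 0.205])

# Two-stage BKY procedure; maxiter=1 (the default) is the plain two-stage method
reject, pvals_corr, m0, alpha_stages = fdrcorrection_twostage(
    pvals, alpha=0.05, method='bky', maxiter=1)

# m0 is the estimated number of true (not rejected) hypotheses; the rejection
# decision can also be read off as pvals_corr <= 0.05, but only for this alpha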
def local_fdr(zscores, null_proportion=1.0, null_pdf=None, deg=7,
nbins=30, alpha=0):
"""
Calculate local FDR values for a list of Z-scores.
Parameters
----------
zscores : array_like
A vector of Z-scores
null_proportion : float
The assumed proportion of true null hypotheses
null_pdf : function mapping reals to positive reals
The density of null Z-scores; if None, use standard normal
deg : int
The maximum exponent in the polynomial expansion of the
density of non-null Z-scores
nbins : int
The number of bins for estimating the marginal density
of Z-scores.
alpha : float
Use Poisson ridge regression with parameter alpha to estimate
the density of non-null Z-scores.
Returns
-------
fdr : array_like
A vector of FDR values
References
----------
B Efron (2008). Microarrays, Empirical Bayes, and the Two-Groups
Model. Statistical Science 23:1, 1-22.
Examples
--------
Basic use (the null Z-scores are taken to be standard normal):
>>> from statsmodels.stats.multitest import local_fdr
>>> import numpy as np
>>> zscores = np.random.randn(30)
>>> fdr = local_fdr(zscores)
Use a Gaussian null distribution estimated from the data:
>>> null = EmpiricalNull(zscores)
>>> fdr = local_fdr(zscores, null_pdf=null.pdf)
"""
from statsmodels.genmod.generalized_linear_model import GLM
from statsmodels.genmod.generalized_linear_model import families
from statsmodels.regression.linear_model import OLS
# Bins for Poisson modeling of the marginal Z-score density
minz = min(zscores)
maxz = max(zscores)
bins = np.linspace(minz, maxz, nbins)
# Bin counts
zhist = np.histogram(zscores, bins)[0]
# Bin centers
zbins = (bins[:-1] + bins[1:]) / 2
# The design matrix at bin centers
dmat = np.vander(zbins, deg + 1)
# Rescale the design matrix
sd = dmat.std(0)
ii = sd >1e-8
dmat[:, ii] /= sd[ii]
start = OLS(np.log(1 + zhist), dmat).fit().params
# Poisson regression
if alpha > 0:
md = GLM(zhist, dmat, family=families.Poisson()).fit_regularized(L1_wt=0, alpha=alpha, start_params=start)
else:
md = GLM(zhist, dmat, family=families.Poisson()).fit(start_params=start)
# The design matrix for all Z-scores
dmat_full = np.vander(zscores, deg + 1)
dmat_full[:, ii] /= sd[ii]
# The height of the estimated marginal density of Z-scores,
# evaluated at every observed Z-score.
fz = md.predict(dmat_full) / (len(zscores) * (bins[1] - bins[0]))
# The null density.
if null_pdf is None:
f0 = np.exp(-0.5 * zscores**2) / np.sqrt(2 * np.pi)
else:
f0 = null_pdf(zscores)
# The local FDR values
fdr = null_proportion * f0 / fz
fdr = np.clip(fdr, 0, 1)
return fdr | Calculate local FDR values for a list of Z-scores.
Parameters
----------
zscores : array_like
A vector of Z-scores
null_proportion : float
The assumed proportion of true null hypotheses
null_pdf : function mapping reals to positive reals
The density of null Z-scores; if None, use standard normal
deg : int
The maximum exponent in the polynomial expansion of the
density of non-null Z-scores
nbins : int
The number of bins for estimating the marginal density
of Z-scores.
alpha : float
Use Poisson ridge regression with parameter alpha to estimate
the density of non-null Z-scores.
Returns
-------
fdr : array_like
A vector of FDR values
References
----------
B Efron (2008). Microarrays, Empirical Bayes, and the Two-Groups
Model. Statistical Science 23:1, 1-22.
Examples
--------
Basic use (the null Z-scores are taken to be standard normal):
>>> from statsmodels.stats.multitest import local_fdr
>>> import numpy as np
>>> zscores = np.random.randn(30)
>>> fdr = local_fdr(zscores)
Use a Gaussian null distribution estimated from the data:
>>> null = EmpiricalNull(zscores)
>>> fdr = local_fdr(zscores, null_pdf=null.pdf) | local_fdr | python | statsmodels/statsmodels | statsmodels/stats/multitest.py | https://github.com/statsmodels/statsmodels/blob/master/statsmodels/stats/multitest.py | BSD-3-Clause |
def fun(params):
"""
Negative log-likelihood of z-scores.
The function has three arguments, packed into a vector:
mean : location parameter
logscale : log of the scale parameter
logitprop : logit of the proportion of true nulls
The implementation follows section 4 from Efron 2008.
"""
d, s, p = xform(params)
# Mass within the central region
central_mass = (norm.cdf((null_ub - d) / s) -
norm.cdf((null_lb - d) / s))
# Probability that a Z-score is null and is in the central region
cp = p * central_mass
# Binomial term
rval = n_zs0 * np.log(cp) + (n_zs - n_zs0) * np.log(1 - cp)
# Truncated Gaussian term for null Z-scores
zv = (zscores0 - d) / s
rval += np.sum(-zv**2 / 2) - n_zs0 * np.log(s)
rval -= n_zs0 * np.log(central_mass)
return -rval | Negative log-likelihood of z-scores.
The function has three arguments, packed into a vector:
mean : location parameter
logscale : log of the scale parameter
logitprop : logit of the proportion of true nulls
The implementation follows section 4 from Efron 2008. | __init__.fun | python | statsmodels/statsmodels | statsmodels/stats/multitest.py | https://github.com/statsmodels/statsmodels/blob/master/statsmodels/stats/multitest.py | BSD-3-Clause |
def pdf(self, zscores):
"""
Evaluates the fitted empirical null Z-score density.
Parameters
----------
zscores : scalar or array_like
The point or points at which the density is to be
evaluated.
Returns
-------
The empirical null Z-score density evaluated at the given
points.
"""
zval = (zscores - self.mean) / self.sd
return np.exp(-0.5*zval**2 - np.log(self.sd) - 0.5*np.log(2*np.pi)) | Evaluates the fitted empirical null Z-score density.
Parameters
----------
zscores : scalar or array_like
The point or points at which the density is to be
evaluated.
Returns
-------
The empirical null Z-score density evaluated at the given
points. | pdf | python | statsmodels/statsmodels | statsmodels/stats/multitest.py | https://github.com/statsmodels/statsmodels/blob/master/statsmodels/stats/multitest.py | BSD-3-Clause |
def pairwise_tukeyhsd(endog, groups, alpha=0.05, use_var='equal'):
"""
Calculate all pairwise comparisons with TukeyHSD or Games-Howell.
Parameters
----------
endog : ndarray, float, 1d
response variable
groups : ndarray, 1d
array with groups, can be string or integers
alpha : float
significance level for the test
use_var : {"unequal", "equal"}
If ``use_var`` is "equal", then the Tukey-hsd pvalues are returned.
Tukey-hsd assumes that (within) variances are the same across groups.
If ``use_var`` is "unequal", then the Games-Howell pvalues are
returned. This uses Welch's t-test for unequal variances with
Satterthwaite's corrected degrees of freedom for each pairwise
comparison.
Returns
-------
results : TukeyHSDResults instance
A results class containing relevant data and some post-hoc
calculations, including adjusted p-value
Notes
-----
This is just a wrapper around tukeyhsd method of MultiComparison.
    Tukey-hsd is not robust to heteroscedasticity, i.e. variances differ across
groups, especially if group sizes also vary. In those cases, the actual
size (rejection rate under the Null hypothesis) might be far from the
nominal size of the test.
The Games-Howell method uses pairwise t-tests that are robust to differences
in variances and approximately maintains size unless samples are very
small.
.. versionadded:: 0.15
        The ``use_var`` keyword and option for the Games-Howell test.
See Also
--------
MultiComparison
tukeyhsd
statsmodels.sandbox.stats.multicomp.TukeyHSDResults
"""
return MultiComparison(endog, groups).tukeyhsd(alpha=alpha,
use_var=use_var) | Calculate all pairwise comparisons with TukeyHSD or Games-Howell.
Parameters
----------
endog : ndarray, float, 1d
response variable
groups : ndarray, 1d
array with groups, can be string or integers
alpha : float
significance level for the test
use_var : {"unequal", "equal"}
If ``use_var`` is "equal", then the Tukey-hsd pvalues are returned.
Tukey-hsd assumes that (within) variances are the same across groups.
If ``use_var`` is "unequal", then the Games-Howell pvalues are
returned. This uses Welch's t-test for unequal variances with
Satterthwaite's corrected degrees of freedom for each pairwise
comparison.
Returns
-------
results : TukeyHSDResults instance
A results class containing relevant data and some post-hoc
calculations, including adjusted p-value
Notes
-----
This is just a wrapper around tukeyhsd method of MultiComparison.
    Tukey-hsd is not robust to heteroscedasticity, i.e. variances differ across
groups, especially if group sizes also vary. In those cases, the actual
size (rejection rate under the Null hypothesis) might be far from the
nominal size of the test.
The Games-Howell method uses pairwise t-tests that are robust to differences
in variances and approximately maintains size unless samples are very
small.
.. versionadded:: 0.15
        The ``use_var`` keyword and option for the Games-Howell test.
See Also
--------
MultiComparison
tukeyhsd
statsmodels.sandbox.stats.multicomp.TukeyHSDResults | pairwise_tukeyhsd | python | statsmodels/statsmodels | statsmodels/stats/multicomp.py | https://github.com/statsmodels/statsmodels/blob/master/statsmodels/stats/multicomp.py | BSD-3-Clause |
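A minimal usage sketch for pairwise_tukeyhsd; the response values and group labels are made up for illustration, and the use_var option needs statsmodels 0.15 or later per the versionadded note:

import numpy as np
from statsmodels.stats.multicomp import pairwise_tukeyhsd

y = np.array([24.5, 23.5, 26.4, 27.1, 29.9,
              28.4, 34.2, 29.5, 32.2, 30.1,
              26.1, 28.3, 24.3, 26.2, 27.8])
groups = np.repeat(['ctrl', 'trtA', 'trtB'], 5)

# Tukey HSD, assuming equal within-group variances
res = pairwise_tukeyhsd(y, groups, alpha=0.05)
print(res.summary())

# Games-Howell variant for unequal variances (statsmodels >= 0.15)
res_gh = pairwise_tukeyhsd(y, groups, alpha=0.05, use_var='unequal')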
def ttest_power(effect_size, nobs, alpha, df=None, alternative='two-sided'):
'''Calculate power of a ttest
'''
d = effect_size
if df is None:
df = nobs - 1
if alternative in ['two-sided', '2s']:
alpha_ = alpha / 2. #no inplace changes, does not work
elif alternative in ['smaller', 'larger']:
alpha_ = alpha
else:
raise ValueError("alternative has to be 'two-sided', 'larger' " +
"or 'smaller'")
pow_ = 0
if alternative in ['two-sided', '2s', 'larger']:
crit_upp = stats.t.isf(alpha_, df)
#print crit_upp, df, d*np.sqrt(nobs)
# use private methods, generic methods return nan with negative d
if np.any(np.isnan(crit_upp)):
# avoid endless loop, https://github.com/scipy/scipy/issues/2667
pow_ = np.nan
else:
# pow_ = stats.nct._sf(crit_upp, df, d*np.sqrt(nobs))
# use scipy.special
pow_ = nct_sf(crit_upp, df, d*np.sqrt(nobs))
if alternative in ['two-sided', '2s', 'smaller']:
crit_low = stats.t.ppf(alpha_, df)
#print crit_low, df, d*np.sqrt(nobs)
if np.any(np.isnan(crit_low)):
pow_ = np.nan
else:
# pow_ += stats.nct._cdf(crit_low, df, d*np.sqrt(nobs))
pow_ += nct_cdf(crit_low, df, d*np.sqrt(nobs))
return pow_ | Calculate power of a ttest | ttest_power | python | statsmodels/statsmodels | statsmodels/stats/power.py | https://github.com/statsmodels/statsmodels/blob/master/statsmodels/stats/power.py | BSD-3-Clause |
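A minimal sketch of calling ttest_power directly, with illustrative inputs:

from statsmodels.stats.power import ttest_power

# Power of a one-sample (or paired) t-test: effect size d = 0.5,
# 25 observations, 5% two-sided significance level
pow_two_sided = ttest_power(0.5, nobs=25, alpha=0.05, alternative='two-sided')

# One-sided alternative in the upper tail
pow_larger = ttest_power(0.5, nobs=25, alpha=0.05, alternative='larger')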
def normal_power(effect_size, nobs, alpha, alternative='two-sided', sigma=1.):
"""Calculate power of a normal distributed test statistic
    See ``normal_power_het`` for a generalization in which the variances under
    the Null and Alternative hypotheses differ.
Parameters
----------
    effect_size : float
        difference in the estimated means or statistics under the alternative,
        normalized by the standard deviation (without division by sqrt(nobs)).
nobs : float or int
number of observations
alpha : float in interval (0,1)
significance level, e.g. 0.05, is the probability of a type I
error, that is wrong rejections if the Null Hypothesis is true.
alternative : string, 'two-sided' (default), 'larger', 'smaller'
extra argument to choose whether the power is calculated for a
two-sided (default) or one sided test. The one-sided test can be
either 'larger', 'smaller'.
"""
d = effect_size
if alternative in ['two-sided', '2s']:
alpha_ = alpha / 2. #no inplace changes, does not work
elif alternative in ['smaller', 'larger']:
alpha_ = alpha
else:
raise ValueError("alternative has to be 'two-sided', 'larger' " +
"or 'smaller'")
pow_ = 0
if alternative in ['two-sided', '2s', 'larger']:
crit = stats.norm.isf(alpha_)
pow_ = stats.norm.sf(crit - d*np.sqrt(nobs)/sigma)
if alternative in ['two-sided', '2s', 'smaller']:
crit = stats.norm.ppf(alpha_)
pow_ += stats.norm.cdf(crit - d*np.sqrt(nobs)/sigma)
return pow_ | Calculate power of a normal distributed test statistic
    See ``normal_power_het`` for a generalization in which the variances under
    the Null and Alternative hypotheses differ.
Parameters
----------
    effect_size : float
        difference in the estimated means or statistics under the alternative,
        normalized by the standard deviation (without division by sqrt(nobs)).
nobs : float or int
number of observations
alpha : float in interval (0,1)
significance level, e.g. 0.05, is the probability of a type I
error, that is wrong rejections if the Null Hypothesis is true.
alternative : string, 'two-sided' (default), 'larger', 'smaller'
extra argument to choose whether the power is calculated for a
two-sided (default) or one sided test. The one-sided test can be
either 'larger', 'smaller'. | normal_power | python | statsmodels/statsmodels | statsmodels/stats/power.py | https://github.com/statsmodels/statsmodels/blob/master/statsmodels/stats/power.py | BSD-3-Clause |
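A short sketch for normal_power with illustrative numbers:

from statsmodels.stats.power import normal_power

# Standardized effect 0.3, 100 observations, two-sided 5% test
pow_two_sided = normal_power(0.3, nobs=100, alpha=0.05, alternative='two-sided')

# For alternative='smaller' the effect size should be negative for the power
# to exceed the significance level
pow_smaller = normal_power(-0.3, nobs=100, alpha=0.05, alternative='smaller')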
def normal_power_het(diff, nobs, alpha, std_null=1., std_alternative=None,
alternative='two-sided'):
"""Calculate power of a normal distributed test statistic
    This is a generalization of `normal_power` when the variances under the Null and
Alternative differ.
Parameters
----------
diff : float
difference in the estimated means or statistics under the alternative.
nobs : float or int
number of observations
alpha : float in interval (0,1)
significance level, e.g. 0.05, is the probability of a type I
error, that is wrong rejections if the Null Hypothesis is true.
std_null : float
standard deviation under the Null hypothesis without division by
sqrt(nobs)
std_alternative : float
standard deviation under the Alternative hypothesis without division
by sqrt(nobs)
alternative : string, 'two-sided' (default), 'larger', 'smaller'
extra argument to choose whether the power is calculated for a
two-sided (default) or one sided test. The one-sided test can be
either 'larger', 'smaller'.
Returns
-------
power : float
"""
d = diff
if std_alternative is None:
std_alternative = std_null
if alternative in ['two-sided', '2s']:
alpha_ = alpha / 2. #no inplace changes, does not work
elif alternative in ['smaller', 'larger']:
alpha_ = alpha
else:
raise ValueError("alternative has to be 'two-sided', 'larger' " +
"or 'smaller'")
std_ratio = std_null / std_alternative
pow_ = 0
if alternative in ['two-sided', '2s', 'larger']:
crit = stats.norm.isf(alpha_)
pow_ = stats.norm.sf(crit * std_ratio -
d*np.sqrt(nobs) / std_alternative)
if alternative in ['two-sided', '2s', 'smaller']:
crit = stats.norm.ppf(alpha_)
pow_ += stats.norm.cdf(crit * std_ratio -
d*np.sqrt(nobs) / std_alternative)
return pow_ | Calculate power of a normal distributed test statistic
    This is a generalization of `normal_power` when the variances under the Null and
Alternative differ.
Parameters
----------
diff : float
difference in the estimated means or statistics under the alternative.
nobs : float or int
number of observations
alpha : float in interval (0,1)
significance level, e.g. 0.05, is the probability of a type I
error, that is wrong rejections if the Null Hypothesis is true.
std_null : float
standard deviation under the Null hypothesis without division by
sqrt(nobs)
std_alternative : float
standard deviation under the Alternative hypothesis without division
by sqrt(nobs)
alternative : string, 'two-sided' (default), 'larger', 'smaller'
extra argument to choose whether the power is calculated for a
two-sided (default) or one sided test. The one-sided test can be
either 'larger', 'smaller'.
Returns
-------
power : float | normal_power_het | python | statsmodels/statsmodels | statsmodels/stats/power.py | https://github.com/statsmodels/statsmodels/blob/master/statsmodels/stats/power.py | BSD-3-Clause |
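A minimal sketch for normal_power_het, assuming it is imported from the same module as normal_power; all numbers are illustrative:

from statsmodels.stats.power import normal_power_het

# Difference of 0.2 with unequal standard deviations under Null and Alternative
# (both given without the division by sqrt(nobs))
pow_het = normal_power_het(diff=0.2, nobs=200, alpha=0.05,
                           std_null=1.0, std_alternative=1.2,
                           alternative='larger')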
def normal_sample_size_one_tail(diff, power, alpha, std_null=1.,
std_alternative=None):
"""explicit sample size computation if only one tail is relevant
The sample size is based on the power in one tail assuming that the
alternative is in the tail where the test has power that increases
with sample size.
Use alpha/2 to compute the one tail approximation to the two-sided
test, i.e. consider only one tail of two-sided test.
Parameters
----------
diff : float
difference in the estimated means or statistics under the alternative.
power : float in interval (0,1)
power of the test, e.g. 0.8, is one minus the probability of a type II
error. Power is the probability that the test correctly rejects the
Null Hypothesis if the Alternative Hypothesis is true.
alpha : float in interval (0,1)
significance level, e.g. 0.05, is the probability of a type I
error, that is wrong rejections if the Null Hypothesis is true.
Note: alpha is used for one tail. Use alpha/2 for two-sided
alternative.
std_null : float
standard deviation under the Null hypothesis without division by
sqrt(nobs)
std_alternative : float
standard deviation under the Alternative hypothesis without division
by sqrt(nobs). Defaults to None. If None, ``std_alternative`` is set
to the value of ``std_null``.
Returns
-------
nobs : float
Sample size to achieve (at least) the desired power.
If the minimum power is satisfied for all positive sample sizes, then
``nobs`` will be zero. This will be the case when power <= alpha if
std_alternative is equal to std_null.
"""
if std_alternative is None:
std_alternative = std_null
crit_power = stats.norm.isf(power)
crit = stats.norm.isf(alpha)
n1 = (np.maximum(crit * std_null - crit_power * std_alternative, 0)
/ diff)**2
return n1 | explicit sample size computation if only one tail is relevant
The sample size is based on the power in one tail assuming that the
alternative is in the tail where the test has power that increases
with sample size.
Use alpha/2 to compute the one tail approximation to the two-sided
test, i.e. consider only one tail of two-sided test.
Parameters
----------
diff : float
difference in the estimated means or statistics under the alternative.
power : float in interval (0,1)
power of the test, e.g. 0.8, is one minus the probability of a type II
error. Power is the probability that the test correctly rejects the
Null Hypothesis if the Alternative Hypothesis is true.
alpha : float in interval (0,1)
significance level, e.g. 0.05, is the probability of a type I
error, that is wrong rejections if the Null Hypothesis is true.
Note: alpha is used for one tail. Use alpha/2 for two-sided
alternative.
std_null : float
standard deviation under the Null hypothesis without division by
sqrt(nobs)
std_alternative : float
standard deviation under the Alternative hypothesis without division
by sqrt(nobs). Defaults to None. If None, ``std_alternative`` is set
to the value of ``std_null``.
Returns
-------
nobs : float
Sample size to achieve (at least) the desired power.
If the minimum power is satisfied for all positive sample sizes, then
``nobs`` will be zero. This will be the case when power <= alpha if
std_alternative is equal to std_null. | normal_sample_size_one_tail | python | statsmodels/statsmodels | statsmodels/stats/power.py | https://github.com/statsmodels/statsmodels/blob/master/statsmodels/stats/power.py | BSD-3-Clause |
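A minimal sketch for normal_sample_size_one_tail; per the docstring, alpha/2 is used to approximate a two-sided test:

from statsmodels.stats.power import normal_sample_size_one_tail

# Sample size for 80% power at a difference of 0.2, one tail of a two-sided 5% test
nobs = normal_sample_size_one_tail(diff=0.2, power=0.8, alpha=0.05 / 2,
                                   std_null=1.0, std_alternative=1.0)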
def ftest_anova_power(effect_size, nobs, alpha, k_groups=2, df=None):
'''power for ftest for one way anova with k equal sized groups
nobs total sample size, sum over all groups
should be general nobs observations, k_groups restrictions ???
'''
df_num = k_groups - 1
df_denom = nobs - k_groups
crit = stats.f.isf(alpha, df_num, df_denom)
pow_ = ncf_sf(crit, df_num, df_denom, effect_size**2 * nobs)
return pow_ | power for ftest for one way anova with k equal sized groups
nobs total sample size, sum over all groups
should be general nobs observations, k_groups restrictions ??? | ftest_anova_power | python | statsmodels/statsmodels | statsmodels/stats/power.py | https://github.com/statsmodels/statsmodels/blob/master/statsmodels/stats/power.py | BSD-3-Clause |
def ftest_power(effect_size, df2, df1, alpha, ncc=1):
'''Calculate the power of a F-test.
Parameters
----------
effect_size : float
The effect size is here Cohen's ``f``, the square root of ``f2``.
df2 : int or float
Denominator degrees of freedom.
This corresponds to the df_resid in Wald tests.
df1 : int or float
Numerator degrees of freedom.
This corresponds to the number of constraints in Wald tests.
alpha : float in interval (0,1)
significance level, e.g. 0.05, is the probability of a type I
error, that is wrong rejections if the Null Hypothesis is true.
ncc : int
degrees of freedom correction for non-centrality parameter.
see Notes
Returns
-------
power : float
Power of the test, e.g. 0.8, is one minus the probability of a
type II error. Power is the probability that the test correctly
rejects the Null Hypothesis if the Alternative Hypothesis is true.
Notes
-----
changed in 0.14: use df2, df1 instead of df_num, df_denom as arg names.
The latter had reversed meaning.
The sample size is given implicitly by ``df2`` with fixed number of
constraints given by numerator degrees of freedom ``df1``:
nobs = df2 + df1 + ncc
Set ncc=0 to match t-test, or f-test in LikelihoodModelResults.
ncc=1 matches the non-centrality parameter in R::pwr::pwr.f2.test
ftest_power with ncc=0 should also be correct for f_test in regression
models, with df_num (df1) as number of constraints and d_denom (df2) as
df_resid.
'''
df_num, df_denom = df1, df2
nc = effect_size**2 * (df_denom + df_num + ncc)
crit = stats.f.isf(alpha, df_num, df_denom)
# pow_ = stats.ncf.sf(crit, df_num, df_denom, nc)
# use scipy.special for ncf
pow_ = ncf_sf(crit, df_num, df_denom, nc)
return pow_ #, crit, nc | Calculate the power of a F-test.
Parameters
----------
effect_size : float
The effect size is here Cohen's ``f``, the square root of ``f2``.
df2 : int or float
Denominator degrees of freedom.
This corresponds to the df_resid in Wald tests.
df1 : int or float
Numerator degrees of freedom.
This corresponds to the number of constraints in Wald tests.
alpha : float in interval (0,1)
significance level, e.g. 0.05, is the probability of a type I
error, that is wrong rejections if the Null Hypothesis is true.
ncc : int
degrees of freedom correction for non-centrality parameter.
see Notes
Returns
-------
power : float
Power of the test, e.g. 0.8, is one minus the probability of a
type II error. Power is the probability that the test correctly
rejects the Null Hypothesis if the Alternative Hypothesis is true.
Notes
-----
changed in 0.14: use df2, df1 instead of df_num, df_denom as arg names.
The latter had reversed meaning.
The sample size is given implicitly by ``df2`` with fixed number of
constraints given by numerator degrees of freedom ``df1``:
nobs = df2 + df1 + ncc
Set ncc=0 to match t-test, or f-test in LikelihoodModelResults.
ncc=1 matches the non-centrality parameter in R::pwr::pwr.f2.test
ftest_power with ncc=0 should also be correct for f_test in regression
    models, with df_num (df1) as the number of constraints and df_denom (df2) as
df_resid. | ftest_power | python | statsmodels/statsmodels | statsmodels/stats/power.py | https://github.com/statsmodels/statsmodels/blob/master/statsmodels/stats/power.py | BSD-3-Clause |
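A usage sketch for ftest_power; note the argument order df2 (denominator) before df1 (numerator), as changed in 0.14. The degrees of freedom below are illustrative:

from statsmodels.stats.power import ftest_power

# Cohen's f = 0.3, 3 constraints (df1), 96 residual degrees of freedom (df2)
pow_f = ftest_power(0.3, df2=96, df1=3, alpha=0.05, ncc=1)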
def ftest_power_f2(effect_size, df_num, df_denom, alpha, ncc=1):
'''Calculate the power of a F-test.
Based on Cohen's `f^2` effect size.
This assumes
df_num : numerator degrees of freedom, (number of constraints)
df_denom : denominator degrees of freedom (df_resid in regression)
nobs = df_denom + df_num + ncc
nc = effect_size * nobs (noncentrality index)
Power is computed one-sided in the upper tail.
Parameters
----------
effect_size : float
Cohen's f2 effect size or noncentrality divided by nobs.
df_num : int or float
Numerator degrees of freedom.
This corresponds to the number of constraints in Wald tests.
df_denom : int or float
Denominator degrees of freedom.
This corresponds to the df_resid in Wald tests.
alpha : float in interval (0,1)
significance level, e.g. 0.05, is the probability of a type I
error, that is wrong rejections if the Null Hypothesis is true.
ncc : int
degrees of freedom correction for non-centrality parameter.
see Notes
Returns
-------
power : float
Power of the test, e.g. 0.8, is one minus the probability of a
type II error. Power is the probability that the test correctly
rejects the Null Hypothesis if the Alternative Hypothesis is true.
    Notes
    -----
    The sample size is given implicitly by ``df_denom`` with fixed number of
constraints given by numerator degrees of freedom ``df_num``:
nobs = df_denom + df_num + ncc
Set ncc=0 to match t-test, or f-test in LikelihoodModelResults.
ncc=1 matches the non-centrality parameter in R::pwr::pwr.f2.test
ftest_power with ncc=0 should also be correct for f_test in regression
    models, with df_num (df1) as the number of constraints and df_denom (df2) as
df_resid.
'''
nc = effect_size * (df_denom + df_num + ncc)
crit = stats.f.isf(alpha, df_num, df_denom)
# pow_ = stats.ncf.sf(crit, df_num, df_denom, nc)
# use scipy.special for ncf
pow_ = ncf_sf(crit, df_num, df_denom, nc)
return pow_ | Calculate the power of a F-test.
Based on Cohen's `f^2` effect size.
This assumes
df_num : numerator degrees of freedom, (number of constraints)
df_denom : denominator degrees of freedom (df_resid in regression)
nobs = df_denom + df_num + ncc
nc = effect_size * nobs (noncentrality index)
Power is computed one-sided in the upper tail.
Parameters
----------
effect_size : float
Cohen's f2 effect size or noncentrality divided by nobs.
df_num : int or float
Numerator degrees of freedom.
This corresponds to the number of constraints in Wald tests.
df_denom : int or float
Denominator degrees of freedom.
This corresponds to the df_resid in Wald tests.
alpha : float in interval (0,1)
significance level, e.g. 0.05, is the probability of a type I
error, that is wrong rejections if the Null Hypothesis is true.
ncc : int
degrees of freedom correction for non-centrality parameter.
see Notes
Returns
-------
power : float
Power of the test, e.g. 0.8, is one minus the probability of a
type II error. Power is the probability that the test correctly
rejects the Null Hypothesis if the Alternative Hypothesis is true.
    Notes
    -----
    The sample size is given implicitly by ``df_denom`` with fixed number of
constraints given by numerator degrees of freedom ``df_num``:
nobs = df_denom + df_num + ncc
Set ncc=0 to match t-test, or f-test in LikelihoodModelResults.
ncc=1 matches the non-centrality parameter in R::pwr::pwr.f2.test
ftest_power with ncc=0 should also be correct for f_test in regression
    models, with df_num (df1) as the number of constraints and df_denom (df2) as
df_resid. | ftest_power_f2 | python | statsmodels/statsmodels | statsmodels/stats/power.py | https://github.com/statsmodels/statsmodels/blob/master/statsmodels/stats/power.py | BSD-3-Clause |
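A sketch showing how ftest_power_f2 relates to ftest_power: the effect size here is Cohen's f squared, and the numerator/denominator degrees of freedom are passed in the opposite order. Values are illustrative:

from statsmodels.stats.power import ftest_power, ftest_power_f2

f = 0.3          # Cohen's f
f2 = f**2        # Cohen's f^2

p_a = ftest_power_f2(f2, df_num=3, df_denom=96, alpha=0.05, ncc=1)
p_b = ftest_power(f, df2=96, df1=3, alpha=0.05, ncc=1)
# p_a and p_b should agree up to floating point error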
def solve_power(self, **kwds):
'''solve for any one of the parameters of a t-test
for t-test the keywords are:
effect_size, nobs, alpha, power
exactly one needs to be ``None``, all others need numeric values
*attaches*
cache_fit_res : list
Cache of the result of the root finding procedure for the latest
call to ``solve_power``, mainly for debugging purposes.
The first element is the success indicator, one if successful.
The remaining elements contain the return information of the up to
three solvers that have been tried.
'''
#TODO: maybe use explicit kwds,
# nicer but requires inspect? and not generic across tests
# I'm duplicating this in the subclass to get informative docstring
key = [k for k,v in kwds.items() if v is None]
#print kwds, key
if len(key) != 1:
raise ValueError('need exactly one keyword that is None')
key = key[0]
if key == 'power':
del kwds['power']
return self.power(**kwds)
if kwds['effect_size'] == 0:
import warnings
from statsmodels.tools.sm_exceptions import HypothesisTestWarning
warnings.warn('Warning: Effect size of 0 detected', HypothesisTestWarning)
if key == 'power':
return kwds['alpha']
if key == 'alpha':
return kwds['power']
else:
raise ValueError('Cannot detect an effect-size of 0. Try changing your effect-size.')
self._counter = 0
def func(x):
kwds[key] = x
fval = self._power_identity(**kwds)
self._counter += 1
#print self._counter,
if self._counter > 500:
raise RuntimeError('possible endless loop (500 NaNs)')
if np.isnan(fval):
return np.inf
else:
return fval
#TODO: I'm using the following so I get a warning when start_ttp is not defined
try:
start_value = self.start_ttp[key]
except KeyError:
start_value = 0.9
import warnings
from statsmodels.tools.sm_exceptions import ValueWarning
warnings.warn(f'Warning: using default start_value for {key}', ValueWarning)
fit_kwds = self.start_bqexp[key]
fit_res = []
#print vars()
try:
val, res = brentq_expanding(func, full_output=True, **fit_kwds)
failed = False
fit_res.append(res)
except ValueError:
failed = True
fit_res.append(None)
success = None
if (not failed) and res.converged:
success = 1
else:
# try backup
# TODO: check more cases to make this robust
if not np.isnan(start_value):
val, infodict, ier, msg = optimize.fsolve(func, start_value,
full_output=True) #scalar
#val = optimize.newton(func, start_value) #scalar
fval = infodict['fvec']
fit_res.append(infodict)
else:
ier = -1
fval = 1
fit_res.append([None])
if ier == 1 and np.abs(fval) < 1e-4 :
success = 1
else:
#print infodict
if key in ['alpha', 'power', 'effect_size']:
val, r = optimize.brentq(func, 1e-8, 1-1e-8,
full_output=True) #scalar
success = 1 if r.converged else 0
fit_res.append(r)
else:
success = 0
if not success == 1:
import warnings
from statsmodels.tools.sm_exceptions import (ConvergenceWarning,
convergence_doc)
warnings.warn(convergence_doc, ConvergenceWarning)
#attach fit_res, for reading only, should be needed only for debugging
fit_res.insert(0, success)
self.cache_fit_res = fit_res
return val | solve for any one of the parameters of a t-test
for t-test the keywords are:
effect_size, nobs, alpha, power
exactly one needs to be ``None``, all others need numeric values
*attaches*
cache_fit_res : list
Cache of the result of the root finding procedure for the latest
call to ``solve_power``, mainly for debugging purposes.
The first element is the success indicator, one if successful.
The remaining elements contain the return information of the up to
three solvers that have been tried. | solve_power | python | statsmodels/statsmodels | statsmodels/stats/power.py | https://github.com/statsmodels/statsmodels/blob/master/statsmodels/stats/power.py | BSD-3-Clause |
def plot_power(self, dep_var='nobs', nobs=None, effect_size=None,
alpha=0.05, ax=None, title=None, plt_kwds=None, **kwds):
"""
Plot power with number of observations or effect size on x-axis
Parameters
----------
dep_var : {'nobs', 'effect_size', 'alpha'}
This specifies which variable is used for the horizontal axis.
If dep_var='nobs' (default), then one curve is created for each
value of ``effect_size``. If dep_var='effect_size' or alpha, then
one curve is created for each value of ``nobs``.
nobs : {scalar, array_like}
specifies the values of the number of observations in the plot
effect_size : {scalar, array_like}
specifies the values of the effect_size in the plot
alpha : {float, array_like}
The significance level (type I error) used in the power
calculation. Can only be more than a scalar, if ``dep_var='alpha'``
ax : None or axis instance
If ax is None, than a matplotlib figure is created. If ax is a
matplotlib axis instance, then it is reused, and the plot elements
are created with it.
title : str
title for the axis. Use an empty string, ``''``, to avoid a title.
plt_kwds : {None, dict}
not used yet
kwds : dict
These remaining keyword arguments are used as arguments to the
power function. Many power function support ``alternative`` as a
keyword argument, two-sample test support ``ratio``.
Returns
-------
Figure
If `ax` is None, the created figure. Otherwise the figure to which
`ax` is connected.
Notes
-----
This works only for classes where the ``power`` method has
``effect_size``, ``nobs`` and ``alpha`` as the first three arguments.
If the second argument is ``nobs1``, then the number of observations
in the plot are those for the first sample.
TODO: fix this for FTestPower and GofChisquarePower
TODO: maybe add line variable, if we want more than nobs and effectsize
"""
#if pwr_kwds is None:
# pwr_kwds = {}
from statsmodels.graphics import utils
from statsmodels.graphics.plottools import rainbow
fig, ax = utils.create_mpl_ax(ax)
import matplotlib.pyplot as plt
colormap = plt.cm.Dark2 #pylint: disable-msg=E1101
plt_alpha = 1 #0.75
lw = 2
if dep_var == 'nobs':
colors = rainbow(len(effect_size))
colors = [colormap(i) for i in np.linspace(0, 0.9, len(effect_size))]
for ii, es in enumerate(effect_size):
power = self.power(es, nobs, alpha, **kwds)
ax.plot(nobs, power, lw=lw, alpha=plt_alpha,
color=colors[ii], label='es=%4.2F' % es)
xlabel = 'Number of Observations'
elif dep_var in ['effect size', 'effect_size', 'es']:
colors = rainbow(len(nobs))
colors = [colormap(i) for i in np.linspace(0, 0.9, len(nobs))]
for ii, n in enumerate(nobs):
power = self.power(effect_size, n, alpha, **kwds)
ax.plot(effect_size, power, lw=lw, alpha=plt_alpha,
color=colors[ii], label='N=%4.2F' % n)
xlabel = 'Effect Size'
elif dep_var in ['alpha']:
# experimental nobs as defining separate lines
colors = rainbow(len(nobs))
for ii, n in enumerate(nobs):
power = self.power(effect_size, n, alpha, **kwds)
ax.plot(alpha, power, lw=lw, alpha=plt_alpha,
color=colors[ii], label='N=%4.2F' % n)
xlabel = 'alpha'
else:
raise ValueError('depvar not implemented')
if title is None:
title = 'Power of Test'
ax.set_xlabel(xlabel)
ax.set_title(title)
ax.legend(loc='lower right')
return fig | Plot power with number of observations or effect size on x-axis
Parameters
----------
dep_var : {'nobs', 'effect_size', 'alpha'}
This specifies which variable is used for the horizontal axis.
If dep_var='nobs' (default), then one curve is created for each
value of ``effect_size``. If dep_var='effect_size' or alpha, then
one curve is created for each value of ``nobs``.
nobs : {scalar, array_like}
specifies the values of the number of observations in the plot
effect_size : {scalar, array_like}
specifies the values of the effect_size in the plot
alpha : {float, array_like}
The significance level (type I error) used in the power
calculation. Can only be more than a scalar, if ``dep_var='alpha'``
ax : None or axis instance
If ax is None, than a matplotlib figure is created. If ax is a
matplotlib axis instance, then it is reused, and the plot elements
are created with it.
title : str
title for the axis. Use an empty string, ``''``, to avoid a title.
plt_kwds : {None, dict}
not used yet
kwds : dict
These remaining keyword arguments are used as arguments to the
power function. Many power function support ``alternative`` as a
keyword argument, two-sample test support ``ratio``.
Returns
-------
Figure
If `ax` is None, the created figure. Otherwise the figure to which
`ax` is connected.
Notes
-----
This works only for classes where the ``power`` method has
``effect_size``, ``nobs`` and ``alpha`` as the first three arguments.
If the second argument is ``nobs1``, then the number of observations
in the plot are those for the first sample.
TODO: fix this for FTestPower and GofChisquarePower
TODO: maybe add line variable, if we want more than nobs and effectsize | plot_power | python | statsmodels/statsmodels | statsmodels/stats/power.py | https://github.com/statsmodels/statsmodels/blob/master/statsmodels/stats/power.py | BSD-3-Clause |
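A minimal sketch of plot_power via one of the concrete power classes (matplotlib is required); the grids of sample sizes and effect sizes are illustrative:

import numpy as np
from statsmodels.stats.power import TTestIndPower

analysis = TTestIndPower()
# One curve per effect size, sample size of group 1 on the horizontal axis
fig = analysis.plot_power(dep_var='nobs',
                          nobs=np.arange(5, 150),
                          effect_size=np.array([0.2, 0.5, 0.8]),
                          alpha=0.05)
fig.savefig('power_curves.png')  # or show the figure interactively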
def power(self, effect_size, nobs, alpha, df=None, alternative='two-sided'):
'''Calculate the power of a t-test for one sample or paired samples.
Parameters
----------
effect_size : float
standardized effect size, mean divided by the standard deviation.
effect size has to be positive.
nobs : int or float
sample size, number of observations.
alpha : float in interval (0,1)
significance level, e.g. 0.05, is the probability of a type I
error, that is wrong rejections if the Null Hypothesis is true.
df : int or float
degrees of freedom. By default this is None, and the df from the
one sample or paired ttest is used, ``df = nobs1 - 1``
alternative : str, 'two-sided' (default), 'larger', 'smaller'
extra argument to choose whether the power is calculated for a
two-sided (default) or one sided test. The one-sided test can be
        either 'larger', 'smaller'.
Returns
-------
power : float
Power of the test, e.g. 0.8, is one minus the probability of a
type II error. Power is the probability that the test correctly
rejects the Null Hypothesis if the Alternative Hypothesis is true.
'''
# for debugging
#print 'calling ttest power with', (effect_size, nobs, alpha, df, alternative)
return ttest_power(effect_size, nobs, alpha, df=df,
alternative=alternative) | Calculate the power of a t-test for one sample or paired samples.
Parameters
----------
effect_size : float
standardized effect size, mean divided by the standard deviation.
effect size has to be positive.
nobs : int or float
sample size, number of observations.
alpha : float in interval (0,1)
significance level, e.g. 0.05, is the probability of a type I
error, that is wrong rejections if the Null Hypothesis is true.
df : int or float
degrees of freedom. By default this is None, and the df from the
one sample or paired ttest is used, ``df = nobs1 - 1``
alternative : str, 'two-sided' (default), 'larger', 'smaller'
extra argument to choose whether the power is calculated for a
two-sided (default) or one sided test. The one-sided test can be
        either 'larger', 'smaller'.
Returns
-------
power : float
Power of the test, e.g. 0.8, is one minus the probability of a
type II error. Power is the probability that the test correctly
rejects the Null Hypothesis if the Alternative Hypothesis is true. | power | python | statsmodels/statsmodels | statsmodels/stats/power.py | https://github.com/statsmodels/statsmodels/blob/master/statsmodels/stats/power.py | BSD-3-Clause |
def solve_power(self, effect_size=None, nobs=None, alpha=None, power=None,
alternative='two-sided'):
'''solve for any one parameter of the power of a one sample t-test
for the one sample t-test the keywords are:
effect_size, nobs, alpha, power
Exactly one needs to be ``None``, all others need numeric values.
This test can also be used for a paired t-test, where effect size is
defined in terms of the mean difference, and nobs is the number of
pairs.
Parameters
----------
effect_size : float
        Standardized effect size. The effect size is here Cohen's f, the
        square root of ``f2``.
nobs : int or float
sample size, number of observations.
alpha : float in interval (0,1)
significance level, e.g. 0.05, is the probability of a type I
error, that is wrong rejections if the Null Hypothesis is true.
power : float in interval (0,1)
power of the test, e.g. 0.8, is one minus the probability of a
type II error. Power is the probability that the test correctly
rejects the Null Hypothesis if the Alternative Hypothesis is true.
alternative : str, 'two-sided' (default) or 'one-sided'
extra argument to choose whether the power is calculated for a
two-sided (default) or one sided test.
'one-sided' assumes we are in the relevant tail.
Returns
-------
value : float
The value of the parameter that was set to None in the call. The
value solves the power equation given the remaining parameters.
*attaches*
cache_fit_res : list
Cache of the result of the root finding procedure for the latest
call to ``solve_power``, mainly for debugging purposes.
The first element is the success indicator, one if successful.
The remaining elements contain the return information of the up to
three solvers that have been tried.
Notes
-----
The function uses scipy.optimize for finding the value that satisfies
the power equation. It first uses ``brentq`` with a prior search for
bounds. If this fails to find a root, ``fsolve`` is used. If ``fsolve``
also fails, then, for ``alpha``, ``power`` and ``effect_size``,
``brentq`` with fixed bounds is used. However, there can still be cases
where this fails.
'''
# for debugging
#print 'calling ttest solve with', (effect_size, nobs, alpha, power, alternative)
return super().solve_power(effect_size=effect_size,
nobs=nobs,
alpha=alpha,
power=power,
alternative=alternative) | solve for any one parameter of the power of a one sample t-test
for the one sample t-test the keywords are:
effect_size, nobs, alpha, power
Exactly one needs to be ``None``, all others need numeric values.
This test can also be used for a paired t-test, where effect size is
defined in terms of the mean difference, and nobs is the number of
pairs.
Parameters
----------
effect_size : float
        Standardized effect size. The effect size is here Cohen's f, the
        square root of ``f2``.
nobs : int or float
sample size, number of observations.
alpha : float in interval (0,1)
significance level, e.g. 0.05, is the probability of a type I
error, that is wrong rejections if the Null Hypothesis is true.
power : float in interval (0,1)
power of the test, e.g. 0.8, is one minus the probability of a
type II error. Power is the probability that the test correctly
rejects the Null Hypothesis if the Alternative Hypothesis is true.
alternative : str, 'two-sided' (default) or 'one-sided'
extra argument to choose whether the power is calculated for a
two-sided (default) or one sided test.
'one-sided' assumes we are in the relevant tail.
Returns
-------
value : float
The value of the parameter that was set to None in the call. The
value solves the power equation given the remaining parameters.
*attaches*
cache_fit_res : list
Cache of the result of the root finding procedure for the latest
call to ``solve_power``, mainly for debugging purposes.
The first element is the success indicator, one if successful.
The remaining elements contain the return information of the up to
three solvers that have been tried.
Notes
-----
The function uses scipy.optimize for finding the value that satisfies
the power equation. It first uses ``brentq`` with a prior search for
bounds. If this fails to find a root, ``fsolve`` is used. If ``fsolve``
also fails, then, for ``alpha``, ``power`` and ``effect_size``,
``brentq`` with fixed bounds is used. However, there can still be cases
where this fails. | solve_power | python | statsmodels/statsmodels | statsmodels/stats/power.py | https://github.com/statsmodels/statsmodels/blob/master/statsmodels/stats/power.py | BSD-3-Clause |
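A minimal sketch of solve_power for the one-sample t-test: exactly one keyword is left as None and is solved for. Numbers are illustrative:

from statsmodels.stats.power import TTestPower

# Required sample size for effect size 0.5, 5% two-sided test, 80% power
nobs = TTestPower().solve_power(effect_size=0.5, nobs=None,
                                alpha=0.05, power=0.8,
                                alternative='two-sided')

# Smallest detectable effect size for a fixed sample size works the same way
es = TTestPower().solve_power(effect_size=None, nobs=30,
                              alpha=0.05, power=0.8)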
def power(self, effect_size, nobs1, alpha, ratio=1, df=None,
alternative='two-sided'):
'''Calculate the power of a t-test for two independent sample
Parameters
----------
effect_size : float
standardized effect size, difference between the two means divided
by the standard deviation. `effect_size` has to be positive.
nobs1 : int or float
number of observations of sample 1. The number of observations of
sample two is ratio times the size of sample 1,
i.e. ``nobs2 = nobs1 * ratio``
alpha : float in interval (0,1)
significance level, e.g. 0.05, is the probability of a type I
error, that is wrong rejections if the Null Hypothesis is true.
ratio : float
ratio of the number of observations in sample 2 relative to
sample 1. see description of nobs1
The default for ratio is 1; to solve for ratio given the other
arguments, it has to be explicitly set to None.
df : int or float
degrees of freedom. By default this is None, and the df from the
ttest with pooled variance is used, ``df = (nobs1 - 1 + nobs2 - 1)``
alternative : str, 'two-sided' (default), 'larger', 'smaller'
extra argument to choose whether the power is calculated for a
two-sided (default) or one sided test. The one-sided test can be
either 'larger', 'smaller'.
Returns
-------
power : float
Power of the test, e.g. 0.8, is one minus the probability of a
type II error. Power is the probability that the test correctly
rejects the Null Hypothesis if the Alternative Hypothesis is true.
'''
nobs2 = nobs1*ratio
#pooled variance
if df is None:
df = (nobs1 - 1 + nobs2 - 1)
nobs = 1./ (1. / nobs1 + 1. / nobs2)
#print 'calling ttest power with', (effect_size, nobs, alpha, df, alternative)
return ttest_power(effect_size, nobs, alpha, df=df, alternative=alternative) | Calculate the power of a t-test for two independent sample
Parameters
----------
effect_size : float
standardized effect size, difference between the two means divided
by the standard deviation. `effect_size` has to be positive.
nobs1 : int or float
number of observations of sample 1. The number of observations of
sample two is ratio times the size of sample 1,
i.e. ``nobs2 = nobs1 * ratio``
alpha : float in interval (0,1)
significance level, e.g. 0.05, is the probability of a type I
error, that is wrong rejections if the Null Hypothesis is true.
ratio : float
ratio of the number of observations in sample 2 relative to
sample 1. see description of nobs1
The default for ratio is 1; to solve for ratio given the other
arguments, it has to be explicitly set to None.
df : int or float
degrees of freedom. By default this is None, and the df from the
ttest with pooled variance is used, ``df = (nobs1 - 1 + nobs2 - 1)``
alternative : str, 'two-sided' (default), 'larger', 'smaller'
extra argument to choose whether the power is calculated for a
two-sided (default) or one sided test. The one-sided test can be
either 'larger', 'smaller'.
Returns
-------
power : float
Power of the test, e.g. 0.8, is one minus the probability of a
type II error. Power is the probability that the test correctly
rejects the Null Hypothesis if the Alternative Hypothesis is true. | power | python | statsmodels/statsmodels | statsmodels/stats/power.py | https://github.com/statsmodels/statsmodels/blob/master/statsmodels/stats/power.py | BSD-3-Clause |
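A short sketch for the two-independent-sample case, with illustrative inputs; nobs2 is implied by ratio:

from statsmodels.stats.power import TTestIndPower

# 40 observations in sample 1 and ratio=1.5, i.e. 60 observations in sample 2
pow_ind = TTestIndPower().power(effect_size=0.5, nobs1=40, alpha=0.05,
                                ratio=1.5, alternative='two-sided')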