Selected functions from the statsmodels project (repo: statsmodels/statsmodels, license: BSD-3-Clause), drawn from statsmodels/stats/contingency_tables.py and statsmodels/stats/multitest.py (https://github.com/statsmodels/statsmodels).
def resid_pearson(self):
"""
Returns Pearson residuals.
The Pearson residuals are calculated under a model where
the rows and columns of the table are independent.
"""
fit = self.fittedvalues
resids = (self.table - fit) / np.sqrt(fit)
return resids
def standardized_resids(self):
"""
Returns standardized residuals under independence.
"""
row, col = self.marginal_probabilities
sresids = self.resid_pearson / np.sqrt(np.outer(1 - row, 1 - col))
return sresids
def chi2_contribs(self):
"""
Returns the contributions to the chi^2 statistic for independence.
The returned table contains the contribution of each cell to the chi^2
test statistic for the null hypothesis that the rows and columns
are independent.
"""
return self.resid_pearson**2
def local_log_oddsratios(self):
"""
Returns local log odds ratios.
The local log odds ratios are the log odds ratios
calculated for contiguous 2x2 sub-tables.
"""
ta = self.table.copy()
a = ta[0:-1, 0:-1]
b = ta[0:-1, 1:]
c = ta[1:, 0:-1]
d = ta[1:, 1:]
tab = np.log(a) + np.log(d) - np.log(b) - np.log(c)
rslt = np.empty(self.table.shape, np.float64)
rslt *= np.nan
rslt[0:-1, 0:-1] = tab
if isinstance(self.table_orig, pd.DataFrame):
rslt = pd.DataFrame(rslt, index=self.table_orig.index,
columns=self.table_orig.columns)
return rslt
def local_oddsratios(self):
"""
Returns local odds ratios.
See documentation for local_log_oddsratios.
"""
return np.exp(self.local_log_oddsratios)
def cumulative_log_oddsratios(self):
"""
Returns cumulative log odds ratios.
The cumulative log odds ratios for a contingency table
with ordered rows and columns are calculated by collapsing
all cells to the left/right and above/below a given point,
to obtain a 2x2 table from which a log odds ratio can be
calculated.
"""
ta = self.table.cumsum(0).cumsum(1)
a = ta[0:-1, 0:-1]
b = ta[0:-1, -1:] - a
c = ta[-1:, 0:-1] - a
d = ta[-1, -1] - (a + b + c)
tab = np.log(a) + np.log(d) - np.log(b) - np.log(c)
rslt = np.empty(self.table.shape, np.float64)
rslt *= np.nan
rslt[0:-1, 0:-1] = tab
if isinstance(self.table_orig, pd.DataFrame):
rslt = pd.DataFrame(rslt, index=self.table_orig.index,
columns=self.table_orig.columns)
return rslt
def cumulative_oddsratios(self):
"""
Returns the cumulative odds ratios for a contingency table.
See documentation for cumulative_log_oddsratio.
"""
return np.exp(self.cumulative_log_oddsratios)
def symmetry(self, method="bowker"):
"""
Test for symmetry of a joint distribution.
This procedure tests the null hypothesis that the joint
distribution is symmetric around the main diagonal, that is
.. math::
p_{i, j} = p_{j, i} \quad \text{for all } i, j
Returns
-------
Bunch
A bunch with attributes
* statistic : float
chisquare test statistic
* pvalue : float
p-value of the test statistic based on chisquare distribution
* df : int
degrees of freedom of the chisquare distribution
Notes
-----
The implementation is based on the SAS documentation. R includes
it in `mcnemar.test` if the table is not 2 by 2. However a more
direct generalization of the McNemar test to larger tables is
provided by the homogeneity test (TableSymmetry.homogeneity).
The p-value is based on the chi-square distribution, which is a good
approximation to the true distribution only when the sample size is
not very small. For 2x2 contingency tables the exact distribution can
be obtained with `mcnemar`.
See Also
--------
mcnemar
homogeneity
"""
if method.lower() != "bowker":
raise ValueError("method for symmetry testing must be 'bowker'")
k = self.table.shape[0]
upp_idx = np.triu_indices(k, 1)
tril = self.table.T[upp_idx] # lower triangle in column order
triu = self.table[upp_idx] # upper triangle in row order
statistic = ((tril - triu)**2 / (tril + triu + 1e-20)).sum()
df = k * (k-1) / 2.
pvalue = stats.chi2.sf(statistic, df)
b = _Bunch()
b.statistic = statistic
b.pvalue = pvalue
b.df = df
return b
def homogeneity(self, method="stuart_maxwell"):
"""
Compare row and column marginal distributions.
Parameters
----------
method : str
Either 'stuart_maxwell' or 'bhapkar', leading to two different
estimates of the covariance matrix for the estimated
difference between the row margins and the column margins.
Returns
-------
Bunch
A bunch with attributes:
* statistic : float
The chi^2 test statistic
* pvalue : float
The p-value of the test statistic
* df : int
The degrees of freedom of the reference distribution
Notes
-----
For a 2x2 table this is equivalent to McNemar's test. More
generally the procedure tests the null hypothesis that the
marginal distribution of the row factor is equal to the
marginal distribution of the column factor. For this to be
meaningful, the two factors must have the same sample space
(i.e. the same categories).
"""
if self.table.shape[0] < 1:
raise ValueError('table is empty')
elif self.table.shape[0] == 1:
b = _Bunch()
b.statistic = 0
b.pvalue = 1
b.df = 0
return b
method = method.lower()
if method not in ["bhapkar", "stuart_maxwell"]:
raise ValueError("method '%s' for homogeneity not known" % method)
n_obs = self.table.sum()
pr = self.table.astype(np.float64) / n_obs
# Compute margins, eliminate last row/column so there is no
# degeneracy
row = pr.sum(1)[0:-1]
col = pr.sum(0)[0:-1]
pr = pr[0:-1, 0:-1]
# The estimated difference between row and column margins.
d = col - row
# The degrees of freedom of the chi^2 reference distribution.
df = pr.shape[0]
if method == "bhapkar":
vmat = -(pr + pr.T) - np.outer(d, d)
dv = col + row - 2*np.diag(pr) - d**2
np.fill_diagonal(vmat, dv)
elif method == "stuart_maxwell":
vmat = -(pr + pr.T)
dv = row + col - 2*np.diag(pr)
np.fill_diagonal(vmat, dv)
try:
statistic = n_obs * np.dot(d, np.linalg.solve(vmat, d))
except np.linalg.LinAlgError:
warnings.warn("Unable to invert covariance matrix",
sm_exceptions.SingularMatrixWarning)
b = _Bunch()
b.statistic = np.nan
b.pvalue = np.nan
b.df = df
return b
pvalue = 1 - stats.chi2.cdf(statistic, df)
b = _Bunch()
b.statistic = statistic
b.pvalue = pvalue
b.df = df
return b
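``symmetry`` and ``homogeneity`` both require a square table with the same categories on rows and columns. A usage sketch, assuming these methods live on ``SquareTable`` (an inference from the file path, not stated in the source); counts are illustrative:

import numpy as np
from statsmodels.stats.contingency_tables import SquareTable

# E.g. the same subjects rated twice on a three-level scale.
counts = np.asarray([[30, 5, 2],
                     [8, 25, 6],
                     [3, 9, 20]])
st = SquareTable(counts)
res = st.symmetry()                       # Bowker's test, H0: p_ij = p_ji
print(res.statistic, res.pvalue, res.df)
res = st.homogeneity()                    # Stuart-Maxwell, H0: equal margins
print(res.statistic, res.pvalue, res.df)
print(st.homogeneity(method="bhapkar").pvalue)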
def summary(self, alpha=0.05, float_format="%.3f"):
"""
Produce a summary of the analysis.
Parameters
----------
alpha : float
`1 - alpha` is the nominal coverage probability of the interval.
float_format : str
Used to format numeric values in the table.
"""
fmt = float_format
headers = ["Statistic", "P-value", "DF"]
stubs = ["Symmetry", "Homogeneity"]
sy = self.symmetry()
hm = self.homogeneity()
data = [[fmt % sy.statistic, fmt % sy.pvalue, '%d' % sy.df],
[fmt % hm.statistic, fmt % hm.pvalue, '%d' % hm.df]]
tab = iolib.SimpleTable(data, headers, stubs, data_aligns="r",
table_dec_above='')
return tab
def from_data(cls, data, shift_zeros=True):
"""
Construct a Table object from data.
Parameters
----------
data : array_like
The raw data, the first column defines the rows and the
second column defines the columns.
shift_zeros : bool
If True, and if there are any zeros in the contingency
table, add 0.5 to all cells of the table.
"""
if isinstance(data, pd.DataFrame):
table = pd.crosstab(data.iloc[:, 0], data.iloc[:, 1])
else:
table = pd.crosstab(data[:, 0], data[:, 1])
return cls(table, shift_zeros)
def log_oddsratio(self):
"""
Returns the log odds ratio for a 2x2 table.
"""
f = self.table.flatten()
return np.dot(np.log(f), np.r_[1, -1, -1, 1])
def oddsratio(self):
"""
Returns the odds ratio for a 2x2 table.
"""
return (self.table[0, 0] * self.table[1, 1] /
(self.table[0, 1] * self.table[1, 0]))
def log_oddsratio_se(self):
"""
Returns the standard error for the log odds ratio.
"""
return np.sqrt(np.sum(1 / self.table))
def oddsratio_pvalue(self, null=1):
"""
P-value for a hypothesis test about the odds ratio.
Parameters
----------
null : float
The null value of the odds ratio.
"""
return self.log_oddsratio_pvalue(np.log(null))
def log_oddsratio_pvalue(self, null=0):
"""
P-value for a hypothesis test about the log odds ratio.
Parameters
----------
null : float
The null value of the log odds ratio.
"""
zscore = (self.log_oddsratio - null) / self.log_oddsratio_se
pvalue = 2 * stats.norm.cdf(-np.abs(zscore))
return pvalue
def log_oddsratio_confint(self, alpha=0.05, method="normal"):
"""
A confidence level for the log odds ratio.
Parameters
----------
alpha : float
`1 - alpha` is the nominal coverage probability of the
confidence interval.
method : str
The method for producing the confidence interval. Currently
must be 'normal' which uses the normal approximation.
"""
f = -stats.norm.ppf(alpha / 2)
lor = self.log_oddsratio
se = self.log_oddsratio_se
lcb = lor - f * se
ucb = lor + f * se
return lcb, ucb
def oddsratio_confint(self, alpha=0.05, method="normal"):
"""
A confidence interval for the odds ratio.
Parameters
----------
alpha : float
`1 - alpha` is the nominal coverage probability of the
confidence interval.
method : str
The method for producing the confidence interval. Currently
must be 'normal' which uses the normal approximation.
"""
lcb, ucb = self.log_oddsratio_confint(alpha, method=method)
return np.exp(lcb), np.exp(ucb)
def riskratio(self):
"""
Returns the risk ratio for a 2x2 table.
The risk ratio is calculated with respect to the rows.
"""
p = self.table[:, 0] / self.table.sum(1)
return p[0] / p[1]
def log_riskratio(self):
"""
Returns the log of the risk ratio.
"""
return np.log(self.riskratio)
def log_riskratio_se(self):
"""
Returns the standard error of the log of the risk ratio.
"""
n = self.table.sum(1)
p = self.table[:, 0] / n
va = np.sum((1 - p) / (n*p))
return np.sqrt(va)
def riskratio_pvalue(self, null=1):
"""
p-value for a hypothesis test about the risk ratio.
Parameters
----------
null : float
The null value of the risk ratio.
"""
return self.log_riskratio_pvalue(np.log(null))
def log_riskratio_pvalue(self, null=0):
"""
p-value for a hypothesis test about the log risk ratio.
Parameters
----------
null : float
The null value of the log risk ratio.
"""
zscore = (self.log_riskratio - null) / self.log_riskratio_se
pvalue = 2 * stats.norm.cdf(-np.abs(zscore))
return pvalue
def log_riskratio_confint(self, alpha=0.05, method="normal"):
"""
A confidence interval for the log risk ratio.
Parameters
----------
alpha : float
`1 - alpha` is the nominal coverage probability of the
confidence interval.
method : str
The method for producing the confidence interval. Currently
must be 'normal' which uses the normal approximation.
"""
f = -stats.norm.ppf(alpha / 2)
lrr = self.log_riskratio
se = self.log_riskratio_se
lcb = lrr - f * se
ucb = lrr + f * se
return lcb, ucb
def riskratio_confint(self, alpha=0.05, method="normal"):
"""
A confidence interval for the risk ratio.
Parameters
----------
alpha : float
`1 - alpha` is the nominal coverage probability of the
confidence interval.
method : str
The method for producing the confidence interval. Currently
must be 'normal' which uses the normal approximation.
"""
lcb, ucb = self.log_riskratio_confint(alpha, method=method)
return np.exp(lcb), np.exp(ucb)
def summary(self, alpha=0.05, float_format="%.3f", method="normal"):
"""
Summarizes results for a 2x2 table analysis.
Parameters
----------
alpha : float
`1 - alpha` is the nominal coverage probability of the confidence
intervals.
float_format : str
Used to format the numeric values in the table.
method : str
The method for producing the confidence interval. Currently
must be 'normal' which uses the normal approximation.
"""
def fmt(x):
if isinstance(x, str):
return x
return float_format % x
headers = ["Estimate", "SE", "LCB", "UCB", "p-value"]
stubs = ["Odds ratio", "Log odds ratio", "Risk ratio",
"Log risk ratio"]
lcb1, ucb1 = self.oddsratio_confint(alpha, method)
lcb2, ucb2 = self.log_oddsratio_confint(alpha, method)
lcb3, ucb3 = self.riskratio_confint(alpha, method)
lcb4, ucb4 = self.log_riskratio_confint(alpha, method)
data = [[fmt(x) for x in [self.oddsratio, "", lcb1, ucb1,
self.oddsratio_pvalue()]],
[fmt(x) for x in [self.log_oddsratio, self.log_oddsratio_se,
lcb2, ucb2, self.oddsratio_pvalue()]],
[fmt(x) for x in [self.riskratio, "", lcb3, ucb3,
self.riskratio_pvalue()]],
[fmt(x) for x in [self.log_riskratio, self.log_riskratio_se,
lcb4, ucb4, self.riskratio_pvalue()]]]
tab = iolib.SimpleTable(data, headers, stubs, data_aligns="r",
table_dec_above='')
return tab
def from_data(cls, var1, var2, strata, data):
"""
Construct a StratifiedTable object from data.
Parameters
----------
var1 : int or string
The column index or name of `data` specifying the variable
defining the rows of the contingency table. The variable
must have only two distinct values.
var2 : int or string
The column index or name of `data` specifying the variable
defining the columns of the contingency table. The variable
must have only two distinct values.
strata : int or string
The column index or name of `data` specifying the variable
defining the strata.
data : array_like
The raw data. Within each stratum, a 2x2 cross-table is
constructed from the `var1` and `var2` columns.
Returns
-------
StratifiedTable
"""
if not isinstance(data, pd.DataFrame):
data1 = pd.DataFrame(index=np.arange(data.shape[0]),
columns=[var1, var2, strata])
data1[data1.columns[var1]] = data[:, var1]
data1[data1.columns[var2]] = data[:, var2]
data1[data1.columns[strata]] = data[:, strata]
else:
data1 = data[[var1, var2, strata]]
gb = data1.groupby(strata).groups
tables = []
for g in gb:
ii = gb[g]
tab = pd.crosstab(data1.loc[ii, var1], data1.loc[ii, var2])
if (tab.shape != np.r_[2, 2]).any():
msg = "Invalid table dimensions"
raise ValueError(msg)
tables.append(np.asarray(tab))
return cls(tables)
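``StratifiedTable`` can also be built directly from a list of 2x2 tables, one per stratum, which avoids needing the raw data. A sketch with made-up counts (class name inferred from the file path):

import numpy as np
from statsmodels.stats.contingency_tables import StratifiedTable

tables = [np.asarray([[20, 14], [10, 24]]),   # stratum 1
          np.asarray([[15, 12], [9, 20]])]    # stratum 2
st = StratifiedTable(tables)
print(st.oddsratio_pooled)                    # Mantel-Haenszel common OR
print(st.oddsratio_pooled_confint(alpha=0.05))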
def test_null_odds(self, correction=False):
"""
Test that all tables have odds ratio equal to 1.
This is the 'Mantel-Haenszel' test.
Parameters
----------
correction : bool
If True, use the continuity correction when calculating the
test statistic.
Returns
-------
Bunch
A bunch containing the chi^2 test statistic and p-value.
"""
statistic = np.sum(self.table[0, 0, :] -
self._apb * self._apc / self._n)
statistic = np.abs(statistic)
if correction:
statistic -= 0.5
statistic = statistic**2
denom = self._apb * self._apc * self._bpd * self._cpd
denom /= (self._n**2 * (self._n - 1))
denom = np.sum(denom)
statistic /= denom
# df is always 1
pvalue = 1 - stats.chi2.cdf(statistic, 1)
b = _Bunch()
b.statistic = statistic
b.pvalue = pvalue
return b
def oddsratio_pooled(self):
"""
The pooled odds ratio.
The value is an estimate of a common odds ratio across all of the
stratified tables.
"""
odds_ratio = np.sum(self._ad / self._n) / np.sum(self._bc / self._n)
return odds_ratio
def logodds_pooled(self):
"""
Returns the logarithm of the pooled odds ratio.
See oddsratio_pooled for more information.
"""
return np.log(self.oddsratio_pooled)
def riskratio_pooled(self):
"""
Estimate of the pooled risk ratio.
"""
acd = self.table[0, 0, :] * self._cpd
cab = self.table[1, 0, :] * self._apb
rr = np.sum(acd / self._n) / np.sum(cab / self._n)
return rr
def logodds_pooled_se(self):
"""
Estimated standard error of the pooled log odds ratio
References
----------
J. Robins, N. Breslow, S. Greenland. "Estimators of the
Mantel-Haenszel Variance Consistent in Both Sparse Data and
Large-Strata Limiting Models." Biometrics 42, no. 2 (1986): 311-23.
"""
adns = np.sum(self._ad / self._n)
bcns = np.sum(self._bc / self._n)
lor_va = np.sum(self._apd * self._ad / self._n**2) / adns**2
mid = self._apd * self._bc / self._n**2
mid += (1 - self._apd / self._n) * self._ad / self._n
mid = np.sum(mid)
mid /= (adns * bcns)
lor_va += mid
lor_va += np.sum((1 - self._apd / self._n) *
self._bc / self._n) / bcns**2
lor_va /= 2
lor_se = np.sqrt(lor_va)
return lor_se
def logodds_pooled_confint(self, alpha=0.05, method="normal"):
"""
A confidence interval for the pooled log odds ratio.
Parameters
----------
alpha : float
`1 - alpha` is the nominal coverage probability of the
interval.
method : str
The method for producing the confidence interval. Currently
must be 'normal' which uses the normal approximation.
Returns
-------
lcb : float
The lower confidence limit.
ucb : float
The upper confidence limit.
"""
lor = np.log(self.oddsratio_pooled)
lor_se = self.logodds_pooled_se
f = -stats.norm.ppf(alpha / 2)
lcb = lor - f * lor_se
ucb = lor + f * lor_se
return lcb, ucb
def oddsratio_pooled_confint(self, alpha=0.05, method="normal"):
"""
A confidence interval for the pooled odds ratio.
Parameters
----------
alpha : float
`1 - alpha` is the nominal coverage probability of the
interval.
method : str
The method for producing the confidence interval. Currently
must be 'normal' which uses the normal approximation.
Returns
-------
lcb : float
The lower confidence limit.
ucb : float
The upper confidence limit.
"""
lcb, ucb = self.logodds_pooled_confint(alpha, method=method)
lcb = np.exp(lcb)
ucb = np.exp(ucb)
return lcb, ucb
def test_equal_odds(self, adjust=False):
"""
Test that all odds ratios are identical.
This is the 'Breslow-Day' testing procedure.
Parameters
----------
adjust : bool
Use the 'Tarone' adjustment to achieve the chi^2
asymptotic distribution.
Returns
-------
A bunch containing the following attributes:
statistic : float
The chi^2 test statistic.
p-value : float
The p-value for the test.
"""
table = self.table
r = self.oddsratio_pooled
a = 1 - r
b = r * (self._apb + self._apc) + self._dma
c = -r * self._apb * self._apc
# Expected value of first cell
dr = np.sqrt(b**2 - 4*a*c)
e11 = (-b + dr) / (2*a)
# Variance of the first cell
v11 = (1 / e11 + 1 / (self._apc - e11) + 1 / (self._apb - e11) +
1 / (self._dma + e11))
v11 = 1 / v11
statistic = np.sum((table[0, 0, :] - e11)**2 / v11)
if adjust:
adj = table[0, 0, :].sum() - e11.sum()
adj = adj**2
adj /= np.sum(v11)
statistic -= adj
pvalue = 1 - stats.chi2.cdf(statistic, table.shape[2] - 1)
b = _Bunch()
b.statistic = statistic
b.pvalue = pvalue
return b
def summary(self, alpha=0.05, float_format="%.3f", method="normal"):
"""
A summary of all the main results.
Parameters
----------
alpha : float
`1 - alpha` is the nominal coverage probability of the
confidence intervals.
float_format : str
Used for formatting numeric values in the summary.
method : str
The method for producing the confidence interval. Currently
must be 'normal' which uses the normal approximation.
"""
def fmt(x):
if isinstance(x, str):
return x
return float_format % x
co_lcb, co_ucb = self.oddsratio_pooled_confint(
alpha=alpha, method=method)
clo_lcb, clo_ucb = self.logodds_pooled_confint(
alpha=alpha, method=method)
headers = ["Estimate", "LCB", "UCB"]
stubs = ["Pooled odds", "Pooled log odds", "Pooled risk ratio", ""]
data = [[fmt(x) for x in [self.oddsratio_pooled, co_lcb, co_ucb]],
[fmt(x) for x in [self.logodds_pooled, clo_lcb, clo_ucb]],
[fmt(x) for x in [self.riskratio_pooled, "", ""]],
['', '', '']]
tab1 = iolib.SimpleTable(data, headers, stubs, data_aligns="r",
table_dec_above='')
headers = ["Statistic", "P-value", ""]
stubs = ["Test of OR=1", "Test constant OR"]
rslt1 = self.test_null_odds()
rslt2 = self.test_equal_odds()
data = [[fmt(x) for x in [rslt1.statistic, rslt1.pvalue, ""]],
[fmt(x) for x in [rslt2.statistic, rslt2.pvalue, ""]]]
tab2 = iolib.SimpleTable(data, headers, stubs, data_aligns="r")
tab1.extend(tab2)
headers = ["", "", ""]
stubs = ["Number of tables", "Min n", "Max n", "Avg n", "Total n"]
ss = self.table.sum(0).sum(0)
data = [["%d" % self.table.shape[2], '', ''],
["%d" % min(ss), '', ''],
["%d" % max(ss), '', ''],
["%.0f" % np.mean(ss), '', ''],
["%d" % sum(ss), '', '', '']]
tab3 = iolib.SimpleTable(data, headers, stubs, data_aligns="r")
tab1.extend(tab3)
return tab1
def mcnemar(table, exact=True, correction=True):
"""
McNemar test of homogeneity.
Parameters
----------
table : array_like
A square contingency table.
exact : bool
If exact is true, then the binomial distribution will be used.
If exact is false, then the chisquare distribution will be
used, which is the approximation to the distribution of the
test statistic for large sample sizes.
correction : bool
If true, then a continuity correction is used for the chisquare
distribution (if exact is false.)
Returns
-------
A bunch with attributes:
statistic : float or int
The test statistic is the chisquare statistic if exact is
false. If the exact binomial distribution is used, then this
contains min(n1, n2), where n1 and n2 are the number of cases
that are zero in one sample but one in the other.
pvalue : float or array
p-value of the null hypothesis of equal marginal distributions.
Notes
-----
This is a special case of Cochran's Q test, and of the homogeneity
test. The results when the chisquare distribution is used are
identical, except for continuity correction.
"""
table = _make_df_square(table)
table = np.asarray(table, dtype=np.float64)
n1, n2 = table[0, 1], table[1, 0]
if exact:
statistic = np.minimum(n1, n2)
# binom is symmetric with p=0.5
# SciPy 1.7+ requires int arguments
int_sum = int(n1 + n2)
if int_sum != (n1 + n2):
raise ValueError(
"exact can only be used with tables containing integers."
)
pvalue = stats.binom.cdf(statistic, int_sum, 0.5) * 2
pvalue = np.minimum(pvalue, 1) # limit to 1 if n1==n2
else:
corr = int(correction) # convert bool to 0 or 1
statistic = (np.abs(n1 - n2) - corr)**2 / (1. * (n1 + n2))
df = 1
pvalue = stats.chi2.sf(statistic, df)
b = _Bunch()
b.statistic = statistic
b.pvalue = pvalue
return b
def cochrans_q(x, return_object=True):
"""
Cochran's Q test for identical binomial proportions.
Parameters
----------
x : array_like, 2d (N, k)
data with N cases and k variables
return_object : bool
Return values as bunch instead of as individual values.
Returns
-------
Returns a bunch containing the following attributes, or the
individual values according to the value of `return_object`.
statistic : float
test statistic
pvalue : float
pvalue from the chisquare distribution
Notes
-----
Cochran's Q is a k-sample extension of the McNemar test. If there
are only two groups, then Cochran's Q test and the McNemar test
are equivalent.
The procedure tests that the probability of success is the same
for every group. The alternative hypothesis is that at least two
groups have a different probability of success.
In Wikipedia terminology, rows are blocks and columns are
treatments. The number of rows, N, should be large for the
chisquare distribution to be a good approximation.
The Null hypothesis of the test is that all treatments have the
same effect.
References
----------
https://en.wikipedia.org/wiki/Cochran_test
SAS Manual for NPAR TESTS
"""
x = np.asarray(x, dtype=np.float64)
gruni = np.unique(x)
N, k = x.shape
count_row_success = (x == gruni[-1]).sum(1, float)
count_col_success = (x == gruni[-1]).sum(0, float)
count_row_ss = count_row_success.sum()
count_col_ss = count_col_success.sum()
assert count_row_ss == count_col_ss # just a calculation check
# From the SAS manual
q_stat = ((k-1) * (k * np.sum(count_col_success**2) - count_col_ss**2)
/ (k * count_row_ss - np.sum(count_row_success**2)))
# Note: the denominator looks just like k times the variance of
# the columns
# Wikipedia uses a different, but equivalent expression
# q_stat = (k-1) * (k * np.sum(count_row_success**2) - count_row_ss**2)
# / (k * count_col_ss - np.sum(count_col_success**2))
df = k - 1
pvalue = stats.chi2.sf(q_stat, df)
if return_object:
b = _Bunch()
b.statistic = q_stat
b.df = df
b.pvalue = pvalue
return b
return q_stat, pvalue, df
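A sketch of Cochran's Q on a blocks-by-treatments indicator matrix (made-up data: six blocks, three treatments, binary success):

import numpy as np
from statsmodels.stats.contingency_tables import cochrans_q

x = np.asarray([[1, 1, 0],
                [1, 0, 0],
                [1, 1, 1],
                [0, 1, 0],
                [1, 1, 0],
                [1, 0, 1]])
res = cochrans_q(x)
print(res.statistic, res.df, res.pvalue)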
def _ecdf(x):
'''no frills empirical cdf used in fdrcorrection
'''
nobs = len(x)
return np.arange(1,nobs+1)/float(nobs)
def multipletests(pvals, alpha=0.05, method='hs',
maxiter=1,
is_sorted=False,
returnsorted=False):
"""
Test results and p-value correction for multiple tests
Parameters
----------
pvals : array_like, 1-d
uncorrected p-values. Must be 1-dimensional.
alpha : float
FWER, family-wise error rate, e.g. 0.1
method : str
Method used for testing and adjustment of pvalues. Can be either the
full name or initial letters. Available methods are:
- `bonferroni` : one-step correction
- `sidak` : one-step correction
- `holm-sidak` : step down method using Sidak adjustments
- `holm` : step-down method using Bonferroni adjustments
- `simes-hochberg` : step-up method (independent)
- `hommel` : closed method based on Simes tests (non-negative)
- `fdr_bh` : Benjamini/Hochberg (non-negative)
- `fdr_by` : Benjamini/Yekutieli (negative)
- `fdr_tsbh` : two stage fdr correction (non-negative)
- `fdr_tsbky` : two stage fdr correction (non-negative)
- `fdr_gbs` : adaptive step-down fdr correction (see Notes)
maxiter : int or bool
Maximum number of iterations for two-stage fdr, `fdr_tsbh` and
`fdr_tsbky`. It is ignored by all other methods.
maxiter=1 (default) corresponds to the two stage method.
maxiter=-1 corresponds to full iterations which is maxiter=len(pvals).
maxiter=0 uses only a single stage fdr correction using a 'bh' or 'bky'
prior fraction of assumed true hypotheses.
is_sorted : bool
If False (default), the p_values will be sorted, but the corrected
pvalues are in the original order. If True, then it assumed that the
pvalues are already sorted in ascending order.
returnsorted : bool
not tested, return sorted p-values instead of original sequence
Returns
-------
reject : ndarray, boolean
true for hypothesis that can be rejected for given alpha
pvals_corrected : ndarray
p-values corrected for multiple tests
alphacSidak : float
corrected alpha for Sidak method
alphacBonf : float
corrected alpha for Bonferroni method
Notes
-----
There may be API changes for this function in the future.
Except for 'fdr_twostage', the p-value correction is independent of the
alpha specified as argument. In these cases the corrected p-values
can also be compared with a different alpha. In the case of 'fdr_twostage',
the corrected p-values are specific to the given alpha, see
``fdrcorrection_twostage``.
The 'fdr_gbs' procedure is not verified against another package, p-values
are derived from scratch and are not derived in the reference. In Monte
Carlo experiments the method worked correctly and maintained the false
discovery rate.
All procedures that are included control FWER or FDR in the independent
case, and most are robust in the positively correlated case.
`fdr_gbs`: high power, fdr control for independent case and only small
violation in positively correlated case
**Timing**:
Most of the time with large arrays is spent in `argsort`. When
we want to calculate the p-value for several methods, then it is more
efficient to presort the pvalues, and put the results back into the
original order outside of the function.
Method='hommel' is very slow for large arrays, since it requires the
evaluation of n partitions, where n is the number of p-values.
"""
import gc
pvals = np.asarray(pvals)
alphaf = alpha
if not is_sorted:
sortind = np.argsort(pvals)
pvals = np.take(pvals, sortind)
ntests = len(pvals)
alphacSidak = 1 - np.power((1. - alphaf), 1./ntests)
alphacBonf = alphaf / float(ntests)
if method.lower() in ['b', 'bonf', 'bonferroni']:
reject = pvals <= alphacBonf
pvals_corrected = pvals * float(ntests)
elif method.lower() in ['s', 'sidak']:
reject = pvals <= alphacSidak
pvals_corrected = -np.expm1(ntests * np.log1p(-pvals))
elif method.lower() in ['hs', 'holm-sidak']:
alphacSidak_all = 1 - np.power((1. - alphaf),
1./np.arange(ntests, 0, -1))
notreject = pvals > alphacSidak_all
del alphacSidak_all
nr_index = np.nonzero(notreject)[0]
if nr_index.size == 0:
# nonreject is empty, all rejected
notrejectmin = len(pvals)
else:
notrejectmin = np.min(nr_index)
notreject[notrejectmin:] = True
reject = ~notreject
del notreject
# Equivalent to 1 - np.power((1. - pvals),
# np.arange(ntests, 0, -1)),
# but avoids floating point precision issues.
pvals_corrected_raw = -np.expm1(np.arange(ntests, 0, -1) *
np.log1p(-pvals))
pvals_corrected = np.maximum.accumulate(pvals_corrected_raw)
del pvals_corrected_raw
elif method.lower() in ['h', 'holm']:
notreject = pvals > alphaf / np.arange(ntests, 0, -1)
nr_index = np.nonzero(notreject)[0]
if nr_index.size == 0:
# nonreject is empty, all rejected
notrejectmin = len(pvals)
else:
notrejectmin = np.min(nr_index)
notreject[notrejectmin:] = True
reject = ~notreject
pvals_corrected_raw = pvals * np.arange(ntests, 0, -1)
pvals_corrected = np.maximum.accumulate(pvals_corrected_raw)
del pvals_corrected_raw
gc.collect()
elif method.lower() in ['sh', 'simes-hochberg']:
alphash = alphaf / np.arange(ntests, 0, -1)
reject = pvals <= alphash
rejind = np.nonzero(reject)
if rejind[0].size > 0:
rejectmax = np.max(np.nonzero(reject))
reject[:rejectmax] = True
pvals_corrected_raw = np.arange(ntests, 0, -1) * pvals
pvals_corrected = np.minimum.accumulate(pvals_corrected_raw[::-1])[::-1]
del pvals_corrected_raw
elif method.lower() in ['ho', 'hommel']:
# we need a copy because we overwrite it in a loop
a = pvals.copy()
for m in range(ntests, 1, -1):
cim = np.min(m * pvals[-m:] / np.arange(1,m+1.))
a[-m:] = np.maximum(a[-m:], cim)
a[:-m] = np.maximum(a[:-m], np.minimum(m * pvals[:-m], cim))
pvals_corrected = a
reject = a <= alphaf
elif method.lower() in ['fdr_bh', 'fdr_i', 'fdr_p', 'fdri', 'fdrp']:
# delegate, call with sorted pvals
reject, pvals_corrected = fdrcorrection(pvals, alpha=alpha,
method='indep',
is_sorted=True)
elif method.lower() in ['fdr_by', 'fdr_n', 'fdr_c', 'fdrn', 'fdrcorr']:
# delegate, call with sorted pvals
reject, pvals_corrected = fdrcorrection(pvals, alpha=alpha,
method='n',
is_sorted=True)
elif method.lower() in ['fdr_tsbky', 'fdr_2sbky', 'fdr_twostage']:
# delegate, call with sorted pvals
reject, pvals_corrected = fdrcorrection_twostage(pvals, alpha=alpha,
method='bky',
maxiter=maxiter,
is_sorted=True)[:2]
elif method.lower() in ['fdr_tsbh', 'fdr_2sbh']:
# delegate, call with sorted pvals
reject, pvals_corrected = fdrcorrection_twostage(pvals, alpha=alpha,
method='bh',
maxiter=maxiter,
is_sorted=True)[:2]
elif method.lower() in ['fdr_gbs']:
#adaptive stepdown in Gavrilov, Benjamini, Sarkar, Annals of Statistics 2009
ii = np.arange(1, ntests + 1)
q = (ntests + 1. - ii)/ii * pvals / (1. - pvals)
pvals_corrected_raw = np.maximum.accumulate(q)  # step-up requirement
pvals_corrected = np.minimum.accumulate(pvals_corrected_raw[::-1])[::-1]
del pvals_corrected_raw
reject = pvals_corrected <= alpha
else:
raise ValueError('method not recognized')
if pvals_corrected is not None: #not necessary anymore
pvals_corrected[pvals_corrected>1] = 1
if is_sorted or returnsorted:
return reject, pvals_corrected, alphacSidak, alphacBonf
else:
pvals_corrected_ = np.empty_like(pvals_corrected)
pvals_corrected_[sortind] = pvals_corrected
del pvals_corrected
reject_ = np.empty_like(reject)
reject_[sortind] = reject
return reject_, pvals_corrected_, alphacSidak, alphacBonf
def fdrcorrection(pvals, alpha=0.05, method='indep', is_sorted=False):
'''
pvalue correction for false discovery rate.
This covers Benjamini/Hochberg for independent or positively correlated and
Benjamini/Yekutieli for general or negatively correlated tests.
Parameters
----------
pvals : array_like, 1d
Set of p-values of the individual tests.
alpha : float, optional
Family-wise error rate. Defaults to ``0.05``.
method : {'i', 'indep', 'p', 'poscorr', 'n', 'negcorr'}, optional
Which method to use for FDR correction.
``{'i', 'indep', 'p', 'poscorr'}`` all refer to ``fdr_bh``
(Benjamini/Hochberg for independent or positively
correlated tests). ``{'n', 'negcorr'}`` both refer to ``fdr_by``
(Benjamini/Yekutieli for general or negatively correlated tests).
Defaults to ``'indep'``.
is_sorted : bool, optional
If False (default), the p_values will be sorted, but the corrected
pvalues are in the original order. If True, then it assumed that the
pvalues are already sorted in ascending order.
Returns
-------
rejected : ndarray, bool
True if a hypothesis is rejected, False if not
pvalue-corrected : ndarray
pvalues adjusted for multiple hypothesis testing to limit FDR
Notes
-----
If there is prior information on the fraction of true hypotheses, then alpha
should be set to ``alpha * m/m_0`` where m is the number of tests,
given by the p-values, and m_0 is an estimate of the number of true
hypotheses. (See Benjamini, Krieger and Yekutieli.)
The two-stage method of Benjamini, Krieger and Yekutieli that estimates the
number of true hypotheses is available as ``fdrcorrection_twostage``.
Both methods exposed via this function (Benjamini/Hochberg, Benjamini/Yekutieli)
are also available in the function ``multipletests``, as ``method="fdr_bh"`` and
``method="fdr_by"``, respectively.
See also
--------
multipletests
'''
pvals = np.asarray(pvals)
assert pvals.ndim == 1, "pvals must be 1-dimensional, that is of shape (n,)"
if not is_sorted:
pvals_sortind = np.argsort(pvals)
pvals_sorted = np.take(pvals, pvals_sortind)
else:
pvals_sorted = pvals # alias
if method in ['i', 'indep', 'p', 'poscorr']:
ecdffactor = _ecdf(pvals_sorted)
elif method in ['n', 'negcorr']:
cm = np.sum(1. / np.arange(1, len(pvals_sorted) + 1))  # harmonic-sum factor for BY
ecdffactor = _ecdf(pvals_sorted) / cm
else:
raise ValueError('only indep and negcorr implemented')
reject = pvals_sorted <= ecdffactor*alpha
if reject.any():
rejectmax = max(np.nonzero(reject)[0])
reject[:rejectmax] = True
pvals_corrected_raw = pvals_sorted / ecdffactor
pvals_corrected = np.minimum.accumulate(pvals_corrected_raw[::-1])[::-1]
del pvals_corrected_raw
pvals_corrected[pvals_corrected>1] = 1
if not is_sorted:
pvals_corrected_ = np.empty_like(pvals_corrected)
pvals_corrected_[pvals_sortind] = pvals_corrected
del pvals_corrected
reject_ = np.empty_like(reject)
reject_[pvals_sortind] = reject
return reject_, pvals_corrected_
else:
return reject, pvals_corrected | pvalue correction for false discovery rate.
This covers Benjamini/Hochberg for independent or positively correlated and
Benjamini/Yekutieli for general or negatively correlated tests.
Parameters
----------
pvals : array_like, 1d
Set of p-values of the individual tests.
alpha : float, optional
Family-wise error rate. Defaults to ``0.05``.
method : {'i', 'indep', 'p', 'poscorr', 'n', 'negcorr'}, optional
Which method to use for FDR correction.
``{'i', 'indep', 'p', 'poscorr'}`` all refer to ``fdr_bh``
(Benjamini/Hochberg for independent or positively
correlated tests). ``{'n', 'negcorr'}`` both refer to ``fdr_by``
(Benjamini/Yekutieli for general or negatively correlated tests).
Defaults to ``'indep'``.
is_sorted : bool, optional
If False (default), the p-values will be sorted, but the corrected
p-values are in the original order. If True, then it is assumed that the
p-values are already sorted in ascending order.
Returns
-------
rejected : ndarray, bool
True if a hypothesis is rejected, False if not
pvalue-corrected : ndarray
pvalues adjusted for multiple hypothesis testing to limit FDR
Notes
-----
If there is prior information on the fraction of true null hypotheses, then alpha
should be set to ``alpha * m/m_0`` where m is the number of tests,
given by the p-values, and m_0 is an estimate of the number of true null
hypotheses (see Benjamini, Krieger and Yekutieli).
The two-step method of Benjamini, Krieger and Yekutieli that estimates the number
of false hypotheses will be available (soon).
Both methods exposed via this function (Benjamini/Hochberg, Benjamini/Yekutieli)
are also available in the function ``multipletests``, as ``method="fdr_bh"`` and
``method="fdr_by"``, respectively.
See also
--------
multipletests | fdrcorrection | python | statsmodels/statsmodels | statsmodels/stats/multitest.py | https://github.com/statsmodels/statsmodels/blob/master/statsmodels/stats/multitest.py | BSD-3-Clause |
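A minimal usage sketch for the function above, assuming the module-level import from statsmodels.stats.multitest; the p-values are made up for illustration:

import numpy as np
from statsmodels.stats.multitest import fdrcorrection

pvals = np.array([0.001, 0.008, 0.039, 0.041, 0.042, 0.06, 0.074, 0.205])
# Benjamini/Hochberg at FDR level 0.05
reject, pvals_corrected = fdrcorrection(pvals, alpha=0.05, method='indep')
print(reject)           # boolean rejection decisions, in the original order
print(pvals_corrected)  # BH-adjusted p-values, capped at 1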
def fdrcorrection_twostage(pvals, alpha=0.05, method='bky',
maxiter=1,
iter=None,
is_sorted=False):
'''(iterated) two stage linear step-up procedure with estimation of number of true
hypotheses
Benjamini, Krieger and Yekutieli, procedure in Definition 6
Parameters
----------
pvals : array_like
set of p-values of the individual tests.
alpha : float
error rate
method : {'bky', 'bh'}
see Notes for details
* 'bky' - implements the procedure in Definition 6 of Benjamini, Krieger
and Yekutieli 2006
* 'bh' - the two stage method of Benjamini and Hochberg
maxiter : int or bool
Maximum number of iterations.
maxiter=1 (default) corresponds to the two stage method.
maxiter=-1 corresponds to full iterations which is maxiter=len(pvals).
maxiter=0 uses only a single stage fdr correction using a 'bh' or 'bky'
prior fraction of assumed true hypotheses.
Boolean maxiter is allowed for backwards compatibility with the
deprecated ``iter`` keyword.
maxiter=False is two-stage fdr (maxiter=1)
maxiter=True is full iteration (maxiter=-1 or maxiter=len(pvals))
.. versionadded:: 0.14
Replacement for ``iter`` with additional features.
iter : bool
``iter`` is deprecated; use ``maxiter`` instead.
If iter is True, then only one iteration step is used, this is the
two-step method.
If iter is False, then iterations are stopped at convergence which
occurs in a finite number of steps (at most len(pvals) steps).
.. deprecated:: 0.14
Use ``maxiter`` instead of ``iter``.
Returns
-------
rejected : ndarray, bool
True if a hypothesis is rejected, False if not
pvalue-corrected : ndarray
pvalues adjusted for multiple hypotheses testing to limit FDR
m0 : int
ntests - rej, estimated number of true (not rejected) null hypotheses
alpha_stages : list of floats
A list of alphas that have been used at each stage
Notes
-----
The returned corrected p-values are specific to the given alpha, they
cannot be used for a different alpha.
The returned corrected p-values are from the last stage of the fdr_bh
linear step-up procedure (fdrcorrection0 with method='indep') corrected
for the estimated fraction of true hypotheses.
This means that the rejection decision can be obtained with
``pval_corrected <= alpha``, where ``alpha`` is the original significance
level.
(Note: This has changed from earlier versions (<0.5.0) of statsmodels.)
BKY described several other multi-stage methods, which would be easy to implement.
However, in their simulation the simple two-stage method (with iter=False) was the
most robust to the presence of positive correlation.
TODO: What should be returned?
'''
pvals = np.asarray(pvals)
if iter is not None:
import warnings
msg = "iter keyword is deprecated, use maxiter keyword instead."
warnings.warn(msg, FutureWarning)
if iter is False:
maxiter = 1
elif iter is True or maxiter in [-1, None]:
maxiter = len(pvals)
# otherwise we use maxiter
if not is_sorted:
pvals_sortind = np.argsort(pvals)
pvals = np.take(pvals, pvals_sortind)
ntests = len(pvals)
if method == 'bky':
fact = (1.+alpha)
alpha_prime = alpha / fact
elif method == 'bh':
fact = 1.
alpha_prime = alpha
else:
raise ValueError("only 'bky' and 'bh' are available as method")
alpha_stages = [alpha_prime]
rej, pvalscorr = fdrcorrection(pvals, alpha=alpha_prime, method='indep',
is_sorted=True)
r1 = rej.sum()
if (r1 == 0) or (r1 == ntests):
# return rej, pvalscorr * fact, ntests - r1, alpha_stages
reject = rej
pvalscorr *= fact
ri = r1
else:
ri_old = ri = r1
ntests0 = ntests # needed if maxiter=0
# while True:
for it in range(maxiter):
ntests0 = 1.0 * ntests - ri_old
alpha_star = alpha_prime * ntests / ntests0
alpha_stages.append(alpha_star)
#print ntests0, alpha_star
rej, pvalscorr = fdrcorrection(pvals, alpha=alpha_star, method='indep',
is_sorted=True)
ri = rej.sum()
if (it >= maxiter - 1) or ri == ri_old:
break
elif ri < ri_old:
# prevent cycles and endless loops
raise RuntimeError('rejection count decreased between stages; this should not happen')
ri_old = ri
# make adjustment to pvalscorr to reflect estimated number of Non-Null cases
# decision is then pvalscorr < alpha (or <=)
pvalscorr *= ntests0 * 1.0 / ntests
if method == 'bky':
pvalscorr *= (1. + alpha)
pvalscorr[pvalscorr>1] = 1
if not is_sorted:
pvalscorr_ = np.empty_like(pvalscorr)
pvalscorr_[pvals_sortind] = pvalscorr
del pvalscorr
reject = np.empty_like(rej)
reject[pvals_sortind] = rej
return reject, pvalscorr_, ntests - ri, alpha_stages
else:
return rej, pvalscorr, ntests - ri, alpha_stages | (iterated) two stage linear step-up procedure with estimation of number of true
hypotheses
Benjamini, Krieger and Yekutieli, procedure in Definition 6
Parameters
----------
pvals : array_like
set of p-values of the individual tests.
alpha : float
error rate
method : {'bky', 'bh'}
see Notes for details
* 'bky' - implements the procedure in Definition 6 of Benjamini, Krieger
and Yekutieli 2006
* 'bh' - the two stage method of Benjamini and Hochberg
maxiter : int or bool
Maximum number of iterations.
maxiter=1 (default) corresponds to the two stage method.
maxiter=-1 corresponds to full iterations which is maxiter=len(pvals).
maxiter=0 uses only a single stage fdr correction using a 'bh' or 'bky'
prior fraction of assumed true hypotheses.
Boolean maxiter is allowed for backwards compatibility with the
deprecated ``iter`` keyword.
maxiter=False is two-stage fdr (maxiter=1)
maxiter=True is full iteration (maxiter=-1 or maxiter=len(pvals))
.. versionadded:: 0.14
Replacement for ``iter`` with additional features.
iter : bool
``iter`` is deprecated; use ``maxiter`` instead.
If iter is True, then only one iteration step is used, this is the
two-step method.
If iter is False, then iterations are stopped at convergence which
occurs in a finite number of steps (at most len(pvals) steps).
.. deprecated:: 0.14
Use ``maxiter`` instead of ``iter``.
Returns
-------
rejected : ndarray, bool
True if a hypothesis is rejected, False if not
pvalue-corrected : ndarray
pvalues adjusted for multiple hypotheses testing to limit FDR
m0 : int
ntests - rej, estimated number of true (not rejected) null hypotheses
alpha_stages : list of floats
A list of alphas that have been used at each stage
Notes
-----
The returned corrected p-values are specific to the given alpha, they
cannot be used for a different alpha.
The returned corrected p-values are from the last stage of the fdr_bh
linear step-up procedure (fdrcorrection0 with method='indep') corrected
for the estimated fraction of true hypotheses.
This means that the rejection decision can be obtained with
``pval_corrected <= alpha``, where ``alpha`` is the original significance
level.
(Note: This has changed from earlier versions (<0.5.0) of statsmodels.)
BKY described several other multi-stage methods, which would be easy to implement.
However, in their simulation the simple two-stage method (with iter=False) was the
most robust to the presence of positive correlation.
TODO: What should be returned? | fdrcorrection_twostage | python | statsmodels/statsmodels | statsmodels/stats/multitest.py | https://github.com/statsmodels/statsmodels/blob/master/statsmodels/stats/multitest.py | BSD-3-Clause |
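A usage sketch for the two-stage procedure above; the four return values follow the signature documented in the docstring, and the maxiter keyword assumes statsmodels >= 0.14 per the versionadded note:

import numpy as np
from statsmodels.stats.multitest import fdrcorrection_twostage

pvals = np.array([0.001, 0.008, 0.039, 0.041, 0.042, 0.06, 0.074, 0.205])
# 'bky' method; maxiter=1 is the plain two-stage variant
reject, pvals_corrected, m0, alpha_stages = fdrcorrection_twostage(
    pvals, alpha=0.05, method='bky', maxiter=1)
print(m0)            # estimated number of true (not rejected) hypotheses
print(alpha_stages)  # alpha used at each stage, starting at alpha/(1+alpha)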
def local_fdr(zscores, null_proportion=1.0, null_pdf=None, deg=7,
nbins=30, alpha=0):
"""
Calculate local FDR values for a list of Z-scores.
Parameters
----------
zscores : array_like
A vector of Z-scores
null_proportion : float
The assumed proportion of true null hypotheses
null_pdf : function mapping reals to positive reals
The density of null Z-scores; if None, use standard normal
deg : int
The maximum exponent in the polynomial expansion of the
density of non-null Z-scores
nbins : int
The number of bins for estimating the marginal density
of Z-scores.
alpha : float
Use Poisson ridge regression with parameter alpha to estimate
the density of non-null Z-scores.
Returns
-------
fdr : array_like
A vector of FDR values
References
----------
B Efron (2008). Microarrays, Empirical Bayes, and the Two-Groups
Model. Statistical Science 23:1, 1-22.
Examples
--------
Basic use (the null Z-scores are taken to be standard normal):
>>> from statsmodels.stats.multitest import local_fdr
>>> import numpy as np
>>> zscores = np.random.randn(30)
>>> fdr = local_fdr(zscores)
Use a Gaussian null distribution estimated from the data:
>>> null = EmpiricalNull(zscores)
>>> fdr = local_fdr(zscores, null_pdf=null.pdf)
"""
from statsmodels.genmod.generalized_linear_model import GLM
from statsmodels.genmod.generalized_linear_model import families
from statsmodels.regression.linear_model import OLS
# Bins for Poisson modeling of the marginal Z-score density
minz = min(zscores)
maxz = max(zscores)
bins = np.linspace(minz, maxz, nbins)
# Bin counts
zhist = np.histogram(zscores, bins)[0]
# Bin centers
zbins = (bins[:-1] + bins[1:]) / 2
# The design matrix at bin centers
dmat = np.vander(zbins, deg + 1)
# Rescale the design matrix
sd = dmat.std(0)
ii = sd > 1e-8
dmat[:, ii] /= sd[ii]
start = OLS(np.log(1 + zhist), dmat).fit().params
# Poisson regression
if alpha > 0:
md = GLM(zhist, dmat, family=families.Poisson()).fit_regularized(L1_wt=0, alpha=alpha, start_params=start)
else:
md = GLM(zhist, dmat, family=families.Poisson()).fit(start_params=start)
# The design matrix for all Z-scores
dmat_full = np.vander(zscores, deg + 1)
dmat_full[:, ii] /= sd[ii]
# The height of the estimated marginal density of Z-scores,
# evaluated at every observed Z-score.
fz = md.predict(dmat_full) / (len(zscores) * (bins[1] - bins[0]))
# The null density.
if null_pdf is None:
f0 = np.exp(-0.5 * zscores**2) / np.sqrt(2 * np.pi)
else:
f0 = null_pdf(zscores)
# The local FDR values
fdr = null_proportion * f0 / fz
fdr = np.clip(fdr, 0, 1)
return fdr | Calculate local FDR values for a list of Z-scores.
Parameters
----------
zscores : array_like
A vector of Z-scores
null_proportion : float
The assumed proportion of true null hypotheses
null_pdf : function mapping reals to positive reals
The density of null Z-scores; if None, use standard normal
deg : int
The maximum exponent in the polynomial expansion of the
density of non-null Z-scores
nbins : int
The number of bins for estimating the marginal density
of Z-scores.
alpha : float
Use Poisson ridge regression with parameter alpha to estimate
the density of non-null Z-scores.
Returns
-------
fdr : array_like
A vector of FDR values
References
----------
B Efron (2008). Microarrays, Empirical Bayes, and the Two-Groups
Model. Statistical Science 23:1, 1-22.
Examples
--------
Basic use (the null Z-scores are taken to be standard normal):
>>> from statsmodels.stats.multitest import local_fdr
>>> import numpy as np
>>> zscores = np.random.randn(30)
>>> fdr = local_fdr(zscores)
Use a Gaussian null distribution estimated from the data:
>>> null = EmpiricalNull(zscores)
>>> fdr = local_fdr(zscores, null_pdf=null.pdf) | local_fdr | python | statsmodels/statsmodels | statsmodels/stats/multitest.py | https://github.com/statsmodels/statsmodels/blob/master/statsmodels/stats/multitest.py | BSD-3-Clause |
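To complement the docstring examples, a sketch of the ridge-regularized path: per the code above, alpha > 0 switches the Poisson fit to fit_regularized. The mixture below is synthetic:

import numpy as np
from statsmodels.stats.multitest import local_fdr

np.random.seed(0)
zscores = np.concatenate([np.random.randn(900),          # null z-scores
                          np.random.randn(100) + 3.0])   # non-null z-scores
fdr = local_fdr(zscores, nbins=30, deg=7, alpha=0.01)    # small ridge penalty
print((fdr < 0.2).sum())  # count of z-scores with local FDR below 0.2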
def fun(params):
"""
Negative log-likelihood of z-scores.
The function takes a single vector argument that packs three parameters:
mean : location parameter
logscale : log of the scale parameter
logitprop : logit of the proportion of true nulls
The implementation follows section 4 from Efron 2008.
"""
d, s, p = xform(params)
# Mass within the central region
central_mass = (norm.cdf((null_ub - d) / s) -
norm.cdf((null_lb - d) / s))
# Probability that a Z-score is null and is in the central region
cp = p * central_mass
# Binomial term
rval = n_zs0 * np.log(cp) + (n_zs - n_zs0) * np.log(1 - cp)
# Truncated Gaussian term for null Z-scores
zv = (zscores0 - d) / s
rval += np.sum(-zv**2 / 2) - n_zs0 * np.log(s)
rval -= n_zs0 * np.log(central_mass)
return -rval | Negative log-likelihood of z-scores.
The function takes a single vector argument that packs three parameters:
mean : location parameter
logscale : log of the scale parameter
logitprop : logit of the proportion of true nulls
The implementation follows section 4 from Efron 2008. | __init__.fun | python | statsmodels/statsmodels | statsmodels/stats/multitest.py | https://github.com/statsmodels/statsmodels/blob/master/statsmodels/stats/multitest.py | BSD-3-Clause |
def __init__(self, zscores, null_lb=-1, null_ub=1, estimate_mean=True,
estimate_scale=True, estimate_null_proportion=False):
# Extract the null z-scores
ii = np.flatnonzero((zscores >= null_lb) & (zscores <= null_ub))
if len(ii) == 0:
raise RuntimeError("No Z-scores fall between null_lb and null_ub")
zscores0 = zscores[ii]
# Number of Z-scores, and null Z-scores
n_zs, n_zs0 = len(zscores), len(zscores0)
# Unpack and transform the parameters to the natural scale, hold
# parameters fixed as specified.
def xform(params):
mean = 0.
sd = 1.
prob = 1.
ii = 0
if estimate_mean:
mean = params[ii]
ii += 1
if estimate_scale:
sd = np.exp(params[ii])
ii += 1
if estimate_null_proportion:
prob = 1 / (1 + np.exp(-params[ii]))
return mean, sd, prob
from scipy.stats.distributions import norm
def fun(params):
"""
Negative log-likelihood of z-scores.
The function takes a single vector argument that packs three parameters:
mean : location parameter
logscale : log of the scale parameter
logitprop : logit of the proportion of true nulls
The implementation follows section 4 from Efron 2008.
"""
d, s, p = xform(params)
# Mass within the central region
central_mass = (norm.cdf((null_ub - d) / s) -
norm.cdf((null_lb - d) / s))
# Probability that a Z-score is null and is in the central region
cp = p * central_mass
# Binomial term
rval = n_zs0 * np.log(cp) + (n_zs - n_zs0) * np.log(1 - cp)
# Truncated Gaussian term for null Z-scores
zv = (zscores0 - d) / s
rval += np.sum(-zv**2 / 2) - n_zs0 * np.log(s)
rval -= n_zs0 * np.log(central_mass)
return -rval
# Estimate the parameters
from scipy.optimize import minimize
# starting values are mean = 0, scale = 1, p0 ~ 1
mz = minimize(fun, np.r_[0., 0, 3], method="Nelder-Mead")
mean, sd, prob = xform(mz['x'])
self.mean = mean
self.sd = sd
self.null_proportion = prob | Negative log-likelihood of z-scores.
The function takes a single vector argument that packs three parameters:
mean : location parameter
logscale : log of the scale parameter
logitprop : logit of the proportion of true nulls
The implementation follows section 4 from Efron 2008. | __init__ | python | statsmodels/statsmodels | statsmodels/stats/multitest.py | https://github.com/statsmodels/statsmodels/blob/master/statsmodels/stats/multitest.py | BSD-3-Clause |
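A sketch of fitting the empirical null class above. The import below is an assumption: in current statsmodels this class is exported as NullDistribution in statsmodels.stats.multitest, while the local_fdr docstring example calls it EmpiricalNull.

import numpy as np
# NullDistribution is assumed to be the public name of the class above
from statsmodels.stats.multitest import NullDistribution, local_fdr

np.random.seed(0)
zscores = np.concatenate([1.2 * np.random.randn(900) + 0.1,  # shifted, scaled nulls
                          np.random.randn(100) + 3.0])
null = NullDistribution(zscores, null_lb=-1, null_ub=1)
print(null.mean, null.sd)  # fitted location and scale of the empirical null
fdr = local_fdr(zscores, null_pdf=null.pdf)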
def pdf(self, zscores):
"""
Evaluates the fitted empirical null Z-score density.
Parameters
----------
zscores : scalar or array_like
The point or points at which the density is to be
evaluated.
Returns
-------
The empirical null Z-score density evaluated at the given
points.
"""
zval = (zscores - self.mean) / self.sd
return np.exp(-0.5*zval**2 - np.log(self.sd) - 0.5*np.log(2*np.pi)) | Evaluates the fitted empirical null Z-score density.
Parameters
----------
zscores : scalar or array_like
The point or points at which the density is to be
evaluated.
Returns
-------
The empirical null Z-score density evaluated at the given
points. | pdf | python | statsmodels/statsmodels | statsmodels/stats/multitest.py | https://github.com/statsmodels/statsmodels/blob/master/statsmodels/stats/multitest.py | BSD-3-Clause |
def pairwise_tukeyhsd(endog, groups, alpha=0.05, use_var='equal'):
"""
Calculate all pairwise comparisons with TukeyHSD or Games-Howell.
Parameters
----------
endog : ndarray, float, 1d
response variable
groups : ndarray, 1d
array with groups, can be strings or integers
alpha : float
significance level for the test
use_var : {"unequal", "equal"}
If ``use_var`` is "equal", then the Tukey-hsd pvalues are returned.
Tukey-hsd assumes that (within) variances are the same across groups.
If ``use_var`` is "unequal", then the Games-Howell pvalues are
returned. This uses Welch's t-test for unequal variances with
Satterthwaite's corrected degrees of freedom for each pairwise
comparison.
Returns
-------
results : TukeyHSDResults instance
A results class containing relevant data and some post-hoc
calculations, including adjusted p-value
Notes
-----
This is just a wrapper around tukeyhsd method of MultiComparison.
Tukey-hsd is not robust to heteroscedasticity, i.e. when variances differ across
groups, especially if group sizes also vary. In those cases, the actual
size (rejection rate under the Null hypothesis) might be far from the
nominal size of the test.
The Games-Howell method uses pairwise t-tests that are robust to differences
in variances and approximately maintains size unless samples are very
small.
.. versionadded:: 0.15
The ``use_var`` keyword and option for the Games-Howell test.
See Also
--------
MultiComparison
tukeyhsd
statsmodels.sandbox.stats.multicomp.TukeyHSDResults
"""
return MultiComparison(endog, groups).tukeyhsd(alpha=alpha,
use_var=use_var) | Calculate all pairwise comparisons with TukeyHSD or Games-Howell.
Parameters
----------
endog : ndarray, float, 1d
response variable
groups : ndarray, 1d
array with groups, can be strings or integers
alpha : float
significance level for the test
use_var : {"unequal", "equal"}
If ``use_var`` is "equal", then the Tukey-hsd pvalues are returned.
Tukey-hsd assumes that (within) variances are the same across groups.
If ``use_var`` is "unequal", then the Games-Howell pvalues are
returned. This uses Welch's t-test for unequal variances with
Satterthwaite's corrected degrees of freedom for each pairwise
comparison.
Returns
-------
results : TukeyHSDResults instance
A results class containing relevant data and some post-hoc
calculations, including adjusted p-value
Notes
-----
This is just a wrapper around tukeyhsd method of MultiComparison.
Tukey-hsd is not robust to heteroscedasticity, i.e. when variances differ across
groups, especially if group sizes also vary. In those cases, the actual
size (rejection rate under the Null hypothesis) might be far from the
nominal size of the test.
The Games-Howell method uses pairwise t-tests that are robust to differences
in variances and approximately maintains size unless samples are very
small.
.. versionadded:: 0.15
The ``use_var`` keyword and option for the Games-Howell test.
See Also
--------
MultiComparison
tukeyhsd
statsmodels.sandbox.stats.multicomp.TukeyHSDResults | pairwise_tukeyhsd | python | statsmodels/statsmodels | statsmodels/stats/multicomp.py | https://github.com/statsmodels/statsmodels/blob/master/statsmodels/stats/multicomp.py | BSD-3-Clause |
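A usage sketch with a tiny made-up dataset; the Games-Howell line assumes a statsmodels version that includes the 0.15 addition noted above:

import numpy as np
from statsmodels.stats.multicomp import pairwise_tukeyhsd

endog = np.array([24., 26, 28, 30, 29, 31, 34, 33, 35])
groups = np.array(['a', 'a', 'a', 'b', 'b', 'b', 'c', 'c', 'c'])
res = pairwise_tukeyhsd(endog, groups, alpha=0.05)  # classic Tukey-HSD
print(res.summary())
# Games-Howell variant for unequal variances (statsmodels >= 0.15):
# res_gh = pairwise_tukeyhsd(endog, groups, use_var='unequal')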
def ttest_power(effect_size, nobs, alpha, df=None, alternative='two-sided'):
'''Calculate power of a ttest
'''
d = effect_size
if df is None:
df = nobs - 1
if alternative in ['two-sided', '2s']:
alpha_ = alpha / 2. #no inplace changes, does not work
elif alternative in ['smaller', 'larger']:
alpha_ = alpha
else:
raise ValueError("alternative has to be 'two-sided', 'larger' " +
"or 'smaller'")
pow_ = 0
if alternative in ['two-sided', '2s', 'larger']:
crit_upp = stats.t.isf(alpha_, df)
#print crit_upp, df, d*np.sqrt(nobs)
# use private methods, generic methods return nan with negative d
if np.any(np.isnan(crit_upp)):
# avoid endless loop, https://github.com/scipy/scipy/issues/2667
pow_ = np.nan
else:
# pow_ = stats.nct._sf(crit_upp, df, d*np.sqrt(nobs))
# use scipy.special
pow_ = nct_sf(crit_upp, df, d*np.sqrt(nobs))
if alternative in ['two-sided', '2s', 'smaller']:
crit_low = stats.t.ppf(alpha_, df)
#print crit_low, df, d*np.sqrt(nobs)
if np.any(np.isnan(crit_low)):
pow_ = np.nan
else:
# pow_ += stats.nct._cdf(crit_low, df, d*np.sqrt(nobs))
pow_ += nct_cdf(crit_low, df, d*np.sqrt(nobs))
return pow_ | Calculate power of a ttest | ttest_power | python | statsmodels/statsmodels | statsmodels/stats/power.py | https://github.com/statsmodels/statsmodels/blob/master/statsmodels/stats/power.py | BSD-3-Clause |
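A quick sketch of the helper above, assuming the module-level import from statsmodels.stats.power:

from statsmodels.stats.power import ttest_power

# power of a one-sample t-test: standardized effect d = 0.5, n = 25
pow_ = ttest_power(0.5, nobs=25, alpha=0.05, alternative='two-sided')
print(round(pow_, 4))  # roughly 0.67 for these inputs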
def normal_power(effect_size, nobs, alpha, alternative='two-sided', sigma=1.):
"""Calculate power of a normal distributed test statistic
This is an generalization of `normal_power` when variance under Null and
Alternative differ.
Parameters
----------
effect_size : float
difference in the estimated means or statistics under the alternative,
normalized by the standard deviation (without division by sqrt(nobs)).
nobs : float or int
number of observations
alpha : float in interval (0,1)
significance level, e.g. 0.05, is the probability of a type I
error, that is wrong rejections if the Null Hypothesis is true.
alternative : string, 'two-sided' (default), 'larger', 'smaller'
extra argument to choose whether the power is calculated for a
two-sided (default) or one sided test. The one-sided test can be
either 'larger', 'smaller'.
"""
d = effect_size
if alternative in ['two-sided', '2s']:
alpha_ = alpha / 2. #no inplace changes, does not work
elif alternative in ['smaller', 'larger']:
alpha_ = alpha
else:
raise ValueError("alternative has to be 'two-sided', 'larger' " +
"or 'smaller'")
pow_ = 0
if alternative in ['two-sided', '2s', 'larger']:
crit = stats.norm.isf(alpha_)
pow_ = stats.norm.sf(crit - d*np.sqrt(nobs)/sigma)
if alternative in ['two-sided', '2s', 'smaller']:
crit = stats.norm.ppf(alpha_)
pow_ += stats.norm.cdf(crit - d*np.sqrt(nobs)/sigma)
return pow_ | Calculate power of a normally distributed test statistic
See ``normal_power_het`` for a generalization in which the variances under the
Null and the Alternative hypothesis differ.
Parameters
----------
effect_size : float
difference in the estimated means or statistics under the alternative,
normalized by the standard deviation (without division by sqrt(nobs)).
nobs : float or int
number of observations
alpha : float in interval (0,1)
significance level, e.g. 0.05, is the probability of a type I
error, that is wrong rejections if the Null Hypothesis is true.
alternative : string, 'two-sided' (default), 'larger', 'smaller'
extra argument to choose whether the power is calculated for a
two-sided (default) or one sided test. The one-sided test can be
either 'larger', 'smaller'. | normal_power | python | statsmodels/statsmodels | statsmodels/stats/power.py | https://github.com/statsmodels/statsmodels/blob/master/statsmodels/stats/power.py | BSD-3-Clause |
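A sketch of the z-test power helper above (module-level in statsmodels.stats.power):

from statsmodels.stats.power import normal_power

# one-sided z-test, standardized effect 0.3, n = 100
print(normal_power(0.3, nobs=100, alpha=0.05, alternative='larger'))  # about 0.91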
def normal_power_het(diff, nobs, alpha, std_null=1., std_alternative=None,
alternative='two-sided'):
"""Calculate power of a normal distributed test statistic
This is an generalization of `normal_power` when variance under Null and
Alternative differ.
Parameters
----------
diff : float
difference in the estimated means or statistics under the alternative.
nobs : float or int
number of observations
alpha : float in interval (0,1)
significance level, e.g. 0.05, is the probability of a type I
error, that is wrong rejections if the Null Hypothesis is true.
std_null : float
standard deviation under the Null hypothesis without division by
sqrt(nobs)
std_alternative : float
standard deviation under the Alternative hypothesis without division
by sqrt(nobs)
alternative : string, 'two-sided' (default), 'larger', 'smaller'
extra argument to choose whether the power is calculated for a
two-sided (default) or one sided test. The one-sided test can be
either 'larger', 'smaller'.
Returns
-------
power : float
"""
d = diff
if std_alternative is None:
std_alternative = std_null
if alternative in ['two-sided', '2s']:
alpha_ = alpha / 2. #no inplace changes, does not work
elif alternative in ['smaller', 'larger']:
alpha_ = alpha
else:
raise ValueError("alternative has to be 'two-sided', 'larger' " +
"or 'smaller'")
std_ratio = std_null / std_alternative
pow_ = 0
if alternative in ['two-sided', '2s', 'larger']:
crit = stats.norm.isf(alpha_)
pow_ = stats.norm.sf(crit * std_ratio -
d*np.sqrt(nobs) / std_alternative)
if alternative in ['two-sided', '2s', 'smaller']:
crit = stats.norm.ppf(alpha_)
pow_ += stats.norm.cdf(crit * std_ratio -
d*np.sqrt(nobs) / std_alternative)
return pow_ | Calculate power of a normally distributed test statistic
This is a generalization of `normal_power` to the case where the variances
under the Null and the Alternative hypothesis differ.
Parameters
----------
diff : float
difference in the estimated means or statistics under the alternative.
nobs : float or int
number of observations
alpha : float in interval (0,1)
significance level, e.g. 0.05, is the probability of a type I
error, that is wrong rejections if the Null Hypothesis is true.
std_null : float
standard deviation under the Null hypothesis without division by
sqrt(nobs)
std_alternative : float
standard deviation under the Alternative hypothesis without division
by sqrt(nobs)
alternative : string, 'two-sided' (default), 'larger', 'smaller'
extra argument to choose whether the power is calculated for a
two-sided (default) or one sided test. The one-sided test can be
either 'larger', 'smaller'.
Returns
-------
power : float | normal_power_het | python | statsmodels/statsmodels | statsmodels/stats/power.py | https://github.com/statsmodels/statsmodels/blob/master/statsmodels/stats/power.py | BSD-3-Clause |
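A sketch with different standard deviations under the Null and the Alternative, as the function above allows:

from statsmodels.stats.power import normal_power_het

# raw difference 0.2; sd 1.0 under the Null, sd 1.5 under the Alternative
print(normal_power_het(0.2, nobs=100, alpha=0.05,
                       std_null=1.0, std_alternative=1.5,
                       alternative='larger'))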
def normal_sample_size_one_tail(diff, power, alpha, std_null=1.,
std_alternative=None):
"""explicit sample size computation if only one tail is relevant
The sample size is based on the power in one tail assuming that the
alternative is in the tail where the test has power that increases
with sample size.
Use alpha/2 to compute the one tail approximation to the two-sided
test, i.e. consider only one tail of two-sided test.
Parameters
----------
diff : float
difference in the estimated means or statistics under the alternative.
power : float in interval (0,1)
power of the test, e.g. 0.8, is one minus the probability of a type II
error. Power is the probability that the test correctly rejects the
Null Hypothesis if the Alternative Hypothesis is true.
alpha : float in interval (0,1)
significance level, e.g. 0.05, is the probability of a type I
error, that is wrong rejections if the Null Hypothesis is true.
Note: alpha is used for one tail. Use alpha/2 for two-sided
alternative.
std_null : float
standard deviation under the Null hypothesis without division by
sqrt(nobs)
std_alternative : float
standard deviation under the Alternative hypothesis without division
by sqrt(nobs). Defaults to None. If None, ``std_alternative`` is set
to the value of ``std_null``.
Returns
-------
nobs : float
Sample size to achieve (at least) the desired power.
If the minimum power is satisfied for all positive sample sizes, then
``nobs`` will be zero. This will be the case when power <= alpha if
std_alternative is equal to std_null.
"""
if std_alternative is None:
std_alternative = std_null
crit_power = stats.norm.isf(power)
crit = stats.norm.isf(alpha)
n1 = (np.maximum(crit * std_null - crit_power * std_alternative, 0)
/ diff)**2
return n1 | explicit sample size computation if only one tail is relevant
The sample size is based on the power in one tail assuming that the
alternative is in the tail where the test has power that increases
with sample size.
Use alpha/2 to compute the one tail approximation to the two-sided
test, i.e. consider only one tail of two-sided test.
Parameters
----------
diff : float
difference in the estimated means or statistics under the alternative.
power : float in interval (0,1)
power of the test, e.g. 0.8, is one minus the probability of a type II
error. Power is the probability that the test correctly rejects the
Null Hypothesis if the Alternative Hypothesis is true.
alpha : float in interval (0,1)
significance level, e.g. 0.05, is the probability of a type I
error, that is wrong rejections if the Null Hypothesis is true.
Note: alpha is used for one tail. Use alpha/2 for two-sided
alternative.
std_null : float
standard deviation under the Null hypothesis without division by
sqrt(nobs)
std_alternative : float
standard deviation under the Alternative hypothesis without division
by sqrt(nobs). Defaults to None. If None, ``std_alternative`` is set
to the value of ``std_null``.
Returns
-------
nobs : float
Sample size to achieve (at least) the desired power.
If the minimum power is satisfied for all positive sample sizes, then
``nobs`` will be zero. This will be the case when power <= alpha if
std_alternative is equal to std_null. | normal_sample_size_one_tail | python | statsmodels/statsmodels | statsmodels/stats/power.py | https://github.com/statsmodels/statsmodels/blob/master/statsmodels/stats/power.py | BSD-3-Clause |
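A sketch of the explicit sample-size formula above; per the docstring, pass alpha/2 to approximate a two-sided test:

import numpy as np
from statsmodels.stats.power import normal_sample_size_one_tail

# n for 80% power at one-tailed alpha = 0.025 (two-sided 5% approximation)
n = normal_sample_size_one_tail(0.2, power=0.8, alpha=0.025)
print(np.ceil(n))  # round up to the next whole observation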
def ftest_anova_power(effect_size, nobs, alpha, k_groups=2, df=None):
'''Power of the F-test for one-way ANOVA with ``k_groups`` equally sized groups.
``nobs`` is the total sample size, summed over all groups.
TODO: should this be generalized to nobs observations with k_groups restrictions?
'''
df_num = k_groups - 1
df_denom = nobs - k_groups
crit = stats.f.isf(alpha, df_num, df_denom)
pow_ = ncf_sf(crit, df_num, df_denom, effect_size**2 * nobs)
return pow_ | Power of the F-test for one-way ANOVA with ``k_groups`` equally sized groups.
``nobs`` is the total sample size, summed over all groups.
TODO: should this be generalized to nobs observations with k_groups restrictions? | ftest_anova_power | python | statsmodels/statsmodels | statsmodels/stats/power.py | https://github.com/statsmodels/statsmodels/blob/master/statsmodels/stats/power.py | BSD-3-Clause |
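A sketch for the one-way ANOVA power helper above:

from statsmodels.stats.power import ftest_anova_power

# Cohen's f = 0.25, 3 equally sized groups, 60 observations in total
print(ftest_anova_power(0.25, nobs=60, alpha=0.05, k_groups=3))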
def ftest_power(effect_size, df2, df1, alpha, ncc=1):
'''Calculate the power of a F-test.
Parameters
----------
effect_size : float
The effect size is here Cohen's ``f``, the square root of ``f2``.
df2 : int or float
Denominator degrees of freedom.
This corresponds to the df_resid in Wald tests.
df1 : int or float
Numerator degrees of freedom.
This corresponds to the number of constraints in Wald tests.
alpha : float in interval (0,1)
significance level, e.g. 0.05, is the probability of a type I
error, that is wrong rejections if the Null Hypothesis is true.
ncc : int
degrees of freedom correction for non-centrality parameter.
see Notes
Returns
-------
power : float
Power of the test, e.g. 0.8, is one minus the probability of a
type II error. Power is the probability that the test correctly
rejects the Null Hypothesis if the Alternative Hypothesis is true.
Notes
-----
changed in 0.14: use df2, df1 instead of df_num, df_denom as arg names.
The latter had reversed meaning.
The sample size is given implicitly by ``df2`` with fixed number of
constraints given by numerator degrees of freedom ``df1``:
nobs = df2 + df1 + ncc
Set ncc=0 to match t-test, or f-test in LikelihoodModelResults.
ncc=1 matches the non-centrality parameter in R::pwr::pwr.f2.test
ftest_power with ncc=0 should also be correct for f_test in regression
models, with df_num (df1) as the number of constraints and df_denom (df2) as
df_resid.
'''
df_num, df_denom = df1, df2
nc = effect_size**2 * (df_denom + df_num + ncc)
crit = stats.f.isf(alpha, df_num, df_denom)
# pow_ = stats.ncf.sf(crit, df_num, df_denom, nc)
# use scipy.special for ncf
pow_ = ncf_sf(crit, df_num, df_denom, nc)
return pow_ #, crit, nc | Calculate the power of a F-test.
Parameters
----------
effect_size : float
The effect size is here Cohen's ``f``, the square root of ``f2``.
df2 : int or float
Denominator degrees of freedom.
This corresponds to the df_resid in Wald tests.
df1 : int or float
Numerator degrees of freedom.
This corresponds to the number of constraints in Wald tests.
alpha : float in interval (0,1)
significance level, e.g. 0.05, is the probability of a type I
error, that is wrong rejections if the Null Hypothesis is true.
ncc : int
degrees of freedom correction for non-centrality parameter.
see Notes
Returns
-------
power : float
Power of the test, e.g. 0.8, is one minus the probability of a
type II error. Power is the probability that the test correctly
rejects the Null Hypothesis if the Alternative Hypothesis is true.
Notes
-----
changed in 0.14: use df2, df1 instead of df_num, df_denom as arg names.
The latter had reversed meaning.
The sample size is given implicitly by ``df2`` with fixed number of
constraints given by numerator degrees of freedom ``df1``:
nobs = df2 + df1 + ncc
Set ncc=0 to match t-test, or f-test in LikelihoodModelResults.
ncc=1 matches the non-centrality parameter in R::pwr::pwr.f2.test
ftest_power with ncc=0 should also be correct for f_test in regression
models, with df_num (df1) as the number of constraints and df_denom (df2) as
df_resid. | ftest_power | python | statsmodels/statsmodels | statsmodels/stats/power.py | https://github.com/statsmodels/statsmodels/blob/master/statsmodels/stats/power.py | BSD-3-Clause |
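A sketch using the df2/df1 argument order documented above; since that order changed in 0.14, this assumes a recent statsmodels:

from statsmodels.stats.power import ftest_power

# Wald-type F-test: 2 constraints (df1), 97 residual df (df2), Cohen's f = 0.3
# implied nobs = df2 + df1 + ncc = 100
print(ftest_power(0.3, df2=97, df1=2, alpha=0.05, ncc=1))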
def ftest_power_f2(effect_size, df_num, df_denom, alpha, ncc=1):
'''Calculate the power of a F-test.
Based on Cohen's `f^2` effect size.
This assumes
df_num : numerator degrees of freedom, (number of constraints)
df_denom : denominator degrees of freedom (df_resid in regression)
nobs = df_denom + df_num + ncc
nc = effect_size * nobs (noncentrality index)
Power is computed one-sided in the upper tail.
Parameters
----------
effect_size : float
Cohen's f2 effect size or noncentrality divided by nobs.
df_num : int or float
Numerator degrees of freedom.
This corresponds to the number of constraints in Wald tests.
df_denom : int or float
Denominator degrees of freedom.
This corresponds to the df_resid in Wald tests.
alpha : float in interval (0,1)
significance level, e.g. 0.05, is the probability of a type I
error, that is wrong rejections if the Null Hypothesis is true.
ncc : int
degrees of freedom correction for non-centrality parameter.
see Notes
Returns
-------
power : float
Power of the test, e.g. 0.8, is one minus the probability of a
type II error. Power is the probability that the test correctly
rejects the Null Hypothesis if the Alternative Hypothesis is true.
Notes
-----
The sample size is given implicitly by ``df_denom`` with fixed number of
constraints given by numerator degrees of freedom ``df_num``:
nobs = df_denom + df_num + ncc
Set ncc=0 to match t-test, or f-test in LikelihoodModelResults.
ncc=1 matches the non-centrality parameter in R::pwr::pwr.f2.test
ftest_power with ncc=0 should also be correct for f_test in regression
models, with df_num (df1) as the number of constraints and df_denom (df2) as
df_resid.
'''
nc = effect_size * (df_denom + df_num + ncc)
crit = stats.f.isf(alpha, df_num, df_denom)
# pow_ = stats.ncf.sf(crit, df_num, df_denom, nc)
# use scipy.special for ncf
pow_ = ncf_sf(crit, df_num, df_denom, nc)
return pow_ | Calculate the power of a F-test.
Based on Cohen's `f^2` effect size.
This assumes
df_num : numerator degrees of freedom, (number of constraints)
df_denom : denominator degrees of freedom (df_resid in regression)
nobs = df_denom + df_num + ncc
nc = effect_size * nobs (noncentrality index)
Power is computed one-sided in the upper tail.
Parameters
----------
effect_size : float
Cohen's f2 effect size or noncentrality divided by nobs.
df_num : int or float
Numerator degrees of freedom.
This corresponds to the number of constraints in Wald tests.
df_denom : int or float
Denominator degrees of freedom.
This corresponds to the df_resid in Wald tests.
alpha : float in interval (0,1)
significance level, e.g. 0.05, is the probability of a type I
error, that is wrong rejections if the Null Hypothesis is true.
ncc : int
degrees of freedom correction for non-centrality parameter.
see Notes
Returns
-------
power : float
Power of the test, e.g. 0.8, is one minus the probability of a
type II error. Power is the probability that the test correctly
rejects the Null Hypothesis if the Alternative Hypothesis is true.
Notes
-----
The sample size is given implicitly by ``df_denom`` with fixed number of
constraints given by numerator degrees of freedom ``df_num``:
nobs = df_denom + df_num + ncc
Set ncc=0 to match t-test, or f-test in LikelihoodModelResults.
ncc=1 matches the non-centrality parameter in R::pwr::pwr.f2.test
ftest_power with ncc=0 should also be correct for f_test in regression
models, with df_num (df1) as the number of constraints and df_denom (df2) as
df_resid. | ftest_power_f2 | python | statsmodels/statsmodels | statsmodels/stats/power.py | https://github.com/statsmodels/statsmodels/blob/master/statsmodels/stats/power.py | BSD-3-Clause |
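The same test parameterized by Cohen's f2 instead of f; importing ftest_power_f2 module-level from statsmodels.stats.power assumes a version recent enough to include it:

from statsmodels.stats.power import ftest_power_f2

# f2 = f**2 = 0.09 reproduces the ftest_power sketch with f = 0.3
print(ftest_power_f2(0.09, df_num=2, df_denom=97, alpha=0.05, ncc=1))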
def solve_power(self, **kwds):
'''solve for any one of the parameters of a t-test
for t-test the keywords are:
effect_size, nobs, alpha, power
exactly one needs to be ``None``, all others need numeric values
*attaches*
cache_fit_res : list
Cache of the result of the root finding procedure for the latest
call to ``solve_power``, mainly for debugging purposes.
The first element is the success indicator, one if successful.
The remaining elements contain the return information of the up to
three solvers that have been tried.
'''
#TODO: maybe use explicit kwds,
# nicer but requires inspect? and not generic across tests
# I'm duplicating this in the subclass to get informative docstring
key = [k for k,v in kwds.items() if v is None]
#print kwds, key
if len(key) != 1:
raise ValueError('need exactly one keyword that is None')
key = key[0]
if key == 'power':
del kwds['power']
return self.power(**kwds)
if kwds['effect_size'] == 0:
import warnings
from statsmodels.tools.sm_exceptions import HypothesisTestWarning
warnings.warn('Warning: Effect size of 0 detected', HypothesisTestWarning)
if key == 'power':
return kwds['alpha']
if key == 'alpha':
return kwds['power']
else:
raise ValueError('Cannot detect an effect-size of 0. Try changing your effect-size.')
self._counter = 0
def func(x):
kwds[key] = x
fval = self._power_identity(**kwds)
self._counter += 1
#print self._counter,
if self._counter > 500:
raise RuntimeError('possible endless loop (500 NaNs)')
if np.isnan(fval):
return np.inf
else:
return fval
#TODO: I'm using the following so I get a warning when start_ttp is not defined
try:
start_value = self.start_ttp[key]
except KeyError:
start_value = 0.9
import warnings
from statsmodels.tools.sm_exceptions import ValueWarning
warnings.warn(f'Warning: using default start_value for {key}', ValueWarning)
fit_kwds = self.start_bqexp[key]
fit_res = []
#print vars()
try:
val, res = brentq_expanding(func, full_output=True, **fit_kwds)
failed = False
fit_res.append(res)
except ValueError:
failed = True
fit_res.append(None)
success = None
if (not failed) and res.converged:
success = 1
else:
# try backup
# TODO: check more cases to make this robust
if not np.isnan(start_value):
val, infodict, ier, msg = optimize.fsolve(func, start_value,
full_output=True) #scalar
#val = optimize.newton(func, start_value) #scalar
fval = infodict['fvec']
fit_res.append(infodict)
else:
ier = -1
fval = 1
fit_res.append([None])
if ier == 1 and np.abs(fval) < 1e-4:
success = 1
else:
#print infodict
if key in ['alpha', 'power', 'effect_size']:
val, r = optimize.brentq(func, 1e-8, 1-1e-8,
full_output=True) #scalar
success = 1 if r.converged else 0
fit_res.append(r)
else:
success = 0
if not success == 1:
import warnings
from statsmodels.tools.sm_exceptions import (ConvergenceWarning,
convergence_doc)
warnings.warn(convergence_doc, ConvergenceWarning)
#attach fit_res, for reading only, should be needed only for debugging
fit_res.insert(0, success)
self.cache_fit_res = fit_res
return val | solve for any one of the parameters of a t-test
for t-test the keywords are:
effect_size, nobs, alpha, power
exactly one needs to be ``None``, all others need numeric values
*attaches*
cache_fit_res : list
Cache of the result of the root finding procedure for the latest
call to ``solve_power``, mainly for debugging purposes.
The first element is the success indicator, one if successful.
The remaining elements contain the return information of the up to
three solvers that have been tried. | solve_power | python | statsmodels/statsmodels | statsmodels/stats/power.py | https://github.com/statsmodels/statsmodels/blob/master/statsmodels/stats/power.py | BSD-3-Clause |
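A sketch of the generic solver using the one-sample t-test class defined below; exactly one keyword is None and gets solved for:

from statsmodels.stats.power import TTestPower

solver = TTestPower()
n = solver.solve_power(effect_size=0.5, nobs=None, alpha=0.05, power=0.8)
print(n)                        # about 33.4, so 34 observations
print(solver.cache_fit_res[0])  # success indicator attached by solve_power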
def plot_power(self, dep_var='nobs', nobs=None, effect_size=None,
alpha=0.05, ax=None, title=None, plt_kwds=None, **kwds):
"""
Plot power with number of observations or effect size on x-axis
Parameters
----------
dep_var : {'nobs', 'effect_size', 'alpha'}
This specifies which variable is used for the horizontal axis.
If dep_var='nobs' (default), then one curve is created for each
value of ``effect_size``. If dep_var='effect_size' or alpha, then
one curve is created for each value of ``nobs``.
nobs : {scalar, array_like}
specifies the values of the number of observations in the plot
effect_size : {scalar, array_like}
specifies the values of the effect_size in the plot
alpha : {float, array_like}
The significance level (type I error) used in the power
calculation. Can only be an array (more than a scalar) if ``dep_var='alpha'``.
ax : None or axis instance
If ax is None, then a matplotlib figure is created. If ax is a
matplotlib axis instance, then it is reused, and the plot elements
are created with it.
title : str
title for the axis. Use an empty string, ``''``, to avoid a title.
plt_kwds : {None, dict}
not used yet
kwds : dict
These remaining keyword arguments are used as arguments to the
power function. Many power functions support ``alternative`` as a
keyword argument; two-sample tests support ``ratio``.
Returns
-------
Figure
If `ax` is None, the created figure. Otherwise the figure to which
`ax` is connected.
Notes
-----
This works only for classes where the ``power`` method has
``effect_size``, ``nobs`` and ``alpha`` as the first three arguments.
If the second argument is ``nobs1``, then the number of observations
in the plot are those for the first sample.
TODO: fix this for FTestPower and GofChisquarePower
TODO: maybe add line variable, if we want more than nobs and effectsize
"""
#if pwr_kwds is None:
# pwr_kwds = {}
from statsmodels.graphics import utils
from statsmodels.graphics.plottools import rainbow
fig, ax = utils.create_mpl_ax(ax)
import matplotlib.pyplot as plt
colormap = plt.cm.Dark2 #pylint: disable-msg=E1101
plt_alpha = 1 #0.75
lw = 2
if dep_var == 'nobs':
colors = rainbow(len(effect_size))
colors = [colormap(i) for i in np.linspace(0, 0.9, len(effect_size))]
for ii, es in enumerate(effect_size):
power = self.power(es, nobs, alpha, **kwds)
ax.plot(nobs, power, lw=lw, alpha=plt_alpha,
color=colors[ii], label='es=%4.2F' % es)
xlabel = 'Number of Observations'
elif dep_var in ['effect size', 'effect_size', 'es']:
colors = rainbow(len(nobs))
colors = [colormap(i) for i in np.linspace(0, 0.9, len(nobs))]
for ii, n in enumerate(nobs):
power = self.power(effect_size, n, alpha, **kwds)
ax.plot(effect_size, power, lw=lw, alpha=plt_alpha,
color=colors[ii], label='N=%4.2F' % n)
xlabel = 'Effect Size'
elif dep_var in ['alpha']:
# experimental nobs as defining separate lines
colors = rainbow(len(nobs))
for ii, n in enumerate(nobs):
power = self.power(effect_size, n, alpha, **kwds)
ax.plot(alpha, power, lw=lw, alpha=plt_alpha,
color=colors[ii], label='N=%4.2F' % n)
xlabel = 'alpha'
else:
raise ValueError('depvar not implemented')
if title is None:
title = 'Power of Test'
ax.set_xlabel(xlabel)
ax.set_title(title)
ax.legend(loc='lower right')
return fig | Plot power with number of observations or effect size on x-axis
Parameters
----------
dep_var : {'nobs', 'effect_size', 'alpha'}
This specifies which variable is used for the horizontal axis.
If dep_var='nobs' (default), then one curve is created for each
value of ``effect_size``. If dep_var='effect_size' or alpha, then
one curve is created for each value of ``nobs``.
nobs : {scalar, array_like}
specifies the values of the number of observations in the plot
effect_size : {scalar, array_like}
specifies the values of the effect_size in the plot
alpha : {float, array_like}
The significance level (type I error) used in the power
calculation. Can only be an array (more than a scalar) if ``dep_var='alpha'``.
ax : None or axis instance
If ax is None, then a matplotlib figure is created. If ax is a
matplotlib axis instance, then it is reused, and the plot elements
are created with it.
title : str
title for the axis. Use an empty string, ``''``, to avoid a title.
plt_kwds : {None, dict}
not used yet
kwds : dict
These remaining keyword arguments are used as arguments to the
power function. Many power functions support ``alternative`` as a
keyword argument; two-sample tests support ``ratio``.
Returns
-------
Figure
If `ax` is None, the created figure. Otherwise the figure to which
`ax` is connected.
Notes
-----
This works only for classes where the ``power`` method has
``effect_size``, ``nobs`` and ``alpha`` as the first three arguments.
If the second argument is ``nobs1``, then the number of observations
in the plot are those for the first sample.
TODO: fix this for FTestPower and GofChisquarePower
TODO: maybe add line variable, if we want more than nobs and effectsize | plot_power | python | statsmodels/statsmodels | statsmodels/stats/power.py | https://github.com/statsmodels/statsmodels/blob/master/statsmodels/stats/power.py | BSD-3-Clause |
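A usage sketch of the plotting helper; the effect sizes and sample-size grid are illustrative:

import numpy as np
import matplotlib.pyplot as plt
from statsmodels.stats.power import TTestIndPower

fig = TTestIndPower().plot_power(dep_var='nobs',
                                 nobs=np.arange(5, 150),
                                 effect_size=np.array([0.2, 0.5, 0.8]),
                                 alpha=0.05)
plt.show()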
def power(self, effect_size, nobs, alpha, df=None, alternative='two-sided'):
'''Calculate the power of a t-test for one sample or paired samples.
Parameters
----------
effect_size : float
standardized effect size, mean divided by the standard deviation.
effect size has to be positive.
nobs : int or float
sample size, number of observations.
alpha : float in interval (0,1)
significance level, e.g. 0.05, is the probability of a type I
error, that is wrong rejections if the Null Hypothesis is true.
df : int or float
degrees of freedom. By default this is None, and the df from the
one sample or paired ttest is used, ``df = nobs1 - 1``
alternative : str, 'two-sided' (default), 'larger', 'smaller'
extra argument to choose whether the power is calculated for a
two-sided (default) or one sided test. The one-sided test can be
either 'larger', 'smaller'.
Returns
-------
power : float
Power of the test, e.g. 0.8, is one minus the probability of a
type II error. Power is the probability that the test correctly
rejects the Null Hypothesis if the Alternative Hypothesis is true.
'''
# for debugging
#print 'calling ttest power with', (effect_size, nobs, alpha, df, alternative)
return ttest_power(effect_size, nobs, alpha, df=df,
alternative=alternative) | Calculate the power of a t-test for one sample or paired samples.
Parameters
----------
effect_size : float
standardized effect size, mean divided by the standard deviation.
effect size has to be positive.
nobs : int or float
sample size, number of observations.
alpha : float in interval (0,1)
significance level, e.g. 0.05, is the probability of a type I
error, that is wrong rejections if the Null Hypothesis is true.
df : int or float
degrees of freedom. By default this is None, and the df from the
one sample or paired ttest is used, ``df = nobs1 - 1``
alternative : str, 'two-sided' (default), 'larger', 'smaller'
extra argument to choose whether the power is calculated for a
two-sided (default) or one sided test. The one-sided test can be
either 'larger', 'smaller'.
Returns
-------
power : float
Power of the test, e.g. 0.8, is one minus the probability of a
type II error. Power is the probability that the test correctly
rejects the Null Hypothesis if the Alternative Hypothesis is true. | power | python | statsmodels/statsmodels | statsmodels/stats/power.py | https://github.com/statsmodels/statsmodels/blob/master/statsmodels/stats/power.py | BSD-3-Clause |
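A sketch of the method above for a paired design, where effect_size is the standardized mean difference and nobs the number of pairs:

from statsmodels.stats.power import TTestPower

print(TTestPower().power(effect_size=0.5, nobs=30, alpha=0.05,
                         alternative='two-sided'))  # about 0.75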
def solve_power(self, effect_size=None, nobs=None, alpha=None, power=None,
alternative='two-sided'):
'''solve for any one parameter of the power of a one sample t-test
for the one sample t-test the keywords are:
effect_size, nobs, alpha, power
Exactly one needs to be ``None``, all others need numeric values.
This test can also be used for a paired t-test, where effect size is
defined in terms of the mean difference, and nobs is the number of
pairs.
Parameters
----------
effect_size : float
Standardized effect size: the mean (or mean difference for paired
samples) divided by the standard deviation; it has to be positive.
nobs : int or float
sample size, number of observations.
alpha : float in interval (0,1)
significance level, e.g. 0.05, is the probability of a type I
error, that is wrong rejections if the Null Hypothesis is true.
power : float in interval (0,1)
power of the test, e.g. 0.8, is one minus the probability of a
type II error. Power is the probability that the test correctly
rejects the Null Hypothesis if the Alternative Hypothesis is true.
alternative : str, 'two-sided' (default), 'larger', 'smaller'
extra argument to choose whether the power is calculated for a
two-sided (default) or one sided test. The one-sided test can be
either 'larger' or 'smaller'.
Returns
-------
value : float
The value of the parameter that was set to None in the call. The
value solves the power equation given the remaining parameters.
*attaches*
cache_fit_res : list
Cache of the result of the root finding procedure for the latest
call to ``solve_power``, mainly for debugging purposes.
The first element is the success indicator, one if successful.
The remaining elements contain the return information of the up to
three solvers that have been tried.
Notes
-----
The function uses scipy.optimize for finding the value that satisfies
the power equation. It first uses ``brentq`` with a prior search for
bounds. If this fails to find a root, ``fsolve`` is used. If ``fsolve``
also fails, then, for ``alpha``, ``power`` and ``effect_size``,
``brentq`` with fixed bounds is used. However, there can still be cases
where this fails.
'''
# for debugging
#print 'calling ttest solve with', (effect_size, nobs, alpha, power, alternative)
return super().solve_power(effect_size=effect_size,
nobs=nobs,
alpha=alpha,
power=power,
alternative=alternative) | solve for any one parameter of the power of a one sample t-test
for the one sample t-test the keywords are:
effect_size, nobs, alpha, power
Exactly one needs to be ``None``, all others need numeric values.
This test can also be used for a paired t-test, where effect size is
defined in terms of the mean difference, and nobs is the number of
pairs.
Parameters
----------
effect_size : float
Standardized effect size: the mean (or mean difference for paired
samples) divided by the standard deviation; it has to be positive.
nobs : int or float
sample size, number of observations.
alpha : float in interval (0,1)
significance level, e.g. 0.05, is the probability of a type I
error, that is wrong rejections if the Null Hypothesis is true.
power : float in interval (0,1)
power of the test, e.g. 0.8, is one minus the probability of a
type II error. Power is the probability that the test correctly
rejects the Null Hypothesis if the Alternative Hypothesis is true.
alternative : str, 'two-sided' (default), 'larger', 'smaller'
extra argument to choose whether the power is calculated for a
two-sided (default) or one sided test. The one-sided test can be
either 'larger' or 'smaller'.
Returns
-------
value : float
The value of the parameter that was set to None in the call. The
value solves the power equation given the remaining parameters.
*attaches*
cache_fit_res : list
Cache of the result of the root finding procedure for the latest
call to ``solve_power``, mainly for debugging purposes.
The first element is the success indicator, one if successful.
The remaining elements contain the return information of the up to
three solvers that have been tried.
Notes
-----
The function uses scipy.optimize for finding the value that satisfies
the power equation. It first uses ``brentq`` with a prior search for
bounds. If this fails to find a root, ``fsolve`` is used. If ``fsolve``
also fails, then, for ``alpha``, ``power`` and ``effect_size``,
``brentq`` with fixed bounds is used. However, there can still be cases
where this fails. | solve_power | python | statsmodels/statsmodels | statsmodels/stats/power.py | https://github.com/statsmodels/statsmodels/blob/master/statsmodels/stats/power.py | BSD-3-Clause |
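Solving for the detectable effect size instead of the sample size, with the same one-None convention:

from statsmodels.stats.power import TTestPower

es = TTestPower().solve_power(effect_size=None, nobs=25, alpha=0.05, power=0.8)
print(es)  # about 0.58 for a two-sided one-sample t-test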
def power(self, effect_size, nobs1, alpha, ratio=1, df=None,
alternative='two-sided'):
'''Calculate the power of a t-test for two independent sample
Parameters
----------
effect_size : float
standardized effect size, difference between the two means divided
by the standard deviation. `effect_size` has to be positive.
nobs1 : int or float
number of observations of sample 1. The number of observations of
sample two is ratio times the size of sample 1,
i.e. ``nobs2 = nobs1 * ratio``
alpha : float in interval (0,1)
significance level, e.g. 0.05, is the probability of a type I
error, that is wrong rejections if the Null Hypothesis is true.
ratio : float
ratio of the number of observations in sample 2 relative to
sample 1. see description of nobs1
The default for ratio is 1; to solve for ratio given the other
arguments, it has to be explicitly set to None.
df : int or float
degrees of freedom. By default this is None, and the df from the
ttest with pooled variance is used, ``df = (nobs1 - 1 + nobs2 - 1)``
alternative : str, 'two-sided' (default), 'larger', 'smaller'
extra argument to choose whether the power is calculated for a
two-sided (default) or one sided test. The one-sided test can be
either 'larger', 'smaller'.
Returns
-------
power : float
Power of the test, e.g. 0.8, is one minus the probability of a
type II error. Power is the probability that the test correctly
rejects the Null Hypothesis if the Alternative Hypothesis is true.
'''
nobs2 = nobs1*ratio
#pooled variance
if df is None:
df = (nobs1 - 1 + nobs2 - 1)
nobs = 1./ (1. / nobs1 + 1. / nobs2)
#print 'calling ttest power with', (effect_size, nobs, alpha, df, alternative)
return ttest_power(effect_size, nobs, alpha, df=df, alternative=alternative) | Calculate the power of a t-test for two independent sample
Parameters
----------
effect_size : float
standardized effect size, difference between the two means divided
by the standard deviation. `effect_size` has to be positive.
nobs1 : int or float
number of observations of sample 1. The number of observations of
sample two is ratio times the size of sample 1,
i.e. ``nobs2 = nobs1 * ratio``
alpha : float in interval (0,1)
significance level, e.g. 0.05, is the probability of a type I
error, that is wrong rejections if the Null Hypothesis is true.
ratio : float
ratio of the number of observations in sample 2 relative to
sample 1. see description of nobs1
The default for ratio is 1; to solve for ratio given the other
arguments, it has to be explicitly set to None.
df : int or float
degrees of freedom. By default this is None, and the df from the
ttest with pooled variance is used, ``df = (nobs1 - 1 + nobs2 - 1)``
alternative : str, 'two-sided' (default), 'larger', 'smaller'
extra argument to choose whether the power is calculated for a
two-sided (default) or one sided test. The one-sided test can be
either 'larger', 'smaller'.
Returns
-------
power : float
Power of the test, e.g. 0.8, is one minus the probability of a
type II error. Power is the probability that the test correctly
rejects the Null Hypothesis if the Alternative Hypothesis is true. | power | python | statsmodels/statsmodels | statsmodels/stats/power.py | https://github.com/statsmodels/statsmodels/blob/master/statsmodels/stats/power.py | BSD-3-Clause |
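A sketch of the two-sample power method above with unequal group sizes via ratio:

from statsmodels.stats.power import TTestIndPower

# n1 = 50, n2 = nobs1 * ratio = 100, standardized difference d = 0.5
print(TTestIndPower().power(effect_size=0.5, nobs1=50, alpha=0.05, ratio=2))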
def solve_power(self, effect_size=None, nobs1=None, alpha=None, power=None,
ratio=1., alternative='two-sided'):
'''solve for any one parameter of the power of a two sample t-test
for t-test the keywords are:
effect_size, nobs1, alpha, power, ratio
exactly one needs to be ``None``, all others need numeric values
Parameters
----------
effect_size : float
standardized effect size, difference between the two means divided
by the standard deviation. `effect_size` has to be positive.
nobs1 : int or float
number of observations of sample 1. The number of observations of
sample two is ratio times the size of sample 1,
i.e. ``nobs2 = nobs1 * ratio``
alpha : float in interval (0,1)
significance level, e.g. 0.05, is the probability of a type I
error, that is wrong rejections if the Null Hypothesis is true.
power : float in interval (0,1)
power of the test, e.g. 0.8, is one minus the probability of a
type II error. Power is the probability that the test correctly
rejects the Null Hypothesis if the Alternative Hypothesis is true.
ratio : float
ratio of the number of observations in sample 2 relative to
sample 1. see description of nobs1
The default for ratio is 1; to solve for ratio given the other
arguments it has to be explicitly set to None.
alternative : str, 'two-sided' (default), 'larger', 'smaller'
extra argument to choose whether the power is calculated for a
two-sided (default) or one-sided test. The one-sided test can be
either 'larger' or 'smaller'.
Returns
-------
value : float
The value of the parameter that was set to None in the call. The
value solves the power equation given the remaining parameters.
Notes
-----
The function uses scipy.optimize for finding the value that satisfies
the power equation. It first uses ``brentq`` with a prior search for
bounds. If this fails to find a root, ``fsolve`` is used. If ``fsolve``
also fails, then, for ``alpha``, ``power`` and ``effect_size``,
``brentq`` with fixed bounds is used. However, there can still be cases
where this fails.
'''
return super().solve_power(effect_size=effect_size,
nobs1=nobs1,
alpha=alpha,
power=power,
ratio=ratio,
alternative=alternative) | solve for any one parameter of the power of a two sample t-test
for t-test the keywords are:
effect_size, nobs1, alpha, power, ratio
exactly one needs to be ``None``, all others need numeric values
Parameters
----------
effect_size : float
standardized effect size, difference between the two means divided
by the standard deviation. `effect_size` has to be positive.
nobs1 : int or float
number of observations of sample 1. The number of observations of
sample two is ratio times the size of sample 1,
i.e. ``nobs2 = nobs1 * ratio``
alpha : float in interval (0,1)
significance level, e.g. 0.05, is the probability of a type I
error, that is wrong rejections if the Null Hypothesis is true.
power : float in interval (0,1)
power of the test, e.g. 0.8, is one minus the probability of a
type II error. Power is the probability that the test correctly
rejects the Null Hypothesis if the Alternative Hypothesis is true.
ratio : float
ratio of the number of observations in sample 2 relative to
sample 1. see description of nobs1
The default for ratio is 1; to solve for ratio given the other
arguments it has to be explicitly set to None.
alternative : str, 'two-sided' (default), 'larger', 'smaller'
extra argument to choose whether the power is calculated for a
two-sided (default) or one-sided test. The one-sided test can be
either 'larger' or 'smaller'.
Returns
-------
value : float
The value of the parameter that was set to None in the call. The
value solves the power equation given the remaining parameters.
Notes
-----
The function uses scipy.optimize for finding the value that satisfies
the power equation. It first uses ``brentq`` with a prior search for
bounds. If this fails to find a root, ``fsolve`` is used. If ``fsolve``
also fails, then, for ``alpha``, ``power`` and ``effect_size``,
``brentq`` with fixed bounds is used. However, there can still be cases
where this fails. | solve_power | python | statsmodels/statsmodels | statsmodels/stats/power.py | https://github.com/statsmodels/statsmodels/blob/master/statsmodels/stats/power.py | BSD-3-Clause |
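A sample-size sketch for the solver above: set `nobs1=None` and supply the remaining keywords (illustrative values):

from statsmodels.stats.power import TTestIndPower

# Per-group sample size for 80% power to detect Cohen's d = 0.5 at alpha = 0.05.
n1 = TTestIndPower().solve_power(effect_size=0.5, nobs1=None, alpha=0.05,
                                 power=0.8, ratio=1.0,
                                 alternative='two-sided')
print(n1)  # close to 64 observations per group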
def power(self, effect_size, nobs1, alpha, ratio=1,
alternative='two-sided'):
'''Calculate the power of a z-test for two independent samples
Parameters
----------
effect_size : float
standardized effect size, difference between the two means divided
by the standard deviation. effect size has to be positive.
nobs1 : int or float
number of observations of sample 1. The number of observations of
sample two is ratio times the size of sample 1,
i.e. ``nobs2 = nobs1 * ratio``
``ratio`` can be set to zero in order to get the power for a
one sample test.
alpha : float in interval (0,1)
significance level, e.g. 0.05, is the probability of a type I
error, that is wrong rejections if the Null Hypothesis is true.
ratio : float
ratio of the number of observations in sample 2 relative to
sample 1. see description of nobs1
alternative : str, 'two-sided' (default), 'larger', 'smaller'
extra argument to choose whether the power is calculated for a
two-sided (default) or one-sided test. The one-sided test can be
either 'larger' or 'smaller'.
Returns
-------
power : float
Power of the test, e.g. 0.8, is one minus the probability of a
type II error. Power is the probability that the test correctly
rejects the Null Hypothesis if the Alternative Hypothesis is true.
'''
ddof = self.ddof # for correlation, ddof=3
# get effective nobs, factor for std of test statistic
if ratio > 0:
nobs2 = nobs1 * ratio
# equivalent to nobs = n1*n2/(n1+n2) = n1*ratio/(1+ratio)
nobs = 1. / (1. / (nobs1 - ddof) + 1. / (nobs2 - ddof))
else:
nobs = nobs1 - ddof
return normal_power(effect_size, nobs, alpha, alternative=alternative) | Calculate the power of a z-test for two independent samples
Parameters
----------
effect_size : float
standardized effect size, difference between the two means divided
by the standard deviation. effect size has to be positive.
nobs1 : int or float
number of observations of sample 1. The number of observations of
sample two is ratio times the size of sample 1,
i.e. ``nobs2 = nobs1 * ratio``
``ratio`` can be set to zero in order to get the power for a
one sample test.
alpha : float in interval (0,1)
significance level, e.g. 0.05, is the probability of a type I
error, that is wrong rejections if the Null Hypothesis is true.
ratio : float
ratio of the number of observations in sample 2 relative to
sample 1. see description of nobs1
alternative : str, 'two-sided' (default), 'larger', 'smaller'
extra argument to choose whether the power is calculated for a
two-sided (default) or one-sided test. The one-sided test can be
either 'larger' or 'smaller'.
Returns
-------
power : float
Power of the test, e.g. 0.8, is one minus the probability of a
type II error. Power is the probability that the test correctly
rejects the Null Hypothesis if the Alternative Hypothesis is true. | power | python | statsmodels/statsmodels | statsmodels/stats/power.py | https://github.com/statsmodels/statsmodels/blob/master/statsmodels/stats/power.py | BSD-3-Clause |
def solve_power(self, effect_size=None, nobs1=None, alpha=None, power=None,
ratio=1., alternative='two-sided'):
'''solve for any one parameter of the power of a two sample z-test
for z-test the keywords are:
effect_size, nobs1, alpha, power, ratio
exactly one needs to be ``None``, all others need numeric values
Parameters
----------
effect_size : float
standardized effect size, difference between the two means divided
by the standard deviation.
If ratio=0, then this is the standardized mean in the one sample
test.
nobs1 : int or float
number of observations of sample 1. The number of observations of
sample two is ratio times the size of sample 1,
i.e. ``nobs2 = nobs1 * ratio``
``ratio`` can be set to zero in order to get the power for a
one sample test.
alpha : float in interval (0,1)
significance level, e.g. 0.05, is the probability of a type I
error, that is wrong rejections if the Null Hypothesis is true.
power : float in interval (0,1)
power of the test, e.g. 0.8, is one minus the probability of a
type II error. Power is the probability that the test correctly
rejects the Null Hypothesis if the Alternative Hypothesis is true.
ratio : float
ratio of the number of observations in sample 2 relative to
sample 1. see description of nobs1
The default for ratio is 1; to solve for ratio given the other
arguments it has to be explicitly set to None.
alternative : str, 'two-sided' (default), 'larger', 'smaller'
extra argument to choose whether the power is calculated for a
two-sided (default) or one-sided test. The one-sided test can be
either 'larger' or 'smaller'.
Returns
-------
value : float
The value of the parameter that was set to None in the call. The
value solves the power equation given the remaining parameters.
Notes
-----
The function uses scipy.optimize for finding the value that satisfies
the power equation. It first uses ``brentq`` with a prior search for
bounds. If this fails to find a root, ``fsolve`` is used. If ``fsolve``
also fails, then, for ``alpha``, ``power`` and ``effect_size``,
``brentq`` with fixed bounds is used. However, there can still be cases
where this fails.
'''
return super().solve_power(effect_size=effect_size,
nobs1=nobs1,
alpha=alpha,
power=power,
ratio=ratio,
alternative=alternative) | solve for any one parameter of the power of a two sample z-test
for z-test the keywords are:
effect_size, nobs1, alpha, power, ratio
exactly one needs to be ``None``, all others need numeric values
Parameters
----------
effect_size : float
standardized effect size, difference between the two means divided
by the standard deviation.
If ratio=0, then this is the standardized mean in the one sample
test.
nobs1 : int or float
number of observations of sample 1. The number of observations of
sample two is ratio times the size of sample 1,
i.e. ``nobs2 = nobs1 * ratio``
``ratio`` can be set to zero in order to get the power for a
one sample test.
alpha : float in interval (0,1)
significance level, e.g. 0.05, is the probability of a type I
error, that is wrong rejections if the Null Hypothesis is true.
power : float in interval (0,1)
power of the test, e.g. 0.8, is one minus the probability of a
type II error. Power is the probability that the test correctly
rejects the Null Hypothesis if the Alternative Hypothesis is true.
ratio : float
ratio of the number of observations in sample 2 relative to
sample 1. see description of nobs1
The default for ratio is 1; to solve for ratio given the other
arguments it has to be explicitly set to None.
alternative : str, 'two-sided' (default), 'larger', 'smaller'
extra argument to choose whether the power is calculated for a
two-sided (default) or one-sided test. The one-sided test can be
either 'larger' or 'smaller'.
Returns
-------
value : float
The value of the parameter that was set to None in the call. The
value solves the power equation given the remaining parameters.
Notes
-----
The function uses scipy.optimize for finding the value that satisfies
the power equation. It first uses ``brentq`` with a prior search for
bounds. If this fails to find a root, ``fsolve`` is used. If ``fsolve``
also fails, then, for ``alpha``, ``power`` and ``effect_size``,
``brentq`` with fixed bounds is used. However, there can still be cases
where this fails. | solve_power | python | statsmodels/statsmodels | statsmodels/stats/power.py | https://github.com/statsmodels/statsmodels/blob/master/statsmodels/stats/power.py | BSD-3-Clause |
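A sketch of the z-test variant, including the ratio=0 one-sample shortcut described above (illustrative values; `NormalIndPower` is the public class name assumed here):

from statsmodels.stats.power import NormalIndPower

zpower = NormalIndPower()
# Two-sample z-test power with equal group sizes.
two_sample = zpower.power(effect_size=0.3, nobs1=100, alpha=0.05, ratio=1.0)
# Setting ratio=0 reuses the same machinery for a one-sample z-test.
one_sample = zpower.power(effect_size=0.3, nobs1=100, alpha=0.05, ratio=0)
print(round(two_sample, 3), round(one_sample, 3))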
def power(self, effect_size, df_num, df_denom, alpha, ncc=1):
'''Calculate the power of an F-test.
The effect size is Cohen's ``f``, square root of ``f2``.
The sample size is given by ``nobs = df_denom + df_num + ncc``
Warning: The meaning of df_num and df_denom is reversed.
Parameters
----------
effect_size : float
Standardized effect size. The effect size is here Cohen's ``f``,
square root of ``f2``.
df_num : int or float
Warning incorrect name
denominator degrees of freedom,
This corresponds to the number of constraints in Wald tests.
df_denom : int or float
Warning incorrect name
numerator degrees of freedom.
This corresponds to the df_resid in Wald tests.
alpha : float in interval (0,1)
significance level, e.g. 0.05, is the probability of a type I
error, that is wrong rejections if the Null Hypothesis is true.
ncc : int
degrees of freedom correction for non-centrality parameter.
see Notes
Returns
-------
power : float
Power of the test, e.g. 0.8, is one minus the probability of a
type II error. Power is the probability that the test correctly
rejects the Null Hypothesis if the Alternative Hypothesis is true.
Notes
-----
sample size is given implicitly by df_num
set ncc=0 to match t-test, or f-test in LikelihoodModelResults.
ncc=1 matches the non-centrality parameter in R::pwr::pwr.f2.test
ftest_power with ncc=0 should also be correct for f_test in regression
models, with df_num and df_denom as defined there. (not verified yet)
'''
pow_ = ftest_power(effect_size, df_num, df_denom, alpha, ncc=ncc)
return pow_ | Calculate the power of an F-test.
The effect size is Cohen's ``f``, square root of ``f2``.
The sample size is given by ``nobs = df_denom + df_num + ncc``
Warning: The meaning of df_num and df_denom is reversed.
Parameters
----------
effect_size : float
Standardized effect size. The effect size is here Cohen's ``f``,
square root of ``f2``.
df_num : int or float
Warning incorrect name
denominator degrees of freedom,
This corresponds to the number of constraints in Wald tests.
df_denom : int or float
Warning incorrect name
numerator degrees of freedom.
This corresponds to the df_resid in Wald tests.
alpha : float in interval (0,1)
significance level, e.g. 0.05, is the probability of a type I
error, that is wrong rejections if the Null Hypothesis is true.
ncc : int
degrees of freedom correction for non-centrality parameter.
see Notes
Returns
-------
power : float
Power of the test, e.g. 0.8, is one minus the probability of a
type II error. Power is the probability that the test correctly
rejects the Null Hypothesis if the Alternative Hypothesis is true.
Notes
-----
sample size is given implicitly by df_num
set ncc=0 to match t-test, or f-test in LikelihoodModelResults.
ncc=1 matches the non-centrality parameter in R::pwr::pwr.f2.test
ftest_power with ncc=0 should also be correct for f_test in regression
models, with df_num and df_denom as defined there. (not verified yet) | power | python | statsmodels/statsmodels | statsmodels/stats/power.py | https://github.com/statsmodels/statsmodels/blob/master/statsmodels/stats/power.py | BSD-3-Clause |
def solve_power(self, effect_size=None, df_num=None, df_denom=None,
alpha=None, power=None, ncc=1, **kwargs):
'''solve for any one parameter of the power of an F-test
for the one sample F-test the keywords are:
effect_size, df_num, df_denom, alpha, power
Exactly one needs to be ``None``, all others need numeric values.
The effect size is Cohen's ``f``, square root of ``f2``.
The sample size is given by ``nobs = df_denom + df_num + ncc``.
Warning: The meaning of df_num and df_denom is reversed.
Parameters
----------
effect_size : float
Standardized effect size. The effect size is here Cohen's ``f``,
square root of ``f2``.
df_num : int or float
Warning incorrect name
denominator degrees of freedom,
This corresponds to the number of constraints in Wald tests.
Sample size is given by ``nobs = df_denom + df_num + ncc``
df_denom : int or float
Warning incorrect name
numerator degrees of freedom.
This corresponds to the df_resid in Wald tests.
alpha : float in interval (0,1)
significance level, e.g. 0.05, is the probability of a type I
error, that is wrong rejections if the Null Hypothesis is true.
power : float in interval (0,1)
power of the test, e.g. 0.8, is one minus the probability of a
type II error. Power is the probability that the test correctly
rejects the Null Hypothesis if the Alternative Hypothesis is true.
ncc : int
degrees of freedom correction for non-centrality parameter.
see Notes
kwargs : empty
``kwargs`` are not used and included for backwards compatibility.
If ``nobs`` is used as keyword, then a warning is issued. All
other keywords in ``kwargs`` raise a ValueError.
Returns
-------
value : float
The value of the parameter that was set to None in the call. The
value solves the power equation given the remaining parameters.
Notes
-----
The method uses scipy.optimize for finding the value that satisfies
the power equation. It first uses ``brentq`` with a prior search for
bounds. If this fails to find a root, ``fsolve`` is used. If ``fsolve``
also fails, then, for ``alpha``, ``power`` and ``effect_size``,
``brentq`` with fixed bounds is used. However, there can still be cases
where this fails.
'''
if kwargs:
if "nobs" in kwargs:
warnings.warn("nobs is not used")
else:
raise ValueError(f"incorrect keyword(s) {kwargs}")
return super().solve_power(effect_size=effect_size,
df_num=df_num,
df_denom=df_denom,
alpha=alpha,
power=power,
ncc=ncc) | solve for any one parameter of the power of an F-test
for the one sample F-test the keywords are:
effect_size, df_num, df_denom, alpha, power
Exactly one needs to be ``None``, all others need numeric values.
The effect size is Cohen's ``f``, square root of ``f2``.
The sample size is given by ``nobs = df_denom + df_num + ncc``.
Warning: The meaning of df_num and df_denom is reversed.
Parameters
----------
effect_size : float
Standardized effect size. The effect size is here Cohen's ``f``,
square root of ``f2``.
df_num : int or float
Warning incorrect name
denominator degrees of freedom,
This corresponds to the number of constraints in Wald tests.
Sample size is given by ``nobs = df_denom + df_num + ncc``
df_denom : int or float
Warning incorrect name
numerator degrees of freedom.
This corresponds to the df_resid in Wald tests.
alpha : float in interval (0,1)
significance level, e.g. 0.05, is the probability of a type I
error, that is wrong rejections if the Null Hypothesis is true.
power : float in interval (0,1)
power of the test, e.g. 0.8, is one minus the probability of a
type II error. Power is the probability that the test correctly
rejects the Null Hypothesis if the Alternative Hypothesis is true.
ncc : int
degrees of freedom correction for non-centrality parameter.
see Notes
kwargs : empty
``kwargs`` are not used and included for backwards compatibility.
If ``nobs`` is used as keyword, then a warning is issued. All
other keywords in ``kwargs`` raise a ValueError.
Returns
-------
value : float
The value of the parameter that was set to None in the call. The
value solves the power equation given the remaining parameters.
Notes
-----
The method uses scipy.optimize for finding the value that satisfies
the power equation. It first uses ``brentq`` with a prior search for
bounds. If this fails to find a root, ``fsolve`` is used. If ``fsolve``
also fails, then, for ``alpha``, ``power`` and ``effect_size``,
``brentq`` with fixed bounds is used. However, there can still be cases
where this fails. | solve_power | python | statsmodels/statsmodels | statsmodels/stats/power.py | https://github.com/statsmodels/statsmodels/blob/master/statsmodels/stats/power.py | BSD-3-Clause |
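Given the reversed-name warning, a sketch of how this would be called for a Wald-type test with 2 constraints and 97 residual degrees of freedom (illustrative values; `FTestPower` is the class name assumed here):

from statsmodels.stats.power import FTestPower

# Per the docstring warning, df_num carries the *residual* (denominator) df
# and df_denom the number of constraints (numerator df).
pw = FTestPower().power(effect_size=0.3, df_num=97, df_denom=2,
                        alpha=0.05, ncc=1)
print(round(pw, 3))  # implied nobs = df_denom + df_num + ncc = 100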
def power(self, effect_size, df_num, df_denom, alpha, ncc=1):
'''Calculate the power of an F-test.
The effect size is Cohen's ``f^2``.
The sample size is given by ``nobs = df_denom + df_num + ncc``
Parameters
----------
effect_size : float
The effect size is here Cohen's ``f2``. This is equal to
the noncentrality of an F-test divided by nobs.
df_num : int or float
Numerator degrees of freedom,
This corresponds to the number of constraints in Wald tests.
df_denom : int or float
Denominator degrees of freedom.
This corresponds to the df_resid in Wald tests.
alpha : float in interval (0,1)
Significance level, e.g. 0.05, is the probability of a type I
error, that is wrong rejections if the Null Hypothesis is true.
ncc : int
Degrees of freedom correction for non-centrality parameter.
see Notes
Returns
-------
power : float
Power of the test, e.g. 0.8, is one minus the probability of a
type II error. Power is the probability that the test correctly
rejects the Null Hypothesis if the Alternative Hypothesis is true.
Notes
-----
The sample size is given implicitly by df_denom
set ncc=0 to match t-test, or f-test in LikelihoodModelResults.
ncc=1 matches the non-centrality parameter in R::pwr::pwr.f2.test
ftest_power with ncc=0 should also be correct for f_test in regression
models, with df_num and df_denom as defined there. (not verified yet)
'''
pow_ = ftest_power_f2(effect_size, df_num, df_denom, alpha, ncc=ncc)
return pow_ | Calculate the power of an F-test.
The effect size is Cohen's ``f^2``.
The sample size is given by ``nobs = df_denom + df_num + ncc``
Parameters
----------
effect_size : float
The effect size is here Cohen's ``f2``. This is equal to
the noncentrality of an F-test divided by nobs.
df_num : int or float
Numerator degrees of freedom,
This corresponds to the number of constraints in Wald tests.
df_denom : int or float
Denominator degrees of freedom.
This corresponds to the df_resid in Wald tests.
alpha : float in interval (0,1)
Significance level, e.g. 0.05, is the probability of a type I
error, that is wrong rejections if the Null Hypothesis is true.
ncc : int
Degrees of freedom correction for non-centrality parameter.
see Notes
Returns
-------
power : float
Power of the test, e.g. 0.8, is one minus the probability of a
type II error. Power is the probability that the test correctly
rejects the Null Hypothesis if the Alternative Hypothesis is true.
Notes
-----
The sample size is given implicitly by df_denom
set ncc=0 to match t-test, or f-test in LikelihoodModelResults.
ncc=1 matches the non-centrality parameter in R::pwr::pwr.f2.test
ftest_power with ncc=0 should also be correct for f_test in regression
models, with df_num and df_denom as defined there. (not verified yet) | power | python | statsmodels/statsmodels | statsmodels/stats/power.py | https://github.com/statsmodels/statsmodels/blob/master/statsmodels/stats/power.py | BSD-3-Clause |
def solve_power(self, effect_size=None, df_num=None, df_denom=None,
alpha=None, power=None, ncc=1):
'''Solve for any one parameter of the power of an F-test
for the one sample F-test the keywords are:
effect_size, df_num, df_denom, alpha, power
Exactly one needs to be ``None``, all others need numeric values.
The effect size is Cohen's ``f2``.
The sample size is given by ``nobs = df_denom + df_num + ncc``, and
can be found by solving for df_denom.
Parameters
----------
effect_size : float
The effect size is here Cohen's ``f2``. This is equal to
the noncentrality of an F-test divided by nobs.
df_num : int or float
Numerator degrees of freedom,
This corresponds to the number of constraints in Wald tests.
df_denom : int or float
Denominator degrees of freedom.
This corresponds to the df_resid in Wald tests.
alpha : float in interval (0,1)
significance level, e.g. 0.05, is the probability of a type I
error, that is wrong rejections if the Null Hypothesis is true.
power : float in interval (0,1)
power of the test, e.g. 0.8, is one minus the probability of a
type II error. Power is the probability that the test correctly
rejects the Null Hypothesis if the Alternative Hypothesis is true.
ncc : int
degrees of freedom correction for non-centrality parameter.
see Notes
Returns
-------
value : float
The value of the parameter that was set to None in the call. The
value solves the power equation given the remaining parameters.
Notes
-----
The function uses scipy.optimize for finding the value that satisfies
the power equation. It first uses ``brentq`` with a prior search for
bounds. If this fails to find a root, ``fsolve`` is used. If ``fsolve``
also fails, then, for ``alpha``, ``power`` and ``effect_size``,
``brentq`` with fixed bounds is used. However, there can still be cases
where this fails.
'''
return super().solve_power(effect_size=effect_size,
df_num=df_num,
df_denom=df_denom,
alpha=alpha,
power=power,
ncc=ncc) | Solve for any one parameter of the power of an F-test
for the one sample F-test the keywords are:
effect_size, df_num, df_denom, alpha, power
Exactly one needs to be ``None``, all others need numeric values.
The effect size is Cohen's ``f2``.
The sample size is given by ``nobs = df_denom + df_num + ncc``, and
can be found by solving for df_denom.
Parameters
----------
effect_size : float
The effect size is here Cohen's ``f2``. This is equal to
the noncentrality of an F-test divided by nobs.
df_num : int or float
Numerator degrees of freedom,
This corresponds to the number of constraints in Wald tests.
df_denom : int or float
Denominator degrees of freedom.
This corresponds to the df_resid in Wald tests.
alpha : float in interval (0,1)
significance level, e.g. 0.05, is the probability of a type I
error, that is wrong rejections if the Null Hypothesis is true.
power : float in interval (0,1)
power of the test, e.g. 0.8, is one minus the probability of a
type II error. Power is the probability that the test correctly
rejects the Null Hypothesis if the Alternative Hypothesis is true.
ncc : int
degrees of freedom correction for non-centrality parameter.
see Notes
Returns
-------
value : float
The value of the parameter that was set to None in the call. The
value solves the power equation given the remaining parameters.
Notes
-----
The function uses scipy.optimize for finding the value that satisfies
the power equation. It first uses ``brentq`` with a prior search for
bounds. If this fails to find a root, ``fsolve`` is used. If ``fsolve``
also fails, then, for ``alpha``, ``power`` and ``effect_size``,
``brentq`` with fixed bounds is used. However, there can still be cases
where this fails. | solve_power | python | statsmodels/statsmodels | statsmodels/stats/power.py | https://github.com/statsmodels/statsmodels/blob/master/statsmodels/stats/power.py | BSD-3-Clause |
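A sketch of the f2 parameterization: solve for df_denom, then recover the sample size as described above (illustrative values; `FTestPowerF2` is the class name assumed here):

from statsmodels.stats.power import FTestPowerF2

# Denominator df needed for 80% power with Cohen's f2 = 0.15 and 3 numerator df.
df_denom = FTestPowerF2().solve_power(effect_size=0.15, df_num=3,
                                      df_denom=None, alpha=0.05,
                                      power=0.8, ncc=1)
print(df_denom, df_denom + 3 + 1)  # df_denom roughly 73; nobs = df_denom + df_num + ncc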
def power(self, effect_size, nobs, alpha, k_groups=2):
'''Calculate the power of an F-test for one factor ANOVA.
Parameters
----------
effect_size : float
standardized effect size. The effect size is here Cohen's f, square
root of "f2".
nobs : int or float
sample size, number of observations.
alpha : float in interval (0,1)
significance level, e.g. 0.05, is the probability of a type I
error, that is wrong rejections if the Null Hypothesis is true.
k_groups : int or float
number of groups in the ANOVA or k-sample comparison. Default is 2.
Returns
-------
power : float
Power of the test, e.g. 0.8, is one minus the probability of a
type II error. Power is the probability that the test correctly
rejects the Null Hypothesis if the Alternative Hypothesis is true.
'''
return ftest_anova_power(effect_size, nobs, alpha, k_groups=k_groups) | Calculate the power of an F-test for one factor ANOVA.
Parameters
----------
effect_size : float
standardized effect size. The effect size is here Cohen's f, square
root of "f2".
nobs : int or float
sample size, number of observations.
alpha : float in interval (0,1)
significance level, e.g. 0.05, is the probability of a type I
error, that is wrong rejections if the Null Hypothesis is true.
k_groups : int or float
number of groups in the ANOVA or k-sample comparison. Default is 2.
Returns
-------
power : float
Power of the test, e.g. 0.8, is one minus the probability of a
type II error. Power is the probability that the test correctly
rejects the Null Hypothesis if the Alternative Hypothesis is true. | power | python | statsmodels/statsmodels | statsmodels/stats/power.py | https://github.com/statsmodels/statsmodels/blob/master/statsmodels/stats/power.py | BSD-3-Clause |
def solve_power(self, effect_size=None, nobs=None, alpha=None, power=None,
k_groups=2):
'''solve for any one parameter of the power of an F-test
for the one sample F-test the keywords are:
effect_size, nobs, alpha, power
Exactly one needs to be ``None``, all others need numeric values.
Parameters
----------
effect_size : float
standardized effect size, mean divided by the standard deviation.
effect size has to be positive.
nobs : int or float
sample size, number of observations.
alpha : float in interval (0,1)
significance level, e.g. 0.05, is the probability of a type I
error, that is wrong rejections if the Null Hypothesis is true.
power : float in interval (0,1)
power of the test, e.g. 0.8, is one minus the probability of a
type II error. Power is the probability that the test correctly
rejects the Null Hypothesis if the Alternative Hypothesis is true.
Returns
-------
value : float
The value of the parameter that was set to None in the call. The
value solves the power equation given the remaining parameters.
Notes
-----
The function uses scipy.optimize for finding the value that satisfies
the power equation. It first uses ``brentq`` with a prior search for
bounds. If this fails to find a root, ``fsolve`` is used. If ``fsolve``
also fails, then, for ``alpha``, ``power`` and ``effect_size``,
``brentq`` with fixed bounds is used. However, there can still be cases
where this fails.
'''
# update start values for root finding
if k_groups is not None:
self.start_ttp['nobs'] = k_groups * 10
self.start_bqexp['nobs'] = dict(low=k_groups * 2,
start_upp=k_groups * 10)
# first attempt at special casing
if effect_size is None:
return self._solve_effect_size(effect_size=effect_size,
nobs=nobs,
alpha=alpha,
k_groups=k_groups,
power=power)
return super().solve_power(effect_size=effect_size,
nobs=nobs,
alpha=alpha,
k_groups=k_groups,
power=power) | solve for any one parameter of the power of a F-test
for the one sample F-test the keywords are:
effect_size, nobs, alpha, power
Exactly one needs to be ``None``, all others need numeric values.
Parameters
----------
effect_size : float
standardized effect size, mean divided by the standard deviation.
effect size has to be positive.
nobs : int or float
sample size, number of observations.
alpha : float in interval (0,1)
significance level, e.g. 0.05, is the probability of a type I
error, that is wrong rejections if the Null Hypothesis is true.
power : float in interval (0,1)
power of the test, e.g. 0.8, is one minus the probability of a
type II error. Power is the probability that the test correctly
rejects the Null Hypothesis if the Alternative Hypothesis is true.
Returns
-------
value : float
The value of the parameter that was set to None in the call. The
value solves the power equation given the remaining parameters.
Notes
-----
The function uses scipy.optimize for finding the value that satisfies
the power equation. It first uses ``brentq`` with a prior search for
bounds. If this fails to find a root, ``fsolve`` is used. If ``fsolve``
also fails, then, for ``alpha``, ``power`` and ``effect_size``,
``brentq`` with fixed bounds is used. However, there can still be cases
where this fails. | solve_power | python | statsmodels/statsmodels | statsmodels/stats/power.py | https://github.com/statsmodels/statsmodels/blob/master/statsmodels/stats/power.py | BSD-3-Clause |
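A sketch of the ANOVA sample-size use case (illustrative values; `FTestAnovaPower` is the public class name assumed here):

from statsmodels.stats.power import FTestAnovaPower

# Total sample size for a one-way ANOVA: 3 groups, Cohen's f = 0.25,
# alpha = 0.05, target power 0.8.
nobs = FTestAnovaPower().solve_power(effect_size=0.25, nobs=None,
                                     alpha=0.05, power=0.8, k_groups=3)
print(nobs)  # roughly 158 observations in total (~53 per group)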
def _solve_effect_size(self, effect_size=None, nobs=None, alpha=None,
power=None, k_groups=2):
'''experimental, test failure in solve_power for effect_size
'''
def func(x):
effect_size = x
return self._power_identity(effect_size=effect_size,
nobs=nobs,
alpha=alpha,
k_groups=k_groups,
power=power)
val, r = optimize.brentq(func, 1e-8, 1-1e-8, full_output=True)
if not r.converged:
print(r)
return val | experimental, test failure in solve_power for effect_size | _solve_effect_size | python | statsmodels/statsmodels | statsmodels/stats/power.py | https://github.com/statsmodels/statsmodels/blob/master/statsmodels/stats/power.py | BSD-3-Clause |
def power(self, effect_size, nobs, alpha, n_bins, ddof=0):  # alternative='two-sided' only
'''Calculate the power of a chisquare test for one sample
Only two-sided alternative is implemented
Parameters
----------
effect_size : float
standardized effect size, according to Cohen's definition.
see :func:`statsmodels.stats.gof.chisquare_effectsize`
nobs : int or float
sample size, number of observations.
alpha : float in interval (0,1)
significance level, e.g. 0.05, is the probability of a type I
error, that is wrong rejections if the Null Hypothesis is true.
n_bins : int
number of bins or cells in the distribution.
Returns
-------
power : float
Power of the test, e.g. 0.8, is one minus the probability of a
type II error. Power is the probability that the test correctly
rejects the Null Hypothesis if the Alternative Hypothesis is true.
'''
from statsmodels.stats.gof import chisquare_power
return chisquare_power(effect_size, nobs, n_bins, alpha, ddof=ddof)
Only two-sided alternative is implemented
Parameters
----------
effect_size : float
standardized effect size, according to Cohen's definition.
see :func:`statsmodels.stats.gof.chisquare_effectsize`
nobs : int or float
sample size, number of observations.
alpha : float in interval (0,1)
significance level, e.g. 0.05, is the probability of a type I
error, that is wrong rejections if the Null Hypothesis is true.
n_bins : int
number of bins or cells in the distribution.
Returns
-------
power : float
Power of the test, e.g. 0.8, is one minus the probability of a
type II error. Power is the probability that the test correctly
rejects the Null Hypothesis if the Alternative Hypothesis is true. | power | python | statsmodels/statsmodels | statsmodels/stats/power.py | https://github.com/statsmodels/statsmodels/blob/master/statsmodels/stats/power.py | BSD-3-Clause |
def solve_power(self, effect_size=None, nobs=None, alpha=None,
power=None, n_bins=2):
'''solve for any one parameter of the power of a one sample chisquare-test
for the one sample chisquare-test the keywords are:
effect_size, nobs, alpha, power
Exactly one needs to be ``None``, all others need numeric values.
n_bins needs to be defined, a default=2 is used.
Parameters
----------
effect_size : float
standardized effect size, according to Cohen's definition.
see :func:`statsmodels.stats.gof.chisquare_effectsize`
nobs : int or float
sample size, number of observations.
alpha : float in interval (0,1)
significance level, e.g. 0.05, is the probability of a type I
error, that is wrong rejections if the Null Hypothesis is true.
power : float in interval (0,1)
power of the test, e.g. 0.8, is one minus the probability of a
type II error. Power is the probability that the test correctly
rejects the Null Hypothesis if the Alternative Hypothesis is true.
n_bins : int
number of bins or cells in the distribution
Returns
-------
value : float
The value of the parameter that was set to None in the call. The
value solves the power equation given the remaining parameters.
Notes
-----
The function uses scipy.optimize for finding the value that satisfies
the power equation. It first uses ``brentq`` with a prior search for
bounds. If this fails to find a root, ``fsolve`` is used. If ``fsolve``
also fails, then, for ``alpha``, ``power`` and ``effect_size``,
``brentq`` with fixed bounds is used. However, there can still be cases
where this fails.
'''
return super().solve_power(effect_size=effect_size,
nobs=nobs,
n_bins=n_bins,
alpha=alpha,
power=power) | solve for any one parameter of the power of a one sample chisquare-test
for the one sample chisquare-test the keywords are:
effect_size, nobs, alpha, power
Exactly one needs to be ``None``, all others need numeric values.
n_bins needs to be defined, a default=2 is used.
Parameters
----------
effect_size : float
standardized effect size, according to Cohen's definition.
see :func:`statsmodels.stats.gof.chisquare_effectsize`
nobs : int or float
sample size, number of observations.
alpha : float in interval (0,1)
significance level, e.g. 0.05, is the probability of a type I
error, that is wrong rejections if the Null Hypothesis is true.
power : float in interval (0,1)
power of the test, e.g. 0.8, is one minus the probability of a
type II error. Power is the probability that the test correctly
rejects the Null Hypothesis if the Alternative Hypothesis is true.
n_bins : int
number of bins or cells in the distribution
Returns
-------
value : float
The value of the parameter that was set to None in the call. The
value solves the power equation given the remaining parameters.
Notes
-----
The function uses scipy.optimize for finding the value that satisfies
the power equation. It first uses ``brentq`` with a prior search for
bounds. If this fails to find a root, ``fsolve`` is used. If ``fsolve``
also fails, then, for ``alpha``, ``power`` and ``effect_size``,
``brentq`` with fixed bounds is used. However, there can still be cases
where this fails. | solve_power | python | statsmodels/statsmodels | statsmodels/stats/power.py | https://github.com/statsmodels/statsmodels/blob/master/statsmodels/stats/power.py | BSD-3-Clause |
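A sketch chaining the effect-size helper referenced above into the solver (illustrative values; `GofChisquarePower` is the public class name assumed here):

from statsmodels.stats.gof import chisquare_effectsize
from statsmodels.stats.power import GofChisquarePower

p0 = [0.25, 0.25, 0.25, 0.25]   # null cell probabilities
p1 = [0.40, 0.20, 0.20, 0.20]   # alternative cell probabilities
es = chisquare_effectsize(p0, p1)   # Cohen's w, about 0.35 here
nobs = GofChisquarePower().solve_power(effect_size=es, nobs=None,
                                       alpha=0.05, power=0.8,
                                       n_bins=len(p0))
print(es, nobs)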
def power(self, effect_size, nobs1, alpha, ratio=1,
alternative='two-sided'):
'''Calculate the power of a chisquare test for two independent samples
Parameters
----------
effect_size : float
standardized effect size, difference between the two means divided
by the standard deviation. effect size has to be positive.
nobs1 : int or float
number of observations of sample 1. The number of observations of
sample two is ratio times the size of sample 1,
i.e. ``nobs2 = nobs1 * ratio``
alpha : float in interval (0,1)
significance level, e.g. 0.05, is the probability of a type I
error, that is wrong rejections if the Null Hypothesis is true.
ratio : float
ratio of the number of observations in sample 2 relative to
sample 1. see description of nobs1
The default for ratio is 1; to solve for ratio given the other
arguments it has to be explicitly set to None.
alternative : str, 'two-sided' (default) or 'one-sided'
extra argument to choose whether the power is calculated for a
two-sided (default) or one sided test.
'one-sided' assumes we are in the relevant tail.
Returns
-------
power : float
Power of the test, e.g. 0.8, is one minus the probability of a
type II error. Power is the probability that the test correctly
rejects the Null Hypothesis if the Alternative Hypothesis is true.
'''
from statsmodels.stats.gof import chisquare_power
nobs2 = nobs1 * ratio
# equivalent to nobs = n1*n2/(n1+n2) = n1*ratio/(1+ratio)
nobs = 1. / (1. / nobs1 + 1. / nobs2)
return chisquare_power(effect_size, nobs, alpha) | Calculate the power of a chisquare test for two independent samples
Parameters
----------
effect_size : float
standardized effect size, difference between the two means divided
by the standard deviation. effect size has to be positive.
nobs1 : int or float
number of observations of sample 1. The number of observations of
sample two is ratio times the size of sample 1,
i.e. ``nobs2 = nobs1 * ratio``
alpha : float in interval (0,1)
significance level, e.g. 0.05, is the probability of a type I
error, that is wrong rejections if the Null Hypothesis is true.
ratio : float
ratio of the number of observations in sample 2 relative to
sample 1. see description of nobs1
The default for ratio is 1; to solve for ratio given the other
arguments it has to be explicitly set to None.
alternative : str, 'two-sided' (default) or 'one-sided'
extra argument to choose whether the power is calculated for a
two-sided (default) or one sided test.
'one-sided' assumes we are in the relevant tail.
Returns
-------
power : float
Power of the test, e.g. 0.8, is one minus the probability of a
type II error. Power is the probability that the test correctly
rejects the Null Hypothesis if the Alternative Hypothesis is true. | power | python | statsmodels/statsmodels | statsmodels/stats/power.py | https://github.com/statsmodels/statsmodels/blob/master/statsmodels/stats/power.py | BSD-3-Clause |
def solve_power(self, effect_size=None, nobs1=None, alpha=None, power=None,
ratio=1., alternative='two-sided'):
'''solve for any one parameter of the power of a two-sample chisquare test
for the chisquare test the keywords are:
effect_size, nobs1, alpha, power, ratio
exactly one needs to be ``None``, all others need numeric values
Parameters
----------
effect_size : float
standardized effect size, difference between the two means divided
by the standard deviation.
nobs1 : int or float
number of observations of sample 1. The number of observations of
sample two is ratio times the size of sample 1,
i.e. ``nobs2 = nobs1 * ratio``
alpha : float in interval (0,1)
significance level, e.g. 0.05, is the probability of a type I
error, that is wrong rejections if the Null Hypothesis is true.
power : float in interval (0,1)
power of the test, e.g. 0.8, is one minus the probability of a
type II error. Power is the probability that the test correctly
rejects the Null Hypothesis if the Alternative Hypothesis is true.
ratio : float
ratio of the number of observations in sample 2 relative to
sample 1. see description of nobs1
The default for ratio is 1; to solve for ratio given the other
arguments it has to be explicitly set to None.
alternative : str, 'two-sided' (default) or 'one-sided'
extra argument to choose whether the power is calculated for a
two-sided (default) or one sided test.
'one-sided' assumes we are in the relevant tail.
Returns
-------
value : float
The value of the parameter that was set to None in the call. The
value solves the power equation given the remaining parameters.
Notes
-----
The function uses scipy.optimize for finding the value that satisfies
the power equation. It first uses ``brentq`` with a prior search for
bounds. If this fails to find a root, ``fsolve`` is used. If ``fsolve``
also fails, then, for ``alpha``, ``power`` and ``effect_size``,
``brentq`` with fixed bounds is used. However, there can still be cases
where this fails.
'''
return super().solve_power(effect_size=effect_size,
nobs1=nobs1,
alpha=alpha,
power=power,
ratio=ratio,
alternative=alternative) | solve for any one parameter of the power of a two sample z-test
for z-test the keywords are:
effect_size, nobs1, alpha, power, ratio
exactly one needs to be ``None``, all others need numeric values
Parameters
----------
effect_size : float
standardized effect size, difference between the two means divided
by the standard deviation.
nobs1 : int or float
number of observations of sample 1. The number of observations of
sample two is ratio times the size of sample 1,
i.e. ``nobs2 = nobs1 * ratio``
alpha : float in interval (0,1)
significance level, e.g. 0.05, is the probability of a type I
error, that is wrong rejections if the Null Hypothesis is true.
power : float in interval (0,1)
power of the test, e.g. 0.8, is one minus the probability of a
type II error. Power is the probability that the test correctly
rejects the Null Hypothesis if the Alternative Hypothesis is true.
ratio : float
ratio of the number of observations in sample 2 relative to
sample 1. see description of nobs1
The default for ratio is 1; to solve for ratio given the other
arguments it has to be explicitly set to None.
alternative : str, 'two-sided' (default) or 'one-sided'
extra argument to choose whether the power is calculated for a
two-sided (default) or one sided test.
'one-sided' assumes we are in the relevant tail.
Returns
-------
value : float
The value of the parameter that was set to None in the call. The
value solves the power equation given the remaining parameters.
Notes
-----
The function uses scipy.optimize for finding the value that satisfies
the power equation. It first uses ``brentq`` with a prior search for
bounds. If this fails to find a root, ``fsolve`` is used. If ``fsolve``
also fails, then, for ``alpha``, ``power`` and ``effect_size``,
``brentq`` with fixed bounds is used. However, there can still be cases
where this fails. | solve_power | python | statsmodels/statsmodels | statsmodels/stats/power.py | https://github.com/statsmodels/statsmodels/blob/master/statsmodels/stats/power.py | BSD-3-Clause |
def _bound_proportion_confint(
func: Callable[[float], float], qi: float, lower: bool = True
) -> float:
"""
Try hard to find a bound different from eps or 1 - eps in proportion_confint
Parameters
----------
func : callable
Callable function to use as the objective of the search
qi : float
The empirical success rate
lower : bool
Whether to find a lower bound for the left side of the CI
Returns
-------
float
The coarse bound
"""
default = FLOAT_INFO.eps if lower else 1.0 - FLOAT_INFO.eps
def step(v):
return v / 8 if lower else v + (1.0 - v) / 8
x = step(qi)
w = func(x)
cnt = 1
while w > 0 and cnt < 10:
x = step(x)
w = func(x)
cnt += 1
return x if cnt < 10 else default | Try hard to find a bound different from eps/1 - eps in proportion_confint
Parameters
----------
func : callable
Callable function to use as the objective of the search
qi : float
The empirical success rate
lower : bool
Whether to find a lower bound for the left side of the CI
Returns
-------
float
The coarse bound | _bound_proportion_confint | python | statsmodels/statsmodels | statsmodels/stats/proportion.py | https://github.com/statsmodels/statsmodels/blob/master/statsmodels/stats/proportion.py | BSD-3-Clause |
def _bisection_search_conservative(
func: Callable[[float], float], lb: float, ub: float, steps: int = 27
) -> tuple[float, float]:
"""
Private function used as a fallback by proportion_confint
Used when brentq returns a non-conservative bound for the CI
Parameters
----------
func : callable
Callable function to use as the objective of the search
lb : float
Lower bound
ub : float
Upper bound
steps : int
Number of steps to use in the bisection
Returns
-------
est : float
The estimated value. Will always produce a negative value of func
func_val : float
The value of the function at the estimate
"""
upper = func(ub)
lower = func(lb)
best = upper if upper < 0 else lower
best_pt = ub if upper < 0 else lb
if np.sign(lower) == np.sign(upper):
raise ValueError("problem with signs")
mp = (ub + lb) / 2
mid = func(mp)
if (mid < 0) and (mid > best):
best = mid
best_pt = mp
for _ in range(steps):
if np.sign(mid) == np.sign(upper):
ub = mp
upper = mid
else:
lb = mp
mp = (ub + lb) / 2
mid = func(mp)
if (mid < 0) and (mid > best):
best = mid
best_pt = mp
return best_pt, best | Private function used as a fallback by proportion_confint
Used when brentq returns a non-conservative bound for the CI
Parameters
----------
func : callable
Callable function to use as the objective of the search
lb : float
Lower bound
ub : float
Upper bound
steps : int
Number of steps to use in the bisection
Returns
-------
est : float
The estimated value. Will always produce a negative value of func
func_val : float
The value of the function at the estimate | _bisection_search_conservative | python | statsmodels/statsmodels | statsmodels/stats/proportion.py | https://github.com/statsmodels/statsmodels/blob/master/statsmodels/stats/proportion.py | BSD-3-Clause |
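A toy sketch of the conservative search above (this is a private helper, not public API; the objective must change sign on [lb, ub]):

from statsmodels.stats.proportion import _bisection_search_conservative

# Root of f(x) = x - 0.3 is 0.3; the returned estimate approaches it from the
# side where f is negative, so the reported bound never overshoots.
est, fval = _bisection_search_conservative(lambda x: x - 0.3, 0.0, 1.0)
print(est, fval)  # est just below 0.3, fval <= 0 by construction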
def proportion_confint(count, nobs, alpha: float = 0.05, method="normal",
                       alternative: str = "two-sided"):
"""
Confidence interval for a binomial proportion
Parameters
----------
count : {int or float, array_like}
number of successes, can be pandas Series or DataFrame. Arrays
must contain integer values if method is "binom_test".
nobs : {int or float, array_like}
total number of trials. Arrays must contain integer values if method
is "binom_test".
alpha : float
Significance level, default 0.05. Must be in (0, 1)
method : {"normal", "agresti_coull", "beta", "wilson", "binom_test"}
default: "normal"
method to use for confidence interval. Supported methods:
- `normal` : asymptotic normal approximation
- `agresti_coull` : Agresti-Coull interval
- `beta` : Clopper-Pearson interval based on Beta distribution
- `wilson` : Wilson Score interval
- `jeffreys` : Jeffreys Bayesian Interval
- `binom_test` : Numerical inversion of binom_test
alternative : {"two-sided", "larger", "smaller"}
default: "two-sided"
specifies whether to calculate a two-sided or one-sided confidence interval.
Returns
-------
ci_low, ci_upp : {float, ndarray, Series DataFrame}
lower and upper confidence bounds with coverage (approximately) 1-alpha.
When a pandas object is returned, then the index is taken from `count`.
When `alternative` is not "two-sided", the lower or upper bound is set to
0 or 1, respectively.
Notes
-----
Beta, the Clopper-Pearson exact interval has coverage at least 1-alpha,
but is in general conservative. Most of the other methods have average
coverage equal to 1-alpha, but will have smaller coverage in some cases.
The "beta" and "jeffreys" interval are central, they use alpha/2 in each
tail, and alpha is not adjusted at the boundaries. In the extreme case
when `count` is zero or equal to `nobs`, then the coverage will be only
1 - alpha/2 in the case of "beta".
The confidence intervals are clipped to be in the [0, 1] interval in the
case of "normal" and "agresti_coull".
Method "binom_test" directly inverts the binomial test in scipy.stats.
which has discrete steps.
TODO: binom_test intervals raise an exception in small samples if one
interval bound is close to zero or one.
References
----------
.. [*] https://en.wikipedia.org/wiki/Binomial_proportion_confidence_interval
.. [*] Brown, Lawrence D.; Cai, T. Tony; DasGupta, Anirban (2001).
"Interval Estimation for a Binomial Proportion", Statistical
Science 16 (2): 101–133. doi:10.1214/ss/1009213286.
"""
is_scalar = np.isscalar(count) and np.isscalar(nobs)
is_pandas = isinstance(count, (pd.Series, pd.DataFrame))
count_a = array_like(count, "count", optional=False, ndim=None)
nobs_a = array_like(nobs, "nobs", optional=False, ndim=None)
def _check(x: np.ndarray, name: str) -> np.ndarray:
if np.issubdtype(x.dtype, np.integer):
return x
y = x.astype(np.int64, casting="unsafe")
if np.any(y != x):
raise ValueError(
f"{name} must have an integral dtype. Found data with "
f"dtype {x.dtype}"
)
return y
if method == "binom_test":
count_a = _check(np.asarray(count_a), "count")
nobs_a = _check(np.asarray(nobs_a), "nobs")
q_ = count_a / nobs_a
if alternative == 'two-sided':
if method != "binom_test":
alpha = alpha / 2.0
elif alternative not in ['larger', 'smaller']:
raise NotImplementedError(f"alternative {alternative} is not available")
if method == "normal":
std_ = np.sqrt(q_ * (1 - q_) / nobs_a)
dist = stats.norm.isf(alpha) * std_
ci_low = q_ - dist
ci_upp = q_ + dist
elif method == "binom_test" and alternative == 'two-sided':
def func_factory(count: int, nobs: int) -> Callable[[float], float]:
if hasattr(stats, "binomtest"):
def func(qi):
return stats.binomtest(count, nobs, p=qi).pvalue - alpha
else:
# Remove after min SciPy >= 1.7
def func(qi):
return stats.binom_test(count, nobs, p=qi) - alpha
return func
bcast = np.broadcast(count_a, nobs_a)
ci_low = np.zeros(bcast.shape)
ci_upp = np.zeros(bcast.shape)
index = bcast.index
for c, n in bcast:
# Enforce symmetry
reverse = False
_q = q_.flat[index]
if c > n // 2:
c = n - c
reverse = True
_q = 1 - _q
func = func_factory(c, n)
if c == 0:
ci_low.flat[index] = 0.0
else:
lower_bnd = _bound_proportion_confint(func, _q, lower=True)
val, _z = optimize.brentq(
func, lower_bnd, _q, full_output=True
)
if func(val) > 0:
power = 10
new_lb = val - (val - lower_bnd) / 2**power
while func(new_lb) > 0 and power >= 0:
power -= 1
new_lb = val - (val - lower_bnd) / 2**power
val, _ = _bisection_search_conservative(func, new_lb, _q)
ci_low.flat[index] = val
if c == n:
ci_upp.flat[index] = 1.0
else:
upper_bnd = _bound_proportion_confint(func, _q, lower=False)
val, _z = optimize.brentq(
func, _q, upper_bnd, full_output=True
)
if func(val) > 0:
power = 10
new_ub = val + (upper_bnd - val) / 2**power
while func(new_ub) > 0 and power >= 0:
power -= 1
new_ub = val + (upper_bnd - val) / 2**power
val, _ = _bisection_search_conservative(func, _q, new_ub)
ci_upp.flat[index] = val
if reverse:
temp = ci_upp.flat[index]
ci_upp.flat[index] = 1 - ci_low.flat[index]
ci_low.flat[index] = 1 - temp
index = bcast.index
elif method == "beta" or (method == "binom_test" and alternative != 'two-sided'):
ci_low = stats.beta.ppf(alpha, count_a, nobs_a - count_a + 1)
ci_upp = stats.beta.isf(alpha, count_a + 1, nobs_a - count_a)
if np.ndim(ci_low) > 0:
ci_low.flat[q_.flat == 0] = 0
ci_upp.flat[q_.flat == 1] = 1
else:
ci_low = 0 if q_ == 0 else ci_low
ci_upp = 1 if q_ == 1 else ci_upp
elif method == "agresti_coull":
crit = stats.norm.isf(alpha)
nobs_c = nobs_a + crit**2
q_c = (count_a + crit**2 / 2.0) / nobs_c
std_c = np.sqrt(q_c * (1.0 - q_c) / nobs_c)
dist = crit * std_c
ci_low = q_c - dist
ci_upp = q_c + dist
elif method == "wilson":
crit = stats.norm.isf(alpha)
crit2 = crit**2
denom = 1 + crit2 / nobs_a
center = (q_ + crit2 / (2 * nobs_a)) / denom
dist = crit * np.sqrt(
q_ * (1.0 - q_) / nobs_a + crit2 / (4.0 * nobs_a**2)
)
dist /= denom
ci_low = center - dist
ci_upp = center + dist
# method adjusted to be more forgiving of misspellings or incorrect option name
elif method[:4] == "jeff":
ci_low = stats.beta.ppf(alpha, count_a + 0.5, nobs_a - count_a + 0.5)
ci_upp = stats.beta.isf(alpha, count_a + 0.5, nobs_a - count_a + 0.5)
else:
raise NotImplementedError(f"method {method} is not available")
if method in ["normal", "agresti_coull"]:
ci_low = np.clip(ci_low, 0, 1)
ci_upp = np.clip(ci_upp, 0, 1)
if is_pandas:
container = pd.Series if isinstance(count, pd.Series) else pd.DataFrame
ci_low = container(ci_low, index=count.index)
ci_upp = container(ci_upp, index=count.index)
if alternative == 'larger':
ci_low = 0
elif alternative == 'smaller':
ci_upp = 1
if is_scalar:
return float(ci_low), float(ci_upp)
return ci_low, ci_upp | Confidence interval for a binomial proportion
Parameters
----------
count : {int or float, array_like}
number of successes, can be pandas Series or DataFrame. Arrays
must contain integer values if method is "binom_test".
nobs : {int or float, array_like}
total number of trials. Arrays must contain integer values if method
is "binom_test".
alpha : float
Significance level, default 0.05. Must be in (0, 1)
method : {"normal", "agresti_coull", "beta", "wilson", "binom_test"}
default: "normal"
method to use for confidence interval. Supported methods:
- `normal` : asymptotic normal approximation
- `agresti_coull` : Agresti-Coull interval
- `beta` : Clopper-Pearson interval based on Beta distribution
- `wilson` : Wilson Score interval
- `jeffreys` : Jeffreys Bayesian Interval
- `binom_test` : Numerical inversion of binom_test
alternative : {"two-sided", "larger", "smaller"}
default: "two-sided"
specifies whether to calculate a two-sided or one-sided confidence interval.
Returns
-------
ci_low, ci_upp : {float, ndarray, Series DataFrame}
lower and upper confidence bounds with coverage (approximately) 1-alpha.
When a pandas object is returned, then the index is taken from `count`.
When `alternative` is not "two-sided", the lower or upper bound is set to
0 or 1, respectively.
Notes
-----
Beta, the Clopper-Pearson exact interval has coverage at least 1-alpha,
but is in general conservative. Most of the other methods have average
coverage equal to 1-alpha, but will have smaller coverage in some cases.
The "beta" and "jeffreys" interval are central, they use alpha/2 in each
tail, and alpha is not adjusted at the boundaries. In the extreme case
when `count` is zero or equal to `nobs`, then the coverage will be only
1 - alpha/2 in the case of "beta".
The confidence intervals are clipped to be in the [0, 1] interval in the
case of "normal" and "agresti_coull".
Method "binom_test" directly inverts the binomial test in scipy.stats.
which has discrete steps.
TODO: binom_test intervals raise an exception in small samples if one
interval bound is close to zero or one.
References
----------
.. [*] https://en.wikipedia.org/wiki/Binomial_proportion_confidence_interval
.. [*] Brown, Lawrence D.; Cai, T. Tony; DasGupta, Anirban (2001).
"Interval Estimation for a Binomial Proportion", Statistical
Science 16 (2): 101–133. doi:10.1214/ss/1009213286. | proportion_confint | python | statsmodels/statsmodels | statsmodels/stats/proportion.py | https://github.com/statsmodels/statsmodels/blob/master/statsmodels/stats/proportion.py | BSD-3-Clause |
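A usage sketch comparing the interval methods documented above (illustrative counts):

from statsmodels.stats.proportion import proportion_confint

# 9 successes in 50 trials; the exact "beta" (Clopper-Pearson) interval is the
# widest here, consistent with the conservativeness noted in the docstring.
for m in ("normal", "wilson", "jeffreys", "agresti_coull", "beta"):
    low, upp = proportion_confint(9, 50, alpha=0.05, method=m)
    print(f"{m:>14}: [{low:.4f}, {upp:.4f}]")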
def poisson_interval(interval, p):
"""
Compute P(b <= Z <= a) where Z ~ Poisson(p) and
`interval = (b, a)`.
"""
b, a = interval
prob = stats.poisson.cdf(a, p) - stats.poisson.cdf(b - 1, p)
return prob | Compute P(b <= Z <= a) where Z ~ Poisson(p) and
`interval = (b, a)`. | multinomial_proportions_confint.poisson_interval | python | statsmodels/statsmodels | statsmodels/stats/proportion.py | https://github.com/statsmodels/statsmodels/blob/master/statsmodels/stats/proportion.py | BSD-3-Clause |
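A quick numeric check of the cdf identity used above, with scipy directly:

from scipy import stats

# P(b <= Z <= a) for Z ~ Poisson(p) via the cdf difference vs. a direct sum.
b, a, p = 2, 7, 4.0
via_cdf = stats.poisson.cdf(a, p) - stats.poisson.cdf(b - 1, p)
via_sum = sum(stats.poisson.pmf(k, p) for k in range(b, a + 1))
print(via_cdf, via_sum)  # the two agree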
def truncated_poisson_factorial_moment(interval, r, p):
"""
Compute mu_r, the r-th factorial moment of a poisson random
variable of parameter `p` truncated to `interval = (b, a)`.
"""
b, a = interval
return p ** r * (1 - ((poisson_interval((a - r + 1, a), p) -
poisson_interval((b - r, b - 1), p)) /
poisson_interval((b, a), p))) | Compute mu_r, the r-th factorial moment of a poisson random
variable of parameter `p` truncated to `interval = (b, a)`. | multinomial_proportions_confint.truncated_poisson_factorial_moment | python | statsmodels/statsmodels | statsmodels/stats/proportion.py | https://github.com/statsmodels/statsmodels/blob/master/statsmodels/stats/proportion.py | BSD-3-Clause |
def edgeworth(intervals):
"""
Compute the Edgeworth expansion term of Sison & Glaz's formula
(1) (approximated probability for multinomial proportions in a
given box).
"""
# Compute means and central moments of the truncated poisson
# variables.
mu_r1, mu_r2, mu_r3, mu_r4 = (
np.array([truncated_poisson_factorial_moment(interval, r, p)
for (interval, p) in zip(intervals, counts)])
for r in range(1, 5)
)
mu = mu_r1
mu2 = mu_r2 + mu - mu ** 2
mu3 = mu_r3 + mu_r2 * (3 - 3 * mu) + mu - 3 * mu ** 2 + 2 * mu ** 3
mu4 = (mu_r4 + mu_r3 * (6 - 4 * mu) +
mu_r2 * (7 - 12 * mu + 6 * mu ** 2) +
mu - 4 * mu ** 2 + 6 * mu ** 3 - 3 * mu ** 4)
# Compute expansion factors, gamma_1 and gamma_2.
g1 = mu3.sum() / mu2.sum() ** 1.5
g2 = (mu4.sum() - 3 * (mu2 ** 2).sum()) / mu2.sum() ** 2
# Compute the expansion itself.
x = (n - mu.sum()) / np.sqrt(mu2.sum())
phi = np.exp(- x ** 2 / 2) / np.sqrt(2 * np.pi)
H3 = x ** 3 - 3 * x
H4 = x ** 4 - 6 * x ** 2 + 3
H6 = x ** 6 - 15 * x ** 4 + 45 * x ** 2 - 15
f = phi * (1 + g1 * H3 / 6 + g2 * H4 / 24 + g1 ** 2 * H6 / 72)
return f / np.sqrt(mu2.sum()) | Compute the Edgeworth expansion term of Sison & Glaz's formula
(1) (approximated probability for multinomial proportions in a
given box). | multinomial_proportions_confint.edgeworth | python | statsmodels/statsmodels | statsmodels/stats/proportion.py | https://github.com/statsmodels/statsmodels/blob/master/statsmodels/stats/proportion.py | BSD-3-Clause |
def approximated_multinomial_interval(intervals):
"""
Compute approximated probability for Multinomial(n, proportions)
to be in `intervals` (Sison & Glaz's formula (1)).
"""
return np.exp(
np.sum(np.log([poisson_interval(interval, p)
for (interval, p) in zip(intervals, counts)])) +
np.log(edgeworth(intervals)) -
np.log(stats.poisson._pmf(n, n))
) | Compute approximated probability for Multinomial(n, proportions)
to be in `intervals` (Sison & Glaz's formula (1)). | multinomial_proportions_confint.approximated_multinomial_interval | python | statsmodels/statsmodels | statsmodels/stats/proportion.py | https://github.com/statsmodels/statsmodels/blob/master/statsmodels/stats/proportion.py | BSD-3-Clause |
def nu(c):
"""
Compute interval coverage for a given `c` (Sison & Glaz's
formula (7)).
"""
return approximated_multinomial_interval(
[(np.maximum(count - c, 0), np.minimum(count + c, n))
for count in counts]) | Compute interval coverage for a given `c` (Sison & Glaz's
formula (7)). | multinomial_proportions_confint.nu | python | statsmodels/statsmodels | statsmodels/stats/proportion.py | https://github.com/statsmodels/statsmodels/blob/master/statsmodels/stats/proportion.py | BSD-3-Clause |
def multinomial_proportions_confint(counts, alpha=0.05, method='goodman'):
"""
Confidence intervals for multinomial proportions.
Parameters
----------
counts : array_like of int, 1-D
Number of observations in each category.
alpha : float in (0, 1), optional
Significance level, defaults to 0.05.
method : {'goodman', 'sison-glaz'}, optional
Method to use to compute the confidence intervals; available methods
are:
- `goodman`: based on a chi-squared approximation, valid if all
        values in `counts` are greater than or equal to 5 [2]_
- `sison-glaz`: less conservative than `goodman`, but only valid if
`counts` has 7 or more categories (``len(counts) >= 7``) [3]_
Returns
-------
confint : ndarray, 2-D
Array of [lower, upper] confidence levels for each category, such that
overall coverage is (approximately) `1-alpha`.
Raises
------
ValueError
If `alpha` is not in `(0, 1)` (bounds excluded), or if the values in
        `counts` are not all non-negative.
NotImplementedError
        If `method` is not known.
Exception
When ``method == 'sison-glaz'``, if for some reason `c` cannot be
computed; this signals a bug and should be reported.
Notes
-----
The `goodman` method [2]_ is based on approximating a statistic based on
the multinomial as a chi-squared random variable. The usual recommendation
is that this is valid if all the values in `counts` are greater than or
equal to 5. There is no condition on the number of categories for this
method.
The `sison-glaz` method [3]_ approximates the multinomial probabilities,
and evaluates that with a maximum-likelihood estimator. The first
approximation is an Edgeworth expansion that converges when the number of
categories goes to infinity, and the maximum-likelihood estimator converges
when the number of observations (``sum(counts)``) goes to infinity. In
their paper, Sison & Glaz demo their method with at least 7 categories, so
``len(counts) >= 7`` with all values in `counts` at or above 5 can be used
as a rule of thumb for the validity of this method. This method is less
conservative than the `goodman` method (i.e. it will yield confidence
intervals closer to the desired significance level), but produces
confidence intervals of uniform width over all categories (except when the
intervals reach 0 or 1, in which case they are truncated), which makes it
most useful when proportions are of similar magnitude.
Aside from the original sources ([1]_, [2]_, and [3]_), the implementation
uses the formulas (though not the code) presented in [4]_ and [5]_.
References
----------
.. [1] Levin, Bruce, "A representation for multinomial cumulative
distribution functions," The Annals of Statistics, Vol. 9, No. 5,
1981, pp. 1123-1126.
.. [2] Goodman, L.A., "On simultaneous confidence intervals for multinomial
proportions," Technometrics, Vol. 7, No. 2, 1965, pp. 247-254.
.. [3] Sison, Cristina P., and Joseph Glaz, "Simultaneous Confidence
Intervals and Sample Size Determination for Multinomial
Proportions," Journal of the American Statistical Association,
Vol. 90, No. 429, 1995, pp. 366-369.
.. [4] May, Warren L., and William D. Johnson, "A SAS® macro for
constructing simultaneous confidence intervals for multinomial
proportions," Computer methods and programs in Biomedicine, Vol. 53,
No. 3, 1997, pp. 153-162.
.. [5] May, Warren L., and William D. Johnson, "Constructing two-sided
simultaneous confidence intervals for multinomial proportions for
small counts in a large number of cells," Journal of Statistical
Software, Vol. 5, No. 6, 2000, pp. 1-24.
"""
if alpha <= 0 or alpha >= 1:
raise ValueError('alpha must be in (0, 1), bounds excluded')
counts = np.array(counts, dtype=float)
if (counts < 0).any():
raise ValueError('counts must be >= 0')
n = counts.sum()
k = len(counts)
proportions = counts / n
if method == 'goodman':
chi2 = stats.chi2.ppf(1 - alpha / k, 1)
delta = chi2 ** 2 + (4 * n * proportions * chi2 * (1 - proportions))
region = ((2 * n * proportions + chi2 +
np.array([- np.sqrt(delta), np.sqrt(delta)])) /
(2 * (chi2 + n))).T
elif method[:5] == 'sison': # We accept any name starting with 'sison'
# Define a few functions we'll use a lot.
def poisson_interval(interval, p):
"""
Compute P(b <= Z <= a) where Z ~ Poisson(p) and
`interval = (b, a)`.
"""
b, a = interval
prob = stats.poisson.cdf(a, p) - stats.poisson.cdf(b - 1, p)
return prob
def truncated_poisson_factorial_moment(interval, r, p):
"""
Compute mu_r, the r-th factorial moment of a poisson random
variable of parameter `p` truncated to `interval = (b, a)`.
"""
b, a = interval
return p ** r * (1 - ((poisson_interval((a - r + 1, a), p) -
poisson_interval((b - r, b - 1), p)) /
poisson_interval((b, a), p)))
def edgeworth(intervals):
"""
Compute the Edgeworth expansion term of Sison & Glaz's formula
(1) (approximated probability for multinomial proportions in a
given box).
"""
# Compute means and central moments of the truncated poisson
# variables.
mu_r1, mu_r2, mu_r3, mu_r4 = (
np.array([truncated_poisson_factorial_moment(interval, r, p)
for (interval, p) in zip(intervals, counts)])
for r in range(1, 5)
)
mu = mu_r1
mu2 = mu_r2 + mu - mu ** 2
mu3 = mu_r3 + mu_r2 * (3 - 3 * mu) + mu - 3 * mu ** 2 + 2 * mu ** 3
mu4 = (mu_r4 + mu_r3 * (6 - 4 * mu) +
mu_r2 * (7 - 12 * mu + 6 * mu ** 2) +
mu - 4 * mu ** 2 + 6 * mu ** 3 - 3 * mu ** 4)
# Compute expansion factors, gamma_1 and gamma_2.
g1 = mu3.sum() / mu2.sum() ** 1.5
g2 = (mu4.sum() - 3 * (mu2 ** 2).sum()) / mu2.sum() ** 2
# Compute the expansion itself.
x = (n - mu.sum()) / np.sqrt(mu2.sum())
phi = np.exp(- x ** 2 / 2) / np.sqrt(2 * np.pi)
H3 = x ** 3 - 3 * x
H4 = x ** 4 - 6 * x ** 2 + 3
H6 = x ** 6 - 15 * x ** 4 + 45 * x ** 2 - 15
f = phi * (1 + g1 * H3 / 6 + g2 * H4 / 24 + g1 ** 2 * H6 / 72)
return f / np.sqrt(mu2.sum())
def approximated_multinomial_interval(intervals):
"""
Compute approximated probability for Multinomial(n, proportions)
to be in `intervals` (Sison & Glaz's formula (1)).
"""
return np.exp(
np.sum(np.log([poisson_interval(interval, p)
for (interval, p) in zip(intervals, counts)])) +
np.log(edgeworth(intervals)) -
np.log(stats.poisson._pmf(n, n))
)
def nu(c):
"""
Compute interval coverage for a given `c` (Sison & Glaz's
formula (7)).
"""
return approximated_multinomial_interval(
[(np.maximum(count - c, 0), np.minimum(count + c, n))
for count in counts])
# Find the value of `c` that will give us the confidence intervals
        # (solving nu(c) <= 1 - alpha < nu(c + 1)).
c = 1.0
nuc = nu(c)
nucp1 = nu(c + 1)
while not (nuc <= (1 - alpha) < nucp1):
if c > n:
raise Exception("Couldn't find a value for `c` that "
"solves nu(c) <= 1 - alpha < nu(c + 1)")
c += 1
nuc = nucp1
nucp1 = nu(c + 1)
# Compute gamma and the corresponding confidence intervals.
g = (1 - alpha - nuc) / (nucp1 - nuc)
ci_lower = np.maximum(proportions - c / n, 0)
ci_upper = np.minimum(proportions + (c + 2 * g) / n, 1)
region = np.array([ci_lower, ci_upper]).T
else:
raise NotImplementedError('method "%s" is not available' % method)
return region | Confidence intervals for multinomial proportions.
Parameters
----------
counts : array_like of int, 1-D
Number of observations in each category.
alpha : float in (0, 1), optional
Significance level, defaults to 0.05.
method : {'goodman', 'sison-glaz'}, optional
Method to use to compute the confidence intervals; available methods
are:
- `goodman`: based on a chi-squared approximation, valid if all
      values in `counts` are greater than or equal to 5 [2]_
- `sison-glaz`: less conservative than `goodman`, but only valid if
`counts` has 7 or more categories (``len(counts) >= 7``) [3]_
Returns
-------
confint : ndarray, 2-D
Array of [lower, upper] confidence levels for each category, such that
overall coverage is (approximately) `1-alpha`.
Raises
------
ValueError
If `alpha` is not in `(0, 1)` (bounds excluded), or if the values in
    `counts` are not all non-negative.
NotImplementedError
    If `method` is not known.
Exception
When ``method == 'sison-glaz'``, if for some reason `c` cannot be
computed; this signals a bug and should be reported.
Notes
-----
The `goodman` method [2]_ is based on approximating a statistic based on
the multinomial as a chi-squared random variable. The usual recommendation
is that this is valid if all the values in `counts` are greater than or
equal to 5. There is no condition on the number of categories for this
method.
The `sison-glaz` method [3]_ approximates the multinomial probabilities,
and evaluates that with a maximum-likelihood estimator. The first
approximation is an Edgeworth expansion that converges when the number of
categories goes to infinity, and the maximum-likelihood estimator converges
when the number of observations (``sum(counts)``) goes to infinity. In
their paper, Sison & Glaz demo their method with at least 7 categories, so
``len(counts) >= 7`` with all values in `counts` at or above 5 can be used
as a rule of thumb for the validity of this method. This method is less
conservative than the `goodman` method (i.e. it will yield confidence
intervals closer to the desired significance level), but produces
confidence intervals of uniform width over all categories (except when the
intervals reach 0 or 1, in which case they are truncated), which makes it
most useful when proportions are of similar magnitude.
Aside from the original sources ([1]_, [2]_, and [3]_), the implementation
uses the formulas (though not the code) presented in [4]_ and [5]_.
References
----------
.. [1] Levin, Bruce, "A representation for multinomial cumulative
distribution functions," The Annals of Statistics, Vol. 9, No. 5,
1981, pp. 1123-1126.
.. [2] Goodman, L.A., "On simultaneous confidence intervals for multinomial
proportions," Technometrics, Vol. 7, No. 2, 1965, pp. 247-254.
.. [3] Sison, Cristina P., and Joseph Glaz, "Simultaneous Confidence
Intervals and Sample Size Determination for Multinomial
Proportions," Journal of the American Statistical Association,
Vol. 90, No. 429, 1995, pp. 366-369.
.. [4] May, Warren L., and William D. Johnson, "A SAS® macro for
constructing simultaneous confidence intervals for multinomial
proportions," Computer methods and programs in Biomedicine, Vol. 53,
No. 3, 1997, pp. 153-162.
.. [5] May, Warren L., and William D. Johnson, "Constructing two-sided
simultaneous confidence intervals for multinomial proportions for
small counts in a large number of cells," Journal of Statistical
Software, Vol. 5, No. 6, 2000, pp. 1-24. | multinomial_proportions_confint | python | statsmodels/statsmodels | statsmodels/stats/proportion.py | https://github.com/statsmodels/statsmodels/blob/master/statsmodels/stats/proportion.py | BSD-3-Clause |
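An illustrative comparison of the two methods described above, using made-up counts for 7 categories so that both rules of thumb are satisfied:

import numpy as np
from statsmodels.stats.proportion import multinomial_proportions_confint

counts = np.array([21, 13, 18, 25, 9, 16, 22])  # illustrative counts, all >= 5
ci_goodman = multinomial_proportions_confint(counts, method='goodman')
ci_sison = multinomial_proportions_confint(counts, method='sison-glaz')
print(ci_goodman.shape)  # (7, 2): one [lower, upper] row per category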
def samplesize_confint_proportion(proportion, half_length, alpha=0.05,
method='normal'):
"""
Find sample size to get desired confidence interval length
Parameters
----------
proportion : float in (0, 1)
proportion or quantile
half_length : float in (0, 1)
desired half length of the confidence interval
alpha : float in (0, 1)
significance level, default 0.05,
coverage of the two-sided interval is (approximately) ``1 - alpha``
method : str in ['normal']
method to use for confidence interval,
currently only normal approximation
Returns
-------
n : float
sample size to get the desired half length of the confidence interval
Notes
-----
this is mainly to store the formula.
possible application: number of replications in bootstrap samples
"""
q_ = proportion
if method == 'normal':
n = q_ * (1 - q_) / (half_length / stats.norm.isf(alpha / 2.))**2
else:
raise NotImplementedError('only "normal" is available')
return n | Find sample size to get desired confidence interval length
Parameters
----------
proportion : float in (0, 1)
proportion or quantile
half_length : float in (0, 1)
desired half length of the confidence interval
alpha : float in (0, 1)
significance level, default 0.05,
coverage of the two-sided interval is (approximately) ``1 - alpha``
method : str in ['normal']
method to use for confidence interval,
currently only normal approximation
Returns
-------
n : float
sample size to get the desired half length of the confidence interval
Notes
-----
this is mainly to store the formula.
possible application: number of replications in bootstrap samples | samplesize_confint_proportion | python | statsmodels/statsmodels | statsmodels/stats/proportion.py | https://github.com/statsmodels/statsmodels/blob/master/statsmodels/stats/proportion.py | BSD-3-Clause |
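A short sketch of the formula stored above, with illustrative inputs:

import math
from statsmodels.stats.proportion import samplesize_confint_proportion

# sample size for a half length of 0.02 at an anticipated proportion of 0.3
n = samplesize_confint_proportion(0.3, 0.02, alpha=0.05)
print(math.ceil(n))  # round up to whole observations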
def proportion_effectsize(prop1, prop2, method='normal'):
"""
Effect size for a test comparing two proportions
for use in power function
Parameters
----------
prop1, prop2 : float or array_like
The proportion value(s).
Returns
-------
es : float or ndarray
effect size for (transformed) prop1 - prop2
Notes
-----
only method='normal' is implemented to match pwr.p2.test
see http://www.statmethods.net/stats/power.html
Effect size for `normal` is defined as ::
2 * (arcsin(sqrt(prop1)) - arcsin(sqrt(prop2)))
I think other conversions to normality can be used, but I need to check.
Examples
--------
>>> import statsmodels.api as sm
>>> sm.stats.proportion_effectsize(0.5, 0.4)
0.20135792079033088
>>> sm.stats.proportion_effectsize([0.3, 0.4, 0.5], 0.4)
array([-0.21015893, 0. , 0.20135792])
"""
if method != 'normal':
raise ValueError('only "normal" is implemented')
es = 2 * (np.arcsin(np.sqrt(prop1)) - np.arcsin(np.sqrt(prop2)))
return es | Effect size for a test comparing two proportions
for use in power function
Parameters
----------
prop1, prop2 : float or array_like
The proportion value(s).
Returns
-------
es : float or ndarray
effect size for (transformed) prop1 - prop2
Notes
-----
only method='normal' is implemented to match pwr.p2.test
see http://www.statmethods.net/stats/power.html
Effect size for `normal` is defined as ::
2 * (arcsin(sqrt(prop1)) - arcsin(sqrt(prop2)))
I think other conversions to normality can be used, but I need to check.
Examples
--------
>>> import statsmodels.api as sm
>>> sm.stats.proportion_effectsize(0.5, 0.4)
0.20135792079033088
>>> sm.stats.proportion_effectsize([0.3, 0.4, 0.5], 0.4)
array([-0.21015893, 0. , 0.20135792]) | proportion_effectsize | python | statsmodels/statsmodels | statsmodels/stats/proportion.py | https://github.com/statsmodels/statsmodels/blob/master/statsmodels/stats/proportion.py | BSD-3-Clause |
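As a sketch of the intended use in a power function, the effect size can be fed to the z-test power solver; ``NormalIndPower`` is assumed from the companion ``statsmodels.stats.power`` module and the proportions are illustrative:

from statsmodels.stats.proportion import proportion_effectsize
from statsmodels.stats.power import NormalIndPower  # assumed companion module

es = proportion_effectsize(0.55, 0.45)  # illustrative proportions
# sample size per group for 80% power at alpha = 0.05, two-sided
nobs1 = NormalIndPower().solve_power(effect_size=es, alpha=0.05, power=0.8)
print(es, nobs1)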
def std_prop(prop, nobs):
"""
Standard error for the estimate of a proportion
This is just ``np.sqrt(p * (1. - p) / nobs)``
Parameters
----------
prop : array_like
proportion
nobs : int, array_like
number of observations
Returns
-------
std : array_like
standard error for a proportion of nobs independent observations
"""
return np.sqrt(prop * (1. - prop) / nobs) | Standard error for the estimate of a proportion
This is just ``np.sqrt(p * (1. - p) / nobs)``
Parameters
----------
prop : array_like
proportion
nobs : int, array_like
number of observations
Returns
-------
std : array_like
standard error for a proportion of nobs independent observations | std_prop | python | statsmodels/statsmodels | statsmodels/stats/proportion.py | https://github.com/statsmodels/statsmodels/blob/master/statsmodels/stats/proportion.py | BSD-3-Clause |
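A sketch showing how this standard error yields a hand-rolled Wald interval (values illustrative):

from scipy import stats
from statsmodels.stats.proportion import std_prop

p_hat = 18 / 60  # illustrative sample proportion
se = std_prop(p_hat, 60)
z = stats.norm.isf(0.05 / 2)
print(p_hat - z * se, p_hat + z * se)  # Wald interval built from std_prop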
def _power_ztost(mean_low, var_low, mean_upp, var_upp, mean_alt, var_alt,
alpha=0.05, discrete=True, dist='norm', nobs=None,
continuity=0, critval_continuity=0):
"""
Generic statistical power function for normal based equivalence test
This includes options to adjust the normal approximation and can use
the binomial to evaluate the probability of the rejection region
see power_ztost_prob for a description of the options
"""
# TODO: refactor structure, separate norm and binom better
if not isinstance(continuity, tuple):
continuity = (continuity, continuity)
crit = stats.norm.isf(alpha)
k_low = mean_low + np.sqrt(var_low) * crit
k_upp = mean_upp - np.sqrt(var_upp) * crit
if discrete or dist == 'binom':
k_low = np.ceil(k_low * nobs + 0.5 * critval_continuity)
k_upp = np.trunc(k_upp * nobs - 0.5 * critval_continuity)
if dist == 'norm':
#need proportion
k_low = (k_low) * 1. / nobs #-1 to match PASS
k_upp = k_upp * 1. / nobs
# else:
# if dist == 'binom':
# #need counts
# k_low *= nobs
# k_upp *= nobs
#print mean_low, np.sqrt(var_low), crit, var_low
#print mean_upp, np.sqrt(var_upp), crit, var_upp
if np.any(k_low > k_upp): #vectorize
import warnings
warnings.warn("no overlap, power is zero", HypothesisTestWarning)
std_alt = np.sqrt(var_alt)
z_low = (k_low - mean_alt - continuity[0] * 0.5 / nobs) / std_alt
z_upp = (k_upp - mean_alt + continuity[1] * 0.5 / nobs) / std_alt
if dist == 'norm':
power = stats.norm.cdf(z_upp) - stats.norm.cdf(z_low)
elif dist == 'binom':
power = (stats.binom.cdf(k_upp, nobs, mean_alt) -
stats.binom.cdf(k_low-1, nobs, mean_alt))
return power, (k_low, k_upp, z_low, z_upp) | Generic statistical power function for normal based equivalence test
This includes options to adjust the normal approximation and can use
the binomial to evaluate the probability of the rejection region
see power_ztost_prob for a description of the options | _power_ztost | python | statsmodels/statsmodels | statsmodels/stats/proportion.py | https://github.com/statsmodels/statsmodels/blob/master/statsmodels/stats/proportion.py | BSD-3-Clause |
def binom_tost(count, nobs, low, upp):
"""
Exact TOST test for one proportion using binomial distribution
Parameters
----------
count : {int, array_like}
the number of successes in nobs trials.
nobs : int
the number of trials or observations.
low, upp : floats
lower and upper limit of equivalence region
Returns
-------
pvalue : float
p-value of equivalence test
pval_low, pval_upp : floats
p-values of lower and upper one-sided tests
"""
# binom_test_stat only returns pval
tt1 = binom_test(count, nobs, alternative='larger', prop=low)
tt2 = binom_test(count, nobs, alternative='smaller', prop=upp)
return np.maximum(tt1, tt2), tt1, tt2, | Exact TOST test for one proportion using binomial distribution
Parameters
----------
count : {int, array_like}
the number of successes in nobs trials.
nobs : int
the number of trials or observations.
low, upp : floats
lower and upper limit of equivalence region
Returns
-------
pvalue : float
p-value of equivalence test
pval_low, pval_upp : floats
p-values of lower and upper one-sided tests | binom_tost | python | statsmodels/statsmodels | statsmodels/stats/proportion.py | https://github.com/statsmodels/statsmodels/blob/master/statsmodels/stats/proportion.py | BSD-3-Clause |
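A minimal sketch of the exact TOST with an illustrative equivalence region:

from statsmodels.stats.proportion import binom_tost

# equivalence region 0.4 < p < 0.6, illustrative data: 29 successes in 60 trials
pval, pv_low, pv_upp = binom_tost(29, 60, 0.4, 0.6)
print(pval)  # small pval -> reject non-equivalence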
def binom_tost_reject_interval(low, upp, nobs, alpha=0.05):
"""
Rejection region for binomial TOST
The interval includes the end points,
`reject` if and only if `r_low <= x <= r_upp`.
The interval might be empty with `r_upp < r_low`.
Parameters
----------
low, upp : floats
lower and upper limit of equivalence region
nobs : int
the number of trials or observations.
Returns
-------
x_low, x_upp : float
lower and upper bound of rejection region
"""
x_low = stats.binom.isf(alpha, nobs, low) + 1
x_upp = stats.binom.ppf(alpha, nobs, upp) - 1
return x_low, x_upp | Rejection region for binomial TOST
The interval includes the end points,
`reject` if and only if `r_low <= x <= r_upp`.
The interval might be empty with `r_upp < r_low`.
Parameters
----------
low, upp : floats
lower and upper limit of equivalence region
nobs : int
the number of trials or observations.
Returns
-------
x_low, x_upp : float
lower and upper bound of rejection region | binom_tost_reject_interval | python | statsmodels/statsmodels | statsmodels/stats/proportion.py | https://github.com/statsmodels/statsmodels/blob/master/statsmodels/stats/proportion.py | BSD-3-Clause |
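A sketch of the rejection region for the same illustrative equivalence bounds:

from statsmodels.stats.proportion import binom_tost_reject_interval

r_low, r_upp = binom_tost_reject_interval(0.4, 0.6, 60, alpha=0.05)
# reject non-equivalence iff r_low <= count <= r_upp; the region may be empty
print(r_low, r_upp)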
def binom_test_reject_interval(value, nobs, alpha=0.05, alternative='two-sided'):
"""
Rejection region for binomial test for one sample proportion
The interval includes the end points of the rejection region.
Parameters
----------
value : float
proportion under the Null hypothesis
nobs : int
the number of trials or observations.
Returns
-------
x_low, x_upp : int
lower and upper bound of rejection region
"""
if alternative in ['2s', 'two-sided']:
alternative = '2s' # normalize alternative name
alpha = alpha / 2
if alternative in ['2s', 'smaller']:
x_low = stats.binom.ppf(alpha, nobs, value) - 1
else:
x_low = 0
if alternative in ['2s', 'larger']:
x_upp = stats.binom.isf(alpha, nobs, value) + 1
else :
x_upp = nobs
return int(x_low), int(x_upp) | Rejection region for binomial test for one sample proportion
The interval includes the end points of the rejection region.
Parameters
----------
value : float
proportion under the Null hypothesis
nobs : int
the number of trials or observations.
Returns
-------
x_low, x_upp : int
lower and upper bound of rejection region | binom_test_reject_interval | python | statsmodels/statsmodels | statsmodels/stats/proportion.py | https://github.com/statsmodels/statsmodels/blob/master/statsmodels/stats/proportion.py | BSD-3-Clause |
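An illustrative sketch; counts at or beyond the bounds reject the null:

from statsmodels.stats.proportion import binom_test_reject_interval

x_low, x_upp = binom_test_reject_interval(0.5, 60, alpha=0.05)
# reject H0: p = 0.5 when count <= x_low or count >= x_upp
print(x_low, x_upp)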
def binom_test(count, nobs, prop=0.5, alternative='two-sided'):
"""
Perform a test that the probability of success is p.
This is an exact, two-sided test of the null hypothesis
that the probability of success in a Bernoulli experiment
is `p`.
Parameters
----------
count : {int, array_like}
the number of successes in nobs trials.
nobs : int
the number of trials or observations.
prop : float, optional
The probability of success under the null hypothesis,
`0 <= prop <= 1`. The default value is `prop = 0.5`
alternative : str in ['two-sided', 'smaller', 'larger']
alternative hypothesis, which can be two-sided or either one of the
one-sided tests.
Returns
-------
p-value : float
The p-value of the hypothesis test
Notes
-----
This uses scipy.stats.binom_test for the two-sided alternative.
"""
if np.any(prop > 1.0) or np.any(prop < 0.0):
raise ValueError("p must be in range [0,1]")
if alternative in ['2s', 'two-sided']:
try:
pval = stats.binomtest(count, n=nobs, p=prop).pvalue
except AttributeError:
# Remove after min SciPy >= 1.7
pval = stats.binom_test(count, n=nobs, p=prop)
elif alternative in ['l', 'larger']:
pval = stats.binom.sf(count-1, nobs, prop)
elif alternative in ['s', 'smaller']:
pval = stats.binom.cdf(count, nobs, prop)
else:
raise ValueError('alternative not recognized\n'
'should be two-sided, larger or smaller')
return pval | Perform a test that the probability of success is p.
This is an exact, two-sided test of the null hypothesis
that the probability of success in a Bernoulli experiment
is `p`.
Parameters
----------
count : {int, array_like}
the number of successes in nobs trials.
nobs : int
the number of trials or observations.
prop : float, optional
The probability of success under the null hypothesis,
`0 <= prop <= 1`. The default value is `prop = 0.5`
alternative : str in ['two-sided', 'smaller', 'larger']
alternative hypothesis, which can be two-sided or either one of the
one-sided tests.
Returns
-------
p-value : float
The p-value of the hypothesis test
Notes
-----
This uses scipy.stats.binom_test for the two-sided alternative. | binom_test | python | statsmodels/statsmodels | statsmodels/stats/proportion.py | https://github.com/statsmodels/statsmodels/blob/master/statsmodels/stats/proportion.py | BSD-3-Clause |
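A minimal usage sketch with illustrative data:

from statsmodels.stats.proportion import binom_test

# is 5 successes in 83 trials compatible with p = 0.05?
pval = binom_test(5, 83, prop=0.05, alternative='two-sided')
print(pval)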
def power_ztost_prop(low, upp, nobs, p_alt, alpha=0.05, dist='norm',
variance_prop=None, discrete=True, continuity=0,
critval_continuity=0):
"""
Power of proportions equivalence test based on normal distribution
Parameters
----------
low, upp : floats
lower and upper limit of equivalence region
nobs : int
number of observations
p_alt : float in (0,1)
proportion under the alternative
alpha : float in (0,1)
significance level of the test
dist : str in ['norm', 'binom']
This defines the distribution to evaluate the power of the test. The
critical values of the TOST test are always based on the normal
approximation, but the distribution for the power can be either the
normal (default) or the binomial (exact) distribution.
variance_prop : None or float in (0,1)
If this is None, then the variances for the two one sided tests are
based on the proportions equal to the equivalence limits.
If variance_prop is given, then it is used to calculate the variance
        for the TOST statistics. If this is based on a sample, then the
estimated proportion can be used.
discrete : bool
If true, then the critical values of the rejection region are converted
to integers. If dist is "binom", this is automatically assumed.
If discrete is false, then the TOST critical values are used as
floating point numbers, and the power is calculated based on the
rejection region that is not discretized.
continuity : bool or float
adjust the rejection region for the normal power probability. This has
        an effect only if ``dist='norm'``
critval_continuity : bool or float
If this is non-zero, then the critical values of the tost rejection
region are adjusted before converting to integers. This affects both
distributions, ``dist='norm'`` and ``dist='binom'``.
Returns
-------
power : float
statistical power of the equivalence test.
(k_low, k_upp, z_low, z_upp) : tuple of floats
critical limits in intermediate steps
temporary return, will be changed
Notes
-----
    In small samples the power for the ``discrete`` version has a sawtooth
pattern as a function of the number of observations. As a consequence,
small changes in the number of observations or in the normal approximation
can have a large effect on the power.
``continuity`` and ``critval_continuity`` are added to match some results
of PASS, and are mainly to investigate the sensitivity of the ztost power
to small changes in the rejection region. From my interpretation of the
equations in the SAS manual, both are zero in SAS.
works vectorized
**verification:**
The ``dist='binom'`` results match PASS,
The ``dist='norm'`` results look reasonable, but no benchmark is available.
References
----------
SAS Manual: Chapter 68: The Power Procedure, Computational Resources
PASS Chapter 110: Equivalence Tests for One Proportion.
"""
mean_low = low
var_low = std_prop(low, nobs)**2
mean_upp = upp
var_upp = std_prop(upp, nobs)**2
mean_alt = p_alt
var_alt = std_prop(p_alt, nobs)**2
if variance_prop is not None:
var_low = var_upp = std_prop(variance_prop, nobs)**2
power = _power_ztost(mean_low, var_low, mean_upp, var_upp, mean_alt, var_alt,
alpha=alpha, discrete=discrete, dist=dist, nobs=nobs,
continuity=continuity, critval_continuity=critval_continuity)
return np.maximum(power[0], 0), power[1:] | Power of proportions equivalence test based on normal distribution
Parameters
----------
low, upp : floats
lower and upper limit of equivalence region
nobs : int
number of observations
p_alt : float in (0,1)
proportion under the alternative
alpha : float in (0,1)
significance level of the test
dist : str in ['norm', 'binom']
This defines the distribution to evaluate the power of the test. The
critical values of the TOST test are always based on the normal
approximation, but the distribution for the power can be either the
normal (default) or the binomial (exact) distribution.
variance_prop : None or float in (0,1)
If this is None, then the variances for the two one sided tests are
based on the proportions equal to the equivalence limits.
If variance_prop is given, then it is used to calculate the variance
    for the TOST statistics. If this is based on a sample, then the
estimated proportion can be used.
discrete : bool
If true, then the critical values of the rejection region are converted
to integers. If dist is "binom", this is automatically assumed.
If discrete is false, then the TOST critical values are used as
floating point numbers, and the power is calculated based on the
rejection region that is not discretized.
continuity : bool or float
adjust the rejection region for the normal power probability. This has
    an effect only if ``dist='norm'``
critval_continuity : bool or float
If this is non-zero, then the critical values of the tost rejection
region are adjusted before converting to integers. This affects both
distributions, ``dist='norm'`` and ``dist='binom'``.
Returns
-------
power : float
statistical power of the equivalence test.
(k_low, k_upp, z_low, z_upp) : tuple of floats
critical limits in intermediate steps
temporary return, will be changed
Notes
-----
In small samples the power for the ``discrete`` version has a sawtooth
pattern as a function of the number of observations. As a consequence,
small changes in the number of observations or in the normal approximation
can have a large effect on the power.
``continuity`` and ``critval_continuity`` are added to match some results
of PASS, and are mainly to investigate the sensitivity of the ztost power
to small changes in the rejection region. From my interpretation of the
equations in the SAS manual, both are zero in SAS.
works vectorized
**verification:**
The ``dist='binom'`` results match PASS,
The ``dist='norm'`` results look reasonable, but no benchmark is available.
References
----------
SAS Manual: Chapter 68: The Power Procedure, Computational Resources
PASS Chapter 110: Equivalence Tests for One Proportion. | power_ztost_prop | python | statsmodels/statsmodels | statsmodels/stats/proportion.py | https://github.com/statsmodels/statsmodels/blob/master/statsmodels/stats/proportion.py | BSD-3-Clause |
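A sketch of the exact (binomial) power evaluation described above, with illustrative inputs:

from statsmodels.stats.proportion import power_ztost_prop

# power of the TOST for equivalence region (0.4, 0.6) when the true p is 0.5
power, _ = power_ztost_prop(0.4, 0.6, nobs=200, p_alt=0.5, alpha=0.05,
                            dist='binom')
print(power)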
def _table_proportion(count, nobs):
"""
Create a k by 2 contingency table for proportion
helper function for proportions_chisquare
Parameters
----------
count : {int, array_like}
the number of successes in nobs trials.
nobs : int
the number of trials or observations.
Returns
-------
table : ndarray
(k, 2) contingency table
Notes
-----
recent scipy has more elaborate contingency table functions
"""
count = np.asarray(count)
dt = np.promote_types(count.dtype, np.float64)
count = np.asarray(count, dtype=dt)
table = np.column_stack((count, nobs - count))
expected = table.sum(0) * table.sum(1)[:, None] * 1. / table.sum()
n_rows = table.shape[0]
return table, expected, n_rows | Create a k by 2 contingency table for proportion
helper function for proportions_chisquare
Parameters
----------
count : {int, array_like}
the number of successes in nobs trials.
nobs : int
the number of trials or observations.
Returns
-------
table : ndarray
(k, 2) contingency table
Notes
-----
recent scipy has more elaborate contingency table functions | _table_proportion | python | statsmodels/statsmodels | statsmodels/stats/proportion.py | https://github.com/statsmodels/statsmodels/blob/master/statsmodels/stats/proportion.py | BSD-3-Clause |
def proportions_ztest(count, nobs, value=None, alternative='two-sided',
prop_var=False):
"""
Test for proportions based on normal (z) test
Parameters
----------
count : {int, array_like}
the number of successes in nobs trials. If this is array_like, then
the assumption is that this represents the number of successes for
each independent sample
nobs : {int, array_like}
the number of trials or observations, with the same length as
count.
value : float, array_like or None, optional
This is the value of the null hypothesis equal to the proportion in the
case of a one sample test. In the case of a two-sample test, the
null hypothesis is that prop[0] - prop[1] = value, where prop is the
proportion in the two samples. If not provided value = 0 and the null
is prop[0] = prop[1]
alternative : str in ['two-sided', 'smaller', 'larger']
The alternative hypothesis can be either two-sided or one of the one-
sided tests, smaller means that the alternative hypothesis is
``prop < value`` and larger means ``prop > value``. In the two sample
test, smaller means that the alternative hypothesis is ``p1 < p2`` and
larger means ``p1 > p2`` where ``p1`` is the proportion of the first
sample and ``p2`` of the second one.
prop_var : False or float in (0, 1)
If prop_var is false, then the variance of the proportion estimate is
calculated based on the sample proportion. Alternatively, a proportion
can be specified to calculate this variance. Common use case is to
use the proportion under the Null hypothesis to specify the variance
of the proportion estimate.
Returns
-------
zstat : float
test statistic for the z-test
p-value : float
p-value for the z-test
Examples
--------
>>> count = 5
>>> nobs = 83
>>> value = .05
>>> stat, pval = proportions_ztest(count, nobs, value)
>>> print('{0:0.3f}'.format(pval))
0.695
>>> import numpy as np
>>> from statsmodels.stats.proportion import proportions_ztest
>>> count = np.array([5, 12])
>>> nobs = np.array([83, 99])
>>> stat, pval = proportions_ztest(count, nobs)
>>> print('{0:0.3f}'.format(pval))
0.159
Notes
-----
This uses a simple normal test for proportions. It should be the same as
running the mean z-test on the data encoded 1 for event and 0 for no event
so that the sum corresponds to the count.
In the one and two sample cases with two-sided alternative, this test
produces the same p-value as ``proportions_chisquare``, since the
chisquare is the distribution of the square of a standard normal
distribution.
"""
# TODO: verify that this really holds
# TODO: add continuity correction or other improvements for small samples
    # TODO: change options similar to proportion_ztost ?
count = np.asarray(count)
nobs = np.asarray(nobs)
if nobs.size == 1:
nobs = nobs * np.ones_like(count)
prop = count * 1. / nobs
k_sample = np.size(prop)
if value is None:
if k_sample == 1:
raise ValueError('value must be provided for a 1-sample test')
value = 0
if k_sample == 1:
diff = prop - value
elif k_sample == 2:
diff = prop[0] - prop[1] - value
else:
msg = 'more than two samples are not implemented yet'
raise NotImplementedError(msg)
p_pooled = np.sum(count) * 1. / np.sum(nobs)
nobs_fact = np.sum(1. / nobs)
if prop_var:
p_pooled = prop_var
var_ = p_pooled * (1 - p_pooled) * nobs_fact
std_diff = np.sqrt(var_)
from statsmodels.stats.weightstats import _zstat_generic2
return _zstat_generic2(diff, std_diff, alternative) | Test for proportions based on normal (z) test
Parameters
----------
count : {int, array_like}
the number of successes in nobs trials. If this is array_like, then
the assumption is that this represents the number of successes for
each independent sample
nobs : {int, array_like}
the number of trials or observations, with the same length as
count.
value : float, array_like or None, optional
This is the value of the null hypothesis equal to the proportion in the
case of a one sample test. In the case of a two-sample test, the
null hypothesis is that prop[0] - prop[1] = value, where prop is the
proportion in the two samples. If not provided value = 0 and the null
is prop[0] = prop[1]
alternative : str in ['two-sided', 'smaller', 'larger']
The alternative hypothesis can be either two-sided or one of the one-
sided tests, smaller means that the alternative hypothesis is
``prop < value`` and larger means ``prop > value``. In the two sample
test, smaller means that the alternative hypothesis is ``p1 < p2`` and
larger means ``p1 > p2`` where ``p1`` is the proportion of the first
sample and ``p2`` of the second one.
prop_var : False or float in (0, 1)
If prop_var is false, then the variance of the proportion estimate is
calculated based on the sample proportion. Alternatively, a proportion
can be specified to calculate this variance. Common use case is to
use the proportion under the Null hypothesis to specify the variance
of the proportion estimate.
Returns
-------
zstat : float
test statistic for the z-test
p-value : float
p-value for the z-test
Examples
--------
>>> count = 5
>>> nobs = 83
>>> value = .05
>>> stat, pval = proportions_ztest(count, nobs, value)
>>> print('{0:0.3f}'.format(pval))
0.695
>>> import numpy as np
>>> from statsmodels.stats.proportion import proportions_ztest
>>> count = np.array([5, 12])
>>> nobs = np.array([83, 99])
>>> stat, pval = proportions_ztest(count, nobs)
>>> print('{0:0.3f}'.format(pval))
0.159
Notes
-----
This uses a simple normal test for proportions. It should be the same as
running the mean z-test on the data encoded 1 for event and 0 for no event
so that the sum corresponds to the count.
In the one and two sample cases with two-sided alternative, this test
produces the same p-value as ``proportions_chisquare``, since the
chisquare is the distribution of the square of a standard normal
distribution. | proportions_ztest | python | statsmodels/statsmodels | statsmodels/stats/proportion.py | https://github.com/statsmodels/statsmodels/blob/master/statsmodels/stats/proportion.py | BSD-3-Clause |
def proportions_ztost(count, nobs, low, upp, prop_var='sample'):
"""
Equivalence test based on normal distribution
Parameters
----------
count : {int, array_like}
the number of successes in nobs trials. If this is array_like, then
the assumption is that this represents the number of successes for
each independent sample
nobs : int
the number of trials or observations, with the same length as
count.
low, upp : float
equivalence interval low < prop1 - prop2 < upp
prop_var : str or float in (0, 1)
prop_var determines which proportion is used for the calculation
of the standard deviation of the proportion estimate
The available options for string are 'sample' (default), 'null' and
'limits'. If prop_var is a float, then it is used directly.
Returns
-------
pvalue : float
pvalue of the non-equivalence test
t1, pv1 : tuple of floats
test statistic and pvalue for lower threshold test
t2, pv2 : tuple of floats
test statistic and pvalue for upper threshold test
Notes
-----
checked only for 1 sample case
"""
if prop_var == 'limits':
prop_var_low = low
prop_var_upp = upp
elif prop_var == 'sample':
prop_var_low = prop_var_upp = False #ztest uses sample
elif prop_var == 'null':
prop_var_low = prop_var_upp = 0.5 * (low + upp)
    elif np.isreal(prop_var):
        prop_var_low = prop_var_upp = prop_var
    else:
        raise ValueError('prop_var option not recognized')
tt1 = proportions_ztest(count, nobs, alternative='larger',
prop_var=prop_var_low, value=low)
tt2 = proportions_ztest(count, nobs, alternative='smaller',
prop_var=prop_var_upp, value=upp)
return np.maximum(tt1[1], tt2[1]), tt1, tt2, | Equivalence test based on normal distribution
Parameters
----------
count : {int, array_like}
the number of successes in nobs trials. If this is array_like, then
the assumption is that this represents the number of successes for
each independent sample
nobs : int
the number of trials or observations, with the same length as
count.
low, upp : float
equivalence interval low < prop1 - prop2 < upp
prop_var : str or float in (0, 1)
prop_var determines which proportion is used for the calculation
of the standard deviation of the proportion estimate
The available options for string are 'sample' (default), 'null' and
'limits'. If prop_var is a float, then it is used directly.
Returns
-------
pvalue : float
pvalue of the non-equivalence test
t1, pv1 : tuple of floats
test statistic and pvalue for lower threshold test
t2, pv2 : tuple of floats
test statistic and pvalue for upper threshold test
Notes
-----
checked only for 1 sample case | proportions_ztost | python | statsmodels/statsmodels | statsmodels/stats/proportion.py | https://github.com/statsmodels/statsmodels/blob/master/statsmodels/stats/proportion.py | BSD-3-Clause |
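A one-sample sketch matching the 'checked only for 1 sample case' note, with illustrative data:

from statsmodels.stats.proportion import proportions_ztost

# equivalence test for 0.4 < p < 0.6 with 52 successes in 100 trials
pval, t1, t2 = proportions_ztost(52, 100, 0.4, 0.6)
print(pval)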
def proportions_chisquare(count, nobs, value=None):
"""
Test for proportions based on chisquare test
Parameters
----------
count : {int, array_like}
the number of successes in nobs trials. If this is array_like, then
the assumption is that this represents the number of successes for
each independent sample
nobs : int
the number of trials or observations, with the same length as
count.
value : None or float or array_like
Returns
-------
chi2stat : float
test statistic for the chisquare test
p-value : float
p-value for the chisquare test
(table, expected)
table is a (k, 2) contingency table, ``expected`` is the corresponding
table of counts that are expected under independence with given
margins
Notes
-----
    Recent versions of scipy.stats have a chisquare test for independence in
contingency tables.
This function provides a similar interface to chisquare tests as
``prop.test`` in R, however without the option for Yates continuity
correction.
count can be the count for the number of events for a single proportion,
or the counts for several independent proportions. If value is given, then
all proportions are jointly tested against this value. If value is not
given and count and nobs are not scalar, then the null hypothesis is
that all samples have the same proportion.
"""
nobs = np.atleast_1d(nobs)
table, expected, n_rows = _table_proportion(count, nobs)
if value is not None:
expected = np.column_stack((nobs * value, nobs * (1 - value)))
ddof = n_rows - 1
else:
ddof = n_rows
#print table, expected
chi2stat, pval = stats.chisquare(table.ravel(), expected.ravel(),
ddof=ddof)
return chi2stat, pval, (table, expected) | Test for proportions based on chisquare test
Parameters
----------
count : {int, array_like}
the number of successes in nobs trials. If this is array_like, then
the assumption is that this represents the number of successes for
each independent sample
nobs : int
the number of trials or observations, with the same length as
count.
value : None or float or array_like
Returns
-------
chi2stat : float
test statistic for the chisquare test
p-value : float
p-value for the chisquare test
(table, expected)
table is a (k, 2) contingency table, ``expected`` is the corresponding
table of counts that are expected under independence with given
margins
Notes
-----
Recent versions of scipy.stats have a chisquare test for independence in
contingency tables.
This function provides a similar interface to chisquare tests as
``prop.test`` in R, however without the option for Yates continuity
correction.
count can be the count for the number of events for a single proportion,
or the counts for several independent proportions. If value is given, then
all proportions are jointly tested against this value. If value is not
given and count and nobs are not scalar, then the null hypothesis is
that all samples have the same proportion. | proportions_chisquare | python | statsmodels/statsmodels | statsmodels/stats/proportion.py | https://github.com/statsmodels/statsmodels/blob/master/statsmodels/stats/proportion.py | BSD-3-Clause |
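A two-sample sketch with illustrative counts, testing equality of the proportions:

import numpy as np
from statsmodels.stats.proportion import proportions_chisquare

chi2, pval, (table, expected) = proportions_chisquare(np.array([18, 26]),
                                                      np.array([60, 64]))
print(chi2, pval)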
def proportions_chisquare_allpairs(count, nobs, multitest_method='hs'):
"""
Chisquare test of proportions for all pairs of k samples
Performs a chisquare test for proportions for all pairwise comparisons.
The alternative is two-sided
Parameters
----------
count : {int, array_like}
the number of successes in nobs trials.
nobs : int
the number of trials or observations.
multitest_method : str
This chooses the method for the multiple testing p-value correction,
that is used as default in the results.
It can be any method that is available in ``multipletesting``.
The default is Holm-Sidak 'hs'.
Returns
-------
result : AllPairsResults instance
The returned results instance has several statistics, such as p-values,
attached, and additional methods for using a non-default
``multitest_method``.
Notes
-----
Yates continuity correction is not available.
"""
#all_pairs = lmap(list, lzip(*np.triu_indices(4, 1)))
all_pairs = lzip(*np.triu_indices(len(count), 1))
pvals = [proportions_chisquare(count[list(pair)], nobs[list(pair)])[1]
for pair in all_pairs]
return AllPairsResults(pvals, all_pairs, multitest_method=multitest_method) | Chisquare test of proportions for all pairs of k samples
Performs a chisquare test for proportions for all pairwise comparisons.
The alternative is two-sided
Parameters
----------
count : {int, array_like}
the number of successes in nobs trials.
nobs : int
the number of trials or observations.
multitest_method : str
This chooses the method for the multiple testing p-value correction,
that is used as default in the results.
It can be any method that is available in ``multipletesting``.
The default is Holm-Sidak 'hs'.
Returns
-------
result : AllPairsResults instance
The returned results instance has several statistics, such as p-values,
attached, and additional methods for using a non-default
``multitest_method``.
Notes
-----
Yates continuity correction is not available. | proportions_chisquare_allpairs | python | statsmodels/statsmodels | statsmodels/stats/proportion.py | https://github.com/statsmodels/statsmodels/blob/master/statsmodels/stats/proportion.py | BSD-3-Clause |
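An illustrative sketch; ``pval_corrected`` is assumed from the ``AllPairsResults`` helper this function returns:

import numpy as np
from statsmodels.stats.proportion import proportions_chisquare_allpairs

counts = np.array([18, 26, 32])  # illustrative successes per sample
nobs = np.array([60, 64, 70])
res = proportions_chisquare_allpairs(counts, nobs)  # Holm-Sidak by default
print(res.pval_corrected())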
def proportions_chisquare_pairscontrol(count, nobs, value=None,
multitest_method='hs', alternative='two-sided'):
"""
Chisquare test of proportions for pairs of k samples compared to control
Performs a chisquare test for proportions for pairwise comparisons with a
control (Dunnet's test). The control is assumed to be the first element
of ``count`` and ``nobs``. The alternative is two-sided, larger or
smaller.
Parameters
----------
count : {int, array_like}
the number of successes in nobs trials.
nobs : int
the number of trials or observations.
multitest_method : str
This chooses the method for the multiple testing p-value correction,
that is used as default in the results.
It can be any method that is available in ``multipletesting``.
The default is Holm-Sidak 'hs'.
alternative : str in ['two-sided', 'smaller', 'larger']
alternative hypothesis, which can be two-sided or either one of the
one-sided tests.
Returns
-------
result : AllPairsResults instance
The returned results instance has several statistics, such as p-values,
attached, and additional methods for using a non-default
``multitest_method``.
Notes
-----
Yates continuity correction is not available.
``value`` and ``alternative`` options are not yet implemented.
"""
if (value is not None) or (alternative not in ['two-sided', '2s']):
raise NotImplementedError
#all_pairs = lmap(list, lzip(*np.triu_indices(4, 1)))
all_pairs = [(0, k) for k in range(1, len(count))]
pvals = [proportions_chisquare(count[list(pair)], nobs[list(pair)],
#alternative=alternative)[1]
)[1]
for pair in all_pairs]
return AllPairsResults(pvals, all_pairs, multitest_method=multitest_method) | Chisquare test of proportions for pairs of k samples compared to control
Performs a chisquare test for proportions for pairwise comparisons with a
control (Dunnet's test). The control is assumed to be the first element
of ``count`` and ``nobs``. The alternative is two-sided, larger or
smaller.
Parameters
----------
count : {int, array_like}
the number of successes in nobs trials.
nobs : int
the number of trials or observations.
multitest_method : str
This chooses the method for the multiple testing p-value correction,
that is used as default in the results.
It can be any method that is available in ``multipletesting``.
The default is Holm-Sidak 'hs'.
alternative : str in ['two-sided', 'smaller', 'larger']
alternative hypothesis, which can be two-sided or either one of the
one-sided tests.
Returns
-------
result : AllPairsResults instance
The returned results instance has several statistics, such as p-values,
attached, and additional methods for using a non-default
``multitest_method``.
Notes
-----
Yates continuity correction is not available.
``value`` and ``alternative`` options are not yet implemented. | proportions_chisquare_pairscontrol | python | statsmodels/statsmodels | statsmodels/stats/proportion.py | https://github.com/statsmodels/statsmodels/blob/master/statsmodels/stats/proportion.py | BSD-3-Clause |
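A sketch of the comparisons-against-control layout, where the first entry is the control group (counts are illustrative; ``pval_corrected`` assumed as above):

import numpy as np
from statsmodels.stats.proportion import proportions_chisquare_pairscontrol

counts = np.array([20, 28, 31, 35])
nobs = np.array([80, 80, 80, 80])
res = proportions_chisquare_pairscontrol(counts, nobs)
print(res.pval_corrected())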
def confint_proportions_2indep(count1, nobs1, count2, nobs2, method=None,
compare='diff', alpha=0.05, correction=True):
"""
Confidence intervals for comparing two independent proportions.
This assumes that we have two independent binomial samples.
Parameters
----------
count1, nobs1 : float
Count and sample size for first sample.
count2, nobs2 : float
Count and sample size for the second sample.
method : str
Method for computing confidence interval. If method is None, then a
default method is used. The default might change as more methods are
added.
diff:
- 'wald',
- 'agresti-caffo'
- 'newcomb' (default)
- 'score'
ratio:
- 'log'
- 'log-adjusted' (default)
- 'score'
odds-ratio:
- 'logit'
- 'logit-adjusted' (default)
- 'score'
    compare : string in ['diff', 'ratio', 'odds-ratio']
If compare is diff, then the confidence interval is for diff = p1 - p2.
If compare is ratio, then the confidence interval is for the risk ratio
defined by ratio = p1 / p2.
If compare is odds-ratio, then the confidence interval is for the
        odds-ratio defined by or = p1 / (1 - p1) / (p2 / (1 - p2)).
alpha : float
Significance level for the confidence interval, default is 0.05.
The nominal coverage probability is 1 - alpha.
Returns
-------
low, upp
See Also
--------
test_proportions_2indep
tost_proportions_2indep
Notes
-----
Status: experimental, API and defaults might still change.
more ``methods`` will be added.
References
----------
.. [1] Fagerland, Morten W., Stian Lydersen, and Petter Laake. 2015.
“Recommended Confidence Intervals for Two Independent Binomial
Proportions.” Statistical Methods in Medical Research 24 (2): 224–54.
https://doi.org/10.1177/0962280211415469.
.. [2] Koopman, P. A. R. 1984. “Confidence Intervals for the Ratio of Two
Binomial Proportions.” Biometrics 40 (2): 513–17.
https://doi.org/10.2307/2531405.
.. [3] Miettinen, Olli, and Markku Nurminen. "Comparative analysis of two
rates." Statistics in medicine 4, no. 2 (1985): 213-226.
.. [4] Newcombe, Robert G. 1998. “Interval Estimation for the Difference
between Independent Proportions: Comparison of Eleven Methods.”
Statistics in Medicine 17 (8): 873–90.
https://doi.org/10.1002/(SICI)1097-0258(19980430)17:8<873::AID-
SIM779>3.0.CO;2-I.
.. [5] Newcombe, Robert G., and Markku M. Nurminen. 2011. “In Defence of
Score Intervals for Proportions and Their Differences.” Communications
in Statistics - Theory and Methods 40 (7): 1271–82.
https://doi.org/10.1080/03610920903576580.
"""
method_default = {'diff': 'newcomb',
'ratio': 'log-adjusted',
'odds-ratio': 'logit-adjusted'}
# normalize compare name
if compare.lower() == 'or':
compare = 'odds-ratio'
if method is None:
method = method_default[compare]
method = method.lower()
if method.startswith('agr'):
method = 'agresti-caffo'
p1 = count1 / nobs1
p2 = count2 / nobs2
diff = p1 - p2
addone = 1 if method == 'agresti-caffo' else 0
if compare == 'diff':
if method in ['wald', 'agresti-caffo']:
count1_, nobs1_ = count1 + addone, nobs1 + 2 * addone
count2_, nobs2_ = count2 + addone, nobs2 + 2 * addone
p1_ = count1_ / nobs1_
p2_ = count2_ / nobs2_
diff_ = p1_ - p2_
var = p1_ * (1 - p1_) / nobs1_ + p2_ * (1 - p2_) / nobs2_
z = stats.norm.isf(alpha / 2)
d_wald = z * np.sqrt(var)
low = diff_ - d_wald
upp = diff_ + d_wald
elif method.startswith('newcomb'):
low1, upp1 = proportion_confint(count1, nobs1,
method='wilson', alpha=alpha)
low2, upp2 = proportion_confint(count2, nobs2,
method='wilson', alpha=alpha)
d_low = np.sqrt((p1 - low1)**2 + (upp2 - p2)**2)
d_upp = np.sqrt((p2 - low2)**2 + (upp1 - p1)**2)
low = diff - d_low
upp = diff + d_upp
elif method == "score":
low, upp = _score_confint_inversion(count1, nobs1, count2, nobs2,
compare=compare, alpha=alpha,
correction=correction)
else:
raise ValueError('method not recognized')
elif compare == 'ratio':
# ratio = p1 / p2
if method in ['log', 'log-adjusted']:
addhalf = 0.5 if method == 'log-adjusted' else 0
count1_, nobs1_ = count1 + addhalf, nobs1 + addhalf
count2_, nobs2_ = count2 + addhalf, nobs2 + addhalf
p1_ = count1_ / nobs1_
p2_ = count2_ / nobs2_
ratio_ = p1_ / p2_
var = (1 / count1_) - 1 / nobs1_ + 1 / count2_ - 1 / nobs2_
z = stats.norm.isf(alpha / 2)
d_log = z * np.sqrt(var)
low = np.exp(np.log(ratio_) - d_log)
upp = np.exp(np.log(ratio_) + d_log)
elif method == 'score':
res = _confint_riskratio_koopman(count1, nobs1, count2, nobs2,
alpha=alpha,
correction=correction)
low, upp = res.confint
else:
raise ValueError('method not recognized')
elif compare == 'odds-ratio':
# odds_ratio = p1 / (1 - p1) / p2 * (1 - p2)
if method in ['logit', 'logit-adjusted', 'logit-smoothed']:
if method in ['logit-smoothed']:
adjusted = _shrink_prob(count1, nobs1, count2, nobs2,
shrink_factor=2, return_corr=False)[0]
count1_, nobs1_, count2_, nobs2_ = adjusted
else:
addhalf = 0.5 if method == 'logit-adjusted' else 0
count1_, nobs1_ = count1 + addhalf, nobs1 + 2 * addhalf
count2_, nobs2_ = count2 + addhalf, nobs2 + 2 * addhalf
p1_ = count1_ / nobs1_
p2_ = count2_ / nobs2_
odds_ratio_ = p1_ / (1 - p1_) / p2_ * (1 - p2_)
var = (1 / count1_ + 1 / (nobs1_ - count1_) +
1 / count2_ + 1 / (nobs2_ - count2_))
z = stats.norm.isf(alpha / 2)
d_log = z * np.sqrt(var)
low = np.exp(np.log(odds_ratio_) - d_log)
upp = np.exp(np.log(odds_ratio_) + d_log)
elif method == "score":
low, upp = _score_confint_inversion(count1, nobs1, count2, nobs2,
compare=compare, alpha=alpha,
correction=correction)
else:
raise ValueError('method not recognized')
else:
raise ValueError('compare not recognized')
return low, upp | Confidence intervals for comparing two independent proportions.
This assumes that we have two independent binomial samples.
Parameters
----------
count1, nobs1 : float
Count and sample size for first sample.
count2, nobs2 : float
Count and sample size for the second sample.
method : str
Method for computing confidence interval. If method is None, then a
default method is used. The default might change as more methods are
added.
diff:
- 'wald',
- 'agresti-caffo'
- 'newcomb' (default)
- 'score'
ratio:
- 'log'
- 'log-adjusted' (default)
- 'score'
odds-ratio:
- 'logit'
- 'logit-adjusted' (default)
- 'score'
compare : string in ['diff', 'ratio', 'odds-ratio']
If compare is diff, then the confidence interval is for diff = p1 - p2.
If compare is ratio, then the confidence interval is for the risk ratio
defined by ratio = p1 / p2.
If compare is odds-ratio, then the confidence interval is for the
    odds-ratio defined by or = p1 / (1 - p1) / (p2 / (1 - p2)).
alpha : float
Significance level for the confidence interval, default is 0.05.
The nominal coverage probability is 1 - alpha.
Returns
-------
low, upp
See Also
--------
test_proportions_2indep
tost_proportions_2indep
Notes
-----
Status: experimental, API and defaults might still change.
more ``methods`` will be added.
References
----------
.. [1] Fagerland, Morten W., Stian Lydersen, and Petter Laake. 2015.
“Recommended Confidence Intervals for Two Independent Binomial
Proportions.” Statistical Methods in Medical Research 24 (2): 224–54.
https://doi.org/10.1177/0962280211415469.
.. [2] Koopman, P. A. R. 1984. “Confidence Intervals for the Ratio of Two
Binomial Proportions.” Biometrics 40 (2): 513–17.
https://doi.org/10.2307/2531405.
.. [3] Miettinen, Olli, and Markku Nurminen. "Comparative analysis of two
rates." Statistics in medicine 4, no. 2 (1985): 213-226.
.. [4] Newcombe, Robert G. 1998. “Interval Estimation for the Difference
between Independent Proportions: Comparison of Eleven Methods.”
Statistics in Medicine 17 (8): 873–90.
https://doi.org/10.1002/(SICI)1097-0258(19980430)17:8<873::AID-
SIM779>3.0.CO;2-I.
.. [5] Newcombe, Robert G., and Markku M. Nurminen. 2011. “In Defence of
Score Intervals for Proportions and Their Differences.” Communications
in Statistics - Theory and Methods 40 (7): 1271–82.
https://doi.org/10.1080/03610920903576580. | confint_proportions_2indep | python | statsmodels/statsmodels | statsmodels/stats/proportion.py | https://github.com/statsmodels/statsmodels/blob/master/statsmodels/stats/proportion.py | BSD-3-Clause |
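A minimal sketch of the default intervals for the three comparison types (illustrative data):

from statsmodels.stats.proportion import confint_proportions_2indep

low, upp = confint_proportions_2indep(18, 60, 26, 64, compare='diff')  # Newcombe
low_rr, upp_rr = confint_proportions_2indep(18, 60, 26, 64, compare='ratio')
low_or, upp_or = confint_proportions_2indep(18, 60, 26, 64, compare='odds-ratio')
print(low, upp)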
def _shrink_prob(count1, nobs1, count2, nobs2, shrink_factor=2,
return_corr=True):
"""
Shrink observed counts towards independence
Helper function for 'logit-smoothed' inference for the odds-ratio of two
independent proportions.
Parameters
----------
count1, nobs1 : float or int
count and sample size for first sample
count2, nobs2 : float or int
count and sample size for the second sample
shrink_factor : float
This corresponds to the number of observations that are added in total
proportional to the probabilities under independence.
    return_corr : bool
        If true, then only the correction term is returned.
        If false, then the corrected counts, i.e. original counts plus
        correction term, are returned together with ``prob_indep``.
Returns
-------
count1_corr, nobs1_corr, count2_corr, nobs2_corr : float
correction or corrected counts
    prob_indep : ndarray
        Probabilities under independence, only returned if return_corr is
        false.
        TODO/Warning : this return structure will most likely change.
"""
vectorized = any(np.size(i) > 1 for i in [count1, nobs1, count2, nobs2])
if vectorized:
raise ValueError("function is not vectorized")
nobs_col = np.array([count1 + count2, nobs1 - count1 + nobs2 - count2])
nobs_row = np.array([nobs1, nobs2])
nobs = nobs1 + nobs2
prob_indep = (nobs_col * nobs_row[:, None]) / nobs**2
corr = shrink_factor * prob_indep
if return_corr:
return (corr[0, 0], corr[0].sum(), corr[1, 0], corr[1].sum())
else:
return (count1 + corr[0, 0], nobs1 + corr[0].sum(),
count2 + corr[1, 0], nobs2 + corr[1].sum()), prob_indep | Shrink observed counts towards independence
Helper function for 'logit-smoothed' inference for the odds-ratio of two
independent proportions.
Parameters
----------
count1, nobs1 : float or int
count and sample size for first sample
count2, nobs2 : float or int
count and sample size for the second sample
shrink_factor : float
This corresponds to the number of observations that are added in total
proportional to the probabilities under independence.
return_corr : bool
If true, then only the correction term is returned.
If false, then the corrected counts, i.e. original counts plus
correction term, are returned together with ``prob_indep``.
Returns
-------
count1_corr, nobs1_corr, count2_corr, nobs2_corr : float
correction or corrected counts
prob_indep : ndarray
Probabilities under independence, only returned if return_corr is
false.
TODO/Warning : this return structure will most likely change. | _shrink_prob | python | statsmodels/statsmodels | statsmodels/stats/proportion.py | https://github.com/statsmodels/statsmodels/blob/master/statsmodels/stats/proportion.py | BSD-3-Clause
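A minimal sketch of what this private helper computes (illustrative numbers;
``_shrink_prob`` is not part of the public API, so this is for exposition
only). With shrink_factor=2 the added pseudo-observations are proportional to
the cell probabilities under independence and sum to 2 in total.

c1_add, n1_add, c2_add, n2_add = _shrink_prob(7, 34, 1, 34, shrink_factor=2,
                                              return_corr=True)
# n1_add + n2_add equals shrink_factor (= 2 here)

counts, prob_indep = _shrink_prob(7, 34, 1, 34, shrink_factor=2,
                                  return_corr=False)
count1_, nobs1_, count2_, nobs2_ = counts  # smoothed counts, as used by the
                                           # 'logit-smoothed' confidence interval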
def score_test_proportions_2indep(count1, nobs1, count2, nobs2, value=None,
compare='diff', alternative='two-sided',
correction=True, return_results=True):
"""
Score test for two independent proportions
This uses the constrained estimate of the proportions to compute
the variance under the Null hypothesis.
Parameters
----------
    count1, nobs1 : float
        Count and sample size for first sample.
    count2, nobs2 : float
        Count and sample size for the second sample.
    value : float
        diff, ratio or odds-ratio under the null hypothesis. If value is
        None, then equality of proportions under the Null is assumed,
        i.e. value=0 for 'diff' and value=1 for 'ratio' or 'odds-ratio'.
    compare : string in ['diff', 'ratio', 'odds-ratio']
        If compare is 'diff', then the test is for the difference
        diff = p1 - p2.
        If compare is 'ratio', then the test is for the risk ratio
        defined by ratio = p1 / p2.
        If compare is 'odds-ratio', then the test is for the odds ratio
        defined by odds_ratio = (p1 / (1 - p1)) / (p2 / (1 - p2)).
    alternative : str
        The alternative hypothesis, one of 'two-sided', 'smaller' or
        'larger'.
    correction : bool
        If correction is True, then the Miettinen and Nurminen small-sample
        correction nobs / (nobs - 1) is applied to the variance of the
        constrained estimate.
return_results : bool
If true, then a results instance with extra information is returned,
otherwise a tuple with statistic and pvalue is returned.
Returns
-------
results : results instance or tuple
If return_results is True, then a results instance with the
information in attributes is returned.
If return_results is False, then only ``statistic`` and ``pvalue``
are returned.
        statistic : float
            Test statistic, asymptotically standard normal N(0, 1).
        pvalue : float
            p-value based on the normal distribution.
other attributes :
additional information about the hypothesis test
Notes
-----
Status: experimental, the type or extra information in the return might
change.
"""
value_default = 0 if compare == 'diff' else 1
if value is None:
# TODO: odds ratio does not work if value=1
value = value_default
nobs = nobs1 + nobs2
count = count1 + count2
p1 = count1 / nobs1
p2 = count2 / nobs2
if value == value_default:
# use pooled estimator if equality test
# shortcut, but required for odds ratio
prop0 = prop1 = count / nobs
    # this uses index 0 from Miettinen Nurminen 1985
count0, nobs0 = count2, nobs2
p0 = p2
if compare == 'diff':
diff = value # hypothesis value
if diff != 0:
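            # Constrained MLE of p0 under H0: p1 - p0 = diff, computed as the
            # closed-form trigonometric root of a cubic equation (Miettinen &
            # Nurminen 1985); tmp3..tmp0 play the role of the cubic
            # coefficients.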
tmp3 = nobs
tmp2 = (nobs1 + 2 * nobs0) * diff - nobs - count
tmp1 = (count0 * diff - nobs - 2 * count0) * diff + count
tmp0 = count0 * diff * (1 - diff)
q = ((tmp2 / (3 * tmp3))**3 - tmp1 * tmp2 / (6 * tmp3**2) +
tmp0 / (2 * tmp3))
p = np.sign(q) * np.sqrt((tmp2 / (3 * tmp3))**2 -
tmp1 / (3 * tmp3))
a = (np.pi + np.arccos(q / p**3)) / 3
prop0 = 2 * p * np.cos(a) - tmp2 / (3 * tmp3)
prop1 = prop0 + diff
var = prop1 * (1 - prop1) / nobs1 + prop0 * (1 - prop0) / nobs0
if correction:
var *= nobs / (nobs - 1)
diff_stat = (p1 - p0 - diff)
elif compare == 'ratio':
# risk ratio
ratio = value
if ratio != 1:
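            # Constrained MLE of p0 under H0: p1 = ratio * p0, taken as the
            # admissible root of the quadratic a * p0**2 + b * p0 + c = 0
            # below (Miettinen & Nurminen 1985; Koopman 1984).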
a = nobs * ratio
b = -(nobs1 * ratio + count1 + nobs2 + count0 * ratio)
c = count
prop0 = (-b - np.sqrt(b**2 - 4 * a * c)) / (2 * a)
prop1 = prop0 * ratio
var = (prop1 * (1 - prop1) / nobs1 +
ratio**2 * prop0 * (1 - prop0) / nobs0)
if correction:
var *= nobs / (nobs - 1)
        # NCSS's var looks incorrect, but this var is what should be reported
# diff_stat = (p1 / p0 - ratio) # NCSS/PASS
diff_stat = (p1 - ratio * p0) # Miettinen Nurminen
elif compare in ['or', 'odds-ratio']:
# odds ratio
oratio = value
if oratio != 1:
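            # Constrained MLEs of (p0, p1) under H0: odds-ratio = oratio;
            # p0 solves the quadratic below and p1 follows from the
            # odds-ratio constraint.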
            # Note: the constrained estimator does not handle odds-ratio = 1
a = nobs0 * (oratio - 1)
b = nobs1 * oratio + nobs0 - count * (oratio - 1)
c = -count
prop0 = (-b + np.sqrt(b**2 - 4 * a * c)) / (2 * a)
prop1 = prop0 * oratio / (1 + prop0 * (oratio - 1))
            # clip to avoid proportions of exactly 0 or 1, which would
            # trigger division-by-zero RuntimeWarnings
eps = 1e-10
prop0 = np.clip(prop0, eps, 1 - eps)
prop1 = np.clip(prop1, eps, 1 - eps)
var = (1 / (prop1 * (1 - prop1) * nobs1) +
1 / (prop0 * (1 - prop0) * nobs0))
if correction:
var *= nobs / (nobs - 1)
diff_stat = ((p1 - prop1) / (prop1 * (1 - prop1)) -
(p0 - prop0) / (prop0 * (1 - prop0)))
statistic, pvalue = _zstat_generic2(diff_stat, np.sqrt(var),
alternative=alternative)
if return_results:
res = HolderTuple(statistic=statistic,
pvalue=pvalue,
compare=compare,
method='score',
variance=var,
alternative=alternative,
prop1_null=prop1,
prop2_null=prop0,
)
return res
else:
return statistic, pvalue | Score test for two independent proportions
This uses the constrained estimate of the proportions to compute
the variance under the Null hypothesis.
Parameters
----------
count1, nobs1 :
count and sample size for first sample
count2, nobs2 :
count and sample size for the second sample
value : float
diff, ratio or odds-ratio under the null hypothesis. If value is None,
then equality of proportions under the Null is assumed,
i.e. value=0 for 'diff' and value=1 for 'ratio' or 'odds-ratio'.
compare : string in ['diff', 'ratio', 'odds-ratio']
If compare is 'diff', then the test is for the difference
diff = p1 - p2.
If compare is 'ratio', then the test is for the risk ratio
defined by ratio = p1 / p2.
If compare is 'odds-ratio', then the test is for the odds ratio
defined by odds_ratio = (p1 / (1 - p1)) / (p2 / (1 - p2)).
alternative : str
The alternative hypothesis, one of 'two-sided', 'smaller' or 'larger'.
correction : bool
If correction is True, then the Miettinen and Nurminen small-sample
correction nobs / (nobs - 1) is applied to the variance of the
constrained estimate.
return_results : bool
If true, then a results instance with extra information is returned,
otherwise a tuple with statistic and pvalue is returned.
Returns
-------
results : results instance or tuple
If return_results is True, then a results instance with the
information in attributes is returned.
If return_results is False, then only ``statistic`` and ``pvalue``
are returned.
statistic : float
Test statistic, asymptotically standard normal N(0, 1).
pvalue : float
p-value based on the normal distribution.
other attributes :
additional information about the hypothesis test
Notes
-----
Status: experimental, the type or extra information in the return might
change. | score_test_proportions_2indep | python | statsmodels/statsmodels | statsmodels/stats/proportion.py | https://github.com/statsmodels/statsmodels/blob/master/statsmodels/stats/proportion.py | BSD-3-Clause |
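A usage sketch (not part of the statsmodels source; counts are illustrative
and assume a recent statsmodels release where this experimental function is
available):

from statsmodels.stats.proportion import score_test_proportions_2indep

# Score test of H0: p1 - p2 = 0.1, two-sided, with the Miettinen-Nurminen
# small-sample variance correction enabled by default
res = score_test_proportions_2indep(7, 34, 1, 34, value=0.1, compare='diff')
print(res.statistic, res.pvalue)
print(res.prop1_null, res.prop2_null)  # constrained estimates under the null

# Equality test for the risk ratio (value=None implies ratio = 1), returning
# only the statistic and p-value
stat, pval = score_test_proportions_2indep(7, 34, 1, 34, compare='ratio',
                                           return_results=False)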