"""Score/LM diagnostic tests and meta-analysis helpers from statsmodels
(statsmodels/stats/_diagnostic_other.py, statsmodels/stats/meta_analysis.py
and statsmodels/stats/anova.py; BSD-3-Clause,
https://github.com/statsmodels/statsmodels)."""

import numpy as np
import pandas as pd
from scipy import stats

from statsmodels.regression.linear_model import OLS
from statsmodels.stats.base import HolderTuple
# Simple attribute-holder results class used by the conditional moment tests
# below; defined in statsmodels.stats._diagnostic_other.
from statsmodels.stats._diagnostic_other import ResultsGeneric
# Results class constructed by combine_effects; several of its methods are
# shown as standalone functions below.
from statsmodels.stats.meta_analysis import CombineResults

def lm_robust(score, constraint_matrix, score_deriv_inv, cov_score,
cov_params=None):
    '''general formula for score/LM test

    Generalized score or Lagrange multiplier test for implicit constraints
    `r(params) = 0`, with gradient `R = d r / d params`.
    Linear constraints are given by `R params - q = 0`.

    It is assumed that all arrays are evaluated at the constrained estimates.

    Parameters
    ----------
    score : ndarray, 1-D
        derivative of objective function at estimated parameters
        of constrained model
    constraint_matrix, R : ndarray
        Linear restriction matrix or Jacobian of nonlinear constraints
    score_deriv_inv, Ainv : ndarray, symmetric, square
        inverse of second derivative of objective function
        TODO: could be OPG or any other estimator if information matrix
        equality holds
    cov_score, B : ndarray, symmetric, square
        covariance matrix of the score. This is the inner part of a sandwich
        estimator.
    cov_params, V : ndarray, symmetric, square
        covariance of full parameter vector evaluated at constrained parameter
        estimate. This can be specified instead of cov_score B.

    Returns
    -------
    lm_stat : float
        score/Lagrange multiplier statistic
    '''
# shorthand alias
R, Ainv, B, V = constraint_matrix, score_deriv_inv, cov_score, cov_params
tmp = R.dot(Ainv)
    wscore = tmp.dot(score)  # R Ainv score
if B is None and V is None:
# only Ainv is given, so we assume information matrix identity holds
# computational short cut, should be same if Ainv == inv(B)
lm_stat = score.dot(Ainv.dot(score))
else:
# information matrix identity does not hold
if V is None:
inner = tmp.dot(B).dot(tmp.T)
else:
inner = R.dot(V).dot(R.T)
#lm_stat2 = wscore.dot(np.linalg.pinv(inner).dot(wscore))
# Let's assume inner is invertible, TODO: check if usecase for pinv exists
lm_stat = wscore.dot(np.linalg.solve(inner, wscore))
    return lm_stat  # , lm_stat2
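
# ---------------------------------------------------------------------------
# Usage sketch for lm_robust (added for illustration, not part of
# statsmodels): all arrays below are simulated, hypothetical stand-ins for
# quantities evaluated at a constrained estimate; only the shapes matter.

def _example_lm_robust():
    rng = np.random.default_rng(12345)
    k_params, k_constraints = 4, 2

    score = rng.normal(size=k_params)            # gradient at constrained fit
    R = np.eye(k_params)[:k_constraints]         # restrict first two params
    a = rng.normal(size=(k_params, k_params))
    Ainv = np.linalg.inv(a @ a.T + k_params * np.eye(k_params))
    b = rng.normal(size=(k_params, k_params))
    B = b @ b.T + k_params * np.eye(k_params)    # SPD score covariance

    # chi2-distributed with k_constraints degrees of freedom under the null
    return lm_robust(score, R, Ainv, B)
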
def lm_robust_subset(score, k_constraints, score_deriv_inv, cov_score):
    '''general formula for score/LM test

    Generalized score or Lagrange multiplier test for constraints on a subset
    of parameters, `params_1 = value`, where params_1 is a subset of the
    unconstrained parameter vector.

    It is assumed that all arrays are evaluated at the constrained estimates.

    Parameters
    ----------
    score : ndarray, 1-D
        derivative of objective function at estimated parameters
        of constrained model. Only the block of the score that corresponds
        to the constrained parameters is used here.
    k_constraints : int
        number of constraints
    score_deriv_inv : ndarray, symmetric, square
        inverse of second derivative of objective function
        TODO: could be OPG or any other estimator if information matrix
        equality holds
    cov_score, B : ndarray, symmetric, square
        covariance matrix of the score. This is the inner part of a sandwich
        estimator.

    Returns
    -------
    lm_stat : float
        score/Lagrange multiplier statistic
    pval : float
        p-value of the LM test based on the chisquare distribution

    Notes
    -----
    The implementation is based on Boos 1992, section 4.1. The same
    derivation also appears in other articles and in textbooks.
    '''
    # Notation in Boos:
    # score `S` = sum(s_i), score_obs `s_i`
    # score_deriv `I` is the derivative of the score (hessian)
    # `D` is the covariance matrix of the score, the OPG product given
    # independent observations
    #k_params = len(score)
    # Note: the order of the constrained and unconstrained blocks is
    # reversed compared to Boos.
    # submatrices of score_deriv/hessian, I22 and I12 in Boos:
    #h_uu = score_deriv[-k_constraints:, -k_constraints:]
h_uu = score_deriv_inv[:-k_constraints, :-k_constraints]
h_cu = score_deriv_inv[-k_constraints:, :-k_constraints]
# TODO: pinv or solve ?
tmp_proj = h_cu.dot(np.linalg.inv(h_uu))
tmp = np.column_stack((-tmp_proj, np.eye(k_constraints))) #, tmp_proj))
cov_score_constraints = tmp.dot(cov_score.dot(tmp.T))
#lm_stat2 = wscore.dot(np.linalg.pinv(inner).dot(wscore))
# Let's assume inner is invertible, TODO: check if usecase for pinv exists
lm_stat = score.dot(np.linalg.solve(cov_score_constraints, score))
pval = stats.chi2.sf(lm_stat, k_constraints)
# # check second calculation Boos referencing Kent 1982 and Engle 1984
# # we can use this when robust_cov_params of full model is available
# #h_inv = np.linalg.inv(score_deriv)
# hinv = score_deriv_inv
# v = h_inv.dot(cov_score.dot(h_inv)) # this is robust cov_params
# v_cc = v[:k_constraints, :k_constraints]
# h_cc = score_deriv[:k_constraints, :k_constraints]
# # brute force calculation:
# h_resid_cu = h_cc - h_cu.dot(np.linalg.solve(h_uu, h_cu))
# cov_s_c = h_resid_cu.dot(v_cc.dot(h_resid_cu))
# diff = np.max(np.abs(cov_s_c - cov_score_constraints))
    return lm_stat, pval  # , lm_stat2
def lm_robust_subset_parts(score, k_constraints,
score_deriv_uu, score_deriv_cu,
cov_score_cc, cov_score_cu, cov_score_uu):
"""robust generalized score tests on subset of parameters
This is the same as lm_robust_subset with arguments in parts of
partitioned matrices.
This can be useful, when we have the parts based on different estimation
procedures, i.e. when we do not have the full unconstrained model.
Calculates mainly the covariance of the constraint part of the score.
Parameters
----------
score : ndarray, 1-D
derivative of objective function at estimated parameters
of constrained model. These is the score component for the restricted
part under hypothesis. The unconstrained part of the score is assumed
to be zero.
k_constraint : int
number of constraints
score_deriv_uu : ndarray, symmetric, square
first derivative of moment equation or second derivative of objective
function for the unconstrained part
TODO: could be OPG or any other estimator if information matrix
equality holds
score_deriv_cu : ndarray
first cross derivative of moment equation or second cross
derivative of objective function between.
cov_score_cc : ndarray
covariance matrix of the score for the unconstrained part.
This is the inner part of a sandwich estimator.
cov_score_cu : ndarray
covariance matrix of the score for the off-diagonal block, i.e.
covariance between constrained and unconstrained part.
cov_score_uu : ndarray
covariance matrix of the score for the unconstrained part.
Returns
-------
lm_stat : float
score/lagrange multiplier statistic
p-value : float
p-value of the LM test based on chisquare distribution
Notes
-----
TODO: these function should just return the covariance of the score
instead of calculating the score/lm test.
Implementation similar to lm_robust_subset and is based on Boos 1992,
section 4.1 in the form attributed to Breslow (1990). It does not use the
computation attributed to Kent (1982) and Engle (1984).
"""
tmp_proj = np.linalg.solve(score_deriv_uu, score_deriv_cu.T).T
tmp = tmp_proj.dot(cov_score_cu.T)
# this needs to make a copy of cov_score_cc for further inplace modification
cov = cov_score_cc - tmp
cov -= tmp.T
cov += tmp_proj.dot(cov_score_uu).dot(tmp_proj.T)
lm_stat = score.dot(np.linalg.solve(cov, score))
pval = stats.chi2.sf(lm_stat, k_constraints)
    return lm_stat, pval
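
# ---------------------------------------------------------------------------
# Usage sketch for lm_robust_subset_parts (illustration only): build the
# partitioned blocks from simulated full matrices; values are arbitrary.

def _example_lm_robust_subset_parts():
    rng = np.random.default_rng(1)
    k_c, k_u = 2, 3
    k = k_c + k_u

    h = rng.normal(size=(k, k))
    h = h @ h.T + k * np.eye(k)                  # Hessian-like, SPD
    v = rng.normal(size=(3 * k, k))
    v = v.T @ v                                  # score covariance, SPD
    score_c = rng.normal(size=k_c)               # constrained block of score

    return lm_robust_subset_parts(
        score_c, k_c,
        score_deriv_uu=h[k_c:, k_c:], score_deriv_cu=h[:k_c, k_c:],
        cov_score_cc=v[:k_c, :k_c], cov_score_cu=v[:k_c, k_c:],
        cov_score_uu=v[k_c:, k_c:])
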
def lm_robust_reparameterized(score, params_deriv, score_deriv, cov_score):
"""robust generalized score test for transformed parameters
The parameters are given by a nonlinear transformation of the estimated
reduced parameters
`params = g(params_reduced)` with jacobian `G = d g / d params_reduced`
score and other arrays are for full parameter space `params`
Parameters
----------
score : ndarray, 1-D
derivative of objective function at estimated parameters
of constrained model
params_deriv : ndarray
Jacobian G of the parameter trasnformation
score_deriv : ndarray, symmetric, square
second derivative of objective function
TODO: could be OPG or any other estimator if information matrix
equality holds
cov_score B : ndarray, symmetric, square
covariance matrix of the score. This is the inner part of a sandwich
estimator.
Returns
-------
lm_stat : float
score/lagrange multiplier statistic
p-value : float
p-value of the LM test based on chisquare distribution
Notes
-----
Boos 1992, section 4.3, expression for T_{GS} just before example 6
"""
# Boos notation
# params_deriv G
k_params, k_reduced = params_deriv.shape
k_constraints = k_params - k_reduced
G = params_deriv # shortcut alias
tmp_c0 = np.linalg.pinv(G.T.dot(score_deriv.dot(G)))
tmp_c1 = score_deriv.dot(G.dot(tmp_c0.dot(G.T)))
tmp_c = np.eye(k_params) - tmp_c1
cov = tmp_c.dot(cov_score.dot(tmp_c.T)) # warning: reduced rank
lm_stat = score.dot(np.linalg.pinv(cov).dot(score))
pval = stats.chi2.sf(lm_stat, k_constraints)
    return lm_stat, pval
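
# ---------------------------------------------------------------------------
# Usage sketch for lm_robust_reparameterized (illustration only): a random
# Jacobian G maps 2 reduced parameters into a 4-dimensional parameter space,
# implying k_params - k_reduced = 2 constraints.

def _example_lm_robust_reparameterized():
    rng = np.random.default_rng(2)
    k_params, k_reduced = 4, 2

    G = rng.normal(size=(k_params, k_reduced))   # Jacobian of g
    h = rng.normal(size=(k_params, k_params))
    score_deriv = h @ h.T + k_params * np.eye(k_params)
    s = rng.normal(size=(3 * k_params, k_params))
    cov_score = s.T @ s
    score = rng.normal(size=k_params)

    return lm_robust_reparameterized(score, G, score_deriv, cov_score)
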
def conditional_moment_test_generic(mom_test, mom_test_deriv,
mom_incl, mom_incl_deriv,
var_mom_all=None,
cov_type='OPG', cov_kwds=None):
"""generic conditional moment test
This is mainly intended as internal function in support of diagnostic
and specification tests. It has no conversion and checking of correct
arguments.
Parameters
----------
mom_test : ndarray, 2-D (nobs, k_constraints)
moment conditions that will be tested to be zero
mom_test_deriv : ndarray, 2-D, square (k_constraints, k_constraints)
derivative of moment conditions under test with respect to the
parameters of the model summed over observations.
mom_incl : ndarray, 2-D (nobs, k_params)
moment conditions that where use in estimation, assumed to be zero
This is score_obs in the case of (Q)MLE
mom_incl_deriv : ndarray, 2-D, square (k_params, k_params)
derivative of moment conditions of estimator summed over observations
This is the information matrix or Hessian in the case of (Q)MLE.
var_mom_all : None, or ndarray, 2-D, (k, k) with k = k_constraints + k_params
Expected product or variance of the joint (column_stacked) moment
conditions. The stacking should have the variance of the moment
conditions under test in the first k_constraint rows and columns.
If it is not None, then it will be estimated based on cov_type.
I think: This is the Hessian of the extended or alternative model
under full MLE and score test assuming information matrix identity
holds.
Returns
-------
results
Notes
-----
TODO: cov_type other than OPG is missing
initial implementation based on Cameron Trived countbook 1998 p.48, p.56
also included: mom_incl can be None if expected mom_test_deriv is zero.
References
----------
Cameron and Trivedi 1998 count book
Wooldridge ???
Pagan and Vella 1989
"""
if cov_type != 'OPG':
raise NotImplementedError
k_constraints = mom_test.shape[1]
if mom_incl is None:
# assume mom_test_deriv is zero, do not include effect of mom_incl
if var_mom_all is None:
var_cm = mom_test.T.dot(mom_test)
else:
var_cm = var_mom_all
else:
        # take into account the effect of parameter estimates on mom_test
if var_mom_all is None:
mom_all = np.column_stack((mom_test, mom_incl))
# TODO: replace with inner sandwich covariance estimator
var_mom_all = mom_all.T.dot(mom_all)
tmp = mom_test_deriv.dot(np.linalg.pinv(mom_incl_deriv))
h = np.column_stack((np.eye(k_constraints), -tmp))
var_cm = h.dot(var_mom_all.dot(h.T))
# calculate test results with chisquare
var_cm_inv = np.linalg.pinv(var_cm)
mom_test_sum = mom_test.sum(0)
statistic = mom_test_sum.dot(var_cm_inv.dot(mom_test_sum))
pval = stats.chi2.sf(statistic, k_constraints)
# normal test of individual components
se = np.sqrt(np.diag(var_cm))
tvalues = mom_test_sum / se
pvalues = stats.norm.sf(np.abs(tvalues))
res = ResultsGeneric(var_cm=var_cm,
stat_cmt=statistic,
pval_cmt=pval,
tvalues=tvalues,
pvalues=pvalues)
    return res
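
# ---------------------------------------------------------------------------
# Usage sketch for conditional_moment_test_generic (illustration only):
# simulated moment arrays with the shapes the function expects; under the
# null both sets of moments have mean zero. Assumes ResultsGeneric (imported
# above) is a simple attribute holder for the keyword arguments.

def _example_conditional_moment_test_generic():
    rng = np.random.default_rng(3)
    nobs, k_params, k_constraints = 200, 3, 2

    mom_incl = rng.normal(size=(nobs, k_params))       # e.g. score_obs
    mom_test = rng.normal(size=(nobs, k_constraints))  # moments under test
    mom_incl_deriv = mom_incl.T @ mom_incl             # OPG stand-in
    mom_test_deriv = mom_test.T @ mom_incl             # (k_constraints, k_params)

    res = conditional_moment_test_generic(mom_test, mom_test_deriv,
                                          mom_incl, mom_incl_deriv)
    return res.stat_cmt, res.pval_cmt
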
def conditional_moment_test_regression(mom_test, mom_test_deriv=None,
mom_incl=None, mom_incl_deriv=None,
var_mom_all=None, demean=False,
cov_type='OPG', cov_kwds=None):
"""generic conditional moment test based artificial regression
this is very experimental, no options implemented yet
so far
OPG regression, or
artificial regression with Robust Wald test
The latter is (as far as I can see) the same as an overidentifying test
in GMM where the test statistic is the value of the GMM objective function
and it is assumed that parameters were estimated with optimial GMM, i.e.
the weight matrix equal to the expectation of the score variance.
"""
# so far coded from memory
nobs, k_constraints = mom_test.shape
endog = np.ones(nobs)
if mom_incl is not None:
ex = np.column_stack((mom_test, mom_incl))
else:
ex = mom_test
    if demean:
        # use a copy so the caller's array is not modified in place
        ex = ex - ex.mean(0)
if cov_type == 'OPG':
res = OLS(endog, ex).fit()
statistic = nobs * res.rsquared
pval = stats.chi2.sf(statistic, k_constraints)
else:
res = OLS(endog, ex).fit(cov_type=cov_type, cov_kwds=cov_kwds)
tres = res.wald_test(np.eye(ex.shape[1]))
statistic = tres.statistic
pval = tres.pvalue
    return statistic, pval
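
# ---------------------------------------------------------------------------
# Usage sketch for conditional_moment_test_regression (illustration only):
# the OPG variant regresses a column of ones on the tested moments and uses
# nobs * R^2 as the chi2 statistic.

def _example_conditional_moment_test_regression():
    rng = np.random.default_rng(4)
    mom_test = rng.normal(size=(500, 2))   # mean zero under the null

    return conditional_moment_test_regression(mom_test, demean=True)
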
def asy_cov_moments(self):
"""
`sqrt(T) * g_T(b_0) asy N(K delta, V)`
mean is not implemented,
V is the same as cov_moments in __init__ argument
"""
    return self.cov_moments
def ztest(self):
"""statistic, p-value and degrees of freedom of separate moment test
currently two sided test only
TODO: This can use generic ztest/ttest features and return
ContrastResults
"""
diff = self.moments_constraint
bse = np.sqrt(np.diag(self.cov_mom_constraints))
# Newey uses a generalized inverse
stat = diff / bse
pval = stats.norm.sf(np.abs(stat))*2
    return stat, pval
def chisquare(self):
"""statistic, p-value and degrees of freedom of joint moment test
"""
diff = self.moments_constraint
cov = self.cov_mom_constraints
# Newey uses a generalized inverse
stat = diff.T.dot(np.linalg.pinv(cov).dot(diff))
df = self.rank_cov_mom_constraints
pval = stats.chi2.sf(stat, df) # Theorem 1
    return stat, pval, df
def ztest(self):
"""statistic, p-value and degrees of freedom of separate moment test
currently two sided test only
TODO: This can use generic ztest/ttest features and return
ContrastResults
"""
diff = self.moments_constraint
bse = np.sqrt(np.diag(self.cov_mom_constraints))
# Newey uses a generalized inverse
stat = diff / bse
pval = stats.norm.sf(np.abs(stat))*2
    return stat, pval
def chisquare(self):
"""statistic, p-value and degrees of freedom of joint moment test
"""
diff = self.moments #_constraints
cov = self.cov_mom_constraints
# Newey uses a generalized inverse, we use it also here
stat = diff.T.dot(np.linalg.pinv(cov).dot(diff))
#df = self.k_moments_test
# We allow for redundant mom_constraints:
df = self.rank_cov_mom_constraints
pval = stats.chi2.sf(stat, df)
    return stat, pval, df
def conf_int_samples(self, alpha=0.05, use_t=None, nobs=None,
ci_func=None):
"""confidence intervals for the effect size estimate of samples
Additional information needs to be provided for confidence intervals
that are not based on normal distribution using available variance.
This is likely to change in future.
Parameters
----------
alpha : float in (0, 1)
Significance level for confidence interval. Nominal coverage is
``1 - alpha``.
use_t : None or bool
If use_t is None, then the attribute `use_t` determines whether
normal or t-distribution is used for confidence intervals.
Specifying use_t overrides the attribute.
If use_t is false, then confidence intervals are based on the
normal distribution. If it is true, then the t-distribution is
used.
nobs : None or float
Number of observations used for degrees of freedom computation.
Only used if use_t is true.
ci_func : None or callable
User provided function to compute confidence intervals.
This is not used yet and will allow using non-standard confidence
intervals.
Returns
-------
ci_eff : tuple of ndarrays
Tuple (ci_low, ci_upp) with confidence interval computed for each
sample.
Notes
-----
CombineResults currently only has information from the combine_effects
function, which does not provide details about individual samples.
"""
    # This is a bit messy; we do not have enough information in the results
    # to compute conf_int for distributions other than the normal.
    # TODO: maybe there is a better way
if (alpha, use_t) in self.cache_ci:
return self.cache_ci[(alpha, use_t)]
if use_t is None:
use_t = self.use_t
if ci_func is not None:
kwds = {"use_t": use_t} if use_t is not None else {}
ci_eff = ci_func(alpha=alpha, **kwds)
self.ci_sample_distr = "ci_func"
else:
if use_t is False:
crit = stats.norm.isf(alpha / 2)
self.ci_sample_distr = "normal"
else:
if nobs is not None:
df_resid = nobs - 1
crit = stats.t.isf(alpha / 2, df_resid)
self.ci_sample_distr = "t"
else:
msg = ("`use_t=True` requires `nobs` for each sample "
"or `ci_func`. Using normal distribution for "
"confidence interval of individual samples.")
import warnings
warnings.warn(msg)
crit = stats.norm.isf(alpha / 2)
self.ci_sample_distr = "normal"
# sgn = np.asarray([-1, 1])
# ci_eff = self.eff + sgn * crit * self.sd_eff
ci_low = self.eff - crit * self.sd_eff
ci_upp = self.eff + crit * self.sd_eff
ci_eff = (ci_low, ci_upp)
# if (alpha, use_t) not in self.cache_ci: # not needed
self.cache_ci[(alpha, use_t)] = ci_eff
    return ci_eff
def conf_int(self, alpha=0.05, use_t=None):
"""confidence interval for the overall mean estimate
Parameters
----------
alpha : float in (0, 1)
Significance level for confidence interval. Nominal coverage is
``1 - alpha``.
use_t : None or bool
If use_t is None, then the attribute `use_t` determines whether
normal or t-distribution is used for confidence intervals.
Specifying use_t overrides the attribute.
If use_t is false, then confidence intervals are based on the
normal distribution. If it is true, then the t-distribution is
used.
Returns
-------
ci_eff_fe : tuple of floats
Confidence interval for mean effects size based on fixed effects
model with scale=1.
ci_eff_re : tuple of floats
Confidence interval for mean effects size based on random effects
model with scale=1
ci_eff_fe_wls : tuple of floats
Confidence interval for mean effects size based on fixed effects
model with estimated scale corresponding to WLS, ie. HKSJ.
ci_eff_re_wls : tuple of floats
Confidence interval for mean effects size based on random effects
model with estimated scale corresponding to WLS, ie. HKSJ.
If random effects method is fully iterated, i.e. Paule-Mandel, then
the estimated scale is 1.
"""
if use_t is None:
use_t = self.use_t
if use_t is False:
crit = stats.norm.isf(alpha / 2)
else:
crit = stats.t.isf(alpha / 2, self.df_resid)
sgn = np.asarray([-1, 1])
m_fe = self.mean_effect_fe
m_re = self.mean_effect_re
ci_eff_fe = m_fe + sgn * crit * self.sd_eff_w_fe
ci_eff_re = m_re + sgn * crit * self.sd_eff_w_re
ci_eff_fe_wls = m_fe + sgn * crit * np.sqrt(self.var_hksj_fe)
ci_eff_re_wls = m_re + sgn * crit * np.sqrt(self.var_hksj_re)
    return ci_eff_fe, ci_eff_re, ci_eff_fe_wls, ci_eff_re_wls
def test_homogeneity(self):
"""Test whether the means of all samples are the same
currently no options, test uses chisquare distribution
default might change depending on `use_t`
Returns
-------
res : HolderTuple instance
The results include the following attributes:
- statistic : float
Test statistic, ``q`` in meta-analysis, this is the
pearson_chi2 statistic for the fixed effects model.
- pvalue : float
P-value based on chisquare distribution.
- df : float
Degrees of freedom, equal to number of studies or samples
minus 1.
"""
pvalue = stats.chi2.sf(self.q, self.k - 1)
res = HolderTuple(statistic=self.q,
pvalue=pvalue,
df=self.k - 1,
distr="chi2")
    return res
def summary_array(self, alpha=0.05, use_t=None):
"""Create array with sample statistics and mean estimates
Parameters
----------
alpha : float in (0, 1)
Significance level for confidence interval. Nominal coverage is
``1 - alpha``.
use_t : None or bool
If use_t is None, then the attribute `use_t` determines whether
normal or t-distribution is used for confidence intervals.
Specifying use_t overrides the attribute.
If use_t is false, then confidence intervals are based on the
normal distribution. If it is true, then the t-distribution is
used.
Returns
-------
res : ndarray
Array with columns
['eff', "sd_eff", "ci_low", "ci_upp", "w_fe","w_re"].
Rows include statistics for samples and estimates of overall mean.
column_names : list of str
The names for the columns, used when creating summary DataFrame.
"""
ci_low, ci_upp = self.conf_int_samples(alpha=alpha, use_t=use_t)
res = np.column_stack([self.eff, self.sd_eff,
ci_low, ci_upp,
self.weights_rel_fe, self.weights_rel_re])
ci = self.conf_int(alpha=alpha, use_t=use_t)
res_fe = [[self.mean_effect_fe, self.sd_eff_w_fe,
ci[0][0], ci[0][1], 1, np.nan]]
res_re = [[self.mean_effect_re, self.sd_eff_w_re,
ci[1][0], ci[1][1], np.nan, 1]]
res_fe_wls = [[self.mean_effect_fe, self.sd_eff_w_fe_hksj,
ci[2][0], ci[2][1], 1, np.nan]]
res_re_wls = [[self.mean_effect_re, self.sd_eff_w_re_hksj,
ci[3][0], ci[3][1], np.nan, 1]]
res = np.concatenate([res, res_fe, res_re, res_fe_wls, res_re_wls],
axis=0)
column_names = ['eff', "sd_eff", "ci_low", "ci_upp", "w_fe", "w_re"]
    return res, column_names
def summary_frame(self, alpha=0.05, use_t=None):
"""Create DataFrame with sample statistics and mean estimates
Parameters
----------
alpha : float in (0, 1)
Significance level for confidence interval. Nominal coverage is
``1 - alpha``.
use_t : None or bool
If use_t is None, then the attribute `use_t` determines whether
normal or t-distribution is used for confidence intervals.
Specifying use_t overrides the attribute.
If use_t is false, then confidence intervals are based on the
normal distribution. If it is true, then the t-distribution is
used.
Returns
-------
res : DataFrame
pandas DataFrame instance with columns
['eff', "sd_eff", "ci_low", "ci_upp", "w_fe","w_re"].
Rows include statistics for samples and estimates of overall mean.
"""
if use_t is None:
use_t = self.use_t
labels = (list(self.row_names) +
["fixed effect", "random effect",
"fixed effect wls", "random effect wls"])
res, col_names = self.summary_array(alpha=alpha, use_t=use_t)
results = pd.DataFrame(res, index=labels, columns=col_names)
    return results
def plot_forest(self, alpha=0.05, use_t=None, use_exp=False,
ax=None, **kwds):
"""Forest plot with means and confidence intervals
Parameters
----------
ax : None or matplotlib axis instance
If ax is provided, then the plot will be added to it.
alpha : float in (0, 1)
Significance level for confidence interval. Nominal coverage is
``1 - alpha``.
use_t : None or bool
If use_t is None, then the attribute `use_t` determines whether
normal or t-distribution is used for confidence intervals.
Specifying use_t overrides the attribute.
If use_t is false, then confidence intervals are based on the
normal distribution. If it is true, then the t-distribution is
used.
use_exp : bool
If `use_exp` is True, then the effect size and confidence limits
will be exponentiated. This transform log-odds-ration into
odds-ratio, and similarly for risk-ratio.
ax : AxesSubplot, optional
If given, this axes is used to plot in instead of a new figure
being created.
kwds : optional keyword arguments
Keywords are forwarded to the dot_plot function that creates the
plot.
Returns
-------
fig : Matplotlib figure instance
See Also
--------
dot_plot
"""
from statsmodels.graphics.dotplots import dot_plot
res_df = self.summary_frame(alpha=alpha, use_t=use_t)
if use_exp:
res_df = np.exp(res_df[["eff", "ci_low", "ci_upp"]])
hw = np.abs(res_df[["ci_low", "ci_upp"]] - res_df[["eff"]].values)
    fig = dot_plot(points=res_df["eff"], intervals=hw,
                   lines=res_df.index, line_order=res_df.index,
                   ax=ax, **kwds)
    return fig
def effectsize_smd(mean1, sd1, nobs1, mean2, sd2, nobs2):
"""effect sizes for mean difference for use in meta-analysis
mean1, sd1, nobs1 are for treatment
mean2, sd2, nobs2 are for control
Effect sizes are computed for the mean difference ``mean1 - mean2``
standardized by an estimate of the within variance.
This does not have option yet.
It uses standardized mean difference with bias correction as effect size.
This currently does not use np.asarray, all computations are possible in
pandas.
Parameters
----------
mean1 : array
mean of second sample, treatment groups
sd1 : array
standard deviation of residuals in treatment groups, within
nobs1 : array
number of observations in treatment groups
mean2, sd2, nobs2 : arrays
mean, standard deviation and number of observations of control groups
Returns
-------
smd_bc : array
bias corrected estimate of standardized mean difference
var_smdbc : array
estimate of variance of smd_bc
Notes
-----
Status: API will still change. This is currently intended for support of
meta-analysis.
References
----------
Borenstein, Michael. 2009. Introduction to Meta-Analysis.
Chichester: Wiley.
Chen, Ding-Geng, and Karl E. Peace. 2013. Applied Meta-Analysis with R.
Chapman & Hall/CRC Biostatistics Series.
Boca Raton: CRC Press/Taylor & Francis Group.
"""
# TODO: not used yet, design and options ?
# k = len(mean1)
# if row_names is None:
# row_names = list(range(k))
# crit = stats.norm.isf(alpha / 2)
# var_diff_uneq = sd1**2 / nobs1 + sd2**2 / nobs2
var_diff = (sd1**2 * (nobs1 - 1) +
sd2**2 * (nobs2 - 1)) / (nobs1 + nobs2 - 2)
sd_diff = np.sqrt(var_diff)
nobs = nobs1 + nobs2
bias_correction = 1 - 3 / (4 * nobs - 9)
smd = (mean1 - mean2) / sd_diff
smd_bc = bias_correction * smd
var_smdbc = nobs / nobs1 / nobs2 + smd_bc**2 / 2 / (nobs - 3.94)
    return smd_bc, var_smdbc
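
# ---------------------------------------------------------------------------
# Usage sketch for effectsize_smd with hypothetical two-study summary data
# (treatment vs control); the numbers are made up for illustration.

def _example_effectsize_smd():
    mean1 = np.array([5.2, 4.8])     # treatment means
    sd1 = np.array([1.1, 1.3])
    nobs1 = np.array([40, 60])
    mean2 = np.array([4.5, 4.6])     # control means
    sd2 = np.array([1.0, 1.2])
    nobs2 = np.array([40, 55])

    smd, var_smd = effectsize_smd(mean1, sd1, nobs1, mean2, sd2, nobs2)
    return smd, var_smd
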
def effectsize_2proportions(count1, nobs1, count2, nobs2, statistic="diff",
zero_correction=None, zero_kwds=None):
"""Effects sizes for two sample binomial proportions
Parameters
----------
count1, nobs1, count2, nobs2 : array_like
data for two samples
statistic : {"diff", "odds-ratio", "risk-ratio", "arcsine"}
statistic for the comparison of two proportions
Effect sizes for "odds-ratio" and "risk-ratio" are in logarithm.
zero_correction : {None, float, "tac", "clip"}
Some statistics are not finite when zero counts are in the data.
The options to remove zeros are:
* float : if zero_correction is a single float, then it will be added
to all count (cells) if the sample has any zeros.
* "tac" : treatment arm continuity correction see Ruecker et al 2009,
section 3.2
* "clip" : clip proportions without adding a value to all cells
The clip bounds can be set with zero_kwds["clip_bounds"]
zero_kwds : dict
additional options to handle zero counts
"clip_bounds" tuple, default (1e-6, 1 - 1e-6) if zero_correction="clip"
other options not yet implemented
Returns
-------
effect size : array
Effect size for each sample.
var_es : array
Estimate of variance of the effect size
Notes
-----
Status: API is experimental, Options for zero handling is incomplete.
The names for ``statistics`` keyword can be shortened to "rd", "rr", "or"
and "as".
The statistics are defined as:
- risk difference = p1 - p2
- log risk ratio = log(p1 / p2)
- log odds_ratio = log(p1 / (1 - p1) * (1 - p2) / p2)
- arcsine-sqrt = arcsin(sqrt(p1)) - arcsin(sqrt(p2))
where p1 and p2 are the estimated proportions in sample 1 (treatment) and
sample 2 (control).
log-odds-ratio and log-risk-ratio can be transformed back to ``or`` and
`rr` using `exp` function.
See Also
--------
statsmodels.stats.contingency_tables
"""
if zero_correction is None:
cc1 = cc2 = 0
elif zero_correction == "tac":
# treatment arm continuity correction Ruecker et al 2009, section 3.2
nobs_t = nobs1 + nobs2
cc1 = nobs2 / nobs_t
cc2 = nobs1 / nobs_t
elif zero_correction == "clip":
clip_bounds = zero_kwds.get("clip_bounds", (1e-6, 1 - 1e-6))
cc1 = cc2 = 0
elif zero_correction:
# TODO: check is float_like
cc1 = cc2 = zero_correction
else:
msg = "zero_correction not recognized or supported"
raise NotImplementedError(msg)
zero_mask1 = (count1 == 0) | (count1 == nobs1)
zero_mask2 = (count2 == 0) | (count2 == nobs2)
zmask = np.logical_or(zero_mask1, zero_mask2)
n1 = nobs1 + (cc1 + cc2) * zmask
n2 = nobs2 + (cc1 + cc2) * zmask
p1 = (count1 + cc1) / (n1)
p2 = (count2 + cc2) / (n2)
if zero_correction == "clip":
p1 = np.clip(p1, *clip_bounds)
p2 = np.clip(p2, *clip_bounds)
if statistic in ["diff", "rd"]:
rd = p1 - p2
rd_var = p1 * (1 - p1) / n1 + p2 * (1 - p2) / n2
eff = rd
var_eff = rd_var
elif statistic in ["risk-ratio", "rr"]:
# rr = p1 / p2
log_rr = np.log(p1) - np.log(p2)
log_rr_var = (1 - p1) / p1 / n1 + (1 - p2) / p2 / n2
eff = log_rr
var_eff = log_rr_var
elif statistic in ["odds-ratio", "or"]:
# or_ = p1 / (1 - p1) * (1 - p2) / p2
log_or = np.log(p1) - np.log(1 - p1) - np.log(p2) + np.log(1 - p2)
log_or_var = 1 / (p1 * (1 - p1) * n1) + 1 / (p2 * (1 - p2) * n2)
eff = log_or
var_eff = log_or_var
elif statistic in ["arcsine", "arcsin", "as"]:
as_ = np.arcsin(np.sqrt(p1)) - np.arcsin(np.sqrt(p2))
as_var = (1 / n1 + 1 / n2) / 4
eff = as_
var_eff = as_var
else:
msg = 'statistic not recognized, use one of "rd", "rr", "or", "as"'
raise NotImplementedError(msg)
    return eff, var_eff
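
# ---------------------------------------------------------------------------
# Usage sketch for effectsize_2proportions with hypothetical counts; the
# second study has a zero cell, so the "tac" continuity correction is
# applied to it before computing the log odds-ratio.

def _example_effectsize_2proportions():
    count1 = np.array([12, 0, 7])
    nobs1 = np.array([50, 48, 60])
    count2 = np.array([8, 3, 9])
    nobs2 = np.array([50, 47, 62])

    eff, var_eff = effectsize_2proportions(
        count1, nobs1, count2, nobs2,
        statistic="or", zero_correction="tac")
    return eff, var_eff
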
def combine_effects(effect, variance, method_re="iterated", row_names=None,
use_t=False, alpha=0.05, **kwds):
"""combining effect sizes for effect sizes using meta-analysis
This currently does not use np.asarray, all computations are possible in
pandas.
Parameters
----------
effect : array
mean of effect size measure for all samples
variance : array
variance of mean or effect size measure for all samples
method_re : {"iterated", "chi2"}
method that is use to compute the between random effects variance
"iterated" or "pm" uses Paule and Mandel method to iteratively
estimate the random effects variance. Options for the iteration can
be provided in the ``kwds``
"chi2" or "dl" uses DerSimonian and Laird one-step estimator.
row_names : list of strings (optional)
names for samples or studies, will be included in results summary and
table.
alpha : float in (0, 1)
significance level, default is 0.05, for the confidence intervals
Returns
-------
results : CombineResults
Contains estimation results and intermediate statistics, and includes
a method to return a summary table.
Statistics from intermediate calculations might be removed at a later
time.
Notes
-----
Status: Basic functionality is verified, mainly compared to R metafor
package. However, API might still change.
This computes both fixed effects and random effects estimates. The
random effects results depend on the method to estimate the RE variance.
Scale estimate
In fixed effects models and in random effects models without fully
iterated random effects variance, the model will in general not account
for all residual variance. Traditional meta-analysis uses a fixed
scale equal to 1, that might not produce test statistics and
confidence intervals with the correct size. Estimating the scale to account
for residual variance often improves the small sample properties of
inference and confidence intervals.
This adjustment to the standard errors is often referred to as HKSJ
method based attributed to Hartung and Knapp and Sidik and Jonkman.
However, this is equivalent to estimating the scale in WLS.
The results instance includes both, fixed scale and estimated scale
versions of standard errors and confidence intervals.
References
----------
Borenstein, Michael. 2009. Introduction to Meta-Analysis.
Chichester: Wiley.
Chen, Ding-Geng, and Karl E. Peace. 2013. Applied Meta-Analysis with R.
Chapman & Hall/CRC Biostatistics Series.
Boca Raton: CRC Press/Taylor & Francis Group.
"""
k = len(effect)
if row_names is None:
row_names = list(range(k))
crit = stats.norm.isf(alpha / 2)
# alias for initial version
eff = effect
var_eff = variance
sd_eff = np.sqrt(var_eff)
# fixed effects computation
weights_fe = 1 / var_eff # no bias correction ?
w_total_fe = weights_fe.sum(0)
weights_rel_fe = weights_fe / w_total_fe
eff_w_fe = weights_rel_fe * eff
mean_effect_fe = eff_w_fe.sum()
var_eff_w_fe = 1 / w_total_fe
sd_eff_w_fe = np.sqrt(var_eff_w_fe)
# random effects computation
q = (weights_fe * eff**2).sum(0)
q -= (weights_fe * eff).sum()**2 / w_total_fe
df = k - 1
if method_re.lower() in ["iterated", "pm"]:
tau2, _ = _fit_tau_iterative(eff, var_eff, **kwds)
elif method_re.lower() in ["chi2", "dl"]:
c = w_total_fe - (weights_fe**2).sum() / w_total_fe
tau2 = (q - df) / c
else:
raise ValueError('method_re should be "iterated" or "chi2"')
weights_re = 1 / (var_eff + tau2) # no bias_correction ?
w_total_re = weights_re.sum(0)
weights_rel_re = weights_re / weights_re.sum(0)
eff_w_re = weights_rel_re * eff
mean_effect_re = eff_w_re.sum()
var_eff_w_re = 1 / w_total_re
sd_eff_w_re = np.sqrt(var_eff_w_re)
# ci_low_eff_re = mean_effect_re - crit * sd_eff_w_re
# ci_upp_eff_re = mean_effect_re + crit * sd_eff_w_re
scale_hksj_re = (weights_re * (eff - mean_effect_re)**2).sum() / df
scale_hksj_fe = (weights_fe * (eff - mean_effect_fe)**2).sum() / df
var_hksj_re = (weights_rel_re * (eff - mean_effect_re)**2).sum() / df
var_hksj_fe = (weights_rel_fe * (eff - mean_effect_fe)**2).sum() / df
res = CombineResults(**locals())
    return res
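
# ---------------------------------------------------------------------------
# End-to-end sketch (illustration only, made-up effect sizes): combine study
# effects and inspect the results through the CombineResults methods shown
# above. Relies on the full CombineResults class imported at the top, of
# which only some methods are reproduced in this file.

def _example_combine_effects():
    eff = np.array([0.10, -0.20, 0.05, 0.15])
    var_eff = np.array([0.04, 0.03, 0.05, 0.02])

    res = combine_effects(eff, var_eff, method_re="iterated",
                          row_names=["s1", "s2", "s3", "s4"])
    print(res.summary_frame())           # per-study and pooled estimates
    print(res.test_homogeneity())        # Q test for equal means
    # fig = res.plot_forest(use_exp=True)   # forest plot, odds-ratio scale
    return res
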
def _fit_tau_iterative(eff, var_eff, tau2_start=0, atol=1e-5, maxiter=50):
"""Paule-Mandel iterative estimate of between random effect variance
implementation follows DerSimonian and Kacker 2007 Appendix 8
see also Kacker 2004
Parameters
----------
eff : ndarray
effect sizes
var_eff : ndarray
variance of effect sizes
tau2_start : float
starting value for iteration
atol : float, default: 1e-5
convergence tolerance for absolute value of estimating equation
maxiter : int
maximum number of iterations
Returns
-------
tau2 : float
estimate of random effects variance tau squared
converged : bool
True if iteration has converged.
"""
tau2 = tau2_start
k = eff.shape[0]
converged = False
for i in range(maxiter):
w = 1 / (var_eff + tau2)
m = w.dot(eff) / w.sum(0)
resid_sq = (eff - m)**2
q_w = w.dot(resid_sq)
# estimating equation
ee = q_w - (k - 1)
        if ee < 0:
            # boundary solution: truncate at tau2 = 0
            tau2 = 0
            converged = False
            break
if np.allclose(ee, 0, atol=atol):
converged = True
break
# update tau2
delta = ee / (w**2).dot(resid_sq)
tau2 += delta
    return tau2, converged
def _fit_tau_mm(eff, var_eff, weights):
"""one-step method of moment estimate of between random effect variance
implementation follows Kacker 2004 and DerSimonian and Kacker 2007 eq. 6
Parameters
----------
eff : ndarray
effect sizes
var_eff : ndarray
variance of effect sizes
weights : ndarray
weights for estimating overall weighted mean
Returns
-------
tau2 : float
estimate of random effects variance tau squared
"""
w = weights
m = w.dot(eff) / w.sum(0)
resid_sq = (eff - m)**2
q_w = w.dot(resid_sq)
w_t = w.sum()
expect = w.dot(var_eff) - (w**2).dot(var_eff) / w_t
denom = w_t - (w**2).sum() / w_t
# moment estimate from estimating equation
tau2 = (q_w - expect) / denom
return tau2 | one-step method of moment estimate of between random effect variance
implementation follows Kacker 2004 and DerSimonian and Kacker 2007 eq. 6
Parameters
----------
eff : ndarray
effect sizes
var_eff : ndarray
variance of effect sizes
weights : ndarray
weights for estimating overall weighted mean
Returns
-------
tau2 : float
estimate of random effects variance tau squared | _fit_tau_mm | python | statsmodels/statsmodels | statsmodels/stats/meta_analysis.py | https://github.com/statsmodels/statsmodels/blob/master/statsmodels/stats/meta_analysis.py | BSD-3-Clause |
def _fit_tau_iter_mm(eff, var_eff, tau2_start=0, atol=1e-5, maxiter=50):
"""iterated method of moment estimate of between random effect variance
This repeatedly estimates tau, updating weights in each iteration
see two-step estimators in DerSimonian and Kacker 2007
Parameters
----------
eff : ndarray
effect sizes
var_eff : ndarray
variance of effect sizes
tau2_start : float
starting value for iteration
atol : float, default: 1e-5
convergence tolerance for change in tau2 estimate between iterations
maxiter : int
maximum number of iterations
Returns
-------
tau2 : float
estimate of random effects variance tau squared
converged : bool
True if iteration has converged.
"""
tau2 = tau2_start
converged = False
for _ in range(maxiter):
w = 1 / (var_eff + tau2)
tau2_new = _fit_tau_mm(eff, var_eff, w)
tau2_new = max(0, tau2_new)
delta = tau2_new - tau2
if np.allclose(delta, 0, atol=atol):
converged = True
break
tau2 = tau2_new
return tau2, converged | iterated method of moment estimate of between random effect variance
This repeatedly estimates tau, updating weights in each iteration
see two-step estimators in DerSimonian and Kacker 2007
Parameters
----------
eff : ndarray
effect sizes
var_eff : ndarray
variance of effect sizes
tau2_start : float
starting value for iteration
atol : float, default: 1e-5
convergence tolerance for change in tau2 estimate between iterations
maxiter : int
maximum number of iterations
Returns
-------
tau2 : float
estimate of random effects variance tau squared
converged : bool
True if iteration has converged. | _fit_tau_iter_mm | python | statsmodels/statsmodels | statsmodels/stats/meta_analysis.py | https://github.com/statsmodels/statsmodels/blob/master/statsmodels/stats/meta_analysis.py | BSD-3-Clause |
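A short sketch (hypothetical data) contrasting the one-step moment estimate, computed here with fixed-effects weights, against the iterated two-step version that re-weights with 1 / (var_eff + tau2) until the estimate stabilizes. Note that the one-step estimate can come out negative, while the iterated version truncates at zero inside its loop:

import numpy as np

eff = np.array([0.38, 0.62, 0.45, 0.85, 0.20])
var_eff = np.array([0.05, 0.08, 0.04, 0.10, 0.06])

tau2_onestep = _fit_tau_mm(eff, var_eff, weights=1 / var_eff)  # may be negative
tau2_iter, converged = _fit_tau_iter_mm(eff, var_eff)          # truncated at 0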
def anova_single(model, **kwargs):
"""
Anova table for one fitted linear model.
Parameters
----------
model : fitted linear model results instance
A fitted linear model
typ : int or str {1,2,3} or {"I","II","III"}
Type of sum of squares to use.
**kwargs**
scale : float
Estimate of variance, If None, will be estimated from the largest
model. Default is None.
test : str {"F", "Chisq", "Cp"} or None
Test statistics to provide. Default is "F".
Notes
-----
Use of this function is discouraged. Use anova_lm instead.
"""
test = kwargs.get("test", "F")
typ = kwargs.get("typ", 1)
robust = kwargs.get("robust", None)
if robust:
robust = robust.lower()
endog = model.model.endog
exog = model.model.exog
nobs = exog.shape[0]
model_spec = model.model.data.model_spec
# +1 for resids
mgr = FormulaManager()
n_rows = (len(model_spec.terms) - mgr.has_intercept(model_spec) + 1)
pr_test = "PR(>%s)" % test
names = ['df', 'sum_sq', 'mean_sq', test, pr_test]
table = DataFrame(np.zeros((n_rows, 5)), columns=names)
if typ in [1, "I"]:
return anova1_lm_single(model, endog, exog, nobs, model_spec, table,
n_rows, test, pr_test, robust)
elif typ in [2, "II"]:
return anova2_lm_single(model, model_spec, n_rows, test, pr_test,
robust)
elif typ in [3, "III"]:
return anova3_lm_single(model, model_spec, n_rows, test, pr_test,
robust)
elif typ in [4, "IV"]:
raise NotImplementedError("Type IV not yet implemented")
else: # pragma: no cover
raise ValueError("Type %s not understood" % str(typ)) | Anova table for one fitted linear model.
Parameters
----------
model : fitted linear model results instance
A fitted linear model
typ : int or str {1,2,3} or {"I","II","III"}
Type of sum of squares to use.
**kwargs**
scale : float
Estimate of variance, If None, will be estimated from the largest
model. Default is None.
test : str {"F", "Chisq", "Cp"} or None
Test statistics to provide. Default is "F".
Notes
-----
Use of this function is discouraged. Use anova_lm instead. | anova_single | python | statsmodels/statsmodels | statsmodels/stats/anova.py | https://github.com/statsmodels/statsmodels/blob/master/statsmodels/stats/anova.py | BSD-3-Clause |
def anova1_lm_single(model, endog, exog, nobs, model_spec, table, n_rows, test,
pr_test, robust):
"""
Anova table for one fitted linear model.
Parameters
----------
model : fitted linear model results instance
A fitted linear model
**kwargs**
scale : float
Estimate of variance, If None, will be estimated from the largest
model. Default is None.
test : str {"F", "Chisq", "Cp"} or None
Test statistics to provide. Default is "F".
Notes
-----
Use of this function is discouraged. Use anova_lm instead.
"""
#maybe we should rethink using pinv > qr in OLS/linear models?
mgr = FormulaManager()
effects = getattr(model, 'effects', None)
if effects is None:
        q, r = np.linalg.qr(exog)
effects = np.dot(q.T, endog)
arr = np.zeros((len(model_spec.terms), len(model_spec.column_names)))
slices = [
mgr.get_slice(model_spec, name) for name in mgr.get_term_names(model_spec)
]
for i, slice_ in enumerate(slices):
arr[i, slice_] = 1
sum_sq = np.dot(arr, effects**2)
#NOTE: assumes intercept is first column
idx = mgr.intercept_idx(model_spec)
sum_sq = sum_sq[~idx]
term_names = np.array(mgr.get_term_names(model_spec)) # want boolean indexing
term_names = term_names[~idx]
index = term_names.tolist()
table.index = Index(index + ['Residual'])
table.loc[index, ['df', 'sum_sq']] = np.c_[arr[~idx].sum(1), sum_sq]
# fill in residual
table.loc['Residual', ['sum_sq','df']] = model.ssr, model.df_resid
if test == 'F':
table[test] = ((table['sum_sq'] / table['df']) /
(model.ssr / model.df_resid))
table[pr_test] = stats.f.sf(table["F"], table["df"],
model.df_resid)
table.loc['Residual', [test, pr_test]] = np.nan, np.nan
table['mean_sq'] = table['sum_sq'] / table['df']
return table | Anova table for one fitted linear model.
Parameters
----------
model : fitted linear model results instance
A fitted linear model
**kwargs**
scale : float
Estimate of variance, If None, will be estimated from the largest
model. Default is None.
test : str {"F", "Chisq", "Cp"} or None
Test statistics to provide. Default is "F".
Notes
-----
Use of this function is discouraged. Use anova_lm instead. | anova1_lm_single | python | statsmodels/statsmodels | statsmodels/stats/anova.py | https://github.com/statsmodels/statsmodels/blob/master/statsmodels/stats/anova.py | BSD-3-Clause |
def anova2_lm_single(model, model_spec, n_rows, test, pr_test, robust):
"""
Anova type II table for one fitted linear model.
Parameters
----------
model : fitted linear model results instance
A fitted linear model
**kwargs**
scale : float
Estimate of variance, If None, will be estimated from the largest
model. Default is None.
test : str {"F", "Chisq", "Cp"} or None
Test statistics to provide. Default is "F".
Notes
-----
Use of this function is discouraged. Use anova_lm instead.
Type II
Sum of Squares compares marginal contribution of terms. Thus, it is
not particularly useful for models with significant interaction terms.
"""
mgr = FormulaManager()
terms_info = model_spec.terms[:] # copy
terms_info = mgr.remove_intercept(terms_info)
names = ['sum_sq', 'df', test, pr_test]
    table = DataFrame(np.zeros((n_rows, 4)), columns=names)
robust_cov = _get_covariance(model, robust)
col_order = []
index = []
for i, term in enumerate(terms_info):
# grab all variables except interaction effects that contain term
# need two hypotheses matrices L1 is most restrictive, ie., term==0
# L2 is everything except term==0
cols = mgr.get_slice(model_spec, term)
L1 = lrange(cols.start, cols.stop)
L2 = []
term_set = set(term.factors)
for t in terms_info: # for the term you have
other_set = set(t.factors)
if term_set.issubset(other_set) and not term_set == other_set:
col = mgr.get_slice(model_spec, t)
# on a higher order term containing current `term`
L1.extend(lrange(col.start, col.stop))
L2.extend(lrange(col.start, col.stop))
L1 = np.eye(model.model.exog.shape[1])[L1]
L2 = np.eye(model.model.exog.shape[1])[L2]
if L2.size:
            LVL = np.dot(np.dot(L1, robust_cov), L2.T)
from scipy import linalg
orth_compl,_ = linalg.qr(LVL)
r = L1.shape[0] - L2.shape[0]
# L1|2
# use the non-unique orthogonal completion since L12 is rank r
L12 = np.dot(orth_compl[:,-r:].T, L1)
else:
L12 = L1
r = L1.shape[0]
if test == 'F':
f = model.f_test(L12, cov_p=robust_cov)
table.loc[table.index[i], test] = f.fvalue
table.loc[table.index[i], pr_test] = f.pvalue
# need to back out SSR from f_test
table.loc[table.index[i], 'df'] = r
col_order.append(cols.start)
index.append(mgr.get_term_name(term))
table.index = Index(index + ['Residual'])
table = table.iloc[np.argsort(col_order + [model.model.exog.shape[1]+1])]
# back out sum of squares from f_test
ssr = table[test] * table['df'] * model.ssr/model.df_resid
table['sum_sq'] = ssr
# fill in residual
table.loc['Residual', ['sum_sq','df', test, pr_test]] = (model.ssr,
model.df_resid,
np.nan, np.nan)
return table | Anova type II table for one fitted linear model.
Parameters
----------
model : fitted linear model results instance
A fitted linear model
**kwargs**
scale : float
Estimate of variance, If None, will be estimated from the largest
model. Default is None.
test : str {"F", "Chisq", "Cp"} or None
Test statistics to provide. Default is "F".
Notes
-----
Use of this function is discouraged. Use anova_lm instead.
Type II
Sum of Squares compares marginal contribution of terms. Thus, it is
not particularly useful for models with significant interaction terms. | anova2_lm_single | python | statsmodels/statsmodels | statsmodels/stats/anova.py | https://github.com/statsmodels/statsmodels/blob/master/statsmodels/stats/anova.py | BSD-3-Clause |
def anova_lm(*args, **kwargs):
"""
Anova table for one or more fitted linear models.
Parameters
----------
args : fitted linear model results instance
One or more fitted linear models
scale : float
Estimate of variance, If None, will be estimated from the largest
model. Default is None.
test : str {"F", "Chisq", "Cp"} or None
Test statistics to provide. Default is "F".
typ : str or int {"I","II","III"} or {1,2,3}
The type of Anova test to perform. See notes.
robust : {None, "hc0", "hc1", "hc2", "hc3"}
Use heteroscedasticity-corrected coefficient covariance matrix.
If robust covariance is desired, it is recommended to use `hc3`.
Returns
-------
anova : DataFrame
When args is a single model, return is DataFrame with columns:
sum_sq : float64
Sum of squares for model terms.
df : float64
Degrees of freedom for model terms.
F : float64
F statistic value for significance of adding model terms.
PR(>F) : float64
P-value for significance of adding model terms.
When args is multiple models, return is DataFrame with columns:
df_resid : float64
Degrees of freedom of residuals in models.
ssr : float64
Sum of squares of residuals in models.
df_diff : float64
Degrees of freedom difference from previous model in args
        ss_diff : float64
Difference in ssr from previous model in args
F : float64
F statistic comparing to previous model in args
PR(>F): float64
P-value for significance comparing to previous model in args
Notes
-----
Model statistics are given in the order of args. Models must have been fit
using the formula api.
See Also
--------
model_results.compare_f_test, model_results.compare_lm_test
Examples
--------
>>> import statsmodels.api as sm
>>> from statsmodels.formula.api import ols
>>> moore = sm.datasets.get_rdataset("Moore", "carData", cache=True) # load
>>> data = moore.data
>>> data = data.rename(columns={"partner.status" :
... "partner_status"}) # make name pythonic
>>> moore_lm = ols('conformity ~ C(fcategory, Sum)*C(partner_status, Sum)',
... data=data).fit()
>>> table = sm.stats.anova_lm(moore_lm, typ=2) # Type 2 Anova DataFrame
>>> print(table)
"""
typ = kwargs.get('typ', 1)
### Farm Out Single model Anova Type I, II, III, and IV ###
if len(args) == 1:
model = args[0]
return anova_single(model, **kwargs)
if typ not in [1, "I"]:
raise ValueError("Multiple models only supported for type I. "
"Got type %s" % str(typ))
test = kwargs.get("test", "F")
scale = kwargs.get("scale", None)
n_models = len(args)
pr_test = "Pr(>%s)" % test
names = ['df_resid', 'ssr', 'df_diff', 'ss_diff', test, pr_test]
table = DataFrame(np.zeros((n_models, 6)), columns=names)
if not scale: # assume biggest model is last
scale = args[-1].scale
table["ssr"] = [mdl.ssr for mdl in args]
table["df_resid"] = [mdl.df_resid for mdl in args]
table.loc[table.index[1:], "df_diff"] = -np.diff(table["df_resid"].values)
table["ss_diff"] = -table["ssr"].diff()
if test == "F":
table["F"] = table["ss_diff"] / table["df_diff"] / scale
table[pr_test] = stats.f.sf(table["F"], table["df_diff"],
table["df_resid"])
# for earlier scipy - stats.f.sf(np.nan, 10, 2) -> 0 not nan
table.loc[table['F'].isnull(), pr_test] = np.nan
return table | Anova table for one or more fitted linear models.
Parameters
----------
args : fitted linear model results instance
One or more fitted linear models
scale : float
Estimate of variance, If None, will be estimated from the largest
model. Default is None.
test : str {"F", "Chisq", "Cp"} or None
Test statistics to provide. Default is "F".
typ : str or int {"I","II","III"} or {1,2,3}
The type of Anova test to perform. See notes.
robust : {None, "hc0", "hc1", "hc2", "hc3"}
Use heteroscedasticity-corrected coefficient covariance matrix.
If robust covariance is desired, it is recommended to use `hc3`.
Returns
-------
anova : DataFrame
When args is a single model, return is DataFrame with columns:
sum_sq : float64
Sum of squares for model terms.
df : float64
Degrees of freedom for model terms.
F : float64
F statistic value for significance of adding model terms.
PR(>F) : float64
P-value for significance of adding model terms.
When args is multiple models, return is DataFrame with columns:
df_resid : float64
Degrees of freedom of residuals in models.
ssr : float64
Sum of squares of residuals in models.
df_diff : float64
Degrees of freedom difference from previous model in args
ss_diff : float64
Difference in ssr from previous model in args
F : float64
F statistic comparing to previous model in args
PR(>F): float64
P-value for significance comparing to previous model in args
Notes
-----
Model statistics are given in the order of args. Models must have been fit
using the formula api.
See Also
--------
model_results.compare_f_test, model_results.compare_lm_test
Examples
--------
>>> import statsmodels.api as sm
>>> from statsmodels.formula.api import ols
>>> moore = sm.datasets.get_rdataset("Moore", "carData", cache=True) # load
>>> data = moore.data
>>> data = data.rename(columns={"partner.status" :
... "partner_status"}) # make name pythonic
>>> moore_lm = ols('conformity ~ C(fcategory, Sum)*C(partner_status, Sum)',
... data=data).fit()
>>> table = sm.stats.anova_lm(moore_lm, typ=2) # Type 2 Anova DataFrame
>>> print(table) | anova_lm | python | statsmodels/statsmodels | statsmodels/stats/anova.py | https://github.com/statsmodels/statsmodels/blob/master/statsmodels/stats/anova.py | BSD-3-Clause |
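The docstring example covers a single model; a sketch of the multiple-model form, which runs sequential F-tests on nested formula-API fits (same Moore data as in the example above):

import statsmodels.api as sm
from statsmodels.formula.api import ols

data = sm.datasets.get_rdataset("Moore", "carData", cache=True).data
data = data.rename(columns={"partner.status": "partner_status"})

m1 = ols("conformity ~ C(fcategory, Sum)", data=data).fit()
m2 = ols("conformity ~ C(fcategory, Sum) * C(partner_status, Sum)",
         data=data).fit()
# each row after the first tests the improvement over the previous model
print(sm.stats.anova_lm(m1, m2))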
def _ssr_reduced_model(y, x, term_slices, params, keys):
"""
Residual sum of squares of OLS model excluding factors in `keys`
Assumes x matrix is orthogonal
Parameters
----------
y : array_like
dependent variable
x : array_like
independent variables
term_slices : a dict of slices
        term_slices[key] is a boolean array that specifies the parameters
associated with the factor `key`
params : ndarray
OLS solution of y = x * params
keys : keys for term_slices
factors to be excluded
Returns
-------
rss : float
residual sum of squares
df : int
degrees of freedom
"""
ind = _not_slice(term_slices, keys, x.shape[1])
params1 = params[ind]
ssr = np.subtract(y, x[:, ind].dot(params1))
ssr = ssr.T.dot(ssr)
df_resid = len(y) - len(params1)
return ssr, df_resid | Residual sum of squares of OLS model excluding factors in `keys`
Assumes x matrix is orthogonal
Parameters
----------
y : array_like
dependent variable
x : array_like
independent variables
term_slices : a dict of slices
term_slices[key] is a boolean array that specifies the parameters
associated with the factor `key`
params : ndarray
OLS solution of y = x * params
keys : keys for term_slices
factors to be excluded
Returns
-------
rss : float
residual sum of squares
df : int
degrees of freedom | _ssr_reduced_model | python | statsmodels/statsmodels | statsmodels/stats/anova.py | https://github.com/statsmodels/statsmodels/blob/master/statsmodels/stats/anova.py | BSD-3-Clause |
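A tiny numerical sketch of the helper; it relies on the private `_not_slice` from the same module, and the design below is orthogonal by construction, matching the stated assumption:

import numpy as np

x = np.column_stack([np.ones(4), [1.0, -1.0, 1.0, -1.0]])  # orthogonal columns
y = np.array([1.0, 2.0, 3.0, 4.0])
params = np.linalg.pinv(x).dot(y)
term_slices = {"Intercept": np.array([True, False]),
               "a": np.array([False, True])}

ssr_full = ((y - x.dot(params))**2).sum()
ssr_red, df_red = _ssr_reduced_model(y, x, term_slices, params, ["a"])
# ssr_red - ssr_full is the sum of squares attributable to term "a"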
def _check_data_balanced(self):
"""raise if data is not balanced
This raises a ValueError if the data is not balanced, and
        returns None if it is balanced.
Return might change
"""
factor_levels = 1
for wi in self.within:
factor_levels *= len(self.data[wi].unique())
cell_count = {}
for index in range(self.data.shape[0]):
key = []
for col in self.within:
key.append(self.data[col].iloc[index])
key = tuple(key)
if key in cell_count:
cell_count[key] = cell_count[key] + 1
else:
cell_count[key] = 1
error_message = "Data is unbalanced."
if len(cell_count) != factor_levels:
raise ValueError(error_message)
count = cell_count[key]
for key in cell_count:
if count != cell_count[key]:
raise ValueError(error_message)
if self.data.shape[0] > count * factor_levels:
            raise ValueError('There is more than one element in a cell! '
                             'Missing factors?')
This raises a ValueError if the data is not balanced, and
returns None if it is balanced.
Return might change | _check_data_balanced | python | statsmodels/statsmodels | statsmodels/stats/anova.py | https://github.com/statsmodels/statsmodels/blob/master/statsmodels/stats/anova.py | BSD-3-Clause |
def fit(self):
"""estimate the model and compute the Anova table
Returns
-------
AnovaResults instance
"""
y = self.data[self.depvar].values
        # Construct OLS endog and exog from the formula string
within = ['C(%s, Sum)' % i for i in self.within]
subject = 'C(%s, Sum)' % self.subject
factors = within + [subject]
mgr = FormulaManager()
x = mgr.get_matrices('*'.join(factors), data=self.data, pandas=False)
term_slices = mgr.get_term_name_slices(x)
for key in term_slices:
ind = np.array([False]*x.shape[1])
ind[term_slices[key]] = True
term_slices[key] = np.array(ind)
term_exclude = [':'.join(factors)]
ind = _not_slice(term_slices, term_exclude, x.shape[1])
x = x[:, ind]
# Fit OLS
model = OLS(y, x)
results = model.fit()
if model.rank < x.shape[1]:
raise ValueError('Independent variables are collinear.')
for i in term_exclude:
term_slices.pop(i)
for key in term_slices:
term_slices[key] = term_slices[key][ind]
params = results.params
df_resid = results.df_resid
ssr = results.ssr
columns = ['F Value', 'Num DF', 'Den DF', 'Pr > F']
anova_table = pd.DataFrame(np.zeros((0, 4)), columns=columns)
for key in term_slices:
if self.subject not in str(key) and str(key) not in ('Intercept', "1"):
# Independent variables are orthogonal
ssr1, df_resid1 = _ssr_reduced_model(
y, x, term_slices, params, [key])
df1 = df_resid1 - df_resid
msm = (ssr1 - ssr) / df1
if (str(key) == ':'.join(factors[:-1]) or
(str(key) + ':' + subject not in term_slices)):
mse = ssr / df_resid
df2 = df_resid
else:
ssr1, df_resid1 = _ssr_reduced_model(
y, x, term_slices, params,
[str(key) + ':' + subject])
df2 = df_resid1 - df_resid
mse = (ssr1 - ssr) / df2
F = msm / mse
p = stats.f.sf(F, df1, df2)
term = str(key).replace('C(', '').replace(', Sum)', '')
anova_table.loc[term, 'F Value'] = F
anova_table.loc[term, 'Num DF'] = df1
anova_table.loc[term, 'Den DF'] = df2
anova_table.loc[term, 'Pr > F'] = p
return AnovaResults(anova_table) | estimate the model and compute the Anova table
Returns
-------
AnovaResults instance | fit | python | statsmodels/statsmodels | statsmodels/stats/anova.py | https://github.com/statsmodels/statsmodels/blob/master/statsmodels/stats/anova.py | BSD-3-Clause |
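A usage sketch of the public entry point; the column names are hypothetical, and the data must be balanced (exactly one row per subject-condition cell):

import pandas as pd
from statsmodels.stats.anova import AnovaRM

df = pd.DataFrame({
    "subject": [1, 1, 2, 2, 3, 3, 4, 4],
    "cond": ["a", "b"] * 4,
    "rt": [1.1, 1.5, 0.9, 1.4, 1.2, 1.6, 1.0, 1.3],
})
res = AnovaRM(df, depvar="rt", subject="subject", within=["cond"]).fit()
print(res.anova_table)  # F Value, Num DF, Den DF, Pr > F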
def summary(self):
"""create summary results
Returns
-------
summary : summary2.Summary instance
"""
summ = summary2.Summary()
summ.add_title('Anova')
summ.add_df(self.anova_table)
return summ | create summary results
Returns
-------
summary : summary2.Summary instance | summary | python | statsmodels/statsmodels | statsmodels/stats/anova.py | https://github.com/statsmodels/statsmodels/blob/master/statsmodels/stats/anova.py | BSD-3-Clause |
def _calc_nodewise_row(exog, idx, alpha):
"""calculates the nodewise_row values for the idxth variable, used to
estimate approx_inv_cov.
Parameters
----------
exog : array_like
The weighted design matrix for the current partition.
idx : scalar
Index of the current variable.
alpha : scalar or array_like
The penalty weight. If a scalar, the same penalty weight
applies to all variables in the model. If a vector, it
must have the same length as `params`, and contains a
penalty weight for each coefficient.
Returns
-------
An array-like object of length p-1
Notes
-----
nodewise_row_i = arg min 1/(2n) ||exog_i - exog_-i gamma||_2^2
+ alpha ||gamma||_1
"""
p = exog.shape[1]
ind = list(range(p))
ind.pop(idx)
# handle array alphas
if not np.isscalar(alpha):
alpha = alpha[ind]
tmod = OLS(exog[:, idx], exog[:, ind])
nodewise_row = tmod.fit_regularized(alpha=alpha).params
return nodewise_row | calculates the nodewise_row values for the idxth variable, used to
estimate approx_inv_cov.
Parameters
----------
exog : array_like
The weighted design matrix for the current partition.
idx : scalar
Index of the current variable.
alpha : scalar or array_like
The penalty weight. If a scalar, the same penalty weight
applies to all variables in the model. If a vector, it
must have the same length as `params`, and contains a
penalty weight for each coefficient.
Returns
-------
An array-like object of length p-1
Notes
-----
nodewise_row_i = arg min 1/(2n) ||exog_i - exog_-i gamma||_2^2
+ alpha ||gamma||_1 | _calc_nodewise_row | python | statsmodels/statsmodels | statsmodels/stats/regularized_covariance.py | https://github.com/statsmodels/statsmodels/blob/master/statsmodels/stats/regularized_covariance.py | BSD-3-Clause |
def _calc_nodewise_weight(exog, nodewise_row, idx, alpha):
"""calculates the nodewise_weightvalue for the idxth variable, used to
estimate approx_inv_cov.
Parameters
----------
exog : array_like
The weighted design matrix for the current partition.
nodewise_row : array_like
The nodewise_row values for the current variable.
idx : scalar
Index of the current variable
alpha : scalar or array_like
The penalty weight. If a scalar, the same penalty weight
applies to all variables in the model. If a vector, it
must have the same length as `params`, and contains a
penalty weight for each coefficient.
Returns
-------
A scalar
Notes
-----
    nodewise_weight_i = sqrt(1/n ||exog_i - exog_-i nodewise_row||_2^2
+ alpha ||nodewise_row||_1)
"""
n, p = exog.shape
ind = list(range(p))
ind.pop(idx)
# handle array alphas
if not np.isscalar(alpha):
alpha = alpha[ind]
d = np.linalg.norm(exog[:, idx] - exog[:, ind].dot(nodewise_row))**2
d = np.sqrt(d / n + alpha * np.linalg.norm(nodewise_row, 1))
return d | calculates the nodewise_weightvalue for the idxth variable, used to
estimate approx_inv_cov.
Parameters
----------
exog : array_like
The weighted design matrix for the current partition.
nodewise_row : array_like
The nodewise_row values for the current variable.
idx : scalar
Index of the current variable
alpha : scalar or array_like
The penalty weight. If a scalar, the same penalty weight
applies to all variables in the model. If a vector, it
must have the same length as `params`, and contains a
penalty weight for each coefficient.
Returns
-------
A scalar
Notes
-----
nodewise_weight_i = sqrt(1/n ||exog_i - exog_-i nodewise_row||_2^2
+ alpha ||nodewise_row||_1) | _calc_nodewise_weight | python | statsmodels/statsmodels | statsmodels/stats/regularized_covariance.py | https://github.com/statsmodels/statsmodels/blob/master/statsmodels/stats/regularized_covariance.py | BSD-3-Clause |
def _calc_approx_inv_cov(nodewise_row_l, nodewise_weight_l):
"""calculates the approximate inverse covariance matrix
Parameters
----------
nodewise_row_l : list
A list of array-like object where each object corresponds to
the nodewise_row values for the corresponding variable, should
be length p.
nodewise_weight_l : list
A list of scalars where each scalar corresponds to the nodewise_weight
value for the corresponding variable, should be length p.
Returns
    -------
An array-like object, p x p matrix
Notes
-----
nwr = nodewise_row
nww = nodewise_weight
approx_inv_cov_j = - 1 / nww_j [nwr_j,1,...,1,...nwr_j,p]
"""
p = len(nodewise_weight_l)
approx_inv_cov = -np.eye(p)
for idx in range(p):
ind = list(range(p))
ind.pop(idx)
approx_inv_cov[idx, ind] = nodewise_row_l[idx]
approx_inv_cov *= -1 / nodewise_weight_l[:, None]**2
return approx_inv_cov | calculates the approximate inverse covariance matrix
Parameters
----------
nodewise_row_l : list
A list of array-like object where each object corresponds to
the nodewise_row values for the corresponding variable, should
be length p.
nodewise_weight_l : list
A list of scalars where each scalar corresponds to the nodewise_weight
value for the corresponding variable, should be length p.
Returns
-------
An array-like object, p x p matrix
Notes
-----
nwr = nodewise_row
nww = nodewise_weight
approx_inv_cov_j = - 1 / nww_j [nwr_j,1,...,1,...nwr_j,p] | _calc_approx_inv_cov | python | statsmodels/statsmodels | statsmodels/stats/regularized_covariance.py | https://github.com/statsmodels/statsmodels/blob/master/statsmodels/stats/regularized_covariance.py | BSD-3-Clause |
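A quick numerical check (a sketch, assuming the three private helpers above): with alpha = 0 the nodewise regressions reduce to OLS, so the assembled matrix should be close to the inverse of the sample Gram matrix exog.T @ exog / n:

import numpy as np

rng = np.random.default_rng(0)
exog = rng.standard_normal((200, 4))
p = exog.shape[1]

nwr = np.array([_calc_nodewise_row(exog, j, alpha=0.0) for j in range(p)])
nww = np.array([_calc_nodewise_weight(exog, nwr[j], j, alpha=0.0)
                for j in range(p)])
theta = _calc_approx_inv_cov(nwr, nww)
theta_direct = np.linalg.inv(exog.T.dot(exog) / exog.shape[0])
# theta and theta_direct should agree up to the fit_regularized tolerance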
def fit(self, alpha=0):
"""estimates the regularized inverse covariance using nodewise
regression
Parameters
----------
alpha : scalar
Regularizing constant
"""
n, p = self.exog.shape
nodewise_row_l = []
nodewise_weight_l = []
for idx in range(p):
nodewise_row = _calc_nodewise_row(self.exog, idx, alpha)
nodewise_row_l.append(nodewise_row)
nodewise_weight = _calc_nodewise_weight(self.exog, nodewise_row,
idx, alpha)
nodewise_weight_l.append(nodewise_weight)
nodewise_row_l = np.array(nodewise_row_l)
nodewise_weight_l = np.array(nodewise_weight_l)
approx_inv_cov = _calc_approx_inv_cov(nodewise_row_l,
nodewise_weight_l)
self._approx_inv_cov = approx_inv_cov | estimates the regularized inverse covariance using nodewise
regression
Parameters
----------
alpha : scalar
Regularizing constant | fit | python | statsmodels/statsmodels | statsmodels/stats/regularized_covariance.py | https://github.com/statsmodels/statsmodels/blob/master/statsmodels/stats/regularized_covariance.py | BSD-3-Clause |
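A sketch of the wrapping class in this module; using `approx_inv_cov()` as the accessor for the fitted estimate reflects the current statsmodels layout and should be treated as an assumption:

import numpy as np
from statsmodels.stats.regularized_covariance import RegularizedInvCovariance

rng = np.random.default_rng(1)
exog = rng.standard_normal((500, 5))

regcov = RegularizedInvCovariance(exog)
regcov.fit(alpha=0.1)            # lasso penalty on each nodewise regression
theta = regcov.approx_inv_cov()  # assumed accessor for the fitted estimate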
def transform_corr_normal(corr, method, return_var=False, possdef=True):
"""transform correlation matrix to be consistent at normal distribution
Parameters
----------
corr : array_like
correlation matrix, either Pearson, Gaussian-rank, Spearman, Kendall
or quadrant correlation matrix
method : string
type of covariance matrix
supported types are 'pearson', 'gauss_rank', 'kendal', 'spearman' and
'quadrant'
return_var : bool
If true, then the asymptotic variance of the normalized correlation
is also returned. The variance of the spearman correlation requires
numerical integration which is calculated with scipy's odeint.
possdef : not implemented yet
Check whether resulting correlation matrix for positive semidefinite
and return a positive semidefinite approximation if not.
Returns
-------
corr : ndarray
correlation matrix, consistent with correlation for a multivariate
normal distribution
var : ndarray (optional)
asymptotic variance of the correlation if requested by `return_var`.
Notes
-----
Pearson and Gaussian-rank correlation are consistent at the normal
distribution and will be returned without changes.
The other correlation matrices are not guaranteed to be positive
semidefinite in small sample after conversion, even if the underlying
untransformed correlation matrix is positive (semi)definite. Croux and
Dehon mention that nobs / k_vars should be larger than 3 for kendall and
larger than 2 for spearman.
References
----------
.. [1] Boudt, Kris, Jonathan Cornelissen, and Christophe Croux. “The
Gaussian Rank Correlation Estimator: Robustness Properties.”
Statistics and Computing 22, no. 2 (April 5, 2011): 471–83.
https://doi.org/10.1007/s11222-011-9237-0.
.. [2] Croux, Christophe, and Catherine Dehon. “Influence Functions of the
Spearman and Kendall Correlation Measures.”
Statistical Methods & Applications 19, no. 4 (May 12, 2010): 497–515.
https://doi.org/10.1007/s10260-010-0142-z.
"""
method = method.lower()
rho = np.asarray(corr)
var = None # initialize
if method in ['pearson', 'gauss_rank']:
corr_n = corr
if return_var:
var = (1 - rho**2)**2
elif method.startswith('kendal'):
corr_n = np.sin(np.pi / 2 * corr)
if return_var:
var = (1 - rho**2) * np.pi**2 * (
1./9 - 4 / np.pi**2 * np.arcsin(rho / 2)**2)
elif method == 'quadrant':
corr_n = np.sin(np.pi / 2 * corr)
if return_var:
var = (1 - rho**2) * (np.pi**2 / 4 - np.arcsin(rho)**2)
elif method.startswith('spearman'):
corr_n = 2 * np.sin(np.pi / 6 * corr)
# not clear which rho is in formula, should be normalized rho,
# but original corr coefficient seems to match results in articles
# rho = corr_n
if return_var:
# odeint only works if grid of rho is large, i.e. many points
# e.g. rho = np.linspace(0, 1, 101)
rho = np.atleast_1d(rho)
idx = np.argsort(rho)
rhos = rho[idx]
rhos = np.concatenate(([0], rhos))
t = np.arcsin(rhos / 2)
# drop np namespace here
sin, cos = np.sin, np.cos
var = (1 - rho**2 / 4) * pi2 / 9 # leading factor
f1 = lambda t, x: np.arcsin(sin(x) / (1 + 2 * cos(2 * x))) # noqa
f2 = lambda t, x: np.arcsin(sin(2 * x) / # noqa
np.sqrt(1 + 2 * cos(2 * x)))
f3 = lambda t, x: np.arcsin(sin(2 * x) / # noqa
(2 * np.sqrt(cos(2 * x))))
f4 = lambda t, x: np.arcsin(( 3 * sin(x) - sin(3 * x)) / # noqa
(4 * cos(2 * x)))
# todo check dimension, odeint return column (n, 1) array
hmax = 1e-1
            rf1 = integrate.odeint(f1, 0, t=t, hmax=hmax).squeeze()
            rf2 = integrate.odeint(f2, 0, t=t, hmax=hmax).squeeze()
            rf3 = integrate.odeint(f3, 0, t=t, hmax=hmax).squeeze()
            rf4 = integrate.odeint(f4, 0, t=t, hmax=hmax).squeeze()
fact = 1 + 144 * (-9 / 4. * pi2i * np.arcsin(rhos / 2)**2 +
pi2i * rf1 +
2 * pi2i * rf2 + pi2i * rf3 +
0.5 * pi2i * rf4)
# fact = 1 - 9 / 4 * pi2i * np.arcsin(rhos / 2)**2
fact2 = np.zeros_like(var) * np.nan
fact2[idx] = fact[1:]
var *= fact2
else:
raise ValueError('method not recognized')
if return_var:
return corr_n, var
else:
return corr_n | transform correlation matrix to be consistent at normal distribution
Parameters
----------
corr : array_like
correlation matrix, either Pearson, Gaussian-rank, Spearman, Kendall
or quadrant correlation matrix
method : string
type of covariance matrix
supported types are 'pearson', 'gauss_rank', 'kendal', 'spearman' and
'quadrant'
return_var : bool
If true, then the asymptotic variance of the normalized correlation
is also returned. The variance of the spearman correlation requires
numerical integration which is calculated with scipy's odeint.
possdef : not implemented yet
Check whether resulting correlation matrix for positive semidefinite
and return a positive semidefinite approximation if not.
Returns
-------
corr : ndarray
correlation matrix, consistent with correlation for a multivariate
normal distribution
var : ndarray (optional)
asymptotic variance of the correlation if requested by `return_var`.
Notes
-----
Pearson and Gaussian-rank correlation are consistent at the normal
distribution and will be returned without changes.
The other correlation matrices are not guaranteed to be positive
semidefinite in small sample after conversion, even if the underlying
untransformed correlation matrix is positive (semi)definite. Croux and
Dehon mention that nobs / k_vars should be larger than 3 for kendall and
larger than 2 for spearman.
References
----------
.. [1] Boudt, Kris, Jonathan Cornelissen, and Christophe Croux. “The
Gaussian Rank Correlation Estimator: Robustness Properties.”
Statistics and Computing 22, no. 2 (April 5, 2011): 471–83.
https://doi.org/10.1007/s11222-011-9237-0.
.. [2] Croux, Christophe, and Catherine Dehon. “Influence Functions of the
Spearman and Kendall Correlation Measures.”
Statistical Methods & Applications 19, no. 4 (May 12, 2010): 497–515.
https://doi.org/10.1007/s10260-010-0142-z. | transform_corr_normal | python | statsmodels/statsmodels | statsmodels/stats/covariance.py | https://github.com/statsmodels/statsmodels/blob/master/statsmodels/stats/covariance.py | BSD-3-Clause |
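A small sketch of the Kendall case, where the transform applied by the function is corr_n = sin(pi / 2 * tau):

import numpy as np

corr_kendall = np.array([[1.0, 0.5],
                         [0.5, 1.0]])
corr_n, var = transform_corr_normal(corr_kendall, method="kendal",
                                    return_var=True)
# off-diagonal: sin(pi / 2 * 0.5) = sin(pi / 4) ~ 0.7071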
def corr_rank(data):
"""Spearman rank correlation
simplified version of scipy.stats.spearmanr
"""
x = np.asarray(data)
axisout = 0
ar = np.apply_along_axis(stats.rankdata, axisout, x)
corr = np.corrcoef(ar, rowvar=False)
return corr | Spearman rank correlation
simplified version of scipy.stats.spearmanr | corr_rank | python | statsmodels/statsmodels | statsmodels/stats/covariance.py | https://github.com/statsmodels/statsmodels/blob/master/statsmodels/stats/covariance.py | BSD-3-Clause |
def corr_normal_scores(data):
"""Gaussian rank (normal scores) correlation
Status: unverified, subject to change
Parameters
----------
data : array_like
2-D data with observations in rows and variables in columns
Returns
-------
corr : ndarray
correlation matrix
References
----------
.. [1] Boudt, Kris, Jonathan Cornelissen, and Christophe Croux. “The
Gaussian Rank Correlation Estimator: Robustness Properties.”
Statistics and Computing 22, no. 2 (April 5, 2011): 471–83.
https://doi.org/10.1007/s11222-011-9237-0.
"""
# TODO: a full version should be same as scipy spearmanr
    # I think that's not true; the Croux et al. articles mention
    # different results
# needs verification for the p-value calculation
x = np.asarray(data)
nobs = x.shape[0]
axisout = 0
ar = np.apply_along_axis(stats.rankdata, axisout, x)
ar = stats.norm.ppf(ar / (nobs + 1))
corr = np.corrcoef(ar, rowvar=axisout)
return corr | Gaussian rank (normal scores) correlation
Status: unverified, subject to change
Parameters
----------
data : array_like
2-D data with observations in rows and variables in columns
Returns
-------
corr : ndarray
correlation matrix
References
----------
.. [1] Boudt, Kris, Jonathan Cornelissen, and Christophe Croux. “The
Gaussian Rank Correlation Estimator: Robustness Properties.”
Statistics and Computing 22, no. 2 (April 5, 2011): 471–83.
https://doi.org/10.1007/s11222-011-9237-0. | corr_normal_scores | python | statsmodels/statsmodels | statsmodels/stats/covariance.py | https://github.com/statsmodels/statsmodels/blob/master/statsmodels/stats/covariance.py | BSD-3-Clause |
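A comparison sketch on simulated bivariate normal data: the Gaussian rank correlation is consistent for the Pearson correlation directly, while the Spearman version needs the 2 sin(pi / 6 * r) transform from transform_corr_normal:

import numpy as np

rng = np.random.default_rng(2)
cov = np.array([[1.0, 0.6], [0.6, 1.0]])
x = rng.multivariate_normal([0.0, 0.0], cov, size=2000)

r_spearman = corr_rank(x)[0, 1]
r_gauss = corr_normal_scores(x)[0, 1]
r_spearman_n = 2 * np.sin(np.pi / 6 * r_spearman)  # both close to 0.6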
def corr_quadrant(data, transform=np.sign, normalize=False):
"""Quadrant correlation
Status: unverified, subject to change
Parameters
----------
data : array_like
2-D data with observations in rows and variables in columns
Returns
-------
corr : ndarray
correlation matrix
References
----------
.. [1] Croux, Christophe, and Catherine Dehon. “Influence Functions of the
Spearman and Kendall Correlation Measures.”
Statistical Methods & Applications 19, no. 4 (May 12, 2010): 497–515.
https://doi.org/10.1007/s10260-010-0142-z.
"""
# try also with tanh transform, a starting corr for DetXXX
# tanh produces a cov not a corr
x = np.asarray(data)
nobs = x.shape[0]
med = np.median(x, 0)
x_dm = transform(x - med)
corr = x_dm.T.dot(x_dm) / nobs
if normalize:
std = np.sqrt(np.diag(corr))
corr /= std
corr /= std[:, None]
return corr | Quadrant correlation
Status: unverified, subject to change
Parameters
----------
data : array_like
2-D data with observations in rows and variables in columns
Returns
-------
corr : ndarray
correlation matrix
References
----------
.. [1] Croux, Christophe, and Catherine Dehon. “Influence Functions of the
Spearman and Kendall Correlation Measures.”
Statistical Methods & Applications 19, no. 4 (May 12, 2010): 497–515.
https://doi.org/10.1007/s10260-010-0142-z. | corr_quadrant | python | statsmodels/statsmodels | statsmodels/stats/covariance.py | https://github.com/statsmodels/statsmodels/blob/master/statsmodels/stats/covariance.py | BSD-3-Clause |
def partial_project(endog, exog):
'''helper function to get linear projection or partialling out of variables
endog variables are projected on exog variables
Parameters
----------
endog : ndarray
array of variables where the effect of exog is partialled out.
exog : ndarray
array of variables on which the endog variables are projected.
Returns
-------
res : instance of Bunch with
- params : OLS parameter estimates from projection of endog on exog
- fittedvalues : predicted values of endog given exog
- resid : residual of the regression, values of endog with effect of
exog partialled out
Notes
-----
This is no-frills mainly for internal calculations, no error checking or
array conversion is performed, at least for now.
'''
x1, x2 = endog, exog
params = np.linalg.pinv(x2).dot(x1)
predicted = x2.dot(params)
residual = x1 - predicted
res = Bunch(params=params,
fittedvalues=predicted,
resid=residual)
return res | helper function to get linear projection or partialling out of variables
endog variables are projected on exog variables
Parameters
----------
endog : ndarray
array of variables where the effect of exog is partialled out.
exog : ndarray
array of variables on which the endog variables are projected.
Returns
-------
res : instance of Bunch with
- params : OLS parameter estimates from projection of endog on exog
- fittedvalues : predicted values of endog given exog
- resid : residual of the regression, values of endog with effect of
exog partialled out
Notes
-----
This is no-frills mainly for internal calculations, no error checking or
array conversion is performed, at least for now. | partial_project | python | statsmodels/statsmodels | statsmodels/stats/multivariate_tools.py | https://github.com/statsmodels/statsmodels/blob/master/statsmodels/stats/multivariate_tools.py | BSD-3-Clause |
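A sketch of the Frisch-Waugh use of this helper: the coefficient on x1 in the long regression equals the coefficient from regressing residualized y on residualized x1 (simulated data, true coefficient 2.0):

import numpy as np

rng = np.random.default_rng(3)
n = 200
z = np.column_stack([np.ones(n), rng.standard_normal(n)])  # controls
x1 = rng.standard_normal((n, 1))
y = 2.0 * x1[:, 0] + 0.5 * z[:, 1] + rng.standard_normal(n)

y_part = partial_project(y[:, None], z).resid
x1_part = partial_project(x1, z).resid
b = np.linalg.pinv(x1_part).dot(y_part)  # close to 2.0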
def cancorr(x1, x2, demean=True, standardize=False):
    '''canonical correlation coefficient between 2 arrays
Parameters
----------
    x1, x2 : ndarrays, 2-D
two 2-dimensional data arrays, observations in rows, variables in columns
demean : bool
If demean is true, then the mean is subtracted from each variable
standardize : bool
If standardize is true, then each variable is demeaned and divided by
its standard deviation. Rescaling does not change the canonical
correlation coefficients.
Returns
-------
ccorr : ndarray, 1d
canonical correlation coefficients, sorted from largest to smallest.
Note, that these are the square root of the eigenvalues.
Notes
-----
This is a helper function for other statistical functions. It only
calculates the canonical correlation coefficients and does not do a full
    canonical correlation analysis.
    The canonical correlation coefficient is calculated with the generalized
    matrix inverse and does not raise an exception if one of the data arrays
    has less than full column rank.
See Also
--------
cc_ranktest
cc_stats
CCA not yet
'''
#x, y = x1, x2
if demean or standardize:
x1 = (x1 - x1.mean(0))
x2 = (x2 - x2.mean(0))
if standardize:
#std does not make a difference to canonical correlation coefficients
x1 /= x1.std(0)
x2 /= x2.std(0)
t1 = np.linalg.pinv(x1).dot(x2)
t2 = np.linalg.pinv(x2).dot(x1)
m = t1.dot(t2)
cc = np.sqrt(np.linalg.eigvals(m))
cc.sort()
return cc[::-1] | canonical correlation coefficient beween 2 arrays
Parameters
----------
x1, x2 : ndarrays, 2-D
two 2-dimensional data arrays, observations in rows, variables in columns
demean : bool
If demean is true, then the mean is subtracted from each variable
standardize : bool
If standardize is true, then each variable is demeaned and divided by
its standard deviation. Rescaling does not change the canonical
correlation coefficients.
Returns
-------
ccorr : ndarray, 1d
canonical correlation coefficients, sorted from largest to smallest.
Note, that these are the square root of the eigenvalues.
Notes
-----
This is a helper function for other statistical functions. It only
calculates the canonical correlation coefficients and does not do a full
canonical correlation analysis.
The canonical correlation coefficient is calculated with the generalized
matrix inverse and does not raise an exception if one of the data arrays
has less than full column rank.
See Also
--------
cc_ranktest
cc_stats
CCA not yet | cancorr | python | statsmodels/statsmodels | statsmodels/stats/multivariate_tools.py | https://github.com/statsmodels/statsmodels/blob/master/statsmodels/stats/multivariate_tools.py | BSD-3-Clause |
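A sanity-check sketch: with a single variable in each set, the canonical correlation collapses to the absolute Pearson correlation:

import numpy as np

rng = np.random.default_rng(4)
x = rng.standard_normal(300)
y = 0.8 * x + 0.6 * rng.standard_normal(300)

cc = cancorr(x[:, None], y[:, None])
r = np.corrcoef(x, y)[0, 1]
# cc[0] should match abs(r) up to numerical noise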
def cc_ranktest(x1, x2, demean=True, fullrank=False):
'''rank tests based on smallest canonical correlation coefficients
Anderson canonical correlations test (LM test) and
Cragg-Donald test (Wald test)
Assumes homoskedasticity and independent observations, overrejects if
there is heteroscedasticity or autocorrelation.
The Null Hypothesis is that the rank is k - 1, the alternative hypothesis
is that the rank is at least k.
Parameters
----------
    x1, x2 : ndarrays, 2-D
two 2-dimensional data arrays, observations in rows, variables in columns
demean : bool
If demean is true, then the mean is subtracted from each variable.
fullrank : bool
If true, then only the test that the matrix has full rank is returned.
        If false, the tests for all possible ranks are returned. However,
        the p-values are not corrected for the multiplicity of tests.
Returns
-------
value : float
value of the test statistic
p-value : float
        p-value for the test of the Null Hypothesis that the smallest
        canonical correlation coefficient is zero, based on the chi-square
        distribution.
df : int
        degrees of freedom for the chi-square distribution in the hypothesis test
ccorr : ndarray, 1d
All canonical correlation coefficients sorted from largest to smallest.
Notes
-----
Degrees of freedom for the distribution of the test statistic are based on
number of columns of x1 and x2 and not on their matrix rank.
(I'm not sure yet what the interpretation of the test is if x1 or x2 are of
reduced rank.)
See Also
--------
cancorr
cc_stats
'''
from scipy import stats
nobs1, k1 = x1.shape
nobs2, k2 = x2.shape
cc = cancorr(x1, x2, demean=demean)
cc2 = cc * cc
if fullrank:
df = np.abs(k1 - k2) + 1
value = nobs1 * cc2[-1]
w_value = nobs1 * (cc2[-1] / (1. - cc2[-1]))
return value, stats.chi2.sf(value, df), df, cc, w_value, stats.chi2.sf(w_value, df)
else:
r = np.arange(min(k1, k2))[::-1]
df = (k1 - r) * (k2 - r)
values = nobs1 * cc2[::-1].cumsum()
w_values = nobs1 * (cc2 / (1. - cc2))[::-1].cumsum()
return values, stats.chi2.sf(values, df), df, cc, w_values, stats.chi2.sf(w_values, df) | rank tests based on smallest canonical correlation coefficients
Anderson canonical correlations test (LM test) and
Cragg-Donald test (Wald test)
Assumes homoskedasticity and independent observations, overrejects if
there is heteroscedasticity or autocorrelation.
The Null Hypothesis is that the rank is k - 1, the alternative hypothesis
is that the rank is at least k.
Parameters
----------
x1, x2 : ndarrays, 2-D
two 2-dimensional data arrays, observations in rows, variables in columns
demean : bool
If demean is true, then the mean is subtracted from each variable.
fullrank : bool
If true, then only the test that the matrix has full rank is returned.
If false, the tests for all possible ranks are returned. However,
the p-values are not corrected for the multiplicity of tests.
Returns
-------
value : float
value of the test statistic
p-value : float
p-value for the test of the Null Hypothesis that the smallest
canonical correlation coefficient is zero, based on the chi-square
distribution.
df : int
degrees of freedom for the chi-square distribution in the hypothesis test
ccorr : ndarray, 1d
All canonical correlation coefficients sorted from largest to smallest.
Notes
-----
Degrees of freedom for the distribution of the test statistic are based on
number of columns of x1 and x2 and not on their matrix rank.
(I'm not sure yet what the interpretation of the test is if x1 or x2 are of
reduced rank.)
See Also
--------
cancorr
cc_stats | cc_ranktest | python | statsmodels/statsmodels | statsmodels/stats/multivariate_tools.py | https://github.com/statsmodels/statsmodels/blob/master/statsmodels/stats/multivariate_tools.py | BSD-3-Clause |
def cc_stats(x1, x2, demean=True):
'''MANOVA statistics based on canonical correlation coefficient
Calculates Pillai's Trace, Wilk's Lambda, Hotelling's Trace and
Roy's Largest Root.
Parameters
----------
    x1, x2 : ndarrays, 2-D
two 2-dimensional data arrays, observations in rows, variables in columns
demean : bool
If demean is true, then the mean is subtracted from each variable.
Returns
-------
res : dict
Dictionary containing the test statistics.
Notes
-----
same as `canon` in Stata
missing: F-statistics and p-values
TODO: should return a results class instead
produces nans sometimes, singular, perfect correlation of x1, x2 ?
'''
nobs1, k1 = x1.shape # endogenous ?
nobs2, k2 = x2.shape
cc = cancorr(x1, x2, demean=demean)
cc2 = cc**2
lam = (cc2 / (1 - cc2)) # what if max cc2 is 1 ?
# Problem: ccr might not care if x1 or x2 are reduced rank,
# but df will depend on rank
df_model = k1 * k2 # df_hypothesis (we do not include mean in x1, x2)
df_resid = k1 * (nobs1 - k2 - demean)
m = 0.5 * (df_model - k1)
pt_value = cc2.sum() # Pillai's trace
    wl_value = np.prod(1 / (1 + lam))  # Wilk's Lambda
ht_value = lam.sum() # Hotelling's Trace
rm_value = lam.max() # Roy's largest root
#from scipy import stats
# what's the distribution, the test statistic ?
res = {}
res['canonical correlation coefficient'] = cc
res['eigenvalues'] = lam
res["Pillai's Trace"] = pt_value
res["Wilk's Lambda"] = wl_value
res["Hotelling's Trace"] = ht_value
res["Roy's Largest Root"] = rm_value
res['df_resid'] = df_resid
res['df_m'] = m
return res | MANOVA statistics based on canonical correlation coefficient
Calculates Pillai's Trace, Wilk's Lambda, Hotelling's Trace and
Roy's Largest Root.
Parameters
----------
x1, x2 : ndarrays, 2_D
two 2-dimensional data arrays, observations in rows, variables in columns
demean : bool
If demean is true, then the mean is subtracted from each variable.
Returns
-------
res : dict
Dictionary containing the test statistics.
Notes
-----
same as `canon` in Stata
missing: F-statistics and p-values
TODO: should return a results class instead
produces nans sometimes, singular, perfect correlation of x1, x2 ? | cc_stats | python | statsmodels/statsmodels | statsmodels/stats/multivariate_tools.py | https://github.com/statsmodels/statsmodels/blob/master/statsmodels/stats/multivariate_tools.py | BSD-3-Clause |
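A short sketch computing the four statistics on random data (no F-statistics or p-values are returned, per the notes above):

import numpy as np

rng = np.random.default_rng(5)
x1 = rng.standard_normal((100, 2))
x2 = rng.standard_normal((100, 3))

res = cc_stats(x1, x2)
for key in ("Pillai's Trace", "Wilk's Lambda",
            "Hotelling's Trace", "Roy's Largest Root"):
    print(key, res[key])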
def test_poisson(count, nobs, value, method=None, alternative="two-sided",
dispersion=1):
"""Test for one sample poisson mean or rate
Parameters
----------
count : array_like
Observed count, number of events.
    nobs : array_like
Currently this is total exposure time of the count variable.
This will likely change.
value : float, array_like
This is the value of poisson rate under the null hypothesis.
method : str
Method to use for confidence interval.
This is required, there is currently no default method.
See Notes for available methods.
alternative : {'two-sided', 'smaller', 'larger'}
alternative hypothesis, which can be two-sided or either one of the
one-sided tests.
dispersion : float
Dispersion scale coefficient for Poisson QMLE. Default is that the
data follows a Poisson distribution. Dispersion different from 1
        corresponds to excess-dispersion in Poisson quasi-likelihood (GLM).
        A dispersion coefficient different from one is currently only used in
        the wald and score methods.
Returns
-------
HolderTuple instance with test statistic, pvalue and other attributes.
Notes
-----
    The implementation of the hypothesis test is mainly based on the references
for the confidence interval, see confint_poisson.
Available methods are:
- "score" : based on score test, uses variance under null value
- "wald" : based on wald test, uses variance base on estimated rate.
- "waldccv" : based on wald test with 0.5 count added to variance
computation. This does not use continuity correction for the center of
the confidence interval.
- "exact-c" central confidence interval based on gamma distribution
- "midp-c" : based on midp correction of central exact confidence interval.
this uses numerical inversion of the test function. not vectorized.
- "sqrt" : based on square root transformed counts
- "sqrt-a" based on Anscombe square root transformation of counts + 3/8.
See Also
--------
confint_poisson
"""
n = nobs # short hand
rate = count / n
if method is None:
msg = "method needs to be specified, currently no default method"
raise ValueError(msg)
if dispersion != 1:
if method not in ["wald", "waldcc", "score"]:
msg = "excess dispersion only supported in wald and score methods"
raise ValueError(msg)
dist = "normal"
if method == "wald":
std = np.sqrt(dispersion * rate / n)
statistic = (rate - value) / std
elif method == "waldccv":
# WCC in Barker 2002
# add 0.5 event, not 0.5 event rate as in waldcc
# std = np.sqrt((rate + 0.5 / n) / n)
# statistic = (rate + 0.5 / n - value) / std
std = np.sqrt(dispersion * (rate + 0.5 / n) / n)
statistic = (rate - value) / std
elif method == "score":
std = np.sqrt(dispersion * value / n)
statistic = (rate - value) / std
elif method.startswith("exact-c") or method.startswith("midp-c"):
pv1 = stats.poisson.cdf(count, n * value)
pv2 = stats.poisson.sf(count - 1, n * value)
if method.startswith("midp-c"):
pv1 = pv1 - 0.5 * stats.poisson.pmf(count, n * value)
pv2 = pv2 - 0.5 * stats.poisson.pmf(count, n * value)
if alternative == "two-sided":
pvalue = 2 * np.minimum(pv1, pv2)
elif alternative == "larger":
pvalue = pv2
elif alternative == "smaller":
pvalue = pv1
else:
msg = 'alternative should be "two-sided", "larger" or "smaller"'
raise ValueError(msg)
statistic = np.nan
dist = "Poisson"
elif method == "sqrt":
std = 0.5
statistic = (np.sqrt(count) - np.sqrt(n * value)) / std
elif method == "sqrt-a":
# anscombe, based on Swift 2009 (with transformation to rate)
std = 0.5
statistic = (np.sqrt(count + 3 / 8) - np.sqrt(n * value + 3 / 8)) / std
elif method == "sqrt-v":
# vandenbroucke, based on Swift 2009 (with transformation to rate)
std = 0.5
crit = stats.norm.isf(0.025)
statistic = (np.sqrt(count + (crit**2 + 2) / 12) -
# np.sqrt(n * value + (crit**2 + 2) / 12)) / std
np.sqrt(n * value)) / std
else:
raise ValueError("unknown method %s" % method)
if dist == 'normal':
statistic, pvalue = _zstat_generic2(statistic, 1, alternative)
res = HolderTuple(
statistic=statistic,
pvalue=np.clip(pvalue, 0, 1),
distribution=dist,
method=method,
alternative=alternative,
rate=rate,
nobs=n
)
return res | Test for one sample poisson mean or rate
Parameters
----------
count : array_like
Observed count, number of events.
nobs : array_like
Currently this is total exposure time of the count variable.
This will likely change.
value : float, array_like
This is the value of poisson rate under the null hypothesis.
method : str
Method to use for confidence interval.
This is required, there is currently no default method.
See Notes for available methods.
alternative : {'two-sided', 'smaller', 'larger'}
alternative hypothesis, which can be two-sided or either one of the
one-sided tests.
dispersion : float
Dispersion scale coefficient for Poisson QMLE. Default is that the
data follows a Poisson distribution. Dispersion different from 1
corresponds to excess-dispersion in Poisson quasi-likelihood (GLM).
A dispersion coefficient different from one is currently only used in
the wald and score methods.
Returns
-------
HolderTuple instance with test statistic, pvalue and other attributes.
Notes
-----
The implementation of the hypothesis test is mainly based on the references
for the confidence interval, see confint_poisson.
Available methods are:
- "score" : based on score test, uses variance under null value
- "wald" : based on wald test, uses variance base on estimated rate.
- "waldccv" : based on wald test with 0.5 count added to variance
computation. This does not use continuity correction for the center of
the confidence interval.
- "exact-c" central confidence interval based on gamma distribution
- "midp-c" : based on midp correction of central exact confidence interval.
this uses numerical inversion of the test function. not vectorized.
- "sqrt" : based on square root transformed counts
- "sqrt-a" based on Anscombe square root transformation of counts + 3/8.
See Also
--------
confint_poisson | test_poisson | python | statsmodels/statsmodels | statsmodels/stats/rates.py | https://github.com/statsmodels/statsmodels/blob/master/statsmodels/stats/rates.py | BSD-3-Clause |
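A usage sketch of the one-sample rate test together with the matching interval from confint_poisson (both live in statsmodels.stats.rates, per the module path above):

from statsmodels.stats.rates import confint_poisson, test_poisson

# 10 events over an exposure of 20, null rate 0.3
res = test_poisson(count=10, nobs=20, value=0.3, method="score")
print(res.statistic, res.pvalue)  # two-sided by default
low, upp = confint_poisson(count=10, exposure=20, method="exact-c")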
def confint_poisson(count, exposure, method=None, alpha=0.05,
alternative="two-sided"):
"""Confidence interval for a Poisson mean or rate
The function is vectorized for all methods except "midp-c", which uses
an iterative method to invert the hypothesis test function.
All current methods are central, that is the probability of each tail is
smaller or equal to alpha / 2.
Parameters
----------
count : array_like
Observed count, number of events.
    exposure : array_like
Currently this is total exposure time of the count variable.
This will likely change.
method : str
Method to use for confidence interval
This is required, there is currently no default method
alpha : float in (0, 1)
Significance level, nominal coverage of the confidence interval is
1 - alpha.
alternative : {"two-sider", "larger", "smaller")
default: "two-sided"
Specifies whether to calculate a two-sided or one-sided confidence
interval.
Returns
-------
tuple (low, upp) : confidence limits.
When alternative is not "two-sided", lower or upper bound is set to
0 or inf respectively.
Notes
-----
Methods are mainly based on Barker (2002) [1]_ and Swift (2009) [3]_.
Available methods are:
- "exact-c" central confidence interval based on gamma distribution
- "score" : based on score test, uses variance under null value
- "wald" : based on wald test, uses variance base on estimated rate.
- "waldccv" : based on wald test with 0.5 count added to variance
computation. This does not use continuity correction for the center of
the confidence interval.
- "midp-c" : based on midp correction of central exact confidence interval.
this uses numerical inversion of the test function. not vectorized.
- "jeffreys" : based on Jeffreys' prior. computed using gamma distribution
- "sqrt" : based on square root transformed counts
- "sqrt-a" based on Anscombe square root transformation of counts + 3/8.
- "sqrt-centcc" will likely be dropped. anscombe with continuity corrected
center.
(Similar to R survival cipoisson, but without the 3/8 right shift of
the confidence interval).
sqrt-cent is the same as sqrt-a, using a different computation, will be
deleted.
    sqrt-v is a corrected square root method attributed to Vandenbroucke, which
might also be deleted.
Todo:
- missing dispersion,
- maybe split nobs and exposure (? needed in NB). Exposure could be used
to standardize rate.
- modified wald, switch method if count=0.
See Also
--------
test_poisson
References
----------
.. [1] Barker, Lawrence. 2002. “A Comparison of Nine Confidence Intervals
for a Poisson Parameter When the Expected Number of Events Is ≤ 5.”
The American Statistician 56 (2): 85–89.
https://doi.org/10.1198/000313002317572736.
.. [2] Patil, VV, and HV Kulkarni. 2012. “Comparison of Confidence
Intervals for the Poisson Mean: Some New Aspects.”
REVSTAT–Statistical Journal 10(2): 211–27.
.. [3] Swift, Michael Bruce. 2009. “Comparison of Confidence Intervals for
a Poisson Mean – Further Considerations.” Communications in Statistics -
Theory and Methods 38 (5): 748–59.
https://doi.org/10.1080/03610920802255856.
"""
n = exposure # short hand
rate = count / exposure
if alternative == 'two-sided':
alpha = alpha / 2
elif alternative not in ['larger', 'smaller']:
raise NotImplementedError(
f"alternative {alternative} is not available")
if method is None:
msg = "method needs to be specified, currently no default method"
raise ValueError(msg)
if method == "wald":
whalf = stats.norm.isf(alpha) * np.sqrt(rate / n)
ci = (rate - whalf, rate + whalf)
elif method == "waldccv":
# based on WCC in Barker 2002
# add 0.5 event, not 0.5 event rate as in BARKER waldcc
whalf = stats.norm.isf(alpha) * np.sqrt((rate + 0.5 / n) / n)
ci = (rate - whalf, rate + whalf)
elif method == "score":
crit = stats.norm.isf(alpha)
center = count + crit**2 / 2
whalf = crit * np.sqrt(count + crit**2 / 4)
ci = ((center - whalf) / n, (center + whalf) / n)
elif method == "midp-c":
# note local alpha above is for one tail
ci = _invert_test_confint(count, n, alpha=2 * alpha, method="midp-c",
method_start="exact-c")
elif method == "sqrt":
# drop, wrong n
crit = stats.norm.isf(alpha)
center = rate + crit**2 / (4 * n)
whalf = crit * np.sqrt(rate / n)
ci = (center - whalf, center + whalf)
elif method == "sqrt-cent":
crit = stats.norm.isf(alpha)
center = count + crit**2 / 4
whalf = crit * np.sqrt(count + 3 / 8)
ci = ((center - whalf) / n, (center + whalf) / n)
elif method == "sqrt-centcc":
# drop with cc, does not match cipoisson in R survival
crit = stats.norm.isf(alpha)
# avoid sqrt of negative value if count=0
center_low = np.sqrt(np.maximum(count + 3 / 8 - 0.5, 0))
center_upp = np.sqrt(count + 3 / 8 + 0.5)
whalf = crit / 2
# above is for ci of count
ci = (((np.maximum(center_low - whalf, 0))**2 - 3 / 8) / n,
((center_upp + whalf)**2 - 3 / 8) / n)
# crit = stats.norm.isf(alpha)
# center = count
# whalf = crit * np.sqrt((count + 3 / 8 + 0.5))
# ci = ((center - whalf - 0.5) / n, (center + whalf + 0.5) / n)
elif method == "sqrt-a":
# anscombe, based on Swift 2009 (with transformation to rate)
crit = stats.norm.isf(alpha)
center = np.sqrt(count + 3 / 8)
whalf = crit / 2
# above is for ci of count
ci = (((np.maximum(center - whalf, 0))**2 - 3 / 8) / n,
((center + whalf)**2 - 3 / 8) / n)
elif method == "sqrt-v":
# vandenbroucke, based on Swift 2009 (with transformation to rate)
crit = stats.norm.isf(alpha)
center = np.sqrt(count + (crit**2 + 2) / 12)
whalf = crit / 2
# above is for ci of count
ci = (np.maximum(center - whalf, 0))**2 / n, (center + whalf)**2 / n
elif method in ["gamma", "exact-c"]:
# garwood exact, gamma
low = stats.gamma.ppf(alpha, count) / exposure
upp = stats.gamma.isf(alpha, count+1) / exposure
if np.isnan(low).any():
# case with count = 0
if np.size(low) == 1:
low = 0.0
else:
low[np.isnan(low)] = 0.0
ci = (low, upp)
elif method.startswith("jeff"):
# jeffreys, gamma
countc = count + 0.5
ci = (stats.gamma.ppf(alpha, countc) / exposure,
stats.gamma.isf(alpha, countc) / exposure)
else:
raise ValueError("unknown method %s" % method)
if alternative == "larger":
ci = (0, ci[1])
elif alternative == "smaller":
ci = (ci[0], np.inf)
ci = (np.maximum(ci[0], 0), ci[1])
return ci | Confidence interval for a Poisson mean or rate
The function is vectorized for all methods except "midp-c", which uses
an iterative method to invert the hypothesis test function.
All current methods are central, that is, the probability of each tail is
smaller than or equal to alpha / 2.
Parameters
----------
count : array_like
Observed count, number of events.
exposure : array_like
Currently this is total exposure time of the count variable.
This will likely change.
method : str
Method to use for confidence interval
This is required, there is currently no default method
alpha : float in (0, 1)
Significance level, nominal coverage of the confidence interval is
1 - alpha.
alternative : {"two-sided", "larger", "smaller"}
default: "two-sided"
Specifies whether to calculate a two-sided or one-sided confidence
interval.
Returns
-------
tuple (low, upp) : confidence limits.
When alternative is not "two-sided", lower or upper bound is set to
0 or inf respectively.
Notes
-----
Methods are mainly based on Barker (2002) [1]_ and Swift (2009) [3]_.
Available methods are:
- "exact-c" central confidence interval based on gamma distribution
- "score" : based on score test, uses variance under null value
- "wald" : based on Wald test, uses variance based on estimated rate.
- "waldccv" : based on wald test with 0.5 count added to variance
computation. This does not use continuity correction for the center of
the confidence interval.
- "midp-c" : based on mid-p correction of the central exact confidence
interval. This uses numerical inversion of the test function; not vectorized.
- "jeffreys" : based on Jeffreys' prior. computed using gamma distribution
- "sqrt" : based on square root transformed counts
- "sqrt-a" : based on Anscombe square root transformation of counts + 3/8.
- "sqrt-centcc" : Anscombe with continuity-corrected center; will likely be
dropped.
(Similar to R survival cipoisson, but without the 3/8 right shift of
the confidence interval).
sqrt-cent is the same as sqrt-a, using a different computation, will be
deleted.
sqrt-v is a corrected square root method attributed to Vandenbroucke, which
might also be deleted.
Todo:
- missing dispersion,
- maybe split nobs and exposure (? needed in NB). Exposure could be used
to standardize rate.
- modified wald, switch method if count=0.
See Also
--------
test_poisson
References
----------
.. [1] Barker, Lawrence. 2002. “A Comparison of Nine Confidence Intervals
for a Poisson Parameter When the Expected Number of Events Is ≤ 5.”
The American Statistician 56 (2): 85–89.
https://doi.org/10.1198/000313002317572736.
.. [2] Patil, VV, and HV Kulkarni. 2012. “Comparison of Confidence
Intervals for the Poisson Mean: Some New Aspects.”
REVSTAT–Statistical Journal 10(2): 211–27.
.. [3] Swift, Michael Bruce. 2009. “Comparison of Confidence Intervals for
a Poisson Mean – Further Considerations.” Communications in Statistics -
Theory and Methods 38 (5): 748–59.
https://doi.org/10.1080/03610920802255856. | confint_poisson | python | statsmodels/statsmodels | statsmodels/stats/rates.py | https://github.com/statsmodels/statsmodels/blob/master/statsmodels/stats/rates.py | BSD-3-Clause |
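A minimal usage sketch for `confint_poisson`; the count, exposure, and chosen methods below are illustrative assumptions, not values taken from the record above.
# Assumed data: 14 events over 400 person-years of exposure.
from statsmodels.stats.rates import confint_poisson

for method in ["exact-c", "score", "wald", "jeffreys"]:
    low, upp = confint_poisson(14, 400, method=method, alpha=0.05)
    print(f"{method:10s} ({low:.5f}, {upp:.5f})")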
def tolerance_int_poisson(count, exposure, prob=0.95, exposure_new=1.,
method=None, alpha=0.05,
alternative="two-sided"):
"""tolerance interval for a poisson observation
Parameters
----------
count : array_like
Observed count, number of events.
exposure : array_like
Currently this is total exposure time of the count variable.
prob : float in (0, 1)
Probability of poisson interval, often called "content".
With known parameters, each tail would have at most probability
``(1 - prob) / 2`` in the two-sided interval.
exposure_new : float
Exposure of the new or predicted observation.
method : str
Method to use for the confidence interval of the estimate of the
poisson rate, used in `confint_poisson`.
This is required, there is currently no default method.
alpha : float in (0, 1)
Significance level for the confidence interval of the estimate of the
Poisson rate. Nominal coverage of the confidence interval is
1 - alpha.
alternative : {"two-sided", "larger", "smaller"}
The tolerance interval can be two-sided or one-sided.
Alternative "larger" provides the upper bound of the confidence
interval, larger counts are outside the interval.
Returns
-------
tuple (low, upp) of limits of tolerance interval.
The tolerance interval is a closed interval, that is both ``low`` and
``upp`` are in the interval.
Notes
-----
Verified against the R package ``tolerance``, function `poistol.int`.
See Also
--------
confint_poisson
confint_quantile_poisson
References
----------
.. [1] Hahn, Gerald J., and William Q. Meeker. 1991. Statistical Intervals:
A Guide for Practitioners. 1st ed. Wiley Series in Probability and
Statistics. Wiley. https://doi.org/10.1002/9780470316771.
.. [2] Hahn, Gerald J., and Ramesh Chandra. 1981. “Tolerance Intervals for
Poisson and Binomial Variables.” Journal of Quality Technology 13 (2):
100–110. https://doi.org/10.1080/00224065.1981.11980998.
"""
prob_tail = 1 - prob
alpha_ = alpha
if alternative != "two-sided":
# confint_poisson does not have one-sided alternatives
alpha_ = alpha * 2
low, upp = confint_poisson(count, exposure, method=method, alpha=alpha_)
if exposure_new != 1:
low *= exposure_new
upp *= exposure_new
if alternative == "two-sided":
low_pred = stats.poisson.ppf(prob_tail / 2, low)
upp_pred = stats.poisson.ppf(1 - prob_tail / 2, upp)
elif alternative == "larger":
low_pred = 0
upp_pred = stats.poisson.ppf(1 - prob_tail, upp)
elif alternative == "smaller":
low_pred = stats.poisson.ppf(prob_tail, low)
upp_pred = np.inf
# clip -1 of ppf(0)
low_pred = np.maximum(low_pred, 0)
return low_pred, upp_pred | tolerance interval for a poisson observation
Parameters
----------
count : array_like
Observed count, number of events.
exposure : array_like
Currently this is total exposure time of the count variable.
prob : float in (0, 1)
Probability of poisson interval, often called "content".
With known parameters, each tail would have at most probability
``(1 - prob) / 2`` in the two-sided interval.
exposure_new : float
Exposure of the new or predicted observation.
method : str
Method to use for the confidence interval of the estimate of the
poisson rate, used in `confint_poisson`.
This is required, there is currently no default method.
alpha : float in (0, 1)
Significance level for the confidence interval of the estimate of the
Poisson rate. Nominal coverage of the confidence interval is
1 - alpha.
alternative : {"two-sided", "larger", "smaller"}
The tolerance interval can be two-sided or one-sided.
Alternative "larger" provides the upper bound of the confidence
interval, larger counts are outside the interval.
Returns
-------
tuple (low, upp) of limits of tolerance interval.
The tolerance interval is a closed interval, that is both ``low`` and
``upp`` are in the interval.
Notes
-----
Verified against the R package ``tolerance``, function `poistol.int`.
See Also
--------
confint_poisson
confint_quantile_poisson
References
----------
.. [1] Hahn, Gerald J., and William Q. Meeker. 1991. Statistical Intervals:
A Guide for Practitioners. 1st ed. Wiley Series in Probability and
Statistics. Wiley. https://doi.org/10.1002/9780470316771.
.. [2] Hahn, Gerald J., and Ramesh Chandra. 1981. “Tolerance Intervals for
Poisson and Binomial Variables.” Journal of Quality Technology 13 (2):
100–110. https://doi.org/10.1080/00224065.1981.11980998. | tolerance_int_poisson | python | statsmodels/statsmodels | statsmodels/stats/rates.py | https://github.com/statsmodels/statsmodels/blob/master/statsmodels/stats/rates.py | BSD-3-Clause |
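A hedged usage sketch for `tolerance_int_poisson`; the counts and exposures are assumed for illustration.
# Assumed data: 3 events in 1000 unit-hours; bound roughly 95% of future
# counts over 3000 unit-hours, with 95% confidence for the estimated rate.
from statsmodels.stats.rates import tolerance_int_poisson

low, upp = tolerance_int_poisson(3, 1000, prob=0.95, exposure_new=3000,
                                 method="score", alpha=0.05)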
def confint_quantile_poisson(count, exposure, prob, exposure_new=1.,
method=None, alpha=0.05,
alternative="two-sided"):
"""confidence interval for quantile of poisson random variable
Parameters
----------
count : array_like
Observed count, number of events.
exposure : array_like
Currently this is total exposure time of the count variable.
prob : float in (0, 1)
Probability for the quantile, e.g. 0.95 to get the upper 95% quantile.
With known mean mu, the quantile would be poisson.ppf(prob, mu).
exposure_new : float
Exposure of the new or predicted observation.
method : str
Method to use for the confidence interval of the estimate of the
poisson rate, used in `confint_poisson`.
This is required, there is currently no default method.
alpha : float in (0, 1)
Significance level for the confidence interval of the estimate of the
Poisson rate. Nominal coverage of the confidence interval is
1 - alpha.
alternative : {"two-sided", "larger", "smaller"}
The confidence interval can be two-sided or one-sided.
Alternative "larger" provides the upper bound of the confidence
interval, larger counts are outside the interval.
Returns
-------
tuple (low, upp) of limits of the confidence interval.
The confidence interval is a closed interval, that is both ``low`` and
``upp`` are in the interval.
See Also
--------
confint_poisson
tolerance_int_poisson
References
----------
Hahn, Gerald J, and William Q Meeker. 2010. Statistical Intervals: A Guide
for Practitioners.
"""
alpha_ = alpha
if alternative != "two-sided":
# confint_poisson does not have one-sided alternatives
alpha_ = alpha * 2
low, upp = confint_poisson(count, exposure, method=method, alpha=alpha_)
if exposure_new != 1:
low *= exposure_new
upp *= exposure_new
if alternative == "two-sided":
low_pred = stats.poisson.ppf(prob, low)
upp_pred = stats.poisson.ppf(prob, upp)
elif alternative == "larger":
low_pred = 0
upp_pred = stats.poisson.ppf(prob, upp)
elif alternative == "smaller":
low_pred = stats.poisson.ppf(prob, low)
upp_pred = np.inf
# clip -1 of ppf(0)
low_pred = np.maximum(low_pred, 0)
return low_pred, upp_pred | confidence interval for quantile of poisson random variable
Parameters
----------
count : array_like
Observed count, number of events.
exposure : array_like
Currently this is total exposure time of the count variable.
prob : float in (0, 1)
Probability for the quantile, e.g. 0.95 to get the upper 95% quantile.
With known mean mu, the quantile would be poisson.ppf(prob, mu).
exposure_new : float
Exposure of the new or predicted observation.
method : str
Method to used for confidence interval of the estimate of the
poisson rate, used in `confint_poisson`.
This is required, there is currently no default method.
alpha : float in (0, 1)
Significance level for the confidence interval of the estimate of the
Poisson rate. Nominal coverage of the confidence interval is
1 - alpha.
alternative : {"two-sided", "larger", "smaller"}
The confidence interval can be two-sided or one-sided.
Alternative "larger" provides the upper bound of the confidence
interval, larger counts are outside the interval.
Returns
-------
tuple (low, upp) of limits of the confidence interval.
The confidence interval is a closed interval, that is both ``low`` and
``upp`` are in the interval.
See Also
--------
confint_poisson
tolerance_int_poisson
References
----------
Hahn, Gerald J, and William Q Meeker. 2010. Statistical Intervals: A Guide
for Practitioners. | confint_quantile_poisson | python | statsmodels/statsmodels | statsmodels/stats/rates.py | https://github.com/statsmodels/statsmodels/blob/master/statsmodels/stats/rates.py | BSD-3-Clause |
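A hedged sketch for `confint_quantile_poisson`, mirroring the tolerance-interval example above; all inputs are assumptions.
# Assumed data: confidence interval for the 95% quantile of a future
# count observed over 3000 unit-hours.
from statsmodels.stats.rates import confint_quantile_poisson

low, upp = confint_quantile_poisson(3, 1000, prob=0.95, exposure_new=3000,
                                    method="score", alpha=0.05)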
def _invert_test_confint(count, nobs, alpha=0.05, method="midp-c",
method_start="exact-c"):
"""invert hypothesis test to get confidence interval
"""
def func(r):
v = (test_poisson(count, nobs, value=r, method=method)[1] -
alpha)**2
return v
ci = confint_poisson(count, nobs, method=method_start)
low = optimize.fmin(func, ci[0], xtol=1e-8, disp=False)
upp = optimize.fmin(func, ci[1], xtol=1e-8, disp=False)
assert np.size(low) == 1
return low[0], upp[0] | invert hypothesis test to get confidence interval | _invert_test_confint | python | statsmodels/statsmodels | statsmodels/stats/rates.py | https://github.com/statsmodels/statsmodels/blob/master/statsmodels/stats/rates.py | BSD-3-Clause |
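The helper above minimizes the squared gap between the p-value and alpha with `optimize.fmin`; a conceptually equivalent sketch for the upper "exact-c" bound, using root finding on the exact one-sided p-value instead (the function name and bracket are assumptions, not part of the source):
from scipy import optimize, stats

def upper_limit_exact(count, exposure, alpha=0.05):
    # Smallest rate r with P(X <= count | mu = r * exposure) = alpha / 2;
    # the exact one-sided p-value is monotone in r, so brentq applies.
    def tail(r):
        return stats.poisson.cdf(count, r * exposure) - alpha / 2
    upper_bracket = 10 * (count + 1) / exposure + 1
    return optimize.brentq(tail, 1e-12, upper_bracket)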
def _invert_test_confint_2indep(
count1, exposure1, count2, exposure2,
alpha=0.05,
method="score",
compare="diff",
method_start="wald"
):
"""invert hypothesis test to get confidence interval for 2indep
"""
def func(r):
v = (test_poisson_2indep(
count1, exposure1, count2, exposure2,
value=r, method=method, compare=compare
)[1] - alpha)**2
return v
ci = confint_poisson_2indep(count1, exposure1, count2, exposure2,
method=method_start, compare=compare)
low = optimize.fmin(func, ci[0], xtol=1e-8, disp=False)
upp = optimize.fmin(func, ci[1], xtol=1e-8, disp=False)
assert np.size(low) == 1
return low[0], upp[0] | invert hypothesis test to get confidence interval for 2indep | _invert_test_confint_2indep | python | statsmodels/statsmodels | statsmodels/stats/rates.py | https://github.com/statsmodels/statsmodels/blob/master/statsmodels/stats/rates.py | BSD-3-Clause |
def test_poisson_2indep(count1, exposure1, count2, exposure2, value=None,
ratio_null=None,
method=None, compare='ratio',
alternative='two-sided', etest_kwds=None):
'''Test for comparing two sample Poisson intensity rates.
Rates are defined as expected count divided by exposure.
The Null and alternative hypothesis for the rates, rate1 and rate2, of two
independent Poisson samples are
for compare = 'diff'
- H0: rate1 - rate2 - value = 0
- H1: rate1 - rate2 - value != 0 if alternative = 'two-sided'
- H1: rate1 - rate2 - value > 0 if alternative = 'larger'
- H1: rate1 - rate2 - value < 0 if alternative = 'smaller'
for compare = 'ratio'
- H0: rate1 / rate2 - value = 0
- H1: rate1 / rate2 - value != 0 if alternative = 'two-sided'
- H1: rate1 / rate2 - value > 0 if alternative = 'larger'
- H1: rate1 / rate2 - value < 0 if alternative = 'smaller'
Parameters
----------
count1 : int
Number of events in first sample, treatment group.
exposure1 : float
Total exposure (time * subjects) in first sample.
count2 : int
Number of events in second sample, control group.
exposure2 : float
Total exposure (time * subjects) in second sample.
ratio_null : float
Ratio of the two Poisson rates under the Null hypothesis. Default is 1.
Deprecated, use ``value`` instead.
.. deprecated:: 0.14.0
Use ``value`` instead.
value : float
Value of the ratio or difference of 2 independent rates under the null
hypothesis. Default is equal rates, i.e. 1 for ratio and 0 for diff.
.. versionadded:: 0.14.0
Replacement for ``ratio_null``.
method : string
Method for the test statistic and the p-value. Defaults to `'score'`.
see Notes.
ratio:
- 'wald': method W1A, wald test, variance based on observed rates
- 'score': method W2A, score test, variance based on estimate under
the Null hypothesis
- 'wald-log': W3A, uses log-ratio, variance based on observed rates
- 'score-log' W4A, uses log-ratio, variance based on estimate under
the Null hypothesis
- 'sqrt': W5A, based on variance stabilizing square root transformation
- 'exact-cond': exact conditional test based on binomial distribution
This uses ``binom_test`` which is minlike in the two-sided case.
- 'cond-midp': midpoint-pvalue of exact conditional test
- 'etest' or 'etest-score': etest with score test statistic
- 'etest-wald': etest with wald test statistic
diff:
- 'wald',
- 'waldccv'
- 'score'
- 'etest-score' or 'etest': etest with score test statistic
- 'etest-wald': etest with wald test statistic
compare : {'diff', 'ratio'}
Default is "ratio".
If compare is `ratio`, then the hypothesis test is for the
rate ratio defined by ratio = rate1 / rate2.
If compare is `diff`, then the hypothesis test is for
diff = rate1 - rate2.
alternative : {"two-sided" (default), "larger", "smaller"}
The alternative hypothesis, H1, has to be one of the following
- 'two-sided': H1: ratio, or diff, of rates is not equal to value
- 'larger' : H1: ratio, or diff, of rates is larger than value
- 'smaller' : H1: ratio, or diff, of rates is smaller than value
etest_kwds: dictionary
Additional optional parameters to be passed to the etest_poisson_2indep
function, namely y_grid.
Returns
-------
results : instance of HolderTuple class
The two main attributes are test statistic `statistic` and p-value
`pvalue`.
See Also
--------
tost_poisson_2indep
etest_poisson_2indep
Notes
-----
The hypothesis tests for compare="ratio" are based on Gu et al 2008.
The e-tests are also based on ...
- 'wald': method W1A, wald test, variance based on separate estimates
- 'score': method W2A, score test, variance based on estimate under Null
- 'wald-log': W3A, wald test for log transformed ratio
- 'score-log' W4A, score test for log transformed ratio
- 'sqrt': W5A, based on variance stabilizing square root transformation
- 'exact-cond': exact conditional test based on binomial distribution
- 'cond-midp': midpoint-pvalue of exact conditional test
- 'etest': etest with score test statistic
- 'etest-wald': etest with wald test statistic
The hypothesis tests for compare="diff" are mainly based on Ng et al 2007
and ...
- wald
- score
- etest-score
- etest-wald
Note the etests use the constrained maximum likelihood estimate (cmle) as
parameters for the underlying Poisson probabilities. The constrained cmle
parameters are the same as in the score test.
The E-test in Krishnamoorthy and Thomson uses a moment estimator instead of
the score estimator.
References
----------
.. [1] Gu, Ng, Tang, Schucany 2008: Testing the Ratio of Two Poisson Rates,
Biometrical Journal 50 (2008) 2, 2008
.. [2] Ng, H. K. T., K. Gu, and M. L. Tang. 2007. “A Comparative Study of
Tests for the Difference of Two Poisson Means.”
Computational Statistics & Data Analysis 51 (6): 3085–99.
https://doi.org/10.1016/j.csda.2006.02.004.
'''
# shortcut names
y1, n1, y2, n2 = map(np.asarray, [count1, exposure1, count2, exposure2])
d = n2 / n1
rate1, rate2 = y1 / n1, y2 / n2
rates_cmle = None
if compare == 'ratio':
if method is None:
# default method
method = 'score'
if ratio_null is not None:
warnings.warn("'ratio_null' is deprecated, use 'value' keyword",
FutureWarning)
value = ratio_null
if ratio_null is None and value is None:
# default value
value = ratio_null = 1
else:
# for results holder instance, it still contains ratio_null
ratio_null = value
r = value
r_d = r / d # r1 * n1 / (r2 * n2)
if method in ['score']:
stat = (y1 - y2 * r_d) / np.sqrt((y1 + y2) * r_d)
dist = 'normal'
elif method in ['wald']:
stat = (y1 - y2 * r_d) / np.sqrt(y1 + y2 * r_d**2)
dist = 'normal'
elif method in ['score-log']:
stat = (np.log(y1 / y2) - np.log(r_d))
stat /= np.sqrt((2 + 1 / r_d + r_d) / (y1 + y2))
dist = 'normal'
elif method in ['wald-log']:
stat = (np.log(y1 / y2) - np.log(r_d)) / np.sqrt(1 / y1 + 1 / y2)
dist = 'normal'
elif method in ['sqrt']:
stat = 2 * (np.sqrt(y1 + 3 / 8.) - np.sqrt((y2 + 3 / 8.) * r_d))
stat /= np.sqrt(1 + r_d)
dist = 'normal'
elif method in ['exact-cond', 'cond-midp']:
from statsmodels.stats import proportion
bp = r_d / (1 + r_d)
y_total = y1 + y2
stat = np.nan
# TODO: why y2 in here and not y1, check definition of H1 "larger"
pvalue = proportion.binom_test(y1, y_total, prop=bp,
alternative=alternative)
if method in ['cond-midp']:
# not inplace in case we still want binom pvalue
pvalue = pvalue - 0.5 * stats.binom.pmf(y1, y_total, bp)
dist = 'binomial'
elif method.startswith('etest'):
if method.endswith('wald'):
method_etest = 'wald'
else:
method_etest = 'score'
if etest_kwds is None:
etest_kwds = {}
stat, pvalue = etest_poisson_2indep(
count1, exposure1, count2, exposure2, value=value,
method=method_etest, alternative=alternative, **etest_kwds)
dist = 'poisson'
else:
raise ValueError(f'method "{method}" not recognized')
elif compare == "diff":
if value is None:
value = 0
if method in ['wald']:
stat = (rate1 - rate2 - value) / np.sqrt(rate1 / n1 + rate2 / n2)
dist = 'normal'
elif method in ['waldccv']:
stat = (rate1 - rate2 - value)
stat /= np.sqrt((count1 + 0.5) / n1**2 + (count2 + 0.5) / n2**2)
dist = 'normal'
elif method in ['score']:
# estimate rates with constraint MLE
count_pooled = y1 + y2
rate_pooled = count_pooled / (n1 + n2)
dt = rate_pooled - value
r2_cmle = 0.5 * (dt + np.sqrt(dt**2 + 4 * value * y2 / (n1 + n2)))
r1_cmle = r2_cmle + value
stat = ((rate1 - rate2 - value) /
np.sqrt(r1_cmle / n1 + r2_cmle / n2))
rates_cmle = (r1_cmle, r2_cmle)
dist = 'normal'
elif method.startswith('etest'):
if method.endswith('wald'):
method_etest = 'wald'
else:
method_etest = 'score'
if method == "etest":
method = method + "-score"
if etest_kwds is None:
etest_kwds = {}
stat, pvalue = etest_poisson_2indep(
count1, exposure1, count2, exposure2, value=value,
method=method_etest, compare="diff",
alternative=alternative, **etest_kwds)
dist = 'poisson'
else:
raise ValueError(f'method "{method}" not recognized')
else:
raise NotImplementedError('"compare" needs to be ratio or diff')
if dist == 'normal':
stat, pvalue = _zstat_generic2(stat, 1, alternative)
rates = (rate1, rate2)
ratio = rate1 / rate2
diff = rate1 - rate2
res = HolderTuple(statistic=stat,
pvalue=pvalue,
distribution=dist,
compare=compare,
method=method,
alternative=alternative,
rates=rates,
ratio=ratio,
diff=diff,
value=value,
rates_cmle=rates_cmle,
ratio_null=ratio_null,
)
return res | Test for comparing two sample Poisson intensity rates.
Rates are defined as expected count divided by exposure.
The Null and alternative hypothesis for the rates, rate1 and rate2, of two
independent Poisson samples are
for compare = 'diff'
- H0: rate1 - rate2 - value = 0
- H1: rate1 - rate2 - value != 0 if alternative = 'two-sided'
- H1: rate1 - rate2 - value > 0 if alternative = 'larger'
- H1: rate1 - rate2 - value < 0 if alternative = 'smaller'
for compare = 'ratio'
- H0: rate1 / rate2 - value = 0
- H1: rate1 / rate2 - value != 0 if alternative = 'two-sided'
- H1: rate1 / rate2 - value > 0 if alternative = 'larger'
- H1: rate1 / rate2 - value < 0 if alternative = 'smaller'
Parameters
----------
count1 : int
Number of events in first sample, treatment group.
exposure1 : float
Total exposure (time * subjects) in first sample.
count2 : int
Number of events in second sample, control group.
exposure2 : float
Total exposure (time * subjects) in second sample.
ratio_null : float
Ratio of the two Poisson rates under the Null hypothesis. Default is 1.
Deprecated, use ``value`` instead.
.. deprecated:: 0.14.0
Use ``value`` instead.
value : float
Value of the ratio or difference of 2 independent rates under the null
hypothesis. Default is equal rates, i.e. 1 for ratio and 0 for diff.
.. versionadded:: 0.14.0
Replacement for ``ratio_null``.
method : string
Method for the test statistic and the p-value. Defaults to `'score'`.
see Notes.
ratio:
- 'wald': method W1A, wald test, variance based on observed rates
- 'score': method W2A, score test, variance based on estimate under
the Null hypothesis
- 'wald-log': W3A, uses log-ratio, variance based on observed rates
- 'score-log' W4A, uses log-ratio, variance based on estimate under
the Null hypothesis
- 'sqrt': W5A, based on variance stabilizing square root transformation
- 'exact-cond': exact conditional test based on binomial distribution
This uses ``binom_test`` which is minlike in the two-sided case.
- 'cond-midp': midpoint-pvalue of exact conditional test
- 'etest' or 'etest-score': etest with score test statistic
- 'etest-wald': etest with wald test statistic
diff:
- 'wald',
- 'waldccv'
- 'score'
- 'etest-score' or 'etest': etest with score test statistic
- 'etest-wald': etest with wald test statistic
compare : {'diff', 'ratio'}
Default is "ratio".
If compare is `ratio`, then the hypothesis test is for the
rate ratio defined by ratio = rate1 / rate2.
If compare is `diff`, then the hypothesis test is for
diff = rate1 - rate2.
alternative : {"two-sided" (default), "larger", "smaller"}
The alternative hypothesis, H1, has to be one of the following
- 'two-sided': H1: ratio, or diff, of rates is not equal to value
- 'larger' : H1: ratio, or diff, of rates is larger than value
- 'smaller' : H1: ratio, or diff, of rates is smaller than value
etest_kwds: dictionary
Additional optional parameters to be passed to the etest_poisson_2indep
function, namely y_grid.
Returns
-------
results : instance of HolderTuple class
The two main attributes are test statistic `statistic` and p-value
`pvalue`.
See Also
--------
tost_poisson_2indep
etest_poisson_2indep
Notes
-----
The hypothesis tests for compare="ratio" are based on Gu et al 2008.
The e-tests are also based on ...
- 'wald': method W1A, wald test, variance based on separate estimates
- 'score': method W2A, score test, variance based on estimate under Null
- 'wald-log': W3A, wald test for log transformed ratio
- 'score-log' W4A, score test for log transformed ratio
- 'sqrt': W5A, based on variance stabilizing square root transformation
- 'exact-cond': exact conditional test based on binomial distribution
- 'cond-midp': midpoint-pvalue of exact conditional test
- 'etest': etest with score test statistic
- 'etest-wald': etest with wald test statistic
The hypothesis tests for compare="diff" are mainly based on Ng et al 2007
and ...
- wald
- score
- etest-score
- etest-wald
Note the etests use the constrained maximum likelihood estimate (cmle) as
parameters for the underlying Poisson probabilities. The constrained cmle
parameters are the same as in the score test.
The E-test in Krishnamoorthy and Thomson uses a moment estimator instead of
the score estimator.
References
----------
.. [1] Gu, Ng, Tang, Schucany 2008: Testing the Ratio of Two Poisson Rates,
Biometrical Journal 50 (2008) 2, 2008
.. [2] Ng, H. K. T., K. Gu, and M. L. Tang. 2007. “A Comparative Study of
Tests for the Difference of Two Poisson Means.”
Computational Statistics & Data Analysis 51 (6): 3085–99.
https://doi.org/10.1016/j.csda.2006.02.004. | test_poisson_2indep | python | statsmodels/statsmodels | statsmodels/stats/rates.py | https://github.com/statsmodels/statsmodels/blob/master/statsmodels/stats/rates.py | BSD-3-Clause |
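A hedged usage sketch for `test_poisson_2indep`; the counts and exposures are assumed, not taken from the references.
# Assumed data: 18 events in 100 exposure units vs 10 events in 120 units;
# score test of the null "rate1 / rate2 = 1".
from statsmodels.stats.rates import test_poisson_2indep

res = test_poisson_2indep(18, 100, 10, 120, value=1, method="score",
                          compare="ratio", alternative="two-sided")
print(res.statistic, res.pvalue, res.ratio)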
def _score_diff(y1, n1, y2, n2, value=0, return_cmle=False):
"""score test and cmle for difference of 2 independent poisson rates
"""
count_pooled = y1 + y2
rate1, rate2 = y1 / n1, y2 / n2
rate_pooled = count_pooled / (n1 + n2)
dt = rate_pooled - value
r2_cmle = 0.5 * (dt + np.sqrt(dt**2 + 4 * value * y2 / (n1 + n2)))
r1_cmle = r2_cmle + value
eps = 1e-20 # avoid zero division in stat_func
v = r1_cmle / n1 + r2_cmle / n2
stat = (rate1 - rate2 - value) / np.sqrt(v + eps)
if return_cmle:
return stat, r1_cmle, r2_cmle
else:
return stat | score test and cmle for difference of 2 independent poisson rates | _score_diff | python | statsmodels/statsmodels | statsmodels/stats/rates.py | https://github.com/statsmodels/statsmodels/blob/master/statsmodels/stats/rates.py | BSD-3-Clause |
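A quick numeric check of the constrained MLE returned by this private helper (counts are assumed for illustration); by construction the estimates satisfy r1 - r2 = value.
from statsmodels.stats.rates import _score_diff

stat, r1_cmle, r2_cmle = _score_diff(10, 100, 4, 80, value=0.02,
                                     return_cmle=True)
assert abs((r1_cmle - r2_cmle) - 0.02) < 1e-12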
def etest_poisson_2indep(count1, exposure1, count2, exposure2, ratio_null=None,
value=None, method='score', compare="ratio",
alternative='two-sided', ygrid=None,
y_grid=None):
"""
E-test for ratio of two sample Poisson rates.
Rates are defined as expected count divided by exposure. The Null and
alternative hypothesis for the rates, rate1 and rate2, of two independent
Poisson samples are:
for compare = 'diff'
- H0: rate1 - rate2 - value = 0
- H1: rate1 - rate2 - value != 0 if alternative = 'two-sided'
- H1: rate1 - rate2 - value > 0 if alternative = 'larger'
- H1: rate1 - rate2 - value < 0 if alternative = 'smaller'
for compare = 'ratio'
- H0: rate1 / rate2 - value = 0
- H1: rate1 / rate2 - value != 0 if alternative = 'two-sided'
- H1: rate1 / rate2 - value > 0 if alternative = 'larger'
- H1: rate1 / rate2 - value < 0 if alternative = 'smaller'
Parameters
----------
count1 : int
Number of events in first sample
exposure1 : float
Total exposure (time * subjects) in first sample
count2 : int
Number of events in second sample
exposure2 : float
Total exposure (time * subjects) in second sample
ratio_null : float
Ratio of the two Poisson rates under the Null hypothesis. Default is 1.
Deprecated, use ``value`` instead.
.. deprecated:: 0.14.0
Use ``value`` instead.
value : float
Value of the ratio or diff of 2 independent rates under the null
hypothesis. Default is equal rates, i.e. 1 for ratio and 0 for diff.
.. versionadded:: 0.14.0
Replacement for ``ratio_null``.
method : {"score", "wald"}
Method for the test statistic that defines the rejection region.
alternative : string
The alternative hypothesis, H1, has to be one of the following
- 'two-sided': H1: ratio of rates is not equal to ratio_null (default)
- 'larger' : H1: ratio of rates is larger than ratio_null
- 'smaller' : H1: ratio of rates is smaller than ratio_null
y_grid : None or 1-D ndarray
Grid values for counts of the Poisson distribution used for computing
the pvalue. By default truncation is based on an upper tail Poisson
quantiles.
ygrid : None or 1-D ndarray
Same as y_grid. Deprecated. If both y_grid and ygrid are provided,
ygrid will be ignored.
.. deprecated:: 0.14.0
Use ``y_grid`` instead.
Returns
-------
stat_sample : float
test statistic for the sample
pvalue : float
References
----------
Gu, Ng, Tang, Schucany 2008: Testing the Ratio of Two Poisson Rates,
Biometrical Journal 50 (2008) 2, 2008
Ng, H. K. T., K. Gu, and M. L. Tang. 2007. “A Comparative Study of Tests
for the Difference of Two Poisson Means.” Computational Statistics & Data
Analysis 51 (6): 3085–99. https://doi.org/10.1016/j.csda.2006.02.004.
"""
y1, n1, y2, n2 = map(np.asarray, [count1, exposure1, count2, exposure2])
d = n2 / n1
eps = 1e-20 # avoid zero division in stat_func
if compare == "ratio":
if ratio_null is None and value is None:
# default value
value = 1
elif ratio_null is not None:
warnings.warn("'ratio_null' is deprecated, use 'value' keyword",
FutureWarning)
value = ratio_null
r = value # rate1 / rate2
r_d = r / d
rate2_cmle = (y1 + y2) / n2 / (1 + r_d)
rate1_cmle = rate2_cmle * r
if method in ['score']:
def stat_func(x1, x2):
return (x1 - x2 * r_d) / np.sqrt((x1 + x2) * r_d + eps)
# TODO: do I need these? return_results ?
# rate2_cmle = (y1 + y2) / n2 / (1 + r_d)
# rate1_cmle = rate2_cmle * r
# rate1 = rate1_cmle
# rate2 = rate2_cmle
elif method in ['wald']:
def stat_func(x1, x2):
return (x1 - x2 * r_d) / np.sqrt(x1 + x2 * r_d**2 + eps)
# rate2_mle = y2 / n2
# rate1_mle = y1 / n1
# rate1 = rate1_mle
# rate2 = rate2_mle
else:
raise ValueError('method not recognized')
elif compare == "diff":
if value is None:
value = 0
tmp = _score_diff(y1, n1, y2, n2, value=value, return_cmle=True)
_, rate1_cmle, rate2_cmle = tmp
if method in ['score']:
def stat_func(x1, x2):
return _score_diff(x1, n1, x2, n2, value=value)
elif method in ['wald']:
def stat_func(x1, x2):
rate1, rate2 = x1 / n1, x2 / n2
stat = (rate1 - rate2 - value)
stat /= np.sqrt(rate1 / n1 + rate2 / n2 + eps)
return stat
else:
raise ValueError('method not recognized')
# The sampling distribution needs to be based on the null hypothesis
# use constrained MLE from 'score' calculation
rate1 = rate1_cmle
rate2 = rate2_cmle
mean1 = n1 * rate1
mean2 = n2 * rate2
stat_sample = stat_func(y1, y2)
if ygrid is not None:
warnings.warn("ygrid is deprecated, use y_grid", FutureWarning)
y_grid = y_grid if y_grid is not None else ygrid
# The following uses a fixed truncation for evaluating the probabilities
# It will currently only work for small counts, so that sf at truncation
# point is small
# We can make it depend on the amount of truncated sf.
# Some numerical optimization or checks for large means need to be added.
if y_grid is None:
threshold = stats.poisson.isf(1e-13, max(mean1, mean2))
threshold = max(threshold, 100) # keep at least 100
y_grid = np.arange(threshold + 1)
else:
y_grid = np.asarray(y_grid)
if y_grid.ndim != 1:
raise ValueError("y_grid needs to be None or 1-dimensional array")
pdf1 = stats.poisson.pmf(y_grid, mean1)
pdf2 = stats.poisson.pmf(y_grid, mean2)
stat_space = stat_func(y_grid[:, None], y_grid[None, :]) # broadcasting
eps = 1e-15 # correction for strict inequality check
if alternative in ['two-sided', '2-sided', '2s']:
mask = np.abs(stat_space) >= (np.abs(stat_sample) - eps)
elif alternative in ['larger', 'l']:
mask = stat_space >= (stat_sample - eps)
elif alternative in ['smaller', 's']:
mask = stat_space <= (stat_sample + eps)
else:
raise ValueError('invalid alternative')
pvalue = ((pdf1[:, None] * pdf2[None, :])[mask]).sum()
return stat_sample, pvalue | E-test for ratio of two sample Poisson rates.
Rates are defined as expected count divided by exposure. The Null and
alternative hypothesis for the rates, rate1 and rate2, of two independent
Poisson samples are:
for compare = 'diff'
- H0: rate1 - rate2 - value = 0
- H1: rate1 - rate2 - value != 0 if alternative = 'two-sided'
- H1: rate1 - rate2 - value > 0 if alternative = 'larger'
- H1: rate1 - rate2 - value < 0 if alternative = 'smaller'
for compare = 'ratio'
- H0: rate1 / rate2 - value = 0
- H1: rate1 / rate2 - value != 0 if alternative = 'two-sided'
- H1: rate1 / rate2 - value > 0 if alternative = 'larger'
- H1: rate1 / rate2 - value < 0 if alternative = 'smaller'
Parameters
----------
count1 : int
Number of events in first sample
exposure1 : float
Total exposure (time * subjects) in first sample
count2 : int
Number of events in second sample
exposure2 : float
Total exposure (time * subjects) in second sample
ratio_null : float
Ratio of the two Poisson rates under the Null hypothesis. Default is 1.
Deprecated, use ``value`` instead.
.. deprecated:: 0.14.0
Use ``value`` instead.
value : float
Value of the ratio or diff of 2 independent rates under the null
hypothesis. Default is equal rates, i.e. 1 for ratio and 0 for diff.
.. versionadded:: 0.14.0
Replacement for ``ratio_null``.
method : {"score", "wald"}
Method for the test statistic that defines the rejection region.
alternative : string
The alternative hypothesis, H1, has to be one of the following
- 'two-sided': H1: ratio of rates is not equal to ratio_null (default)
- 'larger' : H1: ratio of rates is larger than ratio_null
- 'smaller' : H1: ratio of rates is smaller than ratio_null
y_grid : None or 1-D ndarray
Grid values for counts of the Poisson distribution used for computing
the pvalue. By default truncation is based on an upper tail Poisson
quantiles.
ygrid : None or 1-D ndarray
Same as y_grid. Deprecated. If both y_grid and ygrid are provided,
ygrid will be ignored.
.. deprecated:: 0.14.0
Use ``y_grid`` instead.
Returns
-------
stat_sample : float
test statistic for the sample
pvalue : float
References
----------
Gu, Ng, Tang, Schucany 2008: Testing the Ratio of Two Poisson Rates,
Biometrical Journal 50 (2008) 2, 2008
Ng, H. K. T., K. Gu, and M. L. Tang. 2007. “A Comparative Study of Tests
for the Difference of Two Poisson Means.” Computational Statistics & Data
Analysis 51 (6): 3085–99. https://doi.org/10.1016/j.csda.2006.02.004. | etest_poisson_2indep | python | statsmodels/statsmodels | statsmodels/stats/rates.py | https://github.com/statsmodels/statsmodels/blob/master/statsmodels/stats/rates.py | BSD-3-Clause |
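A hedged sketch for `etest_poisson_2indep`, reusing the assumed counts from the earlier example; the e-test sums Poisson probabilities over a truncated grid, so it is slower than the asymptotic z-tests.
from statsmodels.stats.rates import etest_poisson_2indep

stat, pvalue = etest_poisson_2indep(18, 100, 10, 120, value=1,
                                    method="score", compare="ratio")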
def tost_poisson_2indep(count1, exposure1, count2, exposure2, low, upp,
method='score', compare='ratio'):
'''Equivalence test based on two one-sided `test_poisson_2indep`
This assumes that we have two independent poisson samples.
The Null and alternative hypothesis for equivalence testing are
for compare = 'ratio'
- H0: rate1 / rate2 <= low or upp <= rate1 / rate2
- H1: low < rate1 / rate2 < upp
for compare = 'diff'
- H0: rate1 - rate2 <= low or upp <= rate1 - rate2
- H1: low < rate1 - rate2 < upp
Parameters
----------
count1 : int
Number of events in first sample
exposure1 : float
Total exposure (time * subjects) in first sample
count2 : int
Number of events in second sample
exposure2 : float
Total exposure (time * subjects) in second sample
low, upp :
equivalence margin for the ratio or difference of Poisson rates
method : string
TOST uses ``test_poisson_2indep`` and has the same methods.
ratio:
- 'wald': method W1A, wald test, variance based on observed rates
- 'score': method W2A, score test, variance based on estimate under
the Null hypothesis
- 'wald-log': W3A, uses log-ratio, variance based on observed rates
- 'score-log' W4A, uses log-ratio, variance based on estimate under
the Null hypothesis
- 'sqrt': W5A, based on variance stabilizing square root transformation
- 'exact-cond': exact conditional test based on binomial distribution
This uses ``binom_test`` which is minlike in the two-sided case.
- 'cond-midp': midpoint-pvalue of exact conditional test
- 'etest' or 'etest-score': etest with score test statistic
- 'etest-wald': etest with wald test statistic
diff:
- 'wald',
- 'waldccv'
- 'score'
- 'etest-score' or 'etest': etest with score test statistic
- 'etest-wald': etest with wald test statistic
Returns
-------
results : instance of HolderTuple class
The two main attributes are test statistic `statistic` and p-value
`pvalue`.
References
----------
Gu, Ng, Tang, Schucany 2008: Testing the Ratio of Two Poisson Rates,
Biometrical Journal 50 (2008) 2, 2008
See Also
--------
test_poisson_2indep
confint_poisson_2indep
'''
tt1 = test_poisson_2indep(count1, exposure1, count2, exposure2,
value=low, method=method,
compare=compare,
alternative='larger')
tt2 = test_poisson_2indep(count1, exposure1, count2, exposure2,
value=upp, method=method,
compare=compare,
alternative='smaller')
# idx_max = 1 if t1.pvalue < t2.pvalue else 0
idx_max = np.asarray(tt1.pvalue < tt2.pvalue, int)
statistic = np.choose(idx_max, [tt1.statistic, tt2.statistic])
pvalue = np.choose(idx_max, [tt1.pvalue, tt2.pvalue])
res = HolderTuple(statistic=statistic,
pvalue=pvalue,
method=method,
compare=compare,
equiv_limits=(low, upp),
results_larger=tt1,
results_smaller=tt2,
title="Equivalence test for 2 independent Poisson rates"
)
return res | Equivalence test based on two one-sided `test_poisson_2indep`
This assumes that we have two independent poisson samples.
The Null and alternative hypothesis for equivalence testing are
for compare = 'ratio'
- H0: rate1 / rate2 <= low or upp <= rate1 / rate2
- H1: low < rate1 / rate2 < upp
for compare = 'diff'
- H0: rate1 - rate2 <= low or upp <= rate1 - rate2
- H1: low < rate1 - rate2 < upp
Parameters
----------
count1 : int
Number of events in first sample
exposure1 : float
Total exposure (time * subjects) in first sample
count2 : int
Number of events in second sample
exposure2 : float
Total exposure (time * subjects) in second sample
low, upp :
equivalence margin for the ratio or difference of Poisson rates
method : string
TOST uses ``test_poisson_2indep`` and has the same methods.
ratio:
- 'wald': method W1A, wald test, variance based on observed rates
- 'score': method W2A, score test, variance based on estimate under
the Null hypothesis
- 'wald-log': W3A, uses log-ratio, variance based on observed rates
- 'score-log' W4A, uses log-ratio, variance based on estimate under
the Null hypothesis
- 'sqrt': W5A, based on variance stabilizing square root transformation
- 'exact-cond': exact conditional test based on binomial distribution
This uses ``binom_test`` which is minlike in the two-sided case.
- 'cond-midp': midpoint-pvalue of exact conditional test
- 'etest' or 'etest-score': etest with score test statistic
- 'etest-wald': etest with wald test statistic
diff:
- 'wald',
- 'waldccv'
- 'score'
- 'etest-score' or 'etest': etest with score test statistic
- 'etest-wald': etest with wald test statistic
Returns
-------
results : instance of HolderTuple class
The two main attributes are test statistic `statistic` and p-value
`pvalue`.
References
----------
Gu, Ng, Tang, Schucany 2008: Testing the Ratio of Two Poisson Rates,
Biometrical Journal 50 (2008) 2, 2008
See Also
--------
test_poisson_2indep
confint_poisson_2indep | tost_poisson_2indep | python | statsmodels/statsmodels | statsmodels/stats/rates.py | https://github.com/statsmodels/statsmodels/blob/master/statsmodels/stats/rates.py | BSD-3-Clause |
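A hedged equivalence sketch for `tost_poisson_2indep`; the counts and the margins (0.8, 1.25) are assumptions.
from statsmodels.stats.rates import tost_poisson_2indep

res = tost_poisson_2indep(105, 100.0, 98, 100.0, low=0.8, upp=1.25,
                          method="score", compare="ratio")
# A small res.pvalue supports equivalence within the stated margins.
print(res.pvalue)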
def nonequivalence_poisson_2indep(count1, exposure1, count2, exposure2,
low, upp, method='score', compare="ratio"):
"""Test for non-equivalence, minimum effect for poisson.
This reverses null and alternative hypothesis compared to equivalence
testing. The null hypothesis is that the effect, ratio (or diff), is in
an interval that specifies a range of irrelevant or unimportant
differences between the two samples.
The Null and alternative hypothesis comparing the ratio of rates are
for compare = 'ratio':
- H0: low < rate1 / rate2 < upp
- H1: rate1 / rate2 <= low or upp <= rate1 / rate2
for compare = 'diff':
- H0: low < rate1 - rate2 < upp
- H1: rate1 - rate2 <= low or upp <= rate1 - rate2
Notes
-----
This is implemented as two one-sided tests at the minimum effect boundaries
(low, upp) with (nominal) size alpha / 2 each.
The size of the test is the sum of the two one-tailed tests, which
corresponds to an equal-tailed two-sided test.
If low and upp are equal, then the result is the same as the standard
two-sided test.
The p-value is computed as `2 * min(pvalue_low, pvalue_upp)` in analogy to
two-sided equal-tail tests.
In large samples the nominal size of the test will be below alpha.
References
----------
.. [1] Hodges, J. L., Jr., and E. L. Lehmann. 1954. Testing the Approximate
Validity of Statistical Hypotheses. Journal of the Royal Statistical
Society, Series B (Methodological) 16: 261–68.
.. [2] Kim, Jae H., and Andrew P. Robinson. 2019. “Interval-Based
Hypothesis Testing and Its Applications to Economics and Finance.”
Econometrics 7 (2): 21. https://doi.org/10.3390/econometrics7020021.
"""
tt1 = test_poisson_2indep(count1, exposure1, count2, exposure2,
value=low, method=method, compare=compare,
alternative='smaller')
tt2 = test_poisson_2indep(count1, exposure1, count2, exposure2,
value=upp, method=method, compare=compare,
alternative='larger')
# idx_min = 0 if tt1.pvalue < tt2.pvalue else 1
idx_min = np.asarray(tt1.pvalue < tt2.pvalue, int)
pvalue = 2 * np.minimum(tt1.pvalue, tt2.pvalue)
statistic = np.choose(idx_min, [tt1.statistic, tt2.statistic])
res = HolderTuple(statistic=statistic,
pvalue=pvalue,
method=method,
results_larger=tt1,
results_smaller=tt2,
title="Non-equivalence test for 2 independent Poisson rates"
)
return res | Test for non-equivalence, minimum effect for poisson.
This reverses null and alternative hypothesis compared to equivalence
testing. The null hypothesis is that the effect, ratio (or diff), is in
an interval that specifies a range of irrelevant or unimportant
differences between the two samples.
The Null and alternative hypothesis comparing the ratio of rates are
for compare = 'ratio':
- H0: low < rate1 / rate2 < upp
- H1: rate1 / rate2 <= low or upp <= rate1 / rate2
for compare = 'diff':
- H0: low < rate1 - rate2 < upp
- H1: rate1 - rate2 <= low or upp <= rate1 - rate2
Notes
-----
This is implemented as two one-sided tests at the minimum effect boundaries
(low, upp) with (nominal) size alpha / 2 each.
The size of the test is the sum of the two one-tailed tests, which
corresponds to an equal-tailed two-sided test.
If low and upp are equal, then the result is the same as the standard
two-sided test.
The p-value is computed as `2 * min(pvalue_low, pvalue_upp)` in analogy to
two-sided equal-tail tests.
In large samples the nominal size of the test will be below alpha.
References
----------
.. [1] Hodges, J. L., Jr., and E. L. Lehmann. 1954. Testing the Approximate
Validity of Statistical Hypotheses. Journal of the Royal Statistical
Society, Series B (Methodological) 16: 261–68.
.. [2] Kim, Jae H., and Andrew P. Robinson. 2019. “Interval-Based
Hypothesis Testing and Its Applications to Economics and Finance.”
Econometrics 7 (2): 21. https://doi.org/10.3390/econometrics7020021. | nonequivalence_poisson_2indep | python | statsmodels/statsmodels | statsmodels/stats/rates.py | https://github.com/statsmodels/statsmodels/blob/master/statsmodels/stats/rates.py | BSD-3-Clause |
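A hedged minimum-effect sketch for `nonequivalence_poisson_2indep`; here the null is that the rate ratio lies inside the assumed irrelevance region (0.95, 1.05), and all counts are illustrative.
from statsmodels.stats.rates import nonequivalence_poisson_2indep

res = nonequivalence_poisson_2indep(200, 100.0, 100, 100.0,
                                    low=0.95, upp=1.05,
                                    method="score", compare="ratio")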
def confint_poisson_2indep(count1, exposure1, count2, exposure2,
method='score', compare='ratio', alpha=0.05,
method_mover="score",
):
"""Confidence interval for ratio or difference of 2 indep poisson rates.
Parameters
----------
count1 : int
Number of events in first sample.
exposure1 : float
Total exposure (time * subjects) in first sample.
count2 : int
Number of events in second sample.
exposure2 : float
Total exposure (time * subjects) in second sample.
method : string
Method for the test statistic and the p-value. Defaults to `'score'`.
see Notes.
ratio:
- 'wald': NOT YET, method W1A, wald test, variance based on observed
rates
- 'waldcc' :
- 'score': method W2A, score test, variance based on estimate under
the Null hypothesis
- 'wald-log': W3A, uses log-ratio, variance based on observed rates
- 'score-log' W4A, uses log-ratio, variance based on estimate under
the Null hypothesis
- 'sqrt': W5A, based on variance stabilizing square root transformation
- 'sqrtcc' :
- 'exact-cond': NOT YET, exact conditional test based on binomial
distribution
This uses ``binom_test`` which is minlike in the two-sided case.
- 'cond-midp': NOT YET, midpoint-pvalue of exact conditional test
- 'mover' :
diff:
- 'wald',
- 'waldccv'
- 'score'
- 'mover'
compare : {'diff', 'ratio'}
Default is "ratio".
If compare is `diff`, then the hypothesis test is for
diff = rate1 - rate2.
If compare is `ratio`, then the hypothesis test is for the
rate ratio defined by ratio = rate1 / rate2.
method_mover : str
Method used for the confidence intervals of the individual rates in the
"mover" method. Default is "score".
alpha : float in (0, 1)
Significance level, nominal coverage of the confidence interval is
1 - alpha.
Returns
-------
tuple (low, upp) : confidence limits.
"""
# shortcut names
y1, n1, y2, n2 = map(np.asarray, [count1, exposure1, count2, exposure2])
rate1, rate2 = y1 / n1, y2 / n2
alpha = alpha / 2 # two-sided only
if compare == "ratio":
if method == "score":
low, upp = _invert_test_confint_2indep(
count1, exposure1, count2, exposure2,
alpha=alpha * 2, # check how alpha is defined
method="score",
compare="ratio",
method_start="waldcc"
)
ci = (low, upp)
elif method == "wald-log":
crit = stats.norm.isf(alpha)
c = 0
center = (count1 + c) / (count2 + c) * n2 / n1
std = np.sqrt(1 / (count1 + c) + 1 / (count2 + c))
ci = (center * np.exp(- crit * std), center * np.exp(crit * std))
elif method == "score-log":
low, upp = _invert_test_confint_2indep(
count1, exposure1, count2, exposure2,
alpha=alpha * 2, # check how alpha is defined
method="score-log",
compare="ratio",
method_start="waldcc"
)
ci = (low, upp)
elif method == "waldcc":
crit = stats.norm.isf(alpha)
center = (count1 + 0.5) / (count2 + 0.5) * n2 / n1
std = np.sqrt(1 / (count1 + 0.5) + 1 / (count2 + 0.5))
ci = (center * np.exp(- crit * std), center * np.exp(crit * std))
elif method == "sqrtcc":
# coded based on Price, Bonett 2000 equ (2.4)
crit = stats.norm.isf(alpha)
center = np.sqrt((count1 + 0.5) * (count2 + 0.5))
std = 0.5 * np.sqrt(count1 + 0.5 + count2 + 0.5 - 0.25 * crit**2)
denom = (count2 + 0.5 - 0.25 * crit**2)
low_sqrt = (center - crit * std) / denom
upp_sqrt = (center + crit * std) / denom
ci = (low_sqrt**2, upp_sqrt**2)
elif method == "mover":
method_p = method_mover
ci1 = confint_poisson(y1, n1, method=method_p, alpha=2*alpha)
ci2 = confint_poisson(y2, n2, method=method_p, alpha=2*alpha)
ci = _mover_confint(rate1, rate2, ci1, ci2, contrast="ratio")
else:
raise ValueError(f'method "{method}" not recognized')
ci = (np.maximum(ci[0], 0), ci[1])
elif compare == "diff":
if method in ['wald']:
crit = stats.norm.isf(alpha)
center = rate1 - rate2
half = crit * np.sqrt(rate1 / n1 + rate2 / n2)
ci = center - half, center + half
elif method in ['waldccv']:
crit = stats.norm.isf(alpha)
center = rate1 - rate2
std = np.sqrt((count1 + 0.5) / n1**2 + (count2 + 0.5) / n2**2)
half = crit * std
ci = center - half, center + half
elif method == "score":
low, upp = _invert_test_confint_2indep(
count1, exposure1, count2, exposure2,
alpha=alpha * 2, # check how alpha is defined
method="score",
compare="diff",
method_start="waldccv"
)
ci = (low, upp)
elif method == "mover":
method_p = method_mover
ci1 = confint_poisson(y1, n1, method=method_p, alpha=2*alpha)
ci2 = confint_poisson(y2, n2, method=method_p, alpha=2*alpha)
ci = _mover_confint(rate1, rate2, ci1, ci2, contrast="diff")
else:
raise ValueError(f'method "{method}" not recognized')
else:
raise NotImplementedError('"compare" needs to be ratio or diff')
return ci | Confidence interval for ratio or difference of 2 indep poisson rates.
Parameters
----------
count1 : int
Number of events in first sample.
exposure1 : float
Total exposure (time * subjects) in first sample.
count2 : int
Number of events in second sample.
exposure2 : float
Total exposure (time * subjects) in second sample.
method : string
Method for the test statistic and the p-value. Defaults to `'score'`.
see Notes.
ratio:
- 'wald': NOT YET, method W1A, wald test, variance based on observed
rates
- 'waldcc' :
- 'score': method W2A, score test, variance based on estimate under
the Null hypothesis
- 'wald-log': W3A, uses log-ratio, variance based on observed rates
- 'score-log' W4A, uses log-ratio, variance based on estimate under
the Null hypothesis
- 'sqrt': W5A, based on variance stabilizing square root transformation
- 'sqrtcc' :
- 'exact-cond': NOT YET, exact conditional test based on binomial
distribution
This uses ``binom_test`` which is minlike in the two-sided case.
- 'cond-midp': NOT YET, midpoint-pvalue of exact conditional test
- 'mover' :
diff:
- 'wald',
- 'waldccv'
- 'score'
- 'mover'
compare : {'diff', 'ratio'}
Default is "ratio".
If compare is `diff`, then the hypothesis test is for
diff = rate1 - rate2.
If compare is `ratio`, then the hypothesis test is for the
rate ratio defined by ratio = rate1 / rate2.
method_mover : str
Method used for the confidence intervals of the individual rates in the
"mover" method. Default is "score".
alpha : float in (0, 1)
Significance level, nominal coverage of the confidence interval is
1 - alpha.
Returns
-------
tuple (low, upp) : confidence limits. | confint_poisson_2indep | python | statsmodels/statsmodels | statsmodels/stats/rates.py | https://github.com/statsmodels/statsmodels/blob/master/statsmodels/stats/rates.py | BSD-3-Clause |
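A hedged sketch for `confint_poisson_2indep`, with the assumed counts used earlier; one score interval for the rate ratio and one MOVER interval for the rate difference.
from statsmodels.stats.rates import confint_poisson_2indep

ci_ratio = confint_poisson_2indep(18, 100, 10, 120,
                                  method="score", compare="ratio")
ci_diff = confint_poisson_2indep(18, 100, 10, 120,
                                 method="mover", compare="diff")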
def power_poisson_ratio_2indep(
rate1, rate2, nobs1,
nobs_ratio=1,
exposure=1,
value=0,
alpha=0.05,
dispersion=1,
alternative="smaller",
method_var="alt",
return_results=True,
):
"""Power of test of ratio of 2 independent poisson rates.
This is based on Zhu (2017) and Zhu and Lakkis (2014). It does not
directly correspond to `test_poisson_2indep`.
Parameters
----------
rate1 : float
Poisson rate for the first sample, treatment group, under the
alternative hypothesis.
rate2 : float
Poisson rate for the second sample, reference group, under the
alternative hypothesis.
nobs1 : float or int
Number of observations in sample 1.
nobs_ratio : float
Sample size ratio, nobs2 = nobs_ratio * nobs1.
exposure : float
Exposure for each observation. Total exposure is nobs1 * exposure
and nobs2 * exposure.
alpha : float in interval (0,1)
Significance level, e.g. 0.05, is the probability of a type I
error, that is, wrong rejections if the Null Hypothesis is true.
value : float
Rate ratio, rate1 / rate2, under the null hypothesis.
dispersion : float
Dispersion coefficient for quasi-Poisson. Dispersion different from
one can capture over or under dispersion relative to Poisson
distribution.
method_var : {"score", "alt"}
The variance of the test statistic for the null hypothesis given the
rates under the alternative can be either equal to the rates under the
alternative ``method_var="alt"``, or estimated under the constraint
of the null hypothesis, ``method_var="score"``.
alternative : string, 'two-sided' (default), 'larger', 'smaller'
Alternative hypothesis whether the power is calculated for a
two-sided (default) or one sided test. The one-sided test can be
either 'larger', 'smaller'.
return_results : bool
If true, then a results instance with extra information is returned,
otherwise only the computed power is returned.
Returns
-------
results : results instance or float
If return_results is False, then only the power is returned.
If return_results is True, then a results instance with the
information in attributes is returned.
power : float
Power of the test, e.g. 0.8, is one minus the probability of a
type II error. Power is the probability that the test correctly
rejects the Null Hypothesis if the Alternative Hypothesis is true.
Other attributes in results instance include :
std_null
standard error of difference under the null hypothesis (without
sqrt(nobs1))
std_alt
standard error of difference under the alternative hypothesis
(without sqrt(nobs1))
References
----------
.. [1] Zhu, Haiyuan. 2017. “Sample Size Calculation for Comparing Two
Poisson or Negative Binomial Rates in Noninferiority or Equivalence
Trials.” Statistics in Biopharmaceutical Research, March.
https://doi.org/10.1080/19466315.2016.1225594
.. [2] Zhu, Haiyuan, and Hassan Lakkis. 2014. “Sample Size Calculation for
Comparing Two Negative Binomial Rates.” Statistics in Medicine 33 (3):
376–87. https://doi.org/10.1002/sim.5947.
.. [3] PASS documentation
"""
# TODO: avoid possible circular import, check if needed
from statsmodels.stats.power import normal_power_het
rate1, rate2, nobs1 = map(np.asarray, [rate1, rate2, nobs1])
nobs2 = nobs_ratio * nobs1
v1 = dispersion / exposure * (1 / rate1 + 1 / (nobs_ratio * rate2))
if method_var == "alt":
v0 = v1
elif method_var == "score":
# nobs_ratio = 1 / nobs_ratio
v0 = dispersion / exposure * (1 + value / nobs_ratio)**2
v0 /= value / nobs_ratio * (rate1 + (nobs_ratio * rate2))
else:
raise NotImplementedError(f"method_var {method_var} not recognized")
std_null = np.sqrt(v0)
std_alt = np.sqrt(v1)
es = np.log(rate1 / rate2) - np.log(value)
pow_ = normal_power_het(es, nobs1, alpha, std_null=std_null,
std_alternative=std_alt,
alternative=alternative)
p_pooled = None # TODO: replace or remove
if return_results:
res = HolderTuple(
power=pow_,
p_pooled=p_pooled,
std_null=std_null,
std_alt=std_alt,
nobs1=nobs1,
nobs2=nobs2,
nobs_ratio=nobs_ratio,
alpha=alpha,
tuple_=("power",), # override default
)
return res
return pow_ | Power of test of ratio of 2 independent poisson rates.
    This is based on Zhu (2017) and Zhu and Lakkis (2014). It does not directly correspond
to `test_poisson_2indep`.
Parameters
----------
rate1 : float
Poisson rate for the first sample, treatment group, under the
alternative hypothesis.
rate2 : float
Poisson rate for the second sample, reference group, under the
alternative hypothesis.
nobs1 : float or int
Number of observations in sample 1.
nobs_ratio : float
Sample size ratio, nobs2 = nobs_ratio * nobs1.
exposure : float
Exposure for each observation. Total exposure is nobs1 * exposure
and nobs2 * exposure.
alpha : float in interval (0,1)
Significance level, e.g. 0.05, is the probability of a type I
error, that is wrong rejections if the Null Hypothesis is true.
value : float
Rate ratio, rate1 / rate2, under the null hypothesis.
dispersion : float
Dispersion coefficient for quasi-Poisson. Dispersion different from
one can capture over or under dispersion relative to Poisson
distribution.
method_var : {"score", "alt"}
The variance of the test statistic for the null hypothesis given the
rates under the alternative can be either equal to the rates under the
        alternative ``method_var="alt"``, or estimated under the constraint
        of the null hypothesis, ``method_var="score"``.
alternative : string, 'two-sided' (default), 'larger', 'smaller'
Alternative hypothesis whether the power is calculated for a
two-sided (default) or one sided test. The one-sided test can be
either 'larger', 'smaller'.
return_results : bool
If true, then a results instance with extra information is returned,
otherwise only the computed power is returned.
Returns
-------
results : results instance or float
If return_results is False, then only the power is returned.
If return_results is True, then a results instance with the
information in attributes is returned.
power : float
Power of the test, e.g. 0.8, is one minus the probability of a
type II error. Power is the probability that the test correctly
rejects the Null Hypothesis if the Alternative Hypothesis is true.
Other attributes in results instance include :
std_null
standard error of difference under the null hypothesis (without
sqrt(nobs1))
std_alt
standard error of difference under the alternative hypothesis
(without sqrt(nobs1))
References
----------
.. [1] Zhu, Haiyuan. 2017. “Sample Size Calculation for Comparing Two
Poisson or Negative Binomial Rates in Noninferiority or Equivalence
Trials.” Statistics in Biopharmaceutical Research, March.
https://doi.org/10.1080/19466315.2016.1225594
.. [2] Zhu, Haiyuan, and Hassan Lakkis. 2014. “Sample Size Calculation for
Comparing Two Negative Binomial Rates.” Statistics in Medicine 33 (3):
376–87. https://doi.org/10.1002/sim.5947.
.. [3] PASS documentation | power_poisson_ratio_2indep | python | statsmodels/statsmodels | statsmodels/stats/rates.py | https://github.com/statsmodels/statsmodels/blob/master/statsmodels/stats/rates.py | BSD-3-Clause |
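A usage sketch for the ``power_poisson_ratio_2indep`` row above; the rates, sample size, and option choices are illustrative, not from the source:

from statsmodels.stats.rates import power_poisson_ratio_2indep

# Power to detect a true rate ratio of 1.5 against the null value of 1,
# with 100 observations per group.
power = power_poisson_ratio_2indep(
    rate1=1.5, rate2=1.0, nobs1=100,
    nobs_ratio=1, exposure=1, value=1, alpha=0.05,
    dispersion=1, method_var="score",
    alternative="two-sided", return_results=False,
)
print(power)  # a float in (0, 1)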
def power_equivalence_poisson_2indep(rate1, rate2, nobs1,
low, upp, nobs_ratio=1,
exposure=1, alpha=0.05, dispersion=1,
method_var="alt",
return_results=False):
"""Power of equivalence test of ratio of 2 independent poisson rates.
Parameters
----------
rate1 : float
Poisson rate for the first sample, treatment group, under the
alternative hypothesis.
rate2 : float
Poisson rate for the second sample, reference group, under the
alternative hypothesis.
nobs1 : float or int
Number of observations in sample 1.
low : float
Lower equivalence margin for the rate ratio, rate1 / rate2.
upp : float
Upper equivalence margin for the rate ratio, rate1 / rate2.
nobs_ratio : float
Sample size ratio, nobs2 = nobs_ratio * nobs1.
exposure : float
Exposure for each observation. Total exposure is nobs1 * exposure
and nobs2 * exposure.
alpha : float in interval (0,1)
Significance level, e.g. 0.05, is the probability of a type I
error, that is wrong rejections if the Null Hypothesis is true.
    method_var : {"score", "alt"}
        The variance of the test statistic for the null hypothesis given the
        rates under the alternative can be either equal to the rates under the
        alternative ``method_var="alt"``, or estimated under the constraint
        of the null hypothesis, ``method_var="score"``.
return_results : bool
If true, then a results instance with extra information is returned,
otherwise only the computed power is returned.
Returns
-------
results : results instance or float
If return_results is False, then only the power is returned.
If return_results is True, then a results instance with the
information in attributes is returned.
power : float
Power of the test, e.g. 0.8, is one minus the probability of a
type II error. Power is the probability that the test correctly
rejects the Null Hypothesis if the Alternative Hypothesis is true.
Other attributes in results instance include :
std_null
standard error of difference under the null hypothesis (without
sqrt(nobs1))
std_alt
standard error of difference under the alternative hypothesis
(without sqrt(nobs1))
References
----------
.. [1] Zhu, Haiyuan. 2017. “Sample Size Calculation for Comparing Two
Poisson or Negative Binomial Rates in Noninferiority or Equivalence
Trials.” Statistics in Biopharmaceutical Research, March.
https://doi.org/10.1080/19466315.2016.1225594
.. [2] Zhu, Haiyuan, and Hassan Lakkis. 2014. “Sample Size Calculation for
Comparing Two Negative Binomial Rates.” Statistics in Medicine 33 (3):
376–87. https://doi.org/10.1002/sim.5947.
.. [3] PASS documentation
"""
rate1, rate2, nobs1 = map(np.asarray, [rate1, rate2, nobs1])
nobs2 = nobs_ratio * nobs1
v1 = dispersion / exposure * (1 / rate1 + 1 / (nobs_ratio * rate2))
if method_var == "alt":
v0_low = v0_upp = v1
elif method_var == "score":
v0_low = dispersion / exposure * (1 + low * nobs_ratio)**2
v0_low /= low * nobs_ratio * (rate1 + (nobs_ratio * rate2))
v0_upp = dispersion / exposure * (1 + upp * nobs_ratio)**2
v0_upp /= upp * nobs_ratio * (rate1 + (nobs_ratio * rate2))
else:
raise NotImplementedError(f"method_var {method_var} not recognized")
es_low = np.log(rate1 / rate2) - np.log(low)
es_upp = np.log(rate1 / rate2) - np.log(upp)
std_null_low = np.sqrt(v0_low)
std_null_upp = np.sqrt(v0_upp)
std_alternative = np.sqrt(v1)
pow_ = _power_equivalence_het(es_low, es_upp, nobs2, alpha=alpha,
std_null_low=std_null_low,
std_null_upp=std_null_upp,
std_alternative=std_alternative)
if return_results:
res = HolderTuple(
power=pow_[0],
            power_margins=pow_[1:],
std_null_low=std_null_low,
std_null_upp=std_null_upp,
std_alt=std_alternative,
nobs1=nobs1,
nobs2=nobs2,
nobs_ratio=nobs_ratio,
alpha=alpha,
tuple_=("power",), # override default
)
return res
else:
return pow_[0] | Power of equivalence test of ratio of 2 independent poisson rates.
Parameters
----------
rate1 : float
Poisson rate for the first sample, treatment group, under the
alternative hypothesis.
rate2 : float
Poisson rate for the second sample, reference group, under the
alternative hypothesis.
nobs1 : float or int
Number of observations in sample 1.
low : float
Lower equivalence margin for the rate ratio, rate1 / rate2.
upp : float
Upper equivalence margin for the rate ratio, rate1 / rate2.
nobs_ratio : float
Sample size ratio, nobs2 = nobs_ratio * nobs1.
exposure : float
Exposure for each observation. Total exposure is nobs1 * exposure
and nobs2 * exposure.
alpha : float in interval (0,1)
Significance level, e.g. 0.05, is the probability of a type I
error, that is wrong rejections if the Null Hypothesis is true.
    method_var : {"score", "alt"}
        The variance of the test statistic for the null hypothesis given the
        rates under the alternative can be either equal to the rates under the
        alternative ``method_var="alt"``, or estimated under the constraint
        of the null hypothesis, ``method_var="score"``.
return_results : bool
If true, then a results instance with extra information is returned,
otherwise only the computed power is returned.
Returns
-------
results : results instance or float
If return_results is False, then only the power is returned.
If return_results is True, then a results instance with the
information in attributes is returned.
power : float
Power of the test, e.g. 0.8, is one minus the probability of a
type II error. Power is the probability that the test correctly
rejects the Null Hypothesis if the Alternative Hypothesis is true.
Other attributes in results instance include :
std_null
standard error of difference under the null hypothesis (without
sqrt(nobs1))
std_alt
standard error of difference under the alternative hypothesis
(without sqrt(nobs1))
References
----------
.. [1] Zhu, Haiyuan. 2017. “Sample Size Calculation for Comparing Two
Poisson or Negative Binomial Rates in Noninferiority or Equivalence
Trials.” Statistics in Biopharmaceutical Research, March.
https://doi.org/10.1080/19466315.2016.1225594
.. [2] Zhu, Haiyuan, and Hassan Lakkis. 2014. “Sample Size Calculation for
Comparing Two Negative Binomial Rates.” Statistics in Medicine 33 (3):
376–87. https://doi.org/10.1002/sim.5947.
.. [3] PASS documentation | power_equivalence_poisson_2indep | python | statsmodels/statsmodels | statsmodels/stats/rates.py | https://github.com/statsmodels/statsmodels/blob/master/statsmodels/stats/rates.py | BSD-3-Clause |
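A hedged usage sketch for the equivalence-power function above, with the conventional 0.8/1.25 margins (inputs illustrative):

from statsmodels.stats.rates import power_equivalence_poisson_2indep

power = power_equivalence_poisson_2indep(
    rate1=1.0, rate2=1.0, nobs1=200, low=0.8, upp=1.25,
    nobs_ratio=1, exposure=1, alpha=0.05,
    dispersion=1, method_var="alt", return_results=False,
)
print(power)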
def _power_equivalence_het_v0(es_low, es_upp, nobs, alpha=0.05,
std_null_low=None,
std_null_upp=None,
std_alternative=None):
"""power for equivalence test
"""
s0_low = std_null_low
s0_upp = std_null_upp
s1 = std_alternative
crit = norm.isf(alpha)
pow_ = (
norm.cdf((np.sqrt(nobs) * es_low - crit * s0_low) / s1) +
norm.cdf((np.sqrt(nobs) * es_upp - crit * s0_upp) / s1) - 1
)
return pow_ | power for equivalence test | _power_equivalence_het_v0 | python | statsmodels/statsmodels | statsmodels/stats/rates.py | https://github.com/statsmodels/statsmodels/blob/master/statsmodels/stats/rates.py | BSD-3-Clause |
def _power_equivalence_het(es_low, es_upp, nobs, alpha=0.05,
std_null_low=None,
std_null_upp=None,
std_alternative=None):
"""power for equivalence test
"""
s0_low = std_null_low
s0_upp = std_null_upp
s1 = std_alternative
crit = norm.isf(alpha)
# Note: rejection region is an interval [low, upp]
# Here we compute the complement of the two tail probabilities
p1 = norm.sf((np.sqrt(nobs) * es_low - crit * s0_low) / s1)
p2 = norm.cdf((np.sqrt(nobs) * es_upp + crit * s0_upp) / s1)
pow_ = 1 - (p1 + p2)
return pow_, p1, p2 | power for equivalence test | _power_equivalence_het | python | statsmodels/statsmodels | statsmodels/stats/rates.py | https://github.com/statsmodels/statsmodels/blob/master/statsmodels/stats/rates.py | BSD-3-Clause |
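The helper above returns the TOST power together with the two one-sided tail terms. A hand-rolled check of the same formula with made-up numbers (not library API):

import numpy as np
from scipy.stats import norm

es_low, es_upp = 0.22, -0.22   # effect sizes relative to the two margins
nobs, alpha = 200.0, 0.05
s0_low = s0_upp = s1 = 1.4     # standard errors without the sqrt(nobs) factor
crit = norm.isf(alpha)
p1 = norm.sf((np.sqrt(nobs) * es_low - crit * s0_low) / s1)
p2 = norm.cdf((np.sqrt(nobs) * es_upp + crit * s0_upp) / s1)
print(1 - (p1 + p2))           # matches _power_equivalence_het(...)[0]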
def power_poisson_diff_2indep(rate1, rate2, nobs1, nobs_ratio=1, alpha=0.05,
value=0,
method_var="score",
alternative='two-sided',
return_results=True):
"""Power of ztest for the difference between two independent poisson rates.
Parameters
----------
rate1 : float
Poisson rate for the first sample, treatment group, under the
alternative hypothesis.
rate2 : float
Poisson rate for the second sample, reference group, under the
alternative hypothesis.
nobs1 : float or int
Number of observations in sample 1.
nobs_ratio : float
Sample size ratio, nobs2 = nobs_ratio * nobs1.
alpha : float in interval (0,1)
Significance level, e.g. 0.05, is the probability of a type I
error, that is wrong rejections if the Null Hypothesis is true.
value : float
Difference between rates 1 and 2 under the null hypothesis.
method_var : {"score", "alt"}
The variance of the test statistic for the null hypothesis given the
        rates under the alternative can be either equal to the rates under the
        alternative ``method_var="alt"``, or estimated under the constraint
        of the null hypothesis, ``method_var="score"``.
alternative : string, 'two-sided' (default), 'larger', 'smaller'
Alternative hypothesis whether the power is calculated for a
two-sided (default) or one sided test. The one-sided test can be
either 'larger', 'smaller'.
return_results : bool
If true, then a results instance with extra information is returned,
otherwise only the computed power is returned.
Returns
-------
results : results instance or float
If return_results is False, then only the power is returned.
If return_results is True, then a results instance with the
information in attributes is returned.
power : float
Power of the test, e.g. 0.8, is one minus the probability of a
type II error. Power is the probability that the test correctly
rejects the Null Hypothesis if the Alternative Hypothesis is true.
Other attributes in results instance include :
std_null
standard error of difference under the null hypothesis (without
sqrt(nobs1))
std_alt
standard error of difference under the alternative hypothesis
(without sqrt(nobs1))
References
----------
.. [1] Stucke, Kathrin, and Meinhard Kieser. 2013. “Sample Size
Calculations for Noninferiority Trials with Poisson Distributed Count
Data.” Biometrical Journal 55 (2): 203–16.
https://doi.org/10.1002/bimj.201200142.
.. [2] PASS manual chapter 436
"""
# TODO: avoid possible circular import, check if needed
from statsmodels.stats.power import normal_power_het
rate1, rate2, nobs1 = map(np.asarray, [rate1, rate2, nobs1])
diff = rate1 - rate2
_, std_null, std_alt = _std_2poisson_power(
rate1,
rate2,
nobs_ratio=nobs_ratio,
alpha=alpha,
value=value,
method_var=method_var,
)
pow_ = normal_power_het(diff - value, nobs1, alpha, std_null=std_null,
std_alternative=std_alt,
alternative=alternative)
if return_results:
res = HolderTuple(
power=pow_,
rates_alt=(rate2 + diff, rate2),
std_null=std_null,
std_alt=std_alt,
nobs1=nobs1,
nobs2=nobs_ratio * nobs1,
nobs_ratio=nobs_ratio,
alpha=alpha,
tuple_=("power",), # override default
)
return res
else:
return pow_ | Power of ztest for the difference between two independent poisson rates.
Parameters
----------
rate1 : float
Poisson rate for the first sample, treatment group, under the
alternative hypothesis.
rate2 : float
Poisson rate for the second sample, reference group, under the
alternative hypothesis.
nobs1 : float or int
Number of observations in sample 1.
nobs_ratio : float
Sample size ratio, nobs2 = nobs_ratio * nobs1.
alpha : float in interval (0,1)
Significance level, e.g. 0.05, is the probability of a type I
error, that is wrong rejections if the Null Hypothesis is true.
value : float
Difference between rates 1 and 2 under the null hypothesis.
method_var : {"score", "alt"}
The variance of the test statistic for the null hypothesis given the
        rates under the alternative can be either equal to the rates under the
        alternative ``method_var="alt"``, or estimated under the constraint
        of the null hypothesis, ``method_var="score"``.
alternative : string, 'two-sided' (default), 'larger', 'smaller'
Alternative hypothesis whether the power is calculated for a
two-sided (default) or one sided test. The one-sided test can be
either 'larger', 'smaller'.
return_results : bool
If true, then a results instance with extra information is returned,
otherwise only the computed power is returned.
Returns
-------
results : results instance or float
If return_results is False, then only the power is returned.
If return_results is True, then a results instance with the
information in attributes is returned.
power : float
Power of the test, e.g. 0.8, is one minus the probability of a
type II error. Power is the probability that the test correctly
rejects the Null Hypothesis if the Alternative Hypothesis is true.
Other attributes in results instance include :
std_null
standard error of difference under the null hypothesis (without
sqrt(nobs1))
std_alt
standard error of difference under the alternative hypothesis
(without sqrt(nobs1))
References
----------
.. [1] Stucke, Kathrin, and Meinhard Kieser. 2013. “Sample Size
Calculations for Noninferiority Trials with Poisson Distributed Count
Data.” Biometrical Journal 55 (2): 203–16.
https://doi.org/10.1002/bimj.201200142.
.. [2] PASS manual chapter 436 | power_poisson_diff_2indep | python | statsmodels/statsmodels | statsmodels/stats/rates.py | https://github.com/statsmodels/statsmodels/blob/master/statsmodels/stats/rates.py | BSD-3-Clause |
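A usage sketch for ``power_poisson_diff_2indep`` (illustrative inputs; ``return_results=True`` yields the results instance with the extra attributes listed above):

from statsmodels.stats.rates import power_poisson_diff_2indep

res = power_poisson_diff_2indep(
    rate1=1.2, rate2=1.0, nobs1=150,
    nobs_ratio=1, alpha=0.05, value=0,
    method_var="score", alternative="two-sided",
    return_results=True,
)
print(res.power, res.std_null, res.std_alt)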
def _var_cmle_negbin(rate1, rate2, nobs_ratio, exposure=1, value=1,
dispersion=0):
"""
variance based on constrained cmle, for score test version
for ratio comparison of two negative binomial samples
value = rate1 / rate2 under the null
"""
# definitions in Zhu
# nobs_ratio = n1 / n0
# value = ratio = r1 / r0
rate0 = rate2 # control
nobs_ratio = 1 / nobs_ratio
a = - dispersion * exposure * value * (1 + nobs_ratio)
b = (dispersion * exposure * (rate0 * value + nobs_ratio * rate1) -
(1 + nobs_ratio * value))
c = rate0 + nobs_ratio * rate1
if dispersion == 0:
r0 = -c / b
else:
r0 = (-b - np.sqrt(b**2 - 4 * a * c)) / (2 * a)
r1 = r0 * value
v = (1 / exposure / r0 * (1 + 1 / value / nobs_ratio) +
(1 + nobs_ratio) / nobs_ratio * dispersion)
r2 = r0
return v * nobs_ratio, r1, r2 | variance based on constrained cmle, for score test version
for ratio comparison of two negative binomial samples
value = rate1 / rate2 under the null | _var_cmle_negbin | python | statsmodels/statsmodels | statsmodels/stats/rates.py | https://github.com/statsmodels/statsmodels/blob/master/statsmodels/stats/rates.py | BSD-3-Clause |
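A small sanity check of the helper in the Poisson limit; note ``_var_cmle_negbin`` is private, so importing it is an assumption that may break across versions:

from statsmodels.stats.rates import _var_cmle_negbin  # private helper

# With dispersion=0 the quadratic a*r0**2 + b*r0 + c collapses to the
# linear root r0 = -c / b; value=1 forces r1 == r2 under the null.
v, r1, r2 = _var_cmle_negbin(rate1=1.3, rate2=1.0, nobs_ratio=1,
                             exposure=1, value=1, dispersion=0)
print(v, r1, r2)  # here r1 == r2 == (rate1 + rate2) / 2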
def power_negbin_ratio_2indep(
rate1, rate2, nobs1,
nobs_ratio=1,
exposure=1,
value=1,
alpha=0.05,
dispersion=0.01,
alternative="two-sided",
method_var="alt",
return_results=True):
"""
Power of test of ratio of 2 independent negative binomial rates.
Parameters
----------
rate1 : float
Poisson rate for the first sample, treatment group, under the
alternative hypothesis.
rate2 : float
Poisson rate for the second sample, reference group, under the
alternative hypothesis.
nobs1 : float or int
Number of observations in sample 1.
nobs_ratio : float
Sample size ratio, nobs2 = nobs_ratio * nobs1.
exposure : float
Exposure for each observation. Total exposure is nobs1 * exposure
and nobs2 * exposure.
value : float
Rate ratio, rate1 / rate2, under the null hypothesis.
alpha : float in interval (0,1)
Significance level, e.g. 0.05, is the probability of a type I
error, that is wrong rejections if the Null Hypothesis is true.
dispersion : float >= 0.
Dispersion parameter for Negative Binomial distribution.
The Poisson limiting case corresponds to ``dispersion=0``.
    method_var : {"score", "alt", "ftotal"}
        The variance of the test statistic for the null hypothesis given the
        rates under the alternative can be equal to the rates under the
        alternative, ``method_var="alt"``, estimated under the constraint of
        the null hypothesis, ``method_var="score"``, or based on a moment
        constrained estimate, ``method_var="ftotal"``. See references.
alternative : string, 'two-sided' (default), 'larger', 'smaller'
Alternative hypothesis whether the power is calculated for a
two-sided (default) or one sided test. The one-sided test can be
either 'larger', 'smaller'.
return_results : bool
If true, then a results instance with extra information is returned,
otherwise only the computed power is returned.
Returns
-------
results : results instance or float
If return_results is False, then only the power is returned.
If return_results is True, then a results instance with the
information in attributes is returned.
power : float
Power of the test, e.g. 0.8, is one minus the probability of a
type II error. Power is the probability that the test correctly
rejects the Null Hypothesis if the Alternative Hypothesis is true.
Other attributes in results instance include :
std_null
standard error of difference under the null hypothesis (without
sqrt(nobs1))
std_alt
standard error of difference under the alternative hypothesis
(without sqrt(nobs1))
References
----------
.. [1] Zhu, Haiyuan. 2017. “Sample Size Calculation for Comparing Two
Poisson or Negative Binomial Rates in Noninferiority or Equivalence
Trials.” Statistics in Biopharmaceutical Research, March.
https://doi.org/10.1080/19466315.2016.1225594
.. [2] Zhu, Haiyuan, and Hassan Lakkis. 2014. “Sample Size Calculation for
Comparing Two Negative Binomial Rates.” Statistics in Medicine 33 (3):
376–87. https://doi.org/10.1002/sim.5947.
.. [3] PASS documentation
"""
# TODO: avoid possible circular import, check if needed
from statsmodels.stats.power import normal_power_het
rate1, rate2, nobs1 = map(np.asarray, [rate1, rate2, nobs1])
nobs2 = nobs_ratio * nobs1
v1 = ((1 / rate1 + 1 / (nobs_ratio * rate2)) / exposure +
(1 + nobs_ratio) / nobs_ratio * dispersion)
if method_var == "alt":
v0 = v1
elif method_var == "ftotal":
v0 = (1 + value * nobs_ratio)**2 / (
exposure * nobs_ratio * value * (rate1 + nobs_ratio * rate2))
v0 += (1 + nobs_ratio) / nobs_ratio * dispersion
elif method_var == "score":
v0 = _var_cmle_negbin(rate1, rate2, nobs_ratio,
exposure=exposure, value=value,
dispersion=dispersion)[0]
else:
raise NotImplementedError(f"method_var {method_var} not recognized")
std_null = np.sqrt(v0)
std_alt = np.sqrt(v1)
es = np.log(rate1 / rate2) - np.log(value)
pow_ = normal_power_het(es, nobs1, alpha, std_null=std_null,
std_alternative=std_alt,
alternative=alternative)
if return_results:
res = HolderTuple(
power=pow_,
std_null=std_null,
std_alt=std_alt,
nobs1=nobs1,
nobs2=nobs2,
nobs_ratio=nobs_ratio,
alpha=alpha,
tuple_=("power",), # override default
)
return res
return pow_ | Power of test of ratio of 2 independent negative binomial rates.
Parameters
----------
rate1 : float
Poisson rate for the first sample, treatment group, under the
alternative hypothesis.
rate2 : float
Poisson rate for the second sample, reference group, under the
alternative hypothesis.
nobs1 : float or int
Number of observations in sample 1.
nobs_ratio : float
Sample size ratio, nobs2 = nobs_ratio * nobs1.
exposure : float
Exposure for each observation. Total exposure is nobs1 * exposure
and nobs2 * exposure.
value : float
Rate ratio, rate1 / rate2, under the null hypothesis.
alpha : float in interval (0,1)
Significance level, e.g. 0.05, is the probability of a type I
error, that is wrong rejections if the Null Hypothesis is true.
dispersion : float >= 0.
Dispersion parameter for Negative Binomial distribution.
The Poisson limiting case corresponds to ``dispersion=0``.
    method_var : {"score", "alt", "ftotal"}
        The variance of the test statistic for the null hypothesis given the
        rates under the alternative can be equal to the rates under the
        alternative, ``method_var="alt"``, estimated under the constraint of
        the null hypothesis, ``method_var="score"``, or based on a moment
        constrained estimate, ``method_var="ftotal"``. See references.
alternative : string, 'two-sided' (default), 'larger', 'smaller'
Alternative hypothesis whether the power is calculated for a
two-sided (default) or one sided test. The one-sided test can be
either 'larger', 'smaller'.
return_results : bool
If true, then a results instance with extra information is returned,
otherwise only the computed power is returned.
Returns
-------
results : results instance or float
If return_results is False, then only the power is returned.
If return_results is True, then a results instance with the
information in attributes is returned.
power : float
Power of the test, e.g. 0.8, is one minus the probability of a
type II error. Power is the probability that the test correctly
rejects the Null Hypothesis if the Alternative Hypothesis is true.
Other attributes in results instance include :
std_null
standard error of difference under the null hypothesis (without
sqrt(nobs1))
std_alt
standard error of difference under the alternative hypothesis
(without sqrt(nobs1))
References
----------
.. [1] Zhu, Haiyuan. 2017. “Sample Size Calculation for Comparing Two
Poisson or Negative Binomial Rates in Noninferiority or Equivalence
Trials.” Statistics in Biopharmaceutical Research, March.
https://doi.org/10.1080/19466315.2016.1225594
.. [2] Zhu, Haiyuan, and Hassan Lakkis. 2014. “Sample Size Calculation for
Comparing Two Negative Binomial Rates.” Statistics in Medicine 33 (3):
376–87. https://doi.org/10.1002/sim.5947.
.. [3] PASS documentation | power_negbin_ratio_2indep | python | statsmodels/statsmodels | statsmodels/stats/rates.py | https://github.com/statsmodels/statsmodels/blob/master/statsmodels/stats/rates.py | BSD-3-Clause |
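A usage sketch (illustrative inputs) exercising the moment-based ``ftotal`` variance option documented above:

from statsmodels.stats.rates import power_negbin_ratio_2indep

res = power_negbin_ratio_2indep(
    rate1=1.5, rate2=1.0, nobs1=120,
    nobs_ratio=1, exposure=1, value=1, alpha=0.05,
    dispersion=0.5, alternative="two-sided",
    method_var="ftotal", return_results=True,
)
print(res.power)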
def power_equivalence_neginb_2indep(rate1, rate2, nobs1,
low, upp, nobs_ratio=1,
exposure=1, alpha=0.05, dispersion=0,
method_var="alt",
return_results=False):
"""
Power of equivalence test of ratio of 2 indep. negative binomial rates.
Parameters
----------
rate1 : float
Poisson rate for the first sample, treatment group, under the
alternative hypothesis.
rate2 : float
Poisson rate for the second sample, reference group, under the
alternative hypothesis.
nobs1 : float or int
Number of observations in sample 1.
low : float
Lower equivalence margin for the rate ratio, rate1 / rate2.
upp : float
Upper equivalence margin for the rate ratio, rate1 / rate2.
    nobs_ratio : float
        Sample size ratio, nobs2 = nobs_ratio * nobs1.
    exposure : float
        Exposure for each observation. Total exposure is nobs1 * exposure
        and nobs2 * exposure.
alpha : float in interval (0,1)
Significance level, e.g. 0.05, is the probability of a type I
error, that is wrong rejections if the Null Hypothesis is true.
dispersion : float >= 0.
Dispersion parameter for Negative Binomial distribution.
The Poisson limiting case corresponds to ``dispersion=0``.
    method_var : {"score", "alt", "ftotal"}
        The variance of the test statistic for the null hypothesis given the
        rates under the alternative can be equal to the rates under the
        alternative, ``method_var="alt"``, estimated under the constraint of
        the null hypothesis, ``method_var="score"``, or based on a moment
        constrained estimate, ``method_var="ftotal"``. See references.
return_results : bool
If true, then a results instance with extra information is returned,
otherwise only the computed power is returned.
Returns
-------
results : results instance or float
If return_results is False, then only the power is returned.
If return_results is True, then a results instance with the
information in attributes is returned.
power : float
Power of the test, e.g. 0.8, is one minus the probability of a
type II error. Power is the probability that the test correctly
rejects the Null Hypothesis if the Alternative Hypothesis is true.
Other attributes in results instance include :
std_null
standard error of difference under the null hypothesis (without
sqrt(nobs1))
std_alt
standard error of difference under the alternative hypothesis
(without sqrt(nobs1))
References
----------
.. [1] Zhu, Haiyuan. 2017. “Sample Size Calculation for Comparing Two
Poisson or Negative Binomial Rates in Noninferiority or Equivalence
Trials.” Statistics in Biopharmaceutical Research, March.
https://doi.org/10.1080/19466315.2016.1225594
.. [2] Zhu, Haiyuan, and Hassan Lakkis. 2014. “Sample Size Calculation for
Comparing Two Negative Binomial Rates.” Statistics in Medicine 33 (3):
376–87. https://doi.org/10.1002/sim.5947.
.. [3] PASS documentation
"""
rate1, rate2, nobs1 = map(np.asarray, [rate1, rate2, nobs1])
nobs2 = nobs_ratio * nobs1
v1 = ((1 / rate2 + 1 / (nobs_ratio * rate1)) / exposure +
(1 + nobs_ratio) / nobs_ratio * dispersion)
if method_var == "alt":
v0_low = v0_upp = v1
elif method_var == "ftotal":
v0_low = (1 + low * nobs_ratio)**2 / (
exposure * nobs_ratio * low * (rate1 + nobs_ratio * rate2))
v0_low += (1 + nobs_ratio) / nobs_ratio * dispersion
v0_upp = (1 + upp * nobs_ratio)**2 / (
exposure * nobs_ratio * upp * (rate1 + nobs_ratio * rate2))
v0_upp += (1 + nobs_ratio) / nobs_ratio * dispersion
elif method_var == "score":
v0_low = _var_cmle_negbin(rate1, rate2, nobs_ratio,
exposure=exposure, value=low,
dispersion=dispersion)[0]
v0_upp = _var_cmle_negbin(rate1, rate2, nobs_ratio,
exposure=exposure, value=upp,
dispersion=dispersion)[0]
else:
raise NotImplementedError(f"method_var {method_var} not recognized")
es_low = np.log(rate1 / rate2) - np.log(low)
es_upp = np.log(rate1 / rate2) - np.log(upp)
std_null_low = np.sqrt(v0_low)
std_null_upp = np.sqrt(v0_upp)
std_alternative = np.sqrt(v1)
pow_ = _power_equivalence_het(es_low, es_upp, nobs1, alpha=alpha,
std_null_low=std_null_low,
std_null_upp=std_null_upp,
std_alternative=std_alternative)
if return_results:
res = HolderTuple(
power=pow_[0],
            power_margins=pow_[1:],
std_null_low=std_null_low,
std_null_upp=std_null_upp,
std_alt=std_alternative,
nobs1=nobs1,
nobs2=nobs2,
nobs_ratio=nobs_ratio,
alpha=alpha,
tuple_=("power",), # override default
)
return res
else:
return pow_[0] | Power of equivalence test of ratio of 2 indep. negative binomial rates.
Parameters
----------
rate1 : float
Poisson rate for the first sample, treatment group, under the
alternative hypothesis.
rate2 : float
Poisson rate for the second sample, reference group, under the
alternative hypothesis.
nobs1 : float or int
Number of observations in sample 1.
low : float
Lower equivalence margin for the rate ratio, rate1 / rate2.
upp : float
Upper equivalence margin for the rate ratio, rate1 / rate2.
    nobs_ratio : float
        Sample size ratio, nobs2 = nobs_ratio * nobs1.
    exposure : float
        Exposure for each observation. Total exposure is nobs1 * exposure
        and nobs2 * exposure.
alpha : float in interval (0,1)
Significance level, e.g. 0.05, is the probability of a type I
error, that is wrong rejections if the Null Hypothesis is true.
dispersion : float >= 0.
Dispersion parameter for Negative Binomial distribution.
The Poisson limiting case corresponds to ``dispersion=0``.
    method_var : {"score", "alt", "ftotal"}
        The variance of the test statistic for the null hypothesis given the
        rates under the alternative can be equal to the rates under the
        alternative, ``method_var="alt"``, estimated under the constraint of
        the null hypothesis, ``method_var="score"``, or based on a moment
        constrained estimate, ``method_var="ftotal"``. See references.
return_results : bool
If true, then a results instance with extra information is returned,
otherwise only the computed power is returned.
Returns
-------
results : results instance or float
If return_results is False, then only the power is returned.
If return_results is True, then a results instance with the
information in attributes is returned.
power : float
Power of the test, e.g. 0.8, is one minus the probability of a
type II error. Power is the probability that the test correctly
rejects the Null Hypothesis if the Alternative Hypothesis is true.
Other attributes in results instance include :
std_null
standard error of difference under the null hypothesis (without
sqrt(nobs1))
std_alt
standard error of difference under the alternative hypothesis
(without sqrt(nobs1))
References
----------
.. [1] Zhu, Haiyuan. 2017. “Sample Size Calculation for Comparing Two
Poisson or Negative Binomial Rates in Noninferiority or Equivalence
Trials.” Statistics in Biopharmaceutical Research, March.
https://doi.org/10.1080/19466315.2016.1225594
.. [2] Zhu, Haiyuan, and Hassan Lakkis. 2014. “Sample Size Calculation for
Comparing Two Negative Binomial Rates.” Statistics in Medicine 33 (3):
376–87. https://doi.org/10.1002/sim.5947.
.. [3] PASS documentation | power_equivalence_neginb_2indep | python | statsmodels/statsmodels | statsmodels/stats/rates.py | https://github.com/statsmodels/statsmodels/blob/master/statsmodels/stats/rates.py | BSD-3-Clause |
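A usage sketch with illustrative inputs:

from statsmodels.stats.rates import power_equivalence_neginb_2indep

power = power_equivalence_neginb_2indep(
    rate1=1.0, rate2=1.0, nobs1=300, low=0.8, upp=1.25,
    nobs_ratio=1, exposure=1, alpha=0.05,
    dispersion=0.25, method_var="ftotal", return_results=False,
)
print(power)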
def test_chisquare_binning(counts, expected, sort_var=None, bins=10,
df=None, ordered=False, sort_method="quicksort",
alpha_nc=0.05):
"""chisquare gof test with binning of data, Hosmer-Lemeshow type
``observed`` and ``expected`` are observation specific and should have
observations in rows and choices in columns
Parameters
----------
counts : array_like
Observed frequency, i.e. counts for all choices
expected : array_like
Expected counts or probability. If expected are counts, then they
need to sum to the same total count as the sum of observed.
If those sums are unequal and all expected values are smaller or equal
to 1, then they are interpreted as probabilities and will be rescaled
to match counts.
sort_var : array_like
1-dimensional array for binning. Groups will be formed according to
        quantiles of the sorted array ``sort_var``, so that groups have
        equal or approximately equal size.
Returns
-------
    HolderTuple instance
This instance contains the results of the chisquare test and some
information about the data
- statistic : chisquare statistic of the goodness-of-fit test
- pvalue : pvalue of the chisquare test
        - df : degrees of freedom of the test
Notes
-----
Degrees of freedom for Hosmer-Lemeshow tests are given by
g groups, c choices
- binary: `df = (g - 2)` for insample,
Stata uses `df = g` for outsample
    - multinomial: `df = (g - 2) * (c - 1)`, reduces to (g - 2) for binary c=2,
(Fagerland, Hosmer, Bofin SIM 2008)
- ordinal: `df = (g - 2) * (c - 1) + (c - 2)`, reduces to (g-2) for c=2,
(Hosmer, ... ?)
Note: If there are ties in the ``sort_var`` array, then the split of
observations into groups will depend on the sort algorithm.
"""
observed = np.asarray(counts)
expected = np.asarray(expected)
    n_observed = observed.sum()
n_expected = expected.sum()
if not np.allclose(n_observed, n_expected, atol=1e-13):
if np.max(expected) < 1 + 1e-13:
# expected seems to be probability, warn and rescale
import warnings
warnings.warn("sum of expected and of observed differ, "
"rescaling ``expected``")
expected = expected / n_expected * n_observed
else:
# expected doesn't look like fractions or probabilities
raise ValueError("total counts of expected and observed differ")
# k = 1 if observed.ndim == 1 else observed.shape[1]
if sort_var is not None:
argsort = np.argsort(sort_var, kind=sort_method)
else:
argsort = np.arange(observed.shape[0])
# indices = [arr for arr in np.array_split(argsort, bins, axis=0)]
indices = np.array_split(argsort, bins, axis=0)
# in one loop, observed expected in last dimension, too messy,
# freqs_probs = np.array([np.vstack([observed[idx].mean(0),
# expected[idx].mean(0)]).T
# for idx in indices])
freqs = np.array([observed[idx].sum(0) for idx in indices])
probs = np.array([expected[idx].sum(0) for idx in indices])
# chisquare test
resid_pearson = (freqs - probs) / np.sqrt(probs)
chi2_stat_groups = ((freqs - probs)**2 / probs).sum(1)
chi2_stat = chi2_stat_groups.sum()
if df is None:
g, c = freqs.shape
if ordered is True:
df = (g - 2) * (c - 1) + (c - 2)
else:
df = (g - 2) * (c - 1)
pvalue = stats.chi2.sf(chi2_stat, df)
noncentrality = _noncentrality_chisquare(chi2_stat, df, alpha=alpha_nc)
res = HolderTuple(statistic=chi2_stat,
pvalue=pvalue,
df=df,
freqs=freqs,
probs=probs,
noncentrality=noncentrality,
resid_pearson=resid_pearson,
chi2_stat_groups=chi2_stat_groups,
indices=indices
)
return res | chisquare gof test with binning of data, Hosmer-Lemeshow type
``observed`` and ``expected`` are observation specific and should have
observations in rows and choices in columns
Parameters
----------
counts : array_like
Observed frequency, i.e. counts for all choices
expected : array_like
Expected counts or probability. If expected are counts, then they
need to sum to the same total count as the sum of observed.
If those sums are unequal and all expected values are smaller or equal
to 1, then they are interpreted as probabilities and will be rescaled
to match counts.
sort_var : array_like
1-dimensional array for binning. Groups will be formed according to
        quantiles of the sorted array ``sort_var``, so that groups have
        equal or approximately equal size.
Returns
-------
    HolderTuple instance
This instance contains the results of the chisquare test and some
information about the data
- statistic : chisquare statistic of the goodness-of-fit test
- pvalue : pvalue of the chisquare test
        - df : degrees of freedom of the test
Notes
-----
Degrees of freedom for Hosmer-Lemeshow tests are given by
g groups, c choices
- binary: `df = (g - 2)` for insample,
Stata uses `df = g` for outsample
    - multinomial: `df = (g - 2) * (c - 1)`, reduces to (g - 2) for binary c=2,
(Fagerland, Hosmer, Bofin SIM 2008)
- ordinal: `df = (g - 2) * (c - 1) + (c - 2)`, reduces to (g-2) for c=2,
(Hosmer, ... ?)
Note: If there are ties in the ``sort_var`` array, then the split of
observations into groups will depend on the sort algorithm. | test_chisquare_binning | python | statsmodels/statsmodels | statsmodels/stats/diagnostic_gen.py | https://github.com/statsmodels/statsmodels/blob/master/statsmodels/stats/diagnostic_gen.py | BSD-3-Clause |
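A sketch with simulated multinomial data; the seed, probabilities, and the uniform stand-in for a fitted sort variable are all made up:

import numpy as np
from statsmodels.stats.diagnostic_gen import test_chisquare_binning

rng = np.random.default_rng(1234)
n, probs = 500, np.array([0.2, 0.5, 0.3])
counts = rng.multinomial(1, probs, size=n)   # one-hot observed choices
expected = np.tile(probs, (n, 1))            # predicted probabilities per row
sort_var = rng.uniform(size=n)               # stand-in for a fitted index
res = test_chisquare_binning(counts, expected, sort_var=sort_var, bins=10)
print(res.statistic, res.pvalue, res.df)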
def prob_larger_ordinal_choice(prob):
"""probability that observed category is larger than distribution prob
This is a helper function for Ordinal models, where endog is a 1-dim
categorical variable and predicted probabilities are 2-dimensional with
observations in rows and choices in columns.
    Parameters
    ----------
prob : array_like
Expected probabilities for ordinal choices, e.g. from prediction of
an ordinal model with observations in rows and choices in columns.
Returns
-------
cdf_mid : ndarray
mid cdf, i.e ``P(x < y) + 0.5 P(x=y)``
r : ndarray
Probability residual ``P(x > y) - P(x < y)`` for all possible choices.
Computed as ``r = cdf_mid * 2 - 1``
References
----------
.. [2] Li, Chun, and Bryan E. Shepherd. 2012. “A New Residual for Ordinal
Outcomes.” Biometrika 99 (2): 473–80.
See Also
--------
`statsmodels.stats.nonparametric.rank_compare_2ordinal`
"""
# similar to `nonparametric rank_compare_2ordinal`
prob = np.asarray(prob)
cdf = prob.cumsum(-1)
if cdf.ndim == 1:
cdf_ = np.concatenate(([0], cdf))
elif cdf.ndim == 2:
cdf_ = np.concatenate((np.zeros((len(cdf), 1)), cdf), axis=1)
# r_1 = cdf_[..., 1:] + cdf_[..., :-1] - 1
cdf_mid = (cdf_[..., 1:] + cdf_[..., :-1]) / 2
r = cdf_mid * 2 - 1
return cdf_mid, r | probability that observed category is larger than distribution prob
This is a helper function for Ordinal models, where endog is a 1-dim
categorical variable and predicted probabilities are 2-dimensional with
observations in rows and choices in columns.
    Parameters
    ----------
prob : array_like
Expected probabilities for ordinal choices, e.g. from prediction of
an ordinal model with observations in rows and choices in columns.
Returns
-------
cdf_mid : ndarray
mid cdf, i.e ``P(x < y) + 0.5 P(x=y)``
r : ndarray
Probability residual ``P(x > y) - P(x < y)`` for all possible choices.
Computed as ``r = cdf_mid * 2 - 1``
References
----------
.. [2] Li, Chun, and Bryan E. Shepherd. 2012. “A New Residual for Ordinal
Outcomes.” Biometrika 99 (2): 473–80.
See Also
--------
`statsmodels.stats.nonparametric.rank_compare_2ordinal` | prob_larger_ordinal_choice | python | statsmodels/statsmodels | statsmodels/stats/diagnostic_gen.py | https://github.com/statsmodels/statsmodels/blob/master/statsmodels/stats/diagnostic_gen.py | BSD-3-Clause |
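A quick sketch on two made-up probability rows:

import numpy as np
from statsmodels.stats.diagnostic_gen import prob_larger_ordinal_choice

prob = np.array([[0.2, 0.5, 0.3],
                 [0.6, 0.3, 0.1]])
cdf_mid, resid = prob_larger_ordinal_choice(prob)
print(cdf_mid)  # mid-cdf per choice, P(x < y) + 0.5 * P(x = y)
print(resid)    # probability-scale residual, cdf_mid * 2 - 1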
def prob_larger_2ordinal(probs1, probs2):
"""Stochastically large probability for two ordinal distributions
Computes Pr(x1 > x2) + 0.5 * Pr(x1 = x2) for two ordered multinomial
(ordinal) distributed random variables x1 and x2.
This is vectorized with choices along last axis.
Broadcasting if freq2 is 1-dim also seems to work correctly.
Returns
-------
prob1 : float
Probability that random draw from distribution 1 is larger than a
random draw from distribution 2. Pr(x1 > x2) + 0.5 * Pr(x1 = x2)
prob2 : float
prob2 = 1 - prob1 = Pr(x1 < x2) + 0.5 * Pr(x1 = x2)
"""
# count1 = np.asarray(count1)
# count2 = np.asarray(count2)
# nobs1, nobs2 = count1.sum(), count2.sum()
# freq1 = count1 / nobs1
# freq2 = count2 / nobs2
# if freq1.ndim == 1:
# freq1_ = np.concatenate(([0], freq1))
# elif freq1.ndim == 2:
# freq1_ = np.concatenate((np.zeros((len(freq1), 1)), freq1), axis=1)
# if freq2.ndim == 1:
# freq2_ = np.concatenate(([0], freq2))
# elif freq2.ndim == 2:
# freq2_ = np.concatenate((np.zeros((len(freq2), 1)), freq2), axis=1)
freq1 = np.asarray(probs1)
freq2 = np.asarray(probs2)
# add zero at beginning of choices for cdf computation
freq1_ = np.concatenate((np.zeros(freq1.shape[:-1] + (1,)), freq1),
axis=-1)
freq2_ = np.concatenate((np.zeros(freq2.shape[:-1] + (1,)), freq2),
axis=-1)
cdf1 = freq1_.cumsum(axis=-1)
cdf2 = freq2_.cumsum(axis=-1)
# mid rank cdf
cdfm1 = (cdf1[..., 1:] + cdf1[..., :-1]) / 2
cdfm2 = (cdf2[..., 1:] + cdf2[..., :-1]) / 2
prob1 = (cdfm2 * freq1).sum(-1)
prob2 = (cdfm1 * freq2).sum(-1)
return prob1, prob2 | Stochastically large probability for two ordinal distributions
Computes Pr(x1 > x2) + 0.5 * Pr(x1 = x2) for two ordered multinomial
(ordinal) distributed random variables x1 and x2.
This is vectorized with choices along last axis.
Broadcasting if freq2 is 1-dim also seems to work correctly.
Returns
-------
prob1 : float
Probability that random draw from distribution 1 is larger than a
random draw from distribution 2. Pr(x1 > x2) + 0.5 * Pr(x1 = x2)
prob2 : float
prob2 = 1 - prob1 = Pr(x1 < x2) + 0.5 * Pr(x1 = x2) | prob_larger_2ordinal | python | statsmodels/statsmodels | statsmodels/stats/diagnostic_gen.py | https://github.com/statsmodels/statsmodels/blob/master/statsmodels/stats/diagnostic_gen.py | BSD-3-Clause |
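A sketch on two made-up ordinal distributions; the two returned probabilities sum to one:

import numpy as np
from statsmodels.stats.diagnostic_gen import prob_larger_2ordinal

p1 = np.array([0.1, 0.3, 0.6])   # mass shifted toward higher categories
p2 = np.array([0.4, 0.4, 0.2])
prob1, prob2 = prob_larger_2ordinal(p1, p2)
print(prob1, prob2)              # prob1 > 0.5 here, and prob1 + prob2 == 1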
def cov_multinomial(probs):
"""covariance matrix of multinomial distribution
This is vectorized with choices along last axis.
cov = diag(probs) - outer(probs, probs)
"""
k = probs.shape[-1]
di = np.diag_indices(k, 2)
cov = probs[..., None] * probs[..., None, :]
cov *= - 1
cov[..., di[0], di[1]] += probs
return cov | covariance matrix of multinomial distribution
This is vectorized with choices along last axis.
cov = diag(probs) - outer(probs, probs) | cov_multinomial | python | statsmodels/statsmodels | statsmodels/stats/diagnostic_gen.py | https://github.com/statsmodels/statsmodels/blob/master/statsmodels/stats/diagnostic_gen.py | BSD-3-Clause |
def var_multinomial(probs):
"""variance of multinomial distribution
var = probs * (1 - probs)
"""
var = probs * (1 - probs)
return var | variance of multinomial distribution
var = probs * (1 - probs) | var_multinomial | python | statsmodels/statsmodels | statsmodels/stats/diagnostic_gen.py | https://github.com/statsmodels/statsmodels/blob/master/statsmodels/stats/diagnostic_gen.py | BSD-3-Clause |
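A sketch checking that the diagonal of ``cov_multinomial`` matches ``var_multinomial``:

import numpy as np
from statsmodels.stats.diagnostic_gen import cov_multinomial, var_multinomial

probs = np.array([0.2, 0.5, 0.3])
cov = cov_multinomial(probs)          # diag(p) - outer(p, p)
print(np.allclose(np.diag(cov), var_multinomial(probs)))  # True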
def _critvals(self, n):
"""
Rows of the table, linearly interpolated for given sample size
Parameters
----------
n : float
sample size, second parameter of the table
Returns
-------
critv : ndarray, 1d
critical values (ppf) corresponding to a row of the table
Notes
-----
This is used in two step interpolation, or if we want to know the
critical values for all alphas for any sample size that we can obtain
through interpolation
"""
if n > self.max_size:
if self.asymptotic is not None:
cv = self.asymptotic(n)
else:
raise ValueError('n is above max(size) and no asymptotic '
                                 'distribution is provided')
else:
            cv = np.asarray([p(n) for p in self.polyn])  # array, so blending below works
if n > self.min_nobs:
w = (n - self.min_nobs) / (self.max_nobs - self.min_nobs)
w = min(1.0, w)
a_cv = self.asymptotic(n)
cv = w * a_cv + (1 - w) * cv
return cv | Rows of the table, linearly interpolated for given sample size
Parameters
----------
n : float
sample size, second parameter of the table
Returns
-------
critv : ndarray, 1d
critical values (ppf) corresponding to a row of the table
Notes
-----
This is used in two step interpolation, or if we want to know the
critical values for all alphas for any sample size that we can obtain
through interpolation | _critvals | python | statsmodels/statsmodels | statsmodels/stats/tabledist.py | https://github.com/statsmodels/statsmodels/blob/master/statsmodels/stats/tabledist.py | BSD-3-Clause |
def prob(self, x, n):
"""
        Find pvalues by interpolation, i.e. the cdf evaluated at x
Returns extreme probabilities, 0.001 and 0.2, for out of range
Parameters
----------
x : array_like
observed value, assumed to follow the distribution in the table
n : float
sample size, second parameter of the table
Returns
-------
prob : array_like
This is the probability for each value of x, the p-value in
underlying distribution is for a statistical test.
"""
critv = self._critvals(n)
alpha = self.alpha
if self.signcrit < 1:
# reverse if critv is decreasing
critv, alpha = critv[::-1], alpha[::-1]
# now critv is increasing
if np.size(x) == 1:
if x < critv[0]:
return alpha[0]
elif x > critv[-1]:
return alpha[-1]
return interp1d(critv, alpha)(x)[()]
else:
# vectorized
cond_low = (x < critv[0])
cond_high = (x > critv[-1])
cond_interior = ~np.logical_or(cond_low, cond_high)
probs = np.nan * np.ones(x.shape) # mistake if nan left
probs[cond_low] = alpha[0]
            probs[cond_high] = alpha[-1]
probs[cond_interior] = interp1d(critv, alpha)(x[cond_interior])
        return probs | Find pvalues by interpolation, i.e. the cdf evaluated at x
Returns extreme probabilities, 0.001 and 0.2, for out of range
Parameters
----------
x : array_like
observed value, assumed to follow the distribution in the table
n : float
sample size, second parameter of the table
Returns
-------
prob : array_like
This is the probability for each value of x, the p-value in
underlying distribution is for a statistical test. | prob | python | statsmodels/statsmodels | statsmodels/stats/tabledist.py | https://github.com/statsmodels/statsmodels/blob/master/statsmodels/stats/tabledist.py | BSD-3-Clause |
def crit(self, prob, n):
"""
Returns interpolated quantiles, similar to ppf or isf
        uses two sequential 1d interpolations, first by n then by prob
Parameters
----------
prob : array_like
probabilities corresponding to the definition of table columns
n : int or float
sample size, second parameter of the table
Returns
-------
ppf : array_like
critical values with same shape as prob
"""
prob = np.asarray(prob)
alpha = self.alpha
critv = self._critvals(n)
# vectorized
cond_ilow = (prob > alpha[0])
cond_ihigh = (prob < alpha[-1])
        cond_interior = np.logical_and(cond_ilow, cond_ihigh)  # inside table bounds
# scalar
if prob.size == 1:
if cond_interior:
return interp1d(alpha, critv)(prob)
else:
return np.nan
# vectorized
quantile = np.nan * np.ones(prob.shape) # nans for outside
quantile[cond_interior] = interp1d(alpha, critv)(prob[cond_interior])
return quantile | Returns interpolated quantiles, similar to ppf or isf
        uses two sequential 1d interpolations, first by n then by prob
Parameters
----------
prob : array_like
probabilities corresponding to the definition of table columns
n : int or float
sample size, second parameter of the table
Returns
-------
ppf : array_like
critical values with same shape as prob | crit | python | statsmodels/statsmodels | statsmodels/stats/tabledist.py | https://github.com/statsmodels/statsmodels/blob/master/statsmodels/stats/tabledist.py | BSD-3-Clause |
def crit3(self, prob, n):
"""
Returns interpolated quantiles, similar to ppf or isf
uses Rbf to interpolate critical values as function of `prob` and `n`
Parameters
----------
prob : array_like
probabilities corresponding to the definition of table columns
n : int or float
sample size, second parameter of the table
Returns
-------
ppf : array_like
critical values with same shape as prob, returns nan for arguments
that are outside of the table bounds
"""
prob = np.asarray(prob)
alpha = self.alpha
# vectorized
cond_ilow = (prob > alpha[0])
cond_ihigh = (prob < alpha[-1])
        cond_interior = np.logical_and(cond_ilow, cond_ihigh)  # inside table bounds
# scalar
if prob.size == 1:
if cond_interior:
return self.polyrbf(n, prob)
else:
return np.nan
# vectorized
quantile = np.nan * np.ones(prob.shape) # nans for outside
quantile[cond_interior] = self.polyrbf(n, prob[cond_interior])
return quantile | Returns interpolated quantiles, similar to ppf or isf
uses Rbf to interpolate critical values as function of `prob` and `n`
Parameters
----------
prob : array_like
probabilities corresponding to the definition of table columns
n : int or float
sample size, second parameter of the table
Returns
-------
ppf : array_like
critical values with same shape as prob, returns nan for arguments
that are outside of the table bounds | crit3 | python | statsmodels/statsmodels | statsmodels/stats/tabledist.py | https://github.com/statsmodels/statsmodels/blob/master/statsmodels/stats/tabledist.py | BSD-3-Clause |
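The three methods above belong to ``TableDist`` in statsmodels/stats/tabledist.py. A sketch assuming the constructor signature ``TableDist(alpha, size, crit_table)`` with ``alpha`` increasing; the 3x3 table of critical values is made up:

import numpy as np
from statsmodels.stats.tabledist import TableDist

alpha = np.array([0.01, 0.05, 0.10])          # table columns
size = np.array([10, 20, 50])                 # table rows (sample sizes)
crit_table = np.array([[0.489, 0.409, 0.369],
                       [0.352, 0.294, 0.265],
                       [0.226, 0.189, 0.170]])
td = TableDist(alpha, size, crit_table)
print(td.crit(0.05, 15))   # critical value interpolated between n=10 and n=20
print(td.prob(0.30, 15))   # p-value for an observed statistic of 0.30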
def rankdata_2samp(x1, x2):
"""Compute midranks for two samples
Parameters
----------
x1, x2 : array_like
Original data for two samples that will be converted to midranks.
Returns
-------
rank1 : ndarray
Midranks of the first sample in the pooled sample.
rank2 : ndarray
Midranks of the second sample in the pooled sample.
ranki1 : ndarray
Internal midranks of the first sample.
ranki2 : ndarray
Internal midranks of the second sample.
"""
x1 = np.asarray(x1)
x2 = np.asarray(x2)
nobs1 = len(x1)
nobs2 = len(x2)
if nobs1 == 0 or nobs2 == 0:
raise ValueError("one sample has zero length")
x_combined = np.concatenate((x1, x2))
if x_combined.ndim > 1:
rank = np.apply_along_axis(rankdata, 0, x_combined)
else:
rank = rankdata(x_combined) # no axis in older scipy
rank1 = rank[:nobs1]
rank2 = rank[nobs1:]
if x_combined.ndim > 1:
ranki1 = np.apply_along_axis(rankdata, 0, x1)
ranki2 = np.apply_along_axis(rankdata, 0, x2)
else:
ranki1 = rankdata(x1)
ranki2 = rankdata(x2)
return rank1, rank2, ranki1, ranki2 | Compute midranks for two samples
Parameters
----------
x1, x2 : array_like
Original data for two samples that will be converted to midranks.
Returns
-------
rank1 : ndarray
Midranks of the first sample in the pooled sample.
rank2 : ndarray
Midranks of the second sample in the pooled sample.
ranki1 : ndarray
Internal midranks of the first sample.
ranki2 : ndarray
Internal midranks of the second sample. | rankdata_2samp | python | statsmodels/statsmodels | statsmodels/stats/nonparametric.py | https://github.com/statsmodels/statsmodels/blob/master/statsmodels/stats/nonparametric.py | BSD-3-Clause |
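A sketch with a small tied sample, showing pooled versus internal midranks:

import numpy as np
from statsmodels.stats.nonparametric import rankdata_2samp

x1 = np.array([1, 3, 3, 4])
x2 = np.array([2, 3, 5])
rank1, rank2, ranki1, ranki2 = rankdata_2samp(x1, x2)
print(rank1)   # pooled midranks of x1: [1. 4. 4. 6.]
print(ranki1)  # internal midranks of x1: [1.  2.5 2.5 4. ]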
def conf_int(self, value=None, alpha=0.05, alternative="two-sided"):
"""
Confidence interval for probability that sample 1 has larger values
Confidence interval is for the shifted probability
P(x1 > x2) + 0.5 * P(x1 = x2) - value
Parameters
----------
value : float
Value, default 0, shifts the confidence interval,
e.g. ``value=0.5`` centers the confidence interval at zero.
alpha : float
Significance level for the confidence interval, coverage is
``1-alpha``
alternative : str
The alternative hypothesis, H1, has to be one of the following
* 'two-sided' : H1: ``prob - value`` not equal to 0.
* 'larger' : H1: ``prob - value > 0``
* 'smaller' : H1: ``prob - value < 0``
Returns
-------
lower : float or ndarray
Lower confidence limit. This is -inf for the one-sided alternative
"smaller".
upper : float or ndarray
Upper confidence limit. This is inf for the one-sided alternative
"larger".
"""
p0 = value
if p0 is None:
p0 = 0
diff = self.prob1 - p0
std_diff = np.sqrt(self.var / self.nobs)
if self.use_t is False:
return _zconfint_generic(diff, std_diff, alpha, alternative)
else:
return _tconfint_generic(diff, std_diff, self.df, alpha,
alternative) | Confidence interval for probability that sample 1 has larger values
Confidence interval is for the shifted probability
P(x1 > x2) + 0.5 * P(x1 = x2) - value
Parameters
----------
value : float
Value, default 0, shifts the confidence interval,
e.g. ``value=0.5`` centers the confidence interval at zero.
alpha : float
Significance level for the confidence interval, coverage is
``1-alpha``
alternative : str
The alternative hypothesis, H1, has to be one of the following
* 'two-sided' : H1: ``prob - value`` not equal to 0.
* 'larger' : H1: ``prob - value > 0``
* 'smaller' : H1: ``prob - value < 0``
Returns
-------
lower : float or ndarray
Lower confidence limit. This is -inf for the one-sided alternative
"smaller".
upper : float or ndarray
Upper confidence limit. This is inf for the one-sided alternative
"larger". | conf_int | python | statsmodels/statsmodels | statsmodels/stats/nonparametric.py | https://github.com/statsmodels/statsmodels/blob/master/statsmodels/stats/nonparametric.py | BSD-3-Clause |
def test_prob_superior(self, value=0.5, alternative="two-sided"):
"""test for superiority probability
H0: P(x1 > x2) + 0.5 * P(x1 = x2) = value
The alternative is that the probability is either not equal, larger
or smaller than the null-value depending on the chosen alternative.
Parameters
----------
value : float
Value of the probability under the Null hypothesis.
alternative : str
The alternative hypothesis, H1, has to be one of the following
* 'two-sided' : H1: ``prob - value`` not equal to 0.
* 'larger' : H1: ``prob - value > 0``
* 'smaller' : H1: ``prob - value < 0``
Returns
-------
res : HolderTuple
HolderTuple instance with the following main attributes
statistic : float
Test statistic for z- or t-test
pvalue : float
Pvalue of the test based on either normal or t distribution.
"""
p0 = value # alias
# diff = self.prob1 - p0 # for reporting, not used in computation
# TODO: use var_prob
std_diff = np.sqrt(self.var / self.nobs)
# corresponds to a one-sample test and either p0 or diff could be used
if not self.use_t:
stat, pv = _zstat_generic(self.prob1, p0, std_diff, alternative,
diff=0)
distr = "normal"
else:
stat, pv = _tstat_generic(self.prob1, p0, std_diff, self.df,
alternative, diff=0)
distr = "t"
res = HolderTuple(statistic=stat,
pvalue=pv,
df=self.df,
distribution=distr
)
return res | test for superiority probability
H0: P(x1 > x2) + 0.5 * P(x1 = x2) = value
The alternative is that the probability is either not equal, larger
or smaller than the null-value depending on the chosen alternative.
Parameters
----------
value : float
Value of the probability under the Null hypothesis.
alternative : str
The alternative hypothesis, H1, has to be one of the following
* 'two-sided' : H1: ``prob - value`` not equal to 0.
* 'larger' : H1: ``prob - value > 0``
* 'smaller' : H1: ``prob - value < 0``
Returns
-------
res : HolderTuple
HolderTuple instance with the following main attributes
statistic : float
Test statistic for z- or t-test
pvalue : float
Pvalue of the test based on either normal or t distribution. | test_prob_superior | python | statsmodels/statsmodels | statsmodels/stats/nonparametric.py | https://github.com/statsmodels/statsmodels/blob/master/statsmodels/stats/nonparametric.py | BSD-3-Clause |
def tost_prob_superior(self, low, upp):
'''test of stochastic (non-)equivalence of p = P(x1 > x2)
Null hypothesis: p < low or p > upp
Alternative hypothesis: low < p < upp
where p is the probability that a random draw from the population of
the first sample has a larger value than a random draw from the
population of the second sample, specifically
p = P(x1 > x2) + 0.5 * P(x1 = x2)
If the pvalue is smaller than a threshold, say 0.05, then we reject the
hypothesis that the probability p that distribution 1 is stochastically
superior to distribution 2 is outside of the interval given by
thresholds low and upp.
Parameters
----------
low, upp : float
equivalence interval low < mean < upp
Returns
-------
res : HolderTuple
HolderTuple instance with the following main attributes
pvalue : float
Pvalue of the equivalence test given by the larger pvalue of
the two one-sided tests.
statistic : float
Test statistic of the one-sided test that has the larger
pvalue.
results_larger : HolderTuple
            Results instance with test statistic, pvalue and degrees of
freedom for lower threshold test.
results_smaller : HolderTuple
            Results instance with test statistic, pvalue and degrees of
freedom for upper threshold test.
'''
t1 = self.test_prob_superior(low, alternative='larger')
t2 = self.test_prob_superior(upp, alternative='smaller')
# idx_max = 1 if t1.pvalue < t2.pvalue else 0
idx_max = np.asarray(t1.pvalue < t2.pvalue, int)
title = "Equivalence test for Prob(x1 > x2) + 0.5 Prob(x1 = x2) "
res = HolderTuple(statistic=np.choose(idx_max,
[t1.statistic, t2.statistic]),
# pvalue=[t1.pvalue, t2.pvalue][idx_max], # python
# use np.choose for vectorized selection
pvalue=np.choose(idx_max, [t1.pvalue, t2.pvalue]),
results_larger=t1,
results_smaller=t2,
title=title
)
return res | test of stochastic (non-)equivalence of p = P(x1 > x2)
Null hypothesis: p < low or p > upp
Alternative hypothesis: low < p < upp
where p is the probability that a random draw from the population of
the first sample has a larger value than a random draw from the
population of the second sample, specifically
p = P(x1 > x2) + 0.5 * P(x1 = x2)
If the pvalue is smaller than a threshold, say 0.05, then we reject the
hypothesis that the probability p that distribution 1 is stochastically
superior to distribution 2 is outside of the interval given by
thresholds low and upp.
Parameters
----------
low, upp : float
equivalence interval low < mean < upp
Returns
-------
res : HolderTuple
HolderTuple instance with the following main attributes
pvalue : float
Pvalue of the equivalence test given by the larger pvalue of
the two one-sided tests.
statistic : float
Test statistic of the one-sided test that has the larger
pvalue.
results_larger : HolderTuple
Results instance with test statistic, pvalue and degrees of
freedom for lower threshold test.
results_smaller : HolderTuple
Results instance with test statistic, pvalue and degrees of
freedom for upper threshold test. | tost_prob_superior | python | statsmodels/statsmodels | statsmodels/stats/nonparametric.py | https://github.com/statsmodels/statsmodels/blob/master/statsmodels/stats/nonparametric.py | BSD-3-Clause |
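A short equivalence-testing sketch (illustrative; the interval bounds 0.4 and 0.6 are arbitrary assumptions):

import numpy as np
from statsmodels.stats.nonparametric import rank_compare_2indep

x1 = np.array([2, 4, 4, 5, 7, 8, 9])
x2 = np.array([1, 2, 3, 3, 4, 6, 6])
res = rank_compare_2indep(x1, x2)
# H0: p <= 0.4 or p >= 0.6; rejection supports stochastic equivalence
tost = res.tost_prob_superior(low=0.4, upp=0.6)
print(tost.pvalue)          # larger of the two one-sided p-values
print(tost.results_larger)  # details of the lower-threshold test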
def confint_lintransf(self, const=-1, slope=2, alpha=0.05,
alternative="two-sided"):
"""confidence interval of a linear transformation of prob1
This computes the confidence interval for
d = const + slope * prob1
Default values correspond to Somers' d.
Parameters
----------
const, slope : float
Constant and slope for linear (affine) transformation.
alpha : float
Significance level for the confidence interval, coverage is
``1-alpha``
alternative : str
The alternative hypothesis, H1, has to be one of the following
* 'two-sided' : H1: ``prob - value`` not equal to 0.
* 'larger' : H1: ``prob - value > 0``
* 'smaller' : H1: ``prob - value < 0``
Returns
-------
lower : float or ndarray
Lower confidence limit. This is -inf for the one-sided alternative
"smaller".
upper : float or ndarray
Upper confidence limit. This is inf for the one-sided alternative
"larger".
"""
low_p, upp_p = self.conf_int(alpha=alpha, alternative=alternative)
low = const + slope * low_p
upp = const + slope * upp_p
if slope < 0:
low, upp = upp, low
return low, upp | confidence interval of a linear transformation of prob1
This computes the confidence interval for
d = const + slope * prob1
Default values correspond to Somers' d.
Parameters
----------
const, slope : float
Constant and slope for linear (affine) transformation.
alpha : float
Significance level for the confidence interval, coverage is
``1-alpha``
alternative : str
The alternative hypothesis, H1, has to be one of the following
* 'two-sided' : H1: ``prob - value`` not equal to 0.
* 'larger' : H1: ``prob - value > 0``
* 'smaller' : H1: ``prob - value < 0``
Returns
-------
lower : float or ndarray
Lower confidence limit. This is -inf for the one-sided alternative
"smaller".
upper : float or ndarray
Upper confidence limit. This is inf for the one-sided alternative
"larger". | confint_lintransf | python | statsmodels/statsmodels | statsmodels/stats/nonparametric.py | https://github.com/statsmodels/statsmodels/blob/master/statsmodels/stats/nonparametric.py | BSD-3-Clause |
def effectsize_normal(self, prob=None):
"""
Cohen's d, standardized mean difference under normality assumption.
This computes the standardized mean difference, Cohen's d, effect size
that is equivalent to the rank based probability ``p`` of being
stochastically larger if we assume that the data is normally
distributed, given by
:math:`d = F^{-1}(p) * \\sqrt{2}`
where :math:`F^{-1}` is the inverse of the cdf of the normal
distribution.
Parameters
----------
prob : float in (0, 1)
Probability to be converted to Cohen's d effect size.
If prob is None, then the ``prob1`` attribute is used.
Returns
-------
d : float or ndarray
    Equivalent Cohen's d effect size under the normality assumption.
"""
if prob is None:
prob = self.prob1
return stats.norm.ppf(prob) * np.sqrt(2) | Cohen's d, standardized mean difference under normality assumption.
This computes the standardized mean difference, Cohen's d, effect size
that is equivalent to the rank based probability ``p`` of being
stochastically larger if we assume that the data is normally
distributed, given by
:math:`d = F^{-1}(p) * \\sqrt{2}`
where :math:`F^{-1}` is the inverse of the cdf of the normal
distribution.
Parameters
----------
prob : float in (0, 1)
Probability to be converted to Cohen's d effect size.
If prob is None, then the ``prob1`` attribute is used.
Returns
-------
d : float or ndarray
    Equivalent Cohen's d effect size under the normality assumption.
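A small numeric check of the formula above (values assumed, not from the source): converting Cohen's d = 1 to a probability and back recovers d.

import numpy as np
from scipy import stats

prob = stats.norm.cdf(1 / np.sqrt(2))   # probability implied by d = 1
d = stats.norm.ppf(prob) * np.sqrt(2)   # round-trips to d = 1.0
print(prob, d)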
def summary(self, alpha=0.05, xname=None):
"""summary table for probability that random draw x1 is larger than x2
Parameters
----------
alpha : float
Significance level for confidence intervals. Coverage is 1 - alpha
xname : None or list of str
If None, then each row has a name column with generic names.
If xname is a list of strings, then it will be included as part
of those names.
Returns
-------
SimpleTable instance with methods to convert to different output
formats.
"""
yname = "None"
effect = np.atleast_1d(self.prob1)
if self.pvalue is None:
statistic, pvalue = self.test_prob_superior()
else:
pvalue = self.pvalue
statistic = self.statistic
pvalues = np.atleast_1d(pvalue)
ci = np.atleast_2d(self.conf_int(alpha=alpha))
if ci.shape[0] > 1:
ci = ci.T
use_t = self.use_t
sd = np.atleast_1d(np.sqrt(self.var_prob))
statistic = np.atleast_1d(statistic)
if xname is None:
xname = ['c%d' % ii for ii in range(len(effect))]
xname2 = ['prob(x1>x2) %s' % ii for ii in xname]
title = "Probability sample 1 is stochastically larger"
from statsmodels.iolib.summary import summary_params
summ = summary_params((self, effect, sd, statistic,
pvalues, ci),
yname=yname, xname=xname2, use_t=use_t,
title=title, alpha=alpha)
return summ | summary table for probability that random draw x1 is larger than x2
Parameters
----------
alpha : float
Significance level for confidence intervals. Coverage is 1 - alpha
xname : None or list of str
If None, then each row has a name column with generic names.
If xname is a list of strings, then it will be included as part
of those names.
Returns
-------
SimpleTable instance with methods to convert to different output
formats. | summary | python | statsmodels/statsmodels | statsmodels/stats/nonparametric.py | https://github.com/statsmodels/statsmodels/blob/master/statsmodels/stats/nonparametric.py | BSD-3-Clause |
def rank_compare_2indep(x1, x2, use_t=True):
"""
Statistics and tests for the probability that x1 has larger values than x2.
p is the probability that a random draw from the population of
the first sample has a larger value than a random draw from the
population of the second sample, specifically
p = P(x1 > x2) + 0.5 * P(x1 = x2)
This is a measure underlying Wilcoxon-Mann-Whitney's U test,
Fligner-Policello test and Brunner-Munzel test. Inference is based on
the asymptotic distribution of the Brunner-Munzel test. The half
probability for ties corresponds to the use of midranks and makes the
measure valid for discrete variables.
The Null hypothesis for stochastic equality is p = 0.5, which corresponds
to the Brunner-Munzel test.
Parameters
----------
x1, x2 : array_like
Array of samples, should be one-dimensional.
use_t : boolean
If use_t is true, the t distribution with Welch-Satterthwaite type
degrees of freedom is used for p-value and confidence interval.
If use_t is false, then the normal distribution is used.
Returns
-------
res : RankCompareResult
The results instance contains the results for the Brunner-Munzel test
and has methods for hypothesis tests, confidence intervals and summary.
statistic : float
The Brunner-Munzel W statistic.
pvalue : float
Two-sided p-value based on the t or the normal distribution,
depending on `use_t`.
See Also
--------
RankCompareResult
scipy.stats.brunnermunzel : Brunner-Munzel test for stochastic equality
scipy.stats.mannwhitneyu : Mann-Whitney rank test on two samples.
Notes
-----
Wilcoxon-Mann-Whitney assumes equal variance or equal distribution under
the Null hypothesis. Fligner-Policello test allows for unequal variances
but assumes continuous distribution, i.e. no ties.
The Brunner-Munzel test extends this to allow for unequal variances and
discrete or ordered categorical random variables.
Brunner and Munzel recommended estimating the p-value from the
t-distribution when the sample size is 50 or less. If the sample size is
below 10, the permuted Brunner-Munzel test (see [2]_) is preferable for
testing stochastic equality.
This measure has been introduced in the literature under many different
names relying on a variety of assumptions.
In psychology, McGraw and Wong (1992) introduced it as Common Language
effect size for the continuous, normal distribution case,
Vargha and Delaney (2000) [3]_ extended it to the nonparametric
continuous distribution case as in Fligner-Policello.
WMW and related tests can be interpreted as tests of medians or tests
of central location only under very restrictive additional assumptions,
such as both distributions being identical under the equality null
hypothesis (assumed by Mann-Whitney) or both distributions being
symmetric (shown by Fligner-Policello). If the distributions of the two
samples can differ in an arbitrary way, then the equality null hypothesis
corresponds to p=0.5 against an alternative p != 0.5; see, for example,
Conroy (2012) [4]_ and Divine et al. (2018) [5]_.
Note: Brunner-Munzel and related literature define the probability that x1
is stochastically smaller than x2, while here we use stochastically larger.
This is equivalent to switching x1 and x2 in the two-sample case.
References
----------
.. [1] Brunner, E. and Munzel, U. "The nonparametric Behrens-Fisher
problem: Asymptotic theory and a small-sample approximation".
Biometrical Journal. Vol. 42(2000): 17-25.
.. [2] Neubert, K. and Brunner, E. "A studentized permutation test for the
non-parametric Behrens-Fisher problem". Computational Statistics and
Data Analysis. Vol. 51(2007): 5192-5204.
.. [3] Vargha, András, and Harold D. Delaney. 2000. “A Critique and
Improvement of the CL Common Language Effect Size Statistics of
McGraw and Wong.” Journal of Educational and Behavioral Statistics
25 (2): 101–32. https://doi.org/10.3102/10769986025002101.
.. [4] Conroy, Ronán M. 2012. “What Hypotheses Do ‘Nonparametric’ Two-Group
Tests Actually Test?” The Stata Journal: Promoting Communications on
Statistics and Stata 12 (2): 182–90.
https://doi.org/10.1177/1536867X1201200202.
.. [5] Divine, George W., H. James Norton, Anna E. Barón, and Elizabeth
Juarez-Colunga. 2018. “The Wilcoxon–Mann–Whitney Procedure Fails as
a Test of Medians.” The American Statistician 72 (3): 278–86.
https://doi.org/10.1080/00031305.2017.1305291.
"""
x1 = np.asarray(x1)
x2 = np.asarray(x2)
nobs1 = len(x1)
nobs2 = len(x2)
nobs = nobs1 + nobs2
if nobs1 == 0 or nobs2 == 0:
raise ValueError("one sample has zero length")
rank1, rank2, ranki1, ranki2 = rankdata_2samp(x1, x2)
meanr1 = np.mean(rank1, axis=0)
meanr2 = np.mean(rank2, axis=0)
meanri1 = np.mean(ranki1, axis=0)
meanri2 = np.mean(ranki2, axis=0)
S1 = np.sum(np.power(rank1 - ranki1 - meanr1 + meanri1, 2.0), axis=0)
S1 /= nobs1 - 1
S2 = np.sum(np.power(rank2 - ranki2 - meanr2 + meanri2, 2.0), axis=0)
S2 /= nobs2 - 1
wbfn = nobs1 * nobs2 * (meanr1 - meanr2)
wbfn /= (nobs1 + nobs2) * np.sqrt(nobs1 * S1 + nobs2 * S2)
# Here we only use alternative == "two-sided"
if use_t:
df_numer = np.power(nobs1 * S1 + nobs2 * S2, 2.0)
df_denom = np.power(nobs1 * S1, 2.0) / (nobs1 - 1)
df_denom += np.power(nobs2 * S2, 2.0) / (nobs2 - 1)
df = df_numer / df_denom
pvalue = 2 * stats.t.sf(np.abs(wbfn), df)
else:
pvalue = 2 * stats.norm.sf(np.abs(wbfn))
df = None
# other info
var1 = S1 / (nobs - nobs1)**2
var2 = S2 / (nobs - nobs2)**2
var_prob = (var1 / nobs1 + var2 / nobs2)
var = nobs * (var1 / nobs1 + var2 / nobs2)
prob1 = (meanr1 - (nobs1 + 1) / 2) / nobs2
prob2 = (meanr2 - (nobs2 + 1) / 2) / nobs1
return RankCompareResult(statistic=wbfn, pvalue=pvalue, s1=S1, s2=S2,
var1=var1, var2=var2, var=var,
var_prob=var_prob,
nobs1=nobs1, nobs2=nobs2, nobs=nobs,
mean1=meanr1, mean2=meanr2,
prob1=prob1, prob2=prob2,
somersd1=prob1 * 2 - 1, somersd2=prob2 * 2 - 1,
df=df, use_t=use_t
) | Statistics and tests for the probability that x1 has larger values than x2.
p is the probability that a random draw from the population of
the first sample has a larger value than a random draw from the
population of the second sample, specifically
p = P(x1 > x2) + 0.5 * P(x1 = x2)
This is a measure underlying Wilcoxon-Mann-Whitney's U test,
Fligner-Policello test and Brunner-Munzel test. Inference is based on
the asymptotic distribution of the Brunner-Munzel test. The half
probability for ties corresponds to the use of midranks and makes the
measure valid for discrete variables.
The Null hypothesis for stochastic equality is p = 0.5, which corresponds
to the Brunner-Munzel test.
Parameters
----------
x1, x2 : array_like
Array of samples, should be one-dimensional.
use_t : boolean
If use_t is true, the t distribution with Welch-Satterthwaite type
degrees of freedom is used for p-value and confidence interval.
If use_t is false, then the normal distribution is used.
Returns
-------
res : RankCompareResult
The results instance contains the results for the Brunner-Munzel test
and has methods for hypothesis tests, confidence intervals and summary.
statistic : float
The Brunner-Munzel W statistic.
pvalue : float
Two-sided p-value based on the t or the normal distribution,
depending on `use_t`.
See Also
--------
RankCompareResult
scipy.stats.brunnermunzel : Brunner-Munzel test for stochastic equality
scipy.stats.mannwhitneyu : Mann-Whitney rank test on two samples.
Notes
-----
Wilcoxon-Mann-Whitney assumes equal variance or equal distribution under
the Null hypothesis. Fligner-Policello test allows for unequal variances
but assumes continuous distribution, i.e. no ties.
The Brunner-Munzel test extends this to allow for unequal variances and
discrete or ordered categorical random variables.
Brunner and Munzel recommended estimating the p-value from the
t-distribution when the sample size is 50 or less. If the sample size is
below 10, the permuted Brunner-Munzel test (see [2]_) is preferable for
testing stochastic equality.
This measure has been introduced in the literature under many different
names relying on a variety of assumptions.
In psychology, McGraw and Wong (1992) introduced it as Common Language
effect size for the continuous, normal distribution case,
Vargha and Delaney (2000) [3]_ extended it to the nonparametric
continuous distribution case as in Fligner-Policello.
WMW and related tests can be interpreted as tests of medians or tests
of central location only under very restrictive additional assumptions,
such as both distributions being identical under the equality null
hypothesis (assumed by Mann-Whitney) or both distributions being
symmetric (shown by Fligner-Policello). If the distributions of the two
samples can differ in an arbitrary way, then the equality null hypothesis
corresponds to p=0.5 against an alternative p != 0.5; see, for example,
Conroy (2012) [4]_ and Divine et al. (2018) [5]_.
Note: Brunner-Munzel and related literature define the probability that x1
is stochastically smaller than x2, while here we use stochastically larger.
This is equivalent to switching x1 and x2 in the two-sample case.
References
----------
.. [1] Brunner, E. and Munzel, U. "The nonparametric Behrens-Fisher
problem: Asymptotic theory and a small-sample approximation".
Biometrical Journal. Vol. 42(2000): 17-25.
.. [2] Neubert, K. and Brunner, E. "A studentized permutation test for the
non-parametric Behrens-Fisher problem". Computational Statistics and
Data Analysis. Vol. 51(2007): 5192-5204.
.. [3] Vargha, András, and Harold D. Delaney. 2000. “A Critique and
Improvement of the CL Common Language Effect Size Statistics of
McGraw and Wong.” Journal of Educational and Behavioral Statistics
25 (2): 101–32. https://doi.org/10.3102/10769986025002101.
.. [4] Conroy, Ronán M. 2012. “What Hypotheses Do ‘Nonparametric’ Two-Group
Tests Actually Test?” The Stata Journal: Promoting Communications on
Statistics and Stata 12 (2): 182–90.
https://doi.org/10.1177/1536867X1201200202.
.. [5] Divine, George W., H. James Norton, Anna E. Barón, and Elizabeth
Juarez-Colunga. 2018. “The Wilcoxon–Mann–Whitney Procedure Fails as
a Test of Medians.” The American Statistician 72 (3): 278–86.
https://doi.org/10.1080/00031305.2017.1305291. | rank_compare_2indep | python | statsmodels/statsmodels | statsmodels/stats/nonparametric.py | https://github.com/statsmodels/statsmodels/blob/master/statsmodels/stats/nonparametric.py | BSD-3-Clause |
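An end-to-end sketch of this function (the data values are arbitrary assumptions for illustration):

import numpy as np
from statsmodels.stats.nonparametric import rank_compare_2indep

x1 = np.array([1, 2, 1, 1, 1, 1, 1, 1, 1, 1, 2, 4, 1, 1])
x2 = np.array([3, 3, 4, 3, 1, 2, 3, 1, 1, 5, 4])
res = rank_compare_2indep(x1, x2, use_t=True)
print(res.statistic, res.pvalue)  # Brunner-Munzel statistic, two-sided p
print(res.prob1)                  # estimate of P(x1 > x2) + 0.5 P(x1 = x2)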
def rank_compare_2ordinal(count1, count2, ddof=1, use_t=True):
"""
Stochastically larger probability for 2 independent ordinal samples.
This is a special case of `rank_compare_2indep` when the data are given as
counts of two independent ordinal, i.e. ordered multinomial, samples.
The statistic of interest is the probability that a random draw from the
population of the first sample has a larger value than a random draw from
the population of the second sample, specifically
p = P(x1 > x2) + 0.5 * P(x1 = x2)
Parameters
----------
count1 : array_like
Counts of the first sample, categories are assumed to be ordered.
count2 : array_like
Counts of the second sample, number of categories and ordering needs
to be the same as for sample 1.
ddof : scalar
Degrees of freedom correction for variance estimation. The default
ddof=1 corresponds to `rank_compare_2indep`.
use_t : bool
If use_t is true, the t distribution with Welch-Satterthwaite type
degrees of freedom is used for p-value and confidence interval.
If use_t is false, then the normal distribution is used.
Returns
-------
res : RankCompareResult
This includes methods for hypothesis tests and confidence intervals
for the probability that sample 1 is stochastically larger than
sample 2.
See Also
--------
rank_compare_2indep
RankCompareResult
Notes
-----
The implementation is based on the appendix of Munzel and Hauschke (2003)
with the addition of ``ddof`` so that the results match the general
function `rank_compare_2indep`.
"""
count1 = np.asarray(count1)
count2 = np.asarray(count2)
nobs1, nobs2 = count1.sum(), count2.sum()
freq1 = count1 / nobs1
freq2 = count2 / nobs2
cdf1 = np.concatenate(([0], freq1)).cumsum(axis=0)
cdf2 = np.concatenate(([0], freq2)).cumsum(axis=0)
# mid rank cdf
cdfm1 = (cdf1[1:] + cdf1[:-1]) / 2
cdfm2 = (cdf2[1:] + cdf2[:-1]) / 2
prob1 = (cdfm2 * freq1).sum()
prob2 = (cdfm1 * freq2).sum()
var1 = (cdfm2**2 * freq1).sum() - prob1**2
var2 = (cdfm1**2 * freq2).sum() - prob2**2
var_prob = (var1 / (nobs1 - ddof) + var2 / (nobs2 - ddof))
nobs = nobs1 + nobs2
var = nobs * var_prob
vn1 = var1 * nobs2 * nobs1 / (nobs1 - ddof)
vn2 = var2 * nobs1 * nobs2 / (nobs2 - ddof)
df = (vn1 + vn2)**2 / (vn1**2 / (nobs1 - 1) + vn2**2 / (nobs2 - 1))
res = RankCompareResult(statistic=None, pvalue=None, s1=None, s2=None,
var1=var1, var2=var2, var=var,
var_prob=var_prob,
nobs1=nobs1, nobs2=nobs2, nobs=nobs,
mean1=None, mean2=None,
prob1=prob1, prob2=prob2,
somersd1=prob1 * 2 - 1, somersd2=prob2 * 2 - 1,
df=df, use_t=use_t
)
return res | Stochastically larger probability for 2 independent ordinal samples.
This is a special case of `rank_compare_2indep` when the data are given as
counts of two independent ordinal, i.e. ordered multinomial, samples.
The statistic of interest is the probability that a random draw from the
population of the first sample has a larger value than a random draw from
the population of the second sample, specifically
p = P(x1 > x2) + 0.5 * P(x1 = x2)
Parameters
----------
count1 : array_like
Counts of the first sample, categories are assumed to be ordered.
count2 : array_like
Counts of the second sample, number of categories and ordering needs
to be the same as for sample 1.
ddof : scalar
Degrees of freedom correction for variance estimation. The default
ddof=1 corresponds to `rank_compare_2indep`.
use_t : bool
If use_t is true, the t distribution with Welch-Satterthwaite type
degrees of freedom is used for p-value and confidence interval.
If use_t is false, then the normal distribution is used.
Returns
-------
res : RankCompareResult
This includes methods for hypothesis tests and confidence intervals
for the probability that sample 1 is stochastically larger than
sample 2.
See Also
--------
rank_compare_2indep
RankCompareResult
Notes
-----
The implementation is based on the appendix of Munzel and Hauschke (2003)
with the addition of ``ddof`` so that the results match the general
function `rank_compare_2indep`. | rank_compare_2ordinal | python | statsmodels/statsmodels | statsmodels/stats/nonparametric.py | https://github.com/statsmodels/statsmodels/blob/master/statsmodels/stats/nonparametric.py | BSD-3-Clause |
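A sketch with ordinal count data (the counts are made up for illustration):

import numpy as np
from statsmodels.stats.nonparametric import rank_compare_2ordinal

count1 = np.array([5, 10, 20, 10, 5])   # 5 ordered categories, sample 1
count2 = np.array([12, 15, 14, 7, 2])   # same categories, sample 2
res = rank_compare_2ordinal(count1, count2)
print(res.prob1)                            # P(x1 > x2) + 0.5 * P(x1 = x2)
print(res.test_prob_superior(value=0.5))    # test of stochastic equality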
def prob_larger_continuous(distr1, distr2):
"""
Probability indicating that distr1 is stochastically larger than distr2.
This computes
p = P(x1 > x2)
for two continuous distributions, where `distr1` and `distr2` are the
distributions of random variables x1 and x2 respectively.
Parameters
----------
distr1, distr2 : distributions
Two instances of scipy.stats.distributions. The required methods are
cdf of the second distribution and expect of the first distribution.
Returns
-------
p : float
    Probability that x1 is larger than x2, P(x1 > x2).
Notes
-----
This is a one-liner that is added mainly as reference.
Examples
--------
>>> from scipy import stats
>>> prob_larger_continuous(stats.norm, stats.t(5))
0.4999999999999999
# which is the same as
>>> stats.norm.expect(stats.t(5).cdf)
0.4999999999999999
# distribution 1 with smaller mean (loc) than distribution 2
>>> prob_larger_continuous(stats.norm, stats.norm(loc=1))
0.23975006109347669
"""
return distr1.expect(distr2.cdf) | Probability indicating that distr1 is stochastically larger than distr2.
This computes
p = P(x1 > x2)
for two continuous distributions, where `distr1` and `distr2` are the
distributions of random variables x1 and x2 respectively.
Parameters
----------
distr1, distr2 : distributions
Two instances of scipy.stats.distributions. The required methods are
cdf of the second distribution and expect of the first distribution.
Returns
-------
p : float
    Probability that x1 is larger than x2, P(x1 > x2).
Notes
-----
This is a one-liner that is added mainly as reference.
Examples
--------
>>> from scipy import stats
>>> prob_larger_continuous(stats.norm, stats.t(5))
0.4999999999999999
# which is the same as
>>> stats.norm.expect(stats.t(5).cdf)
0.4999999999999999
# distribution 1 with smaller mean (loc) than distribution 2
>>> prob_larger_continuous(stats.norm, stats.norm(loc=1))
0.23975006109347669 | prob_larger_continuous | python | statsmodels/statsmodels | statsmodels/stats/nonparametric.py | https://github.com/statsmodels/statsmodels/blob/master/statsmodels/stats/nonparametric.py | BSD-3-Clause |
def cohensd2problarger(d):
"""
Convert Cohen's d effect size to stochastically-larger-probability.
This assumes observations are normally distributed.
Computed as
p = Prob(x1 > x2) = F(d / sqrt(2))
where `F` is cdf of normal distribution. Cohen's d is defined as
d = (mean1 - mean2) / std
where ``std`` is the pooled within standard deviation.
Parameters
----------
d : float or array_like
Cohen's d effect size for difference mean1 - mean2.
Returns
-------
prob : float or ndarray
Prob(x1 > x2)
"""
return stats.norm.cdf(d / np.sqrt(2)) | Convert Cohen's d effect size to stochastically-larger-probability.
This assumes observations are normally distributed.
Computed as
p = Prob(x1 > x2) = F(d / sqrt(2))
where `F` is cdf of normal distribution. Cohen's d is defined as
d = (mean1 - mean2) / std
where ``std`` is the pooled within standard deviation.
Parameters
----------
d : float or array_like
Cohen's d effect size for difference mean1 - mean2.
Returns
-------
prob : float or ndarray
Prob(x1 > x2) | cohensd2problarger | python | statsmodels/statsmodels | statsmodels/stats/nonparametric.py | https://github.com/statsmodels/statsmodels/blob/master/statsmodels/stats/nonparametric.py | BSD-3-Clause |
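A quick numeric illustration (the effect size value is assumed):

from statsmodels.stats.nonparametric import cohensd2problarger

p = cohensd2problarger(0.5)   # about 0.64: a medium effect gives ~64%
print(p)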
def _compute_rank_placements(x1, x2) -> Holder:
"""
Compute ranks and placements for two samples.
This helper is used by `samplesize_rank_compare_onetail`
to calculate rank-based statistics for two input samples.
It assumes that the input data has been validated beforehand.
Parameters
----------
x1, x2 : array_like
Data samples used to compute ranks and placements.
Returns
-------
res : Holder
An instance of Holder containing the following attributes:
n_1 : int
Number of observations in the first sample.
n_2 : int
Number of observations in the second sample.
overall_ranks_pooled : ndarray
Ranks of the pooled sample.
overall_ranks_1 : ndarray
Ranks of the first sample in the pooled sample.
overall_ranks_2 : ndarray
Ranks of the second sample in the pooled sample.
within_group_ranks_1 : ndarray
Internal ranks of the first sample.
within_group_ranks_2 : ndarray
Internal ranks of the second sample.
placements_1 : ndarray
Placements of the first sample in the pooled sample.
placements_2 : ndarray
Placements of the second sample in the pooled sample.
Notes
-----
* The overall rank for each observation is determined
by ranking all data points from both samples combined
(`x1` and `x2`) in ascending order, with ties averaged.
* The within-group rank for each observation is determined
by ranking the data points within each sample separately.
* The placement of each observation is calculated by
taking the difference between the overall rank and the
within-group rank of the observation. Placements can be
thought of as measures of the degree of overlap or
separation between two samples.
"""
n_1 = len(x1)
n_2 = len(x2)
# Overall ranks for each obs among combined sample
overall_ranks_pooled = rankdata(
np.r_[x1, x2], method="average"
)
overall_ranks_1 = overall_ranks_pooled[:n_1]
overall_ranks_2 = overall_ranks_pooled[n_1:]
# Within group ranks for each obs
within_group_ranks_1 = rankdata(x1, method="average")
within_group_ranks_2 = rankdata(x2, method="average")
placements_1 = overall_ranks_1 - within_group_ranks_1
placements_2 = overall_ranks_2 - within_group_ranks_2
return Holder(
n_1=n_1,
n_2=n_2,
overall_ranks_pooled=overall_ranks_pooled,
overall_ranks_1=overall_ranks_1,
overall_ranks_2=overall_ranks_2,
within_group_ranks_1=within_group_ranks_1,
within_group_ranks_2=within_group_ranks_2,
placements_1=placements_1,
placements_2=placements_2,
) | Compute ranks and placements for two samples.
This helper is used by `samplesize_rank_compare_onetail`
to calculate rank-based statistics for two input samples.
It assumes that the input data has been validated beforehand.
Parameters
----------
x1, x2 : array_like
Data samples used to compute ranks and placements.
Returns
-------
res : Holder
An instance of Holder containing the following attributes:
n_1 : int
Number of observations in the first sample.
n_2 : int
Number of observations in the second sample.
overall_ranks_pooled : ndarray
Ranks of the pooled sample.
overall_ranks_1 : ndarray
Ranks of the first sample in the pooled sample.
overall_ranks_2 : ndarray
Ranks of the second sample in the pooled sample.
within_group_ranks_1 : ndarray
Internal ranks of the first sample.
within_group_ranks_2 : ndarray
Internal ranks of the second sample.
placements_1 : ndarray
Placements of the first sample in the pooled sample.
placements_2 : ndarray
Placements of the second sample in the pooled sample.
Notes
-----
* The overall rank for each observation is determined
by ranking all data points from both samples combined
(`x1` and `x2`) in ascending order, with ties averaged.
* The within-group rank for each observation is determined
by ranking the data points within each sample separately.
* The placement of each observation is calculated by
taking the difference between the overall rank and the
within-group rank of the observation. Placements can be
thought of as measures of the degree of overlap or
separation between two samples. | _compute_rank_placements | python | statsmodels/statsmodels | statsmodels/stats/nonparametric.py | https://github.com/statsmodels/statsmodels/blob/master/statsmodels/stats/nonparametric.py | BSD-3-Clause |
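The placement idea can be reproduced directly with scipy (a standalone sketch of the computation, not the helper itself; data values assumed):

import numpy as np
from scipy.stats import rankdata

x1 = np.array([1, 3, 5])
x2 = np.array([2, 4])
overall = rankdata(np.r_[x1, x2], method="average")
placements_1 = overall[: len(x1)] - rankdata(x1, method="average")
# placements_1 == [0, 1, 2]: the (midrank-adjusted) count of x2 values
# lying below each element of x1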
def samplesize_rank_compare_onetail(
synthetic_sample,
reference_sample,
alpha,
power,
nobs_ratio=1,
alternative="two-sided",
) -> Holder:
"""
Compute sample size for the non-parametric Mann-Whitney U test.
This function implements the method of Happ et al (2019).
Parameters
----------
synthetic_sample : array_like
Generated `synthetic` data representing the treatment
group under the research hypothesis.
reference_sample : array_like
Advance information for the reference group.
alpha : float
The type I error rate for the test (two-sided).
power : float
The desired power of the test.
nobs_ratio : float, optional
Sample size ratio, `nobs_ref` = `nobs_ratio` *
`nobs_treat`. This is the ratio of the reference
group sample size to the treatment group sample
size, by default 1 (balanced design). See Notes.
alternative : str, ‘two-sided’ (default), ‘larger’, or ‘smaller’
Extra argument to choose whether the sample size is
calculated for a two-sided (default) or one-sided test.
See Notes.
Returns
-------
res : Holder
An instance of Holder containing the following attributes:
nobs_total : float
The total sample size required for the experiment.
nobs_treat : float
Sample size for the treatment group.
nobs_ref : float
Sample size for the reference group.
relative_effect : float
The estimated relative effect size.
power : float
The desired power for the test.
alpha : float
The type I error rate for the test.
Notes
-----
In the context of the two-sample Wilcoxon Mann-Whitney
U test, the `reference_sample` typically represents data
from the control group or previous studies. The
`synthetic_sample` is generated based on this reference
data and a prespecified relative effect size that is
meaningful for the research question. This effect size
is often determined in collaboration with subject matter
experts to reflect a significant difference worth detecting.
By comparing the reference and synthetic samples, this
function estimates the sample size needed to achieve the
desired power at the specified Type-I error rate.
Choosing between `one-sided` and `two-sided` tests has
important implications for sample size planning. A
`two-sided` test is more conservative and requires a
larger sample size but covers effects in both directions.
In contrast, a `larger` (`relative_effect > 0.5`) or `smaller`
(`relative_effect < 0.5`) one-sided test assumes the effect
occurs only in one direction, leading to a smaller required
sample size. However, if the true effect is in the opposite
direction, the `one-sided` test has virtually no power to
detect it. Additionally, if a two-sided test ends up being
used instead of the planned one-sided test, the original
sample size may be insufficient, resulting in an underpowered
study. It is important to carefully consider these trade-offs
when planning a study.
For `nobs_ratio > 1`, `nobs_ratio = 1`, or `nobs_ratio < 1`,
the reference group sample size is larger, equal to, or smaller
than the treatment group sample size, respectively.
Examples
--------
The data for the placebo group of a clinical trial published in
Thall and Vail [2]_ is shown below. A relevant effect for the treatment
under investigation is considered to be a 50% reduction in the number
of seizures. To compute the required sample size with a power of 0.8
and holding the type I error rate at 0.05, we generate synthetic data
for the treatment group under the alternative assuming this reduction.
>>> from statsmodels.stats.nonparametric import samplesize_rank_compare_onetail
>>> import numpy as np
>>> reference_sample = np.array([3, 3, 5, 4, 21, 7, 2, 12, 5, 0, 22, 4, 2, 12,
... 9, 5, 3, 29, 5, 7, 4, 4, 5, 8, 25, 1, 2, 12])
>>> # Apply 50% reduction in seizure counts and floor operation
>>> synthetic_sample = np.floor(reference_sample / 2)
>>> result = samplesize_rank_compare_onetail(
... synthetic_sample=synthetic_sample,
... reference_sample=reference_sample,
... alpha=0.05, power=0.8
... )
>>> print(f"Total sample size: {result.nobs_total}, "
... f"Treatment group: {result.nobs_treat}, "
... f"Reference group: {result.nobs_ref}")
References
----------
.. [1] Happ, M., Bathke, A. C., and Brunner, E. "Optimal sample size
planning for the Wilcoxon-Mann-Whitney test". Statistics in Medicine.
Vol. 38(2019): 363-375. https://doi.org/10.1002/sim.7983.
.. [2] Thall, P. F., and Vail, S. C. "Some covariance models for longitudinal
count data with overdispersion". Biometrics, pp. 657-671, 1990.
"""
synthetic_sample = np.asarray(synthetic_sample)
reference_sample = np.asarray(reference_sample)
if not (len(synthetic_sample) > 0 and len(reference_sample) > 0):
raise ValueError(
"Both `synthetic_sample` and `reference_sample`"
" must have at least one element."
)
if not (
np.all(np.isfinite(reference_sample))
and np.all(np.isfinite(synthetic_sample))
):
raise ValueError(
"All elements of `synthetic_sample` and `reference_sample`"
" must be finite; check for missing values."
)
if not (0 < alpha < 1):
raise ValueError("Alpha must be between 0 and 1 non-inclusive.")
if not (0 < power < 1):
raise ValueError("Power must be between 0 and 1 non-inclusive.")
if not (0 < nobs_ratio):
raise ValueError(
"Ratio of reference group to treatment group must be"
" strictly positive."
)
if alternative not in ("two-sided", "larger", "smaller"):
raise ValueError(
"Alternative must be one of `two-sided`, `larger`, or `smaller`."
)
# Group 1 is the treatment group, Group 2 is the reference group
rank_place = _compute_rank_placements(
synthetic_sample,
reference_sample,
)
# Extra few bytes of name binding for explicitness & readability
n_syn = rank_place.n_1
n_ref = rank_place.n_2
overall_ranks_pooled = rank_place.overall_ranks_pooled
placements_syn = rank_place.placements_1
placements_ref = rank_place.placements_2
relative_effect = (
np.mean(placements_syn) - np.mean(placements_ref)
) / (n_syn + n_ref) + 0.5
# Values [0.499, 0.501] considered 'practically' = 0.5 (0.1% atol)
if np.isclose(relative_effect, 0.5, atol=1e-3):
raise ValueError(
"Estimated relative effect is effectively 0.5, i.e."
" stochastic equality between `synthetic_sample` and"
" `reference_sample`. Given null hypothesis is true,"
" sample size cannot be calculated. Please review data"
" samples to ensure they reflect appropriate relative"
" effect size assumptions."
)
if relative_effect < 0.5 and alternative == "larger":
raise ValueError(
"Estimated relative effect is smaller than 0.5;"
" `synthetic_sample` is stochastically smaller than"
" `reference_sample`. No sample size can be calculated"
" for `alternative == 'larger'`. Please review data"
" samples to ensure they reflect appropriate relative"
" effect size assumptions."
)
if relative_effect > 0.5 and alternative == "smaller":
raise ValueError(
"Estimated relative effect is larger than 0.5;"
" `synthetic_sample` is stochastically larger than"
" `reference_sample`. No sample size can be calculated"
" for `alternative == 'smaller'`. Please review data"
" samples to ensure they reflect appropriate relative"
" effect size assumptions."
)
sd_overall = np.sqrt(
np.sum(
(overall_ranks_pooled - (n_syn + n_ref + 1) / 2) ** 2
)
/ (n_syn + n_ref) ** 3
)
var_ref = (
np.sum(
(placements_ref - np.mean(placements_ref)) ** 2
) / (n_ref * (n_syn ** 2))
)
var_syn = (
np.sum(
(placements_syn - np.mean(placements_syn)) ** 2
) / ((n_ref ** 2) * n_syn)
)
quantile_prob = (1 - alpha / 2) if alternative == "two-sided" else (1 - alpha)
quantile_alpha = stats.norm.ppf(quantile_prob, loc=0, scale=1)
quantile_power = stats.norm.ppf(power, loc=0, scale=1)
# Convert `nobs_ratio` to proportion of total allocated to reference group
prop_treatment = 1 / (1 + nobs_ratio)
prop_reference = 1 - prop_treatment
var_terms = np.sqrt(
prop_reference * var_syn + (1 - prop_reference) * var_ref
)
quantiles_terms = sd_overall * quantile_alpha + quantile_power * var_terms
# Add a small epsilon to avoid division by zero when there is no
# treatment effect, i.e. p_hat = 0.5
nobs_total = (quantiles_terms**2) / (
prop_reference
* (1 - prop_reference)
* (relative_effect - 0.5 + 1e-12) ** 2
)
nobs_treat = nobs_total * (1 - prop_reference)
nobs_ref = nobs_total * prop_reference
return Holder(
nobs_total=nobs_total.item(),
nobs_treat=nobs_treat.item(),
nobs_ref=nobs_ref.item(),
relative_effect=relative_effect.item(),
power=power,
alpha=alpha,
) | Compute sample size for the non-parametric Mann-Whitney U test.
This function implements the method of Happ et al (2019).
Parameters
----------
synthetic_sample : array_like
Generated `synthetic` data representing the treatment
group under the research hypothesis.
reference_sample : array_like
Advance information for the reference group.
alpha : float
The type I error rate for the test (two-sided).
power : float
The desired power of the test.
nobs_ratio : float, optional
Sample size ratio, `nobs_ref` = `nobs_ratio` *
`nobs_treat`. This is the ratio of the reference
group sample size to the treatment group sample
size, by default 1 (balanced design). See Notes.
alternative : str, ‘two-sided’ (default), ‘larger’, or ‘smaller’
Extra argument to choose whether the sample size is
calculated for a two-sided (default) or one-sided test.
See Notes.
Returns
-------
res : Holder
An instance of Holder containing the following attributes:
nobs_total : float
The total sample size required for the experiment.
nobs_treat : float
Sample size for the treatment group.
nobs_ref : float
Sample size for the reference group.
relative_effect : float
The estimated relative effect size.
power : float
The desired power for the test.
alpha : float
The type I error rate for the test.
Notes
-----
In the context of the two-sample Wilcoxon Mann-Whitney
U test, the `reference_sample` typically represents data
from the control group or previous studies. The
`synthetic_sample` is generated based on this reference
data and a prespecified relative effect size that is
meaningful for the research question. This effect size
is often determined in collaboration with subject matter
experts to reflect a significant difference worth detecting.
By comparing the reference and synthetic samples, this
function estimates the sample size needed to achieve the
desired power at the specified Type-I error rate.
Choosing between `one-sided` and `two-sided` tests has
important implications for sample size planning. A
`two-sided` test is more conservative and requires a
larger sample size but covers effects in both directions.
In contrast, a `larger` (`relative_effect > 0.5`) or `smaller`
(`relative_effect < 0.5`) one-sided test assumes the effect
occurs only in one direction, leading to a smaller required
sample size. However, if the true effect is in the opposite
direction, the `one-sided` test has virtually no power to
detect it. Additionally, if a two-sided test ends up being
used instead of the planned one-sided test, the original
sample size may be insufficient, resulting in an underpowered
study. It is important to carefully consider these trade-offs
when planning a study.
For `nobs_ratio > 1`, `nobs_ratio = 1`, or `nobs_ratio < 1`,
the reference group sample size is larger, equal to, or smaller
than the treatment group sample size, respectively.
Examples
--------
The data for the placebo group of a clinical trial published in
Thall and Vail [2]_ is shown below. A relevant effect for the treatment
under investigation is considered to be a 50% reduction in the number
of seizures. To compute the required sample size with a power of 0.8
and holding the type I error rate at 0.05, we generate synthetic data
for the treatment group under the alternative assuming this reduction.
>>> from statsmodels.stats.nonparametric import samplesize_rank_compare_onetail
>>> import numpy as np
>>> reference_sample = np.array([3, 3, 5, 4, 21, 7, 2, 12, 5, 0, 22, 4, 2, 12,
... 9, 5, 3, 29, 5, 7, 4, 4, 5, 8, 25, 1, 2, 12])
>>> # Apply 50% reduction in seizure counts and floor operation
>>> synthetic_sample = np.floor(reference_sample / 2)
>>> result = samplesize_rank_compare_onetail(
... synthetic_sample=synthetic_sample,
... reference_sample=reference_sample,
... alpha=0.05, power=0.8
... )
>>> print(f"Total sample size: {result.nobs_total}, "
... f"Treatment group: {result.nobs_treat}, "
... f"Reference group: {result.nobs_ref}")
References
----------
.. [1] Happ, M., Bathke, A. C., and Brunner, E. "Optimal sample size
planning for the Wilcoxon-Mann-Whitney test". Statistics in Medicine.
Vol. 38(2019): 363-375. https://doi.org/10.1002/sim.7983.
.. [2] Thall, P. F., and Vail, S. C. "Some covariance models for longitudinal
count data with overdispersion". Biometrics, pp. 657-671, 1990. | samplesize_rank_compare_onetail | python | statsmodels/statsmodels | statsmodels/stats/nonparametric.py | https://github.com/statsmodels/statsmodels/blob/master/statsmodels/stats/nonparametric.py | BSD-3-Clause |
def grad(self, params=None, **kwds):
"""First derivative, jacobian of func evaluated at params.
Parameters
----------
params : None or ndarray
Values at which gradient is evaluated. If params is None, then
the attached params are used.
TODO: should we drop this
kwds : keyword arguments
These keyword arguments are passed without changes to the calculation
of numerical derivatives. These are only used if a `deriv` function
was not provided.
Returns
-------
grad : ndarray
gradient or jacobian of the function
"""
if params is None:
params = self.params
if self._grad is not None:
return self._grad(params)
else:
# copied from discrete_margins
try:
from statsmodels.tools.numdiff import approx_fprime_cs
jac = approx_fprime_cs(params, self.fun, **kwds)
except TypeError: # norm.cdf doesn't take complex values
from statsmodels.tools.numdiff import approx_fprime
jac = approx_fprime(params, self.fun, **kwds)
return jac | First derivative, jacobian of func evaluated at params.
Parameters
----------
params : None or ndarray
Values at which gradient is evaluated. If params is None, then
the attached params are used.
TODO: should we drop this
kwds : keyword arguments
These keyword arguments are passed without changes to the calculation
of numerical derivatives. These are only used if a `deriv` function
was not provided.
Returns
-------
grad : ndarray
gradient or jacobian of the function | grad | python | statsmodels/statsmodels | statsmodels/stats/_delta_method.py | https://github.com/statsmodels/statsmodels/blob/master/statsmodels/stats/_delta_method.py | BSD-3-Clause |
def cov(self):
"""Covariance matrix of the transformed random variable.
"""
g = self.grad()
covar = np.dot(np.dot(g, self.cov_params), g.T)
return covar | Covariance matrix of the transformed random variable. | cov | python | statsmodels/statsmodels | statsmodels/stats/_delta_method.py | https://github.com/statsmodels/statsmodels/blob/master/statsmodels/stats/_delta_method.py | BSD-3-Clause |
def predicted(self):
"""Value of the function evaluated at the attached params.
Note: This is not equal to the expected value if the transformation is
nonlinear. If params is the maximum likelihood estimate, then
`predicted` is the maximum likelihood estimate of the value of the
nonlinear function.
"""
predicted = self.fun(self.params)
# TODO: why do I need to squeeze in poisson example
if predicted.ndim > 1:
predicted = predicted.squeeze()
return predicted | Value of the function evaluated at the attached params.
Note: This is not equal to the expected value if the transformation is
nonlinear. If params is the maximum likelihood estimate, then
`predicted` is the maximum likelihood estimate of the value of the
nonlinear function. | predicted | python | statsmodels/statsmodels | statsmodels/stats/_delta_method.py | https://github.com/statsmodels/statsmodels/blob/master/statsmodels/stats/_delta_method.py | BSD-3-Clause |
def wald_test(self, value):
"""Joint hypothesis tests that H0: f(params) = value.
The alternative hypothesis is two-sided H1: f(params) != value.
Warning: this might be replaced with a more general version that returns
ContrastResults.
Currently uses the chi-square distribution; a use_f option is not yet
implemented.
Parameters
----------
value : float or ndarray
value of f(params) under the Null Hypothesis
Returns
-------
statistic : float
Value of the test statistic.
pvalue : float
The p-value for the hypothesis test, based on the chi-square
distribution; implies a two-sided hypothesis test.
"""
# TODO: add use_t option or not?
m = self.predicted()
v = self.cov()
df_constraints = np.size(m)
diff = m - value
lmstat = np.dot(np.dot(diff.T, np.linalg.inv(v)), diff)
        return lmstat, stats.chi2.sf(lmstat, df_constraints) | Joint hypothesis test that H0: f(params) = value.
The alternative hypothesis is two-sided H1: f(params) != value.
Warning: this might be replaced with a more general version that returns
ContrastResults.
Currently uses the chi-square distribution; a use_f option is not yet
implemented.
Parameters
----------
value : float or ndarray
value of f(params) under the Null Hypothesis
Returns
-------
statistic : float
Value of the test statistic.
pvalue : float
The p-value for the hypothesis test, based on the chi-square
distribution; implies a two-sided hypothesis test.
def var(self):
"""standard error for each equation (row) treated separately
"""
g = self.grad()
var = (np.dot(g, self.cov_params) * g).sum(-1)
if var.ndim == 2:
var = var.T
        return var | variance for each equation (row) treated separately
def se_vectorized(self):
"""standard error for each equation (row) treated separately
"""
var = self.var()
return np.sqrt(var) | standard error for each equation (row) treated separately | se_vectorized | python | statsmodels/statsmodels | statsmodels/stats/_delta_method.py | https://github.com/statsmodels/statsmodels/blob/master/statsmodels/stats/_delta_method.py | BSD-3-Clause |
def conf_int(self, alpha=0.05, use_t=False, df=None, var_extra=None,
predicted=None, se=None):
"""
Confidence interval for predicted based on delta method.
Parameters
----------
alpha : float, optional
The significance level for the confidence interval.
i.e., the default `alpha` = .05 returns a 95% confidence interval.
use_t : boolean
If use_t is False (default), then the normal distribution is used
for the confidence interval, otherwise the t distribution with
`df` degrees of freedom is used.
df : int or float
degrees of freedom for t distribution. Only used and required if
use_t is True.
var_extra : None or array_like float
Additional variance that is added to the variance based on the
delta method. This can be used to obtain confidence intervals for
new observations (prediction interval).
predicted : ndarray (float)
Predicted value, can be used to avoid repeated calculations if it
is already available.
se : ndarray (float)
Standard error, can be used to avoid repeated calculations if it
is already available.
Returns
-------
conf_int : array
Each row contains [lower, upper] limits of the confidence interval
for the corresponding parameter. The first column contains all
lower, the second column contains all upper limits.
"""
# TODO: predicted and se as arguments to avoid duplicate calculations
# or leave unchanged?
if not use_t:
dist = stats.norm
dist_args = ()
else:
if df is None:
raise ValueError('t distribution requires df')
dist = stats.t
dist_args = (df,)
if predicted is None:
predicted = self.predicted()
if se is None:
se = self.se_vectorized()
if var_extra is not None:
se = np.sqrt(se**2 + var_extra)
q = dist.ppf(1 - alpha / 2., *dist_args)
lower = predicted - q * se
upper = predicted + q * se
ci = np.column_stack((lower, upper))
if ci.shape[1] != 2:
raise RuntimeError('something wrong: ci not 2 columns')
return ci | Confidence interval for predicted based on delta method.
Parameters
----------
alpha : float, optional
The significance level for the confidence interval.
i.e., the default `alpha` = .05 returns a 95% confidence interval.
use_t : boolean
If use_t is False (default), then the normal distribution is used
for the confidence interval, otherwise the t distribution with
`df` degrees of freedom is used.
df : int or float
degrees of freedom for t distribution. Only used and required if
use_t is True.
var_extra : None or array_like float
Additional variance that is added to the variance based on the
delta method. This can be used to obtain confidence intervals for
new observations (prediction interval).
predicted : ndarray (float)
Predicted value, can be used to avoid repeated calculations if it
is already available.
se : ndarray (float)
Standard error, can be used to avoid repeated calculations if it
is already available.
Returns
-------
conf_int : array
Each row contains [lower, upper] limits of the confidence interval
for the corresponding parameter. The first column contains all
lower, the second column contains all upper limits. | conf_int | python | statsmodels/statsmodels | statsmodels/stats/_delta_method.py | https://github.com/statsmodels/statsmodels/blob/master/statsmodels/stats/_delta_method.py | BSD-3-Clause |
def summary(self, xname=None, alpha=0.05, title=None, use_t=False,
df=None):
"""Summarize the Results of the nonlinear transformation.
This provides a parameter table equivalent to `t_test` and reuses
`ContrastResults`.
Parameters
----------
xname : list of strings, optional
Default is `c_##` for ## in range(p), where p is the number of regressors.
alpha : float
Significance level for the confidence intervals. Default is
alpha = 0.05 which implies a confidence level of 95%.
title : string, optional
Title for the params table. If not None, then this replaces the
default title
use_t : boolean
If use_t is False (default), then the normal distribution is used
for the confidence interval, otherwise the t distribution with
`df` degrees of freedom is used.
df : int or float
degrees of freedom for t distribution. Only used and required if
use_t is True.
Returns
-------
smry : string or Summary instance
This contains a parameter results table in the case of t or z test
in the same form as the parameter results table in the model
results summary.
For F or Wald test, the return is a string.
"""
# this is an experimental reuse of ContrastResults
from statsmodels.stats.contrast import ContrastResults
predicted = self.predicted()
se = self.se_vectorized()
# TODO check shape for scalar case, ContrastResults requires iterable
predicted = np.atleast_1d(predicted)
if predicted.ndim > 1:
predicted = predicted.squeeze()
se = np.atleast_1d(se)
statistic = predicted / se
if use_t:
df_resid = df
cr = ContrastResults(effect=predicted, t=statistic, sd=se,
df_denom=df_resid)
else:
cr = ContrastResults(effect=predicted, statistic=statistic, sd=se,
df_denom=None, distribution='norm')
return cr.summary(xname=xname, alpha=alpha, title=title) | Summarize the Results of the nonlinear transformation.
This provides a parameter table equivalent to `t_test` and reuses
`ContrastResults`.
Parameters
----------
xname : list of strings, optional
Default is `c_##` for ## in range(p), where p is the number of regressors.
alpha : float
Significance level for the confidence intervals. Default is
alpha = 0.05 which implies a confidence level of 95%.
title : string, optional
Title for the params table. If not None, then this replaces the
default title
use_t : boolean
If use_t is False (default), then the normal distribution is used
for the confidence interval, otherwise the t distribution with
`df` degrees of freedom is used.
df : int or float
degrees of freedom for t distribution. Only used and required if
use_t is True.
Returns
-------
smry : string or Summary instance
This contains a parameter results table in the case of t or z test
in the same form as the parameter results table in the model
results summary.
For F or Wald test, the return is a string. | summary | python | statsmodels/statsmodels | statsmodels/stats/_delta_method.py | https://github.com/statsmodels/statsmodels/blob/master/statsmodels/stats/_delta_method.py | BSD-3-Clause |
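Taken together, these methods suggest a delta-method helper class. The sketch below assumes it is statsmodels.stats._delta_method.NonlinearDeltaCov with a (func, params, cov_params) constructor; that signature is an assumption about code outside this excerpt, and the inputs are invented:

import numpy as np
from statsmodels.stats._delta_method import NonlinearDeltaCov

def ratio(params):
    # nonlinear transformation of interest: a ratio of two parameters
    return np.atleast_1d(params[0] / params[1])

params = np.array([1.0, 2.0])
cov_params = np.array([[0.04, 0.01],
                       [0.01, 0.09]])
nlc = NonlinearDeltaCov(ratio, params, cov_params)  # signature assumed
print(nlc.predicted())       # array([0.5])
print(nlc.se_vectorized())   # delta-method standard error
print(nlc.conf_int(alpha=0.05))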
def _make_df_square(table):
"""
Reindex a pandas DataFrame so that it becomes square, meaning that
the row and column indices contain the same values, in the same
order. The row and column index are extended to achieve this.
"""
if not isinstance(table, pd.DataFrame):
return table
# If the table is not square, make it square
if not table.index.equals(table.columns):
ix = list(set(table.index) | set(table.columns))
ix.sort()
table = table.reindex(index=ix, columns=ix, fill_value=0)
# Ensures that the rows and columns are in the same order.
table = table.reindex(table.columns)
return table | Reindex a pandas DataFrame so that it becomes square, meaning that
the row and column indices contain the same values, in the same
order. The row and column index are extended to achieve this. | _make_df_square | python | statsmodels/statsmodels | statsmodels/stats/contingency_tables.py | https://github.com/statsmodels/statsmodels/blob/master/statsmodels/stats/contingency_tables.py | BSD-3-Clause |
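The reindexing idiom itself, shown standalone (labels are assumed; this mirrors the helper rather than importing the private function):

import pandas as pd

df = pd.DataFrame([[1, 2]], index=["a"], columns=["a", "b"])
ix = sorted(set(df.index) | set(df.columns))
square = df.reindex(index=ix, columns=ix, fill_value=0)
# square now has rows and columns ["a", "b"], padded with zeros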
def from_data(cls, data, shift_zeros=True):
"""
Construct a Table object from data.
Parameters
----------
data : array_like
The raw data, from which a contingency table is constructed
using the first two columns.
shift_zeros : bool
If True and any cell count is zero, add 0.5 to all values
in the table.
Returns
-------
A Table instance.
"""
if isinstance(data, pd.DataFrame):
table = pd.crosstab(data.iloc[:, 0], data.iloc[:, 1])
else:
table = pd.crosstab(data[:, 0], data[:, 1])
return cls(table, shift_zeros) | Construct a Table object from data.
Parameters
----------
data : array_like
The raw data, from which a contingency table is constructed
using the first two columns.
shift_zeros : bool
If True and any cell count is zero, add 0.5 to all values
in the table.
Returns
-------
A Table instance. | from_data | python | statsmodels/statsmodels | statsmodels/stats/contingency_tables.py | https://github.com/statsmodels/statsmodels/blob/master/statsmodels/stats/contingency_tables.py | BSD-3-Clause |
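A short sketch of constructing a table from raw observations (the data frame is invented for illustration):

import pandas as pd
from statsmodels.stats.contingency_tables import Table

df = pd.DataFrame({"exposure": ["a", "a", "b", "b", "b", "a"],
                   "outcome":  ["x", "y", "x", "x", "y", "x"]})
tab = Table.from_data(df)   # cross-tabulates the first two columns
print(tab.table_orig)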
def test_nominal_association(self):
"""
Assess independence for nominal factors.
Assessment of independence between rows and columns using
chi^2 testing. The rows and columns are treated as nominal
(unordered) categorical variables.
Returns
-------
A bunch containing the following attributes:
statistic : float
The chi^2 test statistic.
df : int
The degrees of freedom of the reference distribution
pvalue : float
The p-value for the test.
"""
statistic = np.asarray(self.chi2_contribs).sum()
df = np.prod(np.asarray(self.table.shape) - 1)
pvalue = 1 - stats.chi2.cdf(statistic, df)
b = _Bunch()
b.statistic = statistic
b.df = df
b.pvalue = pvalue
return b | Assess independence for nominal factors.
Assessment of independence between rows and columns using
chi^2 testing. The rows and columns are treated as nominal
(unordered) categorical variables.
Returns
-------
A bunch containing the following attributes:
statistic : float
The chi^2 test statistic.
df : int
The degrees of freedom of the reference distribution
pvalue : float
The p-value for the test. | test_nominal_association | python | statsmodels/statsmodels | statsmodels/stats/contingency_tables.py | https://github.com/statsmodels/statsmodels/blob/master/statsmodels/stats/contingency_tables.py | BSD-3-Clause |
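A minimal sketch of the chi-square independence test (the counts are assumed):

import numpy as np
from statsmodels.stats.contingency_tables import Table

tab = Table(np.array([[10, 20, 30],
                      [15, 15, 30]]))
res = tab.test_nominal_association()
print(res.statistic, res.df, res.pvalue)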
def test_ordinal_association(self, row_scores=None, col_scores=None):
    """
    Assess independence between two ordinal variables.

    This is the 'linear by linear' association test, which uses
    weights or scores to target the test to have more power
    against ordered alternatives.

    Parameters
    ----------
    row_scores : array_like
        An array of numeric row scores
    col_scores : array_like
        An array of numeric column scores

    Returns
    -------
    A bunch with the following attributes:

    statistic : float
        The test statistic.
    null_mean : float
        The expected value of the test statistic under the null
        hypothesis.
    null_sd : float
        The standard deviation of the test statistic under the
        null hypothesis.
    zscore : float
        The Z-score for the test statistic.
    pvalue : float
        The p-value for the test.

    Notes
    -----
    The scores define the trend to which the test is most sensitive.
    Using the default row and column scores gives the
    Cochran-Armitage trend test.
    """
    if row_scores is None:
        row_scores = np.arange(self.table.shape[0])
    if col_scores is None:
        col_scores = np.arange(self.table.shape[1])
    if len(row_scores) != self.table.shape[0]:
        msg = ("The length of `row_scores` must match the first " +
               "dimension of `table`.")
        raise ValueError(msg)
    if len(col_scores) != self.table.shape[1]:
        msg = ("The length of `col_scores` must match the second " +
               "dimension of `table`.")
        raise ValueError(msg)
    # The test statistic
    statistic = np.dot(row_scores, np.dot(self.table, col_scores))
    # Some needed quantities
    n_obs = self.table.sum()
    rtot = self.table.sum(1)
    um = np.dot(row_scores, rtot)
    u2m = np.dot(row_scores**2, rtot)
    ctot = self.table.sum(0)
    vn = np.dot(col_scores, ctot)
    v2n = np.dot(col_scores**2, ctot)
    # The null mean and variance of the test statistic
    e_stat = um * vn / n_obs
    v_stat = (u2m - um**2 / n_obs) * (v2n - vn**2 / n_obs) / (n_obs - 1)
    sd_stat = np.sqrt(v_stat)
    zscore = (statistic - e_stat) / sd_stat
    pvalue = 2 * stats.norm.cdf(-np.abs(zscore))
    b = _Bunch()
    b.statistic = statistic
    b.null_mean = e_stat
    b.null_sd = sd_stat
    b.zscore = zscore
    b.pvalue = pvalue
    return b | Assess independence between two ordinal variables.

This is the 'linear by linear' association test, which uses
weights or scores to target the test to have more power
against ordered alternatives.

Parameters
----------
row_scores : array_like
An array of numeric row scores
col_scores : array_like
An array of numeric column scores

Returns
-------
A bunch with the following attributes:
statistic : float
The test statistic.
null_mean : float
The expected value of the test statistic under the null
hypothesis.
null_sd : float
The standard deviation of the test statistic under the
null hypothesis.
zscore : float
The Z-score for the test statistic.
pvalue : float
The p-value for the test.

Notes
-----
The scores define the trend to which the test is most sensitive.
Using the default row and column scores gives the
Cochran-Armitage trend test. | test_ordinal_association | python | statsmodels/statsmodels | statsmodels/stats/contingency_tables.py | https://github.com/statsmodels/statsmodels/blob/master/statsmodels/stats/contingency_tables.py | BSD-3-Clause |
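
A usage sketch with invented dose/response counts, assuming the public statsmodels `Table` class:

import numpy as np
import statsmodels.api as sm

counts = np.asarray([[20, 10], [15, 15], [10, 20]])  # invented 3x2 counts
tab = sm.stats.Table(counts, shift_zeros=False)
res = tab.test_ordinal_association()                 # default scores 0, 1, 2 and 0, 1
print(res.zscore, res.pvalue)
# Custom scores, e.g. actual dose levels for the rows:
res2 = tab.test_ordinal_association(row_scores=np.r_[0, 1, 4])
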
def marginal_probabilities(self):
    """
    Estimate marginal probability distributions for the rows and columns.

    Returns
    -------
    row : ndarray
        Marginal row probabilities
    col : ndarray
        Marginal column probabilities
    """
    n = self.table.sum()
    row = self.table.sum(1) / n
    col = self.table.sum(0) / n
    if isinstance(self.table_orig, pd.DataFrame):
        row = pd.Series(row, self.table_orig.index)
        col = pd.Series(col, self.table_orig.columns)
    return row, col | Estimate marginal probability distributions for the rows and columns.

Returns
-------
row : ndarray
Marginal row probabilities
col : ndarray
Marginal column probabilities | marginal_probabilities | python | statsmodels/statsmodels | statsmodels/stats/contingency_tables.py | https://github.com/statsmodels/statsmodels/blob/master/statsmodels/stats/contingency_tables.py | BSD-3-Clause |
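
Note that code later in this file accesses this as an attribute rather than calling it (see `fittedvalues` below), so in the library it is exposed as a cached property. A usage sketch with invented counts:

import numpy as np
import statsmodels.api as sm

tab = sm.stats.Table(np.asarray([[10, 20], [30, 40]]), shift_zeros=False)
row, col = tab.marginal_probabilities   # accessed as a property, not called
print(row)   # [0.3, 0.7] -- row sums divided by the grand total
print(col)   # [0.4, 0.6]
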
def independence_probabilities(self):
    """
    Returns fitted joint probabilities under independence.

    The returned table is outer(row, column), where row and
    column are the estimated marginal distributions
    of the rows and columns.
    """
    row, col = self.marginal_probabilities
    itab = np.outer(row, col)
    if isinstance(self.table_orig, pd.DataFrame):
        itab = pd.DataFrame(itab, self.table_orig.index,
                            self.table_orig.columns)
    return itab | Returns fitted joint probabilities under independence.

The returned table is outer(row, column), where row and
column are the estimated marginal distributions
of the rows and columns. | independence_probabilities | python | statsmodels/statsmodels | statsmodels/stats/contingency_tables.py | https://github.com/statsmodels/statsmodels/blob/master/statsmodels/stats/contingency_tables.py | BSD-3-Clause |
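
Continuing the sketch above (same assumed `Table` instance and invented counts):

import numpy as np
import statsmodels.api as sm

tab = sm.stats.Table(np.asarray([[10, 20], [30, 40]]), shift_zeros=False)
probs = tab.independence_probabilities  # outer(row, col) of the marginals
print(probs)          # [[0.12, 0.18], [0.28, 0.42]]
print(probs.sum())    # 1.0 -- a proper joint distribution
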
def fittedvalues(self):
    """
    Returns fitted cell counts under independence.

    The returned cell counts are estimates under a model
    where the rows and columns of the table are independent.
    """
    probs = self.independence_probabilities
    fit = self.table.sum() * probs
    return fit | Returns fitted cell counts under independence.

The returned cell counts are estimates under a model
where the rows and columns of the table are independent. | fittedvalues | python | statsmodels/statsmodels | statsmodels/stats/contingency_tables.py | https://github.com/statsmodels/statsmodels/blob/master/statsmodels/stats/contingency_tables.py | BSD-3-Clause |
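
A usage sketch with the same invented counts as above, assuming the public statsmodels `Table` class:

import numpy as np
import statsmodels.api as sm

tab = sm.stats.Table(np.asarray([[10, 20], [30, 40]]), shift_zeros=False)
fit = tab.fittedvalues          # grand total times independence_probabilities
print(fit)                      # [[12., 18.], [28., 42.]]
print(tab.table - fit)          # raw departures from independence
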