code | docstring | func_name | language | repo | path | url | license
---|---|---|---|---|---|---|---|
def test_proportions_2indep(count1, nobs1, count2, nobs2, value=None,
method=None, compare='diff',
alternative='two-sided', correction=True,
return_results=True):
"""
Hypothesis test for comparing two independent proportions
This assumes that we have two independent binomial samples.
The null and alternative hypotheses are
for compare = 'diff'
- H0: prop1 - prop2 - value = 0
- H1: prop1 - prop2 - value != 0 if alternative = 'two-sided'
- H1: prop1 - prop2 - value > 0 if alternative = 'larger'
- H1: prop1 - prop2 - value < 0 if alternative = 'smaller'
for compare = 'ratio'
- H0: prop1 / prop2 - value = 0
- H1: prop1 / prop2 - value != 0 if alternative = 'two-sided'
- H1: prop1 / prop2 - value > 0 if alternative = 'larger'
- H1: prop1 / prop2 - value < 0 if alternative = 'smaller'
for compare = 'odds-ratio'
- H0: or - value = 0
- H1: or - value != 0 if alternative = 'two-sided'
- H1: or - value > 0 if alternative = 'larger'
- H1: or - value < 0 if alternative = 'smaller'
where odds-ratio or = prop1 / (1 - prop1) / (prop2 / (1 - prop2))
Parameters
----------
count1 : int
Count for first sample.
nobs1 : int
Sample size for first sample.
count2 : int
Count for the second sample.
nobs2 : int
Sample size for the second sample.
value : float
Value of the difference, risk ratio or odds ratio of 2 independent
proportions under the null hypothesis.
Default is equal proportions, 0 for diff and 1 for risk-ratio and for
odds-ratio.
method : string
Method for computing the hypothesis test. If method is None, then a
default method is used. The default might change as more methods are
added.
diff:
- 'wald',
- 'agresti-caffo'
- 'score': if correction is True, then this uses the degrees of freedom
correction ``nobs / (nobs - 1)`` as in Miettinen Nurminen 1985
ratio:
- 'log': wald test using log transformation
- 'log-adjusted': wald test using log transformation,
adds 0.5 to counts
- 'score': if correction is True, then this uses the degrees of freedom
correction ``nobs / (nobs - 1)`` as in Miettinen Nurminen 1985
odds-ratio:
- 'logit': wald test using logit transformation
- 'logit-adjusted': wald test using logit transformation,
adds 0.5 to counts
- 'logit-smoothed': wald test using logit transformation, biases
cell counts towards independence by adding two observations in
total.
- 'score': if correction is True, then this uses the degrees of freedom
correction ``nobs / (nobs - 1)`` as in Miettinen Nurminen 1985
compare : {'diff', 'ratio', 'odds-ratio'}
If compare is `diff`, then the hypothesis test is for the risk
difference diff = p1 - p2.
If compare is `ratio`, then the hypothesis test is for the
risk ratio defined by ratio = p1 / p2.
If compare is `odds-ratio`, then the hypothesis test is for the
odds-ratio defined by or = p1 / (1 - p1) / (p2 / (1 - p2)).
alternative : {'two-sided', 'smaller', 'larger'}
alternative hypothesis, which can be two-sided or either one of the
one-sided tests.
correction : bool
If correction is True (default), then the Miettinen and Nurminen
small sample correction to the variance nobs / (nobs - 1) is used.
Applies only if method='score'.
return_results : bool
If true, then a results instance with extra information is returned,
otherwise a tuple with statistic and pvalue is returned.
Returns
-------
results : results instance or tuple
If return_results is True, then a results instance with the
information in attributes is returned.
If return_results is False, then only ``statistic`` and ``pvalue``
are returned.
statistic : float
test statistic asymptotically normal distributed N(0, 1)
pvalue : float
p-value based on normal distribution
other attributes :
additional information about the hypothesis test
See Also
--------
tost_proportions_2indep
confint_proportions_2indep
Notes
-----
Status: experimental, API and defaults might still change.
More ``methods`` will be added.
The current default methods are
- 'diff': 'agresti-caffo',
- 'ratio': 'log-adjusted',
- 'odds-ratio': 'logit-adjusted'
"""
method_default = {'diff': 'agresti-caffo',
'ratio': 'log-adjusted',
'odds-ratio': 'logit-adjusted'}
# normalize compare name
if compare.lower() == 'or':
compare = 'odds-ratio'
if method is None:
method = method_default[compare]
method = method.lower()
if method.startswith('agr'):
method = 'agresti-caffo'
if value is None:
# TODO: odds ratio does not work if value=1 for score test
value = 0 if compare == 'diff' else 1
count1, nobs1, count2, nobs2 = map(np.asarray,
[count1, nobs1, count2, nobs2])
p1 = count1 / nobs1
p2 = count2 / nobs2
diff = p1 - p2
ratio = p1 / p2
odds_ratio = p1 / (1 - p1) / p2 * (1 - p2)
res = None
if compare == 'diff':
if method in ['wald', 'agresti-caffo']:
addone = 1 if method == 'agresti-caffo' else 0
count1_, nobs1_ = count1 + addone, nobs1 + 2 * addone
count2_, nobs2_ = count2 + addone, nobs2 + 2 * addone
p1_ = count1_ / nobs1_
p2_ = count2_ / nobs2_
diff_stat = p1_ - p2_ - value
var = p1_ * (1 - p1_) / nobs1_ + p2_ * (1 - p2_) / nobs2_
statistic = diff_stat / np.sqrt(var)
distr = 'normal'
elif method.startswith('newcomb'):
msg = 'newcomb not available for hypothesis test'
raise NotImplementedError(msg)
elif method == 'score':
# Note score part is the same call for all compare
res = score_test_proportions_2indep(count1, nobs1, count2, nobs2,
value=value, compare=compare,
alternative=alternative,
correction=correction,
return_results=return_results)
if return_results is False:
statistic, pvalue = res[:2]
distr = 'normal'
# TODO/Note score_test_proportion_2samp returns statistic and
# not diff_stat
diff_stat = None
else:
raise ValueError('method not recognized')
elif compare == 'ratio':
if method in ['log', 'log-adjusted']:
addhalf = 0.5 if method == 'log-adjusted' else 0
count1_, nobs1_ = count1 + addhalf, nobs1 + addhalf
count2_, nobs2_ = count2 + addhalf, nobs2 + addhalf
p1_ = count1_ / nobs1_
p2_ = count2_ / nobs2_
ratio_ = p1_ / p2_
var = (1 / count1_) - 1 / nobs1_ + 1 / count2_ - 1 / nobs2_
diff_stat = np.log(ratio_) - np.log(value)
statistic = diff_stat / np.sqrt(var)
distr = 'normal'
elif method == 'score':
res = score_test_proportions_2indep(count1, nobs1, count2, nobs2,
value=value, compare=compare,
alternative=alternative,
correction=correction,
return_results=return_results)
if return_results is False:
statistic, pvalue = res[:2]
distr = 'normal'
diff_stat = None
else:
raise ValueError('method not recognized')
elif compare == "odds-ratio":
if method in ['logit', 'logit-adjusted', 'logit-smoothed']:
if method in ['logit-smoothed']:
adjusted = _shrink_prob(count1, nobs1, count2, nobs2,
shrink_factor=2, return_corr=False)[0]
count1_, nobs1_, count2_, nobs2_ = adjusted
else:
addhalf = 0.5 if method == 'logit-adjusted' else 0
count1_, nobs1_ = count1 + addhalf, nobs1 + 2 * addhalf
count2_, nobs2_ = count2 + addhalf, nobs2 + 2 * addhalf
p1_ = count1_ / nobs1_
p2_ = count2_ / nobs2_
odds_ratio_ = p1_ / (1 - p1_) / p2_ * (1 - p2_)
var = (1 / count1_ + 1 / (nobs1_ - count1_) +
1 / count2_ + 1 / (nobs2_ - count2_))
diff_stat = np.log(odds_ratio_) - np.log(value)
statistic = diff_stat / np.sqrt(var)
distr = 'normal'
elif method == 'score':
res = score_test_proportions_2indep(count1, nobs1, count2, nobs2,
value=value, compare=compare,
alternative=alternative,
correction=correction,
return_results=return_results)
if return_results is False:
statistic, pvalue = res[:2]
distr = 'normal'
diff_stat = None
else:
raise ValueError('method "%s" not recognized' % method)
else:
raise ValueError('compare "%s" not recognized' % compare)
if distr == 'normal' and diff_stat is not None:
statistic, pvalue = _zstat_generic2(diff_stat, np.sqrt(var),
alternative=alternative)
if return_results:
if res is None:
res = HolderTuple(statistic=statistic,
pvalue=pvalue,
compare=compare,
method=method,
diff=diff,
ratio=ratio,
odds_ratio=odds_ratio,
variance=var,
alternative=alternative,
value=value,
)
else:
# we already have a return result from score test
# add missing attributes
res.diff = diff
res.ratio = ratio
res.odds_ratio = odds_ratio
res.value = value
return res
else:
return statistic, pvalue | Hypothesis test for comparing two independent proportions
| test_proportions_2indep | python | statsmodels/statsmodels | statsmodels/stats/proportion.py | https://github.com/statsmodels/statsmodels/blob/master/statsmodels/stats/proportion.py | BSD-3-Clause |
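A minimal usage sketch (illustrative counts; assumes a statsmodels version that exports this function from statsmodels.stats.proportion, per the row above):

from statsmodels.stats.proportion import test_proportions_2indep

# score test for equality of the two proportions 30/100 and 20/100
res = test_proportions_2indep(30, 100, 20, 100, compare='diff',
                              method='score', alternative='two-sided')
print(res.statistic, res.pvalue)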
def tost_proportions_2indep(count1, nobs1, count2, nobs2, low, upp,
method=None, compare='diff', correction=True):
"""
Equivalence test based on two one-sided `test_proportions_2indep`
This assumes that we have two independent binomial samples.
The null and alternative hypotheses for equivalence testing are
for compare = 'diff'
- H0: prop1 - prop2 <= low or upp <= prop1 - prop2
- H1: low < prop1 - prop2 < upp
for compare = 'ratio'
- H0: prop1 / prop2 <= low or upp <= prop1 / prop2
- H1: low < prop1 / prop2 < upp
for compare = 'odds-ratio'
- H0: or <= low or upp <= or
- H1: low < or < upp
where odds-ratio or = prop1 / (1 - prop1) / (prop2 / (1 - prop2))
Parameters
----------
count1, nobs1 :
count and sample size for first sample
count2, nobs2 :
count and sample size for the second sample
low, upp :
equivalence margin for diff, risk ratio or odds ratio
method : string
method for computing the hypothesis test. If method is None, then a
default method is used. The default might change as more methods are
added.
diff:
- 'wald',
- 'agresti-caffo'
- 'score': if correction is True, then this uses the degrees of freedom
correction ``nobs / (nobs - 1)`` as in Miettinen Nurminen 1985.
ratio:
- 'log': wald test using log transformation
- 'log-adjusted': wald test using log transformation,
adds 0.5 to counts
- 'score': if correction is True, then this uses the degrees of freedom
correction ``nobs / (nobs - 1)`` as in Miettinen Nurminen 1985.
odds-ratio:
- 'logit': wald test using logit transformation
- 'logit-adjusted': wald test using logit transformation,
adds 0.5 to counts
- 'logit-smoothed': wald test using logit transformation, biases
cell counts towards independence by adding two observations in
total.
- 'score': if correction is True, then this uses the degrees of freedom
correction ``nobs / (nobs - 1)`` as in Miettinen Nurminen 1985
compare : string in ['diff', 'ratio', 'odds-ratio']
If compare is `diff`, then the hypothesis test is for
diff = p1 - p2.
If compare is `ratio`, then the hypothesis test is for the
risk ratio defined by ratio = p1 / p2.
If compare is `odds-ratio`, then the hypothesis test is for the
odds-ratio defined by or = p1 / (1 - p1) / (p2 / (1 - p2)).
correction : bool
If correction is True (default), then the Miettinen and Nurminen
small sample correction to the variance nobs / (nobs - 1) is used.
Applies only if method='score'.
Returns
-------
pvalue : float
p-value is the max of the pvalues of the two one-sided tests
t1 : test results
results instance for one-sided hypothesis at the lower margin
t2 : test results
results instance for one-sided hypothesis at the upper margin
See Also
--------
test_proportions_2indep
confint_proportions_2indep
Notes
-----
Status: experimental, API and defaults might still change.
The TOST equivalence test delegates to `test_proportions_2indep` and has
the same method and comparison options.
"""
tt1 = test_proportions_2indep(count1, nobs1, count2, nobs2, value=low,
method=method, compare=compare,
alternative='larger',
correction=correction,
return_results=True)
tt2 = test_proportions_2indep(count1, nobs1, count2, nobs2, value=upp,
method=method, compare=compare,
alternative='smaller',
correction=correction,
return_results=True)
# idx_max = 1 if t1.pvalue < t2.pvalue else 0
idx_max = np.asarray(tt1.pvalue < tt2.pvalue, int)
statistic = np.choose(idx_max, [tt1.statistic, tt2.statistic])
pvalue = np.choose(idx_max, [tt1.pvalue, tt2.pvalue])
res = HolderTuple(statistic=statistic,
pvalue=pvalue,
compare=compare,
method=method,
results_larger=tt1,
results_smaller=tt2,
title="Equivalence test for 2 independent proportions"
)
return res | Equivalence test based on two one-sided `test_proportions_2indep`
| tost_proportions_2indep | python | statsmodels/statsmodels | statsmodels/stats/proportion.py | https://github.com/statsmodels/statsmodels/blob/master/statsmodels/stats/proportion.py | BSD-3-Clause |
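A companion sketch for the TOST call, with an illustrative equivalence margin of +/-0.1 on the risk difference:

from statsmodels.stats.proportion import tost_proportions_2indep

res = tost_proportions_2indep(24, 80, 25, 80, low=-0.1, upp=0.1,
                              method='score', compare='diff')
print(res.pvalue)  # max of the two one-sided p-values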
def _std_2prop_power(diff, p2, ratio=1, alpha=0.05, value=0):
"""
Compute standard error under null and alternative for 2 proportions
helper function for power and sample size computation
"""
if value != 0:
msg = 'non-zero diff under null, value, is not yet implemented'
raise NotImplementedError(msg)
nobs_ratio = ratio
p1 = p2 + diff
# The following contains currently redundant variables that will
# be useful for different options for the null variance
p_pooled = (p1 + p2 * ratio) / (1 + ratio)
# probabilities for the variance for the null statistic
p1_vnull, p2_vnull = p_pooled, p_pooled
p2_alt = p2
p1_alt = p2_alt + diff
std_null = _std_diff_prop(p1_vnull, p2_vnull, ratio=nobs_ratio)
std_alt = _std_diff_prop(p1_alt, p2_alt, ratio=nobs_ratio)
return p_pooled, std_null, std_alt | Compute standard error under null and alternative for 2 proportions
| _std_2prop_power | python | statsmodels/statsmodels | statsmodels/stats/proportion.py | https://github.com/statsmodels/statsmodels/blob/master/statsmodels/stats/proportion.py | BSD-3-Clause |
def power_proportions_2indep(diff, prop2, nobs1, ratio=1, alpha=0.05,
value=0, alternative='two-sided',
return_results=True):
"""
Power for ztest that two independent proportions are equal
This assumes that the variance is based on the pooled proportion
under the null and the non-pooled variance under the alternative
Parameters
----------
diff : float
difference between proportion 1 and 2 under the alternative
prop2 : float
proportion for the reference case, prop2, proportions for the
first case will be computed using p2 and diff
p1 = p2 + diff
nobs1 : float or int
number of observations in sample 1
ratio : float
sample size ratio, nobs2 = ratio * nobs1
alpha : float in interval (0,1)
Significance level, e.g. 0.05, is the probability of a type I
error, that is wrong rejections if the Null Hypothesis is true.
value : float
currently only `value=0`, i.e. equality testing, is supported
alternative : string, 'two-sided' (default), 'larger', 'smaller'
Alternative hypothesis whether the power is calculated for a
two-sided (default) or one sided test. The one-sided test can be
either 'larger', 'smaller'.
return_results : bool
If true, then a results instance with extra information is returned,
otherwise only the computed power is returned.
Returns
-------
results : results instance or float
If return_results is True, then a results instance with the
information in attributes is returned.
If return_results is False, then only the power is returned.
power : float
Power of the test, e.g. 0.8, is one minus the probability of a
type II error. Power is the probability that the test correctly
rejects the Null Hypothesis if the Alternative Hypothesis is true.
Other attributes in results instance include :
p_pooled
pooled proportion, used for std_null
std_null
standard error of difference under the null hypothesis (without
sqrt(nobs1))
std_alt
standard error of difference under the alternative hypothesis
(without sqrt(nobs1))
"""
# TODO: avoid possible circular import, check if needed
from statsmodels.stats.power import normal_power_het
p_pooled, std_null, std_alt = _std_2prop_power(diff, prop2, ratio=ratio,
alpha=alpha, value=value)
pow_ = normal_power_het(diff, nobs1, alpha, std_null=std_null,
std_alternative=std_alt,
alternative=alternative)
if return_results:
res = Holder(power=pow_,
p_pooled=p_pooled,
std_null=std_null,
std_alt=std_alt,
nobs1=nobs1,
nobs2=ratio * nobs1,
nobs_ratio=ratio,
alpha=alpha,
)
return res
else:
return pow_ | Power for ztest that two independent proportions are equal
| power_proportions_2indep | python | statsmodels/statsmodels | statsmodels/stats/proportion.py | https://github.com/statsmodels/statsmodels/blob/master/statsmodels/stats/proportion.py | BSD-3-Clause |
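An illustrative call (assumed inputs): the power to detect a difference of 0.1 over a reference proportion of 0.3 with 200 observations per arm:

from statsmodels.stats.proportion import power_proportions_2indep

res = power_proportions_2indep(diff=0.1, prop2=0.3, nobs1=200, ratio=1,
                               alpha=0.05, alternative='two-sided')
print(res.power)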
def samplesize_proportions_2indep_onetail(diff, prop2, power, ratio=1,
alpha=0.05, value=0,
alternative='two-sided'):
"""
Required sample size assuming normal distribution based on one tail
This uses an explicit computation for the sample size that is required
to achieve a given power corresponding to the appropriate tails of the
normal distribution. This ignores the far tail in a two-sided test
which is negligible in the common case when alternative and null are
far apart.
Parameters
----------
diff : float
Difference between proportion 1 and 2 under the alternative
prop2 : float
proportion for the reference case, prop2, proportions for the
first case will be computed using p2 and diff
p1 = p2 + diff
power : float
Power for which sample size is computed.
ratio : float
Sample size ratio, nobs2 = ratio * nobs1
alpha : float in interval (0,1)
Significance level, e.g. 0.05, is the probability of a type I
error, that is wrong rejections if the Null Hypothesis is true.
value : float
Currently only `value=0`, i.e. equality testing, is supported
alternative : string, 'two-sided' (default), 'larger', 'smaller'
Alternative hypothesis whether the power is calculated for a
two-sided (default) or one sided test. In the case of a one-sided
alternative, it is assumed that the test is in the appropriate tail.
Returns
-------
nobs1 : float
Number of observations in sample 1.
"""
# TODO: avoid possible circular import, check if needed
from statsmodels.stats.power import normal_sample_size_one_tail
if alternative in ['two-sided', '2s']:
alpha = alpha / 2
_, std_null, std_alt = _std_2prop_power(diff, prop2, ratio=ratio,
alpha=alpha, value=value)
nobs = normal_sample_size_one_tail(diff, power, alpha, std_null=std_null,
std_alternative=std_alt)
return nobs | Required sample size assuming normal distribution based on one tail
| samplesize_proportions_2indep_onetail | python | statsmodels/statsmodels | statsmodels/stats/proportion.py | https://github.com/statsmodels/statsmodels/blob/master/statsmodels/stats/proportion.py | BSD-3-Clause |
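A matching sketch that solves the sample-size question for the same assumed effect at 80% power:

from statsmodels.stats.proportion import samplesize_proportions_2indep_onetail

n1 = samplesize_proportions_2indep_onetail(diff=0.1, prop2=0.3, power=0.8,
                                           ratio=1, alpha=0.05,
                                           alternative='two-sided')
print(n1)  # required nobs1; nobs2 = ratio * n1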
def _score_confint_inversion(count1, nobs1, count2, nobs2, compare='diff',
alpha=0.05, correction=True):
"""
Compute score confidence interval by inverting score test
Parameters
----------
count1, nobs1 :
Count and sample size for first sample.
count2, nobs2 :
Count and sample size for the second sample.
compare : string in ['diff', 'ratio', 'odds-ratio']
If compare is `diff`, then the confidence interval is for
diff = p1 - p2.
If compare is `ratio`, then the confidence interval is for the
risk ratio defined by ratio = p1 / p2.
If compare is `odds-ratio`, then the confidence interval is for the
odds-ratio defined by or = p1 / (1 - p1) / (p2 / (1 - p2)).
alpha : float in interval (0,1)
Significance level, e.g. 0.05, is the probability of a type I
error, that is wrong rejections if the Null Hypothesis is true.
correction : bool
If correction is True (default), then the Miettinen and Nurminen
small sample correction to the variance nobs / (nobs - 1) is used.
Applies only if method='score'.
Returns
-------
low : float
Lower confidence bound.
upp : float
Upper confidence bound.
"""
def func(v):
r = test_proportions_2indep(count1, nobs1, count2, nobs2,
value=v, compare=compare, method='score',
correction=correction,
alternative="two-sided")
return r.pvalue - alpha
rt0 = test_proportions_2indep(count1, nobs1, count2, nobs2,
value=0, compare=compare, method='score',
correction=correction,
alternative="two-sided")
# use default method to get starting values
# this will not work if score confint becomes default
# maybe use "wald" as alias that works for all compare statistics
use_method = {"diff": "wald", "ratio": "log", "odds-ratio": "logit"}
rci0 = confint_proportions_2indep(count1, nobs1, count2, nobs2,
method=use_method[compare],
compare=compare, alpha=alpha)
# Note diff might be negative
ub = rci0[1] + np.abs(rci0[1]) * 0.5
lb = rci0[0] - np.abs(rci0[0]) * 0.25
if compare == 'diff':
param = rt0.diff
# 1 might not be the correct upper bound because
# rootfinding is for the `diff` and not for a probability.
ub = min(ub, 0.99999)
elif compare == 'ratio':
param = rt0.ratio
ub *= 2 # add more buffer
if compare == 'odds-ratio':
param = rt0.odds_ratio
# root finding for confint bounds
upp = optimize.brentq(func, param, ub)
low = optimize.brentq(func, lb, param)
return low, upp | Compute score confidence interval by inverting score test
| _score_confint_inversion | python | statsmodels/statsmodels | statsmodels/stats/proportion.py | https://github.com/statsmodels/statsmodels/blob/master/statsmodels/stats/proportion.py | BSD-3-Clause |
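This inversion is what the public score-method interval builds on; a sketch of the corresponding public call (illustrative counts):

from statsmodels.stats.proportion import confint_proportions_2indep

low, upp = confint_proportions_2indep(30, 100, 20, 100,
                                      compare='diff', method='score')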
def _confint_riskratio_koopman(count1, nobs1, count2, nobs2, alpha=0.05,
correction=True):
"""
Score confidence interval for ratio or proportions, Koopman/Nam
signature not consistent with other functions
When correction is True, then the small sample correction nobs / (nobs - 1)
by Miettinen/Nurminen is used.
"""
# The names below follow Nam
x0, x1, n0, n1 = count2, count1, nobs2, nobs1
x = x0 + x1
n = n0 + n1
z = stats.norm.isf(alpha / 2)**2
if correction:
# Miettinen/Nurminen small sample correction
z *= n / (n - 1)
# z = stats.chi2.isf(alpha, 1)
# equ 6 in Nam 1995
a1 = n0 * (n0 * n * x1 + n1 * (n0 + x1) * z)
a2 = - n0 * (n0 * n1 * x + 2 * n * x0 * x1 + n1 * (n0 + x0 + 2 * x1) * z)
a3 = 2 * n0 * n1 * x0 * x + n * x0 * x0 * x1 + n0 * n1 * x * z
a4 = - n1 * x0 * x0 * x
p_roots_ = np.sort(np.roots([a1, a2, a3, a4]))
p_roots = p_roots_[:2][::-1]
# equ 5
ci = (1 - (n1 - x1) * (1 - p_roots) / (x0 + n1 - n * p_roots)) / p_roots
res = Holder()
res.confint = ci
res._p_roots = p_roots_ # for unit tests, can be dropped
return res | Score confidence interval for ratio or proportions, Koopman/Nam
| _confint_riskratio_koopman | python | statsmodels/statsmodels | statsmodels/stats/proportion.py | https://github.com/statsmodels/statsmodels/blob/master/statsmodels/stats/proportion.py | BSD-3-Clause |
def _confint_riskratio_paired_nam(table, alpha=0.05):
"""
Confidence interval for marginal risk ratio for matched pairs
need full table
             success   fail   marginal
success      x11       x10    x1.
fail         x01       x00    x0.
marginal     x.1       x.0    n
The confidence interval is for the ratio p1 / p0 where
p1 = x1. / n and
p0 = x.1 / n
Todo: rename p1 to pa and p2 to pb, so we have a, b for treatment and
0, 1 for success/failure
current namings follow Nam 2009
status
testing:
compared to example in Nam 2009
internal polynomial coefficients in calculation correspond at around
4 decimals
confidence interval agrees only at 2 decimals
"""
x11, x10, x01, x00 = np.ravel(table)
n = np.sum(table) # nobs
p10, p01 = x10 / n, x01 / n
p1 = (x11 + x10) / n
p0 = (x11 + x01) / n
q00 = 1 - x00 / n
z2 = stats.norm.isf(alpha / 2)**2
# z = stats.chi2.isf(alpha, 1)
# before equ 3 in Nam 2009
g1 = (n * p0 + z2 / 2) * p0
g2 = - (2 * n * p1 * p0 + z2 * q00)
g3 = (n * p1 + z2 / 2) * p1
a0 = g1**2 - (z2 * p0 / 2)**2
a1 = 2 * g1 * g2
a2 = g2**2 + 2 * g1 * g3 + z2**2 * (p1 * p0 - 2 * p10 * p01) / 2
a3 = 2 * g2 * g3
a4 = g3**2 - (z2 * p1 / 2)**2
p_roots = np.sort(np.roots([a0, a1, a2, a3, a4]))
# p_roots = np.sort(np.roots([1, a1 / a0, a2 / a0, a3 / a0, a4 / a0]))
ci = [p_roots.min(), p_roots.max()]
res = Holder()
res.confint = ci
res.p = p1, p0
res._p_roots = p_roots # for unit tests, can be dropped
return res | Confidence interval for marginal risk ratio for matched pairs
| _confint_riskratio_paired_nam | python | statsmodels/statsmodels | statsmodels/stats/proportion.py | https://github.com/statsmodels/statsmodels/blob/master/statsmodels/stats/proportion.py | BSD-3-Clause |
def _make_asymptotic_function(params):
"""
Generates an asymptotic distribution callable from a param matrix
The generated callable evaluates exp(a[0] + a[1] * log(n) + a[2] * log(n)**2) for each row a of `params`
Parameters
----------
params : ndarray
Array with shape (nalpha, 3) where nalpha is the number of
significance levels
"""
def f(n):
poly = np.array([1, np.log(n), np.log(n) ** 2])
return np.exp(poly.dot(params.T))
return f | Generates an asymptotic distribution callable from a param matrix
| _make_asymptotic_function | python | statsmodels/statsmodels | statsmodels/stats/_lilliefors.py | https://github.com/statsmodels/statsmodels/blob/master/statsmodels/stats/_lilliefors.py | BSD-3-Clause |
def ksstat(x, cdf, alternative='two_sided', args=()):
"""
Calculate statistic for the Kolmogorov-Smirnov test for goodness of fit
This calculates the test statistic for a test of the distribution G(x) of
an observed variable against a given distribution F(x). Under the null
hypothesis the two distributions are identical, G(x)=F(x). The
alternative hypothesis can be either 'two_sided' (default), 'less'
or 'greater'. The KS test is only valid for continuous distributions.
Parameters
----------
x : array_like, 1d
array of observations
cdf : str or callable
string: name of a distribution in scipy.stats
callable: function to evaluate cdf
alternative : 'two_sided' (default), 'less' or 'greater'
defines the alternative hypothesis (see explanation)
args : tuple, sequence
distribution parameters for call to cdf
Returns
-------
D : float
KS test statistic, either D, D+ or D-
See Also
--------
scipy.stats.kstest
Notes
-----
In the one-sided test, the alternative is that the empirical
cumulative distribution function of the random variable is "less"
or "greater" than the cumulative distribution function F(x) of the
hypothesis, G(x)<=F(x), resp. G(x)>=F(x).
In contrast to scipy.stats.kstest, this function only calculates the
statistic which can be used either as distance measure or to implement
case specific p-values.
"""
nobs = float(len(x))
if isinstance(cdf, str):
cdf = getattr(stats.distributions, cdf).cdf
elif hasattr(cdf, 'cdf'):
cdf = getattr(cdf, 'cdf')
x = np.sort(x)
cdfvals = cdf(x, *args)
d_plus = (np.arange(1.0, nobs + 1) / nobs - cdfvals).max()
d_min = (cdfvals - np.arange(0.0, nobs) / nobs).max()
if alternative == 'greater':
return d_plus
elif alternative == 'less':
return d_min
return np.max([d_plus, d_min]) | Calculate statistic for the Kolmogorov-Smirnov test for goodness of fit
| ksstat | python | statsmodels/statsmodels | statsmodels/stats/_lilliefors.py | https://github.com/statsmodels/statsmodels/blob/master/statsmodels/stats/_lilliefors.py | BSD-3-Clause |
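A sketch checking ksstat against SciPy's two-sided statistic, which computes the same D for a fully specified null (the import path is the private module named in the row above):

import numpy as np
from scipy import stats
from statsmodels.stats._lilliefors import ksstat

x = np.random.default_rng(0).standard_normal(50)
d = ksstat(x, 'norm')
assert np.isclose(d, stats.kstest(x, 'norm').statistic)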
def get_lilliefors_table(dist='norm'):
"""
Generates tables for significance levels of Lilliefors test statistics
Tables for available normal and exponential distribution testing,
as specified in Lilliefors references above
Parameters
----------
dist : str
distribution being tested in set {'norm', 'exp'}.
Returns
-------
lf : TableDist object.
table of critical values
"""
# function just to keep things together
# for this test alpha is sf probability, i.e. right tail probability
alpha = 1 - np.array(PERCENTILES) / 100.0
alpha = alpha[::-1]
dist = 'normal' if dist == 'norm' else dist
if dist not in critical_values:
raise ValueError("Invalid dist parameter. Must be 'norm' or 'exp'")
cv_data = critical_values[dist]
acv_data = asymp_critical_values[dist]
size = np.array(sorted(cv_data), dtype=float)
crit_lf = np.array([cv_data[key] for key in sorted(cv_data)])
crit_lf = crit_lf[:, ::-1]
asym_params = np.array([acv_data[key] for key in sorted(acv_data)])
asymp_fn = _make_asymptotic_function(asym_params[::-1])
lf = TableDist(alpha, size, crit_lf, asymptotic=asymp_fn)
return lf | Generates tables for significance levels of Lilliefors test statistics
| get_lilliefors_table | python | statsmodels/statsmodels | statsmodels/stats/_lilliefors.py | https://github.com/statsmodels/statsmodels/blob/master/statsmodels/stats/_lilliefors.py | BSD-3-Clause |
def pval_lf(d_max, n):
"""
Approximate pvalues for Lilliefors test
This is only valid for pvalues smaller than 0.1 which is not checked in
this function.
Parameters
----------
d_max : array_like
two-sided Kolmogorov-Smirnov test statistic
n : int or float
sample size
Returns
-------
p-value : float or ndarray
pvalue according to approximation formula of Dallal and Wilkinson.
Notes
-----
This is mainly a helper function where the calling code should dispatch
on bound violations. Therefore it does not check whether the pvalue is in
the valid range.
Precision for the pvalues is around 2 to 3 decimals. This approximation is
also used by other statistical packages (e.g. R:fBasics) but might not be
the most precise available.
References
----------
DallalWilkinson1986
"""
# todo: check boundaries, valid range for n and Dmax
if n > 100:
d_max *= (n / 100.) ** 0.49
n = 100
pval = np.exp(-7.01256 * d_max ** 2 * (n + 2.78019)
+ 2.99587 * d_max * np.sqrt(n + 2.78019) - 0.122119
+ 0.974598 / np.sqrt(n) + 1.67997 / n)
return pval | Approximate pvalues for Lilliefors test
| pval_lf | python | statsmodels/statsmodels | statsmodels/stats/_lilliefors.py | https://github.com/statsmodels/statsmodels/blob/master/statsmodels/stats/_lilliefors.py | BSD-3-Clause |
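A small numeric sketch; the approximation is only intended for the small p-value region, and the import path is the private module named in the row above:

from statsmodels.stats._lilliefors import pval_lf

p = pval_lf(0.111, 50)  # approximate right-tail p-value for D = 0.111, n = 50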
def kstest_fit(x, dist='norm', pvalmethod="table"):
"""
Test assumed normal or exponential distribution using Lilliefors' test.
Lilliefors' test is a Kolmogorov-Smirnov test with estimated parameters.
Parameters
----------
x : array_like, 1d
Data to test.
dist : {'norm', 'exp'}, optional
The assumed distribution.
pvalmethod : {'approx', 'table'}, optional
The method used to compute the p-value of the test statistic. In
general, 'table' is preferred and makes use of a very large simulation.
'approx' is only valid for normality. If `dist='exp'`, `table` is
always used. 'approx' uses the approximation formula of Dalal and
Wilkinson, valid for pvalues < 0.1. If the pvalue is larger than 0.1,
then the result of `table` is returned.
Returns
-------
ksstat : float
Kolmogorov-Smirnov test statistic with estimated mean and variance.
pvalue : float
If the pvalue is lower than some threshold, e.g. 0.05, then we can
reject the Null hypothesis that the sample comes from a normal
distribution.
Notes
-----
'table' uses an improved table based on 10,000,000 simulations. The
critical values are approximated using
log(cv_alpha) = b_alpha + c[0] log(n) + c[1] log(n)**2
where cv_alpha is the critical value for a test with size alpha,
b_alpha is an alpha-specific intercept term and c[0] and c[1] are
coefficients that are shared across all alphas.
Values in the table are linearly interpolated. Values outside the
range are returned as bounds, 0.990 for large and 0.001 for small
pvalues.
For implementation details, see lilliefors_critical_value_simulation.py in
the test directory.
"""
pvalmethod = string_like(pvalmethod,
"pvalmethod",
options=("approx", "table"))
x = np.asarray(x)
if x.ndim == 2 and x.shape[1] == 1:
x = x[:, 0]
elif x.ndim != 1:
raise ValueError("Invalid parameter `x`: must be a one-dimensional"
" array-like or a single-column DataFrame")
nobs = len(x)
if dist == 'norm':
z = (x - x.mean()) / x.std(ddof=1)
test_d = stats.norm.cdf
lilliefors_table = lilliefors_table_norm
elif dist == 'exp':
z = x / x.mean()
test_d = stats.expon.cdf
lilliefors_table = lilliefors_table_expon
pvalmethod = 'table'
else:
raise ValueError("Invalid dist parameter, must be 'norm' or 'exp'")
min_nobs = 4 if dist == 'norm' else 3
if nobs < min_nobs:
raise ValueError('Test for distribution {} requires at least {} '
'observations'.format(dist, min_nobs))
d_ks = ksstat(z, test_d, alternative='two_sided')
if pvalmethod == 'approx':
pval = pval_lf(d_ks, nobs)
# check pval is in desired range
if pval > 0.1:
pval = lilliefors_table.prob(d_ks, nobs)
else: # pvalmethod == 'table'
pval = lilliefors_table.prob(d_ks, nobs)
return d_ks, pval | Test assumed normal or exponential distribution using Lilliefors' test.
| kstest_fit | python | statsmodels/statsmodels | statsmodels/stats/_lilliefors.py | https://github.com/statsmodels/statsmodels/blob/master/statsmodels/stats/_lilliefors.py | BSD-3-Clause |
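kstest_fit is exposed publicly as lilliefors; a usage sketch:

import numpy as np
from statsmodels.stats.diagnostic import lilliefors  # public alias of kstest_fit

x = np.random.default_rng(0).standard_normal(100)
stat, pval = lilliefors(x, dist='norm', pvalmethod='table')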
def threshold(self, tfdr):
"""
Returns the threshold statistic for a given target FDR.
"""
if np.min(self._ufdr) <= tfdr:
return self._unq[self._ufdr <= tfdr][0]
else:
return np.inf | Returns the threshold statistic for a given target FDR. | threshold | python | statsmodels/statsmodels | statsmodels/stats/_knockoff.py | https://github.com/statsmodels/statsmodels/blob/master/statsmodels/stats/_knockoff.py | BSD-3-Clause |
def _design_knockoff_sdp(exog):
"""
Use semidefinite programming to construct a knockoff design
matrix.
Requires cvxopt to be installed.
"""
try:
from cvxopt import solvers, matrix
except ImportError:
raise ValueError("SDP knockoff designs require installation of cvxopt")
nobs, nvar = exog.shape
# Standardize exog
xnm = np.sum(exog**2, 0)
xnm = np.sqrt(xnm)
exog = exog / xnm
Sigma = np.dot(exog.T, exog)
c = matrix(-np.ones(nvar))
h0 = np.concatenate((np.zeros(nvar), np.ones(nvar)))
h0 = matrix(h0)
G0 = np.concatenate((-np.eye(nvar), np.eye(nvar)), axis=0)
G0 = matrix(G0)
h1 = 2 * Sigma
h1 = matrix(h1)
i, j = np.diag_indices(nvar)
G1 = np.zeros((nvar*nvar, nvar))
G1[i*nvar + j, i] = 1
G1 = matrix(G1)
solvers.options['show_progress'] = False
sol = solvers.sdp(c, G0, h0, [G1], [h1])
sl = np.asarray(sol['x']).ravel()
xcov = np.dot(exog.T, exog)
exogn = _get_knmat(exog, xcov, sl)
return exog, exogn, sl | Use semidefinite programming to construct a knockoff design matrix. | _design_knockoff_sdp | python | statsmodels/statsmodels | statsmodels/stats/_knockoff.py | https://github.com/statsmodels/statsmodels/blob/master/statsmodels/stats/_knockoff.py | BSD-3-Clause |
def _design_knockoff_equi(exog):
"""
Construct an equivariant design matrix for knockoff analysis.
Follows the 'equi-correlated knockoff approach of equation 2.4 in
Barber and Candes.
Constructs a pair of design matrices exogs, exogn such that exogs
is a scaled/centered version of the input matrix exog, exogn is
another matrix of the same shape with cov(exogn) = cov(exogs), and
the covariances between corresponding columns of exogn and exogs
are as small as possible.
"""
nobs, nvar = exog.shape
if nobs < 2*nvar:
msg = "The equivariant knockoff can ony be used when n >= 2*p"
raise ValueError(msg)
# Standardize exog
xnm = np.sum(exog**2, 0)
xnm = np.sqrt(xnm)
exog = exog / xnm
xcov = np.dot(exog.T, exog)
ev, _ = np.linalg.eig(xcov)
evmin = np.min(ev)
sl = min(2*evmin, 1)
sl = sl * np.ones(nvar)
exogn = _get_knmat(exog, xcov, sl)
return exog, exogn, sl | Construct an equivariant design matrix for knockoff analysis.
| _design_knockoff_equi | python | statsmodels/statsmodels | statsmodels/stats/_knockoff.py | https://github.com/statsmodels/statsmodels/blob/master/statsmodels/stats/_knockoff.py | BSD-3-Clause |
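A sketch verifying the defining knockoff property (equal Gram matrices) for the private helper above; note the nobs >= 2 * nvar requirement:

import numpy as np
from statsmodels.stats._knockoff import _design_knockoff_equi

x = np.random.default_rng(0).standard_normal((200, 5))
xs, xn, sl = _design_knockoff_equi(x)
print(np.allclose(xs.T @ xs, xn.T @ xn))  # cov(exogn) matches cov(exogs) by construction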
def conf_int(self, alpha=0.05):
"""
Returns the confidence interval of the value, `effect` of the constraint.
This is currently only available for t and z tests.
Parameters
----------
alpha : float, optional
The significance level for the confidence interval.
ie., The default `alpha` = .05 returns a 95% confidence interval.
Returns
-------
ci : ndarray, (k_constraints, 2)
The array has the lower and the upper limit of the confidence
interval in the columns.
"""
if self.effect is not None:
# confidence intervals
q = self.dist.ppf(1 - alpha / 2., *self.dist_args)
lower = self.effect - q * self.sd
upper = self.effect + q * self.sd
return np.column_stack((lower, upper))
else:
raise NotImplementedError('Confidence Interval not available') | Returns the confidence interval of the value, `effect` of the constraint.
| conf_int | python | statsmodels/statsmodels | statsmodels/stats/contrast.py | https://github.com/statsmodels/statsmodels/blob/master/statsmodels/stats/contrast.py | BSD-3-Clause |
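ContrastResults.conf_int is typically reached through a fitted model's t_test; a minimal sketch with simulated data:

import numpy as np
import statsmodels.api as sm

rng = np.random.default_rng(0)
x = sm.add_constant(np.arange(20.0))
y = x @ np.array([1.0, 0.5]) + rng.standard_normal(20)
tt = sm.OLS(y, x).fit().t_test(np.eye(2))  # returns a ContrastResults instance
print(tt.conf_int(alpha=0.05))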
def summary(self, xname=None, alpha=0.05, title=None):
"""Summarize the Results of the hypothesis test
Parameters
----------
xname : list[str], optional
Default is `c_##` for ## in the number of regressors
alpha : float
significance level for the confidence intervals. Default is
alpha = 0.05 which implies a confidence level of 95%.
title : str, optional
Title for the params table. If not None, then this replaces the
default title
Returns
-------
smry : str or Summary instance
This contains a parameter results table in the case of t or z test
in the same form as the parameter results table in the model
results summary.
For F or Wald test, the return is a string.
"""
if self.effect is not None:
# TODO: should also add some extra information, e.g. robust cov ?
# TODO: can we infer names for constraints, xname in __init__ ?
if title is None:
title = 'Test for Constraints'
elif title == '':
# do not add any title,
# I think SimpleTable skips on None - check
title = None
# we have everything for a params table
use_t = (self.distribution == 't')
yname='constraints' # Not used in params_frame
if xname is None:
xname = self.c_names
from statsmodels.iolib.summary import summary_params
pvalues = np.atleast_1d(self.pvalue)
summ = summary_params((self, self.effect, self.sd, self.statistic,
pvalues, self.conf_int(alpha)),
yname=yname, xname=xname, use_t=use_t,
title=title, alpha=alpha)
return summ
elif hasattr(self, 'fvalue'):
# TODO: create something nicer for these cases
return ('<F test: F=%s, p=%s, df_denom=%.3g, df_num=%.3g>' %
(repr(self.fvalue), self.pvalue, self.df_denom,
self.df_num))
elif self.distribution == 'chi2':
return ('<Wald test (%s): statistic=%s, p-value=%s, df_denom=%.3g>' %
(self.distribution, self.statistic, self.pvalue,
self.df_denom))
else:
# generic
return ('<Wald test: statistic=%s, p-value=%s>' %
(self.statistic, self.pvalue)) | Summarize the Results of the hypothesis test
| summary | python | statsmodels/statsmodels | statsmodels/stats/contrast.py | https://github.com/statsmodels/statsmodels/blob/master/statsmodels/stats/contrast.py | BSD-3-Clause |
def summary_frame(self, xname=None, alpha=0.05):
"""Return the parameter table as a pandas DataFrame
This is only available for t and normal tests
"""
if self.effect is not None:
# we have everything for a params table
use_t = (self.distribution == 't')
yname='constraints' # Not used in params_frame
if xname is None:
xname = self.c_names
from statsmodels.iolib.summary import summary_params_frame
summ = summary_params_frame((self, self.effect, self.sd,
self.statistic,self.pvalue,
self.conf_int(alpha)), yname=yname,
xname=xname, use_t=use_t,
alpha=alpha)
return summ
else:
# TODO: create something nicer
raise NotImplementedError('only available for t and z') | Return the parameter table as a pandas DataFrame
| summary_frame | python | statsmodels/statsmodels | statsmodels/stats/contrast.py | https://github.com/statsmodels/statsmodels/blob/master/statsmodels/stats/contrast.py | BSD-3-Clause |
def _get_matrix(self):
"""
Gets the contrast_matrix property
"""
if not hasattr(self, "_contrast_matrix"):
self.compute_matrix()
return self._contrast_matrix | Gets the contrast_matrix property | _get_matrix | python | statsmodels/statsmodels | statsmodels/stats/contrast.py | https://github.com/statsmodels/statsmodels/blob/master/statsmodels/stats/contrast.py | BSD-3-Clause |
def compute_matrix(self):
"""
Construct a contrast matrix C so that
colspan(dot(D, C)) = colspan(dot(D, dot(pinv(D), T)))
where pinv(D) is the generalized inverse of D=design.
"""
T = self.term
if T.ndim == 1:
T = T[:,None]
self.T = clean0(T)
self.D = self.design
self._contrast_matrix = contrastfromcols(self.T, self.D)
try:
self.rank = self.matrix.shape[1]
except (AttributeError, IndexError):
self.rank = 1 | Construct a contrast matrix C so that colspan(dot(D, C)) = colspan(dot(D, dot(pinv(D), T))). | compute_matrix | python | statsmodels/statsmodels | statsmodels/stats/contrast.py | https://github.com/statsmodels/statsmodels/blob/master/statsmodels/stats/contrast.py | BSD-3-Clause |
def contrastfromcols(L, D, pseudo=None):
"""
From an n x p design matrix D and a matrix L, tries
to determine a p x q contrast matrix C which
determines a contrast of full rank, i.e. the
n x q matrix
dot(transpose(C), pinv(D))
is full rank.
L must satisfy either L.shape[0] == n or L.shape[1] == p.
If L.shape[0] == n, then L is thought of as representing
columns in the column space of D.
If L.shape[1] == p, then L is thought of as what is known
as a contrast matrix. In this case, this function returns an estimable
contrast corresponding to the dot(D, L.T)
Note that this always produces a meaningful contrast, not always
with the intended properties because q is always non-zero unless
L is identically 0. That is, it produces a contrast that spans
the column space of L (after projection onto the column space of D).
    Parameters
    ----------
    L : array_like
    D : array_like
    pseudo : ndarray, optional
        Generalized inverse of D; ``np.linalg.pinv(D)`` is used if not given.
    """
L = np.asarray(L)
D = np.asarray(D)
n, p = D.shape
if L.shape[0] != n and L.shape[1] != p:
raise ValueError("shape of L and D mismatched")
if pseudo is None:
        pseudo = np.linalg.pinv(D)  # D^+ = dot(inv(dot(D.T, D)), D.T)
if L.shape[0] == n:
C = np.dot(pseudo, L).T
else:
C = L
C = np.dot(pseudo, np.dot(D, C.T)).T
Lp = np.dot(D, C.T)
if len(Lp.shape) == 1:
Lp.shape = (n, 1)
if np.linalg.matrix_rank(Lp) != Lp.shape[1]:
Lp = fullrank(Lp)
C = np.dot(pseudo, Lp).T
    return np.squeeze(C) | contrastfromcols | python | statsmodels/statsmodels | statsmodels/stats/contrast.py | https://github.com/statsmodels/statsmodels/blob/master/statsmodels/stats/contrast.py | BSD-3-Clause
def col_names(self):
"""column names for summary table
"""
pr_test = "P>%s" % self.distribution
col_names = [self.distribution, pr_test, 'df constraint']
if self.distribution == 'F':
col_names.append('df denom')
        return col_names | col_names | python | statsmodels/statsmodels | statsmodels/stats/contrast.py | https://github.com/statsmodels/statsmodels/blob/master/statsmodels/stats/contrast.py | BSD-3-Clause
def _get_pairs_labels(k_level, level_names):
"""helper function for labels for pairwise comparisons
"""
idx_pairs_all = np.triu_indices(k_level, 1)
labels = [f'{level_names[name[1]]}-{level_names[name[0]]}'
for name in zip(*idx_pairs_all)]
    return labels | _get_pairs_labels | python | statsmodels/statsmodels | statsmodels/stats/contrast.py | https://github.com/statsmodels/statsmodels/blob/master/statsmodels/stats/contrast.py | BSD-3-Clause
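# Quick check of the labels this helper produces (a private function, so
# subject to change):
from statsmodels.stats.contrast import _get_pairs_labels

print(_get_pairs_labels(3, ["a", "b", "c"]))
# ['b-a', 'c-a', 'c-b']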
def _contrast_pairs(k_params, k_level, idx_start):
"""create pairwise contrast for reference coding
    currently not used;
    using the encoding contrast matrix is more general, but requires
    factor information from a formula's model_spec.
Parameters
----------
k_params : int
number of parameters
k_level : int
number of levels or categories (including reference case)
idx_start : int
Index of the first parameter of this factor. The restrictions on the
factor are inserted as a block in the full restriction matrix starting
at column with index `idx_start`.
Returns
-------
contrasts : ndarray
restriction matrix with k_params columns and number of rows equal to
the number of restrictions.
"""
k_level_m1 = k_level - 1
idx_pairs = np.triu_indices(k_level_m1, 1)
k = len(idx_pairs[0])
c_pairs = np.zeros((k, k_level_m1))
c_pairs[np.arange(k), idx_pairs[0]] = -1
c_pairs[np.arange(k), idx_pairs[1]] = 1
c_reference = np.eye(k_level_m1)
c = np.concatenate((c_reference, c_pairs), axis=0)
k_all = c.shape[0]
contrasts = np.zeros((k_all, k_params))
contrasts[:, idx_start : idx_start + k_level_m1] = c
    return contrasts | _contrast_pairs | python | statsmodels/statsmodels | statsmodels/stats/contrast.py | https://github.com/statsmodels/statsmodels/blob/master/statsmodels/stats/contrast.py | BSD-3-Clause
def t_test_multi(result, contrasts, method='hs', alpha=0.05, ci_method=None,
contrast_names=None):
"""perform t_test and add multiplicity correction to results dataframe
Parameters
----------
    result : Results instance
        results of an estimated model
    contrasts : ndarray
        restriction matrix for t_test
    method : str or list of strings
        method for multiple testing p-value correction, default is 'hs'.
alpha : float
significance level for multiple testing reject decision.
ci_method : None
not used yet, will be for multiplicity corrected confidence intervals
contrast_names : {list[str], None}
If contrast_names are provided, then they are used in the index of the
returned dataframe, otherwise some generic default names are created.
Returns
-------
res_df : pandas DataFrame
The dataframe contains the results of the t_test and additional columns
for multiplicity corrected p-values and boolean indicator for whether
the Null hypothesis is rejected.
"""
tt = result.t_test(contrasts)
res_df = tt.summary_frame(xname=contrast_names)
if type(method) is not list:
method = [method]
for meth in method:
mt = multipletests(tt.pvalue, method=meth, alpha=alpha)
res_df['pvalue-%s' % meth] = mt[1]
res_df['reject-%s' % meth] = mt[0]
    return res_df | t_test_multi | python | statsmodels/statsmodels | statsmodels/stats/contrast.py | https://github.com/statsmodels/statsmodels/blob/master/statsmodels/stats/contrast.py | BSD-3-Clause
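# Minimal sketch of t_test_multi on a fitted OLS model; the data and the
# contrasts below are hypothetical.
import numpy as np
import statsmodels.api as sm
from statsmodels.stats.contrast import t_test_multi

rng = np.random.default_rng(0)
x = sm.add_constant(rng.normal(size=(100, 3)))
y = x @ np.array([1.0, 0.0, 0.3, 0.0]) + rng.normal(size=100)
res = sm.OLS(y, x).fit()

contrasts = np.eye(4)[1:]      # test each slope against zero
res_df = t_test_multi(res, contrasts, method=["hs", "bonferroni"])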
def _embed_constraints(contrasts, k_params, idx_start, index=None):
"""helper function to expand constraints to a full restriction matrix
Parameters
----------
contrasts : ndarray
restriction matrix for t_test
k_params : int
number of parameters
idx_start : int
Index of the first parameter of this factor. The restrictions on the
factor are inserted as a block in the full restriction matrix starting
at column with index `idx_start`.
index : slice or ndarray
Column index if constraints do not form a block in the full restriction
matrix, i.e. if parameters that are subject to restrictions are not
consecutive in the list of parameters.
If index is not None, then idx_start is ignored.
Returns
-------
contrasts : ndarray
restriction matrix with k_params columns and number of rows equal to
the number of restrictions.
"""
k_c, k_p = contrasts.shape
c = np.zeros((k_c, k_params))
if index is None:
c[:, idx_start : idx_start + k_p] = contrasts
else:
c[:, index] = contrasts
    return c | _embed_constraints | python | statsmodels/statsmodels | statsmodels/stats/contrast.py | https://github.com/statsmodels/statsmodels/blob/master/statsmodels/stats/contrast.py | BSD-3-Clause
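# Small check of the embedding with hypothetical sizes: one restriction on
# two adjacent parameters placed into a 5-parameter restriction matrix.
import numpy as np
from statsmodels.stats.contrast import _embed_constraints

c = np.array([[1., -1.]])
print(_embed_constraints(c, k_params=5, idx_start=2))
# [[ 0.  0.  1. -1.  0.]]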
def _constraints_factor(encoding_matrix, comparison='pairwise', k_params=None,
idx_start=None):
"""helper function to create constraints based on encoding matrix
Parameters
----------
encoding_matrix : ndarray
contrast matrix for the encoding of a factor as defined by patsy.
The number of rows should be equal to the number of levels or categories
of the factor, the number of columns should be equal to the number
of parameters for this factor.
comparison : str
Currently only 'pairwise' is implemented. The restriction matrix
can be used for testing the hypothesis that all pairwise differences
are zero.
k_params : int
number of parameters
idx_start : int
Index of the first parameter of this factor. The restrictions on the
factor are inserted as a block in the full restriction matrix starting
at column with index `idx_start`.
Returns
-------
contrast : ndarray
Contrast or restriction matrix that can be used in hypothesis test
of model results. The number of columns is k_params.
"""
cm = encoding_matrix
k_level, k_p = cm.shape
import statsmodels.sandbox.stats.multicomp as mc
if comparison in ['pairwise', 'pw', 'pairs']:
c_all = -mc.contrast_allpairs(k_level)
else:
        raise NotImplementedError('currently only pairwise comparison')
contrasts = c_all.dot(cm)
if k_params is not None:
if idx_start is None:
raise ValueError("if k_params is not None, then idx_start is "
"required")
contrasts = _embed_constraints(contrasts, k_params, idx_start)
    return contrasts | _constraints_factor | python | statsmodels/statsmodels | statsmodels/stats/contrast.py | https://github.com/statsmodels/statsmodels/blob/master/statsmodels/stats/contrast.py | BSD-3-Clause
def t_test_pairwise(result, term_name, method='hs', alpha=0.05,
factor_labels=None, ignore=False):
"""
Perform pairwise t_test with multiple testing corrected p-values.
This uses the formula's model_spec encoding contrast matrix and should
work for all encodings of a main effect.
Parameters
----------
result : result instance
The results of an estimated model with a categorical main effect.
term_name : str
name of the term for which pairwise comparisons are computed.
Term names for categorical effects are created by patsy and
correspond to the main part of the exog names.
method : {str, list[str]}
multiple testing p-value correction, default is 'hs',
see stats.multipletesting
alpha : float
significance level for multiple testing reject decision.
factor_labels : {list[str], None}
Labels for the factor levels used for pairwise labels. If not
provided, then the labels from the formula's model_spec are used.
ignore : bool
Turn off some of the exceptions raised by input checks.
Returns
-------
MultiCompResult
The results are stored as attributes, the main attributes are the
following two. Other attributes are added for debugging purposes
or as background information.
- result_frame : pandas DataFrame with t_test results and multiple
testing corrected p-values.
- contrasts : matrix of constraints of the null hypothesis in the
t_test.
Notes
-----
Status: experimental. Currently only checked for treatment coding with
and without specified reference level.
Currently there are no multiple testing corrected confidence intervals
available.
"""
mgr = FormulaManager()
model_spec = result.model.data.model_spec
term_idx = mgr.get_term_names(model_spec).index(term_name)
term = model_spec.terms[term_idx]
idx_start = model_spec.term_slices[term].start
if not ignore and len(term.factors) > 1:
raise ValueError('interaction effects not yet supported')
factor = term.factors[0]
cat = mgr.get_factor_categories(factor, model_spec)
# cat = model_spec.encoder_state[factor][1]["categories"]
# model_spec.factor_infos[factor].categories
if factor_labels is not None:
if len(factor_labels) == len(cat):
cat = factor_labels
else:
raise ValueError("factor_labels has the wrong length, should be %d" % len(cat))
k_level = len(cat)
cm = mgr.get_contrast_matrix(term, factor, model_spec)
k_params = len(result.params)
labels = _get_pairs_labels(k_level, cat)
import statsmodels.sandbox.stats.multicomp as mc
c_all_pairs = -mc.contrast_allpairs(k_level)
contrasts_sub = c_all_pairs.dot(cm)
contrasts = _embed_constraints(contrasts_sub, k_params, idx_start)
res_df = t_test_multi(result, contrasts, method=method, ci_method=None,
alpha=alpha, contrast_names=labels)
res = MultiCompResult(result_frame=res_df,
contrasts=contrasts,
term=term,
contrast_labels=labels,
term_encoding_matrix=cm)
    return res | t_test_pairwise | python | statsmodels/statsmodels | statsmodels/stats/contrast.py | https://github.com/statsmodels/statsmodels/blob/master/statsmodels/stats/contrast.py | BSD-3-Clause
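# Usage sketch with hypothetical data: pairwise comparisons for a 3-level
# factor in a formula-based OLS fit.
import numpy as np
import pandas as pd
import statsmodels.formula.api as smf
from statsmodels.stats.contrast import t_test_pairwise

rng = np.random.default_rng(0)
df = pd.DataFrame({"g": rng.choice(list("abc"), size=90)})
df["y"] = (df["g"] == "c") * 0.5 + rng.normal(size=90)

res = smf.ols("y ~ C(g)", data=df).fit()
mc = t_test_pairwise(res, "C(g)", method="hs")
print(mc.result_frame)         # pairwise differences, corrected p-values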
def _offset_constraint(r_matrix, params_est, params_alt):
"""offset to the value of a linear constraint for new params
usage:
(cm, v) is original constraint
vo = offset_constraint(cm, res2.params, params_alt)
fs = res2.wald_test((cm, v + vo))
nc = fs.statistic * fs.df_num
"""
diff_est = r_matrix @ params_est
diff_alt = r_matrix @ params_alt
    return diff_est - diff_alt | _offset_constraint | python | statsmodels/statsmodels | statsmodels/stats/contrast.py | https://github.com/statsmodels/statsmodels/blob/master/statsmodels/stats/contrast.py | BSD-3-Clause
def wald_test_noncent(params, r_matrix, value, results, diff=None, joint=True):
    """Noncentrality parameter for a Wald test in model results
The null hypothesis is ``diff = r_matrix @ params - value = 0``
Parameters
----------
params : ndarray
parameters of the model at which to evaluate noncentrality. This can
be estimated parameters or parameters under an alternative.
r_matrix : ndarray
Restriction matrix or contrasts for the Null hypothesis
value : None or ndarray
Value of the linear combination of parameters under the null
hypothesis. If value is None, then it will be replaced by zero.
    results : Results instance of a model
        The results instance is used to compute the covariance matrix of the
        linear constraints using ``cov_params``.
diff : None or ndarray
If diff is not None, then it will be used instead of
``diff = r_matrix @ params - value``
joint : bool
        If joint is True, then the noncentrality parameter for the joint
        hypothesis will be returned.
        If joint is False, then an array of noncentrality parameters will be
        returned, where elements correspond to rows of the restriction matrix.
        This corresponds to the `t_test` in models and is not a quadratic form.
    Returns
    -------
    nc : float or ndarray
        Noncentrality parameter for Wald tests, corresponding to `wald_test`
        or `t_test` depending on whether `joint` is true or not.
It needs to be divided by nobs to obtain effect size.
Notes
-----
Status : experimental, API will likely change
"""
if diff is None:
diff = r_matrix @ params - value # at parameter under alternative
cov_c = results.cov_params(r_matrix=r_matrix)
if joint:
nc = diff @ np.linalg.solve(cov_c, diff)
else:
nc = diff / np.sqrt(np.diag(cov_c))
    return nc | wald_test_noncent | python | statsmodels/statsmodels | statsmodels/stats/contrast.py | https://github.com/statsmodels/statsmodels/blob/master/statsmodels/stats/contrast.py | BSD-3-Clause
def wald_test_noncent_generic(params, r_matrix, value, cov_params, diff=None,
joint=True):
"""noncentrality parameter for a wald test
The null hypothesis is ``diff = r_matrix @ params - value = 0``
Parameters
----------
params : ndarray
parameters of the model at which to evaluate noncentrality. This can
be estimated parameters or parameters under an alternative.
r_matrix : ndarray
Restriction matrix or contrasts for the Null hypothesis
value : None or ndarray
        Value of the linear combination of parameters under the null
        hypothesis. If value is None, then it will be replaced by zero.
cov_params : ndarray
covariance matrix of the parameter estimates
diff : None or ndarray
If diff is not None, then it will be used instead of
``diff = r_matrix @ params - value``
joint : bool
        If joint is True, then the noncentrality parameter for the joint
        hypothesis will be returned.
        If joint is False, then an array of noncentrality parameters will be
        returned, where elements correspond to rows of the restriction matrix.
        This corresponds to the `t_test` in models and is not a quadratic form.
    Returns
    -------
    nc : float or ndarray
        Noncentrality parameter for Wald tests, corresponding to `wald_test`
        or `t_test` depending on whether `joint` is true or not.
It needs to be divided by nobs to obtain effect size.
Notes
-----
Status : experimental, API will likely change
"""
if value is None:
value = 0
if diff is None:
# at parameter under alternative
diff = r_matrix @ params - value
c = r_matrix
cov_c = c.dot(cov_params).dot(c.T)
if joint:
nc = diff @ np.linalg.solve(cov_c, diff)
else:
nc = diff / np.sqrt(np.diag(cov_c))
    return nc | wald_test_noncent_generic | python | statsmodels/statsmodels | statsmodels/stats/contrast.py | https://github.com/statsmodels/statsmodels/blob/master/statsmodels/stats/contrast.py | BSD-3-Clause
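# Sketch with made-up numbers: noncentrality for H0: beta1 - beta2 = 0,
# evaluated at parameters assumed under the alternative.
import numpy as np
from statsmodels.stats.contrast import wald_test_noncent_generic

params_alt = np.array([1.0, 0.5, 0.8])
cov_params = np.diag([0.04, 0.05, 0.05])   # assumed covariance of estimates
r_matrix = np.array([[0., 1., -1.]])

nc = wald_test_noncent_generic(params_alt, r_matrix, 0, cov_params)
# nc / nobs gives the effect size used in power computations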
def _HCCM(results, scale):
'''
sandwich with pinv(x) * diag(scale) * pinv(x).T
    where pinv(x) = (X'X)^(-1) X'
and scale is (nobs,)
'''
H = np.dot(results.model.pinv_wexog,
scale[:,None]*results.model.pinv_wexog.T)
    return H | _HCCM | python | statsmodels/statsmodels | statsmodels/stats/sandwich_covariance.py | https://github.com/statsmodels/statsmodels/blob/master/statsmodels/stats/sandwich_covariance.py | BSD-3-Clause
def cov_hc0(results):
"""
See statsmodels.RegressionResults
"""
het_scale = results.resid**2 # or whitened residuals? only OLS?
cov_hc0 = _HCCM(results, het_scale)
    return cov_hc0 | cov_hc0 | python | statsmodels/statsmodels | statsmodels/stats/sandwich_covariance.py | https://github.com/statsmodels/statsmodels/blob/master/statsmodels/stats/sandwich_covariance.py | BSD-3-Clause
def cov_hc1(results):
"""
See statsmodels.RegressionResults
"""
het_scale = results.nobs/(results.df_resid)*(results.resid**2)
cov_hc1 = _HCCM(results, het_scale)
    return cov_hc1 | cov_hc1 | python | statsmodels/statsmodels | statsmodels/stats/sandwich_covariance.py | https://github.com/statsmodels/statsmodels/blob/master/statsmodels/stats/sandwich_covariance.py | BSD-3-Clause
def cov_hc2(results):
"""
See statsmodels.RegressionResults
"""
# probably could be optimized
h = np.diag(np.dot(results.model.exog,
np.dot(results.normalized_cov_params,
results.model.exog.T)))
het_scale = results.resid**2/(1-h)
cov_hc2_ = _HCCM(results, het_scale)
    return cov_hc2_ | cov_hc2 | python | statsmodels/statsmodels | statsmodels/stats/sandwich_covariance.py | https://github.com/statsmodels/statsmodels/blob/master/statsmodels/stats/sandwich_covariance.py | BSD-3-Clause
def cov_hc3(results):
"""
See statsmodels.RegressionResults
"""
# above probably could be optimized to only calc the diag
h = np.diag(np.dot(results.model.exog,
np.dot(results.normalized_cov_params,
results.model.exog.T)))
het_scale=(results.resid/(1-h))**2
cov_hc3_ = _HCCM(results, het_scale)
    return cov_hc3_ | cov_hc3 | python | statsmodels/statsmodels | statsmodels/stats/sandwich_covariance.py | https://github.com/statsmodels/statsmodels/blob/master/statsmodels/stats/sandwich_covariance.py | BSD-3-Clause
def _get_sandwich_arrays(results, cov_type=''):
    """Helper function to get scores and the inverse Hessian from results.
    """
if isinstance(results, tuple):
# assume we have jac and hessian_inv
jac, hessian_inv = results
xu = jac = np.asarray(jac)
hessian_inv = np.asarray(hessian_inv)
elif hasattr(results, 'model'):
if hasattr(results, '_results'):
# remove wrapper
results = results._results
# assume we have a results instance
if hasattr(results.model, 'jac'):
xu = results.model.jac(results.params)
hessian_inv = np.linalg.inv(results.model.hessian(results.params))
elif hasattr(results.model, 'score_obs'):
xu = results.model.score_obs(results.params)
hessian_inv = np.linalg.inv(results.model.hessian(results.params))
else:
xu = results.model.wexog * results.wresid[:, None]
hessian_inv = np.asarray(results.normalized_cov_params)
# experimental support for freq_weights
if hasattr(results.model, 'freq_weights') and not cov_type == 'clu':
# we do not want to square the weights in the covariance calculations
# assumes that freq_weights are incorporated in score_obs or equivalent
# assumes xu/score_obs is 2D
# temporary asarray
xu /= np.sqrt(np.asarray(results.model.freq_weights)[:, None])
else:
        raise ValueError('need either tuple of (jac, hessian_inv) or '
                         'results instance')
    return xu, hessian_inv | _get_sandwich_arrays | python | statsmodels/statsmodels | statsmodels/stats/sandwich_covariance.py | https://github.com/statsmodels/statsmodels/blob/master/statsmodels/stats/sandwich_covariance.py | BSD-3-Clause
def _HCCM1(results, scale):
'''
sandwich with pinv(x) * scale * pinv(x).T
    where pinv(x) = (X'X)^(-1) X'
and scale is (nobs, nobs), or (nobs,) with diagonal matrix diag(scale)
Parameters
----------
results : result instance
need to contain regression results, uses results.model.pinv_wexog
scale : ndarray (nobs,) or (nobs, nobs)
scale matrix, treated as diagonal matrix if scale is one-dimensional
Returns
-------
H : ndarray (k_vars, k_vars)
robust covariance matrix for the parameter estimates
'''
if scale.ndim == 1:
H = np.dot(results.model.pinv_wexog,
scale[:,None]*results.model.pinv_wexog.T)
else:
H = np.dot(results.model.pinv_wexog,
np.dot(scale, results.model.pinv_wexog.T))
    return H | _HCCM1 | python | statsmodels/statsmodels | statsmodels/stats/sandwich_covariance.py | https://github.com/statsmodels/statsmodels/blob/master/statsmodels/stats/sandwich_covariance.py | BSD-3-Clause
def _HCCM2(hessian_inv, scale):
'''
    sandwich with (X'X)^(-1) * scale * (X'X)^(-1)
    where scale is (k_vars, k_vars); ``hessian_inv`` plays the role of
    (X'X)^(-1), e.g. results.normalized_cov_params
    Parameters
    ----------
    hessian_inv : ndarray (k_vars, k_vars)
        "bread" of the sandwich, e.g. the inverse Hessian or
        results.normalized_cov_params
    scale : ndarray (k_vars, k_vars)
        scale matrix
Returns
-------
H : ndarray (k_vars, k_vars)
robust covariance matrix for the parameter estimates
'''
if scale.ndim == 1:
scale = scale[:,None]
xxi = hessian_inv
H = np.dot(np.dot(xxi, scale), xxi.T)
    return H | _HCCM2 | python | statsmodels/statsmodels | statsmodels/stats/sandwich_covariance.py | https://github.com/statsmodels/statsmodels/blob/master/statsmodels/stats/sandwich_covariance.py | BSD-3-Clause
def weights_bartlett(nlags):
'''Bartlett weights for HAC
this will be moved to another module
Parameters
----------
nlags : int
highest lag in the kernel window, this does not include the zero lag
Returns
-------
kernel : ndarray, (nlags+1,)
weights for Bartlett kernel
'''
#with lag zero
    return 1 - np.arange(nlags+1)/(nlags+1.) | weights_bartlett | python | statsmodels/statsmodels | statsmodels/stats/sandwich_covariance.py | https://github.com/statsmodels/statsmodels/blob/master/statsmodels/stats/sandwich_covariance.py | BSD-3-Clause
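# The kernel weights decline linearly from 1 at lag 0 to 0 beyond nlags:
from statsmodels.stats.sandwich_covariance import weights_bartlett

print(weights_bartlett(4))
# [1.  0.8 0.6 0.4 0.2]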
def weights_uniform(nlags):
'''uniform weights for HAC
this will be moved to another module
Parameters
----------
nlags : int
highest lag in the kernel window, this does not include the zero lag
Returns
-------
kernel : ndarray, (nlags+1,)
weights for uniform kernel
'''
#with lag zero
    return np.ones(nlags+1) | weights_uniform | python | statsmodels/statsmodels | statsmodels/stats/sandwich_covariance.py | https://github.com/statsmodels/statsmodels/blob/master/statsmodels/stats/sandwich_covariance.py | BSD-3-Clause
def S_hac_simple(x, nlags=None, weights_func=weights_bartlett):
'''inner covariance matrix for HAC (Newey, West) sandwich
assumes we have a single time series with zero axis consecutive, equal
spaced time periods
Parameters
----------
x : ndarray (nobs,) or (nobs, k_var)
data, for HAC this is array of x_i * u_i
nlags : int or None
highest lag to include in kernel window. If None, then
nlags = floor(4(T/100)^(2/9)) is used.
weights_func : callable
weights_func is called with nlags as argument to get the kernel
weights. default are Bartlett weights
Returns
-------
S : ndarray, (k_vars, k_vars)
inner covariance matrix for sandwich
Notes
-----
used by cov_hac_simple
options might change when other kernels besides Bartlett are available.
'''
if x.ndim == 1:
x = x[:,None]
n_periods = x.shape[0]
if nlags is None:
nlags = int(np.floor(4 * (n_periods / 100.)**(2./9.)))
weights = weights_func(nlags)
S = weights[0] * np.dot(x.T, x) #weights[0] just for completeness, is 1
for lag in range(1, nlags+1):
s = np.dot(x[lag:].T, x[:-lag])
S += weights[lag] * (s + s.T)
    return S | S_hac_simple | python | statsmodels/statsmodels | statsmodels/stats/sandwich_covariance.py | https://github.com/statsmodels/statsmodels/blob/master/statsmodels/stats/sandwich_covariance.py | BSD-3-Clause
def S_white_simple(x):
    '''inner covariance matrix for White heteroscedasticity sandwich
Parameters
----------
x : ndarray (nobs,) or (nobs, k_var)
data, for HAC this is array of x_i * u_i
Returns
-------
S : ndarray, (k_vars, k_vars)
inner covariance matrix for sandwich
Notes
-----
this is just dot(X.T, X)
'''
if x.ndim == 1:
x = x[:,None]
    return np.dot(x.T, x) | S_white_simple | python | statsmodels/statsmodels | statsmodels/stats/sandwich_covariance.py | https://github.com/statsmodels/statsmodels/blob/master/statsmodels/stats/sandwich_covariance.py | BSD-3-Clause
def S_hac_groupsum(x, time, nlags=None, weights_func=weights_bartlett):
'''inner covariance matrix for HAC over group sums sandwich
This assumes we have complete equal spaced time periods.
The number of time periods per group need not be the same, but we need
at least one observation for each time period
    This applies to a single categorical group, i.e. a grouping over
    everything except the time dimension. It first aggregates x over groups
    for each time period, then applies HAC on the sum per period.
Parameters
----------
x : ndarray (nobs,) or (nobs, k_var)
data, for HAC this is array of x_i * u_i
time : ndarray, (nobs,)
        time indices, assumed to be integers in range(n_periods)
nlags : int or None
highest lag to include in kernel window. If None, then
nlags = floor[4(T/100)^(2/9)] is used.
weights_func : callable
weights_func is called with nlags as argument to get the kernel
weights. default are Bartlett weights
Returns
-------
S : ndarray, (k_vars, k_vars)
inner covariance matrix for sandwich
References
----------
Daniel Hoechle, xtscc paper
Driscoll and Kraay
'''
#needs groupsums
    x_group_sums = group_sums(x, time).T  # TODO: transpose return in group_sums
    return S_hac_simple(x_group_sums, nlags=nlags, weights_func=weights_func) | S_hac_groupsum | python | statsmodels/statsmodels | statsmodels/stats/sandwich_covariance.py | https://github.com/statsmodels/statsmodels/blob/master/statsmodels/stats/sandwich_covariance.py | BSD-3-Clause
def S_crosssection(x, group):
'''inner covariance matrix for White on group sums sandwich
    This is for a single categorical group; the group can also be the
    product/intersection of several groups.
    This is used by cov_cluster and indirectly verified
'''
x_group_sums = group_sums(x, group).T #TODO: why transposed
    return S_white_simple(x_group_sums) | S_crosssection | python | statsmodels/statsmodels | statsmodels/stats/sandwich_covariance.py | https://github.com/statsmodels/statsmodels/blob/master/statsmodels/stats/sandwich_covariance.py | BSD-3-Clause
def cov_crosssection_0(results, group):
'''this one is still wrong, use cov_cluster instead'''
#TODO: currently used version of groupsums requires 2d resid
scale = S_crosssection(results.resid[:,None], group)
scale = np.squeeze(scale)
cov = _HCCM1(results, scale)
    return cov | cov_crosssection_0 | python | statsmodels/statsmodels | statsmodels/stats/sandwich_covariance.py | https://github.com/statsmodels/statsmodels/blob/master/statsmodels/stats/sandwich_covariance.py | BSD-3-Clause
def cov_cluster(results, group, use_correction=True):
'''cluster robust covariance matrix
Calculates sandwich covariance matrix for a single cluster, i.e. grouped
variables.
Parameters
----------
    results : result instance
        result of a regression, uses results.model.exog and results.resid
        TODO: this should use wexog instead
    group : ndarray, (nobs,)
        group or cluster labels; converted to integer codes if necessary
    use_correction : bool
        If true (default), then the small sample correction factor is used.
Returns
-------
cov : ndarray, (k_vars, k_vars)
cluster robust covariance matrix for parameter estimates
Notes
-----
same result as Stata in UCLA example and same as Peterson
'''
#TODO: currently used version of groupsums requires 2d resid
xu, hessian_inv = _get_sandwich_arrays(results, cov_type='clu')
if not hasattr(group, 'dtype') or group.dtype != np.dtype('int'):
clusters, group = np.unique(group, return_inverse=True)
else:
clusters = np.unique(group)
scale = S_crosssection(xu, group)
nobs, k_params = xu.shape
n_groups = len(clusters) #replace with stored group attributes if available
cov_c = _HCCM2(hessian_inv, scale)
if use_correction:
cov_c *= (n_groups / (n_groups - 1.) *
((nobs-1.) / float(nobs - k_params)))
    return cov_c | cov_cluster | python | statsmodels/statsmodels | statsmodels/stats/sandwich_covariance.py | https://github.com/statsmodels/statsmodels/blob/master/statsmodels/stats/sandwich_covariance.py | BSD-3-Clause
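# Usage sketch with hypothetical data: cluster-robust standard errors for an
# OLS fit with 20 clusters of 10 observations each.
import numpy as np
import statsmodels.api as sm
from statsmodels.stats.sandwich_covariance import cov_cluster

rng = np.random.default_rng(0)
x = sm.add_constant(rng.normal(size=(200, 2)))
y = x @ np.array([1.0, 0.5, -0.2]) + rng.normal(size=200)
groups = np.repeat(np.arange(20), 10)

res = sm.OLS(y, x).fit()
bse_cluster = np.sqrt(np.diag(cov_cluster(res, groups)))
# the same estimator is also available through
# res.get_robustcov_results(cov_type='cluster', groups=groups)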
def cov_cluster_2groups(results, group, group2=None, use_correction=True):
'''cluster robust covariance matrix for two groups/clusters
Parameters
----------
results : result instance
result of a regression, uses results.model.exog and results.resid
        TODO: this should use wexog instead
    group : ndarray, (nobs,) or (nobs, 2)
        labels of the first cluster variable, or of both cluster variables
        if ``group2`` is None
    group2 : ndarray, (nobs,), optional
        labels of the second cluster variable
    use_correction : bool
        If true (default), then the small sample correction factor is used.
Returns
-------
cov_both : ndarray, (k_vars, k_vars)
cluster robust covariance matrix for parameter estimates, for both
clusters
cov_0 : ndarray, (k_vars, k_vars)
cluster robust covariance matrix for parameter estimates for first
cluster
cov_1 : ndarray, (k_vars, k_vars)
cluster robust covariance matrix for parameter estimates for second
cluster
Notes
-----
verified against Peterson's table, (4 decimal print precision)
'''
if group2 is None:
        if group.ndim != 2 or group.shape[1] != 2:
raise ValueError('if group2 is not given, then groups needs to be ' +
'an array with two columns')
group0 = group[:, 0]
group1 = group[:, 1]
else:
group0 = group
group1 = group2
group = (group0, group1)
cov0 = cov_cluster(results, group0, use_correction=use_correction)
    # cov_cluster returns only the covariance matrix
cov1 = cov_cluster(results, group1, use_correction=use_correction)
# cov of cluster formed by intersection of two groups
cov01 = cov_cluster(results,
combine_indices(group)[0],
use_correction=use_correction)
#robust cov matrix for union of groups
cov_both = cov0 + cov1 - cov01
#return all three (for now?)
    return cov_both, cov0, cov1 | cov_cluster_2groups | python | statsmodels/statsmodels | statsmodels/stats/sandwich_covariance.py | https://github.com/statsmodels/statsmodels/blob/master/statsmodels/stats/sandwich_covariance.py | BSD-3-Clause
def cov_white_simple(results, use_correction=True):
'''
heteroscedasticity robust covariance matrix (White)
Parameters
----------
    results : result instance
        result of a regression, uses results.model.exog and results.resid
        TODO: this should use wexog instead
    use_correction : bool
        If true (default), then the small sample correction factor
        ``nobs / (nobs - k_params)`` is used.
Returns
-------
cov : ndarray, (k_vars, k_vars)
heteroscedasticity robust covariance matrix for parameter estimates
Notes
-----
    With ``use_correction=False`` this produces the same result as cov_hc0,
    without any small sample correction; the default correction
    ``nobs / (nobs - k_params)`` matches the cov_hc1 scaling.
verified (against LinearRegressionResults and Peterson)
See Also
--------
cov_hc1, cov_hc2, cov_hc3 : heteroscedasticity robust covariance matrices
with small sample corrections
'''
xu, hessian_inv = _get_sandwich_arrays(results)
sigma = S_white_simple(xu)
cov_w = _HCCM2(hessian_inv, sigma) #add bread to sandwich
if use_correction:
nobs, k_params = xu.shape
cov_w *= nobs / float(nobs - k_params)
    return cov_w | cov_white_simple | python | statsmodels/statsmodels | statsmodels/stats/sandwich_covariance.py | https://github.com/statsmodels/statsmodels/blob/master/statsmodels/stats/sandwich_covariance.py | BSD-3-Clause
def cov_hac_simple(results, nlags=None, weights_func=weights_bartlett,
use_correction=True):
'''
heteroscedasticity and autocorrelation robust covariance matrix (Newey-West)
Assumes we have a single time series with zero axis consecutive, equal
spaced time periods
Parameters
----------
results : result instance
result of a regression, uses results.model.exog and results.resid
TODO: this should use wexog instead
nlags : int or None
highest lag to include in kernel window. If None, then
nlags = floor[4(T/100)^(2/9)] is used.
    weights_func : callable
        weights_func is called with nlags as argument to get the kernel
        weights. default are Bartlett weights
    use_correction : bool
        If true (default), then the small sample correction factor
        ``nobs / (nobs - k_params)`` is used.
Returns
-------
cov : ndarray, (k_vars, k_vars)
HAC robust covariance matrix for parameter estimates
Notes
-----
verified only for nlags=0, which is just White
just guessing on correction factor, need reference
options might change when other kernels besides Bartlett are available.
'''
xu, hessian_inv = _get_sandwich_arrays(results)
sigma = S_hac_simple(xu, nlags=nlags, weights_func=weights_func)
cov_hac = _HCCM2(hessian_inv, sigma)
if use_correction:
nobs, k_params = xu.shape
cov_hac *= nobs / float(nobs - k_params)
    return cov_hac | cov_hac_simple | python | statsmodels/statsmodels | statsmodels/stats/sandwich_covariance.py | https://github.com/statsmodels/statsmodels/blob/master/statsmodels/stats/sandwich_covariance.py | BSD-3-Clause
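# Usage sketch with hypothetical data: Newey-West standard errors for a time
# series regression with autocorrelated errors.
import numpy as np
import statsmodels.api as sm
from statsmodels.stats.sandwich_covariance import cov_hac_simple

rng = np.random.default_rng(0)
u = np.convolve(rng.normal(size=200), [1., 0.6, 0.3])[:200]
x = sm.add_constant(np.arange(200.))
y = x @ np.array([1.0, 0.02]) + u

res = sm.OLS(y, x).fit()
bse_hac = np.sqrt(np.diag(cov_hac_simple(res, nlags=4)))
# a similar estimator is available through
# res.get_robustcov_results(cov_type='HAC', maxlags=4)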
def lagged_groups(x, lag, groupidx):
'''
assumes sorted by time, groupidx is tuple of start and end values
not optimized, just to get a working version, loop over groups
'''
out0 = []
out_lagged = []
for lo, up in groupidx:
if lo+lag < up: #group is longer than lag
out0.append(x[lo+lag:up])
out_lagged.append(x[lo:up-lag])
if out0 == []:
raise ValueError('all groups are empty taking lags')
    return np.vstack(out0), np.vstack(out_lagged) | lagged_groups | python | statsmodels/statsmodels | statsmodels/stats/sandwich_covariance.py | https://github.com/statsmodels/statsmodels/blob/master/statsmodels/stats/sandwich_covariance.py | BSD-3-Clause
def S_nw_panel(xw, weights, groupidx):
'''inner covariance matrix for HAC for panel data
no denominator nobs used
no reference for this, just accounting for time indices
'''
nlags = len(weights)-1
S = weights[0] * np.dot(xw.T, xw) #weights just for completeness
for lag in range(1, nlags+1):
xw0, xwlag = lagged_groups(xw, lag, groupidx)
s = np.dot(xw0.T, xwlag)
S += weights[lag] * (s + s.T)
    return S | S_nw_panel | python | statsmodels/statsmodels | statsmodels/stats/sandwich_covariance.py | https://github.com/statsmodels/statsmodels/blob/master/statsmodels/stats/sandwich_covariance.py | BSD-3-Clause
def cov_nw_panel(results, nlags, groupidx, weights_func=weights_bartlett,
use_correction='hac'):
'''Panel HAC robust covariance matrix
Assumes we have a panel of time series with consecutive, equal spaced time
periods. Data is assumed to be in long format with time series of each
individual stacked into one array. Panel can be unbalanced.
Parameters
----------
results : result instance
result of a regression, uses results.model.exog and results.resid
TODO: this should use wexog instead
nlags : int or None
Highest lag to include in kernel window. Currently, no default
because the optimal length will depend on the number of observations
per cross-sectional unit.
groupidx : list of tuple
each tuple should contain the start and end index for an individual.
(groupidx might change in future).
weights_func : callable
weights_func is called with nlags as argument to get the kernel
weights. default are Bartlett weights
    use_correction : 'cluster' or 'hac' or False
        If False, then no small sample correction is used.
        If 'hac' (default), then the same correction as in the single time
        series case, cov_hac, is used.
        If 'cluster', then the same correction as in cov_cluster is used.
Returns
-------
cov : ndarray, (k_vars, k_vars)
HAC robust covariance matrix for parameter estimates
Notes
-----
For nlags=0, this is just White covariance, cov_white.
If kernel is uniform, `weights_uniform`, with nlags equal to the number
of observations per unit in a balance panel, then cov_cluster and
cov_hac_panel are identical.
Tested against STATA `newey` command with same defaults.
Options might change when other kernels besides Bartlett and uniform are
available.
'''
if nlags == 0: #so we can reproduce HC0 White
weights = [1, 0] #to avoid the scalar check in hac_nw
else:
weights = weights_func(nlags)
xu, hessian_inv = _get_sandwich_arrays(results)
S_hac = S_nw_panel(xu, weights, groupidx)
cov_hac = _HCCM2(hessian_inv, S_hac)
if use_correction:
nobs, k_params = xu.shape
if use_correction == 'hac':
cov_hac *= nobs / float(nobs - k_params)
elif use_correction in ['c', 'clu', 'cluster']:
n_groups = len(groupidx)
cov_hac *= n_groups / (n_groups - 1.)
cov_hac *= ((nobs-1.) / float(nobs - k_params))
    return cov_hac | cov_nw_panel | python | statsmodels/statsmodels | statsmodels/stats/sandwich_covariance.py | https://github.com/statsmodels/statsmodels/blob/master/statsmodels/stats/sandwich_covariance.py | BSD-3-Clause
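# Usage sketch with hypothetical data: a balanced panel of 30 units with 10
# periods each, stacked unit by unit (long format).
import numpy as np
import statsmodels.api as sm
from statsmodels.stats.sandwich_covariance import cov_nw_panel

rng = np.random.default_rng(0)
n_units, n_t = 30, 10
x = sm.add_constant(rng.normal(size=(n_units * n_t, 2)))
y = x @ np.array([1.0, 0.5, -0.2]) + rng.normal(size=n_units * n_t)

res = sm.OLS(y, x).fit()
groupidx = [(i * n_t, (i + 1) * n_t) for i in range(n_units)]
cov = cov_nw_panel(res, nlags=2, groupidx=groupidx)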
def cov_nw_groupsum(results, nlags, time, weights_func=weights_bartlett,
use_correction=0):
'''Driscoll and Kraay Panel robust covariance matrix
Robust covariance matrix for panel data of Driscoll and Kraay.
Assumes we have a panel of time series where the time index is available.
The time index is assumed to represent equal spaced periods. At least one
observation per period is required.
Parameters
----------
results : result instance
result of a regression, uses results.model.exog and results.resid
TODO: this should use wexog instead
nlags : int or None
Highest lag to include in kernel window. Currently, no default
because the optimal length will depend on the number of observations
per cross-sectional unit.
time : ndarray of int
this should contain the coding for the time period of each observation.
        time periods should be integers in range(maxT), where maxT is the
        number of distinct time periods
weights_func : callable
weights_func is called with nlags as argument to get the kernel
weights. default are Bartlett weights
    use_correction : 'cluster' or 'hac' or False
        If False (the default), then no small sample correction is used.
        If 'hac', then the same correction as in the single time series case,
        cov_hac, is used.
        If 'cluster', then the same correction as in cov_cluster is used.
Returns
-------
cov : ndarray, (k_vars, k_vars)
HAC robust covariance matrix for parameter estimates
Notes
-----
Tested against STATA xtscc package, which uses no small sample correction
This first averages relevant variables for each time period over all
individuals/groups, and then applies the same kernel weighted averaging
over time as in HAC.
Warning:
In the example with a short panel (few time periods and many individuals)
with mainly across individual variation this estimator did not produce
reasonable results.
Options might change when other kernels besides Bartlett and uniform are
available.
References
----------
Daniel Hoechle, xtscc paper
Driscoll and Kraay
'''
xu, hessian_inv = _get_sandwich_arrays(results)
#S_hac = S_nw_panel(xw, weights, groupidx)
S_hac = S_hac_groupsum(xu, time, nlags=nlags, weights_func=weights_func)
cov_hac = _HCCM2(hessian_inv, S_hac)
if use_correction:
nobs, k_params = xu.shape
if use_correction == 'hac':
cov_hac *= nobs / float(nobs - k_params)
elif use_correction in ['c', 'cluster']:
n_groups = len(np.unique(time))
cov_hac *= n_groups / (n_groups - 1.)
cov_hac *= ((nobs-1.) / float(nobs - k_params))
    return cov_hac | cov_nw_groupsum | python | statsmodels/statsmodels | statsmodels/stats/sandwich_covariance.py | https://github.com/statsmodels/statsmodels/blob/master/statsmodels/stats/sandwich_covariance.py | BSD-3-Clause
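# Usage sketch with hypothetical data: Driscoll-Kraay covariance for a panel
# where each observation carries an integer time code.
import numpy as np
import statsmodels.api as sm
from statsmodels.stats.sandwich_covariance import cov_nw_groupsum

rng = np.random.default_rng(0)
n_units, n_t = 50, 8
time = np.tile(np.arange(n_t), n_units)
x = sm.add_constant(rng.normal(size=(n_units * n_t, 2)))
y = x @ np.array([1.0, 0.5, -0.2]) + rng.normal(size=n_units * n_t)

res = sm.OLS(y, x).fit()
cov = cov_nw_groupsum(res, nlags=2, time=time)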
def effectsize_oneway(means, vars_, nobs, use_var="unequal", ddof_between=0):
"""
    Effect size corresponding to squared Cohen's f, f2 = nc / nobs, for
    oneway anova
This contains adjustment for Welch and Brown-Forsythe Anova so that
effect size can be used with FTestAnovaPower.
Parameters
----------
means : array_like
Mean of samples to be compared
vars_ : float or array_like
Residual (within) variance of each sample or pooled
If ``vars_`` is scalar, then it is interpreted as pooled variance that
is the same for all samples, ``use_var`` will be ignored.
Otherwise, the variances are used depending on the ``use_var`` keyword.
nobs : int or array_like
Number of observations for the samples.
If nobs is scalar, then it is assumed that all samples have the same
number ``nobs`` of observation, i.e. a balanced sample case.
Otherwise, statistics will be weighted corresponding to nobs.
Only relative sizes are relevant, any proportional change to nobs does
not change the effect size.
    use_var : {"unequal", "equal", "bf"}
        If ``use_var`` is "unequal", then the variances can differ across
        samples and the effect size for Welch anova will be computed.
        If "equal", a pooled (homoscedastic) variance is used as in the
        standard anova. If "bf", the effect size for the Brown-Forsythe
        anova is computed.
ddof_between : int
Degrees of freedom correction for the weighted between sum of squares.
The denominator is ``nobs_total - ddof_between``
This can be used to match differences across reference literature.
Returns
-------
f2 : float
Effect size corresponding to squared Cohen's f, which is also equal
to the noncentrality divided by total number of observations.
Notes
-----
This currently handles the following cases for oneway anova
- balanced sample with homoscedastic variances
- samples with different number of observations and with homoscedastic
variances
- samples with different number of observations and with heteroskedastic
variances. This corresponds to Welch anova
In the case of "unequal" and "bf" methods for unequal variances, the
effect sizes do not directly correspond to the test statistic in Anova.
Both have correction terms dropped or added, so the effect sizes match up
with using FTestAnovaPower.
If all variances are equal, then all three methods result in the same
effect size. If variances are unequal, then the three methods produce
small differences in effect size.
Note, the effect size and power computation for BF Anova was not found in
the literature. The correction terms were added so that FTestAnovaPower
provides a good approximation to the power.
Status: experimental
We might add additional returns, if those are needed to support power
and sample size applications.
Examples
--------
The following shows how to compute effect size and power for each of the
three anova methods. The null hypothesis is that the means are equal which
corresponds to a zero effect size. Under the alternative, means differ
with two sample means at a distance delta from the mean. We assume the
variance is the same under the null and alternative hypothesis.
``nobs`` for the samples defines the fraction of observations in the
samples. ``nobs`` in the power method defines the total sample size.
In simulations, the computed power for standard anova,
i.e. ``use_var="equal"``, overestimates the simulated power by a few
percent. The equal variance assumption does not hold in this example.
>>> import numpy as np
>>> from statsmodels.stats.oneway import effectsize_oneway
>>> from statsmodels.stats.power import FTestAnovaPower
>>>
>>> nobs = np.array([10, 12, 13, 15])
>>> delta = 0.5
>>> means_alt = np.array([-1, 0, 0, 1]) * delta
>>> vars_ = np.arange(1, len(means_alt) + 1)
>>>
>>> f2_alt = effectsize_oneway(means_alt, vars_, nobs, use_var="equal")
>>> f2_alt
0.04581300813008131
>>>
>>> kwds = {'effect_size': np.sqrt(f2_alt), 'nobs': 100, 'alpha': 0.05,
... 'k_groups': 4}
>>> power = FTestAnovaPower().power(**kwds)
>>> power
0.39165892158983273
>>>
>>> f2_alt = effectsize_oneway(means_alt, vars_, nobs, use_var="unequal")
>>> f2_alt
0.060640138408304504
>>>
>>> kwds['effect_size'] = np.sqrt(f2_alt)
>>> power = FTestAnovaPower().power(**kwds)
>>> power
0.5047366512800622
>>>
>>> f2_alt = effectsize_oneway(means_alt, vars_, nobs, use_var="bf")
>>> f2_alt
0.04391324307956788
>>>
>>> kwds['effect_size'] = np.sqrt(f2_alt)
>>> power = FTestAnovaPower().power(**kwds)
>>> power
0.3765792117047725
"""
    # the code here is largely a copy of anova_generic with adjustments
means = np.asarray(means)
n_groups = means.shape[0]
if np.size(nobs) == 1:
nobs = np.ones(n_groups) * nobs
nobs_t = nobs.sum()
if use_var == "equal":
if np.size(vars_) == 1:
var_resid = vars_
else:
vars_ = np.asarray(vars_)
var_resid = ((nobs - 1) * vars_).sum() / (nobs_t - n_groups)
vars_ = var_resid # scalar, if broadcasting works
weights = nobs / vars_
w_total = weights.sum()
w_rel = weights / w_total
# meanw_t = (weights * means).sum() / w_total
meanw_t = w_rel @ means
f2 = np.dot(weights, (means - meanw_t)**2) / (nobs_t - ddof_between)
if use_var.lower() == "bf":
weights = nobs
w_total = weights.sum()
w_rel = weights / w_total
meanw_t = w_rel @ means
# TODO: reuse general case with weights
tmp = ((1. - nobs / nobs_t) * vars_).sum()
statistic = 1. * (nobs * (means - meanw_t)**2).sum()
statistic /= tmp
f2 = statistic * (1. - nobs / nobs_t).sum() / nobs_t
# correction factor for df_num in BFM
df_num2 = n_groups - 1
df_num = tmp**2 / ((vars_**2).sum() +
(nobs / nobs_t * vars_).sum()**2 -
2 * (nobs / nobs_t * vars_**2).sum())
f2 *= df_num / df_num2
    return f2 | effectsize_oneway | python | statsmodels/statsmodels | statsmodels/stats/oneway.py | https://github.com/statsmodels/statsmodels/blob/master/statsmodels/stats/oneway.py | BSD-3-Clause |
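A quick cross-check of the effect size formula (an illustrative sketch, not part of the source row): in the balanced, homoscedastic case the returned f2 reduces to the mean squared deviation of the group means divided by the common variance.

import numpy as np
from statsmodels.stats.oneway import effectsize_oneway

means = np.array([-1, 0, 0, 1]) * 0.5
f2 = effectsize_oneway(means, 1.0, nobs=10, use_var="equal")
# balanced groups, pooled variance 1: f2 == mean((means - grand_mean)**2)
assert np.allclose(f2, ((means - means.mean()) ** 2).mean())  # 0.125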
def convert_effectsize_fsqu(f2=None, eta2=None):
"""Convert squared effect sizes in f family
f2 is signal to noise ratio, var_explained / var_residual
eta2 is proportion of explained variance, var_explained / var_total
uses the relationship:
f2 = eta2 / (1 - eta2)
Parameters
----------
f2 : None or float
Squared Cohen's F effect size. If f2 is not None, then eta2 will be
computed.
eta2 : None or float
Squared eta effect size. If f2 is None and eta2 is not None, then f2 is
computed.
Returns
-------
res : Holder instance
An instance of the Holder class with f2 and eta2 as attributes.
"""
if f2 is not None:
eta2 = 1 / (1 + 1 / f2)
elif eta2 is not None:
f2 = eta2 / (1 - eta2)
res = Holder(f2=f2, eta2=eta2)
    return res | convert_effectsize_fsqu | python | statsmodels/statsmodels | statsmodels/stats/oneway.py | https://github.com/statsmodels/statsmodels/blob/master/statsmodels/stats/oneway.py | BSD-3-Clause |
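A small roundtrip sketch for the conversion above: eta2 = f2 / (1 + f2), and converting back recovers the input.

from statsmodels.stats.oneway import convert_effectsize_fsqu

res = convert_effectsize_fsqu(f2=0.25)
assert abs(res.eta2 - 0.2) < 1e-12          # 0.25 / 1.25 = 0.2
assert abs(convert_effectsize_fsqu(eta2=res.eta2).f2 - 0.25) < 1e-12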
def _fstat2effectsize(f_stat, df):
"""Compute anova effect size from F-statistic
This might be combined with convert_effectsize_fsqu
Parameters
----------
f_stat : array_like
Test statistic of an F-test
df : tuple
degrees of freedom ``df = (df1, df2)`` where
- df1 : numerator degrees of freedom, number of constraints
- df2 : denominator degrees of freedom, df_resid
Returns
-------
res : Holder instance
This instance contains effect size measures f2, eta2, omega2 and eps2
as attributes.
Notes
-----
This uses the following definitions:
- f2 = f_stat * df1 / df2
- eta2 = f2 / (f2 + 1)
- omega2 = (f2 - df1 / df2) / (f2 + 1 + 1 / df2)
- eps2 = (f2 - df1 / df2) / (f2 + 1)
This differs from effect size measures in other function which define
``f2 = f_stat * df1 / nobs``
or an equivalent expression for power computation. The noncentrality
index for the hypothesis test is in those cases given by
``nc = f_stat * df1``.
Currently omega2 and eps2 are computed in two different ways. Those
values agree for regular cases but can show different behavior in corner
cases (e.g. zero division).
"""
df1, df2 = df
f2 = f_stat * df1 / df2
eta2 = f2 / (f2 + 1)
omega2_ = (f_stat - 1) / (f_stat + (df2 + 1) / df1)
omega2 = (f2 - df1 / df2) / (f2 + 1 + 1 / df2) # rewrite
eps2_ = (f_stat - 1) / (f_stat + df2 / df1)
eps2 = (f2 - df1 / df2) / (f2 + 1) # rewrite
return Holder(f2=f2, eta2=eta2, omega2=omega2, eps2=eps2, eps2_=eps2_,
                  omega2_=omega2_) | _fstat2effectsize | python | statsmodels/statsmodels | statsmodels/stats/oneway.py | https://github.com/statsmodels/statsmodels/blob/master/statsmodels/stats/oneway.py | BSD-3-Clause |
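For concreteness, a sketch with made-up numbers (note that `_fstat2effectsize` is private API): with F = 4 on df = (3, 36) the definitions above give f2 = 1/3, eta2 = 0.25 and eps2 = 0.1875.

from statsmodels.stats.oneway import _fstat2effectsize

es = _fstat2effectsize(4.0, (3, 36))
# f2 = 4 * 3 / 36 = 1/3; eta2 = f2 / (f2 + 1) = 0.25
# eps2 = (f2 - 3/36) / (f2 + 1) = 0.1875
print(es.f2, es.eta2, es.omega2, es.eps2)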
def wellek_to_f2(eps, n_groups):
"""Convert Wellek's effect size (sqrt) to Cohen's f-squared
This computes the following effect size:
f2 = 1 / n_groups * eps**2
Parameters
----------
eps : float or ndarray
Wellek's effect size used in anova equivalence test
n_groups : int
Number of groups in oneway comparison
Returns
-------
f2 : effect size Cohen's f-squared
"""
f2 = 1 / n_groups * eps**2
    return f2 | wellek_to_f2 | python | statsmodels/statsmodels | statsmodels/stats/oneway.py | https://github.com/statsmodels/statsmodels/blob/master/statsmodels/stats/oneway.py | BSD-3-Clause |
def f2_to_wellek(f2, n_groups):
"""Convert Cohen's f-squared to Wellek's effect size (sqrt)
This computes the following effect size:
eps = sqrt(n_groups * f2)
Parameters
----------
f2 : float or ndarray
Effect size Cohen's f-squared
n_groups : int
Number of groups in oneway comparison
Returns
-------
eps : float or ndarray
Wellek's effect size used in anova equivalence test
"""
eps = np.sqrt(n_groups * f2)
    return eps | f2_to_wellek | python | statsmodels/statsmodels | statsmodels/stats/oneway.py | https://github.com/statsmodels/statsmodels/blob/master/statsmodels/stats/oneway.py | BSD-3-Clause |
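The two conversions are inverses of each other; a quick sketch:

from statsmodels.stats.oneway import f2_to_wellek, wellek_to_f2

eps = 0.5                                  # Wellek's effect size
f2 = wellek_to_f2(eps, n_groups=4)         # 0.5**2 / 4 = 0.0625
assert f2_to_wellek(f2, n_groups=4) == eps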
def fstat_to_wellek(f_stat, n_groups, nobs_mean):
"""Convert F statistic to wellek's effect size eps squared
This computes the following effect size :
es = f_stat * (n_groups - 1) / nobs_mean
Parameters
----------
f_stat : float or ndarray
Test statistic of an F-test.
n_groups : int
Number of groups in oneway comparison
nobs_mean : float or ndarray
Average number of observations across groups.
Returns
-------
eps : float or ndarray
Wellek's effect size used in anova equivalence test
"""
es = f_stat * (n_groups - 1) / nobs_mean
    return es | fstat_to_wellek | python | statsmodels/statsmodels | statsmodels/stats/oneway.py | https://github.com/statsmodels/statsmodels/blob/master/statsmodels/stats/oneway.py | BSD-3-Clause |
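An illustrative call with made-up numbers: F = 2.5 with 3 groups of 20 observations on average gives eps squared = 2.5 * 2 / 20 = 0.25.

from statsmodels.stats.oneway import fstat_to_wellek

eps_squared = fstat_to_wellek(2.5, n_groups=3, nobs_mean=20)  # 0.25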
def confint_noncentrality(f_stat, df, alpha=0.05,
alternative="two-sided"):
"""
Confidence interval for noncentrality parameter in F-test
This does not yet handle non-negativity constraint on nc.
Currently only two-sided alternative is supported.
Parameters
----------
f_stat : float
df : tuple
degrees of freedom ``df = (df1, df2)`` where
- df1 : numerator degrees of freedom, number of constraints
- df2 : denominator degrees of freedom, df_resid
alpha : float, default 0.05
alternative : {"two-sided"}
Other alternatives have not been implemented.
Returns
-------
ci : ndarray
Lower and upper confidence limits for the noncentrality parameter.
Notes
-----
The algorithm inverts the cdf of the noncentral F distribution with
respect to the noncentrality parameters.
See Steiger 2004 and references cited in it.
References
----------
.. [1] Steiger, James H. 2004. “Beyond the F Test: Effect Size Confidence
Intervals and Tests of Close Fit in the Analysis of Variance and
Contrast Analysis.” Psychological Methods 9 (2): 164–82.
https://doi.org/10.1037/1082-989X.9.2.164.
See Also
--------
confint_effectsize_oneway
"""
df1, df2 = df
if alternative in ["two-sided", "2s", "ts"]:
alpha1s = alpha / 2
ci = ncfdtrinc(df1, df2, [1 - alpha1s, alpha1s], f_stat)
else:
raise NotImplementedError
    return ci | confint_noncentrality | python | statsmodels/statsmodels | statsmodels/stats/oneway.py | https://github.com/statsmodels/statsmodels/blob/master/statsmodels/stats/oneway.py | BSD-3-Clause |
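A usage sketch with hypothetical inputs; the endpoints come from inverting the noncentral F cdf, so no numbers are asserted here.

from statsmodels.stats.oneway import confint_noncentrality

# 95% confidence interval for the noncentrality parameter
# at F = 3.5 with df = (3, 76)
ci_low, ci_upp = confint_noncentrality(3.5, (3, 76), alpha=0.05)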
def confint_effectsize_oneway(f_stat, df, alpha=0.05, nobs=None):
"""
Confidence interval for effect size in oneway anova for F distribution
This does not yet handle non-negativity constraint on nc.
Currently only two-sided alternative is supported.
Parameters
----------
f_stat : float
df : tuple
degrees of freedom ``df = (df1, df2)`` where
- df1 : numerator degrees of freedom, number of constraints
- df2 : denominator degrees of freedom, df_resid
alpha : float, default 0.05
nobs : int, default None
Returns
-------
Holder
Class with effect size and confidence attributes
Notes
-----
The confidence interval for the noncentrality parameter is obtained by
inverting the cdf of the noncentral F distribution. Confidence intervals
for other effect sizes are computed by endpoint transformation.
R package ``effectsize`` does not compute the confidence intervals in the
same way. Their confidence intervals can be replicated with
>>> ci_nc = confint_noncentrality(f_stat, (df1, df2), alpha=0.1)
>>> ci_es = smo._fstat2effectsize(ci_nc / df1, (df1, df2))
See Also
--------
confint_noncentrality
"""
df1, df2 = df
if nobs is None:
nobs = df1 + df2 + 1
ci_nc = confint_noncentrality(f_stat, df, alpha=alpha)
ci_f2 = ci_nc / nobs
ci_res = convert_effectsize_fsqu(f2=ci_f2)
ci_res.ci_omega2 = (ci_f2 - df1 / df2) / (ci_f2 + 1 + 1 / df2)
ci_res.ci_nc = ci_nc
ci_res.ci_f = np.sqrt(ci_res.f2)
ci_res.ci_eta = np.sqrt(ci_res.eta2)
ci_res.ci_f_corrected = np.sqrt(ci_res.f2 * (df1 + 1) / df1)
    return ci_res | confint_effectsize_oneway | python | statsmodels/statsmodels | statsmodels/stats/oneway.py | https://github.com/statsmodels/statsmodels/blob/master/statsmodels/stats/oneway.py | BSD-3-Clause |
def anova_generic(means, variances, nobs, use_var="unequal",
welch_correction=True, info=None):
"""
Oneway Anova based on summary statistics
Parameters
----------
means : array_like
Mean of samples to be compared
variances : float or array_like
Residual (within) variance of each sample or pooled.
If ``variances`` is scalar, then it is interpreted as pooled variance
that is the same for all samples, ``use_var`` will be ignored.
Otherwise, the variances are used depending on the ``use_var`` keyword.
nobs : int or array_like
Number of observations for the samples.
If nobs is scalar, then it is assumed that all samples have the same
number ``nobs`` of observation, i.e. a balanced sample case.
Otherwise, statistics will be weighted corresponding to nobs.
Only relative sizes are relevant, any proportional change to nobs does
not change the effect size.
use_var : {"unequal", "equal", "bf"}
If ``use_var`` is "unequal", then the variances can differ across
samples and the effect size for Welch anova will be computed.
welch_correction : bool
If this is false, then the Welch correction to the test statistic is
not included. This allows the computation of an effect size measure
that corresponds more closely to Cohen's f.
info : not used yet
Returns
-------
res : results instance
This includes `statistic` and `pvalue`.
"""
options = {"use_var": use_var,
"welch_correction": welch_correction
}
if means.ndim != 1:
raise ValueError('data (means, ...) has to be one-dimensional')
nobs_t = nobs.sum()
n_groups = len(means)
# mean_t = (nobs * means).sum() / nobs_t
if use_var == "unequal":
weights = nobs / variances
else:
weights = nobs
w_total = weights.sum()
w_rel = weights / w_total
# meanw_t = (weights * means).sum() / w_total
meanw_t = w_rel @ means
statistic = np.dot(weights, (means - meanw_t)**2) / (n_groups - 1.)
df_num = n_groups - 1.
if use_var == "unequal":
tmp = ((1 - w_rel)**2 / (nobs - 1)).sum() / (n_groups**2 - 1)
if welch_correction:
statistic /= 1 + 2 * (n_groups - 2) * tmp
df_denom = 1. / (3. * tmp)
elif use_var == "equal":
# variance of group demeaned total sample, pooled var_resid
tmp = ((nobs - 1) * variances).sum() / (nobs_t - n_groups)
statistic /= tmp
df_denom = nobs_t - n_groups
elif use_var == "bf":
tmp = ((1. - nobs / nobs_t) * variances).sum()
statistic = 1. * (nobs * (means - meanw_t)**2).sum()
statistic /= tmp
df_num2 = n_groups - 1
df_denom = tmp**2 / ((1. - nobs / nobs_t) ** 2 *
variances ** 2 / (nobs - 1)).sum()
df_num = tmp**2 / ((variances ** 2).sum() +
(nobs / nobs_t * variances).sum() ** 2 -
2 * (nobs / nobs_t * variances ** 2).sum())
pval2 = stats.f.sf(statistic, df_num2, df_denom)
options["df2"] = (df_num2, df_denom)
options["df_num2"] = df_num2
options["pvalue2"] = pval2
else:
        raise ValueError('use_var has to be one of "unequal", "equal" or "bf"')
pval = stats.f.sf(statistic, df_num, df_denom)
res = HolderTuple(statistic=statistic,
pvalue=pval,
df=(df_num, df_denom),
df_num=df_num,
df_denom=df_denom,
nobs_t=nobs_t,
n_groups=n_groups,
means=means,
nobs=nobs,
vars_=variances,
**options
)
    return res | anova_generic | python | statsmodels/statsmodels | statsmodels/stats/oneway.py | https://github.com/statsmodels/statsmodels/blob/master/statsmodels/stats/oneway.py | BSD-3-Clause |
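A sketch of the summary-statistics interface with made-up numbers; `means`, `variances` and `nobs` must be array_like so the weighted sums above work.

import numpy as np
from statsmodels.stats.oneway import anova_generic

means = np.array([10.1, 10.8, 11.5])
variances = np.array([4.0, 5.5, 6.1])
nobs = np.array([22.0, 18.0, 25.0])

res = anova_generic(means, variances, nobs, use_var="unequal")  # Welch anova
print(res.statistic, res.pvalue, res.df)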
def anova_oneway(data, groups=None, use_var="unequal", welch_correction=True,
trim_frac=0):
"""Oneway Anova
This implements standard anova, Welch and Brown-Forsythe, and trimmed
(Yuen) variants of those.
Parameters
----------
data : tuple of array_like or DataFrame or Series
Data for k independent samples, with k >= 2.
The data can be provided as a tuple or list of arrays or in long
format with outcome observations in ``data`` and group membership in
``groups``.
groups : ndarray or Series
If data is in long format, then groups is needed as an indicator of the
group or sample to which an observation belongs.
use_var : {"unequal", "equal" or "bf"}
`use_var` specifies how to treat heteroscedasticity, unequal variance,
across samples. Three approaches are available
"unequal" : Variances are not assumed to be equal across samples.
Heteroscedasticity is taken into account with Welch Anova and
Satterthwaite-Welch degrees of freedom.
This is the default.
"equal" : Variances are assumed to be equal across samples.
This is the standard Anova.
"bf" : Variances are not assumed to be equal across samples.
The method is Brown-Forsythe (1971) for testing equality of means
with the corrected degrees of freedom by Mehrotra. The original BF
degrees of freedom are available as additional attributes in the
results instance, ``df_denom2`` and ``p_value2``.
welch_correction : bool
If this is false, then the Welch correction to the test statistic is
not included. This allows the computation of an effect size measure
that corresponds more closely to Cohen's f.
trim_frac : float in [0, 0.5)
Optional trimming for Anova with trimmed mean and winsorized variances.
With the default trim_frac equal to zero, the oneway Anova statistics
are computed without trimming. If `trim_frac` is larger than zero,
then the largest and smallest observations in each sample are trimmed.
The number of trimmed observations is the fraction of number of
observations in the sample truncated to the next lower integer.
`trim_frac` has to be smaller than 0.5, however, if the fraction is
so large that there are not enough observations left over, then `nan`
will be returned.
Returns
-------
res : results instance
The returned HolderTuple instance has the following main attributes
and some additional information in other attributes.
statistic : float
Test statistic for k-sample mean comparison which is approximately
F-distributed.
pvalue : float
If ``use_var="bf"``, then the p-value is based on corrected
degrees of freedom following Mehrotra 1997.
pvalue2 : float
This is the p-value based on degrees of freedom as in
Brown-Forsythe 1974 and is only available if ``use_var="bf"``.
df = (df_num, df_denom) : tuple of floats
Degrees of freedom for the F-distribution depend on ``use_var``.
If ``use_var="bf"``, then `df_denom` is for Mehrotra p-values
`df_denom2` is available for Brown-Forsythe 1974 p-values.
`df_num` is the same numerator degrees of freedom for both
p-values.
Notes
-----
Welch's anova is correctly sized (not liberal or conservative) in smaller
samples if the distribution of the samples is not very far away from the
normal distribution. The test can become liberal if the data is strongly
skewed. Welch's Anova can also be correctly sized for discrete
distributions with finite support, like Likert scale data.
The trimmed version is robust to many non-normal distributions, it stays
correctly sized in many cases, and is more powerful in some cases with
skewness or heavy tails.
Trimming is currently based on the integer part of ``nobs * trim_frac``.
The default might change to including fractional observations as in the
original articles by Yuen.
See Also
--------
anova_generic
References
----------
Brown, Morton B., and Alan B. Forsythe. 1974. “The Small Sample Behavior
of Some Statistics Which Test the Equality of Several Means.”
Technometrics 16 (1) (February 1): 129–132. doi:10.2307/1267501.
Mehrotra, Devan V. 1997. “Improving the Brown-Forsythe Solution to the
Generalized Behrens-Fisher Problem.” Communications in Statistics -
Simulation and Computation 26 (3): 1139–1145.
doi:10.1080/03610919708813431.
"""
if groups is not None:
uniques = np.unique(groups)
data = [data[groups == uni] for uni in uniques]
else:
# uniques = None # not used yet, add to info?
pass
args = list(map(np.asarray, data))
if any([x.ndim != 1 for x in args]):
raise ValueError('data arrays have to be one-dimensional')
nobs = np.array([len(x) for x in args], float)
# n_groups = len(args) # not used
# means = np.array([np.mean(x, axis=0) for x in args], float)
# vars_ = np.array([np.var(x, ddof=1, axis=0) for x in args], float)
if trim_frac == 0:
means = np.array([x.mean() for x in args])
vars_ = np.array([x.var(ddof=1) for x in args])
else:
tms = [TrimmedMean(x, trim_frac) for x in args]
means = np.array([tm.mean_trimmed for tm in tms])
# R doesn't use uncorrected var_winsorized
# vars_ = np.array([tm.var_winsorized for tm in tms])
vars_ = np.array([tm.var_winsorized * (tm.nobs - 1) /
(tm.nobs_reduced - 1) for tm in tms])
# nobs_original = nobs # store just in case
nobs = np.array([tm.nobs_reduced for tm in tms])
res = anova_generic(means, vars_, nobs, use_var=use_var,
welch_correction=welch_correction)
    return res | anova_oneway | python | statsmodels/statsmodels | statsmodels/stats/oneway.py | https://github.com/statsmodels/statsmodels/blob/master/statsmodels/stats/oneway.py | BSD-3-Clause |
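A minimal sketch with simulated samples (assumed data), showing the default Welch anova and a trimmed (Yuen-type) variant:

import numpy as np
from statsmodels.stats.oneway import anova_oneway

rng = np.random.default_rng(1)
samples = [rng.normal(loc, scale, size=n)
           for loc, scale, n in [(0.0, 1.0, 20), (0.2, 1.5, 25), (0.5, 2.0, 30)]]

res_welch = anova_oneway(samples, use_var="unequal")                 # Welch
res_trim = anova_oneway(samples, use_var="unequal", trim_frac=0.1)   # trimmed
print(res_welch.statistic, res_welch.pvalue)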
def equivalence_oneway_generic(f_stat, n_groups, nobs, equiv_margin, df,
alpha=0.05, margin_type="f2"):
"""Equivalence test for oneway anova (Wellek and extensions)
This is a helper function for when summary statistics are available.
Use `equivalence_oneway` instead.
The null hypothesis is that the means differ by more than `equiv_margin`
in the anova distance measure.
If the Null is rejected, then the data supports that means are equivalent,
i.e. within a given distance.
Parameters
----------
f_stat : float
F-statistic
n_groups : int
Number of groups in oneway comparison.
nobs : ndarray
Array of number of observations in groups.
equiv_margin : float
Equivalence margin in terms of effect size. Effect size can be chosen
with `margin_type`. Default is squared Cohen's f.
df : tuple
degrees of freedom ``df = (df1, df2)`` where
- df1 : numerator degrees of freedom, number of constraints
- df2 : denominator degrees of freedom, df_resid
alpha : float in (0, 1)
Significance level for the hypothesis test.
margin_type : "f2" or "wellek"
Type of effect size used for equivalence margin.
Returns
-------
results : instance of HolderTuple class
The two main attributes are test statistic `statistic` and p-value
`pvalue`.
Notes
-----
Equivalence in this function is defined in terms of a squared distance
measure similar to Mahalanobis distance.
Alternative definitions for the oneway case are based on maximum difference
between pairs of means or similar pairwise distances.
The equivalence margin is used for the noncentrality parameter in the
noncentral F distribution for the test statistic. In samples with unequal
variances estimated using Welch or Brown-Forsythe Anova, the f-statistic
depends on the unequal variances and corrections to the test statistic.
This means that the equivalence margins are not fully comparable across
methods for treating unequal variances.
References
----------
Wellek, Stefan. 2010. Testing Statistical Hypotheses of Equivalence and
Noninferiority. 2nd ed. Boca Raton: CRC Press.
Cribbie, Robert A., Chantal A. Arpin-Cribbie, and Jamie A. Gruman. 2009.
“Tests of Equivalence for One-Way Independent Groups Designs.” The Journal
of Experimental Education 78 (1): 1–13.
https://doi.org/10.1080/00220970903224552.
Jan, Show-Li, and Gwowen Shieh. 2019. “On the Extended Welch Test for
Assessing Equivalence of Standardized Means.” Statistics in
Biopharmaceutical Research 0 (0): 1–8.
https://doi.org/10.1080/19466315.2019.1654915.
"""
nobs_t = nobs.sum()
nobs_mean = nobs_t / n_groups
if margin_type == "wellek":
nc_null = nobs_mean * equiv_margin**2
es = f_stat * (n_groups - 1) / nobs_mean
type_effectsize = "Wellek's psi_squared"
elif margin_type in ["f2", "fsqu", "fsquared"]:
nc_null = nobs_t * equiv_margin
es = f_stat / nobs_t
type_effectsize = "Cohen's f_squared"
else:
raise ValueError('`margin_type` should be "f2" or "wellek"')
crit_f = ncf_ppf(alpha, df[0], df[1], nc_null)
if margin_type == "wellek":
# TODO: do we need a sqrt
crit_es = crit_f * (n_groups - 1) / nobs_mean
elif margin_type in ["f2", "fsqu", "fsquared"]:
crit_es = crit_f / nobs_t
reject = (es < crit_es)
pv = ncf_cdf(f_stat, df[0], df[1], nc_null)
pwr = ncf_cdf(crit_f, df[0], df[1], 1e-13) # scipy, cannot be 0
res = HolderTuple(statistic=f_stat,
pvalue=pv,
effectsize=es, # match es type to margin_type
crit_f=crit_f,
crit_es=crit_es,
reject=reject,
power_zero=pwr,
df=df,
f_stat=f_stat,
type_effectsize=type_effectsize
)
    return res | equivalence_oneway_generic | python | statsmodels/statsmodels | statsmodels/stats/oneway.py | https://github.com/statsmodels/statsmodels/blob/master/statsmodels/stats/oneway.py | BSD-3-Clause |
def equivalence_oneway(data, equiv_margin, groups=None, use_var="unequal",
welch_correction=True, trim_frac=0, margin_type="f2"):
"""equivalence test for oneway anova (Wellek's Anova)
The null hypothesis is that the means differ by more than `equiv_margin`
in the anova distance measure.
If the Null is rejected, then the data supports that means are equivalent,
i.e. within a given distance.
Parameters
----------
data : tuple of array_like or DataFrame or Series
Data for k independent samples, with k >= 2.
The data can be provided as a tuple or list of arrays or in long
format with outcome observations in ``data`` and group membership in
``groups``.
equiv_margin : float
Equivalence margin in terms of effect size. Effect size can be chosen
with `margin_type`. Default is squared Cohen's f.
groups : ndarray or Series
If data is in long format, then groups is needed as an indicator of the
group or sample to which an observation belongs.
use_var : {"unequal", "equal" or "bf"}
`use_var` specifies how to treat heteroscedasticity, unequal variance,
across samples. Three approaches are available
"unequal" : Variances are not assumed to be equal across samples.
Heteroscedasticity is taken into account with Welch Anova and
Satterthwaite-Welch degrees of freedom.
This is the default.
"equal" : Variances are assumed to be equal across samples.
This is the standard Anova.
"bf: Variances are not assumed to be equal across samples.
The method is Browne-Forsythe (1971) for testing equality of means
with the corrected degrees of freedom by Merothra. The original BF
degrees of freedom are available as additional attributes in the
results instance, ``df_denom2`` and ``p_value2``.
welch_correction : bool
If this is false, then the Welch correction to the test statistic is
not included. This allows the computation of an effect size measure
that corresponds more closely to Cohen's f.
trim_frac : float in [0, 0.5)
Optional trimming for Anova with trimmed mean and winsorized variances.
With the default trim_frac equal to zero, the oneway Anova statistics
are computed without trimming. If `trim_frac` is larger than zero,
then the largest and smallest observations in each sample are trimmed.
The number of trimmed observations is the fraction of number of
observations in the sample truncated to the next lower integer.
`trim_frac` has to be smaller than 0.5, however, if the fraction is
so large that there are not enough observations left over, then `nan`
will be returned.
margin_type : "f2" or "wellek"
Type of effect size used for equivalence margin, either squared
Cohen's f or Wellek's psi. Default is "f2".
Returns
-------
results : instance of HolderTuple class
The two main attributes are test statistic `statistic` and p-value
`pvalue`.
See Also
--------
anova_oneway
equivalence_scale_oneway
"""
# use anova to compute summary statistics and f-statistic
res0 = anova_oneway(data, groups=groups, use_var=use_var,
welch_correction=welch_correction,
trim_frac=trim_frac)
f_stat = res0.statistic
res = equivalence_oneway_generic(f_stat, res0.n_groups, res0.nobs_t,
equiv_margin, res0.df, alpha=0.05,
margin_type=margin_type)
    return res | equivalence_oneway | python | statsmodels/statsmodels | statsmodels/stats/oneway.py | https://github.com/statsmodels/statsmodels/blob/master/statsmodels/stats/oneway.py | BSD-3-Clause |
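A sketch with assumed data and margin: rejecting the null (small `pvalue`, `reject` is True) supports that the means are equivalent within the margin.

import numpy as np
from statsmodels.stats.oneway import equivalence_oneway

rng = np.random.default_rng(2)
samples = [rng.normal(0.0, 1.0, size=50) for _ in range(3)]

# margin of 0.05 in terms of squared Cohen's f (margin_type="f2")
res = equivalence_oneway(samples, 0.05, margin_type="f2")
print(res.pvalue, res.reject)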
def _power_equivalence_oneway_emp(f_stat, n_groups, nobs, eps, df, alpha=0.05):
"""Empirical power of oneway equivalence test
This only returns post-hoc, empirical power.
Warning: eps is currently the effect size margin as defined in Wellek,
and not the signal to noise ratio (Cohen's f family).
Parameters
----------
f_stat : float
F-statistic from oneway anova, used to compute empirical effect size
n_groups : int
Number of groups in oneway comparison.
nobs : ndarray
Array of number of observations in groups.
eps : float
Equivalence margin in terms of effect size given by Wellek's psi.
df : tuple
Degrees of freedom for F distribution.
alpha : float in (0, 1)
Significance level for the hypothesis test.
Returns
-------
pow : float
Ex-post, post-hoc or empirical power at f-statistic of the equivalence
test.
"""
res = equivalence_oneway_generic(f_stat, n_groups, nobs, eps, df,
alpha=alpha, margin_type="wellek")
nobs_mean = nobs.sum() / n_groups
fn = f_stat # post-hoc power, empirical power at estimate
esn = fn * (n_groups - 1) / nobs_mean # Wellek psi
pow_ = ncf_cdf(res.crit_f, df[0], df[1], nobs_mean * esn)
    return pow_ | _power_equivalence_oneway_emp | python | statsmodels/statsmodels | statsmodels/stats/oneway.py | https://github.com/statsmodels/statsmodels/blob/master/statsmodels/stats/oneway.py | BSD-3-Clause |
def power_equivalence_oneway(f2_alt, equiv_margin, nobs_t, n_groups=None,
df=None, alpha=0.05, margin_type="f2"):
"""
Power of oneway equivalence test
Parameters
----------
f2_alt : float
Effect size, squared Cohen's f, under the alternative.
equiv_margin : float
Equivalence margin in terms of effect size. Effect size can be chosen
with `margin_type`. default is squared Cohen's f.
nobs_t : int
Total number of observations summed over all groups.
n_groups : int
Number of groups in oneway comparison. If margin_type is "wellek",
then either ``n_groups`` or ``df`` has to be given.
df : tuple
Degrees of freedom for F distribution,
``df = (n_groups - 1, nobs_t - n_groups)``
alpha : float in (0, 1)
Significance level for the hypothesis test.
margin_type : "f2" or "wellek"
Type of effect size used for equivalence margin, either squared
Cohen's f or Wellek's psi. Default is "f2".
Returns
-------
pow_alt : float
Power of the equivalence test at given equivalence effect size under
the alternative.
"""
# one of n_groups or df has to be specified
if df is None:
if n_groups is None:
raise ValueError("either df or n_groups has to be provided")
df = (n_groups - 1, nobs_t - n_groups)
# esn = fn * (n_groups - 1) / nobs_mean # Wellek psi
# fix for scipy, ncf does not allow nc == 0, fixed in scipy master
if f2_alt == 0:
f2_alt = 1e-13
# effect size, critical value at margin
# f2_null = equiv_margin
if margin_type in ["f2", "fsqu", "fsquared"]:
f2_null = equiv_margin
elif margin_type == "wellek":
if n_groups is None:
raise ValueError("If margin_type is wellek, then n_groups has "
"to be provided")
# f2_null = (n_groups - 1) * n_groups / nobs_t * equiv_margin**2
nobs_mean = nobs_t / n_groups
f2_null = nobs_mean * equiv_margin**2 / nobs_t
f2_alt = nobs_mean * f2_alt**2 / nobs_t
else:
raise ValueError('`margin_type` should be "f2" or "wellek"')
crit_f_margin = ncf_ppf(alpha, df[0], df[1], nobs_t * f2_null)
pwr_alt = ncf_cdf(crit_f_margin, df[0], df[1], nobs_t * f2_alt)
    return pwr_alt | power_equivalence_oneway | python | statsmodels/statsmodels | statsmodels/stats/oneway.py | https://github.com/statsmodels/statsmodels/blob/master/statsmodels/stats/oneway.py | BSD-3-Clause |
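An illustrative call (made-up numbers): power of the equivalence test when the true effect is zero, with a squared-Cohen's-f margin of 0.1, 100 observations in total and 4 groups.

from statsmodels.stats.oneway import power_equivalence_oneway

pwr = power_equivalence_oneway(f2_alt=0.0, equiv_margin=0.1, nobs_t=100,
                               n_groups=4, alpha=0.05, margin_type="f2")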
def simulate_power_equivalence_oneway(means, nobs, equiv_margin, vars_=None,
k_mc=1000, trim_frac=0,
options_var=None, margin_type="f2"
): # , anova_options=None): #TODO
"""Simulate Power for oneway equivalence test (Wellek's Anova)
This function is experimental and was written to evaluate the asymptotic
power function. It will change without backwards compatibility
constraints. The only stable part is the `pvalue` attribute in results.
`equiv_margin` is the effect size for the equivalence margin.
"""
if options_var is None:
options_var = ["unequal", "equal", "bf"]
if vars_ is not None:
stds = np.sqrt(vars_)
else:
stds = np.ones(len(means))
nobs_mean = nobs.mean()
n_groups = len(nobs)
res_mc = []
f_mc = []
reject_mc = []
other_mc = []
for _ in range(k_mc):
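        # note: the unpacking below assumes exactly four groups,
        # i.e. len(nobs) == len(means) == 4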
y0, y1, y2, y3 = (m + std * np.random.randn(n)
for (n, m, std) in zip(nobs, means, stds))
res_i = []
f_i = []
reject_i = []
other_i = []
for uv in options_var:
# for welch in options_welch:
# res1 = sma.anova_generic(means, vars_, nobs, use_var=uv,
# welch_correction=welch)
res0 = anova_oneway([y0, y1, y2, y3], use_var=uv,
trim_frac=trim_frac)
f_stat = res0.statistic
res1 = equivalence_oneway_generic(f_stat, n_groups, nobs.sum(),
equiv_margin, res0.df,
alpha=0.05,
margin_type=margin_type)
res_i.append(res1.pvalue)
es_wellek = f_stat * (n_groups - 1) / nobs_mean
f_i.append(es_wellek)
reject_i.append(res1.reject)
other_i.extend([res1.crit_f, res1.crit_es, res1.power_zero])
res_mc.append(res_i)
f_mc.append(f_i)
reject_mc.append(reject_i)
other_mc.append(other_i)
f_mc = np.asarray(f_mc)
other_mc = np.asarray(other_mc)
res_mc = np.asarray(res_mc)
reject_mc = np.asarray(reject_mc)
res = Holder(f_stat=f_mc,
other=other_mc,
pvalue=res_mc,
reject=reject_mc
)
    return res | simulate_power_equivalence_oneway | python | statsmodels/statsmodels | statsmodels/stats/oneway.py | https://github.com/statsmodels/statsmodels/blob/master/statsmodels/stats/oneway.py | BSD-3-Clause |
def test_scale_oneway(data, method="bf", center="median", transform="abs",
trim_frac_mean=0.1, trim_frac_anova=0.0):
"""Oneway Anova test for equal scale, variance or dispersion
This hypothesis test performs a oneway anova test on transformed data and
includes Levene and Brown-Forsythe tests for equal variances as special
cases.
Parameters
----------
data : tuple of array_like or DataFrame or Series
Data for k independent samples, with k >= 2. The data can be provided
as a tuple or list of arrays or in long format with outcome
observations in ``data`` and group membership in ``groups``.
method : {"unequal", "equal" or "bf"}
How to treat heteroscedasticity across samples. This is used as
`use_var` option in `anova_oneway` and refers to the variance of the
transformed data, i.e. assumption is on 4th moment if squares are used
as transform.
Three approaches are available:
"unequal" : Variances are not assumed to be equal across samples.
Heteroscedasticity is taken into account with Welch Anova and
Satterthwaite-Welch degrees of freedom.
This is the default.
"equal" : Variances are assumed to be equal across samples.
This is the standard Anova.
"bf" : Variances are not assumed to be equal across samples.
The method is Brown-Forsythe (1971) for testing equality of means
with the corrected degrees of freedom by Mehrotra. The original BF
degrees of freedom are available as additional attributes in the
results instance, ``df_denom2`` and ``p_value2``.
center : "median", "mean", "trimmed" or float
Statistic used for centering observations. If a float, then this
value is used to center. Default is median.
transform : "abs", "square" or callable
Transformation for the centered observations. If a callable, then this
function is called on the centered data.
Default is absolute value.
trim_frac_mean : float in [0, 0.5), default 0.1
Trim fraction for the trimmed mean when `center` is "trimmed".
trim_frac_anova : float in [0, 0.5)
Optional trimming for Anova with trimmed mean and Winsorized variances.
With the default trim_frac equal to zero, the oneway Anova statistics
are computed without trimming. If `trim_frac` is larger than zero,
then the largest and smallest observations in each sample are trimmed.
see ``trim_frac`` option in `anova_oneway`
Returns
-------
res : results instance
The returned HolderTuple instance has the following main attributes
and some additional information in other attributes.
statistic : float
Test statistic for k-sample mean comparison which is approximately
F-distributed.
pvalue : float
If ``method="bf"``, then the p-value is based on corrected
degrees of freedom following Mehrotra 1997.
pvalue2 : float
This is the p-value based on degrees of freedom as in
Brown-Forsythe 1974 and is only available if ``method="bf"``.
df : (df_num, df_denom)
Tuple containing degrees of freedom for the F-distribution, which
depend on ``method``. If ``method="bf"``, then `df_denom` is for Mehrotra
p-values `df_denom2` is available for Brown-Forsythe 1974 p-values.
`df_num` is the same numerator degrees of freedom for both
p-values.
See Also
--------
anova_oneway
scale_transform
"""
data = map(np.asarray, data)
xxd = [scale_transform(x, center=center, transform=transform,
trim_frac=trim_frac_mean) for x in data]
res = anova_oneway(xxd, groups=None, use_var=method,
welch_correction=True, trim_frac=trim_frac_anova)
res.data_transformed = xxd
return res | Oneway Anova test for equal scale, variance or dispersion
This hypothesis test performs a oneway anova test on transformed data and
includes Levene and Brown-Forsythe tests for equal variances as special
cases.
Parameters
----------
data : tuple of array_like or DataFrame or Series
Data for k independent samples, with k >= 2. The data can be provided
as a tuple or list of arrays or in long format with outcome
observations in ``data`` and group membership in ``groups``.
method : {"unequal", "equal" or "bf"}
How to treat heteroscedasticity across samples. This is used as
`use_var` option in `anova_oneway` and refers to the variance of the
transformed data, i.e. the assumption is on the 4th moment if squares
are used as the transform.
Three approaches are available:
"unequal" : Variances are not assumed to be equal across samples.
Heteroscedasticity is taken into account with Welch Anova and
Satterthwaite-Welch degrees of freedom.
This is the default.
"equal" : Variances are assumed to be equal across samples.
This is the standard Anova.
"bf" : Variances are not assumed to be equal across samples.
The method is Brown-Forsythe (1974) for testing equality of means
with the corrected degrees of freedom by Mehrotra. The original BF
degrees of freedom are available as additional attributes in the
results instance, ``df_denom2`` and ``pvalue2``.
center : "median", "mean", "trimmed" or float
Statistic used for centering observations. If a float, then this
value is used to center. Default is median.
transform : "abs", "square" or callable
Transformation for the centered observations. If a callable, then this
function is called on the centered data.
Default is absolute value.
trim_frac_mean : float in [0, 0.5), default 0.1
Trim fraction for the trimmed mean when `center` is "trimmed"
trim_frac_anova : float in [0, 0.5)
Optional trimming for Anova with trimmed mean and Winsorized variances.
With the default trim_frac equal to zero, the oneway Anova statistics
are computed without trimming. If `trim_frac` is larger than zero,
then the largest and smallest observations in each sample are trimmed.
see ``trim_frac`` option in `anova_oneway`
Returns
-------
res : results instance
The returned HolderTuple instance has the following main attributes
and some additional information in other attributes.
statistic : float
Test statistic for k-sample mean comparison which is approximately
F-distributed.
pvalue : float
If ``method="bf"``, then the p-value is based on corrected
degrees of freedom following Mehrotra 1997.
pvalue2 : float
This is the p-value based on degrees of freedom as in
Brown-Forsythe 1974 and is only available if ``method="bf"``.
df : (df_denom, df_num)
Tuple containing the degrees of freedom for the F-distribution;
these depend on ``method``. If ``method="bf"``, then `df_denom` is
for the Mehrotra p-values and `df_denom2` is available for the
Brown-Forsythe 1974 p-values. `df_num` is the same numerator degrees
of freedom for both p-values.
See Also
--------
anova_oneway
scale_transform | test_scale_oneway | python | statsmodels/statsmodels | statsmodels/stats/oneway.py | https://github.com/statsmodels/statsmodels/blob/master/statsmodels/stats/oneway.py | BSD-3-Clause |
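A minimal usage sketch for test_scale_oneway (not part of the source row); the simulated samples, sizes, and seed below are illustrative assumptions:
import numpy as np
from statsmodels.stats.oneway import test_scale_oneway
rng = np.random.default_rng(12345)  # hypothetical seed
samples = [rng.normal(0, s, size=n) for s, n in [(1.0, 30), (1.5, 40), (1.0, 50)]]
res = test_scale_oneway(samples, method="bf", center="median", transform="abs")
print(res.statistic, res.pvalue)  # F-type statistic with Mehrotra-corrected p-value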
def equivalence_scale_oneway(data, equiv_margin, method='bf', center='median',
transform='abs', trim_frac_mean=0.,
trim_frac_anova=0.):
"""Oneway Anova test for equivalence of scale, variance or dispersion
This hypothesis test performs a oneway equivalence anova test on
transformed data.
Note, the interpretation of the equivalence margin `equiv_margin` will
depend on the transformation of the data. Transformations like
absolute deviation are not scaled to correspond to the variance under
normal distribution.
Parameters
----------
data : tuple of array_like or DataFrame or Series
Data for k independent samples, with k >= 2. The data can be provided
as a tuple or list of arrays or in long format with outcome
observations in ``data`` and group membership in ``groups``.
equiv_margin : float
Equivalence margin in terms of effect size. Effect size can be chosen
with `margin_type`. Default is squared Cohen's f.
method : {"unequal", "equal" or "bf"}
How to treat heteroscedasticity across samples. This is used as
`use_var` option in `anova_oneway` and refers to the variance of the
transformed data, i.e. the assumption is on the 4th moment if squares
are used as the transform.
Three approaches are available:
"unequal" : Variances are not assumed to be equal across samples.
Heteroscedasticity is taken into account with Welch Anova and
Satterthwaite-Welch degrees of freedom.
This is the default.
"equal" : Variances are assumed to be equal across samples.
This is the standard Anova.
"bf" : Variances are not assumed to be equal across samples.
The method is Brown-Forsythe (1974) for testing equality of means
with the corrected degrees of freedom by Mehrotra. The original BF
degrees of freedom are available as additional attributes in the
results instance, ``df_denom2`` and ``pvalue2``.
center : "median", "mean", "trimmed" or float
Statistic used for centering observations. If a float, then this
value is used to center. Default is median.
transform : "abs", "square" or callable
Transformation for the centered observations. If a callable, then this
function is called on the centered data.
Default is absolute value.
trim_frac_mean : float in [0, 0.5)
Trim fraction for the trimmed mean when `center` is "trimmed"
trim_frac_anova : float in [0, 0.5)
Optional trimming for Anova with trimmed mean and Winsorized variances.
With the default trim_frac equal to zero, the oneway Anova statistics
are computed without trimming. If `trim_frac` is larger than zero,
then the largest and smallest observations in each sample are trimmed.
see ``trim_frac`` option in `anova_oneway`
Returns
-------
results : instance of HolderTuple class
The two main attributes are test statistic `statistic` and p-value
`pvalue`.
See Also
--------
anova_oneway
scale_transform
equivalence_oneway
"""
data = map(np.asarray, data)
xxd = [scale_transform(x, center=center, transform=transform,
trim_frac=trim_frac_mean) for x in data]
res = equivalence_oneway(xxd, equiv_margin, use_var=method,
welch_correction=True, trim_frac=trim_frac_anova)
res.x_transformed = xxd
return res | Oneway Anova test for equivalence of scale, variance or dispersion
This hypothesis test performs a oneway equivalence anova test on
transformed data.
Note, the interpretation of the equivalence margin `equiv_margin` will
depend on the transformation of the data. Transformations like
absolute deviation are not scaled to correspond to the variance under
normal distribution.
Parameters
----------
data : tuple of array_like or DataFrame or Series
Data for k independent samples, with k >= 2. The data can be provided
as a tuple or list of arrays or in long format with outcome
observations in ``data`` and group membership in ``groups``.
equiv_margin : float
Equivalence margin in terms of effect size. Effect size can be chosen
with `margin_type`. Default is squared Cohen's f.
method : {"unequal", "equal" or "bf"}
How to treat heteroscedasticity across samples. This is used as
`use_var` option in `anova_oneway` and refers to the variance of the
transformed data, i.e. the assumption is on the 4th moment if squares
are used as the transform.
Three approaches are available:
"unequal" : Variances are not assumed to be equal across samples.
Heteroscedasticity is taken into account with Welch Anova and
Satterthwaite-Welch degrees of freedom.
This is the default.
"equal" : Variances are assumed to be equal across samples.
This is the standard Anova.
"bf" : Variances are not assumed to be equal across samples.
The method is Brown-Forsythe (1974) for testing equality of means
with the corrected degrees of freedom by Mehrotra. The original BF
degrees of freedom are available as additional attributes in the
results instance, ``df_denom2`` and ``pvalue2``.
center : "median", "mean", "trimmed" or float
Statistic used for centering observations. If a float, then this
value is used to center. Default is median.
transform : "abs", "square" or callable
Transformation for the centered observations. If a callable, then this
function is called on the centered data.
Default is absolute value.
trim_frac_mean : float in [0, 0.5)
Trim fraction for the trimmed mean when `center` is "trimmed"
trim_frac_anova : float in [0, 0.5)
Optional trimming for Anova with trimmed mean and Winsorized variances.
With the default trim_frac equal to zero, the oneway Anova statistics
are computed without trimming. If `trim_frac` is larger than zero,
then the largest and smallest observations in each sample are trimmed.
see ``trim_frac`` option in `anova_oneway`
Returns
-------
results : instance of HolderTuple class
The two main attributes are test statistic `statistic` and p-value
`pvalue`.
See Also
--------
anova_oneway
scale_transform
equivalence_oneway | equivalence_scale_oneway | python | statsmodels/statsmodels | statsmodels/stats/oneway.py | https://github.com/statsmodels/statsmodels/blob/master/statsmodels/stats/oneway.py | BSD-3-Clause |
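A minimal usage sketch for equivalence_scale_oneway; the margin of 0.4 (in terms of squared Cohen's f) and the simulated samples are illustrative assumptions, not recommendations:
import numpy as np
from statsmodels.stats.oneway import equivalence_scale_oneway
rng = np.random.default_rng(0)  # hypothetical seed
samples = [rng.normal(0, 1.0, size=50) for _ in range(3)]
res = equivalence_scale_oneway(samples, equiv_margin=0.4, method="bf")
print(res.statistic, res.pvalue)  # a small p-value supports equivalent dispersion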
def mc2mnc(mc):
"""convert central to non-central moments, uses recursive formula
optionally adjusts first moment to return mean
"""
x = _convert_to_multidim(mc)
def _local_counts(mc):
mean = mc[0]
mc = [1] + list(mc) # add zero moment = 1
mc[1] = 0 # define central mean as zero for formula
mnc = [1, mean] # zero and first raw moments
for nn, m in enumerate(mc[2:]):
n = nn + 2
mnc.append(0)
for k in range(n + 1):
mnc[n] += comb(n, k, exact=True) * mc[k] * mean ** (n - k)
return mnc[1:]
res = np.apply_along_axis(_local_counts, 0, x)
# for backward compatibility convert 1-dim output to list/tuple
return _convert_from_multidim(res) | convert central to non-central moments, uses recursive formula
optionally adjusts first moment to return mean | mc2mnc | python | statsmodels/statsmodels | statsmodels/stats/moment_helpers.py | https://github.com/statsmodels/statsmodels/blob/master/statsmodels/stats/moment_helpers.py | BSD-3-Clause |
def mnc2mc(mnc, wmean=True):
"""convert non-central to central moments, uses recursive formula
optionally adjusts first moment to return mean
"""
X = _convert_to_multidim(mnc)
def _local_counts(mnc):
mean = mnc[0]
mnc = [1] + list(mnc) # add zero moment = 1
mu = []
for n, m in enumerate(mnc):
mu.append(0)
for k in range(n + 1):
sgn_comb = (-1) ** (n - k) * comb(n, k, exact=True)
mu[n] += sgn_comb * mnc[k] * mean ** (n - k)
if wmean:
mu[1] = mean
return mu[1:]
res = np.apply_along_axis(_local_counts, 0, X)
# for backward compatibility convert 1-dim output to list/tuple
return _convert_from_multidim(res) | convert non-central to central moments, uses recursive formula
optionally adjusts first moment to return mean | mnc2mc | python | statsmodels/statsmodels | statsmodels/stats/moment_helpers.py | https://github.com/statsmodels/statsmodels/blob/master/statsmodels/stats/moment_helpers.py | BSD-3-Clause |
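A round-trip sketch for mc2mnc and mnc2mc; the moment values are illustrative:
from statsmodels.stats.moment_helpers import mc2mnc, mnc2mc
mc = [1.0, 4.0, 0.5, 50.0]  # mean, variance, 3rd and 4th central moments
mnc = mc2mnc(mc)            # raw (non-central) moments: [1.0, 5.0, 13.5, 77.0]
print(mnc2mc(mnc))          # round trip recovers [1.0, 4.0, 0.5, 50.0]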
def cum2mc(kappa):
"""convert non-central moments to cumulants
recursive formula produces as many cumulants as moments
References
----------
Kenneth Lange: Numerical Analysis for Statisticians, page 40
"""
X = _convert_to_multidim(kappa)
def _local_counts(kappa):
mc = [1, 0.0]  # insert 0-moment and mean
kappa0 = kappa[0]
kappa = [1] + list(kappa)
for nn, m in enumerate(kappa[2:]):
n = nn + 2
mc.append(0)
for k in range(n - 1):
mc[n] += comb(n - 1, k, exact=True) * kappa[n - k] * mc[k]
mc[1] = kappa0  # insert mean as first moment by convention
return mc[1:]
res = np.apply_along_axis(_local_counts, 0, X)
# for backward compatibility convert 1-dim output to list/tuple
return _convert_from_multidim(res) | convert cumulants to central moments
recursive formula produces as many central moments as cumulants
References
----------
Kenneth Lange: Numerical Analysis for Statisticians, page 40 | cum2mc | python | statsmodels/statsmodels | statsmodels/stats/moment_helpers.py | https://github.com/statsmodels/statsmodels/blob/master/statsmodels/stats/moment_helpers.py | BSD-3-Clause |
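A sanity-check sketch: for a normal distribution only the first two cumulants are nonzero, so cum2mc should return central moments [mean, var, 0, 3*var**2]; the numbers are illustrative:
from statsmodels.stats.moment_helpers import cum2mc
kappa = [2.0, 3.0, 0.0, 0.0]  # cumulants of a normal with mean 2, variance 3
print(cum2mc(kappa))          # [2.0, 3.0, 0.0, 27.0]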
def mnc2cum(mnc):
"""convert non-central moments to cumulants
recursive formula produces as many cumulants as moments
https://en.wikipedia.org/wiki/Cumulant#Cumulants_and_moments
"""
X = _convert_to_multidim(mnc)
def _local_counts(mnc):
mnc = [1] + list(mnc)
kappa = [1]
for nn, m in enumerate(mnc[1:]):
n = nn + 1
kappa.append(m)
for k in range(1, n):
num_ways = comb(n - 1, k - 1, exact=True)
kappa[n] -= num_ways * kappa[k] * mnc[n - k]
return kappa[1:]
res = np.apply_along_axis(_local_counts, 0, X)
# for backward compatibility convert 1-dim output to list/tuple
return _convert_from_multidim(res) | convert non-central moments to cumulants
recursive formula produces as many cumulants as moments
https://en.wikipedia.org/wiki/Cumulant#Cumulants_and_moments | mnc2cum | python | statsmodels/statsmodels | statsmodels/stats/moment_helpers.py | https://github.com/statsmodels/statsmodels/blob/master/statsmodels/stats/moment_helpers.py | BSD-3-Clause |
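A sanity-check sketch using the raw moments of the standard normal (E[X] = 0, E[X^2] = 1, E[X^3] = 0, E[X^4] = 3), whose cumulants beyond the second vanish:
from statsmodels.stats.moment_helpers import mnc2cum
print(mnc2cum([0.0, 1.0, 0.0, 3.0]))  # [0.0, 1.0, 0.0, 0.0]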
def mc2cum(mc):
"""
convert central moments to cumulants, chained through mc2mnc and mnc2cum
(kept because the original test case still exists)
"""
first_step = mc2mnc(mc)
if isinstance(first_step, np.ndarray):
first_step = first_step.T
return mnc2cum(first_step) | convert central moments to cumulants, chained through mc2mnc and mnc2cum (kept because the original test case still exists) | mc2cum | python | statsmodels/statsmodels | statsmodels/stats/moment_helpers.py | https://github.com/statsmodels/statsmodels/blob/master/statsmodels/stats/moment_helpers.py | BSD-3-Clause
def mvsk2mc(args):
"""convert mean, variance, skew, kurtosis to central moments"""
X = _convert_to_multidim(args)
def _local_counts(args):
mu, sig2, sk, kur = args
cnt = [None] * 4
cnt[0] = mu
cnt[1] = sig2
cnt[2] = sk * sig2 ** 1.5
cnt[3] = (kur + 3.0) * sig2 ** 2.0
return tuple(cnt)
res = np.apply_along_axis(_local_counts, 0, X)
# for backward compatibility convert 1-dim output to list/tuple
return _convert_from_multidim(res, tuple) | convert mean, variance, skew, kurtosis to central moments | mvsk2mc | python | statsmodels/statsmodels | statsmodels/stats/moment_helpers.py | https://github.com/statsmodels/statsmodels/blob/master/statsmodels/stats/moment_helpers.py | BSD-3-Clause |
def mvsk2mnc(args):
"""convert mean, variance, skew, kurtosis to non-central moments"""
X = _convert_to_multidim(args)
def _local_counts(args):
mc, mc2, skew, kurt = args
mnc = mc
mnc2 = mc2 + mc * mc
mc3 = skew * (mc2 ** 1.5) # 3rd central moment
mnc3 = mc3 + 3 * mc * mc2 + mc ** 3 # 3rd non-central moment
mc4 = (kurt + 3.0) * (mc2 ** 2.0) # 4th central moment
mnc4 = mc4 + 4 * mc * mc3 + 6 * mc * mc * mc2 + mc ** 4
return (mnc, mnc2, mnc3, mnc4)
res = np.apply_along_axis(_local_counts, 0, X)
# for backward compatibility convert 1-dim output to list/tuple
return _convert_from_multidim(res, tuple) | convert mean, variance, skew, kurtosis to non-central moments | mvsk2mnc | python | statsmodels/statsmodels | statsmodels/stats/moment_helpers.py | https://github.com/statsmodels/statsmodels/blob/master/statsmodels/stats/moment_helpers.py | BSD-3-Clause |
def mc2mvsk(args):
"""convert central moments to mean, variance, skew, kurtosis"""
X = _convert_to_multidim(args)
def _local_counts(args):
mc, mc2, mc3, mc4 = args
skew = np.divide(mc3, mc2 ** 1.5)
kurt = np.divide(mc4, mc2 ** 2.0) - 3.0
return (mc, mc2, skew, kurt)
res = np.apply_along_axis(_local_counts, 0, X)
# for backward compatibility convert 1-dim output to list/tuple
return _convert_from_multidim(res, tuple) | convert central moments to mean, variance, skew, kurtosis | mc2mvsk | python | statsmodels/statsmodels | statsmodels/stats/moment_helpers.py | https://github.com/statsmodels/statsmodels/blob/master/statsmodels/stats/moment_helpers.py | BSD-3-Clause |
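A round-trip sketch for mvsk2mc and mc2mvsk; the (mean, variance, skew, excess kurtosis) tuple is illustrative:
from statsmodels.stats.moment_helpers import mvsk2mc, mc2mvsk
mvsk = (1.0, 4.0, 0.5, 1.2)
mc = mvsk2mc(mvsk)  # central moments: (1.0, 4.0, 4.0, 67.2)
print(mc2mvsk(mc))  # round trip recovers (1.0, 4.0, 0.5, 1.2)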
def mnc2mvsk(args):
"""convert central moments to mean, variance, skew, kurtosis
"""
X = _convert_to_multidim(args)
def _local_counts(args):
# convert four non-central moments to central moments
mnc, mnc2, mnc3, mnc4 = args
mc = mnc
mc2 = mnc2 - mnc * mnc
mc3 = mnc3 - (3 * mc * mc2 + mc ** 3) # 3rd central moment
mc4 = mnc4 - (4 * mc * mc3 + 6 * mc * mc * mc2 + mc ** 4)
return mc2mvsk((mc, mc2, mc3, mc4))
res = np.apply_along_axis(_local_counts, 0, X)
# for backward compatibility convert 1-dim output to list/tuple
return _convert_from_multidim(res, tuple) | convert non-central moments to mean, variance, skew, kurtosis | mnc2mvsk | python | statsmodels/statsmodels | statsmodels/stats/moment_helpers.py | https://github.com/statsmodels/statsmodels/blob/master/statsmodels/stats/moment_helpers.py | BSD-3-Clause
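The analogous round trip through raw moments, with the same illustrative inputs:
from statsmodels.stats.moment_helpers import mvsk2mnc, mnc2mvsk
mnc = mvsk2mnc((1.0, 4.0, 0.5, 1.2))  # raw moments: (1.0, 5.0, 17.0, 108.2)
print(mnc2mvsk(mnc))                  # recovers (1.0, 4.0, 0.5, 1.2)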
def cov2corr(cov, return_std=False):
"""
convert covariance matrix to correlation matrix
Parameters
----------
cov : array_like, 2d
covariance matrix, see Notes
return_std : bool
If this is true then the standard deviation is also returned.
By default only the correlation matrix is returned.
Returns
-------
corr : ndarray (subclass)
correlation matrix
std_ : ndarray
standard deviations from the diagonal of cov, returned only if
``return_std`` is true
Notes
-----
This function does not convert subclasses of ndarrays. This requires that
division is defined elementwise. np.ma.array and np.matrix are allowed.
"""
cov = np.asanyarray(cov)
std_ = np.sqrt(np.diag(cov))
corr = cov / np.outer(std_, std_)
if return_std:
return corr, std_
else:
return corr | convert covariance matrix to correlation matrix
Parameters
----------
cov : array_like, 2d
covariance matrix, see Notes
return_std : bool
If this is true then the standard deviation is also returned.
By default only the correlation matrix is returned.
Returns
-------
corr : ndarray (subclass)
correlation matrix
std_ : ndarray
standard deviations from the diagonal of cov, returned only if
``return_std`` is true
Notes
-----
This function does not convert subclasses of ndarrays. This requires that
division is defined elementwise. np.ma.array and np.matrix are allowed. | cov2corr | python | statsmodels/statsmodels | statsmodels/stats/moment_helpers.py | https://github.com/statsmodels/statsmodels/blob/master/statsmodels/stats/moment_helpers.py | BSD-3-Clause |
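A short sketch with an illustrative 2x2 covariance matrix:
import numpy as np
from statsmodels.stats.moment_helpers import cov2corr
cov = np.array([[4.0, 2.0], [2.0, 9.0]])
corr, std = cov2corr(cov, return_std=True)
print(std)   # [2. 3.]
print(corr)  # off-diagonal: 2 / (2 * 3) = 0.3333...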
def corr2cov(corr, std):
"""
convert correlation matrix to covariance matrix given standard deviation
Parameters
----------
corr : array_like, 2d
correlation matrix, see Notes
std : array_like, 1d
standard deviation
Returns
-------
cov : ndarray (subclass)
covariance matrix
Notes
-----
This function does not convert subclasses of ndarrays. This requires
that multiplication is defined elementwise. np.ma.array is allowed, but
matrices are not.
"""
corr = np.asanyarray(corr)
std_ = np.asanyarray(std)
cov = corr * np.outer(std_, std_)
return cov | convert correlation matrix to covariance matrix given standard deviation
Parameters
----------
corr : array_like, 2d
correlation matrix, see Notes
std : array_like, 1d
standard deviation
Returns
-------
cov : ndarray (subclass)
covariance matrix
Notes
-----
This function does not convert subclasses of ndarrays. This requires
that multiplication is defined elementwise. np.ma.array is allowed, but
matrices are not. | corr2cov | python | statsmodels/statsmodels | statsmodels/stats/moment_helpers.py | https://github.com/statsmodels/statsmodels/blob/master/statsmodels/stats/moment_helpers.py | BSD-3-Clause
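The inverse operation with illustrative numbers; corr2cov and cov2corr round-trip each other:
import numpy as np
from statsmodels.stats.moment_helpers import corr2cov, cov2corr
corr = np.array([[1.0, 0.3], [0.3, 1.0]])
cov = corr2cov(corr, std=[2.0, 3.0])  # [[4.0, 1.8], [1.8, 9.0]]
np.testing.assert_allclose(cov2corr(cov), corr)  # round trip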
def se_cov(cov):
"""
get standard deviation from covariance matrix
just a shorthand for np.sqrt(np.diag(cov))
Parameters
----------
cov : array_like, square
covariance matrix
Returns
-------
std : ndarray
standard deviation from diagonal of cov
"""
return np.sqrt(np.diag(cov)) | get standard deviation from covariance matrix
just a shorthand for np.sqrt(np.diag(cov))
Parameters
----------
cov : array_like, square
covariance matrix
Returns
-------
std : ndarray
standard deviation from diagonal of cov | se_cov | python | statsmodels/statsmodels | statsmodels/stats/moment_helpers.py | https://github.com/statsmodels/statsmodels/blob/master/statsmodels/stats/moment_helpers.py | BSD-3-Clause |
def anderson_statistic(x, dist='norm', fit=True, params=(), axis=0):
"""
Calculate the Anderson-Darling a2 statistic.
Parameters
----------
x : array_like
The data to test.
dist : {'norm', callable}
The distribution assumed under the null hypothesis of the test.
fit : bool
If True, then the distribution parameters are estimated.
Currently only for 1d data x, except in case dist='norm'.
params : tuple
The optional distribution parameters if fit is False.
axis : int
If dist is 'norm' or fit is False, then the data can be n-dimensional
and axis specifies the axis of a variable.
Returns
-------
{float, ndarray}
The Anderson-Darling statistic.
"""
x = array_like(x, 'x', ndim=None)
fit = bool_like(fit, 'fit')
axis = int_like(axis, 'axis')
y = np.sort(x, axis=axis)
nobs = y.shape[axis]
if fit:
if dist == 'norm':
xbar = np.expand_dims(np.mean(x, axis=axis), axis)
s = np.expand_dims(np.std(x, ddof=1, axis=axis), axis)
w = (y - xbar) / s
z = stats.norm.cdf(w)
elif callable(dist):
params = dist.fit(x)
z = dist.cdf(y, *params)
else:
raise ValueError("dist must be 'norm' or a Callable")
else:
if callable(dist):
z = dist.cdf(y, *params)
else:
raise ValueError('if fit is false, then dist must be callable')
i = np.arange(1, nobs + 1)
sl1 = [None] * x.ndim
sl1[axis] = slice(None)
sl1 = tuple(sl1)
sl2 = [slice(None)] * x.ndim
sl2[axis] = slice(None, None, -1)
sl2 = tuple(sl2)
with warnings.catch_warnings():
warnings.filterwarnings(
"ignore", message="divide by zero encountered in log1p"
)
ad_values = (2 * i[sl1] - 1.0) / nobs * (np.log(z) + np.log1p(-z[sl2]))
s = np.sum(ad_values, axis=axis)
a2 = -nobs - s
return a2 | Calculate the Anderson-Darling a2 statistic.
Parameters
----------
x : array_like
The data to test.
dist : {'norm', callable}
The distribution assumed under the null hypothesis of the test.
fit : bool
If True, then the distribution parameters are estimated.
Currently only for 1d data x, except in case dist='norm'.
params : tuple
The optional distribution parameters if fit is False.
axis : int
If dist is 'norm' or fit is False, then the data can be n-dimensional
and axis specifies the axis of a variable.
Returns
-------
{float, ndarray}
The Anderson-Darling statistic. | anderson_statistic | python | statsmodels/statsmodels | statsmodels/stats/_adnorm.py | https://github.com/statsmodels/statsmodels/blob/master/statsmodels/stats/_adnorm.py | BSD-3-Clause |
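A minimal sketch, assuming the public import path through statsmodels.stats.diagnostic (referenced in the See Also of normal_ad below); the sample and seed are illustrative:
import numpy as np
from statsmodels.stats.diagnostic import anderson_statistic
rng = np.random.default_rng(2024)  # hypothetical seed
x = rng.normal(size=200)
print(anderson_statistic(x, dist='norm', fit=True))  # small values are consistent with normality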
def normal_ad(x, axis=0):
"""
Anderson-Darling test for normal distribution unknown mean and variance.
Parameters
----------
x : array_like
The data array.
axis : int
The axis to perform the test along.
Returns
-------
ad2 : float
Anderson-Darling test statistic.
pval : float
The p-value for the hypothesis that the data come from a normal
distribution with unknown mean and variance.
See Also
--------
statsmodels.stats.diagnostic.anderson_statistic
The Anderson-Darling a2 statistic.
statsmodels.stats.diagnostic.kstest_fit
Kolmogorov-Smirnov test with estimated parameters for Normal or
Exponential distributions.
"""
ad2 = anderson_statistic(x, dist='norm', fit=True, axis=axis)
n = x.shape[axis]
ad2a = ad2 * (1 + 0.75 / n + 2.25 / n ** 2)
if np.size(ad2a) == 1:
if (ad2a >= 0.00 and ad2a < 0.200):
pval = 1 - np.exp(-13.436 + 101.14 * ad2a - 223.73 * ad2a ** 2)
elif ad2a < 0.340:
pval = 1 - np.exp(-8.318 + 42.796 * ad2a - 59.938 * ad2a ** 2)
elif ad2a < 0.600:
pval = np.exp(0.9177 - 4.279 * ad2a - 1.38 * ad2a ** 2)
elif ad2a <= 13:
pval = np.exp(1.2937 - 5.709 * ad2a + 0.0186 * ad2a ** 2)
else:
pval = 0.0 # is < 4.9542108058458799e-31
else:
bounds = np.array([0.0, 0.200, 0.340, 0.600])
def pval0(ad2a):
return np.nan * np.ones_like(ad2a)
def pval1(ad2a):
return 1 - np.exp(-13.436 + 101.14 * ad2a - 223.73 * ad2a ** 2)
def pval2(ad2a):
return 1 - np.exp(-8.318 + 42.796 * ad2a - 59.938 * ad2a ** 2)
def pval3(ad2a):
return np.exp(0.9177 - 4.279 * ad2a - 1.38 * ad2a ** 2)
def pval4(ad2a):
return np.exp(1.2937 - 5.709 * ad2a + 0.0186 * ad2a ** 2)
pvalli = [pval0, pval1, pval2, pval3, pval4]
idx = np.searchsorted(bounds, ad2a, side='right')
pval = np.nan * np.ones_like(ad2a)
for i in range(5):
mask = (idx == i)
pval[mask] = pvalli[i](ad2a[mask])
return ad2, pval | Anderson-Darling test for normal distribution unknown mean and variance.
Parameters
----------
x : array_like
The data array.
axis : int
The axis to perform the test along.
Returns
-------
ad2 : float
Anderson-Darling test statistic.
pval : float
The p-value for the hypothesis that the data come from a normal
distribution with unknown mean and variance.
See Also
--------
statsmodels.stats.diagnostic.anderson_statistic
The Anderson-Darling a2 statistic.
statsmodels.stats.diagnostic.kstest_fit
Kolmogorov-Smirnov test with estimated parameters for Normal or
Exponential distributions. | normal_ad | python | statsmodels/statsmodels | statsmodels/stats/_adnorm.py | https://github.com/statsmodels/statsmodels/blob/master/statsmodels/stats/_adnorm.py | BSD-3-Clause |
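A minimal sketch for normal_ad with an illustrative normal sample:
import numpy as np
from statsmodels.stats.diagnostic import normal_ad
rng = np.random.default_rng(7)  # hypothetical seed
ad2, pval = normal_ad(rng.normal(loc=5, scale=2, size=500))
print(ad2, pval)  # a large p-value gives no evidence against normality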
def sum_weights(self):
"""Sum of weights"""
return self.weights.sum(0) | Sum of weights | sum_weights | python | statsmodels/statsmodels | statsmodels/stats/weightstats.py | https://github.com/statsmodels/statsmodels/blob/master/statsmodels/stats/weightstats.py | BSD-3-Clause |
def nobs(self):
"""alias for number of observations/cases, equal to sum of weights
"""
return self.sum_weights | alias for number of observations/cases, equal to sum of weights | nobs | python | statsmodels/statsmodels | statsmodels/stats/weightstats.py | https://github.com/statsmodels/statsmodels/blob/master/statsmodels/stats/weightstats.py | BSD-3-Clause |
def sum(self):
"""weighted sum of data"""
return np.dot(self.data.T, self.weights) | weighted sum of data | sum | python | statsmodels/statsmodels | statsmodels/stats/weightstats.py | https://github.com/statsmodels/statsmodels/blob/master/statsmodels/stats/weightstats.py | BSD-3-Clause |
def mean(self):
"""weighted mean of data"""
return self.sum / self.sum_weights | weighted mean of data | mean | python | statsmodels/statsmodels | statsmodels/stats/weightstats.py | https://github.com/statsmodels/statsmodels/blob/master/statsmodels/stats/weightstats.py | BSD-3-Clause |
def demeaned(self):
"""data with weighted mean subtracted"""
return self.data - self.mean | data with weighted mean subtracted | demeaned | python | statsmodels/statsmodels | statsmodels/stats/weightstats.py | https://github.com/statsmodels/statsmodels/blob/master/statsmodels/stats/weightstats.py | BSD-3-Clause |
def sumsquares(self):
"""weighted sum of squares of demeaned data"""
return np.dot((self.demeaned ** 2).T, self.weights) | weighted sum of squares of demeaned data | sumsquares | python | statsmodels/statsmodels | statsmodels/stats/weightstats.py | https://github.com/statsmodels/statsmodels/blob/master/statsmodels/stats/weightstats.py | BSD-3-Clause |
def var_ddof(self, ddof=0):
"""variance of data given ddof
Parameters
----------
ddof : int, float
degrees of freedom correction, independent of attribute ddof
Returns
-------
var : float, ndarray
variance with denominator ``sum_weights - ddof``
"""
return self.sumsquares / (self.sum_weights - ddof) | variance of data given ddof
Parameters
----------
ddof : int, float
degrees of freedom correction, independent of attribute ddof
Returns
-------
var : float, ndarray
variance with denominator ``sum_weights - ddof`` | var_ddof | python | statsmodels/statsmodels | statsmodels/stats/weightstats.py | https://github.com/statsmodels/statsmodels/blob/master/statsmodels/stats/weightstats.py | BSD-3-Clause |
def std_ddof(self, ddof=0):
"""standard deviation of data with given ddof
Parameters
----------
ddof : int, float
degrees of freedom correction, independent of attribute ddof
Returns
-------
std : float, ndarray
standard deviation with denominator ``sum_weights - ddof``
"""
return np.sqrt(self.var_ddof(ddof=ddof)) | standard deviation of data with given ddof
Parameters
----------
ddof : int, float
degrees of freedom correction, independent of attribute ddof
Returns
-------
std : float, ndarray
standard deviation with denominator ``sum_weights - ddof`` | std_ddof | python | statsmodels/statsmodels | statsmodels/stats/weightstats.py | https://github.com/statsmodels/statsmodels/blob/master/statsmodels/stats/weightstats.py | BSD-3-Clause |
def var(self):
"""variance with default degrees of freedom correction
"""
return self.sumsquares / (self.sum_weights - self.ddof) | variance with default degrees of freedom correction | var | python | statsmodels/statsmodels | statsmodels/stats/weightstats.py | https://github.com/statsmodels/statsmodels/blob/master/statsmodels/stats/weightstats.py | BSD-3-Clause |
def _var(self):
"""variance without degrees of freedom correction
used for statistical tests with controlled ddof
"""
return self.sumsquares / self.sum_weights | variance without degrees of freedom correction
used for statistical tests with controlled ddof | _var | python | statsmodels/statsmodels | statsmodels/stats/weightstats.py | https://github.com/statsmodels/statsmodels/blob/master/statsmodels/stats/weightstats.py | BSD-3-Clause |
def std(self):
"""standard deviation with default degrees of freedom correction
"""
return np.sqrt(self.var) | standard deviation with default degrees of freedom correction | std | python | statsmodels/statsmodels | statsmodels/stats/weightstats.py | https://github.com/statsmodels/statsmodels/blob/master/statsmodels/stats/weightstats.py | BSD-3-Clause |
def cov(self):
"""weighted covariance of data if data is 2 dimensional
assumes variables in columns and observations in rows
uses default ddof
"""
cov_ = np.dot(self.weights * self.demeaned.T, self.demeaned)
cov_ /= self.sum_weights - self.ddof
return cov_ | weighted covariance of data if data is 2 dimensional
assumes variables in columns and observations in rows
uses default ddof | cov | python | statsmodels/statsmodels | statsmodels/stats/weightstats.py | https://github.com/statsmodels/statsmodels/blob/master/statsmodels/stats/weightstats.py | BSD-3-Clause |
def corrcoef(self):
"""weighted correlation with default ddof
assumes variables in columns and observations in rows
"""
return self.cov / self.std / self.std[:, None] | weighted correlation with default ddof
assumes variables in columns and observations in rows | corrcoef | python | statsmodels/statsmodels | statsmodels/stats/weightstats.py | https://github.com/statsmodels/statsmodels/blob/master/statsmodels/stats/weightstats.py | BSD-3-Clause |
def std_mean(self):
"""standard deviation of weighted mean
"""
std = self.std
if self.ddof != 0:
# ddof correction (needs a copy of std)
std = std * np.sqrt(
(self.sum_weights - self.ddof) / self.sum_weights
)
return std / np.sqrt(self.sum_weights - 1) | standard deviation of weighted mean | std_mean | python | statsmodels/statsmodels | statsmodels/stats/weightstats.py | https://github.com/statsmodels/statsmodels/blob/master/statsmodels/stats/weightstats.py | BSD-3-Clause |
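The properties above belong to the DescrStatsW class in statsmodels.stats.weightstats; a minimal sketch with illustrative data and weights:
import numpy as np
from statsmodels.stats.weightstats import DescrStatsW
x = np.array([1.0, 2.0, 3.0, 4.0])
w = np.array([1.0, 2.0, 2.0, 1.0])
d = DescrStatsW(x, weights=w, ddof=1)
print(d.sum_weights, d.mean)  # 6.0 and weighted mean 2.5
print(d.var, d.std_mean)      # ddof-corrected variance and standard error of the mean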