def approx_hess_cs(x, f, epsilon=None, args=(), kwargs={}):
    '''Calculate Hessian with complex-step derivative approximation

    Parameters
    ----------
    x : array_like
        value at which function derivative is evaluated
    f : function
        function of one array f(x)
    epsilon : float
        stepsize, if None, then stepsize is automatically chosen

    Returns
    -------
    hess : ndarray
        array of partial second derivatives, Hessian

    Notes
    -----
    based on equation 10 in
    M. S. RIDOUT: Statistical Applications of the Complex-step Method
    of Numerical Differentiation, University of Kent, Canterbury, Kent, U.K.

    The stepsize is the same for the complex and the finite difference part.
    '''
    # TODO: might want to consider lowering the step for pure derivatives
    n = len(x)
    h = _get_epsilon(x, 3, epsilon, n)
    ee = np.diag(h)
    hess = np.outer(h, h)
    for i in range(n):
        for j in range(i, n):
            hess[i, j] = np.squeeze(
                (f(*((x + 1j*ee[i, :] + ee[j, :],) + args), **kwargs)
                 - f(*((x + 1j*ee[i, :] - ee[j, :],) + args),
                     **kwargs)).imag/2./hess[i, j]
            )
            hess[j, i] = hess[i, j]
    return hess
# Source: statsmodels/tools/numdiff.py (statsmodels/statsmodels, BSD-3-Clause)
# https://github.com/statsmodels/statsmodels/blob/master/statsmodels/tools/numdiff.py
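The double loop above can be exercised on a small test function. A minimal self-contained sketch (fixed scalar step instead of `_get_epsilon`, illustrative names, no `args`/`kwargs` plumbing):

```python
import numpy as np

def hess_cs(f, x, h=1e-5):
    # Hessian via complex step in one direction plus a central difference
    # in the other (Ridout, eq. 10); one fixed step h for simplicity.
    x = np.asarray(x, dtype=float)
    n = x.size
    ee = h * np.eye(n)
    hess = np.empty((n, n))
    for i in range(n):
        for j in range(i, n):
            hess[i, j] = (f(x + 1j * ee[i] + ee[j])
                          - f(x + 1j * ee[i] - ee[j])).imag / (2.0 * h * h)
            hess[j, i] = hess[i, j]
    return hess

f = lambda z: z[0] ** 2 * z[1]     # exact Hessian: [[2*x1, 2*x0], [2*x0, 0]]
H = hess_cs(f, [1.5, 2.0])
```

Because the complex step avoids subtractive cancellation in the first-derivative direction, the result is accurate to near machine precision for this polynomial.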
def test_missing_data_pandas():
    """
    Fixes GH: #144
    """
    X = np.random.random((10, 5))
    X[1, 2] = np.nan
    df = pandas.DataFrame(X)
    vals, cnames, rnames = data.interpret_data(df)
    np.testing.assert_equal(rnames.tolist(), [0, 2, 3, 4, 5, 6, 7, 8, 9])
# Source: statsmodels/tools/tests/test_data.py (statsmodels/statsmodels, BSD-3-Clause)
# https://github.com/statsmodels/statsmodels/blob/master/statsmodels/tools/tests/test_data.py
def _right_squeeze(arr, stop_dim=0):
    """
    Remove trailing singleton dimensions

    Parameters
    ----------
    arr : ndarray
        Input array
    stop_dim : int
        Dimension where checking should stop so that shape[i] is not checked
        for i < stop_dim

    Returns
    -------
    squeezed : ndarray
        Array with all trailing singleton dimensions (0 or 1) removed.
        Singleton dimensions for dimension < stop_dim are retained.
    """
    last = arr.ndim
    for s in reversed(arr.shape):
        if s > 1:
            break
        last -= 1
    last = max(last, stop_dim)
    return arr.reshape(arr.shape[:last])
# Source: statsmodels/tools/validation/validation.py (statsmodels/statsmodels, BSD-3-Clause)
# https://github.com/statsmodels/statsmodels/blob/master/statsmodels/tools/validation/validation.py
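The trailing-only behaviour, and the effect of `stop_dim`, are easy to check with a standalone copy of the function:

```python
import numpy as np

def right_squeeze(arr, stop_dim=0):
    # Standalone copy of the trailing-singleton squeeze shown above.
    last = arr.ndim
    for s in reversed(arr.shape):
        if s > 1:
            break
        last -= 1
    last = max(last, stop_dim)
    return arr.reshape(arr.shape[:last])

a = np.zeros((3, 1, 1))
b = right_squeeze(a)                # trailing singletons removed -> (3,)
c = right_squeeze(a, stop_dim=2)    # dims below stop_dim retained -> (3, 1)
d = right_squeeze(np.zeros((1, 4)))  # leading singleton untouched -> (1, 4)
```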
def array_like(
    obj,
    name,
    dtype=np.double,
    ndim=1,
    maxdim=None,
    shape=None,
    order=None,
    contiguous=False,
    optional=False,
    writeable=True,
):
    """
    Convert array-like to a ndarray and check conditions

    Parameters
    ----------
    obj : array_like
        An array, any object exposing the array interface, an object whose
        __array__ method returns an array, or any (nested) sequence.
    name : str
        Name of the variable to use in exceptions
    dtype : {None, numpy.dtype, str}
        Required dtype. Default is double. If None, does not change the dtype
        of obj (if present) or uses NumPy to automatically detect the dtype
    ndim : {int, None}
        Required number of dimensions of obj. If None, no check is performed.
        If the number of dimensions of obj is less than ndim, additional axes
        are inserted on the right. See examples.
    maxdim : {int, None}
        Maximum allowed dimension. Use ``maxdim`` instead of ``ndim`` when
        inputs are allowed to have ndim 1, 2, ..., or maxdim.
    shape : {tuple[int], None}
        Required shape obj. If None, no check is performed. Partially
        restricted shapes can be checked using None. See examples.
    order : {'C', 'F', None}
        Order of the array
    contiguous : bool
        Ensure that the array's data is contiguous with order ``order``
    optional : bool
        Flag indicating whether None is allowed
    writeable : bool
        Whether to ensure the returned array is writeable

    Returns
    -------
    ndarray
        The converted input.

    Examples
    --------
    Convert a list or pandas series to an array

    >>> import pandas as pd
    >>> x = [0, 1, 2, 3]
    >>> a = array_like(x, 'x', ndim=1)
    >>> a.shape
    (4,)
    >>> a = array_like(pd.Series(x), 'x', ndim=1)
    >>> a.shape
    (4,)
    >>> type(a.orig)
    pandas.core.series.Series

    Squeezes singleton dimensions when required

    >>> x = np.array(x).reshape((4, 1))
    >>> a = array_like(x, 'x', ndim=1)
    >>> a.shape
    (4,)

    Right-appends when required size is larger than actual

    >>> x = [0, 1, 2, 3]
    >>> a = array_like(x, 'x', ndim=2)
    >>> a.shape
    (4, 1)

    Check only the first and last dimension of the input

    >>> x = np.arange(4*10*4).reshape((4, 10, 4))
    >>> y = array_like(x, 'x', ndim=3, shape=(4, None, 4))

    Check only the first two dimensions

    >>> z = array_like(x, 'x', ndim=3, shape=(4, 10))

    Raises ValueError if constraints are not satisfied

    >>> z = array_like(x, 'x', ndim=2)
    Traceback (most recent call last):
        ...
    ValueError: x is required to have ndim 2 but has ndim 3
    >>> z = array_like(x, 'x', shape=(10, 4, 4))
    Traceback (most recent call last):
        ...
    ValueError: x is required to have shape (10, 4, 4) but has shape (4, 10, 4)
    >>> z = array_like(x, 'x', shape=(None, 4, 4))
    Traceback (most recent call last):
        ...
    ValueError: x is required to have shape (*, 4, 4) but has shape (4, 10, 4)
    """
    if optional and obj is None:
        return None
    reqs = ["W"] if writeable else []
    if order == "C" or contiguous:
        reqs += ["C"]
    elif order == "F":
        reqs += ["F"]
    arr = np.require(obj, dtype=dtype, requirements=reqs)
    if maxdim is not None:
        if arr.ndim > maxdim:
            msg = f"{name} must have ndim <= {maxdim}"
            raise ValueError(msg)
    elif ndim is not None:
        if arr.ndim > ndim:
            arr = _right_squeeze(arr, stop_dim=ndim)
        elif arr.ndim < ndim:
            arr = np.reshape(arr, arr.shape + (1,) * (ndim - arr.ndim))
        if arr.ndim != ndim:
            msg = "{0} is required to have ndim {1} but has ndim {2}"
            raise ValueError(msg.format(name, ndim, arr.ndim))
    if shape is not None:
        for actual, req in zip(arr.shape, shape):
            if req is not None and actual != req:
                req_shape = str(shape).replace("None, ", "*, ")
                msg = "{0} is required to have shape {1} but has shape {2}"
                raise ValueError(msg.format(name, req_shape, arr.shape))
    return arr
# Source: statsmodels/tools/validation/validation.py (statsmodels/statsmodels, BSD-3-Clause)
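The ndim-handling step can be seen in isolation on plain NumPy: extra axes are appended on the right when the input has too few dimensions, mirroring the `np.reshape` branch above.

```python
import numpy as np

# Right-append singleton axes until the required ndim is reached,
# as done in the ndim branch of array_like.
x = np.asarray([0.0, 1.0, 2.0, 3.0])
ndim = 2
if x.ndim < ndim:
    x = np.reshape(x, x.shape + (1,) * (ndim - x.ndim))
```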
def wrap(self, obj, columns=None, append=None, trim_start=0, trim_end=0):
    """
    Parameters
    ----------
    obj : {array_like}
        The value to wrap like to a pandas Series or DataFrame.
    columns : {str, list[str]}
        Column names or series name, if obj is 1d.
    append : str
        String to append to the columns to create a new column name.
    trim_start : int
        The number of observations to drop from the start of the index, so
        that the index applied is index[trim_start:].
    trim_end : int
        The number of observations to drop from the end of the index, so
        that the index applied is index[:nobs - trim_end].

    Returns
    -------
    array_like
        A pandas Series or DataFrame, depending on the shape of obj.
    """
    obj = np.asarray(obj)
    if not self._is_pandas:
        return obj
    if obj.shape[0] + trim_start + trim_end != self._pandas_obj.shape[0]:
        raise ValueError(
            "obj must have the same number of elements in "
            "axis 0 as orig"
        )
    index = self._pandas_obj.index
    index = index[trim_start: index.shape[0] - trim_end]
    if obj.ndim == 1:
        if columns is None:
            name = getattr(self._pandas_obj, "name", None)
        elif isinstance(columns, str):
            name = columns
        else:
            name = columns[0]
        if append is not None:
            name = append if name is None else f"{name}_{append}"
        return pd.Series(obj, name=name, index=index)
    elif obj.ndim == 2:
        if columns is None:
            columns = getattr(self._pandas_obj, "columns", None)
        if append is not None:
            new = []
            for c in columns:
                new.append(append if c is None else f"{c}_{append}")
            columns = new
        return pd.DataFrame(obj, columns=columns, index=index)
    else:
        raise ValueError("Can only wrap 1 or 2-d array_like")
# Source: statsmodels/tools/validation/validation.py (statsmodels/statsmodels, BSD-3-Clause)
def bool_like(value, name, optional=False, strict=False):
    """
    Convert to bool or raise if not bool_like

    Parameters
    ----------
    value : object
        Value to verify
    name : str
        Variable name for exceptions
    optional : bool
        Flag indicating whether None is allowed
    strict : bool
        If True, then only allow bool. If False, allow types that support
        casting to bool.

    Returns
    -------
    converted : bool
        value converted to a bool
    """
    if optional and value is None:
        return value
    extra_text = " or None" if optional else ""
    if strict:
        if isinstance(value, bool):
            return value
        else:
            raise TypeError(f"{name} must be a bool{extra_text}")
    if hasattr(value, "squeeze") and callable(value.squeeze):
        value = value.squeeze()
    try:
        return bool(value)
    except Exception:
        raise TypeError(
            "{} must be a bool (or bool-compatible)"
            "{}".format(name, extra_text)
        )
# Source: statsmodels/tools/validation/validation.py (statsmodels/statsmodels, BSD-3-Clause)
def int_like(
    value: Any, name: str, optional: bool = False, strict: bool = False
) -> Optional[int]:
    """
    Convert to int or raise if not int_like

    Parameters
    ----------
    value : object
        Value to verify
    name : str
        Variable name for exceptions
    optional : bool
        Flag indicating whether None is allowed
    strict : bool
        If True, then only allow int or np.integer that are not bool. If False,
        allow types that support integer division by 1 and conversion to int.

    Returns
    -------
    converted : int
        value converted to an int
    """
    if optional and value is None:
        return None
    is_bool_timedelta = isinstance(value, (bool, np.timedelta64))
    if hasattr(value, "squeeze") and callable(value.squeeze):
        value = value.squeeze()
    if isinstance(value, (int, np.integer)) and not is_bool_timedelta:
        return int(value)
    elif not strict and not is_bool_timedelta:
        try:
            if value == (value // 1):
                return int(value)
        except Exception:
            pass
    extra_text = " or None" if optional else ""
    raise TypeError(
        "{} must be integer_like (int or np.integer, but not bool"
        " or timedelta64){}".format(name, extra_text)
    )
# Source: statsmodels/tools/validation/validation.py (statsmodels/statsmodels, BSD-3-Clause)
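The non-strict branch hinges on `value == value // 1`: anything that round-trips through integer division by 1 is accepted. A standalone check (hypothetical helper name, not statsmodels API) shows which inputs pass:

```python
def is_int_like(value):
    # Mirrors the non-strict test above: integer division by 1 must
    # round-trip; anything that cannot divide (e.g. str) is rejected.
    try:
        return bool(value == value // 1)
    except Exception:
        return False
```

Integer-valued floats such as `3.0` pass, while `3.5` and non-numeric types do not.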
def required_int_like(value: Any, name: str, strict: bool = False) -> int:
    """
    Convert to int or raise if not int_like

    Parameters
    ----------
    value : object
        Value to verify
    name : str
        Variable name for exceptions
    strict : bool
        If True, then only allow int or np.integer that are not bool. If False,
        allow types that support integer division by 1 and conversion to int.

    Returns
    -------
    converted : int
        value converted to an int
    """
    _int = int_like(value, name, optional=False, strict=strict)
    assert _int is not None
    return _int
# Source: statsmodels/tools/validation/validation.py (statsmodels/statsmodels, BSD-3-Clause)
def float_like(value, name, optional=False, strict=False):
    """
    Convert to float or raise if not float_like

    Parameters
    ----------
    value : object
        Value to verify
    name : str
        Variable name for exceptions
    optional : bool
        Flag indicating whether None is allowed
    strict : bool
        If True, then only allow int, np.integer, float or np.inexact that are
        not bool or complex. If False, allow complex types with 0 imag part or
        any other type that is float like in the sense that it supports
        division by 1.0 and conversion to float.

    Returns
    -------
    converted : float
        value converted to a float
    """
    if optional and value is None:
        return None
    is_bool = isinstance(value, bool)
    is_complex = isinstance(value, (complex, np.complexfloating))
    if hasattr(value, "squeeze") and callable(value.squeeze):
        value = value.squeeze()
    if isinstance(value, (int, np.integer, float, np.inexact)) and not (
        is_bool or is_complex
    ):
        return float(value)
    elif not strict and is_complex:
        imag = np.imag(value)
        if imag == 0:
            return float(np.real(value))
    elif not strict and not is_bool:
        try:
            return float(value / 1.0)
        except Exception:
            pass
    extra_text = " or None" if optional else ""
    raise TypeError(
        "{} must be float_like (float or np.inexact)"
        "{}".format(name, extra_text)
    )
# Source: statsmodels/tools/validation/validation.py (statsmodels/statsmodels, BSD-3-Clause)
def string_like(value, name, optional=False, options=None, lower=True):
    """
    Check if object is string-like and raise if not

    Parameters
    ----------
    value : object
        Value to verify.
    name : str
        Variable name for exceptions.
    optional : bool
        Flag indicating whether None is allowed.
    options : tuple[str]
        Allowed values for input parameter `value`.
    lower : bool
        Convert all case-based characters in `value` into lowercase.

    Returns
    -------
    str
        The validated input

    Raises
    ------
    TypeError
        If the value is not a string or None when optional is True.
    ValueError
        If the input is not in ``options`` when ``options`` is set.
    """
    if value is None:
        return None
    if not isinstance(value, str):
        extra_text = " or None" if optional else ""
        raise TypeError(f"{name} must be a string{extra_text}")
    if lower:
        value = value.lower()
    if options is not None and value not in options:
        extra_text = "If not None, " if optional else ""
        options_text = "'" + "', '".join(options) + "'"
        msg = "{}{} must be one of: {}".format(
            extra_text, name, options_text
        )
        raise ValueError(msg)
    return value
# Source: statsmodels/tools/validation/validation.py (statsmodels/statsmodels, BSD-3-Clause)
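The same lowercase-then-membership pattern can be sketched standalone; the function and option names here are illustrative, not statsmodels API:

```python
def check_method(value, options=("pearson", "spearman")):
    # Validate a string option: type check, normalize case, then check
    # membership, as string_like does above.
    if not isinstance(value, str):
        raise TypeError("method must be a string")
    value = value.lower()
    if value not in options:
        raise ValueError("method must be one of: " + ", ".join(options))
    return value
```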
def dict_like(value, name, optional=False, strict=True):
    """
    Check if dict_like (dict, Mapping) or raise if not

    Parameters
    ----------
    value : object
        Value to verify
    name : str
        Variable name for exceptions
    optional : bool
        Flag indicating whether None is allowed
    strict : bool
        If True, then only allow dict. If False, allow any Mapping-like object.

    Returns
    -------
    converted : dict_like
        value
    """
    if optional and value is None:
        return None
    if not isinstance(value, Mapping) or (
        strict and not (isinstance(value, dict))
    ):
        extra_text = "If not None, " if optional else ""
        strict_text = " or dict_like (i.e., a Mapping)" if strict else ""
        msg = f"{extra_text}{name} must be a dict{strict_text}"
        raise TypeError(msg)
    return value
# Source: statsmodels/tools/validation/validation.py (statsmodels/statsmodels, BSD-3-Clause)
def _noncentrality_chisquare(chi2_stat, df, alpha=0.05):
    """noncentrality parameter for chi-square statistic

    `nc` is zero-truncated umvue

    Parameters
    ----------
    chi2_stat : float
        Chisquare-statistic, for example from a hypothesis test
    df : int or float
        Degrees of freedom
    alpha : float in (0, 1)
        Significance level for the confidence interval, coverage is 1 - alpha.

    Returns
    -------
    HolderTuple
        The main attributes are

        - ``nc`` : estimate of noncentrality parameter
        - ``confint`` : lower and upper bound of confidence interval for ``nc``

        Other attributes are estimates for nc by different methods.

    References
    ----------
    .. [1] Kubokawa, T., C.P. Robert, and A.K.Md.E. Saleh. 1993. "Estimation of
       Noncentrality Parameters." Canadian Journal of Statistics 21 (1): 45-57.
       https://doi.org/10.2307/3315657.
    .. [2] Li, Qizhai, Junjian Zhang, and Shuai Dai. 2009. "On Estimating the
       Non-Centrality Parameter of a Chi-Squared Distribution."
       Statistics & Probability Letters 79 (1): 98-104.
       https://doi.org/10.1016/j.spl.2008.07.025.
    """
    alpha_half = alpha / 2
    nc_umvue = chi2_stat - df
    nc = np.maximum(nc_umvue, 0)
    nc_lzd = np.maximum(nc_umvue, chi2_stat / (df + 1))
    nc_krs = np.maximum(nc_umvue, chi2_stat * 2 / (df + 2))
    nc_median = special.chndtrinc(chi2_stat, df, 0.5)
    ci = special.chndtrinc(chi2_stat, df, [1 - alpha_half, alpha_half])
    res = Holder(nc=nc,
                 confint=ci,
                 nc_umvue=nc_umvue,
                 nc_lzd=nc_lzd,
                 nc_krs=nc_krs,
                 nc_median=nc_median,
                 name="Noncentrality for chisquare-distributed random variable"
                 )
    return res
# Source: statsmodels/stats/effect_size.py (statsmodels/statsmodels, BSD-3-Clause)
# https://github.com/statsmodels/statsmodels/blob/master/statsmodels/stats/effect_size.py
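The point estimates above reduce to a few arithmetic expressions; a pure-Python sketch for one statistic (the confidence interval via `special.chndtrinc` is omitted):

```python
# Noncentrality point estimates for a chi-square statistic, as above.
chi2_stat, df = 27.4, 10.0
nc_umvue = chi2_stat - df                         # unbiased, may be negative
nc = max(nc_umvue, 0.0)                           # zero-truncated UMVUE
nc_lzd = max(nc_umvue, chi2_stat / (df + 1.0))    # Li/Zhang/Dai variant
nc_krs = max(nc_umvue, 2.0 * chi2_stat / (df + 2.0))  # Kubokawa et al.

# When the statistic falls below df, truncation kicks in:
nc_small = max(5.0 - 10.0, 0.0)
```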
def _noncentrality_f(f_stat, df1, df2, alpha=0.05):
    """noncentrality parameter for f statistic

    `nc` is zero-truncated umvue

    Parameters
    ----------
    f_stat : float
        f-statistic, for example from a hypothesis test
    df1, df2 : int or float
        Numerator and denominator degrees of freedom
    alpha : float in (0, 1)
        Significance level for the confidence interval, coverage is 1 - alpha.

    Returns
    -------
    HolderTuple
        The main attributes are

        - ``nc`` : estimate of noncentrality parameter
        - ``confint`` : lower and upper bound of confidence interval for ``nc``

        Other attributes are estimates for nc by different methods.

    References
    ----------
    .. [1] Kubokawa, T., C.P. Robert, and A.K.Md.E. Saleh. 1993. "Estimation of
       Noncentrality Parameters." Canadian Journal of Statistics 21 (1): 45-57.
       https://doi.org/10.2307/3315657.
    """
    alpha_half = alpha / 2
    x_s = f_stat * df1 / df2
    nc_umvue = (df2 - 2) * x_s - df1
    nc = np.maximum(nc_umvue, 0)
    nc_krs = np.maximum(nc_umvue, x_s * 2 * (df2 - 1) / (df1 + 2))
    nc_median = special.ncfdtrinc(df1, df2, 0.5, f_stat)
    ci = special.ncfdtrinc(df1, df2, [1 - alpha_half, alpha_half], f_stat)
    res = Holder(nc=nc,
                 confint=ci,
                 nc_umvue=nc_umvue,
                 nc_krs=nc_krs,
                 nc_median=nc_median,
                 name="Noncentrality for F-distributed random variable"
                 )
    return res
# Source: statsmodels/stats/effect_size.py (statsmodels/statsmodels, BSD-3-Clause)
def _noncentrality_t(t_stat, df, alpha=0.05):
    """noncentrality parameter for t statistic

    Parameters
    ----------
    t_stat : float
        t-statistic, for example from a hypothesis test
    df : int or float
        Degrees of freedom
    alpha : float in (0, 1)
        Significance level for the confidence interval, coverage is 1 - alpha.

    Returns
    -------
    HolderTuple
        The main attributes are

        - ``nc`` : estimate of noncentrality parameter
        - ``confint`` : lower and upper bound of confidence interval for ``nc``

        Other attributes are estimates for nc by different methods.

    References
    ----------
    .. [1] Hedges, Larry V. 2016. "Distribution Theory for Glass's Estimator of
       Effect Size and Related Estimators."
       Journal of Educational Statistics, November.
       https://doi.org/10.3102/10769986006002107.
    """
    alpha_half = alpha / 2
    gfac = np.exp(special.gammaln(df/2.-0.5) - special.gammaln(df/2.))
    c11 = np.sqrt(df/2.) * gfac
    nc = t_stat / c11
    nc_median = special.nctdtrinc(df, 0.5, t_stat)
    ci = special.nctdtrinc(df, [1 - alpha_half, alpha_half], t_stat)
    res = Holder(nc=nc,
                 confint=ci,
                 nc_median=nc_median,
                 name="Noncentrality for t-distributed random variable"
                 )
    return res
# Source: statsmodels/stats/effect_size.py (statsmodels/statsmodels, BSD-3-Clause)
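The bias-correction factor `c11` can be reproduced with `math.lgamma` in place of `scipy.special.gammaln`; a small sketch with an illustrative t-statistic:

```python
from math import exp, lgamma, sqrt

def c11_factor(df):
    # c11 = sqrt(df/2) * Gamma(df/2 - 1/2) / Gamma(df/2), via log-gamma
    # for numerical stability, as in the function above.
    return sqrt(df / 2.0) * exp(lgamma(df / 2.0 - 0.5) - lgamma(df / 2.0))

nc = 2.5 / c11_factor(10.0)   # estimate for t = 2.5, df = 10
```

Since `c11 > 1` for finite `df`, the estimate shrinks the raw t-statistic slightly.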
def _kurtosis(a):
    """
    wrapper for scipy.stats.kurtosis that returns nan instead of raising Error

    missing options
    """
    try:
        res = stats.kurtosis(a)
    except ValueError:
        res = np.nan
    return res
# Source: statsmodels/stats/descriptivestats.py (statsmodels/statsmodels, BSD-3-Clause)
# https://github.com/statsmodels/statsmodels/blob/master/statsmodels/stats/descriptivestats.py
def _skew(a):
    """
    wrapper for scipy.stats.skew that returns nan instead of raising Error

    missing options
    """
    try:
        res = stats.skew(a)
    except ValueError:
        res = np.nan
    return res
# Source: statsmodels/stats/descriptivestats.py (statsmodels/statsmodels, BSD-3-Clause)
def sign_test(samp, mu0=0):
    """
    Signs test

    Parameters
    ----------
    samp : array_like
        1d array. The sample for which you want to perform the sign test.
    mu0 : float
        See Notes for the definition of the sign test. mu0 is 0 by
        default, but it is common to set it to the median.

    Returns
    -------
    M
    p-value

    Notes
    -----
    The signs test returns

        M = (N(+) - N(-))/2

    where N(+) is the number of values above `mu0`, N(-) is the number of
    values below. Values equal to `mu0` are discarded.

    The p-value for M is calculated using the binomial distribution
    and can be interpreted the same as for a t-test. The test-statistic
    is distributed Binom(min(N(+), N(-)), n_trials, .5) where n_trials
    equals N(+) + N(-).

    See Also
    --------
    scipy.stats.wilcoxon
    """
    samp = np.asarray(samp)
    pos = np.sum(samp > mu0)
    neg = np.sum(samp < mu0)
    M = (pos - neg) / 2.0
    try:
        p = stats.binomtest(min(pos, neg), pos + neg, 0.5).pvalue
    except AttributeError:
        # Remove after min SciPy >= 1.7
        p = stats.binom_test(min(pos, neg), pos + neg, 0.5)
    return M, p
# Source: statsmodels/stats/descriptivestats.py (statsmodels/statsmodels, BSD-3-Clause)
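The same statistic and two-sided p-value can be computed without SciPy; a self-contained sketch replacing the binomial test with an explicit Binom(n, 0.5) tail sum:

```python
from math import comb

def sign_test(samp, mu0=0.0):
    # Sign test as above: count values above/below mu0 (ties discarded),
    # then a two-sided binomial tail probability under p = 0.5.
    pos = sum(x > mu0 for x in samp)
    neg = sum(x < mu0 for x in samp)
    n = pos + neg
    m = (pos - neg) / 2.0
    k = min(pos, neg)
    tail = sum(comb(n, i) for i in range(k + 1)) / 2.0 ** n
    p = min(1.0, 2.0 * tail)   # symmetric under p = 0.5, capped at 1
    return m, p

M, p = sign_test([1.2, 0.8, 2.5, -0.4, 3.1])
```

Here 4 of 5 values lie above zero, giving M = 1.5 and a two-sided p-value of 2 * (C(5,0) + C(5,1)) / 32 = 0.375.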
def frame(self) -> pd.DataFrame:
    """
    Descriptive statistics for both numeric and categorical data

    Returns
    -------
    DataFrame
        The statistics
    """
    numeric = self.numeric
    categorical = self.categorical
    if categorical.shape[1] == 0:
        return numeric
    elif numeric.shape[1] == 0:
        return categorical
    df = pd.concat([numeric, categorical], axis=1)
    return self._reorder(df[self._data.columns])
# Source: statsmodels/stats/descriptivestats.py (statsmodels/statsmodels, BSD-3-Clause)
def numeric(self) -> pd.DataFrame:
    """
    Descriptive statistics for numeric data

    Returns
    -------
    DataFrame
        The statistics of the numeric columns
    """
    df: pd.DataFrame = self._data.loc[:, self._is_numeric]
    cols = df.columns
    _, k = df.shape
    std = df.std()
    count = df.count()
    mean = df.mean()
    mad = (df - mean).abs().mean()
    std_err = std.copy()
    std_err.loc[count > 0] /= count.loc[count > 0] ** 0.5
    if self._use_t:
        q = stats.t(count - 1).ppf(1.0 - self._alpha / 2)
    else:
        q = stats.norm.ppf(1.0 - self._alpha / 2)

    def _mode(ser):
        dtype = (ser.dtype if isinstance(ser.dtype, np.dtype)
                 else ser.dtype.numpy_dtype)
        ser_no_missing = ser.dropna().to_numpy(dtype=dtype)
        kwargs = {} if SP_LT_19 else {"keepdims": True}
        mode_res = stats.mode(ser_no_missing, **kwargs)
        # Changes in SciPy 1.10
        if np.isscalar(mode_res[0]):
            return float(mode_res[0]), mode_res[1]
        if mode_res[0].shape[0] > 0:
            return [float(val) for val in mode_res]
        return np.nan, np.nan

    mode_values = df.apply(_mode).T
    if mode_values.size > 0:
        if isinstance(mode_values, pd.DataFrame):
            # pandas 1.0 or later
            mode = np.asarray(mode_values[0], dtype=float)
            mode_counts = np.asarray(mode_values[1], dtype=np.int64)
        else:
            # pandas before 1.0 returns a Series of 2-elem list
            mode = []
            mode_counts = []
            for idx in mode_values.index:
                val = mode_values.loc[idx]
                mode.append(val[0])
                mode_counts.append(val[1])
            mode = np.atleast_1d(mode)
            mode_counts = np.atleast_1d(mode_counts)
    else:
        mode = mode_counts = np.empty(0)
    loc = count > 0
    mode_freq = np.full(mode.shape[0], np.nan)
    mode_freq[loc] = mode_counts[loc] / count.loc[loc]
    # TODO: Workaround for pandas AbstractMethodError in extension
    # types. Remove when quantile is supported for these
    _df = df
    try:
        from pandas.api.types import is_extension_array_dtype

        _df = df.copy()
        for col in df:
            if is_extension_array_dtype(df[col].dtype):
                if _df[col].isnull().any():
                    _df[col] = _df[col].fillna(np.nan)
    except ImportError:
        pass
    if df.shape[1] > 0:
        iqr = _df.quantile(0.75) - _df.quantile(0.25)
    else:
        iqr = mean

    def _safe_jarque_bera(c):
        a = np.asarray(c)
if a.shape[0] < 2:
return (np.nan,) * 4
return jarque_bera(a)
jb = df.apply(
lambda x: list(_safe_jarque_bera(x.dropna())), result_type="expand"
).T
nan_mean = mean.copy()
nan_mean.loc[nan_mean == 0] = np.nan
coef_var = std / nan_mean
results = {
"nobs": pd.Series(
np.ones(k, dtype=np.int64) * df.shape[0], index=cols
),
"missing": df.shape[0] - count,
"mean": mean,
"std_err": std_err,
"upper_ci": mean + q * std_err,
"lower_ci": mean - q * std_err,
"std": std,
"iqr": iqr,
"mad": mad,
"coef_var": coef_var,
"range": pd_ptp(df),
"max": df.max(),
"min": df.min(),
"skew": jb[2],
"kurtosis": jb[3],
"iqr_normal": iqr / np.diff(stats.norm.ppf([0.25, 0.75])),
"mad_normal": mad / np.sqrt(2 / np.pi),
"jarque_bera": jb[0],
"jarque_bera_pval": jb[1],
"mode": pd.Series(mode, index=cols),
"mode_freq": pd.Series(mode_freq, index=cols),
"median": df.median(),
}
final = {k: v for k, v in results.items() if k in self._stats}
results_df = pd.DataFrame(
list(final.values()), columns=cols, index=list(final.keys())
)
if "percentiles" not in self._stats:
return results_df
# Pandas before 1.0 cannot handle empty DF
if df.shape[1] > 0:
# TODO: Remove when extension types support quantile
perc = _df.quantile(self._percentiles / 100).astype(float)
else:
perc = pd.DataFrame(index=self._percentiles / 100, dtype=float)
if np.all(np.floor(100 * perc.index) == (100 * perc.index)):
perc.index = [f"{int(100 * idx)}%" for idx in perc.index]
else:
dupe = True
scale = 100
index = perc.index
while dupe:
scale *= 10
idx = np.floor(scale * perc.index)
if np.all(np.diff(idx) > 0):
dupe = False
index = np.floor(scale * index) / (scale / 100)
fmt = f"0.{len(str(scale//100))-1}f"
output = f"{{0:{fmt}}}%"
perc.index = [output.format(val) for val in index]
# Add in the names of the percentiles to the output
self._stats = self._stats + perc.index.tolist()
return self._reorder(pd.concat([results_df, perc], axis=0)) | Descriptive statistics for numeric data
Returns
-------
DataFrame
The statistics of the numeric columns | numeric | python | statsmodels/statsmodels | statsmodels/stats/descriptivestats.py | https://github.com/statsmodels/statsmodels/blob/master/statsmodels/stats/descriptivestats.py | BSD-3-Clause |
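The ``iqr_normal`` and ``mad_normal`` entries above rescale the interquartile range and mean absolute deviation so that, for Gaussian data, both estimate the standard deviation; a quick numpy check of those normalizing constants on simulated data:

```python
import numpy as np
from scipy import stats

rng = np.random.default_rng(0)
x = rng.standard_normal(100_000)   # true standard deviation is 1

iqr = np.quantile(x, 0.75) - np.quantile(x, 0.25)
mad = np.mean(np.abs(x - x.mean()))

# Same normalizing constants as used in the statistics table above.
iqr_normal = iqr / np.diff(stats.norm.ppf([0.25, 0.75]))[0]
mad_normal = mad / np.sqrt(2 / np.pi)
```

Both rescaled values should be close to 1, the true standard deviation of the simulated sample.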
def categorical(self) -> pd.DataFrame:
"""
Descriptive statistics for categorical data
Returns
-------
DataFrame
The statistics of the categorical columns
"""
df = self._data.loc[:, [col for col in self._is_cat_like]]
k = df.shape[1]
cols = df.columns
vc = {col: df[col].value_counts(normalize=True) for col in df}
distinct = pd.Series(
{col: vc[col].shape[0] for col in vc}, dtype=np.int64
)
top = {}
freq = {}
for col in vc:
single = vc[col]
if single.shape[0] >= self._ntop:
top[col] = single.index[: self._ntop]
freq[col] = np.asarray(single.iloc[: self._ntop])
else:
val = list(single.index)
val += [None] * (self._ntop - len(val))
top[col] = val
freq_val = list(single)
freq_val += [np.nan] * (self._ntop - len(freq_val))
freq[col] = np.asarray(freq_val)
index = [f"top_{i}" for i in range(1, self._ntop + 1)]
top_df = pd.DataFrame(top, dtype="object", index=index, columns=cols)
index = [f"freq_{i}" for i in range(1, self._ntop + 1)]
freq_df = pd.DataFrame(freq, dtype="object", index=index, columns=cols)
results = {
"nobs": pd.Series(
np.ones(k, dtype=np.int64) * df.shape[0], index=cols
),
"missing": df.shape[0] - df.count(),
"distinct": distinct,
}
final = {k: v for k, v in results.items() if k in self._stats}
results_df = pd.DataFrame(
list(final.values()),
columns=cols,
index=list(final.keys()),
dtype="object",
)
if self._compute_top:
results_df = pd.concat([results_df, top_df], axis=0)
if self._compute_freq:
results_df = pd.concat([results_df, freq_df], axis=0)
return self._reorder(results_df) | Descriptive statistics for categorical data
Returns
-------
DataFrame
The statistics of the categorical columns | categorical | python | statsmodels/statsmodels | statsmodels/stats/descriptivestats.py | https://github.com/statsmodels/statsmodels/blob/master/statsmodels/stats/descriptivestats.py | BSD-3-Clause |
def summary(self) -> SimpleTable:
"""
Summary table of the descriptive statistics
Returns
-------
SimpleTable
A table instance supporting export to text, csv and LaTeX
"""
df = self.frame.astype(object)
if df.isnull().any().any():
df = df.fillna("")
cols = [str(col) for col in df.columns]
stubs = [str(idx) for idx in df.index]
data = []
for _, row in df.iterrows():
data.append([v for v in row])
def _formatter(v):
if isinstance(v, str):
return v
elif v // 1 == v:
return str(int(v))
return f"{v:0.4g}"
return SimpleTable(
data,
header=cols,
stubs=stubs,
title="Descriptive Statistics",
txt_fmt={"data_fmts": {0: "%s", 1: _formatter}},
datatypes=[1] * len(data),
) | Summary table of the descriptive statistics
Returns
-------
SimpleTable
A table instance supporting export to text, csv and LaTeX | summary | python | statsmodels/statsmodels | statsmodels/stats/descriptivestats.py | https://github.com/statsmodels/statsmodels/blob/master/statsmodels/stats/descriptivestats.py | BSD-3-Clause |
def corr_nearest(corr, threshold=1e-15, n_fact=100):
'''
Find the nearest correlation matrix that is positive semi-definite.
The function iteratively adjusts the correlation matrix by clipping the
eigenvalues of a difference matrix. The diagonal elements are set to one.
Parameters
----------
corr : ndarray, (k, k)
initial correlation matrix
threshold : float
clipping threshold for smallest eigenvalue, see Notes
n_fact : int or float
factor to determine the maximum number of iterations. The maximum
number of iterations is the integer part of the number of columns in
the correlation matrix times n_fact.
Returns
-------
corr_new : ndarray
corrected correlation matrix
Notes
-----
The smallest eigenvalue of the corrected correlation matrix is
approximately equal to the ``threshold``.
If the threshold=0, then the smallest eigenvalue of the correlation matrix
might be negative, but zero within a numerical error, for example in the
range of -1e-16.
Assumes input correlation matrix is symmetric.
Stops after the first step if correlation matrix is already positive
semi-definite or positive definite, so that smallest eigenvalue is above
threshold. In this case, the returned array is not the original, but
is equal to it within numerical precision.
See Also
--------
corr_clipped
cov_nearest
'''
k_vars = corr.shape[0]
if k_vars != corr.shape[1]:
raise ValueError("matrix is not square")
diff = np.zeros(corr.shape)
x_new = corr.copy()
diag_idx = np.arange(k_vars)
for ii in range(int(len(corr) * n_fact)):
x_adj = x_new - diff
x_psd, clipped = clip_evals(x_adj, value=threshold)
if not clipped:
x_new = x_psd
break
diff = x_psd - x_adj
x_new = x_psd.copy()
x_new[diag_idx, diag_idx] = 1
else:
warnings.warn(iteration_limit_doc, IterationLimitWarning)
return x_new | Find the nearest correlation matrix that is positive semi-definite.
The function iteratively adjusts the correlation matrix by clipping the
eigenvalues of a difference matrix. The diagonal elements are set to one.
Parameters
----------
corr : ndarray, (k, k)
initial correlation matrix
threshold : float
clipping threshold for smallest eigenvalue, see Notes
n_fact : int or float
factor to determine the maximum number of iterations. The maximum
number of iterations is the integer part of the number of columns in
the correlation matrix times n_fact.
Returns
-------
corr_new : ndarray
corrected correlation matrix
Notes
-----
The smallest eigenvalue of the corrected correlation matrix is
approximately equal to the ``threshold``.
If the threshold=0, then the smallest eigenvalue of the correlation matrix
might be negative, but zero within a numerical error, for example in the
range of -1e-16.
Assumes input correlation matrix is symmetric.
Stops after the first step if correlation matrix is already positive
semi-definite or positive definite, so that smallest eigenvalue is above
threshold. In this case, the returned array is not the original, but
is equal to it within numerical precision.
See Also
--------
corr_clipped
cov_nearest | corr_nearest | python | statsmodels/statsmodels | statsmodels/stats/correlation_tools.py | https://github.com/statsmodels/statsmodels/blob/master/statsmodels/stats/correlation_tools.py | BSD-3-Clause |
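The helper ``clip_evals`` is defined elsewhere in the module; below is a self-contained numpy sketch of the iteration described above (alternating an eigenvalue clip with resetting the diagonal, in the style of Higham's alternating projections), run on a hypothetical indefinite matrix with unit diagonal:

```python
import numpy as np

def clip_evals(x, value=0.0):
    # Sketch of the helper: clip eigenvalues of a symmetric matrix from below.
    evals, evecs = np.linalg.eigh(x)
    clipped = np.any(evals < value)
    x_new = (evecs * np.maximum(evals, value)) @ evecs.T
    return x_new, clipped

corr = np.array([[1.0, 0.95, -0.9],
                 [0.95, 1.0, 0.6],
                 [-0.9, 0.6, 1.0]])   # indefinite, so not a valid correlation matrix

x_new = corr.copy()
diff = np.zeros_like(corr)
for _ in range(300):                  # maxiter = n_fact * k_vars in the original
    x_adj = x_new - diff              # undo the previous correction (Dykstra step)
    x_psd, clipped = clip_evals(x_adj, value=1e-15)
    if not clipped:
        break
    diff = x_psd - x_adj
    x_new = x_psd.copy()
    np.fill_diagonal(x_new, 1.0)      # project back onto the unit-diagonal set
```

The result is symmetric, has unit diagonal, and is positive semi-definite up to numerical precision.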
def corr_clipped(corr, threshold=1e-15):
'''
Find a near correlation matrix that is positive semi-definite
This function clips the eigenvalues, replacing eigenvalues smaller than
the threshold by the threshold. The new matrix is normalized, so that the
diagonal elements are one.
Compared to corr_nearest, the distance between the original correlation
matrix and the positive definite correlation matrix is larger, however,
it is much faster since it only computes eigenvalues once.
Parameters
----------
corr : ndarray, (k, k)
initial correlation matrix
threshold : float
clipping threshold for smallest eigenvalue, see Notes
Returns
-------
corr_new : ndarray
corrected correlation matrix
Notes
-----
The smallest eigenvalue of the corrected correlation matrix is
approximately equal to the ``threshold``. In examples, the
smallest eigenvalue can be smaller than the threshold by a factor of 10,
e.g. threshold 1e-8 can result in a smallest eigenvalue in the range
between 1e-9 and 1e-8.
If the threshold=0, then the smallest eigenvalue of the correlation matrix
might be negative, but zero within a numerical error, for example in the
range of -1e-16.
Assumes input correlation matrix is symmetric. The diagonal elements of
the returned correlation matrix are set to one.
If the correlation matrix is already positive semi-definite given the
threshold, then the original correlation matrix is returned.
``corr_clipped`` is 40 or more times faster than ``corr_nearest`` in a
simple example, but has a slightly larger approximation error.
See Also
--------
corr_nearest
cov_nearest
'''
x_new, clipped = clip_evals(corr, value=threshold)
if not clipped:
return corr
# cov2corr
x_std = np.sqrt(np.diag(x_new))
x_new = x_new / x_std / x_std[:, None]
return x_new | Find a near correlation matrix that is positive semi-definite
This function clips the eigenvalues, replacing eigenvalues smaller than
the threshold by the threshold. The new matrix is normalized, so that the
diagonal elements are one.
Compared to corr_nearest, the distance between the original correlation
matrix and the positive definite correlation matrix is larger, however,
it is much faster since it only computes eigenvalues once.
Parameters
----------
corr : ndarray, (k, k)
initial correlation matrix
threshold : float
clipping threshold for smallest eigenvalue, see Notes
Returns
-------
corr_new : ndarray
corrected correlation matrix
Notes
-----
The smallest eigenvalue of the corrected correlation matrix is
approximately equal to the ``threshold``. In examples, the
smallest eigenvalue can be smaller than the threshold by a factor of 10,
e.g. threshold 1e-8 can result in a smallest eigenvalue in the range
between 1e-9 and 1e-8.
If the threshold=0, then the smallest eigenvalue of the correlation matrix
might be negative, but zero within a numerical error, for example in the
range of -1e-16.
Assumes input correlation matrix is symmetric. The diagonal elements of
the returned correlation matrix are set to one.
If the correlation matrix is already positive semi-definite given the
threshold, then the original correlation matrix is returned.
``corr_clipped`` is 40 or more times faster than ``corr_nearest`` in a
simple example, but has a slightly larger approximation error.
See Also
--------
corr_nearest
cov_nearest | corr_clipped | python | statsmodels/statsmodels | statsmodels/stats/correlation_tools.py | https://github.com/statsmodels/statsmodels/blob/master/statsmodels/stats/correlation_tools.py | BSD-3-Clause |
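A numpy sketch of the single clip-and-renormalize step described above, using a plain eigenvalue clip in place of ``clip_evals`` and a hypothetical indefinite input:

```python
import numpy as np

corr = np.array([[1.0, 0.95, -0.9],
                 [0.95, 1.0, 0.6],
                 [-0.9, 0.6, 1.0]])   # indefinite, unit diagonal

# Clip eigenvalues from below at the threshold (one eigendecomposition only).
evals, evecs = np.linalg.eigh(corr)
x_new = (evecs * np.maximum(evals, 1e-15)) @ evecs.T

# cov2corr step: rescale so the diagonal is exactly one again.
x_std = np.sqrt(np.diag(x_new))
x_new = x_new / x_std / x_std[:, None]
```

Unlike the iterative nearest-correlation routine, this costs one eigendecomposition at the price of a larger approximation error.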
def cov_nearest(cov, method='clipped', threshold=1e-15, n_fact=100,
return_all=False):
"""
Find the nearest covariance matrix that is positive (semi-) definite
This leaves the diagonal, i.e. the variance, unchanged
Parameters
----------
cov : ndarray, (k,k)
initial covariance matrix
method : str
if "clipped", then the faster but less accurate ``corr_clipped`` is
used. If "nearest", then ``corr_nearest`` is used.
threshold : float
clipping threshold for smallest eigenvalue, see Notes
n_fact : int or float
factor to determine the maximum number of iterations in
``corr_nearest``. See its doc string
return_all : bool
if False (default), then only the covariance matrix is returned.
If True, then correlation matrix and standard deviation are
additionally returned.
Returns
-------
cov_ : ndarray
corrected covariance matrix
corr_ : ndarray, (optional)
corrected correlation matrix
std_ : ndarray, (optional)
standard deviation
Notes
-----
This converts the covariance matrix to a correlation matrix. It then finds
the nearest correlation matrix that is positive semidefinite and converts
it back to a covariance matrix using the initial standard deviation.
The smallest eigenvalue of the intermediate correlation matrix is
approximately equal to the ``threshold``.
If the threshold=0, then the smallest eigenvalue of the correlation matrix
might be negative, but zero within a numerical error, for example in the
range of -1e-16.
Assumes input covariance matrix is symmetric.
See Also
--------
corr_nearest
corr_clipped
"""
from statsmodels.stats.moment_helpers import cov2corr, corr2cov
cov_, std_ = cov2corr(cov, return_std=True)
if method == 'clipped':
corr_ = corr_clipped(cov_, threshold=threshold)
else: # method == 'nearest'
corr_ = corr_nearest(cov_, threshold=threshold, n_fact=n_fact)
cov_ = corr2cov(corr_, std_)
if return_all:
return cov_, corr_, std_
else:
return cov_ | Find the nearest covariance matrix that is positive (semi-) definite
This leaves the diagonal, i.e. the variance, unchanged
Parameters
----------
cov : ndarray, (k,k)
initial covariance matrix
method : str
if "clipped", then the faster but less accurate ``corr_clipped`` is
used. If "nearest", then ``corr_nearest`` is used.
threshold : float
clipping threshold for smallest eigenvalue, see Notes
n_fact : int or float
factor to determine the maximum number of iterations in
``corr_nearest``. See its doc string
return_all : bool
if False (default), then only the covariance matrix is returned.
If True, then correlation matrix and standard deviation are
additionally returned.
Returns
-------
cov_ : ndarray
corrected covariance matrix
corr_ : ndarray, (optional)
corrected correlation matrix
std_ : ndarray, (optional)
standard deviation
Notes
-----
This converts the covariance matrix to a correlation matrix. It then finds
the nearest correlation matrix that is positive semidefinite and converts
it back to a covariance matrix using the initial standard deviation.
The smallest eigenvalue of the intermediate correlation matrix is
approximately equal to the ``threshold``.
If the threshold=0, then the smallest eigenvalue of the correlation matrix
might be negative, but zero within a numerical error, for example in the
range of -1e-16.
Assumes input covariance matrix is symmetric.
See Also
--------
corr_nearest
corr_clipped | cov_nearest | python | statsmodels/statsmodels | statsmodels/stats/correlation_tools.py | https://github.com/statsmodels/statsmodels/blob/master/statsmodels/stats/correlation_tools.py | BSD-3-Clause |
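The ``cov2corr``/``corr2cov`` conversions used above are elementwise rescalings by the standard deviations; a quick numpy sketch of the round trip on a hypothetical covariance matrix:

```python
import numpy as np

cov = np.array([[4.0, 1.2, -0.8],
                [1.2, 1.0, 0.3],
                [-0.8, 0.3, 2.0]])

std = np.sqrt(np.diag(cov))
corr = cov / np.outer(std, std)        # cov2corr: unit diagonal
cov_back = corr * np.outer(std, std)   # corr2cov with the same std
```

Because only the correlation part is adjusted, the variances on the diagonal are preserved exactly, as the docstring above states.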
def _nmono_linesearch(obj, grad, x, d, obj_hist, M=10, sig1=0.1,
sig2=0.9, gam=1e-4, maxiter=100):
"""
Implements the non-monotone line search of Grippo et al. (1986),
as described in Birgin, Martinez and Raydan (2013).
Parameters
----------
obj : real-valued function
The objective function, to be minimized
grad : vector-valued function
The gradient of the objective function
x : array_like
The starting point for the line search
d : array_like
The search direction
obj_hist : array_like
Objective function history (must contain at least one value)
M : positive int
Number of previous function points to consider (see references
for details).
sig1 : real
Tuning parameter, see references for details.
sig2 : real
Tuning parameter, see references for details.
gam : real
Tuning parameter, see references for details.
maxiter : int
The maximum number of iterations; returns Nones if convergence
does not occur by this point
Returns
-------
alpha : float
The step value
x : array_like
The function argument at the final step
obval : float
The function value at the final step
g : array_like
The gradient at the final step
Notes
-----
The basic idea is to take a big step in the direction of the
gradient, even if the function value is not decreased (but there
is a maximum allowed increase in terms of the recent history of
the iterates).
References
----------
Grippo L, Lampariello F, Lucidi S (1986). A Nonmonotone Line
Search Technique for Newton's Method. SIAM Journal on Numerical
Analysis, 23, 707-716.
E. Birgin, J.M. Martinez, and M. Raydan. Spectral projected
gradient methods: Review and perspectives. Journal of Statistical
Software (preprint).
"""
alpha = 1.
last_obval = obj(x)
obj_max = max(obj_hist[-M:])
for iter in range(maxiter):
obval = obj(x + alpha*d)
g = grad(x)
gtd = (g * d).sum()
if obval <= obj_max + gam*alpha*gtd:
return alpha, x + alpha*d, obval, g
a1 = -0.5*alpha**2*gtd / (obval - last_obval - alpha*gtd)
if (sig1 <= a1) and (a1 <= sig2*alpha):
alpha = a1
else:
alpha /= 2.
last_obval = obval
return None, None, None, None | Implements the non-monotone line search of Grippo et al. (1986),
as described in Birgin, Martinez and Raydan (2013).
Parameters
----------
obj : real-valued function
The objective function, to be minimized
grad : vector-valued function
The gradient of the objective function
x : array_like
The starting point for the line search
d : array_like
The search direction
obj_hist : array_like
Objective function history (must contain at least one value)
M : positive int
Number of previous function points to consider (see references
for details).
sig1 : real
Tuning parameter, see references for details.
sig2 : real
Tuning parameter, see references for details.
gam : real
Tuning parameter, see references for details.
maxiter : int
The maximum number of iterations; returns Nones if convergence
does not occur by this point
Returns
-------
alpha : float
The step value
x : array_like
The function argument at the final step
obval : float
The function value at the final step
g : array_like
The gradient at the final step
Notes
-----
The basic idea is to take a big step in the direction of the
gradient, even if the function value is not decreased (but there
is a maximum allowed increase in terms of the recent history of
the iterates).
References
----------
Grippo L, Lampariello F, Lucidi S (1986). A Nonmonotone Line
Search Technique for Newton's Method. SIAM Journal on Numerical
Analysis, 23, 707-716.
E. Birgin, J.M. Martinez, and M. Raydan. Spectral projected
gradient methods: Review and perspectives. Journal of Statistical
Software (preprint). | _nmono_linesearch | python | statsmodels/statsmodels | statsmodels/stats/correlation_tools.py | https://github.com/statsmodels/statsmodels/blob/master/statsmodels/stats/correlation_tools.py | BSD-3-Clause |
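A toy run of the acceptance rule above on f(x) = x², using the steepest-descent direction and plain step-halving in place of the quadratic interpolation used in the code:

```python
import numpy as np

def obj(x):
    return float(np.sum(x * x))

def grad(x):
    return 2.0 * x

x = np.array([2.0])
d = -grad(x)                 # steepest-descent search direction
obj_hist = [obj(x)]
gam, alpha = 1e-4, 1.0

# Nonmonotone acceptance test: accept the step when
# f(x + alpha*d) <= max(recent f values) + gam * alpha * g'd.
obj_max = max(obj_hist[-10:])
gtd = float(grad(x) @ d)     # directional derivative (negative for descent)
while obj(x + alpha * d) > obj_max + gam * alpha * gtd:
    alpha /= 2.0
x_new = x + alpha * d
```

Starting from x = 2, the full step overshoots to f = 4, so alpha is halved once and the accepted point is x = 0, the minimizer.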
def _spg_optim(func, grad, start, project, maxiter=1e4, M=10,
ctol=1e-3, maxiter_nmls=200, lam_min=1e-30,
lam_max=1e30, sig1=0.1, sig2=0.9, gam=1e-4):
"""
Implements the spectral projected gradient method for minimizing a
differentiable function on a convex domain.
Parameters
----------
func : real valued function
The objective function to be minimized.
grad : real array-valued function
The gradient of the objective function
start : array_like
The starting point
project : function
In-place projection of the argument to the domain
of func.
... See notes regarding additional arguments
Returns
-------
rslt : Bunch
rslt.params is the final iterate, other fields describe
convergence status.
Notes
-----
This can be an effective heuristic algorithm for problems where no
guaranteed algorithm for computing a global minimizer is known.
There are a number of tuning parameters, but these generally
should not be changed except for `maxiter` (positive integer) and
`ctol` (small positive real). See the Birgin et al reference for
more information about the tuning parameters.
References
----------
E. Birgin, J.M. Martinez, and M. Raydan. Spectral projected
gradient methods: Review and perspectives. Journal of Statistical
Software (preprint). Available at:
http://www.ime.usp.br/~egbirgin/publications/bmr5.pdf
"""
lam = min(10*lam_min, lam_max)
params = start.copy()
gval = grad(params)
obj_hist = [func(params), ]
for itr in range(int(maxiter)):
# Check convergence
df = params - gval
project(df)
df -= params
if np.max(np.abs(df)) < ctol:
return Bunch(**{"Converged": True, "params": params,
"objective_values": obj_hist,
"Message": "Converged successfully"})
# The line search direction
d = params - lam*gval
project(d)
d -= params
# Carry out the nonmonotone line search
alpha, params1, fval, gval1 = _nmono_linesearch(
func,
grad,
params,
d,
obj_hist,
M=M,
sig1=sig1,
sig2=sig2,
gam=gam,
maxiter=maxiter_nmls)
if alpha is None:
return Bunch(**{"Converged": False, "params": params,
"objective_values": obj_hist,
"Message": "Failed in nmono_linesearch"})
obj_hist.append(fval)
s = params1 - params
y = gval1 - gval
sy = (s*y).sum()
if sy <= 0:
lam = lam_max
else:
ss = (s*s).sum()
lam = max(lam_min, min(ss/sy, lam_max))
params = params1
gval = gval1
return Bunch(**{"Converged": False, "params": params,
"objective_values": obj_hist,
"Message": "spg_optim did not converge"}) | Implements the spectral projected gradient method for minimizing a
differentiable function on a convex domain.
Parameters
----------
func : real valued function
The objective function to be minimized.
grad : real array-valued function
The gradient of the objective function
start : array_like
The starting point
project : function
In-place projection of the argument to the domain
of func.
... See notes regarding additional arguments
Returns
-------
rslt : Bunch
rslt.params is the final iterate, other fields describe
convergence status.
Notes
-----
This can be an effective heuristic algorithm for problems where no
guaranteed algorithm for computing a global minimizer is known.
There are a number of tuning parameters, but these generally
should not be changed except for `maxiter` (positive integer) and
`ctol` (small positive real). See the Birgin et al reference for
more information about the tuning parameters.
References
----------
E. Birgin, J.M. Martinez, and M. Raydan. Spectral projected
gradient methods: Review and perspectives. Journal of Statistical
Software (preprint). Available at:
http://www.ime.usp.br/~egbirgin/publications/bmr5.pdf | _spg_optim | python | statsmodels/statsmodels | statsmodels/stats/correlation_tools.py | https://github.com/statsmodels/statsmodels/blob/master/statsmodels/stats/correlation_tools.py | BSD-3-Clause |
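A minimal sketch of the projected-gradient idea with the Barzilai–Borwein step used above, projecting onto the box [0, 1]ⁿ; the real routine adds the nonmonotone line search, and the target point ``c`` here is hypothetical:

```python
import numpy as np

c = np.array([1.5, -0.3, 0.4])    # unconstrained minimizer (outside the box)

def func(x):
    return float(np.sum((x - c) ** 2))

def grad(x):
    return 2.0 * (x - c)

def project(x):
    np.clip(x, 0.0, 1.0, out=x)   # in-place projection onto [0, 1]^n

x = np.zeros(3)
lam = 1e-2
g = grad(x)
for _ in range(200):
    x1 = x - lam * g
    project(x1)                   # projected gradient step
    g1 = grad(x1)
    s, y = x1 - x, g1 - g
    sy = float(s @ y)
    # Barzilai-Borwein spectral step, clamped to [lam_min, lam_max].
    lam = min(max(float(s @ s) / sy, 1e-10), 1e10) if sy > 0 else 1.0
    x, g = x1, g1
```

The iterates converge to the projection of ``c`` onto the box, (1.0, 0.0, 0.4).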
def _project_correlation_factors(X):
"""
Project a matrix into the domain of matrices whose row-wise sums
of squares are less than or equal to 1.
The input matrix is modified in-place.
"""
nm = np.sqrt((X*X).sum(1))
ii = np.flatnonzero(nm > 1)
if len(ii) > 0:
X[ii, :] /= nm[ii][:, None] | Project a matrix into the domain of matrices whose row-wise sums
of squares are less than or equal to 1.
The input matrix is modified in-place. | _project_correlation_factors | python | statsmodels/statsmodels | statsmodels/stats/correlation_tools.py | https://github.com/statsmodels/statsmodels/blob/master/statsmodels/stats/correlation_tools.py | BSD-3-Clause |
def to_matrix(self):
"""
Returns the PSD matrix represented by this instance as a full
(square) matrix.
"""
return np.diag(self.diag) + np.dot(self.root, self.root.T) | Returns the PSD matrix represented by this instance as a full
(square) matrix. | to_matrix | python | statsmodels/statsmodels | statsmodels/stats/correlation_tools.py | https://github.com/statsmodels/statsmodels/blob/master/statsmodels/stats/correlation_tools.py | BSD-3-Clause |
def decorrelate(self, rhs):
"""
Decorrelate the columns of `rhs`.
Parameters
----------
rhs : array_like
A 2 dimensional array with the same number of rows as the
PSD matrix represented by the class instance.
Returns
-------
C^{-1/2} * rhs, where C is the covariance matrix represented
by this class instance.
Notes
-----
The returned matrix has the identity matrix as its row-wise
population covariance matrix.
This function exploits the factor structure for efficiency.
"""
# I + factor * qval * factor' is the inverse square root of
# the covariance matrix in the homogeneous case where diag =
# 1.
qval = -1 + 1 / np.sqrt(1 + self.scales)
# Decorrelate in the general case.
rhs = rhs / np.sqrt(self.diag)[:, None]
rhs1 = np.dot(self.factor.T, rhs)
rhs1 *= qval[:, None]
rhs1 = np.dot(self.factor, rhs1)
rhs += rhs1
return rhs | Decorrelate the columns of `rhs`.
Parameters
----------
rhs : array_like
A 2 dimensional array with the same number of rows as the
PSD matrix represented by the class instance.
Returns
-------
C^{-1/2} * rhs, where C is the covariance matrix represented
by this class instance.
Notes
-----
The returned matrix has the identity matrix as its row-wise
population covariance matrix.
This function exploits the factor structure for efficiency. | decorrelate | python | statsmodels/statsmodels | statsmodels/stats/correlation_tools.py | https://github.com/statsmodels/statsmodels/blob/master/statsmodels/stats/correlation_tools.py | BSD-3-Clause |
def solve(self, rhs):
"""
Solve a linear system of equations with factor-structured
coefficients.
Parameters
----------
rhs : array_like
A 2 dimensional array with the same number of rows as the
PSD matrix represented by the class instance.
Returns
-------
C^{-1} * rhs, where C is the covariance matrix represented
by this class instance.
Notes
-----
This function exploits the factor structure for efficiency.
"""
qval = -self.scales / (1 + self.scales)
dr = np.sqrt(self.diag)
rhs = rhs / dr[:, None]
mat = qval[:, None] * np.dot(self.factor.T, rhs)
rhs = rhs + np.dot(self.factor, mat)
return rhs / dr[:, None] | Solve a linear system of equations with factor-structured
coefficients.
Parameters
----------
rhs : array_like
A 2 dimensional array with the same number of rows as the
PSD matrix represented by the class instance.
Returns
-------
C^{-1} * rhs, where C is the covariance matrix represented
by this class instance.
Notes
-----
This function exploits the factor structure for efficiency. | solve | python | statsmodels/statsmodels | statsmodels/stats/correlation_tools.py | https://github.com/statsmodels/statsmodels/blob/master/statsmodels/stats/correlation_tools.py | BSD-3-Clause |
def logdet(self):
"""
Returns the logarithm of the determinant of a
factor-structured matrix.
"""
logdet = np.sum(np.log(self.diag))
logdet += np.sum(np.log(self.scales))
logdet += np.sum(np.log(1 + 1 / self.scales))
return logdet | Returns the logarithm of the determinant of a
factor-structured matrix. | logdet | python | statsmodels/statsmodels | statsmodels/stats/correlation_tools.py | https://github.com/statsmodels/statsmodels/blob/master/statsmodels/stats/correlation_tools.py | BSD-3-Clause |
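The three sums above combine via log(s) + log(1 + 1/s) = log(1 + s) into the matrix determinant lemma, log det(D + R Rᵀ) = Σ log dᵢ + Σ log(1 + sₖ), with sₖ the eigenvalues of Rᵀ D⁻¹ R (the ``scales``). A numpy cross-check against the dense determinant, on random hypothetical factors:

```python
import numpy as np

rng = np.random.default_rng(0)
diag = 0.5 + rng.random(6)            # positive diagonal entries of D
root = rng.standard_normal((6, 2))    # low-rank factor R

C = np.diag(diag) + root @ root.T

# Matrix determinant lemma: det(D + R R') = det(D) * det(I + R' D^{-1} R).
scales = np.linalg.eigvalsh(root.T @ (root / diag[:, None]))
logdet_factored = np.sum(np.log(diag)) + np.sum(np.log1p(scales))

sign, logdet_dense = np.linalg.slogdet(C)
```

The factored formula only touches k x k matrices (here 2 x 2), which is the point of the factor representation.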
def corr_nearest_factor(corr, rank, ctol=1e-6, lam_min=1e-30,
lam_max=1e30, maxiter=1000):
"""
Find the nearest correlation matrix with factor structure to a
given square matrix.
Parameters
----------
corr : square array
The target matrix (to which the nearest correlation matrix is
sought). Must be square, but need not be positive
semidefinite.
rank : int
The rank of the factor structure of the solution, i.e., the
number of linearly independent columns of X.
ctol : positive real
Convergence criterion.
lam_min : float
Tuning parameter for spectral projected gradient optimization
(smallest allowed step in the search direction).
lam_max : float
Tuning parameter for spectral projected gradient optimization
(largest allowed step in the search direction).
maxiter : int
Maximum number of iterations in spectral projected gradient
optimization.
Returns
-------
rslt : Bunch
rslt.corr is a FactoredPSDMatrix defining the estimated
correlation structure. Other fields of `rslt` contain
returned values from spg_optim.
Notes
-----
A correlation matrix has factor structure if it can be written in
the form I + XX' - diag(XX'), where X is n x k with linearly
independent columns, and with each row having sum of squares at
most equal to 1. The approximation is made in terms of the
Frobenius norm.
This routine is useful when one has an approximate correlation
matrix that is not positive semidefinite, and there is need to
estimate the inverse, square root, or inverse square root of the
population correlation matrix. The factor structure allows these
tasks to be done without constructing any n x n matrices.
This is a non-convex problem with no known guaranteed globally
convergent algorithm for computing the solution. Borsdorf, Higham
and Raydan (2010) compared several methods for this problem and
found the spectral projected gradient (SPG) method (used here) to
perform best.
The input matrix `corr` can be a dense numpy array or any scipy
sparse matrix. The latter is useful if the input matrix is
obtained by thresholding a very large sample correlation matrix.
If `corr` is sparse, the calculations are optimized to save
memory, so no working matrix with more than 10^6 elements is
constructed.
References
----------
.. [*] R Borsdorf, N Higham, M Raydan (2010). Computing a nearest
correlation matrix with factor structure. SIAM J Matrix Anal Appl,
31:5, 2603-2622.
http://eprints.ma.man.ac.uk/1523/01/covered/MIMS_ep2009_87.pdf
Examples
--------
Hard thresholding a correlation matrix may result in a matrix that
is not positive semidefinite. We can approximate a hard
thresholded correlation matrix with a PSD matrix as follows, where
`corr` is the input correlation matrix.
>>> import numpy as np
>>> from statsmodels.stats.correlation_tools import corr_nearest_factor
>>> np.random.seed(1234)
>>> b = 1.5 - np.random.rand(10, 1)
>>> x = np.random.randn(100,1).dot(b.T) + np.random.randn(100,10)
>>> corr = np.corrcoef(x.T)
>>> corr = corr * (np.abs(corr) >= 0.3)
>>> rslt = corr_nearest_factor(corr, 3)
"""
p, _ = corr.shape
# Starting values (following the PCA method in BHR).
u, s, vt = svds(corr, rank)
X = u * np.sqrt(s)
nm = np.sqrt((X**2).sum(1))
ii = np.flatnonzero(nm > 1e-5)
X[ii, :] /= nm[ii][:, None]
# Zero the diagonal
corr1 = corr.copy()
if type(corr1) is np.ndarray:
np.fill_diagonal(corr1, 0)
elif sparse.issparse(corr1):
corr1.setdiag(np.zeros(corr1.shape[0]))
corr1.eliminate_zeros()
corr1.sort_indices()
else:
raise ValueError("Matrix type not supported")
# The gradient, from lemma 4.1 of BHR.
def grad(X):
gr = np.dot(X, np.dot(X.T, X))
if type(corr1) is np.ndarray:
gr -= np.dot(corr1, X)
else:
gr -= corr1.dot(X)
gr -= (X*X).sum(1)[:, None] * X
return 4*gr
# The objective function (sum of squared deviations between fitted
# and observed arrays).
def func(X):
if type(corr1) is np.ndarray:
M = np.dot(X, X.T)
np.fill_diagonal(M, 0)
M -= corr1
fval = (M*M).sum()
return fval
else:
fval = 0.
# Control the size of intermediates
max_ws = 1e6
bs = int(max_ws / X.shape[0])
ir = 0
while ir < X.shape[0]:
ir2 = min(ir+bs, X.shape[0])
u = np.dot(X[ir:ir2, :], X.T)
ii = np.arange(u.shape[0])
u[ii, ir+ii] = 0
u -= np.asarray(corr1[ir:ir2, :].todense())
fval += (u*u).sum()
ir += bs
return fval
rslt = _spg_optim(func, grad, X, _project_correlation_factors, ctol=ctol,
lam_min=lam_min, lam_max=lam_max, maxiter=maxiter)
root = rslt.params
diag = 1 - (root**2).sum(1)
soln = FactoredPSDMatrix(diag, root)
rslt.corr = soln
del rslt.params
return rslt | Find the nearest correlation matrix with factor structure to a
given square matrix.
Parameters
----------
corr : square array
The target matrix (to which the nearest correlation matrix is
sought). Must be square, but need not be positive
semidefinite.
rank : int
The rank of the factor structure of the solution, i.e., the
number of linearly independent columns of X.
ctol : positive real
Convergence criterion.
lam_min : float
Tuning parameter for spectral projected gradient optimization
(smallest allowed step in the search direction).
lam_max : float
Tuning parameter for spectral projected gradient optimization
(largest allowed step in the search direction).
maxiter : int
Maximum number of iterations in spectral projected gradient
optimization.
Returns
-------
rslt : Bunch
rslt.corr is a FactoredPSDMatrix defining the estimated
correlation structure. Other fields of `rslt` contain
returned values from spg_optim.
Notes
-----
A correlation matrix has factor structure if it can be written in
the form I + XX' - diag(XX'), where X is n x k with linearly
independent columns, and with each row having sum of squares at
most equal to 1. The approximation is made in terms of the
Frobenius norm.
This routine is useful when one has an approximate correlation
matrix that is not positive semidefinite, and there is need to
estimate the inverse, square root, or inverse square root of the
population correlation matrix. The factor structure allows these
tasks to be done without constructing any n x n matrices.
This is a non-convex problem with no known guaranteed globally
convergent algorithm for computing the solution. Borsdorf, Higham
and Raydan (2010) compared several methods for this problem and
found the spectral projected gradient (SPG) method (used here) to
perform best.
The input matrix `corr` can be a dense numpy array or any scipy
sparse matrix. The latter is useful if the input matrix is
obtained by thresholding a very large sample correlation matrix.
If `corr` is sparse, the calculations are optimized to save
memory, so no working matrix with more than 10^6 elements is
constructed.
References
----------
.. [*] R Borsdorf, N Higham, M Raydan (2010). Computing a nearest
correlation matrix with factor structure. SIAM J Matrix Anal Appl,
31:5, 2603-2622.
http://eprints.ma.man.ac.uk/1523/01/covered/MIMS_ep2009_87.pdf
Examples
--------
Hard thresholding a correlation matrix may result in a matrix that
is not positive semidefinite. We can approximate a hard
thresholded correlation matrix with a PSD matrix as follows, where
`corr` is the input correlation matrix.
>>> import numpy as np
>>> from statsmodels.stats.correlation_tools import corr_nearest_factor
>>> np.random.seed(1234)
>>> b = 1.5 - np.random.rand(10, 1)
>>> x = np.random.randn(100,1).dot(b.T) + np.random.randn(100,10)
>>> corr = np.corrcoef(x.T)
>>> corr = corr * (np.abs(corr) >= 0.3)
>>> rslt = corr_nearest_factor(corr, 3) | corr_nearest_factor | python | statsmodels/statsmodels | statsmodels/stats/correlation_tools.py | https://github.com/statsmodels/statsmodels/blob/master/statsmodels/stats/correlation_tools.py | BSD-3-Clause |
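The factor structure I + XX' - diag(XX') described in the Notes can be checked directly. The sketch below (not part of statsmodels; all names are made up for illustration) builds such a matrix from a loading matrix X whose rows have squared norm below 1 and verifies that the result is a valid correlation matrix:

```python
import numpy as np

# Build a factor-structured correlation matrix R = I + XX' - diag(XX')
# from a loading matrix X whose rows have squared norm below 1, then
# check the correlation-matrix properties.
rng = np.random.default_rng(0)
X = rng.standard_normal((6, 2))
X /= np.sqrt((X ** 2).sum(1))[:, None] * 1.5  # force row norms < 1

R = np.eye(6) + X @ X.T - np.diag((X ** 2).sum(1))

assert np.allclose(np.diag(R), 1.0)           # unit diagonal
assert np.allclose(R, R.T)                    # symmetric
assert np.linalg.eigvalsh(R).min() > -1e-12   # positive semidefinite
```

Because each row norm is below 1, I - diag(XX') is positive definite and XX' is positive semidefinite, so R is guaranteed PSD.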
def cov_nearest_factor_homog(cov, rank):
"""
Approximate an arbitrary square matrix with a factor-structured
matrix of the form k*I + XX'.
Parameters
----------
cov : array_like
The input array, must be square but need not be positive
semidefinite
rank : int
The rank of the fitted factor structure
Returns
-------
A FactoredPSDMatrix instance containing the fitted matrix
Notes
-----
This routine is useful if one has an estimated covariance matrix
that is not SPD, and the ultimate goal is to estimate the inverse,
square root, or inverse square root of the true covariance
matrix. The factor structure allows these tasks to be performed
without constructing any n x n matrices.
The calculations use the fact that if k is known, then X can be
determined from the eigen-decomposition of cov - k*I, which can
in turn be easily obtained from the eigen-decomposition of `cov`.
Thus the problem can be reduced to a 1-dimensional search for k
that does not require repeated eigen-decompositions.
If the input matrix is sparse, then cov - k*I is also sparse, so
the eigen-decomposition can be done efficiently using sparse
routines.
The one-dimensional search for the optimal value of k is not
convex, so a local minimum could be obtained.
Examples
--------
Hard thresholding a covariance matrix may result in a matrix that
is not positive semidefinite. We can approximate a hard
thresholded covariance matrix with a PSD matrix as follows:
>>> import numpy as np
>>> np.random.seed(1234)
>>> b = 1.5 - np.random.rand(10, 1)
>>> x = np.random.randn(100,1).dot(b.T) + np.random.randn(100,10)
>>> cov = np.cov(x)
>>> cov = cov * (np.abs(cov) >= 0.3)
>>> rslt = cov_nearest_factor_homog(cov, 3)
"""
m, n = cov.shape
Q, Lambda, _ = svds(cov, rank)
if sparse.issparse(cov):
QSQ = np.dot(Q.T, cov.dot(Q))
ts = cov.diagonal().sum()
tss = cov.dot(cov).diagonal().sum()
else:
QSQ = np.dot(Q.T, np.dot(cov, Q))
ts = np.trace(cov)
tss = np.trace(np.dot(cov, cov))
def fun(k):
Lambda_t = Lambda - k
v = tss + m*(k**2) + np.sum(Lambda_t**2) - 2*k*ts
v += 2*k*np.sum(Lambda_t) - 2*np.sum(np.diag(QSQ) * Lambda_t)
return v
# Get the optimal decomposition
k_opt = fminbound(fun, 0, 1e5)
Lambda_opt = Lambda - k_opt
fac_opt = Q * np.sqrt(Lambda_opt)
diag = k_opt * np.ones(m, dtype=np.float64) # - (fac_opt**2).sum(1)
return FactoredPSDMatrix(diag, fac_opt) | Approximate an arbitrary square matrix with a factor-structured
matrix of the form k*I + XX'.
Parameters
----------
cov : array_like
The input array, must be square but need not be positive
semidefinite
rank : int
The rank of the fitted factor structure
Returns
-------
A FactoredPSDMatrix instance containing the fitted matrix
Notes
-----
This routine is useful if one has an estimated covariance matrix
that is not SPD, and the ultimate goal is to estimate the inverse,
square root, or inverse square root of the true covariance
matrix. The factor structure allows these tasks to be performed
without constructing any n x n matrices.
The calculations use the fact that if k is known, then X can be
determined from the eigen-decomposition of cov - k*I, which can
in turn be easily obtained from the eigen-decomposition of `cov`.
Thus the problem can be reduced to a 1-dimensional search for k
that does not require repeated eigen-decompositions.
If the input matrix is sparse, then cov - k*I is also sparse, so
the eigen-decomposition can be done efficiently using sparse
routines.
The one-dimensional search for the optimal value of k is not
convex, so a local minimum could be obtained.
Examples
--------
Hard thresholding a covariance matrix may result in a matrix that
is not positive semidefinite. We can approximate a hard
thresholded covariance matrix with a PSD matrix as follows:
>>> import numpy as np
>>> np.random.seed(1234)
>>> b = 1.5 - np.random.rand(10, 1)
>>> x = np.random.randn(100,1).dot(b.T) + np.random.randn(100,10)
>>> cov = np.cov(x)
>>> cov = cov * (np.abs(cov) >= 0.3)
>>> rslt = cov_nearest_factor_homog(cov, 3) | cov_nearest_factor_homog | python | statsmodels/statsmodels | statsmodels/stats/correlation_tools.py | https://github.com/statsmodels/statsmodels/blob/master/statsmodels/stats/correlation_tools.py | BSD-3-Clause |
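The Notes above rely on the fact that, for a fixed k, the factor X can be read off the eigen-decomposition of cov - k*I. The following sketch (hypothetical, not statsmodels code) demonstrates that identity on a covariance matrix that has the form k*I + XX' exactly:

```python
import numpy as np

# If cov = k*I + XX' with X of rank 2, the smallest eigenvalues of cov
# all equal k, and a rank-2 factor can be recovered from the shifted
# top eigenpairs of cov.
rng = np.random.default_rng(1)
k_true = 0.7
X = rng.standard_normal((8, 2))
cov = k_true * np.eye(8) + X @ X.T

lam, Q = np.linalg.eigh(cov)          # eigenvalues in ascending order
assert np.allclose(lam[:-2], k_true)  # the 6 smallest eigenvalues are k

fac = Q[:, -2:] * np.sqrt(lam[-2:] - k_true)
assert np.allclose(k_true * np.eye(8) + fac @ fac.T, cov)
```

This is why the fit reduces to a one-dimensional search over k, with no repeated eigen-decompositions.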
def set_bandwidth(self, bw):
"""
Set the bandwidth to the given vector.
Parameters
----------
bw : array_like
A vector of non-negative bandwidth values.
"""
self.bw = bw
self._setup() | Set the bandwidth to the given vector.
Parameters
----------
bw : array_like
A vector of non-negative bandwidth values. | set_bandwidth | python | statsmodels/statsmodels | statsmodels/stats/correlation_tools.py | https://github.com/statsmodels/statsmodels/blob/master/statsmodels/stats/correlation_tools.py | BSD-3-Clause |
def set_default_bw(self, loc, bwm=None):
"""
Set default bandwidths based on domain values.
Parameters
----------
loc : array_like
Values from the domain to which the kernel will
be applied.
bwm : scalar, optional
A non-negative scalar that is used to multiply
the default bandwidth.
"""
sd = loc.std(0)
q25, q75 = np.percentile(loc, [25, 75], axis=0)
iqr = (q75 - q25) / 1.349
bw = np.where(iqr < sd, iqr, sd)
bw *= 0.9 / loc.shape[0] ** 0.2
if bwm is not None:
bw *= bwm
# The final bandwidths
self.bw = np.asarray(bw, dtype=np.float64)
self._setup() | Set default bandwiths based on domain values.
Parameters
----------
loc : array_like
Values from the domain to which the kernel will
be applied.
bwm : scalar, optional
A non-negative scalar that is used to multiply
the default bandwidth. | set_default_bw | python | statsmodels/statsmodels | statsmodels/stats/correlation_tools.py | https://github.com/statsmodels/statsmodels/blob/master/statsmodels/stats/correlation_tools.py | BSD-3-Clause |
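The default bandwidth computed in `set_default_bw` is a Silverman-type rule of thumb: per column, the smaller of the standard deviation and the normalized interquartile range, scaled by 0.9 * n**(-1/5). A standalone sketch of the same calculation (illustrative only):

```python
import numpy as np

# Silverman-type default bandwidth, one value per column of `loc`.
rng = np.random.default_rng(2)
loc = rng.standard_normal((500, 3))

sd = loc.std(0)
q25, q75 = np.percentile(loc, [25, 75], axis=0)
iqr = (q75 - q25) / 1.349                      # IQR rescaled to sd units
bw = 0.9 * np.minimum(sd, iqr) * loc.shape[0] ** -0.2

assert bw.shape == (3,)
assert np.all(bw > 0)
```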
def kernel_covariance(exog, loc, groups, kernel=None, bw=None):
"""
Use kernel averaging to estimate a multivariate covariance function.
The goal is to estimate a covariance function C(x, y) =
cov(Z(x), Z(y)) where x, y are vectors in R^p (e.g. representing
locations in time or space), and Z(.) represents a multivariate
process on R^p.
The data used for estimation can be observed at arbitrary values of the
position vector, and there can be multiple independent observations
from the process.
Parameters
----------
exog : array_like
The rows of exog are realizations of the process obtained at
specified points.
loc : array_like
The rows of loc are the locations (e.g. in space or time) at
which the rows of exog are observed.
groups : array_like
The values of groups are labels for distinct independent copies
of the process.
kernel : MultivariateKernel instance, optional
An instance of MultivariateKernel, defaults to
GaussianMultivariateKernel.
bw : array_like or scalar
A bandwidth vector, or bandwidth multiplier. If a 1d array, it
contains kernel bandwidths for each component of the process, and
must have length equal to the number of columns of exog. If a scalar,
bw is a bandwidth multiplier used to adjust the default bandwidth; if
None, a default bandwidth is used.
Returns
-------
A real-valued function C(x, y) that returns an estimate of the covariance
between values of the process located at x and y.
References
----------
.. [1] Genton M, W Kleiber (2015). Cross covariance functions for
multivariate geostatistics. Statistical Science 30(2).
https://arxiv.org/pdf/1507.08017.pdf
"""
exog = np.asarray(exog)
loc = np.asarray(loc)
groups = np.asarray(groups)
if loc.ndim == 1:
loc = loc[:, None]
v = [exog.shape[0], loc.shape[0], len(groups)]
if min(v) != max(v):
msg = "exog, loc, and groups must have the same number of rows"
raise ValueError(msg)
# Map from group labels to the row indices in each group.
ix = {}
for i, g in enumerate(groups):
if g not in ix:
ix[g] = []
ix[g].append(i)
for g in ix.keys():
ix[g] = np.sort(ix[g])
if kernel is None:
kernel = GaussianMultivariateKernel()
if bw is None:
kernel.set_default_bw(loc)
elif np.isscalar(bw):
kernel.set_default_bw(loc, bwm=bw)
else:
kernel.set_bandwidth(bw)
def cov(x, y):
kx = kernel.call(x, loc)
ky = kernel.call(y, loc)
cm, cw = 0., 0.
for g, ii in ix.items():
m = len(ii)
j1, j2 = np.indices((m, m))
j1 = ii[j1.flat]
j2 = ii[j2.flat]
w = kx[j1] * ky[j2]
# TODO: some other form of broadcasting may be faster than
# einsum here
cm += np.einsum("ij,ik,i->jk", exog[j1, :], exog[j2, :], w)
cw += w.sum()
if cw < 1e-10:
msg = ("Effective sample size is 0. The bandwidth may be too " +
"small, or you are outside the range of your data.")
warnings.warn(msg)
return np.nan * np.ones_like(cm)
return cm / cw
return cov | Use kernel averaging to estimate a multivariate covariance function.
The goal is to estimate a covariance function C(x, y) =
cov(Z(x), Z(y)) where x, y are vectors in R^p (e.g. representing
locations in time or space), and Z(.) represents a multivariate
process on R^p.
The data used for estimation can be observed at arbitrary values of the
position vector, and there can be multiple independent observations
from the process.
Parameters
----------
exog : array_like
The rows of exog are realizations of the process obtained at
specified points.
loc : array_like
The rows of loc are the locations (e.g. in space or time) at
which the rows of exog are observed.
groups : array_like
The values of groups are labels for distinct independent copies
of the process.
kernel : MultivariateKernel instance, optional
An instance of MultivariateKernel, defaults to
GaussianMultivariateKernel.
bw : array_like or scalar
A bandwidth vector, or bandwidth multiplier. If a 1d array, it
contains kernel bandwidths for each component of the process, and
must have length equal to the number of columns of exog. If a scalar,
bw is a bandwidth multiplier used to adjust the default bandwidth; if
None, a default bandwidth is used.
Returns
-------
A real-valued function C(x, y) that returns an estimate of the covariance
between values of the process located at x and y.
References
----------
.. [1] Genton M, W Kleiber (2015). Cross covariance functions for
multivariate geostatistics. Statistical Science 30(2).
https://arxiv.org/pdf/1507.08017.pdf | kernel_covariance | python | statsmodels/statsmodels | statsmodels/stats/correlation_tools.py | https://github.com/statsmodels/statsmodels/blob/master/statsmodels/stats/correlation_tools.py | BSD-3-Clause |
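The idea inside the returned `cov(x, y)` closure can be shown in one dimension with a single realization of the process. The sketch below is a deliberately simplified, hypothetical version (one group, scalar locations, Gaussian weights; `cov_est` is a made-up name) of the kernel-weighted average of cross products:

```python
import numpy as np

# Kernel-weighted covariance estimate C(x, y) from one realization z
# observed at 1-d locations `loc`.
rng = np.random.default_rng(3)
loc = np.linspace(0.0, 1.0, 50)
z = np.sin(2 * np.pi * loc) + 0.1 * rng.standard_normal(50)
bw = 0.1

def cov_est(x, y):
    kx = np.exp(-0.5 * ((x - loc) / bw) ** 2)   # weights for location x
    ky = np.exp(-0.5 * ((y - loc) / bw) ** 2)   # weights for location y
    w = np.outer(kx, ky)
    return np.sum(w * np.outer(z, z)) / np.sum(w)

assert np.isfinite(cov_est(0.5, 0.5))
assert np.allclose(cov_est(0.2, 0.8), cov_est(0.8, 0.2))  # symmetric
```

The real implementation additionally loops over independent groups, accumulates weights across them, and warns when the effective sample size is near zero.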
def omni_normtest(resids, axis=0):
"""
Omnibus test for normality
Parameters
----------
resid : array_like
axis : int, optional
Default is 0
Returns
-------
Chi^2 score, two-tail probability
"""
# TODO: change to exception in summary branch and catch in summary()
# behavior changed between scipy 0.9 and 0.10
resids = np.asarray(resids)
n = resids.shape[axis]
if n < 8:
from warnings import warn
warn("omni_normtest is not valid with less than 8 observations; %i "
"samples were given." % int(n), ValueWarning)
return np.nan, np.nan
return stats.normaltest(resids, axis=axis) | Omnibus test for normality
Parameters
----------
resid : array_like
axis : int, optional
Default is 0
Returns
-------
Chi^2 score, two-tail probability | omni_normtest | python | statsmodels/statsmodels | statsmodels/stats/stattools.py | https://github.com/statsmodels/statsmodels/blob/master/statsmodels/stats/stattools.py | BSD-3-Clause |
def robust_skewness(y, axis=0):
"""
Calculates the four skewness measures in Kim & White
Parameters
----------
y : array_like
Data to compute use in the estimator.
axis : int or None, optional
Axis along which the skewness measures are computed. If `None`, the
entire array is used.
Returns
-------
sk1 : ndarray
The standard skewness estimator.
sk2 : ndarray
Skewness estimator based on quartiles.
sk3 : ndarray
Skewness estimator based on mean-median difference, standardized by
absolute deviation.
sk4 : ndarray
Skewness estimator based on mean-median difference, standardized by
standard deviation.
Notes
-----
The robust skewness measures are defined
.. math::
SK_{2}=\\frac{\\left(q_{.75}-q_{.5}\\right)
-\\left(q_{.5}-q_{.25}\\right)}{q_{.75}-q_{.25}}
.. math::
SK_{3}=\\frac{\\mu-\\hat{q}_{0.5}}
{\\hat{E}\\left[\\left|y-\\hat{\\mu}\\right|\\right]}
.. math::
SK_{4}=\\frac{\\mu-\\hat{q}_{0.5}}{\\hat{\\sigma}}
.. [*] Tae-Hwan Kim and Halbert White, "On more robust estimation of
skewness and kurtosis," Finance Research Letters, vol. 1, pp. 56-73,
March 2004.
"""
if axis is None:
y = y.ravel()
axis = 0
y = np.sort(y, axis)
q1, q2, q3 = np.percentile(y, [25.0, 50.0, 75.0], axis=axis)
mu = y.mean(axis)
shape = (y.size,)
if axis is not None:
shape = list(mu.shape)
shape.insert(axis, 1)
shape = tuple(shape)
mu_b = np.reshape(mu, shape)
q2_b = np.reshape(q2, shape)
sigma = np.sqrt(np.mean(((y - mu_b)**2), axis))
sk1 = stats.skew(y, axis=axis)
sk2 = (q1 + q3 - 2.0 * q2) / (q3 - q1)
sk3 = (mu - q2) / np.mean(abs(y - q2_b), axis=axis)
sk4 = (mu - q2) / sigma
return sk1, sk2, sk3, sk4 | Calculates the four skewness measures in Kim & White
Parameters
----------
y : array_like
Data to compute use in the estimator.
axis : int or None, optional
Axis along which the skewness measures are computed. If `None`, the
entire array is used.
Returns
-------
sk1 : ndarray
The standard skewness estimator.
sk2 : ndarray
Skewness estimator based on quartiles.
sk3 : ndarray
Skewness estimator based on mean-median difference, standardized by
absolute deviation.
sk4 : ndarray
Skewness estimator based on mean-median difference, standardized by
standard deviation.
Notes
-----
The robust skewness measures are defined
.. math::
SK_{2}=\\frac{\\left(q_{.75}-q_{.5}\\right)
-\\left(q_{.5}-q_{.25}\\right)}{q_{.75}-q_{.25}}
.. math::
SK_{3}=\\frac{\\mu-\\hat{q}_{0.5}}
{\\hat{E}\\left[\\left|y-\\hat{\\mu}\\right|\\right]}
.. math::
SK_{4}=\\frac{\\mu-\\hat{q}_{0.5}}{\\hat{\\sigma}}
.. [*] Tae-Hwan Kim and Halbert White, "On more robust estimation of
skewness and kurtosis," Finance Research Letters, vol. 1, pp. 56-73,
March 2004. | robust_skewness | python | statsmodels/statsmodels | statsmodels/stats/stattools.py | https://github.com/statsmodels/statsmodels/blob/master/statsmodels/stats/stattools.py | BSD-3-Clause |
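The quartile-based measure SK_2 from the docstring is easy to compute on its own. This illustrative sketch (the helper name `sk2` is made up) shows that a symmetric sample gives zero while a right-skewed sample gives a positive value:

```python
import numpy as np

# SK_2 = ((q3 - q2) - (q2 - q1)) / (q3 - q1), the quartile skewness.
def sk2(y):
    q1, q2, q3 = np.percentile(y, [25, 50, 75])
    return ((q3 - q2) - (q2 - q1)) / (q3 - q1)

sym = np.arange(1.0, 101.0)                  # symmetric about its median
skewed = np.exp(np.linspace(0.0, 3.0, 100))  # right-skewed

assert abs(sk2(sym)) < 1e-12
assert sk2(skewed) > 0
```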
def _kr3(y, alpha=5.0, beta=50.0):
"""
KR3 estimator from Kim & White
Parameters
----------
y : array_like, 1-d
Data to compute use in the estimator.
alpha : float, optional
Lower cut-off for measuring expectation in tail.
beta : float, optional
Lower cut-off for measuring expectation in center.
Returns
-------
kr3 : float
Robust kurtosis estimator based on standardized lower- and upper-tail
expected values
Notes
-----
.. [*] Tae-Hwan Kim and Halbert White, "On more robust estimation of
skewness and kurtosis," Finance Research Letters, vol. 1, pp. 56-73,
March 2004.
"""
perc = (alpha, 100.0 - alpha, beta, 100.0 - beta)
lower_alpha, upper_alpha, lower_beta, upper_beta = np.percentile(y, perc)
l_alpha = np.mean(y[y < lower_alpha])
u_alpha = np.mean(y[y > upper_alpha])
l_beta = np.mean(y[y < lower_beta])
u_beta = np.mean(y[y > upper_beta])
return (u_alpha - l_alpha) / (u_beta - l_beta) | KR3 estimator from Kim & White
Parameters
----------
y : array_like, 1-d
Data to compute use in the estimator.
alpha : float, optional
Lower cut-off for measuring expectation in tail.
beta : float, optional
Lower cut-off for measuring expectation in center.
Returns
-------
kr3 : float
Robust kurtosis estimator based on standardized lower- and upper-tail
expected values
Notes
-----
.. [*] Tae-Hwan Kim and Halbert White, "On more robust estimation of
skewness and kurtosis," Finance Research Letters, vol. 1, pp. 56-73,
March 2004. | _kr3 | python | statsmodels/statsmodels | statsmodels/stats/stattools.py | https://github.com/statsmodels/statsmodels/blob/master/statsmodels/stats/stattools.py | BSD-3-Clause |
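KR3 compares the spread of the extreme-tail expectations (beyond the alpha quantiles) with the spread of milder expectations (beyond the beta quantiles); heavier tails push the ratio up. A self-contained sketch of the same computation (illustrative, not the statsmodels function):

```python
import numpy as np

def kr3(y, alpha=5.0, beta=50.0):
    la, ua, lb, ub = np.percentile(y, [alpha, 100 - alpha, beta, 100 - beta])
    num = y[y > ua].mean() - y[y < la].mean()   # spread of tail expectations
    den = y[y > ub].mean() - y[y < lb].mean()   # spread of central expectations
    return num / den

rng = np.random.default_rng(4)
normal = rng.standard_normal(10_000)
heavy = rng.standard_t(3, size=10_000)          # heavier tails than normal

assert kr3(heavy) > kr3(normal)
```

For the normal distribution this ratio is about 2.59 at the default (alpha, beta); heavy-tailed data give larger values.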
def expected_robust_kurtosis(ab=(5.0, 50.0), dg=(2.5, 25.0)):
"""
Calculates the expected value of the robust kurtosis measures in Kim and
White assuming the data are normally distributed.
Parameters
----------
ab : iterable, optional
Contains 100*(alpha, beta) in the kr3 measure where alpha is the tail
quantile cut-off for measuring the extreme tail and beta is the central
quantile cutoff for the standardization of the measure
dg : iterable, optional
Contains 100*(delta, gamma) in the kr4 measure where delta is the tail
quantile for measuring extreme values and gamma is the central quantile
used in the standardization of the measure
Returns
-------
ekr : ndarray, 4-element
Contains the expected values of the 4 robust kurtosis measures
Notes
-----
See `robust_kurtosis` for definitions of the robust kurtosis measures
"""
alpha, beta = ab
delta, gamma = dg
expected_value = np.zeros(4)
ppf = stats.norm.ppf
pdf = stats.norm.pdf
q1, q2, q3, q5, q6, q7 = ppf(np.array((1.0, 2.0, 3.0, 5.0, 6.0, 7.0)) / 8)
expected_value[0] = 3
expected_value[1] = ((q7 - q5) + (q3 - q1)) / (q6 - q2)
q_alpha, q_beta = ppf(np.array((alpha / 100.0, beta / 100.0)))
expected_value[2] = (2 * pdf(q_alpha) / alpha) / (2 * pdf(q_beta) / beta)
q_delta, q_gamma = ppf(np.array((delta / 100.0, gamma / 100.0)))
expected_value[3] = (-2.0 * q_delta) / (-2.0 * q_gamma)
return expected_value | Calculates the expected value of the robust kurtosis measures in Kim and
White assuming the data are normally distributed.
Parameters
----------
ab : iterable, optional
Contains 100*(alpha, beta) in the kr3 measure where alpha is the tail
quantile cut-off for measuring the extreme tail and beta is the central
quantile cutoff for the standardization of the measure
dg : iterable, optional
Contains 100*(delta, gamma) in the kr4 measure where delta is the tail
quantile for measuring extreme values and gamma is the central quantile
used in the standardization of the measure
Returns
-------
ekr : ndarray, 4-element
Contains the expected values of the 4 robust kurtosis measures
Notes
-----
See `robust_kurtosis` for definitions of the robust kurtosis measures | expected_robust_kurtosis | python | statsmodels/statsmodels | statsmodels/stats/stattools.py | https://github.com/statsmodels/statsmodels/blob/master/statsmodels/stats/stattools.py | BSD-3-Clause |
def robust_kurtosis(y, axis=0, ab=(5.0, 50.0), dg=(2.5, 25.0), excess=True):
"""
Calculates the four kurtosis measures in Kim & White
Parameters
----------
y : array_like
Data to compute use in the estimator.
axis : int or None, optional
Axis along which the kurtosis are computed. If `None`, the
entire array is used.
ab : iterable, optional
Contains 100*(alpha, beta) in the kr3 measure where alpha is the tail
quantile cut-off for measuring the extreme tail and beta is the central
quantile cutoff for the standardization of the measure
dg : iterable, optional
Contains 100*(delta, gamma) in the kr4 measure where delta is the tail
quantile for measuring extreme values and gamma is the central quantile
used in the standardization of the measure
excess : bool, optional
If true (default), computed values are excess of those for a standard
normal distribution.
Returns
-------
kr1 : ndarray
The standard kurtosis estimator.
kr2 : ndarray
Kurtosis estimator based on octiles.
kr3 : ndarray
Kurtosis estimators based on exceedance expectations.
kr4 : ndarray
Kurtosis measure based on the spread between high and low quantiles.
Notes
-----
The robust kurtosis measures are defined
.. math::
KR_{2}=\\frac{\\left(\\hat{q}_{.875}-\\hat{q}_{.625}\\right)
+\\left(\\hat{q}_{.375}-\\hat{q}_{.125}\\right)}
{\\hat{q}_{.75}-\\hat{q}_{.25}}
.. math::
KR_{3}=\\frac{\\hat{E}\\left(y|y>\\hat{q}_{1-\\alpha}\\right)
-\\hat{E}\\left(y|y<\\hat{q}_{\\alpha}\\right)}
{\\hat{E}\\left(y|y>\\hat{q}_{1-\\beta}\\right)
-\\hat{E}\\left(y|y<\\hat{q}_{\\beta}\\right)}
.. math::
KR_{4}=\\frac{\\hat{q}_{1-\\delta}-\\hat{q}_{\\delta}}
{\\hat{q}_{1-\\gamma}-\\hat{q}_{\\gamma}}
where :math:`\\hat{q}_{p}` is the estimated quantile at :math:`p`.
.. [*] Tae-Hwan Kim and Halbert White, "On more robust estimation of
skewness and kurtosis," Finance Research Letters, vol. 1, pp. 56-73,
March 2004.
"""
if (axis is None or
(y.squeeze().ndim == 1 and y.ndim != 1)):
y = y.ravel()
axis = 0
alpha, beta = ab
delta, gamma = dg
perc = (12.5, 25.0, 37.5, 62.5, 75.0, 87.5,
delta, 100.0 - delta, gamma, 100.0 - gamma)
e1, e2, e3, e5, e6, e7, fd, f1md, fg, f1mg = np.percentile(y, perc,
axis=axis)
expected_value = (expected_robust_kurtosis(ab, dg)
if excess else np.zeros(4))
kr1 = stats.kurtosis(y, axis, False) - expected_value[0]
kr2 = ((e7 - e5) + (e3 - e1)) / (e6 - e2) - expected_value[1]
if y.ndim == 1:
kr3 = _kr3(y, alpha, beta)
else:
kr3 = np.apply_along_axis(_kr3, axis, y, alpha, beta)
kr3 -= expected_value[2]
kr4 = (f1md - fd) / (f1mg - fg) - expected_value[3]
return kr1, kr2, kr3, kr4 | Calculates the four kurtosis measures in Kim & White
Parameters
----------
y : array_like
Data to compute use in the estimator.
axis : int or None, optional
Axis along which the kurtosis are computed. If `None`, the
entire array is used.
ab : iterable, optional
Contains 100*(alpha, beta) in the kr3 measure where alpha is the tail
quantile cut-off for measuring the extreme tail and beta is the central
quantile cutoff for the standardization of the measure
dg : iterable, optional
Contains 100*(delta, gamma) in the kr4 measure where delta is the tail
quantile for measuring extreme values and gamma is the central quantile
used in the standardization of the measure
excess : bool, optional
If true (default), computed values are excess of those for a standard
normal distribution.
Returns
-------
kr1 : ndarray
The standard kurtosis estimator.
kr2 : ndarray
Kurtosis estimator based on octiles.
kr3 : ndarray
Kurtosis estimators based on exceedance expectations.
kr4 : ndarray
Kurtosis measure based on the spread between high and low quantiles.
Notes
-----
The robust kurtosis measures are defined
.. math::
KR_{2}=\\frac{\\left(\\hat{q}_{.875}-\\hat{q}_{.625}\\right)
+\\left(\\hat{q}_{.375}-\\hat{q}_{.125}\\right)}
{\\hat{q}_{.75}-\\hat{q}_{.25}}
.. math::
KR_{3}=\\frac{\\hat{E}\\left(y|y>\\hat{q}_{1-\\alpha}\\right)
-\\hat{E}\\left(y|y<\\hat{q}_{\\alpha}\\right)}
{\\hat{E}\\left(y|y>\\hat{q}_{1-\\beta}\\right)
-\\hat{E}\\left(y|y<\\hat{q}_{\\beta}\\right)}
.. math::
KR_{4}=\\frac{\\hat{q}_{1-\\delta}-\\hat{q}_{\\delta}}
{\\hat{q}_{1-\\gamma}-\\hat{q}_{\\gamma}}
where :math:`\\hat{q}_{p}` is the estimated quantile at :math:`p`.
.. [*] Tae-Hwan Kim and Halbert White, "On more robust estimation of
skewness and kurtosis," Finance Research Letters, vol. 1, pp. 56-73,
March 2004. | robust_kurtosis | python | statsmodels/statsmodels | statsmodels/stats/stattools.py | https://github.com/statsmodels/statsmodels/blob/master/statsmodels/stats/stattools.py | BSD-3-Clause |
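The octile-based measure KR_2 can be checked against known distributions: evenly spaced octiles (uniform data) give exactly 1.0, while heavy tails give more. An illustrative sketch using deterministic inverse-CDF grids (no sampling noise):

```python
import numpy as np

# KR_2 = ((e7 - e5) + (e3 - e1)) / (e6 - e2), computed from octiles.
def kr2(y):
    e1, e2, e3, e5, e6, e7 = np.percentile(
        y, [12.5, 25.0, 37.5, 62.5, 75.0, 87.5])
    return ((e7 - e5) + (e3 - e1)) / (e6 - e2)

u = np.linspace(0.001, 0.999, 9999)
uniform = u
cauchy = np.tan(np.pi * (u - 0.5))   # inverse-CDF transform of the grid

assert abs(kr2(uniform) - 1.0) < 1e-6
assert kr2(cauchy) > kr2(uniform)
```

For a normal distribution KR_2 is about 1.23, which is the `expected_value[1]` term subtracted in the excess-kurtosis case; for the Cauchy grid above it is close to 2.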
def _medcouple_1d(y):
"""
Calculates the medcouple robust measure of skew.
Parameters
----------
y : array_like, 1-d
Data to compute use in the estimator.
Returns
-------
mc : float
The medcouple statistic
Notes
-----
The current algorithm requires O(N**2) memory, and so may
not work for very large arrays (N>10000).
.. [*] M. Hubert and E. Vandervieren, "An adjusted boxplot for skewed
distributions" Computational Statistics & Data Analysis, vol. 52, pp.
5186-5201, August 2008.
"""
# Note: this pairwise O(N**2) approach becomes slow for large n
y = np.squeeze(np.asarray(y))
if y.ndim != 1:
raise ValueError("y must be squeezable to a 1-d array")
y = np.sort(y)
n = y.shape[0]
if n % 2 == 0:
mf = (y[n // 2 - 1] + y[n // 2]) / 2
else:
mf = y[(n - 1) // 2]
z = y - mf
lower = z[z <= 0.0]
upper = z[z >= 0.0]
upper = upper[:, None]
standardization = upper - lower
is_zero = np.logical_and(lower == 0.0, upper == 0.0)
standardization[is_zero] = np.inf
spread = upper + lower
h = spread / standardization
# GH5395
num_ties = np.sum(lower == 0.0)
if num_ties:
# Replacements has -1 above the anti-diagonal, 0 on the anti-diagonal,
# and 1 below the anti-diagonal
replacements = np.ones((num_ties, num_ties)) - np.eye(num_ties)
replacements -= 2 * np.triu(replacements)
# Convert diagonal to anti-diagonal
replacements = np.fliplr(replacements)
# Always replace upper right block
h[:num_ties, -num_ties:] = replacements
return np.median(h) | Calculates the medcouple robust measure of skew.
Parameters
----------
y : array_like, 1-d
Data to compute use in the estimator.
Returns
-------
mc : float
The medcouple statistic
Notes
-----
The current algorithm requires O(N**2) memory, and so may
not work for very large arrays (N>10000).
.. [*] M. Hubert and E. Vandervieren, "An adjusted boxplot for skewed
distributions" Computational Statistics & Data Analysis, vol. 52, pp.
5186-5201, August 2008. | _medcouple_1d | python | statsmodels/statsmodels | statsmodels/stats/stattools.py | https://github.com/statsmodels/statsmodels/blob/master/statsmodels/stats/stattools.py | BSD-3-Clause |
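The core of the medcouple is the kernel h(z_i, z_j) = (z_j + z_i) / (z_j - z_i) evaluated over all (lower, upper) pairs of median-centered values. A minimal illustrative sketch (avoiding ties at the median, which the full routine handles via the `replacements` block above):

```python
import numpy as np

# Medcouple of a sample symmetric about its median is zero.
y = np.array([1.0, 2.0, 4.0, 5.0])   # median 3, no values equal to it
z = np.sort(y) - np.median(y)
lower = z[z <= 0]                    # [-2, -1]
upper = z[z >= 0][:, None]           # [[1], [2]]
h = (upper + lower) / (upper - lower)

assert np.median(h) == 0.0
```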
def medcouple(y, axis=0):
"""
Calculate the medcouple robust measure of skew.
Parameters
----------
y : array_like
Data to compute use in the estimator.
axis : {int, None}
Axis along which the medcouple statistic is computed. If `None`, the
entire array is used.
Returns
-------
mc : ndarray
The medcouple statistic with the same shape as `y`, with the specified
axis removed.
Notes
-----
The current algorithm requires O(N**2) memory, and so may
not work for very large arrays (N>10000).
.. [*] M. Hubert and E. Vandervieren, "An adjusted boxplot for skewed
distributions" Computational Statistics & Data Analysis, vol. 52, pp.
5186-5201, August 2008.
"""
y = np.asarray(y, dtype=np.double) # GH 4243
if axis is None:
return _medcouple_1d(y.ravel())
return np.apply_along_axis(_medcouple_1d, axis, y) | Calculate the medcouple robust measure of skew.
Parameters
----------
y : array_like
Data to compute use in the estimator.
axis : {int, None}
Axis along which the medcouple statistic is computed. If `None`, the
entire array is used.
Returns
-------
mc : ndarray
The medcouple statistic with the same shape as `y`, with the specified
axis removed.
Notes
-----
The current algorithm requires O(N**2) memory, and so may
not work for very large arrays (N>10000).
.. [*] M. Hubert and E. Vandervieren, "An adjusted boxplot for skewed
distributions" Computational Statistics & Data Analysis, vol. 52, pp.
5186-5201, August 2008. | medcouple | python | statsmodels/statsmodels | statsmodels/stats/stattools.py | https://github.com/statsmodels/statsmodels/blob/master/statsmodels/stats/stattools.py | BSD-3-Clause |
def _mover_confint(stat1, stat2, ci1, ci2, contrast="diff"):
"""
References
----------
.. [#] Krishnamoorthy, K., Jie Peng, and Dan Zhang. 2016. “Modified Large
Sample Confidence Intervals for Poisson Distributions: Ratio, Weighted
Average, and Product of Means.” Communications in Statistics - Theory
and Methods 45 (1): 83–97. https://doi.org/10.1080/03610926.2013.821486.
.. [#] Li, Yanhong, John J. Koval, Allan Donner, and G. Y. Zou. 2010.
“Interval Estimation for the Area under the Receiver Operating
Characteristic Curve When Data Are Subject to Error.” Statistics in
Medicine 29 (24): 2521–31. https://doi.org/10.1002/sim.4015.
.. [#] Zou, G. Y., and A. Donner. 2008. “Construction of Confidence Limits
about Effect Measures: A General Approach.” Statistics in Medicine 27
(10): 1693–1702. https://doi.org/10.1002/sim.3095.
"""
if contrast == "diff":
stat = stat1 - stat2
low_half = np.sqrt((stat1 - ci1[0])**2 + (stat2 - ci2[1])**2)
upp_half = np.sqrt((stat1 - ci1[1])**2 + (stat2 - ci2[0])**2)
ci = (stat - low_half, stat + upp_half)
elif contrast == "sum":
stat = stat1 + stat2
low_half = np.sqrt((stat1 - ci1[0])**2 + (stat2 - ci2[0])**2)
upp_half = np.sqrt((stat1 - ci1[1])**2 + (stat2 - ci2[1])**2)
ci = (stat - low_half, stat + upp_half)
elif contrast == "ratio":
# stat = stat1 / stat2
prod = stat1 * stat2
term1 = stat2**2 - (ci2[1] - stat2)**2
term2 = stat2**2 - (ci2[0] - stat2)**2
low_ = (prod -
np.sqrt(prod**2 - term1 * (stat1**2 - (ci1[0] - stat1)**2))
) / term1
upp_ = (prod +
np.sqrt(prod**2 - term2 * (stat1**2 - (ci1[1] - stat1)**2))
) / term2
# method 2 Li, Tang, Wong 2014
low1, upp1 = ci1
low2, upp2 = ci2
term1 = upp2 * (2 * stat2 - upp2)
term2 = low2 * (2 * stat2 - low2)
low = (prod -
np.sqrt(prod**2 - term1 * low1 * (2 * stat1 - low1))
) / term1
upp = (prod +
np.sqrt(prod**2 - term2 * upp1 * (2 * stat1 - upp1))
) / term2
assert_allclose((low_, upp_), (low, upp), atol=1e-15, rtol=1e-10)
ci = (low, upp)
return ci | References
----------
.. [#] Krishnamoorthy, K., Jie Peng, and Dan Zhang. 2016. “Modified Large
Sample Confidence Intervals for Poisson Distributions: Ratio, Weighted
Average, and Product of Means.” Communications in Statistics - Theory
and Methods 45 (1): 83–97. https://doi.org/10.1080/03610926.2013.821486.
.. [#] Li, Yanhong, John J. Koval, Allan Donner, and G. Y. Zou. 2010.
“Interval Estimation for the Area under the Receiver Operating
Characteristic Curve When Data Are Subject to Error.” Statistics in
Medicine 29 (24): 2521–31. https://doi.org/10.1002/sim.4015.
.. [#] Zou, G. Y., and A. Donner. 2008. “Construction of Confidence Limits
about Effect Measures: A General Approach.” Statistics in Medicine 27
(10): 1693–1702. https://doi.org/10.1002/sim.3095. | _mover_confint | python | statsmodels/statsmodels | statsmodels/stats/_inference_tools.py | https://github.com/statsmodels/statsmodels/blob/master/statsmodels/stats/_inference_tools.py | BSD-3-Clause |
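For the "diff" contrast the MOVER interval combines the two individual confidence limits as in Zou and Donner (2008). A small standalone sketch of just that branch, assuming two independent estimates with precomputed confidence limits (made-up numbers); with symmetric Wald limits it reduces to the usual Wald interval for the difference:

```python
import math

def mover_diff(stat1, ci1, stat2, ci2):
    """MOVER confidence interval for stat1 - stat2 from the individual CIs."""
    stat = stat1 - stat2
    low_half = math.sqrt((stat1 - ci1[0]) ** 2 + (ci2[1] - stat2) ** 2)
    upp_half = math.sqrt((ci1[1] - stat1) ** 2 + (stat2 - ci2[0]) ** 2)
    return stat - low_half, stat + upp_half

# symmetric 95% Wald limits around each estimate
z = 1.959963984540054
stat1, se1 = 1.0, 0.5
stat2, se2 = 0.4, 0.3
ci1 = (stat1 - z * se1, stat1 + z * se1)
ci2 = (stat2 - z * se2, stat2 + z * se2)
low, upp = mover_diff(stat1, ci1, stat2, ci2)
```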
def _check_nested_exog(small, large):
"""
Check if a larger exog nests a smaller exog
Parameters
----------
small : ndarray
exog from smaller model
large : ndarray
exog from larger model
Returns
-------
bool
True if small is nested by large
"""
if small.shape[1] > large.shape[1]:
return False
coef = np.linalg.lstsq(large, small, rcond=None)[0]
err = small - large @ coef
return np.linalg.matrix_rank(np.c_[large, err]) == large.shape[1] | Check if a larger exog nests a smaller exog
Parameters
----------
small : ndarray
exog from smaller model
large : ndarray
exog from larger model
Returns
-------
bool
True if small is nested by large | _check_nested_exog | python | statsmodels/statsmodels | statsmodels/stats/diagnostic.py | https://github.com/statsmodels/statsmodels/blob/master/statsmodels/stats/diagnostic.py | BSD-3-Clause |
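The idea behind the check is that `small` is nested in `large` exactly when every column of `small` lies in the column span of `large`. A sketch using a residual tolerance instead of the source's rank condition (a simplification, not the exact implementation):

```python
import numpy as np

def is_nested(small, large, tol=1e-8):
    """True if every column of ``small`` lies in the column span of ``large``."""
    coef, *_ = np.linalg.lstsq(large, small, rcond=None)
    resid = small - large @ coef
    return bool(np.max(np.abs(resid)) < tol)

rng = np.random.default_rng(0)
x = rng.standard_normal(50)
small = np.column_stack([np.ones(50), x])          # constant + x
large = np.column_stack([np.ones(50), x, x ** 2])  # adds x**2
```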
def compare_cox(results_x, results_z, store=False):
"""
Compute the Cox test for non-nested models
Parameters
----------
results_x : Result instance
result instance of first model
results_z : Result instance
result instance of second model
store : bool, default False
If true, then the intermediate results are returned.
Returns
-------
tstat : float
t statistic for the test that including the fitted values of the
first model in the second model has no effect.
pvalue : float
two-sided pvalue for the t statistic
res_store : ResultsStore, optional
Intermediate results. Returned if store is True.
Notes
-----
Tests of non-nested hypotheses might not provide unambiguous answers.
The test should be performed in both directions, and it is possible
that both or neither test rejects. See [1]_ for more information.
Formulas from [1]_, section 8.3.4 translated to code
Matches results for Example 8.3 in Greene
References
----------
.. [1] Greene, W. H. Econometric Analysis. New Jersey. Prentice Hall;
5th edition. (2002).
"""
if _check_nested_results(results_x, results_z):
raise ValueError(NESTED_ERROR.format(test="Cox comparison"))
x = results_x.model.exog
z = results_z.model.exog
nobs = results_x.model.endog.shape[0]
sigma2_x = results_x.ssr / nobs
sigma2_z = results_z.ssr / nobs
yhat_x = results_x.fittedvalues
res_dx = OLS(yhat_x, z).fit()
err_zx = res_dx.resid
res_xzx = OLS(err_zx, x).fit()
err_xzx = res_xzx.resid
sigma2_zx = sigma2_x + np.dot(err_zx.T, err_zx) / nobs
c01 = nobs / 2. * (np.log(sigma2_z) - np.log(sigma2_zx))
v01 = sigma2_x * np.dot(err_xzx.T, err_xzx) / sigma2_zx ** 2
q = c01 / np.sqrt(v01)
pval = 2 * stats.norm.sf(np.abs(q))
if store:
res = ResultsStore()
res.res_dx = res_dx
res.res_xzx = res_xzx
res.c01 = c01
res.v01 = v01
res.q = q
res.pvalue = pval
res.dist = stats.norm
return q, pval, res
return q, pval | Compute the Cox test for non-nested models
Parameters
----------
results_x : Result instance
result instance of first model
results_z : Result instance
result instance of second model
store : bool, default False
If true, then the intermediate results are returned.
Returns
-------
tstat : float
t statistic for the test that including the fitted values of the
first model in the second model has no effect.
pvalue : float
two-sided pvalue for the t statistic
res_store : ResultsStore, optional
Intermediate results. Returned if store is True.
Notes
-----
Tests of non-nested hypotheses might not provide unambiguous answers.
The test should be performed in both directions, and it is possible
that both or neither test rejects. See [1]_ for more information.
Formulas from [1]_, section 8.3.4 translated to code
Matches results for Example 8.3 in Greene
References
----------
.. [1] Greene, W. H. Econometric Analysis. New Jersey. Prentice Hall;
5th edition. (2002). | compare_cox | python | statsmodels/statsmodels | statsmodels/stats/diagnostic.py | https://github.com/statsmodels/statsmodels/blob/master/statsmodels/stats/diagnostic.py | BSD-3-Clause |
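The Cox statistic from section 8.3.4 of Greene can be traced step by step with plain numpy least squares. A sketch on simulated data (the variable names mirror the function body; the data and the normal p-value via `erfc` are illustrative choices, not part of the source):

```python
import math
import numpy as np

def fit(y, X):
    b, *_ = np.linalg.lstsq(X, y, rcond=None)
    return X @ b, y - X @ b

rng = np.random.default_rng(1)
n = 200
x1, z1 = rng.standard_normal(n), rng.standard_normal(n)
y = 1.0 + 2.0 * x1 + rng.standard_normal(n)
X = np.column_stack([np.ones(n), x1])      # model under H0
Z = np.column_stack([np.ones(n), z1])      # non-nested alternative

yhat_x, e_x = fit(y, X)
_, e_z = fit(y, Z)
sigma2_x, sigma2_z = e_x @ e_x / n, e_z @ e_z / n
_, err_zx = fit(yhat_x, Z)                 # part of the x-fit not explained by z
_, err_xzx = fit(err_zx, X)
sigma2_zx = sigma2_x + err_zx @ err_zx / n
c01 = n / 2.0 * (math.log(sigma2_z) - math.log(sigma2_zx))
v01 = sigma2_x * (err_xzx @ err_xzx) / sigma2_zx ** 2
q = c01 / math.sqrt(v01)
pval = math.erfc(abs(q) / math.sqrt(2.0))  # two-sided normal p-value
```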
def compare_j(results_x, results_z, store=False):
"""
Compute the J-test for non-nested models
Parameters
----------
results_x : RegressionResults
The result instance of first model.
results_z : RegressionResults
The result instance of second model.
store : bool, default False
If true, then the intermediate results are returned.
Returns
-------
tstat : float
t statistic for the test that including the fitted values of the
first model in the second model has no effect.
pvalue : float
two-sided pvalue for the t statistic
res_store : ResultsStore, optional
Intermediate results. Returned if store is True.
Notes
-----
From description in Greene, section 8.3.3. Matches results for Example
8.3, Greene.
Tests of non-nested hypotheses might not provide unambiguous answers.
The test should be performed in both directions, and it is possible
that both or neither test rejects. See Greene for more information.
References
----------
.. [1] Greene, W. H. Econometric Analysis. New Jersey. Prentice Hall;
5th edition. (2002).
"""
# TODO: Allow cov to be specified
if _check_nested_results(results_x, results_z):
raise ValueError(NESTED_ERROR.format(test="J comparison"))
y = results_x.model.endog
z = results_z.model.exog
yhat_x = results_x.fittedvalues
res_zx = OLS(y, np.column_stack((yhat_x, z))).fit()
tstat = res_zx.tvalues[0]
pval = res_zx.pvalues[0]
if store:
res = ResultsStore()
res.res_zx = res_zx
res.dist = stats.t(res_zx.df_resid)
res.teststat = tstat
res.pvalue = pval
return tstat, pval, res
return tstat, pval | Compute the J-test for non-nested models
Parameters
----------
results_x : RegressionResults
The result instance of first model.
results_z : RegressionResults
The result instance of second model.
store : bool, default False
If true, then the intermediate results are returned.
Returns
-------
tstat : float
t statistic for the test that including the fitted values of the
first model in the second model has no effect.
pvalue : float
two-sided pvalue for the t statistic
res_store : ResultsStore, optional
Intermediate results. Returned if store is True.
Notes
-----
From description in Greene, section 8.3.3. Matches results for Example
8.3, Greene.
Tests of non-nested hypotheses might not provide unambiguous answers.
The test should be performed in both directions, and it is possible
that both or neither test rejects. See Greene for more information.
References
----------
.. [1] Greene, W. H. Econometric Analysis. New Jersey. Prentice Hall;
5th edition. (2002). | compare_j | python | statsmodels/statsmodels | statsmodels/stats/diagnostic.py | https://github.com/statsmodels/statsmodels/blob/master/statsmodels/stats/diagnostic.py | BSD-3-Clause |
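The J test boils down to one t-statistic: augment the second model with the fitted values of the first and test that coefficient. A hand-rolled sketch on simulated data (made up for illustration); since the x-model is the true one here, its fitted values should enter the z-model with a large t-statistic and a coefficient near 1:

```python
import numpy as np

rng = np.random.default_rng(2)
n = 500
x = rng.standard_normal(n)
z = rng.standard_normal(n)
y = 1.0 + 2.0 * x + rng.standard_normal(n)

def ols_t(y, X):
    """OLS coefficients and t-statistics (classic covariance)."""
    b, *_ = np.linalg.lstsq(X, y, rcond=None)
    e = y - X @ b
    s2 = e @ e / (X.shape[0] - X.shape[1])
    se = np.sqrt(np.diag(s2 * np.linalg.inv(X.T @ X)))
    return b, b / se

X = np.column_stack([np.ones(n), x])
Z = np.column_stack([np.ones(n), z])
b_x, *_ = np.linalg.lstsq(X, y, rcond=None)
yhat_x = X @ b_x
b, tvals = ols_t(y, np.column_stack([yhat_x, Z]))
tstat = tvals[0]    # t-statistic on the x-model fitted values
```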
def acorr_ljungbox(x, lags=None, boxpierce=False, model_df=0, period=None,
return_df=True, auto_lag=False):
"""
Ljung-Box test of autocorrelation in residuals.
Parameters
----------
x : array_like
The data series. The data is demeaned before the test statistic is
computed.
lags : {int, array_like}, default None
If lags is an integer then this is taken to be the largest lag
that is included; the test result is reported for all smaller lag
lengths. If lags is a list or array, then all lags are included up to
the largest lag in the list, however only the tests for the lags in
the list are reported. If lags is None, then the default maxlag is
min(10, nobs // 5). The default number of lags changes if period
is set.
boxpierce : bool, default False
If true, then in addition to the results of the Ljung-Box test, the
Box-Pierce test results are also returned.
model_df : int, default 0
Number of degrees of freedom consumed by the model. In an ARMA model,
this value is usually p+q where p is the AR order and q is the MA
order. This value is subtracted from the degrees-of-freedom used in
the test so that the adjusted dof for the statistics are
lags - model_df. If lags - model_df <= 0, then NaN is returned.
period : int, default None
The period of a Seasonal time series. Used to compute the max lag
for seasonal data which uses min(2*period, nobs // 5) if set. If None,
then the default rule is used to set the number of lags. When set, must
be >= 2.
auto_lag : bool, default False
Flag indicating whether to automatically determine the optimal lag
length based on threshold of maximum correlation value.
Returns
-------
DataFrame
Frame with columns:
* lb_stat - The Ljung-Box test statistic.
* lb_pvalue - The p-value based on chi-square distribution. The
p-value is computed as 1 - chi2.cdf(lb_stat, dof) where dof is
lag - model_df. If lag - model_df <= 0, then NaN is returned for
the pvalue.
* bp_stat - The Box-Pierce test statistic.
* bp_pvalue - The p-value for the Box-Pierce test based on the chi-square
distribution. The p-value is computed as 1 - chi2.cdf(bp_stat, dof)
where dof is lag - model_df. If lag - model_df <= 0, then NaN is
returned for the pvalue.
See Also
--------
statsmodels.regression.linear_model.OLS.fit
Regression model fitting.
statsmodels.regression.linear_model.RegressionResults
Results from linear regression models.
statsmodels.stats.stattools.q_stat
Ljung-Box test statistic computed from estimated
autocorrelations.
Notes
-----
The Ljung-Box and Box-Pierce statistics differ in their scaling of the
autocorrelation function. The Ljung-Box test has better finite-sample
properties.
References
----------
.. [*] Greene, W. "Econometric Analysis," 5th ed., Pearson, 2003.
.. [*] J. Carlos Escanciano, Ignacio N. Lobato
"An automatic Portmanteau test for serial correlation",
Volume 151, 2009.
Examples
--------
>>> import statsmodels.api as sm
>>> data = sm.datasets.sunspots.load_pandas().data
>>> res = sm.tsa.ARMA(data["SUNACTIVITY"], (1,1)).fit(disp=-1)
>>> sm.stats.acorr_ljungbox(res.resid, lags=[10], return_df=True)
lb_stat lb_pvalue
10 214.106992 1.827374e-40
"""
# Avoid cyclic import
from statsmodels.tsa.stattools._stattools import acf
x = array_like(x, "x")
period = int_like(period, "period", optional=True)
model_df = int_like(model_df, "model_df", optional=False)
if period is not None and period <= 1:
raise ValueError("period must be >= 2")
if model_df < 0:
raise ValueError("model_df must be >= 0")
nobs = x.shape[0]
if auto_lag:
maxlag = nobs - 1
# Compute sum of squared autocorrelations
sacf = acf(x, nlags=maxlag, fft=False)
if not boxpierce:
q_sacf = (nobs * (nobs + 2) *
np.cumsum(sacf[1:maxlag + 1] ** 2
/ (nobs - np.arange(1, maxlag + 1))))
else:
q_sacf = nobs * np.cumsum(sacf[1:maxlag + 1] ** 2)
# obtain thresholds
q = 2.4
threshold = np.sqrt(q * np.log(nobs))
threshold_metric = np.abs(sacf).max() * np.sqrt(nobs)
# compute penalized sum of squared autocorrelations
if (threshold_metric <= threshold):
q_sacf = q_sacf - (np.arange(1, nobs) * np.log(nobs))
else:
q_sacf = q_sacf - (2 * np.arange(1, nobs))
# note: np.argmax returns first (i.e., smallest) index of largest value
lags = np.argmax(q_sacf)
lags = max(1, lags) # optimal lag has to be at least 1
lags = int_like(lags, "lags")
lags = np.arange(1, lags + 1)
elif period is not None:
lags = np.arange(1, min(nobs // 5, 2 * period) + 1, dtype=int)
elif lags is None:
lags = np.arange(1, min(nobs // 5, 10) + 1, dtype=int)
elif not isinstance(lags, Iterable):
lags = int_like(lags, "lags")
lags = np.arange(1, lags + 1)
lags = array_like(lags, "lags", dtype="int")
maxlag = lags.max()
# normalize by nobs not (nobs-nlags)
# SS: unbiased=False is default now
sacf = acf(x, nlags=maxlag, fft=False)
sacf2 = sacf[1:maxlag + 1] ** 2 / (nobs - np.arange(1, maxlag + 1))
qljungbox = nobs * (nobs + 2) * np.cumsum(sacf2)[lags - 1]
adj_lags = lags - model_df
pval = np.full_like(qljungbox, np.nan)
loc = adj_lags > 0
pval[loc] = stats.chi2.sf(qljungbox[loc], adj_lags[loc])
if not boxpierce:
return pd.DataFrame({"lb_stat": qljungbox, "lb_pvalue": pval},
index=lags)
qboxpierce = nobs * np.cumsum(sacf[1:maxlag + 1] ** 2)[lags - 1]
pvalbp = np.full_like(qljungbox, np.nan)
pvalbp[loc] = stats.chi2.sf(qboxpierce[loc], adj_lags[loc])
return pd.DataFrame({"lb_stat": qljungbox, "lb_pvalue": pval,
"bp_stat": qboxpierce, "bp_pvalue": pvalbp},
index=lags) | Ljung-Box test of autocorrelation in residuals.
Parameters
----------
x : array_like
The data series. The data is demeaned before the test statistic is
computed.
lags : {int, array_like}, default None
If lags is an integer then this is taken to be the largest lag
that is included; the test result is reported for all smaller lag
lengths. If lags is a list or array, then all lags are included up to
the largest lag in the list, however only the tests for the lags in
the list are reported. If lags is None, then the default maxlag is
min(10, nobs // 5). The default number of lags changes if period
is set.
boxpierce : bool, default False
If true, then in addition to the results of the Ljung-Box test, the
Box-Pierce test results are also returned.
model_df : int, default 0
Number of degrees of freedom consumed by the model. In an ARMA model,
this value is usually p+q where p is the AR order and q is the MA
order. This value is subtracted from the degrees-of-freedom used in
the test so that the adjusted dof for the statistics are
lags - model_df. If lags - model_df <= 0, then NaN is returned.
period : int, default None
The period of a Seasonal time series. Used to compute the max lag
for seasonal data which uses min(2*period, nobs // 5) if set. If None,
then the default rule is used to set the number of lags. When set, must
be >= 2.
auto_lag : bool, default False
Flag indicating whether to automatically determine the optimal lag
length based on threshold of maximum correlation value.
Returns
-------
DataFrame
Frame with columns:
* lb_stat - The Ljung-Box test statistic.
* lb_pvalue - The p-value based on chi-square distribution. The
p-value is computed as 1 - chi2.cdf(lb_stat, dof) where dof is
lag - model_df. If lag - model_df <= 0, then NaN is returned for
the pvalue.
* bp_stat - The Box-Pierce test statistic.
* bp_pvalue - The p-value for the Box-Pierce test based on the chi-square
distribution. The p-value is computed as 1 - chi2.cdf(bp_stat, dof)
where dof is lag - model_df. If lag - model_df <= 0, then NaN is
returned for the pvalue.
See Also
--------
statsmodels.regression.linear_model.OLS.fit
Regression model fitting.
statsmodels.regression.linear_model.RegressionResults
Results from linear regression models.
statsmodels.stats.stattools.q_stat
Ljung-Box test statistic computed from estimated
autocorrelations.
Notes
-----
The Ljung-Box and Box-Pierce statistics differ in their scaling of the
autocorrelation function. The Ljung-Box test has better finite-sample
properties.
References
----------
.. [*] Greene, W. "Econometric Analysis," 5th ed., Pearson, 2003.
.. [*] J. Carlos Escanciano, Ignacio N. Lobato
"An automatic Portmanteau test for serial correlation",
Volume 151, 2009.
Examples
--------
>>> import statsmodels.api as sm
>>> data = sm.datasets.sunspots.load_pandas().data
>>> res = sm.tsa.ARMA(data["SUNACTIVITY"], (1,1)).fit(disp=-1)
>>> sm.stats.acorr_ljungbox(res.resid, lags=[10], return_df=True)
lb_stat lb_pvalue
10 214.106992 1.827374e-40 | acorr_ljungbox | python | statsmodels/statsmodels | statsmodels/stats/diagnostic.py | https://github.com/statsmodels/statsmodels/blob/master/statsmodels/stats/diagnostic.py | BSD-3-Clause |
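The lb_stat column is just n(n+2) · Σ ρ_k²/(n−k) accumulated over lags, with ρ_k the sample autocorrelation of the demeaned series. A self-contained numpy sketch (no chi-square p-values, to keep it dependency-free), applied to a random walk where strong autocorrelation makes the statistic large:

```python
import numpy as np

def ljung_box_q(x, nlags):
    """Ljung-Box Q statistics for lags 1..nlags of a demeaned series."""
    x = np.asarray(x, dtype=float)
    x = x - x.mean()
    n = x.shape[0]
    denom = x @ x
    rho = np.array([x[k:] @ x[:-k] / denom for k in range(1, nlags + 1)])
    return n * (n + 2) * np.cumsum(rho ** 2 / (n - np.arange(1, nlags + 1)))

rng = np.random.default_rng(3)
walk = np.cumsum(rng.standard_normal(200))   # strongly autocorrelated
q = ljung_box_q(walk, 5)
```

The statistic is cumulative, so it can only grow as more lags are added.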
def acorr_lm(resid, nlags=None, store=False, *, period=None,
ddof=0, cov_type="nonrobust", cov_kwds=None):
"""
Lagrange Multiplier tests for autocorrelation.
This is a generic Lagrange Multiplier test for autocorrelation. Returns
Engle's ARCH test if resid is the squared residual array. Breusch-Godfrey
is a variation on this test with additional exogenous variables.
Parameters
----------
resid : array_like
Time series to test.
nlags : int, default None
Highest lag to use.
store : bool, default False
If true then the intermediate results are also returned.
period : int, default None
The period of a Seasonal time series. Used to compute the max lag
for seasonal data which uses min(2*period, nobs // 5) if set. If None,
then the default rule is used to set the number of lags. When set, must
be >= 2.
ddof : int, default 0
The number of degrees of freedom consumed by the model used to
produce resid. The default value is 0.
cov_type : str, default "nonrobust"
Covariance type. The default is "nonrobust", which uses the classic
OLS covariance estimator. Specify one of "HC0", "HC1", "HC2", "HC3"
to use White's covariance estimator. All covariance types supported
by ``OLS.fit`` are accepted.
cov_kwds : dict, default None
Dictionary of covariance options passed to ``OLS.fit``. See OLS.fit for
more details.
Returns
-------
lm : float
Lagrange multiplier test statistic.
lmpval : float
The p-value for Lagrange multiplier test.
fval : float
The f statistic of the F test, alternative version of the same
test based on F test for the parameter restriction.
fpval : float
The pvalue of the F test.
res_store : ResultsStore, optional
Intermediate results. Only returned if store=True.
See Also
--------
het_arch
Conditional heteroskedasticity testing.
acorr_breusch_godfrey
Breusch-Godfrey test for serial correlation.
acorr_ljungbox
Ljung-Box test for serial correlation.
Notes
-----
The test statistic is computed as (nobs - ddof) * r2 where r2 is the
R-squared from a regression of the residual on nlags lags of the
residual.
"""
resid = array_like(resid, "resid", ndim=1)
cov_type = string_like(cov_type, "cov_type")
cov_kwds = {} if cov_kwds is None else cov_kwds
cov_kwds = dict_like(cov_kwds, "cov_kwds")
nobs = resid.shape[0]
if period is not None and nlags is None:
maxlag = min(nobs // 5, 2 * period)
elif nlags is None:
maxlag = min(10, nobs // 5)
else:
maxlag = nlags
xdall = lagmat(resid[:, None], maxlag, trim="both")
nobs = xdall.shape[0]
xdall = np.c_[np.ones((nobs, 1)), xdall]
xshort = resid[-nobs:]
res_store = ResultsStore()
usedlag = maxlag
resols = OLS(xshort, xdall[:, :usedlag + 1]).fit(cov_type=cov_type,
cov_kwds=cov_kwds)
fval = float(resols.fvalue)
fpval = float(resols.f_pvalue)
if cov_type == "nonrobust":
lm = (nobs - ddof) * resols.rsquared
lmpval = stats.chi2.sf(lm, usedlag)
# Note: deg of freedom for LM test: nvars - constant = lags used
else:
r_matrix = np.hstack((np.zeros((usedlag, 1)), np.eye(usedlag)))
test_stat = resols.wald_test(r_matrix, use_f=False, scalar=True)
lm = float(test_stat.statistic)
lmpval = float(test_stat.pvalue)
if store:
res_store.resols = resols
res_store.usedlag = usedlag
return lm, lmpval, fval, fpval, res_store
else:
return lm, lmpval, fval, fpval | Lagrange Multiplier tests for autocorrelation.
This is a generic Lagrange Multiplier test for autocorrelation. Returns
Engle's ARCH test if resid is the squared residual array. Breusch-Godfrey
is a variation on this test with additional exogenous variables.
Parameters
----------
resid : array_like
Time series to test.
nlags : int, default None
Highest lag to use.
store : bool, default False
If true then the intermediate results are also returned.
period : int, default None
The period of a Seasonal time series. Used to compute the max lag
for seasonal data which uses min(2*period, nobs // 5) if set. If None,
then the default rule is used to set the number of lags. When set, must
be >= 2.
ddof : int, default 0
The number of degrees of freedom consumed by the model used to
produce resid. The default value is 0.
cov_type : str, default "nonrobust"
Covariance type. The default is "nonrobust", which uses the classic
OLS covariance estimator. Specify one of "HC0", "HC1", "HC2", "HC3"
to use White's covariance estimator. All covariance types supported
by ``OLS.fit`` are accepted.
cov_kwds : dict, default None
Dictionary of covariance options passed to ``OLS.fit``. See OLS.fit for
more details.
Returns
-------
lm : float
Lagrange multiplier test statistic.
lmpval : float
The p-value for Lagrange multiplier test.
fval : float
The f statistic of the F test, alternative version of the same
test based on F test for the parameter restriction.
fpval : float
The pvalue of the F test.
res_store : ResultsStore, optional
Intermediate results. Only returned if store=True.
See Also
--------
het_arch
Conditional heteroskedasticity testing.
acorr_breusch_godfrey
Breusch-Godfrey test for serial correlation.
acorr_ljungbox
Ljung-Box test for serial correlation.
Notes
-----
The test statistic is computed as (nobs - ddof) * r2 where r2 is the
R-squared from a regression of the residual on nlags lags of the
residual. | acorr_lm | python | statsmodels/statsmodels | statsmodels/stats/diagnostic.py | https://github.com/statsmodels/statsmodels/blob/master/statsmodels/stats/diagnostic.py | BSD-3-Clause |
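The core of the LM statistic is nobs · R² from regressing the series on a constant and its own lags (ignoring ddof here). A hand-rolled sketch on a simulated AR(1) series, where the null of no autocorrelation should be rejected decisively:

```python
import numpy as np

rng = np.random.default_rng(4)
n, phi = 300, 0.9
eps = rng.standard_normal(n)
r = np.zeros(n)
for t in range(1, n):
    r[t] = phi * r[t - 1] + eps[t]           # AR(1) series

nlags = 2
y = r[nlags:]
lags = [r[nlags - k:n - k] for k in range(1, nlags + 1)]
X = np.column_stack([np.ones(n - nlags)] + lags)
b, *_ = np.linalg.lstsq(X, y, rcond=None)
e = y - X @ b
r2 = 1 - e @ e / ((y - y.mean()) @ (y - y.mean()))
lm = (n - nlags) * r2    # compare to a chi2 with nlags degrees of freedom
```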
def het_arch(resid, nlags=None, store=False, ddof=0):
"""
Engle's Test for Autoregressive Conditional Heteroscedasticity (ARCH).
Parameters
----------
resid : ndarray
residuals from an estimation, or time series
nlags : int, default None
Highest lag to use.
store : bool, default False
If true then the intermediate results are also returned
ddof : int, default 0
If the residuals are from a regression, or ARMA estimation, then there
are recommendations to correct the degrees of freedom by the number
of parameters that have been estimated, for example ddof=p+q for an
ARMA(p,q).
Returns
-------
lm : float
Lagrange multiplier test statistic
lmpval : float
p-value for Lagrange multiplier test
fval : float
fstatistic for F test, alternative version of the same test based on
F test for the parameter restriction
fpval : float
pvalue for F test
res_store : ResultsStore, optional
Intermediate results. Returned if store is True.
Notes
-----
verified against R:FinTS::ArchTest
"""
return acorr_lm(resid ** 2, nlags=nlags, store=store, ddof=ddof) | Engle's Test for Autoregressive Conditional Heteroscedasticity (ARCH).
Parameters
----------
resid : ndarray
residuals from an estimation, or time series
nlags : int, default None
Highest lag to use.
store : bool, default False
If true then the intermediate results are also returned
ddof : int, default 0
If the residuals are from a regression, or ARMA estimation, then there
are recommendations to correct the degrees of freedom by the number
of parameters that have been estimated, for example ddof=p+q for an
ARMA(p,q).
Returns
-------
lm : float
Lagrange multiplier test statistic
lmpval : float
p-value for Lagrange multiplier test
fval : float
fstatistic for F test, alternative version of the same test based on
F test for the parameter restriction
fpval : float
pvalue for F test
res_store : ResultsStore, optional
Intermediate results. Returned if store is True.
Notes
-----
verified against R:FinTS::ArchTest | het_arch | python | statsmodels/statsmodels | statsmodels/stats/diagnostic.py | https://github.com/statsmodels/statsmodels/blob/master/statsmodels/stats/diagnostic.py | BSD-3-Clause |
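Since het_arch is acorr_lm applied to the squared residuals, the mechanics can be shown on a deterministic toy series with a volatility break (low amplitude in the first half, high in the second, chosen so the outcome is predictable without simulation noise); regressing r² on one lag of itself then gives a large LM statistic by construction:

```python
import numpy as np

# toy "residuals": amplitude 0.1 in the first half, 3.0 in the second
r = np.concatenate([0.1 * np.tile([1.0, -1.0], 125),
                    3.0 * np.tile([1.0, -1.0], 125)])
r2 = r ** 2
y = r2[1:]
X = np.column_stack([np.ones(y.size), r2[:-1]])   # constant + one lag of r**2
b, *_ = np.linalg.lstsq(X, y, rcond=None)
e = y - X @ b
sst = ((y - y.mean()) ** 2).sum()
lm = y.size * (1 - e @ e / sst)                   # nobs * R-squared
```

With an intercept, R² is at most 1, so the LM statistic is bounded by the number of observations.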
def acorr_breusch_godfrey(res, nlags=None, store=False):
"""
Breusch-Godfrey Lagrange Multiplier tests for residual autocorrelation.
Parameters
----------
res : RegressionResults
Estimation results for which the residuals are tested for serial
correlation.
nlags : int, optional
Number of lags to include in the auxiliary regression. (nlags is
highest lag).
store : bool, default False
If store is true, then an additional class instance that contains
intermediate results is returned.
Returns
-------
lm : float
Lagrange multiplier test statistic.
lmpval : float
The p-value for Lagrange multiplier test.
fval : float
The value of the f statistic for F test, alternative version of the
same test based on F test for the parameter restriction.
fpval : float
The pvalue for F test.
res_store : ResultsStore
A class instance that holds intermediate results. Only returned if
store=True.
Notes
-----
BG adds lags of the residual to exog in the design matrix for the auxiliary
regression with residuals as endog. See [1]_, section 12.7.1.
References
----------
.. [1] Greene, W. H. Econometric Analysis. New Jersey. Prentice Hall;
5th edition. (2002).
"""
x = np.asarray(res.resid).squeeze()
if x.ndim != 1:
raise ValueError("Model resid must be a 1d array. Cannot be used on"
" multivariate models.")
exog_old = res.model.exog
nobs = x.shape[0]
if nlags is None:
nlags = min(10, nobs // 5)
x = np.concatenate((np.zeros(nlags), x))
xdall = lagmat(x[:, None], nlags, trim="both")
nobs = xdall.shape[0]
xdall = np.c_[np.ones((nobs, 1)), xdall]
xshort = x[-nobs:]
if exog_old is None:
exog = xdall
else:
exog = np.column_stack((exog_old, xdall))
k_vars = exog.shape[1]
resols = OLS(xshort, exog).fit()
ft = resols.f_test(np.eye(nlags, k_vars, k_vars - nlags))
fval = ft.fvalue
fpval = ft.pvalue
fval = float(np.squeeze(fval))
fpval = float(np.squeeze(fpval))
lm = nobs * resols.rsquared
lmpval = stats.chi2.sf(lm, nlags)
# Note: degrees of freedom for LM test is nvars minus constant = usedlags
if store:
res_store = ResultsStore()
res_store.resols = resols
res_store.usedlag = nlags
return lm, lmpval, fval, fpval, res_store
else:
return lm, lmpval, fval, fpval | Breusch-Godfrey Lagrange Multiplier tests for residual autocorrelation.
Parameters
----------
res : RegressionResults
Estimation results for which the residuals are tested for serial
correlation.
nlags : int, optional
Number of lags to include in the auxiliary regression. (nlags is
highest lag).
store : bool, default False
If store is true, then an additional class instance that contains
intermediate results is returned.
Returns
-------
lm : float
Lagrange multiplier test statistic.
lmpval : float
The p-value for Lagrange multiplier test.
fval : float
The value of the f statistic for F test, alternative version of the
same test based on F test for the parameter restriction.
fpval : float
The pvalue for F test.
res_store : ResultsStore
A class instance that holds intermediate results. Only returned if
store=True.
Notes
-----
BG adds lags of the residual to exog in the design matrix for the auxiliary
regression with residuals as endog. See [1]_, section 12.7.1.
References
----------
.. [1] Greene, W. H. Econometric Analysis. New Jersey. Prentice Hall;
5th edition. (2002). | acorr_breusch_godfrey | python | statsmodels/statsmodels | statsmodels/stats/diagnostic.py | https://github.com/statsmodels/statsmodels/blob/master/statsmodels/stats/diagnostic.py | BSD-3-Clause |
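The auxiliary regression can be written out directly with numpy: residuals regressed on the original exog plus lagged residuals, with zeros padding the pre-sample lags as the source does. A sketch on simulated data with AR(1) errors (made up for illustration), where serial correlation should be clearly detected:

```python
import numpy as np

rng = np.random.default_rng(5)
n = 400
x = rng.standard_normal(n)
u = np.zeros(n)
eps = rng.standard_normal(n)
for t in range(1, n):
    u[t] = 0.8 * u[t - 1] + eps[t]          # AR(1) errors
y = 1.0 + 0.5 * x + u

X = np.column_stack([np.ones(n), x])
b, *_ = np.linalg.lstsq(X, y, rcond=None)
e = y - X @ b                               # OLS residuals

e_lag = np.concatenate([[0.0], e[:-1]])     # pre-sample lag padded with 0
Z = np.column_stack([X, e_lag])             # original exog plus lagged residual
g, *_ = np.linalg.lstsq(Z, e, rcond=None)
v = e - Z @ g
r2 = 1 - v @ v / ((e - e.mean()) @ (e - e.mean()))
lm = n * r2                                 # compare to chi2(1)
```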
def _check_het_test(x: np.ndarray, test_name: str) -> None:
"""
Check validity of the exogenous regressors in a heteroskedasticity test
Parameters
----------
x : ndarray
The exogenous regressor array
test_name : str
The test name for the exception
"""
x_max = x.max(axis=0)
if (
not np.any(((x_max - x.min(axis=0)) == 0) & (x_max != 0))
or x.shape[1] < 2
):
raise ValueError(
f"{test_name} test requires exog to have at least "
"two columns where one is a constant."
) | Check validity of the exogenous regressors in a heteroskedasticity test
Parameters
----------
x : ndarray
The exogenous regressor array
test_name : str
The test name for the exception | _check_het_test | python | statsmodels/statsmodels | statsmodels/stats/diagnostic.py | https://github.com/statsmodels/statsmodels/blob/master/statsmodels/stats/diagnostic.py | BSD-3-Clause |
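The check just looks for a nonzero constant column plus at least one other regressor. An equivalent standalone predicate (the inverse of the raise condition in the source):

```python
import numpy as np

def has_const_and_regressor(x):
    """True when x has >= 2 columns and at least one nonzero constant column."""
    x = np.asarray(x, dtype=float)
    x_max = x.max(axis=0)
    is_const = ((x_max - x.min(axis=0)) == 0) & (x_max != 0)
    return x.shape[1] >= 2 and bool(is_const.any())

x_ok = np.column_stack([np.ones(10), np.arange(10.0)])  # constant + regressor
x_bad = np.arange(10.0)[:, None]                        # single non-constant column
```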
def het_white(resid, exog):
"""
White's Lagrange Multiplier Test for Heteroscedasticity.
Parameters
----------
resid : array_like
The residuals. The squared residuals are used as the endogenous
variable.
exog : array_like
The explanatory variables for the variance. Squares and interaction
terms are automatically included in the auxiliary regression.
Returns
-------
lm : float
The lagrange multiplier statistic.
lm_pvalue :float
The p-value of lagrange multiplier test.
fvalue : float
The f-statistic of the hypothesis that the error variance does not
depend on x. This is an alternative test variant not the original
LM test.
f_pvalue : float
The p-value for the f-statistic.
Notes
-----
Assumes x contains a constant (for counting dof).
question: does f-statistic make sense? constant ?
References
----------
Greene section 11.4.1 5th edition p. 222. Test statistic reproduces
Greene 5th, example 11.3.
"""
x = array_like(exog, "exog", ndim=2)
y = array_like(resid, "resid", ndim=2, shape=(x.shape[0], 1))
_check_het_test(x, "White's heteroskedasticity")
nobs, nvars0 = x.shape
i0, i1 = np.triu_indices(nvars0)
exog = x[:, i0] * x[:, i1]
nobs, nvars = exog.shape
assert nvars == nvars0 * (nvars0 - 1) / 2. + nvars0
resols = OLS(y ** 2, exog).fit()
fval = resols.fvalue
fpval = resols.f_pvalue
lm = nobs * resols.rsquared
# Note: degrees of freedom for LM test is nvars minus constant
# degrees of freedom take possible reduced rank in exog into account
# df_model checks the rank to determine df
# extra calculation that can be removed:
assert resols.df_model == np.linalg.matrix_rank(exog) - 1
lmpval = stats.chi2.sf(lm, resols.df_model)
return lm, lmpval, fval, fpval | White's Lagrange Multiplier Test for Heteroscedasticity.
Parameters
----------
resid : array_like
The residuals. The squared residuals are used as the endogenous
variable.
exog : array_like
The explanatory variables for the variance. Squares and interaction
terms are automatically included in the auxiliary regression.
Returns
-------
lm : float
The lagrange multiplier statistic.
lm_pvalue :float
The p-value of lagrange multiplier test.
fvalue : float
The f-statistic of the hypothesis that the error variance does not
depend on x. This is an alternative test variant, not the original
LM test.
f_pvalue : float
The p-value for the f-statistic.
Notes
-----
Assumes x contains a constant (for counting dof).
question: does f-statistic make sense? constant ?
References
----------
Greene section 11.4.1 5th edition p. 222. Test statistic reproduces
Greene 5th, example 11.3. | het_white | python | statsmodels/statsmodels | statsmodels/stats/diagnostic.py | https://github.com/statsmodels/statsmodels/blob/master/statsmodels/stats/diagnostic.py | BSD-3-Clause |
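The auxiliary regressors are all products x_i · x_j from the upper triangle, exactly as the `triu_indices` line builds them. A deterministic sketch where the squared error is an exact linear function of those regressors, so the auxiliary R² is essentially 1 and the LM statistic is close to nobs:

```python
import numpy as np

n = 100
xv = np.linspace(-2.0, 2.0, n)
sign = np.tile([1.0, -1.0], n // 2)
e = sign * np.sqrt(1.0 + xv ** 2)     # e**2 == 1 + xv**2 exactly

X = np.column_stack([np.ones(n), xv])
i0, i1 = np.triu_indices(X.shape[1])
aux = X[:, i0] * X[:, i1]             # columns: 1, x, x**2
b, *_ = np.linalg.lstsq(aux, e ** 2, rcond=None)
resid = e ** 2 - aux @ b
sst = ((e ** 2 - (e ** 2).mean()) ** 2).sum()
lm = n * (1 - resid @ resid / sst)    # nobs * R-squared of the auxiliary fit
```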
def het_goldfeldquandt(y, x, idx=None, split=None, drop=None,
alternative="increasing", store=False):
"""
Goldfeld-Quandt homoskedasticity test.
This test examines whether the residual variance is the same in 2
subsamples.
Parameters
----------
y : array_like
endogenous variable
x : array_like
exogenous variable, regressors
idx : int, default None
column index of variable according to which observations are
sorted for the split
split : {int, float}, default None
If an integer, this is the index at which sample is split.
If a float in 0<split<1 then split is interpreted as fraction
of the observations in the first sample. If None, uses nobs//2.
drop : {int, float}, default None
If this is not None, then observations are dropped from the middle
part of the sorted series. If 0<drop<1 then drop is interpreted
as a fraction of the number of observations to be dropped.
Note: Currently, observations are dropped between split and
split+drop, where split and drop are the indices (given by rounding
if specified as fraction). The first sample is [0:split], the
second sample is [split+drop:]
alternative : {"increasing", "decreasing", "two-sided"}
The default is increasing. This specifies the alternative for the
p-value calculation.
store : bool, default False
Flag indicating to return the regression results
Returns
-------
fval : float
value of the F-statistic
pval : float
p-value of the hypothesis that the variance in one subsample is
larger than in the other subsample
ordering : str
The ordering used in the alternative.
res_store : ResultsStore, optional
Storage for the intermediate and final results that are calculated
Notes
-----
The null hypothesis is that the variance in the two sub-samples is the
same. The alternative hypothesis can be increasing, i.e. the variance
in the second sample is larger than in the first, or decreasing or
two-sided.
Results are identical to R, but the drop option is defined differently.
(sorting by idx not tested yet)
"""
x = np.asarray(x)
y = np.asarray(y) # **2
nobs, nvars = x.shape
if split is None:
split = nobs // 2
elif (0 < split) and (split < 1):
split = int(nobs * split)
if drop is None:
start2 = split
elif (0 < drop) and (drop < 1):
start2 = split + int(nobs * drop)
else:
start2 = split + drop
if idx is not None:
xsortind = np.argsort(x[:, idx])
y = y[xsortind]
x = x[xsortind, :]
resols1 = OLS(y[:split], x[:split]).fit()
resols2 = OLS(y[start2:], x[start2:]).fit()
fval = resols2.mse_resid / resols1.mse_resid
# if fval>1:
if alternative.lower() in ["i", "inc", "increasing"]:
fpval = stats.f.sf(fval, resols1.df_resid, resols2.df_resid)
ordering = "increasing"
elif alternative.lower() in ["d", "dec", "decreasing"]:
fpval = stats.f.sf(1. / fval, resols2.df_resid, resols1.df_resid)
ordering = "decreasing"
elif alternative.lower() in ["2", "2-sided", "two-sided"]:
fpval_sm = stats.f.cdf(fval, resols2.df_resid, resols1.df_resid)
fpval_la = stats.f.sf(fval, resols2.df_resid, resols1.df_resid)
fpval = 2 * min(fpval_sm, fpval_la)
ordering = "two-sided"
else:
raise ValueError("invalid alternative")
if store:
res = ResultsStore()
res.__doc__ = "Test Results for Goldfeld-Quandt test of " \
"heterogeneity"
res.fval = fval
res.fpval = fpval
res.df_fval = (resols2.df_resid, resols1.df_resid)
res.resols1 = resols1
res.resols2 = resols2
res.ordering = ordering
res.split = split
res._str = """\
The Goldfeld-Quandt test for null hypothesis that the variance in the second
subsample is {} than in the first subsample:
F-statistic ={:8.4f} and p-value ={:8.4f}""".format(ordering, fval, fpval)
return fval, fpval, ordering, res
return fval, fpval, ordering | Goldfeld-Quandt homoskedasticity test. | het_goldfeldquandt | python | statsmodels/statsmodels | statsmodels/stats/diagnostic.py | https://github.com/statsmodels/statsmodels/blob/master/statsmodels/stats/diagnostic.py | BSD-3-Clause |
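The Goldfeld-Quandt computation above reduces to two subsample OLS fits and an F-ratio of their residual mean squares. A self-contained numpy/scipy sketch for the "increasing" alternative — function name and simulated data are illustrative only:

```python
import numpy as np
from scipy import stats

def goldfeld_quandt_sketch(y, x, split):
    # Fit OLS on each subsample and compare residual variances.
    def mse_resid(yy, xx):
        beta, *_ = np.linalg.lstsq(xx, yy, rcond=None)
        resid = yy - xx @ beta
        df = len(yy) - xx.shape[1]
        return resid @ resid / df, df

    mse1, df1 = mse_resid(y[:split], x[:split])
    mse2, df2 = mse_resid(y[split:], x[split:])
    fval = mse2 / mse1                        # alternative: "increasing"
    return fval, stats.f.sf(fval, df2, df1)

# Second half of the sample has three times the error standard deviation
rng = np.random.default_rng(1)
n = 400
x = np.column_stack([np.ones(n), rng.standard_normal(n)])
e = np.r_[rng.standard_normal(n // 2), 3.0 * rng.standard_normal(n // 2)]
y = x @ [1.0, 2.0] + e
fval, pval = goldfeld_quandt_sketch(y, x, n // 2)
```

With a true variance ratio of 9 the test rejects decisively; under homoskedasticity the p-value would be roughly uniform.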
def linear_harvey_collier(res, order_by=None, skip=None):
"""
Harvey Collier test for linearity
The Null hypothesis is that the regression is correctly modeled as linear.
Parameters
----------
res : RegressionResults
A results instance from a linear regression.
order_by : array_like, default None
Integer array specifying the order of the residuals. If not provided,
the order of the residuals is not changed. If provided, must have
the same number of observations as the endogenous variable.
skip : int, default None
The number of observations to use for initial OLS, if None then skip is
set equal to the number of regressors (columns in exog).
Returns
-------
tvalue : float
The test statistic, based on ttest_1sample.
pvalue : float
The pvalue of the test.
See Also
--------
statsmodels.stats.diagnostic.recursive_olsresiduals
Recursive OLS residual calculation used in the test.
Notes
-----
This test is a t-test that the mean of the recursive ols residuals is zero.
Calculating the recursive residuals might take some time for large samples.
"""
# I think this has different ddof than
# B.H. Baltagi, Econometrics, 2011, chapter 8
# but it matches Gretl and R:lmtest, pvalue at decimal=13
rr = recursive_olsresiduals(res, skip=skip, alpha=0.95, order_by=order_by)
return stats.ttest_1samp(rr[3][3:], 0) | Harvey Collier test for linearity | linear_harvey_collier | python | statsmodels/statsmodels | statsmodels/stats/diagnostic.py | https://github.com/statsmodels/statsmodels/blob/master/statsmodels/stats/diagnostic.py | BSD-3-Clause |
def linear_rainbow(res, frac=0.5, order_by=None, use_distance=False,
center=None):
"""
Rainbow test for linearity
The null hypothesis is the fit of the model using full sample is the same
as using a central subset. The alternative is that the fits are different.
The rainbow test has power against many different forms of nonlinearity.
Parameters
----------
res : RegressionResults
A results instance from a linear regression.
frac : float, default 0.5
The fraction of the data to include in the center model.
order_by : {ndarray, str, List[str]}, default None
If an ndarray, the values in the array are used to sort the
observations. If a string or a list of strings, these are interpreted
as column name(s) which are then used to lexicographically sort the
data.
use_distance : bool, default False
Flag indicating whether data should be ordered by the Mahalanobis
distance to the center.
center : {float, int}, default None
If a float, the value must be in [0, 1] and the center is center *
nobs of the ordered data. If an integer, must be in [0, nobs) and
is interpreted as the observation of the ordered data to use.
Returns
-------
fstat : float
The test statistic based on the F test.
pvalue : float
The pvalue of the test.
Notes
-----
This test assumes residuals are homoskedastic and may reject a correct
linear specification if the residuals are heteroskedastic.
"""
if not isinstance(res, RegressionResultsWrapper):
raise TypeError("res must be a results instance from a linear model.")
frac = float_like(frac, "frac")
use_distance = bool_like(use_distance, "use_distance")
nobs = res.nobs
endog = res.model.endog
exog = res.model.exog
if order_by is not None and use_distance:
raise ValueError("order_by and use_distance cannot be simultaneously "
"used.")
if order_by is not None:
if isinstance(order_by, np.ndarray):
order_by = array_like(order_by, "order_by", ndim=1, dtype="int")
else:
if isinstance(order_by, str):
order_by = [order_by]
try:
cols = res.model.data.orig_exog[order_by].copy()
except (IndexError, KeyError):
raise TypeError("order_by must contain valid column names "
"from the exog data used to construct res, "
"and exog must be a pandas DataFrame.")
name = "__index__"
while name in cols:
name += '_'
cols[name] = np.arange(cols.shape[0])
cols = cols.sort_values(order_by)
order_by = np.asarray(cols[name])
endog = endog[order_by]
exog = exog[order_by]
if use_distance:
center = int(nobs) // 2 if center is None else center
if isinstance(center, float):
if not 0.0 <= center <= 1.0:
raise ValueError("center must be in [0, 1] when a float.")
center = int(center * (nobs-1))
else:
center = int_like(center, "center")
if not 0 < center < nobs - 1:
raise ValueError("center must be in [0, nobs) when an int.")
center_obs = exog[center:center+1]
from scipy.spatial.distance import cdist
try:
err = exog - center_obs
vi = np.linalg.inv(err.T @ err / nobs)
except np.linalg.LinAlgError:
err = exog - exog.mean(0)
vi = np.linalg.inv(err.T @ err / nobs)
dist = cdist(exog, center_obs, metric='mahalanobis', VI=vi)
idx = np.argsort(dist.ravel())
endog = endog[idx]
exog = exog[idx]
lowidx = np.ceil(0.5 * (1 - frac) * nobs).astype(int)
uppidx = np.floor(lowidx + frac * nobs).astype(int)
if uppidx - lowidx < exog.shape[1]:
raise ValueError("frac is too small to perform test. frac * nobs "
"must be greater than the number of exogenous "
"variables in the model.")
mi_sl = slice(lowidx, uppidx)
res_mi = OLS(endog[mi_sl], exog[mi_sl]).fit()
nobs_mi = res_mi.model.endog.shape[0]
ss_mi = res_mi.ssr
ss = res.ssr
fstat = (ss - ss_mi) / (nobs - nobs_mi) / ss_mi * res_mi.df_resid
pval = stats.f.sf(fstat, nobs - nobs_mi, res_mi.df_resid)
return fstat, pval | Rainbow test for linearity | linear_rainbow | python | statsmodels/statsmodels | statsmodels/stats/diagnostic.py | https://github.com/statsmodels/statsmodels/blob/master/statsmodels/stats/diagnostic.py | BSD-3-Clause |
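Stripped of the ordering options, the rainbow test compares the SSR of the full-sample fit with the SSR of a fit on a central subset. A numpy/scipy sketch of that core F-statistic, with illustrative names and simulated data:

```python
import numpy as np
from scipy import stats

def rainbow_sketch(y, x, frac=0.5):
    # SSR helper via least squares
    def ssr(yy, xx):
        beta, *_ = np.linalg.lstsq(xx, yy, rcond=None)
        resid = yy - xx @ beta
        return resid @ resid

    nobs, nvar = x.shape
    low = int(np.ceil(0.5 * (1 - frac) * nobs))
    upp = int(np.floor(low + frac * nobs))
    ss_mi = ssr(y[low:upp], x[low:upp])       # center-subset fit
    ss = ssr(y, x)                            # full-sample fit
    nobs_mi = upp - low
    df_num = nobs - nobs_mi
    df_den = nobs_mi - nvar
    fstat = (ss - ss_mi) / df_num / (ss_mi / df_den)
    return fstat, stats.f.sf(fstat, df_num, df_den)

# Data generated from a genuinely linear model (the null is true)
rng = np.random.default_rng(2)
x = np.column_stack([np.ones(200), np.linspace(-3, 3, 200)])
y = 1.0 + 2.0 * x[:, 1] + rng.standard_normal(200)
fstat, pval = rainbow_sketch(y, x)
```

Since the full-fit SSR is never below the center-fit SSR on the center subset, the statistic is non-negative; under the null its p-value is approximately uniform.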
def linear_lm(resid, exog, func=None):
"""
Lagrange multiplier test for linearity against functional alternative
# TODO: Remove the restriction
limitations: Currently assumes that the first column of exog is the constant.
Currently it does not check whether the transformed variables contain NaNs,
for example log of negative number.
Parameters
----------
resid : ndarray
residuals of a regression
exog : ndarray
exogenous variables for which linearity is tested
func : callable, default None
If func is None, then squares are used. func needs to take an array
of exog and return an array of transformed variables.
Returns
-------
lm : float
Lagrange multiplier test statistic
lm_pval : float
p-value of the Lagrange multiplier test
ftest : ContrastResult instance
the results from the F test variant of this test
Notes
-----
Written to match Gretl's linearity test. The test runs an auxiliary
regression of the residuals on the combined original and transformed
regressors. The Null hypothesis is that the linear specification is
correct.
"""
if func is None:
def func(x):
return np.power(x, 2)
exog = np.asarray(exog)
exog_aux = np.column_stack((exog, func(exog[:, 1:])))
nobs, k_vars = exog.shape
ls = OLS(resid, exog_aux).fit()
ftest = ls.f_test(np.eye(k_vars - 1, k_vars * 2 - 1, k_vars))
lm = nobs * ls.rsquared
lm_pval = stats.chi2.sf(lm, k_vars - 1)
return lm, lm_pval, ftest | Lagrange multiplier test for linearity against functional alternative | linear_lm | python | statsmodels/statsmodels | statsmodels/stats/diagnostic.py | https://github.com/statsmodels/statsmodels/blob/master/statsmodels/stats/diagnostic.py | BSD-3-Clause |
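The auxiliary regression behind linear_lm — residuals on the original regressors plus their squares, with LM = n * R² — can be sketched with numpy/scipy alone; the helper name and the simulated quadratic data are assumptions for the demo:

```python
import numpy as np
from scipy import stats

def linear_lm_sketch(resid, exog):
    # Augment exog with squares of all non-constant columns
    # (column 0 is assumed to be the constant, as in the source).
    aux = np.column_stack([exog, exog[:, 1:] ** 2])
    beta, *_ = np.linalg.lstsq(aux, resid, rcond=None)
    ss_res = ((resid - aux @ beta) ** 2).sum()
    ss_tot = ((resid - resid.mean()) ** 2).sum()
    r2 = 1.0 - ss_res / ss_tot
    nobs, k = exog.shape
    lm = nobs * r2
    return lm, stats.chi2.sf(lm, k - 1)

# True relationship is quadratic, but we fit a linear model first
rng = np.random.default_rng(3)
x = np.column_stack([np.ones(300), rng.standard_normal(300)])
y = 1.0 + x[:, 1] + 0.5 * x[:, 1] ** 2 + 0.5 * rng.standard_normal(300)
beta, *_ = np.linalg.lstsq(x, y, rcond=None)
resid = y - x @ beta
lm, pval = linear_lm_sketch(resid, x)
```

Because the omitted quadratic term shows up in the residuals, the auxiliary R² is large and the test rejects the linear specification.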
def spec_white(resid, exog):
"""
White's Two-Moment Specification Test
Parameters
----------
resid : array_like
OLS residuals.
exog : array_like
OLS design matrix.
Returns
-------
stat : float
The test statistic.
pval : float
A chi-square p-value for test statistic.
dof : int
The degrees of freedom.
See Also
--------
het_white
White's test for heteroskedasticity.
Notes
-----
Implements the two-moment specification test described by White's
Theorem 2 (1980, p. 823) which compares the standard OLS covariance
estimator with White's heteroscedasticity-consistent estimator. The
test statistic is shown to be chi-square distributed.
Null hypothesis is homoscedastic and correctly specified.
Assumes the OLS design matrix contains an intercept term and at least
one variable. The intercept is removed to calculate the test statistic.
Interaction terms (squares and crosses of OLS regressors) are added to
the design matrix to calculate the test statistic.
Degrees-of-freedom (full rank) = nvar + nvar * (nvar + 1) / 2
Linearly dependent columns are removed to avoid singular matrix error.
References
----------
.. [*] White, H. (1980). A heteroskedasticity-consistent covariance matrix
estimator and a direct test for heteroscedasticity. Econometrica, 48:
817-838.
"""
x = array_like(exog, "exog", ndim=2)
e = array_like(resid, "resid", ndim=1)
if x.shape[1] < 2 or not np.any(np.ptp(x, 0) == 0.0):
raise ValueError("White's specification test requires at least two "
"columns where one is a constant.")
# add interaction terms
i0, i1 = np.triu_indices(x.shape[1])
exog = np.delete(x[:, i0] * x[:, i1], 0, 1)
# collinearity check - see _fit_collinear
atol = 1e-14
rtol = 1e-13
tol = atol + rtol * exog.var(0)
r = np.linalg.qr(exog, mode="r")
mask = np.abs(r.diagonal()) < np.sqrt(tol)
exog = exog[:, np.where(~mask)[0]]
# calculate test statistic
sqe = e * e
sqmndevs = sqe - np.mean(sqe)
d = np.dot(exog.T, sqmndevs)
devx = exog - np.mean(exog, axis=0)
devx *= sqmndevs[:, None]
b = devx.T.dot(devx)
stat = d.dot(np.linalg.solve(b, d))
# chi-square test
dof = devx.shape[1]
pval = stats.chi2.sf(stat, dof)
return stat, pval, dof | White's Two-Moment Specification Test | spec_white | python | statsmodels/statsmodels | statsmodels/stats/diagnostic.py | https://github.com/statsmodels/statsmodels/blob/master/statsmodels/stats/diagnostic.py | BSD-3-Clause |
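The interaction-term construction in spec_white is compact enough to be easy to misread: `np.triu_indices` enumerates all column pairs (i, j) with i <= j, the constant column reproduces the levels when multiplied, and deleting column 0 drops the constant itself. A small self-contained check of the resulting column count against the stated full-rank degrees of freedom:

```python
import numpy as np

# exog with a constant and two regressors (illustrative data)
rng = np.random.default_rng(4)
x = np.column_stack([np.ones(10), rng.standard_normal((10, 2))])

# Products x_i * x_j for i <= j; the constant column reproduces the
# levels, and the constant itself (column 0 of the products) is dropped.
i0, i1 = np.triu_indices(x.shape[1])
aux = np.delete(x[:, i0] * x[:, i1], 0, 1)

nvar = x.shape[1] - 1            # regressors excluding the constant
dof_full_rank = nvar + nvar * (nvar + 1) // 2
```

For two regressors this gives 2 levels + 2 squares + 1 cross-product = 5 auxiliary columns, matching `nvar + nvar * (nvar + 1) / 2` from the docstring.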
def recursive_olsresiduals(res, skip=None, lamda=0.0, alpha=0.95,
order_by=None):
"""
Calculate recursive ols with residuals and Cusum test statistic
Parameters
----------
res : RegressionResults
Results from estimation of a regression model.
skip : int, default None
The number of observations to use for initial OLS, if None then skip is
set equal to the number of regressors (columns in exog).
lamda : float, default 0.0
The weight for Ridge correction to initial (X'X)^{-1}.
alpha : {0.90, 0.95, 0.99}, default 0.95
Confidence level of test, currently only two values supported,
used for confidence interval in cusum graph.
order_by : array_like, default None
Integer array specifying the order of the residuals. If not provided,
the order of the residuals is not changed. If provided, must have
the same number of observations as the endogenous variable.
Returns
-------
rresid : ndarray
The recursive ols residuals.
rparams : ndarray
The recursive ols parameter estimates.
rypred : ndarray
The recursive prediction of endogenous variable.
rresid_standardized : ndarray
The recursive residuals standardized so that N(0,sigma2) distributed,
where sigma2 is the error variance.
rresid_scaled : ndarray
The recursive residuals normalize so that N(0,1) distributed.
rcusum : ndarray
The cumulative residuals for cusum test.
rcusumci : ndarray
The confidence interval for cusum test using a size of alpha.
Notes
-----
It produces the same recursive residuals as the other version. This version
updates the inverse of the X'X matrix and does not require matrix inversion
during updating. It looks efficient, but has not been timed.
Confidence interval in Greene and Brown, Durbin and Evans is the same as
in Ploberger after a little bit of algebra.
References
----------
jplv to check formulas, follows Harvey
BigJudge 5.5.2b for formula for inverse(X'X) updating
Greene section 7.5.2
Brown, R. L., J. Durbin, and J. M. Evans. “Techniques for Testing the
Constancy of Regression Relationships over Time.”
Journal of the Royal Statistical Society. Series B (Methodological) 37,
no. 2 (1975): 149-192.
"""
if not isinstance(res, RegressionResultsWrapper):
raise TypeError("res a regression results instance")
y = res.model.endog
x = res.model.exog
order_by = array_like(order_by, "order_by", dtype="int", optional=True,
ndim=1, shape=(y.shape[0],))
# initialize with skip observations
if order_by is not None:
x = x[order_by]
y = y[order_by]
nobs, nvars = x.shape
if skip is None:
skip = nvars
rparams = np.nan * np.zeros((nobs, nvars))
rresid = np.nan * np.zeros(nobs)
rypred = np.nan * np.zeros(nobs)
rvarraw = np.nan * np.zeros(nobs)
x0 = x[:skip]
if np.linalg.matrix_rank(x0) < x0.shape[1]:
err_msg = """\
The initial regressor matrix, x[:skip], is singular. You must use a value of
skip large enough to ensure that the first OLS estimator is well-defined.
"""
raise ValueError(err_msg)
y0 = y[:skip]
# add Ridge to start (not in jplv)
xtxi = np.linalg.inv(np.dot(x0.T, x0) + lamda * np.eye(nvars))
xty = np.dot(x0.T, y0) # xi * y #np.dot(xi, y)
beta = np.dot(xtxi, xty)
rparams[skip - 1] = beta
yipred = np.dot(x[skip - 1], beta)
rypred[skip - 1] = yipred
rresid[skip - 1] = y[skip - 1] - yipred
rvarraw[skip - 1] = 1 + np.dot(x[skip - 1], np.dot(xtxi, x[skip - 1]))
for i in range(skip, nobs):
xi = x[i:i + 1, :]
yi = y[i]
# get prediction error with previous beta
yipred = np.dot(xi, beta)
rypred[i] = np.squeeze(yipred)
residi = yi - yipred
rresid[i] = np.squeeze(residi)
# update beta and inverse(X'X)
tmp = np.dot(xtxi, xi.T)
ft = 1 + np.dot(xi, tmp)
xtxi = xtxi - np.dot(tmp, tmp.T) / ft # BigJudge equ 5.5.15
beta = beta + (tmp * residi / ft).ravel() # BigJudge equ 5.5.14
rparams[i] = beta
rvarraw[i] = np.squeeze(ft)
rresid_scaled = rresid / np.sqrt(rvarraw) # N(0,sigma2) distributed
nrr = nobs - skip
# sigma2 = rresid_scaled[skip-1:].var(ddof=1) #var or sum of squares ?
# Greene has var, jplv and Ploberger have sum of squares (Ass.:mean=0)
# Gretl uses: by reverse engineering matching their numbers
sigma2 = rresid_scaled[skip:].var(ddof=1)
rresid_standardized = rresid_scaled / np.sqrt(sigma2) # N(0,1) distributed
rcusum = rresid_standardized[skip - 1:].cumsum()
# confidence interval points in Greene p136 looks strange. Cleared up
# this assumes sum of independent standard normal, which does not take into
# account that we make many tests at the same time
if alpha == 0.90:
a = 0.850
elif alpha == 0.95:
a = 0.948
elif alpha == 0.99:
a = 1.143
else:
raise ValueError("alpha can only be 0.9, 0.95 or 0.99")
# following taken from Ploberger,
# crit = a * np.sqrt(nrr)
rcusumci = (a * np.sqrt(nrr) + 2 * a * np.arange(0, nobs - skip) / np.sqrt(
nrr)) * np.array([[-1.], [+1.]])
return (rresid, rparams, rypred, rresid_standardized, rresid_scaled,
rcusum, rcusumci) | Calculate recursive ols with residuals and Cusum test statistic | recursive_olsresiduals | python | statsmodels/statsmodels | statsmodels/stats/diagnostic.py | https://github.com/statsmodels/statsmodels/blob/master/statsmodels/stats/diagnostic.py | BSD-3-Clause |
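The recursive residuals above are one-step-ahead prediction errors from an expanding-window OLS. A naive sketch that simply refits at every step reproduces the idea without the rank-one (X'X)^{-1} update used in the source; names and data are illustrative:

```python
import numpy as np

def recursive_resid_sketch(y, x, skip=None):
    # One-step-ahead prediction errors from expanding-window OLS.
    # The library version updates inverse(X'X) via BigJudge eq. 5.5.15
    # instead of refitting; the residuals are the same up to rounding.
    nobs, nvar = x.shape
    skip = nvar if skip is None else skip
    rresid = np.full(nobs, np.nan)
    for i in range(skip, nobs):
        beta, *_ = np.linalg.lstsq(x[:i], y[:i], rcond=None)
        rresid[i] = y[i] - x[i] @ beta
    return rresid

rng = np.random.default_rng(5)
x = np.column_stack([np.ones(50), rng.standard_normal(50)])
y = x @ [1.0, 2.0] + 0.1 * rng.standard_normal(50)
rr = recursive_resid_sketch(y, x)
```

The first `skip` entries stay NaN because no prediction exists before the initial window is full; this refit-per-step version is O(n) least-squares solves, which is what the update formula in the source avoids.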
def breaks_hansen(olsresults):
"""
Test for model stability, breaks in parameters for ols, Hansen 1992
Parameters
----------
olsresults : RegressionResults
Results from estimation of a regression model.
Returns
-------
teststat : float
Hansen's test statistic.
crit : ndarray
The critical values at alpha=0.95 for different nvars.
Notes
-----
looks good in example, maybe not very powerful for small changes in
parameters
According to Greene, distribution of test statistics depends on nvar but
not on nobs.
Test statistic is verified against R:strucchange
References
----------
Greene section 7.5.1, notation follows Greene
"""
x = olsresults.model.exog
resid = array_like(olsresults.resid, "resid", shape=(x.shape[0], 1))
nobs, nvars = x.shape
resid2 = resid ** 2
ft = np.c_[x * resid, (resid2 - resid2.mean())]  # resid has shape (nobs, 1)
score = ft.cumsum(0)
f = nobs * (ft[:, :, None] * ft[:, None, :]).sum(0)
s = (score[:, :, None] * score[:, None, :]).sum(0)
h = np.trace(np.dot(np.linalg.inv(f), s))
crit95 = np.array([(2, 1.01), (6, 1.9), (15, 3.75), (19, 4.52)],
dtype=[("nobs", int), ("crit", float)])
# TODO: get critical values from Bruce Hansen's 1992 paper
return h, crit95 | Test for model stability, breaks in parameters for ols, Hansen 1992 | breaks_hansen | python | statsmodels/statsmodels | statsmodels/stats/diagnostic.py | https://github.com/statsmodels/statsmodels/blob/master/statsmodels/stats/diagnostic.py | BSD-3-Clause |
def breaks_cusumolsresid(resid, ddof=0):
"""
Cusum test for parameter stability based on ols residuals.
Parameters
----------
resid : ndarray
An array of residuals from an OLS estimation.
ddof : int
The number of parameters in the OLS estimation, used as degrees
of freedom correction for error variance.
Returns
-------
sup_b : float
The test statistic, maximum of absolute value of scaled cumulative OLS
residuals.
pval : float
Probability of observing the data under the null hypothesis of no
structural change, based on asymptotic distribution which is a Brownian
Bridge
crit: list
The tabulated critical values, for alpha = 1%, 5% and 10%.
Notes
-----
Tested against R:strucchange.
Not clear: Assumption 2 in Ploberger, Kramer assumes that exog x have
asymptotically zero mean, x.mean(0) = [1, 0, 0, ..., 0]
Is this really necessary? I do not see how it can affect the test statistic
under the null. It does make a difference under the alternative.
Also, the asymptotic distribution of test statistic depends on this.
From examples it looks like there is little power for standard cusum if
exog (other than constant) have mean zero.
References
----------
Ploberger, Werner, and Walter Kramer. “The Cusum Test with OLS Residuals.”
Econometrica 60, no. 2 (March 1992): 271-285.
"""
resid = np.asarray(resid).ravel()
nobs = len(resid)
nobssigma2 = (resid ** 2).sum()
if ddof > 0:
nobssigma2 = nobssigma2 / (nobs - ddof) * nobs
# b is asymptotically a Brownian Bridge
b = resid.cumsum() / np.sqrt(nobssigma2) # use T*sigma directly
# asymptotically distributed as standard Brownian Bridge
sup_b = np.abs(b).max()
crit = [(1, 1.63), (5, 1.36), (10, 1.22)]
# Note stats.kstwobign.isf(0.1) is distribution of sup.abs of Brownian
# Bridge
# >>> stats.kstwobign.isf([0.01,0.05,0.1])
# array([ 1.62762361, 1.35809864, 1.22384787])
pval = stats.kstwobign.sf(sup_b)
return sup_b, pval, crit | Cusum test for parameter stability based on ols residuals. | breaks_cusumolsresid | python | statsmodels/statsmodels | statsmodels/stats/diagnostic.py | https://github.com/statsmodels/statsmodels/blob/master/statsmodels/stats/diagnostic.py | BSD-3-Clause |
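The source comment notes that the tabulated critical values are quantiles of the sup of the absolute value of a Brownian bridge, available in scipy as `kstwobign`. A short runnable check of that claim plus a quick demo of the scaled-cusum statistic on simulated mean-zero residuals:

```python
import numpy as np
from scipy import stats

# Critical values [(1%, 1.63), (5%, 1.36), (10%, 1.22)] from the source
# should match the inverse survival function of kstwobign:
crit = stats.kstwobign.isf([0.01, 0.05, 0.10])

# Demo of the statistic: scaled cumulative sum of residuals.
# OLS residuals from a model with a constant sum to zero, so we center.
rng = np.random.default_rng(6)
e = rng.standard_normal(500)
e -= e.mean()
b = e.cumsum() / np.sqrt((e ** 2).sum())
sup_b = np.abs(b).max()
pval = stats.kstwobign.sf(sup_b)
```

Under the null of no structural change the p-value is approximately uniform; a break in the parameters would push the cumulative sum away from zero and shrink it.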
def variance(self, decomp_type, n=5000, conf=0.99):
"""
A helper function to calculate the variance/std. Used to keep
the decomposition functions cleaner
"""
if self.submitted_n is not None:
n = self.submitted_n
if self.submitted_conf is not None:
conf = self.submitted_conf
if self.submitted_weight is not None:
submitted_weight = [
self.submitted_weight,
1 - self.submitted_weight,
]
bi = self.bi
bifurcate = self.bifurcate
endow_eff_list = []
coef_eff_list = []
int_eff_list = []
exp_eff_list = []
unexp_eff_list = []
for _ in range(0, n):
endog = np.column_stack((self.bi_col, self.endog))
exog = self.exog
amount = len(endog)
samples = np.random.randint(0, high=amount, size=amount)
endog = endog[samples]
exog = exog[samples]
neumark = np.delete(exog, bifurcate, axis=1)
exog_f = exog[np.where(exog[:, bifurcate] == bi[0])]
exog_s = exog[np.where(exog[:, bifurcate] == bi[1])]
endog_f = endog[np.where(endog[:, 0] == bi[0])]
endog_s = endog[np.where(endog[:, 0] == bi[1])]
exog_f = np.delete(exog_f, bifurcate, axis=1)
exog_s = np.delete(exog_s, bifurcate, axis=1)
endog_f = endog_f[:, 1]
endog_s = endog_s[:, 1]
endog = endog[:, 1]
two_fold_type = self.two_fold_type
if self.hasconst is False:
exog_f = add_constant(exog_f, prepend=False)
exog_s = add_constant(exog_s, prepend=False)
exog = add_constant(exog, prepend=False)
neumark = add_constant(neumark, prepend=False)
_f_model = OLS(endog_f, exog_f).fit(
cov_type=self.cov_type, cov_kwds=self.cov_kwds
)
_s_model = OLS(endog_s, exog_s).fit(
cov_type=self.cov_type, cov_kwds=self.cov_kwds
)
exog_f_mean = np.mean(exog_f, axis=0)
exog_s_mean = np.mean(exog_s, axis=0)
if decomp_type == 3:
endow_eff = (exog_f_mean - exog_s_mean) @ _s_model.params
coef_eff = exog_s_mean @ (_f_model.params - _s_model.params)
int_eff = (exog_f_mean - exog_s_mean) @ (
_f_model.params - _s_model.params
)
endow_eff_list.append(endow_eff)
coef_eff_list.append(coef_eff)
int_eff_list.append(int_eff)
elif decomp_type == 2:
len_f = len(exog_f)
len_s = len(exog_s)
if two_fold_type == "cotton":
t_params = (len_f / (len_f + len_s) * _f_model.params) + (
len_s / (len_f + len_s) * _s_model.params
)
elif two_fold_type == "reimers":
t_params = 0.5 * (_f_model.params + _s_model.params)
elif two_fold_type == "self_submitted":
t_params = (
submitted_weight[0] * _f_model.params
+ submitted_weight[1] * _s_model.params
)
elif two_fold_type == "nuemark":
_t_model = OLS(endog, neumark).fit(
cov_type=self.cov_type, cov_kwds=self.cov_kwds
)
t_params = _t_model.params
else:
_t_model = OLS(endog, exog).fit(
cov_type=self.cov_type, cov_kwds=self.cov_kwds
)
t_params = np.delete(_t_model.params, bifurcate)
unexplained = (exog_f_mean @ (_f_model.params - t_params)) + (
exog_s_mean @ (t_params - _s_model.params)
)
explained = (exog_f_mean - exog_s_mean) @ t_params
unexp_eff_list.append(unexplained)
exp_eff_list.append(explained)
high, low = int(n * conf), int(n * (1 - conf))
if decomp_type == 3:
return [
np.std(np.sort(endow_eff_list)[low:high]),
np.std(np.sort(coef_eff_list)[low:high]),
np.std(np.sort(int_eff_list)[low:high]),
]
elif decomp_type == 2:
return [
np.std(np.sort(unexp_eff_list)[low:high]),
np.std(np.sort(exp_eff_list)[low:high]),
        ] | A helper function to calculate the variance/std. | variance | python | statsmodels/statsmodels | statsmodels/stats/oaxaca.py | https://github.com/statsmodels/statsmodels/blob/master/statsmodels/stats/oaxaca.py | BSD-3-Clause |
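The resampling loop in `variance` reduces to a small schematic: draw bootstrap samples with replacement, recompute the statistic, sort the replicates, drop the tails outside `conf`, and take the std. This sketch bootstraps a plain mean rather than the decomposition effects; `bootstrap_std` is a made-up name:

```python
import numpy as np

def bootstrap_std(stat_fn, data, n=2000, conf=0.99, seed=0):
    # resample with replacement, recompute the statistic, then take the
    # std of the sorted replicates with the outer (1 - conf) tails cut,
    # mirroring the high/low slicing used in ``variance``
    rng = np.random.RandomState(seed)
    data = np.asarray(data)
    amount = len(data)
    reps = []
    for _ in range(n):
        samples = rng.randint(0, high=amount, size=amount)
        reps.append(stat_fn(data[samples]))
    high, low = int(n * conf), int(n * (1 - conf))
    return np.std(np.sort(reps)[low:high])

x = np.random.RandomState(1).standard_normal(500)
se = bootstrap_std(np.mean, x)   # close to x.std() / sqrt(500)
```

The tail trimming guards the std against a few extreme bootstrap replicates, which is the stated reason a `conf` of exactly one is discouraged.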
def three_fold(self, std=False, n=None, conf=None):
"""
        Calculates the three-fold Oaxaca-Blinder decomposition.

        Parameters
        ----------
        std: boolean, optional
            If true, bootstrapped standard errors will be calculated.
        n: int, optional
            The number of iterations used to calculate the bootstrapped
            standard errors. This defaults to 5000.
        conf: float, optional
            The confidence level required for the standard error
            calculation. Defaults to .99, but can be anything less than
            or equal to one. One is heavily discouraged, due to extreme
            outliers inflating the variance.
Returns
-------
OaxacaResults
A results container for the three-fold decomposition.
"""
self.submitted_n = n
self.submitted_conf = conf
self.submitted_weight = None
std_val = None
self.endow_eff = (
self.exog_f_mean - self.exog_s_mean
) @ self._s_model.params
self.coef_eff = self.exog_s_mean @ (
self._f_model.params - self._s_model.params
)
self.int_eff = (self.exog_f_mean - self.exog_s_mean) @ (
self._f_model.params - self._s_model.params
)
if std is True:
std_val = self.variance(3)
return OaxacaResults(
(self.endow_eff, self.coef_eff, self.int_eff, self.gap),
3,
std_val=std_val,
        ) | Calculates the three-fold Oaxaca-Blinder decomposition. | three_fold | python | statsmodels/statsmodels | statsmodels/stats/oaxaca.py | https://github.com/statsmodels/statsmodels/blob/master/statsmodels/stats/oaxaca.py | BSD-3-Clause |
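The arithmetic inside `three_fold` can be written standalone: fit each group's regression, then combine group means and coefficients into the endowment, coefficient, and interaction effects, which sum to the raw gap. This sketch uses `numpy.linalg.lstsq` in place of statsmodels OLS and assumes a constant column is already included; `three_fold_effects` and the toy data are made up for illustration:

```python
import numpy as np

def three_fold_effects(exog_f, endog_f, exog_s, endog_s):
    # endowment: gap in characteristics, valued at group s coefficients
    # coefficient: gap in coefficients, valued at group s characteristics
    # interaction: the cross term of both gaps
    params_f = np.linalg.lstsq(exog_f, endog_f, rcond=None)[0]
    params_s = np.linalg.lstsq(exog_s, endog_s, rcond=None)[0]
    mean_f, mean_s = exog_f.mean(0), exog_s.mean(0)
    endow = (mean_f - mean_s) @ params_s
    coef = mean_s @ (params_f - params_s)
    inter = (mean_f - mean_s) @ (params_f - params_s)
    return endow, coef, inter

rng = np.random.RandomState(2)
xf = np.column_stack([np.ones(80), rng.standard_normal(80)])
xs = np.column_stack([np.ones(60), rng.standard_normal(60) + 0.5])
yf = xf @ np.array([1.0, 2.0]) + rng.standard_normal(80)
ys = xs @ np.array([0.5, 1.5]) + rng.standard_normal(60)
endow, coef, inter = three_fold_effects(xf, yf, xs, ys)
gap = yf.mean() - ys.mean()
```

Because each regression contains a constant, the fitted value at the group mean equals the group's mean outcome, so the three effects add up to the gap exactly.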
def two_fold(
self,
std=False,
two_fold_type="pooled",
submitted_weight=None,
n=None,
conf=None,
):
"""
        Calculates the two-fold or pooled Oaxaca-Blinder decomposition.

        Parameters
        ----------
        std: boolean, optional
            If true, bootstrapped standard errors will be calculated.
        two_fold_type: string, optional
            This parameter allows for the specific calculation of the
            non-discriminatory model. There are five different types
            available at this time: pooled, nuemark, cotton, reimers and
            self_submitted. Pooled is assumed, and if a non-viable
            parameter is given, pooled will be run.
            pooled - This type assumes that the pooled model's parameters
            (a normal regression) form the non-discriminatory model.
            This includes the indicator variable. This is generally
            the best idea. If you have economic justification for
            using others, then use others.
            nuemark - This is similar to the pooled type, but the
            regression is not done including the indicator variable.
            cotton - This type uses the adjustment from Cotton (1988),
            which accounts for the undervaluation of one group causing
            the overvaluation of another. It uses the sample size weights
            for a linear combination of the two model parameters.
            reimers - This type uses a linear combination of the two
            models, with both parameters weighted 50% in the
            non-discriminatory model.
            self_submitted - This allows the user to submit their
            own weights. Please be sure to put the weight of the larger
            mean group only. This should be submitted in the
            submitted_weight variable.
        submitted_weight: int/float, required only for self_submitted
            This is the submitted weight for the larger mean. If the
            weight for the larger mean is p, then the weight for the
            other mean is 1 - p. Only submit the first value.
        n: int, optional
            The number of iterations used to calculate the bootstrapped
            standard errors. This defaults to 5000.
        conf: float, optional
            The confidence level required for the standard error
            calculation. Defaults to .99, but can be anything less than
            or equal to one. One is heavily discouraged, due to extreme
            outliers inflating the variance.

        Returns
        -------
        OaxacaResults
            A results container for the two-fold decomposition.
"""
self.submitted_n = n
self.submitted_conf = conf
std_val = None
self.two_fold_type = two_fold_type
self.submitted_weight = submitted_weight
if two_fold_type == "cotton":
self.t_params = (
self.len_f / (self.len_f + self.len_s) * self._f_model.params
) + (self.len_s / (self.len_f + self.len_s) * self._s_model.params)
elif two_fold_type == "reimers":
self.t_params = 0.5 * (self._f_model.params + self._s_model.params)
elif two_fold_type == "self_submitted":
if submitted_weight is None:
raise ValueError("Please submit weights")
submitted_weight = [submitted_weight, 1 - submitted_weight]
self.t_params = (
submitted_weight[0] * self._f_model.params
+ submitted_weight[1] * self._s_model.params
)
elif two_fold_type == "nuemark":
self._t_model = OLS(self.endog, self.neumark).fit(
cov_type=self.cov_type, cov_kwds=self.cov_kwds
)
self.t_params = self._t_model.params
else:
self._t_model = OLS(self.endog, self.exog).fit(
cov_type=self.cov_type, cov_kwds=self.cov_kwds
)
self.t_params = np.delete(self._t_model.params, self.bifurcate)
self.unexplained = (
self.exog_f_mean @ (self._f_model.params - self.t_params)
) + (self.exog_s_mean @ (self.t_params - self._s_model.params))
self.explained = (self.exog_f_mean - self.exog_s_mean) @ self.t_params
if std is True:
std_val = self.variance(2)
return OaxacaResults(
(self.unexplained, self.explained, self.gap), 2, std_val=std_val
        ) | Calculates the two-fold or pooled Oaxaca-Blinder decomposition. | two_fold | python | statsmodels/statsmodels | statsmodels/stats/oaxaca.py | https://github.com/statsmodels/statsmodels/blob/master/statsmodels/stats/oaxaca.py | BSD-3-Clause |
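For the two-fold decomposition, any reference coefficient vector `t_params` splits the gap exactly into an unexplained and an explained part; the choice of `two_fold_type` only decides where the split falls. A standalone sketch with hypothetical names, using `numpy.linalg.lstsq` instead of statsmodels OLS and assuming a constant is included:

```python
import numpy as np

def two_fold_effects(exog_f, endog_f, exog_s, endog_s, t_params):
    # unexplained: deviations of each group's coefficients from the
    # reference coefficients; explained: characteristics gap valued at
    # the reference coefficients
    params_f = np.linalg.lstsq(exog_f, endog_f, rcond=None)[0]
    params_s = np.linalg.lstsq(exog_s, endog_s, rcond=None)[0]
    mean_f, mean_s = exog_f.mean(0), exog_s.mean(0)
    unexplained = (mean_f @ (params_f - t_params)
                   + mean_s @ (t_params - params_s))
    explained = (mean_f - mean_s) @ t_params
    return unexplained, explained

rng = np.random.RandomState(3)
xf = np.column_stack([np.ones(80), rng.standard_normal(80)])
xs = np.column_stack([np.ones(60), rng.standard_normal(60) + 0.5])
yf = xf @ np.array([1.0, 2.0]) + rng.standard_normal(80)
ys = xs @ np.array([0.5, 1.5]) + rng.standard_normal(60)
pf = np.linalg.lstsq(xf, yf, rcond=None)[0]
ps = np.linalg.lstsq(xs, ys, rcond=None)[0]
t_params = 0.5 * (pf + ps)        # the 'reimers' 50/50 weighting
unexp, expl = two_fold_effects(xf, yf, xs, ys, t_params)
gap = yf.mean() - ys.mean()
```

Expanding the two terms shows the reference coefficients cancel, leaving `mean_f @ params_f - mean_s @ params_s`, which equals the gap whenever both regressions contain a constant.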
def summary(self):
"""
Print a summary table with the Oaxaca-Blinder effects
"""
if self.model_type == 2:
if self.std is None:
print(
dedent(
f"""\
Oaxaca-Blinder Two-fold Effects
Unexplained Effect: {self.params[0]:.5f}
Explained Effect: {self.params[1]:.5f}
Gap: {self.params[2]:.5f}"""
)
)
else:
print(
dedent(
"""\
Oaxaca-Blinder Two-fold Effects
Unexplained Effect: {:.5f}
Unexplained Standard Error: {:.5f}
Explained Effect: {:.5f}
Explained Standard Error: {:.5f}
Gap: {:.5f}""".format(
self.params[0],
self.std[0],
self.params[1],
self.std[1],
self.params[2],
)
)
)
if self.model_type == 3:
if self.std is None:
print(
dedent(
f"""\
Oaxaca-Blinder Three-fold Effects
Endowment Effect: {self.params[0]:.5f}
Coefficient Effect: {self.params[1]:.5f}
Interaction Effect: {self.params[2]:.5f}
Gap: {self.params[3]:.5f}"""
)
)
else:
print(
dedent(
f"""\
Oaxaca-Blinder Three-fold Effects
Endowment Effect: {self.params[0]:.5f}
Endowment Standard Error: {self.std[0]:.5f}
Coefficient Effect: {self.params[1]:.5f}
Coefficient Standard Error: {self.std[1]:.5f}
Interaction Effect: {self.params[2]:.5f}
Interaction Standard Error: {self.std[2]:.5f}
Gap: {self.params[3]:.5f}"""
)
) | Print a summary table with the Oaxaca-Blinder effects | summary | python | statsmodels/statsmodels | statsmodels/stats/oaxaca.py | https://github.com/statsmodels/statsmodels/blob/master/statsmodels/stats/oaxaca.py | BSD-3-Clause |
def trimboth(a, proportiontocut, axis=0):
"""
Slices off a proportion of items from both ends of an array.
Slices off the passed proportion of items from both ends of the passed
array (i.e., with `proportiontocut` = 0.1, slices leftmost 10% **and**
rightmost 10% of scores). You must pre-sort the array if you want
'proper' trimming. Slices off less if proportion results in a
non-integer slice index (i.e., conservatively slices off
`proportiontocut`).
Parameters
----------
a : array_like
Data to trim.
proportiontocut : float or int
Proportion of data to trim at each end.
axis : int or None
Axis along which the observations are trimmed. The default is to trim
along axis=0. If axis is None then the array will be flattened before
trimming.
Returns
-------
out : array-like
Trimmed version of array `a`.
Examples
--------
>>> from scipy import stats
>>> a = np.arange(20)
>>> b = stats.trimboth(a, 0.1)
>>> b.shape
(16,)
"""
a = np.asarray(a)
if axis is None:
a = a.ravel()
axis = 0
nobs = a.shape[axis]
lowercut = int(proportiontocut * nobs)
uppercut = nobs - lowercut
if (lowercut >= uppercut):
raise ValueError("Proportion too big.")
sl = [slice(None)] * a.ndim
sl[axis] = slice(lowercut, uppercut)
    return a[tuple(sl)] | Slices off a proportion of items from both ends of an array. | trimboth | python | statsmodels/statsmodels | statsmodels/stats/robust_compare.py | https://github.com/statsmodels/statsmodels/blob/master/statsmodels/stats/robust_compare.py | BSD-3-Clause |
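The docstring example and the conservative rounding can be checked with a condensed, self-contained copy of the function above (`trimboth_demo` is a hypothetical name for the demo):

```python
import numpy as np

def trimboth_demo(a, proportiontocut, axis=0):
    # condensed copy of trimboth: int() truncation means a non-integer
    # cut point trims fewer observations, never more
    a = np.asarray(a)
    if axis is None:
        a = a.ravel()
        axis = 0
    nobs = a.shape[axis]
    lowercut = int(proportiontocut * nobs)
    uppercut = nobs - lowercut
    if lowercut >= uppercut:
        raise ValueError("Proportion too big.")
    sl = [slice(None)] * a.ndim
    sl[axis] = slice(lowercut, uppercut)
    return a[tuple(sl)]

b = trimboth_demo(np.arange(20), 0.1)   # drops int(2.0) = 2 per end
c = trimboth_demo(np.arange(10), 0.19)  # drops only int(1.9) = 1 per end
```

Note that the input is not sorted for you; as the docstring says, pre-sort if you want the tails (rather than the first and last rows) removed.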
def trim_mean(a, proportiontocut, axis=0):
"""
Return mean of array after trimming observations from both tails.
If `proportiontocut` = 0.1, slices off 'leftmost' and 'rightmost' 10% of
scores. Slices off LESS if proportion results in a non-integer slice
index (i.e., conservatively slices off `proportiontocut` ).
Parameters
----------
a : array_like
Input array
proportiontocut : float
Fraction to cut off at each tail of the sorted observations.
axis : int or None
Axis along which the trimmed means are computed. The default is axis=0.
If axis is None then the trimmed mean will be computed for the
flattened array.
Returns
-------
trim_mean : ndarray
Mean of trimmed array.
"""
newa = trimboth(np.sort(a, axis), proportiontocut, axis=axis)
    return np.mean(newa, axis=axis) | Return mean of array after trimming observations from both tails. | trim_mean | python | statsmodels/statsmodels | statsmodels/stats/robust_compare.py | https://github.com/statsmodels/statsmodels/blob/master/statsmodels/stats/robust_compare.py | BSD-3-Clause |
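A quick check of why the trimmed mean is robust, using a self-contained equivalent of `trim_mean` (the helper name is made up): one gross outlier moves the ordinary mean a lot but leaves the 10%-trimmed mean unchanged.

```python
import numpy as np

def trim_mean_demo(a, proportiontocut, axis=0):
    # sort, cut int(proportiontocut * nobs) observations per tail, average
    a = np.sort(np.asarray(a), axis=axis)
    nobs = a.shape[axis]
    cut = int(proportiontocut * nobs)
    sl = [slice(None)] * a.ndim
    sl[axis] = slice(cut, nobs - cut)
    return np.mean(a[tuple(sl)], axis=axis)

clean = np.arange(20.0)
with_outlier = clean.copy()
with_outlier[-1] = 1000.0            # one gross outlier
tm_clean = trim_mean_demo(clean, 0.1)        # mean of 2..17 -> 9.5
tm_out = trim_mean_demo(with_outlier, 0.1)   # outlier trimmed away -> 9.5
```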
def data_trimmed(self):
"""numpy array of trimmed and sorted data
"""
# returns a view
        return self.data_sorted[tuple(self.sl)] | numpy array of trimmed and sorted data | data_trimmed | python | statsmodels/statsmodels | statsmodels/stats/robust_compare.py | https://github.com/statsmodels/statsmodels/blob/master/statsmodels/stats/robust_compare.py | BSD-3-Clause |
def data_winsorized(self):
"""winsorized data
"""
lb = np.expand_dims(self.lowerbound, self.axis)
ub = np.expand_dims(self.upperbound, self.axis)
return np.clip(self.data_sorted, lb, ub) | winsorized data | data_winsorized | python | statsmodels/statsmodels | statsmodels/stats/robust_compare.py | https://github.com/statsmodels/statsmodels/blob/master/statsmodels/stats/robust_compare.py | BSD-3-Clause |
def mean_trimmed(self):
"""mean of trimmed data
"""
return np.mean(self.data_sorted[tuple(self.sl)], self.axis) | mean of trimmed data | mean_trimmed | python | statsmodels/statsmodels | statsmodels/stats/robust_compare.py | https://github.com/statsmodels/statsmodels/blob/master/statsmodels/stats/robust_compare.py | BSD-3-Clause |
def mean_winsorized(self):
"""mean of winsorized data
"""
return np.mean(self.data_winsorized, self.axis) | mean of winsorized data | mean_winsorized | python | statsmodels/statsmodels | statsmodels/stats/robust_compare.py | https://github.com/statsmodels/statsmodels/blob/master/statsmodels/stats/robust_compare.py | BSD-3-Clause |
def var_winsorized(self):
"""variance of winsorized data
"""
# hardcoded ddof = 1
return np.var(self.data_winsorized, ddof=1, axis=self.axis) | variance of winsorized data | var_winsorized | python | statsmodels/statsmodels | statsmodels/stats/robust_compare.py | https://github.com/statsmodels/statsmodels/blob/master/statsmodels/stats/robust_compare.py | BSD-3-Clause |
def std_mean_trimmed(self):
"""standard error of trimmed mean
"""
se = np.sqrt(self.var_winsorized / self.nobs_reduced)
# trimming creates correlation across trimmed observations
# trimming is based on order statistics of the data
# wilcox 2012, p.61
se *= np.sqrt(self.nobs / self.nobs_reduced)
return se | standard error of trimmed mean | std_mean_trimmed | python | statsmodels/statsmodels | statsmodels/stats/robust_compare.py | https://github.com/statsmodels/statsmodels/blob/master/statsmodels/stats/robust_compare.py | BSD-3-Clause |
def std_mean_winsorized(self):
"""standard error of winsorized mean
"""
# the following matches Wilcox, WRS2
std_ = np.sqrt(self.var_winsorized / self.nobs)
std_ *= (self.nobs - 1) / (self.nobs_reduced - 1)
# old version
# tm = self
# formula from an old SAS manual page, simplified
# std_ = np.sqrt(tm.var_winsorized / (tm.nobs_reduced - 1) *
# (tm.nobs - 1.) / tm.nobs)
return std_ | standard error of winsorized mean | std_mean_winsorized | python | statsmodels/statsmodels | statsmodels/stats/robust_compare.py | https://github.com/statsmodels/statsmodels/blob/master/statsmodels/stats/robust_compare.py | BSD-3-Clause |
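The properties above fit together as follows for a single 1-D sample with 10% trimming per tail; this is a standalone sketch written by hand, not using the `TrimmedMean` class itself:

```python
import numpy as np

x = np.sort(np.array([1.0, 2.0, 2.5, 3.0, 3.5, 4.0, 4.5, 5.0, 6.0, 50.0]))
nobs = len(x)
cut = int(0.1 * nobs)                 # observations dropped per tail
nobs_reduced = nobs - 2 * cut
trimmed = x[cut:nobs - cut]           # analogue of data_trimmed
# winsorizing replaces each tail value by the nearest retained value
winsorized = np.clip(x, x[cut], x[nobs - cut - 1])
mean_trimmed = trimmed.mean()         # robust against the 50.0 outlier
mean_winsorized = winsorized.mean()
var_winsorized = winsorized.var(ddof=1)
# std_mean_trimmed: winsorized variance over the reduced sample size,
# inflated by sqrt(nobs / nobs_reduced) because trimming is based on
# order statistics (Wilcox 2012, p. 61)
std_mean_trimmed = (np.sqrt(var_winsorized / nobs_reduced)
                    * np.sqrt(nobs / nobs_reduced))
```

Both robust means sit near the bulk of the data, while the ordinary mean of `x` is dragged up to 8.15 by the single value 50.0.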
def ttest_mean(self, value=0, transform='trimmed',
alternative='two-sided'):
"""
One sample t-test for trimmed or Winsorized mean
Parameters
----------
value : float
Value of the mean under the Null hypothesis
transform : {'trimmed', 'winsorized'}
Specified whether the mean test is based on trimmed or winsorized
data.
        alternative : {'two-sided', 'larger', 'smaller'}
            The alternative hypothesis for the test.
Notes
-----
p-value is based on the approximate t-distribution of the test
statistic. The approximation is valid if the underlying distribution
is symmetric.
"""
import statsmodels.stats.weightstats as smws
df = self.nobs_reduced - 1
if transform == 'trimmed':
mean_ = self.mean_trimmed
std_ = self.std_mean_trimmed
elif transform == 'winsorized':
mean_ = self.mean_winsorized
std_ = self.std_mean_winsorized
else:
raise ValueError("transform can only be 'trimmed' or 'winsorized'")
res = smws._tstat_generic(mean_, 0, std_,
df, alternative=alternative, diff=value)
        return res + (df,) | One sample t-test for trimmed or winsorized mean. | ttest_mean | python | statsmodels/statsmodels | statsmodels/stats/robust_compare.py | https://github.com/statsmodels/statsmodels/blob/master/statsmodels/stats/robust_compare.py | BSD-3-Clause |
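The `transform='trimmed'` branch of `ttest_mean` reduces to a few lines. `ttest_trimmed_stat` is a made-up standalone helper that returns only the t statistic and degrees of freedom; the p-value would come from a t distribution with those df, as in the method above:

```python
import numpy as np

def ttest_trimmed_stat(x, value=0.0, frac=0.1):
    # trimmed mean, its Wilcox-style standard error, and
    # df = nobs_reduced - 1, mirroring ttest_mean(transform='trimmed')
    x = np.sort(np.asarray(x))
    nobs = len(x)
    cut = int(frac * nobs)
    nobs_reduced = nobs - 2 * cut
    trimmed = x[cut:nobs - cut]
    winsorized = np.clip(x, x[cut], x[nobs - cut - 1])
    se = np.sqrt(winsorized.var(ddof=1) / nobs_reduced)
    se *= np.sqrt(nobs / nobs_reduced)
    df = nobs_reduced - 1
    tstat = (trimmed.mean() - value) / se
    return tstat, df

# symmetric sample centered at 5: the trimmed mean equals 5 exactly,
# so testing against value=5 gives a zero statistic
tstat, df = ttest_trimmed_stat(np.arange(11.0), value=5.0)
```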
def reset_fraction(self, frac):
"""create a TrimmedMean instance with a new trimming fraction
This reuses the sorted array from the current instance.
"""
tm = TrimmedMean(self.data_sorted, frac, is_sorted=True,
axis=self.axis)
tm.data = self.data
# TODO: this will not work if there is processing of meta-information
# in __init__,
# for example storing a pandas DataFrame or Series index
        return tm | create a TrimmedMean instance with a new trimming fraction. | reset_fraction | python | statsmodels/statsmodels | statsmodels/stats/robust_compare.py | https://github.com/statsmodels/statsmodels/blob/master/statsmodels/stats/robust_compare.py | BSD-3-Clause |
def scale_transform(data, center='median', transform='abs', trim_frac=0.2,
axis=0):
"""Transform data for variance comparison for Levene type tests
Parameters
----------
data : array_like
Observations for the data.
center : "median", "mean", "trimmed" or float
Statistic used for centering observations. If a float, then this
value is used to center. Default is median.
transform : 'abs', 'square', 'identity' or a callable
The transform for the centered data.
trim_frac : float in [0, 0.5)
Fraction of observations that are trimmed on each side of the sorted
observations. This is only used if center is `trimmed`.
axis : int
Axis along which the data are transformed when centering.
Returns
-------
res : ndarray
transformed data in the same shape as the original data.
"""
x = np.asarray(data) # x is shorthand from earlier code
if transform == 'abs':
tfunc = np.abs
elif transform == 'square':
tfunc = lambda x: x * x # noqa
elif transform == 'identity':
tfunc = lambda x: x # noqa
elif callable(transform):
tfunc = transform
else:
        raise ValueError('transform should be abs, square, identity '
                         'or a callable')
if center == 'median':
res = tfunc(x - np.expand_dims(np.median(x, axis=axis), axis))
elif center == 'mean':
res = tfunc(x - np.expand_dims(np.mean(x, axis=axis), axis))
elif center == 'trimmed':
center = trim_mean(x, trim_frac, axis=axis)
res = tfunc(x - np.expand_dims(center, axis))
elif isinstance(center, numbers.Number):
res = tfunc(x - center)
else:
raise ValueError('center should be median, mean or trimmed')
    return res | Transform data for variance comparison for Levene type tests. | scale_transform | python | statsmodels/statsmodels | statsmodels/stats/robust_compare.py | https://github.com/statsmodels/statsmodels/blob/master/statsmodels/stats/robust_compare.py | BSD-3-Clause |
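With `center='median'` and `transform='abs'`, `scale_transform` produces the Brown-Forsythe style spread scores used in Levene-type tests. A direct check, written standalone, that the scores measure spread only and are invariant to a location shift:

```python
import numpy as np

x = np.array([1.0, 2.0, 3.0, 4.0, 10.0])
# absolute deviations from the median, as in
# scale_transform(x, center='median', transform='abs')
scores = np.abs(x - np.median(x))
# shifting every observation by a constant leaves the scores unchanged,
# so a pure location difference between groups does not affect the test
shifted_scores = np.abs((x + 100.0) - np.median(x + 100.0))
```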
def _int_ifclose(x, dec=1, width=4):
'''helper function for creating result string for int or float
only dec=1 and width=4 is implemented
Parameters
----------
x : int or float
value to format
dec : 1
number of decimals to print if x is not an integer
width : 4
width of string
Returns
-------
xint : int or float
x is converted to int if it is within 1e-14 of an integer
x_string : str
x formatted as string, either '%4d' or '%4.1f'
'''
xint = int(round(x))
if np.max(np.abs(xint - x)) < 1e-14:
return xint, '%4d' % xint
else:
        return x, '%4.1f' % x | helper function for creating result string for int or float | _int_ifclose | python | statsmodels/statsmodels | statsmodels/stats/inter_rater.py | https://github.com/statsmodels/statsmodels/blob/master/statsmodels/stats/inter_rater.py | BSD-3-Clause |
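The formatting behavior is easiest to see on two scalars; `int_ifclose_demo` is a condensed scalar-only copy of `_int_ifclose` (the original also handles arrays via `np.max(np.abs(...))`):

```python
def int_ifclose_demo(x):
    # near-integers render via '%4d', everything else via '%4.1f'
    xint = int(round(x))
    if abs(xint - x) < 1e-14:
        return xint, '%4d' % xint
    return x, '%4.1f' % x
```

So a value of exactly 3.0 comes back as the integer 3 formatted `'   3'`, while 3.5 stays a float formatted `' 3.5'`.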
def aggregate_raters(data, n_cat=None):
'''convert raw data with shape (subject, rater) to (subject, cat_counts)
brings data into correct format for fleiss_kappa
bincount will raise exception if data cannot be converted to integer.
Parameters
----------
data : array_like, 2-Dim
data containing category assignment with subjects in rows and raters
in columns.
n_cat : None or int
If None, then the data is converted to integer categories,
0,1,2,...,n_cat-1. Because of the relabeling only category levels
with non-zero counts are included.
If this is an integer, then the category levels in the data are already
assumed to be in integers, 0,1,2,...,n_cat-1. In this case, the
returned array may contain columns with zero count, if no subject
has been categorized with this level.
Returns
-------
arr : nd_array, (n_rows, n_cat)
Contains counts of raters that assigned a category level to individuals.
Subjects are in rows, category levels in columns.
categories : nd_array, (n_category_levels,)
Contains the category levels.
'''
data = np.asarray(data)
n_rows = data.shape[0]
if n_cat is None:
#I could add int conversion (reverse_index) to np.unique
cat_uni, cat_int = np.unique(data.ravel(), return_inverse=True)
n_cat = len(cat_uni)
data_ = cat_int.reshape(data.shape)
else:
cat_uni = np.arange(n_cat) #for return only, assumed cat levels
data_ = data
tt = np.zeros((n_rows, n_cat), int)
for idx, row in enumerate(data_):
ro = np.bincount(row)
tt[idx, :len(ro)] = ro
    return tt, cat_uni | convert raw data with shape (subject, rater) to (subject, cat_counts) | aggregate_raters | python | statsmodels/statsmodels | statsmodels/stats/inter_rater.py | https://github.com/statsmodels/statsmodels/blob/master/statsmodels/stats/inter_rater.py | BSD-3-Clause |
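The aggregation step can be traced on a tiny example: three subjects each rated by four raters become per-subject counts over category levels, the input format `fleiss_kappa` expects. The loop mirrors `aggregate_raters`, but uses `bincount`'s `minlength` argument instead of the `tt[idx, :len(ro)] = ro` padding (a small simplification):

```python
import numpy as np

data = np.array([[0, 0, 1, 1],     # subject 0: raters split 0/0/1/1
                 [1, 1, 1, 2],     # subject 1: three 1s and one 2
                 [2, 2, 2, 2]])    # subject 2: unanimous 2
n_cat = 3
counts = np.zeros((data.shape[0], n_cat), int)
for idx, row in enumerate(data):
    counts[idx] = np.bincount(row, minlength=n_cat)
# each row of counts sums to the number of raters (4)
```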
def to_table(data, bins=None):
'''convert raw data with shape (subject, rater) to (rater1, rater2)
brings data into correct format for cohens_kappa
Parameters
----------
data : array_like, 2-Dim
data containing category assignment with subjects in rows and raters
in columns.
bins : None, int or tuple of array_like
If None, then the data is converted to integer categories,
0,1,2,...,n_cat-1. Because of the relabeling only category levels
with non-zero counts are included.
If this is an integer, then the category levels in the data are already
assumed to be in integers, 0,1,2,...,n_cat-1. In this case, the
returned array may contain columns with zero count, if no subject
has been categorized with this level.
        If bins is a tuple of two array_like, then the bins are directly used
        by ``numpy.histogramdd``. This is useful if we want to merge categories.
Returns
-------
arr : nd_array, (n_cat, n_cat)
Contingency table that contains counts of category level with rater1
in rows and rater2 in columns.
Notes
-----
no NaN handling, delete rows with missing values
This works also for more than two raters. In that case the dimension of
the resulting contingency table is the same as the number of raters
instead of 2-dimensional.
'''
data = np.asarray(data)
n_rows, n_cols = data.shape
if bins is None:
#I could add int conversion (reverse_index) to np.unique
cat_uni, cat_int = np.unique(data.ravel(), return_inverse=True)
n_cat = len(cat_uni)
data_ = cat_int.reshape(data.shape)
bins_ = np.arange(n_cat+1) - 0.5
#alternative implementation with double loop
#tt = np.asarray([[(x == [i,j]).all(1).sum() for j in cat_uni]
# for i in cat_uni] )
#other altervative: unique rows and bincount
elif np.isscalar(bins):
bins_ = np.arange(bins+1) - 0.5
data_ = data
else:
bins_ = bins
data_ = data
tt = np.histogramdd(data_, (bins_,)*n_cols)
    return tt[0], bins_ | convert raw data with shape (subject, rater) to (rater1, rater2) | to_table | python | statsmodels/statsmodels | statsmodels/stats/inter_rater.py | https://github.com/statsmodels/statsmodels/blob/master/statsmodels/stats/inter_rater.py | BSD-3-Clause |
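The binning trick in `to_table` is worth seeing in isolation: half-integer bin edges put each integer category in its own bin, so `numpy.histogramdd` produces the contingency table directly. A condensed version for two raters with categories 0..2:

```python
import numpy as np

# five subjects scored by two raters
data = np.array([[0, 0],
                 [0, 1],
                 [1, 1],
                 [2, 2],
                 [2, 2]])
bins_ = np.arange(3 + 1) - 0.5           # [-0.5, 0.5, 1.5, 2.5]
table, _ = np.histogramdd(data, (bins_, bins_))
# table[i, j] counts subjects rated i by rater 1 and j by rater 2
```

With more than two raters the same call yields a higher-dimensional table, one axis per rater, as the Notes above point out.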
def fleiss_kappa(table, method='fleiss'):
"""Fleiss' and Randolph's kappa multi-rater agreement measure
Parameters
----------
table : array_like, 2-D
assumes subjects in rows, and categories in columns. Convert raw data
into this format by using
:func:`statsmodels.stats.inter_rater.aggregate_raters`
method : str
Method 'fleiss' returns Fleiss' kappa which uses the sample margin
to define the chance outcome.
Method 'randolph' or 'uniform' (only first 4 letters are needed)
returns Randolph's (2005) multirater kappa which assumes a uniform
distribution of the categories to define the chance outcome.
Returns
-------
kappa : float
Fleiss's or Randolph's kappa statistic for inter rater agreement
Notes
-----
no variance or hypothesis tests yet
Interrater agreement measures like Fleiss's kappa measure agreement relative
to chance agreement. Different authors have proposed ways of defining
these chance agreements. Fleiss' is based on the marginal sample distribution
of categories, while Randolph uses a uniform distribution of categories as
benchmark. Warrens (2010) showed that Randolph's kappa is always larger or
equal to Fleiss' kappa. Under some commonly observed condition, Fleiss' and
Randolph's kappa provide lower and upper bounds for two similar kappa_like
measures by Light (1971) and Hubert (1977).
References
----------
Wikipedia https://en.wikipedia.org/wiki/Fleiss%27_kappa
Fleiss, Joseph L. 1971. "Measuring Nominal Scale Agreement among Many
Raters." Psychological Bulletin 76 (5): 378-82.
https://doi.org/10.1037/h0031619.
Randolph, Justus J. 2005 "Free-Marginal Multirater Kappa (multirater
K [free]): An Alternative to Fleiss' Fixed-Marginal Multirater Kappa."
Presented at the Joensuu Learning and Instruction Symposium, vol. 2005
https://eric.ed.gov/?id=ED490661
Warrens, Matthijs J. 2010. "Inequalities between Multi-Rater Kappas."
Advances in Data Analysis and Classification 4 (4): 271-86.
https://doi.org/10.1007/s11634-010-0073-4.
"""
table = 1.0 * np.asarray(table) #avoid integer division
n_sub, n_cat = table.shape
n_total = table.sum()
n_rater = table.sum(1)
n_rat = n_rater.max()
#assume fully ranked
assert n_total == n_sub * n_rat
#marginal frequency of categories
p_cat = table.sum(0) / n_total
table2 = table * table
p_rat = (table2.sum(1) - n_rat) / (n_rat * (n_rat - 1.))
p_mean = p_rat.mean()
if method == 'fleiss':
p_mean_exp = (p_cat*p_cat).sum()
elif method.startswith('rand') or method.startswith('unif'):
p_mean_exp = 1 / n_cat
kappa = (p_mean - p_mean_exp) / (1 - p_mean_exp)
return kappa | Fleiss' and Randolph's kappa multi-rater agreement measure
Parameters
----------
table : array_like, 2-D
assumes subjects in rows, and categories in columns. Convert raw data
into this format by using
:func:`statsmodels.stats.inter_rater.aggregate_raters`
method : str
Method 'fleiss' returns Fleiss' kappa which uses the sample margin
to define the chance outcome.
Method 'randolph' or 'uniform' (only first 4 letters are needed)
returns Randolph's (2005) multirater kappa which assumes a uniform
distribution of the categories to define the chance outcome.
Returns
-------
kappa : float
Fleiss's or Randolph's kappa statistic for inter rater agreement
Notes
-----
no variance or hypothesis tests yet
Interrater agreement measures like Fleiss's kappa measure agreement relative
to chance agreement. Different authors have proposed ways of defining
these chance agreements. Fleiss' is based on the marginal sample distribution
of categories, while Randolph uses a uniform distribution of categories as
benchmark. Warrens (2010) showed that Randolph's kappa is always larger or
equal to Fleiss' kappa. Under some commonly observed conditions, Fleiss' and
Randolph's kappa provide lower and upper bounds for two similar kappa-like
measures by Light (1971) and Hubert (1977).
References
----------
Wikipedia https://en.wikipedia.org/wiki/Fleiss%27_kappa
Fleiss, Joseph L. 1971. "Measuring Nominal Scale Agreement among Many
Raters." Psychological Bulletin 76 (5): 378-82.
https://doi.org/10.1037/h0031619.
Randolph, Justus J. 2005 "Free-Marginal Multirater Kappa (multirater
K [free]): An Alternative to Fleiss' Fixed-Marginal Multirater Kappa."
Presented at the Joensuu Learning and Instruction Symposium, vol. 2005
https://eric.ed.gov/?id=ED490661
Warrens, Matthijs J. 2010. "Inequalities between Multi-Rater Kappas."
Advances in Data Analysis and Classification 4 (4): 271-86.
https://doi.org/10.1007/s11634-010-0073-4. | fleiss_kappa | python | statsmodels/statsmodels | statsmodels/stats/inter_rater.py | https://github.com/statsmodels/statsmodels/blob/master/statsmodels/stats/inter_rater.py | BSD-3-Clause |
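As a concrete check of the computation above — not taken from the statsmodels sources — here is a minimal numpy-only sketch of Fleiss' kappa on a small hypothetical table (4 subjects, 2 categories, 3 raters per subject):

```python
import numpy as np

# Hypothetical ratings: 4 subjects x 2 categories, 3 raters per subject.
table = np.array([[3, 0], [2, 1], [1, 2], [0, 3]], dtype=float)

n_sub, n_cat = table.shape
n_rat = table.sum(1).max()      # raters per subject (fully crossed design)
n_total = table.sum()

# per-subject agreement, then its mean over subjects
p_rat = (np.sum(table * table, axis=1) - n_rat) / (n_rat * (n_rat - 1.0))
p_mean = p_rat.mean()

# chance agreement: Fleiss uses the sample category margins
p_cat = table.sum(0) / n_total
p_mean_exp = (p_cat * p_cat).sum()

kappa = (p_mean - p_mean_exp) / (1 - p_mean_exp)
print(kappa)  # approximately 1/3 for this table
```

With `method='randolph'` the only change is `p_mean_exp = 1 / n_cat`, which here equals the marginal value 0.5 because the category margins happen to be uniform.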
def cohens_kappa(table, weights=None, return_results=True, wt=None):
'''Compute Cohen's kappa with variance and equal-zero test
Parameters
----------
table : array_like, 2-Dim
square array with results of two raters, one rater in rows, second
rater in columns
weights : array_like
The interpretation of weights depends on the wt argument.
If both are None, then the simple kappa is computed.
see wt for the case when wt is not None
If weights is two dimensional, then it is directly used as a weight
matrix. For computing the variance of kappa, the maximum of the
weights is assumed to be smaller or equal to one.
TODO: fix conflicting definitions in the 2-Dim case for
wt : {None, str}
If wt and weights are None, then the simple kappa is computed.
If wt is given, but weights is None, then the weights are set to
be [0, 1, 2, ..., k].
If weights is a one-dimensional array, then it is used to construct
the weight matrix given the following options.
wt in ['linear', 'ca', None] : use linear weights, Cicchetti-Allison
actual weights are linear in the score "weights" difference
wt in ['quadratic', 'fc'] : use quadratic weights, Fleiss-Cohen
actual weights are squared in the score "weights" difference
wt = 'toeplitz' : weight matrix is constructed as a toeplitz matrix
from the one dimensional weights.
return_results : bool
If True (default), then an instance of KappaResults is returned.
If False, then only kappa is computed and returned.
Returns
-------
results or kappa
If return_results is True (default), then a results instance with all
statistics is returned
If return_results is False, then only kappa is calculated and returned.
Notes
-----
There are two conflicting definitions of the weight matrix, Wikipedia
versus SAS manual. However, the computations are invariant to rescaling
of the weights matrix, so there is no difference in the results.
Weights for 'linear' and 'quadratic' are interpreted as scores for the
categories, the weights in the computation are based on the pairwise
difference between the scores.
Weights for 'toeplitz' are interpreted as a weighted distance. The distance
only depends on how many levels apart two entries in the table are but
not on the levels themselves.
example:
weights = '0, 1, 2, 3' and wt is either linear or toeplitz means that the
weighting only depends on the simple distance of levels.
weights = '0, 0, 1, 1' and wt = 'linear' means that the first two levels
are zero distance apart and the same for the last two levels. This is
the same as forming two aggregated levels by merging the first two and
the last two levels, respectively.
weights = [0, 1, 2, 3] and wt = 'quadratic' is the same as squaring these
weights and using wt = 'toeplitz'.
References
----------
Wikipedia
SAS Manual
'''
table = np.asarray(table, float) #avoid integer division
agree = np.diag(table).sum()
nobs = table.sum()
probs = table / nobs
freqs = probs #TODO: rename to use freqs instead of probs for observed
probs_diag = np.diag(probs)
freq_row = table.sum(1) / nobs
freq_col = table.sum(0) / nobs
prob_exp = freq_col * freq_row[:, None]
assert np.allclose(prob_exp.sum(), 1)
#print prob_exp.sum()
agree_exp = np.diag(prob_exp).sum() #need for kappa_max
if weights is None and wt is None:
kind = 'Simple'
kappa = (agree / nobs - agree_exp) / (1 - agree_exp)
if return_results:
#variance
term_a = probs_diag * (1 - (freq_row + freq_col) * (1 - kappa))**2
term_a = term_a.sum()
term_b = probs * (freq_col[:, None] + freq_row)**2
d_idx = np.arange(table.shape[0])
term_b[d_idx, d_idx] = 0 #set diagonal to zero
term_b = (1 - kappa)**2 * term_b.sum()
term_c = (kappa - agree_exp * (1-kappa))**2
var_kappa = (term_a + term_b - term_c) / (1 - agree_exp)**2 / nobs
#term_c = freq_col * freq_row[:, None] * (freq_col + freq_row[:,None])
term_c = freq_col * freq_row * (freq_col + freq_row)
var_kappa0 = (agree_exp + agree_exp**2 - term_c.sum())
var_kappa0 /= (1 - agree_exp)**2 * nobs
else:
if weights is None:
weights = np.arange(table.shape[0])
#weights follows the Wikipedia definition, not the SAS, which is 1 -
kind = 'Weighted'
weights = np.asarray(weights, float)
if weights.ndim == 1:
if wt in ['ca', 'linear', None]:
weights = np.abs(weights[:, None] - weights) / \
(weights[-1] - weights[0])
elif wt in ['fc', 'quadratic']:
weights = (weights[:, None] - weights)**2 / \
(weights[-1] - weights[0])**2
elif wt == 'toeplitz':
#assume toeplitz structure
from scipy.linalg import toeplitz
#weights = toeplitz(np.arange(table.shape[0]))
weights = toeplitz(weights)
else:
raise ValueError('wt option is not known')
else:
rows, cols = table.shape
if (table.shape != weights.shape):
raise ValueError('weights are not square')
#this is formula from Wikipedia
kappa = 1 - (weights * table).sum() / nobs / (weights * prob_exp).sum()
#TODO: add var_kappa for weighted version
if return_results:
var_kappa = np.nan
var_kappa0 = np.nan
#switch to SAS manual weights, problem if user specifies weights
#w is negative in some examples,
#but weights is scale invariant in examples and rough check of source
w = 1. - weights
w_row = (freq_col * w).sum(1)
w_col = (freq_row[:, None] * w).sum(0)
agree_wexp = (w * freq_col * freq_row[:, None]).sum()
term_a = freqs * (w - (w_col + w_row[:, None]) * (1 - kappa))**2
fac = 1. / ((1 - agree_wexp)**2 * nobs)
var_kappa = term_a.sum() - (kappa - agree_wexp * (1 - kappa))**2
var_kappa *= fac
freqse = freq_col * freq_row[:, None]
var_kappa0 = (freqse * (w - (w_col + w_row[:, None]))**2).sum()
var_kappa0 -= agree_wexp**2
var_kappa0 *= fac
kappa_max = (np.minimum(freq_row, freq_col).sum() - agree_exp) / \
(1 - agree_exp)
if return_results:
res = KappaResults( kind=kind,
kappa=kappa,
kappa_max=kappa_max,
weights=weights,
var_kappa=var_kappa,
var_kappa0=var_kappa0)
return res
else:
return kappa | Compute Cohen's kappa with variance and equal-zero test
Parameters
----------
table : array_like, 2-Dim
square array with results of two raters, one rater in rows, second
rater in columns
weights : array_like
The interpretation of weights depends on the wt argument.
If both are None, then the simple kappa is computed.
see wt for the case when wt is not None
If weights is two dimensional, then it is directly used as a weight
matrix. For computing the variance of kappa, the maximum of the
weights is assumed to be smaller or equal to one.
TODO: fix conflicting definitions in the 2-Dim case for
wt : {None, str}
If wt and weights are None, then the simple kappa is computed.
If wt is given, but weights is None, then the weights are set to
be [0, 1, 2, ..., k].
If weights is a one-dimensional array, then it is used to construct
the weight matrix given the following options.
wt in ['linear', 'ca', None] : use linear weights, Cicchetti-Allison
actual weights are linear in the score "weights" difference
wt in ['quadratic', 'fc'] : use quadratic weights, Fleiss-Cohen
actual weights are squared in the score "weights" difference
wt = 'toeplitz' : weight matrix is constructed as a toeplitz matrix
from the one dimensional weights.
return_results : bool
If True (default), then an instance of KappaResults is returned.
If False, then only kappa is computed and returned.
Returns
-------
results or kappa
If return_results is True (default), then a results instance with all
statistics is returned
If return_results is False, then only kappa is calculated and returned.
Notes
-----
There are two conflicting definitions of the weight matrix, Wikipedia
versus SAS manual. However, the computations are invariant to rescaling
of the weights matrix, so there is no difference in the results.
Weights for 'linear' and 'quadratic' are interpreted as scores for the
categories, the weights in the computation are based on the pairwise
difference between the scores.
Weights for 'toeplitz' are interpreted as a weighted distance. The distance
only depends on how many levels apart two entries in the table are but
not on the levels themselves.
example:
weights = '0, 1, 2, 3' and wt is either linear or toeplitz means that the
weighting only depends on the simple distance of levels.
weights = '0, 0, 1, 1' and wt = 'linear' means that the first two levels
are zero distance apart and the same for the last two levels. This is
the same as forming two aggregated levels by merging the first two and
the last two levels, respectively.
weights = [0, 1, 2, 3] and wt = 'quadratic' is the same as squaring these
weights and using wt = 'toeplitz'.
References
----------
Wikipedia
SAS Manual | cohens_kappa | python | statsmodels/statsmodels | statsmodels/stats/inter_rater.py | https://github.com/statsmodels/statsmodels/blob/master/statsmodels/stats/inter_rater.py | BSD-3-Clause |
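The simple (unweighted) case reduces to observed agreement versus margin-based chance agreement. A numpy-only sketch on a hypothetical 2x2 cross-tabulation of two raters — not from the statsmodels test suite:

```python
import numpy as np

# Hypothetical cross-tabulation: rater 1 in rows, rater 2 in columns.
table = np.array([[20, 5], [10, 15]], dtype=float)

nobs = table.sum()
p_obs = np.diag(table).sum() / nobs        # observed agreement = 0.7
freq_row = table.sum(1) / nobs
freq_col = table.sum(0) / nobs
p_exp = (freq_row * freq_col).sum()        # chance agreement from margins = 0.5

kappa = (p_obs - p_exp) / (1 - p_exp)
print(kappa)  # approximately 0.4
```

The variance terms in the function above extend this with the delta-method formulas; the point estimate is exactly this ratio.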
def test_mvmean(data, mean_null=0, return_results=True):
"""Hotellings test for multivariate mean in one sample
Parameters
----------
data : array_like
data with observations in rows and variables in columns
mean_null : array_like
mean of the multivariate data under the null hypothesis
return_results : bool
If true, then a results instance is returned. If False, then only
the test statistic and pvalue are returned.
Returns
-------
results : instance of a results class with attributes
statistic, pvalue, t2 and df
(statistic, pvalue) : tuple
If return_results is false, then only the test statistic and the
pvalue are returned.
"""
x = np.asarray(data)
nobs, k_vars = x.shape
mean = x.mean(0)
cov = np.cov(x, rowvar=False, ddof=1)
diff = mean - mean_null
t2 = nobs * diff.dot(np.linalg.solve(cov, diff))
factor = (nobs - 1) * k_vars / (nobs - k_vars)
statistic = t2 / factor
df = (k_vars, nobs - k_vars)
pvalue = stats.f.sf(statistic, df[0], df[1])
if return_results:
res = HolderTuple(statistic=statistic,
pvalue=pvalue,
df=df,
t2=t2,
distr="F")
return res
else:
return statistic, pvalue | Hotelling's test for multivariate mean in one sample
Parameters
----------
data : array_like
data with observations in rows and variables in columns
mean_null : array_like
mean of the multivariate data under the null hypothesis
return_results : bool
If true, then a results instance is returned. If False, then only
the test statistic and pvalue are returned.
Returns
-------
results : instance of a results class with attributes
statistic, pvalue, t2 and df
(statistic, pvalue) : tuple
If return_results is false, then only the test statistic and the
pvalue are returned. | test_mvmean | python | statsmodels/statsmodels | statsmodels/stats/multivariate.py | https://github.com/statsmodels/statsmodels/blob/master/statsmodels/stats/multivariate.py | BSD-3-Clause |
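The T² statistic itself needs only numpy. A hand-checkable sketch on a hypothetical sample (the p-value step via `stats.f.sf` is omitted here to stay self-contained):

```python
import numpy as np

# Hypothetical sample: 4 observations of 2 uncorrelated variables.
x = np.array([[0.0, 0.0], [2.0, 0.0], [0.0, 2.0], [2.0, 2.0]])
mean_null = np.zeros(2)

nobs, k_vars = x.shape
diff = x.mean(0) - mean_null               # [1, 1]
cov = np.cov(x, rowvar=False, ddof=1)      # diag(4/3, 4/3) for this data

t2 = nobs * diff @ np.linalg.solve(cov, diff)   # Hotelling's T^2 = 6
factor = (nobs - 1) * k_vars / (nobs - k_vars)  # = 3
statistic = t2 / factor                         # F-distributed on (2, 2) df
```

Rescaling T² by `factor` is what turns it into an F statistic with `(k_vars, nobs - k_vars)` degrees of freedom, as in the function above.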
def test_mvmean_2indep(data1, data2):
"""Hotellings test for multivariate mean in two independent samples
The null hypothesis is that both samples have the same mean.
The alternative hypothesis is that means differ.
Parameters
----------
data1 : array_like
first sample data with observations in rows and variables in columns
data2 : array_like
second sample data with observations in rows and variables in columns
Returns
-------
results : instance of a results class with attributes
statistic, pvalue, t2 and df
"""
x1 = array_like(data1, "x1", ndim=2)
x2 = array_like(data2, "x2", ndim=2)
nobs1, k_vars = x1.shape
nobs2, k_vars2 = x2.shape
if k_vars2 != k_vars:
msg = "both samples need to have the same number of columns"
raise ValueError(msg)
mean1 = x1.mean(0)
mean2 = x2.mean(0)
cov1 = np.cov(x1, rowvar=False, ddof=1)
cov2 = np.cov(x2, rowvar=False, ddof=1)
nobs_t = nobs1 + nobs2
combined_cov = ((nobs1 - 1) * cov1 + (nobs2 - 1) * cov2) / (nobs_t - 2)
diff = mean1 - mean2
t2 = (nobs1 * nobs2) / nobs_t * diff @ np.linalg.solve(combined_cov, diff)
factor = ((nobs_t - 2) * k_vars) / (nobs_t - k_vars - 1)
statistic = t2 / factor
df = (k_vars, nobs_t - 1 - k_vars)
pvalue = stats.f.sf(statistic, df[0], df[1])
return HolderTuple(statistic=statistic,
pvalue=pvalue,
df=df,
t2=t2,
distr="F") | Hotellings test for multivariate mean in two independent samples
The null hypothesis is that both samples have the same mean.
The alternative hypothesis is that means differ.
Parameters
----------
data1 : array_like
first sample data with observations in rows and variables in columns
data2 : array_like
second sample data with observations in rows and variables in columns
Returns
-------
results : instance of a results class with attributes
statistic, pvalue, t2 and df | test_mvmean_2indep | python | statsmodels/statsmodels | statsmodels/stats/multivariate.py | https://github.com/statsmodels/statsmodels/blob/master/statsmodels/stats/multivariate.py | BSD-3-Clause |
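The two-sample version pools the covariances and scales T² by the harmonic sample-size factor. A numpy-only sketch on hypothetical data where the second group is the first shifted by (1, 0):

```python
import numpy as np

# Hypothetical samples: x2 is x1 shifted by (1, 0), so the mean difference
# is exactly (-1, 0) and both sample covariances are diag(4/3, 4/3).
x1 = np.array([[0.0, 0.0], [2.0, 0.0], [0.0, 2.0], [2.0, 2.0]])
x2 = x1 + np.array([1.0, 0.0])

nobs1, k_vars = x1.shape
nobs2 = x2.shape[0]
nobs_t = nobs1 + nobs2

cov1 = np.cov(x1, rowvar=False, ddof=1)
cov2 = np.cov(x2, rowvar=False, ddof=1)
pooled = ((nobs1 - 1) * cov1 + (nobs2 - 1) * cov2) / (nobs_t - 2)

diff = x1.mean(0) - x2.mean(0)
t2 = (nobs1 * nobs2) / nobs_t * diff @ np.linalg.solve(pooled, diff)  # 1.5
factor = (nobs_t - 2) * k_vars / (nobs_t - k_vars - 1)                # 2.4
statistic = t2 / factor               # F-distributed on (2, nobs_t - 3) df
```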
def confint_mvmean(data, lin_transf=None, alpha=0.05, simult=False):
"""Confidence interval for linear transformation of a multivariate mean
Either pointwise or simultaneous confidence intervals are returned.
Parameters
----------
data : array_like
data with observations in rows and variables in columns
lin_transf : array_like or None
The linear transformation or contrast matrix for transforming the
vector of means. If this is None, then the identity matrix is used
which specifies the means themselves.
alpha : float in (0, 1)
confidence level for the confidence interval, commonly used is
alpha=0.05.
simult : bool
If ``simult`` is False (default), then the pointwise confidence
interval is returned.
Otherwise, a simultaneous confidence interval is returned.
Warning: additional simultaneous confidence intervals might be added
and the default for those might change.
Returns
-------
low : ndarray
lower confidence bound on the linear transformed
upp : ndarray
upper confidence bound on the linear transformed
values : ndarray
mean or their linear transformation, center of the confidence region
Notes
-----
Pointwise confidence interval is based on Johnson and Wichern
equation (5-21) page 224.
Simultaneous confidence interval is based on Johnson and Wichern
Result 5.3 page 225.
This looks like Scheffé simultaneous confidence intervals.
Bonferroni-corrected simultaneous confidence intervals might be added in
the future.
References
----------
Johnson, Richard A., and Dean W. Wichern. 2007. Applied Multivariate
Statistical Analysis. 6th ed. Upper Saddle River, N.J: Pearson Prentice
Hall.
"""
x = np.asarray(data)
nobs, k_vars = x.shape
if lin_transf is None:
lin_transf = np.eye(k_vars)
mean = x.mean(0)
cov = np.cov(x, rowvar=False, ddof=0)
ci = confint_mvmean_fromstats(mean, cov, nobs, lin_transf=lin_transf,
alpha=alpha, simult=simult)
return ci | Confidence interval for linear transformation of a multivariate mean
Either pointwise or simultaneous confidence intervals are returned.
Parameters
----------
data : array_like
data with observations in rows and variables in columns
lin_transf : array_like or None
The linear transformation or contrast matrix for transforming the
vector of means. If this is None, then the identity matrix is used
which specifies the means themselves.
alpha : float in (0, 1)
confidence level for the confidence interval, commonly used is
alpha=0.05.
simult : bool
If ``simult`` is False (default), then the pointwise confidence
interval is returned.
Otherwise, a simultaneous confidence interval is returned.
Warning: additional simultaneous confidence intervals might be added
and the default for those might change.
Returns
-------
low : ndarray
lower confidence bound on the linear transformed
upp : ndarray
upper confidence bound on the linear transformed
values : ndarray
mean or their linear transformation, center of the confidence region
Notes
-----
Pointwise confidence interval is based on Johnson and Wichern
equation (5-21) page 224.
Simultaneous confidence interval is based on Johnson and Wichern
Result 5.3 page 225.
This looks like Scheffé simultaneous confidence intervals.
Bonferroni-corrected simultaneous confidence intervals might be added in
the future.
References
----------
Johnson, Richard A., and Dean W. Wichern. 2007. Applied Multivariate
Statistical Analysis. 6th ed. Upper Saddle River, N.J: Pearson Prentice
Hall. | confint_mvmean | python | statsmodels/statsmodels | statsmodels/stats/multivariate.py | https://github.com/statsmodels/statsmodels/blob/master/statsmodels/stats/multivariate.py | BSD-3-Clause |
def confint_mvmean_fromstats(mean, cov, nobs, lin_transf=None, alpha=0.05,
simult=False):
"""Confidence interval for linear transformation of a multivariate mean
Either pointwise or simultaneous confidence intervals are returned.
Data is provided in the form of summary statistics, mean, cov, nobs.
Parameters
----------
mean : ndarray
cov : ndarray
nobs : int
lin_transf : array_like or None
The linear transformation or contrast matrix for transforming the
vector of means. If this is None, then the identity matrix is used
which specifies the means themselves.
alpha : float in (0, 1)
confidence level for the confidence interval, commonly used is
alpha=0.05.
simult : bool
If simult is False (default), then pointwise confidence interval is
returned.
Otherwise, a simultaneous confidence interval is returned.
Warning: additional simultaneous confidence intervals might be added
and the default for those might change.
Notes
-----
Pointwise confidence interval is based on Johnson and Wichern
equation (5-21) page 224.
Simultaneous confidence interval is based on Johnson and Wichern
Result 5.3 page 225.
This looks like Scheffé simultaneous confidence intervals.
Bonferroni-corrected simultaneous confidence intervals might be added in
the future.
References
----------
Johnson, Richard A., and Dean W. Wichern. 2007. Applied Multivariate
Statistical Analysis. 6th ed. Upper Saddle River, N.J: Pearson Prentice
Hall.
"""
mean = np.asarray(mean)
cov = np.asarray(cov)
c = np.atleast_2d(lin_transf)
k_vars = len(mean)
if simult is False:
values = c.dot(mean)
quad_form = (c * cov.dot(c.T).T).sum(1)
df = nobs - 1
t_critval = stats.t.isf(alpha / 2, df)
ci_diff = np.sqrt(quad_form / df) * t_critval
low = values - ci_diff
upp = values + ci_diff
else:
values = c.dot(mean)
quad_form = (c * cov.dot(c.T).T).sum(1)
factor = (nobs - 1) * k_vars / (nobs - k_vars) / nobs
df = (k_vars, nobs - k_vars)
f_critval = stats.f.isf(alpha, df[0], df[1])
ci_diff = np.sqrt(factor * quad_form * f_critval)
low = values - ci_diff
upp = values + ci_diff
return low, upp, values # , (f_critval, factor, quad_form, df) | Confidence interval for linear transformation of a multivariate mean
Either pointwise or simultaneous confidence intervals are returned.
Data is provided in the form of summary statistics, mean, cov, nobs.
Parameters
----------
mean : ndarray
cov : ndarray
nobs : int
lin_transf : array_like or None
The linear transformation or contrast matrix for transforming the
vector of means. If this is None, then the identity matrix is used
which specifies the means themselves.
alpha : float in (0, 1)
confidence level for the confidence interval, commonly used is
alpha=0.05.
simult : bool
If simult is False (default), then pointwise confidence interval is
returned.
Otherwise, a simultaneous confidence interval is returned.
Warning: additional simultaneous confidence intervals might be added
and the default for those might change.
Notes
-----
Pointwise confidence interval is based on Johnson and Wichern
equation (5-21) page 224.
Simultaneous confidence interval is based on Johnson and Wichern
Result 5.3 page 225.
This looks like Scheffé simultaneous confidence intervals.
Bonferroni-corrected simultaneous confidence intervals might be added in
the future.
References
----------
Johnson, Richard A., and Dean W. Wichern. 2007. Applied Multivariate
Statistical Analysis. 6th ed. Upper Saddle River, N.J: Pearson Prentice
Hall. | confint_mvmean_fromstats | python | statsmodels/statsmodels | statsmodels/stats/multivariate.py | https://github.com/statsmodels/statsmodels/blob/master/statsmodels/stats/multivariate.py | BSD-3-Clause |
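The expression `(c * cov.dot(c.T).T).sum(1)` used above computes the per-row quadratic forms c_i' Σ c_i, i.e. the diagonal of C Σ Cᵀ, without forming the off-diagonal entries. A small numpy check of that identity (the matrices are illustrative, not from the statsmodels tests):

```python
import numpy as np

cov = np.array([[2.0, 0.5], [0.5, 1.0]])
# rows: first mean, second mean, and their difference as a contrast
c = np.array([[1.0, 0.0], [0.0, 1.0], [1.0, -1.0]])

quad_form = (c * cov.dot(c.T).T).sum(1)   # row-wise c_i' Sigma c_i
full = np.diag(c @ cov @ c.T)             # same values via the full product

print(quad_form)  # [2. 1. 2.]
```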
def test_cov(cov, nobs, cov_null):
"""One sample hypothesis test for covariance equal to null covariance
The Null hypothesis is that cov = cov_null, against the alternative that
it is not equal to cov_null
Parameters
----------
cov : array_like
Covariance matrix of the data, estimated with denominator ``(N - 1)``,
i.e. `ddof=1`.
nobs : int
number of observations used in the estimation of the covariance
cov_null : nd_array
covariance under the null hypothesis
Returns
-------
res : instance of HolderTuple
results with ``statistic, pvalue`` and other attributes like ``df``
References
----------
Bartlett, M. S. 1954. “A Note on the Multiplying Factors for Various Χ2
Approximations.” Journal of the Royal Statistical Society. Series B
(Methodological) 16 (2): 296–98.
Rencher, Alvin C., and William F. Christensen. 2012. Methods of
Multivariate Analysis: Rencher/Methods. Wiley Series in Probability and
Statistics. Hoboken, NJ, USA: John Wiley & Sons, Inc.
https://doi.org/10.1002/9781118391686.
StataCorp, L. P. Stata Multivariate Statistics: Reference Manual.
Stata Press Publication.
"""
# using Stata formulas where cov_sample use nobs in denominator
# Bartlett 1954 has fewer terms
S = np.asarray(cov) * (nobs - 1) / nobs
S0 = np.asarray(cov_null)
k = cov.shape[0]
n = nobs
fact = nobs - 1.
fact *= 1 - (2 * k + 1 - 2 / (k + 1)) / (6 * (n - 1) - 1)
fact2 = _logdet(S0) - _logdet(n / (n - 1) * S)
fact2 += np.trace(n / (n - 1) * np.linalg.solve(S0, S)) - k
statistic = fact * fact2
df = k * (k + 1) / 2
pvalue = stats.chi2.sf(statistic, df)
return HolderTuple(statistic=statistic,
pvalue=pvalue,
df=df,
distr="chi2",
null="equal value",
cov_null=cov_null
) | One sample hypothesis test for covariance equal to null covariance
The Null hypothesis is that cov = cov_null, against the alternative that
it is not equal to cov_null
Parameters
----------
cov : array_like
Covariance matrix of the data, estimated with denominator ``(N - 1)``,
i.e. `ddof=1`.
nobs : int
number of observations used in the estimation of the covariance
cov_null : nd_array
covariance under the null hypothesis
Returns
-------
res : instance of HolderTuple
results with ``statistic, pvalue`` and other attributes like ``df``
References
----------
Bartlett, M. S. 1954. “A Note on the Multiplying Factors for Various Χ2
Approximations.” Journal of the Royal Statistical Society. Series B
(Methodological) 16 (2): 296–98.
Rencher, Alvin C., and William F. Christensen. 2012. Methods of
Multivariate Analysis: Rencher/Methods. Wiley Series in Probability and
Statistics. Hoboken, NJ, USA: John Wiley & Sons, Inc.
https://doi.org/10.1002/9781118391686.
StataCorp, L. P. Stata Multivariate Statistics: Reference Manual.
Stata Press Publication. | test_cov | python | statsmodels/statsmodels | statsmodels/stats/multivariate.py | https://github.com/statsmodels/statsmodels/blob/master/statsmodels/stats/multivariate.py | BSD-3-Clause |
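As a sanity check on the discrepancy term: when the sample covariance equals the null covariance, the log-determinant and trace contributions cancel exactly and the statistic is zero. A numpy-only sketch (`np.linalg.slogdet` stands in for the module's `_logdet` helper; the `n/(n-1)` factors in the function cancel against the `(n-1)/n` rescaling of the sample covariance, so they drop out here):

```python
import numpy as np

def logdet(m):
    # log-determinant via slogdet for numerical stability
    sign, ld = np.linalg.slogdet(m)
    return ld

cov = np.array([[2.0, 0.3], [0.3, 1.0]])   # sample covariance (ddof=1)
cov_null = cov.copy()                      # null equals the estimate
k = cov.shape[0]

fact2 = logdet(cov_null) - logdet(cov)
fact2 += np.trace(np.linalg.solve(cov_null, cov)) - k
print(fact2)  # zero up to floating point
```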
def _get_blocks(mat, block_len):
"""get diagonal blocks from matrix
"""
k = len(mat)
idx = np.cumsum(block_len)
if idx[-1] == k:
idx = idx[:-1]
elif idx[-1] > k:
raise ValueError("sum of block_len larger than shape of mat")
else:
# allow one missing block that is the remainder
pass
idx_blocks = np.split(np.arange(k), idx)
blocks = []
for ii in idx_blocks:
blocks.append(mat[ii[:, None], ii])
return blocks, idx_blocks | get diagonal blocks from matrix | _get_blocks | python | statsmodels/statsmodels | statsmodels/stats/multivariate.py | https://github.com/statsmodels/statsmodels/blob/master/statsmodels/stats/multivariate.py | BSD-3-Clause |
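The fancy-indexing step `mat[ii[:, None], ii]` selects the square sub-block with rows and columns in `ii`. A minimal numpy-only sketch of the same extraction on a 4x4 matrix with two 2x2 blocks:

```python
import numpy as np

mat = np.arange(16).reshape(4, 4)
block_len = [2, 2]

idx = np.cumsum(block_len)[:-1]            # interior split points, here [2]
idx_blocks = np.split(np.arange(4), idx)   # [array([0, 1]), array([2, 3])]
blocks = [mat[ii[:, None], ii] for ii in idx_blocks]

print(blocks[0])  # top-left 2x2 block: rows/cols 0-1
print(blocks[1])  # bottom-right 2x2 block: rows/cols 2-3
```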
def gof_chisquare_discrete(distfn, arg, rvs, alpha, msg):
'''perform chisquare test for random sample of a discrete distribution
Parameters
----------
distfn : distribution instance
discrete distribution with a ``cdf`` method and support bounds ``a`` and ``b``
rvs : ndarray
random sample to be tested
arg : sequence
parameters of distribution
alpha : float
significance level, threshold for p-value
Returns
-------
result : tuple
chisquare statistic, pvalue, a flag that is true if the test passes
(pvalue > alpha), and a descriptive message
Notes
-----
originally written for scipy.stats test suite,
still needs to be checked for standalone usage, insufficient input checking
may not run yet (after copy/paste)
refactor: maybe a class, check returns, or separate binning from
test results
'''
# define parameters for test
## n=2000
n = len(rvs)
nsupp = 20
wsupp = 1.0/nsupp
## distfn = getattr(stats, distname)
## np.random.seed(9765456)
## rvs = distfn.rvs(size=n,*arg)
# construct intervals with minimum mass 1/nsupp
# intervals are left-half-open as in a cdf difference
distsupport = list(range(max(distfn.a, -1000), min(distfn.b, 1000) + 1))
last = 0
distsupp = [max(distfn.a, -1000)]
distmass = []
for ii in distsupport:
current = distfn.cdf(ii,*arg)
if current - last >= wsupp-1e-14:
distsupp.append(ii)
distmass.append(current - last)
last = current
if current > (1-wsupp):
break
if distsupp[-1] < distfn.b:
distsupp.append(distfn.b)
distmass.append(1-last)
distsupp = np.array(distsupp)
distmass = np.array(distmass)
# convert intervals to right-half-open as required by histogram
histsupp = distsupp+1e-8
histsupp[0] = distfn.a
# find sample frequencies and perform chisquare test
#TODO: move to compatibility.py
freq, hsupp = np.histogram(rvs,histsupp)
# cdfs = distfn.cdf(distsupp,*arg)
(chis,pval) = stats.chisquare(np.array(freq),n*distmass)
return chis, pval, (pval > alpha), 'chisquare - test for %s ' \
'at arg = %s with pval = %s' % (msg, str(arg), str(pval))
Parameters
----------
distfn : distribution instance
discrete distribution with a ``cdf`` method and support bounds ``a`` and ``b``
rvs : ndarray
random sample to be tested
arg : sequence
parameters of distribution
alpha : float
significance level, threshold for p-value
Returns
-------
result : tuple
chisquare statistic, pvalue, a flag that is true if the test passes
(pvalue > alpha), and a descriptive message
Notes
-----
originally written for scipy.stats test suite,
still needs to be checked for standalone usage, insufficient input checking
may not run yet (after copy/paste)
refactor: maybe a class, check returns, or separate binning from
test results | gof_chisquare_discrete | python | statsmodels/statsmodels | statsmodels/stats/gof.py | https://github.com/statsmodels/statsmodels/blob/master/statsmodels/stats/gof.py | BSD-3-Clause |
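The final `stats.chisquare(freq, n*distmass)` call reduces to the Pearson statistic, which is easy to verify by hand. A numpy-only sketch with illustrative counts (the p-value step is omitted to stay self-contained):

```python
import numpy as np

# Observed counts in 6 bins vs. a uniform expectation of 20 per bin.
freq = np.array([18.0, 22.0, 20.0, 20.0, 20.0, 20.0])
expfreq = np.full(6, 20.0)

chis = ((freq - expfreq) ** 2 / expfreq).sum()   # Pearson chi-square statistic
print(chis)  # 0.4: only the first two bins contribute (4 + 4) / 20
```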
def gof_binning_discrete(rvs, distfn, arg, nsupp=20):
'''get bins for chisquare type gof tests for a discrete distribution
Parameters
----------
rvs : ndarray
sample data
distname : str
name of distribution function
arg : sequence
parameters of distribution
nsupp : int
number of bins. The algorithm tries to find bins with equal weights.
depending on the distribution, the actual number of bins can be smaller.
Returns
-------
freq : ndarray
empirical frequencies for sample; not normalized, adds up to sample size
expfreq : ndarray
theoretical frequencies according to distribution
histsupp : ndarray
bin boundaries for histogram, (added 1e-8 for numerical robustness)
Notes
-----
The results can be used for a chisquare test ::
(chis,pval) = stats.chisquare(freq, expfreq)
originally written for scipy.stats test suite,
still needs to be checked for standalone usage, insufficient input checking
may not run yet (after copy/paste)
refactor: maybe a class, check returns, or separate binning from
test results
todo :
optimal number of bins ? (check easyfit),
recommendation in literature at least 5 expected observations in each bin
'''
# define parameters for test
## n=2000
n = len(rvs)
wsupp = 1.0/nsupp
## distfn = getattr(stats, distname)
## np.random.seed(9765456)
## rvs = distfn.rvs(size=n,*arg)
# construct intervals with minimum mass 1/nsupp
# intervals are left-half-open as in a cdf difference
distsupport = list(range(max(distfn.a, -1000), min(distfn.b, 1000) + 1))
last = 0
distsupp = [max(distfn.a, -1000)]
distmass = []
for ii in distsupport:
current = distfn.cdf(ii,*arg)
if current - last >= wsupp-1e-14:
distsupp.append(ii)
distmass.append(current - last)
last = current
if current > (1-wsupp):
break
if distsupp[-1] < distfn.b:
distsupp.append(distfn.b)
distmass.append(1-last)
distsupp = np.array(distsupp)
distmass = np.array(distmass)
# convert intervals to right-half-open as required by histogram
histsupp = distsupp+1e-8
histsupp[0] = distfn.a
# find sample frequencies and perform chisquare test
freq,hsupp = np.histogram(rvs,histsupp)
#freq,hsupp = np.histogram(rvs,histsupp,new=True)
distfn.cdf(distsupp,*arg)
return np.array(freq), n*distmass, histsupp | get bins for chisquare type gof tests for a discrete distribution
Parameters
----------
rvs : ndarray
sample data
distname : str
name of distribution function
arg : sequence
parameters of distribution
nsupp : int
number of bins. The algorithm tries to find bins with equal weights.
depending on the distribution, the actual number of bins can be smaller.
Returns
-------
freq : ndarray
empirical frequencies for sample; not normalized, adds up to sample size
expfreq : ndarray
theoretical frequencies according to distribution
histsupp : ndarray
bin boundaries for histogram, (added 1e-8 for numerical robustness)
Notes
-----
The results can be used for a chisquare test ::
(chis,pval) = stats.chisquare(freq, expfreq)
originally written for scipy.stats test suite,
still needs to be checked for standalone usage, insufficient input checking
may not run yet (after copy/paste)
refactor: maybe a class, check returns, or separate binning from
test results
todo :
optimal number of bins ? (check easyfit),
recommendation in literature at least 5 expected observations in each bin | gof_binning_discrete | python | statsmodels/statsmodels | statsmodels/stats/gof.py | https://github.com/statsmodels/statsmodels/blob/master/statsmodels/stats/gof.py | BSD-3-Clause |
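The binning loop above greedily closes a bin whenever accumulated cdf mass reaches `wsupp`, then closes a remainder bin at the upper support bound. A self-contained trace on a fair six-sided die (cdf(i) = i/6, support 1..6), asking for 3 bins — hypothetical inputs, not from the statsmodels tests:

```python
# Fair six-sided die, target of nsupp = 3 roughly equal-mass bins.
nsupp = 3
wsupp = 1.0 / nsupp

def cdf(i):
    return i / 6.0

last = 0.0
distsupp = [0]           # lower support bound (just below the first value)
distmass = []
for ii in range(1, 7):
    current = cdf(ii)
    if current - last >= wsupp - 1e-14:
        distsupp.append(ii)
        distmass.append(current - last)
        last = current
    if current > 1 - wsupp:
        break
if distsupp[-1] < 6:     # close the remainder bin at the upper bound
    distsupp.append(6)
    distmass.append(1 - last)

print(distsupp)   # [0, 2, 4, 6]
print(distmass)   # three masses of about 1/3 each
```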
def chisquare(f_obs, f_exp=None, value=0, ddof=0, return_basic=True):
'''chisquare goodness-of-fit test
The null hypothesis is that the distance between the expected distribution
and the observed frequencies is ``value``. The alternative hypothesis is
that the distance is larger than ``value``. ``value`` is normalized in
terms of effect size.
The standard chisquare test has the null hypothesis that ``value=0``, that
is the distributions are the same.
Notes
-----
The case with value greater than zero is similar to an equivalence test,
that the exact null hypothesis is replaced by an approximate hypothesis.
However, TOST "reverses" null and alternative hypothesis, while here the
alternative hypothesis is that the distance (divergence) is larger than a
threshold.
References
----------
McLaren, ...
Drost,...
See Also
--------
powerdiscrepancy
scipy.stats.chisquare
'''
f_obs = np.asarray(f_obs)
n_bins = len(f_obs)
nobs = f_obs.sum(0)
if f_exp is None:
# uniform distribution
f_exp = np.empty(n_bins, float)
f_exp.fill(nobs / float(n_bins))
f_exp = np.asarray(f_exp, float)
chisq = ((f_obs - f_exp)**2 / f_exp).sum(0)
if value == 0:
pvalue = stats.chi2.sf(chisq, n_bins - 1 - ddof)
else:
pvalue = stats.ncx2.sf(chisq, n_bins - 1 - ddof, value**2 * nobs)
if return_basic:
return chisq, pvalue
else:
return chisq, pvalue #TODO: replace with TestResults

chisquare | python | statsmodels/statsmodels | statsmodels/stats/gof.py | https://github.com/statsmodels/statsmodels/blob/master/statsmodels/stats/gof.py | BSD-3-Clause
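In the ``value == 0`` branch, the computation above reduces to Pearson's chi-square against uniform expected frequencies. A minimal standalone sketch with made-up observed counts (this is the core arithmetic, not the statsmodels API):

```python
import numpy as np
from scipy import stats

f_obs = np.array([30, 20, 25, 25])
n_bins = f_obs.size
nobs = f_obs.sum()
f_exp = np.full(n_bins, nobs / n_bins)     # uniform null distribution

chisq = ((f_obs - f_exp) ** 2 / f_exp).sum()
pvalue = stats.chi2.sf(chisq, n_bins - 1)  # value == 0 case, ddof = 0
```

With ``value > 0`` the same statistic would instead be referred to a noncentral chi-square, as in the function body.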
def chisquare_power(effect_size, nobs, n_bins, alpha=0.05, ddof=0):
'''power of chisquare goodness of fit test
effect size is sqrt of chisquare statistic divided by nobs
Parameters
----------
effect_size : float
This is the deviation from the Null of the normalized chi_square
statistic. This follows Cohen's definition (sqrt).
nobs : int or float
number of observations
n_bins : int (or float)
number of bins, or points in the discrete distribution
alpha : float in (0,1)
significance level of the test, default alpha=0.05
Returns
-------
power : float
power of the test at given significance level at effect size
Notes
-----
This function also works vectorized if all arguments broadcast.
This can also be used to calculate the power for power divergence test.
However, for the range of more extreme values of the power divergence
parameter, this power is not a very good approximation for samples of
small to medium size (Drost et al. 1989)
References
----------
Drost, ...
See Also
--------
chisquare_effectsize
statsmodels.stats.GofChisquarePower
'''
crit = stats.chi2.isf(alpha, n_bins - 1 - ddof)
power = stats.ncx2.sf(crit, n_bins - 1 - ddof, effect_size**2 * nobs)
return power

chisquare_power | python | statsmodels/statsmodels | statsmodels/stats/gof.py | https://github.com/statsmodels/statsmodels/blob/master/statsmodels/stats/gof.py | BSD-3-Clause
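The two-line body of ``chisquare_power`` is easy to reproduce directly with scipy; the effect size, sample size, and bin count below are illustrative numbers chosen to land in a high-power region:

```python
from scipy import stats

effect_size, nobs, n_bins, alpha = 0.3, 200, 5, 0.05

# critical value under the central chi2, power under the noncentral chi2
crit = stats.chi2.isf(alpha, n_bins - 1)
power = stats.ncx2.sf(crit, n_bins - 1, effect_size**2 * nobs)
```

Cohen's ``w = 0.3`` ("medium") with 200 observations and 4 degrees of freedom gives power around 0.95, consistent with published power tables.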
def chisquare_effectsize(probs0, probs1, correction=None, cohen=True, axis=0):
'''effect size for a chisquare goodness-of-fit test
Parameters
----------
probs0 : array_like
probabilities or cell frequencies under the Null hypothesis
probs1 : array_like
probabilities or cell frequencies under the Alternative hypothesis
probs0 and probs1 need to have the same length in the ``axis`` dimension
and broadcast in the other dimensions.
Both probs0 and probs1 are normalized to add to one (in the ``axis``
dimension).
correction : None or tuple
If None, then the effect size is the chisquare statistic divided by
the number of observations.
If the correction is a tuple (nobs, df), then the effectsize is
corrected to have less bias and a smaller variance. However, the
correction can make the effectsize negative. In that case, the
effectsize is set to zero.
Pederson and Johnson (1990) as referenced in McLaren et al. (1994)
cohen : bool
If True, then the square root is returned as in the definition of the
effect size by Cohen (1977). If False, then the original effect size
is returned.
axis : int
If the probability arrays broadcast to more than 1 dimension, then
this is the axis over which the sums are taken.
Returns
-------
effectsize : float
effect size of chisquare test
'''
probs0 = np.asarray(probs0, float)
probs1 = np.asarray(probs1, float)
probs0 = probs0 / probs0.sum(axis)
probs1 = probs1 / probs1.sum(axis)
d2 = ((probs1 - probs0)**2 / probs0).sum(axis)
if correction is not None:
nobs, df = correction
diff = ((probs1 - probs0) / probs0).sum(axis)
d2 = np.maximum((d2 * nobs - diff - df) / (nobs - 1.), 0)
if cohen:
return np.sqrt(d2)
else:
return d2

chisquare_effectsize | python | statsmodels/statsmodels | statsmodels/stats/gof.py | https://github.com/statsmodels/statsmodels/blob/master/statsmodels/stats/gof.py | BSD-3-Clause
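Without the ``correction`` tuple, the effect size computed above is just Cohen's w; a short sketch with illustrative probabilities:

```python
import numpy as np

probs0 = np.full(4, 0.25)                       # null: uniform over 4 cells
probs1 = np.array([0.30, 0.20, 0.25, 0.25])     # alternative (illustrative)

d2 = ((probs1 - probs0) ** 2 / probs0).sum()    # normalized chi-square distance
w = np.sqrt(d2)                                 # Cohen's effect size
```

The resulting ``w`` can be passed straight to ``chisquare_power`` as ``effect_size``.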
def pval_corrected(self, method=None):
'''p-values corrected for multiple testing problem
This uses the default p-value correction of the instance stored in
``self.multitest_method`` if method is None.
'''
import statsmodels.stats.multitest as smt
if method is None:
method = self.multitest_method
# TODO: breaks with method=None
return smt.multipletests(self.pvals_raw, method=method)[1]

pval_corrected | python | statsmodels/statsmodels | statsmodels/stats/base.py | https://github.com/statsmodels/statsmodels/blob/master/statsmodels/stats/base.py | BSD-3-Clause
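``multipletests`` supports many correction methods; to show what element ``[1]`` (the corrected p-values) contains, here are hand-rolled Bonferroni and Holm adjustments on illustrative raw p-values — a sketch of the arithmetic, not the statsmodels implementation:

```python
import numpy as np

pvals_raw = np.array([0.01, 0.02, 0.30, 0.60])
m = pvals_raw.size

# Bonferroni: scale by the number of tests, clip at 1
pvals_bonf = np.minimum(pvals_raw * m, 1.0)

# Holm step-down: sort, scale by (m - rank), enforce monotonicity, clip at 1
order = np.argsort(pvals_raw)
scaled = pvals_raw[order] * (m - np.arange(m))
pvals_holm = np.empty_like(pvals_raw)
pvals_holm[order] = np.minimum(np.maximum.accumulate(scaled), 1.0)
```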
def pval_table(self):
'''create a (n_levels, n_levels) array with corrected p_values
this needs to improve, similar to R pairwise output
'''
k = self.n_levels
pvals_mat = np.zeros((k, k))
# if we do not assume we have all pairs
pvals_mat[lzip(*self.all_pairs)] = self.pval_corrected()
return pvals_mat

pval_table | python | statsmodels/statsmodels | statsmodels/stats/base.py | https://github.com/statsmodels/statsmodels/blob/master/statsmodels/stats/base.py | BSD-3-Clause
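The upper-triangle fill in ``pval_table`` can be sketched with plain numpy fancy indexing; the level count and corrected p-values below are made up:

```python
import numpy as np
from itertools import combinations

k = 3                                           # n_levels
all_pairs = list(combinations(range(k), 2))     # [(0, 1), (0, 2), (1, 2)]
pvals_corr = np.array([0.01, 0.20, 0.50])       # illustrative corrected p-values

pvals_mat = np.zeros((k, k))
# zip(*all_pairs) yields the row and column index tuples for the upper triangle
pvals_mat[tuple(zip(*all_pairs))] = pvals_corr
```

The ``lzip`` helper in the source is just ``list(zip(...))``; wrapping the index in ``tuple`` makes the fancy indexing explicit.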
def summary(self):
'''returns text summarizing the results
uses the default pvalue correction of the instance stored in
``self.multitest_method``
'''
import statsmodels.stats.multitest as smt
maxlevel = max(len(ss) for ss in self.all_pairs_names)
text = ('Corrected p-values using %s p-value correction\n\n'
% smt.multitest_methods_names[self.multitest_method])
text += 'Pairs' + (' ' * (maxlevel - 5 + 1)) + 'p-values\n'
text += '\n'.join(f'{pairs} {pv:6.4g}' for (pairs, pv) in
zip(self.all_pairs_names, self.pval_corrected()))
return text

summary | python | statsmodels/statsmodels | statsmodels/stats/base.py | https://github.com/statsmodels/statsmodels/blob/master/statsmodels/stats/base.py | BSD-3-Clause
def dispersion_poisson(results):
"""Score/LM type tests for Poisson variance assumptions
.. deprecated:: 0.14
dispersion_poisson moved to discrete._diagnostic_count
Null Hypothesis is
H0: var(y) = E(y) and assuming E(y) is correctly specified
H1: var(y) ~= E(y)
The tests are based on the constrained model, i.e. the Poisson model.
The tests differ in their assumed alternatives, and in their maintained
assumptions.
Parameters
----------
results : Poisson results instance
This can be a results instance for either a discrete Poisson or a GLM
with family Poisson.
Returns
-------
res : ndarray, shape (7, 2)
each row contains the test statistic and p-value for one of the 7 tests
computed here.
description : 2-D list of strings
Each test has two strings a descriptive name and a string for the
alternative hypothesis.
"""
msg = (
'dispersion_poisson here is deprecated, use the version in '
'discrete._diagnostic_count'
)
warnings.warn(msg, FutureWarning)
from statsmodels.discrete._diagnostics_count import test_poisson_dispersion
return test_poisson_dispersion(results, _old=True)

dispersion_poisson | python | statsmodels/statsmodels | statsmodels/stats/_diagnostic_other.py | https://github.com/statsmodels/statsmodels/blob/master/statsmodels/stats/_diagnostic_other.py | BSD-3-Clause
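The replacement ``test_poisson_dispersion`` works from a fitted model. To illustrate the H0: var(y) = E(y) idea only, here is a score-type statistic with the mean treated as known — a simplified assumption for the sketch; the library estimates the mean from the fitted Poisson model. Under a Poisson null, ``(y - mu)**2 - y`` has mean zero and variance ``2 * mu**2``:

```python
import numpy as np
from scipy import stats

rng = np.random.default_rng(1)
mu = 3.0
y = rng.poisson(mu, size=1000)

# standardized dispersion contributions: mean 0, variance 1 under H0
z = ((y - mu) ** 2 - y) / (mu * np.sqrt(2.0))
stat = z.sum() / np.sqrt(y.size)        # asymptotically N(0, 1) under H0
pval = 2 * stats.norm.sf(abs(stat))     # two-sided: var(y) != E(y)
```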
def dispersion_poisson_generic(results, exog_new_test, exog_new_control=None,
include_score=False, use_endog=True,
cov_type='HC3', cov_kwds=None, use_t=False):
"""A variable addition test for the variance function
.. deprecated:: 0.14
dispersion_poisson_generic moved to discrete._diagnostic_count
This uses an artificial regression to calculate a variant of an LM or
generalized score test for the specification of the variance assumption
in a Poisson model. The performed test is a Wald test on the coefficients
of the `exog_new_test`.
Warning: insufficiently tested, especially for options
"""
msg = (
'dispersion_poisson_generic here is deprecated, use the version in '
'discrete._diagnostic_count'
)
warnings.warn(msg, FutureWarning)
from statsmodels.discrete._diagnostics_count import (
_test_poisson_dispersion_generic,
)
res_test = _test_poisson_dispersion_generic(
results, exog_new_test, exog_new_control= exog_new_control,
include_score=include_score, use_endog=use_endog,
cov_type=cov_type, cov_kwds=cov_kwds, use_t=use_t,
)
return res_test

dispersion_poisson_generic | python | statsmodels/statsmodels | statsmodels/stats/_diagnostic_other.py | https://github.com/statsmodels/statsmodels/blob/master/statsmodels/stats/_diagnostic_other.py | BSD-3-Clause
def lm_test_glm(result, exog_extra, mean_deriv=None):
'''score/lagrange multiplier test for GLM
Wooldridge procedure for test of mean function in GLM
Parameters
----------
result : GLMResults instance
results instance with the constrained model
exog_extra : ndarray or None
additional exogenous variables for variable addition test
This can be set to None if mean_deriv is provided.
mean_deriv : None or ndarray
Extra moment condition that corresponds to the partial derivative of
a mean function with respect to some parameters.
Returns
-------
test_results : Results instance
The results instance has the following attributes which are score
statistic and p-value for 3 versions of the score test.
c1, pval1 : nonrobust score_test results
c2, pval2 : score test results robust to over or under dispersion
c3, pval3 : score test results fully robust to any heteroscedasticity
The test results instance also has a simple summary method.
Notes
-----
TODO: add `df` to results and make df detection more robust
This implements the auxiliary regression procedure of Wooldridge,
implemented based on the presentation in chapter 8 in Handbook of
Applied Econometrics 2.
References
----------
Wooldridge, Jeffrey M. 1997. “Quasi-Likelihood Methods for Count Data.”
Handbook of Applied Econometrics 2: 352–406.
and other articles and text book by Wooldridge
'''
if hasattr(result, '_result'):
res = result._result
else:
res = result
mod = result.model
nobs = mod.endog.shape[0]
#mean_func = mod.family.link.inverse
dlinkinv = mod.family.link.inverse_deriv
# derivative of mean function w.r.t. beta (linear params)
def dm(x, linpred):
return dlinkinv(linpred)[:,None] * x
var_func = mod.family.variance
x = result.model.exog
x2 = exog_extra
# test omitted
try:
lin_pred = res.predict(which="linear")
except TypeError:
# TODO: Standardized to which="linear" and remove linear kwarg
lin_pred = res.predict(linear=True)
dm_incl = dm(x, lin_pred)
if x2 is not None:
dm_excl = dm(x2, lin_pred)
if mean_deriv is not None:
# allow both and stack
dm_excl = np.column_stack((dm_excl, mean_deriv))
elif mean_deriv is not None:
dm_excl = mean_deriv
else:
raise ValueError('either exog_extra or mean_deriv have to be provided')
# TODO check for rank or redundant, note OLS calculates the rank
k_constraint = dm_excl.shape[1]
fittedvalues = res.predict() # discrete has linpred instead of mean
v = var_func(fittedvalues)
std = np.sqrt(v)
res_ols1 = OLS(res.resid_response / std, np.column_stack((dm_incl, dm_excl)) / std[:, None]).fit()
# case: nonrobust assumes variance implied by distribution is correct
c1 = res_ols1.ess
pval1 = stats.chi2.sf(c1, k_constraint)
#print c1, stats.chi2.sf(c1, 2)
# case: robust to dispersion
c2 = nobs * res_ols1.rsquared
pval2 = stats.chi2.sf(c2, k_constraint)
#print c2, stats.chi2.sf(c2, 2)
# case: robust to heteroscedasticity
from statsmodels.stats.multivariate_tools import partial_project
pp = partial_project(dm_excl / std[:,None], dm_incl / std[:,None])
resid_p = res.resid_response / std
res_ols3 = OLS(np.ones(nobs), pp.resid * resid_p[:,None]).fit()
#c3 = nobs * res_ols3.rsquared # this is Wooldridge
c3b = res_ols3.ess # simpler if endog is ones
pval3 = stats.chi2.sf(c3b, k_constraint)
tres = TestResults(c1=c1, pval1=pval1,
c2=c2, pval2=pval2,
c3=c3b, pval3=pval3)
return tres

lm_test_glm | python | statsmodels/statsmodels | statsmodels/stats/_diagnostic_other.py | https://github.com/statsmodels/statsmodels/blob/master/statsmodels/stats/_diagnostic_other.py | BSD-3-Clause
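The "robust to dispersion" case above boils down to an n·R² statistic from an auxiliary regression of standardized residuals on included and excluded mean derivatives. A generic numpy-only sketch with simulated data under the null — the single extra regressor and all names are illustrative, not the GLM-specific derivatives used in ``lm_test_glm``:

```python
import numpy as np
from scipy import stats

rng = np.random.default_rng(2)
n = 200
x = np.column_stack([np.ones(n), rng.normal(size=n)])  # included regressors
u = rng.normal(size=n)                                 # restricted-model residuals
z = np.column_stack([x, rng.normal(size=n)])           # add one test regressor

# auxiliary OLS of u on z
beta, *_ = np.linalg.lstsq(z, u, rcond=None)
fitted = z @ beta
r2 = 1.0 - ((u - fitted) ** 2).sum() / ((u - u.mean()) ** 2).sum()

lm_stat = n * r2                     # the n * rsquared form of the score test
pval = stats.chi2.sf(lm_stat, 1)     # df = number of extra regressors
```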
def cm_test_robust(resid, resid_deriv, instruments, weights=1):
'''score/lagrange multiplier of Wooldridge
generic version of Wooldridge procedure for test of conditional moments
Limitation: This version allows only for one unconditional moment
restriction, i.e. resid is scalar for each observation.
Another limitation is that it assumes independent observations: no
correlation in residuals, and weights cannot be replaced by
cross-observation whitening.
Parameters
----------
resid : ndarray, (nobs, )
conditional moment restriction, E(r | x, params) = 0
resid_deriv : ndarray, (nobs, k_params)
derivative of conditional moment restriction with respect to parameters
instruments : ndarray, (nobs, k_instruments)
indicator variables of Wooldridge, multiply the conditional moment
restriction
weights : ndarray
This is a weights function as used in WLS. The moment
restrictions are multiplied by weights. This corresponds to the
inverse of the variance in a heteroskedastic model.
Returns
-------
test_results : Results instance
??? TODO
Notes
-----
This implements the auxiliary regression procedure of Wooldridge,
implemented based on procedure 2.1 in Wooldridge 1990.
Wooldridge allows for multivariate conditional moments (`resid`)
TODO: check dimensions for multivariate case for extension
References
----------
Wooldridge
Wooldridge
and more Wooldridge
'''
# notation: Wooldridge uses too many Greek letters
# instruments is capital lambda
# resid is small phi
# resid_deriv is capital phi
# weights is C
nobs = resid.shape[0]
from statsmodels.stats.multivariate_tools import partial_project
w_sqrt = np.sqrt(weights)
if np.size(weights) > 1:
w_sqrt = w_sqrt[:,None]
pp = partial_project(instruments * w_sqrt, resid_deriv * w_sqrt)
mom_resid = pp.resid
moms_test = mom_resid * resid[:, None] * w_sqrt
# we get this here in case we extend resid to be more than 1-D
k_constraint = moms_test.shape[1]
# use OPG variance as in Wooldridge 1990. This might generalize
cov = moms_test.T.dot(moms_test)
diff = moms_test.sum(0)
# see Wooldridge last page in appendix
stat = diff.dot(np.linalg.solve(cov, diff))
# for checking, this corresponds to nobs * rsquared of auxiliary regression
stat2 = OLS(np.ones(nobs), moms_test).fit().ess
pval = stats.chi2.sf(stat, k_constraint)
return stat, pval, stat2

cm_test_robust | python | statsmodels/statsmodels | statsmodels/stats/_diagnostic_other.py | https://github.com/statsmodels/statsmodels/blob/master/statsmodels/stats/_diagnostic_other.py | BSD-3-Clause
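The ``partial_project`` step used in both ``lm_test_glm`` and ``cm_test_robust`` is an OLS residual-maker: it removes from one set of columns everything explained by another. A minimal numpy stand-in (the helper name and data are illustrative, not the statsmodels function):

```python
import numpy as np

def partial_project_resid(a, b):
    """Residual of each column of `a` after projecting on the columns of `b`."""
    coef, *_ = np.linalg.lstsq(b, a, rcond=None)
    return a - b @ coef

rng = np.random.default_rng(3)
b = rng.normal(size=(100, 2))
a = b @ np.array([[1.0], [2.0]]) + rng.normal(size=(100, 1))
resid = partial_project_resid(a, b)   # orthogonal to the columns of b
```

Orthogonality of the residual to ``b`` is what makes the subsequent score statistic robust to misspecification in the projected-out directions.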