# --- statsmodels/emplike/descriptive.py (statsmodels/statsmodels, BSD-3-Clause) ---
def ci_mean(self, sig=.05, method='gamma', epsilon=10 ** -8,
gamma_low=-10 ** 10, gamma_high=10 ** 10):
"""
Returns the confidence interval for the mean.
Parameters
----------
sig : float
significance level. Default is .05
method : str
Root finding method, Can be 'nested-brent' or
'gamma'. Default is 'gamma'
'gamma' Tries to solve for the gamma parameter in the
Lagrange (see Owen pg 22) and then determine the weights.
'nested brent' uses brents method to find the confidence
intervals but must maximize the likelihood ratio on every
iteration.
gamma is generally much faster. If the optimizations does not
converge, try expanding the gamma_high and gamma_low
variable.
gamma_low : float
Lower bound for gamma when finding lower limit.
If function returns f(a) and f(b) must have different signs,
consider lowering gamma_low.
gamma_high : float
Upper bound for gamma when finding upper limit.
If function returns f(a) and f(b) must have different signs,
consider raising gamma_high.
epsilon : float
When using 'nested-brent', amount to decrease (increase)
from the maximum (minimum) of the data when
starting the search. This is to protect against the
likelihood ratio being zero at the maximum (minimum)
value of the data. If data is very small in absolute value
(<10 ``**`` -6) consider shrinking epsilon
When using 'gamma', amount to decrease (increase) the
minimum (maximum) by to start the search for gamma.
If function returns f(a) and f(b) must have different signs,
consider lowering epsilon.
Returns
-------
Interval : tuple
Confidence interval for the mean
"""
endog = self.endog
sig = 1 - sig
if method == 'nested-brent':
self.r0 = chi2.ppf(sig, 1)
middle = np.mean(endog)
epsilon_u = (max(endog) - np.mean(endog)) * epsilon
epsilon_l = (np.mean(endog) - min(endog)) * epsilon
ulim = optimize.brentq(self._ci_limits_mu, middle,
max(endog) - epsilon_u)
llim = optimize.brentq(self._ci_limits_mu, middle,
min(endog) + epsilon_l)
return llim, ulim
if method == 'gamma':
self.r0 = chi2.ppf(sig, 1)
gamma_star_l = optimize.brentq(self._find_gamma, gamma_low,
min(endog) - epsilon)
gamma_star_u = optimize.brentq(self._find_gamma, \
max(endog) + epsilon, gamma_high)
weights_low = ((endog - gamma_star_l) ** -1) / \
np.sum((endog - gamma_star_l) ** -1)
weights_high = ((endog - gamma_star_u) ** -1) / \
np.sum((endog - gamma_star_u) ** -1)
mu_low = np.sum(weights_low * endog)
mu_high = np.sum(weights_high * endog)
return mu_low, mu_high

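A usage sketch for ci_mean, mirroring the doctest style of the other examples in this module (not part of the original source; the interval depends on the random draw):
>>> import numpy as np
>>> import statsmodels.api as sm
>>> random_numbers = np.random.standard_normal(100)
>>> el_analysis = sm.emplike.DescStat(random_numbers)
>>> el_analysis.ci_mean()  # 95% empirical likelihood CI, method='gamma'
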
def test_var(self, sig2_0, return_weights=False):
"""
Returns -2 x log-likelihood ratio and the p-value for the
hypothesized variance
Parameters
----------
sig2_0 : float
Hypothesized variance to be tested
return_weights : bool
If True, returns the weights that maximize the
likelihood of observing sig2_0. Default is False
Returns
-------
test_results : tuple
The log-likelihood ratio and the p_value of sig2_0
Examples
--------
>>> import numpy as np
>>> import statsmodels.api as sm
>>> random_numbers = np.random.standard_normal(1000)*100
>>> el_analysis = sm.emplike.DescStat(random_numbers)
>>> hyp_test = el_analysis.test_var(9500)
"""
self.sig2_0 = sig2_0
mu_max = max(self.endog)
mu_min = min(self.endog)
llr = optimize.fminbound(self._opt_var, mu_min, mu_max, \
full_output=1)[1]
p_val = chi2.sf(llr, 1)
if return_weights:
return llr, p_val, self.new_weights.T
else:
return llr, p_val

def ci_var(self, lower_bound=None, upper_bound=None, sig=.05):
"""
Returns the confidence interval for the variance.
Parameters
----------
lower_bound : float
The minimum value the lower confidence limit can
take. The p-value from test_var(lower_bound) must be lower
than the significance level. Default is an extreme chi-square
(.9999 quantile) confidence limit assuming normality
upper_bound : float
The maximum value the upper confidence limit
can take. The p-value from test_var(upper_bound) must be lower
than the significance level. Default is an extreme chi-square
(.0001 quantile) confidence limit assuming normality
sig : float
The significance level. Default is .05
Returns
-------
Interval : tuple
Confidence interval for the variance
Examples
--------
>>> import numpy as np
>>> import statsmodels.api as sm
>>> random_numbers = np.random.standard_normal(100)
>>> el_analysis = sm.emplike.DescStat(random_numbers)
>>> el_analysis.ci_var()
(0.7539322567470305, 1.229998852496268)
>>> el_analysis.ci_var(.5, 2)
(0.7539322567469926, 1.2299988524962664)
Notes
-----
If the function returns the error f(a) and f(b) must have
different signs, consider lowering lower_bound and raising
upper_bound.
"""
endog = self.endog
if upper_bound is None:
upper_bound = ((self.nobs - 1) * endog.var()) / \
(chi2.ppf(.0001, self.nobs - 1))
if lower_bound is None:
lower_bound = ((self.nobs - 1) * endog.var()) / \
(chi2.ppf(.9999, self.nobs - 1))
self.r0 = chi2.ppf(1 - sig, 1)
llim = optimize.brentq(self._ci_limits_var, lower_bound, endog.var())
ulim = optimize.brentq(self._ci_limits_var, endog.var(), upper_bound)
return llim, ulim

def plot_contour(self, mu_low, mu_high, var_low, var_high, mu_step,
var_step,
levs=[.2, .1, .05, .01, .001]):
"""
Returns a plot of the confidence region for a univariate
mean and variance.
Parameters
----------
mu_low : float
Lowest value of the mean to plot
mu_high : float
Highest value of the mean to plot
var_low : float
Lowest value of the variance to plot
var_high : float
Highest value of the variance to plot
mu_step : float
Increment at which to evaluate the mean
var_step : float
Increment at which to evaluate the variance
levs : list
The significance levels at which the contour lines will be drawn.
Default is [.2, .1, .05, .01, .001]
Returns
-------
Figure
The contour plot
"""
fig, ax = utils.create_mpl_ax()
ax.set_ylabel('Variance')
ax.set_xlabel('Mean')
mu_vect = list(np.arange(mu_low, mu_high, mu_step))
var_vect = list(np.arange(var_low, var_high, var_step))
z = []
for sig0 in var_vect:
self.sig2_0 = sig0
for mu0 in mu_vect:
z.append(self._opt_var(mu0, pval=True))
z = np.asarray(z).reshape(len(var_vect), len(mu_vect))
ax.contour(mu_vect, var_vect, z, levels=levs)
return fig

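A hedged sketch of plot_contour; the bounds and step sizes below are illustrative choices for standard-normal data, not defaults:
>>> import numpy as np
>>> import statsmodels.api as sm
>>> random_numbers = np.random.standard_normal(100)
>>> el_analysis = sm.emplike.DescStat(random_numbers)
>>> fig = el_analysis.plot_contour(-0.5, 0.5, 0.5, 2.0, 0.1, 0.1)
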
def test_skew(self, skew0, return_weights=False):
"""
Returns -2 x log-likelihood and p-value for the hypothesized
skewness.
Parameters
----------
skew0 : float
Skewness value to be tested
return_weights : bool
If True, function also returns the weights that
maximize the likelihood ratio. Default is False.
Returns
-------
test_results : tuple
The log-likelihood ratio and p_value of skew0
"""
self.skew0 = skew0
start_nuisance = np.array([self.endog.mean(),
self.endog.var()])
llr = optimize.fmin_powell(self._opt_skew, start_nuisance,
full_output=1, disp=0)[1]
p_val = chi2.sf(llr, 1)
if return_weights:
return llr, p_val, self.new_weights.T
return llr, p_val

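Usage sketch for test_skew (illustrative, not from the original source; test_kurt and test_joint_skew_kurt follow the same pattern):
>>> import numpy as np
>>> import statsmodels.api as sm
>>> random_numbers = np.random.standard_normal(100)
>>> el_analysis = sm.emplike.DescStat(random_numbers)
>>> llr, p_val = el_analysis.test_skew(0)  # H0: skewness == 0
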
def test_kurt(self, kurt0, return_weights=False):
"""
Returns -2 x log-likelihood and the p-value for the hypothesized
kurtosis.
Parameters
----------
kurt0 : float
Kurtosis value to be tested
return_weights : bool
If True, function also returns the weights that
maximize the likelihood ratio. Default is False.
Returns
-------
test_results : tuple
The log-likelihood ratio and p-value of kurt0
"""
self.kurt0 = kurt0
start_nuisance = np.array([self.endog.mean(),
self.endog.var()])
llr = optimize.fmin_powell(self._opt_kurt, start_nuisance,
full_output=1, disp=0)[1]
p_val = chi2.sf(llr, 1)
if return_weights:
return llr, p_val, self.new_weights.T
return llr, p_val

def test_joint_skew_kurt(self, skew0, kurt0, return_weights=False):
"""
Returns -2 x log-likelihood and the p-value for the joint
hypothesis test for skewness and kurtosis
Parameters
----------
skew0 : float
Skewness value to be tested
kurt0 : float
Kurtosis value to be tested
return_weights : bool
If True, function also returns the weights that
maximize the likelihood ratio. Default is False.
Returns
-------
test_results : tuple
The log-likelihood ratio and p-value of the joint hypothesis test.
"""
self.skew0 = skew0
self.kurt0 = kurt0
start_nuisance = np.array([self.endog.mean(),
self.endog.var()])
llr = optimize.fmin_powell(self._opt_skew_kurt, start_nuisance,
full_output=1, disp=0)[1]
p_val = chi2.sf(llr, 2)
if return_weights:
return llr, p_val, self.new_weights.T
return llr, p_val

def ci_skew(self, sig=.05, upper_bound=None, lower_bound=None):
"""
Returns the confidence interval for skewness.
Parameters
----------
sig : float
The significance level. Default is .05
upper_bound : float
Maximum value of skewness the upper limit can be.
Default is .99 confidence limit assuming normality.
lower_bound : float
Minimum value of skewness the lower limit can be.
Default is .99 confidence level assuming normality.
Returns
-------
Interval : tuple
Confidence interval for the skewness
Notes
-----
If the optimizer raises the error 'f(a) and f(b) must have different
signs', consider expanding the lower and upper bounds.
"""
nobs = self.nobs
endog = self.endog
if upper_bound is None:
upper_bound = skew(endog) + \
2.5 * ((6. * nobs * (nobs - 1.)) / \
((nobs - 2.) * (nobs + 1.) * \
(nobs + 3.))) ** .5
if lower_bound is None:
lower_bound = skew(endog) - \
2.5 * ((6. * nobs * (nobs - 1.)) / \
((nobs - 2.) * (nobs + 1.) * \
(nobs + 3.))) ** .5
self.r0 = chi2.ppf(1 - sig, 1)
llim = optimize.brentq(self._ci_limits_skew, lower_bound, skew(endog))
ulim = optimize.brentq(self._ci_limits_skew, skew(endog), upper_bound)
return llim, ulim

def ci_kurt(self, sig=.05, upper_bound=None, lower_bound=None):
"""
Returns the confidence interval for kurtosis.
Parameters
----------
sig : float
The significance level. Default is .05
upper_bound : float
Maximum value of kurtosis the upper limit can be.
Default is .99 confidence limit assuming normality.
lower_bound : float
Minimum value of kurtosis the lower limit can be.
Default is .99 confidence limit assuming normality.
Returns
-------
Interval : tuple
Lower and upper confidence limit
Notes
-----
For small n, upper_bound and lower_bound may have to be
provided by the user. Consider using test_kurt to find
values close to the desired significance level.
If the optimizer raises the error 'f(a) and f(b) must have different
signs', consider expanding the bounds.
"""
endog = self.endog
nobs = self.nobs
if upper_bound is None:
upper_bound = kurtosis(endog) + \
(2.5 * (2. * ((6. * nobs * (nobs - 1.)) / \
((nobs - 2.) * (nobs + 1.) * \
(nobs + 3.))) ** .5) * \
(((nobs ** 2.) - 1.) / ((nobs - 3.) *\
(nobs + 5.))) ** .5)
if lower_bound is None:
lower_bound = kurtosis(endog) - \
(2.5 * (2. * ((6. * nobs * (nobs - 1.)) / \
((nobs - 2.) * (nobs + 1.) * \
(nobs + 3.))) ** .5) * \
(((nobs ** 2.) - 1.) / ((nobs - 3.) *\
(nobs + 5.))) ** .5)
self.r0 = chi2.ppf(1 - sig, 1)
llim = optimize.brentq(self._ci_limits_kurt, lower_bound, \
kurtosis(endog))
ulim = optimize.brentq(self._ci_limits_kurt, kurtosis(endog), \
upper_bound)
return llim, ulim

def mv_test_mean(self, mu_array, return_weights=False):
"""
Returns -2 x log likelihood and the p-value
for a multivariate hypothesis test of the mean
Parameters
----------
mu_array : 1d array
Hypothesized values for the mean. Must have same number of
elements as columns in endog
return_weights : bool
If True, returns the weights that maximize the
likelihood of mu_array. Default is False.
Returns
-------
test_results : tuple
The log-likelihood ratio and p-value for mu_array
"""
endog = self.endog
nobs = self.nobs
if len(mu_array) != endog.shape[1]:
raise ValueError('mu_array must have the same number of '
'elements as the columns of the data.')
mu_array = mu_array.reshape(1, endog.shape[1])
means = np.ones((endog.shape[0], endog.shape[1]))
means = mu_array * means
est_vect = endog - means
start_vals = 1. / nobs * np.ones(endog.shape[1])
eta_star = self._modif_newton(start_vals, est_vect,
np.ones(nobs) * (1. / nobs))
denom = 1 + np.dot(eta_star, est_vect.T)
self.new_weights = 1 / nobs * 1 / denom
llr = -2 * np.sum(np.log(nobs * self.new_weights))
p_val = chi2.sf(llr, mu_array.shape[1])
if return_weights:
return llr, p_val, self.new_weights.T
else:
return llr, p_val

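Usage sketch for mv_test_mean on two-column data (illustrative, not from the original source):
>>> import numpy as np
>>> import statsmodels.api as sm
>>> two_col_data = np.random.standard_normal((100, 2))
>>> el_analysis = sm.emplike.DescStat(two_col_data)
>>> llr, p_val = el_analysis.mv_test_mean(np.array([0., 0.]))
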
def test_corr(self, corr0, return_weights=0):
"""
Returns -2 x log-likelihood ratio and p-value for the
correlation coefficient between 2 variables
Parameters
----------
corr0 : float
Hypothesized value to be tested
return_weights : bool
If True, returns the weights that maximize
the log-likelihood at the hypothesized value
"""
nobs = self.nobs
endog = self.endog
if endog.shape[1] != 2:
raise NotImplementedError('Correlation matrix not yet implemented')
nuis0 = np.array([endog[:, 0].mean(),
endog[:, 0].var(),
endog[:, 1].mean(),
endog[:, 1].var()])
x0 = np.zeros(5)
weights0 = np.array([1. / nobs] * int(nobs))
args = (corr0, endog, nobs, x0, weights0)
llr = optimize.fmin(self._opt_correl, nuis0, args=args,
full_output=1, disp=0)[1]
p_val = chi2.sf(llr, 1)
if return_weights:
return llr, p_val, self.new_weights.T
return llr, p_val

def ci_corr(self, sig=.05, upper_bound=None, lower_bound=None):
"""
Returns the confidence intervals for the correlation coefficient
Parameters
----------
sig : float
The significance level. Default is .05
upper_bound : float
Maximum value the upper confidence limit can be.
Default is 99% confidence limit assuming normality.
lower_bound : float
Minimum value the lower confidence limit can be.
Default is 99% confidence limit assuming normality.
Returns
-------
interval : tuple
Confidence interval for the correlation
"""
endog = self.endog
nobs = self.nobs
self.r0 = chi2.ppf(1 - sig, 1)
point_est = np.corrcoef(endog[:, 0], endog[:, 1])[0, 1]
if upper_bound is None:
upper_bound = min(.999, point_est + \
2.5 * ((1. - point_est ** 2.) / \
(nobs - 2.)) ** .5)
if lower_bound is None:
lower_bound = max(- .999, point_est - \
2.5 * (np.sqrt((1. - point_est ** 2.) / \
(nobs - 2.))))
llim = optimize.brenth(self._ci_limits_corr, lower_bound, point_est)
ulim = optimize.brenth(self._ci_limits_corr, point_est, upper_bound)
return llim, ulim

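Usage sketch covering test_corr and ci_corr on bivariate data (illustrative, not from the original source):
>>> import numpy as np
>>> import statsmodels.api as sm
>>> two_col_data = np.random.standard_normal((100, 2))
>>> el_analysis = sm.emplike.DescStat(two_col_data)
>>> llr, p_val = el_analysis.test_corr(0)  # H0: correlation == 0
>>> el_analysis.ci_corr()

# --- statsmodels/tools/tools.py (statsmodels/statsmodels, BSD-3-Clause) ---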
def drop_missing(Y, X=None, axis=1):
"""
Returns views on the arrays Y and X where missing observations are dropped.
Parameters
----------
Y : array_like
X : array_like, optional
axis : int
Axis along which to look for missing observations. Default is 1, i.e.,
observations in rows.
Returns
-------
Y : ndarray
Rows of Y for which neither Y nor X (if given) has a missing
observation along `axis`.
X : ndarray
Notes
-----
If either Y or X is 1d, it is reshaped to be 2d.
"""
Y = np.asarray(Y)
if Y.ndim == 1:
Y = Y[:, None]
if X is not None:
X = np.array(X)
if X.ndim == 1:
X = X[:, None]
keepidx = np.logical_and(~np.isnan(Y).any(axis),
~np.isnan(X).any(axis))
return Y[keepidx], X[keepidx]
else:
keepidx = ~np.isnan(Y).any(axis)
return Y[keepidx]

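A quick sketch of drop_missing (assumes importing it from statsmodels.tools.tools): only the first row survives, since the second row has a missing Y and the third a missing X:
>>> import numpy as np
>>> from statsmodels.tools.tools import drop_missing
>>> Y = np.array([1.0, np.nan, 3.0])
>>> X = np.array([[1.0, 2.0], [3.0, 4.0], [np.nan, 6.0]])
>>> Y_clean, X_clean = drop_missing(Y, X)
>>> Y_clean
array([[1.]])
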
def categorical(data, col=None, dictnames=False, drop=False):
"""
Construct a dummy matrix from categorical variables
.. deprecated:: 0.12
Use pandas.get_dummies instead.
Parameters
----------
data : array_like
An array, Series or DataFrame. This can be either a 1d vector of
the categorical variable or a 2d array with the column specifying
the categorical variable specified by the col argument.
col : {str, int, None}
If data is a DataFrame col must in a column of data. If data is a
Series, col must be either the name of the Series or None. For arrays,
`col` can be an int that is the (zero-based) column index
number. `col` can only be None for a 1d array. The default is None.
dictnames : bool, optional
If True, a dictionary mapping the column number to the categorical
name is returned. Used to have information about plain arrays.
drop : bool
Whether or not keep the categorical variable in the returned matrix.
Returns
-------
dummy_matrix : array_like
A matrix of dummy (indicator/binary) float variables for the
categorical data.
dictnames : dict[int, str], optional
Mapping between column numbers and categorical names.
Notes
-----
This returns a dummy variable for *each* distinct value of the
categorical variable. If a DataFrame is provided, the name for each new
variable is the old variable name - underscore - category name. So if
the variable 'vote' had answers 'yes' or 'no', the returned array would
have two new variables, 'vote_yes' and 'vote_no'. There is currently
no name checking.
Examples
--------
>>> import numpy as np
>>> import statsmodels.api as sm
Univariate examples
>>> import string
>>> string_var = [string.ascii_lowercase[0:5],
... string.ascii_lowercase[5:10],
... string.ascii_lowercase[10:15],
... string.ascii_lowercase[15:20],
... string.ascii_lowercase[20:25]]
>>> string_var *= 5
>>> string_var = np.asarray(sorted(string_var))
>>> design = sm.tools.categorical(string_var, drop=True)
Or for a numerical categorical variable
>>> instr = np.floor(np.arange(10,60, step=2)/10)
>>> design = sm.tools.categorical(instr, drop=True)
With a structured array
>>> num = np.random.randn(25,2)
>>> struct_ar = np.zeros((25,1),
... dtype=[('var1', 'f4'),('var2', 'f4'),
... ('instrument','f4'),('str_instr','a5')])
>>> struct_ar['var1'] = num[:,0][:,None]
>>> struct_ar['var2'] = num[:,1][:,None]
>>> struct_ar['instrument'] = instr[:,None]
>>> struct_ar['str_instr'] = string_var[:,None]
>>> design = sm.tools.categorical(struct_ar, col='instrument', drop=True)
Or
>>> design2 = sm.tools.categorical(struct_ar, col='str_instr', drop=True)
"""
raise NotImplementedError("categorical has been removed")

def add_constant(data, prepend=True, has_constant='skip'):
"""
Add a column of ones to an array.
Parameters
----------
data : array_like
A column-ordered design matrix.
prepend : bool
If true, the constant is in the first column. Else the constant is
appended (last column).
has_constant : str {'raise', 'add', 'skip'}
Behavior if ``data`` already has a constant. The default will return
data without adding another constant. If 'raise', will raise an
error if any column has a constant value. Using 'add' will add a
column of 1s if a constant column is present.
Returns
-------
array_like
The original values with a constant (column of ones) as the first or
last column. Returned value type depends on input type.
Notes
-----
When the input is a pandas Series or DataFrame, the added column's name
is 'const'.
"""
if _is_using_pandas(data, None):
from statsmodels.tsa.tsatools import add_trend
return add_trend(data, trend='c', prepend=prepend, has_constant=has_constant)
# Special case for NumPy
x = np.asarray(data)
ndim = x.ndim
if ndim == 1:
x = x[:, None]
elif x.ndim > 2:
raise ValueError('Only implemented for 2-dimensional arrays')
is_nonzero_const = np.ptp(x, axis=0) == 0
is_nonzero_const &= np.all(x != 0.0, axis=0)
if is_nonzero_const.any():
if has_constant == 'skip':
return x
elif has_constant == 'raise':
if ndim == 1:
raise ValueError("data is constant.")
else:
columns = np.arange(x.shape[1])
cols = ",".join([str(c) for c in columns[is_nonzero_const]])
raise ValueError(f"Column(s) {cols} are constant.")
x = [np.ones(x.shape[0]), x]
x = x if prepend else x[::-1]
return np.column_stack(x)

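Usage sketch for add_constant via the public sm.add_constant alias (illustrative; output formatting may vary slightly by numpy version):
>>> import numpy as np
>>> import statsmodels.api as sm
>>> x = np.arange(1., 6.)
>>> sm.add_constant(x)
array([[1., 1.],
       [1., 2.],
       [1., 3.],
       [1., 4.],
       [1., 5.]])
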
def isestimable(c, d):
"""
True if (Q, P) contrast `c` is estimable for (N, P) design `d`.
From a Q x P contrast matrix `C` and an N x P design matrix `D`, checks if
the contrast `C` is estimable by looking at the rank of ``vstack([C,D])``
and verifying it is the same as the rank of `D`.
Parameters
----------
c : array_like
A contrast matrix with shape (Q, P). If 1-dimensional, assume shape is
(1, P).
d : array_like
The design matrix, (N, P).
Returns
-------
bool
True if the contrast `c` is estimable on design `d`.
Examples
--------
>>> d = np.array([[1, 1, 1, 0, 0, 0],
... [0, 0, 0, 1, 1, 1],
... [1, 1, 1, 1, 1, 1]]).T
>>> isestimable([1, 0, 0], d)
False
>>> isestimable([1, -1, 0], d)
True
"""
c = array_like(c, 'c', maxdim=2)
d = array_like(d, 'd', ndim=2)
c = c[None, :] if c.ndim == 1 else c
if c.shape[1] != d.shape[1]:
raise ValueError('Contrast should have %d columns' % d.shape[1])
new = np.vstack([c, d])
if np.linalg.matrix_rank(new) != np.linalg.matrix_rank(d):
return False
return True

def pinv_extended(x, rcond=1e-15):
"""
Return the pinv of an array X as well as the singular values
used in computation.
Code adapted from numpy.
"""
x = np.asarray(x)
x = x.conjugate()
u, s, vt = np.linalg.svd(x, False)
s_orig = np.copy(s)
m = u.shape[0]
n = vt.shape[1]
cutoff = rcond * np.maximum.reduce(s)
for i in range(min(n, m)):
if s[i] > cutoff:
s[i] = 1./s[i]
else:
s[i] = 0.
res = np.dot(np.transpose(vt), np.multiply(s[:, np.newaxis],
np.transpose(u)))
return res, s_orig

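Sketch showing that pinv_extended agrees with numpy's pinv while also returning the singular values (illustrative, not from the original source):
>>> import numpy as np
>>> from statsmodels.tools.tools import pinv_extended
>>> x = np.random.standard_normal((5, 3))
>>> pinv, sing_vals = pinv_extended(x)
>>> np.allclose(pinv, np.linalg.pinv(x))
True
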
def recipr(x):
"""
Reciprocal of an array with entries less than or equal to 0 set to 0.
Parameters
----------
x : array_like
The input array.
Returns
-------
ndarray
The array with 0-filled reciprocals.
"""
x = np.asarray(x)
out = np.zeros_like(x, dtype=np.float64)
nans = np.isnan(x.flat)
pos = ~nans
pos[pos] = pos[pos] & (x.flat[pos] > 0)
out.flat[pos] = 1.0 / x.flat[pos]
out.flat[nans] = np.nan
return out

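A small sketch of recipr's conventions: positive entries are inverted, non-positive entries map to 0, and NaNs propagate (illustrative; output formatting may vary):
>>> import numpy as np
>>> from statsmodels.tools.tools import recipr
>>> recipr(np.array([2.0, 0.0, -1.0, np.nan]))
array([0.5, 0. , 0. , nan])
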
def recipr0(x):
"""
Reciprocal of an array, with entries equal to 0 left as 0.
Parameters
----------
x : array_like
The input array.
Returns
-------
ndarray
The array with 0-filled reciprocals.
"""
x = np.asarray(x)
out = np.zeros_like(x, dtype=np.float64)
nans = np.isnan(x.flat)
non_zero = ~nans
non_zero[non_zero] = non_zero[non_zero] & (x.flat[non_zero] != 0)
out.flat[non_zero] = 1.0 / x.flat[non_zero]
out.flat[nans] = np.nan
return out

def clean0(matrix):
"""
Erase columns of zeros: can save some time in pseudoinverse.
Parameters
----------
matrix : ndarray
The array to clean.
Returns
-------
ndarray
The cleaned array.
"""
colsum = np.add.reduce(matrix**2, 0)
val = [matrix[:, i] for i in np.flatnonzero(colsum)]
return np.array(np.transpose(val))

def fullrank(x, r=None):
"""
Return an array whose column span is the same as x.
Parameters
----------
x : ndarray
The array to adjust, 2d.
r : int, optional
The rank of x. If not provided, determined by `np.linalg.matrix_rank`.
Returns
-------
ndarray
The array adjusted to have full rank.
Notes
-----
If the rank of x is known it can be specified as r -- no check
is made to ensure that this really is the rank of x.
"""
if r is None:
r = np.linalg.matrix_rank(x)
v, d, u = np.linalg.svd(x, full_matrices=False)
order = np.argsort(d)
order = order[::-1]
value = []
for i in range(r):
value.append(v[:, order[i]])
return np.asarray(np.transpose(value)).astype(np.float64)

def unsqueeze(data, axis, oldshape):
"""
Unsqueeze a collapsed array.
Parameters
----------
data : ndarray
The data to unsqueeze.
axis : int
The axis to unsqueeze.
oldshape : tuple[int]
The original shape before the squeeze or reduce operation.
Returns
-------
ndarray
The unsqueezed array.
Examples
--------
>>> from numpy import mean
>>> from numpy.random import standard_normal
>>> x = standard_normal((3,4,5))
>>> m = mean(x, axis=1)
>>> m.shape
(3, 5)
>>> m = unsqueeze(m, 1, x.shape)
>>> m.shape
(3, 1, 5)
>>>
"""
newshape = list(oldshape)
newshape[axis] = 1
return data.reshape(newshape)

def nan_dot(A, B):
"""
Returns np.dot(A, B) with the convention that
nan * 0 = 0 and nan * x = nan if x != 0.
Parameters
----------
A, B : ndarray
"""
# Find out who should be nan due to nan * nonzero
should_be_nan_1 = np.dot(np.isnan(A), (B != 0))
should_be_nan_2 = np.dot((A != 0), np.isnan(B))
should_be_nan = should_be_nan_1 + should_be_nan_2
# Multiply after setting all nan to 0
# This is what happens if there were no nan * nonzero conflicts
C = np.dot(np.nan_to_num(A), np.nan_to_num(B))
C[should_be_nan] = np.nan
return C

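Sketch of the nan_dot convention: the nan in A is annihilated where it multiplies a structural zero, but propagates where it multiplies a nonzero entry (illustrative; output formatting may vary):
>>> import numpy as np
>>> from statsmodels.tools.tools import nan_dot
>>> A = np.array([[np.nan, 0.0], [1.0, 2.0]])
>>> nan_dot(A, np.eye(2))
array([[nan,  0.],
       [ 1.,  2.]])
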
def maybe_unwrap_results(results):
"""
Gets raw results back from wrapped results.
Can be used in plotting functions or other post-estimation type
routines.
"""
return getattr(results, '_results', results)

def _ensure_2d(x, ndarray=False):
"""
Parameters
----------
x : ndarray, Series, DataFrame or None
Input to verify dimensions, and to transform as necessary
ndarray : bool
Flag indicating whether to always return a NumPy array. Setting False
will return a pandas DataFrame when the input is a Series or a
DataFrame.
Returns
-------
out : ndarray, DataFrame or None
array or DataFrame with 2 dimensions. One-dimensional arrays are
returned as nobs by 1. None is returned if x is None.
names : list of str or None
list containing variables names when the input is a pandas datatype.
Returns None if the input is an ndarray.
Notes
-----
Accepts None for simplicity
"""
if x is None:
return x
is_pandas = _is_using_pandas(x, None)
if x.ndim == 2:
if is_pandas:
return x, x.columns
else:
return x, None
elif x.ndim > 2:
raise ValueError('x must be 1 or 2-dimensional.')
name = x.name if is_pandas else None
if ndarray:
return np.asarray(x)[:, None], name
else:
return pd.DataFrame(x), name

def matrix_rank(m, tol=None, method="qr"):
"""
Matrix rank calculation using QR or SVD
Parameters
----------
m : array_like
A 2-d array-like object to test
tol : float, optional
The tolerance to use when testing the matrix rank. If not provided
an appropriate value is selected.
method : {"ip", "qr", "svd"}
The method used. "ip" uses the inner-product of a normalized version
of m and then computes the rank using NumPy's matrix_rank.
"qr" uses a QR decomposition and is the default. "svd" defers to
NumPy's matrix_rank.
Returns
-------
int
The rank of m.
Notes
-----
When using a QR factorization, the rank is determined by the number of
elements on the leading diagonal of the R matrix that are above tol
in absolute value.
"""
m = array_like(m, "m", ndim=2)
if method == "ip":
m = m[:, np.any(m != 0, axis=0)]
m = m / np.sqrt((m ** 2).sum(0))
m = m.T @ m
return np.linalg.matrix_rank(m, tol=tol, hermitian=True)
elif method == "qr":
r, = scipy.linalg.qr(m, mode="r")
abs_diag = np.abs(np.diag(r))
if tol is None:
tol = abs_diag[0] * m.shape[1] * np.finfo(float).eps
return int((abs_diag > tol).sum())
else:
return np.linalg.matrix_rank(m, tol=tol)

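Sketch: the default QR-based rank detects an exactly collinear column (illustrative, not from the original source):
>>> import numpy as np
>>> from statsmodels.tools.tools import matrix_rank
>>> x = np.column_stack([np.ones(10), np.arange(10.), 2 * np.arange(10.)])
>>> matrix_rank(x, method="qr")
2

# --- statsmodels/tools/web.py (statsmodels/statsmodels, BSD-3-Clause) ---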
def _generate_url(func, stable):
"""
Parse inputs and return a correctly formatted URL or raises ValueError
if the input is not understandable
"""
url = BASE_URL
if stable:
url += 'stable/'
else:
url += 'devel/'
if func is None:
return url
elif isinstance(func, str):
url += 'search.html?'
url += urlencode({'q': func})
url += '&check_keywords=yes&area=default'
else:
try:
func_name = func.__name__
func_module = func.__module__
if not func_module.startswith('statsmodels.'):
raise ValueError('Function must be from statsmodels')
url += 'generated/'
url += func_module + '.' + func_name + '.html'
except AttributeError:
raise ValueError('Input not understood')
return url

def webdoc(func=None, stable=None):
"""
Opens a browser and displays online documentation
Parameters
----------
func : {str, callable}
Either a string to search the documentation or a function
stable : bool
Flag indicating whether to use the stable documentation (True) or
the development documentation (False). If not provided, opens
the stable documentation if the current version of statsmodels is a
release
Examples
--------
>>> import statsmodels.api as sm
Documentation site
>>> sm.webdoc()
Search for glm in docs
>>> sm.webdoc('glm')
Go to current generated help for OLS
>>> sm.webdoc(sm.OLS, stable=False)
Notes
-----
By default, open stable documentation if the current version of
statsmodels is a release. Otherwise opens the development documentation.
Uses the default system browser.
"""
stable = __version__ if 'dev' not in __version__ else stable
url_or_error = _generate_url(func, stable)
webbrowser.open(url_or_error)
return None

# --- statsmodels/tools/testing.py (statsmodels/statsmodels, BSD-3-Clause) ---
def bunch_factory(attribute, columns):
"""
Generates a special purpose Bunch class
Parameters
----------
attribute: str
Attribute to access when splitting
columns: List[str]
List of names to use when splitting the columns of attribute
Notes
-----
After the class is initialized as a Bunch, the columns of attribute
are split so that the Bunch has the keys in columns and
bunch[columns[i]] = bunch[attribute][:, i]
"""
class FactoryBunch(Bunch):
def __init__(self, *args, **kwargs):
super().__init__(*args, **kwargs)
if not hasattr(self, attribute):
raise AttributeError('{} is required and must be passed to '
'the constructor'.format(attribute))
for i, att in enumerate(columns):
self[att] = getattr(self, attribute)[:, i]
return FactoryBunch

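Sketch of bunch_factory: the generated class splits the named attribute's columns into top-level keys. The attribute and column names here are hypothetical:
>>> import numpy as np
>>> from statsmodels.tools.testing import bunch_factory
>>> ParamsBunch = bunch_factory("params", ["const", "slope"])
>>> b = ParamsBunch(params=np.array([[1.0, 2.0], [3.0, 4.0]]))
>>> b.const
array([1., 3.])

# --- statsmodels/tools/linalg.py (statsmodels/statsmodels, BSD-3-Clause) ---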
def logdet_symm(m, check_symm=False):
"""
Return log(det(m)) asserting positive definiteness of m.
Parameters
----------
m : array_like
2d array that is positive-definite (and symmetric)
Returns
-------
logdet : float
The log-determinant of m.
"""
from scipy import linalg
if check_symm:
if not np.all(m == m.T): # would be nice to short-circuit check
raise ValueError("m is not symmetric.")
c, _ = linalg.cho_factor(m, lower=True)
return 2*np.sum(np.log(c.diagonal()))

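Sketch checking logdet_symm against the direct computation on a small positive-definite matrix (illustrative, not from the original source):
>>> import numpy as np
>>> from statsmodels.tools.linalg import logdet_symm
>>> m = np.array([[2.0, 0.5], [0.5, 1.0]])
>>> np.allclose(logdet_symm(m), np.log(np.linalg.det(m)))
True
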
def stationary_solve(r, b):
"""
Solve a linear system for a Toeplitz correlation matrix.
A Toeplitz correlation matrix represents the covariance of a
stationary series with unit variance.
Parameters
----------
r : array_like
A vector describing the coefficient matrix. r[0] is the first
band next to the diagonal, r[1] is the second band, etc.
b : array_like
The right-hand side for which we are solving, i.e. we solve
Tx = b and return x, where T is the Toeplitz coefficient matrix.
Returns
-------
The solution to the linear system.
"""
db = r[0:1]
dim = b.ndim
if b.ndim == 1:
b = b[:, None]
x = b[0:1, :]
for j in range(1, len(b)):
rf = r[0:j][::-1]
a = (b[j, :] - np.dot(rf, x)) / (1 - np.dot(rf, db[::-1]))
z = x - np.outer(db[::-1], a)
x = np.concatenate((z, a[None, :]), axis=0)
if j == len(b) - 1:
break
rn = r[j]
a = (rn - np.dot(rf, db)) / (1 - np.dot(rf, db[::-1]))
z = db - a*db[::-1]
db = np.concatenate((z, np.r_[a]))
if dim == 1:
x = x[:, 0]
return x

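Sketch of stationary_solve on an AR(1)-style Toeplitz correlation matrix; note that r contains only the off-diagonal bands (illustrative, not from the original source):
>>> import numpy as np
>>> from scipy.linalg import toeplitz
>>> from statsmodels.tools.linalg import stationary_solve
>>> r = 0.5 ** np.arange(1, 5)        # bands next to the unit diagonal
>>> T = toeplitz(np.r_[1.0, r])       # 5 x 5 Toeplitz correlation matrix
>>> b = np.arange(5.0)
>>> x = stationary_solve(r, b)
>>> np.allclose(T @ x, b)
True
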
def transf_constraints(constraints):
"""use QR to get transformation matrix to impose constraint
Parameters
----------
constraints : ndarray, 2-D
restriction matrix with one constraint per row
Returns
-------
transf : ndarray
transformation matrix to reparameterize so that constraint is
imposed
Notes
-----
This is currently an internal helper function for GAM.
The API is not stable and will most likely change.
The code for this function was taken from patsy spline handling, and
corresponds to the reparameterization used by Wood in R's mgcv package.
See Also
--------
statsmodels.base._constraints.TransformRestriction : class to impose
constraints by reparameterization used by `_fit_constrained`.
"""
from scipy import linalg
m = constraints.shape[0]
q, _ = linalg.qr(np.transpose(constraints))
transf = q[:, m:]
return transf

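Sketch: the columns of the returned transform span the null space of the constraint matrix, so reparameterized coefficients automatically satisfy the constraint (illustrative):
>>> import numpy as np
>>> from statsmodels.tools.linalg import transf_constraints
>>> constraints = np.array([[1.0, 1.0, 1.0]])  # force coefficients to sum to 0
>>> transf = transf_constraints(constraints)
>>> np.allclose(constraints @ transf, 0)
True
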
def matrix_sqrt(mat, inverse=False, full=False, nullspace=False,
threshold=1e-15):
"""matrix square root for symmetric matrices
Usage is for decomposing a covariance function S into a square root R
such that
R' R = S if inverse is False, or
R' R = pinv(S) if inverse is True
Parameters
----------
mat : array_like, 2-d square
symmetric square matrix for which square root or inverse square
root is computed.
There is no checking for whether the matrix is symmetric.
A warning is issued if some singular values are negative, i.e.
below the negative of the threshold.
inverse : bool
If False (default), then the matrix square root is returned.
If inverse is True, then the matrix square root of the inverse
matrix is returned.
full : bool
If full is False (default), then the square root has a reduced
number of rows if the matrix is singular, i.e. has singular
values below the threshold.
nullspace : bool
If nullspace is true, then the matrix square root of the null space
of the matrix is returned.
threshold : float
Singular values below the threshold are dropped.
Returns
-------
msqrt : ndarray
matrix square root or square root of inverse matrix.
"""
# see also scipy.linalg null_space
u, s, v = np.linalg.svd(mat)
if np.any(s < -threshold):
import warnings
warnings.warn('some singular values are negative')
if not nullspace:
mask = s > threshold
s[s < threshold] = 0
else:
mask = s < threshold
s[s > threshold] = 0
sqrt_s = np.sqrt(s[mask])
if inverse:
sqrt_s = 1 / np.sqrt(s[mask])
if full:
b = np.dot(u[:, mask], np.dot(np.diag(sqrt_s), v[mask]))
else:
b = np.dot(np.diag(sqrt_s), v[mask])
return b

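Sketch verifying the R' R = S property of matrix_sqrt on a small symmetric positive-definite matrix (illustrative, not from the original source):
>>> import numpy as np
>>> from statsmodels.tools.linalg import matrix_sqrt
>>> s = np.array([[2.0, 0.5], [0.5, 1.0]])
>>> r = matrix_sqrt(s)
>>> np.allclose(r.T @ r, s)
True

# --- statsmodels/tools/catadd.py (statsmodels/statsmodels, BSD-3-Clause) ---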
def add_indep(x, varnames, dtype=None):
'''
construct array with independent columns
x is either iterable (list, tuple) or instance of ndarray or a subclass
of it. If x is an ndarray, then each column is assumed to represent a
variable with observations in rows.
'''
# TODO: this needs tests for subclasses
if isinstance(x, np.ndarray) and x.ndim == 2:
x = x.T
nvars_orig = len(x)
nobs = len(x[0])
if not dtype:
dtype = np.asarray(x[0]).dtype
xout = np.zeros((nobs, nvars_orig), dtype=dtype)
count = 0
rank_old = 0
varnames_new = []
varnames_dropped = []
for (xi, ni) in zip(x, varnames):
xout[:, count] = xi
rank_new = np.linalg.matrix_rank(xout)
if rank_new > rank_old:
varnames_new.append(ni)
rank_old = rank_new
count += 1
else:
varnames_dropped.append(ni)
return xout[:, :count], varnames_new | construct array with independent columns
    x is either an iterable (list, tuple) or an instance of ndarray or a
    subclass of it. If x is an ndarray, then each column is assumed to
    represent a variable with observations in rows. | add_indep | python | statsmodels/statsmodels | statsmodels/tools/catadd.py | https://github.com/statsmodels/statsmodels/blob/master/statsmodels/tools/catadd.py | BSD-3-Clause |
def deprecated_alias(old_name, new_name, remove_version=None, msg=None,
warning=FutureWarning):
"""
Deprecate attribute in favor of alternative name.
Parameters
----------
old_name : str
Old, deprecated name
new_name : str
New name
remove_version : str, optional
        Version in which the alias will be removed
msg : str, optional
Message to show. Default is
`old_name` is a deprecated alias for `new_name`
warning : Warning, optional
Warning class to give. Default is FutureWarning.
Notes
-----
Older or less-used classes may not conform to statsmodels naming
conventions. `deprecated_alias` lets us bring them into conformance
without breaking backward-compatibility.
Example
-------
Instances of the `Foo` class have a `nvars` attribute, but it _should_
be called `neqs`:
class Foo:
nvars = deprecated_alias('nvars', 'neqs')
def __init__(self, neqs):
self.neqs = neqs
>>> foo = Foo(3)
>>> foo.nvars
__main__:1: FutureWarning: nvars is a deprecated alias for neqs
3
"""
if msg is None:
msg = f'{old_name} is a deprecated alias for {new_name}'
if remove_version is not None:
msg += ', will be removed in version %s' % remove_version
def fget(self):
warnings.warn(msg, warning, stacklevel=2)
return getattr(self, new_name)
def fset(self, value):
warnings.warn(msg, warning, stacklevel=2)
setattr(self, new_name, value)
res = property(fget=fget, fset=fset)
return res | Deprecate attribute in favor of alternative name.
Parameters
----------
old_name : str
Old, deprecated name
new_name : str
New name
remove_version : str, optional
        Version in which the alias will be removed
msg : str, optional
Message to show. Default is
`old_name` is a deprecated alias for `new_name`
warning : Warning, optional
Warning class to give. Default is FutureWarning.
Notes
-----
Older or less-used classes may not conform to statsmodels naming
conventions. `deprecated_alias` lets us bring them into conformance
without breaking backward-compatibility.
Example
-------
Instances of the `Foo` class have a `nvars` attribute, but it _should_
be called `neqs`:
class Foo:
nvars = deprecated_alias('nvars', 'neqs')
def __init__(self, neqs):
self.neqs = neqs
>>> foo = Foo(3)
>>> foo.nvars
__main__:1: FutureWarning: nvars is a deprecated alias for neqs
3 | deprecated_alias | python | statsmodels/statsmodels | statsmodels/tools/decorators.py | https://github.com/statsmodels/statsmodels/blob/master/statsmodels/tools/decorators.py | BSD-3-Clause |
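A short sketch extending the docstring example above (illustrative only; the version string '0.99' is a placeholder). Setting through the old name also warns and forwards to the new attribute:

import warnings
from statsmodels.tools.decorators import deprecated_alias

class Foo:
    nvars = deprecated_alias('nvars', 'neqs', remove_version='0.99')  # placeholder version

    def __init__(self, neqs):
        self.neqs = neqs

foo = Foo(3)
with warnings.catch_warnings(record=True) as caught:
    warnings.simplefilter('always')
    foo.nvars = 5                      # warns and sets foo.neqs
assert foo.neqs == 5
assert issubclass(caught[-1].category, FutureWarning)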
def show_versions(show_dirs=True):
"""
List the versions of statsmodels and any installed dependencies
Parameters
----------
show_dirs : bool
Flag indicating to show module locations
"""
if not show_dirs:
        _show_versions_only()
        return  # the short report was requested; skip the directory listing below
print("\nINSTALLED VERSIONS")
print("------------------")
print("Python: %d.%d.%d.%s.%s" % sys.version_info[:])
uname = platform.uname()
sysname = uname.system
release = uname.release
version = uname.version
machine = uname.machine
print(f"OS: {sysname} {release} {version} {machine}")
print("byteorder: %s" % sys.byteorder)
print("LC_ALL: %s" % os.environ.get("LC_ALL", "None"))
print("LANG: %s" % os.environ.get("LANG", "None"))
try:
import statsmodels
has_sm = True
except ImportError:
has_sm = False
print("\nstatsmodels\n===========\n")
if has_sm:
print(
"Installed: {} ({})".format(
safe_version(statsmodels), dirname(statsmodels.__file__)
)
)
else:
print("Not installed")
print("\nRequired Dependencies\n=====================\n")
try:
import Cython
print("cython: {} ({})".format(safe_version(Cython), dirname(Cython.__file__)))
except ImportError:
print("cython: Not installed")
try:
import numpy
print("numpy: {} ({})".format(safe_version(numpy), dirname(numpy.__file__)))
except ImportError:
print("numpy: Not installed")
try:
import scipy
print("scipy: {} ({})".format(safe_version(scipy), dirname(scipy.__file__)))
except ImportError:
print("scipy: Not installed")
try:
import pandas
print(
"pandas: {} ({})".format(
safe_version(pandas, "__version__"),
dirname(pandas.__file__),
)
)
except ImportError:
print("pandas: Not installed")
try:
import dateutil
print(
" dateutil: {} ({})".format(
safe_version(dateutil), dirname(dateutil.__file__)
)
)
except ImportError:
print(" dateutil: not installed")
try:
import patsy
print("patsy: {} ({})".format(safe_version(patsy), dirname(patsy.__file__)))
except ImportError:
print("patsy: Not installed")
print("\nOptional Dependencies\n=====================\n")
try:
import matplotlib as mpl
print("matplotlib: {} ({})".format(safe_version(mpl), dirname(mpl.__file__)))
print(" backend: %s " % mpl.rcParams["backend"])
except ImportError:
print("matplotlib: Not installed")
try:
from cvxopt import info
print(
"cvxopt: {} ({})".format(
safe_version(info, "version"), dirname(info.__file__)
)
)
except ImportError:
print("cvxopt: Not installed")
try:
import joblib
print("joblib: {} ({})".format(safe_version(joblib), dirname(joblib.__file__)))
except ImportError:
print("joblib: Not installed")
print("\nDeveloper Tools\n================\n")
try:
import IPython
print(
"IPython: {} ({})".format(safe_version(IPython), dirname(IPython.__file__))
)
except ImportError:
print("IPython: Not installed")
try:
import jinja2
print(
" jinja2: {} ({})".format(safe_version(jinja2), dirname(jinja2.__file__))
)
except ImportError:
print(" jinja2: Not installed")
try:
import sphinx
print("sphinx: {} ({})".format(safe_version(sphinx), dirname(sphinx.__file__)))
except ImportError:
print("sphinx: Not installed")
try:
import pygments
print(
" pygments: {} ({})".format(
safe_version(pygments), dirname(pygments.__file__)
)
)
except ImportError:
print(" pygments: Not installed")
try:
import pytest
print(f"pytest: {safe_version(pytest)} ({dirname(pytest.__file__)})")
except ImportError:
print("pytest: Not installed")
try:
import virtualenv
print(
"virtualenv: {} ({})".format(
safe_version(virtualenv), dirname(virtualenv.__file__)
)
)
except ImportError:
print("virtualenv: Not installed")
print("\n") | List the versions of statsmodels and any installed dependencies
Parameters
----------
show_dirs : bool
Flag indicating to show module locations | show_versions | python | statsmodels/statsmodels | statsmodels/tools/print_version.py | https://github.com/statsmodels/statsmodels/blob/master/statsmodels/tools/print_version.py | BSD-3-Clause |
def dedent_lines(lines):
"""Deindent a list of lines maximally"""
return textwrap.dedent("\n".join(lines)).split("\n") | Deindent a list of lines maximally | dedent_lines | python | statsmodels/statsmodels | statsmodels/tools/docstring.py | https://github.com/statsmodels/statsmodels/blob/master/statsmodels/tools/docstring.py | BSD-3-Clause |
def strip_blank_lines(line):
"""Remove leading and trailing blank lines from a list of lines"""
while line and not line[0].strip():
del line[0]
while line and not line[-1].strip():
del line[-1]
return line | Remove leading and trailing blank lines from a list of lines | strip_blank_lines | python | statsmodels/statsmodels | statsmodels/tools/docstring.py | https://github.com/statsmodels/statsmodels/blob/master/statsmodels/tools/docstring.py | BSD-3-Clause |
def __init__(self, data):
"""
Parameters
----------
data : str
String with lines separated by '\n'.
"""
if isinstance(data, list):
self._str = data
else:
self._str = data.split("\n") # store string as list of lines
self.reset() | Parameters
----------
data : str
String with lines separated by '\n'. | __init__ | python | statsmodels/statsmodels | statsmodels/tools/docstring.py | https://github.com/statsmodels/statsmodels/blob/master/statsmodels/tools/docstring.py | BSD-3-Clause |
def parse_item_name(text):
"""Match ':role:`name`' or 'name'."""
m = self._func_rgx.match(text)
if not m:
raise ParseError(f"{text} is not a item name")
role = m.group("role")
name = m.group("name") if role else m.group("name2")
return name, role, m.end() | Match ':role:`name`' or 'name'. | _parse_see_also.parse_item_name | python | statsmodels/statsmodels | statsmodels/tools/docstring.py | https://github.com/statsmodels/statsmodels/blob/master/statsmodels/tools/docstring.py | BSD-3-Clause |
def _parse_see_also(self, content):
"""
func_name : Descriptive text
continued text
another_func_name : Descriptive text
func_name1, func_name2, :meth:`func_name`, func_name3
"""
items = []
def parse_item_name(text):
"""Match ':role:`name`' or 'name'."""
m = self._func_rgx.match(text)
if not m:
raise ParseError(f"{text} is not a item name")
role = m.group("role")
name = m.group("name") if role else m.group("name2")
return name, role, m.end()
rest = []
for line in content:
if not line.strip():
continue
line_match = self._line_rgx.match(line)
description = None
if line_match:
description = line_match.group("desc")
if line_match.group("trailing") and description:
self._error_location(
"Unexpected comma or period after function list at "
"index %d of line "
'"%s"' % (line_match.end("trailing"), line)
)
if not description and line.startswith(" "):
rest.append(line.strip())
elif line_match:
funcs = []
text = line_match.group("allfuncs")
while True:
if not text.strip():
break
name, role, match_end = parse_item_name(text)
funcs.append((name, role))
text = text[match_end:].strip()
if text and text[0] == ",":
text = text[1:].strip()
rest = list(filter(None, [description]))
items.append((funcs, rest))
else:
raise ParseError(f"{line} is not a item name")
return items | func_name : Descriptive text
continued text
another_func_name : Descriptive text
func_name1, func_name2, :meth:`func_name`, func_name3 | _parse_see_also | python | statsmodels/statsmodels | statsmodels/tools/docstring.py | https://github.com/statsmodels/statsmodels/blob/master/statsmodels/tools/docstring.py | BSD-3-Clause |
def _parse_index(self, section, content):
"""
.. index: default
:refguide: something, else, and more
"""
def strip_each_in(lst):
return [s.strip() for s in lst]
out = {}
section = section.split("::")
if len(section) > 1:
out["default"] = strip_each_in(section[1].split(","))[0]
for line in content:
line = line.split(":")
if len(line) > 2:
out[line[1]] = strip_each_in(line[2].split(","))
return out | .. index: default
:refguide: something, else, and more | _parse_index | python | statsmodels/statsmodels | statsmodels/tools/docstring.py | https://github.com/statsmodels/statsmodels/blob/master/statsmodels/tools/docstring.py | BSD-3-Clause |
def _parse_summary(self):
"""Grab signature (if given) and summary"""
if self._is_at_section():
return
# If several signatures present, take the last one
while True:
summary = self._doc.read_to_next_empty_line()
summary_str = " ".join([s.strip() for s in summary]).strip()
compiled = re.compile(r"^([\w., ]+=)?\s*[\w\.]+\(.*\)$")
if compiled.match(summary_str):
self["Signature"] = summary_str
if not self._is_at_section():
continue
break
if summary is not None:
self["Summary"] = summary
if not self._is_at_section():
self["Extended Summary"] = self._read_to_next_section() | Grab signature (if given) and summary | _parse_summary | python | statsmodels/statsmodels | statsmodels/tools/docstring.py | https://github.com/statsmodels/statsmodels/blob/master/statsmodels/tools/docstring.py | BSD-3-Clause |
def remove_parameters(self, parameters):
"""
Parameters
----------
parameters : str, list[str]
The names of the parameters to remove.
"""
if self._docstring is None:
            # Protection against python -OO execution (docstrings are stripped)
return
if isinstance(parameters, str):
parameters = [parameters]
repl = [
param
for param in self._ds["Parameters"]
if param.name not in parameters
]
if len(repl) + len(parameters) != len(self._ds["Parameters"]):
raise ValueError("One or more parameters were not found.")
self._ds["Parameters"] = repl | Parameters
----------
parameters : str, list[str]
The names of the parameters to remove. | remove_parameters | python | statsmodels/statsmodels | statsmodels/tools/docstring.py | https://github.com/statsmodels/statsmodels/blob/master/statsmodels/tools/docstring.py | BSD-3-Clause |
def insert_parameters(self, after, parameters):
"""
Parameters
----------
after : {None, str}
            If None, insert the parameters before the first parameter in the
docstring.
parameters : Parameter, list[Parameter]
            A Parameter or a list of Parameters.
"""
if self._docstring is None:
            # Protection against python -OO execution (docstrings are stripped)
return
if isinstance(parameters, Parameter):
parameters = [parameters]
if after is None:
self._ds["Parameters"] = parameters + self._ds["Parameters"]
else:
loc = -1
for i, param in enumerate(self._ds["Parameters"]):
if param.name == after:
loc = i + 1
break
if loc < 0:
raise ValueError()
params = self._ds["Parameters"][:loc] + parameters
params += self._ds["Parameters"][loc:]
self._ds["Parameters"] = params | Parameters
----------
after : {None, str}
            If None, insert the parameters before the first parameter in the
docstring.
parameters : Parameter, list[Parameter]
            A Parameter or a list of Parameters. | insert_parameters | python | statsmodels/statsmodels | statsmodels/tools/docstring.py | https://github.com/statsmodels/statsmodels/blob/master/statsmodels/tools/docstring.py | BSD-3-Clause |
def replace_block(self, block_name, block):
"""
Parameters
----------
block_name : str
Name of the block to replace, e.g., 'Summary'.
block : object
The replacement block. The structure of the replacement block must
match how the block is stored by NumpyDocString.
"""
if self._docstring is None:
            # Protection against python -OO execution (docstrings are stripped)
return
block_name = " ".join(map(str.capitalize, block_name.split(" ")))
if block_name not in self._ds:
raise ValueError(
"{} is not a block in the docstring".format(block_name)
)
if not isinstance(block, list) and isinstance(
self._ds[block_name], list
):
block = [block]
self._ds[block_name] = block | Parameters
----------
block_name : str
Name of the block to replace, e.g., 'Summary'.
block : object
The replacement block. The structure of the replacement block must
match how the block is stored by NumpyDocString. | replace_block | python | statsmodels/statsmodels | statsmodels/tools/docstring.py | https://github.com/statsmodels/statsmodels/blob/master/statsmodels/tools/docstring.py | BSD-3-Clause |
def remove_parameters(docstring, parameters):
"""
Parameters
----------
docstring : str
The docstring to modify.
parameters : str, list[str]
The names of the parameters to remove.
Returns
-------
str
The modified docstring.
"""
if docstring is None:
return
ds = Docstring(docstring)
ds.remove_parameters(parameters)
return str(ds) | Parameters
----------
docstring : str
The docstring to modify.
parameters : str, list[str]
The names of the parameters to remove.
Returns
-------
str
The modified docstring. | remove_parameters | python | statsmodels/statsmodels | statsmodels/tools/docstring.py | https://github.com/statsmodels/statsmodels/blob/master/statsmodels/tools/docstring.py | BSD-3-Clause |
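A hedged sketch of the helper above on a minimal NumPy-style docstring (the docstring text is made up for illustration):

from statsmodels.tools.docstring import remove_parameters

doc = """
Summary line.

Parameters
----------
x : int
    Kept parameter.
y : int
    Dropped parameter.
"""

new_doc = remove_parameters(doc, 'y')
assert 'x : int' in new_doc
assert 'y : int' not in new_doc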
def indent(text, prefix, predicate=None):
"""
Non-protected indent
Parameters
----------
text : {None, str}
If None, function always returns ""
prefix : str
Prefix to add to the start of each line
predicate : callable, optional
If provided, 'prefix' will only be added to the lines
where 'predicate(line)' is True. If 'predicate' is not provided,
it will default to adding 'prefix' to all non-empty lines that do not
consist solely of whitespace characters.
Returns
    -------
    str
        The indented text; "" if `text` is None.
"""
if text is None:
return ""
return textwrap.indent(text, prefix, predicate=predicate) | Non-protected indent
Parameters
----------
text : {None, str}
If None, function always returns ""
prefix : str
Prefix to add to the start of each line
predicate : callable, optional
If provided, 'prefix' will only be added to the lines
where 'predicate(line)' is True. If 'predicate' is not provided,
it will default to adding 'prefix' to all non-empty lines that do not
consist solely of whitespace characters.
Returns
    -------
    str
        The indented text; "" if `text` is None. | indent | python | statsmodels/statsmodels | statsmodels/tools/docstring.py | https://github.com/statsmodels/statsmodels/blob/master/statsmodels/tools/docstring.py | BSD-3-Clause |
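A two-line sanity check of the None-tolerant behavior described above (illustrative):

from statsmodels.tools.docstring import indent

assert indent(None, '    ') == ''
assert indent('a\nb', '  ') == '  a\n  b'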
def transform(self, data):
"""standardize the data using the stored transformation
"""
# could use scipy.stats.zscore instead
if self.mean is None:
return np.asarray(data) / self.scale
else:
return (np.asarray(data) - self.mean) / self.scale | standardize the data using the stored transformation | transform | python | statsmodels/statsmodels | statsmodels/tools/transform_model.py | https://github.com/statsmodels/statsmodels/blob/master/statsmodels/tools/transform_model.py | BSD-3-Clause |
def transform_params(self, params):
"""Transform parameters of the standardized model to the original model
Parameters
----------
params : ndarray
parameters estimated with the standardized model
Returns
-------
params_new : ndarray
parameters transformed to the parameterization of the original
model
"""
params_new = params / self.scale
if self.const_idx != 'n':
params_new[self.const_idx] -= (params_new * self.mean).sum()
return params_new | Transform parameters of the standardized model to the original model
Parameters
----------
params : ndarray
parameters estimated with the standardized model
Returns
-------
params_new : ndarray
parameters transformed to the parameterization of the original
model | transform_params | python | statsmodels/statsmodels | statsmodels/tools/transform_model.py | https://github.com/statsmodels/statsmodels/blob/master/statsmodels/tools/transform_model.py | BSD-3-Clause |
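A hedged round-trip sketch for the two methods above. The enclosing class name StandardizeTransform is an assumption inferred from the file statsmodels/tools/transform_model.py; only two of its methods are shown here.

import numpy as np
from statsmodels.tools.transform_model import StandardizeTransform  # assumed class name

x = np.random.default_rng(0).normal(5.0, 2.0, size=(100, 3))
transf = StandardizeTransform(x)     # stores the mean and scale of x
xs = transf.transform(x)
assert np.allclose(xs.mean(0), 0.0, atol=1e-12)   # columns are demeaned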
def interpret_data(data, colnames=None, rownames=None):
"""
Convert passed data structure to form required by estimation classes
Parameters
----------
data : array_like
colnames : sequence or None
May be part of data structure
rownames : sequence or None
Returns
-------
(values, colnames, rownames) : (homogeneous ndarray, list)
"""
if isinstance(data, np.ndarray):
values = np.asarray(data)
if colnames is None:
colnames = ['Y_%d' % i for i in range(values.shape[1])]
elif is_data_frame(data):
# XXX: hack
data = data.dropna()
values = data.values
colnames = data.columns
rownames = data.index
else: # pragma: no cover
raise TypeError('Cannot handle input type {typ}'
.format(typ=type(data).__name__))
if not isinstance(colnames, list):
colnames = list(colnames)
# sanity check
if len(colnames) != values.shape[1]:
raise ValueError('length of colnames does not match number '
'of columns in data')
if rownames is not None and len(rownames) != len(values):
raise ValueError('length of rownames does not match number '
'of rows in data')
return values, colnames, rownames | Convert passed data structure to form required by estimation classes
Parameters
----------
data : array_like
colnames : sequence or None
May be part of data structure
rownames : sequence or None
Returns
-------
(values, colnames, rownames) : (homogeneous ndarray, list) | interpret_data | python | statsmodels/statsmodels | statsmodels/tools/data.py | https://github.com/statsmodels/statsmodels/blob/master/statsmodels/tools/data.py | BSD-3-Clause |
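A minimal demonstration of the default naming above (illustrative):

import numpy as np
from statsmodels.tools.data import interpret_data

vals, cols, rows = interpret_data(np.arange(6.0).reshape(3, 2))
assert cols == ['Y_0', 'Y_1']      # generated when colnames is None
assert rows is None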
def _is_recarray(data):
"""
Returns true if data is a recarray
"""
if NP_LT_2:
return isinstance(data, np.core.recarray)
else:
return isinstance(data, np.rec.recarray) | Returns true if data is a recarray | _is_recarray | python | statsmodels/statsmodels | statsmodels/tools/data.py | https://github.com/statsmodels/statsmodels/blob/master/statsmodels/tools/data.py | BSD-3-Clause |
def _as_array_with_name(obj, default_name):
"""
    Call np.asarray() on obj and attempt to get the name if it's a Series.
Parameters
----------
obj: pd.Series
Series to convert to an array
default_name: str
The default name to return in case the object isn't a pd.Series or has
no name attribute.
Returns
-------
array_and_name: tuple[np.ndarray, str]
        The data cast to np.ndarray and the series name or None
"""
if is_series(obj):
return (np.asarray(obj), obj.name)
    return (np.asarray(obj), default_name) | Call np.asarray() on obj and attempt to get the name if it's a Series.
Parameters
----------
obj: pd.Series
Series to convert to an array
default_name: str
The default name to return in case the object isn't a pd.Series or has
no name attribute.
Returns
-------
array_and_name: tuple[np.ndarray, str]
        The data cast to np.ndarray and the series name or None | _as_array_with_name | python | statsmodels/statsmodels | statsmodels/tools/data.py | https://github.com/statsmodels/statsmodels/blob/master/statsmodels/tools/data.py | BSD-3-Clause |
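A sketch of the fallback behavior above (illustrative; the helper is private):

import pandas as pd
from statsmodels.tools.data import _as_array_with_name

arr, name = _as_array_with_name(pd.Series([1, 2], name='y'), 'fallback')
assert name == 'y'
arr, name = _as_array_with_name([1, 2], 'fallback')
assert name == 'fallback'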
def parallel_func(func, n_jobs, verbose=5):
"""Return parallel instance with delayed function
Util function to use joblib only if available
Parameters
----------
func : callable
A function
n_jobs : int
Number of jobs to run in parallel
verbose : int
Verbosity level
Returns
-------
parallel : instance of joblib.Parallel or list
The parallel object
my_func : callable
        delayed(func) if running in parallel, otherwise plain func
n_jobs : int
Number of jobs >= 0
Examples
--------
>>> from math import sqrt
>>> from statsmodels.tools.parallel import parallel_func
>>> parallel, p_func, n_jobs = parallel_func(sqrt, n_jobs=-1, verbose=0)
>>> print(n_jobs)
>>> parallel(p_func(i**2) for i in range(10))
"""
try:
try:
from joblib import Parallel, delayed
except ImportError:
from sklearn.externals.joblib import Parallel, delayed
parallel = Parallel(n_jobs, verbose=verbose)
my_func = delayed(func)
if n_jobs == -1:
try:
import multiprocessing
n_jobs = multiprocessing.cpu_count()
except (ImportError, NotImplementedError):
import warnings
warnings.warn(module_unavailable_doc.format('multiprocessing'),
ModuleUnavailableWarning)
n_jobs = 1
except ImportError:
import warnings
warnings.warn(module_unavailable_doc.format('joblib'),
ModuleUnavailableWarning)
n_jobs = 1
my_func = func
parallel = list
return parallel, my_func, n_jobs | Return parallel instance with delayed function
Util function to use joblib only if available
Parameters
----------
func : callable
A function
n_jobs : int
Number of jobs to run in parallel
verbose : int
Verbosity level
Returns
-------
parallel : instance of joblib.Parallel or list
The parallel object
my_func : callable
        delayed(func) if running in parallel, otherwise plain func
n_jobs : int
Number of jobs >= 0
Examples
--------
>>> from math import sqrt
>>> from statsmodels.tools.parallel import parallel_func
>>> parallel, p_func, n_jobs = parallel_func(sqrt, n_jobs=-1, verbose=0)
>>> print(n_jobs)
>>> parallel(p_func(i**2) for i in range(10)) | parallel_func | python | statsmodels/statsmodels | statsmodels/tools/parallel.py | https://github.com/statsmodels/statsmodels/blob/master/statsmodels/tools/parallel.py | BSD-3-Clause |
def discrepancy(sample, bounds=None):
"""Discrepancy.
Compute the centered discrepancy on a given sample.
It is a measure of the uniformity of the points in the parameter space.
The lower the value is, the better the coverage of the parameter space is.
Parameters
----------
sample : array_like (n_samples, k_vars)
The sample to compute the discrepancy from.
bounds : tuple or array_like ([min, k_vars], [max, k_vars])
        Desired range of transformed data. The transformation applies the
        bounds to the sample, not to the theoretical unit-cube space. Thus
        the min and max values of the sample will coincide with the bounds.
Returns
-------
discrepancy : float
Centered discrepancy.
References
----------
[1] Fang et al. "Design and modeling for computer experiments",
Computer Science and Data Analysis Series Science and Data Analysis
Series, 2006.
"""
sample = np.asarray(sample)
n_sample, dim = sample.shape
# Sample scaling from bounds to unit hypercube
if bounds is not None:
min_ = bounds.min(axis=0)
max_ = bounds.max(axis=0)
sample = (sample - min_) / (max_ - min_)
abs_ = abs(sample - 0.5)
disc1 = np.sum(np.prod(1 + 0.5 * abs_ - 0.5 * abs_ ** 2, axis=1))
prod_arr = 1
for i in range(dim):
s0 = sample[:, i]
prod_arr *= (1 +
0.5 * abs(s0[:, None] - 0.5) + 0.5 * abs(s0 - 0.5) -
0.5 * abs(s0[:, None] - s0))
disc2 = prod_arr.sum()
c2 = ((13.0 / 12.0) ** dim - 2.0 / n_sample * disc1 +
1.0 / (n_sample ** 2) * disc2)
return c2 | Discrepancy.
Compute the centered discrepancy on a given sample.
It is a measure of the uniformity of the points in the parameter space.
The lower the value is, the better the coverage of the parameter space is.
Parameters
----------
sample : array_like (n_samples, k_vars)
The sample to compute the discrepancy from.
bounds : tuple or array_like ([min, k_vars], [max, k_vars])
        Desired range of transformed data. The transformation applies the
        bounds to the sample, not to the theoretical unit-cube space. Thus
        the min and max values of the sample will coincide with the bounds.
Returns
-------
discrepancy : float
Centered discrepancy.
References
----------
[1] Fang et al. "Design and modeling for computer experiments",
Computer Science and Data Analysis Series Science and Data Analysis
Series, 2006. | discrepancy | python | statsmodels/statsmodels | statsmodels/tools/sequences.py | https://github.com/statsmodels/statsmodels/blob/master/statsmodels/tools/sequences.py | BSD-3-Clause |
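An illustrative comparison (not from the source): a regular 2x2 grid should cover the unit square better, i.e. have lower centered discrepancy, than four clumped points.

import numpy as np
from statsmodels.tools.sequences import discrepancy

grid = np.array([[0.25, 0.25], [0.25, 0.75], [0.75, 0.25], [0.75, 0.75]])
clump = np.array([[0.10, 0.10], [0.12, 0.10], [0.10, 0.12], [0.12, 0.12]])
assert discrepancy(grid) < discrepancy(clump)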
def primes_from_2_to(n):
"""Prime numbers from 2 to *n*.
Parameters
----------
n : int
        Upper bound (exclusive) with ``n >= 6``.
Returns
-------
primes : list(int)
Primes in ``2 <= p < n``.
References
----------
[1] `StackOverflow <https://stackoverflow.com/questions/2068372>`_.
"""
sieve = np.ones(n // 3 + (n % 6 == 2), dtype=bool)
for i in range(1, int(n ** 0.5) // 3 + 1):
if sieve[i]:
k = 3 * i + 1 | 1
sieve[k * k // 3::2 * k] = False
sieve[k * (k - 2 * (i & 1) + 4) // 3::2 * k] = False
return np.r_[2, 3, ((3 * np.nonzero(sieve)[0][1:] + 1) | 1)] | Prime numbers from 2 to *n*.
Parameters
----------
n : int
        Upper bound (exclusive) with ``n >= 6``.
Returns
-------
primes : list(int)
Primes in ``2 <= p < n``.
References
----------
[1] `StackOverflow <https://stackoverflow.com/questions/2068372>`_. | primes_from_2_to | python | statsmodels/statsmodels | statsmodels/tools/sequences.py | https://github.com/statsmodels/statsmodels/blob/master/statsmodels/tools/sequences.py | BSD-3-Clause |
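A quick check of the sieve above against a hand-verified list (note the bound is exclusive):

from statsmodels.tools.sequences import primes_from_2_to

assert list(primes_from_2_to(30)) == [2, 3, 5, 7, 11, 13, 17, 19, 23, 29]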
def n_primes(n):
"""List of the n-first prime numbers.
Parameters
----------
n : int
Number of prime numbers wanted.
Returns
-------
primes : list(int)
List of primes.
"""
primes = [2, 3, 5, 7, 11, 13, 17, 19, 23, 29, 31, 37, 41, 43, 47, 53, 59,
61, 67, 71, 73, 79, 83, 89, 97, 101, 103, 107, 109, 113, 127,
131, 137, 139, 149, 151, 157, 163, 167, 173, 179, 181, 191, 193,
197, 199, 211, 223, 227, 229, 233, 239, 241, 251, 257, 263, 269,
271, 277, 281, 283, 293, 307, 311, 313, 317, 331, 337, 347, 349,
353, 359, 367, 373, 379, 383, 389, 397, 401, 409, 419, 421, 431,
433, 439, 443, 449, 457, 461, 463, 467, 479, 487, 491, 499, 503,
509, 521, 523, 541, 547, 557, 563, 569, 571, 577, 587, 593, 599,
601, 607, 613, 617, 619, 631, 641, 643, 647, 653, 659, 661, 673,
677, 683, 691, 701, 709, 719, 727, 733, 739, 743, 751, 757, 761,
769, 773, 787, 797, 809, 811, 821, 823, 827, 829, 839, 853, 857,
859, 863, 877, 881, 883, 887, 907, 911, 919, 929, 937, 941, 947,
953, 967, 971, 977, 983, 991, 997][:n]
if len(primes) < n:
big_number = 10
        while 'Not enough primes':
primes = primes_from_2_to(big_number)[:n]
if len(primes) == n:
break
big_number += 1000
return primes | List of the n-first prime numbers.
Parameters
----------
n : int
Number of prime numbers wanted.
Returns
-------
primes : list(int)
List of primes. | n_primes | python | statsmodels/statsmodels | statsmodels/tools/sequences.py | https://github.com/statsmodels/statsmodels/blob/master/statsmodels/tools/sequences.py | BSD-3-Clause |
def van_der_corput(n_sample, base=2, start_index=0):
"""Van der Corput sequence.
Pseudo-random number generator based on a b-adic expansion.
Parameters
----------
n_sample : int
        Number of elements in the sequence.
base : int
Base of the sequence.
start_index : int
Index to start the sequence from.
Returns
-------
sequence : list (n_samples,)
Sequence of Van der Corput.
"""
sequence = []
for i in range(start_index, start_index + n_sample):
n_th_number, denom = 0., 1.
quotient = i
while quotient > 0:
quotient, remainder = divmod(quotient, base)
denom *= base
n_th_number += remainder / denom
sequence.append(n_th_number)
return sequence | Van der Corput sequence.
Pseudo-random number generator based on a b-adic expansion.
Parameters
----------
n_sample : int
        Number of elements in the sequence.
base : int
Base of the sequence.
start_index : int
Index to start the sequence from.
Returns
-------
sequence : list (n_samples,)
Sequence of Van der Corput. | van_der_corput | python | statsmodels/statsmodels | statsmodels/tools/sequences.py | https://github.com/statsmodels/statsmodels/blob/master/statsmodels/tools/sequences.py | BSD-3-Clause |
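The first base-2 points are the bit-reversed binary fractions, which makes the function easy to spot-check (illustrative):

from statsmodels.tools.sequences import van_der_corput

# 0, 1/2, 1/4, 3/4, 1/8, ... for start_index=0
assert van_der_corput(5, base=2) == [0.0, 0.5, 0.25, 0.75, 0.125]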
def halton(dim, n_sample, bounds=None, start_index=0):
"""Halton sequence.
    Pseudo-random number generator that generalizes the Van der Corput
    sequence to multiple dimensions. The Halton sequence uses the base-2
    Van der Corput sequence for the first dimension, base 3 for the second,
    and the n-th prime as base for the n-th dimension.
Parameters
----------
dim : int
Dimension of the parameter space.
n_sample : int
        Number of samples to generate in the parameter space.
bounds : tuple or array_like ([min, k_vars], [max, k_vars])
        Desired range of transformed data. The transformation applies the
        bounds to the sample, not to the theoretical unit-cube space. Thus
        the min and max values of the sample will coincide with the bounds.
start_index : int
Index to start the sequence from.
Returns
-------
sequence : array_like (n_samples, k_vars)
Sequence of Halton.
References
----------
[1] Halton, "On the efficiency of certain quasi-random sequences of points
in evaluating multi-dimensional integrals", Numerische Mathematik, 1960.
Examples
--------
Generate samples from a low discrepancy sequence of Halton.
>>> from statsmodels.tools import sequences
>>> sample = sequences.halton(dim=2, n_sample=5)
Compute the quality of the sample using the discrepancy criterion.
>>> uniformity = sequences.discrepancy(sample)
    If one wants to continue an existing design, extra points can be obtained.
>>> sample_continued = sequences.halton(dim=2, n_sample=5, start_index=5)
"""
base = n_primes(dim)
# Generate a sample using a Van der Corput sequence per dimension.
sample = [van_der_corput(n_sample + 1, bdim, start_index) for bdim in base]
sample = np.array(sample).T[1:]
# Sample scaling from unit hypercube to feature range
if bounds is not None:
min_ = bounds.min(axis=0)
max_ = bounds.max(axis=0)
sample = sample * (max_ - min_) + min_
return sample | Halton sequence.
    Pseudo-random number generator that generalizes the Van der Corput
    sequence to multiple dimensions. The Halton sequence uses the base-2
    Van der Corput sequence for the first dimension, base 3 for the second,
    and the n-th prime as base for the n-th dimension.
Parameters
----------
dim : int
Dimension of the parameter space.
n_sample : int
        Number of samples to generate in the parameter space.
bounds : tuple or array_like ([min, k_vars], [max, k_vars])
        Desired range of transformed data. The transformation applies the
        bounds to the sample, not to the theoretical unit-cube space. Thus
        the min and max values of the sample will coincide with the bounds.
start_index : int
Index to start the sequence from.
Returns
-------
sequence : array_like (n_samples, k_vars)
Sequence of Halton.
References
----------
[1] Halton, "On the efficiency of certain quasi-random sequences of points
in evaluating multi-dimensional integrals", Numerische Mathematik, 1960.
Examples
--------
Generate samples from a low discrepancy sequence of Halton.
>>> from statsmodels.tools import sequences
>>> sample = sequences.halton(dim=2, n_sample=5)
Compute the quality of the sample using the discrepancy criterion.
>>> uniformity = sequences.discrepancy(sample)
    If one wants to continue an existing design, extra points can be obtained.
>>> sample_continued = sequences.halton(dim=2, n_sample=5, start_index=5) | halton | python | statsmodels/statsmodels | statsmodels/tools/sequences.py | https://github.com/statsmodels/statsmodels/blob/master/statsmodels/tools/sequences.py | BSD-3-Clause |
def combine_indices(groups, prefix='', sep='.', return_labels=False):
"""use np.unique to get integer group indices for product, intersection
"""
if isinstance(groups, tuple):
groups = np.column_stack(groups)
else:
groups = np.asarray(groups)
dt = groups.dtype
is2d = (groups.ndim == 2) # need to store
if is2d:
ncols = groups.shape[1]
if not groups.flags.c_contiguous:
groups = np.array(groups, order='C')
groups_ = groups.view([('', groups.dtype)] * groups.shape[1])
else:
groups_ = groups
uni, uni_idx, uni_inv = np.unique(groups_, return_index=True,
return_inverse=True)
if is2d:
uni = uni.view(dt).reshape(-1, ncols)
# avoiding a view would be
# for t in uni.dtype.fields.values():
# assert (t[0] == dt)
#
# uni.dtype = dt
# uni.shape = (uni.size//ncols, ncols)
if return_labels:
label = [(prefix+sep.join(['%s']*len(uni[0]))) % tuple(ii)
for ii in uni]
return uni_inv, uni_idx, uni, label
else:
return uni_inv, uni_idx, uni | use np.unique to get integer group indices for product, intersection | combine_indices | python | statsmodels/statsmodels | statsmodels/tools/grouputils.py | https://github.com/statsmodels/statsmodels/blob/master/statsmodels/tools/grouputils.py | BSD-3-Clause |
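A hedged sketch (not from the source) of combining two label arrays into one integer index; the ravel() guards against shape differences in np.unique's return_inverse across numpy versions.

import numpy as np
from statsmodels.tools.grouputils import combine_indices

g1 = np.array([0, 0, 1, 1])
g2 = np.array([0, 1, 0, 1])
uni_inv, uni_idx, uni = combine_indices((g1, g2))
uni_inv = np.asarray(uni_inv).ravel()
# uni holds the distinct (g1, g2) pairs; uni_inv maps each row to its pair
assert (uni[uni_inv] == np.column_stack((g1, g2))).all()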
def group_sums(x, group, use_bincount=True):
"""simple bincount version, again
group : ndarray, integer
assumed to be consecutive integers
no dtype checking because I want to raise in that case
uses loop over columns of x
for comparison, simple python loop
"""
x = np.asarray(x)
group = np.asarray(group).squeeze()
if x.ndim == 1:
x = x[:, None]
elif x.ndim > 2 and use_bincount:
raise ValueError('not implemented yet')
if use_bincount:
# re-label groups or bincount takes too much memory
if np.max(group) > 2 * x.shape[0]:
group = pd.factorize(group)[0]
return np.array(
[
np.bincount(group, weights=x[:, col])
for col in range(x.shape[1])
]
)
else:
uniques = np.unique(group)
result = np.zeros([len(uniques)] + list(x.shape[1:]))
for ii, cat in enumerate(uniques):
result[ii] = x[group == cat].sum(0)
return result | simple bincount version, again
group : ndarray, integer
assumed to be consecutive integers
no dtype checking because I want to raise in that case
uses loop over columns of x
for comparison, simple python loop | group_sums | python | statsmodels/statsmodels | statsmodels/tools/grouputils.py | https://github.com/statsmodels/statsmodels/blob/master/statsmodels/tools/grouputils.py | BSD-3-Clause |
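Minimal use of the bincount path above (illustrative); note the result has one row per column of x and one column per group.

import numpy as np
from statsmodels.tools.grouputils import group_sums

x = np.array([[1.0, 10.0], [2.0, 20.0], [3.0, 30.0]])
group = np.array([0, 0, 1])
assert np.allclose(group_sums(x, group), [[3.0, 3.0], [30.0, 30.0]])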
def group_sums_dummy(x, group_dummy):
"""sum by groups given group dummy variable
group_dummy can be either ndarray or sparse matrix
"""
if data_util._is_using_ndarray_type(group_dummy, None):
return np.dot(x.T, group_dummy)
else: # check for sparse
return x.T * group_dummy | sum by groups given group dummy variable
group_dummy can be either ndarray or sparse matrix | group_sums_dummy | python | statsmodels/statsmodels | statsmodels/tools/grouputils.py | https://github.com/statsmodels/statsmodels/blob/master/statsmodels/tools/grouputils.py | BSD-3-Clause |
def dummy_sparse(groups):
"""create a sparse indicator from a group array with integer labels
Parameters
----------
groups : ndarray, int, 1d (nobs,)
an array of group indicators for each observation. Group levels are
assumed to be defined as consecutive integers, i.e. range(n_groups)
where n_groups is the number of group levels. A group level with no
observations for it will still produce a column of zeros.
Returns
-------
indi : ndarray, int8, 2d (nobs, n_groups)
an indicator array with one row per observation, that has 1 in the
column of the group level for that observation
Examples
--------
>>> g = np.array([0, 0, 2, 1, 1, 2, 0])
>>> indi = dummy_sparse(g)
>>> indi
<7x3 sparse matrix of type '<type 'numpy.int8'>'
with 7 stored elements in Compressed Sparse Row format>
>>> indi.todense()
matrix([[1, 0, 0],
[1, 0, 0],
[0, 0, 1],
[0, 1, 0],
[0, 1, 0],
[0, 0, 1],
[1, 0, 0]], dtype=int8)
current behavior with missing groups
>>> g = np.array([0, 0, 2, 0, 2, 0])
>>> indi = dummy_sparse(g)
>>> indi.todense()
matrix([[1, 0, 0],
[1, 0, 0],
[0, 0, 1],
[1, 0, 0],
[0, 0, 1],
[1, 0, 0]], dtype=int8)
"""
from scipy import sparse
indptr = np.arange(len(groups)+1)
data = np.ones(len(groups), dtype=np.int8)
indi = sparse.csr_matrix((data, groups, indptr))
return indi | create a sparse indicator from a group array with integer labels
Parameters
----------
groups : ndarray, int, 1d (nobs,)
an array of group indicators for each observation. Group levels are
assumed to be defined as consecutive integers, i.e. range(n_groups)
where n_groups is the number of group levels. A group level with no
observations for it will still produce a column of zeros.
Returns
-------
indi : ndarray, int8, 2d (nobs, n_groups)
an indicator array with one row per observation, that has 1 in the
column of the group level for that observation
Examples
--------
>>> g = np.array([0, 0, 2, 1, 1, 2, 0])
>>> indi = dummy_sparse(g)
>>> indi
<7x3 sparse matrix of type '<type 'numpy.int8'>'
with 7 stored elements in Compressed Sparse Row format>
>>> indi.todense()
matrix([[1, 0, 0],
[1, 0, 0],
[0, 0, 1],
[0, 1, 0],
[0, 1, 0],
[0, 0, 1],
[1, 0, 0]], dtype=int8)
current behavior with missing groups
>>> g = np.array([0, 0, 2, 0, 2, 0])
>>> indi = dummy_sparse(g)
>>> indi.todense()
matrix([[1, 0, 0],
[1, 0, 0],
[0, 0, 1],
[1, 0, 0],
[0, 0, 1],
[1, 0, 0]], dtype=int8) | dummy_sparse | python | statsmodels/statsmodels | statsmodels/tools/grouputils.py | https://github.com/statsmodels/statsmodels/blob/master/statsmodels/tools/grouputils.py | BSD-3-Clause |
def dummy(self, drop_idx=None, sparse=False, dtype=int):
"""
drop_idx is only available if sparse=False
drop_idx is supposed to index into uni
"""
uni = self.uni
if drop_idx is not None:
idx = lrange(len(uni))
del idx[drop_idx]
uni = uni[idx]
group = self.group
if not sparse:
return (group[:, None] == uni[None, :]).astype(dtype)
else:
return dummy_sparse(self.group_int) | drop_idx is only available if sparse=False
drop_idx is supposed to index into uni | dummy | python | statsmodels/statsmodels | statsmodels/tools/grouputils.py | https://github.com/statsmodels/statsmodels/blob/master/statsmodels/tools/grouputils.py | BSD-3-Clause |
def lag_indices(self, lag):
"""return the index array for lagged values
        Warning: if lag is larger than the number of observations for an
        individual, then no values for that individual are returned.
TODO: for the unbalanced case, I should get the same truncation for
the array with lag=0. From the return of lag_idx we would not know
which individual is missing.
TODO: do I want the full equivalent of lagmat in tsa?
maxlag or lag or lags.
not tested yet
"""
lag_idx = np.asarray(self.groupidx)[:, 1] - lag # asarray or already?
mask_ok = (lag <= lag_idx)
# still an observation that belongs to the same individual
return lag_idx[mask_ok] | return the index array for lagged values
        Warning: if lag is larger than the number of observations for an
        individual, then no values for that individual are returned.
TODO: for the unbalanced case, I should get the same truncation for
the array with lag=0. From the return of lag_idx we would not know
which individual is missing.
TODO: do I want the full equivalent of lagmat in tsa?
maxlag or lag or lags.
not tested yet | lag_indices | python | statsmodels/statsmodels | statsmodels/tools/grouputils.py | https://github.com/statsmodels/statsmodels/blob/master/statsmodels/tools/grouputils.py | BSD-3-Clause |
def _is_hierarchical(x):
"""
Checks if the first item of an array-like object is also array-like
If so, we have a MultiIndex and returns True. Else returns False.
"""
item = x[0]
# is there a better way to do this?
if isinstance(item, (list, tuple, np.ndarray, pd.Series, pd.DataFrame)):
return True
else:
return False | Checks if the first item of an array-like object is also array-like
If so, we have a MultiIndex and returns True. Else returns False. | _is_hierarchical | python | statsmodels/statsmodels | statsmodels/tools/grouputils.py | https://github.com/statsmodels/statsmodels/blob/master/statsmodels/tools/grouputils.py | BSD-3-Clause |
def __init__(self, index, names=None):
"""
index : index-like
Can be pandas MultiIndex or Index or array-like. If array-like
and is a MultipleIndex (more than one grouping variable),
groups are expected to be in each row. E.g., [('red', 1),
('red', 2), ('green', 1), ('green', 2)]
names : list or str, optional
The names to use for the groups. Should be a str if only
one grouping variable is used.
Notes
-----
If index is already a pandas Index then there is no copy.
"""
if isinstance(index, (Index, MultiIndex)):
if names is not None:
if hasattr(index, 'set_names'): # newer pandas
index.set_names(names, inplace=True)
else:
index.names = names
self.index = index
else: # array_like
if _is_hierarchical(index):
self.index = _make_hierarchical_index(index, names)
else:
self.index = Index(index, name=names)
if names is None:
names = _make_generic_names(self.index)
if hasattr(self.index, 'set_names'):
self.index.set_names(names, inplace=True)
else:
self.index.names = names
self.nobs = len(self.index)
self.nlevels = len(self.index.names)
self.slices = None | index : index-like
Can be pandas MultiIndex or Index or array-like. If array-like
and is a MultipleIndex (more than one grouping variable),
groups are expected to be in each row. E.g., [('red', 1),
('red', 2), ('green', 1), ('green', 2)]
names : list or str, optional
The names to use for the groups. Should be a str if only
one grouping variable is used.
Notes
-----
If index is already a pandas Index then there is no copy. | __init__ | python | statsmodels/statsmodels | statsmodels/tools/grouputils.py | https://github.com/statsmodels/statsmodels/blob/master/statsmodels/tools/grouputils.py | BSD-3-Clause |
def reindex(self, index=None, names=None):
"""
Resets the index in-place.
"""
# NOTE: this is not of much use if the rest of the data does not change
# This needs to reset cache
if names is None:
names = self.group_names
        # assigning to the local name `self` would be a no-op; rerun __init__
        self.__init__(index, names)
def get_slices(self, level=0):
"""
Sets the slices attribute to be a list of indices of the sorted
groups for the first index level. I.e., self.slices[0] is the
index where each observation is in the first (sorted) group.
"""
# TODO: refactor this
groups = self.index.get_level_values(level).unique()
groups = np.sort(np.array(groups))
if isinstance(self.index, MultiIndex):
self.slices = [self.index.get_loc_level(x, level=level)[0]
for x in groups]
else:
self.slices = [self.index.get_loc(x) for x in groups] | Sets the slices attribute to be a list of indices of the sorted
groups for the first index level. I.e., self.slices[0] is the
index where each observation is in the first (sorted) group. | get_slices | python | statsmodels/statsmodels | statsmodels/tools/grouputils.py | https://github.com/statsmodels/statsmodels/blob/master/statsmodels/tools/grouputils.py | BSD-3-Clause |
def count_categories(self, level=0):
"""
Sets the attribute counts to equal the bincount of the (integer-valued)
labels.
"""
# TODO: refactor this not to set an attribute. Why would we do this?
self.counts = np.bincount(self.labels[level]) | Sets the attribute counts to equal the bincount of the (integer-valued)
labels. | count_categories | python | statsmodels/statsmodels | statsmodels/tools/grouputils.py | https://github.com/statsmodels/statsmodels/blob/master/statsmodels/tools/grouputils.py | BSD-3-Clause |
def check_index(self, is_sorted=True, unique=True, index=None):
"""Sanity checks"""
        if index is None:
index = self.index
if is_sorted:
test = pd.DataFrame(lrange(len(index)), index=index)
            test_sorted = test.sort_index()
            if not test.index.equals(test_sorted.index):
                raise Exception('Data is not sorted')
if unique:
if len(index) != len(index.unique()):
raise Exception('Duplicate index entries') | Sanity checks | check_index | python | statsmodels/statsmodels | statsmodels/tools/grouputils.py | https://github.com/statsmodels/statsmodels/blob/master/statsmodels/tools/grouputils.py | BSD-3-Clause |
def sort(self, data, index=None):
"""Applies a (potentially hierarchical) sort operation on a numpy array
or pandas series/dataframe based on the grouping index or a
user-supplied index. Returns an object of the same type as the
original data as well as the matching (sorted) Pandas index.
"""
if index is None:
index = self.index
if data_util._is_using_ndarray_type(data, None):
if data.ndim == 1:
out = pd.Series(data, index=index, copy=True)
out = out.sort_index()
else:
out = pd.DataFrame(data, index=index)
out = out.sort_index(inplace=False) # copies
return np.array(out), out.index
elif data_util._is_using_pandas(data, None):
out = data
out = out.reindex(index) # copies?
out = out.sort_index()
return out, out.index
else:
msg = 'data must be a Numpy array or a Pandas Series/DataFrame'
raise ValueError(msg) | Applies a (potentially hierarchical) sort operation on a numpy array
or pandas series/dataframe based on the grouping index or a
user-supplied index. Returns an object of the same type as the
original data as well as the matching (sorted) Pandas index. | sort | python | statsmodels/statsmodels | statsmodels/tools/grouputils.py | https://github.com/statsmodels/statsmodels/blob/master/statsmodels/tools/grouputils.py | BSD-3-Clause |
def transform_dataframe(self, dataframe, function, level=0, **kwargs):
"""Apply function to each column, by group
Assumes that the dataframe already has a proper index"""
if dataframe.shape[0] != self.nobs:
raise Exception('dataframe does not have the same shape as index')
out = dataframe.groupby(level=level).apply(function, **kwargs)
if 1 in out.shape:
return np.ravel(out)
else:
return np.array(out) | Apply function to each column, by group
Assumes that the dataframe already has a proper index | transform_dataframe | python | statsmodels/statsmodels | statsmodels/tools/grouputils.py | https://github.com/statsmodels/statsmodels/blob/master/statsmodels/tools/grouputils.py | BSD-3-Clause |
def transform_array(self, array, function, level=0, **kwargs):
"""Apply function to each column, by group
"""
if array.shape[0] != self.nobs:
raise Exception('array does not have the same shape as index')
dataframe = pd.DataFrame(array, index=self.index)
return self.transform_dataframe(dataframe, function, level=level,
**kwargs) | Apply function to each column, by group | transform_array | python | statsmodels/statsmodels | statsmodels/tools/grouputils.py | https://github.com/statsmodels/statsmodels/blob/master/statsmodels/tools/grouputils.py | BSD-3-Clause |
def transform_slices(self, array, function, level=0, **kwargs):
"""Apply function to each group. Similar to transform_array but does
not coerce array to a DataFrame and back and only works on a 1D or 2D
numpy array. function is called function(group, group_idx, **kwargs).
"""
array = np.asarray(array)
if array.shape[0] != self.nobs:
raise Exception('array does not have the same shape as index')
# always reset because level is given. need to refactor this.
self.get_slices(level=level)
processed = []
for s in self.slices:
if array.ndim == 2:
subset = array[s, :]
elif array.ndim == 1:
subset = array[s]
processed.append(function(subset, s, **kwargs))
processed = np.array(processed)
return processed.reshape(-1, processed.shape[-1]) | Apply function to each group. Similar to transform_array but does
not coerce array to a DataFrame and back and only works on a 1D or 2D
numpy array. function is called function(group, group_idx, **kwargs). | transform_slices | python | statsmodels/statsmodels | statsmodels/tools/grouputils.py | https://github.com/statsmodels/statsmodels/blob/master/statsmodels/tools/grouputils.py | BSD-3-Clause |
def dummy_sparse(self, level=0):
"""create a sparse indicator from a group array with integer labels
Parameters
----------
groups : ndarray, int, 1d (nobs,)
An array of group indicators for each observation. Group levels
are assumed to be defined as consecutive integers, i.e.
range(n_groups) where n_groups is the number of group levels.
A group level with no observations for it will still produce a
column of zeros.
Returns
-------
indi : ndarray, int8, 2d (nobs, n_groups)
an indicator array with one row per observation, that has 1 in the
column of the group level for that observation
Examples
--------
>>> g = np.array([0, 0, 2, 1, 1, 2, 0])
>>> indi = dummy_sparse(g)
>>> indi
<7x3 sparse matrix of type '<type 'numpy.int8'>'
with 7 stored elements in Compressed Sparse Row format>
>>> indi.todense()
matrix([[1, 0, 0],
[1, 0, 0],
[0, 0, 1],
[0, 1, 0],
[0, 1, 0],
[0, 0, 1],
[1, 0, 0]], dtype=int8)
current behavior with missing groups
>>> g = np.array([0, 0, 2, 0, 2, 0])
>>> indi = dummy_sparse(g)
>>> indi.todense()
matrix([[1, 0, 0],
[1, 0, 0],
[0, 0, 1],
[1, 0, 0],
[0, 0, 1],
[1, 0, 0]], dtype=int8)
"""
indi = dummy_sparse(self.labels[level])
self._dummies = indi | create a sparse indicator from a group array with integer labels
Parameters
----------
groups : ndarray, int, 1d (nobs,)
An array of group indicators for each observation. Group levels
are assumed to be defined as consecutive integers, i.e.
range(n_groups) where n_groups is the number of group levels.
A group level with no observations for it will still produce a
column of zeros.
Returns
-------
indi : ndarray, int8, 2d (nobs, n_groups)
an indicator array with one row per observation, that has 1 in the
column of the group level for that observation
Examples
--------
>>> g = np.array([0, 0, 2, 1, 1, 2, 0])
>>> indi = dummy_sparse(g)
>>> indi
<7x3 sparse matrix of type '<type 'numpy.int8'>'
with 7 stored elements in Compressed Sparse Row format>
>>> indi.todense()
matrix([[1, 0, 0],
[1, 0, 0],
[0, 0, 1],
[0, 1, 0],
[0, 1, 0],
[0, 0, 1],
[1, 0, 0]], dtype=int8)
current behavior with missing groups
>>> g = np.array([0, 0, 2, 0, 2, 0])
>>> indi = dummy_sparse(g)
>>> indi.todense()
matrix([[1, 0, 0],
[1, 0, 0],
[0, 0, 1],
[1, 0, 0],
[0, 0, 1],
[1, 0, 0]], dtype=int8) | dummy_sparse | python | statsmodels/statsmodels | statsmodels/tools/grouputils.py | https://github.com/statsmodels/statsmodels/blob/master/statsmodels/tools/grouputils.py | BSD-3-Clause |
def check_ftest_pvalues(results):
"""
Check that the outputs of `res.wald_test` produces pvalues that
match res.pvalues.
Check that the string representations of `res.summary()` and (possibly)
`res.summary2()` correctly label either the t or z-statistic.
Parameters
----------
results : Results
Raises
------
AssertionError
"""
res = results
use_t = res.use_t
k_vars = len(res.params)
# check default use_t
pvals = [
res.wald_test(np.eye(k_vars)[k], use_f=use_t, scalar=True).pvalue
for k in range(k_vars)
]
assert_allclose(pvals, res.pvalues, rtol=5e-10, atol=1e-25)
# automatic use_f based on results class use_t
pvals = [
res.wald_test(np.eye(k_vars)[k], scalar=True).pvalue
for k in range(k_vars)
]
assert_allclose(pvals, res.pvalues, rtol=5e-10, atol=1e-25)
# TODO: Separate these out into summary/summary2 tests?
# label for pvalues in summary
string_use_t = "P>|z|" if use_t is False else "P>|t|"
summ = str(res.summary())
assert_(string_use_t in summ)
# try except for models that do not have summary2
try:
summ2 = str(res.summary2())
except AttributeError:
pass
else:
assert_(string_use_t in summ2) | Check that the outputs of `res.wald_test` produces pvalues that
match res.pvalues.
Check that the string representations of `res.summary()` and (possibly)
`res.summary2()` correctly label either the t or z-statistic.
Parameters
----------
results : Results
Raises
------
AssertionError | check_ftest_pvalues | python | statsmodels/statsmodels | statsmodels/tools/_testing.py | https://github.com/statsmodels/statsmodels/blob/master/statsmodels/tools/_testing.py | BSD-3-Clause |
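An illustrative smoke run of the checker above on a small OLS fit (assumes the usual statsmodels.api entry point):

import numpy as np
import statsmodels.api as sm
from statsmodels.tools._testing import check_ftest_pvalues

rng = np.random.default_rng(0)
exog = sm.add_constant(rng.normal(size=(50, 2)))
endog = exog @ [1.0, 2.0, -1.0] + rng.normal(size=50)
res = sm.OLS(endog, exog).fit()
check_ftest_pvalues(res)   # raises AssertionError if labels or pvalues mismatch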
def check_predict_types(results):
"""
Check that the `predict` method of the given results object produces the
correct output type.
Parameters
----------
results : Results
Raises
------
AssertionError
"""
res = results
# squeeze to make 1d for single regressor test case
p_exog = np.squeeze(np.asarray(res.model.exog[:2]))
# ignore wrapper for isinstance check
from statsmodels.genmod.generalized_linear_model import GLMResults
from statsmodels.discrete.discrete_model import DiscreteResults
from statsmodels.compat.pandas import (
assert_frame_equal,
assert_series_equal,
)
# possibly unwrap -- GEE has no wrapper
results = getattr(results, "_results", results)
if isinstance(results, (GLMResults, DiscreteResults)):
# SMOKE test only TODO: mark this somehow
res.predict(p_exog)
res.predict(p_exog.tolist())
res.predict(p_exog[0].tolist())
else:
fitted = res.fittedvalues[:2]
assert_allclose(fitted, res.predict(p_exog), rtol=1e-12)
# this needs reshape to column-vector:
assert_allclose(
fitted, res.predict(np.squeeze(p_exog).tolist()), rtol=1e-12
)
# only one prediction:
assert_allclose(
fitted[:1], res.predict(p_exog[0].tolist()), rtol=1e-12
)
assert_allclose(fitted[:1], res.predict(p_exog[0]), rtol=1e-12)
# Check that pandas wrapping works as expected
exog_index = range(len(p_exog))
predicted = res.predict(p_exog)
cls = pd.Series if p_exog.ndim == 1 else pd.DataFrame
predicted_pandas = res.predict(cls(p_exog, index=exog_index))
# predicted.ndim may not match p_exog.ndim because it may be squeezed
# if p_exog has only one column
cls = pd.Series if predicted.ndim == 1 else pd.DataFrame
predicted_expected = cls(predicted, index=exog_index)
if isinstance(predicted_expected, pd.Series):
assert_series_equal(predicted_expected, predicted_pandas)
else:
assert_frame_equal(predicted_expected, predicted_pandas) | Check that the `predict` method of the given results object produces the
correct output type.
Parameters
----------
results : Results
Raises
------
AssertionError | check_predict_types | python | statsmodels/statsmodels | statsmodels/tools/_testing.py | https://github.com/statsmodels/statsmodels/blob/master/statsmodels/tools/_testing.py | BSD-3-Clause |
def check_random_state(seed=None):
"""
Turn `seed` into a random number generator.
Parameters
----------
seed : {None, int, array_like[ints], `numpy.random.Generator`,
`numpy.random.RandomState`, `scipy.stats.qmc.QMCEngine`}, optional
        If `seed` is None, fresh unpredictable entropy will be pulled
from the OS and `numpy.random.Generator` is used.
If `seed` is an int or ``array_like[ints]``, a new ``Generator``
instance is used, seeded with `seed`.
If `seed` is already a ``Generator``, ``RandomState`` or
`scipy.stats.qmc.QMCEngine` instance then
that instance is used.
`scipy.stats.qmc.QMCEngine` requires SciPy >=1.7. It also means
        that the generator only has the method ``random``.
Returns
-------
seed : {`numpy.random.Generator`, `numpy.random.RandomState`,
`scipy.stats.qmc.QMCEngine`}
Random number generator.
"""
if hasattr(stats, "qmc") and \
isinstance(seed, stats.qmc.QMCEngine):
return seed
elif isinstance(seed, np.random.RandomState):
return seed
elif isinstance(seed, np.random.Generator):
return seed
elif seed is not None:
return np.random.default_rng(seed)
else:
import warnings
warnings.warn(_future_warn, FutureWarning)
return np.random.mtrand._rand | Turn `seed` into a random number generator.
Parameters
----------
seed : {None, int, array_like[ints], `numpy.random.Generator`,
`numpy.random.RandomState`, `scipy.stats.qmc.QMCEngine`}, optional
        If `seed` is None, fresh unpredictable entropy will be pulled
from the OS and `numpy.random.Generator` is used.
If `seed` is an int or ``array_like[ints]``, a new ``Generator``
instance is used, seeded with `seed`.
If `seed` is already a ``Generator``, ``RandomState`` or
`scipy.stats.qmc.QMCEngine` instance then
that instance is used.
`scipy.stats.qmc.QMCEngine` requires SciPy >=1.7. It also means
        that the generator only has the method ``random``.
Returns
-------
seed : {`numpy.random.Generator`, `numpy.random.RandomState`,
`scipy.stats.qmc.QMCEngine`}
Random number generator. | check_random_state | python | statsmodels/statsmodels | statsmodels/tools/rng_qrng.py | https://github.com/statsmodels/statsmodels/blob/master/statsmodels/tools/rng_qrng.py | BSD-3-Clause |
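A sketch of the normalization contract above (illustrative): ints become a fresh Generator, existing generators pass through unchanged.

import numpy as np
from statsmodels.tools.rng_qrng import check_random_state

rng = check_random_state(12345)
assert isinstance(rng, np.random.Generator)

existing = np.random.default_rng(0)
assert check_random_state(existing) is existing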
def mse(x1, x2, axis=0):
"""mean squared error
Parameters
----------
x1, x2 : array_like
The performance measure depends on the difference between these two
arrays.
axis : int
axis along which the summary statistic is calculated
Returns
-------
mse : ndarray or float
mean squared error along given axis.
Notes
-----
If ``x1`` and ``x2`` have different shapes, then they need to broadcast.
This uses ``numpy.asanyarray`` to convert the input. Whether this is the
desired result or not depends on the array subclass, for example
numpy matrices will silently produce an incorrect result.
"""
x1 = np.asanyarray(x1)
x2 = np.asanyarray(x2)
return np.mean((x1 - x2) ** 2, axis=axis) | mean squared error
Parameters
----------
x1, x2 : array_like
The performance measure depends on the difference between these two
arrays.
axis : int
axis along which the summary statistic is calculated
Returns
-------
mse : ndarray or float
mean squared error along given axis.
Notes
-----
If ``x1`` and ``x2`` have different shapes, then they need to broadcast.
This uses ``numpy.asanyarray`` to convert the input. Whether this is the
desired result or not depends on the array subclass, for example
numpy matrices will silently produce an incorrect result. | mse | python | statsmodels/statsmodels | statsmodels/tools/eval_measures.py | https://github.com/statsmodels/statsmodels/blob/master/statsmodels/tools/eval_measures.py | BSD-3-Clause |
def rmse(x1, x2, axis=0):
"""root mean squared error
Parameters
----------
x1, x2 : array_like
The performance measure depends on the difference between these two
arrays.
axis : int
axis along which the summary statistic is calculated
Returns
-------
rmse : ndarray or float
root mean squared error along given axis.
Notes
-----
If ``x1`` and ``x2`` have different shapes, then they need to broadcast.
This uses ``numpy.asanyarray`` to convert the input. Whether this is the
desired result or not depends on the array subclass, for example
numpy matrices will silently produce an incorrect result.
"""
x1 = np.asanyarray(x1)
x2 = np.asanyarray(x2)
return np.sqrt(mse(x1, x2, axis=axis)) | root mean squared error
Parameters
----------
x1, x2 : array_like
The performance measure depends on the difference between these two
arrays.
axis : int
axis along which the summary statistic is calculated
Returns
-------
rmse : ndarray or float
root mean squared error along given axis.
Notes
-----
If ``x1`` and ``x2`` have different shapes, then they need to broadcast.
This uses ``numpy.asanyarray`` to convert the input. Whether this is the
desired result or not depends on the array subclass, for example
numpy matrices will silently produce an incorrect result. | rmse | python | statsmodels/statsmodels | statsmodels/tools/eval_measures.py | https://github.com/statsmodels/statsmodels/blob/master/statsmodels/tools/eval_measures.py | BSD-3-Clause |
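A quick sketch tying the two measures above together (made-up numbers, shown only to illustrate the relationship):

import numpy as np
from statsmodels.tools.eval_measures import mse, rmse

actual = np.array([1.0, 2.0, 3.0, 4.0])
pred = np.array([1.1, 1.9, 3.2, 3.7])
# rmse is exactly the square root of mse along the same axis
assert np.allclose(rmse(actual, pred), np.sqrt(mse(actual, pred)))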
def rmspe(y, y_hat, axis=0, zeros=np.nan):
"""
Root Mean Squared Percentage Error
Parameters
----------
y : array_like
The actual value.
y_hat : array_like
The predicted value.
axis : int
Axis along which the summary statistic is calculated
zeros : float
        Value to assign to the percentage error where ``y`` is zero
Returns
-------
rmspe : ndarray or float
Root Mean Squared Percentage Error along given axis.
"""
y_hat = np.asarray(y_hat)
y = np.asarray(y)
error = y - y_hat
loc = y != 0
loc = loc.ravel()
percentage_error = np.full_like(error, zeros)
percentage_error.flat[loc] = error.flat[loc] / y.flat[loc]
    mspe = np.nanmean(percentage_error ** 2, axis=axis)
    # Convert to percent after taking the square root; multiplying inside
    # the root would scale the result by sqrt(100) = 10 instead of 100.
    return np.sqrt(mspe) * 100
Parameters
----------
y : array_like
The actual value.
y_hat : array_like
The predicted value.
axis : int
Axis along which the summary statistic is calculated
zeros : float
    Value to assign to the percentage error where ``y`` is zero
Returns
-------
rmspe : ndarray or float
Root Mean Squared Percentage Error along given axis. | rmspe | python | statsmodels/statsmodels | statsmodels/tools/eval_measures.py | https://github.com/statsmodels/statsmodels/blob/master/statsmodels/tools/eval_measures.py | BSD-3-Clause |
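A sketch of the zero-handling options (illustrative data; the percent scaling assumes the corrected return expression above):

import numpy as np
from statsmodels.tools.eval_measures import rmspe

y = np.array([0.0, 2.0, 4.0])        # first actual value is zero
y_hat = np.array([0.5, 2.2, 3.6])
print(rmspe(y, y_hat))               # zero actuals dropped via NaN (default)
print(rmspe(y, y_hat, zeros=1.0))    # or penalized with a chosen error value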
def maxabs(x1, x2, axis=0):
"""maximum absolute error
Parameters
----------
x1, x2 : array_like
The performance measure depends on the difference between these two
arrays.
axis : int
axis along which the summary statistic is calculated
Returns
-------
maxabs : ndarray or float
maximum absolute difference along given axis.
Notes
-----
If ``x1`` and ``x2`` have different shapes, then they need to broadcast.
This uses ``numpy.asanyarray`` to convert the input. Whether this is the
desired result or not depends on the array subclass.
"""
x1 = np.asanyarray(x1)
x2 = np.asanyarray(x2)
return np.max(np.abs(x1 - x2), axis=axis) | maximum absolute error
Parameters
----------
x1, x2 : array_like
The performance measure depends on the difference between these two
arrays.
axis : int
axis along which the summary statistic is calculated
Returns
-------
maxabs : ndarray or float
maximum absolute difference along given axis.
Notes
-----
If ``x1`` and ``x2`` have different shapes, then they need to broadcast.
This uses ``numpy.asanyarray`` to convert the input. Whether this is the
desired result or not depends on the array subclass. | maxabs | python | statsmodels/statsmodels | statsmodels/tools/eval_measures.py | https://github.com/statsmodels/statsmodels/blob/master/statsmodels/tools/eval_measures.py | BSD-3-Clause |
def meanabs(x1, x2, axis=0):
"""mean absolute error
Parameters
----------
x1, x2 : array_like
The performance measure depends on the difference between these two
arrays.
axis : int
axis along which the summary statistic is calculated
Returns
-------
meanabs : ndarray or float
mean absolute difference along given axis.
Notes
-----
If ``x1`` and ``x2`` have different shapes, then they need to broadcast.
This uses ``numpy.asanyarray`` to convert the input. Whether this is the
desired result or not depends on the array subclass.
"""
x1 = np.asanyarray(x1)
x2 = np.asanyarray(x2)
return np.mean(np.abs(x1 - x2), axis=axis) | mean absolute error
Parameters
----------
x1, x2 : array_like
The performance measure depends on the difference between these two
arrays.
axis : int
axis along which the summary statistic is calculated
Returns
-------
meanabs : ndarray or float
mean absolute difference along given axis.
Notes
-----
If ``x1`` and ``x2`` have different shapes, then they need to broadcast.
This uses ``numpy.asanyarray`` to convert the input. Whether this is the
desired result or not depends on the array subclass. | meanabs | python | statsmodels/statsmodels | statsmodels/tools/eval_measures.py | https://github.com/statsmodels/statsmodels/blob/master/statsmodels/tools/eval_measures.py | BSD-3-Clause |
def medianabs(x1, x2, axis=0):
"""median absolute error
Parameters
----------
x1, x2 : array_like
The performance measure depends on the difference between these two
arrays.
axis : int
axis along which the summary statistic is calculated
Returns
-------
medianabs : ndarray or float
median absolute difference along given axis.
Notes
-----
If ``x1`` and ``x2`` have different shapes, then they need to broadcast.
This uses ``numpy.asanyarray`` to convert the input. Whether this is the
desired result or not depends on the array subclass.
"""
x1 = np.asanyarray(x1)
x2 = np.asanyarray(x2)
return np.median(np.abs(x1 - x2), axis=axis) | median absolute error
Parameters
----------
x1, x2 : array_like
The performance measure depends on the difference between these two
arrays.
axis : int
axis along which the summary statistic is calculated
Returns
-------
medianabs : ndarray or float
median absolute difference along given axis.
Notes
-----
If ``x1`` and ``x2`` have different shapes, then they need to broadcast.
This uses ``numpy.asanyarray`` to convert the input. Whether this is the
desired result or not depends on the array subclass. | medianabs | python | statsmodels/statsmodels | statsmodels/tools/eval_measures.py | https://github.com/statsmodels/statsmodels/blob/master/statsmodels/tools/eval_measures.py | BSD-3-Clause |
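A sketch contrasting the three absolute-error summaries above on data with one outlier (made-up numbers):

import numpy as np
from statsmodels.tools.eval_measures import maxabs, meanabs, medianabs

x1 = np.array([1.0, 2.0, 3.0, 10.0])
x2 = np.array([1.5, 2.0, 2.0, 4.0])  # absolute errors: 0.5, 0.0, 1.0, 6.0
print(maxabs(x1, x2))                # 6.0, dominated by the outlier
print(meanabs(x1, x2))               # 1.875
print(medianabs(x1, x2))             # 0.75, robust to the outlier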
def bias(x1, x2, axis=0):
"""bias, mean error
Parameters
----------
x1, x2 : array_like
The performance measure depends on the difference between these two
arrays.
axis : int
axis along which the summary statistic is calculated
Returns
-------
bias : ndarray or float
bias, or mean difference along given axis.
Notes
-----
If ``x1`` and ``x2`` have different shapes, then they need to broadcast.
This uses ``numpy.asanyarray`` to convert the input. Whether this is the
desired result or not depends on the array subclass.
"""
x1 = np.asanyarray(x1)
x2 = np.asanyarray(x2)
return np.mean(x1 - x2, axis=axis) | bias, mean error
Parameters
----------
x1, x2 : array_like
The performance measure depends on the difference between these two
arrays.
axis : int
axis along which the summary statistic is calculated
Returns
-------
bias : ndarray or float
bias, or mean difference along given axis.
Notes
-----
If ``x1`` and ``x2`` have different shapes, then they need to broadcast.
This uses ``numpy.asanyarray`` to convert the input. Whether this is the
desired result or not depends on the array subclass. | bias | python | statsmodels/statsmodels | statsmodels/tools/eval_measures.py | https://github.com/statsmodels/statsmodels/blob/master/statsmodels/tools/eval_measures.py | BSD-3-Clause |
def medianbias(x1, x2, axis=0):
"""median bias, median error
Parameters
----------
x1, x2 : array_like
The performance measure depends on the difference between these two
arrays.
axis : int
axis along which the summary statistic is calculated
Returns
-------
medianbias : ndarray or float
median bias, or median difference along given axis.
Notes
-----
If ``x1`` and ``x2`` have different shapes, then they need to broadcast.
This uses ``numpy.asanyarray`` to convert the input. Whether this is the
desired result or not depends on the array subclass.
"""
x1 = np.asanyarray(x1)
x2 = np.asanyarray(x2)
return np.median(x1 - x2, axis=axis) | median bias, median error
Parameters
----------
x1, x2 : array_like
The performance measure depends on the difference between these two
arrays.
axis : int
axis along which the summary statistic is calculated
Returns
-------
medianbias : ndarray or float
median bias, or median difference along given axis.
Notes
-----
If ``x1`` and ``x2`` have different shapes, then they need to broadcast.
This uses ``numpy.asanyarray`` to convert the input. Whether this is the
desired result or not depends on the array subclass. | medianbias | python | statsmodels/statsmodels | statsmodels/tools/eval_measures.py | https://github.com/statsmodels/statsmodels/blob/master/statsmodels/tools/eval_measures.py | BSD-3-Clause |
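A sketch showing how the mean and median of the signed errors can disagree (illustrative values):

import numpy as np
from statsmodels.tools.eval_measures import bias, medianbias

x1 = np.array([1.0, 2.0, 3.0])
x2 = np.array([0.5, 2.5, 2.0])       # signed errors: 0.5, -0.5, 1.0
print(bias(x1, x2))                  # 0.333..., pulled up by the largest error
print(medianbias(x1, x2))            # 0.5, the middle signed error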
def vare(x1, x2, ddof=0, axis=0):
"""variance of error
Parameters
----------
x1, x2 : array_like
The performance measure depends on the difference between these two
arrays.
    ddof : int
        delta degrees of freedom; the divisor used in the variance is
        ``nobs - ddof``
    axis : int
        axis along which the summary statistic is calculated
Returns
-------
vare : ndarray or float
variance of difference along given axis.
Notes
-----
If ``x1`` and ``x2`` have different shapes, then they need to broadcast.
This uses ``numpy.asanyarray`` to convert the input. Whether this is the
desired result or not depends on the array subclass.
"""
x1 = np.asanyarray(x1)
x2 = np.asanyarray(x2)
return np.var(x1 - x2, ddof=ddof, axis=axis) | variance of error
Parameters
----------
x1, x2 : array_like
The performance measure depends on the difference between these two
arrays.
ddof : int
    delta degrees of freedom; the divisor used in the variance is
    ``nobs - ddof``
axis : int
    axis along which the summary statistic is calculated
Returns
-------
vare : ndarray or float
variance of difference along given axis.
Notes
-----
If ``x1`` and ``x2`` have different shapes, then they need to broadcast.
This uses ``numpy.asanyarray`` to convert the input. Whether this is the
desired result or not depends on the array subclass. | vare | python | statsmodels/statsmodels | statsmodels/tools/eval_measures.py | https://github.com/statsmodels/statsmodels/blob/master/statsmodels/tools/eval_measures.py | BSD-3-Clause |
def stde(x1, x2, ddof=0, axis=0):
"""standard deviation of error
Parameters
----------
x1, x2 : array_like
The performance measure depends on the difference between these two
arrays.
    ddof : int
        delta degrees of freedom; the divisor used in the standard
        deviation is ``nobs - ddof``
    axis : int
        axis along which the summary statistic is calculated
Returns
-------
stde : ndarray or float
standard deviation of difference along given axis.
Notes
-----
If ``x1`` and ``x2`` have different shapes, then they need to broadcast.
This uses ``numpy.asanyarray`` to convert the input. Whether this is the
desired result or not depends on the array subclass.
"""
x1 = np.asanyarray(x1)
x2 = np.asanyarray(x2)
return np.std(x1 - x2, ddof=ddof, axis=axis) | standard deviation of error
Parameters
----------
x1, x2 : array_like
The performance measure depends on the difference between these two
arrays.
ddof : int
    delta degrees of freedom; the divisor used in the standard
    deviation is ``nobs - ddof``
axis : int
    axis along which the summary statistic is calculated
Returns
-------
stde : ndarray or float
standard deviation of difference along given axis.
Notes
-----
If ``x1`` and ``x2`` have different shapes, then they need to broadcast.
This uses ``numpy.asanyarray`` to convert the input. Whether this is the
desired result or not depends on the array subclass. | stde | python | statsmodels/statsmodels | statsmodels/tools/eval_measures.py | https://github.com/statsmodels/statsmodels/blob/master/statsmodels/tools/eval_measures.py | BSD-3-Clause |
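A sketch of the ``ddof`` pass-through for the two dispersion measures above (illustrative data):

import numpy as np
from statsmodels.tools.eval_measures import stde, vare

x1 = np.array([1.0, 2.0, 3.0, 4.0])
x2 = np.array([1.2, 1.8, 3.1, 4.4])
print(vare(x1, x2))                  # divisor nobs (ddof=0, the default)
print(vare(x1, x2, ddof=1))          # divisor nobs - 1
assert np.allclose(stde(x1, x2, ddof=1), np.sqrt(vare(x1, x2, ddof=1)))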
def iqr(x1, x2, axis=0):
"""
Interquartile range of error
Parameters
----------
x1 : array_like
One of the inputs into the IQR calculation.
x2 : array_like
The other input into the IQR calculation.
axis : {None, int}
axis along which the summary statistic is calculated
Returns
-------
    iqr : {float, ndarray}
Interquartile range along given axis.
Notes
-----
If ``x1`` and ``x2`` have different shapes, then they must broadcast.
"""
x1 = array_like(x1, "x1", dtype=None, ndim=None)
    x2 = array_like(x2, "x2", dtype=None, ndim=None)
if axis is None:
x1 = x1.ravel()
x2 = x2.ravel()
axis = 0
xdiff = np.sort(x1 - x2, axis=axis)
nobs = x1.shape[axis]
idx = np.round((nobs - 1) * np.array([0.25, 0.75])).astype(int)
sl = [slice(None)] * xdiff.ndim
sl[axis] = idx
iqr = np.diff(xdiff[tuple(sl)], axis=axis)
iqr = np.squeeze(iqr) # drop reduced dimension
return iqr | Interquartile range of error
Parameters
----------
x1 : array_like
One of the inputs into the IQR calculation.
x2 : array_like
The other input into the IQR calculation.
axis : {None, int}
axis along which the summary statistic is calculated
Returns
-------
iqr : {float, ndarray}
Interquartile range along given axis.
Notes
-----
If ``x1`` and ``x2`` have different shapes, then they must broadcast. | iqr | python | statsmodels/statsmodels | statsmodels/tools/eval_measures.py | https://github.com/statsmodels/statsmodels/blob/master/statsmodels/tools/eval_measures.py | BSD-3-Clause |
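A sketch of the axis handling (random data, so the printed values are only indicative):

import numpy as np
from statsmodels.tools.eval_measures import iqr

rng = np.random.default_rng(0)
x1 = rng.normal(size=(100, 3))
x2 = np.zeros((100, 3))
print(iqr(x1, x2))                   # one IQR per column (axis=0)
print(iqr(x1, x2, axis=None))        # a single IQR over the flattened errors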
def aic(llf, nobs, df_modelwc):
"""
Akaike information criterion
Parameters
----------
llf : {float, array_like}
value of the loglikelihood
nobs : int
number of observations
df_modelwc : int
number of parameters including constant
Returns
-------
aic : float
information criterion
References
----------
https://en.wikipedia.org/wiki/Akaike_information_criterion
"""
return -2.0 * llf + 2.0 * df_modelwc | Akaike information criterion
Parameters
----------
llf : {float, array_like}
value of the loglikelihood
nobs : int
number of observations
df_modelwc : int
number of parameters including constant
Returns
-------
aic : float
information criterion
References
----------
https://en.wikipedia.org/wiki/Akaike_information_criterion | aic | python | statsmodels/statsmodels | statsmodels/tools/eval_measures.py | https://github.com/statsmodels/statsmodels/blob/master/statsmodels/tools/eval_measures.py | BSD-3-Clause |
def aicc(llf, nobs, df_modelwc):
"""
Akaike information criterion (AIC) with small sample correction
Parameters
----------
llf : {float, array_like}
value of the loglikelihood
nobs : int
number of observations
df_modelwc : int
number of parameters including constant
Returns
-------
aicc : float
information criterion
References
----------
https://en.wikipedia.org/wiki/Akaike_information_criterion#AICc
Notes
-----
Returns +inf if the effective degrees of freedom, defined as
``nobs - df_modelwc - 1.0``, is <= 0.
"""
dof_eff = nobs - df_modelwc - 1.0
if dof_eff > 0:
return -2.0 * llf + 2.0 * df_modelwc * nobs / dof_eff
else:
return np.inf | Akaike information criterion (AIC) with small sample correction
Parameters
----------
llf : {float, array_like}
value of the loglikelihood
nobs : int
number of observations
df_modelwc : int
number of parameters including constant
Returns
-------
aicc : float
information criterion
References
----------
https://en.wikipedia.org/wiki/Akaike_information_criterion#AICc
Notes
-----
Returns +inf if the effective degrees of freedom, defined as
``nobs - df_modelwc - 1.0``, is <= 0. | aicc | python | statsmodels/statsmodels | statsmodels/tools/eval_measures.py | https://github.com/statsmodels/statsmodels/blob/master/statsmodels/tools/eval_measures.py | BSD-3-Clause |
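A sketch comparing the two criteria above, including the degenerate small-sample case (illustrative log-likelihood):

from statsmodels.tools.eval_measures import aic, aicc

llf, nobs, k = -512.3, 40, 6
print(aic(llf, nobs, k))               # 1036.6
print(aicc(llf, nobs, k))              # aic plus the small-sample correction
print(aicc(llf, nobs=7, df_modelwc=6)) # inf: nobs - df_modelwc - 1 <= 0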
def bic(llf, nobs, df_modelwc):
"""
Bayesian information criterion (BIC) or Schwarz criterion
Parameters
----------
llf : {float, array_like}
value of the loglikelihood
nobs : int
number of observations
df_modelwc : int
number of parameters including constant
Returns
-------
bic : float
information criterion
References
----------
https://en.wikipedia.org/wiki/Bayesian_information_criterion
"""
return -2.0 * llf + np.log(nobs) * df_modelwc | Bayesian information criterion (BIC) or Schwarz criterion
Parameters
----------
llf : {float, array_like}
value of the loglikelihood
nobs : int
number of observations
df_modelwc : int
number of parameters including constant
Returns
-------
bic : float
information criterion
References
----------
https://en.wikipedia.org/wiki/Bayesian_information_criterion | bic | python | statsmodels/statsmodels | statsmodels/tools/eval_measures.py | https://github.com/statsmodels/statsmodels/blob/master/statsmodels/tools/eval_measures.py | BSD-3-Clause |
def hqic(llf, nobs, df_modelwc):
"""
Hannan-Quinn information criterion (HQC)
Parameters
----------
llf : {float, array_like}
value of the loglikelihood
nobs : int
number of observations
df_modelwc : int
number of parameters including constant
Returns
-------
hqic : float
information criterion
References
----------
    Hannan, E. J., and B. G. Quinn (1979). "The Determination of the Order
    of an Autoregression." Journal of the Royal Statistical Society,
    Series B, 41, 190-195.
"""
return -2.0 * llf + 2 * np.log(np.log(nobs)) * df_modelwc | Hannan-Quinn information criterion (HQC)
Parameters
----------
llf : {float, array_like}
value of the loglikelihood
nobs : int
number of observations
df_modelwc : int
number of parameters including constant
Returns
-------
hqic : float
information criterion
References
----------
Hannan, E. J., and B. G. Quinn (1979). "The Determination of the Order
of an Autoregression." Journal of the Royal Statistical Society,
Series B, 41, 190-195. | hqic | python | statsmodels/statsmodels | statsmodels/tools/eval_measures.py | https://github.com/statsmodels/statsmodels/blob/master/statsmodels/tools/eval_measures.py | BSD-3-Clause |
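A sketch of how the per-parameter penalties of the likelihood-based criteria grow with the sample size (illustrative log-likelihood):

from statsmodels.tools.eval_measures import aic, bic, hqic

llf, k = -512.3, 6
for nobs in (50, 500):
    # penalties per parameter at these sample sizes:
    # 2 (aic) < 2*log(log(nobs)) (hqic) < log(nobs) (bic)
    print(nobs, aic(llf, nobs, k), hqic(llf, nobs, k), bic(llf, nobs, k))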
def aicc_sigma(sigma2, nobs, df_modelwc, islog=False):
"""
Akaike information criterion (AIC) with small sample correction
Parameters
----------
sigma2 : float
estimate of the residual variance or determinant of Sigma_hat in the
        multivariate case. If `islog` is True, then it is assumed that
        `sigma2` is already on the log scale, for example ``logdetSigma``.
nobs : int
number of observations
df_modelwc : int
number of parameters including constant
Returns
-------
aicc : float
information criterion
Notes
-----
    A constant has been dropped relative to the loglikelihood-based
    information criteria; the values are therefore only useful for
    comparing comparable models.
References
----------
https://en.wikipedia.org/wiki/Akaike_information_criterion#AICc
"""
if not islog:
sigma2 = np.log(sigma2)
return sigma2 + aicc(0, nobs, df_modelwc) / nobs | Akaike information criterion (AIC) with small sample correction
Parameters
----------
sigma2 : float
estimate of the residual variance or determinant of Sigma_hat in the
    multivariate case. If `islog` is True, then it is assumed that
    `sigma2` is already on the log scale, for example ``logdetSigma``.
nobs : int
number of observations
df_modelwc : int
number of parameters including constant
Returns
-------
aicc : float
information criterion
Notes
-----
A constant has been dropped relative to the loglikelihood-based
information criteria; the values are therefore only useful for
comparing comparable models.
References
----------
https://en.wikipedia.org/wiki/Akaike_information_criterion#AICc | aicc_sigma | python | statsmodels/statsmodels | statsmodels/tools/eval_measures.py | https://github.com/statsmodels/statsmodels/blob/master/statsmodels/tools/eval_measures.py | BSD-3-Clause |
def bic_sigma(sigma2, nobs, df_modelwc, islog=False):
"""Bayesian information criterion (BIC) or Schwarz criterion
Parameters
----------
sigma2 : float
estimate of the residual variance or determinant of Sigma_hat in the
        multivariate case. If `islog` is True, then it is assumed that
        `sigma2` is already on the log scale, for example ``logdetSigma``.
nobs : int
number of observations
df_modelwc : int
number of parameters including constant
Returns
-------
bic : float
information criterion
Notes
-----
    A constant has been dropped relative to the loglikelihood-based
    information criteria; the values are therefore only useful for
    comparing comparable models.
References
----------
https://en.wikipedia.org/wiki/Bayesian_information_criterion
"""
if not islog:
sigma2 = np.log(sigma2)
return sigma2 + bic(0, nobs, df_modelwc) / nobs | Bayesian information criterion (BIC) or Schwarz criterion
Parameters
----------
sigma2 : float
estimate of the residual variance or determinant of Sigma_hat in the
    multivariate case. If `islog` is True, then it is assumed that
    `sigma2` is already on the log scale, for example ``logdetSigma``.
nobs : int
number of observations
df_modelwc : int
number of parameters including constant
Returns
-------
bic : float
information criterion
Notes
-----
A constant has been dropped relative to the loglikelihood-based
information criteria; the values are therefore only useful for
comparing comparable models.
References
----------
https://en.wikipedia.org/wiki/Bayesian_information_criterion | bic_sigma | python | statsmodels/statsmodels | statsmodels/tools/eval_measures.py | https://github.com/statsmodels/statsmodels/blob/master/statsmodels/tools/eval_measures.py | BSD-3-Clause |
def hqic_sigma(sigma2, nobs, df_modelwc, islog=False):
"""Hannan-Quinn information criterion (HQC)
Parameters
----------
sigma2 : float
estimate of the residual variance or determinant of Sigma_hat in the
        multivariate case. If `islog` is True, then it is assumed that
        `sigma2` is already on the log scale, for example ``logdetSigma``.
nobs : int
number of observations
df_modelwc : int
number of parameters including constant
Returns
-------
hqic : float
information criterion
Notes
-----
    A constant has been dropped relative to the loglikelihood-based
    information criteria; the values are therefore only useful for
    comparing comparable models.
References
----------
    Hannan, E. J., and B. G. Quinn (1979). "The Determination of the Order
    of an Autoregression." Journal of the Royal Statistical Society,
    Series B, 41, 190-195.
"""
if not islog:
sigma2 = np.log(sigma2)
return sigma2 + hqic(0, nobs, df_modelwc) / nobs | Hannan-Quinn information criterion (HQC)
Parameters
----------
sigma2 : float
estimate of the residual variance or determinant of Sigma_hat in the
    multivariate case. If `islog` is True, then it is assumed that
    `sigma2` is already on the log scale, for example ``logdetSigma``.
nobs : int
number of observations
df_modelwc : int
number of parameters including constant
Returns
-------
hqic : float
information criterion
Notes
-----
A constant has been dropped relative to the loglikelihood-based
information criteria; the values are therefore only useful for
comparing comparable models.
References
----------
Hannan, E. J., and B. G. Quinn (1979). "The Determination of the Order
of an Autoregression." Journal of the Royal Statistical Society,
Series B, 41, 190-195. | hqic_sigma | python | statsmodels/statsmodels | statsmodels/tools/eval_measures.py | https://github.com/statsmodels/statsmodels/blob/master/statsmodels/tools/eval_measures.py | BSD-3-Clause |
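A sketch of the sigma-based variants above, checking the ``islog`` equivalence (illustrative values; as noted, only differences across comparable models are meaningful):

import numpy as np
from statsmodels.tools.eval_measures import aicc_sigma, bic_sigma, hqic_sigma

sigma2, nobs, k = 2.5, 100, 4
# passing the raw variance or its log gives the same criterion value
assert np.allclose(bic_sigma(sigma2, nobs, k),
                   bic_sigma(np.log(sigma2), nobs, k, islog=True))
print(aicc_sigma(sigma2, nobs, k), hqic_sigma(sigma2, nobs, k))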
def approx_fprime(x, f, epsilon=None, args=(), kwargs={}, centered=False):
'''
Gradient of function, or Jacobian if function f returns 1d array
Parameters
----------
x : ndarray
parameters at which the derivative is evaluated
f : function
`f(*((x,)+args), **kwargs)` returning either one value or 1d array
epsilon : float, optional
Stepsize, if None, optimal stepsize is used. This is EPS**(1/2)*x for
`centered` == False and EPS**(1/3)*x for `centered` == True.
args : tuple
Tuple of additional arguments for function `f`.
kwargs : dict
Dictionary of additional keyword arguments for function `f`.
centered : bool
Whether central difference should be returned. If not, does forward
differencing.
Returns
-------
grad : ndarray
gradient or Jacobian
Notes
-----
If f returns a 1d array, it returns a Jacobian. If a 2d array is returned
by f (e.g., with a value for each observation), it returns a 3d array
with the Jacobian of each observation with shape xk x nobs x xk. I.e.,
the Jacobian of the first observation would be [:, 0, :]
'''
n = len(x)
f0 = f(*((x,)+args), **kwargs)
dim = np.atleast_1d(f0).shape # it could be a scalar
grad = np.zeros((n,) + dim, np.promote_types(float, x.dtype))
ei = np.zeros((n,), float)
if not centered:
epsilon = _get_epsilon(x, 2, epsilon, n)
for k in range(n):
ei[k] = epsilon[k]
grad[k, :] = (f(*((x+ei,) + args), **kwargs) - f0)/epsilon[k]
ei[k] = 0.0
else:
epsilon = _get_epsilon(x, 3, epsilon, n) / 2.
for k in range(n):
ei[k] = epsilon[k]
grad[k, :] = (f(*((x+ei,)+args), **kwargs) -
f(*((x-ei,)+args), **kwargs))/(2 * epsilon[k])
ei[k] = 0.0
if n == 1:
return grad.T
else:
return grad.squeeze().T | Gradient of function, or Jacobian if function f returns 1d array
Parameters
----------
x : ndarray
parameters at which the derivative is evaluated
f : function
`f(*((x,)+args), **kwargs)` returning either one value or 1d array
epsilon : float, optional
Stepsize, if None, optimal stepsize is used. This is EPS**(1/2)*x for
`centered` == False and EPS**(1/3)*x for `centered` == True.
args : tuple
Tuple of additional arguments for function `f`.
kwargs : dict
Dictionary of additional keyword arguments for function `f`.
centered : bool
Whether central difference should be returned. If not, does forward
differencing.
Returns
-------
grad : ndarray
gradient or Jacobian
Notes
-----
If f returns a 1d array, it returns a Jacobian. If a 2d array is returned
by f (e.g., with a value for each observation), it returns a 3d array
with the Jacobian of each observation with shape xk x nobs x xk. I.e.,
the Jacobian of the first observation would be [:, 0, :] | approx_fprime | python | statsmodels/statsmodels | statsmodels/tools/numdiff.py | https://github.com/statsmodels/statsmodels/blob/master/statsmodels/tools/numdiff.py | BSD-3-Clause |
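A sketch checking the finite-difference Jacobian against an analytic one (a made-up test function):

import numpy as np
from statsmodels.tools.numdiff import approx_fprime

def f(x):
    return np.array([x[0] * x[1], np.sin(x[0])])

x0 = np.array([1.5, 2.0])
jac = approx_fprime(x0, f, centered=True)
# rows are outputs of f, columns are parameters
analytic = np.array([[x0[1], x0[0]],
                     [np.cos(x0[0]), 0.0]])
assert np.allclose(jac, analytic, atol=1e-6)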
def approx_fprime_cs(x, f, epsilon=None, args=(), kwargs={}):
'''
Calculate gradient or Jacobian with complex step derivative approximation
Parameters
----------
x : ndarray
parameters at which the derivative is evaluated
f : function
`f(*((x,)+args), **kwargs)` returning either one value or 1d array
epsilon : float, optional
Stepsize, if None, optimal stepsize is used. Optimal step-size is
EPS*x. See note.
args : tuple
Tuple of additional arguments for function `f`.
kwargs : dict
Dictionary of additional keyword arguments for function `f`.
Returns
-------
partials : ndarray
array of partial derivatives, Gradient or Jacobian
Notes
-----
The complex-step derivative has truncation error O(epsilon**2), so
truncation error can be eliminated by choosing epsilon to be very small.
The complex-step derivative avoids the problem of round-off error with
small epsilon because there is no subtraction.
'''
# From Guilherme P. de Freitas, numpy mailing list
# May 04 2010 thread "Improvement of performance"
# http://mail.scipy.org/pipermail/numpy-discussion/2010-May/050250.html
n = len(x)
epsilon = _get_epsilon(x, 1, epsilon, n)
increments = np.identity(n) * 1j * epsilon
# TODO: see if this can be vectorized, but usually dim is small
partials = [f(x+ih, *args, **kwargs).imag / epsilon[i]
for i, ih in enumerate(increments)]
return np.array(partials).T | Calculate gradient or Jacobian with complex step derivative approximation
Parameters
----------
x : ndarray
parameters at which the derivative is evaluated
f : function
`f(*((x,)+args), **kwargs)` returning either one value or 1d array
epsilon : float, optional
Stepsize, if None, optimal stepsize is used. Optimal step-size is
EPS*x. See note.
args : tuple
Tuple of additional arguments for function `f`.
kwargs : dict
Dictionary of additional keyword arguments for function `f`.
Returns
-------
partials : ndarray
array of partial derivatives, Gradient or Jacobian
Notes
-----
The complex-step derivative has truncation error O(epsilon**2), so
truncation error can be eliminated by choosing epsilon to be very small.
The complex-step derivative avoids the problem of round-off error with
small epsilon because there is no subtraction. | approx_fprime_cs | python | statsmodels/statsmodels | statsmodels/tools/numdiff.py | https://github.com/statsmodels/statsmodels/blob/master/statsmodels/tools/numdiff.py | BSD-3-Clause |
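A sketch of the complex-step gradient; the function must accept complex inputs, which rules out operations such as ``np.abs`` in intermediate steps (made-up function):

import numpy as np
from statsmodels.tools.numdiff import approx_fprime_cs

def f(x):
    return np.exp(x[0]) + x[1] ** 3

x0 = np.array([0.5, 2.0])
grad = approx_fprime_cs(x0, f)
assert np.allclose(grad, [np.exp(0.5), 3 * 2.0 ** 2], rtol=1e-12)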
def _approx_fprime_cs_scalar(x, f, epsilon=None, args=(), kwargs={}):
'''
Calculate gradient for scalar parameter with complex step derivatives.
This assumes that the function ``f`` is vectorized for a scalar parameter.
The function value ``f(x)`` has then the same shape as the input ``x``.
The derivative returned by this function also has the same shape as ``x``.
Parameters
----------
x : ndarray
Parameters at which the derivative is evaluated.
f : function
`f(*((x,)+args), **kwargs)` returning either one value or 1d array.
epsilon : float, optional
Stepsize, if None, optimal stepsize is used. Optimal step-size is
EPS*x. See note.
args : tuple
Tuple of additional arguments for function `f`.
kwargs : dict
Dictionary of additional keyword arguments for function `f`.
Returns
-------
partials : ndarray
Array of derivatives, gradient evaluated for parameters ``x``.
Notes
-----
The complex-step derivative has truncation error O(epsilon**2), so
truncation error can be eliminated by choosing epsilon to be very small.
The complex-step derivative avoids the problem of round-off error with
small epsilon because there is no subtraction.
'''
# From Guilherme P. de Freitas, numpy mailing list
# May 04 2010 thread "Improvement of performance"
# http://mail.scipy.org/pipermail/numpy-discussion/2010-May/050250.html
x = np.asarray(x)
n = x.shape[-1]
epsilon = _get_epsilon(x, 1, epsilon, n)
eps = 1j * epsilon
partials = f(x + eps, *args, **kwargs).imag / epsilon
return np.array(partials) | Calculate gradient for scalar parameter with complex step derivatives.
This assumes that the function ``f`` is vectorized for a scalar parameter.
The function value ``f(x)`` has then the same shape as the input ``x``.
The derivative returned by this function also has the same shape as ``x``.
Parameters
----------
x : ndarray
Parameters at which the derivative is evaluated.
f : function
`f(*((x,)+args), **kwargs)` returning either one value or 1d array.
epsilon : float, optional
Stepsize, if None, optimal stepsize is used. Optimal step-size is
EPS*x. See note.
args : tuple
Tuple of additional arguments for function `f`.
kwargs : dict
Dictionary of additional keyword arguments for function `f`.
Returns
-------
partials : ndarray
Array of derivatives, gradient evaluated for parameters ``x``.
Notes
-----
The complex-step derivative has truncation error O(epsilon**2), so
truncation error can be eliminated by choosing epsilon to be very small.
The complex-step derivative avoids the problem of round-off error with
small epsilon because there is no subtraction. | _approx_fprime_cs_scalar | python | statsmodels/statsmodels | statsmodels/tools/numdiff.py | https://github.com/statsmodels/statsmodels/blob/master/statsmodels/tools/numdiff.py | BSD-3-Clause |
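A sketch of the vectorized-scalar variant (it is a private helper, so importing it is an assumption about internal API stability):

import numpy as np
from statsmodels.tools.numdiff import _approx_fprime_cs_scalar

x = np.array([0.1, 1.0, 10.0])  # one scalar parameter, several points
deriv = _approx_fprime_cs_scalar(x, np.sin)
assert np.allclose(deriv, np.cos(x), rtol=1e-12)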