column      type            range
code        stringlengths   26 to 870k
docstring   stringlengths   1 to 65.6k
func_name   stringlengths   1 to 194
language    stringclasses   1 value
repo        stringlengths   8 to 68
path        stringlengths   5 to 194
url         stringlengths   46 to 254
license     stringclasses   4 values
def loglike(self, params):
    """
    Evaluate the log-likelihood

    Parameters
    ----------
    params : array_like
        The projection matrix used to reduce the covariances,
        flattened to 1d.

    Returns the log-likelihood.
    """

    p = self.covm.shape[0]
    proj = params.reshape((p, self.dim))

    c = np.dot(proj.T, np.dot(self.covm, proj))
    _, ldet = np.linalg.slogdet(c)
    f = self.nobs * ldet / 2

    for j, c in enumerate(self.covs):
        c = np.dot(proj.T, np.dot(c, proj))
        _, ldet = np.linalg.slogdet(c)
        f -= self.ns[j] * ldet / 2

    return f
Evaluate the log-likelihood Parameters ---------- params : array_like The projection matrix used to reduce the covariances, flattened to 1d. Returns the log-likelihood.
loglike
python
statsmodels/statsmodels
statsmodels/regression/dimred.py
https://github.com/statsmodels/statsmodels/blob/master/statsmodels/regression/dimred.py
BSD-3-Clause
def score(self, params):
    """
    Evaluate the score function.

    Parameters
    ----------
    params : array_like
        The projection matrix used to reduce the covariances,
        flattened to 1d.

    Returns the score function evaluated at 'params'.
    """

    p = self.covm.shape[0]
    proj = params.reshape((p, self.dim))

    c0 = np.dot(proj.T, np.dot(self.covm, proj))
    cP = np.dot(self.covm, proj)
    g = self.nobs * np.linalg.solve(c0, cP.T).T

    for j, c in enumerate(self.covs):
        c0 = np.dot(proj.T, np.dot(c, proj))
        cP = np.dot(c, proj)
        g -= self.ns[j] * np.linalg.solve(c0, cP.T).T

    return g.ravel()
Evaluate the score function. Parameters ---------- params : array_like The projection matrix used to reduce the covariances, flattened to 1d. Returns the score function evaluated at 'params'.
score
python
statsmodels/statsmodels
statsmodels/regression/dimred.py
https://github.com/statsmodels/statsmodels/blob/master/statsmodels/regression/dimred.py
BSD-3-Clause
def fit(self, start_params=None, maxiter=200, gtol=1e-4):
    """
    Fit the covariance reduction model.

    Parameters
    ----------
    start_params : array_like
        Starting value for the projection matrix. May be
        rectangular, or flattened.
    maxiter : int
        The maximum number of gradient steps to take.
    gtol : float
        Convergence criterion for the gradient norm.

    Returns
    -------
    A results instance that can be used to access the fitted
    parameters.
    """

    p = self.covm.shape[0]
    d = self.dim

    # Starting value for params
    if start_params is None:
        params = np.zeros((p, d))
        params[0:d, 0:d] = np.eye(d)
    else:
        params = start_params

    # _grass_opt is designed for minimization, we are doing
    # maximization here so everything needs to be flipped.
    params, llf, cnvrg = _grass_opt(params, lambda x: -self.loglike(x),
                                    lambda x: -self.score(x), maxiter,
                                    gtol)

    llf *= -1
    if not cnvrg:
        g = self.score(params.ravel())
        gn = np.sqrt(np.sum(g * g))
        msg = "CovReduce optimization did not converge, |g|=%f" % gn
        warnings.warn(msg, ConvergenceWarning)

    results = DimReductionResults(self, params, eigs=None)
    results.llf = llf
    return DimReductionResultsWrapper(results)
Fit the covariance reduction model. Parameters ---------- start_params : array_like Starting value for the projection matrix. May be rectangular, or flattened. maxiter : int The maximum number of gradient steps to take. gtol : float Convergence criterion for the gradient norm. Returns ------- A results instance that can be used to access the fitted parameters.
fit
python
statsmodels/statsmodels
statsmodels/regression/dimred.py
https://github.com/statsmodels/statsmodels/blob/master/statsmodels/regression/dimred.py
BSD-3-Clause
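A hedged usage sketch tying the three CovarianceReduction methods above together. The constructor signature (endog, exog, dim) and the group-label role of endog are assumptions inferred from context, not verified documentation:

# Hypothetical usage of CovarianceReduction from dimred.py; the
# constructor signature (endog, exog, dim) is an assumption.
import numpy as np
from statsmodels.regression.dimred import CovarianceReduction

rng = np.random.default_rng(0)
y = rng.integers(0, 3, size=300)          # group labels
x = rng.normal(size=(300, 5))             # covariates

model = CovarianceReduction(y, x, dim=2)  # project to 2 dimensions
result = model.fit(maxiter=200, gtol=1e-4)
proj = np.asarray(result.params)          # 5 x 2 projection matrix
print(model.loglike(proj.ravel()))        # loglike takes the flattened matrix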
def get_cov(self, time, sc, sm):
    """
    Returns the covariance matrix for given time values.

    Parameters
    ----------
    time : array_like
        The time points for the observations.  If len(time) = p,
        a pxp covariance matrix is returned.
    sc : array_like
        The scaling parameters for the observations.
    sm : array_like
        The smoothness parameters for the observations.  See class
        docstring for details.
    """
    raise NotImplementedError
Returns the covariance matrix for given time values. Parameters ---------- time : array_like The time points for the observations. If len(time) = p, a pxp covariance matrix is returned. sc : array_like The scaling parameters for the observations. sm : array_like The smoothness parameters for the observations. See class docstring for details.
get_cov
python
statsmodels/statsmodels
statsmodels/regression/process_regression.py
https://github.com/statsmodels/statsmodels/blob/master/statsmodels/regression/process_regression.py
BSD-3-Clause
def jac(self, time, sc, sm):
    """
    The Jacobian of the covariance with respect to the parameters.

    See get_cov for parameters.

    Returns
    -------
    jsc : list-like
        jsc[i] is the derivative of the covariance matrix
        with respect to the i^th scaling parameter.
    jsm : list-like
        jsm[i] is the derivative of the covariance matrix
        with respect to the i^th smoothness parameter.
    """
    raise NotImplementedError
The Jacobian of the covariance with respect to the parameters. See get_cov for parameters. Returns ------- jsc : list-like jsc[i] is the derivative of the covariance matrix with respect to the i^th scaling parameter. jsm : list-like jsm[i] is the derivative of the covariance matrix with respect to the i^th smoothness parameter.
jac
python
statsmodels/statsmodels
statsmodels/regression/process_regression.py
https://github.com/statsmodels/statsmodels/blob/master/statsmodels/regression/process_regression.py
BSD-3-Clause
def unpack(self, z):
    """
    Split the packed parameter vector into blocks.
    """
    # Mean parameters
    pm = self.exog.shape[1]
    mnpar = z[0:pm]

    # Standard deviation parameters
    pv = self.exog_scale.shape[1]
    scpar = z[pm:pm + pv]

    # Smoothness parameters
    ps = self.exog_smooth.shape[1]
    smpar = z[pm + pv:pm + pv + ps]

    # Observation white noise standard deviation.
    # Empty if has_noise = False.
    nopar = z[pm + pv + ps:]

    return mnpar, scpar, smpar, nopar
Split the packed parameter vector into blocks.
unpack
python
statsmodels/statsmodels
statsmodels/regression/process_regression.py
https://github.com/statsmodels/statsmodels/blob/master/statsmodels/regression/process_regression.py
BSD-3-Clause
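The slicing layout that unpack inverts, shown with plain numpy; the block sizes (pm, pv, ps) are made up for illustration:

# Plain-numpy illustration of the packed layout that `unpack` splits,
# with hypothetical block sizes (3 mean, 2 scale, 2 smooth, 1 noise).
import numpy as np

pm, pv, ps = 3, 2, 2
z = np.arange(pm + pv + ps + 1, dtype=float)

mnpar = z[0:pm]                    # mean parameters
scpar = z[pm:pm + pv]              # scale parameters
smpar = z[pm + pv:pm + pv + ps]    # smoothness parameters
nopar = z[pm + pv + ps:]           # white-noise parameter(s)
assert np.concatenate([mnpar, scpar, smpar, nopar]).tolist() == z.tolist()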
def loglike(self, params):
    """
    Calculate the log-likelihood function for the model.

    Parameters
    ----------
    params : array_like
        The packed parameters for the model.

    Returns
    -------
    The log-likelihood value at the given parameter point.

    Notes
    -----
    The mean, scaling, and smoothing parameters are packed into
    a vector.  Use `unpack` to access the component vectors.
    """

    mnpar, scpar, smpar, nopar = self.unpack(params)

    # Residuals
    resid = self.endog - np.dot(self.exog, mnpar)

    # Scaling parameters
    sc = np.exp(np.dot(self.exog_scale, scpar))

    # Smoothness parameters
    sm = np.exp(np.dot(self.exog_smooth, smpar))

    # White noise standard deviation
    if self._has_noise:
        no = np.exp(np.dot(self.exog_noise, nopar))

    # Get the log-likelihood
    ll = 0.
    for _, ix in self._groups_ix.items():

        # Get the covariance matrix for this person.
        cm = self.cov.get_cov(self.time[ix], sc[ix], sm[ix])

        # The variance of the additive noise, if present.
        if self._has_noise:
            cm.flat[::cm.shape[0] + 1] += no[ix]**2

        re = resid[ix]
        ll -= 0.5 * np.linalg.slogdet(cm)[1]
        ll -= 0.5 * np.dot(re, np.linalg.solve(cm, re))

    if self.verbose:
        print("L=", ll)

    return ll
Calculate the log-likelihood function for the model. Parameters ---------- params : array_like The packed parameters for the model. Returns ------- The log-likelihood value at the given parameter point. Notes ----- The mean, scaling, and smoothing parameters are packed into a vector. Use `unpack` to access the component vectors.
loglike
python
statsmodels/statsmodels
statsmodels/regression/process_regression.py
https://github.com/statsmodels/statsmodels/blob/master/statsmodels/regression/process_regression.py
BSD-3-Clause
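For reference, one group's contribution to the log-likelihood accumulated in the loop above, written standalone with synthetic data:

# One group's Gaussian log-likelihood term:
# -0.5*logdet(C) - 0.5*r'C^{-1}r for residuals r and covariance C.
import numpy as np

rng = np.random.default_rng(1)
p = 4
a = rng.normal(size=(p, p))
cm = a @ a.T + p * np.eye(p)       # a positive definite covariance
re = rng.normal(size=p)            # residual vector for one group

ll = -0.5 * np.linalg.slogdet(cm)[1]
ll -= 0.5 * np.dot(re, np.linalg.solve(cm, re))
print(ll)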
def score(self, params):
    """
    Calculate the score function for the model.

    Parameters
    ----------
    params : array_like
        The packed parameters for the model.

    Returns
    -------
    The score vector at the given parameter point.

    Notes
    -----
    The mean, scaling, and smoothing parameters are packed into
    a vector.  Use `unpack` to access the component vectors.
    """

    mnpar, scpar, smpar, nopar = self.unpack(params)
    pm, pv, ps = len(mnpar), len(scpar), len(smpar)

    # Residuals
    resid = self.endog - np.dot(self.exog, mnpar)

    # Scaling
    sc = np.exp(np.dot(self.exog_scale, scpar))

    # Smoothness
    sm = np.exp(np.dot(self.exog_smooth, smpar))

    # White noise standard deviation
    if self._has_noise:
        no = np.exp(np.dot(self.exog_noise, nopar))

    # Get the log-likelihood
    score = np.zeros(len(mnpar) + len(scpar) + len(smpar) + len(nopar))
    for _, ix in self._groups_ix.items():

        sc_i = sc[ix]
        sm_i = sm[ix]
        resid_i = resid[ix]
        time_i = self.time[ix]
        exog_i = self.exog[ix, :]
        exog_scale_i = self.exog_scale[ix, :]
        exog_smooth_i = self.exog_smooth[ix, :]

        # Get the covariance matrix for this person.
        cm = self.cov.get_cov(time_i, sc_i, sm_i)

        if self._has_noise:
            no_i = no[ix]
            exog_noise_i = self.exog_noise[ix, :]
            cm.flat[::cm.shape[0] + 1] += no[ix]**2

        cmi = np.linalg.inv(cm)

        jacv, jacs = self.cov.jac(time_i, sc_i, sm_i)

        # The derivatives for the mean parameters.
        dcr = np.linalg.solve(cm, resid_i)
        score[0:pm] += np.dot(exog_i.T, dcr)

        # The derivatives for the scaling parameters.
        rx = np.outer(resid_i, resid_i)
        qm = np.linalg.solve(cm, rx)
        qm = 0.5 * np.linalg.solve(cm, qm.T)
        scx = sc_i[:, None] * exog_scale_i
        for i, _ in enumerate(ix):
            jq = np.sum(jacv[i] * qm)
            score[pm:pm + pv] += jq * scx[i, :]
            score[pm:pm + pv] -= 0.5 * np.sum(jacv[i] * cmi) * scx[i, :]

        # The derivatives for the smoothness parameters.
        smx = sm_i[:, None] * exog_smooth_i
        for i, _ in enumerate(ix):
            jq = np.sum(jacs[i] * qm)
            score[pm + pv:pm + pv + ps] += jq * smx[i, :]
            score[pm + pv:pm + pv + ps] -= (
                0.5 * np.sum(jacs[i] * cmi) * smx[i, :])

        # The derivatives with respect to the standard deviation
        # parameters
        if self._has_noise:
            sno = no_i[:, None]**2 * exog_noise_i
            score[pm + pv + ps:] -= np.dot(cmi.flat[::cm.shape[0] + 1],
                                           sno)
            bm = np.dot(cmi, np.dot(rx, cmi))
            score[pm + pv + ps:] += np.dot(bm.flat[::bm.shape[0] + 1],
                                           sno)

    if self.verbose:
        print("|G|=", np.sqrt(np.sum(score * score)))

    return score
Calculate the score function for the model. Parameters ---------- params : array_like The packed parameters for the model. Returns ------- The score vector at the given parameter point. Notes ----- The mean, scaling, and smoothing parameters are packed into a vector. Use `unpack` to access the component vectors.
score
python
statsmodels/statsmodels
statsmodels/regression/process_regression.py
https://github.com/statsmodels/statsmodels/blob/master/statsmodels/regression/process_regression.py
BSD-3-Clause
def fit(self, start_params=None, method=None, maxiter=None,
        **kwargs):
    """
    Fit a grouped Gaussian process regression using MLE.

    Parameters
    ----------
    start_params : array_like
        Optional starting values.
    method : str or array of str
        Method or sequence of methods for scipy optimize.
    maxiter : int
        The maximum number of iterations in the optimization.

    Returns
    -------
    An instance of ProcessMLEResults.
    """

    if "verbose" in kwargs:
        self.verbose = kwargs["verbose"]

    minim_opts = {}
    if "minim_opts" in kwargs:
        minim_opts = kwargs["minim_opts"]

    if start_params is None:
        start_params = self._get_start()

    if isinstance(method, str):
        method = [method]
    elif method is None:
        method = ["powell", "bfgs"]

    for j, meth in enumerate(method):

        if meth not in ("powell",):
            def jac(x):
                return -self.score(x)
        else:
            jac = None

        if maxiter is not None:
            if np.isscalar(maxiter):
                minim_opts["maxiter"] = maxiter
            else:
                minim_opts["maxiter"] = maxiter[j % len(maxiter)]

        f = minimize(
            lambda x: -self.loglike(x),
            method=meth,
            x0=start_params,
            jac=jac,
            options=minim_opts)

        if not f.success:
            msg = "Fitting did not converge"
            if jac is not None:
                msg += ", |gradient|=%.6f" % np.sqrt(np.sum(f.jac**2))
            if j < len(method) - 1:
                msg += ", trying %s next..." % method[j+1]
            warnings.warn(msg)

        if np.isfinite(f.x).all():
            start_params = f.x

    hess = self.hessian(f.x)
    try:
        cov_params = -np.linalg.inv(hess)
    except Exception:
        cov_params = None

    class rslt:
        pass

    r = rslt()
    r.params = f.x
    r.normalized_cov_params = cov_params
    r.optim_retvals = f
    r.scale = 1

    rslt = ProcessMLEResults(self, r)

    return rslt
Fit a grouped Gaussian process regression using MLE. Parameters ---------- start_params : array_like Optional starting values. method : str or array of str Method or sequence of methods for scipy optimize. maxiter : int The maximum number of iterations in the optimization. Returns ------- An instance of ProcessMLEResults.
fit
python
statsmodels/statsmodels
statsmodels/regression/process_regression.py
https://github.com/statsmodels/statsmodels/blob/master/statsmodels/regression/process_regression.py
BSD-3-Clause
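A hedged usage sketch for ProcessMLE; the constructor argument order (endog, exog, exog_scale, exog_smooth, exog_noise, time, groups) is an assumption inferred from the attributes used above:

# Hypothetical ProcessMLE usage; the constructor argument order is
# an assumption, not verified documentation.
import numpy as np
from statsmodels.regression.process_regression import ProcessMLE

rng = np.random.default_rng(2)
n = 200
time = np.kron(np.ones(20), np.arange(10.))   # 10 times per subject
groups = np.kron(np.arange(20), np.ones(10))  # 20 subjects
exog = np.column_stack([np.ones(n), rng.normal(size=n)])
exog_scale = exog_smooth = exog_noise = np.ones((n, 1))
endog = exog @ [1., 0.5] + rng.normal(size=n)

model = ProcessMLE(endog, exog, exog_scale, exog_smooth,
                   exog_noise, time, groups)
result = model.fit(method=["powell", "bfgs"], maxiter=100)
print(result.params)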
def covariance(self, time, scale_params, smooth_params, scale_data,
               smooth_data):
    """
    Returns a Gaussian process covariance matrix.

    Parameters
    ----------
    time : array_like
        The time points at which the fitted covariance matrix is
        calculated.
    scale_params : array_like
        The regression parameters for the scaling part
        of the covariance structure.
    smooth_params : array_like
        The regression parameters for the smoothing part
        of the covariance structure.
    scale_data : DataFrame
        The data used to determine the scale parameter,
        must have len(time) rows.
    smooth_data : DataFrame
        The data used to determine the smoothness parameter,
        must have len(time) rows.

    Returns
    -------
    A covariance matrix.

    Notes
    -----
    If the model was fit using formulas, `scale` and `smooth` should
    be DataFrames, containing all variables that were present in the
    respective scaling and smoothing formulas used to fit the model.
    Otherwise, `scale` and `smooth` should contain data arrays whose
    columns align with the fitted scaling and smoothing parameters.

    The covariance is only for the Gaussian process and does not
    include the white noise variance.
    """

    if not hasattr(self.data, "scale_model_spec"):
        sca = np.dot(scale_data, scale_params)
        smo = np.dot(smooth_data, smooth_params)
    else:
        mgr = FormulaManager()
        sc = mgr.get_matrices(self.data.scale_model_spec, scale_data,
                              pandas=False)
        sm = mgr.get_matrices(
            self.data.smooth_model_spec, smooth_data, pandas=False
        )
        sca = np.exp(np.dot(sc, scale_params))
        smo = np.exp(np.dot(sm, smooth_params))

    return self.cov.get_cov(time, sca, smo)
Returns a Gaussian process covariance matrix. Parameters ---------- time : array_like The time points at which the fitted covariance matrix is calculated. scale_params : array_like The regression parameters for the scaling part of the covariance structure. smooth_params : array_like The regression parameters for the smoothing part of the covariance structure. scale_data : DataFrame The data used to determine the scale parameter, must have len(time) rows. smooth_data : DataFrame The data used to determine the smoothness parameter, must have len(time) rows. Returns ------- A covariance matrix. Notes ----- If the model was fit using formulas, `scale` and `smooth` should be DataFrames, containing all variables that were present in the respective scaling and smoothing formulas used to fit the model. Otherwise, `scale` and `smooth` should contain data arrays whose columns align with the fitted scaling and smoothing parameters. The covariance is only for the Gaussian process and does not include the white noise variance.
covariance
python
statsmodels/statsmodels
statsmodels/regression/process_regression.py
https://github.com/statsmodels/statsmodels/blob/master/statsmodels/regression/process_regression.py
BSD-3-Clause
def predict(self, params, exog=None, *args, **kwargs):
    """
    Obtain predictions of the mean structure.

    Parameters
    ----------
    params : array_like
        The model parameters, may be truncated to include only mean
        parameters.
    exog : array_like
        The design matrix for the mean structure.  If not provided,
        the model's design matrix is used.
    """

    if exog is None:
        exog = self.exog
    elif hasattr(self.data, "model_spec"):
        # Run the provided data through the formula if present
        mgr = FormulaManager()
        exog = mgr.get_matrices(self.data.model_spec, exog)

    if len(params) > exog.shape[1]:
        params = params[0:exog.shape[1]]

    return np.dot(exog, params)
Obtain predictions of the mean structure. Parameters ---------- params : array_like The model parameters, may be truncated to include only mean parameters. exog : array_like The design matrix for the mean structure. If not provided, the model's design matrix is used.
predict
python
statsmodels/statsmodels
statsmodels/regression/process_regression.py
https://github.com/statsmodels/statsmodels/blob/master/statsmodels/regression/process_regression.py
BSD-3-Clause
def covariance(self, time, scale, smooth):
    """
    Returns a fitted covariance matrix.

    Parameters
    ----------
    time : array_like
        The time points at which the fitted covariance matrix is
        calculated.
    scale : array_like
        The data used to determine the scale parameter,
        must have len(time) rows.
    smooth : array_like
        The data used to determine the smoothness parameter,
        must have len(time) rows.

    Returns
    -------
    A covariance matrix.

    Notes
    -----
    If the model was fit using formulas, `scale` and `smooth` should
    be DataFrames, containing all variables that were present in the
    respective scaling and smoothing formulas used to fit the model.
    Otherwise, `scale` and `smooth` should be data arrays whose
    columns align with the fitted scaling and smoothing parameters.
    """

    return self.model.covariance(time, self.scale_params,
                                 self.smooth_params, scale, smooth)
Returns a fitted covariance matrix. Parameters ---------- time : array_like The time points at which the fitted covariance matrix is calculated. scale : array_like The data used to determine the scale parameter, must have len(time) rows. smooth : array_like The data used to determine the smoothness parameter, must have len(time) rows. Returns ------- A covariance matrix. Notes ----- If the model was fit using formulas, `scale` and `smooth` should be DataFrames, containing all variables that were present in the respective scaling and smoothing formulas used to fit the model. Otherwise, `scale` and `smooth` should be data arrays whose columns align with the fitted scaling and smoothing parameters.
covariance
python
statsmodels/statsmodels
statsmodels/regression/process_regression.py
https://github.com/statsmodels/statsmodels/blob/master/statsmodels/regression/process_regression.py
BSD-3-Clause
def _reset(self, idx):
    """Compute xpx and xpy using a single dot product"""
    _, wy, wx, _, not_missing = self._get_data(idx)
    nobs = not_missing.sum()
    xpx = wx.T @ wx
    xpy = wx.T @ wy
    return xpx, xpy, nobs
Compute xpx and xpy using a single dot product
_reset
python
statsmodels/statsmodels
statsmodels/regression/rolling.py
https://github.com/statsmodels/statsmodels/blob/master/statsmodels/regression/rolling.py
BSD-3-Clause
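A quick plain-numpy check of the identity the rolling fit below relies on: X'X and X'y can be maintained by adding (or removing) single-row outer products rather than recomputing from scratch:

# Rank-1 update check: appending a row to X updates X'X by an outer
# product, so the moving-window products can be updated incrementally.
import numpy as np

rng = np.random.default_rng(3)
X = rng.normal(size=(6, 2))
y = rng.normal(size=(6, 1))

xpx, xpy = X[:-1].T @ X[:-1], X[:-1].T @ y[:-1]
add_x = X[-1:]                       # keep 2-d so .T @ is an outer product
xpx += add_x.T @ add_x
xpy += add_x.T @ y[-1:]
assert np.allclose(xpx, X.T @ X) and np.allclose(xpy, X.T @ y)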
def fit(
    self,
    method="inv",
    cov_type="nonrobust",
    cov_kwds=None,
    reset=None,
    use_t=False,
    params_only=False,
):
    """
    Estimate model parameters.

    Parameters
    ----------
    method : {'inv', 'lstsq', 'pinv'}
        Method to use when computing the model parameters.

        * 'inv' - use moving windows inner-products and matrix
          inversion. This method is the fastest, but may be less
          accurate than the other methods.
        * 'lstsq' - Use numpy.linalg.lstsq
        * 'pinv' - Use numpy.linalg.pinv. This method matches the
          default estimator in non-moving regression estimators.
    cov_type : {'nonrobust', 'HCCM', 'HC0'}
        Covariance estimator:

        * nonrobust - The classic OLS covariance estimator
        * HCCM, HC0 - White heteroskedasticity robust covariance
    cov_kwds : dict
        Unused
    reset : int, optional
        Interval to recompute the moving window inner products used to
        estimate the model parameters. Smaller values improve accuracy,
        although in practice this setting is not required to be set.
    use_t : bool, optional
        Flag indicating to use the Student's t distribution when
        computing p-values.
    params_only : bool, optional
        Flag indicating that only parameters should be computed. Avoids
        calculating all other statistics or performing inference.

    Returns
    -------
    RollingRegressionResults
        Estimation results where all pre-sample values are nan-filled.
    """
    method = string_like(
        method, "method", options=("inv", "lstsq", "pinv")
    )
    reset = int_like(reset, "reset", optional=True)
    reset = self._y.shape[0] if reset is None else reset
    if reset < 1:
        raise ValueError("reset must be a positive integer")

    nobs, k = self._x.shape
    store = RollingStore(
        params=np.full((nobs, k), np.nan),
        ssr=np.full(nobs, np.nan),
        llf=np.full(nobs, np.nan),
        nobs=np.zeros(nobs, dtype=int),
        s2=np.full(nobs, np.nan),
        xpxi=np.full((nobs, k, k), np.nan),
        xeex=np.full((nobs, k, k), np.nan),
        centered_tss=np.full(nobs, np.nan),
        uncentered_tss=np.full(nobs, np.nan),
    )
    w = self._window
    first = self._min_nobs if self._expanding else w
    xpx, xpy, nobs = self._reset(first)
    if not (self._has_nan[first - 1] and self._skip_missing):
        self._fit_single(first, xpx, xpy, nobs, store, params_only,
                         method)
    wx, wy = self._wx, self._wy
    for i in range(first + 1, self._x.shape[0] + 1):
        if self._has_nan[i - 1] and self._skip_missing:
            continue
        if i % reset == 0:
            xpx, xpy, nobs = self._reset(i)
        else:
            if not self._is_nan[i - w - 1] and i > w:
                remove_x = wx[i - w - 1 : i - w]
                xpx -= remove_x.T @ remove_x
                xpy -= remove_x.T @ wy[i - w - 1 : i - w]
                nobs -= 1
            if not self._is_nan[i - 1]:
                add_x = wx[i - 1 : i]
                xpx += add_x.T @ add_x
                xpy += add_x.T @ wy[i - 1 : i]
                nobs += 1
        self._fit_single(i, xpx, xpy, nobs, store, params_only, method)

    return RollingRegressionResults(
        self, store, self.k_constant, use_t, cov_type
    )
Estimate model parameters. Parameters ---------- method : {'inv', 'lstsq', 'pinv'} Method to use when computing the model parameters. * 'inv' - use moving windows inner-products and matrix inversion. This method is the fastest, but may be less accurate than the other methods. * 'lstsq' - Use numpy.linalg.lstsq * 'pinv' - Use numpy.linalg.pinv. This method matches the default estimator in non-moving regression estimators. cov_type : {'nonrobust', 'HCCM', 'HC0'} Covariance estimator: * nonrobust - The classic OLS covariance estimator * HCCM, HC0 - White heteroskedasticity robust covariance cov_kwds : dict Unused reset : int, optional Interval to recompute the moving window inner products used to estimate the model parameters. Smaller values improve accuracy, although in practice this setting is not required to be set. use_t : bool, optional Flag indicating to use the Student's t distribution when computing p-values. params_only : bool, optional Flag indicating that only parameters should be computed. Avoids calculating all other statistics or performing inference. Returns ------- RollingRegressionResults Estimation results where all pre-sample values are nan-filled.
fit
python
statsmodels/statsmodels
statsmodels/regression/rolling.py
https://github.com/statsmodels/statsmodels/blob/master/statsmodels/regression/rolling.py
BSD-3-Clause
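A minimal usage sketch for the rolling fit above; RollingOLS lives in statsmodels.regression.rolling, and params is assumed to be a cached property on the results. The data here are synthetic:

# Synthetic RollingOLS example exercising the fit method above.
import numpy as np
from statsmodels.regression.rolling import RollingOLS

rng = np.random.default_rng(4)
x = np.column_stack([np.ones(500), rng.normal(size=500)])
y = x @ [1., 2.] + rng.normal(size=500)

res = RollingOLS(y, x, window=60).fit(method="inv", params_only=False)
print(res.params.shape)   # (500, 2); the first 59 rows are NaN-filled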
def _wrap(self, val):
    """Wrap output as pandas Series or DataFrames as needed"""
    if not self._use_pandas:
        return val
    col_names = self.model.data.param_names
    row_names = self.model.data.row_labels
    if val.ndim == 1:
        return Series(val, index=row_names)
    if val.ndim == 2:
        return DataFrame(val, columns=col_names, index=row_names)
    else:  # ndim == 3
        mi = MultiIndex.from_product((row_names, col_names))
        val = np.reshape(val, (-1, val.shape[-1]))
        return DataFrame(val, columns=col_names, index=mi)
Wrap output as pandas Series or DataFrames as needed
_wrap
python
statsmodels/statsmodels
statsmodels/regression/rolling.py
https://github.com/statsmodels/statsmodels/blob/master/statsmodels/regression/rolling.py
BSD-3-Clause
def params(self):
    """Estimated model parameters"""
    return self._wrap(self._params)
Estimated model parameters
params
python
statsmodels/statsmodels
statsmodels/regression/rolling.py
https://github.com/statsmodels/statsmodels/blob/master/statsmodels/regression/rolling.py
BSD-3-Clause
def k_constant(self):
    """Flag indicating whether the model contains a constant"""
    return self._k_constant
Flag indicating whether the model contains a constant
k_constant
python
statsmodels/statsmodels
statsmodels/regression/rolling.py
https://github.com/statsmodels/statsmodels/blob/master/statsmodels/regression/rolling.py
BSD-3-Clause
def cov_params(self):
    """
    Estimated parameter covariance

    Returns
    -------
    array_like
        The estimated model covariances. If the original input is a
        numpy array, the returned covariance is a 3-d array with shape
        (nobs, nvar, nvar). If the original inputs are pandas types,
        then the returned covariance is a DataFrame with a MultiIndex
        with key (observation, variable), so that the covariance for
        observation with index i is cov.loc[i].
    """
    return self._wrap(self._cov_params)
Estimated parameter covariance Returns ------- array_like The estimated model covariances. If the original input is a numpy array, the returned covariance is a 3-d array with shape (nobs, nvar, nvar). If the original inputs are pandas types, then the returned covariance is a DataFrame with a MultiIndex with key (observation, variable), so that the covariance for observation with index i is cov.loc[i].
cov_params
python
statsmodels/statsmodels
statsmodels/regression/rolling.py
https://github.com/statsmodels/statsmodels/blob/master/statsmodels/regression/rolling.py
BSD-3-Clause
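Continuing the synthetic RollingOLS example above with pandas inputs, to illustrate the MultiIndex layout described in cov_params; cov_params is assumed to be a cached property here:

# With pandas inputs, the covariance is a DataFrame keyed by
# (observation, variable), so .loc[i] recovers one observation's block.
import pandas as pd

df_x = pd.DataFrame(x, columns=["const", "x1"])
res_pd = RollingOLS(pd.Series(y), df_x, window=60).fit()
cov = res_pd.cov_params
print(cov.loc[100])       # 2 x 2 covariance for observation index 100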
def cov_type(self):
    """Name of covariance estimator"""
    return self._cov_type
Name of covariance estimator
cov_type
python
statsmodels/statsmodels
statsmodels/regression/rolling.py
https://github.com/statsmodels/statsmodels/blob/master/statsmodels/regression/rolling.py
BSD-3-Clause
def iterative_fit(self, maxiter=3):
    """
    Perform an iterative two-step procedure to estimate a WLS model.

    The model is assumed to have heteroskedastic errors.
    The variance is estimated by OLS regression of the link
    transformed squared residuals on Z, i.e.::

       link(sigma_i) = x_i*gamma.

    Parameters
    ----------
    maxiter : int, optional
        the number of iterations

    Notes
    -----
    maxiter=1: returns the estimate based on the given weights
    maxiter=2: performs a second estimation with the updated weights,
               this is 2-step estimation
    maxiter>2: iteratively estimate and update the weights

    TODO: possible extension: stop iteration if the change in
    parameter estimates is smaller than x_tol

    Repeated calls to fit_iterative will do one redundant pinv_wexog
    calculation.  Calling fit_iterative(maxiter) once does not do any
    redundant recalculations (whitening or calculating pinv_wexog).
    """
    import collections
    self.history = collections.defaultdict(list)  # not really necessary
    res_resid = None  # if maxiter < 2, no updating
    for i in range(maxiter):
        # pinv_wexog is cached, delete it to force recomputation
        if hasattr(self, 'pinv_wexog'):
            del self.pinv_wexog
        results = self.fit()
        self.history['self_params'].append(results.params)
        if not i == maxiter - 1:  # skip for last iteration
            self.results_old = results  # for debugging
            # estimate heteroscedasticity
            res_resid = OLS(self.link(results.resid**2),
                            self.exog_var).fit()
            self.history['ols_params'].append(res_resid.params)
            # update weights
            self.weights = 1. / self.linkinv(res_resid.fittedvalues)
            self.weights /= self.weights.max()  # not required
            self.weights[self.weights < 1e-14] = 1e-14  # clip
            self.initialize()

    # note: results is the wrapper, results._results is the results
    # instance
    results._results.results_residual_regression = res_resid
    return results
Perform an iterative two-step procedure to estimate a WLS model. The model is assumed to have heteroskedastic errors. The variance is estimated by OLS regression of the link transformed squared residuals on Z, i.e.:: link(sigma_i) = x_i*gamma. Parameters ---------- maxiter : int, optional the number of iterations Notes ----- maxiter=1: returns the estimated based on given weights maxiter=2: performs a second estimation with the updated weights, this is 2-step estimation maxiter>2: iteratively estimate and update the weights TODO: possible extension stop iteration if change in parameter estimates is smaller than x_tol Repeated calls to fit_iterative, will do one redundant pinv_wexog calculation. Calling fit_iterative(maxiter) ones does not do any redundant recalculations (whitening or calculating pinv_wexog).
iterative_fit
python
statsmodels/statsmodels
statsmodels/regression/feasible_gls.py
https://github.com/statsmodels/statsmodels/blob/master/statsmodels/regression/feasible_gls.py
BSD-3-Clause
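A minimal plain-numpy sketch of the two-step loop above, using a log link for the variance regression; all names are local to the example, and this is an illustration rather than the class's implementation:

# Two-step FGLS sketch: fit WLS, regress log squared residuals on X,
# set weights to the inverse fitted variance, repeat.
import numpy as np

rng = np.random.default_rng(5)
n = 400
X = np.column_stack([np.ones(n), rng.normal(size=n)])
sd = np.exp(0.5 * X[:, 1])                 # heteroskedastic errors
y = X @ [1., 2.] + sd * rng.normal(size=n)

w = np.ones(n)
for _ in range(3):
    Xw = X * w[:, None] ** 0.5             # WLS as OLS on scaled data
    yw = y * w ** 0.5
    beta = np.linalg.lstsq(Xw, yw, rcond=None)[0]
    resid = y - X @ beta
    gamma = np.linalg.lstsq(X, np.log(resid**2), rcond=None)[0]
    w = 1.0 / np.exp(X @ gamma)            # inverse fitted variance
print(beta)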
def _dot(x, y):
    """
    Returns the dot product of the arrays, works for sparse and dense.
    """

    if isinstance(x, np.ndarray) and isinstance(y, np.ndarray):
        return np.dot(x, y)
    elif sparse.issparse(x):
        return x.dot(y)
    elif sparse.issparse(y):
        return y.T.dot(x.T).T
Returns the dot product of the arrays, works for sparse and dense.
_dot
python
statsmodels/statsmodels
statsmodels/regression/mixed_linear_model.py
https://github.com/statsmodels/statsmodels/blob/master/statsmodels/regression/mixed_linear_model.py
BSD-3-Clause
def _multi_dot_three(A, B, C):
    """
    Find best ordering for three arrays and do the multiplication.

    Doing it manually instead of using dynamic programming is
    approximately 15 times faster.
    """
    # cost1 = cost((AB)C)
    cost1 = (A.shape[0] * A.shape[1] * B.shape[1] +  # (AB)
             A.shape[0] * B.shape[1] * C.shape[1])   # (--)C
    # cost2 = cost(A(BC))
    cost2 = (B.shape[0] * B.shape[1] * C.shape[1] +  # (BC)
             A.shape[0] * A.shape[1] * C.shape[1])   # A(--)

    if cost1 < cost2:
        return _dot(_dot(A, B), C)
    else:
        return _dot(A, _dot(B, C))
Find best ordering for three arrays and do the multiplication. Doing it manually instead of using dynamic programming is approximately 15 times faster.
_multi_dot_three
python
statsmodels/statsmodels
statsmodels/regression/mixed_linear_model.py
https://github.com/statsmodels/statsmodels/blob/master/statsmodels/regression/mixed_linear_model.py
BSD-3-Clause
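A quick check of the cost comparison above, with shapes chosen so the two orderings differ sharply:

# Flop-count comparison for the two parenthesizations of A @ B @ C.
import numpy as np

A = np.ones((100, 2)); B = np.ones((2, 100)); C = np.ones((100, 2))
cost1 = A.shape[0]*A.shape[1]*B.shape[1] + A.shape[0]*B.shape[1]*C.shape[1]
cost2 = B.shape[0]*B.shape[1]*C.shape[1] + A.shape[0]*A.shape[1]*C.shape[1]
print(cost1, cost2)   # 40000 vs 800 -> A(BC) is far cheaper here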
def _dotsum(x, y):
    """
    Returns sum(x * y), where '*' is the pointwise product, computed
    efficiently for dense and sparse matrices.
    """

    if sparse.issparse(x):
        return x.multiply(y).sum()
    else:
        # This way usually avoids allocating a temporary.
        return np.dot(x.ravel(), y.ravel())
Returns sum(x * y), where '*' is the pointwise product, computed efficiently for dense and sparse matrices.
_dotsum
python
statsmodels/statsmodels
statsmodels/regression/mixed_linear_model.py
https://github.com/statsmodels/statsmodels/blob/master/statsmodels/regression/mixed_linear_model.py
BSD-3-Clause
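The dense-array identity behind _dotsum, verified directly:

# sum of the pointwise product equals the dot product of the
# flattened arrays.
import numpy as np

rng = np.random.default_rng(6)
x = rng.normal(size=(3, 4))
y = rng.normal(size=(3, 4))
assert np.isclose((x * y).sum(), np.dot(x.ravel(), y.ravel()))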
def _get_exog_re_names(self, exog_re):
    """
    Passes through if given a list of names. Otherwise, gets pandas
    names or creates some generic variable names as needed.
    """
    if self.k_re == 0:
        return []
    if isinstance(exog_re, pd.DataFrame):
        return exog_re.columns.tolist()
    elif isinstance(exog_re, pd.Series) and exog_re.name is not None:
        return [exog_re.name]
    elif isinstance(exog_re, list):
        return exog_re

    # Default names
    defnames = [f"x_re{k + 1:1d}" for k in range(exog_re.shape[1])]
    return defnames
Passes through if given a list of names. Otherwise, gets pandas names or creates some generic variable names as needed.
_get_exog_re_names
python
statsmodels/statsmodels
statsmodels/regression/mixed_linear_model.py
https://github.com/statsmodels/statsmodels/blob/master/statsmodels/regression/mixed_linear_model.py
BSD-3-Clause
def from_packed(params, k_fe, k_re, use_sqrt, has_fe):
    """
    Create a MixedLMParams object from packed parameter vector.

    Parameters
    ----------
    params : array_like
        The model parameters packed into a single vector.
    k_fe : int
        The number of covariates with fixed effects
    k_re : int
        The number of covariates with random effects (excluding
        variance components).
    use_sqrt : bool
        If True, the random effects covariance matrix is provided
        as its Cholesky factor, otherwise the lower triangle of
        the covariance matrix is stored.
    has_fe : bool
        If True, `params` contains fixed effects parameters.
        Otherwise, the fixed effects parameters are set to zero.

    Returns
    -------
    A MixedLMParams object.
    """
    k_re2 = int(k_re * (k_re + 1) / 2)

    # The number of covariance parameters.
    if has_fe:
        k_vc = len(params) - k_fe - k_re2
    else:
        k_vc = len(params) - k_re2

    pa = MixedLMParams(k_fe, k_re, k_vc)

    cov_re = np.zeros((k_re, k_re))
    ix = pa._ix
    if has_fe:
        pa.fe_params = params[0:k_fe]
        cov_re[ix] = params[k_fe:k_fe+k_re2]
    else:
        pa.fe_params = np.zeros(k_fe)
        cov_re[ix] = params[0:k_re2]

    if use_sqrt:
        cov_re = np.dot(cov_re, cov_re.T)
    else:
        cov_re = (cov_re + cov_re.T) - np.diag(np.diag(cov_re))

    pa.cov_re = cov_re
    if k_vc > 0:
        if use_sqrt:
            pa.vcomp = params[-k_vc:]**2
        else:
            pa.vcomp = params[-k_vc:]
    else:
        pa.vcomp = np.array([])

    return pa
Create a MixedLMParams object from packed parameter vector. Parameters ---------- params : array_like The model parameters packed into a single vector. k_fe : int The number of covariates with fixed effects k_re : int The number of covariates with random effects (excluding variance components). use_sqrt : bool If True, the random effects covariance matrix is provided as its Cholesky factor, otherwise the lower triangle of the covariance matrix is stored. has_fe : bool If True, `params` contains fixed effects parameters. Otherwise, the fixed effects parameters are set to zero. Returns ------- A MixedLMParams object.
from_packed
python
statsmodels/statsmodels
statsmodels/regression/mixed_linear_model.py
https://github.com/statsmodels/statsmodels/blob/master/statsmodels/regression/mixed_linear_model.py
BSD-3-Clause
def from_components(fe_params=None, cov_re=None, cov_re_sqrt=None,
                    vcomp=None):
    """
    Create a MixedLMParams object from each parameter component.

    Parameters
    ----------
    fe_params : array_like
        The fixed effects parameter (a 1-dimensional array).  If
        None, there are no fixed effects.
    cov_re : array_like
        The random effects covariance matrix (a square, symmetric
        2-dimensional array).
    cov_re_sqrt : array_like
        The Cholesky (lower triangular) square root of the random
        effects covariance matrix.
    vcomp : array_like
        The variance component parameters.  If None, there are no
        variance components.

    Returns
    -------
    A MixedLMParams object.
    """
    if vcomp is None:
        vcomp = np.empty(0)
    if fe_params is None:
        fe_params = np.empty(0)
    if cov_re is None and cov_re_sqrt is None:
        cov_re = np.empty((0, 0))

    k_fe = len(fe_params)
    k_vc = len(vcomp)
    k_re = cov_re.shape[0] if cov_re is not None else cov_re_sqrt.shape[0]

    pa = MixedLMParams(k_fe, k_re, k_vc)
    pa.fe_params = fe_params
    if cov_re_sqrt is not None:
        pa.cov_re = np.dot(cov_re_sqrt, cov_re_sqrt.T)
    elif cov_re is not None:
        pa.cov_re = cov_re

    pa.vcomp = vcomp
    return pa
Create a MixedLMParams object from each parameter component. Parameters ---------- fe_params : array_like The fixed effects parameter (a 1-dimensional array). If None, there are no fixed effects. cov_re : array_like The random effects covariance matrix (a square, symmetric 2-dimensional array). cov_re_sqrt : array_like The Cholesky (lower triangular) square root of the random effects covariance matrix. vcomp : array_like The variance component parameters. If None, there are no variance components. Returns ------- A MixedLMParams object.
from_components
python
statsmodels/statsmodels
statsmodels/regression/mixed_linear_model.py
https://github.com/statsmodels/statsmodels/blob/master/statsmodels/regression/mixed_linear_model.py
BSD-3-Clause
def copy(self):
    """
    Returns a copy of the object.
    """
    obj = MixedLMParams(self.k_fe, self.k_re, self.k_vc)
    obj.fe_params = self.fe_params.copy()
    obj.cov_re = self.cov_re.copy()
    obj.vcomp = self.vcomp.copy()
    return obj
Returns a copy of the object.
copy
python
statsmodels/statsmodels
statsmodels/regression/mixed_linear_model.py
https://github.com/statsmodels/statsmodels/blob/master/statsmodels/regression/mixed_linear_model.py
BSD-3-Clause
def get_packed(self, use_sqrt, has_fe=False):
    """
    Return the model parameters packed into a single vector.

    Parameters
    ----------
    use_sqrt : bool
        If True, the Cholesky square root of `cov_re` is
        included in the packed result.  Otherwise the
        lower triangle of `cov_re` is included.
    has_fe : bool
        If True, the fixed effects parameters are included
        in the packed result, otherwise they are omitted.
    """

    if self.k_re > 0:
        if use_sqrt:
            try:
                L = np.linalg.cholesky(self.cov_re)
            except np.linalg.LinAlgError:
                L = np.diag(np.sqrt(np.diag(self.cov_re)))
            cpa = L[self._ix]
        else:
            cpa = self.cov_re[self._ix]
    else:
        cpa = np.zeros(0)

    if use_sqrt:
        vcomp = np.sqrt(self.vcomp)
    else:
        vcomp = self.vcomp

    if has_fe:
        pa = np.concatenate((self.fe_params, cpa, vcomp))
    else:
        pa = np.concatenate((cpa, vcomp))

    return pa
Return the model parameters packed into a single vector. Parameters ---------- use_sqrt : bool If True, the Cholesky square root of `cov_re` is included in the packed result. Otherwise the lower triangle of `cov_re` is included. has_fe : bool If True, the fixed effects parameters are included in the packed result, otherwise they are omitted.
get_packed
python
statsmodels/statsmodels
statsmodels/regression/mixed_linear_model.py
https://github.com/statsmodels/statsmodels/blob/master/statsmodels/regression/mixed_linear_model.py
BSD-3-Clause
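A round-trip sketch for the packing helpers above (from_components, get_packed, from_packed), assuming MixedLMParams is importable from statsmodels.regression.mixed_linear_model:

# Pack and unpack a small random effects covariance and check the
# round trip reproduces it.
import numpy as np
from statsmodels.regression.mixed_linear_model import MixedLMParams

cov_re = np.array([[2.0, 0.5], [0.5, 1.0]])
pa = MixedLMParams.from_components(fe_params=np.array([1.0, -0.3]),
                                   cov_re=cov_re)
packed = pa.get_packed(use_sqrt=False, has_fe=True)
pa2 = MixedLMParams.from_packed(packed, k_fe=2, k_re=2,
                                use_sqrt=False, has_fe=True)
assert np.allclose(pa2.cov_re, cov_re)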
def _make_param_names(self, exog_re):
    """
    Returns the full parameter names list, just the exogenous random
    effects variables, and the exogenous random effects variables with
    the interaction terms.
    """
    exog_names = list(self.exog_names)
    exog_re_names = _get_exog_re_names(self, exog_re)
    param_names = []

    jj = self.k_fe
    for i in range(len(exog_re_names)):
        for j in range(i + 1):
            if i == j:
                param_names.append(exog_re_names[i] + " Var")
            else:
                param_names.append(exog_re_names[j] + " x " +
                                   exog_re_names[i] + " Cov")
            jj += 1

    vc_names = [x + " Var" for x in self.exog_vc.names]

    return exog_names + param_names + vc_names, exog_re_names, param_names
Returns the full parameter names list, just the exogenous random effects variables, and the exogenous random effects variables with the interaction terms.
_make_param_names
python
statsmodels/statsmodels
statsmodels/regression/mixed_linear_model.py
https://github.com/statsmodels/statsmodels/blob/master/statsmodels/regression/mixed_linear_model.py
BSD-3-Clause
def from_formula(cls, formula, data, re_formula=None, vc_formula=None,
                 subset=None, use_sparse=False, missing='none', *args,
                 **kwargs):
    """
    Create a Model from a formula and dataframe.

    Parameters
    ----------
    formula : str or generic Formula object
        The formula specifying the model
    data : array_like
        The data for the model. See Notes.
    re_formula : str
        A one-sided formula defining the variance structure of the
        model.  The default gives a random intercept for each group.
    vc_formula : dict-like
        Formulas describing variance components.  `vc_formula[vc]` is
        the formula for the component with variance parameter named
        `vc`.  The formula is processed into a matrix, and the columns
        of this matrix are linearly combined with independent random
        coefficients having mean zero and a common variance.
    subset : array_like
        An array-like object of booleans, integers, or index values
        that indicate the subset of df to use in the model.  Assumes
        df is a `pandas.DataFrame`
    missing : str
        Either 'none' or 'drop'
    args : extra arguments
        These are passed to the model
    kwargs : extra keyword arguments
        These are passed to the model with one exception. The
        ``eval_env`` keyword is passed to patsy. It can be either a
        :class:`patsy:patsy.EvalEnvironment` object or an integer
        indicating the depth of the namespace to use. For example, the
        default ``eval_env=0`` uses the calling namespace. If you wish
        to use a "clean" environment set ``eval_env=-1``.

    Returns
    -------
    model : Model instance

    Notes
    -----
    `data` must define __getitem__ with the keys in the formula terms
    args and kwargs are passed on to the model instantiation. E.g.,
    a numpy structured or rec array, a dictionary, or a pandas
    DataFrame.

    If the variance component is intended to produce random
    intercepts for disjoint subsets of a group, specified by string
    labels or a categorical data value, always use '0 +' in the
    formula so that no overall intercept is included.

    If the variance components specify random slopes and you do not
    also want a random group-level intercept in the model, then use
    '0 +' in the formula to exclude the intercept.

    The variance components formulas are processed separately for
    each group.  If a variable is categorical the results will not
    be affected by whether the group labels are distinct or
    re-used over the top-level groups.

    Examples
    --------
    Suppose we have data from an educational study with students
    nested in classrooms nested in schools.  The students take a
    test, and we want to relate the test scores to the students'
    ages, while accounting for the effects of classrooms and
    schools.  The school will be the top-level group, and the
    classroom is a nested group that is specified as a variance
    component.  Note that the schools may have different numbers of
    classrooms, and the classroom labels may (but need not be)
    different across the schools.

    >>> vc = {'classroom': '0 + C(classroom)'}
    >>> MixedLM.from_formula('test_score ~ age', vc_formula=vc, \
                             re_formula='1', groups='school', data=data)

    Now suppose we also have a previous test score called
    'pretest'.  If we want the relationship between pretest
    scores and the current test to vary by classroom, we can
    specify a random slope for the pretest score

    >>> vc = {'classroom': '0 + C(classroom)', 'pretest': '0 + pretest'}
    >>> MixedLM.from_formula('test_score ~ age + pretest', vc_formula=vc, \
                             re_formula='1', groups='school', data=data)

    The following model is almost equivalent to the previous one,
    but here the classroom random intercept and pretest slope may
    be correlated.

    >>> vc = {'classroom': '0 + C(classroom)'}
    >>> MixedLM.from_formula('test_score ~ age + pretest', vc_formula=vc, \
                             re_formula='1 + pretest', groups='school', \
                             data=data)
    """

    if "groups" not in kwargs.keys():
        raise AttributeError("'groups' is a required keyword argument "
                             "in MixedLM.from_formula")
    groups = kwargs["groups"]

    # If `groups` is a variable name, retrieve the data for the
    # groups variable.
    group_name = "Group"
    if isinstance(groups, str):
        group_name = groups
        groups = np.asarray(data[groups])
    else:
        groups = np.asarray(groups)
    del kwargs["groups"]

    # Bypass all upstream missing data handling to properly handle
    # variance components
    if missing == 'drop':
        data, groups = _handle_missing(data, groups, formula,
                                       re_formula, vc_formula)
        missing = 'none'

    if re_formula is not None:
        if re_formula.strip() == "1":
            # Work around Patsy bug, fixed by 0.3.
            exog_re = np.ones((data.shape[0], 1))
            exog_re_names = [group_name]
        else:
            eval_env = kwargs.get('eval_env', None)
            if eval_env is None:
                eval_env = 1
            elif eval_env == -1:
                mgr = FormulaManager()
                eval_env = mgr.get_empty_eval_env()
            mgr = FormulaManager()
            exog_re = mgr.get_matrices(re_formula, data,
                                       eval_env=eval_env)
            exog_re_names = mgr.get_column_names(exog_re)
            exog_re_names = [x.replace("Intercept", group_name)
                             for x in exog_re_names]
            exog_re = np.asarray(exog_re)
        if exog_re.ndim == 1:
            exog_re = exog_re[:, None]
    else:
        exog_re = None
        if vc_formula is None:
            exog_re_names = [group_name]
        else:
            exog_re_names = []

    if vc_formula is not None:
        eval_env = kwargs.get('eval_env', None)
        if eval_env is None:
            eval_env = 1
        elif eval_env == -1:
            mgr = FormulaManager()
            eval_env = mgr.get_empty_eval_env()

        vc_mats = []
        vc_colnames = []
        vc_names = []
        gb = data.groupby(groups)
        kylist = sorted(gb.groups.keys())
        vcf = sorted(vc_formula.keys())
        mgr = FormulaManager()
        for vc_name in vcf:
            model_spec = mgr.get_spec(vc_formula[vc_name])
            vc_names.append(vc_name)
            evc_mats, evc_colnames = [], []
            for group_ix, group in enumerate(kylist):
                ii = gb.groups[group]
                mat = mgr.get_matrices(
                    model_spec, data.loc[ii, :], eval_env=eval_env,
                    pandas=True
                )
                evc_colnames.append(mat.columns.tolist())
                if use_sparse:
                    evc_mats.append(sparse.csr_matrix(mat))
                else:
                    evc_mats.append(np.asarray(mat))
            vc_mats.append(evc_mats)
            vc_colnames.append(evc_colnames)
        exog_vc = VCSpec(vc_names, vc_colnames, vc_mats)
    else:
        exog_vc = VCSpec([], [], [])

    kwargs["subset"] = None
    kwargs["exog_re"] = exog_re
    kwargs["exog_vc"] = exog_vc
    kwargs["groups"] = groups
    advance_eval_env(kwargs)
    mod = super().from_formula(formula, data, *args, **kwargs)

    # expand re names to account for pairs of RE
    (param_names, exog_re_names,
     exog_re_names_full) = mod._make_param_names(exog_re_names)

    mod.data.param_names = param_names
    mod.data.exog_re_names = exog_re_names
    mod.data.exog_re_names_full = exog_re_names_full

    if vc_formula is not None:
        mod.data.vcomp_names = mod.exog_vc.names

    return mod
Create a Model from a formula and dataframe. Parameters ---------- formula : str or generic Formula object The formula specifying the model data : array_like The data for the model. See Notes. re_formula : str A one-sided formula defining the variance structure of the model. The default gives a random intercept for each group. vc_formula : dict-like Formulas describing variance components. `vc_formula[vc]` is the formula for the component with variance parameter named `vc`. The formula is processed into a matrix, and the columns of this matrix are linearly combined with independent random coefficients having mean zero and a common variance. subset : array_like An array-like object of booleans, integers, or index values that indicate the subset of df to use in the model. Assumes df is a `pandas.DataFrame` missing : str Either 'none' or 'drop' args : extra arguments These are passed to the model kwargs : extra keyword arguments These are passed to the model with one exception. The ``eval_env`` keyword is passed to patsy. It can be either a :class:`patsy:patsy.EvalEnvironment` object or an integer indicating the depth of the namespace to use. For example, the default ``eval_env=0`` uses the calling namespace. If you wish to use a "clean" environment set ``eval_env=-1``. Returns ------- model : Model instance Notes ----- `data` must define __getitem__ with the keys in the formula terms args and kwargs are passed on to the model instantiation. E.g., a numpy structured or rec array, a dictionary, or a pandas DataFrame. If the variance component is intended to produce random intercepts for disjoint subsets of a group, specified by string labels or a categorical data value, always use '0 +' in the formula so that no overall intercept is included. If the variance components specify random slopes and you do not also want a random group-level intercept in the model, then use '0 +' in the formula to exclude the intercept. The variance components formulas are processed separately for each group. If a variable is categorical the results will not be affected by whether the group labels are distinct or re-used over the top-level groups. Examples -------- Suppose we have data from an educational study with students nested in classrooms nested in schools. The students take a test, and we want to relate the test scores to the students' ages, while accounting for the effects of classrooms and schools. The school will be the top-level group, and the classroom is a nested group that is specified as a variance component. Note that the schools may have different numbers of classrooms, and the classroom labels may (but need not be) different across the schools. >>> vc = {'classroom': '0 + C(classroom)'} >>> MixedLM.from_formula('test_score ~ age', vc_formula=vc, \ re_formula='1', groups='school', data=data) Now suppose we also have a previous test score called 'pretest'. If we want the relationship between pretest scores and the current test to vary by classroom, we can specify a random slope for the pretest score >>> vc = {'classroom': '0 + C(classroom)', 'pretest': '0 + pretest'} >>> MixedLM.from_formula('test_score ~ age + pretest', vc_formula=vc, \ re_formula='1', groups='school', data=data) The following model is almost equivalent to the previous one, but here the classroom random intercept and pretest slope may be correlated. >>> vc = {'classroom': '0 + C(classroom)'} >>> MixedLM.from_formula('test_score ~ age + pretest', vc_formula=vc, \ re_formula='1 + pretest', groups='school', \ data=data)
from_formula
python
statsmodels/statsmodels
statsmodels/regression/mixed_linear_model.py
https://github.com/statsmodels/statsmodels/blob/master/statsmodels/regression/mixed_linear_model.py
BSD-3-Clause
def predict(self, params, exog=None):
    """
    Return predicted values from a design matrix.

    Parameters
    ----------
    params : array_like
        Parameters of a mixed linear model.  Can be either a
        MixedLMParams instance, or a vector containing the packed
        model parameters in which the fixed effects parameters are
        at the beginning of the vector, or a vector containing only
        the fixed effects parameters.
    exog : array_like, optional
        Design / exogenous data for the fixed effects. Model exog is
        used if None.

    Returns
    -------
    An array of fitted values.  Note that these predicted values
    only reflect the fixed effects mean structure of the model.
    """
    if exog is None:
        exog = self.exog

    if isinstance(params, MixedLMParams):
        params = params.fe_params
    else:
        params = params[0:self.k_fe]

    return np.dot(exog, params)
Return predicted values from a design matrix. Parameters ---------- params : array_like Parameters of a mixed linear model. Can be either a MixedLMParams instance, or a vector containing the packed model parameters in which the fixed effects parameters are at the beginning of the vector, or a vector containing only the fixed effects parameters. exog : array_like, optional Design / exogenous data for the fixed effects. Model exog is used if None. Returns ------- An array of fitted values. Note that these predicted values only reflect the fixed effects mean structure of the model.
predict
python
statsmodels/statsmodels
statsmodels/regression/mixed_linear_model.py
https://github.com/statsmodels/statsmodels/blob/master/statsmodels/regression/mixed_linear_model.py
BSD-3-Clause
def group_list(self, array):
    """
    Returns `array` split into subarrays corresponding to the
    grouping structure.
    """
    if array is None:
        return None

    if array.ndim == 1:
        return [np.array(array[self.row_indices[k]])
                for k in self.group_labels]
    else:
        return [np.array(array[self.row_indices[k], :])
                for k in self.group_labels]
Returns `array` split into subarrays corresponding to the grouping structure.
group_list
python
statsmodels/statsmodels
statsmodels/regression/mixed_linear_model.py
https://github.com/statsmodels/statsmodels/blob/master/statsmodels/regression/mixed_linear_model.py
BSD-3-Clause
def fit_regularized(self, start_params=None, method='l1', alpha=0,
                    ceps=1e-4, ptol=1e-6, maxit=200, **fit_kwargs):
    """
    Fit a model in which the fixed effects parameters are
    penalized.  The dependence parameters are held fixed at their
    estimated values in the unpenalized model.

    Parameters
    ----------
    method : str or Penalty object
        Method for regularization.  If a string, must be 'l1'.
    alpha : array_like
        Scalar or vector of penalty weights.  If a scalar, the same
        weight is applied to all coefficients; if a vector, it
        contains a weight for each coefficient.  If method is a
        Penalty object, the weights are scaled by alpha.  For L1
        regularization, the weights are used directly.
    ceps : positive real scalar
        Fixed effects parameters smaller than this value
        in magnitude are treated as being zero.
    ptol : positive real scalar
        Convergence occurs when the sup norm difference
        between successive values of `fe_params` is less
        than `ptol`.
    maxit : int
        The maximum number of iterations.
    **fit_kwargs
        Additional keyword arguments passed to fit.

    Returns
    -------
    A MixedLMResults instance containing the results.

    Notes
    -----
    The covariance structure is not updated as the fixed effects
    parameters are varied.

    The algorithm used here for L1 regularization is a "shooting"
    or cyclic coordinate descent algorithm.

    If method is 'l1', then `fe_pen` and `cov_pen` are used to
    obtain the covariance structure, but are ignored during the
    L1-penalized fitting.

    References
    ----------
    Friedman, J. H., Hastie, T. and Tibshirani, R. Regularized
    Paths for Generalized Linear Models via Coordinate
    Descent. Journal of Statistical Software, 33(1) (2008)
    http://www.jstatsoft.org/v33/i01/paper

    http://statweb.stanford.edu/~tibs/stat315a/Supplements/fuse.pdf
    """

    if isinstance(method, str) and (method.lower() != 'l1'):
        raise ValueError("Invalid regularization method")

    # If method is a smooth penalty just optimize directly.
    if isinstance(method, Penalty):
        # Scale the penalty weights by alpha
        method.alpha = alpha
        fit_kwargs.update({"fe_pen": method})
        return self.fit(**fit_kwargs)

    if np.isscalar(alpha):
        alpha = alpha * np.ones(self.k_fe, dtype=np.float64)

    # Fit the unpenalized model to get the dependence structure.
    mdf = self.fit(**fit_kwargs)
    fe_params = mdf.fe_params
    cov_re = mdf.cov_re
    vcomp = mdf.vcomp
    scale = mdf.scale
    try:
        cov_re_inv = np.linalg.inv(cov_re)
    except np.linalg.LinAlgError:
        cov_re_inv = None

    for itr in range(maxit):

        fe_params_s = fe_params.copy()
        for j in range(self.k_fe):

            if abs(fe_params[j]) < ceps:
                continue

            # The residuals
            fe_params[j] = 0.
            expval = np.dot(self.exog, fe_params)
            resid_all = self.endog - expval

            # The loss function has the form
            # a*x^2 + b*x + pwt*|x|
            a, b = 0., 0.
            for group_ix, group in enumerate(self.group_labels):

                vc_var = self._expand_vcomp(vcomp, group_ix)

                exog = self.exog_li[group_ix]
                ex_r, ex2_r = self._aex_r[group_ix], self._aex_r2[group_ix]

                resid = resid_all[self.row_indices[group]]
                solver = _smw_solver(scale, ex_r, ex2_r, cov_re_inv,
                                     1 / vc_var)

                x = exog[:, j]
                u = solver(x)
                a += np.dot(u, x)
                b -= 2 * np.dot(u, resid)

            pwt1 = alpha[j]
            if b > pwt1:
                fe_params[j] = -(b - pwt1) / (2 * a)
            elif b < -pwt1:
                fe_params[j] = -(b + pwt1) / (2 * a)

        if np.abs(fe_params_s - fe_params).max() < ptol:
            break

    # Replace the fixed effects estimates with their penalized
    # values, leave the dependence parameters in their unpenalized
    # state.
    params_prof = mdf.params.copy()
    params_prof[0:self.k_fe] = fe_params

    scale = self.get_scale(fe_params, mdf.cov_re_unscaled, mdf.vcomp)

    # Get the Hessian including only the nonzero fixed effects,
    # then blow back up to the full size after inverting.
    hess, sing = self.hessian(params_prof)
    if sing:
        warnings.warn(_warn_cov_sing)

    pcov = np.nan * np.ones_like(hess)
    ii = np.abs(params_prof) > ceps
    ii[self.k_fe:] = True
    ii = np.flatnonzero(ii)
    hess1 = hess[ii, :][:, ii]
    pcov[np.ix_(ii, ii)] = np.linalg.inv(-hess1)

    params_object = MixedLMParams.from_components(fe_params,
                                                  cov_re=cov_re)

    results = MixedLMResults(self, params_prof, pcov / scale)
    results.params_object = params_object
    results.fe_params = fe_params
    results.cov_re = cov_re
    results.vcomp = vcomp
    results.scale = scale
    results.cov_re_unscaled = mdf.cov_re_unscaled
    results.method = mdf.method
    results.converged = True
    results.cov_pen = self.cov_pen
    results.k_fe = self.k_fe
    results.k_re = self.k_re
    results.k_re2 = self.k_re2
    results.k_vc = self.k_vc

    return MixedLMResultsWrapper(results)
Fit a model in which the fixed effects parameters are penalized. The dependence parameters are held fixed at their estimated values in the unpenalized model. Parameters ---------- method : str or Penalty object Method for regularization. If a string, must be 'l1'. alpha : array_like Scalar or vector of penalty weights. If a scalar, the same weight is applied to all coefficients; if a vector, it contains a weight for each coefficient. If method is a Penalty object, the weights are scaled by alpha. For L1 regularization, the weights are used directly. ceps : positive real scalar Fixed effects parameters smaller than this value in magnitude are treated as being zero. ptol : positive real scalar Convergence occurs when the sup norm difference between successive values of `fe_params` is less than `ptol`. maxit : int The maximum number of iterations. **fit_kwargs Additional keyword arguments passed to fit. Returns ------- A MixedLMResults instance containing the results. Notes ----- The covariance structure is not updated as the fixed effects parameters are varied. The algorithm used here for L1 regularization is a "shooting" or cyclic coordinate descent algorithm. If method is 'l1', then `fe_pen` and `cov_pen` are used to obtain the covariance structure, but are ignored during the L1-penalized fitting. References ---------- Friedman, J. H., Hastie, T. and Tibshirani, R. Regularized Paths for Generalized Linear Models via Coordinate Descent. Journal of Statistical Software, 33(1) (2008) http://www.jstatsoft.org/v33/i01/paper http://statweb.stanford.edu/~tibs/stat315a/Supplements/fuse.pdf
fit_regularized
python
statsmodels/statsmodels
statsmodels/regression/mixed_linear_model.py
https://github.com/statsmodels/statsmodels/blob/master/statsmodels/regression/mixed_linear_model.py
BSD-3-Clause
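The coordinate update in fit_regularized is a soft-threshold rule; here is a standalone sketch of the closed-form minimizer of a*x^2 + b*x + pwt*|x| that the loop applies (the helper name is made up for illustration):

# Closed-form minimizer of a*x**2 + b*x + pwt*abs(x) for a > 0,
# matching the update used in the cyclic coordinate descent above.
def soft_threshold_min(a, b, pwt):
    if b > pwt:
        return -(b - pwt) / (2 * a)
    elif b < -pwt:
        return -(b + pwt) / (2 * a)
    return 0.0

print(soft_threshold_min(1.0, 3.0, 1.0))   # -1.0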
def get_fe_params(self, cov_re, vcomp, tol=1e-10):
    """
    Use GLS to update the fixed effects parameter estimates.

    Parameters
    ----------
    cov_re : array_like (2d)
        The covariance matrix of the random effects.
    vcomp : array_like (1d)
        The variance components.
    tol : float
        A tolerance parameter to determine when covariances
        are singular.

    Returns
    -------
    params : ndarray
        The GLS estimates of the fixed effects parameters.
    singular : bool
        True if the covariance is singular
    """

    if self.k_fe == 0:
        return np.array([]), False

    sing = False

    if self.k_re == 0:
        cov_re_inv = np.empty((0, 0))
    else:
        w, v = np.linalg.eigh(cov_re)
        if w.min() < tol:
            # Singular, use pseudo-inverse
            sing = True
            ii = np.flatnonzero(w >= tol)
            if len(ii) == 0:
                cov_re_inv = np.zeros_like(cov_re)
            else:
                vi = v[:, ii]
                wi = w[ii]
                cov_re_inv = np.dot(vi / wi, vi.T)
        else:
            cov_re_inv = np.linalg.inv(cov_re)

    # Cache these quantities that do not change.
    if not hasattr(self, "_endex_li"):
        self._endex_li = []
        for group_ix, _ in enumerate(self.group_labels):
            mat = np.concatenate(
                (self.exog_li[group_ix],
                 self.endog_li[group_ix][:, None]), axis=1)
            self._endex_li.append(mat)

    xtxy = 0.
    for group_ix, group in enumerate(self.group_labels):
        vc_var = self._expand_vcomp(vcomp, group_ix)
        if vc_var.size > 0:
            if vc_var.min() < tol:
                # Pseudo-inverse
                sing = True
                ii = np.flatnonzero(vc_var >= tol)
                vc_vari = np.zeros_like(vc_var)
                vc_vari[ii] = 1 / vc_var[ii]
            else:
                vc_vari = 1 / vc_var
        else:
            vc_vari = np.empty(0)
        exog = self.exog_li[group_ix]
        ex_r, ex2_r = self._aex_r[group_ix], self._aex_r2[group_ix]
        solver = _smw_solver(1., ex_r, ex2_r, cov_re_inv, vc_vari)
        u = solver(self._endex_li[group_ix])
        xtxy += np.dot(exog.T, u)

    if sing:
        fe_params = np.dot(np.linalg.pinv(xtxy[:, 0:-1]), xtxy[:, -1])
    else:
        fe_params = np.linalg.solve(xtxy[:, 0:-1], xtxy[:, -1])

    return fe_params, sing
Use GLS to update the fixed effects parameter estimates. Parameters ---------- cov_re : array_like (2d) The covariance matrix of the random effects. vcomp : array_like (1d) The variance components. tol : float A tolerance parameter to determine when covariances are singular. Returns ------- params : ndarray The GLS estimates of the fixed effects parameters. singular : bool True if the covariance is singular
get_fe_params
python
statsmodels/statsmodels
statsmodels/regression/mixed_linear_model.py
https://github.com/statsmodels/statsmodels/blob/master/statsmodels/regression/mixed_linear_model.py
BSD-3-Clause
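Conceptually, `get_fe_params` performs a generalized least squares (GLS) step, solving (X'V^{-1}X) beta = X'V^{-1}y for the marginal covariance V, accumulated over groups. A minimal standalone sketch of that estimator (single group, known V; all values here are illustrative):

import numpy as np

rng = np.random.default_rng(1)
n, p = 20, 3
X = rng.normal(size=(n, p))
V = np.eye(n) + 0.5 * np.ones((n, n))      # exchangeable marginal covariance
y = X @ np.array([1.0, -2.0, 0.5]) + rng.multivariate_normal(np.zeros(n), V)

# GLS: beta = (X' V^{-1} X)^{-1} X' V^{-1} y, via solves rather than inverses.
Vi_X = np.linalg.solve(V, X)
Vi_y = np.linalg.solve(V, y)
beta_gls = np.linalg.solve(X.T @ Vi_X, X.T @ Vi_y)
print(beta_gls)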
def _reparam(self): """ Returns parameters of the map converting parameters from the form used in optimization to the form returned to the user. Returns ------- lin : list-like Linear terms of the map quad : list-like Quadratic terms of the map Notes ----- If P are the standard form parameters and R are the transformed parameters (i.e. with the Cholesky square root covariance and square root transformed variance components), then P[i] = lin[i] * R + R' * quad[i] * R """ k_fe, k_re, k_re2, k_vc = self.k_fe, self.k_re, self.k_re2, self.k_vc k_tot = k_fe + k_re2 + k_vc ix = np.tril_indices(self.k_re) lin = [] for k in range(k_fe): e = np.zeros(k_tot) e[k] = 1 lin.append(e) for k in range(k_re2): lin.append(np.zeros(k_tot)) for k in range(k_vc): lin.append(np.zeros(k_tot)) quad = [] # Quadratic terms for fixed effects. for k in range(k_tot): quad.append(np.zeros((k_tot, k_tot))) # Quadratic terms for random effects covariance. ii = np.tril_indices(k_re) ix = [(a, b) for a, b in zip(ii[0], ii[1])] for i1 in range(k_re2): for i2 in range(k_re2): ix1 = ix[i1] ix2 = ix[i2] if (ix1[1] == ix2[1]) and (ix1[0] <= ix2[0]): ii = (ix2[0], ix1[0]) k = ix.index(ii) quad[k_fe+k][k_fe+i2, k_fe+i1] += 1 for k in range(k_tot): quad[k] = 0.5*(quad[k] + quad[k].T) # Quadratic terms for variance components. km = k_fe + k_re2 for k in range(km, km+k_vc): quad[k][k, k] = 1 return lin, quad
Returns parameters of the map converting parameters from the form used in optimization to the form returned to the user. Returns ------- lin : list-like Linear terms of the map quad : list-like Quadratic terms of the map Notes ----- If P are the standard form parameters and R are the transformed parameters (i.e. with the Cholesky square root covariance and square root transformed variance components), then P[i] = lin[i] * R + R' * quad[i] * R
_reparam
python
statsmodels/statsmodels
statsmodels/regression/mixed_linear_model.py
https://github.com/statsmodels/statsmodels/blob/master/statsmodels/regression/mixed_linear_model.py
BSD-3-Clause
def _expand_vcomp(self, vcomp, group_ix): """ Replicate variance parameters to match a group's design. Parameters ---------- vcomp : array_like The variance parameters for the variance components. group_ix : int The group index Returns an expanded version of vcomp, in which each variance parameter is copied as many times as there are independent realizations of the variance component in the given group. """ if len(vcomp) == 0: return np.empty(0) vc_var = [] for j in range(len(self.exog_vc.names)): d = self.exog_vc.mats[j][group_ix].shape[1] vc_var.append(vcomp[j] * np.ones(d)) if len(vc_var) > 0: return np.concatenate(vc_var) else: # Cannot reach here? return np.empty(0)
Replicate variance parameters to match a group's design. Parameters ---------- vcomp : array_like The variance parameters for the variance components. group_ix : int The group index Returns an expanded version of vcomp, in which each variance parameter is copied as many times as there are independent realizations of the variance component in the given group.
_expand_vcomp
python
statsmodels/statsmodels
statsmodels/regression/mixed_linear_model.py
https://github.com/statsmodels/statsmodels/blob/master/statsmodels/regression/mixed_linear_model.py
BSD-3-Clause
def _augment_exog(self, group_ix): """ Concatenate the columns for variance components to the columns for other random effects to obtain a single random effects exog matrix for a given group. """ ex_r = self.exog_re_li[group_ix] if self.k_re > 0 else None if self.k_vc == 0: return ex_r ex = [ex_r] if self.k_re > 0 else [] any_sparse = False for j, _ in enumerate(self.exog_vc.names): ex.append(self.exog_vc.mats[j][group_ix]) any_sparse |= sparse.issparse(ex[-1]) if any_sparse: for j, x in enumerate(ex): if not sparse.issparse(x): ex[j] = sparse.csr_matrix(x) ex = sparse.hstack(ex) ex = sparse.csr_matrix(ex) else: ex = np.concatenate(ex, axis=1) return ex
Concatenate the columns for variance components to the columns for other random effects to obtain a single random effects exog matrix for a given group.
_augment_exog
python
statsmodels/statsmodels
statsmodels/regression/mixed_linear_model.py
https://github.com/statsmodels/statsmodels/blob/master/statsmodels/regression/mixed_linear_model.py
BSD-3-Clause
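The promotion logic in `_augment_exog` (everything goes to CSR as soon as one block is sparse) can be exercised in isolation; a small sketch with made-up blocks:

import numpy as np
from scipy import sparse

dense = np.ones((4, 2))
sp = sparse.eye(4, 3, format="csr")
blocks = [dense, sp]

if any(sparse.issparse(b) for b in blocks):
    # Promote dense blocks to CSR, then stack horizontally.
    blocks = [b if sparse.issparse(b) else sparse.csr_matrix(b)
              for b in blocks]
    out = sparse.csr_matrix(sparse.hstack(blocks))
else:
    out = np.concatenate(blocks, axis=1)
print(out.shape)  # (4, 5)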
def loglike(self, params, profile_fe=True): """ Evaluate the (profile) log-likelihood of the linear mixed effects model. Parameters ---------- params : MixedLMParams, or array_like. The parameter value. If array-like, must be a packed parameter vector containing only the covariance parameters. profile_fe : bool If True, replace the provided value of `fe_params` with the GLS estimates. Returns ------- The log-likelihood value at `params`. Notes ----- The scale parameter `scale` is always profiled out of the log-likelihood. In addition, if `profile_fe` is true the fixed effects parameters are also profiled out. """ if type(params) is not MixedLMParams: params = MixedLMParams.from_packed(params, self.k_fe, self.k_re, self.use_sqrt, has_fe=False) cov_re = params.cov_re vcomp = params.vcomp # Move to the profile set if profile_fe: fe_params, sing = self.get_fe_params(cov_re, vcomp) if sing: self._cov_sing += 1 else: fe_params = params.fe_params if self.k_re > 0: try: cov_re_inv = np.linalg.inv(cov_re) except np.linalg.LinAlgError: cov_re_inv = np.linalg.pinv(cov_re) self._cov_sing += 1 _, cov_re_logdet = np.linalg.slogdet(cov_re) else: cov_re_inv = np.zeros((0, 0)) cov_re_logdet = 0 # The residuals expval = np.dot(self.exog, fe_params) resid_all = self.endog - expval likeval = 0. # Handle the covariance penalty if (self.cov_pen is not None) and (self.k_re > 0): likeval -= self.cov_pen.func(cov_re, cov_re_inv) # Handle the fixed effects penalty if (self.fe_pen is not None): likeval -= self.fe_pen.func(fe_params) xvx, qf = 0., 0. for group_ix, group in enumerate(self.group_labels): vc_var = self._expand_vcomp(vcomp, group_ix) cov_aug_logdet = cov_re_logdet + np.sum(np.log(vc_var)) exog = self.exog_li[group_ix] ex_r, ex2_r = self._aex_r[group_ix], self._aex_r2[group_ix] solver = _smw_solver(1., ex_r, ex2_r, cov_re_inv, 1 / vc_var) resid = resid_all[self.row_indices[group]] # Part 1 of the log likelihood (for both ML and REML) ld = _smw_logdet(1., ex_r, ex2_r, cov_re_inv, 1 / vc_var, cov_aug_logdet) likeval -= ld / 2. # Part 2 of the log likelihood (for both ML and REML) u = solver(resid) qf += np.dot(resid, u) # Adjustment for REML if self.reml: mat = solver(exog) xvx += np.dot(exog.T, mat) if self.reml: likeval -= (self.n_totobs - self.k_fe) * np.log(qf) / 2. _, ld = np.linalg.slogdet(xvx) likeval -= ld / 2. likeval -= (self.n_totobs - self.k_fe) * np.log(2 * np.pi) / 2. likeval += ((self.n_totobs - self.k_fe) * np.log(self.n_totobs - self.k_fe) / 2.) likeval -= (self.n_totobs - self.k_fe) / 2. else: likeval -= self.n_totobs * np.log(qf) / 2. likeval -= self.n_totobs * np.log(2 * np.pi) / 2. likeval += self.n_totobs * np.log(self.n_totobs) / 2. likeval -= self.n_totobs / 2. return likeval
Evaluate the (profile) log-likelihood of the linear mixed effects model. Parameters ---------- params : MixedLMParams, or array_like. The parameter value. If array-like, must be a packed parameter vector containing only the covariance parameters. profile_fe : bool If True, replace the provided value of `fe_params` with the GLS estimates. Returns ------- The log-likelihood value at `params`. Notes ----- The scale parameter `scale` is always profiled out of the log-likelihood. In addition, if `profile_fe` is true the fixed effects parameters are also profiled out.
loglike
python
statsmodels/statsmodels
statsmodels/regression/mixed_linear_model.py
https://github.com/statsmodels/statsmodels/blob/master/statsmodels/regression/mixed_linear_model.py
BSD-3-Clause
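The per-group solves in `loglike` go through `_smw_solver`, which exploits the Sherman-Morrison-Woodbury identity: for V = s*I + Z Q Z', one n x n solve is replaced by a k x k solve, where k is the number of random effects (typically k << n). A standalone sketch of the identity (values illustrative):

import numpy as np

rng = np.random.default_rng(2)
n, k, s = 200, 2, 1.0
Z = rng.normal(size=(n, k))
Q = np.array([[1.0, 0.3], [0.3, 0.5]])   # random effects covariance
x = rng.normal(size=n)

# Direct solve, O(n^3):
direct = np.linalg.solve(s * np.eye(n) + Z @ Q @ Z.T, x)

# Woodbury solve, O(n k^2):
# (s I + Z Q Z')^{-1} x = x/s - Z (Q^{-1} + Z'Z/s)^{-1} Z' x / s^2
inner = np.linalg.solve(np.linalg.inv(Q) + (Z.T @ Z) / s, Z.T @ x / s)
woodbury = x / s - (Z @ inner) / s
print(np.allclose(direct, woodbury))  # True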
def _gen_dV_dPar(self, ex_r, solver, group_ix, max_ix=None): """ A generator that yields the element-wise derivative of the marginal covariance matrix with respect to the random effects variance and covariance parameters. Parameters ---------- ex_r : array_like The random effects design matrix solver : function A function that given x returns V^{-1}x, where V is the group's marginal covariance matrix. group_ix : int The group index max_ix : {int, None} If not None, the generator ends when this index is reached. """ axr = solver(ex_r) # Regular random effects jj = 0 for j1 in range(self.k_re): for j2 in range(j1 + 1): if max_ix is not None and jj > max_ix: return # Need 2d mat_l, mat_r = ex_r[:, j1:j1+1], ex_r[:, j2:j2+1] vsl, vsr = axr[:, j1:j1+1], axr[:, j2:j2+1] yield jj, mat_l, mat_r, vsl, vsr, j1 == j2 jj += 1 # Variance components for j, _ in enumerate(self.exog_vc.names): if max_ix is not None and jj > max_ix: return mat = self.exog_vc.mats[j][group_ix] axmat = solver(mat) yield jj, mat, mat, axmat, axmat, True jj += 1
A generator that yields the element-wise derivative of the marginal covariance matrix with respect to the random effects variance and covariance parameters. Parameters ---------- ex_r : array_like The random effects design matrix solver : function A function that given x returns V^{-1}x, where V is the group's marginal covariance matrix. group_ix : int The group index max_ix : {int, None} If not None, the generator ends when this index is reached.
_gen_dV_dPar
python
statsmodels/statsmodels
statsmodels/regression/mixed_linear_model.py
https://github.com/statsmodels/statsmodels/blob/master/statsmodels/regression/mixed_linear_model.py
BSD-3-Clause
def score(self, params, profile_fe=True): """ Returns the score vector of the profile log-likelihood. Notes ----- The score vector that is returned is computed with respect to the parameterization defined by this model instance's `use_sqrt` attribute. """ if type(params) is not MixedLMParams: params = MixedLMParams.from_packed( params, self.k_fe, self.k_re, self.use_sqrt, has_fe=False) if profile_fe: params.fe_params, sing = \ self.get_fe_params(params.cov_re, params.vcomp) if sing: msg = "Random effects covariance is singular" warnings.warn(msg) if self.use_sqrt: score_fe, score_re, score_vc = self.score_sqrt( params, calc_fe=not profile_fe) else: score_fe, score_re, score_vc = self.score_full( params, calc_fe=not profile_fe) if self._freepat is not None: score_fe *= self._freepat.fe_params score_re *= self._freepat.cov_re[self._freepat._ix] score_vc *= self._freepat.vcomp if profile_fe: return np.concatenate((score_re, score_vc)) else: return np.concatenate((score_fe, score_re, score_vc))
Returns the score vector of the profile log-likelihood. Notes ----- The score vector that is returned is computed with respect to the parameterization defined by this model instance's `use_sqrt` attribute.
score
python
statsmodels/statsmodels
statsmodels/regression/mixed_linear_model.py
https://github.com/statsmodels/statsmodels/blob/master/statsmodels/regression/mixed_linear_model.py
BSD-3-Clause
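A generic way to sanity-check an analytic score like the one above is to compare it against finite differences of the log-likelihood. The functions below are stand-ins, not the MixedLM API; the same pattern applies to any (loglike, score) pair over a packed parameter vector:

import numpy as np
from scipy.optimize import approx_fprime

def loglike(theta):
    # stand-in for model.loglike on a packed covariance parameter vector
    return -0.5 * np.sum(theta ** 2)

def score(theta):
    # stand-in for model.score; here the exact gradient of loglike
    return -theta

theta0 = np.array([0.3, -1.2, 0.7])
numeric = approx_fprime(theta0, loglike, 1e-7)
print(np.allclose(numeric, score(theta0), atol=1e-5))  # True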
def score_full(self, params, calc_fe): """ Returns the score with respect to untransformed parameters. Calculates the score vector for the profiled log-likelihood of the mixed effects model with respect to the parameterization in which the random effects covariance matrix is represented in its full form (not using the Cholesky factor). Parameters ---------- params : MixedLMParams or array_like The parameter at which the score function is evaluated. If array-like, must contain the packed random effects parameters (cov_re and vcomp) without fe_params. calc_fe : bool If True, calculate the score vector for the fixed effects parameters. If False, this vector is not calculated, and a vector of zeros is returned in its place. Returns ------- score_fe : array_like The score vector with respect to the fixed effects parameters. score_re : array_like The score vector with respect to the random effects parameters (excluding variance components parameters). score_vc : array_like The score vector with respect to variance components parameters. Notes ----- `score_re` is taken with respect to the parameterization in which `cov_re` is represented through its lower triangle (without taking the Cholesky square root). """ fe_params = params.fe_params cov_re = params.cov_re vcomp = params.vcomp try: cov_re_inv = np.linalg.inv(cov_re) except np.linalg.LinAlgError: cov_re_inv = np.linalg.pinv(cov_re) self._cov_sing += 1 score_fe = np.zeros(self.k_fe) score_re = np.zeros(self.k_re2) score_vc = np.zeros(self.k_vc) # Handle the covariance penalty. if self.cov_pen is not None: score_re -= self.cov_pen.deriv(cov_re, cov_re_inv) # Handle the fixed effects penalty. if calc_fe and (self.fe_pen is not None): score_fe -= self.fe_pen.deriv(fe_params) # resid' V^{-1} resid, summed over the groups (a scalar) rvir = 0. # exog' V^{-1} resid, summed over the groups (a k_fe # dimensional vector) xtvir = 0. # exog' V^{-1} exog, summed over the groups (a k_fe x k_fe # matrix) xtvix = 0. # V^{-1} exog' dV/dQ_jj exog V^{-1}, where Q_jj is the jj^th # covariance parameter. xtax = [0., ] * (self.k_re2 + self.k_vc) # Temporary related to the gradient of log |V| dlv = np.zeros(self.k_re2 + self.k_vc) # resid' V^{-1} dV/dQ_jj V^{-1} resid (a scalar) rvavr = np.zeros(self.k_re2 + self.k_vc) for group_ix, group in enumerate(self.group_labels): vc_var = self._expand_vcomp(vcomp, group_ix) exog = self.exog_li[group_ix] ex_r, ex2_r = self._aex_r[group_ix], self._aex_r2[group_ix] solver = _smw_solver(1., ex_r, ex2_r, cov_re_inv, 1 / vc_var) # The residuals resid = self.endog_li[group_ix] if self.k_fe > 0: expval = np.dot(exog, fe_params) resid = resid - expval if self.reml: viexog = solver(exog) xtvix += np.dot(exog.T, viexog) # Contributions to the covariance parameter gradient vir = solver(resid) for (jj, matl, matr, vsl, vsr, sym) in\ self._gen_dV_dPar(ex_r, solver, group_ix): dlv[jj] = _dotsum(matr, vsl) if not sym: dlv[jj] += _dotsum(matl, vsr) ul = _dot(vir, matl) ur = ul.T if sym else _dot(matr.T, vir) ulr = np.dot(ul, ur) rvavr[jj] += ulr if not sym: rvavr[jj] += ulr.T if self.reml: ul = _dot(viexog.T, matl) ur = ul.T if sym else _dot(matr.T, viexog) ulr = np.dot(ul, ur) xtax[jj] += ulr if not sym: xtax[jj] += ulr.T # Contribution of log|V| to the covariance parameter # gradient. if self.k_re > 0: score_re -= 0.5 * dlv[0:self.k_re2] if self.k_vc > 0: score_vc -= 0.5 * dlv[self.k_re2:] rvir += np.dot(resid, vir) if calc_fe: xtvir += np.dot(exog.T, vir) fac = self.n_totobs if self.reml: fac -= self.k_fe if calc_fe and self.k_fe > 0: score_fe += fac * xtvir / rvir if self.k_re > 0: score_re += 0.5 * fac * rvavr[0:self.k_re2] / rvir if self.k_vc > 0: score_vc += 0.5 * fac * rvavr[self.k_re2:] / rvir if self.reml: xtvixi = np.linalg.inv(xtvix) for j in range(self.k_re2): score_re[j] += 0.5 * _dotsum(xtvixi.T, xtax[j]) for j in range(self.k_vc): score_vc[j] += 0.5 * _dotsum(xtvixi.T, xtax[self.k_re2 + j]) return score_fe, score_re, score_vc
Returns the score with respect to untransformed parameters. Calculates the score vector for the profiled log-likelihood of the mixed effects model with respect to the parameterization in which the random effects covariance matrix is represented in its full form (not using the Cholesky factor). Parameters ---------- params : MixedLMParams or array_like The parameter at which the score function is evaluated. If array-like, must contain the packed random effects parameters (cov_re and vcomp) without fe_params. calc_fe : bool If True, calculate the score vector for the fixed effects parameters. If False, this vector is not calculated, and a vector of zeros is returned in its place. Returns ------- score_fe : array_like The score vector with respect to the fixed effects parameters. score_re : array_like The score vector with respect to the random effects parameters (excluding variance components parameters). score_vc : array_like The score vector with respect to variance components parameters. Notes ----- `score_re` is taken with respect to the parameterization in which `cov_re` is represented through its lower triangle (without taking the Cholesky square root).
score_full
python
statsmodels/statsmodels
statsmodels/regression/mixed_linear_model.py
https://github.com/statsmodels/statsmodels/blob/master/statsmodels/regression/mixed_linear_model.py
BSD-3-Clause
def score_sqrt(self, params, calc_fe=True): """ Returns the score with respect to transformed parameters. Calculates the score vector with respect to the parameterization in which the random effects covariance matrix is represented through its Cholesky square root. Parameters ---------- params : MixedLMParams or array_like The model parameters. If array-like must contain packed parameters that are compatible with this model instance. calc_fe : bool If True, calculate the score vector for the fixed effects parameters. If False, this vector is not calculated, and a vector of zeros is returned in its place. Returns ------- score_fe : array_like The score vector with respect to the fixed effects parameters. score_re : array_like The score vector with respect to the random effects parameters (excluding variance components parameters). score_vc : array_like The score vector with respect to variance components parameters. """ score_fe, score_re, score_vc = self.score_full(params, calc_fe=calc_fe) params_vec = params.get_packed(use_sqrt=True, has_fe=True) score_full = np.concatenate((score_fe, score_re, score_vc)) scr = 0. for i in range(len(params_vec)): v = self._lin[i] + 2 * np.dot(self._quad[i], params_vec) scr += score_full[i] * v score_fe = scr[0:self.k_fe] score_re = scr[self.k_fe:self.k_fe + self.k_re2] score_vc = scr[self.k_fe + self.k_re2:] return score_fe, score_re, score_vc
Returns the score with respect to transformed parameters. Calculates the score vector with respect to the parameterization in which the random effects covariance matrix is represented through its Cholesky square root. Parameters ---------- params : MixedLMParams or array_like The model parameters. If array-like must contain packed parameters that are compatible with this model instance. calc_fe : bool If True, calculate the score vector for the fixed effects parameters. If False, this vector is not calculated, and a vector of zeros is returned in its place. Returns ------- score_fe : array_like The score vector with respect to the fixed effects parameters. score_re : array_like The score vector with respect to the random effects parameters (excluding variance components parameters). score_vc : array_like The score vector with respect to variance components parameters.
score_sqrt
python
statsmodels/statsmodels
statsmodels/regression/mixed_linear_model.py
https://github.com/statsmodels/statsmodels/blob/master/statsmodels/regression/mixed_linear_model.py
BSD-3-Clause
def hessian(self, params): """ Returns the model's Hessian matrix. Calculates the Hessian matrix for the linear mixed effects model with respect to the parameterization in which the covariance matrix is represented directly (without square-root transformation). Parameters ---------- params : MixedLMParams or array_like The model parameters at which the Hessian is calculated. If array-like, must contain the packed parameters in a form that is compatible with this model instance. Returns ------- hess : 2d ndarray The Hessian matrix, evaluated at `params`. sing : boolean If True, the covariance matrix is singular and a pseudo-inverse is returned. """ if type(params) is not MixedLMParams: params = MixedLMParams.from_packed(params, self.k_fe, self.k_re, use_sqrt=self.use_sqrt, has_fe=True) fe_params = params.fe_params vcomp = params.vcomp cov_re = params.cov_re sing = False if self.k_re > 0: try: cov_re_inv = np.linalg.inv(cov_re) except np.linalg.LinAlgError: cov_re_inv = np.linalg.pinv(cov_re) sing = True else: cov_re_inv = np.empty((0, 0)) # Blocks for the fixed and random effects parameters. hess_fe = 0. hess_re = np.zeros((self.k_re2 + self.k_vc, self.k_re2 + self.k_vc)) hess_fere = np.zeros((self.k_re2 + self.k_vc, self.k_fe)) fac = self.n_totobs if self.reml: fac -= self.exog.shape[1] rvir = 0. xtvix = 0. xtax = [0., ] * (self.k_re2 + self.k_vc) m = self.k_re2 + self.k_vc B = np.zeros(m) D = np.zeros((m, m)) F = [[0.] * m for k in range(m)] for group_ix, group in enumerate(self.group_labels): vc_var = self._expand_vcomp(vcomp, group_ix) vc_vari = np.zeros_like(vc_var) ii = np.flatnonzero(vc_var >= 1e-10) if len(ii) > 0: vc_vari[ii] = 1 / vc_var[ii] if len(ii) < len(vc_var): sing = True exog = self.exog_li[group_ix] ex_r, ex2_r = self._aex_r[group_ix], self._aex_r2[group_ix] solver = _smw_solver(1., ex_r, ex2_r, cov_re_inv, vc_vari) # The residuals resid = self.endog_li[group_ix] if self.k_fe > 0: expval = np.dot(exog, fe_params) resid = resid - expval viexog = solver(exog) xtvix += np.dot(exog.T, viexog) vir = solver(resid) rvir += np.dot(resid, vir) for (jj1, matl1, matr1, vsl1, vsr1, sym1) in\ self._gen_dV_dPar(ex_r, solver, group_ix): ul = _dot(viexog.T, matl1) ur = _dot(matr1.T, vir) hess_fere[jj1, :] += np.dot(ul, ur) if not sym1: ul = _dot(viexog.T, matr1) ur = _dot(matl1.T, vir) hess_fere[jj1, :] += np.dot(ul, ur) if self.reml: ul = _dot(viexog.T, matl1) ur = ul if sym1 else np.dot(viexog.T, matr1) ulr = _dot(ul, ur.T) xtax[jj1] += ulr if not sym1: xtax[jj1] += ulr.T ul = _dot(vir, matl1) ur = ul if sym1 else _dot(vir, matr1) B[jj1] += np.dot(ul, ur) * (1 if sym1 else 2) # V^{-1} * dV/d_theta E = [(vsl1, matr1)] if not sym1: E.append((vsr1, matl1)) for (jj2, matl2, matr2, vsl2, vsr2, sym2) in\ self._gen_dV_dPar(ex_r, solver, group_ix, jj1): re = sum([_multi_dot_three(matr2.T, x[0], x[1].T) for x in E]) vt = 2 * _dot(_multi_dot_three(vir[None, :], matl2, re), vir[:, None]) if not sym2: le = sum([_multi_dot_three(matl2.T, x[0], x[1].T) for x in E]) vt += 2 * _dot(_multi_dot_three( vir[None, :], matr2, le), vir[:, None]) D[jj1, jj2] += np.squeeze(vt) if jj1 != jj2: D[jj2, jj1] += np.squeeze(vt) rt = _dotsum(vsl2, re.T) / 2 if not sym2: rt += _dotsum(vsr2, le.T) / 2 hess_re[jj1, jj2] += rt if jj1 != jj2: hess_re[jj2, jj1] += rt if self.reml: ev = sum([_dot(x[0], _dot(x[1].T, viexog)) for x in E]) u1 = _dot(viexog.T, matl2) u2 = _dot(matr2.T, ev) um = np.dot(u1, u2) F[jj1][jj2] += um + um.T if not sym2: u1 = np.dot(viexog.T, matr2) u2 = np.dot(matl2.T, ev) um = np.dot(u1, u2) F[jj1][jj2] += um + um.T hess_fe -= fac * xtvix / rvir hess_re = hess_re - 0.5 * fac * (D/rvir - np.outer(B, B) / rvir**2) hess_fere = -fac * hess_fere / rvir if self.reml: QL = [np.linalg.solve(xtvix, x) for x in xtax] for j1 in range(self.k_re2 + self.k_vc): for j2 in range(j1 + 1): a = _dotsum(QL[j1].T, QL[j2]) a -= np.trace(np.linalg.solve(xtvix, F[j1][j2])) a *= 0.5 hess_re[j1, j2] += a if j1 > j2: hess_re[j2, j1] += a # Put the blocks together to get the Hessian. m = self.k_fe + self.k_re2 + self.k_vc hess = np.zeros((m, m)) hess[0:self.k_fe, 0:self.k_fe] = hess_fe hess[0:self.k_fe, self.k_fe:] = hess_fere.T hess[self.k_fe:, 0:self.k_fe] = hess_fere hess[self.k_fe:, self.k_fe:] = hess_re return hess, sing
Returns the model's Hessian matrix. Calculates the Hessian matrix for the linear mixed effects model with respect to the parameterization in which the covariance matrix is represented directly (without square-root transformation). Parameters ---------- params : MixedLMParams or array_like The model parameters at which the Hessian is calculated. If array-like, must contain the packed parameters in a form that is compatible with this model instance. Returns ------- hess : 2d ndarray The Hessian matrix, evaluated at `params`. sing : boolean If True, the covariance matrix is singular and a pseudo-inverse is returned.
hessian
python
statsmodels/statsmodels
statsmodels/regression/mixed_linear_model.py
https://github.com/statsmodels/statsmodels/blob/master/statsmodels/regression/mixed_linear_model.py
BSD-3-Clause
def get_scale(self, fe_params, cov_re, vcomp): """ Returns the estimated error variance based on given estimates of the slopes and random effects covariance matrix. Parameters ---------- fe_params : array_like The regression slope estimates cov_re : 2d array_like Estimate of the random effects covariance matrix vcomp : array_like Estimate of the variance components Returns ------- scale : float The estimated error variance. """ try: cov_re_inv = np.linalg.inv(cov_re) except np.linalg.LinAlgError: cov_re_inv = np.linalg.pinv(cov_re) warnings.warn(_warn_cov_sing) qf = 0. for group_ix, group in enumerate(self.group_labels): vc_var = self._expand_vcomp(vcomp, group_ix) exog = self.exog_li[group_ix] ex_r, ex2_r = self._aex_r[group_ix], self._aex_r2[group_ix] solver = _smw_solver(1., ex_r, ex2_r, cov_re_inv, 1 / vc_var) # The residuals resid = self.endog_li[group_ix] if self.k_fe > 0: expval = np.dot(exog, fe_params) resid = resid - expval mat = solver(resid) qf += np.dot(resid, mat) if self.reml: qf /= (self.n_totobs - self.k_fe) else: qf /= self.n_totobs return qf
Returns the estimated error variance based on given estimates of the slopes and random effects covariance matrix. Parameters ---------- fe_params : array_like The regression slope estimates cov_re : 2d array_like Estimate of the random effects covariance matrix vcomp : array_like Estimate of the variance components Returns ------- scale : float The estimated error variance.
get_scale
python
statsmodels/statsmodels
statsmodels/regression/mixed_linear_model.py
https://github.com/statsmodels/statsmodels/blob/master/statsmodels/regression/mixed_linear_model.py
BSD-3-Clause
def fit(self, start_params=None, reml=True, niter_sa=0, do_cg=True, fe_pen=None, cov_pen=None, free=None, full_output=False, method=None, **fit_kwargs): """ Fit a linear mixed model to the data. Parameters ---------- start_params : array_like or MixedLMParams Starting values for the profile log-likelihood. If not a `MixedLMParams` instance, this should be an array containing the packed parameters for the profile log-likelihood, including the fixed effects parameters. reml : bool If true, fit according to the REML likelihood, else fit the standard likelihood using ML. niter_sa : int Currently this argument is ignored and has no effect on the results. cov_pen : CovariancePenalty object A penalty for the random effects covariance matrix do_cg : bool, defaults to True If False, the optimization is skipped and a results object at the given (or default) starting values is returned. fe_pen : Penalty object A penalty on the fixed effects free : MixedLMParams object If not `None`, this is a mask that allows parameters to be held fixed at specified values. A 1 indicates that the corresponding parameter is estimated, a 0 indicates that it is fixed at its starting value. Setting the `cov_re` component to the identity matrix fits a model with independent random effects. Note that some optimization methods do not respect this constraint (bfgs and lbfgs both work). full_output : bool If true, attach iteration history to results method : str or list of str Optimization method. Can be a scipy.optimize method name, or a list of such names to be tried in sequence. **fit_kwargs Additional keyword arguments passed to fit. Returns ------- A MixedLMResults instance. """ _allowed_kwargs = ['gtol', 'maxiter', 'eps', 'maxcor', 'ftol', 'tol', 'disp', 'maxls'] for x in fit_kwargs.keys(): if x not in _allowed_kwargs: warnings.warn("Argument %s not used by MixedLM.fit" % x) if method is None: method = ['bfgs', 'lbfgs', 'cg'] elif isinstance(method, str): method = [method] for meth in method: if meth.lower() in ["newton", "ncg"]: raise ValueError( "method %s not available for MixedLM" % meth) self.reml = reml self.cov_pen = cov_pen self.fe_pen = fe_pen self._cov_sing = 0 self._freepat = free if full_output: hist = [] else: hist = None if start_params is None: params = MixedLMParams(self.k_fe, self.k_re, self.k_vc) params.fe_params = np.zeros(self.k_fe) params.cov_re = np.eye(self.k_re) params.vcomp = np.ones(self.k_vc) else: if isinstance(start_params, MixedLMParams): params = start_params else: # It's a packed array if len(start_params) == self.k_fe + self.k_re2 + self.k_vc: params = MixedLMParams.from_packed( start_params, self.k_fe, self.k_re, self.use_sqrt, has_fe=True) elif len(start_params) == self.k_re2 + self.k_vc: params = MixedLMParams.from_packed( start_params, self.k_fe, self.k_re, self.use_sqrt, has_fe=False) else: raise ValueError("invalid start_params") if do_cg: fit_kwargs["retall"] = hist is not None if "disp" not in fit_kwargs: fit_kwargs["disp"] = False packed = params.get_packed(use_sqrt=self.use_sqrt, has_fe=False) if niter_sa > 0: warnings.warn("niter_sa is currently ignored") # Try optimizing one or more times for j in range(len(method)): rslt = super().fit(start_params=packed, skip_hessian=True, method=method[j], **fit_kwargs) if rslt.mle_retvals['converged']: break packed = rslt.params if j + 1 < len(method): next_method = method[j + 1] warnings.warn( "Retrying MixedLM optimization with %s" % next_method, ConvergenceWarning) else: msg = ("MixedLM optimization failed, " + "trying a different optimizer may help.") warnings.warn(msg, ConvergenceWarning) # The optimization succeeded params = np.atleast_1d(rslt.params) if hist is not None: hist.append(rslt.mle_retvals) converged = rslt.mle_retvals['converged'] if not converged: gn = self.score(rslt.params) gn = np.sqrt(np.sum(gn**2)) msg = "Gradient optimization failed, |grad| = %f" % gn warnings.warn(msg, ConvergenceWarning) # Convert to the final parameterization (i.e. undo the square # root transform of the covariance matrix, and the profiling # over the error variance). params = MixedLMParams.from_packed( params, self.k_fe, self.k_re, use_sqrt=self.use_sqrt, has_fe=False) cov_re_unscaled = params.cov_re vcomp_unscaled = params.vcomp fe_params, sing = self.get_fe_params(cov_re_unscaled, vcomp_unscaled) params.fe_params = fe_params scale = self.get_scale(fe_params, cov_re_unscaled, vcomp_unscaled) cov_re = scale * cov_re_unscaled vcomp = scale * vcomp_unscaled f1 = (self.k_re > 0) and (np.min(np.abs(np.diag(cov_re))) < 0.01) f2 = (self.k_vc > 0) and (np.min(np.abs(vcomp)) < 0.01) if f1 or f2: msg = "The MLE may be on the boundary of the parameter space." warnings.warn(msg, ConvergenceWarning) # Compute the Hessian at the MLE. Note that this is the # Hessian with respect to the random effects covariance matrix # (not its square root). It is used for obtaining standard # errors, not for optimization. hess, sing = self.hessian(params) if sing: warnings.warn(_warn_cov_sing) hess_diag = np.diag(hess) if free is not None: pcov = np.zeros_like(hess) pat = self._freepat.get_packed(use_sqrt=False, has_fe=True) ii = np.flatnonzero(pat) hess_diag = hess_diag[ii] if len(ii) > 0: hess1 = hess[np.ix_(ii, ii)] pcov[np.ix_(ii, ii)] = np.linalg.inv(-hess1) else: pcov = np.linalg.inv(-hess) if np.any(hess_diag >= 0): msg = ("The Hessian matrix at the estimated parameter values " + "is not positive definite.") warnings.warn(msg, ConvergenceWarning) # Prepare a results class instance params_packed = params.get_packed(use_sqrt=False, has_fe=True) results = MixedLMResults(self, params_packed, pcov / scale) results.params_object = params results.fe_params = fe_params results.cov_re = cov_re results.vcomp = vcomp results.scale = scale results.cov_re_unscaled = cov_re_unscaled results.method = "REML" if self.reml else "ML" results.converged = converged results.hist = hist results.reml = self.reml results.cov_pen = self.cov_pen results.k_fe = self.k_fe results.k_re = self.k_re results.k_re2 = self.k_re2 results.k_vc = self.k_vc results.use_sqrt = self.use_sqrt results.freepat = self._freepat return MixedLMResultsWrapper(results)
Fit a linear mixed model to the data. Parameters ---------- start_params : array_like or MixedLMParams Starting values for the profile log-likelihood. If not a `MixedLMParams` instance, this should be an array containing the packed parameters for the profile log-likelihood, including the fixed effects parameters. reml : bool If true, fit according to the REML likelihood, else fit the standard likelihood using ML. niter_sa : int Currently this argument is ignored and has no effect on the results. cov_pen : CovariancePenalty object A penalty for the random effects covariance matrix do_cg : bool, defaults to True If False, the optimization is skipped and a results object at the given (or default) starting values is returned. fe_pen : Penalty object A penalty on the fixed effects free : MixedLMParams object If not `None`, this is a mask that allows parameters to be held fixed at specified values. A 1 indicates that the corresponding parameter is estimated, a 0 indicates that it is fixed at its starting value. Setting the `cov_re` component to the identity matrix fits a model with independent random effects. Note that some optimization methods do not respect this constraint (bfgs and lbfgs both work). full_output : bool If true, attach iteration history to results method : str or list of str Optimization method. Can be a scipy.optimize method name, or a list of such names to be tried in sequence. **fit_kwargs Additional keyword arguments passed to fit. Returns ------- A MixedLMResults instance.
fit
python
statsmodels/statsmodels
statsmodels/regression/mixed_linear_model.py
https://github.com/statsmodels/statsmodels/blob/master/statsmodels/regression/mixed_linear_model.py
BSD-3-Clause
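A minimal end-to-end usage sketch with the standard "dietox" example (fetched through Rdatasets, so it needs network access): a random-intercept model for pig weights over time.

import statsmodels.api as sm
import statsmodels.formula.api as smf

data = sm.datasets.get_rdataset("dietox", "geepack").data
md = smf.mixedlm("Weight ~ Time", data, groups=data["Pig"])
mdf = md.fit(method=["lbfgs"])   # method may be a single name or a list
print(mdf.summary())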
def rvs(self, n): """ Return a vector of simulated values from a mixed linear model. The parameter n is ignored, but required by the interface """ model = self.model # Fixed effects y = np.dot(self.exog, self.fe_params) # Random effects u = np.random.normal(size=(model.n_groups, model.k_re)) u = np.dot(u, np.linalg.cholesky(self.cov_re).T) y += (u[self.group_idx, :] * model.exog_re).sum(1) # Variance components for j, _ in enumerate(model.exog_vc.names): ex = model.exog_vc.mats[j] v = self.vcomp[j] for i, g in enumerate(model.group_labels): exg = ex[i] ii = model.row_indices[g] u = np.random.normal(size=exg.shape[1]) y[ii] += np.sqrt(v) * np.dot(exg, u) # Residual variance y += np.sqrt(self.scale) * np.random.normal(size=len(y)) return y
Return a vector of simulated values from a mixed linear model. The parameter n is ignored, but required by the interface
rvs
python
statsmodels/statsmodels
statsmodels/regression/mixed_linear_model.py
https://github.com/statsmodels/statsmodels/blob/master/statsmodels/regression/mixed_linear_model.py
BSD-3-Clause
def fittedvalues(self): """ Returns the fitted values for the model. The fitted values reflect the mean structure specified by the fixed effects and the predicted random effects. """ fit = np.dot(self.model.exog, self.fe_params) re = self.random_effects for group_ix, group in enumerate(self.model.group_labels): ix = self.model.row_indices[group] mat = [] if self.model.exog_re_li is not None: mat.append(self.model.exog_re_li[group_ix]) for j in range(self.k_vc): mat.append(self.model.exog_vc.mats[j][group_ix]) mat = np.concatenate(mat, axis=1) fit[ix] += np.dot(mat, re[group]) return fit
Returns the fitted values for the model. The fitted values reflect the mean structure specified by the fixed effects and the predicted random effects.
fittedvalues
python
statsmodels/statsmodels
statsmodels/regression/mixed_linear_model.py
https://github.com/statsmodels/statsmodels/blob/master/statsmodels/regression/mixed_linear_model.py
BSD-3-Clause
def resid(self): """ Returns the residuals for the model. The residuals reflect the mean structure specified by the fixed effects and the predicted random effects. """ return self.model.endog - self.fittedvalues
Returns the residuals for the model. The residuals reflect the mean structure specified by the fixed effects and the predicted random effects.
resid
python
statsmodels/statsmodels
statsmodels/regression/mixed_linear_model.py
https://github.com/statsmodels/statsmodels/blob/master/statsmodels/regression/mixed_linear_model.py
BSD-3-Clause
def bse_fe(self): """ Returns the standard errors of the fixed effect regression coefficients. """ p = self.model.exog.shape[1] return np.sqrt(np.diag(self.cov_params())[0:p])
Returns the standard errors of the fixed effect regression coefficients.
bse_fe
python
statsmodels/statsmodels
statsmodels/regression/mixed_linear_model.py
https://github.com/statsmodels/statsmodels/blob/master/statsmodels/regression/mixed_linear_model.py
BSD-3-Clause
def bse_re(self): """ Returns the standard errors of the variance parameters. The first `k_re x (k_re + 1) / 2` elements of the returned array are the standard errors of the lower triangle of `cov_re`. The remaining elements are the standard errors of the variance components. Note that the sampling distribution of variance parameters is strongly skewed unless the sample size is large, so these standard errors may not give meaningful confidence intervals or p-values if used in the usual way. """ p = self.model.exog.shape[1] return np.sqrt(self.scale * np.diag(self.cov_params())[p:])
Returns the standard errors of the variance parameters. The first `k_re x (k_re + 1) / 2` elements of the returned array are the standard errors of the lower triangle of `cov_re`. The remaining elements are the standard errors of the variance components. Note that the sampling distribution of variance parameters is strongly skewed unless the sample size is large, so these standard errors may not give meaningful confidence intervals or p-values if used in the usual way.
bse_re
python
statsmodels/statsmodels
statsmodels/regression/mixed_linear_model.py
https://github.com/statsmodels/statsmodels/blob/master/statsmodels/regression/mixed_linear_model.py
BSD-3-Clause
def random_effects(self): """ The conditional means of random effects given the data. Returns ------- random_effects : dict A dictionary mapping the distinct `group` values to the conditional means of the random effects for the group given the data. """ try: cov_re_inv = np.linalg.inv(self.cov_re) except np.linalg.LinAlgError: raise ValueError("Cannot predict random effects from " + "singular covariance structure.") vcomp = self.vcomp k_re = self.k_re ranef_dict = {} for group_ix, group in enumerate(self.model.group_labels): endog = self.model.endog_li[group_ix] exog = self.model.exog_li[group_ix] ex_r = self.model._aex_r[group_ix] ex2_r = self.model._aex_r2[group_ix] vc_var = self.model._expand_vcomp(vcomp, group_ix) # Get the residuals relative to fixed effects resid = endog if self.k_fe > 0: expval = np.dot(exog, self.fe_params) resid = resid - expval solver = _smw_solver(self.scale, ex_r, ex2_r, cov_re_inv, 1 / vc_var) vir = solver(resid) xtvir = _dot(ex_r.T, vir) xtvir[0:k_re] = np.dot(self.cov_re, xtvir[0:k_re]) xtvir[k_re:] *= vc_var ranef_dict[group] = pd.Series( xtvir, index=self._expand_re_names(group_ix)) return ranef_dict
The conditional means of random effects given the data. Returns ------- random_effects : dict A dictionary mapping the distinct `group` values to the conditional means of the random effects for the group given the data.
random_effects
python
statsmodels/statsmodels
statsmodels/regression/mixed_linear_model.py
https://github.com/statsmodels/statsmodels/blob/master/statsmodels/regression/mixed_linear_model.py
BSD-3-Clause
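For a single group, the computation above is the textbook BLUP (best linear unbiased predictor) formula bhat = Psi Z' V^{-1} (y - X beta), with V = Z Psi Z' + scale * I. A standalone sketch with illustrative values:

import numpy as np

rng = np.random.default_rng(3)
n, k = 8, 2
Z = rng.normal(size=(n, k))                 # random effects design, one group
Psi = np.array([[1.0, 0.2], [0.2, 0.4]])    # estimated cov_re
scale = 0.7
resid = rng.normal(size=n)                  # y - X @ fe_params for the group

V = Z @ Psi @ Z.T + scale * np.eye(n)
bhat = Psi @ Z.T @ np.linalg.solve(V, resid)
print(bhat)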
def random_effects_cov(self): """ Returns the conditional covariance matrix of the random effects for each group given the data. Returns ------- random_effects_cov : dict A dictionary mapping the distinct values of the `group` variable to the conditional covariance matrix of the random effects given the data. """ try: cov_re_inv = np.linalg.inv(self.cov_re) except np.linalg.LinAlgError: cov_re_inv = None vcomp = self.vcomp ranef_dict = {} for group_ix in range(self.model.n_groups): ex_r = self.model._aex_r[group_ix] ex2_r = self.model._aex_r2[group_ix] label = self.model.group_labels[group_ix] vc_var = self.model._expand_vcomp(vcomp, group_ix) solver = _smw_solver(self.scale, ex_r, ex2_r, cov_re_inv, 1 / vc_var) n = ex_r.shape[0] m = self.cov_re.shape[0] mat1 = np.empty((n, m + len(vc_var))) mat1[:, 0:m] = np.dot(ex_r[:, 0:m], self.cov_re) mat1[:, m:] = np.dot(ex_r[:, m:], np.diag(vc_var)) mat2 = solver(mat1) mat2 = np.dot(mat1.T, mat2) v = -mat2 v[0:m, 0:m] += self.cov_re ix = np.arange(m, v.shape[0]) v[ix, ix] += vc_var na = self._expand_re_names(group_ix) v = pd.DataFrame(v, index=na, columns=na) ranef_dict[label] = v return ranef_dict
Returns the conditional covariance matrix of the random effects for each group given the data. Returns ------- random_effects_cov : dict A dictionary mapping the distinct values of the `group` variable to the conditional covariance matrix of the random effects given the data.
random_effects_cov
python
statsmodels/statsmodels
statsmodels/regression/mixed_linear_model.py
https://github.com/statsmodels/statsmodels/blob/master/statsmodels/regression/mixed_linear_model.py
BSD-3-Clause
def t_test(self, r_matrix, use_t=None): """ Compute a t-test for each linear hypothesis of the form Rb = q Parameters ---------- r_matrix : array_like If an array is given, a p x k 2d array or length k 1d array specifying the linear restrictions. It is assumed that the linear combination is equal to zero. use_t : bool, optional If use_t is None, then the default of the model is used. If use_t is True, then the p-values are based on the t distribution. If use_t is False, then the p-values are based on the normal distribution. Returns ------- res : ContrastResults instance The results for the test are attributes of this results instance. The available results have the same elements as the parameter table in `summary()`. """ if r_matrix.shape[1] != self.k_fe: raise ValueError("r_matrix for t-test should have %d columns" % self.k_fe) d = self.k_re2 + self.k_vc z0 = np.zeros((r_matrix.shape[0], d)) r_matrix = np.concatenate((r_matrix, z0), axis=1) tst_rslt = super().t_test(r_matrix, use_t=use_t) return tst_rslt
Compute a t-test for each linear hypothesis of the form Rb = q Parameters ---------- r_matrix : array_like If an array is given, a p x k 2d array or length k 1d array specifying the linear restrictions. It is assumed that the linear combination is equal to zero. use_t : bool, optional If use_t is None, then the default of the model is used. If use_t is True, then the p-values are based on the t distribution. If use_t is False, then the p-values are based on the normal distribution. Returns ------- res : ContrastResults instance The results for the test are attributes of this results instance. The available results have the same elements as the parameter table in `summary()`.
t_test
python
statsmodels/statsmodels
statsmodels/regression/mixed_linear_model.py
https://github.com/statsmodels/statsmodels/blob/master/statsmodels/regression/mixed_linear_model.py
BSD-3-Clause
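A minimal usage sketch (reusing the dietox example above; needs network access): test each fixed effect against zero. The restriction matrix has `k_fe` columns; the zero padding for the variance parameters happens inside the method.

import numpy as np
import statsmodels.api as sm
import statsmodels.formula.api as smf

data = sm.datasets.get_rdataset("dietox", "geepack").data
mdf = smf.mixedlm("Weight ~ Time", data, groups=data["Pig"]).fit()
R = np.eye(len(mdf.fe_params))   # one row per fixed effect
print(mdf.t_test(R))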
def summary(self, yname=None, xname_fe=None, xname_re=None, title=None, alpha=.05): """ Summarize the mixed model regression results. Parameters ---------- yname : str, optional Default is `y` xname_fe : list[str], optional Fixed effects covariate names xname_re : list[str], optional Random effects covariate names title : str, optional Title for the top table. If not None, then this replaces the default title alpha : float significance level for the confidence intervals Returns ------- smry : Summary instance this holds the summary tables and text, which can be printed or converted to various output formats. See Also -------- statsmodels.iolib.summary2.Summary : class to hold summary results """ from statsmodels.iolib import summary2 smry = summary2.Summary() info = {} info["Model:"] = "MixedLM" if yname is None: yname = self.model.endog_names param_names = self.model.data.param_names[:] k_fe_params = len(self.fe_params) k_re_params = len(param_names) - len(self.fe_params) if xname_fe is not None: if len(xname_fe) != k_fe_params: msg = "xname_fe should be a list of length %d" % k_fe_params raise ValueError(msg) param_names[:k_fe_params] = xname_fe if xname_re is not None: if len(xname_re) != k_re_params: msg = "xname_re should be a list of length %d" % k_re_params raise ValueError(msg) param_names[k_fe_params:] = xname_re info["No. Observations:"] = str(self.model.n_totobs) info["No. Groups:"] = str(self.model.n_groups) gs = np.array([len(x) for x in self.model.endog_li]) info["Min. group size:"] = "%.0f" % min(gs) info["Max. group size:"] = "%.0f" % max(gs) info["Mean group size:"] = "%.1f" % np.mean(gs) info["Dependent Variable:"] = yname info["Method:"] = self.method info["Scale:"] = self.scale info["Log-Likelihood:"] = self.llf info["Converged:"] = "Yes" if self.converged else "No" smry.add_dict(info) smry.add_title("Mixed Linear Model Regression Results") float_fmt = "%.3f" sdf = np.nan * np.ones((self.k_fe + self.k_re2 + self.k_vc, 6)) # Coefficient estimates sdf[0:self.k_fe, 0] = self.fe_params # Standard errors sdf[0:self.k_fe, 1] = np.sqrt(np.diag(self.cov_params()[0:self.k_fe])) # Z-scores sdf[0:self.k_fe, 2] = sdf[0:self.k_fe, 0] / sdf[0:self.k_fe, 1] # p-values sdf[0:self.k_fe, 3] = 2 * norm.cdf(-np.abs(sdf[0:self.k_fe, 2])) # Confidence intervals qm = -norm.ppf(alpha / 2) sdf[0:self.k_fe, 4] = sdf[0:self.k_fe, 0] - qm * sdf[0:self.k_fe, 1] sdf[0:self.k_fe, 5] = sdf[0:self.k_fe, 0] + qm * sdf[0:self.k_fe, 1] # All random effects variances and covariances jj = self.k_fe for i in range(self.k_re): for j in range(i + 1): sdf[jj, 0] = self.cov_re[i, j] sdf[jj, 1] = np.sqrt(self.scale) * self.bse[jj] jj += 1 # Variance components for i in range(self.k_vc): sdf[jj, 0] = self.vcomp[i] sdf[jj, 1] = np.sqrt(self.scale) * self.bse[jj] jj += 1 sdf = pd.DataFrame(index=param_names, data=sdf) sdf.columns = ['Coef.', 'Std.Err.', 'z', 'P>|z|', '[' + str(alpha/2), str(1-alpha/2) + ']'] for col in sdf.columns: sdf[col] = [float_fmt % x if np.isfinite(x) else "" for x in sdf[col]] smry.add_df(sdf, align='r') return smry
Summarize the mixed model regression results. Parameters ---------- yname : str, optional Default is `y` xname_fe : list[str], optional Fixed effects covariate names xname_re : list[str], optional Random effects covariate names title : str, optional Title for the top table. If not None, then this replaces the default title alpha : float significance level for the confidence intervals Returns ------- smry : Summary instance this holds the summary tables and text, which can be printed or converted to various output formats. See Also -------- statsmodels.iolib.summary2.Summary : class to hold summary results
summary
python
statsmodels/statsmodels
statsmodels/regression/mixed_linear_model.py
https://github.com/statsmodels/statsmodels/blob/master/statsmodels/regression/mixed_linear_model.py
BSD-3-Clause
def aic(self): """Akaike information criterion""" if self.reml: return np.nan if self.freepat is not None: df = self.freepat.get_packed(use_sqrt=False, has_fe=True).sum() + 1 else: df = self.params.size + 1 return -2 * (self.llf - df)
Akaike information criterion
aic
python
statsmodels/statsmodels
statsmodels/regression/mixed_linear_model.py
https://github.com/statsmodels/statsmodels/blob/master/statsmodels/regression/mixed_linear_model.py
BSD-3-Clause
def bic(self): """Bayesian information criterion""" if self.reml: return np.nan if self.freepat is not None: df = self.freepat.get_packed(use_sqrt=False, has_fe=True).sum() + 1 else: df = self.params.size + 1 return -2 * self.llf + np.log(self.nobs) * df
Bayesian information criterion
bic
python
statsmodels/statsmodels
statsmodels/regression/mixed_linear_model.py
https://github.com/statsmodels/statsmodels/blob/master/statsmodels/regression/mixed_linear_model.py
BSD-3-Clause
def profile_re(self, re_ix, vtype, num_low=5, dist_low=1., num_high=5, dist_high=1., **fit_kwargs): """ Profile-likelihood inference for variance parameters. Parameters ---------- re_ix : int If vtype is `re`, this value is the index of the variance parameter for which to construct a profile likelihood. If `vtype` is 'vc' then `re_ix` is the name of the variance parameter to be profiled. vtype : str Either 're' or 'vc', depending on whether the profile analysis is for a random effect or a variance component. num_low : int The number of points at which to calculate the likelihood below the MLE of the parameter of interest. dist_low : float The distance below the MLE of the parameter of interest to begin calculating points on the profile likelihood. num_high : int The number of points at which to calculate the likelihood above the MLE of the parameter of interest. dist_high : float The distance above the MLE of the parameter of interest to begin calculating points on the profile likelihood. **fit_kwargs Additional keyword arguments passed to fit. Returns ------- An array with two columns. The first column contains the values to which the parameter of interest is constrained. The second column contains the corresponding likelihood values. Notes ----- Only variance parameters can be profiled. """ pmodel = self.model k_fe = pmodel.k_fe k_re = pmodel.k_re k_vc = pmodel.k_vc endog, exog = pmodel.endog, pmodel.exog # Need to permute the columns of the random effects design # matrix so that the profiled variable is in the first column. if vtype == 're': ix = np.arange(k_re) ix[0] = re_ix ix[re_ix] = 0 exog_re = pmodel.exog_re.copy()[:, ix] # Permute the covariance structure to match the permuted # design matrix. params = self.params_object.copy() cov_re_unscaled = params.cov_re cov_re_unscaled = cov_re_unscaled[np.ix_(ix, ix)] params.cov_re = cov_re_unscaled ru0 = cov_re_unscaled[0, 0] # Convert dist_low and dist_high to the profile # parameterization cov_re = self.scale * cov_re_unscaled low = (cov_re[0, 0] - dist_low) / self.scale high = (cov_re[0, 0] + dist_high) / self.scale elif vtype == 'vc': re_ix = self.model.exog_vc.names.index(re_ix) params = self.params_object.copy() vcomp = self.vcomp low = (vcomp[re_ix] - dist_low) / self.scale high = (vcomp[re_ix] + dist_high) / self.scale ru0 = vcomp[re_ix] / self.scale # Define the sequence of values to which the parameter of # interest will be constrained. if low <= 0: raise ValueError("dist_low is too large and would result in a " "negative variance. Try a smaller value.") left = np.linspace(low, ru0, num_low + 1) right = np.linspace(ru0, high, num_high+1)[1:] rvalues = np.concatenate((left, right)) # Indicators of which parameters are free and fixed. free = MixedLMParams(k_fe, k_re, k_vc) if self.freepat is None: free.fe_params = np.ones(k_fe) vcomp = np.ones(k_vc) mat = np.ones((k_re, k_re)) else: # If a freepat already has been specified, we add the # constraint to it. free.fe_params = self.freepat.fe_params vcomp = self.freepat.vcomp mat = self.freepat.cov_re if vtype == 're': mat = mat[np.ix_(ix, ix)] if vtype == 're': mat[0, 0] = 0 else: vcomp[re_ix] = 0 free.cov_re = mat free.vcomp = vcomp klass = self.model.__class__ init_kwargs = pmodel._get_init_kwds() if vtype == 're': init_kwargs['exog_re'] = exog_re likev = [] for x in rvalues: model = klass(endog, exog, **init_kwargs) if vtype == 're': cov_re = params.cov_re.copy() cov_re[0, 0] = x params.cov_re = cov_re else: params.vcomp[re_ix] = x # TODO should use fit_kwargs rslt = model.fit(start_params=params, free=free, reml=self.reml, cov_pen=self.cov_pen, **fit_kwargs)._results likev.append([x * rslt.scale, rslt.llf]) likev = np.asarray(likev) return likev
Profile-likelihood inference for variance parameters. Parameters ---------- re_ix : int If vtype is `re`, this value is the index of the variance parameter for which to construct a profile likelihood. If `vtype` is 'vc' then `re_ix` is the name of the variance parameter to be profiled. vtype : str Either 're' or 'vc', depending on whether the profile analysis is for a random effect or a variance component. num_low : int The number of points at which to calculate the likelihood below the MLE of the parameter of interest. dist_low : float The distance below the MLE of the parameter of interest to begin calculating points on the profile likelihood. num_high : int The number of points at which to calculate the likelihood above the MLE of the parameter of interest. dist_high : float The distance above the MLE of the parameter of interest to begin calculating points on the profile likelihood. **fit_kwargs Additional keyword arguments passed to fit. Returns ------- An array with two columns. The first column contains the values to which the parameter of interest is constrained. The second column contains the corresponding likelihood values. Notes ----- Only variance parameters can be profiled.
profile_re
python
statsmodels/statsmodels
statsmodels/regression/mixed_linear_model.py
https://github.com/statsmodels/statsmodels/blob/master/statsmodels/regression/mixed_linear_model.py
BSD-3-Clause
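A minimal usage sketch (dietox again; the grid settings are illustrative and `dist_low` must not drive the constrained variance negative):

import statsmodels.api as sm
import statsmodels.formula.api as smf

data = sm.datasets.get_rdataset("dietox", "geepack").data
mdf = smf.mixedlm("Weight ~ Time", data, groups=data["Pig"]).fit()
prof = mdf.profile_re(0, "re", num_low=3, dist_low=0.5,
                      num_high=3, dist_high=0.5)
print(prof)   # column 0: constrained variance, column 1: profile loglike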
def fit(self, method='pinv'): """ Minimal implementation of WLS optimized for performance. Parameters ---------- method : str, optional Method to use to estimate parameters. "pinv", "qr" or "lstsq" * "pinv" uses the Moore-Penrose pseudoinverse to solve the least squares problem. * "qr" uses the QR factorization. * "lstsq" uses the least squares implementation in numpy.linalg Returns ------- results : namedtuple Named tuple containing the fewest terms needed to implement iterative estimation in models. Currently * params : Estimated parameters * fittedvalues : Fit values using original data * resid : Residuals using original data * model : namedtuple with one field, weights * scale : scale computed using weighted residuals Notes ----- Does not perform any checks on the input data See Also -------- statsmodels.regression.linear_model.WLS """ if method == 'pinv': pinv_wexog = np.linalg.pinv(self.wexog) params = pinv_wexog.dot(self.wendog) elif method == 'qr': Q, R = np.linalg.qr(self.wexog) params = np.linalg.solve(R, np.dot(Q.T, self.wendog)) else: params, _, _, _ = np.linalg.lstsq(self.wexog, self.wendog, rcond=-1) return self.results(params)
Minimal implementation of WLS optimized for performance. Parameters ---------- method : str, optional Method to use to estimate parameters. "pinv", "qr" or "lstsq" * "pinv" uses the Moore-Penrose pseudoinverse to solve the least squares problem. * "qr" uses the QR factorization. * "lstsq" uses the least squares implementation in numpy.linalg Returns ------- results : namedtuple Named tuple containing the fewest terms needed to implement iterative estimation in models. Currently * params : Estimated parameters * fittedvalues : Fit values using original data * resid : Residuals using original data * model : namedtuple with one field, weights * scale : scale computed using weighted residuals Notes ----- Does not perform any checks on the input data See Also -------- statsmodels.regression.linear_model.WLS
fit
python
statsmodels/statsmodels
statsmodels/regression/_tools.py
https://github.com/statsmodels/statsmodels/blob/master/statsmodels/regression/_tools.py
BSD-3-Clause
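The three solution paths agree for a full-rank design; a standalone check (data simulated, names illustrative):

import numpy as np

rng = np.random.default_rng(4)
X = rng.normal(size=(50, 3))
y = X @ np.array([1.0, 2.0, -1.0]) + rng.normal(size=50)

b_pinv = np.linalg.pinv(X) @ y                       # method='pinv'
Q, R = np.linalg.qr(X)                               # method='qr'
b_qr = np.linalg.solve(R, Q.T @ y)
b_lstsq = np.linalg.lstsq(X, y, rcond=-1)[0]         # method='lstsq'
print(np.allclose(b_pinv, b_qr) and np.allclose(b_qr, b_lstsq))  # True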
def results(self, params): """ Construct results Parameters ---------- params : ndarray Model parameters Notes ----- Allows results to be constructed from either existing parameters or when estimated using ``fit`` """ fitted_values = self.exog.dot(params) resid = self.endog - fitted_values wresid = self.wendog - self.wexog.dot(params) df_resid = self.wexog.shape[0] - self.wexog.shape[1] scale = np.dot(wresid, wresid) / df_resid return Bunch(params=params, fittedvalues=fitted_values, resid=resid, model=self, scale=scale)
Construct results Parameters ---------- params : ndarray Model parameters Notes ----- Allows results to be constructed from either existing parameters or when estimated using ``fit``
results
python
statsmodels/statsmodels
statsmodels/regression/_tools.py
https://github.com/statsmodels/statsmodels/blob/master/statsmodels/regression/_tools.py
BSD-3-Clause
def whiten(self, data): """ QuantReg model whitener does nothing: returns data. """ return data
QuantReg model whitener does nothing: returns data.
whiten
python
statsmodels/statsmodels
statsmodels/regression/quantile_regression.py
https://github.com/statsmodels/statsmodels/blob/master/statsmodels/regression/quantile_regression.py
BSD-3-Clause
def fit(self, q=.5, vcov='robust', kernel='epa', bandwidth='hsheather', max_iter=1000, p_tol=1e-6, **kwargs): """ Solve by Iterative Weighted Least Squares Parameters ---------- q : float Quantile must be strictly between 0 and 1 vcov : str, method used to calculate the variance-covariance matrix of the parameters. Default is ``robust``: - robust : heteroskedasticity robust standard errors (as suggested in Greene 6th edition) - iid : iid errors (as in Stata 12) kernel : str, kernel to use in the kernel density estimation for the asymptotic covariance matrix: - epa: Epanechnikov - cos: Cosine - gau: Gaussian - par: Parzen bandwidth : str, Bandwidth selection method in kernel density estimation for asymptotic covariance estimate (full references in QuantReg docstring): - hsheather: Hall-Sheather (1988) - bofinger: Bofinger (1975) - chamberlain: Chamberlain (1994) """ if q <= 0 or q >= 1: raise Exception('q must be strictly between 0 and 1') kern_names = ['biw', 'cos', 'epa', 'gau', 'par'] if kernel not in kern_names: raise Exception("kernel must be one of " + ', '.join(kern_names)) else: kernel = kernels[kernel] if bandwidth == 'hsheather': bandwidth = hall_sheather elif bandwidth == 'bofinger': bandwidth = bofinger elif bandwidth == 'chamberlain': bandwidth = chamberlain else: raise Exception("bandwidth must be in 'hsheather', 'bofinger', 'chamberlain'") endog = self.endog exog = self.exog nobs = self.nobs exog_rank = np.linalg.matrix_rank(self.exog) self.rank = exog_rank self.df_model = float(self.rank - self.k_constant) self.df_resid = self.nobs - self.rank n_iter = 0 xstar = exog beta = np.ones(exog.shape[1]) # TODO: better start, initial beta is used only for convergence check # Note the following does not work yet, # the iteration loop always starts with OLS as initial beta # if start_params is not None: # if len(start_params) != rank: # raise ValueError('start_params has wrong length') # beta = start_params # else: # # start with OLS # beta = np.dot(np.linalg.pinv(exog), endog) diff = 10 cycle = False history = dict(params = [], mse=[]) while n_iter < max_iter and diff > p_tol and not cycle: n_iter += 1 beta0 = beta xtx = np.dot(xstar.T, exog) xty = np.dot(xstar.T, endog) beta = np.dot(pinv(xtx), xty) resid = endog - np.dot(exog, beta) mask = np.abs(resid) < .000001 resid[mask] = ((resid[mask] >= 0) * 2 - 1) * .000001 resid = np.where(resid < 0, q * resid, (1-q) * resid) resid = np.abs(resid) xstar = exog / resid[:, np.newaxis] diff = np.max(np.abs(beta - beta0)) history['params'].append(beta) history['mse'].append(np.mean(resid*resid)) if (n_iter >= 300) and (n_iter % 100 == 0): # check for convergence circle, should not happen for ii in range(2, 10): if np.all(beta == history['params'][-ii]): cycle = True warnings.warn("Convergence cycle detected", ConvergenceWarning) break if n_iter == max_iter: warnings.warn("Maximum number of iterations (" + str(max_iter) + ") reached.", IterationLimitWarning) e = endog - np.dot(exog, beta) # Greene (2008, p.407) writes that Stata 6 uses this bandwidth: # h = 0.9 * np.std(e) / (nobs**0.2) # Instead, we calculate bandwidth as in Stata 12 iqre = stats.scoreatpercentile(e, 75) - stats.scoreatpercentile(e, 25) h = bandwidth(nobs, q) h = min(np.std(endog), iqre / 1.34) * (norm.ppf(q + h) - norm.ppf(q - h)) fhat0 = 1. / (nobs * h) * np.sum(kernel(e / h)) if vcov == 'robust': d = np.where(e > 0, (q/fhat0)**2, ((1-q)/fhat0)**2) xtxi = pinv(np.dot(exog.T, exog)) xtdx = np.dot(exog.T * d[np.newaxis, :], exog) vcov = xtxi @ xtdx @ xtxi elif vcov == 'iid': vcov = (1. / fhat0)**2 * q * (1 - q) * pinv(np.dot(exog.T, exog)) else: raise Exception("vcov must be 'robust' or 'iid'") lfit = QuantRegResults(self, beta, normalized_cov_params=vcov) lfit.q = q lfit.iterations = n_iter lfit.sparsity = 1. / fhat0 lfit.bandwidth = h lfit.history = history return RegressionResultsWrapper(lfit)
Solve by Iterative Weighted Least Squares Parameters ---------- q : float Quantile must be strictly between 0 and 1 vcov : str, method used to calculate the variance-covariance matrix of the parameters. Default is ``robust``: - robust : heteroskedasticity robust standard errors (as suggested in Greene 6th edition) - iid : iid errors (as in Stata 12) kernel : str, kernel to use in the kernel density estimation for the asymptotic covariance matrix: - epa: Epanechnikov - cos: Cosine - gau: Gaussian - par: Parzen - biw: Biweight bandwidth : str, Bandwidth selection method in kernel density estimation for asymptotic covariance estimate (full references in QuantReg docstring): - hsheather: Hall-Sheather (1988) - bofinger: Bofinger (1975) - chamberlain: Chamberlain (1994)
fit
python
statsmodels/statsmodels
statsmodels/regression/quantile_regression.py
https://github.com/statsmodels/statsmodels/blob/master/statsmodels/regression/quantile_regression.py
BSD-3-Clause
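A minimal usage sketch of the fit method above, assuming synthetic data and that QuantReg is exposed via statsmodels.api:

import numpy as np
import statsmodels.api as sm

rng = np.random.default_rng(0)
X = sm.add_constant(rng.normal(size=(500, 2)))          # design with intercept
y = X @ np.array([1.0, 2.0, -1.0]) + rng.standard_t(df=3, size=500)

# Median regression with the default robust covariance; the bandwidth
# and sparsity computed inside fit() are attached to the results.
res = sm.QuantReg(y, X).fit(q=0.5, kernel='epa', bandwidth='hsheather')
print(res.params, res.bandwidth, res.sparsity)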
def summary(self, yname=None, xname=None, title=None, alpha=.05): """Summarize the Regression Results Parameters ---------- yname : str, optional Default is `y` xname : list[str], optional Names for the exogenous variables. Default is `var_##` for ## in the number of regressors. Must match the number of parameters in the model title : str, optional Title for the top table. If not None, then this replaces the default title alpha : float significance level for the confidence intervals Returns ------- smry : Summary instance this holds the summary tables and text, which can be printed or converted to various output formats. See Also -------- statsmodels.iolib.summary.Summary : class to hold summary results """ eigvals = self.eigenvals condno = self.condition_number top_left = [('Dep. Variable:', None), ('Model:', None), ('Method:', ['Least Squares']), ('Date:', None), ('Time:', None) ] top_right = [('Pseudo R-squared:', ["%#8.4g" % self.prsquared]), ('Bandwidth:', ["%#8.4g" % self.bandwidth]), ('Sparsity:', ["%#8.4g" % self.sparsity]), ('No. Observations:', None), ('Df Residuals:', None), ('Df Model:', None) ] if title is None: title = self.model.__class__.__name__ + ' ' + "Regression Results" # create summary table instance from statsmodels.iolib.summary import Summary smry = Summary() smry.add_table_2cols(self, gleft=top_left, gright=top_right, yname=yname, xname=xname, title=title) smry.add_table_params(self, yname=yname, xname=xname, alpha=alpha, use_t=self.use_t) # add warnings/notes, added to text format only etext = [] if eigvals[-1] < 1e-10: wstr = "The smallest eigenvalue is %6.3g. This might indicate " wstr += "that there are\n" wstr += "strong multicollinearity problems or that the design " wstr += "matrix is singular." wstr = wstr % eigvals[-1] etext.append(wstr) elif condno > 1000: # TODO: what is recommended wstr = "The condition number is large, %6.3g. This might " wstr += "indicate that there are\n" wstr += "strong multicollinearity or other numerical " wstr += "problems." wstr = wstr % condno etext.append(wstr) if etext: smry.add_extra_txt(etext) return smry
Summarize the Regression Results Parameters ---------- yname : str, optional Default is `y` xname : list[str], optional Names for the exogenous variables. Default is `var_##` for ## in the number of regressors. Must match the number of parameters in the model title : str, optional Title for the top table. If not None, then this replaces the default title alpha : float significance level for the confidence intervals Returns ------- smry : Summary instance this holds the summary tables and text, which can be printed or converted to various output formats. See Also -------- statsmodels.iolib.summary.Summary : class to hold summary results
summary
python
statsmodels/statsmodels
statsmodels/regression/quantile_regression.py
https://github.com/statsmodels/statsmodels/blob/master/statsmodels/regression/quantile_regression.py
BSD-3-Clause
def fit(self): """ Fits the model by application of the Kalman filter Returns ------- RecursiveLSResults """ smoother_results = self.smooth(return_ssm=True) with self.ssm.fixed_scale(smoother_results.scale): res = self.smooth() return res
Fits the model by application of the Kalman filter Returns ------- RecursiveLSResults
fit
python
statsmodels/statsmodels
statsmodels/regression/recursive_ls.py
https://github.com/statsmodels/statsmodels/blob/master/statsmodels/regression/recursive_ls.py
BSD-3-Clause
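A brief usage sketch on assumed synthetic data; fit() here involves no numerical optimization, it just runs the Kalman filter and smoother:

import numpy as np
import statsmodels.api as sm

rng = np.random.default_rng(1)
X = sm.add_constant(rng.normal(size=(200, 1)))
y = X @ np.array([0.5, 1.5]) + rng.normal(size=200)

res = sm.RecursiveLS(y, X).fit()
print(res.params)                                  # full-sample estimates
print(res.recursive_coefficients.filtered.shape)   # (k_exog, nobs) layout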
def update(self, params, **kwargs): """ Update the parameters of the model Updates the representation matrices to fill in the new parameter values. Parameters ---------- params : array_like Array of new parameters. transformed : bool, optional Whether or not `params` is already transformed. If set to False, `transform_params` is called. Default is True. Returns ------- params : array_like Array of parameters. """ pass
Update the parameters of the model Updates the representation matrices to fill in the new parameter values. Parameters ---------- params : array_like Array of new parameters. transformed : bool, optional Whether or not `params` is already transformed. If set to False, `transform_params` is called. Default is True. Returns ------- params : array_like Array of parameters.
update
python
statsmodels/statsmodels
statsmodels/regression/recursive_ls.py
https://github.com/statsmodels/statsmodels/blob/master/statsmodels/regression/recursive_ls.py
BSD-3-Clause
def recursive_coefficients(self): """ Estimates of regression coefficients, recursively estimated Returns ------- out: Bunch Has the following attributes: - `filtered`: a time series array with the filtered estimate of the component - `filtered_cov`: a time series array with the filtered estimate of the variance/covariance of the component - `smoothed`: a time series array with the smoothed estimate of the component - `smoothed_cov`: a time series array with the smoothed estimate of the variance/covariance of the component - `offset`: an integer giving the offset in the state vector where this component begins """ out = None spec = self.specification start = offset = 0 end = offset + spec.k_exog out = Bunch( filtered=self.filtered_state[start:end], filtered_cov=self.filtered_state_cov[start:end, start:end], smoothed=None, smoothed_cov=None, offset=offset ) if self.smoothed_state is not None: out.smoothed = self.smoothed_state[start:end] if self.smoothed_state_cov is not None: out.smoothed_cov = ( self.smoothed_state_cov[start:end, start:end]) return out
Estimates of regression coefficients, recursively estimated Returns ------- out: Bunch Has the following attributes: - `filtered`: a time series array with the filtered estimate of the component - `filtered_cov`: a time series array with the filtered estimate of the variance/covariance of the component - `smoothed`: a time series array with the smoothed estimate of the component - `smoothed_cov`: a time series array with the smoothed estimate of the variance/covariance of the component - `offset`: an integer giving the offset in the state vector where this component begins
recursive_coefficients
python
statsmodels/statsmodels
statsmodels/regression/recursive_ls.py
https://github.com/statsmodels/statsmodels/blob/master/statsmodels/regression/recursive_ls.py
BSD-3-Clause
def llf_recursive_obs(self): """ (float) Loglikelihood at observation, computed from recursive residuals """ from scipy.stats import norm return np.log(norm.pdf(self.resid_recursive, loc=0, scale=self.scale**0.5))
(float) Loglikelihood at observation, computed from recursive residuals
llf_recursive_obs
python
statsmodels/statsmodels
statsmodels/regression/recursive_ls.py
https://github.com/statsmodels/statsmodels/blob/master/statsmodels/regression/recursive_ls.py
BSD-3-Clause
def llf_recursive(self): """ (float) Loglikelihood defined by recursive residuals, equivalent to OLS """ return np.sum(self.llf_recursive_obs)
(float) Loglikelihood defined by recursive residuals, equivalent to OLS
llf_recursive
python
statsmodels/statsmodels
statsmodels/regression/recursive_ls.py
https://github.com/statsmodels/statsmodels/blob/master/statsmodels/regression/recursive_ls.py
BSD-3-Clause
def ssr(self): """ssr""" d = max(self.nobs_diffuse, self.loglikelihood_burn) return (self.nobs - d) * self.filter_results.obs_cov[0, 0, 0]
ssr
ssr
python
statsmodels/statsmodels
statsmodels/regression/recursive_ls.py
https://github.com/statsmodels/statsmodels/blob/master/statsmodels/regression/recursive_ls.py
BSD-3-Clause
def centered_tss(self): """Centered tss""" return np.sum((self.filter_results.endog[0] - np.mean(self.filter_results.endog))**2)
Centered tss
centered_tss
python
statsmodels/statsmodels
statsmodels/regression/recursive_ls.py
https://github.com/statsmodels/statsmodels/blob/master/statsmodels/regression/recursive_ls.py
BSD-3-Clause
def uncentered_tss(self): """uncentered tss""" return np.sum((self.filter_results.endog[0])**2)
uncentered tss
uncentered_tss
python
statsmodels/statsmodels
statsmodels/regression/recursive_ls.py
https://github.com/statsmodels/statsmodels/blob/master/statsmodels/regression/recursive_ls.py
BSD-3-Clause
def ess(self): """ess""" if self.k_constant: return self.centered_tss - self.ssr else: return self.uncentered_tss - self.ssr
ess
ess
python
statsmodels/statsmodels
statsmodels/regression/recursive_ls.py
https://github.com/statsmodels/statsmodels/blob/master/statsmodels/regression/recursive_ls.py
BSD-3-Clause
def rsquared(self): """rsquared""" if self.k_constant: return 1 - self.ssr / self.centered_tss else: return 1 - self.ssr / self.uncentered_tss
rsquared
rsquared
python
statsmodels/statsmodels
statsmodels/regression/recursive_ls.py
https://github.com/statsmodels/statsmodels/blob/master/statsmodels/regression/recursive_ls.py
BSD-3-Clause
def mse_model(self): """mse_model""" return self.ess / self.df_model
mse_model
mse_model
python
statsmodels/statsmodels
statsmodels/regression/recursive_ls.py
https://github.com/statsmodels/statsmodels/blob/master/statsmodels/regression/recursive_ls.py
BSD-3-Clause
def mse_resid(self): """mse_resid""" return self.ssr / self.df_resid
mse_resid
mse_resid
python
statsmodels/statsmodels
statsmodels/regression/recursive_ls.py
https://github.com/statsmodels/statsmodels/blob/master/statsmodels/regression/recursive_ls.py
BSD-3-Clause
def mse_total(self): """mse_total""" if self.k_constant: return self.centered_tss / (self.df_resid + self.df_model) else: return self.uncentered_tss / (self.df_resid + self.df_model)
mse_total
mse_total
python
statsmodels/statsmodels
statsmodels/regression/recursive_ls.py
https://github.com/statsmodels/statsmodels/blob/master/statsmodels/regression/recursive_ls.py
BSD-3-Clause
def _cusum_significance_bounds(self, alpha, ddof=0, points=None): """ Parameters ---------- alpha : float The significance level for the bounds; must be one of 0.01, 0.05 or 0.10. ddof : int, optional The number of periods additional to `k_exog` to exclude in constructing the bounds. Default is zero. This is usually used only for testing purposes. points : iterable, optional The points at which to evaluate the significance bounds. Default is two points, beginning and end of the sample. Notes ----- Comparing against the cusum6 package for Stata, this does not produce exactly the same confidence bands (which are produced in cusum6 by lw, uw) because they burn the first k_exog + 1 periods instead of the first k_exog. If this change is performed (so that `tmp = (self.nobs - d - 1)**0.5`), then the output here matches cusum6. The cusum6 behavior does not seem to be consistent with Brown et al. (1975); it is likely they did that because they needed three initial observations to get the initial OLS estimates, whereas we do not need to do that. """ # Get the constant associated with the significance level if alpha == 0.01: scalar = 1.143 elif alpha == 0.05: scalar = 0.948 elif alpha == 0.10: scalar = 0.950 else: raise ValueError('Invalid significance level.') # Get the points for the significance bound lines d = max(self.nobs_diffuse, self.loglikelihood_burn) tmp = (self.nobs - d - ddof)**0.5 def upper_line(x): return scalar * tmp + 2 * scalar * (x - d) / tmp if points is None: points = np.array([d, self.nobs]) return -upper_line(points), upper_line(points)
Parameters ---------- alpha : float The significance level for the bounds; must be one of 0.01, 0.05 or 0.10. ddof : int, optional The number of periods additional to `k_exog` to exclude in constructing the bounds. Default is zero. This is usually used only for testing purposes. points : iterable, optional The points at which to evaluate the significance bounds. Default is two points, beginning and end of the sample. Notes ----- Comparing against the cusum6 package for Stata, this does not produce exactly the same confidence bands (which are produced in cusum6 by lw, uw) because they burn the first k_exog + 1 periods instead of the first k_exog. If this change is performed (so that `tmp = (self.nobs - d - 1)**0.5`), then the output here matches cusum6. The cusum6 behavior does not seem to be consistent with Brown et al. (1975); it is likely they did that because they needed three initial observations to get the initial OLS estimates, whereas we do not need to do that.
_cusum_significance_bounds
python
statsmodels/statsmodels
statsmodels/regression/recursive_ls.py
https://github.com/statsmodels/statsmodels/blob/master/statsmodels/regression/recursive_ls.py
BSD-3-Clause
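The bound construction above can be restated standalone; the sample size and burn-in below are assumed values:

import numpy as np

def cusum_bounds(nobs, d, alpha=0.05, points=None):
    # Brown, Durbin & Evans (1975) scalars, mirroring the lookup above.
    scalar = {0.01: 1.143, 0.05: 0.948, 0.10: 0.950}[alpha]
    tmp = (nobs - d) ** 0.5
    if points is None:
        points = np.array([d, nobs])
    upper = scalar * tmp + 2 * scalar * (points - d) / tmp
    return -upper, upper

lower, upper = cusum_bounds(nobs=100, d=3)   # d = assumed number of regressors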
def _cusum_squares_significance_bounds(self, alpha, points=None): """ Notes ----- Comparing against the cusum6 package for Stata, this does not produce exactly the same confidence bands (which are produced in cusum6 by lww, uww) because they use a different method for computing the critical value; in particular, they use tabled values from Table C, pp. 364-365 of "The Econometric Analysis of Time Series" Harvey, (1990), and use the value given to 99 observations for any larger number of observations. In contrast, we use the approximating critical values suggested in Edgerton and Wells (1994) which allows computing relatively good approximations for any number of observations. """ # Get the approximate critical value associated with the significance # level d = max(self.nobs_diffuse, self.loglikelihood_burn) n = 0.5 * (self.nobs - d) - 1 try: ix = [0.1, 0.05, 0.025, 0.01, 0.005].index(alpha / 2) except ValueError: raise ValueError('Invalid significance level.') scalars = _cusum_squares_scalars[:, ix] crit = scalars[0] / n**0.5 + scalars[1] / n + scalars[2] / n**1.5 # Get the points for the significance bound lines if points is None: points = np.array([d, self.nobs]) line = (points - d) / (self.nobs - d) return line - crit, line + crit
Notes ----- Comparing against the cusum6 package for Stata, this does not produce exactly the same confidence bands (which are produced in cusum6 by lww, uww) because they use a different method for computing the critical value; in particular, they use tabled values from Table C, pp. 364-365 of "The Econometric Analysis of Time Series" Harvey, (1990), and use the value given to 99 observations for any larger number of observations. In contrast, we use the approximating critical values suggested in Edgerton and Wells (1994) which allows computing relatively good approximations for any number of observations.
_cusum_squares_significance_bounds
python
statsmodels/statsmodels
statsmodels/regression/recursive_ls.py
https://github.com/statsmodels/statsmodels/blob/master/statsmodels/regression/recursive_ls.py
BSD-3-Clause
def conf_int(self, obs=False, alpha=0.05): """ Returns the confidence interval of the predicted mean or, if `obs` is True, of a new observation. Parameters ---------- obs : bool, optional If True, the interval accounts for the residual variance and applies to a new observation; otherwise it applies to the predicted mean. Default is False. alpha : float, optional The significance level for the confidence interval. I.e., the default `alpha` = .05 returns a 95% confidence interval. Returns ------- ci : ndarray, (n_predictions, 2) The array has the lower and the upper limit of the confidence interval in the columns. """ se = self.se_obs if obs else self.se_mean q = self.dist.ppf(1 - alpha / 2., *self.dist_args) lower = self.predicted_mean - q * se upper = self.predicted_mean + q * se return np.column_stack((lower, upper))
Returns the confidence interval of the predicted mean or, if `obs` is True, of a new observation. Parameters ---------- obs : bool, optional If True, the interval accounts for the residual variance and applies to a new observation; otherwise it applies to the predicted mean. Default is False. alpha : float, optional The significance level for the confidence interval. I.e., the default `alpha` = .05 returns a 95% confidence interval. Returns ------- ci : ndarray, (n_predictions, 2) The array has the lower and the upper limit of the confidence interval in the columns.
conf_int
python
statsmodels/statsmodels
statsmodels/regression/_prediction.py
https://github.com/statsmodels/statsmodels/blob/master/statsmodels/regression/_prediction.py
BSD-3-Clause
def get_prediction(self, exog=None, transform=True, weights=None, row_labels=None, pred_kwds=None): """ Compute prediction results. Parameters ---------- exog : array_like, optional The values for which you want to predict. transform : bool, optional If the model was fit via a formula, do you want to pass exog through the formula. Default is True. E.g., if you fit a model y ~ log(x1) + log(x2), and transform is True, then you can pass a data structure that contains x1 and x2 in their original form. Otherwise, you'd need to log the data first. weights : array_like, optional Weights interpreted as in WLS, used for the variance of the predicted residual. row_labels : list A list of row labels to use. If not provided, read from `exog` if available. pred_kwds : dict, optional Some models can take additional keyword arguments, see the predict method of the model for the details. Returns ------- linear_model.PredictionResults The prediction results instance contains prediction and prediction variance and can on demand calculate confidence intervals and summary tables for the prediction of the mean and of new observations. """ # prepare exog and row_labels, based on base Results.predict if transform and hasattr(self.model, 'formula') and exog is not None: if isinstance(exog, pd.Series): # GH-6509 exog = pd.DataFrame(exog) exog = FormulaManager().get_matrices(self.model.data.model_spec, exog) if exog is not None: if row_labels is None: row_labels = getattr(exog, 'index', None) if callable(row_labels): row_labels = None exog = np.asarray(exog) if exog.ndim == 1: # Params informs whether a row or column vector if self.params.shape[0] > 1: exog = exog[None, :] else: exog = exog[:, None] exog = np.atleast_2d(exog) # needed in count model shape[1] else: exog = self.model.exog if weights is None: weights = getattr(self.model, 'weights', None) if row_labels is None: row_labels = getattr(self.model.data, 'row_labels', None) # need to handle other arrays, TODO: is delegating to model possible ? if weights is not None: weights = np.asarray(weights) if (weights.size > 1 and (weights.ndim != 1 or weights.shape[0] == exog.shape[1])): raise ValueError('weights has wrong shape') if pred_kwds is None: pred_kwds = {} predicted_mean = self.model.predict(self.params, exog, **pred_kwds) covb = self.cov_params() var_pred_mean = (exog * np.dot(covb, exog.T).T).sum(1) var_resid = self.scale # self.mse_resid / weights # TODO: check that we have correct scale, Refactor scale #??? # special case for now: if self.cov_type == 'fixed scale': var_resid = self.cov_kwds['scale'] if weights is not None: var_resid /= weights dist = ['norm', 't'][self.use_t] return PredictionResults(predicted_mean, var_pred_mean, var_resid, df=self.df_resid, dist=dist, row_labels=row_labels)
Compute prediction results. Parameters ---------- exog : array_like, optional The values for which you want to predict. transform : bool, optional If the model was fit via a formula, do you want to pass exog through the formula. Default is True. E.g., if you fit a model y ~ log(x1) + log(x2), and transform is True, then you can pass a data structure that contains x1 and x2 in their original form. Otherwise, you'd need to log the data first. weights : array_like, optional Weights interpreted as in WLS, used for the variance of the predicted residual. row_labels : list A list of row labels to use. If not provided, read from `exog` if available. pred_kwds : dict, optional Some models can take additional keyword arguments, see the predict method of the model for the details. Returns ------- linear_model.PredictionResults The prediction results instance contains prediction and prediction variance and can on demand calculate confidence intervals and summary tables for the prediction of the mean and of new observations.
get_prediction
python
statsmodels/statsmodels
statsmodels/regression/_prediction.py
https://github.com/statsmodels/statsmodels/blob/master/statsmodels/regression/_prediction.py
BSD-3-Clause
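A usage sketch with assumed OLS data, contrasting the mean interval with the wider new-observation interval from the conf_int method above:

import numpy as np
import statsmodels.api as sm

rng = np.random.default_rng(2)
X = sm.add_constant(rng.normal(size=(100, 1)))
y = X @ np.array([1.0, 2.0]) + rng.normal(size=100)
res = sm.OLS(y, X).fit()

pred = res.get_prediction(X[:5])
print(pred.predicted_mean)
print(pred.conf_int(alpha=0.05))    # interval for the mean
print(pred.conf_int(obs=True))      # wider interval for a new observation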
def _get_sigma(sigma, nobs): """ Returns sigma (matrix, nobs by nobs) for GLS and the inverse of its Cholesky decomposition. Handles dimensions and checks integrity. If sigma is None, returns None, None. Otherwise returns sigma, cholsigmainv. """ if sigma is None: return None, None sigma = np.asarray(sigma).squeeze() if sigma.ndim == 0: sigma = np.repeat(sigma, nobs) if sigma.ndim == 1: if sigma.shape != (nobs,): raise ValueError("Sigma must be a scalar, 1d of length %s or a 2d " "array of shape %s x %s" % (nobs, nobs, nobs)) cholsigmainv = 1/np.sqrt(sigma) else: if sigma.shape != (nobs, nobs): raise ValueError("Sigma must be a scalar, 1d of length %s or a 2d " "array of shape %s x %s" % (nobs, nobs, nobs)) cholsigmainv, info = dtrtri(cholesky(sigma, lower=True), lower=True, overwrite_c=True) if info > 0: raise np.linalg.LinAlgError('Cholesky decomposition of sigma ' 'yields a singular matrix') elif info < 0: raise ValueError('Invalid input to dtrtri (info = %d)' % info) return sigma, cholsigmainv
Returns sigma (matrix, nobs by nobs) for GLS and the inverse of its Cholesky decomposition. Handles dimensions and checks integrity. If sigma is None, returns None, None. Otherwise returns sigma, cholsigmainv.
_get_sigma
python
statsmodels/statsmodels
statsmodels/regression/linear_model.py
https://github.com/statsmodels/statsmodels/blob/master/statsmodels/regression/linear_model.py
BSD-3-Clause
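A small check of the whitening identity behind cholsigmainv, using an assumed AR(1)-style sigma:

import numpy as np
from scipy.linalg import cholesky, toeplitz

n, rho = 5, 0.6
sigma = toeplitz(rho ** np.arange(n))       # assumed covariance matrix

# Inverse of the lower Cholesky factor; whitening with it
# decorrelates the errors: L^{-1} sigma L^{-T} = I.
L = cholesky(sigma, lower=True)
cholsigmainv = np.linalg.inv(L)
assert np.allclose(cholsigmainv @ sigma @ cholsigmainv.T, np.eye(n))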
def initialize(self): """Initialize model components.""" self.wexog = self.whiten(self.exog) self.wendog = self.whiten(self.endog) # overwrite nobs from class Model: self.nobs = float(self.wexog.shape[0]) self._df_model = None self._df_resid = None self.rank = None
Initialize model components.
initialize
python
statsmodels/statsmodels
statsmodels/regression/linear_model.py
https://github.com/statsmodels/statsmodels/blob/master/statsmodels/regression/linear_model.py
BSD-3-Clause
def df_model(self): """ The model degree of freedom. The dof is defined as the rank of the regressor matrix minus 1 if a constant is included. """ if self._df_model is None: if self.rank is None: self.rank = np.linalg.matrix_rank(self.exog) self._df_model = float(self.rank - self.k_constant) return self._df_model
The model degree of freedom. The dof is defined as the rank of the regressor matrix minus 1 if a constant is included.
df_model
python
statsmodels/statsmodels
statsmodels/regression/linear_model.py
https://github.com/statsmodels/statsmodels/blob/master/statsmodels/regression/linear_model.py
BSD-3-Clause
def df_resid(self): """ The residual degree of freedom. The dof is defined as the number of observations minus the rank of the regressor matrix. """ if self._df_resid is None: if self.rank is None: self.rank = np.linalg.matrix_rank(self.exog) self._df_resid = self.nobs - self.rank return self._df_resid
The residual degree of freedom. The dof is defined as the number of observations minus the rank of the regressor matrix.
df_resid
python
statsmodels/statsmodels
statsmodels/regression/linear_model.py
https://github.com/statsmodels/statsmodels/blob/master/statsmodels/regression/linear_model.py
BSD-3-Clause
def whiten(self, x): """ Whiten method that must be overwritten by individual models. Parameters ---------- x : array_like Data to be whitened. """ raise NotImplementedError("Subclasses must implement.")
Whiten method that must be overwritten by individual models. Parameters ---------- x : array_like Data to be whitened.
whiten
python
statsmodels/statsmodels
statsmodels/regression/linear_model.py
https://github.com/statsmodels/statsmodels/blob/master/statsmodels/regression/linear_model.py
BSD-3-Clause
def fit( self, method: Literal["pinv", "qr"] = "pinv", cov_type: Literal[ "nonrobust", "fixed scale", "HC0", "HC1", "HC2", "HC3", "HAC", "hac-panel", "hac-groupsum", "cluster", ] = "nonrobust", cov_kwds=None, use_t: bool | None = None, **kwargs ): """ Full fit of the model. The results include an estimate of covariance matrix, (whitened) residuals and an estimate of scale. Parameters ---------- method : str, optional Can be "pinv", "qr". "pinv" uses the Moore-Penrose pseudoinverse to solve the least squares problem. "qr" uses the QR factorization. cov_type : str, optional See `regression.linear_model.RegressionResults` for a description of the available covariance estimators. cov_kwds : list or None, optional See `linear_model.RegressionResults.get_robustcov_results` for a description of the required keywords for alternative covariance estimators. use_t : bool, optional Flag indicating to use the Student's t distribution when computing p-values. Default behavior depends on cov_type. See `linear_model.RegressionResults.get_robustcov_results` for implementation details. **kwargs Additional keyword arguments that contain information used when constructing a model using the formula interface. Returns ------- RegressionResults The model estimation results. See Also -------- RegressionResults The results container. RegressionResults.get_robustcov_results A method to change the covariance estimator used when fitting the model. Notes ----- The fit method uses the pseudoinverse of the design/exogenous variables to solve the least squares minimization. """ if method == "pinv": if not (hasattr(self, 'pinv_wexog') and hasattr(self, 'normalized_cov_params') and hasattr(self, 'rank')): self.pinv_wexog, singular_values = pinv_extended(self.wexog) self.normalized_cov_params = np.dot( self.pinv_wexog, np.transpose(self.pinv_wexog)) # Cache these singular values for use later. self.wexog_singular_values = singular_values self.rank = np.linalg.matrix_rank(np.diag(singular_values)) beta = np.dot(self.pinv_wexog, self.wendog) elif method == "qr": if not (hasattr(self, 'exog_Q') and hasattr(self, 'exog_R') and hasattr(self, 'normalized_cov_params') and hasattr(self, 'rank')): Q, R = np.linalg.qr(self.wexog) self.exog_Q, self.exog_R = Q, R self.normalized_cov_params = np.linalg.inv(np.dot(R.T, R)) # Cache singular values from R. self.wexog_singular_values = np.linalg.svd(R, 0, 0) self.rank = np.linalg.matrix_rank(R) else: Q, R = self.exog_Q, self.exog_R # Needed for some covariance estimators, see GH #8157 self.pinv_wexog = np.linalg.pinv(self.wexog) # used in ANOVA self.effects = effects = np.dot(Q.T, self.wendog) beta = np.linalg.solve(R, effects) else: raise ValueError('method has to be "pinv" or "qr"') if self._df_model is None: self._df_model = float(self.rank - self.k_constant) if self._df_resid is None: self.df_resid = self.nobs - self.rank if isinstance(self, OLS): lfit = OLSResults( self, beta, normalized_cov_params=self.normalized_cov_params, cov_type=cov_type, cov_kwds=cov_kwds, use_t=use_t) else: lfit = RegressionResults( self, beta, normalized_cov_params=self.normalized_cov_params, cov_type=cov_type, cov_kwds=cov_kwds, use_t=use_t, **kwargs) return RegressionResultsWrapper(lfit)
Full fit of the model. The results include an estimate of covariance matrix, (whitened) residuals and an estimate of scale. Parameters ---------- method : str, optional Can be "pinv", "qr". "pinv" uses the Moore-Penrose pseudoinverse to solve the least squares problem. "qr" uses the QR factorization. cov_type : str, optional See `regression.linear_model.RegressionResults` for a description of the available covariance estimators. cov_kwds : list or None, optional See `linear_model.RegressionResults.get_robustcov_results` for a description of the required keywords for alternative covariance estimators. use_t : bool, optional Flag indicating to use the Student's t distribution when computing p-values. Default behavior depends on cov_type. See `linear_model.RegressionResults.get_robustcov_results` for implementation details. **kwargs Additional keyword arguments that contain information used when constructing a model using the formula interface. Returns ------- RegressionResults The model estimation results. See Also -------- RegressionResults The results container. RegressionResults.get_robustcov_results A method to change the covariance estimator used when fitting the model. Notes ----- The fit method uses the pseudoinverse of the design/exogenous variables to solve the least squares minimization.
fit
python
statsmodels/statsmodels
statsmodels/regression/linear_model.py
https://github.com/statsmodels/statsmodels/blob/master/statsmodels/regression/linear_model.py
BSD-3-Clause
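A sketch confirming that the two solvers agree on the point estimates, with HC3 as one of the robust covariance options named above; data are assumed:

import numpy as np
import statsmodels.api as sm

rng = np.random.default_rng(9)
X = sm.add_constant(rng.normal(size=(100, 2)))
y = X @ np.array([1.0, 2.0, 3.0]) + rng.normal(size=100)

# Same coefficients from either factorization; only the covariance differs.
res_pinv = sm.OLS(y, X).fit(method="pinv")
res_qr = sm.OLS(y, X).fit(method="qr", cov_type="HC3")
assert np.allclose(res_pinv.params, res_qr.params)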
def predict(self, params, exog=None): """ Return linear predicted values from a design matrix. Parameters ---------- params : array_like Parameters of a linear model. exog : array_like, optional Design / exogenous data. Model exog is used if None. Returns ------- array_like An array of fitted values. Notes ----- If the model has not yet been fit, params is not optional. """ # JP: this does not look correct for GLMAR # SS: it needs its own predict method if exog is None: exog = self.exog return np.dot(exog, params)
Return linear predicted values from a design matrix. Parameters ---------- params : array_like Parameters of a linear model. exog : array_like, optional Design / exogenous data. Model exog is used if None. Returns ------- array_like An array of fitted values. Notes ----- If the model has not yet been fit, params is not optional.
predict
python
statsmodels/statsmodels
statsmodels/regression/linear_model.py
https://github.com/statsmodels/statsmodels/blob/master/statsmodels/regression/linear_model.py
BSD-3-Clause
def get_distribution(self, params, scale, exog=None, dist_class=None): """ Construct a random number generator for the predictive distribution. Parameters ---------- params : array_like The model parameters (regression coefficients). scale : scalar The variance parameter. exog : array_like The predictor variable matrix. dist_class : class A random number generator class. Must take 'loc' and 'scale' as arguments and return a random number generator implementing an ``rvs`` method for simulating random values. Defaults to normal. Returns ------- gen Frozen random number generator object with mean and variance determined by the fitted linear model. Use the ``rvs`` method to generate random values. Notes ----- Due to the behavior of ``scipy.stats.distributions objects``, the returned random number generator must be called with ``gen.rvs(n)`` where ``n`` is the number of observations in the data set used to fit the model. If any other value is used for ``n``, misleading results will be produced. """ fit = self.predict(params, exog) if dist_class is None: from scipy.stats.distributions import norm dist_class = norm gen = dist_class(loc=fit, scale=np.sqrt(scale)) return gen
Construct a random number generator for the predictive distribution. Parameters ---------- params : array_like The model parameters (regression coefficients). scale : scalar The variance parameter. exog : array_like The predictor variable matrix. dist_class : class A random number generator class. Must take 'loc' and 'scale' as arguments and return a random number generator implementing an ``rvs`` method for simulating random values. Defaults to normal. Returns ------- gen Frozen random number generator object with mean and variance determined by the fitted linear model. Use the ``rvs`` method to generate random values. Notes ----- Due to the behavior of ``scipy.stats.distributions objects``, the returned random number generator must be called with ``gen.rvs(n)`` where ``n`` is the number of observations in the data set used to fit the model. If any other value is used for ``n``, misleading results will be produced.
get_distribution
python
statsmodels/statsmodels
statsmodels/regression/linear_model.py
https://github.com/statsmodels/statsmodels/blob/master/statsmodels/regression/linear_model.py
BSD-3-Clause
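A usage sketch on assumed data; calling rvs() with no argument draws one value per observation, consistent with the caveat in the Notes above:

import numpy as np
import statsmodels.api as sm

rng = np.random.default_rng(8)
X = sm.add_constant(rng.normal(size=(50, 1)))
y = X @ np.array([1.0, 1.0]) + rng.normal(size=50)
model = sm.OLS(y, X)
res = model.fit()

# Frozen normal with mean equal to the fitted values.
gen = model.get_distribution(res.params, res.scale)
sims = gen.rvs()        # one simulated endog vector, shape (50,)
print(sims.shape)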
def whiten(self, x): """ GLS whiten method. Parameters ---------- x : array_like Data to be whitened. Returns ------- ndarray The value np.dot(cholsigmainv,X). See Also -------- GLS : Fit a linear model using Generalized Least Squares. """ x = np.asarray(x) if self.sigma is None or self.sigma.shape == (): return x elif self.sigma.ndim == 1: if x.ndim == 1: return x * self.cholsigmainv else: return x * self.cholsigmainv[:, None] else: return np.dot(self.cholsigmainv, x)
GLS whiten method. Parameters ---------- x : array_like Data to be whitened. Returns ------- ndarray The value np.dot(cholsigmainv,X). See Also -------- GLS : Fit a linear model using Generalized Least Squares.
whiten
python
statsmodels/statsmodels
statsmodels/regression/linear_model.py
https://github.com/statsmodels/statsmodels/blob/master/statsmodels/regression/linear_model.py
BSD-3-Clause
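A usage sketch with an assumed AR(1) error covariance; GLS applies the whiten method above to both endog and exog before fitting:

import numpy as np
import statsmodels.api as sm
from scipy.linalg import toeplitz

rng = np.random.default_rng(3)
n = 200
X = sm.add_constant(rng.normal(size=(n, 1)))
y = X @ np.array([1.0, 0.5]) + rng.normal(size=n)

sigma = toeplitz(0.5 ** np.arange(n))   # assumed AR(1)-style covariance
res = sm.GLS(y, X, sigma=sigma).fit()
print(res.params)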
def hessian_factor(self, params, scale=None, observed=True): """ Compute weights for calculating Hessian. Parameters ---------- params : ndarray The parameter at which Hessian is evaluated. scale : None or float If scale is None, then the default scale will be calculated. Default scale is defined by `self.scaletype` and set in fit. If scale is not None, then it is used as a fixed scale. observed : bool If True, then the observed Hessian is returned. If false then the expected information matrix is returned. Returns ------- ndarray A 1d weight vector used in the calculation of the Hessian. The hessian is obtained by `(exog.T * hessian_factor).dot(exog)`. """ if self.sigma is None or self.sigma.shape == (): return np.ones(self.exog.shape[0]) elif self.sigma.ndim == 1: return self.cholsigmainv else: return np.diag(self.cholsigmainv)
Compute weights for calculating Hessian. Parameters ---------- params : ndarray The parameter at which Hessian is evaluated. scale : None or float If scale is None, then the default scale will be calculated. Default scale is defined by `self.scaletype` and set in fit. If scale is not None, then it is used as a fixed scale. observed : bool If True, then the observed Hessian is returned. If false then the expected information matrix is returned. Returns ------- ndarray A 1d weight vector used in the calculation of the Hessian. The hessian is obtained by `(exog.T * hessian_factor).dot(exog)`.
hessian_factor
python
statsmodels/statsmodels
statsmodels/regression/linear_model.py
https://github.com/statsmodels/statsmodels/blob/master/statsmodels/regression/linear_model.py
BSD-3-Clause
def whiten(self, x): """ Whitener for WLS model, multiplies each column by sqrt(self.weights). Parameters ---------- x : array_like Data to be whitened. Returns ------- array_like The whitened values sqrt(weights)*X. """ x = np.asarray(x) if x.ndim == 1: return x * np.sqrt(self.weights) elif x.ndim == 2: return np.sqrt(self.weights)[:, None] * x
Whitener for WLS model, multiplies each column by sqrt(self.weights). Parameters ---------- x : array_like Data to be whitened. Returns ------- array_like The whitened values sqrt(weights)*X.
whiten
python
statsmodels/statsmodels
statsmodels/regression/linear_model.py
https://github.com/statsmodels/statsmodels/blob/master/statsmodels/regression/linear_model.py
BSD-3-Clause
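A usage sketch with assumed heteroskedastic data; weights act as inverse variances, so whiten() scales each row by sqrt(weights):

import numpy as np
import statsmodels.api as sm

rng = np.random.default_rng(4)
n = 150
x = rng.uniform(1, 3, size=n)
X = sm.add_constant(x)
y = 1.0 + 2.0 * x + rng.normal(scale=x)   # error variance grows with x**2

res = sm.WLS(y, X, weights=1.0 / x**2).fit()
print(res.params)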
def hessian_factor(self, params, scale=None, observed=True): """ Compute the weights for calculating the Hessian. Parameters ---------- params : ndarray The parameter at which Hessian is evaluated. scale : None or float If scale is None, then the default scale will be calculated. Default scale is defined by `self.scaletype` and set in fit. If scale is not None, then it is used as a fixed scale. observed : bool If True, then the observed Hessian is returned. If false then the expected information matrix is returned. Returns ------- ndarray A 1d weight vector used in the calculation of the Hessian. The hessian is obtained by `(exog.T * hessian_factor).dot(exog)`. """ return self.weights
Compute the weights for calculating the Hessian. Parameters ---------- params : ndarray The parameter at which Hessian is evaluated. scale : None or float If scale is None, then the default scale will be calculated. Default scale is defined by `self.scaletype` and set in fit. If scale is not None, then it is used as a fixed scale. observed : bool If True, then the observed Hessian is returned. If false then the expected information matrix is returned. Returns ------- ndarray A 1d weight vector used in the calculation of the Hessian. The hessian is obtained by `(exog.T * hessian_factor).dot(exog)`.
hessian_factor
python
statsmodels/statsmodels
statsmodels/regression/linear_model.py
https://github.com/statsmodels/statsmodels/blob/master/statsmodels/regression/linear_model.py
BSD-3-Clause
def loglike(self, params, scale=None): """ The likelihood function for the OLS model. Parameters ---------- params : array_like The coefficients with which to estimate the log-likelihood. scale : float or None If None, return the profile (concentrated) log likelihood (profiled over the scale parameter), else return the log-likelihood using the given scale value. Returns ------- float The likelihood function evaluated at params. """ nobs2 = self.nobs / 2.0 nobs = float(self.nobs) resid = self.endog - np.dot(self.exog, params) if hasattr(self, 'offset'): resid -= self.offset ssr = np.sum(resid**2) if scale is None: # profile log likelihood llf = -nobs2*np.log(2*np.pi) - nobs2*np.log(ssr / nobs) - nobs2 else: # log-likelihood llf = -nobs2 * np.log(2 * np.pi * scale) - ssr / (2*scale) return llf
The likelihood function for the OLS model. Parameters ---------- params : array_like The coefficients with which to estimate the log-likelihood. scale : float or None If None, return the profile (concentrated) log likelihood (profiled over the scale parameter), else return the log-likelihood using the given scale value. Returns ------- float The likelihood function evaluated at params.
loglike
python
statsmodels/statsmodels
statsmodels/regression/linear_model.py
https://github.com/statsmodels/statsmodels/blob/master/statsmodels/regression/linear_model.py
BSD-3-Clause
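A quick numerical check of the profile log-likelihood formula above against the reported llf, with assumed data:

import numpy as np
import statsmodels.api as sm

rng = np.random.default_rng(5)
X = sm.add_constant(rng.normal(size=(100, 1)))
y = X @ np.array([1.0, -0.5]) + rng.normal(size=100)
res = sm.OLS(y, X).fit()

# Concentrated log-likelihood: -n/2 * (log(2*pi) + log(ssr/n) + 1).
n, ssr = len(y), res.ssr
llf = -n / 2 * (np.log(2 * np.pi) + np.log(ssr / n) + 1)
assert np.isclose(llf, res.llf)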
def whiten(self, x): """ OLS model whitener does nothing. Parameters ---------- x : array_like Data to be whitened. Returns ------- array_like The input array unmodified. See Also -------- OLS : Fit a linear model using Ordinary Least Squares. """ return x
OLS model whitener does nothing. Parameters ---------- x : array_like Data to be whitened. Returns ------- array_like The input array unmodified. See Also -------- OLS : Fit a linear model using Ordinary Least Squares.
whiten
python
statsmodels/statsmodels
statsmodels/regression/linear_model.py
https://github.com/statsmodels/statsmodels/blob/master/statsmodels/regression/linear_model.py
BSD-3-Clause
def score(self, params, scale=None): """ Evaluate the score function at a given point. The score corresponds to the profile (concentrated) log-likelihood in which the scale parameter has been profiled out. Parameters ---------- params : array_like The parameter vector at which the score function is computed. scale : float or None If None, return the profile (concentrated) log likelihood (profiled over the scale parameter), else return the log-likelihood using the given scale value. Returns ------- ndarray The score vector. """ if not hasattr(self, "_wexog_xprod"): self._setup_score_hess() xtxb = np.dot(self._wexog_xprod, params) sdr = -self._wexog_x_wendog + xtxb if scale is None: ssr = self._wendog_xprod - 2 * np.dot(self._wexog_x_wendog.T, params) ssr += np.dot(params, xtxb) return -self.nobs * sdr / ssr else: return -sdr / scale
Evaluate the score function at a given point. The score corresponds to the profile (concentrated) log-likelihood in which the scale parameter has been profiled out. Parameters ---------- params : array_like The parameter vector at which the score function is computed. scale : float or None If None, return the profile (concentrated) log likelihood (profiled over the scale parameter), else return the log-likelihood using the given scale value. Returns ------- ndarray The score vector.
score
python
statsmodels/statsmodels
statsmodels/regression/linear_model.py
https://github.com/statsmodels/statsmodels/blob/master/statsmodels/regression/linear_model.py
BSD-3-Clause
def hessian(self, params, scale=None): """ Evaluate the Hessian function at a given point. Parameters ---------- params : array_like The parameter vector at which the Hessian is computed. scale : float or None If None, return the profile (concentrated) log likelihood (profiled over the scale parameter), else return the log-likelihood using the given scale value. Returns ------- ndarray The Hessian matrix. """ if not hasattr(self, "_wexog_xprod"): self._setup_score_hess() xtxb = np.dot(self._wexog_xprod, params) if scale is None: ssr = self._wendog_xprod - 2 * np.dot(self._wexog_x_wendog.T, params) ssr += np.dot(params, xtxb) ssrp = -2*self._wexog_x_wendog + 2*xtxb hm = self._wexog_xprod / ssr - np.outer(ssrp, ssrp) / ssr**2 return -self.nobs * hm / 2 else: return -self._wexog_xprod / scale
Evaluate the Hessian function at a given point. Parameters ---------- params : array_like The parameter vector at which the Hessian is computed. scale : float or None If None, return the profile (concentrated) log likelihood (profiled over the scale parameter), else return the log-likelihood using the given scale value. Returns ------- ndarray The Hessian matrix.
hessian
python
statsmodels/statsmodels
statsmodels/regression/linear_model.py
https://github.com/statsmodels/statsmodels/blob/master/statsmodels/regression/linear_model.py
BSD-3-Clause
def hessian_factor(self, params, scale=None, observed=True): """ Calculate the weights for the Hessian. Parameters ---------- params : ndarray The parameter at which Hessian is evaluated. scale : None or float If scale is None, then the default scale will be calculated. Default scale is defined by `self.scaletype` and set in fit. If scale is not None, then it is used as a fixed scale. observed : bool If True, then the observed Hessian is returned. If false then the expected information matrix is returned. Returns ------- ndarray A 1d weight vector used in the calculation of the Hessian. The hessian is obtained by `(exog.T * hessian_factor).dot(exog)`. """ return np.ones(self.exog.shape[0])
Calculate the weights for the Hessian. Parameters ---------- params : ndarray The parameter at which Hessian is evaluated. scale : None or float If scale is None, then the default scale will be calculated. Default scale is defined by `self.scaletype` and set in fit. If scale is not None, then it is used as a fixed scale. observed : bool If True, then the observed Hessian is returned. If false then the expected information matrix is returned. Returns ------- ndarray A 1d weight vector used in the calculation of the Hessian. The hessian is obtained by `(exog.T * hessian_factor).dot(exog)`.
hessian_factor
python
statsmodels/statsmodels
statsmodels/regression/linear_model.py
https://github.com/statsmodels/statsmodels/blob/master/statsmodels/regression/linear_model.py
BSD-3-Clause
def _fit_ridge(self, alpha): """ Fit a linear model using ridge regression. Parameters ---------- alpha : scalar or array_like The penalty weight. If a scalar, the same penalty weight applies to all variables in the model. If a vector, it must have the same length as `params`, and contains a penalty weight for each coefficient. Notes ----- Equivalent to fit_regularized with L1_wt = 0 (but implemented more efficiently). """ u, s, vt = np.linalg.svd(self.exog, 0) v = vt.T q = np.dot(u.T, self.endog) * s s2 = s * s if np.isscalar(alpha): sd = s2 + alpha * self.nobs params = q / sd params = np.dot(v, params) else: alpha = np.asarray(alpha) vtav = self.nobs * np.dot(vt, alpha[:, None] * v) d = np.diag(vtav) + s2 np.fill_diagonal(vtav, d) r = np.linalg.solve(vtav, q) params = np.dot(v, r) from statsmodels.base.elastic_net import RegularizedResults return RegularizedResults(self, params)
Fit a linear model using ridge regression. Parameters ---------- alpha : scalar or array_like The penalty weight. If a scalar, the same penalty weight applies to all variables in the model. If a vector, it must have the same length as `params`, and contains a penalty weight for each coefficient. Notes ----- Equivalent to fit_regularized with L1_wt = 0 (but implemented more efficiently).
_fit_ridge
python
statsmodels/statsmodels
statsmodels/regression/linear_model.py
https://github.com/statsmodels/statsmodels/blob/master/statsmodels/regression/linear_model.py
BSD-3-Clause
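The ridge path is reached through fit_regularized with L1_wt=0, as the Notes above state; a usage sketch on assumed data:

import numpy as np
import statsmodels.api as sm

rng = np.random.default_rng(6)
X = rng.normal(size=(100, 5))
y = X @ np.array([1.0, 0.0, 0.5, 0.0, -1.0]) + rng.normal(size=100)

# L1_wt=0 selects pure ridge, dispatching to the SVD-based solver
# above rather than coordinate descent.
res = sm.OLS(y, X).fit_regularized(alpha=0.1, L1_wt=0.0)
print(res.params)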
def iterative_fit(self, maxiter=3, rtol=1e-4, **kwargs): """ Perform an iterative two-stage procedure to estimate a GLS model. The model is assumed to have AR(p) errors; the AR(p) parameters and regression coefficients are estimated iteratively. Parameters ---------- maxiter : int, optional The number of iterations. rtol : float, optional Relative tolerance between estimated coefficients to stop the estimation. Stops if max(abs(last - current) / abs(last)) < rtol. **kwargs Additional keyword arguments passed to `fit`. Returns ------- RegressionResults The results computed using an iterative fit. """ # TODO: update this after going through example. converged = False i = -1 # need to initialize for maxiter < 1 (skip loop) history = {'params': [], 'rho': [self.rho]} for i in range(maxiter - 1): if hasattr(self, 'pinv_wexog'): del self.pinv_wexog self.initialize() results = self.fit() history['params'].append(results.params) if i == 0: last = results.params else: diff = np.max(np.abs(last - results.params) / np.abs(last)) if diff < rtol: converged = True break last = results.params self.rho, _ = yule_walker(results.resid, order=self.order, df=None) history['rho'].append(self.rho) # why not another call to self.initialize # Use kwarg to insert history if not converged and maxiter > 0: # maxiter <= 0 just does OLS if hasattr(self, 'pinv_wexog'): del self.pinv_wexog self.initialize() # if converged then this is a duplicate fit, because we did not # update rho results = self.fit(history=history, **kwargs) results.iter = i + 1 # add last fit to history, not if duplicate fit if not converged: results.history['params'].append(results.params) results.iter += 1 results.converged = converged return results
Perform an iterative two-stage procedure to estimate a GLS model. The model is assumed to have AR(p) errors; the AR(p) parameters and regression coefficients are estimated iteratively. Parameters ---------- maxiter : int, optional The number of iterations. rtol : float, optional Relative tolerance between estimated coefficients to stop the estimation. Stops if max(abs(last - current) / abs(last)) < rtol. **kwargs Additional keyword arguments passed to `fit`. Returns ------- RegressionResults The results computed using an iterative fit.
iterative_fit
python
statsmodels/statsmodels
statsmodels/regression/linear_model.py
https://github.com/statsmodels/statsmodels/blob/master/statsmodels/regression/linear_model.py
BSD-3-Clause
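A usage sketch with assumed simulated AR(1) errors; passing an integer rho to GLSAR sets the autoregressive order:

import numpy as np
import statsmodels.api as sm

rng = np.random.default_rng(7)
n = 300
X = sm.add_constant(rng.normal(size=(n, 1)))
e = np.zeros(n)
for t in range(1, n):               # simulate AR(1) errors with rho = 0.6
    e[t] = 0.6 * e[t - 1] + rng.normal()
y = X @ np.array([1.0, 2.0]) + e

model = sm.GLSAR(y, X, rho=1)       # AR(1) error structure
res = model.iterative_fit(maxiter=10)
print(model.rho, res.converged)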
def whiten(self, x): """ Whiten a series of columns according to an AR(p) covariance structure. Whitening using this method drops the initial p observations. Parameters ---------- x : array_like The data to be whitened. Returns ------- ndarray The whitened data. """ # TODO: notation for AR process x = np.asarray(x, np.float64) _x = x.copy() # the following loops over the first axis, works for 1d and nd for i in range(self.order): _x[(i + 1):] = _x[(i + 1):] - self.rho[i] * x[0:-(i + 1)] return _x[self.order:]
Whiten a series of columns according to an AR(p) covariance structure. Whitening using this method drops the initial p observations. Parameters ---------- x : array_like The data to be whitened. Returns ------- ndarray The whitened data.
whiten
python
statsmodels/statsmodels
statsmodels/regression/linear_model.py
https://github.com/statsmodels/statsmodels/blob/master/statsmodels/regression/linear_model.py
BSD-3-Clause
def yule_walker(x, order=1, method="adjusted", df=None, inv=False, demean=True): """ Estimate AR(p) parameters from a sequence using the Yule-Walker equations. Adjusted or maximum-likelihood estimator (mle) Parameters ---------- x : array_like A 1d array. order : int, optional The order of the autoregressive process. Default is 1. method : str, optional Method can be 'adjusted' or 'mle' and this determines denominator in estimate of autocorrelation function (ACF) at lag k. If 'mle', the denominator is n=X.shape[0], if 'adjusted' the denominator is n-k. The default is adjusted. df : int, optional Specifies the degrees of freedom. If `df` is supplied, then it is assumed that X has `df` degrees of freedom rather than `n`. Default is None. inv : bool If inv is True the inverse of R is also returned. Default is False. demean : bool If True, the mean is subtracted from `X` before estimation. Returns ------- rho : ndarray AR(p) coefficients computed using the Yule-Walker method. sigma : float The estimate of the residual standard deviation. See Also -------- burg : Burg's AR estimator. Notes ----- See https://en.wikipedia.org/wiki/Autoregressive_moving_average_model for further details. Examples -------- >>> import statsmodels.api as sm >>> from statsmodels.datasets.sunspots import load >>> data = load() >>> rho, sigma = sm.regression.yule_walker(data.endog, order=4, ... method="mle") >>> rho array([ 1.28310031, -0.45240924, -0.20770299, 0.04794365]) >>> sigma 16.808022730464351 """ # TODO: define R better, look back at notes and technical notes on YW. # First link here is useful # http://www-stat.wharton.upenn.edu/~steele/Courses/956/ResourceDetails/YuleWalkerAndMore.htm method = string_like( method, "method", options=("adjusted", "unbiased", "mle") ) if method == "unbiased": warnings.warn( "unbiased is deprecated in favor of adjusted to reflect that the " "term is adjusting the sample size used in the autocovariance " "calculation rather than estimating an unbiased autocovariance. " "After release 0.13, using 'unbiased' will raise.", FutureWarning, ) method = "adjusted" if method not in ("adjusted", "mle"): raise ValueError("ACF estimation method must be 'adjusted' or 'MLE'") # TODO: Require?? x = np.array(x, dtype=np.float64) if demean: if not x.flags.writeable: x = np.require(x, requirements="W") x -= x.mean() n = df or x.shape[0] # this handles df_resid ie., n - p adj_needed = method == "adjusted" if x.ndim > 1 and x.shape[1] != 1: raise ValueError("expecting a vector to estimate AR parameters") r = np.zeros(order+1, np.float64) r[0] = (x ** 2).sum() / n for k in range(1, order+1): r[k] = (x[0:-k] * x[k:]).sum() / (n - k * adj_needed) R = toeplitz(r[:-1]) try: rho = np.linalg.solve(R, r[1:]) except np.linalg.LinAlgError as err: if 'Singular matrix' in str(err): warnings.warn("Matrix is singular. Using pinv.", ValueWarning) rho = np.linalg.pinv(R) @ r[1:] else: raise sigmasq = r[0] - (r[1:]*rho).sum() if not np.isnan(sigmasq) and sigmasq > 0: sigma = np.sqrt(sigmasq) else: sigma = np.nan if inv: return rho, sigma, np.linalg.inv(R) else: return rho, sigma
Estimate AR(p) parameters from a sequence using the Yule-Walker equations. Adjusted or maximum-likelihood estimator (mle) Parameters ---------- x : array_like A 1d array. order : int, optional The order of the autoregressive process. Default is 1. method : str, optional Method can be 'adjusted' or 'mle' and this determines denominator in estimate of autocorrelation function (ACF) at lag k. If 'mle', the denominator is n=X.shape[0], if 'adjusted' the denominator is n-k. The default is adjusted. df : int, optional Specifies the degrees of freedom. If `df` is supplied, then it is assumed that X has `df` degrees of freedom rather than `n`. Default is None. inv : bool If inv is True the inverse of R is also returned. Default is False. demean : bool If True, the mean is subtracted from `X` before estimation. Returns ------- rho : ndarray AR(p) coefficients computed using the Yule-Walker method. sigma : float The estimate of the residual standard deviation. See Also -------- burg : Burg's AR estimator. Notes ----- See https://en.wikipedia.org/wiki/Autoregressive_moving_average_model for further details. Examples -------- >>> import statsmodels.api as sm >>> from statsmodels.datasets.sunspots import load >>> data = load() >>> rho, sigma = sm.regression.yule_walker(data.endog, order=4, ... method="mle") >>> rho array([ 1.28310031, -0.45240924, -0.20770299, 0.04794365]) >>> sigma 16.808022730464351
yule_walker
python
statsmodels/statsmodels
statsmodels/regression/linear_model.py
https://github.com/statsmodels/statsmodels/blob/master/statsmodels/regression/linear_model.py
BSD-3-Clause