Dataset columns (one row per function):

  code       string, lengths 26 to 870k characters
  docstring  string, lengths 1 to 65.6k characters
  func_name  string, lengths 1 to 194 characters
  language   string, 1 distinct value
  repo       string, lengths 8 to 68 characters
  path       string, lengths 5 to 194 characters
  url        string, lengths 46 to 254 characters
  license    string, 4 distinct values

func_name: resid_prob (python, BSD-3-Clause)
repo: statsmodels/statsmodels
path: statsmodels/miscmodels/ordinal_model.py
url: https://github.com/statsmodels/statsmodels/blob/master/statsmodels/miscmodels/ordinal_model.py

def resid_prob(self):
    """probability residual

    Probability-scale residual is ``P(Y < y) - P(Y > y)`` where `Y` is the
    observed choice and ``y`` is a random variable corresponding to the
    predicted distribution.

    References
    ----------
    Shepherd BE, Li C, Liu Q (2016) Probability-scale residuals for
    continuous, discrete, and censored data. The Canadian Journal of
    Statistics. 44:463–476.

    Li C and Shepherd BE (2012) A new residual for ordinal outcomes.
    Biometrika. 99:473–480.
    """
    from statsmodels.stats.diagnostic_gen import prob_larger_ordinal_choice
    endog = self.model.endog
    fitted = self.predict()
    r = prob_larger_ordinal_choice(fitted)[1]
    resid_prob = r[np.arange(endog.shape[0]), endog]
    return resid_prob

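A hedged usage sketch for the row above (not a dataset row): synthetic ordinal data fit with statsmodels' OrderedModel; all variable names are illustrative.

import numpy as np
import pandas as pd
from statsmodels.miscmodels.ordinal_model import OrderedModel

rng = np.random.default_rng(0)
x = rng.normal(size=(200, 2))
latent = x @ np.array([1.0, -0.5]) + rng.logistic(size=200)
# Bin the latent variable into four ordered categories
y = pd.Series(np.digitize(latent, bins=[-1.0, 0.5, 2.0]))
y = y.astype(pd.CategoricalDtype(categories=[0, 1, 2, 3], ordered=True))

res = OrderedModel(y, x, distr="logit").fit(method="bfgs", disp=False)
psr = res.resid_prob  # one probability-scale residual per observation, in [-1, 1]
print(psr[:5])
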
func_name: logposterior (python, BSD-3-Clause)
repo: statsmodels/statsmodels
path: statsmodels/genmod/bayes_mixed_glm.py
url: https://github.com/statsmodels/statsmodels/blob/master/statsmodels/genmod/bayes_mixed_glm.py

def logposterior(self, params):
    """
    The overall log-density: log p(y, fe, vc, vcp).

    This differs by an additive constant from the log posterior
    log p(fe, vc, vcp | y).
    """
    fep, vcp, vc = self._unpack(params)

    # Contributions from p(y | x, vc)
    lp = 0
    if self.k_fep > 0:
        lp += np.dot(self.exog, fep)
    if self.k_vc > 0:
        lp += self.exog_vc.dot(vc)

    mu = self.family.link.inverse(lp)
    ll = self.family.loglike(self.endog, mu)

    if self.k_vc > 0:
        # Contributions from p(vc | vcp)
        vcp0 = vcp[self.ident]
        s = np.exp(vcp0)
        ll -= 0.5 * np.sum(vc**2 / s**2) + np.sum(vcp0)

        # Contributions from p(vcp)
        ll -= 0.5 * np.sum(vcp**2 / self.vcp_p**2)

    # Contributions from p(fep)
    if self.k_fep > 0:
        ll -= 0.5 * np.sum(fep**2 / self.fe_p**2)

    return ll

func_name: logposterior_grad (python, BSD-3-Clause)
repo: statsmodels/statsmodels
path: statsmodels/genmod/bayes_mixed_glm.py
url: https://github.com/statsmodels/statsmodels/blob/master/statsmodels/genmod/bayes_mixed_glm.py

def logposterior_grad(self, params):
    """
    The gradient of the log posterior.
    """
    fep, vcp, vc = self._unpack(params)

    lp = 0
    if self.k_fep > 0:
        lp += np.dot(self.exog, fep)
    if self.k_vc > 0:
        lp += self.exog_vc.dot(vc)

    mu = self.family.link.inverse(lp)

    score_factor = (self.endog - mu) / self.family.link.deriv(mu)
    score_factor /= self.family.variance(mu)

    te = [None, None, None]

    # Contributions from p(y | x, z, vc)
    if self.k_fep > 0:
        te[0] = np.dot(score_factor, self.exog)
    if self.k_vc > 0:
        te[2] = self.exog_vc.transpose().dot(score_factor)

    if self.k_vc > 0:
        # Contributions from p(vc | vcp)
        # vcp0 = vcp[self.ident]
        # s = np.exp(vcp0)
        # ll -= 0.5 * np.sum(vc**2 / s**2) + np.sum(vcp0)
        vcp0 = vcp[self.ident]
        s = np.exp(vcp0)
        u = vc**2 / s**2 - 1
        te[1] = np.bincount(self.ident, weights=u)
        te[2] -= vc / s**2

        # Contributions from p(vcp)
        # ll -= 0.5 * np.sum(vcp**2 / self.vcp_p**2)
        te[1] -= vcp / self.vcp_p**2

    # Contributions from p(fep)
    if self.k_fep > 0:
        te[0] -= fep / self.fe_p**2

    te = [x for x in te if x is not None]

    return np.concatenate(te)

func_name: from_formula (python, BSD-3-Clause)
repo: statsmodels/statsmodels
path: statsmodels/genmod/bayes_mixed_glm.py
url: https://github.com/statsmodels/statsmodels/blob/master/statsmodels/genmod/bayes_mixed_glm.py

def from_formula(cls, formula, vc_formulas, data, family=None, vcp_p=1,
                 fe_p=2):
    """
    Fit a BayesMixedGLM using a formula.

    Parameters
    ----------
    formula : str
        Formula for the endog and fixed effects terms (use ~ to
        separate dependent and independent expressions).
    vc_formulas : dictionary
        vc_formulas[name] is a one-sided formula that creates one
        collection of random effects with a common variance
        parameter.  If using categorical (factor) variables to
        produce variance components, note that generally `0 + ...`
        should be used so that an intercept is not included.
    data : data frame
        The data to which the formulas are applied.
    family : genmod.families instance
        A GLM family.
    vcp_p : float
        The prior standard deviation for the logarithms of the
        standard deviations of the random effects.
    fe_p : float
        The prior standard deviation for the fixed effects parameters.
    """
    ident = []
    exog_vc = []
    vcp_names = []
    j = 0
    for na, fml in vc_formulas.items():
        mgr = FormulaManager()
        mat = mgr.get_matrices(fml, data, pandas=True)
        exog_vc.append(mat)
        vcp_names.append(na)
        ident.append(j * np.ones(mat.shape[1], dtype=np.int_))
        j += 1

    exog_vc = pd.concat(exog_vc, axis=1)
    vc_names = exog_vc.columns.tolist()

    ident = np.concatenate(ident)

    model = super().from_formula(
        formula, data=data, family=family, subset=None,
        exog_vc=exog_vc, ident=ident, vc_names=vc_names,
        vcp_names=vcp_names, fe_p=fe_p, vcp_p=vcp_p)

    return model

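A hedged construction sketch (not a dataset row): the public subclasses such as BinomialBayesMixedGLM expose this classmethod with the family fixed, so no `family` argument is passed. The data frame, its columns, and the component name "group" are synthetic.

import numpy as np
import pandas as pd
from statsmodels.genmod.bayes_mixed_glm import BinomialBayesMixedGLM

rng = np.random.default_rng(0)
df = pd.DataFrame({"x": rng.normal(size=400),
                   "g": np.repeat(np.arange(40), 10)})
u = rng.normal(scale=0.5, size=40)          # true random intercepts
lp = -0.5 + df["x"] + u[df["g"]]
df["y"] = rng.binomial(1, 1 / (1 + np.exp(-lp)))

# One variance component: a random intercept per level of g.
# The `0 +` suppresses the intercept within the component.
model = BinomialBayesMixedGLM.from_formula(
    "y ~ x", {"group": "0 + C(g)"}, data=df)
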
func_name: fit (python, BSD-3-Clause)
repo: statsmodels/statsmodels
path: statsmodels/genmod/bayes_mixed_glm.py
url: https://github.com/statsmodels/statsmodels/blob/master/statsmodels/genmod/bayes_mixed_glm.py

def fit(self, method="BFGS", minim_opts=None):
    """
    fit is equivalent to fit_map.

    See fit_map for parameter information.

    Use `fit_vb` to fit the model using variational Bayes.
    """
    # Return the results instance so that fit really is
    # interchangeable with fit_map, as the docstring states.
    return self.fit_map(method, minim_opts)

func_name: fit_map (python, BSD-3-Clause)
repo: statsmodels/statsmodels
path: statsmodels/genmod/bayes_mixed_glm.py
url: https://github.com/statsmodels/statsmodels/blob/master/statsmodels/genmod/bayes_mixed_glm.py

def fit_map(self, method="BFGS", minim_opts=None, scale_fe=False):
    """
    Construct the Laplace approximation to the posterior distribution.

    Parameters
    ----------
    method : str
        Optimization method for finding the posterior mode.
    minim_opts : dict
        Options passed to scipy.minimize.
    scale_fe : bool
        If True, the columns of the fixed effects design matrix are
        centered and scaled to unit variance before fitting the model.
        The results are back-transformed so that the results are
        presented on the original scale.

    Returns
    -------
    BayesMixedGLMResults instance.
    """
    if scale_fe:
        mn = self.exog.mean(0)
        sc = self.exog.std(0)
        self._exog_save = self.exog
        self.exog = self.exog.copy()
        ixs = np.flatnonzero(sc > 1e-8)
        self.exog[:, ixs] -= mn[ixs]
        self.exog[:, ixs] /= sc[ixs]

    def fun(params):
        return -self.logposterior(params)

    def grad(params):
        return -self.logposterior_grad(params)

    start = self._get_start()

    r = minimize(fun, start, method=method, jac=grad,
                 options=minim_opts)
    if not r.success:
        msg = ("Laplace fitting did not converge, |gradient|=%.6f" %
               np.sqrt(np.sum(r.jac**2)))
        warnings.warn(msg)

    from statsmodels.tools.numdiff import approx_fprime
    hess = approx_fprime(r.x, grad)
    cov = np.linalg.inv(hess)

    params = r.x
    if scale_fe:
        self.exog = self._exog_save
        del self._exog_save
        params[ixs] /= sc[ixs]
        # Use np.ix_ so the in-place division hits cov itself;
        # chained fancy indexing would write to a copy.
        cov[np.ix_(ixs, ixs)] /= np.outer(sc[ixs], sc[ixs])

    return BayesMixedGLMResults(self, params, cov, optim_retvals=r)

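Continuing the hypothetical from_formula sketch above, a Laplace (MAP) fit is a one-liner; `model` is the illustrative instance built earlier.

result = model.fit_map()
print(result.summary())
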
func_name: predict (python, BSD-3-Clause)
repo: statsmodels/statsmodels
path: statsmodels/genmod/bayes_mixed_glm.py
url: https://github.com/statsmodels/statsmodels/blob/master/statsmodels/genmod/bayes_mixed_glm.py

def predict(self, params, exog=None, linear=False):
    """
    Return the fitted mean structure.

    Parameters
    ----------
    params : array_like
        The parameter vector, may be the full parameter vector, or may
        be truncated to include only the mean parameters.
    exog : array_like
        The design matrix for the mean structure.  If omitted, use the
        model's design matrix.
    linear : bool
        If True, return the linear predictor without passing through
        the link function.

    Returns
    -------
    A 1-dimensional array of predicted values.
    """
    if exog is None:
        exog = self.exog

    q = exog.shape[1]
    pr = np.dot(exog, params[0:q])

    if not linear:
        pr = self.family.link.inverse(pr)

    return pr

func_name: vb_elbo_base (python, BSD-3-Clause)
repo: statsmodels/statsmodels
path: statsmodels/genmod/bayes_mixed_glm.py
url: https://github.com/statsmodels/statsmodels/blob/master/statsmodels/genmod/bayes_mixed_glm.py

def vb_elbo_base(self, h, tm, fep_mean, vcp_mean, vc_mean, fep_sd,
                 vcp_sd, vc_sd):
    """
    Returns the evidence lower bound (ELBO) for the model.

    This function calculates the family-specific ELBO function
    based on information provided from a subclass.

    Parameters
    ----------
    h : function mapping 1d vector to 1d vector
        The contribution of the model to the ELBO function can be
        expressed as y_i*lp_i + Eh_i(z), where y_i and lp_i are the
        response and linear predictor for observation i, and z is a
        standard normal random variable.  This formulation can be
        achieved for any GLM with a canonical link function.
    """
    # p(y | vc) contributions
    iv = 0
    for w in glw:
        z = self.rng * w[1]
        iv += w[0] * h(z) * np.exp(-z**2 / 2)
    iv /= np.sqrt(2 * np.pi)
    iv *= self.rng
    iv += self.endog * tm
    iv = iv.sum()

    # p(vc | vcp) * p(vcp) * p(fep) contributions
    iv += self._elbo_common(fep_mean, fep_sd, vcp_mean, vcp_sd,
                            vc_mean, vc_sd)

    r = (iv + np.sum(np.log(fep_sd)) + np.sum(np.log(vcp_sd)) +
         np.sum(np.log(vc_sd)))

    return r

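The glw loop above appears to be Gauss-Legendre quadrature after the substitution z = rng*t, approximating a truncated standard-normal expectation E[h(Z)]. A standalone numpy check of that identity, a sketch only: the half-width 5.0 stands in for the class attribute `rng`, and `glw` itself is a module-level node/weight table in the source file.

import numpy as np

nodes, weights = np.polynomial.legendre.leggauss(20)
half_width = 5.0  # plays the role of self.rng

def h(z):
    return z ** 2  # E[Z**2] = 1 for standard normal Z

# E[h(Z)] ~= half_width/sqrt(2*pi) * sum_i w_i h(half_width*t_i)
#            * exp(-(half_width*t_i)**2 / 2)
approx = half_width / np.sqrt(2 * np.pi) * np.sum(
    weights * h(half_width * nodes)
    * np.exp(-(half_width * nodes) ** 2 / 2))
print(approx)  # close to 1.0
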
func_name: vb_elbo_grad_base (python, BSD-3-Clause)
repo: statsmodels/statsmodels
path: statsmodels/genmod/bayes_mixed_glm.py
url: https://github.com/statsmodels/statsmodels/blob/master/statsmodels/genmod/bayes_mixed_glm.py

def vb_elbo_grad_base(self, h, tm, tv, fep_mean, vcp_mean, vc_mean,
                      fep_sd, vcp_sd, vc_sd):
    """
    Return the gradient of the ELBO function.

    See vb_elbo_base for parameters.
    """
    fep_mean_grad = 0.
    fep_sd_grad = 0.
    vcp_mean_grad = 0.
    vcp_sd_grad = 0.
    vc_mean_grad = 0.
    vc_sd_grad = 0.

    # p(y | vc) contributions
    for w in glw:
        z = self.rng * w[1]
        u = h(z) * np.exp(-z**2 / 2) / np.sqrt(2 * np.pi)
        r = u / np.sqrt(tv)
        fep_mean_grad += w[0] * np.dot(u, self.exog)
        vc_mean_grad += w[0] * self.exog_vc.transpose().dot(u)
        fep_sd_grad += w[0] * z * np.dot(r, self.exog**2 * fep_sd)
        v = self.exog_vc2.multiply(vc_sd).transpose().dot(r)
        v = np.squeeze(np.asarray(v))
        vc_sd_grad += w[0] * z * v

    fep_mean_grad *= self.rng
    vc_mean_grad *= self.rng
    fep_sd_grad *= self.rng
    vc_sd_grad *= self.rng
    fep_mean_grad += np.dot(self.endog, self.exog)
    vc_mean_grad += self.exog_vc.transpose().dot(self.endog)

    (fep_mean_grad_i, fep_sd_grad_i, vcp_mean_grad_i, vcp_sd_grad_i,
     vc_mean_grad_i, vc_sd_grad_i) = self._elbo_grad_common(
         fep_mean, fep_sd, vcp_mean, vcp_sd, vc_mean, vc_sd)

    fep_mean_grad += fep_mean_grad_i
    fep_sd_grad += fep_sd_grad_i
    vcp_mean_grad += vcp_mean_grad_i
    vcp_sd_grad += vcp_sd_grad_i
    vc_mean_grad += vc_mean_grad_i
    vc_sd_grad += vc_sd_grad_i

    fep_sd_grad += 1 / fep_sd
    vcp_sd_grad += 1 / vcp_sd
    vc_sd_grad += 1 / vc_sd

    mean_grad = np.concatenate((fep_mean_grad, vcp_mean_grad,
                                vc_mean_grad))
    sd_grad = np.concatenate((fep_sd_grad, vcp_sd_grad, vc_sd_grad))

    if self.verbose:
        print("|G|=%f" %
              np.sqrt(np.sum(mean_grad**2) + np.sum(sd_grad**2)))

    return mean_grad, sd_grad

func_name: fit_vb (python, BSD-3-Clause)
repo: statsmodels/statsmodels
path: statsmodels/genmod/bayes_mixed_glm.py
url: https://github.com/statsmodels/statsmodels/blob/master/statsmodels/genmod/bayes_mixed_glm.py

def fit_vb(self, mean=None, sd=None, fit_method="BFGS",
           minim_opts=None, scale_fe=False, verbose=False):
    """
    Fit a model using the variational Bayes mean field approximation.

    Parameters
    ----------
    mean : array_like
        Starting value for VB mean vector.
    sd : array_like
        Starting value for VB standard deviation vector.
    fit_method : str
        Algorithm for scipy.minimize.
    minim_opts : dict
        Options passed to scipy.minimize.
    scale_fe : bool
        If True, the columns of the fixed effects design matrix are
        centered and scaled to unit variance before fitting the model.
        The results are back-transformed so that the results are
        presented on the original scale.
    verbose : bool
        If True, print the gradient norm to the screen each time it
        is calculated.

    Notes
    -----
    The goal is to find a factored Gaussian approximation q1*q2*...
    to the posterior distribution, approximately minimizing the KL
    divergence from the factored approximation to the actual
    posterior.  The KL divergence, or ELBO function, has the form

        E* log p(y, fe, vcp, vc) - E* log q

    where E* is expectation with respect to the product of qj.

    References
    ----------
    Blei, Kucukelbir, McAuliffe (2017).  Variational Inference: A
    review for Statisticians.
    https://arxiv.org/pdf/1601.00670.pdf
    """
    self.verbose = verbose

    if scale_fe:
        mn = self.exog.mean(0)
        sc = self.exog.std(0)
        self._exog_save = self.exog
        self.exog = self.exog.copy()
        ixs = np.flatnonzero(sc > 1e-8)
        self.exog[:, ixs] -= mn[ixs]
        self.exog[:, ixs] /= sc[ixs]

    n = self.k_fep + self.k_vcp + self.k_vc
    ml = self.k_fep + self.k_vcp + self.k_vc
    if mean is None:
        m = np.zeros(n)
    else:
        if len(mean) != ml:
            raise ValueError(
                "mean has incorrect length, %d != %d" % (len(mean), ml))
        m = mean.copy()
    if sd is None:
        s = -0.5 + 0.1 * np.random.normal(size=n)
    else:
        if len(sd) != ml:
            raise ValueError(
                "sd has incorrect length, %d != %d" % (len(sd), ml))

        # s is parametrized on the log-scale internally when
        # optimizing the ELBO function (this is transparent to the
        # caller)
        s = np.log(sd)

    # Do not allow the variance parameter starting mean values to
    # be too small.
    i1, i2 = self.k_fep, self.k_fep + self.k_vcp
    m[i1:i2] = np.where(m[i1:i2] < -1, -1, m[i1:i2])

    # Do not allow the posterior standard deviation starting values
    # to be too small.
    s = np.where(s < -1, -1, s)

    def elbo(x):
        n = len(x) // 2
        return -self.vb_elbo(x[:n], np.exp(x[n:]))

    def elbo_grad(x):
        n = len(x) // 2
        gm, gs = self.vb_elbo_grad(x[:n], np.exp(x[n:]))
        gs *= np.exp(x[n:])
        return -np.concatenate((gm, gs))

    start = np.concatenate((m, s))

    mm = minimize(elbo, start, jac=elbo_grad, method=fit_method,
                  options=minim_opts)
    if not mm.success:
        warnings.warn("VB fitting did not converge")

    n = len(mm.x) // 2
    params = mm.x[0:n]
    va = np.exp(2 * mm.x[n:])

    if scale_fe:
        self.exog = self._exog_save
        del self._exog_save
        params[ixs] /= sc[ixs]
        va[ixs] /= sc[ixs]**2

    return BayesMixedGLMResults(self, params, va, mm)

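A hedged sketch, reusing the hypothetical `model` from the from_formula example above; fit_vb is the variational alternative to fit_map.

result = model.fit_vb()
print(result.summary())
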
func_name: random_effects (python, BSD-3-Clause)
repo: statsmodels/statsmodels
path: statsmodels/genmod/bayes_mixed_glm.py
url: https://github.com/statsmodels/statsmodels/blob/master/statsmodels/genmod/bayes_mixed_glm.py

def random_effects(self, term=None):
    """
    Posterior mean and standard deviation of random effects.

    Parameters
    ----------
    term : str or None
        If None, results for all random effects are returned.  If a
        string, it names one collection of random effects (a key of
        `vc_formulas`, i.e. an element of `vcp_names`) and results
        are returned for that collection only.

    Returns
    -------
    Data frame of posterior means and posterior standard deviations
    of random effects.
    """
    z = self.vc_mean
    s = self.vc_sd
    na = self.model.vc_names
    if term is not None:
        termix = self.model.vcp_names.index(term)
        ii = np.flatnonzero(self.model.ident == termix)
        z = z[ii]
        s = s[ii]
        na = [na[i] for i in ii]

    x = pd.DataFrame({"Mean": z, "SD": s})

    if na is not None:
        x.index = na

    return x

func_name: predict (python, BSD-3-Clause)
repo: statsmodels/statsmodels
path: statsmodels/genmod/bayes_mixed_glm.py
url: https://github.com/statsmodels/statsmodels/blob/master/statsmodels/genmod/bayes_mixed_glm.py

def predict(self, exog=None, linear=False):
    """
    Return predicted values for the mean structure.

    Parameters
    ----------
    exog : array_like
        The design matrix for the mean structure.  If None, use the
        model's design matrix.
    linear : bool
        If True, returns the linear predictor, otherwise transform
        the linear predictor using the link function.

    Returns
    -------
    A one-dimensional array of fitted values.
    """
    return self.model.predict(self.params, exog, linear)

func_name: vb_elbo (python, BSD-3-Clause)
repo: statsmodels/statsmodels
path: statsmodels/genmod/bayes_mixed_glm.py
url: https://github.com/statsmodels/statsmodels/blob/master/statsmodels/genmod/bayes_mixed_glm.py

def vb_elbo(self, vb_mean, vb_sd):
    """
    Returns the evidence lower bound (ELBO) for the model.
    """
    fep_mean, vcp_mean, vc_mean = self._unpack(vb_mean)
    fep_sd, vcp_sd, vc_sd = self._unpack(vb_sd)
    tm, tv = self._lp_stats(fep_mean, fep_sd, vc_mean, vc_sd)

    def h(z):
        return -np.log(1 + np.exp(tm + np.sqrt(tv) * z))

    return self.vb_elbo_base(h, tm, fep_mean, vcp_mean, vc_mean,
                             fep_sd, vcp_sd, vc_sd)

func_name: vb_elbo_grad (python, BSD-3-Clause)
repo: statsmodels/statsmodels
path: statsmodels/genmod/bayes_mixed_glm.py
url: https://github.com/statsmodels/statsmodels/blob/master/statsmodels/genmod/bayes_mixed_glm.py

def vb_elbo_grad(self, vb_mean, vb_sd):
    """
    Returns the gradient of the model's evidence lower bound (ELBO).
    """
    fep_mean, vcp_mean, vc_mean = self._unpack(vb_mean)
    fep_sd, vcp_sd, vc_sd = self._unpack(vb_sd)
    tm, tv = self._lp_stats(fep_mean, fep_sd, vc_mean, vc_sd)

    def h(z):
        u = tm + np.sqrt(tv) * z
        x = np.zeros_like(u)
        ii = np.flatnonzero(u > 0)
        uu = u[ii]
        x[ii] = 1 / (1 + np.exp(-uu))
        ii = np.flatnonzero(u <= 0)
        uu = u[ii]
        x[ii] = np.exp(uu) / (1 + np.exp(uu))
        return -x

    return self.vb_elbo_grad_base(h, tm, tv, fep_mean, vcp_mean,
                                  vc_mean, fep_sd, vcp_sd, vc_sd)

func_name: vb_elbo (python, BSD-3-Clause)
repo: statsmodels/statsmodels
path: statsmodels/genmod/bayes_mixed_glm.py
url: https://github.com/statsmodels/statsmodels/blob/master/statsmodels/genmod/bayes_mixed_glm.py

def vb_elbo(self, vb_mean, vb_sd):
    """
    Returns the evidence lower bound (ELBO) for the model.
    """
    fep_mean, vcp_mean, vc_mean = self._unpack(vb_mean)
    fep_sd, vcp_sd, vc_sd = self._unpack(vb_sd)
    tm, tv = self._lp_stats(fep_mean, fep_sd, vc_mean, vc_sd)

    def h(z):
        return -np.exp(tm + np.sqrt(tv) * z)

    return self.vb_elbo_base(h, tm, fep_mean, vcp_mean, vc_mean,
                             fep_sd, vcp_sd, vc_sd)

func_name: vb_elbo_grad (python, BSD-3-Clause)
repo: statsmodels/statsmodels
path: statsmodels/genmod/bayes_mixed_glm.py
url: https://github.com/statsmodels/statsmodels/blob/master/statsmodels/genmod/bayes_mixed_glm.py

def vb_elbo_grad(self, vb_mean, vb_sd):
    """
    Returns the gradient of the model's evidence lower bound (ELBO).
    """
    fep_mean, vcp_mean, vc_mean = self._unpack(vb_mean)
    fep_sd, vcp_sd, vc_sd = self._unpack(vb_sd)
    tm, tv = self._lp_stats(fep_mean, fep_sd, vc_mean, vc_sd)

    def h(z):
        y = -np.exp(tm + np.sqrt(tv) * z)
        return y

    return self.vb_elbo_grad_base(h, tm, tv, fep_mean, vcp_mean,
                                  vc_mean, fep_sd, vcp_sd, vc_sd)

func_name: __init__ (python, BSD-3-Clause)
repo: statsmodels/statsmodels
path: statsmodels/genmod/generalized_estimating_equations.py
url: https://github.com/statsmodels/statsmodels/blob/master/statsmodels/genmod/generalized_estimating_equations.py

def __init__(self, lhs, rhs, exog):
    """
    Parameters
    ----------
    lhs : ndarray
        A q x p matrix which is the left hand side of the constraint
        lhs * param = rhs.  The number of constraints is q >= 1 and p
        is the dimension of the parameter vector.
    rhs : ndarray
        A 1-dimensional vector of length q which is the right hand
        side of the constraint equation.
    exog : ndarray
        The n x p exogenous data for the full model.
    """
    # In case a row or column vector is passed (patsy linear
    # constraints pass a column vector).
    rhs = np.atleast_1d(rhs.squeeze())

    if rhs.ndim > 1:
        raise ValueError("The right hand side of the constraint "
                         "must be a vector.")

    if len(rhs) != lhs.shape[0]:
        raise ValueError("The number of rows of the left hand "
                         "side constraint matrix L must equal "
                         "the length of the right hand side "
                         "constraint vector R.")

    self.lhs = lhs
    self.rhs = rhs

    # The columns of lhs0 are an orthogonal basis for the
    # orthogonal complement to row(lhs), the columns of lhs1 are
    # an orthogonal basis for row(lhs).  The columns of lhsf =
    # [lhs0, lhs1] are mutually orthogonal.
    lhs_u, lhs_s, lhs_vt = np.linalg.svd(lhs.T, full_matrices=1)
    self.lhs0 = lhs_u[:, len(lhs_s):]
    self.lhs1 = lhs_u[:, 0:len(lhs_s)]
    self.lhsf = np.hstack((self.lhs0, self.lhs1))

    # param0 is one solution to the underdetermined system
    # L * param = R.
    self.param0 = np.dot(self.lhs1, np.dot(lhs_vt, self.rhs) / lhs_s)

    self._offset_increment = np.dot(exog, self.param0)

    self.orig_exog = exog
    self.exog_fulltrans = np.dot(exog, self.lhsf)

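A small numpy check of the SVD reparameterization used above, a sketch only: the toy constraint and variable names mirror the attributes in the code, and are not part of the dataset.

import numpy as np

lhs = np.array([[1.0, -1.0, 0.0]])   # one constraint: b0 - b1 = 0.5
rhs = np.array([0.5])

lhs_u, lhs_s, lhs_vt = np.linalg.svd(lhs.T, full_matrices=1)
lhs0 = lhs_u[:, len(lhs_s):]         # basis for the null space of lhs
lhs1 = lhs_u[:, :len(lhs_s)]         # basis for row(lhs)
param0 = np.dot(lhs1, np.dot(lhs_vt, rhs) / lhs_s)

print(np.allclose(lhs @ param0, rhs))  # param0 satisfies the constraint
print(np.allclose(lhs @ lhs0, 0))      # lhs0 spans the feasible directions
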
func_name: offset_increment (python, BSD-3-Clause)
repo: statsmodels/statsmodels
path: statsmodels/genmod/generalized_estimating_equations.py
url: https://github.com/statsmodels/statsmodels/blob/master/statsmodels/genmod/generalized_estimating_equations.py

def offset_increment(self):
    """
    Returns a vector that should be added to the offset vector to
    accommodate the constraint.
    """
    return self._offset_increment

func_name: reduced_exog (python, BSD-3-Clause)
repo: statsmodels/statsmodels
path: statsmodels/genmod/generalized_estimating_equations.py
url: https://github.com/statsmodels/statsmodels/blob/master/statsmodels/genmod/generalized_estimating_equations.py

def reduced_exog(self):
    """
    Returns a linearly transformed exog matrix whose columns span
    the constrained model space.
    """
    return self.exog_fulltrans[:, 0:self.lhs0.shape[1]]

func_name: restore_exog (python, BSD-3-Clause)
repo: statsmodels/statsmodels
path: statsmodels/genmod/generalized_estimating_equations.py
url: https://github.com/statsmodels/statsmodels/blob/master/statsmodels/genmod/generalized_estimating_equations.py

def restore_exog(self):
    """
    Returns the full exog matrix before it was reduced to satisfy
    the constraint.
    """
    return self.orig_exog

func_name: unpack_param (python, BSD-3-Clause)
repo: statsmodels/statsmodels
path: statsmodels/genmod/generalized_estimating_equations.py
url: https://github.com/statsmodels/statsmodels/blob/master/statsmodels/genmod/generalized_estimating_equations.py

def unpack_param(self, params):
    """
    Converts the parameter vector `params` from reduced to full
    coordinates.
    """
    return self.param0 + np.dot(self.lhs0, params)

func_name: unpack_cov (python, BSD-3-Clause)
repo: statsmodels/statsmodels
path: statsmodels/genmod/generalized_estimating_equations.py
url: https://github.com/statsmodels/statsmodels/blob/master/statsmodels/genmod/generalized_estimating_equations.py

def unpack_cov(self, bcov):
    """
    Converts the covariance matrix `bcov` from reduced to full
    coordinates.
    """
    return np.dot(self.lhs0, np.dot(bcov, self.lhs0.T))

func_name: cluster_list (python, BSD-3-Clause)
repo: statsmodels/statsmodels
path: statsmodels/genmod/generalized_estimating_equations.py
url: https://github.com/statsmodels/statsmodels/blob/master/statsmodels/genmod/generalized_estimating_equations.py

def cluster_list(self, array):
    """
    Returns `array` split into subarrays corresponding to the
    cluster structure.
    """
    if array.ndim == 1:
        return [np.array(array[self.group_indices[k]])
                for k in self.group_labels]
    else:
        return [np.array(array[self.group_indices[k], :])
                for k in self.group_labels]

func_name: compare_score_test (python, BSD-3-Clause)
repo: statsmodels/statsmodels
path: statsmodels/genmod/generalized_estimating_equations.py
url: https://github.com/statsmodels/statsmodels/blob/master/statsmodels/genmod/generalized_estimating_equations.py

def compare_score_test(self, submodel):
    """
    Perform a score test for the given submodel against this model.

    Parameters
    ----------
    submodel : GEEResults instance
        A fitted GEE model that is a submodel of this model.

    Returns
    -------
    A dictionary with keys "statistic", "p-value", and "df",
    containing the score test statistic, its chi^2 p-value, and the
    degrees of freedom used to compute the p-value.

    Notes
    -----
    The score test can be performed without calling 'fit' on the
    larger model.  The provided submodel must be obtained from a
    fitted GEE.

    This method performs the same score test as can be obtained by
    fitting the GEE with a linear constraint and calling `score_test`
    on the results.

    References
    ----------
    Xu Guo and Wei Pan (2002). "Small sample performance of the score
    test in GEE".
    http://www.sph.umn.edu/faculty1/wp-content/uploads/2012/11/rr2002-013.pdf
    """
    # Since the model has not been fit, its scaletype has not been
    # set.  So give it the scaletype of the submodel.
    self.scaletype = submodel.model.scaletype

    # Check consistency between model and submodel (not a
    # comprehensive check)
    submod = submodel.model
    if self.exog.shape[0] != submod.exog.shape[0]:
        msg = "Model and submodel have different numbers of cases."
        raise ValueError(msg)
    if self.exog.shape[1] == submod.exog.shape[1]:
        msg = "Model and submodel have the same number of variables"
        warnings.warn(msg)
    if not isinstance(self.family, type(submod.family)):
        msg = "Model and submodel have different GLM families."
        warnings.warn(msg)
    if not isinstance(self.cov_struct, type(submod.cov_struct)):
        warnings.warn("Model and submodel have different GEE covariance "
                      "structures.")
    if not np.equal(self.weights, submod.weights).all():
        msg = "Model and submodel should have the same weights."
        warnings.warn(msg)

    # Get the positions of the submodel variables in the
    # parent model
    qm, qc = _score_test_submodel(self, submodel.model)
    if qm is None:
        msg = "The provided model is not a submodel."
        raise ValueError(msg)

    # Embed the submodel params into a params vector for the
    # parent model
    params_ex = np.dot(qm, submodel.params)

    # Attempt to preserve the state of the parent model
    cov_struct_save = self.cov_struct
    import copy
    cached_means_save = copy.deepcopy(self.cached_means)

    # Get the score vector of the submodel params in
    # the parent model
    self.cov_struct = submodel.cov_struct
    self.update_cached_means(params_ex)
    _, score = self._update_mean_params()
    if score is None:
        msg = "Singular matrix encountered in GEE score test"
        warnings.warn(msg, ConvergenceWarning)
        return None

    if not hasattr(self, "ddof_scale"):
        self.ddof_scale = self.exog.shape[1]

    if not hasattr(self, "scaling_factor"):
        self.scaling_factor = 1

    _, ncov1, cmat = self._covmat()
    score2 = np.dot(qc.T, score)

    try:
        amat = np.linalg.inv(ncov1)
    except np.linalg.LinAlgError:
        amat = np.linalg.pinv(ncov1)

    bmat_11 = np.dot(qm.T, np.dot(cmat, qm))
    bmat_22 = np.dot(qc.T, np.dot(cmat, qc))
    bmat_12 = np.dot(qm.T, np.dot(cmat, qc))

    amat_11 = np.dot(qm.T, np.dot(amat, qm))
    amat_12 = np.dot(qm.T, np.dot(amat, qc))

    try:
        ab = np.linalg.solve(amat_11, bmat_12)
    except np.linalg.LinAlgError:
        ab = np.dot(np.linalg.pinv(amat_11), bmat_12)

    score_cov = bmat_22 - np.dot(amat_12.T, ab)

    try:
        aa = np.linalg.solve(amat_11, amat_12)
    except np.linalg.LinAlgError:
        aa = np.dot(np.linalg.pinv(amat_11), amat_12)

    score_cov -= np.dot(bmat_12.T, aa)

    try:
        ab = np.linalg.solve(amat_11, bmat_11)
    except np.linalg.LinAlgError:
        ab = np.dot(np.linalg.pinv(amat_11), bmat_11)

    try:
        aa = np.linalg.solve(amat_11, amat_12)
    except np.linalg.LinAlgError:
        aa = np.dot(np.linalg.pinv(amat_11), amat_12)

    score_cov += np.dot(amat_12.T, np.dot(ab, aa))

    # Attempt to restore state
    self.cov_struct = cov_struct_save
    self.cached_means = cached_means_save

    from scipy.stats.distributions import chi2
    try:
        sc2 = np.linalg.solve(score_cov, score2)
    except np.linalg.LinAlgError:
        sc2 = np.dot(np.linalg.pinv(score_cov), score2)
    score_statistic = np.dot(score2, sc2)
    score_df = len(score2)
    score_pvalue = 1 - chi2.cdf(score_statistic, score_df)
    return {"statistic": score_statistic,
            "df": score_df,
            "p-value": score_pvalue}

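A hedged usage sketch (not a dataset row): a score test of x2 = 0 on synthetic clustered Poisson data; the data frame and column names are illustrative. Note that the full model is constructed but never fit.

import numpy as np
import pandas as pd
import statsmodels.api as sm
import statsmodels.formula.api as smf

rng = np.random.default_rng(0)
df = pd.DataFrame({"x1": rng.normal(size=300),
                   "x2": rng.normal(size=300),
                   "g": np.repeat(np.arange(50), 6)})
df["y"] = rng.poisson(np.exp(0.2 * df["x1"]))

sub = smf.gee("y ~ x1", groups="g", data=df,
              family=sm.families.Poisson()).fit()
full = smf.gee("y ~ x1 + x2", groups="g", data=df,
               family=sm.families.Poisson())   # not fit
rslt = full.compare_score_test(sub)
print(rslt["statistic"], rslt["df"], rslt["p-value"])
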
func_name: estimate_scale (python, BSD-3-Clause)
repo: statsmodels/statsmodels
path: statsmodels/genmod/generalized_estimating_equations.py
url: https://github.com/statsmodels/statsmodels/blob/master/statsmodels/genmod/generalized_estimating_equations.py

def estimate_scale(self):
    """
    Estimate the dispersion/scale.
    """
    if self.scaletype is None:
        if isinstance(self.family, (families.Binomial, families.Poisson,
                                    families.NegativeBinomial,
                                    _Multinomial)):
            return 1.
    elif isinstance(self.scaletype, float):
        return np.array(self.scaletype)

    endog = self.endog_li
    cached_means = self.cached_means
    nobs = self.nobs
    varfunc = self.family.variance

    scale = 0.
    fsum = 0.
    for i in range(self.num_group):

        if len(endog[i]) == 0:
            continue

        expval, _ = cached_means[i]
        sdev = np.sqrt(varfunc(expval))
        resid = (endog[i] - expval) / sdev

        if self.weights is not None:
            f = self.weights_li[i]
            scale += np.sum(f * (resid ** 2))
            fsum += f.sum()
        else:
            scale += np.sum(resid ** 2)
            fsum += len(resid)

    scale /= (fsum * (nobs - self.ddof_scale) / float(nobs))

    return scale

func_name: mean_deriv (python, BSD-3-Clause)
repo: statsmodels/statsmodels
path: statsmodels/genmod/generalized_estimating_equations.py
url: https://github.com/statsmodels/statsmodels/blob/master/statsmodels/genmod/generalized_estimating_equations.py

def mean_deriv(self, exog, lin_pred):
    """
    Derivative of the expected endog with respect to the parameters.

    Parameters
    ----------
    exog : array_like
        The exogenous data at which the derivative is computed.
    lin_pred : array_like
        The values of the linear predictor.

    Returns
    -------
    The value of the derivative of the expected endog with respect
    to the parameter vector.

    Notes
    -----
    If there is an offset or exposure, it should be added to
    `lin_pred` prior to calling this function.
    """
    idl = self.family.link.inverse_deriv(lin_pred)
    dmat = exog * idl[:, None]
    return dmat

func_name: mean_deriv_exog (python, BSD-3-Clause)
repo: statsmodels/statsmodels
path: statsmodels/genmod/generalized_estimating_equations.py
url: https://github.com/statsmodels/statsmodels/blob/master/statsmodels/genmod/generalized_estimating_equations.py

def mean_deriv_exog(self, exog, params, offset_exposure=None):
    """
    Derivative of the expected endog with respect to exog.

    Parameters
    ----------
    exog : array_like
        Values of the independent variables at which the derivative
        is calculated.
    params : array_like
        Parameter values at which the derivative is calculated.
    offset_exposure : array_like, optional
        Combined offset and exposure.

    Returns
    -------
    The derivative of the expected endog with respect to exog.
    """
    lin_pred = np.dot(exog, params)
    if offset_exposure is not None:
        lin_pred += offset_exposure

    idl = self.family.link.inverse_deriv(lin_pred)
    dmat = np.outer(idl, params)
    return dmat

func_name: _update_mean_params (python, BSD-3-Clause)
repo: statsmodels/statsmodels
path: statsmodels/genmod/generalized_estimating_equations.py
url: https://github.com/statsmodels/statsmodels/blob/master/statsmodels/genmod/generalized_estimating_equations.py

def _update_mean_params(self):
    """
    Returns
    -------
    update : array_like
        The update vector such that params + update is the next
        iterate when solving the score equations.
    score : array_like
        The current value of the score equations, not incorporating
        the scale parameter.  If desired, multiply this vector by
        the scale parameter to incorporate the scale.
    """
    endog = self.endog_li
    exog = self.exog_li
    weights = getattr(self, "weights_li", None)

    cached_means = self.cached_means

    varfunc = self.family.variance

    bmat, score = 0, 0
    for i in range(self.num_group):

        expval, lpr = cached_means[i]
        resid = endog[i] - expval
        dmat = self.mean_deriv(exog[i], lpr)
        sdev = np.sqrt(varfunc(expval))

        if weights is not None:
            w = weights[i]
            wresid = resid * w
            wdmat = dmat * w[:, None]
        else:
            wresid = resid
            wdmat = dmat

        rslt = self.cov_struct.covariance_matrix_solve(
            expval, i, sdev, (wdmat, wresid))
        if rslt is None:
            return None, None
        vinv_d, vinv_resid = tuple(rslt)

        bmat += np.dot(dmat.T, vinv_d)
        score += np.dot(dmat.T, vinv_resid)

    try:
        update = np.linalg.solve(bmat, score)
    except np.linalg.LinAlgError:
        update = np.dot(np.linalg.pinv(bmat), score)

    self._fit_history["cov_adjust"].append(
        self.cov_struct.cov_adjust)

    return update, score

func_name: update_cached_means (python, BSD-3-Clause)
repo: statsmodels/statsmodels
path: statsmodels/genmod/generalized_estimating_equations.py
url: https://github.com/statsmodels/statsmodels/blob/master/statsmodels/genmod/generalized_estimating_equations.py

def update_cached_means(self, mean_params):
    """
    cached_means should always contain the most recent calculation
    of the group-wise mean vectors.  This function should be called
    every time the regression parameters are changed, to keep the
    cached means up to date.
    """
    endog = self.endog_li
    exog = self.exog_li
    offset = self.offset_li

    linkinv = self.family.link.inverse

    self.cached_means = []

    for i in range(self.num_group):

        if len(endog[i]) == 0:
            continue

        lpr = np.dot(exog[i], mean_params)
        if offset is not None:
            lpr += offset[i]
        expval = linkinv(lpr)

        self.cached_means.append((expval, lpr))

func_name: _covmat (python, BSD-3-Clause)
repo: statsmodels/statsmodels
path: statsmodels/genmod/generalized_estimating_equations.py
url: https://github.com/statsmodels/statsmodels/blob/master/statsmodels/genmod/generalized_estimating_equations.py

def _covmat(self):
    """
    Returns the sampling covariance matrix of the regression
    parameters and related quantities.

    Returns
    -------
    cov_robust : array_like
        The robust, or sandwich estimate of the covariance, which
        is meaningful even if the working covariance structure is
        incorrectly specified.
    cov_naive : array_like
        The model-based estimate of the covariance, which is
        meaningful if the covariance structure is correctly
        specified.
    cmat : array_like
        The center matrix of the sandwich expression, used in
        obtaining score test results.
    """
    endog = self.endog_li
    exog = self.exog_li
    weights = getattr(self, "weights_li", None)
    varfunc = self.family.variance
    cached_means = self.cached_means

    # Calculate the naive (model-based) and robust (sandwich)
    # covariances.
    bmat, cmat = 0, 0
    for i in range(self.num_group):

        expval, lpr = cached_means[i]
        resid = endog[i] - expval
        dmat = self.mean_deriv(exog[i], lpr)
        sdev = np.sqrt(varfunc(expval))

        if weights is not None:
            w = weights[i]
            wresid = resid * w
            wdmat = dmat * w[:, None]
        else:
            wresid = resid
            wdmat = dmat

        rslt = self.cov_struct.covariance_matrix_solve(
            expval, i, sdev, (wdmat, wresid))
        if rslt is None:
            # Match the three documented return values.
            return None, None, None
        vinv_d, vinv_resid = tuple(rslt)

        bmat += np.dot(dmat.T, vinv_d)
        dvinv_resid = np.dot(dmat.T, vinv_resid)
        cmat += np.outer(dvinv_resid, dvinv_resid)

    scale = self.estimate_scale()

    try:
        bmati = np.linalg.inv(bmat)
    except np.linalg.LinAlgError:
        bmati = np.linalg.pinv(bmat)

    cov_naive = bmati * scale
    cov_robust = np.dot(bmati, np.dot(cmat, bmati))

    cov_naive *= self.scaling_factor
    cov_robust *= self.scaling_factor

    return cov_robust, cov_naive, cmat

func_name: fit_regularized (python, BSD-3-Clause)
repo: statsmodels/statsmodels
path: statsmodels/genmod/generalized_estimating_equations.py
url: https://github.com/statsmodels/statsmodels/blob/master/statsmodels/genmod/generalized_estimating_equations.py

def fit_regularized(self, pen_wt, scad_param=3.7, maxiter=100,
                    ddof_scale=None, update_assoc=5, ctol=1e-5,
                    ztol=1e-3, eps=1e-6, scale=None):
    """
    Regularized estimation for GEE.

    Parameters
    ----------
    pen_wt : float
        The penalty weight (a non-negative scalar).
    scad_param : float
        Non-negative scalar determining the shape of the Scad
        penalty.
    maxiter : int
        The maximum number of iterations.
    ddof_scale : int
        Value to subtract from `nobs` when calculating the
        denominator degrees of freedom for t-statistics, defaults
        to the number of columns in `exog`.
    update_assoc : int
        The dependence parameters are updated every `update_assoc`
        iterations of the mean structure parameter updates.
    ctol : float
        Convergence criterion, default is one order of magnitude
        smaller than proposed in section 3.1 of Wang et al.
    ztol : float
        Coefficients smaller than this value are treated as being
        zero, default is based on section 5 of Wang et al.
    eps : non-negative scalar
        Numerical constant, see section 3.2 of Wang et al.
    scale : float or string
        If a float, this value is used as the scale parameter.  If
        "X2", the scale parameter is always estimated using Pearson's
        chi-square method (e.g. as in a quasi-Poisson analysis).  If
        None, the default approach for the family is used to estimate
        the scale parameter.

    Returns
    -------
    GEEResults instance.  Note that not all methods of the results
    class make sense when the model has been fit with regularization.

    Notes
    -----
    This implementation assumes that the link is canonical.

    References
    ----------
    Wang L, Zhou J, Qu A. (2012). Penalized generalized estimating
    equations for high-dimensional longitudinal data analysis.
    Biometrics. 2012 Jun;68(2):353-60.
    doi: 10.1111/j.1541-0420.2011.01678.x.
    https://www.ncbi.nlm.nih.gov/pubmed/21955051
    http://users.stat.umn.edu/~wangx346/research/GEE_selection.pdf
    """
    self.scaletype = scale

    mean_params = np.zeros(self.exog.shape[1])
    self.update_cached_means(mean_params)
    converged = False
    fit_history = defaultdict(list)

    # Subtract this number from the total sample size when
    # normalizing the scale parameter estimate.
    if ddof_scale is None:
        self.ddof_scale = self.exog.shape[1]
    else:
        if not ddof_scale >= 0:
            raise ValueError(
                "ddof_scale must be a non-negative number or None")
        self.ddof_scale = ddof_scale

    # Keep this private for now.  In some cases the early steps are
    # very small so it seems necessary to ensure a certain minimum
    # number of iterations before testing for convergence.
    miniter = 20

    for itr in range(maxiter):

        update, hm = self._update_regularized(
            mean_params, pen_wt, scad_param, eps)
        if update is None:
            msg = "Singular matrix encountered in regularized GEE update"
            warnings.warn(msg, ConvergenceWarning)
            break
        if itr > miniter and np.sqrt(np.sum(update**2)) < ctol:
            converged = True
            break
        mean_params += update
        fit_history['params'].append(mean_params.copy())
        self.update_cached_means(mean_params)

        if itr != 0 and (itr % update_assoc == 0):
            self._update_assoc(mean_params)

    if not converged:
        msg = "GEE.fit_regularized did not converge"
        warnings.warn(msg)

    mean_params[np.abs(mean_params) < ztol] = 0

    self._update_assoc(mean_params)
    ma = self._regularized_covmat(mean_params)
    cov = np.linalg.solve(hm, ma)
    cov = np.linalg.solve(hm, cov.T)

    # kwargs to add to results instance, need to be available in __init__
    res_kwds = dict(cov_type="robust", cov_robust=cov)

    scale = self.estimate_scale()
    rslt = GEEResults(self, mean_params, cov, scale,
                      regularized=True, attr_kwds=res_kwds)
    rslt.fit_history = fit_history

    return GEEResultsWrapper(rslt)

func_name: _handle_constraint (python, BSD-3-Clause)
repo: statsmodels/statsmodels
path: statsmodels/genmod/generalized_estimating_equations.py
url: https://github.com/statsmodels/statsmodels/blob/master/statsmodels/genmod/generalized_estimating_equations.py

def _handle_constraint(self, mean_params, bcov):
    """
    Expand the parameter estimate `mean_params` and covariance matrix
    `bcov` to the coordinate system of the unconstrained model.

    Parameters
    ----------
    mean_params : array_like
        A parameter vector estimate for the reduced model.
    bcov : array_like
        The covariance matrix of mean_params.

    Returns
    -------
    mean_params : array_like
        The input parameter vector mean_params, expanded to the
        coordinate system of the full model.
    bcov : array_like
        The input covariance matrix bcov, expanded to the coordinate
        system of the full model.
    """
    # The number of variables in the full model
    red_p = len(mean_params)
    full_p = self.constraint.lhs.shape[1]
    mean_params0 = np.r_[mean_params, np.zeros(full_p - red_p)]

    # Get the score vector under the full model.
    save_exog_li = self.exog_li
    self.exog_li = self.constraint.exog_fulltrans_li
    import copy
    save_cached_means = copy.deepcopy(self.cached_means)
    self.update_cached_means(mean_params0)

    _, score = self._update_mean_params()

    if score is None:
        warnings.warn("Singular matrix encountered in GEE score test",
                      ConvergenceWarning)
        return None, None

    _, ncov1, cmat = self._covmat()
    scale = self.estimate_scale()
    cmat = cmat / scale ** 2
    score2 = score[red_p:] / scale

    amat = np.linalg.inv(ncov1)

    bmat_11 = cmat[0:red_p, 0:red_p]
    bmat_22 = cmat[red_p:, red_p:]
    bmat_12 = cmat[0:red_p, red_p:]

    amat_11 = amat[0:red_p, 0:red_p]
    amat_12 = amat[0:red_p, red_p:]

    score_cov = bmat_22 - np.dot(amat_12.T,
                                 np.linalg.solve(amat_11, bmat_12))
    score_cov -= np.dot(bmat_12.T,
                        np.linalg.solve(amat_11, amat_12))
    score_cov += np.dot(amat_12.T,
                        np.dot(np.linalg.solve(amat_11, bmat_11),
                               np.linalg.solve(amat_11, amat_12)))

    from scipy.stats.distributions import chi2
    score_statistic = np.dot(score2,
                             np.linalg.solve(score_cov, score2))
    score_df = len(score2)
    score_pvalue = 1 - chi2.cdf(score_statistic, score_df)
    self.score_test_results = {"statistic": score_statistic,
                               "df": score_df,
                               "p-value": score_pvalue}

    mean_params = self.constraint.unpack_param(mean_params)
    bcov = self.constraint.unpack_cov(bcov)

    self.exog_li = save_exog_li
    self.cached_means = save_cached_means
    self.exog = self.constraint.restore_exog()

    return mean_params, bcov

func_name: _update_assoc (python, BSD-3-Clause)
repo: statsmodels/statsmodels
path: statsmodels/genmod/generalized_estimating_equations.py
url: https://github.com/statsmodels/statsmodels/blob/master/statsmodels/genmod/generalized_estimating_equations.py

def _update_assoc(self, params):
    """
    Update the association parameters.
    """
    self.cov_struct.update(params)

func_name: _derivative_exog (python, BSD-3-Clause)
repo: statsmodels/statsmodels
path: statsmodels/genmod/generalized_estimating_equations.py
url: https://github.com/statsmodels/statsmodels/blob/master/statsmodels/genmod/generalized_estimating_equations.py

def _derivative_exog(self, params, exog=None, transform='dydx',
                     dummy_idx=None, count_idx=None):
    """
    For computing marginal effects, returns dF(XB) / dX where F(.)
    is the fitted mean.

    transform can be 'dydx', 'dyex', 'eydx', or 'eyex'.

    Not all of these make sense in the presence of discrete
    regressors, but checks are done in the results in get_margeff.
    """
    # This form should be appropriate for group 1 probit, logit,
    # logistic, cloglog, heckprob, xtprobit.
    offset_exposure = None
    if exog is None:
        exog = self.exog
        offset_exposure = self._offset_exposure

    margeff = self.mean_deriv_exog(exog, params, offset_exposure)

    if 'ex' in transform:
        margeff *= exog
    if 'ey' in transform:
        margeff /= self.predict(params, exog)[:, None]

    if count_idx is not None:
        from statsmodels.discrete.discrete_margins import (
            _get_count_effects,
        )
        margeff = _get_count_effects(margeff, exog, count_idx,
                                     transform, self, params)
    if dummy_idx is not None:
        from statsmodels.discrete.discrete_margins import (
            _get_dummy_effects,
        )
        margeff = _get_dummy_effects(margeff, exog, dummy_idx,
                                     transform, self, params)

    return margeff

def qic(self, params, scale, cov_params, n_step=1000): """ Returns quasi-information criteria and quasi-likelihood values. Parameters ---------- params : array_like The GEE estimates of the regression parameters. scale : scalar Estimated scale parameter cov_params : array_like An estimate of the covariance matrix for the model parameters. Conventionally this is the robust covariance matrix. n_step : integer The number of points in the trapezoidal approximation to the quasi-likelihood function. Returns ------- ql : scalar The quasi-likelihood value qic : scalar A QIC that can be used to compare the mean and covariance structures of the model. qicu : scalar A simplified QIC that can be used to compare mean structures but not covariance structures Notes ----- The quasi-likelihood used here is obtained by numerically evaluating Wedderburn's integral representation of the quasi-likelihood function. This approach is valid for all families and links. Many other packages use analytical expressions for quasi-likelihoods that are valid in special cases where the link function is canonical. These analytical expressions may omit additive constants that only depend on the data. Therefore, the numerical values of our QL and QIC values will differ from the values reported by other packages. However only the differences between two QIC values calculated for different models using the same data are meaningful. Our QIC should produce the same QIC differences as other software. When using the QIC for models with unknown scale parameter, use a common estimate of the scale parameter for all models being compared. References ---------- .. [*] W. Pan (2001). Akaike's information criterion in generalized estimating equations. Biometrics (57) 1. """ varfunc = self.family.variance means = [] omega = 0.0 # omega^-1 is the model-based covariance assuming independence for i in range(self.num_group): expval, lpr = self.cached_means[i] means.append(expval) dmat = self.mean_deriv(self.exog_li[i], lpr) omega += np.dot(dmat.T, dmat) / scale means = np.concatenate(means) # The quasi-likelihood, use change of variables so the integration is # from -1 to 1. endog_li = np.concatenate(self.endog_li) du = means - endog_li qv = np.empty(n_step) xv = np.linspace(-0.99999, 1, n_step) for i, g in enumerate(xv): u = endog_li + (g + 1) * du / 2.0 vu = varfunc(u) qv[i] = -np.sum(du**2 * (g + 1) / vu) qv /= (4 * scale) try: from scipy.integrate import trapezoid except ImportError: # Remove after minimum is SciPy 1.7 from scipy.integrate import trapz as trapezoid ql = trapezoid(qv, dx=xv[1] - xv[0]) qicu = -2 * ql + 2 * self.exog.shape[1] qic = -2 * ql + 2 * np.trace(np.dot(omega, cov_params)) return ql, qic, qicu
Returns quasi-information criteria and quasi-likelihood values. Parameters ---------- params : array_like The GEE estimates of the regression parameters. scale : scalar Estimated scale parameter cov_params : array_like An estimate of the covariance matrix for the model parameters. Conventionally this is the robust covariance matrix. n_step : integer The number of points in the trapezoidal approximation to the quasi-likelihood function. Returns ------- ql : scalar The quasi-likelihood value qic : scalar A QIC that can be used to compare the mean and covariance structures of the model. qicu : scalar A simplified QIC that can be used to compare mean structures but not covariance structures Notes ----- The quasi-likelihood used here is obtained by numerically evaluating Wedderburn's integral representation of the quasi-likelihood function. This approach is valid for all families and links. Many other packages use analytical expressions for quasi-likelihoods that are valid in special cases where the link function is canonical. These analytical expressions may omit additive constants that only depend on the data. Therefore, the numerical values of our QL and QIC values will differ from the values reported by other packages. However only the differences between two QIC values calculated for different models using the same data are meaningful. Our QIC should produce the same QIC differences as other software. When using the QIC for models with unknown scale parameter, use a common estimate of the scale parameter for all models being compared. References ---------- .. [*] W. Pan (2001). Akaike's information criterion in generalized estimating equations. Biometrics (57) 1.
qic
python
statsmodels/statsmodels
statsmodels/genmod/generalized_estimating_equations.py
https://github.com/statsmodels/statsmodels/blob/master/statsmodels/genmod/generalized_estimating_equations.py
BSD-3-Clause
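A minimal usage sketch for GEE.qic. The dataset and column names (MASS "epil" with y, trt, base, subject) are assumptions for illustration, not part of the record above:

import numpy as np
import statsmodels.api as sm
import statsmodels.formula.api as smf

# Assumption: the MASS "epil" seizure data with columns y, trt, base, subject.
data = sm.datasets.get_rdataset("epil", "MASS").data
model = smf.gee("y ~ trt + base", groups="subject", data=data,
                family=sm.families.Poisson())
result = model.fit()
# QL/QIC values are only comparable across models fit to the same data.
ql, qic, qicu = model.qic(result.params, result.scale, result.cov_params())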
def resid(self): """ The response residuals. """ return self.resid_response
The response residuals.
resid
python
statsmodels/statsmodels
statsmodels/genmod/generalized_estimating_equations.py
https://github.com/statsmodels/statsmodels/blob/master/statsmodels/genmod/generalized_estimating_equations.py
BSD-3-Clause
def standard_errors(self, cov_type="robust"):
        """
        This is a convenience function that returns the standard
        errors for any covariance type.  The `bse` attribute contains
        the standard errors for whichever covariance type was
        specified as an argument to `fit` (defaults to "robust").

        Parameters
        ----------
        cov_type : str
            One of "robust", "naive", or "bias_reduced".  Determines
            the covariance used to compute standard errors.  Defaults
            to "robust".
        """

        # Check covariance_type
        covariance_type = cov_type.lower()
        allowed_covariances = ["robust", "naive", "bias_reduced"]
        if covariance_type not in allowed_covariances:
            msg = ("GEE: `cov_type` must be one of " +
                   ", ".join(allowed_covariances))
            raise ValueError(msg)

        if covariance_type == "robust":
            return np.sqrt(np.diag(self.cov_robust))
        elif covariance_type == "naive":
            return np.sqrt(np.diag(self.cov_naive))
        elif covariance_type == "bias_reduced":
            if self.cov_robust_bc is None:
                raise ValueError(
                    "GEE: `bias_reduced` covariance not available")
            return np.sqrt(np.diag(self.cov_robust_bc))
This is a convenience function that returns the standard
errors for any covariance type.  The `bse` attribute contains
the standard errors for whichever covariance type was
specified as an argument to `fit` (defaults to "robust").

Parameters
----------
cov_type : str
    One of "robust", "naive", or "bias_reduced".  Determines
    the covariance used to compute standard errors.  Defaults
    to "robust".
standard_errors
python
statsmodels/statsmodels
statsmodels/genmod/generalized_estimating_equations.py
https://github.com/statsmodels/statsmodels/blob/master/statsmodels/genmod/generalized_estimating_equations.py
BSD-3-Clause
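A short sketch comparing covariance types, reusing the hypothetical `result` from the QIC sketch above:

import numpy as np

se_robust = result.standard_errors(cov_type="robust")
se_naive = result.standard_errors(cov_type="naive")
# A large robust/naive gap hints that the working correlation is misspecified.
print(np.max(np.abs(se_robust / se_naive - 1)))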
def score_test(self):
        """
        Return the results of a score test for a linear constraint.

        Returns
        -------
        A dictionary containing the p-value, the test statistic,
        and the degrees of freedom for the score test.

        Notes
        -----
        See also GEE.compare_score_test for an alternative way to perform
        a score test.  GEEResults.score_test is more general, in that it
        supports testing arbitrary linear equality constraints.  However
        GEE.compare_score_test might be easier to use when comparing two
        explicit models.

        References
        ----------
        Xu Guo and Wei Pan (2002). "Small sample performance of the score
        test in GEE".
        http://www.sph.umn.edu/faculty1/wp-content/uploads/2012/11/rr2002-013.pdf
        """

        if not hasattr(self.model, "score_test_results"):
            msg = "score_test on results instance only available when "
            msg += "model was fit with constraints"
            raise ValueError(msg)

        return self.model.score_test_results
Return the results of a score test for a linear constraint.

Returns
-------
A dictionary containing the p-value, the test statistic,
and the degrees of freedom for the score test.

Notes
-----
See also GEE.compare_score_test for an alternative way to perform
a score test.  GEEResults.score_test is more general, in that it
supports testing arbitrary linear equality constraints.  However
GEE.compare_score_test might be easier to use when comparing two
explicit models.

References
----------
Xu Guo and Wei Pan (2002). "Small sample performance of the score
test in GEE".
http://www.sph.umn.edu/faculty1/wp-content/uploads/2012/11/rr2002-013.pdf
score_test
python
statsmodels/statsmodels
statsmodels/genmod/generalized_estimating_equations.py
https://github.com/statsmodels/statsmodels/blob/master/statsmodels/genmod/generalized_estimating_equations.py
BSD-3-Clause
def resid_split(self):
        """
        Returns the residuals, the endogenous data minus the fitted
        values from the model.  The residuals are returned as a list
        of arrays containing the residuals for each cluster.
        """
        sresid = []
        for v in self.model.group_labels:
            ii = self.model.group_indices[v]
            sresid.append(self.resid[ii])
        return sresid
Returns the residuals, the endogenous data minus the fitted
values from the model.  The residuals are returned as a list
of arrays containing the residuals for each cluster.
resid_split
python
statsmodels/statsmodels
statsmodels/genmod/generalized_estimating_equations.py
https://github.com/statsmodels/statsmodels/blob/master/statsmodels/genmod/generalized_estimating_equations.py
BSD-3-Clause
def resid_centered(self): """ Returns the residuals centered within each group. """ cresid = self.resid.copy() for v in self.model.group_labels: ii = self.model.group_indices[v] cresid[ii] -= cresid[ii].mean() return cresid
Returns the residuals centered within each group.
resid_centered
python
statsmodels/statsmodels
statsmodels/genmod/generalized_estimating_equations.py
https://github.com/statsmodels/statsmodels/blob/master/statsmodels/genmod/generalized_estimating_equations.py
BSD-3-Clause
def resid_centered_split(self): """ Returns the residuals centered within each group. The residuals are returned as a list of arrays containing the centered residuals for each cluster. """ sresid = [] for v in self.model.group_labels: ii = self.model.group_indices[v] sresid.append(self.centered_resid[ii]) return sresid
Returns the residuals centered within each group. The residuals are returned as a list of arrays containing the centered residuals for each cluster.
resid_centered_split
python
statsmodels/statsmodels
statsmodels/genmod/generalized_estimating_equations.py
https://github.com/statsmodels/statsmodels/blob/master/statsmodels/genmod/generalized_estimating_equations.py
BSD-3-Clause
def qic(self, scale=None, n_step=1000):
        """
        Returns the QIC and QICu information criteria.

        See GEE.qic for documentation.
        """

        # It is easy to forget to set the scale parameter.  Sometimes
        # this is intentional, so we warn.
        if scale is None:
            warnings.warn("QIC values obtained using scale=None are not "
                          "appropriate for comparing models")
            scale = self.scale

        _, qic, qicu = self.model.qic(self.params, scale,
                                      self.cov_params(),
                                      n_step=n_step)

        return qic, qicu
Returns the QIC and QICu information criteria. See GEE.qic for documentation.
qic
python
statsmodels/statsmodels
statsmodels/genmod/generalized_estimating_equations.py
https://github.com/statsmodels/statsmodels/blob/master/statsmodels/genmod/generalized_estimating_equations.py
BSD-3-Clause
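To compare mean structures, hold the scale fixed across fits. A sketch under the same hypothetical `data`/`smf` names introduced in the earlier QIC sketch:

r1 = smf.gee("y ~ trt + base", groups="subject", data=data,
             family=sm.families.Poisson()).fit()
r2 = smf.gee("y ~ trt + base + age", groups="subject", data=data,
             family=sm.families.Poisson()).fit()
scale = r1.scale  # common scale estimate for all models being compared
print(r1.qic(scale=scale), r2.qic(scale=scale))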
def conf_int(self, alpha=.05, cols=None, cov_type=None):
        """
        Returns confidence intervals for the fitted parameters.

        Parameters
        ----------
        alpha : float, optional
             The `alpha` level for the confidence interval.  i.e., the
             default `alpha` = .05 returns a 95% confidence interval.
        cols : array_like, optional
             `cols` specifies which confidence intervals to return
        cov_type : str
             The covariance type used for computing standard errors;
             must be one of 'robust', 'naive', or 'bias_reduced'.
             See `GEE` for details.

        Notes
        -----
        The confidence interval is based on the Gaussian distribution.
        """
        # super().conf_int does not allow specifying cov_type, and the
        # method is not implemented there,
        # FIXME: remove this method here
        if cov_type is None:
            bse = self.bse
        else:
            bse = self.standard_errors(cov_type=cov_type)
        params = self.params
        dist = stats.norm
        q = dist.ppf(1 - alpha / 2)

        if cols is None:
            lower = self.params - q * bse
            upper = self.params + q * bse
        else:
            cols = np.asarray(cols)
            lower = params[cols] - q * bse[cols]
            upper = params[cols] + q * bse[cols]
        return np.asarray(lzip(lower, upper))
Returns confidence intervals for the fitted parameters.

Parameters
----------
alpha : float, optional
     The `alpha` level for the confidence interval.  i.e., the
     default `alpha` = .05 returns a 95% confidence interval.
cols : array_like, optional
     `cols` specifies which confidence intervals to return
cov_type : str
     The covariance type used for computing standard errors;
     must be one of 'robust', 'naive', or 'bias_reduced'.
     See `GEE` for details.

Notes
-----
The confidence interval is based on the Gaussian distribution.
conf_int
python
statsmodels/statsmodels
statsmodels/genmod/generalized_estimating_equations.py
https://github.com/statsmodels/statsmodels/blob/master/statsmodels/genmod/generalized_estimating_equations.py
BSD-3-Clause
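Sketch: 90% Gaussian intervals under the bias-reduced covariance; the model must be fit with cov_type="bias_reduced" for that covariance to be available. Reuses the hypothetical `model` from the earlier sketches:

result_bc = model.fit(cov_type="bias_reduced")
ci90 = result_bc.conf_int(alpha=0.10, cov_type="bias_reduced")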
def summary(self, yname=None, xname=None, title=None, alpha=.05):
        """
        Summarize the GEE regression results

        Parameters
        ----------
        yname : str, optional
            Default is `y`
        xname : list[str], optional
            Names for the exogenous variables, default is `var_##` for
            ## in the number of regressors. Must match the number of
            parameters in the model
        title : str, optional
            Title for the top table. If not None, then this replaces
            the default title
        alpha : float
            significance level for the confidence intervals

        Returns
        -------
        smry : Summary instance
            this holds the summary tables and text, which can be
            printed or converted to various output formats.

        Notes
        -----
        The covariance type used for the standard errors is the one
        selected when calling `fit`; it is reported in the top table.

        See Also
        --------
        statsmodels.iolib.summary.Summary : class to hold summary results
        """

        top_left = [('Dep. Variable:', None),
                    ('Model:', None),
                    ('Method:', ['Generalized']),
                    ('', ['Estimating Equations']),
                    ('Family:', [self.model.family.__class__.__name__]),
                    ('Dependence structure:',
                     [self.model.cov_struct.__class__.__name__]),
                    ('Date:', None),
                    ('Covariance type: ', [self.cov_type, ])
                    ]

        NY = [len(y) for y in self.model.endog_li]

        top_right = [('No. Observations:', [sum(NY)]),
                     ('No. clusters:', [len(self.model.endog_li)]),
                     ('Min. cluster size:', [min(NY)]),
                     ('Max. cluster size:', [max(NY)]),
                     ('Mean cluster size:', ["%.1f" % np.mean(NY)]),
                     ('Num. iterations:', ['%d' %
                                           len(self.fit_history['params'])]),
                     ('Scale:', ["%.3f" % self.scale]),
                     ('Time:', None),
                     ]

        # The skew of the residuals
        skew1 = stats.skew(self.resid)
        kurt1 = stats.kurtosis(self.resid)
        skew2 = stats.skew(self.centered_resid)
        kurt2 = stats.kurtosis(self.centered_resid)

        diagn_left = [('Skew:', ["%12.4f" % skew1]),
                      ('Centered skew:', ["%12.4f" % skew2])]

        diagn_right = [('Kurtosis:', ["%12.4f" % kurt1]),
                       ('Centered kurtosis:', ["%12.4f" % kurt2])
                       ]

        if title is None:
            title = self.model.__class__.__name__ + ' ' +\
                "Regression Results"

        # Override the exog variable names if xname is provided as an
        # argument.
        if xname is None:
            xname = self.model.exog_names

        if yname is None:
            yname = self.model.endog_names

        # Create summary table instance
        from statsmodels.iolib.summary import Summary
        smry = Summary()
        smry.add_table_2cols(self, gleft=top_left, gright=top_right,
                             yname=yname, xname=xname,
                             title=title)
        smry.add_table_params(self, yname=yname, xname=xname,
                              alpha=alpha, use_t=False)
        smry.add_table_2cols(self, gleft=diagn_left,
                             gright=diagn_right, yname=yname,
                             xname=xname, title="")

        return smry
Summarize the GEE regression results

Parameters
----------
yname : str, optional
    Default is `y`
xname : list[str], optional
    Names for the exogenous variables, default is `var_##` for
    ## in the number of regressors. Must match the number of
    parameters in the model
title : str, optional
    Title for the top table. If not None, then this replaces the
    default title
alpha : float
    significance level for the confidence intervals

Returns
-------
smry : Summary instance
    this holds the summary tables and text, which can be printed
    or converted to various output formats.

Notes
-----
The covariance type used for the standard errors is the one
selected when calling `fit`; it is reported in the top table.

See Also
--------
statsmodels.iolib.summary.Summary : class to hold summary results
summary
python
statsmodels/statsmodels
statsmodels/genmod/generalized_estimating_equations.py
https://github.com/statsmodels/statsmodels/blob/master/statsmodels/genmod/generalized_estimating_equations.py
BSD-3-Clause
def get_margeff(self, at='overall', method='dydx', atexog=None,
                    dummy=False, count=False):
        """Get marginal effects of the fitted model.

        Parameters
        ----------
        at : str, optional
            Options are:

            - 'overall', The average of the marginal effects at each
              observation.
            - 'mean', The marginal effects at the mean of each regressor.
            - 'median', The marginal effects at the median of each
              regressor.
            - 'zero', The marginal effects at zero for each regressor.
            - 'all', The marginal effects at each observation. If `at` is
              'all' only margeff will be available.

            Note that if `exog` is specified, then marginal effects for
            all variables not specified by `exog` are calculated using
            the `at` option.
        method : str, optional
            Options are:

            - 'dydx' - dy/dx - No transformation is made and marginal
              effects are returned.  This is the default.
            - 'eyex' - estimate elasticities of variables in `exog` --
              d(lny)/d(lnx)
            - 'dyex' - estimate semi-elasticity -- dy/d(lnx)
            - 'eydx' - estimate semi-elasticity -- d(lny)/dx

            Note that transformations are done after each observation is
            calculated.  Semi-elasticities for binary variables are
            computed using the midpoint method. 'dyex' and 'eyex' do not
            make sense for discrete variables.
        atexog : array_like, optional
            Optionally, you can provide the exogenous variables over
            which to get the marginal effects.  This should be a
            dictionary with the zero-indexed column number as the key
            and the value at which to hold that variable as the value.
            Default is None for all independent variables less the
            constant.
        dummy : bool, optional
            If False, treats binary variables (if present) as continuous.
            This is the default.  Else if True, treats binary variables
            as changing from 0 to 1.  Note that any variable that is
            either 0 or 1 is treated as binary.  Each binary variable is
            treated separately for now.
        count : bool, optional
            If False, treats count variables (if present) as continuous.
            This is the default.  Else if True, the marginal effect is
            the change in probabilities when each observation is
            increased by one.

        Returns
        -------
        effects : ndarray
            the marginal effect corresponding to the input options

        Notes
        -----
        When using after Poisson, returns the expected number of events
        per period, assuming that the model is loglinear.
        """

        if self.model.constraint is not None:
            warnings.warn("marginal effects ignore constraints",
                          ValueWarning)

        return GEEMargins(self, (at, method, atexog, dummy, count))
Get marginal effects of the fitted model.

Parameters
----------
at : str, optional
    Options are:

    - 'overall', The average of the marginal effects at each
      observation.
    - 'mean', The marginal effects at the mean of each regressor.
    - 'median', The marginal effects at the median of each
      regressor.
    - 'zero', The marginal effects at zero for each regressor.
    - 'all', The marginal effects at each observation. If `at` is
      'all' only margeff will be available.

    Note that if `exog` is specified, then marginal effects for
    all variables not specified by `exog` are calculated using
    the `at` option.
method : str, optional
    Options are:

    - 'dydx' - dy/dx - No transformation is made and marginal
      effects are returned.  This is the default.
    - 'eyex' - estimate elasticities of variables in `exog` --
      d(lny)/d(lnx)
    - 'dyex' - estimate semi-elasticity -- dy/d(lnx)
    - 'eydx' - estimate semi-elasticity -- d(lny)/dx

    Note that transformations are done after each observation is
    calculated.  Semi-elasticities for binary variables are
    computed using the midpoint method. 'dyex' and 'eyex' do not
    make sense for discrete variables.
atexog : array_like, optional
    Optionally, you can provide the exogenous variables over
    which to get the marginal effects.  This should be a
    dictionary with the zero-indexed column number as the key
    and the value at which to hold that variable as the value.
    Default is None for all independent variables less the
    constant.
dummy : bool, optional
    If False, treats binary variables (if present) as continuous.
    This is the default.  Else if True, treats binary variables
    as changing from 0 to 1.  Note that any variable that is
    either 0 or 1 is treated as binary.  Each binary variable is
    treated separately for now.
count : bool, optional
    If False, treats count variables (if present) as continuous.
    This is the default.  Else if True, the marginal effect is
    the change in probabilities when each observation is
    increased by one.

Returns
-------
effects : ndarray
    the marginal effect corresponding to the input options

Notes
-----
When using after Poisson, returns the expected number of events
per period, assuming that the model is loglinear.
get_margeff
python
statsmodels/statsmodels
statsmodels/genmod/generalized_estimating_equations.py
https://github.com/statsmodels/statsmodels/blob/master/statsmodels/genmod/generalized_estimating_equations.py
BSD-3-Clause
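Sketch of average marginal effects for the hypothetical Poisson GEE fit in the earlier sketches:

marg = result.get_margeff(at="overall", method="dydx")
print(marg.summary_frame())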
def plot_isotropic_dependence(self, ax=None, xpoints=10,
                                  min_n=50):
        """
        Create a plot of the pairwise products of within-group
        residuals against the corresponding time differences.  This
        plot can be used to assess the possible form of an isotropic
        covariance structure.

        Parameters
        ----------
        ax : AxesSubplot
            An axes on which to draw the graph.  If None, new
            figure and axes objects are created
        xpoints : scalar or array_like
            If a scalar, the number of equally spaced points on the
            time difference axis used to define bins for calculating
            local means.  If an array, the specific points that
            define the bins.
        min_n : int
            The minimum sample size in a bin for the mean residual
            product to be included on the plot.
        """

        from statsmodels.graphics import utils as gutils

        resid = self.model.cluster_list(self.resid)
        time = self.model.cluster_list(self.model.time)

        # All within-group pairwise time distances (xdt) and the
        # corresponding products of scaled residuals (xre).
        xre, xdt = [], []
        for re, ti in zip(resid, time):
            ix = np.tril_indices(re.shape[0], 0)
            re = re[ix[0]] * re[ix[1]] / self.scale ** 2
            xre.append(re)
            dists = np.sqrt(((ti[ix[0], :] - ti[ix[1], :]) ** 2).sum(1))
            xdt.append(dists)

        xre = np.concatenate(xre)
        xdt = np.concatenate(xdt)

        if ax is None:
            fig, ax = gutils.create_mpl_ax(ax)
        else:
            fig = ax.get_figure()

        # Convert to a correlation
        ii = np.flatnonzero(xdt == 0)
        v0 = np.mean(xre[ii])
        xre /= v0

        # Use the simple average to smooth, since fancier smoothers
        # that trim and downweight outliers give biased results (we
        # need the actual mean of a skewed distribution).
        if np.isscalar(xpoints):
            xpoints = np.linspace(0, max(xdt), xpoints)
        dg = np.digitize(xdt, xpoints)
        dgu = np.unique(dg)
        hist = np.asarray([np.sum(dg == k) for k in dgu])
        ii = np.flatnonzero(hist >= min_n)
        dgu = dgu[ii]
        dgy = np.asarray([np.mean(xre[dg == k]) for k in dgu])
        dgx = np.asarray([np.mean(xdt[dg == k]) for k in dgu])

        ax.plot(dgx, dgy, '-', color='orange', lw=5)
        ax.set_xlabel("Time difference")
        ax.set_ylabel("Product of scaled residuals")

        return fig
Create a plot of the pairwise products of within-group
residuals against the corresponding time differences.  This
plot can be used to assess the possible form of an isotropic
covariance structure.

Parameters
----------
ax : AxesSubplot
    An axes on which to draw the graph.  If None, new
    figure and axes objects are created
xpoints : scalar or array_like
    If a scalar, the number of equally spaced points on the
    time difference axis used to define bins for calculating
    local means.  If an array, the specific points that define
    the bins.
min_n : int
    The minimum sample size in a bin for the mean residual
    product to be included on the plot.
plot_isotropic_dependence
python
statsmodels/statsmodels
statsmodels/genmod/generalized_estimating_equations.py
https://github.com/statsmodels/statsmodels/blob/master/statsmodels/genmod/generalized_estimating_equations.py
BSD-3-Clause
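Sketch: this plot needs observation times, so the model must be constructed with a `time` argument. The `period` column is an assumption carried over from the hypothetical epil data above:

model_t = smf.gee("y ~ trt + base", groups="subject", data=data,
                  time=data[["period"]], family=sm.families.Poisson())
fig = model_t.fit().plot_isotropic_dependence(xpoints=15, min_n=30)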
def sensitivity_params(self, dep_params_first, dep_params_last, num_steps): """ Refits the GEE model using a sequence of values for the dependence parameters. Parameters ---------- dep_params_first : array_like The first dep_params in the sequence dep_params_last : array_like The last dep_params in the sequence num_steps : int The number of dep_params in the sequence Returns ------- results : array_like The GEEResults objects resulting from the fits. """ model = self.model import copy cov_struct = copy.deepcopy(self.model.cov_struct) # We are fixing the dependence structure in each run. update_dep = model.update_dep model.update_dep = False dep_params = [] results = [] for x in np.linspace(0, 1, num_steps): dp = x * dep_params_last + (1 - x) * dep_params_first dep_params.append(dp) model.cov_struct = copy.deepcopy(cov_struct) model.cov_struct.dep_params = dp rslt = model.fit(start_params=self.params, ctol=self.ctol, params_niter=self.params_niter, first_dep_update=self.first_dep_update, cov_type=self.cov_type) results.append(rslt) model.update_dep = update_dep return results
Refits the GEE model using a sequence of values for the dependence parameters. Parameters ---------- dep_params_first : array_like The first dep_params in the sequence dep_params_last : array_like The last dep_params in the sequence num_steps : int The number of dep_params in the sequence Returns ------- results : array_like The GEEResults objects resulting from the fits.
sensitivity_params
python
statsmodels/statsmodels
statsmodels/genmod/generalized_estimating_equations.py
https://github.com/statsmodels/statsmodels/blob/master/statsmodels/genmod/generalized_estimating_equations.py
BSD-3-Clause
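Sketch: sweep an exchangeable dependence parameter between two values, assuming an Exchangeable cov_struct (which has a scalar dep_params) and the hypothetical data above:

ex_model = smf.gee("y ~ trt + base", groups="subject", data=data,
                   family=sm.families.Poisson(),
                   cov_struct=sm.cov_struct.Exchangeable())
ex_result = ex_model.fit()
fits = ex_result.sensitivity_params(0., 0.5, 5)
for r in fits:
    # How much do the estimates move as the fixed correlation changes?
    print(r.model.cov_struct.dep_params, np.asarray(r.params))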
def setup_ordinal(self, endog, exog, groups, time, offset): """ Restructure ordinal data as binary indicators so that they can be analyzed using Generalized Estimating Equations. """ self.endog_orig = endog.copy() self.exog_orig = exog.copy() self.groups_orig = groups.copy() if offset is not None: self.offset_orig = offset.copy() else: self.offset_orig = None offset = np.zeros(len(endog)) if time is not None: self.time_orig = time.copy() else: self.time_orig = None time = np.zeros((len(endog), 1)) exog = np.asarray(exog) endog = np.asarray(endog) groups = np.asarray(groups) time = np.asarray(time) offset = np.asarray(offset) # The unique outcomes, except the greatest one. self.endog_values = np.unique(endog) endog_cuts = self.endog_values[0:-1] ncut = len(endog_cuts) nrows = ncut * len(endog) exog_out = np.zeros((nrows, exog.shape[1]), dtype=np.float64) endog_out = np.zeros(nrows, dtype=np.float64) intercepts = np.zeros((nrows, ncut), dtype=np.float64) groups_out = np.zeros(nrows, dtype=groups.dtype) time_out = np.zeros((nrows, time.shape[1]), dtype=np.float64) offset_out = np.zeros(nrows, dtype=np.float64) jrow = 0 zipper = zip(exog, endog, groups, time, offset) for (exog_row, endog_value, group_value, time_value, offset_value) in zipper: # Loop over thresholds for the indicators for thresh_ix, thresh in enumerate(endog_cuts): exog_out[jrow, :] = exog_row endog_out[jrow] = int(np.squeeze(endog_value > thresh)) intercepts[jrow, thresh_ix] = 1 groups_out[jrow] = group_value time_out[jrow] = time_value offset_out[jrow] = offset_value jrow += 1 exog_out = np.concatenate((intercepts, exog_out), axis=1) # exog column names, including intercepts xnames = ["I(y>%.1f)" % v for v in endog_cuts] if type(self.exog_orig) is pd.DataFrame: xnames.extend(self.exog_orig.columns) else: xnames.extend(["x%d" % k for k in range(1, exog.shape[1] + 1)]) exog_out = pd.DataFrame(exog_out, columns=xnames) # Preserve the endog name if there is one if type(self.endog_orig) is pd.Series: endog_out = pd.Series(endog_out, name=self.endog_orig.name) return endog_out, exog_out, groups_out, time_out, offset_out
Restructure ordinal data as binary indicators so that they can be analyzed using Generalized Estimating Equations.
setup_ordinal
python
statsmodels/statsmodels
statsmodels/genmod/generalized_estimating_equations.py
https://github.com/statsmodels/statsmodels/blob/master/statsmodels/genmod/generalized_estimating_equations.py
BSD-3-Clause
def plot_distribution(self, ax=None, exog_values=None):
        """
        Plot the fitted probabilities of endog in an ordinal model,
        for specified values of the predictors.

        Parameters
        ----------
        ax : AxesSubplot
            An axes on which to draw the graph.  If None, new
            figure and axes objects are created
        exog_values : array_like
            A list of dictionaries, with each dictionary mapping
            variable names to values at which the variable is held
            fixed.  The values P(endog=y | exog) are plotted for all
            possible values of y, at the given exog value.  Variables
            not included in a dictionary are held fixed at the mean
            value.

        Examples
        --------
        We have a model with covariates 'age' and 'sex', and wish to plot
        the probabilities P(endog=y | exog) for males (sex=0) and for
        females (sex=1), as separate paths on the plot.  Since 'age' is
        not included below in the map, it is held fixed at its mean
        value.

        >>> ev = [{"sex": 1}, {"sex": 0}]
        >>> rslt.plot_distribution(exog_values=ev)
        """

        from statsmodels.graphics import utils as gutils

        if ax is None:
            fig, ax = gutils.create_mpl_ax(ax)
        else:
            fig = ax.get_figure()

        # If no covariate patterns are specified, create one with all
        # variables set to their mean values.
        if exog_values is None:
            exog_values = [{}, ]

        exog_means = self.model.exog.mean(0)
        ix_icept = [i for i, x in enumerate(self.model.exog_names) if
                    x.startswith("I(")]

        for ev in exog_values:

            for k in ev.keys():
                if k not in self.model.exog_names:
                    raise ValueError("%s is not a variable in the model"
                                     % k)

            # Get the fitted probability for each level, at the given
            # covariate values.
            pr = []
            for j in ix_icept:

                xp = np.zeros_like(self.params)
                xp[j] = 1.
                for i, vn in enumerate(self.model.exog_names):
                    if i in ix_icept:
                        continue
                    # User-specified value
                    if vn in ev:
                        xp[i] = ev[vn]
                    # Mean value
                    else:
                        xp[i] = exog_means[i]

                p = 1 / (1 + np.exp(-np.dot(xp, self.params)))
                pr.append(p)

            pr.insert(0, 1)
            pr.append(0)
            pr = np.asarray(pr)
            prd = -np.diff(pr)

            ax.plot(self.model.endog_values, prd, 'o-')

        ax.set_xlabel("Response value")
        ax.set_ylabel("Probability")
        ax.set_ylim(0, 1)

        return fig
Plot the fitted probabilities of endog in an ordinal model,
for specified values of the predictors.

Parameters
----------
ax : AxesSubplot
    An axes on which to draw the graph.  If None, new
    figure and axes objects are created
exog_values : array_like
    A list of dictionaries, with each dictionary mapping
    variable names to values at which the variable is held
    fixed.  The values P(endog=y | exog) are plotted for all
    possible values of y, at the given exog value.  Variables
    not included in a dictionary are held fixed at the mean
    value.

Examples
--------
We have a model with covariates 'age' and 'sex', and wish to plot
the probabilities P(endog=y | exog) for males (sex=0) and for
females (sex=1), as separate paths on the plot.  Since 'age' is
not included below in the map, it is held fixed at its mean value.

>>> ev = [{"sex": 1}, {"sex": 0}]
>>> rslt.plot_distribution(exog_values=ev)
plot_distribution
python
statsmodels/statsmodels
statsmodels/genmod/generalized_estimating_equations.py
https://github.com/statsmodels/statsmodels/blob/master/statsmodels/genmod/generalized_estimating_equations.py
BSD-3-Clause
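Sketch of an end-to-end ordinal fit feeding plot_distribution; the frame `df` and its columns ("rating", "age", "sex", "clinic") are assumptions:

from statsmodels.genmod.generalized_estimating_equations import OrdinalGEE

# Assumption: df has an ordinal "rating", covariates "age"/"sex", clusters "clinic".
om = OrdinalGEE(df["rating"], df[["age", "sex"]], groups=df["clinic"])
orslt = om.fit()
orslt.plot_distribution(exog_values=[{"sex": 1}, {"sex": 0}])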
def _score_test_submodel(par, sub):
    """
    Return transformation matrices for design matrices.

    Parameters
    ----------
    par : instance
        The parent model
    sub : instance
        The sub-model

    Returns
    -------
    qm : array_like
        Matrix mapping the design matrix of the parent to the design
        matrix for the sub-model.
    qc : array_like
        Matrix mapping the design matrix of the parent to the
        orthogonal complement of the column space of the submodel in
        the column space of the parent.

    Notes
    -----
    Returns None, None if the provided submodel is not actually a
    submodel.
    """

    x1 = par.exog
    x2 = sub.exog

    u, s, vt = np.linalg.svd(x1, 0)
    v = vt.T

    # Get the orthogonal complement of col(x2) in col(x1).
    a, _ = np.linalg.qr(x2)
    a = u - np.dot(a, np.dot(a.T, u))
    x2c, sb, _ = np.linalg.svd(a, 0)
    x2c = x2c[:, sb > 1e-12]

    # x1 * qm = x2
    ii = np.flatnonzero(np.abs(s) > 1e-12)
    qm = np.dot(v[:, ii], np.dot(u[:, ii].T, x2) / s[ii, None])

    e = np.max(np.abs(x2 - np.dot(x1, qm)))
    if e > 1e-8:
        return None, None

    # x1 * qc = x2c
    qc = np.dot(v[:, ii], np.dot(u[:, ii].T, x2c) / s[ii, None])

    return qm, qc
Return transformation matrices for design matrices.

Parameters
----------
par : instance
    The parent model
sub : instance
    The sub-model

Returns
-------
qm : array_like
    Matrix mapping the design matrix of the parent to the design
    matrix for the sub-model.
qc : array_like
    Matrix mapping the design matrix of the parent to the
    orthogonal complement of the column space of the submodel in
    the column space of the parent.

Notes
-----
Returns None, None if the provided submodel is not actually a
submodel.
_score_test_submodel
python
statsmodels/statsmodels
statsmodels/genmod/generalized_estimating_equations.py
https://github.com/statsmodels/statsmodels/blob/master/statsmodels/genmod/generalized_estimating_equations.py
BSD-3-Clause
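A sketch checking the two maps on toy designs; since this private helper only reads the `.exog` attribute, SimpleNamespace can stand in for the model instances:

import numpy as np
from types import SimpleNamespace
from statsmodels.genmod.generalized_estimating_equations import _score_test_submodel

rng = np.random.default_rng(0)
x1 = rng.normal(size=(50, 4))
x2 = x1[:, :2]                       # a genuine submodel design
qm, qc = _score_test_submodel(SimpleNamespace(exog=x1),
                              SimpleNamespace(exog=x2))
assert np.allclose(x1 @ qm, x2)      # qm maps the parent design onto the submodel
assert np.allclose(x2.T @ (x1 @ qc), 0, atol=1e-8)  # qc spans the complement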
def setup_nominal(self, endog, exog, groups, time, offset):
        """
        Restructure nominal data as binary indicators so that they can
        be analyzed using Generalized Estimating Equations.
        """

        self.endog_orig = endog.copy()
        self.exog_orig = exog.copy()
        self.groups_orig = groups.copy()
        if offset is not None:
            self.offset_orig = offset.copy()
        else:
            self.offset_orig = None
            offset = np.zeros(len(endog))
        if time is not None:
            self.time_orig = time.copy()
        else:
            self.time_orig = None
            time = np.zeros((len(endog), 1))

        exog = np.asarray(exog)
        endog = np.asarray(endog)
        groups = np.asarray(groups)
        time = np.asarray(time)
        offset = np.asarray(offset)

        # The unique outcomes, except the greatest one.
        self.endog_values = np.unique(endog)
        endog_cuts = self.endog_values[0:-1]
        ncut = len(endog_cuts)
        self.ncut = ncut

        nrows = len(endog_cuts) * exog.shape[0]
        ncols = len(endog_cuts) * exog.shape[1]
        exog_out = np.zeros((nrows, ncols), dtype=np.float64)
        endog_out = np.zeros(nrows, dtype=np.float64)
        groups_out = np.zeros(nrows, dtype=groups.dtype)
        time_out = np.zeros((nrows, time.shape[1]),
                            dtype=np.float64)
        offset_out = np.zeros(nrows, dtype=np.float64)

        jrow = 0
        zipper = zip(exog, endog, groups, time, offset)
        for (exog_row, endog_value, group_value, time_value,
             offset_value) in zipper:

            # Loop over thresholds for the indicators
            for thresh_ix, thresh in enumerate(endog_cuts):

                u = np.zeros(len(endog_cuts), dtype=np.float64)
                u[thresh_ix] = 1
                exog_out[jrow, :] = np.kron(u, exog_row)
                endog_out[jrow] = int(endog_value == thresh)
                groups_out[jrow] = group_value
                time_out[jrow] = time_value
                offset_out[jrow] = offset_value
                jrow += 1

        # exog names
        if isinstance(self.exog_orig, pd.DataFrame):
            xnames_in = self.exog_orig.columns
        else:
            xnames_in = ["x%d" % k for k in range(1, exog.shape[1] + 1)]
        xnames = []
        for tr in endog_cuts:
            xnames.extend([f"{v}[{tr:.1f}]" for v in xnames_in])
        exog_out = pd.DataFrame(exog_out, columns=xnames)

        # Preserve endog name if there is one
        if isinstance(self.endog_orig, pd.Series):
            endog_out = pd.Series(endog_out, name=self.endog_orig.name)

        return endog_out, exog_out, groups_out, time_out, offset_out
Restructure nominal data as binary indicators so that they can be analyzed using Generalized Estimating Equations.
setup_nominal
python
statsmodels/statsmodels
statsmodels/genmod/generalized_estimating_equations.py
https://github.com/statsmodels/statsmodels/blob/master/statsmodels/genmod/generalized_estimating_equations.py
BSD-3-Clause
def mean_deriv(self, exog, lin_pred):
        """
        Derivative of the expected endog with respect to the parameters.

        Parameters
        ----------
        exog : array_like
           The exogenous data at which the derivative is computed,
           number of rows must be a multiple of `ncut`.
        lin_pred : array_like
           The values of the linear predictor, length must be a
           multiple of `ncut`.

        Returns
        -------
        The derivative of the expected endog with respect to the
        parameters.
        """

        expval = np.exp(lin_pred)

        # Reshape so that each row contains all the indicators
        # corresponding to one multinomial observation.
        expval_m = np.reshape(expval, (len(expval) // self.ncut,
                                       self.ncut))

        # The normalizing constant for the multinomial probabilities.
        denom = 1 + expval_m.sum(1)
        denom = np.kron(denom, np.ones(self.ncut, dtype=np.float64))

        # The multinomial probabilities
        mprob = expval / denom

        # First term of the derivative: denom * expval' / denom^2 =
        # expval' / denom.
        dmat = mprob[:, None] * exog

        # Second term of the derivative: -expval * denom' / denom^2
        ddenom = expval[:, None] * exog
        dmat -= mprob[:, None] * ddenom / denom[:, None]

        return dmat
Derivative of the expected endog with respect to the parameters.

Parameters
----------
exog : array_like
   The exogenous data at which the derivative is computed,
   number of rows must be a multiple of `ncut`.
lin_pred : array_like
   The values of the linear predictor, length must be a multiple
   of `ncut`.

Returns
-------
The derivative of the expected endog with respect to the
parameters.
mean_deriv
python
statsmodels/statsmodels
statsmodels/genmod/generalized_estimating_equations.py
https://github.com/statsmodels/statsmodels/blob/master/statsmodels/genmod/generalized_estimating_equations.py
BSD-3-Clause
def mean_deriv_exog(self, exog, params, offset_exposure=None):
        """
        Derivative of the expected endog with respect to exog for the
        multinomial model, used in analyzing marginal effects.

        Parameters
        ----------
        exog : array_like
           The exogenous data at which the derivative is computed,
           number of rows must be a multiple of `ncut`.
        params : array_like
           The model parameters, used to form the linear predictor
           `exog @ params`.
        offset_exposure : array_like, optional
           Not supported for the multinomial family; must be None.

        Returns
        -------
        The value of the derivative of the expected endog with respect
        to exog.

        Notes
        -----
        offset_exposure must be None for the multinomial family.
        """

        if offset_exposure is not None:
            warnings.warn("Offset/exposure ignored for the multinomial "
                          "family", ValueWarning)

        lpr = np.dot(exog, params)
        expval = np.exp(lpr)

        expval_m = np.reshape(expval, (len(expval) // self.ncut,
                                       self.ncut))

        denom = 1 + expval_m.sum(1)
        denom = np.kron(denom, np.ones(self.ncut, dtype=np.float64))

        bmat0 = np.outer(np.ones(exog.shape[0]), params)

        # Masking matrix
        qmat = []
        for j in range(self.ncut):
            ee = np.zeros(self.ncut, dtype=np.float64)
            ee[j] = 1
            qmat.append(np.kron(ee, np.ones(len(params) // self.ncut)))
        qmat = np.array(qmat)
        qmat = np.kron(np.ones((exog.shape[0] // self.ncut, 1)), qmat)
        bmat = bmat0 * qmat

        dmat = expval[:, None] * bmat / denom[:, None]

        expval_mb = np.kron(expval_m, np.ones((self.ncut, 1)))
        expval_mb = np.kron(expval_mb, np.ones((1, self.ncut)))

        dmat -= expval[:, None] * (bmat * expval_mb) / denom[:, None] ** 2

        return dmat
Derivative of the expected endog with respect to exog for the
multinomial model, used in analyzing marginal effects.

Parameters
----------
exog : array_like
   The exogenous data at which the derivative is computed,
   number of rows must be a multiple of `ncut`.
params : array_like
   The model parameters, used to form the linear predictor
   `exog @ params`.
offset_exposure : array_like, optional
   Not supported for the multinomial family; must be None.

Returns
-------
The value of the derivative of the expected endog with respect
to exog.

Notes
-----
offset_exposure must be None for the multinomial family.
mean_deriv_exog
python
statsmodels/statsmodels
statsmodels/genmod/generalized_estimating_equations.py
https://github.com/statsmodels/statsmodels/blob/master/statsmodels/genmod/generalized_estimating_equations.py
BSD-3-Clause
def plot_distribution(self, ax=None, exog_values=None):
        """
        Plot the fitted probabilities of endog in a nominal model,
        for specified values of the predictors.

        Parameters
        ----------
        ax : AxesSubplot
            An axes on which to draw the graph.  If None, new
            figure and axes objects are created
        exog_values : array_like
            A list of dictionaries, with each dictionary mapping
            variable names to values at which the variable is held
            fixed.  The values P(endog=y | exog) are plotted for all
            possible values of y, at the given exog value.  Variables
            not included in a dictionary are held fixed at the mean
            value.

        Examples
        --------
        We have a model with covariates 'age' and 'sex', and wish to plot
        the probabilities P(endog=y | exog) for males (sex=0) and for
        females (sex=1), as separate paths on the plot.  Since 'age' is
        not included below in the map, it is held fixed at its mean
        value.

        >>> ex = [{"sex": 1}, {"sex": 0}]
        >>> rslt.plot_distribution(exog_values=ex)
        """

        from statsmodels.graphics import utils as gutils

        if ax is None:
            fig, ax = gutils.create_mpl_ax(ax)
        else:
            fig = ax.get_figure()

        # If no covariate patterns are specified, create one with all
        # variables set to their mean values.
        if exog_values is None:
            exog_values = [{}, ]

        link = self.model.family.link.inverse
        ncut = self.model.family.ncut

        k = int(self.model.exog.shape[1] / ncut)
        exog_means = self.model.exog.mean(0)[0:k]
        exog_names = self.model.exog_names[0:k]
        exog_names = [x.split("[")[0] for x in exog_names]

        params = np.reshape(self.params,
                            (ncut, len(self.params) // ncut))

        for ev in exog_values:

            exog = exog_means.copy()

            # Use a name other than `k` so the column count above is
            # not shadowed.
            for vn in ev.keys():
                if vn not in exog_names:
                    raise ValueError("%s is not a variable in the model"
                                     % vn)

                ii = exog_names.index(vn)
                exog[ii] = ev[vn]

            lpr = np.dot(params, exog)
            pr = link(lpr)
            pr = np.r_[pr, 1 - pr.sum()]

            ax.plot(self.model.endog_values, pr, 'o-')

        ax.set_xlabel("Response value")
        ax.set_ylabel("Probability")
        ax.set_xticks(self.model.endog_values)
        ax.set_xticklabels(self.model.endog_values)
        ax.set_ylim(0, 1)

        return fig
Plot the fitted probabilities of endog in a nominal model,
for specified values of the predictors.

Parameters
----------
ax : AxesSubplot
    An axes on which to draw the graph.  If None, new
    figure and axes objects are created
exog_values : array_like
    A list of dictionaries, with each dictionary mapping
    variable names to values at which the variable is held
    fixed.  The values P(endog=y | exog) are plotted for all
    possible values of y, at the given exog value.  Variables
    not included in a dictionary are held fixed at the mean
    value.

Examples
--------
We have a model with covariates 'age' and 'sex', and wish to plot
the probabilities P(endog=y | exog) for males (sex=0) and for
females (sex=1), as separate paths on the plot.  Since 'age' is
not included below in the map, it is held fixed at its mean value.

>>> ex = [{"sex": 1}, {"sex": 0}]
>>> rslt.plot_distribution(exog_values=ex)
plot_distribution
python
statsmodels/statsmodels
statsmodels/genmod/generalized_estimating_equations.py
https://github.com/statsmodels/statsmodels/blob/master/statsmodels/genmod/generalized_estimating_equations.py
BSD-3-Clause
def inverse(self, lpr): """ Inverse of the multinomial logit transform, which gives the expected values of the data as a function of the linear predictors. Parameters ---------- lpr : array_like (length must be divisible by `ncut`) The linear predictors Returns ------- prob : ndarray Probabilities, or expected values """ expval = np.exp(lpr) denom = 1 + np.reshape(expval, (len(expval) // self.ncut, self.ncut)).sum(1) denom = np.kron(denom, np.ones(self.ncut, dtype=np.float64)) prob = expval / denom return prob
Inverse of the multinomial logit transform, which gives the expected values of the data as a function of the linear predictors. Parameters ---------- lpr : array_like (length must be divisible by `ncut`) The linear predictors Returns ------- prob : ndarray Probabilities, or expected values
inverse
python
statsmodels/statsmodels
statsmodels/genmod/generalized_estimating_equations.py
https://github.com/statsmodels/statsmodels/blob/master/statsmodels/genmod/generalized_estimating_equations.py
BSD-3-Clause
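A pure-NumPy sketch of the same transform for ncut = 2 (three response categories), mirroring the arithmetic in `inverse` above:

import numpy as np

lpr = np.array([0.5, -0.2, 1.0, 0.3])   # two observations, ncut = 2
ncut = 2
expval = np.exp(lpr)
denom = 1 + expval.reshape(-1, ncut).sum(1)
prob = expval / np.kron(denom, np.ones(ncut))
# Each block of ncut probabilities sums to < 1; the remainder is the
# probability of the reference (last) category.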
def __init__(self, nlevels, check_link=True):
        """
        Parameters
        ----------
        nlevels : int
            The number of distinct categories for the multinomial
            distribution.
        check_link : bool
            If True, validate that the link function is appropriate
            for this family.
        """
        self._check_link = check_link
        self.initialize(nlevels)
Parameters
----------
nlevels : int
    The number of distinct categories for the multinomial
    distribution.
check_link : bool
    If True, validate that the link function is appropriate for
    this family.
__init__
python
statsmodels/statsmodels
statsmodels/genmod/generalized_estimating_equations.py
https://github.com/statsmodels/statsmodels/blob/master/statsmodels/genmod/generalized_estimating_equations.py
BSD-3-Clause
def summary_frame(self, alpha=.05):
        """
        Returns a DataFrame summarizing the marginal effects.

        Parameters
        ----------
        alpha : float
            Number between 0 and 1. The confidence intervals have the
            probability 1-alpha.

        Returns
        -------
        frame : DataFrame
            A DataFrame summarizing the marginal effects.
        """
        _check_at_is_all(self.margeff_options)
        from pandas import DataFrame
        names = [_transform_names[self.margeff_options['method']],
                 'Std. Err.', 'z', 'Pr(>|z|)',
                 'Conf. Int. Low', 'Conf. Int. Hi.']
        ind = self.results.model.exog.var(0) != 0  # True if not a constant
        exog_names = self.results.model.exog_names
        var_names = [name for i, name in enumerate(exog_names) if ind[i]]
        table = np.column_stack((self.margeff, self.margeff_se, self.tvalues,
                                 self.pvalues, self.conf_int(alpha)))
        return DataFrame(table, columns=names, index=var_names)
Returns a DataFrame summarizing the marginal effects.

Parameters
----------
alpha : float
    Number between 0 and 1. The confidence intervals have the
    probability 1-alpha.

Returns
-------
frame : DataFrame
    A DataFrame summarizing the marginal effects.
summary_frame
python
statsmodels/statsmodels
statsmodels/genmod/generalized_estimating_equations.py
https://github.com/statsmodels/statsmodels/blob/master/statsmodels/genmod/generalized_estimating_equations.py
BSD-3-Clause
def conf_int(self, alpha=.05): """ Returns the confidence intervals of the marginal effects Parameters ---------- alpha : float Number between 0 and 1. The confidence intervals have the probability 1-alpha. Returns ------- conf_int : ndarray An array with lower, upper confidence intervals for the marginal effects. """ _check_at_is_all(self.margeff_options) me_se = self.margeff_se q = stats.norm.ppf(1 - alpha / 2) lower = self.margeff - q * me_se upper = self.margeff + q * me_se return np.asarray(lzip(lower, upper))
Returns the confidence intervals of the marginal effects Parameters ---------- alpha : float Number between 0 and 1. The confidence intervals have the probability 1-alpha. Returns ------- conf_int : ndarray An array with lower, upper confidence intervals for the marginal effects.
conf_int
python
statsmodels/statsmodels
statsmodels/genmod/generalized_estimating_equations.py
https://github.com/statsmodels/statsmodels/blob/master/statsmodels/genmod/generalized_estimating_equations.py
BSD-3-Clause
def summary(self, alpha=.05):
        """
        Returns a summary table for marginal effects

        Parameters
        ----------
        alpha : float
            Number between 0 and 1. The confidence intervals have the
            probability 1-alpha.

        Returns
        -------
        Summary : SummaryTable
            A SummaryTable instance
        """
        _check_at_is_all(self.margeff_options)
        results = self.results
        model = results.model
        title = model.__class__.__name__ + " Marginal Effects"
        method = self.margeff_options['method']
        top_left = [('Dep. Variable:', [model.endog_names]),
                    ('Method:', [method]),
                    ('At:', [self.margeff_options['at']]), ]

        from statsmodels.iolib.summary import (
            Summary,
            summary_params,
            table_extend,
        )

        exog_names = model.exog_names[:]  # copy
        smry = Summary()

        const_idx = model.data.const_idx
        if const_idx is not None:
            exog_names.pop(const_idx)

        J = int(getattr(model, "J", 1))
        if J > 1:
            yname, yname_list = results._get_endog_name(model.endog_names,
                                                        None, all=True)
        else:
            yname = model.endog_names
            yname_list = [yname]

        smry.add_table_2cols(self, gleft=top_left, gright=[],
                             yname=yname, xname=exog_names, title=title)

        # NOTE: add_table_params is not general enough yet for margeff
        # could use a refactor with getattr instead of hard-coded params
        # tvalues etc.
        table = []
        conf_int = self.conf_int(alpha)
        margeff = self.margeff
        margeff_se = self.margeff_se
        tvalues = self.tvalues
        pvalues = self.pvalues
        if J > 1:
            for eq in range(J):
                restup = (results, margeff[:, eq], margeff_se[:, eq],
                          tvalues[:, eq], pvalues[:, eq],
                          conf_int[:, :, eq])
                tble = summary_params(restup, yname=yname_list[eq],
                                      xname=exog_names, alpha=alpha,
                                      use_t=False, skip_header=True)
                tble.title = yname_list[eq]
                # overwrite coef with method name
                header = ['', _transform_names[method], 'std err', 'z',
                          'P>|z|',
                          '[%3.1f%% Conf. Int.]' % (100 - alpha * 100)]
                tble.insert_header_row(0, header)
                table.append(tble)

            table = table_extend(table, keep_headers=True)
        else:
            restup = (results, margeff, margeff_se, tvalues, pvalues,
                      conf_int)
            table = summary_params(restup, yname=yname, xname=exog_names,
                                   alpha=alpha, use_t=False,
                                   skip_header=True)
            header = ['', _transform_names[method], 'std err', 'z',
                      'P>|z|', '[%3.1f%% Conf. Int.]' % (100 - alpha * 100)]
            table.insert_header_row(0, header)

        smry.tables.append(table)

        return smry
Returns a summary table for marginal effects Parameters ---------- alpha : float Number between 0 and 1. The confidence intervals have the probability 1-alpha. Returns ------- Summary : SummaryTable A SummaryTable instance
summary
python
statsmodels/statsmodels
statsmodels/genmod/generalized_estimating_equations.py
https://github.com/statsmodels/statsmodels/blob/master/statsmodels/genmod/generalized_estimating_equations.py
BSD-3-Clause
def initialize(self, model): """ Called by GEE, used by implementations that need additional setup prior to running `fit`. Parameters ---------- model : GEE class A reference to the parent GEE class instance. """ self.model = model
Called by GEE, used by implementations that need additional setup prior to running `fit`. Parameters ---------- model : GEE class A reference to the parent GEE class instance.
initialize
python
statsmodels/statsmodels
statsmodels/genmod/cov_struct.py
https://github.com/statsmodels/statsmodels/blob/master/statsmodels/genmod/cov_struct.py
BSD-3-Clause
def update(self, params): """ Update the association parameter values based on the current regression coefficients. Parameters ---------- params : array_like Working values for the regression parameters. """ raise NotImplementedError
Update the association parameter values based on the current regression coefficients. Parameters ---------- params : array_like Working values for the regression parameters.
update
python
statsmodels/statsmodels
statsmodels/genmod/cov_struct.py
https://github.com/statsmodels/statsmodels/blob/master/statsmodels/genmod/cov_struct.py
BSD-3-Clause
def covariance_matrix(self, endog_expval, index): """ Returns the working covariance or correlation matrix for a given cluster of data. Parameters ---------- endog_expval : array_like The expected values of endog for the cluster for which the covariance or correlation matrix will be returned index : int The index of the cluster for which the covariance or correlation matrix will be returned Returns ------- M : matrix The covariance or correlation matrix of endog is_cor : bool True if M is a correlation matrix, False if M is a covariance matrix """ raise NotImplementedError
Returns the working covariance or correlation matrix for a given cluster of data. Parameters ---------- endog_expval : array_like The expected values of endog for the cluster for which the covariance or correlation matrix will be returned index : int The index of the cluster for which the covariance or correlation matrix will be returned Returns ------- M : matrix The covariance or correlation matrix of endog is_cor : bool True if M is a correlation matrix, False if M is a covariance matrix
covariance_matrix
python
statsmodels/statsmodels
statsmodels/genmod/cov_struct.py
https://github.com/statsmodels/statsmodels/blob/master/statsmodels/genmod/cov_struct.py
BSD-3-Clause
def covariance_matrix_solve(self, expval, index, stdev, rhs): """ Solves matrix equations of the form `covmat * soln = rhs` and returns the values of `soln`, where `covmat` is the covariance matrix represented by this class. Parameters ---------- expval : array_like The expected value of endog for each observed value in the group. index : int The group index. stdev : array_like The standard deviation of endog for each observation in the group. rhs : list/tuple of array_like A set of right-hand sides; each defines a matrix equation to be solved. Returns ------- soln : list/tuple of array_like The solutions to the matrix equations. Notes ----- Returns None if the solver fails. Some dependence structures do not use `expval` and/or `index` to determine the correlation matrix. Some families (e.g. binomial) do not use the `stdev` parameter when forming the covariance matrix. If the covariance matrix is singular or not SPD, it is projected to the nearest such matrix. These projection events are recorded in the fit_history attribute of the GEE model. Systems of linear equations with the covariance matrix as the left hand side (LHS) are solved for different right hand sides (RHS); the LHS is only factorized once to save time. This is a default implementation, it can be reimplemented in subclasses to optimize the linear algebra according to the structure of the covariance matrix. """ vmat, is_cor = self.covariance_matrix(expval, index) if is_cor: vmat *= np.outer(stdev, stdev) # Factor the covariance matrix. If the factorization fails, # attempt to condition it into a factorizable matrix. threshold = 1e-2 success = False cov_adjust = 0 for itr in range(20): try: vco = spl.cho_factor(vmat) success = True break except np.linalg.LinAlgError: vmat = cov_nearest(vmat, method=self.cov_nearest_method, threshold=threshold) threshold *= 2 cov_adjust += 1 msg = "At least one covariance matrix was not PSD " msg += "and required projection." warnings.warn(msg) self.cov_adjust.append(cov_adjust) # Last resort if we still cannot factor the covariance matrix. if not success: warnings.warn( "Unable to condition covariance matrix to an SPD " "matrix using cov_nearest", ConvergenceWarning) vmat = np.diag(np.diag(vmat)) vco = spl.cho_factor(vmat) soln = [spl.cho_solve(vco, x) for x in rhs] return soln
Solves matrix equations of the form `covmat * soln = rhs` and returns the values of `soln`, where `covmat` is the covariance matrix represented by this class. Parameters ---------- expval : array_like The expected value of endog for each observed value in the group. index : int The group index. stdev : array_like The standard deviation of endog for each observation in the group. rhs : list/tuple of array_like A set of right-hand sides; each defines a matrix equation to be solved. Returns ------- soln : list/tuple of array_like The solutions to the matrix equations. Notes ----- Returns None if the solver fails. Some dependence structures do not use `expval` and/or `index` to determine the correlation matrix. Some families (e.g. binomial) do not use the `stdev` parameter when forming the covariance matrix. If the covariance matrix is singular or not SPD, it is projected to the nearest such matrix. These projection events are recorded in the fit_history attribute of the GEE model. Systems of linear equations with the covariance matrix as the left hand side (LHS) are solved for different right hand sides (RHS); the LHS is only factorized once to save time. This is a default implementation, it can be reimplemented in subclasses to optimize the linear algebra according to the structure of the covariance matrix.
covariance_matrix_solve
python
statsmodels/statsmodels
statsmodels/genmod/cov_struct.py
https://github.com/statsmodels/statsmodels/blob/master/statsmodels/genmod/cov_struct.py
BSD-3-Clause
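A sketch of the factor-once / solve-many linear algebra pattern described in the Notes above, on a made-up SPD matrix:

import numpy as np
import scipy.linalg as spl

vmat = np.array([[2.0, 0.5],
                 [0.5, 1.0]])                # an SPD working covariance
vco = spl.cho_factor(vmat)                   # factor once
rhs = [np.ones(2), np.array([1.0, -1.0])]
soln = [spl.cho_solve(vco, x) for x in rhs]  # reuse the factor for each RHS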
def summary(self): """ Returns a text summary of the current estimate of the dependence structure. """ raise NotImplementedError
Returns a text summary of the current estimate of the dependence structure.
summary
python
statsmodels/statsmodels
statsmodels/genmod/cov_struct.py
https://github.com/statsmodels/statsmodels/blob/master/statsmodels/genmod/cov_struct.py
BSD-3-Clause
def initialize(self, model): """ Called on the first call to update `ilabels` is a list of n_i x n_i matrices containing integer labels that correspond to specific correlation parameters. Two elements of ilabels[i] with the same label share identical variance components. `designx` is a matrix, with each row containing dummy variables indicating which variance components are associated with the corresponding element of QY. """ super().initialize(model) if self.model.weights is not None: warnings.warn("weights not implemented for nested cov_struct, " "using unweighted covariance estimate", NotImplementedWarning) # A bit of processing of the nest data id_matrix = np.asarray(self.model.dep_data) if id_matrix.ndim == 1: id_matrix = id_matrix[:, None] self.id_matrix = id_matrix endog = self.model.endog_li designx, ilabels = [], [] # The number of layers of nesting n_nest = self.id_matrix.shape[1] for i in range(self.model.num_group): ngrp = len(endog[i]) glab = self.model.group_labels[i] rix = self.model.group_indices[glab] # Determine the number of common variance components # shared by each pair of observations. ix1, ix2 = np.tril_indices(ngrp, -1) ncm = (self.id_matrix[rix[ix1], :] == self.id_matrix[rix[ix2], :]).sum(1) # This is used to construct the working correlation # matrix. ilabel = np.zeros((ngrp, ngrp), dtype=np.int32) ilabel[(ix1, ix2)] = ncm + 1 ilabel[(ix2, ix1)] = ncm + 1 ilabels.append(ilabel) # This is used to estimate the variance components. dsx = np.zeros((len(ix1), n_nest + 1), dtype=np.float64) dsx[:, 0] = 1 for k in np.unique(ncm): ii = np.flatnonzero(ncm == k) dsx[ii, 1:k + 1] = 1 designx.append(dsx) self.designx = np.concatenate(designx, axis=0) self.ilabels = ilabels svd = np.linalg.svd(self.designx, 0) self.designx_u = svd[0] self.designx_s = svd[1] self.designx_v = svd[2].T
Called on the first call to update `ilabels` is a list of n_i x n_i matrices containing integer labels that correspond to specific correlation parameters. Two elements of ilabels[i] with the same label share identical variance components. `designx` is a matrix, with each row containing dummy variables indicating which variance components are associated with the corresponding element of QY.
initialize
python
statsmodels/statsmodels
statsmodels/genmod/cov_struct.py
https://github.com/statsmodels/statsmodels/blob/master/statsmodels/genmod/cov_struct.py
BSD-3-Clause
def summary(self):
        """
        Returns a summary table (a DataFrame) describing the state of
        the dependence structure.
        """

        dep_names = ["Groups"]
        if hasattr(self.model, "_dep_data_names"):
            dep_names.extend(self.model._dep_data_names)
        else:
            dep_names.extend(["Component %d:" % (k + 1) for k in
                              range(len(self.vcomp_coeff) - 1)])
        if hasattr(self.model, "_groups_name"):
            dep_names[0] = self.model._groups_name
        dep_names.append("Residual")

        vc = self.vcomp_coeff.tolist()
        vc.append(self.scale - np.sum(vc))

        smry = pd.DataFrame({"Variance": vc}, index=dep_names)

        return smry
Returns a summary table (a DataFrame) describing the state of
the dependence structure.
summary
python
statsmodels/statsmodels
statsmodels/genmod/cov_struct.py
https://github.com/statsmodels/statsmodels/blob/master/statsmodels/genmod/cov_struct.py
BSD-3-Clause
def pooled_odds_ratio(self, tables): """ Returns the pooled odds ratio for a list of 2x2 tables. The pooled odds ratio is the inverse variance weighted average of the sample odds ratios of the tables. """ if len(tables) == 0: return 1. # Get the sampled odds ratios and variances log_oddsratio, var = [], [] for table in tables: lor = np.log(table[1, 1]) + np.log(table[0, 0]) -\ np.log(table[0, 1]) - np.log(table[1, 0]) log_oddsratio.append(lor) var.append((1 / table.astype(np.float64)).sum()) # Calculate the inverse variance weighted average wts = [1 / v for v in var] wtsum = sum(wts) wts = [w / wtsum for w in wts] log_pooled_or = sum([w * e for w, e in zip(wts, log_oddsratio)]) return np.exp(log_pooled_or)
Returns the pooled odds ratio for a list of 2x2 tables. The pooled odds ratio is the inverse variance weighted average of the sample odds ratios of the tables.
pooled_odds_ratio
python
statsmodels/statsmodels
statsmodels/genmod/cov_struct.py
https://github.com/statsmodels/statsmodels/blob/master/statsmodels/genmod/cov_struct.py
BSD-3-Clause
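A numeric sketch of the pooling rule on two made-up 2x2 tables, following the inverse-variance weighting used above:

import numpy as np

tables = [np.array([[10., 5.], [4., 12.]]),
          np.array([[8., 6.], [7., 9.]])]
lor = [np.log(t[1, 1] * t[0, 0] / (t[0, 1] * t[1, 0])) for t in tables]
wts = np.array([1.0 / (1.0 / t).sum() for t in tables])  # inverse variances
pooled_or = np.exp(np.dot(wts / wts.sum(), lor))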
def observed_crude_oddsratio(self): """ To obtain the crude (global) odds ratio, first pool all binary indicators corresponding to a given pair of cut points (c,c'), then calculate the odds ratio for this 2x2 table. The crude odds ratio is the inverse variance weighted average of these odds ratios. Since the covariate effects are ignored, this OR will generally be greater than the stratified OR. """ cpp = self.cpp endog = self.model.endog_li # Storage for the contingency tables for each (c,c') tables = {} for ii in cpp[0].keys(): tables[ii] = np.zeros((2, 2), dtype=np.float64) # Get the observed crude OR for i in range(len(endog)): # The observed joint values for the current cluster yvec = endog[i] endog_11 = np.outer(yvec, yvec) endog_10 = np.outer(yvec, 1. - yvec) endog_01 = np.outer(1. - yvec, yvec) endog_00 = np.outer(1. - yvec, 1. - yvec) cpp1 = cpp[i] for ky in cpp1.keys(): ix = cpp1[ky] tables[ky][1, 1] += endog_11[ix[:, 0], ix[:, 1]].sum() tables[ky][1, 0] += endog_10[ix[:, 0], ix[:, 1]].sum() tables[ky][0, 1] += endog_01[ix[:, 0], ix[:, 1]].sum() tables[ky][0, 0] += endog_00[ix[:, 0], ix[:, 1]].sum() return self.pooled_odds_ratio(list(tables.values()))
To obtain the crude (global) odds ratio, first pool all binary indicators corresponding to a given pair of cut points (c,c'), then calculate the odds ratio for this 2x2 table. The crude odds ratio is the inverse variance weighted average of these odds ratios. Since the covariate effects are ignored, this OR will generally be greater than the stratified OR.
observed_crude_oddsratio
python
statsmodels/statsmodels
statsmodels/genmod/cov_struct.py
https://github.com/statsmodels/statsmodels/blob/master/statsmodels/genmod/cov_struct.py
BSD-3-Clause
def get_eyy(self, endog_expval, index):
        """
        Returns a matrix V such that V[i,j] is the joint probability
        that endog[i] = 1 and endog[j] = 1, based on the marginal
        probabilities of endog and the global odds ratio `current_or`.
        """
        current_or = self.dep_params
        ibd = self.ibd[index]

        # The between-observation joint probabilities
        if current_or == 1.0:
            vmat = np.outer(endog_expval, endog_expval)
        else:
            psum = endog_expval[:, None] + endog_expval[None, :]
            pprod = endog_expval[:, None] * endog_expval[None, :]
            pfac = np.sqrt((1. + psum * (current_or - 1.)) ** 2 +
                           4 * current_or * (1. - current_or) * pprod)
            vmat = 1. + psum * (current_or - 1.) - pfac
            vmat /= 2. * (current_or - 1)

        # Fix E[YY'] for elements that belong to same observation
        for bdl in ibd:
            evy = endog_expval[bdl[0]:bdl[1]]
            if self.endog_type == "ordinal":
                vmat[bdl[0]:bdl[1], bdl[0]:bdl[1]] =\
                    np.minimum.outer(evy, evy)
            else:
                vmat[bdl[0]:bdl[1], bdl[0]:bdl[1]] = np.diag(evy)

        return vmat
Returns a matrix V such that V[i,j] is the joint probability that endog[i] = 1 and endog[j] = 1, based on the marginal probabilities of endog and the global odds ratio `current_or`.
get_eyy
python
statsmodels/statsmodels
statsmodels/genmod/cov_struct.py
https://github.com/statsmodels/statsmodels/blob/master/statsmodels/genmod/cov_struct.py
BSD-3-Clause
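The between-observation branch of `get_eyy` solves, for each pair of observations, the quadratic that links a 2x2 table's joint probability to its marginals and odds ratio. A scalar sketch of that closed form (function and variable names are illustrative), with a numerical check that the implied table reproduces the odds ratio:

import numpy as np

def joint_prob(p1, p2, oddsratio):
    # P(Y1=1, Y2=1) given marginals p1, p2 and a common odds ratio,
    # the same closed-form root used in get_eyy above
    if oddsratio == 1.0:
        return p1 * p2
    psum, pprod = p1 + p2, p1 * p2
    pfac = np.sqrt((1. + psum * (oddsratio - 1.)) ** 2 +
                   4 * oddsratio * (1. - oddsratio) * pprod)
    return (1. + psum * (oddsratio - 1.) - pfac) / (2. * (oddsratio - 1.))

p11 = joint_prob(0.3, 0.4, 2.0)
or_check = p11 * (1 - 0.3 - 0.4 + p11) / ((0.3 - p11) * (0.4 - p11))
print(p11, or_check)  # or_check should be ~2.0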
def update(self, params):
        """
        Update the global odds ratio based on the current value of
        params.
        """
        cpp = self.cpp
        cached_means = self.model.cached_means

        # This will happen if all the clusters have only
        # one observation
        if len(cpp[0]) == 0:
            return

        tables = {}
        for ii in cpp[0]:
            tables[ii] = np.zeros((2, 2), dtype=np.float64)

        for i in range(self.model.num_group):

            endog_expval, _ = cached_means[i]

            emat_11 = self.get_eyy(endog_expval, i)
            emat_10 = endog_expval[:, None] - emat_11
            emat_01 = -emat_11 + endog_expval
            emat_00 = 1. - (emat_11 + emat_10 + emat_01)

            cpp1 = cpp[i]
            for ky in cpp1.keys():
                ix = cpp1[ky]
                tables[ky][1, 1] += emat_11[ix[:, 0], ix[:, 1]].sum()
                tables[ky][1, 0] += emat_10[ix[:, 0], ix[:, 1]].sum()
                tables[ky][0, 1] += emat_01[ix[:, 0], ix[:, 1]].sum()
                tables[ky][0, 0] += emat_00[ix[:, 0], ix[:, 1]].sum()

        cor_expval = self.pooled_odds_ratio(list(tables.values()))

        self.dep_params *= self.crude_or / cor_expval
        if not np.isfinite(self.dep_params):
            self.dep_params = 1.
            warnings.warn("dep_params became inf, resetting to 1",
                          ConvergenceWarning)
Update the global odds ratio based on the current value of params.
update
python
statsmodels/statsmodels
statsmodels/genmod/cov_struct.py
https://github.com/statsmodels/statsmodels/blob/master/statsmodels/genmod/cov_struct.py
BSD-3-Clause
def _make_pairs(self, i, j):
        """
        Create arrays containing all unique ordered pairs of i, j.

        The arrays i and j must be one-dimensional containing non-negative
        integers.
        """
        mat = np.zeros((len(i) * len(j), 2), dtype=np.int32)

        # Create the pairs and order them
        f = np.ones(len(j))
        mat[:, 0] = np.kron(f, i).astype(np.int32)
        f = np.ones(len(i))
        mat[:, 1] = np.kron(j, f).astype(np.int32)
        mat.sort(1)

        # Remove repeated rows
        try:
            dtype = np.dtype((np.void, mat.dtype.itemsize * mat.shape[1]))
            bmat = np.ascontiguousarray(mat).view(dtype)
            _, idx = np.unique(bmat, return_index=True)
        except TypeError:
            # workaround for old numpy that cannot call unique with complex
            # dtypes
            rs = np.random.RandomState(4234)
            bmat = np.dot(mat, rs.uniform(size=mat.shape[1]))
            _, idx = np.unique(bmat, return_index=True)
        mat = mat[idx, :]

        return mat[:, 0], mat[:, 1]
Create arrays containing all unique ordered pairs of i, j. The arrays i and j must be one-dimensional containing non-negative integers.
_make_pairs
python
statsmodels/statsmodels
statsmodels/genmod/cov_struct.py
https://github.com/statsmodels/statsmodels/blob/master/statsmodels/genmod/cov_struct.py
BSD-3-Clause
def initialize(self):
        """
        Initialize a generalized linear model.
        """
        self.df_model = np.linalg.matrix_rank(self.exog) - 1

        if (self.freq_weights is not None) and \
                (self.freq_weights.shape[0] == self.endog.shape[0]):
            self.wnobs = self.freq_weights.sum()
            self.df_resid = self.wnobs - self.df_model - 1
        else:
            self.wnobs = self.exog.shape[0]
            self.df_resid = self.exog.shape[0] - self.df_model - 1
Initialize a generalized linear model.
initialize
python
statsmodels/statsmodels
statsmodels/genmod/generalized_linear_model.py
https://github.com/statsmodels/statsmodels/blob/master/statsmodels/genmod/generalized_linear_model.py
BSD-3-Clause
def loglike_mu(self, mu, scale=1.):
        """
        Evaluate the log-likelihood for a generalized linear model.
        """
        scale = float_like(scale, "scale")
        return self.family.loglike(self.endog, mu, self.var_weights,
                                   self.freq_weights, scale)
Evaluate the log-likelihood for a generalized linear model.
loglike_mu
python
statsmodels/statsmodels
statsmodels/genmod/generalized_linear_model.py
https://github.com/statsmodels/statsmodels/blob/master/statsmodels/genmod/generalized_linear_model.py
BSD-3-Clause
def loglike(self, params, scale=None):
        """
        Evaluate the log-likelihood for a generalized linear model.
        """
        scale = float_like(scale, "scale", optional=True)
        lin_pred = np.dot(self.exog, params) + self._offset_exposure
        expval = self.family.link.inverse(lin_pred)
        if scale is None:
            scale = self.estimate_scale(expval)
        llf = self.family.loglike(self.endog, expval, self.var_weights,
                                  self.freq_weights, scale)
        return llf
Evaluate the log-likelihood for a generalized linear model.
loglike
python
statsmodels/statsmodels
statsmodels/genmod/generalized_linear_model.py
https://github.com/statsmodels/statsmodels/blob/master/statsmodels/genmod/generalized_linear_model.py
BSD-3-Clause
def score_obs(self, params, scale=None):
        """score first derivative of the loglikelihood for each observation.

        Parameters
        ----------
        params : ndarray
            Parameter at which score is evaluated.
        scale : None or float
            If scale is None, then the default scale will be calculated.
            Default scale is defined by `self.scaletype` and set in fit.
            If scale is not None, then it is used as a fixed scale.

        Returns
        -------
        score_obs : ndarray, 2d
            The first derivative of the loglikelihood function evaluated at
            params for each observation.
        """
        scale = float_like(scale, "scale", optional=True)
        score_factor = self.score_factor(params, scale=scale)
        return score_factor[:, None] * self.exog
score first derivative of the loglikelihood for each observation.

Parameters
----------
params : ndarray
    Parameter at which score is evaluated.
scale : None or float
    If scale is None, then the default scale will be calculated.
    Default scale is defined by `self.scaletype` and set in fit.
    If scale is not None, then it is used as a fixed scale.

Returns
-------
score_obs : ndarray, 2d
    The first derivative of the loglikelihood function evaluated at
    params for each observation.
score_obs
python
statsmodels/statsmodels
statsmodels/genmod/generalized_linear_model.py
https://github.com/statsmodels/statsmodels/blob/master/statsmodels/genmod/generalized_linear_model.py
BSD-3-Clause
def score(self, params, scale=None):
        """score, first derivative of the loglikelihood function

        Parameters
        ----------
        params : ndarray
            Parameter at which score is evaluated.
        scale : None or float
            If scale is None, then the default scale will be calculated.
            Default scale is defined by `self.scaletype` and set in fit.
            If scale is not None, then it is used as a fixed scale.

        Returns
        -------
        score : ndarray_1d
            The first derivative of the loglikelihood function calculated as
            the sum of `score_obs`
        """
        scale = float_like(scale, "scale", optional=True)
        score_factor = self.score_factor(params, scale=scale)
        return np.dot(score_factor, self.exog)
score, first derivative of the loglikelihood function

Parameters
----------
params : ndarray
    Parameter at which score is evaluated.
scale : None or float
    If scale is None, then the default scale will be calculated.
    Default scale is defined by `self.scaletype` and set in fit.
    If scale is not None, then it is used as a fixed scale.

Returns
-------
score : ndarray_1d
    The first derivative of the loglikelihood function calculated as
    the sum of `score_obs`
score
python
statsmodels/statsmodels
statsmodels/genmod/generalized_linear_model.py
https://github.com/statsmodels/statsmodels/blob/master/statsmodels/genmod/generalized_linear_model.py
BSD-3-Clause
def score_factor(self, params, scale=None):
        """weights for score for each observation

        This can be considered as score residuals.

        Parameters
        ----------
        params : ndarray
            parameter at which score is evaluated
        scale : None or float
            If scale is None, then the default scale will be calculated.
            Default scale is defined by `self.scaletype` and set in fit.
            If scale is not None, then it is used as a fixed scale.

        Returns
        -------
        score_factor : ndarray_1d
            A 1d weight vector used in the calculation of the score_obs.
            The score_obs are obtained by `score_factor[:, None] * exog`
        """
        scale = float_like(scale, "scale", optional=True)
        mu = self.predict(params)
        if scale is None:
            scale = self.estimate_scale(mu)

        score_factor = (self.endog - mu) / self.family.link.deriv(mu)
        score_factor /= self.family.variance(mu)
        score_factor *= self.iweights * self.n_trials

        if not scale == 1:
            score_factor /= scale

        return score_factor
weights for score for each observation

This can be considered as score residuals.

Parameters
----------
params : ndarray
    parameter at which score is evaluated
scale : None or float
    If scale is None, then the default scale will be calculated.
    Default scale is defined by `self.scaletype` and set in fit.
    If scale is not None, then it is used as a fixed scale.

Returns
-------
score_factor : ndarray_1d
    A 1d weight vector used in the calculation of the score_obs.
    The score_obs are obtained by `score_factor[:, None] * exog`
score_factor
python
statsmodels/statsmodels
statsmodels/genmod/generalized_linear_model.py
https://github.com/statsmodels/statsmodels/blob/master/statsmodels/genmod/generalized_linear_model.py
BSD-3-Clause
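The relationships documented above, `score_obs = score_factor[:, None] * exog` and `score = score_obs.sum(0)`, can be checked numerically against a finite-difference gradient of `loglike`. A sketch on simulated Poisson data, using only public statsmodels and numpy APIs (the perturbed `params` is an assumption chosen so the score is nonzero):

import numpy as np
import statsmodels.api as sm

rng = np.random.default_rng(0)
x = sm.add_constant(rng.normal(size=(200, 2)))
y = rng.poisson(np.exp(x @ np.array([0.5, 0.2, -0.1])))

model = sm.GLM(y, x, family=sm.families.Poisson())
params = model.fit().params + 0.05  # move off the optimum

analytic = model.score(params)
eps = 1e-6
numeric = np.array([
    (model.loglike(params + eps * np.eye(3)[k]) -
     model.loglike(params - eps * np.eye(3)[k])) / (2 * eps)
    for k in range(3)
])
print(np.allclose(analytic, numeric, atol=1e-4))          # expected: True
print(np.allclose(model.score_obs(params).sum(0), analytic))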
def hessian_factor(self, params, scale=None, observed=True):
        """Weights for calculating Hessian

        Parameters
        ----------
        params : ndarray
            parameter at which Hessian is evaluated
        scale : None or float
            If scale is None, then the default scale will be calculated.
            Default scale is defined by `self.scaletype` and set in fit.
            If scale is not None, then it is used as a fixed scale.
        observed : bool
            If True, then the observed Hessian is returned. If false then
            the expected information matrix is returned.

        Returns
        -------
        hessian_factor : ndarray, 1d
            A 1d weight vector used in the calculation of the Hessian.
            The hessian is obtained by `(exog.T * hessian_factor).dot(exog)`
        """
        # calculating eim_factor
        mu = self.predict(params)
        if scale is None:
            scale = self.estimate_scale(mu)

        eim_factor = 1 / (self.family.link.deriv(mu)**2 *
                          self.family.variance(mu))
        eim_factor *= self.iweights * self.n_trials

        if not observed:
            if not scale == 1:
                eim_factor /= scale
            return eim_factor

        # calculating oim_factor, eim_factor is with scale=1
        score_factor = self.score_factor(params, scale=1.)
        if eim_factor.ndim > 1 or score_factor.ndim > 1:
            raise RuntimeError('something wrong')

        tmp = self.family.variance(mu) * self.family.link.deriv2(mu)
        tmp += self.family.variance.deriv(mu) * self.family.link.deriv(mu)

        tmp = score_factor * tmp
        # correct for duplicate iweights in oim_factor and score_factor
        tmp /= self.iweights * self.n_trials
        oim_factor = eim_factor * (1 + tmp)

        if tmp.ndim > 1:
            raise RuntimeError('something wrong')

        if not scale == 1:
            oim_factor /= scale

        return oim_factor
Weights for calculating Hessian

Parameters
----------
params : ndarray
    parameter at which Hessian is evaluated
scale : None or float
    If scale is None, then the default scale will be calculated.
    Default scale is defined by `self.scaletype` and set in fit.
    If scale is not None, then it is used as a fixed scale.
observed : bool
    If True, then the observed Hessian is returned. If false then
    the expected information matrix is returned.

Returns
-------
hessian_factor : ndarray, 1d
    A 1d weight vector used in the calculation of the Hessian.
    The hessian is obtained by `(exog.T * hessian_factor).dot(exog)`
hessian_factor
python
statsmodels/statsmodels
statsmodels/genmod/generalized_linear_model.py
https://github.com/statsmodels/statsmodels/blob/master/statsmodels/genmod/generalized_linear_model.py
BSD-3-Clause
def hessian(self, params, scale=None, observed=None):
        """Hessian, second derivative of loglikelihood function

        Parameters
        ----------
        params : ndarray
            parameter at which Hessian is evaluated
        scale : None or float
            If scale is None, then the default scale will be calculated.
            Default scale is defined by `self.scaletype` and set in fit.
            If scale is not None, then it is used as a fixed scale.
        observed : bool
            If True, then the observed Hessian is returned (default).
            If False, then the expected information matrix is returned.

        Returns
        -------
        hessian : ndarray
            Hessian, i.e. observed information, or expected information
            matrix.
        """
        if observed is None:
            if getattr(self, '_optim_hessian', None) == 'eim':
                observed = False
            else:
                observed = True
        scale = float_like(scale, "scale", optional=True)
        tmp = getattr(self, '_tmp_like_exog', np.empty_like(self.exog,
                                                            dtype=float))

        factor = self.hessian_factor(params, scale=scale, observed=observed)
        np.multiply(self.exog.T, factor, out=tmp.T)
        return -tmp.T.dot(self.exog)
Hessian, second derivative of loglikelihood function

Parameters
----------
params : ndarray
    parameter at which Hessian is evaluated
scale : None or float
    If scale is None, then the default scale will be calculated.
    Default scale is defined by `self.scaletype` and set in fit.
    If scale is not None, then it is used as a fixed scale.
observed : bool
    If True, then the observed Hessian is returned (default).
    If False, then the expected information matrix is returned.

Returns
-------
hessian : ndarray
    Hessian, i.e. observed information, or expected information matrix.
hessian
python
statsmodels/statsmodels
statsmodels/genmod/generalized_linear_model.py
https://github.com/statsmodels/statsmodels/blob/master/statsmodels/genmod/generalized_linear_model.py
BSD-3-Clause
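The identity stated in `hessian_factor`, namely that the Hessian equals `-(exog.T * hessian_factor).dot(exog)`, can be confirmed directly. A self-contained sketch on simulated data (names are illustrative; the model is fit once so that internal scale settings are initialized):

import numpy as np
import statsmodels.api as sm

rng = np.random.default_rng(1)
x = sm.add_constant(rng.normal(size=(100, 1)))
y = rng.poisson(np.exp(x @ np.array([0.3, 0.2])))
model = sm.GLM(y, x, family=sm.families.Poisson())
params = model.fit().params

factor = model.hessian_factor(params, observed=True)
print(np.allclose(-(x.T * factor) @ x, model.hessian(params)))  # True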
def information(self, params, scale=None):
        """
        Fisher information matrix.
        """
        scale = float_like(scale, "scale", optional=True)
        return self.hessian(params, scale=scale, observed=False)
Fisher information matrix.
information
python
statsmodels/statsmodels
statsmodels/genmod/generalized_linear_model.py
https://github.com/statsmodels/statsmodels/blob/master/statsmodels/genmod/generalized_linear_model.py
BSD-3-Clause
def _derivative_exog(self, params, exog=None, transform="dydx",
                     dummy_idx=None, count_idx=None,
                     offset=None, exposure=None):
        """
        Derivative of mean, expected endog with respect to the parameters
        """
        if exog is None:
            exog = self.exog
        if (offset is not None) or (exposure is not None):
            raise NotImplementedError("offset and exposure not supported")

        lin_pred = self.predict(params, exog, which="linear",
                                offset=offset, exposure=exposure)

        k_extra = getattr(self, 'k_extra', 0)
        params_exog = params if k_extra == 0 else params[:-k_extra]

        margeff = (self.family.link.inverse_deriv(lin_pred)[:, None] *
                   params_exog)
        if 'ex' in transform:
            margeff *= exog
        if 'ey' in transform:
            mean = self.family.link.inverse(lin_pred)
            margeff /= mean[:, None]

        return self._derivative_exog_helper(margeff, params, exog,
                                            dummy_idx, count_idx, transform)
Derivative of mean, expected endog with respect to the parameters
_derivative_exog
python
statsmodels/statsmodels
statsmodels/genmod/generalized_linear_model.py
https://github.com/statsmodels/statsmodels/blob/master/statsmodels/genmod/generalized_linear_model.py
BSD-3-Clause
def _derivative_exog_helper(self, margeff, params, exog,
                            dummy_idx, count_idx, transform):
        """
        Helper for _derivative_exog to wrap results appropriately
        """
        from statsmodels.discrete.discrete_margins import (
            _get_count_effects,
            _get_dummy_effects,
        )

        if count_idx is not None:
            margeff = _get_count_effects(margeff, exog, count_idx, transform,
                                         self, params)
        if dummy_idx is not None:
            margeff = _get_dummy_effects(margeff, exog, dummy_idx, transform,
                                         self, params)

        return margeff
Helper for _derivative_exog to wrap results appropriately
_derivative_exog_helper
python
statsmodels/statsmodels
statsmodels/genmod/generalized_linear_model.py
https://github.com/statsmodels/statsmodels/blob/master/statsmodels/genmod/generalized_linear_model.py
BSD-3-Clause
def _derivative_predict(self, params, exog=None, transform='dydx',
                        offset=None, exposure=None):
        """
        Derivative of the expected endog with respect to the parameters.

        Parameters
        ----------
        params : ndarray
            parameter at which score is evaluated
        exog : ndarray or None
            Explanatory variables at which derivative are computed.
            If None, then the estimation exog is used.
        offset, exposure : None
            Not yet implemented.

        Returns
        -------
        The value of the derivative of the expected endog with respect
        to the parameter vector.
        """
        # core part is same as derivative_mean_params
        # additionally handles exog and transform
        if exog is None:
            exog = self.exog
        if (offset is not None) or (exposure is not None) or (
                getattr(self, 'offset', None) is not None):
            raise NotImplementedError("offset and exposure not supported")

        lin_pred = self.predict(params, exog=exog, which="linear")
        idl = self.family.link.inverse_deriv(lin_pred)
        dmat = exog * idl[:, None]
        if 'ey' in transform:
            mean = self.family.link.inverse(lin_pred)
            dmat /= mean[:, None]

        return dmat
Derivative of the expected endog with respect to the parameters.

Parameters
----------
params : ndarray
    parameter at which score is evaluated
exog : ndarray or None
    Explanatory variables at which derivative are computed.
    If None, then the estimation exog is used.
offset, exposure : None
    Not yet implemented.

Returns
-------
The value of the derivative of the expected endog with respect
to the parameter vector.
_derivative_predict
python
statsmodels/statsmodels
statsmodels/genmod/generalized_linear_model.py
https://github.com/statsmodels/statsmodels/blob/master/statsmodels/genmod/generalized_linear_model.py
BSD-3-Clause
def _deriv_mean_dparams(self, params):
        """
        Derivative of the expected endog with respect to the parameters.

        Parameters
        ----------
        params : ndarray
            parameter at which score is evaluated

        Returns
        -------
        The value of the derivative of the expected endog with respect
        to the parameter vector.
        """
        lin_pred = self.predict(params, which="linear")
        idl = self.family.link.inverse_deriv(lin_pred)
        dmat = self.exog * idl[:, None]
        return dmat
Derivative of the expected endog with respect to the parameters.

Parameters
----------
params : ndarray
    parameter at which score is evaluated

Returns
-------
The value of the derivative of the expected endog with respect
to the parameter vector.
_deriv_mean_dparams
python
statsmodels/statsmodels
statsmodels/genmod/generalized_linear_model.py
https://github.com/statsmodels/statsmodels/blob/master/statsmodels/genmod/generalized_linear_model.py
BSD-3-Clause
def _deriv_score_obs_dendog(self, params, scale=None):
        """derivative of score_obs w.r.t. endog

        Parameters
        ----------
        params : ndarray
            parameter at which score is evaluated
        scale : None or float
            If scale is None, then the default scale will be calculated.
            Default scale is defined by `self.scaletype` and set in fit.
            If scale is not None, then it is used as a fixed scale.

        Returns
        -------
        derivative : ndarray_2d
            The derivative of the score_obs with respect to endog.
            This is given by `score_factor0[:, None] * exog` where
            `score_factor0` is the score_factor without the residual.
        """
        scale = float_like(scale, "scale", optional=True)
        mu = self.predict(params)
        if scale is None:
            scale = self.estimate_scale(mu)

        score_factor = 1 / self.family.link.deriv(mu)
        score_factor /= self.family.variance(mu)
        score_factor *= self.iweights * self.n_trials

        if not scale == 1:
            score_factor /= scale

        return score_factor[:, None] * self.exog
derivative of score_obs w.r.t. endog

Parameters
----------
params : ndarray
    parameter at which score is evaluated
scale : None or float
    If scale is None, then the default scale will be calculated.
    Default scale is defined by `self.scaletype` and set in fit.
    If scale is not None, then it is used as a fixed scale.

Returns
-------
derivative : ndarray_2d
    The derivative of the score_obs with respect to endog.
    This is given by `score_factor0[:, None] * exog` where
    `score_factor0` is the score_factor without the residual.
_deriv_score_obs_dendog
python
statsmodels/statsmodels
statsmodels/genmod/generalized_linear_model.py
https://github.com/statsmodels/statsmodels/blob/master/statsmodels/genmod/generalized_linear_model.py
BSD-3-Clause
def score_test(self, params_constrained, k_constraints=None,
               exog_extra=None, observed=True):
        """score test for restrictions or for omitted variables

        The covariance matrix for the score is based on the Hessian, i.e.
        observed information matrix or optionally on the expected
        information matrix.

        Parameters
        ----------
        params_constrained : array_like
            estimated parameter of the restricted model. This can be the
            parameter estimate for the current model when testing for
            omitted variables.
        k_constraints : int or None
            Number of constraints that were used in the estimation of params
            restricted relative to the number of exog in the model.
            This must be provided if no exog_extra are given. If exog_extra is
            not None, then k_constraints is assumed to be zero if it is None.
        exog_extra : None or array_like
            Explanatory variables that are jointly tested for inclusion in the
            model, i.e. omitted variables.
        observed : bool
            If True, then the observed Hessian is used in calculating the
            covariance matrix of the score. If false then the expected
            information matrix is used.

        Returns
        -------
        chi2_stat : float
            chisquare statistic for the score test
        p-value : float
            P-value of the score test based on the chisquare distribution.
        df : int
            Degrees of freedom used in the p-value calculation. This is equal
            to the number of constraints.

        Notes
        -----
        not yet verified for case with scale not equal to 1.
        """
        if exog_extra is None:
            if k_constraints is None:
                raise ValueError('if exog_extra is None, then k_constraints '
                                 'needs to be given')

            score = self.score(params_constrained)
            hessian = self.hessian(params_constrained, observed=observed)

        else:
            # exog_extra = np.asarray(exog_extra)
            if k_constraints is None:
                k_constraints = 0

            ex = np.column_stack((self.exog, exog_extra))
            k_constraints += ex.shape[1] - self.exog.shape[1]

            score_factor = self.score_factor(params_constrained)
            score = (score_factor[:, None] * ex).sum(0)
            hessian_factor = self.hessian_factor(params_constrained,
                                                 observed=observed)
            hessian = -np.dot(ex.T * hessian_factor, ex)

        from scipy import stats

        # TODO check sign, why minus?
        chi2stat = -score.dot(np.linalg.solve(hessian, score[:, None]))
        pval = stats.chi2.sf(chi2stat, k_constraints)
        # return a stats results instance instead?  Contrast?

        return chi2stat, pval, k_constraints
score test for restrictions or for omitted variables

The covariance matrix for the score is based on the Hessian, i.e.
observed information matrix or optionally on the expected information
matrix.

Parameters
----------
params_constrained : array_like
    estimated parameter of the restricted model. This can be the
    parameter estimate for the current model when testing for omitted
    variables.
k_constraints : int or None
    Number of constraints that were used in the estimation of params
    restricted relative to the number of exog in the model.
    This must be provided if no exog_extra are given. If exog_extra is
    not None, then k_constraints is assumed to be zero if it is None.
exog_extra : None or array_like
    Explanatory variables that are jointly tested for inclusion in the
    model, i.e. omitted variables.
observed : bool
    If True, then the observed Hessian is used in calculating the
    covariance matrix of the score. If false then the expected
    information matrix is used.

Returns
-------
chi2_stat : float
    chisquare statistic for the score test
p-value : float
    P-value of the score test based on the chisquare distribution.
df : int
    Degrees of freedom used in the p-value calculation. This is equal
    to the number of constraints.

Notes
-----
not yet verified for case with scale not equal to 1.
score_test
python
statsmodels/statsmodels
statsmodels/genmod/generalized_linear_model.py
https://github.com/statsmodels/statsmodels/blob/master/statsmodels/genmod/generalized_linear_model.py
BSD-3-Clause
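A hedged usage sketch for the omitted-variables form of `score_test`: fit the restricted model, then test whether an extra column belongs in it. The data and variable names are simulated for illustration, not taken from statsmodels examples:

import numpy as np
import statsmodels.api as sm

rng = np.random.default_rng(2)
x = sm.add_constant(rng.normal(size=(500, 1)))
x_extra = rng.normal(size=(500, 1))       # candidate omitted variable
eta = x @ np.array([0.2, 0.5]) + 0.3 * x_extra[:, 0]
y = rng.poisson(np.exp(eta))

restricted = sm.GLM(y, x, family=sm.families.Poisson())
params_r = restricted.fit().params
chi2, pval, df = restricted.score_test(params_r, exog_extra=x_extra)
print(chi2, pval, df)  # a small p-value suggests x_extra was omitted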
def _update_history(self, tmp_result, mu, history):
        """
        Helper method to update history during iterative fit.
        """
        history['params'].append(tmp_result.params)
        history['deviance'].append(self.family.deviance(self.endog, mu,
                                                        self.var_weights,
                                                        self.freq_weights,
                                                        self.scale))
        return history
Helper method to update history during iterative fit.
_update_history
python
statsmodels/statsmodels
statsmodels/genmod/generalized_linear_model.py
https://github.com/statsmodels/statsmodels/blob/master/statsmodels/genmod/generalized_linear_model.py
BSD-3-Clause
def estimate_scale(self, mu):
        """
        Estimate the dispersion/scale.

        Type of scale can be chosen in the fit method.

        Parameters
        ----------
        mu : ndarray
            mu is the mean response estimate

        Returns
        -------
        Estimate of scale

        Notes
        -----
        The default scale for Binomial, Poisson and Negative Binomial
        families is 1.  The default for the other families is Pearson's
        Chi-Square estimate.

        See Also
        --------
        statsmodels.genmod.generalized_linear_model.GLM.fit
        """
        if not self.scaletype:
            if isinstance(self.family, (families.Binomial, families.Poisson,
                                        families.NegativeBinomial)):
                return 1.
            else:
                return self._estimate_x2_scale(mu)

        if isinstance(self.scaletype, float):
            return np.array(self.scaletype)

        if isinstance(self.scaletype, str):
            if self.scaletype.lower() == 'x2':
                return self._estimate_x2_scale(mu)
            elif self.scaletype.lower() == 'dev':
                return (self.family.deviance(self.endog, mu, self.var_weights,
                                             self.freq_weights, 1.) /
                        (self.df_resid))
            else:
                raise ValueError("Scale %s with type %s not understood" %
                                 (self.scaletype, type(self.scaletype)))
        else:
            raise ValueError("Scale %s with type %s not understood" %
                             (self.scaletype, type(self.scaletype)))
Estimate the dispersion/scale.

Type of scale can be chosen in the fit method.

Parameters
----------
mu : ndarray
    mu is the mean response estimate

Returns
-------
Estimate of scale

Notes
-----
The default scale for Binomial, Poisson and Negative Binomial families
is 1.  The default for the other families is Pearson's Chi-Square
estimate.

See Also
--------
statsmodels.genmod.generalized_linear_model.GLM.fit
estimate_scale
python
statsmodels/statsmodels
statsmodels/genmod/generalized_linear_model.py
https://github.com/statsmodels/statsmodels/blob/master/statsmodels/genmod/generalized_linear_model.py
BSD-3-Clause
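For the families whose default is Pearson's chi-square, the `_estimate_x2_scale` step referenced above amounts to the following. This is a standalone sketch based on the standard Pearson estimator, an assumption rather than the library's exact code; `variance` stands in for the family variance function V(mu):

import numpy as np

def pearson_x2_scale(endog, mu, variance, df_resid, weights=1.0):
    # Pearson chi-square statistic divided by residual degrees of freedom
    chi2 = np.sum(weights * (endog - mu) ** 2 / variance(mu))
    return chi2 / df_resid

# e.g. for a Gamma family one would pass variance=lambda m: m ** 2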
def estimate_tweedie_power(self, mu, method='brentq', low=1.01, high=5.):
        """
        Tweedie specific function to estimate scale and the variance
        parameter.  The variance parameter is also referred to as p,
        xi, or shape.

        Parameters
        ----------
        mu : array_like
            Fitted mean response variable
        method : str, defaults to 'brentq'
            Scipy optimizer used to solve the Pearson equation. Only brentq
            currently supported.
        low : float, optional
            Low end of the bracketing interval [a,b] to be used in the
            search for the power. Defaults to 1.01.
        high : float, optional
            High end of the bracketing interval [a,b] to be used in the
            search for the power. Defaults to 5.

        Returns
        -------
        power : float
            The estimated shape or power.
        """
        if method == 'brentq':
            from scipy.optimize import brentq

            def psi_p(power, mu):
                scale = ((self.iweights * (self.endog - mu) ** 2 /
                          (mu ** power)).sum() / self.df_resid)
                return (np.sum(self.iweights * ((self.endog - mu) ** 2 /
                               (scale * (mu ** power)) - 1) *
                               np.log(mu)) / self.freq_weights.sum())

            power = brentq(psi_p, low, high, args=(mu,))
        else:
            raise NotImplementedError('Only brentq can currently be used')
        return power
Tweedie specific function to estimate scale and the variance parameter.
The variance parameter is also referred to as p, xi, or shape.

Parameters
----------
mu : array_like
    Fitted mean response variable
method : str, defaults to 'brentq'
    Scipy optimizer used to solve the Pearson equation. Only brentq
    currently supported.
low : float, optional
    Low end of the bracketing interval [a,b] to be used in the search
    for the power. Defaults to 1.01.
high : float, optional
    High end of the bracketing interval [a,b] to be used in the search
    for the power. Defaults to 5.

Returns
-------
power : float
    The estimated shape or power.
estimate_tweedie_power
python
statsmodels/statsmodels
statsmodels/genmod/generalized_linear_model.py
https://github.com/statsmodels/statsmodels/blob/master/statsmodels/genmod/generalized_linear_model.py
BSD-3-Clause
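A usage sketch for the Tweedie power search: fit with a provisional power, then re-estimate it from the fitted mean. The data below are simulated and only loosely Tweedie-like; since brentq needs a sign change on the bracket [1.01, 5], the call can raise ValueError on data that do not fit the family:

import numpy as np
import statsmodels.api as sm

rng = np.random.default_rng(3)
x = sm.add_constant(rng.normal(size=(300, 1)))
# nonnegative response with exact zeros, roughly Tweedie-like
y = rng.poisson(np.exp(x @ np.array([0.5, 0.3]))) * rng.gamma(2., 0.5, size=300)

model = sm.GLM(y, x, family=sm.families.Tweedie(var_power=1.5))
res = model.fit()
power = model.estimate_tweedie_power(res.fittedvalues)
print(power)  # refined variance power found in (1.01, 5)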
def predict(self, params, exog=None, exposure=None, offset=None,
            which="mean", linear=None):
        """
        Return predicted values for a design matrix

        Parameters
        ----------
        params : array_like
            Parameters / coefficients of a GLM.
        exog : array_like, optional
            Design / exogenous data. If exog is None, model exog is used.
        exposure : array_like, optional
            Exposure time values, only can be used with the log link
            function.  See notes for details.
        offset : array_like, optional
            Offset values.  See notes for details.
        which : 'mean', 'linear', 'var_unscaled' (optional)
            Statistic to predict. Default is 'mean'.

            - 'mean' returns the conditional expectation of endog E(y | x),
              i.e. inverse of the model's link function of linear predictor.
            - 'linear' returns the linear predictor of the mean function.
            - 'var_unscaled' variance of endog implied by the likelihood
              model. This does not include scale or var_weights.

        linear : bool
            The ``linear`` keyword is deprecated and will be removed,
            use ``which`` keyword instead.
            If True, returns the linear predicted values.  If False or None,
            then the statistic specified by ``which`` will be returned.

        Returns
        -------
        An array of fitted values

        Notes
        -----
        Any `exposure` and `offset` provided here take precedence over
        the `exposure` and `offset` used in the model fit.  If `exog`
        is passed as an argument here, then any `exposure` and
        `offset` values in the fit will be ignored.

        Exposure values must be strictly positive.
        """
        if linear is not None:
            msg = 'linear keyword is deprecated, use which="linear"'
            warnings.warn(msg, FutureWarning)
            if linear is True:
                which = "linear"

        # Use fit offset if appropriate
        if offset is None and exog is None and hasattr(self, 'offset'):
            offset = self.offset
        elif offset is None:
            offset = 0.

        if exposure is not None and not isinstance(self.family.link,
                                                   families.links.Log):
            raise ValueError("exposure can only be used with the log link "
                             "function")

        # Use fit exposure if appropriate
        if exposure is None and exog is None and hasattr(self, 'exposure'):
            # Already logged
            exposure = self.exposure
        elif exposure is None:
            exposure = 0.
        else:
            exposure = np.log(np.asarray(exposure))

        if exog is None:
            exog = self.exog

        linpred = np.dot(exog, params) + offset + exposure

        if which == "mean":
            return self.family.fitted(linpred)
        elif which == "linear":
            return linpred
        elif which == "var_unscaled":
            mean = self.family.fitted(linpred)
            var_ = self.family.variance(mean)
            return var_
        else:
            raise ValueError(f'The which value "{which}" is not recognized')
Return predicted values for a design matrix

Parameters
----------
params : array_like
    Parameters / coefficients of a GLM.
exog : array_like, optional
    Design / exogenous data. If exog is None, model exog is used.
exposure : array_like, optional
    Exposure time values, only can be used with the log link function.
    See notes for details.
offset : array_like, optional
    Offset values.  See notes for details.
which : 'mean', 'linear', 'var_unscaled' (optional)
    Statistic to predict. Default is 'mean'.

    - 'mean' returns the conditional expectation of endog E(y | x),
      i.e. inverse of the model's link function of linear predictor.
    - 'linear' returns the linear predictor of the mean function.
    - 'var_unscaled' variance of endog implied by the likelihood model.
      This does not include scale or var_weights.

linear : bool
    The ``linear`` keyword is deprecated and will be removed, use
    ``which`` keyword instead.
    If True, returns the linear predicted values.  If False or None,
    then the statistic specified by ``which`` will be returned.

Returns
-------
An array of fitted values

Notes
-----
Any `exposure` and `offset` provided here take precedence over the
`exposure` and `offset` used in the model fit.  If `exog` is passed as
an argument here, then any `exposure` and `offset` values in the fit
will be ignored.

Exposure values must be strictly positive.
predict
python
statsmodels/statsmodels
statsmodels/genmod/generalized_linear_model.py
https://github.com/statsmodels/statsmodels/blob/master/statsmodels/genmod/generalized_linear_model.py
BSD-3-Clause
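A short usage sketch of the `which` options on simulated data. For the log link, `which="mean"` should be the exponential of `which="linear"`; the data and names are illustrative:

import numpy as np
import statsmodels.api as sm

rng = np.random.default_rng(4)
x = sm.add_constant(rng.normal(size=(50, 1)))
y = rng.poisson(np.exp(x @ np.array([0.2, 0.4])))
model = sm.GLM(y, x, family=sm.families.Poisson())
res = model.fit()

mean = model.predict(res.params, which="mean")
linpred = model.predict(res.params, which="linear")
print(np.allclose(mean, np.exp(linpred)))  # expected: True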
def get_distribution(self, params, scale=None, exog=None, exposure=None,
                     offset=None, var_weights=1., n_trials=1.):
        """
        Return an instance of the predictive distribution.

        Parameters
        ----------
        params : array_like
            The model parameters.
        scale : scalar
            The scale parameter.
        exog : array_like
            The predictor variable matrix.
        offset : array_like or None
            Offset variable for predicted mean.
        exposure : array_like or None
            Log(exposure) will be added to the linear prediction.
        var_weights : array_like
            1d array of variance (analytic) weights. The default is None.
        n_trials : int
            Number of trials for the binomial distribution. The default is 1
            which corresponds to a Bernoulli random variable.

        Returns
        -------
        gen
            Instance of a scipy frozen distribution based on estimated
            parameters.
            Use the ``rvs`` method to generate random values.

        Notes
        -----
        Due to the behavior of ``scipy.stats.distributions objects``, the
        returned random number generator must be called with ``gen.rvs(n)``
        where ``n`` is the number of observations in the data set used to
        fit the model.  If any other value is used for ``n``, misleading
        results will be produced.
        """
        scale = float_like(scale, "scale", optional=True)
        # use scale=1, independent of QMLE scale for discrete
        if isinstance(self.family, (families.Binomial, families.Poisson,
                                    families.NegativeBinomial)):
            scale = 1.

        mu = self.predict(params, exog, exposure, offset, which="mean")

        kwds = {}
        if (np.any(n_trials != 1) and
                isinstance(self.family, families.Binomial)):

            kwds["n_trials"] = n_trials

        distr = self.family.get_distribution(mu, scale,
                                             var_weights=var_weights, **kwds)
        return distr
Return an instance of the predictive distribution.

Parameters
----------
params : array_like
    The model parameters.
scale : scalar
    The scale parameter.
exog : array_like
    The predictor variable matrix.
offset : array_like or None
    Offset variable for predicted mean.
exposure : array_like or None
    Log(exposure) will be added to the linear prediction.
var_weights : array_like
    1d array of variance (analytic) weights. The default is None.
n_trials : int
    Number of trials for the binomial distribution. The default is 1
    which corresponds to a Bernoulli random variable.

Returns
-------
gen
    Instance of a scipy frozen distribution based on estimated
    parameters.  Use the ``rvs`` method to generate random values.

Notes
-----
Due to the behavior of ``scipy.stats.distributions objects``, the
returned random number generator must be called with ``gen.rvs(n)``
where ``n`` is the number of observations in the data set used to fit
the model.  If any other value is used for ``n``, misleading results
will be produced.
get_distribution
python
statsmodels/statsmodels
statsmodels/genmod/generalized_linear_model.py
https://github.com/statsmodels/statsmodels/blob/master/statsmodels/genmod/generalized_linear_model.py
BSD-3-Clause
def fit(self, start_params=None, maxiter=100, method='IRLS', tol=1e-8,
        scale=None, cov_type='nonrobust', cov_kwds=None, use_t=None,
        full_output=True, disp=False, max_start_irls=3, **kwargs):
        """
        Fits a generalized linear model for a given family.

        Parameters
        ----------
        start_params : array_like, optional
            Initial guess of the solution for the loglikelihood maximization.
            The default is family-specific and is given by the
            ``family.starting_mu(endog)``. If start_params is given then the
            initial mean will be calculated as ``np.dot(exog, start_params)``.
        maxiter : int, optional
            Default is 100.
        method : str
            Default is 'IRLS' for iteratively reweighted least squares.
            Otherwise gradient optimization is used.
        tol : float
            Convergence tolerance.  Default is 1e-8.
        scale : str or float, optional
            `scale` can be 'X2', 'dev', or a float.
            The default value is None, which uses `X2` for Gamma, Gaussian,
            and Inverse Gaussian.  `X2` is Pearson's chi-square divided by
            `df_resid`.  The default is 1 for the Binomial and Poisson
            families.  `dev` is the deviance divided by df_resid.
        cov_type : str
            The type of parameter estimate covariance matrix to compute.
        cov_kwds : dict-like
            Extra arguments for calculating the covariance of the parameter
            estimates.
        use_t : bool
            If True, the Student t-distribution is used for inference.
        full_output : bool, optional
            Set to True to have all available output in the Results object's
            mle_retvals attribute. The output is dependent on the solver.
            See LikelihoodModelResults notes section for more information.
            Not used if method is IRLS.
        disp : bool, optional
            Set to True to print convergence messages.  Not used if method
            is IRLS.
        max_start_irls : int
            The number of IRLS iterations used to obtain starting values for
            gradient optimization.  Only relevant if `method` is set to
            something other than 'IRLS'.
        atol : float, optional
            (available with IRLS fits) The absolute tolerance criterion that
            must be satisfied. Defaults to ``tol``. Convergence is attained
            when: :math:`rtol * prior + atol > abs(current - prior)`
        rtol : float, optional
            (available with IRLS fits) The relative tolerance criterion that
            must be satisfied. Defaults to 0 which means ``rtol`` is not
            used. Convergence is attained when:
            :math:`rtol * prior + atol > abs(current - prior)`
        tol_criterion : str, optional
            (available with IRLS fits) Defaults to ``'deviance'``. Can
            optionally be ``'params'``.
        wls_method : str, optional
            (available with IRLS fits) options are 'lstsq', 'pinv' and 'qr'.
            Specifies which linear algebra function to use for the irls
            optimization. Default is `lstsq` which uses the same underlying
            svd based approach as 'pinv', but is faster during iterations.
            'lstsq' and 'pinv' regularize the estimate in singular and
            near-singular cases by truncating small singular values based on
            `rcond` of the respective numpy.linalg function. 'qr' is only
            valid for cases that are not singular nor near-singular.
        optim_hessian : {'eim', 'oim'}, optional
            (available with scipy optimizer fits) When 'oim'--the default--
            the observed Hessian is used in fitting. 'eim' is the expected
            Hessian.  This may provide more stable fits, but adds the
            assumption that the Hessian is correctly specified.

        Notes
        -----
        If method is 'IRLS', then an additional keyword 'attach_wls' is
        available.  This is currently for internal use only and might change
        in future versions.  If `attach_wls` is true, then the final WLS
        instance of the IRLS iteration is attached to the results instance
        as `results_wls` attribute.
        """
        if isinstance(scale, str):
            scale = scale.lower()
            if scale not in ("x2", "dev"):
                raise ValueError(
                    "scale must be either X2 or dev when a string."
                )
        elif scale is not None:
            # GH-6627
            try:
                scale = float(scale)
            except Exception as exc:
                raise type(exc)(
                    "scale must be a float if given and not a string."
                )
        self.scaletype = scale

        if method.lower() == "irls":
            if cov_type.lower() == 'eim':
                cov_type = 'nonrobust'
            return self._fit_irls(start_params=start_params, maxiter=maxiter,
                                  tol=tol, scale=scale, cov_type=cov_type,
                                  cov_kwds=cov_kwds, use_t=use_t, **kwargs)
        else:
            self._optim_hessian = kwargs.get('optim_hessian')
            if self._optim_hessian is not None:
                del kwargs['optim_hessian']
            self._tmp_like_exog = np.empty_like(self.exog, dtype=float)
            fit_ = self._fit_gradient(start_params=start_params,
                                      method=method,
                                      maxiter=maxiter,
                                      tol=tol, scale=scale,
                                      full_output=full_output,
                                      disp=disp, cov_type=cov_type,
                                      cov_kwds=cov_kwds, use_t=use_t,
                                      max_start_irls=max_start_irls,
                                      **kwargs)
            del self._optim_hessian
            del self._tmp_like_exog
            return fit_
Fits a generalized linear model for a given family.

Parameters
----------
start_params : array_like, optional
    Initial guess of the solution for the loglikelihood maximization.
    The default is family-specific and is given by the
    ``family.starting_mu(endog)``. If start_params is given then the
    initial mean will be calculated as ``np.dot(exog, start_params)``.
maxiter : int, optional
    Default is 100.
method : str
    Default is 'IRLS' for iteratively reweighted least squares.
    Otherwise gradient optimization is used.
tol : float
    Convergence tolerance.  Default is 1e-8.
scale : str or float, optional
    `scale` can be 'X2', 'dev', or a float.  The default value is None,
    which uses `X2` for Gamma, Gaussian, and Inverse Gaussian.  `X2` is
    Pearson's chi-square divided by `df_resid`.  The default is 1 for
    the Binomial and Poisson families.  `dev` is the deviance divided
    by df_resid.
cov_type : str
    The type of parameter estimate covariance matrix to compute.
cov_kwds : dict-like
    Extra arguments for calculating the covariance of the parameter
    estimates.
use_t : bool
    If True, the Student t-distribution is used for inference.
full_output : bool, optional
    Set to True to have all available output in the Results object's
    mle_retvals attribute. The output is dependent on the solver.  See
    LikelihoodModelResults notes section for more information.  Not
    used if method is IRLS.
disp : bool, optional
    Set to True to print convergence messages.  Not used if method is
    IRLS.
max_start_irls : int
    The number of IRLS iterations used to obtain starting values for
    gradient optimization.  Only relevant if `method` is set to
    something other than 'IRLS'.
atol : float, optional
    (available with IRLS fits) The absolute tolerance criterion that
    must be satisfied. Defaults to ``tol``. Convergence is attained
    when: :math:`rtol * prior + atol > abs(current - prior)`
rtol : float, optional
    (available with IRLS fits) The relative tolerance criterion that
    must be satisfied. Defaults to 0 which means ``rtol`` is not used.
    Convergence is attained when:
    :math:`rtol * prior + atol > abs(current - prior)`
tol_criterion : str, optional
    (available with IRLS fits) Defaults to ``'deviance'``. Can
    optionally be ``'params'``.
wls_method : str, optional
    (available with IRLS fits) options are 'lstsq', 'pinv' and 'qr'.
    Specifies which linear algebra function to use for the irls
    optimization. Default is `lstsq` which uses the same underlying svd
    based approach as 'pinv', but is faster during iterations. 'lstsq'
    and 'pinv' regularize the estimate in singular and near-singular
    cases by truncating small singular values based on `rcond` of the
    respective numpy.linalg function. 'qr' is only valid for cases that
    are not singular nor near-singular.
optim_hessian : {'eim', 'oim'}, optional
    (available with scipy optimizer fits) When 'oim'--the default--the
    observed Hessian is used in fitting. 'eim' is the expected Hessian.
    This may provide more stable fits, but adds the assumption that the
    Hessian is correctly specified.

Notes
-----
If method is 'IRLS', then an additional keyword 'attach_wls' is
available.  This is currently for internal use only and might change in
future versions.  If `attach_wls` is true, then the final WLS instance
of the IRLS iteration is attached to the results instance as
`results_wls` attribute.
fit
python
statsmodels/statsmodels
statsmodels/genmod/generalized_linear_model.py
https://github.com/statsmodels/statsmodels/blob/master/statsmodels/genmod/generalized_linear_model.py
BSD-3-Clause
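A sketch contrasting the two fitting paths described above. On well-behaved simulated data, the default IRLS path and a scipy gradient path should agree to optimizer tolerance (names and data are illustrative):

import numpy as np
import statsmodels.api as sm

rng = np.random.default_rng(5)
x = sm.add_constant(rng.normal(size=(200, 2)))
y = rng.poisson(np.exp(x @ np.array([0.3, 0.2, -0.1])))
model = sm.GLM(y, x, family=sm.families.Poisson())

res_irls = model.fit()               # default IRLS path
res_grad = model.fit(method="bfgs")  # scipy gradient path
print(np.allclose(res_irls.params, res_grad.params, atol=1e-4))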
def _fit_gradient(self, start_params=None, method="newton",
                  maxiter=100, tol=1e-8, full_output=True,
                  disp=True, scale=None, cov_type='nonrobust',
                  cov_kwds=None, use_t=None, max_start_irls=3,
                  **kwargs):
        """
        Fits a generalized linear model for a given family iteratively
        using the scipy gradient optimizers.
        """
        # fix scale during optimization, see #4616
        scaletype = self.scaletype
        self.scaletype = 1.

        if (max_start_irls > 0) and (start_params is None):
            irls_rslt = self._fit_irls(start_params=start_params,
                                       maxiter=max_start_irls,
                                       tol=tol, scale=1.,
                                       cov_type='nonrobust', cov_kwds=None,
                                       use_t=None, **kwargs)
            start_params = irls_rslt.params
            del irls_rslt

        rslt = super().fit(start_params=start_params,
                           maxiter=maxiter, full_output=full_output,
                           method=method, disp=disp, **kwargs)

        # reset scaletype to original
        self.scaletype = scaletype

        mu = self.predict(rslt.params)
        scale = self.estimate_scale(mu)

        if rslt.normalized_cov_params is None:
            cov_p = None
        else:
            cov_p = rslt.normalized_cov_params / scale

        if cov_type.lower() == 'eim':
            oim = False
            cov_type = 'nonrobust'
        else:
            oim = True

        try:
            cov_p = np.linalg.inv(
                -self.hessian(rslt.params, observed=oim)) / scale
        except LinAlgError:
            warnings.warn('Inverting hessian failed, no bse or cov_params '
                          'available', HessianInversionWarning)
            cov_p = None

        results_class = getattr(self, '_results_class', GLMResults)
        results_class_wrapper = getattr(self, '_results_class_wrapper',
                                        GLMResultsWrapper)
        glm_results = results_class(self, rslt.params, cov_p, scale,
                                    cov_type=cov_type, cov_kwds=cov_kwds,
                                    use_t=use_t)

        # TODO: iteration count is not always available
        history = {'iteration': 0}
        if full_output:
            glm_results.mle_retvals = rslt.mle_retvals
            if 'iterations' in rslt.mle_retvals:
                history['iteration'] = rslt.mle_retvals['iterations']
        glm_results.method = method
        glm_results.fit_history = history

        return results_class_wrapper(glm_results)
Fits a generalized linear model for a given family iteratively using the scipy gradient optimizers.
_fit_gradient
python
statsmodels/statsmodels
statsmodels/genmod/generalized_linear_model.py
https://github.com/statsmodels/statsmodels/blob/master/statsmodels/genmod/generalized_linear_model.py
BSD-3-Clause
def _fit_irls(self, start_params=None, maxiter=100, tol=1e-8,
              scale=None, cov_type='nonrobust', cov_kwds=None,
              use_t=None, **kwargs):
        """
        Fits a generalized linear model for a given family using
        iteratively reweighted least squares (IRLS).
        """
        attach_wls = kwargs.pop('attach_wls', False)
        atol = kwargs.get('atol')
        rtol = kwargs.get('rtol', 0.)
        tol_criterion = kwargs.get('tol_criterion', 'deviance')
        wls_method = kwargs.get('wls_method', 'lstsq')
        atol = tol if atol is None else atol

        endog = self.endog
        wlsexog = self.exog
        if start_params is None:
            start_params = np.zeros(self.exog.shape[1])
            mu = self.family.starting_mu(self.endog)
            lin_pred = self.family.predict(mu)
        else:
            lin_pred = np.dot(wlsexog, start_params) + self._offset_exposure
            mu = self.family.fitted(lin_pred)
        self.scale = self.estimate_scale(mu)
        dev = self.family.deviance(self.endog, mu, self.var_weights,
                                   self.freq_weights, self.scale)
        if np.isnan(dev):
            raise ValueError("The first guess on the deviance function "
                             "returned a nan.  This could be a boundary "
                             "problem and should be reported.")

        # first guess on the deviance is assumed to be scaled by 1.
        # params are none to start, so they line up with the deviance
        history = dict(params=[np.inf, start_params],
                       deviance=[np.inf, dev])
        converged = False
        criterion = history[tol_criterion]
        # This special case is used to get the likelihood for a specific
        # params vector.
        if maxiter == 0:
            mu = self.family.fitted(lin_pred)
            self.scale = self.estimate_scale(mu)
            wls_results = lm.RegressionResults(self, start_params, None)
            iteration = 0
        for iteration in range(maxiter):
            self.weights = (self.iweights * self.n_trials *
                            self.family.weights(mu))
            wlsendog = (lin_pred +
                        self.family.link.deriv(mu) * (self.endog - mu) -
                        self._offset_exposure)
            wls_mod = reg_tools._MinimalWLS(wlsendog, wlsexog,
                                            self.weights, check_endog=True,
                                            check_weights=True)
            wls_results = wls_mod.fit(method=wls_method)
            lin_pred = np.dot(self.exog, wls_results.params)
            lin_pred += self._offset_exposure
            mu = self.family.fitted(lin_pred)
            history = self._update_history(wls_results, mu, history)
            self.scale = self.estimate_scale(mu)
            if endog.squeeze().ndim == 1 and np.allclose(mu - endog, 0):
                msg = ("Perfect separation or prediction detected, "
                       "parameter may not be identified")
                warnings.warn(msg, category=PerfectSeparationWarning)
            converged = _check_convergence(criterion, iteration + 1, atol,
                                           rtol)
            if converged:
                break
        self.mu = mu

        if maxiter > 0:  # Only if iterative used
            wls_method2 = 'pinv' if wls_method == 'lstsq' else wls_method
            wls_model = lm.WLS(wlsendog, wlsexog, self.weights)
            wls_results = wls_model.fit(method=wls_method2)

        glm_results = GLMResults(self, wls_results.params,
                                 wls_results.normalized_cov_params,
                                 self.scale,
                                 cov_type=cov_type, cov_kwds=cov_kwds,
                                 use_t=use_t)

        glm_results.method = "IRLS"
        glm_results.mle_settings = {}
        glm_results.mle_settings['wls_method'] = wls_method
        glm_results.mle_settings['optimizer'] = glm_results.method
        if (maxiter > 0) and (attach_wls is True):
            glm_results.results_wls = wls_results
        history['iteration'] = iteration + 1
        glm_results.fit_history = history
        glm_results.converged = converged
        return GLMResultsWrapper(glm_results)
Fits a generalized linear model for a given family using iteratively reweighted least squares (IRLS).
_fit_irls
python
statsmodels/statsmodels
statsmodels/genmod/generalized_linear_model.py
https://github.com/statsmodels/statsmodels/blob/master/statsmodels/genmod/generalized_linear_model.py
BSD-3-Clause
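To make the working-response algebra in `_fit_irls` concrete, here is a minimal standalone IRLS loop for a Poisson model with log link, no offsets and no weights. It mirrors the update `z = eta + g'(mu) * (y - mu)` with WLS weights `1 / (g'(mu)**2 * V(mu))`; this is a teaching sketch under those simplifying assumptions, not the library implementation.

import numpy as np

def irls_poisson(y, X, maxiter=25, tol=1e-10):
    beta = np.zeros(X.shape[1])
    eta = X @ beta
    for _ in range(maxiter):
        mu = np.exp(eta)                 # inverse of the log link
        # log link: g'(mu) = 1/mu; Poisson variance: V(mu) = mu
        z = eta + (y - mu) / mu          # working response
        w = mu                           # 1 / (g'(mu)**2 * V(mu))
        wx = X * w[:, None]
        beta_new = np.linalg.solve(X.T @ wx, wx.T @ z)  # weighted LS step
        if np.max(np.abs(beta_new - beta)) < tol:
            beta = beta_new
            break
        beta = beta_new
        eta = X @ beta
    return beta

On simulated Poisson data this should reproduce `sm.GLM(y, X, family=sm.families.Poisson()).fit().params` to tolerance.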
def fit_constrained(self, constraints, start_params=None, **fit_kwds):
        """fit the model subject to linear equality constraints

        The constraints are of the form   `R params = q`
        where R is the constraint_matrix and q is the vector of
        constraint_values.

        The estimation creates a new model with transformed design matrix,
        exog, and converts the results back to the original parameterization.

        Parameters
        ----------
        constraints : formula expression or tuple
            If it is a tuple, then the constraint needs to be given by two
            arrays (constraint_matrix, constraint_value), i.e. (R, q).
            Otherwise, the constraints can be given as strings or list of
            strings.
            see t_test for details
        start_params : None or array_like
            starting values for the optimization. `start_params` needs to
            be given in the original parameter space and are internally
            transformed.
        **fit_kwds : keyword arguments
            fit_kwds are used in the optimization of the transformed model.

        Returns
        -------
        results : Results instance
        """
        from statsmodels.base._constraints import (
            LinearConstraints,
            fit_constrained,
        )
        from statsmodels.formula._manager import FormulaManager

        # same pattern as in base.LikelihoodModel.t_test
        lc = FormulaManager().get_linear_constraints(constraints,
                                                     self.exog_names)
        R, q = lc.constraint_matrix, lc.constraint_values

        # TODO: add start_params option, need access to transformation
        # fit_constrained needs to do the transformation
        params, cov, res_constr = fit_constrained(self, R, q,
                                                  start_params=start_params,
                                                  fit_kwds=fit_kwds)
        # create dummy results Instance, TODO: wire up properly
        res = self.fit(start_params=params, maxiter=0)  # we get a wrapper back
        res._results.params = params
        res._results.cov_params_default = cov
        cov_type = fit_kwds.get('cov_type', 'nonrobust')
        if cov_type != 'nonrobust':
            res._results.normalized_cov_params = cov / res_constr.scale
        else:
            res._results.normalized_cov_params = None
        res._results.scale = res_constr.scale
        k_constr = len(q)
        res._results.df_resid += k_constr
        res._results.df_model -= k_constr
        res._results.constraints = LinearConstraints.from_formula_parser(lc)
        res._results.k_constr = k_constr
        res._results.results_constrained = res_constr
        return res
fit the model subject to linear equality constraints

The constraints are of the form `R params = q` where R is the
constraint_matrix and q is the vector of constraint_values.

The estimation creates a new model with transformed design matrix,
exog, and converts the results back to the original parameterization.

Parameters
----------
constraints : formula expression or tuple
    If it is a tuple, then the constraint needs to be given by two
    arrays (constraint_matrix, constraint_value), i.e. (R, q).
    Otherwise, the constraints can be given as strings or list of
    strings.  see t_test for details
start_params : None or array_like
    starting values for the optimization. `start_params` needs to be
    given in the original parameter space and are internally
    transformed.
**fit_kwds : keyword arguments
    fit_kwds are used in the optimization of the transformed model.

Returns
-------
results : Results instance
fit_constrained
python
statsmodels/statsmodels
statsmodels/genmod/generalized_linear_model.py
https://github.com/statsmodels/statsmodels/blob/master/statsmodels/genmod/generalized_linear_model.py
BSD-3-Clause
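A usage sketch for linear equality constraints, using the string form that `fit_constrained` forwards to the formula machinery. The data are simulated, and the constraint assumes statsmodels' default generated names ('const', 'x1', 'x2') for array input; constraint strings must match `exog_names`:

import numpy as np
import statsmodels.api as sm

rng = np.random.default_rng(6)
x = sm.add_constant(rng.normal(size=(300, 2)))
y = rng.poisson(np.exp(x @ np.array([0.2, 0.3, 0.3])))
model = sm.GLM(y, x, family=sm.families.Poisson())

# constrain the two slope coefficients to be equal
res = model.fit_constrained("x1 = x2")
print(res.params)  # the x1 and x2 entries should now coincide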
def offset_name(self):
        """
        Name of the offset variable if available. If offset is not a
        pd.Series, defaults to 'offset'.
        """
        return self._offset_name
Name of the offset variable if available. If offset is not a pd.Series, defaults to 'offset'.
offset_name
python
statsmodels/statsmodels
statsmodels/genmod/generalized_linear_model.py
https://github.com/statsmodels/statsmodels/blob/master/statsmodels/genmod/generalized_linear_model.py
BSD-3-Clause
def exposure_name(self):
        """
        Name of the exposure variable if available. If exposure is not a
        pd.Series, defaults to 'exposure'.
        """
        return self._exposure_name
Name of the exposure variable if available. If exposure is not a pd.Series, defaults to 'exposure'.
exposure_name
python
statsmodels/statsmodels
statsmodels/genmod/generalized_linear_model.py
https://github.com/statsmodels/statsmodels/blob/master/statsmodels/genmod/generalized_linear_model.py
BSD-3-Clause
def freq_weights_name(self):
        """
        Name of the freq weights variable if available. If freq_weights
        is not a pd.Series, defaults to 'freq_weights'.
        """
        return self._freq_weights_name
Name of the freq weights variable if available. If freq_weights is not a pd.Series, defaults to 'freq_weights'.
freq_weights_name
python
statsmodels/statsmodels
statsmodels/genmod/generalized_linear_model.py
https://github.com/statsmodels/statsmodels/blob/master/statsmodels/genmod/generalized_linear_model.py
BSD-3-Clause
def var_weights_name(self):
        """
        Name of var weights variable if available. If var_weights is
        not a pd.Series, defaults to 'var_weights'.
        """
        return self._var_weights_name
Name of var weights variable if available. If var_weights is not a pd.Series, defaults to 'var_weights'.
var_weights_name
python
statsmodels/statsmodels
statsmodels/genmod/generalized_linear_model.py
https://github.com/statsmodels/statsmodels/blob/master/statsmodels/genmod/generalized_linear_model.py
BSD-3-Clause
def resid_response(self):
        """
        Response residuals.  The response residuals are defined as
        `endog` - `fittedvalues`
        """
        return self._n_trials * (self._endog - self.mu)
Response residuals. The response residuals are defined as `endog` - `fittedvalues`
resid_response
python
statsmodels/statsmodels
statsmodels/genmod/generalized_linear_model.py
https://github.com/statsmodels/statsmodels/blob/master/statsmodels/genmod/generalized_linear_model.py
BSD-3-Clause