# From statsmodels/discrete/count_model.py (statsmodels/statsmodels, BSD-3-Clause)

def _deriv_score_obs_dendog(self, params):
    """derivative of score_obs w.r.t. endog

    Parameters
    ----------
    params : ndarray
        parameter at which score is evaluated

    Returns
    -------
    derivative : ndarray_2d
        The derivative of the score_obs with respect to endog.
    """
    raise NotImplementedError

    # The below currently does not work, discontinuity at zero
    # see https://github.com/statsmodels/statsmodels/pull/7951#issuecomment-996355875  # noqa
    from statsmodels.tools.numdiff import _approx_fprime_scalar
    endog_original = self.endog

    def f(y):
        if y.ndim == 2 and y.shape[1] == 1:
            y = y[:, 0]
        self.endog = y
        self.model_main.endog = y
        sf = self.score_obs(params)
        self.endog = endog_original
        self.model_main.endog = endog_original
        return sf

    ds = _approx_fprime_scalar(self.endog[:, None], f, epsilon=1e-2)

    return ds
def _predict_var(self, params, mu, prob_infl):
    """predict values for conditional variance V(endog | exog)

    Parameters
    ----------
    params : array_like
        The model parameters. This is only used to extract extra params
        like dispersion parameter.
    mu : array_like
        Array of mean predictions for main model.
    prob_infl : array_like
        Array of predicted probabilities of zero-inflation `w`.

    Returns
    -------
    Predicted conditional variance.
    """
    w = prob_infl
    var_ = (1 - w) * mu * (1 + w * mu)
    return var_
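# A quick numerical check of the closed form above: for a zero-inflated
# Poisson, E(Y) = (1 - w) mu and E(Y^2) = (1 - w)(mu + mu**2), which gives
# Var(Y) = (1 - w) mu (1 + w mu). A minimal sketch with hypothetical values:
import numpy as np

mu = np.array([1.5, 3.0])
w = np.array([0.2, 0.4])

mean = (1 - w) * mu
var_from_moments = (1 - w) * (mu + mu**2) - mean**2

# matches the formula used in _predict_var
assert np.allclose(var_from_moments, (1 - w) * mu * (1 + w * mu))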
def get_distribution(self, params, exog=None, exog_infl=None,
                     exposure=None, offset=None):
    """Get frozen instance of distribution based on predicted parameters.

    Parameters
    ----------
    params : array_like
        The parameters of the model.
    exog : ndarray, optional
        Explanatory variables for the main count model.
        If ``exog`` is None, then the data from the model will be used.
    exog_infl : ndarray, optional
        Explanatory variables for the zero-inflation model.
        ``exog_infl`` has to be provided if ``exog`` was provided unless
        ``exog_infl`` in the model is only a constant.
    offset : ndarray, optional
        Offset is added to the linear predictor of the mean function with
        coefficient equal to 1.
        Default is zero if exog is not None, and the model offset if exog
        is None.
    exposure : ndarray, optional
        Log(exposure) is added to the linear predictor of the mean
        function with coefficient equal to 1. If exposure is specified,
        then it will be logged by the method. The user does not need to
        log it first.
        Default is one if exog is not None, and it is the model exposure
        if exog is None.

    Returns
    -------
    Instance of frozen scipy distribution subclass.
    """
    mu = self.predict(params, exog=exog, exog_infl=exog_infl,
                      exposure=exposure, offset=offset, which="mean-main")
    w = self.predict(params, exog=exog, exog_infl=exog_infl,
                     exposure=exposure, offset=offset, which="prob-main")

    # distr = self.distribution(mu[:, None], 1 - w[:, None])
    distr = self.distribution(mu, 1 - w)
    return distr
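# Usage sketch for ``get_distribution`` with simulated data. This assumes
# ``ZeroInflatedPoisson`` from statsmodels.discrete.count_model (the module
# this method comes from); data and coefficients are hypothetical.
import numpy as np
import statsmodels.api as sm
from statsmodels.discrete.count_model import ZeroInflatedPoisson

rng = np.random.default_rng(12345)
nobs = 500
exog = sm.add_constant(rng.normal(size=nobs))
y = rng.poisson(np.exp(0.5 + 0.5 * exog[:, 1]))
y[rng.uniform(size=nobs) < 0.2] = 0        # add excess zeros

mod_zip = ZeroInflatedPoisson(y, exog)     # constant inflation probability
res_zip = mod_zip.fit(disp=False)

distr = mod_zip.get_distribution(res_zip.params)
print(distr.mean()[:5])    # conditional means E(y | x)
print(distr.pmf(0)[:5])    # predicted probabilities of a zero count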
def get_influence(self):
    """
    Influence and outlier measures

    See notes section for influence measures that do not apply for
    zero inflated models.

    Returns
    -------
    MLEInfluence
        The instance has methods to calculate the main influence and
        outlier measures as attributes.

    See Also
    --------
    statsmodels.stats.outliers_influence.MLEInfluence

    Notes
    -----
    ZeroInflated models have functions that are not differentiable
    with respect to sample endog if endog=0. This means that generalized
    leverage cannot be computed in the usual definition.

    Currently, both the generalized leverage, in the `hat_matrix_diag`
    attribute, and studentized residuals are not available. In the
    influence plot generalized leverage is replaced by a hat matrix
    diagonal that only takes combined exog into account, computed in the
    same way as for OLS. This is a measure for exog outliers but does not
    take specific features of the model into account.
    """
    # same as super in DiscreteResults, only added for docstring
    from statsmodels.stats.outliers_influence import MLEInfluence
    return MLEInfluence(self)
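# Influence sketch, continuing with ``res_zip`` from the previous example.
# ``summary_frame`` and the influence plot are provided by MLEInfluence;
# for zero-inflated models the leverage column is the exog-only
# approximation described in the docstring above.
infl = res_zip.get_influence()
frame = infl.summary_frame()
print(frame.head())
fig = infl.plot_influence()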
def get_margeff(self, at='overall', method='dydx', atexog=None,
                dummy=False, count=False):
    """Get marginal effects of the fitted model.

    Not yet implemented for Zero Inflated Models
    """
    raise NotImplementedError("not yet implemented for zero inflation")
# From statsmodels/discrete/_diagnostics_count.py (statsmodels/statsmodels, BSD-3-Clause)

def _combine_bins(edge_index, x):
    """group columns into bins using sum

    This is mainly a helper function for combining probabilities into
    cells. It is similar to `np.add.reduceat(x, edge_index, axis=-1)`
    except for the treatment of the last index and last cell.

    Parameters
    ----------
    edge_index : array_like
        This defines the (zero-based) indices for the columns that are to
        be combined. Each index in `edge_index` except the last is the
        starting index for a bin. The largest index in a bin is the next
        edge_index - 1.
    x : 1d or 2d array
        Array for which columns are combined. If x is 1-dimensional then
        it will be treated as a 2-d row vector.

    Returns
    -------
    x_new : ndarray
    k_li : ndarray
        Count of columns combined in bin.

    Examples
    --------
    >>> dia.combine_bins([0, 1, 5], np.arange(4))
    (array([0, 6]), array([1, 4]))

    This aggregates into two bins with the sums of 1 and 4 elements

    >>> np.arange(4)[0].sum()
    0
    >>> np.arange(4)[1:5].sum()
    6

    If the rightmost index is smaller than len(x)+1, then the remaining
    columns will not be included.

    >>> dia.combine_bins([0, 1, 3], np.arange(4))
    (array([0, 3]), array([1, 2]))
    """
    x = np.asarray(x)
    if x.ndim == 1:
        is_1d = True
        x = x[None, :]
    else:
        is_1d = False
    xli = []
    kli = []
    for bin_idx in range(len(edge_index) - 1):
        i, j = edge_index[bin_idx: bin_idx + 2]
        xli.append(x[:, i:j].sum(1))
        kli.append(j - i)

    x_new = np.column_stack(xli)
    if is_1d:
        x_new = x_new.squeeze()
    return x_new, np.asarray(kli)
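# Sketch of the difference from ``np.add.reduceat`` noted in the docstring:
# reduceat always extends the last bin to the end of the array, while
# _combine_bins treats the last edge as a hard upper bound and drops the
# remaining columns.
import numpy as np

x = np.arange(4)                    # [0, 1, 2, 3]
edges = [0, 1, 3]

print(np.add.reduceat(x, edges))    # [0, 3, 3], tail kept as a third bin
print(_combine_bins(edges, x))      # (array([0, 3]), array([1, 2])), tail dropped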
def plot_probs(freq, probs_predicted, label='predicted', upp_xlim=None,
               fig=None):
    """diagnostic plots for comparing two lists of discrete probabilities

    Parameters
    ----------
    freq, probs_predicted : nd_arrays
        Two arrays of probabilities. These can be any probabilities for
        the same events; the default is designed for comparing predicted
        and observed probabilities.
    label : str or list of str
        If a string, then it will be used as the label for
        probs_predicted and "freq" is used for the other probabilities.
        If a list of two strings, then they are used as the labels for
        the two probability arrays.
    upp_xlim : None or int
        If it is not None, then the xlim of the first two plots are set
        to (0, upp_xlim), otherwise the matplotlib default is used.
    fig : None or matplotlib figure instance
        If fig is provided, then the axes will be added to it as (3, 1)
        subplots, otherwise a matplotlib figure instance is created.

    Returns
    -------
    Figure
        The figure contains 3 subplots with probabilities, cumulative
        probabilities and a PP-plot.
    """
    if isinstance(label, list):
        label0, label1 = label
    else:
        label0, label1 = 'freq', label

    if fig is None:
        import matplotlib.pyplot as plt
        fig = plt.figure(figsize=(8, 12))
    ax1 = fig.add_subplot(311)
    ax1.plot(freq, '-o', label=label0)
    ax1.plot(probs_predicted, '-d', label=label1)
    if upp_xlim is not None:
        ax1.set_xlim(0, upp_xlim)
    ax1.legend()
    ax1.set_title('probabilities')

    ax2 = fig.add_subplot(312)
    ax2.plot(np.cumsum(freq), '-o', label=label0)
    ax2.plot(np.cumsum(probs_predicted), '-d', label=label1)
    if upp_xlim is not None:
        ax2.set_xlim(0, upp_xlim)
    ax2.legend()
    ax2.set_title('cumulative probabilities')

    ax3 = fig.add_subplot(313)
    ax3.plot(np.cumsum(probs_predicted), np.cumsum(freq), 'o')
    ax3.plot(np.arange(len(freq)) / len(freq),
             np.arange(len(freq)) / len(freq))
    ax3.set_title('PP-plot')
    ax3.set_xlabel(label1)
    ax3.set_ylabel(label0)

    return fig
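# Usage sketch for ``plot_probs`` with hypothetical data: observed count
# frequencies against the pmf of a Poisson fitted by its sample mean.
import numpy as np
from scipy import stats

rng = np.random.default_rng(987)
y = rng.poisson(2.0, size=1000)

k = y.max() + 1
freq = np.bincount(y, minlength=k) / len(y)        # observed frequencies
probs = stats.poisson.pmf(np.arange(k), y.mean())  # fitted probabilities

fig = plot_probs(freq, probs, label='Poisson fit', upp_xlim=10)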
def test_chisquare_prob(results, probs, bin_edges=None, method=None):
    """
    chisquare test for predicted probabilities using cmt-opg

    Parameters
    ----------
    results : results instance
        Instance of a count regression results
    probs : ndarray
        Array of predicted probabilities with observations
        in rows and event counts in columns
    bin_edges : None or array
        intervals to combine several counts into cells
        see combine_bins

    Returns
    -------
    (api not stable, replace by test-results class)
    statistic : float
        chisquare statistic for the test
    p-value : float
        p-value of test
    df : int
        degrees of freedom for chisquare distribution
    extras : ???
        currently returns a tuple with some intermediate results
        (diff, res_aux)

    Notes
    -----
    Status : experimental, no verified unit tests, needs to be generalized
    currently only OPG version with auxiliary regression is implemented

    Assumes counts are np.arange(probs.shape[1]), i.e. consecutive
    integers starting at zero.

    Auxiliary regression drops the last column of binned probs to avoid
    that probabilities sum to 1.

    References
    ----------
    .. [1] Andrews, Donald W. K. 1988a. “Chi-Square Diagnostic Tests for
       Econometric Models: Theory.” Econometrica 56 (6): 1419–53.
       https://doi.org/10.2307/1913105.
    .. [2] Andrews, Donald W. K. 1988b. “Chi-Square Diagnostic Tests for
       Econometric Models.” Journal of Econometrics 37 (1): 135–56.
       https://doi.org/10.1016/0304-4076(88)90079-6.
    .. [3] Manjón, M., and O. Martínez. 2014. “The Chi-Squared
       Goodness-of-Fit Test for Count-Data Models.” Stata Journal 14 (4):
       798–816.
    """
    res = results
    score_obs = results.model.score_obs(results.params)
    d_ind = (res.model.endog[:, None] == np.arange(probs.shape[1])).astype(int)
    if bin_edges is not None:
        d_ind_bins, k_bins = _combine_bins(bin_edges, d_ind)
        probs_bins, k_bins = _combine_bins(bin_edges, probs)
        k_bins = probs_bins.shape[-1]
    else:
        d_ind_bins, k_bins = d_ind, d_ind.shape[1]
        probs_bins = probs
    diff1 = d_ind_bins - probs_bins
    # diff2 = (1 - d_ind.sum(1)) - (1 - probs_bins.sum(1))
    x_aux = np.column_stack((score_obs, diff1[:, :-1]))  # diff2))
    nobs = x_aux.shape[0]
    res_aux = OLS(np.ones(nobs), x_aux).fit()

    chi2_stat = nobs * (1 - res_aux.ssr / res_aux.uncentered_tss)
    df = res_aux.model.rank - score_obs.shape[1]
    if df < k_bins - 1:
        # not a problem in general, but it can be for OPG version
        import warnings
        # TODO: Warning shows up in Monte Carlo loop, skip for now
        warnings.warn('auxiliary model is rank deficient')

    statistic = chi2_stat
    pvalue = stats.chi2.sf(chi2_stat, df)

    res = HolderTuple(
        statistic=statistic,
        pvalue=pvalue,
        df=df,
        diff1=diff1,
        res_aux=res_aux,
        distribution="chi2",
        )
    return res
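# Usage sketch with simulated data. ``predict(which="prob")`` returns the
# matrix of predicted probabilities Prob(y_i = k | x_i); this assumes a
# statsmodels version where that ``which`` option is available.
import numpy as np
import statsmodels.api as sm

rng = np.random.default_rng(42)
nobs = 1000
exog = sm.add_constant(rng.normal(size=nobs))
y = rng.poisson(np.exp(0.2 + 0.3 * exog[:, 1]))

res_p = sm.Poisson(y, exog).fit(disp=False)
probs = res_p.predict(which="prob")

# test the cells for counts 0 and 1 (edges 0, 1, 2)
t = test_chisquare_prob(res_p, probs, bin_edges=np.arange(3))
print(t.statistic, t.pvalue, t.df)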
def test_poisson_dispersion(results, method="all", _old=False):
    """Score/LM type tests for Poisson variance assumptions

    Null Hypothesis is

        H0: var(y) = E(y) and assuming E(y) is correctly specified
        H1: var(y) ~= E(y)

    The tests are based on the constrained model, i.e. the Poisson model.
    The tests differ in their assumed alternatives, and in their
    maintained assumptions.

    Parameters
    ----------
    results : Poisson results instance
        This can be a results instance for either a discrete Poisson or a
        GLM with family Poisson.
    method : str
        Not used yet. Currently results for all methods are returned.
    _old : bool
        Temporary keyword for backwards compatibility, will be removed
        in future version of statsmodels.

    Returns
    -------
    res : instance
        The instance of DispersionResults has the hypothesis test results,
        statistic, pvalue, method, alternative, as main attributes and a
        summary_frame method that returns the results as pandas DataFrame.
    """
    if method not in ["all"]:
        raise ValueError(f'unknown method "{method}"')

    if hasattr(results, '_results'):
        results = results._results

    endog = results.model.endog
    nobs = endog.shape[0]  # TODO: use attribute, may need to be added
    fitted = results.predict()
    # fitted = results.fittedvalues  # discrete has linear prediction
    # this assumes Poisson
    resid2 = results.resid_response**2
    var_resid_endog = (resid2 - endog)
    var_resid_fitted = (resid2 - fitted)
    std1 = np.sqrt(2 * (fitted**2).sum())

    var_resid_endog_sum = var_resid_endog.sum()
    dean_a = var_resid_fitted.sum() / std1
    dean_b = var_resid_endog_sum / std1
    dean_c = (var_resid_endog / fitted).sum() / np.sqrt(2 * nobs)

    pval_dean_a = 2 * stats.norm.sf(np.abs(dean_a))
    pval_dean_b = 2 * stats.norm.sf(np.abs(dean_b))
    pval_dean_c = 2 * stats.norm.sf(np.abs(dean_c))

    results_all = [[dean_a, pval_dean_a],
                   [dean_b, pval_dean_b],
                   [dean_c, pval_dean_c]]
    description = [['Dean A', 'mu (1 + a mu)'],
                   ['Dean B', 'mu (1 + a mu)'],
                   ['Dean C', 'mu (1 + a)']]

    # Cameron and Trivedi auxiliary regression, page 78, 1989 count book
    endog_v = var_resid_endog / fitted
    res_ols_nb2 = OLS(endog_v, fitted).fit(use_t=False)
    stat_ols_nb2 = res_ols_nb2.tvalues[0]
    pval_ols_nb2 = res_ols_nb2.pvalues[0]
    results_all.append([stat_ols_nb2, pval_ols_nb2])
    description.append(['CT nb2', 'mu (1 + a mu)'])

    # CT nb1: regression on a constant, alternative variance mu (1 + a)
    res_ols_nb1 = OLS(endog_v, np.ones(len(endog_v))).fit(use_t=False)
    stat_ols_nb1 = res_ols_nb1.tvalues[0]
    pval_ols_nb1 = res_ols_nb1.pvalues[0]
    results_all.append([stat_ols_nb1, pval_ols_nb1])
    description.append(['CT nb1', 'mu (1 + a)'])

    endog_v = var_resid_endog / fitted
    res_ols_nb2 = OLS(endog_v, fitted).fit(cov_type='HC3', use_t=False)
    stat_ols_hc1_nb2 = res_ols_nb2.tvalues[0]
    pval_ols_hc1_nb2 = res_ols_nb2.pvalues[0]
    results_all.append([stat_ols_hc1_nb2, pval_ols_hc1_nb2])
    description.append(['CT nb2 HC3', 'mu (1 + a mu)'])

    res_ols_nb1 = OLS(endog_v, np.ones(len(endog_v))).fit(cov_type='HC3',
                                                          use_t=False)
    stat_ols_hc1_nb1 = res_ols_nb1.tvalues[0]
    pval_ols_hc1_nb1 = res_ols_nb1.pvalues[0]
    results_all.append([stat_ols_hc1_nb1, pval_ols_hc1_nb1])
    description.append(['CT nb1 HC3', 'mu (1 + a)'])

    results_all = np.array(results_all)
    if _old:
        # for backwards compatibility in 0.14, remove in later versions
        return results_all, description
    else:
        res = DispersionResults(
            statistic=results_all[:, 0],
            pvalue=results_all[:, 1],
            method=[i[0] for i in description],
            alternative=[i[1] for i in description],
            name="Poisson Dispersion Test"
            )
        return res
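# Usage sketch, continuing with the Poisson fit ``res_p`` from the
# test_chisquare_prob example above.
disp = test_poisson_dispersion(res_p)
print(disp.summary_frame())    # statistic, pvalue, method and alternative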
def _test_poisson_dispersion_generic(
        results,
        exog_new_test,
        exog_new_control=None,
        include_score=False,
        use_endog=True,
        cov_type='HC3',
        cov_kwds=None,
        use_t=False
        ):
    """A variable addition test for the variance function

    This uses an artificial regression to calculate a variant of an LM or
    generalized score test for the specification of the variance
    assumption in a Poisson model. The performed test is a Wald test on
    the coefficients of the `exog_new_test`.

    Warning: insufficiently tested, especially for options
    """
    if hasattr(results, '_results'):
        results = results._results

    endog = results.model.endog
    nobs = endog.shape[0]  # TODO: use attribute, may need to be added
    # fitted = results.fittedvalues  # generic has linpred as fittedvalues
    fitted = results.predict()
    resid2 = results.resid_response**2
    # the following assumes Poisson
    if use_endog:
        var_resid = (resid2 - endog)
    else:
        var_resid = (resid2 - fitted)

    endog_v = var_resid / fitted

    k_constraints = exog_new_test.shape[1]
    ex_list = [exog_new_test]
    if include_score:
        score_obs = results.model.score_obs(results.params)
        ex_list.append(score_obs)

    if exog_new_control is not None:
        # control variables enter the auxiliary regression as regressors
        ex_list.append(exog_new_control)

    if len(ex_list) > 1:
        ex = np.column_stack(ex_list)
        use_wald = True
    else:
        ex = ex_list[0]  # no control variables in exog
        use_wald = False

    res_ols = OLS(endog_v, ex).fit(cov_type=cov_type, cov_kwds=cov_kwds,
                                   use_t=use_t)

    if use_wald:
        # we have controls and need to test coefficients
        k_vars = ex.shape[1]
        constraints = np.eye(k_constraints, k_vars)
        ht = res_ols.wald_test(constraints)
        stat_ols = ht.statistic
        pval_ols = ht.pvalue
    else:
        # we do not have controls and can use overall fit
        nobs = endog_v.shape[0]
        rsquared_noncentered = 1 - res_ols.ssr / res_ols.uncentered_tss
        stat_ols = nobs * rsquared_noncentered
        pval_ols = stats.chi2.sf(stat_ols, k_constraints)

    return stat_ols, pval_ols
def test_poisson_zeroinflation_jh(results_poisson, exog_infl=None):
    """score test for zero inflation or deflation in Poisson

    This implements the Jansakul and Hinde (2002) score test
    for excess zeros against a zero modified Poisson
    alternative. They use a linear link function for the
    inflation model to allow for zero deflation.

    Parameters
    ----------
    results_poisson : results instance
        The test is only valid if the results instance is a Poisson
        model.
    exog_infl : ndarray
        Explanatory variables for the zero inflated or zero modified
        alternative. If exog_infl is None, then the inflation
        probability is assumed to be constant.

    Returns
    -------
    score test results based on chisquare distribution

    Notes
    -----
    This is a score test based on the null hypothesis that
    the true model is Poisson. It will also reject for
    other deviations from a Poisson model if those affect
    the zero probabilities, e.g. in the direction of
    excess dispersion as in the Negative Binomial
    or Generalized Poisson model.
    Therefore, rejection in this test does not imply that
    zero-inflated Poisson is the appropriate model.

    Status: experimental, no verified unit tests,

    TODO: If the zero modification probability is assumed
    to be constant under the alternative, then we only have
    a scalar test score and we can use one-sided tests to
    distinguish zero inflation and deflation from the
    two-sided deviations. (The general one-sided case is
    difficult.)
    In this case the test specializes to the test by van den Broek.

    References
    ----------
    .. [1] Jansakul, N., and J. P. Hinde. 2002. “Score Tests for
       Zero-Inflated Poisson Models.” Computational Statistics & Data
       Analysis 40 (1): 75–96.
       https://doi.org/10.1016/S0167-9473(01)00104-9.
    """
    if not isinstance(results_poisson.model, Poisson):
        # GLM Poisson would be also valid, not tried
        import warnings
        warnings.warn('Test is only valid if model is Poisson')

    nobs = results_poisson.model.endog.shape[0]

    if exog_infl is None:
        exog_infl = np.ones((nobs, 1))

    endog = results_poisson.model.endog
    exog = results_poisson.model.exog

    mu = results_poisson.predict()
    prob_zero = np.exp(-mu)

    cov_poi = results_poisson.cov_params()
    cross_derivative = (exog_infl.T * (-mu)).dot(exog).T

    cov_infl = (exog_infl.T * ((1 - prob_zero) / prob_zero)).dot(exog_infl)
    score_obs_infl = exog_infl * (((endog == 0) - prob_zero) /
                                  prob_zero)[:, None]
    # score_obs_infl = exog_infl * ((endog == 0) * (1 - prob_zero) /
    #                               prob_zero - (endog > 0))[:, None]  # same
    score_infl = score_obs_infl.sum(0)
    cov_score_infl = cov_infl - cross_derivative.T.dot(cov_poi).dot(
        cross_derivative)
    cov_score_infl_inv = np.linalg.pinv(cov_score_infl)

    statistic = score_infl.dot(cov_score_infl_inv).dot(score_infl)
    df2 = np.linalg.matrix_rank(cov_score_infl)  # more general, maybe not needed
    df = exog_infl.shape[1]
    pvalue = stats.chi2.sf(statistic, df)

    res = HolderTuple(
        statistic=statistic,
        pvalue=pvalue,
        df=df,
        rank_score=df2,
        distribution="chi2",
        )
    return res
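# Usage sketch, continuing with ``res_p`` from above. With exog_infl=None
# the inflation probability is constant; passing regressors tests against
# a varying zero-modification probability.
t_const = test_poisson_zeroinflation_jh(res_p)
t_var = test_poisson_zeroinflation_jh(res_p, exog_infl=res_p.model.exog)
print(t_const.pvalue, t_var.pvalue, t_var.df)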
def test_poisson_zeroinflation_broek(results_poisson):
    """score test for zero modification in Poisson, special case

    This assumes that the Poisson model has a constant and that
    the zero modification probability is constant.

    This is a special case of test_poisson_zeroinflation derived by
    van den Broek 1995.

    The test reports two sided and one sided alternatives based on
    the normal distribution of the test statistic.

    References
    ----------
    .. [1] Broek, Jan van den. 1995. “A Score Test for Zero Inflation in a
       Poisson Distribution.” Biometrics 51 (2): 738–43.
       https://doi.org/10.2307/2532959.
    """
    mu = results_poisson.predict()
    prob_zero = np.exp(-mu)
    endog = results_poisson.model.endog
    # nobs = len(endog)
    # score = ((endog == 0) / prob_zero).sum() - nobs
    # var_score = (1 / prob_zero).sum() - nobs - endog.sum()
    score = (((endog == 0) - prob_zero) / prob_zero).sum()
    var_score = ((1 - prob_zero) / prob_zero).sum() - endog.sum()
    statistic = score / np.sqrt(var_score)

    pvalue_two = 2 * stats.norm.sf(np.abs(statistic))
    pvalue_upp = stats.norm.sf(statistic)
    pvalue_low = stats.norm.cdf(statistic)

    res = HolderTuple(
        statistic=statistic,
        pvalue=pvalue_two,
        pvalue_smaller=pvalue_upp,
        pvalue_larger=pvalue_low,
        chi2=statistic**2,
        pvalue_chi2=stats.chi2.sf(statistic**2, 1),
        df_chi2=1,
        distribution="normal",
        )
    return res
def test_poisson_zeros(results):
    """Test for excess zeros in Poisson regression model.

    The test is implemented following Tang and Tang [1]_ equ. (12) which
    is based on the test derived in He et al 2019 [2]_.

    References
    ----------
    .. [1] Tang, Yi, and Wan Tang. 2018. “Testing Modified Zeros for
       Poisson Regression Models:” Statistical Methods in Medical
       Research, September. https://doi.org/10.1177/0962280218796253.
    .. [2] He, Hua, Hui Zhang, Peng Ye, and Wan Tang. 2019. “A Test of
       Inflated Zeros for Poisson Regression Models.” Statistical Methods
       in Medical Research 28 (4): 1157–69.
       https://doi.org/10.1177/0962280217749991.
    """
    x = results.model.exog
    mean = results.predict()
    prob0 = np.exp(-mean)
    counts = (results.model.endog == 0).astype(int)
    diff = counts.sum() - prob0.sum()
    var1 = prob0 @ (1 - prob0)
    pm = prob0 * mean
    c = np.linalg.inv(x.T * mean @ x)
    pmx = pm @ x
    var2 = pmx @ c @ pmx
    var = var1 - var2
    statistic = diff / np.sqrt(var)

    pvalue_two = 2 * stats.norm.sf(np.abs(statistic))
    pvalue_upp = stats.norm.sf(statistic)
    pvalue_low = stats.norm.cdf(statistic)

    res = HolderTuple(
        statistic=statistic,
        pvalue=pvalue_two,
        pvalue_smaller=pvalue_upp,
        pvalue_larger=pvalue_low,
        chi2=statistic**2,
        pvalue_chi2=stats.chi2.sf(statistic**2, 1),
        df_chi2=1,
        distribution="normal",
        )
    return res
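# Usage sketch, continuing with ``res_p`` from above. The HolderTuple also
# carries the one-sided p-values and the chisquare version of the statistic.
t = test_poisson_zeros(res_p)
print(t.statistic, t.pvalue)                 # two-sided normal test
print(t.pvalue_smaller, t.pvalue_larger)     # one-sided alternatives
print(t.chi2, t.pvalue_chi2)                 # equivalent chi2(1) form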
# From statsmodels/discrete/diagnostic.py (statsmodels/statsmodels, BSD-3-Clause)

def test_chisquare_prob(self, bin_edges=None, method=None):
    """Moment test for binned probabilities using OPG.

    Parameters
    ----------
    bin_edges : array_like or None
        This defines which counts are included in the test on frequencies
        and how counts are combined in bins.
        The default if bin_edges is None will change in future.
        See Notes and Examples sections below.
    method : str
        Currently only `method = "opg"` is available.
        If method is None, the OPG will be used, but the default might
        change in future versions.
        See Notes section below.

    Returns
    -------
    test result

    Notes
    -----
    Warning: The current default can have many empty or nearly empty
    bins. The default number of bins is given by max(endog).
    Currently it is recommended to limit the number of bins explicitly,
    see Examples below.
    Binning will change in future and automatic binning will be added.

    Currently only the outer product of gradient, OPG, method is
    implemented. In many cases, the OPG version of a specification test
    overrejects in small samples.
    Specialized tests that use observed or expected information matrix
    often have better small sample properties.
    The default method will change if better methods are added.

    Examples
    --------
    The following call is a test for the probability of zeros
    `test_chisquare_prob(bin_edges=np.arange(3))`

    `test_chisquare_prob(bin_edges=np.arange(10))` tests the hypothesis
    that the frequencies for counts up to 7 correspond to the estimated
    Poisson distribution.

    In this case, edges are 0, ..., 9 which defines 9 bins for
    counts 0 to 8. The last bin is dropped, so the joint test hypothesis
    is that the observed aggregated frequencies for counts 0 to 7
    correspond to the model prediction for those frequencies. Predicted
    probabilities Prob(y_i = k | x) are aggregated over observations
    ``i``.
    """
    kwds = {}
    if bin_edges is not None:
        # TODO: verify upper bound, we drop last bin (may be open, inf)
        kwds["y_values"] = np.arange(bin_edges[-2] + 1)
    probs = self.results.predict(which="prob", **kwds)
    res = test_chisquare_prob(self.results, probs, bin_edges=bin_edges,
                              method=method)
    return res
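# Usage sketch via the diagnostic wrapper. This assumes a statsmodels
# version where Poisson results expose ``get_diagnostic()`` returning the
# class this method is defined on.
dia = res_p.get_diagnostic()
# edges 0..4 define bins for counts 0 to 3; the last bin is dropped, so
# this jointly tests the frequencies of counts 0 to 2.
t = dia.test_chisquare_prob(bin_edges=np.arange(5))
print(t.statistic, t.pvalue)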
def plot_probs(self, label='predicted', upp_xlim=None,
               fig=None):
    """Plot observed versus predicted frequencies for entire sample.
    """
    probs_predicted = self.probs_predicted.sum(0)
    k_probs = len(probs_predicted)
    freq = np.bincount(self.results.model.endog.astype(int),
                       minlength=k_probs)[:k_probs]
    fig = plot_probs(freq, probs_predicted,
                     label=label, upp_xlim=upp_xlim,
                     fig=fig)
    return fig
def test_dispersion(self):
    """Test for excess (over or under) dispersion in Poisson.

    Returns
    -------
    dispersion results
    """
    res = test_poisson_dispersion(self.results)
    return res
def test_poisson_zeroinflation(self, method="prob", exog_infl=None):
    """Test for excess zeros, zero inflation or deflation.

    Parameters
    ----------
    method : str
        Two methods are available for the test:

         - "prob" : moment test for the probability of zeros
         - "broek" : score test against zero inflation with or without
           explanatory variables for inflation

    exog_infl : array_like or None
        Optional explanatory variables under the alternative of zero
        inflation, or deflation. Only used if method is "broek".

    Returns
    -------
    results

    Notes
    -----
    If method = "prob", then the moment test of He et al 1_ is used based
    on the explicit formula in Tang and Tang 2_.

    If method = "broek" and exog_infl is None, then the test by van den
    Broek 3_ is used. This is a score test against an alternative of
    constant zero inflation or deflation.

    If method = "broek" and exog_infl is provided, then the extension of
    the broek test to varying zero inflation or deflation by Jansakul and
    Hinde is used.

    Warning: The Broek and the Jansakul and Hinde tests are not
    numerically stable when the probability of zeros in Poisson is small,
    i.e. if the conditional means of the estimated Poisson distribution
    are large. In these cases, p-values will not be accurate.
    """
    if method == "prob":
        if exog_infl is not None:
            warnings.warn('exog_infl is only used if method = "broek"')
        res = test_poisson_zeros(self.results)
    elif method == "broek":
        if exog_infl is None:
            res = test_poisson_zeroinflation_broek(self.results)
        else:
            exog_infl = np.asarray(exog_infl)
            if exog_infl.ndim == 1:
                exog_infl = exog_infl[:, None]
            res = test_poisson_zeroinflation_jh(self.results,
                                                exog_infl=exog_infl)

    return res
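# Sketch of both method variants, continuing with ``dia`` from above.
t_prob = dia.test_poisson_zeroinflation(method="prob")     # He et al. moment test
t_broek = dia.test_poisson_zeroinflation(method="broek")   # van den Broek score test
print(t_prob.pvalue, t_broek.pvalue)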
def _chisquare_binned(self, sort_var=None, bins=10, k_max=None, df=None,
                      sort_method="quicksort", frac_upp=0.1,
                      alpha_nc=0.05):
    """Hosmer-Lemeshow style test for count data.

    Note, this does not take into account that parameters are estimated.
    The distribution of the test statistic is only an approximation.

    This corresponds to the Hosmer-Lemeshow type test for an ordinal
    response variable. The outcome space y = k is partitioned into bins
    and treated as ordinal variable.
    The observations are split into approximately equal sized groups
    of observations sorted according to the ``sort_var``.
    """
    if sort_var is None:
        sort_var = self.results.predict(which="lin")

    endog = self.results.model.endog
    # not sure yet how this is supposed to work
    # max_count = endog.max * 2
    # no option for max count in predict
    # counts = (endog == np.arange(max_count)).astype(int)
    expected = self.results.predict(which="prob")
    counts = (endog[:, None] == np.arange(expected.shape[1])).astype(int)

    # truncate upper tail
    if k_max is None:
        nobs = len(endog)
        icumcounts_sum = nobs - counts.sum(0).cumsum(0)
        k_max = np.argmax(icumcounts_sum < nobs * frac_upp) - 1
    expected = expected[:, :k_max]
    counts = counts[:, :k_max]
    # we should correct for or include truncated upper bin
    # inplace modification, we cannot reuse expected and counts anymore
    expected[:, -1] += 1 - expected.sum(1)
    counts[:, -1] += 1 - counts.sum(1)

    # TODO: what's the correct df, same as for multinomial/ordered?
    res = test_chisquare_binning(counts, expected, sort_var=sort_var,
                                 bins=bins, df=df, ordered=True,
                                 sort_method=sort_method,
                                 alpha_nc=alpha_nc)
    return res
# From statsmodels/discrete/truncated_model.py (statsmodels/statsmodels, BSD-3-Clause)

def loglike(self, params):
    """
    Loglikelihood of Generic Truncated model

    Parameters
    ----------
    params : array-like
        The parameters of the model.

    Returns
    -------
    loglike : float
        The log-likelihood function of the model evaluated at `params`.
    """
    return np.sum(self.loglikeobs(params))
def loglikeobs(self, params):
    """
    Loglikelihood for observations of Generic Truncated model

    Parameters
    ----------
    params : array-like
        The parameters of the model.

    Returns
    -------
    loglike : ndarray (nobs,)
        The log likelihood for each observation of the model evaluated
        at `params`.
    """
    llf_main = self.model_main.loglikeobs(params)

    yt = self.trunc + 1

    # equivalent ways to compute truncation probability
    # pmf0 = np.zeros_like(self.endog, dtype=np.float64)
    # for i in range(self.trunc + 1):
    #     model = self.model_main.__class__(np.ones_like(self.endog) * i,
    #                                       self.exog)
    #     pmf0 += np.exp(model.loglikeobs(params))
    #
    # pmf1 = self.model_main.predict(
    #     params, which="prob", y_values=np.arange(yt)).sum(-1)

    pmf = self.predict(
        params, which="prob-base", y_values=np.arange(yt)).sum(-1)

    # Skip pmf = 1 to avoid warnings
    log_1_m_pmf = np.full_like(pmf, -np.inf)
    loc = pmf > 1
    log_1_m_pmf[loc] = np.nan
    loc = pmf < 1
    log_1_m_pmf[loc] = np.log(1 - pmf[loc])
    llf = llf_main - log_1_m_pmf

    return llf
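# The key step above is llf = llf_main - log(1 - P(y <= trunc)). For zero
# truncation under a Poisson base this is the log-pmf of the zero-truncated
# Poisson; a quick scipy check of that identity (a sketch, not the class
# internals):
import numpy as np
from scipy import stats

mu, y = 1.7, np.array([1, 2, 3])

llf_main = stats.poisson.logpmf(y, mu)
llf_trunc = llf_main - np.log(1 - stats.poisson.pmf(0, mu))

pmf_trunc = stats.poisson.pmf(y, mu) / (1 - np.exp(-mu))
assert np.allclose(llf_trunc, np.log(pmf_trunc))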
def score_obs(self, params):
    """
    Generic Truncated model score (gradient) vector of the log-likelihood

    Parameters
    ----------
    params : array-like
        The parameters of the model

    Returns
    -------
    score : ndarray, 1-D
        The score vector of the model, i.e. the first derivative of the
        loglikelihood function, evaluated at `params`
    """
    score_main = self.model_main.score_obs(params)

    pmf = np.zeros_like(self.endog, dtype=np.float64)
    # TODO: can we rewrite to the following without creating new models
    score_trunc = np.zeros_like(score_main, dtype=np.float64)
    for i in range(self.trunc + 1):
        model = self.model_main.__class__(
            np.ones_like(self.endog) * i,
            self.exog,
            offset=getattr(self, "offset", None),
            exposure=getattr(self, "exposure", None),
            )
        pmf_i = np.exp(model.loglikeobs(params))
        score_trunc += (model.score_obs(params).T * pmf_i).T
        pmf += pmf_i

    dparams = score_main + (score_trunc.T / (1 - pmf)).T

    return dparams
def hessian(self, params):
    """
    Generic Truncated model Hessian matrix of the loglikelihood

    Parameters
    ----------
    params : array-like
        The parameters of the model

    Returns
    -------
    hess : ndarray, (k_vars, k_vars)
        The Hessian, second derivative of loglikelihood function,
        evaluated at `params`
    """
    return approx_hess(params, self.loglike)
def predict(self, params, exog=None, exposure=None, offset=None,
            which='mean', y_values=None):
    """
    Predict response variable or other statistic given exogenous variables.

    Parameters
    ----------
    params : array_like
        The parameters of the model.
    exog : ndarray, optional
        Explanatory variables for the main count model.
        If ``exog`` is None, then the data from the model will be used.
    offset : ndarray, optional
        Offset is added to the linear predictor of the mean function with
        coefficient equal to 1.
        Default is zero if exog is not None, and the model offset if exog
        is None.
    exposure : ndarray, optional
        Log(exposure) is added to the linear predictor with coefficient
        equal to 1. If exposure is specified, then it will be logged by
        the method. The user does not need to log it first.
        Default is one if exog is not None, and it is the model exposure
        if exog is None.
    which : str (optional)
        Statistic to predict. Default is 'mean'.

        - 'mean' : the conditional expectation of endog E(y | x)
        - 'mean-main' : mean parameter of truncated count model.
          Note, this is not the mean of the truncated distribution.
        - 'linear' : the linear predictor of the truncated count model.
        - 'var' : returns the estimated variance of endog implied by the
          model.
        - 'prob-trunc' : probability of truncation. This is the
          probability of observing a zero count implied by the truncation
          model.
        - 'prob' : probabilities of each count from 0 to max(endog), or
          for y_values if those are provided. This is a multivariate
          return (2-dim when predicting for several observations).
          The probabilities in the truncated region are zero.
        - 'prob-base' : probabilities for the untruncated base
          distribution. The probabilities are for each count from 0 to
          max(endog), or for y_values if those are provided. This is a
          multivariate return (2-dim when predicting for several
          observations).
    y_values : array_like
        Values of the random variable endog at which pmf is evaluated.
        Only used if ``which="prob"``

    Returns
    -------
    predicted values

    Notes
    -----
    If exposure is specified, then it will be logged by the method.
    The user does not need to log it first.
    """
    exog, offset, exposure = self._get_predict_arrays(
        exog=exog,
        offset=offset,
        exposure=exposure
        )

    fitted = np.dot(exog, params[:exog.shape[1]])
    linpred = fitted + exposure + offset

    if which == 'mean':
        mu = np.exp(linpred)
        if self.truncation == 0:
            prob_main = self.model_main._prob_nonzero(mu, params)
            return mu / prob_main
        elif self.truncation == -1:
            return mu
        elif self.truncation > 0:
            counts = np.atleast_2d(np.arange(0, self.truncation + 1))
            # next is same as in prob-main below
            probs = self.model_main.predict(
                params, exog=exog, exposure=np.exp(exposure),
                offset=offset, which="prob", y_values=counts)
            prob_tregion = probs.sum(1)
            mean_tregion = (np.arange(self.truncation + 1) * probs).sum(1)
            mean = (mu - mean_tregion) / (1 - prob_tregion)
            return mean
        else:
            raise ValueError("unsupported self.truncation")
    elif which == 'linear':
        return linpred
    elif which == 'mean-main':
        return np.exp(linpred)
    elif which == 'prob':
        if y_values is not None:
            counts = np.atleast_2d(y_values)
        else:
            counts = np.atleast_2d(np.arange(0, np.max(self.endog) + 1))
        mu = np.exp(linpred)[:, None]
        if self.k_extra == 0:
            # poisson, no extra params
            probs = self.model_dist.pmf(counts, mu, self.trunc)
        elif self.k_extra == 1:
            p = self.model_main.parameterization
            probs = self.model_dist.pmf(counts, mu, params[-1],
                                        p, self.trunc)
        else:
            raise ValueError("k_extra is not 0 or 1")
        return probs
    elif which == 'prob-base':
        if y_values is not None:
            counts = np.asarray(y_values)
        else:
            counts = np.arange(0, np.max(self.endog) + 1)

        probs = self.model_main.predict(
            params, exog=exog, exposure=np.exp(exposure),
            offset=offset, which="prob", y_values=counts)
        return probs
    elif which == 'var':
        mu = np.exp(linpred)
        counts = np.atleast_2d(np.arange(0, self.truncation + 1))
        # next is same as in prob-main below
        probs = self.model_main.predict(
            params, exog=exog, exposure=np.exp(exposure),
            offset=offset, which="prob", y_values=counts)
        prob_tregion = probs.sum(1)
        mean_tregion = (np.arange(self.truncation + 1) * probs).sum(1)
        mean = (mu - mean_tregion) / (1 - prob_tregion)
        mnc2_tregion = (np.arange(self.truncation + 1)**2 *
                        probs).sum(1)
        vm = self.model_main._var(mu, params)
        # uncentered 2nd moment
        mnc2 = (mu**2 + vm - mnc2_tregion) / (1 - prob_tregion)
        v = mnc2 - mean**2
        return v
    else:
        raise ValueError(
            "argument which == %s not handled" % which)
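# Usage sketch contrasting 'mean' and 'mean-main' for a zero-truncated
# Poisson model. ``TruncatedLFPoisson`` is the zero-truncated count model
# in statsmodels; the data are hypothetical.
import numpy as np
import statsmodels.api as sm
from statsmodels.discrete.truncated_model import TruncatedLFPoisson

rng = np.random.default_rng(7)
nobs = 500
exog = sm.add_constant(rng.normal(size=nobs))
y = rng.poisson(np.exp(0.5 + 0.3 * exog[:, 1]))
mask = y > 0                                   # observe only positive counts

res_t = TruncatedLFPoisson(y[mask], exog[mask]).fit(disp=False)
mean = res_t.predict(which="mean")             # E(y | x, y > 0)
mean_main = res_t.predict(which="mean-main")   # mu of the untruncated model
print(mean[:3], mean_main[:3])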
def _predict_mom_trunc0(self, params, mu):
    """Predict mean and variance of zero-truncated distribution.

    experimental api, will likely be replaced by other methods

    Parameters
    ----------
    params : array_like
        The model parameters. This is only used to extract extra params
        like dispersion parameter.
    mu : array_like
        Array of mean predictions for main model.

    Returns
    -------
    Predicted conditional mean and variance.
    """
    w = (1 - np.exp(-mu))  # prob of no truncation, 1 - P(y=0)
    m = mu / w
    var_ = m - (1 - w) * m**2
    return m, var_
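# Brute-force check of the zero-truncated Poisson moments used above,
# m = mu / (1 - exp(-mu)) and var = m - (1 - w) m**2 with w = 1 - exp(-mu),
# by summing the renormalized pmf (a sketch):
import numpy as np
from scipy import stats

mu = 1.3
w = 1 - np.exp(-mu)                     # P(y > 0)
k = np.arange(1, 200)
pmf = stats.poisson.pmf(k, mu) / w      # zero-truncated pmf

m_num = (k * pmf).sum()
v_num = (k**2 * pmf).sum() - m_num**2

m = mu / w
assert np.allclose([m_num, v_num], [m, m - (1 - w) * m**2])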
def loglike(self, params):
    """
    Loglikelihood of Generic Censored model

    Parameters
    ----------
    params : array-like
        The parameters of the model.

    Returns
    -------
    loglike : float
        The log-likelihood function of the model evaluated at `params`.
    """
    return np.sum(self.loglikeobs(params))
def loglikeobs(self, params):
"""
Loglikelihood for observations of Generic Censored model
Parameters
----------
params : array-like
The parameters of the model.
Returns
-------
loglike : ndarray (nobs,)
The log likelihood for each observation of the model evaluated
at `params`. See Notes
Notes
-----
"""
llf_main = self.model_main.loglikeobs(params)
llf = np.concatenate(
(llf_main[self.zero_idx],
np.log(1 - np.exp(llf_main[self.nonzero_idx])))
)
return llf | Loglikelihood for observations of Generic Censored model
Parameters
----------
params : array-like
The parameters of the model.
Returns
-------
loglike : ndarray (nobs,)
The log likelihood for each observation of the model evaluated
at `params`. See Notes
Notes
----- | loglikeobs | python | statsmodels/statsmodels | statsmodels/discrete/truncated_model.py | https://github.com/statsmodels/statsmodels/blob/master/statsmodels/discrete/truncated_model.py | BSD-3-Clause |
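A numerical aside: ``np.log(1 - np.exp(llf_main))`` can lose precision when a log-likelihood value approaches zero. A standard alternative is the ``log1mexp`` trick, shown here as a hedged sketch; the module itself uses the direct form above:

import numpy as np

def log1mexp(x):
    # computes log(1 - exp(x)) accurately for x < 0
    return np.where(x < -np.log(2),
                    np.log1p(-np.exp(x)),   # exp(x) small: log1p is accurate
                    np.log(-np.expm1(x)))   # x near 0: expm1 avoids cancellation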
def score_obs(self, params):
"""
Generic Censored model score (gradient) vector of the log-likelihood
Parameters
----------
params : array-like
The parameters of the model
Returns
-------
score : ndarray, 1-D
The score vector of the model, i.e. the first derivative of the
loglikelihood function, evaluated at `params`
"""
score_main = self.model_main.score_obs(params)
llf_main = self.model_main.loglikeobs(params)
score = np.concatenate((
score_main[self.zero_idx],
(score_main[self.nonzero_idx].T *
-np.exp(llf_main[self.nonzero_idx]) /
(1 - np.exp(llf_main[self.nonzero_idx]))).T
))
return score | Generic Censored model score (gradient) vector of the log-likelihood
Parameters
----------
params : array-like
The parameters of the model
Returns
-------
score : ndarray, 1-D
The score vector of the model, i.e. the first derivative of the
loglikelihood function, evaluated at `params` | score_obs | python | statsmodels/statsmodels | statsmodels/discrete/truncated_model.py | https://github.com/statsmodels/statsmodels/blob/master/statsmodels/discrete/truncated_model.py | BSD-3-Clause |
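The second block of the score is the chain rule applied to the censored term: d log(1 - exp(L)) / d theta = -exp(L) / (1 - exp(L)) * dL/dtheta. A scalar finite-difference check with a stand-in log-likelihood (illustrative sketch only):

import numpy as np

L = lambda t: -t**2                     # stand-in log-likelihood, L(t) < 0
dL = lambda t: -2 * t
t0, eps = 0.7, 1e-6
analytic = -np.exp(L(t0)) / (1 - np.exp(L(t0))) * dL(t0)
numeric = (np.log(1 - np.exp(L(t0 + eps)))
           - np.log(1 - np.exp(L(t0 - eps)))) / (2 * eps)
# analytic and numeric agree to roughly 1e-9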
def hessian(self, params):
"""
Generic Censored model Hessian matrix of the loglikelihood
Parameters
----------
params : array-like
The parameters of the model
Returns
-------
hess : ndarray, (k_vars, k_vars)
The Hessian, second derivative of loglikelihood function,
evaluated at `params`
Notes
-----
"""
return approx_hess(params, self.loglike) | Generic Censored model Hessian matrix of the loglikelihood
Parameters
----------
params : array-like
The parameters of the model
Returns
-------
hess : ndarray, (k_vars, k_vars)
The Hessian, second derivative of loglikelihood function,
evaluated at `params`
Notes
----- | hessian | python | statsmodels/statsmodels | statsmodels/discrete/truncated_model.py | https://github.com/statsmodels/statsmodels/blob/master/statsmodels/discrete/truncated_model.py | BSD-3-Clause |
def _prob_nonzero(self, mu, params):
"""Probability that count is not zero
internal use in Censored model, will be refactored or removed
"""
prob_nz = self.model_main._prob_nonzero(mu, params)
return prob_nz | Probability that count is not zero
internal use in Censored model, will be refactored or removed | _prob_nonzero | python | statsmodels/statsmodels | statsmodels/discrete/truncated_model.py | https://github.com/statsmodels/statsmodels/blob/master/statsmodels/discrete/truncated_model.py | BSD-3-Clause |
def loglike(self, params):
"""
Loglikelihood of Generic Hurdle model
Parameters
----------
params : array-like
The parameters of the model.
Returns
-------
loglike : float
The log-likelihood function of the model evaluated at `params`.
See notes.
Notes
-----
"""
k = int((len(params) - self.k_extra1 - self.k_extra2) / 2
) + self.k_extra1
return (self.model1.loglike(params[:k]) +
self.model2.loglike(params[k:])) | Loglikelihood of Generic Hurdle model
Parameters
----------
params : array-like
The parameters of the model.
Returns
-------
loglike : float
The log-likelihood function of the model evaluated at `params`.
See notes.
Notes
----- | loglike | python | statsmodels/statsmodels | statsmodels/discrete/truncated_model.py | https://github.com/statsmodels/statsmodels/blob/master/statsmodels/discrete/truncated_model.py | BSD-3-Clause |
def predict(self, params, exog=None, exposure=None,
offset=None, which='mean', y_values=None):
"""
Predict response variable or other statistic given exogenous variables.
Parameters
----------
params : array_like
The parameters of the model.
exog : ndarray, optional
Explanatory variables for the main count model.
If ``exog`` is None, then the data from the model will be used.
exog_infl : ndarray, optional
Explanatory variables for the zero-inflation model.
``exog_infl`` has to be provided if ``exog`` was provided unless
``exog_infl`` in the model is only a constant.
offset : ndarray, optional
Offset is added to the linear predictor of the mean function with
coefficient equal to 1.
Default is zero if exog is not None, and the model offset if exog
is None.
exposure : ndarray, optional
Log(exposure) is added to the linear predictor with coefficient
equal to 1. If exposure is specified, then it will be logged by
the method. The user does not need to log it first.
Default is one if exog is not None, and it is the model exposure
if exog is None.
which : str (optional)
Statistic to predict. Default is 'mean'.
- 'mean' : the conditional expectation of endog E(y | x)
- 'mean-main' : mean parameter of truncated count model.
Note, this is not the mean of the truncated distribution.
- 'linear' : the linear predictor of the truncated count model.
- 'var' : returns the estimated variance of endog implied by the
model.
- 'prob-main' : probability of selecting the main model which is
the probability of observing a nonzero count P(y > 0 | x).
- 'prob-zero' : probability of observing a zero count. P(y=0 | x).
This is equal to ``1 - prob-main``
- 'prob-trunc' : probability of truncation of the truncated count
model. This is the probability of observing a zero count implied
by the truncation model.
- 'mean-nonzero' : expected value conditional on having observation
larger than zero, E(y | X, y>0)
- 'prob' : probabilities of each count from 0 to max(endog), or
for y_values if those are provided. This is a multivariate
return (2-dim when predicting for several observations).
y_values : array_like
Values of the random variable endog at which pmf is evaluated.
Only used if ``which="prob"``
Returns
-------
predicted values
Notes
-----
'prob-zero' / 'prob-trunc' is the ratio of probabilities of observing
a zero count between hurdle model and the truncated count model.
If this ratio is larger than one, then the hurdle model has an inflated
number of zeros compared to the count model. If it is smaller than one,
then the number of zeros is deflated.
"""
which = which.lower() # make it case insensitive
no_exog = exog is None
exog, offset, exposure = self._get_predict_arrays(
exog=exog,
offset=offset,
exposure=exposure
)
exog_zero = None # not yet
if exog_zero is None:
if no_exog:
exog_zero = self.exog
else:
exog_zero = exog
k_zeros = int((len(params) - self.k_extra1 - self.k_extra2) / 2
) + self.k_extra1
params_zero = params[:k_zeros]
params_main = params[k_zeros:]
lin_pred = (np.dot(exog, params_main[:self.exog.shape[1]]) +
exposure + offset)
# this currently is mean_main, offset, exposure for zero part ?
mu1 = self.model1.predict(params_zero, exog=exog)
# prob that count model applies y>0 from zero model predict
prob_main = self.model1.model_main._prob_nonzero(mu1, params_zero)
prob_zero = (1 - prob_main)
mu2 = np.exp(lin_pred)
prob_ntrunc = self.model2.model_main._prob_nonzero(mu2, params_main)
if which == 'mean':
return prob_main * np.exp(lin_pred) / prob_ntrunc
elif which == 'mean-main':
return np.exp(lin_pred)
elif which == 'linear':
return lin_pred
elif which == 'mean-nonzero':
return np.exp(lin_pred) / prob_ntrunc
elif which == 'prob-zero':
return prob_zero
elif which == 'prob-main':
return prob_main
elif which == 'prob-trunc':
return 1 - prob_ntrunc
# not yet supported
elif which == 'var':
# generic computation using results from submodels
mu = np.exp(lin_pred)
mt, vt = self.model2._predict_mom_trunc0(params_main, mu)
var_ = prob_main * vt + prob_main * (1 - prob_main) * mt**2
return var_
elif which == 'prob':
probs_main = self.model2.predict(
params_main, exog, np.exp(exposure), offset, which="prob",
y_values=y_values)
probs_main *= prob_main[:, None]
probs_main[:, 0] = prob_zero
return probs_main
else:
raise ValueError('which = %s is not available' % which) | Predict response variable or other statistic given exogenous variables.
Parameters
----------
params : array_like
The parameters of the model.
exog : ndarray, optional
Explanatory variables for the main count model.
If ``exog`` is None, then the data from the model will be used.
exog_infl : ndarray, optional
Explanatory variables for the zero-inflation model.
``exog_infl`` has to be provided if ``exog`` was provided unless
``exog_infl`` in the model is only a constant.
offset : ndarray, optional
Offset is added to the linear predictor of the mean function with
coefficient equal to 1.
Default is zero if exog is not None, and the model offset if exog
is None.
exposure : ndarray, optional
Log(exposure) is added to the linear predictor with coefficient
equal to 1. If exposure is specified, then it will be logged by
the method. The user does not need to log it first.
Default is one if exog is not None, and it is the model exposure
if exog is None.
which : str (optional)
Statistic to predict. Default is 'mean'.
- 'mean' : the conditional expectation of endog E(y | x)
- 'mean-main' : mean parameter of truncated count model.
Note, this is not the mean of the truncated distribution.
- 'linear' : the linear predictor of the truncated count model.
- 'var' : returns the estimated variance of endog implied by the
model.
- 'prob-main' : probability of selecting the main model which is
the probability of observing a nonzero count P(y > 0 | x).
- 'prob-zero' : probability of observing a zero count. P(y=0 | x).
This is equal to ``1 - prob-main``
- 'prob-trunc' : probability of truncation of the truncated count
model. This is the probability of observing a zero count implied
by the truncation model.
- 'mean-nonzero' : expected value conditional on having observation
larger than zero, E(y | X, y>0)
- 'prob' : probabilities of each count from 0 to max(endog), or
for y_values if those are provided. This is a multivariate
return (2-dim when predicting for several observations).
y_values : array_like
Values of the random variable endog at which pmf is evaluated.
Only used if ``which="prob"``
Returns
-------
predicted values
Notes
-----
'prob-zero' / 'prob-trunc' is the ratio of probabilities of observing
a zero count between hurdle model and the truncated count model.
If this ratio is larger than one, then the hurdle model has an inflated
number of zeros compared to the count model. If it is smaller than one,
then the number of zeros is deflated. | predict | python | statsmodels/statsmodels | statsmodels/discrete/truncated_model.py | https://github.com/statsmodels/statsmodels/blob/master/statsmodels/discrete/truncated_model.py | BSD-3-Clause |
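A minimal usage sketch of these predict options for ``HurdleCountModel`` (assumes statsmodels >= 0.14 and the default Poisson-Poisson hurdle; the data are simulated purely for illustration):

import numpy as np
import statsmodels.api as sm
from statsmodels.discrete.truncated_model import HurdleCountModel

rng = np.random.default_rng(12345)
exog = sm.add_constant(rng.normal(size=(500, 1)))
endog = rng.poisson(np.exp(0.5 + 0.3 * exog[:, 1]))

res = HurdleCountModel(endog, exog).fit(disp=0)
mean = res.predict(which="mean")           # E(y | x)
p_zero = res.predict(which="prob-zero")    # P(y = 0 | x)
m_nz = res.predict(which="mean-nonzero")   # E(y | x, y > 0)
# by construction: E(y | x) = P(y > 0 | x) * E(y | x, y > 0)
assert np.allclose(mean, (1 - p_zero) * m_nz)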
def fit_regularized(self,
method="elastic_net",
alpha=0.,
start_params=None,
refit=False,
**kwargs):
"""
Return a regularized fit to a linear regression model.
Parameters
----------
method : {'elastic_net'}
Only the `elastic_net` approach is currently implemented.
alpha : scalar or array_like
The penalty weight. If a scalar, the same penalty weight
applies to all variables in the model. If a vector, it
must have the same length as `params`, and contains a
penalty weight for each coefficient.
start_params : array_like
Starting values for `params`.
refit : bool
If True, the model is refit using only the variables that
have non-zero coefficients in the regularized fit. The
refitted model is not regularized.
**kwargs
Additional keyword argument that are used when fitting the model.
Returns
-------
Results
A results instance.
"""
from statsmodels.base.elastic_net import fit_elasticnet
if method != "elastic_net":
raise ValueError("method for fit_regularized must be elastic_net")
defaults = {"maxiter": 50, "L1_wt": 1, "cnvrg_tol": 1e-10,
"zero_tol": 1e-10}
defaults.update(kwargs)
return fit_elasticnet(self, method=method,
alpha=alpha,
start_params=start_params,
refit=refit,
**defaults) | Return a regularized fit to a linear regression model.
Parameters
----------
method : {'elastic_net'}
Only the `elastic_net` approach is currently implemented.
alpha : scalar or array_like
The penalty weight. If a scalar, the same penalty weight
applies to all variables in the model. If a vector, it
must have the same length as `params`, and contains a
penalty weight for each coefficient.
start_params : array_like
Starting values for `params`.
refit : bool
If True, the model is refit using only the variables that
have non-zero coefficients in the regularized fit. The
refitted model is not regularized.
**kwargs
Additional keyword argument that are used when fitting the model.
Returns
-------
Results
A results instance. | fit_regularized | python | statsmodels/statsmodels | statsmodels/discrete/conditional_models.py | https://github.com/statsmodels/statsmodels/blob/master/statsmodels/discrete/conditional_models.py | BSD-3-Clause |
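A hedged usage sketch with simulated data; since ``L1_wt`` defaults to 1 in the elastic net defaults above, the penalty here is a pure lasso:

import numpy as np
from statsmodels.discrete.conditional_models import ConditionalLogit

rng = np.random.default_rng(0)
groups = np.repeat(np.arange(50), 4)       # 50 groups of size 4
x = rng.normal(size=(200, 3))
y = (x[:, 0] + rng.normal(size=200) > 0).astype(int)

res = ConditionalLogit(y, x, groups=groups).fit_regularized(alpha=0.1)
# larger alpha shrinks more coefficients exactly to zero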
def summary(self, yname=None, xname=None, title=None, alpha=.05):
"""
Summarize the fitted model.
Parameters
----------
yname : str, optional
Default is `y`
xname : list[str], optional
Names for the exogenous variables, default is "var_xx".
Must match the number of parameters in the model
title : str, optional
Title for the top table. If not None, then this replaces the
default title
alpha : float
Significance level for the confidence intervals
Returns
-------
smry : Summary instance
This holds the summary tables and text, which can be printed or
converted to various output formats.
See Also
--------
statsmodels.iolib.summary.Summary : class to hold summary
results
"""
top_left = [
('Dep. Variable:', None),
('Model:', None),
('Log-Likelihood:', None),
('Method:', [self.method]),
('Date:', None),
('Time:', None),
]
top_right = [
('No. Observations:', None),
('No. groups:', [self.n_groups]),
('Min group size:', [self._group_stats[0]]),
('Max group size:', [self._group_stats[1]]),
('Mean group size:', [self._group_stats[2]]),
]
if title is None:
title = "Conditional Logit Model Regression Results"
# create summary tables
from statsmodels.iolib.summary import Summary
smry = Summary()
smry.add_table_2cols(
self,
gleft=top_left,
gright=top_right, # [],
yname=yname,
xname=xname,
title=title)
smry.add_table_params(
self, yname=yname, xname=xname, alpha=alpha, use_t=self.use_t)
return smry | Summarize the fitted model.
Parameters
----------
yname : str, optional
Default is `y`
xname : list[str], optional
Names for the exogenous variables, default is "var_xx".
Must match the number of parameters in the model
title : str, optional
Title for the top table. If not None, then this replaces the
default title
alpha : float
Significance level for the confidence intervals
Returns
-------
smry : Summary instance
This holds the summary tables and text, which can be printed or
converted to various output formats.
See Also
--------
statsmodels.iolib.summary.Summary : class to hold summary
results | summary | python | statsmodels/statsmodels | statsmodels/discrete/conditional_models.py | https://github.com/statsmodels/statsmodels/blob/master/statsmodels/discrete/conditional_models.py | BSD-3-Clause |
def _check_margeff_args(at, method):
"""
Checks valid options for margeff
"""
if at not in ['overall','mean','median','zero','all']:
raise ValueError("%s not a valid option for `at`." % at)
if method not in ['dydx','eyex','dyex','eydx']:
raise ValueError("method is not understood. Got %s" % method) | Checks valid options for margeff | _check_margeff_args | python | statsmodels/statsmodels | statsmodels/discrete/discrete_margins.py | https://github.com/statsmodels/statsmodels/blob/master/statsmodels/discrete/discrete_margins.py | BSD-3-Clause |
def _check_discrete_args(at, method):
"""
Checks the arguments for margeff if the exogenous variables are discrete.
"""
if method in ['dyex','eyex']:
raise ValueError("%s not allowed for discrete variables" % method)
if at in ['median', 'zero']:
raise ValueError("%s not allowed for discrete variables" % at) | Checks the arguments for margeff if the exogenous variables are discrete. | _check_discrete_args | python | statsmodels/statsmodels | statsmodels/discrete/discrete_margins.py | https://github.com/statsmodels/statsmodels/blob/master/statsmodels/discrete/discrete_margins.py | BSD-3-Clause |
def _get_const_index(exog):
"""
Returns a boolean array of non-constant column indices in exog and
a scalar array of where the constant is or None
"""
effects_idx = exog.var(0) != 0
if np.any(~effects_idx):
const_idx = np.where(~effects_idx)[0]
else:
const_idx = None
return effects_idx, const_idx | Returns a boolean array of non-constant column indices in exog and
a scalar array of where the constant is or None | _get_const_index | python | statsmodels/statsmodels | statsmodels/discrete/discrete_margins.py | https://github.com/statsmodels/statsmodels/blob/master/statsmodels/discrete/discrete_margins.py | BSD-3-Clause |
def _isdummy(X):
"""
Given an array X, returns the column indices for the dummy variables.
Parameters
----------
X : array_like
A 1d or 2d array of numbers
Examples
--------
>>> X = np.random.randint(0, 2, size=(15,5)).astype(float)
>>> X[:,1:3] = np.random.randn(15,2)
>>> ind = _isdummy(X)
>>> ind
array([0, 3, 4])
"""
X = np.asarray(X)
if X.ndim > 1:
ind = np.zeros(X.shape[1]).astype(bool)
max = (np.max(X, axis=0) == 1)
min = (np.min(X, axis=0) == 0)
remainder = np.all(X % 1. == 0, axis=0)
ind = min & max & remainder
if X.ndim == 1:
ind = np.asarray([ind])
return np.where(ind)[0] | Given an array X, returns the column indices for the dummy variables.
Parameters
----------
X : array_like
A 1d or 2d array of numbers
Examples
--------
>>> X = np.random.randint(0, 2, size=(15,5)).astype(float)
>>> X[:,1:3] = np.random.randn(15,2)
>>> ind = _isdummy(X)
>>> ind
array([0, 3, 4]) | _isdummy | python | statsmodels/statsmodels | statsmodels/discrete/discrete_margins.py | https://github.com/statsmodels/statsmodels/blob/master/statsmodels/discrete/discrete_margins.py | BSD-3-Clause |
def _iscount(X):
"""
Given an array X, returns the column indices for count variables.
Parameters
----------
X : array_like
A 1d or 2d array of numbers
Examples
--------
>>> X = np.random.randint(0, 10, size=(15,5)).astype(float)
>>> X[:,1:3] = np.random.randn(15,2)
>>> ind = _iscount(X)
>>> ind
array([0, 3, 4])
"""
X = np.asarray(X)
remainder = np.logical_and(np.logical_and(np.all(X % 1. == 0, axis = 0),
X.var(0) != 0), np.all(X >= 0, axis=0))
dummy = _isdummy(X)
remainder = np.where(remainder)[0].tolist()
for idx in dummy:
remainder.remove(idx)
return np.array(remainder) | Given an array X, returns the column indices for count variables.
Parameters
----------
X : array_like
A 1d or 2d array of numbers
Examples
--------
>>> X = np.random.randint(0, 10, size=(15,5)).astype(float)
>>> X[:,1:3] = np.random.randn(15,2)
>>> ind = _iscount(X)
>>> ind
array([0, 3, 4]) | _iscount | python | statsmodels/statsmodels | statsmodels/discrete/discrete_margins.py | https://github.com/statsmodels/statsmodels/blob/master/statsmodels/discrete/discrete_margins.py | BSD-3-Clause |
def _get_count_effects(effects, exog, count_ind, method, model, params):
"""
If there's a count variable, the predicted difference is taken by
subtracting one and adding one to exog then averaging the difference
"""
# this is the index for the effect and the index for count col in exog
for i in count_ind:
exog0 = exog.copy()
exog0[:, i] -= 1
effect0 = model.predict(params, exog0)
exog0[:, i] += 2
effect1 = model.predict(params, exog0)
#NOTE: done by analogy with dummy effects but untested bc
# stata does not handle both count and eydx anywhere
if 'ey' in method:
effect0 = np.log(effect0)
effect1 = np.log(effect1)
effects[:, i] = ((effect1 - effect0)/2)
return effects | If there's a count variable, the predicted difference is taken by
subtracting one and adding one to exog then averaging the difference | _get_count_effects | python | statsmodels/statsmodels | statsmodels/discrete/discrete_margins.py | https://github.com/statsmodels/statsmodels/blob/master/statsmodels/discrete/discrete_margins.py | BSD-3-Clause |
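The centered difference above can be reproduced by hand. A sketch for a fitted Poisson model with one count regressor (simulated data, illustrative only):

import numpy as np
import statsmodels.api as sm

rng = np.random.default_rng(1)
exog = sm.add_constant(rng.poisson(3, size=(200, 1)).astype(float))
endog = rng.poisson(np.exp(0.1 + 0.2 * exog[:, 1]))
res = sm.Poisson(endog, exog).fit(disp=0)

x_lo = exog.copy()
x_lo[:, 1] -= 1
x_hi = exog.copy()
x_hi[:, 1] += 1
# per-observation effect of a one-unit change in the count regressor
margeff = (res.predict(x_hi) - res.predict(x_lo)) / 2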
def _get_dummy_effects(effects, exog, dummy_ind, method, model, params):
"""
If there's a dummy variable, the predicted difference is taken at
0 and 1
"""
# this is the index for the effect and the index for dummy col in exog
for i in dummy_ind:
exog0 = exog.copy() # only copy once, can we avoid a copy?
exog0[:,i] = 0
effect0 = model.predict(params, exog0)
#fittedvalues0 = np.dot(exog0,params)
exog0[:,i] = 1
effect1 = model.predict(params, exog0)
if 'ey' in method:
effect0 = np.log(effect0)
effect1 = np.log(effect1)
effects[:, i] = (effect1 - effect0)
return effects | If there's a dummy variable, the predicted difference is taken at
0 and 1 | _get_dummy_effects | python | statsmodels/statsmodels | statsmodels/discrete/discrete_margins.py | https://github.com/statsmodels/statsmodels/blob/master/statsmodels/discrete/discrete_margins.py | BSD-3-Clause |
def margeff_cov_params(model, params, exog, cov_params, at, derivative,
dummy_ind, count_ind, method, J):
"""
Computes the variance-covariance of marginal effects by the delta method.
Parameters
----------
model : model instance
The model that returned the fitted results. Its pdf method is used
for computing the Jacobian of discrete variables in dummy_ind and
count_ind
params : array_like
estimated model parameters
exog : array_like
exogenous variables at which to calculate the derivative
cov_params : array_like
The variance-covariance of the parameters
at : str
Options are:
- 'overall', The average of the marginal effects at each
observation.
- 'mean', The marginal effects at the mean of each regressor.
- 'median', The marginal effects at the median of each regressor.
- 'zero', The marginal effects at zero for each regressor.
- 'all', The marginal effects at each observation.
Only 'overall' has any effect here.
derivative : function or array_like
If a function, it returns the marginal effects of the model with
respect to the exogenous variables evaluated at exog. Expected to be
called derivative(params, exog). This will be numerically
differentiated. Otherwise, it can be the Jacobian of the marginal
effects with respect to the parameters.
dummy_ind : array_like
Indices of the columns of exog that contain dummy variables
count_ind : array_like
Indices of the columns of exog that contain count variables
Notes
-----
For continuous regressors, the variance-covariance is given by
Asy. Var[MargEff] = [d margeff / d params] V [d margeff / d params]'
where V is the parameter variance-covariance.
The outer Jacobians are computed via numerical differentiation if
derivative is a function.
"""
if callable(derivative):
from statsmodels.tools.numdiff import approx_fprime_cs
params = params.ravel('F') # for Multinomial
try:
jacobian_mat = approx_fprime_cs(params, derivative,
args=(exog,method))
except TypeError: # norm.cdf does not take complex values
from statsmodels.tools.numdiff import approx_fprime
jacobian_mat = approx_fprime(params, derivative,
args=(exog,method))
if at == 'overall':
jacobian_mat = np.mean(jacobian_mat, axis=1)
else:
jacobian_mat = jacobian_mat.squeeze() # exog was 2d row vector
if dummy_ind is not None:
jacobian_mat = _margeff_cov_params_dummy(model, jacobian_mat,
params, exog, dummy_ind, method, J)
if count_ind is not None:
jacobian_mat = _margeff_cov_params_count(model, jacobian_mat,
params, exog, count_ind, method, J)
else:
jacobian_mat = derivative
#NOTE: this will not go through for at == 'all'
return np.dot(np.dot(jacobian_mat, cov_params), jacobian_mat.T) | Computes the variance-covariance of marginal effects by the delta method.
Parameters
----------
model : model instance
The model that returned the fitted results. Its pdf method is used
for computing the Jacobian of discrete variables in dummy_ind and
count_ind
params : array_like
estimated model parameters
exog : array_like
exogenous variables at which to calculate the derivative
cov_params : array_like
The variance-covariance of the parameters
at : str
Options are:
- 'overall', The average of the marginal effects at each
observation.
- 'mean', The marginal effects at the mean of each regressor.
- 'median', The marginal effects at the median of each regressor.
- 'zero', The marginal effects at zero for each regressor.
- 'all', The marginal effects at each observation.
Only 'overall' has any effect here.
derivative : function or array_like
If a function, it returns the marginal effects of the model with
respect to the exogenous variables evaluated at exog. Expected to be
called derivative(params, exog). This will be numerically
differentiated. Otherwise, it can be the Jacobian of the marginal
effects with respect to the parameters.
dummy_ind : array_like
Indices of the columns of exog that contain dummy variables
count_ind : array_like
Indices of the columns of exog that contain count variables
Notes
-----
For continuous regressors, the variance-covariance is given by
Asy. Var[MargEff] = [d margeff / d params] V [d margeff / d params]'
where V is the parameter variance-covariance.
The outer Jacobians are computed via numerical differentiation if
derivative is a function. | margeff_cov_params | python | statsmodels/statsmodels | statsmodels/discrete/discrete_margins.py | https://github.com/statsmodels/statsmodels/blob/master/statsmodels/discrete/discrete_margins.py | BSD-3-Clause |
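The delta method itself is a one-liner: with J the Jacobian of the marginal effects with respect to params and V the parameter covariance, Asy. Var[margeff] = J V J'. A numeric sketch with hypothetical, made-up values:

import numpy as np

J = np.array([[0.20, 0.10],
              [0.05, 0.30]])        # hypothetical Jacobian d margeff / d params
V = np.array([[0.010, 0.002],
              [0.002, 0.020]])      # hypothetical cov_params
cov_me = J @ V @ J.T
se_me = np.sqrt(np.diag(cov_me))    # standard errors of the marginal effects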
def margeff_cov_with_se(model, params, exog, cov_params, at, derivative,
dummy_ind, count_ind, method, J):
"""
See margeff_cov_params.
Same function but returns both the covariance of the marginal effects
and their standard errors.
"""
cov_me = margeff_cov_params(model, params, exog, cov_params, at,
derivative, dummy_ind,
count_ind, method, J)
return cov_me, np.sqrt(np.diag(cov_me)) | See margeff_cov_params.
Same function but returns both the covariance of the marginal effects
and their standard errors. | margeff_cov_with_se | python | statsmodels/statsmodels | statsmodels/discrete/discrete_margins.py | https://github.com/statsmodels/statsmodels/blob/master/statsmodels/discrete/discrete_margins.py | BSD-3-Clause |
def summary_frame(self, alpha=.05):
"""
Returns a DataFrame summarizing the marginal effects.
Parameters
----------
alpha : float
Number between 0 and 1. The confidence intervals have the
probability 1-alpha.
Returns
-------
frame : DataFrames
A DataFrame summarizing the marginal effects.
Notes
-----
The dataframe is created on each call and not cached, as are the
tables built in `summary()`
"""
_check_at_is_all(self.margeff_options)
results = self.results
model = self.results.model
from pandas import DataFrame, MultiIndex
names = [_transform_names[self.margeff_options['method']],
'Std. Err.', 'z', 'Pr(>|z|)',
'Conf. Int. Low', 'Conf. Int. Hi.']
ind = self.results.model.exog.var(0) != 0 # True if not a constant
exog_names = self.results.model.exog_names
k_extra = getattr(model, 'k_extra', 0)
if k_extra > 0:
exog_names = exog_names[:-k_extra]
var_names = [name for i,name in enumerate(exog_names) if ind[i]]
if self.margeff.ndim == 2:
# MNLogit case
ci = self.conf_int(alpha)
table = np.column_stack([i.ravel("F") for i in
[self.margeff, self.margeff_se, self.tvalues,
self.pvalues, ci[:, 0, :], ci[:, 1, :]]])
_, yname_list = results._get_endog_name(model.endog_names,
None, all=True)
ynames = np.repeat(yname_list, len(var_names))
xnames = np.tile(var_names, len(yname_list))
index = MultiIndex.from_tuples(list(zip(ynames, xnames)),
names=['endog', 'exog'])
else:
table = np.column_stack((self.margeff, self.margeff_se, self.tvalues,
self.pvalues, self.conf_int(alpha)))
index=var_names
return DataFrame(table, columns=names, index=index) | Returns a DataFrame summarizing the marginal effects.
Parameters
----------
alpha : float
Number between 0 and 1. The confidence intervals have the
probability 1-alpha.
Returns
-------
frame : DataFrames
A DataFrame summarizing the marginal effects.
Notes
-----
The dataframe is created on each call and not cached, as are the
tables built in `summary()` | summary_frame | python | statsmodels/statsmodels | statsmodels/discrete/discrete_margins.py | https://github.com/statsmodels/statsmodels/blob/master/statsmodels/discrete/discrete_margins.py | BSD-3-Clause |
def conf_int(self, alpha=.05):
"""
Returns the confidence intervals of the marginal effects
Parameters
----------
alpha : float
Number between 0 and 1. The confidence intervals have the
probability 1-alpha.
Returns
-------
conf_int : ndarray
An array with lower, upper confidence intervals for the marginal
effects.
"""
_check_at_is_all(self.margeff_options)
me_se = self.margeff_se
q = norm.ppf(1 - alpha / 2)
lower = self.margeff - q * me_se
upper = self.margeff + q * me_se
return np.asarray(lzip(lower, upper)) | Returns the confidence intervals of the marginal effects
Parameters
----------
alpha : float
Number between 0 and 1. The confidence intervals have the
probability 1-alpha.
Returns
-------
conf_int : ndarray
An array with lower, upper confidence intervals for the marginal
effects. | conf_int | python | statsmodels/statsmodels | statsmodels/discrete/discrete_margins.py | https://github.com/statsmodels/statsmodels/blob/master/statsmodels/discrete/discrete_margins.py | BSD-3-Clause |
def summary(self, alpha=.05):
"""
Returns a summary table for marginal effects
Parameters
----------
alpha : float
Number between 0 and 1. The confidence intervals have the
probability 1-alpha.
Returns
-------
Summary : SummaryTable
A SummaryTable instance
"""
_check_at_is_all(self.margeff_options)
results = self.results
model = results.model
title = model.__class__.__name__ + " Marginal Effects"
method = self.margeff_options['method']
top_left = [('Dep. Variable:', [model.endog_names]),
('Method:', [method]),
('At:', [self.margeff_options['at']]),]
from statsmodels.iolib.summary import (
Summary,
summary_params,
table_extend,
)
exog_names = model.exog_names[:] # copy
smry = Summary()
# TODO: sigh, we really need to hold on to this in _data...
_, const_idx = _get_const_index(model.exog)
if const_idx is not None:
exog_names.pop(const_idx[0])
if getattr(model, 'k_extra', 0) > 0:
exog_names = exog_names[:-model.k_extra]
J = int(getattr(model, "J", 1))
if J > 1:
yname, yname_list = results._get_endog_name(model.endog_names,
None, all=True)
else:
yname = model.endog_names
yname_list = [yname]
smry.add_table_2cols(self, gleft=top_left, gright=[],
yname=yname, xname=exog_names, title=title)
# NOTE: add_table_params is not general enough yet for margeff
# could use a refactor with getattr instead of hard-coded params
# tvalues etc.
table = []
conf_int = self.conf_int(alpha)
margeff = self.margeff
margeff_se = self.margeff_se
tvalues = self.tvalues
pvalues = self.pvalues
if J > 1:
for eq in range(J):
restup = (results, margeff[:,eq], margeff_se[:,eq],
tvalues[:,eq], pvalues[:,eq], conf_int[:,:,eq])
tble = summary_params(restup, yname=yname_list[eq],
xname=exog_names, alpha=alpha, use_t=False,
skip_header=True)
tble.title = yname_list[eq]
# overwrite coef with method name
header = ['', _transform_names[method], 'std err', 'z',
'P>|z|', '[' + str(alpha/2), str(1-alpha/2) + ']']
tble.insert_header_row(0, header)
table.append(tble)
table = table_extend(table, keep_headers=True)
else:
restup = (results, margeff, margeff_se, tvalues, pvalues, conf_int)
table = summary_params(restup, yname=yname, xname=exog_names,
alpha=alpha, use_t=False, skip_header=True)
header = ['', _transform_names[method], 'std err', 'z',
'P>|z|', '[' + str(alpha/2), str(1-alpha/2) + ']']
table.insert_header_row(0, header)
smry.tables.append(table)
return smry | Returns a summary table for marginal effects
Parameters
----------
alpha : float
Number between 0 and 1. The confidence intervals have the
probability 1-alpha.
Returns
-------
Summary : SummaryTable
A SummaryTable instance | summary | python | statsmodels/statsmodels | statsmodels/discrete/discrete_margins.py | https://github.com/statsmodels/statsmodels/blob/master/statsmodels/discrete/discrete_margins.py | BSD-3-Clause |
def get_margeff(self, at='overall', method='dydx', atexog=None,
dummy=False, count=False):
"""Get marginal effects of the fitted model.
Parameters
----------
at : str, optional
Options are:
- 'overall', The average of the marginal effects at each
observation.
- 'mean', The marginal effects at the mean of each regressor.
- 'median', The marginal effects at the median of each regressor.
- 'zero', The marginal effects at zero for each regressor.
- 'all', The marginal effects at each observation. If `at` is all
only margeff will be available.
Note that if `exog` is specified, then marginal effects for all
variables not specified by `exog` are calculated using the `at`
option.
method : str, optional
Options are:
- 'dydx' - dy/dx - No transformation is made and marginal effects
are returned. This is the default.
- 'eyex' - estimate elasticities of variables in `exog` --
d(lny)/d(lnx)
- 'dyex' - estimate semi-elasticity -- dy/d(lnx)
- 'eydx' - estimate semi-elasticity -- d(lny)/dx
Note that transformations are done after each observation is
calculated. Semi-elasticities for binary variables are computed
using the midpoint method. 'dyex' and 'eyex' do not make sense
for discrete variables.
atexog : array_like, optional
Optionally, you can provide the exogenous variables over which to
get the marginal effects. This should be a dictionary with the key
as the zero-indexed column number and the value of the dictionary.
Default is None for all independent variables less the constant.
dummy : bool, optional
If False, treats binary variables (if present) as continuous. This
is the default. Else if True, treats binary variables as
changing from 0 to 1. Note that any variable that is either 0 or 1
is treated as binary. Each binary variable is treated separately
for now.
count : bool, optional
If False, treats count variables (if present) as continuous. This
is the default. Else if True, the marginal effect is the
change in probabilities when each observation is increased by one.
Returns
-------
effects : ndarray
the marginal effect corresponding to the input options
Notes
-----
When using after Poisson, returns the expected number of events
per period, assuming that the model is loglinear.
"""
self._reset() # always reset the cache when this is called
#TODO: if at is not all or overall, we can also put atexog values
# in summary table head
method = method.lower()
at = at.lower()
_check_margeff_args(at, method)
self.margeff_options = dict(method=method, at=at)
results = self.results
model = results.model
params = results.params
exog = model.exog.copy() # copy because values are changed
effects_idx, const_idx = _get_const_index(exog)
if dummy:
_check_discrete_args(at, method)
dummy_idx, dummy = _get_dummy_index(exog, const_idx)
else:
dummy_idx = None
if count:
_check_discrete_args(at, method)
count_idx, count = _get_count_index(exog, const_idx)
else:
count_idx = None
# attach dummy_idx and count_idx
self.dummy_idx = dummy_idx
self.count_idx = count_idx
# get the exogenous variables
exog = _get_margeff_exog(exog, at, atexog, effects_idx)
# get base marginal effects, handled by sub-classes
effects = model._derivative_exog(params, exog, method,
dummy_idx, count_idx)
J = getattr(model, 'J', 1)
effects_idx = np.tile(effects_idx, J) # adjust for multi-equation.
effects = _effects_at(effects, at)
if at == 'all':
if J > 1:
K = model.K - np.any(~effects_idx) # subtract constant
self.margeff = effects[:, effects_idx].reshape(-1, K, J,
order='F')
else:
self.margeff = effects[:, effects_idx]
else:
# Set standard error of the marginal effects by Delta method.
margeff_cov, margeff_se = margeff_cov_with_se(model, params, exog,
results.cov_params(), at,
model._derivative_exog,
dummy_idx, count_idx,
method, J)
# reshape for multi-equation
if J > 1:
K = model.K - np.any(~effects_idx) # subtract constant
self.margeff = effects[effects_idx].reshape(K, J, order='F')
self.margeff_se = margeff_se[effects_idx].reshape(K, J,
order='F')
self.margeff_cov = margeff_cov[effects_idx][:, effects_idx]
else:
# do not care about at constant
# hack truncate effects_idx again if necessary
# if eyex, then effects is truncated to be without extra params
effects_idx = effects_idx[:len(effects)]
self.margeff_cov = margeff_cov[effects_idx][:, effects_idx]
self.margeff_se = margeff_se[effects_idx]
self.margeff = effects[effects_idx] | Get marginal effects of the fitted model.
Parameters
----------
at : str, optional
Options are:
- 'overall', The average of the marginal effects at each
observation.
- 'mean', The marginal effects at the mean of each regressor.
- 'median', The marginal effects at the median of each regressor.
- 'zero', The marginal effects at zero for each regressor.
- 'all', The marginal effects at each observation. If `at` is all
only margeff will be available.
Note that if `exog` is specified, then marginal effects for all
variables not specified by `exog` are calculated using the `at`
option.
method : str, optional
Options are:
- 'dydx' - dy/dx - No transformation is made and marginal effects
are returned. This is the default.
- 'eyex' - estimate elasticities of variables in `exog` --
d(lny)/d(lnx)
- 'dyex' - estimate semi-elasticity -- dy/d(lnx)
- 'eydx' - estimate semi-elasticity -- d(lny)/dx
Note that transformations are done after each observation is
calculated. Semi-elasticities for binary variables are computed
using the midpoint method. 'dyex' and 'eyex' do not make sense
for discrete variables.
atexog : array_like, optional
Optionally, you can provide the exogenous variables over which to
get the marginal effects. This should be a dictionary with the key
as the zero-indexed column number and the value of the dictionary.
Default is None for all independent variables less the constant.
dummy : bool, optional
If False, treats binary variables (if present) as continuous. This
is the default. Else if True, treats binary variables as
changing from 0 to 1. Note that any variable that is either 0 or 1
is treated as binary. Each binary variable is treated separately
for now.
count : bool, optional
If False, treats count variables (if present) as continuous. This
is the default. Else if True, the marginal effect is the
change in probabilities when each observation is increased by one.
Returns
-------
effects : ndarray
the marginal effect corresponding to the input options
Notes
-----
When using after Poisson, returns the expected number of events
per period, assuming that the model is loglinear. | get_margeff | python | statsmodels/statsmodels | statsmodels/discrete/discrete_margins.py | https://github.com/statsmodels/statsmodels/blob/master/statsmodels/discrete/discrete_margins.py | BSD-3-Clause |
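Typical usage sketch with the small ``spector`` teaching dataset shipped with statsmodels (PSI is a 0/1 regressor, so ``dummy=True`` applies the discrete-change effect to it):

import statsmodels.api as sm

spector = sm.datasets.spector.load_pandas()
exog = sm.add_constant(spector.exog)        # const + GPA, TUCE, PSI
res = sm.Logit(spector.endog, exog).fit(disp=0)

marg = res.get_margeff(at="overall", method="dydx", dummy=True)
print(marg.summary())
frame = marg.summary_frame()                # the same numbers as a DataFrame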
def _validate_l1_method(method):
"""
As of 0.10.0, the supported values for `method` in `fit_regularized`
are "l1" and "l1_cvxopt_cp". If an invalid value is passed, raise
with a helpful error message
Parameters
----------
method : str
Raises
------
ValueError
"""
if method not in ['l1', 'l1_cvxopt_cp']:
raise ValueError('`method` = {method} is not supported, use either '
'"l1" or "l1_cvxopt_cp"'.format(method=method)) | As of 0.10.0, the supported values for `method` in `fit_regularized`
are "l1" and "l1_cvxopt_cp". If an invalid value is passed, raise
with a helpful error message
Parameters
----------
method : str
Raises
------
ValueError | _validate_l1_method | python | statsmodels/statsmodels | statsmodels/discrete/discrete_model.py | https://github.com/statsmodels/statsmodels/blob/master/statsmodels/discrete/discrete_model.py | BSD-3-Clause |
def initialize(self):
"""
Initialize is called by
statsmodels.model.LikelihoodModel.__init__
and should contain any preprocessing that needs to be done for a model.
"""
if self._check_rank:
# assumes constant
rank = tools.matrix_rank(self.exog, method="qr")
else:
# If rank check is skipped, assume full
rank = self.exog.shape[1]
self.df_model = float(rank - 1)
self.df_resid = float(self.exog.shape[0] - rank) | Initialize is called by
statsmodels.model.LikelihoodModel.__init__
and should contain any preprocessing that needs to be done for a model. | initialize | python | statsmodels/statsmodels | statsmodels/discrete/discrete_model.py | https://github.com/statsmodels/statsmodels/blob/master/statsmodels/discrete/discrete_model.py | BSD-3-Clause |
def cdf(self, X):
"""
The cumulative distribution function of the model.
"""
raise NotImplementedError | The cumulative distribution function of the model. | cdf | python | statsmodels/statsmodels | statsmodels/discrete/discrete_model.py | https://github.com/statsmodels/statsmodels/blob/master/statsmodels/discrete/discrete_model.py | BSD-3-Clause |
def pdf(self, X):
"""
The probability density (mass) function of the model.
"""
raise NotImplementedError | The probability density (mass) function of the model. | pdf | python | statsmodels/statsmodels | statsmodels/discrete/discrete_model.py | https://github.com/statsmodels/statsmodels/blob/master/statsmodels/discrete/discrete_model.py | BSD-3-Clause |
def fit(self, start_params=None, method='newton', maxiter=35,
full_output=1, disp=1, callback=None, **kwargs):
"""
Fit the model using maximum likelihood.
The rest of the docstring is from
statsmodels.base.model.LikelihoodModel.fit
"""
if callback is None:
callback = self._check_perfect_pred
else:
pass # TODO: make a function factory to have multiple call-backs
mlefit = super().fit(start_params=start_params,
method=method,
maxiter=maxiter,
full_output=full_output,
disp=disp,
callback=callback,
**kwargs)
return mlefit # It is up to subclasses to wrap results | Fit the model using maximum likelihood.
The rest of the docstring is from
statsmodels.base.model.LikelihoodModel.fit | fit | python | statsmodels/statsmodels | statsmodels/discrete/discrete_model.py | https://github.com/statsmodels/statsmodels/blob/master/statsmodels/discrete/discrete_model.py | BSD-3-Clause |
def fit_regularized(self, start_params=None, method='l1',
maxiter='defined_by_method', full_output=1, disp=True,
callback=None, alpha=0, trim_mode='auto',
auto_trim_tol=0.01, size_trim_tol=1e-4, qc_tol=0.03,
qc_verbose=False, **kwargs):
"""
Fit the model using a regularized maximum likelihood.
The regularization method AND the solver used are determined by the
argument method.
Parameters
----------
start_params : array_like, optional
Initial guess of the solution for the loglikelihood maximization.
The default is an array of zeros.
method : 'l1' or 'l1_cvxopt_cp'
See notes for details.
maxiter : {int, 'defined_by_method'}
Maximum number of iterations to perform.
If 'defined_by_method', then use method defaults (see notes).
full_output : bool
Set to True to have all available output in the Results object's
mle_retvals attribute. The output is dependent on the solver.
See LikelihoodModelResults notes section for more information.
disp : bool
Set to True to print convergence messages.
fargs : tuple
Extra arguments passed to the likelihood function, i.e.,
loglike(x,*args).
callback : callable callback(xk)
Called after each iteration, as callback(xk), where xk is the
current parameter vector.
retall : bool
Set to True to return list of solutions at each iteration.
Available in Results object's mle_retvals attribute.
alpha : non-negative scalar or numpy array (same size as parameters)
The weight multiplying the l1 penalty term.
trim_mode : 'auto', 'size', or 'off'
If not 'off', trim (set to zero) parameters that would have been
zero if the solver reached the theoretical minimum.
If 'auto', trim params using the theory described in the Notes below.
If 'size', trim params if they have very small absolute value.
size_trim_tol : float or 'auto' (default = 'auto')
Tolerance used when trim_mode == 'size'.
auto_trim_tol : float
Tolerance used when trim_mode == 'auto'.
qc_tol : float
Print warning and do not allow auto trim when condition (ii) in the
Notes is violated by this much.
qc_verbose : bool
If true, print out a full QC report upon failure.
**kwargs
Additional keyword arguments used when fitting the model.
Returns
-------
Results
A results instance.
Notes
-----
Using 'l1_cvxopt_cp' requires the cvxopt module.
Extra parameters are not penalized if alpha is given as a scalar.
An example is the shape parameter in NegativeBinomial `nb1` and `nb2`.
Optional arguments for the solvers (available in Results.mle_settings)::
'l1'
acc : float (default 1e-6)
Requested accuracy as used by slsqp
'l1_cvxopt_cp'
abstol : float
absolute accuracy (default: 1e-7).
reltol : float
relative accuracy (default: 1e-6).
feastol : float
tolerance for feasibility conditions (default: 1e-7).
refinement : int
number of iterative refinement steps when solving KKT
equations (default: 1).
Optimization methodology
With :math:`L` the negative log likelihood, we solve the convex but
non-smooth problem
.. math:: \\min_\\beta L(\\beta) + \\sum_k\\alpha_k |\\beta_k|
via the transformation to the smooth, convex, constrained problem
in twice as many variables (adding the "added variables" :math:`u_k`)
.. math:: \\min_{\\beta,u} L(\\beta) + \\sum_k\\alpha_k u_k,
subject to
.. math:: -u_k \\leq \\beta_k \\leq u_k.
With :math:`\\partial_k L` the derivative of :math:`L` in the
:math:`k^{th}` parameter direction, theory dictates that, at the
minimum, exactly one of two conditions holds:
(i) :math:`|\\partial_k L| = \\alpha_k` and :math:`\\beta_k \\neq 0`
(ii) :math:`|\\partial_k L| \\leq \\alpha_k` and :math:`\\beta_k = 0`
"""
_validate_l1_method(method)
# Set attributes based on method
cov_params_func = self.cov_params_func_l1
### Bundle up extra kwargs for the dictionary kwargs. These are
### passed through super(...).fit() as kwargs and unpacked at
### appropriate times
alpha = np.array(alpha)
assert alpha.min() >= 0
try:
kwargs['alpha'] = alpha
except TypeError:
kwargs = dict(alpha=alpha)
kwargs['alpha_rescaled'] = kwargs['alpha'] / float(self.endog.shape[0])
kwargs['trim_mode'] = trim_mode
kwargs['size_trim_tol'] = size_trim_tol
kwargs['auto_trim_tol'] = auto_trim_tol
kwargs['qc_tol'] = qc_tol
kwargs['qc_verbose'] = qc_verbose
### Define default keyword arguments to be passed to super(...).fit()
if maxiter == 'defined_by_method':
if method == 'l1':
maxiter = 1000
elif method == 'l1_cvxopt_cp':
maxiter = 70
## Parameters to pass to super(...).fit()
# For the 'extra' parameters, pass all that are available,
# even if we know (at this point) we will only use one.
extra_fit_funcs = {'l1': fit_l1_slsqp}
if have_cvxopt and method == 'l1_cvxopt_cp':
from statsmodels.base.l1_cvxopt import fit_l1_cvxopt_cp
extra_fit_funcs['l1_cvxopt_cp'] = fit_l1_cvxopt_cp
elif method.lower() == 'l1_cvxopt_cp':
raise ValueError("Cannot use l1_cvxopt_cp as cvxopt "
"was not found (install it, or use method='l1' instead)")
if callback is None:
callback = self._check_perfect_pred
else:
pass # make a function factory to have multiple call-backs
mlefit = super().fit(start_params=start_params,
method=method,
maxiter=maxiter,
full_output=full_output,
disp=disp,
callback=callback,
extra_fit_funcs=extra_fit_funcs,
cov_params_func=cov_params_func,
**kwargs)
return mlefit # up to subclasses to wrap results | Fit the model using a regularized maximum likelihood.
The regularization method AND the solver used are determined by the
argument method.
Parameters
----------
start_params : array_like, optional
Initial guess of the solution for the loglikelihood maximization.
The default is an array of zeros.
method : 'l1' or 'l1_cvxopt_cp'
See notes for details.
maxiter : {int, 'defined_by_method'}
Maximum number of iterations to perform.
If 'defined_by_method', then use method defaults (see notes).
full_output : bool
Set to True to have all available output in the Results object's
mle_retvals attribute. The output is dependent on the solver.
See LikelihoodModelResults notes section for more information.
disp : bool
Set to True to print convergence messages.
fargs : tuple
Extra arguments passed to the likelihood function, i.e.,
loglike(x,*args).
callback : callable callback(xk)
Called after each iteration, as callback(xk), where xk is the
current parameter vector.
retall : bool
Set to True to return list of solutions at each iteration.
Available in Results object's mle_retvals attribute.
alpha : non-negative scalar or numpy array (same size as parameters)
The weight multiplying the l1 penalty term.
trim_mode : 'auto', 'size', or 'off'
If not 'off', trim (set to zero) parameters that would have been
zero if the solver reached the theoretical minimum.
If 'auto', trim params using the theory described in the Notes below.
If 'size', trim params if they have very small absolute value.
size_trim_tol : float or 'auto' (default = 'auto')
Tolerance used when trim_mode == 'size'.
auto_trim_tol : float
Tolerance used when trim_mode == 'auto'.
qc_tol : float
Print warning and do not allow auto trim when condition (ii) in the
Notes is violated by this much.
qc_verbose : bool
If true, print out a full QC report upon failure.
**kwargs
Additional keyword arguments used when fitting the model.
Returns
-------
Results
A results instance.
Notes
-----
Using 'l1_cvxopt_cp' requires the cvxopt module.
Extra parameters are not penalized if alpha is given as a scalar.
An example is the shape parameter in NegativeBinomial `nb1` and `nb2`.
Optional arguments for the solvers (available in Results.mle_settings)::
'l1'
acc : float (default 1e-6)
Requested accuracy as used by slsqp
'l1_cvxopt_cp'
abstol : float
absolute accuracy (default: 1e-7).
reltol : float
relative accuracy (default: 1e-6).
feastol : float
tolerance for feasibility conditions (default: 1e-7).
refinement : int
number of iterative refinement steps when solving KKT
equations (default: 1).
Optimization methodology
With :math:`L` the negative log likelihood, we solve the convex but
non-smooth problem
.. math:: \\min_\\beta L(\\beta) + \\sum_k\\alpha_k |\\beta_k|
via the transformation to the smooth, convex, constrained problem
in twice as many variables (adding the "added variables" :math:`u_k`)
.. math:: \\min_{\\beta,u} L(\\beta) + \\sum_k\\alpha_k u_k,
subject to
.. math:: -u_k \\leq \\beta_k \\leq u_k.
With :math:`\\partial_k L` the derivative of :math:`L` in the
:math:`k^{th}` parameter direction, theory dictates that, at the
minimum, exactly one of two conditions holds:
(i) :math:`|\\partial_k L| = \\alpha_k` and :math:`\\beta_k \\neq 0`
(ii) :math:`|\\partial_k L| \\leq \\alpha_k` and :math:`\\beta_k = 0` | fit_regularized | python | statsmodels/statsmodels | statsmodels/discrete/discrete_model.py | https://github.com/statsmodels/statsmodels/blob/master/statsmodels/discrete/discrete_model.py | BSD-3-Clause |
def cov_params_func_l1(self, likelihood_model, xopt, retvals):
"""
Computes cov_params on a reduced parameter space
corresponding to the nonzero parameters resulting from the
l1 regularized fit.
Returns a full cov_params matrix, with entries corresponding
to zero'd values set to np.nan.
"""
H = likelihood_model.hessian(xopt)
trimmed = retvals['trimmed']
nz_idx = np.nonzero(~trimmed)[0]
nnz_params = (~trimmed).sum()
if nnz_params > 0:
H_restricted = H[nz_idx[:, None], nz_idx]
# Covariance estimate for the nonzero params
H_restricted_inv = np.linalg.inv(-H_restricted)
else:
H_restricted_inv = np.zeros(0)
cov_params = np.nan * np.ones(H.shape)
cov_params[nz_idx[:, None], nz_idx] = H_restricted_inv
return cov_params | Computes cov_params on a reduced parameter space
corresponding to the nonzero parameters resulting from the
l1 regularized fit.
Returns a full cov_params matrix, with entries corresponding
to zero'd values set to np.nan. | cov_params_func_l1 | python | statsmodels/statsmodels | statsmodels/discrete/discrete_model.py | https://github.com/statsmodels/statsmodels/blob/master/statsmodels/discrete/discrete_model.py | BSD-3-Clause |
def predict(self, params, exog=None, which="mean", linear=None):
"""
Predict response variable of a model given exogenous variables.
"""
raise NotImplementedError | Predict response variable of a model given exogenous variables. | predict | python | statsmodels/statsmodels | statsmodels/discrete/discrete_model.py | https://github.com/statsmodels/statsmodels/blob/master/statsmodels/discrete/discrete_model.py | BSD-3-Clause |
def _derivative_exog(self, params, exog=None, dummy_idx=None,
count_idx=None):
"""
This should implement the derivative of the non-linear function
"""
raise NotImplementedError | This should implement the derivative of the non-linear function | _derivative_exog | python | statsmodels/statsmodels | statsmodels/discrete/discrete_model.py | https://github.com/statsmodels/statsmodels/blob/master/statsmodels/discrete/discrete_model.py | BSD-3-Clause |
def _derivative_exog_helper(self, margeff, params, exog, dummy_idx,
count_idx, transform):
"""
Helper for _derivative_exog to wrap results appropriately
"""
from .discrete_margins import _get_count_effects, _get_dummy_effects
if count_idx is not None:
margeff = _get_count_effects(margeff, exog, count_idx, transform,
self, params)
if dummy_idx is not None:
margeff = _get_dummy_effects(margeff, exog, dummy_idx, transform,
self, params)
return margeff | Helper for _derivative_exog to wrap results appropriately | _derivative_exog_helper | python | statsmodels/statsmodels | statsmodels/discrete/discrete_model.py | https://github.com/statsmodels/statsmodels/blob/master/statsmodels/discrete/discrete_model.py | BSD-3-Clause |
def predict(self, params, exog=None, which="mean", linear=None,
offset=None):
"""
Predict response variable of a model given exogenous variables.
Parameters
----------
params : array_like
Fitted parameters of the model.
exog : array_like
1d or 2d array of exogenous values. If not supplied, the
whole exog attribute of the model is used.
which : {'mean', 'linear', 'var'}, optional
Statistic to predict. Default is 'mean'.
- 'mean' returns the conditional expectation of endog E(y | x),
i.e. exp of linear predictor.
- 'linear' returns the linear predictor of the mean function.
- 'var' returns the estimated variance of endog implied by the
model.
.. versionadded: 0.14
``which`` replaces and extends the deprecated ``linear``
argument.
linear : bool
If True, returns the linear predicted values. If False or None,
then the statistic specified by ``which`` will be returned.
.. deprecated:: 0.14
The ``linear`` keyword is deprecated and will be removed,
use ``which`` keyword instead.
Returns
-------
array
Fitted values at exog.
"""
if linear is not None:
msg = 'linear keyword is deprecated, use which="linear"'
warnings.warn(msg, FutureWarning)
if linear is True:
which = "linear"
# Use fit offset if appropriate
if offset is None and exog is None and hasattr(self, 'offset'):
offset = self.offset
elif offset is None:
offset = 0.
if exog is None:
exog = self.exog
linpred = np.dot(exog, params) + offset
if which == "mean":
return self.cdf(linpred)
elif which == "linear":
return linpred
if which == "var":
mu = self.cdf(linpred)
var_ = mu * (1 - mu)
return var_
else:
raise ValueError('`which` must be "mean", "linear" or "var".')
Parameters
----------
params : array_like
Fitted parameters of the model.
exog : array_like
1d or 2d array of exogenous values. If not supplied, the
whole exog attribute of the model is used.
which : {'mean', 'linear', 'var'}, optional
Statistic to predict. Default is 'mean'.
- 'mean' returns the conditional expectation of endog E(y | x),
i.e. exp of linear predictor.
- 'linear' returns the linear predictor of the mean function.
- 'var' returns the estimated variance of endog implied by the
model.
.. versionadded:: 0.14
``which`` replaces and extends the deprecated ``linear``
argument.
linear : bool
If True, returns the linear predicted values. If False or None,
then the statistic specified by ``which`` will be returned.
.. deprecated:: 0.14
The ``linear`` keyword is deprecated and will be removed,
use ``which`` keyword instead.
Returns
-------
array
Fitted values at exog. | predict | python | statsmodels/statsmodels | statsmodels/discrete/discrete_model.py | https://github.com/statsmodels/statsmodels/blob/master/statsmodels/discrete/discrete_model.py | BSD-3-Clause |
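A sketch of the relationships among the ``which`` options for a logit (``which`` requires statsmodels >= 0.14; endog is irrelevant when only calling ``predict`` on the model instance):

import numpy as np
import statsmodels.api as sm

x = sm.add_constant(np.linspace(-2.0, 2.0, 50))
params = np.array([0.2, 1.0])
model = sm.Logit(np.zeros(50), x)

p = model.predict(params, which="mean")
lin = model.predict(params, which="linear")
v = model.predict(params, which="var")
assert np.allclose(p, 1 / (1 + np.exp(-lin)))   # mean = cdf(linear predictor)
assert np.allclose(v, p * (1 - p))              # Bernoulli variance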
def _derivative_predict(self, params, exog=None, transform='dydx',
offset=None):
"""
For computing marginal effects standard errors.
This is used only in the case of discrete and count regressors to
get the variance-covariance of the marginal effects. It returns
        [d F / d params] where F is the prediction.
Transform can be 'dydx' or 'eydx'. Checking is done in margeff
computations for appropriate transform.
"""
if exog is None:
exog = self.exog
linpred = self.predict(params, exog, offset=offset, which="linear")
dF = self.pdf(linpred)[:,None] * exog
if 'ey' in transform:
dF /= self.predict(params, exog, offset=offset)[:,None]
return dF | For computing marginal effects standard errors.
This is used only in the case of discrete and count regressors to
get the variance-covariance of the marginal effects. It returns
        [d F / d params] where F is the prediction.
Transform can be 'dydx' or 'eydx'. Checking is done in margeff
computations for appropriate transform. | _derivative_predict | python | statsmodels/statsmodels | statsmodels/discrete/discrete_model.py | https://github.com/statsmodels/statsmodels/blob/master/statsmodels/discrete/discrete_model.py | BSD-3-Clause |
def _derivative_exog(self, params, exog=None, transform='dydx',
dummy_idx=None, count_idx=None, offset=None):
"""
For computing marginal effects returns dF(XB) / dX where F(.) is
the predicted probabilities
transform can be 'dydx', 'dyex', 'eydx', or 'eyex'.
Not all of these make sense in the presence of discrete regressors,
but checks are done in the results in get_margeff.
"""
# Note: this form should be appropriate for
# group 1 probit, logit, logistic, cloglog, heckprob, xtprobit
if exog is None:
exog = self.exog
linpred = self.predict(params, exog, offset=offset, which="linear")
margeff = np.dot(self.pdf(linpred)[:,None],
params[None,:])
if 'ex' in transform:
margeff *= exog
if 'ey' in transform:
margeff /= self.predict(params, exog)[:, None]
return self._derivative_exog_helper(margeff, params, exog,
dummy_idx, count_idx, transform) | For computing marginal effects returns dF(XB) / dX where F(.) is
the predicted probabilities
transform can be 'dydx', 'dyex', 'eydx', or 'eyex'.
Not all of these make sense in the presence of discrete regressors,
but checks are done in the results in get_margeff. | _derivative_exog | python | statsmodels/statsmodels | statsmodels/discrete/discrete_model.py | https://github.com/statsmodels/statsmodels/blob/master/statsmodels/discrete/discrete_model.py | BSD-3-Clause |
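A sketch verifying the chain-rule form the method implements, dP(y=1|x)/dx = pdf(x'b) * b; note that ``_derivative_exog`` is private API and the data here are illustrative:

import numpy as np
import statsmodels.api as sm

rng = np.random.default_rng(1)
x = sm.add_constant(rng.normal(size=(150, 2)))
y = (rng.uniform(size=150) < 0.5).astype(float)
res = sm.Logit(y, x).fit(disp=0)

linpred = res.model.predict(res.params, which="linear")
manual = res.model.pdf(linpred)[:, None] * res.params[None, :]
assert np.allclose(manual, res.model._derivative_exog(res.params))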
def get_distribution(self, params, exog=None, offset=None):
"""Get frozen instance of distribution based on predicted parameters.
Parameters
----------
params : array_like
The parameters of the model.
exog : ndarray, optional
            Explanatory variables for the model.
If ``exog`` is None, then the data from the model will be used.
offset : ndarray, optional
Offset is added to the linear predictor of the mean function with
coefficient equal to 1.
Default is zero if exog is not None, and the model offset if exog
is None.
Returns
-------
Instance of frozen scipy distribution.
"""
mu = self.predict(params, exog=exog, offset=offset)
# distr = stats.bernoulli(mu[:, None])
distr = stats.bernoulli(mu)
return distr | Get frozen instance of distribution based on predicted parameters.
Parameters
----------
params : array_like
The parameters of the model.
exog : ndarray, optional
            Explanatory variables for the model.
If ``exog`` is None, then the data from the model will be used.
offset : ndarray, optional
Offset is added to the linear predictor of the mean function with
coefficient equal to 1.
Default is zero if exog is not None, and the model offset if exog
is None.
Returns
-------
Instance of frozen scipy distribution. | get_distribution | python | statsmodels/statsmodels | statsmodels/discrete/discrete_model.py | https://github.com/statsmodels/statsmodels/blob/master/statsmodels/discrete/discrete_model.py | BSD-3-Clause |
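A short sketch of the frozen-distribution API for a binary model (synthetic data; the Bernoulli mean is the fitted probability):

import numpy as np
import statsmodels.api as sm

rng = np.random.default_rng(2)
x = sm.add_constant(rng.normal(size=(100, 1)))
y = (rng.uniform(size=100) < 0.4).astype(float)
res = sm.Logit(y, x).fit(disp=0)

distr = res.model.get_distribution(res.params)   # frozen scipy.stats.bernoulli
assert np.allclose(distr.mean(), res.predict())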
def initialize(self):
"""
Preprocesses the data for MNLogit.
"""
super().initialize()
# This is also a "whiten" method in other models (eg regression)
self.endog = self.endog.argmax(1) # turn it into an array of col idx
self.J = self.wendog.shape[1]
self.K = self.exog.shape[1]
self.df_model *= (self.J-1) # for each J - 1 equation.
self.df_resid = self.exog.shape[0] - self.df_model - (self.J-1) | Preprocesses the data for MNLogit. | initialize | python | statsmodels/statsmodels | statsmodels/discrete/discrete_model.py | https://github.com/statsmodels/statsmodels/blob/master/statsmodels/discrete/discrete_model.py | BSD-3-Clause |
def predict(self, params, exog=None, which="mean", linear=None):
"""
Predict response variable of a model given exogenous variables.
Parameters
----------
params : array_like
2d array of fitted parameters of the model. Should be in the
order returned from the model.
exog : array_like
1d or 2d array of exogenous values. If not supplied, the
whole exog attribute of the model is used. If a 1d array is given
            it is assumed to be 1 row of exogenous variables. If you only have
one regressor and would like to do prediction, you must provide
a 2d array with shape[1] == 1.
        which : {'mean', 'linear', 'var'}, optional
            Statistic to predict. Default is 'mean'.
            - 'mean' returns the conditional expectation of endog E(y | x),
              i.e. the predicted probability of each choice.
- 'linear' returns the linear predictor of the mean function.
- 'var' returns the estimated variance of endog implied by the
model.
            .. versionadded:: 0.14
               ``which`` replaces and extends the deprecated ``linear``
               argument.
        linear : bool
            If True, returns the linear predicted values. If False or None,
            then the statistic specified by ``which`` will be returned.
            .. deprecated:: 0.14
               The ``linear`` keyword is deprecated and will be removed,
               use the ``which`` keyword instead.
Notes
-----
Column 0 is the base case, the rest conform to the rows of params
shifted up one for the base case.
"""
if linear is not None:
msg = 'linear keyword is deprecated, use which="linear"'
warnings.warn(msg, FutureWarning)
if linear is True:
which = "linear"
if exog is None: # do here to accommodate user-given exog
exog = self.exog
if exog.ndim == 1:
exog = exog[None]
pred = super().predict(params, exog, which=which)
if which == "linear":
pred = np.column_stack((np.zeros(len(exog)), pred))
return pred | Predict response variable of a model given exogenous variables.
Parameters
----------
params : array_like
2d array of fitted parameters of the model. Should be in the
order returned from the model.
exog : array_like
1d or 2d array of exogenous values. If not supplied, the
whole exog attribute of the model is used. If a 1d array is given
it assumed to be 1 row of exogenous variables. If you only have
one regressor and would like to do prediction, you must provide
a 2d array with shape[1] == 1.
        which : {'mean', 'linear', 'var'}, optional
            Statistic to predict. Default is 'mean'.
            - 'mean' returns the conditional expectation of endog E(y | x),
              i.e. the predicted probability of each choice.
- 'linear' returns the linear predictor of the mean function.
- 'var' returns the estimated variance of endog implied by the
model.
            .. versionadded:: 0.14
               ``which`` replaces and extends the deprecated ``linear``
               argument.
        linear : bool
            If True, returns the linear predicted values. If False or None,
            then the statistic specified by ``which`` will be returned.
            .. deprecated:: 0.14
               The ``linear`` keyword is deprecated and will be removed,
               use the ``which`` keyword instead.
Notes
-----
Column 0 is the base case, the rest conform to the rows of params
shifted up one for the base case. | predict | python | statsmodels/statsmodels | statsmodels/discrete/discrete_model.py | https://github.com/statsmodels/statsmodels/blob/master/statsmodels/discrete/discrete_model.py | BSD-3-Clause |
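A sketch of multinomial prediction showing the base-category handling described in the Notes (synthetic three-choice data):

import numpy as np
import statsmodels.api as sm

rng = np.random.default_rng(3)
x = sm.add_constant(rng.normal(size=(300, 2)))
y = rng.integers(0, 3, size=300)
res = sm.MNLogit(y, x).fit(disp=0)

probs = res.model.predict(res.params)               # (nobs, J), rows sum to 1
xb = res.model.predict(res.params, which="linear")  # column 0 is the base case
assert np.allclose(probs.sum(1), 1.0) and np.allclose(xb[:, 0], 0.0)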
def _derivative_predict(self, params, exog=None, transform='dydx'):
"""
For computing marginal effects standard errors.
This is used only in the case of discrete and count regressors to
get the variance-covariance of the marginal effects. It returns
[d F / d params] where F is the predicted probabilities for each
        choice. dFdparams is of shape nobs x J x (J-1)*K.
The zero derivatives for the base category are not included.
Transform can be 'dydx' or 'eydx'. Checking is done in margeff
computations for appropriate transform.
"""
if exog is None:
exog = self.exog
        if params.ndim == 1:  # will get flattened from approx_fprime
params = params.reshape(self.K, self.J-1, order='F')
eXB = np.exp(np.dot(exog, params))
sum_eXB = (1 + eXB.sum(1))[:,None]
J = int(self.J)
K = int(self.K)
        # repeat each column K times so the shape (nobs, (J-1)*K) lines up
        # with the tiled exog and the kron mask below
        repeat_eXB = np.repeat(eXB, K, axis=1)
X = np.tile(exog, J-1)
# this is the derivative wrt the base level
F0 = -repeat_eXB * X / sum_eXB ** 2
# this is the derivative wrt the other levels when
# dF_j / dParams_j (ie., own equation)
#NOTE: this computes too much, any easy way to cut down?
F1 = eXB.T[:,:,None]*X * (sum_eXB - repeat_eXB) / (sum_eXB**2)
F1 = F1.transpose((1,0,2)) # put the nobs index first
# other equation index
other_idx = ~np.kron(np.eye(J-1), np.ones(K)).astype(bool)
        F1[:, other_idx] = (-eXB.T[:, :, None] * X * repeat_eXB /
                            (sum_eXB**2)).transpose((1, 0, 2))[:, other_idx]
dFdX = np.concatenate((F0[:, None,:], F1), axis=1)
if 'ey' in transform:
dFdX /= self.predict(params, exog)[:, :, None]
return dFdX | For computing marginal effects standard errors.
This is used only in the case of discrete and count regressors to
get the variance-covariance of the marginal effects. It returns
[d F / d params] where F is the predicted probabilities for each
        choice. dFdparams is of shape nobs x J x (J-1)*K.
The zero derivatives for the base category are not included.
Transform can be 'dydx' or 'eydx'. Checking is done in margeff
computations for appropriate transform. | _derivative_predict | python | statsmodels/statsmodels | statsmodels/discrete/discrete_model.py | https://github.com/statsmodels/statsmodels/blob/master/statsmodels/discrete/discrete_model.py | BSD-3-Clause |
def _derivative_exog(self, params, exog=None, transform='dydx',
dummy_idx=None, count_idx=None):
"""
For computing marginal effects returns dF(XB) / dX where F(.) is
the predicted probabilities
transform can be 'dydx', 'dyex', 'eydx', or 'eyex'.
Not all of these make sense in the presence of discrete regressors,
but checks are done in the results in get_margeff.
For Multinomial models the marginal effects are
P[j] * (params[j] - sum_k P[k]*params[k])
It is returned unshaped, so that each row contains each of the J
equations. This makes it easier to take derivatives of this for
standard errors. If you want average marginal effects you can do
        margeff.reshape(nobs, K, J, order='F').mean(0) and the marginal effects
for choice J are in column J
"""
J = int(self.J) # number of alternative choices
K = int(self.K) # number of variables
# Note: this form should be appropriate for
# group 1 probit, logit, logistic, cloglog, heckprob, xtprobit
if exog is None:
exog = self.exog
        if params.ndim == 1:  # will get flattened from approx_fprime
params = params.reshape(K, J-1, order='F')
zeroparams = np.c_[np.zeros(K), params] # add base in
cdf = self.cdf(np.dot(exog, params))
# TODO: meaningful interpretation for `iterm`?
iterm = np.array([cdf[:, [i]] * zeroparams[:, i]
for i in range(int(J))]).sum(0)
margeff = np.array([cdf[:, [j]] * (zeroparams[:, j] - iterm)
for j in range(J)])
# swap the axes to make sure margeff are in order nobs, K, J
margeff = np.transpose(margeff, (1, 2, 0))
if 'ex' in transform:
margeff *= exog
if 'ey' in transform:
margeff /= self.predict(params, exog)[:,None,:]
margeff = self._derivative_exog_helper(margeff, params, exog,
dummy_idx, count_idx, transform)
return margeff.reshape(len(exog), -1, order='F') | For computing marginal effects returns dF(XB) / dX where F(.) is
the predicted probabilities
transform can be 'dydx', 'dyex', 'eydx', or 'eyex'.
Not all of these make sense in the presence of discrete regressors,
but checks are done in the results in get_margeff.
For Multinomial models the marginal effects are
P[j] * (params[j] - sum_k P[k]*params[k])
It is returned unshaped, so that each row contains each of the J
equations. This makes it easier to take derivatives of this for
standard errors. If you want average marginal effects you can do
        margeff.reshape(nobs, K, J, order='F').mean(0) and the marginal effects
for choice J are in column J | _derivative_exog | python | statsmodels/statsmodels | statsmodels/discrete/discrete_model.py | https://github.com/statsmodels/statsmodels/blob/master/statsmodels/discrete/discrete_model.py | BSD-3-Clause |
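A sketch of the reshaping recipe from the docstring; because the choice probabilities sum to one, the dydx effects for each variable sum to zero across choices (``_derivative_exog`` is private API, data is synthetic):

import numpy as np
import statsmodels.api as sm

rng = np.random.default_rng(4)
x = sm.add_constant(rng.normal(size=(300, 2)))
y = rng.integers(0, 3, size=300)
res = sm.MNLogit(y, x).fit(disp=0)

me = res.model._derivative_exog(res.params)        # (nobs, K*J), order='F'
K, J = res.model.K, res.model.J
ame = me.reshape(len(x), K, J, order='F').mean(0)  # average marginal effects
assert np.allclose(ame.sum(1), 0.0)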
def get_distribution(self, params, exog=None, offset=None):
"""get frozen instance of distribution
"""
raise NotImplementedError | get frozen instance of distribution | get_distribution | python | statsmodels/statsmodels | statsmodels/discrete/discrete_model.py | https://github.com/statsmodels/statsmodels/blob/master/statsmodels/discrete/discrete_model.py | BSD-3-Clause |
def predict(self, params, exog=None, exposure=None, offset=None,
which='mean', linear=None):
"""
Predict response variable of a count model given exogenous variables
Parameters
----------
params : array_like
Model parameters
exog : array_like, optional
            Design / exogenous data. If exog is None, model exog is used.
exposure : array_like, optional
Log(exposure) is added to the linear prediction with
coefficient equal to 1. If exposure is not provided and exog
is None, uses the model's exposure if present. If not, uses
0 as the default value.
offset : array_like, optional
Offset is added to the linear prediction with coefficient
equal to 1. If offset is not provided and exog
is None, uses the model's offset if present. If not, uses
0 as the default value.
        which : {'mean', 'linear', 'var', 'prob'}, optional
            Statistic to predict. Default is 'mean'.
- 'mean' returns the conditional expectation of endog E(y | x),
i.e. exp of linear predictor.
- 'linear' returns the linear predictor of the mean function.
- 'var' variance of endog implied by the likelihood model
- 'prob' predicted probabilities for counts.
            .. versionadded:: 0.14
               ``which`` replaces and extends the deprecated ``linear``
               argument.
        linear : bool
            If True, returns the linear predicted values. If False or None,
            then the statistic specified by ``which`` will be returned.
            .. deprecated:: 0.14
               The ``linear`` keyword is deprecated and will be removed,
               use the ``which`` keyword instead.
Notes
-----
If exposure is specified, then it will be logged by the method.
The user does not need to log it first.
"""
if linear is not None:
msg = 'linear keyword is deprecated, use which="linear"'
warnings.warn(msg, FutureWarning)
if linear is True:
which = "linear"
# the following is copied from GLM predict (without family/link check)
# Use fit offset if appropriate
if offset is None and exog is None and hasattr(self, 'offset'):
offset = self.offset
elif offset is None:
offset = 0.
# Use fit exposure if appropriate
if exposure is None and exog is None and hasattr(self, 'exposure'):
# Already logged
exposure = self.exposure
elif exposure is None:
exposure = 0.
else:
exposure = np.log(exposure)
if exog is None:
exog = self.exog
fitted = np.dot(exog, params[:exog.shape[1]])
linpred = fitted + exposure + offset
if which == "mean":
return np.exp(linpred)
elif which.startswith("lin"):
return linpred
else:
            raise ValueError('keyword `which` has to be "mean" or "linear"')
Parameters
----------
params : array_like
Model parameters
exog : array_like, optional
            Design / exogenous data. If exog is None, model exog is used.
exposure : array_like, optional
Log(exposure) is added to the linear prediction with
coefficient equal to 1. If exposure is not provided and exog
is None, uses the model's exposure if present. If not, uses
0 as the default value.
offset : array_like, optional
Offset is added to the linear prediction with coefficient
equal to 1. If offset is not provided and exog
is None, uses the model's offset if present. If not, uses
0 as the default value.
        which : {'mean', 'linear', 'var', 'prob'}, optional
            Statistic to predict. Default is 'mean'.
- 'mean' returns the conditional expectation of endog E(y | x),
i.e. exp of linear predictor.
- 'linear' returns the linear predictor of the mean function.
- 'var' variance of endog implied by the likelihood model
- 'prob' predicted probabilities for counts.
            .. versionadded:: 0.14
               ``which`` replaces and extends the deprecated ``linear``
               argument.
        linear : bool
            If True, returns the linear predicted values. If False or None,
            then the statistic specified by ``which`` will be returned.
            .. deprecated:: 0.14
               The ``linear`` keyword is deprecated and will be removed,
               use the ``which`` keyword instead.
Notes
-----
If exposure is specified, then it will be logged by the method.
The user does not need to log it first. | predict | python | statsmodels/statsmodels | statsmodels/discrete/discrete_model.py | https://github.com/statsmodels/statsmodels/blob/master/statsmodels/discrete/discrete_model.py | BSD-3-Clause |
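A sketch of how exposure enters the count-model prediction: it is logged and added to the linear predictor with coefficient one (synthetic data):

import numpy as np
import statsmodels.api as sm

rng = np.random.default_rng(5)
x = sm.add_constant(rng.normal(size=(200, 1)))
expo = rng.uniform(0.5, 2.0, size=200)
y = rng.poisson(expo * np.exp(x @ [0.2, 0.5]))
res = sm.Poisson(y, x, exposure=expo).fit(disp=0)

mu = res.model.predict(res.params, exog=x, exposure=expo)
xb = res.model.predict(res.params, exog=x, exposure=expo, which="linear")
assert np.allclose(mu, np.exp(xb))   # mean is exp(X @ beta + log(exposure))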
def _derivative_exog(self, params, exog=None, transform="dydx",
dummy_idx=None, count_idx=None):
"""
For computing marginal effects. These are the marginal effects
d F(XB) / dX
For the Poisson model F(XB) is the predicted counts rather than
the probabilities.
transform can be 'dydx', 'dyex', 'eydx', or 'eyex'.
Not all of these make sense in the presence of discrete regressors,
but checks are done in the results in get_margeff.
"""
# group 3 poisson, nbreg, zip, zinb
if exog is None:
exog = self.exog
k_extra = getattr(self, 'k_extra', 0)
params_exog = params if k_extra == 0 else params[:-k_extra]
margeff = self.predict(params, exog)[:,None] * params_exog[None,:]
if 'ex' in transform:
margeff *= exog
if 'ey' in transform:
margeff /= self.predict(params, exog)[:,None]
return self._derivative_exog_helper(margeff, params, exog,
dummy_idx, count_idx, transform) | For computing marginal effects. These are the marginal effects
d F(XB) / dX
For the Poisson model F(XB) is the predicted counts rather than
the probabilities.
transform can be 'dydx', 'dyex', 'eydx', or 'eyex'.
Not all of these make sense in the presence of discrete regressors,
but checks are done in the results in get_margeff. | _derivative_exog | python | statsmodels/statsmodels | statsmodels/discrete/discrete_model.py | https://github.com/statsmodels/statsmodels/blob/master/statsmodels/discrete/discrete_model.py | BSD-3-Clause |
def cdf(self, X):
"""
Poisson model cumulative distribution function
Parameters
----------
X : array_like
`X` is the linear predictor of the model. See notes.
Returns
-------
The value of the Poisson CDF at each point.
Notes
-----
The CDF is defined as
.. math:: \\exp\\left(-\\lambda\\right)\\sum_{i=0}^{y}\\frac{\\lambda^{i}}{i!}
where :math:`\\lambda` assumes the loglinear model. I.e.,
.. math:: \\ln\\lambda_{i}=X\\beta
The parameter `X` is :math:`X\\beta` in the above formula.
"""
y = self.endog
return stats.poisson.cdf(y, np.exp(X)) | Poisson model cumulative distribution function
Parameters
----------
X : array_like
`X` is the linear predictor of the model. See notes.
Returns
-------
The value of the Poisson CDF at each point.
Notes
-----
The CDF is defined as
.. math:: \\exp\\left(-\\lambda\\right)\\sum_{i=0}^{y}\\frac{\\lambda^{i}}{i!}
where :math:`\\lambda` assumes the loglinear model. I.e.,
.. math:: \\ln\\lambda_{i}=X\\beta
The parameter `X` is :math:`X\\beta` in the above formula. | cdf | python | statsmodels/statsmodels | statsmodels/discrete/discrete_model.py | https://github.com/statsmodels/statsmodels/blob/master/statsmodels/discrete/discrete_model.py | BSD-3-Clause |
def pdf(self, X):
"""
Poisson model probability mass function
Parameters
----------
X : array_like
`X` is the linear predictor of the model. See notes.
Returns
-------
pdf : ndarray
The value of the Poisson probability mass function, PMF, for each
point of X.
Notes
-----
The PMF is defined as
.. math:: \\frac{e^{-\\lambda_{i}}\\lambda_{i}^{y_{i}}}{y_{i}!}
where :math:`\\lambda` assumes the loglinear model. I.e.,
.. math:: \\ln\\lambda_{i}=x_{i}\\beta
The parameter `X` is :math:`x_{i}\\beta` in the above formula.
"""
y = self.endog
return np.exp(stats.poisson.logpmf(y, np.exp(X))) | Poisson model probability mass function
Parameters
----------
X : array_like
`X` is the linear predictor of the model. See notes.
Returns
-------
pdf : ndarray
The value of the Poisson probability mass function, PMF, for each
point of X.
Notes
-----
The PMF is defined as
.. math:: \\frac{e^{-\\lambda_{i}}\\lambda_{i}^{y_{i}}}{y_{i}!}
where :math:`\\lambda` assumes the loglinear model. I.e.,
.. math:: \\ln\\lambda_{i}=x_{i}\\beta
The parameter `X` is :math:`x_{i}\\beta` in the above formula. | pdf | python | statsmodels/statsmodels | statsmodels/discrete/discrete_model.py | https://github.com/statsmodels/statsmodels/blob/master/statsmodels/discrete/discrete_model.py | BSD-3-Clause |
def loglike(self, params):
"""
Loglikelihood of Poisson model
Parameters
----------
params : array_like
The parameters of the model.
Returns
-------
loglike : float
The log-likelihood function of the model evaluated at `params`.
See notes.
Notes
-----
.. math:: \\ln L=\\sum_{i=1}^{n}\\left[-\\lambda_{i}+y_{i}x_{i}^{\\prime}\\beta-\\ln y_{i}!\\right]
"""
offset = getattr(self, "offset", 0)
exposure = getattr(self, "exposure", 0)
XB = np.dot(self.exog, params) + offset + exposure
endog = self.endog
return np.sum(
-np.exp(np.clip(XB, None, EXP_UPPER_LIMIT))
+ endog * XB
- gammaln(endog + 1)
) | Loglikelihood of Poisson model
Parameters
----------
params : array_like
The parameters of the model.
Returns
-------
loglike : float
The log-likelihood function of the model evaluated at `params`.
See notes.
Notes
-----
.. math:: \\ln L=\\sum_{i=1}^{n}\\left[-\\lambda_{i}+y_{i}x_{i}^{\\prime}\\beta-\\ln y_{i}!\\right] | loglike | python | statsmodels/statsmodels | statsmodels/discrete/discrete_model.py | https://github.com/statsmodels/statsmodels/blob/master/statsmodels/discrete/discrete_model.py | BSD-3-Clause |
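A numerical check of the closed-form loglikelihood against the sum of Poisson log-pmfs (synthetic data; assumes scipy is available):

import numpy as np
from scipy import stats
import statsmodels.api as sm

rng = np.random.default_rng(6)
x = sm.add_constant(rng.normal(size=(200, 1)))
y = rng.poisson(np.exp(x @ [0.3, 0.4]))
model = sm.Poisson(y, x)
res = model.fit(disp=0)

lam = np.exp(x @ res.params)
assert np.isclose(model.loglike(res.params), stats.poisson.logpmf(y, lam).sum())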
def loglikeobs(self, params):
"""
Loglikelihood for observations of Poisson model
Parameters
----------
params : array_like
The parameters of the model.
Returns
-------
loglike : array_like
The log likelihood for each observation of the model evaluated
at `params`. See Notes
Notes
-----
.. math:: \\ln L_{i}=\\left[-\\lambda_{i}+y_{i}x_{i}^{\\prime}\\beta-\\ln y_{i}!\\right]
for observations :math:`i=1,...,n`
"""
offset = getattr(self, "offset", 0)
exposure = getattr(self, "exposure", 0)
XB = np.dot(self.exog, params) + offset + exposure
endog = self.endog
#np.sum(stats.poisson.logpmf(endog, np.exp(XB)))
return -np.exp(XB) + endog*XB - gammaln(endog+1) | Loglikelihood for observations of Poisson model
Parameters
----------
params : array_like
The parameters of the model.
Returns
-------
loglike : array_like
The log likelihood for each observation of the model evaluated
at `params`. See Notes
Notes
-----
.. math:: \\ln L_{i}=\\left[-\\lambda_{i}+y_{i}x_{i}^{\\prime}\\beta-\\ln y_{i}!\\right]
for observations :math:`i=1,...,n` | loglikeobs | python | statsmodels/statsmodels | statsmodels/discrete/discrete_model.py | https://github.com/statsmodels/statsmodels/blob/master/statsmodels/discrete/discrete_model.py | BSD-3-Clause |
def fit_constrained(self, constraints, start_params=None, **fit_kwds):
"""fit the model subject to linear equality constraints
The constraints are of the form `R params = q`
where R is the constraint_matrix and q is the vector of
constraint_values.
The estimation creates a new model with transformed design matrix,
exog, and converts the results back to the original parameterization.
Parameters
----------
constraints : formula expression or tuple
If it is a tuple, then the constraint needs to be given by two
arrays (constraint_matrix, constraint_value), i.e. (R, q).
Otherwise, the constraints can be given as strings or list of
strings.
see t_test for details
start_params : None or array_like
            starting values for the optimization. `start_params` needs to be
            given in the original parameter space and is internally
            transformed.
**fit_kwds : keyword arguments
fit_kwds are used in the optimization of the transformed model.
Returns
-------
results : Results instance
"""
#constraints = (R, q)
# TODO: temporary trailing underscore to not overwrite the monkey
# patched version
# TODO: decide whether to move the imports
from statsmodels.base._constraints import (
LinearConstraints,
fit_constrained,
)
# same pattern as in base.LikelihoodModel.t_test
from statsmodels.formula._manager import FormulaManager
mgr = FormulaManager()
lc = mgr.get_linear_constraints(constraints, self.exog_names)
R, q = lc.constraint_matrix, lc.constraint_values
        # TODO: add start_params option, need access to transformation
# fit_constrained needs to do the transformation
params, cov, res_constr = fit_constrained(self, R, q,
start_params=start_params,
fit_kwds=fit_kwds)
#create dummy results Instance, TODO: wire up properly
res = self.fit(maxiter=0, method='nm', disp=0,
warn_convergence=False) # we get a wrapper back
res.mle_retvals['fcall'] = res_constr.mle_retvals.get('fcall', np.nan)
res.mle_retvals['iterations'] = res_constr.mle_retvals.get(
'iterations', np.nan)
res.mle_retvals['converged'] = res_constr.mle_retvals['converged']
res._results.params = params
res._results.cov_params_default = cov
cov_type = fit_kwds.get('cov_type', 'nonrobust')
if cov_type != 'nonrobust':
res._results.normalized_cov_params = cov # assume scale=1
else:
res._results.normalized_cov_params = None
k_constr = len(q)
res._results.df_resid += k_constr
res._results.df_model -= k_constr
res._results.constraints = LinearConstraints.from_formula_parser(lc)
res._results.k_constr = k_constr
res._results.results_constrained = res_constr
return res | fit the model subject to linear equality constraints
The constraints are of the form `R params = q`
where R is the constraint_matrix and q is the vector of
constraint_values.
The estimation creates a new model with transformed design matrix,
exog, and converts the results back to the original parameterization.
Parameters
----------
constraints : formula expression or tuple
If it is a tuple, then the constraint needs to be given by two
arrays (constraint_matrix, constraint_value), i.e. (R, q).
Otherwise, the constraints can be given as strings or list of
strings.
see t_test for details
start_params : None or array_like
            starting values for the optimization. `start_params` needs to be
            given in the original parameter space and is internally
            transformed.
**fit_kwds : keyword arguments
fit_kwds are used in the optimization of the transformed model.
Returns
-------
results : Results instance | fit_constrained | python | statsmodels/statsmodels | statsmodels/discrete/discrete_model.py | https://github.com/statsmodels/statsmodels/blob/master/statsmodels/discrete/discrete_model.py | BSD-3-Clause |
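A sketch of an equality constraint of the form `R params = q`; the string form below relies on the default column names (`x1`, `x2`) that statsmodels assigns to unnamed numpy exog:

import numpy as np
import statsmodels.api as sm

rng = np.random.default_rng(7)
x = sm.add_constant(rng.normal(size=(200, 2)))
y = rng.poisson(np.exp(x @ [0.2, 0.3, 0.3]))

# constrain the two slope coefficients to be equal
res = sm.Poisson(y, x).fit_constrained("x1 = x2")
assert np.isclose(res.params[1], res.params[2])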
def score(self, params):
"""
Poisson model score (gradient) vector of the log-likelihood
Parameters
----------
params : array_like
The parameters of the model
Returns
-------
score : ndarray, 1-D
The score vector of the model, i.e. the first derivative of the
loglikelihood function, evaluated at `params`
Notes
-----
.. math:: \\frac{\\partial\\ln L}{\\partial\\beta}=\\sum_{i=1}^{n}\\left(y_{i}-\\lambda_{i}\\right)x_{i}
where the loglinear model is assumed
.. math:: \\ln\\lambda_{i}=x_{i}\\beta
"""
offset = getattr(self, "offset", 0)
exposure = getattr(self, "exposure", 0)
X = self.exog
L = np.exp(np.dot(X,params) + offset + exposure)
return np.dot(self.endog - L, X) | Poisson model score (gradient) vector of the log-likelihood
Parameters
----------
params : array_like
The parameters of the model
Returns
-------
score : ndarray, 1-D
The score vector of the model, i.e. the first derivative of the
loglikelihood function, evaluated at `params`
Notes
-----
.. math:: \\frac{\\partial\\ln L}{\\partial\\beta}=\\sum_{i=1}^{n}\\left(y_{i}-\\lambda_{i}\\right)x_{i}
where the loglinear model is assumed
.. math:: \\ln\\lambda_{i}=x_{i}\\beta | score | python | statsmodels/statsmodels | statsmodels/discrete/discrete_model.py | https://github.com/statsmodels/statsmodels/blob/master/statsmodels/discrete/discrete_model.py | BSD-3-Clause |
def score_obs(self, params):
"""
Poisson model Jacobian of the log-likelihood for each observation
Parameters
----------
params : array_like
The parameters of the model
Returns
-------
score : array_like
The score vector (nobs, k_vars) of the model evaluated at `params`
Notes
-----
.. math:: \\frac{\\partial\\ln L_{i}}{\\partial\\beta}=\\left(y_{i}-\\lambda_{i}\\right)x_{i}
for observations :math:`i=1,...,n`
where the loglinear model is assumed
.. math:: \\ln\\lambda_{i}=x_{i}\\beta
"""
offset = getattr(self, "offset", 0)
exposure = getattr(self, "exposure", 0)
X = self.exog
L = np.exp(np.dot(X,params) + offset + exposure)
return (self.endog - L)[:,None] * X | Poisson model Jacobian of the log-likelihood for each observation
Parameters
----------
params : array_like
The parameters of the model
Returns
-------
score : array_like
The score vector (nobs, k_vars) of the model evaluated at `params`
Notes
-----
.. math:: \\frac{\\partial\\ln L_{i}}{\\partial\\beta}=\\left(y_{i}-\\lambda_{i}\\right)x_{i}
for observations :math:`i=1,...,n`
where the loglinear model is assumed
.. math:: \\ln\\lambda_{i}=x_{i}\\beta | score_obs | python | statsmodels/statsmodels | statsmodels/discrete/discrete_model.py | https://github.com/statsmodels/statsmodels/blob/master/statsmodels/discrete/discrete_model.py | BSD-3-Clause |
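A sketch checking that the per-observation scores sum to the score vector and agree with a numerical gradient of ``loglike`` (synthetic data; ``beta`` is an arbitrary evaluation point):

import numpy as np
import statsmodels.api as sm
from statsmodels.tools.numdiff import approx_fprime

rng = np.random.default_rng(8)
x = sm.add_constant(rng.normal(size=(150, 1)))
y = rng.poisson(np.exp(x @ [0.1, 0.2]))
model = sm.Poisson(y, x)
beta = np.array([0.05, 0.1])

assert np.allclose(model.score_obs(beta).sum(0), model.score(beta))
assert np.allclose(model.score(beta),
                   approx_fprime(beta, model.loglike, centered=True))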
def score_factor(self, params):
"""
Poisson model score_factor for each observation
Parameters
----------
params : array_like
The parameters of the model
Returns
-------
score : array_like
The score factor (nobs, ) of the model evaluated at `params`
Notes
-----
.. math:: \\frac{\\partial\\ln L_{i}}{\\partial\\beta}=\\left(y_{i}-\\lambda_{i}\\right)
for observations :math:`i=1,...,n`
where the loglinear model is assumed
.. math:: \\ln\\lambda_{i}=x_{i}\\beta
"""
offset = getattr(self, "offset", 0)
exposure = getattr(self, "exposure", 0)
X = self.exog
L = np.exp(np.dot(X,params) + offset + exposure)
return (self.endog - L) | Poisson model score_factor for each observation
Parameters
----------
params : array_like
The parameters of the model
Returns
-------
score : array_like
The score factor (nobs, ) of the model evaluated at `params`
Notes
-----
.. math:: \\frac{\\partial\\ln L_{i}}{\\partial\\beta}=\\left(y_{i}-\\lambda_{i}\\right)
for observations :math:`i=1,...,n`
where the loglinear model is assumed
.. math:: \\ln\\lambda_{i}=x_{i}\\beta | score_factor | python | statsmodels/statsmodels | statsmodels/discrete/discrete_model.py | https://github.com/statsmodels/statsmodels/blob/master/statsmodels/discrete/discrete_model.py | BSD-3-Clause |
def hessian(self, params):
"""
Poisson model Hessian matrix of the loglikelihood
Parameters
----------
params : array_like
The parameters of the model
Returns
-------
hess : ndarray, (k_vars, k_vars)
The Hessian, second derivative of loglikelihood function,
evaluated at `params`
Notes
-----
.. math:: \\frac{\\partial^{2}\\ln L}{\\partial\\beta\\partial\\beta^{\\prime}}=-\\sum_{i=1}^{n}\\lambda_{i}x_{i}x_{i}^{\\prime}
where the loglinear model is assumed
.. math:: \\ln\\lambda_{i}=x_{i}\\beta
"""
offset = getattr(self, "offset", 0)
exposure = getattr(self, "exposure", 0)
X = self.exog
L = np.exp(np.dot(X,params) + exposure + offset)
return -np.dot(L*X.T, X) | Poisson model Hessian matrix of the loglikelihood
Parameters
----------
params : array_like
The parameters of the model
Returns
-------
hess : ndarray, (k_vars, k_vars)
The Hessian, second derivative of loglikelihood function,
evaluated at `params`
Notes
-----
.. math:: \\frac{\\partial^{2}\\ln L}{\\partial\\beta\\partial\\beta^{\\prime}}=-\\sum_{i=1}^{n}\\lambda_{i}x_{i}x_{i}^{\\prime}
where the loglinear model is assumed
.. math:: \\ln\\lambda_{i}=x_{i}\\beta | hessian | python | statsmodels/statsmodels | statsmodels/discrete/discrete_model.py | https://github.com/statsmodels/statsmodels/blob/master/statsmodels/discrete/discrete_model.py | BSD-3-Clause |
def hessian_factor(self, params):
"""
Poisson model Hessian factor
Parameters
----------
params : array_like
The parameters of the model
Returns
-------
hess : ndarray, (nobs,)
The Hessian factor, second derivative of loglikelihood function
with respect to the linear predictor evaluated at `params`
Notes
-----
        .. math:: \\frac{\\partial^{2}\\ln L_{i}}{\\partial\\left(x_{i}\\beta\\right)^{2}}=-\\lambda_{i}
where the loglinear model is assumed
.. math:: \\ln\\lambda_{i}=x_{i}\\beta
"""
offset = getattr(self, "offset", 0)
exposure = getattr(self, "exposure", 0)
X = self.exog
L = np.exp(np.dot(X,params) + exposure + offset)
return -L | Poisson model Hessian factor
Parameters
----------
params : array_like
The parameters of the model
Returns
-------
hess : ndarray, (nobs,)
The Hessian factor, second derivative of loglikelihood function
with respect to the linear predictor evaluated at `params`
Notes
-----
        .. math:: \\frac{\\partial^{2}\\ln L_{i}}{\\partial\\left(x_{i}\\beta\\right)^{2}}=-\\lambda_{i}
where the loglinear model is assumed
.. math:: \\ln\\lambda_{i}=x_{i}\\beta | hessian_factor | python | statsmodels/statsmodels | statsmodels/discrete/discrete_model.py | https://github.com/statsmodels/statsmodels/blob/master/statsmodels/discrete/discrete_model.py | BSD-3-Clause |
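A sketch relating the Hessian factor to the full Hessian: weighting the outer product of the regressors by the factor reproduces ``hessian`` (synthetic data; ``beta`` is arbitrary):

import numpy as np
import statsmodels.api as sm

rng = np.random.default_rng(9)
x = sm.add_constant(rng.normal(size=(120, 1)))
y = rng.poisson(np.exp(x @ [0.2, 0.1]))
model = sm.Poisson(y, x)
beta = np.array([0.15, 0.05])

h_factor = model.hessian_factor(beta)          # -lambda_i, shape (nobs,)
assert np.allclose((x * h_factor[:, None]).T @ x, model.hessian(beta))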
def _deriv_score_obs_dendog(self, params, scale=None):
"""derivative of score_obs w.r.t. endog
Parameters
----------
params : ndarray
parameter at which score is evaluated
scale : None or float
If scale is None, then the default scale will be calculated.
Default scale is defined by `self.scaletype` and set in fit.
If scale is not None, then it is used as a fixed scale.
Returns
-------
derivative : ndarray_2d
The derivative of the score_obs with respect to endog. This
can is given by `score_factor0[:, None] * exog` where
`score_factor0` is the score_factor without the residual.
"""
return self.exog | derivative of score_obs w.r.t. endog
Parameters
----------
params : ndarray
parameter at which score is evaluated
scale : None or float
If scale is None, then the default scale will be calculated.
Default scale is defined by `self.scaletype` and set in fit.
If scale is not None, then it is used as a fixed scale.
Returns
-------
derivative : ndarray_2d
            The derivative of the score_obs with respect to endog. This
            is given by `score_factor0[:, None] * exog` where
`score_factor0` is the score_factor without the residual. | _deriv_score_obs_dendog | python | statsmodels/statsmodels | statsmodels/discrete/discrete_model.py | https://github.com/statsmodels/statsmodels/blob/master/statsmodels/discrete/discrete_model.py | BSD-3-Clause |
def predict(self, params, exog=None, exposure=None, offset=None,
which='mean', linear=None, y_values=None):
"""
Predict response variable of a model given exogenous variables.
Parameters
----------
params : array_like
2d array of fitted parameters of the model. Should be in the
order returned from the model.
exog : array_like, optional
1d or 2d array of exogenous values. If not supplied, then the
exog attribute of the model is used. If a 1d array is given
            it is assumed to be 1 row of exogenous variables. If you only have
one regressor and would like to do prediction, you must provide
a 2d array with shape[1] == 1.
offset : array_like, optional
Offset is added to the linear predictor with coefficient equal
to 1.
Default is zero if exog is not None, and the model offset if exog
is None.
exposure : array_like, optional
Log(exposure) is added to the linear prediction with coefficient
equal to 1.
            Default is one if exog is not None, and is the model exposure
if exog is None.
        which : {'mean', 'linear', 'var', 'prob'}, optional
            Statistic to predict. Default is 'mean'.
- 'mean' returns the conditional expectation of endog E(y | x),
i.e. exp of linear predictor.
- 'linear' returns the linear predictor of the mean function.
- 'var' returns the estimated variance of endog implied by the
model.
- 'prob' return probabilities for counts from 0 to max(endog) or
for y_values if those are provided.
            .. versionadded:: 0.14
               ``which`` replaces and extends the deprecated ``linear``
               argument.
        linear : bool
            If True, returns the linear predicted values. If False or None,
            then the statistic specified by ``which`` will be returned.
            .. deprecated:: 0.14
               The ``linear`` keyword is deprecated and will be removed,
               use the ``which`` keyword instead.
y_values : array_like
Values of the random variable endog at which pmf is evaluated.
Only used if ``which="prob"``
"""
# Note docstring is reused by other count models
if linear is not None:
msg = 'linear keyword is deprecated, use which="linear"'
warnings.warn(msg, FutureWarning)
if linear is True:
which = "linear"
if which.startswith("lin"):
which = "linear"
if which in ["mean", "linear"]:
return super().predict(params, exog=exog, exposure=exposure,
offset=offset,
which=which, linear=linear)
# TODO: add full set of which
        elif which == "var":
            # for the Poisson model the variance equals the mean, see _var
            mu = self.predict(params, exog=exog,
exposure=exposure, offset=offset,
)
return mu
elif which == "prob":
if y_values is not None:
y_values = np.atleast_2d(y_values)
else:
y_values = np.atleast_2d(
np.arange(0, np.max(self.endog) + 1))
mu = self.predict(params, exog=exog,
exposure=exposure, offset=offset,
)[:, None]
# uses broadcasting
return stats.poisson._pmf(y_values, mu)
else:
raise ValueError('Value of the `which` option is not recognized') | Predict response variable of a model given exogenous variables.
Parameters
----------
params : array_like
2d array of fitted parameters of the model. Should be in the
order returned from the model.
exog : array_like, optional
1d or 2d array of exogenous values. If not supplied, then the
exog attribute of the model is used. If a 1d array is given
it assumed to be 1 row of exogenous variables. If you only have
one regressor and would like to do prediction, you must provide
a 2d array with shape[1] == 1.
offset : array_like, optional
Offset is added to the linear predictor with coefficient equal
to 1.
Default is zero if exog is not None, and the model offset if exog
is None.
exposure : array_like, optional
Log(exposure) is added to the linear prediction with coefficient
equal to 1.
            Default is one if exog is not None, and is the model exposure
if exog is None.
        which : {'mean', 'linear', 'var', 'prob'}, optional
            Statistic to predict. Default is 'mean'.
- 'mean' returns the conditional expectation of endog E(y | x),
i.e. exp of linear predictor.
- 'linear' returns the linear predictor of the mean function.
- 'var' returns the estimated variance of endog implied by the
model.
- 'prob' return probabilities for counts from 0 to max(endog) or
for y_values if those are provided.
            .. versionadded:: 0.14
               ``which`` replaces and extends the deprecated ``linear``
               argument.
        linear : bool
            If True, returns the linear predicted values. If False or None,
            then the statistic specified by ``which`` will be returned.
            .. deprecated:: 0.14
               The ``linear`` keyword is deprecated and will be removed,
               use the ``which`` keyword instead.
y_values : array_like
Values of the random variable endog at which pmf is evaluated.
Only used if ``which="prob"`` | predict | python | statsmodels/statsmodels | statsmodels/discrete/discrete_model.py | https://github.com/statsmodels/statsmodels/blob/master/statsmodels/discrete/discrete_model.py | BSD-3-Clause |
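A sketch of ``which="prob"``, which evaluates the predicted pmf at ``y_values`` for every observation (synthetic data; assumes scipy for the cross-check):

import numpy as np
from scipy import stats
import statsmodels.api as sm

rng = np.random.default_rng(10)
x = sm.add_constant(rng.normal(size=(100, 1)))
y = rng.poisson(np.exp(x @ [0.2, 0.3]))
res = sm.Poisson(y, x).fit(disp=0)

probs = res.model.predict(res.params, which="prob", y_values=np.arange(5))
mu = res.predict()
assert probs.shape == (100, 5)
assert np.allclose(probs, stats.poisson.pmf(np.arange(5), mu[:, None]))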
def _var(self, mu, params=None):
"""variance implied by the distribution
internal use, will be refactored or removed
"""
return mu | variance implied by the distribution
internal use, will be refactored or removed | _var | python | statsmodels/statsmodels | statsmodels/discrete/discrete_model.py | https://github.com/statsmodels/statsmodels/blob/master/statsmodels/discrete/discrete_model.py | BSD-3-Clause |
def get_distribution(self, params, exog=None, exposure=None, offset=None):
"""Get frozen instance of distribution based on predicted parameters.
Parameters
----------
params : array_like
The parameters of the model.
exog : ndarray, optional
Explanatory variables for the main count model.
If ``exog`` is None, then the data from the model will be used.
offset : ndarray, optional
Offset is added to the linear predictor of the mean function with
coefficient equal to 1.
Default is zero if exog is not None, and the model offset if exog
is None.
exposure : ndarray, optional
Log(exposure) is added to the linear predictor of the mean
function with coefficient equal to 1. If exposure is specified,
then it will be logged by the method. The user does not need to
log it first.
            Default is one if exog is not None, and it is the model exposure
if exog is None.
Returns
-------
Instance of frozen scipy distribution subclass.
"""
mu = self.predict(params, exog=exog, exposure=exposure, offset=offset)
distr = stats.poisson(mu)
return distr | Get frozen instance of distribution based on predicted parameters.
Parameters
----------
params : array_like
The parameters of the model.
exog : ndarray, optional
Explanatory variables for the main count model.
If ``exog`` is None, then the data from the model will be used.
offset : ndarray, optional
Offset is added to the linear predictor of the mean function with
coefficient equal to 1.
Default is zero if exog is not None, and the model offset if exog
is None.
exposure : ndarray, optional
Log(exposure) is added to the linear predictor of the mean
function with coefficient equal to 1. If exposure is specified,
then it will be logged by the method. The user does not need to
log it first.
            Default is one if exog is not None, and it is the model exposure
if exog is None.
Returns
-------
Instance of frozen scipy distribution subclass. | get_distribution | python | statsmodels/statsmodels | statsmodels/discrete/discrete_model.py | https://github.com/statsmodels/statsmodels/blob/master/statsmodels/discrete/discrete_model.py | BSD-3-Clause |
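A sketch of the frozen Poisson distribution; for a Poisson model the implied mean and variance coincide (synthetic data):

import numpy as np
import statsmodels.api as sm

rng = np.random.default_rng(11)
x = sm.add_constant(rng.normal(size=(100, 1)))
y = rng.poisson(np.exp(x @ [0.2, 0.3]))
res = sm.Poisson(y, x).fit(disp=0)

distr = res.model.get_distribution(res.params)  # frozen scipy.stats.poisson
assert np.allclose(distr.mean(), res.predict())
assert np.allclose(distr.var(), res.predict())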
def loglike(self, params):
"""
Loglikelihood of Generalized Poisson model
Parameters
----------
params : array_like
The parameters of the model.
Returns
-------
loglike : float
The log-likelihood function of the model evaluated at `params`.
See notes.
Notes
-----
        .. math:: \\ln L=\\sum_{i=1}^{n}\\left[\\ln\\mu_{i}+(y_{i}-1)\\ln(\\mu_{i}+
            \\alpha\\mu_{i}^{p-1}y_{i})-y_{i}\\ln(1+\\alpha\\mu_{i}^{p-1})-
            \\ln(y_{i}!)-\\frac{\\mu_{i}+\\alpha\\mu_{i}^{p-1}y_{i}}{1+\\alpha
            \\mu_{i}^{p-1}}\\right]
"""
return np.sum(self.loglikeobs(params)) | Loglikelihood of Generalized Poisson model
Parameters
----------
params : array_like
The parameters of the model.
Returns
-------
loglike : float
The log-likelihood function of the model evaluated at `params`.
See notes.
Notes
-----
        .. math:: \\ln L=\\sum_{i=1}^{n}\\left[\\ln\\mu_{i}+(y_{i}-1)\\ln(\\mu_{i}+
            \\alpha\\mu_{i}^{p-1}y_{i})-y_{i}\\ln(1+\\alpha\\mu_{i}^{p-1})-
            \\ln(y_{i}!)-\\frac{\\mu_{i}+\\alpha\\mu_{i}^{p-1}y_{i}}{1+\\alpha
            \\mu_{i}^{p-1}}\\right] | loglike | python | statsmodels/statsmodels | statsmodels/discrete/discrete_model.py | https://github.com/statsmodels/statsmodels/blob/master/statsmodels/discrete/discrete_model.py | BSD-3-Clause
def loglikeobs(self, params):
"""
Loglikelihood for observations of Generalized Poisson model
Parameters
----------
params : array_like
The parameters of the model.
Returns
-------
loglike : ndarray
The log likelihood for each observation of the model evaluated
at `params`. See Notes
Notes
-----
        .. math:: \\ln L_{i}=\\ln\\mu_{i}+(y_{i}-1)\\ln(\\mu_{i}+
            \\alpha\\mu_{i}^{p-1}y_{i})-y_{i}\\ln(1+\\alpha\\mu_{i}^{p-1})-
            \\ln(y_{i}!)-\\frac{\\mu_{i}+\\alpha\\mu_{i}^{p-1}y_{i}}{1+\\alpha
            \\mu_{i}^{p-1}}
for observations :math:`i=1,...,n`
"""
if self._transparams:
alpha = np.exp(params[-1])
else:
alpha = params[-1]
params = params[:-1]
p = self.parameterization
endog = self.endog
mu = self.predict(params)
mu_p = np.power(mu, p)
a1 = 1 + alpha * mu_p
a2 = mu + (a1 - 1) * endog
a1 = np.maximum(1e-20, a1)
a2 = np.maximum(1e-20, a2)
return (np.log(mu) + (endog - 1) * np.log(a2) - endog *
np.log(a1) - gammaln(endog + 1) - a2 / a1) | Loglikelihood for observations of Generalized Poisson model
Parameters
----------
params : array_like
The parameters of the model.
Returns
-------
loglike : ndarray
The log likelihood for each observation of the model evaluated
at `params`. See Notes
Notes
-----
        .. math:: \\ln L_{i}=\\ln\\mu_{i}+(y_{i}-1)\\ln(\\mu_{i}+
            \\alpha\\mu_{i}^{p-1}y_{i})-y_{i}\\ln(1+\\alpha\\mu_{i}^{p-1})-
            \\ln(y_{i}!)-\\frac{\\mu_{i}+\\alpha\\mu_{i}^{p-1}y_{i}}{1+\\alpha
            \\mu_{i}^{p-1}}
for observations :math:`i=1,...,n` | loglikeobs | python | statsmodels/statsmodels | statsmodels/discrete/discrete_model.py | https://github.com/statsmodels/statsmodels/blob/master/statsmodels/discrete/discrete_model.py | BSD-3-Clause |
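A sketch fitting the model on Poisson data (which the Generalized Poisson nests at alpha = 0) and confirming that ``loglike`` is the sum of ``loglikeobs``:

import numpy as np
from statsmodels.discrete.discrete_model import GeneralizedPoisson

rng = np.random.default_rng(12)
x = np.column_stack([np.ones(300), rng.normal(size=300)])
y = rng.poisson(np.exp(x @ [0.3, 0.2]))
model = GeneralizedPoisson(y, x, p=1)
res = model.fit(disp=0)

assert np.isclose(model.loglike(res.params), model.loglikeobs(res.params).sum())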
def _score_p(self, params):
"""
        Generalized Poisson model derivative of the log-likelihood with respect to the p parameter
Parameters
----------
params : array_like
The parameters of the model
Returns
-------
        dldp : float
            dldp is the first derivative of the loglikelihood function with
            respect to the p parameter, evaluated at `params`.
"""
if self._transparams:
alpha = np.exp(params[-1])
else:
alpha = params[-1]
params = params[:-1]
p = self.parameterization
y = self.endog[:,None]
mu = self.predict(params)[:,None]
mu_p = np.power(mu, p)
a1 = 1 + alpha * mu_p
a2 = mu + alpha * mu_p * y
dp = np.sum(np.log(mu) * ((a2 - mu) * ((y - 1) / a2 - 2 / a1) +
(a1 - 1) * a2 / a1 ** 2))
        return dp | Generalized Poisson model derivative of the log-likelihood with respect to the p parameter
Parameters
----------
params : array_like
The parameters of the model
Returns
-------
        dldp : float
            dldp is the first derivative of the loglikelihood function with
            respect to the p parameter, evaluated at `params`. | _score_p | python | statsmodels/statsmodels | statsmodels/discrete/discrete_model.py | https://github.com/statsmodels/statsmodels/blob/master/statsmodels/discrete/discrete_model.py | BSD-3-Clause
def hessian(self, params):
"""
Generalized Poisson model Hessian matrix of the loglikelihood
Parameters
----------
params : array_like
The parameters of the model
Returns
-------
hess : ndarray, (k_vars, k_vars)
The Hessian, second derivative of loglikelihood function,
evaluated at `params`
"""
if self._transparams:
alpha = np.exp(params[-1])
else:
alpha = params[-1]
params = params[:-1]
p = self.parameterization
exog = self.exog
y = self.endog[:,None]
mu = self.predict(params)[:,None]
mu_p = np.power(mu, p)
a1 = 1 + alpha * mu_p
a2 = mu + alpha * mu_p * y
a3 = alpha * p * mu ** (p - 1)
a4 = a3 * y
a5 = p * mu ** (p - 1)
dmudb = mu * exog
# for dl/dparams dparams
dim = exog.shape[1]
hess_arr = np.empty((dim+1,dim+1))
for i in range(dim):
for j in range(i + 1):
hess_val = np.sum(mu * exog[:,i,None] * exog[:,j,None] *
(mu * (a3 * a4 / a1**2 -
2 * a3**2 * a2 / a1**3 +
2 * a3 * (a4 + 1) / a1**2 -
a4 * p / (mu * a1) +
a3 * p * a2 / (mu * a1**2) +
(y - 1) * a4 * (p - 1) / (a2 * mu) -
(y - 1) * (1 + a4)**2 / a2**2 -
a4 * (p - 1) / (a1 * mu)) +
((y - 1) * (1 + a4) / a2 -
(1 + a4) / a1)), axis=0)
hess_arr[i, j] = np.squeeze(hess_val)
tri_idx = np.triu_indices(dim, k=1)
hess_arr[tri_idx] = hess_arr.T[tri_idx]
# for dl/dparams dalpha
dldpda = np.sum((2 * a4 * mu_p / a1**2 -
2 * a3 * mu_p * a2 / a1**3 -
mu_p * y * (y - 1) * (1 + a4) / a2**2 +
mu_p * (1 + a4) / a1**2 +
a5 * y * (y - 1) / a2 -
2 * a5 * y / a1 +
a5 * a2 / a1**2) * dmudb,
axis=0)
hess_arr[-1,:-1] = dldpda
hess_arr[:-1,-1] = dldpda
# for dl/dalpha dalpha
dldada = mu_p**2 * (3 * y / a1**2 -
(y / a2)**2. * (y - 1) -
2 * a2 / a1**3)
hess_arr[-1,-1] = dldada.sum()
return hess_arr | Generalized Poisson model Hessian matrix of the loglikelihood
Parameters
----------
params : array_like
The parameters of the model
Returns
-------
hess : ndarray, (k_vars, k_vars)
The Hessian, second derivative of loglikelihood function,
evaluated at `params` | hessian | python | statsmodels/statsmodels | statsmodels/discrete/discrete_model.py | https://github.com/statsmodels/statsmodels/blob/master/statsmodels/discrete/discrete_model.py | BSD-3-Clause |
def hessian_factor(self, params):
"""
Generalized Poisson model Hessian matrix of the loglikelihood
Parameters
----------
        params : array_like
The parameters of the model
Returns
-------
        hess : tuple of ndarrays
            The Hessian factor, second derivative of loglikelihood function
            with respect to linear predictor and dispersion parameter
            evaluated at `params`.
            The first element contains the second derivative w.r.t. linpred,
            the second element contains the cross derivative, and the
            third element contains the second derivative w.r.t. the dispersion
            parameter.
"""
params = np.asarray(params)
if self._transparams:
alpha = np.exp(params[-1])
else:
alpha = params[-1]
params = params[:-1]
p = self.parameterization
y = self.endog
mu = self.predict(params)
mu_p = np.power(mu, p)
a1 = 1 + alpha * mu_p
a2 = mu + alpha * mu_p * y
a3 = alpha * p * mu ** (p - 1)
a4 = a3 * y
a5 = p * mu ** (p - 1)
dmudb = mu
dbb = mu * (
mu * (a3 * a4 / a1**2 -
2 * a3**2 * a2 / a1**3 +
2 * a3 * (a4 + 1) / a1**2 -
a4 * p / (mu * a1) +
a3 * p * a2 / (mu * a1**2) +
a4 / (mu * a1) -
a3 * a2 / (mu * a1**2) +
(y - 1) * a4 * (p - 1) / (a2 * mu) -
(y - 1) * (1 + a4)**2 / a2**2 -
a4 * (p - 1) / (a1 * mu) -
1 / mu**2) +
(-a4 / a1 +
a3 * a2 / a1**2 +
(y - 1) * (1 + a4) / a2 -
(1 + a4) / a1 +
1 / mu))
# for dl/dlinpred dalpha
dba = ((2 * a4 * mu_p / a1**2 -
2 * a3 * mu_p * a2 / a1**3 -
mu_p * y * (y - 1) * (1 + a4) / a2**2 +
mu_p * (1 + a4) / a1**2 +
a5 * y * (y - 1) / a2 -
2 * a5 * y / a1 +
a5 * a2 / a1**2) * dmudb)
# for dl/dalpha dalpha
daa = mu_p**2 * (3 * y / a1**2 -
(y / a2)**2. * (y - 1) -
2 * a2 / a1**3)
return dbb, dba, daa | Generalized Poisson model Hessian matrix of the loglikelihood
Parameters
----------
        params : array_like
The parameters of the model
Returns
-------
        hess : tuple of ndarrays
            The Hessian factor, second derivative of loglikelihood function
            with respect to linear predictor and dispersion parameter
            evaluated at `params`.
            The first element contains the second derivative w.r.t. linpred,
            the second element contains the cross derivative, and the
            third element contains the second derivative w.r.t. the dispersion
            parameter. | hessian_factor | python | statsmodels/statsmodels | statsmodels/discrete/discrete_model.py | https://github.com/statsmodels/statsmodels/blob/master/statsmodels/discrete/discrete_model.py | BSD-3-Clause
def cdf(self, X):
"""
The logistic cumulative distribution function
Parameters
----------
X : array_like
`X` is the linear predictor of the logit model. See notes.
Returns
-------
1/(1 + exp(-X))
Notes
-----
In the logit model,
.. math:: \\Lambda\\left(x^{\\prime}\\beta\\right)=
\\text{Prob}\\left(Y=1|x\\right)=
\\frac{e^{x^{\\prime}\\beta}}{1+e^{x^{\\prime}\\beta}}
"""
X = np.asarray(X)
return 1/(1+np.exp(-X)) | The logistic cumulative distribution function
Parameters
----------
X : array_like
`X` is the linear predictor of the logit model. See notes.
Returns
-------
1/(1 + exp(-X))
Notes
-----
In the logit model,
.. math:: \\Lambda\\left(x^{\\prime}\\beta\\right)=
\\text{Prob}\\left(Y=1|x\\right)=
\\frac{e^{x^{\\prime}\\beta}}{1+e^{x^{\\prime}\\beta}} | cdf | python | statsmodels/statsmodels | statsmodels/discrete/discrete_model.py | https://github.com/statsmodels/statsmodels/blob/master/statsmodels/discrete/discrete_model.py | BSD-3-Clause |
def pdf(self, X):
"""
The logistic probability density function
Parameters
----------
X : array_like
`X` is the linear predictor of the logit model. See notes.
Returns
-------
pdf : ndarray
            The value of the logistic probability density function for each
            point of X. ``np.exp(-X)/(1+np.exp(-X))**2``
Notes
-----
In the logit model,
.. math:: \\lambda\\left(x^{\\prime}\\beta\\right)=\\frac{e^{-x^{\\prime}\\beta}}{\\left(1+e^{-x^{\\prime}\\beta}\\right)^{2}}
"""
X = np.asarray(X)
return np.exp(-X)/(1+np.exp(-X))**2 | The logistic probability density function
Parameters
----------
X : array_like
`X` is the linear predictor of the logit model. See notes.
Returns
-------
pdf : ndarray
            The value of the logistic probability density function for each
            point of X. ``np.exp(-X)/(1+np.exp(-X))**2``
Notes
-----
In the logit model,
.. math:: \\lambda\\left(x^{\\prime}\\beta\\right)=\\frac{e^{-x^{\\prime}\\beta}}{\\left(1+e^{-x^{\\prime}\\beta}\\right)^{2}} | pdf | python | statsmodels/statsmodels | statsmodels/discrete/discrete_model.py | https://github.com/statsmodels/statsmodels/blob/master/statsmodels/discrete/discrete_model.py | BSD-3-Clause |
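A sketch of the logistic density identity pdf(X) = cdf(X) * (1 - cdf(X)), which follows from differentiating the cdf (the endog/exog below are dummies, only the pure cdf/pdf functions are used):

import numpy as np
from statsmodels.discrete.discrete_model import Logit

model = Logit(np.array([0., 1., 1.]), np.ones((3, 1)))
z = np.linspace(-4, 4, 9)
assert np.allclose(model.pdf(z), model.cdf(z) * (1 - model.cdf(z)))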
def loglike(self, params):
"""
Log-likelihood of logit model.
Parameters
----------
params : array_like
The parameters of the logit model.
Returns
-------
loglike : float
The log-likelihood function of the model evaluated at `params`.
See notes.
Notes
-----
.. math::
\\ln L=\\sum_{i}\\ln\\Lambda
\\left(q_{i}x_{i}^{\\prime}\\beta\\right)
Where :math:`q=2y-1`. This simplification comes from the fact that the
logistic distribution is symmetric.
"""
q = 2*self.endog - 1
linpred = self.predict(params, which="linear")
return np.sum(np.log(self.cdf(q * linpred))) | Log-likelihood of logit model.
Parameters
----------
params : array_like
The parameters of the logit model.
Returns
-------
loglike : float
The log-likelihood function of the model evaluated at `params`.
See notes.
Notes
-----
.. math::
\\ln L=\\sum_{i}\\ln\\Lambda
\\left(q_{i}x_{i}^{\\prime}\\beta\\right)
Where :math:`q=2y-1`. This simplification comes from the fact that the
logistic distribution is symmetric. | loglike | python | statsmodels/statsmodels | statsmodels/discrete/discrete_model.py | https://github.com/statsmodels/statsmodels/blob/master/statsmodels/discrete/discrete_model.py | BSD-3-Clause |
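A sketch of the :math:`q = 2y - 1` symmetry trick: the compact loglikelihood matches the usual Bernoulli form (synthetic data; ``beta`` is an arbitrary evaluation point):

import numpy as np
from statsmodels.discrete.discrete_model import Logit

rng = np.random.default_rng(13)
x = np.column_stack([np.ones(100), rng.normal(size=100)])
y = (rng.uniform(size=100) < 0.5).astype(float)
model = Logit(y, x)
beta = np.array([0.1, -0.2])

p = model.cdf(x @ beta)
ll = (y * np.log(p) + (1 - y) * np.log(1 - p)).sum()
assert np.isclose(model.loglike(beta), ll)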
def loglikeobs(self, params):
"""
Log-likelihood of logit model for each observation.
Parameters
----------
params : array_like
The parameters of the logit model.
Returns
-------
loglike : ndarray
The log likelihood for each observation of the model evaluated
at `params`. See Notes
Notes
-----
.. math::
\\ln L=\\sum_{i}\\ln\\Lambda
\\left(q_{i}x_{i}^{\\prime}\\beta\\right)
for observations :math:`i=1,...,n`
where :math:`q=2y-1`. This simplification comes from the fact that the
logistic distribution is symmetric.
"""
q = 2*self.endog - 1
linpred = self.predict(params, which="linear")
return np.log(self.cdf(q * linpred)) | Log-likelihood of logit model for each observation.
Parameters
----------
params : array_like
The parameters of the logit model.
Returns
-------
loglike : ndarray
The log likelihood for each observation of the model evaluated
at `params`. See Notes
Notes
-----
.. math::
\\ln L=\\sum_{i}\\ln\\Lambda
\\left(q_{i}x_{i}^{\\prime}\\beta\\right)
for observations :math:`i=1,...,n`
where :math:`q=2y-1`. This simplification comes from the fact that the
logistic distribution is symmetric. | loglikeobs | python | statsmodels/statsmodels | statsmodels/discrete/discrete_model.py | https://github.com/statsmodels/statsmodels/blob/master/statsmodels/discrete/discrete_model.py | BSD-3-Clause |
def score(self, params):
"""
Logit model score (gradient) vector of the log-likelihood
Parameters
----------
params : array_like
The parameters of the model
Returns
-------
score : ndarray, 1-D
The score vector of the model, i.e. the first derivative of the
loglikelihood function, evaluated at `params`
Notes
-----
.. math:: \\frac{\\partial\\ln L}{\\partial\\beta}=\\sum_{i=1}^{n}\\left(y_{i}-\\Lambda_{i}\\right)x_{i}
"""
y = self.endog
X = self.exog
fitted = self.predict(params)
return np.dot(y - fitted, X) | Logit model score (gradient) vector of the log-likelihood
Parameters
----------
params : array_like
The parameters of the model
Returns
-------
score : ndarray, 1-D
The score vector of the model, i.e. the first derivative of the
loglikelihood function, evaluated at `params`
Notes
-----
.. math:: \\frac{\\partial\\ln L}{\\partial\\beta}=\\sum_{i=1}^{n}\\left(y_{i}-\\Lambda_{i}\\right)x_{i} | score | python | statsmodels/statsmodels | statsmodels/discrete/discrete_model.py | https://github.com/statsmodels/statsmodels/blob/master/statsmodels/discrete/discrete_model.py | BSD-3-Clause |
def score_obs(self, params):
"""
Logit model Jacobian of the log-likelihood for each observation
Parameters
----------
params : array_like
The parameters of the model
Returns
-------
jac : array_like
The derivative of the loglikelihood for each observation evaluated
at `params`.
Notes
-----
.. math:: \\frac{\\partial\\ln L_{i}}{\\partial\\beta}=\\left(y_{i}-\\Lambda_{i}\\right)x_{i}
for observations :math:`i=1,...,n`
"""
y = self.endog
X = self.exog
fitted = self.predict(params)
return (y - fitted)[:,None] * X | Logit model Jacobian of the log-likelihood for each observation
Parameters
----------
params : array_like
The parameters of the model
Returns
-------
jac : array_like
The derivative of the loglikelihood for each observation evaluated
at `params`.
Notes
-----
.. math:: \\frac{\\partial\\ln L_{i}}{\\partial\\beta}=\\left(y_{i}-\\Lambda_{i}\\right)x_{i}
for observations :math:`i=1,...,n` | score_obs | python | statsmodels/statsmodels | statsmodels/discrete/discrete_model.py | https://github.com/statsmodels/statsmodels/blob/master/statsmodels/discrete/discrete_model.py | BSD-3-Clause |
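A sketch (synthetic data) confirming that the rows of the per-observation Jacobian sum to the score vector:
import numpy as np
from statsmodels.discrete.discrete_model import Logit

rng = np.random.default_rng(2)
X = np.column_stack([np.ones(60), rng.normal(size=60)])
y = rng.binomial(1, 0.5, size=60)
model = Logit(y, X)
params = np.zeros(2)
assert np.allclose(model.score_obs(params).sum(axis=0), model.score(params))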
def score_factor(self, params):
"""
Logit model derivative of the log-likelihood with respect to linpred.
Parameters
----------
params : array_like
The parameters of the model
Returns
-------
score_factor : array_like
The derivative of the loglikelihood for each observation evaluated
at `params`.
Notes
-----
.. math:: \\frac{\\partial\\ln L_{i}}{\\partial\\eta_{i}}=\\left(y_{i}-\\Lambda_{i}\\right)
for observations :math:`i=1,...,n`
where :math:`\\eta_{i}=x_{i}\\beta` is the linear predictor and
:math:`\\Lambda_{i}` is the logistic cdf evaluated at :math:`\\eta_{i}`.
"""
y = self.endog
fitted = self.predict(params)
return (y - fitted) | Logit model derivative of the log-likelihood with respect to linpred.
Parameters
----------
params : array_like
The parameters of the model
Returns
-------
score_factor : array_like
The derivative of the loglikelihood for each observation evaluated
at `params`.
Notes
-----
.. math:: \\frac{\\partial\\ln L_{i}}{\\partial\\eta_{i}}=\\left(y_{i}-\\Lambda_{i}\\right)
for observations :math:`i=1,...,n`
where :math:`\\eta_{i}=x_{i}\\beta` is the linear predictor and
:math:`\\Lambda_{i}` is the logistic cdf evaluated at :math:`\\eta_{i}`. | score_factor | python | statsmodels/statsmodels | statsmodels/discrete/discrete_model.py | https://github.com/statsmodels/statsmodels/blob/master/statsmodels/discrete/discrete_model.py | BSD-3-Clause |
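A sketch (synthetic data) showing how score_obs factors as score_factor[:, None] * exog for the Logit model:
import numpy as np
from statsmodels.discrete.discrete_model import Logit

rng = np.random.default_rng(3)
X = np.column_stack([np.ones(40), rng.normal(size=40)])
y = rng.binomial(1, 0.5, size=40)
model = Logit(y, X)
params = np.array([0.1, 0.4])
sf = model.score_factor(params)   # y_i - Lambda_i, one value per observation
assert np.allclose(sf[:, None] * X, model.score_obs(params))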
def hessian(self, params):
"""
Logit model Hessian matrix of the log-likelihood
Parameters
----------
params : array_like
The parameters of the model
Returns
-------
hess : ndarray, (k_vars, k_vars)
The Hessian, second derivative of loglikelihood function,
evaluated at `params`
Notes
-----
.. math:: \\frac{\\partial^{2}\\ln L}{\\partial\\beta\\partial\\beta^{\\prime}}=-\\sum_{i}\\Lambda_{i}\\left(1-\\Lambda_{i}\\right)x_{i}x_{i}^{\\prime}
"""
X = self.exog
L = self.predict(params)
return -np.dot(L*(1-L)*X.T,X) | Logit model Hessian matrix of the log-likelihood
Parameters
----------
params : array_like
The parameters of the model
Returns
-------
hess : ndarray, (k_vars, k_vars)
The Hessian, second derivative of loglikelihood function,
evaluated at `params`
Notes
-----
.. math:: \\frac{\\partial^{2}\\ln L}{\\partial\\beta\\partial\\beta^{\\prime}}=-\\sum_{i}\\Lambda_{i}\\left(1-\\Lambda_{i}\\right)x_{i}x_{i}^{\\prime} | hessian | python | statsmodels/statsmodels | statsmodels/discrete/discrete_model.py | https://github.com/statsmodels/statsmodels/blob/master/statsmodels/discrete/discrete_model.py | BSD-3-Clause |
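A sketch (synthetic data; approx_hess is statsmodels' numerical Hessian) checking the closed form -X' diag(L*(1-L)) X against numerical differentiation:
import numpy as np
from statsmodels.discrete.discrete_model import Logit
from statsmodels.tools.numdiff import approx_hess

rng = np.random.default_rng(4)
X = np.column_stack([np.ones(70), rng.normal(size=70)])
y = rng.binomial(1, 0.5, size=70)
model = Logit(y, X)
params = np.array([0.0, 0.5])
assert np.allclose(model.hessian(params),
                   approx_hess(params, model.loglike), atol=1e-4)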
def hessian_factor(self, params):
"""
Logit model Hessian factor
Parameters
----------
params : array_like
The parameters of the model
Returns
-------
hess : ndarray, (nobs,)
The Hessian factor, second derivative of loglikelihood function
with respect to the linear predictor evaluated at `params`
"""
L = self.predict(params)
return -L * (1 - L) | Logit model Hessian factor
Parameters
----------
params : array_like
The parameters of the model
Returns
-------
hess : ndarray, (nobs,)
The Hessian factor, second derivative of loglikelihood function
with respect to the linear predictor evaluated at `params` | hessian_factor | python | statsmodels/statsmodels | statsmodels/discrete/discrete_model.py | https://github.com/statsmodels/statsmodels/blob/master/statsmodels/discrete/discrete_model.py | BSD-3-Clause |
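A sketch (synthetic data) rebuilding the full Hessian from the observation-level factor, H = X' diag(hessian_factor) X:
import numpy as np
from statsmodels.discrete.discrete_model import Logit

rng = np.random.default_rng(5)
X = np.column_stack([np.ones(50), rng.normal(size=50)])
y = rng.binomial(1, 0.5, size=50)
model = Logit(y, X)
params = np.array([0.3, -0.2])
hf = model.hessian_factor(params)        # -L_i * (1 - L_i) per observation
assert np.allclose((hf[:, None] * X).T @ X, model.hessian(params))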
def _deriv_score_obs_dendog(self, params):
"""derivative of score_obs w.r.t. endog
Parameters
----------
params : ndarray
parameter at which score is evaluated
Returns
-------
derivative : ndarray_2d
The derivative of the score_obs with respect to endog. Because
the Logit score factor :math:`y_{i}-\\Lambda_{i}` is linear in
endog with unit slope, this derivative is simply `exog`.
"""
return self.exog | derivative of score_obs w.r.t. endog
Parameters
----------
params : ndarray
parameter at which score is evaluated
Returns
-------
derivative : ndarray_2d
The derivative of the score_obs with respect to endog. Because
the Logit score factor :math:`y_{i}-\\Lambda_{i}` is linear in
endog with unit slope, this derivative is simply `exog`. | _deriv_score_obs_dendog | python | statsmodels/statsmodels | statsmodels/discrete/discrete_model.py | https://github.com/statsmodels/statsmodels/blob/master/statsmodels/discrete/discrete_model.py | BSD-3-Clause |
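A sketch (synthetic data; note that _deriv_score_obs_dendog is a private method whose interface may change) illustrating that the derivative of score_obs with respect to endog is just exog:
import numpy as np
from statsmodels.discrete.discrete_model import Logit

rng = np.random.default_rng(6)
X = np.column_stack([np.ones(30), rng.normal(size=30)])
y = rng.binomial(1, 0.5, size=30)
model = Logit(y, X)
params = np.array([0.1, 0.2])
assert np.allclose(model._deriv_score_obs_dendog(params), X)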
def cdf(self, X):
"""
Probit (Normal) cumulative distribution function
Parameters
----------
X : array_like
The linear predictor of the model (XB).
Returns
-------
cdf : ndarray
The cdf evaluated at `X`.
Notes
-----
A thin wrapper around scipy.stats.norm.cdf; it calls the private
`stats.norm._cdf` directly to skip argument checking.
"""
return stats.norm._cdf(X) | Probit (Normal) cumulative distribution function
Parameters
----------
X : array_like
The linear predictor of the model (XB).
Returns
-------
cdf : ndarray
The cdf evaluated at `X`.
Notes
-----
A thin wrapper around scipy.stats.norm.cdf; it calls the private
`stats.norm._cdf` directly to skip argument checking. | cdf | python | statsmodels/statsmodels | statsmodels/discrete/discrete_model.py | https://github.com/statsmodels/statsmodels/blob/master/statsmodels/discrete/discrete_model.py | BSD-3-Clause |
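A one-line check (illustrative; `_cdf` is private scipy API and may change) that the fast private path agrees with the public one:
import numpy as np
from scipy import stats

x = np.linspace(-3, 3, 7)
assert np.allclose(stats.norm._cdf(x), stats.norm.cdf(x))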
def pdf(self, X):
"""
Probit (Normal) probability density function
Parameters
----------
X : array_like
The linear predictor of the model (XB).
Returns
-------
pdf : ndarray
The value of the normal density function for each point of X.
Notes
-----
A thin wrapper around scipy.stats.norm.pdf; it calls the private
`stats.norm._pdf` directly to skip argument checking.
"""
X = np.asarray(X)
return stats.norm._pdf(X) | Probit (Normal) probability density function
Parameters
----------
X : array_like
The linear predictor of the model (XB).
Returns
-------
pdf : ndarray
The value of the normal density function for each point of X.
Notes
-----
A thin wrapper around scipy.stats.norm.pdf; it calls the private
`stats.norm._pdf` directly to skip argument checking. | pdf | python | statsmodels/statsmodels | statsmodels/discrete/discrete_model.py | https://github.com/statsmodels/statsmodels/blob/master/statsmodels/discrete/discrete_model.py | BSD-3-Clause |
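The analogous check for the density (again relying on private scipy API for illustration only):
import numpy as np
from scipy import stats

x = np.linspace(-3, 3, 7)
assert np.allclose(stats.norm._pdf(x), stats.norm.pdf(x))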
def loglike(self, params):
"""
Log-likelihood of probit model (i.e., the normal distribution).
Parameters
----------
params : array_like
The parameters of the model.
Returns
-------
loglike : float
The log-likelihood function of the model evaluated at `params`.
See notes.
Notes
-----
.. math:: \\ln L=\\sum_{i}\\ln\\Phi\\left(q_{i}x_{i}^{\\prime}\\beta\\right)
Where :math:`q=2y-1`. This simplification comes from the fact that the
normal distribution is symmetric.
"""
q = 2*self.endog - 1
linpred = self.predict(params, which="linear")
return np.sum(np.log(np.clip(self.cdf(q * linpred), FLOAT_EPS, 1))) | Log-likelihood of probit model (i.e., the normal distribution).
Parameters
----------
params : array_like
The parameters of the model.
Returns
-------
loglike : float
The log-likelihood function of the model evaluated at `params`.
See notes.
Notes
-----
.. math:: \\ln L=\\sum_{i}\\ln\\Phi\\left(q_{i}x_{i}^{\\prime}\\beta\\right)
Where :math:`q=2y-1`. This simplification comes from the fact that the
normal distribution is symmetric. | loglike | python | statsmodels/statsmodels | statsmodels/discrete/discrete_model.py | https://github.com/statsmodels/statsmodels/blob/master/statsmodels/discrete/discrete_model.py | BSD-3-Clause |
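A sketch showing why the clipping matters: for extreme linear predictors Phi(q*eta) underflows to exactly 0.0 and log() returns -inf, while the clipped version stays finite (np.finfo(float).eps plays the role of FLOAT_EPS here):
import numpy as np
from scipy import stats

eta = np.array([1.0, -40.0])      # Phi(-40) underflows to 0.0 in float64
p = stats.norm.cdf(eta)
with np.errstate(divide="ignore"):
    print(np.log(p))                     # [-0.1727..., -inf]
eps = np.finfo(float).eps
print(np.log(np.clip(p, eps, 1)))        # finite: [-0.1727..., -36.04...]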
def loglikeobs(self, params):
"""
Log-likelihood of probit model for each observation
Parameters
----------
params : array_like
The parameters of the model.
Returns
-------
loglike : array_like
The log likelihood for each observation of the model evaluated
at `params`. See Notes
Notes
-----
.. math:: \\ln L_{i}=\\ln\\Phi\\left(q_{i}x_{i}^{\\prime}\\beta\\right)
for observations :math:`i=1,...,n`
where :math:`q=2y-1`. This simplification comes from the fact that the
normal distribution is symmetric.
"""
q = 2*self.endog - 1
linpred = self.predict(params, which="linear")
return np.log(np.clip(self.cdf(q*linpred), FLOAT_EPS, 1)) | Log-likelihood of probit model for each observation
Parameters
----------
params : array_like
The parameters of the model.
Returns
-------
loglike : array_like
The log likelihood for each observation of the model evaluated
at `params`. See Notes
Notes
-----
.. math:: \\ln L_{i}=\\ln\\Phi\\left(q_{i}x_{i}^{\\prime}\\beta\\right)
for observations :math:`i=1,...,n`
where :math:`q=2y-1`. This simplification comes from the fact that the
normal distribution is symmetric. | loglikeobs | python | statsmodels/statsmodels | statsmodels/discrete/discrete_model.py | https://github.com/statsmodels/statsmodels/blob/master/statsmodels/discrete/discrete_model.py | BSD-3-Clause |
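A per-observation check (synthetic data) of the q = 2y - 1 symmetry trick for the probit, against the explicit Bernoulli form:
import numpy as np
from scipy import stats

rng = np.random.default_rng(7)
eta = rng.normal(size=20)                  # linear predictor x'beta
y = rng.binomial(1, stats.norm.cdf(eta))
q = 2 * y - 1
ll_sym = np.log(stats.norm.cdf(q * eta))
ll_std = (y * np.log(stats.norm.cdf(eta))
          + (1 - y) * np.log(stats.norm.cdf(-eta)))
assert np.allclose(ll_sym, ll_std)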