def test_serial_correlation(self, lags=None, model_df=None):
"""
Ljung-Box test for residual serial correlation
Parameters
----------
lags : int
The maximum number of lags to use in the test. Jointly tests that
all autocorrelations up to and including lag j are zero for
j = 1, 2, ..., lags. If None, uses min(10, nobs // 5).
model_df : int
The model degrees of freedom to use when adjusting the test
statistic to account for parameter estimation. If None, uses
the number of AR lags included in the model.
Returns
-------
output : DataFrame
DataFrame containing three columns: the test statistic, the
p-value of the test, and the degree of freedom used in the test.
Notes
-----
Null hypothesis is no serial correlation.
If the test degree-of-freedom is 0 or negative after accounting for
model_df, then the test statistic's p-value is missing.
See Also
--------
statsmodels.stats.diagnostic.acorr_ljungbox
Ljung-Box test for serial correlation.
"""
# Deferred to prevent circular import
from statsmodels.stats.diagnostic import acorr_ljungbox
lags = int_like(lags, "lags", optional=True)
model_df = int_like(model_df, "model_df", optional=True)
model_df = self.df_model if model_df is None else model_df
nobs_effective = self.resid.shape[0]
if lags is None:
lags = min(nobs_effective // 5, 10)
test_stats = acorr_ljungbox(
self.resid,
lags=lags,
boxpierce=False,
model_df=model_df,
)
cols = ["Ljung-Box", "LB P-value", "DF"]
if lags == 1:
df = max(0, 1 - model_df)
else:
df = np.clip(np.arange(1, lags + 1) - model_df, 0, np.inf)
df = df.astype(int)
test_stats["df"] = df
index = pd.RangeIndex(1, lags + 1, name="Lag")
# Convert to an array so the documented column names can be applied
return pd.DataFrame(test_stats.to_numpy(), columns=cols, index=index)

test_serial_correlation | python | statsmodels/statsmodels | statsmodels/tsa/ar_model.py | https://github.com/statsmodels/statsmodels/blob/master/statsmodels/tsa/ar_model.py | BSD-3-Clause
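
A hypothetical usage sketch (simulated AR(1) data and default model options
are assumptions of the example, not part of the source):

import numpy as np
from statsmodels.tsa.ar_model import AutoReg

# Simulate an AR(1) process (hypothetical data)
rng = np.random.default_rng(0)
y = np.zeros(250)
for t in range(1, 250):
    y[t] = 0.6 * y[t - 1] + rng.standard_normal()

res = AutoReg(y, lags=1).fit()
# One row per lag: Ljung-Box statistic, p-value, and adjusted DF
print(res.test_serial_correlation(lags=5))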
def test_normality(self):
"""
Test for normality of standardized residuals.
Returns
-------
Series
Series containing four values, the test statistic and its p-value,
the skewness and the kurtosis.
Notes
-----
Null hypothesis is normality.
See Also
--------
statsmodels.stats.stattools.jarque_bera
The Jarque-Bera test of normality.
"""
# Deferred to prevent circular import
from statsmodels.stats.stattools import jarque_bera
index = ["Jarque-Bera", "P-value", "Skewness", "Kurtosis"]
return pd.Series(jarque_bera(self.resid), index=index)

test_normality | python | statsmodels/statsmodels | statsmodels/tsa/ar_model.py | https://github.com/statsmodels/statsmodels/blob/master/statsmodels/tsa/ar_model.py | BSD-3-Clause
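
A short sketch, reusing the hypothetical fitted `res` from the first
example above:

# Jarque-Bera test on the model residuals; a large p-value is
# consistent with normally distributed errors.
jb = res.test_normality()
print(jb["Jarque-Bera"], jb["P-value"])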
def test_heteroskedasticity(self, lags=None):
"""
ARCH-LM test of residual heteroskedasticity
Parameters
----------
lags : int
The maximum number of lags to use in the test. Jointly tests that
all squared autocorrelations up to and including lag j are zero for
j = 1, 2, ..., lags. If None, uses min(10, nobs // 5).
Returns
-------
DataFrame
DataFrame containing the test statistic, its p-value, and the degrees
of freedom, with one row per lag tested.
See Also
--------
statsmodels.stats.diagnostic.het_arch
ARCH-LM test.
statsmodels.stats.diagnostic.acorr_lm
LM test for autocorrelation.
"""
from statsmodels.stats.diagnostic import het_arch
lags = int_like(lags, "lags", optional=True)
nobs_effective = self.resid.shape[0]
if lags is None:
lags = min(nobs_effective // 5, 10)
out = []
for lag in range(1, lags + 1):
res = het_arch(self.resid, nlags=lag)
out.append([res[0], res[1], lag])
index = pd.RangeIndex(1, lags + 1, name="Lag")
cols = ["ARCH-LM", "P-value", "DF"]
return pd.DataFrame(out, columns=cols, index=index)

test_heteroskedasticity | python | statsmodels/statsmodels | statsmodels/tsa/ar_model.py | https://github.com/statsmodels/statsmodels/blob/master/statsmodels/tsa/ar_model.py | BSD-3-Clause
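
A brief sketch, again assuming the fitted `res` from the first example:

# ARCH-LM statistics for lags 1 through 3 of the squared residuals
arch = res.test_heteroskedasticity(lags=3)
print(arch)  # columns: ARCH-LM, P-value, DF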
def diagnostic_summary(self):
"""
Returns a summary containing standard model diagnostic tests
Returns
-------
Summary
A summary instance with panels for serial correlation tests,
normality tests and heteroskedasticity tests.
See Also
--------
test_serial_correlation
Test the model's residuals for serial correlation.
test_normality
Test the model's residuals for deviations from normality.
test_heteroskedasticity
Test the model's residuals for conditional heteroskedasticity.
"""
from statsmodels.iolib.table import SimpleTable
spacer = SimpleTable([""])
smry = Summary()
sc = self.test_serial_correlation()
sc = sc.loc[sc.DF > 0]
values = [[i + 1] + row for i, row in enumerate(sc.values.tolist())]
data_fmts = ("%10d", "%10.3f", "%10.3f", "%10d")
if sc.shape[0]:
tab = SimpleTable(
values,
headers=["Lag"] + list(sc.columns),
title="Test of No Serial Correlation",
header_align="r",
data_fmts=data_fmts,
)
smry.tables.append(tab)
smry.tables.append(spacer)
jb = self.test_normality()
data_fmts = ("%10.3f", "%10.3f", "%10.3f", "%10.3f")
tab = SimpleTable(
[jb.values],
headers=list(jb.index),
title="Test of Normality",
header_align="r",
data_fmts=data_fmts,
)
smry.tables.append(tab)
smry.tables.append(spacer)
arch_lm = self.test_heteroskedasticity()
values = [
[i + 1] + row for i, row in enumerate(arch_lm.values.tolist())
]
data_fmts = ("%10d", "%10.3f", "%10.3f", "%10d")
tab = SimpleTable(
values,
headers=["Lag"] + list(arch_lm.columns),
title="Test of Conditional Homoskedasticity",
header_align="r",
data_fmts=data_fmts,
)
smry.tables.append(tab)
return smry

diagnostic_summary | python | statsmodels/statsmodels | statsmodels/tsa/ar_model.py | https://github.com/statsmodels/statsmodels/blob/master/statsmodels/tsa/ar_model.py | BSD-3-Clause
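
Usage is a single call on a fitted results object (hypothetical `res` as
above):

# Prints the serial correlation, normality, and ARCH-LM panels together
print(res.diagnostic_summary())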
def get_prediction(
self, start=None, end=None, dynamic=False, exog=None, exog_oos=None
):
"""
Predictions and prediction intervals
Parameters
----------
start : int, str, or datetime, optional
Zero-indexed observation number at which to start forecasting,
i.e., the first forecast is start. Can also be a date string to
parse or a datetime type. Default is the zeroth observation.
end : int, str, or datetime, optional
Zero-indexed observation number at which to end forecasting, i.e.,
the last forecast is end. Can also be a date string to
parse or a datetime type. However, if the dates index does not
have a fixed frequency, end must be an integer index if you
want out-of-sample prediction. Default is the last observation in
the sample. Unlike standard python slices, end is inclusive so
that all the predictions [start, start+1, ..., end-1, end] are
returned.
dynamic : {bool, int, str, datetime, Timestamp}, optional
Integer offset relative to `start` at which to begin dynamic
prediction. Prior to this observation, true endogenous values
will be used for prediction; starting with this observation and
continuing through the end of prediction, forecasted endogenous
values will be used instead. Datetime-like objects are not
interpreted as offsets. They are instead used to find the index
location of `dynamic` which is then used to compute the offset.
exog : array_like
A replacement exogenous array. Must have the same shape as the
exogenous data array used when the model was created.
exog_oos : array_like
An array containing out-of-sample values of the exogenous variable.
Must have the same number of columns as the exog used when the
model was created, and at least as many rows as the number of
out-of-sample forecasts.
Returns
-------
PredictionResults
Prediction results with mean and prediction intervals
"""
mean = self.predict(
start=start, end=end, dynamic=dynamic, exog=exog, exog_oos=exog_oos
)
mean_var = np.full_like(mean, self.sigma2)
mean_var[np.isnan(mean)] = np.nan
start = 0 if start is None else start
end = self.model._index[-1] if end is None else end
_, _, oos, _ = self.model._get_prediction_index(start, end)
if oos > 0:
ar_params = self._lag_repr()
ma = arma2ma(ar_params, np.ones(1), lags=oos)
mean_var[-oos:] = self.sigma2 * np.cumsum(ma**2)
if isinstance(mean, pd.Series):
mean_var = pd.Series(mean_var, index=mean.index)
return PredictionResults(mean, mean_var)

get_prediction | python | statsmodels/statsmodels | statsmodels/tsa/ar_model.py | https://github.com/statsmodels/statsmodels/blob/master/statsmodels/tsa/ar_model.py | BSD-3-Clause
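
A sketch of extracting forecasts with interval bounds, assuming the fitted
`res` and series `y` from the first example:

# Ten-step-ahead mean forecasts plus 95% prediction intervals
pred = res.get_prediction(start=y.shape[0], end=y.shape[0] + 9)
print(pred.predicted_mean)
print(pred.conf_int(alpha=0.05))  # lower/upper bound per step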
def forecast(self, steps=1, exog=None):
"""
Out-of-sample forecasts
Parameters
----------
steps : {int, str, datetime}, default 1
If an integer, the number of steps to forecast from the end of the
sample. Can also be a date string to parse or a datetime type.
However, if the dates index does not have a fixed frequency,
steps must be an integer.
exog : {ndarray, DataFrame}
Exogenous values to use out-of-sample. Must have same number of
columns as original exog data and at least `steps` rows
Returns
-------
array_like
Array of out-of-sample forecasts.
See Also
--------
AutoRegResults.predict
In- and out-of-sample predictions
AutoRegResults.get_prediction
In- and out-of-sample predictions and confidence intervals
"""
start = self.model.data.orig_endog.shape[0]
if isinstance(steps, (int, np.integer)):
end = start + steps - 1
else:
end = steps
return self.predict(start=start, end=end, dynamic=False, exog_oos=exog)

forecast | python | statsmodels/statsmodels | statsmodels/tsa/ar_model.py | https://github.com/statsmodels/statsmodels/blob/master/statsmodels/tsa/ar_model.py | BSD-3-Clause
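
The common pattern is a fixed number of steps past the sample end
(hypothetical `res` again):

fc = res.forecast(steps=10)  # 10 out-of-sample point forecasts
print(fc)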
def _plot_predictions(
self,
predictions,
start,
end,
alpha,
in_sample,
fig,
figsize,
):
"""Shared helper for plotting predictions"""
from statsmodels.graphics.utils import _import_mpl, create_mpl_fig
_import_mpl()
fig = create_mpl_fig(fig, figsize)
start = 0 if start is None else start
end = self.model._index[-1] if end is None else end
_, _, oos, _ = self.model._get_prediction_index(start, end)
ax = fig.add_subplot(111)
mean = predictions.predicted_mean
if not in_sample and oos:
if isinstance(mean, pd.Series):
mean = mean.iloc[-oos:]
elif not in_sample:
raise ValueError(
"in_sample is False but there are no"
"out-of-sample forecasts to plot."
)
ax.plot(mean, zorder=2, label="Forecast")
if oos and alpha is not None:
ci = np.asarray(predictions.conf_int(alpha))
lower, upper = ci[-oos:, 0], ci[-oos:, 1]
label = f"{1 - alpha:.0%} confidence interval"
x = ax.get_lines()[-1].get_xdata()
ax.fill_between(
x[-oos:],
lower,
upper,
color="gray",
alpha=0.5,
label=label,
zorder=1,
)
ax.legend(loc="best")
return fig

_plot_predictions | python | statsmodels/statsmodels | statsmodels/tsa/ar_model.py | https://github.com/statsmodels/statsmodels/blob/master/statsmodels/tsa/ar_model.py | BSD-3-Clause
def plot_predict(
self,
start=None,
end=None,
dynamic=False,
exog=None,
exog_oos=None,
alpha=0.05,
in_sample=True,
fig=None,
figsize=None,
):
"""
Plot in- and out-of-sample predictions
Parameters
----------\n%(predict_params)s
alpha : {float, None}
The tail probability not covered by the confidence interval. Must
be in (0, 1). Confidence interval is constructed assuming normally
distributed shocks. If None, figure will not show the confidence
interval.
in_sample : bool
Flag indicating whether to include the in-sample period in the
plot.
fig : Figure
An existing figure handle. If not provided, a new figure is
created.
figsize : tuple[float, float]
Tuple containing the figure size values.
Returns
-------
Figure
Figure handle containing the plot.
"""
predictions = self.get_prediction(
start=start, end=end, dynamic=dynamic, exog=exog, exog_oos=exog_oos
)
return self._plot_predictions(
predictions, start, end, alpha, in_sample, fig, figsize
)

plot_predict | python | statsmodels/statsmodels | statsmodels/tsa/ar_model.py | https://github.com/statsmodels/statsmodels/blob/master/statsmodels/tsa/ar_model.py | BSD-3-Clause
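
A plotting sketch, assuming matplotlib is installed and the fitted `res`
from the first example (250 observations):

# In-sample fit from observation 200 onward plus a 20-step forecast fan
fig = res.plot_predict(start=200, end=269, alpha=0.05)
fig.savefig("autoreg_forecast.png")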
def plot_diagnostics(self, lags=10, fig=None, figsize=None):
"""
Diagnostic plots for standardized residuals
Parameters
----------
lags : int, optional
Number of lags to include in the correlogram. Default is 10.
fig : Figure, optional
If given, subplots are created in this figure instead of in a new
figure. Note that the 2x2 grid will be created in the provided
figure using `fig.add_subplot()`.
figsize : tuple, optional
If a figure is created, this argument allows specifying a size.
The tuple is (width, height).
Notes
-----
Produces a 2x2 plot grid with the following plots (ordered clockwise
from top left):
1. Standardized residuals over time
2. Histogram plus estimated density of standardized residuals, along
with a Normal(0,1) density plotted for reference.
3. Normal Q-Q plot, with Normal reference line.
4. Correlogram
See Also
--------
statsmodels.graphics.gofplots.qqplot
statsmodels.graphics.tsaplots.plot_acf
"""
from statsmodels.graphics.utils import _import_mpl, create_mpl_fig
_import_mpl()
fig = create_mpl_fig(fig, figsize)
# Eliminate residuals associated with burned or diffuse likelihoods
resid = self.resid
# Top-left: residuals vs time
ax = fig.add_subplot(221)
if hasattr(self.model.data, "dates") and self.data.dates is not None:
x = self.model.data.dates._mpl_repr()
x = x[self.model.hold_back :]
else:
hold_back = self.model.hold_back
x = hold_back + np.arange(self.resid.shape[0])
std_resid = resid / np.sqrt(self.sigma2)
ax.plot(x, std_resid)
ax.hlines(0, x[0], x[-1], alpha=0.5)
ax.set_xlim(x[0], x[-1])
ax.set_title("Standardized residual")
# Top-right: histogram, Gaussian kernel density, Normal density
# Can only do histogram and Gaussian kernel density on the non-null
# elements
std_resid_nonmissing = std_resid[~(np.isnan(resid))]
ax = fig.add_subplot(222)
ax.hist(std_resid_nonmissing, density=True, label="Hist")
kde = gaussian_kde(std_resid_nonmissing)
xlim = (-1.96 * 2, 1.96 * 2)
x = np.linspace(xlim[0], xlim[1])
ax.plot(x, kde(x), label="KDE")
ax.plot(x, norm.pdf(x), label="N(0,1)")
ax.set_xlim(xlim)
ax.legend()
ax.set_title("Histogram plus estimated density")
# Bottom-left: QQ plot
ax = fig.add_subplot(223)
from statsmodels.graphics.gofplots import qqplot
qqplot(std_resid, line="s", ax=ax)
ax.set_title("Normal Q-Q")
# Bottom-right: Correlogram
ax = fig.add_subplot(224)
from statsmodels.graphics.tsaplots import plot_acf
plot_acf(resid, ax=ax, lags=lags)
ax.set_title("Correlogram")
ax.set_ylim(-1, 1)
return fig

plot_diagnostics | python | statsmodels/statsmodels | statsmodels/tsa/ar_model.py | https://github.com/statsmodels/statsmodels/blob/master/statsmodels/tsa/ar_model.py | BSD-3-Clause
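
A sketch of the diagnostic panel, again assuming matplotlib and the fitted
`res` from above:

# 2x2 grid: residuals, histogram/KDE, Q-Q plot, and correlogram
fig = res.plot_diagnostics(lags=20, figsize=(10, 8))
fig.savefig("autoreg_diagnostics.png")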
def summary(self, alpha=0.05):
"""
Summarize the Model
Parameters
----------
alpha : float, optional
Significance level for the confidence intervals.
Returns
-------
smry : Summary instance
This holds the summary table and text, which can be printed or
converted to various output formats.
See Also
--------
statsmodels.iolib.summary.Summary
"""
model = self.model
title = model.__class__.__name__ + " Model Results"
method = "Conditional MLE"
# get sample
start = self._hold_back
if self.data.dates is not None:
dates = self.data.dates
sample = [dates[start].strftime("%m-%d-%Y")]
sample += ["- " + dates[-1].strftime("%m-%d-%Y")]
else:
sample = [str(start), str(len(self.data.orig_endog))]
model = model.__class__.__name__
if self.model.seasonal:
model = "Seas. " + model
if self.ar_lags is not None and len(self.ar_lags) < self._max_lag:
model = "Restr. " + model
if self.model.exog is not None:
model += "-X"
order = f"({self._max_lag})"
dep_name = str(self.model.endog_names)
top_left = [
("Dep. Variable:", [dep_name]),
("Model:", [model + order]),
("Method:", [method]),
("Date:", None),
("Time:", None),
("Sample:", [sample[0]]),
("", [sample[1]]),
]
top_right = [
("No. Observations:", [str(len(self.model.endog))]),
("Log Likelihood", ["%#5.3f" % self.llf]),
("S.D. of innovations", ["%#5.3f" % self.sigma2**0.5]),
("AIC", ["%#5.3f" % self.aic]),
("BIC", ["%#5.3f" % self.bic]),
("HQIC", ["%#5.3f" % self.hqic]),
]
smry = Summary()
smry.add_table_2cols(
self, gleft=top_left, gright=top_right, title=title
)
smry.add_table_params(self, alpha=alpha, use_t=False)
# Make the roots table
from statsmodels.iolib.table import SimpleTable
if self._max_lag:
arstubs = ["AR.%d" % i for i in range(1, self._max_lag + 1)]
stubs = arstubs
roots = self.roots
freq = self.arfreq
modulus = np.abs(roots)
data = np.column_stack((roots.real, roots.imag, modulus, freq))
roots_table = SimpleTable(
[
(
"%17.4f" % row[0],
"%+17.4fj" % row[1],
"%17.4f" % row[2],
"%17.4f" % row[3],
)
for row in data
],
headers=[
" Real",
" Imaginary",
" Modulus",
" Frequency",
],
title="Roots",
stubs=stubs,
)
smry.tables.append(roots_table)
if self._summary_text:
extra_txt = smry.extra_txt if smry.extra_txt is not None else []
smry.add_extra_txt(extra_txt + [self._summary_text])
return smry

summary | python | statsmodels/statsmodels | statsmodels/tsa/ar_model.py | https://github.com/statsmodels/statsmodels/blob/master/statsmodels/tsa/ar_model.py | BSD-3-Clause
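
Typical usage on the hypothetical `res` from the first example:

# Text tables: fit statistics, parameter estimates, and AR roots
print(res.summary(alpha=0.05))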
def apply(self, endog, exog=None, refit=False, fit_kwargs=None):
"""
Apply the fitted parameters to new data unrelated to the original data
Creates a new result object using the current fitted parameters,
applied to a completely new dataset that is assumed to be unrelated to
the model's original data. The new results can then be used for
analysis or forecasting.
Parameters
----------
endog : array_like
New observations from the modeled time-series process.
exog : array_like, optional
New observations of exogenous regressors, if applicable.
refit : bool, optional
Whether to re-fit the parameters, using the new dataset.
Default is False (so parameters from the current results object
are used to create the new results object).
fit_kwargs : dict, optional
Keyword arguments to pass to `fit` (if `refit=True`).
Returns
-------
AutoRegResults
Updated results object containing results for the new dataset.
See Also
--------
AutoRegResults.append
statsmodels.tsa.statespace.mlemodel.MLEResults.apply
Notes
-----
The `endog` argument to this method should consist of new observations
that are not necessarily related to the original model's `endog`
dataset.
Care is needed when using deterministic processes with cyclical
components such as seasonal dummies or Fourier series. These
deterministic components will align to the first observation
in the data and so it is essential that any new data have the
same initial period.
Examples
--------
>>> import pandas as pd
>>> from statsmodels.tsa.ar_model import AutoReg
>>> index = pd.period_range(start='2000', periods=3, freq='Y')
>>> original_observations = pd.Series([1.2, 1.5, 1.8], index=index)
>>> mod = AutoReg(original_observations, lags=1, trend="n")
>>> res = mod.fit()
>>> print(res.params)
y.L1 1.219512
dtype: float64
>>> print(res.fittedvalues)
2001 1.463415
2002 1.829268
Freq: A-DEC, dtype: float64
>>> print(res.forecast(1))
2003 2.195122
Freq: A-DEC, dtype: float64
>>> new_index = pd.period_range(start='1980', periods=3, freq='Y')
>>> new_observations = pd.Series([1.4, 0.3, 1.2], index=new_index)
>>> new_res = res.apply(new_observations)
>>> print(new_res.params)
y.L1 1.219512
dtype: float64
>>> print(new_res.fittedvalues)
1981 1.707317
1982 0.365854
Freq: A-DEC, dtype: float64
>>> print(new_res.forecast(1))
1983 1.463415
Freq: A-DEC, dtype: float64
"""
existing = self.model
try:
deterministic = existing.deterministic
if deterministic is not None:
if isinstance(endog, (pd.Series, pd.DataFrame)):
index = endog.index
else:
index = np.arange(endog.shape[0])
deterministic = deterministic.apply(index)
mod = AutoReg(
endog,
lags=existing.ar_lags,
trend=existing.trend,
seasonal=existing.seasonal,
exog=exog,
hold_back=existing.hold_back,
period=existing.period,
deterministic=deterministic,
old_names=False,
)
except Exception as exc:
error = (
"An exception occured during the creation of the cloned "
"AutoReg instance when applying the existing model "
"specification to the new data. The original traceback "
"appears below."
)
exc.args = (error,) + exc.args
raise exc.with_traceback(exc.__traceback__)
if (mod.exog is None) != (existing.exog is None):
if existing.exog is not None:
raise ValueError(
"exog must be provided when the original model contained "
"exog variables"
)
raise ValueError(
"exog must be None when the original model did not contain "
"exog variables"
)
if (
existing.exog is not None
and existing.exog.shape[1] != mod.exog.shape[1]
):
raise ValueError(
"The number of exog variables passed must match the original "
f"number of exog values ({existing.exog.shape[1]})"
)
if refit:
fit_kwargs = {} if fit_kwargs is None else fit_kwargs
return mod.fit(**fit_kwargs)
smry_txt = (
"Parameters and standard errors were estimated using a different "
"dataset and were then applied to this dataset."
)
res = AutoRegResults(
mod,
self.params,
self.cov_params_default,
self.normalized_cov_params,
use_t=self.use_t,
summary_text=smry_txt,
)
return AutoRegResultsWrapper(res)

apply | python | statsmodels/statsmodels | statsmodels/tsa/ar_model.py | https://github.com/statsmodels/statsmodels/blob/master/statsmodels/tsa/ar_model.py | BSD-3-Clause
def append(self, endog, exog=None, refit=False, fit_kwargs=None):
"""
Append observations to the ones used to fit the model
Creates a new result object using the current fitted parameters
where additional observations are appended to the data used
to fit the model. The new results can then be used for
analysis or forecasting.
Parameters
----------
endog : array_like
New observations from the modeled time-series process.
exog : array_like, optional
New observations of exogenous regressors, if applicable.
refit : bool, optional
Whether to re-fit the parameters, using the new dataset.
Default is False (so parameters from the current results object
are used to create the new results object).
fit_kwargs : dict, optional
Keyword arguments to pass to `fit` (if `refit=True`).
Returns
-------
AutoRegResults
Updated results object containing results for the new dataset.
See Also
--------
AutoRegResults.apply
statsmodels.tsa.statespace.mlemodel.MLEResults.append
Notes
-----
The endog and exog arguments to this method must be formatted in the
same way (e.g. Pandas Series versus Numpy array) as were the endog
and exog arrays passed to the original model.
The endog argument to this method should consist of new observations
that occurred directly after the last element of endog. For any other
kind of dataset, see the apply method.
Examples
--------
>>> import pandas as pd
>>> from statsmodels.tsa.ar_model import AutoReg
>>> index = pd.period_range(start='2000', periods=3, freq='Y')
>>> original_observations = pd.Series([1.2, 1.4, 1.8], index=index)
>>> mod = AutoReg(original_observations, lags=1, trend="n")
>>> res = mod.fit()
>>> print(res.params)
y.L1 1.235294
dtype: float64
>>> print(res.fittedvalues)
2001 1.482353
2002 1.729412
Freq: A-DEC, dtype: float64
>>> print(res.forecast(1))
2003 2.223529
Freq: A-DEC, dtype: float64
>>> new_index = pd.period_range(start='2003', periods=3, freq='Y')
>>> new_observations = pd.Series([2.1, 2.4, 2.7], index=new_index)
>>> updated_res = res.append(new_observations)
>>> print(updated_res.params)
y.L1 1.235294
dtype: float64
>>> print(updated_res.fittedvalues)
2001 1.482353
2002 1.729412
2003 2.223529
2004 2.594118
2005 2.964706
Freq: A-DEC, dtype: float64
>>> print(updated_res.forecast(1))
2006 3.335294
Freq: A-DEC, dtype: float64
"""
def _check(orig, new, name, use_pandas=True):
from statsmodels.tsa.statespace.mlemodel import _check_index
typ = type(orig)
if not isinstance(new, typ):
raise TypeError(
f"{name} must have the same type as the {name} used to "
f"originally create the model ({typ.__name__})."
)
if not use_pandas:
return np.concatenate([orig, new])
start = len(orig)
end = start + len(new) - 1
_, _, _, append_ix = self.model._get_prediction_index(start, end)
_check_index(append_ix, new, title=name)
return pd.concat([orig, new], axis=0)
existing = self.model
no_exog = existing.exog is None
if no_exog != (exog is None):
if no_exog:
err = (
"Original model does not contain exog data but exog data "
"passed"
)
else:
err = "Original model has exog data but not exog data passed"
raise ValueError(err)
if isinstance(existing.data.orig_endog, (pd.Series, pd.DataFrame)):
endog = _check(existing.data.orig_endog, endog, "endog")
else:
endog = _check(
existing.endog, np.asarray(endog), "endog", use_pandas=False
)
if isinstance(existing.data.orig_exog, (pd.Series, pd.DataFrame)):
exog = _check(existing.data.orig_exog, exog, "exog")
elif exog is not None:
exog = _check(
existing.exog, np.asarray(exog), "endog", use_pandas=False
)
return self.apply(endog, exog, refit=refit, fit_kwargs=fit_kwargs)

append | python | statsmodels/statsmodels | statsmodels/tsa/ar_model.py | https://github.com/statsmodels/statsmodels/blob/master/statsmodels/tsa/ar_model.py | BSD-3-Clause
def ic_no_data():
"""Fake mod and results to handle no regressor case"""
mod = SimpleNamespace(
nobs=y.shape[0], endog=y, exog=np.empty((y.shape[0], 0))
)
llf = OLS.loglike(mod, np.empty(0))
res = SimpleNamespace(
resid=y, nobs=y.shape[0], llf=llf, df_model=0, k_constant=0
)
return compute_ics(res)

ar_select_order.ic_no_data | python | statsmodels/statsmodels | statsmodels/tsa/ar_model.py | https://github.com/statsmodels/statsmodels/blob/master/statsmodels/tsa/ar_model.py | BSD-3-Clause
def ar_select_order(
endog,
maxlag,
ic="bic",
glob=False,
trend: Literal["n", "c", "ct", "ctt"] = "c",
seasonal=False,
exog=None,
hold_back=None,
period=None,
missing="none",
old_names=False,
):
"""
Autoregressive AR-X(p) model order selection.
Parameters
----------
endog : array_like
A 1-d endogenous response variable. The independent variable.
maxlag : int
The maximum lag to consider.
ic : {'aic', 'hqic', 'bic'}
The information criterion to use in the selection.
glob : bool
Flag indicating whether to use a global search across all combinations
of lags. In practice, this option is not computationally feasible when
maxlag is larger than 15 (or perhaps 20) since the global search
requires fitting 2**maxlag models.\n%(auto_reg_params)s
Returns
-------
AROrderSelectionResults
A results holder containing the model and the complete set of
information criteria for all models fit.
Examples
--------
>>> from statsmodels.tsa.ar_model import ar_select_order
>>> data = sm.datasets.sunspots.load_pandas().data['SUNACTIVITY']
Determine the optimal lag structure
>>> mod = ar_select_order(data, maxlag=13)
>>> mod.ar_lags
array([1, 2, 3, 4, 5, 6, 7, 8, 9])
Determine the optimal lag structure with seasonal terms
>>> mod = ar_select_order(data, maxlag=13, seasonal=True, period=12)
>>> mod.ar_lags
array([1, 2, 3, 4, 5, 6, 7, 8, 9])
Globally determine the optimal lag structure
>>> mod = ar_select_order(data, maxlag=13, glob=True)
>>> mod.ar_lags
array([1, 2, 9])
"""
full_mod = AutoReg(
endog,
maxlag,
trend=trend,
seasonal=seasonal,
exog=exog,
hold_back=hold_back,
period=period,
missing=missing,
old_names=old_names,
)
nexog = full_mod.exog.shape[1] if full_mod.exog is not None else 0
y, x = full_mod._y, full_mod._x
base_col = x.shape[1] - nexog - maxlag
sel = np.ones(x.shape[1], dtype=bool)
ics: list[tuple[int | tuple[int, ...], tuple[float, float, float]]] = []
def compute_ics(res):
nobs = res.nobs
df_model = res.df_model
sigma2 = 1.0 / nobs * sumofsq(res.resid)
llf = -nobs * (np.log(2 * np.pi * sigma2) + 1) / 2
res = SimpleNamespace(
nobs=nobs, df_model=df_model, sigma2=sigma2, llf=llf
)
aic = call_cached_func(AutoRegResults.aic, res)
bic = call_cached_func(AutoRegResults.bic, res)
hqic = call_cached_func(AutoRegResults.hqic, res)
return aic, bic, hqic
def ic_no_data():
"""Fake mod and results to handle no regressor case"""
mod = SimpleNamespace(
nobs=y.shape[0], endog=y, exog=np.empty((y.shape[0], 0))
)
llf = OLS.loglike(mod, np.empty(0))
res = SimpleNamespace(
resid=y, nobs=y.shape[0], llf=llf, df_model=0, k_constant=0
)
return compute_ics(res)
if not glob:
sel[base_col : base_col + maxlag] = False
for i in range(maxlag + 1):
sel[base_col : base_col + i] = True
if not np.any(sel):
ics.append((0, ic_no_data()))
continue
res = OLS(y, x[:, sel]).fit()
lags = tuple(j for j in range(1, i + 1))
lags = 0 if not lags else lags
ics.append((lags, compute_ics(res)))
else:
bits = np.arange(2**maxlag, dtype=np.int32)[:, None]
bits = bits.view(np.uint8)
bits = np.unpackbits(bits).reshape(-1, 32)
for i in range(4):
bits[:, 8 * i : 8 * (i + 1)] = bits[:, 8 * i : 8 * (i + 1)][
:, ::-1
]
masks = bits[:, :maxlag]
for mask in masks:
sel[base_col : base_col + maxlag] = mask
if not np.any(sel):
ics.append((0, ic_no_data()))
continue
res = OLS(y, x[:, sel]).fit()
lags = tuple(np.where(mask)[0] + 1)
lags = 0 if not lags else lags
ics.append((lags, compute_ics(res)))
key_loc = {"aic": 0, "bic": 1, "hqic": 2}[ic]
ics = sorted(ics, key=lambda x: x[1][key_loc])
selected_model = ics[0][0]
mod = AutoReg(
endog,
selected_model,
trend=trend,
seasonal=seasonal,
exog=exog,
hold_back=hold_back,
period=period,
missing=missing,
old_names=old_names,
)
return AROrderSelectionResults(mod, ics, trend, seasonal, period)

ar_select_order | python | statsmodels/statsmodels | statsmodels/tsa/ar_model.py | https://github.com/statsmodels/statsmodels/blob/master/statsmodels/tsa/ar_model.py | BSD-3-Clause
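
A sketch, on hypothetical simulated data, of fitting the selected
specification afterwards:

import numpy as np
from statsmodels.tsa.ar_model import ar_select_order

rng = np.random.default_rng(0)
y = np.zeros(500)
for t in range(2, 500):
    y[t] = 0.5 * y[t - 1] + 0.2 * y[t - 2] + rng.standard_normal()

sel = ar_select_order(y, maxlag=10, ic="bic")
print(sel.ar_lags)     # lags chosen by BIC
res = sel.model.fit()  # fit the selected AutoReg specification
print(res.params)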
def model(self) -> AutoReg:
"""The model selected using the chosen information criterion."""
return self._model

model | python | statsmodels/statsmodels | statsmodels/tsa/ar_model.py | https://github.com/statsmodels/statsmodels/blob/master/statsmodels/tsa/ar_model.py | BSD-3-Clause
def seasonal(self) -> bool:
"""Flag indicating if a seasonal component is included."""
return self._seasonal

seasonal | python | statsmodels/statsmodels | statsmodels/tsa/ar_model.py | https://github.com/statsmodels/statsmodels/blob/master/statsmodels/tsa/ar_model.py | BSD-3-Clause
def trend(self) -> Literal["n", "c", "ct", "ctt"]:
"""The trend included in the model selection."""
return self._trend

trend | python | statsmodels/statsmodels | statsmodels/tsa/ar_model.py | https://github.com/statsmodels/statsmodels/blob/master/statsmodels/tsa/ar_model.py | BSD-3-Clause
def period(self) -> int | None:
"""The period of the seasonal component."""
return self._period

period | python | statsmodels/statsmodels | statsmodels/tsa/ar_model.py | https://github.com/statsmodels/statsmodels/blob/master/statsmodels/tsa/ar_model.py | BSD-3-Clause
def aic(self) -> dict[int | tuple[int, ...], float]:
"""
The Akaike information criterion for the models fit.
Returns
-------
dict[tuple, float]
"""
return self._aic

aic | python | statsmodels/statsmodels | statsmodels/tsa/ar_model.py | https://github.com/statsmodels/statsmodels/blob/master/statsmodels/tsa/ar_model.py | BSD-3-Clause
def bic(self) -> dict[int | tuple[int, ...], float]:
"""
The Bayesian (Schwarz) information criteria for the models fit.
Returns
-------
dict[tuple, float]
"""
return self._bic

bic | python | statsmodels/statsmodels | statsmodels/tsa/ar_model.py | https://github.com/statsmodels/statsmodels/blob/master/statsmodels/tsa/ar_model.py | BSD-3-Clause
def hqic(self) -> dict[int | tuple[int, ...], float]:
"""
The Hannan-Quinn information criteria for the models fit.
Returns
-------
dict[tuple, float]
"""
return self._hqic

hqic | python | statsmodels/statsmodels | statsmodels/tsa/ar_model.py | https://github.com/statsmodels/statsmodels/blob/master/statsmodels/tsa/ar_model.py | BSD-3-Clause
def ar_lags(self) -> list[int] | None:
"""The lags included in the selected model."""
return self._model.ar_lags

ar_lags | python | statsmodels/statsmodels | statsmodels/tsa/ar_model.py | https://github.com/statsmodels/statsmodels/blob/master/statsmodels/tsa/ar_model.py | BSD-3-Clause
def add_trend(x, trend="c", prepend=False, has_constant="skip"):
"""
Add a trend and/or constant to an array.
Parameters
----------
x : array_like
Original array of data.
trend : str {'n', 'c', 't', 'ct', 'ctt'}
The trend to add.
* 'n' add no trend.
* 'c' add constant only.
* 't' add trend only.
* 'ct' add constant and linear trend.
* 'ctt' add constant and linear and quadratic trend.
prepend : bool
If True, prepends the new data to the columns of X.
has_constant : str {'raise', 'add', 'skip'}
Controls what happens when trend is 'c' and a constant column already
exists in x. 'raise' will raise an error. 'add' will add a column of
1s. 'skip' will return the data without change. 'skip' is the default.
Returns
-------
array_like
The original data with the additional trend columns. If x is a
pandas Series or DataFrame, then the trend column names are 'const',
'trend' and 'trend_squared'.
See Also
--------
statsmodels.tools.tools.add_constant
Add a constant column to an array.
Notes
-----
Returns columns as ['ctt','ct','c'] whenever applicable. There is currently
no checking for an existing trend.
"""
prepend = bool_like(prepend, "prepend")
trend = string_like(trend, "trend", options=("n", "c", "t", "ct", "ctt"))
has_constant = string_like(
has_constant, "has_constant", options=("raise", "add", "skip")
)
# TODO: could be generalized for a trend of arbitrary order
columns = ["const", "trend", "trend_squared"]
if trend == "n":
return x.copy()
elif trend == "c": # handles structured arrays
columns = columns[:1]
trendorder = 0
elif trend == "ct" or trend == "t":
columns = columns[:2]
if trend == "t":
columns = columns[1:2]
trendorder = 1
elif trend == "ctt":
trendorder = 2
if _is_recarray(x):
from statsmodels.tools.sm_exceptions import recarray_exception
raise NotImplementedError(recarray_exception)
is_pandas = _is_using_pandas(x, None)
if is_pandas:
if isinstance(x, pd.Series):
x = pd.DataFrame(x)
else:
x = x.copy()
else:
x = np.asanyarray(x)
nobs = len(x)
trendarr = np.vander(
np.arange(1, nobs + 1, dtype=np.float64), trendorder + 1
)
# put in order ctt
trendarr = np.fliplr(trendarr)
if trend == "t":
trendarr = trendarr[:, 1]
if "c" in trend:
if is_pandas:
# Mixed type protection
def safe_is_const(s):
try:
return np.ptp(s) == 0.0 and np.any(s != 0.0)
except Exception:
return False
col_const = x.apply(safe_is_const, 0)
else:
ptp0 = np.ptp(np.asanyarray(x), axis=0)
col_is_const = ptp0 == 0
nz_const = col_is_const & (x[0] != 0)
col_const = nz_const
if np.any(col_const):
if has_constant == "raise":
if x.ndim == 1:
base_err = "x is constant."
else:
columns = np.arange(x.shape[1])[col_const]
if isinstance(x, pd.DataFrame):
columns = x.columns
const_cols = ", ".join([str(c) for c in columns])
base_err = (
"x contains one or more constant columns. Column(s) "
f"{const_cols} are constant."
)
raise ValueError(
f"{base_err} Adding a constant with trend='{trend}' is "
"not allowed."
)
elif has_constant == "skip":
columns = columns[1:]
trendarr = trendarr[:, 1:]
order = 1 if prepend else -1
if is_pandas:
trendarr = pd.DataFrame(trendarr, index=x.index, columns=columns)
x = [trendarr, x]
x = pd.concat(x[::order], axis=1)
else:
x = [trendarr, x]
x = np.column_stack(x[::order])
return x

add_trend | python | statsmodels/statsmodels | statsmodels/tsa/tsatools.py | https://github.com/statsmodels/statsmodels/blob/master/statsmodels/tsa/tsatools.py | BSD-3-Clause
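
A small sketch of the most common call, assuming a pandas input:

import numpy as np
import pandas as pd
from statsmodels.tsa.tsatools import add_trend

s = pd.Series(np.arange(5.0), name="y")
out = add_trend(s, trend="ct", prepend=True)
print(out.columns.tolist())  # ['const', 'trend', 'y']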
def add_lag(x, col=None, lags=1, drop=False, insert=True):
"""
Returns an array with lags included given an array.
Parameters
----------
x : array_like
An array or NumPy ndarray subclass. Can be either a 1d or 2d array with
observations in columns.
col : int or None
`col` can be an int of the zero-based column index. If it's a
1d array `col` can be None.
lags : int
The number of lags desired.
drop : bool
If True, drops the contemporaneous variable from the returned array.
insert : bool or int
If True, inserts the lagged values after `col`. If False, appends
the data. If int inserts the lags at int.
Returns
-------
array : ndarray
Array with lags
Examples
--------
>>> import statsmodels.api as sm
>>> data = sm.datasets.macrodata.load()
>>> data = data.data[['year','quarter','realgdp','cpi']]
>>> data = sm.tsa.add_lag(data, 'realgdp', lags=2)
Notes
-----
Trims the array both forward and backward so that the length of the
returned array is len(x) - lags. The lags are returned in increasing
order, i.e., t-1, t-2, ..., t-lags.
"""
lags = int_like(lags, "lags")
drop = bool_like(drop, "drop")
x = array_like(x, "x", ndim=2)
if col is None:
col = 0
# handle negative index
if col < 0:
col = x.shape[1] + col
if x.ndim == 1:
x = x[:, None]
contemp = x[:, col]
if insert is True:
ins_idx = col + 1
elif insert is False:
ins_idx = x.shape[1]
else:
if insert < 0: # handle negative index
insert = x.shape[1] + insert + 1
if insert > x.shape[1]:
insert = x.shape[1]
warnings.warn(
"insert > number of variables, inserting at the"
" last position",
ValueWarning,
)
ins_idx = insert
ndlags = lagmat(contemp, lags, trim="Both")
first_cols = lrange(ins_idx)
last_cols = lrange(ins_idx, x.shape[1])
if drop:
if col in first_cols:
first_cols.pop(first_cols.index(col))
else:
last_cols.pop(last_cols.index(col))
return np.column_stack((x[lags:, first_cols], ndlags, x[lags:, last_cols]))

add_lag | python | statsmodels/statsmodels | statsmodels/tsa/tsatools.py | https://github.com/statsmodels/statsmodels/blob/master/statsmodels/tsa/tsatools.py | BSD-3-Clause
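
A numeric sketch of the default insert-after behavior:

import numpy as np
from statsmodels.tsa.tsatools import add_lag

x = np.arange(12.0).reshape(6, 2)
# Two lags of column 0 are inserted right after it; 2 rows are trimmed
out = add_lag(x, col=0, lags=2)
print(out.shape)  # (4, 4)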
def detrend(x, order=1, axis=0):
"""
Detrend an array with a trend of given order along axis 0 or 1.
Parameters
----------
x : array_like, 1d or 2d
Data, if 2d, then each row or column is independently detrended with
the same trendorder, but independent trend estimates.
order : int
The polynomial order of the trend, zero is constant, one is
linear trend, two is quadratic trend.
axis : int
Axis can be either 0, observations by rows, or 1, observations by
columns.
Returns
-------
ndarray
The detrended series is the residual of the linear regression of the
data on the trend of given order.
"""
order = int_like(order, "order")
axis = int_like(axis, "axis")
if x.ndim == 2 and int(axis) == 1:
x = x.T
elif x.ndim > 2:
raise NotImplementedError(
"x.ndim > 2 is not implemented until it is needed"
)
nobs = x.shape[0]
if order == 0:
# Special case demean
resid = x - x.mean(axis=0)
else:
trends = np.vander(np.arange(float(nobs)), N=order + 1)
beta = np.linalg.pinv(trends).dot(x)
resid = x - np.dot(trends, beta)
if x.ndim == 2 and int(axis) == 1:
resid = resid.T
return resid

detrend | python | statsmodels/statsmodels | statsmodels/tsa/tsatools.py | https://github.com/statsmodels/statsmodels/blob/master/statsmodels/tsa/tsatools.py | BSD-3-Clause
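
A quick sketch removing a fitted linear trend from noisy data:

import numpy as np
from statsmodels.tsa.tsatools import detrend

t = np.arange(100.0)
y = 3.0 + 0.5 * t + np.random.default_rng(0).standard_normal(100)
resid = detrend(y, order=1)  # residual after removing constant + trend
print(resid.mean())  # approximately zero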
def lagmat(
x,
maxlag: int,
trim: Literal["forward", "backward", "both", "none"] = "forward",
original: Literal["ex", "sep", "in"] = "ex",
use_pandas: bool = False,
) -> NDArray | DataFrame | tuple[NDArray, NDArray] | tuple[DataFrame, DataFrame]:
"""
Create 2d array of lags.
Parameters
----------
x : array_like
Data; if 2d, observation in rows and variables in columns.
maxlag : int
All lags from zero to maxlag are included.
trim : {'forward', 'backward', 'both', 'none', None}
The trimming method to use.
* 'forward' : trim invalid observations in front.
* 'backward' : trim invalid initial observations.
* 'both' : trim invalid observations on both sides.
* 'none', None : no trimming of observations.
original : {'ex','sep','in'}
How the original is treated.
* 'ex' : drops the original array returning only the lagged values.
* 'in' : returns the original array and the lagged values as a single
array.
* 'sep' : returns a tuple (original array, lagged values). The original
array is truncated to have the same number of rows as
the returned lagmat.
use_pandas : bool
If true, returns a DataFrame when the input is a pandas
Series or DataFrame. If false, return numpy ndarrays.
Returns
-------
lagmat : ndarray
The array with lagged observations.
y : ndarray, optional
Only returned if original == 'sep'.
Notes
-----
When using a pandas DataFrame or Series with use_pandas=True, trim can only
be 'forward' or 'both' since it is not possible to consistently extend
index values.
Examples
--------
>>> from statsmodels.tsa.tsatools import lagmat
>>> import numpy as np
>>> X = np.arange(1,7).reshape(-1,2)
>>> lagmat(X, maxlag=2, trim="forward", original='in')
array([[ 1., 2., 0., 0., 0., 0.],
[ 3., 4., 1., 2., 0., 0.],
[ 5., 6., 3., 4., 1., 2.]])
>>> lagmat(X, maxlag=2, trim="backward", original='in')
array([[ 5., 6., 3., 4., 1., 2.],
[ 0., 0., 5., 6., 3., 4.],
[ 0., 0., 0., 0., 5., 6.]])
>>> lagmat(X, maxlag=2, trim="both", original='in')
array([[ 5., 6., 3., 4., 1., 2.]])
>>> lagmat(X, maxlag=2, trim="none", original='in')
array([[ 1., 2., 0., 0., 0., 0.],
[ 3., 4., 1., 2., 0., 0.],
[ 5., 6., 3., 4., 1., 2.],
[ 0., 0., 5., 6., 3., 4.],
[ 0., 0., 0., 0., 5., 6.]])
"""
maxlag = int_like(maxlag, "maxlag")
use_pandas = bool_like(use_pandas, "use_pandas")
trim = string_like(
trim,
"trim",
optional=True,
options=("forward", "backward", "both", "none"),
)
original = string_like(original, "original", options=("ex", "sep", "in"))
# TODO: allow list of lags additional to maxlag
orig = x
x = array_like(x, "x", ndim=2, dtype=None)
is_pandas = _is_using_pandas(orig, None) and use_pandas
trim = "none" if trim is None else trim
trim = trim.lower()
if is_pandas and trim in ("none", "backward"):
raise ValueError(
"trim cannot be 'none' or 'backward' when used on "
"Series or DataFrames"
)
dropidx = 0
nobs, nvar = x.shape
if original in ["ex", "sep"]:
dropidx = nvar
if maxlag >= nobs:
raise ValueError("maxlag should be < nobs")
lm = np.zeros((nobs + maxlag, nvar * (maxlag + 1)))
for k in range(0, int(maxlag + 1)):
lm[
maxlag - k : nobs + maxlag - k,
nvar * (maxlag - k) : nvar * (maxlag - k + 1),
] = x
if trim in ("none", "forward"):
startobs = 0
elif trim in ("backward", "both"):
startobs = maxlag
else:
raise ValueError("trim option not valid")
if trim in ("none", "backward"):
stopobs = len(lm)
else:
stopobs = nobs
if is_pandas:
x = orig
if isinstance(x, DataFrame):
x_columns = [str(c) for c in x.columns]
if len(set(x_columns)) != x.shape[1]:
raise ValueError(
"Columns names must be distinct after conversion to string "
"(if not already strings)."
)
else:
x_columns = [str(x.name)]
columns = [str(col) for col in x_columns]
for lag in range(maxlag):
lag_str = str(lag + 1)
columns.extend([str(col) + ".L." + lag_str for col in x_columns])
lm = DataFrame(lm[:stopobs], index=x.index, columns=columns)
lags = lm.iloc[startobs:]
if original in ("sep", "ex"):
leads = lags[x_columns]
lags = lags.drop(x_columns, axis=1)
else:
lags = lm[startobs:stopobs, dropidx:]
if original == "sep":
leads = lm[startobs:stopobs, :dropidx]
if original == "sep":
return lags, leads
else:
return lags | Create 2d array of lags.
Parameters
----------
x : array_like
Data; if 2d, observations in rows and variables in columns.
maxlag : int
All lags from zero to maxlag are included.
trim : {'forward', 'backward', 'both', 'none', None}
The trimming method to use.
* 'forward' : trim invalid observations in front.
* 'backward' : trim invalid initial observations.
* 'both' : trim invalid observations on both sides.
* 'none', None : no trimming of observations.
original : {'ex','sep','in'}
How the original is treated.
* 'ex' : drops the original array returning only the lagged values.
* 'in' : returns the original array and the lagged values as a single
array.
* 'sep' : returns a tuple (original array, lagged values). The original
array is truncated to have the same number of rows as
the returned lagmat.
use_pandas : bool
If true, returns a DataFrame when the input is a pandas
Series or DataFrame. If false, returns numpy ndarrays.
Returns
-------
lagmat : ndarray
The array with lagged observations.
y : ndarray, optional
Only returned if original == 'sep'.
Notes
-----
When using a pandas DataFrame or Series with use_pandas=True, trim can only
be 'forward' or 'both' since it is not possible to consistently extend
index values.
Examples
--------
>>> from statsmodels.tsa.tsatools import lagmat
>>> import numpy as np
>>> X = np.arange(1,7).reshape(-1,2)
>>> lagmat(X, maxlag=2, trim="forward", original='in')
array([[ 1., 2., 0., 0., 0., 0.],
[ 3., 4., 1., 2., 0., 0.],
[ 5., 6., 3., 4., 1., 2.]])
>>> lagmat(X, maxlag=2, trim="backward", original='in')
array([[ 5., 6., 3., 4., 1., 2.],
[ 0., 0., 5., 6., 3., 4.],
[ 0., 0., 0., 0., 5., 6.]])
>>> lagmat(X, maxlag=2, trim="both", original='in')
array([[ 5., 6., 3., 4., 1., 2.]])
>>> lagmat(X, maxlag=2, trim="none", original='in')
array([[ 1., 2., 0., 0., 0., 0.],
[ 3., 4., 1., 2., 0., 0.],
[ 5., 6., 3., 4., 1., 2.],
[ 0., 0., 5., 6., 3., 4.],
[ 0., 0., 0., 0., 5., 6.]]) | lagmat | python | statsmodels/statsmodels | statsmodels/tsa/tsatools.py | https://github.com/statsmodels/statsmodels/blob/master/statsmodels/tsa/tsatools.py | BSD-3-Clause |
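The Examples above only cover ndarray input; a minimal sketch of the use_pandas path (the lag-column naming follows the code above):

import pandas as pd
from statsmodels.tsa.tsatools import lagmat

s = pd.Series([1.0, 2.0, 3.0, 4.0], name="y")
lagmat(s, maxlag=2, trim="both", original="in", use_pandas=True)
# returns a DataFrame with columns y, y.L.1, y.L.2 and the first
# maxlag rows trimmed away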
def lagmat2ds(
x, maxlag0, maxlagex=None, dropex=0, trim="forward", use_pandas=False
):
"""
Generate lagmatrix for 2d array, columns arranged by variables.
Parameters
----------
x : array_like
Data, 2d. Observations in rows and variables in columns.
maxlag0 : int
For the first variable, all lags from zero to maxlag0 are included.
maxlagex : {None, int}
The maximum lag for all other variables; all lags from zero to maxlagex
are included. If None, maxlagex is set to maxlag0.
dropex : int
Exclude first dropex lags from other variables. For all variables,
except the first, lags from dropex to maxlagex are included.
trim : str
The trimming method to use.
* 'forward' : trim invalid observations in front.
* 'backward' : trim invalid initial observations.
* 'both' : trim invalid observations on both sides.
* 'none' : no trimming of observations.
use_pandas : bool
If true, returns a DataFrame when the input is a pandas
Series or DataFrame. If false, returns numpy ndarrays.
Returns
-------
ndarray
The array with lagged observations, columns ordered by variable.
Notes
-----
Inefficient implementation for unequal lags, implemented for convenience.
"""
maxlag0 = int_like(maxlag0, "maxlag0")
maxlagex = int_like(maxlagex, "maxlagex", optional=True)
trim = string_like(
trim,
"trim",
optional=True,
options=("forward", "backward", "both", "none"),
)
if maxlagex is None:
maxlagex = maxlag0
maxlag = max(maxlag0, maxlagex)
is_pandas = _is_using_pandas(x, None)
if x.ndim == 1:
if is_pandas:
x = pd.DataFrame(x)
else:
x = x[:, None]
elif x.ndim == 0 or x.ndim > 2:
raise ValueError("Only supports 1 and 2-dimensional data.")
nobs, nvar = x.shape
if is_pandas and use_pandas:
lags = lagmat(
x.iloc[:, 0], maxlag, trim=trim, original="in", use_pandas=True
)
lagsli = [lags.iloc[:, : maxlag0 + 1]]
for k in range(1, nvar):
lags = lagmat(
x.iloc[:, k], maxlag, trim=trim, original="in", use_pandas=True
)
lagsli.append(lags.iloc[:, dropex : maxlagex + 1])
return pd.concat(lagsli, axis=1)
elif is_pandas:
x = np.asanyarray(x)
lagsli = [
lagmat(x[:, 0], maxlag, trim=trim, original="in")[:, : maxlag0 + 1]
]
for k in range(1, nvar):
lagsli.append(
lagmat(x[:, k], maxlag, trim=trim, original="in")[
:, dropex : maxlagex + 1
]
)
return np.column_stack(lagsli) | Generate lagmatrix for 2d array, columns arranged by variables.
Parameters
----------
x : array_like
Data, 2d. Observations in rows and variables in columns.
maxlag0 : int
For the first variable, all lags from zero to maxlag0 are included.
maxlagex : {None, int}
The maximum lag for all other variables; all lags from zero to maxlagex
are included. If None, maxlagex is set to maxlag0.
dropex : int
Exclude first dropex lags from other variables. For all variables,
except the first, lags from dropex to maxlagex are included.
trim : str
The trimming method to use.
* 'forward' : trim invalid observations in front.
* 'backward' : trim invalid initial observations.
* 'both' : trim invalid observations on both sides.
* 'none' : no trimming of observations.
use_pandas : bool
If true, returns a DataFrame when the input is a pandas
Series or DataFrame. If false, return numpy ndarrays.
Returns
-------
ndarray
The array with lagged observations, columns ordered by variable.
Notes
-----
Inefficient implementation for unequal lags, implemented for convenience. | lagmat2ds | python | statsmodels/statsmodels | statsmodels/tsa/tsatools.py | https://github.com/statsmodels/statsmodels/blob/master/statsmodels/tsa/tsatools.py | BSD-3-Clause |
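lagmat2ds has no Examples section; a minimal sketch of the column layout it produces:

import numpy as np
from statsmodels.tsa.tsatools import lagmat2ds

x = np.arange(10.0).reshape(5, 2)
out = lagmat2ds(x, maxlag0=2)
# columns are grouped by variable: lags 0..2 of the first column,
# then lags 0..2 of the second column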
def duplication_matrix(n):
"""
Create duplication matrix D_n which satisfies vec(S) = D_n vech(S) for
symmetric matrix S
Returns
-------
D_n : ndarray
"""
n = int_like(n, "n")
tmp = np.eye(n * (n + 1) // 2)
return np.array([unvech(x).ravel() for x in tmp]).T | Create duplication matrix D_n which satisfies vec(S) = D_n vech(S) for
symmetric matrix S
Returns
-------
D_n : ndarray | duplication_matrix | python | statsmodels/statsmodels | statsmodels/tsa/tsatools.py | https://github.com/statsmodels/statsmodels/blob/master/statsmodels/tsa/tsatools.py | BSD-3-Clause |
def elimination_matrix(n):
"""
Create the elimination matrix L_n which satisfies vech(M) = L_n vec(M) for
any square matrix M
Parameters
----------
n : int
The dimension of the square matrix M.
Returns
-------
L_n : ndarray
The elimination matrix with shape (n*(n+1)/2, n**2).
"""
n = int_like(n, "n")
vech_indices = vec(np.tril(np.ones((n, n))))
return np.eye(n * n)[vech_indices != 0] | Create the elimination matrix L_n which satisfies vech(M) = L_n vec(M) for
any square matrix M
Parameters
----------
n : int
The dimension of the square matrix M.
Returns
------- | elimination_matrix | python | statsmodels/statsmodels | statsmodels/tsa/tsatools.py | https://github.com/statsmodels/statsmodels/blob/master/statsmodels/tsa/tsatools.py | BSD-3-Clause |
def commutation_matrix(p, q):
"""
Create the commutation matrix K_{p,q} satisfying vec(A') = K_{p,q} vec(A)
Parameters
----------
p : int
q : int
Returns
-------
K : ndarray (pq x pq)
"""
p = int_like(p, "p")
q = int_like(q, "q")
K = np.eye(p * q)
indices = np.arange(p * q).reshape((p, q), order="F")
return K.take(indices.ravel(), axis=0) | Create the commutation matrix K_{p,q} satisfying vec(A') = K_{p,q} vec(A)
Parameters
----------
p : int
q : int
Returns
-------
K : ndarray (pq x pq) | commutation_matrix | python | statsmodels/statsmodels | statsmodels/tsa/tsatools.py | https://github.com/statsmodels/statsmodels/blob/master/statsmodels/tsa/tsatools.py | BSD-3-Clause |
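The three constructors above are easiest to understand through the identities they satisfy; a quick numerical check, using the companion helpers vec and vech from the same module:

import numpy as np
from statsmodels.tsa.tsatools import (
    commutation_matrix, duplication_matrix, elimination_matrix, vec, vech)

n = 3
S = np.arange(1.0, 10.0).reshape(n, n)
S = S + S.T  # make S symmetric
assert np.allclose(duplication_matrix(n) @ vech(S), vec(S))
assert np.allclose(elimination_matrix(n) @ vec(S), vech(S))
A = np.arange(6.0).reshape(2, 3)
assert np.allclose(commutation_matrix(2, 3) @ vec(A), vec(A.T))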
def _ar_transparams(params):
"""
Transforms params to induce stationarity/invertibility.
Parameters
----------
params : array_like
The AR coefficients
References
----------
Jones(1980)
"""
newparams = np.tanh(params / 2)
tmp = newparams.copy()
for j in range(1, len(params)):
a = newparams[j]
for kiter in range(j):
tmp[kiter] -= a * newparams[j - kiter - 1]
newparams[:j] = tmp[:j]
return newparams | Transforms params to induce stationarity/invertibility.
Parameters
----------
params : array_like
The AR coefficients
References
----------
Jones(1980) | _ar_transparams | python | statsmodels/statsmodels | statsmodels/tsa/tsatools.py | https://github.com/statsmodels/statsmodels/blob/master/statsmodels/tsa/tsatools.py | BSD-3-Clause |
def _ar_invtransparams(params):
"""
Inverse of the Jones reparameterization
Parameters
----------
params : array_like
The transformed AR coefficients
"""
params = params.copy()
tmp = params.copy()
for j in range(len(params) - 1, 0, -1):
a = params[j]
for kiter in range(j):
tmp[kiter] = (params[kiter] + a * params[j - kiter - 1]) / (
1 - a ** 2
)
params[:j] = tmp[:j]
invarcoefs = 2 * np.arctanh(params)
return invarcoefs | Inverse of the Jones reparameterization
Parameters
----------
params : array_like
The transformed AR coefficients | _ar_invtransparams | python | statsmodels/statsmodels | statsmodels/tsa/tsatools.py | https://github.com/statsmodels/statsmodels/blob/master/statsmodels/tsa/tsatools.py | BSD-3-Clause |
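These two helpers are inverses of each other; a small round-trip check (they are underscore-prefixed, so this imports private functions):

import numpy as np
from statsmodels.tsa.tsatools import _ar_invtransparams, _ar_transparams

raw = np.array([0.3, -0.2, 0.5])
phi = _ar_transparams(raw.copy())       # coefficients of a stationary AR
back = _ar_invtransparams(phi.copy())   # recover the unconstrained values
assert np.allclose(back, raw)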
def _ma_transparams(params):
"""
Transforms params to induce stationarity/invertibility.
Parameters
----------
params : ndarray
The MA coefficients of an (AR)MA model.
References
----------
Jones(1980)
"""
newparams = (1 - np.exp(-params)) / (1 + np.exp(-params))
tmp = newparams.copy()
# levinson-durbin to get macf
for j in range(1, len(params)):
b = newparams[j]
for kiter in range(j):
tmp[kiter] += b * newparams[j - kiter - 1]
newparams[:j] = tmp[:j]
return newparams | Transforms params to induce stationarity/invertibility.
Parameters
----------
params : ndarray
The MA coefficients of an (AR)MA model.
References
----------
Jones(1980) | _ma_transparams | python | statsmodels/statsmodels | statsmodels/tsa/tsatools.py | https://github.com/statsmodels/statsmodels/blob/master/statsmodels/tsa/tsatools.py | BSD-3-Clause |
def _ma_invtransparams(macoefs):
"""
Inverse of the Jones reparameterization
Parameters
----------
macoefs : ndarray
The transformed MA coefficients
"""
macoefs = macoefs.copy()  # avoid modifying the caller's array in place
tmp = macoefs.copy()
for j in range(len(macoefs) - 1, 0, -1):
b = macoefs[j]
for kiter in range(j):
tmp[kiter] = (macoefs[kiter] - b * macoefs[j - kiter - 1]) / (
1 - b ** 2
)
macoefs[:j] = tmp[:j]
invmacoefs = -np.log((1 - macoefs) / (1 + macoefs))
return invmacoefs | Inverse of the Jones reparameterization
Parameters
----------
macoefs : ndarray
The transformed MA coefficients | _ma_invtransparams | python | statsmodels/statsmodels | statsmodels/tsa/tsatools.py | https://github.com/statsmodels/statsmodels/blob/master/statsmodels/tsa/tsatools.py | BSD-3-Clause |
def unintegrate_levels(x, d):
"""
Returns the successive differences needed to unintegrate the series.
Parameters
----------
x : array_like
The original series
d : int
The number of differences of the differenced series.
Returns
-------
y : array_like
The increasing differences from 0 to d-1 of the first d elements
of x.
See Also
--------
unintegrate
"""
d = int_like(d, "d")
x = x[:d]
return np.asarray([np.diff(x, d - i)[0] for i in range(d, 0, -1)]) | Returns the successive differences needed to unintegrate the series.
Parameters
----------
x : array_like
The original series
d : int
The number of differences of the differenced series.
Returns
-------
y : array_like
The increasing differences from 0 to d-1 of the first d elements
of x.
See Also
--------
unintegrate | unintegrate_levels | python | statsmodels/statsmodels | statsmodels/tsa/tsatools.py | https://github.com/statsmodels/statsmodels/blob/master/statsmodels/tsa/tsatools.py | BSD-3-Clause |
def unintegrate(x, levels):
"""
After taking n-differences of a series, return the original series
Parameters
----------
x : array_like
The n-th differenced series
levels : list
A list of the first-value in each differenced series, for
[first-difference, second-difference, ..., n-th difference]
Returns
-------
y : array_like
The original series de-differenced
Examples
--------
>>> x = np.array([1, 3, 9., 19, 8.])
>>> levels = unintegrate_levels(x, 2)
>>> levels
array([ 1., 2.])
>>> unintegrate(np.diff(x, 2), levels)
array([ 1., 3., 9., 19., 8.])
"""
levels = list(levels)[:] # copy
if len(levels) > 1:
x0 = levels.pop(-1)
return unintegrate(np.cumsum(np.r_[x0, x]), levels)
x0 = levels[0]
return np.cumsum(np.r_[x0, x]) | After taking n-differences of a series, return the original series
Parameters
----------
x : array_like
The n-th differenced series
levels : list
A list of the first-value in each differenced series, for
[first-difference, second-difference, ..., n-th difference]
Returns
-------
y : array_like
The original series de-differenced
Examples
--------
>>> x = np.array([1, 3, 9., 19, 8.])
>>> levels = unintegrate_levels(x, 2)
>>> levels
array([ 1., 2.])
>>> unintegrate(np.diff(x, 2), levels)
array([ 1., 3., 9., 19., 8.]) | unintegrate | python | statsmodels/statsmodels | statsmodels/tsa/tsatools.py | https://github.com/statsmodels/statsmodels/blob/master/statsmodels/tsa/tsatools.py | BSD-3-Clause |
def freq_to_period(freq: str | offsets.DateOffset) -> int:
"""
Convert a pandas frequency to a periodicity
Parameters
----------
freq : str or offset
Frequency to convert
Returns
-------
int
Periodicity of freq
Notes
-----
Annual maps to 1, quarterly maps to 4, monthly to 12, weekly to 52,
daily to 7, business-daily to 5, and hourly to 24.
"""
if not isinstance(freq, offsets.DateOffset):
freq = to_offset(freq) # go ahead and standardize
assert isinstance(freq, offsets.DateOffset)
freq = freq.rule_code.upper()
yearly_freqs = ("A-", "AS-", "Y-", "YS-", "YE-")
if freq in ("A", "Y") or freq.startswith(yearly_freqs):
return 1
elif freq == "Q" or freq.startswith(("Q-", "QS", "QE")):
return 4
elif freq == "M" or freq.startswith(("M-", "MS", "ME")):
return 12
elif freq == "W" or freq.startswith("W-"):
return 52
elif freq == "D":
return 7
elif freq == "B":
return 5
elif freq == "H":
return 24
else: # pragma : no cover
raise ValueError(
"freq {} not understood. Please report if you "
"think this is in error.".format(freq)
) | Convert a pandas frequency to a periodicity
Parameters
----------
freq : str or offset
Frequency to convert
Returns
-------
int
Periodicity of freq
Notes
-----
Annual maps to 1, quarterly maps to 4, monthly to 12, weekly to 52, daily to 7, business-daily to 5, and hourly to 24. | freq_to_period | python | statsmodels/statsmodels | statsmodels/tsa/tsatools.py | https://github.com/statsmodels/statsmodels/blob/master/statsmodels/tsa/tsatools.py | BSD-3-Clause |
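A couple of quick checks of the mapping, with frequencies given as pandas offset strings:

from statsmodels.tsa.tsatools import freq_to_period

freq_to_period("Q")      # 4
freq_to_period("W-SUN")  # 52
freq_to_period("B")      # 5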
def distance_indicators(x, epsilon=None, distance=1.5):
"""
Calculate all pairwise threshold distance indicators for a time series
Parameters
----------
x : 1d array
observations of time series for which heaviside distance indicators
are calculated
epsilon : scalar, optional
the threshold distance to use in calculating the heaviside indicators
distance : scalar, optional
if epsilon is omitted, specifies the distance multiplier to use when
computing it
Returns
-------
indicators : 2d array
matrix of distance threshold indicators
Notes
-----
Since this can be a very large matrix, use np.int8 to save some space.
"""
x = array_like(x, 'x')
if epsilon is not None and epsilon <= 0:
raise ValueError("Threshold distance must be positive if specified."
" Got epsilon of %f" % epsilon)
if distance <= 0:
raise ValueError("Threshold distance must be positive."
" Got distance multiplier %f" % distance)
# TODO: add functionality to select epsilon optimally
# TODO: and/or compute for a range of epsilons in [0.5*s, 2.0*s]?
# or [1.5*s, 2.0*s]?
if epsilon is None:
epsilon = distance * x.std(ddof=1)
return np.abs(x[:, None] - x) < epsilon | Calculate all pairwise threshold distance indicators for a time series
Parameters
----------
x : 1d array
observations of time series for which heaviside distance indicators
are calculated
epsilon : scalar, optional
the threshold distance to use in calculating the heaviside indicators
distance : scalar, optional
if epsilon is omitted, specifies the distance multiplier to use when
computing it
Returns
-------
indicators : 2d array
matrix of distance threshold indicators
Notes
-----
Since this can be a very large matrix, use np.int8 to save some space. | distance_indicators | python | statsmodels/statsmodels | statsmodels/tsa/_bds.py | https://github.com/statsmodels/statsmodels/blob/master/statsmodels/tsa/_bds.py | BSD-3-Clause |
def correlation_sum(indicators, embedding_dim):
"""
Calculate a correlation sum
Useful as an estimator of a correlation integral
Parameters
----------
indicators : ndarray
2d array of distance threshold indicators
embedding_dim : int
embedding dimension
Returns
-------
corrsum : float
Correlation sum
indicators_joint : ndarray
The matrix of joint-distance-threshold indicators.
"""
if not indicators.ndim == 2:
raise ValueError('Indicators must be a matrix')
if not indicators.shape[0] == indicators.shape[1]:
raise ValueError('Indicator matrix must be symmetric (square)')
if embedding_dim == 1:
indicators_joint = indicators
else:
corrsum, indicators = correlation_sum(indicators, embedding_dim - 1)
indicators_joint = indicators[1:, 1:]*indicators[:-1, :-1]
nobs = len(indicators_joint)
corrsum = np.mean(indicators_joint[np.triu_indices(nobs, 1)])
return corrsum, indicators_joint | Calculate a correlation sum
Useful as an estimator of a correlation integral
Parameters
----------
indicators : ndarray
2d array of distance threshold indicators
embedding_dim : int
embedding dimension
Returns
-------
corrsum : float
Correlation sum
indicators_joint : ndarray
matrix of joint-distance-threshold indicators | correlation_sum | python | statsmodels/statsmodels | statsmodels/tsa/_bds.py | https://github.com/statsmodels/statsmodels/blob/master/statsmodels/tsa/_bds.py | BSD-3-Clause |
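A small sketch tying the two helpers above together; the module path is taken from this file, statsmodels/tsa/_bds.py:

import numpy as np
from statsmodels.tsa._bds import correlation_sum, distance_indicators

rng = np.random.default_rng(0)
x = rng.standard_normal(100)
ind = distance_indicators(x)         # |x_i - x_j| < 1.5 * std(x)
c1, joint = correlation_sum(ind, 1)  # 1-dimensional correlation sum
c2, _ = correlation_sum(ind, 2)      # 2-dimensional correlation sum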
def correlation_sums(indicators, max_dim):
"""
Calculate all correlation sums for embedding dimensions 1:max_dim
Parameters
----------
indicators : 2d array
matrix of distance threshold indicators
max_dim : int
maximum embedding dimension
Returns
-------
corrsums : ndarray
Correlation sums
"""
corrsums = np.zeros((1, max_dim))
corrsums[0, 0], indicators = correlation_sum(indicators, 1)
for i in range(1, max_dim):
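# passing embedding_dim=2 to the recursive correlation_sum raises the
# dimension of the already-joint indicators by exactly one per iteration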
corrsums[0, i], indicators = correlation_sum(indicators, 2)
return corrsums | Calculate all correlation sums for embedding dimensions 1:max_dim
Parameters
----------
indicators : 2d array
matrix of distance threshold indicators
max_dim : int
maximum embedding dimension
Returns
-------
corrsums : ndarray
Correlation sums | correlation_sums | python | statsmodels/statsmodels | statsmodels/tsa/_bds.py | https://github.com/statsmodels/statsmodels/blob/master/statsmodels/tsa/_bds.py | BSD-3-Clause |
def _var(indicators, max_dim):
"""
Calculate the variance of a BDS effect
Parameters
----------
indicators : ndarray
2d array of distance threshold indicators
max_dim : int
maximum embedding dimension
Returns
-------
variances : ndarray
Variance of the BDS effect for each embedding dimension from 2 to max_dim.
k : float
Estimate of the constant k used in the variance calculation.
"""
nobs = len(indicators)
corrsum_1dim, _ = correlation_sum(indicators, 1)
k = ((indicators.sum(1)**2).sum() - 3*indicators.sum() +
2*nobs) / (nobs * (nobs - 1) * (nobs - 2))
variances = np.zeros((1, max_dim - 1))
for embedding_dim in range(2, max_dim + 1):
tmp = 0
for j in range(1, embedding_dim):
tmp += (k**(embedding_dim - j))*(corrsum_1dim**(2 * j))
variances[0, embedding_dim-2] = 4 * (
k**embedding_dim +
2 * tmp +
((embedding_dim - 1)**2) * (corrsum_1dim**(2 * embedding_dim)) -
(embedding_dim**2) * k * (corrsum_1dim**(2 * embedding_dim - 2)))
return variances, k | Calculate the variance of a BDS effect
Parameters
----------
indicators : ndarray
2d array of distance threshold indicators
max_dim : int
maximum embedding dimension
Returns
-------
variances : ndarray
Variance of BDS effect | _var | python | statsmodels/statsmodels | statsmodels/tsa/_bds.py | https://github.com/statsmodels/statsmodels/blob/master/statsmodels/tsa/_bds.py | BSD-3-Clause |
def bds(x, max_dim=2, epsilon=None, distance=1.5):
"""
BDS Test Statistic for Independence of a Time Series
Parameters
----------
x : ndarray
Observations of time series for which bds statistics is calculated.
max_dim : int
The maximum embedding dimension.
epsilon : {float, None}, optional
The threshold distance to use in calculating the correlation sum.
distance : float, optional
Specifies the distance multiplier to use when computing the test
statistic if epsilon is omitted.
Returns
-------
bds_stat : float or ndarray
The BDS statistics for embedding dimensions 2, ..., max_dim.
pvalue : float or ndarray
The p-values associated with the BDS statistics.
Notes
-----
The null hypothesis of the test statistic is for an independent and
identically distributed (i.i.d.) time series, and an unspecified
alternative hypothesis.
This test is often used as a residual diagnostic.
The calculation involves matrices of size (nobs, nobs), so this test
will not work with very long datasets.
Implementation conditions on the first m-1 initial values, which are
required to calculate the m-histories:
x_t^m = (x_t, x_{t-1}, ... x_{t-(m-1)})
"""
x = array_like(x, 'x', ndim=1)
nobs_full = len(x)
if max_dim < 2 or max_dim >= nobs_full:
raise ValueError("Maximum embedding dimension must be in the range"
" [2,len(x)-1]. Got %d." % max_dim)
# Cache the indicators
indicators = distance_indicators(x, epsilon, distance)
# Get estimates of m-dimensional correlation integrals
corrsum_mdims = correlation_sums(indicators, max_dim)
# Get variance of effect
variances, k = _var(indicators, max_dim)
stddevs = np.sqrt(variances)
bds_stats = np.zeros((1, max_dim - 1))
pvalues = np.zeros((1, max_dim - 1))
for embedding_dim in range(2, max_dim+1):
ninitial = (embedding_dim - 1)
nobs = nobs_full - ninitial
# Get estimates of 1-dimensional correlation integrals
# (see Kanzler footnote 10 for why indicators are truncated)
corrsum_1dim, _ = correlation_sum(indicators[ninitial:, ninitial:], 1)
corrsum_mdim = corrsum_mdims[0, embedding_dim - 1]
# Get the intermediate values for the statistic
effect = corrsum_mdim - (corrsum_1dim**embedding_dim)
sd = stddevs[0, embedding_dim - 2]
# Calculate the statistic: bds_stat ~ N(0,1)
bds_stats[0, embedding_dim - 2] = np.sqrt(nobs) * effect / sd
# Calculate the p-value (two-tailed test)
pvalue = 2*stats.norm.sf(np.abs(bds_stats[0, embedding_dim - 2]))
pvalues[0, embedding_dim - 2] = pvalue
return np.squeeze(bds_stats), np.squeeze(pvalues) | BDS Test Statistic for Independence of a Time Series
Parameters
----------
x : ndarray
Observations of time series for which bds statistics is calculated.
max_dim : int
The maximum embedding dimension.
epsilon : {float, None}, optional
The threshold distance to use in calculating the correlation sum.
distance : float, optional
Specifies the distance multiplier to use when computing the test
statistic if epsilon is omitted.
Returns
-------
bds_stat : float or ndarray
The BDS statistics for embedding dimensions 2, ..., max_dim.
pvalue : float or ndarray
The p-values associated with the BDS statistics.
Notes
-----
The null hypothesis of the test statistic is for an independent and
identically distributed (i.i.d.) time series, and an unspecified
alternative hypothesis.
This test is often used as a residual diagnostic.
The calculation involves matrices of size (nobs, nobs), so this test
will not work with very long datasets.
Implementation conditions on the first m-1 initial values, which are
required to calculate the m-histories:
x_t^m = (x_t, x_{t-1}, ... x_{t-(m-1)}) | bds | python | statsmodels/statsmodels | statsmodels/tsa/_bds.py | https://github.com/statsmodels/statsmodels/blob/master/statsmodels/tsa/_bds.py | BSD-3-Clause |
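A minimal run on i.i.d. noise, where the null should typically not be rejected; the import path is assumed from this module's location:

import numpy as np
from statsmodels.tsa._bds import bds

rng = np.random.default_rng(12345)
x = rng.standard_normal(500)
stat, pvalue = bds(x, max_dim=3)  # one statistic/p-value per dimension 2..3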
def arma_generate_sample(
ar, ma, nsample, scale=1, distrvs=None, axis=0, burnin=0
):
"""
Simulate data from an ARMA.
Parameters
----------
ar : array_like
The coefficient for autoregressive lag polynomial, including zero lag.
ma : array_like
The coefficient for moving-average lag polynomial, including zero lag.
nsample : int or tuple of ints
If nsample is an integer, then this creates a 1d timeseries of
length size. If nsample is a tuple, creates a len(nsample)
dimensional time series where time is indexed along the input
variable ``axis``. All series are independent unless ``distrvs`` generates
dependent data.
scale : float
The standard deviation of noise.
distrvs : function, random number generator
A function that generates the random numbers, and takes ``size``
as argument. The default is np.random.standard_normal.
axis : int
See nsample for details.
burnin : int
Number of observation at the beginning of the sample to drop.
Used to reduce dependence on initial values.
Returns
-------
ndarray
Random sample(s) from an ARMA process.
Notes
-----
As mentioned above, both the AR and MA components should include the
coefficient on the zero-lag. This is typically 1. Further, due to the
conventions used in signal processing (in signal.lfilter) vs. the
conventions in statistics for ARMA processes, the AR parameters should
have the opposite sign of what you might expect. See the examples below.
Examples
--------
>>> import numpy as np
>>> np.random.seed(12345)
>>> arparams = np.array([.75, -.25])
>>> maparams = np.array([.65, .35])
>>> ar = np.r_[1, -arparams] # add zero-lag and negate
>>> ma = np.r_[1, maparams] # add zero-lag
>>> y = sm.tsa.arma_generate_sample(ar, ma, 250)
>>> model = sm.tsa.ARIMA(y, order=(2, 0, 2), trend='n').fit()
>>> model.params
array([ 0.79044189, -0.23140636, 0.70072904, 0.40608028])
"""
distrvs = np.random.standard_normal if distrvs is None else distrvs
if np.ndim(nsample) == 0:
nsample = [nsample]
if burnin:
# handle burn-in time for nd arrays
# maybe there is a better trick in scipy.fft code
newsize = list(nsample)
newsize[axis] += burnin
newsize = tuple(newsize)
fslice = [slice(None)] * len(newsize)
fslice[axis] = slice(burnin, None, None)
fslice = tuple(fslice)
else:
newsize = tuple(nsample)
fslice = tuple([slice(None)] * np.ndim(newsize))
eta = scale * distrvs(size=newsize)
return signal.lfilter(ma, ar, eta, axis=axis)[fslice] | Simulate data from an ARMA.
Parameters
----------
ar : array_like
The coefficient for autoregressive lag polynomial, including zero lag.
ma : array_like
The coefficient for moving-average lag polynomial, including zero lag.
nsample : int or tuple of ints
If nsample is an integer, then this creates a 1d timeseries of
length size. If nsample is a tuple, creates a len(nsample)
dimensional time series where time is indexed along the input
variable ``axis``. All series are independent unless ``distrvs`` generates
dependent data.
scale : float
The standard deviation of noise.
distrvs : function, random number generator
A function that generates the random numbers, and takes ``size``
as argument. The default is np.random.standard_normal.
axis : int
See nsample for details.
burnin : int
Number of observation at the beginning of the sample to drop.
Used to reduce dependence on initial values.
Returns
-------
ndarray
Random sample(s) from an ARMA process.
Notes
-----
As mentioned above, both the AR and MA components should include the
coefficient on the zero-lag. This is typically 1. Further, due to the
conventions used in signal processing (in signal.lfilter) vs. the
conventions in statistics for ARMA processes, the AR parameters should
have the opposite sign of what you might expect. See the examples below.
Examples
--------
>>> import numpy as np
>>> np.random.seed(12345)
>>> arparams = np.array([.75, -.25])
>>> maparams = np.array([.65, .35])
>>> ar = np.r_[1, -arparams] # add zero-lag and negate
>>> ma = np.r_[1, maparams] # add zero-lag
>>> y = sm.tsa.arma_generate_sample(ar, ma, 250)
>>> model = sm.tsa.ARIMA(y, order=(2, 0, 2), trend='n').fit()
>>> model.params
array([ 0.79044189, -0.23140636, 0.70072904, 0.40608028]) | arma_generate_sample | python | statsmodels/statsmodels | statsmodels/tsa/arima_process.py | https://github.com/statsmodels/statsmodels/blob/master/statsmodels/tsa/arima_process.py | BSD-3-Clause |
def arma_acovf(ar, ma, nobs=10, sigma2=1, dtype=None):
"""
Theoretical autocovariances of stationary ARMA processes
Parameters
----------
ar : array_like, 1d
The coefficients for autoregressive lag polynomial, including zero lag.
ma : array_like, 1d
The coefficients for moving-average lag polynomial, including zero lag.
nobs : int
The number of terms (lags plus zero lag) to include in returned acovf.
sigma2 : float
Variance of the innovation term.
dtype : dtype, optional
The dtype of the returned array. If None, a common type is inferred
from ar, ma and sigma2.
Returns
-------
ndarray
The autocovariance of ARMA process given by ar, ma.
See Also
--------
arma_acf : Autocorrelation function for ARMA processes.
acovf : Sample autocovariance estimation.
References
----------
.. [*] Brockwell, Peter J., and Richard A. Davis. 2009. Time Series:
Theory and Methods. 2nd ed. 1991. New York, NY: Springer.
"""
if dtype is None:
dtype = np.common_type(np.array(ar), np.array(ma), np.array(sigma2))
p = len(ar) - 1
q = len(ma) - 1
m = max(p, q) + 1
if sigma2.real < 0:
raise ValueError("Must have positive innovation variance.")
# Short-circuit for trivial corner-case
if p == q == 0:
out = np.zeros(nobs, dtype=dtype)
out[0] = sigma2
return out
elif p > 0 and np.max(np.abs(np.roots(ar))) >= 1:
raise ValueError(NONSTATIONARY_ERROR)
# Get the moving average representation coefficients that we need
ma_coeffs = arma2ma(ar, ma, lags=m)
# Solve for the first m autocovariances via the linear system
# described by (BD, eq. 3.3.8)
A = np.zeros((m, m), dtype=dtype)
b = np.zeros((m, 1), dtype=dtype)
# We need a zero-right-padded version of ar params
tmp_ar = np.zeros(m, dtype=dtype)
tmp_ar[: p + 1] = ar
for k in range(m):
A[k, : (k + 1)] = tmp_ar[: (k + 1)][::-1]
A[k, 1 : m - k] += tmp_ar[(k + 1) : m]
b[k] = sigma2 * np.dot(ma[k : q + 1], ma_coeffs[: max((q + 1 - k), 0)])
acovf = np.zeros(max(nobs, m), dtype=dtype)
try:
acovf[:m] = np.linalg.solve(A, b)[:, 0]
except np.linalg.LinAlgError:
raise ValueError(NONSTATIONARY_ERROR)
# Iteratively apply (BD, eq. 3.3.9) to solve for remaining autocovariances
if nobs > m:
zi = signal.lfiltic([1], ar, acovf[:m:][::-1])
acovf[m:] = signal.lfilter(
[1], ar, np.zeros(nobs - m, dtype=dtype), zi=zi
)[0]
return acovf[:nobs] | Theoretical autocovariances of stationary ARMA processes
Parameters
----------
ar : array_like, 1d
The coefficients for autoregressive lag polynomial, including zero lag.
ma : array_like, 1d
The coefficients for moving-average lag polynomial, including zero lag.
nobs : int
The number of terms (lags plus zero lag) to include in returned acovf.
sigma2 : float
Variance of the innovation term.
dtype : dtype, optional
The dtype of the returned array. If None, a common type is inferred
from ar, ma and sigma2.
Returns
-------
ndarray
The autocovariance of ARMA process given by ar, ma.
See Also
--------
arma_acf : Autocorrelation function for ARMA processes.
acovf : Sample autocovariance estimation.
References
----------
.. [*] Brockwell, Peter J., and Richard A. Davis. 2009. Time Series:
Theory and Methods. 2nd ed. 1991. New York, NY: Springer. | arma_acovf | python | statsmodels/statsmodels | statsmodels/tsa/arima_process.py | https://github.com/statsmodels/statsmodels/blob/master/statsmodels/tsa/arima_process.py | BSD-3-Clause |
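For an AR(1) the closed form is gamma_k = sigma2 * phi**k / (1 - phi**2), which gives a quick check of arma_acovf:

import numpy as np
from statsmodels.tsa.arima_process import arma_acovf

phi = 0.5
acov = arma_acovf(np.r_[1, -phi], np.r_[1.0], nobs=5, sigma2=1.0)
expected = phi ** np.arange(5) / (1 - phi ** 2)
assert np.allclose(acov, expected)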
def arma_acf(ar, ma, lags=10):
"""
Theoretical autocorrelation function of an ARMA process.
Parameters
----------
ar : array_like
Coefficients for autoregressive lag polynomial, including zero lag.
ma : array_like
Coefficients for moving-average lag polynomial, including zero lag.
lags : int
The number of terms (lags plus zero lag) to include in returned acf.
Returns
-------
ndarray
The autocorrelations of ARMA process given by ar and ma.
See Also
--------
arma_acovf : Autocovariances from ARMA processes.
acf : Sample autocorrelation function estimation.
acovf : Sample autocovariance function estimation.
"""
acovf = arma_acovf(ar, ma, lags)
return acovf / acovf[0] | Theoretical autocorrelation function of an ARMA process.
Parameters
----------
ar : array_like
Coefficients for autoregressive lag polynomial, including zero lag.
ma : array_like
Coefficients for moving-average lag polynomial, including zero lag.
lags : int
The number of terms (lags plus zero lag) to include in returned acf.
Returns
-------
ndarray
The autocorrelations of ARMA process given by ar and ma.
See Also
--------
arma_acovf : Autocovariances from ARMA processes.
acf : Sample autocorrelation function estimation.
acovf : Sample autocovariance function estimation. | arma_acf | python | statsmodels/statsmodels | statsmodels/tsa/arima_process.py | https://github.com/statsmodels/statsmodels/blob/master/statsmodels/tsa/arima_process.py | BSD-3-Clause |
def arma_pacf(ar, ma, lags=10):
"""
Theoretical partial autocorrelation function of an ARMA process.
Parameters
----------
ar : array_like, 1d
The coefficients for autoregressive lag polynomial, including zero lag.
ma : array_like, 1d
The coefficients for moving-average lag polynomial, including zero lag.
lags : int
The number of terms (lags plus zero lag) to include in returned pacf.
Returns
-------
ndarray
The partial autocorrelation of ARMA process given by ar and ma.
Notes
-----
Solves the Yule-Walker equations for each lag order up to ``lags``.
Not tested/checked yet.
"""
# TODO: Should use rank 1 inverse update
apacf = np.zeros(lags)
acov = arma_acf(ar, ma, lags=lags + 1)
apacf[0] = 1.0
for k in range(2, lags + 1):
r = acov[:k]
apacf[k - 1] = linalg.solve(linalg.toeplitz(r[:-1]), r[1:])[-1]
return apacf | Theoretical partial autocorrelation function of an ARMA process.
Parameters
----------
ar : array_like, 1d
The coefficients for autoregressive lag polynomial, including zero lag.
ma : array_like, 1d
The coefficients for moving-average lag polynomial, including zero lag.
lags : int
The number of terms (lags plus zero lag) to include in returned pacf.
Returns
-------
ndarray
The partial autocorrelation of ARMA process given by ar and ma.
Notes
-----
Solves the Yule-Walker equations for each lag order up to ``lags``.
not tested/checked yet | arma_pacf | python | statsmodels/statsmodels | statsmodels/tsa/arima_process.py | https://github.com/statsmodels/statsmodels/blob/master/statsmodels/tsa/arima_process.py | BSD-3-Clause |
def arma_periodogram(ar, ma, worN=None, whole=0):
"""
Periodogram for ARMA process given by lag-polynomials ar and ma.
Parameters
----------
ar : array_like
The autoregressive lag-polynomial with leading 1 and lhs sign.
ma : array_like
The moving average lag-polynomial with leading 1.
worN : {None, int}, optional
An option for scipy.signal.freqz (read "w or N").
If None, then compute at 512 frequencies around the unit circle.
If a single integer, then compute at that many frequencies.
Otherwise, compute the response at frequencies given in worN.
whole : {0,1}, optional
An option for scipy.signal.freqz.
Normally, frequencies are computed from 0 to pi (the upper half of the
unit circle). If whole is non-zero, compute frequencies from 0 to 2*pi.
Returns
-------
w : ndarray
The frequencies.
sd : ndarray
The periodogram, also known as the spectral density.
Notes
-----
Normalization ?
This uses signal.freqz, which does not use fft. There is a fft version
somewhere.
"""
w, h = signal.freqz(ma, ar, worN=worN, whole=whole)
sd = np.abs(h) ** 2 / np.sqrt(2 * np.pi)
if np.any(np.isnan(h)):
# this happens with a unit root or seasonal unit root
import warnings
warnings.warn(
"Warning: nan in frequency response h, maybe a unit root",
RuntimeWarning,
stacklevel=2,
)
return w, sd | Periodogram for ARMA process given by lag-polynomials ar and ma.
Parameters
----------
ar : array_like
The autoregressive lag-polynomial with leading 1 and lhs sign.
ma : array_like
The moving average lag-polynomial with leading 1.
worN : {None, int}, optional
An option for scipy.signal.freqz (read "w or N").
If None, then compute at 512 frequencies around the unit circle.
If a single integer, then compute at that many frequencies.
Otherwise, compute the response at frequencies given in worN.
whole : {0,1}, optional
An option for scipy.signal.freqz.
Normally, frequencies are computed from 0 to pi (the upper half of the
unit circle). If whole is non-zero, compute frequencies from 0 to 2*pi.
Returns
-------
w : ndarray
The frequencies.
sd : ndarray
The periodogram, also known as the spectral density.
Notes
-----
Normalization ?
This uses signal.freqz, which does not use fft. There is a fft version
somewhere. | arma_periodogram | python | statsmodels/statsmodels | statsmodels/tsa/arima_process.py | https://github.com/statsmodels/statsmodels/blob/master/statsmodels/tsa/arima_process.py | BSD-3-Clause |
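A short sketch for an AR(1), where the spectral density should peak at frequency zero for positive phi:

import numpy as np
from statsmodels.tsa.arima_process import arma_periodogram

w, sd = arma_periodogram(np.r_[1, -0.5], np.r_[1.0])
assert sd[0] == sd.max()  # low frequencies dominate for phi = 0.5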
def arma_impulse_response(ar, ma, leads=100):
"""
Compute the impulse response function (MA representation) for ARMA process.
Parameters
----------
ar : array_like, 1d
The auto regressive lag polynomial.
ma : array_like, 1d
The moving average lag polynomial.
leads : int
The number of observations to calculate.
Returns
-------
ndarray
The impulse response function with nobs elements.
Notes
-----
This is the same as finding the MA representation of an ARMA(p,q).
By reversing the role of ar and ma in the function arguments, the
returned result is the AR representation of an ARMA(p,q), i.e
ma_representation = arma_impulse_response(ar, ma, leads=100)
ar_representation = arma_impulse_response(ma, ar, leads=100)
Fully tested against matlab
Examples
--------
AR(1)
>>> arma_impulse_response([1.0, -0.8], [1.], leads=10)
array([ 1. , 0.8 , 0.64 , 0.512 , 0.4096 ,
0.32768 , 0.262144 , 0.2097152 , 0.16777216, 0.13421773])
this is the same as
>>> 0.8**np.arange(10)
array([ 1. , 0.8 , 0.64 , 0.512 , 0.4096 ,
0.32768 , 0.262144 , 0.2097152 , 0.16777216, 0.13421773])
MA(2)
>>> arma_impulse_response([1.0], [1., 0.5, 0.2], leads=10)
array([ 1. , 0.5, 0.2, 0. , 0. , 0. , 0. , 0. , 0. , 0. ])
ARMA(1,2)
>>> arma_impulse_response([1.0, -0.8], [1., 0.5, 0.2], leads=10)
array([ 1. , 1.3 , 1.24 , 0.992 , 0.7936 ,
0.63488 , 0.507904 , 0.4063232 , 0.32505856, 0.26004685])
"""
impulse = np.zeros(leads)
impulse[0] = 1.0
return signal.lfilter(ma, ar, impulse) | Compute the impulse response function (MA representation) for ARMA process.
Parameters
----------
ar : array_like, 1d
The auto regressive lag polynomial.
ma : array_like, 1d
The moving average lag polynomial.
leads : int
The number of observations to calculate.
Returns
-------
ndarray
The impulse response function with nobs elements.
Notes
-----
This is the same as finding the MA representation of an ARMA(p,q).
By reversing the role of ar and ma in the function arguments, the
returned result is the AR representation of an ARMA(p,q), i.e
ma_representation = arma_impulse_response(ar, ma, leads=100)
ar_representation = arma_impulse_response(ma, ar, leads=100)
Fully tested against matlab
Examples
--------
AR(1)
>>> arma_impulse_response([1.0, -0.8], [1.], leads=10)
array([ 1. , 0.8 , 0.64 , 0.512 , 0.4096 ,
0.32768 , 0.262144 , 0.2097152 , 0.16777216, 0.13421773])
this is the same as
>>> 0.8**np.arange(10)
array([ 1. , 0.8 , 0.64 , 0.512 , 0.4096 ,
0.32768 , 0.262144 , 0.2097152 , 0.16777216, 0.13421773])
MA(2)
>>> arma_impulse_response([1.0], [1., 0.5, 0.2], leads=10)
array([ 1. , 0.5, 0.2, 0. , 0. , 0. , 0. , 0. , 0. , 0. ])
ARMA(1,2)
>>> arma_impulse_response([1.0, -0.8], [1., 0.5, 0.2], leads=10)
array([ 1. , 1.3 , 1.24 , 0.992 , 0.7936 ,
0.63488 , 0.507904 , 0.4063232 , 0.32505856, 0.26004685]) | arma_impulse_response | python | statsmodels/statsmodels | statsmodels/tsa/arima_process.py | https://github.com/statsmodels/statsmodels/blob/master/statsmodels/tsa/arima_process.py | BSD-3-Clause |
def arma2ma(ar, ma, lags=100):
"""
A finite-lag approximate MA representation of an ARMA process.
Parameters
----------
ar : ndarray
The auto regressive lag polynomial.
ma : ndarray
The moving average lag polynomial.
lags : int
The number of coefficients to calculate.
Returns
-------
ndarray
The MA representation coefficients (impulse response), with ``lags`` elements.
Notes
-----
Equivalent to ``arma_impulse_response(ar, ma, leads=100)``
"""
return arma_impulse_response(ar, ma, leads=lags) | A finite-lag approximate MA representation of an ARMA process.
Parameters
----------
ar : ndarray
The auto regressive lag polynomial.
ma : ndarray
The moving average lag polynomial.
lags : int
The number of coefficients to calculate.
Returns
-------
ndarray
The MA representation coefficients (impulse response), with ``lags`` elements.
Notes
-----
Equivalent to ``arma_impulse_response(ar, ma, leads=100)`` | arma2ma | python | statsmodels/statsmodels | statsmodels/tsa/arima_process.py | https://github.com/statsmodels/statsmodels/blob/master/statsmodels/tsa/arima_process.py | BSD-3-Clause |
def arma2ar(ar, ma, lags=100):
"""
A finite-lag AR approximation of an ARMA process.
Parameters
----------
ar : array_like
The auto regressive lag polynomial.
ma : array_like
The moving average lag polynomial.
lags : int
The number of coefficients to calculate.
Returns
-------
ndarray
The coefficients of the AR representation, with ``lags`` elements.
Notes
-----
Equivalent to ``arma_impulse_response(ma, ar, leads=100)``
"""
return arma_impulse_response(ma, ar, leads=lags) | A finite-lag AR approximation of an ARMA process.
Parameters
----------
ar : array_like
The auto regressive lag polynomial.
ma : array_like
The moving average lag polynomial.
lags : int
The number of coefficients to calculate.
Returns
-------
ndarray
The coefficients of the AR representation, with ``lags`` elements.
Notes
-----
Equivalent to ``arma_impulse_response(ma, ar, leads=100)`` | arma2ar | python | statsmodels/statsmodels | statsmodels/tsa/arima_process.py | https://github.com/statsmodels/statsmodels/blob/master/statsmodels/tsa/arima_process.py | BSD-3-Clause |
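For an invertible MA(1) with theta = 0.5, the AR representation has coefficients (-theta)**k, which arma2ar reproduces:

import numpy as np
from statsmodels.tsa.arima_process import arma2ar

pi_coeffs = arma2ar(np.r_[1.0], np.r_[1.0, 0.5], lags=5)
assert np.allclose(pi_coeffs, (-0.5) ** np.arange(5))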
def ar2arma(ar_des, p, q, n=20, mse="ar", start=None):
"""
Find arma approximation to ar process.
This finds the ARMA(p,q) coefficients that minimize the integrated
squared difference between the impulse_response functions (MA
representation) of the AR and the ARMA process. This does not check
whether the MA lag polynomial of the ARMA process is invertible, neither
does it check the roots of the AR lag polynomial.
Parameters
----------
ar_des : array_like
The coefficients of original AR lag polynomial, including lag zero.
p : int
The length of desired AR lag polynomials.
q : int
The length of desired MA lag polynomials.
n : int
The number of terms of the impulse_response function to include in the
objective function for the approximation.
mse : str, 'ar'
Not used.
start : ndarray
Initial values to use when finding the approximation.
Returns
-------
ar_app : ndarray
The coefficients of the AR lag polynomials of the approximation.
ma_app : ndarray
The coefficients of the MA lag polynomials of the approximation.
res : tuple
The result of optimize.leastsq.
Notes
-----
Extension is possible if we want to match autocovariance instead
of impulse response function.
"""
# TODO: convert MA lag polynomial, ma_app, to be invertible, by mirroring
# TODO: roots outside the unit interval to ones that are inside. How to do
# TODO: this?
def msear_err(arma, ar_des):
ar, ma = np.r_[1, arma[: p - 1]], np.r_[1, arma[p - 1 :]]
ar_approx = arma_impulse_response(ma, ar, n)
return ar_des - ar_approx # ((ar - ar_approx)**2).sum()
if start is None:
arma0 = np.r_[-0.9 * np.ones(p - 1), np.zeros(q - 1)]
else:
arma0 = start
res = optimize.leastsq(msear_err, arma0, ar_des, maxfev=5000)
arma_app = np.atleast_1d(res[0])
ar_app = np.r_[1, arma_app[: p - 1]]
ma_app = np.r_[1, arma_app[p - 1 :]]
return ar_app, ma_app, res | Find arma approximation to ar process.
This finds the ARMA(p,q) coefficients that minimize the integrated
squared difference between the impulse_response functions (MA
representation) of the AR and the ARMA process. This does not check
whether the MA lag polynomial of the ARMA process is invertible, neither
does it check the roots of the AR lag polynomial.
Parameters
----------
ar_des : array_like
The coefficients of original AR lag polynomial, including lag zero.
p : int
The length of desired AR lag polynomials.
q : int
The length of desired MA lag polynomials.
n : int
The number of terms of the impulse_response function to include in the
objective function for the approximation.
mse : str, 'ar'
Not used.
start : ndarray
Initial values to use when finding the approximation.
Returns
-------
ar_app : ndarray
The coefficients of the AR lag polynomials of the approximation.
ma_app : ndarray
The coefficients of the MA lag polynomials of the approximation.
res : tuple
The result of optimize.leastsq.
Notes
-----
Extension is possible if we want to match autocovariance instead
of impulse response function. | ar2arma | python | statsmodels/statsmodels | statsmodels/tsa/arima_process.py | https://github.com/statsmodels/statsmodels/blob/master/statsmodels/tsa/arima_process.py | BSD-3-Clause |
def lpol2index(ar):
"""
Remove zeros from lag polynomial
Parameters
----------
ar : array_like
coefficients of lag polynomial
Returns
-------
coeffs : ndarray
non-zero coefficients of lag polynomial
index : ndarray
index (lags) of lag polynomial with non-zero elements
"""
with warnings.catch_warnings():
warnings.simplefilter("ignore", ComplexWarning)
ar = array_like(ar, "ar")
index = np.nonzero(ar)[0]
coeffs = ar[index]
return coeffs, index | Remove zeros from lag polynomial
Parameters
----------
ar : array_like
coefficients of lag polynomial
Returns
-------
coeffs : ndarray
non-zero coefficients of lag polynomial
index : ndarray
index (lags) of lag polynomial with non-zero elements | lpol2index | python | statsmodels/statsmodels | statsmodels/tsa/arima_process.py | https://github.com/statsmodels/statsmodels/blob/master/statsmodels/tsa/arima_process.py | BSD-3-Clause |
def index2lpol(coeffs, index):
"""
Expand coefficients to lag poly
Parameters
----------
coeffs : ndarray
non-zero coefficients of lag polynomial
index : ndarray
index (lags) of lag polynomial with non-zero elements
Returns
-------
ar : array_like
coefficients of lag polynomial
"""
n = max(index)
ar = np.zeros(n + 1)
ar[index] = coeffs
return ar | Expand coefficients to lag poly
Parameters
----------
coeffs : ndarray
non-zero coefficients of lag polynomial
index : ndarray
index (lags) of lag polynomial with non-zero elements
Returns
-------
ar : array_like
coefficients of lag polynomial | index2lpol | python | statsmodels/statsmodels | statsmodels/tsa/arima_process.py | https://github.com/statsmodels/statsmodels/blob/master/statsmodels/tsa/arima_process.py | BSD-3-Clause |
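The two helpers above are inverses; a quick round trip:

from statsmodels.tsa.arima_process import index2lpol, lpol2index

coeffs, index = lpol2index([1, 0, 0, -0.5])
# coeffs -> array([ 1. , -0.5]), index -> array([0, 3])
index2lpol(coeffs, index)  # array([ 1. ,  0. ,  0. , -0.5])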
def lpol_fima(d, n=20):
"""MA representation of fractional integration
.. math:: (1-L)^{-d}
for |d| < 0.5 (possibly also |d| < 1)
Parameters
----------
d : float
fractional power
n : int
number of terms to calculate, including lag zero
Returns
-------
ma : ndarray
coefficients of lag polynomial
"""
# hide import inside function until we use this heavily
from scipy.special import gammaln
j = np.arange(n)
return np.exp(gammaln(d + j) - gammaln(j + 1) - gammaln(d)) | MA representation of fractional integration
.. math:: (1-L)^{-d}
for |d| < 0.5 (possibly also |d| < 1)
Parameters
----------
d : float
fractional power
n : int
number of terms to calculate, including lag zero
Returns
-------
ma : ndarray
coefficients of lag polynomial | lpol_fima | python | statsmodels/statsmodels | statsmodels/tsa/arima_process.py | https://github.com/statsmodels/statsmodels/blob/master/statsmodels/tsa/arima_process.py | BSD-3-Clause |
def lpol_fiar(d, n=20):
"""AR representation of fractional integration
.. math:: (1-L)^{d}
for |d| < 0.5 (possibly also |d| < 1)
Parameters
----------
d : float
fractional power
n : int
number of terms to calculate, including lag zero
Returns
-------
ar : ndarray
coefficients of lag polynomial
Notes
-----
The first coefficient is 1; the remaining coefficients are negative.
The polynomial defines the filter ar(L) * x_t.
"""
# hide import inside function until we use this heavily
from scipy.special import gammaln
j = np.arange(n)
ar = -np.exp(gammaln(-d + j) - gammaln(j + 1) - gammaln(-d))
ar[0] = 1
return ar | AR representation of fractional integration
.. math:: (1-L)^{d}
for |d| < 0.5 (possibly also |d| < 1)
Parameters
----------
d : float
fractional power
n : int
number of terms to calculate, including lag zero
Returns
-------
ar : ndarray
coefficients of lag polynomial
Notes
-----
The first coefficient is 1; the remaining coefficients are negative.
The polynomial defines the filter ar(L) * x_t. | lpol_fiar | python | statsmodels/statsmodels | statsmodels/tsa/arima_process.py | https://github.com/statsmodels/statsmodels/blob/master/statsmodels/tsa/arima_process.py | BSD-3-Clause |
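lpol_fiar and lpol_fima produce mutually inverse filters, so their truncated convolution recovers the identity up to the truncation length:

import numpy as np
from statsmodels.tsa.arima_process import lpol_fiar, lpol_fima

d = 0.4
ar = lpol_fiar(d, n=10)  # (1 - L)**d
ma = lpol_fima(d, n=10)  # (1 - L)**(-d)
prod = np.convolve(ar, ma)[:10]
assert np.allclose(prod, np.r_[1, np.zeros(9)])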
def lpol_sdiff(s):
"""return coefficients for seasonal difference (1-L^s)
just a trivial convenience function
Parameters
----------
s : int
number of periods in season
Returns
-------
sdiff : list, length s+1
"""
return [1] + [0] * (s - 1) + [-1] | return coefficients for seasonal difference (1-L^s)
just a trivial convenience function
Parameters
----------
s : int
number of periods in season
Returns
-------
sdiff : list, length s+1 | lpol_sdiff | python | statsmodels/statsmodels | statsmodels/tsa/arima_process.py | https://github.com/statsmodels/statsmodels/blob/master/statsmodels/tsa/arima_process.py | BSD-3-Clause |
def deconvolve(num, den, n=None):
"""Deconvolves divisor out of signal, division of polynomials for n terms
calculates den^{-1} * num
Parameters
----------
num : array_like
signal or lag polynomial
den : array_like
coefficients of lag polynomial (linear filter)
n : None or int
number of terms of quotient
Returns
-------
quot : ndarray
quotient or filtered series
rem : ndarray
remainder
Notes
-----
If num is a time series, then this applies the linear filter den^{-1}.
If num and den are both lag polynomials, then this calculates the
quotient polynomial for n terms and also returns the remainder.
This is copied from scipy.signal.signaltools, with n added as an optional
parameter.
"""
num = np.atleast_1d(num)
den = np.atleast_1d(den)
N = len(num)
D = len(den)
if D > N and n is None:
quot = []
rem = num
else:
if n is None:
n = N - D + 1
impulse = np.zeros(n, float)
impulse[0] = 1
quot = signal.lfilter(num, den, impulse)
num_approx = signal.convolve(den, quot, mode="full")
if len(num) < len(num_approx): # 1d only ?
num = np.concatenate((num, np.zeros(len(num_approx) - len(num))))
rem = num - num_approx
return quot, rem | Deconvolves divisor out of signal, division of polynomials for n terms
calculates den^{-1} * num
Parameters
----------
num : array_like
signal or lag polynomial
den : array_like
coefficients of lag polynomial (linear filter)
n : None or int
number of terms of quotient
Returns
-------
quot : ndarray
quotient or filtered series
rem : ndarray
remainder
Notes
-----
If num is a time series, then this applies the linear filter den^{-1}.
If num and den are both lag polynomials, then this calculates the
quotient polynomial for n terms and also returns the remainder.
This is copied from scipy.signal.signaltools, with n added as an optional
parameter. | deconvolve | python | statsmodels/statsmodels | statsmodels/tsa/arima_process.py | https://github.com/statsmodels/statsmodels/blob/master/statsmodels/tsa/arima_process.py | BSD-3-Clause |
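Dividing 1 by the polynomial 1 - 0.5L gives the geometric expansion, which illustrates the n argument:

import numpy as np
from statsmodels.tsa.arima_process import deconvolve

quot, rem = deconvolve(np.array([1.0]), np.array([1.0, -0.5]), n=4)
# quot -> array([1.   , 0.5  , 0.25 , 0.125])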
def from_roots(cls, maroots=None, arroots=None, nobs=100):
"""
Create ArmaProcess from AR and MA polynomial roots.
Parameters
----------
maroots : array_like, optional
Roots for the MA polynomial
1 + theta_1*z + theta_2*z^2 + ..... + theta_n*z^n
arroots : array_like, optional
Roots for the AR polynomial
1 - phi_1*z - phi_2*z^2 - ..... - phi_n*z^n
nobs : int, optional
Length of simulated time series. Used, for example, if a sample
is generated.
Returns
-------
ArmaProcess
Class instance initialized with arcoefs and macoefs.
Examples
--------
>>> arroots = [1.5, -2.0]
>>> maroots = [1.25, -1.75]
>>> arma_process = sm.tsa.ArmaProcess.from_roots(maroots, arroots)
>>> arma_process.isstationary
True
>>> arma_process.isinvertible
True
"""
if arroots is not None and len(arroots):
arpoly = np.polynomial.polynomial.Polynomial.fromroots(arroots)
arcoefs = arpoly.coef[1:] / arpoly.coef[0]
else:
arcoefs = []
if maroots is not None and len(maroots):
mapoly = np.polynomial.polynomial.Polynomial.fromroots(maroots)
macoefs = mapoly.coef[1:] / mapoly.coef[0]
else:
macoefs = []
# Polynomial.fromroots returns an arbitrary leading coefficient, so
# normalize each polynomial to have a zero-lag coefficient of 1
return cls(np.r_[1, arcoefs], np.r_[1, macoefs], nobs=nobs) | Create ArmaProcess from AR and MA polynomial roots.
Parameters
----------
maroots : array_like, optional
Roots for the MA polynomial
1 + theta_1*z + theta_2*z^2 + ..... + theta_n*z^n
arroots : array_like, optional
Roots for the AR polynomial
1 - phi_1*z - phi_2*z^2 - ..... - phi_n*z^n
nobs : int, optional
Length of simulated time series. Used, for example, if a sample
is generated.
Returns
-------
ArmaProcess
Class instance initialized with arcoefs and macoefs.
Examples
--------
>>> arroots = [1.5, -2.0]
>>> maroots = [1.25, -1.75]
>>> arma_process = sm.tsa.ArmaProcess.from_roots(maroots, arroots)
>>> arma_process.isstationary
True
>>> arma_process.isinvertible
True | from_roots | python | statsmodels/statsmodels | statsmodels/tsa/arima_process.py | https://github.com/statsmodels/statsmodels/blob/master/statsmodels/tsa/arima_process.py | BSD-3-Clause |
def from_coeffs(cls, arcoefs=None, macoefs=None, nobs=100):
"""
Create ArmaProcess from an ARMA representation.
Parameters
----------
arcoefs : array_like
Coefficient for autoregressive lag polynomial, not including zero
lag. The sign is inverted to conform to the usual time series
representation of an ARMA process in statistics. See the class
docstring for more information.
macoefs : array_like
Coefficient for moving-average lag polynomial, excluding zero lag.
nobs : int, optional
Length of simulated time series. Used, for example, if a sample
is generated.
Returns
-------
ArmaProcess
Class instance initialized with arcoefs and macoefs.
Examples
--------
>>> arparams = [.75, -.25]
>>> maparams = [.65, .35]
>>> arma_process = sm.tsa.ArmaProcess.from_coeffs(arparams, maparams)
>>> arma_process.isstationary
True
>>> arma_process.isinvertible
True
"""
arcoefs = [] if arcoefs is None else arcoefs
macoefs = [] if macoefs is None else macoefs
return cls(
np.r_[1, -np.asarray(arcoefs)],
np.r_[1, np.asarray(macoefs)],
nobs=nobs,
) | Create ArmaProcess from an ARMA representation.
Parameters
----------
arcoefs : array_like
Coefficient for autoregressive lag polynomial, not including zero
lag. The sign is inverted to conform to the usual time series
representation of an ARMA process in statistics. See the class
docstring for more information.
macoefs : array_like
Coefficient for moving-average lag polynomial, excluding zero lag.
nobs : int, optional
Length of simulated time series. Used, for example, if a sample
is generated.
Returns
-------
ArmaProcess
Class instance initialized with arcoefs and macoefs.
Examples
--------
>>> arparams = [.75, -.25]
>>> maparams = [.65, .35]
>>> arma_process = sm.tsa.ArmaProcess.from_coeffs(arparams, maparams)
>>> arma_process.isstationary
True
>>> arma_process.isinvertible
True | from_coeffs | python | statsmodels/statsmodels | statsmodels/tsa/arima_process.py | https://github.com/statsmodels/statsmodels/blob/master/statsmodels/tsa/arima_process.py | BSD-3-Clause |
def from_estimation(cls, model_results, nobs=None):
"""
Create an ArmaProcess from the results of an ARIMA estimation.
Parameters
----------
model_results : ARIMAResults instance
A fitted model.
nobs : int, optional
If None, nobs is taken from the results.
Returns
-------
ArmaProcess
Class instance initialized from model_results.
See Also
--------
statsmodels.tsa.arima.model.ARIMA
The model class used to create the ArmaProcess
"""
nobs = nobs or model_results.nobs
return cls(
model_results.polynomial_reduced_ar,
model_results.polynomial_reduced_ma,
nobs=nobs,
) | Create an ArmaProcess from the results of an ARIMA estimation.
Parameters
----------
model_results : ARIMAResults instance
A fitted model.
nobs : int, optional
If None, nobs is taken from the results.
Returns
-------
ArmaProcess
Class instance initialized from model_results.
See Also
--------
statsmodels.tsa.arima.model.ARIMA
The models class used to create the ArmaProcess | from_estimation | python | statsmodels/statsmodels | statsmodels/tsa/arima_process.py | https://github.com/statsmodels/statsmodels/blob/master/statsmodels/tsa/arima_process.py | BSD-3-Clause |
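A minimal end-to-end sketch: simulate data, fit statsmodels.tsa.arima.model.ARIMA, then recover the process; the exact fitted values will vary from run to run:

import numpy as np
from statsmodels.tsa.arima.model import ARIMA
from statsmodels.tsa.arima_process import ArmaProcess, arma_generate_sample

np.random.seed(0)
y = arma_generate_sample(np.r_[1, -0.7], np.r_[1, 0.3], 500, burnin=100)
res = ARIMA(y, order=(1, 0, 1), trend="n").fit()
proc = ArmaProcess.from_estimation(res)
proc.isstationary  # True if the estimated AR roots lie outside the unit circle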
def arroots(self):
"""Roots of autoregressive lag-polynomial"""
return self.arpoly.roots() | Roots of autoregressive lag-polynomial | arroots | python | statsmodels/statsmodels | statsmodels/tsa/arima_process.py | https://github.com/statsmodels/statsmodels/blob/master/statsmodels/tsa/arima_process.py | BSD-3-Clause |
def maroots(self):
"""Roots of moving average lag-polynomial"""
return self.mapoly.roots() | Roots of moving average lag-polynomial | maroots | python | statsmodels/statsmodels | statsmodels/tsa/arima_process.py | https://github.com/statsmodels/statsmodels/blob/master/statsmodels/tsa/arima_process.py | BSD-3-Clause |
def isstationary(self):
"""
Arma process is stationary if AR roots are outside unit circle.
Returns
-------
bool
True if autoregressive roots are outside unit circle.
"""
return bool(np.all(np.abs(self.arroots) > 1.0))
Returns
-------
bool
True if autoregressive roots are outside unit circle. | isstationary | python | statsmodels/statsmodels | statsmodels/tsa/arima_process.py | https://github.com/statsmodels/statsmodels/blob/master/statsmodels/tsa/arima_process.py | BSD-3-Clause |
def isinvertible(self):
"""
Arma process is invertible if MA roots are outside unit circle.
Returns
-------
bool
True if moving average roots are outside unit circle.
"""
return bool(np.all(np.abs(self.maroots) > 1))
Returns
-------
bool
True if moving average roots are outside unit circle. | isinvertible | python | statsmodels/statsmodels | statsmodels/tsa/arima_process.py | https://github.com/statsmodels/statsmodels/blob/master/statsmodels/tsa/arima_process.py | BSD-3-Clause |
def invertroots(self, retnew=False):
"""
Make MA polynomial invertible by inverting roots inside unit circle.
Parameters
----------
retnew : bool
If False (default), then return the lag-polynomial as array.
If True, then return a new instance with invertible MA-polynomial.
Returns
-------
manew : ndarray
A new invertible MA lag-polynomial, returned if retnew is false.
wasinvertible : bool
True if the MA lag-polynomial was already invertible, returned if
retnew is false.
armaprocess : new instance of class
If retnew is true, then return a new instance with invertible
MA-polynomial.
"""
# TODO: variable returns like this?
pr = self.maroots
mainv = self.ma
invertible = self.isinvertible
if not invertible:
pr[np.abs(pr) < 1] = 1.0 / pr[np.abs(pr) < 1]
pnew = np.polynomial.Polynomial.fromroots(pr)
mainv = pnew.coef / pnew.coef[0]
if retnew:
return self.__class__(self.ar, mainv, nobs=self.nobs)
else:
return mainv, invertible | Make MA polynomial invertible by inverting roots inside unit circle.
Parameters
----------
retnew : bool
If False (default), then return the lag-polynomial as array.
If True, then return a new instance with invertible MA-polynomial.
Returns
-------
manew : ndarray
A new invertible MA lag-polynomial, returned if retnew is false.
wasinvertible : bool
True if the MA lag-polynomial was already invertible, returned if
retnew is false.
armaprocess : new instance of class
If retnew is true, then return a new instance with invertible
MA-polynomial. | invertroots | python | statsmodels/statsmodels | statsmodels/tsa/arima_process.py | https://github.com/statsmodels/statsmodels/blob/master/statsmodels/tsa/arima_process.py | BSD-3-Clause |
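A small sketch of invertroots on a deliberately non-invertible MA(1); the MA root at -0.5 (inside the unit circle) is flipped to -2.0, which rescales the lag polynomial to roughly [1, 0.5]. Printed values are approximate.
import numpy as np
from statsmodels.tsa.arima_process import ArmaProcess

process = ArmaProcess(ar=[1.0], ma=[1.0, 2.0])  # MA root at -0.5
print(process.isinvertible)         # False
mainv, was_invertible = process.invertroots()
print(mainv, was_invertible)        # approximately [1.  0.5] False
fixed = process.invertroots(retnew=True)
print(fixed.isinvertible)           # True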
def mackinnonp(teststat, regression="c", N=1, lags=None):
"""
Returns MacKinnon's approximate p-value for teststat.
Parameters
----------
teststat : float
"T-value" from an Augmented Dickey-Fuller regression.
regression : str {"c", "n", "ct", "ctt"}
This is the method of regression that was used. Following MacKinnon's
notation, this can be "c" for constant, "n" for no constant, "ct" for
constant and trend, and "ctt" for constant, trend, and trend-squared.
N : int
The number of series believed to be I(1). For (Augmented) Dickey-
Fuller N = 1.
Returns
-------
p-value : float
The p-value for the ADF statistic estimated using MacKinnon 1994.
References
----------
.. [*] MacKinnon, J.G. 1994 "Approximate Asymptotic Distribution Functions
for Unit-Root and Cointegration Tests." Journal of Business & Economic
Statistics, 12.2, 167-76.
Notes
-----
For (A)DF
H_0: AR coefficient = 1
H_a: AR coefficient < 1
"""
maxstat = _tau_maxs[regression]
minstat = _tau_mins[regression]
starstat = _tau_stars[regression]
if teststat > maxstat[N-1]:
return 1.0
elif teststat < minstat[N-1]:
return 0.0
if teststat <= starstat[N-1]:
tau_coef = _tau_smallps[regression][N-1]
else:
# Note: above is only for z stats
tau_coef = _tau_largeps[regression][N-1]
return norm.cdf(polyval(tau_coef[::-1], teststat)) | Returns MacKinnon's approximate p-value for teststat.
Parameters
----------
teststat : float
"T-value" from an Augmented Dickey-Fuller regression.
regression : str {"c", "n", "ct", "ctt"}
This is the method of regression that was used. Following MacKinnon's
notation, this can be "c" for constant, "n" for no constant, "ct" for
constant and trend, and "ctt" for constant, trend, and trend-squared.
N : int
The number of series believed to be I(1). For (Augmented) Dickey-
Fuller N = 1.
Returns
-------
p-value : float
The p-value for the ADF statistic estimated using MacKinnon 1994.
References
----------
.. [*] MacKinnon, J.G. 1994 "Approximate Asymptotic Distribution Functions
for Unit-Root and Cointegration Tests." Journal of Business & Economic
Statistics, 12.2, 167-76.
Notes
-----
For (A)DF
H_0: AR coefficient = 1
H_a: AR coefficient < 1 | mackinnonp | python | statsmodels/statsmodels | statsmodels/tsa/adfvalues.py | https://github.com/statsmodels/statsmodels/blob/master/statsmodels/tsa/adfvalues.py | BSD-3-Clause |
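A quick sketch of calling mackinnonp directly (normally adfuller does this internally); this assumes the import path statsmodels.tsa.adfvalues.
from statsmodels.tsa.adfvalues import mackinnonp

# Approximate p-value for an ADF t-statistic of -3.5, constant-only case
p = mackinnonp(-3.5, regression="c", N=1)
print(round(p, 4))  # small, consistent with rejecting the unit-root null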
def mackinnoncrit(N=1, regression="c", nobs=inf):
"""
Returns the critical values for cointegrating and the ADF test.
In 2010 MacKinnon updated the values of his 1994 paper with critical values
for the augmented Dickey-Fuller tests. These new values are to be
preferred and are used here.
Parameters
----------
N : int
The number of series of I(1) series for which the null of
non-cointegration is being tested. For N > 12, the critical values
are linearly interpolated (not yet implemented). For the ADF test,
N = 1.
regression : str {'c', 'ct', 'ctt', 'n'}
Following MacKinnon (1996), these stand for the type of regression run.
'c' for constant and no trend, 'ct' for constant with a linear trend,
'ctt' for constant with a linear and quadratic trend, and 'n' for
no constant. The values for the no constant case are taken from the
1996 paper, as they were not updated for 2010 due to the unrealistic
assumptions that would underlie such a case.
nobs : int or np.inf
This is the sample size. If the sample size is numpy.inf, then the
asymptotic critical values are returned.
References
----------
.. [*] MacKinnon, J.G. 1994 "Approximate Asymptotic Distribution Functions
for Unit-Root and Cointegration Tests." Journal of Business & Economic
Statistics, 12.2, 167-76.
.. [*] MacKinnon, J.G. 2010. "Critical Values for Cointegration Tests."
Queen's University, Dept of Economics Working Papers 1227.
http://ideas.repec.org/p/qed/wpaper/1227.html
"""
reg = regression
if reg not in ['c', 'ct', 'n', 'ctt']:
raise ValueError("regression keyword %s not understood" % reg)
tau = tau_2010s[reg]
if nobs is inf:
return tau[N-1, :, 0]
else:
val = tau[N-1, :, ::-1]
return polyval(val.T, 1./nobs) | Returns the critical values for cointegrating and the ADF test.
In 2010 MacKinnon updated the values of his 1994 paper with critical values
for the augmented Dickey-Fuller tests. These new values are to be
preferred and are used here.
Parameters
----------
N : int
The number of series of I(1) series for which the null of
non-cointegration is being tested. For N > 12, the critical values
are linearly interpolated (not yet implemented). For the ADF test,
N = 1.
regression : str {'c', 'ct', 'ctt', 'n'}
Following MacKinnon (1996), these stand for the type of regression run.
'c' for constant and no trend, 'ct' for constant with a linear trend,
'ctt' for constant with a linear and quadratic trend, and 'n' for
no constant. The values for the no constant case are taken from the
1996 paper, as they were not updated for 2010 due to the unrealistic
assumptions that would underlie such a case.
nobs : int or np.inf
This is the sample size. If the sample size is numpy.inf, then the
asymptotic critical values are returned.
References
----------
.. [*] MacKinnon, J.G. 1994 "Approximate Asymptotic Distribution Functions
for Unit-Root and Cointegration Tests." Journal of Business & Economic
Statistics, 12.2, 167-76.
.. [*] MacKinnon, J.G. 2010. "Critical Values for Cointegration Tests."
Queen's University, Dept of Economics Working Papers 1227.
http://ideas.repec.org/p/qed/wpaper/1227.html | mackinnoncrit | python | statsmodels/statsmodels | statsmodels/tsa/adfvalues.py | https://github.com/statsmodels/statsmodels/blob/master/statsmodels/tsa/adfvalues.py | BSD-3-Clause |
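A companion sketch for mackinnoncrit: the returned array holds the 1%, 5%, and 10% critical values, finite-sample when nobs is given and asymptotic when it is np.inf.
import numpy as np
from statsmodels.tsa.adfvalues import mackinnoncrit

print(mackinnoncrit(N=1, regression="c", nobs=100))     # finite-sample
print(mackinnoncrit(N=1, regression="c", nobs=np.inf))  # asymptotic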
def loglike(self, params):
"""
Loglikelihood for timeseries model
Parameters
----------
params : array_like
The model parameters
Notes
-----
Needs to be overwritten by a subclass.
"""
raise NotImplementedError | Loglikelihood for timeseries model
Parameters
----------
params : array_like
The model parameters
Notes
-----
Needs to be overwritten by a subclass. | loglike | python | statsmodels/statsmodels | statsmodels/tsa/mlemodel.py | https://github.com/statsmodels/statsmodels/blob/master/statsmodels/tsa/mlemodel.py | BSD-3-Clause |
def score(self, params):
"""
Score vector for Arma model
"""
jac = ndt.Jacobian(self.loglike, stepMax=1e-4)
return jac(params)[-1] | Score vector for Arma model | score | python | statsmodels/statsmodels | statsmodels/tsa/mlemodel.py | https://github.com/statsmodels/statsmodels/blob/master/statsmodels/tsa/mlemodel.py | BSD-3-Clause |
def hessian(self, params):
"""
Hessian of arma model. Currently uses numdifftools
"""
Hfun = ndt.Jacobian(self.score, stepMax=1e-4)
return Hfun(params)[-1] | Hessian of arma model. Currently uses numdifftools | hessian | python | statsmodels/statsmodels | statsmodels/tsa/mlemodel.py | https://github.com/statsmodels/statsmodels/blob/master/statsmodels/tsa/mlemodel.py | BSD-3-Clause |
def fit(self, start_params=None, maxiter=5000, method='fmin', tol=1e-08):
'''Estimate the model by minimizing the negative loglikelihood.
TODO: does this need to be overwritten?
'''
if start_params is None and hasattr(self, '_start_params'):
start_params = self._start_params
#start_params = np.concatenate((0.05*np.ones(self.nar + self.nma), [1]))
mlefit = super().fit(start_params=start_params,
maxiter=maxiter, method=method, tol=tol)
return mlefit | estimate model by minimizing negative loglikelihood
TODO: does this need to be overwritten? | fit | python | statsmodels/statsmodels | statsmodels/tsa/mlemodel.py | https://github.com/statsmodels/statsmodels/blob/master/statsmodels/tsa/mlemodel.py | BSD-3-Clause |
def fit(self, *, inner_iter=None, outer_iter=None, fit_kwargs=None):
"""
Estimate STL and forecasting model parameters.
Parameters
----------\n%(fit_params)s
fit_kwargs : dict[str, Any]
Any additional keyword arguments to pass to ``model``'s ``fit``
method when estimating the model on the decomposed residuals.
Returns
-------
STLForecastResults
Results with forecasting methods.
"""
fit_kwargs = {} if fit_kwargs is None else fit_kwargs
stl = STL(self._endog, **self._stl_kwargs)
stl_fit: DecomposeResult = stl.fit(
inner_iter=inner_iter, outer_iter=outer_iter
)
model_endog = stl_fit.trend + stl_fit.resid
mod = self._model(model_endog, **self._model_kwargs)
res = mod.fit(**fit_kwargs)
if not hasattr(res, "forecast"):
raise AttributeError(
"The model's result must expose a ``forecast`` method."
)
return STLForecastResults(stl, stl_fit, mod, res, self._endog) | Estimate STL and forecasting model parameters.
Parameters
----------\n%(fit_params)s
fit_kwargs : dict[str, Any]
Any additional keyword arguments to pass to ``model``'s ``fit``
method when estimating the model on the decomposed residuals.
Returns
-------
STLForecastResults
Results with forecasting methods. | fit | python | statsmodels/statsmodels | statsmodels/tsa/forecasting/stl.py | https://github.com/statsmodels/statsmodels/blob/master/statsmodels/tsa/forecasting/stl.py | BSD-3-Clause |
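A minimal end-to-end sketch of STLForecast on a seasonal monthly series, using ARIMA as the forecasting model for the deseasonalized data; the series here is synthetic and purely illustrative.
import numpy as np
import pandas as pd
from statsmodels.tsa.arima.model import ARIMA
from statsmodels.tsa.forecasting.stl import STLForecast

rng = np.random.default_rng(0)
index = pd.date_range("2000-01-01", periods=120, freq="MS")
trend = 50 + 0.1 * np.arange(120)
seasonal = 10 * np.sin(2 * np.pi * np.arange(120) / 12)
y = pd.Series(trend + seasonal + rng.normal(size=120), index=index)
stlf = STLForecast(y, ARIMA, model_kwargs={"order": (1, 1, 0)}, period=12)
res = stlf.fit()
print(res.forecast(12))  # seasonal component is added back automatically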
def period(self) -> int:
"""The period of the seasonal component"""
return self._stl.period | The period of the seasonal component | period | python | statsmodels/statsmodels | statsmodels/tsa/forecasting/stl.py | https://github.com/statsmodels/statsmodels/blob/master/statsmodels/tsa/forecasting/stl.py | BSD-3-Clause |
def stl(self) -> STL:
"""The STL instance used to decompose the time series"""
return self._stl | The STL instance used to decompose the time series | stl | python | statsmodels/statsmodels | statsmodels/tsa/forecasting/stl.py | https://github.com/statsmodels/statsmodels/blob/master/statsmodels/tsa/forecasting/stl.py | BSD-3-Clause |
def result(self) -> DecomposeResult:
"""The result of applying STL to the data"""
return self._result | The result of applying STL to the data | result | python | statsmodels/statsmodels | statsmodels/tsa/forecasting/stl.py | https://github.com/statsmodels/statsmodels/blob/master/statsmodels/tsa/forecasting/stl.py | BSD-3-Clause |
def model(self) -> Any:
"""The model fit to the additively deseasonalized data"""
return self._model | The model fit to the additively deseasonalized data | model | python | statsmodels/statsmodels | statsmodels/tsa/forecasting/stl.py | https://github.com/statsmodels/statsmodels/blob/master/statsmodels/tsa/forecasting/stl.py | BSD-3-Clause |
def model_result(self) -> Any:
"""The result class from the estimated model"""
return self._model_result | The result class from the estimated model | model_result | python | statsmodels/statsmodels | statsmodels/tsa/forecasting/stl.py | https://github.com/statsmodels/statsmodels/blob/master/statsmodels/tsa/forecasting/stl.py | BSD-3-Clause |
def summary(self) -> Summary:
"""
Summary of both the STL decomposition and the model fit.
Returns
-------
Summary
The summary of the model fit and the STL decomposition.
Notes
-----
Requires that the model's result class supports ``summary`` and
returns a ``Summary`` object.
"""
if not hasattr(self._model_result, "summary"):
raise AttributeError(
"The model result does not have a summary attribute."
)
summary: Summary = self._model_result.summary()
if not isinstance(summary, Summary):
raise TypeError(
"The model result's summary is not a Summary object."
)
summary.tables[0].title = (
"STL Decomposition and " + summary.tables[0].title
)
config = self._stl.config
left_keys = ("period", "seasonal", "robust")
left_data = []
left_stubs = []
right_data = []
right_stubs = []
for key in config:
new = key.capitalize()
new = new.replace("_", " ")
if new in ("Trend", "Low Pass"):
new += " Length"
is_left = any(key.startswith(val) for val in left_keys)
new += ":"
stub = f"{new:<23s}"
val = f"{str(config[key]):>13s}"
if is_left:
left_stubs.append(stub)
left_data.append([val])
else:
right_stubs.append(" " * 6 + stub)
right_data.append([val])
tab = SimpleTable(
left_data, stubs=tuple(left_stubs), title="STL Configuration"
)
tab.extend_right(SimpleTable(right_data, stubs=right_stubs))
summary.tables.append(tab)
return summary | Summary of both the STL decomposition and the model fit.
Returns
-------
Summary
The summary of the model fit and the STL decomposition.
Notes
-----
Requires that the model's result class supports ``summary`` and
returns a ``Summary`` object. | summary | python | statsmodels/statsmodels | statsmodels/tsa/forecasting/stl.py | https://github.com/statsmodels/statsmodels/blob/master/statsmodels/tsa/forecasting/stl.py | BSD-3-Clause |
def _get_seasonal_prediction(
self,
start: Optional[DateLike],
end: Optional[DateLike],
dynamic: Union[bool, DateLike],
) -> np.ndarray:
"""
Get STL's seasonal in- and out-of-sample predictions
Parameters
----------
start : int, str, or datetime, optional
Zero-indexed observation number at which to start forecasting,
i.e., the first forecast is start. Can also be a date string to
parse or a datetime type. Default is the zeroth observation.
end : int, str, or datetime, optional
Zero-indexed observation number at which to end forecasting, i.e.,
the last forecast is end. Can also be a date string to
parse or a datetime type. However, if the dates index does not
have a fixed frequency, end must be an integer index if you
want out of sample prediction. Default is the last observation in
the sample.
dynamic : bool, int, str, or datetime, optional
Integer offset relative to `start` at which to begin dynamic
prediction. Can also be an absolute date string to parse or a
datetime type (these are not interpreted as offsets).
Prior to this observation, true endogenous values will be used for
prediction; starting with this observation and continuing through
the end of prediction, forecasted endogenous values will be used
instead.
Returns
-------
ndarray
Array containing the seasonal predictions.
"""
data = PandasData(pd.Series(self._endog), index=self._index)
if start is None:
start = 0
(start, end, out_of_sample, prediction_index) = get_prediction_index(
start, end, self._nobs, self._index, data=data
)
if isinstance(dynamic, (str, dt.datetime, pd.Timestamp)):
dynamic, _, _ = get_index_loc(dynamic, self._index)
dynamic = dynamic - start
elif dynamic is True:
dynamic = 0
elif dynamic is False:
# If `dynamic=False`, then no dynamic predictions
dynamic = None
nobs = self._nobs
dynamic, _ = _check_dynamic(dynamic, start, end, nobs)
in_sample_end = end + 1 if dynamic is None else dynamic
seasonal = np.asarray(self._result.seasonal)
predictions = seasonal[start:in_sample_end]
oos = np.empty((0,))
if dynamic is not None:
num = out_of_sample + end + 1 - dynamic
oos = self._seasonal_forecast(num, None, offset=dynamic)
elif out_of_sample:
oos = self._seasonal_forecast(out_of_sample, None)
oos_start = max(start - nobs, 0)
oos = oos[oos_start:]
predictions = np.r_[predictions, oos]
return predictions | Get STLs seasonal in- and out-of-sample predictions
Parameters
----------
start : int, str, or datetime, optional
Zero-indexed observation number at which to start forecasting,
i.e., the first forecast is start. Can also be a date string to
parse or a datetime type. Default is the zeroth observation.
end : int, str, or datetime, optional
Zero-indexed observation number at which to end forecasting, i.e.,
the last forecast is end. Can also be a date string to
parse or a datetime type. However, if the dates index does not
have a fixed frequency, end must be an integer index if you
want out of sample prediction. Default is the last observation in
the sample.
dynamic : bool, int, str, or datetime, optional
Integer offset relative to `start` at which to begin dynamic
prediction. Can also be an absolute date string to parse or a
datetime type (these are not interpreted as offsets).
Prior to this observation, true endogenous values will be used for
prediction; starting with this observation and continuing through
the end of prediction, forecasted endogenous values will be used
instead.
Returns
-------
ndarray
Array containing the seasonal predictions. | _get_seasonal_prediction | python | statsmodels/statsmodels | statsmodels/tsa/forecasting/stl.py | https://github.com/statsmodels/statsmodels/blob/master/statsmodels/tsa/forecasting/stl.py | BSD-3-Clause |
def _seasonal_forecast(
self, steps: int, index: Optional[pd.Index], offset=None
) -> Union[pd.Series, np.ndarray]:
"""
Get the seasonal component of the forecast
Parameters
----------
steps : int
The number of steps required.
index : pd.Index
A pandas index to use. If None, returns an ndarray.
offset : int
The index of the first out-of-sample observation. If None, uses
nobs.
Returns
-------
seasonal : {ndarray, Series}
The seasonal component.
"""
period = self.period
seasonal = np.asarray(self._result.seasonal)
offset = self._nobs if offset is None else offset
seasonal = seasonal[offset - period : offset]
seasonal = np.tile(seasonal, steps // period + ((steps % period) != 0))
seasonal = seasonal[:steps]
if index is not None:
seasonal = pd.Series(seasonal, index=index)
return seasonal | Get the seasonal component of the forecast
Parameters
----------
steps : int
The number of steps required.
index : pd.Index
A pandas index to use. If None, returns an ndarray.
offset : int
The index of the first out-of-sample observation. If None, uses
nobs.
Returns
-------
seasonal : {ndarray, Series}
The seasonal component. | _seasonal_forecast | python | statsmodels/statsmodels | statsmodels/tsa/forecasting/stl.py | https://github.com/statsmodels/statsmodels/blob/master/statsmodels/tsa/forecasting/stl.py | BSD-3-Clause |
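The tiling logic above is easy to check in isolation; a sketch with period 4 and 6 forecast steps shows the last observed cycle being repeated and truncated.
import numpy as np

seasonal = np.array([1.0, -2.0, 0.5, 0.5])  # last observed cycle, period = 4
steps = 6
reps = steps // 4 + ((steps % 4) != 0)      # 2 repetitions
forecast_seasonal = np.tile(seasonal, reps)[:steps]
print(forecast_seasonal)                    # [ 1.  -2.   0.5  0.5  1.  -2. ]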
def forecast(
self, steps: int = 1, **kwargs: dict[str, Any]
) -> Union[np.ndarray, pd.Series]:
"""
Out-of-sample forecasts
Parameters
----------
steps : int, str, or datetime, optional
If an integer, the number of steps to forecast from the end of the
sample. Can also be a date string to parse or a datetime type.
However, if the dates index does not have a fixed frequency, steps
must be an integer. Default is 1.
**kwargs
Additional arguments may be required for forecasting beyond the end
of the sample. These arguments are passed into the time series
model results' ``forecast`` method.
Returns
-------
forecast : {ndarray, Series}
Out of sample forecasts
"""
forecast = self._model_result.forecast(steps=steps, **kwargs)
index = forecast.index if isinstance(forecast, pd.Series) else None
return forecast + self._seasonal_forecast(steps, index) | Out-of-sample forecasts
Parameters
----------
steps : int, str, or datetime, optional
If an integer, the number of steps to forecast from the end of the
sample. Can also be a date string to parse or a datetime type.
However, if the dates index does not have a fixed frequency, steps
must be an integer. Default is 1.
**kwargs
Additional arguments may be required for forecasting beyond the end
of the sample. These arguments are passed into the time series
model results' ``forecast`` method.
Returns
-------
forecast : {ndarray, Series}
Out of sample forecasts | forecast | python | statsmodels/statsmodels | statsmodels/tsa/forecasting/stl.py | https://github.com/statsmodels/statsmodels/blob/master/statsmodels/tsa/forecasting/stl.py | BSD-3-Clause |
def get_prediction(
self,
start: Optional[DateLike] = None,
end: Optional[DateLike] = None,
dynamic: Union[bool, DateLike] = False,
**kwargs: dict[str, Any],
):
"""
In-sample prediction and out-of-sample forecasting
Parameters
----------
start : int, str, or datetime, optional
Zero-indexed observation number at which to start forecasting,
i.e., the first forecast is start. Can also be a date string to
parse or a datetime type. Default is the zeroth observation.
end : int, str, or datetime, optional
Zero-indexed observation number at which to end forecasting, i.e.,
the last forecast is end. Can also be a date string to
parse or a datetime type. However, if the dates index does not
have a fixed frequency, end must be an integer index if you
want out of sample prediction. Default is the last observation in
the sample.
dynamic : bool, int, str, or datetime, optional
Integer offset relative to `start` at which to begin dynamic
prediction. Can also be an absolute date string to parse or a
datetime type (these are not interpreted as offsets).
Prior to this observation, true endogenous values will be used for
prediction; starting with this observation and continuing through
the end of prediction, forecasted endogenous values will be used
instead.
**kwargs
Additional arguments may be required for forecasting beyond the end
of the sample. These arguments are passed into the time series
model results' ``get_prediction`` method.
Returns
-------
PredictionResults
PredictionResults instance containing in-sample predictions,
out-of-sample forecasts, and prediction intervals.
"""
pred = self._model_result.get_prediction(
start=start, end=end, dynamic=dynamic, **kwargs
)
seasonal_prediction = self._get_seasonal_prediction(
start, end, dynamic
)
mean = pred.predicted_mean + seasonal_prediction
try:
var_pred_mean = pred.var_pred_mean
except (AttributeError, NotImplementedError):
# Allow models that do not return var_pred_mean
import warnings
warnings.warn(
"The variance of the predicted mean is not available using "
f"the {self.model.__class__.__name__} model class.",
UserWarning,
stacklevel=2,
)
var_pred_mean = np.nan + mean.copy()
return PredictionResults(
mean, var_pred_mean, dist="norm", row_labels=pred.row_labels
) | In-sample prediction and out-of-sample forecasting
Parameters
----------
start : int, str, or datetime, optional
Zero-indexed observation number at which to start forecasting,
i.e., the first forecast is start. Can also be a date string to
parse or a datetime type. Default is the zeroth observation.
end : int, str, or datetime, optional
Zero-indexed observation number at which to end forecasting, i.e.,
the last forecast is end. Can also be a date string to
parse or a datetime type. However, if the dates index does not
have a fixed frequency, end must be an integer index if you
want out of sample prediction. Default is the last observation in
the sample.
dynamic : bool, int, str, or datetime, optional
Integer offset relative to `start` at which to begin dynamic
prediction. Can also be an absolute date string to parse or a
datetime type (these are not interpreted as offsets).
Prior to this observation, true endogenous values will be used for
prediction; starting with this observation and continuing through
the end of prediction, forecasted endogenous values will be used
instead.
**kwargs
Additional arguments may be required for forecasting beyond the end
of the sample. These arguments are passed into the time series
model results' ``get_prediction`` method.
Returns
-------
PredictionResults
PredictionResults instance containing in-sample predictions,
out-of-sample forecasts, and prediction intervals. | get_prediction | python | statsmodels/statsmodels | statsmodels/tsa/forecasting/stl.py | https://github.com/statsmodels/statsmodels/blob/master/statsmodels/tsa/forecasting/stl.py | BSD-3-Clause |
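A sketch of the prediction path on the same kind of synthetic series as in the fit example above; the returned PredictionResults exposes the mean and normal-theory intervals.
import numpy as np
import pandas as pd
from statsmodels.tsa.arima.model import ARIMA
from statsmodels.tsa.forecasting.stl import STLForecast

rng = np.random.default_rng(0)
index = pd.date_range("2000-01-01", periods=120, freq="MS")
y = pd.Series(50 + 10 * np.sin(2 * np.pi * np.arange(120) / 12)
              + rng.normal(size=120), index=index)
res = STLForecast(y, ARIMA, model_kwargs={"order": (1, 1, 0)},
                  period=12).fit()
pred = res.get_prediction(start=100, end=131)  # in-sample plus 12 steps
print(pred.predicted_mean.tail())
print(pred.conf_int(alpha=0.05).tail())        # 95% prediction intervals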
def deseasonalize(self) -> bool:
"""Whether to deseasonalize the data"""
return self._deseasonalize | Whether to deseasonalize the data | deseasonalize | python | statsmodels/statsmodels | statsmodels/tsa/forecasting/theta.py | https://github.com/statsmodels/statsmodels/blob/master/statsmodels/tsa/forecasting/theta.py | BSD-3-Clause |
def period(self) -> int:
"""The period of the seasonality"""
return self._period | The period of the seasonality | period | python | statsmodels/statsmodels | statsmodels/tsa/forecasting/theta.py | https://github.com/statsmodels/statsmodels/blob/master/statsmodels/tsa/forecasting/theta.py | BSD-3-Clause |
def use_test(self) -> bool:
"""Whether to test the data for seasonality"""
return self._use_test | Whether to test the data for seasonality | use_test | python | statsmodels/statsmodels | statsmodels/tsa/forecasting/theta.py | https://github.com/statsmodels/statsmodels/blob/master/statsmodels/tsa/forecasting/theta.py | BSD-3-Clause |
def difference(self) -> bool:
"""Whether the data is differenced in the seasonality test"""
return self._diff | Whether the data is differenced in the seasonality test | difference | python | statsmodels/statsmodels | statsmodels/tsa/forecasting/theta.py | https://github.com/statsmodels/statsmodels/blob/master/statsmodels/tsa/forecasting/theta.py | BSD-3-Clause |
def method(self) -> str:
"""The method used to deseasonalize the data"""
return self._method | The method used to deseasonalize the data | method | python | statsmodels/statsmodels | statsmodels/tsa/forecasting/theta.py | https://github.com/statsmodels/statsmodels/blob/master/statsmodels/tsa/forecasting/theta.py | BSD-3-Clause |
def params(self) -> pd.Series:
"""The forecasting model parameters"""
return pd.Series([self._b0, self._alpha], index=["b0", "alpha"]) | The forecasting model parameters | params | python | statsmodels/statsmodels | statsmodels/tsa/forecasting/theta.py | https://github.com/statsmodels/statsmodels/blob/master/statsmodels/tsa/forecasting/theta.py | BSD-3-Clause |
def sigma2(self) -> float:
"""The estimated residual variance"""
if self._sigma2 is None:
mod = SARIMAX(self.model._y, order=(0, 1, 1), trend="c")
res = mod.fit(disp=False)
self._sigma2 = np.asarray(res.params)[-1]
assert self._sigma2 is not None
return self._sigma2 | The estimated residual variance | sigma2 | python | statsmodels/statsmodels | statsmodels/tsa/forecasting/theta.py | https://github.com/statsmodels/statsmodels/blob/master/statsmodels/tsa/forecasting/theta.py | BSD-3-Clause |
def model(self) -> ThetaModel:
"""The model used to produce the results"""
return self._model | The model used to produce the results | model | python | statsmodels/statsmodels | statsmodels/tsa/forecasting/theta.py | https://github.com/statsmodels/statsmodels/blob/master/statsmodels/tsa/forecasting/theta.py | BSD-3-Clause |
def summary(self) -> Summary:
"""
Summarize the model
Returns
-------
Summary
This holds the summary table and text, which can be printed or
converted to various output formats.
See Also
--------
statsmodels.iolib.summary.Summary
"""
model = self.model
smry = Summary()
model_name = type(model).__name__
title = model_name + " Results"
method = "MLE" if self._use_mle else "OLS/SES"
is_series = isinstance(model.endog_orig, pd.Series)
index = getattr(model.endog_orig, "index", None)
if is_series and isinstance(index, (pd.DatetimeIndex, pd.PeriodIndex)):
sample = [index[0].strftime("%m-%d-%Y")]
sample += ["- " + index[-1].strftime("%m-%d-%Y")]
else:
sample = [str(0), str(model.endog_orig.shape[0])]
dep_name = getattr(model.endog_orig, "name", "endog") or "endog"
top_left = [
("Dep. Variable:", [dep_name]),
("Method:", [method]),
("Date:", None),
("Time:", None),
("Sample:", [sample[0]]),
("", [sample[1]]),
]
method = (
"Multiplicative" if model.method.startswith("mul") else "Additive"
)
top_right = [
("No. Observations:", [str(self._nobs)]),
("Deseasonalized:", [str(model.deseasonalize)]),
]
if model.deseasonalize:
top_right.extend(
[
("Deseas. Method:", [method]),
("Period:", [str(model.period)]),
("", [""]),
("", [""]),
]
)
else:
top_right.extend([("", [""])] * 4)
smry.add_table_2cols(
self, gleft=top_left, gright=top_right, title=title
)
table_fmt = {"data_fmts": ["%s", "%#0.4g"], "data_aligns": "r"}
data = np.asarray(self.params)[:, None]
st = SimpleTable(
data,
["Parameters", "Estimate"],
list(self.params.index),
title="Parameter Estimates",
txt_fmt=table_fmt,
)
smry.tables.append(st)
return smry | Summarize the model
Returns
-------
Summary
This holds the summary table and text, which can be printed or
converted to various output formats.
See Also
--------
statsmodels.iolib.summary.Summary | summary | python | statsmodels/statsmodels | statsmodels/tsa/forecasting/theta.py | https://github.com/statsmodels/statsmodels/blob/master/statsmodels/tsa/forecasting/theta.py | BSD-3-Clause |
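A brief sketch of the Theta results object in use, assuming the public ThetaModel entry point; fit() estimates b0 and alpha, and summary() renders the table built above. The data are synthetic.
import numpy as np
import pandas as pd
from statsmodels.tsa.forecasting.theta import ThetaModel

rng = np.random.default_rng(1)
index = pd.date_range("2000-01-01", periods=96, freq="MS")
y = pd.Series(100 + np.arange(96) + rng.normal(scale=5, size=96), index=index)
res = ThetaModel(y).fit()
print(res.params)        # b0 and alpha
print(res.summary())
print(res.forecast(12))  # Theta forecasts, reseasonalized if applicable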
def concat(series, axis=0, allow_mix=False):
"""
Concatenate a set of series.
Parameters
----------
series : iterable
An iterable of series to be concatenated
axis : int, optional
The axis along which to concatenate. Default is 0 (rows).
allow_mix : bool
Whether or not to allow a mix of pandas and non-pandas objects. Default
is False. If true, the returned object is an ndarray, and additional
pandas metadata (e.g. column names, indices, etc) is lost.
Returns
-------
concatenated : array or pd.DataFrame
The concatenated array. Will be a DataFrame if series are pandas
objects.
"""
is_pandas = np.r_[[_is_using_pandas(s, None) for s in series]]
ndim = np.r_[[np.ndim(s) for s in series]]
max_ndim = np.max(ndim)
if max_ndim > 2:
raise ValueError('`tools.concat` does not support arrays with 3 or'
' more dimensions.')
# Make sure the iterable is mutable
if isinstance(series, tuple):
series = list(series)
# Standardize ndim
for i in range(len(series)):
if ndim[i] == 0 and max_ndim == 1:
series[i] = np.atleast_1d(series[i])
elif ndim[i] == 0 and max_ndim == 2:
series[i] = np.atleast_2d(series[i])
elif ndim[i] == 1 and max_ndim == 2 and is_pandas[i]:
name = series[i].name
series[i] = series[i].to_frame()
series[i].columns = [name]
elif ndim[i] == 1 and max_ndim == 2 and not is_pandas[i]:
series[i] = np.atleast_2d(series[i]).T
if np.all(is_pandas):
if isinstance(series[0], pd.DataFrame):
base_columns = series[0].columns
else:
base_columns = pd.Index([series[0].name])
for i in range(1, len(series)):
s = series[i]
if isinstance(s, pd.DataFrame):
# Handle case where we were passed a dataframe and a series
# to concatenate, and the series did not have a name.
if s.columns.equals(pd.Index([None])):
s.columns = base_columns[:1]
s_columns = s.columns
else:
s_columns = pd.Index([s.name])
if axis == 0 and not base_columns.equals(s_columns):
raise ValueError('Columns must match to concatenate along'
' rows.')
elif axis == 1 and not series[0].index.equals(s.index):
raise ValueError('Index must match to concatenate along'
' columns.')
concatenated = pd.concat(series, axis=axis)
elif np.all(~is_pandas) or allow_mix:
concatenated = np.concatenate(series, axis=axis)
else:
raise ValueError('Attempted to concatenate Pandas objects with'
' non-Pandas objects with `allow_mix=False`.')
return concatenated | Concatenate a set of series.
Parameters
----------
series : iterable
An iterable of series to be concatenated
axis : int, optional
The axis along which to concatenate. Default is 0 (rows).
allow_mix : bool
Whether or not to allow a mix of pandas and non-pandas objects. Default
is False. If true, the returned object is an ndarray, and additional
pandas metadata (e.g. column names, indices, etc) is lost.
Returns
-------
concatenated : array or pd.DataFrame
The concatenated array. Will be a DataFrame if series are pandas
objects. | concat | python | statsmodels/statsmodels | statsmodels/tsa/statespace/tools.py | https://github.com/statsmodels/statsmodels/blob/master/statsmodels/tsa/statespace/tools.py | BSD-3-Clause |
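A small sketch of concat with pandas inputs; column names and the index survive, unlike the allow_mix=True path, which falls back to an ndarray.
import numpy as np
import pandas as pd
from statsmodels.tsa.statespace.tools import concat

idx = pd.RangeIndex(3)
a = pd.Series([1.0, 2.0, 3.0], index=idx, name="a")
b = pd.Series([4.0, 5.0, 6.0], index=idx, name="b")
print(concat([a, b], axis=1))                           # DataFrame, columns a, b
print(concat([np.asarray(a), np.asarray(b)], axis=0))   # plain ndarray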
def constrain_stationary_univariate(unconstrained):
"""
Transform unconstrained parameters used by the optimizer to constrained
parameters used in likelihood evaluation
Parameters
----------
unconstrained : ndarray
Unconstrained parameters used by the optimizer, to be transformed to
stationary coefficients of, e.g., an autoregressive or moving average
component.
Returns
-------
constrained : ndarray
Constrained parameters of, e.g., an autoregressive or moving average
component, to be transformed to arbitrary parameters used by the
optimizer.
References
----------
.. [*] Monahan, John F. 1984.
"A Note on Enforcing Stationarity in
Autoregressive-moving Average Models."
Biometrika 71 (2) (August 1): 403-404.
"""
n = unconstrained.shape[0]
y = np.zeros((n, n), dtype=unconstrained.dtype)
r = unconstrained/((1 + unconstrained**2)**0.5)
for k in range(n):
for i in range(k):
y[k, i] = y[k - 1, i] + r[k] * y[k - 1, k - i - 1]
y[k, k] = r[k]
return -y[n - 1, :] | Transform unconstrained parameters used by the optimizer to constrained
parameters used in likelihood evaluation
Parameters
----------
unconstrained : ndarray
Unconstrained parameters used by the optimizer, to be transformed to
stationary coefficients of, e.g., an autoregressive or moving average
component.
Returns
-------
constrained : ndarray
Constrained parameters of, e.g., an autoregressive or moving average
component, to be transformed to arbitrary parameters used by the
optimizer.
References
----------
.. [*] Monahan, John F. 1984.
"A Note on Enforcing Stationarity in
Autoregressive-moving Average Models."
Biometrika 71 (2) (August 1): 403-404. | constrain_stationary_univariate | python | statsmodels/statsmodels | statsmodels/tsa/statespace/tools.py | https://github.com/statsmodels/statsmodels/blob/master/statsmodels/tsa/statespace/tools.py | BSD-3-Clause |
def unconstrain_stationary_univariate(constrained):
"""
Transform constrained parameters used in likelihood evaluation
to unconstrained parameters used by the optimizer
Parameters
----------
constrained : ndarray
Constrained parameters of, e.g., an autoregressive or moving average
component, to be transformed to arbitrary parameters used by the
optimizer.
Returns
-------
unconstrained : ndarray
Unconstrained parameters used by the optimizer, to be transformed to
stationary coefficients of, e.g., an autoregressive or moving average
component.
References
----------
.. [*] Monahan, John F. 1984.
"A Note on Enforcing Stationarity in
Autoregressive-moving Average Models."
Biometrika 71 (2) (August 1): 403-404.
"""
n = constrained.shape[0]
y = np.zeros((n, n), dtype=constrained.dtype)
y[n-1:] = -constrained
for k in range(n-1, 0, -1):
for i in range(k):
y[k-1, i] = (y[k, i] - y[k, k]*y[k, k-i-1]) / (1 - y[k, k]**2)
r = y.diagonal()
x = r / ((1 - r**2)**0.5)
return x | Transform constrained parameters used in likelihood evaluation
to unconstrained parameters used by the optimizer
Parameters
----------
constrained : ndarray
Constrained parameters of, e.g., an autoregressive or moving average
component, to be transformed to arbitrary parameters used by the
optimizer.
Returns
-------
unconstrained : ndarray
Unconstrained parameters used by the optimizer, to be transformed to
stationary coefficients of, e.g., an autoregressive or moving average
component.
References
----------
.. [*] Monahan, John F. 1984.
"A Note on Enforcing Stationarity in
Autoregressive-moving Average Models."
Biometrika 71 (2) (August 1): 403-404. | unconstrain_stationary_univariate | python | statsmodels/statsmodels | statsmodels/tsa/statespace/tools.py | https://github.com/statsmodels/statsmodels/blob/master/statsmodels/tsa/statespace/tools.py | BSD-3-Clause |
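The two univariate transforms are inverses; a round-trip sketch (both helpers live in statsmodels.tsa.statespace.tools) shows arbitrary reals mapping into the stationary region and back.
import numpy as np
from statsmodels.tsa.statespace.tools import (
    constrain_stationary_univariate, unconstrain_stationary_univariate)

unconstrained = np.array([1.0, -0.3, 0.2])
constrained = constrain_stationary_univariate(unconstrained)
# The implied AR lag polynomial 1 - c_1 L - c_2 L^2 - c_3 L^3 has all
# roots outside the unit circle, i.e. the process is stationary
roots = np.polynomial.Polynomial(np.r_[1, -constrained]).roots()
print(np.all(np.abs(roots) > 1))  # True
roundtrip = unconstrain_stationary_univariate(constrained)
print(np.allclose(roundtrip, unconstrained))  # True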
def _constrain_sv_less_than_one_python(unconstrained, order=None,
k_endog=None):
"""
Transform arbitrary matrices to matrices with singular values less than
one.
Parameters
----------
unconstrained : list
Arbitrary matrices. Should be a list of length `order`, where each
element is an array sized `k_endog` x `k_endog`.
order : int, optional
The order of the autoregression.
k_endog : int, optional
The dimension of the data vector.
Returns
-------
constrained : list
Partial autocorrelation matrices. Should be a list of length
`order`, where each element is an array sized `k_endog` x `k_endog`.
See Also
--------
constrain_stationary_multivariate
Notes
-----
Corresponds to Lemma 2.2 in Ansley and Kohn (1986). See
`constrain_stationary_multivariate` for more details.
"""
from scipy import linalg
constrained = [] # P_s, s = 1, ..., p
if order is None:
order = len(unconstrained)
if k_endog is None:
k_endog = unconstrained[0].shape[0]
eye = np.eye(k_endog)
for i in range(order):
A = unconstrained[i]
B, lower = linalg.cho_factor(eye + np.dot(A, A.T), lower=True)
constrained.append(linalg.solve_triangular(B, A, lower=lower))
return constrained | Transform arbitrary matrices to matrices with singular values less than
one.
Parameters
----------
unconstrained : list
Arbitrary matrices. Should be a list of length `order`, where each
element is an array sized `k_endog` x `k_endog`.
order : int, optional
The order of the autoregression.
k_endog : int, optional
The dimension of the data vector.
Returns
-------
constrained : list
Partial autocorrelation matrices. Should be a list of length
`order`, where each element is an array sized `k_endog` x `k_endog`.
See Also
--------
constrain_stationary_multivariate
Notes
-----
Corresponds to Lemma 2.2 in Ansley and Kohn (1986). See
`constrain_stationary_multivariate` for more details. | _constrain_sv_less_than_one_python | python | statsmodels/statsmodels | statsmodels/tsa/statespace/tools.py | https://github.com/statsmodels/statsmodels/blob/master/statsmodels/tsa/statespace/tools.py | BSD-3-Clause |
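The Lemma 2.2 map can be verified numerically; a sketch using the private helper defined above (importable from statsmodels.tsa.statespace.tools) checks that the output has all singular values strictly below one.
import numpy as np
from statsmodels.tsa.statespace.tools import _constrain_sv_less_than_one_python

A = np.array([[1.5, 0.2], [0.1, -0.9]])        # arbitrary matrix
(P,) = _constrain_sv_less_than_one_python([A])
svs = np.linalg.svd(P, compute_uv=False)
print(svs)               # every singular value is below one
print(svs.max() < 1.0)   # True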
def _compute_coefficients_from_multivariate_pacf_python(
partial_autocorrelations, error_variance, transform_variance=False,
order=None, k_endog=None):
"""
Transform matrices with singular values less than one to matrices
corresponding to a stationary (or invertible) process.
Parameters
----------
partial_autocorrelations : list
Partial autocorrelation matrices. Should be a list of length `order`,
where each element is an array sized `k_endog` x `k_endog`.
error_variance : ndarray
The variance / covariance matrix of the error term. Should be sized
`k_endog` x `k_endog`. This is used as input in the algorithm even if
is not transformed by it (when `transform_variance` is False). The
error term variance is required input when transformation is used
either to force an autoregressive component to be stationary or to
force a moving average component to be invertible.
transform_variance : bool, optional
Whether or not to transform the error variance term. This option is
not typically used, and the default is False.
order : int, optional
The order of the autoregression.
k_endog : int, optional
The dimension of the data vector.
Returns
-------
coefficient_matrices : list
Transformed coefficient matrices leading to a stationary VAR
representation.
See Also
--------
constrain_stationary_multivariate
Notes
-----
Corresponds to Lemma 2.1 in Ansley and Kohn (1986). See
`constrain_stationary_multivariate` for more details.
"""
from scipy import linalg
if order is None:
order = len(partial_autocorrelations)
if k_endog is None:
k_endog = partial_autocorrelations[0].shape[0]
# If we want to keep the provided variance but with the constrained
# coefficient matrices, we need to make a copy here, and then after the
# main loop we will transform the coefficients to match the passed variance
if not transform_variance:
initial_variance = error_variance
# Need to make the input variance large enough that the recursions
# do not lead to zero-matrices due to roundoff error, which would cause
# exceptions from the Cholesky decompositions.
# Note that this will still not always ensure positive definiteness,
# and for k_endog, order large enough an exception may still be raised
error_variance = np.eye(k_endog) * (order + k_endog)**10
forward_variances = [error_variance] # \Sigma_s
backward_variances = [error_variance] # \Sigma_s^*, s = 0, ..., p
autocovariances = [error_variance] # \Gamma_s
# \phi_{s,k}, s = 1, ..., p
# k = 1, ..., s+1
forwards = []
# \phi_{s,k}^*
backwards = []
error_variance_factor = linalg.cholesky(error_variance, lower=True)
forward_factors = [error_variance_factor]
backward_factors = [error_variance_factor]
# We fill in the entries as follows:
# [1,1]
# [2,2], [2,1]
# [3,3], [3,1], [3,2]
# ...
# [p,p], [p,1], ..., [p,p-1]
# the last row, correctly ordered, is then used as the coefficients
for s in range(order): # s = 0, ..., p-1
prev_forwards = forwards
prev_backwards = backwards
forwards = []
backwards = []
# Create the "last" (k = s+1) matrix
# Note: this is for k = s+1. However, below we then have to fill
# in for k = 1, ..., s in order.
# P L*^{-1} = x
# x L* = P
# L*' x' = P'
forwards.append(
linalg.solve_triangular(
backward_factors[s], partial_autocorrelations[s].T,
lower=True, trans='T'))
forwards[0] = np.dot(forward_factors[s], forwards[0].T)
# P' L^{-1} = x
# x L = P'
# L' x' = P
backwards.append(
linalg.solve_triangular(
forward_factors[s], partial_autocorrelations[s],
lower=True, trans='T'))
backwards[0] = np.dot(backward_factors[s], backwards[0].T)
# Update the variance
# Note: if s >= 1, this will be further updated in the for loop
# below
# Also, this calculation will be re-used in the forward variance
tmp = np.dot(forwards[0], backward_variances[s])
autocovariances.append(tmp.copy().T)
# Create the remaining k = 1, ..., s matrices,
# only has an effect if s >= 1
for k in range(s):
forwards.insert(k, prev_forwards[k] - np.dot(
forwards[-1], prev_backwards[s-(k+1)]))
backwards.insert(k, prev_backwards[k] - np.dot(
backwards[-1], prev_forwards[s-(k+1)]))
autocovariances[s+1] += np.dot(autocovariances[k+1],
prev_forwards[s-(k+1)].T)
# Create forward and backwards variances
forward_variances.append(
forward_variances[s] - np.dot(tmp, forwards[s].T)
)
backward_variances.append(
backward_variances[s] -
np.dot(
np.dot(backwards[s], forward_variances[s]),
backwards[s].T
)
)
# Cholesky factors
forward_factors.append(
linalg.cholesky(forward_variances[s+1], lower=True)
)
backward_factors.append(
linalg.cholesky(backward_variances[s+1], lower=True)
)
# If we do not want to use the transformed variance, we need to
# adjust the constrained matrices, as presented in Lemma 2.3, see above
variance = forward_variances[-1]
if not transform_variance:
# Here, we need to construct T such that:
# variance = T * initial_variance * T'
# To do that, consider the Cholesky of variance (L) and
# input_variance (M) to get:
# L L' = T M M' T' = (TM) (TM)'
# => L = T M
# => L M^{-1} = T
initial_variance_factor = np.linalg.cholesky(initial_variance)
transformed_variance_factor = np.linalg.cholesky(variance)
transform = np.dot(initial_variance_factor,
np.linalg.inv(transformed_variance_factor))
inv_transform = np.linalg.inv(transform)
for i in range(order):
forwards[i] = (
np.dot(np.dot(transform, forwards[i]), inv_transform)
)
return forwards, variance | Transform matrices with singular values less than one to matrices
corresponding to a stationary (or invertible) process.
Parameters
----------
partial_autocorrelations : list
Partial autocorrelation matrices. Should be a list of length `order`,
where each element is an array sized `k_endog` x `k_endog`.
error_variance : ndarray
The variance / covariance matrix of the error term. Should be sized
`k_endog` x `k_endog`. This is used as input in the algorithm even if it
is not transformed by it (when `transform_variance` is False). The
error term variance is required input when transformation is used
either to force an autoregressive component to be stationary or to
force a moving average component to be invertible.
transform_variance : bool, optional
Whether or not to transform the error variance term. This option is
not typically used, and the default is False.
order : int, optional
The order of the autoregression.
k_endog : int, optional
The dimension of the data vector.
Returns
-------
coefficient_matrices : list
Transformed coefficient matrices leading to a stationary VAR
representation.
See Also
--------
constrain_stationary_multivariate
Notes
-----
Corresponds to Lemma 2.1 in Ansley and Kohn (1986). See
`constrain_stationary_multivariate` for more details. | _compute_coefficients_from_multivariate_pacf_python | python | statsmodels/statsmodels | statsmodels/tsa/statespace/tools.py | https://github.com/statsmodels/statsmodels/blob/master/statsmodels/tsa/statespace/tools.py | BSD-3-Clause |
def _unconstrain_sv_less_than_one(constrained, order=None, k_endog=None):
"""
Transform matrices with singular values less than one to arbitrary
matrices.
Parameters
----------
constrained : list
The partial autocorrelation matrices. Should be a list of length
`order`, where each element is an array sized `k_endog` x `k_endog`.
order : int, optional
The order of the autoregression.
k_endog : int, optional
The dimension of the data vector.
Returns
-------
unconstrained : list
Unconstrained matrices. A list of length `order`, where each element is
an array sized `k_endog` x `k_endog`.
See Also
--------
unconstrain_stationary_multivariate
Notes
-----
Corresponds to the inverse of Lemma 2.2 in Ansley and Kohn (1986). See
`unconstrain_stationary_multivariate` for more details.
"""
from scipy import linalg
unconstrained = [] # A_s, s = 1, ..., p
if order is None:
order = len(constrained)
if k_endog is None:
k_endog = constrained[0].shape[0]
eye = np.eye(k_endog)
for i in range(order):
P = constrained[i]
# B^{-1} B^{-1}' = I - P P'
B_inv, lower = linalg.cho_factor(eye - np.dot(P, P.T), lower=True)
# A = BP
# B^{-1} A = P
unconstrained.append(linalg.solve_triangular(B_inv, P, lower=lower))
return unconstrained | Transform matrices with singular values less than one to arbitrary
matrices.
Parameters
----------
constrained : list
The partial autocorrelation matrices. Should be a list of length
`order`, where each element is an array sized `k_endog` x `k_endog`.
order : int, optional
The order of the autoregression.
k_endog : int, optional
The dimension of the data vector.
Returns
-------
unconstrained : list
Unconstrained matrices. A list of length `order`, where each element is
an array sized `k_endog` x `k_endog`.
See Also
--------
unconstrain_stationary_multivariate
Notes
-----
Corresponds to the inverse of Lemma 2.2 in Ansley and Kohn (1986). See
`unconstrain_stationary_multivariate` for more details. | _unconstrain_sv_less_than_one | python | statsmodels/statsmodels | statsmodels/tsa/statespace/tools.py | https://github.com/statsmodels/statsmodels/blob/master/statsmodels/tsa/statespace/tools.py | BSD-3-Clause |
def _compute_multivariate_sample_pacf(endog, maxlag):
"""
Compute multivariate sample partial autocorrelations
Parameters
----------
endog : array_like
Sample data on which to compute sample autocovariances. Shaped
`nobs` x `k_endog`.
maxlag : int
Maximum lag for which to calculate sample partial autocorrelations.
Returns
-------
sample_pacf : list
A list of the first `maxlag` sample partial autocorrelation matrices.
Each matrix is shaped `k_endog` x `k_endog`.
"""
sample_autocovariances = _compute_multivariate_sample_acovf(endog, maxlag)
return _compute_multivariate_pacf_from_autocovariances(
sample_autocovariances) | Compute multivariate sample partial autocorrelations
Parameters
----------
endog : array_like
Sample data on which to compute sample autocovariances. Shaped
`nobs` x `k_endog`.
maxlag : int
Maximum lag for which to calculate sample partial autocorrelations.
Returns
-------
sample_pacf : list
A list of the first `maxlag` sample partial autocorrelation matrices.
Each matrix is shaped `k_endog` x `k_endog`. | _compute_multivariate_sample_pacf | python | statsmodels/statsmodels | statsmodels/tsa/statespace/tools.py | https://github.com/statsmodels/statsmodels/blob/master/statsmodels/tsa/statespace/tools.py | BSD-3-Clause |
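A sketch of the sample PACF helper on simulated data; the function is private, so the import assumes access to the module internals.
import numpy as np
from statsmodels.tsa.statespace.tools import _compute_multivariate_sample_pacf

rng = np.random.default_rng(0)
endog = rng.normal(size=(500, 2))  # two white-noise series
pacf = _compute_multivariate_sample_pacf(endog, maxlag=3)
print(len(pacf), pacf[0].shape)    # 3 (2, 2); entries near zero for noise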
def _compute_multivariate_pacf_from_autocovariances(autocovariances,
order=None, k_endog=None):
"""
Compute multivariate partial autocorrelations from autocovariances.
Parameters
----------
autocovariances : list
Autocovariance matrices. Should be a list of length `order` + 1,
where each element is an array sized `k_endog` x `k_endog`.
order : int, optional
The order of the autoregression.
k_endog : int, optional
The dimension of the data vector.
Returns
-------
pacf : list
List of first `order` multivariate partial autocorrelations.
See Also
--------
unconstrain_stationary_multivariate
Notes
-----
Note that this computes multivariate partial autocorrelations.
Corresponds to the inverse of Lemma 2.1 in Ansley and Kohn (1986). See
`unconstrain_stationary_multivariate` for more details.
Computes sample partial autocorrelations if sample autocovariances are
given.
"""
from scipy import linalg
if order is None:
order = len(autocovariances)-1
if k_endog is None:
k_endog = autocovariances[0].shape[0]
# Now apply the Ansley and Kohn (1986) algorithm, except that instead of
# calculating phi_{s+1, s+1} = L_s P_{s+1} {L_s^*}^{-1} (which requires
# the partial autocorrelation P_{s+1} which is what we're trying to
# calculate here), we calculate it as in Ansley and Newbold (1979), using
# the autocovariances \Gamma_s and the forwards and backwards residual
# variances \Sigma_s, \Sigma_s^*:
# phi_{s+1, s+1} = [ \Gamma_{s+1}' - \phi_{s,1} \Gamma_s' - ... -
# \phi_{s,s} \Gamma_1' ] {\Sigma_s^*}^{-1}
# Forward and backward variances
forward_variances = [] # \Sigma_s
backward_variances = [] # \Sigma_s^*, s = 0, ..., p
# \phi_{s,k}, s = 1, ..., p
# k = 1, ..., s+1
forwards = []
# \phi_{s,k}^*
backwards = []
forward_factors = [] # L_s
backward_factors = [] # L_s^*, s = 0, ..., p
# Ultimately we want to construct the partial autocorrelation matrices
# Note that this is "1-indexed" in the sense that it stores P_1, ... P_p
# rather than starting with P_0.
partial_autocorrelations = []
# We fill in the entries of phi_{s,k} as follows:
# [1,1]
# [2,2], [2,1]
# [3,3], [3,1], [3,2]
# ...
# [p,p], [p,1], ..., [p,p-1]
# the last row, correctly ordered, should be the same as the coefficient
# matrices provided in the argument `constrained`
for s in range(order): # s = 0, ..., p-1
prev_forwards = list(forwards)
prev_backwards = list(backwards)
forwards = []
backwards = []
# Create forward and backwards variances Sigma_s, Sigma*_s
forward_variance = autocovariances[0].copy()
backward_variance = autocovariances[0].T.copy()
for k in range(s):
forward_variance -= np.dot(prev_forwards[k],
autocovariances[k+1])
backward_variance -= np.dot(prev_backwards[k],
autocovariances[k+1].T)
forward_variances.append(forward_variance)
backward_variances.append(backward_variance)
# Cholesky factors
forward_factors.append(
linalg.cholesky(forward_variances[s], lower=True)
)
backward_factors.append(
linalg.cholesky(backward_variances[s], lower=True)
)
# Create the intermediate sum term
if s == 0:
# phi_11 = \Gamma_1' \Gamma_0^{-1}
# phi_11 \Gamma_0 = \Gamma_1'
# \Gamma_0 phi_11' = \Gamma_1
forwards.append(linalg.cho_solve(
(forward_factors[0], True), autocovariances[1]).T)
# backwards.append(forwards[-1])
# phi_11_star = \Gamma_1 \Gamma_0^{-1}
# phi_11_star \Gamma_0 = \Gamma_1
# \Gamma_0 phi_11_star' = \Gamma_1'
backwards.append(linalg.cho_solve(
(backward_factors[0], True), autocovariances[1].T).T)
else:
# G := \Gamma_{s+1}' -
# \phi_{s,1} \Gamma_s' - .. - \phi_{s,s} \Gamma_1'
tmp_sum = autocovariances[s+1].T.copy()
for k in range(s):
tmp_sum -= np.dot(prev_forwards[k], autocovariances[s-k].T)
# Create the "last" (k = s+1) matrix
# Note: this is for k = s+1. However, below we then have to
# fill in for k = 1, ..., s in order.
# phi = G Sigma*^{-1}
# phi Sigma* = G
# Sigma*' phi' = G'
# Sigma* phi' = G'
# (because Sigma* is symmetric)
forwards.append(linalg.cho_solve(
(backward_factors[s], True), tmp_sum.T).T)
# phi = G' Sigma^{-1}
# phi Sigma = G'
# Sigma' phi' = G
# Sigma phi' = G
# (because Sigma is symmetric)
backwards.append(linalg.cho_solve(
(forward_factors[s], True), tmp_sum).T)
# Create the remaining k = 1, ..., s matrices,
# only has an effect if s >= 1
for k in range(s):
forwards.insert(k, prev_forwards[k] - np.dot(
forwards[-1], prev_backwards[s-(k+1)]))
backwards.insert(k, prev_backwards[k] - np.dot(
backwards[-1], prev_forwards[s-(k+1)]))
# Partial autocorrelation matrix: P_{s+1}
# P = L^{-1} phi L*
# L P = (phi L*)
partial_autocorrelations.append(linalg.solve_triangular(
forward_factors[s], np.dot(forwards[s], backward_factors[s]),
lower=True))
return partial_autocorrelations | Compute multivariate partial autocorrelations from autocovariances.
Parameters
----------
autocovariances : list
Autocovariance matrices. Should be a list of length `order` + 1,
where each element is an array sized `k_endog` x `k_endog`.
order : int, optional
The order of the autoregression.
k_endog : int, optional
The dimension of the data vector.
Returns
-------
pacf : list
List of first `order` multivariate partial autocorrelations.
See Also
--------
unconstrain_stationary_multivariate
Notes
-----
Note that this computes multivariate partial autocorrelations.
Corresponds to the inverse of Lemma 2.1 in Ansley and Kohn (1986). See
`unconstrain_stationary_multivariate` for more details.
Computes sample partial autocorrelations if sample autocovariances are
given. | _compute_multivariate_pacf_from_autocovariances | python | statsmodels/statsmodels | statsmodels/tsa/statespace/tools.py | https://github.com/statsmodels/statsmodels/blob/master/statsmodels/tsa/statespace/tools.py | BSD-3-Clause |
def unconstrain_stationary_multivariate(constrained, error_variance):
"""
Transform constrained parameters used in likelihood evaluation
to unconstrained parameters used by the optimizer
Parameters
----------
constrained : array or list
Constrained parameters of, e.g., an autoregressive or moving average
component, to be transformed to arbitrary parameters used by the
optimizer. If a list, should be a list of length `order`, where each
element is an array sized `k_endog` x `k_endog`. If an array, should be
the coefficient matrices horizontally concatenated and sized
`k_endog` x `k_endog * order`.
error_variance : ndarray
The variance / covariance matrix of the error term. Should be sized
`k_endog` x `k_endog`. This is used as input in the algorithm even if it
is not transformed by it (when `transform_variance` is False).
Returns
-------
unconstrained : ndarray
Unconstrained parameters used by the optimizer, to be transformed to
stationary coefficients of, e.g., an autoregressive or moving average
component. Will match the type of the passed `constrained`
variable (so if a list was passed, a list will be returned).
Notes
-----
Uses the list representation internally, even if an array is passed.
References
----------
.. [*] Ansley, Craig F., and Robert Kohn. 1986.
"A Note on Reparameterizing a Vector Autoregressive Moving Average Model
to Enforce Stationarity."
Journal of Statistical Computation and Simulation 24 (2): 99-106.
"""
use_list = type(constrained) is list
if not use_list:
k_endog, order = constrained.shape
order //= k_endog
constrained = [
constrained[:k_endog, i*k_endog:(i+1)*k_endog]
for i in range(order)
]
else:
order = len(constrained)
k_endog = constrained[0].shape[0]
# Step 1: convert matrices from the space of stationary
# coefficient matrices to our "partial autocorrelation matrix" space
# (matrices with singular values less than one)
partial_autocorrelations = _compute_multivariate_pacf_from_coefficients(
constrained, error_variance, order, k_endog)
# Step 2: convert from arbitrary matrices to those with singular values
# less than one.
unconstrained = _unconstrain_sv_less_than_one(
partial_autocorrelations, order, k_endog)
if not use_list:
unconstrained = np.concatenate(unconstrained, axis=1)
return unconstrained, error_variance | Transform constrained parameters used in likelihood evaluation
to unconstrained parameters used by the optimizer
Parameters
----------
constrained : array or list
Constrained parameters of, e.g., an autoregressive or moving average
component, to be transformed to arbitrary parameters used by the
optimizer. If a list, should be a list of length `order`, where each
element is an array sized `k_endog` x `k_endog`. If an array, should be
the coefficient matrices horizontally concatenated and sized
`k_endog` x `k_endog * order`.
error_variance : ndarray
The variance / covariance matrix of the error term. Should be sized
`k_endog` x `k_endog`. This is used as input in the algorithm even if
is not transformed by it (when `transform_variance` is False).
Returns
-------
unconstrained : ndarray
Unconstrained parameters used by the optimizer, to be transformed to
stationary coefficients of, e.g., an autoregressive or moving average
component. Will match the type of the passed `constrained`
variable (so if a list was passed, a list will be returned).
Notes
-----
Uses the list representation internally, even if an array is passed.
References
----------
.. [*] Ansley, Craig F., and Robert Kohn. 1986.
"A Note on Reparameterizing a Vector Autoregressive Moving Average Model
to Enforce Stationarity."
Journal of Statistical Computation and Simulation 24 (2): 99-106. | unconstrain_stationary_multivariate | python | statsmodels/statsmodels | statsmodels/tsa/statespace/tools.py | https://github.com/statsmodels/statsmodels/blob/master/statsmodels/tsa/statespace/tools.py | BSD-3-Clause |
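As in the univariate case, the multivariate pair round-trips; a sketch with a VAR(1)-style coefficient matrix and identity error variance, using the public names in statsmodels.tsa.statespace.tools.
import numpy as np
from statsmodels.tsa.statespace.tools import (
    constrain_stationary_multivariate, unconstrain_stationary_multivariate)

unconstrained = np.array([[1.2, 0.3], [-0.4, 0.8]])  # k_endog=2, order=1
error_variance = np.eye(2)
constrained, _ = constrain_stationary_multivariate(unconstrained,
                                                   error_variance)
# For a VAR(1), stationarity means the coefficient matrix has all
# eigenvalues inside the unit circle
print(np.max(np.abs(np.linalg.eigvals(constrained))) < 1)  # True
roundtrip, _ = unconstrain_stationary_multivariate(constrained,
                                                   error_variance)
print(np.allclose(roundtrip, unconstrained))  # True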