Dataset schema:
- code : string (lengths 26 to 870k)
- docstring : string (lengths 1 to 65.6k)
- func_name : string (lengths 1 to 194)
- language : string (1 class)
- repo : string (lengths 8 to 68)
- path : string (lengths 5 to 194)
- url : string (lengths 46 to 254)
- license : string (4 classes)
def load():
    """
    Load the strikes data and return a Dataset class instance.

    Returns
    -------
    Dataset
        See DATASET_PROPOSAL.txt for more information.
    """
    return load_pandas()
Load the strikes data and return a Dataset class instance. Returns ------- Dataset See DATASET_PROPOSAL.txt for more information.
load
python
statsmodels/statsmodels
statsmodels/datasets/strikes/data.py
https://github.com/statsmodels/statsmodels/blob/master/statsmodels/datasets/strikes/data.py
BSD-3-Clause
def load():
    """
    Load the US macro data and return a Dataset class.

    Returns
    -------
    Dataset
        See DATASET_PROPOSAL.txt for more information.

    Notes
    -----
    The macrodata Dataset instance does not contain endog and exog attributes.
    """
    return load_pandas()
Load the US macro data and return a Dataset class. Returns ------- Dataset See DATASET_PROPOSAL.txt for more information. Notes ----- The macrodata Dataset instance does not contain endog and exog attributes.
load
python
statsmodels/statsmodels
statsmodels/datasets/macrodata/data.py
https://github.com/statsmodels/statsmodels/blob/master/statsmodels/datasets/macrodata/data.py
BSD-3-Clause
def load_pandas():
    """
    Load the cpunish data and return a Dataset class.

    Returns
    -------
    Dataset
        See DATASET_PROPOSAL.txt for more information.
    """
    data = _get_data()
    return du.process_pandas(data, endog_idx=0)
Load the cpunish data and return a Dataset class. Returns ------- Dataset See DATASET_PROPOSAL.txt for more information.
load_pandas
python
statsmodels/statsmodels
statsmodels/datasets/cpunish/data.py
https://github.com/statsmodels/statsmodels/blob/master/statsmodels/datasets/cpunish/data.py
BSD-3-Clause
def load():
    """
    Load the cpunish data and return a Dataset class.

    Returns
    -------
    Dataset
        See DATASET_PROPOSAL.txt for more information.
    """
    return load_pandas()
Load the cpunish data and return a Dataset class. Returns ------- Dataset See DATASET_PROPOSAL.txt for more information.
load
python
statsmodels/statsmodels
statsmodels/datasets/cpunish/data.py
https://github.com/statsmodels/statsmodels/blob/master/statsmodels/datasets/cpunish/data.py
BSD-3-Clause
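The `load`/`load_pandas` pairs above all delegate to `du.process_pandas`, which splits a table into an endogenous response column (`endog_idx`) and the remaining exogenous design columns. A minimal pure-Python sketch of that split, with a hypothetical `split_endog_exog` helper and illustrative column names (not the statsmodels API or the real cpunish data):

```python
def split_endog_exog(columns, rows, endog_idx=0):
    """Split tabular data into (endog, exog) by column index.

    Illustrative stand-in for the du.process_pandas split; not statsmodels code.
    """
    # The column at endog_idx becomes the response variable.
    endog = [row[endog_idx] for row in rows]
    # Every other column goes into the design matrix.
    exog_cols = [c for i, c in enumerate(columns) if i != endog_idx]
    exog = [[v for i, v in enumerate(row) if i != endog_idx] for row in rows]
    return (columns[endog_idx], endog), (exog_cols, exog)


# Hypothetical two-row table with the response in the first column.
(endog_name, endog), (exog_names, exog) = split_endog_exog(
    ["EXECUTIONS", "INCOME", "PERPOVERTY"],
    [[37, 34453, 16.7], [9, 41534, 12.5]],
)
```

With `endog_idx=0` (the default used by most of these loaders), the first column becomes `endog` and the rest become `exog`.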
def load():
    """
    Load the data and return a Dataset class instance.

    Returns
    -------
    Dataset
        See DATASET_PROPOSAL.txt for more information.
    """
    return load_pandas()
Load the data and return a Dataset class instance. Returns ------- Dataset See DATASET_PROPOSAL.txt for more information.
load
python
statsmodels/statsmodels
statsmodels/datasets/fair/data.py
https://github.com/statsmodels/statsmodels/blob/master/statsmodels/datasets/fair/data.py
BSD-3-Clause
def load():
    """
    Loads the RAND HIE data and returns a Dataset class.

    Returns
    -------
    Dataset
        See DATASET_PROPOSAL.txt for more information.

    Notes
    -----
    endog - response variable, mdvis
    exog - design
    """
    return load_pandas()
Loads the RAND HIE data and returns a Dataset class. Returns ------- Dataset See DATASET_PROPOSAL.txt for more information. Notes ----- endog - response variable, mdvis exog - design
load
python
statsmodels/statsmodels
statsmodels/datasets/randhie/data.py
https://github.com/statsmodels/statsmodels/blob/master/statsmodels/datasets/randhie/data.py
BSD-3-Clause
def load_pandas():
    """
    Loads the RAND HIE data and returns a Dataset class.

    Returns
    -------
    Dataset
        See DATASET_PROPOSAL.txt for more information.

    Notes
    -----
    endog - response variable, mdvis
    exog - design
    """
    return du.process_pandas(_get_data(), endog_idx=0)
Loads the RAND HIE data and returns a Dataset class. Returns ------- Dataset See DATASET_PROPOSAL.txt for more information. Notes ----- endog - response variable, mdvis exog - design
load_pandas
python
statsmodels/statsmodels
statsmodels/datasets/randhie/data.py
https://github.com/statsmodels/statsmodels/blob/master/statsmodels/datasets/randhie/data.py
BSD-3-Clause
def load():
    """
    Load the data and return a Dataset class instance.

    Returns
    -------
    Dataset
        See DATASET_PROPOSAL.txt for more information.
    """
    return load_pandas()
Load the data and return a Dataset class instance. Returns ------- Dataset See DATASET_PROPOSAL.txt for more information.
load
python
statsmodels/statsmodels
statsmodels/datasets/engel/data.py
https://github.com/statsmodels/statsmodels/blob/master/statsmodels/datasets/engel/data.py
BSD-3-Clause
def load():
    """
    Load the EU Electrical Equipment manufacturing data into a Dataset class.

    Returns
    -------
    Dataset
        See DATASET_PROPOSAL.txt for more information.

    Notes
    -----
    The Dataset instance does not contain endog and exog attributes.
    """
    return load_pandas()
Load the EU Electrical Equipment manufacturing data into a Dataset class Returns ------- Dataset See DATASET_PROPOSAL.txt for more information. Notes ----- The Dataset instance does not contain endog and exog attributes.
load
python
statsmodels/statsmodels
statsmodels/datasets/elec_equip/data.py
https://github.com/statsmodels/statsmodels/blob/master/statsmodels/datasets/elec_equip/data.py
BSD-3-Clause
def load_pandas():
    """
    Load the China smoking/lung cancer data and return a Dataset class.

    Returns
    -------
    Dataset
        See DATASET_PROPOSAL.txt for more information.
    """
    raw_data = du.load_csv(__file__, 'china_smoking.csv')
    data = raw_data.set_index('Location')
    dset = du.Dataset(data=data,
                      title="Smoking and lung cancer in Chinese regions")
    dset.raw_data = raw_data
    return dset
Load the China smoking/lung cancer data and return a Dataset class. Returns ------- Dataset See DATASET_PROPOSAL.txt for more information.
load_pandas
python
statsmodels/statsmodels
statsmodels/datasets/china_smoking/data.py
https://github.com/statsmodels/statsmodels/blob/master/statsmodels/datasets/china_smoking/data.py
BSD-3-Clause
def load():
    """
    Load the China smoking/lung cancer data and return a Dataset class.

    Returns
    -------
    Dataset
        See DATASET_PROPOSAL.txt for more information.
    """
    return load_pandas()
Load the China smoking/lung cancer data and return a Dataset class. Returns ------- Dataset See DATASET_PROPOSAL.txt for more information.
load
python
statsmodels/statsmodels
statsmodels/datasets/china_smoking/data.py
https://github.com/statsmodels/statsmodels/blob/master/statsmodels/datasets/china_smoking/data.py
BSD-3-Clause
def load_pandas():
    """Load the credit card data and return a Dataset class.

    Returns
    -------
    Dataset
        See DATASET_PROPOSAL.txt for more information.
    """
    data = _get_data()
    return du.process_pandas(data, endog_idx=0)
Load the credit card data and return a Dataset class. Returns ------- Dataset See DATASET_PROPOSAL.txt for more information.
load_pandas
python
statsmodels/statsmodels
statsmodels/datasets/ccard/data.py
https://github.com/statsmodels/statsmodels/blob/master/statsmodels/datasets/ccard/data.py
BSD-3-Clause
def load():
    """Load the credit card data and return a Dataset class.

    Returns
    -------
    Dataset
        See DATASET_PROPOSAL.txt for more information.
    """
    return load_pandas()
Load the credit card data and return a Dataset class. Returns ------- Dataset See DATASET_PROPOSAL.txt for more information.
load
python
statsmodels/statsmodels
statsmodels/datasets/ccard/data.py
https://github.com/statsmodels/statsmodels/blob/master/statsmodels/datasets/ccard/data.py
BSD-3-Clause
def load():
    """
    Load the statecrime data and return a Dataset class instance.

    Returns
    -------
    Dataset
        See DATASET_PROPOSAL.txt for more information.
    """
    return load_pandas()
Load the statecrime data and return a Dataset class instance. Returns ------- Dataset See DATASET_PROPOSAL.txt for more information.
load
python
statsmodels/statsmodels
statsmodels/datasets/statecrime/data.py
https://github.com/statsmodels/statsmodels/blob/master/statsmodels/datasets/statecrime/data.py
BSD-3-Clause
def load():
    """
    Load the West German interest/inflation data and return a Dataset class.

    Returns
    -------
    Dataset
        See DATASET_PROPOSAL.txt for more information.

    Notes
    -----
    The interest_inflation Dataset instance does not contain endog and exog
    attributes.
    """
    return load_pandas()
Load the West German interest/inflation data and return a Dataset class. Returns ------- Dataset See DATASET_PROPOSAL.txt for more information. Notes ----- The interest_inflation Dataset instance does not contain endog and exog attributes.
load
python
statsmodels/statsmodels
statsmodels/datasets/interest_inflation/data.py
https://github.com/statsmodels/statsmodels/blob/master/statsmodels/datasets/interest_inflation/data.py
BSD-3-Clause
def load():
    """
    Load the Spector dataset and return a Dataset class instance.

    Returns
    -------
    Dataset
        See DATASET_PROPOSAL.txt for more information.
    """
    return load_pandas()
Load the Spector dataset and return a Dataset class instance. Returns ------- Dataset See DATASET_PROPOSAL.txt for more information.
load
python
statsmodels/statsmodels
statsmodels/datasets/spector/data.py
https://github.com/statsmodels/statsmodels/blob/master/statsmodels/datasets/spector/data.py
BSD-3-Clause
def load_pandas():
    """
    Load the Spector dataset and return a Dataset class instance.

    Returns
    -------
    Dataset
        See DATASET_PROPOSAL.txt for more information.
    """
    data = _get_data()
    return du.process_pandas(data, endog_idx=3)
Load the Spector dataset and return a Dataset class instance. Returns ------- Dataset See DATASET_PROPOSAL.txt for more information.
load_pandas
python
statsmodels/statsmodels
statsmodels/datasets/spector/data.py
https://github.com/statsmodels/statsmodels/blob/master/statsmodels/datasets/spector/data.py
BSD-3-Clause
def load():
    """
    Load the modechoice data and return a Dataset class instance.

    Returns
    -------
    Dataset
        See DATASET_PROPOSAL.txt for more information.
    """
    return load_pandas()
Load the modechoice data and return a Dataset class instance. Returns ------- Dataset See DATASET_PROPOSAL.txt for more information.
load
python
statsmodels/statsmodels
statsmodels/datasets/modechoice/data.py
https://github.com/statsmodels/statsmodels/blob/master/statsmodels/datasets/modechoice/data.py
BSD-3-Clause
def load_pandas():
    """
    Load the modechoice data and return a Dataset class instance.

    Returns
    -------
    Dataset
        See DATASET_PROPOSAL.txt for more information.
    """
    data = _get_data()
    return du.process_pandas(data, endog_idx=2, exog_idx=[3, 4, 5, 6, 7, 8])
Load the modechoice data and return a Dataset class instance. Returns ------- Dataset See DATASET_PROPOSAL.txt for more information.
load_pandas
python
statsmodels/statsmodels
statsmodels/datasets/modechoice/data.py
https://github.com/statsmodels/statsmodels/blob/master/statsmodels/datasets/modechoice/data.py
BSD-3-Clause
def load():
    """
    Load the data and return a Dataset class instance.

    Returns
    -------
    Dataset
        See DATASET_PROPOSAL.txt for more information.
    """
    return load_pandas()
Load the data and return a Dataset class instance. Returns ------- Dataset See DATASET_PROPOSAL.txt for more information.
load
python
statsmodels/statsmodels
statsmodels/datasets/co2/data.py
https://github.com/statsmodels/statsmodels/blob/master/statsmodels/datasets/co2/data.py
BSD-3-Clause
def load():
    """
    Load the Scotvote data and return a Dataset instance.

    Returns
    -------
    Dataset
        See DATASET_PROPOSAL.txt for more information.
    """
    return load_pandas()
Load the Scotvote data and return a Dataset instance. Returns ------- Dataset See DATASET_PROPOSAL.txt for more information.
load
python
statsmodels/statsmodels
statsmodels/datasets/scotland/data.py
https://github.com/statsmodels/statsmodels/blob/master/statsmodels/datasets/scotland/data.py
BSD-3-Clause
def load_pandas():
    """
    Load the Scotvote data and return a Dataset instance.

    Returns
    -------
    Dataset
        See DATASET_PROPOSAL.txt for more information.
    """
    data = _get_data()
    return du.process_pandas(data, endog_idx=0)
Load the Scotvote data and return a Dataset instance. Returns ------- Dataset See DATASET_PROPOSAL.txt for more information.
load_pandas
python
statsmodels/statsmodels
statsmodels/datasets/scotland/data.py
https://github.com/statsmodels/statsmodels/blob/master/statsmodels/datasets/scotland/data.py
BSD-3-Clause
def _find_x12(x12path=None, prefer_x13=True):
    """
    If x12path is not given, then either x13as[.exe] or x12a[.exe] must
    be found on the PATH. Otherwise, the environmental variable X12PATH or
    X13PATH must be defined. If prefer_x13 is True, only X13PATH is
    searched for. If it is false, only X12PATH is searched for.
    """
    global _binary_names
    if x12path is not None and x12path.endswith(_binary_names):
        # remove binary from path if path is not a directory
        if not os.path.isdir(x12path):
            x12path = os.path.dirname(x12path)
    if not prefer_x13:  # search for x12 first
        _binary_names = _binary_names[::-1]
        if x12path is None:
            x12path = os.getenv("X12PATH", "")
            if not x12path:
                x12path = os.getenv("X13PATH", "")
    elif x12path is None:
        x12path = os.getenv("X13PATH", "")
        if not x12path:
            x12path = os.getenv("X12PATH", "")
    for binary in _binary_names:
        x12 = os.path.join(x12path, binary)
        try:
            subprocess.check_call(x12, stdout=subprocess.PIPE,
                                  stderr=subprocess.PIPE)
            return x12
        except OSError:
            pass
    else:
        return False
If x12path is not given, then either x13as[.exe] or x12a[.exe] must be found on the PATH. Otherwise, the environmental variable X12PATH or X13PATH must be defined. If prefer_x13 is True, only X13PATH is searched for. If it is false, only X12PATH is searched for.
_find_x12
python
statsmodels/statsmodels
statsmodels/tsa/x13.py
https://github.com/statsmodels/statsmodels/blob/master/statsmodels/tsa/x13.py
BSD-3-Clause
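The lookup order `_find_x12` implements (explicit path first, then the preferred environment variable, then the other one, then the binaries on the PATH) can be sketched with the standard library's `shutil.which`. `find_x13_binary` below is a hypothetical helper for illustration, not the statsmodels function:

```python
import os
import shutil


def find_x13_binary(prefer_x13=True):
    """Locate an X-13/X-12 binary, or return False if none is found.

    Illustrative sketch of the _find_x12 search order; not statsmodels code.
    """
    names = ["x13as", "x12a"] if prefer_x13 else ["x12a", "x13as"]
    env_vars = ["X13PATH", "X12PATH"] if prefer_x13 else ["X12PATH", "X13PATH"]
    # 1) directories named by the environment variables, preferred one first
    for var in env_vars:
        base = os.getenv(var, "")
        for name in names:
            candidate = os.path.join(base, name)
            if base and os.path.isfile(candidate):
                return candidate
    # 2) fall back to searching the PATH
    for name in names:
        found = shutil.which(name)
        if found:
            return found
    return False
```

Unlike the original, this sketch does not try to execute the binary to confirm it works; it only checks that a file exists.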
def _clean_order(order):
    """
    Takes something like (1 1 0)(0 1 1) and returns an (arma order, sarma
    order) tuple. Also accepts (1 1 0) and returns the arma order and
    (0, 0, 0).
    """
    order = re.findall(r"\([0-9 ]*?\)", order)

    def clean(x):
        return tuple(map(int, re.sub("[()]", "", x).split(" ")))

    if len(order) > 1:
        order, sorder = map(clean, order)
    else:
        order = clean(order[0])
        sorder = (0, 0, 0)
    return order, sorder
Takes something like (1 1 0)(0 1 1) and returns an (arma order, sarma order) tuple. Also accepts (1 1 0) and returns the arma order and (0, 0, 0).
_clean_order
python
statsmodels/statsmodels
statsmodels/tsa/x13.py
https://github.com/statsmodels/statsmodels/blob/master/statsmodels/tsa/x13.py
BSD-3-Clause
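The parsing in `_clean_order` can be exercised standalone: a non-greedy regex pulls out each parenthesised group, and each group is converted to an integer tuple, with a missing seasonal part defaulting to `(0, 0, 0)`. A self-contained sketch of the same logic:

```python
import re


def clean_order(order):
    """Parse an X-13 model string like '(1 1 0)(0 1 1)' into order tuples."""
    # Non-greedy match so '(1 1 0)(0 1 1)' yields two groups, not one.
    groups = re.findall(r"\([0-9 ]*?\)", order)

    def clean(x):
        # Strip the parentheses and convert the space-separated digits.
        return tuple(map(int, re.sub("[()]", "", x).split(" ")))

    if len(groups) > 1:
        return clean(groups[0]), clean(groups[1])
    # No seasonal part: default the seasonal order to (0, 0, 0).
    return clean(groups[0]), (0, 0, 0)
```

For example, `clean_order("(1 1 0)(0 1 1)")` gives `((1, 1, 0), (0, 1, 1))`.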
def _convert_out_to_series(x, dates, name):
    """
    Convert x to a Series where x is a string in the format given by
    x-13arima-seats output.
    """
    from io import StringIO

    from pandas import read_csv

    out = read_csv(StringIO(x), skiprows=2, header=None, sep="\t",
                   engine="python")
    return out.set_index(dates).rename(columns={1: name})[name]
Convert x to a Series where x is a string in the format given by x-13arima-seats output.
_convert_out_to_series
python
statsmodels/statsmodels
statsmodels/tsa/x13.py
https://github.com/statsmodels/statsmodels/blob/master/statsmodels/tsa/x13.py
BSD-3-Clause
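What `_convert_out_to_series` does with pandas can be shown in pure Python: skip the two header lines of an X-13 table, read the tab-separated rows, and pair the value column with the supplied dates. The sample text and the `out_to_pairs` helper below are fabricated for illustration:

```python
import csv
from io import StringIO


def out_to_pairs(text, dates):
    """Pair X-13-style tab-separated output values with the given dates."""
    # The first two lines of an X-13 table are headers (hence skiprows=2
    # in the pandas version); drop them before reading the data rows.
    rows = list(csv.reader(StringIO(text), delimiter="\t"))[2:]
    return list(zip(dates, (float(r[1]) for r in rows)))


# Illustrative two-row table in the X-13 output shape.
sample = "header\n----\n2019.01\t100.5\n2019.02\t101.2\n"
pairs = out_to_pairs(sample, ["2019-01", "2019-02"])
```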
def x13_arima_analysis(
    endog,
    maxorder=(2, 1),
    maxdiff=(2, 1),
    diff=None,
    exog=None,
    log=None,
    outlier=True,
    trading=False,
    forecast_periods=None,
    retspec=False,
    speconly=False,
    start=None,
    freq=None,
    print_stdout=False,
    x12path=None,
    prefer_x13=True,
    log_diagnostics=False,
    tempdir=None,
):
    """
    Perform x13-arima analysis for monthly or quarterly data.

    Parameters
    ----------
    endog : array_like, pandas.Series
        The series to model. It is best to use a pandas object with a
        DatetimeIndex or PeriodIndex. However, you can pass an array-like
        object. If your object does not have a dates index then ``start``
        and ``freq`` are not optional.
    maxorder : tuple
        The maximum order of the regular and seasonal ARMA polynomials to
        examine during the model identification. The order for the regular
        polynomial must be greater than zero and no larger than 4. The
        order for the seasonal polynomial may be 1 or 2.
    maxdiff : tuple
        The maximum orders for regular and seasonal differencing in the
        automatic differencing procedure. Acceptable inputs for regular
        differencing are 1 and 2. The maximum order for seasonal
        differencing is 1. If ``diff`` is specified then ``maxdiff``
        should be None. Otherwise, ``diff`` will be ignored. See also
        ``diff``.
    diff : tuple
        Fixes the orders of differencing for the regular and seasonal
        differencing. Regular differencing may be 0, 1, or 2. Seasonal
        differencing may be 0 or 1. ``maxdiff`` must be None, otherwise
        ``diff`` is ignored.
    exog : array_like
        Exogenous variables.
    log : bool or None
        If None, it is automatically determined whether to log the series
        or not. If False, logs are not taken. If True, logs are taken.
    outlier : bool
        Whether or not outliers are tested for and corrected, if detected.
    trading : bool
        Whether or not trading day effects are tested for.
    forecast_periods : int
        Number of forecasts produced. The default is None.
    retspec : bool
        Whether to return the created specification file. Can be useful
        for debugging.
    speconly : bool
        Whether to create the specification file and then return it
        without performing the analysis. Can be useful for debugging.
    start : str, datetime
        Must be given if ``endog`` does not have date information in its
        index. Anything accepted by pandas.DatetimeIndex for the start
        value.
    freq : str
        Must be given if ``endog`` does not have date information in its
        index. Anything accepted by pandas.DatetimeIndex for the freq
        value.
    print_stdout : bool
        The stdout from X12/X13 is suppressed. To print it out, set this
        to True. Default is False.
    x12path : str or None
        The path to x12 or x13 binary. If None, the program will attempt
        to find x13as or x12a on the PATH or by looking at X13PATH or
        X12PATH depending on the value of prefer_x13.
    prefer_x13 : bool
        If True, will look for x13as first and will fallback to the
        X13PATH environmental variable. If False, will look for x12a first
        and will fallback to the X12PATH environmental variable. If
        x12path points to the path for the X12/X13 binary, it does
        nothing.
    log_diagnostics : bool
        If True, returns D8 F-Test, M07, and Q diagnostics from the X13
        savelog. Set to False by default.
    tempdir : str
        The path to where temporary files are created by the function.
        If None, files are created in the default temporary file location.

    Returns
    -------
    Bunch
        A bunch object containing the listed attributes.

        - results : str
          The full output from the X12/X13 run.
        - seasadj : pandas.Series
          The final seasonally adjusted ``endog``.
        - trend : pandas.Series
          The trend-cycle component of ``endog``.
        - irregular : pandas.Series
          The final irregular component of ``endog``.
        - stdout : str
          The captured stdout produced by x12/x13.
        - spec : str, optional
          Returned if ``retspec`` is True. The only thing returned if
          ``speconly`` is True.
        - x13_diagnostic : dict
          Contains the F-D8, M07, and Q metrics if ``log_diagnostics`` is
          True, and no metrics if False.

    Notes
    -----
    This works by creating a specification file, writing it to a temporary
    directory, invoking X12/X13 in a subprocess, and reading the output
    back in.
    """
    x12path = _check_x12(x12path)

    if not isinstance(endog, (pd.DataFrame, pd.Series)):
        if start is None or freq is None:
            raise ValueError(
                "start and freq cannot be none if endog is not "
                "a pandas object"
            )
        idx = pd.date_range(start=start, periods=len(endog), freq=freq)
        endog = pd.Series(endog, index=idx)
    spec_obj = pandas_to_series_spec(endog)
    spec = spec_obj.create_spec()
    spec += f"transform{{function={_log_to_x12[log]}}}\n"
    if outlier:
        spec += "outlier{}\n"
    options = _make_automdl_options(maxorder, maxdiff, diff)
    spec += f"automdl{{{options}}}\n"
    spec += _make_regression_options(trading, exog)
    spec += _make_forecast_options(forecast_periods)
    spec += "x11{ save=(d11 d12 d13) \n savelog=(fd8 m7 q)}"
    if speconly:
        return spec
    # write it to a tempfile
    # TODO: make this more robust - give the user some control?
    ftempin = tempfile.NamedTemporaryFile(delete=False, suffix=".spc",
                                          dir=tempdir)
    ftempout = tempfile.NamedTemporaryFile(delete=False, dir=tempdir)
    try:
        ftempin.write(spec.encode("utf8"))
        ftempin.close()
        ftempout.close()
        # call x12 arima
        p = run_spec(x12path, ftempin.name[:-4], ftempout.name)
        p.wait()
        stdout = p.stdout.read()
        if print_stdout:
            print(stdout)
        # check for errors
        errors = _open_and_read(ftempout.name + ".err")
        _check_errors(errors)

        # read in results
        results = _open_and_read(ftempout.name + ".out")
        seasadj = _open_and_read(ftempout.name + ".d11")
        trend = _open_and_read(ftempout.name + ".d12")
        irregular = _open_and_read(ftempout.name + ".d13")

        if log_diagnostics:
            # read fd8, m7, and q diagnostics from the log
            x13_logs = _open_and_read(ftempout.name + ".log")
            x13_diagnostic = {
                "F-D8": float(re.search(r"D8 table\s*:\s*([\d.]+)",
                                        x13_logs).group(1)),
                "M07": float(re.search(r"M07\s*:\s*([\d.]+)",
                                       x13_logs).group(1)),
                "Q": float(re.search(r"Q\s*:\s*([\d.]+)",
                                     x13_logs).group(1)),
            }
        else:
            x13_diagnostic = {"F-D8": "Log diagnostics not retrieved.",
                              "M07": "Log diagnostics not retrieved.",
                              "Q": "Log diagnostics not retrieved."}
    finally:
        try:
            # sometimes this gives a permission denied error?
            # not sure why. no process should have these open
            os.remove(ftempin.name)
            os.remove(ftempout.name)
        except OSError:
            if os.path.exists(ftempin.name):
                warn(f"Failed to delete resource {ftempin.name}", IOWarning)
            if os.path.exists(ftempout.name):
                warn(f"Failed to delete resource {ftempout.name}", IOWarning)

    seasadj = _convert_out_to_series(seasadj, endog.index, "seasadj")
    trend = _convert_out_to_series(trend, endog.index, "trend")
    irregular = _convert_out_to_series(irregular, endog.index, "irregular")

    # NOTE: there is not likely anything in stdout that's not in results
    #       so may be safe to just suppress and remove it
    if not retspec:
        res = X13ArimaAnalysisResult(
            observed=endog,
            results=results,
            seasadj=seasadj,
            trend=trend,
            irregular=irregular,
            stdout=stdout,
            x13_diagnostic=x13_diagnostic,
        )
    else:
        res = X13ArimaAnalysisResult(
            observed=endog,
            results=results,
            seasadj=seasadj,
            trend=trend,
            irregular=irregular,
            stdout=stdout,
            spec=spec,
            x13_diagnostic=x13_diagnostic,
        )
    return res
Perform x13-arima analysis for monthly or quarterly data. Parameters ---------- endog : array_like, pandas.Series The series to model. It is best to use a pandas object with a DatetimeIndex or PeriodIndex. However, you can pass an array-like object. If your object does not have a dates index then ``start`` and ``freq`` are not optional. maxorder : tuple The maximum order of the regular and seasonal ARMA polynomials to examine during the model identification. The order for the regular polynomial must be greater than zero and no larger than 4. The order for the seasonal polynomial may be 1 or 2. maxdiff : tuple The maximum orders for regular and seasonal differencing in the automatic differencing procedure. Acceptable inputs for regular differencing are 1 and 2. The maximum order for seasonal differencing is 1. If ``diff`` is specified then ``maxdiff`` should be None. Otherwise, ``diff`` will be ignored. See also ``diff``. diff : tuple Fixes the orders of differencing for the regular and seasonal differencing. Regular differencing may be 0, 1, or 2. Seasonal differencing may be 0 or 1. ``maxdiff`` must be None, otherwise ``diff`` is ignored. exog : array_like Exogenous variables. log : bool or None If None, it is automatically determined whether to log the series or not. If False, logs are not taken. If True, logs are taken. outlier : bool Whether or not outliers are tested for and corrected, if detected. trading : bool Whether or not trading day effects are tested for. forecast_periods : int Number of forecasts produced. The default is None. retspec : bool Whether to return the created specification file. Can be useful for debugging. speconly : bool Whether to create the specification file and then return it without performing the analysis. Can be useful for debugging. start : str, datetime Must be given if ``endog`` does not have date information in its index. Anything accepted by pandas.DatetimeIndex for the start value. freq : str Must be given if ``endog`` does not have date information in its index. Anything accepted by pandas.DatetimeIndex for the freq value. print_stdout : bool The stdout from X12/X13 is suppressed. To print it out, set this to True. Default is False. x12path : str or None The path to x12 or x13 binary. If None, the program will attempt to find x13as or x12a on the PATH or by looking at X13PATH or X12PATH depending on the value of prefer_x13. prefer_x13 : bool If True, will look for x13as first and will fallback to the X13PATH environmental variable. If False, will look for x12a first and will fallback to the X12PATH environmental variable. If x12path points to the path for the X12/X13 binary, it does nothing. log_diagnostics : bool If True, returns D8 F-Test, M07, and Q diagnostics from the X13 savelog. Set to False by default. tempdir : str The path to where temporary files are created by the function. If None, files are created in the default temporary file location. Returns ------- Bunch A bunch object containing the listed attributes. - results : str The full output from the X12/X13 run. - seasadj : pandas.Series The final seasonally adjusted ``endog``. - trend : pandas.Series The trend-cycle component of ``endog``. - irregular : pandas.Series The final irregular component of ``endog``. - stdout : str The captured stdout produced by x12/x13. - spec : str, optional Returned if ``retspec`` is True. The only thing returned if ``speconly`` is True. - x13_diagnostic : dict Contains the F-D8, M07, and Q metrics if ``log_diagnostics`` is True, and no metrics if False. Notes ----- This works by creating a specification file, writing it to a temporary directory, invoking X12/X13 in a subprocess, and reading the output back in.
x13_arima_analysis
python
statsmodels/statsmodels
statsmodels/tsa/x13.py
https://github.com/statsmodels/statsmodels/blob/master/statsmodels/tsa/x13.py
BSD-3-Clause
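The body of `x13_arima_analysis` assembles the X-13 spec text block by block from the keyword options before writing it to a temp file. A simplified sketch of that assembly step, using a hypothetical `build_spec` helper (the `transform`/`outlier`/`x11` block names follow the code above; the mapping of `log` to spec values is an assumption here):

```python
def build_spec(log=None, outlier=True):
    """Assemble a minimal X-13-style spec string from keyword options.

    Simplified, hypothetical sketch of the spec assembly in
    x13_arima_analysis; not the statsmodels implementation.
    """
    # Assumed mapping of the `log` argument onto the transform function.
    log_to_x12 = {None: "auto", True: "log", False: "none"}
    spec = f"transform{{function={log_to_x12[log]}}}\n"
    if outlier:
        # Request outlier detection and correction.
        spec += "outlier{}\n"
    # Ask X-11 to save the seasonally adjusted, trend, and irregular tables.
    spec += "x11{ save=(d11 d12 d13) }"
    return spec
```

With the defaults this produces a spec containing `transform{function=auto}` and an `outlier{}` block, matching the shape of the string built in the function above.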
def x13_arima_select_order(
    endog,
    maxorder=(2, 1),
    maxdiff=(2, 1),
    diff=None,
    exog=None,
    log=None,
    outlier=True,
    trading=False,
    forecast_periods=None,
    start=None,
    freq=None,
    print_stdout=False,
    x12path=None,
    prefer_x13=True,
    tempdir=None,
):
    """
    Perform automatic seasonal ARIMA order identification using x12/x13
    ARIMA.

    Parameters
    ----------
    endog : array_like, pandas.Series
        The series to model. It is best to use a pandas object with a
        DatetimeIndex or PeriodIndex. However, you can pass an array-like
        object. If your object does not have a dates index then ``start``
        and ``freq`` are not optional.
    maxorder : tuple
        The maximum order of the regular and seasonal ARMA polynomials to
        examine during the model identification. The order for the regular
        polynomial must be greater than zero and no larger than 4. The
        order for the seasonal polynomial may be 1 or 2.
    maxdiff : tuple
        The maximum orders for regular and seasonal differencing in the
        automatic differencing procedure. Acceptable inputs for regular
        differencing are 1 and 2. The maximum order for seasonal
        differencing is 1. If ``diff`` is specified then ``maxdiff``
        should be None. Otherwise, ``diff`` will be ignored. See also
        ``diff``.
    diff : tuple
        Fixes the orders of differencing for the regular and seasonal
        differencing. Regular differencing may be 0, 1, or 2. Seasonal
        differencing may be 0 or 1. ``maxdiff`` must be None, otherwise
        ``diff`` is ignored.
    exog : array_like
        Exogenous variables.
    log : bool or None
        If None, it is automatically determined whether to log the series
        or not. If False, logs are not taken. If True, logs are taken.
    outlier : bool
        Whether or not outliers are tested for and corrected, if detected.
    trading : bool
        Whether or not trading day effects are tested for.
    forecast_periods : int
        Number of forecasts produced. The default is None.
    start : str, datetime
        Must be given if ``endog`` does not have date information in its
        index. Anything accepted by pandas.DatetimeIndex for the start
        value.
    freq : str
        Must be given if ``endog`` does not have date information in its
        index. Anything accepted by pandas.DatetimeIndex for the freq
        value.
    print_stdout : bool
        The stdout from X12/X13 is suppressed. To print it out, set this
        to True. Default is False.
    x12path : str or None
        The path to x12 or x13 binary. If None, the program will attempt
        to find x13as or x12a on the PATH or by looking at X13PATH or
        X12PATH depending on the value of prefer_x13.
    prefer_x13 : bool
        If True, will look for x13as first and will fallback to the
        X13PATH environmental variable. If False, will look for x12a first
        and will fallback to the X12PATH environmental variable. If
        x12path points to the path for the X12/X13 binary, it does
        nothing.
    tempdir : str
        The path to where temporary files are created by the function.
        If None, files are created in the default temporary file location.

    Returns
    -------
    Bunch
        A bunch object containing the listed attributes.

        - order : tuple
          The regular order.
        - sorder : tuple
          The seasonal order.
        - include_mean : bool
          Whether to include a mean or not.
        - results : str
          The full results from the X12/X13 analysis.
        - stdout : str
          The captured stdout from the X12/X13 analysis.

    Notes
    -----
    This works by creating a specification file, writing it to a temporary
    directory, invoking X12/X13 in a subprocess, and reading the output
    back in.
    """
    results = x13_arima_analysis(
        endog,
        x12path=x12path,
        exog=exog,
        log=log,
        outlier=outlier,
        trading=trading,
        forecast_periods=forecast_periods,
        maxorder=maxorder,
        maxdiff=maxdiff,
        diff=diff,
        start=start,
        freq=freq,
        prefer_x13=prefer_x13,
        tempdir=tempdir,
        print_stdout=print_stdout,
    )
    model = re.search("(?<=Final automatic model choice : ).*",
                      results.results)
    order = model.group()
    if re.search("Mean is not significant", results.results):
        include_mean = False
    elif re.search("Constant", results.results):
        include_mean = True
    else:
        include_mean = False
    order, sorder = _clean_order(order)
    res = Bunch(
        order=order,
        sorder=sorder,
        include_mean=include_mean,
        results=results.results,
        stdout=results.stdout,
    )
    return res
Perform automatic seasonal ARIMA order identification using x12/x13 ARIMA. Parameters ---------- endog : array_like, pandas.Series The series to model. It is best to use a pandas object with a DatetimeIndex or PeriodIndex. However, you can pass an array-like object. If your object does not have a dates index then ``start`` and ``freq`` are not optional. maxorder : tuple The maximum order of the regular and seasonal ARMA polynomials to examine during the model identification. The order for the regular polynomial must be greater than zero and no larger than 4. The order for the seasonal polynomial may be 1 or 2. maxdiff : tuple The maximum orders for regular and seasonal differencing in the automatic differencing procedure. Acceptable inputs for regular differencing are 1 and 2. The maximum order for seasonal differencing is 1. If ``diff`` is specified then ``maxdiff`` should be None. Otherwise, ``diff`` will be ignored. See also ``diff``. diff : tuple Fixes the orders of differencing for the regular and seasonal differencing. Regular differencing may be 0, 1, or 2. Seasonal differencing may be 0 or 1. ``maxdiff`` must be None, otherwise ``diff`` is ignored. exog : array_like Exogenous variables. log : bool or None If None, it is automatically determined whether to log the series or not. If False, logs are not taken. If True, logs are taken. outlier : bool Whether or not outliers are tested for and corrected, if detected. trading : bool Whether or not trading day effects are tested for. forecast_periods : int Number of forecasts produced. The default is None. start : str, datetime Must be given if ``endog`` does not have date information in its index. Anything accepted by pandas.DatetimeIndex for the start value. freq : str Must be given if ``endog`` does not have date information in its index. Anything accepted by pandas.DatetimeIndex for the freq value. print_stdout : bool The stdout from X12/X13 is suppressed. To print it out, set this to True. Default is False. 
x12path : str or None The path to x12 or x13 binary. If None, the program will attempt to find x13as or x12a on the PATH or by looking at X13PATH or X12PATH depending on the value of prefer_x13. prefer_x13 : bool If True, will look for x13as first and will fallback to the X13PATH environmental variable. If False, will look for x12a first and will fallback to the X12PATH environmental variable. If x12path points to the path for the X12/X13 binary, it does nothing. tempdir : str The path to where temporary files are created by the function. If None, files are created in the default temporary file location. Returns ------- Bunch A bunch object containing the listed attributes. - order : tuple The regular order. - sorder : tuple The seasonal order. - include_mean : bool Whether to include a mean or not. - results : str The full results from the X12/X13 analysis. - stdout : str The captured stdout from the X12/X13 analysis. Notes ----- This works by creating a specification file, writing it to a temporary directory, invoking X12/X13 in a subprocess, and reading the output back in.
x13_arima_select_order
python
statsmodels/statsmodels
statsmodels/tsa/x13.py
https://github.com/statsmodels/statsmodels/blob/master/statsmodels/tsa/x13.py
BSD-3-Clause
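The parameter list above stresses that ``endog`` should carry its own date information, and that ``start`` and ``freq`` become mandatory otherwise. A minimal sketch of preparing the input (the call itself is omitted since it requires the external X-13ARIMA-SEATS binary; the series values here are made up):

```python
import numpy as np
import pandas as pd

rng = np.random.default_rng(0)
values = 100 + np.cumsum(rng.normal(size=48))

# Preferred: a pandas Series with a PeriodIndex (or a DatetimeIndex with a
# frequency set), so no extra date arguments are needed.
endog = pd.Series(values, index=pd.period_range("2000-01", periods=48, freq="M"))

# Also accepted: a bare array, but then start="2000-01" and freq="M" would
# have to be passed to x13_arima_select_order explicitly.
endog_raw = np.asarray(values)

print(endog.index[0], endog.index[-1])
```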
def is_dummy(self) -> bool: """Flag indicating whether the values produced are dummy variables""" return self._is_dummy
Flag indicating whether the values produced are dummy variables
is_dummy
python
statsmodels/statsmodels
statsmodels/tsa/deterministic.py
https://github.com/statsmodels/statsmodels/blob/master/statsmodels/tsa/deterministic.py
BSD-3-Clause
def in_sample(self, index: Sequence[Hashable]) -> pd.DataFrame: """ Produce deterministic trends for in-sample fitting. Parameters ---------- index : index_like An index-like object. If not an index, it is converted to an index. Returns ------- DataFrame A DataFrame containing the deterministic terms. """
Produce deterministic trends for in-sample fitting. Parameters ---------- index : index_like An index-like object. If not an index, it is converted to an index. Returns ------- DataFrame A DataFrame containing the deterministic terms.
in_sample
python
statsmodels/statsmodels
statsmodels/tsa/deterministic.py
https://github.com/statsmodels/statsmodels/blob/master/statsmodels/tsa/deterministic.py
BSD-3-Clause
def out_of_sample( self, steps: int, index: Sequence[Hashable], forecast_index: Optional[Sequence[Hashable]] = None, ) -> pd.DataFrame: """ Produce deterministic trends for out-of-sample forecasts Parameters ---------- steps : int The number of steps to forecast index : index_like An index-like object. If not an index, it is converted to an index. forecast_index : index_like An Index or index-like object to use for the forecasts. If provided must have steps elements. Returns ------- DataFrame A DataFrame containing the deterministic terms. """
Produce deterministic trends for out-of-sample forecasts Parameters ---------- steps : int The number of steps to forecast index : index_like An index-like object. If not an index, it is converted to an index. forecast_index : index_like An Index or index-like object to use for the forecasts. If provided must have steps elements. Returns ------- DataFrame A DataFrame containing the deterministic terms.
out_of_sample
python
statsmodels/statsmodels
statsmodels/tsa/deterministic.py
https://github.com/statsmodels/statsmodels/blob/master/statsmodels/tsa/deterministic.py
BSD-3-Clause
def __str__(self) -> str: """A meaningful string representation of the term"""
A meaningful string representation of the term
__str__
python
statsmodels/statsmodels
statsmodels/tsa/deterministic.py
https://github.com/statsmodels/statsmodels/blob/master/statsmodels/tsa/deterministic.py
BSD-3-Clause
def _eq_attr(self) -> tuple[Hashable, ...]: """tuple of attributes that are used for equality comparison"""
tuple of attributes that are used for equality comparison
_eq_attr
python
statsmodels/statsmodels
statsmodels/tsa/deterministic.py
https://github.com/statsmodels/statsmodels/blob/master/statsmodels/tsa/deterministic.py
BSD-3-Clause
def _extend_index( index: pd.Index, steps: int, forecast_index: Optional[Sequence[Hashable]] = None, ) -> pd.Index: """Extend the forecast index""" if forecast_index is not None: forecast_index = DeterministicTerm._index_like(forecast_index) assert isinstance(forecast_index, pd.Index) if forecast_index.shape[0] != steps: raise ValueError( "The number of values in forecast_index " f"({forecast_index.shape[0]}) must match steps ({steps})." ) return forecast_index if isinstance(index, pd.PeriodIndex): return pd.period_range( index[-1] + 1, periods=steps, freq=index.freq ) elif isinstance(index, pd.DatetimeIndex) and index.freq is not None: next_obs = pd.date_range(index[-1], freq=index.freq, periods=2)[1] return pd.date_range(next_obs, freq=index.freq, periods=steps) elif isinstance(index, pd.RangeIndex): assert isinstance(index, pd.RangeIndex) try: step = index.step start = index.stop except AttributeError: # TODO: Remove after pandas min ver is 1.0.0+ step = index[-1] - index[-2] if len(index) > 1 else 1 start = index[-1] + step stop = start + step * steps return pd.RangeIndex(start, stop, step=step) elif is_int_index(index) and np.all(np.diff(index) == 1): idx_arr = np.arange(index[-1] + 1, index[-1] + steps + 1) return pd.Index(idx_arr) # default range index import warnings warnings.warn( "Only PeriodIndexes, DatetimeIndexes with a frequency set, " "RangeIndexes, and Indexes with a unit increment support " "extending. The returned index will contain the position relative " "to the data length.", UserWarning, stacklevel=2, ) nobs = index.shape[0] return pd.RangeIndex(nobs + 1, nobs + steps + 1)
Extend the forecast index
_extend_index
python
statsmodels/statsmodels
statsmodels/tsa/deterministic.py
https://github.com/statsmodels/statsmodels/blob/master/statsmodels/tsa/deterministic.py
BSD-3-Clause
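The branching in ``_extend_index`` can be illustrated for the two simplest cases with a stripped-down re-implementation (``extend_index`` below is a hypothetical helper, not the statsmodels function; the DatetimeIndex and plain-integer-index branches are omitted):

```python
import pandas as pd

def extend_index(index, steps):
    # A PeriodIndex extends by counting forward from the last period.
    if isinstance(index, pd.PeriodIndex):
        return pd.period_range(index[-1] + 1, periods=steps, freq=index.freq)
    # A RangeIndex extends by continuing from its stop with the same step.
    if isinstance(index, pd.RangeIndex):
        start, step = index.stop, index.step
        return pd.RangeIndex(start, start + step * steps, step=step)
    raise TypeError("only PeriodIndex and RangeIndex shown in this sketch")

idx = pd.period_range("2020Q1", periods=8, freq="Q")  # 2020Q1 .. 2021Q4
ext = extend_index(idx, 3)
print(list(ext.astype(str)))  # the three quarters after 2021Q4
```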
def constant(self) -> bool: """Flag indicating that a constant is included""" return self._constant
Flag indicating that a constant is included
constant
python
statsmodels/statsmodels
statsmodels/tsa/deterministic.py
https://github.com/statsmodels/statsmodels/blob/master/statsmodels/tsa/deterministic.py
BSD-3-Clause
def order(self) -> int: """Order of the time trend""" return self._order
Order of the time trend
order
python
statsmodels/statsmodels
statsmodels/tsa/deterministic.py
https://github.com/statsmodels/statsmodels/blob/master/statsmodels/tsa/deterministic.py
BSD-3-Clause
def from_string(cls, trend: str) -> "TimeTrend": """ Create a TimeTrend from a string description. Provided for compatibility with common string names. Parameters ---------- trend : {"n", "c", "t", "ct", "ctt"} The string representation of the time trend. The terms are: * "n": No trend terms * "c": A constant only * "t": Linear time trend only * "ct": A constant and a time trend * "ctt": A constant, a time trend and a quadratic time trend Returns ------- TimeTrend The TimeTrend instance. """ constant = trend.startswith("c") order = 0 if "tt" in trend: order = 2 elif "t" in trend: order = 1 return cls(constant=constant, order=order)
Create a TimeTrend from a string description. Provided for compatibility with common string names. Parameters ---------- trend : {"n", "c", "t", "ct", "ctt"} The string representation of the time trend. The terms are: * "n": No trend terms * "c": A constant only * "t": Linear time trend only * "ct": A constant and a time trend * "ctt": A constant, a time trend and a quadratic time trend Returns ------- TimeTrend The TimeTrend instance.
from_string
python
statsmodels/statsmodels
statsmodels/tsa/deterministic.py
https://github.com/statsmodels/statsmodels/blob/master/statsmodels/tsa/deterministic.py
BSD-3-Clause
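The string-to-trend mapping used by ``from_string`` is small enough to sketch directly; ``parse_trend`` is a hypothetical stand-in that returns the ``(constant, order)`` pair the classmethod passes to the constructor:

```python
def parse_trend(trend: str) -> tuple[bool, int]:
    # A leading "c" toggles the constant; "tt" / "t" select the trend order.
    constant = trend.startswith("c")
    if "tt" in trend:
        order = 2
    elif "t" in trend:
        order = 1
    else:
        order = 0
    return constant, order

for spec in ("n", "c", "t", "ct", "ctt"):
    print(spec, parse_trend(spec))
```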
def period(self) -> int: """The period of the seasonality""" return self._period
The period of the seasonality
period
python
statsmodels/statsmodels
statsmodels/tsa/deterministic.py
https://github.com/statsmodels/statsmodels/blob/master/statsmodels/tsa/deterministic.py
BSD-3-Clause
def initial_period(self) -> int: """The seasonal index of the first observation""" return self._initial_period
The seasonal index of the first observation
initial_period
python
statsmodels/statsmodels
statsmodels/tsa/deterministic.py
https://github.com/statsmodels/statsmodels/blob/master/statsmodels/tsa/deterministic.py
BSD-3-Clause
def from_index( cls, index: Union[Sequence[Hashable], pd.DatetimeIndex, pd.PeriodIndex] ) -> "Seasonality": """ Construct a seasonality directly from an index using its frequency. Parameters ---------- index : {DatetimeIndex, PeriodIndex} An index with its frequency (`freq`) set. Returns ------- Seasonality The initialized Seasonality instance. """ index = cls._index_like(index) if isinstance(index, pd.PeriodIndex): freq = index.freq elif isinstance(index, pd.DatetimeIndex): freq = index.freq if index.freq else index.inferred_freq else: raise TypeError("index must be a DatetimeIndex or PeriodIndex") if freq is None: raise ValueError("index must have a freq or inferred_freq set") period = freq_to_period(freq) return cls(period=period)
Construct a seasonality directly from an index using its frequency. Parameters ---------- index : {DatetimeIndex, PeriodIndex} An index with its frequency (`freq`) set. Returns ------- Seasonality The initialized Seasonality instance.
from_index
python
statsmodels/statsmodels
statsmodels/tsa/deterministic.py
https://github.com/statsmodels/statsmodels/blob/master/statsmodels/tsa/deterministic.py
BSD-3-Clause
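Downstream of ``from_index``, a ``Seasonality`` term expands into one dummy column per season, with the season of each observation taken from its position modulo the period (``freq_to_period`` maps, e.g., quarterly data to 4). A sketch of the in-sample output using plain numpy/pandas; the column names here are only illustrative:

```python
import numpy as np
import pandas as pd

period = 4                                 # e.g. quarterly data
nobs = 10
season = np.arange(nobs) % period          # season of each observation
dummies = pd.DataFrame(
    np.eye(period)[season],                # row t is the indicator for its season
    columns=[f"s({i},{period})" for i in range(1, period + 1)],
)
print(dummies.head())
```

Exactly one column is 1 in every row, which is why combining these dummies with Fourier terms of the same period would be perfectly collinear.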
def order(self) -> int: """The order of the Fourier terms included""" return self._order
The order of the Fourier terms included
order
python
statsmodels/statsmodels
statsmodels/tsa/deterministic.py
https://github.com/statsmodels/statsmodels/blob/master/statsmodels/tsa/deterministic.py
BSD-3-Clause
def period(self) -> float: """The period of the Fourier terms""" return self._period
The period of the Fourier terms
period
python
statsmodels/statsmodels
statsmodels/tsa/deterministic.py
https://github.com/statsmodels/statsmodels/blob/master/statsmodels/tsa/deterministic.py
BSD-3-Clause
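Fourier terms pair a sine and a cosine column at each harmonic j = 1..order of the seasonal period m, i.e. sin(2*pi*j*t/m) and cos(2*pi*j*t/m). A self-contained sketch (``fourier_terms`` is a hypothetical helper, not the statsmodels class):

```python
import numpy as np

def fourier_terms(nobs, period, order):
    t = np.arange(nobs)
    cols = []
    for j in range(1, order + 1):
        arg = 2 * np.pi * j * t / period
        cols.append(np.sin(arg))
        cols.append(np.cos(arg))
    return np.column_stack(cols)

terms = fourier_terms(nobs=12, period=12, order=2)
print(terms.shape)  # (12, 4): two sin/cos pairs
```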
def freq(self) -> str: """The frequency of the deterministic terms""" return self._freq.freqstr
The frequency of the deterministic terms
freq
python
statsmodels/statsmodels
statsmodels/tsa/deterministic.py
https://github.com/statsmodels/statsmodels/blob/master/statsmodels/tsa/deterministic.py
BSD-3-Clause
def freq(self) -> str: """The frequency of the deterministic terms""" return self._freq.freqstr
The frequency of the deterministic terms
freq
python
statsmodels/statsmodels
statsmodels/tsa/deterministic.py
https://github.com/statsmodels/statsmodels/blob/master/statsmodels/tsa/deterministic.py
BSD-3-Clause
def period(self) -> str: """The full period""" return self._period
The full period
period
python
statsmodels/statsmodels
statsmodels/tsa/deterministic.py
https://github.com/statsmodels/statsmodels/blob/master/statsmodels/tsa/deterministic.py
BSD-3-Clause
def base_period(self) -> Optional[str]: """The base period""" return self._base_period
The base period
base_period
python
statsmodels/statsmodels
statsmodels/tsa/deterministic.py
https://github.com/statsmodels/statsmodels/blob/master/statsmodels/tsa/deterministic.py
BSD-3-Clause
def from_string( cls, freq: str, trend: str, base_period: Optional[Union[str, DateLike]] = None, ) -> "CalendarTimeTrend": """ Create a TimeTrend from a string description. Provided for compatibility with common string names. Parameters ---------- freq : str A string convertible to a pandas frequency. trend : {"n", "c", "t", "ct", "ctt"} The string representation of the time trend. The terms are: * "n": No trend terms * "c": A constant only * "t": Linear time trend only * "ct": A constant and a time trend * "ctt": A constant, a time trend and a quadratic time trend base_period : {str, pd.Timestamp}, default None The base period to use when computing the time stamps. This value is treated as 1 and so all other time indices are defined as the number of periods since or before this time stamp. If not provided, defaults to pandas base period for a PeriodIndex. Returns ------- TimeTrend The TimeTrend instance. """ constant = trend.startswith("c") order = 0 if "tt" in trend: order = 2 elif "t" in trend: order = 1 return cls(freq, constant, order, base_period=base_period)
Create a TimeTrend from a string description. Provided for compatibility with common string names. Parameters ---------- freq : str A string convertible to a pandas frequency. trend : {"n", "c", "t", "ct", "ctt"} The string representation of the time trend. The terms are: * "n": No trend terms * "c": A constant only * "t": Linear time trend only * "ct": A constant and a time trend * "ctt": A constant, a time trend and a quadratic time trend base_period : {str, pd.Timestamp}, default None The base period to use when computing the time stamps. This value is treated as 1 and so all other time indices are defined as the number of periods since or before this time stamp. If not provided, defaults to pandas base period for a PeriodIndex. Returns ------- TimeTrend The TimeTrend instance.
from_string
python
statsmodels/statsmodels
statsmodels/tsa/deterministic.py
https://github.com/statsmodels/statsmodels/blob/master/statsmodels/tsa/deterministic.py
BSD-3-Clause
def __init__( self, index: Union[Sequence[Hashable], pd.Index], *, period: Optional[Union[float, int]] = None, constant: bool = False, order: int = 0, seasonal: bool = False, fourier: int = 0, additional_terms: Sequence[DeterministicTerm] = (), drop: bool = False, ): if not isinstance(index, pd.Index): index = pd.Index(index) self._index = index self._deterministic_terms: list[DeterministicTerm] = [] self._extendable = False self._index_freq = None self._validate_index() period = float_like(period, "period", optional=True) self._constant = constant = bool_like(constant, "constant") self._order = required_int_like(order, "order") self._seasonal = seasonal = bool_like(seasonal, "seasonal") self._fourier = required_int_like(fourier, "fourier") additional_terms = tuple(additional_terms) self._cached_in_sample = None self._drop = bool_like(drop, "drop") self._additional_terms = additional_terms if constant or order: self._deterministic_terms.append(TimeTrend(constant, order)) if seasonal and fourier: raise ValueError( """seasonal and fourier cannot both be initialized through the \ constructor since these will necessarily be perfectly collinear. Instead, \ you can pass additional components using the additional_terms input.""" ) if (seasonal or fourier) and period is None: self._period = period = freq_to_period(self._index_freq) if seasonal: period = required_int_like(period, "period") self._deterministic_terms.append(Seasonality(period)) elif fourier: period = float_like(period, "period") assert period is not None self._deterministic_terms.append(Fourier(period, order=fourier)) for term in additional_terms: if not isinstance(term, DeterministicTerm): raise TypeError( "All additional terms must be instances of subclasses " "of DeterministicTerm" ) if term not in self._deterministic_terms: self._deterministic_terms.append(term) else: raise ValueError( "One or more terms in additional_terms has been added " "through the parameters of the constructor. Terms must " "be unique." ) self._period = period self._retain_cols: Optional[list[Hashable]] = None
seasonal and fourier cannot both be initialized through the \ constructor since these will necessarily be perfectly collinear. Instead, \ you can pass additional components using the additional_terms input.
__init__
python
statsmodels/statsmodels
statsmodels/tsa/deterministic.py
https://github.com/statsmodels/statsmodels/blob/master/statsmodels/tsa/deterministic.py
BSD-3-Clause
def index(self) -> pd.Index: """The index of the process""" return self._index
The index of the process
index
python
statsmodels/statsmodels
statsmodels/tsa/deterministic.py
https://github.com/statsmodels/statsmodels/blob/master/statsmodels/tsa/deterministic.py
BSD-3-Clause
def terms(self) -> list[DeterministicTerm]: """The deterministic terms included in the process""" return self._deterministic_terms
The deterministic terms included in the process
terms
python
statsmodels/statsmodels
statsmodels/tsa/deterministic.py
https://github.com/statsmodels/statsmodels/blob/master/statsmodels/tsa/deterministic.py
BSD-3-Clause
def range( self, start: Union[IntLike, DateLike, str], stop: Union[IntLike, DateLike, str], ) -> pd.DataFrame: """ Deterministic terms spanning a range of observations Parameters ---------- start : {int, str, dt.datetime, pd.Timestamp, np.datetime64} The first observation. stop : {int, str, dt.datetime, pd.Timestamp, np.datetime64} The final observation. Inclusive to match most prediction functions in statsmodels. Returns ------- DataFrame A data frame of deterministic terms """ if not self._extendable: raise TypeError( """The index in the deterministic process does not \ support extension. Only PeriodIndex, DatetimeIndex with a frequency, \ RangeIndex, and integral Indexes that start at 0 and have only unit \ differences can be extended when producing out-of-sample forecasts. """ ) if type(self._index) in (pd.RangeIndex,) or is_int_index(self._index): start = required_int_like(start, "start") stop = required_int_like(stop, "stop") # Add 1 to ensure that the end point is inclusive stop += 1 return self._range_from_range_index(start, stop) if isinstance(start, (int, np.integer)): start = self._int_to_timestamp(start, "start") else: start = pd.Timestamp(start) if isinstance(stop, (int, np.integer)): stop = self._int_to_timestamp(stop, "stop") else: stop = pd.Timestamp(stop) return self._range_from_time_index(start, stop)
Deterministic terms spanning a range of observations Parameters ---------- start : {int, str, dt.datetime, pd.Timestamp, np.datetime64} The first observation. stop : {int, str, dt.datetime, pd.Timestamp, np.datetime64} The final observation. Inclusive to match most prediction functions in statsmodels. Returns ------- DataFrame A data frame of deterministic terms
range
python
statsmodels/statsmodels
statsmodels/tsa/deterministic.py
https://github.com/statsmodels/statsmodels/blob/master/statsmodels/tsa/deterministic.py
BSD-3-Clause
def apply(self, index): """ Create an identical deterministic process with a different index Parameters ---------- index : index_like An index-like object. If not an index, it is converted to an index. Returns ------- DeterministicProcess The deterministic process applied to a different index """ return DeterministicProcess( index, period=self._period, constant=self._constant, order=self._order, seasonal=self._seasonal, fourier=self._fourier, additional_terms=self._additional_terms, drop=self._drop, )
Create an identical deterministic process with a different index Parameters ---------- index : index_like An index-like object. If not an index, it is converted to an index. Returns ------- DeterministicProcess The deterministic process applied to a different index
apply
python
statsmodels/statsmodels
statsmodels/tsa/deterministic.py
https://github.com/statsmodels/statsmodels/blob/master/statsmodels/tsa/deterministic.py
BSD-3-Clause
def varfilter(x, a): '''apply an autoregressive filter to a series x Warning: I just found out that convolve does not work as I thought, this likely does not work correctly for nvars>3 x can be 2d, a can be 1d, 2d, or 3d Parameters ---------- x : array_like data array, 1d or 2d, if 2d then observations in rows a : array_like autoregressive filter coefficients, ar lag polynomial see Notes Returns ------- y : ndarray, 2d filtered array, number of columns determined by x and a Notes ----- In general form this uses the linear filter :: y = a(L)x where x : nobs, nvars a : nlags, nvars, npoly Depending on the shape and dimension of a this uses different Lag polynomial arrays case 1 : a is 1d or (nlags,1) one lag polynomial is applied to all variables (columns of x) case 2 : a is 2d, (nlags, nvars) each series is independently filtered with its own lag polynomial, uses loop over nvar case 3 : a is 3d, (nlags, nvars, npoly) the ith column of the output array is given by the linear filter defined by the 2d array a[:,:,i], i.e. :: y[:,i] = a(.,.,i)(L) * x y[t,i] = sum_p sum_j a(p,j,i)*x(t-p,j) for p = 0,...nlags-1, j = 0,...nvars-1, for all t >= nlags Note: maybe convert to axis=1, Not TODO: initial conditions ''' x = np.asarray(x) a = np.asarray(a) if x.ndim == 1: x = x[:,None] if x.ndim > 2: raise ValueError('x array has to be 1d or 2d') nvar = x.shape[1] nlags = a.shape[0] ntrim = nlags//2 # for x is 2d with ncols >1 if a.ndim == 1: # case: identical ar filter (lag polynomial) return signal.convolve(x, a[:,None], mode='valid') # alternative: #return signal.lfilter(a,[1],x.astype(float),axis=0) elif a.ndim == 2: if min(a.shape) == 1: # case: identical ar filter (lag polynomial) return signal.convolve(x, a, mode='valid') # case: independent ar #(a bit like recserar in gauss, but no x yet) #(no, reserar is inverse filter) result = np.zeros((x.shape[0]-nlags+1, nvar)) for i in range(nvar): # could also use np.convolve, but easier for switching to fft result[:,i] = signal.convolve(x[:,i], a[:,i], mode='valid') return result elif a.ndim == 3: # case: vector autoregressive with lag matrices # Note: we must have shape[1] == shape[2] == nvar yf = signal.convolve(x[:,:,None], a) yvalid = yf[ntrim:-ntrim, yf.shape[1]//2,:] return yvalid
apply an autoregressive filter to a series x Warning: I just found out that convolve does not work as I thought, this likely does not work correctly for nvars>3 x can be 2d, a can be 1d, 2d, or 3d Parameters ---------- x : array_like data array, 1d or 2d, if 2d then observations in rows a : array_like autoregressive filter coefficients, ar lag polynomial see Notes Returns ------- y : ndarray, 2d filtered array, number of columns determined by x and a Notes ----- In general form this uses the linear filter :: y = a(L)x where x : nobs, nvars a : nlags, nvars, npoly Depending on the shape and dimension of a this uses different Lag polynomial arrays case 1 : a is 1d or (nlags,1) one lag polynomial is applied to all variables (columns of x) case 2 : a is 2d, (nlags, nvars) each series is independently filtered with its own lag polynomial, uses loop over nvar case 3 : a is 3d, (nlags, nvars, npoly) the ith column of the output array is given by the linear filter defined by the 2d array a[:,:,i], i.e. :: y[:,i] = a(.,.,i)(L) * x y[t,i] = sum_p sum_j a(p,j,i)*x(t-p,j) for p = 0,...nlags-1, j = 0,...nvars-1, for all t >= nlags Note: maybe convert to axis=1, Not TODO: initial conditions
varfilter
python
statsmodels/statsmodels
statsmodels/tsa/varma_process.py
https://github.com/statsmodels/statsmodels/blob/master/statsmodels/tsa/varma_process.py
BSD-3-Clause
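Case 1 of ``varfilter`` (one lag polynomial applied to every column) can be reproduced with ``np.convolve`` alone; ``ar_filter`` below is a hypothetical per-column variant. With ``mode='valid'``, row t of the output is sum_p a[p] * x[t+nlags-1-p], i.e. the lag polynomial applied once the first nlags-1 observations are consumed:

```python
import numpy as np

def ar_filter(x, a):
    x = np.asarray(x, dtype=float)
    if x.ndim == 1:
        x = x[:, None]
    # np.convolve reverses a, so 'valid' output row t is a[0]*x_t + a[1]*x_{t-1} + ...
    return np.column_stack(
        [np.convolve(x[:, i], a, mode="valid") for i in range(x.shape[1])]
    )

x = np.arange(10, dtype=float).reshape(5, 2)
a = np.array([1.0, -0.5])        # y_t = x_t - 0.5 * x_{t-1}
y = ar_filter(x, a)
print(y)
```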
def varinversefilter(ar, nobs, version=1): '''creates inverse ar filter (MA representation) recursively The VAR lag polynomial is defined by :: ar(L) y_t = u_t or y_t = -ar_{-1}(L) y_{t-1} + u_t the returned lagpolynomial is arinv(L)=ar^{-1}(L) in :: y_t = arinv(L) u_t Parameters ---------- ar : ndarray, (nlags,nvars,nvars) matrix lagpolynomial, currently no exog first row should be identity Returns ------- arinv : ndarray, (nobs,nvars,nvars) Notes ----- ''' nlags, nvars, nvarsex = ar.shape if nvars != nvarsex: print('exogenous variables not implemented not tested') arinv = np.zeros((nobs+1, nvarsex, nvars)) arinv[0,:,:] = ar[0] arinv[1:nlags,:,:] = -ar[1:] if version == 1: for i in range(2,nobs+1): tmp = np.zeros((nvars,nvars)) for p in range(1,nlags): tmp += np.dot(-ar[p],arinv[i-p,:,:]) arinv[i,:,:] = tmp if version == 0: for i in range(nlags+1,nobs+1): print(ar[1:].shape, arinv[i-1:i-nlags:-1,:,:].shape) #arinv[i,:,:] = np.dot(-ar[1:],arinv[i-1:i-nlags:-1,:,:]) #print(np.tensordot(-ar[1:],arinv[i-1:i-nlags:-1,:,:],axes=([2],[1])).shape #arinv[i,:,:] = np.tensordot(-ar[1:],arinv[i-1:i-nlags:-1,:,:],axes=([2],[1])) raise NotImplementedError('waiting for generalized ufuncs or something') return arinv
creates inverse ar filter (MA representation) recursively The VAR lag polynomial is defined by :: ar(L) y_t = u_t or y_t = -ar_{-1}(L) y_{t-1} + u_t the returned lagpolynomial is arinv(L)=ar^{-1}(L) in :: y_t = arinv(L) u_t Parameters ---------- ar : ndarray, (nlags,nvars,nvars) matrix lagpolynomial, currently no exog first row should be identity Returns ------- arinv : ndarray, (nobs,nvars,nvars) Notes -----
varinversefilter
python
statsmodels/statsmodels
statsmodels/tsa/varma_process.py
https://github.com/statsmodels/statsmodels/blob/master/statsmodels/tsa/varma_process.py
BSD-3-Clause
def vargenerate(ar, u, initvalues=None): '''generate a VAR process with errors u similar to gauss uses loop Parameters ---------- ar : array (nlags,nvars,nvars) matrix lagpolynomial u : array (nobs,nvars) exogenous variable, error term for VAR Returns ------- sar : array (1+nobs,nvars) sample of var process, inverse filtered u does not trim initial condition y_0 = 0 Examples -------- # generate random sample of VAR nobs, nvars = 10, 2 u = numpy.random.randn(nobs,nvars) a21 = np.array([[[ 1. , 0. ], [ 0. , 1. ]], [[-0.8, 0. ], [ 0., -0.6]]]) vargenerate(a21,u) # Impulse Response to an initial shock to the first variable imp = np.zeros((nobs, nvars)) imp[0,0] = 1 vargenerate(a21,imp) ''' nlags, nvars, nvarsex = ar.shape nlagsm1 = nlags - 1 nobs = u.shape[0] if nvars != nvarsex: print('exogenous variables not implemented not tested') if u.shape[1] != nvars: raise ValueError('u needs to have nvars columns') if initvalues is None: sar = np.zeros((nobs+nlagsm1, nvars)) start = nlagsm1 else: start = max(nlagsm1, initvalues.shape[0]) sar = np.zeros((nobs+start, nvars)) sar[start-initvalues.shape[0]:start] = initvalues #sar[nlagsm1:] = u sar[start:] = u #if version == 1: for i in range(start,start+nobs): for p in range(1,nlags): sar[i] += np.dot(sar[i-p,:],-ar[p]) return sar
generate a VAR process with errors u similar to gauss uses loop Parameters ---------- ar : array (nlags,nvars,nvars) matrix lagpolynomial u : array (nobs,nvars) exogenous variable, error term for VAR Returns ------- sar : array (1+nobs,nvars) sample of var process, inverse filtered u does not trim initial condition y_0 = 0 Examples -------- # generate random sample of VAR nobs, nvars = 10, 2 u = numpy.random.randn(nobs,nvars) a21 = np.array([[[ 1. , 0. ], [ 0. , 1. ]], [[-0.8, 0. ], [ 0., -0.6]]]) vargenerate(a21,u) # Impulse Response to an initial shock to the first variable imp = np.zeros((nobs, nvars)) imp[0,0] = 1 vargenerate(a21,imp)
vargenerate
python
statsmodels/statsmodels
statsmodels/tsa/varma_process.py
https://github.com/statsmodels/statsmodels/blob/master/statsmodels/tsa/varma_process.py
BSD-3-Clause
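The impulse-response example from the docstring can be run end to end with a minimal re-implementation of the same recursion (``var_generate`` is a hypothetical stand-in; note the row-vector convention ``y[t] += y[t-p] @ (-ar[p])`` matching the source, where the "full" lag polynomial stores a[0] = I and a[p] = -A_p):

```python
import numpy as np

def var_generate(ar, u):
    nlags, nvars, _ = ar.shape
    nobs = u.shape[0]
    y = np.zeros((nobs + nlags - 1, nvars))   # leading zeros as initial values
    y[nlags - 1:] = u
    for t in range(nlags - 1, nobs + nlags - 1):
        for p in range(1, nlags):
            y[t] += y[t - p] @ (-ar[p])
    return y

a21 = np.array([[[1.0, 0.0], [0.0, 1.0]],
                [[-0.8, 0.0], [0.0, -0.6]]])  # y_t = diag(0.8, 0.6) y_{t-1} + u_t
imp = np.zeros((5, 2))
imp[0, 0] = 1.0                               # unit shock to the first variable
y = var_generate(a21, imp)
print(y[:, 0])  # [0, 1, 0.8, 0.64, 0.512, 0.4096]
```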
def padone(x, front=0, back=0, axis=0, fillvalue=0): '''pad with zeros along one axis, currently only axis=0 can be used sequentially to pad several axes Examples -------- >>> padone(np.ones((2,3)),1,3,axis=1) array([[ 0., 1., 1., 1., 0., 0., 0.], [ 0., 1., 1., 1., 0., 0., 0.]]) >>> padone(np.ones((2,3)),1,1, fillvalue=np.nan) array([[ NaN, NaN, NaN], [ 1., 1., 1.], [ 1., 1., 1.], [ NaN, NaN, NaN]]) ''' #primitive version shape = np.array(x.shape) shape[axis] += (front + back) shapearr = np.array(x.shape) out = np.empty(shape) out.fill(fillvalue) startind = np.zeros(x.ndim, dtype=int) startind[axis] = front endind = startind + shapearr myslice = [slice(startind[k], endind[k]) for k in range(len(endind))] #print(myslice #print(out.shape #print(out[tuple(myslice)].shape out[tuple(myslice)] = x return out
pad with zeros along one axis, currently only axis=0 can be used sequentially to pad several axes Examples -------- >>> padone(np.ones((2,3)),1,3,axis=1) array([[ 0., 1., 1., 1., 0., 0., 0.], [ 0., 1., 1., 1., 0., 0., 0.]]) >>> padone(np.ones((2,3)),1,1, fillvalue=np.nan) array([[ NaN, NaN, NaN], [ 1., 1., 1.], [ 1., 1., 1.], [ NaN, NaN, NaN]])
padone
python
statsmodels/statsmodels
statsmodels/tsa/varma_process.py
https://github.com/statsmodels/statsmodels/blob/master/statsmodels/tsa/varma_process.py
BSD-3-Clause
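The same padding is available from ``np.pad``, which this helper predates; ``pad_one`` below is a hypothetical equivalent that pads only the target axis with ``(front, back)`` and every other axis with ``(0, 0)``:

```python
import numpy as np

def pad_one(x, front=0, back=0, axis=0, fillvalue=0):
    widths = [(0, 0)] * x.ndim
    widths[axis] = (front, back)
    return np.pad(x, widths, mode="constant", constant_values=fillvalue)

out = pad_one(np.ones((2, 3)), 1, 3, axis=1)
print(out)
```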
def trimone(x, front=0, back=0, axis=0): '''trim number of array elements along one axis Examples -------- >>> xp = padone(np.ones((2,3)),1,3,axis=1) >>> xp array([[ 0., 1., 1., 1., 0., 0., 0.], [ 0., 1., 1., 1., 0., 0., 0.]]) >>> trimone(xp,1,3,1) array([[ 1., 1., 1.], [ 1., 1., 1.]]) ''' shape = np.array(x.shape) shape[axis] -= (front + back) #print(shape, front, back startind = np.zeros(x.ndim) startind[axis] = front endind = startind + shape myslice = [slice(startind[k], endind[k]) for k in range(len(endind))] #print(myslice #print(shape, endind #print(x[tuple(myslice)].shape return x[tuple(myslice)]
trim number of array elements along one axis Examples -------- >>> xp = padone(np.ones((2,3)),1,3,axis=1) >>> xp array([[ 0., 1., 1., 1., 0., 0., 0.], [ 0., 1., 1., 1., 0., 0., 0.]]) >>> trimone(xp,1,3,1) array([[ 1., 1., 1.], [ 1., 1., 1.]])
trimone
python
statsmodels/statsmodels
statsmodels/tsa/varma_process.py
https://github.com/statsmodels/statsmodels/blob/master/statsmodels/tsa/varma_process.py
BSD-3-Clause
def ar2full(ar): '''make reduced lagpolynomial into a right side lagpoly array ''' nlags, nvar,nvarex = ar.shape return np.r_[np.eye(nvar,nvarex)[None,:,:],-ar]
make reduced lagpolynomial into a right side lagpoly array
ar2full
python
statsmodels/statsmodels
statsmodels/tsa/varma_process.py
https://github.com/statsmodels/statsmodels/blob/master/statsmodels/tsa/varma_process.py
BSD-3-Clause
def ar2lhs(ar): '''convert full (rhs) lagpolynomial into a reduced, left side lagpoly array this is mainly a reminder about the definition ''' return -ar[1:]
convert full (rhs) lagpolynomial into a reduced, left side lagpoly array this is mainly a reminder about the definition
ar2lhs
python
statsmodels/statsmodels
statsmodels/tsa/varma_process.py
https://github.com/statsmodels/statsmodels/blob/master/statsmodels/tsa/varma_process.py
BSD-3-Clause
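The two layouts round-trip: ``ar2full`` prepends an identity block and negates the reduced coefficients, and ``ar2lhs`` undoes that. Both one-liners are restated here so the example is self-contained:

```python
import numpy as np

def ar2full(ar):
    # reduced (lhs) coefficients -> full polynomial [I, -A_1, ..., -A_p]
    nlags, nvar, nvarex = ar.shape
    return np.r_[np.eye(nvar, nvarex)[None, :, :], -ar]

def ar2lhs(ar):
    # full polynomial -> reduced coefficients A_1, ..., A_p
    return -ar[1:]

lhs = np.array([[[0.8, 0.0], [0.0, 0.6]]])   # y_t = A_1 y_{t-1} + u_t
full = ar2full(lhs)
print(full.shape)  # (2, 2, 2)
```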
def fit(self, nlags): '''estimate parameters using OLS Parameters ---------- nlags : int number of lags to include in regression, same for all variables Returns ------- None, but attaches arhat : array (nlags+1, nvar, nvar) full lag polynomial array arlhs : array (nlags, nvar, nvar) reduced lag polynomial for left hand side other statistics as returned by linalg.lstsq : need to be completed This currently assumes all parameters are estimated without restrictions. In this case SUR is identical to OLS. Estimation results are attached to the class instance. ''' self.nlags = nlags # without current period nvars = self.nvars #TODO: ar2s looks like a module variable, bug? #lmat = lagmat(ar2s, nlags, trim='both', original='in') lmat = lagmat(self.y, nlags, trim='both', original='in') self.yred = lmat[:,:nvars] self.xred = lmat[:,nvars:] res = np.linalg.lstsq(self.xred, self.yred, rcond=-1) self.estresults = res self.arlhs = res[0].reshape(nlags, nvars, nvars) self.arhat = ar2full(self.arlhs) self.rss = res[1] self.xredrank = res[2]
estimate parameters using OLS Parameters ---------- nlags : int number of lags to include in regression, same for all variables Returns ------- None, but attaches arhat : array (nlags+1, nvar, nvar) full lag polynomial array arlhs : array (nlags, nvar, nvar) reduced lag polynomial for left hand side other statistics as returned by linalg.lstsq : need to be completed This currently assumes all parameters are estimated without restrictions. In this case SUR is identical to OLS. Estimation results are attached to the class instance.
fit
python
statsmodels/statsmodels
statsmodels/tsa/varma_process.py
https://github.com/statsmodels/statsmodels/blob/master/statsmodels/tsa/varma_process.py
BSD-3-Clause
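The OLS step in ``fit`` amounts to regressing current values on stacked lags. A sketch that simulates a VAR(1), builds the lag matrix by hand (what ``lagmat(..., trim='both', original='in')`` would supply), and recovers the coefficients with ``lstsq``; the simulated ``A1`` is made up for illustration:

```python
import numpy as np

rng = np.random.default_rng(0)
nobs, nvars, nlags = 2000, 2, 1
A1 = np.array([[0.5, 0.1], [0.0, 0.3]])
u = rng.normal(size=(nobs, nvars))
y = np.zeros((nobs, nvars))
for t in range(1, nobs):
    y[t] = y[t - 1] @ A1 + u[t]          # row-vector convention, as in fit

Y = y[nlags:]                             # current values
X = np.column_stack([y[nlags - p:nobs - p] for p in range(1, nlags + 1)])
coef, rss, rank, sv = np.linalg.lstsq(X, Y, rcond=None)
arlhs = coef.reshape(nlags, nvars, nvars)  # estimate of A_1
print(arlhs[0])                            # close to A1
```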
def predict(self): '''calculate estimated timeseries (yhat) for sample ''' if not hasattr(self, 'yhat'): self.yhat = varfilter(self.y, self.arhat) return self.yhat
calculate estimated timeseries (yhat) for sample
predict
python
statsmodels/statsmodels
statsmodels/tsa/varma_process.py
https://github.com/statsmodels/statsmodels/blob/master/statsmodels/tsa/varma_process.py
BSD-3-Clause
def covmat(self): ''' covariance matrix of estimate # not sure it's correct, need to check orientation everywhere # looks ok, display needs getting used to >>> v.rss[None,None,:]*np.linalg.inv(np.dot(v.xred.T,v.xred))[:,:,None] array([[[ 0.37247445, 0.32210609], [ 0.1002642 , 0.08670584]], [[ 0.1002642 , 0.08670584], [ 0.45903637, 0.39696255]]]) >>> >>> v.rss[0]*np.linalg.inv(np.dot(v.xred.T,v.xred)) array([[ 0.37247445, 0.1002642 ], [ 0.1002642 , 0.45903637]]) >>> v.rss[1]*np.linalg.inv(np.dot(v.xred.T,v.xred)) array([[ 0.32210609, 0.08670584], [ 0.08670584, 0.39696255]]) ''' #check if orientation is same as self.arhat self.paramcov = (self.rss[None,None,:] * np.linalg.inv(np.dot(self.xred.T, self.xred))[:,:,None])
covariance matrix of estimate # not sure it's correct, need to check orientation everywhere # looks ok, display needs getting used to >>> v.rss[None,None,:]*np.linalg.inv(np.dot(v.xred.T,v.xred))[:,:,None] array([[[ 0.37247445, 0.32210609], [ 0.1002642 , 0.08670584]], [[ 0.1002642 , 0.08670584], [ 0.45903637, 0.39696255]]]) >>> >>> v.rss[0]*np.linalg.inv(np.dot(v.xred.T,v.xred)) array([[ 0.37247445, 0.1002642 ], [ 0.1002642 , 0.45903637]]) >>> v.rss[1]*np.linalg.inv(np.dot(v.xred.T,v.xred)) array([[ 0.32210609, 0.08670584], [ 0.08670584, 0.39696255]])
covmat
python
statsmodels/statsmodels
statsmodels/tsa/varma_process.py
https://github.com/statsmodels/statsmodels/blob/master/statsmodels/tsa/varma_process.py
BSD-3-Clause
def forecast(self, horiz=1, u=None): '''calculates forecast for horiz number of periods at end of sample Parameters ---------- horiz : int (optional, default=1) forecast horizon u : array (horiz, nvars) error term for forecast periods. If None, then u is zero. Returns ------- yforecast : array (nobs+horiz, nvars) this includes the sample and the forecasts ''' if u is None: u = np.zeros((horiz, self.nvars)) return vargenerate(self.arhat, u, initvalues=self.y)
calculates forecast for horiz number of periods at end of sample Parameters ---------- horiz : int (optional, default=1) forecast horizon u : array (horiz, nvars) error term for forecast periods. If None, then u is zero. Returns ------- yforecast : array (nobs+horiz, nvars) this includes the sample and the forecasts
forecast
python
statsmodels/statsmodels
statsmodels/tsa/varma_process.py
https://github.com/statsmodels/statsmodels/blob/master/statsmodels/tsa/varma_process.py
BSD-3-Clause
def vstack(self, a=None, name='ar'): '''stack lagpolynomial vertically in 2d array ''' if a is not None: a = a elif name == 'ar': a = self.ar elif name == 'ma': a = self.ma else: raise ValueError('no array or name given') return a.reshape(-1, self.nvarall)
stack lagpolynomial vertically in 2d array
vstack
python
statsmodels/statsmodels
statsmodels/tsa/varma_process.py
https://github.com/statsmodels/statsmodels/blob/master/statsmodels/tsa/varma_process.py
BSD-3-Clause
def hstack(self, a=None, name='ar'): '''stack lagpolynomial horizontally in 2d array ''' if a is not None: a = a elif name == 'ar': a = self.ar elif name == 'ma': a = self.ma else: raise ValueError('no array or name given') return a.swapaxes(1,2).reshape(-1, self.nvarall).T
stack lagpolynomial horizontally in 2d array
hstack
python
statsmodels/statsmodels
statsmodels/tsa/varma_process.py
https://github.com/statsmodels/statsmodels/blob/master/statsmodels/tsa/varma_process.py
BSD-3-Clause
def stacksquare(self, a=None, name='ar', orientation='vertical'): '''stack lagpolynomial vertically in 2d square array with eye ''' if a is not None: a = a elif name == 'ar': a = self.ar elif name == 'ma': a = self.ma else: raise ValueError('no array or name given') astacked = a.reshape(-1, self.nvarall) lenpk, nvars = astacked.shape #[0] amat = np.eye(lenpk, k=nvars) amat[:,:nvars] = astacked return amat
stack lagpolynomial vertically in 2d square array with eye
stacksquare
python
statsmodels/statsmodels
statsmodels/tsa/varma_process.py
https://github.com/statsmodels/statsmodels/blob/master/statsmodels/tsa/varma_process.py
BSD-3-Clause
def vstackarma_minus1(self): '''stack ar and ma lag polynomials vertically in 2d array ''' a = np.concatenate((self.ar[1:], self.ma[1:]),0) return a.reshape(-1, self.nvarall)
stack ar and ma lag polynomials vertically in 2d array
vstackarma_minus1
python
statsmodels/statsmodels
statsmodels/tsa/varma_process.py
https://github.com/statsmodels/statsmodels/blob/master/statsmodels/tsa/varma_process.py
BSD-3-Clause
def hstackarma_minus1(self): '''stack ar and ma lag polynomials horizontally in 2d array this is the Kalman Filter representation, I think ''' a = np.concatenate((self.ar[1:], self.ma[1:]),0) return a.swapaxes(1,2).reshape(-1, self.nvarall)
stack ar and ma lag polynomials horizontally in 2d array this is the Kalman Filter representation, I think
hstackarma_minus1
python
statsmodels/statsmodels
statsmodels/tsa/varma_process.py
https://github.com/statsmodels/statsmodels/blob/master/statsmodels/tsa/varma_process.py
BSD-3-Clause
def getisstationary(self, a=None): '''check whether the auto-regressive lag-polynomial is stationary Returns ------- isstationary : bool *attaches* areigenvalues : complex array eigenvalues sorted by absolute value References ---------- formula taken from NAG manual ''' if a is not None: a = a else: if self.isstructured: a = -self.reduceform(self.ar)[1:] else: a = -self.ar[1:] amat = self.stacksquare(a) ev = np.sort(np.linalg.eigvals(amat))[::-1] self.areigenvalues = ev return (np.abs(ev) < 1).all()
check whether the auto-regressive lag-polynomial is stationary Returns ------- isstationary : bool *attaches* areigenvalues : complex array eigenvalues sorted by absolute value References ---------- formula taken from NAG manual
getisstationary
python
statsmodels/statsmodels
statsmodels/tsa/varma_process.py
https://github.com/statsmodels/statsmodels/blob/master/statsmodels/tsa/varma_process.py
BSD-3-Clause
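The stationarity check above stacks the AR lag matrices into a companion matrix (via stacksquare) and tests whether all eigenvalues lie inside the unit circle. A self-contained sketch of the same computation — `is_stationary` is a hypothetical helper written for illustration, not the class method:

```python
import numpy as np

def is_stationary(ar_coefs):
    """ar_coefs: (nlags, k, k) array of reduced-form lag matrices A1..Ap.

    Builds the companion matrix [[A1 ... Ap], [I, 0]] and checks that all
    eigenvalues have modulus below one.
    """
    nlags, k, _ = ar_coefs.shape
    companion = np.zeros((nlags * k, nlags * k))
    # first block-row: [A1 A2 ... Ap]
    companion[:k, :] = ar_coefs.transpose(1, 0, 2).reshape(k, nlags * k)
    # identity blocks shifting the lags down
    companion[k:, :-k] = np.eye((nlags - 1) * k)
    ev = np.linalg.eigvals(companion)
    return bool(np.all(np.abs(ev) < 1)), ev

# univariate AR(2): y_t = 0.5 y_{t-1} + 0.3 y_{t-2} + u_t  -> stationary
ok, _ = is_stationary(np.array([[[0.5]], [[0.3]]]))
# explosive AR(1) with coefficient 1.1 -> not stationary
bad, _ = is_stationary(np.array([[[1.1]]]))
```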
def getisinvertible(self, a=None): '''check whether the moving-average lag-polynomial is invertible Returns ------- isinvertible : bool *attaches* maeigenvalues : complex array eigenvalues sorted by absolute value References ---------- formula taken from NAG manual ''' if a is not None: a = a else: if self.isindependent: a = self.reduceform(self.ma)[1:] else: a = self.ma[1:] if a.shape[0] == 0: # no ma lags self.maeigenvalues = np.array([], complex) return True amat = self.stacksquare(a) ev = np.sort(np.linalg.eigvals(amat))[::-1] self.maeigenvalues = ev return (np.abs(ev) < 1).all()
check whether the moving-average lag-polynomial is invertible Returns ------- isinvertible : bool *attaches* maeigenvalues : complex array eigenvalues sorted by absolute value References ---------- formula taken from NAG manual
getisinvertible
python
statsmodels/statsmodels
statsmodels/tsa/varma_process.py
https://github.com/statsmodels/statsmodels/blob/master/statsmodels/tsa/varma_process.py
BSD-3-Clause
def reduceform(self, apoly): ''' this assumes no exog, todo ''' if apoly.ndim != 3: raise ValueError('apoly needs to be 3d') nlags, nvarsex, nvars = apoly.shape a = np.empty_like(apoly) try: a0inv = np.linalg.inv(apoly[0, :nvars, :]) except np.linalg.LinAlgError: raise ValueError('matrix not invertible', 'ask for implementation of pinv') for lag in range(nlags): a[lag] = np.dot(a0inv, apoly[lag]) return a
this assumes no exog, todo
reduceform
python
statsmodels/statsmodels
statsmodels/tsa/varma_process.py
https://github.com/statsmodels/statsmodels/blob/master/statsmodels/tsa/varma_process.py
BSD-3-Clause
def sumofsq(x: np.ndarray, axis: int = 0) -> float | np.ndarray: """Helper function to calculate sum of squares along the given axis""" return np.sum(x**2, axis=axis)
Helper function to calculate sum of squares along the given axis
sumofsq
python
statsmodels/statsmodels
statsmodels/tsa/ar_model.py
https://github.com/statsmodels/statsmodels/blob/master/statsmodels/tsa/ar_model.py
BSD-3-Clause
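As a quick illustration of the helper above, summing squares along axis 0 gives per-column sums:

```python
import numpy as np

# Equivalent of sumofsq(x) with the default axis=0: per-column sum of squares.
x = np.array([[1.0, 2.0],
              [3.0, 4.0]])
col_ss = np.sum(x**2, axis=0)   # column 0: 1 + 9, column 1: 4 + 16
```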
def _get_period(data: pd.DatetimeIndex | pd.PeriodIndex, index_freq) -> int: """Shared helper to get period from frequency or raise""" if data.freq: return freq_to_period(index_freq) raise ValueError( "freq cannot be inferred from endog and model includes seasonal " "terms. The number of periods must be explicitly set when the " "endog's index does not contain a frequency." )
Shared helper to get period from frequency or raise
_get_period
python
statsmodels/statsmodels
statsmodels/tsa/ar_model.py
https://github.com/statsmodels/statsmodels/blob/master/statsmodels/tsa/ar_model.py
BSD-3-Clause
def ar_lags(self) -> list[int] | None: """The autoregressive lags included in the model""" lags = list(self._lags) return None if not lags else lags
The autoregressive lags included in the model
ar_lags
python
statsmodels/statsmodels
statsmodels/tsa/ar_model.py
https://github.com/statsmodels/statsmodels/blob/master/statsmodels/tsa/ar_model.py
BSD-3-Clause
def hold_back(self) -> int | None: """The number of initial obs. excluded from the estimation sample.""" return self._hold_back
The number of initial obs. excluded from the estimation sample.
hold_back
python
statsmodels/statsmodels
statsmodels/tsa/ar_model.py
https://github.com/statsmodels/statsmodels/blob/master/statsmodels/tsa/ar_model.py
BSD-3-Clause
def trend(self) -> Literal["n", "c", "ct", "ctt"]: """The trend used in the model.""" return self._trend
The trend used in the model.
trend
python
statsmodels/statsmodels
statsmodels/tsa/ar_model.py
https://github.com/statsmodels/statsmodels/blob/master/statsmodels/tsa/ar_model.py
BSD-3-Clause
def seasonal(self) -> bool: """Flag indicating that the model contains a seasonal component.""" return self._seasonal
Flag indicating that the model contains a seasonal component.
seasonal
python
statsmodels/statsmodels
statsmodels/tsa/ar_model.py
https://github.com/statsmodels/statsmodels/blob/master/statsmodels/tsa/ar_model.py
BSD-3-Clause
def deterministic(self) -> DeterministicProcess | None: """The deterministic used to construct the model""" return self._deterministics if self._user_deterministic else None
The deterministic used to construct the model
deterministic
python
statsmodels/statsmodels
statsmodels/tsa/ar_model.py
https://github.com/statsmodels/statsmodels/blob/master/statsmodels/tsa/ar_model.py
BSD-3-Clause
def period(self) -> int | None: """The period of the seasonal component.""" return self._period
The period of the seasonal component.
period
python
statsmodels/statsmodels
statsmodels/tsa/ar_model.py
https://github.com/statsmodels/statsmodels/blob/master/statsmodels/tsa/ar_model.py
BSD-3-Clause
def df_model(self) -> int: """The model degrees of freedom.""" return self._x.shape[1]
The model degrees of freedom.
df_model
python
statsmodels/statsmodels
statsmodels/tsa/ar_model.py
https://github.com/statsmodels/statsmodels/blob/master/statsmodels/tsa/ar_model.py
BSD-3-Clause
def exog_names(self) -> list[str] | None: """Names of exogenous variables included in model""" return self._exog_names
Names of exogenous variables included in model
exog_names
python
statsmodels/statsmodels
statsmodels/tsa/ar_model.py
https://github.com/statsmodels/statsmodels/blob/master/statsmodels/tsa/ar_model.py
BSD-3-Clause
def initialize(self) -> None: """Initialize the model (no-op).""" pass
Initialize the model (no-op).
initialize
python
statsmodels/statsmodels
statsmodels/tsa/ar_model.py
https://github.com/statsmodels/statsmodels/blob/master/statsmodels/tsa/ar_model.py
BSD-3-Clause
def fit( self, cov_type: str = "nonrobust", cov_kwds: dict[str, Any] | None = None, use_t: bool = False, ) -> AutoRegResultsWrapper: """ Estimate the model parameters. Parameters ---------- cov_type : str The covariance estimator to use. The most common choices are listed below. Supports all covariance estimators that are available in ``OLS.fit``. * 'nonrobust' - The classic OLS covariance estimator that assumes homoskedasticity. * 'HC0', 'HC1', 'HC2', 'HC3' - Variants of White's (or Eicker-Huber-White) covariance estimator. `HC0` is the standard implementation. The others make corrections to improve the finite sample performance of the heteroskedasticity robust covariance estimator. * 'HAC' - Heteroskedasticity-autocorrelation robust covariance estimation. Supports cov_kwds. - `maxlags` integer (required) : number of lags to use. - `kernel` callable or str (optional) : kernel currently available kernels are ['bartlett', 'uniform'], default is Bartlett. - `use_correction` bool (optional) : If true, use small sample correction. cov_kwds : dict, optional A dictionary of keyword arguments to pass to the covariance estimator. `nonrobust` and `HC#` do not support cov_kwds. use_t : bool, optional A flag indicating that inference should use the Student's t distribution that accounts for model degree of freedom. If False, uses the normal distribution. If None, defers the choice to the cov_type. It also removes degree of freedom corrections from the covariance estimator when cov_type is 'nonrobust'. Returns ------- AutoRegResults Estimation results. See Also -------- statsmodels.regression.linear_model.OLS Ordinary Least Squares estimation. statsmodels.regression.linear_model.RegressionResults See ``get_robustcov_results`` for a detailed list of available covariance estimators and options. Notes ----- Use ``OLS`` to estimate model parameters and to estimate parameter covariance.
""" # TODO: Determine correction for degree-of-freedom # Special case parameterless model if self._x.shape[1] == 0: return AutoRegResultsWrapper( AutoRegResults(self, np.empty(0), np.empty((0, 0))) ) ols_mod = OLS(self._y, self._x) ols_res = ols_mod.fit( cov_type=cov_type, cov_kwds=cov_kwds, use_t=use_t ) cov_params = ols_res.cov_params() use_t = ols_res.use_t if cov_type == "nonrobust" and not use_t: nobs = self._y.shape[0] k = self._x.shape[1] scale = nobs / (nobs - k) cov_params /= scale res = AutoRegResults( self, ols_res.params, cov_params, ols_res.normalized_cov_params, use_t=use_t, ) return AutoRegResultsWrapper(res)
Estimate the model parameters. Parameters ---------- cov_type : str The covariance estimator to use. The most common choices are listed below. Supports all covariance estimators that are available in ``OLS.fit``. * 'nonrobust' - The classic OLS covariance estimator that assumes homoskedasticity. * 'HC0', 'HC1', 'HC2', 'HC3' - Variants of White's (or Eicker-Huber-White) covariance estimator. `HC0` is the standard implementation. The others make corrections to improve the finite sample performance of the heteroskedasticity robust covariance estimator. * 'HAC' - Heteroskedasticity-autocorrelation robust covariance estimation. Supports cov_kwds. - `maxlags` integer (required) : number of lags to use. - `kernel` callable or str (optional) : kernel currently available kernels are ['bartlett', 'uniform'], default is Bartlett. - `use_correction` bool (optional) : If true, use small sample correction. cov_kwds : dict, optional A dictionary of keyword arguments to pass to the covariance estimator. `nonrobust` and `HC#` do not support cov_kwds. use_t : bool, optional A flag indicating that inference should use the Student's t distribution that accounts for model degree of freedom. If False, uses the normal distribution. If None, defers the choice to the cov_type. It also removes degree of freedom corrections from the covariance estimator when cov_type is 'nonrobust'. Returns ------- AutoRegResults Estimation results. See Also -------- statsmodels.regression.linear_model.OLS Ordinary Least Squares estimation. statsmodels.regression.linear_model.RegressionResults See ``get_robustcov_results`` for a detailed list of available covariance estimators and options. Notes ----- Use ``OLS`` to estimate model parameters and to estimate parameter covariance.
fit
python
statsmodels/statsmodels
statsmodels/tsa/ar_model.py
https://github.com/statsmodels/statsmodels/blob/master/statsmodels/tsa/ar_model.py
BSD-3-Clause
def loglike(self, params: ArrayLike) -> float: """ Log-likelihood of model. Parameters ---------- params : ndarray The model parameters used to compute the log-likelihood. Returns ------- float The log-likelihood value. """ nobs = self.nobs resid = self._resid(params) ssr = resid @ resid llf = -(nobs / 2) * (np.log(2 * np.pi) + np.log(ssr / nobs) + 1) return llf
Log-likelihood of model. Parameters ---------- params : ndarray The model parameters used to compute the log-likelihood. Returns ------- float The log-likelihood value.
loglike
python
statsmodels/statsmodels
statsmodels/tsa/ar_model.py
https://github.com/statsmodels/statsmodels/blob/master/statsmodels/tsa/ar_model.py
BSD-3-Clause
def score(self, params: ArrayLike) -> np.ndarray: """ Score vector of model. The gradient of logL with respect to each parameter. Parameters ---------- params : ndarray The parameters to use when evaluating the Hessian. Returns ------- ndarray The score vector evaluated at the parameters. """ resid = self._resid(params) return self._x.T @ resid
Score vector of model. The gradient of logL with respect to each parameter. Parameters ---------- params : ndarray The parameters to use when evaluating the Hessian. Returns ------- ndarray The score vector evaluated at the parameters.
score
python
statsmodels/statsmodels
statsmodels/tsa/ar_model.py
https://github.com/statsmodels/statsmodels/blob/master/statsmodels/tsa/ar_model.py
BSD-3-Clause
def information(self, params: ArrayLike) -> np.ndarray: """ Fisher information matrix of model. Returns -1 * Hessian of the log-likelihood evaluated at params. Parameters ---------- params : ndarray The model parameters. Returns ------- ndarray The information matrix. """ resid = self._resid(params) sigma2 = resid @ resid / self.nobs return (self._x.T @ self._x) * (1 / sigma2)
Fisher information matrix of model. Returns -1 * Hessian of the log-likelihood evaluated at params. Parameters ---------- params : ndarray The model parameters. Returns ------- ndarray The information matrix.
information
python
statsmodels/statsmodels
statsmodels/tsa/ar_model.py
https://github.com/statsmodels/statsmodels/blob/master/statsmodels/tsa/ar_model.py
BSD-3-Clause
def hessian(self, params: ArrayLike) -> np.ndarray: """ The Hessian matrix of the model. Parameters ---------- params : ndarray The parameters to use when evaluating the Hessian. Returns ------- ndarray The hessian evaluated at the parameters. """ return -self.information(params)
The Hessian matrix of the model. Parameters ---------- params : ndarray The parameters to use when evaluating the Hessian. Returns ------- ndarray The hessian evaluated at the parameters.
hessian
python
statsmodels/statsmodels
statsmodels/tsa/ar_model.py
https://github.com/statsmodels/statsmodels/blob/master/statsmodels/tsa/ar_model.py
BSD-3-Clause
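The information method above computes X'X / sigma2 with sigma2 estimated from the residuals, and hessian returns its negative. A numeric sketch of that relation on made-up data:

```python
import numpy as np

# For the Gaussian regression log-likelihood, the Fisher information is
# X'X / sigma^2 and the Hessian is -1 times it (as in the methods above).
rng = np.random.default_rng(0)
X = rng.standard_normal((50, 3))        # illustrative regressor matrix
resid = rng.standard_normal(50)         # illustrative residuals
sigma2 = resid @ resid / 50             # ML variance estimate
information = (X.T @ X) / sigma2
hessian = -information
```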
def _dynamic_predict( self, params: ArrayLike, start: int, end: int, dynamic: int, num_oos: int, exog: Float64Array | None, exog_oos: Float64Array | None, ) -> pd.Series: """Path for dynamic predictions; parameters as in _static_predict, plus dynamic : int, the offset relative to start at which dynamic prediction begins.""" reg = [] hold_back = self._hold_back adj = 0 if start < hold_back: # Adjust start and dynamic adj = hold_back - start start += adj # New offset shifts, but must remain non-negative dynamic = max(dynamic - adj, 0) if (start - hold_back) <= self.nobs: # _x is missing hold_back observations, which is why # it is shifted by this amount is_loc = slice(start - hold_back, end + 1 - hold_back) x = self._x[is_loc] if exog is not None: x = x.copy() # Replace final columns x[:, -exog.shape[1] :] = exog[start : end + 1] reg.append(x) if num_oos > 0: reg.append(self._setup_oos_forecast(num_oos, exog_oos)) _reg = np.vstack(reg) det_col_idx = self._x.shape[1] - len(self._lags) det_col_idx -= 0 if self.exog is None else self.exog.shape[1] # Simple 1-step static forecasts for dynamic observations forecasts = np.empty(_reg.shape[0]) forecasts[:dynamic] = _reg[:dynamic] @ params for h in range(dynamic, _reg.shape[0]): # Fill in regressor matrix for j, lag in enumerate(self._lags): fcast_loc = h - lag if fcast_loc >= dynamic: val = forecasts[fcast_loc] else: # If before the start of the forecasts, use actual values val = self.endog[fcast_loc + start] _reg[h, det_col_idx + j] = val forecasts[h] = np.squeeze(_reg[h : h + 1] @ params) return self._wrap_prediction(forecasts, start, end + 1 + num_oos, adj)
Path for dynamic predictions; parameters as in _static_predict, plus dynamic : int, the offset relative to start at which dynamic prediction begins.
_dynamic_predict
python
statsmodels/statsmodels
statsmodels/tsa/ar_model.py
https://github.com/statsmodels/statsmodels/blob/master/statsmodels/tsa/ar_model.py
BSD-3-Clause
def _static_predict( self, params: Float64Array, start: int, end: int, num_oos: int, exog: Float64Array | None, exog_oos: Float64Array | None, ) -> pd.Series: """ Path for static predictions Parameters ---------- params : ndarray The model parameters start : int Index of first observation end : int Index of last in-sample observation. Inclusive, so start:end+1 in slice notation. num_oos : int Number of out-of-sample observations, so that the returned size is num_oos + (end - start + 1). exog : {ndarray, DataFrame} Array containing replacement exog values exog_oos : {ndarray, DataFrame} Containing forecast exog values """ hold_back = self._hold_back nobs = self.endog.shape[0] x = np.empty((0, self._x.shape[1])) # Adjust start to reflect observations lost adj = max(0, hold_back - start) start += adj if start <= nobs: # Use existing regressors is_loc = slice(start - hold_back, end + 1 - hold_back) x = self._x[is_loc] if exog is not None: exog_a = np.asarray(exog) x = x.copy() # Replace final columns x[:, -exog_a.shape[1] :] = exog_a[start : end + 1] in_sample = x @ params if num_oos == 0: # No out of sample return self._wrap_prediction(in_sample, start, end + 1, adj) out_of_sample = self._static_oos_predict(params, num_oos, exog_oos) prediction = np.hstack((in_sample, out_of_sample)) return self._wrap_prediction(prediction, start, end + 1 + num_oos, adj)
Path for static predictions Parameters ---------- params : ndarray The model parameters start : int Index of first observation end : int Index of last in-sample observation. Inclusive, so start:end+1 in slice notation. num_oos : int Number of out-of-sample observations, so that the returned size is num_oos + (end - start + 1). exog : {ndarray, DataFrame} Array containing replacement exog values exog_oos : {ndarray, DataFrame} Containing forecast exog values
_static_predict
python
statsmodels/statsmodels
statsmodels/tsa/ar_model.py
https://github.com/statsmodels/statsmodels/blob/master/statsmodels/tsa/ar_model.py
BSD-3-Clause
def predict( self, params: ArrayLike, start: int | str | datetime.datetime | pd.Timestamp | None = None, end: int | str | datetime.datetime | pd.Timestamp | None = None, dynamic: bool | int = False, exog: ArrayLike2D | None = None, exog_oos: ArrayLike2D | None = None, ) -> pd.Series: """ In-sample prediction and out-of-sample forecasting. Parameters ---------- params : array_like The fitted model parameters. start : int, str, or datetime, optional Zero-indexed observation number at which to start forecasting, i.e., the first forecast is start. Can also be a date string to parse or a datetime type. Default is the zeroth observation. end : int, str, or datetime, optional Zero-indexed observation number at which to end forecasting, i.e., the last forecast is end. Can also be a date string to parse or a datetime type. However, if the dates index does not have a fixed frequency, end must be an integer index if you want out-of-sample prediction. Default is the last observation in the sample. Unlike standard python slices, end is inclusive so that all the predictions [start, start+1, ..., end-1, end] are returned. dynamic : {bool, int, str, datetime, Timestamp}, optional Integer offset relative to `start` at which to begin dynamic prediction. Prior to this observation, true endogenous values will be used for prediction; starting with this observation and continuing through the end of prediction, forecasted endogenous values will be used instead. Datetime-like objects are not interpreted as offsets. They are instead used to find the index location of `dynamic` which is then used to compute the offset. exog : array_like A replacement exogenous array. Must have the same shape as the exogenous data array used when the model was created. exog_oos : array_like An array containing out-of-sample values of the exogenous variable.
Must have the same number of columns as the exog used when the model was created, and at least as many rows as the number of out-of-sample forecasts. Returns ------- predictions : {ndarray, Series} Array of in-sample predictions and / or out-of-sample forecasts. """ params, exog, exog_oos, start, end, num_oos = self._prepare_prediction( params, exog, exog_oos, start, end ) if self.exog is None and (exog is not None or exog_oos is not None): raise ValueError( "exog and exog_oos cannot be used when the model " "does not contain exogenous regressors." ) elif self.exog is not None: if exog is not None and exog.shape != self.exog.shape: msg = ( "The shape of exog {0} must match the shape of the " "exog variable used to create the model {1}." ) raise ValueError(msg.format(exog.shape, self.exog.shape)) if ( exog_oos is not None and exog_oos.shape[1] != self.exog.shape[1] ): msg = ( "The number of columns in exog_oos ({0}) must match " "the number of columns in the exog variable used to " "create the model ({1})." ) raise ValueError( msg.format(exog_oos.shape[1], self.exog.shape[1]) ) if num_oos > 0 and exog_oos is None: raise ValueError( "exog_oos must be provided when producing " "out-of-sample forecasts." ) elif exog_oos is not None and num_oos > exog_oos.shape[0]: msg = ( "start and end indicate that {0} out-of-sample " "predictions must be computed. exog_oos has {1} rows " "but must have at least {0}." ) raise ValueError(msg.format(num_oos, exog_oos.shape[0])) if (isinstance(dynamic, bool) and not dynamic) or self._maxlag == 0: # If model has no lags, static and dynamic are identical return self._static_predict( params, start, end, num_oos, exog, exog_oos ) dynamic = self._parse_dynamic(dynamic, start) return self._dynamic_predict( params, start, end, dynamic, num_oos, exog, exog_oos )
In-sample prediction and out-of-sample forecasting. Parameters ---------- params : array_like The fitted model parameters. start : int, str, or datetime, optional Zero-indexed observation number at which to start forecasting, i.e., the first forecast is start. Can also be a date string to parse or a datetime type. Default is the zeroth observation. end : int, str, or datetime, optional Zero-indexed observation number at which to end forecasting, i.e., the last forecast is end. Can also be a date string to parse or a datetime type. However, if the dates index does not have a fixed frequency, end must be an integer index if you want out-of-sample prediction. Default is the last observation in the sample. Unlike standard python slices, end is inclusive so that all the predictions [start, start+1, ..., end-1, end] are returned. dynamic : {bool, int, str, datetime, Timestamp}, optional Integer offset relative to `start` at which to begin dynamic prediction. Prior to this observation, true endogenous values will be used for prediction; starting with this observation and continuing through the end of prediction, forecasted endogenous values will be used instead. Datetime-like objects are not interpreted as offsets. They are instead used to find the index location of `dynamic` which is then used to compute the offset. exog : array_like A replacement exogenous array. Must have the same shape as the exogenous data array used when the model was created. exog_oos : array_like An array containing out-of-sample values of the exogenous variable. Must have the same number of columns as the exog used when the model was created, and at least as many rows as the number of out-of-sample forecasts. Returns ------- predictions : {ndarray, Series} Array of in-sample predictions and / or out-of-sample forecasts.
predict
python
statsmodels/statsmodels
statsmodels/tsa/ar_model.py
https://github.com/statsmodels/statsmodels/blob/master/statsmodels/tsa/ar_model.py
BSD-3-Clause
def initialize(self, model, params, **kwargs): """ Initialize (possibly re-initialize) a Results instance. Parameters ---------- model : Model The model instance. params : ndarray The model parameters. **kwargs Any additional keyword arguments required to initialize the model. """ self._params = params self.model = model
Initialize (possibly re-initialize) a Results instance. Parameters ---------- model : Model The model instance. params : ndarray The model parameters. **kwargs Any additional keyword arguments required to initialize the model.
initialize
python
statsmodels/statsmodels
statsmodels/tsa/ar_model.py
https://github.com/statsmodels/statsmodels/blob/master/statsmodels/tsa/ar_model.py
BSD-3-Clause
def ar_lags(self): """The autoregressive lags included in the model""" return self._ar_lags
The autoregressive lags included in the model
ar_lags
python
statsmodels/statsmodels
statsmodels/tsa/ar_model.py
https://github.com/statsmodels/statsmodels/blob/master/statsmodels/tsa/ar_model.py
BSD-3-Clause
def params(self): """The estimated parameters.""" return self._params
The estimated parameters.
params
python
statsmodels/statsmodels
statsmodels/tsa/ar_model.py
https://github.com/statsmodels/statsmodels/blob/master/statsmodels/tsa/ar_model.py
BSD-3-Clause
def df_model(self): """The degrees of freedom consumed by the model.""" return self._df_model
The degrees of freedom consumed by the model.
df_model
python
statsmodels/statsmodels
statsmodels/tsa/ar_model.py
https://github.com/statsmodels/statsmodels/blob/master/statsmodels/tsa/ar_model.py
BSD-3-Clause
def df_resid(self): """The remaining degrees of freedom in the residuals.""" return self.nobs - self._df_model
The remaining degrees of freedom in the residuals.
df_resid
python
statsmodels/statsmodels
statsmodels/tsa/ar_model.py
https://github.com/statsmodels/statsmodels/blob/master/statsmodels/tsa/ar_model.py
BSD-3-Clause
def nobs(self): """ The number of observations after adjusting for losses due to lags. """ return self._nobs
The number of observations after adjusting for losses due to lags.
nobs
python
statsmodels/statsmodels
statsmodels/tsa/ar_model.py
https://github.com/statsmodels/statsmodels/blob/master/statsmodels/tsa/ar_model.py
BSD-3-Clause
def bse(self): # allow user to specify? """ The standard errors of the estimated parameters. If `method` is 'cmle', then the standard errors that are returned are the OLS standard errors of the coefficients. If the `method` is 'mle' then they are computed using the numerical Hessian. """ return np.sqrt(np.diag(self.cov_params()))
The standard errors of the estimated parameters. If `method` is 'cmle', then the standard errors that are returned are the OLS standard errors of the coefficients. If the `method` is 'mle' then they are computed using the numerical Hessian.
bse
python
statsmodels/statsmodels
statsmodels/tsa/ar_model.py
https://github.com/statsmodels/statsmodels/blob/master/statsmodels/tsa/ar_model.py
BSD-3-Clause
def resid(self): """ The residuals of the model. """ model = self.model endog = model.endog.squeeze() return endog[self._hold_back :] - self.fittedvalues
The residuals of the model.
resid
python
statsmodels/statsmodels
statsmodels/tsa/ar_model.py
https://github.com/statsmodels/statsmodels/blob/master/statsmodels/tsa/ar_model.py
BSD-3-Clause
def _lag_repr(self): """Returns poly repr of an AR, (1 -phi1 L -phi2 L^2-...)""" ar_lags = self._ar_lags if self._ar_lags is not None else [] k_ar = len(ar_lags) ar_params = np.zeros(self._max_lag + 1) ar_params[0] = 1 df_model = self._df_model exog = self.model.exog k_exog = exog.shape[1] if exog is not None else 0 params = self._params[df_model - k_ar - k_exog : df_model - k_exog] for i, lag in enumerate(ar_lags): ar_params[lag] = -params[i] return ar_params
Returns poly repr of an AR, (1 -phi1 L -phi2 L^2-...)
_lag_repr
python
statsmodels/statsmodels
statsmodels/tsa/ar_model.py
https://github.com/statsmodels/statsmodels/blob/master/statsmodels/tsa/ar_model.py
BSD-3-Clause
def roots(self): """ The roots of the AR process. The roots are the solution to (1 - arparams[0]*z - arparams[1]*z**2 -...- arparams[p-1]*z**k_ar) = 0. Stability requires that the roots in modulus lie outside the unit circle. """ # TODO: Specific to AR lag_repr = self._lag_repr() if lag_repr.shape[0] == 1: return np.empty(0) return np.roots(lag_repr) ** -1
The roots of the AR process. The roots are the solution to (1 - arparams[0]*z - arparams[1]*z**2 -...- arparams[p-1]*z**k_ar) = 0. Stability requires that the roots in modulus lie outside the unit circle.
roots
python
statsmodels/statsmodels
statsmodels/tsa/ar_model.py
https://github.com/statsmodels/statsmodels/blob/master/statsmodels/tsa/ar_model.py
BSD-3-Clause
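The roots property above computes np.roots(lag_repr) ** -1, which yields the roots of the lag polynomial. For an AR(1) with coefficient 0.5 the polynomial is 1 - 0.5 z; its root 2.0 lies outside the unit circle, consistent with stationarity:

```python
import numpy as np

# lag_repr for an AR(1) with phi = 0.5, as produced by _lag_repr above:
# [1, -phi] encodes the polynomial 1 - 0.5 L.
lag_poly = np.array([1.0, -0.5])
roots = np.roots(lag_poly) ** -1   # roots of the lag polynomial
```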
def fittedvalues(self): """ The in-sample predicted values of the fitted AR model. The `k_ar` initial values are computed via the Kalman Filter if the model is fit by `mle`. """ return self.model.predict(self.params)[self._hold_back :]
The in-sample predicted values of the fitted AR model. The `k_ar` initial values are computed via the Kalman Filter if the model is fit by `mle`.
fittedvalues
python
statsmodels/statsmodels
statsmodels/tsa/ar_model.py
https://github.com/statsmodels/statsmodels/blob/master/statsmodels/tsa/ar_model.py
BSD-3-Clause