code | docstring | func_name | language | repo | path | url | license |
---|---|---|---|---|---|---|---|
def validate_matrix_shape(name, shape, nrows, ncols, nobs):
"""
Validate the shape of a possibly time-varying matrix, or raise an exception
Parameters
----------
name : str
The name of the matrix being validated (used in exception messages)
shape : array_like
The shape of the matrix to be validated. May be of size 2 or (if
the matrix is time-varying) 3.
nrows : int
The expected number of rows.
ncols : int
The expected number of columns.
nobs : int
The number of observations (used to validate the last dimension of a
time-varying matrix)
Raises
------
ValueError
If the matrix is not of the desired shape.
"""
ndim = len(shape)
# Enforce dimension
if ndim not in [2, 3]:
raise ValueError('Invalid value for %s matrix. Requires a'
' 2- or 3-dimensional array, got %d dimensions' %
(name, ndim))
# Enforce the shape of the matrix
if not shape[0] == nrows:
raise ValueError('Invalid dimensions for %s matrix: requires %d'
' rows, got %d' % (name, nrows, shape[0]))
if not shape[1] == ncols:
raise ValueError('Invalid dimensions for %s matrix: requires %d'
' columns, got %d' % (name, ncols, shape[1]))
# If we do not yet know `nobs`, do not allow time-varying arrays
if nobs is None and not (ndim == 2 or shape[-1] == 1):
raise ValueError('Invalid dimensions for %s matrix: time-varying'
' matrices cannot be given unless `nobs` is specified'
' (implicitly when a dataset is bound or else set'
' explicitly)' % name)
# Enforce time-varying array size
if ndim == 3 and nobs is not None and shape[-1] not in [1, nobs]:
raise ValueError('Invalid dimensions for time-varying %s'
' matrix. Requires shape (*,*,%d), got %s' %
(name, nobs, str(shape))) | Validate the shape of a possibly time-varying matrix, or raise an exception
Parameters
----------
name : str
The name of the matrix being validated (used in exception messages)
shape : array_like
The shape of the matrix to be validated. May be of size 2 or (if
the matrix is time-varying) 3.
nrows : int
The expected number of rows.
ncols : int
The expected number of columns.
nobs : int
The number of observations (used to validate the last dimension of a
time-varying matrix)
Raises
------
ValueError
If the matrix is not of the desired shape. | validate_matrix_shape | python | statsmodels/statsmodels | statsmodels/tsa/statespace/tools.py | https://github.com/statsmodels/statsmodels/blob/master/statsmodels/tsa/statespace/tools.py | BSD-3-Clause |
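For illustration, a condensed pure-Python mirror of `validate_matrix_shape` (a standalone sketch; the real helper lives in `statsmodels.tsa.statespace.tools`) shows how a time-varying system matrix is accepted or rejected:

```python
def validate_matrix_shape(name, shape, nrows, ncols, nobs):
    # Condensed standalone mirror of the helper above, for illustration only.
    ndim = len(shape)
    if ndim not in (2, 3):
        raise ValueError('Invalid value for %s matrix: requires a 2- or'
                         ' 3-dimensional array, got %d dimensions'
                         % (name, ndim))
    if shape[0] != nrows or shape[1] != ncols:
        raise ValueError('Invalid dimensions for %s matrix: requires'
                         ' (%d, %d, ...), got %s'
                         % (name, nrows, ncols, str(shape)))
    if ndim == 3 and nobs is None and shape[-1] != 1:
        raise ValueError('%s: time-varying matrices require `nobs`' % name)
    if ndim == 3 and nobs is not None and shape[-1] not in (1, nobs):
        raise ValueError('%s: last dimension must be 1 or %d' % (name, nobs))

# A (2, 2, 5) time-varying matrix is valid when nobs=5 ...
validate_matrix_shape('design', (2, 2, 5), 2, 2, 5)
# ... but a wrong column count is rejected.
try:
    validate_matrix_shape('design', (2, 3, 5), 2, 2, 5)
except ValueError as exc:
    print('rejected:', exc)
```

Note that a time-varying shape such as `(2, 2, 5)` is rejected when `nobs` is still `None`, which is exactly the "dataset not yet bound" case the error message describes.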
def validate_vector_shape(name, shape, nrows, nobs):
"""
Validate the shape of a possibly time-varying vector, or raise an exception
Parameters
----------
name : str
The name of the vector being validated (used in exception messages)
shape : array_like
The shape of the vector to be validated. May be of size 1 or (if
the vector is time-varying) 2.
nrows : int
The expected number of rows (elements of the vector).
nobs : int
The number of observations (used to validate the last dimension of a
time-varying vector)
Raises
------
ValueError
If the vector is not of the desired shape.
"""
ndim = len(shape)
# Enforce dimension
if ndim not in [1, 2]:
raise ValueError('Invalid value for %s vector. Requires a'
' 1- or 2-dimensional array, got %d dimensions' %
(name, ndim))
# Enforce the shape of the vector
if not shape[0] == nrows:
raise ValueError('Invalid dimensions for %s vector: requires %d'
' rows, got %d' % (name, nrows, shape[0]))
# If we do not yet know `nobs`, do not allow time-varying arrays
if nobs is None and not (ndim == 1 or shape[-1] == 1):
raise ValueError('Invalid dimensions for %s vector: time-varying'
' vectors cannot be given unless `nobs` is specified'
' (implicitly when a dataset is bound or else set'
' explicitly)' % name)
# Enforce time-varying array size
if ndim == 2 and shape[1] not in [1, nobs]:
raise ValueError('Invalid dimensions for time-varying %s'
' vector. Requires shape (*,%d), got %s' %
(name, nobs, str(shape))) | Validate the shape of a possibly time-varying vector, or raise an exception
Parameters
----------
name : str
The name of the vector being validated (used in exception messages)
shape : array_like
The shape of the vector to be validated. May be of size 1 or (if
the vector is time-varying) 2.
nrows : int
The expected number of rows (elements of the vector).
nobs : int
The number of observations (used to validate the last dimension of a
time-varying vector)
Raises
------
ValueError
If the vector is not of the desired shape. | validate_vector_shape | python | statsmodels/statsmodels | statsmodels/tsa/statespace/tools.py | https://github.com/statsmodels/statsmodels/blob/master/statsmodels/tsa/statespace/tools.py | BSD-3-Clause |
def reorder_missing_matrix(matrix, missing, reorder_rows=False,
reorder_cols=False, is_diagonal=False,
inplace=False, prefix=None):
"""
Reorder the rows or columns of a time-varying matrix where all non-missing
values are in the upper left corner of the matrix.
Parameters
----------
matrix : array_like
The matrix to be reordered. Must have shape (n, m, nobs).
missing : array_like of bool
The vector of missing indices. Must have shape (k, nobs) where `k = n`
if `reorder_rows is True` and `k = m` if `reorder_cols is True`.
reorder_rows : bool, optional
Whether or not the rows of the matrix should be re-ordered. Default
is False.
reorder_cols : bool, optional
Whether or not the columns of the matrix should be re-ordered. Default
is False.
is_diagonal : bool, optional
Whether or not the matrix is diagonal. If this is True, must also have
`n = m`. Default is False.
inplace : bool, optional
Whether or not to reorder the matrix in-place.
prefix : {'s', 'd', 'c', 'z'}, optional
The Fortran prefix of the vector. Default is to automatically detect
the dtype. This parameter should only be used with caution.
Returns
-------
reordered_matrix : array_like
The reordered matrix.
"""
if prefix is None:
prefix = find_best_blas_type((matrix,))[0]
reorder = prefix_reorder_missing_matrix_map[prefix]
if not inplace:
matrix = np.copy(matrix, order='F')
reorder(matrix, np.asfortranarray(missing), reorder_rows, reorder_cols,
is_diagonal)
return matrix | Reorder the rows or columns of a time-varying matrix where all non-missing
values are in the upper left corner of the matrix.
Parameters
----------
matrix : array_like
The matrix to be reordered. Must have shape (n, m, nobs).
missing : array_like of bool
The vector of missing indices. Must have shape (k, nobs) where `k = n`
if `reorder_rows is True` and `k = m` if `reorder_cols is True`.
reorder_rows : bool, optional
Whether or not the rows of the matrix should be re-ordered. Default
is False.
reorder_cols : bool, optional
Whether or not the columns of the matrix should be re-ordered. Default
is False.
is_diagonal : bool, optional
Whether or not the matrix is diagonal. If this is True, must also have
`n = m`. Default is False.
inplace : bool, optional
Whether or not to reorder the matrix in-place.
prefix : {'s', 'd', 'c', 'z'}, optional
The Fortran prefix of the vector. Default is to automatically detect
the dtype. This parameter should only be used with caution.
Returns
-------
reordered_matrix : array_like
The reordered matrix. | reorder_missing_matrix | python | statsmodels/statsmodels | statsmodels/tsa/statespace/tools.py | https://github.com/statsmodels/statsmodels/blob/master/statsmodels/tsa/statespace/tools.py | BSD-3-Clause |
def reorder_missing_vector(vector, missing, inplace=False, prefix=None):
"""
Reorder the elements of a time-varying vector where all non-missing
values are in the first elements of the vector.
Parameters
----------
vector : array_like
The vector to be reordered. Must have shape (n, nobs).
missing : array_like of bool
The vector of missing indices. Must have shape (n, nobs).
inplace : bool, optional
Whether or not to reorder the matrix in-place. Default is False.
prefix : {'s', 'd', 'c', 'z'}, optional
The Fortran prefix of the vector. Default is to automatically detect
the dtype. This parameter should only be used with caution.
Returns
-------
reordered_vector : array_like
The reordered vector.
"""
if prefix is None:
prefix = find_best_blas_type((vector,))[0]
reorder = prefix_reorder_missing_vector_map[prefix]
if not inplace:
vector = np.copy(vector, order='F')
reorder(vector, np.asfortranarray(missing))
return vector | Reorder the elements of a time-varying vector where all non-missing
values are in the first elements of the vector.
Parameters
----------
vector : array_like
The vector to be reordered. Must have shape (n, nobs).
missing : array_like of bool
The vector of missing indices. Must have shape (n, nobs).
inplace : bool, optional
Whether or not to reorder the matrix in-place. Default is False.
prefix : {'s', 'd', 'c', 'z'}, optional
The Fortran prefix of the vector. Default is to automatically detect
the dtype. This parameter should only be used with caution.
Returns
-------
reordered_vector : array_like
The reordered vector. | reorder_missing_vector | python | statsmodels/statsmodels | statsmodels/tsa/statespace/tools.py | https://github.com/statsmodels/statsmodels/blob/master/statsmodels/tsa/statespace/tools.py | BSD-3-Clause |
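The behavior of `reorder_missing_vector` can be sketched in pure NumPy (the name `reorder_missing_vector_py` is hypothetical; the real routine dispatches to a compiled BLAS-prefixed kernel via `prefix_reorder_missing_vector_map`):

```python
import numpy as np

def reorder_missing_vector_py(vector, missing):
    # Pure-NumPy sketch: for each time period, move the non-missing
    # entries of the column to the leading positions.
    out = vector.copy()
    for t in range(vector.shape[1]):
        keep = ~missing[:, t].astype(bool)
        k = keep.sum()
        out[:k, t] = vector[keep, t]
    return out

vector = np.array([[1.], [2.], [3.]])
missing = np.array([[1], [0], [0]])  # first element missing in period 0
print(reorder_missing_vector_py(vector, missing)[:2, 0])  # -> [2. 3.]
```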
def copy_missing_matrix(A, B, missing, missing_rows=False, missing_cols=False,
is_diagonal=False, inplace=False, prefix=None):
"""
Copy the rows or columns of a time-varying matrix where all non-missing
values are in the upper left corner of the matrix.
Parameters
----------
A : array_like
The matrix from which to copy. Must have shape (n, m, nobs) or
(n, m, 1).
B : array_like
The matrix to copy to. Must have shape (n, m, nobs).
missing : array_like of bool
The vector of missing indices. Must have shape (k, nobs) where `k = n`
if `missing_rows is True` and `k = m` if `missing_cols is True`.
missing_rows : bool, optional
Whether or not the rows of the matrix are a missing dimension. Default
is False.
missing_cols : bool, optional
Whether or not the columns of the matrix are a missing dimension.
Default is False.
is_diagonal : bool, optional
Whether or not the matrix is diagonal. If this is True, must also have
`n = m`. Default is False.
inplace : bool, optional
Whether or not to copy to B in-place. Default is False.
prefix : {'s', 'd', 'c', 'z'}, optional
The Fortran prefix of the vector. Default is to automatically detect
the dtype. This parameter should only be used with caution.
Returns
-------
copied_matrix : array_like
The matrix B with the non-missing submatrix of A copied onto it.
"""
if prefix is None:
prefix = find_best_blas_type((A, B))[0]
copy = prefix_copy_missing_matrix_map[prefix]
if not inplace:
B = np.copy(B, order='F')
# We may have been given an F-contiguous memoryview; in that case, we do
# not want to alter it or convert it to a numpy array
try:
if not A.is_f_contig():
raise ValueError()
except (AttributeError, ValueError):
A = np.asfortranarray(A)
copy(A, B, np.asfortranarray(missing), missing_rows, missing_cols,
is_diagonal)
return B | Copy the rows or columns of a time-varying matrix where all non-missing
values are in the upper left corner of the matrix.
Parameters
----------
A : array_like
The matrix from which to copy. Must have shape (n, m, nobs) or
(n, m, 1).
B : array_like
The matrix to copy to. Must have shape (n, m, nobs).
missing : array_like of bool
The vector of missing indices. Must have shape (k, nobs) where `k = n`
if `missing_rows is True` and `k = m` if `missing_cols is True`.
missing_rows : bool, optional
Whether or not the rows of the matrix are a missing dimension. Default
is False.
missing_cols : bool, optional
Whether or not the columns of the matrix are a missing dimension.
Default is False.
is_diagonal : bool, optional
Whether or not the matrix is diagonal. If this is True, must also have
`n = m`. Default is False.
inplace : bool, optional
Whether or not to copy to B in-place. Default is False.
prefix : {'s', 'd', 'c', 'z'}, optional
The Fortran prefix of the vector. Default is to automatically detect
the dtype. This parameter should only be used with caution.
Returns
-------
copied_matrix : array_like
The matrix B with the non-missing submatrix of A copied onto it. | copy_missing_matrix | python | statsmodels/statsmodels | statsmodels/tsa/statespace/tools.py | https://github.com/statsmodels/statsmodels/blob/master/statsmodels/tsa/statespace/tools.py | BSD-3-Clause |
def copy_missing_vector(a, b, missing, inplace=False, prefix=None):
"""
Copy the elements of a time-varying vector where all non-missing
values are in the first elements of the vector.
Parameters
----------
a : array_like
The vector from which to copy. Must have shape (n, nobs) or (n, 1).
b : array_like
The vector to copy to. Must have shape (n, nobs).
missing : array_like of bool
The vector of missing indices. Must have shape (n, nobs).
inplace : bool, optional
Whether or not to copy to b in-place. Default is False.
prefix : {'s', 'd', 'c', 'z'}, optional
The Fortran prefix of the vector. Default is to automatically detect
the dtype. This parameter should only be used with caution.
Returns
-------
copied_vector : array_like
The vector b with the non-missing subvector of a copied onto it.
"""
if prefix is None:
prefix = find_best_blas_type((a, b))[0]
copy = prefix_copy_missing_vector_map[prefix]
if not inplace:
b = np.copy(b, order='F')
# We may have been given an F-contiguous memoryview; in that case, we do
# not want to alter it or convert it to a numpy array
try:
if not a.is_f_contig():
raise ValueError()
except (AttributeError, ValueError):
a = np.asfortranarray(a)
copy(a, b, np.asfortranarray(missing))
return b | Copy the elements of a time-varying vector where all non-missing
values are in the first elements of the vector.
Parameters
----------
a : array_like
The vector from which to copy. Must have shape (n, nobs) or (n, 1).
b : array_like
The vector to copy to. Must have shape (n, nobs).
missing : array_like of bool
The vector of missing indices. Must have shape (n, nobs).
inplace : bool, optional
Whether or not to copy to b in-place. Default is False.
prefix : {'s', 'd', 'c', 'z'}, optional
The Fortran prefix of the vector. Default is to automatically detect
the dtype. This parameter should only be used with caution.
Returns
-------
copied_vector : array_like
The vector b with the non-missing subvector of a copied onto it. | copy_missing_vector | python | statsmodels/statsmodels | statsmodels/tsa/statespace/tools.py | https://github.com/statsmodels/statsmodels/blob/master/statsmodels/tsa/statespace/tools.py | BSD-3-Clause |
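A pure-NumPy sketch of `copy_missing_vector` (the name `copy_missing_vector_py` is hypothetical; the real routine uses a compiled kernel) illustrates the broadcast over a time-invariant `a`:

```python
import numpy as np

def copy_missing_vector_py(a, b, missing):
    # Per period, copy the non-missing subvector of `a` into the leading
    # elements of the corresponding column of `b`.
    out = b.copy()
    for t in range(b.shape[1]):
        ta = 0 if a.shape[1] == 1 else t   # a may be (n, 1) or (n, nobs)
        keep = ~missing[:, t].astype(bool)
        out[:keep.sum(), t] = a[keep, ta]
    return out

a = np.array([[1.], [2.], [3.]])           # time-invariant source, shape (3, 1)
b = np.zeros((3, 2))
missing = np.array([[1, 0], [0, 0], [0, 1]])
print(copy_missing_vector_py(a, b, missing))
```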
def copy_index_matrix(A, B, index, index_rows=False, index_cols=False,
is_diagonal=False, inplace=False, prefix=None):
"""
Copy the rows or columns of a time-varying matrix where all non-index
values are in the upper left corner of the matrix.
Parameters
----------
A : array_like
The matrix from which to copy. Must have shape (n, m, nobs) or
(n, m, 1).
B : array_like
The matrix to copy to. Must have shape (n, m, nobs).
index : array_like of bool
The vector of index indices. Must have shape (k, nobs) where `k = n`
if `index_rows is True` and `k = m` if `index_cols is True`.
index_rows : bool, optional
Whether or not the rows of the matrix are an index dimension. Default
is False.
index_cols : bool, optional
Whether or not the columns of the matrix are an index dimension.
Default is False.
is_diagonal : bool, optional
Whether or not the matrix is diagonal. If this is True, must also have
`n = m`. Default is False.
inplace : bool, optional
Whether or not to copy to B in-place. Default is False.
prefix : {'s', 'd', 'c', 'z'}, optional
The Fortran prefix of the vector. Default is to automatically detect
the dtype. This parameter should only be used with caution.
Returns
-------
copied_matrix : array_like
The matrix B with the non-index submatrix of A copied onto it.
"""
if prefix is None:
prefix = find_best_blas_type((A, B))[0]
copy = prefix_copy_index_matrix_map[prefix]
if not inplace:
B = np.copy(B, order='F')
# We may have been given an F-contiguous memoryview; in that case, we do
# not want to alter it or convert it to a numpy array
try:
if not A.is_f_contig():
raise ValueError()
except (AttributeError, ValueError):
A = np.asfortranarray(A)
copy(A, B, np.asfortranarray(index), index_rows, index_cols,
is_diagonal)
return B | Copy the rows or columns of a time-varying matrix where all non-index
values are in the upper left corner of the matrix.
Parameters
----------
A : array_like
The matrix from which to copy. Must have shape (n, m, nobs) or
(n, m, 1).
B : array_like
The matrix to copy to. Must have shape (n, m, nobs).
index : array_like of bool
The vector of index indices. Must have shape (k, nobs) where `k = n`
if `index_rows is True` and `k = m` if `index_cols is True`.
index_rows : bool, optional
Whether or not the rows of the matrix are an index dimension. Default
is False.
index_cols : bool, optional
Whether or not the columns of the matrix are an index dimension.
Default is False.
is_diagonal : bool, optional
Whether or not the matrix is diagonal. If this is True, must also have
`n = m`. Default is False.
inplace : bool, optional
Whether or not to copy to B in-place. Default is False.
prefix : {'s', 'd', 'c', 'z'}, optional
The Fortran prefix of the vector. Default is to automatically detect
the dtype. This parameter should only be used with caution.
Returns
-------
copied_matrix : array_like
The matrix B with the non-index submatrix of A copied onto it. | copy_index_matrix | python | statsmodels/statsmodels | statsmodels/tsa/statespace/tools.py | https://github.com/statsmodels/statsmodels/blob/master/statsmodels/tsa/statespace/tools.py | BSD-3-Clause |
def copy_index_vector(a, b, index, inplace=False, prefix=None):
"""
Copy the elements of a time-varying vector where all non-index
values are in the first elements of the vector.
Parameters
----------
a : array_like
The vector from which to copy. Must have shape (n, nobs) or (n, 1).
b : array_like
The vector to copy to. Must have shape (n, nobs).
index : array_like of bool
The vector of index indices. Must have shape (n, nobs).
inplace : bool, optional
Whether or not to copy to b in-place. Default is False.
prefix : {'s', 'd', 'c', 'z'}, optional
The Fortran prefix of the vector. Default is to automatically detect
the dtype. This parameter should only be used with caution.
Returns
-------
copied_vector : array_like
The vector b with the non-index subvector of a copied onto it.
"""
if prefix is None:
prefix = find_best_blas_type((a, b))[0]
copy = prefix_copy_index_vector_map[prefix]
if not inplace:
b = np.copy(b, order='F')
# We may have been given an F-contiguous memoryview; in that case, we do
# not want to alter it or convert it to a numpy array
try:
if not a.is_f_contig():
raise ValueError()
except (AttributeError, ValueError):
a = np.asfortranarray(a)
copy(a, b, np.asfortranarray(index))
return b | Copy the elements of a time-varying vector where all non-index
values are in the first elements of the vector.
Parameters
----------
a : array_like
The vector from which to copy. Must have shape (n, nobs) or (n, 1).
b : array_like
The vector to copy to. Must have shape (n, nobs).
index : array_like of bool
The vector of index indices. Must have shape (n, nobs).
inplace : bool, optional
Whether or not to copy to b in-place. Default is False.
prefix : {'s', 'd', 'c', 'z'}, optional
The Fortran prefix of the vector. Default is to automatically detect
the dtype. This parameter should only be used with caution.
Returns
-------
copied_vector : array_like
The vector b with the non-index subvector of a copied onto it. | copy_index_vector | python | statsmodels/statsmodels | statsmodels/tsa/statespace/tools.py | https://github.com/statsmodels/statsmodels/blob/master/statsmodels/tsa/statespace/tools.py | BSD-3-Clause |
def _safe_cond(a):
"""Compute condition while protecting from LinAlgError"""
try:
return np.linalg.cond(a)
except np.linalg.LinAlgError:
if np.any(np.isnan(a)):
return np.nan
else:
return np.inf | Compute condition while protecting from LinAlgError | _safe_cond | python | statsmodels/statsmodels | statsmodels/tsa/statespace/tools.py | https://github.com/statsmodels/statsmodels/blob/master/statsmodels/tsa/statespace/tools.py | BSD-3-Clause |
def get_impact_dates(previous_model, updated_model, impact_date=None,
start=None, end=None, periods=None):
"""
Compute start/end periods and an index, often for impacts of data updates
Parameters
----------
previous_model : MLEModel
Model used to compute default start/end periods if None are given.
In the case of computing impacts of data updates, this would be the
model estimated with the previous dataset. Otherwise, can be the same
as `updated_model`.
updated_model : MLEModel
Model used to compute the index. In the case of computing impacts of
data updates, this would be the model estimated with the updated
dataset. Otherwise, can be the same as `previous_model`.
impact_date : {int, str, datetime}, optional
Specific individual impact date. Cannot be used in combination with
`start`, `end`, or `periods`.
start : {int, str, datetime}, optional
Starting point of the impact dates. If given, one of `end` or `periods`
must also be given. If a negative integer, will be computed relative to
the dates in the `updated_model` index. Cannot be used in combination
with `impact_date`.
end : {int, str, datetime}, optional
Ending point of the impact dates. If given, one of `start` or `periods`
must also be given. If a negative integer, will be computed relative to
the dates in the `updated_model` index. Cannot be used in combination
with `impact_date`.
periods : int, optional
Number of impact date periods. If given, one of `start` or `end`
must also be given. Cannot be used in combination with `impact_date`.
Returns
-------
start : int
Integer location of the first included impact dates.
end : int
Integer location of the last included impact dates (i.e. this integer
location is included in the returned `index`).
index : pd.Index
Index associated with `start` and `end`, as computed from the
`updated_model`'s index.
Notes
-----
This function is typically used as a helper for standardizing start and
end periods for a date range where the most sensible default values are
based on some initial dataset (here contained in the `previous_model`),
while index-related operations (especially relative start/end dates given
via negative integers) are most sensibly computed from an updated dataset
(here contained in the `updated_model`).
"""
# There doesn't seem to be any universal default that both (a) make
# sense for all data update combinations, and (b) work with both
# time-invariant and time-varying models. So we require that the user
# specify exactly two of start, end, periods.
if impact_date is not None:
if not (start is None and end is None and periods is None):
raise ValueError('Cannot use the `impact_date` argument in'
' combination with `start`, `end`, or'
' `periods`.')
start = impact_date
periods = 1
if start is None and end is None and periods is None:
start = previous_model.nobs - 1
end = previous_model.nobs - 1
if int(start is None) + int(end is None) + int(periods is None) != 1:
raise ValueError('Of the three parameters: start, end, and'
' periods, exactly two must be specified')
# If we have the `periods` object, we need to convert `start`/`end` to
# integers so that we can compute the other one. That's because
# _get_prediction_index doesn't support a `periods` argument
elif start is not None and periods is not None:
start, _, _, _ = updated_model._get_prediction_index(start, start)
end = start + (periods - 1)
elif end is not None and periods is not None:
_, end, _, _ = updated_model._get_prediction_index(end, end)
start = end - (periods - 1)
elif start is not None and end is not None:
pass
# Get the integer-based start, end and the prediction index
start, end, out_of_sample, prediction_index = (
updated_model._get_prediction_index(start, end))
end = end + out_of_sample
return start, end, prediction_index | Compute start/end periods and an index, often for impacts of data updates
Parameters
----------
previous_model : MLEModel
Model used to compute default start/end periods if None are given.
In the case of computing impacts of data updates, this would be the
model estimated with the previous dataset. Otherwise, can be the same
as `updated_model`.
updated_model : MLEModel
Model used to compute the index. In the case of computing impacts of
data updates, this would be the model estimated with the updated
dataset. Otherwise, can be the same as `previous_model`.
impact_date : {int, str, datetime}, optional
Specific individual impact date. Cannot be used in combination with
`start`, `end`, or `periods`.
start : {int, str, datetime}, optional
Starting point of the impact dates. If given, one of `end` or `periods`
must also be given. If a negative integer, will be computed relative to
the dates in the `updated_model` index. Cannot be used in combination
with `impact_date`.
end : {int, str, datetime}, optional
Ending point of the impact dates. If given, one of `start` or `periods`
must also be given. If a negative integer, will be computed relative to
the dates in the `updated_model` index. Cannot be used in combination
with `impact_date`.
periods : int, optional
Number of impact date periods. If given, one of `start` or `end`
must also be given. Cannot be used in combination with `impact_date`.
Returns
-------
start : int
Integer location of the first included impact dates.
end : int
Integer location of the last included impact dates (i.e. this integer
location is included in the returned `index`).
index : pd.Index
Index associated with `start` and `end`, as computed from the
`updated_model`'s index.
Notes
-----
This function is typically used as a helper for standardizing start and
end periods for a date range where the most sensible default values are
based on some initial dataset (here contained in the `previous_model`),
while index-related operations (especially relative start/end dates given
via negative integers) are most sensibly computed from an updated dataset
(here contained in the `updated_model`). | get_impact_dates | python | statsmodels/statsmodels | statsmodels/tsa/statespace/tools.py | https://github.com/statsmodels/statsmodels/blob/master/statsmodels/tsa/statespace/tools.py | BSD-3-Clause |
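The "exactly two of `start`, `end`, `periods`" bookkeeping inside `get_impact_dates` can be sketched without any model (the function `resolve_impact_range` below is hypothetical; the real helper converts dates through `_get_prediction_index`, which is omitted here, so `start` and `end` are plain integer locations):

```python
def resolve_impact_range(start=None, end=None, periods=None, default_loc=0):
    # Hypothetical sketch of the start/end/periods resolution logic.
    if start is None and end is None and periods is None:
        start = end = default_loc        # default: the last pre-update period
    if (start is None) + (end is None) + (periods is None) != 1:
        raise ValueError('Of the three parameters: start, end, and periods,'
                         ' exactly two must be specified')
    if start is not None and periods is not None:
        end = start + (periods - 1)
    elif end is not None and periods is not None:
        start = end - (periods - 1)
    return start, end

print(resolve_impact_range(start=3, periods=4))   # (3, 6)
print(resolve_impact_range(end=9, periods=3))     # (7, 9)
```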
def _atleast_1d(*arys):
"""
Version of `np.atleast_1d`, copied from
https://github.com/numpy/numpy/blob/master/numpy/core/shape_base.py,
with the following modifications:
1. It allows for `None` arguments, and passes them directly through
"""
res = []
for ary in arys:
if ary is None:
result = None
else:
ary = np.asanyarray(ary)
if ary.ndim == 0:
result = ary.reshape(1)
else:
result = ary
res.append(result)
if len(res) == 1:
return res[0]
else:
return res | Version of `np.atleast_1d`, copied from
https://github.com/numpy/numpy/blob/master/numpy/core/shape_base.py,
with the following modifications:
1. It allows for `None` arguments, and passes them directly through | _atleast_1d | python | statsmodels/statsmodels | statsmodels/tsa/statespace/tools.py | https://github.com/statsmodels/statsmodels/blob/master/statsmodels/tsa/statespace/tools.py | BSD-3-Clause |
def _atleast_2d(*arys):
"""
Version of `np.atleast_2d`, copied from
https://github.com/numpy/numpy/blob/master/numpy/core/shape_base.py,
with the following modifications:
1. It allows for `None` arguments, and passes them directly through
2. Instead of creating new axis at the beginning, it creates it at the end
"""
res = []
for ary in arys:
if ary is None:
result = None
else:
ary = np.asanyarray(ary)
if ary.ndim == 0:
result = ary.reshape(1, 1)
elif ary.ndim == 1:
result = ary[:, np.newaxis]
else:
result = ary
res.append(result)
if len(res) == 1:
return res[0]
else:
return res | Version of `np.atleast_2d`, copied from
https://github.com/numpy/numpy/blob/master/numpy/core/shape_base.py,
with the following modifications:
1. It allows for `None` arguments, and passes them directly through
2. Instead of creating new axis at the beginning, it creates it at the end | _atleast_2d | python | statsmodels/statsmodels | statsmodels/tsa/statespace/tools.py | https://github.com/statsmodels/statsmodels/blob/master/statsmodels/tsa/statespace/tools.py | BSD-3-Clause |
def _clone_kwargs(self, endog, **kwargs):
"""
Construct keyword arguments for cloning a state space model
Parameters
----------
endog : array_like
An observed time-series process :math:`y`.
**kwargs
Keyword arguments to pass to the new state space representation
model constructor. Those that are not specified are copied from
the specification of the current state space model.
"""
# We always need the base dimensions, but they cannot change from
# the base model when cloning (the idea is: if these need to change,
# need to make a new instance manually, since it's not really cloning).
kwargs['nobs'] = len(endog)
kwargs['k_endog'] = self.k_endog
for key in ['k_states', 'k_posdef']:
val = getattr(self, key)
if key not in kwargs or kwargs[key] is None:
kwargs[key] = val
if kwargs[key] != val:
raise ValueError('Cannot change the dimension of %s when'
' cloning.' % key)
# Get defaults for time-invariant system matrices, if not otherwise
# provided
# Time-varying matrices must be replaced.
for name in self.shapes.keys():
if name == 'obs':
continue
if name not in kwargs:
mat = getattr(self, name)
if mat.shape[-1] != 1:
raise ValueError('The `%s` matrix is time-varying. Cloning'
' this model requires specifying an'
' updated matrix.' % name)
kwargs[name] = mat
# Default is to use the same initialization
kwargs.setdefault('initialization', self.initialization)
return kwargs | Construct keyword arguments for cloning a state space model
Parameters
----------
endog : array_like
An observed time-series process :math:`y`.
**kwargs
Keyword arguments to pass to the new state space representation
model constructor. Those that are not specified are copied from
the specification of the current state space model. | _clone_kwargs | python | statsmodels/statsmodels | statsmodels/tsa/statespace/representation.py | https://github.com/statsmodels/statsmodels/blob/master/statsmodels/tsa/statespace/representation.py | BSD-3-Clause |
def clone(self, endog, **kwargs):
"""
Clone a state space representation while overriding some elements
Parameters
----------
endog : array_like
An observed time-series process :math:`y`.
**kwargs
Keyword arguments to pass to the new state space representation
model constructor. Those that are not specified are copied from
the specification of the current state space model.
Returns
-------
Representation
Notes
-----
If some system matrices are time-varying, then new time-varying
matrices *must* be provided.
"""
kwargs = self._clone_kwargs(endog, **kwargs)
mod = self.__class__(**kwargs)
mod.bind(endog)
return mod | Clone a state space representation while overriding some elements
Parameters
----------
endog : array_like
An observed time-series process :math:`y`.
**kwargs
Keyword arguments to pass to the new state space representation
model constructor. Those that are not specified are copied from
the specification of the current state space model.
Returns
-------
Representation
Notes
-----
If some system matrices are time-varying, then new time-varying
matrices *must* be provided. | clone | python | statsmodels/statsmodels | statsmodels/tsa/statespace/representation.py | https://github.com/statsmodels/statsmodels/blob/master/statsmodels/tsa/statespace/representation.py | BSD-3-Clause |
def extend(self, endog, start=None, end=None, **kwargs):
"""
Extend the current state space model, or a specific (time) subset
Parameters
----------
endog : array_like
An observed time-series process :math:`y`.
start : int, optional
The first period of a time-varying state space model to include in
the new model. Has no effect if the state space model is
time-invariant. Default is the initial period.
end : int, optional
The last period of a time-varying state space model to include in
the new model. Has no effect if the state space model is
time-invariant. Default is the final period.
**kwargs
Keyword arguments to pass to the new state space representation
model constructor. Those that are not specified are copied from
the specification of the current state space model.
Returns
-------
Representation
Notes
-----
This method does not allow replacing a time-varying system matrix with
a time-invariant one (or vice-versa). If that is required, use `clone`.
"""
endog = np.atleast_1d(endog)
if endog.ndim == 1:
endog = endog[:, np.newaxis]
nobs = len(endog)
if start is None:
start = 0
if end is None:
end = self.nobs
if start < 0:
start = self.nobs + start
if end < 0:
end = self.nobs + end
if start > self.nobs:
raise ValueError('The `start` argument of the extension within the'
' base model cannot be after the end of the'
' base model.')
if end > self.nobs:
raise ValueError('The `end` argument of the extension within the'
' base model cannot be after the end of the'
' base model.')
if start > end:
raise ValueError('The `start` argument of the extension within the'
' base model cannot be after the `end` argument.')
# Note: if start == end or if end < self.nobs, then we're just cloning
# (no extension)
endog = tools.concat([self.endog[:, start:end].T, endog])
# Extend any time-varying arrays
error_ti = ('Model has time-invariant %s matrix, so cannot provide'
' an extended matrix.')
error_tv = ('Model has time-varying %s matrix, so an updated'
' time-varying matrix for the extension period'
' is required.')
for name, shape in self.shapes.items():
if name == 'obs':
continue
mat = getattr(self, name)
# If we were *not* given an extended value for this matrix...
if name not in kwargs:
# If this is a time-varying matrix in the existing model
if mat.shape[-1] > 1:
# If we have an extension period, then raise an error
# because we should have been given an extended value
if end + nobs > self.nobs:
raise ValueError(error_tv % name)
# If we do not have an extension period, then set the new
# time-varying matrix to be the portion of the existing
# time-varying matrix that corresponds to the period of
# interest
else:
kwargs[name] = mat[..., start:end + nobs]
elif nobs == 0:
raise ValueError('Extension is being performed within-sample'
' so cannot provide an extended matrix')
# If we were given an extended value for this matrix
else:
# TODO: Need to add a check for ndim, and if the matrix has
# one fewer dimensions than the existing matrix, add a new axis
# If this is a time-invariant matrix in the existing model,
# raise an error
if mat.shape[-1] == 1 and self.nobs > 1:
raise ValueError(error_ti % name)
# Otherwise, validate the shape of the given extended value
# Note: we do not validate the number of observations here
# (so we pass in updated_mat.shape[-1] as the nobs argument
# in the validate_* calls); instead, we check below that
# at least `nobs` values were passed in and then only take the
# first `nobs` of them as required. This can be useful when e.g. the
# end user knows the extension values up to some maximum
# endpoint, but does not know what the calling methods may
# specifically require.
updated_mat = np.asarray(kwargs[name])
if len(shape) == 2:
validate_vector_shape(name, updated_mat.shape, shape[0],
updated_mat.shape[-1])
else:
validate_matrix_shape(name, updated_mat.shape, shape[0],
shape[1], updated_mat.shape[-1])
if updated_mat.shape[-1] < nobs:
raise ValueError(error_tv % name)
else:
updated_mat = updated_mat[..., :nobs]
# Concatenate to get the new time-varying matrix
kwargs[name] = np.c_[mat[..., start:end], updated_mat]
return self.clone(endog, **kwargs) | Extend the current state space model, or a specific (time) subset
Parameters
----------
endog : array_like
An observed time-series process :math:`y`.
start : int, optional
The first period of a time-varying state space model to include in
the new model. Has no effect if the state space model is
time-invariant. Default is the initial period.
end : int, optional
The last period of a time-varying state space model to include in
the new model. Has no effect if the state space model is
time-invariant. Default is the final period.
**kwargs
Keyword arguments to pass to the new state space representation
model constructor. Those that are not specified are copied from
the specification of the current state space model.
Returns
-------
Representation
Notes
-----
This method does not allow replacing a time-varying system matrix with
a time-invariant one (or vice-versa). If that is required, use `clone`. | extend | python | statsmodels/statsmodels | statsmodels/tsa/statespace/representation.py | https://github.com/statsmodels/statsmodels/blob/master/statsmodels/tsa/statespace/representation.py | BSD-3-Clause |
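The index arithmetic that `extend` applies to each time-varying system matrix can be sketched in isolation. This is a numpy-only illustration of the pattern (normalize negative `start`/`end`, keep only the first `nobs` extension values, concatenate along the time axis); the helper name `extend_tv_matrix` is hypothetical, not part of statsmodels.

```python
import numpy as np

def extend_tv_matrix(mat, updated, start, end, nobs_base, nobs_new):
    """Sketch of the slicing/concatenation `extend` performs on one
    time-varying system matrix of shape (..., nobs_base)."""
    # Normalize negative indices against the base sample length
    if start < 0:
        start = nobs_base + start
    if end < 0:
        end = nobs_base + end
    if not (0 <= start <= end <= nobs_base):
        raise ValueError('invalid start/end for the base sample')
    # Keep at most the first nobs_new of the provided extension values
    updated = np.asarray(updated)[..., :nobs_new]
    if updated.shape[-1] < nobs_new:
        raise ValueError('need one value per extension period')
    # Concatenate along the time (last) axis
    return np.concatenate([mat[..., start:end], updated], axis=-1)

# A 1x1 "design" matrix varying over 5 base periods, extended by 2
base = np.arange(5.0).reshape(1, 1, 5)
new = np.array([[[10.0, 11.0]]])
out = extend_tv_matrix(base, new, start=0, end=5, nobs_base=5, nobs_new=2)
```

Passing a negative `start` (e.g. `-3`) selects only the tail of the base sample before appending the extension values, mirroring the `self.nobs + start` normalization above.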
def prefix(self):
"""
(str) BLAS prefix of currently active representation matrices
"""
arrays = (
self._design, self._obs_intercept, self._obs_cov,
self._transition, self._state_intercept, self._selection,
self._state_cov
)
if self.endog is not None:
arrays = (self.endog,) + arrays
return find_best_blas_type(arrays)[0] | (str) BLAS prefix of currently active representation matrices | prefix | python | statsmodels/statsmodels | statsmodels/tsa/statespace/representation.py | https://github.com/statsmodels/statsmodels/blob/master/statsmodels/tsa/statespace/representation.py | BSD-3-Clause |
def dtype(self):
"""
(dtype) Datatype of currently active representation matrices
"""
return tools.prefix_dtype_map[self.prefix] | (dtype) Datatype of currently active representation matrices | dtype | python | statsmodels/statsmodels | statsmodels/tsa/statespace/representation.py | https://github.com/statsmodels/statsmodels/blob/master/statsmodels/tsa/statespace/representation.py | BSD-3-Clause |
def time_invariant(self):
"""
(bool) Whether or not currently active representation matrices are
time-invariant
"""
if self._time_invariant is None:
return (
self._design.shape[2] == self._obs_intercept.shape[1] ==
self._obs_cov.shape[2] == self._transition.shape[2] ==
self._state_intercept.shape[1] == self._selection.shape[2] ==
self._state_cov.shape[2]
)
else:
return self._time_invariant | (bool) Whether or not currently active representation matrices are
time-invariant | time_invariant | python | statsmodels/statsmodels | statsmodels/tsa/statespace/representation.py | https://github.com/statsmodels/statsmodels/blob/master/statsmodels/tsa/statespace/representation.py | BSD-3-Clause |
def bind(self, endog):
"""
Bind data to the statespace representation
Parameters
----------
endog : ndarray
Endogenous data to bind to the model. Must be column-ordered
ndarray with shape (`k_endog`, `nobs`) or row-ordered ndarray with
shape (`nobs`, `k_endog`).
Notes
-----
The strict requirements arise because the underlying statespace and
Kalman filtering classes require Fortran-ordered arrays in the wide
format (shaped (`k_endog`, `nobs`)), and this structure is set up to
prevent copying arrays in memory.
By default, numpy arrays are row (C)-ordered and most time series are
represented in the long format (with time on the 0-th axis). In this
case, no copying or re-ordering needs to be performed, instead the
array can simply be transposed to get it in the right order and shape.
Although this class (Representation) has stringent `bind` requirements,
it is assumed that it will rarely be used directly.
"""
if not isinstance(endog, np.ndarray):
raise ValueError("Invalid endogenous array; must be an ndarray.")
# Make sure we have a 2-dimensional array
# Note: reshaping a 1-dim array into a 2-dim array by changing the
# shape tuple always results in a row (C)-ordered array, so it
# must be shaped (nobs, k_endog)
if endog.ndim == 1:
# A length-nobs 1-dim array (a single series): reshape to (nobs, 1)
if self.k_endog == 1:
endog.shape = (endog.shape[0], 1)
# A length-k_endog 1-dim array (a single observation): reshape to
# (1, k_endog)
else:
endog.shape = (1, endog.shape[0])
if not endog.ndim == 2:
raise ValueError('Invalid endogenous array provided; must be'
' 2-dimensional.')
# Check for valid column-ordered arrays
if endog.flags['F_CONTIGUOUS'] and endog.shape[0] == self.k_endog:
pass
# Check for valid row-ordered arrays, and transpose them to be the
# correct column-ordered array
elif endog.flags['C_CONTIGUOUS'] and endog.shape[1] == self.k_endog:
endog = endog.T
# Invalid column-ordered arrays
elif endog.flags['F_CONTIGUOUS']:
raise ValueError('Invalid endogenous array; column-ordered'
' arrays must have first axis shape of'
' `k_endog`.')
# Invalid row-ordered arrays
elif endog.flags['C_CONTIGUOUS']:
raise ValueError('Invalid endogenous array; row-ordered'
' arrays must have last axis shape of'
' `k_endog`.')
# Non-contiguous arrays
else:
raise ValueError('Invalid endogenous array; must be ordered in'
' contiguous memory.')
# We may still have a non-fortran contiguous array, so double-check
if not endog.flags['F_CONTIGUOUS']:
endog = np.asfortranarray(endog)
# Set a flag for complex data
self._complex_endog = np.iscomplexobj(endog)
# Set the data
self.endog = endog
self.nobs = self.endog.shape[1]
# Reset shapes
if hasattr(self, 'shapes'):
self.shapes['obs'] = self.endog.shape | Bind data to the statespace representation
Parameters
----------
endog : ndarray
Endogenous data to bind to the model. Must be column-ordered
ndarray with shape (`k_endog`, `nobs`) or row-ordered ndarray with
shape (`nobs`, `k_endog`).
Notes
-----
The strict requirements arise because the underlying statespace and
Kalman filtering classes require Fortran-ordered arrays in the wide
format (shaped (`k_endog`, `nobs`)), and this structure is set up to
prevent copying arrays in memory.
By default, numpy arrays are row (C)-ordered and most time series are
represented in the long format (with time on the 0-th axis). In this
case, no copying or re-ordering needs to be performed, instead the
array can simply be transposed to get it in the right order and shape.
Although this class (Representation) has stringent `bind` requirements,
it is assumed that it will rarely be used directly. | bind | python | statsmodels/statsmodels | statsmodels/tsa/statespace/representation.py | https://github.com/statsmodels/statsmodels/blob/master/statsmodels/tsa/statespace/representation.py | BSD-3-Clause |
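The memory-layout argument in the `bind` docstring can be demonstrated directly with numpy: transposing a contiguous array is a free view that flips the order flags, whereas a strided slice must be copied into Fortran order (as `bind` does with `np.asfortranarray` in its final step). A minimal sketch:

```python
import numpy as np

# `bind` requires endog column-ordered with shape (k_endog, nobs). A
# long-format (nobs, k_endog) C-ordered array can be handed over without
# copying: transposing a contiguous array just flips its order flags.
k_endog, nobs = 2, 100
long_format = np.zeros((nobs, k_endog))   # C-ordered, long format
wide = long_format.T                      # F-ordered view, (k_endog, nobs)

# A strided slice is no longer contiguous, so an explicit Fortran-order
# copy is needed before it could be bound.
subset = long_format[::2].T
wide_copy = np.asfortranarray(subset)
```

This is why passing a C-ordered long-format array to `bind` costs nothing, while non-contiguous inputs are rejected with an error.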
def initialize(self, initialization, approximate_diffuse_variance=None,
constant=None, stationary_cov=None, a=None, Pstar=None,
Pinf=None, A=None, R0=None, Q0=None):
"""Create an Initialization object if necessary"""
if initialization == 'known':
initialization = Initialization(self.k_states, 'known',
constant=constant,
stationary_cov=stationary_cov)
elif initialization == 'components':
initialization = Initialization.from_components(
a=a, Pstar=Pstar, Pinf=Pinf, A=A, R0=R0, Q0=Q0)
elif initialization == 'approximate_diffuse':
if approximate_diffuse_variance is None:
approximate_diffuse_variance = self.initial_variance
initialization = Initialization(
self.k_states, 'approximate_diffuse',
approximate_diffuse_variance=approximate_diffuse_variance)
elif initialization == 'stationary':
initialization = Initialization(self.k_states, 'stationary')
elif initialization == 'diffuse':
initialization = Initialization(self.k_states, 'diffuse')
# We must have an initialization object at this point
if not isinstance(initialization, Initialization):
raise ValueError("Invalid state space initialization method.")
self.initialization = initialization | Create an Initialization object if necessary | initialize | python | statsmodels/statsmodels | statsmodels/tsa/statespace/representation.py | https://github.com/statsmodels/statsmodels/blob/master/statsmodels/tsa/statespace/representation.py | BSD-3-Clause |
def initialize_known(self, constant, stationary_cov):
"""
Initialize the statespace model with known distribution for initial
state.
These values are assumed to be known with certainty or else
filled with parameters during, for example, maximum likelihood
estimation.
Parameters
----------
constant : array_like
Known mean of the initial state vector.
stationary_cov : array_like
Known covariance matrix of the initial state vector.
"""
constant = np.asarray(constant, order="F")
stationary_cov = np.asarray(stationary_cov, order="F")
if not constant.shape == (self.k_states,):
raise ValueError('Invalid dimensions for constant state vector.'
' Requires shape (%d,), got %s' %
(self.k_states, str(constant.shape)))
if not stationary_cov.shape == (self.k_states, self.k_states):
raise ValueError('Invalid dimensions for stationary covariance'
' matrix. Requires shape (%d,%d), got %s' %
(self.k_states, self.k_states,
str(stationary_cov.shape)))
self.initialize('known', constant=constant,
stationary_cov=stationary_cov) | Initialize the statespace model with known distribution for initial
state.
These values are assumed to be known with certainty or else
filled with parameters during, for example, maximum likelihood
estimation.
Parameters
----------
constant : array_like
Known mean of the initial state vector.
stationary_cov : array_like
Known covariance matrix of the initial state vector. | initialize_known | python | statsmodels/statsmodels | statsmodels/tsa/statespace/representation.py | https://github.com/statsmodels/statsmodels/blob/master/statsmodels/tsa/statespace/representation.py | BSD-3-Clause |
def initialize_approximate_diffuse(self, variance=None):
"""
Initialize the statespace model with approximate diffuse values.
Rather than following the exact diffuse treatment (which is developed
for the case that the variance becomes infinitely large), this assigns
an arbitrarily large number for the variance.
Parameters
----------
variance : float, optional
The variance for approximating diffuse initial conditions. Default
is 1e6.
"""
if variance is None:
variance = self.initial_variance
self.initialize('approximate_diffuse',
approximate_diffuse_variance=variance) | Initialize the statespace model with approximate diffuse values.
Rather than following the exact diffuse treatment (which is developed
for the case that the variance becomes infinitely large), this assigns
an arbitrarily large number for the variance.
Parameters
----------
variance : float, optional
The variance for approximating diffuse initial conditions. Default
is 1e6. | initialize_approximate_diffuse | python | statsmodels/statsmodels | statsmodels/tsa/statespace/representation.py | https://github.com/statsmodels/statsmodels/blob/master/statsmodels/tsa/statespace/representation.py | BSD-3-Clause |
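Concretely, the approximate diffuse initialization amounts to a zero initial state mean with a large finite variance on the diagonal of the initial covariance. A numpy sketch (variable names are illustrative, not statsmodels attributes):

```python
import numpy as np

# The approximate diffuse prior uses a large finite variance in place
# of the "infinite" variance of the exact diffuse treatment.
k_states, variance = 3, 1e6
initial_state = np.zeros(k_states)
initial_state_cov = variance * np.eye(k_states)
```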
def initialize_components(self, a=None, Pstar=None, Pinf=None, A=None,
R0=None, Q0=None):
"""
Initialize the statespace model with component matrices
Parameters
----------
a : array_like, optional
Vector of constant values describing the mean of the stationary
component of the initial state.
Pstar : array_like, optional
Stationary component of the initial state covariance matrix. If
given, should be a matrix shaped `k_states x k_states`. The
submatrix associated with the diffuse states should contain zeros.
Note that by definition, `Pstar = R0 @ Q0 @ R0.T`, so either
`R0,Q0` or `Pstar` may be given, but not both.
Pinf : array_like, optional
Diffuse component of the initial state covariance matrix. If given,
should be a matrix shaped `k_states x k_states` with ones in the
diagonal positions corresponding to states with diffuse
initialization and zeros otherwise. Note that by definition,
`Pinf = A @ A.T`, so either `A` or `Pinf` may be given, but not
both.
A : array_like, optional
Diffuse selection matrix, used in the definition of the diffuse
initial state covariance matrix. If given, should be a
`k_states x k_diffuse_states` matrix that contains the subset of
the columns of the identity matrix that correspond to states with
diffuse initialization. Note that by definition, `Pinf = A @ A.T`,
so either `A` or `Pinf` may be given, but not both.
R0 : array_like, optional
Stationary selection matrix, used in the definition of the
stationary initial state covariance matrix. If given, should be a
`k_states x k_nondiffuse_states` matrix that contains the subset of
the columns of the identity matrix that correspond to states with a
non-diffuse initialization. Note that by definition,
`Pstar = R0 @ Q0 @ R0.T`, so either `R0,Q0` or `Pstar` may be
given, but not both.
Q0 : array_like, optional
Covariance matrix associated with stationary initial states. If
given, should be a matrix shaped
`k_nondiffuse_states x k_nondiffuse_states`.
Note that by definition, `Pstar = R0 @ Q0 @ R0.T`, so either
`R0,Q0` or `Pstar` may be given, but not both.
Notes
-----
The matrices `a, Pstar, Pinf, A, R0, Q0` and the process for
initializing the state space model is as given in Chapter 5 of [1]_.
For the definitions of these matrices, see equation (5.2) and the
subsequent discussion there.
References
----------
.. [1] Durbin, James, and Siem Jan Koopman. 2012.
Time Series Analysis by State Space Methods: Second Edition.
Oxford University Press.
"""
self.initialize('components', a=a, Pstar=Pstar, Pinf=Pinf, A=A, R0=R0,
Q0=Q0) | Initialize the statespace model with component matrices
Parameters
----------
a : array_like, optional
Vector of constant values describing the mean of the stationary
component of the initial state.
Pstar : array_like, optional
Stationary component of the initial state covariance matrix. If
given, should be a matrix shaped `k_states x k_states`. The
submatrix associated with the diffuse states should contain zeros.
Note that by definition, `Pstar = R0 @ Q0 @ R0.T`, so either
`R0,Q0` or `Pstar` may be given, but not both.
Pinf : array_like, optional
Diffuse component of the initial state covariance matrix. If given,
should be a matrix shaped `k_states x k_states` with ones in the
diagonal positions corresponding to states with diffuse
initialization and zeros otherwise. Note that by definition,
`Pinf = A @ A.T`, so either `A` or `Pinf` may be given, but not
both.
A : array_like, optional
Diffuse selection matrix, used in the definition of the diffuse
initial state covariance matrix. If given, should be a
`k_states x k_diffuse_states` matrix that contains the subset of
the columns of the identity matrix that correspond to states with
diffuse initialization. Note that by definition, `Pinf = A @ A.T`,
so either `A` or `Pinf` may be given, but not both.
R0 : array_like, optional
Stationary selection matrix, used in the definition of the
stationary initial state covariance matrix. If given, should be a
`k_states x k_nondiffuse_states` matrix that contains the subset of
the columns of the identity matrix that correspond to states with a
non-diffuse initialization. Note that by definition,
`Pstar = R0 @ Q0 @ R0.T`, so either `R0,Q0` or `Pstar` may be
given, but not both.
Q0 : array_like, optional
Covariance matrix associated with stationary initial states. If
given, should be a matrix shaped
`k_nondiffuse_states x k_nondiffuse_states`.
Note that by definition, `Pstar = R0 @ Q0 @ R0.T`, so either
`R0,Q0` or `Pstar` may be given, but not both.
Notes
-----
The matrices `a, Pstar, Pinf, A, R0, Q0` and the process for
initializing the state space model is as given in Chapter 5 of [1]_.
For the definitions of these matrices, see equation (5.2) and the
subsequent discussion there.
References
----------
.. [1] Durbin, James, and Siem Jan Koopman. 2012.
Time Series Analysis by State Space Methods: Second Edition.
Oxford University Press. | initialize_components | python | statsmodels/statsmodels | statsmodels/tsa/statespace/representation.py | https://github.com/statsmodels/statsmodels/blob/master/statsmodels/tsa/statespace/representation.py | BSD-3-Clause |
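The identities `Pstar = R0 @ Q0 @ R0.T` and `Pinf = A @ A.T` stated in the docstring can be checked numerically. A sketch for a 3-state model in which state 0 is diffuse and states 1-2 are stationary (the matrices here are made-up example values):

```python
import numpy as np

# Check the component identities for a 3-state model: state 0 diffuse,
# states 1-2 stationary with a known initial covariance Q0.
k_states = 3
A = np.eye(k_states)[:, :1]          # column of I for the diffuse state
R0 = np.eye(k_states)[:, 1:]         # columns of I for stationary states
Q0 = np.array([[0.5, 0.1],
               [0.1, 0.3]])          # stationary initial covariance
Pinf = A @ A.T                       # diffuse part: 1 at position (0, 0)
Pstar = R0 @ Q0 @ R0.T               # stationary part, zero in diffuse block
```

Note that `Pstar` embeds `Q0` in the lower-right block and is zero in the row/column of the diffuse state, matching the requirement that "the submatrix associated with the diffuse states should contain zeros."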
def initialize_stationary(self):
"""
Initialize the statespace model as stationary.
"""
self.initialize('stationary') | Initialize the statespace model as stationary. | initialize_stationary | python | statsmodels/statsmodels | statsmodels/tsa/statespace/representation.py | https://github.com/statsmodels/statsmodels/blob/master/statsmodels/tsa/statespace/representation.py | BSD-3-Clause |
def initialize_diffuse(self):
"""
Initialize the statespace model as diffuse.
"""
self.initialize('diffuse') | Initialize the statespace model as diffuse. | initialize_diffuse | python | statsmodels/statsmodels | statsmodels/tsa/statespace/representation.py | https://github.com/statsmodels/statsmodels/blob/master/statsmodels/tsa/statespace/representation.py | BSD-3-Clause |
def update_representation(self, model):
"""Update model Representation"""
# Model
self.model = model
# Data type
self.prefix = model.prefix
self.dtype = model.dtype
# Copy the model dimensions
self.nobs = model.nobs
self.k_endog = model.k_endog
self.k_states = model.k_states
self.k_posdef = model.k_posdef
self.time_invariant = model.time_invariant
# Save the state space representation at the time
self.endog = model.endog
self.design = model._design.copy()
self.obs_intercept = model._obs_intercept.copy()
self.obs_cov = model._obs_cov.copy()
self.transition = model._transition.copy()
self.state_intercept = model._state_intercept.copy()
self.selection = model._selection.copy()
self.state_cov = model._state_cov.copy()
self.missing = np.array(model._statespaces[self.prefix].missing,
copy=True)
self.nmissing = np.array(model._statespaces[self.prefix].nmissing,
copy=True)
# Save the final shapes of the matrices
self.shapes = dict(model.shapes)
for name in self.shapes.keys():
if name == 'obs':
continue
self.shapes[name] = getattr(self, name).shape
self.shapes['obs'] = self.endog.shape
# Save the state space initialization
self.initialization = model.initialization
if model.initialization is not None:
model._initialize_state()
self.initial_state = np.array(
model._statespaces[self.prefix].initial_state, copy=True)
self.initial_state_cov = np.array(
model._statespaces[self.prefix].initial_state_cov, copy=True)
self.initial_diffuse_state_cov = np.array(
model._statespaces[self.prefix].initial_diffuse_state_cov,
copy=True) | Update model Representation | update_representation | python | statsmodels/statsmodels | statsmodels/tsa/statespace/representation.py | https://github.com/statsmodels/statsmodels/blob/master/statsmodels/tsa/statespace/representation.py | BSD-3-Clause |
def unset(self, index):
"""
Unset initialization for states, either globally or for a block
Parameters
----------
index : tuple or int or None
Arguments used to create a `slice` of states. Can be a tuple with
`(start, stop)` (note that for `slice`, stop is not inclusive), or
an integer (to select a specific state), or None (to select all the
states).
Notes
-----
Note that this specifically unsets initializations previously created
using `set` with this same index. Thus you cannot use `index=None` to
unset all initializations, but only to unset a previously set global
initialization. To unset all initializations (including both global and
block level), use the `clear` method.
"""
if isinstance(index, (int, np.integer)):
index = int(index)
if index < 0 or index > self.k_states:
raise ValueError('Invalid index.')
index = (index, index + 1)
elif index is None:
index = (index,)
elif not isinstance(index, tuple):
raise ValueError('Invalid index.')
if len(index) > 2:
raise ValueError('Cannot include a slice step in `index`.')
index = self._states[slice(*index)]
# Compatibility with zero-length slices (can make it easier to set up
# initialization without lots of if statements)
if len(index) == 0:
return
# Unset the values
k_states = len(index)
if k_states == self.k_states and self.initialization_type is not None:
self.initialization_type = None
self.constant[:] = 0
self.stationary_cov[:] = 0
elif index in self.blocks:
for i in index:
self._initialization[i] = None
del self.blocks[index]
else:
raise ValueError('The given index does not correspond to a'
' previously initialized block.') | Unset initialization for states, either globally or for a block
Parameters
----------
index : tuple or int or None
Arguments used to create a `slice` of states. Can be a tuple with
`(start, stop)` (note that for `slice`, stop is not inclusive), or
an integer (to select a specific state), or None (to select all the
states).
Notes
-----
Note that this specifically unsets initializations previously created
using `set` with this same index. Thus you cannot use `index=None` to
unset all initializations, but only to unset a previously set global
initialization. To unset all initializations (including both global and
block level), use the `clear` method. | unset | python | statsmodels/statsmodels | statsmodels/tsa/statespace/initialization.py | https://github.com/statsmodels/statsmodels/blob/master/statsmodels/tsa/statespace/initialization.py | BSD-3-Clause |
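The index handling at the top of `unset` can be isolated into a small helper. This is a hypothetical function (not part of statsmodels) mirroring how an int, `None`, or `(start, stop)` tuple is converted into the affected state indices:

```python
import numpy as np

def normalize_index(index, k_states):
    """Hypothetical helper mirroring how `unset` turns an int, None, or
    (start, stop) tuple into the list of affected state indices."""
    if isinstance(index, (int, np.integer)):
        index = int(index)
        if index < 0 or index > k_states:
            raise ValueError('Invalid index.')
        index = (index, index + 1)      # one state -> half-open range
    elif index is None:
        index = (index,)                # slice(None) selects all states
    elif not isinstance(index, tuple):
        raise ValueError('Invalid index.')
    if len(index) > 2:
        raise ValueError('Cannot include a slice step in `index`.')
    return list(range(k_states))[slice(*index)]
```

Note how `(start, stop)` follows `slice` semantics (stop exclusive), and a zero-length result such as `(3, 3)` is legal, which is what lets `unset` return early on empty slices.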
def clear(self):
"""
Clear all previously set initializations, either global or block level
"""
# Clear initializations
for i in self._states:
self._initialization[i] = None
# Delete block initializations
keys = list(self.blocks.keys())
for key in keys:
del self.blocks[key]
# Clear global attributes
self.initialization_type = None
self.constant[:] = 0
self.stationary_cov[:] = 0 | Clear all previously set initializations, either global or block level | clear | python | statsmodels/statsmodels | statsmodels/tsa/statespace/initialization.py | https://github.com/statsmodels/statsmodels/blob/master/statsmodels/tsa/statespace/initialization.py | BSD-3-Clause |
def check_random_state(seed=None):
"""Turn `seed` into a `numpy.random.Generator` instance.
Parameters
----------
seed : {None, int, Generator, RandomState}, optional
If `seed` is None, a new ``numpy.random.Generator`` is created
with fresh, unpredictable entropy.
If `seed` is an int, a new ``numpy.random.Generator`` instance
is used, seeded with `seed`.
If `seed` is already a ``numpy.random.Generator`` or
``numpy.random.RandomState`` instance then that instance is used.
Returns
-------
seed : {`numpy.random.Generator`, `numpy.random.RandomState`}
Random number generator.
"""
if seed is None or isinstance(seed, (numbers.Integral, np.integer)):
return np.random.default_rng(seed)
elif isinstance(seed, (np.random.RandomState, np.random.Generator)):
return seed
else:
raise ValueError(f'{seed!r} cannot be used to seed a'
' numpy.random.Generator instance') | Turn `seed` into a `numpy.random.Generator` instance.
Parameters
----------
seed : {None, int, Generator, RandomState}, optional
If `seed` is None (or `np.random`), the `numpy.random.RandomState`
singleton is used.
If `seed` is an int, a new ``numpy.random.RandomState`` instance
is used, seeded with `seed`.
If `seed` is already a ``numpy.random.Generator`` or
``numpy.random.RandomState`` instance then that instance is used.
Returns
-------
seed : {`numpy.random.Generator`, `numpy.random.RandomState`}
Random number generator. | check_random_state | python | statsmodels/statsmodels | statsmodels/tsa/statespace/simulation_smoother.py | https://github.com/statsmodels/statsmodels/blob/master/statsmodels/tsa/statespace/simulation_smoother.py | BSD-3-Clause |
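Usage of `check_random_state` can be shown with a self-contained restatement of the function (repeated here so the snippet runs on its own): equal int seeds give reproducible draws, and existing generators pass through unchanged.

```python
import numbers
import numpy as np

def check_random_state(seed=None):
    # Same logic as above: None or an int becomes a fresh Generator,
    # while existing Generator/RandomState objects pass through as-is.
    if seed is None or isinstance(seed, (numbers.Integral, np.integer)):
        return np.random.default_rng(seed)
    elif isinstance(seed, (np.random.RandomState, np.random.Generator)):
        return seed
    raise ValueError(f'{seed!r} cannot be used to seed a'
                     ' numpy.random.Generator instance')

g1 = check_random_state(12345)      # two Generators with the same seed...
g2 = check_random_state(12345)
draws1 = g1.standard_normal(3)      # ...produce identical draws
draws2 = g2.standard_normal(3)
```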
def memory_no_forecast(self):
"""
(bool) Flag to prevent storing all forecast-related output.
"""
return self.memory_no_forecast_mean or self.memory_no_forecast_cov | (bool) Flag to prevent storing all forecast-related output. | memory_no_forecast | python | statsmodels/statsmodels | statsmodels/tsa/statespace/kalman_filter.py | https://github.com/statsmodels/statsmodels/blob/master/statsmodels/tsa/statespace/kalman_filter.py | BSD-3-Clause |
def memory_no_predicted(self):
"""
(bool) Flag to prevent storing predicted state and covariance matrices.
"""
return self.memory_no_predicted_mean or self.memory_no_predicted_cov | (bool) Flag to prevent storing predicted state and covariance matrices. | memory_no_predicted | python | statsmodels/statsmodels | statsmodels/tsa/statespace/kalman_filter.py | https://github.com/statsmodels/statsmodels/blob/master/statsmodels/tsa/statespace/kalman_filter.py | BSD-3-Clause |
def memory_no_filtered(self):
"""
(bool) Flag to prevent storing filtered state and covariance matrices.
"""
return self.memory_no_filtered_mean or self.memory_no_filtered_cov | (bool) Flag to prevent storing filtered state and covariance matrices. | memory_no_filtered | python | statsmodels/statsmodels | statsmodels/tsa/statespace/kalman_filter.py | https://github.com/statsmodels/statsmodels/blob/master/statsmodels/tsa/statespace/kalman_filter.py | BSD-3-Clause |
def fixed_scale(self, scale):
"""
fixed_scale(scale)
Context manager for fixing the scale when FILTER_CONCENTRATED is set
Parameters
----------
scale : numeric
Scale of the model.
Notes
-----
This is a no-op if scale is None.
This context manager is most useful in models which are explicitly
concentrating out the scale, so that the set of parameters they are
estimating does not include the scale.
"""
# If a scale was provided, use it and do not concentrate it out of the
# loglikelihood
if scale is not None and scale != 1:
if not self.filter_concentrated:
raise ValueError('Cannot provide scale if filter method does'
' not include FILTER_CONCENTRATED.')
self.filter_concentrated = False
self._scale = scale
obs_cov = self['obs_cov']
state_cov = self['state_cov']
self['obs_cov'] = scale * obs_cov
self['state_cov'] = scale * state_cov
try:
yield
finally:
# If a scale was provided, reset the model
if scale is not None and scale != 1:
self['state_cov'] = state_cov
self['obs_cov'] = obs_cov
self.filter_concentrated = True
self._scale = None | fixed_scale(scale)
Context manager for fixing the scale when FILTER_CONCENTRATED is set
Parameters
----------
scale : numeric
Scale of the model.
Notes
-----
This is a no-op if scale is None.
This context manager is most useful in models which are explicitly
concentrating out the scale, so that the set of parameters they are
estimating does not include the scale. | fixed_scale | python | statsmodels/statsmodels | statsmodels/tsa/statespace/kalman_filter.py | https://github.com/statsmodels/statsmodels/blob/master/statsmodels/tsa/statespace/kalman_filter.py | BSD-3-Clause |
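The save/scale/restore pattern that `fixed_scale` implements can be sketched with a toy stand-in class (`ToyFilter` is illustrative only, not the statsmodels class): covariances are scaled inside the `with` block and restored in the `finally` clause even if the body raises.

```python
from contextlib import contextmanager
import numpy as np

class ToyFilter:
    """Minimal stand-in showing the save/scale/restore pattern used by
    `fixed_scale` (not the statsmodels implementation)."""
    def __init__(self):
        self.obs_cov = np.eye(2)
        self.state_cov = np.eye(3)
        self.filter_concentrated = True

    @contextmanager
    def fixed_scale(self, scale):
        if scale is not None and scale != 1:
            if not self.filter_concentrated:
                raise ValueError('scale requires FILTER_CONCENTRATED')
            self.filter_concentrated = False
            obs_cov, state_cov = self.obs_cov, self.state_cov
            self.obs_cov = scale * obs_cov        # scale in...
            self.state_cov = scale * state_cov
        try:
            yield
        finally:
            if scale is not None and scale != 1:
                self.obs_cov, self.state_cov = obs_cov, state_cov
                self.filter_concentrated = True   # ...restore on exit

f = ToyFilter()
with f.fixed_scale(4.0):
    inside = f.obs_cov[0, 0]        # covariances scaled by 4 inside
after = f.obs_cov[0, 0]             # originals restored on exit
```

With `scale=None` the manager is a no-op, matching the docstring above.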
def update_representation(self, model, only_options=False):
"""
Update the results to match a given model
Parameters
----------
model : Representation
The model object from which to take the updated values.
only_options : bool, optional
If set to true, only the filter options are updated, and the state
space representation is not updated. Default is False.
Notes
-----
This method is rarely required except for internal usage.
"""
if not only_options:
super().update_representation(model)
# Save the options as boolean variables
for name in self._filter_options:
setattr(self, name, getattr(model, name, None)) | Update the results to match a given model
Parameters
----------
model : Representation
The model object from which to take the updated values.
only_options : bool, optional
If set to true, only the filter options are updated, and the state
space representation is not updated. Default is False.
Notes
-----
This method is rarely required except for internal usage. | update_representation | python | statsmodels/statsmodels | statsmodels/tsa/statespace/kalman_filter.py | https://github.com/statsmodels/statsmodels/blob/master/statsmodels/tsa/statespace/kalman_filter.py | BSD-3-Clause |
def update_filter(self, kalman_filter):
"""
Update the filter results
Parameters
----------
kalman_filter : statespace.kalman_filter.KalmanFilter
The model object from which to take the updated values.
Notes
-----
This method is rarely required except for internal usage.
"""
# State initialization
self.initial_state = np.array(
kalman_filter.model.initial_state, copy=True
)
self.initial_state_cov = np.array(
kalman_filter.model.initial_state_cov, copy=True
)
# Save Kalman filter parameters
self.filter_method = kalman_filter.filter_method
self.inversion_method = kalman_filter.inversion_method
self.stability_method = kalman_filter.stability_method
self.conserve_memory = kalman_filter.conserve_memory
self.filter_timing = kalman_filter.filter_timing
self.tolerance = kalman_filter.tolerance
self.loglikelihood_burn = kalman_filter.loglikelihood_burn
# Save Kalman filter output
self.converged = bool(kalman_filter.converged)
self.period_converged = kalman_filter.period_converged
self.univariate_filter = np.array(kalman_filter.univariate_filter,
copy=True)
self.filtered_state = np.array(kalman_filter.filtered_state, copy=True)
self.filtered_state_cov = np.array(
kalman_filter.filtered_state_cov, copy=True
)
self.predicted_state = np.array(
kalman_filter.predicted_state, copy=True
)
self.predicted_state_cov = np.array(
kalman_filter.predicted_state_cov, copy=True
)
# Reset caches
has_missing = np.sum(self.nmissing) > 0
if not (self.memory_no_std_forecast or self.invert_lu or
self.solve_lu or self.filter_collapsed):
if has_missing:
self._standardized_forecasts_error = np.array(
reorder_missing_vector(
kalman_filter.standardized_forecast_error,
self.missing, prefix=self.prefix))
else:
self._standardized_forecasts_error = np.array(
kalman_filter.standardized_forecast_error, copy=True)
else:
self._standardized_forecasts_error = None
# In the partially missing data case, all entries will
# be in the upper left submatrix rather than the correct placement
# Re-ordering does not make sense in the collapsed case.
if has_missing and (not self.memory_no_gain and
not self.filter_collapsed):
self._kalman_gain = np.array(reorder_missing_matrix(
kalman_filter.kalman_gain, self.missing, reorder_cols=True,
prefix=self.prefix))
self.tmp1 = np.array(reorder_missing_matrix(
kalman_filter.tmp1, self.missing, reorder_cols=True,
prefix=self.prefix))
self.tmp2 = np.array(reorder_missing_vector(
kalman_filter.tmp2, self.missing, prefix=self.prefix))
self.tmp3 = np.array(reorder_missing_matrix(
kalman_filter.tmp3, self.missing, reorder_rows=True,
prefix=self.prefix))
self.tmp4 = np.array(reorder_missing_matrix(
kalman_filter.tmp4, self.missing, reorder_cols=True,
reorder_rows=True, prefix=self.prefix))
else:
if not self.memory_no_gain:
self._kalman_gain = np.array(
kalman_filter.kalman_gain, copy=True)
self.tmp1 = np.array(kalman_filter.tmp1, copy=True)
self.tmp2 = np.array(kalman_filter.tmp2, copy=True)
self.tmp3 = np.array(kalman_filter.tmp3, copy=True)
self.tmp4 = np.array(kalman_filter.tmp4, copy=True)
self.M = np.array(kalman_filter.M, copy=True)
self.M_diffuse = np.array(kalman_filter.M_inf, copy=True)
# Note: use forecasts rather than forecast, so as not to interfere
# with the `forecast` methods in subclasses
self.forecasts = np.array(kalman_filter.forecast, copy=True)
self.forecasts_error = np.array(
kalman_filter.forecast_error, copy=True
)
self.forecasts_error_cov = np.array(
kalman_filter.forecast_error_cov, copy=True
)
# Note: below we will set self.llf, and in the memory_no_likelihood
# case we will replace self.llf_obs = None at that time.
self.llf_obs = np.array(kalman_filter.loglikelihood, copy=True)
# Diffuse objects
self.nobs_diffuse = kalman_filter.nobs_diffuse
self.initial_diffuse_state_cov = None
self.forecasts_error_diffuse_cov = None
self.predicted_diffuse_state_cov = None
if self.nobs_diffuse > 0:
self.initial_diffuse_state_cov = np.array(
kalman_filter.model.initial_diffuse_state_cov, copy=True)
self.predicted_diffuse_state_cov = np.array(
kalman_filter.predicted_diffuse_state_cov, copy=True)
if has_missing and not self.filter_collapsed:
self.forecasts_error_diffuse_cov = np.array(
reorder_missing_matrix(
kalman_filter.forecast_error_diffuse_cov,
self.missing, reorder_cols=True, reorder_rows=True,
prefix=self.prefix))
else:
self.forecasts_error_diffuse_cov = np.array(
kalman_filter.forecast_error_diffuse_cov, copy=True)
# If there was missing data, save the original values from the Kalman
# filter output, since below will set the values corresponding to
# the missing observations to nans.
self.missing_forecasts = None
self.missing_forecasts_error = None
self.missing_forecasts_error_cov = None
if np.sum(self.nmissing) > 0:
# Copy the provided arrays (which are sized to the Kalman filter dataset)
# into new variables
self.missing_forecasts = np.copy(self.forecasts)
self.missing_forecasts_error = np.copy(self.forecasts_error)
self.missing_forecasts_error_cov = (
np.copy(self.forecasts_error_cov)
)
# Save the collapsed values
self.collapsed_forecasts = None
self.collapsed_forecasts_error = None
self.collapsed_forecasts_error_cov = None
if self.filter_collapsed:
# Copy the provided arrays (which are from the collapsed dataset)
# into new variables
self.collapsed_forecasts = self.forecasts[:self.k_states, :]
self.collapsed_forecasts_error = (
self.forecasts_error[:self.k_states, :]
)
self.collapsed_forecasts_error_cov = (
self.forecasts_error_cov[:self.k_states, :self.k_states, :]
)
# Recreate the original arrays (which should be from the original
# dataset) in the appropriate dimension
dtype = self.collapsed_forecasts.dtype
self.forecasts = np.zeros((self.k_endog, self.nobs), dtype=dtype)
self.forecasts_error = np.zeros((self.k_endog, self.nobs),
dtype=dtype)
self.forecasts_error_cov = (
np.zeros((self.k_endog, self.k_endog, self.nobs), dtype=dtype)
)
# Fill in missing values in the forecast, forecast error, and
# forecast error covariance matrix (this is required due to how the
# Kalman filter implements observations that are either partly or
# completely missing)
# Construct the predictions, forecasts
can_compute_mean = not (self.memory_no_forecast_mean or
self.memory_no_predicted_mean)
can_compute_cov = not (self.memory_no_forecast_cov or
self.memory_no_predicted_cov)
if can_compute_mean or can_compute_cov:
for t in range(self.nobs):
design_t = 0 if self.design.shape[2] == 1 else t
obs_cov_t = 0 if self.obs_cov.shape[2] == 1 else t
obs_intercept_t = 0 if self.obs_intercept.shape[1] == 1 else t
# For completely missing observations, the Kalman filter will
# produce forecasts, but forecast errors and the forecast
# error covariance matrix will be zeros - make them nan to
# improve clarity of results.
if self.nmissing[t] > 0:
mask = ~self.missing[:, t].astype(bool)
# We can recover forecasts
# For partially missing observations, the Kalman filter
# will produce all elements (forecasts, forecast errors,
# forecast error covariance matrices) as usual, but their
# dimension will only be equal to the number of non-missing
# elements, and their location in memory will be in the
# first blocks (e.g. for the forecasts_error, the first
# k_endog - nmissing[t] columns will be filled in),
# regardless of which endogenous variables they refer to
# (i.e. the non-missing endogenous variables for that
# observation). Furthermore, the forecast error covariance
# matrix is only valid for those elements. What is done is
# to set all elements to nan for these observations so that
# they are flagged as missing. The variables
# missing_forecasts, etc. then provide the forecasts, etc.
# provided by the Kalman filter, from which the data can be
# retrieved if desired.
if can_compute_mean:
self.forecasts[:, t] = np.dot(
self.design[:, :, design_t],
self.predicted_state[:, t]
) + self.obs_intercept[:, obs_intercept_t]
self.forecasts_error[:, t] = np.nan
self.forecasts_error[mask, t] = (
self.endog[mask, t] - self.forecasts[mask, t])
# TODO: We should only fill in the non-masked elements of
# this array. Also, this will give the multivariate version
# even if univariate filtering was selected. Instead, we
# should use the reordering methods and then replace the
# masked values with NaNs
if can_compute_cov:
self.forecasts_error_cov[:, :, t] = np.dot(
np.dot(self.design[:, :, design_t],
self.predicted_state_cov[:, :, t]),
self.design[:, :, design_t].T
) + self.obs_cov[:, :, obs_cov_t]
# In the collapsed case, everything just needs to be rebuilt
# for the original observed data, since the Kalman filter
# produced these values for the collapsed data.
elif self.filter_collapsed:
if can_compute_mean:
self.forecasts[:, t] = np.dot(
self.design[:, :, design_t],
self.predicted_state[:, t]
) + self.obs_intercept[:, obs_intercept_t]
self.forecasts_error[:, t] = (
self.endog[:, t] - self.forecasts[:, t]
)
if can_compute_cov:
self.forecasts_error_cov[:, :, t] = np.dot(
np.dot(self.design[:, :, design_t],
self.predicted_state_cov[:, :, t]),
self.design[:, :, design_t].T
) + self.obs_cov[:, :, obs_cov_t]
# Note: if we concentrated out the scale, need to adjust the
# loglikelihood values and all of the covariance matrices and the
# values that depend on the covariance matrices
# Note: concentrated computation is not permitted with collapsed
# version, so we do not need to modify collapsed arrays.
self.scale = 1.
if self.filter_concentrated and self.model._scale is None:
d = max(self.loglikelihood_burn, self.nobs_diffuse)
# Compute the scale
nmissing = np.array(kalman_filter.model.nmissing)
nobs_k_endog = np.sum(self.k_endog - nmissing[d:])
# In the univariate case, we need to subtract observations
# associated with a singular forecast error covariance matrix
nobs_k_endog -= kalman_filter.nobs_kendog_univariate_singular
scale_obs = np.array(kalman_filter.scale, copy=True)
if not self.memory_no_likelihood:
self.scale = np.sum(scale_obs[d:]) / nobs_k_endog
else:
self.scale = scale_obs[0] / nobs_k_endog
# Need to modify this for diffuse initialization, since for
# diffuse periods we only need to add in the scale value if the
# diffuse forecast error covariance matrix element was singular
nsingular = 0
if kalman_filter.nobs_diffuse > 0:
Finf = kalman_filter.forecast_error_diffuse_cov
singular = (np.diagonal(Finf).real <=
kalman_filter.tolerance_diffuse)
nsingular = np.sum(~singular, axis=1)
# Adjust the loglikelihood obs (see `KalmanFilter.loglikeobs` for
# defaults on the adjustment)
if not self.memory_no_likelihood:
self.llf_obs += -0.5 * (
(self.k_endog - nmissing - nsingular) * np.log(self.scale)
+ scale_obs / self.scale)
else:
self.llf_obs[0] += -0.5 * np.squeeze(
np.sum(
(self.k_endog - nmissing - nsingular)
* np.log(self.scale)
)
+ scale_obs / self.scale
)
# Scale the filter output
self.obs_cov = self.obs_cov * self.scale
self.state_cov = self.state_cov * self.scale
self.initial_state_cov = self.initial_state_cov * self.scale
self.predicted_state_cov = self.predicted_state_cov * self.scale
self.filtered_state_cov = self.filtered_state_cov * self.scale
self.forecasts_error_cov = self.forecasts_error_cov * self.scale
if self.missing_forecasts_error_cov is not None:
self.missing_forecasts_error_cov = (
self.missing_forecasts_error_cov * self.scale)
# Note: do not have to adjust the Kalman gain or tmp4
self.tmp1 = self.tmp1 * self.scale
self.tmp2 = self.tmp2 / self.scale
self.tmp3 = self.tmp3 / self.scale
if not (self.memory_no_std_forecast or
self.invert_lu or
self.solve_lu or
self.filter_collapsed):
self._standardized_forecasts_error = (
self._standardized_forecasts_error / self.scale**0.5)
# The self.model._scale value is only not None within a fixed_scale
# context, in which case it is set and indicates that we should
# generally view this results object as using a concentrated scale
# (e.g. for d.o.f. computations), but because the fixed scale was
# actually applied to the model prior to filtering, we do not need to
# make any adjustments to the filter output, etc.
elif self.model._scale is not None:
self.filter_concentrated = True
self.scale = self.model._scale
# Now, save self.llf, and handle the memory_no_likelihood case
if not self.memory_no_likelihood:
self.llf = np.sum(self.llf_obs[self.loglikelihood_burn:])
else:
self.llf = self.llf_obs[0]
self.llf_obs = None | Update the filter results | update_filter | python | statsmodels/statsmodels | statsmodels/tsa/statespace/kalman_filter.py | https://github.com/statsmodels/statsmodels/blob/master/statsmodels/tsa/statespace/kalman_filter.py | BSD-3-Clause |
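When the scale is concentrated out (the `filter_concentrated` branch in `update_filter` above), the code computes `scale = sum(scale_obs[d:]) / nobs_k_endog` and then adds `-0.5 * ((k_endog - nmissing) * log(scale) + scale_obs / scale)` to each loglikelihood term. A small numpy sketch of that arithmetic; the variable names mirror the method, but the numbers are illustrative, not drawn from any real model:

```python
import numpy as np

# Hedged sketch of the concentrated-scale adjustment, for a
# univariate model (k_endog = 1) with no missing data and no
# burn-in (d = 0). All numbers are made up for illustration.
k_endog = 1
scale_obs = np.array([0.8, 1.2, 1.0, 1.5])   # per-period e' F^-1 e terms
nmissing = np.zeros(4, dtype=int)
d = 0

nobs_k_endog = np.sum(k_endog - nmissing[d:])   # 4 non-missing scalars
scale = np.sum(scale_obs[d:]) / nobs_k_endog    # 4.5 / 4 = 1.125

# Per-period loglikelihood adjustment for the concentrated scale
llf_adj = -0.5 * ((k_endog - nmissing) * np.log(scale)
                  + scale_obs / scale)
print(round(scale, 3))  # 1.125
```

The same `scale` would then multiply the filtered and predicted covariance matrices, as the method does after computing it.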
def kalman_gain(self):
"""
Kalman gain matrices
"""
if self._kalman_gain is None:
# k x n
self._kalman_gain = np.zeros(
(self.k_states, self.k_endog, self.nobs), dtype=self.dtype)
for t in range(self.nobs):
# In the case of entirely missing observations, let the Kalman
# gain be zeros.
if self.nmissing[t] == self.k_endog:
continue
design_t = 0 if self.design.shape[2] == 1 else t
transition_t = 0 if self.transition.shape[2] == 1 else t
if self.nmissing[t] == 0:
self._kalman_gain[:, :, t] = np.dot(
np.dot(
self.transition[:, :, transition_t],
self.predicted_state_cov[:, :, t]
),
np.dot(
np.transpose(self.design[:, :, design_t]),
np.linalg.inv(self.forecasts_error_cov[:, :, t])
)
)
else:
mask = ~self.missing[:, t].astype(bool)
F = self.forecasts_error_cov[np.ix_(mask, mask, [t])]
self._kalman_gain[:, mask, t] = np.dot(
np.dot(
self.transition[:, :, transition_t],
self.predicted_state_cov[:, :, t]
),
np.dot(
np.transpose(self.design[mask, :, design_t]),
np.linalg.inv(F[:, :, 0])
)
)
return self._kalman_gain | Kalman gain matrices | kalman_gain | python | statsmodels/statsmodels | statsmodels/tsa/statespace/kalman_filter.py | https://github.com/statsmodels/statsmodels/blob/master/statsmodels/tsa/statespace/kalman_filter.py | BSD-3-Clause |
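The `kalman_gain` property above evaluates the textbook gain K_t = T_t P_{t|t-1} Z_t' F_t^{-1}, where F_t = Z_t P_{t|t-1} Z_t' + H_t is the forecast error covariance. A single-period numpy sketch with illustrative system matrices (these names and values are assumptions for the example, not the statsmodels API):

```python
import numpy as np

# One period of the multivariate Kalman gain, K = T P Z' F^{-1},
# for a hypothetical system with k_states = 2 and k_endog = 1.
T = np.array([[0.9, 0.1],
              [0.0, 0.8]])        # transition matrix
Z = np.array([[1.0, 0.0]])        # design matrix
P = np.array([[0.5, 0.1],
              [0.1, 0.3]])        # predicted state covariance
H = np.array([[0.2]])             # observation covariance

F = Z @ P @ Z.T + H               # forecast error covariance
K = T @ P @ Z.T @ np.linalg.inv(F)
print(K.shape)  # (2, 1)
```

For a fully missing observation the property simply leaves the gain at zero, matching the `continue` branch above.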
def __getattr__(self, attr):
"""
Provide access to the representation and filtered output in the
appropriate range (`start` - `end`).
"""
# Prevent infinite recursive lookups
if attr[0] == '_':
raise AttributeError("'%s' object has no attribute '%s'" %
(self.__class__.__name__, attr))
_attr = '_' + attr
# Cache the attribute
if not hasattr(self, _attr):
if attr == 'endog' or attr in self.filter_attributes:
# Get a copy
value = getattr(self.results, attr).copy()
if self.ndynamic > 0:
end = self.end - self.ndynamic - self.nforecast
value = value[..., :end]
if self.oos_results is not None:
oos_value = getattr(self.oos_results, attr).copy()
# Note that the last element of the results predicted state
# and state cov will overlap with the first element of the
# oos predicted state and state cov, so eliminate the
# last element of the results versions
# But if we have dynamic prediction, then we have already
# eliminated the last element of the predicted state, so
# we do not need to do it here.
if self.ndynamic == 0 and attr[:9] == 'predicted':
value = value[..., :-1]
value = np.concatenate([value, oos_value], axis=-1)
# Subset to the correct time frame
value = value[..., self.start:self.end]
elif attr in self.smoother_attributes:
if self.ndynamic > 0:
raise NotImplementedError(
'Cannot retrieve smoothed attributes when using'
' dynamic prediction, since the information set used'
' to compute the smoothed results differs from the'
' information set implied by the dynamic prediction.')
# Get a copy
value = getattr(self.results, attr).copy()
# The oos_results object is only dynamic or out-of-sample,
# so filtered == smoothed
if self.oos_results is not None:
filtered_attr = 'filtered' + attr[8:]
oos_value = getattr(self.oos_results, filtered_attr).copy()
value = np.concatenate([value, oos_value], axis=-1)
# Subset to the correct time frame
value = value[..., self.start:self.end]
elif attr in self.representation_attributes:
value = getattr(self.results, attr).copy()
# If a time-invariant matrix, return it. Otherwise, subset to
# the correct period.
if value.shape[-1] == 1:
value = value[..., 0]
else:
if self.ndynamic > 0:
end = self.end - self.ndynamic - self.nforecast
value = value[..., :end]
if self.oos_results is not None:
oos_value = getattr(self.oos_results, attr).copy()
value = np.concatenate([value, oos_value], axis=-1)
value = value[..., self.start:self.end]
else:
raise AttributeError("'%s' object has no attribute '%s'" %
(self.__class__.__name__, attr))
setattr(self, _attr, value)
return getattr(self, _attr) | Provide access to the representation and filtered output in the appropriate range (`start` - `end`). | __getattr__ | python | statsmodels/statsmodels | statsmodels/tsa/statespace/kalman_filter.py | https://github.com/statsmodels/statsmodels/blob/master/statsmodels/tsa/statespace/kalman_filter.py | BSD-3-Clause |
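The `_attr = '_' + attr` dance in `__getattr__` above is a lazy-caching pattern: compute the sliced array once on first access, stash it under a leading-underscore name, and use an underscore guard to stop recursive lookups. A stripped-down sketch of the same pattern (the class and data here are hypothetical, not part of statsmodels):

```python
class LazySlice:
    """Compute a sliced attribute on first access, then serve the cache."""

    def __init__(self, data, start, end):
        self._data = dict(data)
        self._start, self._end = start, end

    def __getattr__(self, attr):
        # Prevent infinite recursive lookups on the cache names themselves
        if attr.startswith('_'):
            raise AttributeError(attr)
        _attr = '_' + attr
        if not hasattr(self, _attr):
            if attr not in self._data:
                raise AttributeError(attr)
            value = self._data[attr][self._start:self._end]
            setattr(self, _attr, value)   # cache for subsequent accesses
        return getattr(self, _attr)

obj = LazySlice({'forecasts': [1, 2, 3, 4, 5]}, 1, 4)
print(obj.forecasts)  # [2, 3, 4]
```

Note that `__getattr__` is only invoked for names not found by normal lookup, which is why the cached `_forecasts` bypasses it entirely on later accesses.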
def _check_dynamic(dynamic, start, end, nobs):
"""
Verify dynamic and warn or error if issues
Parameters
----------
dynamic : {int, None}
The offset relative to start of the dynamic forecasts. None if no
dynamic forecasts are required.
start : int
The location of the first forecast.
end : int
The location of the final forecast (inclusive).
nobs : int
The number of observations in the time series.
Returns
-------
dynamic : {int, None}
The start location of the first dynamic forecast. None if there
are no in-sample dynamic forecasts.
ndynamic : int
The number of dynamic forecasts
"""
if dynamic is None:
return dynamic, 0
# Replace the relative dynamic offset with an absolute offset
dynamic = start + dynamic
# Validate the `dynamic` parameter
if dynamic < 0:
raise ValueError('Dynamic prediction cannot begin prior to the'
' first observation in the sample.')
elif dynamic > end:
warn('Dynamic prediction specified to begin after the end of'
' prediction, and so has no effect.', ValueWarning)
return None, 0
elif dynamic > nobs:
warn('Dynamic prediction specified to begin during'
' out-of-sample forecasting period, and so has no'
' effect.', ValueWarning)
return None, 0
# Get the total size of the desired dynamic forecasting component
# Note: the first `dynamic` periods of prediction are actually
# *not* dynamic, because dynamic prediction begins at observation
# `dynamic`.
ndynamic = max(0, min(end, nobs) - dynamic)
return dynamic, ndynamic | Verify dynamic and warn or error if issues | _check_dynamic | python | statsmodels/statsmodels | statsmodels/tsa/statespace/kalman_filter.py | https://github.com/statsmodels/statsmodels/blob/master/statsmodels/tsa/statespace/kalman_filter.py | BSD-3-Clause |
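The offset arithmetic in `_check_dynamic` can be exercised standalone. The sketch below reimplements the core logic (the warnings are elided) so the return values can be checked directly; it is illustrative, not the statsmodels function itself:

```python
def check_dynamic(dynamic, start, end, nobs):
    """Standalone sketch of the _check_dynamic offset logic."""
    if dynamic is None:
        return None, 0
    dynamic = start + dynamic          # relative -> absolute offset
    if dynamic < 0:
        raise ValueError('Dynamic prediction cannot begin prior to the'
                         ' first observation in the sample.')
    if dynamic > end or dynamic > nobs:
        return None, 0                 # warning branches elided here
    # First `dynamic` periods of prediction are *not* dynamic
    ndynamic = max(0, min(end, nobs) - dynamic)
    return dynamic, ndynamic

# With 100 observations, predicting periods 90-119 (inclusive) and
# dynamic prediction starting 5 periods after `start`:
print(check_dynamic(5, 90, 119, 100))  # (95, 5)
```

Dynamic prediction that would begin after `end` or entirely out of sample collapses to `(None, 0)`, matching the warning branches above.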
def set_smoother_output(self, smoother_output=None, **kwargs):
"""
Set the smoother output
The smoother can produce several types of results. The smoother output
variable controls which are calculated and returned.
Parameters
----------
smoother_output : int, optional
Bitmask value to set the smoother output to. See notes for details.
**kwargs
Keyword arguments may be used to influence the smoother output by
setting individual boolean flags. See notes for details.
Notes
-----
The smoother output is defined by a collection of boolean flags, and
is internally stored as a bitmask. The methods available are:
SMOOTHER_STATE = 0x01
Calculate and return the smoothed states.
SMOOTHER_STATE_COV = 0x02
Calculate and return the smoothed state covariance matrices.
SMOOTHER_STATE_AUTOCOV = 0x10
Calculate and return the smoothed state lag-one autocovariance
matrices.
SMOOTHER_DISTURBANCE = 0x04
Calculate and return the smoothed state and observation
disturbances.
SMOOTHER_DISTURBANCE_COV = 0x08
Calculate and return the covariance matrices for the smoothed state
and observation disturbances.
SMOOTHER_ALL
Calculate and return all results.
If the bitmask is set directly via the `smoother_output` argument, then
the full method must be provided.
If keyword arguments are used to set individual boolean flags, then
the lowercase of the method must be used as an argument name, and the
value is the desired value of the boolean flag (True or False).
Note that the smoother output may also be specified by directly
modifying the class attributes which are defined similarly to the
keyword arguments.
The default smoother output is SMOOTHER_ALL.
If performance is a concern, only those results which are needed should
be specified as any results that are not specified will not be
calculated. For example, if the smoother output is set to only include
SMOOTHER_STATE, the smoother operates much more quickly than if all
output is required.
Examples
--------
>>> import statsmodels.tsa.statespace.kalman_smoother as ks
>>> mod = ks.KalmanSmoother(1,1)
>>> mod.smoother_output
15
>>> mod.set_smoother_output(smoother_output=0)
>>> mod.smoother_state = True
>>> mod.smoother_output
1
>>> mod.smoother_state
True
"""
if smoother_output is not None:
self.smoother_output = smoother_output
for name in KalmanSmoother.smoother_outputs:
if name in kwargs:
setattr(self, name, kwargs[name]) | Set the smoother output | set_smoother_output | python | statsmodels/statsmodels | statsmodels/tsa/statespace/kalman_smoother.py | https://github.com/statsmodels/statsmodels/blob/master/statsmodels/tsa/statespace/kalman_smoother.py | BSD-3-Clause |
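The smoother output flags documented in `set_smoother_output` compose as a plain bitmask. A self-contained sketch using the flag values from the docstring (bare constants only; the class machinery is omitted, so this is not the statsmodels module itself):

```python
# Flag values as listed in the docstring above
SMOOTHER_STATE = 0x01
SMOOTHER_STATE_COV = 0x02
SMOOTHER_DISTURBANCE = 0x04
SMOOTHER_DISTURBANCE_COV = 0x08
SMOOTHER_STATE_AUTOCOV = 0x10
SMOOTHER_ALL = (SMOOTHER_STATE | SMOOTHER_STATE_COV |
                SMOOTHER_STATE_AUTOCOV | SMOOTHER_DISTURBANCE |
                SMOOTHER_DISTURBANCE_COV)

output = 0
output |= SMOOTHER_STATE                    # turn a single flag on
print(output)                               # 1
print(bool(output & SMOOTHER_STATE_COV))    # False: that flag is off
```

ORing all five flags gives 31; the value 15 shown in the docstring example corresponds to the first four flags (1 + 2 + 4 + 8) without `SMOOTHER_STATE_AUTOCOV`.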
def smooth(self, smoother_output=None, smooth_method=None, results=None,
run_filter=True, prefix=None, complex_step=False,
update_representation=True, update_filter=True,
update_smoother=True, **kwargs):
"""
Apply the Kalman smoother to the statespace model.
Parameters
----------
smoother_output : int, optional
Determines which Kalman smoother output to calculate. Default is all
(including state, disturbances, and all covariances).
results : class or object, optional
If a class, then that class is instantiated and returned with the
result of both filtering and smoothing.
If an object, then that object is updated with the smoothing data.
If None, then a SmootherResults object is returned with both
filtering and smoothing results.
run_filter : bool, optional
Whether or not to run the Kalman filter prior to smoothing. Default
is True.
prefix : str
The prefix of the datatype. Usually only used internally.
Returns
-------
SmootherResults object
"""
# Run the filter
kfilter = self._filter(**kwargs)
# Create the results object
results = self.results_class(self)
if update_representation:
results.update_representation(self)
if update_filter:
results.update_filter(kfilter)
else:
# (even if we don't update all filter results, still need to
# update this)
results.nobs_diffuse = kfilter.nobs_diffuse
# Run the smoother
if smoother_output is None:
smoother_output = self.smoother_output
smoother = self._smooth(smoother_output, results=results, **kwargs)
# Update the results
if update_smoother:
results.update_smoother(smoother)
return results | Apply the Kalman smoother to the statespace model. | smooth | python | statsmodels/statsmodels | statsmodels/tsa/statespace/kalman_smoother.py | https://github.com/statsmodels/statsmodels/blob/master/statsmodels/tsa/statespace/kalman_smoother.py | BSD-3-Clause |
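The `smooth` method above runs the filter forward over the sample and then a smoother backward over the filter output. For intuition, here is a self-contained scalar local-level sketch of that two-pass structure (a Kalman filter followed by a Rauch-Tung-Striebel smoother); it is a toy illustration, not the statsmodels implementation:

```python
import numpy as np

def filter_smooth(y, q, h, x0, p0):
    """Scalar local-level Kalman filter + RTS smoother (illustrative)."""
    n = len(y)
    xp = np.empty(n); pp = np.empty(n)   # predicted state mean / variance
    xf = np.empty(n); pf = np.empty(n)   # filtered state mean / variance
    x_prev, p_prev = x0, p0
    for t in range(n):                   # forward (filtering) pass
        xp[t], pp[t] = x_prev, p_prev + q
        k = pp[t] / (pp[t] + h)          # scalar Kalman gain
        xf[t] = xp[t] + k * (y[t] - xp[t])
        pf[t] = (1 - k) * pp[t]
        x_prev, p_prev = xf[t], pf[t]
    xs = xf.copy()
    for t in range(n - 2, -1, -1):       # backward (smoothing) pass
        c = pf[t] / pp[t + 1]
        xs[t] = xf[t] + c * (xs[t + 1] - xf[t])
    return xf, xs

xf, xs = filter_smooth([1.0, 1.2, 0.9], q=0.1, h=0.5, x0=0.0, p0=1e3)
```

At the final period the smoothed and filtered states coincide, since no later observations exist to revise the estimate.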
def update_representation(self, model, only_options=False):
"""
Update the results to match a given model
Parameters
----------
model : Representation
The model object from which to take the updated values.
only_options : bool, optional
If set to true, only the smoother and filter options are updated,
and the state space representation is not updated. Default is
False.
Notes
-----
This method is rarely required except for internal usage.
"""
super().update_representation(model, only_options)
# Save the options as boolean variables
for name in self._smoother_options:
setattr(self, name, getattr(model, name, None))
# Initialize holders for smoothed forecasts
self._smoothed_forecasts = None
self._smoothed_forecasts_error = None
self._smoothed_forecasts_error_cov = None | Update the results to match a given model | update_representation | python | statsmodels/statsmodels | statsmodels/tsa/statespace/kalman_smoother.py | https://github.com/statsmodels/statsmodels/blob/master/statsmodels/tsa/statespace/kalman_smoother.py | BSD-3-Clause |
def update_smoother(self, smoother):
"""
Update the smoother results
Parameters
----------
smoother : KalmanSmoother
The model object from which to take the updated values.
Notes
-----
This method is rarely required except for internal usage.
"""
# Copy the appropriate output
attributes = []
# Since update_representation will already have been called, we can
# use the boolean options smoother_* and know they match the smoother
# itself
if self.smoother_state or self.smoother_disturbance:
attributes.append('scaled_smoothed_estimator')
if self.smoother_state_cov or self.smoother_disturbance_cov:
attributes.append('scaled_smoothed_estimator_cov')
if self.smoother_state:
attributes.append('smoothed_state')
if self.smoother_state_cov:
attributes.append('smoothed_state_cov')
if self.smoother_state_autocov:
attributes.append('smoothed_state_autocov')
if self.smoother_disturbance:
attributes += [
'smoothing_error',
'smoothed_measurement_disturbance',
'smoothed_state_disturbance'
]
if self.smoother_disturbance_cov:
attributes += [
'smoothed_measurement_disturbance_cov',
'smoothed_state_disturbance_cov'
]
has_missing = np.sum(self.nmissing) > 0
for name in self._smoother_attributes:
if name == 'smoother_output':
pass
elif name in attributes:
if name in ['smoothing_error',
'smoothed_measurement_disturbance']:
vector = getattr(smoother, name, None)
if vector is not None and has_missing:
vector = np.array(reorder_missing_vector(
vector, self.missing, prefix=self.prefix))
else:
vector = np.array(vector, copy=True)
setattr(self, name, vector)
elif name == 'smoothed_measurement_disturbance_cov':
matrix = getattr(smoother, name, None)
if matrix is not None and has_missing:
matrix = reorder_missing_matrix(
matrix, self.missing, reorder_rows=True,
reorder_cols=True, prefix=self.prefix)
# In the missing data case, we want to set the missing
# components equal to their unconditional distribution
copy_index_matrix(
self.obs_cov, matrix, self.missing,
index_rows=True, index_cols=True, inplace=True,
prefix=self.prefix)
else:
matrix = np.array(matrix, copy=True)
setattr(self, name, matrix)
else:
setattr(self, name,
np.array(getattr(smoother, name, None), copy=True))
else:
setattr(self, name, None)
self.innovations_transition = (
np.array(smoother.innovations_transition, copy=True))
# Diffuse objects
self.scaled_smoothed_diffuse_estimator = None
self.scaled_smoothed_diffuse1_estimator_cov = None
self.scaled_smoothed_diffuse2_estimator_cov = None
if self.nobs_diffuse > 0:
self.scaled_smoothed_diffuse_estimator = np.array(
smoother.scaled_smoothed_diffuse_estimator, copy=True)
self.scaled_smoothed_diffuse1_estimator_cov = np.array(
smoother.scaled_smoothed_diffuse1_estimator_cov, copy=True)
self.scaled_smoothed_diffuse2_estimator_cov = np.array(
smoother.scaled_smoothed_diffuse2_estimator_cov, copy=True)
# Adjustments
# For r_t (and similarly for N_t), what was calculated was
# r_T, ..., r_{-1}. We only want r_0, ..., r_T
# so exclude the appropriate element so that the time index is
# consistent with the other returned output
# r_t stored such that scaled_smoothed_estimator[0] == r_{-1}
start = 1
end = None
if 'scaled_smoothed_estimator' in attributes:
self.scaled_smoothed_estimator_presample = (
self.scaled_smoothed_estimator[:, 0])
self.scaled_smoothed_estimator = (
self.scaled_smoothed_estimator[:, start:end]
)
if 'scaled_smoothed_estimator_cov' in attributes:
self.scaled_smoothed_estimator_cov_presample = (
self.scaled_smoothed_estimator_cov[:, :, 0])
self.scaled_smoothed_estimator_cov = (
self.scaled_smoothed_estimator_cov[:, :, start:end]
)
# Clear the smoothed forecasts
self._smoothed_forecasts = None
self._smoothed_forecasts_error = None
self._smoothed_forecasts_error_cov = None
# Note: if we concentrated out the scale, need to adjust the
# loglikelihood values and all of the covariance matrices and the
# values that depend on the covariance matrices
if self.filter_concentrated and self.model._scale is None:
self.smoothed_state_cov *= self.scale
self.smoothed_state_autocov *= self.scale
self.smoothed_state_disturbance_cov *= self.scale
self.smoothed_measurement_disturbance_cov *= self.scale
self.scaled_smoothed_estimator_presample /= self.scale
self.scaled_smoothed_estimator /= self.scale
self.scaled_smoothed_estimator_cov_presample /= self.scale
self.scaled_smoothed_estimator_cov /= self.scale
self.smoothing_error /= self.scale
# Cache
self.__smoothed_state_autocovariance = {} | Update the smoother results | update_smoother | python | statsmodels/statsmodels | statsmodels/tsa/statespace/kalman_smoother.py | https://github.com/statsmodels/statsmodels/blob/master/statsmodels/tsa/statespace/kalman_smoother.py | BSD-3-Clause |
def _smoothed_state_autocovariance(self, shift, start, end,
extend_kwargs=None):
"""
Compute "forward" autocovariances, Cov(t, t+j)
Parameters
----------
shift : int
The number of period to shift forwards when computing the
autocovariance. This has the opposite sign as `lag` from the
`smoothed_state_autocovariance` method.
start : int, optional
The start of the interval (inclusive) of autocovariances to compute
and return.
end : int, optional
The end of the interval (exclusive) autocovariances to compute and
return. Note that since it is an exclusive endpoint, the returned
autocovariances do not include the value at this index.
extend_kwargs : dict, optional
Keyword arguments containing updated state space system matrices
for handling out-of-sample autocovariance computations in
time-varying state space models.
"""
if extend_kwargs is None:
extend_kwargs = {}
# Size of returned array in the time dimension
n = end - start
# Get number of post-sample periods we need to create an extended
# model to compute
if shift == 0:
max_insample = self.nobs - shift
else:
max_insample = self.nobs - shift + 1
n_postsample = max(0, end - max_insample)
# Get full in-sample arrays
if shift != 0:
L = self.innovations_transition
P = self.predicted_state_cov
N = self.scaled_smoothed_estimator_cov
else:
acov = self.smoothed_state_cov
# If applicable, append out-of-sample arrays
if n_postsample > 0:
# Note: we need 1 less than the number of post-sample periods
endog = np.zeros((n_postsample, self.k_endog)) * np.nan
mod = self.model.extend(endog, start=self.nobs, **extend_kwargs)
mod.initialize_known(self.predicted_state[..., self.nobs],
self.predicted_state_cov[..., self.nobs])
res = mod.smooth()
if shift != 0:
start_insample = max(0, start)
L = np.concatenate((L[..., start_insample:],
res.innovations_transition), axis=2)
P = np.concatenate((P[..., start_insample:],
res.predicted_state_cov[..., 1:]),
axis=2)
N = np.concatenate((N[..., start_insample:],
res.scaled_smoothed_estimator_cov),
axis=2)
end -= start_insample
start -= start_insample
else:
acov = np.concatenate((acov, res.predicted_state_cov), axis=2)
if shift != 0:
# Subset to appropriate start, end
start_insample = max(0, start)
LT = L[..., start_insample:end + shift - 1].T
P = P[..., start_insample:end + shift].T
N = N[..., start_insample:end + shift - 1].T
# Intermediate computations
tmpLT = np.eye(self.k_states)[None, :, :]
length = P.shape[0] - shift # this is the required length of LT
for i in range(1, shift + 1):
tmpLT = LT[shift - i:length + shift - i] @ tmpLT
eye = np.eye(self.k_states)[None, ...]
# Compute the autocovariance
acov = np.zeros((n, self.k_states, self.k_states))
acov[:start_insample - start] = np.nan
acov[start_insample - start:] = (
P[:-shift] @ tmpLT @ (eye - N[shift - 1:] @ P[shift:]))
else:
acov = acov.T[start:end]
return acov | Compute "forward" autocovariances, Cov(t, t+j)
Parameters
----------
shift : int
The number of periods to shift forwards when computing the
autocovariance. This has the opposite sign as `lag` from the
`smoothed_state_autocovariance` method.
start : int, optional
The start of the interval (inclusive) of autocovariances to compute
and return.
end : int, optional
The end of the interval (exclusive) autocovariances to compute and
return. Note that since it is an exclusive endpoint, the returned
autocovariances do not include the value at this index.
extend_kwargs : dict, optional
Keyword arguments containing updated state space system matrices
for handling out-of-sample autocovariance computations in
time-varying state space models. | _smoothed_state_autocovariance | python | statsmodels/statsmodels | statsmodels/tsa/statespace/kalman_smoother.py | https://github.com/statsmodels/statsmodels/blob/master/statsmodels/tsa/statespace/kalman_smoother.py | BSD-3-Clause |
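The `tmpLT` loop above relies on numpy's stacked matrix multiplication: for 3-d arrays, `@` multiplies the trailing two axes independently for each leading index. A standalone sketch with hypothetical sizes (T=6 periods, k=2 states, shift=2) shows what the accumulated product contains:

```python
import numpy as np

rng = np.random.default_rng(0)
T, k, shift = 6, 2, 2  # hypothetical sizes
LT = rng.standard_normal((T, k, k))

# Same accumulation as the `tmpLT` loop above
tmpLT = np.eye(k)[None, :, :]
length = T - shift
for i in range(1, shift + 1):
    tmpLT = LT[shift - i:length + shift - i] @ tmpLT

# For shift=2, entry t is LT[t] @ LT[t + 1], i.e. the transpose of
# the chained product L[t + 1] @ L[t]
print(np.allclose(tmpLT[0], LT[0] @ LT[1]))
```

Because the batch dimension is sliced with a different offset on each iteration, a single loop over `shift` builds all T - shift chained products at once, rather than looping over time.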
def factors_ix(self):
"""Factor state index array, shaped (k_factors, lags)."""
# i.e. the position in the state vector of the second lag of the third
# factor is factors_ix[2, 1]
# ravel(order='F') gives e.g (f0.L1, f1.L1, f0.L2, f1.L2, f0.L3, ...)
# while
# ravel(order='C') gives e.g (f0.L1, f0.L2, f0.L3, f1.L1, f1.L2, ...)
o = self.state_offset
return np.reshape(o + np.arange(self.k_factors * self._factor_order),
(self._factor_order, self.k_factors)).T | Factor state index array, shaped (k_factors, lags). | factors_ix | python | statsmodels/statsmodels | statsmodels/tsa/statespace/dynamic_factor_mq.py | https://github.com/statsmodels/statsmodels/blob/master/statsmodels/tsa/statespace/dynamic_factor_mq.py | BSD-3-Clause |
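The lag-major layout described in the comments above can be checked standalone; this sketch uses hypothetical sizes (two factors, order three, state offset zero):

```python
import numpy as np

k_factors, factor_order, offset = 2, 3, 0  # hypothetical sizes

# States are stored lag-major: (f0.L1, f1.L1, f0.L2, f1.L2, f0.L3, f1.L3).
# Reshaping by (lags, factors) and transposing gives an index array where
# factors_ix[i, j] is the state position of factor i at lag j + 1.
factors_ix = np.reshape(offset + np.arange(k_factors * factor_order),
                        (factor_order, k_factors)).T

print(factors_ix)
# The second lag of the second factor sits at state position 3
print(factors_ix[1, 1])
```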
def factors(self):
"""Factors and all lags in the state vector (max(5, p))."""
# Note that this is equivalent to factors_ix with ravel(order='F')
o = self.state_offset
return np.s_[o:o + self.k_factors * self._factor_order] | Factors and all lags in the state vector (max(5, p)). | factors | python | statsmodels/statsmodels | statsmodels/tsa/statespace/dynamic_factor_mq.py | https://github.com/statsmodels/statsmodels/blob/master/statsmodels/tsa/statespace/dynamic_factor_mq.py | BSD-3-Clause |
def factors_ar(self):
"""Factors and all lags used in the factor autoregression (p)."""
o = self.state_offset
return np.s_[o:o + self.k_factors * self.factor_order] | Factors and all lags used in the factor autoregression (p). | factors_ar | python | statsmodels/statsmodels | statsmodels/tsa/statespace/dynamic_factor_mq.py | https://github.com/statsmodels/statsmodels/blob/master/statsmodels/tsa/statespace/dynamic_factor_mq.py | BSD-3-Clause |
def factors_L1(self):
"""Factors (first block / lag only)."""
o = self.state_offset
return np.s_[o:o + self.k_factors] | Factors (first block / lag only). | factors_L1 | python | statsmodels/statsmodels | statsmodels/tsa/statespace/dynamic_factor_mq.py | https://github.com/statsmodels/statsmodels/blob/master/statsmodels/tsa/statespace/dynamic_factor_mq.py | BSD-3-Clause |
def factors_L1_5(self):
"""Factors plus four lags."""
o = self.state_offset
return np.s_[o:o + self.k_factors * 5] | Factors plus four lags. | factors_L1_5 | python | statsmodels/statsmodels | statsmodels/tsa/statespace/dynamic_factor_mq.py | https://github.com/statsmodels/statsmodels/blob/master/statsmodels/tsa/statespace/dynamic_factor_mq.py | BSD-3-Clause |
def _apply_factor_multiplicities(self, factors, factor_orders,
factor_multiplicities):
"""
Expand `factors` and `factor_orders` to account for factor multiplicity.
For example, if there is a `global` factor with multiplicity 2, then
this method expands that into `global.1` and `global.2` in both the
`factors` and `factor_orders` dictionaries.
Parameters
----------
factors : dict
Dictionary of {endog_name: list of factor names}
factor_orders : dict
Dictionary of {tuple of factor names: factor order}
factor_multiplicities : dict
Dictionary of {factor name: factor multiplicity}
Returns
-------
new_factors : dict
Dictionary of {endog_name: list of factor names}, with factor names
expanded to incorporate multiplicities.
new_factor_orders : dict
Dictionary of {tuple of factor names: factor order}, with factor
names in each tuple expanded to incorporate multiplicities.
"""
# Expand the factors to account for the multiplicities
new_factors = {}
for endog_name, factors_list in factors.items():
new_factor_list = []
for factor_name in factors_list:
n = factor_multiplicities.get(factor_name, 1)
if n > 1:
new_factor_list += [f'{factor_name}.{i + 1}'
for i in range(n)]
else:
new_factor_list.append(factor_name)
new_factors[endog_name] = new_factor_list
# Expand the factor orders to account for the multiplicities
new_factor_orders = {}
for block, factor_order in factor_orders.items():
if not isinstance(block, tuple):
block = (block,)
new_block = []
for factor_name in block:
n = factor_multiplicities.get(factor_name, 1)
if n > 1:
new_block += [f'{factor_name}.{i + 1}'
for i in range(n)]
else:
new_block += [factor_name]
new_factor_orders[tuple(new_block)] = factor_order
return new_factors, new_factor_orders | Expand `factors` and `factor_orders` to account for factor multiplicity.
For example, if there is a `global` factor with multiplicity 2, then
this method expands that into `global.1` and `global.2` in both the
`factors` and `factor_orders` dictionaries.
Parameters
----------
factors : dict
Dictionary of {endog_name: list of factor names}
factor_orders : dict
Dictionary of {tuple of factor names: factor order}
factor_multiplicities : dict
Dictionary of {factor name: factor multiplicity}
Returns
-------
new_factors : dict
Dictionary of {endog_name: list of factor names}, with factor names
expanded to incorporate multiplicities.
new_factor_orders : dict
Dictionary of {tuple of factor names: factor order}, with factor
names in each tuple expanded to incorporate multiplicities. | _apply_factor_multiplicities | python | statsmodels/statsmodels | statsmodels/tsa/statespace/dynamic_factor_mq.py | https://github.com/statsmodels/statsmodels/blob/master/statsmodels/tsa/statespace/dynamic_factor_mq.py | BSD-3-Clause |
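The per-factor expansion performed above can be illustrated in isolation; this sketch (with hypothetical factor names) mirrors the inner loop applied to both the `factors` and `factor_orders` dictionaries:

```python
def expand_names(factor_names, multiplicities):
    """Expand factor names by multiplicity, e.g. 'global' with
    multiplicity 2 becomes 'global.1', 'global.2'."""
    out = []
    for name in factor_names:
        n = multiplicities.get(name, 1)
        if n > 1:
            out += [f'{name}.{i + 1}' for i in range(n)]
        else:
            out.append(name)
    return out

print(expand_names(['global', 'housing'], {'global': 2}))
# ['global.1', 'global.2', 'housing']
```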
def _construct_endog_factor_map(self, factors, endog_names):
"""
Construct mapping of observed variables to factors.
Parameters
----------
factors : dict
Dictionary of {endog_name: list of factor names}
endog_names : list of str
List of the names of the observed variables.
Returns
-------
endog_factor_map : pd.DataFrame
Boolean dataframe with `endog_names` as the index and the factor
names (computed from the `factors` input) as the columns. Each cell
is True if the associated factor is allowed to load on the
associated observed variable.
"""
# Validate that all entries in the factors dictionary have associated
# factors
missing = []
for key, value in factors.items():
if not isinstance(value, (list, tuple)) or len(value) == 0:
missing.append(key)
if len(missing):
raise ValueError('Each observed variable must be mapped to at'
' least one factor in the `factors` dictionary.'
f' Variables missing factors are: {missing}.')
# Validate that we have been told about the factors for each endog
# variable. This is because it doesn't make sense to include an
# observed variable that doesn't load on any factor
missing = set(endog_names).difference(set(factors.keys()))
if len(missing):
raise ValueError('If a `factors` dictionary is provided, then'
' it must include entries for each observed'
f' variable. Missing variables are: {missing}.')
# Figure out the set of factor names
# (0 is just a dummy value for the dict - we just do it this way to
# collect the keys, in order, without duplicates.)
factor_names = {}
for key, value in factors.items():
if isinstance(value, str):
factor_names[value] = 0
else:
factor_names.update({v: 0 for v in value})
factor_names = list(factor_names.keys())
k_factors = len(factor_names)
endog_factor_map = pd.DataFrame(
np.zeros((self.k_endog, k_factors), dtype=bool),
index=pd.Index(endog_names, name='endog'),
columns=pd.Index(factor_names, name='factor'))
for key, value in factors.items():
endog_factor_map.loc[key, value] = True
return endog_factor_map | Construct mapping of observed variables to factors.
Parameters
----------
factors : dict
Dictionary of {endog_name: list of factor names}
endog_names : list of str
List of the names of the observed variables.
Returns
-------
endog_factor_map : pd.DataFrame
Boolean dataframe with `endog_names` as the index and the factor
names (computed from the `factors` input) as the columns. Each cell
is True if the associated factor is allowed to load on the
associated observed variable. | _construct_endog_factor_map | python | statsmodels/statsmodels | statsmodels/tsa/statespace/dynamic_factor_mq.py | https://github.com/statsmodels/statsmodels/blob/master/statsmodels/tsa/statespace/dynamic_factor_mq.py | BSD-3-Clause |
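The boolean loading map built above can be reproduced on a toy specification (variable and factor names here are hypothetical):

```python
import numpy as np
import pandas as pd

# Hypothetical mapping of observed series to factor names
factors = {'gdp': ['global'], 'cpi': ['global', 'prices']}
endog_names = ['gdp', 'cpi']

# Collect factor names in first-seen order without duplicates
# (dict keys preserve insertion order, as in the method above)
factor_names = {}
for value in factors.values():
    factor_names.update({v: 0 for v in value})
factor_names = list(factor_names)

endog_factor_map = pd.DataFrame(
    np.zeros((len(endog_names), len(factor_names)), dtype=bool),
    index=pd.Index(endog_names, name='endog'),
    columns=pd.Index(factor_names, name='factor'))
for key, value in factors.items():
    endog_factor_map.loc[key, value] = True

print(endog_factor_map)
```

Each True cell marks a factor that is allowed to load on that observed variable; everything else stays restricted to zero in the loading matrix.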
def factors_L1(self):
"""Factors."""
ix = np.arange(self.k_states_factors)
iloc = tuple(ix[block.factors_L1] for block in self.factor_blocks)
return np.concatenate(iloc) | Factors. | factors_L1 | python | statsmodels/statsmodels | statsmodels/tsa/statespace/dynamic_factor_mq.py | https://github.com/statsmodels/statsmodels/blob/master/statsmodels/tsa/statespace/dynamic_factor_mq.py | BSD-3-Clause |
def factors_L1_5_ix(self):
"""Factors plus any lags, index shaped (5, k_factors)."""
ix = np.arange(self.k_states_factors)
iloc = []
for block in self.factor_blocks:
iloc.append(ix[block.factors_L1_5].reshape(5, block.k_factors))
return np.concatenate(iloc, axis=1) | Factors plus any lags, index shaped (5, k_factors). | factors_L1_5_ix | python | statsmodels/statsmodels | statsmodels/tsa/statespace/dynamic_factor_mq.py | https://github.com/statsmodels/statsmodels/blob/master/statsmodels/tsa/statespace/dynamic_factor_mq.py | BSD-3-Clause |
def idio_ar_L1(self):
"""Idiosyncratic AR states, (first block / lag only)."""
ix1 = self.k_states_factors
if self.idiosyncratic_ar1:
ix2 = ix1 + self.k_endog
else:
ix2 = ix1 + self.k_endog_Q
return np.s_[ix1:ix2] | Idiosyncratic AR states, (first block / lag only). | idio_ar_L1 | python | statsmodels/statsmodels | statsmodels/tsa/statespace/dynamic_factor_mq.py | https://github.com/statsmodels/statsmodels/blob/master/statsmodels/tsa/statespace/dynamic_factor_mq.py | BSD-3-Clause |
def idio_ar_M(self):
"""Idiosyncratic AR states for monthly variables."""
ix1 = self.k_states_factors
ix2 = ix1
if self.idiosyncratic_ar1:
ix2 += self.k_endog_M
return np.s_[ix1:ix2] | Idiosyncratic AR states for monthly variables. | idio_ar_M | python | statsmodels/statsmodels | statsmodels/tsa/statespace/dynamic_factor_mq.py | https://github.com/statsmodels/statsmodels/blob/master/statsmodels/tsa/statespace/dynamic_factor_mq.py | BSD-3-Clause |
def idio_ar_Q(self):
"""Idiosyncratic AR states and all lags for quarterly variables."""
# Note that this is equivalent to idio_ar_Q_ix with ravel(order='F')
ix1 = self.k_states_factors
if self.idiosyncratic_ar1:
ix1 += self.k_endog_M
ix2 = ix1 + self.k_endog_Q * 5
return np.s_[ix1:ix2] | Idiosyncratic AR states and all lags for quarterly variables. | idio_ar_Q | python | statsmodels/statsmodels | statsmodels/tsa/statespace/dynamic_factor_mq.py | https://github.com/statsmodels/statsmodels/blob/master/statsmodels/tsa/statespace/dynamic_factor_mq.py | BSD-3-Clause |
def idio_ar_Q_ix(self):
"""Idiosyncratic AR (quarterly) state index, (k_endog_Q, lags)."""
# i.e. the position in the state vector of the second lag of the third
# quarterly variable is idio_ar_Q_ix[2, 1]
# ravel(order='F') gives e.g. (y1.L1, y2.L1, y1.L2, y2.L2, y1.L3, ...)
# while
# ravel(order='C') gives e.g (y1.L1, y1.L2, y1.L3, y2.L1, y2.L2, ...)
start = self.k_states_factors
if self.idiosyncratic_ar1:
start += self.k_endog_M
return (start + np.reshape(
np.arange(5 * self.k_endog_Q), (5, self.k_endog_Q)).T) | Idiosyncratic AR (quarterly) state index, (k_endog_Q, lags). | idio_ar_Q_ix | python | statsmodels/statsmodels | statsmodels/tsa/statespace/dynamic_factor_mq.py | https://github.com/statsmodels/statsmodels/blob/master/statsmodels/tsa/statespace/dynamic_factor_mq.py | BSD-3-Clause |
def endog_factor_iloc(self):
"""List of list of int, factor indexes for each observed variable."""
# i.e. endog_factor_iloc[i] is a list of integer locations of the
# factors that load on the ith observed variable
if self._endog_factor_iloc is None:
ilocs = []
for i in range(self.k_endog):
ilocs.append(np.where(self.endog_factor_map.iloc[i])[0])
self._endog_factor_iloc = ilocs
return self._endog_factor_iloc | List of list of int, factor indexes for each observed variable. | endog_factor_iloc | python | statsmodels/statsmodels | statsmodels/tsa/statespace/dynamic_factor_mq.py | https://github.com/statsmodels/statsmodels/blob/master/statsmodels/tsa/statespace/dynamic_factor_mq.py | BSD-3-Clause |
def __getitem__(self, key):
"""
Use square brackets to access index / slice elements.
This is convenient in highlighting the indexing / slice quality of
these attributes in the code below.
"""
if key in ['factors_L1', 'factors_L1_5_ix', 'idio_ar_L1', 'idio_ar_M',
'idio_ar_Q', 'idio_ar_Q_ix']:
return getattr(self, key)
else:
raise KeyError(key) | Use square brackets to access index / slice elements.
This is convenient in highlighting the indexing / slice quality of
these attributes in the code below. | __getitem__ | python | statsmodels/statsmodels | statsmodels/tsa/statespace/dynamic_factor_mq.py | https://github.com/statsmodels/statsmodels/blob/master/statsmodels/tsa/statespace/dynamic_factor_mq.py | BSD-3-Clause |
def construct_endog(cls, endog_monthly, endog_quarterly):
"""
Construct a combined dataset from separate monthly and quarterly data.
Parameters
----------
endog_monthly : array_like
Monthly dataset. If a quarterly dataset is given, then this must
be a Pandas object with a PeriodIndex or DatetimeIndex at a monthly
frequency.
endog_quarterly : array_like or None
Quarterly dataset. If not None, then this must be a Pandas object
with a PeriodIndex or DatetimeIndex at a quarterly frequency.
Returns
-------
endog : array_like
If both endog_monthly and endog_quarterly were given, this is a
Pandas DataFrame with a PeriodIndex at the monthly frequency, with
all of the columns from `endog_monthly` ordered first and the
columns from `endog_quarterly` ordered afterwards. Otherwise it is
simply the input `endog_monthly` dataset.
k_endog_monthly : int
The number of monthly variables (which are ordered first) in the
returned `endog` dataset.
"""
# Create combined dataset
if endog_quarterly is not None:
# Validate endog_monthly
base_msg = ('If given both monthly and quarterly data'
' then the monthly dataset must be a Pandas'
' object with a date index at a monthly frequency.')
if not isinstance(endog_monthly, (pd.Series, pd.DataFrame)):
raise ValueError('Given monthly dataset is not a'
' Pandas object. ' + base_msg)
elif endog_monthly.index.inferred_type not in ("datetime64",
"period"):
raise ValueError('Given monthly dataset has an'
' index with non-date values. ' + base_msg)
elif not getattr(endog_monthly.index, 'freqstr', 'N')[0] == 'M':
freqstr = getattr(endog_monthly.index, 'freqstr', 'None')
raise ValueError('Index of given monthly dataset has a'
' non-monthly frequency (to check this,'
' examine the `freqstr` attribute of the'
' index of the dataset - it should start with'
' M if it is monthly).'
f' Got {freqstr}. ' + base_msg)
# Validate endog_quarterly
base_msg = ('If a quarterly dataset is given, then it must be a'
' Pandas object with a date index at a quarterly'
' frequency.')
if not isinstance(endog_quarterly, (pd.Series, pd.DataFrame)):
raise ValueError('Given quarterly dataset is not a'
' Pandas object. ' + base_msg)
elif endog_quarterly.index.inferred_type not in ("datetime64",
"period"):
raise ValueError('Given quarterly dataset has an'
' index with non-date values. ' + base_msg)
elif not getattr(endog_quarterly.index, 'freqstr', 'N')[0] == 'Q':
freqstr = getattr(endog_quarterly.index, 'freqstr', 'None')
raise ValueError('Index of given quarterly dataset'
' has a non-quarterly frequency (to check'
' this, examine the `freqstr` attribute of'
' the index of the dataset - it should start'
' with Q if it is quarterly).'
f' Got {freqstr}. ' + base_msg)
# Convert to PeriodIndex, if applicable
if hasattr(endog_monthly.index, 'to_period'):
endog_monthly = endog_monthly.to_period('M')
if hasattr(endog_quarterly.index, 'to_period'):
endog_quarterly = endog_quarterly.to_period('Q')
# Combine the datasets
quarterly_resamp = endog_quarterly.copy()
quarterly_resamp.index = quarterly_resamp.index.to_timestamp()
quarterly_resamp = quarterly_resamp.resample(QUARTER_END).first()
quarterly_resamp = quarterly_resamp.resample(MONTH_END).first()
quarterly_resamp.index = quarterly_resamp.index.to_period()
endog = pd.concat([endog_monthly, quarterly_resamp], axis=1)
# Make sure we didn't accidentally get duplicate column names
column_counts = endog.columns.value_counts()
if column_counts.max() > 1:
columns = endog.columns.values.astype(object)
for name in column_counts.index:
count = column_counts.loc[name]
if count == 1:
continue
mask = columns == name
columns[mask] = [f'{name}{i + 1}' for i in range(count)]
endog.columns = columns
else:
endog = endog_monthly.copy()
shape = endog_monthly.shape
k_endog_monthly = shape[1] if len(shape) == 2 else 1
return endog, k_endog_monthly | Construct a combined dataset from separate monthly and quarterly data.
Parameters
----------
endog_monthly : array_like
Monthly dataset. If a quarterly dataset is given, then this must
be a Pandas object with a PeriodIndex or DatetimeIndex at a monthly
frequency.
endog_quarterly : array_like or None
Quarterly dataset. If not None, then this must be a Pandas object
with a PeriodIndex or DatetimeIndex at a quarterly frequency.
Returns
-------
endog : array_like
If both endog_monthly and endog_quarterly were given, this is a
Pandas DataFrame with a PeriodIndex at the monthly frequency, with
all of the columns from `endog_monthly` ordered first and the
columns from `endog_quarterly` ordered afterwards. Otherwise it is
simply the input `endog_monthly` dataset.
k_endog_monthly : int
The number of monthly variables (which are ordered first) in the
returned `endog` dataset. | construct_endog | python | statsmodels/statsmodels | statsmodels/tsa/statespace/dynamic_factor_mq.py | https://github.com/statsmodels/statsmodels/blob/master/statsmodels/tsa/statespace/dynamic_factor_mq.py | BSD-3-Clause |
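The monthly/quarterly alignment described above can be sketched on toy data; this version places each quarterly value in the last month of its quarter via `PeriodIndex.asfreq`, a simplified stand-in for the resample steps in `construct_endog` (series names are hypothetical):

```python
import numpy as np
import pandas as pd

# Hypothetical data: six months of monthly data plus two quarterly values
endog_m = pd.Series(np.arange(6.), name='gdp_m',
                    index=pd.period_range('2000-01', periods=6, freq='M'))
endog_q = pd.Series([10., 20.], name='gdp_q',
                    index=pd.period_range('2000Q1', periods=2, freq='Q'))

# Place each quarterly value in the last month of its quarter; the
# intervening months are left as NaN, to be handled by the filter
resamp = endog_q.copy()
resamp.index = resamp.index.asfreq('M', how='end')
endog = pd.concat([endog_m, resamp], axis=1)
print(endog)
```

The result is a single monthly-frequency DataFrame with the monthly columns first, matching the ordering contract in the docstring above.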
def clone(self, endog, k_endog_monthly=None, endog_quarterly=None,
retain_standardization=False, **kwargs):
"""
Clone state space model with new data and optionally new specification.
Parameters
----------
endog : array_like
The observed time-series process :math:`y`
k_endog_monthly : int, optional
If specifying a monthly/quarterly mixed frequency model in which
the provided `endog` dataset contains both the monthly and
quarterly data, this variable should be used to indicate how many
of the variables are monthly.
endog_quarterly : array_like, optional
Observations of quarterly variables. If provided, must be a
Pandas Series or DataFrame with a DatetimeIndex or PeriodIndex at
the quarterly frequency.
kwargs
Keyword arguments to pass to the new model class to change the
model specification.
Returns
-------
model : DynamicFactorMQ instance
"""
if retain_standardization and self.standardize:
kwargs['standardize'] = (self._endog_mean, self._endog_std)
mod = self._clone_from_init_kwds(
endog, k_endog_monthly=k_endog_monthly,
endog_quarterly=endog_quarterly, **kwargs)
return mod | Clone state space model with new data and optionally new specification.
Parameters
----------
endog : array_like
The observed time-series process :math:`y`
k_endog_monthly : int, optional
If specifying a monthly/quarterly mixed frequency model in which
the provided `endog` dataset contains both the monthly and
quarterly data, this variable should be used to indicate how many
of the variables are monthly.
endog_quarterly : array_like, optional
Observations of quarterly variables. If provided, must be a
Pandas Series or DataFrame with a DatetimeIndex or PeriodIndex at
the quarterly frequency.
kwargs
Keyword arguments to pass to the new model class to change the
model specification.
Returns
-------
model : DynamicFactorMQ instance | clone | python | statsmodels/statsmodels | statsmodels/tsa/statespace/dynamic_factor_mq.py | https://github.com/statsmodels/statsmodels/blob/master/statsmodels/tsa/statespace/dynamic_factor_mq.py | BSD-3-Clause |
def summary(self, truncate_endog_names=None):
"""
Create a summary table describing the model.
Parameters
----------
truncate_endog_names : int, optional
The number of characters to show for names of observed variables.
Default is 24 if there is more than one observed variable, or
an unlimited number if there is only one.
"""
# Get endog names
endog_names = self._get_endog_names(truncate=truncate_endog_names,
as_string=True)
title = 'Model Specification: Dynamic Factor Model'
if self._index_dates:
ix = self._index
d = ix[0]
sample = ['%s' % d]
d = ix[-1]
sample += ['- ' + '%s' % d]
else:
sample = [str(0), ' - ' + str(self.nobs)]
# Standardize the model name as a list of str
model_name = self._model_name
# - Top summary table ------------------------------------------------
top_left = []
top_left.append(('Model:', [model_name[0]]))
for i in range(1, len(model_name)):
top_left.append(('', ['+ ' + model_name[i]]))
top_left += [
('Sample:', [sample[0]]),
('', [sample[1]])]
top_right = []
if self.k_endog_Q > 0:
top_right += [
('# of monthly variables:', [self.k_endog_M]),
('# of quarterly variables:', [self.k_endog_Q])]
else:
top_right += [('# of observed variables:', [self.k_endog])]
if self.k_factor_blocks == 1:
top_right += [('# of factors:', [self.k_factors])]
else:
top_right += [('# of factor blocks:', [self.k_factor_blocks])]
top_right += [('Idiosyncratic disturbances:',
['AR(1)' if self.idiosyncratic_ar1 else 'iid']),
('Standardize variables:', [self.standardize])]
summary = Summary()
self.model = self
summary.add_table_2cols(self, gleft=top_left, gright=top_right,
title=title)
table_ix = 1
del self.model
# - Endog / factor map -----------------------------------------------
data = self.endog_factor_map.replace({True: 'X', False: ''})
data.index = endog_names
try:
items = data.items()
except AttributeError:
# Remove after pandas 1.5 is minimum
items = data.iteritems()
for name, col in items:
data[name] = data[name] + (' ' * (len(name) // 2))
data.index.name = 'Dep. variable'
data = data.reset_index()
params_data = data.values
params_header = data.columns.map(str).tolist()
params_stubs = None
title = 'Observed variables / factor loadings'
table = SimpleTable(
params_data, params_header, params_stubs,
txt_fmt=fmt_params, title=title)
summary.tables.insert(table_ix, table)
table_ix += 1
# - Factor blocks summary table --------------------------------------
data = self.factor_block_orders.reset_index()
data['block'] = data['block'].map(
lambda factor_names: ', '.join(factor_names))
try:
data[['order']] = data[['order']].map(str)
except AttributeError:
data[['order']] = data[['order']].applymap(str)
params_data = data.values
params_header = data.columns.map(str).tolist()
params_stubs = None
title = 'Factor blocks:'
table = SimpleTable(
params_data, params_header, params_stubs,
txt_fmt=fmt_params, title=title)
summary.tables.insert(table_ix, table)
table_ix += 1
return summary | Create a summary table describing the model.
Parameters
----------
truncate_endog_names : int, optional
The number of characters to show for names of observed variables.
Default is 24 if there is more than one observed variable, or
an unlimited number if there is only one. | summary | python | statsmodels/statsmodels | statsmodels/tsa/statespace/dynamic_factor_mq.py | https://github.com/statsmodels/statsmodels/blob/master/statsmodels/tsa/statespace/dynamic_factor_mq.py | BSD-3-Clause |
def __str__(self):
"""Summary tables showing model specification."""
return str(self.summary()) | Summary tables showing model specification. | __str__ | python | statsmodels/statsmodels | statsmodels/tsa/statespace/dynamic_factor_mq.py | https://github.com/statsmodels/statsmodels/blob/master/statsmodels/tsa/statespace/dynamic_factor_mq.py | BSD-3-Clause |
def state_names(self):
"""(list of str) List of human readable names for unobserved states."""
# Factors
state_names = []
for block in self._s.factor_blocks:
state_names += [f'{name}' for name in block.factor_names[:]]
for s in range(1, block._factor_order):
state_names += [f'L{s}.{name}'
for name in block.factor_names]
# Monthly error
endog_names = self._get_endog_names()
if self.idiosyncratic_ar1:
endog_names_M = endog_names[self._o['M']]
state_names += [f'eps_M.{name}' for name in endog_names_M]
endog_names_Q = endog_names[self._o['Q']]
# Quarterly error
state_names += [f'eps_Q.{name}' for name in endog_names_Q]
for s in range(1, 5):
state_names += [f'L{s}.eps_Q.{name}' for name in endog_names_Q]
return state_names | (list of str) List of human readable names for unobserved states. | state_names | python | statsmodels/statsmodels | statsmodels/tsa/statespace/dynamic_factor_mq.py | https://github.com/statsmodels/statsmodels/blob/master/statsmodels/tsa/statespace/dynamic_factor_mq.py | BSD-3-Clause |
def param_names(self):
"""(list of str) List of human readable parameter names."""
param_names = []
# Loadings
# So that Lambda = params[ix].reshape(self.k_endog, self.k_factors)
# (where Lambda stacks Lambda_M and Lambda_Q)
endog_names = self._get_endog_names(as_string=False)
for endog_name in endog_names:
for block in self._s.factor_blocks:
for factor_name in block.factor_names:
if self.endog_factor_map.loc[endog_name, factor_name]:
param_names.append(
f'loading.{factor_name}->{endog_name}')
# Factor VAR
for block in self._s.factor_blocks:
for to_factor in block.factor_names:
param_names += [f'L{i}.{from_factor}->{to_factor}'
for i in range(1, block.factor_order + 1)
for from_factor in block.factor_names]
# Factor covariance
for i in range(len(self._s.factor_blocks)):
block = self._s.factor_blocks[i]
param_names += [f'fb({i}).cov.chol[{j + 1},{k + 1}]'
for j in range(block.k_factors)
for k in range(j + 1)]
# Error AR(1)
if self.idiosyncratic_ar1:
endog_names_M = endog_names[self._o['M']]
param_names += [f'L1.eps_M.{name}' for name in endog_names_M]
endog_names_Q = endog_names[self._o['Q']]
param_names += [f'L1.eps_Q.{name}' for name in endog_names_Q]
# Error innovation variances
param_names += [f'sigma2.{name}' for name in endog_names]
return param_names | (list of str) List of human readable parameter names. | param_names | python | statsmodels/statsmodels | statsmodels/tsa/statespace/dynamic_factor_mq.py | https://github.com/statsmodels/statsmodels/blob/master/statsmodels/tsa/statespace/dynamic_factor_mq.py | BSD-3-Clause |
def start_params(self):
"""(array) Starting parameters for maximum likelihood estimation."""
params = np.zeros(self.k_params, dtype=np.float64)
# (1) estimate factors one at a time, where the first step uses
# PCA on all `endog` variables that load on the first factor, and
# subsequent steps use residuals from the previous steps.
# TODO: what about factors that only load on quarterly variables?
endog_factor_map_M = self.endog_factor_map.iloc[:self.k_endog_M]
factors = []
endog = np.require(
pd.DataFrame(self.endog).interpolate().bfill(),
requirements="W"
)
for name in self.factor_names:
# Try to retrieve this from monthly variables, which is most
# consistent
endog_ix = np.where(endog_factor_map_M.loc[:, name])[0]
# But fall back to quarterly if necessary
if len(endog_ix) == 0:
endog_ix = np.where(self.endog_factor_map.loc[:, name])[0]
factor_endog = endog[:, endog_ix]
res_pca = PCA(factor_endog, ncomp=1, method='eig', normalize=False)
factors.append(res_pca.factors)
endog[:, endog_ix] -= res_pca.projection
factors = np.concatenate(factors, axis=1)
# (2) Estimate coefficients for each endog, one at a time (OLS for
# monthly variables, restricted OLS for quarterly). Also, compute
# residuals.
loadings = []
resid = []
for i in range(self.k_endog_M):
factor_ix = self._s.endog_factor_iloc[i]
factor_exog = factors[:, factor_ix]
mod_ols = OLS(self.endog[:, i], exog=factor_exog, missing='drop')
res_ols = mod_ols.fit()
loadings += res_ols.params.tolist()
resid.append(res_ols.resid)
for i in range(self.k_endog_M, self.k_endog):
factor_ix = self._s.endog_factor_iloc[i]
factor_exog = lagmat(factors[:, factor_ix], 4, original='in')
mod_glm = GLM(self.endog[:, i], factor_exog, missing='drop')
res_glm = mod_glm.fit_constrained(self.loading_constraints(i))
loadings += res_glm.params[:len(factor_ix)].tolist()
resid.append(res_glm.resid_response)
params[self._p['loadings']] = loadings
# (3) For each factor block, use an AR or VAR model to get coefficients
# and covariance estimate
# Factor transitions
stationary = True
factor_ar = []
factor_cov = []
i = 0
for block in self._s.factor_blocks:
factors_endog = factors[:, i:i + block.k_factors]
i += block.k_factors
if block.factor_order == 0:
continue
if block.k_factors == 1:
mod_factors = SARIMAX(factors_endog,
order=(block.factor_order, 0, 0))
sp = mod_factors.start_params
block_factor_ar = sp[:-1]
block_factor_cov = sp[-1:]
coefficient_matrices = mod_factors.start_params[:-1]
elif block.k_factors > 1:
mod_factors = VAR(factors_endog)
res_factors = mod_factors.fit(
maxlags=block.factor_order, ic=None, trend='n')
block_factor_ar = res_factors.params.T.ravel()
L = np.linalg.cholesky(res_factors.sigma_u)
block_factor_cov = L[np.tril_indices_from(L)]
coefficient_matrices = np.transpose(
np.reshape(block_factor_ar,
(block.k_factors, block.k_factors,
block.factor_order)), (2, 0, 1))
# Test for stationarity
stationary = is_invertible([1] + list(-coefficient_matrices))
# Check for stationarity
if not stationary:
warn('Non-stationary starting factor autoregressive'
' parameters found for factor block'
f' {block.factor_names}. Using zeros as starting'
' parameters.')
block_factor_ar[:] = 0
cov_factor = np.diag(factors_endog.std(axis=0))
block_factor_cov = (
cov_factor[np.tril_indices(block.k_factors)])
factor_ar += block_factor_ar.tolist()
factor_cov += block_factor_cov.tolist()
params[self._p['factor_ar']] = factor_ar
params[self._p['factor_cov']] = factor_cov
# (4) Use residuals from step (2) to estimate the idiosyncratic
# component
# Idiosyncratic component
if self.idiosyncratic_ar1:
idio_ar1 = []
idio_var = []
for i in range(self.k_endog_M):
mod_idio = SARIMAX(resid[i], order=(1, 0, 0), trend='c')
sp = mod_idio.start_params
idio_ar1.append(np.clip(sp[1], -0.99, 0.99))
idio_var.append(np.clip(sp[-1], 1e-5, np.inf))
for i in range(self.k_endog_M, self.k_endog):
y = self.endog[:, i].copy()
y[~np.isnan(y)] = resid[i]
mod_idio = QuarterlyAR1(y)
res_idio = mod_idio.fit(maxiter=10, return_params=True,
disp=False)
res_idio = mod_idio.fit_em(res_idio, maxiter=5,
return_params=True)
idio_ar1.append(np.clip(res_idio[0], -0.99, 0.99))
idio_var.append(np.clip(res_idio[1], 1e-5, np.inf))
params[self._p['idiosyncratic_ar1']] = idio_ar1
params[self._p['idiosyncratic_var']] = idio_var
else:
idio_var = [np.var(resid[i]) for i in range(self.k_endog_M)]
for i in range(self.k_endog_M, self.k_endog):
y = self.endog[:, i].copy()
y[~np.isnan(y)] = resid[i]
mod_idio = QuarterlyAR1(y)
res_idio = mod_idio.fit(return_params=True, disp=False)
idio_var.append(np.clip(res_idio[1], 1e-5, np.inf))
params[self._p['idiosyncratic_var']] = idio_var
        return params
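The two-step starting-value strategy above (principal components for the factors, then per-series regressions for the loadings) can be sketched with NumPy alone. The synthetic data and the plain eigendecomposition below are illustrative stand-ins for the model's actual `PCA`/`OLS` calls:

```python
import numpy as np

rng = np.random.default_rng(0)
factor = rng.standard_normal(200)
true_loadings = np.array([0.9, 0.5, -0.7])
endog = factor[:, None] * true_loadings + 0.1 * rng.standard_normal((200, 3))

# (1) First principal component: eigenvector of X'X with largest eigenvalue
demeaned = endog - endog.mean(axis=0)
evals, evecs = np.linalg.eigh(demeaned.T @ demeaned)
factor_hat = demeaned @ evecs[:, -1]      # eigh sorts eigenvalues ascending

# (2) OLS slope of each series on the estimated factor -> starting loadings
loadings_hat = np.array([
    np.linalg.lstsq(factor_hat[:, None], demeaned[:, i], rcond=None)[0][0]
    for i in range(3)])

# The factor is identified only up to sign and scale, so compare ratios
ratios = loadings_hat / loadings_hat[0]
```

The ratios recover `true_loadings / true_loadings[0]` up to noise, which is all that starting values require; the subsequent EM/ML iterations refine scale and sign.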
def transform_params(self, unconstrained):
"""
Transform parameters from optimizer space to model space.
Transform unconstrained parameters used by the optimizer to constrained
parameters used in likelihood evaluation.
Parameters
----------
unconstrained : array_like
Array of unconstrained parameters used by the optimizer, to be
transformed.
Returns
-------
constrained : array_like
Array of constrained parameters which may be used in likelihood
evaluation.
"""
constrained = unconstrained.copy()
# Stationary factor VAR
unconstrained_factor_ar = unconstrained[self._p['factor_ar']]
constrained_factor_ar = []
i = 0
for block in self._s.factor_blocks:
length = block.k_factors**2 * block.factor_order
tmp_coeff = np.reshape(
unconstrained_factor_ar[i:i + length],
(block.k_factors, block.k_factors * block.factor_order))
tmp_cov = np.eye(block.k_factors)
tmp_coeff, _ = constrain_stationary_multivariate(tmp_coeff,
tmp_cov)
constrained_factor_ar += tmp_coeff.ravel().tolist()
i += length
constrained[self._p['factor_ar']] = constrained_factor_ar
# Stationary idiosyncratic AR(1)
if self.idiosyncratic_ar1:
idio_ar1 = unconstrained[self._p['idiosyncratic_ar1']]
constrained[self._p['idiosyncratic_ar1']] = [
constrain_stationary_univariate(idio_ar1[i:i + 1])[0]
for i in range(self.k_endog)]
# Positive idiosyncratic variances
constrained[self._p['idiosyncratic_var']] = (
constrained[self._p['idiosyncratic_var']]**2)
        return constrained
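The variance transform at the end of `transform_params` is elementwise squaring, which maps any real optimizer value to a valid non-negative variance; `untransform_params` inverts it with a square root. A minimal check:

```python
import numpy as np

unconstrained = np.array([-1.5, 0.0, 2.0])
constrained = unconstrained**2           # as in transform_params
assert (constrained >= 0).all()          # always a valid variance

# untransform_params applies the square root; up to the sign of the
# optimizer value, the round trip is exact
assert np.allclose(np.sqrt(constrained), np.abs(unconstrained))
```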
def untransform_params(self, constrained):
"""
Transform parameters from model space to optimizer space.
Transform constrained parameters used in likelihood evaluation
to unconstrained parameters used by the optimizer.
Parameters
----------
constrained : array_like
Array of constrained parameters used in likelihood evaluation, to
be transformed.
Returns
-------
unconstrained : array_like
Array of unconstrained parameters used by the optimizer.
"""
unconstrained = constrained.copy()
# Stationary factor VAR
constrained_factor_ar = constrained[self._p['factor_ar']]
unconstrained_factor_ar = []
i = 0
for block in self._s.factor_blocks:
length = block.k_factors**2 * block.factor_order
tmp_coeff = np.reshape(
constrained_factor_ar[i:i + length],
(block.k_factors, block.k_factors * block.factor_order))
tmp_cov = np.eye(block.k_factors)
tmp_coeff, _ = unconstrain_stationary_multivariate(tmp_coeff,
tmp_cov)
unconstrained_factor_ar += tmp_coeff.ravel().tolist()
i += length
unconstrained[self._p['factor_ar']] = unconstrained_factor_ar
# Stationary idiosyncratic AR(1)
if self.idiosyncratic_ar1:
idio_ar1 = constrained[self._p['idiosyncratic_ar1']]
unconstrained[self._p['idiosyncratic_ar1']] = [
unconstrain_stationary_univariate(idio_ar1[i:i + 1])[0]
for i in range(self.k_endog)]
# Positive idiosyncratic variances
unconstrained[self._p['idiosyncratic_var']] = (
unconstrained[self._p['idiosyncratic_var']]**0.5)
        return unconstrained
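For the idiosyncratic AR(1) coefficients, the univariate stationarity constraint maps the whole real line into the stationary interval (-1, 1). As an illustration (not a reimplementation of `constrain_stationary_univariate`, though the order-1 case of that transform reduces to this mapping), x ↦ x/√(1+x²) has an exact closed-form inverse:

```python
import numpy as np

def constrain_ar1(x):
    # Maps any real number into the stationary interval (-1, 1)
    return x / np.sqrt(1 + x**2)

def unconstrain_ar1(p):
    # Inverse map: (-1, 1) back to the real line
    return p / np.sqrt(1 - p**2)

x = np.array([-3.0, 0.0, 5.0])
p = constrain_ar1(x)
assert (np.abs(p) < 1).all()               # always stationary
assert np.allclose(unconstrain_ar1(p), x)  # exact round trip
```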
def update(self, params, **kwargs):
"""
Update the parameters of the model.
Parameters
----------
params : array_like
Array of new parameters.
transformed : bool, optional
Whether or not `params` is already transformed. If set to False,
`transform_params` is called. Default is True.
"""
params = super().update(params, **kwargs)
# Local copies
o = self._o
s = self._s
p = self._p
# Loadings
loadings = params[p['loadings']]
start = 0
for i in range(self.k_endog_M):
iloc = self._s.endog_factor_iloc[i]
k_factors = len(iloc)
factor_ix = s['factors_L1'][iloc]
self['design', i, factor_ix] = loadings[start:start + k_factors]
start += k_factors
multipliers = np.array([1, 2, 3, 2, 1])[:, None]
for i in range(self.k_endog_M, self.k_endog):
iloc = self._s.endog_factor_iloc[i]
k_factors = len(iloc)
factor_ix = s['factors_L1_5_ix'][:, iloc]
self['design', i, factor_ix.ravel()] = np.ravel(
loadings[start:start + k_factors] * multipliers)
start += k_factors
# Factor VAR
factor_ar = params[p['factor_ar']]
start = 0
for block in s.factor_blocks:
k_params = block.k_factors**2 * block.factor_order
A = np.reshape(
factor_ar[start:start + k_params],
(block.k_factors, block.k_factors * block.factor_order))
start += k_params
self['transition', block['factors_L1'], block['factors_ar']] = A
# Factor covariance
factor_cov = params[p['factor_cov']]
start = 0
ix1 = 0
for block in s.factor_blocks:
k_params = block.k_factors * (block.k_factors + 1) // 2
L = np.zeros((block.k_factors, block.k_factors),
dtype=params.dtype)
L[np.tril_indices_from(L)] = factor_cov[start:start + k_params]
start += k_params
Q = L @ L.T
ix2 = ix1 + block.k_factors
self['state_cov', ix1:ix2, ix1:ix2] = Q
ix1 = ix2
# Error AR(1)
if self.idiosyncratic_ar1:
alpha = np.diag(params[p['idiosyncratic_ar1']])
self['transition', s['idio_ar_L1'], s['idio_ar_L1']] = alpha
# Error variances
if self.idiosyncratic_ar1:
self['state_cov', self.k_factors:, self.k_factors:] = (
np.diag(params[p['idiosyncratic_var']]))
else:
idio_var = params[p['idiosyncratic_var']]
self['obs_cov', o['M'], o['M']] = np.diag(idio_var[o['M']])
self['state_cov', self.k_factors:, self.k_factors:] = (
                np.diag(idio_var[o['Q']]))
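In `update`, each factor block's innovation covariance is parameterized through its Cholesky factor: the k(k+1)/2 free parameters fill the lower triangle of `L`, and `Q = L @ L.T` is symmetric positive semi-definite by construction, so no explicit positivity constraint is needed. A standalone illustration for k = 3 (arbitrary example values):

```python
import numpy as np

k = 3
factor_cov_params = np.array([1.0, 0.3, 1.2, -0.5, 0.2, 0.8])  # k*(k+1)//2
L = np.zeros((k, k))
L[np.tril_indices(k)] = factor_cov_params   # row-major fill of the triangle
Q = L @ L.T

assert np.allclose(Q, Q.T)                  # symmetric by construction
assert (np.linalg.eigvalsh(Q) > 0).all()    # nonzero diagonal of L -> PD
```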
def loglike_constant(self):
"""
Constant term in the joint log-likelihood function.
Useful in facilitating comparisons to other packages that exclude the
constant from the log-likelihood computation.
"""
        return -0.5 * (1 - np.isnan(self.endog)).sum() * np.log(2 * np.pi)
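Because the constant counts only the non-missing entries of `endog`, a ragged monthly/quarterly panel contributes −(n/2)·log(2π) with n the number of observed values. A small hypothetical example:

```python
import numpy as np

endog = np.array([[1.0, np.nan],
                  [0.5, 2.0],
                  [np.nan, np.nan]])
n_obs = (1 - np.isnan(endog)).sum()          # 3 non-missing values
constant = -0.5 * n_obs * np.log(2 * np.pi)  # same formula as loglike_constant

assert n_obs == 3
assert np.isclose(constant, -1.5 * np.log(2 * np.pi))
```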
def fit(self, start_params=None, transformed=True, includes_fixed=False,
cov_type='none', cov_kwds=None, method='em', maxiter=500,
tolerance=1e-6, em_initialization=True, mstep_method=None,
full_output=1, disp=False, callback=None, return_params=False,
optim_score=None, optim_complex_step=None, optim_hessian=None,
flags=None, low_memory=False, llf_decrease_action='revert',
llf_decrease_tolerance=1e-4, **kwargs):
"""
Fits the model by maximum likelihood via Kalman filter.

        Parameters
----------
start_params : array_like, optional
Initial guess of the solution for the loglikelihood maximization.
If None, the default is given by Model.start_params.
transformed : bool, optional
Whether or not `start_params` is already transformed. Default is
True.
includes_fixed : bool, optional
If parameters were previously fixed with the `fix_params` method,
this argument describes whether or not `start_params` also includes
the fixed parameters, in addition to the free parameters. Default
is False.
cov_type : str, optional
The `cov_type` keyword governs the method for calculating the
covariance matrix of parameter estimates. Can be one of:
- 'opg' for the outer product of gradient estimator
- 'oim' for the observed information matrix estimator, calculated
using the method of Harvey (1989)
- 'approx' for the observed information matrix estimator,
calculated using a numerical approximation of the Hessian matrix.
- 'robust' for an approximate (quasi-maximum likelihood) covariance
matrix that may be valid even in the presence of some
misspecifications. Intermediate calculations use the 'oim'
method.
- 'robust_approx' is the same as 'robust' except that the
intermediate calculations use the 'approx' method.
- 'none' for no covariance matrix calculation.
Default is 'none', since computing this matrix can be very slow
when there are a large number of parameters.
cov_kwds : dict or None, optional
A dictionary of arguments affecting covariance matrix computation.
**opg, oim, approx, robust, robust_approx**
- 'approx_complex_step' : bool, optional - If True, numerical
approximations are computed using complex-step methods. If False,
numerical approximations are computed using finite difference
methods. Default is True.
- 'approx_centered' : bool, optional - If True, numerical
approximations computed using finite difference methods use a
centered approximation. Default is False.
method : str, optional
The `method` determines which solver from `scipy.optimize`
is used, and it can be chosen from among the following strings:
- 'em' for the EM algorithm
- 'newton' for Newton-Raphson
- 'nm' for Nelder-Mead
- 'bfgs' for Broyden-Fletcher-Goldfarb-Shanno (BFGS)
- 'lbfgs' for limited-memory BFGS with optional box constraints
- 'powell' for modified Powell's method
- 'cg' for conjugate gradient
- 'ncg' for Newton-conjugate gradient
- 'basinhopping' for global basin-hopping solver
The explicit arguments in `fit` are passed to the solver,
with the exception of the basin-hopping solver. Each
solver has several optional arguments that are not the same across
solvers. See the notes section below (or scipy.optimize) for the
available arguments and for the list of explicit arguments that the
basin-hopping solver supports.
maxiter : int, optional
The maximum number of iterations to perform.
tolerance : float, optional
Tolerance to use for convergence checking when using the EM
algorithm. To set the tolerance for other methods, pass
the optimizer-specific keyword argument(s).
full_output : bool, optional
Set to True to have all available output in the Results object's
mle_retvals attribute. The output is dependent on the solver.
See LikelihoodModelResults notes section for more information.
disp : bool, optional
Set to True to print convergence messages.
callback : callable callback(xk), optional
Called after each iteration, as callback(xk), where xk is the
current parameter vector.
return_params : bool, optional
Whether or not to return only the array of maximizing parameters.
Default is False.
optim_score : {'harvey', 'approx'} or None, optional
The method by which the score vector is calculated. 'harvey' uses
the method from Harvey (1989), 'approx' uses either finite
difference or complex step differentiation depending upon the
value of `optim_complex_step`, and None uses the built-in gradient
approximation of the optimizer. Default is None. This keyword is
only relevant if the optimization method uses the score.
optim_complex_step : bool, optional
Whether or not to use complex step differentiation when
approximating the score; if False, finite difference approximation
is used. Default is True. This keyword is only relevant if
`optim_score` is set to 'harvey' or 'approx'.
optim_hessian : {'opg','oim','approx'}, optional
The method by which the Hessian is numerically approximated. 'opg'
uses outer product of gradients, 'oim' uses the information
matrix formula from Harvey (1989), and 'approx' uses numerical
approximation. This keyword is only relevant if the
optimization method uses the Hessian matrix.
low_memory : bool, optional
If set to True, techniques are applied to substantially reduce
memory usage. If used, some features of the results object will
not be available (including smoothed results and in-sample
prediction), although out-of-sample forecasting is possible.
Note that this option is not available when using the EM algorithm
(which is the default for this model). Default is False.
llf_decrease_action : {'ignore', 'warn', 'revert'}, optional
Action to take if the log-likelihood decreases in an EM iteration.
'ignore' continues the iterations, 'warn' issues a warning but
continues the iterations, while 'revert' ends the iterations and
            returns the result from the last good iteration. Default is
            'revert'.
llf_decrease_tolerance : float, optional
Minimum size of the log-likelihood decrease required to trigger a
warning or to end the EM iterations. Setting this value slightly
larger than zero allows small decreases in the log-likelihood that
may be caused by numerical issues. If set to zero, then any
decrease will trigger the `llf_decrease_action`. Default is 1e-4.
**kwargs
Additional keyword arguments to pass to the optimizer.

        Returns
-------
MLEResults

        See Also
--------
statsmodels.base.model.LikelihoodModel.fit
statsmodels.tsa.statespace.mlemodel.MLEResults
"""
if method == 'em':
return self.fit_em(
start_params=start_params, transformed=transformed,
cov_type=cov_type, cov_kwds=cov_kwds, maxiter=maxiter,
tolerance=tolerance, em_initialization=em_initialization,
mstep_method=mstep_method, full_output=full_output, disp=disp,
return_params=return_params, low_memory=low_memory,
llf_decrease_action=llf_decrease_action,
llf_decrease_tolerance=llf_decrease_tolerance, **kwargs)
else:
return super().fit(
start_params=start_params, transformed=transformed,
includes_fixed=includes_fixed, cov_type=cov_type,
cov_kwds=cov_kwds, method=method, maxiter=maxiter,
full_output=full_output, disp=disp,
callback=callback, return_params=return_params,
optim_score=optim_score,
optim_complex_step=optim_complex_step,
optim_hessian=optim_hessian, flags=flags,
                low_memory=low_memory, **kwargs)
def fit_em(self, start_params=None, transformed=True, cov_type='none',
cov_kwds=None, maxiter=500, tolerance=1e-6, disp=False,
em_initialization=True, mstep_method=None, full_output=True,
return_params=False, low_memory=False,
llf_decrease_action='revert', llf_decrease_tolerance=1e-4):
"""
Fits the model by maximum likelihood via the EM algorithm.

        Parameters
----------
start_params : array_like, optional
Initial guess of the solution for the loglikelihood maximization.
The default is to use `DynamicFactorMQ.start_params`.
transformed : bool, optional
Whether or not `start_params` is already transformed. Default is
True.
cov_type : str, optional
The `cov_type` keyword governs the method for calculating the
covariance matrix of parameter estimates. Can be one of:
- 'opg' for the outer product of gradient estimator
- 'oim' for the observed information matrix estimator, calculated
using the method of Harvey (1989)
- 'approx' for the observed information matrix estimator,
calculated using a numerical approximation of the Hessian matrix.
- 'robust' for an approximate (quasi-maximum likelihood) covariance
matrix that may be valid even in the presence of some
misspecifications. Intermediate calculations use the 'oim'
method.
- 'robust_approx' is the same as 'robust' except that the
intermediate calculations use the 'approx' method.
- 'none' for no covariance matrix calculation.
Default is 'none', since computing this matrix can be very slow
when there are a large number of parameters.
cov_kwds : dict or None, optional
A dictionary of arguments affecting covariance matrix computation.
**opg, oim, approx, robust, robust_approx**
- 'approx_complex_step' : bool, optional - If True, numerical
approximations are computed using complex-step methods. If False,
numerical approximations are computed using finite difference
methods. Default is True.
- 'approx_centered' : bool, optional - If True, numerical
approximations computed using finite difference methods use a
centered approximation. Default is False.
maxiter : int, optional
The maximum number of EM iterations to perform.
tolerance : float, optional
Parameter governing convergence of the EM algorithm. The
`tolerance` is the minimum relative increase in the likelihood
for which convergence will be declared. A smaller value for the
`tolerance` will typically yield more precise parameter estimates,
but will typically require more EM iterations. Default is 1e-6.
disp : int or bool, optional
Controls printing of EM iteration progress. If an integer, progress
is printed at every `disp` iterations. A value of True is
interpreted as the value of 1. Default is False (nothing will be
printed).
em_initialization : bool, optional
Whether or not to also update the Kalman filter initialization
using the EM algorithm. Default is True.
mstep_method : {None, 'missing', 'nonmissing'}, optional
The EM algorithm maximization step. If there are no NaN values
in the dataset, this can be set to "nonmissing" (which is slightly
faster) or "missing", otherwise it must be "missing". Default is
"nonmissing" if there are no NaN values or "missing" if there are.
full_output : bool, optional
Set to True to have all available output from EM iterations in
the Results object's mle_retvals attribute.
return_params : bool, optional
Whether or not to return only the array of maximizing parameters.
Default is False.
low_memory : bool, optional
This option cannot be used with the EM algorithm and will raise an
error if set to True. Default is False.
llf_decrease_action : {'ignore', 'warn', 'revert'}, optional
Action to take if the log-likelihood decreases in an EM iteration.
'ignore' continues the iterations, 'warn' issues a warning but
continues the iterations, while 'revert' ends the iterations and
            returns the result from the last good iteration. Default is
            'revert'.
llf_decrease_tolerance : float, optional
Minimum size of the log-likelihood decrease required to trigger a
warning or to end the EM iterations. Setting this value slightly
larger than zero allows small decreases in the log-likelihood that
may be caused by numerical issues. If set to zero, then any
decrease will trigger the `llf_decrease_action`. Default is 1e-4.

        Returns
-------
DynamicFactorMQResults

        See Also
--------
statsmodels.tsa.statespace.mlemodel.MLEModel.fit
statsmodels.tsa.statespace.mlemodel.MLEResults
"""
if self._has_fixed_params:
raise NotImplementedError('Cannot fit using the EM algorithm while'
' holding some parameters fixed.')
if low_memory:
raise ValueError('Cannot fit using the EM algorithm when using'
' low_memory option.')
if start_params is None:
start_params = self.start_params
transformed = True
else:
start_params = np.array(start_params, ndmin=1)
if not transformed:
start_params = self.transform_params(start_params)
llf_decrease_action = string_like(
llf_decrease_action, 'llf_decrease_action',
options=['ignore', 'warn', 'revert'])
disp = int(disp)
# Perform expectation-maximization
s = self._s
llf = []
params = [start_params]
init = None
inits = [self.ssm.initialization]
i = 0
delta = 0
terminate = False
# init_stationary = None if em_initialization else True
while i < maxiter and not terminate and (i < 1 or (delta > tolerance)):
out = self._em_iteration(params[-1], init=init,
mstep_method=mstep_method)
new_llf = out[0].llf_obs.sum()
# If we are not using EM initialization, then we need to check for
# non-stationary parameters
if not em_initialization:
self.update(out[1])
switch_init = []
T = self['transition']
init = self.ssm.initialization
iloc = np.arange(self.k_states)
# We may only have global initialization if we have no
# quarterly variables and idiosyncratic_ar1=False
if self.k_endog_Q == 0 and not self.idiosyncratic_ar1:
block = s.factor_blocks[0]
if init.initialization_type == 'stationary':
Tb = T[block['factors'], block['factors']]
if not np.all(np.linalg.eigvals(Tb) < (1 - 1e-10)):
init.set(block['factors'], 'diffuse')
switch_init.append(
'factor block:'
f' {tuple(block.factor_names)}')
else:
# Factor blocks
for block in s.factor_blocks:
b = tuple(iloc[block['factors']])
init_type = init.blocks[b].initialization_type
if init_type == 'stationary':
Tb = T[block['factors'], block['factors']]
if not np.all(np.linalg.eigvals(Tb) < (1 - 1e-10)):
init.set(block['factors'], 'diffuse')
switch_init.append(
'factor block:'
f' {tuple(block.factor_names)}')
if self.idiosyncratic_ar1:
endog_names = self._get_endog_names(as_string=True)
# Monthly variables
for j in range(s['idio_ar_M'].start, s['idio_ar_M'].stop):
init_type = init.blocks[(j,)].initialization_type
if init_type == 'stationary':
if not np.abs(T[j, j]) < (1 - 1e-10):
init.set(j, 'diffuse')
name = endog_names[j - s['idio_ar_M'].start]
switch_init.append(
'idiosyncratic AR(1) for monthly'
f' variable: {name}')
# Quarterly variables
if self.k_endog_Q > 0:
b = tuple(iloc[s['idio_ar_Q']])
init_type = init.blocks[b].initialization_type
if init_type == 'stationary':
Tb = T[s['idio_ar_Q'], s['idio_ar_Q']]
if not np.all(np.linalg.eigvals(Tb) < (1 - 1e-10)):
init.set(s['idio_ar_Q'], 'diffuse')
switch_init.append(
'idiosyncratic AR(1) for the'
' block of quarterly variables')
if len(switch_init) > 0:
warn('Non-stationary parameters found at EM iteration'
f' {i + 1}, which is not compatible with'
' stationary initialization. Initialization was'
' switched to diffuse for the following: '
f' {switch_init}, and fitting was restarted.')
results = self.fit_em(
start_params=params[-1], transformed=transformed,
cov_type=cov_type, cov_kwds=cov_kwds,
maxiter=maxiter, tolerance=tolerance,
em_initialization=em_initialization,
mstep_method=mstep_method, full_output=full_output,
disp=disp, return_params=return_params,
low_memory=low_memory,
llf_decrease_action=llf_decrease_action,
llf_decrease_tolerance=llf_decrease_tolerance)
self.ssm.initialize(self._default_initialization())
return results
# Check for decrease in the log-likelihood
# Note: allow a little numerical error before declaring a decrease
llf_decrease = (
i > 0 and (new_llf - llf[-1]) < -llf_decrease_tolerance)
if llf_decrease_action == 'revert' and llf_decrease:
warn(f'Log-likelihood decreased at EM iteration {i + 1}.'
f' Reverting to the results from EM iteration {i}'
' (prior to the decrease) and returning the solution.')
# Terminated iteration
i -= 1
terminate = True
else:
if llf_decrease_action == 'warn' and llf_decrease:
warn(f'Log-likelihood decreased at EM iteration {i + 1},'
' which can indicate numerical issues.')
llf.append(new_llf)
params.append(out[1])
if em_initialization:
init = initialization.Initialization(
self.k_states, 'known',
constant=out[0].smoothed_state[..., 0],
stationary_cov=out[0].smoothed_state_cov[..., 0])
inits.append(init)
if i > 0:
delta = (2 * np.abs(llf[-1] - llf[-2]) /
(np.abs(llf[-1]) + np.abs(llf[-2])))
else:
delta = np.inf
# If `disp` is not False, display the first iteration
if disp and i == 0:
print(f'EM start iterations, llf={llf[-1]:.5g}')
# Print output every `disp` observations
elif disp and ((i + 1) % disp) == 0:
print(f'EM iteration {i + 1}, llf={llf[-1]:.5g},'
f' convergence criterion={delta:.5g}')
# Advance the iteration counter
i += 1
# Check for convergence
not_converged = (i == maxiter and delta > tolerance)
# If no convergence without explicit termination, warn users
if not_converged:
warn(f'EM reached maximum number of iterations ({maxiter}),'
f' without achieving convergence: llf={llf[-1]:.5g},'
f' convergence criterion={delta:.5g}'
f' (while specified tolerance was {tolerance:.5g})')
# If `disp` is not False, display the final iteration
if disp:
if terminate:
print(f'EM terminated at iteration {i}, llf={llf[-1]:.5g},'
f' convergence criterion={delta:.5g}'
f' (while specified tolerance was {tolerance:.5g})')
elif not_converged:
print(f'EM reached maximum number of iterations ({maxiter}),'
f' without achieving convergence: llf={llf[-1]:.5g},'
f' convergence criterion={delta:.5g}'
f' (while specified tolerance was {tolerance:.5g})')
else:
print(f'EM converged at iteration {i}, llf={llf[-1]:.5g},'
f' convergence criterion={delta:.5g}'
f' < tolerance={tolerance:.5g}')
# Just return the fitted parameters if requested
if return_params:
result = params[-1]
# Otherwise construct the results class if desired
else:
if em_initialization:
base_init = self.ssm.initialization
self.ssm.initialization = init
# Note that because we are using params[-1], we are actually using
# the results from one additional iteration compared to the
# iteration at which we declared convergence.
result = self.smooth(params[-1], transformed=True,
cov_type=cov_type, cov_kwds=cov_kwds)
if em_initialization:
self.ssm.initialization = base_init
# Save the output
if full_output:
llf.append(result.llf)
em_retvals = Bunch(**{'params': np.array(params),
'llf': np.array(llf),
'iter': i,
'inits': inits})
em_settings = Bunch(**{'method': 'em',
'tolerance': tolerance,
'maxiter': maxiter})
else:
em_retvals = None
em_settings = None
result._results.mle_retvals = em_retvals
result._results.mle_settings = em_settings
return result | Fits the model by maximum likelihood via the EM algorithm.
Parameters
----------
start_params : array_like, optional
Initial guess of the solution for the loglikelihood maximization.
The default is to use `DynamicFactorMQ.start_params`.
transformed : bool, optional
Whether or not `start_params` is already transformed. Default is
True.
cov_type : str, optional
The `cov_type` keyword governs the method for calculating the
covariance matrix of parameter estimates. Can be one of:
- 'opg' for the outer product of gradient estimator
- 'oim' for the observed information matrix estimator, calculated
using the method of Harvey (1989)
- 'approx' for the observed information matrix estimator,
calculated using a numerical approximation of the Hessian matrix.
- 'robust' for an approximate (quasi-maximum likelihood) covariance
matrix that may be valid even in the presence of some
misspecifications. Intermediate calculations use the 'oim'
method.
- 'robust_approx' is the same as 'robust' except that the
intermediate calculations use the 'approx' method.
- 'none' for no covariance matrix calculation.
Default is 'none', since computing this matrix can be very slow
when there are a large number of parameters.
cov_kwds : dict or None, optional
A dictionary of arguments affecting covariance matrix computation.
**opg, oim, approx, robust, robust_approx**
- 'approx_complex_step' : bool, optional - If True, numerical
approximations are computed using complex-step methods. If False,
numerical approximations are computed using finite difference
methods. Default is True.
- 'approx_centered' : bool, optional - If True, numerical
approximations computed using finite difference methods use a
centered approximation. Default is False.
maxiter : int, optional
The maximum number of EM iterations to perform.
tolerance : float, optional
Parameter governing convergence of the EM algorithm. The
`tolerance` is the minimum relative increase in the likelihood
for which convergence will be declared. A smaller value for the
`tolerance` will typically yield more precise parameter estimates,
but will typically require more EM iterations. Default is 1e-6.
disp : int or bool, optional
Controls printing of EM iteration progress. If an integer, progress
is printed at every `disp` iterations. A value of True is
interpreted as the value of 1. Default is False (nothing will be
printed).
em_initialization : bool, optional
Whether or not to also update the Kalman filter initialization
using the EM algorithm. Default is True.
mstep_method : {None, 'missing', 'nonmissing'}, optional
The EM algorithm maximization step. If there are no NaN values
in the dataset, this can be set to "nonmissing" (which is slightly
faster) or "missing", otherwise it must be "missing". Default is
"nonmissing" if there are no NaN values or "missing" if there are.
full_output : bool, optional
Set to True to have all available output from EM iterations in
the Results object's mle_retvals attribute.
return_params : bool, optional
Whether or not to return only the array of maximizing parameters.
Default is False.
low_memory : bool, optional
This option cannot be used with the EM algorithm and will raise an
error if set to True. Default is False.
llf_decrease_action : {'ignore', 'warn', 'revert'}, optional
Action to take if the log-likelihood decreases in an EM iteration.
'ignore' continues the iterations, 'warn' issues a warning but
continues the iterations, while 'revert' ends the iterations and
returns the result from the last good iteration. Default is 'warn'.
llf_decrease_tolerance : float, optional
Minimum size of the log-likelihood decrease required to trigger a
warning or to end the EM iterations. Setting this value slightly
larger than zero allows small decreases in the log-likelihood that
may be caused by numerical issues. If set to zero, then any
decrease will trigger the `llf_decrease_action`. Default is 1e-4.
Returns
-------
DynamicFactorMQResults
See Also
--------
statsmodels.tsa.statespace.mlemodel.MLEModel.fit
statsmodels.tsa.statespace.mlemodel.MLEResults | fit_em | python | statsmodels/statsmodels | statsmodels/tsa/statespace/dynamic_factor_mq.py | https://github.com/statsmodels/statsmodels/blob/master/statsmodels/tsa/statespace/dynamic_factor_mq.py | BSD-3-Clause |
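The convergence test in the EM loop above is based on the relative change in the log-likelihood between successive iterations, `delta = 2 * |llf_t - llf_{t-1}| / (|llf_t| + |llf_{t-1}|)`, compared against `tolerance`. A standalone sketch of that criterion, using hypothetical log-likelihood values:

```python
import numpy as np

def em_converged(llf_prev, llf_new, tolerance=1e-6):
    # Relative log-likelihood change criterion used by the EM loop
    delta = 2 * np.abs(llf_new - llf_prev) / (np.abs(llf_new) + np.abs(llf_prev))
    return delta < tolerance, delta

# Hypothetical successive log-likelihood values
converged, delta = em_converged(-1520.0, -1519.9999)   # tiny improvement
still_going, _ = em_converged(-1520.0, -1500.0)        # large improvement
```

Note that the loop above additionally caps the number of iterations at `maxiter` and warns if the criterion has not been met by then.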
def _em_iteration(self, params0, init=None, mstep_method=None):
"""EM iteration."""
# (E)xpectation step
res = self._em_expectation_step(params0, init=init)
# (M)aximization step
params1 = self._em_maximization_step(res, params0,
mstep_method=mstep_method)
return res, params1 | EM iteration. | _em_iteration | python | statsmodels/statsmodels | statsmodels/tsa/statespace/dynamic_factor_mq.py | https://github.com/statsmodels/statsmodels/blob/master/statsmodels/tsa/statespace/dynamic_factor_mq.py | BSD-3-Clause |
def _em_expectation_step(self, params0, init=None):
"""EM expectation step."""
# (E)xpectation step
self.update(params0)
# Re-initialize state, if new initialization is given
if init is not None:
base_init = self.ssm.initialization
self.ssm.initialization = init
# Perform smoothing, only saving what is required
res = self.ssm.smooth(
SMOOTHER_STATE | SMOOTHER_STATE_COV | SMOOTHER_STATE_AUTOCOV,
update_filter=False)
res.llf_obs = np.array(
self.ssm._kalman_filter.loglikelihood, copy=True)
# Reset initialization
if init is not None:
self.ssm.initialization = base_init
return res | EM expectation step. | _em_expectation_step | python | statsmodels/statsmodels | statsmodels/tsa/statespace/dynamic_factor_mq.py | https://github.com/statsmodels/statsmodels/blob/master/statsmodels/tsa/statespace/dynamic_factor_mq.py | BSD-3-Clause |
def _em_maximization_step(self, res, params0, mstep_method=None):
"""EM maximization step."""
s = self._s
a = res.smoothed_state.T[..., None]
cov_a = res.smoothed_state_cov.transpose(2, 0, 1)
acov_a = res.smoothed_state_autocov.transpose(2, 0, 1)
# E[a_t a_t'], t = 0, ..., T
Eaa = cov_a.copy() + np.matmul(a, a.transpose(0, 2, 1))
# E[a_t a_{t-1}'], t = 1, ..., T
Eaa1 = acov_a[:-1] + np.matmul(a[1:], a[:-1].transpose(0, 2, 1))
# Observation equation
has_missing = np.any(res.nmissing)
if mstep_method is None:
mstep_method = 'missing' if has_missing else 'nonmissing'
mstep_method = mstep_method.lower()
if mstep_method == 'nonmissing' and has_missing:
raise ValueError('Cannot use EM algorithm option'
' `mstep_method="nonmissing"` with missing data.')
if mstep_method == 'nonmissing':
func = self._em_maximization_obs_nonmissing
elif mstep_method == 'missing':
func = self._em_maximization_obs_missing
else:
raise ValueError('Invalid maximization step method: "%s".'
% mstep_method)
# TODO: computing H is pretty slow
Lambda, H = func(res, Eaa, a, compute_H=(not self.idiosyncratic_ar1))
# Factor VAR and covariance
factor_ar = []
factor_cov = []
for b in s.factor_blocks:
A = Eaa[:-1, b['factors_ar'], b['factors_ar']].sum(axis=0)
B = Eaa1[:, b['factors_L1'], b['factors_ar']].sum(axis=0)
C = Eaa[1:, b['factors_L1'], b['factors_L1']].sum(axis=0)
nobs = Eaa.shape[0] - 1
# want: x = B A^{-1}, so solve: x A = B or solve: A' x' = B'
try:
f_A = cho_solve(cho_factor(A), B.T).T
except LinAlgError:
# Fall back to general solver if there are problems with
# positive-definiteness
f_A = np.linalg.solve(A, B.T).T
f_Q = (C - f_A @ B.T) / nobs
factor_ar += f_A.ravel().tolist()
factor_cov += (
np.linalg.cholesky(f_Q)[np.tril_indices_from(f_Q)].tolist())
# Idiosyncratic AR(1) and variances
if self.idiosyncratic_ar1:
ix = s['idio_ar_L1']
Ad = Eaa[:-1, ix, ix].sum(axis=0).diagonal()
Bd = Eaa1[:, ix, ix].sum(axis=0).diagonal()
Cd = Eaa[1:, ix, ix].sum(axis=0).diagonal()
nobs = Eaa.shape[0] - 1
alpha = Bd / Ad
sigma2 = (Cd - alpha * Bd) / nobs
else:
ix = s['idio_ar_L1']
C = Eaa[:, ix, ix].sum(axis=0)
sigma2 = np.r_[H.diagonal()[self._o['M']],
C.diagonal() / Eaa.shape[0]]
# Save parameters
params1 = np.zeros_like(params0)
loadings = []
for i in range(self.k_endog):
iloc = self._s.endog_factor_iloc[i]
factor_ix = s['factors_L1'][iloc]
loadings += Lambda[i, factor_ix].tolist()
params1[self._p['loadings']] = loadings
params1[self._p['factor_ar']] = factor_ar
params1[self._p['factor_cov']] = factor_cov
if self.idiosyncratic_ar1:
params1[self._p['idiosyncratic_ar1']] = alpha
params1[self._p['idiosyncratic_var']] = sigma2
return params1 | EM maximization step. | _em_maximization_step | python | statsmodels/statsmodels | statsmodels/tsa/statespace/dynamic_factor_mq.py | https://github.com/statsmodels/statsmodels/blob/master/statsmodels/tsa/statespace/dynamic_factor_mq.py | BSD-3-Clause |
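The factor VAR update above repeatedly computes `x = B A^{-1}` for a symmetric positive-definite `A` by rewriting `x A = B` as `A' x' = B'` and solving via a Cholesky factorization, falling back to a general solver if the factorization fails. A self-contained sketch of that pattern, with arbitrary example matrices:

```python
import numpy as np
from scipy.linalg import cho_factor, cho_solve

rng = np.random.default_rng(0)
M = rng.standard_normal((4, 4))
A = M @ M.T + 4 * np.eye(4)        # symmetric positive definite, as in the M-step
B = rng.standard_normal((2, 4))

# want x = B @ inv(A); solve x A = B as A' x' = B' (here A' = A by symmetry)
x = cho_solve(cho_factor(A), B.T).T
# x agrees with the direct, but slower and less stable, computation B @ inv(A)
```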
def _em_maximization_obs_nonmissing(self, res, Eaa, a, compute_H=False):
"""EM maximization step, observation equation without missing data."""
s = self._s
dtype = Eaa.dtype
# Observation equation (non-missing)
# Note: we only compute loadings for monthly variables because
# quarterly variables will always have missing entries, so we would
# never choose this method in that case
k = s.k_states_factors
Lambda = np.zeros((self.k_endog, k), dtype=dtype)
for i in range(self.k_endog):
y = self.endog[:, i:i + 1]
iloc = self._s.endog_factor_iloc[i]
factor_ix = s['factors_L1'][iloc]
ix = (np.s_[:],) + np.ix_(factor_ix, factor_ix)
A = Eaa[ix].sum(axis=0)
B = y.T @ a[:, factor_ix, 0]
if self.idiosyncratic_ar1:
ix1 = s.k_states_factors + i
ix2 = ix1 + 1
B -= Eaa[:, ix1:ix2, factor_ix].sum(axis=0)
# want: x = B A^{-1}, so solve: x A = B or solve: A' x' = B'
try:
Lambda[i, factor_ix] = cho_solve(cho_factor(A), B.T).T
except LinAlgError:
# Fall back to general solver if there are problems with
# positive-definiteness
Lambda[i, factor_ix] = np.linalg.solve(A, B.T).T
# Compute new obs cov
# Note: this is unnecessary if `idiosyncratic_ar1=True`.
# This is written in a slightly more general way than
# Banbura and Modugno (2014), equation (7); see instead equation (13)
# of Wu et al. (1996)
# "An algorithm for estimating parameters of state-space models"
if compute_H:
Z = self['design'].copy()
Z[:, :k] = Lambda
BL = self.endog.T @ a[..., 0] @ Z.T
C = self.endog.T @ self.endog
H = (C + -BL - BL.T + Z @ Eaa.sum(axis=0) @ Z.T) / self.nobs
else:
H = np.zeros((self.k_endog, self.k_endog), dtype=dtype) * np.nan
return Lambda, H | EM maximization step, observation equation without missing data. | _em_maximization_obs_nonmissing | python | statsmodels/statsmodels | statsmodels/tsa/statespace/dynamic_factor_mq.py | https://github.com/statsmodels/statsmodels/blob/master/statsmodels/tsa/statespace/dynamic_factor_mq.py | BSD-3-Clause |
def _em_maximization_obs_missing(self, res, Eaa, a, compute_H=False):
"""EM maximization step, observation equation with missing data."""
s = self._s
dtype = Eaa.dtype
# Observation equation (missing)
k = s.k_states_factors
Lambda = np.zeros((self.k_endog, k), dtype=dtype)
W = (1 - res.missing.T)
mask = W.astype(bool)
# Compute design for monthly
# Note: the relevant A changes for each i
for i in range(self.k_endog_M):
iloc = self._s.endog_factor_iloc[i]
factor_ix = s['factors_L1'][iloc]
m = mask[:, i]
yt = self.endog[m, i:i + 1]
ix = np.ix_(m, factor_ix, factor_ix)
Ai = Eaa[ix].sum(axis=0)
Bi = yt.T @ a[np.ix_(m, factor_ix)][..., 0]
if self.idiosyncratic_ar1:
ix1 = s.k_states_factors + i
ix2 = ix1 + 1
Bi -= Eaa[m, ix1:ix2][..., factor_ix].sum(axis=0)
# want: x = B A^{-1}, so solve: x A = B or solve: A' x' = B'
try:
Lambda[i, factor_ix] = cho_solve(cho_factor(Ai), Bi.T).T
except LinAlgError:
# Fall back to general solver if there are problems with
# positive-definiteness
Lambda[i, factor_ix] = np.linalg.solve(Ai, Bi.T).T
# Compute unrestricted design for quarterly
# See Banbura et al. (2011), where this is described in Appendix C,
# between equations (13) and (14).
if self.k_endog_Q > 0:
# Note: the relevant A changes for each i
multipliers = np.array([1, 2, 3, 2, 1])[:, None]
for i in range(self.k_endog_M, self.k_endog):
iloc = self._s.endog_factor_iloc[i]
factor_ix = s['factors_L1_5_ix'][:, iloc].ravel().tolist()
R, _ = self.loading_constraints(i)
iQ = i - self.k_endog_M
m = mask[:, i]
yt = self.endog[m, i:i + 1]
ix = np.ix_(m, factor_ix, factor_ix)
Ai = Eaa[ix].sum(axis=0)
BiQ = yt.T @ a[np.ix_(m, factor_ix)][..., 0]
if self.idiosyncratic_ar1:
ix = (np.s_[:],) + np.ix_(s['idio_ar_Q_ix'][iQ], factor_ix)
Eepsf = Eaa[ix]
BiQ -= (multipliers * Eepsf[m].sum(axis=0)).sum(axis=0)
# Note that there was a typo in Banbura et al. (2011) for
# the formula applying the restrictions. In their notation,
# they show (C D C')^{-1} while it should be (C D^{-1} C')^{-1}
# Note: in reality, this is:
# unrestricted - Aii @ R.T @ RARi @ (R @ unrestricted - q)
# where the restrictions are defined as: R @ unrestricted = q
# However, here q = 0, so we can simplify.
try:
L_and_lower = cho_factor(Ai)
# x = BQ A^{-1}, or x A = BQ, so solve A' x' = (BQ)'
unrestricted = cho_solve(L_and_lower, BiQ.T).T[0]
AiiRT = cho_solve(L_and_lower, R.T)
L_and_lower = cho_factor(R @ AiiRT)
RAiiRTiR = cho_solve(L_and_lower, R)
restricted = unrestricted - AiiRT @ RAiiRTiR @ unrestricted
except LinAlgError:
# Fall back to slower method if there are problems with
# positive-definiteness
Aii = np.linalg.inv(Ai)
unrestricted = (BiQ @ Aii)[0]
RARi = np.linalg.inv(R @ Aii @ R.T)
restricted = (unrestricted -
Aii @ R.T @ RARi @ R @ unrestricted)
Lambda[i, factor_ix] = restricted
# Compute new obs cov
# Note: this is unnecessary if `idiosyncratic_ar1=True`.
# See Banbura and Modugno (2014), equation (12)
# This does not literally follow their formula, e.g. multiplying by the
# W_t selection matrices, because those formulas require loops that are
# relatively slow. The formulation here is vectorized.
if compute_H:
Z = self['design'].copy()
Z[:, :Lambda.shape[1]] = Lambda
y = np.nan_to_num(self.endog)
C = y.T @ y
W = W[..., None]
IW = 1 - W
WL = W * Z
WLT = WL.transpose(0, 2, 1)
BL = y[..., None] @ a.transpose(0, 2, 1) @ WLT
A = Eaa
BLT = BL.transpose(0, 2, 1)
IWT = IW.transpose(0, 2, 1)
H = (C + (-BL - BLT + WL @ A @ WLT +
IW * self['obs_cov'] * IWT).sum(axis=0)) / self.nobs
else:
H = np.zeros((self.k_endog, self.k_endog), dtype=dtype) * np.nan
return Lambda, H | EM maximization step, observation equation with missing data. | _em_maximization_obs_missing | python | statsmodels/statsmodels | statsmodels/tsa/statespace/dynamic_factor_mq.py | https://github.com/statsmodels/statsmodels/blob/master/statsmodels/tsa/statespace/dynamic_factor_mq.py | BSD-3-Clause |
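The quarterly-loading step above imposes linear restrictions `R x = 0` on an unrestricted solution via `restricted = unrestricted - A^{-1} R' (R A^{-1} R')^{-1} R @ unrestricted` (the corrected version of the Banbura et al. formula, as noted in the comments). A small sketch with a hypothetical constraint, showing that the restriction holds exactly for the corrected solution:

```python
import numpy as np

rng = np.random.default_rng(1)
k = 5
M = rng.standard_normal((k, k))
A = M @ M.T + k * np.eye(k)                  # SPD matrix playing the role of Ai
unrestricted = rng.standard_normal(k)
R = np.array([[1.0, -1.0, 0.0, 0.0, 0.0]])   # hypothetical restriction: x0 = x1

Ai = np.linalg.inv(A)
restricted = (unrestricted
              - Ai @ R.T @ np.linalg.inv(R @ Ai @ R.T) @ R @ unrestricted)
# R @ restricted is numerically zero: the constraint is satisfied
```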
def smooth(self, params, transformed=True, includes_fixed=False,
complex_step=False, cov_type='none', cov_kwds=None,
return_ssm=False, results_class=None,
results_wrapper_class=None, **kwargs):
"""
Kalman smoothing.
Parameters
----------
params : array_like
Array of parameters at which to evaluate the loglikelihood
function.
transformed : bool, optional
Whether or not `params` is already transformed. Default is True.
return_ssm : bool, optional
Whether or not to return only the state space output or a full
results object. Default is to return a full results object.
cov_type : str, optional
See `MLEResults.fit` for a description of covariance matrix types
for the results object. Default is 'none'.
cov_kwds : dict or None, optional
See `MLEResults.get_robustcov_results` for a description of
required keywords for alternative covariance estimators.
**kwargs
Additional keyword arguments to pass to the Kalman filter. See
`KalmanFilter.filter` for more details.
"""
return super().smooth(
params, transformed=transformed, includes_fixed=includes_fixed,
complex_step=complex_step, cov_type=cov_type, cov_kwds=cov_kwds,
return_ssm=return_ssm, results_class=results_class,
results_wrapper_class=results_wrapper_class, **kwargs) | Kalman smoothing.
Parameters
----------
params : array_like
Array of parameters at which to evaluate the loglikelihood
function.
transformed : bool, optional
Whether or not `params` is already transformed. Default is True.
return_ssm : bool, optional
Whether or not to return only the state space output or a full
results object. Default is to return a full results object.
cov_type : str, optional
See `MLEResults.fit` for a description of covariance matrix types
for the results object. Default is 'none'.
cov_kwds : dict or None, optional
See `MLEResults.get_robustcov_results` for a description required
keywords for alternative covariance estimators
**kwargs
Additional keyword arguments to pass to the Kalman filter. See
`KalmanFilter.filter` for more details. | smooth | python | statsmodels/statsmodels | statsmodels/tsa/statespace/dynamic_factor_mq.py | https://github.com/statsmodels/statsmodels/blob/master/statsmodels/tsa/statespace/dynamic_factor_mq.py | BSD-3-Clause |
def filter(self, params, transformed=True, includes_fixed=False,
complex_step=False, cov_type='none', cov_kwds=None,
return_ssm=False, results_class=None,
results_wrapper_class=None, low_memory=False, **kwargs):
"""
Kalman filtering.
Parameters
----------
params : array_like
Array of parameters at which to evaluate the loglikelihood
function.
transformed : bool, optional
Whether or not `params` is already transformed. Default is True.
return_ssm : bool, optional
Whether or not to return only the state space output or a full
results object. Default is to return a full results object.
cov_type : str, optional
See `MLEResults.fit` for a description of covariance matrix types
for the results object. Default is 'none'.
cov_kwds : dict or None, optional
See `MLEResults.get_robustcov_results` for a description of
required keywords for alternative covariance estimators.
low_memory : bool, optional
If set to True, techniques are applied to substantially reduce
memory usage. If used, some features of the results object will
not be available (including in-sample prediction), although
out-of-sample forecasting is possible. Default is False.
**kwargs
Additional keyword arguments to pass to the Kalman filter. See
`KalmanFilter.filter` for more details.
"""
return super().filter(
params, transformed=transformed, includes_fixed=includes_fixed,
complex_step=complex_step, cov_type=cov_type, cov_kwds=cov_kwds,
return_ssm=return_ssm, results_class=results_class,
results_wrapper_class=results_wrapper_class, **kwargs) | Kalman filtering.
Parameters
----------
params : array_like
Array of parameters at which to evaluate the loglikelihood
function.
transformed : bool, optional
Whether or not `params` is already transformed. Default is True.
return_ssm : bool, optional
Whether or not to return only the state space output or a full
results object. Default is to return a full results object.
cov_type : str, optional
See `MLEResults.fit` for a description of covariance matrix types
for the results object. Default is 'none'.
cov_kwds : dict or None, optional
See `MLEResults.get_robustcov_results` for a description of
required keywords for alternative covariance estimators.
low_memory : bool, optional
If set to True, techniques are applied to substantially reduce
memory usage. If used, some features of the results object will
not be available (including in-sample prediction), although
out-of-sample forecasting is possible. Default is False.
**kwargs
Additional keyword arguments to pass to the Kalman filter. See
`KalmanFilter.filter` for more details. | filter | python | statsmodels/statsmodels | statsmodels/tsa/statespace/dynamic_factor_mq.py | https://github.com/statsmodels/statsmodels/blob/master/statsmodels/tsa/statespace/dynamic_factor_mq.py | BSD-3-Clause |
def impulse_responses(self, params, steps=1, impulse=0,
orthogonalized=False, cumulative=False, anchor=None,
exog=None, extend_model=None, extend_kwargs=None,
transformed=True, includes_fixed=False,
original_scale=True, **kwargs):
"""
Impulse response function.
Parameters
----------
params : array_like
Array of model parameters.
steps : int, optional
The number of steps for which impulse responses are calculated.
Default is 1. Note that for time-invariant models, the initial
impulse is not counted as a step, so if `steps=1`, the output will
have 2 entries.
impulse : int or array_like
If an integer, the state innovation to pulse; must be between 0
and `k_posdef-1`. Alternatively, a custom impulse vector may be
provided; must be shaped `k_posdef x 1`.
orthogonalized : bool, optional
Whether or not to perform impulse using orthogonalized innovations.
Note that this will also affect custom `impulse` vectors. Default
is False.
cumulative : bool, optional
Whether or not to return cumulative impulse responses. Default is
False.
anchor : int, str, or datetime, optional
Time point within the sample for the state innovation impulse. Type
depends on the index of the given `endog` in the model. Two special
cases are the strings 'start' and 'end', which refer to setting the
impulse at the first and last points of the sample, respectively.
Integer values can run from 0 to `nobs - 1`, or can be negative to
apply negative indexing. Finally, if a date/time index was provided
to the model, then this argument can be a date string to parse or a
datetime type. Default is 'start'.
exog : array_like, optional
New observations of exogenous regressors for out-of-sample periods,
if applicable.
transformed : bool, optional
Whether or not `params` is already transformed. Default is
True.
includes_fixed : bool, optional
If parameters were previously fixed with the `fix_params` method,
this argument describes whether or not `params` also includes
the fixed parameters, in addition to the free parameters. Default
is False.
original_scale : bool, optional
If the model specification standardized the data, whether or not
to return impulse responses in the original scale of the data (i.e.
before it was standardized by the model). Default is True.
**kwargs
If the model has time-varying design or transition matrices and the
combination of `anchor` and `steps` implies creating impulse
responses for the out-of-sample period, then these matrices must
have updated values provided for the out-of-sample steps. For
example, if `design` is a time-varying component, `nobs` is 10,
`anchor=1`, and `steps` is 15, a (`k_endog` x `k_states` x 7)
matrix must be provided with the new design matrix values.
Returns
-------
impulse_responses : ndarray
Responses for each endogenous variable due to the impulse
given by the `impulse` argument. For a time-invariant model, the
impulse responses are given for `steps + 1` elements (this gives
the "initial impulse" followed by `steps` responses for the
important cases of VAR and SARIMAX models), while for time-varying
models the impulse responses are only given for `steps` elements
(to avoid having to unexpectedly provide updated time-varying
matrices).
"""
# Get usual simulations (in the possibly-standardized scale)
irfs = super().impulse_responses(
params, steps=steps, impulse=impulse,
orthogonalized=orthogonalized, cumulative=cumulative,
anchor=anchor, exog=exog, extend_model=extend_model,
extend_kwargs=extend_kwargs, transformed=transformed,
includes_fixed=includes_fixed, **kwargs)
# If applicable, convert predictions back to original space
if self.standardize and original_scale:
use_pandas = isinstance(self.data, PandasData)
shape = irfs.shape
if use_pandas:
# pd.Series (k_endog=1, replications=None)
if len(shape) == 1:
irfs = irfs * self._endog_std.iloc[0]
# pd.DataFrame (k_endog > 1)
# [or]
# pd.DataFrame with MultiIndex (replications > 0)
elif len(shape) == 2:
irfs = irfs.multiply(self._endog_std, axis=1, level=0)
else:
# 1-dim array (k_endog=1)
if len(shape) == 1:
irfs = irfs * self._endog_std
# 2-dim array (k_endog > 1)
elif len(shape) == 2:
irfs = irfs * self._endog_std
return irfs | Impulse response function.
Parameters
----------
params : array_like
Array of model parameters.
steps : int, optional
The number of steps for which impulse responses are calculated.
Default is 1. Note that for time-invariant models, the initial
impulse is not counted as a step, so if `steps=1`, the output will
have 2 entries.
impulse : int or array_like
If an integer, the state innovation to pulse; must be between 0
and `k_posdef-1`. Alternatively, a custom impulse vector may be
provided; must be shaped `k_posdef x 1`.
orthogonalized : bool, optional
Whether or not to perform impulse using orthogonalized innovations.
Note that this will also affect custom `impulse` vectors. Default
is False.
cumulative : bool, optional
Whether or not to return cumulative impulse responses. Default is
False.
anchor : int, str, or datetime, optional
Time point within the sample for the state innovation impulse. Type
depends on the index of the given `endog` in the model. Two special
cases are the strings 'start' and 'end', which refer to setting the
impulse at the first and last points of the sample, respectively.
Integer values can run from 0 to `nobs - 1`, or can be negative to
apply negative indexing. Finally, if a date/time index was provided
to the model, then this argument can be a date string to parse or a
datetime type. Default is 'start'.
exog : array_like, optional
New observations of exogenous regressors for out-of-sample periods,
if applicable.
transformed : bool, optional
Whether or not `params` is already transformed. Default is
True.
includes_fixed : bool, optional
If parameters were previously fixed with the `fix_params` method,
this argument describes whether or not `params` also includes
the fixed parameters, in addition to the free parameters. Default
is False.
original_scale : bool, optional
If the model specification standardized the data, whether or not
to return impulse responses in the original scale of the data (i.e.
before it was standardized by the model). Default is True.
**kwargs
If the model has time-varying design or transition matrices and the
combination of `anchor` and `steps` implies creating impulse
responses for the out-of-sample period, then these matrices must
have updated values provided for the out-of-sample steps. For
example, if `design` is a time-varying component, `nobs` is 10,
`anchor=1`, and `steps` is 15, a (`k_endog` x `k_states` x 7)
matrix must be provided with the new design matrix values.
Returns
-------
impulse_responses : ndarray
Responses for each endogenous variable due to the impulse
given by the `impulse` argument. For a time-invariant model, the
impulse responses are given for `steps + 1` elements (this gives
the "initial impulse" followed by `steps` responses for the
important cases of VAR and SARIMAX models), while for time-varying
models the impulse responses are only given for `steps` elements
(to avoid having to unexpectedly provide updated time-varying
matrices). | impulse_responses | python | statsmodels/statsmodels | statsmodels/tsa/statespace/dynamic_factor_mq.py | https://github.com/statsmodels/statsmodels/blob/master/statsmodels/tsa/statespace/dynamic_factor_mq.py | BSD-3-Clause |
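When `standardize=True`, the code above rescales impulse responses back to the original data scale by multiplying each column by the corresponding variable's standard deviation, via `DataFrame.multiply(..., axis=1)` (the `level=0` variant handles MultiIndex columns from replications). A minimal sketch with made-up values:

```python
import pandas as pd

# Hypothetical IRFs computed on standardized data, for two variables
irfs = pd.DataFrame({'y1': [1.0, 0.5, 0.25], 'y2': [0.2, 0.1, 0.05]})
endog_std = pd.Series({'y1': 2.0, 'y2': 0.5})  # per-variable standard deviations

# Convert back to the original scale; columns align with the Series index
irfs_original = irfs.multiply(endog_std, axis=1)
```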
def factors(self):
"""
Estimates of unobserved factors.
Returns
-------
out : Bunch
Has the following attributes shown in Notes.
Notes
-----
The output is a bunch of the following format:
- `filtered`: a time series array with the filtered estimate of
the component
- `filtered_cov`: a time series array with the filtered estimate of
the variance/covariance of the component
- `smoothed`: a time series array with the smoothed estimate of
the component
- `smoothed_cov`: a time series array with the smoothed estimate of
the variance/covariance of the component
- `offset`: an integer giving the offset in the state vector where
this component begins
"""
out = None
if self.model.k_factors > 0:
iloc = self.model._s.factors_L1
ix = np.array(self.model.state_names)[iloc].tolist()
out = Bunch(
filtered=self.states.filtered.loc[:, ix],
filtered_cov=self.states.filtered_cov.loc[np.s_[ix, :], ix],
smoothed=None, smoothed_cov=None)
if self.smoothed_state is not None:
out.smoothed = self.states.smoothed.loc[:, ix]
if self.smoothed_state_cov is not None:
out.smoothed_cov = (
self.states.smoothed_cov.loc[np.s_[ix, :], ix])
return out | Estimates of unobserved factors.
Returns
-------
out : Bunch
Has the following attributes shown in Notes.
Notes
-----
The output is a bunch of the following format:
- `filtered`: a time series array with the filtered estimate of
the component
- `filtered_cov`: a time series array with the filtered estimate of
the variance/covariance of the component
- `smoothed`: a time series array with the smoothed estimate of
the component
- `smoothed_cov`: a time series array with the smoothed estimate of
the variance/covariance of the component
- `offset`: an integer giving the offset in the state vector where
this component begins | factors | python | statsmodels/statsmodels | statsmodels/tsa/statespace/dynamic_factor_mq.py | https://github.com/statsmodels/statsmodels/blob/master/statsmodels/tsa/statespace/dynamic_factor_mq.py | BSD-3-Clause |
def get_coefficients_of_determination(self, method='individual',
which=None):
"""
Get coefficients of determination (R-squared) for variables / factors.
Parameters
----------
method : {'individual', 'joint', 'cumulative'}, optional
The type of R-squared values to generate. "individual" plots
the R-squared of each variable on each factor; "joint" plots the
R-squared of each variable on each factor that it loads on;
"cumulative" plots the successive R-squared values as each
additional factor is added to the regression, for each variable.
Default is 'individual'.
which : {None, 'filtered', 'smoothed'}, optional
Whether to compute R-squared values based on filtered or smoothed
estimates of the factors. Default is 'smoothed' if smoothed results
are available and 'filtered' otherwise.
Returns
-------
rsquared : pd.DataFrame or pd.Series
The R-squared values from regressions of observed variables on
one or more of the factors. If method='individual' or
method='cumulative', this will be a Pandas DataFrame with observed
variables as the index and factors as the columns. If
method='joint', will be a Pandas Series with observed variables as
the index.
See Also
--------
plot_coefficients_of_determination
coefficients_of_determination
"""
from statsmodels.tools import add_constant
method = string_like(method, 'method', options=['individual', 'joint',
'cumulative'])
if which is None:
which = 'filtered' if self.smoothed_state is None else 'smoothed'
k_endog = self.model.k_endog
k_factors = self.model.k_factors
ef_map = self.model._s.endog_factor_map
endog_names = self.model.endog_names
factor_names = self.model.factor_names
if method == 'individual':
coefficients = np.zeros((k_endog, k_factors))
for i in range(k_factors):
exog = add_constant(self.factors[which].iloc[:, i])
for j in range(k_endog):
if ef_map.iloc[j, i]:
endog = self.filter_results.endog[j]
coefficients[j, i] = (
OLS(endog, exog, missing='drop').fit().rsquared)
else:
coefficients[j, i] = np.nan
coefficients = pd.DataFrame(coefficients, index=endog_names,
columns=factor_names)
elif method == 'joint':
coefficients = np.zeros((k_endog,))
exog = add_constant(self.factors[which])
for j in range(k_endog):
endog = self.filter_results.endog[j]
ix = np.r_[True, ef_map.iloc[j]].tolist()
X = exog.loc[:, ix]
coefficients[j] = (
OLS(endog, X, missing='drop').fit().rsquared)
coefficients = pd.Series(coefficients, index=endog_names)
elif method == 'cumulative':
coefficients = np.zeros((k_endog, k_factors))
exog = add_constant(self.factors[which])
for j in range(k_endog):
endog = self.filter_results.endog[j]
for i in range(k_factors):
if self.model._s.endog_factor_map.iloc[j, i]:
ix = np.r_[True, ef_map.iloc[j, :i + 1],
[False] * (k_factors - i - 1)]
X = exog.loc[:, ix.astype(bool).tolist()]
coefficients[j, i] = (
OLS(endog, X, missing='drop').fit().rsquared)
else:
coefficients[j, i] = np.nan
coefficients = pd.DataFrame(coefficients, index=endog_names,
columns=factor_names)
return coefficients | Get coefficients of determination (R-squared) for variables / factors.
Parameters
----------
method : {'individual', 'joint', 'cumulative'}, optional
The type of R-squared values to generate. "individual" plots
the R-squared of each variable on each factor; "joint" plots the
R-squared of each variable on each factor that it loads on;
"cumulative" plots the successive R-squared values as each
additional factor is added to the regression, for each variable.
Default is 'individual'.
        which : {None, 'filtered', 'smoothed'}, optional
Whether to compute R-squared values based on filtered or smoothed
estimates of the factors. Default is 'smoothed' if smoothed results
are available and 'filtered' otherwise.
Returns
-------
rsquared : pd.DataFrame or pd.Series
The R-squared values from regressions of observed variables on
one or more of the factors. If method='individual' or
method='cumulative', this will be a Pandas DataFrame with observed
            variables as the index and factors as the columns. If
method='joint', will be a Pandas Series with observed variables as
the index.
See Also
--------
plot_coefficients_of_determination
coefficients_of_determination | get_coefficients_of_determination | python | statsmodels/statsmodels | statsmodels/tsa/statespace/dynamic_factor_mq.py | https://github.com/statsmodels/statsmodels/blob/master/statsmodels/tsa/statespace/dynamic_factor_mq.py | BSD-3-Clause |
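The `method='individual'` branch above regresses each observed variable on a single factor plus a constant and records the R-squared. A minimal sketch of that regression with plain NumPy (made-up data; the `rsquared` helper is hypothetical, standing in for the `OLS(...).fit().rsquared` call):

```python
import numpy as np

def rsquared(endog, factor):
    """R^2 from regressing one observed series on a factor plus a constant
    (the 'individual' method sketched above)."""
    X = np.column_stack([np.ones_like(factor), factor])
    beta, *_ = np.linalg.lstsq(X, endog, rcond=None)
    resid = endog - X @ beta
    tss = (endog - endog.mean()) @ (endog - endog.mean())
    return 1.0 - (resid @ resid) / tss

rng = np.random.default_rng(0)
factor = rng.normal(size=200)
endog = 1.5 + 2.0 * factor + 0.1 * rng.normal(size=200)  # loads heavily
r2 = rsquared(endog, factor)
```

Because the constructed series is dominated by the factor, the R-squared here is close to one; a variable that does not load on the factor would give a value near zero.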
def coefficients_of_determination(self):
"""
Individual coefficients of determination (:math:`R^2`).
Coefficients of determination (:math:`R^2`) from regressions of
endogenous variables on individual estimated factors.
Returns
-------
coefficients_of_determination : ndarray
A `k_endog` x `k_factors` array, where
`coefficients_of_determination[i, j]` represents the :math:`R^2`
            value from a regression of endogenous variable `i` on factor `j`
            and a constant.
Notes
-----
Although it can be difficult to interpret the estimated factor loadings
and factors, it is often helpful to use the coefficients of
determination from univariate regressions to assess the importance of
each factor in explaining the variation in each endogenous variable.
In models with many variables and factors, this can sometimes lend
interpretation to the factors (for example sometimes one factor will
load primarily on real variables and another on nominal variables).
See Also
--------
get_coefficients_of_determination
plot_coefficients_of_determination
"""
return self.get_coefficients_of_determination(method='individual') | Individual coefficients of determination (:math:`R^2`).
Coefficients of determination (:math:`R^2`) from regressions of
endogenous variables on individual estimated factors.
Returns
-------
coefficients_of_determination : ndarray
A `k_endog` x `k_factors` array, where
`coefficients_of_determination[i, j]` represents the :math:`R^2`
            value from a regression of endogenous variable `i` on factor `j`
            and a constant.
Notes
-----
Although it can be difficult to interpret the estimated factor loadings
and factors, it is often helpful to use the coefficients of
determination from univariate regressions to assess the importance of
each factor in explaining the variation in each endogenous variable.
In models with many variables and factors, this can sometimes lend
interpretation to the factors (for example sometimes one factor will
load primarily on real variables and another on nominal variables).
See Also
--------
get_coefficients_of_determination
plot_coefficients_of_determination | coefficients_of_determination | python | statsmodels/statsmodels | statsmodels/tsa/statespace/dynamic_factor_mq.py | https://github.com/statsmodels/statsmodels/blob/master/statsmodels/tsa/statespace/dynamic_factor_mq.py | BSD-3-Clause |
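As the notes above suggest, a common use of the individual R-squared table is to see which factor each variable loads on most strongly. An illustrative sketch with made-up numbers (variable and factor names are hypothetical; NaN marks a variable that does not load on a factor):

```python
import numpy as np
import pandas as pd

# Illustrative R^2 table: rows are observed variables, columns are factors.
rsq = pd.DataFrame(
    [[0.80, 0.05], [0.10, 0.70], [np.nan, 0.40]],
    index=['gdp', 'cpi', 'unemp'], columns=['real', 'nominal'])

# For each variable, the factor that explains most of its variation.
best = rsq.idxmax(axis=1)  # NaNs are ignored by idxmax
```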
def plot_coefficients_of_determination(self, method='individual',
which=None, endog_labels=None,
fig=None, figsize=None):
"""
Plot coefficients of determination (R-squared) for variables / factors.
Parameters
----------
method : {'individual', 'joint', 'cumulative'}, optional
The type of R-squared values to generate. "individual" plots
the R-squared of each variable on each factor; "joint" plots the
R-squared of each variable on each factor that it loads on;
"cumulative" plots the successive R-squared values as each
additional factor is added to the regression, for each variable.
Default is 'individual'.
        which : {None, 'filtered', 'smoothed'}, optional
Whether to compute R-squared values based on filtered or smoothed
estimates of the factors. Default is 'smoothed' if smoothed results
are available and 'filtered' otherwise.
endog_labels : bool, optional
Whether or not to label the endogenous variables along the x-axis
of the plots. Default is to include labels if there are 5 or fewer
endogenous variables.
fig : Figure, optional
If given, subplots are created in this figure instead of in a new
figure. Note that the grid will be created in the provided
figure using `fig.add_subplot()`.
figsize : tuple, optional
If a figure is created, this argument allows specifying a size.
The tuple is (width, height).
Notes
-----
The endogenous variables are arranged along the x-axis according to
their position in the model's `endog` array.
See Also
--------
get_coefficients_of_determination
"""
from statsmodels.graphics.utils import _import_mpl, create_mpl_fig
_import_mpl()
fig = create_mpl_fig(fig, figsize)
method = string_like(method, 'method', options=['individual', 'joint',
'cumulative'])
# Should we label endogenous variables?
if endog_labels is None:
endog_labels = self.model.k_endog <= 5
# Plot the coefficients of determination
rsquared = self.get_coefficients_of_determination(method=method,
which=which)
if method in ['individual', 'cumulative']:
plot_idx = 1
for factor_name, coeffs in rsquared.T.iterrows():
# Create the new axis
ax = fig.add_subplot(self.model.k_factors, 1, plot_idx)
ax.set_ylim((0, 1))
ax.set(title=f'{factor_name}', ylabel=r'$R^2$')
coeffs.plot(ax=ax, kind='bar')
if plot_idx < len(rsquared.columns) or not endog_labels:
ax.xaxis.set_ticklabels([])
plot_idx += 1
elif method == 'joint':
ax = fig.add_subplot(1, 1, 1)
ax.set_ylim((0, 1))
ax.set(title=r'$R^2$ - regression on all loaded factors',
ylabel=r'$R^2$')
rsquared.plot(ax=ax, kind='bar')
if not endog_labels:
ax.xaxis.set_ticklabels([])
return fig | Plot coefficients of determination (R-squared) for variables / factors.
Parameters
----------
method : {'individual', 'joint', 'cumulative'}, optional
The type of R-squared values to generate. "individual" plots
the R-squared of each variable on each factor; "joint" plots the
R-squared of each variable on each factor that it loads on;
"cumulative" plots the successive R-squared values as each
additional factor is added to the regression, for each variable.
Default is 'individual'.
        which : {None, 'filtered', 'smoothed'}, optional
Whether to compute R-squared values based on filtered or smoothed
estimates of the factors. Default is 'smoothed' if smoothed results
are available and 'filtered' otherwise.
endog_labels : bool, optional
Whether or not to label the endogenous variables along the x-axis
of the plots. Default is to include labels if there are 5 or fewer
endogenous variables.
fig : Figure, optional
If given, subplots are created in this figure instead of in a new
figure. Note that the grid will be created in the provided
figure using `fig.add_subplot()`.
figsize : tuple, optional
If a figure is created, this argument allows specifying a size.
The tuple is (width, height).
Notes
-----
The endogenous variables are arranged along the x-axis according to
their position in the model's `endog` array.
See Also
--------
get_coefficients_of_determination | plot_coefficients_of_determination | python | statsmodels/statsmodels | statsmodels/tsa/statespace/dynamic_factor_mq.py | https://github.com/statsmodels/statsmodels/blob/master/statsmodels/tsa/statespace/dynamic_factor_mq.py | BSD-3-Clause |
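The `'individual'` layout in the method above — one bar subplot per factor, one bar per observed variable, with the y-axis clamped to [0, 1] — can be sketched directly with matplotlib. This assumes matplotlib is installed; the factor and variable names below are made up:

```python
import matplotlib
matplotlib.use('Agg')  # non-interactive backend so this runs headless
import matplotlib.pyplot as plt

# Sketch of the 'individual' layout: one bar subplot per factor.
rsq = {'factor_0': [0.8, 0.1, 0.3], 'factor_1': [0.05, 0.7, 0.4]}
endog_names = ['y1', 'y2', 'y3']

fig = plt.figure(figsize=(6, 4))
for i, (name, vals) in enumerate(rsq.items(), start=1):
    ax = fig.add_subplot(len(rsq), 1, i)
    ax.bar(endog_names, vals)
    ax.set_ylim(0, 1)
    ax.set(title=name, ylabel='$R^2$')
n_axes = len(fig.axes)
```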
def news(self, comparison, impact_date=None, impacted_variable=None,
start=None, end=None, periods=None, exog=None,
comparison_type=None, revisions_details_start=False,
state_index=None, return_raw=False, tolerance=1e-10,
endog_quarterly=None, original_scale=True, **kwargs):
"""
Compute impacts from updated data (news and revisions).
Parameters
----------
comparison : array_like or MLEResults
An updated dataset with updated and/or revised data from which the
news can be computed, or an updated or previous results object
to use in computing the news.
impact_date : int, str, or datetime, optional
A single specific period of impacts from news and revisions to
compute. Can also be a date string to parse or a datetime type.
This argument cannot be used in combination with `start`, `end`, or
`periods`. Default is the first out-of-sample observation.
impacted_variable : str, list, array, or slice, optional
Observation variable label or slice of labels specifying that only
specific impacted variables should be shown in the News output. The
impacted variable(s) describe the variables that were *affected* by
the news. If you do not know the labels for the variables, check
the `endog_names` attribute of the model instance.
start : int, str, or datetime, optional
The first period of impacts from news and revisions to compute.
Can also be a date string to parse or a datetime type. Default is
the first out-of-sample observation.
end : int, str, or datetime, optional
The last period of impacts from news and revisions to compute.
Can also be a date string to parse or a datetime type. Default is
the first out-of-sample observation.
periods : int, optional
The number of periods of impacts from news and revisions to
compute.
exog : array_like, optional
Array of exogenous regressors for the out-of-sample period, if
applicable.
comparison_type : {None, 'previous', 'updated'}
This denotes whether the `comparison` argument represents a
*previous* results object or dataset or an *updated* results object
or dataset. If not specified, then an attempt is made to determine
the comparison type.
state_index : array_like or "common", optional
An optional index specifying a subset of states to use when
constructing the impacts of revisions and news. For example, if
`state_index=[0, 1]` is passed, then only the impacts to the
observed variables arising from the impacts to the first two
states will be returned. If the string "common" is passed and the
model includes idiosyncratic AR(1) components, news will only be
computed based on the common states. Default is to use all states.
return_raw : bool, optional
Whether or not to return only the specific output or a full
results object. Default is to return a full results object.
tolerance : float, optional
The numerical threshold for determining zero impact. Default is
that any impact less than 1e-10 is assumed to be zero.
endog_quarterly : array_like, optional
New observations of quarterly variables, if `comparison` was
provided as an updated monthly dataset. If this argument is
provided, it must be a Pandas Series or DataFrame with a
DatetimeIndex or PeriodIndex at the quarterly frequency.
References
----------
.. [1] Bańbura, Marta, and Michele Modugno.
"Maximum likelihood estimation of factor models on datasets with
arbitrary pattern of missing data."
Journal of Applied Econometrics 29, no. 1 (2014): 133-160.
.. [2] Bańbura, Marta, Domenico Giannone, and Lucrezia Reichlin.
"Nowcasting."
The Oxford Handbook of Economic Forecasting. July 8, 2011.
.. [3] Bańbura, Marta, Domenico Giannone, Michele Modugno, and Lucrezia
Reichlin.
"Now-casting and the real-time data flow."
In Handbook of economic forecasting, vol. 2, pp. 195-237.
Elsevier, 2013.
"""
if state_index == 'common':
state_index = (
np.arange(self.model.k_states - self.model.k_endog))
news_results = super().news(
comparison, impact_date=impact_date,
impacted_variable=impacted_variable, start=start, end=end,
periods=periods, exog=exog, comparison_type=comparison_type,
revisions_details_start=revisions_details_start,
state_index=state_index, return_raw=return_raw,
tolerance=tolerance, endog_quarterly=endog_quarterly, **kwargs)
# If we have standardized the data, we may want to report the news in
# the original scale. If so, we need to modify the data to "undo" the
# standardization.
if not return_raw and self.model.standardize and original_scale:
endog_mean = self.model._endog_mean
endog_std = self.model._endog_std
# Don't need to add in the mean for the impacts, since they are
# the difference of two forecasts
news_results.total_impacts = (
news_results.total_impacts * endog_std)
news_results.update_impacts = (
news_results.update_impacts * endog_std)
if news_results.revision_impacts is not None:
news_results.revision_impacts = (
news_results.revision_impacts * endog_std)
if news_results.revision_detailed_impacts is not None:
news_results.revision_detailed_impacts = (
news_results.revision_detailed_impacts * endog_std)
if news_results.revision_grouped_impacts is not None:
news_results.revision_grouped_impacts = (
news_results.revision_grouped_impacts * endog_std)
# Update forecasts
for name in ['prev_impacted_forecasts', 'news', 'revisions',
'update_realized', 'update_forecasts',
'revised', 'revised_prev', 'post_impacted_forecasts',
'revisions_all', 'revised_all', 'revised_prev_all']:
dta = getattr(news_results, name)
# for pd.Series, dta.multiply(...) and (sometimes) dta.add(...)
# remove the name attribute; save it now so that we can add it
# back in
orig_name = None
if hasattr(dta, 'name'):
orig_name = dta.name
dta = dta.multiply(endog_std, level=1)
if name not in ['news', 'revisions']:
dta = dta.add(endog_mean, level=1)
# add back in the name attribute if it was removed
if orig_name is not None:
dta.name = orig_name
setattr(news_results, name, dta)
# For the weights: rows correspond to update (date, variable) and
# columns correspond to the impacted variable.
# 1. Because we have modified the updates (realized, forecasts, and
# forecast errors) to be in the scale of the original updated
# variable, we need to essentially reverse that change for each
# row of the weights by dividing by the standard deviation of
# that row's updated variable
# 2. Because we want the impacts to be in the scale of the original
# impacted variable, we need to multiply each column by the
# standard deviation of that column's impacted variable
news_results.weights = (
news_results.weights.divide(endog_std, axis=0, level=1)
.multiply(endog_std, axis=1, level=1))
news_results.revision_weights = (
news_results.revision_weights
.divide(endog_std, axis=0, level=1)
.multiply(endog_std, axis=1, level=1))
return news_results | Compute impacts from updated data (news and revisions).
Parameters
----------
comparison : array_like or MLEResults
An updated dataset with updated and/or revised data from which the
news can be computed, or an updated or previous results object
to use in computing the news.
impact_date : int, str, or datetime, optional
A single specific period of impacts from news and revisions to
compute. Can also be a date string to parse or a datetime type.
This argument cannot be used in combination with `start`, `end`, or
`periods`. Default is the first out-of-sample observation.
impacted_variable : str, list, array, or slice, optional
Observation variable label or slice of labels specifying that only
specific impacted variables should be shown in the News output. The
impacted variable(s) describe the variables that were *affected* by
the news. If you do not know the labels for the variables, check
the `endog_names` attribute of the model instance.
start : int, str, or datetime, optional
The first period of impacts from news and revisions to compute.
Can also be a date string to parse or a datetime type. Default is
the first out-of-sample observation.
end : int, str, or datetime, optional
The last period of impacts from news and revisions to compute.
Can also be a date string to parse or a datetime type. Default is
the first out-of-sample observation.
periods : int, optional
The number of periods of impacts from news and revisions to
compute.
exog : array_like, optional
Array of exogenous regressors for the out-of-sample period, if
applicable.
comparison_type : {None, 'previous', 'updated'}
This denotes whether the `comparison` argument represents a
*previous* results object or dataset or an *updated* results object
or dataset. If not specified, then an attempt is made to determine
the comparison type.
state_index : array_like or "common", optional
An optional index specifying a subset of states to use when
constructing the impacts of revisions and news. For example, if
`state_index=[0, 1]` is passed, then only the impacts to the
observed variables arising from the impacts to the first two
states will be returned. If the string "common" is passed and the
model includes idiosyncratic AR(1) components, news will only be
computed based on the common states. Default is to use all states.
return_raw : bool, optional
Whether or not to return only the specific output or a full
results object. Default is to return a full results object.
tolerance : float, optional
The numerical threshold for determining zero impact. Default is
that any impact less than 1e-10 is assumed to be zero.
endog_quarterly : array_like, optional
New observations of quarterly variables, if `comparison` was
provided as an updated monthly dataset. If this argument is
provided, it must be a Pandas Series or DataFrame with a
DatetimeIndex or PeriodIndex at the quarterly frequency.
References
----------
.. [1] Bańbura, Marta, and Michele Modugno.
"Maximum likelihood estimation of factor models on datasets with
arbitrary pattern of missing data."
Journal of Applied Econometrics 29, no. 1 (2014): 133-160.
.. [2] Bańbura, Marta, Domenico Giannone, and Lucrezia Reichlin.
"Nowcasting."
The Oxford Handbook of Economic Forecasting. July 8, 2011.
.. [3] Bańbura, Marta, Domenico Giannone, Michele Modugno, and Lucrezia
Reichlin.
"Now-casting and the real-time data flow."
In Handbook of economic forecasting, vol. 2, pp. 195-237.
Elsevier, 2013. | news | python | statsmodels/statsmodels | statsmodels/tsa/statespace/dynamic_factor_mq.py | https://github.com/statsmodels/statsmodels/blob/master/statsmodels/tsa/statespace/dynamic_factor_mq.py | BSD-3-Clause |
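The rescaling logic in the `news` method above hinges on one detail: impacts are differences of two forecasts, so the mean cancels and only the standard deviation is needed, while forecast *levels* need both the std and the mean added back. A small numeric sketch with made-up values:

```python
import numpy as np

# Illustrative original-scale moments used for standardization.
endog_mean, endog_std = 100.0, 5.0

fcst_prev_std, fcst_post_std = 0.40, 0.52     # standardized forecasts
impact_std = fcst_post_std - fcst_prev_std    # standardized impact

# Levels need std and mean; the impact only needs std (mean cancels).
fcst_prev = fcst_prev_std * endog_std + endog_mean
fcst_post = fcst_post_std * endog_std + endog_mean
impact = impact_std * endog_std

assert np.isclose(fcst_post - fcst_prev, impact)
```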
def append(self, endog, endog_quarterly=None, refit=False, fit_kwargs=None,
copy_initialization=True, retain_standardization=True,
**kwargs):
"""
Recreate the results object with new data appended to original data.
Creates a new result object applied to a dataset that is created by
appending new data to the end of the model's original data. The new
results can then be used for analysis or forecasting.
Parameters
----------
endog : array_like
New observations from the modeled time-series process.
endog_quarterly : array_like, optional
New observations of quarterly variables. If provided, must be a
Pandas Series or DataFrame with a DatetimeIndex or PeriodIndex at
the quarterly frequency.
refit : bool, optional
Whether to re-fit the parameters, based on the combined dataset.
Default is False (so parameters from the current results object
are used to create the new results object).
fit_kwargs : dict, optional
Keyword arguments to pass to `fit` (if `refit=True`) or `filter` /
`smooth`.
copy_initialization : bool, optional
Whether or not to copy the initialization from the current results
set to the new model. Default is True.
retain_standardization : bool, optional
Whether or not to use the mean and standard deviations that were
used to standardize the data in the current model in the new model.
Default is True.
**kwargs
Keyword arguments may be used to modify model specification
            arguments when creating the new model object.
Returns
-------
results
Updated Results object, that includes results from both the
original dataset and the new dataset.
Notes
-----
The `endog` and `exog` arguments to this method must be formatted in
the same way (e.g. Pandas Series versus Numpy array) as were the
`endog` and `exog` arrays passed to the original model.
The `endog` (and, if applicable, `endog_quarterly`) arguments to this
method should consist of new observations that occurred directly after
the last element of `endog`. For any other kind of dataset, see the
`apply` method.
This method will apply filtering to all of the original data as well
as to the new data. To apply filtering only to the new data (which
can be much faster if the original dataset is large), see the `extend`
method.
See Also
--------
extend
apply
"""
# Construct the combined dataset, if necessary
endog, k_endog_monthly = DynamicFactorMQ.construct_endog(
endog, endog_quarterly)
# Check for compatible dimensions
k_endog = endog.shape[1] if len(endog.shape) == 2 else 1
if (k_endog_monthly != self.model.k_endog_M or
k_endog != self.model.k_endog):
raise ValueError('Cannot append data of a different dimension to'
' a model.')
kwargs['k_endog_monthly'] = k_endog_monthly
return super().append(
endog, refit=refit, fit_kwargs=fit_kwargs,
copy_initialization=copy_initialization,
retain_standardization=retain_standardization, **kwargs) | Recreate the results object with new data appended to original data.
Creates a new result object applied to a dataset that is created by
appending new data to the end of the model's original data. The new
results can then be used for analysis or forecasting.
Parameters
----------
endog : array_like
New observations from the modeled time-series process.
endog_quarterly : array_like, optional
New observations of quarterly variables. If provided, must be a
Pandas Series or DataFrame with a DatetimeIndex or PeriodIndex at
the quarterly frequency.
refit : bool, optional
Whether to re-fit the parameters, based on the combined dataset.
Default is False (so parameters from the current results object
are used to create the new results object).
fit_kwargs : dict, optional
Keyword arguments to pass to `fit` (if `refit=True`) or `filter` /
`smooth`.
copy_initialization : bool, optional
Whether or not to copy the initialization from the current results
set to the new model. Default is True.
retain_standardization : bool, optional
Whether or not to use the mean and standard deviations that were
used to standardize the data in the current model in the new model.
Default is True.
**kwargs
Keyword arguments may be used to modify model specification
            arguments when creating the new model object.
Returns
-------
results
Updated Results object, that includes results from both the
original dataset and the new dataset.
Notes
-----
The `endog` and `exog` arguments to this method must be formatted in
the same way (e.g. Pandas Series versus Numpy array) as were the
`endog` and `exog` arrays passed to the original model.
The `endog` (and, if applicable, `endog_quarterly`) arguments to this
method should consist of new observations that occurred directly after
the last element of `endog`. For any other kind of dataset, see the
`apply` method.
This method will apply filtering to all of the original data as well
as to the new data. To apply filtering only to the new data (which
can be much faster if the original dataset is large), see the `extend`
method.
See Also
--------
extend
apply | append | python | statsmodels/statsmodels | statsmodels/tsa/statespace/dynamic_factor_mq.py | https://github.com/statsmodels/statsmodels/blob/master/statsmodels/tsa/statespace/dynamic_factor_mq.py | BSD-3-Clause |
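The dimension guard near the end of `append` above can be isolated into a small helper. This sketch mirrors the check (the `check_append_dims` name is hypothetical):

```python
import numpy as np

def check_append_dims(new_endog, k_endog_model):
    """Sketch of the guard in `append`: new data must have the same number
    of variables as the original model."""
    k_endog = new_endog.shape[1] if new_endog.ndim == 2 else 1
    if k_endog != k_endog_model:
        raise ValueError('Cannot append data of a different dimension to'
                         ' a model.')
    return k_endog

ok = check_append_dims(np.zeros((4, 3)), 3)       # matching width passes
try:
    check_append_dims(np.zeros((4, 2)), 3)        # mismatched width raises
    failed = False
except ValueError:
    failed = True
```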
def extend(self, endog, endog_quarterly=None, fit_kwargs=None,
retain_standardization=True, **kwargs):
"""
Recreate the results object for new data that extends original data.
Creates a new result object applied to a new dataset that is assumed to
follow directly from the end of the model's original data. The new
results can then be used for analysis or forecasting.
Parameters
----------
endog : array_like
New observations from the modeled time-series process.
endog_quarterly : array_like, optional
New observations of quarterly variables. If provided, must be a
Pandas Series or DataFrame with a DatetimeIndex or PeriodIndex at
the quarterly frequency.
fit_kwargs : dict, optional
Keyword arguments to pass to `filter` or `smooth`.
retain_standardization : bool, optional
Whether or not to use the mean and standard deviations that were
used to standardize the data in the current model in the new model.
Default is True.
**kwargs
Keyword arguments may be used to modify model specification
            arguments when creating the new model object.
Returns
-------
results
Updated Results object, that includes results only for the new
dataset.
See Also
--------
append
apply
Notes
-----
The `endog` argument to this method should consist of new observations
that occurred directly after the last element of the model's original
`endog` array. For any other kind of dataset, see the `apply` method.
This method will apply filtering only to the new data provided by the
`endog` argument, which can be much faster than re-filtering the entire
dataset. However, the returned results object will only have results
for the new data. To retrieve results for both the new data and the
original data, see the `append` method.
"""
# Construct the combined dataset, if necessary
endog, k_endog_monthly = DynamicFactorMQ.construct_endog(
endog, endog_quarterly)
# Check for compatible dimensions
k_endog = endog.shape[1] if len(endog.shape) == 2 else 1
if (k_endog_monthly != self.model.k_endog_M or
k_endog != self.model.k_endog):
            raise ValueError('Cannot extend data of a different dimension to'
                             ' a model.')
kwargs['k_endog_monthly'] = k_endog_monthly
return super().extend(
endog, fit_kwargs=fit_kwargs,
retain_standardization=retain_standardization, **kwargs) | Recreate the results object for new data that extends original data.
Creates a new result object applied to a new dataset that is assumed to
follow directly from the end of the model's original data. The new
results can then be used for analysis or forecasting.
Parameters
----------
endog : array_like
New observations from the modeled time-series process.
endog_quarterly : array_like, optional
New observations of quarterly variables. If provided, must be a
Pandas Series or DataFrame with a DatetimeIndex or PeriodIndex at
the quarterly frequency.
fit_kwargs : dict, optional
Keyword arguments to pass to `filter` or `smooth`.
retain_standardization : bool, optional
Whether or not to use the mean and standard deviations that were
used to standardize the data in the current model in the new model.
Default is True.
**kwargs
Keyword arguments may be used to modify model specification
            arguments when creating the new model object.
Returns
-------
results
Updated Results object, that includes results only for the new
dataset.
See Also
--------
append
apply
Notes
-----
The `endog` argument to this method should consist of new observations
that occurred directly after the last element of the model's original
`endog` array. For any other kind of dataset, see the `apply` method.
This method will apply filtering only to the new data provided by the
`endog` argument, which can be much faster than re-filtering the entire
dataset. However, the returned results object will only have results
for the new data. To retrieve results for both the new data and the
original data, see the `append` method. | extend | python | statsmodels/statsmodels | statsmodels/tsa/statespace/dynamic_factor_mq.py | https://github.com/statsmodels/statsmodels/blob/master/statsmodels/tsa/statespace/dynamic_factor_mq.py | BSD-3-Clause |
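As the notes above stress, `extend` expects observations that start directly after the original sample. With a monthly pandas PeriodIndex, that contiguity is easy to construct and verify (illustrative dates):

```python
import pandas as pd

# Original sample and a new block that begins one period after it ends.
orig = pd.period_range('2020-01', periods=12, freq='M')
new = pd.period_range(orig[-1] + 1, periods=3, freq='M')

# The new index starts exactly one period after the original ends.
contiguous = new[0] == orig[-1] + 1
```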
def apply(self, endog, k_endog_monthly=None, endog_quarterly=None,
refit=False, fit_kwargs=None, copy_initialization=False,
retain_standardization=True, **kwargs):
"""
Apply the fitted parameters to new data unrelated to the original data.
Creates a new result object using the current fitted parameters,
applied to a completely new dataset that is assumed to be unrelated to
the model's original data. The new results can then be used for
analysis or forecasting.
Parameters
----------
endog : array_like
New observations from the modeled time-series process.
k_endog_monthly : int, optional
If specifying a monthly/quarterly mixed frequency model in which
the provided `endog` dataset contains both the monthly and
quarterly data, this variable should be used to indicate how many
of the variables are monthly.
endog_quarterly : array_like, optional
New observations of quarterly variables. If provided, must be a
Pandas Series or DataFrame with a DatetimeIndex or PeriodIndex at
the quarterly frequency.
refit : bool, optional
Whether to re-fit the parameters, using the new dataset.
Default is False (so parameters from the current results object
are used to create the new results object).
fit_kwargs : dict, optional
Keyword arguments to pass to `fit` (if `refit=True`) or `filter` /
`smooth`.
copy_initialization : bool, optional
Whether or not to copy the initialization from the current results
set to the new model. Default is False.
retain_standardization : bool, optional
Whether or not to use the mean and standard deviations that were
used to standardize the data in the current model in the new model.
Default is True.
**kwargs
Keyword arguments may be used to modify model specification
            arguments when creating the new model object.
Returns
-------
results
Updated Results object, that includes results only for the new
dataset.
See Also
--------
statsmodels.tsa.statespace.mlemodel.MLEResults.append
statsmodels.tsa.statespace.mlemodel.MLEResults.apply
Notes
-----
The `endog` argument to this method should consist of new observations
that are not necessarily related to the original model's `endog`
        dataset. For observations that continue the original dataset by
        following directly after its last element, see the `append` and
        `extend` methods.
"""
mod = self.model.clone(endog, k_endog_monthly=k_endog_monthly,
endog_quarterly=endog_quarterly,
retain_standardization=retain_standardization,
**kwargs)
if copy_initialization:
init = initialization.Initialization.from_results(
self.filter_results)
mod.ssm.initialization = init
res = self._apply(mod, refit=refit, fit_kwargs=fit_kwargs)
return res | Apply the fitted parameters to new data unrelated to the original data.
Creates a new result object using the current fitted parameters,
applied to a completely new dataset that is assumed to be unrelated to
the model's original data. The new results can then be used for
analysis or forecasting.
Parameters
----------
endog : array_like
New observations from the modeled time-series process.
k_endog_monthly : int, optional
If specifying a monthly/quarterly mixed frequency model in which
the provided `endog` dataset contains both the monthly and
quarterly data, this variable should be used to indicate how many
of the variables are monthly.
endog_quarterly : array_like, optional
New observations of quarterly variables. If provided, must be a
Pandas Series or DataFrame with a DatetimeIndex or PeriodIndex at
the quarterly frequency.
refit : bool, optional
Whether to re-fit the parameters, using the new dataset.
Default is False (so parameters from the current results object
are used to create the new results object).
fit_kwargs : dict, optional
Keyword arguments to pass to `fit` (if `refit=True`) or `filter` /
`smooth`.
copy_initialization : bool, optional
Whether or not to copy the initialization from the current results
set to the new model. Default is False.
retain_standardization : bool, optional
Whether or not to use the mean and standard deviations that were
used to standardize the data in the current model in the new model.
Default is True.
**kwargs
Keyword arguments may be used to modify model specification
    arguments when creating the new model object.
Returns
-------
results
    Updated Results object that includes results only for the new
dataset.
See Also
--------
statsmodels.tsa.statespace.mlemodel.MLEResults.append
statsmodels.tsa.statespace.mlemodel.MLEResults.apply
Notes
-----
The `endog` argument to this method should consist of new observations
that are not necessarily related to the original model's `endog`
dataset. For observations that continue the original dataset by following
directly after its last element, see the `append` and `extend` methods. | apply | python | statsmodels/statsmodels | statsmodels/tsa/statespace/dynamic_factor_mq.py | https://github.com/statsmodels/statsmodels/blob/master/statsmodels/tsa/statespace/dynamic_factor_mq.py | BSD-3-Clause
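The idea behind `apply` — re-using parameters fitted on one dataset to produce filtered results for a completely unrelated dataset — can be sketched without the full state space machinery. The following is a hypothetical, minimal AR(1) analogue for illustration only (the function name `one_step_forecasts` and the fixed `phi` are invented here; this is not the statsmodels implementation):

```python
import numpy as np

# `phi` plays the role of the parameters estimated on the *original*
# dataset; the series passed in below is an unrelated "new" dataset.
phi = 0.8

def one_step_forecasts(endog, phi):
    """One-step-ahead AR(1) forecasts: y_hat[t] = phi * y[t-1]."""
    endog = np.asarray(endog, dtype=float)
    fcast = np.empty_like(endog)
    fcast[0] = 0.0            # zero-mean prior for the first period
    fcast[1:] = phi * endog[:-1]
    return fcast

new_endog = [1.0, 2.0, 0.5]   # new data, unrelated to the original sample
print(one_step_forecasts(new_endog, phi))  # one-step forecasts: 0.0, 0.8, 1.6
```

As in `apply` with `refit=False`, nothing is re-estimated: the fitted parameter is simply carried over and used to filter the new observations.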
def summary(self, alpha=.05, start=None, title=None, model_name=None,
display_params=True, display_diagnostics=False,
display_params_as_list=False, truncate_endog_names=None,
display_max_endog=3):
"""
Summarize the Model.
Parameters
----------
alpha : float, optional
Significance level for the confidence intervals. Default is 0.05.
start : int, optional
            Integer index of the start observation. Default is 0.
title : str, optional
The title used for the summary table.
model_name : str, optional
The name of the model used. Default is to use model class name.
Returns
-------
summary : Summary instance
This holds the summary table and text, which can be printed or
converted to various output formats.
See Also
--------
statsmodels.iolib.summary.Summary
"""
mod = self.model
# Default title / model name
if title is None:
title = 'Dynamic Factor Results'
if model_name is None:
model_name = self.model._model_name
# Get endog names
endog_names = self.model._get_endog_names(
truncate=truncate_endog_names)
# Get extra elements for top summary table
extra_top_left = None
extra_top_right = []
mle_retvals = getattr(self, 'mle_retvals', None)
mle_settings = getattr(self, 'mle_settings', None)
if mle_settings is not None and mle_settings.method == 'em':
extra_top_right += [('EM Iterations', [f'{mle_retvals.iter}'])]
# Get the basic summary tables
summary = super().summary(
alpha=alpha, start=start, title=title, model_name=model_name,
display_params=(display_params and display_params_as_list),
display_diagnostics=display_diagnostics,
truncate_endog_names=truncate_endog_names,
display_max_endog=display_max_endog,
extra_top_left=extra_top_left, extra_top_right=extra_top_right)
# Get tables of parameters
table_ix = 1
if not display_params_as_list:
# Observation equation table
data = pd.DataFrame(
self.filter_results.design[:, mod._s['factors_L1'], 0],
index=endog_names, columns=mod.factor_names)
try:
data = data.map(lambda s: '%.2f' % s)
except AttributeError:
data = data.applymap(lambda s: '%.2f' % s)
# Idiosyncratic terms
# data[' '] = ' '
k_idio = 1
if mod.idiosyncratic_ar1:
data[' idiosyncratic: AR(1)'] = (
self.params[mod._p['idiosyncratic_ar1']])
k_idio += 1
data['var.'] = self.params[mod._p['idiosyncratic_var']]
# Ensure object dtype for string assignment
cols_to_cast = data.columns[-k_idio:]
data[cols_to_cast] = data[cols_to_cast].astype(object)
try:
data.iloc[:, -k_idio:] = data.iloc[:, -k_idio:].map(
lambda s: f'{s:.2f}')
except AttributeError:
data.iloc[:, -k_idio:] = data.iloc[:, -k_idio:].applymap(
lambda s: f'{s:.2f}')
data.index.name = 'Factor loadings:'
# Clear entries for non-loading factors
base_iloc = np.arange(mod.k_factors)
for i in range(mod.k_endog):
iloc = [j for j in base_iloc
if j not in mod._s.endog_factor_iloc[i]]
data.iloc[i, iloc] = '.'
data = data.reset_index()
# Build the table
params_data = data.values
params_header = data.columns.tolist()
params_stubs = None
title = 'Observation equation:'
table = SimpleTable(
params_data, params_header, params_stubs,
txt_fmt=fmt_params, title=title)
summary.tables.insert(table_ix, table)
table_ix += 1
# Factor transitions
ix1 = 0
ix2 = 0
for i in range(len(mod._s.factor_blocks)):
block = mod._s.factor_blocks[i]
ix2 += block.k_factors
T = self.filter_results.transition
lag_names = []
for j in range(block.factor_order):
lag_names += [f'L{j + 1}.{name}'
for name in block.factor_names]
data = pd.DataFrame(T[block.factors_L1, block.factors_ar, 0],
index=block.factor_names,
columns=lag_names)
data.index.name = ''
try:
data = data.map(lambda s: '%.2f' % s)
except AttributeError:
data = data.applymap(lambda s: '%.2f' % s)
Q = self.filter_results.state_cov
# data[' '] = ''
if block.k_factors == 1:
data[' error variance'] = Q[ix1, ix1]
else:
data[' error covariance'] = block.factor_names
for j in range(block.k_factors):
data[block.factor_names[j]] = Q[ix1:ix2, ix1 + j]
cols_to_cast = data.columns[-block.k_factors:]
data[cols_to_cast] = data[cols_to_cast].astype(object)
try:
formatted_vals = data.iloc[:, -block.k_factors:].map(
lambda s: f'{s:.2f}'
)
except AttributeError:
formatted_vals = data.iloc[:, -block.k_factors:].applymap(
lambda s: f'{s:.2f}'
)
data.iloc[:, -block.k_factors:] = formatted_vals
data = data.reset_index()
params_data = data.values
params_header = data.columns.tolist()
params_stubs = None
title = f'Transition: Factor block {i}'
table = SimpleTable(
params_data, params_header, params_stubs,
txt_fmt=fmt_params, title=title)
summary.tables.insert(table_ix, table)
table_ix += 1
ix1 = ix2
return summary | Summarize the Model.
Parameters
----------
alpha : float, optional
Significance level for the confidence intervals. Default is 0.05.
start : int, optional
    Integer index of the start observation. Default is 0.
title : str, optional
The title used for the summary table.
model_name : str, optional
The name of the model used. Default is to use model class name.
Returns
-------
summary : Summary instance
This holds the summary table and text, which can be printed or
converted to various output formats.
See Also
--------
statsmodels.iolib.summary.Summary | summary | python | statsmodels/statsmodels | statsmodels/tsa/statespace/dynamic_factor_mq.py | https://github.com/statsmodels/statsmodels/blob/master/statsmodels/tsa/statespace/dynamic_factor_mq.py | BSD-3-Clause |
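The `try`/`except AttributeError` blocks in `summary` above implement a pandas-version compatibility pattern: elementwise `DataFrame.map` was added in pandas 2.1.0, while older versions spell it `applymap`. A minimal sketch of the same pattern (the sample data here is invented):

```python
import pandas as pd

data = pd.DataFrame([[0.123, 1.0], [2.5, 0.007]],
                    columns=['loading.1', 'loading.2'])
try:
    # pandas >= 2.1: elementwise map on a DataFrame
    formatted = data.map(lambda s: '%.2f' % s)
except AttributeError:
    # pandas < 2.1: the older spelling
    formatted = data.applymap(lambda s: '%.2f' % s)

print(formatted.iloc[0, 0])  # prints: 0.12
```

Falling back on `AttributeError` (rather than checking the pandas version string) keeps the formatting code working on both old and new pandas without importing version-parsing helpers.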
def data_revisions(self):
"""
Revisions to data points that existed in the previous dataset
Returns
-------
data_revisions : pd.DataFrame
            Index is a MultiIndex consisting of `revision date` and
`revised variable`. The columns are:
- `observed (prev)`: the value of the data as it was observed
in the previous dataset.
- `revised`: the revised value of the data, as it is observed
in the new dataset
- `detailed impacts computed`: whether or not detailed impacts have
been computed in these NewsResults for this revision
        See Also
--------
data_updates
"""
# Save revisions data
data = pd.concat([
self.revised_all.rename('revised'),
self.revised_prev_all.rename('observed (prev)')
], axis=1).sort_index()
data['detailed impacts computed'] = (
self.revised_all.index.isin(self.revised.index))
return data | Revisions to data points that existed in the previous dataset
Returns
-------
data_revisions : pd.DataFrame
    Index is a MultiIndex consisting of `revision date` and
`revised variable`. The columns are:
- `observed (prev)`: the value of the data as it was observed
in the previous dataset.
- `revised`: the revised value of the data, as it is observed
in the new dataset
- `detailed impacts computed`: whether or not detailed impacts have
been computed in these NewsResults for this revision
See Also
--------
data_updates | data_revisions | python | statsmodels/statsmodels | statsmodels/tsa/statespace/news.py | https://github.com/statsmodels/statsmodels/blob/master/statsmodels/tsa/statespace/news.py | BSD-3-Clause |
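The assembly in `data_revisions` is plain pandas: two aligned Series (new values vs. previously observed values) are concatenated column-wise, then a boolean flag marks which revisions had detailed impacts computed. A self-contained sketch with invented sample data:

```python
import pandas as pd

# MultiIndex of (revision date, revised variable), as in data_revisions
idx = pd.MultiIndex.from_tuples(
    [('2022Q1', 'gdp'), ('2022Q2', 'cpi')],
    names=['revision date', 'revised variable'])
revised_all = pd.Series([1.1, 2.2], index=idx, name='revised')
revised_prev_all = pd.Series([1.0, 2.0], index=idx, name='observed (prev)')

# Column-wise concat aligns on the shared MultiIndex
data = pd.concat([revised_all, revised_prev_all], axis=1).sort_index()

# Pretend detailed impacts were only computed for the first revision
detailed_index = idx[:1]
data['detailed impacts computed'] = data.index.isin(detailed_index)
print(data['detailed impacts computed'].tolist())  # [True, False]
```

`Index.isin` accepts another (Multi)Index directly, which is how the real property compares `revised_all.index` against `revised.index`.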
def data_updates(self):
"""
Updated data; new entries that did not exist in the previous dataset
Returns
-------
data_updates : pd.DataFrame
Index is as MultiIndex consisting of `update date` and
`updated variable`. The columns are:
- `forecast (prev)`: the previous forecast of the new entry,
based on the information available in the previous dataset
(recall that for these updated data points, the previous dataset
had no observed value for them at all)
- `observed`: the value of the new entry, as it is observed in the
new dataset
        See Also
--------
data_revisions
"""
data = pd.concat([
self.update_realized.rename('observed'),
self.update_forecasts.rename('forecast (prev)')
], axis=1).sort_index()
return data | Updated data; new entries that did not exist in the previous dataset
Returns
-------
data_updates : pd.DataFrame
    Index is a MultiIndex consisting of `update date` and
`updated variable`. The columns are:
- `forecast (prev)`: the previous forecast of the new entry,
based on the information available in the previous dataset
(recall that for these updated data points, the previous dataset
had no observed value for them at all)
- `observed`: the value of the new entry, as it is observed in the
new dataset
See also
--------
data_revisions | data_updates | python | statsmodels/statsmodels | statsmodels/tsa/statespace/news.py | https://github.com/statsmodels/statsmodels/blob/master/statsmodels/tsa/statespace/news.py | BSD-3-Clause |
def details_by_impact(self):
"""
Details of forecast revisions from news, organized by impacts first
Returns
-------
details : pd.DataFrame
            Index is a MultiIndex consisting of:
- `impact date`: the date of the impact on the variable of interest
- `impacted variable`: the variable that is being impacted
- `update date`: the date of the data update, that results in
`news` that impacts the forecast of variables of interest
- `updated variable`: the variable being updated, that results in
`news` that impacts the forecast of variables of interest
The columns are:
- `forecast (prev)`: the previous forecast of the new entry,
based on the information available in the previous dataset
- `observed`: the value of the new entry, as it is observed in the
new dataset
- `news`: the news associated with the update (this is just the
forecast error: `observed` - `forecast (prev)`)
            - `weight`: the weight describing how the `news` affects the
forecast of the variable of interest
- `impact`: the impact of the `news` on the forecast of the
variable of interest
Notes
-----
This table decomposes updated forecasts of variables of interest from
the `news` associated with each updated datapoint from the new data
release.
This table does not summarize the impacts or show the effect of
revisions. That information can be found in the `impacts` or
`revision_details_by_impact` tables.
This form of the details table is organized so that the impacted
dates / variables are first in the index. This is convenient for
slicing by impacted variables / dates to view the details of data
updates for a particular variable or date.
However, since the `forecast (prev)` and `observed` columns have a lot
of duplication, printing the entire table gives a result that is less
easy to parse than that produced by the `details_by_update` property.
`details_by_update` contains the same information but is organized to
be more convenient for displaying the entire table of detailed updates.
At the same time, `details_by_update` is less convenient for
subsetting.
See Also
--------
details_by_update
revision_details_by_update
impacts
"""
s = self.weights.stack(level=[0, 1], **FUTURE_STACK)
df = s.rename('weight').to_frame()
if len(self.updates_iloc):
df['forecast (prev)'] = self.update_forecasts
df['observed'] = self.update_realized
df['news'] = self.news
df['impact'] = df['news'] * df['weight']
else:
df['forecast (prev)'] = []
df['observed'] = []
df['news'] = []
df['impact'] = []
df = df[['observed', 'forecast (prev)', 'news', 'weight', 'impact']]
df = df.reorder_levels([2, 3, 0, 1]).sort_index()
if self.impacted_variable is not None and len(df) > 0:
df = df.loc[np.s_[:, self.impacted_variable], :]
mask = np.abs(df['impact']) > self.tolerance
return df[mask] | Details of forecast revisions from news, organized by impacts first
Returns
-------
details : pd.DataFrame
    Index is a MultiIndex consisting of:
- `impact date`: the date of the impact on the variable of interest
- `impacted variable`: the variable that is being impacted
- `update date`: the date of the data update, that results in
`news` that impacts the forecast of variables of interest
- `updated variable`: the variable being updated, that results in
`news` that impacts the forecast of variables of interest
The columns are:
- `forecast (prev)`: the previous forecast of the new entry,
based on the information available in the previous dataset
- `observed`: the value of the new entry, as it is observed in the
new dataset
- `news`: the news associated with the update (this is just the
forecast error: `observed` - `forecast (prev)`)
    - `weight`: the weight describing how the `news` affects the
forecast of the variable of interest
- `impact`: the impact of the `news` on the forecast of the
variable of interest
Notes
-----
This table decomposes updated forecasts of variables of interest from
the `news` associated with each updated datapoint from the new data
release.
This table does not summarize the impacts or show the effect of
revisions. That information can be found in the `impacts` or
`revision_details_by_impact` tables.
This form of the details table is organized so that the impacted
dates / variables are first in the index. This is convenient for
slicing by impacted variables / dates to view the details of data
updates for a particular variable or date.
However, since the `forecast (prev)` and `observed` columns have a lot
of duplication, printing the entire table gives a result that is less
easy to parse than that produced by the `details_by_update` property.
`details_by_update` contains the same information but is organized to
be more convenient for displaying the entire table of detailed updates.
At the same time, `details_by_update` is less convenient for
subsetting.
See Also
--------
details_by_update
revision_details_by_update
impacts | details_by_impact | python | statsmodels/statsmodels | statsmodels/tsa/statespace/news.py | https://github.com/statsmodels/statsmodels/blob/master/statsmodels/tsa/statespace/news.py | BSD-3-Clause |
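The decomposition computed in `details_by_impact` boils down to stacking the weight matrix into a long-format Series and multiplying by the news (the forecast error). A self-contained sketch with invented sample data, one update impacting two forecast dates:

```python
import pandas as pd

# Rows: (update date, updated variable); columns: (impact date, impacted variable)
updates = pd.MultiIndex.from_tuples(
    [('2022-07', 'cpi')], names=['update date', 'updated variable'])
impact_cols = pd.MultiIndex.from_tuples(
    [('2022Q3', 'gdp'), ('2022Q4', 'gdp')],
    names=['impact date', 'impacted variable'])
weights = pd.DataFrame([[0.5, 0.25]], index=updates, columns=impact_cols)

# Stack both column levels -> long format with a 4-level MultiIndex
df = weights.stack(level=[0, 1]).rename('weight').to_frame()

news = 2.0  # observed - forecast (prev) for the single update
df['impact'] = news * df['weight']
print(df['impact'].tolist())  # [1.0, 0.5]
```

This is the same `impact = news * weight` identity the property uses; the real table additionally joins in the `observed`, `forecast (prev)`, and `news` columns and reorders the index levels so that impacted dates/variables come first.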
def revision_details_by_impact(self):
"""
Details of forecast revisions from revised data, organized by impacts
Returns
-------
details : pd.DataFrame
            Index is a MultiIndex consisting of:
- `impact date`: the date of the impact on the variable of interest
- `impacted variable`: the variable that is being impacted
- `revision date`: the date of the data revision, that results in
`revision` that impacts the forecast of variables of interest
- `revised variable`: the variable being revised, that results in
`news` that impacts the forecast of variables of interest
The columns are:
- `observed (prev)`: the previous value of the observation, as it
was given in the previous dataset
- `revised`: the value of the revised entry, as it is observed in
the new dataset
- `revision`: the revision (this is `revised` - `observed (prev)`)
            - `weight`: the weight describing how the `revision` affects the
forecast of the variable of interest
- `impact`: the impact of the `revision` on the forecast of the
variable of interest
Notes
-----
This table decomposes updated forecasts of variables of interest from
the `revision` associated with each revised datapoint from the new data
release.
This table does not summarize the impacts or show the effect of
new datapoints. That information can be found in the
`impacts` or `details_by_impact` tables.
Grouped impacts are shown in this table, with a "revision date" equal
to the last period prior to which detailed revisions were computed and
with "revised variable" set to the string "all prior revisions". For
these rows, all columns except "impact" will be set to NaNs.
This form of the details table is organized so that the impacted
dates / variables are first in the index. This is convenient for
slicing by impacted variables / dates to view the details of data
updates for a particular variable or date.
However, since the `observed (prev)` and `revised` columns have a lot
of duplication, printing the entire table gives a result that is less
easy to parse than that produced by the `details_by_revision` property.
`details_by_revision` contains the same information but is organized to
be more convenient for displaying the entire table of detailed
revisions. At the same time, `details_by_revision` is less convenient
for subsetting.
See Also
--------
details_by_revision
details_by_impact
impacts
"""
weights = self.revision_weights.stack(level=[0, 1], **FUTURE_STACK)
df = pd.concat([
self.revised.reindex(weights.index),
self.revised_prev.rename('observed (prev)').reindex(weights.index),
self.revisions.reindex(weights.index),
weights.rename('weight'),
(self.revisions.reindex(weights.index) * weights).rename('impact'),
], axis=1)
if self.n_revisions_grouped > 0:
df = pd.concat([df, self._revision_grouped_impacts])
# Explicitly set names for compatibility with pandas=1.2.5
df.index = df.index.set_names(
['revision date', 'revised variable',
'impact date', 'impacted variable'])
df = df.reorder_levels([2, 3, 0, 1]).sort_index()
if self.impacted_variable is not None and len(df) > 0:
df = df.loc[np.s_[:, self.impacted_variable], :]
mask = np.abs(df['impact']) > self.tolerance
return df[mask] | Details of forecast revisions from revised data, organized by impacts
Returns
-------
details : pd.DataFrame
    Index is a MultiIndex consisting of:
- `impact date`: the date of the impact on the variable of interest
- `impacted variable`: the variable that is being impacted
- `revision date`: the date of the data revision, that results in
`revision` that impacts the forecast of variables of interest
- `revised variable`: the variable being revised, that results in
`news` that impacts the forecast of variables of interest
The columns are:
- `observed (prev)`: the previous value of the observation, as it
was given in the previous dataset
- `revised`: the value of the revised entry, as it is observed in
the new dataset
- `revision`: the revision (this is `revised` - `observed (prev)`)
    - `weight`: the weight describing how the `revision` affects the
forecast of the variable of interest
- `impact`: the impact of the `revision` on the forecast of the
variable of interest
Notes
-----
This table decomposes updated forecasts of variables of interest from
the `revision` associated with each revised datapoint from the new data
release.
This table does not summarize the impacts or show the effect of
new datapoints. That information can be found in the
`impacts` or `details_by_impact` tables.
Grouped impacts are shown in this table, with a "revision date" equal
to the last period prior to which detailed revisions were computed and
with "revised variable" set to the string "all prior revisions". For
these rows, all columns except "impact" will be set to NaNs.
This form of the details table is organized so that the impacted
dates / variables are first in the index. This is convenient for
slicing by impacted variables / dates to view the details of data
updates for a particular variable or date.
However, since the `observed (prev)` and `revised` columns have a lot
of duplication, printing the entire table gives a result that is less
easy to parse than that produced by the `details_by_revision` property.
`details_by_revision` contains the same information but is organized to
be more convenient for displaying the entire table of detailed
revisions. At the same time, `details_by_revision` is less convenient
for subsetting.
See Also
--------
details_by_revision
details_by_impact
impacts | revision_details_by_impact | python | statsmodels/statsmodels | statsmodels/tsa/statespace/news.py | https://github.com/statsmodels/statsmodels/blob/master/statsmodels/tsa/statespace/news.py | BSD-3-Clause |
def details_by_update(self):
"""
Details of forecast revisions from news, organized by updates first
Returns
-------
details : pd.DataFrame
            Index is a MultiIndex consisting of:
- `update date`: the date of the data update, that results in
`news` that impacts the forecast of variables of interest
- `updated variable`: the variable being updated, that results in
`news` that impacts the forecast of variables of interest
- `forecast (prev)`: the previous forecast of the new entry,
based on the information available in the previous dataset
- `observed`: the value of the new entry, as it is observed in the
new dataset
- `impact date`: the date of the impact on the variable of interest
- `impacted variable`: the variable that is being impacted
The columns are:
- `news`: the news associated with the update (this is just the
forecast error: `observed` - `forecast (prev)`)
- `weight`: the weight describing how the `news` affects the
forecast of the variable of interest
- `impact`: the impact of the `news` on the forecast of the
variable of interest
Notes
-----
This table decomposes updated forecasts of variables of interest from
the `news` associated with each updated datapoint from the new data
release.
This table does not summarize the impacts or show the effect of
revisions. That information can be found in the `impacts` table.
This form of the details table is organized so that the updated
dates / variables are first in the index, and in this table the index
also contains the forecasts and observed values of the updates. This is
convenient for displaying the entire table of detailed updates because
it allows sparsifying duplicate entries.
However, since it includes forecasts and observed values in the index
of the table, it is not convenient for subsetting by the variable of
interest. Instead, the `details_by_impact` property is organized to
make slicing by impacted variables / dates easy. This allows, for
example, viewing the details of data updates on a particular variable
or date of interest.
See Also
--------
details_by_impact
impacts
"""
s = self.weights.stack(level=[0, 1], **FUTURE_STACK)
df = s.rename('weight').to_frame()
if len(self.updates_iloc):
df['forecast (prev)'] = self.update_forecasts
df['observed'] = self.update_realized
df['news'] = self.news
df['impact'] = df['news'] * df['weight']
else:
df['forecast (prev)'] = []
df['observed'] = []
df['news'] = []
df['impact'] = []
df = df[['forecast (prev)', 'observed', 'news',
'weight', 'impact']]
df = df.reset_index()
keys = ['update date', 'updated variable', 'observed',
'forecast (prev)', 'impact date', 'impacted variable']
df.index = pd.MultiIndex.from_arrays([df[key] for key in keys])
details = df.drop(keys, axis=1).sort_index()
if self.impacted_variable is not None and len(df) > 0:
details = details.loc[
np.s_[:, :, :, :, :, self.impacted_variable], :]
mask = np.abs(details['impact']) > self.tolerance
return details[mask] | Details of forecast revisions from news, organized by updates first
Returns
-------
details : pd.DataFrame
    Index is a MultiIndex consisting of:
- `update date`: the date of the data update, that results in
`news` that impacts the forecast of variables of interest
- `updated variable`: the variable being updated, that results in
`news` that impacts the forecast of variables of interest
- `forecast (prev)`: the previous forecast of the new entry,
based on the information available in the previous dataset
- `observed`: the value of the new entry, as it is observed in the
new dataset
- `impact date`: the date of the impact on the variable of interest
- `impacted variable`: the variable that is being impacted
The columns are:
- `news`: the news associated with the update (this is just the
forecast error: `observed` - `forecast (prev)`)
- `weight`: the weight describing how the `news` affects the
forecast of the variable of interest
- `impact`: the impact of the `news` on the forecast of the
variable of interest
Notes
-----
This table decomposes updated forecasts of variables of interest from
the `news` associated with each updated datapoint from the new data
release.
This table does not summarize the impacts or show the effect of
revisions. That information can be found in the `impacts` table.
This form of the details table is organized so that the updated
dates / variables are first in the index, and in this table the index
also contains the forecasts and observed values of the updates. This is
convenient for displaying the entire table of detailed updates because
it allows sparsifying duplicate entries.
However, since it includes forecasts and observed values in the index
of the table, it is not convenient for subsetting by the variable of
interest. Instead, the `details_by_impact` property is organized to
make slicing by impacted variables / dates easy. This allows, for
example, viewing the details of data updates on a particular variable
or date of interest.
See Also
--------
details_by_impact
impacts | details_by_update | python | statsmodels/statsmodels | statsmodels/tsa/statespace/news.py | https://github.com/statsmodels/statsmodels/blob/master/statsmodels/tsa/statespace/news.py | BSD-3-Clause |
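The "sparsifying" trick in `details_by_update` is worth isolating: value columns with repeated entries (`observed`, `forecast (prev)`) are moved into a hierarchical index, which pandas collapses when printing. A minimal sketch with invented sample data:

```python
import pandas as pd

df = pd.DataFrame({
    'update date': ['2022-07', '2022-07'],
    'updated variable': ['cpi', 'cpi'],
    'observed': [2.0, 2.0],
    'forecast (prev)': [1.5, 1.5],
    'impact date': ['2022Q3', '2022Q4'],
    'impacted variable': ['gdp', 'gdp'],
    'impact': [1.0, 0.5],
})
keys = ['update date', 'updated variable', 'observed',
        'forecast (prev)', 'impact date', 'impacted variable']

# Move the repeated columns into a 6-level MultiIndex, then drop them
# from the body; printing now shows each repeated value only once.
df.index = pd.MultiIndex.from_arrays([df[k] for k in keys])
details = df.drop(keys, axis=1).sort_index()
print(details.index.nlevels)  # 6
```

The trade-off described in the docstring follows directly: with values in the index, the full table displays compactly, but slicing by the impacted variable requires positional level selection instead of a simple `.loc` on the first level.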
def revision_details_by_update(self):
"""
Details of forecast revisions from revisions, organized by updates
Returns
-------
details : pd.DataFrame
            Index is a MultiIndex consisting of:
- `revision date`: the date of the data revision, that results in
`revision` that impacts the forecast of variables of interest
- `revised variable`: the variable being revised, that results in
`news` that impacts the forecast of variables of interest
- `observed (prev)`: the previous value of the observation, as it
was given in the previous dataset
- `revised`: the value of the revised entry, as it is observed in
the new dataset
- `impact date`: the date of the impact on the variable of interest
- `impacted variable`: the variable that is being impacted
The columns are:
- `revision`: the revision (this is `revised` - `observed (prev)`)
- `weight`: the weight describing how the `revision` affects the
forecast of the variable of interest
- `impact`: the impact of the `revision` on the forecast of the
variable of interest
Notes
-----
This table decomposes updated forecasts of variables of interest from
the `revision` associated with each revised datapoint from the new data
release.
This table does not summarize the impacts or show the effect of
        new datapoints; see `details_by_update` instead.
Grouped impacts are shown in this table, with a "revision date" equal
to the last period prior to which detailed revisions were computed and
with "revised variable" set to the string "all prior revisions". For
these rows, all columns except "impact" will be set to NaNs.
This form of the details table is organized so that the revision
dates / variables are first in the index, and in this table the index
also contains the previously observed and revised values. This is
convenient for displaying the entire table of detailed revisions
because it allows sparsifying duplicate entries.
However, since it includes previous observations and revisions in the
index of the table, it is not convenient for subsetting by the variable
of interest. Instead, the `revision_details_by_impact` property is
organized to make slicing by impacted variables / dates easy. This
allows, for example, viewing the details of data revisions on a
particular variable or date of interest.
See Also
--------
details_by_impact
impacts
"""
weights = self.revision_weights.stack(level=[0, 1], **FUTURE_STACK)
df = pd.concat([
self.revised_prev.rename('observed (prev)').reindex(weights.index),
self.revised.reindex(weights.index),
self.revisions.reindex(weights.index),
weights.rename('weight'),
(self.revisions.reindex(weights.index) * weights).rename('impact'),
], axis=1)
if self.n_revisions_grouped > 0:
df = pd.concat([df, self._revision_grouped_impacts])
# Explicitly set names for compatibility with pandas=1.2.5
df.index = df.index.set_names(
['revision date', 'revised variable',
'impact date', 'impacted variable'])
details = (df.set_index(['observed (prev)', 'revised'], append=True)
.reorder_levels([
'revision date', 'revised variable', 'revised',
'observed (prev)', 'impact date',
'impacted variable'])
.sort_index())
if self.impacted_variable is not None and len(df) > 0:
details = details.loc[
np.s_[:, :, :, :, :, self.impacted_variable], :]
mask = np.abs(details['impact']) > self.tolerance
return details[mask] | Details of forecast revisions from revisions, organized by updates
Returns
-------
details : pd.DataFrame
    Index is a MultiIndex consisting of:
- `revision date`: the date of the data revision, that results in
`revision` that impacts the forecast of variables of interest
- `revised variable`: the variable being revised, that results in
`news` that impacts the forecast of variables of interest
- `observed (prev)`: the previous value of the observation, as it
was given in the previous dataset
- `revised`: the value of the revised entry, as it is observed in
the new dataset
- `impact date`: the date of the impact on the variable of interest
- `impacted variable`: the variable that is being impacted
The columns are:
- `revision`: the revision (this is `revised` - `observed (prev)`)
- `weight`: the weight describing how the `revision` affects the
forecast of the variable of interest
- `impact`: the impact of the `revision` on the forecast of the
variable of interest
Notes
-----
This table decomposes updated forecasts of variables of interest from
the `revision` associated with each revised datapoint from the new data
release.
This table does not summarize the impacts or show the effect of
new datapoints; see `details_by_update` instead.
Grouped impacts are shown in this table, with a "revision date" equal
to the last period prior to which detailed revisions were computed and
with "revised variable" set to the string "all prior revisions". For
these rows, all columns except "impact" will be set to NaNs.
This form of the details table is organized so that the revision
dates / variables are first in the index, and in this table the index
also contains the previously observed and revised values. This is
convenient for displaying the entire table of detailed revisions
because it allows sparsifying duplicate entries.
However, since it includes previous observations and revisions in the
index of the table, it is not convenient for subsetting by the variable
of interest. Instead, the `revision_details_by_impact` property is
organized to make slicing by impacted variables / dates easy. This
allows, for example, viewing the details of data revisions on a
particular variable or date of interest.
See Also
--------
details_by_impact
impacts | revision_details_by_update | python | statsmodels/statsmodels | statsmodels/tsa/statespace/news.py | https://github.com/statsmodels/statsmodels/blob/master/statsmodels/tsa/statespace/news.py | BSD-3-Clause |
def impacts(self):
"""
Impacts from news and revisions on all dates / variables of interest
Returns
-------
impacts : pd.DataFrame
Index is a MultiIndex consisting of:
- `impact date`: the date of the impact on the variable of interest
- `impacted variable`: the variable that is being impacted
The columns are:
- `estimate (prev)`: the previous estimate / forecast of the
date / variable of interest.
- `impact of revisions`: the impact of all data revisions on
the estimate of the date / variable of interest.
- `impact of news`: the impact of all news on the estimate of
the date / variable of interest.
- `total impact`: the total impact of both revisions and news on
the estimate of the date / variable of interest.
- `estimate (new)`: the new estimate / forecast of the
date / variable of interest after taking into account the effects
of the revisions and news.
Notes
-----
This table decomposes updated forecasts of variables of interest into
the overall effect from revisions and news.
This table does not break down the detail by the updated
dates / variables. That information can be found in the
`details_by_impact` and `details_by_update` tables.
See Also
--------
details_by_impact
details_by_update
"""
# Summary of impacts
impacts = pd.concat([
self.prev_impacted_forecasts.unstack().rename('estimate (prev)'),
self.revision_impacts.unstack().rename('impact of revisions'),
self.update_impacts.unstack().rename('impact of news'),
self.post_impacted_forecasts.unstack().rename('estimate (new)')],
axis=1)
impacts['impact of revisions'] = (
impacts['impact of revisions'].astype(float).fillna(0))
impacts['impact of news'] = (
impacts['impact of news'].astype(float).fillna(0))
impacts['total impact'] = (impacts['impact of revisions'] +
impacts['impact of news'])
impacts = impacts.reorder_levels([1, 0]).sort_index()
impacts.index.names = ['impact date', 'impacted variable']
impacts = impacts[['estimate (prev)', 'impact of revisions',
'impact of news', 'total impact', 'estimate (new)']]
if self.impacted_variable is not None:
impacts = impacts.loc[np.s_[:, self.impacted_variable], :]
tmp = np.abs(impacts[['impact of revisions', 'impact of news']])
mask = (tmp > self.tolerance).any(axis=1)
return impacts[mask] | Impacts from news and revisions on all dates / variables of interest
Returns
-------
impacts : pd.DataFrame
Index is a MultiIndex consisting of:
- `impact date`: the date of the impact on the variable of interest
- `impacted variable`: the variable that is being impacted
The columns are:
- `estimate (prev)`: the previous estimate / forecast of the
date / variable of interest.
- `impact of revisions`: the impact of all data revisions on
the estimate of the date / variable of interest.
- `impact of news`: the impact of all news on the estimate of
the date / variable of interest.
- `total impact`: the total impact of both revisions and news on
the estimate of the date / variable of interest.
- `estimate (new)`: the new estimate / forecast of the
date / variable of interest after taking into account the effects
of the revisions and news.
Notes
-----
This table decomposes updated forecasts of variables of interest into
the overall effect from revisions and news.
This table does not break down the detail by the updated
dates / variables. That information can be found in the
`details_by_impact` and `details_by_update` tables.
See Also
--------
details_by_impact
details_by_update | impacts | python | statsmodels/statsmodels | statsmodels/tsa/statespace/news.py | https://github.com/statsmodels/statsmodels/blob/master/statsmodels/tsa/statespace/news.py | BSD-3-Clause |
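The `impacts` property above assembles its table by unstacking several aligned frames, concatenating them column-wise, and reordering the index levels. A small self-contained sketch of that pattern, with invented estimates for a single variable:

```python
import pandas as pd

# Previous and new estimates for one variable; the numbers are invented.
dates = pd.Index(["2022Q1", "2022Q2"], name="impact date")
prev = pd.DataFrame({"gdp": [1.0, 1.1]}, index=dates)
new = pd.DataFrame({"gdp": [1.2, 1.05]}, index=dates)

# Unstack each frame into a Series keyed by (variable, date), then
# concatenate column-wise -- the same pattern used by `impacts` above.
impacts = pd.concat([
    prev.unstack().rename("estimate (prev)"),
    new.unstack().rename("estimate (new)")], axis=1)
impacts["total impact"] = (impacts["estimate (new)"]
                           - impacts["estimate (prev)"])

# Put the date level first, matching the layout in the docstring.
impacts = impacts.reorder_levels([1, 0]).sort_index()
impacts.index.names = ["impact date", "impacted variable"]
print(impacts)
```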
def summary_impacts(self, impact_date=None, impacted_variable=None,
groupby='impact date', show_revisions_columns=None,
sparsify=True, float_format='%.2f'):
"""
Create summary table with detailed impacts from news; by date, variable
Parameters
----------
impact_date : int, str, datetime, list, array, or slice, optional
Observation index label or slice of labels specifying particular
impact periods to display. The impact date(s) describe the periods
in which impacted variables were *affected* by the news. If this
argument is given, the output table will only show this impact date
or dates. Note that this argument is passed to the Pandas `loc`
accessor, and so it should correspond to the labels of the model's
index. If the model was created with data in a list or numpy array,
then these labels will be zero-indexed observation integers.
impacted_variable : str, list, array, or slice, optional
Observation variable label or slice of labels specifying particular
impacted variables to display. The impacted variable(s) describe
the variables that were *affected* by the news. If you do not know
the labels for the variables, check the `endog_names` attribute of
the model instance.
groupby : {impact date, impacted variable}
The primary variable for grouping results in the impacts table. The
default is to group by impact date.
show_revisions_columns : bool, optional
If set to False, the impacts table will not show the impacts from
data revisions or the total impacts. Default is to show the
revisions and totals columns if any revisions were made and
otherwise to hide them.
sparsify : bool, optional, default True
Set to False for the table to include every one of the multiindex
keys at each row.
float_format : str, optional
Formatter format string syntax for converting numbers to strings.
Default is '%.2f'.
Returns
-------
impacts_table : SimpleTable
Table describing total impacts from both revisions and news. See
the documentation for the `impacts` attribute for more details
about the index and columns.
See Also
--------
impacts
"""
# Squeeze for univariate models
if impacted_variable is None and self.k_endog == 1:
impacted_variable = self.endog_names[0]
# Default is to only show the revisions columns if there were any
# revisions (otherwise it would just be a column of zeros)
if show_revisions_columns is None:
show_revisions_columns = self.n_revisions > 0
# Select only the variables / dates of interest
s = list(np.s_[:, :])
if impact_date is not None:
s[0] = np.s_[impact_date]
if impacted_variable is not None:
s[1] = np.s_[impacted_variable]
s = tuple(s)
impacts = self.impacts.loc[s, :]
# Make the first index level the groupby level
groupby = groupby.lower()
if groupby in ['impacted variable', 'impacted_variable']:
impacts.index = impacts.index.swaplevel(1, 0)
elif groupby not in ['impact date', 'impact_date']:
raise ValueError('Invalid groupby for impacts table. Valid options'
' are "impact date" or "impacted variable".'
f' Got "{groupby}".')
impacts = impacts.sort_index()
# Drop the non-groupby level if there's only one value
tmp_index = impacts.index.remove_unused_levels()
k_vars = len(tmp_index.levels[1])
removed_level = None
if sparsify and k_vars == 1:
name = tmp_index.names[1]
value = tmp_index.levels[1][0]
removed_level = f'{name} = {value}'
impacts.index = tmp_index.droplevel(1)
try:
impacts = impacts.map(
lambda num: '' if pd.isnull(num) else float_format % num)
except AttributeError:
impacts = impacts.applymap(
lambda num: '' if pd.isnull(num) else float_format % num)
impacts = impacts.reset_index()
try:
impacts.iloc[:, 0] = impacts.iloc[:, 0].map(str)
except AttributeError:
impacts.iloc[:, 0] = impacts.iloc[:, 0].applymap(str)
else:
impacts = impacts.reset_index()
try:
impacts.iloc[:, :2] = impacts.iloc[:, :2].map(str)
impacts.iloc[:, 2:] = impacts.iloc[:, 2:].map(
lambda num: '' if pd.isnull(num) else float_format % num)
except AttributeError:
impacts.iloc[:, :2] = impacts.iloc[:, :2].applymap(str)
impacts.iloc[:, 2:] = impacts.iloc[:, 2:].applymap(
lambda num: '' if pd.isnull(num) else float_format % num)
# Sparsify the groupby column
if sparsify and groupby in impacts:
mask = impacts[groupby] == impacts[groupby].shift(1)
tmp = impacts.loc[mask, groupby]
if len(tmp) > 0:
impacts.loc[mask, groupby] = ''
# Drop revisions and totals columns if applicable
if not show_revisions_columns:
impacts.drop(['impact of revisions', 'total impact'], axis=1,
inplace=True)
params_data = impacts.values
params_header = impacts.columns.tolist()
params_stubs = None
title = 'Impacts'
if removed_level is not None:
join = 'on' if groupby in ('impact date', 'impact_date') else 'for'
title += f' {join} [{removed_level}]'
impacts_table = SimpleTable(
params_data, params_header, params_stubs,
txt_fmt=fmt_params, title=title)
return impacts_table | Create summary table with detailed impacts from news; by date, variable
Parameters
----------
impact_date : int, str, datetime, list, array, or slice, optional
Observation index label or slice of labels specifying particular
impact periods to display. The impact date(s) describe the periods
in which impacted variables were *affected* by the news. If this
argument is given, the output table will only show this impact date
or dates. Note that this argument is passed to the Pandas `loc`
accessor, and so it should correspond to the labels of the model's
index. If the model was created with data in a list or numpy array,
then these labels will be zero-indexed observation integers.
impacted_variable : str, list, array, or slice, optional
Observation variable label or slice of labels specifying particular
impacted variables to display. The impacted variable(s) describe
the variables that were *affected* by the news. If you do not know
the labels for the variables, check the `endog_names` attribute of
the model instance.
groupby : {impact date, impacted variable}
The primary variable for grouping results in the impacts table. The
default is to group by impact date.
show_revisions_columns : bool, optional
If set to False, the impacts table will not show the impacts from
data revisions or the total impacts. Default is to show the
revisions and totals columns if any revisions were made and
otherwise to hide them.
sparsify : bool, optional, default True
Set to False for the table to include every one of the multiindex
keys at each row.
float_format : str, optional
Formatter format string syntax for converting numbers to strings.
Default is '%.2f'.
Returns
-------
impacts_table : SimpleTable
Table describing total impacts from both revisions and news. See
the documentation for the `impacts` attribute for more details
about the index and columns.
See Also
--------
impacts | summary_impacts | python | statsmodels/statsmodels | statsmodels/tsa/statespace/news.py | https://github.com/statsmodels/statsmodels/blob/master/statsmodels/tsa/statespace/news.py | BSD-3-Clause |
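`summary_impacts` sparsifies the groupby column by blanking out any label that repeats the label on the previous row. The shift-and-compare idiom in isolation, on invented labels:

```python
import pandas as pd

# A table with a repeated grouping label; rows and labels are invented.
table = pd.DataFrame({
    "impact date": ["2022Q1", "2022Q1", "2022Q2"],
    "impacted variable": ["gdp", "inflation", "gdp"],
})

# Blank out a label whenever it equals the label on the previous row,
# exactly as the sparsify step above does.
mask = table["impact date"] == table["impact date"].shift(1)
table.loc[mask, "impact date"] = ""
print(table)
```

Because `shift(1)` makes the first row compare against NaN, the first occurrence of each label always survives.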
def summary_details(self, source='news', impact_date=None,
impacted_variable=None, update_date=None,
updated_variable=None, groupby='update date',
sparsify=True, float_format='%.2f',
multiple_tables=False):
"""
Create summary table with detailed impacts; by date, variable
Parameters
----------
source : {news, revisions}
The source of impacts to summarize. Default is "news".
impact_date : int, str, datetime, list, array, or slice, optional
Observation index label or slice of labels specifying particular
impact periods to display. The impact date(s) describe the periods
in which impacted variables were *affected* by the news. If this
argument is given, the output table will only show this impact date
or dates. Note that this argument is passed to the Pandas `loc`
accessor, and so it should correspond to the labels of the model's
index. If the model was created with data in a list or numpy array,
then these labels will be zero-indexed observation integers.
impacted_variable : str, list, array, or slice, optional
Observation variable label or slice of labels specifying particular
impacted variables to display. The impacted variable(s) describe
the variables that were *affected* by the news. If you do not know
the labels for the variables, check the `endog_names` attribute of
the model instance.
update_date : int, str, datetime, list, array, or slice, optional
Observation index label or slice of labels specifying particular
updated periods to display. The updated date(s) describe the
periods in which the new data points were available that generated
the news. See the note on `impact_date` for details about what
these labels are.
updated_variable : str, list, array, or slice, optional
Observation variable label or slice of labels specifying particular
updated variables to display. The updated variable(s) describe the
variables that generated the news. If you do not know the
labels for the variables, check the `endog_names` attribute of the
model instance.
groupby : {update date, updated variable, impact date, impacted variable}
The primary variable for grouping results in the details table. The
default is to group by update date.
sparsify : bool, optional, default True
Set to False for the table to include every one of the multiindex
keys at each row.
float_format : str, optional
Formatter format string syntax for converting numbers to strings.
Default is '%.2f'.
multiple_tables : bool, optional
If set to True, this function will return a list of tables, one
table for each of the unique `groupby` levels. Default is False,
in which case this function returns a single table.
Returns
-------
details_table : SimpleTable or list of SimpleTable
Table or list of tables describing how the news from each update
(i.e. news from a particular variable / date) translates into
changes to the forecasts of each impacted variable / date.
This table contains information about the updates and about the
impacts. Updates are newly observed datapoints that were not
available in the previous results set. Each update leads to news,
and the news may cause changes in the forecasts of the impacted
variables. The amount that a particular piece of news (from an
update to some variable at some date) impacts a variable at some
date depends on weights that can be computed from the model
results.
The data contained in this table that refer to updates are:
- `update date` : The date at which a new datapoint was added.
- `updated variable` : The variable for which a new datapoint was
added.
- `forecast (prev)` : The value that had been forecast by the
previous model for the given updated variable and date.
- `observed` : The observed value of the new datapoint.
- `news` : The news is the difference between the observed value
and the previously forecast value for a given updated variable
and date.
The data contained in this table that refer to impacts are:
- `impact date` : A date associated with an impact.
- `impacted variable` : A variable that was impacted by the news.
- `weight` : The weight of news from a given `update date` and
`update variable` on a given `impacted variable` at a given
`impact date`.
- `impact` : The revision to the smoothed estimate / forecast of
the impacted variable at the impact date based specifically on
the news generated by the `updated variable` at the
`update date`.
See Also
--------
details_by_impact
details_by_update
"""
# Squeeze for univariate models
if self.k_endog == 1:
if impacted_variable is None:
impacted_variable = self.endog_names[0]
if updated_variable is None:
updated_variable = self.endog_names[0]
# Select only the variables / dates of interest
s = list(np.s_[:, :, :, :, :, :])
if impact_date is not None:
s[0] = np.s_[impact_date]
if impacted_variable is not None:
s[1] = np.s_[impacted_variable]
if update_date is not None:
s[2] = np.s_[update_date]
if updated_variable is not None:
s[3] = np.s_[updated_variable]
s = tuple(s)
if source == 'news':
details = self.details_by_impact.loc[s, :]
columns = {
'current': 'observed',
'prev': 'forecast (prev)',
'update date': 'update date',
'updated variable': 'updated variable',
'news': 'news',
}
elif source == 'revisions':
details = self.revision_details_by_impact.loc[s, :]
columns = {
'current': 'revised',
'prev': 'observed (prev)',
'update date': 'revision date',
'updated variable': 'revised variable',
'news': 'revision',
}
else:
raise ValueError(f'Invalid `source`: {source}. Must be "news" or'
' "revisions".')
# Make the first index level the groupby level
groupby = groupby.lower().replace('_', ' ')
groupby_overall = 'impact'
levels_order = [0, 1, 2, 3]
if groupby == 'update date':
levels_order = [2, 3, 0, 1]
groupby_overall = 'update'
elif groupby == 'updated variable':
levels_order = [3, 2, 1, 0]
groupby_overall = 'update'
elif groupby == 'impacted variable':
levels_order = [1, 0, 3, 2]
elif groupby != 'impact date':
raise ValueError('Invalid groupby for details table. Valid options'
' are "update date", "updated variable",'
' "impact date", or "impacted variable".'
f' Got "{groupby}".')
details.index = (details.index.reorder_levels(levels_order)
.remove_unused_levels())
details = details.sort_index()
# If our overall group-by is `update`, move forecast (prev) and
# observed into the index
base_levels = [0, 1, 2, 3]
if groupby_overall == 'update':
details.set_index([columns['current'], columns['prev']],
append=True, inplace=True)
details.index = details.index.reorder_levels([0, 1, 4, 5, 2, 3])
base_levels = [0, 1, 4, 5]
# Drop the non-groupby levels if there's only one value
tmp_index = details.index.remove_unused_levels()
n_levels = len(tmp_index.levels)
k_level_values = [len(tmp_index.levels[i]) for i in range(n_levels)]
removed_levels = []
if sparsify:
for i in sorted(base_levels)[::-1][:-1]:
if k_level_values[i] == 1:
name = tmp_index.names[i]
value = tmp_index.levels[i][0]
can_drop = (
(name == columns['update date']
and update_date is not None) or
(name == columns['updated variable']
and updated_variable is not None) or
(name == 'impact date'
and impact_date is not None) or
(name == 'impacted variable'
and (impacted_variable is not None or
self.impacted_variable is not None)))
if can_drop or not multiple_tables:
removed_levels.insert(0, f'{name} = {value}')
details.index = tmp_index = tmp_index.droplevel(i)
# Move everything to columns
details = details.reset_index()
# Function for formatting numbers
def str_format(num, mark_ones=False, mark_zeroes=False):
if pd.isnull(num):
out = ''
elif mark_ones and np.abs(1 - num) < self.tolerance:
out = '1.0'
elif mark_zeroes and np.abs(num) < self.tolerance:
out = '0'
else:
out = float_format % num
return out
# Function to create the table
def create_table(details, removed_levels):
# Convert everything to strings
for key in [columns['current'], columns['prev'], columns['news'],
'weight', 'impact']:
if key in details:
args = (
# mark_ones
True if key in ['weight'] else False,
# mark_zeroes
True if key in ['weight', 'impact'] else False)
details[key] = details[key].apply(str_format, args=args)
for key in [columns['update date'], 'impact date']:
if key in details:
details[key] = details[key].apply(str)
# Sparsify index columns
if sparsify:
sparsify_cols = [columns['update date'],
columns['updated variable'], 'impact date',
'impacted variable']
data_cols = [columns['current'], columns['prev']]
if groupby_overall == 'update':
# Put data columns first, since we need to do an additional
# check based on the other columns before sparsifying
sparsify_cols = data_cols + sparsify_cols
for key in sparsify_cols:
if key in details:
mask = details[key] == details[key].shift(1)
if key in data_cols:
if columns['update date'] in details:
tmp = details[columns['update date']]
mask &= tmp == tmp.shift(1)
if columns['updated variable'] in details:
tmp = details[columns['updated variable']]
mask &= tmp == tmp.shift(1)
details.loc[mask, key] = ''
params_data = details.values
params_header = [str(x) for x in details.columns.tolist()]
params_stubs = None
title = f"Details of {source}"
if len(removed_levels):
title += ' for [' + ', '.join(removed_levels) + ']'
return SimpleTable(params_data, params_header, params_stubs,
txt_fmt=fmt_params, title=title)
if multiple_tables:
details_table = []
for item in details[columns[groupby]].unique():
mask = details[columns[groupby]] == item
item_details = details[mask].drop(columns[groupby], axis=1)
item_removed_levels = (
[f'{columns[groupby]} = {item}'] + removed_levels)
details_table.append(create_table(item_details,
item_removed_levels))
else:
details_table = create_table(details, removed_levels)
return details_table | Create summary table with detailed impacts; by date, variable
Parameters
----------
source : {news, revisions}
The source of impacts to summarize. Default is "news".
impact_date : int, str, datetime, list, array, or slice, optional
Observation index label or slice of labels specifying particular
impact periods to display. The impact date(s) describe the periods
in which impacted variables were *affected* by the news. If this
argument is given, the output table will only show this impact date
or dates. Note that this argument is passed to the Pandas `loc`
accessor, and so it should correspond to the labels of the model's
index. If the model was created with data in a list or numpy array,
then these labels will be zero-indexed observation integers.
impacted_variable : str, list, array, or slice, optional
Observation variable label or slice of labels specifying particular
impacted variables to display. The impacted variable(s) describe
the variables that were *affected* by the news. If you do not know
the labels for the variables, check the `endog_names` attribute of
the model instance.
update_date : int, str, datetime, list, array, or slice, optional
Observation index label or slice of labels specifying particular
updated periods to display. The updated date(s) describe the
periods in which the new data points were available that generated
the news. See the note on `impact_date` for details about what
these labels are.
updated_variable : str, list, array, or slice, optional
Observation variable label or slice of labels specifying particular
updated variables to display. The updated variable(s) describe the
variables that generated the news. If you do not know the
labels for the variables, check the `endog_names` attribute of the
model instance.
groupby : {update date, updated variable, impact date, impacted variable}
The primary variable for grouping results in the details table. The
default is to group by update date.
sparsify : bool, optional, default True
Set to False for the table to include every one of the multiindex
keys at each row.
float_format : str, optional
Formatter format string syntax for converting numbers to strings.
Default is '%.2f'.
multiple_tables : bool, optional
If set to True, this function will return a list of tables, one
table for each of the unique `groupby` levels. Default is False,
in which case this function returns a single table.
Returns
-------
details_table : SimpleTable or list of SimpleTable
Table or list of tables describing how the news from each update
(i.e. news from a particular variable / date) translates into
changes to the forecasts of each impacted variable / date.
This table contains information about the updates and about the
impacts. Updates are newly observed datapoints that were not
available in the previous results set. Each update leads to news,
and the news may cause changes in the forecasts of the impacted
variables. The amount that a particular piece of news (from an
update to some variable at some date) impacts a variable at some
date depends on weights that can be computed from the model
results.
The data contained in this table that refer to updates are:
- `update date` : The date at which a new datapoint was added.
- `updated variable` : The variable for which a new datapoint was
added.
- `forecast (prev)` : The value that had been forecast by the
previous model for the given updated variable and date.
- `observed` : The observed value of the new datapoint.
- `news` : The news is the difference between the observed value
and the previously forecast value for a given updated variable
and date.
The data contained in this table that refer to impacts are:
- `impact date` : A date associated with an impact.
- `impacted variable` : A variable that was impacted by the news.
- `weight` : The weight of news from a given `update date` and
`update variable` on a given `impacted variable` at a given
`impact date`.
- `impact` : The revision to the smoothed estimate / forecast of
the impacted variable at the impact date based specifically on
the news generated by the `updated variable` at the
`update date`.
See Also
--------
details_by_impact
details_by_update | summary_details | python | statsmodels/statsmodels | statsmodels/tsa/statespace/news.py | https://github.com/statsmodels/statsmodels/blob/master/statsmodels/tsa/statespace/news.py | BSD-3-Clause |
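`summary_details` rearranges its MultiIndex with `reorder_levels` and then drops any index level that carries only a single value, recording that value in the table title instead. A minimal sketch of that index manipulation, with invented labels and impacts:

```python
import pandas as pd

# Toy details table; index labels and values are invented.
index = pd.MultiIndex.from_tuples(
    [("2022Q1", "gdp", "2022Q1"), ("2022Q1", "gdp", "2022Q2")],
    names=["update date", "updated variable", "impact date"])
details = pd.DataFrame({"impact": [0.2, 0.1]}, index=index)

# Make "impact date" the primary grouping level.
details.index = details.index.reorder_levels([2, 0, 1])

# "updated variable" takes a single value here, so it can be dropped
# from the index (the summary records it in the table title instead).
tmp = details.index.remove_unused_levels()
if len(tmp.levels[2]) == 1:
    details.index = tmp.droplevel(2)
print(details)
```

`remove_unused_levels` matters because slicing a MultiIndex keeps stale level values around; without it, the single-value check could fail on a filtered table.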
def summary_revisions(self, sparsify=True):
"""
Create summary table showing revisions to the previous results' data
Parameters
----------
sparsify : bool, optional, default True
Set to False for the table to include every one of the multiindex
keys at each row.
Returns
-------
revisions_table : SimpleTable
Table showing revisions to the previous results' data. Columns are:
- `revision date` : date associated with a revised data point
- `revised variable` : variable that was revised at `revision date`
- `observed (prev)` : the observed value prior to the revision
- `revision` : the change due to the revision (this is the revised
  value minus `observed (prev)`)
- `detailed impacts computed` : whether detailed impacts were
computed for this revision
"""
data = pd.merge(
self.data_revisions, self.revisions_all, left_index=True,
right_index=True).sort_index().reset_index()
data = data[['revision date', 'revised variable', 'observed (prev)',
'revision', 'detailed impacts computed']]
try:
data[['revision date', 'revised variable']] = (
data[['revision date', 'revised variable']].map(str))
data.iloc[:, 2:-1] = data.iloc[:, 2:-1].map(
lambda num: '' if pd.isnull(num) else '%.2f' % num)
except AttributeError:
data[['revision date', 'revised variable']] = (
data[['revision date', 'revised variable']].applymap(str))
data.iloc[:, 2:-1] = data.iloc[:, 2:-1].applymap(
lambda num: '' if pd.isnull(num) else '%.2f' % num)
# Sparsify the date column
if sparsify:
mask = data['revision date'] == data['revision date'].shift(1)
data.loc[mask, 'revision date'] = ''
params_data = data.values
params_header = data.columns.tolist()
params_stubs = None
title = 'Revisions to dataset:'
revisions_table = SimpleTable(
params_data, params_header, params_stubs,
txt_fmt=fmt_params, title=title)
return revisions_table | Create summary table showing revisions to the previous results' data
Parameters
----------
sparsify : bool, optional, default True
Set to False for the table to include every one of the multiindex
keys at each row.
Returns
-------
revisions_table : SimpleTable
Table showing revisions to the previous results' data. Columns are:
- `revision date` : date associated with a revised data point
- `revised variable` : variable that was revised at `revision date`
- `observed (prev)` : the observed value prior to the revision
- `revision` : the change due to the revision (this is the revised
  value minus `observed (prev)`)
- `detailed impacts computed` : whether detailed impacts were
computed for this revision | summary_revisions | python | statsmodels/statsmodels | statsmodels/tsa/statespace/news.py | https://github.com/statsmodels/statsmodels/blob/master/statsmodels/tsa/statespace/news.py | BSD-3-Clause |
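The try/except around `map`/`applymap` in `summary_revisions` (and the other summary methods) is a pandas version-compatibility shim: `DataFrame.map` was added in pandas 2.1, where `applymap` became deprecated, while older pandas only provides `applymap`. A standalone sketch of the same formatting fallback:

```python
import pandas as pd

# Element-wise formatting with a version-robust fallback:
# DataFrame.map exists from pandas 2.1 (where applymap is deprecated);
# older pandas only provides applymap.
df = pd.DataFrame({"revision": [0.123, None]})

def fmt(num):
    # Format floats to two decimals; render missing values as ''.
    return "" if pd.isnull(num) else "%.2f" % num

try:
    out = df.map(fmt)
except AttributeError:
    out = df.applymap(fmt)
print(out)
```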
def summary_news(self, sparsify=True):
"""
Create summary table showing news from new data since previous results
Parameters
----------
sparsify : bool, optional, default True
Set to False for the table to include every one of the multiindex
keys at each row.
Returns
-------
updates_table : SimpleTable
Table showing new datapoints that were not in the previous results'
data. Columns are:
- `update date` : date associated with a new data point.
- `updated variable` : variable for which new data was added at
`update date`.
- `forecast (prev)` : the forecast value for the updated variable
at the update date in the previous results object (i.e. prior to
the data being available).
- `observed` : the observed value of the new datapoint.
See Also
--------
data_updates
"""
data = pd.merge(
self.data_updates, self.news, left_index=True,
right_index=True).sort_index().reset_index()
try:
data[['update date', 'updated variable']] = (
data[['update date', 'updated variable']].map(str))
data.iloc[:, 2:] = data.iloc[:, 2:].map(
lambda num: '' if pd.isnull(num) else '%.2f' % num)
except AttributeError:
data[['update date', 'updated variable']] = (
data[['update date', 'updated variable']].applymap(str))
data.iloc[:, 2:] = data.iloc[:, 2:].applymap(
lambda num: '' if pd.isnull(num) else '%.2f' % num)
# Sparsify the date column
if sparsify:
mask = data['update date'] == data['update date'].shift(1)
data.loc[mask, 'update date'] = ''
params_data = data.values
params_header = data.columns.tolist()
params_stubs = None
title = 'News from updated observations:'
updates_table = SimpleTable(
params_data, params_header, params_stubs,
txt_fmt=fmt_params, title=title)
return updates_table | Create summary table showing news from new data since previous results
Parameters
----------
sparsify : bool, optional, default True
Set to False for the table to include every one of the multiindex
keys at each row.
Returns
-------
updates_table : SimpleTable
Table showing new datapoints that were not in the previous results'
data. Columns are:
- `update date` : date associated with a new data point.
- `updated variable` : variable for which new data was added at
`update date`.
- `forecast (prev)` : the forecast value for the updated variable
at the update date in the previous results object (i.e. prior to
the data being available).
- `observed` : the observed value of the new datapoint.
See Also
--------
data_updates | summary_news | python | statsmodels/statsmodels | statsmodels/tsa/statespace/news.py | https://github.com/statsmodels/statsmodels/blob/master/statsmodels/tsa/statespace/news.py | BSD-3-Clause |
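`summary_news` aligns the raw data updates with the computed news by merging on their shared MultiIndex. A toy sketch of that index-based merge (the values shown are invented):

```python
import pandas as pd

# Raw updates and computed news share an (update date, updated variable)
# index; the values shown are invented.
index = pd.MultiIndex.from_tuples(
    [("2022Q2", "gdp")], names=["update date", "updated variable"])
data_updates = pd.DataFrame(
    {"observed": [1.3], "forecast (prev)": [1.1]}, index=index)
news = pd.DataFrame({"news": [0.2]}, index=index)

# Align the two tables on their shared index, then flatten for display.
merged = pd.merge(data_updates, news, left_index=True, right_index=True)
merged = merged.sort_index().reset_index()
print(merged)
```

After `reset_index`, the index levels become ordinary columns, which is the shape the `SimpleTable` constructor expects.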
def summary(self, impact_date=None, impacted_variable=None,
update_date=None, updated_variable=None,
revision_date=None, revised_variable=None,
impacts_groupby='impact date', details_groupby='update date',
show_revisions_columns=None, sparsify=True,
include_details_tables=None, include_revisions_tables=False,
float_format='%.2f'):
"""
Create summary tables describing news and impacts
Parameters
----------
impact_date : int, str, datetime, list, array, or slice, optional
Observation index label or slice of labels specifying particular
impact periods to display. The impact date(s) describe the periods
in which impacted variables were *affected* by the news. If this
argument is given, the impact and details tables will only show
this impact date or dates. Note that this argument is passed to the
Pandas `loc` accessor, and so it should correspond to the labels of
the model's index. If the model was created with data in a list or
numpy array, then these labels will be zero-indexed observation
integers.
impacted_variable : str, list, array, or slice, optional
Observation variable label or slice of labels specifying particular
impacted variables to display. The impacted variable(s) describe
the variables that were *affected* by the news. If you do not know
the labels for the variables, check the `endog_names` attribute of
the model instance.
update_date : int, str, datetime, list, array, or slice, optional
Observation index label or slice of labels specifying particular
updated periods to display. The updated date(s) describe the
periods in which the new data points were available that generated
the news. See the note on `impact_date` for details about what
these labels are.
updated_variable : str, list, array, or slice, optional
Observation variable label or slice of labels specifying particular
updated variables to display. The updated variable(s) describe the
variables that were newly added in the updated dataset and which
generated the news. If you do not know the labels for the
variables, check the `endog_names` attribute of the model instance.
revision_date : int, str, datetime, list, array, or slice, optional
Observation index label or slice of labels specifying particular
revision periods to display. The revision date(s) describe the
periods in which the data points were revised. See the note on
`impact_date` for details about what these labels are.
revised_variable : str, list, array, or slice, optional
Observation variable label or slice of labels specifying particular
revised variables to display. The updated variable(s) describe the
variables that were *revised*. If you do not know the labels for
the variables, check the `endog_names` attribute of the model
instance.
impacts_groupby : {'impact date', 'impacted date'}
The primary variable for grouping results in the impacts table. The
default is to group by impact date.
details_groupby : str
One of "update date", "updated date", "impact date", or
"impacted date". The primary variable for grouping results in the
details table. Only used if the details tables are included. The
default is to group by update date.
show_revisions_columns : bool, optional
If set to False, the impacts table will not show the impacts from
data revisions or the total impacts. Default is to show the
revisions and totals columns if any revisions were made and
otherwise to hide them.
sparsify : bool, optional, default True
Set to False for the table to include every one of the multiindex
keys at each row.
include_details_tables : bool, optional
If set to True, the summary will show tables describing the details
of how news from specific updates translate into specific impacts.
These tables can be very long, particularly in cases where there
were many updates and in multivariate models. The default is to
show detailed tables only for univariate models.
include_revisions_tables : bool, optional
If set to True, the summary will show tables describing the
revisions and updates that lead to impacts on variables of
interest.
float_format : str, optional
Formatter format string syntax for converting numbers to strings.
Default is '%.2f'.

Returns
-------
summary_tables : Summary
Summary tables describing news and impacts. Basic tables include:

- A table with general information about the sample.
- A table describing the impacts of revisions and news.
- Tables describing revisions in the dataset since the previous
results set (unless `include_revisions_tables=False`).

In univariate models or if `include_details_tables=True`, one or
more tables will additionally be included describing the details
of how news from specific updates translate into specific impacts.

See Also
--------
summary_impacts
summary_details
summary_revisions
summary_updates
"""
# Default for include_details_tables
if include_details_tables is None:
include_details_tables = (self.k_endog == 1)
# Model specification results
model = self.model.model
title = 'News'

def get_sample(model):
if model._index_dates:
mask = ~np.isnan(model.endog).all(axis=1)
ix = model._index[mask]
d = ix[0]
sample = ['%s' % d]
d = ix[-1]
sample += ['- ' + '%s' % d]
else:
sample = [str(0), ' - ' + str(model.nobs)]
return sample

previous_sample = get_sample(self.previous.model)
revised_sample = get_sample(self.updated.model)
# Model name
model_name = model.__class__.__name__
# Top summary table
top_left = [('Model:', [model_name]),
('Date:', None),
('Time:', None)]
if self.state_index is not None:
k_states_used = len(self.state_index)
if k_states_used != self.model.model.k_states:
top_left.append(('# of included states:', [k_states_used]))
top_right = [
('Original sample:', [previous_sample[0]]),
('', [previous_sample[1]]),
('Update through:', [revised_sample[1][2:]]),
('# of revisions:', [len(self.revisions_ix)]),
('# of new datapoints:', [len(self.updates_ix)])]
summary = Summary()
self.model.endog_names = self.model.model.endog_names
summary.add_table_2cols(self, gleft=top_left, gright=top_right,
title=title)
table_ix = 1
# Impact table
summary.tables.insert(table_ix, self.summary_impacts(
impact_date=impact_date, impacted_variable=impacted_variable,
groupby=impacts_groupby,
show_revisions_columns=show_revisions_columns, sparsify=sparsify,
float_format=float_format))
table_ix += 1
# News table
if len(self.updates_iloc) > 0:
summary.tables.insert(
table_ix, self.summary_news(sparsify=sparsify))
table_ix += 1
# Detail tables
multiple_tables = (self.k_endog > 1)
details_tables = self.summary_details(
source='news',
impact_date=impact_date, impacted_variable=impacted_variable,
update_date=update_date, updated_variable=updated_variable,
groupby=details_groupby, sparsify=sparsify,
float_format=float_format, multiple_tables=multiple_tables)
if not multiple_tables:
details_tables = [details_tables]
if include_details_tables:
for table in details_tables:
summary.tables.insert(table_ix, table)
table_ix += 1
# Revisions
if include_revisions_tables and self.n_revisions > 0:
summary.tables.insert(
table_ix, self.summary_revisions(sparsify=sparsify))
table_ix += 1
# Revision detail tables
revision_details_tables = self.summary_details(
source='revisions',
impact_date=impact_date, impacted_variable=impacted_variable,
update_date=revision_date, updated_variable=revised_variable,
groupby=details_groupby, sparsify=sparsify,
float_format=float_format, multiple_tables=multiple_tables)
if not multiple_tables:
revision_details_tables = [revision_details_tables]
if include_details_tables:
for table in revision_details_tables:
summary.tables.insert(table_ix, table)
table_ix += 1
return summary | Create summary tables describing news and impacts
| summary | python | statsmodels/statsmodels | statsmodels/tsa/statespace/news.py | https://github.com/statsmodels/statsmodels/blob/master/statsmodels/tsa/statespace/news.py | BSD-3-Clause